In Google's sleek Singapore office at Block 80, Level 3, Mark Johnston stood before a room of technology journalists at 1:30 PM with a startling admission: after five decades of cybersecurity evolution, defenders are still losing the war. "In 69% of incidents in Japan and Asia Pacific, organisations were notified of their own breaches by external entities," the Director of Google Cloud's Office of the CISO for Asia Pacific revealed, his presentation slide showing a damning statistic: most companies can't even detect when they've been breached.
What unfolded during the hour-long "Cybersecurity in the AI Era" roundtable was an honest assessment of how Google Cloud's AI technologies are attempting to reverse decades of defensive failures, even as the same artificial intelligence tools empower attackers with unprecedented capabilities.
The historical context: 50 years of defensive failure
The crisis isn't new. Johnston traced the problem back to cybersecurity pioneer James P. Anderson's 1972 observation that "systems that we use really don't protect themselves", a challenge that has persisted despite decades of technological advancement. "What James P. Anderson said back in 1972 still applies today," Johnston said, highlighting how fundamental security problems remain unsolved even as technology evolves.
The persistence of basic vulnerabilities compounds this challenge. Google Cloud's threat intelligence data reveals that "over 76% of breaches start with the basics": configuration errors and credential compromises that have plagued organisations for decades. Johnston cited a recent example: "Last month, a very common product that most organisations have used at some point in time, Microsoft SharePoint, also has what we call a zero-day vulnerability… and during that time, it was attacked continuously and abused."
The AI arms race: Defenders vs. attackers

Kevin Curran, IEEE senior member and professor of cybersecurity at Ulster University, describes the current landscape as "a high-stakes arms race" where both cybersecurity teams and threat actors employ AI tools to outmanoeuvre each other. "For defenders, AI is a valuable asset," Curran explains in a media note. "Enterprises have implemented generative AI and other automation tools to analyse vast amounts of data in real time and identify anomalies."
However, the same technologies benefit attackers. "For threat actors, AI can streamline phishing attacks, automate malware creation and help scan networks for vulnerabilities," Curran warns. The dual-use nature of AI creates what Johnston calls "the Defender's Dilemma."
Google Cloud's AI initiatives aim to tilt these scales in favour of defenders. Johnston argued that "AI affords the best opportunity to upend the Defender's Dilemma, and tilt the scales of cyberspace to give defenders a decisive advantage over attackers." The company's approach centres on what it terms "countless use cases for generative AI in defence," spanning vulnerability discovery, threat intelligence, secure code generation, and incident response.
Project Zero's Big Sleep: AI finding what humans miss
One of Google's most compelling examples of AI-powered defence is Project Zero's "Big Sleep" initiative, which uses large language models to identify vulnerabilities in real-world code. Johnston shared impressive metrics: "Big Sleep found a vulnerability in an open source library using Generative AI tools, the first time we believe that a vulnerability was found by an AI service."
The program's evolution demonstrates AI's growing capabilities. "Last month, we announced we found over 20 vulnerabilities in different packages," Johnston noted. "But today, when I looked at the Big Sleep dashboard, I found 47 vulnerabilities in August that have been found by this solution."
The progression from manual human analysis to AI-assisted discovery represents what Johnston describes as a shift "from manual to semi-autonomous" security operations, where "Gemini drives most tasks in the security lifecycle consistently well, delegating tasks it can't automate with sufficiently high confidence or precision."
The automation paradox: Promise and peril
Google Cloud's roadmap envisions progression through four stages: Manual, Assisted, Semi-autonomous, and Autonomous security operations. In the semi-autonomous phase, AI systems would handle routine tasks while escalating complex decisions to human operators. The ultimate autonomous phase would see AI "drive the security lifecycle to positive outcomes on behalf of users."

However, this automation introduces new vulnerabilities. When asked about the risks of over-reliance on AI systems, Johnston acknowledged the challenge: "There is the potential that this service could be attacked and manipulated. At the moment, when you see tools that these agents are piped into, there isn't a really good framework to authorise that that's the actual tool that hasn't been tampered with."
Curran echoes this concern: "The risk to companies is that their security teams will become over-reliant on AI, potentially sidelining human judgment and leaving systems vulnerable to attacks. There is still a need for a human 'copilot' and roles need to be clearly defined."
Real-world implementation: Controlling AI's unpredictable nature
Google Cloud's approach includes practical safeguards to address one of AI's most problematic characteristics: its tendency to generate irrelevant or inappropriate responses. Johnston illustrated this challenge with a concrete example of contextual mismatches that could create business risks.
"If you've got a retail store, you shouldn't be having medical advice instead," Johnston explained, describing how AI systems can unexpectedly shift into unrelated domains. "Sometimes these tools can do that." This unpredictability represents a significant liability for businesses deploying customer-facing AI systems, where off-topic responses could confuse customers, damage brand reputation, or even create legal exposure.
Google's Model Armor technology addresses this by functioning as an intelligent filter layer. "Having filters and using our capabilities to put health checks on those responses allows an organisation to get confidence," Johnston noted. The system screens AI outputs for personally identifiable information, filters content inappropriate to the business context, and blocks responses that could be "off-brand" for the organisation's intended use case.
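Conceptually, this kind of screening is a rules layer that sits between the model and the user. The sketch below is purely illustrative: the PII patterns and off-topic keyword list are assumptions chosen for the retail-store example above, and this is not Google's actual Model Armor implementation, which runs as a managed service.

```python
import re

# Illustrative patterns for possible PII in a model response.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),         # US SSN-style identifier
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),  # email address
]

# A retail chatbot should not drift into medical advice;
# keyword list is a stand-in for a real topic classifier.
OFF_TOPIC_KEYWORDS = {"diagnosis", "prescription", "dosage"}

def screen_response(text: str) -> tuple[bool, str]:
    """Return (allowed, reason) for a candidate model response."""
    for pattern in PII_PATTERNS:
        if pattern.search(text):
            return False, "blocked: possible PII in response"
    lowered = text.lower()
    for keyword in OFF_TOPIC_KEYWORDS:
        if keyword in lowered:
            return False, f"blocked: off-topic term '{keyword}'"
    return True, "allowed"

print(screen_response("Our store opens at 9 AM daily."))
print(screen_response("The recommended dosage is 200 mg."))
```

A production filter would replace the keyword set with a trained classifier and log every blocked response for review, but the control point is the same: nothing reaches the customer without passing the checks.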
The company also addresses the growing concern of shadow AI deployment. Organisations are discovering hundreds of unauthorised AI tools on their networks, creating massive security gaps. Google's sensitive data protection technologies attempt to close these gaps by scanning across multiple cloud providers and on-premises systems.
The scale challenge: Budget constraints vs. growing threats
Johnston identified budget constraints as the primary challenge facing Asia Pacific CISOs, arriving precisely as organisations face escalating cyber threats. The paradox is stark: as attack volumes increase, organisations lack the resources to respond adequately.
"We look at the statistics and objectively say, we're seeing more noise – may not be super sophisticated, but more noise is more overhead, and that costs more to deal with," Johnston observed. The increase in attack frequency, even when individual attacks aren't necessarily more advanced, creates a resource drain that many organisations cannot sustain.
The financial pressure intensifies an already complex security landscape. "They are looking for partners who can help accelerate that without having to hire 10 more staff or get larger budgets," Johnston explained, describing how security leaders face mounting pressure to do more with existing resources while threats multiply.
Critical questions remain
Despite Google Cloud AI's promising capabilities, several important questions persist. When challenged on whether defenders are actually winning this arms race, Johnston acknowledged: "We haven't seen novel attacks using AI to date," but noted that attackers are using AI to scale existing attack methods and create "a wide range of opportunities in some aspects of the attack."
The effectiveness claims also require scrutiny. While Johnston cited a 50% improvement in incident report writing speed, he admitted that accuracy remains a challenge: "There are inaccuracies, sure. But humans make mistakes too." The acknowledgement highlights the ongoing limitations of current AI security implementations.
Looking forward: Post-quantum preparations
Beyond current AI implementations, Google Cloud is already preparing for the next paradigm shift. Johnston revealed that the company has "already deployed post-quantum cryptography between our data centres by default at scale," positioning for future quantum computing threats that could render current encryption obsolete.
The verdict: Cautious optimism required
The integration of AI into cybersecurity represents both unprecedented opportunity and significant risk. While Google Cloud's AI technologies demonstrate genuine capabilities in vulnerability detection, threat analysis, and automated response, the same technologies give attackers enhanced capabilities for reconnaissance, social engineering, and evasion.
Curran's assessment provides a balanced perspective: "Given how quickly the technology has evolved, organisations will have to adopt a more comprehensive and proactive cybersecurity policy if they want to stay ahead of attackers. After all, cyberattacks are a matter of 'when,' not 'if,' and AI will only accelerate the number of opportunities available to threat actors."
The success of AI-powered cybersecurity ultimately depends not on the technology itself, but on how thoughtfully organisations implement these tools while maintaining human oversight and addressing fundamental security hygiene. As Johnston concluded, "We should adopt these in low-risk approaches," emphasising the need for measured implementation rather than wholesale automation.
The AI revolution in cybersecurity is underway, but victory will belong to those who can balance innovation with prudent risk management, not to those who simply deploy the most advanced algorithms.
See also: Google Cloud unveils AI ally for security teams


