
The Evolving AI Threat Landscape: How Adversaries Are Using Generative AI for Cyberattacks

Last updated: 2026-05-12 20:57:58 · Cybersecurity

Introduction

Since our last update in February 2026, Google Threat Intelligence Group (GTIG) has tracked a significant shift in how adversaries integrate artificial intelligence into their operations. What was once experimental is now mature and industrial in scale. Drawing on insights from Mandiant incident response, Gemini, and GTIG's proactive research, this article highlights AI's dual role: a powerful engine for attacks and a prime target. Below, we examine six key developments shaping the current threat environment.

[Image: The Evolving AI Threat Landscape. Source: www.mandiant.com]

Vulnerability Discovery and Exploit Generation

For the first time, GTIG observed a threat actor using a zero-day exploit that was likely developed with AI assistance. The criminal group behind it had planned a mass exploitation event, which proactive countermeasures may have prevented. State-linked actors from the People's Republic of China (PRC) and the Democratic People's Republic of Korea (DPRK) have also shown strong interest in AI-assisted vulnerability research. By automating flaw discovery, adversaries can compress their timelines and probe previously overlooked weaknesses.

AI-Augmented Development for Defense Evasion

AI-driven coding is enabling adversaries to rapidly build infrastructure suites and polymorphic malware. These tools help evade defenses by generating obfuscated code and embedding decoy logic. For example, suspected Russia-nexus threat actors have used AI-generated code to create malware that adapts its signature and behavior with each build. The speed of AI-assisted development makes it far harder for security teams to keep pace.
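Polymorphic samples defeat exact-hash signatures because every build has a different cryptographic hash. One common defensive response is similarity matching over byte n-grams rather than exact hashes. The sketch below is a minimal, hypothetical illustration of that idea (the "samples" are placeholder byte strings, not real malware), not a production detector:

```python
import hashlib


def ngrams(data: bytes, n: int = 4) -> set:
    """Return the set of byte n-grams in a sample."""
    return {data[i:i + n] for i in range(len(data) - n + 1)}


def jaccard_similarity(a: bytes, b: bytes, n: int = 4) -> float:
    """Jaccard similarity of two samples' n-gram sets (0.0 to 1.0)."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)


# A toy "polymorphic variant": the same core bytes behind a mutated wrapper.
original = b"decrypt_config(); beacon('c2.example'); exfiltrate();" * 4
variant = b"\x90\x90junk\x90" + original

# Exact hashes no longer match, but the shared core keeps n-gram
# similarity high, so the variant is still linked to the original.
hashes_differ = (hashlib.sha256(original).hexdigest()
                 != hashlib.sha256(variant).hexdigest())
score = jaccard_similarity(original, variant)
```

Real-world analogues of this approach include fuzzy-hashing tools such as ssdeep and TLSH, which apply far more robust versions of the same principle.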

Autonomous Malware Operations

The emergence of AI-enabled malware like PROMPTSPY marks a turning point toward autonomous attack orchestration. This malware interprets system states and dynamically generates commands, allowing it to manipulate victim environments without direct human intervention. Our analysis reveals previously unreported capabilities, including the ability to offload complex operational tasks to AI models. This approach enables scaled, adaptive attacks that can respond to countermeasures in real time.
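Malware that offloads tasks to an AI model must reach that model over the network, which gives defenders a telemetry signal: hosts calling LLM API endpoints that have no business doing so. The sketch below is a hypothetical egress-log triage pass; the log format, domain list, and host names are all illustrative assumptions, not a documented detection for PROMPTSPY:

```python
# Illustrative, non-exhaustive list of LLM API domains to watch for.
LLM_API_DOMAINS = {
    "api.openai.com",
    "generativelanguage.googleapis.com",
    "api.anthropic.com",
}

# Hosts with a sanctioned reason to call LLM APIs (assumed inventory).
APPROVED_HOSTS = {"dev-workstation-01"}


def flag_suspect_llm_calls(log_lines):
    """Return (host, domain) pairs for unapproved hosts calling LLM APIs.

    Assumes whitespace-separated proxy logs: '<host> <dest_domain> <path>'.
    """
    suspects = []
    for line in log_lines:
        parts = line.split()
        if len(parts) < 2:
            continue
        host, domain = parts[0], parts[1]
        if domain in LLM_API_DOMAINS and host not in APPROVED_HOSTS:
            suspects.append((host, domain))
    return suspects


logs = [
    "dev-workstation-01 api.openai.com /v1/chat/completions",
    "file-server-02 generativelanguage.googleapis.com /v1beta/models",
    "file-server-02 cdn.example.com /static/app.js",
]
suspects = flag_suspect_llm_calls(logs)
```

Here the file server, which has no approved AI workload, is the only host flagged. In practice this belongs in a SIEM rule with a maintained allowlist rather than a standalone script.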

AI-Augmented Research and Information Operations

Adversaries continue to use AI as a high-speed research assistant throughout the attack lifecycle. More concerning, they are moving toward agentic workflows—autonomous frameworks that plan and execute attacks. In information operations (IO), AI is used to fabricate consensus by generating synthetic media and deepfakes at scale. A key example is the pro-Russia campaign “Operation Overload,” which leveraged AI to flood platforms with misleading content.


Obfuscated LLM Access

Threat actors are now pursuing premium-tier access to large language models through professionalized middleware and automated registration pipelines. This allows them to bypass usage limits and maintain anonymity. These infrastructures enable large-scale misuse while subsidizing operations via trial abuse and programmatic account cycling. The result is a shadow ecosystem of illicit AI access that challenges enforcement efforts.
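Automated registration pipelines tend to leave a statistical fingerprint: many accounts created from one source (IP, device fingerprint, payment instrument) within a short window. A minimal sliding-window rate check, sketched below with assumed thresholds and a hypothetical IP-keyed source, is one building block platforms use against trial abuse and account cycling:

```python
from collections import defaultdict, deque


class SignupRateMonitor:
    """Flag registration sources that exceed a per-window signup rate.

    Hypothetical detector for automated account cycling: thresholds
    and the choice of 'source' key are assumptions for illustration.
    """

    def __init__(self, window_seconds: int = 3600, max_signups: int = 3):
        self.window = window_seconds
        self.max_signups = max_signups
        self.events = defaultdict(deque)  # source -> signup timestamps

    def record(self, source: str, timestamp: float) -> bool:
        """Record a signup; return True if the source looks automated."""
        q = self.events[source]
        q.append(timestamp)
        # Evict events that have aged out of the window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_signups


monitor = SignupRateMonitor(window_seconds=3600, max_signups=3)
# Five signups from one source within five minutes: the fourth and
# fifth exceed the per-hour budget and are flagged.
flags = [monitor.record("ip-203.0.113.7", t) for t in (0, 60, 120, 180, 240)]
```

Production systems layer many such signals (fingerprints, disposable-email detection, payment reuse); a single rate check is easily evaded by rotating sources, which is exactly why these middleware pipelines automate rotation.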

Supply Chain Attacks Targeting AI Environments

Groups like “TeamPCP” (aka UNC6780) have begun targeting AI software dependencies and environments as an initial access vector. These supply chain attacks can lead to multiple types of breaches, from data theft to lateral movement into critical infrastructure. By compromising the tools that organizations rely on for AI development, adversaries can achieve broad and stealthy access.
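A baseline defense against dependency tampering is to pin every artifact to a known-good cryptographic hash and refuse anything unpinned or drifted, the model behind pip's `--require-hashes` mode and npm lockfile integrity fields. The sketch below illustrates the check itself; the package name and contents are hypothetical:

```python
import hashlib


def sha256_hex(data: bytes) -> str:
    """Hex-encoded SHA-256 digest of an artifact's bytes."""
    return hashlib.sha256(data).hexdigest()


# Hypothetical lockfile: artifact filename -> expected SHA-256.
PINNED = {
    "ai-toolkit-1.2.0.tar.gz": sha256_hex(b"trusted release contents"),
}


def verify_artifact(name: str, contents: bytes, pinned: dict) -> bool:
    """Reject artifacts that are unpinned or whose hash has drifted."""
    expected = pinned.get(name)
    return expected is not None and sha256_hex(contents) == expected


# The legitimate artifact passes; a trojanized upload published under
# the same name and version fails the hash check.
ok = verify_artifact("ai-toolkit-1.2.0.tar.gz",
                     b"trusted release contents", PINNED)
tampered = verify_artifact("ai-toolkit-1.2.0.tar.gz",
                           b"trojanized contents", PINNED)
```

Hash pinning stops post-publication tampering but not a malicious version you pinned in good faith, so it pairs with provenance controls such as signature verification and dependency review.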

In summary, the threat landscape is rapidly evolving as AI becomes both a weapon and a target. Organizations must adapt their defenses to address these new capabilities, from zero-day exploits driven by AI to autonomous malware and supply chain compromises. GTIG continues to monitor these developments to provide timely intelligence.