AI-enhanced hackers now operate with heightened speed, aggression, and effectiveness
In the rapidly evolving landscape of cybersecurity, a new threat has emerged: the use of generative AI (GenAI) by hackers. According to Adam Meyers, Head of Counter Adversary Operations at CrowdStrike, this development poses a significant risk to enterprises worldwide.
Hackers are leveraging GenAI to mount highly sophisticated, large-scale cyberattacks. They use generative models like GPT-4 and successors to create personalized, convincing phishing emails that bypass traditional filters, enabling widespread credential theft and malware deployment. Beyond phishing, attackers exploit vulnerabilities in enterprise AI systems, especially agentic AI tools—AI agents that perform autonomous workflows—thus expanding the attack surface beyond human entry points to include non-human AI identities and automation pipelines.
Key tactics include AI-generated phishing campaigns, exploitation of AI development and deployment tools, and lowering the barrier for less-skilled attackers by automating complex coding and exploitation tasks. Together, these tactics increase the scale and speed of cyberattacks, overwhelming traditional security defenses that are neither AI-native nor intent-aware.
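To illustrate why AI-generated phishing bypasses traditional filters, consider a minimal sketch of a static, keyword-based filter. The phrase list and sample messages below are purely illustrative (not real detection rules): a templated lure trips the filter, while a personalized, AI-written lure with the same malicious intent does not.

```python
# Minimal sketch: a static keyword filter, of the kind traditional
# email defenses rely on. Phrases and messages are illustrative only.

SUSPICIOUS_PHRASES = {
    "verify your account",
    "urgent action required",
    "click here to claim",
}

def flags_as_phishing(message: str) -> bool:
    """Return True if the message contains a known phishing-template phrase."""
    lowered = message.lower()
    return any(phrase in lowered for phrase in SUSPICIOUS_PHRASES)

# A templated lure trips the filter...
templated = "URGENT ACTION REQUIRED: verify your account now."
# ...but a personalized, GenAI-style lure with the same intent slips through,
# because it contains none of the canned phrases the filter keys on.
personalized = ("Hi Dana, great catching up at the Austin offsite. "
                "Could you re-check the Q3 vendor invoice portal before Friday?")

print(flags_as_phishing(templated))     # True
print(flags_as_phishing(personalized))  # False
```

This is exactly the gap the article describes: defenses that match known templates cannot keep up when each lure is individually generated, which is why intent-aware detection is being pushed as the alternative.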
Potential consequences for enterprises are severe: financial losses from successful phishing and malware attacks that lead to fraud or ransom payments, reputational damage from breaches and leaked sensitive data, compromise and manipulation of enterprise AI workflows, and cyberattacks that arrive faster and at greater scale than defenses can absorb.
The rise of generative AI transforms both offense and defense in cybersecurity. While it enhances threat detection and response automation on the defensive side, it simultaneously amplifies the scale, speed, and sophistication of attacker techniques. Enterprises must adopt AI-native, real-time, intent-aware security tools and focus on securing both human and AI-driven entry points to mitigate these emerging threats.
CrowdStrike has observed multiple instances of hackers exploiting vulnerabilities in tools used to build AI agents. Adversaries are treating GenAI agents like infrastructure and attacking them similarly to SaaS platforms, cloud consoles, and privileged accounts. The security company considers agentic AI systems as a core part of the enterprise attack surface.
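If agentic AI systems are treated as infrastructure, they can be audited like infrastructure. The sketch below is a hypothetical inventory check (the field names, scopes, and 90-day threshold are assumptions, not CrowdStrike guidance) that flags non-human agent identities with over-broad privileges or stale credentials, the kind of exposure the article says adversaries now target.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical audit of non-human (AI agent / service) identities.
# Scope names and the rotation threshold are illustrative assumptions.

@dataclass
class AgentIdentity:
    name: str
    scopes: list[str]
    last_key_rotation: date

BROAD_SCOPES = {"admin", "*", "org:write"}  # privileges an agent rarely needs
MAX_KEY_AGE_DAYS = 90                        # example rotation policy

def audit(identities: list[AgentIdentity], today: date) -> list[tuple[str, str]]:
    """Return (identity, finding) pairs for risky non-human identities."""
    findings = []
    for ident in identities:
        if BROAD_SCOPES & set(ident.scopes):
            findings.append((ident.name, "over-privileged scope"))
        if (today - ident.last_key_rotation).days > MAX_KEY_AGE_DAYS:
            findings.append((ident.name, "stale credential"))
    return findings

inventory = [
    AgentIdentity("invoice-bot", ["billing:read"], date(2025, 6, 1)),
    AgentIdentity("deploy-agent", ["admin"], date(2024, 11, 15)),
]
print(audit(inventory, today=date(2025, 7, 1)))
```

The design point is the same one the article makes: an AI agent with a privileged, unrotated credential is attacked the way a privileged human account or cloud console would be, so it belongs in the same review cycle.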
In addition, four in five (81%) interactive intrusions were malware-free, relying on hands-on-keyboard activity to stay undetected. Threat actors are using generative AI to scale social engineering, accelerate operations, and lower the barrier to entry for intrusions.
CrowdStrike is particularly concerned about autonomous workflows and non-human identities becoming the next frontier of adversary exploitation. For instance, DPRK-nexus Famous Chollima is using generative AI to automate its insider attack program. Scattered Spider, a group believed to consist of UK and US nationals, deployed ransomware within 24 hours of accessing systems.
In conclusion, as of 2025 hackers' use of generative AI makes AI-generated phishing a leading email threat, puts AI agent tooling directly in attackers' sights, and threatens sensitive enterprise assets with potentially devastating financial and operational consequences. Enterprises must be vigilant and proactive in their cybersecurity strategies to combat this evolving threat.