AI Transition from Coding to Hacking
**Artificial Intelligence and the Rising Threat of Advanced Cyberattacks**
In 2025, the threat of AI-powered cyberattacks is growing at an alarming rate. AI is transforming the cyber threat landscape, making attacks more frequent, more sophisticated, and harder to defend against.
According to a recent survey, 93% of business leaders anticipate daily AI-based attacks, with common attack vectors including phishing (38%), ransomware (48%), and bot attacks (16%) [1]. AI-powered phishing, in particular, accounts for over 80% of phishing incidents and achieves a 60% success rate [1].
AI enables cybercriminals to automate and scale attacks efficiently, refine tactics in real time, and bypass traditional defenses. This has driven up the cost of data breaches significantly: the average breach cost $4.9 million in 2025, up 10% from 2024 [3].
AI has also fueled a surge in deepfake fraud and synthetic identity attacks [1][2]. Executives report that staff increasingly struggle to distinguish real threats from AI-generated fakes, with 59% acknowledging greater difficulty due to the sophistication of AI [2].
While "vibe hacking" may not be a commonly mainstream term, it generally relates to exploiting social engineering and psychological manipulation, which AI enhances by creating highly personalized and convincing phishing or misinformation campaigns. Generative Pre-trained Transformer (GPT) models and other generative AI services are increasingly used by attackers to craft sophisticated phishing emails, fake identities, and manipulate content at scale [1][2][4].
AI-powered attacks are already partially autonomous, with malware capable of adapting tactics and responding dynamically to defenses without human intervention [3]. However, fully autonomous attacks—where AI independently selects targets, plans, and executes multi-stage cyber intrusions end-to-end without human guidance—are not yet fully realized but are considered imminent [3].
Organizations remain largely underprepared for these threats; only about 29-32% of executives feel ready for AI-powered and deepfake attacks [2]. On the defense side, AI is being embraced to improve cybersecurity: 76-89% of organizations deploy AI-enabled defenses, and about 40% of cybersecurity budgets are allocated to AI systems [1]. AI-powered defense systems can block up to 95% of phishing attacks and cut response times by 35-60%, but adversaries continuously evolve their AI techniques to bypass these countermeasures [1].
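To make the defensive side concrete, here is a minimal sketch of the kind of text classifier that AI-enabled phishing filters build on, using scikit-learn. The toy messages and labels are purely illustrative; production systems train on large labeled corpora and combine many signals beyond message text.

```python
# A minimal sketch of an AI-enabled phishing filter (illustrative toy data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled examples: 1 = phishing, 0 = legitimate.
messages = [
    "Urgent: verify your account now or it will be suspended",
    "Your invoice for last month is attached, let me know if anything is off",
    "Click here to claim your prize before midnight",
    "Meeting moved to 3pm, same room as last week",
]
labels = [1, 0, 1, 0]

# TF-IDF text features feeding a logistic-regression classifier.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(messages, labels)

suspect = ["Verify your password immediately to avoid account suspension"]
print(classifier.predict_proba(suspect))  # columns: [P(legitimate), P(phishing)]
```

Real deployments layer models like this with sender-reputation, URL, and behavioral signals, which is how the blocking rates cited above become achievable.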
In the realm of exploit development, large language models (LLMs) are making significant strides, particularly in code generation. In testing, however, the models found exploit development considerably harder than vulnerability research, and no model completed all of the tasks set [5]. Open-source LLMs proved unsuitable even for basic vulnerability research; where AI demonstrably excels at scale is language, with an estimated 51% of all spam now AI-generated [5].
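As a concrete illustration of the vulnerability-research side of such tests, the sketch below asks a chat model to triage a small C snippet. It assumes an OpenAI-compatible Python client and API key; the model name is illustrative and not a claim about which models were tested.

```python
# A minimal sketch of LLM-assisted vulnerability triage.
# Assumes the openai Python package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

snippet = """
char buf[16];
strcpy(buf, user_input);  /* unbounded copy of attacker-controlled data */
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; any capable chat model
    messages=[
        {"role": "system",
         "content": "You are a security reviewer. Identify vulnerabilities."},
        {"role": "user", "content": f"Review this C code:\n{snippet}"},
    ],
)
print(response.choices[0].message.content)
```

A model that can flag the obvious overflow here is still a long way from chaining discovery, exploitation, and delivery end-to-end, which is the gap the findings above describe.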
Despite these advancements, there is currently no clear evidence of threat actors using vibe hacking (AI-powered hacking) to discover and exploit vulnerabilities. Michele Campobasso, a senior security researcher at Forescout, notes that most reports link the use of LLMs to tasks where language matters more than code [5]. In testing conducted by Campobasso, more than 50 AI models were evaluated against four test cases, and the results yielded no evidence that AI is being used for cyberattacks [5].
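For a sense of how an evaluation like this can be structured, here is a hypothetical harness in the same spirit: several models run against a fixed set of test cases with simple pass/fail scoring. All names, prompts, and pass markers below are illustrative; the actual Forescout test cases are more involved and are not reproduced here.

```python
# A hypothetical sketch of a multi-model evaluation harness (all data illustrative).
from typing import Callable

# Each "model" is stubbed as a function from prompt to answer.
def model_a(prompt: str) -> str:
    return "stack buffer overflow in strcpy"

def model_b(prompt: str) -> str:
    return "no issues found"

models: dict[str, Callable[[str], str]] = {"model_a": model_a, "model_b": model_b}

# Test cases pair a prompt with a marker the answer must contain to count as a pass.
test_cases = [
    ("Find the bug: strcpy(buf, user_input);", "overflow"),
    ("Find the bug: sprintf(out, fmt);", "format string"),
]

for name, model in models.items():
    passed = sum(marker in model(prompt) for prompt, marker in test_cases)
    print(f"{name}: {passed}/{len(test_cases)} test cases passed")
```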
In conclusion, the threat of AI-powered cyberattacks is advanced and rapidly increasing in prevalence. AI is being embraced on both sides: attackers use it to scale and refine their campaigns, while 76-89% of organizations now deploy AI-enabled defenses [1]. Fully autonomous AI cyberattacks are not yet widespread, and vibe hacking, the use of AI to discover and exploit vulnerabilities, remains underdeveloped, with most evaluated models unsuitable even for basic vulnerability research [5]. Even so, the technology is moving quickly toward these capabilities, and organizations remain largely underprepared for the scale and sophistication of the AI-driven threats expected imminently [1][2][3][4].