Emerging AI technology poses a growing risk to cybersecurity

AI-driven cyber attacks have so far remained largely untraceable, yet experts caution that a sufficiently powerful AI system on the market could soon attract significant malicious attention.

More than 42% of the world's population will cast votes in presidential, parliamentary, and general elections this year. At the same time, the digital landscape is shifting dramatically as AI-enabled cybersecurity attacks grow increasingly sophisticated.

Melissa Ruzzi, Director of Artificial Intelligence at AppOmni, has warned that generative AI can turbocharge social engineering and phishing attacks. This potent tool is not only being used by threat actors to create highly convincing deepfakes and automated scams, but also to crack passwords and impersonate real people applying for jobs at companies.

The IBM X-Force Threat Intelligence Index 2024 report identified over 800,000 references to emerging AI technology on illicit and dark web forums last year. Hackers from nations including North Korea, Iran, and Russia have reportedly leveraged OpenAI tools to mount cybersecurity attacks.

These AI-powered attacks often exploit weak access controls and supply chain vulnerabilities, significantly increasing the cost and impact of data breaches. According to reports, organizations can expect to pay around $670,000 more per incident compared to breaches without AI involvement.

However, the adoption of AI technologies for enterprise cybersecurity is rapidly growing. Companies like Coca-Cola are partnering with tech giants like Microsoft to use their cloud and generative AI services. AI is playing a pivotal role in advanced threat intelligence, enabling enterprises to improve real-time threat detection, automate incident response, and predict attacks through behavioral analytics and large-scale data processing.

Yet, a significant challenge is the prevalence of "shadow AI"—unmonitored or unmanaged AI tools within organizations. This lack of governance and proper access controls introduces security risks. Nearly all organizations experiencing AI-related breaches lacked adequate AI access controls, emphasizing the need for robust governance frameworks to mitigate risks associated with rapid AI adoption.

Threat actors are using large language models (LLMs) to create more sophisticated phishing emails with high text volume and multistage payloads. Supply chain intrusions remain a common vector for compromising AI platforms, underlining the importance of zero-trust security principles like network segmentation to protect AI and related infrastructure.

The cybersecurity market is undergoing rapid transformation, with companies actively acquiring AI startups to bolster their defensive capabilities against emerging AI-powered threats. Despite some cost savings achieved through extensive use of AI in security, the market is cautioned that unguided AI adoption without proper governance dramatically increases breach risks and costs.

In the midst of these developments, fake President Biden robocalls were deployed during the recent New Hampshire primaries, while General Mills used Google's PaLM 2 model to deploy a private generative AI tool to its employees. AI can also be used to tailor sophisticated phishing attacks by scraping data about users from various sources on the internet.

For now, cybercriminals remain focused on ransomware, business email compromise, and cryptojacking. X-Force expects AI-enabled attacks in the near term, with the real threat emerging once enterprise AI adoption matures. The quality of attacks using generative AI is getting "kind of scary," according to Melissa Ruzzi, while Adam Meyers finds the potential volume and quality of these attacks starting to become concerning.

In summary, AI is both a powerful enabler for cybersecurity defense and a potent tool exploited by attackers. Enterprise adoption is accelerating, but requires strong governance to fully realize benefits and mitigate risks. As we navigate this digital landscape, vigilance and robust security measures will be key to safeguarding our data and digital identities.

  1. Despite the increasing use of AI for enterprise cybersecurity, threat actors are employing generative AI to escalate phishing attacks, automate scams, and impersonate individuals, as warned by Melissa Ruzzi, Director of Artificial Intelligence at AppOmni. (Source)
  2. The IBM X-Force Threat Intelligence Index 2024 report reveals that hackers from nations including North Korea, Iran, and Russia are mounting AI-enabled cybersecurity attacks, in some cases leveraging OpenAI tools. (Source)
  3. Rapid AI adoption by enterprises can increase vulnerabilities, particularly in the absence of robust governance and proper access controls, as highlighted by the concerning rise of "shadow AI" within organizations. (Source)
