
Cybercriminals are always on the prowl for new ways to cause damage. One of the latest and most concerning developments is the use of generative AI in their attack strategies. The technology enables malicious actors to craft highly convincing, targeted emails that can fool even vigilant individuals and slip past traditional security systems.
One clear example of generative AI's impact on cyberattacks is the rapid rise in Business Email Compromise, or BEC, attacks. In these attacks, a criminal impersonates a legitimate entity within an organization to deceive employees into taking actions that compromise security or financial integrity.
According to recent research by Abnormal Security, BEC attacks have surged by 55% over just six months, and the trend is likely to accelerate as generative AI technology becomes more widespread. The ability to scale attacks with such convincing content could lead to a proliferation of successful cybercrimes and financial losses for businesses and individuals alike.
“As the adoption of generative AI tools rises, bad actors will increasingly use AI to launch attacks at higher volumes and with more sophistication,” said Evan Reiser, CEO at Abnormal Security.
As generative AI becomes more accessible, organizations need to strengthen their security strategies with new tools. One such tool is Abnormal's CheckGPT, which detects AI-generated attacks. The new capability determines when email threats, including BEC and other socially engineered attacks, have likely been created using generative AI tools.
“Security leaders need to combat the threat of AI by investing in AI-powered security solutions that ingest thousands of signals to learn their organization’s unique user behavior, apply advanced models to precisely detect anomalies, and then block attacks before they reach employees,” said Reiser.
Abnormal's email security solution takes a different approach from conventional tools, one built to counter advanced email attacks and therefore well suited to stopping AI-generated ones. Its API-based architecture ingests a wide range of signals, including communication patterns, sign-in events and other attributes, to establish a baseline of normal behavior for each employee and vendor within an organization. Advanced AI models, including natural language processing, then identify anomalies in email behavior that can indicate an impending attack.
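As a rough illustration of how behavioral baselining can work in general, an incoming email can be scored against what has previously been observed for a mailbox. The class names, signals and weights below are hypothetical and do not reflect Abnormal's actual implementation:

```python
# Hypothetical sketch of baseline-based anomaly scoring; not Abnormal's code.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Email:
    sender: str
    recipient: str
    contains_payment_request: bool

class BehaviorBaseline:
    """Tracks which senders a mailbox normally hears from."""
    def __init__(self):
        self.sender_counts = Counter()

    def observe(self, email: Email) -> None:
        # Record routine traffic to build the baseline.
        self.sender_counts[email.sender] += 1

    def anomaly_score(self, email: Email) -> float:
        # A never-seen sender combined with a payment request scores
        # higher than either signal alone (weights are illustrative).
        score = 0.0
        if self.sender_counts[email.sender] == 0:
            score += 0.6
        if email.contains_payment_request:
            score += 0.4
        return score

baseline = BehaviorBaseline()
baseline.observe(Email("cfo@company.com", "ap@company.com", False))
suspect = Email("cfo@company-payments.net", "ap@company.com", True)
print(baseline.anomaly_score(suspect))  # 1.0 -> flag for review
```

A production system would combine many more signals than this, but the principle is the same: deviations from an established baseline raise the score.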
Following the initial email analysis, Abnormal's platform goes beyond mere classification, delving into the intent and origin of email attacks.
The CheckGPT tool employs a suite of open-source large language models (LLMs) to assess the likelihood that a generative AI model produced a particular message. The assessment examines how predictable each word in the message is given its context; if the models consistently assign high likelihood to the words, that is a strong indicator the content was AI-generated.
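As a simplified illustration of this kind of likelihood analysis (a sketch of the general technique, not CheckGPT itself), an open-source model such as GPT-2 can measure how predictable a message's tokens are; an unusually high average token probability, i.e. low perplexity, is one signal that text may be machine-generated:

```python
# Illustrative sketch only: score a message's per-token likelihood under an
# open-source language model (GPT-2 via Hugging Face transformers).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def mean_token_logprob(text: str) -> float:
    """Average log-probability the model assigns to each token in context."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # With labels set, the model returns the mean cross-entropy loss,
        # i.e. the negative mean token log-likelihood.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return -loss.item()

email_body = ("Please process the attached invoice today and confirm "
              "once the wire transfer is complete.")
score = mean_token_logprob(email_body)
print(f"mean token log-prob: {score:.3f} (higher = more predictable text)")
```

In practice, a detector would calibrate such scores against large samples of human- and machine-written email rather than relying on a single threshold.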
Abnormal's recent research demonstrates this detection capability, showcasing emails with language strongly suspected to be AI-generated, including business email compromise and credential phishing attacks.
“While it’s important to understand whether an email was generated by a human or AI to understand and stay ahead of evolving threats, the right system will detect and block attacks no matter how they were created,” said Reiser.
Edited by Alex Passett