How does artificial intelligence create personalized online scams tailored just for you?






Abdulaziz Almaslukh




  • AI-driven social engineering cyberattacks are characterized by automation, adaptability, and targeted precision.

  • Artificial intelligence has been used to carry out social engineering cyberattacks, tricking individuals and companies out of millions of dollars.

  • Addressing this issue requires coordinated measures centered on cross-departmental collaboration.


Artificial intelligence technologies, epitomized by generative AI (GenAI) and large language models (LLMs), have already taken the world by storm. These technologies have demonstrated immense potential to automate a wide range of everyday tasks—spanning everything from basic IT helpdesk requests to sophisticated user behavior analysis. Such task automation is typically carried out by AI agents, which are autonomous software systems designed to perform specific tasks and execute actions. Notably, businesses across industries are increasingly adopting AI tools to boost efficiency and reduce costs.

However, the rise of AI models has also enabled new, highly effective cyberattacks—AI-based attacks—that are characterized by automation, adaptability, and precise tailoring to their targets. These emerging threats have opened up a new frontier in cybersecurity, fundamentally reshaping the landscape of digital defense. In fact, MITRE has introduced the MITRE ATLAS framework as an extension of the widely adopted MITRE ATT&CK framework, specifically designed to address adversarial tactics and techniques against AI systems.

Artificial Intelligence and Cybersecurity

Although artificial intelligence technology is advancing rapidly and is undeniably exciting, its misuse has also sparked significant concerns. In fact, the World Economic Forum's Global Risks Report 2024 identifies misinformation and disinformation as the most critical risks associated with AI technologies. By leveraging cutting-edge models, AI is reshaping the landscape of cybersecurity threats—and could potentially unleash devastating consequences. AI-powered attacks are equipped with both reasoning and action capabilities, which can render traditional defense mechanisms ineffective. This evolution is paving the way for increasingly sophisticated and complex threats, unfolding at a pace and scale that far surpasses human capacity.

In recent years, AI-powered attack waves have consistently dominated headlines. Deepfake technology is particularly notorious for leveraging artificial intelligence to generate deceptive content. In Hong Kong, this technology was used to trick a finance professional into transferring $25 million to scammers—during a video conference call where the fraudsters impersonated the company’s chief financial officer. Such AI-driven scams are virtually cost-free for attackers and are expected to accelerate further, posing a growing threat to businesses across the board.

AI-Driven Social Engineering Attacks

As individuals leave an ever-growing digital footprint online, and as AI-powered attacks become increasingly sophisticated, threat actors now have the capability to craft more personalized and deceptive attacks. One such attack is social engineering, in which individuals are manipulated into divulging confidential information or performing actions that compromise their security. The emergence of advanced AI models—particularly large language models (LLMs)—has empowered even historically less-capable threat actors to develop highly effective social engineering tactics. For instance, scammers using cutting-edge voice-cloning technology convinced a mother that her 15-year-old daughter had been kidnapped—even though the teenager was in fact safe and sound.

Digital deception technologies have made significant strides, opening up new frontiers for social engineering attacks. These attacks can have severe consequences for digital assets, including financial losses and privacy breaches. Importantly, such attacks can be carried out through various channels—such as email, phishing websites, text messages, voice or video calls, and even social media platforms. Typically, social engineering attacks rely heavily on exploiting human vulnerabilities rather than targeting weaknesses in digital infrastructure security.

While reliable defense technologies against AI-based attacks are still under development, the number of cybersecurity incidents involving AI technology has already surged significantly. AI-powered attacks are becoming increasingly sophisticated—and often highly effective—rapidly outpacing current protective measures, making it far more challenging to safeguard digital environments. Yet, this doesn’t mean that fraudsters and hackers have already won.

Addressing Concerns Related to AI-Based Attacks

The threat of AI-driven attacks can be addressed in three ways. First, it is essential to thoroughly assess how current cybersecurity controls perform against emerging AI-powered cyber threats. Before these AI attacks escalate further, organizations should immediately engage in dialogue and share information, which will help develop coordinated global countermeasures.

Second, safeguarding our critical assets depends on enhancing current security measures, developing robust defenses specifically tailored to these emerging threats, and educating and raising community awareness about new forms of technology. Additionally, revisiting existing frameworks and updating them to counter AI-driven attacks is a crucial step in protecting our valuable assets.
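To make the idea of layered technical defenses concrete, here is a deliberately simplified sketch of the kind of heuristic signal that email-security tooling might combine with AI-based detectors. All names, phrases, and thresholds below are illustrative assumptions, not a real product's logic—and simple keyword rules like these are exactly what AI-generated lures are crafted to evade, which is why they would only ever be one signal among many.

```python
import re
from urllib.parse import urlparse

# Illustrative pressure phrases often seen in social engineering lures.
URGENCY_PHRASES = ["urgent", "immediately", "verify your account", "act now"]


def phishing_score(subject: str, body: str, sender_domain: str) -> int:
    """Return a crude risk score: +1 for each heuristic that fires.

    This is a toy sketch for illustration, not a real detector.
    """
    score = 0
    text = (subject + " " + body).lower()

    # 1. Urgency / pressure language.
    if any(phrase in text for phrase in URGENCY_PHRASES):
        score += 1

    # 2. A link whose host does not match the claimed sender's domain.
    for url in re.findall(r"https?://\S+", body):
        host = urlparse(url).netloc.lower()
        if sender_domain.lower() not in host:
            score += 1
            break

    # 3. Requests touching money or credentials.
    if re.search(r"\b(password|invoice|payment|gift card)\b", text):
        score += 1

    return score
```

In practice, a message scoring above some threshold would be quarantined or escalated for human review; production defenses layer many more signals (sender authentication such as SPF/DKIM/DMARC, reputation data, and machine-learned classifiers) rather than relying on keyword matching.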

Finally, the entire ecosystem—spanning governments and leading technology players—should collaborate to support research centers, startups, and small-to-medium-sized enterprises focused on the intersection of artificial intelligence and cybersecurity. This investment serves as a critical catalyst, potentially unlocking groundbreaking solutions—much like how OpenAI revolutionized AI by leapfrogging all established industry players. Such collaboration is urgently needed on the global stage to ensure that cybersecurity defenses evolve faster than the ever-rising tide of these new AI-powered threats.


The above content represents the author's personal views only. This article is translated from the World Economic Forum's Agenda blog; the Chinese version is for reference purposes only.

Editor: Wang Can

The World Economic Forum is an independent and neutral platform dedicated to bringing together diverse perspectives to discuss critical global, regional, and industry-specific issues.
