The threats posed by deepfakes are different from what was expected—so how can individuals protect themselves?


Deepfakes are here to stay for the long term, and we need to learn how to safeguard ourselves against their growing threats.

Image source: Freepik




Nathan Hamiel





  • People had widely predicted that deepfakes would disrupt global elections and trigger a crisis of misinformation and disinformation.

  • Deepfakes didn’t sway the electoral prospects of any candidate, but ineffective doesn’t mean harmless.

  • As AI technology advances, organizations must remain vigilant and foster a culture of awareness to protect both employees and systems.


What is the biggest threat posed by deepfakes? A year ago, many might have answered that deepfakes would disrupt global elections and trigger a crisis of misinformation and disinformation. That prediction simply hasn't come to pass.

In the 2024 election cycle, memes, political propaganda, and low-quality "AI garbage" failed to sway the fortunes of any candidate.

Yet deepfakes are still all around us, and the fact that they failed to sway elections doesn't mean they're harmless.

Misconceptions About Deepfakes

Since 2020, I’ve been writing articles about deepfakes, warning people that the threat posed by deepfakes is different from what’s commonly anticipated—it’s not primarily an issue of misinformation or disinformation aimed at influencing elections.

Meta's latest report confirms this. The report notes that, according to fact-checking efforts, less than 1% of all misinformation during the 2024 election cycle was AI-generated.

Major elections worldwide, including India's massive general election, came and went without notable AI-related incidents. Yet experts continued to stress the supposed dangers of AI right up to the 2024 U.S. presidential election.

Many people overestimate the impact of deepfakes because they can't see how highly realistic images or videos could fail to deceive the public. After all, images of the Pope in a puffer jacket, or of the American singer Katy Perry in a glamorous gown at the Met Gala, really did fool our eyes. But nothing significant was at stake in those images, and they didn't clash with people's deeply held beliefs.

Elections can be highly polarized and incredibly divisive. In such contexts, the way people consume and share information often reinforces their pre-existing beliefs. In these scenarios, the truth itself frequently fails to sway opinions.

Many people overlook the fact that even before the rise of generative AI (GenAI), the internet was already awash with misinformation. Yet, despite this, we haven’t yet faced an apocalyptic crisis of fake or misleading information. Instead, misinformation has evolved into a form of entertainment—people now share memes and politically charged content that aligns with their own beliefs, using them to rally support while simultaneously provoking outrage in opposing camps.

The Real Risks of Deepfakes

As deepfakes become more powerful and widespread, two direct risks have already emerged: harassment and social engineering attacks.

Deepfakes can be used to target and harass individuals. The most extreme form involves generating and distributing non-consensual intimate images, which may be posted directly online or used in blackmail and sextortion schemes.

This form of harassment has broader societal repercussions, and governments are now proposing new laws or revising existing ones specifically to address it. In the United States, for instance, the TAKE IT DOWN Act has been approved unanimously by the Senate.

The second risk is social engineering attacks. Deepfakes can exploit vulnerabilities in both people and technology. For instance, attackers might use AI to mimic a family member's voice and trick relatives into handing over money. As these scams rise, agencies such as the U.S. Federal Trade Commission have issued warnings to consumers.

Beyond consumer scams, attackers can also mount more sophisticated social engineering attacks against individuals inside an organization, mimicking voices or even pairing them with deepfake video in increasingly complex schemes.

In one widely reported case from February 2024, a finance employee at a multinational company's Hong Kong office transferred $25 million to fraudsters after a video conference in which every other participant, including the company's chief financial officer, was a digital deepfake.

Deepfakes can also be used to target systems themselves, often those that perform authentication or authorization. They can expose weak verification strategies, such as a bank relying solely on basic voiceprint recognition.

And while deepfakes can make social engineering attacks more convincing, none of the many incidents that Kudelski Security's incident response team handles each year has so far involved sophisticated deepfake techniques. Basic social engineering methods remain the dominant approach.

How to Respond to the Deepfake Threat

As tools become more widespread and the barrier to content creation lowers, deepfake technology is here to stay. Unfortunately, this places most of the responsibility for prevention squarely on individuals.

We should talk through these scenarios with family members, helping them recognize the warning signs so they can avoid being scammed. Families can also agree on a shared "code word" to verify that the person on the other end is who they claim to be.

In organizations and workplaces, employees and existing technologies share the responsibility for prevention. To help users recognize relevant scenarios and potential risks, we need to reinforce user education and awareness programs by showcasing real-life examples of deepfake attacks.

Deepfake detection technology is still an active area of research. In the meantime, there are steps we can take to mitigate the risks and make attacks harder to execute.

We can implement strong authentication and authorization: identify weak points in these processes, such as simple voiceprint recognition or verification based on flat image files, and strengthen those areas with additional security measures.

For voice verification, we can add a secret passphrase known only to the user; for facial verification, we can implement live-person detection by asking users to turn their heads.

Flat files such as photo IDs can be hardened in a similar way. None of these techniques is foolproof, however; attackers have already begun exploiting weaknesses in them. This underscores the need for organizations to layer their security measures so that an attack becomes significantly harder to pull off.
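To make the layering concrete, here is a minimal sketch, in Python, of how such checks might be chained. It is illustrative only: the `Caller` structure and every helper in it are hypothetical stand-ins invented for this example, not a real biometric API, and a production system would rely on vendor SDKs and stronger factors.

```python
# Minimal illustrative sketch of layered identity verification.
# All helper logic here is a hypothetical stand-in -- a real system
# would use a vendor biometric SDK, not these placeholder checks.

from dataclasses import dataclass


@dataclass
class Caller:
    audio_sample: bytes        # recorded voice sample
    spoken_passphrase: str     # secret phrase spoken by the caller
    passed_head_turn: bool     # result of a live head-turn challenge


def match_voiceprint(audio_sample: bytes) -> bool:
    """Placeholder for a real voiceprint model (an assumption, not a real API)."""
    return len(audio_sample) > 0


def verify_caller(caller: Caller, expected_passphrase: str) -> tuple[bool, str]:
    # Layer 1: voiceprint match. Necessary, but cloneable by deepfake audio.
    if not match_voiceprint(caller.audio_sample):
        return False, "voiceprint mismatch"

    # Layer 2: secret passphrase known only to the user; a cloned
    # voice alone cannot supply it.
    if caller.spoken_passphrase != expected_passphrase:
        return False, "passphrase mismatch"

    # Layer 3: liveness challenge (e.g. a head turn) to defeat
    # pre-rendered deepfake video or flat image files.
    if not caller.passed_head_turn:
        return False, "liveness check failed"

    return True, "all layers passed"


if __name__ == "__main__":
    caller = Caller(b"\x01\x02", "blue heron", passed_head_turn=True)
    print(verify_caller(caller, "blue heron"))  # (True, 'all layers passed')
```

The value of the layering is that each check fails independently: a cloned voice without the passphrase, or a pre-rendered video that cannot perform a live head turn, is stopped before any damage is done.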

Some of the challenges posed by deepfakes are beyond individual control. These may require government intervention: regulations targeting specific applications of the technology, such as the distribution of AI-generated intimate images without consent. In addition, people who experience harassment can report incidents to platforms such as Meta and X.

Most importantly, organizations need to be prepared to develop strategies and adapt to change. As AI technology advances and new threats emerge, organizations must remain vigilant, continuously monitoring attackers' techniques to safeguard both employees and systems.

The above content represents the author's personal views. This article is translated from the World Economic Forum's Agenda blog; the Chinese version is for reference only.

Translated by: Di Chenjing | Edited by: Wang Can

The World Economic Forum is an independent and neutral platform dedicated to bringing together diverse perspectives to discuss critical global, regional, and industry-specific issues.
