From impersonating public figures to fueling scams with manipulated audio and video, deepfakes have a potential for misuse that is both wide-ranging and deeply unsettling. But how are these hyper-realistic fakes actually made? And more importantly – can you spot one yourself? In this in-depth guide, we’ll explore deepfakes from every angle:
- how they’re created,
- where they’re most commonly used,
- how to detect them,
- and what laws are being put in place to fight back.
By the time you’re finished reading, you’ll be far better equipped to spot a deepfake in the wild.
What is a deepfake?
A deepfake is a piece of synthetic media – most commonly a video, an audio clip, or an image – that has been generated or manipulated using artificial intelligence to appear convincingly real. The term comes from a blend of “deep learning” and “fake”, pointing directly to the technology behind it.
Today’s deepfakes can mimic someone’s face, voice, or even movements with disturbing accuracy. Common types of deepfakes you might come across include:
- Face swaps, which replace a person’s face in a video with someone else’s.
- Lip-sync deepfakes, where the face is real but the speech is altered and the mouth movements are digitally reshaped to match.
- Voice cloning, which doesn’t involve a face at all but instead replicates a person’s tone, cadence, and speech patterns.
Are you a fan of podcasts? We break down the deepfake phenomenon in an episode of our Unlocked 403 podcast.
How are deepfakes made?
Creating a deepfake involves using artificial intelligence to manipulate or generate fake visual or audio content so that it appears convincingly real. The core technique behind most deepfakes is a type of machine learning called a Generative Adversarial Network (GAN). This system is made up of two parts: the generator and the discriminator. The generator creates fake content – like a face or a voice – while the discriminator evaluates whether that content looks or sounds real. Through thousands of iterations, the system improves until the generator produces synthetic content that’s nearly indistinguishable from the real thing.
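To make the generator-versus-discriminator loop concrete, here is a deliberately tiny, hypothetical sketch in pure Python: a two-parameter “generator” learns to output numbers that resemble samples from a “real” distribution, while a logistic “discriminator” learns to tell real from fake. Real deepfake GANs train deep convolutional networks on images, not scalars, but the adversarial training dynamic is the same.

```python
# Toy GAN sketch (illustrative only). The "real" data are numbers
# drawn near 4.0; the generator starts out producing numbers near 0.0
# and is gradually pushed toward the real distribution by the
# discriminator's feedback.
import math
import random

random.seed(0)

def sigmoid(s):
    s = max(-60.0, min(60.0, s))      # clamp to avoid overflow
    return 1.0 / (1.0 + math.exp(-s))

# Generator: fake = w * z + b          (z is random noise)
# Discriminator: D(x) = sigmoid(a * x + c), probability that x is real
w, b = 1.0, 0.0                       # generator parameters
a, c = 0.0, 0.0                       # discriminator parameters
lr = 0.01

for step in range(5000):
    real = random.gauss(4.0, 1.0)     # one sample of "real" data
    z = random.gauss(0.0, 1.0)        # noise fed to the generator
    fake = w * z + b

    # Discriminator update: push D(real) -> 1 and D(fake) -> 0
    d_real = sigmoid(a * real + c)
    d_fake = sigmoid(a * fake + c)
    a -= lr * (-(1 - d_real) * real + d_fake * fake)
    c -= lr * (-(1 - d_real) + d_fake)

    # Generator update: push D(fake) -> 1 (i.e. fool the discriminator)
    d_fake = sigmoid(a * fake + c)
    w -= lr * (-(1 - d_fake) * a * z)
    b -= lr * (-(1 - d_fake) * a)

# After training, generated samples should cluster near the real mean
fakes = [w * random.gauss(0.0, 1.0) + b for _ in range(1000)]
print(round(sum(fakes) / len(fakes), 2))
```

The key point the sketch illustrates: neither network is ever shown “the answer” directly. The generator improves only because the discriminator keeps getting better at catching it, and vice versa – the same arms race that, at scale, yields photorealistic fake faces.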
The process typically starts with training data. This involves collecting a large number of photos or videos of a target person from various angles, lighting conditions, and facial expressions. The more data available, the more realistic the output will be. For example, to create a deepfake video of a celebrity, developers might use clips from interviews, movies, or social media. Audio deepfakes require voice samples, which are used to model the unique pitch, cadence, and tone of someone’s speech.
Once the AI is trained, it can begin producing synthetic content. In video deepfakes, this often involves face-swapping, where the face of one person is digitally placed over another’s in a video. This isn’t just a static image overlay – it includes subtle movements, eye blinks, and facial expressions that make the fake seem real. In more advanced versions, the person’s mouth movements are modified to sync with new audio, a technique known as lip-syncing.
Audio deepfakes, on the other hand, use voice cloning technology to replicate someone’s voice. This can be as simple as generating a few spoken phrases, or as complex as full conversations. Some tools can now produce real-time voice transformations, making them even more difficult to detect.
While earlier versions of deepfake software required strong technical skills and powerful hardware, many tools today are relatively easy to use and open-source or web-based, making it easier for non-experts to experiment with this technology. Unfortunately, that also means the barrier to entry is lower for malicious actors.
What are deepfakes most often used for?
Fake news & misinformation - Deepfake videos are frequently used to spread false narratives or propaganda, especially involving public figures. Fabricated clips of personalities like Elon Musk, Joe Biden, and Volodymyr Zelensky have gone viral – misleading viewers into believing they made controversial statements or endorsed false claims.
Scams & identity theft - AI-generated voices and faces are increasingly used in fraud schemes, from CEO voice impersonation to fake investment pitches. Deepfakes can convincingly imitate someone’s identity to manipulate victims into sending money, clicking malicious links, or giving up sensitive information.
Nonconsensual pornography - This is one of the most troubling uses of deepfakes. AI tools are used to superimpose someone’s face onto explicit content without their consent – often targeting celebrities, influencers, even minors. These videos are sometimes used for harassment, blackmail, or bullying. The psychological impact on victims can be devastating.
Bypassing authentication - Deepfakes can mimic a person’s voice and appearance to bypass facial or voice recognition systems. In this way, cybercriminals can falsely authorize financial transactions, posing major threats to both individuals and organizations.
While deepfakes are often linked to misinformation and manipulation, not all uses are harmful. When applied responsibly, the technology can offer real benefits – especially in education, entertainment, and accessibility.
In classrooms, deepfakes can bring history to life by animating figures like Cleopatra or Einstein, making lessons more engaging for students. In film, they help recreate younger versions of actors or blend archival footage for a retro effect. Marketers use deepfakes to localize videos across languages without reshoots, while voice cloning supports people who’ve lost their ability to speak.
With clear labeling and ethical safeguards, deepfakes can be a creative tool rather than a threat – enhancing storytelling, learning, and inclusion.
How can you identify a deepfake?
Detecting a deepfake isn’t always easy, but there are visual, behavioral, and technical clues that can help. If you suspect you might be watching a deepfake video, look for the following tell-tale signs:
- Visual inconsistencies: Look for unnatural lighting, overly smooth skin textures, flickering edges, or blurred backgrounds.
- Facial anomalies: Misaligned eyes, distorted teeth, or unusual blinking patterns are common giveaways.
- Lip-sync mismatches: When the audio doesn’t quite match the mouth movements, it’s often a warning sign.
- Metadata manipulation: Examine a file’s creation and modification dates. Deepfake tools often alter or erase this metadata, leaving behind inconsistencies.
- Reverse search tools: Platforms like TinEye or Google Image Search can help you track down the original version of an image or video thumbnail. If a strikingly similar video exists elsewhere with different content, you’ve likely spotted a fake.
When in doubt, you might also try specialized tools – from free browser plug-ins such as DeepFake-o-Meter to AI detection platforms such as Microsoft Video Authenticator or OpenVINO by Intel – that can analyze content for deepfake artifacts.
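The metadata check from the list above can be started with nothing more than the standard library. The short sketch below (the filename `suspect.mp4` is hypothetical) reads the timestamps the filesystem has recorded for a file; a mismatch with the date a video claims to have been recorded is not proof of manipulation, but it is a reason to look closer.

```python
# Quick metadata sanity check using only the Python standard library.
# Note: st_ctime means "creation time" on Windows but "last metadata
# change" on Unix-like systems, so interpret it accordingly.
import datetime
import os

def describe_timestamps(path):
    """Return the timestamps and size the filesystem records for a file."""
    info = os.stat(path)
    return {
        "modified": datetime.datetime.fromtimestamp(info.st_mtime),
        "metadata_changed": datetime.datetime.fromtimestamp(info.st_ctime),
        "size_bytes": info.st_size,
    }

# Example usage with a placeholder file standing in for a real video:
with open("suspect.mp4", "wb") as f:
    f.write(b"\x00" * 16)             # placeholder bytes, not real video

for key, value in describe_timestamps("suspect.mp4").items():
    print(key, value)
```

For embedded metadata such as EXIF tags or container-level creation dates, a dedicated tool (for example, ExifTool) will reveal far more than filesystem timestamps alone.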
Is watching deepfakes illegal?
Generally speaking, watching a deepfake is not illegal unless the content itself is illegal. For example, deepfakes involving non-consensual explicit material, child exploitation, or criminal impersonation may be illegal to possess or share, depending on local laws.
The legal gray area lies more in creation and distribution of deepfakes. Laws are evolving to address malicious deepfakes, especially those used for harassment, fraud, or disinformation. Some countries and U.S. states have already introduced legislation penalizing specific uses of synthetic media.
Deepfake-related legislation
Take It Down Act (2025): A U.S. law requiring online platforms to quickly remove non-consensual intimate images, including AI-generated deepfakes. Platforms must act within 48 hours of a valid report. Violations can lead to fines or prison time – especially in cases involving minors.
EU AI Act (2024): The world’s first comprehensive AI regulation. It classifies AI systems by risk level, and sets strict rules for high-risk uses – like deepfakes. Synthetic content must be clearly labeled, and companies face fines for non-compliance. The goal: ensure AI is safe, transparent, and respects fundamental rights.
Best practices for navigating the world of deepfakes
As synthetic media becomes more widespread, it’s no longer enough to trust what we see or hear – we must actively combine awareness, tools, and responsibility. Whether you’re a casual viewer, a content creator, or part of an organization, these best practices can help you recognize and respond to deepfakes more effectively.
Strengthen media literacy
In an age where seeing is no longer believing, critical thinking is your first line of defense. Make a habit of questioning emotionally charged or sensational content, especially if it lacks context or attribution. Staying informed about how deepfakes work also makes them easier to spot – awareness is power.
Verify with fact-checkers
If something feels off, pause before you repost or click on an attached link in a video description. A quick check can prevent the spread of misinformation or falling for a scam. Use reverse image or video search tools like Google Images or TinEye to see if the content has appeared elsewhere – and in what context.
In addition, consult dedicated fact-checking platforms – such as Snopes or Reuters Fact Check – which exist specifically to help verify questionable claims or viral content.
These sites frequently investigate trending videos, altered images, and misleading headlines – and often provide side-by-side comparisons or detailed breakdowns of what’s real and what’s not.
Build safeguards within organizations
For companies, publishers, and media platforms, it’s no longer enough to assume content is real until proven otherwise. Proactive steps include:
- Integrating detection tools into publishing or review workflows
- Labeling synthetic or AI-generated content
- Providing staff with training on identifying deepfakes and following internal escalation protocols
HR departments, marketing teams, and legal units can all benefit from understanding the risks and creating clear playbooks for when manipulated media surfaces.
Use tools and demand better ones
Deepfake detection is not just about awareness – it’s also about using the right technical tools. Options range from everyday browser extensions to enterprise-grade AI platforms. Beyond using tools, it’s important to push for platform-level transparency. Advocate for policies that support:
- Digital watermarking of original content
- Clear labeling of synthetic media
- Provenance tracking (e.g. blockchain-based edit history)
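The provenance-tracking idea in the last bullet can be illustrated with a toy hash chain: each edit of a media file is recorded as a link whose hash covers both the new content and the previous link, so tampering with any past record is detectable. This is a simplified sketch of the concept, not a production provenance system (real standards such as C2PA use signed manifests).

```python
# Illustrative hash-chained edit history for a media file.
import hashlib
import json

def add_link(chain, content: bytes, note: str) -> None:
    """Append an edit record whose hash chains to the previous record."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {
        "note": note,
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "prev": prev_hash,
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)

def verify(chain) -> bool:
    """Re-derive every hash; any change to a past record breaks the chain."""
    prev = "0" * 64
    for record in chain:
        body = {k: record[k] for k in ("note", "content_sha256", "prev")}
        derived = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if record["prev"] != prev or derived != record["hash"]:
            return False
        prev = record["hash"]
    return True

history = []
add_link(history, b"original footage", "camera original")
add_link(history, b"color-graded footage", "color grade")
print(verify(history))            # True for an untampered history
```

Because every link's hash depends on the link before it, silently rewriting an early edit invalidates everything that follows – which is exactly the tamper-evidence that provenance standards aim to provide for synthetic media.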
As deepfakes become more sophisticated, so do the risks they pose – from phishing scams to misinformation and identity theft. Stay one step ahead with ESET HOME Security Premium. It offers features such as Antivirus and Antispyware, Anti-Phishing, Safe Banking & Browsing, and Browser Privacy & Security.
Deepfakes are no longer just a curiosity. They’re a powerful tool with real-world impact. As the line between real and fake continues to blur, staying informed is your best defense. Whether you’re spotting visual inconsistencies, verifying sources, or advocating for transparency, every small step helps protect the integrity of information.
With the right mix of awareness, tools, and critical thinking, we can navigate the synthetic future without losing sight of what’s real.
Expert tips & insights
“The capabilities of generative AI models are improving at an exponential pace. Every company developing artificial intelligence strives to capture the attention of the public and potential investors with new models capable of generating increasingly convincing content. As a result, even ordinary people now have access to tools capable of producing content that is difficult to distinguish from reality. This trend is set to accelerate further, and within months or, at most, a few years, even experienced users will find it truly challenging to recognize content created with AI. We should prepare for this era both in terms of technology and regulation.”
– Juraj Jánošík, Director of AI
Frequently asked questions
What exactly is a deepfake?
A deepfake is a piece of synthetic media – typically a video, an audio clip, or an image – that has been altered or entirely created using artificial intelligence.
Can deepfakes be detected?
Yes, to a degree. Advanced detection tools can spot known deepfake techniques with up to 98% accuracy when tested against existing datasets. However, the technology behind deepfakes continues to evolve, and newer or more refined methods can still evade current detection systems.
Is every AI-generated video considered a deepfake?
Not quite. While all deepfakes use AI, not all AI-generated videos are deepfakes. Some use more basic editing techniques that don’t involve deep learning but can still mislead viewers through simple cuts, reordering, or speed changes.
What should I do if I think I’ve come across a deepfake?
The best response is to pause before sharing. Then you can:
- Inspect file metadata if available
- Use reverse image/video search tools to find the source
- Check the content against reputable news outlets or fact-checking websites like Snopes or Reuters
How can I prevent my own media from being misused?
To make your content harder to manipulate:
- Share high-resolution originals that are harder to replicate convincingly
- Embed digital watermarks or signatures when possible
- Monitor where and how your content is used online