Deepfakes, fake news, and credibility: the new challenges of social media!

The rapid evolution of artificial intelligence and its integration into social media have given rise to new forms of information manipulation.

I spend a lot of time on social media for work, and honestly, I find it downright infuriating that these incredible technologies are, as always, being hijacked for malicious purposes! Deepfakes and fake news aren’t just harmless digital trends or generational quirks: they actively undermine public trust and the reliability of information. With ever more sophisticated algorithms, it’s becoming increasingly difficult to separate fact from fiction, and let’s be real, life is already confusing enough without this added chaos!

Technology: A Double-Edged Sword

I’m a huge fan of artificial intelligence. It has led to incredible advancements that make our lives easier and revolutionize countless fields:

  • In medicine, AI enables medical image analysis with unprecedented accuracy, helping detect diseases like cancer at an early stage.
  • In accessibility, it assists people with disabilities in navigating the digital world through voice recognition and AI-powered assistants.
  • AI even plays a role in environmental preservation, helping manage energy resources, monitor deforestation, and predict natural disasters—potentially preventing future humanitarian crises.
  • In the arts, while still controversial, AI tools push creative boundaries, generating music, visual art, and even interactive scripts.

However, as always, these breakthroughs can be misused.
What starts as lighthearted entertainment—like turning yourself into a celebrity for a funny WhatsApp video—can quickly become a powerful tool for large-scale public manipulation. This is where the line between innovation and misuse becomes dangerously blurry.

Deepfakes and fake news perfectly illustrate this duality: a fascinating technology in the hands of its creators can turn into a dangerous weapon when exploited for misinformation.

Deepfakes: Manipulating Image and Sound

Deepfake technology is based on Generative Adversarial Networks (GANs), which generate hyper-realistic images and videos. In simple terms, we can now make someone say things they never actually said—with mind-blowing precision. It’s impressive… until it becomes a serious problem!

How GANs and Advanced Models Work

A GAN (Generative Adversarial Network) is an AI system that learns through a competition between two neural networks:

  • The generator creates fake images, videos, or audio designed to look real.
  • The discriminator evaluates these creations against real data, trying to detect fakes.

This cat-and-mouse game makes deepfakes more and more convincing—and therefore, more dangerous.
But it doesn’t stop there: advanced models like transformers and super-resolution techniques now enhance deepfake quality, making detection even harder.
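
To make that cat-and-mouse game concrete, here is a minimal sketch of a GAN training loop in PyTorch. Everything in it is illustrative: the layer sizes, learning rates, and the random stand-in for “real” images are assumptions for demonstration, not pieces of any actual deepfake system.

```python
# Minimal, illustrative GAN training loop (PyTorch).
# All sizes, hyperparameters, and the random "real" data are assumptions.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784                     # e.g. 28x28 grayscale images, flattened

# The generator maps random noise to a fake sample.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),
)

# The discriminator scores how "real" a sample looks (1 = real, 0 = fake).
discriminator = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.rand(32, data_dim) * 2 - 1        # stand-in for a batch of real images
    fake = generator(torch.randn(32, latent_dim))

    # 1) Train the discriminator to tell real from fake.
    opt_d.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the (just updated) discriminator.
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    opt_g.step()
```

The key point is the alternation: every discriminator update makes the generator’s job harder, and every generator update makes the fakes a little more convincing.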

These more advanced models build on approaches such as:

  • Convolutional Neural Networks (CNNs): These analyze and replicate facial features, ensuring smooth transitions between real and fake faces.
  • Variational Autoencoders (VAEs): These refine textures and expressions, making faces appear more “human.”
  • Transformers (originally for language processing): Now applied to voice synthesis, they create artificial voices that mimic real human speech with eerie accuracy.
  • Enhanced Super-Resolution GANs (ESRGANs): These sharpen video/image resolution, removing telltale artifacts that previously exposed deepfakes.
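
To give one concrete example from that list, the autoencoding idea behind VAEs fits in a few lines. Again, this is a toy sketch under assumed sizes and layers, not a production face-swap model, and the training loss is left out.

```python
# Minimal variational autoencoder (VAE) sketch in PyTorch: encode -> sample -> decode.
# Architecture and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    def __init__(self, data_dim=784, latent_dim=32):
        super().__init__()
        self.encoder = nn.Linear(data_dim, 128)
        self.to_mu = nn.Linear(128, latent_dim)        # mean of the latent code
        self.to_logvar = nn.Linear(128, latent_dim)    # log-variance of the latent code
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, data_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        h = torch.relu(self.encoder(x))
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: sample a latent code while keeping gradients usable.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.decoder(z), mu, logvar

vae = TinyVAE()
reconstruction, mu, logvar = vae(torch.rand(8, 784))   # a fake batch of flattened face images
print(reconstruction.shape)                            # torch.Size([8, 784])
# Training would minimize reconstruction error plus a KL term; omitted here.
```

Classic face-swap deepfakes exploit exactly this compress-then-reconstruct step: a shared encoder learns both faces, and decoding with the other person’s decoder makes the output “wear” the second identity.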

Applications and Dangers of Deepfakes

Deepfakes are used across various industries, but some applications are particularly alarming:

  • Political manipulation: Fake videos have already been used to make world leaders say outrageous (or worse, dangerous) things. While some are just meme-worthy, others—like the 2022 deepfake of Zelensky urging Ukrainians to surrender—are far more sinister.
  • Media disinformation: Some outlets have aired fake interviews, and a 2018 MIT study of Twitter found that false news reaches people about six times faster than true stories.
  • Fraud and identity theft: Scammers are now bypassing facial and voice recognition security. You might have heard about the Brad Pitt deepfake scam, but even businesses aren’t safe: in 2019, a UK energy company reportedly lost around $243,000 after scammers used an AI-cloned voice of its parent company’s chief executive to order an urgent transfer!
  • Cyber harassment and fake intimate content: A terrifying trend is the rise of non-consensual deepfake pornography, used to intimidate, harass, or ruin reputations—even among teenagers. Thousands of fake explicit videos are circulating online, mostly targeting young women. This raises huge ethical and legal concerns, as these videos can be nearly undetectable and cause irreversible harm.

Fake News: Rapid Spread and Public Perception Manipulation

Fake news has always existed, but thanks to social media, it now spreads like wildfire. Why? Because platforms prioritize engagement over accuracy, and let’s face it—we humans tend to share first, fact-check later (we’ve all done it at least once).

How Fake News Goes Viral

  • Confirmation bias: People love information that reinforces their beliefs. If a story aligns with their views, they share it without thinking twice. Not great for objectivity… but hey, we’re only human!
  • Emotional impact: Fake news thrives on fear, anger, and outrage. The more shocking it is, the faster it spreads. Nuance and moderation? Not exactly clickbait material.
  • Bots and coordinated campaigns: Studies show thousands of fake accounts amplify fake news—especially during crises like COVID-19 or elections—manipulating public perception on a massive scale.
  • Engagement-driven algorithms: Social media platforms prioritize content that sparks reactions—meaning fake news (designed to be provocative) gets way more visibility than boring old facts.
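
To see why that tilt toward provocation matters, here is a toy ranking sketch in Python. It is not any platform’s real algorithm; the weights and the example posts are invented purely to illustrate the mechanism.

```python
# Toy illustration of engagement-driven ranking (not any platform's actual formula).
# Weights and example posts are made up to show the mechanism: reaction-bait wins.

def engagement_score(post):
    """Score a post purely on predicted reactions; accuracy never enters the formula."""
    return (1.0 * post["likes"]
            + 2.0 * post["comments"]     # arguments drive comments
            + 3.0 * post["shares"])      # outrage drives shares

posts = [
    {"title": "SHOCKING claim you won't believe!!!", "likes": 120, "comments": 300, "shares": 450},
    {"title": "Careful fact-check of that claim",    "likes": 200, "comments": 40,  "shares": 30},
]

for post in sorted(posts, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):7.1f}  {post['title']}")
# The provocative post tops the feed even though the fact-check exists.
```

With engagement as the only signal, nuance simply cannot compete; that is the structural advantage fake news enjoys.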

The Need for Vigilance and Critical Thinking

Faced with deepfakes and fake news, critical thinking is more essential than ever. Technology is evolving, but it’s up to us to use it responsibly.

Before you hit “share” on that shocking video flooding your feed… take two minutes to verify it. It could save you from spreading misinformation (and from getting dragged into endless comment-section debates).