Safer Internet Day: Telling What’s Real from What’s Fake Online

On Safer Internet Day, we ask an important question: how can you tell what’s real and what’s fake online?  

There’s plenty of fakery out there, due in large part to AI-generated content. And spotting the difference takes a bit of work nowadays. 

Taylor Swift showed us why back in January. More accurately, a Taylor Swift AI voice clone showed us why. Scammers combined old footage of Swift with phony AI-cloned audio that touted a free cookware giveaway. They went about it in a cagey way, using Le Creuset cookware as bait, a brand her fans know she loves.  

Of course, all people had to do was “answer a few questions” to get their “free” cookware. When some did, they wound up with stolen personal info. It’s one of many full-on identity theft scams with a bogus celebrity AI twist.  

This wasn’t the first time that scammers used AI to trick well-meaning people. Last December, AI voice-cloning tools mimicked singer Kelly Clarkson to sell weight-loss gummies. Over the summer, scammers posted other ads using the synthesized voice of Elon Musk. 

Meanwhile, more quietly yet no less damaging, we’ve seen a glut of AI-generated fakes flood our screens. They look more convincing than ever, as bad actors use AI tools to spin up fake videos, emails, texts, and images. They do it quickly and on the cheap, yet this fake content still has a polish to it. Much of it lacks the telltale signs of a fake, like poor spelling, grammar, and design.  

Another example of AI-generated fake content comes from a BBC report on disinformation being fed to young students. In it, reporters investigated several YouTube channels that use AI to make videos. The creators billed these channels as educational content for children, yet the investigators found them packed with falsehoods and flat-out conspiracy theories.  

This BBC report offers a prime example of deliberate disinformation, produced on a vast scale, passing itself off as the truth. It’s also one more example of how bad actors use AI, not for scams, but for spreading outright lies. 

Amid all these scams and disinformation floating around, going online can feel like playing a game of “true or false.” Quietly, and sometimes not so quietly, we find ourselves asking, “Is what I’m seeing and hearing real?”

AI has made answering that question tougher, for sure. Yet that’s changing. In fact, we’re now using AI to spot AI. As security professionals, we can use AI to help sniff out what’s real and what’s fake. Like a lie detector. 

We showcased that exact technology at the big CES tech show in Las Vegas earlier this year: our own Project Mockingbird, which spots AI-generated voices with better than 90% accuracy. Here’s a look at it in action when we ran it against the Taylor Swift scam video. As the red lines spike, that’s our AI technology calling out what’s fake … 

 

In addition to AI audio detection, we’re working on technology for image detection, video detection, and text detection as well — tools that will help us tell what’s real and what’s fake. It’s good to know technology like this is on the horizon. 

Yet above and beyond technology, there’s you. Your own ability to spot a fake. You have a lie detector of your own built right in. 

The quick questions that can help you spot AI fakes.  

Like Ferris Bueller said in the movies years ago, “Life moves pretty fast …” and that’s true of the internet too. The speed of life online and the nature of our otherwise very busy days make it tough to spot fakes. We’re in a rush, and we don’t always stop and think if what we’re seeing and hearing is real. Yet that’s what it takes. Stopping, and asking a few quick questions. 

As put forward by Common Sense Media, a handful of questions can help you sniff out what’s likely real and what’s likely false. As you read articles, watch videos, and so forth, you can ask yourself: 

  • Who made this? 
  • Who is the target audience? 
  • Does someone profit if you click on it? 
  • Who paid for this content? 
  • Who might benefit or be harmed by this message? 
  • What important info is left out of the message? 
  • Is this credible? Why or why not? 

Answering only a few of them can help you spot a scam. Or at least get a sense that a scam might be afoot. Let’s use the Taylor Swift video as an example. Asking just three questions tells you a lot.  

First, “what important info is left out?” 

The video mentions a “packaging error.” Really? What kind of error? And why would it lead Le Creuset to give away thousands of dollars’ worth of its cookware? Companies have ways of correcting errors like these. So, that seems suspicious. 

Second, “is this credible?” 

This one gets a little tricky. Yet watch the video closely. That first clip of Swift shows a much younger Swift compared to the other shots used later. We’re seeing Taylor Swift from her different “eras” throughout, stitched together in a slapdash way. With that, note how quick the cuts are. Likely the scammers wanted to hide the poor lip-synching job they did. That seems even more suspicious. 

Lastly, “who paid for this content?”  

OK, let’s say Le Creuset really did make a “packaging error.” Would they really put the time, effort, and money into an ad that features Taylor Swift? That would most certainly heap even more losses on those 3,000 “mispackaged” pieces of cookware. It doesn’t make sense. 

While these questions didn’t give definitive answers, they certainly raised several red flags. Everything about this sounds like a scam, thanks to asking a few quick questions and running the answers through your own internal lie detector. 

A safer internet calls for a combo of technology and a critical eye. 

So, how can you tell what’s real and what’s fake online? In the age of AI, it’ll get easier as new technologies that detect fakes roll out. Yet as with staying safe online in general, the other part of knowing what’s true and what’s false is you.   

Hopping online today calls for a critical eye now more than ever. Bad actors can cook up content with AI at rates unseen until now. And they create it to strike a nerve. To lure you into a scam, or to sway your thinking with disinformation. With that, content that riles you up, catches you by surprise, or excites you into action is content you should pause and think about.  

Asking a few questions can help you spot a fake or give you a sense that something about that content isn’t quite right, both of which can keep you safer online. 
