Something feels off. You may not be able to name it, but you've sensed it. The influencer's festival post looks a little too perfect. The celebrity pitching a crypto opportunity seems too good to be true. The "real person" endorsing a political candidate sounds a little too polished.
Your instinct isn't wrong. It's artificial intelligence.
We have crossed the threshold where photos, videos, voices and even live calls can no longer be assumed to be real. And this isn't coming – it's already here.
Last year, Americans lost a record $15.9 billion to fraud, according to the Federal Trade Commission. Much of that money was lost because people believed what they saw or heard.
Here are five specific ways this technology is being used against you right now — and what to do about each.
1. Fake influencers are already in your feed
During this year’s Coachella festival, social media was filled with gorgeous, perfectly lit posts from influencers who were clearly living their best lives in the California desert. The Verge reported that many of those “influencers” were never there.
Some of these AI-generated accounts were reportedly pulling in over $40,000 during the festival alone – through brand sponsorships and subscription revenue – without anyone even setting foot in the desert.
This isn't just a festival problem. The Instagram account of a woman named "Jessica Foster" – which showed her in military uniform alongside prominent political figures – had reached over a million followers before Instagram took it down in March 2026. According to reports by Fast Company and The Washington Post, it was entirely AI-generated.
Platform moderation is always reactive. By the time a fake account is removed, the money has been made and the audience has often been migrated to email lists or other platforms. The gap between what AI can produce and what moderators can catch is still huge.
Before following any new accounts, spend 60 seconds doing three things: run a reverse image search on the profile photo, watch a live video (not just polished reels), and check if they’ve ever responded to a comment. If all three come up blank, you’re probably following a machine.
2. Celebrity faces are being stolen to rob you
You've probably seen them. Elon Musk offering a cryptocurrency giveaway. Taylor Swift giving away cookware. MrBeast selling iPhones for $2. None of it was real – all deepfakes, all engineered to drain your account or steal your personal information.
These scams exploit a basic feature of human psychology: we're wired to trust familiar faces. When you see someone you recognize endorsing a product, your guard drops before your brain can catch up.
The consequences can be disastrous. According to The New York Times, an 82-year-old retiree invested more than $690,000 in a scheme built entirely on fraudulent deepfake videos of Elon Musk. The co-founder of deepfake monitoring company Sensity told the newspaper it may be the largest deepfake-powered scam on record.
According to research from McAfee, 1 in 5 people say they or someone they know has fallen for a deepfake scam in the past year.
The rule is simple: If a celebrity appears to be selling you something – especially high-return investments, crypto giveaways, or deeply discounted products – assume it's fake. Verify only through their official verified account. No legitimate giveaway requires you to send money first.
For more red flags, see “5 Celebrity Impersonation Scams and 7 Tips for Spotting Fake Content.”
3. The political content you are sharing may be completely fabricated
The New York Times reported in April 2026 that AI-generated "supporters" of President Donald Trump had spread widely on social media – many of them apparently reading from the same slightly stilted script. They weren't real people, and the grassroots enthusiasm wasn't real either.
This is not a partisan issue. In July 2025, reports emerged of attackers using a deepfake of Secretary of State Marco Rubio to try to manipulate government officials. A political opponent of Georgia Senator Jon Ossoff used fabricated videos showing the senator making statements he never made.
Research from the disinformation-tracking organization Grail found that more AI-generated political content appeared in 2025 than in the previous eight years combined.
The harm goes beyond any specific lie. When your feed fills with synthetic people expressing similar viewpoints, it shapes what you think everyone else believes. And once you believe you're in the majority, your doubts fade.
Before sharing any political video, take 10 seconds to find its original source. If it's gone viral but you can't trace it to a verified account or reputable news outlet, don't amplify it. Otherwise you're doing a bad actor's work for free.
4. Your voice can be cloned from 3 seconds of audio
This one hits differently because it's personal. According to research by cybersecurity firm McAfee, publicly available AI tools can replicate your voice with 85% accuracy using just three seconds of audio.
Think about how much of your voice is already out there – voicemail greetings, social media videos, even the quick "yes" you give when answering a suspicious call.
Scammers use these clones to call your parents, your children, your spouse – people who sound exactly like you. They will describe an accident, an arrest, a kidnapping. They will say the money needs to be transferred now.
The FBI logged $893 million in AI-related fraud losses in 2025, including these voice-clone "family emergency" scams. Older Americans accounted for $352 million of that total.
For a closer look at how scammers are specifically targeting seniors, see "Over 60? Beware of 3 New Scams Depleting Retiree Bank Accounts."
The fix costs nothing and takes five minutes. Pick a family safe word right now – something random that isn't your pet's name from Instagram.
If someone calls claiming to be a family member in distress, ask for the safe word before doing anything. If they can't produce it, hang up and call back on the number already saved in your phone.
We covered this tactic in depth in "This AI Scam's Tactics Mean Everyone Needs a Safeword in 2026."
5. Now even live video calls can be fake
This is the one that will keep you up at night. Hany Farid, a leading digital-forensics expert at UC Berkeley, recently said we are entering an era in which every participant on a video call can be synthesized in real time.
Not pre-recorded. Live, responding to you, reacting in the moment.
He says voice cloning has crossed the point of indistinguishability. The audio cues that used to expose fakes – slightly-off intonation, unnatural tempo – have largely disappeared.
The volume tells the story. Cybersecurity firm DeepStrike estimates that the number of deepfake videos online grew from about 500,000 in 2023 to nearly 8 million in 2025 – a roughly sixteenfold increase in two years.
A 2025 iProov study found that essentially no one – only 1 in 1,000 people tested – could correctly identify every piece of fake and real media they were shown. Not one percent. One-tenth of one percent.
If you’re on a video call and someone is pressuring you to make a financial decision or transfer money, end the call.
Call the person back through the number you have already saved. Or walk straight into your bank. No legitimate organization finalizes anything important over a cold, unsolicited video call.
The bottom line
Seeing used to be believing. That era is over.
According to the FTC, nearly 30% of Americans who lost money to fraud in 2025 were first contacted through social media, with total social media scam losses reaching $2.1 billion.
And according to the Identity Theft Resource Center, social media account takeover is now the No. 1 threat to the general public. The victims weren't careless people. They were ordinary people who believed what they saw.
The technology is only getting better. But protecting yourself doesn't require any technology.
Slow down. Run a reverse image search. Set up a family safe word. Call people back on numbers you already have. Treat anything that makes you feel urgent, angry, or incredibly lucky with serious suspicion.
