Though many specialists have warned of the dangers of AI, few of us would have predicted that AI would be used to scam a parent into paying a ransom for their child. That’s precisely what happened to Jennifer DeStefano in Arizona. A scammer had recreated her daughter’s voice with the help of artificial intelligence so convincingly that she didn’t second-guess anything. Granted, it’s quite hard to think critically when you hear your child’s voice and then hear a man demand $500,000, threatening to do the unthinkable otherwise.
Most of us think of a Nigerian prince or princess trapped in their country when we think of a scam, but that’s quite an old one. The thing about scammers is that they’re constantly thinking of new ways to exploit innocent people. That’s why parents should be aware of the newest types of scams and how to avoid them. This way, they can teach their kids about scams and ensure their safety as well.
The Newest AI Scams You Should Spot Right Away
The newest scams use the power of AI to their advantage. Many of them are a spin on older scams we all know (and love to hear about, as long as we’re not the victims).
Voice Cloning Scams
AI can be used to mimic a person’s voice quite easily nowadays. In 2023, Microsoft unveiled an AI system that can replicate a person’s voice after listening to just three seconds of authentic audio. Voice cloning scams pose serious security risks, especially for governmental and banking institutions.
The Federal Trade Commission has given consumers some advice on how to deal with such scams. The first thing you should do is hang up and call the person whose voice you heard directly, using a number you know is theirs, to verify their story. If they’ve asked you for money, consider how they want you to pay. Scammers often insist on methods that make it difficult to get the money back, such as wire transfers, cryptocurrency, or gift cards. If that’s the case, don’t send anything until you’ve reached the person directly or confirmed their situation through friends or family members.
Phishing Scams
AI systems such as ChatGPT can be used to generate polished texts or emails, and it can be very difficult to tell whether a human wrote them. Phishing scams usually ask for personal information such as payment details or login credentials. Sometimes these emails can infect your device through malicious links.
Here are some tips you can use to spot phishing emails:
- Check the sender’s email address and domain name. Typos and unusual characters signal fishy business. For example, a legitimate email from Google comes from an address ending in “@google.com,” not a lookalike such as “@goog1e.com” (see the short sketch after this list).
- Some phrases are just too obvious, such as “You’ve won a prize” or “Your account is being suspended.”
- Grammatical errors in the body of the email should also arouse suspicion, as should requests for personal or financial information such as passwords or verification codes. No legitimate company should ask you for such details.
- Don’t click on links that look unfamiliar or suspicious, or that point to domains that don’t match the sender’s website.
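For readers who want to automate a quick first pass over a suspicious message, here is a minimal Python sketch of the checklist above. It is an illustration, not a real spam filter: the trusted-domain list, the sample email, and the helper names are assumptions made up for this example.

```python
# A minimal sketch of the manual phishing checks above.
# The trusted-domain list and the sample email are purely illustrative.

from urllib.parse import urlparse

TRUSTED_DOMAINS = {"google.com", "paypal.com", "yourbank.com"}  # hypothetical allow-list

URGENT_PHRASES = (
    "you've won a prize",
    "your account is being suspended",
)

def sender_domain(address: str) -> str:
    """Return the part after '@', lower-cased (e.g. 'google.com')."""
    return address.rsplit("@", 1)[-1].lower()

def looks_suspicious(sender: str, body: str, links: list[str]) -> list[str]:
    """Return a list of red flags found in a single email."""
    flags = []
    domain = sender_domain(sender)

    # 1. Sender domain isn't on the trusted list (lookalikes such as 'goog1e.com' fail here).
    if domain not in TRUSTED_DOMAINS:
        flags.append(f"unfamiliar sender domain: {domain}")

    # 2. Classic pressure phrases in the body.
    lowered = body.lower()
    for phrase in URGENT_PHRASES:
        if phrase in lowered:
            flags.append(f"pressure phrase: '{phrase}'")

    # 3. Links whose domain doesn't match the sender's domain.
    for link in links:
        link_domain = urlparse(link).netloc.lower()
        if link_domain != domain and not link_domain.endswith("." + domain):
            flags.append(f"link domain mismatch: {link_domain}")

    return flags

if __name__ == "__main__":
    # Made-up phishing email that trips all three checks.
    for flag in looks_suspicious(
        sender="security@goog1e.com",
        body="You've won a prize! Click below to claim it.",
        links=["http://claim-prizes-now.example/login"],
    ):
        print("RED FLAG:", flag)
```

Running it on the made-up email prints one red flag per failed check. Real email providers and security tools do far more than this, but the logic mirrors the manual checks in the list above.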
Chatbot Scams
AI can power chatbots that imitate humans on various platforms, such as dating apps and social media. Trained for the purpose, these chatbots can trick people into clicking on malicious links, sending money, or giving up personal information.
Here are some tips on how to find out if you’re talking to a chatbot:
- Chatbots usually send messages at unnatural speeds and are online 24/7.
- Chatbots are usually anonymous, providing no bio, profile picture, or other personal information.
- Chatbots sometimes respond with irrelevant and unsympathetic answers.
- Chatbots might send malicious links without any context.
Conclusion
With the rapid advancement of artificial intelligence, reliably differentiating between humans and AI is becoming more difficult. That works to the advantage of scammers, who usually target hundreds of people at a time and depend on gaining trust quickly. That’s why parents should learn about AI scams and how to spot them so they can keep their children and themselves safe.