Con Watch: Artificial Intelligence Is Making Scams Worse

Steve Weisman is a lawyer, college professor, author, and one of the country’s leading experts in cybersecurity, identity theft, and scams. See Steve’s other Con Watch articles.

Artificial intelligence is a tremendous tool that is already being used for many positive purposes, including improved medical diagnostics, customer service, quality control in manufacturing, self-driving cars, inventory management for retail stores, smart-grid management in the energy sector, virtual assistants, and more. But like any tool, it can also be used in harmful ways.

In just about every scam currently being perpetrated, criminals are using AI to make their deceptions more effective and convincing.

Phone Calls

AI has created new opportunities for phone scams. It can be used to remove a foreign accent from a caller's voice, making the scammer sound like a local representative. AI is also being used to generate robocall scripts that carry on more persuasive conversations with targeted victims.

Perhaps most alarming, AI voice-cloning technology is being used in the family emergency scam. In this age-old con, a grandparent gets a late-night call from someone posing as a family member with a made-up emergency that requires money to be sent immediately. Now, with readily available voice-cloning software, a scammer needs as little as 30 seconds of a person talking on YouTube, TikTok, or Instagram to create an AI version of that person's voice, which is then used to call the grandparent or another family member.

To avoid this scam, every family should agree on a code word, known only to family members, that can be used to verify a relative's identity in the event of an emergency.

One creative tactic from the Federal Trade Commission (FTC) is its Voice Cloning Challenge, which promotes the development of new strategies to protect people from AI voice-cloning scams. The FTC is accepting submissions of strategies for preventing or detecting voice cloning by unauthorized users, and the winner will receive $25,000. This is the fifth time the FTC has offered a cash-prize challenge to address such problems; past challenges included uncovering security vulnerabilities in the Internet of Things and creating defenses against robocalls.

AI is also being used to battle those robocalls. Machine-learning algorithms can learn to recognize patterns in robocalls and then block calls that match those patterns. In addition, natural language processing (NLP) can analyze the content of a robocall and block it if the message looks like a scam. AI can even help combat caller-ID spoofing by analyzing the caller's true phone number and blocking the call if spoofing is identified.
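
For readers curious about how such pattern recognition works, here is a minimal sketch of a text classifier that flags robocall transcripts. The transcripts, labels, and model below are illustrative assumptions for this column, not the actual systems that phone carriers deploy:

```python
# A toy robocall filter: learn word patterns from labeled transcripts,
# then block new calls that match them. Training data is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical transcripts: 1 = robocall, 0 = legitimate call
transcripts = [
    "your car warranty is about to expire press one now",
    "you owe back taxes pay immediately with gift cards",
    "final notice your social security number has been suspended",
    "hi mom just calling to confirm dinner on sunday",
    "this is dr lee's office reminding you of your appointment",
    "hey it's sam from the book club about next week's meeting",
]
labels = [1, 1, 1, 0, 0, 0]

# TF-IDF turns each transcript into word-frequency features;
# logistic regression learns which words signal a scam call.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(transcripts, labels)

# Score a new call and block it if it matches robocall patterns
new_call = "act now your warranty expires today press one"
if model.predict([new_call])[0] == 1:
    print("Blocked: matches known robocall patterns")
```

Real systems train on millions of calls and combine the transcript with signals such as call volume and number reputation, but the underlying idea is the same: learn the patterns, then block what matches them.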

Emails

Socially engineered spear-phishing emails are now far more dangerous because of AI. Scammers can create more sophisticated and convincing emails that are more likely to persuade a targeted victim to provide personal information that can lead to identity theft, to click on a link that downloads dangerous malware, or to fall for a scam outright. In the past, phishing emails often lacked proper grammar, syntax, or spelling. Now, however, AI has solved that problem for the scammers, making phishing emails far more difficult to recognize.

Fortunately, AI can also be an effective tool in combating these enhanced emails. Machine-learning algorithms can analyze vast amounts of data to identify patterns and trends associated with scams. These algorithms can not only recognize indications of spear phishing but also continually learn, adapt, and predict new forms of fraudulent emails.
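
As a rough illustration of what such a filter looks for, the sketch below checks an email for a few classic spear-phishing indicators. The headers, phrases, and rules are hypothetical examples for this column, not a real mail filter:

```python
# Check a simple plain-text email for common phishing indicators.
# The phrase list and rules here are illustrative assumptions.
import re
from email import message_from_string

SUSPICIOUS_PHRASES = ["verify your account", "urgent action required",
                      "confirm your password", "wire transfer"]

def phishing_indicators(raw_email: str) -> list[str]:
    msg = message_from_string(raw_email)  # assumes a non-multipart message
    findings = []
    sender = msg.get("From", "")
    reply_to = msg.get("Reply-To", "")
    body = msg.get_payload()

    # Indicator 1: Reply-To domain differs from the From domain
    from_domain = sender.split("@")[-1].strip(">")
    reply_domain = reply_to.split("@")[-1].strip(">")
    if reply_to and reply_domain != from_domain:
        findings.append("Reply-To domain differs from sender domain")

    # Indicator 2: urgency or credential phrases common in phishing
    for phrase in SUSPICIOUS_PHRASES:
        if phrase in body.lower():
            findings.append(f"Contains phishing phrase: {phrase!r}")

    # Indicator 3: a link that points to a bare IP address
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", body):
        findings.append("Link points to a bare IP address")
    return findings

# A made-up example message for demonstration
raw = """From: IT Support <support@yourbank.com>
Reply-To: helpdesk@mail-verify.example
Subject: Account notice

Urgent action required: verify your account at http://192.0.2.7/login
"""
print(phishing_indicators(raw))
```

Modern filters layer machine learning on top of rules like these, letting them adapt as scammers change their wording.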

Social Media

Scammers have always used social media as a trusted delivery system for scams, mining posts for personal information they could leverage against their victims. With today's advanced tools, criminals can use AI to set up social media bots: automated software applications programmed to appear to be real people. In the past, the lack of sophistication in some bots made them easy to identify, but AI now enables scammers to create large numbers of believable bots that promote numerous scams, particularly those involving cryptocurrencies. Gathering personal information through social media also used to be a time-consuming effort; with AI, vast amounts of information can be harvested quickly and simply, then used to craft effective, personalized messages.

Dating Apps

Finally, crooks can use AI to create fake profiles on multiple dating platforms, complete with effective, grammatically correct biographies. Scammers can also use AI to generate convincing profile photographs and even deepfakes.

Much work remains to be done to strengthen defenses against AI-enhanced scams, but just as AI has made scams easier to perpetrate, it also holds promise for helping us avoid them.
