Extortion and spokesperson deepfakes
Deepfake fraud, the creation of highly realistic but fabricated video and audio, is one of the most alarming developments in AI-driven fraud. According to the FBI, criminals use deepfake images for extortion: they create sexually explicit photos of a victim, demand a ransom, and threaten to release the fabricated images. This type of extortion often targets young adults.
Criminals increasingly use deepfakes to impersonate well-known figures, making scams more convincing and far-reaching. For example, a deepfake video of Elon Musk promoting a cryptocurrency scam circulated widely online. The video repurposed footage from a real TED Talk featuring Musk, deceiving viewers into believing the endorsement was legitimate and causing financial losses for some investors.
In another recent example, a retiree thought he had found a way to secure a better future for his family when he came across a video of Elon Musk endorsing a promising investment. Convinced by the deepfake pitch, he opened an account with an initial deposit of $248. Over the following weeks he invested everything he had, more than $690,000, draining his entire retirement savings. What began as a hopeful investment turned into a devastating financial loss, illustrating just how effective today's AI-powered fraud can be.
High-profile incidents like these highlight the power of deepfakes, but the threat isn't limited to celebrities. Recently, a business email compromise scam lured a Hong Kong finance officer into transferring $25 million to criminals who had used deepfake technology to impersonate his company's Chief Financial Officer during a video call. The scam was so convincing that it bypassed multiple security checks, demonstrating the serious risks deepfake fraud poses to financial institutions.
Voice cloning with AI
Voice cloning is another AI capability that can make traditional scams more effective. An Arizona mother answered a call from an unfamiliar number, only to hear what she believed was her 15-year-old daughter in distress, supposedly held by kidnappers demanding $50,000. The voice sounded exactly like her daughter's, but it had been generated by AI. Fortunately, the mother's concerned friends quickly contacted 911 and her husband, and it was soon confirmed that her daughter was safe.
Similarly, a Taylor Swift voice clone was used in an advertising scam in which the pop star appeared to endorse a cookware giveaway. Fans were directed to a fake website where they were charged for nonexistent products. The scam exploited Swift's popularity and her well-known fondness for the cookware brand, making it particularly effective at deceiving her followers.
AI-driven phishing
Phishing has long been a significant threat to financial institutions, but AI is taking these scams to a new level. In a recent study reported in the Harvard Business Review, 60% of participants fell victim to AI-generated phishing emails. AI has bolstered phishing tactics, enabling scammers to pull in over $2 billion in 2022 alone, and since the arrival of ChatGPT in late 2022, malicious phishing emails have surged by a staggering 1,265%, according to cybersecurity firm SlashNext. By analyzing a target's communication patterns, AI can generate phishing emails that closely mimic the writing style of trusted colleagues or companies, making them extremely convincing. Both the quality and the quantity of AI phishing are expected to increase drastically over the coming years.
Synthetic identity fraud
According to Forbes, identity theft has become a major concern, impacting more than 42 million people and accounting for around $52 billion in losses in the U.S. alone. With the emergence of generative AI, the banking sector and other businesses face a newer, more complex risk: synthetic identity fraud, in which criminals combine real and fabricated information to create new, fictional identities. The convenience of digital banking and other online services has made personal details more easily accessible. Fraudsters use AI to generate realistic names, Social Security numbers, and other identifying details that appear legitimate but do not correspond to actual people. These synthetic identities are then used to open bank accounts, apply for credit, or commit insurance fraud. AI streamlines the process, increasing both the volume and the speed of identity fraud.