Original: Jeff Kauflin, Emily Mason
Source: Forbes
Original Title: "How AI Is Fueling Financial Fraud And Making It Harder To Detect"
From "The Hunt", "The Gambler" to the recently released "The Parrot Killing", movies related to fraud are providing an effective means of propaganda for China's crime control, but with the popularization of new AI technology, future challenges may become more severe. For the United States across the ocean, anti-fraud may become a new consensus with China in uncertain times. In 2020 alone, a survey in the United States showed that as many as 56 million people across the country have been scammed by phone, which is about one-sixth of the U.S. population, and surprisingly, 17% of Americans have been scammed more than once.
In the world of telecom fraud, India is to the United States what Myanmar is to China. If anything, India offers an even more favorable environment for the fraud industry than Myanmar does. For one thing, English is an official and widely spoken language in India, giving the country a "natural advantage" in talent; for another, since 2000 India has been the go-to overseas outsourcing destination for major multinational corporations' phone-based customer service, spawning new phone scams delivered in the same accent but with a very different script. With the new wave of AI technology, however, India's "throne" in phone fraud may soon be usurped.
From smooth-flowing scam text messages to cloned voices and face-swapping in videos, generative artificial intelligence is providing fraudsters with powerful new weapons.
"If you have done business with Chase Bank and received this message in an email or text, you might think it's real. It sounds professional, with no strange wording, grammatical errors, or odd salutations, all of which were common in phishing messages we used to receive."
That is no surprise: the message was generated by ChatGPT, the AI chatbot released by tech giant OpenAI at the end of last year. All we had to do was prompt ChatGPT with: "Email someone telling them that Chase Bank owes them a $2,000 refund, and ask them to call 1-800-953-XXXX to get it." (ChatGPT will only comply if given a complete number, but obviously we will not publish one here.)
Soups Ranjan, co-founder and CEO of the San Francisco anti-fraud startup Sardine, said, "Scammers' language is now nearly perfect, just like a native speaker's." An anti-fraud director at a U.S. digital bank, speaking anonymously, confirmed that more and more of the bank's customers are being scammed because "the wording of the text messages they receive is almost flawless." (To avoid becoming a victim, see the five tips at the end of this article.)
In this new world of generative artificial intelligence, deep-learning models that can produce content based on the information they have ingested make it easier than ever for people with malicious intent to generate text, audio, and even video. That output can deceive not only individual victims but also the programs now used to thwart fraud. In this respect, AI is not unique: bad actors have long been early adopters of new technology while law enforcement scrambles to catch up. As early as 1989, for example, Forbes exposed how thieves were using ordinary computers and laser printers to forge checks and deceive banks, which at the time took no special measures to detect the counterfeits.
The Rise of AI-Fueled Online Fraud
U.S. consumers reported losing a record $8.8 billion to fraud last year, according to the Federal Trade Commission (FTC), a figure that does not include unreported losses.
Now, generative artificial intelligence is posing a threat and may ultimately render the most advanced anti-fraud measures obsolete, such as voice authentication and even "liveness checks" designed to match real-time images with recorded images.
Synchrony, one of the largest credit card issuers in the United States with 70 million active accounts, has seen this trend firsthand. Kenneth Williams, a senior vice president at Synchrony, said in an email to Forbes, "We often see people using deepfaked images and videos for identity verification, and we can be confident that they are created using generative artificial intelligence."
A June 2023 survey of 650 cybersecurity experts by Deep Instinct, a New York cybersecurity company, found that three-quarters of respondents had observed an increase in online fraud over the past year, and "85% of respondents attributed this increase to bad actors using generative artificial intelligence."
The Federal Trade Commission likewise reported that U.S. consumers lost $8.8 billion to fraud in 2022, up more than 40% from 2021. The largest losses came from investment scams, but impersonation scams were the most common, an ominous sign given that AI stands to make those scams even more convincing.
Criminals can use generative artificial intelligence in all sorts of dazzling ways. If you post often on social media or anywhere else online, they can have an AI model write in your style, then text your grandparents begging them to send money to get you out of trouble. Even more frightening, with a brief voice sample of a child they can call the child's parents, impersonate the child, claim to have been kidnapped, and demand a ransom. That is what happened to Jennifer DeStefano, an Arizona mother of four, as she testified before Congress in June of this year.
It is not just parents and grandparents; businesses have become targets too. Criminals can impersonate suppliers and send accountants what look like genuine emails claiming that payment is urgently needed, attaching payment instructions for bank accounts under the criminals' control. Ranjan, the Sardine CEO, said many of Sardine's fintech startup clients have fallen into these traps, losing hundreds of thousands of dollars.
Those losses are a drop in the bucket next to the $35 million a Japanese company lost in 2020 after a company director's voice was cloned (Forbes was the first to report on that elaborately planned scam at the time). But with similar cases now growing more frequent, that earlier case looks like a mere trailer: the AI tools for writing, voice imitation, and video manipulation are rapidly becoming more capable, more accessible, and cheaper for ordinary fraudsters. Rick Song, co-founder and CEO of the anti-fraud company Persona, said that creating a high-quality deepfake video used to require hundreds or thousands of photos of the target; now a handful will do. (Yes, you can fake a video without real footage to work from, but it is obviously easier if you have some.)
Deepfake Traps and the Danger of Voice Cloning in Phone Fraud
Just as other industries are applying artificial intelligence to their businesses, scammers are also using generative artificial intelligence models released by tech giants to create ready-made tools, such as FraudGPT and WormGPT.
In a YouTube video released in January, the world's richest man, Elon Musk, appeared to pitch viewers on the latest cryptocurrency investment opportunity: he claimed Tesla was sponsoring a $100 million giveaway that promised to double participants' cryptocurrency investments, whether in Bitcoin, Ethereum, Dogecoin, or other currencies. "I know we're all gathered here for a common purpose. Now we're hosting a live event where every cryptocurrency owner can increase their income," a low-resolution Musk said on stage. "Yes, you heard that right, I'm hosting a large cryptocurrency event for SpaceX."
Yes, this video was a deepfake—the scammers used his speech about SpaceX's reusable spacecraft project in February 2022 to mimic his image and voice. YouTube has taken down the video, but anyone who sent cryptocurrency to the address in the video before that would almost certainly have suffered financial losses. Musk is a prime target for fake videos because there are countless audio samples of him online to support AI voice cloning, but now almost anyone can be impersonated.
Earlier this year, 93-year-old Larry Leonard, who lives in a retirement community in southern Florida, got a call on his home landline. His wife answered and, a minute later, handed him the phone. The voice on the other end sounded exactly like his 27-year-old grandson and said he had hit a woman with his truck and was locked up in jail. Leonard noticed that the caller said "grandpa" instead of the "papa" his grandson usually used, but the voice was a perfect match, and his grandson really does drive a truck, so he set his doubts aside. When Leonard said he would call the grandson's parents, the caller hung up. Leonard soon learned that his grandson was safe, and that the whole story, and the voice telling it, was fake.
In an interview with Forbes, Leonard said, "They were able to capture his voice, intonation, and tone accurately, which is both scary and surprising to me. There were no pauses between their sentences or words, indicating that these words were read from a machine or program, but they sounded real."
The elderly are frequent targets of such scams, but all of us now need to treat incoming calls with suspicion, even when they come from numbers that look familiar, such as a neighbor's. "More and more, we can't trust the calls we receive, because scammers are spoofing phone numbers," lamented Kathy Stokes, director of fraud prevention programs at AARP, a lobbying and services organization with nearly 38 million members aged 50 and older. "We can't trust the emails we receive, and we can't trust the texts we receive. So we're being shut out of the typical ways we communicate with each other."
Another ominous development: even new security measures are under threat. Vanguard Group, the investment giant serving more than 50 million investors, gives clients the option of verifying their identity over the phone by voice rather than by answering security questions. "Your voice is as unique as your fingerprint," the company explained in a November 2021 video urging clients to sign up for voice verification. But advances in voice cloning suggest companies need to rethink the practice. Ranjan of Sardine said he has seen people successfully use voice cloning to pass bank identity verification and gain access to accounts. A Vanguard spokesperson declined to comment on what steps the company might take to guard against advances in voice cloning.
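Voiceprint systems like this ultimately reduce to comparing numerical "embeddings" of recorded speech, which is exactly why cloning threatens them. Below is a minimal, illustrative sketch of that comparison in Python; extract_voice_embedding is a hypothetical stand-in for a real speaker-recognition model, not Vanguard's or any vendor's actual system, and the threshold is an assumption.

```python
import numpy as np

def extract_voice_embedding(audio: np.ndarray, sample_rate: int) -> np.ndarray:
    """Hypothetical stand-in: map an utterance to a unit-length speaker embedding."""
    raise NotImplementedError  # a real system would wrap a trained speaker model

def is_same_speaker(enrolled: np.ndarray, attempt: np.ndarray,
                    threshold: float = 0.75) -> bool:
    # Cosine similarity of unit-length embeddings: 1.0 is identical, 0.0 unrelated.
    return float(np.dot(enrolled, attempt)) >= threshold
```

The structural weakness is visible in the last line: a sufficiently good clone of the enrolled voice can score above the threshold too, because the check verifies the voice, not the person behind it.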
Small businesses (and large ones too) that pay bills or move money through informal procedures are also easy marks for bad actors. Fraudsters have long emailed fake invoices that appear to come from suppliers, demanding payment.
Now, using widely available AI tools, scammers can call company employees with cloned versions of executives' voices to approve sham transactions, or coax employees into disclosing sensitive data in "vishing," or voice-phishing, attacks. "When it comes to impersonating executives for high-value fraud, it's just too good, and it's a very real threat," said Rick Song, CEO of Persona, who described it as his "biggest fear in the voice area."
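On the email side, one concrete defensive habit for finance teams is to check whether a supposed supplier's domain enforces email authentication before trusting an invoice. The sketch below uses the open-source dnspython package to read a domain's DMARC policy; the supplier domain is a placeholder, and this is one modest signal, not a complete defense (lookalike domains and compromised mailboxes will still pass).

```python
import dns.resolver  # pip install dnspython

def dmarc_policy(domain: str) -> str | None:
    """Return a domain's published DMARC policy ('reject', 'quarantine',
    'none'), or None if no DMARC record exists."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for rdata in answers:
        record = rdata.to_text().strip('"')
        if record.startswith("v=DMARC1"):
            for tag in record.split(";"):
                key, _, value = tag.strip().partition("=")
                if key == "p":
                    return value
    return None

# Example: treat an invoice more skeptically if the sender's domain does not
# enforce DMARC, since its From address is easier to spoof outright.
if dmarc_policy("example-supplier.com") not in ("reject", "quarantine"):
    print("Sender domain does not enforce DMARC; verify the invoice by phone.")
```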
Can AI Fool Anti-Fraud Experts?
Criminals are increasingly using generative artificial intelligence to fool anti-fraud experts—even as these experts and tech companies play the roles of armed guards and armored trucks in today's digital financial system.
A core function of these companies is verifying consumers' identities to protect financial institutions and their customers from losses. One method anti-fraud companies such as Socure, Mitek, and Onfido use to verify identity is the "liveness check": they have you take a selfie photo or video, then match your face against the image on the ID you are also required to submit.
Knowing how the system works, thieves buy photos of real driver's licenses on the dark web, then use face-swapping programs, tools that keep getting cheaper and more accessible, to overlay the license holder's face onto their own. They can then talk and move their head behind that borrowed digital face, improving their odds of passing a liveness check.
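To make that attack surface concrete, here is a minimal sketch of the selfie-to-ID flow described above. Both helper functions are hypothetical stand-ins, not the API of Socure, Mitek, Onfido, or any real vendor; the point is simply that a face swap attacks the similarity test, while replayed or injected video attacks the liveness gate.

```python
import numpy as np

def face_embedding(image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in: return a unit-length embedding of the face in an image."""
    raise NotImplementedError  # would wrap a real face-recognition model

def shows_liveness(selfie_frames: list[np.ndarray]) -> bool:
    """Hypothetical stand-in: look for blink and head-motion cues across frames."""
    raise NotImplementedError

def verify_identity(id_photo: np.ndarray, selfie_frames: list[np.ndarray],
                    threshold: float = 0.6) -> bool:
    # Gate 1: the selfie video must show signs of a live subject in front of
    # the camera, which replayed or injected footage tries to fake.
    if not shows_liveness(selfie_frames):
        return False
    # Gate 2: the selfie face must match the face on the submitted ID, which
    # a face swap over a stolen license photo tries to fake.
    similarity = float(face_embedding(id_photo) @ face_embedding(selfie_frames[0]))
    return similarity >= threshold  # cosine similarity; threshold is illustrative
```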
Song said, "The number of fake faces generated by AI has grown quite a bit, and these fake faces are of high quality, and automated fraud attacks generated for liveness checks are also growing." He said the surge in fraud varies by industry, but for some industries, "what we're seeing might be ten times what it was last year." Fintech and cryptocurrency companies are particularly vulnerable to such attacks.
Anti-fraud experts told Forbes that they suspect the anti-fraud metrics of well-known identity verification providers (such as Socure and Mitek) have declined as a result. Johnny Ayers, CEO of Socure, insists that "this is absolutely not true," and says their new models introduced in the past few months have increased the capture rate of the top 2% most risky identity fraud by 14%. However, he acknowledges that some clients have been slow to adopt Socure's new models, which may affect their performance. Ayers said, "One of our clients is a top-three bank in the U.S., and they are now four versions behind."
Mitek declined to comment specifically on the anti-fraud performance metrics of its models, but Chris Briggs, Senior Vice President at Mitek, said that if a particular model was developed 18 months ago, "then yes, you can say the old model's anti-fraud performance is not as good as the new model. Over time, Mitek will continue to train and retrain models using real-life data flows and lab data."
JPMorgan Chase, Bank of America, and Wells Fargo all declined to comment on the challenges they face from generative AI fraud. Chime, the largest digital bank in the U.S. and one that has wrestled with significant fraud problems in the past, said through a spokesperson that it has not yet seen an uptick in fraud tied to generative artificial intelligence.
The masterminds behind financial fraud today range from lone criminals to sophisticated organizations of dozens or even hundreds of people. The largest rings, like companies, have multi-layered organizational structures and senior technical staff, including data scientists.
Ranjan said, "They all have their own command and control centers." Some gang members are responsible for casting the bait, sending phishing emails or making phone calls. If someone bites, another accomplice takes over, posing as a bank branch manager and trying to persuade the victim to move money out of their account. Another key step: the scammers often ask the victim to install a remote-control program such as TeamViewer or Citrix so they can take over the victim's computer. "They can completely black out your computer," Ranjan said. "Then the scammer will make more transactions (with your money) and withdraw the funds to another site under their control." A common trick, used especially on the elderly, is to claim that the victim's account has already been taken over by scammers and that the victim's cooperation is needed to recover the funds.
OpenAI and Meta's Different Strategies
None of the fraud described above necessarily requires artificial intelligence, but AI tools make scammers' schemes more efficient and more believable.
OpenAI has tried to build safeguards to keep people from using ChatGPT for fraud. For example, if a user asks ChatGPT to draft an email requesting someone's bank account number, it refuses: "I'm sorry, but I can't assist with that request." Even so, using ChatGPT for fraud remains easy.
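Services built on top of these models can also add their own screening layer rather than relying solely on the model's built-in refusals. As one illustration, OpenAI exposes a separate moderation endpoint; a minimal sketch with the openai Python SDK (v1-style client) might look like the following, with the caveat that moderation targets specific abuse categories and a plausible-sounding refund email may well slip through:

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def is_flagged(text: str) -> bool:
    """Ask OpenAI's moderation endpoint whether text violates its usage policies."""
    response = client.moderations.create(input=text)
    return response.results[0].flagged

# Example: screen a user-supplied prompt before forwarding it to a chat model.
prompt = "Email someone that Chase Bank owes them a $2,000 refund..."
if is_flagged(prompt):
    print("Prompt rejected by the moderation layer.")
```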
OpenAI declined to comment on this article, directing us to a blog post the company published in March 2022, which stated, "Responsible deployment is not a one-size-fits-all solution, so we attempt to understand the limitations of our models and potential misuse at every stage of development and deployment, and address the issues."
Compared with OpenAI's models, Meta's large language model Llama 2 is more likely to be turned into a weapon by sophisticated criminals, because it is open source: all of its code can be viewed and used. Experts say this gives criminals more ways to take the model for their own and use it to undermine cybersecurity, for example by building malicious AI tools on top of it. Forbes asked Meta for comment but received no response. Meta CEO Mark Zuckerberg said in July of this year that keeping Llama open source improves its "security and resilience, as open-source software undergoes more rigorous scrutiny, and more people can work together to find and fix issues."
Anti-fraud companies, meanwhile, are racing to innovate at the pace of AI and to make greater use of new kinds of data to spot fraudsters. Ranjan said, "The way you type, walk, or hold your phone: these behavioral characteristics define you, but that data is not available in the public domain. Embedded AI will play a crucial role in determining whether someone is who they claim to be online." In other words, it will take AI to catch criminals who use AI to defraud.
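As a toy illustration of the behavioral idea Ranjan describes (and emphatically not Sardine's actual method), one could summarize a user's typing rhythm and compare each new session against a stored profile; the features, numbers, and threshold below are assumptions made up for the sketch.

```python
import numpy as np

def typing_profile(key_press_times: list[float]) -> np.ndarray:
    """Summarize typing rhythm as the mean and spread of inter-key intervals."""
    gaps = np.diff(key_press_times)
    return np.array([gaps.mean(), gaps.std()])

def session_matches_owner(stored_profile: np.ndarray,
                          session_key_times: list[float],
                          tolerance: float = 0.05) -> bool:
    # Distance (in seconds) between this session's rhythm and the stored profile.
    distance = float(np.linalg.norm(typing_profile(session_key_times) - stored_profile))
    return distance < tolerance

# Example: scripted or remotely driven input often has an unnaturally uniform
# rhythm that lands far from a human profile.
owner = typing_profile([0.00, 0.21, 0.39, 0.64, 0.80, 1.07])
print(session_matches_owner(owner, [0.00, 0.10, 0.20, 0.30, 0.40, 0.50]))  # False
```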
Five tips to protect yourself from AI fraud:
This article is translated from:
https://www.forbes.com/sites/jeffkauflin/2023/09/18/how-ai-is-supercharging-financial-fraudand-making-it-harder-to-spot/?sh=19fa68076073