Thinking Through the Impact of AI on Fraud

We’re far enough along in the artificial intelligence hype cycle that it’s tempting to blow off warnings about new AI threats. After all, most cybersecurity professionals understand that AI is not new and has, in fact, been an ingredient in attack vectors for many years. But complacency about AI is dangerous. We’re in the middle of a leap forward in AI capabilities, one that bestows new and frightening powers on hackers and fraudsters.

Synthetic fraud, in particular, should be an area of concern as new advances in AI permeate the criminal world. Synthetic fraud involves creating fake human identities for the purpose of theft or account takeover. According to research from the US Federal Reserve, synthetic fraud accounted for losses of $20 billion in 2020. As AI becomes more sophisticated, that number is likely to grow significantly in the coming years.

I spoke about this issue recently with Nir Stern, VP of Product at AU10TIX, a forensic identity intelligence software company. Synthetic fraud is Stern’s day-to-day obsession. For him, AI is a dangerous accelerant for existing patterns of synthetic fraud.

He cites the potential for voice impersonation in social engineering attacks as one example. With the new generation of AI, a fraudster can grab a sample of a person’s voice from a brief phone call and then use that voice to trick even the best anti-fraud voice detection software. What’s particularly troubling for Stern is the potential scale of a problem like this. “It’s one thing to fool a single call center,” he said. “Now, with automation and AI, a criminal gang can fool a hundred thousand call centers at the same time.”

As Stern likes to remind people, the adversary here is not a guy in a hoodie. “We’re facing off against major criminal organizations, often with thousands of people on their payrolls and hundreds of millions of dollars to spend ripping off some of the biggest brands in the world. You have to assume they will get the technology they want.”

Mass production of synthetic identities leads directly to mass attacks, which can take several different forms, according to Stern. A fraudster organization might create a million synthetic IDs and use them to open a million accounts at, for example, crypto services that offer a coin as a bonus for signing up. “By the time anyone notices, you might have millions in untraceable cryptocurrency out the door to users who don’t exist.”

Alternatively, fraudsters can copy a real person’s ID and use it to open accounts, steal merchandise, or launder money. In his experience, a single driver’s license can be duplicated again and again in this kind of serial attack. Even if anti-fraud measures require the customer to submit a selfie to authenticate the driver’s license, deep fake photos generated with AI can outsmart the controls.

What can be done about this? “You have to bring a gun to this gunfight, so to speak,” Stern explained. “If AI is powering the attack, then AI needs to power the defense.” For example, as AU10TIX has found, it’s easier to spot synthetic IDs when technology is looking for them simultaneously across multiple fraud targets. “If you find the same driver’s license photo being submitted for verification at five different banks at the same time, each with a different name attached, you know you’ve got a serial synthetic fraud going on.”
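
To make the pattern concrete, here is a minimal sketch of that kind of cross-target check, written in Python. To be clear, this is not AU10TIX’s implementation; the perceptual-hash approach, the open-source Pillow and imagehash libraries, and every name in it are illustrative assumptions.

```python
# Illustrative sketch of cross-institution duplicate-photo detection.
# NOT a vendor implementation; all names here are assumptions.
#   pip install Pillow imagehash
from collections import defaultdict

import imagehash
from PIL import Image

# Shared index across participating institutions:
# perceptual hash of the ID photo -> set of (institution, claimed_name)
shared_index = defaultdict(set)

def screen_submission(institution: str, claimed_name: str, photo_path: str) -> bool:
    """Return False (and alert) if this ID photo was already submitted
    elsewhere under a different name -- the signature of a serial attack."""
    # A perceptual hash survives re-compression and minor edits, unlike a
    # cryptographic hash of the raw file bytes. A production system would
    # also match on Hamming distance, not just exact hash equality.
    photo_hash = imagehash.phash(Image.open(photo_path))
    other_names = {n for _, n in shared_index[photo_hash] if n != claimed_name}
    shared_index[photo_hash].add((institution, claimed_name))
    if other_names:
        print(f"ALERT: same ID photo seen under {len(other_names)} other "
              f"name(s) -- possible serial synthetic fraud at {institution}")
        return False
    return True
```

The matching logic is the easy part once the hashes live in one place. The hard part is getting multiple companies to feed a shared index in the first place, which is exactly the cooperation problem Stern describes next.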

Making this happen requires specialized tooling, as well as cooperation agreements among multiple companies. The key to success, as Stern relates, is to use AI and machine learning to fight on the same level as the fraudsters.

Practical controls also help. For instance, Stern suggests that any kind of selfie verification require digital proof that an in-phone camera is being used, live, for the selfie. That way, it becomes a great deal more difficult for the fraudster to inject a deep fake photo into the process.
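
What that digital proof could look like varies by platform, but one common pattern is a challenge-response check bound to a key held on the device. The sketch below is a hedged illustration under that assumption, not any vendor’s product; in a real deployment the client-side signing would be rooted in the phone’s secure hardware and platform attestation rather than application code.

```python
# Sketch of a challenge-response "live capture" check. All names and
# parameters here are assumptions for illustration, not a vendor API.
#   pip install cryptography
import hashlib
import os
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

CHALLENGE_TTL_SECONDS = 30  # how long a nonce stays valid

# --- Server side --------------------------------------------------------
def issue_challenge() -> dict:
    """Hand the app a one-time nonce it must bind into the capture."""
    return {"nonce": os.urandom(16), "issued_at": time.time()}

def verify_capture(challenge, selfie_bytes, signature, device_public_key) -> bool:
    """Accept the selfie only if it is signed, by a key enrolled on the
    user's phone, over this fresh nonce. A pre-rendered deep fake image
    can't be replayed without the device key and a live challenge."""
    if time.time() - challenge["issued_at"] > CHALLENGE_TTL_SECONDS:
        return False  # stale nonce: possible replay
    payload = challenge["nonce"] + hashlib.sha256(selfie_bytes).digest()
    try:
        device_public_key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False

# --- Client side (normally inside the phone's trusted hardware) ---------
device_key = Ed25519PrivateKey.generate()   # enrolled at account creation
challenge = issue_challenge()
selfie = b"...raw frame from the in-phone camera..."
signature = device_key.sign(challenge["nonce"] + hashlib.sha256(selfie).digest())

print(verify_capture(challenge, selfie, signature, device_key.public_key()))  # True
```

The point of the design is simple: a deep fake image alone is useless to the attacker unless they can also sign a fresh server nonce with a key enrolled on the victim’s actual phone.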