
Deepfake fraud: the rising threat in financial crime

Níamh Curran

Senior Reporter, Finextra

As it’s Halloween, spooky season is the perfect time to highlight one of the scariest, and arguably creepiest, growing developments in financial crime: the deepfake.

Deepfakes have dominated national headlines for a while now, often for especially unsavoury reasons and for their political uses.

They have now become a tangible issue within the financial services industry, particularly in relation to fraud.

We spoke to two experts working to combat these issues.

What is a deepfake?

If you aren’t familiar with the concept, deepfakes are fake videos, images, or voices created to replicate real people.

They can also be scarily simple to make.

Pavel Goldman Kalaydin, head of AI/ML at Sumsub, told us that deepfakes can now be made simply by downloading an app; no specialist software is needed. This means deepfake capabilities are now widely available for criminals to use as they please.

However, the software behind the apps criminals are using, which relies on generative AI, machine learning, and/or neural networks, is complicated to create.

Isa Goksu, CTO for UK&I and Germany at Globant, explained that a neural network is fed a large amount of data and is then asked to reproduce the very data it has been fed, such as an image of Person A. The purpose is to learn the intricate details of that image and store them in the model.

Later, this model can be used to transfer those detailed learnings onto other photos, such as those of Person B. For instance, images of Person A and a prompt can be fed into a newer model, which can then easily produce Person B’s photo with the movements, gestures and emotions of Person A applied. Some powerful machines are able to do this in real time, though much of it is pre-processed.
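
To make the mechanics concrete, here is a minimal sketch (in Python, using PyTorch) of the shared-encoder, per-identity-decoder autoencoder design that early face-swap tools popularised. The layer sizes, random stand-in images and training loop are illustrative assumptions, not any vendor’s actual pipeline.

```python
# Minimal sketch of the classic face-swap autoencoder idea: one shared
# encoder learns identity-agnostic facial structure, while a decoder per
# person learns to reconstruct that person's face. The "swap" is encoding
# a frame of Person A and decoding it with Person B's decoder.
import torch
import torch.nn as nn

class Encoder(nn.Module):  # shared across both identities
    def __init__(self, latent=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):  # one per identity
    def __init__(self, latent=256):
        super().__init__()
        self.fc = nn.Linear(latent, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder, dec_a, dec_b = Encoder(), Decoder(), Decoder()
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(dec_a.parameters()) + list(dec_b.parameters()),
    lr=1e-3,
)
loss_fn = nn.MSELoss()

faces_a = torch.rand(8, 3, 64, 64)  # stand-ins for real face crops of Person A
faces_b = torch.rand(8, 3, 64, 64)  # ... and of Person B

for step in range(100):  # training: each decoder reconstructs its own person
    opt.zero_grad()
    loss = loss_fn(dec_a(encoder(faces_a)), faces_a) + \
           loss_fn(dec_b(encoder(faces_b)), faces_b)
    loss.backward()
    opt.step()

# The swap: Person A's expression and pose, rendered as Person B's face.
fake_b = dec_b(encoder(faces_a))
```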

The same principle can be applied to someone’s voice. Some scammers have been able to phone unsuspecting people and record just enough of their voice for it to be replicated.

Often people are able to spot these deepfakes because they fall into the “uncanny valley”, which makes them feel somewhat eerie and uncomfortable. However, the technology has become so good that it is unlikely you will be able to spot them with the naked eye.

What risks do deepfakes pose to financial services?

The risks of deepfakes in financial services centre on fraud. However, Goksu noted: “There are use cases we never anticipated. The hackers are equally smart people. The way we are trying to prevent them, they are attacking in the same way.”

Fraudsters are able to use deepfakes to gain access to accounts that rely on video verification. This could happen to any account that uses it, but Goksu argued that the impact the technology is having on KYC processes is particularly significant. KYC processes today are content-based verification, made up of a real-time video of the person plus some other form of ID, and this kind of remote biometric verification can be replicated easily by deepfakes.

He said: “There is an open-source framework which can be used to attack any KYC system being used by fintechs right now. Out of 18 providers, 16 of them are failing.”

Goksu further stated that many fintechs outsource their KYC processes to third parties. He said that although third-party providers are trying to “beef up” their systems, fraudsters are attacking them as a weak point.

Another example of how deepfakes can be used by fraudsters is through voice replication. Goksu said that while the younger generation might be more savvy about online fraud, fraudsters can still use their voices to weaponise social engineering scams against their parents.

A scenario he described was a scammer phoning someone’s child and capturing a short recording of their voice; he argued only “two seconds” of someone’s voice is needed to replicate it. That replication can then be used in a number of fraud scenarios: the “child” phoning and asking for personal information their parents would have, or a social engineering scam in which the child urgently asks for a sum such as £100 without being able to explain why. While this is happening, the scammer speaks into the phone, but the person on the other end hears the replicated voice.

Goldman Kalaydin said that his biggest concern about deepfakes was their use in account takeover and account creation, because these could be used to empty accounts or run money mules.

However, Goldman Kalaydin also pointed to a rise in fully synthetic faces over the last six months. These can be used to create accounts for money laundering and money mules without being linked to a real person, which also means the databases of stolen documents that might previously have flagged a transaction are of limited use against this kind of attack.

All of these cases are changing as time goes on, but what is clear is that any security measure that identifies someone through a camera could be in jeopardy as this technology becomes widely available.

How to detect a deepfake?

Unfortunately, there is no simple way to answer this question. The problem and the technology involved are constantly evolving. However, there are some techniques that financial institutions are using to tackle the deepfake problem.

Goksu stated that banks are currently able to use residual neural networks to detect when images on documents are false. However, he said that “in six months or a year from now, I won’t be able to do that. It’s going to be very specific, and it’s going to be very hard to detect those things.”
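
As an illustration of that kind of check, the sketch below fine-tunes a small residual network as a binary real-versus-manipulated image classifier, with torchvision’s resnet18 standing in for a production model. The placeholder data, labels and hyperparameters are assumptions for the example, not how any bank actually trains its detector.

```python
# Sketch: fine-tuning a residual network as a binary real-vs-fake image
# classifier. torchvision's resnet18 stands in for a production model;
# the random tensors below stand in for labelled document images.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=None)          # residual network backbone
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: real, fake

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

images = torch.rand(16, 3, 224, 224)   # placeholder document crops
labels = torch.randint(0, 2, (16,))    # 0 = real, 1 = manipulated

model.train()                          # one illustrative training step
opt.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
opt.step()

model.eval()                           # inference: P(fake) for each image
with torch.no_grad():
    p_fake = torch.softmax(model(images), dim=1)[:, 1]
```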

Goksu pointed to some newer technologies that are able to detect the subject’s pulse within videos and spot the mismatches a deepfake would produce. There are also techniques that reveal a “mathematical” difference in the images which may not be visible to the human eye. However, Goksu noted again that these detection techniques may become obsolete as deepfake technology changes.
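
A rough sketch of the pulse-based idea: real skin shows a faint periodic colour change with each heartbeat (remote photoplethysmography), which a synthesised face generally lacks. The simulated frames and simple scoring below are illustrative assumptions, not a production detector.

```python
# Sketch of a pulse-consistency check via remote photoplethysmography
# (rPPG): average the green channel over the face region per frame,
# then look for a dominant frequency in the human heart-rate band.
import numpy as np

FPS = 30
frames = np.random.rand(300, 64, 64, 3)  # 10 s of "video"; stand-in for face crops

# 1. Per-frame mean of the green channel, where the pulse signal is strongest.
signal = frames[:, :, :, 1].mean(axis=(1, 2))
signal -= signal.mean()

# 2. Dominant frequencies via FFT.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1.0 / FPS)

# 3. A live face should show a clear peak at roughly 0.7-4 Hz (42-240 bpm);
#    a synthesised face typically carries no coherent pulse.
band = (freqs >= 0.7) & (freqs <= 4.0)
pulse_score = spectrum[band].max() / spectrum[1:].sum()  # skip the DC term

print(f"pulse score: {pulse_score:.3f} (low values suggest no real pulse)")
```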

He therefore emphasised the importance of multifactor authentication, saying that “doing a multifactor authentication is very basic, it will eliminate 90% of the problem.”
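
As a concrete example of such a second factor, the sketch below implements a time-based one-time password (TOTP, RFC 6238) check using only Python’s standard library; a deepfaked video stream cannot produce this code, only a device holding the shared secret can. The example secret is a well-known documentation value, not a real credential.

```python
# Sketch of a TOTP (RFC 6238) second factor using only the standard
# library: the server and the user's device share a secret and derive
# a short-lived code from the current time window.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

SECRET = "JBSWY3DPEHPK3PXP"      # well-known documentation secret, not real
submitted = totp(SECRET)         # what the user's authenticator app shows

# Server-side check: constant-time comparison against the current window.
assert hmac.compare_digest(submitted, totp(SECRET))
```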

Goldman Kalaydin argued that “behavioural anti-fraud” can be used to detect fraudulent patterns; examples he gave included many users sharing the same IP address, or clusters of similar transactions. He further stated that this is why it is necessary to have all of these checks on one platform, so you can see not only document and photo checks but also any other behaviour that could make a case suspicious.
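
A toy illustration of that behavioural signal: grouping new accounts by shared IP address and flagging clusters. The field names and threshold are assumptions for the example, not Sumsub’s actual rules.

```python
# Toy behavioural check: flag clusters of accounts created from the
# same IP address. Field names and the threshold are illustrative.
from collections import defaultdict

signups = [
    {"account": "a1", "ip": "203.0.113.7"},
    {"account": "a2", "ip": "203.0.113.7"},
    {"account": "a3", "ip": "203.0.113.7"},
    {"account": "a4", "ip": "198.51.100.2"},
]

CLUSTER_THRESHOLD = 3  # assumed cut-off; real systems tune this per signal

by_ip = defaultdict(list)
for s in signups:
    by_ip[s["ip"]].append(s["account"])

for ip, accounts in by_ip.items():
    if len(accounts) >= CLUSTER_THRESHOLD:
        print(f"suspicious cluster: {ip} -> {accounts}")
```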

Looking to the future, Goldman Kalaydin said: “The problem, unfortunately, will not go away. It will get harder and harder for us to detect who is a deepfake or not.”

For those looking to cope with deepfakes, there is a need to adapt and keep on top of the technology. It seems that multiple different signals should be used to detect and prevent deepfakes. Ultimately, this is a problem that will continue to evolve, and it poses quite a scary future for identity as the technology becomes more convincing.
