
Deepfakes: Ghosts in machines and their effect on the financial world

More and more lives and behaviours are migrating online and into the digital sphere, from socialising and working to how finance is accessed and used. Developments in technology such as Artificial Intelligence (AI), combined with non-face-to-face forms of interaction, bring with them the potential for criminal profit whilst making it more difficult to track down the perpetrators.

Deepfakes – AI-generated voices, videos or images – can make people appear to say or do things they never actually did, and they can look very convincing. A recent viral deepfake video of Tom Cruise playing golf brought deepfakes to the mainstream, but it also brought into focus potentially serious risks ranging from financial crime to national security.

Understanding Deepfakes

In simple terms, deepfakes are created using artificial intelligence: real images are fed into a system that is ‘trained’ using two parts – one which creates fake images and the other which tries to spot the fakes – until the system can no longer tell the difference between the real and the fake.

Specifically, the typical process of creating deepfakes involves a branch of AI, Machine Learning (ML), through a technique which uses a Generative Adversarial Network (GAN). The process uses two neural networks which are trained in tandem but against each other. A dataset of real images, for example, is fed into the system. The first network, referred to as the ‘generator’, creates false data – photos, video footage or audio – that replicates the characteristics of the real data. The second network, known as the ‘discriminator’, is tasked with identifying the false data. Based on the discriminator’s verdicts, the generator continues to adjust its output until the discriminator can no longer distinguish the real data from the false.
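For readers who want to see the idea in code, the sketch below is a deliberately minimal GAN training loop written in Python with PyTorch. The network sizes, the random stand-in data and the training settings are illustrative assumptions rather than anything described above; real deepfake systems train far larger image, video and audio models on the same principle.

```python
# Minimal GAN sketch: a "generator" learns to produce fake samples while a
# "discriminator" learns to tell them apart. Sizes and data are illustrative.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # illustrative sizes, not from the article

generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                          nn.Linear(128, data_dim), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
                              nn.Linear(128, 1), nn.Sigmoid())

loss = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(32, data_dim)       # stand-in for a batch of real data
    noise = torch.randn(32, latent_dim)
    fake = generator(noise)

    # 1) Train the discriminator to label real samples 1 and fakes 0.
    d_opt.zero_grad()
    d_loss = loss(discriminator(real), torch.ones(32, 1)) + \
             loss(discriminator(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator so the discriminator calls its fakes "real".
    g_opt.zero_grad()
    g_loss = loss(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    g_opt.step()
```

As the loop repeats, the two networks push each other: the discriminator gets better at spotting fakes, and the generator gets better at producing fakes that pass inspection – the dynamic described above.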

Risks

The sentiment may be that deepfakes do not pose an immediate threat to the stability of the global financial system; nevertheless, the material threat that actors could use them to inflict varying degrees of damage on individually targeted people, companies and even government functions should be of concern. For example, the release of deepfake videos of public officials or figures making offensive comments or behaving in an inappropriate manner could create public distrust. They may even influence public opinion at election time – two planned television advertisements using deepfakes of Vladimir Putin and Kim Jong-un, following the presidential debate between Donald Trump and Joe Biden on 29 September 2020, were rejected without explanation – it is not difficult to understand why.

The U.S. Defense Advanced Research Projects Agency (DARPA) has certainly been cognizant of the potential threat and use cases of deepfakes, as the agency ran two programs involved in the detection of deepfakes. The first was Media Forensics (MediFor), which developed algorithms to assess the integrity of videos and images in order to provide insights into how fake content was generated. It was said to explore techniques for spotting inconsistencies in deepfakes, such as inconsistencies in pixels (digital integrity), with the laws of physics (physical integrity) and with other information sources (semantic integrity). The second program, Semantic Forensics (SemaFor), looks to further the work completed under MediFor. SemaFor focuses on finding and cataloguing, for human review, semantic inconsistencies such as mismatched jewellery, irregular facial features and backgrounds.

The Federal Bureau of Investigation’s (FBI) Internet Crime Complaint Center (IC3) issued a Private Industry Notice (PIN) in March 2021 warning that “malicious actors almost certainly will leverage synthetic content for cyber and foreign influence operations in the next 12-18 months. Foreign actors are currently using synthetic content in their influence campaigns, and the FBI anticipates it will be increasingly used by foreign and criminal cyber actors for spearphishing and social engineering in an evolution of cyber operational tradecraft”.

The FBI timeline of 12-18 months may appear alarmist, but they could be correct in their thinking. At the end of 2019, Hao Li, one of the world’s top deepfake artists, appeared on a podcast and spoke about his role in the making of the movie Fast and Furious 7, and how Paul Walker’s character was brought back to life for the film following the actor’s death. Li also explained that deepfake technology is advancing rapidly and is largely open source, and is therefore fairly easily accessible and continually improving. Most striking of all, though, was his comment that deepfakes can easily generate something that is 90% imperceptible to the naked eye, while the remaining 10% can be masked with algorithms or ‘noise’ to hide artefacts. There is even an app in China called ‘Zao’, which allows the user to take a picture and insert it into video – the app “instantly inserts [your picture] into Titanic, so you have the role of DiCaprio…”, Li explained.

The continued increase in worldwide non-cash transactions, which saw growth of 14.1% from 2018 to 2019, along with an increasing shift towards online banking and alternative financial platforms, will likely bring with it the challenge of managing increased financial crime risks – especially where the ‘Product, service, transaction or delivery channel’ in a banking relationship involves non-face-to-face interactions.

Threats

Currently, situations where deepfakes have been used to facilitate financial crime are rare, but the notice from the FBI should serve as a warning to financial institutions that criminals will use any avenue open to them to facilitate financial crime – especially as technology develops and customer behaviours in how finance is accessed continue to change.

The use of deepfake technology to facilitate financial crime is already a reality, as the CEO of a UK-based energy firm found out when he was scammed out of $243,000 in 2019. Using AI voice technology, the fraudster managed to convince the CEO that he was speaking to his colleague from the company’s parent organisation in Germany – he was fooled by his colleague’s German accent, down to the “melody” of his voice.

The think-tank Carnegie Endowment for International Peace, in its working paper titled ‘Deepfakes and Synthetic Media in the Financial System: Assessing Threat Scenarios’, put forward ten scenarios detailing the risks to the financial sector. Financial institutions should study these in order to proactively consider potential risks that may emerge, especially following shifts towards digital and online finance, as outlined in the Financial Action Task Force (FATF) guidance on Digital Identity.

The ten scenarios fell into two categories. Six of the scenarios involved ‘broadcast’ media, aimed at mass consumption and spread widely through public channels, whereas four involved ‘narrowcast’ media, made for small, targeted audiences and sent directly through private channels. The think-tank put forward that the financial sector should help lead what it described as a “much-needed” public conversation, specifically in relation to the threats posed through narrowcast media.

The ten scenarios showed how financial harm could be caused using voice cloning, face-swapped videos, synthetic photos and text:

  1. Identity theft – to carry out fraud or gain access to corporate information
  2. Imposter scam – to impersonate public officials or victim family members to carry out fraud
  3. Cyber extortion – to blackmail victims
  4. Payment fraud – to impersonate company staff to carry out fraud
  5. Stock manipulation via fabricated events – to damage the reputation of company leadership or create false product endorsements in order to alter investor behaviour
  6. Stock manipulation via bots – to attack or promote a brand using social media bots in order to alter investor behaviour
  7. Malicious bank run – to spread false rumours in order to initiate bank runs, again using social media bots
  8. Malicious flash crash – to create false market events
  9. Fabricated government action – to give the false impression that there may be an interest rate change, shift in policy or enforcement action
  10. Regulatory astroturfing – to give the false impression of views from the public on proposed changes to financial regulation, which may sway the rule making

The Crude to the Complex

It is not only sophisticated deepfakes that may pose a risk; even crude image manipulation mixed with social engineering can create serious security and financial crime risks.

Last year, investigations uncovered a campaign dubbed “Dream Job”, attributed to the North Korean ‘Lazarus group’ (also known as HIDDEN COBRA). The campaign was found to have been active since the start of 2020 and to have succeeded in infecting a number of company computers around the world. It served two purposes – the theft of information and of money. The campaign was directed at defence and government organisations, with certain employees of these organisations being targeted. Fake dream job offers were sent, purportedly on behalf of defence and aerospace companies such as Boeing, Lockheed Martin and BAE, in order to manipulate individuals into giving up information and to infect computer systems.

The attackers would create fictitious LinkedIn profiles, which included what appeared to be lifelike, computer-generated profile photos. They added to these profiles connections from the company they supposedly worked for, and would also add connections from the victim’s employer in order to create an air of credibility and reduce the likelihood of the victims becoming suspicious. The fictitious profiles were carefully crafted using research on the victim in order to maximise success. The attackers used a mix of sophisticated social engineering techniques – sending emails to targets’ personal addresses and communicating directly over the phone and through WhatsApp over an extended period of time in order to gain trust. The final stage of the attack, once trust had been gained, was the job offer, where the victim would be sent a malicious file via OneDrive or Dropbox, tailored and timed so that the victim would open it at work and infect company systems. The fictitious profiles and communications would then be shut down once an attack was successful.

Problems caused by deepfakes are not limited to images and videos of people – there can even be very convincing deepfakes of satellite images. Whilst this may appear harmless, such fakes could be used not only to create false information about natural disasters like floods and wildfires, but also to manipulate the narratives of stories where satellite images may be used; especially as satellite data has been used to track developments in relation to the situation faced by Uyghurs in China, as well as environmental issues such as deforestation in the Amazon.

Researchers at the University of Washington have been studying fake satellite imagery by focussing on technical elements of image processing, such as ‘colour histograms’, ‘frequency’ and ‘spatial domains’. The purpose of the study was not to show that images could be faked but to understand how to detect them.
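As a purely illustrative sketch – not the researchers’ actual method – the Python snippet below computes two of the signals mentioned above: a per-channel colour histogram and a crude measure of high-frequency energy in the Fourier spectrum, comparing a suspect image against a trusted reference tile. The file names, the threshold-free comparison and the use of NumPy and Pillow are assumptions made for the example.

```python
# Illustrative only: simple colour-histogram and frequency-domain signals of
# the kind detection research examines, for a suspect vs. a reference image.
import numpy as np
from PIL import Image

def colour_histogram(path, bins=32):
    """Normalised per-channel colour histogram of an RGB image."""
    img = np.asarray(Image.open(path).convert("RGB"))
    hists = [np.histogram(img[..., c], bins=bins, range=(0, 255),
                          density=True)[0] for c in range(3)]
    return np.concatenate(hists)

def high_frequency_energy(path):
    """Share of spectral energy away from the centre of the 2D FFT;
    generated imagery often shows unusual high-frequency artefacts."""
    grey = np.asarray(Image.open(path).convert("L"), dtype=float)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(grey)))
    h, w = spectrum.shape
    centre = spectrum[h // 4: 3 * h // 4, w // 4: 3 * w // 4]
    return 1.0 - centre.sum() / spectrum.sum()

# Hypothetical file names: a trusted tile and a suspect tile of the same area.
ref_hist = colour_histogram("reference.png")
sus_hist = colour_histogram("suspect.png")
hist_distance = np.abs(ref_hist - sus_hist).sum()
print("histogram distance:", hist_distance,
      "high-frequency share:", high_frequency_energy("suspect.png"))
```

Large differences in either signal would not prove manipulation, but they are the kind of statistical cue that automated screening of satellite imagery can surface for human review.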

With more information in the public domain being used for everything from news consumption to open source intelligence, Todd Myers from the National Geospatial-Intelligence Agency has given a stark warning of the dangers of even small amounts of manipulated imagery and geospatial data entering the ‘supply line’. In an article from 2019, he said: “Forget about the [Department of Defense] and the [intelligence community]. Imagine Google Maps being infiltrated with that, purposefully? And imagine five years from now when the Tesla [self-driving] semis are out there routing stuff?”.

Other applications of deepfakes extend even further, into the medical profession. Cybersecurity experts have shown that they were able to infiltrate a network within the medical field, download a CT scan of a patient’s pelvis and then make a 3D-printed model of it. This has led to the drafting of theoretical scenarios in which an attacker could go much further: not only downloading the CT scan but also manipulating and modifying it by creating a deepfake of it, for reasons ranging from sabotage and ransomware attacks to insurance fraud.

Countering Threats

Advances in technology have made it difficult to distinguish deepfakes from real images or videos with the naked eye, especially where the quality is very high, and with this has come the emergence of technology that can help in spotting them. But reliance on technology alone may not be sufficient, particularly because it may not be available in all settings, so one of the biggest weaknesses in spotting deepfakes will remain the ‘human’ element. As a result, it is imperative that employees within organisations and the general public alike are educated on voice, image and video deepfakes. Some of the things that they can look for are set out below, with a simple automated check sketched after the list:

  • Unnatural eye or body movement, colouring or facial expressions
  • Unrealistic facial-feature positioning, body or posture
  • Lack of emotion
  • Unrealistic hair
  • Teeth that don’t look real
  • Image blurring or misalignment
  • Inconsistent noise or audio
  • Images that look unnatural when slowed down

At least one firm has gone further and is researching whether it can effectively repurpose GAN technology to train its systems to spot transaction fraud. Whether this is feasible is open to debate: some critics argue that attackers will simply find new ways to circumvent such an application, while others are more optimistic that, compared to videos and images, applying the technology to transaction data alone to spot fraud may prove workable.
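The firm and its method are not described here, but one plausible reading of the idea is that a discriminator trained against a generator on legitimate transaction records (as in the earlier GAN sketch) could be reused to score incoming transactions, with low ‘looks genuine’ scores routed for human review. The Python sketch below illustrates that interpretation only; the model, features and cut-off are assumptions.

```python
# Hedged illustration: reusing a GAN-style discriminator as a fraud scorer.
import torch
import torch.nn as nn

feature_dim = 64  # illustrative number of engineered transaction features

# A discriminator like the one in the earlier sketch, assumed here to have
# already been trained on feature vectors of legitimate transactions.
discriminator = nn.Sequential(nn.Linear(feature_dim, 128), nn.LeakyReLU(0.2),
                              nn.Linear(128, 1), nn.Sigmoid())

def score_transactions(model: nn.Module, transactions: torch.Tensor) -> torch.Tensor:
    """Probability that each transaction record 'looks genuine' to the model."""
    with torch.no_grad():
        return model(transactions).squeeze(1)

# Hypothetical usage: anything scored as unlikely to be genuine is routed for
# review; the batch and the 0.2 cut-off are purely illustrative.
new_batch = torch.randn(8, feature_dim)   # stand-in for real engineered features
scores = score_transactions(discriminator, new_batch)
print("transactions to review:", (scores < 0.2).nonzero(as_tuple=True)[0].tolist())
```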

Ultimately, although the technology used to create deepfakes may be a new tool for criminals, the underlying act it facilitates is an old and well-known one – fraud. This should serve as the starting point for any consideration of how the threat should be tackled.