The rapid emergence of generative AI models such as ChatGPT, Gemini, and Claude, built on large language models, is reshaping the landscape of online deepfake scams. The question remains: what promise do these tools hold, and what risks arise when they fall into the hands of scammers?

Cybersecurity experts warn that the problem will worsen as criminals exploit generative AI technology, which has lowered the barrier to entry for sophisticated scams. Moreover, deepfakes can also be used to spread fake news, manipulate stock prices, or damage a company’s brand and sales.

Traditional fraud detection and prevention models are becoming outdated and ineffective against new AI-based scams. A report published by CNBC highlights how AI-driven deepfake scams have netted criminals millions of dollars through sophisticated attacks.

The report highlights the following cases:

  • A Hong Kong finance worker was duped into transferring $25 million to fraudsters who used AI-based deepfake technology to impersonate the company’s Chief Financial Officer (CFO) and order the transfer on a video call.
  • In a similar case in China, a finance worker was tricked into transferring 1.86 million Chinese Yuan (equivalent to $262,000) to a fraudster’s account after a video call with a deepfake of her boss.
  • In 2019, the CEO of a British energy company transferred €220,000 (equivalent to $238,000) to a scammer who had digitally mimicked his parent company's head using deepfake technologies.


Mandiant CEO Calls for Naming Names to Fight Cybercriminals

Kevin Mandia, the CEO of Mandiant at Google Cloud, shared striking remarks about deepfakes and AI at the RSA Conference 2024, held in San Francisco.

In his statements to DarkReading, he mentioned:

  • To combat the new wave of sophisticated deepfake media, content creators are urged to embed “watermarks” with immutable metadata, digital certificates, and signed files that guarantee authenticity.
  • Mandia argues that it is time to make it riskier for threat actors themselves, suggesting doubling down on sharing attribution and also naming names in order to raise the stakes for cybercriminals.
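The signed-file idea above can be sketched in a few lines. The snippet below is a deliberately simplified illustration using a shared-secret HMAC; real provenance schemes use public-key certificates rather than a shared key, and the `SIGNING_KEY` and media bytes here are hypothetical placeholders.

```python
import hashlib
import hmac

# Hypothetical signing key. Real watermarking/provenance systems would use
# asymmetric certificates, not a shared secret.
SIGNING_KEY = b"publisher-secret-key"

def sign_content(media_bytes: bytes) -> str:
    """Produce a tamper-evident signature over the media bytes."""
    return hmac.new(SIGNING_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_content(media_bytes: bytes, signature: str) -> bool:
    """Re-compute the signature and compare in constant time."""
    return hmac.compare_digest(sign_content(media_bytes), signature)

original = b"\x00\x01... raw video bytes ..."
sig = sign_content(original)

print(verify_content(original, sig))                # True: untouched
print(verify_content(original + b"edited", sig))    # False: modified
```

Any modification to the media changes the signature check, which is what makes signed files useful against manipulated copies.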

Lohrmann: AI Deepfakes Pose Major Threat, Need Holistic Approach to Fight Back

Dan Lohrmann is an internationally recognized cybersecurity leader, keynote speaker, and author. In his cybersecurity blog with Government Technology, he shared:

  • AI deepfakes have become a major cybersecurity threat to government organizations, with carefully crafted fake messages and sophisticated videos that are extremely difficult to tell apart from genuine content.
  • Traditional security awareness is no longer enough. We need a more holistic approach to “Human Risk Management” to change the security culture and empower employees to detect and report cyber threats.
  • Specific measures need to be taken to (re)train employees at all levels to identify inconsistencies in deepfakes. It is essential to provide them with tools and processes to verify message authenticity and to leverage enterprise-level AI technologies that automatically detect deepfake scams and fraudulent content.

What is a deepfake?

This question requires us to consider the threat of deepfakes more seriously than ever before. In simple terms, a deepfake is an artificial image, video, or audio clip generated by a special kind of machine learning called “deep” learning, from which the technique takes its name.

Deepfake technology is mostly used for scientific research, but it can also be used to create bogus content that impersonates high-profile figures such as politicians, world leaders, and celebrities to deliberately mislead people.

Deepfake Technology

Deepfake technology can seamlessly stitch anyone in the world into a video or photo in which they never actually appeared. This is made possible by deep learning algorithms. Deep learning is a branch of machine learning in which an algorithm is fed examples and learns to produce output that closely resembles the examples it was trained on. Humans learn in a similar way: a baby tries eating random objects and quickly discovers what is edible and what is not.

How does deepfake AI work?

Machine learning (ML) is the secret sauce behind deepfakes, making them much faster and cheaper to create. To make an AI deepfake video of someone, the creator first trains a neural network on a series of videos of the target to give it a realistic understanding of what that person looks like. The training footage usually covers multiple angles and different lighting conditions so that the deep learning is effective. The trained network is then combined with computer-generated graphics techniques to superimpose a copy of the target onto a different actor.
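The "fed with examples" principle described above can be illustrated with a toy model. The sketch below trains a single-parameter model by gradient descent to reproduce a pattern from example pairs; it is a deliberately minimal stand-in for the deep neural networks used in real deepfake pipelines, not an implementation of one.

```python
# Toy illustration: learn y = 2x from examples, the same
# "learn from examples" principle that deep networks apply at massive scale.
examples = [(x, 2.0 * x) for x in range(1, 6)]  # (input, target) pairs

w = 0.0    # single trainable weight
lr = 0.01  # learning rate

for _ in range(1000):                     # training loop
    for x, target in examples:
        pred = w * x                      # forward pass
        grad = 2 * (pred - target) * x    # gradient of squared error
        w -= lr * grad                    # gradient descent step

print(round(w, 3))  # converges close to 2.0
```

A deepfake generator follows the same loop at enormous scale: millions of weights, frames of the target person as examples, and an error signal measuring how unrealistic the output looks.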

Recent Examples of AI Deepfake Scams

AI deepfake scams have become increasingly prevalent in recent years. Here are some recent examples:

A Hong Kong Company Was Scammed Out of $25.7 Million Using Deepfake Technology

According to a report published by Bank Info Security, fraudsters used deepfake technology to dupe an employee of a multinational company based in Hong Kong into transferring HK$200 million ($25.7 million) to their accounts. The scammers staged a fake video conference call in which they impersonated the company’s CFO and directed the employee to make confidential financial transactions to undisclosed accounts. The deceived employee made 15 separate payments to five local bank accounts and only realized it was a scam when it was too late.

In light of this scam, the Hong Kong police have warned the public about deceptive tactics involving the use of AI technologies in online meetings.

Employee in Shaanxi Province, China Falls Victim to $258,000 Deepfake Scam

Another deepfake scam occurred in the Shaanxi province of China, where scammers used AI to manipulate the video and audio of a real person. As reported in the China Daily HK article, a finance employee was deceived into transferring 1.86 million yuan ($258,000) to a fraudster impersonating her boss over a video call. The employee reported the incident to the police after checking with her real boss and learning that the call had been initiated by scammers. The police and relevant authorities managed to freeze 1.56 million yuan of the transferred amount.

Deepfake Scams on the Rise in Politics: Indonesian Election and Imran Khan Cases

AI deepfakes are increasingly used in elections, with politicians and their rivals deploying them for their own gain and to spread propaganda against opposing parties. As reported by CNBC, the number of deepfake scams worldwide rose tenfold from 2022 to 2023 alone, according to data verification firm Sumsub.

Ahead of the Indonesian election on the 14th of February, a video surfaced online of the late Indonesian president Suharto advocating for the political party he once presided over. The video went viral on social media, racking up 4.7 million views on X alone.

In a similar case, a deepfake clip of Imran Khan, the former prime minister of Pakistan, emerged around the time of the national elections, conveying a message that his party was boycotting them.

Deepfakes of politicians and government figures are becoming increasingly common, especially with 2024 forecast to be the biggest global election year in history.

How to defend against deepfake scams

It is clear from the statements above that the deepfake scam threat is real: its use is growing exponentially while the fakes become more realistic and convincing. At the same time, the battle against deepfakes is also evolving, with new techniques and technologies emerging to counter new threats. It is essential to familiarize yourself with these tools and integrate them into your personal and professional practices. Equipping yourself with this knowledge will protect you from the deceptive, disruptive, and destructive potential of deepfake technology.

AI deepfake scams can be detected using a variety of methods and techniques; the main ones are listed below:

  • Advanced Detection Tools
  • Digital Identity Security with Biometrics
  • AI & Blockchain Technologies

Advanced Detection Tools

These AI-powered cybersecurity solutions use AI algorithms to identify subtle anomalies in video and audio deepfakes, such as irregular blinking, unsynchronized lip movement, unnatural motion and emotion, a white strip in place of teeth, unnatural skin tone, or inconsistent lighting, among others.
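As a simplified illustration of one such heuristic, the sketch below flags footage whose detected blink rate falls outside a typical human range (early deepfakes often blinked too rarely). The thresholds and blink timestamps are illustrative assumptions, not values from any real detection product.

```python
def blink_rate_suspicious(blink_times_s, video_length_s,
                          min_bpm=8.0, max_bpm=30.0):
    """Flag footage whose blinks-per-minute falls outside an
    assumed typical human range."""
    if video_length_s <= 0:
        raise ValueError("video length must be positive")
    bpm = len(blink_times_s) / (video_length_s / 60.0)
    return not (min_bpm <= bpm <= max_bpm)

# 60-second clip with only 2 detected blinks -> 2 bpm, flagged
print(blink_rate_suspicious([12.0, 48.0], 60.0))            # True
# 60-second clip with 15 detected blinks -> 15 bpm, in range
print(blink_rate_suspicious(list(range(2, 60, 4)), 60.0))   # False
```

Real detection tools combine many such signals (lip sync, lighting, skin texture) inside trained models rather than relying on a single hand-set threshold.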

Digital Identity Security with Biometrics

The following list describes some of the important deepfake detection methods that use biometric authentication:

  • Facial Recognition: Compares a video or image against a database of known individuals to verify authenticity.
  • Fingerprint Scan: Deepfakes are purely digital and cannot easily trick fingerprint scanners, which also rely on heat to verify that a real finger is placed on the scanner surface.
  • Palm Scan: A cutting-edge biometric method that images the vein patterns in the hand and compares them against a database. Deepfake AI technology is still far from being able to manipulate palm scans.
  • Vein Print: Uses the seemingly random mesh of blood vessels within the body for authentication. Researchers consider a person’s vein pattern very hard to spoof with deepfake AI.
  • Retinal Scan: Maps the unique pattern of the human retina using specialized devices that capture a high-resolution image with ultra-low-intensity light sources, which makes it hard for deepfake technology to mimic.
  • ECG Biometric: The human heart’s electrical signals are distinct enough to be used for identification and authentication. Against this level of internal biometric, deepfake technology stands virtually no chance.

AI & Blockchain Technologies

By combining AI and blockchain technologies, it is possible to create a verifiable history of digital content. AI can verify the authenticity of the content against its blockchain records while highlighting discrepancies that point toward content manipulation.
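A minimal sketch of this verification idea follows, assuming the blockchain exposes a lookup from a content ID to the hash recorded at publication time. The `ledger` dictionary here is a hypothetical stand-in for an actual blockchain client, and the content IDs and media bytes are illustrative.

```python
import hashlib

# Stand-in for an on-chain registry: content ID -> SHA-256 hash
# recorded when the content was originally published.
ledger = {}

def register(content_id: str, media_bytes: bytes) -> None:
    """Record the content's fingerprint at publication time."""
    ledger[content_id] = hashlib.sha256(media_bytes).hexdigest()

def is_authentic(content_id: str, media_bytes: bytes) -> bool:
    """Re-hash the content and compare it against the recorded entry;
    any manipulation changes the hash and fails the check."""
    recorded = ledger.get(content_id)
    return (recorded is not None
            and recorded == hashlib.sha256(media_bytes).hexdigest())

video = b"... original press-release video bytes ..."
register("press-video-001", video)

print(is_authentic("press-video-001", video))                 # True
print(is_authentic("press-video-001", video + b"deepfake"))   # False
```

Anchoring the recorded hashes on a blockchain makes the registry itself tamper-evident, so a manipulated copy cannot simply rewrite its own record.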

Industry leaders’ viewpoints on deepfake scams

Hugh Thompson, the executive chairman of the RSA Conference, is one of the leading experts on cybersecurity and privacy. He is also a member of the Aspen Cybersecurity Group and the STS Forum Council. In his statements to Fortune.com, he shared the following insights:

  • Over 40,000 people from 130 countries are expected to attend the 33rd edition of the annual RSA conference on Cybersecurity with a focus on emerging threats such as deepfake scams.
  • Scams generated by AI deepfakes are becoming more sophisticated. Bad actors were able to implant a backdoor into XZ Utils, a commonly used software component, with the potential to compromise tens of thousands of companies had it not been discovered.

Securus Communications posted an article that highlights the rise of deepfakes in the cybersecurity landscape. The article shared the following insights:

  • According to a survey conducted by iProov, awareness of deepfake technology grew significantly from 13% to 29% between 2019 and 2022. However, 57% of people believe they could spot a deepfake, which is unlikely to be true for a well-constructed one. The survey also found that a whopping 80% of people are more likely to use services that take measures against deepfakes.
  • The article also mentions a US law called the DEEPFAKES Accountability Act that makes it illegal to create or distribute deepfakes without consent or proper labeling.
  • In a similar way, the UK government is creating new criminal offenses through the Criminal Justice Bill to punish taking or recording intimate images of people without consent.

Conclusion

The rise of generative AI and deepfake technology poses a major threat in the form of sophisticated, almost lifelike online scams and fraud. As the examples cited in this article show, deepfakes can convincingly impersonate anyone, from company executives to political leaders and celebrities, to lure victims into transferring hefty sums of money or to spread false news and misinformation.

To counter this growing threat, a multi-layered approach is needed that uses AI-powered cybersecurity tools. Leading cybersecurity experts continually warn that these attacks are getting smarter by the day, with lower costs and barriers to entry. They stress the importance of implementing technical and AI-based countermeasures while also providing security awareness training to employees at all levels so they can identify deepfake scams before the damage is done.

Ultimately, the governing bodies and regulatory authorities must keep updating the laws around digital impersonation and non-consensual synthetic media creation. As deepfake scams are getting hyper-realistic, proactive measures should also ensure truth and trust online for individuals, businesses, and societies at large.

For further information on Sangfor’s cyber security and cloud computing solutions, visit our website, www.sangfor.com.
