
New Report: Addressing deepfake threats, ‘the next generation of cyber security concerns’

In September 2023, the U.S. National Security Agency (NSA), the Federal Bureau of Investigation (FBI), and the Cybersecurity and Infrastructure Security Agency (CISA) teamed up to produce the report “Contextualizing Deepfake Threats to Organizations,” warning of the rapidly growing threat these emerging technologies pose to banks and other large-scale U.S. infrastructure.

What are deepfakes?

Deepfakes use artificial intelligence/machine learning (AI/ML) to create believable, highly realistic manipulated text, video, audio, and images, which can be deployed for a variety of malicious purposes online and across communications of all types. For banks, these techniques can be used to impersonate top executives and damage an organization’s brand through fraudulent communications designed to extract sensitive information from employees.

While tools and techniques for manipulating authentic multimedia have been around for decades, a sophisticated “fake” that once took days or weeks to produce with older software and slower computing can now be put together in a matter of hours.

Deepfake impacts

The most common forms of deepfake social engineering, according to U.S. government agencies, include:

  • Fraudulent texts
  • Fraudulent voice messages
  • Faked videos

Some of the simpler fakes include:

  • Selectively copying and pasting content from an original scene to remove an object in an image and thereby change the story.
  • Slowing down a video by adding repeated frames to make it sound as if an individual is intoxicated.
  • Combining audio clips from a different source and replacing the audio on a video to change the story.
  • Using false text to push a narrative and cause financial loss and other impacts.

Some of the more sophisticated deepfakes have included:

  • LinkedIn experienced a huge increase in deepfake images for profile pictures in 2022.
  • An AI-generated image depicting an explosion near the Pentagon, a product of “AI hallucination” (made-up information that may seem plausible but is not true), was shared around the internet in May 2023, causing general confusion and turmoil on the stock market.
  • A deepfake video showed Ukrainian President Volodymyr Zelenskyy telling his country to surrender to Russia.
  • More recently, several Russian TV channels and radio stations were hacked to air a deepfake video of Russian President Vladimir Putin in which he purportedly declared martial law in response to a Ukrainian invasion of Russia.
  • Fully synthetic videos produced by AI text-to-video diffusion models.
  • In 2019, deepfake audio was used to steal $243,000 from a UK company.

How banks can fight deepfakes

Banks and other organizations can take a variety of steps to prepare to identify, defend against, and respond to deepfake threats:

  • Select and implement deepfake-detection technologies, such as real-time verification capabilities and procedures.
  • Use reverse image searches, such as TinEye, Google Image Search, and Bing Visual Search, which can be extremely useful if the media is a composite of images (a brief sketch of this kind of check follows this list).
  • Consider plug-ins that detect suspected fake profile pictures or other images.
  • Protect the public data of high-priority individuals, and use active authentication techniques such as watermarks and/or Content Authenticity Initiative (CAI) standards.
  • Ensure plans are in place among organizational security teams to respond to a variety of deepfake techniques.
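
As a rough illustration of one kind of automated check a security team might build, the sketch below compares a suspect image against a trusted reference copy (for example, one located through a reverse image search) using perceptual hashing. This is a minimal sketch, not a method prescribed by the NSA/FBI/CISA report; the open-source Pillow and imagehash libraries, the file names, and the distance threshold are assumptions chosen purely for illustration.

```python
# Minimal sketch: flag an image whose perceptual hash differs noticeably
# from a trusted reference copy. Assumes Pillow and imagehash are installed
# (pip install Pillow imagehash); paths and threshold are illustrative only.
from PIL import Image
import imagehash


def looks_manipulated(suspect_path: str, reference_path: str, threshold: int = 8) -> bool:
    """Return True if the suspect image's perceptual hash is far from the reference's.

    Perceptual hashes change little under resizing or recompression, but they
    shift when content is added, removed, or composited, so a large Hamming
    distance is a signal (not proof) that the image was altered.
    """
    suspect_hash = imagehash.phash(Image.open(suspect_path))
    reference_hash = imagehash.phash(Image.open(reference_path))
    distance = suspect_hash - reference_hash  # Hamming distance between 64-bit hashes
    return distance > threshold


if __name__ == "__main__":
    # Hypothetical file names for illustration only.
    if looks_manipulated("press_photo_received.jpg", "press_photo_original.jpg"):
        print("Image differs substantially from the reference; escalate for review.")
    else:
        print("Image is perceptually close to the reference.")
```

In practice, a check like this would be only one layer of the verification procedures the report describes, alongside provenance standards such as CAI watermarking and human review.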

Google, CNBC, and other media outlets have begun to label photos and videos that have been manipulated by AI. While the number of deepfakes on the internet hovered around 15,000 in 2019, today it is well over a million.

“Businesses need to view this as ‘the next generation of cyber security concerns,’” says Matthew Moynahan, chief executive of authentication provider OneSpan, speaking to the Financial Times. “We’ve pretty much solved the issues of confidentiality and availability; now it’s about authenticity.”

To view the entire report, “Contextualizing Deepfake Threats to Organizations,” click here.
