Will Deepfakes Be a Cyber Threat in 2021?
Deepfakes have received a lot of attention in recent years, but their use by cybercriminals has been relatively limited to date. That delay gives individuals and organizations time to familiarize themselves with deepfakes (also known as synthetic content): what they are, how they work, and the threats they pose.
The term deepfake was coined in 2017 by a Reddit user with the screen name “deepfakes”. Think of it as the evolution of forgery with a twist of artificial intelligence. Any digital media asset (imagery, video, or audio) created or manipulated with the assistance of AI qualifies as a deepfake.
A few examples:
- Video. Facebook CEO Mark Zuckerberg admitting to misdeeds and touting the power of Facebook. This deepfake was made by digitally altering a legitimate video of Zuckerberg with fake dialogue.
- Audio. Uber-realistic, AI-generated audio cloning Joe Rogan’s voice. Listeners can’t reliably tell clips of the real Joe Rogan from the deepfake.
- Advertising. Political ads by the group RepresentUs used deepfakes of Russian President Vladimir Putin and North Korean leader Kim Jong-un stating they would not interfere in U.S. elections.
Deepfakes are not exceptionally hard to create, and the technology is evolving in ways that will make it much easier. Accessible software already enables real-time deepfakes of well-known faces, including Steve Jobs, Eminem, Albert Einstein, and the Mona Lisa, on video conferencing platforms.
While doctored videos or photos are sometimes labeled deepfakes, actual deepfake files are typically created using algorithms that make composites of existing footage; they effectively learn to identify faces and voices and then combine them to create new content. A website called This Person Does Not Exist demonstrates the potential of this technology by presenting eerily lifelike photos of fictional people, assembled in real time by amalgamating thousands of photos.
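To make the “amalgamating thousands of photos” idea concrete, here is a deliberately crude toy sketch in Python. It is not a real deepfake pipeline (actual tools train generative adversarial networks on large datasets); it only shows the basic notion of building a composite from many aligned source images, and the image sizes and pixel data below are invented stand-ins.

```python
import random

random.seed(42)
W = H = 8          # tiny 8x8 "photos" to keep the toy fast
N = 1000           # stand-in for thousands of source images

# Invented aligned grayscale photos: N images, each an HxW grid of pixels.
photos = [[[random.randint(0, 255) for _ in range(W)] for _ in range(H)]
          for _ in range(N)]

# Crude composite: the pixel-wise average across every source photo.
# Real systems instead learn facial features and synthesize new pixels.
composite = [[sum(photos[k][i][j] for k in range(N)) / N for j in range(W)]
             for i in range(H)]
```

Averaging many faces yields a blurry, generic face; the leap that GAN-based tools made was replacing this naive blend with a learned model that can generate sharp, novel faces from the same kind of source catalog.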
How Big of a Cybersecurity Threat Are Deepfakes?
The use of deepfakes to deceive is what makes them a serious threat. A 2018 deepfake video of Barack Obama, synced to an audio track created by comedian Jordan Peele, sparked concerns about potential election interference and led to increased demands that technology companies more actively filter out such content.
Don’t Ring the Alarms Yet
Today, deepfake technology is primarily used for viral videos and adult content. The threat of high-tech cyber-espionage that has worried computer scientists, security experts, and politicians alike has yet to materialize.
One of the reasons deepfakes haven’t reached their full threat potential is the way they are generated: Complicated deep learning and AI algorithms are required to process the vast amounts of sample content needed to generate a convincing deepfake.
For now, the subjects of deepfakes will continue to be famous people: politicians like Barack Obama, executives like Mark Zuckerberg, and entertainers. Hundreds, if not thousands, of hours of video of these people are available to train a deepfake. No such image and audio catalogs exist for the average scam or cyberattack target, which limits how much an AI program can learn.
There’s another factor limiting the spread of deepfakes: scammers don’t need them. There are plenty of low-tech ways to fool people. A 2019 video of Nancy Pelosi, widely mislabeled a deepfake, was viewed by millions and retweeted by President Trump; it was simply a speech the teetotaling Speaker of the House had given earlier, played back at a slower speed. Likewise, the audio track in a widely distributed deepfake of then-President Obama wasn’t compiled by AI but recorded by a skilled impersonator.
Deepfakes Are Evolving
As always, scammers are looking for opportunities, and they don’t need high-tech solutions. They will often cold-call targets pretending to be relatives, supervisors, co-workers, or tech support. Providing a target with a sense of urgency combined with a convincing story is all a scammer needs to get someone to install malware, assist in the commission of wire fraud, or surrender sensitive information.
“There is a broad attack surface here — not just military and political but also insurance, law enforcement, and commerce,” said Matt Turek, a program manager at the Defense Advanced Research Projects Agency (DARPA), in comments to the Financial Times.
These concerns were validated in 2019 when criminals used deepfake audio to scam a CEO out of $243,000. The unnamed UK executive was fooled into wiring money to someone claiming to be the chief executive of his parent company. The victim said the caller had convincingly replicated his employer’s German accent and “melody.”
The barriers for scammers looking to create convincing digital frauds will inevitably fall. As deepfakes grow in popularity, expect new apps that produce faster, more convincing, and cheaper digital fakes.