
6 Things You Probably Didn’t Know About Deepfakes

Feb 24, 2020

We sat down with our Chief Data Officer, Dave Costenaro, to get some details surrounding the popular topic of deepfakes. If you’re interested in learning where they originated, how to protect against them, and what the future conversation might look like, this article has you covered and then some. 

1. What are deepfakes?

A deepfake is a deep neural network that's very adept at generating, editing, or interpreting high-bandwidth media. A deepfake attack could include editing or modifying an audio, video, or image file to alter its meaning. For example, a deep neural network can take someone's voice and apply a different style to it, so they sound like someone else.

Another deepfake tactic is to take a video of someone standing still and combine it with a video of someone dancing. This can make the dancing person appear to stand still, and vice versa.

2. How long have deepfakes been around?

Deep faking started in a dark corner of the internet in 2012. Today it's known for gimmicky memes, but also for serious trust and privacy violations. One of the most famous deepfakes is the video of Mark Zuckerberg in which his voice was dubbed over so that he appears to taunt Facebook users about access to their personal data. There's also a deepfake video of Nancy Pelosi that slows down her voice and movements to make her sound drunk and disoriented.

There has been serious concern around the 2020 election since these clips of high-profile deepfakes went viral. As humans, the fast-operating parts of our brains automatically anchor in a perception and snap to judgment, even when we know a video is completely fake. Because millions of people can be rapidly exposed to these deepfake videos, an attack on a candidate could cause groundless reputational damage and polling swings.

3. In a recent article, you were quoted warning companies to “beware of the impact that deepfakes will have on identity verification and security in general.” What are the potential threats?

An unnamed UK-based energy firm recently fell victim to a deepfake attack that spoofed the CEO's voice in a phone call to the CFO. In the call, the fake CEO seemingly asked the CFO to authorize a payment to a foreign supplier. The CFO did so, which cost the company a lot of money and proved out a security-risk scenario for organizations everywhere.

Companies with big troves of data are the most at risk. The most obvious security practice to put in place is dual authorization. Rather than relying on one-step verification like a phone call, organizations should also require a second communication from a trusted source before releasing funds or information. In the case of the phishing attack above, if the CFO and the Chief Accountant, for example, had both been required to authorize the money transfer, the attack might have been thwarted.
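The second-approver idea is simple enough to sketch in a few lines. Everything below — the `PaymentRequest` class, the role names, the amount — is hypothetical and illustrates the control only, not any real payments API:

```python
# Minimal sketch of a dual-authorization check: funds are released only
# after every required role has independently signed off.

class PaymentRequest:
    def __init__(self, amount, payee):
        self.amount = amount
        self.payee = payee
        self.approvals = set()  # roles that have signed off so far

    def approve(self, role):
        self.approvals.add(role)

    def can_release(self, required_roles=frozenset({"CFO", "ChiefAccountant"})):
        # True only when all required roles have approved the request.
        return required_roles <= self.approvals

req = PaymentRequest(200_000, "foreign-supplier")
req.approve("CFO")             # a spoofed phone call convinces one approver...
assert not req.can_release()   # ...but a single approval is not enough
req.approve("ChiefAccountant")
assert req.can_release()       # independent second sign-off releases funds
```

A spoofed voice can fool one person on one channel; requiring a second, independent approver forces the attacker to compromise two channels at once.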

Companies with a large presence online should be conscious of PR exposure until device and platform providers have put security protocols in place to detect and defend against deepfakes.

4. What protocols do you think these providers could put in place?

I think it would be most efficient to set up security protocols on the devices and platforms themselves, rather than expecting the individuals and companies that use those devices to come up with their own defenses. It's a very sophisticated problem that should be dealt with first and foremost at the platform level.

Currently, Facebook, Twitter, and LinkedIn all have teams focused on this topic and formulating plans to prevent it. Several methods are being researched to address deepfakes technologically, such as embedding digital watermarks or fingerprints, or placing signatures inside file metadata. However, this often requires a file to be registered up front, so viewers know that it's coming from a valid device.
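As a toy illustration of the signature-in-metadata idea, here's a sketch in which a registered capture device tags media bytes with a keyed hash and a platform verifies the tag later. Real proposals use public-key signatures and device registries; the key, field names, and HMAC scheme below are assumptions for illustration only:

```python
# Toy sketch: a "registered device" signs media bytes, and a platform
# later checks that the signature in the metadata still matches the file.

import hashlib
import hmac

DEVICE_KEY = b"registered-device-secret"  # hypothetical pre-registered key

def sign_media(media_bytes: bytes) -> dict:
    tag = hmac.new(DEVICE_KEY, media_bytes, hashlib.sha256).hexdigest()
    return {"media": media_bytes, "meta": {"signature": tag}}

def verify_media(f: dict) -> bool:
    expected = hmac.new(DEVICE_KEY, f["media"], hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, f["meta"]["signature"])

clip = sign_media(b"raw video frames")
assert verify_media(clip)        # untouched file checks out
clip["media"] = b"edited frames"
assert not verify_media(clip)    # any edit invalidates the signature
```

The point is the workflow, not the crypto: once a file is signed at capture time, any later deepfake-style edit breaks the signature, so a platform can flag the clip as modified.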

In a recent Twitter poll, I asked users if they would register their own digital fingerprint with social media apps to ensure posts in the network are authentic. About half said yes, while the rest were a mixed bag that included different variations of no:

  • No, I just won’t post images or videos.
  • No, delete my account.
  • Idk, what is a deepfake?

Because the responses were so mixed, I concluded that platform providers won't voluntarily put up security protocols for users, because doing so would decrease user engagement, advertisement clicks, and profits. The government would likely need to step in with regulations if this approach were to move forward.

5. How can you detect deepfakes?

As mentioned previously, there's a lot of research going on in this area, and researchers are working out more sophisticated ways to fight deepfakes.

It used to be that cruder models were easier to detect, just like amateur Photoshop jobs. However, as with Photoshop, the techniques have advanced, so jaw movement, eye blinking, and face motion have become very smooth and realistic over time. Now it's important to mathematically analyze subtle details and anomalies like the statistical distribution of color or the pacing of the voice.
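A toy version of that kind of statistical check: bucket a clip's pixel values into a coarse color histogram and flag clips whose distribution diverges sharply from a reference. Real detectors use far richer learned features; the histogram, distance measure, and threshold below are purely illustrative:

```python
# Toy illustration of flagging statistical anomalies in color distribution.

from collections import Counter

def color_histogram(pixels, bins=8):
    # Bucket 0-255 channel values into coarse bins, normalized to sum to 1.
    counts = Counter(p * bins // 256 for p in pixels)
    total = len(pixels)
    return [counts.get(b, 0) / total for b in range(bins)]

def divergence(h1, h2):
    # Simple L1 distance between two normalized histograms.
    return sum(abs(a - b) for a, b in zip(h1, h2))

# Tiny made-up pixel samples standing in for real video frames.
reference = color_histogram([10, 12, 200, 210, 90, 95, 100, 110])
candidate = color_histogram([0, 0, 0, 255, 255, 255, 128, 128])

suspicious = divergence(reference, candidate) > 0.5  # illustrative threshold
```

In practice the reference distribution would come from known-genuine footage of the same person or camera, and the features would be far subtler than raw color counts, but the shape of the test is the same: measure, compare, flag outliers.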

Unfortunately, deepfakes are getting better too, and these differences are ever more subtle. At some point, they will probably become indistinguishable based on analyzing an image or audio. This is especially true if there are no security protocols put in place at the platform level.  

6. Do you see the conversation around deepfakes changing?

Yes, but I don’t think we’ve seen a level-10 emergency deepfake that has really captured the public’s attention or outrage. I hope that never happens, but until it does, deepfakes won’t become a first-tier concern. If it does happen, the conversation will become more prominent and I think people will want to regulate it.

If a level-10 emergency doesn’t happen, I think the conversation will stay where it is. There will continue to be isolated companies and individuals attacked by deepfakes, and maybe a new insurance market will develop to cover deepfake losses. Ultimately, it’s an arms race between the deepfakers and the detectors.

As technology advances, so do the risks.  For organizations or individuals, it’s important to keep security practices top of mind when working with any vendor.