Jul 30 2019

The Challenge of Deepfakes

You have probably heard of deepfakes by now – convincing video manipulation that is rapidly getting both better and easier to produce. Right now you can often tell when a video has been manipulated. The human eye is very sensitive to movement, facial expressions, and other subtle cues. But the best examples are getting more difficult to spot, and experts predict there will be deepfakes undetectable by most people within a year.

How will this affect the world? Most reports I read simply say that people won’t be able to trust videos anymore (we already can’t trust photos). But that doesn’t fully capture the situation. People knowing that video can be manipulated won’t really solve the problem.

The problem is psychology – people can be primed and manipulated subconsciously. Let’s say, for example, that you see a video of a famous person committing a horrible crime, or saying something terrible. Even if you know such videos can be faked, or later hear the claim that the video was fake, the images may still have an emotional effect. They become part of your subconscious memory of that person.

Human memory will also contribute to this effect. We are better at remembering claims than remembering where we heard them (source amnesia) or whether they are true (truth-status amnesia). Seeing a dramatic video of a person doing something horrible will stick with you, and will be much more vivid than later information about a forensic examination of the video.

Also, you can bet that the mere existence of undetectable (to the naked eye) deepfakes will have the net effect of causing uncertainty. When I was discussing this issue recently with George Hrab, he correctly pointed out that this cuts both ways – it means that fake videos will be mistaken for real, but also real videos can be dismissed as fakes – the video version of “fake news.” Therefore everyone can choose to believe or dismiss any video evidence based upon their tribal affiliation.

We don’t really have to speculate about this, because we already have deepfakes for the written and spoken word. Anyone can write or speak fake news, and there may be no way to tell whether the information is correct without independent verification. So you have the spread of misinformation, and simultaneously the dismissal of real information as fake. The end result is that everyone can live in a bubble of their own tribal narrative. There is no shared reality across partisan lines. Deepfake videos will just exacerbate this already existing situation.

Deepfake videos may make it much worse, however, because videos can have a more immediate and visceral impact. The release of a deepfake video could theoretically cause the stock market to tumble, throw an election, or ruin a career – long before forensic analysis can determine whether the video is real, and before that determination can be disseminated. And of course, depending on your partisan bias, the original video vs the later analysis can have more or less prominence in your reporting.

So what do we do? This is a situation where I think the government, tech, and social media companies need to get together and work on solutions to at least minimize the impact of deepfakes. First, of course, is the development of software that can reliably detect deepfakes. This already exists, using cues like lighting and facial expressions to determine if a video has been altered. But this is an arms race – any method developed to detect deepfakes can in turn be used to improve the deepfakes so they circumvent detection.
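To make that kind of detection concrete, here is a minimal sketch in Python of what frame-level screening might look like. This is illustrative only – the scoring function is a hypothetical stand-in for a real forensic model trained on cues like lighting and facial geometry, and the 20% threshold is arbitrary.

from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Verdict:
    suspicious_frames: int
    total_frames: int

    @property
    def likely_fake(self) -> bool:
        # Flag the video if enough frames fail the consistency check.
        return self.suspicious_frames / max(self.total_frames, 1) > 0.2

def screen_video(frames: Iterable[bytes],
                 consistency_score: Callable[[bytes], float],
                 threshold: float = 0.5) -> Verdict:
    """Score each frame for physical consistency (lighting, facial
    geometry, etc.) and count those that fall below the threshold."""
    suspicious = total = 0
    for frame in frames:
        total += 1
        if consistency_score(frame) < threshold:
            suspicious += 1
    return Verdict(suspicious, total)

The arms-race problem is visible even in this toy version: whatever cues the scoring function measures are exactly the cues a deepfake generator can be trained to reproduce.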

Even if the detection algorithms stay one step ahead of the deepfake software, as I stated above, that won’t stop the potential effects of releasing emotionally impactful (if fake) videos. They really need to be stopped in real time. So companies are working on various approaches, such as building into digital video hardware (like smartphones) the ability to authenticate and stamp videos that are genuinely produced and not manipulated. Social media can also use real-time detection of deepfakes and then delete them, or at least prevent their spread.
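As a rough illustration of the stamping idea, here is a minimal sketch using the Python cryptography library. It assumes the private key lives in the phone’s secure hardware and that the manufacturer publishes the matching public key; the function names are mine, not any real device API.

import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)

# Illustrative only: a real device would keep this key in
# tamper-resistant hardware, never in application memory.
device_key = Ed25519PrivateKey.generate()

def stamp_video(video_bytes: bytes) -> bytes:
    """Sign a digest of the video at capture time."""
    return device_key.sign(hashlib.sha256(video_bytes).digest())

def verify_stamp(video_bytes: bytes, stamp: bytes,
                 public_key: Ed25519PublicKey) -> bool:
    """Check the video against its capture-time stamp. Any
    post-capture manipulation changes the digest and fails."""
    try:
        public_key.verify(stamp, hashlib.sha256(video_bytes).digest())
        return True
    except InvalidSignature:
        return False

The appeal of this approach is that an edited video no longer matches its stamp, so manipulation is detectable without any forensic analysis at all – it only works, of course, for video that was stamped at capture in the first place.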

So technology can use a combination of authentication and fake detection to allow or promote genuine videos and ban or impede fake ones. There might even be a default setting on smartphones that will not play videos that are not authenticated, or that don’t pass a quick forensic analysis. If you still want to play the video, you can do a more thorough analysis. And then if you still want to view what is likely a deepfake, you can, but with warnings and watermarks showing that the video is probably fake.
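A default-deny playback policy along those lines might look like the following sketch, which ties together the authentication and screening ideas above. The tiers and their order are my assumptions about how such a setting could behave, not a description of any existing product.

from enum import Enum, auto

class Playback(Enum):
    PLAY = auto()               # authenticated: play normally
    PLAY_SCREENED = auto()      # unauthenticated but passed forensics
    PLAY_WITH_WARNING = auto()  # user insisted: watermark as likely fake
    BLOCK = auto()

def playback_policy(authenticated: bool,
                    passes_forensics: bool,
                    user_override: bool) -> Playback:
    """Default-deny: an unauthenticated video has to earn playback."""
    if authenticated:
        return Playback.PLAY
    if passes_forensics:
        return Playback.PLAY_SCREENED
    if user_override:
        return Playback.PLAY_WITH_WARNING
    return Playback.BLOCK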

I don’t think any such measures will eliminate the problem, but they can minimize it. Similarly, we cannot eradicate fake news, or the use of the accusation of fake news to dismiss real but inconvenient news. But certainly no one is obligated to make it easy to spread fake videos, or to help those who do. Social media algorithms can legitimately use the authentication of pictures or videos as a criterion for ranking and promotion. At the very least, a warning that a video is likely fake should follow that video everywhere.
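To sketch how that criterion might plug into a ranking algorithm – with entirely made-up weights, since real ranking systems are far more complex:

def rank_score(engagement: float,
               authenticated: bool,
               fake_probability: float) -> float:
    """Hypothetical ranking signal: promote authenticated media and
    demote anything a detector flags as probably fake."""
    score = engagement
    if authenticated:
        score *= 1.5                          # boost verified capture
    return score * (1.0 - fake_probability)  # impede likely fakes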

Mainstream news outlets also need to incorporate verification in their vetting process, and never show fake videos, not even to report on them.

The more difficult question, but one I think we need to explore, is whether there should be a criminal approach to the problem. Should it be illegal to knowingly post deepfake videos in a public forum? One can reasonably argue that creating or posting a known deepfake of someone is a form of libel. It could be considered libel per se – meaning that it is automatically considered libel, without the need to prove harm or malicious intent. This concept already exists – in many jurisdictions, for example, falsely accusing someone of being a pedophile is considered libel per se. Perhaps a similar approach needs to be taken for deepfakes, given their potential for harm to our economy, our democracy, and to individuals. This is complicated, and so I don’t have a firm opinion on it, but I would like to see legal scholars debate it.

Deepfakes need to be recognized for the potential threat they are, but they are also only part of a larger threat – the death of expertise, verifiable fact, and a shared reality. This is just one more step toward a world in which all information is seen as equal, and the common square is overwhelmed with opinion without any shared facts to use for common ground.
