Jul 19 2024

Deepfake Doctor Endorsements

This kind of abuse of deepfake endorsements was entirely predictable, so it’s not surprising that a recent BMJ study documents the scale of this fraud. The study focused on the UK, detailing instances of deepfakes of celebrity doctors endorsing dubious products. For example, there is this video of Dr. Hilary Jones used to endorse a snake oil product claiming to reduce blood pressure. The video is entirely fake. It’s also interesting that in the video the fake Jones refers only to “this product” – as if the deepfakers made a generic endorsement (à la Krusty the Clown) that could then be attached to any product.

This trend is obviously disturbing, although again entirely expected. This use of deepfakes is deliberate fraud, and should be treated as such. Public figures have a right to their own identity, including their name and likeness. Laws vary by country and by state, but most have some limited protections for the use of someone’s name or likeness. In the US, for example, there is a limited “right of publicity” which restricts the use of someone’s name or likeness for commercial purposes without their permission. This can also extend beyond death, with the estate holding the rights. Even imitating a recognizable voice has been the basis of successful lawsuits.

This means that using a deepfake clearly violates the right of publicity – in fact it is the ultimate violation of that right. There are generally three legal remedies for violations – monetary damages, injunctive relief, and punitive damages.

How good are the deepfakes? Good enough, especially if you are viewing a relatively low-res video on social media. And of course they are only getting better. We cannot wait until deepfakes are good enough to fool most people; right now they are high enough quality to constitute fraud. So what do we do about it?

Most of the articles I read about this study were fatalistic and put the onus on the user, such as:

For those whose likenesses are being co-opted, there’s seemingly very little they can do about it, but Stokel-Walker offers some tips on what to do if you find a deepfake. For instance, take a careful look at the content to make sure your suspicions are well-founded, then leave a comment questioning its veracity. Use the platform’s built-in reporting tools to voice your concerns, and finally report the person or account that shared the post.

This question comes up with almost every type of fraud – do we deal with the situation by educating the public to spot the fraud and protect themselves, or do we try to control the fraud through regulation? I think we should always do both. It is better for individuals to be savvy, to take steps to protect themselves from all kinds of fraud, and to report instances of fraud. We can improve this through education – teaching the public about each specific type of fraud, but also teaching generic critical thinking skills and media savvy.

But overall the “personal responsibility” approach to public problems has very limited success, in pretty much every context. This doesn’t mean we should not optimize personal responsibility, just that we need to recognize the results will always be limited. Expecting (in the US, for example) 300 million people to always do the right thing is unrealistic. This is true when it comes to public health, carbon footprint, litter, fraud detection, and other issues.

We also have to consider the “personal responsibility burden” of society (I just made up that term, but I think it’s a useful concept). Going through our day-to-day lives, how much mental energy do we need to expend in order to navigate all of our personal responsibilities? If we imagine a hypothetical world with zero regulations, one that is driven 100% by market forces, is this a world that anyone would want to live in? You would be responsible for evaluating and validating the safety and efficacy of every medical product you use, of the safety of the cars you drive in and the roads and bridges you drive on, of all commercial claims for every product, of the quality and safety of your food, and of the legitimacy of services you pay for. It would be overwhelming.

Also, keep in mind, that for each and every product or service there would be an industry with lots of time, money, and resources to craft effective scams, while you would have the burden of fending off thousands of such attempts to deceive you. The asymmetry is massive. Sure, there would likely be consumer protection organizations to provide reviews and investigations, but can you trust them? Industries would (and do) just make up their own fake consumer protection groups, or seals of approval, or whatever mechanism is used to help consumers evaluate products, in order to promote their own products.

I guess consumers could act collectively by funding their own organizations that would be evidence-based and transparent, and rely mostly upon experts to provide the information they need. But of course, that’s essentially what the government is, except with more teeth.

To be clear, I am not trying to factor personal responsibility out of the equation, but I think it’s unavoidable that society functions better if there is at least a minimal safety net. Our technological society is simply too advanced and specialized for anyone to have the ability to take responsibility for everything they may encounter or depend upon. There should be minimal standards for food safety, regulations against outright fraud, and at least reasonable safety standards more broadly.

The discussion should revolve around where the limits of regulation are: how regulations can be made most effective, how decisions are made and implemented, how to avoid unintended negative consequences, and how to guard against gaming the system. We also have to consider the total regulatory burden (a term I did not coin, as it is a concept long promoted by industry). There is a balance to be struck, and we need to consider the ROI of any regulation. This means having a system that is dynamic and responsive. Regulations should be transparent, minimalist, evidence-based, and self-correcting. This means there need to be mechanisms for evaluating the burden and effectiveness of regulations, and for lobbying for changes where necessary.

Getting back to deepfakes – we also have to imagine a world overwhelmed with fake information content. We are essentially already there. I wrote recently about the culture of TikTok, in which driving engagement is prioritized almost entirely over truth and accuracy. It is the end-stage of “infotainment”. Deepfakes like the ones in this study are worse – they are not merely optimized for engagement, they deliberately deceive the viewer in order to defraud them, while stealing the public persona of celebrities.

I am not fatalistic about this phenomenon. Rather, for all the reasons I stated above, I think this is an issue that needs to be dealt with through thoughtful and tough regulations. In terms of the remedies above, we need to make sure we have the legal resources to impose swift and accurate injunctive relief. I also think that “punitive damages” should be extensive – greater than any possible economic benefit from committing the fraud. It can’t just be the cost of doing business. Being caught using a deepfake to defraud the public should be ruinous. I also think that prison time should be on the table. People do go to prison for fraud.

Also, think about it this way. We have an entire highway safety infrastructure. We recognize that highways can be dangerous, and we need to protect the public. Right now I would argue that the most dangerous venue for the public is the internet. We need to start taking cybersecurity, in all its forms, much more seriously. We lead an increasing amount of our lives online. We do business online, and we get our information online. We need a cybersecurity infrastructure, including appropriate regulations, that makes our online lives reasonably safe and secure. Right now I am personally under a daily assault by countless scams – through calls, texts, e-mails, and websites. It’s constant. I am vigilant, but even I at times can be temporarily blindsided by a new type of scam. My elderly mother has no chance of protecting herself from the onslaught, so her kids have to do it for her. And we have to essentially limit her online interactions.

Online fraud cost Americans over 10 billion dollars in 2023, and that is likely an underestimate. Some individuals are wiped out. This is a serious issue that cannot be tackled through PSAs. This is why it was disappointing that so many articles on the deepfake study focused exclusively on personal responsibility. Rather, we need to demand effective regulations.
