Feb 23 2016

Identifying Real or Fake Images

For anyone active on social media it is almost a daily occurrence that a photo being passed around as if it were real is revealed as a fake. In fact, if you don’t want to look silly, it’s a great idea to Google before you share. A basic search is often all that is necessary, and if the photo is fake it is very likely that Snopes has you covered.

For the more intrepid, you can also use reverse image search websites. These will find matches to the photo you select, which can often reveal the original photo that was “photoshopped” to create that iconic representation of whatever ideology is being promoted.
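Under the hood, reverse image search engines need a way to recognize that an edited photo is “close to” an original. One common family of techniques is perceptual hashing. Here is a minimal sketch of one such method, average hashing, using tiny made-up grayscale “images” rather than real photos (the data and function names are illustrative, not any particular engine’s API):

```python
# Sketch of perceptual "average hashing," one technique a reverse image
# search engine can use to find near-duplicate images. The pixel lists
# below are toy 4x4 grayscale images (values 0-255), not real photos.

def average_hash(pixels):
    """Hash an image to a list of bits: each bit is 1 if that pixel
    is brighter than the image's mean brightness."""
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming(h1, h2):
    """Count differing bits; a small distance means similar images."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

original = [200, 190, 60, 50, 210, 185, 55, 45,
            205, 195, 58, 48, 198, 188, 52, 42]
# A "photoshopped" copy: one local edit leaves most of the
# bright/dark structure intact, so the hash barely changes.
edited = [200, 190, 60, 50, 210, 100, 55, 45,
          205, 195, 58, 48, 198, 188, 52, 42]
unrelated = [10, 240, 15, 235, 12, 238, 11, 230,
             14, 236, 13, 233, 16, 239, 12, 231]

h0, h1, h2 = (average_hash(img) for img in (original, edited, unrelated))
print(hamming(h0, h1))  # small distance: likely a match
print(hamming(h0, h2))  # large distance: a different image
```

Real services use much more robust variants (scaled-down images, frequency-domain hashes, learned features), but the principle is the same: edits change the hash only a little, so the original can still be found.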

Some people have a better eye for photo manipulation than others. Sometimes context is all you need – if the photo seems too perfect to be true, it probably is.

The task of sniffing out fake photos, however (at least from a technical perspective), is getting more difficult. There are two basic ways to make a fake photo. The most common is to take a real photo and manipulate it. Just replace the words on that protest sign to say whatever dumb thing you want to mock.

It is also possible to make an entirely computer-generated (CG) image, or even video animation. As CG technology improves, detecting these fakes with the naked untrained eye is getting more difficult. Researchers at Dartmouth have been tracking just that, and they have just released their most recent findings.

The researchers are focusing on CG images of human faces. This is the most challenging to create because of the uncanny valley. The human brain has a tremendous capacity to detect subtleties of the human face, probably because of our need to be very sensitive to the facial expressions of others, and to recognize individuals under a variety of conditions. In neurological terms, there is a large part of the cortex dedicated to processing visual information about human faces.

What CG animators discovered is that as artificial human images get closer and closer to realistic, people tend to like them and relate to them more. However, when they get very close but not quite realistic, that affinity sharply drops (the uncanny valley) because the images start to look creepy. That effect goes away only when the images become almost indistinguishable from real faces.

This is why CG animated movies either focus on non-humans (robots, toys, aliens and insects) or they avoid the uncanny valley with cartoonish characters. The Polar Express is one notable exception – they went for realistic, and landed right in the middle of the uncanny valley.

As CG technology advances, however, it is slowly pushing through the uncanny valley, leading to realistic images and videos that are not always easy to tell apart from the real thing. Take a look at the photo at the top of this article – is it real or CG? Decide that before reading further.

The image is CG, but it’s pretty good. I actually think this one was not too hard to tell; there is something wrong with the eyes (it’s always the eyes). It’s especially easy if you see the real and CG photos side-by-side (see bottom of post). It also might be more difficult if the photo is not of someone famous with whom you are already familiar.

So how did the subjects do?

Observers correctly classified photographic images 92 percent of the time, but correctly classified computer-generated images only 60 percent of the time.

That means 40% of the time they mistook CG images for real images. That is almost a coin flip. At 50%, subjects would essentially be unable to tell the difference when looking at a CG image. It is interesting that when you are looking at a real image, you know it (at least 92% of the time).

The researchers then gave the 250 subjects some basic training in detecting CG images:

In a follow-up experiment, the researchers found that when a second set of observers was provided some training prior to the experiment, their accuracy on classifying photographic images fell slightly to 85 percent but their accuracy on computer-generated images jumped to 76 percent.

This was reported as an “increase in accuracy” but I don’t completely agree. It seems more like a shift toward calling everything CG – an increase in both true positives and false positives. Subjects called more fake photos CG, but they also called more real photos CG.
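The shift becomes concrete if you recast the reported accuracies as true- and false-positive rates, where the “positive” call is “this image is CG.” This is a quick back-of-the-envelope calculation from the numbers quoted above, not an analysis from the paper itself:

```python
# Accuracies reported in the study, before and after training.
before = {"real_correct": 0.92, "cg_correct": 0.60}
after = {"real_correct": 0.85, "cg_correct": 0.76}

for label, r in (("before training", before), ("after training", after)):
    true_pos = r["cg_correct"]         # CG images correctly called CG
    false_pos = 1 - r["real_correct"]  # real images wrongly called CG
    print(f"{label}: true positives {true_pos:.0%}, "
          f"false positives {false_pos:.0%}")

# Both rates rise with training (true positives 60% -> 76%,
# false positives 8% -> 15%): subjects shifted toward calling
# everything CG, rather than purely becoming more accurate.
```

Seen this way, training moved the subjects’ decision threshold (they became more willing to say “CG”) at least as much as it improved their actual discrimination.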

This is actually not uncommon – when people have beginner training in detection, they tend to go through a phase where they are biased toward positive detecting. With more training they then tend to weed out the false positives better.

Perhaps more important than how the subjects did on this study is how they compared to subjects in the same study from 5 years ago. The researchers report that subjects did much better previously, and that our ability to separate real from CG images is getting worse as CG technology improves (as one might expect).

The researchers discuss the real world implications of this fact, specifically as it relates to the regulation of child pornography in the US. In 1996 Congress passed a law making it illegal to own any explicit sexual representation of a minor. In 2002 the Supreme Court upheld the law, with the exception of completely CG images. They argued that since no real child was exploited, CG child pornography is protected free speech. In 2003 Congress responded by passing a new law making CG child pornography “obscene,” however in practice this does not have the same force as the law against real images.

So – it is now a matter of important legal concern whether or not an image or video depicting child pornography is CG or real.

I think it is a perfectly reasonable extrapolation of current trends, supported by this current research, to conclude that in the not-too-distant future it will become almost impossible for a human to tell the difference between a CG image and a real image. It will likely still be possible with computer analysis for quite some time, and it is interesting to speculate whether it will eventually become impossible even for computer analysis to tell the difference.

A defendant charged with possessing child pornography could always claim that they had good reason to believe the images were CG.

There are many more implications of this technology as well. The most obvious is – will studios forgo paying live actors once they can replace them with realistic CG characters? Of course this will happen; the question is to what extent.

There are myriad legal and scientific implications as well. A photograph as evidence might eventually become worthless.



