Ever see a science news item that makes your jaw drop? I love it when that happens.
That’s what happened to me when I read the title of this item that Jay and Evan sent me:
Researchers at the ATR Computational Neuroscience Laboratories in Japan are able to deduce what someone is seeing purely from brain imaging.
The private institute said in a statement, "It was the first time in the world that it was possible to visualize what people see directly from brain activity."
Chief researcher Yukiyasu Kamitani said, "By applying this technology, it may become possible to record and replay subjective images that people perceive, like dreams."
So how did they do it?
Initially, they showed each subject 400 black-and-white calibration images, each consisting of a 10 × 10 grid of pixels, for 12 seconds apiece. They then used fMRI (functional magnetic resonance imaging) to detect the blood-flow changes in the subject's visual cortex for each image. This is the key step: it's what calibrates the software to each individual brain.
Finally, they displayed the images to be reconstructed: in this case, the letters N-E-U-R-O-N.
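To get an intuitive feel for how such a calibration step could work, here's a toy sketch in Python. Everything in it is invented for illustration: the voxel count, the noise level, and the simple ridge-regression decoder are my assumptions, not necessarily the researchers' actual method. It simulates voxels responding linearly to a 10 × 10 image, "calibrates" a decoder on 400 training images, and then reconstructs an image the decoder has never seen.

```python
import numpy as np

rng = np.random.default_rng(0)

n_pixels = 100   # 10 x 10 grid, as in the experiment
n_voxels = 300   # hypothetical number of visual-cortex voxels measured
n_train = 400    # calibration images, matching the study

# Hypothetical linear encoding model: each voxel's response is a
# weighted sum of pixel intensities plus measurement noise.
encoding = rng.normal(size=(n_pixels, n_voxels))

train_imgs = rng.integers(0, 2, size=(n_train, n_pixels)).astype(float)
train_bold = train_imgs @ encoding + 0.1 * rng.normal(size=(n_train, n_voxels))

# Calibration step: fit a ridge-regression decoder that maps voxel
# activity patterns back to pixel values (closed-form solution).
lam = 1.0
A = train_bold
W = np.linalg.solve(A.T @ A + lam * np.eye(n_voxels), A.T @ train_imgs)

# Reconstruction step: decode a new image never shown during calibration.
test_img = rng.integers(0, 2, size=n_pixels).astype(float)
test_bold = test_img @ encoding + 0.1 * rng.normal(size=n_voxels)
recon = test_bold @ W

# Fraction of the 100 pixels recovered correctly after thresholding.
accuracy = np.mean((recon > 0.5) == (test_img > 0.5))
```

The point of the sketch is the workflow, not the math: without the calibration phase (fitting `W` to this subject's responses), the decoder has nothing to go on, which is exactly why the real system had to be tuned to each brain before it could reconstruct anything.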
The reconstructed images were black and white and low-res, but they are unmistakably correct. There's no pareidolia going on here.
Some articles I read stated something along the lines of: "Brain scanning can now extract information directly from the brain."
This is correct, but it has to be interpreted very carefully. It doesn't mean that Big Brother can now read your mind and find anything it wants in your biological hard drive.
The machine has to be calibrated to the quirks of your brain, and you have to be actively seeing something for it to be reconstructed. As impressive as this feat is, declaring that dream VCRs are around the corner was definitely sensationalistic. This was a proof-of-concept experiment. Such a refined application of this technology will definitely not be ready in time for this holiday season.
Don't forget, the test subjects had to look at 400 pictures for 12 seconds each to calibrate the software to their brains. That's 80 minutes (400 × 12 seconds) just to get the fuzzy images above. Who knows what calibration it would take to get color moving images at a relatively high resolution. Imagine calibrating for hours or even days. It also makes sense to assume that we'll eventually need much higher-resolution fMRIs or other imaging devices as well.
The next step, though, is obvious: can we reconstruct what people are thinking instead of just what they're seeing? That's where the fun really begins.
Imagine the fate of people who are "locked in": so paralyzed that only their eyes can be voluntarily controlled. This technology would be a boon for them.
I’m sure that law enforcement and the military would love to get their hands on a device like this as well.
Let me know what you think about some of the far out applications of this technology.
What about even further in the future?
ATR chief researcher Yukiyasu Kamitani says, "This technology can also be applied to senses other than vision. In the future, it may also become possible to read feelings and complicated emotional states."
I thought mood rings already did that.