Jul 19 2019

Electronic Skin

This is another entry in my informal series on interfacing machines and the human brain. Yesterday I wrote about Neuralink, which is a project to develop electrodes to interface with the brain itself. Today I write about another incremental advance – in the July 17th issue of Science Robotics, researchers published, “A neuro-inspired artificial peripheral nervous system for scalable electronic skins.”

This seems to be a serious advance in the architecture of artificial skin.

“We demonstrate prototype arrays of up to 240 artificial mechanoreceptors that transmitted events asynchronously at a constant latency of 1 ms while maintaining an ultra-high temporal precision of <60 ns, thus resolving fine spatiotemporal features necessary for rapid tactile perception. Our platform requires only a single electrical conductor for signal propagation, realizing sensor arrays that are dynamically reconfigurable and robust to damage.”

This configuration is more scalable than current designs, which “are currently interfaced via time-divisional multiple access (TDMA), where individual sensors are sampled sequentially and periodically to reconstruct a two-dimensional (2D) map of pressure distribution.” With TDMA, as the number of sensors increases, the delay in signal processing grows. The new design does not suffer that limitation.
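To make the scaling difference concrete, here is a toy Python sketch – not the researchers’ actual protocol – comparing the two readout schemes. The per-sensor sample time is an assumption I made up for illustration; the constant 1 ms event latency is the figure reported in the paper.

```python
# Toy comparison (not the actual published protocol) of TDMA polling
# vs. asynchronous, event-driven readout for a tactile sensor array.

SAMPLE_TIME_US = 10.0  # assumed time to sample one sensor under TDMA

def tdma_worst_case_latency_us(num_sensors: int) -> float:
    """Under TDMA each sensor waits its turn in the polling cycle,
    so worst-case latency grows linearly with the sensor count."""
    return num_sensors * SAMPLE_TIME_US

def event_driven_latency_us(num_sensors: int) -> float:
    """In an event-driven scheme a sensor transmits only when stimulated,
    so latency is a roughly constant per-event cost. The paper reports
    a constant 1 ms latency for arrays of up to 240 mechanoreceptors."""
    return 1000.0  # 1 ms, independent of array size

for n in (16, 240, 10_000):
    print(f"{n:>6} sensors: TDMA worst case "
          f"{tdma_worst_case_latency_us(n):>9.0f} us, "
          f"event-driven {event_driven_latency_us(n):>6.0f} us")
```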

Biological skin evolved to have a host of desirable features. It is soft, varies its sensor density as needed, and yet is robust – it still operates with minor damage and is self-repairing. Artificial skin would be optimal if it shared all these features. This new e-skin gets us closer to that ideal.

There are some obvious applications for artificial skins, mainly with robots and prosthetics. Sensation is critical for normal functioning. Nervous systems are wired with complete circuits so that motor output is matched to sensory input. This feedback is critical for control. Sensory feedback can be visual or tactile. Tactile sensation, in turn, comes in a number of forms – soft touch, damage sensation (pain), temperature, proprioception, vibration, and pressure. Pressure, for example, is critical for knowing how tightly you are gripping something. Proprioception allows you to feel where your limbs are in three-dimensional space. It’s how you can touch your nose with your eyes closed.

These various sensations combine to create the sense that we own the various parts of our body, and also that we control them. Sensory information is also used as critical feedback for motor control and balance. There is also direct feedback from muscles, so our brains can sense how much they are stretching or contracting.

You can’t have dynamic motor control without some sensory feedback, and the more feedback you have, the more nuanced, complex, and precise motor control can be. If we want robots that can function in human spaces, then human-like sensation will be critical. This is different from operating in a controlled factory, where the same exact movement can be executed without the need to adapt.
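To illustrate why feedback enables control, here is a minimal Python sketch of a closed-loop grip controller. Everything here – the function names, the gain, the pretend sensor – is invented for illustration; it is the textbook proportional-feedback pattern, not any real prosthetic’s controller.

```python
# Minimal closed-loop control sketch: adjust grip force toward a target
# pressure using tactile feedback. All names and numbers are illustrative.

def grip_control_step(target_pressure: float,
                      sensed_pressure: float,
                      current_force: float,
                      gain: float = 0.5) -> float:
    """One iteration of a feedback loop: compare the sensed pressure to
    the target and nudge the motor command in proportion to the error.
    Without the sensed_pressure input there is no error signal, and the
    grip can only be driven open-loop (too loose, or crushingly tight)."""
    error = target_pressure - sensed_pressure
    return current_force + gain * error

# Toy simulation in which the sensed pressure simply tracks the applied
# force one step behind, standing in for an e-skin pressure reading.
force, sensed = 0.0, 0.0
for step in range(8):
    force = grip_control_step(target_pressure=5.0,
                              sensed_pressure=sensed,
                              current_force=force)
    sensed = force
    print(f"step {step}: force={force:.2f}, sensed={sensed:.2f}")
```

Run it and the applied force converges on the target pressure; delete the sensed input and there is nothing to converge on, which is the whole point.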

Prosthetic limbs also require sensation to be fully functional. Otherwise, the person with a prosthetic will feel as if it is merely attached to their body, not part of their body. Also, control of motorized prosthetic limbs is vastly improved with any sensory feedback. The good news is, while we still need to make significant technological improvements, every component of a full brain-machine interface has had a proof of concept. The brain, as I frequently point out, is plastic, which means it can adapt to how it is used. So far it seems that the brain has no trouble mapping to a prosthetic limb, learning how to use it, and even incorporating sensory feedback from it. The loop is closed.

Now we just need to improve the technology so that the electrodes communicating with the brain are more sustainable. Basically we need squishy electrodes that will move with the brain, won’t irritate the brain and cause scar tissue, will not be rejected (won’t provoke an inflammatory response), and won’t generate too much heat. We also need these electrodes to communicate with computer components, which means we need wires exiting the skull, or the electrodes need to have wireless communication. Theoretically a piece of the skull itself could be replaced with an interface. Through these connections the brain could control a robotic limb and receive sensory information from it. The limb itself could contain the computer chips to do the heavy lifting on processing, and could also contain the power supply.

Powering such systems is also a major challenge, but there is plenty of research looking for solutions. Of course, any research to improve battery technology or the ability to produce energy on a very small scale can potentially benefit such devices. The ideal system would be powered by biological activity itself, harvesting energy from movement or physiological processes. But having to plug in your bionic arm while you sleep would not be a deal-breaker.

There is also one potential application of e-skin not mentioned in the published article or the press release – virtual or augmented reality. You won’t necessarily need to be missing a limb in order to benefit from brain-machine interfaces or from these kinds of electronic skin. A VR system could use a version of this technology for gloves or even body suits that can both sense what you do and provide haptic feedback.

Haptic feedback simply refers to the use of touch or tactile information to communicate with a user. This could be as simple as a joystick that vibrates. In a VR world, however, it could allow a user to interact more realistically with their virtual surroundings. If you pick up a virtual object, you will feel the object in your gloved hand.
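As a rough illustration of that loop, here is a hypothetical Python sketch. The VirtualObject class and the spring-like contact model are invented for this example – real VR haptics SDKs differ – but the pattern is generic: track the hand, detect contact with a virtual object, and drive the glove’s actuators in response.

```python
# Hypothetical haptic feedback sketch (no real haptics SDK is used):
# map penetration into a virtual object to an actuator intensity.

from dataclasses import dataclass

@dataclass
class VirtualObject:
    position: float   # 1D toy world for simplicity
    stiffness: float  # how hard the object "pushes back"

def haptic_intensity(hand_position: float, obj: VirtualObject) -> float:
    """Crude spring model of contact: the deeper the hand presses into
    the virtual object, the stronger the vibration, capped at 1.0."""
    penetration = hand_position - obj.position
    if penetration <= 0:
        return 0.0  # no contact, no feedback
    return min(1.0, obj.stiffness * penetration)

ball = VirtualObject(position=1.0, stiffness=2.0)
for hand in (0.5, 1.0, 1.2, 1.6):
    print(f"hand at {hand:.1f}: vibrate at {haptic_intensity(hand, ball):.2f}")
```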

The ultimate expression of this is to bypass anything like haptic feedback and go straight to the brain. Now we are talking about Matrix-level stuff. But this does not have to be all or nothing. Once computers can communicate with the brain, then virtual experiences can be entirely digital, without the need for physical objects or suits to interface with the virtual world. Ideally you could sit in a chair, put on a helmet, and all your inputs and outputs would be diverted to the virtual simulation. This type of technology has been featured in several episodes of Black Mirror (an excellent series, if you have not seen it).

And of course future people may have permanent computer interfaces, either implanted in their brain or directly interfacing with it (such as the skull computer I discussed yesterday). Once you have a brain-computer interface, going virtual is simply a matter of plugging in (again, think Matrix).

It seems likely to me that something like this is in our future. Humanity is likely to become a race of cyborgs, of biological-computer hybrids, living a partly physical and partly virtual life. Some futurists fear that the virtual life will become so compelling it will make the increasingly onerous limitations of a physical life intolerable. Perhaps we will need to live for several decades in our physical bodies, so that we can attend to our civilization, until we get to retire to our digital life, which will be like paradise by comparison.

Alternatively, we may have robots to attend to the mundane physical necessities of existence, while humanity enjoys our unfettered digital bliss. We may abandon our physical bodies altogether, as unnecessary appendages, or wear physical bodies as suits (as in Altered Carbon).

I know it is a long way from electronic skin to a digital civilization, but there are no theoretical barriers in the way. It’s only a matter of incremental technological progress at this point.
