Feb 14 2025
AI Powered Bionic Arm
My younger self, seeing that title – AI Powered Bionic Arm – would definitely feel as if the future had arrived, and in many ways it has. This is not the bionic arm of the 1970s TV show, however. That level of tech is probably closer to the 2070s than the 1970s. But we are still making impressive advances in brain-machine interface technology and robotics, to the point that we can replace missing limbs with serviceable robotic replacements.
In this video Sarah De Lagarde discusses her experience as the first person with an AI powered bionic arm. This represents a nice advance in this technology, and we are just scratching the surface. Let’s review where we are with this technology and how artificial intelligence can play an important role.
There are different ways to control robotics – preprogrammed movements (with or without sensory feedback), AI controlling the movements in real time, a human operator working through some kind of interface (such as motion capture), or a brain-machine interface of some sort. For robotic prosthetic limbs the user obviously needs to be able to control them in real time, and we want that experience to feel as natural as possible.
The options for robotic prosthetics include direct connection to the brain, which can be achieved with a variety of electrodes: deep brain electrodes, brain surface electrodes, scalp surface electrodes, or even stents inside the veins of the brain (stentrodes). All have their advantages and disadvantages. Brain surface and deep brain electrodes have the best resolution, but they are the most invasive. Scalp surface electrodes are the least invasive, but have the lowest resolution. Stentrodes may, for now, be the best compromise, until we develop more biocompatible and durable brain electrodes.
You can also control a robotic prosthetic without a direct brain connection, using surviving muscles as the interface. That is the method used in De Lagarde’s prosthetic. The advantage here is that you don’t need wires in the brain. Electrodes from the robotic limb connect to existing muscles which the user can contract voluntarily. The muscles themselves are not moving anything, but they generate a sizable electrical impulse which can activate the robotic limb. The user then has to learn to control the robotic limb by activating different sequences of muscle contractions.
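The muscle-signal approach described above can be sketched in code. This is a minimal, hypothetical illustration only – the signal values, threshold, and command names are all invented for the example, not taken from any real device. The idea is simply to rectify and smooth the raw muscle signal into an amplitude envelope, then trigger a limb command when that envelope crosses a threshold:

```python
# Toy sketch of threshold-based myoelectric control.
# All numbers and names here are illustrative, not from any real prosthetic.

def smooth_rectified(samples, window=4):
    """Rectify raw muscle-signal samples and return a moving-average envelope."""
    rectified = [abs(s) for s in samples]
    envelope = []
    for i in range(len(rectified)):
        start = max(0, i - window + 1)
        envelope.append(sum(rectified[start:i + 1]) / (i + 1 - start))
    return envelope

def decode_command(envelope, threshold=0.5):
    """Map each envelope sample to a hypothetical limb command."""
    return ["close_hand" if level > threshold else "rest" for level in envelope]

# Simulated raw signal: quiet baseline, then a voluntary contraction burst.
raw = [0.02, -0.03, 0.01, -0.02, 0.9, -0.8, 0.85, -0.9, 0.03, -0.02]
commands = decode_command(smooth_rectified(raw))
```

Real systems use many channels and far more sophisticated decoding, but this captures the basic chain: voluntary contraction, electrical signal, command to the limb.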
At first this method of control requires a lot of concentration. I think a good analogy, one used by De Lagarde, is to think of controlling a virtual character in a video game. At first, you need to concentrate on the correct sequence of keys to hit to get the character to do what you want. But after a while you don’t have to think about the keystrokes. You just think about what you want the character to do and your fingers automatically (it seems) go to the correct keys or manipulate the mouse appropriately. The cognitive burden decreases and your control increases. This is the learning phase of controlling any robotic prosthetic.
As the technology developed, researchers learned that providing sensory feedback is a huge help to this process. When the user uses the limb it can provide haptic feedback, such as vibrations, that corresponds to the movement. Users report this is an extremely helpful feature. It allows for superior and more natural control, and allows them to control the limb without having to look directly at it. Sensory feedback closes the usual feedback loop of natural motor control.
And that is where the technology has gotten to, with continued incremental advances. But now we can add AI to the mix. What role does that potentially play? As the user learns to contract the correct muscles in order to get the robotic limb to do what they want, AI connected to the limb itself can learn to recognize the user's behavior and better predict what movements they want. The learning curve is now bidirectional.
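One hedged way to picture the machine side of this bidirectional learning: treat each attempted movement as a vector of muscle activation levels, and let the system learn a prototype pattern for each intended gesture from the user's own examples. The toy sketch below uses a nearest-centroid classifier with made-up two-channel data and invented gesture labels – a stand-in for whatever learning method a real device actually uses:

```python
# Toy sketch: learn per-gesture prototype patterns from a user's
# muscle-activation vectors, then predict the intended movement for
# a new reading. All data and labels are made up for illustration.

import math

def centroids(training):
    """Average the example vectors for each gesture label."""
    protos = {}
    for label, examples in training.items():
        dims = len(examples[0])
        protos[label] = [sum(e[d] for e in examples) / len(examples)
                         for d in range(dims)]
    return protos

def predict(protos, reading):
    """Return the gesture whose learned prototype is nearest to the reading."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(protos, key=lambda label: dist(protos[label], reading))

# Hypothetical two-channel activation vectors recorded per intended gesture.
training = {
    "close_fist": [[0.9, 0.1], [0.8, 0.2]],
    "open_hand":  [[0.1, 0.9], [0.2, 0.8]],
}
protos = centroids(training)
guess = predict(protos, [0.85, 0.15])  # resembles the close_fist examples
```

The more examples the system sees of how this particular user contracts their muscles, the better its prototypes match that user – which is one plausible mechanism for the shrinking lag De Lagarde describes below.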
De Lagarde reports that the primary benefit of the AI learning to interpret her movements better is a decrease in the lag time between her wanting to move and the robotic limb moving. At first the delay could be 10 seconds, which is forever if all you want to do is close your fist. But now the delay is imperceptible, with the limb moving essentially in real time. The limb does not feel like her natural limb. She still feels like it is a tool that she can use. But that tool is getting more and more useful and easy to use.
AI may be the perfect tool for brain-machine interface in general, and again in a bidirectional way. What AI is very good at is looking at tons of noisy data and finding patterns. This can help us interpret brain signals, even from low-res scalp electrodes, meaning that by training on the brain waves from one user an AI can learn to interpret what the brain waves mean in terms of brain activity and user intention. Further, AI can help interpret the user’s attempts at controlling a device or communicating with a BMI. This can dramatically reduce the extensive training period that BMIs often require, getting months of user training down to days. It can also improve the quality of the ultimate control achieved, and reduce the cognitive burden of the user.
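A simple, classical version of "finding patterns in tons of noisy data" is trial averaging, long used with scalp electrodes: a weak but repeatable brain response buried in noise emerges when many noisy trials are averaged, because the noise cancels while the signal adds. Here is a minimal sketch with purely synthetic data (real BMI decoders are far more sophisticated, but the principle is the same):

```python
# Sketch: recover a weak repeating signal template from many noisy trials
# by averaging. Synthetic data only; real brain-signal decoding is far
# more complex than this.

import random

random.seed(0)

template = [0.0, 0.2, 0.5, 1.0, 0.5, 0.2, 0.0]  # the hidden "brain response"

def noisy_trial():
    """One simulated recording: the template plus heavy random noise."""
    return [s + random.gauss(0, 1.0) for s in template]

trials = [noisy_trial() for _ in range(2000)]
average = [sum(t[i] for t in trials) / len(trials)
           for i in range(len(template))]

# Any single trial is dominated by noise, but the averaged trace
# sits close to the hidden template, peaking where it peaks.
```

Modern machine learning goes well beyond simple averaging – it can pick up patterns that vary from trial to trial – but this illustrates why pattern extraction from noisy, low-resolution recordings is tractable at all.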
We are already past the point of having usable robotic prosthetic limbs controlled by the user. The technology is also advancing nicely and quite rapidly, and AI is just providing another layer to the tech that fuels more incremental advances. It’s still hard to say how long it will take to get to the Bionic Man level of technology, but it’s easy to predict better and better artificial limbs.