Feb 16 2026
The Future of AI-Powered Prosthetics
It’s not easy being a futurist (which I guess I technically am, having written a book about the future of technology). It never was, judging by the predictions of past futurists, but it seems to be getting harder as the future arrives more and more quickly. Even if we don’t get to something like “The Singularity”, the pace of change in many areas of technology is speeding up. Paradoxically, this may actually be good for futurists – we get to see fairly quickly how wrong our predictions were, and so have a chance at making adjustments and learning from our mistakes.
We are now near the beginning of many transformative technologies – genetic engineering, artificial intelligence, nanotechnology, additive manufacturing, robotics, and brain-machine interfaces. Extrapolating these technologies into the future is challenging. How will they interact with each other? How will they be used and accepted? What limitations will we run into? And (the hardest question) what new technologies not on that list will disrupt the future of technology?
While we are dealing with these big questions, let’s focus on one specific technology – controllable robotic prosthetics. I have been writing about this for years, and it is an area that is advancing more quickly than I had anticipated. The reason, briefly, is AI. Recent advances in AI allow for far better brain-machine interface control than was previously achievable, because modern AI is very good at picking out patterns from tons of noisy data – including patterns in the EEG signals of a noisy human brain.
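To make that concrete, here is a minimal sketch of the kind of decoding pipeline involved – bandpass filtering, band-power features, and a linear classifier. The data is synthetic and the feature choices are my own illustration, not any particular lab’s method:

```python
# Toy sketch of EEG movement-intent decoding: bandpass filter, band-power
# features, and a linear classifier. The data is synthetic; the pipeline
# shape (filter -> features -> classify) is the point, not the specifics.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
fs = 250  # sampling rate in Hz

def band_power(epoch, low, high):
    """Mean power of one EEG epoch within a frequency band."""
    b, a = butter(4, [low, high], btype="bandpass", fs=fs)
    return np.mean(filtfilt(b, a, epoch) ** 2)

def make_epoch(move):
    """Synthetic 1-second epoch; 'move' trials carry extra 20 Hz signal."""
    t = np.arange(fs) / fs
    noise = rng.normal(0.0, 1.0, fs)
    return noise + (0.8 * np.sin(2 * np.pi * 20 * t) if move else 0.0)

X, y = [], []
for label in (0, 1):
    for _ in range(100):
        ep = make_epoch(bool(label))
        X.append([band_power(ep, 8, 12), band_power(ep, 13, 30)])
        y.append(label)

clf = LogisticRegression().fit(np.array(X), np.array(y))
print("training accuracy:", clf.score(np.array(X), np.array(y)))
```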
This matters when the goal is having a robotic prosthetic limb controlled by the user through some sort of BMI (from nerves, muscles, or directly from the brain). There are always two components to this control – the software driving the robotic limb has to learn what the user wants, and the user has to learn how to control the limb. Traditionally this takes weeks to months of training to achieve a moderate but usable degree of control. Adding AI to the computer-learning end of the equation reduces this training time to days, with far better results. This is what has pushed progress a couple of decades ahead of where I thought it would be.
But it turns out this AI-assisted control can be a double-edged sword. To understand why, we need to quickly review how the human brain adapts to artificial bodies or body parts. The short answer is – quite well. The reason is that our sense of ownership and control is a constructed illusion of the brain in the first place. Circuits in our brain create the subjective sensation that each part of our body is part of us – that we own that body part (the sense of ownership) and that we control that body part (the sense of agency). We know about this largely from studying patients with damage in one or more of these circuits, which causes them to feel like a body part is not theirs or that they don’t control it.
This means that this circuitry can be hacked to make the brain create the sensation that you own and control a robotic or virtual limb. Luckily, this hacking is actually pretty simple. The brain compares different sensory inputs to see if they match, and also compares motor intentions with motor outcomes. So – if you see and feel a limb being touched, your brain will interpret that limb as yours. It can be that simple. If you intend to make a movement, and you see and feel the limb make that movement, then you feel as if you control the limb. So a robotic limb with some sensation, some haptic feedback, and that does what we want it to do will feel as if it is naturally part of us. Research is now moving in this direction, closing these loops as much as possible.
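As a toy illustration of those two comparisons – the timing threshold here is invented for the example, and real neural tolerances are more complicated:

```python
# Toy model of the brain's congruence checks for ownership and agency.
# The 300 ms window is invented for illustration, not a measured value.
from dataclasses import dataclass

@dataclass
class LimbEvent:
    seen_touch_ms: float   # when the touch was seen
    felt_touch_ms: float   # when the touch was felt (haptic feedback)
    intended_move: str     # movement the user intended
    observed_move: str     # movement the limb actually made

def ownership(e: LimbEvent, window_ms: float = 300) -> bool:
    # Seen and felt touch must roughly coincide for the limb to feel "mine".
    return abs(e.seen_touch_ms - e.felt_touch_ms) <= window_ms

def agency(e: LimbEvent) -> bool:
    # Intention must match outcome for the movement to feel self-caused.
    return e.intended_move == e.observed_move

e = LimbEvent(seen_touch_ms=100, felt_touch_ms=180,
              intended_move="grasp", observed_move="grasp")
print(ownership(e), agency(e))  # True True -> the limb feels embodied
```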
This, however, is where we run into a snag with AI-controlled robotic limbs. Part of the advance is that AI can add fine motor control to an artificial hand, say. Briefly, robotic movement tends to fall into one of three categories: you can directly control the robot, the robot can carry out a pre-programmed sequence of movements, or the robot can determine its movements in real time based on sensory feedback. When seeing a robotic demonstration you should always ask – what type of control is being demonstrated?
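Here is that taxonomy as a quick code sketch – the names and structure are mine, just to make the three modes concrete:

```python
# The three control modes as a dispatch sketch. Real controllers blend
# these; this just makes the taxonomy explicit.
from enum import Enum, auto

class ControlMode(Enum):
    DIRECT = auto()         # user drives every movement via the BMI
    PREPROGRAMMED = auto()  # robot replays a stored movement sequence
    AUTONOMOUS = auto()     # robot plans in real time from sensor feedback

def plan_from_feedback(sensors):
    # Stand-in planner: just head toward whatever the sensors report.
    return {"move_toward": sensors["target"]}

def next_command(mode, user_signal=None, sequence=None, sensors=None):
    if mode is ControlMode.DIRECT:
        return user_signal           # decoded BMI output, passed through
    if mode is ControlMode.PREPROGRAMMED:
        return next(sequence)        # next step of a canned trajectory
    return plan_from_feedback(sensors)

print(next_command(ControlMode.DIRECT, user_signal={"open_hand": 0.7}))
print(next_command(ControlMode.PREPROGRAMMED, sequence=iter(["reach", "grasp"])))
print(next_command(ControlMode.AUTONOMOUS, sensors={"target": (0.2, 0.1, 0.4)}))
```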
For robotic limbs what we want is direct control of the robot. While this is advancing, it is still somewhat limited and clumsy. So we can refine the direct control by adding one or both of the other two types of control. This means that, to some extent, the robotic limb is carrying out the desired movements of the user with internal control. This can greatly increase the functionality of the robotic limb, but it comes at a cost to the user’s sense of embodiment and agency. Imagine if your hand executed movements all by itself. It would feel uncanny and unnerving.
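One simple way to picture this kind of shared control is a weighted blend of the user’s decoded command and the AI’s refined version of it. The blend weight below is a made-up tuning knob, not something from any specific system:

```python
# Sketch of shared control: blend the user's decoded command with an
# AI-refined command. alpha is an invented knob -- 1.0 is pure direct
# control, 0.0 is fully autonomous.
import numpy as np

def shared_command(user_cmd, ai_cmd, alpha=0.7):
    """Weighted blend of user intent and AI refinement (joint velocities)."""
    return alpha * np.asarray(user_cmd) + (1 - alpha) * np.asarray(ai_cmd)

user = [0.9, -0.2, 0.5]  # noisy decoded joint velocities from the BMI
ai = [0.7, 0.0, 0.4]     # AI's cleaned-up estimate of the intended motion
print(shared_command(user, ai))  # the limb executes the compromise
```

The higher the AI’s share, the smoother the movement – and the less the movement feels like yours.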
This is a long windup to a new study that tries to address this issue. The researchers looked at how the movement speed of an AI-controlled robotic limb affected the user’s sense of ownership and agency. What they found was not surprising, but it is good to know that this variable matters and needs to be taken into consideration. They varied the execution time of an AI-controlled movement from 125 ms to 4 seconds. A moderate speed, about 1 second, resulted in the best sense of ownership and agency (or we can say the least interference with these senses). The further you went toward either extreme, the more the user felt an uncanny sense of unease, as if they did not own or control the robotic limb. This is a Goldilocks effect – too fast or too slow is no bueno, but just right results in a good outcome.
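The tested range (125 ms to 4 seconds) and the roughly 1-second optimum come from the study as described; the curve below is just my toy model of an inverted-U effect, not the study’s actual data:

```python
# Toy inverted-U ("Goldilocks") model of embodiment vs. execution time.
# Only the 0.125-4 s range and the ~1 s peak reflect the study; the
# Gaussian-in-log-time shape and width are my illustration.
import math

def embodiment_score(exec_time_s, best_s=1.0, width=0.5):
    return math.exp(-((math.log(exec_time_s) - math.log(best_s)) ** 2)
                    / (2 * width ** 2))

for t in [0.125, 0.5, 1.0, 2.0, 4.0]:
    print(f"{t:>5.3f} s -> {embodiment_score(t):.2f}")
# Scores peak at 1 s and fall off symmetrically toward either extreme.
```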
This result also makes sense in light of prior neurological research showing that our brains evaluate the world partly by how it moves. We separate agents from non-agents by how they move (the latter move in an inertial frame while the former do not). Neurologists also know this because movement disorders can often be diagnosed (sometimes at a glance) by how the patient moves. Our brains are finely tuned to what constitutes normal human movement. Too fast or too slow, hypokinetic or hyperkinetic, and our brains immediately register that something is wrong.
So if we see our robotic limb moving at a normal human pace, doing what we want it to do (even though the fine movements are enhanced by AI), that can still be good enough for us to accept the limb as belonging to us and under our control. There is likely a Goldilocks zone here as well – too much AI control will break the illusion of control, while too little is of no use, but just right will be the best compromise between functionality and acceptance.
The nuances of controlling an AI-enhanced robotic limb through a brain-machine interface are exactly the kind of futurism problem that would have been difficult to anticipate.