Search Results for "brain machine interface"

Jul 14 2023

Magnetohydrodynamic Drive – Silent Water Propulsion

Published by under Technology

DARPA, the US Defense Advanced Research Projects Agency, is now working on developing a magnet-driven silent water propulsion system – the magnetohydrodynamic (MHD) drive. The primary reason is to develop silent military naval craft. Imagine a nuclear submarine with an MHD drive, without moving parts, that can slice through the water silently. No moving parts also means much less maintenance (a bonus I can attest to, owning a fully electric vehicle).

But don’t be distracted by the obvious military application – if DARPA research leads to a successful MHD drive there are implications beyond the military, and there are a lot of interesting elements to this story. Let’s start, however, with the technology itself. How does the MHD work?

The drive was first imagined in the 1960s. That’s generic technology lesson #1 – technology often has deeper roots than you imagine, because development often takes a lot longer than initial hype would suggest. In 1992 Japan built the Yamato-1, a prototype ship with a working MHD drive. It was an important proof of concept, but it was not practical. Even over 30 years later, we are not there yet. The drive works by placing powerful magnetic fields at right angles to an electrical current, producing a Lorentz force – a force exerted on a charged particle moving through electrical and magnetic fields, directed at right angles to both. Salt water contains charged particles that feel this Lorentz force. Therefore, if arranged properly, the magnetic and electrical fields can push water toward the back of the ship, providing propulsion.
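The right-angle geometry is easy to see with a quick vector sketch. This is purely illustrative – the numbers are arbitrary placeholders, not real drive parameters – but it shows how the force per unit volume on the conducting seawater, f = J × B, ends up perpendicular to both the current and the field:

```python
import numpy as np

# Toy illustration of the Lorentz force direction in an MHD channel.
# Values are arbitrary, not real drive parameters.
J = np.array([1.0, 0.0, 0.0])   # current density along x (A/m^2)
B = np.array([0.0, 1.0, 0.0])   # magnetic field along y (T)

# Force per unit volume on the conducting seawater: f = J x B
f = np.cross(J, B)
print(f)  # points along z, perpendicular to both J and B
```

Orient that z-axis toward the stern and the water is pushed backward, driving the ship forward with no moving parts.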

Sounds pretty straightforward, so what’s the holdup? Well, there are several. The most important contribution of the Yamato-1 is that it provided great insight into all the technical hurdles facing this technology. The first is that the MHD drive is horribly energy inefficient, which made it very expensive to operate. What is mainly needed to improve efficiency is more powerful and more efficient magnets. Here we get to generic technology lesson #2 – basic technology developed for one application may have other, or even greater, utility for other applications. In this case the MHD drive is partly benefiting from the fusion energy industry, which requires powerful, efficient magnets. We can take those same magnet innovations and apply them to MHD drives, making them energy and cost effective.

But there is still one major and one minor problem remaining. The major problem is the electrodes and electronics necessary to generate the electrical current. Electronics and salt water don’t mix – salt water is highly corrosive, more so when exposed to magnetic fields and electrical currents. We therefore need to develop highly corrosion-resistant electrodes. Fortunately, such development is already underway in the battery industry, which also needs robust electrodes. Apparently we are not there yet when it comes to MHD, and that will be a major focus of the DARPA research.

There is also the minor problem of the electrodes electrolyzing the salt water, creating bubbles of hydrogen and oxygen. This reduces the efficiency of the system – not a deal-killer, but it would be nice to reduce this effect. I immediately wondered if the created gases could be captured somehow, both solving the problem and turning the shipping industry into a source of green hydrogen. In any case, that’s problem #2 for DARPA to solve.
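As a rough feel for the electrolysis side effect, Faraday’s law of electrolysis relates electrode current to hydrogen produced. The current and duration below are assumed round numbers for illustration only, not actual drive figures:

```python
# Back-of-the-envelope estimate (assumed numbers, not DARPA figures):
# hydrogen produced by electrolysis at the electrodes, via Faraday's law.
FARADAY = 96485.0       # coulombs per mole of electrons
M_H2 = 2.016            # grams per mole of H2
current_A = 1000.0      # assumed electrode current
hours = 1.0             # assumed run time

charge = current_A * hours * 3600          # total charge in coulombs
mol_h2 = charge / (2 * FARADAY)            # 2 electrons per H2 molecule
grams_h2 = mol_h2 * M_H2
print(round(grams_h2, 1))  # -> 37.6 grams of H2 over this assumed hour
```

Not a huge amount per hour at this assumed current, but across a fleet running continuously it adds up – hence the thought about capture.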

If all goes well, we are probably 10-20 years (or more) still away from working MHD drives on ships. Probably the military applications will come first. I hope they don’t hog the technology, which they might in order to maintain their military technological dominance, because the civilian applications could be huge. The noise generated by shipping has massive negative consequences on marine life, especially whales and other cetaceans, which rely on long-distance sound to communicate, to navigate, and to migrate. Propellers churning up the water are also an ecological problem. If it ever becomes cost effective enough, a working MHD drive could revolutionize ocean travel and shipping. Electrifying ocean propulsion could also help reduce GHG emissions.

Plus, there might be other downstream benefits from the DARPA research. Those robust corrosion resistant electrodes will likely have many applications. It may feed back into battery technology. It may also lead to better electrodes for a brain-machine interface. This reminds me of the book and TV series Connections, by James Burke. This is a brilliant series I have not seen in a while and should probably watch again. It traces long chains of technological developments, from one application to the next, showing how extensively technologies cross-fertilize. A need in one area leads to an advance that makes a completely different application feasible – and so on and so on. I guess that’s generic technology lesson #3.

DARPA has a solid history of accelerating specific technologies in order to bring new industries to fruition more quickly. Hopefully they will be successful here as well. The downstream benefits of an MHD drive could be significant, with spin-off benefits to many industries.

No responses yet

May 02 2023

Reading The Mind with fMRI and AI

Published by under Neuroscience

This is pretty exciting neuroscience news – Semantic reconstruction of continuous language from non-invasive brain recordings. What this means is that researchers have been able to, sort of, decode the words that subjects were thinking of simply by reading their fMRI scan. They were able to accomplish this feat using a large language model AI, specifically GPT-1, an early version of the GPT models behind ChatGPT. It’s a great example of how these AI systems can be leveraged to aid research.

This is the latest advance in an overall research goal of figuring out how to read brain activity and translate that activity into actual thoughts. Researchers started by picking some low-hanging fruit – determining what image a person was looking at by reading the pattern of activity in their visual cortex. This is relatively easy because the visual cortex preserves the spatial layout of what we see, so if someone is looking at a giant letter E, that E-shaped pattern of activity will appear in the cortex as well.
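The basic idea of this kind of decoding can be sketched with a toy example – entirely synthetic data, with an arbitrary 50-"voxel" pattern per stimulus, just to show the template-matching concept:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each stimulus (e.g. a letter) evokes a template
# pattern of activity across 50 "voxels"; decoding picks the template
# most correlated with the observed pattern.
templates = {name: rng.normal(size=50) for name in "ABCDE"}

def decode(observed):
    # Choose the stimulus whose template best correlates with the scan.
    return max(templates, key=lambda k: np.corrcoef(observed, templates[k])[0, 1])

# Simulate a noisy scan while the subject views "C".
scan = templates["C"] + 0.3 * rng.normal(size=50)
print(decode(scan))  # "C" with high probability at this noise level
```

Real pipelines are vastly more sophisticated, but the core move – matching observed activity against learned patterns – is the same.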

Moving to language has been tricky, because there is no physical mapping going on, just conceptual mapping. Efforts so far have relied upon high-resolution EEG data from implanted electrodes. This research has also focused on single words or phrases, often trying to pick one from among several known targets. This latest research represents three significant advances. The first is using a non-invasive technique to get the data: fMRI scanning. The second is inferring full sentences and ideas, not just words. And the third is that the targets were open-ended, not picked from a limited set of choices. But let’s dig into some details, which are important.

Continue Reading »

No responses yet

Oct 06 2022

3D Printing Implantable Computer Chips

This is definitely a “you got chocolate in my peanut butter” type of advance, because it combines two emerging technologies to create a potentially significant advance. I have been writing about brain-machine interface (or brain-computer interface, BCI) for years. My take is that the important proofs of concept have already been established, and now all we need is steady incremental advances in the technology. Well – here is one of those advances.

Carnegie Mellon University researchers have developed a computer chip for BCI, called a microelectrode array (MEA), using advanced 3D printing technology. The MEA looks like a regular computer chip, except that it has thin pins that are electrodes which can read electrical signals from brain tissue. MEAs are inserted into the brain with the pins stuck into brain tissue. They are thin enough to cause minimal damage. The MEA can then read the brain activity where it is placed, either for diagnostic purposes or to allow for control of a computer that is connected to the chip (yes, you need wires coming out of the skull). You can also stimulate the brain through the electrodes. MEAs are mostly used for research in animals and humans. They can generally be left in the brain for about one year.

One MEA in common use is called the Utah array, because it was developed at the University of Utah, where it was patented in 1993. So these have been in use for decades. How much of an advance is the new MEA design? There are several advantages, which mostly stem from the fact that these MEAs can be printed using an advanced 3D printing technology called Aerosol Jet 3D Printing. This allows for printing at the nanoscale using a variety of materials, including those needed to make MEAs. Using this technology provides three advantages.

Continue Reading »

No responses yet

Feb 21 2022

Orphaned Technology and Implants

Published by under Technology

Rapidly advancing computer technology has greatly enhanced our lives and had ripple effects throughout many industries. I essentially lived through the computer and internet revolution, and in fact each stage of my life is marked by the state of computer technology at that time. You can also easily date movies in a contemporary setting by the computer and cell phone technology in use. But one downside to rapid advance is so-called orphaned technology. You may, for example, use a piece of software that you know really well and feel is the perfect compromise of usability and functionality. Upgrades may be too expensive for you, or simply not desired. But at some point the company stops supporting the software, because they have moved on to later versions and would rather just have their customers upgrade. Without upgrades the software slowly becomes unusable – vulnerable to hacks and not compatible with other software and hardware.

The problem is greater with hardware. Without driver updates, and in some cases the ability to have hardware serviced, at some point it will stop working. Sure, you can just replace those Jaz drives with CD burners, but what about your library of backups? These problems can at least be solved with money, which can be an obstacle for many people. But what if the hardware is implanted in you? If the technology gets orphaned, there may be no other options. This becomes a problem not solved with money or by biting the bullet and upgrading.

That is the issue now being faced by more than 350 people around the world who had received the Second Sight bionic eye implant. The company almost went bankrupt in 2019 but was saved by a public offering which raised $57.5 million. However, since then their stock prices have plummeted and now the company is merging with a biopharmaceutical company called Nano Precision Medical, which plans to close the Second Sight division. The technology is effectively orphaned. No more repairs or software updates.

Continue Reading »

No responses yet

Sep 02 2021

Bionic Arms

The term “bionics” was coined by Jack E. Steele in August 1958. It is a portmanteau of biological and electronic. Martin Caidin used the word in his 1972 novel, Cyborg (which is another portmanteau, of cybernetic organism). But the term really became popularized in the 1970s TV show, The Six Million Dollar Man. Of course, at the time bionic limbs seemed futuristic, perhaps something we would see in a few decades. Thirty years always feels like far enough in the future that any imagined technology should be ready by then. But here we are, almost 50 years later, and we are nowhere near the technology Steve Austin was sporting. Bionics, as depicted, was more like 100 or more years premature. This is tech more appropriate to Luke Skywalker’s hand in Star Wars than to some secret government project in the 1970s.

We are, however, making progress, which I have been writing about periodically here. Now a team at Cleveland Clinic has produced a robot arm tested in two subjects, and they are breaking out the term “bionic” to describe their technology. They achieve their level of functionality by combining three aspects of a brain-machine interface connecting to a robotic limb – intuitive motor control, touch sensation, and kinesthetic sensation (simulating proprioception with vibration). The kinesthetic sensation allows the user to feel the robotic limb’s movements. The authors write:

Here, we show that the neurorobotic fusion of touch, grip kinesthesia, and intuitive motor control promotes levels of behavioral performance that are stratified toward able-bodied function and away from standard-of-care prosthetic users.

Continue Reading »

No responses yet

May 21 2021

The Neuroscience of Robotic Augmentation

Imagine having an extra arm, or an extra thumb on one hand, or even a tail, and imagine that it felt like a natural part of your body and you could control it easily and dexterously. How plausible is this type of robotic augmentation? Could “Doc Ock” really exist?

I have been following the science of brain-machine interface (BMI) for years, and the research consistently shows that such augmentation is neurologically possible. There is still a lot of research to be done, and the ultimate limits of this technology will be discovered when real-world use becomes common. But the early signs are very good. Brain plasticity seems to be sufficient to allow for incorporation of robotic feedback and motor control into our existing neural networks. But there are still questions about how complete this incorporation can be, and what other neurological effects might result.

A new study further explores some of these questions. They studied 20 participants who were fitted with a “third thumb” opposite the natural thumb on one hand. Each thumb was customized and 3D printed, and could be controlled with pressure sensors under the wearer’s toes. The subjects quickly learned how to control the thumb, and could soon do complex tasks, even while distracted or blindfolded. They further reported that over time the thumb felt increasingly like part of their body. They used the thumb, even at home, for 2-6 hours each day over five days. (Ten control subjects wore an inactive thumb.)

They used fMRI to scan the subjects at the beginning and end of their training. What they found was that subjects changed the way they used the muscles of the hand with the third thumb, in order to accommodate the extra digit. There were also two effects on the motor cortex representation of that hand. At baseline, each finger, moved independently, produced a distinct pattern of motor cortex activation. After training, these patterns became less distinct. The researchers refer to this as a “mild collapse of the augmented hand’s motor representation.” Second, there was a decrease in what is called “kinematic synergy.”

Continue Reading »

No responses yet

May 13 2021

Communicating Through Handwriting with Thought

Published by under Neuroscience

We have another incremental advance with brain-machine interface technology, and one with practical applications. A recent study (by Krishna Shenoy, a Howard Hughes Medical Institute investigator at Stanford University and colleagues) demonstrated communication with thought alone at a rate of 15 words (90 characters) per minute, which is the fastest to date. This is also about as fast as the average person texts on their phone, and so is a reasonably practical speed for routine communication.

I have been following this technology here for years. The idea is to connect electrodes to the brain – either on the scalp, on the brain surface, inside blood vessels close to the brain, or even deep inside the brain – in order to read electrical signals generated by brain activity. Computer software then reads these signals and learns to interpret them. The subject also undergoes a training period where they learn to control their thoughts in such a way as to control something connected to their brain’s output. This could mean moving a cursor on a computer screen, or controlling a robotic arm.
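The calibration step described above can be sketched in miniature, under the simplifying assumption of a purely linear decoder on synthetic data (real systems use far more sophisticated models, but the idea of fitting a map from neural features to intended output is the same):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical sketch: learn a linear map from neural features
# (e.g. firing rates on 64 channels) to intended 2D cursor velocity.
n_samples, n_channels = 500, 64
true_map = rng.normal(size=(n_channels, 2))

X = rng.normal(size=(n_samples, n_channels))               # neural features
y = X @ true_map + 0.1 * rng.normal(size=(n_samples, 2))   # intended velocity

# Fit decoder weights W by least squares so that X @ W ~ y.
W, *_ = np.linalg.lstsq(X, y, rcond=None)

# Decode a new sample of brain activity into a cursor velocity.
x_new = rng.normal(size=n_channels)
v = x_new @ W
print(v.shape)  # (2,) – an x/y velocity command for the cursor
```

In practice calibration and training are interactive: the decoder adapts to the user while the user simultaneously learns to produce more decodable activity.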

Researchers are still working out the basics of this technology, both hardware and software, but are making good steady progress. There doesn’t appear to be any inherent biological limitation here, only a technological limitation, so progress should be, and has been, steady.

The researchers did something very clever. The goal is to facilitate communication with those who are paralyzed to the point that they cannot communicate physically. One method is to control a cursor and move it to letters on a screen in order to type out words. This works, but it is slow. Ideally, we would simply read words directly from the language cortex – the subject thinks words and they appear on a screen or are spoken by a synthesizer. This, however, is extremely difficult because the language cortex does not have any obvious physical organization that relates to words or their meaning. Further, this would take a high level of discrimination, meaning it would require lots of small electrodes in contact with the brain.

Continue Reading »

No responses yet

Apr 30 2021

Organic Electrochemical Synaptic Transistor

The title of this post should be provocative, if you think about it for a minute. For “organic” read flexible, soft, and biocompatible. An electrochemical synapse is essentially how mammalian brains work. So far we could be talking about a biological brain, but the last word, “transistor”, implies we are talking about a computer. This technology may represent the next step in artificial intelligence – developing a transistor that more closely resembles the functioning of the brain.

Let’s step back and talk about how brains and traditional computers work. A typical computer, such as the device you are likely using to read this post, has separate memory and logic. This means that there are components specifically for storing information, such as RAM (random-access memory), cache memory (fast memory that acts as a buffer between the processor and RAM), and, for long-term storage, hard drives and solid-state drives. There are also separate components that perform logic functions to process information, such as the CPU (central processing unit), graphics card, and other specialized processors.

The strength of computers is that they can perform some types of processing extremely fast, such as calculating with very large numbers. Memory is also very stable. You can store a billion word document and years later it will be unchanged. Try memorizing a billion words. The weakness of this architecture is that it is very energy intensive, largely because of the inefficiency of constantly having to transfer information from the memory components to the processing components. Processors are also very linear – they do one thing at a time. This is why more modern computers use multi-core processors, so they can have some limited multi-tasking.

Continue Reading »

No responses yet

Apr 12 2021

Progress on Bionic Eye

Some terms created for science fiction are eventually adopted when the technology they anticipate comes to pass. In this case, we can thank The Six Million Dollar Man for popularizing the term “bionic”, which was originally coined by Jack E. Steele in August 1958. The term is a portmanteau of biological and electronic, plus it just sounds cool and rolls off the tongue, so it’s a keeper. So while there are more technical terms for an artificial electronic eye, such as “biomimetic”, the press has almost entirely used the term “bionic”.

The current state of the art is nowhere near Geordi’s visor from Star Trek TNG. In terms of approved devices actually in use, we have the Argus II, a device that includes an external camera mounted on glasses and connected to a processor. These components send information to a retinal implant that connects to ganglion cells, which send the signals to the brain. In a healthy eye the light-sensing cells in the retina connect to the ganglion cells, but there are many conditions that prevent this and cause blindness. The photoreceptors may degenerate, for example, or corneal damage may prevent light from reaching the photoreceptors. As long as there are surviving ganglion cells this device can work.

Currently the Argus II contains 60 pixels (6 columns of 10) in black and white. This is incredibly low resolution, but it can be far better than nothing at all. For those with complete blindness, being able to sense light and shapes can greatly enhance the ability to interact with the environment. They would still need to use their normal assistive device while walking (cane, guide dog, or human), but the implant would help them identify items in their environment, such as a door. Now that this device is approved and it functions, incremental improvements should come steadily. One firmware update allows for the perception of color, which is not directly sensed but inferred from the pattern of signals.
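To get a feel for what 60 black-and-white pixels can convey, here is a toy sketch (the grid layout and thresholding scheme are assumptions for illustration, not the device’s actual processing) that reduces a camera frame to a 10×6 on/off grid:

```python
import numpy as np

# Toy illustration: reduce a grayscale camera frame to the kind of
# 60-"pixel" black-and-white percept a 6x10 electrode grid could convey.
def to_percept(frame, rows=10, cols=6):
    h, w = frame.shape
    # Average the frame over a rows x cols grid of blocks...
    blocks = frame[:h - h % rows, :w - w % cols]
    blocks = blocks.reshape(rows, h // rows, cols, w // cols).mean(axis=(1, 3))
    # ...then threshold each block to an on/off "phosphene".
    return (blocks > blocks.mean()).astype(int)

frame = np.zeros((100, 60))
frame[:, 30:] = 1.0           # a bright doorway filling the right half
percept = to_percept(frame)
print(percept.shape)          # (10, 6)
print(percept[:, 3:].sum())   # 30 – only the right half of the grid lights up
```

Even at this resolution, a bright doorway against a dark wall is clearly localizable – which is exactly the kind of practical benefit described above.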

Continue Reading »

No responses yet

Aug 31 2020

Elon Musk Unveils Neuralink Pig

Three days ago Elon Musk revealed an update to his Neuralink project – a pig named Gertrude that had the latest version of the Neuralink implanted. (I first wrote about the Neuralink here.) The demonstration does not seem to involve anything that itself is new with brain-machine interfaces, but it does represent Musk bringing the state of the art together into a device that is designed to be commercial, rather than just a laboratory proof-of-concept.

Unfortunately, I have had to cobble together information from multiple sources. There does not appear to be a scientific paper with all the technical details spelled out, and the mainstream reporting is often vague on those details. But I think I have a clear picture now. The device is coin-sized: 23 mm in diameter and 8 mm thick. It was implanted “in” the skull, and also described as being “flush” with the skull. From this I take it that the device is not on top of or inside the skull, but literally replacing a small piece of skull. It has 3,000 super-thin and flexible electrodes that connect to 1000 neurons. The device itself has 1024 channels (a channel reads the electrical difference between two electrodes).
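That definition of a channel is worth a quick sketch. Differential recording subtracts one electrode’s voltage from another’s, so interference common to both (the synthetic 60 Hz hum below is just an illustrative stand-in) cancels out, leaving the local neural signal:

```python
import numpy as np

# Minimal sketch of a differential channel: subtracting a reference
# electrode's voltage cancels noise common to both electrodes.
t = np.linspace(0, 1, 1000)
common_noise = 0.5 * np.sin(2 * np.pi * 60 * t)   # e.g. 60 Hz interference
spike = np.exp(-((t - 0.5) ** 2) / 1e-4)          # local neural signal

electrode_a = spike + common_noise                # active electrode
electrode_b = common_noise                        # reference electrode

channel = electrode_a - electrode_b               # the differential channel
print(np.allclose(channel, spike))                # True: common noise cancels
```

This is one reason the electrode count (3,000) and the channel count (1024) differ: channels are defined between pairs of electrodes, not per electrode.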

The company also reports that it has an internal battery that can last “all day” and then recharge overnight. It also communicates with an external device (such as an app on your smartphone) via Bluetooth with a range of 5-10 meters. As an electronic device, this is pretty standard, but it is good to have these features in a small implantable device.

The big question is – what can the Neuralink actually do? The demonstration, in this regard, was not that impressive (compared to the hype for Neuralink) – just the absolute bare minimum for such a device. It was implanted in a pig and was interfaced with neurons that connect to the snout. This demo device was read only; it could not send signals to the pig’s brain, only read from the brain. The demonstration consisted of Gertrude sniffing around her cage, and when she did so we could see signals from the neurons in her brain that were interfacing with the Neuralink.

Continue Reading »

No responses yet
