Our “approach seems unlikely to scale to the broad range of limb movements”

I was looking at the grants currently funded by NIH, and found one related to our previous work:

BIOMIMETIC SOMATOSENSORY FEEDBACK THROUGH INTRACORTICAL MICROSTIMULATION

My understanding is that the full text of NIH grants is not available to the public; only a short description is posted. In this description, I found a reference to our previous work:

“Early attempts at restoring somatosensation used intracortical microstimulation (ICMS) to activate somatosensory cortex (s1), requiring animals to learn largely arbitrary patterns of stimulation to represent two or three virtual objects or to navigate in two-dimensional space. While an important beginning, this approach seems unlikely to scale to the broad range of limb movements and interactions with objects that we experience in daily life.”

This apparently refers to our paper:

O’Doherty, J. E., Lebedev, M. A., Ifft, P. J., Zhuang, K. Z., Shokur, S., Bleuler, H., & Nicolelis, M. A. (2011). Active tactile exploration using a brain-machine-brain interface. Nature, 479(7372), 228-231.

in which a brain-machine-brain interface was demonstrated that simultaneously controlled an avatar arm and delivered artificial tactile feedback using intracortical microstimulation of the primary somatosensory cortex (S1). The monkeys performed an active exploration task, scanning several virtual objects with an avatar hand to find the one with a particular artificial texture mimicked by a temporal pattern of microstimulation.
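For readers less familiar with this kind of setup, the logic of a brain-machine-brain loop can be sketched in a few lines of code. The sketch below is purely illustrative: the linear decoder, the object representation and the pulse-train parameters are my own placeholder assumptions, not the actual implementation used in the study.

```python
import numpy as np

# Hypothetical sketch of a brain-machine-brain loop: motor cortex activity is
# decoded into avatar hand movement, and contact with a virtual object triggers
# a temporal ICMS pattern representing that object's "texture".
# All names, shapes and parameters below are illustrative, not the study's.

N_NEURONS = 100                               # recorded motor cortex units
W = np.random.randn(2, N_NEURONS) * 0.01      # stand-in linear decoder weights


def decode_hand_velocity(spike_counts):
    """Map a vector of binned spike counts to a 2D avatar hand velocity."""
    return W @ spike_counts


def icms_pattern_for(texture):
    """Return pulse times (ms) of the temporal ICMS pattern for a virtual texture."""
    period_ms = {"rewarded": 10, "distractor": 30}[texture]   # hypothetical pulse periods
    return np.arange(0, 100, period_ms)


def deliver_icms(pulse_times_ms):
    """Placeholder for the stimulator interface."""
    print(f"ICMS pulses at {pulse_times_ms.tolist()} ms")


def bmbi_step(spike_counts, hand_pos, objects, dt=0.05):
    """One loop cycle: decode, move the avatar, deliver feedback on contact."""
    hand_pos = hand_pos + dt * decode_hand_velocity(spike_counts)
    for obj in objects:
        if np.linalg.norm(hand_pos - obj["position"]) < obj["radius"]:
            deliver_icms(icms_pattern_for(obj["texture"]))
    return hand_pos


# Example: one cycle with random spike counts and a single rewarded object
objects = [{"position": np.array([0.2, 0.1]), "radius": 0.3, "texture": "rewarded"}]
new_pos = bmbi_step(np.random.poisson(3.0, N_NEURONS), np.zeros(2), objects)
```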

I am wondering why this approach “seems unlikely to scale to the broad range of limb movements and interactions with objects that we experience in daily life”.

The grantees propose the following solution:

“To move the field past this hurdle, we propose to replace both touch and proprioception by using multi-electrode ICMS to produce naturalistic patterns of neuronal activity in S1 of monkeys.”

So, they have the following ideas:

  1. Add artificial proprioception.
  2. Add more stimulating electrodes.
  3. Produce naturalistic patterns of neuronal activity.

The first idea is relatively novel, although we submitted a grant on this topic to NIH several years ago, and it was rejected. Apparently, NIH has now changed its mind.

The second idea has actually been implemented already, so it is not particularly new:

Fitzsimmons, N. A., Drake, W., Hanson, T. L., Lebedev, M. A., & Nicolelis, M. A. L. (2007). Primate reaching cued by multichannel spatiotemporal cortical microstimulation. Journal of Neuroscience, 27(21), 5593-5602.

As for the third idea, I do not think it is really possible: intracortical microstimulation is a highly artificial way to activate S1 neurons (actually, mostly fibers), so it is unrealistic to expect it to produce truly “naturalistic patterns of neuronal activity”.

The grant includes four specific aims. The first one:

“In Aim 1, we will develop model-optimized mappings between limb state (pressure on the fingertip, or motion of the limb) and the patterns of ICMS required to evoke S1 activation that mimics that of natural inputs. These maps will account for both the dynamics of neural responses and the biophysics of ICMS. We anticipate that this biomimetic approach will evoke intuitive sensations that require little or no training to interpret. We will validate the maps by comparing natural and ICMS-evoked S1 activity using novel hardware that allows for concurrent ICMS and neural recording.”
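To make the quoted plan concrete, here is roughly what a “model-optimized” mapping from limb state to ICMS parameters could look like in code. This is only a sketch under simple assumptions of my own (a toy forward model of ICMS-evoked S1 rates and a least-squares criterion); the grantees’ actual models will certainly differ.

```python
import numpy as np

# Toy version of a "model-optimized" mapping: given the natural S1 firing rates
# evoked by a particular limb state, search for per-electrode ICMS amplitudes
# whose model-predicted evoked rates match those natural rates.
# The forward model, units and constants are illustrative assumptions.


def predicted_s1_rates(amplitudes, gain_matrix):
    """Toy forward model: saturating S1 response to the summed electrode drive."""
    return 50.0 * np.tanh(gain_matrix @ amplitudes)      # firing rates in Hz


def fit_icms_for_state(natural_rates, gain_matrix, n_iters=200, lr=0.01, eps=1e-3):
    """Finite-difference descent on the squared mismatch between evoked and natural rates."""
    n_electrodes = gain_matrix.shape[1]
    amps = np.zeros(n_electrodes)
    for _ in range(n_iters):
        base_err = np.sum((predicted_s1_rates(amps, gain_matrix) - natural_rates) ** 2)
        grad = np.zeros(n_electrodes)
        for i in range(n_electrodes):
            trial = amps.copy()
            trial[i] += eps
            err = np.sum((predicted_s1_rates(trial, gain_matrix) - natural_rates) ** 2)
            grad[i] = (err - base_err) / eps
        amps = np.clip(amps - lr * grad, 0.0, 100.0)      # keep amplitudes in a safe range (uA)
    return amps


# Example: 20 modeled S1 neurons, 8 stimulating electrodes
rng = np.random.default_rng(0)
G = rng.uniform(0.0, 0.05, size=(20, 8))
target = 50.0 * np.tanh(G @ rng.uniform(0.0, 40.0, size=8))   # "natural" rates for some limb state
print(fit_icms_for_state(target, G))
```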

I am not particularly impressed by this plan. Even if they manage to induce, in a group of S1 neurons, activity patterns that resemble the natural ones to a certain extent, this achievement will be meaningless, because they will not generate “natural” patterns in the millions of neurons with which these particular ones are reciprocally interconnected. So the overall pattern will never be natural, and it is quite naive to expect that “this biomimetic approach will evoke intuitive sensations that require little or no training to interpret”. If the goal is to achieve “intuitive sensations”, then the sensations themselves should be the parameter being optimized, not the firing patterns of a few cortical neurons whose relationship to sensations is very unclear. (For example, S1 neurons are quite active even in the absence of somatosensory stimulation.)

The second aim:

“In Aim 2, we will test the ability of monkeys to recognize objects using artificial touch. Having learned to identify real objects by touch, animals will explore virtual objects with an avatar that shadows their own hand movements, receiving artificial touch sensations when the avatar contacts objects. We will test their initial performance on the virtual stereognosis task without learning, as well as their improvements in performance over time.”

This sounds like an interesting monkey-training venture, but how is it different in principle from the design of O’Doherty et al.? It looks like a proposal to replace O’Doherty’s virtual textures with invisible shapes that monkeys would actively explore with an avatar hand, most likely in a 3D virtual environment. Given the results of O’Doherty et al., it would not be a big surprise that monkeys can do this. In fact, O’Doherty has already shown in another experiment that monkeys can actively explore invisible shapes (gratings). That work was presented at SfN but has not been published yet.

Regarding the proposal to have monkeys compare real objects with ones mimicked by microstimulation, Romo and his colleagues reported more than a decade ago that monkeys can match microstimulation patterns to vibrotactile patterns applied to the hand. Experiments of this kind can be interpreted in two ways: (1) monkeys feel microstimulation the same way they feel real stimuli, or (2) they are simply operantly conditioned to match the two kinds of stimuli. The same problem remains for the “biomimetic” multichannel stimulation proposed in the grant.

Aim 3:

“Aim 3 will be similar, but will focus on proprioception. We will train monkeys to report the direction of brief force bumps applied to their hand. After training, we will replace the actual bumps with virtual bumps created by patterned ICMS, again asking the monkeys to report their perceived sense of the direction and magnitude of the perturbation.”

This looks flawed to me. First, it is too simplistic compared with the original ambitious intent to generate artificial proprioception. Second, it is a match-to-sample task that can be operantly conditioned even if the sensation evoked by microstimulation is not of a proprioceptive kind (see above). Third, when a brief force is applied, the tactile receptors of the hand are strongly stimulated, so the experiment does not isolate proprioception per se.

Finally, Aim 4:

“In Aim 4, we will temporarily paralyze the monkey’s arm, thereby removing both touch and proprioception, mimicking the essential characteristics of a paralyzed patient. The avatar will be controlled based on recordings from motor cortex and guided by artificial somatosensation. The monkey will reach to a set of virtual objects, find one with a particular shape, grasp it, and move it to a new location.”

This looks like an interesting demonstration, but the scientific question is not very clear. If in Aims 1-3 monkeys respond to microstimulation and peripheral sensation is nonessential, what is the big deal about removing it with an anesthetic block? Monkeys can probably be trained to perform this brain-machine-brain interface task, but what would be the scientific advance from such a demonstration? The design itself very much resembles the O’Doherty et al. study (with some additions, such as the virtual object becoming attached to the virtual hand). Surprisingly, the design does not seem to incorporate artificial proprioception, presumably the key innovation of this grant.

The grantees conclude:

“If we can demonstrate that this model-optimized, biomimetic feedback is informative and easy to learn, it should form the basis for robust, scalable, somatosensory feedback for BMIs.”

Well, I am not really convinced. Why would the already existing findings of O’Doherty et al. not “form the basis for robust, scalable, somatosensory feedback for BMIs”?

In conclusion, in my opinion, this grant represents only an incremental development over the published work of O’Doherty et al. The experimental plan utilizes the key elements of the O’Doherty et al. study (avatar arm, active search task, brain-machine-brain interface) and of the Fitzsimmons et al. study (multichannel stimulation). Several novel features are added to this framework, such as a 3D virtual environment and sensations from more than one virtual finger, but there is nothing revolutionary about them.

I have to disagree with:

“While an important beginning, this approach seems unlikely to scale to the broad range of limb movements.”

The good old O’Doherty et al. approach still looks like the gold standard for investigations of this kind.


Monkeys Drive Wheelchair with Their Cortical Activity

Our new study “Direct Cortical Control of Primate Whole-Body Navigation in a Mobile Robotic Wheelchair” presents the first demonstration of wheelchair navigation enabled through a cortical brain-machine interface (BMI).

Previous neurophysiological and BMI research in primates mostly focused on eye and arm movements, whereas brain mechanisms of whole-body movements (for example, jumping from one tree to another) were virtually neglected. This is a serious impediment to the development of invasive BMIs for wheelchair control. Such devices are needed for patients suffering from severe body paralysis.

Rajangam et al. showed for the first time that rhesus monkeys can navigate while seated in a wheelchair, using their cortical activity as the control signal. The monkeys were chronically implanted with multichannel electrode arrays, which recorded from several hundred neurons in the sensorimotor cortex. The BMI transformed this neuronal ensemble activity into the wheelchair’s linear (backward and forward) and rotational (leftward and rightward) velocity.
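Leaving aside the exact decoder used in the study, the basic transformation can be sketched as a simple linear readout of recent spike-count history. The dimensions, bin sizes and weights below are hypothetical placeholders, not the published model.

```python
import numpy as np

# Minimal sketch of mapping binned spike counts of a cortical ensemble to
# wheelchair commands (linear and rotational velocity) using a linear readout
# over a short spike-count history. Sizes and weights are placeholders;
# in practice the weights would be fit to training data.

N_NEURONS = 300     # recorded units (illustrative)
N_TAPS = 10         # number of 100-ms history bins (illustrative)

W = np.random.randn(2, N_NEURONS * N_TAPS + 1) * 0.001   # stand-in decoder weights (+ bias)


def decode_wheelchair_velocity(spike_history):
    """spike_history: (N_TAPS, N_NEURONS) recent binned spike counts.
    Returns [linear_velocity, rotational_velocity]."""
    x = np.concatenate([spike_history.ravel(), [1.0]])    # flatten history, append bias
    return W @ x


# Example call with random "spike counts"
print(decode_wheelchair_velocity(np.random.poisson(2.0, size=(N_TAPS, N_NEURONS))))
```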

The monkeys successfully learned to navigate the wheelchair from one corner of the room to the other, where a food reward was placed in a feeder. Their ability to drive the wheelchair improved over several weeks of training. The navigation did not require any steering device (for example, a joystick); the monkeys produced the wheelchair movements just by imagining themselves moving.

The demonstration was made possible by Tim Hanson’s multichannel wireless recording system and the brilliant engineering of Gary Lehew, Po-He Tseng and Allen Yin.

There is still a long way to go before invasive BMIs of this type can be implemented in human patients. But a proof-of-concept demonstration is there!

Walk Again: History of the Project

[Figure: Discharges of cortical neurons during bipedal locomotion in a monkey]

On June 12, an EEG-controlled “Walk Again” exoskeleton was demonstrated at the World Cup opening.

The Walk Again project was first announced in the 2009 review article by Nicolelis and Lebedev “Principles of neural ensemble physiology underlying the operation of brain-machine interfaces” published in Nature Rev Neurosci:

Ultimately, we expect that the identification of principles of neural ensemble physiology will guide the development of a generation of cortical neuroprosthetic devices that can restore full-body mobility in patients suffering from devastating levels of paralysis, due either to traumatic or degenerative lesions of the nervous system. We believe that such devices should incorporate several key design features. First, brain-derived signals should be obtained from multi-electrode arrays implanted in the upper- and lower-limb representations of the cortex, preferably in multiple cortical areas. Custom-designed microchips (also known as neurochips), chronically implanted in the skull, would be used for neural signal-processing tasks. To significantly reduce the risk of infection and damage to the cortex, multi-channel wireless technology would transmit neural signals to a small, wearable processing unit. Such a unit would run multiple real-time computational models designed to optimize the real-time prediction of motor parameters. Time-varying, kinematic and dynamic digital motor signals would be used to continuously control actuators distributed across the joints of a wearable, whole-body, robotic exoskeleton. High-order brain-derived motor commands would then interact with the controllers of local actuators and sensors distributed across the exoskeleton. Such interplay between brain-derived and robotic control signals, known as shared brain–machine control, would assure both voluntary control and stability of bipedal walking of a patient supported by the exoskeleton.

Touch, position, stretch and force sensors, distributed throughout the exoskeleton, would generate a continuous stream of artificial touch and proprioceptive feedback signals to inform the patient’s brain of the neuroprosthetic performance. Such signals would be delivered by multichannel cortical microstimulation directly into the patient’s somatosensory areas. Our prediction is that, after a few weeks, such a continuous stream of somatosensory feedback signals, combined with vision, would allow patients to incorporate, through a process of experience-dependent cortical plasticity, the whole exoskeleton as an extension of their body.

These developments are likely to converge into the first reliable, safe and clinically useful cortical neuroprosthetic. To accelerate this process and make this milestone a clinical reality, a worldwide team of neurophysiologists, computer scientists, engineers, roboticists, neurologists and neurosurgeons has been assembled to launch the Walk Again Project, a non-profit, global initiative aimed at building the first cortical neuroprosthetic capable of restoring full-body mobility in severely paralysed patients.

[Figure: Screenshots of neuronal ensemble activity taken many months after the implantation surgery]

Similar ideas were expressed in the 2009 original-research article by Fitzsimmons, Lebedev et al. “Extracting kinematic parameters for monkey bipedal walking from cortical neuronal ensemble activity”:

Based on these results, we propose an approach to restore locomotion in patients with lower limb paralysis that relies on using cortical activity to generate locomotor patterns in an artificial actuator, such as a wearable exoskeleton (Figure 8G; Fleischer et al., 2006; Hesse et al., 2003; Veneman et al., 2007). This approach may be applicable to clinical cases in which the locomotion centers of the brain are intact, but cannot communicate with the spinal cord circuitry due to spinal cord injury. The feasibility of employing a cortically driven BMI for the restoration of gait is supported by fMRI studies in which cortical activation was detected when subjects imagined themselves walking (Bakker et al., 2007, 2008; Iseki et al., 2008; Jahn et al., 2004) and when paraplegic patients imagined foot and leg movements (Alkadhi et al., 2005; Cramer et al., 2005; Hotz-Boendermaker et al., 2008). Event-related potentials also demonstrated cortical activations in similar circumstances (Halder et al., 2006; Lacourse et al., 1999; Muller-Putz et al., 2007). Further support for this idea comes from recent studies of EEG-based brain-computer interfaces for navigation in a virtual environment in healthy subjects (Pfurtscheller et al., 2006) and paraplegics (Enzinger et al., 2008).

While a cortical BMI based neuroprosthesis that derived all its control signals from the user would have to cope with the lack of signals normally derived from subcortical centers, such as the cerebellum, basal ganglia and brainstem (Grillner, 2006; Grillner et al., 2008; Hultborn and Nielsen, 2007; Kagan and Shik, 2004; Matsuyama et al., 2004; Mori et al., 2000; Takakusaki, 2008), these problems may be avoided by an approach which only derives higher level leg movement signals from brain activity, while allowing robotic systems to produce a safer, optimum output. The challenge of efficient low-level control could be overcome by implementing “shared brain–machine” control (Kim et al., 2006), i.e. a control strategy that allows robotic controllers to efficiently supervise low-level details of motor execution, while brain derived signals are utilized to derive higher-order voluntary motor commands (step initiation, step length, leg orientation).

A cortically driven BMI for the restoration of walking may become an integral part of other rehabilitation strategies employed to improve the quality of life of patients. In particular, it may supplement the strategy based on harnessing the remaining functionality and plasticity of spinal cord circuits isolated from the brain (Behrman et al., 2006; Dobkin et al., 1995; Grasso et al., 2004; Harkema, 2001; Lunenburger et al., 2006). Indeed, cortically driven exoskeletons may facilitate spinal cord plasticity, helping to recover locomotion automatisms. Additionally, cortically driven neuroprostheses may work in cohort with rehabilitation methods based on functional electrical stimulation (FES; Hamid and Hayek, 2008; Nightingale et al., 2007; Wieler et al., 1999; Zhang and Zhu, 2007). In such an implementation, the BMI output could be connected to a FES system that stimulates the subject’s leg muscles. Finally, there is the intriguing possibility of connecting the BMI to an electrical stimulator implanted in the spinal cord, a strategy that may help induce plastic reorganization within these circuits.

Altogether, our results indicate that direct linkages between the human brain and artificial devices may be utilized to define a series of neuroprosthetic devices for restoring the ability to walk in people suffering from paralysis.
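The “shared brain-machine control” idea quoted above, in which the brain supplies high-level commands while robotic controllers handle low-level execution and safety, can be sketched schematically as follows. The classes, ranges and numbers are purely hypothetical illustrations, not an actual exoskeleton controller.

```python
from dataclasses import dataclass

# Schematic sketch of shared brain-machine control for locomotion: the BMI
# decodes only high-level step commands, while a local controller fills in the
# low-level joint trajectory and enforces safety limits. Everything here is a
# hypothetical illustration.


@dataclass
class StepCommand:
    initiate: bool        # decoded intention to take a step
    length_m: float       # desired step length (m)
    heading_deg: float    # desired step orientation (degrees)


class ExoskeletonController:
    """Low-level controller: turns a high-level step command into joint targets."""

    def execute(self, cmd: StepCommand):
        if not cmd.initiate:
            return []                                    # hold posture, no new targets
        length = min(max(cmd.length_m, 0.1), 0.6)        # clamp to a safe range
        # placeholder for inverse kinematics / trajectory planning
        return [("hip_deg", cmd.heading_deg), ("knee_deg", length * 60.0)]


def shared_control_step(decoded: StepCommand, controller: ExoskeletonController):
    """The brain supplies the 'what'; the robot supplies the 'how'."""
    return controller.execute(decoded)


# Example: a decoded intention to step 0.4 m, slightly to the left
print(shared_control_step(StepCommand(True, 0.4, -10.0), ExoskeletonController()))
```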

An update was given in 2011 by Lebedev, Tate, et al. in “Future developments in brain-machine interface research”:

As follows from our results on BMIs that enact leg movements, BMIs for the whole body are likely to become a real possibility in the near future. We propose the development of a whole-body BMI in which neuronal ensemble activity recorded from multiple cortical areas in rhesus monkeys controls the actuators that enact movements of both upper and lower extremities. This BMI will be first implemented in a virtual environment (monkey avatar) and then using a whole-body exoskeleton. In these experiments we will also examine the plasticity of neuronal ensemble properties caused by their involvement in the whole-body BMI control and the ability of cortical ensembles to adapt to represent novel external actuators.

Furthermore, we will also explore the ability of an animal to navigate a virtual environment using both physical and neural control that engages both the upper and lower limbs. The first phase of these experiments will be to train the animals to walk in a bipedal manner on a treadmill while assisting the navigation with a hand steering control. We have already built a virtual environment needed for the monkey to navigate using 3D visualization software. Within this environment, the monkey’s body is represented by a life-like avatar. This representation is viewed in the third person by the monkey and employs real-world inverse kinematics to move, allowing the avatar’s limbs to move in close relation to the experimental animal.

Initially, the direction that the avatar is facing will be dictated by the monkey moving a handlebar with its hands. As the animal moves the handlebar left or right, the avatar will rotate in the corresponding direction. The avatar’s legs will mimic the exact motion of the monkey’s legs on the treadmill. The simplest task will be for the animal to simply move the avatar forward to an object that represents a reward, a virtual fruit. Virtual fruits will appear at different angular positions relative to the monkey, which will let us measure the neuronal representation of navigation direction and modulations in cortical arm representation related to the steering. The monkey will have to make several steps while steering in the required direction to approach a virtual reward and to obtain an actual reward. The next set of experiments will allow the animal to control the virtual BMI in a manner similar to how we anticipate that the eventual application will be used: with no active movement of the subject’s body parts. The animals will use the neural control of the environment to obtain rewards when they are seated in a monkey chair. We expect that the monkey will be able to generate periodic neural modulations associated with individual steps of the avatar even though it does not perform any actual steps with its own legs.

Finally, we will use the algorithms developed in these experiments to control a full-body monkey exoskeleton in a non-human primate which has been subjected to a spinal cord anesthetic block to produce a temporary and reversible state of quadriplegia. This exoskeleton will encase the monkey’s arms and legs. It will be attached to the monkey using bracelets molded in the shape of the monkey’s limbs. A full body exoskeleton prototype will be utilized. The basic design and controller will be based on the humanoid robot, Computational Brain (CB). The exoskeleton will provide full sensory feedback to the BMI setup: joint position/velocity/torque, ground contacts and orientations. In BMI mode, the exoskeleton will guide the monkey’s limbs with smooth motions while at the same time monitoring its range of motions to ensure it is within the safety limits. This demonstration will provide the first prototype of a neural prosthetic device that would allow paralyzed people to walk again.

At the time these papers were published, we were the only group developing brain-machine interfaces for primate locomotion. (Jealous competitors nevertheless succeeded in blocking Fitzsimmons et al. from publication in high-impact journals.) Now rivalry is building up, as is evident from some of the critiques of the World Cup demo. And this is actually good: a competitive environment means speedier progress.

Brain-to-Brain Interface

On February 28, we published an article describing the first brain-to-brain interface:

A Brain-to-Brain Interface for Real-Time Sharing of Sensorimotor Information

Miguel Pais-Vieira, Mikhail Lebedev, Carolina Kunicki, Jing Wang & Miguel A. L. Nicolelis

Scientific Reports 3, doi:10.1038/srep01319

In brief, this interface interconnected two rats. We called one the “encoder” and the other the “decoder”. While the encoder performed a two-choice task, its cortical signals were recorded with implanted electrodes, lightly processed (with a sigmoid transfer function) and transmitted to the brain of the decoder in the form of intracortical microstimulation. The decoder successfully learned to interpret the microstimulation and reproduced the encoder’s behavior with ~70% accuracy. It should be noted that, in addition to the communication channel from the encoder to the decoder, there was also a feedback loop from the decoder to the encoder: the encoder was rewarded each time the decoder got it right. This feedback encouraged the encoder to do a good job of generating a readable code.
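To give a flavor of the encoder-to-decoder transformation, here is a minimal sketch of how an ensemble spike count could be passed through a sigmoid and converted into a number of ICMS pulses. The midpoint, slope and maximum pulse count below are my own illustrative numbers, not the values used in the paper.

```python
import numpy as np

# Illustrative sketch of an encoder -> decoder transform in a brain-to-brain
# interface: the encoder rat's ensemble spike count on a trial is passed through
# a sigmoid and scaled to a number of ICMS pulses delivered to the decoder rat's
# cortex. Midpoint, slope and maximum pulse count are hypothetical values.


def spikes_to_icms_pulses(spike_count, midpoint=40.0, slope=0.2, max_pulses=60):
    """Map an ensemble spike count to an ICMS pulse count via a sigmoid."""
    p = 1.0 / (1.0 + np.exp(-slope * (spike_count - midpoint)))
    return int(round(p * max_pulses))


# A low spike count maps to few pulses, a high count to many:
print(spikes_to_icms_pulses(20))   # -> 1
print(spikes_to_icms_pulses(60))   # -> 59
```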

Miguel Pais-Vieira, an extremely talented neuroscientist at Duke University, performed the major part of this tremendous experimental work. Jing Wang and Carolina Kunicki assisted him in the experiments. Curiously, Carolina handled rats many thousands of miles away from Durham, in Natal, Brazil. The Durham and Natal rats were connected through the Internet.

This work was greatly inspired by the ideas of Miguel Nicolelis, who has been talking to us about brain-to-brain interfaces and various ways to implement them for quite a while now, more than two years. Another of Nicolelis’s dreams has come true!

Although this first implementation of a brain-to-brain interface may seem relatively simple (see some critical comments below), the rat-to-rat dyad can be scaled up fairly easily to incorporate more than two rats. This is where the complexity will start: an “organic computer”, according to Miguel Nicolelis. I think that many research laboratories will rush in the near future to implement “organic computers” made of several rats or other animals.

I was very happy to help Miguel Pais-Vieira with what I hope was useful input on the experimental design, data analysis and writing. I think this is just the beginning for Miguel Jr. (Miguel Sr. being Miguel Nicolelis). Soon he will produce even more amazing results.

Interestingly, the paper received critical comments from our competitors in the field of brain-machine interfaces. Here are those comments:

But Sliman Bensmaia, a neuroscientist from the University of Chicago in Illinois, says that if the goal is to make better neural prosthetics, “the design seems convoluted and irrelevant”. And if it is to build a computer, “the proposition is speculative and the evidence underwhelming”.

Bensmaia is developing artificial sensations based on microstimulation of the primary somatosensory cortex but, to the best of my knowledge, has not yet produced prominent publications on the topic. A semi-serious answer to this can be found in Wikipedia:

“In the process of developing an invention, the initial idea may change. The invention may become simpler, more practical, it may expand, or it may even morph into something totally different. Working on one invention can lead to others too.”

Lee Miller, a physiologist at Northwestern University in Evanston, Illinois, says that Nicolelis’s team has made many important contributions to neural interfaces, but the current paper could be mistaken for a “poor Hollywood science-fiction script”. He adds, “It is not clear to what end the effort is really being made.”

Miller is known for his work in which cortical neurons were connected to a functional electrical stimulator that activated forearm muscles to reproduce hand grasping. Another semi-serious reply based on a Wikipedia quote:

“Play may lead to invention. Childhood curiosity, experimentation, and imagination can develop one’s play instinct—an inner need according to Carl Jung. Inventors feel the need to play with things that interest them, and to explore, and this internal drive brings about novel creations.”

But Andrew Schwartz, a neurobiologist at the University of Pittsburgh in Pennsylvania, notes that the decoders performed poorly, even though they had to solve only a basic task with just two choices. “Although this may sound like ‘mental telemetry’, it was a very simple demonstration of binary detection and binary decision-making,” he says. “To be of real interest, some sort of continuous spectrum of values should be decoded, transmitted and received.”

Schwartz is well known for his work on brain-machine interfaces. “Some sort of continuous spectrum”? Let’s not rush. Also from Wikipedia:

“Morse code is a method of transmitting text information as a series of on-off tones, lights, or clicks that can be directly understood by a skilled listener or observer without special equipment.”

“It’s a pretty cool idea that they’re in tune with each other and working together,” said neuroscientist Bijan Pesaran of New York University. But Pesaran says he could use some more convincing that this is what’s actually going on. For example, he’d like to see the researchers extend the experiment to see if the rats on the receiving end of the brain-to-brain communication link could improve their performance even more. “If you could see them learning to do it better and faster, then I’d really be impressed.”

Pesaran is an expert in association cortical areas. “Better and faster?” But what if the first rat deliberately lies? From Wikipedia:

“Chinese whispers (or telephone in the United States) is a game played around the world, in which one person whispers a message to another, which is passed through a line of people until the last player announces the message to the entire group. Errors typically accumulate in the retellings, so the statement announced by the last player differs significantly, and often amusingly, from the one uttered by the first. Some players also deliberately alter what is being said in order to guarantee a changed message by the end of it.”

On a more serious note, I am not sure that it is appropriate or productive to place critical comments on a peer-reviewed scientific publication in the popular media. If someone wants to criticize a scientific study, why not do it in the scientific literature (or in a blog), where the authors could then rebut the criticism? Popular articles by their nature cannot convey all the details of scientific controversies and may easily mislead readers.

In any case, I think that a new era has just begun: the era of brain-to-brain interfaces.