Our “approach seems unlikely to scale to the broad range of limb movements”

I was looking at the grants currently funded by NIH, and found one related to our previous work:


My understanding is that the full texts of NIH grant applications are hidden from the public, and only a short description of each funded grant is available. In this description, I found a reference to our previous work:

“Early attempts at restoring somatosensation used intracortical microstimulation (ICMS) to activate somatosensory cortex (s1), requiring animals to learn largely arbitrary patterns of stimulation to represent two or three virtual objects or to navigate in two-dimensional space. While an important beginning, this approach seems unlikely to scale to the broad range of limb movements and interactions with objects that we experience in daily life.”

This apparently refers to our paper:

O’Doherty, J. E., Lebedev, M. A., Ifft, P. J., Zhuang, K. Z., Shokur, S., Bleuler, H., & Nicolelis, M. A. (2011). Active tactile exploration using a brain-machine-brain interface. Nature, 479(7372), 228–231.

where a brain-machine-brain interface was demonstrated that simultaneously controlled an avatar arm and delivered artificial tactile feedback using intracortical microstimulation of the primary somatosensory cortex (S1). Monkeys performed an active exploration task, where they scanned several virtual objects with an avatar hand to find one with a particular artificial texture mimicked by a temporal pattern of microstimulation.
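For intuition, here is a toy sketch of how two artificial textures could be distinguished purely by the temporal pattern of ICMS pulses, as in that design. The packet structure and all parameters below are hypothetical illustrations, not the values used in the paper:

```python
# Toy sketch: two "artificial textures" encoded only by the temporal
# grouping of ICMS pulses (hypothetical parameters, not the paper's).
def pulse_train(duration_s, packet_rate_hz, pulses_per_packet, ipi_s):
    """Return pulse timestamps: packets of pulses repeated at packet_rate_hz,
    with ipi_s seconds between pulses inside a packet."""
    n_packets = round(duration_s * packet_rate_hz)
    period = 1.0 / packet_rate_hz
    return [k * period + i * ipi_s
            for k in range(n_packets)
            for i in range(pulses_per_packet)]

# Same total pulse count over 1 s, different temporal grouping:
texture_a = pulse_train(1.0, packet_rate_hz=10, pulses_per_packet=3, ipi_s=0.005)
texture_b = pulse_train(1.0, packet_rate_hz=5,  pulses_per_packet=6, ipi_s=0.005)
```

Because the two trains deliver the same number of pulses, the monkey must discriminate the temporal pattern itself, not the overall amount of stimulation.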

I am wondering why this approach “seems unlikely to scale to the broad range of limb movements and interactions with objects that we experience in daily life”.

The grantees propose the following solution:

“To move the field past this hurdle, we propose to replace both touch and proprioception by using multi- electrode ICMS to produce naturalistic patterns of neuronal activity in S1 of monkeys.”

So, they have the following ideas:

  1. Add artificial proprioception.
  2. Add more stimulating electrodes.
  3. Produce naturalistic patterns of neuronal activity.

The first idea is relatively novel, although we submitted a grant on this topic to NIH several years ago, and it was rejected. Apparently, NIH has now changed its mind.

The second idea has actually been implemented already, so it is not particularly new:

Fitzsimmons, N. A., Drake, W., Hanson, T. L., Lebedev, M. A., & Nicolelis, M. A. L. (2007). Primate reaching cued by multichannel spatiotemporal cortical microstimulation. Journal of Neuroscience, 27(21), 5593–5602.

As to the third idea, I do not think this would really be possible, because intracortical microstimulation is a highly artificial way to activate S1 neurons (actually, mostly fibers), so it is unrealistic to expect it to produce truly “naturalistic patterns of neuronal activity”.

The grant includes three specific aims. The first one:

“In Aim 1, we will develop model-optimized mappings between limb state (pressure on the fingertip, or motion of the limb) and the patterns of ICMS required to evoke S1 activation that mimics that of natural inputs. These maps will account for both the dynamics of neural responses and the biophysics of ICMS. We anticipate that this biomimetic approach will evoke intuitive sensations that require little or no training to interpret. We will validate the maps by comparing natural and ICMS-evoked S1 activity using novel hardware that allows for concurrent ICMS and neural recording.”
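To make concrete the kind of “model-optimized mapping” Aim 1 describes, here is a toy least-squares sketch. Everything here is hypothetical: the data are synthetic, and the response of S1 neurons to each electrode is assumed to be linear, which real responses to ICMS are not:

```python
import numpy as np

# Toy sketch (hypothetical data, assumed linear response model) of fitting
# a mapping from limb state (fingertip pressure) to per-electrode ICMS
# pulse rates, so that the modeled evoked S1 rates match "natural" rates.
rng = np.random.default_rng(0)

n_samples, n_electrodes, n_neurons = 200, 4, 16
pressure = rng.uniform(0, 1, size=(n_samples, 1))               # limb state
natural_rates = pressure @ rng.uniform(5, 20, (1, n_neurons))   # target S1 activity

# Assumed (pre-measured) linear response of each neuron to each electrode:
response = rng.uniform(0, 1, (n_electrodes, n_neurons))

# Step 1: per-sample pulse rates R such that R @ response ≈ natural_rates.
R, *_ = np.linalg.lstsq(response.T, natural_rates.T, rcond=None)
pulse_rates = R.T                                               # (n_samples, n_electrodes)

# Step 2: the mapping itself, limb state -> optimal ICMS pulse rates.
mapping, *_ = np.linalg.lstsq(pressure, pulse_rates, rcond=None)
```

Even in this idealized linear setting, the fit is only approximate whenever the number of electrodes is far smaller than the number of neurons to be matched; with nonlinear, fiber-dominated ICMS responses, the gap can only grow.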

I am not particularly impressed by this plan. Even if they manage to induce, in a group of S1 neurons, activity patterns that resemble the natural ones to a certain extent, this achievement will be meaningless because they will not generate “natural” patterns in the millions of neurons with which these particular neurons are reciprocally interconnected. So the overall pattern will never be natural, and it is quite naive to expect that “this biomimetic approach will evoke intuitive sensations that require little or no training to interpret”. If the goal is to achieve “intuitive sensations”, the sensations themselves should be the parameter being optimized, not the firing patterns of a few cortical neurons with a very unclear relationship to sensations. (For example, S1 neurons are quite active even in the absence of somatosensory stimulation.)

The second aim:

“In Aim 2, we will test the ability of monkeys to recognize objects using artificial touch. Having learned to identify real objects by touch, animals will explore virtual objects with an avatar that shadows their own hand movements, receiving artificial touch sensations when the avatar contacts objects. We will test their initial performance on the virtual stereognosis task without learning, as well as their improvements in performance over time.”

This sounds like an interesting monkey training venture, but how is it fundamentally different from the design of O’Doherty et al.? This looks like a proposal to replace O’Doherty’s virtual textures with invisible shapes that monkeys would actively explore with an avatar hand, most likely in a 3D virtual environment. Given the results of O’Doherty et al., it would not be a big surprise that monkeys can do this. In fact, O’Doherty has already shown in another experiment that monkeys can actively explore invisible shapes (gratings). This work was presented at SfN but has not been published yet.

Regarding the proposal to have monkeys compare real objects with the ones mimicked by microstimulation, Romo and his colleagues reported more than a decade ago that monkeys can match microstimulation patterns to vibrotactile patterns applied to the hand. Experiments of this kind can be interpreted in two ways: (1) monkeys feel microstimulation the same way they feel real stimuli, or (2) they are simply operantly conditioned to match the two kinds of stimuli. The same problem remains for the “biomimetic” multichannel stimulation proposed in the grant.

Aim 3:

“Aim 3 will be similar, but will focus on proprioception. We will train monkeys to report the direction of brief force bumps applied to their hand. After training, we will replace the actual bumps with virtual bumps created by patterned ICMS, again asking the monkeys to report their perceived sense of the direction and magnitude of the perturbation.”

This looks flawed to me. First, it is too simplistic compared to the original ambitious intent to generate artificial proprioception. Second, it is a match-to-sample task that can be operantly conditioned even if the sensation from microstimulation is not of a proprioceptive kind (see above). Third, if a brief force is applied, the tactile receptors of the hand are strongly stimulated, so this experiment does not isolate proprioception per se.

Finally, Aim 4:

“In Aim 4, we will temporarily paralyze the monkey’s arm, thereby removing both touch and proprioception, mimicking the essential characteristics of a paralyzed patient. The avatar will be controlled based on recordings from motor cortex and guided by artificial somatosensation. The monkey will reach to a set of virtual objects, find one with a particular shape, grasp it, and move it to a new location.”

This looks like an interesting demonstration, but the scientific question is not very clear. If in Aims 1–3 monkeys respond to microstimulation, and peripheral sensation is nonessential, what is the big deal about removing it with an anesthetic block? Monkeys can probably be trained to perform this brain-machine-brain interface task, but what would be the scientific advance from this demonstration? The design itself very much resembles the O’Doherty et al. study (with some additions, such as the virtual object becoming attached to the virtual hand). Surprisingly, the design does not seem to incorporate artificial proprioception, presumably the key innovation of this grant.

The grantees conclude:

“If we can demonstrate that this model-optimized, biomimetic feedback is informative and easy to learn, it should form the basis for robust, scalable, somatosensory feedback for BMIs.”

Well, I am not really convinced. Why would the already existing findings of O’Doherty et al. not “form the basis for robust, scalable, somatosensory feedback for BMIs”?

In conclusion, in my opinion, this grant represents only an incremental development over the published work of O’Doherty et al. The experimental plan utilizes the key elements of the O’Doherty et al. study (avatar arm, active search task, brain-machine-brain interface) and the Fitzsimmons et al. study (multichannel stimulation). Several novel features are added to this framework, such as a 3D virtual environment and sensations from more than one virtual finger, but there is nothing revolutionary about them.

I have to disagree with:

“While an important beginning, this approach seems unlikely to scale to the broad range of limb movements.”

The good old O’Doherty et al. approach still looks like the gold standard for investigations of this kind.





Monkeys Drive Wheelchair with Their Cortical Activity

Our new study “Direct Cortical Control of Primate Whole-Body Navigation in a Mobile Robotic Wheelchair” presents the first demonstration of wheelchair navigation enabled through a cortical brain-machine interface (BMI).

Previous neurophysiological and BMI research in primates mostly focused on eye and arm movements, whereas brain mechanisms of whole-body movements (for example, jumping from one tree to another) were virtually neglected. This is a serious impediment to the development of invasive BMIs for wheelchair control. Such devices are needed for patients suffering from severe body paralysis.

Rajangam et al. showed for the first time that rhesus monkeys can navigate while seated in a wheelchair, using their cortical activity as the control signal. Monkeys were chronically implanted with multichannel electrode arrays, which recorded from several hundred cortical neurons in the sensorimotor cortex. The BMI transformed this neuronal ensemble activity into the wheelchair’s linear (backward and forward) and rotational (leftward and rightward) velocity.
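The core of such a transformation can be sketched as a linear decoder fit by least squares on training data. This is a minimal illustration with synthetic data, not the study’s actual decoder or parameters:

```python
import numpy as np

# Minimal sketch (assumed, not the study's decoder): a linear mapping from
# binned ensemble firing rates to wheelchair linear and rotational velocity.
rng = np.random.default_rng(1)

n_bins, n_neurons = 500, 100
rates = rng.poisson(5, size=(n_bins, n_neurons)).astype(float)   # spike counts
true_W = rng.normal(0, 0.1, size=(n_neurons, 2))                 # unknown tuning
velocity = rates @ true_W + rng.normal(0, 0.5, (n_bins, 2))      # [linear, rotational]

X = np.hstack([rates, np.ones((n_bins, 1))])                     # add bias column
W, *_ = np.linalg.lstsq(X, velocity, rcond=None)                 # fit decoder weights

decoded = X @ W                                                  # decoded velocities
```

In a closed-loop BMI, the decoded linear and rotational velocities would be sent to the wheelchair controller on every bin, and the weights would typically be refit or adapted as the monkey learns.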

Monkeys successfully learned to navigate the wheelchair from one corner of the room to the other corner, where a food reward was placed in a feeder. Their ability to drive the wheelchair improved over several weeks of training. The navigation did not require any steering device (for example, a joystick); monkeys produced the wheelchair movements just by imagining themselves moving.

The demonstration was made possible by Tim Hanson’s multichannel wireless recording system and the brilliant engineering of Gary Lehew, Po-He Tseng and Allen Yin.

There is still a long way to go before invasive BMIs of this type can be implemented in human patients. But the proof-of-concept demonstration is there!