Our “approach seems unlikely to scale to the broad range of limb movements”

I was looking through the grants currently funded by NIH and found one related to our previous work.


My understanding is that the full texts of NIH grants are not available to the public; only a short description is posted. In this description, I found a reference to our previous work:

“Early attempts at restoring somatosensation used intracortical microstimulation (ICMS) to activate somatosensory cortex (s1), requiring animals to learn largely arbitrary patterns of stimulation to represent two or three virtual objects or to navigate in two-dimensional space. While an important beginning, this approach seems unlikely to scale to the broad range of limb movements and interactions with objects that we experience in daily life.”

This apparently refers to our paper:

O’Doherty, J. E., Lebedev, M. A., Ifft, P. J., Zhuang, K. Z., Shokur, S., Bleuler, H., & Nicolelis, M. A. (2011). Active tactile exploration using a brain-machine-brain interface. Nature, 479(7372), 228–231.

where a brain-machine-brain interface was demonstrated that simultaneously controlled an avatar arm and delivered artificial tactile feedback through intracortical microstimulation of the primary somatosensory cortex (S1). Monkeys performed an active exploration task in which they scanned several virtual objects with an avatar hand to find the one with a particular artificial texture, mimicked by a temporal pattern of microstimulation.
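The texture encoding just described can be sketched as a pulse-train generator: each artificial texture is simply a distinct temporal pattern of ICMS pulses. The packet structure and all numbers below are my own hypothetical illustration, not the parameters published in the paper.

```python
def texture_pulse_train(period_ms, pulses_per_packet, packet_interval_ms, duration_ms):
    """Return pulse onset times (ms) for a packeted ICMS pattern.

    Hypothetical sketch: in the actual study, artificial textures were
    distinguished by the temporal patterning of microstimulation; the
    packet parameters here are illustrative only.
    """
    times = []
    packet_start = 0.0
    while packet_start < duration_ms:
        for i in range(pulses_per_packet):
            t = packet_start + i * period_ms
            if t < duration_ms:
                times.append(t)
        packet_start += packet_interval_ms
    return times

# Two "textures" that differ only in their temporal pattern:
coarse = texture_pulse_train(10.0, 3, 100.0, 300.0)   # sparse packets
fine = texture_pulse_train(5.0, 10, 50.0, 300.0)      # dense packets
```

A monkey discriminating such patterns is, in effect, distinguishing packet timing, which is what makes the "largely arbitrary patterns of stimulation" wording in the grant description at least superficially accurate.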

I am wondering why this approach “seems unlikely to scale to the broad range of limb movements and interactions with objects that we experience in daily life”.

The grantees propose the following solution:

“To move the field past this hurdle, we propose to replace both touch and proprioception by using multi-electrode ICMS to produce naturalistic patterns of neuronal activity in S1 of monkeys.”

So, they have the following ideas:

  1. Add artificial proprioception.
  2. Add more stimulating electrodes.
  3. Produce naturalistic patterns of neuronal activity.

The first idea is relatively novel, although we submitted a grant on this topic to NIH several years ago, and it was rejected. Apparently, NIH has now changed its mind.

The second idea has actually been implemented already, so it is not particularly new:

Fitzsimmons, N. A., Drake, W., Hanson, T. L., Lebedev, M. A., & Nicolelis, M. A. L. (2007). Primate reaching cued by multichannel spatiotemporal cortical microstimulation. Journal of Neuroscience, 27(21), 5593–5602.

As to the third idea, I do not think this is really possible: intracortical microstimulation is a highly artificial way to activate S1 neurons (in fact, it mostly activates fibers), so it is unrealistic to expect it to produce truly “naturalistic patterns of neuronal activity”.

The grant includes three specific aims. The first one:

“In Aim 1, we will develop model-optimized mappings between limb state (pressure on the fingertip, or motion of the limb) and the patterns of ICMS required to evoke S1 activation that mimics that of natural inputs. These maps will account for both the dynamics of neural responses and the biophysics of ICMS. We anticipate that this biomimetic approach will evoke intuitive sensations that require little or no training to interpret. We will validate the maps by comparing natural and ICMS-evoked S1 activity using novel hardware that allows for concurrent ICMS and neural recording.”

I am not particularly impressed by this plan. Even if they manage to induce, in a group of S1 neurons, activity patterns that resemble natural ones to a certain extent, this achievement will be meaningless, because they will not generate “natural” patterns in the millions of neurons with which these particular ones are reciprocally interconnected. So the overall pattern will never be natural, and it is quite naive to expect that “this biomimetic approach will evoke intuitive sensations that require little or no training to interpret”. If the goal is to achieve “intuitive sensations”, the sensations themselves should be the parameter being optimized, not the firing patterns of a few cortical neurons with a very unclear relationship to sensations. (For example, S1 neurons are quite active even in the absence of somatosensory stimulation.)

The second aim:

“In Aim 2, we will test the ability of monkeys to recognize objects using artificial touch. Having learned to identify real objects by touch, animals will explore virtual objects with an avatar that shadows their own hand movements, receiving artificial touch sensations when the avatar contacts objects. We will test their initial performance on the virtual stereognosis task without learning, as well as their improvements in performance over time.”

This sounds like an interesting monkey-training venture, but how is it fundamentally different from the O’Doherty et al. design? It looks like a proposal to replace O’Doherty’s virtual textures with invisible shapes that monkeys would actively explore with an avatar hand, most likely in a 3D virtual environment. Given the results of O’Doherty et al., it would not be a big surprise that monkeys can do this. In fact, O’Doherty has already shown in another experiment that monkeys can actively explore invisible shapes (gratings). That work was presented at SFN but has not been published yet.

Regarding the proposal to have monkeys compare real objects with ones mimicked by microstimulation, Romo and his colleagues reported more than a decade ago that monkeys can match microstimulation patterns to vibrotactile patterns applied to the hand. Experiments of this kind can be interpreted in two ways: (1) monkeys feel microstimulation the same way they feel real stimuli, or (2) they are simply operantly conditioned to match the two kinds of stimuli. The same problem remains for the “biomimetic” multichannel stimulation proposed in the grant.

Aim 3:

“Aim 3 will be similar, but will focus on proprioception. We will train monkeys to report the direction of brief force bumps applied to their hand. After training, we will replace the actual bumps with virtual bumps created by patterned ICMS, again asking the monkeys to report their perceived sense of the direction and magnitude of the perturbation.”

This looks flawed to me. First, it is too simplistic compared to the original ambitious intent to generate artificial proprioception. Second, it is a match-to-sample task that can be operantly conditioned even if the sensation from microstimulation is not of a proprioceptive kind (see above). Third, a brief force applied to the hand intensively stimulates tactile receptors, so the experiment does not isolate proprioception per se.

Finally, Aim 4:

“In Aim 4, we will temporarily paralyze the monkey’s arm, thereby removing both touch and proprioception, mimicking the essential characteristics of a paralyzed patient. The avatar will be controlled based on recordings from motor cortex and guided by artificial somatosensation. The monkey will reach to a set of virtual objects, find one with a particular shape, grasp it, and move it to a new location.”

This looks like an interesting demonstration, but the scientific question is not very clear. If monkeys respond to microstimulation in Aims 1–3, and peripheral sensation is nonessential, what is the big deal about removing it with an anesthetic block? Monkeys can probably be trained to perform this brain-machine-brain interface task, but what would be the scientific advance from such a demonstration? The design itself very much resembles the O’Doherty et al. study (with some additions, like the virtual object becoming attached to the virtual hand). Surprisingly, the design does not seem to incorporate artificial proprioception, presumably a key innovation of this grant.

The grantees conclude:

“If we can demonstrate that this model-optimized, biomimetic feedback is informative and easy to learn, it should form the basis for robust, scalable, somatosensory feedback for BMIs.”

Well, I am not really convinced. Why would the already existing findings of O’Doherty et al. not “form the basis for robust, scalable, somatosensory feedback for BMIs”?

In conclusion, in my opinion, this grant represents only an incremental development compared to the published work of O’Doherty et al. The experimental plan uses the key elements of the O’Doherty et al. study (avatar arm, active search task, brain-machine-brain interface) and the Fitzsimmons et al. study (multichannel stimulation). Several novel features are added to this framework, such as a 3D virtual environment and sensations from more than one virtual finger, but there is nothing revolutionary about them.

I have to disagree with:

“While an important beginning, this approach seems unlikely to scale to the broad range of limb movements.”

The good old O’Doherty et al. approach still looks like the gold standard for investigations of this kind.


Brain-to-Brain Interface

On February 28, we published an article describing the first brain-to-brain interface:

A Brain-to-Brain Interface for Real-Time Sharing of Sensorimotor Information

Miguel Pais-Vieira, Mikhail Lebedev, Carolina Kunicki, Jing Wang & Miguel A. L. Nicolelis

Scientific Reports 3, doi:10.1038/srep01319

In brief, this interface interconnected two rats. One we called the “encoder”, the other the “decoder”. While the encoder performed a two-choice task, its cortical signals were recorded with implanted electrodes, lightly processed (passed through a sigmoid transfer function), and transmitted to the brain of the decoder in the form of intracortical microstimulation. The decoder learned to interpret the microstimulation and reproduced the encoder’s behavior with ~70% accuracy. It should be noted that, in addition to the communication channel from the encoder to the decoder, there was also a feedback loop from the decoder to the encoder: the encoder was rewarded each time the decoder got it right. This feedback encouraged the encoder to generate a readable code.
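As a rough sketch of this encoder-to-decoder transform (my own illustrative reconstruction; the midpoint, slope, and pulse counts below are hypothetical, not the values used in the paper), the recorded firing rate can be passed through a logistic function and scaled to a number of microstimulation pulses:

```python
import math

def sigmoid(x, midpoint=10.0, slope=0.5):
    """Logistic transfer function; parameters are illustrative."""
    return 1.0 / (1.0 + math.exp(-slope * (x - midpoint)))

def encoder_to_pulses(firing_rate_hz, max_pulses=20):
    """Map the encoder's cortical firing rate to an ICMS pulse count
    delivered to the decoder's cortex (hypothetical scaling)."""
    return round(max_pulses * sigmoid(firing_rate_hz))

# Low activity maps to few pulses; high activity saturates near max_pulses.
for rate in (0.0, 10.0, 30.0):
    print(rate, encoder_to_pulses(rate))
```

The appeal of a sigmoid here is that it keeps the stimulation within a bounded, safe range while preserving a monotonic mapping from the encoder's activity.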

Miguel Pais-Vieira, an extremely talented neuroscientist at Duke University, performed the major part of this tremendous experimental work. Jing Wang and Carolina Kunicki assisted him in the experiments. Curiously, Carolina handled rats many thousands of miles away from Durham, in Natal, Brazil. The Durham and Natal rats were connected through the Internet.

This work was greatly inspired by the ideas of Miguel Nicolelis, who has been talking to us about brain-to-brain interfaces and various ways to implement them for quite a while now, more than two years. Another of Nicolelis’s dreams has come true!

Although this first implementation of a brain-to-brain interface may seem relatively simple (see some critical comments below), the rat-to-rat dyad is easily scalable to incorporate more than two rats. This is where the complexity will start: an “organic computer”, in Miguel Nicolelis’s words. I think that many research laboratories will rush in the near future to implement “organic computers” made of several rats or other animals.

I was very happy to help Miguel Pais-Vieira with, I hope, useful input on the experimental design, data analysis and writing. I think this is just the beginning for Miguel, Jr. (Miguel, Sr. being Miguel Nicolelis). Soon he will produce even more amazing results.

Interestingly, the paper received critical comments from our competitors in the field of brain-machine interfaces. Here are these comments:

But Sliman Bensmaia, a neuroscientist from the University of Chicago in Illinois, says that if the goal is to make better neural prosthetics, “the design seems convoluted and irrelevant”. And if it is to build a computer, “the proposition is speculative and the evidence underwhelming”.

Bensmaia is developing artificial sensations based on microstimulation of the primary somatosensory cortex but, to the best of my knowledge, has not yet produced prominent publications on the topic. A semi-serious answer to this can be found in Wikipedia:

“In the process of developing an invention, the initial idea may change. The invention may become simpler, more practical, it may expand, or it may even morph into something totally different. Working on one invention can lead to others too.”

Lee Miller, a physiologist at Northwestern University in Evanston, Illinois, says that Nicolelis’s team has made many important contributions to neural interfaces, but the current paper could be mistaken for a “poor Hollywood science-fiction script”. He adds, “It is not clear to what end the effort is really being made.”

Miller is known for his work in which cortical neuronal activity was used to drive a functional electrical stimulator that activated forearm muscles to reproduce hand grasping. Another semi-serious reply, based on a Wikipedia quote:

“Play may lead to invention. Childhood curiosity, experimentation, and imagination can develop one’s play instinct—an inner need according to Carl Jung. Inventors feel the need to play with things that interest them, and to explore, and this internal drive brings about novel creations.”

But Andrew Schwartz, a neurobiologist at the University of Pittsburgh in Pennsylvania, notes that the decoders performed poorly, even though they had to solve only a basic task with just two choices. “Although this may sound like ‘mental telemetry’, it was a very simple demonstration of binary detection and binary decision-making,” he says. “To be of real interest, some sort of continuous spectrum of values should be decoded, transmitted and received.”

Schwartz is well known for his work on brain-machine interfaces. “Some sort of continuous spectrum”? Let’s not rush. Also from Wikipedia:

“Morse code is a method of transmitting text information as a series of on-off tones, lights, or clicks that can be directly understood by a skilled listener or observer without special equipment.”

“It’s a pretty cool idea that they’re in tune with each other and working together,” said neuroscientist Bijan Pesaran of New York University. But Pesaran says he could use some more convincing that this is what’s actually going on. For example, he’d like to see the researchers extend the experiment to see if the rats on the receiving end of the brain-to-brain communication link could improve their performance even more. “If you could see them learning to do it better and faster, then I’d really be impressed.”

Pesaran is an expert in association cortical areas. “Better and faster?” But what if the first rat deliberately lies? From Wikipedia:

“Chinese whispers (or telephone in the United States) is a game played around the world, in which one person whispers a message to another, which is passed through a line of people until the last player announces the message to the entire group. Errors typically accumulate in the retellings, so the statement announced by the last player differs significantly, and often amusingly, from the one uttered by the first. Some players also deliberately alter what is being said in order to guarantee a changed message by the end of it.”

On a more serious note, I am not sure it is appropriate or productive to publish critical comments on a peer-reviewed scientific publication in the popular media. If someone wants to criticize a scientific study, why not do it in the scientific literature (or a blog), where the authors could then rebut the criticism? Popular articles by their nature cannot convey all the details of scientific controversies and may easily mislead readers.

In any case, I think that a new era has just begun: the era of brain-to-brain interfaces.