Neurorobotics

May 20, 2019 (updated October 4, 2019)

Neurorobotics, the complex field of robotic interfacing modeled on the mechanisms underlying the central nervous system, has undergone rapid development in the 21st century, producing everything from self-navigating robotic vacuum cleaners, in-home assistants for the care of the elderly and disabled, and autonomous drones to stunning discoveries about the basis of neurological function. Here, we will discuss the theory underlying neurorobotics and its contributions both to the basic sciences and to real-world applications. Further, we will examine exceptional researchers in several sub-fields of neurorobotics and highlight their significant contributions to the field.

Though originally referring to biological organisms, the term robot is currently accepted as meaning “a programmable, multi-functional manipulator designed to move material, parts or specialized devices through variable programmed motions for the performance of a variety of tasks,” according to the Robot Institute of America.[1] Expanding upon this, neurorobotics is the marriage of robotics, neuroscience, and artificial intelligence (AI). The applications of these neurorobots range from surgical and mechanical assistance to the study of basic nervous system functions. The field's definition further extends to the brain-machine interfaces which underlie modern prosthetics.

Artificial Intelligence: what and why?

The use of strong AI to simulate neural networks and their resultant output, such as cognitive and performative behavior, has advanced dramatically in the past decade and continues to evolve rapidly in tandem with basic neuroscientific research and technological advancements in the simulation of neural networks. This work has led to a variety of neurorobotic end-products, from autonomous social assistants to mechanically assisted laboratory animals. As viewed by researchers, the interplay of AI and robotics allows for a reverse-engineering approach to neuroscience; in other words, one way to understand the brain and the function of its roughly 10¹¹ neurons is to build the brain digitally until we have an accurate representation of its real-world processes.[2]

The principal approach for AI development in the quest for simulated cognition is known as reinforcement learning. Briefly, reinforcement learning is a means of problem solving via positive and negative feedback directed at a goal-oriented agent. In humans or animals, for instance, reinforcement learning occurs when the person or animal attempts a task, either succeeds or fails, and then modifies its behavior according to that outcome. When first solving a maze, a mouse will either succeed or fail to reach the end-goal. This results in either a reward (e.g. food, a treat, or safety) or a penalty (e.g. lack of food/treat or inability to find the safe-zone), respectively. In subsequent trials, the mouse will use that previous feedback either to solve the maze more efficiently in the case of previous success, or to correct its previous errors in a modified attempt to solve the maze.
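The mouse-in-a-maze loop above maps directly onto tabular Q-learning, the textbook formulation of reinforcement learning. The sketch below is illustrative only: the corridor environment, reward values, and parameters are my own choices, not drawn from any study cited here.

```python
import random

# Toy environment: a corridor of 5 states; the "maze exit" (goal) is state 4.
# Actions: 0 = step left, 1 = step right. Reaching the goal earns +1 (the
# treat); every other step costs a small penalty, like wasted effort.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

Q = [[0.0, 0.0] for _ in range(N_STATES)]  # feedback stored per state/action

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == GOAL else -0.01
    return nxt, reward, nxt == GOAL

random.seed(0)
for episode in range(200):  # repeated trials, like the mouse's attempts
    s, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit past feedback, occasionally explore
        a = random.randrange(2) if random.random() < EPSILON else (
            0 if Q[s][0] > Q[s][1] else 1)
        s2, r, done = step(s, a)
        # adjust the stored estimate according to the outcome of this attempt
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

# After training, the greedy policy heads right (action 1) toward the goal.
policy = [0 if q[0] > q[1] else 1 for q in Q]
print(policy[:GOAL])  # [1, 1, 1, 1]
```

The update rule is the whole trick: success or failure at each step nudges the stored value for that state-action pair, and later trials exploit those values, exactly as the mouse exploits its earlier outcomes.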

Researchers have begun applying the concept of reinforcement learning to in silico models of cognition. Guided by the principles of neurons known as “grid cells” in the brain’s entorhinal cortex, which have been found to provide mapping information for other brain regions involved in spatial perception and goal-directed behaviors,[3] the AI company DeepMind has developed an agent that can learn to solve mazes at a level of efficiency that rivals or exceeds that of mammals, including humans.[4] Remarkably, this form of deep reinforcement learning allowed their agent to discover non-traditional (i.e. shortcut) routes for solving spatial tasks, and introduced cognitive flexibility that allowed for behavioral adaptation when conditions modified the task itself or randomly erased bits of information. These findings in particular are indicative of significant advances in cognitive simulation, as real-world problem solving is regularly challenged by changing environmental conditions and the limits of short-term memory.

Neurorobotics in Basic Science: Learning About the Brain by Building the Brain

As mentioned above, one of the most valuable aspects of neurorobotic research is its reverse-engineering approach to learning about the brain. By modeling AI and AI-driven neurorobots after the brain, or even after individual brain regions, we gain an unparalleled vantage point for observing its function and development. Such observation and subsequent manipulation can serve as an extremely powerful tool for understanding brain function that is inherently difficult to observe in biological organisms. The exponential growth from simple circuits to computer-driven models containing millions of neuronal connections has already led to countless discoveries and neurological illuminations, and shows no signs of slowing.

Dr. Jeffrey Krichmar

One of the true pioneers of AI in neurorobotics, and of the use of such modeling as a means to study neuronal function, is Dr. Jeffrey Krichmar at the University of California, Irvine, in the United States. Since the end of the 20th century, Dr. Krichmar has championed such projects as Darwin VII, a robot capable of perceptual categorization, as a means of learning more about brain function. Years later, a similar project, Darwin X,[5,6] was used to model the hippocampus and was indeed capable of performing functions served by the hippocampus in mammals, such as spatial memory. Darwin X, which comprised nearly 1.5 million digital synapses across 100,000 neurons, was able to navigate a dry variant of the Morris water maze, devising various routes to find a hidden platform using spatial information and eventually developing hippocampal place cells like those seen in both rodent and human brains.
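The place-cell tuning that emerged in Darwin X can be illustrated with a minimal model. This is my own toy sketch, not Darwin X's actual architecture: each simulated cell fires maximally near its preferred location, and position can be decoded from the population's activity.

```python
import math

# Five hypothetical "place cells" on a 1-D track (positions 0..1), each with
# a preferred location and a Gaussian falloff in firing rate around it.
CENTERS = [0.0, 0.25, 0.5, 0.75, 1.0]
SIGMA = 0.1

def place_activity(pos):
    """Firing rate of each cell when the animal/robot is at `pos`."""
    return [math.exp(-((pos - c) ** 2) / (2 * SIGMA ** 2)) for c in CENTERS]

def decode(rates):
    """Population-vector readout: activity-weighted average of preferred spots."""
    return sum(r * c for r, c in zip(rates, CENTERS)) / sum(rates)

# The population's firing pattern carries the position it encodes.
print(round(decode(place_activity(0.5)), 2))  # 0.5
```

Downstream circuitry, biological or simulated, can read position off such a population, which is why the emergence of place-cell-like tuning in Darwin X mattered for its navigation.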

The importance of such a finding cannot be overstated: this very modeling allowed the researchers to trace back the origin of the place-cell firing, leading to the now-accepted model of hippocampal function in which the direct pathway, from the entorhinal cortex to the CA1 region, is used for spatial recall, while the more circuitous route (entorhinal cortex to dentate gyrus, then to CA3 and CA1) is the original route for the acquisition of new spatial information. Extensions of such projects, like Darwin XI,[7] which solved a plus-maze using multi-sensory inputs, further unmasked neural properties of complex brain functions through the study of neurorobots and continue to inform our understanding of the brain. Additionally, these findings have significant real-world applications for autonomous navigation. Dr. Krichmar and his colleagues continue to make headway in autonomous navigation by creating a self-driving robot powered by IBM’s TrueNorth neuromorphic hardware,[8] and by developing a brain-inspired algorithm for adaptive path planning.[9]

AI-driven navigation: Where are we going and how do we get there?

Applying similar learning mechanisms, an international team of neurorobotic researchers in 2015 successfully developed a spider-like robot with significant learning and navigating capacity.[10] Using neural-feedback loop dynamics, the robot was able to find its way through a complex and dynamic environment. Importantly, the robot was not preprogrammed with any information about the nature of its surroundings, and therefore encountered obstacles with the same level of novelty that it would in a real-world environment. In essence, the robot learns by attempting movements and, based on the success or failure of those movements, calibrates its behavior to either continue (in the case of success) or alter its path (in the case of failure). The applications for such an autonomous pathfinder range from search-and-rescue missions to the exploration of dangerous environments that humans cannot enter or where they are otherwise unwelcome.
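The attempt-then-calibrate behavior can be caricatured in a few lines. This is a hypothetical grid world of my own, not the authors' robot: the agent starts with no map, tries moves, and retreats when an attempted move fails (a wall) or leads nowhere new.

```python
def explore(grid, start, goal):
    """Map-free trial-and-error search: attempt moves, remember what worked,
    and back out of dead ends (failures) to try a different path."""
    rows, cols = len(grid), len(grid[0])
    visited, route = {start}, [start]
    while route:
        r, c = route[-1]
        if (r, c) == goal:
            return route  # a successful path assembled from raw attempts
        for dr, dc in ((1, 0), (0, 1), (-1, 0), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in visited):
                visited.add((nr, nc))   # this attempt succeeded: continue
                route.append((nr, nc))
                break
        else:
            route.pop()  # every attempt here failed: alter the path
    return None

GRID = ["S..#.",
        ".#.#.",
        ".#...",
        ".#.#.",
        "...#G"]
route = explore(GRID, (0, 0), (4, 4))
print(route[-1])  # (4, 4): the goal was reached with no prior map
```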

In another experiment, intended to model the deliberative decision-making process known as vicarious trial-and-error (VTE), researchers in Japan had robots driven by neural-network dynamics perform a T-maze task.[11] By intentionally programming inherent instability into the neuronal function, much like the flexibility shown by functioning neurons in the brain, the researchers were able to develop a model of learning that could effectively explore and learn to properly navigate the maze. Their modeling simulates the classic Hebbian theory of neuroscience, which postulates that repeated pairings of activity between two communicating counterparts (such as a pre- and postsynaptic neuron) lead to synaptic plasticity and eventually to learned behaviors. In this way, the experiment functioned as a form of reverse engineering, showing how neurons might communicate in a large-scale system to learn environmental navigation.
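The Hebbian rule that such modeling builds on fits in one line. The values below are purely illustrative: the weight change is the product of pre- and postsynaptic activity, so the connection strengthens only when the two sides fire together.

```python
ETA = 0.1  # illustrative learning rate

def hebbian_update(w, pre, post):
    """dw = eta * pre * post: co-activation strengthens the synapse."""
    return w + ETA * pre * post

w = 0.0
for _ in range(10):                    # repeated paired firing...
    w = hebbian_update(w, pre=1.0, post=1.0)
print(round(w, 2))                     # 1.0: the synapse has strengthened

w_silent = hebbian_update(w, pre=1.0, post=0.0)
print(w_silent == w)                   # True: unpaired activity changes nothing
```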

Dr. Simon Garnier

Navigation is not always an individual activity, a fact not overlooked by Dr. Simon Garnier, who currently works at the Federated Department of Biology at the New Jersey Institute of Technology, where he runs the Swarm Lab. In 2013, Dr. Garnier and colleagues published their findings using robotic ants to better understand the collective navigation decisions made by groups of their namesake insects in nature.[12] By placing the robotic ants in a series of diverse environments and studying their navigation choices, they were able to observe patterns emerging from the relationships between the individuals, the groups, and their environment. They concluded that trail shape has a surprisingly large influence on navigational choices, a finding which they believe applies to group dynamics in a variety of situations and species, including how and why humans choose certain routes under a specific set of conditions and environments. These group dynamics are particularly important because, as Dr. Garnier notes, “collective navigation emerges from interactions between individuals that cannot navigate efficiently if tested alone. This is particularly important, I think, because each of the robots individually is incapable of finding the best path on its own.” His team has also studied robotic modeling of cockroaches[13,14] using Alice robots, with similarly insightful conclusions to be drawn from the naturalistic robot behavior.[15]
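A mean-field toy model of my own construction (not the published experiment) captures the core dynamic the robotic-ant work probes: choices weighted nonlinearly by deposited pheromone amplify a tiny initial bias until the group commits to one branch, even though no individual ever compares the two routes.

```python
# Two branches at a trail bifurcation; the left one has a slight head start.
left, right = 1.1, 1.0          # pheromone levels (arbitrary units)

for _ in range(200):            # 200 ants pass the fork
    # Nonlinear choice rule: preference scales with the square of pheromone.
    p_left = left ** 2 / (left ** 2 + right ** 2)
    left += p_left              # expected deposit on each branch per passage
    right += 1.0 - p_left

share = left / (left + right)
print(share > 0.8)              # True: the colony has committed to one branch
```

The squared term is what breaks the tie; with a linear rule the final split is essentially arbitrary, which is one reason the geometry and feedback of the trail matter so much for the collective outcome.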

Clearly, the dynamic navigation mechanisms underlying these many neurorobotic projects have illuminated potential applications in a variety of fields beyond the robots themselves. For example, autonomous pathfinding and obstacle avoidance may be applied to self-driving vehicles or computer-assisted flight for drones or airplanes. In fact, unmanned AI-driven airplanes have already been shown to be superior to their manned counterparts.[16] Used alone, these forms of AI may replace human participation in such navigation tasks and thereby eliminate the risks of human error. Alternatively, they may be deployed in cooperation with humans to provide assistance in conditions where a person’s perception may be inadequate.

Brain-machine interfacing: the intersection of man and machine

Aside from applications in the development of cyborgs or autonomous robots, reinforcement learning and AI are potential tools for functional assistance in living organisms including animals and humans. In this light, brain-machine interfacing has the potential to revolutionize the field of prosthetics.[17] In recent years, several significant experiments have shown that brain-computer interfaces are not only possible, but can be implemented to assist learning in rodents.

In a striking demonstration with clear real-world relevance, a 2016 study used a closed-loop neurorobotic feedback design to modulate prosthetic assistance in rats.[18] By equipping the animals with a motorized trunk-assistance mechanism, paired with brain-wave recordings via EEG, the researchers were able to assist rats with severe limb dysfunction in walking normally. While the idea of prosthetic assistance for mobility disorders is not new, classic approaches used simple mechanical assists, usually unidirectional, to compensate for weak or missing muscles, nerves, or entire limbs. By integrating a neurorobotic platform, the researchers achieved far superior efficiency and fluidity in their prosthetic assistance, paving the way for significant future developments.[19]
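The difference between a fixed assist and a closed-loop one can be sketched abstractly. Everything here (the signal, units, and gain) is my invention, not the study's controller: the point is only that the assist is recomputed from measured effort on every cycle instead of being constant and unidirectional.

```python
TARGET = 1.0   # desired total support during a step (arbitrary units)
GAIN = 0.5     # proportional gain of the assist controller

def closed_loop_assist(measured_effort):
    """Supply force proportional to the measured deficit, and nothing more."""
    deficit = max(0.0, TARGET - measured_effort)
    return GAIN * deficit

# Weak step -> strong help; strong step -> light help; full effort -> none.
print(round(closed_loop_assist(0.2), 2),
      round(closed_loop_assist(0.9), 2),
      closed_loop_assist(1.2))  # 0.4 0.05 0.0
```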

Dr. Gang Pan

Aside from mechanical assistance, the neurorobotic approach to brain-machine interfacing, combined with machine intelligence, has potential applications in the cognitive domain. One group of researchers, led by Dr. Gang Pan at the College of Computer Science and Technology at Zhejiang University in China, has championed significant advancements in this regard. The authors first demonstrated the viability of such a hybrid system in 2016[20,21] by implanting rats with an apparatus allowing machine-learning integration to enhance their maze-solving capabilities.[22] Briefly, by installing micro-electrodes into both the medial forebrain bundle (a region whose stimulation elicits motivation via dopamine release) and the whisker barrel fields of the rat brain, which are used for navigation and spatial detection, the researchers were able to give the rats a source of either human- or computer-driven direct stimuli to assist them in navigation. These rat cyborgs (termed “ratbots”) were then compared against both computers and non-enhanced rats in solving fourteen diverse mazes. With the assistance of neural feedback from the implants, the rat cyborgs were significantly more efficient at solving the mazes than the control rats, and showed nearly equal efficiency compared with the fully synthetic computer models. This research led by Dr. Gang Pan has been further credited with coining the term “cyborg intelligence” to describe the convergence of biological and machine intelligence via brain-machine interfacing.[23]
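The cue-and-reward loop can be schematized as follows. The headings, cue logic, and return values are my simplifications of the published setup, with strings standing in for the two stimulation channels (barrel-field directional cues and medial-forebrain-bundle reward):

```python
# Headings encoded 0=N, 1=E, 2=S, 3=W.
def guidance_step(rat_heading, correct_heading):
    """One decision point: return (cue, rewarded) for the rat cyborg."""
    if rat_heading == correct_heading:
        return None, True        # correct choice: deliver reward, no cue
    # Cue the turn that rotates the rat toward the correct heading.
    cue = "left" if (correct_heading - rat_heading) % 4 == 3 else "right"
    return cue, False

print(guidance_step(0, 1))  # ('right', False): facing N, should face E
print(guidance_step(1, 1))  # (None, True): already correct, reward only
```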

Expanding upon this, the authors have also developed a digitally driven, automated training module for the behavioral development of these rat cyborgs.[24] In their original studies, training the implanted rats to respond to the external stimuli while navigating the mazes proved a laborious step in their experimental execution. By automating the training in a modified radial arm maze, however, they were able to streamline this process. Together with their previous publications, they conclude that not only do the rat cyborgs exhibit exceptional efficiency in maze-solving, but they also offer distinct advantages over purely neurorobotic machines. Owing to their natural curiosity in exploration, the rat cyborgs covered significantly more ground than the computer-modeled maze-solvers. They are also naturally agile animals, meaning that in a real-world setting such as a search-and-rescue mission, they may prove able to navigate a complex and dynamic environment better than a robotic counterpart. Finally, these findings carry the implication that brain-computer interfacing may be used to assist those with mental deficits by gathering real-world information and feedback to help guide decision-making and compensate for learning deficits.

Conclusions

While still considered a nascent field, neurorobotic research has made spectacular advancements in recent years and promises exponential growth toward a variety of applications. From autonomous robots to neural-feedback prosthetics and rehabilitation assistance,[25] the future of neurorobotics and AI holds great promise for the integrated enhancement of modern life. Furthermore, this type of reverse-engineering of neural networks has already provided significant insights into how the brain functions, which in turn serves to inform neuroscience research. Clearly, neurorobotic research holds massive potential for applications in multiple fields, and the burgeoning interest in this domain shows no signs of slowing down.

References

  1. Xie, Ming. (2003). Fundamentals of Robotics: Linking Perception to Action. Singapore: World Scientific.
  2. Morimoto J, Kawato M. 2015 Creating the brain and interacting with the brain: an integrated approach to understanding the brain. J. R. Soc. Interface 12: 20141250. http://dx.doi.org/10.1098/rsif.2014.1250
  3. Hafting, T., Fyhn, M., Molden, S., Moser, M.-B., and Moser, E.I. (2005). Microstructure of a spatial map in the entorhinal cortex. Nature 436:801-806.
  4. Banino, A., Barry, C., Uria, B., Blundell, C., Lillicrap, T., Mirowski, P., … Kumaran, D. (2018). Vector-based navigation using grid-like representations in artificial agents. Nature, 557(7705), 429–433.
  5. Krichmar, J. L., Nitz, D. A., Gally, J. A., and Edelman, G. M. (2005a). Characterizing functional hippocampal pathways in a brain-based device as it solves a spatial memory task. Proc. Natl. Acad. Sci. U S A 102, 2111–2116.
  6. Krichmar, J. L., Seth, A. K., Nitz, D. A., Fleischer, J. G., and Edelman, G. M. (2005b). Spatial navigation and causal analysis in a brain-based device modeling cortical-hippocampal interactions. Neuroinformatics 3, 197–221.
  7. Fleischer, J. G., and Krichmar, J. L. (2007). Sensory integration and remapping in a model of the medial temporal lobe during maze navigation by a brain-based device. J. Integr. Neurosci. 6, 403–431
  8. Hwu, T., Isbell, J., Oros, N., and Krichmar, J. (2017). 2017 IEEE International Joint Conference on Neural Networks (IJCNN).
  9. Hwu, T., Wang, A.Y., Oros, N., and Krichmar, J.L. (2018). IEEE Transactions on Cognitive and Developmental Systems, 10, 126–137.
  10. Stewart, T. C., Kleinhans, A., Mundy, A., & Conradt, J. (2016). Serendipitous Offline Learning in a Neuromorphic Robot. Frontiers in neurorobotics, 10, 1.
  11. Grinke, E., Tetzlaff, C., Wörgötter, F., & Manoonpong, P. (2015). Synaptic plasticity in a recurrent neural network for versatile and adaptive behaviors of a walking robot. Frontiers in neurorobotics, 9, 11.
  12. Matsuda, E., Hubert, J., & Ikegami, T. (2014). A robotic approach to understanding the role and the mechanism of vicarious trial-and-error in a T-maze task. PloS one, 9(7), e102708.
  13. Garnier S, Combe M, Jost C, Theraulaz G (2013) Do ants need to estimate the geometrical properties of trail bifurcations to find an efficient route? A swarm robotics test bed. PLoS Comput Biol 9: e1002903.
  14. Garnier S, Gautrais J, Asadpour M, Jost C, Theraulaz G (2009) Self-Organized Aggregation Triggers Collective Decision Making in a Group of Cockroach-Like Robots. Adapt Behav 17: 109–133.
  15. Garnier S, Jost C, Gautrais J, Asadpour M, Caprari G, et al. (2008) The embodiment of cockroach aggregation behavior in a group of micro-robots. Artif Life 14: 387–408.
  16. Garnier S (2011) From Ants to Robots and Back : How Robotics Can Contribute to the Study of Collective Animal Behavior. Bio-Inspired Self-Organizing Robot Syst 355: 105–120.
  17. Ernest N, Carroll D, Schumacher C, Clark M, Cohen K, et al. (2016) Genetic Fuzzy based Artificial Intelligence for Unmanned Combat Aerial Vehicle Control in Simulated Air Combat Missions. J Def Manag 6: 144.
  18. Chapin, J. K. (2004). Using multi-neuron population recordings for neural prosthetics. Nature Neuroscience, 7, 452.
  19. Zitzewitz, J. von, Asboth, L., Fumeaux, N., Hasse, A., Baud, L., Vallery, H., & Courtine, G. (2016). A neurorobotic platform for locomotor prosthetic development in rats and mice. Journal of Neural Engineering, 13(2), 026007.
  20. Yu, Y., Pan, G., Gong, Y., Xu, K., Zheng, N., Hua, W., Zheng, X., … Wu, Z. (2016). Intelligence-Augmented Rat Cyborgs in Maze Solving. PloS one, 11(2), e0147754.
  21. Wu, Z., Zheng, N., Zhang, S., Zheng, X., Gao, L., & Su, L. (2016). Maze learning by a hybrid brain-computer system. Scientific reports, 6, 31746.
  22. Rijnbeek, E. H., Eleveld, N., & Olthuis, W. (2018). Update on Peripheral Nerve Electrodes for Closed-Loop Neuroprosthetics. Frontiers in neuroscience, 12, 350.
  23. Zhaohui Wu, Gang Pan, Nenggan Zheng, Cyborg Intelligence, IEEE Intelligent Systems, 28(5):31-33, Sep/Oct 2013.
  24. Yu, Y., Wu, Z., Xu, K., Gong, Y., Zheng, N., Zheng, X., & Pan, G. (2016). Automatic Training of Rat Cyborgs for Navigation. Computational intelligence and neuroscience, 2016, 6459251.
  25. Iosa, M., Morone, G., Cherubini, A., & Paolucci, S. (2016). The Three Laws of Neurorobotics: A Review on What Neurorehabilitation Robots Should Do for Patients and Clinicians. Journal of medical and biological engineering, 36, 1-11.