Publications catalog - books

Biomimetic Neural Learning for Intelligent Robots: Intelligent Systems, Cognitive Robotics, and Neuroscience

Stefan Wermter ; Günther Palm ; Mark Elshaw (eds.)

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Not available.

Availability

Detected institution: none detected
Publication year: 2005
Available online (browse): SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-27440-7

Electronic ISBN

978-3-540-31896-5

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2005

Publication rights information

© Springer-Verlag Berlin Heidelberg 2005

Table of contents

Towards Biomimetic Neural Learning for Intelligent Robots

Stefan Wermter; Günther Palm; Cornelius Weber; Mark Elshaw

We present a brief overview of the chapters in this book that relate to the development of intelligent robotic systems inspired by neuroscience concepts. First, we concentrate on the research of the MirrorBot project, which focuses on biomimetic multimodal learning in a mirror neuron-based robot. This project has made significant developments in biologically inspired neural models, drawing on the mirror neuron system and the modular organisation of actions in the cerebral cortex, for use in an intelligent robot within an extended ‘pick and place’ scenario. The hypothesis under investigation in the MirrorBot project is whether a mirror neuron-based cell assembly model can produce a life-like perception system for actions. Various models were developed based on principles such as cell assemblies, associative neural networks, and Hebbian-type learning in order to associate vision, language and motor concepts. Furthermore, we introduce the chapters contributed by other researchers who attended our AI workshop on NeuroBotics.

- Towards Biomimetic Neural Learning for Intelligent Robots | Pp. 1-18
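
The Hebbian-type association of language and motor concepts described in the overview above can be illustrated with a minimal sketch. Everything concrete here (the pattern sizes, sparse codes, learning rate, and the outer-product update) is an illustrative assumption, not the MirrorBot implementation:

```python
import numpy as np

# Illustrative dimensions for two modalities (assumed values).
n_language, n_motor, eta = 32, 16, 0.1

# Hetero-associative weight matrix linking language units to motor units.
W = np.zeros((n_motor, n_language))

def hebbian_step(pre, post, W, eta):
    """Plain Hebbian outer-product update: strengthen co-active pairs."""
    return W + eta * np.outer(post, pre)

# Store one toy association: a sparse 'word' pattern with a 'motor' pattern.
word = np.zeros(n_language); word[[2, 7, 19]] = 1.0   # sparse word code
action = np.zeros(n_motor);  action[[1, 5]] = 1.0     # sparse motor code
W = hebbian_step(word, action, W, eta)

# Recall: presenting the word pattern reactivates the associated motor units.
recalled = (W @ word > 0).astype(float)
print("recalled equals stored action:", np.array_equal(recalled, action))
```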

The Intentional Attunement Hypothesis: The Mirror Neuron System and Its Role in Interpersonal Relations

Vittorio Gallese

Neuroscientific research has unveiled neural mechanisms mediating between the personal experiential knowledge we hold of our lived body and the implicit certainties we simultaneously hold about others. Such personal, body-related experiential knowledge enables our intentional attunement with others, which in turn constitutes a shared manifold of intersubjectivity. This we-centric space allows us to personally characterize and provide experiential understanding of the actions performed by others, and of the emotions and sensations they experience. A direct form of “experiential understanding” is achieved by modeling the behavior of other individuals as intentional experience, on the basis of the equivalence between what the others do and feel and what we do and feel. This parsimonious modeling mechanism is embodied simulation. The mirror neuron system is likely a neural correlate of this mechanism. This account sheds light on too-often sidelined aspects of social cognition. More generally, it emphasizes the role played in cognition by neural sensory-motor integration.

- Part I: Biomimetic Multimodal Learning in Neuron-Based Robots | Pp. 19-30

Sequence Detector Networks and Associative Learning of Grammatical Categories

Andreas Knoblauch; Friedemann Pulvermüller

A fundamental prerequisite for language is the ability to distinguish word sequences that are grammatically well-formed from ungrammatical word strings and to generalise rules of syntactic serial order to new strings of constituents. In this work, we extend a neural model of syntactic brain mechanisms that is based on syntactic sequence detectors (SDs). Elementary SDs are neural units that specifically respond to a sequence of constituent words AB, but not (or much less) to the reverse sequence BA. We discuss limitations of the original version of the SD model (Pulvermüller, Theory in Biosciences, 2003) and suggest optimal model variants taking advantage of optimised neuronal response functions, non-linear interaction between inputs, and leaky integration of neuronal input accumulating over time. A biologically more realistic model variant including a network of several SDs is used to demonstrate that associative Hebb-like synaptic plasticity leads to learning of word sequences, formation of neural representations of grammatical categories, and linking of sequence detectors into neuronal assemblies that may provide a biological basis of syntactic rule knowledge. We propose that these syntactic neuronal assemblies (SNAs) underlie generalisation of syntactic regularities from already encountered strings to new grammatical word sequences.

- Part I: Biomimetic Multimodal Learning in Neuron-Based Robots | Pp. 31-53
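
The core sequence-detector idea from the chapter above can be sketched as a leaky integrator gated by input order: the first word charges a decaying trace, and the detector fires only if the second word arrives while that trace survives. The time constant, threshold, and event encoding below are illustrative assumptions, not the model variants the authors analyse:

```python
import numpy as np

def sd_response(events, tau=5.0, threshold=0.3):
    """Leaky-integrator sequence detector for the word pair (A, B).

    'A' charges a memory trace that decays with time constant tau;
    the unit fires only if 'B' arrives while the trace is still above
    threshold, so 'A B' triggers it but 'B A' does not.
    """
    trace, t_last, fired = 0.0, 0.0, False
    for t, word in events:                     # events: list of (time, word)
        trace *= np.exp(-(t - t_last) / tau)   # decay since the last event
        t_last = t
        if word == "A":
            trace = 1.0                        # A charges the trace
        elif word == "B" and trace > threshold:
            fired = True                       # B is gated by the A-trace
    return fired

print(sd_response([(0, "A"), (2, "B")]))    # True:  grammatical order AB
print(sd_response([(0, "B"), (2, "A")]))    # False: reverse order BA
print(sd_response([(0, "A"), (30, "B")]))   # False: the A-trace has decayed
```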

A Distributed Model of Spatial Visual Attention

Julien Vitay; Nicolas P. Rougier; Frédéric Alexandre

While biomimetic autonomous robotics draws on the massively parallel architecture of the brain, a key issue is the temporal organization of behaviour. The distributed representation of sensory information has to be coherently processed to generate relevant actions. In the visual domain, we propose here a model of visual exploration of a scene by means of localized computations in neural populations, whose architecture allows a coherent behaviour of sequentially scanning salient stimuli to emerge. It has been implemented on a real robotic platform exploring a moving and noisy scene that includes several identical targets.

- Part I: Biomimetic Multimodal Learning in Neuron-Based Robots | Pp. 54-72
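
The sequential scanning of salient stimuli described above can be caricatured with a saliency map plus inhibition of return. The chapter implements this through distributed neural populations; the centralized argmax below merely stands in for that emergent winner selection, and the map, target locations, and suppression strength are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy saliency map containing three identical salient targets (assumed).
saliency = 0.05 * rng.random((20, 20))       # background sensor noise
for y, x in [(3, 4), (10, 15), (16, 7)]:
    saliency[y, x] = 1.0

inhibition = np.zeros_like(saliency)

# Scan the targets one by one: attend to the most salient uninhibited
# location, then suppress it (inhibition of return) so attention moves on.
for step in range(3):
    idx = np.unravel_index(np.argmax(saliency - inhibition), saliency.shape)
    print(f"fixation {step}: location {idx}")
    inhibition[idx] += 2.0                   # suppression strength (assumed)
```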

A Hybrid Architecture Using Cross-Correlation and Recurrent Neural Networks for Acoustic Tracking in Robots

John C. Murray; Harry Erwin; Stefan Wermter

Audition is one of our most important modalities and is widely used to communicate and to sense the environment around us. We present an auditory robotic system capable of computing the angle of incidence (azimuth) of a sound source on the horizontal plane. The system is based on principles drawn from the mammalian auditory system and, using a recurrent neural network (RNN), is able to dynamically track a sound source as its azimuth changes within the environment. The RNN is used to give the overall system fast tracking responses. The development of a hybrid system incorporating cross-correlation and recurrent neural networks is shown to be an effective mechanism for controlling a robot that tracks sound sources azimuthally.

- Part I: Biomimetic Multimodal Learning in Neuron-Based Robots | Pp. 73-87
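
The cross-correlation stage of such a system can be sketched directly: the peak of the cross-correlation between the two microphone channels gives the interaural time difference (ITD), which maps to azimuth. The sampling rate, microphone spacing, and simulated signal below are assumptions, and the RNN tracking stage of the chapter's hybrid is not shown:

```python
import numpy as np

fs = 44100            # sample rate in Hz (assumed)
mic_distance = 0.15   # microphone separation in metres (assumed)
c = 343.0             # speed of sound in m/s

# Simulate a source whose sound reaches the left microphone 9 samples early.
rng = np.random.default_rng(2)
sound = rng.standard_normal(2048)
true_lag = 9
left = sound
right = np.concatenate([np.zeros(true_lag), sound[:-true_lag]])

# Cross-correlate the channels; the peak position gives the ITD.
corr = np.correlate(right, left, mode="full")
lag = np.argmax(corr) - (len(left) - 1)      # delay in samples
itd = lag / fs                               # delay in seconds

# Convert ITD to azimuth: sin(theta) = ITD * c / mic_distance.
azimuth = np.degrees(np.arcsin(np.clip(itd * c / mic_distance, -1, 1)))
print(f"estimated lag: {lag} samples, azimuth ≈ {azimuth:.1f} degrees")
```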

Image Invariant Robot Navigation Based on Self Organising Neural Place Codes

Kaustubh Chokshi; Stefan Wermter; Christo Panchev; Kevin Burn

For a robot to be autonomous, it must be able to navigate independently within an environment. The overall aim of this paper is to show that localisation can be performed even without a pre-defined map given to the robot by humans. In nature, place cells are brain cells that respond to the environment the animal is in. In this paper we present a model of place cells based on Self-Organising Maps. We also show how image invariance can improve the performance of the place cells and make the model more robust to noise. The incoming visual stimuli are interpreted by neural networks that respond only to specific combinations of visual landmarks. The activities of these neural networks implicitly represent environmental properties such as distance and orientation to the visual cues. Unsupervised learning is used to build the computational model of hippocampal place cells. After training, a robot can localise itself within a learned environment.

- Part I: Biomimetic Multimodal Learning in Neuron-Based Robots | Pp. 88-106
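
A minimal self-organising-map sketch of place coding, in the spirit of the chapter above: after unsupervised training, views from the same place activate a localized map region. The map size, input dimensionality, training schedules, and the toy "views from places" are all assumptions, and the chapter's image-invariance preprocessing is not shown:

```python
import numpy as np

rng = np.random.default_rng(3)

n_features, map_h, map_w = 16, 8, 8
weights = rng.random((map_h, map_w, n_features))   # SOM codebook

# Each 'place' yields slightly varying views of the same landmark pattern.
places = rng.random((10, n_features))
def observe(p):
    return places[p] + 0.05 * rng.standard_normal(n_features)

ys, xs = np.indices((map_h, map_w))
for step in range(2000):
    lr = 0.5 * (0.01 / 0.5) ** (step / 2000)     # decaying learning rate
    sigma = 3.0 * (0.5 / 3.0) ** (step / 2000)   # shrinking neighbourhood
    x = observe(rng.integers(10))
    # Best-matching unit: the map node closest to the input view.
    d = np.linalg.norm(weights - x, axis=2)
    by, bx = np.unravel_index(np.argmin(d), d.shape)
    # Neighbourhood-weighted update pulls nearby nodes toward the input.
    h = np.exp(-((ys - by) ** 2 + (xs - bx) ** 2) / (2 * sigma ** 2))
    weights += lr * h[..., None] * (x - weights)

# After training, each place maps to a stable unit: a 'place code'.
for p in range(3):
    d = np.linalg.norm(weights - observe(p), axis=2)
    print(f"place {p} -> map unit {np.unravel_index(np.argmin(d), d.shape)}")
```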

Detecting Sequences and Understanding Language with Neural Associative Memories and Cell Assemblies

Heiner Markert; Andreas Knoblauch; Günther Palm

Using associative memories and sparse distributed representations we have developed a system that can learn to associate words with objects, properties like colors, and actions. This system is used in a robotics context to enable a robot to respond to spoken commands like "bot show plum" or "bot put apple to yellow cup". This involves parsing and understanding of simple sentences and "symbol grounding", for example, relating the nouns to concrete objects sensed by the camera and recognized by a neural network from the visual input.

- Part I: Biomimetic Multimodal Learning in Neuron-Based Robots | Pp. 107-117
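
The sparse binary associative memories underlying such a system can be illustrated with a Willshaw-style sketch: storage clips a Hebbian outer product to binary synapses, and retrieval thresholds the dendritic sums in one step. Pattern size, sparseness, and the number of stored pairs are assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

n, k = 256, 8   # pattern length and number of active units (assumed)

def sparse_pattern():
    p = np.zeros(n, dtype=bool)
    p[rng.choice(n, size=k, replace=False)] = True
    return p

# Willshaw-style binary hetero-association: clipped Hebbian storage.
W = np.zeros((n, n), dtype=bool)
pairs = [(sparse_pattern(), sparse_pattern()) for _ in range(20)]
for cue, target in pairs:
    W |= np.outer(target, cue)       # switch on synapses of co-active units

def recall(cue):
    sums = (W & cue).sum(axis=1)     # dendritic sums over active cue units
    return sums >= cue.sum()         # one-step threshold retrieval

cue, target = pairs[0]
print("perfect recall:", np.array_equal(recall(cue), target))
```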

Combining Visual Attention, Object Recognition and Associative Information Processing in a NeuroBotic System

Rebecca Fay; Ulrich Kaufmann; Andreas Knoblauch; Heiner Markert; Günther Palm

We have implemented a neurobiologically plausible system on a robot that integrates visual attention, object recognition, language and action processing using a coherent cortex-like architecture based on neural associative memories. This system enables the robot to respond to spoken commands like "bot show plum" or "bot put apple to yellow cup". The scenario for this is a robot close to one or two tables carrying certain kinds of fruit and other simple objects. Tasks such as finding and pointing to certain fruits in a complex visual scene according to spoken or typed commands can be demonstrated. This involves parsing and understanding of simple sentences, relating the nouns to concrete objects sensed by the camera, and coordinating motor output with planning and sensory processing.

- Part I: Biomimetic Multimodal Learning in Neuron-Based Robots | Pp. 118-143

Towards Word Semantics from Multi-modal Acoustico-Motor Integration: Application of the Bijama Model to the Setting of Action-Dependant Phonetic Representations

Olivier Ménard; Frédéric Alexandre; Hervé Frezza-Buet

This paper presents a computational self-organizing model of multi-modal information, inspired by cortical maps. It shows how the organization in one map can be influenced by the same process occurring in other maps. We illustrate this approach on a phonetic–motor association, showing that the organization of words can integrate motor constraints, as observed in humans.

- Part I: Biomimetic Multimodal Learning in Neuron-Based Robots | Pp. 144-161
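
The chapter's central idea, that self-organisation in one map is influenced by the same process in another map, can be caricatured with two coupled SOM-like maps whose winner selection is biased by learned cross-map links. This is a loose sketch, not the Bijama model itself; the map sizes, coupling term, and schedules are assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

# Two 1-D maps (phonetic and motor) that self-organise jointly.
n_units, d_phon, d_motor = 20, 6, 4
Wp = rng.random((n_units, d_phon))     # phonetic map codebook
Wm = rng.random((n_units, d_motor))    # motor map codebook
C = np.zeros((n_units, n_units))       # learned cross-map links

def bmu(W, x, cross):
    # Winner selection mixes the map's own fit with cross-map support.
    return int(np.argmin(np.linalg.norm(W - x, axis=1) - 0.5 * cross))

words = [(rng.random(d_phon), rng.random(d_motor)) for _ in range(8)]
pos = np.arange(n_units)
for step in range(4000):
    phon, motor = words[rng.integers(len(words))]
    i = bmu(Wp, phon, np.zeros(n_units))   # provisional phonetic winner
    j = bmu(Wm, motor, C[i])               # motor winner biased toward i
    i = bmu(Wp, phon, C[:, j])             # phonetic winner re-biased by j
    for W, x, w in ((Wp, phon, i), (Wm, motor, j)):
        h = np.exp(-((pos - w) ** 2) / (2 * 2.0 ** 2))  # neighbourhood
        W += 0.05 * h[:, None] * (x - W)
    C[i, j] += 0.01                        # Hebbian link between co-winners

# Each word now settles on a linked winner pair, so the phonetic
# organisation reflects motor constraints and vice versa.
```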

Grounding Neural Robot Language in Action

Stefan Wermter; Cornelius Weber; Mark Elshaw; Vittorio Gallese; Friedemann Pulvermüller

In this paper we describe two models for the neural grounding of robotic language processing in actions. These models are inspired by concepts of the mirror neuron system in order to produce learning by imitation, combining high-level vision, language and motor command inputs. The models learn to perform and recognise three behaviours: ‘go’, ‘pick’ and ‘lift’. The first, single-layer model uses an adapted Helmholtz machine wake-sleep algorithm to act like a Kohonen self-organising network that receives all inputs into a single layer. In contrast, the second, hierarchical model has two layers. In the lower-level hidden layer, the Helmholtz machine wake-sleep algorithm is used to learn the relationship between action and vision, while the upper layer uses the Kohonen self-organising approach to combine the output of the lower hidden layer and the language input.

In the hidden layer of the single-layer model, the action words are represented in non-overlapping regions, and each neuron in a region accounts for a corresponding sensory-motor binding. In the hierarchical model, the largely separate sensory and motor representations at the lower level are bound to corresponding sensory-motor pairings via the top level, which organises according to the language input.

- Part I: Biomimetic Multimodal Learning in Neuron-Based Robots | Pp. 162-181
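
The wake-sleep algorithm that both models build on can be sketched for a generic one-hidden-layer Helmholtz machine: the wake phase trains the generative weights to reconstruct real data from recognized hidden codes, and the sleep phase trains the recognition weights to recover the hidden causes of "dreamed" patterns. The layer sizes, toy data, uniform hidden prior, and omitted biases are simplifying assumptions; the chapter adapts wake-sleep within its own architectures:

```python
import numpy as np

rng = np.random.default_rng(6)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample(p):
    return (rng.random(p.shape) < p).astype(float)

# One-hidden-layer Helmholtz machine (biases omitted for brevity).
n_v, n_h, lr = 12, 6, 0.05
R = 0.1 * rng.standard_normal((n_h, n_v))  # recognition weights v -> h
G = 0.1 * rng.standard_normal((n_v, n_h))  # generative weights  h -> v

data = (rng.random((30, n_v)) < 0.3).astype(float)  # toy sensory patterns

for step in range(5000):
    # Wake phase: recognize real data, then train the generative model
    # to reconstruct the data from the inferred hidden code.
    v = data[rng.integers(len(data))]
    h = sample(sigmoid(R @ v))
    G += lr * np.outer(v - sigmoid(G @ h), h)

    # Sleep phase: dream a pattern from the generative model, then train
    # the recognition model to recover the hidden cause of the dream.
    h_dream = sample(0.5 * np.ones(n_h))       # uniform prior (assumed)
    v_dream = sample(sigmoid(G @ h_dream))
    R += lr * np.outer(h_dream - sigmoid(R @ v_dream), v_dream)
```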