Publications catalog - books

Neurodynamics of Cognition and Consciousness

Leonid I. Perlovsky; Robert Kozma (eds.)

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Not available.

Availability

Detected institution: Not detected
Year of publication: 2007
Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-73266-2

Electronic ISBN

978-3-540-73267-9

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© Springer-Verlag Berlin Heidelberg 2007

Table of contents

Neurodynamics of Cognition and Consciousness

Leonid I. Perlovsky; Robert Kozma

Dynamic aspects of higher cognitive functions are addressed. Dynamical neural networks with encoding in limit-cycle and non-convergent attractors have gained increasing popularity in the past decade. Experimental evidence in humans and other mammals indicates that complex neurodynamics is crucial for the emergence of higher-level intelligence and consciousness. We give an overview of research activities in the field, including dynamic models of consciousness, experiments to identify neurodynamic correlates of cognitive functions, interpretation of experimental findings, development of dynamical neural memories, and applications of dynamical approaches to intelligent systems.

Neurodynamics of Cognition and Consciousness | Pp. 1-8

Proposed Cortical “Shutter” Mechanism in Cinematographic Perception

Walter J. Freeman

Brains are open thermodynamic systems, continually dissipating metabolic energy in forming cinematographic spatiotemporal patterns of neural activity. In this report patterns of cortical oscillations are described as ‘dissipative structures’ formed near an operating point at criticality, far from equilibrium. Around that point exists a small-signal, near-linear range in which pairs of impulse responses superpose. Piece-wise linearization extends analysis into nonlinear ranges. The resulting root loci are interpreted as projections from a phase plane, in which the three phase boundaries are graphed with the rate of change in a dynamic order parameter (negentropy), analogous to static pressure, on the ordinate, and the rate of energy dissipation (power), analogous to static temperature, on the abscissa. The graph displays the neural mechanism that implements phase transitions and enables the limbic system to repeat the action-perception cycle at 3–7 Hz. The mechanism is null spikes (‘vortices’) in the Rayleigh noise of the background electrocorticogram (ECoG), which serve as a shutter by triggering phase transitions.

Part I - Neurocognition and Human Consciousness | Pp. 11-38
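
As an informal illustration of the chapter's last point, the amplitude of band-pass filtered Gaussian noise is Rayleigh distributed, and the rare moments where that amplitude dips toward zero are the kind of "null spikes" described above. The Python sketch below simulates this with assumed parameters (sampling rate, frequency band, threshold); it is illustrative only, not Freeman's ECoG analysis pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

# Illustrative simulation (assumed parameters): band-pass filtered Gaussian
# noise has an analytic-signal amplitude that is Rayleigh distributed, and
# "null spikes" are the rare moments where that amplitude dips toward zero.
fs = 500.0                                 # sampling rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)               # 60 s of simulated background signal
rng = np.random.default_rng(0)
noise = rng.standard_normal(t.size)

# Band-pass in a gamma-like band (20-50 Hz, an illustrative choice)
b, a = butter(4, [20 / (fs / 2), 50 / (fs / 2)], btype="band")
x = filtfilt(b, a, noise)

amplitude = np.abs(hilbert(x))             # analytic amplitude (Rayleigh-like)

# Flag a sample as part of a "null spike" when the amplitude falls below
# 5% of its median value (the threshold choice is arbitrary).
threshold = 0.05 * np.median(amplitude)
null_samples = np.flatnonzero(amplitude < threshold)
print(f"{null_samples.size} near-zero amplitude samples out of {t.size}")
```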

Toward a Complementary Neuroscience: Metastable Coordination Dynamics of the Brain

J. A. Scott Kelso; Emmanuelle Tognoli

Metastability has been proposed as a new principle of behavioral and brain function and may point the way to a truly complementary neuroscience. From elementary coordination dynamics we show explicitly that metastability is a result of a symmetry breaking caused by the subtle interplay of two forces: the tendency of the components to couple together and the tendency of the components to express their intrinsic independent behavior. The metastable regime reconciles the well-known tendencies of specialized brain regions to express their autonomy (segregation) and the tendencies for those regions to work together as a synergy (integration). Integration ∼ segregation is just one of the complementary pairs (denoted by the tilde (∼) symbol) to emerge from the science of coordination dynamics. We discuss metastability in the brain by describing the favorable conditions existing for its emergence and by deriving some predictions for its empirical characterization in neurophysiological recordings.

Part I - Neurocognition and Human Consciousness | Pp. 39-59
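
For readers wanting a concrete anchor, the elementary coordination dynamics invoked here is usually written as the extended HKB equation for the relative phase φ between two coupled components; the form below is the standard one from the coordination-dynamics literature, not an equation quoted from the chapter.

```latex
\dot{\varphi} \;=\; \delta\omega \;-\; a\,\sin\varphi \;-\; 2b\,\sin 2\varphi
```

Here δω is the difference between the components' intrinsic frequencies (the broken symmetry), and a and b set the coupling strength. For small |δω| the equation has stable fixed points (phase locking); past a critical value the fixed points vanish, yet trajectories still dwell intermittently near the former attractors before escaping. That dwell-and-escape regime is the metastability discussed in the chapter, balancing integration and segregation.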

The Formation of Global Neurocognitive State

Steven L. Bressler

I propose in this chapter that the formation of global neurocognitive state in the cerebral cortex is central to the mammalian capacity for assessment of organismic state. I consider a putative mechanism for the formation of global neurocognitive state from interactions among interconnected cortical areas. In this model, each area makes a local assessment of its own current state, representing a partial assessment of organismic state, through the generation of packets of high-frequency oscillatory wave activity. The spatial amplitude modulation (AM) pattern of the wave packet is proposed to represent the expression of an area’s current state in relation to the other areas with which it is interacting. Through their interactions, sets of cortical areas mutually constrain the AM patterns of their wave packets. It is proposed that this process leads to the manifestation of wave packets having cognitively consistent patterns, and the formation of globally unified consensual neurocognitive states.

Part I - Neurocognition and Human Consciousness | Pp. 61-72

Neural Dynamic Logic of Consciousness: the Knowledge Instinct

Leonid I. Perlovsky

The chapter discusses the evolution of consciousness driven by the knowledge instinct, a fundamental mechanism of the mind which determines its higher cognitive functions and neural dynamics. Although evidence for this drive had been discussed by biologists for some time, its fundamental nature was unclear without mathematical modeling. We discuss mathematical difficulties encountered in past attempts at modeling the mind and relate them to logic. The main mechanisms of the mind include instincts, concepts, emotions, and behavior. Neural modeling fields and dynamic logic mathematically describe these mechanisms and relate their neural dynamics to the knowledge instinct. Dynamic logic overcomes past mathematical difficulties encountered in modeling intelligence. Mathematical mechanisms of concepts, emotions, instincts, consciousness and the unconscious are described and related to perception and cognition. The two main aspects of the knowledge instinct are differentiation and synthesis. Differentiation is driven by dynamic logic and proceeds from vague and unconscious states to more crisp and conscious states, from less knowledge to more knowledge at each hierarchical level of the mind. Synthesis is driven by the hierarchical organization of the mind; it strives to achieve unity and meaning of knowledge: every concept finds its deeper and more general meaning at a higher level. These mechanisms stand in a complex relationship of symbiosis and opposition, and lead to complex dynamics in the evolution of consciousness and cultures. Mathematical modeling of these dynamics in a population leads to predictions for the evolution of consciousness and cultures. Cultural predictive models can be compared to experimental data and used for the improvement of human conditions. We discuss existing evidence and future research directions.

Part I - Neurocognition and Human Consciousness | Pp. 73-108
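
The vague-to-crisp progression of dynamic logic can be sketched, very schematically, as an annealed soft association between data and concept models: associations start maximally fuzzy and sharpen as a vagueness parameter shrinks. The Python toy below is our illustrative reading of that idea (Gaussian similarity, a fixed annealing schedule), not the chapter's own equations.

```python
import numpy as np

# Schematic "vague-to-crisp" iteration in the spirit of dynamic logic /
# neural modeling fields: data points are softly associated with concept
# models, and the association sharpens as a vagueness parameter anneals.
# The Gaussian parameterization and schedule are illustrative assumptions.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2.0, 0.3, 100), rng.normal(2.0, 0.3, 100)])

models = np.array([-0.5, 0.5])          # initial (vague) concept models
sigma = 4.0                             # large width = vague association

for step in range(50):
    # association weights: how strongly datum n "belongs" to model h
    d2 = (data[:, None] - models[None, :]) ** 2
    w = np.exp(-d2 / (2 * sigma**2))
    w /= w.sum(axis=1, keepdims=True)

    # move each model toward the data currently associated with it
    models = (w * data[:, None]).sum(axis=0) / w.sum(axis=0)

    # anneal: reduce vagueness, so associations become crisper over time
    sigma = max(0.3, sigma * 0.9)

print("final concept models:", np.round(models, 2))
```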

Using ADP to Understand and Replicate Brain Intelligence: The Next Level Design?

Paul J. Werbos

Since the 1960s I have proposed that we could understand and replicate the highest level of intelligence seen in the brain by building ever more capable and general systems for adaptive dynamic programming (ADP) – like “reinforcement learning” but based on approximating the Bellman equation and allowing the controller to know its utility function. Growing empirical evidence on the brain supports this approach. Adaptive critic systems now meet tough engineering challenges and provide a kind of first-generation model of the brain. Lewis, Prokhorov and I have done some work on second-generation designs. I now argue that mammal brains possess three core capabilities – creativity/imagination and ways to manage spatial and temporal complexity – even beyond the second generation. This chapter reviews previous progress, and describes new tools and approaches to overcome the spatial complexity gap. The Appendices discuss what we can learn about higher functions of the human mind from this kind of mathematical approach.

Part I - Neurocognition and Human Consciousness | Pp. 109-123
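
For reference, the Bellman equation mentioned here relates the value J of a state to the immediate utility U and the value of the successor state; the notation below is a standard statement of it for discounted utility maximization, not copied from the chapter.

```latex
J(x_t) \;=\; \max_{u_t}\Big[\, U(x_t, u_t) \;+\; \gamma\, \big\langle J(x_{t+1}) \big\rangle \,\Big],
\qquad 0 < \gamma \le 1
```

Adaptive critic designs train a critic network to approximate J (or, in later generations, its gradient) and an action network to choose the control u_t that maximizes the bracketed quantity, rather than solving the recursion exactly.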

Neurodynamics of Intentional Behavior Generation

Robert Kozma

The chapter reviews mechanisms of generation and utilization of knowledge in human cognitive activity and in artificial intelligence systems. First we explore experience-based methods, including top-down symbolic approaches, which address knowledge processing in humans. Symbolic theories of intelligence fall short of explaining and implementing strategies routinely produced by human intelligence. Connectionist methods are rooted in our understanding of the operation of brains and nervous systems, and they are gaining popularity in constructing intelligent devices. In contrast to top-down symbolic methods, connectionism uses bottom-up emergence to generate intelligent behaviors. Recently, computational intelligence, cognitive science and neuroscience have achieved a level of maturity that allows integration of top-down and bottom-up approaches in modeling the brain.

We present a dynamical approach to higher cognition and intelligence based on the model of the intentional action-perception cycle. In this model, meaningful knowledge is continuously created, processed, and dissipated in the form of sequences of oscillatory patterns of neural activity distributed across space and time, rather than via manipulation of a certain symbol system. Oscillatory patterns can be viewed as intermittent representations of generalized symbol systems, with which brains compute. These dynamical symbols are not rigid but flexible, and they disappear soon after they have been generated through spatio-temporal phase transitions, at the rate of 4–5 patterns per second in human brains. Human cognition performs a granulation of the seemingly homogeneous temporal sequences of perceptual experiences into meaningful and comprehensible chunks of concepts and complex behavioral schemas. These chunks are accessed during future action selection and decisions. This biologically-motivated computing using dynamic patterns provides an alternative to the notoriously difficult symbol grounding problem, and it has been implemented in computational and robotic environments.

Part I - Neurocognition and Human Consciousness | Pp. 131-161

How Does the Brain Create, Change, and Selectively Override its Rules of Conduct?

Daniel S. Levine

How do we know to talk openly to our friends but be guarded with strangers? How do we move between work place, club, and house of worship and fit our behavior to each setting? How do we develop context-dependent rules about what we may eat? We solve such problems readily, but are a long way from designing intelligent systems that can do so. Yet enough is known about cognitive and behavioral functions of three regions of prefrontal cortex, and their subcortical connections, to suggest a neural theory of context-dependent rule formation and learning. Rules that an individual follows can change, either because of personal growth or change in stress. This can be partly explained by the interplay between signals from the hippocampus, signifying task relevance, and the amygdala, signifying emotional salience. Both sets of signals influence the basal ganglia gate that selectively disinhibits behaviors in response to context.

Part I - Neurocognition and Human Consciousness | Pp. 163-181

Shape Recognition Through Dynamic Motor Representations

Navendu Misra; Yoonsuck Choe

How can agents, natural or artificial, learn about the external environment based only on their internal states (such as the activation patterns in the brain)? There are two problems involved here: first, forming the internal state based on sensory data to reflect reality, and second, forming thoughts and desires based on these internal states. (Aristotle termed these passive and active intellect, respectively [1].) How are these to be accomplished? Chapters in this book consider mechanisms of the instinct for learning (chapter PERLOVSKY) and reinforcement learning (chapter IFTEKHARUDDIN; chapter WERBOS), which modify the mind’s representation for better fitting sensory data. Our approach (as those in chapters FREEMAN and KOZMA) emphasizes the importance of action in this process. Action plays a key role in recovering sensory stimulus properties that are represented by the internal state. Generating the right kind of action is essential to decoding the internal state. Actions that maintain invariance in the internal state are important, as they will have the same property as that of the represented sensory stimulus. However, such an approach alone does not address how it can be generalized to learn more complex object concepts. We emphasize that the limitation is due to the reactive nature of the sensorimotor interaction in the agent: lack of long-term memory prevents learning beyond basic stimulus properties such as orientation of the input. Adding memory can help the learning of complex object concepts, but what kind of memory should be used and why? The main aim of this chapter is to assess the merit of memory of action sequences linked with particular spatiotemporal patterns (skill memory), as compared to explicit memory of visual form (visual memory), all within an object recognition domain. Our results indicate that skill memory is (1) better than visual memory in terms of recognition performance, (2) robust to noise and variations, and (3) better suited as a flexible internal representation. These results suggest that the dynamic nature of skill memory, with its involvement in the closure of the agent-environment loop, provides a strong basis for robust and autonomous object concept learning.

Part II - Cognitive Computing for Sensory Perception | Pp. 185-210

A Biologically Inspired Dynamic Model for Object Recognition

Khan M. Iftekharuddin; Yaqin Li; Faraz Siddiqui

Biological vision holds excellent promise for the design of automated object recognition (AOR) systems. A particularly important unresolved issue is that of learning. This topic is also explored in Chapters (Choe) and (Perlovsky), where learning is related to actions (Choe) and to the knowledge instinct (Perlovsky). Reinforcement learning (RL) is part of procedural learning that is routinely employed in biological vision. RL in biology appears to be crucial for the attentive decision-making process in a stochastic dynamic environment. RL is a learning mechanism that does not need an explicit teacher or training samples, but learns from an external reinforcement. The idea of RL is related to the knowledge instinct explored in Chapter (Perlovsky), which provides internal motivations for matching models to sensor signals. The model in this chapter implements RL through neural networks in an adaptive critic design (ACD) framework for automatic recognition of objects. An ACD approximates neuro-dynamic programming by employing an action network and a critic network. Two ACDs, Heuristic Dynamic Programming (HDP) and Dual Heuristic Dynamic Programming (DHP), are exploited in implementing the RL model. We explore the plausibility of RL for distortion-related object recognition inspired by principles of biological vision. We test and evaluate these two designs using simulated transformations as well as face authentication problems. Our simulations show promising results for both designs for transformation-invariant AOR.

Part II - Cognitive Computing for Sensory Perception | Pp. 211-227
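
The difference between the two designs lies in what the critic network estimates: in HDP the critic approximates the scalar value J(s), while in DHP it approximates the gradient λ(s) = ∂J/∂s. The Python sketch below shows the corresponding training targets under an assumed linear plant and quadratic utility; it is a schematic illustration, not the chapter's networks or its face-recognition task.

```python
import numpy as np

# Schematic contrast between the two critic types used in adaptive critic
# designs (ACDs). The plant, utility, and toy "critics" are illustrative
# assumptions, not the chapter's networks or data.
gamma = 0.95                      # discount factor (assumed)
A = 0.9 * np.eye(2)               # assumed linear plant: s' = A s + B a
B = np.eye(2)

def utility(s):                   # immediate utility U(s), a quadratic cost here
    return -float(s @ s)

def utility_grad(s):              # dU/ds, needed for the DHP target
    return -2.0 * s

def next_state(s, a):
    return A @ s + B @ a

# HDP: the critic approximates the scalar value J(s); its training target
# comes from the Bellman recursion U(s) + gamma * J(s').
def hdp_target(s, a, J_hat):
    return utility(s) + gamma * J_hat(next_state(s, a))

# DHP: the critic approximates the gradient lambda(s) = dJ/ds; its target is
# the derivative of the Bellman recursion (action-dependence terms omitted
# here for brevity).
def dhp_target(s, a, lam_hat):
    return utility_grad(s) + gamma * A.T @ lam_hat(next_state(s, a))

# Toy critics standing in for trained neural networks.
J_hat = lambda s: -float(s @ s)
lam_hat = lambda s: -2.0 * s

s, a = np.array([1.0, -0.5]), np.zeros(2)
print("HDP critic target:", hdp_target(s, a, J_hat))
print("DHP critic target:", dhp_target(s, a, lam_hat))
```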