Publications catalog - books
Neurodynamics of Cognition and Consciousness
Leonid I. Perlovsky; Robert Kozma (eds.)
Abstract/Description – provided by the publisher
Not available.
Keywords – provided by the publisher
Not available.
Availability
Detected institution | Publication year | Browse | Download | Request
---|---|---|---|---
Not detected | 2007 | SpringerLink | |
Information
Resource type:
books
Print ISBN
978-3-540-73266-2
Electronic ISBN
978-3-540-73267-9
Publisher
Springer Nature
Country of publication
United Kingdom
Publication date
2007
Publication rights information
© Springer-Verlag Berlin Heidelberg 2007
Subject coverage
Table of contents
A Brain-Inspired Model for Recognizing Human Emotional States from Facial Expression
Jia-Jun Wong; Siu Yeung Cho
Metastability has been proposed as a new principle of behavioral and brain function and may point the way to a truly complementary neuroscience. From elementary coordination dynamics we show explicitly that metastability is a result of a symmetry breaking caused by the subtle interplay of two forces: the tendency of the components to couple together and the tendency of the components to express their intrinsic independent behavior. The metastable regime reconciles the well-known tendencies of specialized brain regions to express their autonomy (segregation) and the tendencies for those regions to work together as a synergy (integration). Integration ∼ segregation is just one of the complementary pairs (denoted by the tilde (∼) symbol) to emerge from the science of coordination dynamics. We discuss metastability in the brain by describing the favorable conditions existing for its emergence and by deriving some predictions for its empirical characterization in neurophysiological recordings.
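The elementary coordination dynamics invoked in this abstract is commonly expressed by the extended HKB relative-phase equation, dphi/dt = delta_omega - a*sin(phi) - 2*b*sin(2*phi), where delta_omega is the symmetry-breaking term reflecting the components' intrinsic differences. A minimal simulation sketch (parameter values are illustrative assumptions, not taken from the chapter):

```python
import numpy as np

# Extended HKB relative-phase dynamics (sketch; a, b, dt are
# illustrative assumptions, not values from the chapter):
#   dphi/dt = delta_omega - a*sin(phi) - 2*b*sin(2*phi)
def simulate(delta_omega, a=1.0, b=1.0, dt=0.01, steps=20_000):
    phi = 0.1
    trace = np.empty(steps)
    for i in range(steps):
        phi += dt * (delta_omega - a * np.sin(phi) - 2 * b * np.sin(2 * phi))
        trace[i] = phi
    return trace

locked = simulate(delta_omega=0.5)      # small symmetry breaking: phase locking
metastable = simulate(delta_omega=4.0)  # large: fixed points gone, dwell-and-escape

print(np.ptp(locked[-5000:]))   # ~0: relative phase settled on an attractor
print(metastable[-1])           # grows without bound: phase keeps slipping
```

For small delta_omega the relative phase locks to a fixed point (integration wins); past a critical value the fixed points vanish and the phase wanders with transient dwells near the former attractors, the dwell-and-escape signature of metastability.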
Part II - Cognitive Computing for Sensory Perception | Pp. 233-254
Engineering Applications of Olfactory Model from Pattern Recognition to Artificial Olfaction
Guang Li; Jin Zhang; Walter J. Freeman
Derived from biological olfactory systems, an olfactory model named KIII was set up. Unlike conventional artificial neural networks, the KIII model works in a chaotic way similar to biological olfactory systems. As a kind of chaotic neural network, the KIII network can be used as a general classifier that requires far fewer training trials than other artificial neural networks. Experiments applying this novel neural network to the recognition of handwritten numerals, the classification of spoken Mandarin digits, the recognition of human faces, and the classification of normal and hypoxic EEG have been carried out. Based on the KIII model, an application of an electronic nose to tea classification was explored. It is hoped that the K set models will make electronic noses more biologically realistic.
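The elementary unit of the K-set hierarchy from which KIII is assembled is the K0 set: a second-order linear ODE driven by its input, followed by an asymmetric sigmoid. A hedged sketch of a single K0 node (the rate constants and gain below are illustrative guesses, not the chapter's values):

```python
import numpy as np

# A single K0 node (sketch): second-order linear dynamics plus an
# asymmetric sigmoid. A, B (1/ms) and the gain Q are illustrative.
A, B, Q = 0.22, 0.72, 5.0

def asym_sigmoid(x):
    # asymmetric sigmoid: steeper on the excitatory side, saturating at Q
    return Q * (1.0 - np.exp(-(np.exp(x) - 1.0) / Q))

def k0_response(inp, dt=0.01):
    x, v = 0.0, 0.0                # state and its derivative
    out = np.empty(len(inp))
    for i, u in enumerate(inp):
        acc = A * B * (u - x) - (A + B) * v   # x'' = ab(u - x) - (a + b)x'
        v += dt * acc
        x += dt * v
        out[i] = asym_sigmoid(x)
    return out

pulse = np.r_[np.ones(2000), np.zeros(6000)]  # 20 ms input pulse, then silence
y = k0_response(pulse)
print(y.max())   # node charges up during the pulse...
print(y[-1])     # ...and relaxes back toward rest afterwards
```

Coupling many such nodes with excitatory and inhibitory connections yields the oscillatory KII sets, and coupling those yields the chaotic KIII classifier described in the abstract.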
Part II - Cognitive Computing for Sensory Perception | Pp. 255-276
Recursive Nodes with Rich Dynamics as Modeling Tools for Cognitive Functions
Emilio Del-Moral-Hernandez
This chapter addresses artificial neural networks that employ processing nodes with complex dynamics and represent information through spatiotemporal patterns. These architectures can be programmed to store information in cyclic collective oscillations, which can be exploited to represent stored memories or pattern classes. The nodes that compose the network are parametric recursions with rich dynamics, bifurcation and chaos. A blend of periodic and erratic behavior is exploited for the representation of information and the search for stored patterns. Several results on these networks have been produced in recent years, some of them showing superior performance on pattern storage and recovery compared to traditional neural architectures. We discuss tools of analysis, design methodologies and tools for the characterization of these RPE networks (RPE: Recursive Processing Element, as the nodes are named).
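As a toy illustration of a parametric recursion with rich dynamics, a single node can be modeled by the logistic map: sweeping its parameter takes the node through fixed points, periodic cycles, bifurcations and chaos. The specific recursion below is an assumption for illustration; the chapter's actual RPEs may differ.

```python
# One recursive node modeled, for illustration only, as the logistic
# map x_{n+1} = p * x_n * (1 - x_n).
def orbit(p, x0=0.3, transient=500, length=64):
    x = x0
    for _ in range(transient):          # discard transient iterations
        x = p * x * (1.0 - x)
    out = []
    for _ in range(length):
        x = p * x * (1.0 - x)
        out.append(x)
    return out

def period(p):
    o = orbit(p)
    for k in (1, 2, 4, 8):              # test short periods
        if all(abs(o[i] - o[i + k]) < 1e-6 for i in range(len(o) - k)):
            return k
    return None                          # no short period: likely chaotic

print(period(2.8))   # 1  (stable fixed point)
print(period(3.2))   # 2-cycle
print(period(3.5))   # 4-cycle
print(period(3.9))   # None (chaotic regime)
```

The period-doubling route visible here (1, 2, 4, ..., chaos) is the bifurcation structure that networks of such nodes exploit for representing and searching stored patterns.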
Part II - Cognitive Computing for Sensory Perception | Pp. 279-304
Giving Meaning to Cycles to Go Beyond the Limitations of Fixed Point Attractors
Colin Molter; Utku Salihoglu; Hugues Bersini
This chapter focuses on associative memories in recurrent artificial neural networks built from the same kind of very simple neurons usually found in neural nets. Over the past 25 years, much of the research effort has been devoted to coding information in fixed-point attractors. From a cognitive or neurophysiological point of view, this choice is rather arbitrary. This chapter justifies the need to switch to another encoding mechanism that exploits limit cycles and complex dynamics in the background rather than fixed points. It is shown how these attractors overcome, in many respects, the limitations of fixed points: better correspondence with neurophysiological facts, increased encoding capacity, improved robustness during the retrieval phase, and a decrease in the number of spurious attractors. However, how to exploit and learn these cycles to encode the relevant information is still an open issue. Two learning strategies are proposed, tested and compared: one rather classical, very reminiscent of the usual supervised Hebbian learning; the other more original, since it allows the coding attractor to be chosen by the network itself. Computer experiments with these two learning strategies are presented and explained. The second learning mechanism is advocated both for its cognitive relevance and on account of its much better performance in encoding and retrieving information. The kind of dynamics observed in our experiments (a cyclic attractor when a stimulus is presented and weak background chaos in the absence of stimulation) closely resembles neurophysiological data; since no straightforward applications have been found so far, we limit our justification to this qualitative mapping with brain observations and to the need to better explore how a physical device such as a brain can store and retrieve a huge quantity of information in a robust way.
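The contrast between fixed-point coding and limit-cycle coding can be sketched with a toy recurrent network: an asymmetric Hebbian rule, W += p_{t+1} p_t^T / N, stores a sequence of patterns so that synchronous updates walk around the cycle instead of settling into a fixed point. This toy stands in for neither of the chapter's two learning strategies; sizes and the rule are illustrative.

```python
import numpy as np

# Toy limit-cycle memory: asymmetric Hebbian storage of a pattern sequence.
rng = np.random.default_rng(0)
N, L = 64, 4                                   # neurons, cycle length
patterns = rng.choice([-1.0, 1.0], size=(L, N))

W = np.zeros((N, N))
for t in range(L):                             # map pattern t onto pattern t+1
    W += np.outer(patterns[(t + 1) % L], patterns[t]) / N

state = patterns[0].copy()
for _ in range(L):                             # one full trip around the cycle
    state = np.sign(W @ state)

cycle_overlap = state @ patterns[0] / N
print(cycle_overlap)  # near 1: trajectory returned to the starting pattern
```

With a symmetric rule (W += p p^T) the same update would converge to one static pattern; the asymmetric rule turns the stored memory into a trajectory.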
Part II - Cognitive Computing for Sensory Perception | Pp. 305-324
Complex Biological Memory Conceptualized as an Abstract Communication System – Human Long Term Memories Grow in Complexity during Sleep and Undergo Selection while Awake
Bruce G. Charlton; Peter Andras
Biological memory in humans and other animals with a central nervous system is often extremely complex in its organization and functioning. A description of memory from the perspective of complex systems may therefore be useful for interpreting and understanding existing neurobiological data and for planning future research. We define systems in terms of communications. A system does not include the communication units (‘CUs’) that produce and receive communications. A dense cluster of inter-referencing communications surrounded by a sparser set of communications constitutes a communication system. Memory systems are based on communication units that are more temporally stable than the CUs of the system that is using the memory system. We propose that the long term memory (LTM) system is a very large potential set of neurons among which self-reproducing communication networks (i.e. individual memories) may be established, propagate and grow. Long term memories consist of networks of self-reproducing communications between the neurons of the LTM. Neurons constitute the main communication units in the system, but neurons are not part of the abstract system of memory.
Part II - Cognitive Computing for Sensory Perception | Pp. 325-339
Nonlinear High-Order Model for Dynamic Synapse with Multiple Vesicle Pools
Bing Lu; Walter M. Yamada; Theodore W. Berger
A computational framework for studying nonlinear dynamic synapses is proposed in this chapter. The framework is based on biological observation and electrophysiological measurement of synaptic function. The “pool framework” results in a model composed of four vesicle pools that are serially connected in a loop. The vesicle release event is affected by facilitation and depression in a multiple-order fashion between the presynapse and the postsynapse. The proposed high-order dynamic synapse (HODS) model, using fewer parameters, predicts the experimental data recorded from Schaffer collateral – CA1 synapses under different experimental conditions better than either the basic additive dynamic synapse (DS) model or the basic multiplicative facilitation-depression (FD) model. Numerical study shows that the proposed model is stable and can efficiently capture the dynamic filtering property of the synapse. The model reflects biological reality with regard to neurotransmitter communication between the pre- and postsynapse, and such communication between neurons encodes the information underlying cognition and consciousness throughout the cortex. It is expected that the present model can be employed as a basic computational unit for exploring neural learning functions in a dynamic neural network framework.
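The basic multiplicative facilitation-depression (FD) model used here as a baseline is commonly formulated in the Tsodyks-Markram style: each spike releases a fraction u of the remaining resource R, facilitation transiently raises u, and depletion lowers R. A hedged sketch with illustrative time constants (conventions for the update order vary across formulations):

```python
import numpy as np

# Multiplicative facilitation-depression (FD) synapse, sketch.
# Time constants (ms) and the baseline release fraction U0 are illustrative.
TAU_F, TAU_D, U0 = 100.0, 300.0, 0.2

def fd_response(spike_times):
    u, R = U0, 1.0
    last, amps = None, []
    for t in spike_times:
        if last is not None:
            dt = t - last
            u = U0 + (u - U0) * np.exp(-dt / TAU_F)    # u decays back to U0
            R = 1.0 + (R - 1.0) * np.exp(-dt / TAU_D)  # R recovers toward 1
        amp = u * R              # multiplicative release on this spike
        u += U0 * (1.0 - u)      # spike-triggered facilitation
        R -= amp                 # spike-triggered depletion
        amps.append(amp)
        last = t
    return amps

amps = fd_response(np.arange(0.0, 200.0, 25.0))  # 40 Hz spike train
print(amps[1] > amps[0])      # facilitation dominates early in the train
print(amps[-1] < max(amps))   # depression takes over later
```

The HODS model of the chapter extends this two-variable picture with four serially looped vesicle pools, which is what lets it fit the Schaffer collateral – CA1 data with fewer parameters.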
Part II - Cognitive Computing for Sensory Perception | Pp. 341-358