Publications catalog - books
Artificial Neural Networks: ICANN 2007: 17th International Conference, Porto, Portugal, September 9-13, 2007, Proceedings, Part I
Joaquim Marques de Sá ; Luís A. Alexandre ; Włodzisław Duch ; Danilo Mandic (eds.)
Conference: 17th International Conference on Artificial Neural Networks (ICANN). Porto, Portugal. September 9-13, 2007
Abstract/Description – provided by the publisher
Not available.
Keywords – provided by the publisher
Artificial Intelligence (incl. Robotics); Computation by Abstract Devices; Pattern Recognition; Information Systems Applications (incl. Internet); Database Management; Neurosciences
Availability
Detected institution | Publication year | Browse | Download | Request |
---|---|---|---|---|
Not detected | 2007 | SpringerLink | | |
Information
Resource type:
books
Print ISBN
978-3-540-74689-8
Electronic ISBN
978-3-540-74690-4
Publisher
Springer Nature
Country of publication
United Kingdom
Publication date
2007
Publication rights information
© Springer-Verlag Berlin Heidelberg 2007
Table of contents
Impact of Shrinking Technologies on the Activation Function of Neurons
Ralf Eickhoff; Tim Kaulmann; Ulrich Rückert
Artificial neural networks can solve a wide variety of applications, e.g. classification or approximation tasks, and various hardware realizations exist to exploit their advantages in technical systems. In this work, the impact of shrinking device sizes on the activation function of neurons is investigated with respect to area demands, power consumption, and the maximum resolution of their information processing. Furthermore, analog and digital implementations are compared in emerging silicon technologies beyond 100 nm feature size.
- Advances in Neural Network Architectures | Pp. 501-510
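As a hedged illustration of the resolution trade-off the abstract refers to (not the authors' hardware model; all values are assumptions), the effect of a limited bit-width on a sigmoid activation can be sketched by quantizing its output:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def quantized_sigmoid(x, bits):
    # Quantize the sigmoid output to 2**bits levels, mimicking the limited
    # resolution of a digital activation-function implementation.
    levels = 2 ** bits
    return round(sigmoid(x) * (levels - 1)) / (levels - 1)

def max_error(bits):
    # Worst-case deviation of the quantized activation over a sample grid.
    xs = [i / 10.0 for i in range(-60, 61)]
    return max(abs(sigmoid(x) - quantized_sigmoid(x, bits)) for x in xs)

# Fewer bits -> coarser activation -> larger worst-case error.
assert max_error(4) > max_error(8)
```

Shrinking technologies constrain exactly this kind of resolution, alongside the area and power costs the paper analyzes.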
Rectangular Basis Functions Applied to Imbalanced Datasets
Vicenç Soler; Marta Prim
Rectangular Basis Function Networks (RecBFN) derive from RBF networks and are composed of a set of fuzzy points which describe the network. In this paper, a set of characteristics of the RecBF is proposed for use with imbalanced datasets, especially the order of the training patterns. We demonstrate that this order is an important factor in improving the generalization of the solution, which is the main problem in imbalanced datasets. Finally, this solution is compared with other important methods for working with imbalanced datasets, showing that our method works well with this type of dataset and that an understandable set of rules can be extracted.
- Advances in Neural Network Architectures | Pp. 511-519
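A minimal sketch of the "fuzzy point" idea behind rectangular basis functions, assuming a trapezoidal membership with a core and a support hyperrectangle per dimension (an illustration, not the paper's exact RecBFN formulation):

```python
def rect_membership(x, core, support):
    """Membership of point x in a rectangular basis function (fuzzy point).

    core and support are lists of (low, high) intervals per dimension;
    membership is 1 inside the core and decreases linearly to 0 at the
    support boundary. Dimensions are combined by the minimum (fuzzy AND).
    """
    m = 1.0
    for xi, (cl, ch), (sl, sh) in zip(x, core, support):
        if cl <= xi <= ch:
            d = 1.0
        elif sl <= xi < cl:
            d = (xi - sl) / (cl - sl)
        elif ch < xi <= sh:
            d = (sh - xi) / (sh - ch)
        else:
            d = 0.0
        m = min(m, d)
    return m

core = [(2.0, 4.0)]
support = [(0.0, 6.0)]
assert rect_membership([3.0], core, support) == 1.0  # inside the core
assert rect_membership([1.0], core, support) == 0.5  # halfway up the ramp
assert rect_membership([7.0], core, support) == 0.0  # outside the support
```

Because each fuzzy point is axis-aligned, it translates directly into an interpretable if-then rule, which is the property the paper exploits for rule extraction.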
Qualitative Radial Basis Function Networks Based on Distance Discretization for Classification Problems
Xavier Parra; Andreu Català
This paper presents a radial basis function neural network which leads to classifiers of lower complexity by using a qualitative radial function based on distance discretization. The proposed neural network model generates smaller solutions for a similar generalization performance, giving rise to classifiers of reduced complexity in the sense of fewer radial basis functions. Classification experiments on real-world data sets show that the number of radial basis functions can in some cases be reduced significantly without affecting the classification accuracy.
- Advances in Neural Network Architectures | Pp. 520-528
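The distance-discretization idea can be sketched as follows: instead of evaluating a continuous Gaussian, the distance to a center is mapped onto a few qualitative activation levels. Thresholds and level values here are illustrative assumptions, not taken from the paper:

```python
import math

def qualitative_rbf(x, center, thresholds=(0.5, 1.5, 3.0)):
    # Map the Euclidean distance to the center onto discrete activation
    # bands: near, medium, far, and out of range.
    d = math.dist(x, center)
    activations = (1.0, 0.5, 0.1, 0.0)  # one value per distance band
    for t, a in zip(thresholds, activations):
        if d <= t:
            return a
    return activations[-1]

assert qualitative_rbf([0.2, 0.0], [0.0, 0.0]) == 1.0
assert qualitative_rbf([1.0, 0.0], [0.0, 0.0]) == 0.5
assert qualitative_rbf([5.0, 0.0], [0.0, 0.0]) == 0.0
```

A unit with such a stepped response carries less information per basis function, which is consistent with the paper's finding that fewer units can suffice for similar accuracy.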
A Control Approach to a Biophysical Neuron Model
Tim Kaulmann; Axel Löffler; Ulrich Rückert
In this paper we present a neuron model based on the description of biophysical mechanisms combined with a regulatory mechanism from control theory. The aim of this work is to provide a neuron model capable of describing the main features of biological neurons, such as maintaining an equilibrium potential using the Na-K-ATPase and generating action potentials, and of providing an estimate of the energy consumption of a single cell in a) quiescent mode (or equilibrium state) and b) firing state, when excited by other neurons. The same mechanism has also been used to model the synaptic excitation in the simulated system.
- Advances in Neural Network Architectures | Pp. 529-538
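A crude sketch of the equilibrium-maintenance idea, assuming a single-compartment membrane with a leak conductance and a constant pump current standing in for the Na-K-ATPase (all constants are illustrative, not the paper's parameters):

```python
def simulate_membrane(i_ext=0.0, steps=10000, dt=0.01):
    """Euler integration of dV/dt = (-g_leak*(V - E_leak) + I_pump + I_ext)/C.

    The constant pump current shifts the resting potential away from the
    leak reversal potential, a toy version of active ionic regulation.
    """
    C, g_leak, E_leak = 1.0, 0.1, -70.0   # capacitance, leak, reversal (mV)
    i_pump = -0.5                          # hyperpolarizing pump current
    v = -60.0
    for _ in range(steps):
        dv = (-g_leak * (v - E_leak) + i_pump + i_ext) / C
        v += dt * dv
    return v

# Without input, the membrane settles at E_leak + i_pump/g_leak = -75 mV.
v_rest = simulate_membrane()
assert abs(v_rest - (-75.0)) < 0.1
# Excitatory input depolarizes the equilibrium.
assert simulate_membrane(i_ext=1.0) > v_rest
```

The paper's model additionally generates action potentials and accounts for the energetic cost of the pump in both quiescent and firing states, which this fragment does not attempt.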
Integrate-and-Fire Neural Networks with Monosynaptic-Like Correlated Activity
Héctor Mesa; Francisco J. Veredas
To study the physiology of the central nervous system it is necessary to understand the properties of the neural networks that compose it and form its functional substrate. Modeling and simulation of neural networks allow us to approach this problem from the point of view of analyzing the activity correlation between pairs of neurons. In this paper, we define an optimized integrate-and-fire model of the simplest possible network, the monosynaptic circuit, and we raise the problem of searching for alternative non-monosynaptic circuits that generate monosynaptic-like correlated activity. For this purpose, we design an evolutionary algorithm with a crossover-with-speciation operator that works on populations of neural networks. The optimization of the neuronal model and the concurrent execution of the simulations allow us to cover the search space efficiently and finally obtain networks with monosynaptic-like correlated activity.
- Advances in Neural Network Architectures | Pp. 539-548
Multi-dimensional Recurrent Neural Networks
Alex Graves; Santiago Fernández; Jürgen Schmidhuber
Recurrent neural networks (RNNs) have proved effective at one-dimensional sequence learning tasks, such as speech and online handwriting recognition. Some of the properties that make RNNs suitable for such tasks, for example robustness to input warping and the ability to access contextual information, are also desirable in multi-dimensional domains. However, there has so far been no direct way of applying RNNs to data with more than one spatio-temporal dimension. This paper introduces multi-dimensional recurrent neural networks, thereby extending the potential applicability of RNNs to vision, video processing, medical imaging and many other areas, while avoiding the scaling problems that have plagued other multi-dimensional models. Experimental results are provided for two image segmentation tasks.
- Advances in Neural Network Architectures | Pp. 549-558
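The core idea, replacing the single recurrent connection of a 1D RNN with one recurrent connection per dimension, can be sketched with scalar units (weights and topology here are illustrative assumptions, not the paper's architecture, which uses full weight matrices and multiple scanning directions):

```python
import math

def mdrnn_forward(image, w_in=0.5, w_rec=0.3):
    """Forward pass of a toy 2D recurrent network with scalar units.

    Each position receives its own input plus the hidden activations of
    its top and left neighbours, so context flows across both dimensions.
    """
    rows, cols = len(image), len(image[0])
    h = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            top = h[i - 1][j] if i > 0 else 0.0
            left = h[i][j - 1] if j > 0 else 0.0
            h[i][j] = math.tanh(w_in * image[i][j] + w_rec * (top + left))
    return h

img = [[1.0, 0.0], [0.0, 0.0]]
h = mdrnn_forward(img)
# Activation at (0,0) reaches every other position through the recurrence,
# even though their direct inputs are zero.
assert h[0][0] > 0 and h[0][1] > 0 and h[1][0] > 0 and h[1][1] > 0
```

Scanning the lattice in a fixed order keeps the cost linear in the number of positions, which is how the approach avoids the scaling problems the abstract mentions.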
FPGA Implementation of an Adaptive Stochastic Neural Model
Giuliano Grossi; Federico Pedersini
In this paper an FPGA implementation of a novel stochastic neural model for solving constrained NP-hard problems is proposed and developed. The hardware implementation achieves high computation speed by exploiting parallelism, as the neuron-update and constraint-violation-check phases can be performed simultaneously.
The neural system has been tested on random and benchmark graphs, showing good performance with respect to the same heuristic for the same problems. Furthermore, the computational speed of the FPGA implementation has been measured and compared to a software implementation. The developed architecture achieves dramatically faster computation than the software implementation, even when adopting a low-cost FPGA chip.
- Advances in Neural Network Architectures | Pp. 559-568
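The two-phase structure (stochastic neuron update, then constraint-violation check) can be illustrated in software with a toy heuristic for the NP-hard maximum independent set problem. This is a sketch of the phase structure only, under assumed dynamics, not the paper's neural model:

```python
import random

def stochastic_mis(adj, sweeps=200, seed=0):
    """Toy stochastic heuristic for maximum independent set.

    adj maps each vertex 0..n-1 to its neighbour list. Each sweep has a
    stochastic neuron-update phase followed by a repair phase that
    enforces the independence constraint (it is this pair of phases that
    the FPGA can execute in parallel across neurons).
    """
    rng = random.Random(seed)
    n = len(adj)
    state = [0] * n
    for _ in range(sweeps):
        # Neuron-update phase: each unit stochastically tries to switch on.
        for i in range(n):
            if rng.random() < 0.5:
                state[i] = 1
        # Constraint-check phase: switch off units with an active neighbour.
        for i in range(n):
            if state[i] and any(state[j] for j in adj[i]):
                state[i] = 0
    return state

# Triangle {0,1,2} plus a pendant vertex 3 attached to 2.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
s = stochastic_mis(adj)
# Whatever the random trajectory, the repair phase guarantees a valid
# independent set: no edge has both endpoints active.
assert all(not (s[i] and s[j]) for i in adj for j in adj[i])
```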
Global Robust Stability of Competitive Neural Networks with Continuously Distributed Delays and Different Time Scales
Yonggui Kao; QingHe Ming
The dynamics of cortical cognitive maps developed by self-organization must include aspects of both long- and short-term memory. The behavior of such a neural network is characterized by an equation of neural activity as a fast phenomenon and an equation of synaptic modification as a slow part of the neural system; moreover, the model is based on an unsupervised synaptic learning algorithm. In this paper, using topological degree theory and strict Lyapunov functional methods, we prove existence and uniqueness of the equilibrium of competitive neural networks with continuously distributed delays and different time scales, and present some new criteria for its global robust stability.
- Neural Dynamics and Complex Systems | Pp. 569-578
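A standard formulation from the literature on competitive neural networks with different time scales couples a fast activity (STM) equation to a slow synaptic (LTM) equation; this is a generic sketch, not necessarily the paper's exact system, in which the instantaneous feedback term is replaced by an integral over past activity to model the continuously distributed delays:

```latex
\begin{aligned}
\varepsilon \dot{x}_i(t) &= -a_i x_i(t) + \sum_{j=1}^{n} D_{ij}\, f_j\big(x_j(t)\big) + B_i \sum_{l=1}^{p} m_{il}(t)\, y_l , \\
\dot{m}_{il}(t) &= -m_{il}(t) + y_l\, f_i\big(x_i(t)\big) ,
\end{aligned}
```

where $x_i$ is the neural activity (fast dynamics, short-term memory), $m_{il}$ the synaptic efficacy (slow dynamics, long-term memory), $y_l$ the external stimulus, and $\varepsilon > 0$ the time-scale ratio between the two equations.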
Nonlinear Dynamics Emerging in Large Scale Neural Networks with Ontogenetic and Epigenetic Processes
Javier Iglesias; Olga K. Chibirova; Alessandro E. P. Villa
We simulated a large-scale spiking neural network characterized by an initial developmental phase featuring cell death driven by an excessive firing rate, followed by the onset of spike-timing-dependent synaptic plasticity (STDP) driven by spatiotemporal patterns of stimulation. The network activity stabilized such that recurrent preferred firing sequences appeared during the STDP phase. The analysis of the statistical properties of these patterns gives hints toward the hypothesis that a neural network may be characterized by a particular state of an underlying dynamical system that produces recurrent firing patterns.
- Neural Dynamics and Complex Systems | Pp. 579-588
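For reference, the pair-based STDP rule driving the plasticity phase can be sketched as follows, with generic textbook constants rather than those of the simulated network:

```python
import math

def stdp_update(w, pre_time, post_time, a_plus=0.05, a_minus=0.055, tau=20.0):
    """Pair-based spike-timing-dependent plasticity.

    Potentiate the weight when the presynaptic spike precedes the
    postsynaptic one (causal pairing), depress it otherwise; the change
    decays exponentially with the spike-time difference.
    """
    dt = post_time - pre_time
    if dt > 0:      # pre before post: long-term potentiation
        w += a_plus * math.exp(-dt / tau)
    else:           # post before (or with) pre: long-term depression
        w -= a_minus * math.exp(dt / tau)
    return max(0.0, w)  # keep the excitatory weight non-negative

w0 = 0.5
assert stdp_update(w0, pre_time=10.0, post_time=15.0) > w0   # LTP
assert stdp_update(w0, pre_time=15.0, post_time=10.0) < w0   # LTD
```

Repeated application of such a rule under patterned stimulation is what lets preferred firing sequences stabilize in the simulated network.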
Modeling of Dynamics Using Process State Projection on the Self Organizing Map
Juan J. Fuertes-Martínez; Miguel A. Prada; Manuel Domínguez-González; Perfecto Reguera-Acevedo; Ignacio Díaz-Blanco; Abel A. Cuadrado-Vega
In this paper, an approach to modeling the dynamics of multivariable processes based on motion analysis of the process state trajectory is presented. The trajectory followed by the projection of the process state onto the 2D neural lattice of a Self-Organizing Map (SOM) is used as the starting point of the analysis. In a first approach, a coarse-grain cluster-level model is proposed to identify the possible transitions among process operating conditions (clusters). Alternatively, in a finer-grain neuron-level approach, a SOM neural network whose inputs are 6-dimensional vectors encoding the trajectory (T-SOM) is defined at the top level, while the KR-SOM, a generalization of the SOM algorithm to the continuous case, is used at the bottom level for continuous trajectory generation, in order to avoid the problems caused in trajectory analysis by the discrete nature of the SOM. Experimental results on the application of the proposed modeling method to the supervision of a real industrial plant are included.
- Neural Dynamics and Complex Systems | Pp. 589-598
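The starting point of the analysis, projecting a sequence of process states onto the SOM lattice via best-matching units, can be sketched as follows (a trained 2x2 map with hand-set prototype vectors stands in for a real SOM here; the values are illustrative):

```python
def bmu(weights, x):
    """Best-matching unit for input x: the lattice cell whose prototype
    vector is closest to x in squared Euclidean distance.

    weights maps (row, col) -> prototype vector.
    """
    def dist2(w):
        return sum((wi - xi) ** 2 for wi, xi in zip(w, x))
    return min(weights, key=lambda k: dist2(weights[k]))

def project_trajectory(weights, states):
    # The sequence of visited lattice cells is the discrete trajectory
    # analysed at the cluster and neuron levels.
    return [bmu(weights, x) for x in states]

weights = {(0, 0): [0.0, 0.0], (0, 1): [1.0, 0.0],
           (1, 0): [0.0, 1.0], (1, 1): [1.0, 1.0]}
states = [[0.1, 0.1], [0.9, 0.2], [0.8, 0.9]]
assert project_trajectory(weights, states) == [(0, 0), (0, 1), (1, 1)]
```

The discreteness visible here, with states jumping between cells, is exactly the limitation that motivates the continuous KR-SOM at the bottom level of the paper's two-level scheme.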