Publications catalogue - books

Artificial Neural Networks: ICANN 2007: 17th International Conference, Porto, Portugal, September 9-13, 2007, Proceedings, Part I

Joaquim Marques de Sá ; Luís A. Alexandre ; Włodzisław Duch ; Danilo Mandic (eds.)

In conference: 17th International Conference on Artificial Neural Networks (ICANN). Porto, Portugal. September 9-13, 2007

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Artificial Intelligence (incl. Robotics); Computation by Abstract Devices; Pattern Recognition; Information Systems Applications (incl. Internet); Database Management; Neurosciences

Availability
Institution detected | Year of publication | Browse | Download | Request
Not detected | 2007 | SpringerLink

Information

Resource type:

books

Printed ISBN

978-3-540-74689-8

Electronic ISBN

978-3-540-74690-4

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© Springer-Verlag Berlin Heidelberg 2007

Table of contents

Implementing Classical Conditioning with Spiking Neurons

Chong Liu; Jonathan Shapiro

In this paper, we attempt to implement classical conditioning with spiking neurons instead of connectionist neural networks. The neuron model used is a leaky linear integrate-and-fire model with a learning algorithm combining spike-time dependent Hebbian learning and spike-time dependent anti-Hebbian learning. Experimental results show that the major phenomena of classical conditioning, including Pavlovian conditioning, extinction, partial conditioning, blocking, inhibitory conditioning, overshadowing and secondary conditioning, can be implemented by the spiking neuron model proposed here and further indicate that spiking neuron models are well suited to implementing classical conditioning.

- Spiking Neural Networks | Pp. 400-410
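
A minimal sketch (not the authors' code) of the two ingredients named in the abstract: a leaky integrate-and-fire update and a spike-time dependent Hebbian/anti-Hebbian weight change. All constants are illustrative assumptions.

    import numpy as np

    def lif_step(v, input_spikes, w, tau=20.0, v_thresh=1.0, dt=1.0):
        """One Euler step of a leaky integrator; returns new potential and spike flag."""
        v = v + dt * (-v / tau + np.dot(w, input_spikes))
        if v >= v_thresh:
            return 0.0, True    # reset after firing
        return v, False

    def stdp_update(w, dt_spike, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
        """Hebbian / anti-Hebbian weight change as a function of pre-post spike timing."""
        if dt_spike > 0:    # pre before post: potentiate (Hebbian)
            return w + a_plus * np.exp(-dt_spike / tau_plus)
        else:               # post before pre: depress (anti-Hebbian)
            return w - a_minus * np.exp(dt_spike / tau_minus)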

Deformable Radial Basis Functions

Wolfgang Hübner; Hanspeter A. Mallot

Radial basis function networks (RBFs) are efficient general function approximators. They show good generalization performance and they are easy to train. Due to theoretical considerations, RBFs commonly use Gaussian activation functions. It has been shown that these tight restrictions on the choice of possible activation functions can be relaxed in practical applications. As an alternative, differences of sigmoidal functions (SRBFs) have been proposed. SRBFs have an additional parameter which increases the ability of a network node to adapt its shape to input patterns, even in cases where Gaussian functions fail.

In this paper we follow the idea of incorporating greater flexibility into radial basis functions. We propose to use splines as localized deformable radial basis functions (DRBFs). We present initial results which show that DRBFs can be evaluated more effectively than SRBFs. We show that even with the enhanced flexibility the network is easy to train and converges robustly towards smooth solutions.

- Advances in Neural Network Architectures | Pp. 411-420
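
The contrast between the standard Gaussian activation and the difference-of-sigmoids (SRBF) alternative mentioned above can be sketched as follows; the spline-based DRBF itself is not reproduced here, and the shape parameters are assumptions.

    import numpy as np

    def gaussian_rbf(x, center, width):
        # Classical localized bump around the center.
        r = np.linalg.norm(x - center)
        return np.exp(-(r / width) ** 2)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def srbf(x, center, width, delta=0.5):
        # Difference of two shifted sigmoids of the radial distance also gives a
        # localized bump; the extra parameter delta lets the node change its shape.
        r = np.linalg.norm(x - center)
        return sigmoid((r + delta) / width) - sigmoid((r - delta) / width)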

Selection of Basis Functions Guided by the L2 Soft Margin

Ignacio Barrio; Enrique Romero; Lluís Belanche

Support Vector Machines (SVMs) for classification tasks produce sparse models by maximizing the margin. Two limitations of this technique are considered in this work: firstly, the number of support vectors can be large and, secondly, the model requires the use of (Mercer) kernel functions. Recently, some works have proposed to maximize the margin while controlling the sparsity. These works also require the use of kernels. We propose a search process to select a subset of basis functions that maximize the margin without the requirement of being kernel functions. The sparsity of the model can be explicitly controlled. Experimental results show that accuracy close to SVMs can be achieved with much higher sparsity. Further, given the same level of sparsity, more powerful search strategies tend to obtain better generalization rates than simpler ones.

- Advances in Neural Network Architectures | Pp. 421-430
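
One possible reading of the proposed search, sketched under assumptions: greedy forward selection of basis-function columns, scored by a crude ridge-based surrogate for the L2 soft-margin objective. margin_objective below is a stand-in, not the authors' criterion, and labels are assumed in {-1, +1}.

    import numpy as np

    def margin_objective(Phi, y, lam=1.0):
        # Ridge-regularized fit used as a rough surrogate for the L2 soft margin.
        w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)
        margins = y * (Phi @ w)
        return margins.min() - lam * w @ w

    def forward_select(candidates, y, n_basis):
        """candidates: list of column vectors (evaluations of each basis function)."""
        chosen, remaining = [], list(range(len(candidates)))
        for _ in range(n_basis):
            scores = [margin_objective(np.column_stack([candidates[j] for j in chosen + [i]]), y)
                      for i in remaining]
            best = remaining.pop(int(np.argmax(scores)))   # keep the best-scoring candidate
            chosen.append(best)
        return chosen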

Extended Linear Models with Gaussian Prior on the Parameters and Adaptive Expansion Vectors

Ignacio Barrio; Enrique Romero; Lluís Belanche

We present an approximate Bayesian method for regression and classification with models linear in the parameters. Similar to the Relevance Vector Machine (RVM), each parameter is associated with an expansion vector. Unlike the RVM, the number of expansion vectors is specified beforehand. We assume an overall Gaussian prior on the parameters and find, with a gradient based process, the expansion vectors that (locally) maximize the evidence. This approach has lower computational demands than the RVM, and has the advantage that the vectors do not necessarily belong to the training set. Therefore, in principle, better vectors can be found. Furthermore, other hyperparameters can be learned in the same smooth joint optimization. Experimental results show that the freedom of the expansion vectors to be located away from the training data causes overfitting problems. These problems are alleviated by including a hyperprior that penalizes expansion vectors located far away from the input data.

- Advances in Neural Network Architectures | Pp. 431-440
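
The underlying model is standard Bayesian linear regression with a Gaussian prior, so the evidence that the expansion vectors could be tuned to maximize can be sketched as below; the RBF feature map, the hyperparameter values and the omission of the gradient ascent over V are assumptions of this illustration.

    import numpy as np

    def features(X, V, gamma=1.0):
        # Phi[n, m] = exp(-gamma * ||x_n - v_m||^2); V holds the expansion vectors.
        d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def log_evidence(X, y, V, alpha=1.0, sigma2=0.1):
        # Marginal likelihood of y under a zero-mean Gaussian prior on the weights
        # (precision alpha) and Gaussian noise (variance sigma2).
        Phi = features(X, V)
        C = sigma2 * np.eye(len(y)) + (1.0 / alpha) * Phi @ Phi.T
        sign, logdet = np.linalg.slogdet(C)
        return -0.5 * (logdet + y @ np.linalg.solve(C, y) + len(y) * np.log(2 * np.pi))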

Functional Modelling of Large Scattered Data Sets Using Neural Networks

Q. Meng; B. Li; N. Costen; H. Holstein

We propose a self-organising hierarchical Radial Basis Function (RBF) network for functional modelling of large amounts of scattered unstructured point data. The network employs an error-driven active learning algorithm and a multi-layer architecture, allowing progressive bottom-up reinforcement of local features in subdivisions of error clusters. For each RBF subnet, neurons can be inserted, removed or updated iteratively with full dimensionality adapting to the complexity and distribution of the underlying data. This flexibility is particularly desirable for highly variable spatial frequencies. Experimental results demonstrate that the network representation is conducive to geometric data formulation and simplification, and therefore to manageable computation and compact storage.

- Advances in Neural Network Architectures | Pp. 441-449
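
A toy sketch of the error-driven insertion idea: fit a coarse RBF layer, locate the inputs with the largest residual error, and insert new centres there before refitting. Widths, the ridge term and the insertion rule are illustrative assumptions, not the paper's algorithm.

    import numpy as np

    def fit_rbf(X, y, centers, width, lam=1e-6):
        # Least-squares fit of a Gaussian RBF layer with fixed centres and width.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        Phi = np.exp(-d2 / (2 * width ** 2))
        w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(len(centers)), Phi.T @ y)
        return w, Phi @ w

    def refine(X, y, centers, width, n_new=5):
        # Insert new centres at the data points with the largest residual error.
        w, pred = fit_rbf(X, y, centers, width)
        residual = np.abs(y - pred)
        worst = np.argsort(residual)[-n_new:]
        return np.vstack([centers, X[worst]])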

Stacking MF Networks to Combine the Outputs Provided by RBF Networks

Joaquín Torres-Sospedra; Carlos Hernández-Espinosa; Mercedes Fernández-Redondo

The performance of a Radial Basis Functions (RBF) network can be increased by using an ensemble of RBF networks, because RBF networks are successfully applied to classification problems and can be trained by gradient descent algorithms. Reviewing the bibliography, we can see that the performance of ensembles of Multilayer Feedforward (MF) networks can be improved by the two combination methods described in [1]. We think that we could get a better classification system if we applied these combiners to an RBF ensemble. In this paper we satisfactorily apply these two new methods on ensembles of RBF networks. Increasing the number of networks used in the combination module is also successfully proposed in this paper. The results show that training 3 MF networks to combine an RBF ensemble is the best alternative.

- Advances in Neural Network Architectures | Pp. 450-459
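
The general stacking scheme (not the specific combiners of [1]) can be sketched as follows: the class-probability outputs of the ensemble members become the input of a small multilayer feedforward combiner. The use of scikit-learn's MLPClassifier and of generic predict_proba members standing in for the RBF networks is an assumption for illustration.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def stack_features(ensemble, X):
        # Concatenate the class-probability outputs of every ensemble member.
        return np.hstack([m.predict_proba(X) for m in ensemble])

    def train_combiner(ensemble, X_val, y_val):
        # A small MF (multilayer feedforward) network learns to combine them.
        combiner = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000)
        combiner.fit(stack_features(ensemble, X_val), y_val)
        return combiner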

Neural Network Processing for Multiset Data

Simon McGregor

This paper introduces the notion of the variadic neural network (VNN). The inputs to a variadic network are an arbitrary-length list of n-tuples of real numbers, where n is fixed. In contrast to a recurrent network, which processes a list sequentially and is typically affected more by more recent list elements, a variadic network processes the list simultaneously and is affected equally by all list elements. Formally speaking, the network can be seen as instantiating a function on a multiset along with a member of that multiset. I describe a simple implementation of a variadic network architecture, the MLVP, and present experimental results showing that such a network can learn various variadic functions by back-propagation.

- Advances in Neural Network Architectures | Pp. 460-470
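
One way to obtain the order-independent behaviour described above, sketched under assumptions (this is an illustration of the idea, not the MLVP itself): encode every tuple with the same map, pool by summation so all elements contribute equally, and combine the pooled code with one distinguished member of the multiset.

    import numpy as np

    def encode(x, W, b):
        # Shared encoder applied to every n-tuple in the list.
        return np.tanh(W @ x + b)

    def variadic_forward(tuples, query, params):
        W_enc, b_enc, W_out, b_out = params
        pooled = sum(encode(t, W_enc, b_enc) for t in tuples)   # order-invariant pooling
        return np.tanh(W_out @ np.concatenate([pooled, query]) + b_out)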

The Introduction of Time-Scales in Reservoir Computing, Applied to Isolated Digits Recognition

Benjamin Schrauwen; Jeroen Defour; David Verstraeten; Jan Van Campenhout

Reservoir Computing (RC) is a recent research area in which an untrained recurrent network of nodes is used for the recognition of temporal patterns. Contrary to Recurrent Neural Networks (RNNs), where the weights of the connections between the nodes are trained, only a linear output layer is trained. We will introduce three different time-scales and show that the performance and computational complexity are highly dependent on these time-scales. This is demonstrated on an isolated spoken digits task.

- Advances in Neural Network Architectures | Pp. 471-479
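
A minimal leaky-integrator reservoir sketch, where the leak rate is one example of the time-scales discussed; only the linear readout is trained, here by ridge regression. Sizes and constants are illustrative assumptions.

    import numpy as np

    def run_reservoir(inputs, W_in, W_res, leak=0.3):
        # Untrained recurrent reservoir; leak sets the state time-scale.
        x = np.zeros(W_res.shape[0])
        states = []
        for u in inputs:
            x = (1 - leak) * x + leak * np.tanh(W_in @ u + W_res @ x)
            states.append(x.copy())
        return np.array(states)

    def train_readout(states, targets, lam=1e-4):
        # Only this linear output layer is trained (ridge regression).
        return np.linalg.solve(states.T @ states + lam * np.eye(states.shape[1]),
                               states.T @ targets)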

Partially Activated Neural Networks by Controlling Information

Ryotaro Kamimura

In this paper, we propose partial activation to simplify complex neural networks. For choosing important elements in a network, we develop a fully supervised competitive learning that can deal with any targets. This approach is an extension of competitive learning to a more general one, including supervised learning. Because competitive learning focuses on an important competitive unit, all the other competitive units are of no use. Thus, the number of connection weights to be updated can be reduced to a minimum point when we use competitive learning. We apply the method to the XOR problem to show that learning is possible with good interpretability of internal representations. Then, we apply the method to a student survey. In the problem, we try to show that the new method can produce connection weights that are more stable than those produced by BP. In addition, we show that, though connection weights are quite similar to those produced by linear regression analysis, generalization performance can be improved by changing the number of competitive units.

- Advances in Neural Network Architectures | Pp. 480-489
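
A rough sketch of the winner-take-all mechanism behind partial activation: only the competitive unit closest to the current input updates its incoming weights, so the number of weights touched per step stays minimal. The supervised extension described in the paper is not reproduced here; the learning rate is an assumption.

    import numpy as np

    def competitive_step(w, x, lr=0.05):
        # w: (units, dim) weight matrix; x: current input vector.
        winner = np.argmin(np.linalg.norm(w - x, axis=1))   # the one important unit
        w[winner] += lr * (x - w[winner])                    # update only that unit
        return winner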

CNN Based Hole Filler Template Design Using Numerical Integration Techniques

K. Murugesan; P. Elango

This paper presents a design method for the template of a hole-filler used to improve the performance of handwritten character recognition using numerical integration algorithms, based on the dynamic analysis of a cellular neural network (CNN). This is done by analyzing the features of the hole-filler template and the dynamic process of the CNN using popular numerical integration algorithms to obtain a set of inequalities satisfying its output characteristics as well as the parameter range of the hole-filler template. Simulation results are presented and compared for the Euler, Modified Euler and RK methods. It was found that the RK method performs well in terms of settling time and computation time for all step sizes.

- Advances in Neural Network Architectures | Pp. 490-500
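
For orientation, an Euler step of the standard CNN state equation dx/dt = -x + A*y + B*u + I, with the usual piecewise-linear output f(x) = 0.5(|x+1| - |x-1|). The 3x3 templates A, B and bias I stand for the hole-filler template the paper derives; no template values are given here.

    import numpy as np
    from scipy.signal import convolve2d

    def cnn_output(x):
        # Standard CNN piecewise-linear output nonlinearity.
        return 0.5 * (np.abs(x + 1) - np.abs(x - 1))

    def euler_step(x, u, A, B, I, dt=0.1):
        # x: cell states, u: input image, A/B: 3x3 feedback/control templates, I: bias.
        y = cnn_output(x)
        dx = -x + convolve2d(y, A, mode='same') + convolve2d(u, B, mode='same') + I
        return x + dt * dx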