Publications catalog - books
Artificial Neural Networks: ICANN 2007: 17th International Conference, Porto, Portugal, September 9-13, 2007, Proceedings, Part I
Joaquim Marques de Sá ; Luís A. Alexandre ; Włodzisław Duch ; Danilo Mandic (eds.)
Conference: 17th International Conference on Artificial Neural Networks (ICANN), Porto, Portugal, September 9-13, 2007
Abstract/Description - provided by the publisher
Not available.
Keywords - provided by the publisher
Artificial Intelligence (incl. Robotics); Computation by Abstract Devices; Pattern Recognition; Information Systems Applications (incl. Internet); Database Management; Neurosciences
Availability
Detected institution | Publication year | Browse | Download | Request
---|---|---|---|---
Not detected | 2007 | SpringerLink | |
Information
Resource type:
books
Print ISBN
978-3-540-74689-8
Electronic ISBN
978-3-540-74690-4
Publisher
Springer Nature
Country of publication
United Kingdom
Publication date
2007
Publication rights information
© Springer-Verlag Berlin Heidelberg 2007
Table of contents
Improving the GRLVQ Algorithm by the Cross Entropy Method
Abderrahmane Boubezoul; Sébastien Paris; Mustapha Ouladsine
This paper discusses an alternative approach to parameter optimization for prototype-based learning algorithms, which typically minimize an objective function by gradient search. The proposed approach is a stochastic optimization method called the Cross Entropy (CE) method. The CE method is used to tackle the initialization sensitivity problem associated with the original generalized Learning Vector Quantization (GLVQ) algorithm and its variants, and to locate globally optimal solutions. We focus our study on a variant that uses a weighted norm instead of the Euclidean norm in order to select the most relevant features. The results in this paper indicate that the CE method can successfully be applied to this kind of problem and efficiently generates high-quality solutions. Highly competitive numerical results on real-world data sets are also reported.
- Advances in Neural Network Learning Methods | Pp. 199-208
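The CE method described in the abstract above is a general stochastic optimizer. Below is a minimal sketch of how it can place LVQ-style prototypes by minimizing a misclassification objective; the toy data, the objective, and all hyperparameters (population size, elite fraction, smoothing floor) are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data: one Gaussian blob per class (assumption).
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(3.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

def error_rate(theta):
    """Objective: error of a nearest-prototype classifier.
    theta flattens one 2-D prototype per class."""
    protos = theta.reshape(2, 2)
    d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=2)
    return np.mean(d.argmin(axis=1) != y)

# CE loop: sample candidates, keep the elite, refit the Gaussian sampler.
mu, sigma = np.zeros(4), np.full(4, 3.0)
for _ in range(50):
    cand = rng.normal(mu, sigma, size=(200, 4))
    scores = np.array([error_rate(c) for c in cand])
    elite = cand[np.argsort(scores)[:20]]          # best 10% of the population
    mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6

print("final error rate:", error_rate(mu))
```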
Incremental and Decremental Learning for Linear Support Vector Machines
Enrique Romero; Ignacio Barrio; Lluís Belanche
We present a method to find the exact maximal margin hyperplane for linear Support Vector Machines when a new (existing) component is added to (removed from) the inner product. The maximal margin hyperplane with the new inner product is obtained in terms of that for the old inner product, without re-computing it from scratch, and the procedure is reversible. An algorithm to implement the proposed method is presented, which avoids matrix inversions from scratch. Possible applications include feature selection and the design of kernels from similarity measures.
- Advances in Neural Network Learning Methods | Pp. 209-218
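The paper derives the exact new hyperplane from the old one. As a loose illustration of the underlying warm-start idea only (not the authors' closed-form procedure), the sketch below continues a primal hinge-loss solver from the old weight vector, zero-padded, when a new feature is added to the inner product; the data, solver, and constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def train_linear_svm(X, y, w0=None, lam=0.01, epochs=300, lr=0.01):
    """Primal hinge-loss solver via plain subgradient descent."""
    w = np.zeros(X.shape[1]) if w0 is None else w0.copy()
    for _ in range(epochs):
        viol = y * (X @ w) < 1                     # margin violators
        grad = lam * w
        if viol.any():
            grad = grad - (y[viol, None] * X[viol]).mean(axis=0)
        w -= lr * grad
    return w

X = rng.normal(size=(200, 5))
y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1.0, -1.0)

w_old = train_linear_svm(X, y)                     # model on 5 features

# A 6th feature is added to the inner product: extend the old solution with
# a zero weight and keep optimizing, instead of restarting from scratch.
X_new = np.hstack([X, (X[:, 0] * X[:, 1])[:, None]])
w_warm = train_linear_svm(X_new, y, w0=np.append(w_old, 0.0), epochs=100)
print("warm-started weights:", np.round(w_warm, 3))
```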
An Efficient Method for Pruning the Multilayer Perceptron Based on the Correlation of Errors
Cláudio M. S. Medeiros; Guilherme A. Barreto
In this paper we present a novel method for pruning redundant weights of a trained multilayer perceptron (MLP). The proposed method is based on the correlation analysis of the errors produced by the output neurons and the backpropagated errors associated with the hidden neurons. Repeated application of the method eventually eliminates all connections of a given neuron. Simulations using real-world data indicate that, in terms of performance, the proposed method compares favorably with standard pruning techniques, such as the Optimal Brain Surgeon (OBS) and Weight Decay and Elimination (WDE), but at much lower computational cost.
- Advances in Neural Network Learning Methods | Pp. 219-228
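A rough sketch of the correlation criterion described above: score each hidden unit by the correlation between its backpropagated error and the output error, then zero out the connections of low-scoring units. The network here is random for brevity (the paper prunes a trained MLP), and the threshold is an arbitrary assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 4))
t = np.sin(X[:, 0]) + 0.1 * rng.normal(size=300)   # toy regression target

# A small 4-10-1 tanh MLP with random weights (a trained net in practice).
W1, b1 = 0.5 * rng.normal(size=(4, 10)), np.zeros(10)
W2, b2 = 0.5 * rng.normal(size=(10, 1)), np.zeros(1)

a1 = X @ W1 + b1                                   # hidden pre-activations
h = np.tanh(a1)
yhat = (h @ W2 + b2).ravel()

e_out = yhat - t                                   # output error, shape (n,)
e_hid = (e_out[:, None] * W2.T) * (1.0 - h ** 2)   # backpropagated errors, (n, 10)

# Relevance score: |correlation| of each hidden error signal with the output error.
corr = np.array([abs(np.corrcoef(e_hid[:, j], e_out)[0, 1]) for j in range(10)])

prune = corr < 0.1                                 # arbitrary threshold (assumption)
W1[:, prune] = 0.0                                 # remove all connections of
W2[prune, :] = 0.0                                 # the low-correlation units
print("pruned hidden units:", np.where(prune)[0])
```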
Reinforcement Learning for Cooperative Actions in a Partially Observable Multi-agent System
Yuki Taniguchi; Takeshi Mori; Shin Ishii
In this article, we apply policy gradient-based reinforcement learning to allow multiple agents to perform cooperative actions in a partially observable environment. We introduce an auxiliary state variable, an internal state, whose stochastic process is Markov, to extract important features of the multi-agent dynamics. Computer simulations show that every agent can identify an appropriate internal state model and acquire a good policy; this approach is shown to be more effective than a traditional memory-based method.
- Advances in Neural Network Learning Methods | Pp. 229-238
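As a minimal illustration of why an internal state helps under partial observability, the sketch below runs single-agent REINFORCE on a toy task whose reward depends on the previous action while observations carry no information. The internal state is fixed to "remember the last action", whereas the paper learns the internal-state model; the task and all constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
theta = np.zeros((2, 2))                 # logits: internal state m -> action a

def policy(m):
    p = np.exp(theta[m] - theta[m].max())
    return p / p.sum()

alpha, episodes, T = 0.1, 500, 20
for _ in range(episodes):
    m, prev_a = 0, None
    grads, rewards = [], []
    for _ in range(T):
        p = policy(m)
        a = rng.choice(2, p=p)
        r = 1.0 if prev_a is not None and a != prev_a else 0.0  # reward alternation
        g = np.zeros_like(theta)
        g[m] = np.eye(2)[a] - p          # d log pi(a|m) / d theta
        grads.append(g)
        rewards.append(r)
        prev_a, m = a, a                 # internal state = last action (fixed model)
    G = np.cumsum(rewards[::-1])[::-1]   # returns-to-go
    for g, Gt in zip(grads, G):          # REINFORCE update
        theta += alpha * Gt * g

print("pi(.|m=0) =", policy(0), " pi(.|m=1) =", policy(1))
```

A memoryless policy is stuck at chance here; the one-bit internal state makes the process Markov and lets the agent learn to alternate.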
Input Selection for Radial Basis Function Networks by Constrained Optimization
Jarkko Tikka
Input selection in nonlinear function approximation is an important and difficult problem. Neural networks provide good generalization in many cases, but their interpretability is usually limited. However, the contributions of input variables to the prediction of the output would be valuable information in many real-world applications. In this work, an input selection algorithm for Radial Basis Function networks is proposed. The selection of input variables is achieved using a constrained cost function in which each input dimension is weighted, with the constraints imposed on the values of the weights. The proposed algorithm solves a log-barrier reformulation of the original optimization problem. The input selection algorithm was applied to both simulated and benchmark data, and the obtained results were compelling.
- Advances in Neural Network Learning Methods | Pp. 239-248
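A hedged sketch of the weighted-cost idea: each input dimension d receives a weight w_d >= 0 under a budget sum(w_d) <= C, enforced through a log-barrier added to the training error, so small weights flag irrelevant inputs. The model, data, derivative-free solver, and constants are illustrative assumptions; the paper's exact cost function and optimizer differ.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
X = rng.normal(size=(150, 4))
# Only dimensions 0 and 1 matter; 2 and 3 are irrelevant inputs.
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.05 * rng.normal(size=150)

centers = X[rng.choice(150, 10, replace=False)]    # fixed RBF centers
C, mu = 4.0, 1e-3                                  # weight budget, barrier strength

def cost(params):
    w, c = params[:4], params[4:]                  # input weights, output layer
    if np.any(w <= 0) or w.sum() >= C:
        return np.inf                              # outside the barrier domain
    d2 = (((X[:, None, :] - centers[None]) ** 2) * w).sum(axis=2)
    yhat = np.exp(-d2) @ c
    barrier = -mu * (np.log(w).sum() + np.log(C - w.sum()))
    return np.mean((yhat - y) ** 2) + barrier

p0 = np.concatenate([np.full(4, 0.5), np.zeros(10)])
res = minimize(cost, p0, method="Nelder-Mead",
               options={"maxiter": 20000, "fatol": 1e-9})
print("input weights:", np.round(res.x[:4], 3))    # small weight ~ irrelevant input
```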
An Online Backpropagation Algorithm with Validation Error-Based Adaptive Learning Rate
Stefan Duffner; Christophe Garcia
We present a new learning algorithm for feed-forward neural networks based on the standard Backpropagation method using an adaptive global learning rate. The adaptation is based on the evolution of the error criterion but, in contrast to most other approaches, our method uses the error measured on the validation set instead of the training set to dynamically adjust the global learning rate. At no time are the examples of the validation set directly used for training the network, in order to maintain its original purpose of validating the training and of performing "early stopping". The proposed algorithm is a heuristic method consisting of two phases. In the first phase, the learning rate is adjusted after each iteration such that a minimum of the error criterion on the validation set is quickly attained. In the second phase, this search is refined by repeatedly reverting to previous weight configurations and decreasing the global learning rate. We experimentally show that the proposed method rapidly converges and that it outperforms standard Backpropagation in terms of generalization when the size of the training set is reduced.
- Advances in Neural Network Learning Methods | Pp. 249-258
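A compact sketch of the two-phase behavior on a linear least-squares model: gradients come only from the training set, the global learning rate grows while the validation error falls, and when it rises the weights revert and the rate shrinks. The model and the increase/decrease factors are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(300, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=300)
Xtr, ytr, Xva, yva = X[:200], y[:200], X[200:], y[200:]   # train / validation split

def val_err(w):
    return np.mean((Xva @ w - yva) ** 2)

w, lr = np.zeros(3), 0.5
best_w, best_e = w.copy(), val_err(w)
for epoch in range(100):
    grad = 2 * Xtr.T @ (Xtr @ w - ytr) / len(ytr)  # gradient from training data only
    w_try = w - lr * grad
    e = val_err(w_try)                             # validation error only steers lr
    if e < best_e:
        w, best_w, best_e = w_try, w_try.copy(), e
        lr *= 1.1                                  # improving: speed up (phase 1)
    else:
        w = best_w.copy()                          # revert weights (phase 2)
        lr *= 0.5                                  # and shrink the global rate
print("validation MSE:", best_e, " final lr:", lr)
```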
Adaptive Self-scaling Non-monotone BFGS Training Algorithm for Recurrent Neural Networks
Chun-Cheng Peng; George D. Magoulas
In this paper, we propose an adaptive BFGS method that uses a self-adaptive scaling factor for the Hessian matrix and is equipped with a non-monotone strategy. Our experimental evaluation using different recurrent network architectures provides evidence that the proposed approach successfully trains recurrent networks of various architectures, inheriting the benefits of BFGS while, at the same time, alleviating some of its limitations.
- Advances in Neural Network Learning Methods | Pp. 259-268
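The sketch below shows the two ingredients named in the abstract on a standard test function: an Oren-Luenberger-style self-scaling factor applied to the inverse-Hessian approximation, and a non-monotone Armijo rule that compares against the maximum of the last few function values. The window size and constants are assumptions; the paper's adaptive variant for recurrent-network training is more elaborate.

```python
import numpy as np

def f(x):
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

def grad(x):
    return np.array([-400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
                     200.0 * (x[1] - x[0] ** 2)])

x, H = np.array([-1.2, 1.0]), np.eye(2)            # start point, inverse Hessian
hist = [f(x)]                                      # recent f-values, window M = 5
for _ in range(200):
    g = grad(x)
    p = -H @ g                                     # quasi-Newton direction
    a, fmax = 1.0, max(hist[-5:])
    while f(x + a * p) > fmax + 1e-4 * a * (g @ p):
        a *= 0.5                                   # non-monotone Armijo backtracking
    s = a * p
    ynew = grad(x + s) - g
    sy = s @ ynew
    if sy > 1e-12:                                 # curvature condition holds
        H *= sy / (ynew @ H @ ynew)                # self-scaling factor
        rho, I = 1.0 / sy, np.eye(2)
        H = (I - rho * np.outer(s, ynew)) @ H @ (I - rho * np.outer(ynew, s)) \
            + rho * np.outer(s, s)
    x = x + s
    hist.append(f(x))
print("minimizer ~", x, " f =", f(x))
```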
Some Properties of the Gaussian Kernel for One Class Learning
Paul F. Evangelista; Mark J. Embrechts; Boleslaw K. Szymanski
This paper proposes a novel approach for directly tuning the Gaussian kernel matrix for one-class learning. The popular Gaussian kernel includes a free parameter, the kernel width σ, whose tuning is typically performed through validation. The value of this parameter significantly impacts model performance. This paper explores an automated method for tuning this kernel based upon a hill-climbing optimization of statistics obtained from the kernel matrix.
- Advances in Neural Network Learning Methods | Pp. 269-278
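A sketch of label-free kernel tuning by hill climbing on a statistic of the kernel matrix itself, as the abstract describes. The statistic used here, the variance of the off-diagonal entries (which vanishes when σ is too small or too large), is a plausible stand-in and not necessarily the authors' exact criterion.

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(100, 5))                      # unlabeled one-class sample

def kernel_stat(sigma):
    """Variance of off-diagonal Gaussian kernel entries: ~0 when sigma is
    extreme (K all zeros or all ones), larger at informative widths."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    return K[~np.eye(len(X), dtype=bool)].var()

sigma, step = 1.0, 0.5
best = kernel_stat(sigma)
for _ in range(100):                               # simple hill climbing on sigma
    for cand in (sigma + step, max(sigma - step, 1e-3)):
        s = kernel_stat(cand)
        if s > best:
            sigma, best = cand, s
            break
    else:
        step *= 0.5                                # no improvement: refine the step
print("selected sigma:", sigma)
```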
Improved SOM Learning Using Simulated Annealing
Antonino Fiannaca; Giuseppe Di Fatta; Salvatore Gaglio; Riccardo Rizzo; Alfonso M. Urso
The Self-Organizing Map (SOM) algorithm has been extensively used for analysis and classification problems. For this kind of problem, datasets are becoming larger and larger, making it necessary to speed up SOM learning. In this paper we present an application of the Simulated Annealing (SA) procedure to the SOM learning algorithm. The goal of the algorithm is to obtain fast learning and better performance in terms of matching of input data and regularity of the obtained map. An advantage of the proposed technique is that it preserves the simplicity of the basic algorithm. Several tests, carried out on different large datasets, demonstrate the effectiveness of the proposed algorithm in comparison with the original SOM and with some of its modifications introduced to speed up learning.
- Advances in Neural Network Learning Methods | Pp. 279-288
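A sketch of coupling SOM training with simulated annealing: each epoch proposes a perturbed learning rate, accepted by the Metropolis rule on the quantization error while the temperature cools. This captures the flavor of SA-driven SOM learning; the map size, schedules, and acceptance rule are assumptions, not the authors' exact scheme.

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(500, 2))                      # toy dataset
G = 8                                              # 8 x 8 map (assumption)
W = 0.1 * rng.normal(size=(G * G, 2))
coords = np.array([(i, j) for i in range(G) for j in range(G)], dtype=float)

def epoch(W, lr, radius):
    """One pass of standard SOM updates."""
    W = W.copy()
    for x in X:
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))          # best-matching unit
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        h = np.exp(-d2 / (2.0 * radius ** 2))                # neighborhood function
        W += lr * h[:, None] * (x - W)
    return W

def qerr(W):
    return np.mean(np.min(((X[:, None, :] - W[None]) ** 2).sum(axis=2), axis=1))

lr, T, err = 0.5, 1.0, qerr(W)
for t in range(30):
    lr_new = float(np.clip(lr * np.exp(0.3 * rng.normal()), 1e-3, 1.0))
    W_new = epoch(W, lr_new, radius=max(3.0 * (1 - t / 30), 0.5))
    e_new = qerr(W_new)
    if e_new < err or rng.random() < np.exp((err - e_new) / T):  # Metropolis rule
        W, err, lr = W_new, e_new, lr_new
    T *= 0.9                                        # cool the temperature
print("final quantization error:", err)
```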
The Usage of Golden Section in Calculating the Efficient Solution in Artificial Neural Networks Training by Multi-objective Optimization
Roselito A. Teixeira; Antônio P. Braga; Rodney R. Saldanha; Ricardo H. C. Takahashi; Talles H. Medeiros
In this work, a modification is made to the multi-objective optimization (MOBJ) training algorithm for Artificial Neural Networks (NN) of the Multilayer Perceptron (MLP) type in order to increase its computational efficiency. Usually, the number of efficient solutions to be generated is a parameter that must be provided by the user. In this work, this number is determined automatically by an algorithm based on golden section search, and it is generally smaller than a user-specified value, yielding a considerable reduction in processing time while keeping the high generalization capability of the solutions obtained by the original method.
- Advances in Neural Network Learning Methods | Pp. 289-298
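Golden section search itself is the standard one-dimensional routine below. In the paper it serves to locate the best solution along the Pareto set without a user-specified count of efficient solutions; here it simply minimizes a stand-in unimodal curve (an assumption) representing, e.g., validation error as a function of a trade-off parameter.

```python
import math

def golden_section(f, a, b, tol=1e-6):
    """Minimize a unimodal f on [a, b] by golden section search."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0          # 1/phi ~ 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    fc, fd = f(c), f(d)
    while abs(b - a) > tol:
        if fc < fd:
            b, d, fd = d, c, fc                    # minimum lies in [a, d]
            c = b - invphi * (b - a)
            fc = f(c)
        else:
            a, c, fc = c, d, fd                    # minimum lies in [c, b]
            d = a + invphi * (b - a)
            fd = f(d)
    return 0.5 * (a + b)

# Stand-in unimodal curve, e.g. validation error vs. trade-off parameter.
f = lambda t: (t - 0.3) ** 2 + 0.1
print("best trade-off parameter:", golden_section(f, 0.0, 1.0))
```

Each interval reduction reuses one of the two previous function evaluations, which is the efficiency gain golden section offers over naive trisection.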