Publications catalog - books
Artificial Neural Networks: ICANN 2007: 17th International Conference, Porto, Portugal, September 9-13, 2007, Proceedings, Part I
Joaquim Marques de Sá; Luís A. Alexandre; Włodzisław Duch; Danilo Mandic (eds.)
Conference: 17th International Conference on Artificial Neural Networks (ICANN). Porto, Portugal. September 9-13, 2007
Abstract/Description – provided by the publisher
Not available.
Keywords – provided by the publisher
Artificial Intelligence (incl. Robotics); Computation by Abstract Devices; Pattern Recognition; Information Systems Applications (incl. Internet); Database Management; Neurosciences
Availability
Detected institution | Publication year | Browse | Download | Request |
---|---|---|---|---|
Not detected | 2007 | SpringerLink | | |
Information
Resource type:
books
Print ISBN
978-3-540-74689-8
Electronic ISBN
978-3-540-74690-4
Publisher
Springer Nature
Country of publication
United Kingdom
Publication date
2007
Publication rights information
© Springer-Verlag Berlin Heidelberg 2007
Table of contents
Neural Network Ensemble Training by Sequential Interaction
M. A. H. Akhand; Kazuyuki Murase
Neural network ensembles (NNEs) have been shown to outperform single neural networks (NNs) in terms of generalization ability. The performance of an NNE therefore depends on good diversity among its component NNs. Popular NNE methods, such as bagging and boosting, use data sampling techniques to achieve diversity. In such methods, each NN is trained independently on a particular training set that is created probabilistically. Owing to this independent training strategy, there is a lack of interaction among the component NNs. To achieve interaction at training time, negative correlation learning (NCL) has been proposed for simultaneous training. NCL demands direct communication among component NNs, which is not possible in bagging and boosting. In this study, we first modify NCL from a simultaneous to a sequential style and then introduce it into bagging and boosting for the purpose of interaction. Empirical studies showed that sequential training-time interaction increased diversity among component NNs and outperformed conventional methods in generalization ability.
- Learning Theory | Pp. 98-108
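For context, the diversity-enforcing penalty of standard (simultaneous) negative correlation learning is usually written as below; this is the generic NCL formulation, not the paper's sequential variant.

```latex
% Standard (simultaneous) NCL error for ensemble member i over N training
% examples; \lambda controls the strength of the diversity-enforcing penalty.
E_i = \frac{1}{N}\sum_{n=1}^{N}\left[\frac{1}{2}\bigl(F_i(n)-d(n)\bigr)^2 + \lambda\, p_i(n)\right],
\qquad
p_i(n) = \bigl(F_i(n)-\bar F(n)\bigr)\sum_{j\neq i}\bigl(F_j(n)-\bar F(n)\bigr),
\qquad
\bar F(n) = \frac{1}{M}\sum_{j=1}^{M} F_j(n)
```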
Improving Optimality of Neural Rewards Regression for Data-Efficient Batch Near-Optimal Policy Identification
Daniel Schneegaß; Steffen Udluft; Thomas Martinetz
In this paper we present two substantial extensions of Neural Rewards Regression (NRR) [1]. In order to give a less biased estimator of the Bellman Residual and to facilitate the regression character of NRR, we incorporate an improved, Auxiliared Bellman Residual [2] and provide, to the best of our knowledge, the first Neural Network based implementation of the novel Bellman Residual minimisation technique. Furthermore, we extend NRR to Policy Gradient Neural Rewards Regression (PGNRR), where the strategy is directly encoded by a policy network. PGNRR profits from both the data-efficiency of the Rewards Regression approach and the directness of policy search methods. PGNRR further overcomes a crucial drawback of NRR as it extends the accordant problem class considerably by the applicability of continuous action spaces.
- Learning Theory | Pp. 109-118
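For context, the plain Bellman residual that such methods build on can be written as follows; the paper minimises an improved, less biased ("auxiliared") variant not reproduced here.

```latex
% Plain Bellman residual over observed transitions (s_t, a_t, r_t, s_{t+1})
% with discount factor \gamma.
L(Q) = \sum_{t}\Bigl(Q(s_t,a_t) - r_t - \gamma \max_{a'} Q(s_{t+1},a')\Bigr)^{2}
```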
Structure Learning with Nonparametric Decomposable Models
Anton Schwaighofer; Mathäus Dejori; Volker Tresp; Martin Stetter
We present a novel approach to structure learning for graphical models. By using nonparametric estimates to model clique densities in decomposable models, both discrete and continuous distributions can be handled in a unified framework. Also, consistency of the underlying probabilistic model is guaranteed. Model selection is based on predictive assessment, with efficient algorithms that allow fast greedy forward and backward selection within the class of decomposable models. We show the validity of this structure learning approach on toy data, and on two large sets of gene expression data.
- Advances in Neural Network Learning Methods | Pp. 119-128
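For context, a decomposable model factorizes into clique and separator marginals as below; the paper's contribution is to estimate the clique densities nonparametrically.

```latex
% Factorisation of a decomposable model into clique marginals p(x_C) and
% separator marginals p(x_S).
p(x) = \frac{\prod_{C \in \mathcal{C}} p(x_C)}{\prod_{S \in \mathcal{S}} p(x_S)}
```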
Recurrent Bayesian Reasoning in Probabilistic Neural Networks
Jiří Grim; Jan Hora
Considering the probabilistic approach to neural networks in the framework of statistical pattern recognition, we assume approximation of class-conditional probability distributions by finite mixtures of product components. The mixture components can be interpreted as probabilistic neurons in neurophysiological terms and, in this respect, the fixed probabilistic description becomes conflicting with the well-known short-term dynamic properties of biological neurons. We show that some parameters of probabilistic neural networks (PNNs) can be “released” for the sake of dynamic processes without destroying the statistically correct decision making. In particular, we can iteratively adapt the mixture component weights or modify the input pattern in order to facilitate correct recognition.
- Advances in Neural Network Learning Methods | Pp. 129-138
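The finite mixture of product components referred to above has the standard form below, where the components play the role of probabilistic neurons.

```latex
% Class-conditional distribution approximated by a finite mixture of M product
% components with weights w_m and univariate component densities f_n.
p(x \mid \omega) = \sum_{m=1}^{M} w_m \prod_{n=1}^{N} f_n(x_n \mid m)
```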
Resilient Approximation of Kernel Classifiers
Thorsten Suttorp; Christian Igel
Trained support vector machines (SVMs) have a slow run-time classification speed if the classification problem is noisy and the sample data set is large. Approximating the SVM by a sparser function has been proposed to solve this problem. In this study, different variants of approximation algorithms are empirically compared. It is shown that gradient descent using the improved Rprop algorithm increases the robustness of the method compared to fixed-point iteration. Three different heuristics for selecting the support vectors to be used in the construction of the sparse approximation are proposed. It turns out that none is superior to random selection. The effect of a finishing gradient descent on all parameters of the sparse approximation is studied.
- Advances in Neural Network Learning Methods | Pp. 139-148
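For reference, the sign-based, step-size-adaptive update family the abstract calls "improved Rprop" can be sketched as the generic iRprop− rule below; the objective being minimised (the parameters of the sparse kernel expansion) is not reproduced, and all names are illustrative.

```python
import numpy as np

def irprop_minus(grad_fn, w, n_iter=100, eta_plus=1.2, eta_minus=0.5,
                 step_init=0.01, step_min=1e-6, step_max=1.0):
    """Generic iRprop- optimizer: sign-based gradient descent with per-parameter
    adaptive step sizes. grad_fn(w) must return the gradient of the objective at w."""
    step = np.full_like(w, step_init)
    prev_grad = np.zeros_like(w)
    for _ in range(n_iter):
        grad = grad_fn(w)
        sign_change = grad * prev_grad
        # Grow steps where the gradient sign agrees, shrink where it flips
        step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
        step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
        grad = np.where(sign_change < 0, 0.0, grad)   # iRprop-: skip flipped components
        w = w - np.sign(grad) * step
        prev_grad = grad
    return w
```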
Incremental Learning of Spatio-temporal Patterns with Model Selection
Koichiro Yamauchi; Masayoshi Sato
This paper proposes a biologically inspired incremental learning method for spatio-temporal patterns based on our recently reported “Incremental learning through sleep (ILS)” method. This method alternately repeats two learning phases: awake and sleep. During the awake phase, the system learns new spatio-temporal patterns by rote, whereas in the sleep phase, it rehearses the recorded new memories interleaved with old memories. The rehearsal process is essential for reconstructing the internal representation of the neural network so as not only to memorize the new patterns while keeping old memories but also to reduce redundant hidden units. By using this strategy, the neural network achieves high generalization ability.
The most attractive property of the method is its ability to incrementally learn non-independently distributed samples without catastrophic forgetting, despite using a small amount of resources. We applied our method to an experiment on robot control signals, which vary depending on the context of the current situation.
- Advances in Neural Network Learning Methods | Pp. 149-158
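The rehearsal idea described above can be illustrated with a generic pseudo-rehearsal step; this is a hedged sketch assuming an incremental learner that exposes predict and partial_fit, not the authors' implementation.

```python
import numpy as np

def sleep_phase(model, new_X, new_y, n_pseudo=200, epochs=10):
    """Illustrative rehearsal step: old knowledge is rehearsed via pseudo-patterns
    generated from the current model and interleaved with the newly recorded
    patterns. Input range [-1, 1] and all names are assumptions."""
    pseudo_X = np.random.uniform(-1.0, 1.0, size=(n_pseudo, new_X.shape[1]))
    pseudo_y = model.predict(pseudo_X)          # pseudo-rehearsal targets for old memories
    X = np.vstack([new_X, pseudo_X])
    y = np.concatenate([new_y, pseudo_y])
    for _ in range(epochs):
        model.partial_fit(X, y)                 # interleaved retraining
    return model
```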
Accelerating Kernel Perceptron Learning
Daniel García; Ana González; José R. Dorronsoro
Recently it has been shown that appropriate perceptron training methods, such as the Schlesinger–Kozinec (SK) algorithm, can provide maximal margin hyperplanes with training costs O(N × T), with N denoting the sample size and T the number of training iterations. In this work we shall relate SK training to the classical Rosenblatt rule and show that, when the hyperplane vector is written in dual form, the support vector (SV) coefficients determine their training appearance frequency; in particular, large-coefficient SVs penalize training costs. In this light we shall explore a training acceleration procedure in which large-coefficient and, hence, large-cost SVs are removed from training, which allows for a further stable shrinking of large samples. As we shall see, this results in much faster training while not penalizing test classification.
- Advances in Neural Network Learning Methods | Pp. 159-168
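The dual-form Rosenblatt rule mentioned in the abstract, in which the coefficient of each support vector counts how often it has entered an update, can be sketched as a generic kernel perceptron (not the SK algorithm itself).

```python
import numpy as np

def kernel_perceptron(K, y, n_iter=100):
    """Dual-form Rosenblatt perceptron. K is the precomputed n x n kernel matrix,
    y an array of +/-1 labels. alpha[i] counts how often sample i triggered an
    update, i.e. its 'training appearance frequency'."""
    n = len(y)
    alpha = np.zeros(n)
    for _ in range(n_iter):
        for i in range(n):
            f_i = np.sum(alpha * y * K[:, i])   # decision value in dual form
            if y[i] * f_i <= 0:                 # mistake-driven update
                alpha[i] += 1.0
    return alpha
```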
Analysis and Comparative Study of Source Separation Performances in Feed-Forward and Feed-Back BSSs Based on Propagation Delays in Convolutive Mixture
Akihide Horita; Kenji Nakayama; Akihiro Hirano
Feed-Forward (FF-) and Feed-Back (FB-) structures have been proposed for Blind Source Separation (BSS). FF-BSS systems have some degrees of freedom in the solution space, and signal distortion is likely to occur for convolutive mixtures. The FB-BSS structure, on the other hand, does not cause signal distortion. However, it requires a condition on the propagation delays in the mixing process. In this paper, the source separation performance of the FB-BSS is theoretically analyzed taking the propagation delays into account. Simulations are carried out using white signals and speech signals as the signal sources, and the FF-BSS and FB-BSS systems are compared. Even though FB-BSS can provide good separation performance, there exist some limitations on the locations of the signal sources and the sensors.
- Advances in Neural Network Learning Methods | Pp. 169-179
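The two separation structures compared in the paper are commonly written as below (one common formulation; the paper's exact filter parameterization may differ).

```latex
% FF separation filters the mixture x(n) directly; FB separation subtracts
% filtered, cross-coupled outputs from the mixture (no self-feedback).
\text{FF:}\quad y(n) = \sum_{k=0}^{K} W_k\, x(n-k)
\qquad
\text{FB:}\quad y(n) = x(n) - \sum_{k=0}^{K} C_k\, y(n-k), \quad \operatorname{diag}(C_k) = 0
```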
Learning Highly Non-separable Boolean Functions Using Constructive Feedforward Neural Network
Marek Grochowski; Włodzisław Duch
Learning problems with inherently non-separable Boolean logic are still a challenge that has not been addressed by neural or kernel classifiers. The k-separability concept introduced recently allows for characterization of the complexity of non-separable learning problems. A simple constructive feedforward network that uses a modified form of the error function and window-like functions to localize outputs after projection on a line has been tested on such problems with quite good results. The computational cost of training is low because most nodes and connections are fixed and only the weights of one node are modified at each training step. Several examples of learning Boolean functions and results of classification tests on real-world multiclass datasets are presented.
- Advances in Neural Network Learning Methods | Pp. 180-189
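One common way to realise a window-like output on a linear projection, in the spirit of the abstract, is a difference of two sigmoids; this is a generic construction, not necessarily the paper's exact transfer function.

```latex
% Soft window on the projection w^T x: approximately 1 for a < w^T x < b and
% approximately 0 elsewhere; \sigma is the logistic function and \beta its slope.
G(\mathbf{x};\mathbf{w},a,b) =
  \sigma\bigl(\beta(\mathbf{w}^{\top}\mathbf{x}-a)\bigr)
  - \sigma\bigl(\beta(\mathbf{w}^{\top}\mathbf{x}-b)\bigr)
```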
A Fast Semi-linear Backpropagation Learning Algorithm
Bertha Guijarro-Berdiñas; Oscar Fontenla-Romero; Beatriz Pérez-Sánchez; Paula Fraguela
Ever since the first gradient-based algorithm, the brilliant backpropagation proposed by Rumelhart, a variety of new training algorithms have emerged to improve different aspects of the learning process for feed-forward neural networks. One of these aspects is the learning speed. In this paper, we present a learning algorithm that combines linear least squares with gradient descent. The theoretical basis for the method is given, and its performance is illustrated by its application to several examples, in which it is compared with other learning algorithms on well-known data sets. Results show the proposed algorithm improves the learning speed of the basic backpropagation algorithm by several orders of magnitude, while maintaining good optimization accuracy. Its performance and low computational cost make it an interesting alternative even to second-order methods, especially when dealing with large networks and training sets.
- Advances in Neural Network Learning Methods | Pp. 190-198
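A minimal sketch of the general idea of combining linear least squares (for a linear output layer) with gradient descent (for the hidden layer) is given below; it assumes a one-hidden-layer tanh network and is illustrative only, not the authors' exact algorithm.

```python
import numpy as np

def train_semilinear(X, Y, n_hidden=20, n_epochs=100, lr=0.01, seed=None):
    """Hedged sketch: output weights solved by linear least squares at every
    epoch, hidden weights updated by gradient descent on the squared error."""
    rng = np.random.default_rng(seed)
    Y = Y.reshape(len(X), -1)
    W1 = rng.normal(scale=0.1, size=(X.shape[1], n_hidden))
    for _ in range(n_epochs):
        H = np.tanh(X @ W1)                              # hidden activations
        W2, *_ = np.linalg.lstsq(H, Y, rcond=None)       # output weights by least squares
        E = H @ W2 - Y                                   # output error
        dH = (E @ W2.T) * (1.0 - H ** 2)                 # backprop through tanh
        W1 -= lr * X.T @ dH / len(X)
    H = np.tanh(X @ W1)
    W2, *_ = np.linalg.lstsq(H, Y, rcond=None)
    return W1, W2
```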