Publications catalog - books



Artificial Neural Networks: ICANN 2007: 17th International Conference, Porto, Portugal, September 9-13, 2007, Proceedings, Part I

Joaquim Marques de Sá; Luís A. Alexandre; Włodzisław Duch; Danilo Mandic (eds.)

Conference: 17th International Conference on Artificial Neural Networks (ICANN), Porto, Portugal, September 9-13, 2007

Abstract/Description - provided by the publisher

Not available.

Keywords - provided by the publisher

Artificial Intelligence (incl. Robotics); Computation by Abstract Devices; Pattern Recognition; Information Systems Applications (incl. Internet); Database Management; Neurosciences

Availability

Institution detected | Year of publication | Browse | Download | Request
Not detected | 2007 | SpringerLink | - | -

Information

Resource type:

books

Print ISBN

978-3-540-74689-8

Electronic ISBN

978-3-540-74690-4

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© Springer-Verlag Berlin Heidelberg 2007

Table of contents

Designing Modular Artificial Neural Network Through Evolution

Eva Volna

The purpose of this article is to contribute to the study of the modular structure of neural nets, in particular to describe a method for automatic neural net modularization. Problem-specific modularizations of the representation emerge through the iterations of the evolutionary algorithm directly on the problem. We use a probability vector to construct the bit vectors that represent individuals in the population (in our approach they describe the architecture of a neural network). All individuals in every generation are pseudorandomly generated from the probability vector associated with that generation. The probability vector is updated on the basis of the best individuals in a population, so that successive generations move progressively closer to the best solutions. The process is repeated until the probability vector entries are close to zero or one. The resulting probability vector then determines an optimal solution of the given optimization task.

- Ensemble Learning | Pp. 299-308
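
A minimal sketch of the probability-vector scheme the abstract describes (a PBIL-style search; the vector length, learning rate, convergence threshold and toy fitness function are illustrative assumptions, not values from the paper):

```python
import numpy as np

def evolve_probability_vector(fitness, n_bits=32, pop_size=50,
                              learning_rate=0.1, max_generations=500):
    """Evolve a probability vector over bit vectors that encode a network
    architecture. Each generation is sampled from the vector, which is then
    shifted towards the generation's best individual."""
    p = np.full(n_bits, 0.5)                      # start maximally uncertain
    for _ in range(max_generations):
        pop = (np.random.rand(pop_size, n_bits) < p).astype(int)
        best = pop[np.argmax([fitness(ind) for ind in pop])]
        p += learning_rate * (best - p)           # move towards the best individual
        if np.all((p < 0.05) | (p > 0.95)):       # entries near 0 or 1: converged
            break
    return (p > 0.5).astype(int)                  # the resulting architecture

# toy fitness: count of set bits (a real run would train and score the network)
solution = evolve_probability_vector(lambda ind: ind.sum())
```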

Averaged Conservative Boosting: Introducing a New Method to Build Ensembles of Neural Networks

Joaquín Torres-Sospedra; Carlos Hernández-Espinosa; Mercedes Fernández-Redondo

In this paper, a new algorithm called Averaged Conservative Boosting (ACB) is presented to build ensembles of neural networks. In ACB we mix the improvements that Averaged Boosting (Aveboost) and Conservative Boosting (Conserboost) made to Adaboost. In the algorithm we propose, we apply the conservative equation used in Conserboost along with the averaging procedure used in Aveboost in order to update the sampling distribution used in training the successive networks. We have tested the methods with seven databases from the UCI repository. The results show that the best performance is provided by our method, ACB.

- Ensemble Learning | Pp. 309-318
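
The abstract does not reproduce ACB's exact update equations; the sketch below shows the generic AdaBoost.M1-style resampling-distribution update that such variants modify, with an "averaging" step added purely as an illustration:

```python
import numpy as np

def boosting_distribution_update(dist, miss, epsilon):
    """Generic AdaBoost.M1-style resampling update: patterns the current
    network classifies correctly are down-weighted by beta = eps/(1 - eps),
    concentrating the next network's training set on hard patterns.
    `miss` is a boolean array; `epsilon` is the weighted error (0 < eps < 0.5)."""
    beta = epsilon / (1.0 - epsilon)
    new_dist = dist * np.where(miss, 1.0, beta)
    new_dist /= new_dist.sum()
    # illustrative "averaging" step (an assumption, not ACB's equation):
    # blending with the previous distribution damps abrupt changes
    return 0.5 * (dist + new_dist)
```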

Selection of Decision Stumps in Bagging Ensembles

Gonzalo Martínez-Muñoz; Daniel Hernández-Lobato; Alberto Suárez

This article presents a comprehensive study of different ensemble pruning techniques applied to a bagging ensemble composed of decision stumps. Six different ensemble pruning methods are tested. Four of these are greedy strategies based on first reordering the elements of the ensemble according to some rule that takes into account the complementarity of the predictors with respect to the classification task. Subensembles of increasing size are then constructed by incorporating the ordered classifiers one by one. A halting criterion stops the aggregation process before the complete original ensemble is recovered. The other two approaches are selection techniques that attempt to identify optimal subensembles using either genetic algorithms or semidefinite programming. Experiments performed on 24 benchmark classification tasks show that the selection of a small subset (≈10-15%) of the original pool of stumps generated with bagging can significantly increase the accuracy and reduce the complexity of the ensemble.

- Ensemble Learning | Pp. 319-328
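
A sketch of the greedy reordering idea for binary problems (reduce-error-style ordering is assumed here as the complementarity rule; the paper tests several):

```python
import numpy as np

def greedy_order(preds, y):
    """Reorder ensemble members so that each added classifier maximally
    improves majority-vote accuracy on a selection set.
    `preds` has shape (n_classifiers, n_samples) with 0/1 predictions."""
    remaining = list(range(len(preds)))
    order, votes = [], np.zeros(preds.shape[1])
    while remaining:
        # accuracy of the subensemble extended with each remaining candidate
        accs = [np.mean(((votes + preds[c]) > (len(order) + 1) / 2) == y)
                for c in remaining]
        best = remaining.pop(int(np.argmax(accs)))
        order.append(best)
        votes += preds[best]
    return order  # keep roughly the first 10-15% of this ordering
```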

An Ensemble Dependence Measure

Matthew Prior; Terry Windeatt

Ensemble methods in supervised classification problems have been shown to be superior to single base classifiers of comparable performance, particularly when used in conjunction with multi-layer perceptron base classifiers. An ensemble’s performance is related to the accuracy and diversity of its component classifiers. Intuitively, diversity seems to be a desirable quality for a collection of non-optimal classifiers. Despite much interest being shown in diversity, little progress has been made in linking generalisation performance to any specific diversity metric.

With the agglomeration of even modestly accurate, statistically independent classifiers, it can be shown theoretically that ensemble accuracy can be forced close to optimality. Despite this theoretical benchmark, real-world ensembles fall far short of this performance. The root of this problem is the lack of statistical independence amongst the base classifiers. We investigate a measure of statistical dependence in ensembles and its relationship to the Q diversity metric and pairwise correlation, and also examine voting patterns in real-world ensembles. We show that, whilst Q is relatively insensitive to changes in the ensemble configuration, the proposed measure captures correlations between the base classifiers effectively. The experiments are based on several two-class problems from the UCI data sets and use bootstrapped ensembles of relatively weak multi-layer perceptron base classifiers.

- Ensemble Learning | Pp. 329-338
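
The paper's own dependence measure is not given in the abstract; for reference, the pairwise Q diversity statistic it is compared against can be computed from per-sample correctness vectors as follows:

```python
import numpy as np

def yule_q(correct_i, correct_j):
    """Pairwise Q statistic from boolean per-sample correctness vectors:
    Q = (N11*N00 - N01*N10) / (N11*N00 + N01*N10), in [-1, 1], where
    N11 counts samples both classifiers get right, N00 both wrong, etc.
    Q near 0 suggests statistically independent errors."""
    n11 = np.sum(correct_i & correct_j)
    n00 = np.sum(~correct_i & ~correct_j)
    n10 = np.sum(correct_i & ~correct_j)
    n01 = np.sum(~correct_i & correct_j)
    denom = n11 * n00 + n01 * n10
    return (n11 * n00 - n01 * n10) / denom if denom else 0.0
```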

Boosting Unsupervised Competitive Learning Ensembles

Emilio Corchado; Bruno Baruque; Hujun Yin

Topology-preserving mappings are great tools for data visualization and inspection in large datasets. This research presents a combination of several topology-preserving mapping models with some basic classifier ensemble and boosting techniques in order to increase the stability and, by extension, the classification capabilities of the former. A study and comparison of the performance of some novel and classical ensemble techniques are presented in this paper to test their suitability for both data visualization and classification when combined with topology-preserving models such as the SOM, ViSOM or ML-SIM.

- Ensemble Learning | Pp. 339-348

Using Fuzzy, Neural and Fuzzy-Neural Combination Methods in Ensembles with Different Levels of Diversity

Anne M. P. Canuto; Marjory C. C. Abreu

Classifier combination has been investigated as an alternative for obtaining improvements in design and/or accuracy on difficult pattern recognition problems. In the literature, many combination methods and algorithms have been developed, including methods based on computational intelligence, such as fuzzy sets, neural networks and fuzzy neural networks. This paper presents an evaluation of how the different levels of diversity reached by the choice of the components can affect the accuracy of some combination methods. The aim of this analysis is to investigate whether or not fuzzy, neural and fuzzy-neural combination methods are affected by the choice of the ensemble members.

- Ensemble Learning | Pp. 349-359

SpikeStream: A Fast and Flexible Simulator of Spiking Neural Networks

David Gamez

SpikeStream is a new simulator of biologically structured spiking neural networks that can be used to edit, display and simulate up to 100,000 neurons. This simulator uses a combination of event-based and synchronous simulation and stores most of its information in databases, which makes it easy to run simulations across an arbitrary number of machines. A comprehensive graphical interface is included and SpikeStream can send and receive spikes to and from real and virtual robots across a network. The architecture is highly modular, and so other researchers can use its graphical editing facilities to set up their own simulation networks or apply genetic algorithms to the SpikeStream databases. SpikeStream is available for free download under the terms of the GPL.

- Spiking Neural Networks | Pp. 360-369
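
SpikeStream itself is a full application; the toy sketch below only illustrates the event-based style of simulation it combines with synchronous updating (the data structures and the leak-free threshold neuron are assumptions, not SpikeStream's API):

```python
import heapq

def simulate(spikes, connections, weights, delays, threshold=1.0, t_max=100.0):
    """Toy event-driven simulation: spikes are (time, neuron) events in a
    priority queue; a neuron fires when its summed input crosses threshold.
    No leak or refractory period, unlike a realistic simulator."""
    queue = list(spikes)                         # initial (time, neuron) events
    heapq.heapify(queue)
    potential, fired = {}, []
    while queue:
        t, n = heapq.heappop(queue)
        if t > t_max:
            break
        fired.append((t, n))                     # record the firing
        for m in connections.get(n, []):
            potential[m] = potential.get(m, 0.0) + weights[(n, m)]
            if potential[m] >= threshold:
                potential[m] = 0.0               # reset after firing
                heapq.heappush(queue, (t + delays[(n, m)], m))
    return fired

# a three-neuron chain: neuron 0 fires at t=0 and the spike propagates
# simulate([(0.0, 0)], {0: [1], 1: [2]},
#          {(0, 1): 1.0, (1, 2): 1.0}, {(0, 1): 1.5, (1, 2): 1.5})
# -> [(0.0, 0), (1.5, 1), (3.0, 2)]
```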

Evolutionary Multi-objective Optimization of Spiking Neural Networks

Yaochu Jin; Ruojing Wen; Bernhard Sendhoff

Evolutionary multi-objective optimization of spiking neural networks for solving classification problems is studied in this paper. By means of a Pareto-based multi-objective genetic algorithm, we are able to optimize both the classification performance and the connectivity of spiking neural networks with latency coding. During optimization, the connectivity between two neurons, i.e., whether two neurons are connected, and if so, both the weight and the delay between them, are evolved. We minimize the classification error in percentage or the root mean square error for optimizing performance, and minimize the number of connections or the sum of delays for connectivity, to investigate the influence of the objectives on the performance and connectivity of spiking neural networks. Simulation results on two benchmarks show that Pareto-based evolutionary optimization of spiking neural networks is able to offer deeper insight into the properties of the spiking neural networks and the problem at hand.

- Spiking Neural Networks | Pp. 370-379
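
At the core of any Pareto-based genetic algorithm is the dominance test used for selection; a minimal sketch under simultaneous minimization of two objectives, e.g. classification error and number of connections (the paper's actual selection scheme is not detailed in the abstract):

```python
def pareto_front(solutions):
    """Return the non-dominated subset under minimization of both objectives.
    `solutions` is a list of (obj1, obj2) pairs, e.g.
    (classification error, number of connections)."""
    front = []
    for a in solutions:
        # a is dominated if some b is at least as good in both objectives
        # and differs from a (hence strictly better in at least one)
        dominated = any(b[0] <= a[0] and b[1] <= a[1] and b != a
                        for b in solutions)
        if not dominated:
            front.append(a)
    return front

# pareto_front([(0.10, 40), (0.08, 55), (0.12, 30), (0.11, 45)])
# -> [(0.10, 40), (0.08, 55), (0.12, 30)]   ((0.11, 45) is dominated)
```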

Building a Bridge Between Spiking and Artificial Neural Networks

Florian Kaiser; Fridtjof Feldbusch

Spiking neural networks (SNNs) are a promising approach for the detection of patterns with a temporal component. However, they have more parameters than conventional artificial neural networks (ANNs), which makes them hard to handle. Many error-gradient-based approaches work with a time-to-first-spike code because the explicit calculation of a gradient in SNNs is, due to the nature of spikes, very difficult. In this paper, we present the estimation of such an error gradient based on the gain function of the neurons. This is done by interpreting spike trains as rate codes in a given time interval. In this way a bridge is built between SNNs and ANNs. This bridge allows us to train the SNN with the well-known error back-propagation algorithm for ANNs.

- Spiking Neural Networks | Pp. 380-389
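
A schematic of the rate-code interpretation on which the gradient estimate rests, assuming a sigmoidal gain function (the paper's actual gain function is not specified in the abstract):

```python
import numpy as np

def firing_rate(spike_times, t_start, t_end):
    """Interpret a spike train as a rate code: spike count over the interval."""
    return sum(t_start <= t < t_end for t in spike_times) / (t_end - t_start)

def gain(x):
    """Sigmoid standing in for the neuron's gain (output rate vs. input) curve."""
    return 1.0 / (1.0 + np.exp(-x))

def gain_derivative(x):
    # the differentiable surrogate that makes standard back-propagation
    # applicable to the otherwise non-differentiable spiking dynamics
    g = gain(x)
    return g * (1.0 - g)
```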

Clustering of Nonlinearly Separable Data Using Spiking Neural Networks

Lakshmi Narayana Panuku; C. Chandra Sekhar

In this paper, we study the clustering capabilities of spiking neural networks. We first study the working of spiking neural networks for clustering linearly separable data, and give a biological interpretation of delay selection in spiking neural networks. We show that by varying the firing threshold of spiking neurons during training, nonlinearly separable data such as the ring data can be clustered. When a multi-layer spiking neural network is trained for clustering, subclusters are formed in the hidden layer and these subclusters are combined in the output layer, resulting in hierarchical clustering of the data. A spiking neural network with a hidden layer is generally trained by modifying the weights of the connections to the nodes in the hidden layer and the output layer simultaneously. We propose a two-stage learning method for training a spiking neural network model for clustering: the weights of the connections to the hidden-layer nodes are learnt first, and then the weights of the connections to the output-layer nodes are learnt. We show that the proposed two-stage learning method can cluster complex data such as the interlocking cluster data, without using lateral connections.

- Spiking Neural Networks | Pp. 390-399
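
A rate-based analogue of the two-stage scheme (winner-take-all competitive learning stands in for the spiking model's weight and delay learning; this illustrates only the staging, not the paper's learning rule):

```python
import numpy as np

def competitive_layer(X, n_units, epochs=20, lr=0.1, rng=np.random):
    """Winner-take-all competitive learning: the winning prototype is moved
    towards each presented input."""
    W = X[rng.choice(len(X), n_units, replace=False)].astype(float)
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            w = int(np.argmin(np.linalg.norm(W - x, axis=1)))  # winning unit
            W[w] += lr * (x - W[w])
    return W

def two_stage_cluster(X, n_hidden=20, n_out=2):
    """Stage 1: hidden units form fine-grained subclusters of the data.
    Stage 2: output units cluster the hidden prototypes, merging the
    subclusters into the final (hierarchical) clustering."""
    hidden = competitive_layer(X, n_hidden)    # hidden layer learnt first
    output = competitive_layer(hidden, n_out)  # then the output layer
    return hidden, output
```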