Publications catalog - books



Computational and Ambient Intelligence: 9th International Work-Conference on Artificial Neural Networks, IWANN 2007, San Sebastián, Spain, June 20-22, 2007. Proceedings

Francisco Sandoval; Alberto Prieto; Joan Cabestany; Manuel Graña (eds.)

Conference: 9th International Work-Conference on Artificial Neural Networks (IWANN). San Sebastián, Spain. June 20-22, 2007

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Artificial Intelligence (incl. Robotics); Computation by Abstract Devices; Algorithm Analysis and Problem Complexity; Image Processing and Computer Vision; Pattern Recognition; Computational Biology/Bioinformatics

Availability

Detected institution: not detected. Year of publication: 2007. Browse: SpringerLink.

Information

Resource type:

books

Print ISBN

978-3-540-73006-4

Electronic ISBN

978-3-540-73007-1

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2007

Publication rights information

© Springer-Verlag Berlin Heidelberg 2007

Table of contents

Unified Analysis and Design of ART/SOM Neural Networks and Fuzzy Inference Systems Based on Lattice Theory

Vassilis G. Kaburlasos

Fuzzy interval numbers (FINs, for short) are a unifying data representation that can be analyzed in the context of lattice theory. This work shows how FINs improve the design of popular neural/fuzzy paradigms.

- Improving Models and Learning Procedures | Pp. 80-93
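
As a rough illustration of the lattice-theoretic view (a sketch under my own assumptions, not code from the paper): a FIN can be stored as a nested family of α-cut intervals, and intervals ordered by inclusion form a lattice whose meet is intersection and whose join is the spanning interval.

```python
# Illustrative sketch only (not from the paper): a fuzzy interval number
# (FIN) stored as a finite set of alpha-cuts, each an interval [a, b].
# Intervals ordered by inclusion form a lattice: meet = intersection,
# join = smallest spanning interval.

def interval_meet(p, q):
    """Lattice meet: intersection of two intervals, or None if disjoint."""
    lo, hi = max(p[0], q[0]), min(p[1], q[1])
    return (lo, hi) if lo <= hi else None

def interval_join(p, q):
    """Lattice join: smallest interval containing both."""
    return (min(p[0], q[0]), max(p[1], q[1]))

# A FIN as alpha-cuts: larger alpha -> narrower interval (nested family).
fin_a = {0.0: (1.0, 5.0), 0.5: (2.0, 4.0), 1.0: (3.0, 3.0)}
fin_b = {0.0: (2.0, 6.0), 0.5: (3.0, 5.0), 1.0: (4.0, 4.0)}

# Joining two FINs cut-by-cut yields another nested family of intervals.
fin_join = {alpha: interval_join(fin_a[alpha], fin_b[alpha]) for alpha in fin_a}
print(fin_join)   # {0.0: (1.0, 6.0), 0.5: (2.0, 5.0), 1.0: (3.0, 4.0)}
```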

A Comparison Between ANN Generation and Training Methods and Their Development by Means of Graph Evolution: 2 Sample Problems

Daniel Rivero; Julián Dorado; Juan R. Rabuñal; Marcos Gestal

This paper presents a study in which a new technique for automatically developing Artificial Neural Networks (ANNs) by means of Evolutionary Computation (EC) tools is compared with the traditional evolutionary techniques used for ANN development. The technique used here is based on encoding the networks as graphs and evolving them. For this comparison, two different real-world problems have been solved with the various tools, and the results are presented here. They show that this technique can outperform other ANN development tools.

- Improving Models and Learning Procedures | Pp. 94-101
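
A minimal sketch of the general idea of evolving graph-encoded networks (illustrative only; the toy XOR task, population sizes, and mutation scheme are my assumptions, not the paper's):

```python
# Hedged sketch: encode a small feed-forward ANN as weight matrices (its
# "graph") and evolve a population by mutation, keeping the fittest.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])           # toy XOR targets

def forward(genome, x):
    """Genome = (input->hidden weights, hidden->output weights)."""
    w1, w2 = genome
    h = np.tanh(x @ w1)                        # hidden layer
    return 1 / (1 + np.exp(-(h @ w2)))         # sigmoid output

def fitness(genome):
    pred = np.array([forward(genome, x) for x in X])
    return -np.mean((pred - y) ** 2)           # higher is better

def mutate(genome, scale=0.3):
    return tuple(w + rng.normal(0, scale, w.shape) for w in genome)

# Evolve a population of candidate network graphs.
pop = [(rng.normal(0, 1, (2, 4)), rng.normal(0, 1, (4,))) for _ in range(30)]
for gen in range(200):
    pop.sort(key=fitness, reverse=True)        # selection
    pop = pop[:10] + [mutate(p) for p in pop[:10] for _ in range(2)]
best = max(pop, key=fitness)
print("best MSE:", -fitness(best))
```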

Robust LTS Backpropagation Learning Algorithm

Andrzej Rusiecki

Training data sets containing outliers are often a problem for supervised neural network learning algorithms, which may fail to reach acceptable performance and may build very inaccurate models. In this paper, a new learning algorithm robust to outliers, based on the Least Trimmed Squares (LTS) estimator, is proposed. The LTS learning algorithm is, at the same time, the first robust learning algorithm that takes into account not only gross errors but also leverage data points. Results of simulations of networks trained with the new algorithm are presented and the robustness against outliers is demonstrated.

- Improving Models and Learning Procedures | Pp. 102-109
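
The trimming idea behind the LTS estimator can be sketched as follows (an illustrative linear-model version under my own assumptions, not the authors' backpropagation implementation): at each step only the h smallest squared residuals contribute to the gradient, so outliers never drive the update.

```python
# Minimal Least Trimmed Squares sketch: gradient steps on the h samples
# with the smallest squared residuals; gross outliers are excluded.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=100)
y[:10] += 20.0                       # inject gross outliers

w = np.zeros(3)
h = 80                               # residuals kept per step (h < n)
lr = 0.05
for _ in range(500):
    r = X @ w - y
    keep = np.argsort(r ** 2)[:h]    # indices of the h smallest squared residuals
    w -= lr * 2 * X[keep].T @ r[keep] / h   # gradient on the trimmed subset only
print(w)                             # close to w_true despite the outliers
```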

Heuristic Search Based Exploration in Reinforcement Learning

Ngo Anh Vien; Nguyen Hoang Viet; SeungGwan Lee; TaeChoong Chung

In this paper, we consider reinforcement learning in systems with an unknown environment, where the agent must trade off efficiently between exploration (long-term optimization) and exploitation (short-term optimization). The ε-greedy algorithm is a method using a near-greedy action selection rule: it behaves greedily most of the time, but every once in a while, with small probability ε, selects an action at random instead. Many works have already shown that random exploration drives the agent towards poorly modeled states. Therefore, this study evaluates the role of heuristic-based exploration in reinforcement learning. We propose three methods: neighborhood search based exploration, simulated annealing based exploration, and tabu search based exploration. All techniques follow the same rule, "Explore the most unvisited state". In the simulation, these techniques are evaluated and compared on a discrete reinforcement learning task (robot navigation).

- Improving Models and Learning Procedures | Pp. 110-118
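
A hedged sketch of the shared rule "Explore the most unvisited state" inside a Q-learning loop (the grid world, constants, and visit-count heuristic are illustrative assumptions, not any of the paper's three specific methods):

```python
# On the exploration branch (probability eps), instead of a uniformly
# random action, pick the action whose successor state was visited least.
import numpy as np

rng = np.random.default_rng(2)
n_states, n_actions = 25, 4          # toy 5x5 grid; actions: up/down/left/right
Q = np.zeros((n_states, n_actions))
visits = np.zeros(n_states)          # visit counts driving the heuristic

def step(s, a):                      # toy deterministic grid dynamics
    r, c = divmod(s, 5)
    dr, dc = [(-1, 0), (1, 0), (0, -1), (0, 1)][a]
    r, c = min(max(r + dr, 0), 4), min(max(c + dc, 0), 4)
    s2 = r * 5 + c
    return s2, (1.0 if s2 == 24 else 0.0)    # reward only at the goal corner

def select_action(s, eps=0.1):
    if rng.random() < eps:
        # heuristic exploration: successor with the fewest visits
        succ = [step(s, a)[0] for a in range(n_actions)]
        return int(np.argmin([visits[s2] for s2 in succ]))
    return int(np.argmax(Q[s]))      # greedy (exploitation)

alpha, gamma = 0.5, 0.95
for episode in range(200):
    s = 0
    for t in range(100):
        a = select_action(s)
        s2, r = step(s, a)
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        visits[s2] += 1
        s = s2
        if r > 0:
            break
print("greedy value at start state:", Q[0].max())
```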

Improving Adaptive Boosting with a Relaxed Equation to Update the Sampling Distribution

Joaquín Torres-Sospedra; Carlos Hernández-Espinosa; Mercedes Fernández-Redondo

Adaptive Boosting (Adaboost) is one of the best known methods to build an ensemble of neural networks. In this paper we briefly analyze and mix two of its most important variants, Averaged Boosting (Aveboost) and Conservative Boosting (Conserboost), in order to build a more robust ensemble of neural networks. The mixed method, called Averaged Conservative Boosting (ACB), applies the conservative equation used in Conserboost along with the averaging procedure used in Aveboost in order to update the sampling distribution. We have tested the methods on seven databases from the UCI repository. The results show that ACB is the best performing method.

- Improving Models and Learning Procedures | Pp. 119-126
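
One plausible reading of the combined update, sketched under my own assumptions (the exact equations are in the paper): raise the weights of misclassified samples only (the conservative ingredient), then average the result with the previous distribution (the averaging ingredient).

```python
# Hedged illustration of a conservative-then-averaged update of the
# boosting sampling distribution; not the authors' exact equations.
import numpy as np

def update_distribution(d_prev, miss, beta, t):
    """d_prev: current sampling distribution; miss: boolean mask of samples
    misclassified by the t-th base classifier; beta: update strength."""
    d = d_prev.copy()
    d[miss] *= np.exp(beta)              # conservative: raise errors only
    d /= d.sum()
    d_avg = (t * d_prev + d) / (t + 1)   # soften by averaging with the past
    return d_avg / d_avg.sum()

d = np.full(10, 0.1)                     # uniform start over 10 samples
miss = np.array([False] * 8 + [True] * 2)
d = update_distribution(d, miss, beta=1.0, t=1)
print(d.round(3))                        # misclassified samples gain weight
```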

Automatic Model Selection for Probabilistic PCA

Ezequiel López-Rubio; Juan Miguel Ortiz-de-Lazcano-Lobato; Domingo López-Rodríguez; María del Carmen Vargas-González

The Mixture of Probabilistic Principal Components Analyzers (MPPCA) is a multivariate analysis technique which defines a Gaussian probabilistic model at each unit. The number of units and the number of principal directions in each unit are not learned in the original approach. Variational Bayesian approaches have been proposed for this purpose, which rely on assumptions about the input distribution and/or approximations of certain statistics. Here we present a different way to solve this problem, where cross-validation is used to guide the search for an optimal model. This makes it possible to learn the model architecture without any assumptions other than those of the basic PPCA framework. Experimental results are presented which show the probability density estimation capabilities of the proposal with high-dimensional data.

- Improving Models and Learning Procedures | Pp. 127-134
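
The cross-validation-guided selection can be sketched as below, using scikit-learn's GaussianMixture as a stand-in for MPPCA (an assumption on my part); the point is choosing the architecture by held-out log-likelihood rather than by variational approximations.

```python
# Pick the number of mixture components by cross-validated held-out
# log-likelihood (GaussianMixture substitutes for MPPCA here).
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import KFold

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(-3, 1, (200, 5)), rng.normal(3, 1, (200, 5))])

def cv_loglik(n_components, X, folds=5):
    scores = []
    for train, test in KFold(folds, shuffle=True, random_state=0).split(X):
        gm = GaussianMixture(n_components, random_state=0).fit(X[train])
        scores.append(gm.score(X[test]))   # mean held-out log-likelihood
    return np.mean(scores)

best = max(range(1, 6), key=lambda k: cv_loglik(k, X))
print("selected number of components:", best)   # expect 2 on this data
```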

Probabilistic Aggregation of Classifiers for Incremental Learning

Patricia Trejo; Ricardo Ñanculef; Héctor Allende; Claudio Moraga

We work with a recently proposed algorithm in which an ensemble of base classifiers, combined using weighted majority voting, is used for incremental classification of data. To successfully accommodate novel information without compromising previously acquired knowledge, this algorithm requires an adequate strategy to determine the voting weights. Given an instance to classify, we propose to define each voting weight as the posterior probability of the corresponding hypothesis given the instance. By operating with priors and the likelihood models, the obtained weights can take into account not only the location of the instance in the different class-specific feature spaces but also the coverage of each class given the classifier and the quality of the learned hypothesis. This approach can provide important improvements in the generalization performance of the resulting classifier and in its ability to control the stability/plasticity tradeoff. Experiments are carried out with three real classification problems already introduced to test incremental algorithms.

- Improving Models and Learning Procedures | Pp. 135-143
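
A minimal sketch of posterior-probability voting weights (my illustration; the Gaussian likelihood models and toy classifiers stand in for the learned hypotheses):

```python
# Each base classifier h_t gets weight p(h_t | x) ∝ p(x | h_t) p(h_t),
# then the weights feed a weighted majority vote.
import numpy as np
from scipy.stats import multivariate_normal

def posterior_weights(x, likelihoods, priors):
    """likelihoods: per-classifier density models of the data each one
    was trained on; priors: prior belief in each hypothesis."""
    w = np.array([m.pdf(x) * p for m, p in zip(likelihoods, priors)])
    return w / w.sum()

def weighted_vote(x, classifiers, likelihoods, priors, n_classes):
    w = posterior_weights(x, likelihoods, priors)
    scores = np.zeros(n_classes)
    for weight, clf in zip(w, classifiers):
        scores[clf(x)] += weight          # weighted majority voting
    return int(np.argmax(scores))

# Two toy "classifiers" trained on different regions of feature space.
likelihoods = [multivariate_normal([-2, 0], np.eye(2)),
               multivariate_normal([2, 0], np.eye(2))]
priors = [0.5, 0.5]
classifiers = [lambda x: 0, lambda x: 1]  # stand-ins for learned hypotheses
print(weighted_vote(np.array([2.5, 0.2]), classifiers, likelihoods, priors, 2))
```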

Behaviour-Based Clustering of Neural Networks Applied to Document Enhancement

F. Zamora-Martínez; S. España-Boquera; M. J. Castro-Bleda

This work proposes an agglomerative hierarchical clustering algorithm where the items to be clustered are supervised-learning classifiers. The measure of similarity used to compare classifiers is based on their behaviour. This clustering algorithm has been applied to document enhancement: a set of neural filters is trained with multilayer perceptrons for different types of noise and then clustered into groups to obtain a reduced set of clustered neural filters. In order to automatically determine which clustered filter is the most suitable to clean and enhance a real noisy image, an image classifier is also trained using multilayer perceptrons.

- Improving Models and Learning Procedures | Pp. 144-151
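
The behaviour-based similarity can be sketched as a disagreement rate between classifier outputs on a common data set, fed to standard agglomerative clustering (random stand-ins below, not the paper's neural filters):

```python
# Distance between two classifiers = fraction of samples on which their
# outputs disagree; hierarchical clustering then groups similar ones.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# outputs[i, j] = label produced by classifier i on sample j
rng = np.random.default_rng(4)
base = rng.integers(0, 2, 500)
outputs = np.array([base,                                    # behaves like base
                    np.where(rng.random(500) < 0.05, 1 - base, base),
                    rng.integers(0, 2, 500)])                # unrelated behaviour

n = len(outputs)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = np.mean(outputs[i] != outputs[j])

Z = linkage(squareform(dist), method="average")   # agglomerative clustering
print(fcluster(Z, t=0.25, criterion="distance"))  # e.g. [1 1 2]
```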

Building Automated Negotiation Strategies Enhanced by MLP and GR Neural Networks for Opponent Agent Behaviour Prognosis

Ioanna Roussaki; Ioannis Papaioannou; Miltiades Anagnostou

A quite challenging research field in the artificial intelligence domain is the design and evaluation of agents that handle automated negotiations on behalf of their human or corporate owners. This paper aims to enhance such agents with techniques enabling them to predict their opponents' negotiation behaviour and thus achieve more profitable results and better resource utilization. The proposed learning techniques are based on MLP and GR neural networks (NNs), used mainly to detect at an early stage the cases where agreements are not achievable, thus supporting the agents' decision on whether to withdraw from the specific negotiation thread. The designed NN-assisted negotiation strategies have been evaluated via extensive experiments and are shown to be very useful.

- Improving Models and Learning Procedures | Pp. 152-161
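
A GR (general regression) neural network is essentially Nadaraya-Watson kernel regression, so the prediction step can be sketched as follows (the negotiation scenario, thresholds, and all values are invented for illustration, not taken from the paper):

```python
# Extrapolate the opponent's offer curve with a GRNN and withdraw early
# if no acceptable offer is predicted before the deadline.
import numpy as np

def grnn_predict(x_train, y_train, x_query, sigma=1.0):
    """General regression NN: kernel-weighted average of training targets."""
    w = np.exp(-((x_query[:, None] - x_train[None, :]) ** 2) / (2 * sigma**2))
    return (w @ y_train) / w.sum(axis=1)

rounds = np.arange(1, 8, dtype=float)                    # rounds observed so far
offers = np.array([100, 96, 93, 91, 89.5, 88.5, 88.0])   # opponent concessions
future = np.arange(8, 21, dtype=float)                   # rounds left to deadline
predicted = grnn_predict(rounds, offers, future, sigma=2.0)

reservation = 80.0                                       # buyer accepts <= 80
if predicted.min() > reservation:
    print("predicted no agreement -> withdraw early")
else:
    print("agreement looks reachable -> keep negotiating")
```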

Improving the Performance of the RBF Neural Networks Trained with Imbalanced Samples

R. Alejo; V. García; J. M. Sotoca; R. A. Mollineda; J. S. Sánchez

The class imbalance problem in neural networks has recently been receiving growing attention in machine learning and data mining research. This problem appears when the number of samples in some classes is much smaller than in the others. The small classes can be ignored in the learning process, and their convergence is very slow. This paper studies empirically the class imbalance problem in the context of RBF neural networks trained with the backpropagation algorithm. We propose to introduce a cost function in the training process to compensate for the class imbalance, and a strategy to reduce the impact of the cost function on the data probability distribution.

- Improving Models and Learning Procedures | Pp. 162-169
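
The cost-function idea can be sketched for a generic single-layer classifier (an illustrative simplification of my own; the paper applies it to full RBF networks trained with backpropagation): weight each sample's error inversely to its class frequency so the minority class is not ignored.

```python
# Cost-sensitive gradient training: the error signal of each sample is
# scaled by an inverse-class-frequency cost.
import numpy as np

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 1, (950, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 950 + [1] * 50, dtype=float)  # 19:1 imbalance

# Per-sample costs: inverse class frequency, normalized to mean 1.
freq = np.array([np.mean(y == 0), np.mean(y == 1)])
cost = (1 / freq)[y.astype(int)]
cost /= cost.mean()

w, b = np.zeros(2), 0.0
lr = 0.1
for _ in range(300):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # sigmoid output
    g = cost * (p - y)                   # cost-weighted error signal
    w -= lr * X.T @ g / len(y)
    b -= lr * g.mean()
print("weights:", w, "bias:", b)
```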