Publication catalog - books



Learning Classifier Systems: International Workshops, IWLCS 2003-2005, Revised Selected Papers

Tim Kovacs ; Xavier Llorà ; Keiki Takadama ; Pier Luca Lanzi ; Wolfgang Stolzmann ; Stewart W. Wilson (eds.)

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Artificial Intelligence (incl. Robotics); Computation by Abstract Devices; Mathematical Logic and Formal Languages; Data Mining and Knowledge Discovery

Availability

Institution detected: not detected
Year of publication: 2007
Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-71230-5

Electronic ISBN

978-3-540-71231-2

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© Springer-Verlag Berlin Heidelberg 2007

Table of contents

An Experimental Comparison Between ATNoSFERES and ACS

Samuel Landau; Olivier Sigaud; Sébastien Picault; Pierre Gérard

After two papers comparing ATNoSFERES with XCSM, a Learning Classifier System with internal states, this paper is devoted to a comparison between ATNoSFERES and ACS (an Anticipatory Learning Classifier System). As previously, we focus on the way perceptual aliasing problems encountered in non-Markov environments are solved by both kinds of systems. We briefly present ATNoSFERES, a framework based on an indirect-encoding Genetic Algorithm which builds finite-state automata controllers, and we compare it with ACS through two benchmark experiments. The comparison shows that the difference in performance between the two systems depends on the environment. This raises a discussion of the adequacy of both adaptive mechanisms to particular subclasses of non-Markov problems. Furthermore, since ACS converges much faster than ATNoSFERES, we discuss the need to introduce learning capabilities into our model. In conclusion, we advocate the need for more experimental comparisons between different systems in the Learning Classifier System community.

II - Mechanisms | Pp. 144-160

The Class Imbalance Problem in UCS Classifier System: A Preliminary Study

Albert Orriols-Puig; Ester Bernadó-Mansilla

The class imbalance problem has recently been said to hinder the performance of learning systems. In fact, many of them are designed with the assumption of well-balanced datasets, but this assumption does not always hold, since it is very common to find a higher presence of one of the classes in real classification problems. The aim of this paper is to make a preliminary analysis of the effect of the class imbalance problem in learning classifier systems. In particular, we focus our study on UCS, a supervised version of the XCS classifier system. We analyze UCS's behavior on unbalanced datasets and find that UCS is sensitive to high levels of class imbalance. We study strategies for dealing with class imbalances, acting either at the sampling level or at the classifier system's level.

II - Mechanisms | Pp. 161-180
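One sampling-level strategy of the kind the abstract mentions is random oversampling of the minority class. The function below is an illustrative sketch (not the paper's method): it duplicates minority-class examples at random until all class counts match the largest one.

```python
import random

def oversample_minority(dataset, label_of=lambda ex: ex[-1]):
    """Random oversampling: duplicate examples of under-represented
    classes until every class has as many examples as the largest one.
    `label_of` extracts the class label from an example."""
    by_class = {}
    for ex in dataset:
        by_class.setdefault(label_of(ex), []).append(ex)
    target = max(len(exs) for exs in by_class.values())
    balanced = []
    for exs in by_class.values():
        balanced.extend(exs)
        # random.choices samples with replacement, so any shortfall
        # is filled by duplicating existing examples of that class.
        balanced.extend(random.choices(exs, k=target - len(exs)))
    return balanced
```

Class-level strategies would instead adjust the learner itself, e.g. reward or fitness weighting, rather than the data distribution.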

Three Methods for Covering Missing Input Data in XCS

John H. Holmes; Jennifer A. Sager; Warren B. Bilker

Missing data pose a potential threat to learning and classification in that they may compromise the ability of a system to develop robust, generalized models of the environment in which it operates. This investigation reports on the effects of three approaches to covering these data using an XCS-style learning classifier system. Using fabricated datasets representing a wide range of missing-value densities, it was found that missing data do not appear to adversely affect LCS learning and classification performance. Furthermore, the three types of missing-value covering were found to exhibit similar efficiency on these data with respect to convergence rate and classification accuracy.

II - Mechanisms | Pp. 181-192
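In XCS-style systems, covering creates a new rule condition that matches the current input, with some attributes generalized to the don't-care symbol `#`. A plausible policy for missing inputs, shown below as a sketch (the paper compares three such methods; this is not necessarily any one of them), is to cover every missing attribute with `#`:

```python
import random

def cover(state, dont_care_prob=0.33):
    """Build a ternary condition that matches `state`.
    Attributes whose value is missing (None) are always covered with
    the don't-care symbol '#'; known attributes become '#' with
    probability `dont_care_prob`, as in standard XCS covering."""
    condition = []
    for attr in state:
        if attr is None or random.random() < dont_care_prob:
            condition.append('#')
        else:
            condition.append(attr)
    return ''.join(condition)
```

Treating a missing value as a wildcard lets the new classifier participate in match sets regardless of what the true value would have been.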

A Hyper-Heuristic Framework with XCS: Learning to Create Novel Problem-Solving Algorithms Constructed from Simpler Algorithmic Ingredients

Javier G. Marín-Blázquez; Sonia Schulenburg

Evolutionary Algorithms (EAs) have been reported as successful by academics in a wide variety of commercial areas. From a commercial point of view, however, the story appears somewhat different; the number of success stories does not appear to be as significant as those reported by academics. For instance, Heuristic Algorithms (HAs) are still very widely used to tackle practical problems in operations research, where many of these are NP-hard and exhaustive search is often computationally intractable. There are a number of logical reasons why practitioners do not embark so easily on the development and use of EAs. This work is concerned with a new line of research based on bringing together these two approaches in a harmonious way. The idea is that, instead of using an EA to learn the solution to a specific problem, one uses it to find an algorithm, i.e. a solution process that can solve a large family of problems well by making use of familiar heuristics. The work of the authors is novel in two ways: within the current body of Learning Classifier Systems (LCS) research, it represents the first attempt to tackle the Bin Packing problem (BP), a different kind of problem from those already studied by the LCS community, and within the Hyper-Heuristics (HH) framework, it represents the first use of LCS as the learning paradigm. Several reward schemas based on single- or multiple-step environments are studied in this paper, tested on a very large set of BP problems and a small set of widely used HAs. Results of the approach are encouraging, outperforming all HAs used individually as well as previously reported work by the authors, both non-LCS (a GA-based approach used for the same BP set of problems) and LCS (using single-step environments). Several findings and future lines of work are also outlined.

III - New Directions | Pp. 193-218
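A hyper-heuristic chooses among simple heuristic "ingredients" rather than solving the problem directly. First-fit is a classic example of such an ingredient for bin packing; the sketch below is purely illustrative and is not claimed to be one of the specific HAs used in the paper:

```python
def first_fit(items, capacity):
    """First-fit bin packing heuristic: place each item into the
    first open bin with enough remaining room, opening a new bin
    only when no existing bin fits."""
    bins = []
    for item in items:
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:
            # No open bin could hold the item: start a new bin.
            bins.append([item])
    return bins
```

A hyper-heuristic in the spirit of the paper would learn, state by state, which of several such heuristics to apply next, with the LCS supplying the learning mechanism.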

Adaptive Value Function Approximations in Classifier Systems

Lashon B. Booker

Previous work [1] introduced a new approach to value function approximation in classifier systems called hyperplane coding. Hyperplane coding is a closely related variation of tile coding [13] in which classifier rule conditions fill the role of tiles, and there are few restrictions on the way those “tiles” are organized. Experiments with hyperplane coding have shown that, given a relatively small population of random classifiers, it computes much better approximations than more conventional classifier system methods in which individual rules compute approximations independently. The obvious next step in this line of research is to use the approximation resources available in a random population as a starting point for a more refined approach to approximation that re-allocates resources adaptively to gain greater precision in those regions of the input space where it is needed. This paper shows how to compute such an adaptive function approximation.

III - New Directions | Pp. 219-238
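The core idea of tile-coding-style approximation with rule conditions as tiles can be sketched as follows. This is a minimal illustration of the general technique, not the paper's exact formulation: each ternary condition that matches the input contributes its weight to the estimate, and an LMS-style update spreads the error across the matching "tiles".

```python
def matches(condition, state):
    """A ternary condition matches when every non-'#' bit agrees
    with the corresponding bit of the state string."""
    return all(c == '#' or c == s for c, s in zip(condition, state))

def predict(population, weights, state):
    """Estimate the value of `state` as the sum of the weights of
    all matching rule conditions (each condition acts as a tile)."""
    return sum(w for cond, w in zip(population, weights)
               if matches(cond, state))

def update(population, weights, state, target, alpha=0.1):
    """LMS-style update: divide the step alpha * error equally
    among the tiles that matched, so the prediction moves toward
    the target by alpha * error per update."""
    active = [i for i, cond in enumerate(population)
              if matches(cond, state)]
    if not active:
        return
    error = target - predict(population, weights, state)
    for i in active:
        weights[i] += alpha * error / len(active)
```

Because every matching tile shares the update, a small random population can still carve the input space finely wherever many conditions overlap.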

Three Architectures for Continuous Action

Stewart W. Wilson

Three classifier system architectures are introduced that permit the systems to have continuous (non-discrete) actions. One is based on interpolation, the second on an actor-critic paradigm, and the third on treating the action as a continuous variable homogeneous with the input. While the last architecture appears most interesting and promising, all three offer potential directions toward continuous action, a goal that classifier systems have hardly addressed.

III - New Directions | Pp. 239-257

A Formal Relationship Between Ant Colony Optimizers and Classifier Systems

Lawrence Davis

This paper demonstrates that, with minimal modifications, a classifier system can be made to operate just as an ant colony optimizer does for solving the TSP.  The paper contains a formal proof of this result, and suggests that the modifications made could be useful in other ways.  In effect, the paper suggests that there may be a new role for classifier systems in optimization, inspired by the way that ant colony optimizers have achieved their successes. The paper also suggests that there may be ways suggested by classifier systems to modify ant colony optimization practice.

III - New Directions | Pp. 258-269

Detection of Sentinel Predictor-Class Associations with XCS: A Sensitivity Analysis

John H. Holmes

Knowledge discovery in databases has traditionally focused on classification, prediction, or, in the case of unsupervised discovery, clusters and class definitions. Equally important, however, is the discovery of individual predictors along a continuum of some metric that indicates their association with a particular class. This paper reports on the use of an XCS learning classifier system for this purpose. In experiments conducted over a range of odds ratios for a fixed variable in synthetic data, it was found that XCS discovers rules that contain metric information about specific predictors and their relationship to a given class.

III - New Directions | Pp. 270-281

Data Mining in Learning Classifier Systems: Comparing XCS with GAssist

Jaume Bacardit; Martin V. Butz

This paper compares the performance of the Pittsburgh-style system GAssist with the Michigan-style system XCS on several data mining problems. Our analysis shows that both systems are suitable for data mining but have different advantages and disadvantages. The study not only reveals important differences between the two systems but also suggests several structural properties of the underlying datasets.

IV - Application-Oriented Research and Tools | Pp. 282-290

Improving the Performance of a Pittsburgh Learning Classifier System Using a Default Rule

Jaume Bacardit; David E. Goldberg; Martin V. Butz

An interesting feature of encoding the individuals of a Pittsburgh learning classifier system as a decision list is the emergent generation of a default rule. However, the performance of the system is strongly tied to the learning system choosing the correct class for this default rule. In this paper we experimentally study the use of an explicit (static) default rule. We first test simple policies for setting the class of the default rule, such as the majority/minority class of the problem. Next, we introduce some techniques to automatically determine the most suitable class.

IV - Application-Oriented Research and Tools | Pp. 291-307
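The decision-list mechanism with an explicit default rule can be sketched in a few lines. This is an illustrative reading of the setup, with hypothetical rule representations (conditions as predicates), not the paper's actual encoding; the `majority_default` helper shows the simplest of the policies the abstract mentions.

```python
from collections import Counter

def classify(decision_list, default_class, example):
    """Decision-list classification: return the class of the first
    rule whose condition matches the example; if no rule matches,
    fall back to the explicit default rule."""
    for condition, cls in decision_list:
        if condition(example):
            return cls
    return default_class

def majority_default(labels):
    """Simple static policy: set the default rule's class to the
    majority class of the training labels."""
    return Counter(labels).most_common(1)[0][0]
```

Fixing the default class in advance frees the evolved rules from having to describe the default class at all, which is one motivation for studying static policies.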