Publication catalogue - books

Learning Classifier Systems: International Workshops, IWLCS 2003-2005, Revised Selected Papers

Tim Kovacs; Xavier Llorà; Keiki Takadama; Pier Luca Lanzi; Wolfgang Stolzmann; Stewart W. Wilson (eds.)

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Artificial Intelligence (incl. Robotics); Computation by Abstract Devices; Mathematical Logic and Formal Languages; Data Mining and Knowledge Discovery

Availability

Detected institution: none | Publication year: 2007 | Browse: SpringerLink

Information

Resource type:

books

Printed ISBN

978-3-540-71230-5

Electronic ISBN

978-3-540-71231-2

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2007

Publication rights information

© Springer-Verlag Berlin Heidelberg 2007

Table of contents

Analyzing Parameter Sensitivity and Classifier Representations for Real-Valued XCS

Atsushi Wada; Keiki Takadama; Katsunori Shimohara; Osamu Katai

To evaluate a real-valued XCS classifier system, we present a validation of Wilson’s XCSR from two points of view: (1) the sensitivity of performance to the parameters specific to real-valued XCS, and (2) the design of the classifier representation together with classifier operators such as mutation and covering. We also propose a model with an alternative classifier representation (LU-Model) and compare it with a model using the original XCSR classifier representation (CS-Model). We performed comprehensive experiments applying a six-dimensional real-valued multiplexer problem to both models. These revealed the following: (1) there is a critical threshold on the covering operation parameter, which must be considered when setting parameters to avoid serious decreases in performance; and (2) the LU-Model reaches the same performance level with a smaller classifier population than the CS-Model, which shows the advantage of an alternative classifier representation for real-valued XCS.

I - Knowledge Representation | Pp. 1-16
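
The abstract above compares two interval-based classifier representations for real-valued conditions. As a rough sketch (not the authors' code), the fragment below contrasts center-spread alleles of the kind used in Wilson's XCSR and the CS-Model with lower-upper-bound alleles of the kind used in the LU-Model; the covering parameter s0 and the helper names are illustrative assumptions.

```python
import random

# Center-spread (CS) alleles, as in Wilson's XCSR: (c, s) matches c - s <= x <= c + s.
def cs_matches(condition, x):
    return all(c - s <= xi <= c + s for (c, s), xi in zip(condition, x))

def cs_cover(x, s0=0.5):
    # Covering centres each allele on the input and draws the spread from [0, s0).
    return [(xi, random.uniform(0.0, s0)) for xi in x]

# Lower-upper (LU) alleles: (l, u) matches l <= x <= u.
def lu_matches(condition, x):
    return all(l <= xi <= u for (l, u), xi in zip(condition, x))

def lu_cover(x, s0=0.5):
    return [(xi - random.uniform(0.0, s0), xi + random.uniform(0.0, s0)) for xi in x]

x = [0.2, 0.7, 0.9]
print(cs_matches(cs_cover(x), x), lu_matches(lu_cover(x), x))  # True True
```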

Use of Learning Classifier System for Inferring Natural Language Grammar

Olgierd Unold; Grzegorz Dabrowski

This paper deals with the use of a learning classifier system (LCS) for inferring a nontrivial natural language grammar. Through repeated analysis, the LCS infers the grammar of a given natural language from an exemplary set of correct and incorrect sentences. A genetic algorithm, applied periodically, strengthens the LCS’s operation. A context-free grammar is used to describe the language structure.

I - Knowledge Representation | Pp. 17-24
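
A grammar-induction LCS of the kind described above has to score candidate grammars against the correct and incorrect example sentences. The sketch below is only an illustration of that evaluation step, assuming candidate grammars in Chomsky normal form and a plain CYK membership test; it is not Unold and Dabrowski's implementation.

```python
# A candidate grammar in Chomsky normal form: unit rules A -> 'a' and binary rules A -> B C.
def cyk_accepts(grammar, start, sentence):
    n = len(sentence)
    if n == 0:
        return False
    # table[i][j-1] = non-terminals deriving the substring of length j starting at i
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, w in enumerate(sentence):
        table[i][0] = {a for a, rhs in grammar if rhs == (w,)}
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            for split in range(1, span):
                left = table[i][split - 1]
                right = table[i + split][span - split - 1]
                for a, rhs in grammar:
                    if len(rhs) == 2 and rhs[0] in left and rhs[1] in right:
                        table[i][span - 1].add(a)
    return start in table[0][n - 1]

def fitness(grammar, positives, negatives, start='S'):
    # Reward accepting correct sentences and rejecting incorrect ones.
    ok = sum(cyk_accepts(grammar, start, s) for s in positives)
    ok += sum(not cyk_accepts(grammar, start, s) for s in negatives)
    return ok / (len(positives) + len(negatives))

toy = [('S', ('A', 'B')), ('A', ('a',)), ('B', ('b',))]
print(fitness(toy, positives=[['a', 'b']], negatives=[['b', 'a'], ['a']]))  # 1.0
```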

Backpropagation in Accuracy-Based Neural Learning Classifier Systems

Toby O’Hara; Larry Bull

Learning Classifier Systems traditionally use a binary string rule representation, with wildcards added to allow generalizations over the problem encoding. We have previously presented a neural network-based representation to aid their use in complex problem domains. Here, each rule’s condition and action are represented by a small neural network, evolved through the actions of the genetic algorithm. In this paper we present results from using backpropagation in conjunction with the genetic algorithm within XCS. After describing the minor changes required to the standard production system functionality, we report performance from using backpropagation in a number of ways within the system. Results from both continuous and discrete action tasks indicate that incorporating the local learning algorithm significantly decreases the time taken to reach optimal behaviour.

I - Knowledge Representation | Pp. 25-39
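
The abstract above combines an evolved neural rule representation with local backpropagation learning. The sketch below shows the general idea under stated assumptions: one classifier is a tiny MLP whose outputs are read as a match degree and an action signal, and a gradient step nudges its weights between applications of the genetic algorithm. Network size, targets, and learning rate are illustrative, not O'Hara and Bull's exact scheme.

```python
import numpy as np

class NeuralRule:
    """One classifier: a tiny MLP mapping the input to (match degree, action signal)."""
    def __init__(self, n_in, n_hidden=3, rng=np.random.default_rng(0)):
        self.W1 = rng.normal(scale=0.5, size=(n_hidden, n_in))
        self.W2 = rng.normal(scale=0.5, size=(2, n_hidden))

    def forward(self, x):
        self.x = x
        self.h = np.tanh(self.W1 @ x)
        self.out = 1.0 / (1.0 + np.exp(-(self.W2 @ self.h)))   # [match, action]
        return self.out

    def backprop(self, target, lr=0.1):
        # One gradient step on squared error; this local search complements
        # the genetic algorithm that evolves the weights between episodes.
        d_out = (self.out - target) * self.out * (1.0 - self.out)
        d_h = (self.W2.T @ d_out) * (1.0 - self.h ** 2)
        self.W2 -= lr * np.outer(d_out, self.h)
        self.W1 -= lr * np.outer(d_h, self.x)

rule = NeuralRule(n_in=4)
x = np.array([0.1, 0.9, 0.3, 0.7])
for _ in range(200):
    rule.forward(x)
    rule.backprop(np.array([1.0, 0.0]))    # e.g. "match, take action 0"
print(np.round(rule.forward(x), 2))
```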

Binary Rule Encoding Schemes: A Study Using the Compact Classifier System

Xavier Llorà; Kumara Sastry; David E. Goldberg

Several binary rule encoding schemes have been proposed for Pittsburgh-style classifier systems. This paper focuses on analyzing how maximally general and accurate rules, regardless of the encoding, can be evolved in such classifier systems. The theoretical analysis of maximally general and accurate rules under two different binary rule encoding schemes yielded results with clear implications for the scalability of any genetics-based machine learning system that uses the studied encodings. Such results are clearly relevant, since one of the binary representations studied is widely used in Pittsburgh-style classifier systems and shows an exponential shrinkage of the available useful rules as the problem size increases. To perform this analysis we use a simple, bare-bones Pittsburgh classifier system—the compact classifier system (CCS)—based on estimation of distribution algorithms.

I - Knowledge Representation | Pp. 40-58
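
For readers unfamiliar with binary rule encodings in Pittsburgh-style systems, the sketch below illustrates one widely used scheme (GABIL-like: one bit per attribute value, with an all-ones group meaning "don't care"). It only shows how bit patterns map to conditions and why some patterns are useless; it does not reproduce the paper's analysis of maximally general and accurate rules.

```python
from itertools import product

# GABIL-style encoding for binary attributes: two bits per attribute.
# '11' = don't care, '10' matches value 0, '01' matches value 1, '00' matches nothing.
def matches(cond_bits, example):
    for i, v in enumerate(example):
        if cond_bits[2 * i + v] != '1':
            return False
    return True

rule = '11' + '01' + '10'            # attr0: any, attr1 == 1, attr2 == 0
print(matches(rule, (0, 1, 0)), matches(rule, (0, 0, 0)))   # True False

# Conditions containing a '00' group can never match, so even in this toy view
# the usable fraction of bit strings already shrinks as (3/4) ** n_attrs.
n_attrs = 3
usable = sum(
    all(bits[i] + bits[i + 1] != '00' for i in range(0, 2 * n_attrs, 2))
    for bits in product('01', repeat=2 * n_attrs)
)
print(usable, 'of', 2 ** (2 * n_attrs))   # 27 of 64
```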

Bloat Control and Generalization Pressure Using the Minimum Description Length Principle for a Pittsburgh Approach Learning Classifier System

Jaume Bacardit; Josep Maria Garrell

Bloat control and generalization pressure are very important issues in the design of Pittsburgh-approach Learning Classifier Systems (LCS), in order to achieve simple and accurate solutions in a reasonable time. In this paper we propose a method to achieve these objectives based on the minimum description length (MDL) principle. This principle is a metric that combines, in a smart way, the accuracy and the complexity of a theory (rule set, instance set, etc.). We present an extensive comparison with our previous generalization pressure method across several domains, using two knowledge representations. The tests show that the MDL-based size control method is a good and robust choice.

II - Mechanisms | Pp. 59-79
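
The MDL metric referred to above trades off theory complexity against the cost of the exceptions the theory fails to cover. A minimal sketch of such a fitness for a Pittsburgh rule set follows; the length measures, the weight w, and the helper classify() are assumptions for illustration, not Bacardit and Garrell's exact formulation.

```python
import math

def classify(ruleset, x, default=0):
    # First matching rule wins; fall back to a default class.
    for rule in ruleset:
        if all(x[a] == v for a, v in rule['conditions'].items()):
            return rule['action']
    return default

def mdl_fitness(ruleset, examples, w=0.1):
    """Lower is better: weighted theory length plus the cost of the exceptions."""
    theory_len = sum(len(rule['conditions']) for rule in ruleset)
    errors = sum(1 for x, y in examples if classify(ruleset, x) != y)
    exceptions_len = errors * math.log2(len(examples) + 1)
    return w * theory_len + exceptions_len

rules = [{'conditions': {'a': 1, 'b': 0}, 'action': 1}]
data = [({'a': 1, 'b': 0}, 1), ({'a': 0, 'b': 0}, 0), ({'a': 1, 'b': 1}, 1)]
print(mdl_fitness(rules, data))   # 0.1 * 2 + 1 * log2(4) = 2.2
```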

Post-processing Clustering to Decrease Variability in XCS Induced Rulesets

Flavio Baronti; Alessandro Passaro; Antonina Starita

XCS is a stochastic algorithm, so it is not guaranteed to produce the same results when run repeatedly on the same input. When interpretability matters, obtaining a single, stable result is important. We propose an algorithm that applies clustering to merge the rules produced by many XCS runs. Such an algorithm needs a measure of distance between rules; we therefore suggest a general definition of such a measure. Finally, we evaluate the results obtained on two well-known data sets with respect to performance and stability. We find that stability is improved, while performance is slightly impaired.

II - Mechanisms | Pp. 80-92
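
The post-processing step described above needs a distance between rules before clustering can merge them. The sketch below assumes interval-based conditions and uses a simple per-dimension bound difference (infinite if the actions differ) with a greedy merge; the actual distance definition and clustering procedure in the paper may differ.

```python
def rule_distance(r1, r2):
    """Average disagreement of the interval bounds; incompatible if actions differ."""
    if r1['action'] != r2['action']:
        return float('inf')
    d = sum(abs(l1 - l2) + abs(u1 - u2)
            for (l1, u1), (l2, u2) in zip(r1['cond'], r2['cond']))
    return d / len(r1['cond'])

def merge_rules(rules, threshold=0.1):
    """Greedy single pass: average near-identical rules into one representative."""
    merged = []
    for r in rules:
        for m in merged:
            if rule_distance(r, m) <= threshold:
                m['cond'] = [((l1 + l2) / 2, (u1 + u2) / 2)
                             for (l1, u1), (l2, u2) in zip(m['cond'], r['cond'])]
                break
        else:
            merged.append({'cond': list(r['cond']), 'action': r['action']})
    return merged

runs = [
    {'cond': [(0.0, 0.5), (0.4, 1.0)], 'action': 1},      # from run 1
    {'cond': [(0.02, 0.48), (0.41, 0.99)], 'action': 1},  # from run 2
    {'cond': [(0.6, 1.0), (0.0, 0.3)], 'action': 0},      # from run 2
]
print(len(merge_rules(runs)))   # 2
```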

LCSE: Learning Classifier System Ensemble for Incremental Medical Instances

Yang Gao; Joshua Zhexue Huang; Hongqiang Rong; Da-qian Gu

This paper proposes LCSE, a learning classifier system ensemble, which extends the classical learning classifier system (LCS). The classical LCS includes two major modules: a genetic algorithm module used to facilitate rule discovery, and a reinforcement learning module used to adjust the strength of the corresponding rules after receiving rewards from the environment. In LCSE we build a two-level ensemble architecture to enhance the generalization of LCS. At the first level, new instances are bootstrapped and sent to several LCSs for classification. At the second level, a simple plurality-vote method combines the classification results of the individual LCSs into a final decision. Experiments on benchmark medical data sets from the UCI repository show that LCSE has better performance on incremental medical data learning and better generalization ability than a single LCS and other supervised learning methods.

II - Mechanisms | Pp. 93-103
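
The two-level architecture described above can be pictured as online bagging over several base LCSs followed by a plurality vote. The sketch below captures just that wiring under stated assumptions: the base learner is a trivial stub standing in for a full LCS, and the per-instance random forwarding approximates the bootstrapping the abstract mentions.

```python
import random
from collections import Counter

class MajorityStub:
    """Trivial stand-in for a real LCS so the sketch runs end to end."""
    def __init__(self):
        self.counts = Counter()
    def learn(self, x, y):
        self.counts[y] += 1
    def predict(self, x):
        return self.counts.most_common(1)[0][0] if self.counts else 0

class LCSEnsemble:
    def __init__(self, learners, sample_rate=0.75, seed=0):
        self.learners = learners           # each needs learn(x, y) and predict(x)
        self.sample_rate = sample_rate
        self.rng = random.Random(seed)

    def learn(self, x, y):
        # Level 1: online bootstrapping -- each instance reaches a random subset
        # of the base LCSs, so their rule populations diverge.
        for lcs in self.learners:
            if self.rng.random() < self.sample_rate:
                lcs.learn(x, y)

    def predict(self, x):
        # Level 2: plurality vote over the individual classifications.
        votes = Counter(lcs.predict(x) for lcs in self.learners)
        return votes.most_common(1)[0][0]

ens = LCSEnsemble([MajorityStub() for _ in range(5)])
for x, y in [([1, 0], 1), ([1, 1], 1), ([0, 0], 0)]:
    ens.learn(x, y)
print(ens.predict([1, 0]))
```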

Effect of Pure Error-Based Fitness in XCS

Martin V. Butz; David E. Goldberg; Pier Luca Lanzi

The accuracy-based fitness approach in XCS is one of the most significant changes compared with the original learning classifier systems. Nonetheless, neither the scaled accuracy function nor the importance of the relative fitness approach has been investigated in detail. The recent introduction of tournament selection to XCS has been shown to make the system more independent of parameter settings and scaling issues. The question remains whether relative accuracy itself is actually necessary in XCS, or whether the evolutionary process could be based directly on error. This study investigates the advantages and disadvantages of pure error-based fitness versus relative accuracy-based fitness in XCS.

II - Mechanisms | Pp. 104-114
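
For reference, the sketch below contrasts the standard XCS accuracy-based fitness (scaled accuracy made relative within the action set) with one simple way of basing fitness directly on prediction error. The constants follow common XCS defaults; the specific error-based formula is an illustrative assumption, since the abstract does not give the exact variant studied.

```python
def scaled_accuracy(error, eps0=10.0, alpha=0.1, nu=5.0):
    """Standard XCS accuracy: 1 inside the error tolerance, steeply scaled outside."""
    return 1.0 if error < eps0 else alpha * (error / eps0) ** -nu

def relative_accuracy_shares(action_set):
    """Each classifier's fitness share: its accuracy relative to the action set."""
    kappas = [scaled_accuracy(cl['error']) * cl['numerosity'] for cl in action_set]
    total = sum(kappas)
    return [k / total for k in kappas]

def error_based_shares(action_set):
    """One illustrative alternative: rank classifiers directly by inverse error."""
    return [1.0 / (1.0 + cl['error']) for cl in action_set]

action_set = [{'error': 2.0, 'numerosity': 1}, {'error': 50.0, 'numerosity': 1}]
print(relative_accuracy_shares(action_set))   # accurate rule takes nearly all the share
print(error_based_shares(action_set))         # a much gentler preference
```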

A Fuzzy System to Control Exploration Rate in XCS

Ali Hamzeh; Adel Rahmani

The exploration/exploitation dilemma is one of the most challenging issues in reinforcement learning, as well as in learning classifier systems such as XCS. In this paper, an intelligent method is proposed to control the exploration rate in XCS in order to improve its long-term performance. This method, called the Intelligent Exploration Method (IEM), is applied to some benchmark problems to show the advantages of an adaptive exploration rate for XCS.

II - Mechanisms | Pp. 115-127
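
As a rough picture of an adaptive exploration rate, the sketch below uses a tiny fuzzy controller that maps a recent reward rate to an exploration probability. The membership functions and rule base are invented for illustration; the abstract does not describe IEM's actual rules.

```python
def triangular(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def exploration_rate(reward_rate):
    """LOW performance -> explore a lot, HIGH performance -> mostly exploit.
    Defuzzified as a weighted average of the rule outputs."""
    low = triangular(reward_rate, -0.5, 0.0, 0.5)
    medium = triangular(reward_rate, 0.0, 0.5, 1.0)
    high = triangular(reward_rate, 0.5, 1.0, 1.5)
    rules = [(low, 0.9), (medium, 0.5), (high, 0.1)]   # (firing strength, rate)
    total = sum(strength for strength, _ in rules) or 1.0
    return sum(strength * rate for strength, rate in rules) / total

for perf in (0.1, 0.5, 0.95):
    print(perf, round(exploration_rate(perf), 2))
```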

Counter Example for Q-Bucket-Brigade Under Prediction Problem

Atsushi Wada; Keiki Takadama; Katsunori Shimohara

Aiming to clarify the convergence and divergence conditions for Learning Classifier Systems (LCS), this paper explores: (1) an extreme condition under which the reinforcement process of an LCS diverges; and (2) methods to avoid such divergence. Building on our previous work, which showed the equivalence between the LCS reinforcement process and reinforcement learning (RL) with function approximation (FA), we present a counterexample for an LCS with the Q-bucket-brigade based on the 11-state star problem, a counterexample originally proposed to show the divergence of Q-learning with linear FA. Furthermore, empirical results from applying the counterexample to the LCS verified the results predicted by the theory: (1) an LCS with the Q-bucket-brigade diverges under prediction problems, where the action selection policy is fixed; and (2) such divergence is avoided by using the implicit bucket brigade or by applying the residual gradient algorithm to the Q-bucket-brigade.

II - Mechanisms | Pp. 128-143
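
The two update rules contrasted above can be written for linear function approximation, which the authors relate to the LCS reinforcement component. The sketch below shows the ordinary bootstrapped update (the form that can diverge on Baird-style counterexamples such as the star problem) and the residual-gradient variant; step size, discount, and features are illustrative, not the paper's experimental setup.

```python
import numpy as np

def q_bucket_brigade_update(w, phi, phi_next, reward, beta=0.1, gamma=0.9):
    """Ordinary bootstrapped update: only the current state's prediction is adjusted,
    treating the successor prediction as a constant."""
    td_error = reward + gamma * (w @ phi_next) - (w @ phi)
    return w + beta * td_error * phi

def residual_gradient_update(w, phi, phi_next, reward, beta=0.1, gamma=0.9):
    """Residual-gradient variant: descend the squared Bellman residual, so the
    successor's features enter the gradient too; slower but stable."""
    td_error = reward + gamma * (w @ phi_next) - (w @ phi)
    return w + beta * td_error * (phi - gamma * phi_next)

# Prediction setting (fixed policy): phi / phi_next are the features of the
# current and next states under that policy.
w = np.zeros(3)
phi, phi_next = np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])
for _ in range(200):
    w = residual_gradient_update(w, phi, phi_next, reward=1.0)
print(np.round(w, 2))
```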