Publications catalog - books



Multiple Classifier Systems: 7th International Workshop, MCS 2007, Prague, Czech Republic, May 23-25, 2007. Proceedings

Michal Haindl; Josef Kittler; Fabio Roli (eds.)

In conference: 7th International Workshop on Multiple Classifier Systems (MCS). Prague, Czech Republic. May 23, 2007 - May 25, 2007

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Pattern Recognition; Image Processing and Computer Vision; Artificial Intelligence (incl. Robotics); Biometrics; Computation by Abstract Devices

Availability
Institution detected: not detected. Year of publication: 2007. Browse: SpringerLink.

Information

Resource type:

books

Print ISBN

978-3-540-72481-0

Electronic ISBN

978-3-540-72523-7

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2007

Publication rights information

© Springer-Verlag Berlin Heidelberg 2007

Table of contents

On the Diversity-Performance Relationship for Majority Voting in Classifier Ensembles

Yun-Sheng Chung; D. Frank Hsu; Chuan Yi Tang

Combining multiple classifier systems (MCS) has been shown to outperform single classifier systems. It has been demonstrated that the improvement in ensemble performance depends on either the diversity among or the performance of the individual systems. A variety of diversity measures and ensemble methods have been proposed and studied. It remains a challenging problem to estimate the ensemble performance in terms of the performance of and the diversity among the individual systems. In this paper, we establish upper and lower bounds for the performance of the ensemble under majority voting in terms of P̄ (the average performance of the individual systems) and D̄ (the average entropy diversity measure among the individual systems). These bounds are shown to be tight using the concept of a performance distribution pattern (PDP) for the input set. Moreover, we show that when P̄ is large enough, the ensemble performance resulting from a maximum (information-theoretic) entropy PDP is an increasing function of the diversity measure D̄. Five experiments using data sets from various application domains are conducted to demonstrate the complexity, richness, and diverseness of the problem of estimating ensemble performance.

- Majority Voting | Pp. 407-420
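
As an informal illustration of the quantities in the abstract above, the following minimal Python sketch computes the average individual accuracy P̄, the majority-vote ensemble accuracy, and an entropy-style diversity measure on toy correct/incorrect outcomes. The diversity formula is the entropy measure of Kuncheva and Whitaker, used here as a stand-in assumption; it is not necessarily the paper's exact D̄, and the data are random.

```python
# Toy illustration (assumed setup, random data): P-bar, majority-vote
# accuracy, and an entropy-style diversity measure over L classifiers.
import numpy as np

rng = np.random.default_rng(0)
L, N = 5, 200                        # L classifiers, N input samples
# correct[i, j] is True if classifier i labels sample j correctly
correct = rng.random((L, N)) < 0.7   # individual accuracy around 0.7

p_bar = correct.mean()               # average individual performance P-bar
votes = correct.sum(axis=0)          # correct votes received by each sample
p_maj = (votes > L / 2).mean()       # ensemble accuracy under majority voting

# Entropy diversity (Kuncheva & Whitaker): 0 = identical members,
# 1 = maximal disagreement; a stand-in for the paper's D-bar.
d_bar = np.mean(np.minimum(votes, L - votes) / (L - np.ceil(L / 2)))

print(f"P-bar={p_bar:.3f}  majority vote={p_maj:.3f}  D-bar={d_bar:.3f}")
```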

Hierarchical Behavior Knowledge Space

Hubert Cecotti; Abdel Belaïd

In this paper we present a new method for fusing classifier outputs for problems with more than two classes. We extend the well-known Behavior Knowledge Space method with a hierarchical treatment of its cells, and propose adding the ranking information of the classifier outputs to the combination. Each cell can be divided into new sub-spaces in order to resolve ambiguities. We show that this method allows better control of rejection, without requiring new classifiers for the empty cells. The method has been applied to a set of classifiers created by bagging and successfully tested on handwritten character recognition, yielding more detailed results. The technique has been compared with other classical combination methods.

- Majority Voting | Pp. 421-430
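
The basic (non-hierarchical) Behavior Knowledge Space table that this paper builds on can be sketched as a lookup from the tuple of classifier decisions to class counts. The tie handling and rejection rule below are simplifying assumptions; the paper instead descends into ranked sub-spaces to resolve ambiguous cells.

```python
# Sketch of a basic BKS table (simplified assumptions, not the authors'
# exact hierarchical design): a cell is the tuple of classifier decisions.
from collections import Counter, defaultdict

def bks_fit(decisions, labels):
    """decisions[j]: tuple of per-classifier labels for training sample j."""
    table = defaultdict(Counter)
    for key, y in zip(decisions, labels):
        table[key][y] += 1
    return table

def bks_predict(table, key, reject=None):
    cell = table.get(key)
    if not cell:                      # empty cell: reject instead of guessing
        return reject
    (top, n1), *rest = cell.most_common(2)
    if rest and rest[0][1] == n1:     # ambiguous cell; the paper would descend
        return reject                 # into a ranked sub-space at this point
    return top

table = bks_fit([(0, 0), (0, 1), (0, 1), (1, 1)], [0, 1, 0, 1])
print(bks_predict(table, (0, 0)), bks_predict(table, (0, 1)))  # 0, then reject
```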

A New Dynamic Ensemble Selection Method for Numeral Recognition

Albert Hung-Ren Ko; Robert Sabourin; Alceu de Souza Britto

An ensemble of classifiers (EoC) has been shown to be effective in improving classifier performance. In optimizing an EoC, ensemble selection is one of the most important issues. Dynamic schemes use different ensembles for different samples, but dynamic selection has been shown not to outperform static selection. We propose a dynamic selection scheme that exploits the oracle concept. The results suggest that the proposed scheme performs clearly better than selection based on the popular majority voting error.

- Ensemble Learning | Pp. 431-439
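
The oracle concept can be illustrated with a KNORA-style selection rule: for a test sample, keep only the classifiers that are correct on its validation-set neighbours. This is a hedged sketch in the spirit of the idea, not necessarily the authors' exact scheme; the names and the fallback rule are assumptions.

```python
# KNORA-style dynamic selection sketch (assumed names and fallback rule):
# keep classifiers that are correct on the test point's validation neighbours.
import numpy as np

def dynamic_select(preds_val, y_val, X_val, x, k=5):
    """preds_val[i, j]: classifier i's label for validation sample j."""
    nbrs = np.argsort(np.linalg.norm(X_val - x, axis=1))[:k]
    ok = (preds_val[:, nbrs] == y_val[nbrs]).all(axis=1)  # correct on all nbrs
    if not ok.any():                  # relax: correct on at least one neighbour
        ok = (preds_val[:, nbrs] == y_val[nbrs]).any(axis=1)
    return np.flatnonzero(ok)         # indices of the selected ensemble members

rng = np.random.default_rng(1)
X_val = rng.normal(size=(30, 2))
y_val = rng.integers(0, 2, 30)
preds_val = rng.integers(0, 2, (4, 30))           # four toy classifiers
print(dynamic_select(preds_val, y_val, X_val, np.zeros(2)))
```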

Ensemble Learning in Linearly Combined Classifiers Via Negative Correlation

Manuela Zanda; Gavin Brown; Giorgio Fumera; Fabio Roli

We investigate the theoretical links between a regression ensemble and a linearly combined classification ensemble. First, we reformulate the Tumer & Ghosh model for linear combiners in a regression context; we then exploit this new formulation to generalise the concept of the “Ambiguity decomposition”, previously defined only for regression tasks, to classification problems. Finally, we propose a new algorithm, based on the Negative Correlation Learning framework, which applies to ensembles of linearly combined classifiers.

- Ensemble Learning | Pp. 440-449
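
For the regression side of the argument, the standard Negative Correlation Learning penalty for a simple-average ensemble can be written down directly: for member i it reduces to -(f_i - f̄)², because the member deviations from the ensemble mean sum to zero. The sketch below shows the per-member NCL loss; the λ value and toy numbers are assumptions, and the paper's generalisation to classification is not shown.

```python
# Per-member NCL loss for a simple-average regression ensemble (lambda and
# the toy numbers are assumptions).
import numpy as np

def ncl_losses(preds, y, lam=0.5):
    """preds[i]: outputs of member i; returns the per-member NCL losses."""
    f_bar = preds.mean(axis=0)
    # p_i = (f_i - f_bar) * sum_{j != i} (f_j - f_bar) = -(f_i - f_bar)^2,
    # since the deviations from the ensemble mean sum to zero.
    penalty = -(preds - f_bar) ** 2
    return 0.5 * (preds - y) ** 2 + lam * penalty

preds = np.array([[1.0, 2.0], [1.5, 1.0], [0.5, 3.0]])  # 3 members, 2 samples
print(ncl_losses(preds, np.array([1.0, 2.0])).sum(axis=1))
```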

Naïve Bayes Ensembles with a Random Oracle

Juan J. Rodríguez; Ludmila I. Kuncheva

Ensemble methods with random oracles were proposed recently (Kuncheva and Rodríguez, 2007). A random-oracle classifier consists of a pair of classifiers and a fixed, randomly created oracle that selects between them. Ensembles of random-oracle decision trees were shown to fare better than standard ensembles. In that study, the oracle for a given tree was a random hyperplane at the root of the tree. The present work considers two types of random oracles (linear and spherical) in ensembles of Naive Bayes classifiers (NB). Our experiments show that ensembles based solely upon the spherical oracle (and no other ensemble heuristic) outperform Bagging, Wagging, Random Subspaces, AdaBoost.M1, MultiBoost and Decorate. Moreover, these ensemble methods are better with either of the two random oracles than their standard versions without the oracles.

- Ensemble Learning | Pp. 450-458
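
A single linear-random-oracle member can be sketched as follows: a random hyperplane (here, the perpendicular bisector of two randomly chosen training points) splits the training set, one Naive Bayes model is fitted per side, and the oracle routes each test point to the model on its side. The hyperplane construction and the use of scikit-learn's GaussianNB are illustrative assumptions; a full ensemble would train many such members independently.

```python
# One linear random-oracle Naive Bayes member (illustrative assumptions:
# bisecting-hyperplane oracle, scikit-learn GaussianNB).
import numpy as np
from sklearn.naive_bayes import GaussianNB

class LinearOracleNB:
    def fit(self, X, y, rng):
        a, b = X[rng.choice(len(X), 2, replace=False)]    # two random points
        self.w, self.b = a - b, (a - b) @ (a + b) / 2     # bisecting hyperplane
        side = X @ self.w > self.b                        # a and b land on
        self.models = [GaussianNB().fit(X[~side], y[~side]),  # opposite sides,
                       GaussianNB().fit(X[side], y[side])]    # so neither is empty
        return self

    def predict(self, X):
        side = (X @ self.w > self.b).astype(int)    # oracle routes each point
        preds = np.array([m.predict(X) for m in self.models])
        return preds[side, np.arange(len(X))]

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 3))
y = (X[:, 0] > 0).astype(int)
print(LinearOracleNB().fit(X, y, rng).predict(X[:5]))
```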

An Experimental Study on Rotation Forest Ensembles

Ludmila I. Kuncheva; Juan J. Rodríguez

Rotation Forest is a recently proposed method for building classifier ensembles using independently trained decision trees. It was found to be more accurate than bagging, AdaBoost and Random Forest ensembles across a collection of benchmark data sets. This paper carries out a lesion study on Rotation Forest in order to find out which of the parameters and randomization heuristics are responsible for its good performance. Contrary to common intuition, the features extracted through PCA gave the best results, compared to those extracted through non-parametric discriminant analysis (NDA) or random projections. The only ensemble method whose accuracy was statistically indistinguishable from that of Rotation Forest was LogitBoost, although it gave slightly inferior results on 20 out of the 32 benchmark data sets. It appears that the main factor in the success of Rotation Forest is that the transformation matrix employed to calculate the (linear) extracted features is sparse.

- Ensemble Learning | Pp. 459-468
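
The rotation step that the study identifies as central can be sketched as follows: the feature set is split into disjoint subsets, PCA (here via SVD) is run on a random sample for each subset, and the loadings are placed on the diagonal blocks of an otherwise-zero rotation matrix, which is exactly what makes the matrix sparse. The class- and bootstrap-sampling details of the full Rotation Forest algorithm are simplified in this assumed sketch.

```python
# One Rotation Forest member, sketched (sampling details simplified): PCA per
# feature subset yields a sparse block-diagonal rotation R; a tree is trained
# on the rotated data X @ R.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def rotation_tree(X, y, n_subsets=3, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    R = np.zeros((d, d))                         # sparse rotation matrix
    for subset in np.array_split(rng.permutation(d), n_subsets):
        rows = rng.choice(n, size=max(len(subset) + 1, n // 2), replace=False)
        sample = X[np.ix_(rows, subset)]
        sample = sample - sample.mean(axis=0)    # centre before PCA
        _, _, vt = np.linalg.svd(sample, full_matrices=False)
        R[np.ix_(subset, subset)] = vt.T         # PCA loadings as one block
    tree = DecisionTreeClassifier(random_state=seed).fit(X @ R, y)
    return tree, R

X = np.random.default_rng(3).normal(size=(60, 6))
y = (X.sum(axis=1) > 0).astype(int)
tree, R = rotation_tree(X, y)
print(tree.score(X @ R, y))
```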

Cooperative Coevolutionary Ensemble Learning

Daniel Kanevskiy; Konstantin Vorontsov

A new optimization technique for classifier fusion is proposed: Cooperative Coevolutionary Ensemble Learning (CCEL). It is based on a specific multi-population evolutionary algorithm, cooperative coevolution. It can be used as a wrapper over any kind of weak algorithm, learning procedure and fusion function, for both classification and regression tasks. Experiments on real-world problems from the UCI repository show that CCEL has fairly high generalization performance and generates ensembles of much smaller size than boosting, bagging and the random subspace method.

- Ensemble Learning | Pp. 469-478
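
The cooperative-coevolution loop itself can be sketched generically: one subpopulation per ensemble slot, with each individual scored by the fitness of the full team it forms with the current best members of the other subpopulations. Everything below (the fitness, the mutation rule, the truncation selection) is a schematic assumption, not the authors' exact CCEL.

```python
# Generic cooperative-coevolution loop (schematic assumptions throughout):
# one subpopulation per ensemble slot; each individual is scored by the
# fitness of the team it forms with the other subpopulations' current bests.
import random

def ccel(subpops, fitness, generations=50, mutate=None):
    best = [max(p, key=lambda m: fitness([m])) for p in subpops]
    for _ in range(generations):
        for i, pop in enumerate(subpops):
            score = lambda m: fitness(best[:i] + [m] + best[i + 1:])
            pop.sort(key=score, reverse=True)
            best[i] = pop[0]
            if mutate:                    # refresh the weaker half of the pool
                pop[len(pop) // 2:] = [mutate(m) for m in pop[:len(pop) // 2]]
    return best

random.seed(4)
pops = [[random.uniform(0, 1) for _ in range(8)] for _ in range(3)]
team = ccel(pops, fitness=lambda t: -abs(sum(t) - 1.0),   # toy objective:
            mutate=lambda m: m + random.gauss(0, 0.1))    # members sum to 1
print(round(sum(team), 3))
```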

Robust Inference in Bayesian Networks with Application to Gene Expression Temporal Data

Omer Berkman; Nathan Intrator

We are concerned with the problem of inferring genetic regulatory networks from a collection of temporal observations. This is often done by estimating a Dynamic Bayesian Network (DBN) from time series of gene expression data. However, when applied to the limited quantities of experimental data that today's technologies can provide, this estimation is not robust. We introduce a weak-learner methodology for this inference problem, study a few methods to produce Weak Dynamic Bayesian Networks (WDBNs), and demonstrate its advantages on simulated gene expression data.

- Ensemble Learning | Pp. 479-489

An Ensemble Approach for Incremental Learning in Nonstationary Environments

Michael D. Muhlbaier; Robi Polikar

We describe an ensemble-of-classifiers based algorithm for incremental learning in nonstationary environments. In this formulation, we assume that the learner is presented with a series of training datasets, each drawn from a different snapshot of a distribution that is drifting at an unknown rate. Furthermore, we assume that the algorithm must learn the new environment incrementally, that is, without access to previously seen data. Instead of a time window over incoming instances, or age-based forgetting as used by most ensemble-based nonstationary learning algorithms, a strategic weighting mechanism tracks the classifiers' performances over the drifting environments to determine appropriate voting weights. Specifically, the proposed approach generates a single classifier for each dataset that becomes available and then combines them through dynamically modified weighted majority voting, where the voting weights themselves are computed as weighted averages of the classifiers' individual performances over all environments. We describe the implementation details of this approach, as well as initial results on simulated non-stationary environments.

- Ensemble Learning | Pp. 490-500
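
The weighting idea can be illustrated with a small sketch: each classifier's voting weight is a recency-discounted average of its accuracies over the environments seen so far, mapped to log-odds, and predictions are then combined by weighted majority voting. The geometric discounting and the log-odds mapping are assumptions standing in for the paper's exact rule.

```python
# Sketch of performance-weighted voting over drifting environments (the
# discounting and log-odds mapping are assumptions, not the authors' rule).
import numpy as np

def voting_weights(acc_history, decay=0.5):
    """acc_history[i, t]: accuracy of classifier i on the t-th dataset."""
    T = acc_history.shape[1]
    w_time = decay ** np.arange(T - 1, -1, -1)    # newer environments count more
    perf = acc_history @ w_time / w_time.sum()    # weighted average performance
    return np.log(perf / (1 - perf + 1e-12))      # log-odds voting weights

def weighted_vote(preds, weights, n_classes):
    """preds[i, j]: label assigned by classifier i to sample j."""
    scores = np.zeros((n_classes, preds.shape[1]))
    for i, w in enumerate(weights):
        np.add.at(scores, (preds[i], np.arange(preds.shape[1])), w)
    return scores.argmax(axis=0)

acc = np.array([[0.9, 0.6, 0.5], [0.5, 0.7, 0.9]])  # 2 classifiers, 3 environments
w = voting_weights(acc)
print(weighted_vote(np.array([[0, 0, 1], [1, 0, 1]]), w, n_classes=2))
```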

Multiple Classifier Systems in Remote Sensing: From Basics to Recent Developments

Jon Atli Benediktsson; Jocelyn Chanussot; Mathieu Fauvel

In this paper, we present some recent developments in Multiple Classifier Systems (MCS) for remote sensing applications. Some standard MCS methods (boosting, bagging, consensus theory and random forests) are briefly described and applied to multisource data (satellite multispectral images, elevation, slope and aspect data) for land-cover classification. In the second part, special attention is given to Support Vector Machine (SVM) based algorithms. In particular, the fusion of two classifiers using both spectral and spatial information is discussed in the framework of hyperspectral remote sensing for the classification of urban areas. In all cases, MCS provide a significant improvement in classification accuracy. To address new challenges in the analysis of remote sensing data, MCS provide invaluable tools for handling situations of ever-growing complexity. Examples include the extraction of multiple features from one data set, the use of multi-sensor data, and the complementary use of several algorithms in a decision fusion scheme.

- Invited Papers | Pp. 501-512