Publications catalogue - books

Multiple Classifier Systems: 7th International Workshop, MCS 2007, Prague, Czech Republic, May 23-25, 2007. Proceedings

Michal Haindl ; Josef Kittler ; Fabio Roli (eds.)

Conference: 7th International Workshop on Multiple Classifier Systems (MCS), Prague, Czech Republic, May 23-25, 2007

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Pattern Recognition; Image Processing and Computer Vision; Artificial Intelligence (incl. Robotics); Biometrics; Computation by Abstract Devices

Availability

Year of publication: 2007. Available online via SpringerLink.

Information

Resource type:

books

Print ISBN

978-3-540-72481-0

Electronic ISBN

978-3-540-72523-7

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2007

Publication rights information

© Springer-Verlag Berlin Heidelberg 2007

Table of contents

An Improved Random Subspace Method and Its Application to EEG Signal Classification

Shiliang Sun

Ensemble learning is one of the principal current directions in machine learning research. In this paper, subspace ensembles for classification are explored, which constitute an ensemble classifier system by manipulating different feature subspaces. Starting with the nature of ensemble efficacy, we probe into the microcosmic meaning of ensemble diversity and propose to use region partitioning and region weighting to implement effective subspace ensembles. An improved random subspace method that integrates this mechanism is presented. Individual classifiers possessing eminent performance on a partitioned region, reflected by high neighborhood accuracies, are deemed to contribute largely to this region and are assigned large weights in determining the labels of instances in this area. The robustness and effectiveness of the proposed method are shown empirically with linear support vector machines as base classifiers on the classification problem of EEG signals.
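
As a rough illustration of the neighborhood-weighting idea, the sketch below trains linear SVMs on random feature subspaces (scikit-learn assumed) and weights each member's vote, per test point, by its accuracy on the point's k nearest training neighbours; the paper's actual region partitioning is more elaborate than this.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.neighbors import NearestNeighbors

def neighborhood_weighted_subspace(X_tr, y_tr, X_te, n_members=10,
                                   frac=0.5, k=10, seed=0):
    # Train one linear SVM per random feature subspace.
    rng = np.random.default_rng(seed)
    d = X_tr.shape[1]
    members = []
    for _ in range(n_members):
        idx = rng.choice(d, size=max(1, int(frac * d)), replace=False)
        members.append((idx, LinearSVC().fit(X_tr[:, idx], y_tr)))
    # Weight each member, per test point, by its accuracy on the
    # point's k nearest training neighbours (its local "region").
    nn = NearestNeighbors(n_neighbors=k).fit(X_tr)
    _, nbrs = nn.kneighbors(X_te)
    preds = []
    for i, x in enumerate(X_te):
        votes = {}
        for idx, clf in members:
            local_acc = (clf.predict(X_tr[nbrs[i]][:, idx]) == y_tr[nbrs[i]]).mean()
            label = clf.predict(x[idx].reshape(1, -1))[0]
            votes[label] = votes.get(label, 0.0) + local_acc
        preds.append(max(votes, key=votes.get))
    return np.array(preds)
```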

- Applications | Pp. 103-112

Ensemble Learning Methods for Classifying EEG Signals

Shiliang Sun

Bagging, boosting and random subspace are three popular ensemble learning methods which have already shown their effectiveness in many practical classification problems. For electroencephalogram (EEG) signal classification arising in recent brain-computer interface (BCI) research, however, there are almost no reports investigating their feasibility. This paper systematically evaluates the performance of these three ensemble methods in their new application to EEG signal classification. Experiments are conducted on three BCI subjects with k-nearest neighbor and decision tree as base classifiers. Several valuable conclusions are derived about the feasibility and performance of ensemble methods for classifying EEG signals.
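
All three ensemble methods are available off the shelf in scikit-learn, so a feasibility study of this kind can be reproduced along the following lines; the EEG feature matrix here is a random placeholder, not BCI data.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 32))   # placeholder EEG features (trials x features)
y = rng.integers(0, 2, 200)          # placeholder binary BCI labels

ensembles = {
    "bagging(tree)": BaggingClassifier(DecisionTreeClassifier(), n_estimators=50),
    "boosting(stump)": AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
                                          n_estimators=50),
    # random subspace = sampling feature subsets instead of examples
    "subspace(kNN)": BaggingClassifier(KNeighborsClassifier(), n_estimators=50,
                                       bootstrap=False, max_features=0.5),
}
for name, clf in ensembles.items():
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```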

- Applications | Pp. 113-120

Confidence Based Gating of Colour Features for Face Authentication

Mohammad T. Sadeghi; Samaneh Khoshrou; Josef Kittler

We address the problem of fusing colour information for face authentication. The performance of a face verification system in different colour spaces is experimentally studied first. The verification process is based on the normalised correlation measure within the LDA feature space. The confidence level of the measurement made is then calculated for each colour subspace. Confidence measures are used within the framework of a gating process in order to select a subset of colour space classifiers. The selected classifiers are finally combined using the voting rule for decision making. Using the proposed method, the performance of the verification system is considerably improved as compared to the intensity space. The proposed colour fusion scheme also outperforms the best colour space in different conditions.
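
A minimal sketch of the gating step, assuming the per-colour-space scores and confidences have already been computed; the top-k selection and the acceptance threshold below are illustrative choices, not the paper's exact procedure.

```python
import numpy as np

def normalised_correlation(a, b):
    # Similarity of two LDA feature vectors (claimed identity vs. probe).
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def gated_colour_vote(scores, confidences, threshold, top_k=3):
    # scores[c]: normalised-correlation score in colour space c;
    # confidences[c]: confidence of that measurement.
    # Gate: keep only the top_k most confident colour-space classifiers,
    # then accept or reject the claim by majority vote.
    order = np.argsort(confidences)[::-1][:top_k]
    votes = [scores[c] >= threshold for c in order]
    return sum(votes) > len(votes) / 2   # True = accept the identity claim
```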

- Applications | Pp. 121-130

View-Based Eigenspaces with Mixture of Experts for View-Independent Face Recognition

Reza Ebrahimpour; Ehsanollah Kabir; Mohammad Reza Yousefi

We propose a new model for view-independent face recognition, which lies under the category of multi-view approaches. We use the so-called “mixture of experts”, ME, in which the problem space is divided into several subspaces for the experts, and the outputs of the experts are combined by a gating network. In the proposed model, instead of allowing the ME to partition the face space automatically, the ME is directed to adapt to a particular partitioning corresponding to predetermined views. In this model, view-dependent representations are used to direct the experts towards specific areas of the face space. The experimental results support our claim that directing the mixture of experts to a predetermined partitioning of the face space is a more beneficial way of using the conventional ME for view-independent face recognition.
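
The sketch below illustrates the directed partitioning under assumed inputs (X: face vectors, y: identities, v: predetermined view labels such as left/frontal/right); PCA stands in for the view-based eigenspaces, logistic regressions for the experts and the gate, and hard routing simplifies the paper's soft gating combination.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

class ViewDirectedME:
    """Mixture of experts with a fixed, view-based partition: one
    eigenspace and one expert per predetermined view, plus a gating
    classifier trained to recognise the view itself."""
    def fit(self, X, y, v):
        self.views = np.unique(v)
        self.gate = LogisticRegression(max_iter=1000).fit(X, v)
        self.pcas, self.experts = {}, {}
        for view in self.views:
            m = v == view
            # View-based eigenspace (assumes >= 20 faces per view).
            self.pcas[view] = PCA(n_components=20).fit(X[m])
            self.experts[view] = LogisticRegression(max_iter=1000).fit(
                self.pcas[view].transform(X[m]), y[m])
        return self

    def predict(self, X):
        # Hard routing: send each face to the expert for its predicted
        # view (the paper combines expert outputs softly via the gate).
        view = self.gate.predict(X)
        out = np.empty(len(X), dtype=object)
        for vw in self.views:
            m = view == vw
            if m.any():
                out[m] = self.experts[vw].predict(self.pcas[vw].transform(X[m]))
        return out
```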

- Applications | Pp. 131-140

Fusion of Support Vector Classifiers for Parallel Gabor Methods Applied to Face Verification

Ángel Serrano; Isaac Martín de Diego; Cristina Conde; Enrique Cabello; Li Bai; Linlin Shen

In this paper we present a fusion technique for Support Vector Machine (SVM) scores, obtained after a dimension reduction with Bilateral-projection-based Two-Dimensional Principal Component Analysis (B2DPCA) for Gabor features. We apply this new algorithm to face verification. Several experiments have been performed with the public domain FRAV2D face database (109 subjects). A total of 40 wavelets (5 frequencies and 8 orientations) have been used. Each set of wavelet-convolved images is considered in parallel for the B2DPCA and the SVM classification. A final fusion is performed combining the SVM scores for the 40 wavelets with a raw average. The proposed algorithm outperforms the standard dimension reduction techniques, such as Principal Component Analysis (PCA) and B2DPCA.
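
A sketch of the parallel, channel-wise pipeline with raw-average score fusion; ordinary PCA stands in for B2DPCA (for which no off-the-shelf implementation is assumed here), and a two-class client/impostor verification task is assumed.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def fuse_gabor_svm_scores(features_tr, y_tr, features_te, n_components=50):
    # features_tr / features_te: lists with one (n_samples, n_features)
    # array per Gabor wavelet (40 in the paper: 5 frequencies x 8
    # orientations). Each channel gets its own dimension reduction and
    # SVM; the 40 decision scores are fused by a raw average.
    fused = np.zeros(len(features_te[0]))
    for Xtr, Xte in zip(features_tr, features_te):
        pca = PCA(n_components=n_components).fit(Xtr)   # stand-in for B2DPCA
        clf = SVC(kernel="linear").fit(pca.transform(Xtr), y_tr)
        fused += clf.decision_function(pca.transform(Xte))
    return fused / len(features_tr)   # threshold this average to verify
```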

- Applications | Pp. 141-150

Serial Fusion of Fingerprint and Face Matchers

Gian Luca Marcialis; Fabio Roli

The serial fusion of multiple biometric traits for personal identity verification has been poorly investigated so far. However, this approach exhibits some potential advantages, for example, the possibility of reducing the verification time for genuine users and the required degree of user cooperation. Moreover, the use of multiple biometrics can discourage fraudulent attempts to deceive the system. In this paper, some preliminary results on a novel approach to multi-modal serial fusion are reported, with comparative results against the commonly used parallel fusion of face and fingerprint matchers.
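
In its simplest form the serial idea reduces to a two-threshold cascade, as in the sketch below; the thresholds and the second-stage rule are illustrative, not the operating points reported in the paper.

```python
def serial_fusion(face_score, t_accept, t_reject, fingerprint_matcher):
    """Two-stage serial fusion sketch: the face matcher decides alone
    when its score is confidently high or low; only borderline claims
    pay the cost of acquiring and matching a fingerprint."""
    if face_score >= t_accept:
        return True               # accept without the second biometric
    if face_score < t_reject:
        return False              # reject without the second biometric
    return fingerprint_matcher()  # uncertain region: consult stage two
```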

- Applications | Pp. 151-160

Boosting Lite – Handling Larger Datasets and Slower Base Classifiers

Lawrence O. Hall; Robert E. Banfield; Kevin W. Bowyer; W. Philip Kegelmeyer

In this paper, we examine ensemble algorithms (Boosting Lite and Ivoting) that provide accuracy approximating that of a single classifier, but which require significantly fewer training examples. Such algorithms allow ensemble methods to operate on very large data sets or to use very slow learning algorithms. Boosting Lite is compared with Ivoting, standard boosting, and building a single classifier. Comparisons are done on 11 data sets to which other approaches have been applied. We find that ensembles of support vector machines can attain higher accuracy with less data than ensembles of decision trees. We find that Ivoting may result in higher-accuracy ensembles on some data sets; however, Boosting Lite is generally able to indicate when boosting will increase overall accuracy.
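
A loose sketch of the small-training-bite idea shared by Ivoting and Boosting Lite: each round trains on a small sample biased toward currently misclassified examples, so the base learner never touches the full data set. The sampling scheme below only approximates Breiman-style ivoting.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def small_bites_ensemble(X, y, n_rounds=20, bite=500, seed=0):
    rng = np.random.default_rng(seed)
    models = []
    wrong = np.ones(len(y), dtype=bool)     # initially treat every example as hard
    for _ in range(n_rounds):
        hard = np.flatnonzero(wrong)
        if hard.size == 0:                  # ensemble already correct everywhere
            hard = np.arange(len(y))
        # Half of the bite from misclassified points, half drawn uniformly.
        pick = np.concatenate([rng.choice(hard, size=bite // 2, replace=True),
                               rng.choice(len(y), size=bite // 2, replace=True)])
        models.append(DecisionTreeClassifier().fit(X[pick], y[pick]))
        correct = np.mean([m.predict(X) == y for m in models], axis=0)
        wrong = correct <= 0.5              # where most members disagree with y
    return models
```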

- Boosting | Pp. 161-170

Information Theoretic Combination of Classifiers with Application to AdaBoost

Julien Meynet; Jean-Philippe Thiran

Combining several classifiers has proved to be an efficient machine learning technique. We propose a new measure of the goodness of an ensemble of classifiers in an information theoretic framework. It measures a trade-off between diversity and individual classifier accuracy. This technique can be directly used for the selection of an ensemble from a pool of classifiers. We also propose a variant of AdaBoost for directly training the classifiers by taking this new information theoretic measure into account.
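
The paper's exact information-theoretic functional is not reproduced here; as a stand-in, the sketch below scores an ensemble by mean individual accuracy plus a vote-entropy diversity term, which captures the same kind of accuracy/diversity trade-off for selecting an ensemble from a pool.

```python
import numpy as np

def ensemble_goodness(preds, y, lam=1.0):
    # preds: (n_classifiers, n_samples) predicted labels; y: true labels.
    acc = (preds == y).mean()               # mean individual accuracy
    diversity = 0.0
    for col in preds.T:                     # per-sample entropy of the votes
        _, counts = np.unique(col, return_counts=True)
        p = counts / counts.sum()
        diversity += -(p * np.log2(p)).sum()
    diversity /= preds.shape[1]
    return acc + lam * diversity            # trade-off to maximise
```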

- Boosting | Pp. 171-179

Interactive Boosting for Image Classification

Yijuan Lu; Qi Tian; Thomas S. Huang

Traditional boosting methods like AdaBoost boost a weak learning algorithm by updating the sample weights (the relative importance of the training samples) iteratively. In this paper, we propose to integrate feature re-weighting into the boosting scheme, which not only weights the samples but also weights the feature elements iteratively. To avoid the overfitting problem caused by feature re-weighting on a small training data set, we also incorporate relevance feedback into boosting and propose an interactive boosting scheme called i.Boosting. It merges AdaBoost, feature re-weighting and relevance feedback into one framework and exploits the favorable attributes of these methods. In this paper, i.Boosting is implemented using Adaptive Discriminant Analysis (ADA) as the base classifier. It not only enhances but also combines a set of ADA classifiers into a more powerful one. A feature re-weighting method for ADA is also proposed and integrated into i.Boosting. Extensive experiments on UCI benchmark data sets, three facial image data sets and COREL color image data sets show the superior performance of i.Boosting over AdaBoost and other state-of-the-art projection-based classifiers.
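
A skeleton of the interactive loop, with logistic regression standing in for the ADA base classifiers and a hypothetical ask_user callback supplying relevance-feedback labels; the feature re-weighting rule is a simple illustrative choice, not the paper's.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def interactive_boosting(X, y, ask_user, n_rounds=5):
    # Binary labels assumed. Each round: AdaBoost-style sample
    # re-weighting, then feature re-weighting, then relevance feedback.
    w = np.ones(len(y)) / len(y)              # sample weights
    f = np.ones(X.shape[1])                   # feature weights
    models, alphas = [], []
    for _ in range(n_rounds):
        clf = LogisticRegression(max_iter=1000).fit(X * f, y, sample_weight=w)
        pred = clf.predict(X * f)
        err = w[pred != y].sum()
        if err >= 0.5 or err == 0:
            break
        alpha = 0.5 * np.log((1 - err) / err) # AdaBoost member weight
        w *= np.exp(alpha * (pred != y)); w /= w.sum()
        models.append(clf); alphas.append(alpha)
        # Feature re-weighting: emphasise dimensions separating the classes.
        f = np.abs(X[y == y[0]].mean(0) - X[y != y[0]].mean(0)) + 1e-6
        f /= f.mean()
        # Relevance feedback: fold in freshly user-labelled examples.
        X_new, y_new = ask_user()
        X = np.vstack([X, X_new]); y = np.concatenate([y, y_new])
        w = np.concatenate([w, np.full(len(y_new), w.mean())]); w /= w.sum()
    return models, alphas
```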

- Boosting | Pp. 180-189

Group-Induced Vector Spaces

Manuele Bicego; Elżbieta Pękalska; Robert P. W. Duin

The strength of classifier combination lies either in a suitable averaging over multiple experts/sources or in a beneficial integration of complementary approaches. In this paper we focus on the latter and propose the use of group-induced vector spaces (GIVSs) as a way to combine unsupervised learning with classification. In such an integrated approach, the data is first modelled by a number of groups, found by a clustering procedure. Then, a proximity function is used to measure the (dis)similarity of an object to each group. A GIVS is defined by mapping an object to a vector of proximity scores, computed with respect to the given groups. In this study, we focus on a particular aspect of using GIVSs in the mode of building a trained combiner, namely the integration of generative and discriminative methods. First, in the generative step, we model the groups by simple generative models, building the GIVS space. The classification problem is then mapped into the resulting vector space, where a discriminative classifier is trained. Our experiments show that the integrated approach leads to comparable or better results than the generative methods in the original feature spaces.
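
The GIVS construction is easy to sketch with a Gaussian mixture as the generative grouping step and posterior responsibilities as the proximity function; both are one choice among those the paper allows, not its prescribed models.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

def givs_pipeline(X_tr, y_tr, X_te, n_groups=5):
    # Generative step: find groups with an (unsupervised) mixture model.
    gmm = GaussianMixture(n_components=n_groups, random_state=0).fit(X_tr)
    # Group-induced embedding: each object becomes its vector of
    # proximity scores (here, posterior responsibilities) to the groups.
    Z_tr, Z_te = gmm.predict_proba(X_tr), gmm.predict_proba(X_te)
    # Discriminative step: train a classifier in the GIVS.
    clf = SVC(kernel="rbf").fit(Z_tr, y_tr)
    return clf.predict(Z_te)
```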

- Cluster and Graph Ensembles | Pp. 190-199