Publications catalog - books

Multiple Classifier Systems: 7th International Workshop, MCS 2007, Prague, Czech Republic, May 23-25, 2007. Proceedings

Michal Haindl ; Josef Kittler ; Fabio Roli (eds.)

In conference: 7th International Workshop on Multiple Classifier Systems (MCS). Prague, Czech Republic. May 23, 2007 - May 25, 2007

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Pattern Recognition; Image Processing and Computer Vision; Artificial Intelligence (incl. Robotics); Biometrics; Computation by Abstract Devices

Availability
Detected institution  Year of publication  Browse
Not detected          2007                 SpringerLink

Information

Resource type:

books

Printed ISBN

978-3-540-72481-0

Electronic ISBN

978-3-540-72523-7

Publisher

Springer Nature

Country of publication

United Kingdom

Date of publication

Publication rights information

© Springer-Verlag Berlin Heidelberg 2007

Table of contents

Combining Pattern Recognition Modalities at the Sensor Level Via Kernel Fusion

Vadim Mottl; Alexander Tatarchuk; Valentina Sulimova; Olga Krasotkina; Oleg Seredin

The problem of multi-modal pattern recognition is considered under the assumption that the kernel-based approach is applicable within each particular modality. The Cartesian product of the linear spaces into which the respective kernels embed the output scales of the single sensors is employed as an appropriate joint scale, corresponding to the idea of combining the modalities at the sensor level. From this point of view, the known kernel fusion techniques, including Relevance and Support Kernel Machines, offer a toolkit for combining pattern recognition modalities. We propose an SVM-based quasi-statistical approach to multi-modal pattern recognition that covers both of these modes of kernel fusion.

- Kernel-Based Fusion | Pp. 1-12
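The sensor-level fusion the abstract describes can be illustrated with a weighted sum of per-modality Gram matrices, which corresponds to taking the direct (Cartesian) product of the individual embedding spaces. A minimal sketch, assuming RBF kernels per modality; the weights and bandwidths are illustrative, not taken from the paper:

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    # Gram matrix of the RBF kernel between the rows of X and Y
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * sq)

def fused_kernel(modalities_a, modalities_b, gammas, weights):
    # Summing per-modality kernels corresponds to embedding each sample
    # into the Cartesian product of the individual feature spaces.
    K = np.zeros((modalities_a[0].shape[0], modalities_b[0].shape[0]))
    for Xa, Xb, g, w in zip(modalities_a, modalities_b, gammas, weights):
        K += w * rbf_kernel(Xa, Xb, g)
    return K

rng = np.random.default_rng(0)
sensor1 = rng.normal(size=(6, 4))   # modality 1: 6 samples, 4 features
sensor2 = rng.normal(size=(6, 2))   # modality 2: same samples, 2 features
K = fused_kernel([sensor1, sensor2], [sensor1, sensor2],
                 gammas=[0.5, 1.0], weights=[0.7, 0.3])
```

The fused matrix K can then be handed to any kernel machine, e.g. an SVM with a precomputed kernel.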

The Neutral Point Method for Kernel-Based Combination of Disjoint Training Data in Multi-modal Pattern Recognition

David Windridge; Vadim Mottl; Alexander Tatarchuk; Andrey Eliseyev

Multiple modalities present potential difficulties for kernel-based pattern recognition in consequence of the lack of inter-modal kernel measures. This is particularly apparent when training sets for the differing modalities are disjoint. Thus, while it is always possible to consider the problem at the classifier fusion level, it is conceptually preferable to approach the matter from a kernel-based perspective. By interpreting the aggregate of disjoint training sets as an entire data set with missing inter-modality measurements to be filled in by appropriately chosen substitutes, we arrive at a novel kernel-based technique, the neutral point method. On further theoretical analysis, it transpires that the method is, in structural terms, a kernel-based analog of the well-known sum rule combination scheme. We therefore expect the method to exhibit similar error-canceling behavior, and thus to constitute a robust and conservative strategy for the treatment of kernel-based multi-modal data.

- Kernel-Based Fusion | Pp. 13-21

Kernel Combination Versus Classifier Combination

Wan-Jui Lee; Sergey Verzakov; Robert P. W. Duin

Combining classifiers joins the strengths of different classifiers to improve classification performance. Using rules to combine the outputs of different classifiers is the basic structure of classifier combination. Fusing models from different kernel machine classifiers is another combination strategy, called kernel combination. Although classifier combination and kernel combination are very different strategies for combining classifiers, they aim to reach the same goal through very similar fundamental concepts.

We propose here a compositional method for kernel combination. The new composed kernel matrix is an extension and union of the original kernel matrices. Generally, kernel combination approaches rely heavily on the training data and have to learn weights indicating the importance of each kernel. Our compositional method avoids learning any weights; the importance of the kernel functions is derived directly in the process of learning the kernel machines. The performance of the proposed kernel combination procedure is illustrated by experiments in comparison with classifier combination based on the same kernels.

- Kernel-Based Fusion | Pp. 22-31

Deriving the Kernel from Training Data

Stefano Merler; Giuseppe Jurman; Cesare Furlanello

In this paper we propose a strategy for constructing data-driven kernels, automatically determined by the training examples. Basically, their associated Reproducing Kernel Hilbert Spaces arise from finite sets of linearly independent functions that can be interpreted as weak classifiers or regressors learned from the training material. When working in the Tikhonov regularization framework, the only free parameter to be optimized is the regularizer, representing a trade-off between empirical error and smoothness of the solution. A generalization error bound based on Rademacher complexity is provided, yielding the potential for controlling overfitting.

- Kernel-Based Fusion | Pp. 32-41
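The construction the abstract outlines, a kernel whose RKHS is spanned by a finite set of learned functions, amounts to the kernel K(x, z) = Σᵢ fᵢ(x) fᵢ(z) induced by the feature map Φ(x) = (f₁(x), …, fₙ(x)). A minimal sketch, with hypothetical decision stumps standing in for the weak classifiers learned from training data:

```python
import numpy as np

def stump(j, t):
    # decision stump: +/-1 depending on a threshold t on feature j
    return lambda X: np.where(X[:, j] > t, 1.0, -1.0)

def data_driven_kernel(funcs):
    # K(x, z) = sum_i f_i(x) f_i(z): the kernel induced by the feature
    # map Phi(x) = (f_1(x), ..., f_n(x)) built from the weak learners
    def K(X, Z):
        FX = np.column_stack([f(X) for f in funcs])
        FZ = np.column_stack([f(Z) for f in funcs])
        return FX @ FZ.T
    return K

rng = np.random.default_rng(1)
X = rng.normal(size=(8, 3))
funcs = [stump(0, 0.0), stump(1, -0.5), stump(2, 0.3)]
K = data_driven_kernel(funcs)(X, X)
```

By construction K is symmetric and positive semi-definite, so it is a valid kernel for any kernel machine.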

On the Application of SVM-Ensembles Based on Adapted Random Subspace Sampling for Automatic Classification of NMR Data

Kai Lienemann; Thomas Plötz; Gernot A. Fink

We present an approach for the automatic classification of Nuclear Magnetic Resonance Spectroscopy data of biofluids with respect to drug-induced organ toxicities. Classification is realized by an ensemble of Support Vector Machines trained on different subspaces according to a modified version of Random Subspace Sampling. Features likely to improve classification accuracy are favored when the subspaces are determined, which improves the accuracy of the base classifiers within the ensemble. An experimental evaluation on a challenging real task from pharmacology demonstrates the increased classification accuracy of the proposed ensemble creation approach compared to single SVM classification and classical Random Subspace Sampling.

- Applications | Pp. 42-51
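Classical Random Subspace Sampling for an SVM ensemble can be sketched as follows; the synthetic data, ensemble size, and subspace dimension are illustrative stand-ins (the paper works on NMR spectra and biases the subspace choice towards promising features, which this uniform-sampling sketch does not do):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# hypothetical stand-in data; the paper uses NMR spectra of biofluids
X, y = make_classification(n_samples=200, n_features=40, n_informative=8,
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
members = []
for _ in range(11):
    # each SVM is trained on a random subset of the features
    idx = rng.choice(X.shape[1], size=15, replace=False)
    members.append((idx, SVC(kernel="rbf", gamma="scale").fit(Xtr[:, idx], ytr)))

# majority vote over the ensemble members
votes = np.stack([m.predict(Xte[:, idx]) for idx, m in members])
y_pred = (votes.mean(axis=0) > 0.5).astype(int)
```

Swapping the uniform `rng.choice` for a sampler weighted towards discriminative features would move this sketch closer to the adapted scheme the abstract describes.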

A New HMM-Based Ensemble Generation Method for Numeral Recognition

Albert Hung-Ren Ko; Robert Sabourin; Alceu de Souza Britto

A new scheme for the optimization of codebook sizes for HMMs and the generation of HMM ensembles is proposed in this paper. In a discrete HMM, the vector quantization procedure and the generated codebook are associated with performance degradation. By using a selected clustering validity index, we show that the optimal HMM codebook size can be chosen without training HMM classifiers. Moreover, the proposed scheme yields multiple optimized HMM classifiers, each based on a different codebook size. By using these to construct an ensemble of HMM classifiers, the scheme can compensate for the degradation of a discrete HMM.

- Applications | Pp. 52-61
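Selecting a codebook size with a clustering validity index, before any HMM is trained, can be sketched as below; the silhouette index and the synthetic feature vectors are illustrative choices, not necessarily the index or data used in the paper:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# hypothetical 2-D feature vectors drawn from three separated clusters
rng = np.random.default_rng(0)
frames = np.vstack([rng.normal(loc=c, scale=0.3, size=(60, 2))
                    for c in ((0, 0), (3, 0), (0, 3))])

scores = {}
for k in (2, 3, 4, 5, 6):
    # candidate codebook size k = number of vector-quantization centroids
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(frames)
    scores[k] = silhouette_score(frames, labels)

# codebook size chosen by the validity index, without training any HMM
best_k = max(scores, key=scores.get)
```

Each surviving candidate size can then seed one discrete HMM, giving the ensemble of differently quantized classifiers the abstract describes.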

Classifiers Fusion in Recognition of Wheat Varieties

Sarunas Raudys; Ömer Kaan Baykan; Ahmet Babalik; Vitalij Denisov; Antanas Andrius Bielskis

Five wheat varieties (Bezostaja, Çeşit1252, Dağdaş, Gerek and Kızıltan, traded in the Konya Exchange of Commerce, Turkey), characterized by nine geometric and three colour descriptive features, have been classified by a multiple classifier system in which pair-wise SLP or SV classifiers served as base experts. In addition to standard voting and the Hastie and Tibshirani fusion rules, two new rules were suggested that reduced the generalization error by up to 5%. In classifying kernel lots, we may obtain faultless grain recognition.

- Applications | Pp. 62-71

Multiple Classifier Methods for Offline Handwritten Text Line Recognition

Roman Bertolami; Horst Bunke

This paper investigates the use of multiple classifier methods for offline handwritten text line recognition. To obtain ensembles of recognisers we implement a random feature subspace method. The word sequences returned by the individual ensemble members are first aligned. Then the final word sequence is produced. For this purpose we use a voting method and two novel statistical combination methods. The conducted experiments show that the proposed multiple classifier methods have the potential to improve the recognition accuracy of single recognisers.

- Applications | Pp. 72-81
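Once the ensemble members' word sequences are aligned, the voting step can be sketched as a position-wise majority vote over the aligned words, with a gap symbol for insertions and deletions; the alignment itself and the paper's two statistical combination methods are omitted here:

```python
from collections import Counter

def vote(aligned):
    # position-wise majority vote over already-aligned word sequences
    # ("-" marks an alignment gap)
    out = []
    for words in zip(*aligned):
        w, _ = Counter(words).most_common(1)[0]
        if w != "-":
            out.append(w)
    return out

# hypothetical outputs of three ensemble members after alignment
aligned = [["the", "quick", "brown", "fox"],
           ["the", "quick", "-",     "fox"],
           ["tha", "quick", "brown", "box"]]
result = vote(aligned)   # -> ["the", "quick", "brown", "fox"]
```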

Applying Data Fusion Methods to Passage Retrieval in QAS

Hans Ulrich Christensen; Daniel Ortiz-Arroyo

This paper investigates the use of diverse data fusion methods to improve the performance of the passage retrieval component in a question answering system. Our results obtained with 13 data fusion methods and 8 passage retrieval systems show that data fusion techniques are capable of improving the performance of a passage retrieval system by 6.43% and 11.32% in terms of the mean reciprocal rank and coverage measures respectively.

- Applications | Pp. 82-92
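One generic data fusion rule for merging ranked passage lists, not necessarily among the 13 the paper evaluates, is reciprocal rank fusion, sketched here:

```python
from collections import defaultdict

def reciprocal_rank_fusion(rankings, k=60):
    # each retrieval system contributes 1/(k + rank) per passage;
    # k dampens the influence of top-ranked outliers
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, passage in enumerate(ranking, start=1):
            scores[passage] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# hypothetical ranked outputs of two passage retrieval systems
system_a = ["p3", "p1", "p7"]
system_b = ["p1", "p7", "p3"]
fused = reciprocal_rank_fusion([system_a, system_b])   # -> ["p1", "p3", "p7"]
```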

A Co-training Approach for Time Series Prediction with Missing Data

Tawfik A. Mohamed; Neamat El Gayar; Amir F. Atiya

In this paper we consider the problem of missing data in time series analysis. We propose a semi-supervised co-training method to handle missing data. We transform the time series data into a set of labeled and unlabeled data. Different predictors are used to predict the unlabeled data, and the most confidently labeled patterns are used to retrain the predictors and enhance the overall prediction accuracy. By labeling the unknown patterns, the missing data is compensated for. Experiments were conducted on different time series data with varying percentages of missing data generated using a uniform distribution. We used KNN base predictors and Fuzzy Inductive Reasoning (FIR) base predictors and compared their performance using different confidence measures. Results reveal the effectiveness of the co-training method in compensating for the missing values and improving prediction. The FIR model together with the "similarity" confidence measure obtained the best results in most cases in our study.

- Applications | Pp. 93-102
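The co-training idea, filling in missing values only where independently trained predictors agree confidently, can be sketched on a toy series with two KNN "views" (past and future lags); the FIR predictor and the paper's confidence measures are replaced here by a simple agreement threshold:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# toy series with artificially removed values (NaN)
rng = np.random.default_rng(0)
t = np.arange(300)
series = np.sin(0.1 * t) + 0.05 * rng.normal(size=t.size)
gaps = rng.choice(np.arange(3, t.size - 3), size=25, replace=False)
series[gaps] = np.nan

def views(s, i):
    # view 1: three past values; view 2: three future values
    return s[i - 3:i], s[i + 1:i + 4]

# build labeled sets for the two predictors from fully observed windows
Xp, Xf, y = [], [], []
for i in range(3, t.size - 3):
    p, f = views(series, i)
    if not (np.isnan(p).any() or np.isnan(f).any() or np.isnan(series[i])):
        Xp.append(p); Xf.append(f); y.append(series[i])
knn_past = KNeighborsRegressor(5).fit(Xp, y)
knn_future = KNeighborsRegressor(5).fit(Xf, y)

# one co-training round: fill gaps where the two views agree closely
for i in gaps:
    p, f = views(series, i)
    if np.isnan(p).any() or np.isnan(f).any():
        continue
    a, b = knn_past.predict([p])[0], knn_future.predict([f])[0]
    if abs(a - b) < 0.1:       # agreement as a crude confidence measure
        series[i] = (a + b) / 2
```

In the full method the filled-in values would be added to the labeled set and the predictors retrained for further rounds.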