Publications catalog - books

Computer Recognition Systems: Proceedings of the 4th International Conference on Computer Recognition Systems CORES ’05

Marek Kurzyński; Edward Puchała; Michał Woźniak; Andrzej Żołnierek (eds.)

Abstract/description – provided by the publisher

Not available.

Keywords – provided by the publisher

Pattern Recognition; Artificial Intelligence (incl. Robotics); Appl. Mathematics/Computational Methods of Engineering; Applications of Mathematics; Information Systems and Communication Service

Availability

Detected institution: Not detected
Year of publication: 2005
Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-25054-8

Electronic ISBN

978-3-540-32390-7

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2005

Publication rights information

© Springer-Verlag Berlin Heidelberg 2005

Table of contents

Neural Network-Based Pattern Recognition — Part 2: Stability and Algorithmic Issues

D. Calitoiu; John B. Oommen; D. Nussbaum

Traditional Pattern Recognition (PR) systems work with the model that the object to be recognized is characterized by a set of features, which are treated as the inputs. In this paper, we propose a new model for Pattern Recognition (PR), namely, one that involves Chaotic Neural Networks (CNNs). To achieve this, we enhance the basic model proposed by Adachi [], referred to as Adachi's Chaotic Neural Network (ACNN). Although the ACNN has been shown to be chaotic, we prove that it also has the property that the degree of “chaos” can be controlled: by decreasing the multiplicity of the eigenvalues of the underlying control system, we can effectively decrease the degree of chaos and, conversely, increase the periodicity. We then show that such a Modified ACNN (M-ACNN) has the desirable property that it recognizes various input patterns. This PR is achieved by the system essentially “resonating” with a finite periodicity whenever these samples are presented. In this paper, which follows its companion paper [], we analyze the M-ACNN for its stability and algorithmic issues, and include more comprehensive experimental results.
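
For orientation, here is a minimal NumPy sketch of the Adachi-style chaotic network dynamics the paper builds on. It is a sketch under stated assumptions: the update equations and parameter values are the typical ones from the ACNN literature, not taken from this paper, and the M-ACNN eigenvalue modification itself is not reproduced.

```python
import numpy as np

def hebbian_weights(patterns):
    """Autoassociative weights from binary {0,1} patterns (typical ACNN setup)."""
    p = 2.0 * np.asarray(patterns, dtype=float) - 1.0   # map {0,1} -> {-1,+1}
    w = p.T @ p / p.shape[0]
    np.fill_diagonal(w, 0.0)
    return w

def acnn_run(w, a, steps=200, kf=0.2, kr=0.9, alpha=10.0, eps=0.015, seed=0):
    """Iterate the two internal states (feedback, refractoriness); return outputs."""
    rng = np.random.default_rng(seed)
    n = w.shape[0]
    eta, zeta = np.zeros(n), np.zeros(n)
    x = rng.random(n)
    traj = []
    for _ in range(steps):
        eta = kf * eta + w @ x               # decayed feedback from the network
        zeta = kr * zeta - alpha * x + a     # refractoriness plus external input a
        x = 1.0 / (1.0 + np.exp(-(eta + zeta) / eps))   # steep logistic output
        traj.append(x.copy())
    return np.array(traj)
```

Recognition in this setting is read off the trajectory: when a stored pattern is applied as the external input a, the network is expected to settle into (“resonate” with) a short periodic orbit rather than wander chaotically.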

Part I - Invited Papers | Pp. 3-16

A Brief Survey of Dynamic Texture Description and Recognition

Dmitry Chetverikov; Renaud Péteri

Two test images are decomposed into sequences of ten ordered images which result from a clustering of pixels. The first image is supposed to contain edge-related pixels and the tenth image interior-related pixels; the remaining images gradually change from edge-related to interior-related. The clustering is provided by the so-called Grade Correspondence-Cluster Analysis (GCCA), described in a recently published book on grade models and methods for data analysis. The GCCA is applied to data matrices formed by a set of 12 variables, which include the gradient module, the gray level, and ten variables describing the nearest neighborhood of each pixel according to the increasing level of module differentiation. The data matrices are visualized in the form of so-called “ordered overrepresentation maps” and “grade stripcharts”.
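
As a rough illustration of the edge-to-interior ordering described above, the sketch below ranks pixels by gradient module and splits them into ten ordered images. This is an assumption-laden stand-in: the actual GCCA clustering over the 12-variable data matrix is not reproduced, only the kind of decomposition it yields.

```python
import numpy as np

def ordered_decomposition(image, n_layers=10):
    """Split an image into n_layers images ordered from edge- to interior-related."""
    gy, gx = np.gradient(image.astype(float))
    module = np.hypot(gx, gy)                     # gradient module per pixel
    ranks = module.ravel().argsort().argsort()    # rank 0 = weakest gradient
    layer = (ranks * n_layers // ranks.size).reshape(image.shape)
    # highest-gradient layer first (edge-related), lowest last (interior-related)
    return [np.where(layer == k, image, 0) for k in range(n_layers - 1, -1, -1)]
```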

Part I - Invited Papers | Pp. 17-26

Open Issues in Pattern Recognition

Robert P. W. Duin; Elżbieta Pękalska

The area of pattern recognition has developed into a mature engineering field with many practical applications. This increased applicability, together with the development of sensors and computer resources, leads to new research areas and raises new questions. In this paper, old and new open issues are discussed that have to be faced in advancing real-world applications. Some may only be overcome by brute-force procedures, while others may be solved or circumvented either by novel and better procedures, or by a better understanding of their causes. Here, we will try to identify a number of open issues and define them as well as possible.

Part I - Invited Papers | Pp. 27-42

The Role of Ontological Models in Pattern Recognition

Juliusz L. Kulikowski

The role and applications of ontological models in advanced pattern recognition methods are considered. A formal definition of ontological models, a general taxonomy, and a specification of some typical ontological models are presented. Examples of a simple, a composite and an extended ontological model are given. The role of ontological models in composite pattern recognition is described and illustrated by examples.

Part I - Invited Papers | Pp. 43-52

Current Feature Selection Techniques in Statistical Pattern Recognition

Pavel Pudil; Petr Somol

The paper addresses the problem of feature selection (abbreviated FS in the sequel) in statistical pattern recognition, with particular emphasis on recent developments. Besides overviewing advances in methodology, it attempts to put them into a taxonomical framework. The methods discussed include the latest variants of the Branch & Bound algorithm, enhanced sub-optimal techniques, and the simultaneous semi-parametric probability density function modeling and feature space selection method.
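
As a point of reference for the methods surveyed, here is a minimal sketch of the classic Branch & Bound search for an optimal feature subset under a monotonic criterion, i.e. one that cannot increase when a feature is removed. The paper's improved variants add ordering and prediction mechanisms not shown here; `criterion` is a placeholder for any such monotone measure.

```python
def bb_select(n_features, n_keep, criterion):
    """Best subset of size n_keep maximizing a monotone `criterion` on index tuples."""
    best = {"J": -float("inf"), "subset": None}

    def search(subset, start):
        j = criterion(subset)
        if j <= best["J"]:           # monotonicity: no descendant can do better
            return
        if len(subset) == n_keep:
            best["J"], best["subset"] = j, subset
            return
        # branch by removing one more feature, at non-decreasing positions
        for i in range(start, len(subset)):
            search(subset[:i] + subset[i + 1:], i)

    search(tuple(range(n_features)), 0)
    return best["subset"], best["J"]
```

With a genuinely monotone measure (e.g. a Mahalanobis-type class separability computed on the indexed features) the pruning is exact; with a non-monotone criterion the same scheme degrades into a heuristic.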

Part I - Invited Papers | Pp. 53-68

Margin-based Diversity Measures for Ensemble Classifiers

Tomasz Arodź

Classifier ensembles have been used successfully in many applications. Their superiority over single classifiers depends on the diversity of the classifiers forming the ensemble. Until now, most ensemble diversity measures have been derived from binary classification information. In this paper we propose a new group of methods which use the margins of the individual classifiers in the ensemble. These methods process the margins with a bipolar sigmoid function, as the most important information is contained in margins of low magnitude. The proposed diversity measures are evaluated for three types of ensembles of linear classifiers. The tests show that these measures are better at predicting recognition accuracy than established diversity measures, such as the disagreement measure or entropy.
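
A minimal sketch of the idea, with one stated assumption: the margins y·f_i(x) of each base classifier are squashed with a bipolar sigmoid so that low-magnitude margins dominate, and diversity is then scored from the transformed values. The squashing follows the abstract; scoring diversity as the per-sample variance across ensemble members is an illustrative choice, not necessarily the paper's exact measure.

```python
import numpy as np

def bipolar_sigmoid(m, beta=1.0):
    """Squash real-valued margins into (-1, 1); steepest around zero."""
    return np.tanh(beta * m)

def margin_diversity(margins, beta=1.0):
    """margins: array of shape (n_classifiers, n_samples) holding y * f_i(x)."""
    s = bipolar_sigmoid(np.asarray(margins, dtype=float), beta)
    return s.var(axis=0).mean()   # spread across members, averaged over samples
```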

Part II - Features, Learning and Classifiers | Pp. 71-78

Boosting the Fisher Linear Discriminant with Random Feature Subsets

Tomasz Arodź

Boosting increases the recognition accuracy of many types of classifiers. However, studies show that for the Fisher Linear Discriminant (FLD), a simple and widely used classifier, boosting does not lead to a significant increase in accuracy. In this paper, a new method for adapting the FLD to the boosting framework is proposed. This method, the AdaBoost-RandomFeatureSubset-FLD (AB-RFS-FLD), uses a different, randomly chosen subset of features for learning in each boosting round. The new method achieves significantly better accuracy than both a single FLD and a boosted FLD, with improvements reaching 6% in some cases. We show that the good performance can be attributed to the higher diversity of the individual FLDs, as well as to better generalization abilities.
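
The following is a minimal sketch of the scheme as described: a plain AdaBoost loop in which each round trains a Fisher Linear Discriminant on a randomly chosen feature subset. The weighted FLD (weighted class means and pooled scatter), the half-of-the-features subset size, and ±1 labels are illustrative assumptions.

```python
import numpy as np

def weighted_fld(X, y, w):
    """Fisher direction and threshold from sample-weighted class statistics."""
    m_pos = np.average(X[y == 1], axis=0, weights=w[y == 1])
    m_neg = np.average(X[y == -1], axis=0, weights=w[y == -1])
    d = X - np.where(y[:, None] == 1, m_pos, m_neg)          # center by own class mean
    Sw = (w[:, None] * d).T @ d + 1e-6 * np.eye(X.shape[1])  # regularized scatter
    v = np.linalg.solve(Sw, m_pos - m_neg)
    return v, 0.5 * (m_pos + m_neg) @ v                      # direction, threshold

def ab_rfs_fld(X, y, rounds=50, subset_frac=0.5, seed=0):
    rng = np.random.default_rng(seed)
    n, D = X.shape
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(rounds):
        feats = rng.choice(D, max(1, int(subset_frac * D)), replace=False)
        v, thr = weighted_fld(X[:, feats], y, w)
        h = np.sign(X[:, feats] @ v - thr)
        err = np.clip(w[h != y].sum(), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)    # AdaBoost member weight
        w *= np.exp(-alpha * y * h)
        w /= w.sum()
        ensemble.append((feats, v, thr, alpha))
    return ensemble

def predict(ensemble, X):
    """Weighted vote of the per-round FLDs, each on its own feature subset."""
    return np.sign(sum(a * np.sign(X[:, f] @ v - t) for f, v, t, a in ensemble))
```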

Part II - Features, Learning and Classifiers | Pp. 79-86

A Look-Ahead Branch and Bound Pruning Scheme for Trie-Based Approximate String Matching

Ghada Badr; John B. Oommen

This paper deals with the problem of estimating a transmitted string by processing a corresponding noisy version of it. We assume that the noisy string contains substitution, insertion and deletion errors, and that the transmitted string is an element of a finite (but possibly large) dictionary. The best estimate is defined as the element of the dictionary which minimizes the Generalized Levenshtein Distance (GLD) to the noisy string, subject to the constraint that the total number of errors does not exceed a given bound. In this paper we present a new Branch and Bound pruning strategy that can be applied to dictionary-based approximate string matching when the dictionary is stored as a trie. The new strategy attempts to look ahead at each node before moving further, by merely evaluating a certain local criterion at that node. As opposed to previously reported trie-based methods [], [], the pruning is done before even embarking on the edit distance computations, and thus it combines the advantages of partitioning the dictionary according to string lengths with the advantages gleaned by representing the dictionary using the trie data structure. The results demonstrate a marked improvement (up to 33%) in the number of operations needed on three benchmark dictionaries.
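
For context, here is a minimal sketch of the standard cutoff scheme that look-ahead pruning improves upon: edit-distance rows are computed incrementally while descending the trie, and a branch is abandoned once the row minimum exceeds the error budget k. The paper's contribution, a local look-ahead criterion evaluated before any edit-distance computation at a node, is not reproduced here.

```python
class TrieNode:
    def __init__(self):
        self.children = {}
        self.word = None                 # set at word-final nodes

def insert(root, word):
    node = root
    for ch in word:
        node = node.children.setdefault(ch, TrieNode())
    node.word = word

def search(root, query, k):
    """All dictionary words within Levenshtein distance k of `query`."""
    results = []
    first_row = list(range(len(query) + 1))   # distances for the empty prefix

    def walk(node, ch, prev_row):
        # extend the edit-distance table by one row for the current prefix
        row = [prev_row[0] + 1]
        for j in range(1, len(query) + 1):
            row.append(min(row[j - 1] + 1,                           # insertion
                           prev_row[j] + 1,                          # deletion
                           prev_row[j - 1] + (query[j - 1] != ch)))  # substitution
        if node.word is not None and row[-1] <= k:
            results.append((node.word, row[-1]))
        if min(row) <= k:                # cutoff: otherwise no extension can recover
            for c, child in node.children.items():
                walk(child, c, row)

    for c, child in root.children.items():
        walk(child, c, first_row)
    return results
```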

Part II - Features, Learning and Classifiers | Pp. 87-94

Linear Ranked Regression - Designing Principles

Leon Bobrowski

A priori information about a given pattern recognition problem is often a necessary precondition for reaching a solution of sufficient quality. Such information can take the form of a ranked order on reference sets of objects or events. For example, we may encounter a case where one patient is suffering from a more advanced stage of a disease than another. In other cases, we may assume that some events took place earlier or later than a given one. A ranked regression task aims at designing a linear transformation of multivariate data sets onto a line which preserves the ranked order as precisely as possible. Convex and piecewise-linear (CPL) criterion functions are used here for designing the ranked linear models.
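
A minimal sketch of the ranked-regression idea under stated assumptions: each ordered pair (i before j) contributes a convex piecewise-linear penalty max(0, 1 - w·(x_j - x_i)) whenever the projection violates the required order, and a direction w minimizing the total penalty is sought. Plain subgradient descent is used as a stand-in solver; Bobrowski's CPL framework relies on dedicated basis-exchange algorithms not shown here.

```python
import numpy as np

def ranked_direction(X, pairs, steps=500, lr=0.01):
    """X: (n, d) data; pairs: (i, j) tuples meaning i is ranked before j."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = np.zeros_like(w)
        for i, j in pairs:
            diff = X[j] - X[i]
            if 1.0 - w @ diff > 0.0:   # pair still violates the unit margin
                grad -= diff           # subgradient of the active hinge term
        w -= lr * grad
    return w                           # projections X @ w should respect the order
```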

Part II - Features, Learning and Classifiers | Pp. 105-112

Time Series Patterns Recognition with Genetic Algorithms

Marcin Borkowski

The aim of this paper is to present an applicable, working pattern recognition system which can find and classify useful dependencies between data entries in time series. The notions of a predictor and its level of certainty are introduced. A genetic algorithm is deployed to prepare and govern a set of independent predictors. The practical part of the solution consists of data fitting and prediction. The architecture of the system makes it possible to interleave the learning phase with use. The analyzed data may be non-continuous and incomplete. In uncertain cases, the system presents either more than one answer to the processed data or no response at all. Early testing results, including prediction and fitting of simple time series with 10 to 50 percent of the data missing, are presented at the end of the paper.
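
A minimal sketch of the approach under stated assumptions: a genetic algorithm evolves a population of simple predictors, each forecasting the next value from a fixed set of time lags, and missing (NaN) entries are skipped during fitness evaluation. The linear-predictor genome, mutation-only reproduction and all GA parameters are illustrative; the paper's predictors and certainty levels are richer than this.

```python
import numpy as np

def fitness(genome, series, max_lag=8):
    """Negative mean squared error over points whose target and lags are known."""
    lags, weights = genome
    err, count = 0.0, 0
    for t in range(max_lag, len(series)):
        past = [series[t - l] for l in lags]
        if np.isnan(series[t]) or any(np.isnan(p) for p in past):
            continue                         # tolerate incomplete data
        err += (np.dot(weights, past) - series[t]) ** 2
        count += 1
    return -err / count if count else -np.inf

def evolve(series, pop=30, gens=100, n_lags=3, max_lag=8, seed=0):
    rng = np.random.default_rng(seed)
    def random_genome():
        lags = list(rng.choice(np.arange(1, max_lag + 1), n_lags, replace=False))
        return (lags, rng.normal(0.0, 0.5, n_lags))
    population = [random_genome() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda g: fitness(g, series, max_lag), reverse=True)
        survivors = population[: pop // 2]
        children = [(list(l), w + rng.normal(0.0, 0.1, n_lags))
                    for l, w in survivors]   # mutate weights of the fittest half
        population = survivors + children
    population.sort(key=lambda g: fitness(g, series, max_lag), reverse=True)
    return population[0]                     # best (lags, weights) predictor found
```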

Part II - Features, Learning and Classifiers | Pp. 113-120