Publications catalogue - books



Pattern Recognition and Image Analysis: Second Iberian Conference, IbPRIA 2005, Estoril, Portugal, June 7-9, 2005, Proceedings, Part II

Jorge S. Marques ; Nicolás Pérez de la Blanca ; Pedro Pina (eds.)

Conference: 2nd Iberian Conference on Pattern Recognition and Image Analysis (IbPRIA), Estoril, Portugal, June 7-9, 2005

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Pattern Recognition; Image Processing and Computer Vision; Artificial Intelligence (incl. Robotics); Document Preparation and Text Processing; Computer Graphics

Availability
Detected institution: not detected | Year of publication: 2005 | Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-26154-4

Electronic ISBN

978-3-540-32238-2

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© Springer-Verlag Berlin Heidelberg 2005

Table of contents

Testing Some Improvements of the Fukunaga and Narendra’s Fast Nearest Neighbour Search Algorithm in a Spelling Task

Eva Gómez-Ballester; Luisa Micó; Jose Oncina

Nearest neighbour search is one of the simplest and most widely used techniques in Pattern Recognition. One of the best-known fast nearest neighbour algorithms was proposed by Fukunaga and Narendra. The algorithm builds a tree at preprocessing time that is traversed at search time, using elimination rules to avoid exploring it fully. This paper tests two new types of improvement in a real-data environment, a spelling task. The first improvement is a new (and faster to build) type of tree, and the second is the introduction of two new elimination rules. Both techniques, even taken independently, significantly reduce both the number of distance computations and the time spent finding the nearest neighbour.

Keywords: Test Point; Search Time; Near Neighbour; Distance Computation; Tree Construction.

I - Statistical Pattern Recognition | Pp. 3-10
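The elimination rule at the heart of Fukunaga-Narendra-style search can be sketched as follows. This is an illustrative simplification (a flat list of clusters rather than the paper's tree, with names of our own choosing, not the authors' code): a node whose centre is at distance d from the query and whose covering radius is r cannot contain a nearer neighbour once d - r exceeds the best distance found so far.

```python
# Hedged sketch of a branch-and-bound nearest-neighbour search with the
# classical triangle-inequality elimination rule. One "node" covers all
# points within `radius` of `centre`.
import math

def build_node(points):
    """A one-level 'tree' node: centroid, covering radius, member points."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    centre = (cx, cy)
    radius = max(math.dist(centre, p) for p in points)
    return {"centre": centre, "radius": radius, "points": points}

def nn_search(query, nodes):
    best, d_best, computations = None, math.inf, 0
    # Visit nodes closest-centre first so the bound tightens early.
    for node in sorted(nodes, key=lambda n: math.dist(query, n["centre"])):
        if math.dist(query, node["centre"]) - node["radius"] >= d_best:
            continue  # elimination rule: whole node pruned, no member distances computed
        for p in node["points"]:
            computations += 1
            d = math.dist(query, p)
            if d < d_best:
                best, d_best = p, d
    return best, d_best, computations
```

With two well-separated clusters, a query near the first cluster prunes the second entirely, which is exactly the saving in distance computations the abstract measures.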

Solving Particularization with Supervised Clustering Competition Scheme

Oriol Pujol; Petia Radeva

The process of mixing labelled and unlabelled data has recently been studied in semi-supervision techniques. However, this is not the only scenario in which labelled and unlabelled data can be mixed. In this paper we propose a new problem, which we call particularization, and a way to solve it. We also propose a new technique for mixing labelled and unlabelled data. This technique relies on the combination of supervised and unsupervised processes competing for the classification of each data point. Encouraging results on improving the classification outcome are obtained on the MNIST database.

Keywords: Recognition Rate; Gaussian Mixture Model; Label Data; Handwritten Character; Robust Cluster.

I - Statistical Pattern Recognition | Pp. 11-18

Adaptive Optimization with Constraints: Convergence and Oscillatory Behaviour

Fernando J. Coito; João M. Lemos

The problem of adaptive minimization of globally unknown functionals under constraints on the independent variable is considered in a stochastic framework. The CAM algorithm for vector problems is proposed. By resorting to the ODE analysis for analysing stochastic algorithms and singular perturbation methods, it is shown that the only possible convergence points are the constrained local minima. Simulation results in 2 dimensions illustrate this result.

Keywords: Oscillatory Behaviour; Stochastic Algorithm; Fast Subsystem; Convergence Point; Vector Problem.

I - Statistical Pattern Recognition | Pp. 19-26

Data Characterization for Effective Prototype Selection

Ramón A. Mollineda; J. Salvador Sánchez; José M. Sotoca

The Nearest Neighbor classifier is one of the most popular supervised classification methods. It is very simple, intuitive and accurate in a great variety of real-world applications. Despite its simplicity and effectiveness, practical use of this rule has historically been limited by its high storage requirements and computational costs, as well as by the presence of outliers. To overcome these drawbacks, it is possible to employ a suitable prototype selection scheme, which reduces storage and computing time and usually provides some increase in classification accuracy. Nevertheless, in some practical cases prototype selection may even degrade the classifier's effectiveness. From an empirical point of view, it is still difficult to know a priori when this method will behave appropriately. The present paper tries to predict how appropriate a prototype selection algorithm will be when applied to a particular problem, by characterizing the data with a set of complexity measures.

Keywords: Complexity Measure; Training Instance; Lower Error Rate; Neighbor Rule; Prototype Selection.

I - Statistical Pattern Recognition | Pp. 27-34
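One widely used data-complexity measure of the kind the abstract refers to is Fisher's discriminant ratio (often called F1 in this literature). The sketch below is ours, not the paper's exact measure set: a high ratio means the two classes are well separated along a feature, the regime in which prototype selection tends to behave well.

```python
# Hedged sketch of a one-dimensional Fisher discriminant ratio between two
# class samples: (mean difference)^2 / (sum of within-class variances).
def fisher_ratio(xs_a, xs_b):
    def mean(v):
        return sum(v) / len(v)
    def var(v):
        m = mean(v)
        return sum((x - m) ** 2 for x in v) / len(v)
    num = (mean(xs_a) - mean(xs_b)) ** 2
    den = var(xs_a) + var(xs_b)
    # Zero within-class variance means perfectly separable along this feature.
    return num / den if den > 0 else float("inf")
```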

A Stochastic Approach to Wilson’s Editing Algorithm

Fernando Vázquez; J. Salvador Sánchez; Filiberto Pla

Two extensions of Wilson's original editing method are introduced in this paper. These new algorithms are based on estimating probabilities from the k-nearest neighbour patterns of an instance, in order to obtain more compact edited sets while maintaining the classification rate. Several experiments with synthetic and real data sets are carried out to illustrate the behavior of the proposed algorithms and to compare their performance with that of other traditional techniques.

Keywords: Classification Accuracy; Near Neighbor; Machine Learn Database; Editing Method; Edit Near Neighbor.

I - Statistical Pattern Recognition | Pp. 35-42
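The baseline the paper extends, Wilson's classical editing rule, is simple enough to sketch (this is the standard rule, not the paper's probabilistic extensions; the helper names are ours): discard every training point whose class label disagrees with the majority vote of its k nearest neighbours in the rest of the set.

```python
# Hedged sketch of classical Wilson editing on 2-D points.
import math
from collections import Counter

def wilson_edit(points, labels, k=3):
    """Return the indices of training points retained by Wilson's rule."""
    kept = []
    for i, (p, y) in enumerate(zip(points, labels)):
        # Distances to every other training point (p itself excluded).
        others = [(math.dist(p, q), labels[j])
                  for j, q in enumerate(points) if j != i]
        others.sort(key=lambda t: t[0])
        vote = Counter(lab for _, lab in others[:k]).most_common(1)[0][0]
        if vote == y:
            kept.append(i)  # keep only points their neighbourhood agrees with
    return kept
```

A mislabelled point sitting inside the opposite class's region is voted out, which is how editing cleans the boundary before a nearest-neighbour classifier is applied.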

Parallel Perceptrons, Activation Margins and Imbalanced Training Set Pruning

Iván Cantador; José R. Dorronsoro

A natural way to deal with training samples in imbalanced class problems is to prune them, removing redundant patterns (easy to classify and probably over-represented) and label-noisy patterns (those that, belonging to one class, are labelled as members of another). This allows classifier construction to focus on borderline patterns, likely the most informative ones. To appropriately define these subsets, in this work we use as base classifiers the so-called parallel perceptrons, a novel approach to committee machine training that, among other things, allows margins for hidden unit activations to be defined naturally. We use these margins to define the above pattern types and to iteratively perform subsample selections from an initial training set that enhance classification accuracy and allow for balanced classifier performance even when class sizes are greatly different.

Keywords: Near Neighbor; Minority Class; Activation Margin; Negative Pattern; Positive Pattern.

I - Statistical Pattern Recognition | Pp. 43-50

Boosting Statistical Local Feature Based Classifiers for Face Recognition

Xiangsheng Huang; Yangsheng Wang

In this work, we present a novel approach to face recognition which uses boosted statistical local Gabor feature based classifiers. First, two Gabor parts, the real part and the imaginary part, are extracted for each pixel of the face images. The two parts are transformed into two kinds of Gabor features: a magnitude feature and a phase feature. 40 magnitude Gaborfaces and 40 phase Gaborfaces are generated for each face image by convolving the face images with Gabor filters at five scales and eight orientations. These Gaborfaces are then scanned with a sub-window, from which quantised Gabor feature histograms are extracted that efficiently represent the face image. The multi-class problem of face recognition is transformed into a two-class one of intra- and extra-class classification using intra-personal and extra-personal images, as in [5]. The intra/extra features are constructed from the histograms of two different face images, with the chi-square statistic as the dissimilarity measure. A strong classifier is learned by boosting over examples, similar to the face detection framework [10]. Experiments on the FERET database show good results, comparable to the best reported in the literature [6].

Keywords: Face Recognition; Independent Component Analysis; Face Image; Convolutional Neural Network.

I - Statistical Pattern Recognition | Pp. 51-58
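The chi-square dissimilarity between two feature histograms that the abstract mentions has a standard, compact form; a minimal sketch (the bin-weighting convention below is one common variant, not necessarily the exact one used in the paper):

```python
# Hedged sketch of the chi-square statistic as a histogram dissimilarity:
# chi2(h1, h2) = sum_i (h1[i] - h2[i])^2 / (h1[i] + h2[i]),
# with empty bins (h1[i] + h2[i] == 0) contributing nothing.
def chi_square(h1, h2):
    total = 0.0
    for a, b in zip(h1, h2):
        if a + b > 0:
            total += (a - b) ** 2 / (a + b)
    return total
```

Identical histograms give 0, and the value grows as mass shifts between bins, so thresholding it separates intra-personal from extra-personal image pairs.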

Dynamic and Static Weighting in Classifier Fusion

Rosa M. Valdovinos; J. Salvador Sánchez; Ricardo Barandela

When a Multiple Classifier System is employed, one of the most popular methods of accomplishing classifier fusion is simple majority voting. However, when the performance of the ensemble members is not uniform, the efficiency of this type of voting is negatively affected. In this paper, a comparison between simple and weighted voting (both dynamic and static) is presented. New weighting methods, mainly in the direction of the dynamic approach, are also introduced. Experimental results with several real-problem data sets demonstrate the advantages of the weighting strategies over the simple voting scheme. When comparing the dynamic and static approaches, results show that dynamic weighting is superior to the static strategy in terms of classification accuracy.

Keywords: Static Weighting; Test Pattern; Near Neighbor; Individual Classifier; Weighted Vote.

I - Statistical Pattern Recognition | Pp. 59-66
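The contrast between simple and weighted voting can be sketched in a few lines. This is an illustrative simplification (function names are ours): in static weighting each ensemble member carries a fixed weight, e.g. its training accuracy, while in dynamic weighting the weight vector is recomputed per test pattern.

```python
# Hedged sketch of simple vs. weighted majority voting for classifier fusion.
from collections import defaultdict

def simple_vote(predictions):
    """predictions: one class label per ensemble member; ties broken arbitrarily."""
    tally = defaultdict(int)
    for label in predictions:
        tally[label] += 1
    return max(tally, key=tally.get)

def weighted_vote(predictions, weights):
    """weights: one weight per member (static), or recomputed per test pattern (dynamic)."""
    tally = defaultdict(float)
    for label, w in zip(predictions, weights):
        tally[label] += w
    return max(tally, key=tally.get)
```

When members are not equally reliable, a single strong member can legitimately outvote two weak ones under weighting, which is exactly the case where simple majority voting degrades.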

A Novel One-Parameter Regularized Kernel Fisher Discriminant Method for Face Recognition

Wensheng Chen; Pongchi Yuen; Jian Huang; Daoqing Dai

Kernel-based regularized discriminant analysis (KRDA) is one of the promising approaches to solving the small sample size problem in face recognition. This paper addresses the problem of regularization parameter reduction in KRDA. From a computational complexity point of view, our goal is to develop a KRDA algorithm with a minimum number of parameters, in which the regularization process can be fully controlled. Along this line, we have developed a kernel one-parameter RDA (K1PRDA) algorithm (W. S. Chen, P. C. Yuen, J. Huang and D. Q. Dai, "Kernel machine-based one-parameter regularized Fisher discriminant method for face recognition," IEEE Transactions on SMC-B, to appear, 2005). K1PRDA was developed from a three-parameter regularization formula. In this paper, we propose another approach, formulating a one-parameter KRDA (1PRKFD) from a two-parameter formula. The Yale B database, with pose and illumination variations, is used to compare the performance of the 1PRKFD algorithm, the K1PRDA algorithm and other LDA-based algorithms. Experimental results show that both 1PRKFD and K1PRDA outperform the other LDA-based face recognition algorithms, and that their performance is comparable. This suggests that our methodology for deriving a one-parameter KRDA is stable.

Keywords: Face Recognition; Conjugate Gradient Method; Illumination Variation; Rank-1 Accuracy; Small Sample Size Problem.

I - Statistical Pattern Recognition | Pp. 67-74

AutoAssign – An Automatic Assignment Tool for Independent Components

Matthias Böhm; Kurt Stadlthanner; Ana M. Tomé; Peter Gruber; Ana R. Teixeira; Fabian J. Theis; Carlos G. Puntonet; Elmar W. Lang

In this work, an automatic assignment tool for independent components estimated within an independent component analysis is presented. The algorithm is applied to the problem of removing the water artifact from 2D NOESY NMR spectra. It uses local PCA to approximate the water artifact and defines a suitable cost function, which is optimized using simulated annealing. The blind source separation of the water artifact from the remaining protein spectrum is done with the recently developed algorithm dAMUSE.

Keywords: Simulated Annealing; Independent Component Analysis; Free Induction Decay; Singular Spectrum Analysis; Matrix Pencil.

I - Statistical Pattern Recognition | Pp. 75-82