Publications catalog - books

Computer Recognition Systems: Proceedings of the 4th International Conference on Computer Recognition Systems CORES ’05

Marek Kurzyński; Edward Puchała; Michał Woźniak; Andrzej Żołnierek (eds.)

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Pattern Recognition; Artificial Intelligence (incl. Robotics); Appl. Mathematics/Computational Methods of Engineering; Applications of Mathematics; Information Systems and Communication Service

Availability
Detected institution: not detected
Year of publication: 2005
Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-25054-8

Electronic ISBN

978-3-540-32390-7

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2005

Publication rights information

© Springer-Verlag Berlin Heidelberg 2005

Table of contents

Selection of Fuzzy-Valued Loss Function in Two Stage Binary Classifier

Robert Burduk

In this paper, a model of the Bayesian hierarchical classifier in which the consequences of decisions are fuzzy-valued is introduced. The model is based on the notion of a fuzzy random variable and on a subjective ranking method for fuzzy numbers defined by Campos and González. The Bayesian hierarchical classifier follows a decision-tree scheme with a given tree skeleton and given features to be used at each internal node. The influence of the choice of fuzzy-valued loss function on the classification result is analyzed. Finally, an example illustrating this case of Bayesian analysis is considered.
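
To make the role of a fuzzy-valued loss concrete, the sketch below is a minimal stand-in rather than the paper's method: losses are triangular fuzzy numbers, the expected fuzzy loss of each decision is a posterior-weighted sum, and decisions are ranked with a simple average index used here in place of the Campos–González ranking. All posteriors and loss values are invented for illustration.

```python
import numpy as np

# A triangular fuzzy number is stored as (left, mode, right).
def tfn_scale(t, w):
    """Scale a triangular fuzzy number by a non-negative weight."""
    return tuple(w * x for x in t)

def tfn_add(a, b):
    """Add two triangular fuzzy numbers component-wise."""
    return tuple(x + y for x, y in zip(a, b))

def rank_index(t):
    """Simple average-based ranking index (stand-in for Campos-Gonzalez)."""
    left, mode, right = t
    return (left + 2.0 * mode + right) / 4.0

# Hypothetical posterior probabilities for three classes at one tree node.
posterior = [0.5, 0.3, 0.2]

# Fuzzy-valued loss[i][j]: loss of deciding class i when the true class is j.
loss = [
    [(0.0, 0.0, 0.0), (0.8, 1.0, 1.2), (0.8, 1.0, 1.3)],
    [(0.7, 1.0, 1.1), (0.0, 0.0, 0.0), (0.9, 1.0, 1.2)],
    [(0.9, 1.1, 1.4), (0.8, 1.0, 1.1), (0.0, 0.0, 0.0)],
]

# Expected fuzzy loss of each decision: posterior-weighted sum of fuzzy losses.
expected = []
for i in range(3):
    acc = (0.0, 0.0, 0.0)
    for j in range(3):
        acc = tfn_add(acc, tfn_scale(loss[i][j], posterior[j]))
    expected.append(acc)

# The decision minimizes the ranking index of its expected fuzzy loss.
best = min(range(3), key=lambda i: rank_index(expected[i]))
print("expected fuzzy losses:", expected)
print("chosen class:", best)
```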

Part II - Features, Learning and Classifiers | Pp. 121-128

A New Rejection Strategy for Convolutional Neural Network by Adaptive Topology

Hubert Cecotti; Abdel Belaïd

We present a model that combines supervised learning for the classification stage with a specific unsupervised model for the test stage. It allows the topology of the supervised part to be changed in order to improve the rejection quality. For the recognition of distorted patterns, our strategy relies on the adaptability of the classifier: instead of, for example, training it on additional samples obtained by artificially generating all possible deformations, our approach consists in identifying the type of problematic transformation by changing the localities observed by the principal classifier. The system indeed makes it possible to refine the results to the point of resolving ambiguities for certain confusions, but it still depends on the effectiveness of the initial rejection and on the assumptions made in choosing the class model. As a prospect, if the type of deformation is perfectly known (cut, rotation, shift), it would be possible to introduce new properties into the self-organizing map to speed up its convergence.
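
As a rough point of reference only (a generic confidence-margin reject rule, not the adaptive-topology strategy described above), the sketch below rejects a sample whenever the two highest class scores are too close; the scores and margin are hypothetical.

```python
import numpy as np

def classify_with_reject(scores, margin=0.2):
    """Return the predicted class index, or None (reject) when the gap
    between the two highest scores falls below the margin."""
    order = np.argsort(scores)[::-1]
    if scores[order[0]] - scores[order[1]] < margin:
        return None
    return int(order[0])

# Hypothetical classifier outputs for two samples.
print(classify_with_reject(np.array([0.55, 0.40, 0.05])))  # None (rejected)
print(classify_with_reject(np.array([0.90, 0.06, 0.04])))  # 0 (accepted)
```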

Part II - Features, Learning and Classifiers | Pp. 129-136

Fast PCA and LDA for JPEG Images

Weilong Chen; Meng Joo Er; Shiqian Wu

In this paper, we prove that Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) can be implemented directly in the DCT (Discrete Cosine Transform) domain, and that the results are exactly the same as those obtained in the spatial domain. In some applications, compressed images are desirable to reduce storage requirements. For images compressed with the DCT, e.g., under the JPEG or MPEG standards, PCA and LDA can therefore be applied directly in the DCT domain, so that the inverse DCT can be skipped and the dimensionality of the original data can be reduced up front to cut computational cost.
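
The equivalence rests on the orthonormality of the DCT: it is a rotation of the feature space, so covariance spectra and PCA projections are unchanged. A small numerical check of that claim, on random data and with an assumed 8 × 8 block size, might look as follows.

```python
import numpy as np
from scipy.fft import dctn

rng = np.random.default_rng(0)
images = rng.random((50, 8, 8))          # 50 hypothetical 8x8 gray-level blocks

# Orthonormal 2-D DCT of every block (same family of transform JPEG uses).
dct_images = np.array([dctn(im, norm='ortho') for im in images])

def pca_eigvals(data):
    """Eigenvalues of the sample covariance of the flattened data."""
    flat = data.reshape(len(data), -1)
    flat = flat - flat.mean(axis=0)
    cov = flat.T @ flat / (len(flat) - 1)
    return np.sort(np.linalg.eigvalsh(cov))[::-1]

# The orthonormal DCT is a rotation of the feature space, so the covariance
# spectrum (and hence the PCA subspace) is identical in both domains.
print(np.allclose(pca_eigvals(images), pca_eigvals(dct_images)))  # True
```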

Part II - Features, Learning and Classifiers | Pp. 137-144

A Hybrid ε-Insensitive Learning of Fuzzy Systems

Tomasz Czogala; Jacek M. Leski

First, it is shown that ε-insensitive learning of a fuzzy system can be presented as a combination of an ε-insensitive gradient method and the solution of a system of linear inequalities. Then, a hybrid learning algorithm is introduced. An example of using this algorithm to design a fuzzy model of real ECG data is given. Simulation results show an improvement in the generalization ability of a fuzzy system learned by the new method compared with traditional and other ε-insensitive learning methods.
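
A minimal sketch of the ε-insensitive ingredient itself, applied to a plain linear model rather than the paper's fuzzy system: residuals inside the ε-tube contribute neither loss nor gradient, which is the source of the robustness. All data and step sizes below are made up.

```python
import numpy as np

def eps_insensitive_grad(w, X, y, eps=0.1):
    """Subgradient of sum(max(0, |X @ w - y| - eps)) with respect to w."""
    r = X @ w - y
    active = np.abs(r) > eps              # residuals outside the eps-tube
    return X.T @ (np.sign(r) * active)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.05 * rng.normal(size=200)

w = np.zeros(3)
for _ in range(500):                      # plain constant-step subgradient descent
    w -= 0.001 * eps_insensitive_grad(w, X, y, eps=0.1)
print(w)                                  # roughly recovers [1.0, -2.0, 0.5]
```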

Part II - Features, Learning and Classifiers | Pp. 145-152

The Incremental Method for Discovery of Association Rules

Damian Dudek; Aleksander Zgrzywa

We present a new method for the incremental discovery of association rules which is highly general and independent of the underlying mining algorithm. The heart of the method is the rule maintenance algorithm, which keeps the base of discovered rules as if they had been mined in a single run over the whole transaction database. For more general and flexible results we take into account thresholds on the statistical significance of rules and the influence of time. The method can be used as a learning model in knowledge-based systems with bounded resources, e.g. software agents.
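
The maintenance idea can be illustrated as keeping itemset support counts current as transactions arrive, so that rules derived from the counts look as if they were mined over the whole database in one run. The toy class below (pair itemsets only, invented thresholds) illustrates that bookkeeping and is not the paper's algorithm.

```python
from collections import Counter
from itertools import combinations

class IncrementalRuleMiner:
    """Toy incremental miner maintaining item and item-pair support counts."""

    def __init__(self, min_support=0.3, min_confidence=0.6):
        self.n = 0
        self.item_counts = Counter()
        self.pair_counts = Counter()
        self.min_support = min_support
        self.min_confidence = min_confidence

    def add_transaction(self, items):
        """Incremental step: update the counts for one new transaction."""
        items = sorted(set(items))
        self.n += 1
        self.item_counts.update(items)
        self.pair_counts.update(combinations(items, 2))

    def rules(self):
        """Derive A -> B rules from the maintained counts."""
        out = []
        for (a, b), c in self.pair_counts.items():
            if c / self.n < self.min_support:
                continue
            for x, y in ((a, b), (b, a)):
                if c / self.item_counts[x] >= self.min_confidence:
                    out.append((x, y, c / self.item_counts[x]))
        return out

miner = IncrementalRuleMiner()
for t in [{"bread", "milk"}, {"bread", "butter"}, {"bread", "butter", "milk"}]:
    miner.add_transaction(t)       # the rule base stays current after each update
print(miner.rules())
```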

Part II - Features, Learning and Classifiers | Pp. 153-160

Feature Extraction with Wavelet Transformation for Statistical Object Recognition

Marcin Grzegorzek; Michael Reinhold; Heinrich Niemann

In this paper we present a statistical approach for the localization and classification of 3-D objects in 2-D images with real heterogeneous background. Two-dimensional local feature vectors are computed directly from pixel intensities in square gray-level images using wavelet multiresolution analysis. We use three different resolution levels for the feature computation: local neighborhoods of size 8 × 8 pixels for the first level, 4 × 4 pixels for the second, and 2 × 2 pixels for the third. We then define an object area as a function of 3-D transformations and represent the feature vectors as density functions. Our localization and classification algorithm uses a combination of the object models created for the three resolutions in the training phase. Experiments on a real data set of 42,240 images show that recognition rates are much better when the resolution levels of the wavelet transformation are combined.
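
A rough numpy approximation of the local-feature step, under an assumed grid spacing: for each grid position a 2-D feature (low-pass mean, detail energy) is computed from the 8 × 8, 4 × 4 and 2 × 2 neighborhoods with one Haar analysis step each. This is only a sketch of the multiresolution idea, not the paper's exact wavelet features.

```python
import numpy as np

def haar_step(block):
    """One 2x2 Haar analysis step: low-pass block plus total detail energy."""
    a, b = block[0::2, 0::2], block[0::2, 1::2]
    c, d = block[1::2, 0::2], block[1::2, 1::2]
    low = (a + b + c + d) / 4.0
    detail = ((a - b) ** 2 + (c - d) ** 2 + (a + b - c - d) ** 2).sum()
    return low, detail

def local_features(image, step=8):
    """2-D feature vectors (low-pass mean, detail energy) from 8x8, 4x4
    and 2x2 neighborhoods anchored at a regular grid of positions."""
    feats = {8: [], 4: [], 2: []}
    for r in range(0, image.shape[0] - 8 + 1, step):
        for c in range(0, image.shape[1] - 8 + 1, step):
            for size in (8, 4, 2):
                low, detail = haar_step(image[r:r + size, c:c + size])
                feats[size].append((low.mean(), detail))
    return {k: np.array(v) for k, v in feats.items()}

image = np.random.default_rng(2).random((64, 64))      # hypothetical image
print({size: f.shape for size, f in local_features(image).items()})
```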

Part II - Features, Learning and Classifiers | Pp. 161-168

Digital Filter Design with Constraints in Time and Frequency Domains

Norbert Henzel

This paper describes a new method for the design of linear-phase finite impulse response (FIR) filters. The new approach, based on the ε-insensitive loss function, allows the design process to take into account not only constraints specified in the frequency domain but also constraints on the output (time-domain) signal. The performance of the proposed approach is briefly illustrated with the design of a highpass filter used for ECG baseline wander reduction.
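
For orientation, a conventional linear-phase FIR highpass for baseline wander removal can be designed with the standard window method, as sketched below; this is not the ε-insensitive design of the paper, and the sampling rate and cutoff are assumed values.

```python
import numpy as np
from scipy.signal import firwin, lfilter

fs = 250.0            # assumed ECG sampling rate [Hz]
cutoff = 0.67         # assumed baseline-wander cutoff [Hz]

# An odd number of taps gives a type-I linear-phase FIR suitable for a highpass.
taps = firwin(numtaps=501, cutoff=cutoff, pass_zero=False, fs=fs)

# Apply the filter to a synthetic ECG-like signal contaminated with a slow drift.
t = np.arange(0, 10, 1 / fs)
drift = 0.5 * np.sin(2 * np.pi * 0.2 * t)      # baseline wander
ecg = np.sin(2 * np.pi * 1.2 * t) ** 63        # crude spiky waveform
filtered = lfilter(taps, 1.0, ecg + drift)
print(taps.shape, filtered.shape)
```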

Part II - Features, Learning and Classifiers | Pp. 169-176

Efficient Implementation of Nearest Neighbor Classification

José R. Herrero; Juan J. Navarro

An efficient approach to Nearest Neighbor classification is presented which improves performance by exploiting the ability of superscalar processors to issue multiple instructions per cycle and by using the memory hierarchy adequately. This is accomplished through floating-point arithmetic, which outperforms integer arithmetic, and through block (tiled) algorithms that exploit the data locality of programs, allowing efficient use of the data stored in cache memory.
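
The tiling idea can be sketched as computing distances block by block, so that one block of reference vectors is reused against a whole block of queries while it is still resident in cache. The numpy version below only mimics that blocking structure; the block sizes are arbitrary.

```python
import numpy as np

def nn_classify_blocked(queries, refs, labels, qb=64, rb=256):
    """1-NN labels for `queries`, computing squared distances tile by tile."""
    best_d = np.full(len(queries), np.inf)
    best_i = np.zeros(len(queries), dtype=int)
    for q0 in range(0, len(queries), qb):            # tile over the queries
        Q = queries[q0:q0 + qb]
        sl = slice(q0, q0 + len(Q))
        for r0 in range(0, len(refs), rb):           # tile over the reference set
            R = refs[r0:r0 + rb]
            d = ((Q[:, None, :] - R[None, :, :]) ** 2).sum(axis=2)
            idx = d.argmin(axis=1)
            dmin = d[np.arange(len(Q)), idx]
            improve = dmin < best_d[sl]
            best_d[sl] = np.where(improve, dmin, best_d[sl])
            best_i[sl] = np.where(improve, idx + r0, best_i[sl])
    return labels[best_i]

rng = np.random.default_rng(3)
refs = rng.normal(size=(1000, 16))                   # reference set
labels = rng.integers(0, 3, size=1000)               # class labels 0..2
queries = rng.normal(size=(200, 16))
print(nn_classify_blocked(queries, refs, labels)[:10])
```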

Part II - Features, Learning and Classifiers | Pp. 177-186

Inductive Development of Customer e-Loyalty Theory with Bayesian Networks

Waldemar Jaroński; Koen Vanhoof; José Bloemer

The main objective of this paper is to show the use of Bayesian networks in inductive research applied in an e-loyalty study and to investigate whether e-loyalty theories can be discovered by means of Bayesian networks.

Part II - Features, Learning and Classifiers | Pp. 187-194

Reference Set Size Reduction for 1-NN Rule Based on Finding Mutually Nearest and Mutually Furthest Pairs of Points

Adam Jóźwik; Paweł Kieś

Two algorithms for reference set size reduction are presented and tested on a real data set of large size. The first one, like most existing procedures, is based on the consistency idea, which means that all points from the primary reference set are correctly classified by the 1-NN rule operating with the reduced set. The second algorithm requires dividing the reference set into a number of subsets and replacing these subsets by their gravity centers. Each gravity center is assigned the same label as the majority of points in the corresponding subset. This algorithm enables condensation of the reference set to a desired size; however, the resulting sets do not offer as good classification quality as other existing methods.

As opposed to the second algorithm, the first one does not enable control of the reduced set size. It is shown that combining both algorithms promises equally good performance while allowing control of the size of the obtained condensed sets.
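
The condensation step of the second algorithm can be sketched as follows, with k-means used here merely as a stand-in for the paper's partitioning based on mutually nearest and mutually furthest pairs: each subset is replaced by its gravity center, labeled by the subset's majority class.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def condense(X, y, n_centers=10, seed=0):
    """Replace the reference set (X, y) by gravity centers with majority labels."""
    _, assign = kmeans2(X, n_centers, minit='points', seed=seed)
    centers, center_labels = [], []
    for k in range(n_centers):
        members = assign == k
        if not members.any():
            continue
        centers.append(X[members].mean(axis=0))                  # gravity center
        center_labels.append(np.bincount(y[members]).argmax())   # majority label
    return np.array(centers), np.array(center_labels)

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 1, (300, 2)), rng.normal(3, 1, (300, 2))])
y = np.array([0] * 300 + [1] * 300)
Xc, yc = condense(X, y)
print(Xc.shape, yc)        # 1-NN would now use 10 prototypes instead of 600
```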

Part II - Features, Learning and Classifiers | Pp. 195-202