Publications catalog - books



Computer Recognition Systems: Proceedings of the 4th International Conference on Computer Recognition Systems CORES ’05

Marek Kurzyński; Edward Puchała; Michał Woźniak; Andrzej Żołnierek (eds.)

Abstract/Description - provided by the publisher

Not available.

Keywords - provided by the publisher

Pattern Recognition; Artificial Intelligence (incl. Robotics); Appl. Mathematics/Computational Methods of Engineering; Applications of Mathematics; Information Systems and Communication Service

Availability

Detected institution: not detected
Publication year: 2005
Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-25054-8

Electronic ISBN

978-3-540-32390-7

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© Springer-Verlag Berlin Heidelberg 2005

Table of contents

Learning from a Test Set

Piotr Juszczak; Robert P. W. Duin

Classification of partially labeled data requires linking the unlabeled input distribution P(x) with the conditional distribution P(y|x) obtained from the labeled data. The latter should, for example, vary little in high-density regions. The key problem is to articulate a general principle behind this and other such reasonable assumptions. In this paper we provide a new approach to semi-supervised learning based on the stability of estimated labels for the unlabeled dataset, e.g. a large test set, and the maximization of the mutual label relation. No clustering assumptions are required and the approach remains tractable even for continuous marginal class densities. We demonstrate the approach on synthetic examples and UCI repository datasets.

Part II - Features, Learning and Classifiers | Pp. 203-210

Image Clustering with Median and Myriad Spatial Constraint Enhanced FCM

Jacek Kawa; Ewa Pietka

In the current study two approaches to the clustering problem have been tested. First, a sequential analysis of filtering and the fuzzy c-means (FCM) method is performed. Then, the standard FCM has been modified by adding to the objective function a second term that formulates a spatial constraint. In both approaches mean, median, and myriad are implemented. The analysis has been performed on a synthetic image and clinical images.

Part II - Features, Learning and Classifiers | Pp. 211-218
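As background for the entry above: standard fuzzy c-means, the baseline the authors modify, can be sketched in a few lines of NumPy. This is a minimal illustration of plain FCM only, under the usual Euclidean-distance formulation; the spatial-constraint term and the mean/median/myriad variants described in the abstract are not implemented here.

```python
import numpy as np

def fcm(X, c, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means: X is (n_samples, n_features), c the number of
    clusters, m > 1 the fuzzifier. Returns memberships U (n, c) and centers."""
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)            # each row of U sums to 1
    for _ in range(n_iter):
        W = U ** m                               # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                    # guard against division by zero
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True) # standard FCM membership update
    return U, centers
```

The alternation of the weighted-mean center update and the membership update u_ik ∝ d_ik^(-2/(m-1)) is what the paper's spatial variant modifies: a neighbourhood penalty is added to the objective before these updates are re-derived.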

Feature Selection in Unsupervised Context: Clustering Based Approach

Artur Klepaczko; Andrzej Materka

In this paper we present a novel feature selection method that is applicable in unsupervised learning tasks. The method is based on clustering quality measures, which reflect different aspects of clustering performance. Sequential Floating Forward Search algorithm is employed to search through the original feature space for the best possible subset. Main stress has been put on the objectivism of the new technique, so that it could be applied in various classification tasks. Results of experiments with texture images are presented in order to confirm effectiveness of the method.

Part II - Features, Learning and Classifiers | Pp. 219-226
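A heavily simplified sketch of the idea behind clustering-based feature selection such as the entry above: greedily grow a feature subset, scoring each candidate subset by a clustering-quality measure. Here the measure is an assumed Calinski-Harabasz-style scatter ratio over a small hand-rolled k-means; the paper's actual quality measures and the "floating" step of SFFS are omitted.

```python
import numpy as np

def kmeans(X, k, n_iter=50):
    """Tiny Lloyd's k-means with deterministic, spread-out initialization."""
    order = np.argsort(X[:, 0])
    idx = np.linspace(0, len(X) - 1, k).astype(int)
    centers = X[order[idx]].astype(float)
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def clustering_quality(X, k=2):
    """Between-cluster vs. within-cluster scatter ratio (higher is better)."""
    labels = kmeans(X, k)
    mu = X.mean(axis=0)
    between = within = 0.0
    for j in range(k):
        Xj = X[labels == j]
        if len(Xj) == 0:
            return 0.0
        between += len(Xj) * ((Xj.mean(0) - mu) ** 2).sum()
        within += ((Xj - Xj.mean(0)) ** 2).sum()
    return between / max(within, 1e-12)

def forward_select(X, n_keep, k=2):
    """Greedy forward search over feature indices (no floating step)."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < n_keep:
        best = max(remaining,
                   key=lambda f: clustering_quality(X[:, selected + [f]], k))
        selected.append(best)
        remaining.remove(best)
    return selected
```

With one strongly bimodal feature and one pure-noise feature, the search keeps the bimodal one, since it yields much tighter, better-separated clusters.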

Conceptual Ontological Object Knowledge Base and Language

Marek Krótkiewicz; Krystian Wojtkiewicz

This paper deals with AI in aspect of knowledge acquisition and ontology base structure. The core of the system was designed in an object model to optimize it for further processing. Direct concept linking was used to assure fast semantic network processing. Predefined attributes used in the core minimize the number of basic connections within the ontology and help in inference. The system is assumed to generate questions and to specify the knowledge. The AI system defined in this way opens a possibility for better understanding of such basic human mind mechanisms as learning or analyzing.

Part II - Features, Learning and Classifiers | Pp. 227-234

A Method of Supervised Discrimination of Textures Based on Serial Statistical Tests

Juliusz L. Kulikowski; Malgorzata Przytulska; Diana Wierzbicka

A new type of learning algorithm for texture recognition based on serial statistical tests is presented. It is assumed that a texture can be formally represented by a multi-component random vector whose probabilistic characteristics are, in general, a priori unknown. Discrimination of textures is thus equivalent to discrimination of random vectors with different but a priori unknown statistical properties. For this purpose, non-parametric statistical tests based on serial statistics are used. Construction of serial statistics requires a linear ordering of the multi-dimensional observation space. The method is illustrated by numerical examples.

Part II - Features, Learning and Classifiers | Pp. 235-242

Approximation Algorithm for the Argument Reduction Problem

Piotr Kułaga; Piotr Sapiecha; Krzysztof Sęp

This paper proposes a new method of solving the argument reduction problem. Our method is different to the classical approach using the greedy algorithm, independently invented by Lovasz, Johnson, and Chvatal. However, sometimes the classical method does not produce minimal sets in the sense of cardinality. According to the results of computer tests, better results can be achieved by application of our method in combination with the classical method. Therefore, improvements are found in the quality of solutions when it is applied as a post-processing method.

Part II - Features, Learning and Classifiers | Pp. 243-248
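The classical method the abstract refers to is the greedy set-cover heuristic independently due to Lovász, Johnson, and Chvátal. A minimal sketch, together with a simple redundancy-pruning pass as one hypothetical post-processing step (the paper's actual refinement may differ):

```python
def greedy_set_cover(universe, subsets):
    """Classical greedy heuristic: repeatedly pick the subset covering
    the largest number of still-uncovered elements."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(subsets, key=lambda s: len(subsets[s] & uncovered))
        if not subsets[best] & uncovered:
            raise ValueError("universe cannot be covered")
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen

def prune(chosen, universe, subsets):
    """Post-processing: drop any chosen subset whose elements are already
    covered by the remaining chosen subsets (illustrative only)."""
    result = list(chosen)
    for s in list(result):
        rest = set().union(*(subsets[t] for t in result if t != s))
        if set(universe) <= rest:
            result.remove(s)
    return result
```

Greedy alone is only guaranteed to be within a logarithmic factor of the minimum cover, which is why a cheap pruning pass over its output can sometimes shrink the solution.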

Spread Histogram — A Method for Calculating Spatial Relations Between Objects

Halina Kwasnicka; Mariusz Paradowski

This paper presents a novel approach, called Spread Histogram, for calculating spatial relations between objects. It allows such relations to be determined. Additionally, the method cooperates very well with standard histogram methods, such as the Histogram of Angles, for determining directional spatial relations.

Part II - Features, Learning and Classifiers | Pp. 249-256

Comparing Modification Operators Used in Clustering Algorithm Based on a Sequence of Discriminant Rules

Dariusz Mazur

Clustering is a very widely applied data exploration technique. It relies on clustering algorithms whose usefulness depends strictly on the form and style of the incoming data. This article compares modification operators in evolutionary algorithms used for clustering symbolic data. The clustering method is based on a list of decision rules.

Part II - Features, Learning and Classifiers | Pp. 257-262

Pairwise Selection of Features and Prototypes

Elżbieta Pekalska; Artsiom Harol; Carmen Lai; Robert P. W. Duin

Learning from given patterns is realized by learning from their appropriate representations. This is usually practiced either by defining a set of features or by measuring proximities between pairs of objects. Both approaches are problem dependent and aim at the construction of some representation space, where discrimination functions can be defined.

In most situations, some feature reduction or prototype selection is mandatory. In this paper, a pairwise selection for creating a suitable representation space is proposed. To determine an informative set of features (or prototypes), the correlations between feature pairs are taken into account. By this, some dependencies are detected, while overtraining is avoided as the criterion is evaluated in two-dimensional feature spaces. Several experiments show that for small sample size problems, the proposed algorithm can outperform traditional selection methods.

Part II - Features, Learning and Classifiers | Pp. 271-278
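One way to sketch the pairwise idea from the entry above: score every feature pair by a criterion evaluated in its two-dimensional space, then accumulate features from the best-scoring pairs. The Fisher-style two-class separability ratio used below is an assumption for illustration; the paper's actual pairwise criterion may differ.

```python
import numpy as np
from itertools import combinations

def pair_score(X, y, i, j):
    """Two-class separability of features (i, j) evaluated jointly in 2-D:
    squared distance between class means over pooled within-class variance."""
    A = X[y == 0][:, [i, j]]
    B = X[y == 1][:, [i, j]]
    d2 = ((A.mean(0) - B.mean(0)) ** 2).sum()
    s = A.var(0).sum() + B.var(0).sum()
    return d2 / max(s, 1e-12)

def pairwise_select(X, y, n_keep):
    """Rank all feature pairs by the 2-D criterion, then collect features
    from the top pairs until n_keep features are selected."""
    pairs = sorted(combinations(range(X.shape[1]), 2),
                   key=lambda p: pair_score(X, y, *p), reverse=True)
    selected = []
    for i, j in pairs:
        for f in (i, j):
            if f not in selected and len(selected) < n_keep:
                selected.append(f)
        if len(selected) >= n_keep:
            break
    return selected
```

Because each criterion evaluation happens in only two dimensions, the scores stay reliable even for small sample sizes, which mirrors the paper's motivation for avoiding overtraining.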

Evolutionary Method in Grouping of Units

Henryk Potrzebowski; Jarosław Stańczak; Krzysztof Sęp

This paper deals with the clustering problem in a setting where the order of elements plays a pivotal role. This formulation is useful for a wide range of Decision Support System (DSS) applications. The proposed clustering method consists of two stages. The first is a data matrix reorganization stage, using a specialized evolutionary algorithm. The second stage is a final clustering step, performed using a simple clustering method.

Part II - Features, Learning and Classifiers | Pp. 279-286