Publications catalog - books



Rough Sets, Fuzzy Sets, Data Mining, and Granular Computing: 10th International Conference, RSFDGrC 2005, Regina, Canada, August 31 - September 3, 2005, Proceedings, Part I

Dominik Ślęzak ; Guoyin Wang ; Marcin Szczuka ; Ivo Düntsch ; Yiyu Yao (eds.)

Conference: 10th International Workshop on Rough Sets, Fuzzy Sets, Data Mining, and Granular-Soft Computing (RSFDGrC) . Regina, SK, Canada . August 31, 2005 - September 3, 2005

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Artificial Intelligence (incl. Robotics); Information Storage and Retrieval; Database Management; Mathematical Logic and Formal Languages; Computation by Abstract Devices; Pattern Recognition

Availability

Detected institution: none | Year of publication: 2005 | Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-28653-0

Electronic ISBN

978-3-540-31825-5

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© Springer-Verlag Berlin Heidelberg 2005

Table of contents

Dependency Bagging

Yuan Jiang; Jin-Jiang Ling; Gang Li; Honghua Dai; Zhi-Hua Zhou

In this paper, a new variant of Bagging named DepenBag is proposed. This algorithm first obtains bootstrap samples. Then it employs a causal discoverer to induce from each sample a dependency model expressed as a Directed Acyclic Graph (DAG). Attributes with no connection to the class attribute in any of the DAGs are then removed. Finally, a component learner is trained on each of the resulting samples to constitute the ensemble. An empirical study shows that DepenBag is effective in building ensembles of nearest neighbor classifiers.

- Machine Learning | Pp. 491-500

Combination of Metric-Based and Rule-Based Classification

Arkadiusz Wojna

We consider two classification approaches. The metric-based approach induces the distance measure between objects and classifies new objects on the basis of their nearest neighbors in the training set. The rule-based approach extracts rules from the training set and uses them to classify new objects. In the paper we present a model that combines both approaches. In the combined model the notions of rule, rule minimality and rule consistency are generalized to metric-dependent form.

An effective polynomial algorithm implementing the classification model based on minimal consistent rules was proposed in [2]. We show that this algorithm preserves its properties when applied to metric-based rules. This allows us to combine the rule-based algorithm with the k nearest neighbor (k-nn) classification method. In the combined approach the rule-based algorithm takes the role of the nearest neighbor voting model. The presented experiments with real data sets show that the combined classification model has higher accuracy than either single model.
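One way to picture the combination (an illustrative sketch under assumed definitions, not the algorithm of [2]): each training object defines a metric rule, the largest ball around it containing no object of another class, a metric analogue of a minimal consistent rule. A query is classified by voting among the rules that cover it, falling back to plain k-nn voting when no rule does.

```python
import math
from collections import Counter

def dist(a, b):
    return math.dist(a, b)

def rule_radius(x, label, train):
    # radius of the metric rule at x: distance to its nearest "enemy"
    # (closest training object with a different class)
    enemies = [dist(x, z) for z, c in train if c != label]
    return min(enemies) if enemies else math.inf

def classify(query, train, k=3):
    # vote among metric rules covering the query
    covering = [c for x, c in train if dist(query, x) < rule_radius(x, c, train)]
    if covering:
        return Counter(covering).most_common(1)[0][0]
    # fallback: plain k-nearest-neighbor vote
    neighbors = sorted(train, key=lambda t: dist(query, t[0]))[:k]
    return Counter(c for _, c in neighbors).most_common(1)[0][0]
```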

- Machine Learning | Pp. 501-511

Combining Classifiers Based on OWA Operators with an Application to Word Sense Disambiguation

Cuong Anh Le; Van-Nam Huynh; Hieu-Chi Dam; Akira Shimazu

This paper proposes a framework for combining classifiers based on OWA operators in which each individual classifier uses a distinct representation of the objects to be classified. It is shown that this framework yields several commonly used decision rules, but without some of the strong assumptions made in the work by Kittler et al. [7]. As an application, we apply the proposed framework of classifier combination to the problem of word sense disambiguation (WSD for short). To this end, we experimentally design a set of individual classifiers, each of which corresponds to a distinct representation type of context considered in the WSD literature; the proposed combination strategies are then experimentally tested on datasets for four polysemous words and compared with previous studies.
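A minimal sketch of OWA-based combination (the class names and scores below are illustrative, not from the paper): each classifier outputs a score per sense; for every sense the scores are sorted in descending order and combined with a fixed OWA weight vector. Weights (1, 0, ..., 0) reproduce the max rule, (0, ..., 0, 1) the min rule, and uniform weights the mean rule.

```python
def owa(scores, weights):
    # OWA operator: weights apply to the *sorted* inputs, not to fixed sources
    ordered = sorted(scores, reverse=True)
    return sum(w * s for w, s in zip(weights, ordered))

def combine(classifier_outputs, weights):
    # classifier_outputs: one dict {sense: score} per individual classifier
    classes = classifier_outputs[0].keys()
    fused = {c: owa([out[c] for out in classifier_outputs], weights) for c in classes}
    return max(fused, key=fused.get)
```

Note how changing only the weight vector switches the decision rule: with the scores below, the max rule picks 'a' while the min rule picks 'b'.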

- Machine Learning | Pp. 512-521

System Health Prognostic Model Using Rough Sets

Zbigniew M. Wojcik

A new rough sets data fusion model is presented, fusing measured health degradation levels with the influences on these degradations. The data fusion model is a system of matrix inequalities over rough sets covariances. The rough sets variance makes it possible to assess health degradations explicitly, assuring an increased signal-to-noise ratio and thus high processing accuracy. Adaptation mechanisms are provided by a new machine learning approach that determines the weights of the terms of the inequalities at the times of key events found in the historical data. Prognosis is inherently time-sequenced, so methods based on time sequences are incorporated, e.g. a new data fusion model exploiting the time-dependency of events, assuring high quality of prediction. Deterministic prognosis proceeds by estimating the health degradation pattern in question, finding a match with a degradation pattern in the historical data, and then tracing this historical degradation pattern up to its conclusion. The model is hierarchical: the right-hand sides of the data fusion expressions substitute for endogenous variables of higher-level expressions.
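The deterministic prognosis step can be sketched as follows (a hypothetical toy version, ignoring the rough-sets fusion machinery): estimate the recent degradation pattern, find the closest-matching window in a historical degradation trace, and trace that historical pattern forward.

```python
def best_match(history, pattern):
    # index of the window in `history` with the smallest squared error
    # against `pattern`
    w = len(pattern)
    def err(i):
        return sum((history[i + j] - pattern[j]) ** 2 for j in range(w))
    return min(range(len(history) - w + 1), key=err)

def prognose(history, pattern, horizon):
    # continue from the end of the matched window for `horizon` steps
    i = best_match(history, pattern)
    start = i + len(pattern)
    return history[start:start + horizon]
```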

- Machine Learning | Pp. 522-531

Live Logic: Method for Approximate Knowledge Discovery and Decision Making

Marina Sapir; David Verbel; Angeliki Kotsianti; Olivier Saidi

Live Logic is an integrated approach for supporting learning and decision making under uncertainty. The approach covers both the induction of probabilistic logical hypotheses from known examples and the deduction of a plausible solution for an unknown case based on the induced hypotheses.

The induction method generalizes empirical data, discovering statistical patterns, expressed in logical language. The deduction method uses multidimensional ranking to reconcile contradictory patterns exhibited by a particular case.

The method was applied to clinical data of patients with prostate cancer who underwent prostatectomy. The goal was to predict biochemical failure based on the pre- and post-operative status of the patient. The patterns found by the method proved to be insightful from the pathologist's point of view, and most of them were confirmed on the control dataset.

In our experiments, the predictive accuracy of Live Logic was also higher than that of the other tested methods.

- Approximate and Uncertain Reasoning | Pp. 532-540

Similarity, Approximations and Vagueness

Patrick Doherty; Witold Łukaszewicz; Andrzej Szałas

The relation of similarity is essential in understanding and developing frameworks for reasoning with vague and approximate concepts. There is a wide spectrum of choice as to what properties we associate with similarity, and such choices determine the nature of the vague and approximate concepts defined in terms of these relations. Additionally, robotic systems naturally have to deal with vague and approximate concepts due to limitations in their reasoning and sensor capabilities. Halpern [1] introduces the use of subjective and objective states in a modal logic, formalizing vagueness and distinctions in transitivity when an agent reasons in the context of sensory and other limitations. He also relates these ideas to a solution to the Sorites and other paradoxes. In this paper, we generalize and apply the idea of similarity and tolerance spaces [2,3,4,5], presenting a means of constructing approximate and vague concepts from such spaces and an explicit way to distinguish between an agent's objective and subjective states. We also show how some of Halpern's intuitions can be used with similarity spaces to formalize the above-mentioned Sorites and other paradoxes.
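A minimal sketch of a tolerance space (the numeric universe and threshold are assumptions for illustration): a reflexive, symmetric, but not necessarily transitive similarity relation, and the lower/upper approximations of a concept built from similarity neighborhoods. The lack of transitivity is exactly what drives the Sorites paradox: each n is similar to n+1, yet 1 is not similar to 100.

```python
def neighborhood(x, universe, eps=1):
    # tolerance relation: reflexive and symmetric, but not transitive
    return {y for y in universe if abs(x - y) <= eps}

def lower_approx(concept, universe, eps=1):
    # elements whose whole neighborhood lies inside the concept
    return {x for x in universe if neighborhood(x, universe, eps) <= concept}

def upper_approx(concept, universe, eps=1):
    # elements whose neighborhood touches the concept
    return {x for x in universe if neighborhood(x, universe, eps) & concept}
```

The gap between the two approximations is the boundary region, where membership in the vague concept is undetermined from the agent's (subjective) perspective.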

- Approximate and Uncertain Reasoning | Pp. 541-550

Decision Theory = Performance Measure Theory + Uncertainty Theory

Eugene Eberbach

Decision theory is typically defined as the combination of utility theory and probability theory. In this paper we generalize decision theory to the combination of performance measure theory and uncertainty theory. Intelligent agents look for approximately optimal decisions under bounded resources and uncertainty. The $-calculus process algebra for problem solving applies cost performance measures to converge to optimal solutions with minimal problem-solving costs, and allows probabilities, fuzzy sets and rough sets to be incorporated to deal with uncertainty and incompleteness.
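The generalized view can be sketched with the classical special case (a hedged illustration, not the $-calculus itself): a decision is the action minimizing an expected cost (the performance measure), where the expectation is taken under an uncertainty model, here a discrete probability distribution; swapping the cost function or the uncertainty model swaps in other performance/uncertainty theories.

```python
def expected_cost(action, states, prob, cost):
    # performance measure aggregated under a probabilistic uncertainty model
    return sum(prob[s] * cost(action, s) for s in states)

def best_decision(actions, states, prob, cost):
    # optimal decision = argmin of expected cost
    return min(actions, key=lambda a: expected_cost(a, states, prob, cost))
```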

- Approximate and Uncertain Reasoning | Pp. 551-560

The Graph-Theoretical Properties of Partitions and Information Entropy

Cungen Cao; Yuefei Sui; Youming Xia

The information entropy, as a measurement of the average amount of information contained in an information system, is used in the classification of objects and the analysis of information systems. The information entropy of a partition is non-increasing when the partition is refined, and was related to rough sets by Wong and Ziarko. The partitions and the information entropy have some graph-theoretical properties. Given a non-empty universe, all the partitions on it are taken as nodes, and a relation between partitions is defined and taken as edges; the resulting graph represents the connections between partitions on the universe. According to the values of the information entropy of partitions, a directed graph is then defined on this graph. It is proved that there is a set of partitions with the minimal entropy and a set of partitions with the maximal entropy, and that the entropy is non-decreasing on any directed path from a partition with the minimal entropy to one of the partitions with the maximal entropy. Hence the information entropy of partitions is represented in a clearly structured way.
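The monotonicity claim above depends on the entropy notion used; one notion with exactly this behaviour is the rough entropy of a partition, E(P) = sum over blocks B of (|B|/|U|) * log2 |B|, which is non-increasing under refinement because blocks can only shrink. Using it here is an assumption for illustration.

```python
import math

def rough_entropy(partition):
    # partition: list of blocks (lists of elements) covering the universe
    n = sum(len(block) for block in partition)
    return sum(len(block) / n * math.log2(len(block)) for block in partition)
```

The coarsest partition (one block) attains the maximum log2 |U|, and the finest (all singletons) attains the minimum 0, matching the extremal sets of partitions mentioned in the abstract.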

- Probabilistic Network Models | Pp. 561-570

A Comparative Evaluation of Rough Sets and Probabilistic Network Algorithms on Learning Pseudo-independent Domains

Jae-Hyuck Lee

This study provides a comparison between rough sets and probabilistic network algorithms applied to learning a pseudo-independent (PI) model, a type of probabilistic model that is hard to learn by common probabilistic learning algorithms based on search heuristics. The experimental results from this study show that the rough sets algorithm outperforms the common probabilistic network method in learning a PI model. This indicates that the rough sets algorithm can be applied to learning PI domains.

- Probabilistic Network Models | Pp. 571-580

On the Complexity of Probabilistic Inference in Singly Connected Bayesian Networks

Dan Wu; Cory Butz

In this paper, we revisit the consensus on the computational complexity of exact inference in Bayesian networks. We point out that even in singly connected Bayesian networks, which are conventionally believed to admit efficient inference algorithms, the computational complexity is still NP-hard.

- Probabilistic Network Models | Pp. 581-590