Publications catalogue - books

Symbolic and Quantitative Approaches to Reasoning with Uncertainty: 8th European Conference, ECSQARU 2005, Barcelona, Spain, July 6-8, 2005, Proceedings

Lluís Godo (ed.)

Conference: 8th European Conference on Symbolic and Quantitative Approaches to Reasoning with Uncertainty (ECSQARU). Barcelona, Spain. July 6-8, 2005

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Artificial Intelligence (incl. Robotics); Mathematical Logic and Formal Languages

Availability
Detected institution | Year of publication | Browse
Not detected | 2005 | SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-27326-4

Electronic ISBN

978-3-540-31888-0

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© Springer-Verlag Berlin Heidelberg 2005

Table of contents

A Notion of Comparative Probabilistic Entropy Based on the Possibilistic Specificity Ordering

Didier Dubois; Eyke Hüllermeier

In this paper, we reconsider the problem of deciding whether one probability distribution is more informative (in the sense of representing a less indeterminate situation) than another. Instead of using well-established information measures such as the Shannon entropy, however, we take up the idea of comparing probability distributions in a qualitative way. More specifically, we focus on a natural partial ordering induced by what is called the “peakedness” of a distribution. Moreover, there is a close connection between this ordering of probability distributions and the standard specificity ordering on the possibility distributions that can be constructed from them. The main result of the paper is a proof showing that possibilistic specificity is consistent with probabilistic entropy, in the sense that the (total) ordering defined by the latter refines the (partial) ordering defined by the former.

- Uncertainty Measures | Pp. 848-859
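
To make the comparison described in the abstract concrete, the following toy sketch (an illustration under simple assumptions, not code from the paper) maps two probability distributions over the same three outcomes to possibility distributions via the standard probability-possibility transform and checks that the distribution which is more specific in the possibilistic sense also has the lower Shannon entropy.

```python
# Toy illustration of the specificity/entropy consistency discussed above.
# Assumptions: finite distributions over the same domain; the transform
# pi(x) = sum of all probabilities p(y) with p(y) <= p(x).
import math

def possibility_transform(p):
    """Map a probability distribution to its induced possibility distribution."""
    return [sum(q for q in p if q <= pi) for pi in p]

def shannon_entropy(p):
    return -sum(q * math.log2(q) for q in p if q > 0)

def more_specific(pi_a, pi_b, tol=1e-12):
    """pi_a is at least as specific as pi_b if it is pointwise no larger."""
    return all(a <= b + tol for a, b in zip(pi_a, pi_b))

p = [0.7, 0.2, 0.1]   # a peaked (more informative) distribution
q = [0.4, 0.3, 0.3]   # a flatter (less informative) one

pi_p, pi_q = possibility_transform(p), possibility_transform(q)
print(more_specific(pi_p, pi_q))                 # True: p is more specific
print(shannon_entropy(p) < shannon_entropy(q))   # True: consistent with entropy
```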

Consonant Random Sets: Structure and Properties

Enrique Miranda

In this paper, we investigate consonant random sets from the point of view of lattice theory. We introduce a new definition of consonancy and study its relationship with possibility measures as upper probabilities. This allows us to improve a number of results from the literature. Finally, we study the suitability of consonant random sets as models of the imprecise observation of random variables.

- Uncertainty Measures | Pp. 860-871
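
The sketch below (a toy example under its own assumptions, not the paper's construction) illustrates the classical link mentioned in the abstract: when the focal sets of a random set are nested (consonant), its plausibility function is maxitive, i.e. it coincides with the possibility measure induced by the contour function.

```python
# Toy consonant random set on a finite space: nested focal sets with masses.
# Check that Pl(A) = max over x in A of the contour function pi(x) = Pl({x}).
from itertools import chain, combinations

universe = {'a', 'b', 'c', 'd'}
focal = [({'a'}, 0.5), ({'a', 'b'}, 0.3), ({'a', 'b', 'c', 'd'}, 0.2)]  # nested

def plausibility(A):
    # total mass of all focal sets that intersect A
    return sum(m for F, m in focal if F & A)

def contour(x):
    return plausibility({x})

def nonempty_subsets(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(1, len(s) + 1))

for subset in nonempty_subsets(universe):
    A = set(subset)
    assert abs(plausibility(A) - max(contour(x) for x in A)) < 1e-12

print({x: contour(x) for x in sorted(universe)})
# contour: a -> 1.0, b -> 0.5, c -> 0.2, d -> 0.2 (a possibility distribution)
```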

Comparative Conditional Possibilities

Giulianella Coletti; Barbara Vantaggi

Any dynamic decision model or procedure for the acquisition of knowledge must deal with conditional events and should refer to (not necessarily structured) domains containing only the elements and the information of interest. We consider conditional possibility theory as a numerical reference model for handling uncertainty and for studying binary relations, defined on an arbitrary set of conditional events, that express the idea of “no more possible than”. We give necessary conditions for the representability of a relation by a T-conditional possibility, for any triangular norm T, and we provide a complete characterization in terms of necessary and sufficient conditions for the representability by a conditional possibility (i.e. when T is the minimum).

- Uncertainty Measures | Pp. 872-883
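
For the min-based case singled out at the end of the abstract, the following check (an illustrative sketch using the usual min-based conditioning rule, which may differ in detail from the paper's definition) derives conditional possibilities from an unconditional possibility distribution on a three-element frame and verifies the defining equation min(Π(A | B), Π(B)) = Π(A ∩ B).

```python
# Illustrative sketch: min-based conditioning of a possibility measure on a
# small frame, and a check of min(Pi(A|B), Pi(B)) == Pi(A intersect B).
from itertools import chain, combinations

pi = {'a': 1.0, 'b': 0.7, 'c': 0.3}          # a possibility distribution

def Pi(A):
    return max((pi[x] for x in A), default=0.0)

def cond(A, B):
    # min-based conditional possibility:
    # Pi(A|B) = 1 if Pi(A & B) == Pi(B) > 0, and Pi(A & B) otherwise
    ab = Pi(A & B)
    return 1.0 if ab == Pi(B) > 0 else ab

def subsets(s):
    s = list(s)
    return [set(c) for c in chain.from_iterable(combinations(s, r)
                                                for r in range(len(s) + 1))]

for A in subsets(pi):
    for B in subsets(pi):
        if Pi(B) > 0:
            assert min(cond(A, B), Pi(B)) == Pi(A & B)

print(cond({'b'}, {'b', 'c'}))   # 1.0: 'b' is fully possible once 'a' is ruled out
```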

Second-Level Possibilistic Measures Induced by Random Variables

Ivan Kramosil

Given a real-valued random variable defined on a probability space and given a subset A of the space Ω of all elementary random events, an elementary event ω ∈ Ω is called possibly favourable to A with respect to the random variable if it belongs to a certain subset of Ω defined in terms of that variable. The mapping Π ascribing to each A ⊂ Ω the probability of the set of all elementary random events possibly favourable to A with respect to the variable defines a possibilistic measure on the power set of all subsets of Ω. Having at hand two random variables defined on the same probability space and repeating this reasoning with the first variable replaced by the second, we arrive at the idea of second-level possibilistic measures induced by random variables.

- Uncertainty Measures | Pp. 884-895

Hybrid Bayesian Estimation Trees Based on Label Semantics

Zengchang Qin; Jonathan Lawry

The linguistic decision tree (LDT) [7] is a classification model based on a random-set semantics referred to as label semantics [4]. Each branch of a trained LDT is associated with a probability distribution over the classes. In this paper, two hybrid learning models combining linguistic decision trees and a fuzzy Naive Bayes classifier are proposed. In the first model, an unlabelled instance is classified according to the Bayesian estimation given a single LDT. In the second model, a set of disjoint LDTs is used as Bayesian estimators. Experimental studies show that the first hybrid model has both better accuracy and better transparency than fuzzy Naive Bayes and LDTs at shallow tree depths. The second model has performance equivalent to that of the LDT model.

- Probabilistic Classifiers | Pp. 896-907

Selective Gaussian Naïve Bayes Model for Diffuse Large-B-Cell Lymphoma Classification: Some Improvements in Preprocessing and Variable Elimination

Andrés Cano; Javier G. Castellano; Andrés R. Masegosa; Serafín Moral

In this work, we present some significant improvements for feature selection in wrapper methods. There are two of them: the first consists in a proper preordering of the feature set, and the second in the application of an irrelevant-feature elimination method in which the irrelevance condition depends on the feature subset partially selected so far by the wrapper method. We validate these approaches on the DLBCL subtype classification problem and show that these two changes bring an important improvement in both the computational cost and the classification accuracy of wrapper methods in this domain.

- Probabilistic Classifiers | Pp. 908-920
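
The sketch below is a rough, hedged illustration of the two ideas summarised above, with the details filled in by assumption rather than taken from the paper: candidate features are preordered with a mutual-information filter score, added greedily to a Gaussian Naive Bayes wrapper, and any candidate that fails to improve the cross-validated accuracy of the partially selected subset is treated as irrelevant and discarded from further consideration. It relies on scikit-learn; the scoring and elimination criteria are generic choices, not the authors' exact ones.

```python
# Hedged sketch: preordered greedy wrapper selection with a simple
# irrelevant-feature elimination rule (assumed details, not the paper's).
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score
from sklearn.feature_selection import mutual_info_classif

def wrapper_select(X, y, cv=5):
    order = np.argsort(-mutual_info_classif(X, y))   # filter-based preordering
    selected, best_score = [], 0.0
    for f in order:
        candidate = selected + [int(f)]
        score = cross_val_score(GaussianNB(), X[:, candidate], y, cv=cv).mean()
        if score > best_score:
            selected, best_score = candidate, score
        # otherwise f is deemed irrelevant given the current subset and is
        # simply discarded (never reconsidered), which keeps the cost low
    return selected, best_score

if __name__ == "__main__":
    from sklearn.datasets import make_classification
    X, y = make_classification(n_samples=200, n_features=30, n_informative=5,
                               random_state=0)
    features, acc = wrapper_select(X, y)
    print(features, round(acc, 3))
```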

Towards a Definition of Evaluation Criteria for Probabilistic Classifiers

Nahla Ben Amor; Salem Benferhat; Zied Elouedi

This paper deals with the evaluation of “probabilistic” classifiers, where the result of the classification is not a unique class but a probability distribution over the set of possible classes. Our aim is to propose alternative definitions of the well-known percent of correct classification (PCC) for probabilistic classifiers. The evaluation functions are called percent of probabilistic-based correct classification (PPCC). We first propose natural properties that an evaluation function should satisfy. Then, we extend these properties to the case where a semantic distance exists between the different classes. An example of an evaluation function based on the Euclidean distance is provided.

- Probabilistic Classifiers | Pp. 921-931
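
As a concrete illustration of the kind of evaluation function discussed above, the snippet below contrasts the usual PCC with one plausible Euclidean-distance-based variant; the normalisation used here is an assumption for illustration, not necessarily the authors' PPCC definition.

```python
# PCC vs. a simple Euclidean-distance-based score for probabilistic classifiers.
# The normalisation by sqrt(2) is an illustrative choice, not the paper's PPCC.
import numpy as np

def pcc(pred_probs, true_idx):
    # fraction of instances whose most probable class is the true class
    return float(np.mean(np.argmax(pred_probs, axis=1) == true_idx))

def ppcc_euclidean(pred_probs, true_idx):
    n, k = pred_probs.shape
    targets = np.eye(k)[true_idx]                      # one-hot true classes
    d = np.linalg.norm(pred_probs - targets, axis=1)   # Euclidean distance
    d_max = np.sqrt(2.0)                               # largest possible distance
    return float(np.mean(1.0 - d / d_max))

probs = np.array([[0.7, 0.2, 0.1],
                  [0.4, 0.4, 0.2],
                  [0.1, 0.8, 0.1]])
true_idx = np.array([0, 2, 1])
print(pcc(probs, true_idx), ppcc_euclidean(probs, true_idx))
```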

Methods to Determine the Branching Attribute in Bayesian Multinets Classifiers

A. Cano; J. G. Castellano; A. R. Masegosa; S. Moral

Bayesian multinets are an extension of Bayesian networks in which context-specific conditional independences can be represented. The main aim of this work is to study different methods for choosing the distinguished (branching) attribute of a Bayesian multinet when multinets are used in supervised classification tasks. We have used different approaches: a wrapper method and several filter methods. This will allow us to determine the approach that best meets our requirements of accuracy and/or time.

- Probabilistic Classifiers | Pp. 932-943
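
The snippet below illustrates one simple filter-style criterion of the kind mentioned above (a generic choice, not necessarily one of the methods studied in the paper): the branching (distinguished) attribute is taken to be the discrete attribute with the highest mutual information with the class variable.

```python
# Generic filter criterion for picking a branching attribute:
# maximise mutual information with the class (illustrative choice only).
import math
from collections import Counter

def mutual_information(xs, ys):
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def branching_attribute(columns, class_labels):
    # columns: dict mapping attribute name -> list of discrete values
    return max(columns, key=lambda a: mutual_information(columns[a], class_labels))

data = {'outlook': ['sun', 'sun', 'rain', 'rain', 'sun', 'rain'],
        'windy':   ['y', 'n', 'y', 'n', 'y', 'n']}
labels = ['+', '+', '-', '-', '+', '-']
print(branching_attribute(data, labels))   # 'outlook': it determines the class
```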

Qualitative Inference in Possibilistic Option Decision Trees

Ilyes Jenhani; Zied Elouedi; Nahla Ben Amor; Khaled Mellouli

This paper presents a classification technique based on possibility theory, namely the possibilistic option decision tree (PODT), which offers a more flexible building procedure by selecting more than one attribute at each decision node. A classification method using the PODT to determine the class value of instances characterized by uncertain or missing attributes is then proposed.

- Classification and Clustering | Pp. 944-955

Partially Supervised Learning by a Credal Approach

Patrick Vannoorenberghe; Philippe Smets

In this paper, we propose a Credal EM (CrEM) approach for partially supervised learning. Uncertainty is represented by belief functions as understood in the transferable belief model (TBM). This model relies on a non-probabilistic formalism for representing and manipulating imprecise and uncertain information. We show how the EM algorithm can be applied within the TBM framework to the classification of objects when the learning set is imprecise (the actual class of each object is only known to belong to a subset of classes) and/or uncertain (the knowledge about the actual class is represented by a probability function or by a belief function).

- Classification and Clustering | Pp. 956-967
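
The sketch below is not the authors' Credal EM: belief functions and the TBM are deliberately left out. It is a hedged probabilistic analogue of the same setting, namely EM with one Gaussian component per class when each training label is only known to belong to a subset of the classes; the E-step simply restricts the class posteriors to each instance's admissible classes.

```python
# NOT the authors' Credal EM (no belief functions / TBM here): a hedged
# probabilistic analogue in which each training label is only known to lie
# in a subset of the classes, and EM fits one spherical Gaussian per class.
import numpy as np

def partial_label_em(X, label_sets, n_classes, n_iter=50):
    n, d = X.shape
    mask = np.zeros((n, n_classes))
    for i, admissible in enumerate(label_sets):    # admissible classes per instance
        mask[i, list(admissible)] = 1.0
    resp = mask / mask.sum(axis=1, keepdims=True)  # start uniform over admissible classes
    for _ in range(n_iter):
        # M-step: class priors, means and spherical variances from soft counts
        nk = resp.sum(axis=0)
        priors = nk / n
        means = (resp.T @ X) / nk[:, None]
        var = np.array([(resp[:, k] * ((X - means[k]) ** 2).sum(axis=1)).sum() / (nk[k] * d)
                        for k in range(n_classes)]) + 1e-6
        # E-step: Gaussian log-likelihoods, posteriors restricted to admissible classes
        log_like = np.stack([
            -0.5 * (((X - means[k]) ** 2).sum(axis=1) / var[k] + d * np.log(2 * np.pi * var[k]))
            for k in range(n_classes)], axis=1)
        shift = np.where(mask > 0, log_like, -np.inf).max(axis=1, keepdims=True)
        unnorm = priors * np.exp(np.where(mask > 0, log_like - shift, -np.inf))
        resp = unnorm / unnorm.sum(axis=1, keepdims=True)
    return priors, means, var, resp

# Toy usage: two Gaussian classes; half of the labels are only known as {0, 1}.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, size=(50, 2)), rng.normal(4.0, 1.0, size=(50, 2))])
label_sets = [{0}] * 25 + [{0, 1}] * 25 + [{1}] * 25 + [{0, 1}] * 25
priors, means, var, resp = partial_label_em(X, label_sets, n_classes=2)
print(means.round(2))      # class means recovered near [0, 0] and [4, 4]
```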