
Symbolic and Quantitative Approaches to Reasoning with Uncertainty: 9th European Conference, ECSQARU 2007, Hammamet, Tunisia, October 31 - November 2, 2007. Proceedings

Khaled Mellouli (ed.)

Conference: 9th European Conference on Symbolic and Quantitative Approaches to Reasoning with Uncertainty (ECSQARU). Hammamet, Tunisia. October 31, 2007 - November 2, 2007

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Artificial Intelligence (incl. Robotics); Mathematical Logic and Formal Languages

Availability
Detected institution | Year of publication | Browse | Download | Request
Not detected | 2007 | SpringerLink | |

Information

Resource type:

books

Print ISBN

978-3-540-75255-4

Electronic ISBN

978-3-540-75256-1

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© Springer-Verlag Berlin Heidelberg 2007

Table of contents

Pattern Recognition and Information Fusion Using Belief Functions: Some Recent Developments

Thierry Denœux

The Transferable Belief Model (TBM) is a general framework for reasoning with uncertainty using belief functions [8]. Of particular interest is the General Bayesian Theorem (GBT), an extension of Bayes’s theorem in which probability measures are replaced by belief functions, and no prior knowledge is assumed [7,6].

- Invited Talks | Pp. 1-1
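As a pointer to what the GBT mentioned above looks like concretely (our notation, following Smets' standard statement rather than anything specific to this talk): given a conditional plausibility function pl_X(· | θ) over an observation space X for each hypothesis θ in a frame Θ, and an observed x, the plausibility of any non-empty A ⊆ Θ is obtained without assuming a prior:

```latex
% General Bayesian Theorem (GBT), stated in terms of plausibilities.
% pl_X(x | \theta): plausibility of observing x when hypothesis \theta holds.
% No prior belief over \Theta is required.
\mathrm{pl}_\Theta(A \mid x) \;=\; 1 - \prod_{\theta \in A}\bigl(1 - \mathrm{pl}_X(x \mid \theta)\bigr),
\qquad \emptyset \neq A \subseteq \Theta .
```

When the conditional belief functions are ordinary probabilities and a prior is supplied, this reduces to Bayes' theorem, which is the sense in which the GBT extends it.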

Causality and Dynamics of Beliefs in Qualitative Uncertainty Frameworks

Salem Benferhat

Causality and belief change play an important role in many applications. Recently, Pearl [6] has proposed approaches based on probability theory that use causal graphs to give formal semantics to the notion of intervention. From a representational point of view, interventions are distinguished from observations using the concept of the "do" operator [4]. From a reasoning point of view, handling interventions consists in "ignoring" the effects of all direct (and indirect) causes related to the variable of interest.

- Invited Talks | Pp. 2-3
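One standard way to make the observation/intervention distinction concrete is Pearl's truncated factorization (our choice of illustration; notation is ours). For a causal Bayesian network that factorizes as P(v_1,...,v_n) = ∏_i P(v_i | pa_i), intervening on X_j simply deletes the factor of X_j, so the influence of its direct causes is ignored:

```latex
% Effect of the intervention do(X_j = x'_j) on a causal Bayesian network:
% the conditional factor of X_j is removed ("truncated") from the joint.
P\bigl(v_1,\dots,v_n \mid \mathrm{do}(X_j = x'_j)\bigr) =
\begin{cases}
\prod_{i \neq j} P(v_i \mid \mathrm{pa}_i) & \text{if } v_j = x'_j,\\
0 & \text{otherwise,}
\end{cases}
```

whereas observing X_j = x'_j would condition the untruncated factorization on that value.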

Elements of Argumentation

Anthony Hunter

Logic-based formalizations of argumentation, that take pros and cons for some claim into account, have been extensively studied, and some basic principles have been established (for reviews see [1-3]). These formalizations assume a set of formulae and then exhaustively lay out arguments and counterarguments, where a counterargument either rebuts (i.e. negates the claim of the argument) or undercuts (i.e. negates the support of the argument). Recently attempts have been made to refine these formalizations by using techniques for selecting the more appropriate arguments and counterarguments by taking into account intrinsic factors (such as the degree of inconsistency between an argument and its counterarguments) and extrinsic factors (such as the impact of particular arguments on the audience and the beliefs of the audience). In this presentation, we consider the need to take intrinsic and extrinsic factors into account, and then consider ways that this can be done in logic in order to refine existing logic-based approaches to argumentation. These refinements offer interesting options for formalizations that may better capture practical argumentation for intelligent agents [3].

- Invited Talks | Pp. 4-4
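A toy propositional sketch of the rebut/undercut distinction mentioned in the abstract above (string-level negation, and an undercut simplified to attacking a single premise; this is illustrative only and not the formalization of any of the cited works):

```python
from dataclasses import dataclass
from typing import FrozenSet

def neg(f: str) -> str:
    """String-level negation: p <-> ¬p."""
    return f[1:] if f.startswith("¬") else "¬" + f

@dataclass(frozen=True)
class Argument:
    support: FrozenSet[str]   # premises backing the claim
    claim: str                # the claim itself

def rebuts(a: Argument, b: Argument) -> bool:
    """a rebuts b iff a's claim negates b's claim."""
    return a.claim == neg(b.claim)

def undercuts(a: Argument, b: Argument) -> bool:
    """a undercuts b iff a's claim negates one of b's premises
    (a simplification of 'negates the support')."""
    return any(a.claim == neg(s) for s in b.support)

# b argues "fly" from {bird, bird→fly}; a attacks a premise, c attacks the claim.
b = Argument(frozenset({"bird", "bird→fly"}), "fly")
a = Argument(frozenset({"penguin"}), "¬bird")
c = Argument(frozenset({"penguin"}), "¬fly")
print(undercuts(a, b), rebuts(c, b))  # True True
```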

Causal Graphical Models with Latent Variables: Learning and Inference

Stijn Meganck; Philippe Leray; Bernard Manderick

Several paradigms exist for modeling causal graphical models for discrete variables that can handle latent variables without explicitly modeling them quantitatively. Applying them to a problem domain consists of different steps: structure learning, parameter learning and using them for probabilistic or causal inference. We discuss two well-known formalisms, namely semi-Markovian causal models and maximal ancestral graphs, and indicate their strengths and limitations. Previously, an algorithm was constructed that, by combining elements from both techniques, allows a semi-Markovian causal model to be learned from a mixture of observational and experimental data. The goal of this paper is to recapitulate the integral learning process from observational and experimental data and to demonstrate how different types of inference can be performed efficiently in the learned models. We will do this by proposing an alternative representation for semi-Markovian causal models.

- Causal Networks | Pp. 5-16

Learning Causal Bayesian Networks from Incomplete Observational Data and Interventions

Hanen Borchani; Maher Chaouachi; Nahla Ben Amor

This paper proposes a new method for learning causal Bayesian networks from incomplete observational data and interventions. We extend our Greedy Equivalence Search-Expectation Maximization (GES-EM) algorithm [2], initially proposed to learn Bayesian networks from incomplete observational data, by adding a new step allowing the discovery of correct causal relationships using interventional data. Two intervention selection approaches are proposed: an adaptive one, where interventions are done sequentially and where the impact of each intervention is considered before starting the next one, and a non-adaptive one, where the interventions are executed simultaneously. An experimental study shows the merits of the new version of the GES-EM algorithm by comparing the two selection approaches.

- Causal Networks | Pp. 17-29
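The adaptive/non-adaptive distinction above can be illustrated on a deliberately simple toy problem (this sketches intervention scheduling only, not the GES-EM algorithm itself; the "impact" of an intervention, taken here as the number of still-undirected edges it would orient, is our own stand-in for the paper's criterion):

```python
from typing import List, Set, Tuple

Edge = Tuple[str, str]

def impact(node: str, undirected: Set[Edge]) -> int:
    """Toy impact score: undirected edges an intervention on `node` would orient."""
    return sum(1 for e in undirected if node in e)

def orient(node: str, undirected: Set[Edge]) -> Set[Edge]:
    """Intervening on `node` orients every undirected edge incident to it."""
    return {e for e in undirected if node not in e}

def adaptive(nodes: List[str], undirected: Set[Edge], budget: int) -> List[str]:
    """Choose interventions sequentially, re-scoring after observing each one."""
    chosen, remaining = [], set(undirected)
    for _ in range(budget):
        best = max(nodes, key=lambda n: impact(n, remaining))
        chosen.append(best)
        remaining = orient(best, remaining)
    return chosen

def non_adaptive(nodes: List[str], undirected: Set[Edge], budget: int) -> List[str]:
    """Choose all interventions up front from the initial scores."""
    return sorted(nodes, key=lambda n: impact(n, undirected), reverse=True)[:budget]

nodes = ["A", "B", "C", "D", "E"]
edges = {("A", "B"), ("A", "C"), ("A", "D"), ("C", "E"), ("D", "E")}
print(adaptive(nodes, edges, 2))      # ['A', 'E']: the second pick reacts to the first
print(non_adaptive(nodes, edges, 2))  # ['A', 'C']: both picked from the initial scores
```

In the adaptive run the second choice exploits what the first intervention already resolved, which is the kind of trade-off the paper's experimental study compares.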

Measuring Inconsistency for Description Logics Based on Paraconsistent Semantics

Yue Ma; Guilin Qi; Pascal Hitzler; Zuoquan Lin

In this paper, we present an approach for measuring inconsistency in a knowledge base. We first define the degree of inconsistency using a four-valued semantics for description logics. Then an ordering over knowledge bases is given by considering their inconsistency degrees. Our measure of inconsistency can provide important information for inconsistency handling.

- Belief Revision and Inconsistency Handling | Pp. 30-41
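A much-simplified propositional analogue of the degree of inconsistency described above (Belnap-style four-valued interpretations assign each atom one of t, f, B for "both", or N for "neither"; the degree below is just the proportion of atoms carrying the contradictory value B, whereas the paper defines its measure for description logic knowledge bases):

```python
from typing import Dict

# Four-valued assignment of atoms: "t", "f", "B" (both true and false), "N" (neither).
FourValued = Dict[str, str]

def inconsistency_degree(interp: FourValued) -> float:
    """Fraction of atoms assigned the contradictory value "B"."""
    if not interp:
        return 0.0
    return sum(1 for v in interp.values() if v == "B") / len(interp)

# Two four-valued models of the classically inconsistent base {p, ¬p, q}:
# the second confines the contradiction to p, so it witnesses a lower degree.
i1 = {"p": "B", "q": "B"}
i2 = {"p": "B", "q": "t"}
print(inconsistency_degree(i1), inconsistency_degree(i2))  # 1.0 0.5
```

Grading a whole knowledge base would then, roughly, look at its least contradictory four-valued models, which is the spirit of the ordering over knowledge bases mentioned in the abstract.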

On the Dynamics of Total Preorders: Revising Abstract Interval Orders

Richard Booth; Thomas Meyer

Total preorders (tpos) are often used in belief revision to encode an agent’s strategy for revising its belief set in response to new information. Thus the problem of tpo-revision is of critical importance to the problem of iterated belief revision. Booth et al. [1] provide a useful framework for revising tpos by adding extra structure to guide the revision of the initial tpo, but this results in single-step tpo revision only. In this paper we extend that framework to consider iterated tpo revision. We provide new ways of representing the structure required to revise a tpo, based on abstract interval orders, and look at some desirable properties for revising this structure. We prove the consistency of these properties by giving a concrete operator satisfying all of them.

- Belief Revision and Inconsistency Handling | Pp. 42-53

Approaches to Constructing a Stratified Merged Knowledge Base

Anbu Yue; Weiru Liu; Anthony Hunter

Many merging operators have been proposed to merge either flat or stratified knowledge bases. The result of merging by such an operator is a flat base (or a set of models of the merged base) irrespective of whether the original ones are flat or stratified. The drawback of obtaining a flat merged base is that information about more preferred knowledge (formulae) versus less preferred knowledge is not explicitly represented, and this information can be very useful when deciding which formulae should be retained when there is a conflict. Therefore, it can be more desirable to return a stratified knowledge base as a merged result. A straightforward approach is to deploy the preference relation over possible worlds obtained after merging to reconstruct such a base. However, our study shows that such an approach can produce a poor result, that is, preference relations over possible worlds obtained after merging are not suitable for reconstructing a merged stratified base. Inspired by the Condorcet method in voting systems, we propose an alternative method to stratify a set of possible worlds given a set of stratified bases and take the stratification of possible worlds as the result of merging. Based on this, we provide a family of syntax-based methods and a family of model-based methods to construct a stratified merged knowledge base. In the syntax-based methods, the formulae contained in the merged knowledge base are from the original individual knowledge bases. In contrast, in the model-based methods, some additional formulae may be introduced into the merged knowledge base and no information in the original knowledge bases is lost. Since the merged result is a stratified knowledge base, the commonly agreed knowledge together with a preference relation over this knowledge can be extracted from the original knowledge bases.

- Belief Revision and Inconsistency Handling | Pp. 54-65
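A rough sketch of the Condorcet-flavoured comparison described above (our simplification: each stratified base ranks worlds by the first stratum they violate, and one world is placed above another when more bases strictly prefer it; the paper's actual stratification procedure is more involved than this pairwise test):

```python
from typing import Callable, Dict, List

World = Dict[str, bool]
Formula = Callable[[World], bool]
StratifiedBase = List[List[Formula]]   # strata listed from most to least preferred

def rank(w: World, base: StratifiedBase) -> int:
    """Index of the first stratum w violates (len(base) if none); higher is better."""
    for i, stratum in enumerate(base):
        if not all(f(w) for f in stratum):
            return i
    return len(base)

def beats(w1: World, w2: World, bases: List[StratifiedBase]) -> bool:
    """Condorcet-style pairwise test: w1 beats w2 if more bases strictly prefer it."""
    pro = sum(1 for b in bases if rank(w1, b) > rank(w2, b))
    con = sum(1 for b in bases if rank(w2, b) > rank(w1, b))
    return pro > con

# Two bases over atoms p, q that disagree on p but agree on q.
p, q, not_p = (lambda w: w["p"]), (lambda w: w["q"]), (lambda w: not w["p"])
base1: StratifiedBase = [[p], [q]]
base2: StratifiedBase = [[not_p], [q]]
w1 = {"p": True, "q": True}
w2 = {"p": False, "q": True}
print(beats(w1, w2, [base1, base2]), beats(w2, w1, [base1, base2]))  # False False
```

Here the two bases disagree on p, so neither world beats the other and both would end up in the same stratum of the merged ordering.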

Syntactic Propositional Belief Bases Fusion with Removed Sets

Julien Hue; Odile Papini; Eric Wurbel

The problem of merging information from multiple sources is central in several domains of computer science. In knowledge representation for artificial intelligence, several approaches have been proposed for propositional base fusion; however, most of them are defined at a semantic level and are intractable. This paper proposes a new syntactic approach to belief base fusion, called Removed Sets Fusion (RSF). The notion of removed set, initially defined in the context of belief revision, is extended to fusion, and most of the classical fusion operations are syntactically captured by RSF. In order to efficiently implement RSF, the paper shows how RSF can be encoded into a logic program with answer set semantics, then presents an adaptation of the smodels system devoted to efficiently computing the removed sets in order to perform RSF. Finally, a preliminary experimental study shows that the answer set programming approach seems promising for performing belief base fusion in real-scale applications.

- Belief Revision and Inconsistency Handling | Pp. 66-77
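To make the removed-set idea above concrete, here is a brute-force propositional sketch (clauses over a few atoms and naive satisfiability by enumeration; it takes a removed set to be a cardinality-minimal subset whose removal restores consistency, in the spirit of the original revision definition, and the paper's point is precisely that realistic instances call for the answer-set-programming encoding rather than enumeration like this):

```python
from itertools import combinations, product
from typing import FrozenSet, List, Set

Clause = FrozenSet[str]   # a clause is a set of literals, e.g. frozenset({"-p", "q"})

def satisfiable(clauses: Set[Clause]) -> bool:
    """Naive truth-table check: is there a model satisfying every clause?"""
    atoms = sorted({lit.lstrip("-") for c in clauses for lit in c})
    for bits in product([False, True], repeat=len(atoms)):
        model = dict(zip(atoms, bits))
        if all(any(model[l.lstrip("-")] != l.startswith("-") for l in c) for c in clauses):
            return True
    return False

def removed_sets(union: Set[Clause]) -> List[Set[Clause]]:
    """Smallest subsets whose removal makes the union of the bases consistent."""
    for k in range(len(union) + 1):
        hits = [set(x) for x in combinations(union, k) if satisfiable(union - set(x))]
        if hits:
            return hits
    return []

# Union of the bases to merge: {p}, {-p}, and the background clause (-p or q).
union = {frozenset({"p"}), frozenset({"-p"}), frozenset({"-p", "q"})}
for rs in removed_sets(union):
    print([sorted(c) for c in rs])   # removing either {p} or {-p} restores consistency
```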

COBA 2.0: A Consistency-Based Belief Change System

James P. Delgrande; Daphne H. Liu; Torsten Schaub; Sven Thiele

We describe COBA 2.0, an implementation of a consistency-based framework for expressing belief change, focusing here on revision and contraction, with the possible incorporation of integrity constraints. This general framework was first proposed in [1]; following a review of this work, we present COBA 2.0’s high-level algorithm, work through several examples, and describe our experiments. A distinguishing feature of COBA 2.0 is that it builds on SAT technology by using a module comprising a state-of-the-art SAT solver for consistency checking. It also allows for the simultaneous specification of a revision and multiple contractions, along with integrity constraints, with respect to a given knowledge base.

- Belief Revision and Inconsistency Handling | Pp. 78-90