Publications catalog - books



Advances in Artificial Intelligence: 19th Conference of the Canadian Society for Computational Studies of Intelligence, Canadian AI 2006, Quebec City, Quebec, Canada, June 7-9, 2006, Proceedings

Luc Lamontagne; Mario Marchand (eds.)

At conference: 19th Conference of the Canadian Society for Computational Studies of Intelligence (Canadian AI). Quebec City, QC, Canada. June 7, 2006 - June 9, 2006

Abstract/description – provided by the publisher

Not available.

Keywords – provided by the publisher

Artificial Intelligence

Availability

Detected institution: Not detected
Publication year: 2006
Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-34628-9

Electronic ISBN

978-3-540-34630-2

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© Springer-Verlag Berlin Heidelberg 2006

Table of contents

Relaxation of Soft Constraints Via a Unified Semiring

Peter Harvey; Aditya Ghose

The semiring framework for constraint satisfaction allows us to model a wide variety of problems of choice. Semiring constraint satisfaction problems can represent both classical consistency and optimisation problems, as well as soft constraint problems such as valued and weighted CSPs. In this paper we pose and answer the question: how can we represent and ‘solve’ the relaxation of a semiring constraint satisfaction problem?

- Constraint Satisfaction and Search | Pp. 122-133
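The unifying idea of the abstract above, that classical and soft CSPs differ only in which semiring aggregates constraint values, can be illustrated with a minimal sketch. This is not the authors' formulation; the `Semiring` interface and the brute-force `best_value` evaluator are illustrative assumptions.

```python
from dataclasses import dataclass
from itertools import product
from typing import Callable

@dataclass
class Semiring:
    zero: object       # worst element (total violation)
    one: object        # best element (full satisfaction)
    combine: Callable  # aggregate the values of individual constraints
    add: Callable      # pick the "better" of two aggregate values

# Classical CSP: booleans under and/or.  Weighted CSP: costs under +/min.
BOOL = Semiring(False, True, lambda a, b: a and b, lambda a, b: a or b)
WEIGHTED = Semiring(float("inf"), 0.0, lambda a, b: a + b, min)

def best_value(variables, domains, constraints, sr):
    """Evaluate every assignment and keep the semiring-best aggregate value."""
    best = sr.zero
    for values in product(*(domains[v] for v in variables)):
        assignment = dict(zip(variables, values))
        val = sr.one
        for scope, fn in constraints:
            val = sr.combine(val, fn(*(assignment[v] for v in scope)))
        best = sr.add(best, val)
    return best
```

Swapping `BOOL` for `WEIGHTED` turns the same solver from a consistency check into cost minimisation, which is the kind of uniformity the semiring framework buys.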

Intelligent Information Personalization Leveraging Constraint Satisfaction and Association Rule Methods

Syed Sibte Raza Abidi; Yan Zeng

Recommender systems, using information personalization methods, provide information that is relevant to a user-model. Current information personalization methods do not take into account whether multiple documents, when recommended together, present a factually consistent outlook. In this paper, working in the realm of content-based filtering, we investigate how to establish factual consistency across the set of documents deemed relevant to a user. We approach information personalization as a constraint satisfaction problem in which we attempt to satisfy two kinds of constraints: user-model constraints, to determine the relevance of a document to a user, and consistency constraints, to establish the factual consistency of the overall personalized information. Our information personalization framework involves: (a) an automatic constraint acquisition method, based on association rule mining, to derive consistency constraints from a corpus of documents; and (b) a hybrid of constraint satisfaction and optimization methods to derive an optimal solution comprising both relevant and factually consistent documents. We apply our information personalization framework to filter news items using the Reuters-21578 dataset.

- Constraint Satisfaction and Search | Pp. 134-145

On the Quality and Quantity of Random Decisions in Stochastic Local Search for SAT

Dave A. D. Tompkins; Holger H. Hoos

Stochastic local search (SLS) methods underlie some of the best-performing algorithms for certain types of SAT instances, from both an empirical and a theoretical point of view. By definition and in practice, random decisions are an essential ingredient of SLS algorithms. In this paper we empirically analyse the role of randomness in these algorithms. We first study the effect of the quality of the underlying random number sequence on the behaviour of well-known algorithms such as Papadimitriou’s algorithm and Adaptive Novelty+. Our results indicate that while extremely poor-quality random number sequences can have a detrimental effect on the behaviour of these algorithms, there is no evidence that the use of standard pseudo-random number generators is problematic. We also investigate, using derandomisation, the amount of randomness required to achieve the typical behaviour of these algorithms. Our experimental results indicate that the performance of SLS algorithms for SAT is surprisingly robust with respect to the number of random decisions made by an algorithm.

- Constraint Satisfaction and Search | Pp. 146-158
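To make concrete where random decisions enter an SLS solver, here is a minimal Papadimitriou-style random-walk sketch with the RNG passed in explicitly, so the random number source studied in the abstract above is a swappable parameter. The function name and the DIMACS-style clause encoding are illustrative assumptions, not the paper's code.

```python
import random

def random_walk_sat(clauses, n_vars, max_flips=10000, rng=None):
    """Random walk for SAT: start from a random assignment and, while some
    clause is falsified, flip a random variable of a random falsified clause.
    Literals are non-zero ints (DIMACS style: -2 means variable 2 is false)."""
    rng = rng or random.Random(0)
    assign = {v: rng.choice([True, False]) for v in range(1, n_vars + 1)}
    sat = lambda lit: assign[abs(lit)] == (lit > 0)  # is this literal true?
    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(sat(l) for l in c)]
        if not unsat:
            return assign                    # found a satisfying model
        lit = rng.choice(rng.choice(unsat))  # random clause, random literal
        assign[abs(lit)] = not assign[abs(lit)]
    return None                              # gave up after max_flips
```

Every call into `rng` is one of the "random decisions" whose quality and quantity the paper examines; substituting a weaker generator or a derandomised sequence only requires replacing the `rng` argument.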

Simple Support-Based Distributed Search

Peter Harvey; Chee Fon Chang; Aditya Ghose

Distributed Constraint Satisfaction Problems provide a natural mechanism for multiagent coordination and agreement. To date, algorithms for Distributed Constraint Satisfaction Problems have tended to mirror existing non-distributed global-search or local-search algorithms. Unfortunately, existing distributed global-search algorithms derive from classical backtracking search methods and require a total ordering over agents for completeness. Distributed variants of local-search algorithms (such as distributed breakout) inherit the incompleteness properties of their predecessors, or depend on the creation of new communication links between agents. In [5, 4], a new algorithm was presented that was designed explicitly for distributed environments, so that a global ordering is not required, while avoiding the problems of existing local-search algorithms. This paper presents a significant improvement on that algorithm in both performance and provability.

- Constraint Satisfaction and Search | Pp. 159-170

Modeling Causal Reinforcement and Undermining with Noisy-AND Trees

Y. Xiang; N. Jia

When data are insufficient to support learning, causal modeling, such as noisy-OR, aids elicitation by reducing the number of probability parameters to be acquired in constructing a Bayesian network. Multiple causes can reinforce each other in producing the effect or can undermine each other's impact. Most existing causal models do not consider their interactions from the perspective of reinforcement or undermining. We show that none of them can represent both interactions. We present the first explicit causal model that can encode both reinforcement and undermining, and we show how to use such a model to support efficient probability elicitation.

- Knowledge Representation and Reasoning | Pp. 171-182
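The noisy-OR model the abstract above takes as its starting point can be sketched in a few lines; the function name and parameters are illustrative, not the authors' notation.

```python
def noisy_or(p_causes, leak=0.0):
    """P(effect) under noisy-OR: each present cause i independently fails to
    produce the effect with probability 1 - p_i, and a background "leak" can
    produce it with no cause; the effect is absent only if everything fails."""
    p_absent = 1.0 - leak
    for p in p_causes:
        p_absent *= 1.0 - p
    return 1.0 - p_absent
```

Note that noisy-OR elicits only one parameter per cause instead of a full conditional table, and that adding a cause can never lower the probability of the effect, i.e. it models reinforcement only. That limitation is exactly why the paper's noisy-AND trees, which can also encode undermining, are needed.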

An Improved LAZY-AR Approach to Bayesian Network Inference

C. J. Butz; S. Hua

We propose LAZY arc-reversal with variable elimination (LAZY-ARVE) as a new approach to probabilistic inference in Bayesian networks (BNs). LAZY-ARVE is an improvement upon LAZY arc-reversal (LAZY-AR), which was very recently proposed and empirically shown to be the state-of-the-art method for exact inference in discrete BNs. The primary advantage of LAZY-ARVE over LAZY-AR is that the former only computes the actual distributions passed during inference, whereas the latter may perform unnecessary computation by constructing irrelevant intermediate distributions. A comparison between LAZY-AR and LAZY-ARVE, involving processing evidence in a real-world BN for coronary heart disease, is favourable towards LAZY-ARVE.

- Knowledge Representation and Reasoning | Pp. 183-194

Four-Valued Semantics for Default Logic

Anbu Yue; Yue Ma; Zuoquan Lin

Reiter’s default logic suffers from triviality: a single contradiction in the premises of a default theory leads to a unique trivial extension from which everything follows. In this paper, we propose a default logic based on four-valued semantics, which endows default logic with the ability to handle inconsistency without leading to triviality. We define four-valued models for default theories so that the default logic supports nonmonotonic paraconsistent reasoning. By transforming default rules from the propositional language into a second language, a one-to-one relation between the four-valued models of the one and the extensions of the other is proved, whereby the proof theory of Reiter’s default logic is retained.

- Knowledge Representation and Reasoning | Pp. 195-205

Exploiting Dynamic Independence in a Static Conditioning Graph

Kevin Grant; Michael C. Horsch

A conditioning graph (CG) is a graphical structure that attempts to minimize the implementation overhead of computing probabilities in belief networks. A conditioning graph recursively factorizes the network; restricting each decomposition to a single node allows us to store the structure with minimal overhead and compute with a simple algorithm. This paper extends conditioning graphs with optimizations that effectively reduce the height of the CG, thus reducing time complexity exponentially while increasing the storage requirements by only a constant factor. We conclude that CGs are frequently as efficient as any other exact inference method, with the advantage of being vastly superior to variable elimination (VE) and junction trees (JT) in terms of space complexity, and far simpler to implement.

- Knowledge Representation and Reasoning | Pp. 206-217

Probabilistic Melodic Harmonization

Jean-François Paiement; Douglas Eck; Samy Bengio

We propose a representation for musical chords that allows us to include domain knowledge in probabilistic models. We then introduce a graphical model for the harmonization of melodies that considers every structural component in chord notation. We show empirically that root note progressions exhibit global dependencies that can be better captured with a tree structure related to the meter than with a simple dynamical HMM that concentrates on local dependencies. However, a local model seems to be sufficient for generating proper harmonizations when root note progressions are provided. The trained probabilistic models can be sampled to generate very interesting chord progressions given other polyphonic music components such as melody or root note progressions.

- Knowledge Representation and Reasoning | Pp. 218-229

Learning Bayesian Networks in Semi-deterministic Systems

Wei Luo

In current constraint-based (Pearl-style) systems for discovering Bayesian networks, inputs with deterministic relations are prohibited. This restricts the applicability of these systems. In this paper, we formalize a sufficient condition under which Bayesian networks can be recovered even with deterministic relations. The sufficient condition leads to an improvement to Pearl’s IC algorithm; other constraint-based algorithms can be similarly improved. The new algorithm, assuming the sufficient condition proposed, is able to recover Bayesian networks with deterministic relations, and moreover suffers no loss of performance when applied to nondeterministic Bayesian networks.

- Knowledge Representation and Reasoning | Pp. 230-241