Publications catalog - books

Advances in Artificial Intelligence: 20th Conference of the Canadian Society for Computational Studies of Intelligence, Canadian AI 2007, Montreal, Canada, May 28-30, 2007. Proceedings

Ziad Kobti; Dan Wu (eds.)

Conference: 20th Conference of the Canadian Society for Computational Studies of Intelligence (Canadian AI). Montreal, QC, Canada. May 28, 2007 - May 30, 2007

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Artificial Intelligence

Availability
Detected institution: Not detected
Year of publication: 2007
Browse: SpringerLink

Information

Resource type:

books

Printed ISBN

978-3-540-72664-7

Electronic ISBN

978-3-540-72665-4

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© Springer-Verlag Berlin Heidelberg 2007

Table of contents

On the Use of Possibilistic Bases for Local Computations in Product-Based Possibilistic Networks

Salem Benferhat; Salma Smaoui

Product-based possibilistic networks allow an efficient representation of possibility distributions. However, when the graph is multiply connected, propagation may be infeasible because of its high space complexity. In this paper, we propose a new inference approach for product-based possibilistic networks based on compact representations of possibility distributions, namely possibilistic knowledge bases.

- Session 6. Knowledge Representation and Reasoning | Pp. 356-368
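
As background for the product-based chain rule that this line of work relies on, here is a minimal sketch; the two-variable network and its possibility values are hypothetical, and the code only illustrates the standard semantics pi(x1,...,xn) = prod_i pi(xi | parents(xi)), not the authors' knowledge-base-based propagation.

```python
# Hypothetical two-variable product-based possibilistic network: A -> B.
# Conditional possibility tables; values lie in [0, 1] and the maximum of
# each conditional distribution is 1 (normalization in possibility theory).
pi_A = {"a0": 1.0, "a1": 0.4}
pi_B_given_A = {
    ("b0", "a0"): 1.0, ("b1", "a0"): 0.3,   # keyed as (b, a)
    ("b0", "a1"): 0.7, ("b1", "a1"): 1.0,
}

def joint_possibility(a, b):
    """Product-based chain rule: pi(a, b) = pi(a) * pi(b | a)."""
    return pi_A[a] * pi_B_given_A[(b, a)]

# The possibility of an event is the max of the joint over its models,
# e.g. Pi(B = b1) = max_a pi(a, b1).
poss_b1 = max(joint_possibility(a, "b1") for a in pi_A)
print(poss_b1)  # max(1.0 * 0.3, 0.4 * 1.0) = 0.4
```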

Reasoning with Conditional Preferences Across Attributes

Shaoju Chen; Scott Buffett; Michael W. Fleming

Before an autonomous agent can perform automated negotiation on behalf of a user in an electronic commerce transaction, the user’s preferences over the set of outcomes must be learned as accurately as possible. This paper presents a structure, a Conditional Outcome Preference Network (COP-network), for modeling preferences directly elicited from a user. The COP-network is then expanded to indicate all preferences that can be inferred as a result. The network can be easily checked for consistency and redundancy, and can be used to determine quickly whether one outcome is preferred over another. An important feature of the COP-network is that conditional preferences, where a user’s preference over outcomes depends on whether particular attribute values are included, can be modeled and inferred as well. If the agent also knows the user’s utilities for some of the possible outcomes, these can be incorporated into the COP-network too. Three techniques for estimating utilities based on the specified preferences and utilities are described. One such technique, which works by first estimating utilities for long chains of outcomes for which preferences are known, is shown to be the most effective.

- Session 6. Knowledge Representation and Reasoning | Pp. 369-380
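
A minimal sketch of one plausible reading of such a preference structure (not the authors' exact COP-network definition): outcomes are nodes, an edge u -> v records that u is preferred to v, consistency amounts to acyclicity, and "is u preferred to v?" becomes a reachability query. The class and all outcome names below are hypothetical.

```python
from collections import defaultdict

class PreferenceGraph:
    """Directed graph over outcomes; edge u -> v means u is preferred to v."""

    def __init__(self):
        self.succ = defaultdict(set)

    def add_preference(self, better, worse):
        self.succ[better].add(worse)

    def prefers(self, u, v):
        """True if u is (transitively) preferred to v: v is reachable from u."""
        stack, seen = [u], {u}
        while stack:
            node = stack.pop()
            for nxt in self.succ[node]:
                if nxt == v:
                    return True
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return False

    def is_consistent(self):
        """Consistent iff no outcome is preferred to itself (graph is acyclic)."""
        return not any(self.prefers(x, x) for x in list(self.succ))

g = PreferenceGraph()
g.add_preference("laptop_A", "laptop_B")   # hypothetical outcomes
g.add_preference("laptop_B", "laptop_C")
print(g.prefers("laptop_A", "laptop_C"))   # True, inferred by transitivity
print(g.is_consistent())                   # True, no preference cycle
```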

Path Propagation for Inference in Bayesian Networks

Dan Wu; Liu He

Although (probabilistic) inference in Bayesian networks has been well studied, the recent trend of extending Bayesian networks to model large and complex domains imposes new challenges on inference. In this paper, we suggest a method called path propagation that addresses these new challenges. The experimental results indicate that the proposed method achieves better performance than the conventional method, especially for large Bayesian networks.

- Session 6. Knowledge Representation and Reasoning | Pp. 381-392
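
The abstract does not detail the path propagation scheme itself; as background only, here is a minimal sketch of the standard Bayesian network factorization P(X1,...,Xn) = prod_i P(Xi | Pa(Xi)) that any such inference method builds on. The tiny rain/wet-grass network and its probability values are hypothetical.

```python
# Hypothetical rain -> wet-grass network, used only to illustrate the
# standard BN factorization P(r, w) = P(r) * P(w | r).
P_rain = {True: 0.2, False: 0.8}
P_wet_given_rain = {
    (True, True): 0.9,  (False, True): 0.1,    # keyed as (wet, rain)
    (True, False): 0.2, (False, False): 0.8,
}

def joint(rain, wet):
    return P_rain[rain] * P_wet_given_rain[(wet, rain)]

# Marginal inference by brute-force summation over the joint, which is
# exactly what propagation methods try to avoid on large networks:
P_wet = sum(joint(r, True) for r in (True, False))
print(P_wet)  # 0.2 * 0.9 + 0.8 * 0.2 = 0.34
```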

Problem-Solving Knowledge Mining from Users’ Actions in an Intelligent Tutoring System

Roger Nkambou; Engelbert Mephu Nguifo; Olivier Couturier; Philippe Fournier-Viger

In an intelligent tutoring system (ITS), the domain expert should provide relevant domain knowledge to the tutor so that it can guide the learner during problem solving. However, in several domains, this knowledge is not predetermined and must be captured or learned from expert users as well as intermediate and novice users. Our hypothesis is that knowledge discovery (KD) techniques can help to build this domain intelligence in an ITS. This paper proposes a framework to capture problem-solving knowledge using a promising data and knowledge discovery approach based on a combination of sequential pattern mining and association rule discovery techniques. The framework has been implemented and is used to discover new meta-knowledge and rules in a given domain, which then extend domain knowledge and serve as a problem space, allowing the intelligent tutoring system to guide learners in problem-solving situations. Preliminary experiments have been conducted using the framework as an alternative to a path-planning problem solver in CanadarmTutor.

- Session 6. Knowledge Representation and Reasoning | Pp. 393-404
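
A minimal sketch of the kind of sequential pattern counting such a framework could combine with association rules; the session logs and action names are hypothetical, and the code only mines frequent ordered action pairs, not the authors' implementation.

```python
from collections import Counter
from itertools import combinations

# Hypothetical problem-solving logs: one sequence of user actions per session.
sessions = [
    ["select_joint", "rotate", "check_camera", "rotate"],
    ["select_joint", "check_camera", "rotate"],
    ["select_joint", "rotate", "rotate"],
]

def frequent_pairs(sessions, min_support=2):
    """Count ordered pairs (a occurs before b in a session); keep frequent ones."""
    counts = Counter()
    for seq in sessions:
        pairs_in_session = set()
        for i, j in combinations(range(len(seq)), 2):
            pairs_in_session.add((seq[i], seq[j]))
        counts.update(pairs_in_session)     # count each pair once per session
    return {p: c for p, c in counts.items() if c >= min_support}

print(frequent_pairs(sessions))
# ('select_joint', 'rotate') appears in all 3 sessions, for example
```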

Incremental Neighborhood Graphs Construction for Multidimensional Databases Indexing

Hakim Hacid; Tetsuya Yoshida

The point location (neighborhood search) problem is significant in several fields, such as databases and data mining. Neighborhood graphs are interesting representations of this problem in a multidimensional space. However, several problems related to neighborhood graphs remain open, mainly their high construction costs and the difficulty of updating them. In this article, we deal with the point location problem by considering neighborhood graph optimization. We propose and compare two strategies able to quickly build and update these structures.

- Session 6. Knowledge Representation and Reasoning | Pp. 405-416
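
A minimal sketch of a generic incremental update to a k-nearest-neighbour graph; this is the naive linear-scan baseline that optimized strategies aim to improve on, not either of the two strategies the paper compares, and the points and value of k are hypothetical.

```python
import math

class KNNGraph:
    """Naive incremental k-nearest-neighbour graph over points in R^d."""

    def __init__(self, k=2):
        self.k = k
        self.points = []        # list of coordinate tuples
        self.neighbors = []     # neighbors[i] = indices of point i's kNN

    def insert(self, p):
        """Insert p and link it to its k nearest points (O(n) scan per insert)."""
        idx = len(self.points)
        nearest = sorted(range(idx), key=lambda j: math.dist(p, self.points[j]))
        self.points.append(p)
        self.neighbors.append(nearest[: self.k])
        # Existing points may also gain p as a nearer neighbour.
        for j in nearest[: self.k]:
            cand = self.neighbors[j] + [idx]
            cand.sort(key=lambda m: math.dist(self.points[j], self.points[m]))
            self.neighbors[j] = cand[: self.k]

g = KNNGraph(k=2)
for pt in [(0, 0), (1, 0), (0, 1), (5, 5)]:
    g.insert(pt)
print(g.neighbors)  # adjacency lists of the incrementally built graph
```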

Learning Network Topology from Simple Sensor Data

Dimitri Marinakis; Philippe Giguère; Gregory Dudek

In this paper, we present an approach for recovering a topological map of the environment using only detection events from a deployed sensor network. Unlike other solutions to this problem, our technique operates on observational data; no timing information is exploited by our algorithm except the ordering. We first give a theoretical analysis of this version of the problem, and then we show that by considering a sliding window over the observations, the problem can be re-formulated as a version of set-covering. We present two heuristics based on this set-covering formulation and evaluate them with numerical simulations. The experiments demonstrate that promising results can be obtained using a greedy algorithm.

- Session 7. Learning | Pp. 417-428
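
A minimal sketch of the greedy set-covering heuristic in its usual textbook form; the universe and candidate sets below are hypothetical stand-ins, not the paper's actual reduction from observation windows to a covering instance.

```python
def greedy_set_cover(universe, candidate_sets):
    """Repeatedly pick the set covering the most still-uncovered elements."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(candidate_sets, key=lambda s: len(uncovered & s))
        if not uncovered & best:
            break                   # remaining elements cannot be covered
        chosen.append(best)
        uncovered -= best
    return chosen

# Hypothetical instance: elements might stand for observed event pairs and
# candidate sets for topology hypotheses consistent with a sliding window.
universe = {1, 2, 3, 4, 5}
candidates = [{1, 2, 3}, {2, 4}, {4, 5}, {3, 5}]
print(greedy_set_cover(universe, candidates))
# [{1, 2, 3}, {4, 5}]
```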

Reinforcement Learning in Nonstationary Environment Navigation Tasks

Terran Lane; Martin Ridens; Scott Stevens

The field of reinforcement learning (RL) has achieved great strides in learning control knowledge from closed-loop interaction with environments. “Classical” RL, based on atomic state space representations, suffers from an inability to adapt to nonstationarities in the target Markov decision process (i.e., environment). Relational RL is widely seen as a potential solution to this shortcoming. In this paper, we demonstrate a class of “pseudo-relational” learning methods for nonstationary navigational RL domains – domains in which the location of the goal, or even the structure of the environment, can change over time. Our approach is closely related to deictic representations, which have previously been found to be troublesome for RL. The key insight of this paper is that navigational problems are a highly constrained class of MDPs, possessing a strong native structure that relaxes some of the partial observability difficulties arising from deixis. Agents can employ local information that is relevant to their near-term action choices to act effectively. We demonstrate that, unlike an atomic representation, our agents can learn to fluidly adapt to changing goal locations and environment structure.

- Session 7. Learning | Pp. 429-440
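
A minimal sketch of tabular Q-learning keyed on a local observation rather than an atomic global state, which captures the spirit (not the letter) of the pseudo-relational representation described above; the gridworld, observation function, and hyperparameters are all hypothetical.

```python
import random
from collections import defaultdict

ACTIONS = ["up", "down", "left", "right"]
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

# Q-table indexed by (local_observation, action) instead of (global_state, action),
# so learned values can transfer when the goal or maze layout changes.
Q = defaultdict(float)

def local_observation(agent_pos, goal_pos):
    """Hypothetical deictic-style feature: sign of the goal offset on each axis."""
    dx = goal_pos[0] - agent_pos[0]
    dy = goal_pos[1] - agent_pos[1]
    return (dx > 0) - (dx < 0), (dy > 0) - (dy < 0)

def choose_action(obs):
    """Epsilon-greedy action selection over the local observation."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(obs, a)])

def q_update(obs, action, reward, next_obs):
    """Standard Q-learning backup on the observation-indexed table."""
    best_next = max(Q[(next_obs, a)] for a in ACTIONS)
    Q[(obs, action)] += ALPHA * (reward + GAMMA * best_next - Q[(obs, action)])

obs = local_observation((0, 0), (3, 1))                      # goal is up-right -> (1, 1)
q_update(obs, "up", 0.0, local_observation((0, 1), (3, 1)))  # one learning step
```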

On the Stability and Bias-Variance Analysis of Kernel Matrix Learning

V. Vijaya Saradhi; Harish Karnick

Stability and bias-variance analysis are two powerful tools for understanding learning algorithms better. We use these tools to analyze the learning the kernel matrix (LKM) algorithm. The motivation is twofold: (i) LKM works in the transductive setting, where both training and test data points are given a priori, so it is worth knowing how stable LKM is under small variations in the data set; and (ii) it has been argued that LKMs overfit the given data set. Our experimental results show that LKMs do not overfit the given data set, and the stability analysis reveals that LKMs are stable algorithms.

- Session 7. Learning | Pp. 441-451
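
For reference, the two notions the abstract combines can be written down compactly; these are the standard textbook forms of uniform stability and of the bias-variance decomposition, not anything specific to the LKM analysis.

```latex
% Uniform stability: a learning algorithm A is \beta-stable if removing any
% single training point changes its loss on any example z by at most \beta.
\forall S,\; \forall i,\; \forall z:\quad
  \bigl|\,\ell(A_S, z) - \ell(A_{S^{\setminus i}}, z)\,\bigr| \le \beta

% Bias-variance decomposition of the expected squared error of an estimator
% \hat{f} of a target f, with noise variance \sigma^2:
\mathbb{E}\bigl[(y - \hat{f}(x))^2\bigr]
  = \underbrace{\bigl(f(x) - \mathbb{E}[\hat{f}(x)]\bigr)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}\bigl[(\hat{f}(x) - \mathbb{E}[\hat{f}(x)])^2\bigr]}_{\text{variance}}
  + \sigma^2
```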

Query-Based Summarization of Customer Reviews

Olga Feiguina; Guy Lapalme

We describe an architecture for organizing and summarizing consumer reviews about products posted on specialized web sites. The core technology is based on the automatic extraction of product features, for which we report experiments on two types of corpora. We thus show that NLP techniques can be fruitfully used in this setting to help consumers sort through the mass of information posted on such sites.

- Session 8. Natural Language | Pp. 452-463

Multi-state Directed Acyclic Graphs

Michael Wachter; Rolf Haenni

This paper continues the line of research on the representation and compilation of propositional knowledge bases with propositional directed acyclic graphs (PDAG), negation normal forms (NNF), and binary decision diagrams (BDD). The idea is to permit variables with more than two states and to explicitly represent them in their most natural way. The resulting representation languages are analyzed according to their succinctness, supported queries, and supported transformations. The paper shows that most results from PDAGs, NNFs, and BDDs can be generalized to their corresponding multi-state extension. This implies that the entire knowledge compilation map is extensible from propositional to multi-state variables.

- Session 8. Natural Language | Pp. 464-475
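
A minimal sketch of evaluating a DAG whose leaves are multi-state literals of the form "variable X takes state s", under the usual NNF/PDAG semantics that this line of work generalizes; the node classes and the tiny example formula are hypothetical.

```python
from dataclasses import dataclass
from typing import Tuple

# Leaves assert "variable var is in state state"; internal nodes are AND / OR.
@dataclass(frozen=True)
class Lit:
    var: str
    state: str

@dataclass(frozen=True)
class Node:
    kind: str          # "and" or "or"
    children: Tuple    # child Lit or Node objects

def evaluate(node, assignment):
    """Evaluate the DAG under a full assignment {variable: state}."""
    if isinstance(node, Lit):
        return assignment[node.var] == node.state
    results = (evaluate(c, assignment) for c in node.children)
    return all(results) if node.kind == "and" else any(results)

# Hypothetical formula over a three-state variable Color and a boolean Light:
#   (Color = red OR Color = green) AND Light = on
f = Node("and", (Node("or", (Lit("Color", "red"), Lit("Color", "green"))),
                 Lit("Light", "on")))
print(evaluate(f, {"Color": "green", "Light": "on"}))   # True
print(evaluate(f, {"Color": "blue",  "Light": "on"}))   # False
```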