Publications catalog - books

MICAI 2007: Advances in Artificial Intelligence: 6th Mexican International Conference on Artificial Intelligence, Aguascalientes, Mexico, November 4-10, 2007. Proceedings

Alexander Gelbukh; Ángel Fernando Kuri Morales (eds.)

In conference: 6th Mexican International Conference on Artificial Intelligence (MICAI). Aguascalientes, Mexico. November 4-10, 2007

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Artificial Intelligence (incl. Robotics); Computation by Abstract Devices; Mathematical Logic and Formal Languages; Image Processing and Computer Vision

Availability
Detected institution | Year of publication | Browse | Download | Request
Not detected | 2007 | SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-76630-8

Electronic ISBN

978-3-540-76631-5

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© Springer-Verlag Berlin Heidelberg 2007

Table of contents

Simple Model-Based Exploration and Exploitation of Markov Decision Processes Using the Elimination Algorithm

Elizabeth Novoa

The fundamental problem in learning and planning of Markov Decision Processes is how the agent explores and exploits an uncertain environment. The classical solutions to the problem are basically heuristics that lack appropriate theoretical justification. As a result, principled solutions based on Bayesian estimation, though intractable even in small cases, have recently been investigated. The common approach is to approximate Bayesian estimation with sophisticated methods that cope with the intractability of computing the Bayesian posterior. However, we notice that the complexity of these approximations still prevents their use, as the improvement in long-term reward seems to be diminished by the difficulties of implementation. In this work, we propose a deliberately simplistic model-based algorithm to show the benefits of Bayesian estimation when compared to classical model-free solutions. In particular, our agent combines several Markov Chains from its belief state and uses the matrix-based Elimination Algorithm to find the best action to take. We test our agent on three standard problems, Chain, Loop, and Maze, and find that it outperforms classical Q-Learning with ε-Greedy, Boltzmann, and Interval Estimation action selection heuristics.
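
The agent described above mixes sampled Markov chains from its belief state and scores actions with a matrix-based Elimination Algorithm. As a rough illustration of that Bayesian model-based idea only, not the paper's implementation, the sketch below samples transition models from a Dirichlet posterior over observed transition counts and solves each sample with plain value iteration, which here stands in for the Elimination Algorithm; all names, and the choice of prior, are assumptions of the sketch.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-6):
    """Solve one sampled MDP. P: (A, S, S) transition tensor, R: (S, A)
    rewards. Used here as a stand-in for the matrix-based Elimination
    Algorithm mentioned in the abstract."""
    A, S, _ = P.shape
    V = np.zeros(S)
    while True:
        Q = R + gamma * np.einsum("asz,z->sa", P, V)   # Q[s, a]
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return Q
        V = V_new

def bayes_greedy_action(state, counts, R, rng, n_samples=5):
    """Average Q-values over transition models sampled from a Dirichlet
    posterior on the observed counts (shape (A, S, S)), then act greedily.
    A crude Bayesian exploration scheme; names are illustrative."""
    A, S, _ = counts.shape
    q_sum = np.zeros(A)
    for _ in range(n_samples):
        P = np.array([[rng.dirichlet(counts[a, s] + 1.0)   # +1: uniform prior
                       for s in range(S)] for a in range(A)])
        q_sum += value_iteration(P, R)[state]
    return int(np.argmax(q_sum))
```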

- Knowledge Representation and Reasoning | Pp. 327-336

A Simple Model for Assessing Output Uncertainty in Stochastic Simulation Systems

Tengda Sun; Jinfeng Wang

The need for expressing uncertainty in stochastic simulation systems is widely recognized. However, the emphasis has been on assessing the uncertainty of simulation model input parameters, while the uncertainty of simulation output is deduced from the input uncertainty. Commonly used methods to assess uncertainty include Delta-Method approaches, resampling methods, Bayesian analysis, and so on. The problem with all these methods is that the typical simulation user is not particularly proficient in statistics, and so is unlikely to be aware of appropriate sensitivity and/or uncertainty analyses. This suggests the need for a transparent, implementable, and efficient method for understanding uncertainty, especially simulation output uncertainty. In this paper, we propose a simple and straightforward framework for assessing stochastic simulation output uncertainty based on Bayesian Melding. We first assume the form of the probability distribution function of the simulation output. We also assume that the final output uncertainty is the weighted sum of the uncertainty of every simulation output, and that the weight of each simulation run is proportional to its probability. The advantage of these assumptions is that, to describe the simulation output uncertainty as a probability distribution function after a limited number of simulation runs, we need only do two things: (1) estimate the parameters of the simulation output probability distribution function, and (2) calculate a weight for each simulation run. Both are discussed in detail in this paper.
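
The two steps named at the end of the abstract (estimate each run's output distribution, then weight the runs by their probability) can be illustrated with a small sketch. It assumes normally distributed per-run outputs and likelihood-proportional weights; those assumptions, and all names, belong to this illustration rather than to the paper.

```python
import numpy as np

def output_uncertainty(run_means, run_vars, run_likelihoods):
    """Combine per-run output distributions (assumed normal) into a single
    weighted mixture, with each run's weight proportional to its likelihood."""
    w = np.asarray(run_likelihoods, dtype=float)
    w /= w.sum()                                    # normalised run weights
    mu = np.dot(w, run_means)                       # mixture mean
    # mixture variance = E[per-run variance] + variance of the per-run means
    var = np.dot(w, run_vars) + np.dot(w, (np.asarray(run_means) - mu) ** 2)
    return mu, var
```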

- Knowledge Representation and Reasoning | Pp. 337-347

An Empirically Terminological Point of View on Agentism in the Artificial

C. T. A. Schmidt

Many endeavours in Artificial Intelligence work towards recreating the dialogical capabilities of humans in machines, robots, "creatures", in short, information processing systems. This original goal of AI has been left by the wayside by many in order to produce Artificial Life entities in a futuristic vision; scientists who have not 'abandoned ship' confirm the difficulty of reaching the goals of AI research. This means the importance of language generation and understanding components has been reduced. Are the pragmatics of language use too difficult to deal with? According to Shapiro and Rapaport (1991), "the quintessential task is interactive dialogue". Man-made entities are not functional in dialoguing with humans. The benefits of re-establishing a "proper" relational stance in the Artificial Sciences are twofold, namely, (i) to better understand the communication difficulties encountered, and (ii) to bring enhanced meaning to the goals of building artificial agents. The latter point has consequences in that it will change the very goals of scientists working on social and conversational agents. In the literature, the notion of agent proves unsuitable for the specification of any higher-order communication tasks; a Tower of Babel problem exists with regard to the very definition of "agent" between Scientists and Philosophers. In the present article, I eliminate the nebulosity currently surrounding agency's terminology with the goal of improving understanding when speaking about entities that can mean.

- Agents and Multiagent Systems | Pp. 348-358

Inductive Logic Programming Algorithm for Estimating Quality of Partial Plans

Sławomir Nowaczyk; Jacek Malec

We study agents situated in partially observable environments, who do not have the resources to create conformant plans. Instead, they create conditional plans which are partial, and learn from experience to choose the best of them for execution. Our agent employs an incomplete symbolic deduction system based on Active Logic and Situation Calculus for reasoning about actions and their consequences. An Inductive Logic Programming algorithm generalises observations and deduced knowledge in order to choose the best plan for execution.

We show results of using the PROGOL learning algorithm to distinguish “bad” plans, and we present three modifications which make the algorithm fit this class of problems better. Specifically, we limit the search space by fixing the semantics of conditional branches within plans, we guide the search by specifying the relative relevance of portions of the knowledge base, and we integrate the learning algorithm into the agent architecture by allowing it to directly access the agent’s knowledge encoded in Active Logic. We report on experiments which show that these extensions lead to significantly better learning results.

- Agents and Multiagent Systems | Pp. 359-369

Modeling Emotion-Influenced Social Behavior for Intelligent Virtual Agents

Jackeline Spinola de Freitas; Ricardo Imbert; João Queiroz

In the last decades, findings about emotion in the cognitive sciences and neuroscience have motivated the design of emotion-based architectures to model individuals’ behavior. Currently, we are working with a cognitive, multi-layered architecture for agents, which provides them with emotion-influenced behavior and has been extended to model social interactions. This paper presents this architecture, focusing on its social features and how it can be used to model emotion-based agents’ social behavior. A prey-predator simulation is presented as a test-bed for the architecture’s social layer.

- Agents and Multiagent Systems | Pp. 370-380

Just-in-Time Monitoring of Project Activities Through Temporal Reasoning

Sara E. Garza; José Luis Aguirre

A critical issue in project management is time and its administration. With the introduction of autonomous or semi-autonomous systems that handle project-related tasks, time management can become even more complex. In this paper, a temporal-reasoning-based mechanism for monitoring the execution times of the activities carried out during a project is proposed, with the intent of detecting temporal discrepancies (delays or activities running ahead of time). The mechanism relies on Allen’s interval algebra and allows uncovering concurrent relationships, which are useful to consider when making the necessary adjustments to the project once discrepancies are detected. This can be especially convenient in automated or semi-automated contexts, where activities are assigned to intelligent agents. With this in mind, the mechanism was integrated into the JITIK multiagent system. It has been tested on representative cases and has been found to detect all the situations where delays are involved.
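
Because the mechanism rests on Allen's interval algebra, a minimal sketch of how an interval relation can be computed and a delay flagged may help; it is an illustration under an assumed interval representation, not JITIK's actual implementation, and the names are hypothetical.

```python
from typing import Tuple

Interval = Tuple[float, float]   # (start, end), with start <= end

def allen_relation(x: Interval, y: Interval) -> str:
    """Name the Allen relation of interval x with respect to interval y.
    The seven basic relations are tested directly; the six converse
    relations are obtained by swapping the arguments."""
    xs, xe = x
    ys, ye = y
    if xe < ys:
        return "before"
    if xe == ys:
        return "meets"
    if xs == ys and xe == ye:
        return "equals"
    if xs == ys and xe < ye:
        return "starts"
    if xs > ys and xe == ye:
        return "finishes"
    if xs > ys and xe < ye:
        return "during"
    if xs < ys < xe < ye:
        return "overlaps"
    return "inverse of " + allen_relation(y, x)

def is_delayed(planned: Interval, observed: Interval) -> bool:
    """Flag a temporal discrepancy: the activity ends after its planned end."""
    return observed[1] > planned[1]
```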

- Agents and Multiagent Systems | Pp. 381-391

Scaling Kernels: A New Least Squares Support Vector Machine Kernel for Approximation

Mu Xiangyang; Zhang Taiyi; Zhou Yatong

Support vector machines (SVM) have been introduced for pattern recognition and regression, but their practical application is limited by training time and by the choice of kernel function. Motivated by the theory of multi-scale representations of signals and wavelet transforms, this paper presents a way of building a wavelet-based reproducing kernel Hilbert space (RKHS), which is a multiresolution scale subspace, and its associated scaling kernel for least squares support vector machines (LS-SVM). The scaling kernel is constructed from a scaling function with different dilations and translations. Results on several approximation problems illustrate that the LS-SVM with the scaling kernel can approximate arbitrary signals at multiple scales and offers better approximation performance.
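
As a hedged illustration of how a kernel can be assembled from a scaling function at several dilations and translations, the sketch below uses a Gaussian as a stand-in scaling function and a small fixed grid of scales and shifts; both choices are assumptions of the sketch, not details of the paper.

```python
import numpy as np

def phi(u):
    """Illustrative scaling function (a Gaussian stand-in)."""
    return np.exp(-0.5 * u ** 2)

def scaling_kernel(x, z, scales=(0, 1, 2), shifts=range(-4, 5)):
    """k(x, z) = sum_j sum_k phi(2^j x - k) * phi(2^j z - k): a truncated
    multiresolution expansion over dilations 2^j and integer translations k."""
    k = 0.0
    for j in scales:
        for t in shifts:
            k += phi((2.0 ** j) * x - t) * phi((2.0 ** j) * z - t)
    return k
```

Being a finite sum of products of a fixed feature map, such a kernel is positive semidefinite, so it can be supplied to any LS-SVM or SVR solver that accepts a precomputed kernel matrix.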

- Machine Learning and Data Mining | Pp. 392-398

Evolutionary Feature and Parameter Selection in Support Vector Regression

Iván Mejía-Guevara; Ángel Kuri-Morales

A genetic approach is presented in this article to deal with two problems: a) feature selection and b) the determination of parameters in Support Vector Regression (SVR). We consider a kind of genetic algorithm (GA) in which the probabilities of mutation and crossover are determined during the evolutionary process. Empirical experiments are carried out to measure the efficiency of this algorithm against two frequently used approaches.
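
One way to picture the search space is a chromosome that carries a feature mask together with the SVR parameters, evaluated by cross-validation. The encoding and fitness below are assumptions of this sketch; the paper's exact representation, and its self-adapted mutation and crossover probabilities, are not reproduced.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

def decode(chromosome, n_features):
    """First n_features genes: feature mask; last three genes: log10 of
    C, epsilon and gamma (an illustrative encoding)."""
    mask = chromosome[:n_features] > 0.5
    c, eps, gamma = 10.0 ** chromosome[n_features:n_features + 3]
    return mask, c, eps, gamma

def fitness(chromosome, X, y):
    """Cross-validated R^2 of an SVR trained on the selected features."""
    mask, c, eps, gamma = decode(chromosome, X.shape[1])
    if not mask.any():
        return -np.inf                      # empty feature set is invalid
    model = SVR(C=c, epsilon=eps, gamma=gamma)
    return cross_val_score(model, X[:, mask], y, cv=3).mean()
```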

- Machine Learning and Data Mining | Pp. 399-408

Learning Models of Relational MDPs Using Graph Kernels

Florian Halbritter; Peter Geibel

Relational reinforcement learning is the application of reinforcement learning to structured state descriptions. Model-based methods learn a policy based on a known model that comprises a description of the actions and their effects as well as the reward function. If the model is initially unknown, one might learn the model first and then apply the model-based method (indirect reinforcement learning). In this paper, we propose a method for model-learning that is based on a combination of several SVMs using graph kernels. Indeterministic processes can be dealt with by combining the kernel approach with a clustering technique. We demonstrate the validity of the approach by a range of experiments on various scenarios.
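
For a feel of what a graph kernel over relational state descriptions can look like, the fragment below uses a deliberately simple node-label count kernel (the inner product of label histograms); the paper's graph kernels are richer, and the dict-based graph representation is an assumption of this sketch.

```python
from collections import Counter

def label_count_kernel(graph_a, graph_b):
    """Dot product of node-label histograms; graphs are {node: label} dicts.
    A valid, if coarse, positive semidefinite graph kernel."""
    ca, cb = Counter(graph_a.values()), Counter(graph_b.values())
    return sum(ca[label] * cb[label] for label in ca)
```

A kernel matrix built this way over observed state graphs can be handed to a standard SVM, which is the role SVMs with graph kernels play in the model-learning approach described above.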

- Machine Learning and Data Mining | Pp. 409-419

Weighted Instance-Based Learning Using Representative Intervals

Octavio Gómez; Eduardo F. Morales; Jesús A. González

Instance-based learning algorithms are widely used due to their capacity to approximate complex target functions; however, the performance of this kind of algorithm degrades significantly in the presence of irrelevant features. This paper introduces a new noise-tolerant instance-based learning algorithm, called WIB-, that uses one or more weights, per feature per class, to classify integer-valued databases. A set of intervals that represents the range of values of all the features is automatically created for each class, and the non-representative intervals are discarded. The remaining intervals (representative intervals) of each feature are compared against the representative intervals of the same feature in the other classes to assign a weight. The weight represents the discriminative power of the interval and is used in the similarity function to improve the classification accuracy. The algorithm was tested on several datasets and compared against other representative machine learning algorithms, showing very competitive results.
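
A minimal sketch of the weighting idea, assuming a simple overlap-based notion of discriminative power and a containment-based similarity score (the exact formulas of the WIB- algorithm are not reproduced here):

```python
def interval_weight(interval, other_class_intervals):
    """Weight an interval by the fraction of other-class intervals it does
    NOT overlap, as a simple proxy for its discriminative power."""
    lo, hi = interval
    overlaps = sum(1 for (olo, ohi) in other_class_intervals
                   if lo <= ohi and olo <= hi)
    return 1.0 - overlaps / max(len(other_class_intervals), 1)

def class_score(instance, class_intervals, class_weights):
    """Sum the weights of every representative interval that contains the
    corresponding feature value; the class with the highest score wins."""
    score = 0.0
    for f, value in enumerate(instance):
        for (lo, hi), w in zip(class_intervals[f], class_weights[f]):
            if lo <= value <= hi:
                score += w
    return score
```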

- Machine Learning and Data Mining | Pp. 420-430