Publications catalog - books

AI 2007: Advances in Artificial Intelligence: 20th Australian Joint Conference on Artificial Intelligence, Gold Coast, Australia, December 2-6, 2007. Proceedings

Mehmet A. Orgun; John Thornton (eds.)

Conference: 20th Australian Joint Conference on Artificial Intelligence (AI). Gold Coast, QLD, Australia. December 2-6, 2007

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Artificial Intelligence (incl. Robotics); Mathematical Logic and Formal Languages; Data Mining and Knowledge Discovery; Information Systems Applications (incl. Internet); Information Storage and Retrieval; Computation by Abstract Devices

Availability
Detected institution: Not detected
Year of publication: 2007
Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-76926-2

Electronic ISBN

978-3-540-76928-6

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2007

Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Table of contents

A UAV Search and Rescue Scenario with Human Body Detection and Geolocalization

Patrick Doherty; Piotr Rudol

The use of Unmanned Aerial Vehicles (UAVs) which can operate autonomously in dynamic and complex operational environments is becoming increasingly common. The UAVTech Lab is pursuing a long-term research endeavour related to the development of future aviation systems which try to push the envelope in terms of using and integrating high-level deliberative or AI functionality with traditional reactive and control components in autonomous UAV systems. In order to carry out such research, one requires challenging mission scenarios which force such integration and development. In this paper, one of these challenging emergency services mission scenarios is presented. It involves search and rescue for injured civilians by UAVs. In leg I of the mission, UAVs scan designated areas and try to identify injured civilians. In leg II of the mission, an attempt is made to deliver medical and other supplies to identified victims. We show how far we have come in implementing and executing such a challenging mission in realistic urban scenarios.

- Invited Talks | Pp. 1-13

Social Agents

Norman Foo

Social agents are computational cognitive agents that have epistemic abilities besides the traditional ones of action, communication and other forms of interaction. There is increasing interest in the modelling of such agents. This talk will first outline selected aspects of social agents and potential applications. Then we proceed to an example of how a couple of aspects may be investigated, viz., agent trust based on experience.

- Invited Talks | Pp. 14-14

Globally Optimal Solutions to Vision Using Convex and Quasi-convex Optimization

Richard Hartley

Vision Geometry is the area of Computer Vision that deals with computing the geometry of the 3D world from sequences of images. It grew out of Photogrammetry, a field that goes back at least to the start of the 20th century. In the 1990s this field was transformed by the application of methods of Projective Geometry, leading to many new algorithms and the deployment of the new methods in a wide variety of applications.

The algorithmic basis for Vision Geometry still ultimately relied on a technique called "bundle adjustment", involving iterative refinement of initial solutions by Newton or Gauss-Newton methods. These had the disadvantage of often finding local rather than global minima.

Recent work has focussed on applying different optimization techniques, particularly Convex Optimization techniques to attempt to find guaranteed global solutions to these problems. I will talk about progress in this area, through the use of methods such as Second Order Cone Programming, branch-and-bound fractional programming and semi-definite programming.
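
For reference, the two optimization styles contrasted in the talk can be summarized in their standard textbook forms (not taken from the talk itself). Bundle adjustment refines the parameters x by Gauss-Newton steps on the reprojection residual r(x), which only converges locally, whereas a second-order cone program (SOCP) admits efficient computation of its global optimum:

\[
x_{k+1} \;=\; x_k - \bigl(J_k^{\top} J_k\bigr)^{-1} J_k^{\top} r(x_k),
\qquad
J_k = \left.\frac{\partial r}{\partial x}\right|_{x_k}
\]

\[
\text{(SOCP)}\qquad \min_{x}\; f^{\top} x
\quad \text{s.t.} \quad
\bigl\| A_i x + b_i \bigr\|_2 \;\le\; c_i^{\top} x + d_i,
\quad i = 1,\dots,m.
\]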

- Invited Talks | Pp. 15-15

Efficient Learning of Bayesian Network Classifiers

Alexandra M. Carvalho; Arlindo L. Oliveira; Marie-France Sagot

We introduce a Bayesian network classifier less restrictive than the Naive Bayes (NB) and Tree Augmented Naive Bayes (TAN) classifiers. Considering that learning an unrestricted network is unfeasible, the proposed classifier is confined to be consistent with the breadth-first search order of an optimal TAN. We propose an efficient algorithm to learn such classifiers for any score that decomposes over the network structure, including the well-known scores based on information theory and Bayesian scoring functions. We show that the induced classifier always scores better than or the same as the NB and TAN classifiers. Experiments on modeling transcription factor binding sites show that, in many cases, the improved scores translate into increased classification accuracy.
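
For reference, a score "decomposes over the network structure" when it is a sum of per-variable terms that depend only on each variable and its parent set in the graph G; the BIC score is the standard example (textbook form, not specific to this paper):

\[
\mathrm{score}(G : D) \;=\; \sum_{i=1}^{n} \mathrm{score}_i\bigl(X_i, \Pi_{X_i}^{G} : D\bigr),
\qquad
\mathrm{BIC}(G : D) \;=\; \sum_{i=1}^{n}\Bigl[\,\sum_{j=1}^{N}\log \hat{P}\bigl(x_i^{(j)} \mid \pi_i^{(j)}\bigr) \;-\; \tfrac{\log N}{2}\,\bigl|\Theta_{X_i\mid\Pi_{X_i}^{G}}\bigr|\Bigr],
\]

where N is the number of instances and |Θ| counts the free parameters of the local distribution of X_i.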

- Machine Learning | Pp. 16-25

Mixture of the Robust L1 Distributions and Its Applications

Junbin Gao; Richard Y. Xu

Recently a robust probabilistic L1-PCA model was introduced in [1] by replacing the conventional Gaussian noise model with the Laplacian L1 model. Due to the heavy-tail characteristics of the L1 distribution, the proposed model is more robust against data outliers. In this paper, we generalize L1-PCA into a mixture of L1 distributions so that the model can be used for possibly multi-clustered data. For model learning we use the property that the L1 density can be expanded as a superposition of an infinite number of Gaussian densities, which yields tractable Bayesian learning and inference based on a variational EM-type algorithm.
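
The expansion referred to is the standard scale-mixture representation of the Laplace (L1) density, a textbook identity rather than a result of this paper: placing an exponential prior on the variance of a Gaussian and integrating it out recovers the Laplacian,

\[
\frac{1}{2b}\,\exp\!\Bigl(-\frac{|x-\mu|}{b}\Bigr)
\;=\;
\int_{0}^{\infty} \mathcal{N}\bigl(x \mid \mu, \tau\bigr)\,\lambda e^{-\lambda \tau}\, d\tau,
\qquad \lambda = \frac{1}{2b^{2}},
\]

which is what allows the Laplacian model to be handled with Gaussian machinery inside a variational EM scheme.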

- Machine Learning | Pp. 26-35

On Using a Hierarchy of Twofold Resource Allocation Automata to Solve Stochastic Nonlinear Resource Allocation Problems

Ole-Christoffer Granmo; B. John Oommen

Recent trends in AI attempt to solve difficult NP-hard problems using intelligent techniques so as to obtain approximately-optimal solutions. In this paper, we consider a class of such problems which fall under the general umbrella of "knapsack-like" problems, and demonstrate how they can be solved fast and accurately using a hierarchy of Learning Automata (LA). In a multitude of real-world situations, resources must be allocated based on incomplete and noisy information, which often renders traditional resource allocation techniques ineffective. This paper addresses one such class of problems, namely, stochastic nonlinear resource allocation problems. We first present a completely new on-line LA system, the Hierarchy of Twofold Resource Allocation Automata (H-TRAA). The primitive component of the H-TRAA is a Twofold Resource Allocation Automaton (TRAA), which in itself possesses novelty in the field of LA. For both the TRAA and the H-TRAA, we then provide formal convergence results. Finally, we demonstrate empirically that the H-TRAA provides faster convergence compared to state-of-the-art. Indeed, in contrast to state-of-the-art, the H-TRAA scales sub-linearly. As a result, we believe that the H-TRAA opens avenues for handling demanding real-world applications, such as the allocation of resources in large-scale web monitoring.
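
As background on the learning-automata machinery involved, below is a minimal sketch of a generic two-action linear reward-inaction (L_RI) automaton allocating a stream of resource units between two consumers with unknown reward probabilities. This is a standard textbook construction shown for illustration only; it is not the authors' TRAA or H-TRAA.

```python
import random


class TwoActionLRI:
    """Generic two-action linear reward-inaction (L_RI) learning automaton.

    A standard construction used here only to illustrate the LA principle
    behind resource allocation schemes such as the TRAA/H-TRAA.
    """

    def __init__(self, learning_rate=0.05):
        self.p = 0.5            # probability of choosing action 0
        self.lr = learning_rate

    def choose(self):
        return 0 if random.random() < self.p else 1

    def update(self, action, rewarded):
        # L_RI: move probability mass toward the chosen action on reward,
        # do nothing on penalty ("inaction").
        if not rewarded:
            return
        if action == 0:
            self.p += self.lr * (1.0 - self.p)
        else:
            self.p -= self.lr * self.p


def simulate(reward_probs=(0.8, 0.4), steps=5000, seed=0):
    """Allocate a stream of resource units between two consumers whose
    (unknown) unit-reward probabilities are given by `reward_probs`."""
    random.seed(seed)
    automaton = TwoActionLRI()
    for _ in range(steps):
        a = automaton.choose()
        rewarded = random.random() < reward_probs[a]
        automaton.update(a, rewarded)
    return automaton.p


if __name__ == "__main__":
    # The probability of favouring consumer 0 should approach 1, since it
    # yields rewards more often than consumer 1.
    print("P(choose consumer 0) ~", round(simulate(), 3))
```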

- Machine Learning | Pp. 36-47

An Investigation into the Interaction Between Feature Selection and Discretization: Learning How and When to Read Numbers

Sumukh Ghodke; Timothy Baldwin

Pre-processing is an important part of machine learning, and has been shown to significantly improve the performance of classifiers. In this paper, we take a selection of pre-processing methods, focusing specifically on discretization and feature selection, and empirically examine their combined effect on classifier performance. In our experiments, we take 11 standard datasets and a selection of standard machine learning algorithms, namely one-R, ID3, naive Bayes, and IB1, and explore the impact of different forms of pre-processing on each combination of dataset and algorithm. We find that in general the combination of wrapper-based forward selection and naive supervised methods of discretization yields consistently above-baseline results.
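
A minimal sketch of the kind of pipeline the paper compares, using scikit-learn stand-ins (unsupervised equal-frequency binning and a categorical naive Bayes wrapper) in place of the Weka-style learners and supervised discretization actually evaluated; dataset, parameters, and bin counts here are illustrative assumptions.

```python
# Hypothetical illustration with scikit-learn stand-ins; not the paper's
# exact experimental protocol.
from sklearn.datasets import load_iris
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import CategoricalNB
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import KBinsDiscretizer

X, y = load_iris(return_X_y=True)

# Step 1: discretize numeric attributes (equal-frequency binning as a simple
# stand-in for the paper's naive supervised discretization).
discretize = KBinsDiscretizer(n_bins=5, encode="ordinal", strategy="quantile")

# Step 2: wrapper-based forward selection, scored by the classifier itself.
# min_categories guards against bins unseen in a training fold.
select = SequentialFeatureSelector(
    CategoricalNB(min_categories=5),
    n_features_to_select=2,
    direction="forward",
    cv=5,
)

pipeline = Pipeline([
    ("discretize", discretize),
    ("select", select),
    ("classify", CategoricalNB(min_categories=5)),
])

scores = cross_val_score(pipeline, X, y, cv=10)
print("10-fold accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```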

- Machine Learning | Pp. 48-57

Safe State Abstraction and Reusable Continuing Subtasks in Hierarchical Reinforcement Learning

Bernhard Hengst

Hierarchical reinforcement learning methods have not been able to simultaneously abstract and reuse subtasks with discounted value functions. The contribution of this paper is to introduce two completion functions that jointly decompose the value function hierarchically to solve this problem. The significance of this result is that the benefits of hierarchical reinforcement learning can be extended to discounted value functions and to continuing (infinite-horizon) reinforcement learning problems. This paper demonstrates the method with an algorithm that discovers subtasks automatically. An example is given where the optimum policy requires a subtask never to terminate.
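
For context, completion-function decompositions in the MAXQ family (the setting such approaches build on) write the value of invoking a child subtask a within a parent task i as the child's own value plus a completion term; in its standard form,

\[
Q^{\pi}(i, s, a) \;=\; V^{\pi}(a, s) \;+\; C^{\pi}(i, s, a),
\qquad
V^{\pi}(i, s) \;=\; Q^{\pi}\bigl(i, s, \pi_i(s)\bigr),
\]

where C^π(i, s, a) is the expected reward for completing task i after subtask a terminates. The abstract's contribution is a pair of completion functions that make such a decomposition work under discounting and for continuing tasks; the form shown above is only the standard starting point.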

- Machine Learning | Pp. 58-67

Local Learning Algorithm for Markov Blanket Discovery

Shunkai Fu; Michel Desmarais

Learning of the Markov blanket can be regarded as an optimal solution to the feature selection problem. In this paper, we propose a local learning algorithm, called Breadth-First search of MB (BFMB), to induce the Markov blanket (MB) without having to learn a Bayesian network first. It is demonstrated to be (1) easy to understand and proven sound in theory; (2) data efficient, by making full use of the knowledge of the underlying topology of the MB; (3) fast, by relying on fewer data passes and conditional independence tests than other approaches; and (4) scalable to thousands of variables due to local learning. Empirical results on BFMB, along with the known Iterative Association Markov Blanket (IAMB) and Parents and Children based Markov Boundary (PCMB) algorithms, show that (i) BFMB significantly outperforms IAMB in data efficiency and accuracy of discovery given the same amount of instances available, and (ii) BFMB inherits all the merits of PCMB, but reaches a higher accuracy level using only around 20% and 60% of the number of data passes and conditional tests, respectively, used by PCMB.
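
A rough, self-contained sketch of the IAMB-style grow-shrink scheme that BFMB is compared against (not the BFMB algorithm itself), using empirical conditional mutual information as the conditional independence test; the threshold and the synthetic data are illustrative assumptions.

```python
import numpy as np


def cmi(data, x, y, z):
    """Empirical conditional mutual information I(X; Y | Z) in nats.

    `data` is an (n_samples, n_vars) array of small non-negative integers;
    `x` and `y` are column indices and `z` is a list of column indices.
    """
    # Encode the conditioning set as a single joint symbol per sample.
    zkey = np.zeros(len(data), dtype=np.int64)
    for col in z:
        zkey = zkey * (data[:, col].max() + 1) + data[:, col]
    value = 0.0
    for zval in np.unique(zkey):
        mask = zkey == zval
        pz = mask.mean()
        xs, ys = data[mask, x], data[mask, y]
        for xv in np.unique(xs):
            for yv in np.unique(ys):
                pxy = np.mean((xs == xv) & (ys == yv))
                if pxy > 0:
                    px, py = np.mean(xs == xv), np.mean(ys == yv)
                    value += pz * pxy * np.log(pxy / (px * py))
    return value


def iamb(data, target, threshold=0.01):
    """Grow-shrink (IAMB-style) estimate of the Markov blanket of `target`."""
    n_vars = data.shape[1]
    mb = []
    # Growing phase: repeatedly add the variable most associated with the
    # target, conditioned on the current blanket estimate.
    while True:
        candidates = [v for v in range(n_vars) if v != target and v not in mb]
        if not candidates:
            break
        scores = [cmi(data, v, target, mb) for v in candidates]
        best = int(np.argmax(scores))
        if scores[best] <= threshold:
            break
        mb.append(candidates[best])
    # Shrinking phase: drop variables made independent of the target by the rest.
    for v in list(mb):
        rest = [u for u in mb if u != v]
        if cmi(data, v, target, rest) <= threshold:
            mb.remove(v)
    return mb


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 2000
    x0 = rng.integers(0, 2, n)                                 # parent of y
    x1 = rng.integers(0, 2, n)                                 # parent of y
    spam = rng.integers(0, 2, n)                               # irrelevant variable
    y = ((x0 + x1 + (rng.random(n) < 0.1)) >= 1).astype(int)   # noisy OR of x0, x1
    x2 = (y ^ (rng.random(n) < 0.1)).astype(int)               # noisy child of y
    data = np.column_stack([x0, x1, x2, spam, y])
    print("Estimated Markov blanket of column 4:", sorted(iamb(data, 4)))
```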

- Machine Learning | Pp. 68-79

A Novel Ordering-Based Greedy Bayesian Network Learning Algorithm on Limited Data

Feng Liu; Fengzhan Tian; Qiliang Zhu

Existing algorithms for learning Bayesian networks (BNs) require a lot of computation on high-dimensional itemsets, which affects accuracy, especially on limited datasets, and takes a large amount of time. To alleviate this problem, we propose a novel BN learning algorithm, OMRMRG (Ordering-based Max-Relevance and Min-Redundancy Greedy algorithm). OMRMRG presents an ordering-based greedy search method with a greedy pruning procedure, applies the Max-Relevance and Min-Redundancy feature selection method, and proposes a Local Bayesian Increment function according to the Bayesian Information Criterion (BIC) formula and the likelihood property of overfitting. Experimental results show that the OMRMRG algorithm has much better efficiency and accuracy than most existing BN learning algorithms on limited datasets.
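
For reference, the two standard ingredients named in the abstract are the BIC score and the max-relevance/min-redundancy (mRMR) selection criterion, shown here in their usual textbook forms rather than the paper's modified versions:

\[
\mathrm{BIC}(G : D) \;=\; \log P\bigl(D \mid \hat{\theta}_G, G\bigr) \;-\; \frac{\log N}{2}\,\dim(G),
\qquad
f^{*} \;=\; \arg\max_{f \notin S} \Bigl[\, I(f; C) \;-\; \frac{1}{|S|}\sum_{s \in S} I(f; s) \Bigr],
\]

where N is the number of instances, dim(G) the number of free parameters of the network G, C the class variable, S the set of already-selected features, and I(·;·) mutual information.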

- Machine Learning | Pp. 80-89