Publications catalog - books

Advances in Artificial Intelligence: 19th Conference of the Canadian Society for Computational Studies of Intelligence, Canadian AI 2006, Quebec City, Quebec, Canada, June 7-9, 2006, Proceedings

Luc Lamontagne; Mario Marchand (eds.)

In conference: 19th Conference of the Canadian Society for Computational Studies of Intelligence (Canadian AI). Quebec City, QC, Canada. June 7, 2006 - June 9, 2006

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Artificial Intelligence

Availability

Detected institution: not detected. Publication year: 2006. Browse at: SpringerLink.

Information

Resource type:

books

Print ISBN

978-3-540-34628-9

Electronic ISBN

978-3-540-34630-2

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2006

Publication rights information

© Springer-Verlag Berlin Heidelberg 2006

Table of contents

Discriminative vs. Generative Classifiers for Cost Sensitive Learning

Chris Drummond

This paper experimentally compares the performance of discriminative and generative classifiers for cost sensitive learning. There is some evidence that learning a discriminative classifier is more effective for a traditional classification task. This paper explores the advantages and disadvantages of using a generative classifier when the misclassification costs, and class frequencies, are not fixed. The paper details experiments built around commonly used algorithms modified to be cost sensitive. This allows a clear comparison to the same algorithm used to produce a discriminative classifier. The paper compares the performance of these different variants over multiple data sets and for the full range of misclassification costs and class frequencies. It concludes that although some of these variants are better than a single discriminative classifier, the right choice of training set distribution plus careful calibration are needed to make them competitive with multiple discriminative classifiers.

- Supervised and Unsupervised Learning | Pp. 479-490
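
The comparison hinges on making standard classifiers cost-sensitive; a common way to do this for any probabilistic classifier is to shift the decision threshold according to the misclassification costs. A minimal sketch of that idea, assuming a binary task with fixed costs (the dataset and models here are illustrative placeholders, not the paper's experimental setup):

```python
# Threshold shifting: a simple way to make probabilistic classifiers
# cost-sensitive. Illustrative only; not the paper's exact protocol.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression  # discriminative
from sklearn.naive_bayes import GaussianNB           # generative
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

C_FP, C_FN = 1.0, 5.0             # misclassification costs
threshold = C_FP / (C_FP + C_FN)  # Bayes-optimal cutoff on P(y=1|x)

for model in (LogisticRegression(max_iter=1000), GaussianNB()):
    p = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    pred = (p >= threshold).astype(int)
    cost = (C_FP * ((pred == 1) & (y_te == 0)).sum()
            + C_FN * ((pred == 0) & (y_te == 1)).sum())
    print(type(model).__name__, "total cost:", cost)
```

How well each classifier tracks a moving threshold depends on the quality of its probability estimates, which is where the calibration issue raised in the abstract enters.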

The k Best-Paths Approach to Approximate Dynamic Programming with Application to Portfolio Optimization

Nicolas Chapados; Yoshua Bengio

We describe a general method to transform a non-Markovian sequential decision problem into a supervised learning problem using a k-best-paths algorithm. We consider an application in financial portfolio management where we can train a controller to directly optimize a Sharpe Ratio (or other risk-averse non-additive) utility function. We illustrate the approach by demonstrating experimental results using a kernel-based controller architecture that would not normally be considered in traditional reinforcement learning or approximate dynamic programming.

- Supervised and Unsupervised Learning | Pp. 491-502
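
The key obstacle the paper addresses is that the Sharpe ratio is a non-additive utility: it depends on the whole return trajectory rather than decomposing into per-step rewards. A minimal sketch of that property (the numbers are illustrative):

```python
import numpy as np

def sharpe_ratio(returns, eps=1e-12):
    # Mean return divided by its volatility over the WHOLE trajectory;
    # because of the std term it cannot be split into per-step rewards,
    # which is what motivates the k-best-paths reformulation.
    r = np.asarray(returns, dtype=float)
    return r.mean() / (r.std() + eps)

# Same average return, very different risk, hence different Sharpe:
print(sharpe_ratio([0.010, 0.011, 0.009, 0.010]))    # steady -> high
print(sharpe_ratio([0.050, -0.030, 0.040, -0.020]))  # volatile -> low
```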

Learning Naive Bayes for Probability Estimation by Feature Selection

Liangxiao Jiang; Harry Zhang

Naive Bayes is a well-known effective and efficient classification algorithm, but its probability estimation is poor. In many applications, however, accurate probability estimation is often required in order to make optimal decisions. Usually, probability estimation is measured by conditional log likelihood (CLL). Some learning algorithms have recently been proposed to extend naive Bayes for high CLL, such as ERL [8, 9] and BNC-2P [10]. Unfortunately, their computational complexity is relatively high. Is there a simple but effective and efficient approach to improve the probability estimation of naive Bayes? In this paper, we propose to use feature selection for this purpose. More precisely, a search process is conducted to select a subset of attributes, and then a naive Bayes classifier is deployed on the selected attribute set. In fact, feature selection has been successfully applied to naive Bayes and achieves significant improvement in classification accuracy. Among the feature selection algorithms for naive Bayes, the selective Bayesian classifier (SBC) by Langley et al. [13] demonstrates good performance. In this paper, we first study the performance of SBC in terms of probability estimation, and then propose an improved SBC algorithm, SBC-CLL, in which the CLL score is used directly for attribute selection instead of classification accuracy. Our experiments show that both SBC and SBC-CLL achieve significant improvement over naive Bayes, and that SBC-CLL outperforms SBC substantially, in probability estimation measured by CLL. Our work provides an efficient and surprisingly effective approach to improve the probability estimation of naive Bayes.

- Supervised and Unsupervised Learning | Pp. 503-514
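
A rough sketch of the CLL-guided selection idea: greedily add the attribute whose inclusion most improves the conditional log likelihood of a naive Bayes model on held-out data. GaussianNB and the greedy forward search below are assumptions for illustration; the paper works with discrete attributes and its own search procedure:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def cll(model, X, y):
    # Conditional log likelihood: sum over instances of log P(y_i | x_i).
    # Assumes integer class labels 0..k-1 matching model.classes_.
    return model.predict_log_proba(X)[np.arange(len(y)), y].sum()

def select_features_by_cll(X_tr, y_tr, X_val, y_val):
    selected, remaining = [], list(range(X_tr.shape[1]))
    best = -np.inf
    while remaining:
        # Try adding each remaining attribute; keep the best by CLL.
        score, f = max(
            (cll(GaussianNB().fit(X_tr[:, selected + [f]], y_tr),
                 X_val[:, selected + [f]], y_val), f)
            for f in remaining)
        if score <= best:  # no further improvement: stop
            break
        best = score
        selected.append(f)
        remaining.remove(f)
    return selected
```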

Lazy Averaged One-Dependence Estimators

Liangxiao Jiang; Harry Zhang

Naive Bayes is a probability-based classification model built on the conditional independence assumption. In many real-world applications, however, this assumption is often violated. Responding to this fact, researchers have made a substantial effort to improve the accuracy of naive Bayes by weakening the conditional independence assumption. The most recent such work is averaged one-dependence estimators (AODE) [15], which demonstrates good classification performance. In this paper, we propose a novel lazy learning algorithm, lazy AODE (LAODE for short), by extending AODE. For a given test instance, LAODE first expands the training data by adding some copies (clones) of each training instance according to its similarity to the test instance, and then uses the expanded training data to build an AODE classifier to classify the test instance. We experimentally test our algorithm in the Weka system [16], using the 36 UCI data sets [11] recommended by Weka [17], and compare it to naive Bayes [3], AODE [15], and LBR [19]. The experimental results show that LAODE significantly outperforms all the other algorithms in the comparison.

- Supervised and Unsupervised Learning | Pp. 515-525
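
The lazy step is easy to sketch: weight (or clone) each training instance by its similarity to the test instance before fitting the base model. AODE itself is not in scikit-learn, so the sketch below substitutes CategoricalNB and uses sample weights as an equivalent shortcut to literal cloning; the similarity function is a plausible guess, not the paper's exact choice:

```python
import numpy as np
from sklearn.naive_bayes import CategoricalNB  # stand-in for AODE

def laode_style_predict(X_tr, y_tr, x_test):
    # Similarity = number of matching attribute values (assumes
    # integer-coded categorical attributes).
    sims = (X_tr == x_test).sum(axis=1)
    # Each instance appears once plus `sims` clones; sample weights
    # realize the cloning without copying rows.
    weights = 1.0 + sims.astype(float)
    model = CategoricalNB().fit(X_tr, y_tr, sample_weight=weights)
    return model.predict(x_test.reshape(1, -1))[0]
```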

Probabilistic Inference Trees for Classification and Ranking

Jiang Su; Harry Zhang

In many applications, an accurate ranking of instances is as important as accurate classification. However, it has been observed that traditional decision trees perform well in classification but poorly in ranking. In this paper, we point out that there is an inherent obstacle preventing traditional decision trees from achieving both accurate classification and ranking. We propose to understand decision trees from a probabilistic perspective, and to use probability theory to compute probability estimates and perform classification and ranking. The new model is called probabilistic inference trees (PITs). Our experiments show that the PIT learning algorithm performs well in both ranking and classification. More precisely, it significantly outperforms the state-of-the-art decision tree learning algorithms designed for ranking, such as C4.4 [10] and Ling and Yan’s algorithm [6], and performs competitively with traditional decision tree learning algorithms, such as C4.5, in classification. Our research provides a novel algorithm for applications in which both accurate classification and ranking are desired.

- Supervised and Unsupervised Learning | Pp. 526-537
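
The gap the paper targets is easy to reproduce: a standard decision tree can score well on accuracy (classification) while scoring poorly on AUC, the usual measure of ranking quality based on its probability estimates. A small sketch of that two-metric evaluation, with a placeholder dataset:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, roc_auc_score

X, y = make_classification(n_samples=2000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, tree.predict(X_te))
auc = roc_auc_score(y_te, tree.predict_proba(X_te)[:, 1])
print(f"accuracy={acc:.3f}  AUC={auc:.3f}")
# Fully grown trees emit near-0/1 leaf probabilities, which hurts AUC;
# PITs (like C4.4-style smoothing) aim at better-calibrated estimates.
```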

Parameter Estimation of One-Class SVM on Imbalance Text Classification

Ling Zhuang; Honghua Dai

Compared with conventional two-class learning schemes, one-class classification uses only a single class for training. Applying one-class classification to the minority class of an imbalanced data set has been shown to achieve better performance than the two-class approach. In this paper, in order to make the best use of all the available information during the learning procedure, we propose a general framework which first uses the minority class for training in the one-class classification stage, and then uses both the minority and majority classes for estimating the generalization performance of the constructed classifier. Based upon this generalization performance measurement, a parameter search algorithm selects the best parameter settings for the classifier. Experiments on UCI and Reuters text data show that a one-class SVM embedded in this framework achieves much better performance than the standard one-class SVM alone and other learning schemes, such as one-class naive Bayes, one-class nearest neighbour, and neural networks.

- Supervised and Unsupervised Learning | Pp. 538-549
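
The framework itself is straightforward to sketch: fit a one-class SVM on minority-class data only, then score each candidate parameter setting on a validation set containing both classes. The parameter grid and the F1 criterion below are illustrative assumptions, not the paper's exact measurement:

```python
from sklearn.svm import OneClassSVM
from sklearn.metrics import f1_score

def fit_one_class_svm(X_minority, X_val, y_val):
    # X_minority: training data from the minority class only.
    # X_val, y_val: validation set with minority (+1) and majority (-1),
    # matching OneClassSVM's inlier/outlier label convention.
    best_model, best_score = None, -1.0
    for nu in (0.05, 0.1, 0.2, 0.5):
        for gamma in ("scale", 0.01, 0.1, 1.0):
            model = OneClassSVM(nu=nu, gamma=gamma).fit(X_minority)
            pred = model.predict(X_val)  # +1 = inlier, -1 = outlier
            score = f1_score(y_val, pred, pos_label=1)
            if score > best_score:
                best_model, best_score = model, score
    return best_model
```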

MITS: A Mixed-Initiative Intelligent Tutoring System for Sudoku

Allan Caine; Robin Cohen

In this paper, we propose a model called MITS, a Mixed-Initiative Intelligent Tutoring System for Sudoku. Extrapolating from theory for tutoring in scholastic subjects and in the game of chess, we develop a model for tutoring the game of Sudoku using a mixed-initiative paradigm. Moreover, our aim is to design a system which not only proposes moves to make but also gives advice on why a particular move ought to be made. We operate in a decision-theoretic framework that measures the benefits and costs of interacting with students who are learning the game. The tutor takes the initiative to interact when the student lacks knowledge and is making moves that have low utility, but it also interacts when the student takes the initiative to elicit further input on the game he or she is trying to play. We illustrate our graphical user interface prototype and take the reader through a sample session. As a result, we present a system that is useful not only for gaining insight into how to tutor students in strategy games, but also into how to support mixed-initiative interaction in tutorial settings.

- User Modeling | Pp. 550-561
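
A hedged sketch of the decision-theoretic trigger described above: the tutor takes the initiative only when the expected benefit of intervening (driven by the student's estimated knowledge and the utility of their intended move) outweighs the cost of interrupting. Every quantity here is an illustrative assumption, not the paper's actual model:

```python
def should_interact(p_student_knows, move_utility,
                    hint_benefit=1.0, interruption_cost=0.4):
    # Expected benefit is high when the student likely lacks knowledge
    # (low p_student_knows) and the intended move has low utility.
    expected_benefit = (1.0 - p_student_knows) * (1.0 - move_utility) \
                       * hint_benefit
    return expected_benefit > interruption_cost

print(should_interact(p_student_knows=0.2, move_utility=0.1))  # True
print(should_interact(p_student_knows=0.9, move_utility=0.8))  # False
```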