Publications catalog - books
Symbolic and Quantitative Approaches to Reasoning with Uncertainty: 9th European Conference, ECSQARU 2007, Hammamet, Tunisia, October 31 - November 2, 2007. Proceedings
Khaled Mellouli (ed.)
Conference: 9th European Conference on Symbolic and Quantitative Approaches to Reasoning with Uncertainty (ECSQARU). Hammamet, Tunisia. October 31, 2007 - November 2, 2007
Abstract/Description – provided by the publisher
Not available.
Keywords – provided by the publisher
Artificial Intelligence (incl. Robotics); Mathematical Logic and Formal Languages
Availability
Detected institution | Publication year | Browse | Download | Request
---|---|---|---|---
Not detected | 2007 | SpringerLink | |
Information
Resource type:
books
Print ISBN
978-3-540-75255-4
Electronic ISBN
978-3-540-75256-1
Publisher
Springer Nature
Country of publication
United Kingdom
Publication date
2007
Publication rights information
© Springer-Verlag Berlin Heidelberg 2007
Table of contents
Discriminative vs. Generative Learning of Bayesian Network Classifiers
Guzmán Santafé; Jose A. Lozano; Pedro Larrañaga
Discriminative learning of Bayesian network classifiers has recently received considerable attention from the machine learning community. This interest has yielded several publications proposing new methods for the discriminative learning of both structure and parameters. In this paper we present an empirical study that illustrates how discriminative learning performs with respect to generative learning using simple Bayesian network classifiers such as naive Bayes or TAN, and we discuss when and why discriminative learning is preferred. We also analyze how log-likelihood and conditional log-likelihood scores guide the learning process of Bayesian network classifiers.
- Learning and Classification | Pp. 453-464
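The log-likelihood and conditional log-likelihood scores contrasted in this abstract can be made concrete on a discrete naive Bayes model. A minimal sketch, not code from the paper; the Laplace smoothing and data layout are illustrative assumptions:

```python
import numpy as np

def fit_naive_bayes(X, y, n_vals, n_cls, alpha=1.0):
    """Generative learning: smoothed maximum-likelihood estimates of the
    class prior and of one conditional table per discrete feature."""
    prior = np.bincount(y, minlength=n_cls) + alpha
    prior = prior / prior.sum()
    cond = []  # cond[j][c, v] approximates P(X_j = v | C = c)
    for j in range(X.shape[1]):
        counts = np.zeros((n_cls, n_vals))
        for c in range(n_cls):
            counts[c] = np.bincount(X[y == c, j], minlength=n_vals) + alpha
        cond.append(counts / counts.sum(axis=1, keepdims=True))
    return prior, cond

def ll_and_cll(prior, cond, X, y):
    """Log-likelihood sum_i log P(y_i, x_i) versus conditional log-likelihood
    sum_i log P(y_i | x_i); discriminative learning optimizes the latter."""
    lp = np.tile(np.log(prior), (len(X), 1))  # lp[i, c] = log P(c, x_i)
    for j, table in enumerate(cond):
        lp += np.log(table[:, X[:, j]]).T
    picked = lp[np.arange(len(y)), y]
    return picked.sum(), (picked - np.logaddexp.reduce(lp, axis=1)).sum()
```

Since log P(x) is never positive, the conditional log-likelihood of a dataset is never below its joint log-likelihood under the same parameters.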
PADUA Protocol: Strategies and Tactics
Maya Wardeh; Trevor Bench-Capon; Frans Coenen
In this paper we describe an approach to classifying objects in a domain where classifications are uncertain, using a novel combination of argumentation and data mining. Classification is the topic of a dialogue game between two agents, based on an argument scheme and critical questions designed for use by agents whose knowledge of the domain comes from data mining. Each agent has its own set of examples which it can mine to find arguments, based on association rules, for and against a classification of a new instance. These arguments are exchanged in order to classify the instance. We describe the dialogue game, and in particular discuss the strategic considerations which agents can use to select their moves. Different strategies give rise to games with different characteristics, some having the flavour of persuasion dialogues and others that of deliberation dialogues.
- Learning and Classification | Pp. 465-476
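The association rules each agent mines can be scored with the standard support and confidence measures. A small illustrative helper, assuming a simple (feature dict, class label) data layout; this is not the PADUA implementation:

```python
def support_and_confidence(examples, premises, label):
    """Score the rule `premises -> class = label` over (features, class)
    pairs: support is the fraction of examples matching the premises,
    confidence the fraction of those that carry the predicted label."""
    matching = [cls for feats, cls in examples
                if all(feats.get(a) == v for a, v in premises.items())]
    support = len(matching) / len(examples)
    confidence = (sum(1 for cls in matching if cls == label) / len(matching)
                  if matching else 0.0)
    return support, confidence
```

In a dialogue, an agent would advance such a rule only if its confidence clears a threshold, and the opponent counters by mining a rule with higher confidence for the opposing classification.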
A Semi-naive Bayes Classifier with Grouping of Cases
Joaquín Abellán; Andrés Cano; Andrés R. Masegosa; Serafín Moral
In this work, we present a semi-naive Bayes classifier that searches for dependent attributes using different filter approaches. To prevent the number of cases of the compound attributes from growing too large, a grouping procedure is applied each time two variables are merged. This procedure attempts to group two or more cases of the new variable into a single value. In an empirical study, we show that this approach outperforms the naive Bayes classifier in a very robust way and reaches the performance of Pazzani's semi-naive Bayes [1] without the high cost of a wrapper search.
- Learning and Classification | Pp. 477-488
Split Criterions for Variable Selection Using Decision Trees
Joaquín Abellán; Andrés R. Masegosa
In the field of attribute mining, several feature selection methods have recently appeared, indicating that sets of decision trees learnt from a data set can be a useful tool for selecting variables that are relevant and informative with respect to a main class variable. With this aim, in this study, we argue that a new split criterion for building decision trees outperforms other classic split criteria for variable selection purposes. We present an experimental study on a wide and varied set of databases, using a single decision tree with each split criterion to select variables for the Naive Bayes classifier.
- Learning and Classification | Pp. 489-500
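The classic split criteria such a study compares against include entropy-based information gain. A minimal sketch of that baseline criterion, illustrative only and not the new criterion proposed in the paper:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(values, labels):
    """Information gain of splitting `labels` on a discrete attribute:
    class entropy minus the split's weighted average entropy."""
    n = len(labels)
    split = {}
    for v, y in zip(values, labels):
        split.setdefault(v, []).append(y)
    remainder = sum(len(part) / n * entropy(part) for part in split.values())
    return entropy(labels) - remainder
```

A perfectly predictive attribute recovers the full class entropy as gain, while an irrelevant one scores zero, which is what makes the criterion usable for ranking variables.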
Inference and Learning in Multi-dimensional Bayesian Network Classifiers
Peter R. de Waal; Linda C. van der Gaag
We describe the family of multi-dimensional Bayesian network classifiers which include one or more class variables and multiple feature variables. The family does not require that every feature variable is modelled as being dependent on every class variable, which results in better modelling capabilities than families of models with a single class variable. For the family of multi-dimensional classifiers, we address the complexity of the classification problem and show that it can be solved in polynomial time for classifiers with a graphical structure of bounded treewidth over their feature variables and a restricted number of class variables. We further describe the learning problem for the subfamily of fully polytree-augmented multi-dimensional classifiers and show that its computational complexity is polynomial in the number of feature variables.
- Learning and Classification | Pp. 501-511
Combining Decision Trees Based on Imprecise Probabilities and Uncertainty Measures
Joaquín Abellán; Andrés R. Masegosa
In this article, we present a method for combining classification trees obtained by a simple method from the imprecise Dirichlet model (IDM) and uncertainty measures on closed and convex sets of probability distributions, also known as credal sets. Our combination method has two main characteristics: it obtains a high percentage of correct classifications using only a small number of classification trees, and it can be parallelized for application to very large databases.
- Learning and Classification | Pp. 512-523
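Under the imprecise Dirichlet model with hyperparameter s, an observed count n_i out of N yields a probability interval rather than a point estimate: [n_i/(N+s), (n_i+s)/(N+s)]. A direct transcription of that standard IDM formula (not code from the paper):

```python
def idm_interval(count, total, s=1.0):
    """Lower and upper probability of one category under the imprecise
    Dirichlet model: the credal set spans all Dirichlet posteriors whose
    prior strength s is shared out among the categories."""
    return count / (total + s), (count + s) / (total + s)
```

The point estimate count/total always lies inside the interval, and the interval width s/(N+s) shrinks as more data arrive, which is how the IDM expresses diminishing imprecision.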
Belief Classification Approach Based on Generalized Credal EM
Imene Jraidi; Zied Elouedi
The EM algorithm is widely used in supervised and unsupervised classification when applied to mixture model parameter estimation. It has been shown that this method can be applied to partially supervised classification, where the knowledge about the class labels of the observations may be imprecise and/or uncertain. In this paper, we propose to generalize this approach to cope with imperfect knowledge at two levels: the attribute values of the observations and their class labels. This knowledge is represented by belief functions as understood in the Transferable Belief Model. We show that this approach can be applied when the data are categorical and generated from multinomial mixtures.
- Learning and Classification | Pp. 524-535
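The plain, fully unsupervised EM loop that the paper generalizes can be sketched for a mixture of multivariate Bernoulli components over binary data. This is a simplified stand-in for the multinomial mixtures and the credal generalization discussed in the abstract, with all names and hyperparameters assumed:

```python
import numpy as np

def em_bernoulli_mixture(X, k, n_iter=50, seed=0):
    """EM for a mixture of k multivariate Bernoulli components on binary
    data X (rows are observations). Returns mixing weights, per-component
    feature probabilities, and soft responsibilities."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(k, 1.0 / k)                       # mixing weights
    theta = rng.uniform(0.25, 0.75, size=(k, d))   # P(x_j = 1 | component)
    for _ in range(n_iter):
        # E-step: responsibilities r[i, c] proportional to pi_c * P(x_i | c)
        log_p = (X @ np.log(theta).T
                 + (1 - X) @ np.log(1 - theta).T
                 + np.log(pi))
        log_p -= np.logaddexp.reduce(log_p, axis=1, keepdims=True)
        r = np.exp(log_p)
        # M-step: re-estimate parameters from the soft counts
        nk = r.sum(axis=0)
        pi = nk / n
        theta = np.clip((r.T @ X) / nk[:, None], 1e-6, 1 - 1e-6)
    return pi, theta, r
```

The paper's setting replaces the hard-or-absent labels implicit in this loop with belief functions on both the labels and the attribute values, so the E-step responsibilities are constrained by the available partial knowledge.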
Logical Compilation of Bayesian Networks with Discrete Variables
Michael Wachter; Rolf Haenni
This paper presents a new direction in the area of compiling Bayesian networks. The principal idea is to encode the network by logical sentences and to compile the resulting encoding into an appropriate form. From there, all possible queries are answerable in linear time relative to the size of the logical form. Therefore, our approach is a potential solution for real-time applications of probabilistic inference with limited computational resources. The underlying idea is similar to both the differential and the weighted model counting approach to inference in Bayesian networks, but at the core of the proposed encoding we avoid the transformation from discrete to binary variables. This alternative encoding enables a more natural solution.
- Bayesian Networks and Probabilistic Reasoning | Pp. 536-547
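The weighted model counting view the authors relate their encoding to can be illustrated with a deliberately naive enumerator: a probabilistic query reduces to summing products of literal weights over satisfying assignments, and compiled forms make that sum computable in time linear in the compiled size rather than exponential in the number of variables. A toy sketch whose encoding conventions (signed-integer literals, per-literal weight map) are assumptions:

```python
from itertools import product

def weighted_model_count(n_vars, weights, clauses):
    """Naive WMC by full enumeration. Variables are numbered 1..n_vars; a
    clause is a list of signed ints (literal i true, -i false); `weights`
    maps (variable, truth_value) to that literal's weight."""
    total = 0.0
    for assignment in product([False, True], repeat=n_vars):
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            w = 1.0
            for i, value in enumerate(assignment, start=1):
                w *= weights[(i, value)]
            total += w
    return total
```

For a single binary variable with weights 0.3/0.7, the unconstrained count is 1.0 and adding the unit clause [1] yields 0.3, i.e. the probability that the variable is true.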
Local Monotonicity in Probabilistic Networks
Johan Kwisthout; Hans Bodlaender; Gerard Tel
It is often desirable that a probabilistic network is monotone, e.g., more severe symptoms increase the likelihood of a more serious disease. Unfortunately, determining whether a network is monotone is highly intractable. Often, approximation algorithms are employed that work on a local scale. For these algorithms, the monotonicity of the arcs (rather than the network as a whole) is determined. However, in many situations monotonicity depends on the ordering of the values of the nodes, which is sometimes rather arbitrary. Thus, it is desirable to order the values of these variables such that as many arcs as possible are monotone. We introduce the concept of local monotonicity, discuss the computational complexity of finding an optimal ordering of the values of the nodes in a network, and sketch a branch-and-bound exact algorithm to find such an optimal solution.
- Bayesian Networks and Probabilistic Reasoning | Pp. 548-559
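With the values of both variables ordered, monotonicity of a single arc amounts to first-order stochastic dominance between the rows of the child's conditional probability table. A minimal check of that condition, an illustrative sketch rather than the branch-and-bound algorithm from the paper:

```python
from itertools import accumulate

def arc_is_isotone(cpt, tol=1e-12):
    """cpt[p][c] = P(child = c | parent = p), with values of both variables
    ordered low to high. The arc is isotone if raising the parent value
    never makes low child values more likely, i.e. cumulative probabilities
    do not increase from one row to the next."""
    cums = [list(accumulate(row)) for row in cpt]
    return all(hi <= lo + tol
               for lower, higher in zip(cums, cums[1:])
               for lo, hi in zip(lower, higher))
```

Reordering a variable's values can flip this verdict, which is exactly why the paper treats the ordering itself as the object to optimize.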
Independence Decomposition in Dynamic Bayesian Networks
Ildikó Flesch; Peter Lucas
Dynamic Bayesian networks are a special type of Bayesian network that explicitly incorporates the dimension of time. They can be divided into repetitive and non-repetitive networks. Repetitiveness implies that the set of random variables of the network and their independence relations are the same at each time step. Due to their structural symmetry, repetitive networks are easier to use and are, therefore, often taken as the standard. However, repetitiveness is a very strong assumption, which normally does not hold, as particular dependences and independences may only hold at certain time steps.
In this paper, we propose a new framework for independence modularisation in dynamic Bayesian networks. Our theory provides a method for separating atemporal and temporal independence relations, and offers a practical approach to building dynamic Bayesian networks that are possibly non-repetitive. A composition operator for temporal and atemporal independence relations is proposed and its properties are studied. Experimental results obtained by learning dynamic Bayesian networks from real data show that this framework offers a more accurate way for knowledge representation in dynamic Bayesian networks.
- Bayesian Networks and Probabilistic Reasoning | Pp. 560-571