Publications catalogue - books



Scalable Uncertainty Management: First International Conference, SUM 2007, Washington, DC, USA, October 10-12, 2007. Proceedings

Henri Prade; V. S. Subrahmanian (eds.)

Conference: 1st International Conference on Scalable Uncertainty Management (SUM), Washington, DC, USA, October 10-12, 2007

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Not available.

Availability

Detected institution: none
Year of publication: 2007
Available via: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-75407-7

Electronic ISBN

978-3-540-75410-7

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© Springer-Verlag Berlin Heidelberg 2007

Table of contents

Probabilistic Planning in Hybrid Probabilistic Logic Programs

Emad Saad

In this paper, we present a new approach to probabilistic planning based on logic programming, by relating probabilistic planning to hybrid probabilistic logic programs with probabilistic answer set semantics [32]. We show that any probabilistic planning problem can be translated into a hybrid probabilistic logic program whose probabilistic answer sets correspond to trajectories of the planning problem, with associated probabilities. We formally prove the correctness of our approach. Moreover, we show that the complexity of finding a plan for a probabilistic planning problem in our approach is NP-complete. In addition, we show that any probabilistic planning problem can be encoded as a classical logic program with answer set semantics, whose answer sets correspond to valid trajectories of the planning problem. We also show that probabilistic planning problems can be encoded as propositional satisfiability problems.
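
For illustration only, the following minimal Python sketch is not the paper's translation into hybrid probabilistic logic programs; it merely enumerates the trajectories of a tiny, made-up probabilistic planning problem together with their probabilities, to make the notion of a trajectory's probability concrete. The domain, action names and goal state are hypothetical.

```python
# Hypothetical toy domain: each action maps a state to a probability
# distribution over successor states.
ACTIONS = {
    "move": {"s0": [("s1", 0.8), ("s0", 0.2)],
             "s1": [("goal", 0.9), ("s1", 0.1)]},
    "wait": {"s0": [("s0", 1.0)], "s1": [("s1", 1.0)]},
}

def trajectories(state, plan):
    """Enumerate (trajectory, probability) pairs for a fixed action sequence."""
    if not plan:
        return [([state], 1.0)]
    action, rest = plan[0], plan[1:]
    results = []
    for succ, p in ACTIONS[action].get(state, []):
        for traj, q in trajectories(succ, rest):
            results.append(([state] + traj, p * q))
    return results

if __name__ == "__main__":
    plan = ["move", "move"]
    for traj, prob in trajectories("s0", plan):
        print(traj, round(prob, 3))
    # Probability that the plan reaches the hypothetical goal state:
    print(sum(p for t, p in trajectories("s0", plan) if t[-1] == "goal"))
```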

Pp. 1-15

Top-k Retrieval in Description Logic Programs Under Vagueness for the Semantic Web

Thomas Lukasiewicz; Umberto Straccia

Description logics (DLs) and logic programs (LPs) are important representation languages for the Semantic Web. In this paper, we address an emerging problem in such languages, namely, the problem of evaluating ranked top-k queries. Specifically, we show how to compute the top-k answers in a data-complexity tractable combination of DLs and LPs under vagueness.
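
As a purely illustrative aside (not the authors' algorithm), the sketch below shows what a ranked top-k answer set looks like when each candidate answer carries a vagueness degree from the DL side and one from the LP side, combined with a t-norm; the candidates and degrees are invented.

```python
import heapq

# Hypothetical scored answers: each candidate carries a degree from the DL side
# (e.g., fuzzy concept membership) and from the LP side (e.g., rule body score).
candidates = {
    "hotel_a": (0.9, 0.7),
    "hotel_b": (0.6, 0.95),
    "hotel_c": (0.8, 0.8),
    "hotel_d": (0.4, 0.5),
}

def combine(dl_degree, lp_degree):
    # Goedel t-norm (minimum); product or Lukasiewicz t-norms are alternatives.
    return min(dl_degree, lp_degree)

def top_k(cands, k):
    scored = ((combine(*degs), name) for name, degs in cands.items())
    return heapq.nlargest(k, scored)

print(top_k(candidates, 2))  # e.g. [(0.8, 'hotel_c'), (0.7, 'hotel_a')]
```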

Pp. 16-30

A Fuzzy Set-Based Approach to Temporal Databases

J. Campaña; M. C. Garrido; N. Marín; O. Pons

The primary aim of temporal databases is to offer a common framework for those DB applications that need to store or handle different types of temporal data from a variety of sources, since they allow the concept of time to be unified from the point of view of meaning, representation and manipulation. Although at first sight the incorporation of time into a DB might appear to be a direct and even simple task, it is in fact quite complex: not only must new structures and specific operators be included, but the semantics of the classical manipulation sentences (insert, update or delete) must also be changed when temporal data are present. In addition, temporal information is not always as precise as desired, since it is affected by imprecision due to the use of natural language or to the nature of the source. In this paper, we deal with the problem of the update and query operations when time is expressed by means of a fuzzy interval of dates. Throughout the text, we will see how the delete and insert operations are particular cases of the update process and are therefore implicitly covered in the paper.
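
A minimal sketch, assuming a trapezoidal representation of a fuzzy interval of dates (one common choice; the paper's representation and its update semantics may differ), of how a membership degree for a given date could be computed. All dates below are invented.

```python
from datetime import date

def trapezoidal_membership(d, a, b, c, e):
    """Membership of date d in a fuzzy interval with support [a, e] and core [b, c]."""
    x = d.toordinal()
    a, b, c, e = (t.toordinal() for t in (a, b, c, e))
    if x < a or x > e:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:                      # rising edge
        return (x - a) / (b - a)
    return (e - x) / (e - c)       # falling edge

def membership_spring_2007(d):
    # Hypothetical valid time: "roughly from March to June 2007"
    return trapezoidal_membership(
        d, date(2007, 2, 15), date(2007, 3, 1), date(2007, 6, 30), date(2007, 7, 15))

print(membership_spring_2007(date(2007, 4, 10)))   # 1.0, inside the core
print(membership_spring_2007(date(2007, 7, 7)))    # between 0 and 1, falling edge
print(membership_spring_2007(date(2007, 9, 1)))    # 0.0, outside the support
```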

Pp. 31-44

Finding Most Probable Worlds of Probabilistic Logic Programs

Samir Khuller; Vanina Martinez; Dana Nau; Gerardo Simari; Amy Sliva; V. S. Subrahmanian

Probabilistic logic programs have primarily studied the problem of entailment of probabilistic atoms. However, there are some interesting applications where we are interested in finding a possible world that is most probable. Our first result shows that the problem of computing such "maximally probable worlds" (MPW) is intractable. We subsequently show that we can often greatly reduce the size of the linear program used in past work (by Ng and Subrahmanian) and yet solve the problem exactly. However, the intractability results still rule out efficient exact computation in general. We therefore also develop several heuristics to solve the MPW problem and report extensive experimental results on the accuracy and efficiency of such heuristics.
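
The toy sketch below is written in the spirit of the linear-programming formulation over possible worlds referred to above (Ng and Subrahmanian): it enumerates all worlds over a handful of atoms and, for a given world, maximises the probability that world can receive under the interval constraints. The atoms, conjunctions and intervals are made up, and the paper's precise MPW notion and heuristics differ from this brute-force version.

```python
from itertools import compress, product

import numpy as np
from scipy.optimize import linprog

# Hypothetical probabilistic knowledge base: conjunctions of atoms with
# probability intervals, i.e. P(conjunction) must lie in [low, high].
atoms = ["a", "b", "c"]
constraints = [({"a"}, 0.6, 0.9),
               ({"a", "b"}, 0.3, 0.5),
               ({"c"}, 0.2, 0.4)]

# Possible worlds are all truth assignments over the atoms.
worlds = [set(compress(atoms, bits)) for bits in product([0, 1], repeat=len(atoms))]

def max_world_probability(target_world):
    """Maximise P(target_world) subject to the interval constraints (one LP)."""
    n = len(worlds)
    A_ub, b_ub = [], []
    for conj, low, high in constraints:
        row = [1.0 if conj <= w else 0.0 for w in worlds]
        A_ub.append(row)                   # sum of p_w over satisfying worlds <= high
        b_ub.append(high)
        A_ub.append([-x for x in row])     # ... and >= low
        b_ub.append(-low)
    A_eq, b_eq = [np.ones(n)], [1.0]       # probabilities sum to 1
    c = [-1.0 if w == target_world else 0.0 for w in worlds]  # maximise p_target
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
    return -res.fun if res.success else None

# The "most probable world" in this toy sense is the world whose achievable
# probability is largest.
best = max(worlds, key=lambda w: max_world_probability(w) or 0.0)
print(best, max_world_probability(best))
```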

Pp. 45-59

Managing Uncertainty in Schema Matcher Ensembles

Anan Marie; Avigdor Gal

Schema matching is the task of matching between concepts describing the meaning of data in various heterogeneous, distributed data sources. With many heuristics to choose from, several tools have enabled the use of schema matcher ensembles, combining principles by which different schema matchers judge the similarity between concepts. In this work, we investigate means of estimating the uncertainty involved in schema matching and harnessing it to improve an ensemble outcome. We propose a model for schema matching, based on simple probabilistic principles. We then propose the use of machine learning in determining the best mapping and discuss its pros and cons. Finally, we provide a thorough empirical analysis, using both real-world and synthetic data, to test the proposed technique. We conclude that the proposed heuristic performs well, given an accurate modeling of uncertainty in matcher decision making.
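
As a toy illustration (not the authors' probabilistic model), the sketch below combines similarity matrices from several hypothetical matchers into a weighted ensemble score, extracts a greedy 1:1 mapping, and uses row entropy as a crude per-attribute uncertainty signal; the matchers, weights and entropy heuristic are all assumptions made for illustration.

```python
import numpy as np

# Hypothetical similarity matrices produced by three schema matchers
# (rows: attributes of schema S, columns: attributes of schema T).
name_matcher  = np.array([[0.9, 0.1, 0.2], [0.2, 0.8, 0.3], [0.1, 0.2, 0.7]])
type_matcher  = np.array([[0.7, 0.4, 0.4], [0.3, 0.9, 0.2], [0.2, 0.3, 0.6]])
value_matcher = np.array([[0.8, 0.2, 0.1], [0.1, 0.7, 0.4], [0.3, 0.1, 0.9]])

weights = np.array([0.5, 0.2, 0.3])   # ensemble weights, assumed given
ensemble = np.tensordot(
    weights, np.stack([name_matcher, type_matcher, value_matcher]), axes=1)

# Greedy 1:1 mapping: repeatedly pick the highest remaining similarity.
mapping, sim = {}, ensemble.copy()
for _ in range(min(sim.shape)):
    i, j = np.unravel_index(np.argmax(sim), sim.shape)
    mapping[i] = j
    sim[i, :] = -1.0
    sim[:, j] = -1.0

# A crude per-attribute uncertainty signal: entropy of the normalised row scores.
row_probs = ensemble / ensemble.sum(axis=1, keepdims=True)
entropy = -(row_probs * np.log(row_probs)).sum(axis=1)

print("mapping:", mapping)
print("row entropies (higher = more uncertain):", np.round(entropy, 3))
```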

Pp. 60-73

The Consistency Extractor System: Querying Inconsistent Databases Using Answer Set Programs

Monica Caniupan; Leopoldo Bertossi

We present the Consistency Extractor System, which uses answer set programming to compute consistent answers to first-order queries posed to relational databases that may be inconsistent with respect to their integrity constraints. Among other features, the system implements a technique to evaluate queries via disjunctive logic programs with stable model semantics that specify the repairs of the original database. We describe the methodology and the system, and also present some experimental results.
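
To make the repair semantics concrete, here is a brute-force Python illustration of repairs and consistent query answering under a key constraint; the Consistency Extractor System instead specifies repairs declaratively as disjunctive logic programs with stable model semantics. The relation, constraint and query are hypothetical.

```python
from itertools import groupby, product

# Hypothetical relation employee(name, dept) with key {name}; the instance
# violates the key because "ann" appears with two departments.
employee = [("ann", "sales"), ("ann", "hr"), ("bob", "it"), ("eve", "sales")]

# Group tuples that conflict on the key; a repair keeps exactly one tuple per group.
by_key = [list(g) for _, g in groupby(sorted(employee), key=lambda t: t[0])]
repairs = [set(choice) for choice in product(*by_key)]

def query(db):
    """Who works in sales?"""
    return {name for name, dept in db if dept == "sales"}

# Consistent answers: answers that hold in every repair.
consistent = set.intersection(*(query(r) for r in repairs))
print("repairs:", repairs)
print("consistent answers:", consistent)   # {'eve'}: 'ann' is in sales only in some repairs
```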

Pp. 74-88

Incomplete Statistical Information Fusion and Its Application to Clinical Trials Data

Jianbing Ma; Weiru Liu; Anthony Hunter

In medical clinical trials, overall trial results are highlighted in the abstracts of papers/reports. These results are summaries of the underlying statistical analysis, in which normal distributions are assumed most of the time. It is common for clinicians to focus on the information in the abstracts in order to review or integrate several clinical trial results that address the same or similar medical question(s). Therefore, developing techniques to merge results from clinical trials based on information in the abstracts is useful and important. In reality, the information in an abstract can either provide sufficient details about a normal distribution or only partial information about a distribution. In this paper, we first propose approaches to constructing normal distributions from both complete and incomplete statistical information in the abstracts. We then provide methods to merge these normal distributions (or sampling distributions). Following this, we investigate the conditions under which two normal distributions can be merged. Finally, we design an algorithm to sequence the merging of trial results to ensure that the most reliable trials are considered first.
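
A minimal sketch of one standard way to merge normal estimates, namely fixed-effect inverse-variance pooling, after reconstructing each normal from the mean and 95% confidence interval an abstract typically reports. The paper's own construction and merging conditions may differ, and the numbers below are invented.

```python
import math

def normal_from_ci(mean, low, high):
    """Recover (mean, standard error) from a mean and a symmetric 95% confidence interval."""
    z = 1.959964                      # ~97.5th percentile of the standard normal
    return mean, (high - low) / (2 * z)

def pool(estimates):
    """Fixed-effect (inverse-variance) pooling of independent normal estimates."""
    weights = [1.0 / se ** 2 for _, se in estimates]
    mean = sum(w * m for w, (m, _) in zip(weights, estimates)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return mean, se

# Hypothetical effect sizes from two trial abstracts: mean and 95% CI.
trial_1 = normal_from_ci(-2.1, -3.0, -1.2)
trial_2 = normal_from_ci(-1.5, -2.6, -0.4)

merged_mean, merged_se = pool([trial_1, trial_2])
print(round(merged_mean, 3), round(merged_se, 3))
```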

Pp. 89-103

Quality Measures in Uncertain Data Management

Ander de Keijzer; Maurice van Keulen

Many applications deal with data that is uncertain. Some examples are applications dealing with sensor information, data integration applications and healthcare applications. Instead of these applications having to deal with the uncertainty, it should be the responsibility of the DBMS to manage all data including uncertain data. Several projects do research on this topic. In this paper, we introduce four measures to be used to assess and compare important characteristics of data and systems: uncertainty density, answer decisiveness and adapted precision and recall measures.
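
The illustrative snippet below computes plausible expected-value style adaptations of such measures; the definitions are assumptions made for illustration and are not necessarily the ones proposed in the paper.

```python
# Hypothetical query result over uncertain data: each answer carries a probability,
# and for evaluation we know which answers are actually correct.
answers = {"a1": 0.9, "a2": 0.6, "a3": 0.2}     # answer -> probability
correct = {"a1", "a4"}                          # ground-truth correct answers

# Uncertainty density (illustrative): average distance of probabilities from certainty.
uncertainty_density = sum(min(p, 1 - p) for p in answers.values()) / len(answers)

# Probability-weighted precision and recall (illustrative adaptations).
expected_hits = sum(p for a, p in answers.items() if a in correct)
precision = expected_hits / sum(answers.values())
recall = expected_hits / len(correct)

print(round(uncertainty_density, 3), round(precision, 3), round(recall, 3))
```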

Pp. 104-115

Learning Different User Profile Annotated Rules for Fuzzy Preference Top-k Querying

A. Eckhardt; T. Horváth; P. Vojtáš

Uncertainty querying of large data sets can be handled by providing top-k answers according to a user's fuzzy ranking/scoring function. Usually, different users have different fuzzy scoring functions, that is, different user preference models. The main goal of this paper is to assign a preference model to a user automatically. To achieve this, we decompose the user's fuzzy ranking function into orderings of particular attributes and a combination function. To solve the problem of automatically assigning a user model, we design two algorithms: one for learning user preferences on particular attributes, and a second for learning the combination function. The methods were integrated into a Fagin-like top-k querying system with some new heuristics and tested.
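
The sketch below shows a Fagin-style threshold algorithm over per-attribute preference lists with a monotone combination function; the attribute scores and the fixed weighted-average combination stand in for the learned user preference model described above and are invented for illustration.

```python
import heapq

# Hypothetical per-attribute preference scores (in [0, 1], higher = better),
# each list sorted descending as the threshold algorithm requires.
price_list    = [("h3", 0.95), ("h1", 0.80), ("h2", 0.40), ("h4", 0.10)]
distance_list = [("h1", 0.90), ("h4", 0.85), ("h3", 0.50), ("h2", 0.30)]
scores = {  # random-access table: object -> (price score, distance score)
    "h1": (0.80, 0.90), "h2": (0.40, 0.30), "h3": (0.95, 0.50), "h4": (0.10, 0.85)}

def combine(attr_scores, weights=(0.6, 0.4)):
    """Monotone combination function; the weights stand in for a learned model."""
    return sum(w * s for w, s in zip(weights, attr_scores))

def threshold_algorithm(lists, k):
    top, seen = [], set()
    for entries in zip(*lists):                      # one round of sorted access
        for obj, _ in entries:
            if obj not in seen:
                seen.add(obj)
                heapq.heappush(top, (combine(scores[obj]), obj))
                if len(top) > k:
                    heapq.heappop(top)
        threshold = combine([s for _, s in entries]) # best score any unseen object could get
        if len(top) == k and min(top)[0] >= threshold:
            break
    return sorted(top, reverse=True)

print(threshold_algorithm([price_list, distance_list], k=2))
```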

Pp. 116-130

Composable Markov Building Blocks

Sander Evers; Maarten M. Fokkinga; Peter M. G. Apers

In situations where disjunct parts of the same process are described by their own first-order Markov models and only one model applies at a time (activity in one model coincides with non-activity in the other models), these models can be joined together into one. Under certain conditions, nearly all the information to do this is already present in the component models, and the transition probabilities for the joint model can be derived in a purely analytic fashion. This composability provides a theoretical basis for building scalable and flexible models for sensor data.
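
As an illustration only, the sketch below composes two first-order Markov chains over disjoint state sets into one transition matrix via explicit switch probabilities; this is not the paper's analytic derivation, which recovers nearly all of the needed quantities from the component models themselves. The states, matrices and switch probabilities are invented.

```python
import numpy as np

# Two hypothetical component models over disjoint state sets
# (e.g. two activities observed by the same sensors).
A_states = ["walk", "run"]
B_states = ["sit", "lie"]
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.8, 0.2],
              [0.5, 0.5]])

# Assumed switch probabilities: chance of leaving the currently active model
# at each step and entering the other model's first state.
switch_a, switch_b = 0.1, 0.05

states = A_states + B_states
n_a, n_b = len(A_states), len(B_states)
T = np.zeros((n_a + n_b, n_a + n_b))
T[:n_a, :n_a] = (1 - switch_a) * A          # stay within model A
T[:n_a, n_a] = switch_a                     # switch to model B's entry state
T[n_a:, n_a:] = (1 - switch_b) * B          # stay within model B
T[n_a:, 0] = switch_b                       # switch back to model A's entry state

assert np.allclose(T.sum(axis=1), 1.0)      # rows are proper distributions
print(states)
print(np.round(T, 3))
```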

Pp. 131-142