

Scalable Uncertainty Management: First International Conference, SUM 2007, Washington, DC, USA, October 10-12, 2007. Proceedings

Henri Prade; V. S. Subrahmanian (eds.)

Conference: 1st International Conference on Scalable Uncertainty Management (SUM), Washington, DC, USA, October 10-12, 2007

Abstract/description – provided by the publisher

Not available.

Keywords – provided by the publisher

Not available.

Availability

Detected institution: not detected
Publication year: 2007
Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-75407-7

Electronic ISBN

978-3-540-75410-7

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2007

Publication rights information

© Springer-Verlag Berlin Heidelberg 2007

Table of contents

Tractable Probabilistic Description Logic Programs

Thomas Lukasiewicz

We propose tractable probabilistic description logic programs (or probabilistic dl-programs) for the Semantic Web, which combine tractable description logics, normal programs under the answer set semantics, and probabilities. In particular, we introduce the total well-founded semantics for probabilistic dl-programs. Contrary to the previous answer set and well-founded semantics, it is defined for all probabilistic dl-programs and all probabilistic queries. Furthermore, tight (resp., tight literal) query processing under the total well-founded semantics coincides with tight (resp., tight literal) query processing under the previous well-founded (resp., answer set) semantics whenever the latter is defined. We then present an anytime algorithm for tight query processing in probabilistic dl-programs under the total well-founded semantics. We also show that tight literal query processing in probabilistic dl-programs under the total well-founded semantics can be done in polynomial time in the data complexity and is complete for EXP in the combined complexity. Finally, we describe an application of probabilistic dl-programs in probabilistic data integration for the Semantic Web.

Pp. 143-156

Valued Hesitation in Intervals Comparison

Meltem Öztürk; Alexis Tsoukiàs

The paper presents a valued extension of the recently introduced concept of interval order. The main idea is that, while comparing objects represented by intervals of values, there is a zone of hesitation between strict difference and strict similarity which can be modelled through valued relations. The paper presents suitable definitions of such valued relations fulfilling a number of interesting properties, and discusses the use of such a tool in data analysis and rough set theory.
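
To make the hesitation zone concrete, here is a minimal Python sketch: it grades the comparison of two plain intervals using the classical possibility-degree formula for interval ranking, splitting the result into strict preference and hesitation. The formulas and function names are illustrative stand-ins, not the valued relations defined in the paper.

    # Illustrative sketch: grade the comparison of two intervals with the
    # classical possibility-degree formula for interval ranking. These are
    # stand-in definitions, not the paper's valued relations.

    def degree_geq(a, b):
        """Degree in [0, 1] to which interval a = (a1, a2) is at least b = (b1, b2)."""
        a1, a2 = a
        b1, b2 = b
        width = (a2 - a1) + (b2 - b1)
        if width == 0:                       # both intervals degenerate: crisp test
            return 1.0 if a1 >= b1 else 0.0
        return max(0.0, min(1.0, (a2 - b1) / width))

    def valued_relations(a, b):
        """Split the comparison into strict preference each way plus hesitation."""
        p_ab = degree_geq(a, b)              # evidence that a is at least b
        p_ba = degree_geq(b, a)              # evidence that b is at least a
        hesitation = min(p_ab, p_ba)         # zone where both directions partly hold
        return {"a>b": p_ab - hesitation, "hesitation": hesitation, "b>a": p_ba - hesitation}

    print(valued_relations((5.0, 9.0), (4.0, 7.0)))  # overlap: graded hesitation
    print(valued_relations((8.0, 9.0), (4.0, 7.0)))  # disjoint: crisp strict preference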

Pp. 157-170

Aggregates in Generalized Temporally Indeterminate Databases

Octavian Udrea; Zoran Majkić; V. S. Subrahmanian

Dyreson and Snodgrass, as well as Dekhtyar et al., have provided a probabilistic model (as well as compelling example applications) for why there may be temporal indeterminacy in databases. In this paper, we first propose a formal model for aggregate computation in such databases when there is uncertainty not just in the temporal attribute, but also in the ordinary (non-temporal) attributes. We identify two types of aggregates, event-correlated and non-event-correlated, and provide efficient algorithms for both. We prove that our algorithms are correct, and we present experimental results showing that the algorithms work well in practice.
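
As a rough illustration of the setting (not the authors' algorithms), the sketch below computes an expected SUM aggregate over tuples that are uncertain both in the temporal attribute and in an ordinary attribute; the toy relation and function names are assumptions.

    # Minimal sketch (not the paper's algorithms): an expected-value SUM
    # aggregate over tuples that are uncertain both in the temporal attribute
    # (a distribution over time points) and in an ordinary attribute (a
    # distribution over values). The toy relation is an assumption.

    from collections import defaultdict

    def expected_value(dist):
        """Expected value of a discrete distribution {outcome: probability}."""
        return sum(v * p for v, p in dist.items())

    def expected_sum_per_time(tuples):
        """For each time point t, E[SUM at t] = sum over tuples of P(t) * E[value]."""
        result = defaultdict(float)
        for value_dist, time_dist in tuples:
            ev = expected_value(value_dist)
            for t, p in time_dist.items():
                result[t] += p * ev
        return dict(result)

    # Each tuple: (distribution over amounts, distribution over days).
    sales = [
        ({100: 0.7, 120: 0.3}, {1: 0.5, 2: 0.5}),  # amount ~106, day 1 or 2
        ({200: 1.0},           {2: 0.8, 3: 0.2}),  # amount 200, mostly day 2
    ]
    print(expected_sum_per_time(sales))            # {1: 53.0, 2: 213.0, 3: 40.0}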

Pp. 171-186

An Indexing Technique for Fuzzy Numerical Data

Carlos D. Barranco; Jesús R. Campaña; Juan M. Medina

This paper introduces an indexing technique for fuzzy numerical data which relies on the classical, well-known and widespread B+-tree index data structure. The proposed indexing technique is specifically devised to increase the performance of query processing when a possibility-measured flexible condition is involved. Because the proposal relies on an indexing data structure implemented in virtually every database management system, it is a good candidate to be used, with very low implementation effort, in a fuzzy database management system created as an extension of a classical one. The paper includes a performance analysis of the proposed indexing technique, in contrast with other techniques serving the same purpose, in order to evaluate the suitability of the proposal.
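
A minimal sketch of the general idea, under the assumption that each fuzzy value is a trapezoid indexed by the start of its support: an ordered structure (Python's bisect standing in for the B+-tree) prunes rows whose support cannot overlap the query's support, and exact possibility degrees are computed only for the survivors. The pruning rule and the sup-min possibility formula are standard, but their pairing here is an illustration, not the paper's exact technique.

    # Stand-in for the technique: order rows by support start, range-scan to
    # prune impossible matches, then score survivors with a possibility degree.

    import bisect

    def possibility(a, b):
        """Sup-min matching degree of trapezoids (a1, a2, a3, a4): 1 if the cores
        intersect, 0 if the supports are disjoint, the slope crossing otherwise."""
        a1, a2, a3, a4 = a
        b1, b2, b3, b4 = b
        if a4 <= b1 or b4 <= a1:
            return 0.0
        if a2 <= b3 and b2 <= a3:                  # cores [a2, a3] and [b2, b3] meet
            return 1.0
        if a3 < b2:                                # a lies to the left of b
            return (a4 - b1) / ((a4 - a3) + (b2 - b1))
        return (b4 - a1) / ((b4 - b3) + (a2 - a1))

    class FuzzyIndex:
        """Toy ordered index: rows kept sorted by the start of their support."""

        def __init__(self):
            self.starts = []                       # sorted support-start keys
            self.entries = []                      # (trapezoid, row_id), same order

        def insert(self, trap, row_id):
            i = bisect.bisect(self.starts, trap[0])
            self.starts.insert(i, trap[0])
            self.entries.insert(i, (trap, row_id))

        def query(self, q, threshold=0.0):
            """Rows possibly matching trapezoid q with degree above threshold."""
            hi = bisect.bisect_right(self.starts, q[3])  # starts beyond q cannot match
            hits = []
            for trap, row_id in self.entries[:hi]:
                if trap[3] >= q[0]:                # support overlap still possible
                    degree = possibility(trap, q)
                    if degree > threshold:
                        hits.append((row_id, degree))
            return hits

    idx = FuzzyIndex()
    idx.insert((18, 20, 22, 25), "row1")           # "around 21"
    idx.insert((40, 45, 45, 50), "row2")           # "around 45"
    print(idx.query((19, 21, 21, 23)))             # [('row1', 1.0)]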

Pp. 187-200

Combining Uncertain Outputs from Multiple Ontology Matchers

Ying Wang; Weiru Liu; David Bell

An ontology matching method (or matcher) aims at matching every entity (or concept) in one ontology to the most suitable entity (or entities) in another ontology. Since it is usually almost impossible to find a perfect match in the second ontology for every entity in the first, a matcher generally returns a set of possible matches with a weight (uncertainty) attached to each matched pair. To improve a matching result, several matchers can be used, and the matched results from these matchers are combined with suitable approaches. In this paper, we first propose two new matchers among the three matchers we use. We then address the need to deal with uncertainties in mapping by investigating how some uncertainty reasoning frameworks can be used to combine matching results. We apply both the Dempster-Shafer theory of evidence (DS theory) and possibility theory to merge the results computed by different matchers. Our experimental results and comparisons with related work indicate that integrating these theories to deal with uncertain ontology matching is a promising way to improve the overall matching results.
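
For the Dempster-Shafer part, here is a self-contained sketch of Dempster's rule of combination applied to two matchers' uncertain outputs for a single source concept; the candidate entities, mass values, and the assignment of residual ignorance to the whole frame are illustrative assumptions, not the paper's experimental setup.

    # Sketch: combine two matchers' mass functions with Dempster's rule.

    from itertools import product

    def dempster_combine(m1, m2):
        """Combine two mass functions whose focal elements are frozensets."""
        combined = {}
        conflict = 0.0
        for (a, wa), (b, wb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb                # mass on disjoint hypotheses
        if conflict >= 1.0:
            raise ValueError("total conflict: Dempster's rule is undefined")
        return {s: w / (1.0 - conflict) for s, w in combined.items()}

    # Frame of discernment: candidate target entities for one source concept;
    # each matcher puts its leftover ignorance on the whole frame.
    THETA = frozenset({"Person", "Employee", "Organization"})
    matcher_a = {frozenset({"Person"}): 0.6, frozenset({"Employee"}): 0.2, THETA: 0.2}
    matcher_b = {frozenset({"Person"}): 0.5, frozenset({"Organization"}): 0.3, THETA: 0.2}

    combined = dempster_combine(matcher_a, matcher_b)
    for focal, mass in sorted(combined.items(), key=lambda kv: -kv[1]):
        print(set(focal), round(mass, 3))          # {'Person'} dominates: 0.788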

Pp. 201-214

Preferred Database Repairs Under Aggregate Constraints

Sergio Flesca; Filippo Furfaro; Francesco Parisi

A framework for computing preferred repairs in numerical databases violating a given set of strong and weak aggregate constraints is introduced, based on a transformation into an Integer Linear Programming (ILP) instance. Aggregate constraints are linear inequalities on aggregates retrieved from the input data. While strong constraints express mandatory conditions, weak constraints define conditions which are expected to be satisfied, even though this is not mandatory. Thus, preferred repairs are repairs which make the data satisfy all the given strong constraints and as many weak constraints as possible. An experimental validation of the proposed approach is provided, demonstrating its effectiveness.
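
A toy rendering of the preference criterion, with brute-force search standing in for the paper's ILP encoding: a repair must satisfy the strong aggregate constraint, and among such repairs we prefer those satisfying the most weak constraints, breaking ties by total change. The data, constraints, and tie-breaking rule are assumptions for illustration.

    # Brute-force sketch of preferred repairs under aggregate constraints.

    from itertools import product

    data = {"dev": 60, "ops": 50, "qa": 10}        # reported spending per department

    def strong_ok(d):
        return sum(d.values()) == 100              # audited total (mandatory)

    weak = [
        lambda d: d["dev"] >= 50,                  # expected, not mandatory
        lambda d: d["qa"] >= 15,
    ]

    def preferred_repairs(data, domain=range(0, 101, 5)):
        best, best_key = [], None
        for values in product(domain, repeat=len(data)):
            repair = dict(zip(data, values))
            if not strong_ok(repair):
                continue
            satisfied = sum(w(repair) for w in weak)
            cost = sum(abs(repair[k] - data[k]) for k in data)
            key = (-satisfied, cost)               # most weak constraints, least change
            if best_key is None or key < best_key:
                best_key, best = key, [repair]
            elif key == best_key:
                best.append(repair)
        return best

    for r in preferred_repairs(data):
        print(r)                                   # repairs keeping both weak constraints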

Pp. 215-229

Consistent Data Integration in P2P Deductive Databases

L. Caroprese; E. Zumpano

Data integration in peer-to-peer (P2P) systems is concerned with the ability to physically connect autonomous sources (peers) in order to share and reuse information, and to create new information from existing information. In a P2P system, a query can be posed to any peer, and the answer is provided by integrating locally stored data with data provided by its neighbors. However, while collecting data for answering queries, imported data may corrupt the local database due to the violation of some integrity constraint, so inconsistencies have to be managed. This paper contributes a logic-based framework for data integration and query answering in a P2P environment. It builds on [11,12], in which the preferred weak model semantics, capturing a different perspective on P2P data integration, has been proposed: only data not violating integrity constraints are exchanged among peers by using mapping rules. The motivation of this work stems from the observation that the complexity of computing preferred weak models in [11,12] prevents the approach from being implemented in practical applications, so a more pragmatic solution is desirable for assigning semantics to a P2P system. First, the paper proposes a rewriting technique that allows modeling a P2P system as a single logic program whose stable models correspond to the preferred weak models of the system. Then, it presents a semantics that allows obtaining a deterministic model whose computation is polynomial time: a (partial) stable model obtained by evaluating, under a three-valued semantics, a logic program obtained from the rewriting. Finally, the paper provides results on the complexity of answering queries in a P2P system.

Pp. 230-243

Learning from Imprecise Granular Data Using Trapezoidal Fuzzy Set Representations

Ronald R. Yager

We discuss the role and benefits of using trapezoidal fuzzy representations of granular information. We focus on the use of level sets as a tool for implementing many operations involving trapezoidal fuzzy sets. Attention is particularly brought to the simplification afforded by the linearity of the trapezoid, which often allows us to perform operations on only two level sets. We investigate the classic learning algorithm in the case when our observations are granular objects represented as trapezoidal fuzzy sets. An important issue that arises is the adverse effect that very uncertain observations have on the quality of our estimates. We suggest an approach to addressing this problem that uses the specificity of the observations to control their effect. Throughout this work, particular emphasis is placed on the simplicity of working with trapezoids while still retaining a rich representational capability.
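
The two-level-set simplification can be seen in a few lines: because the alpha-cut of a trapezoid varies linearly in alpha, a linear operation such as addition is determined by the cuts at alpha = 0 (support) and alpha = 1 (core). The sketch below, including the crude area-based specificity measure, is an illustration of the idea rather than Yager's formulations.

    # Level-set view of trapezoids: two alpha-cuts suffice for linear operations.

    def alpha_cut(trap, alpha):
        """Alpha-cut of trapezoid (a, b, c, d), core [b, c], support [a, d]."""
        a, b, c, d = trap
        return (a + alpha * (b - a), d - alpha * (d - c))

    def add_trapezoids(t1, t2):
        """Add two trapezoids using only the alpha = 0 and alpha = 1 level sets."""
        s1, s2 = alpha_cut(t1, 0.0), alpha_cut(t2, 0.0)   # supports
        c1, c2 = alpha_cut(t1, 1.0), alpha_cut(t2, 1.0)   # cores
        return (s1[0] + s2[0], c1[0] + c2[0], c1[1] + c2[1], s1[1] + s2[1])

    def specificity(trap, scale_width):
        """Crude specificity: 1 minus the trapezoid's area relative to the scale,
        usable to down-weight very uncertain observations (an assumed measure)."""
        a, b, c, d = trap
        return 1.0 - ((d - a) + (c - b)) / (2.0 * scale_width)

    about_2 = (1.0, 2.0, 2.0, 3.0)
    about_5 = (3.0, 4.5, 5.5, 7.0)
    print(add_trapezoids(about_2, about_5))        # (4.0, 6.5, 7.5, 10.0): "about 7"
    print(specificity(about_2, 10.0), specificity(about_5, 10.0))  # sharper vs vaguer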

Pp. 244-254

Refining Aggregation Functions for Improving Document Ranking in Information Retrieval

Mohand Boughanem; Yannick Loiseau; Henri Prade

Classical information retrieval (IR) methods use the sum to aggregate term weights. In some cases, this may diminish the discriminating power between documents, because some information is lost in the aggregation. To cope with this problem, the paper presents an approach for ranking documents in IR based on a refined vector-based ordering technique taken from multiple-criteria analysis methods. Different vector representations of the retrieval status values are considered and compared. Moreover, another refinement of the sum-based evaluation, which controls whether a term is worth adding or not (in order to avoid noise effects), is considered. The proposal is evaluated on a benchmark collection that allows us to compare the effectiveness of the approach with respect to a classical one. The proposed method provides some improvement in precision w.r.t. the Mercure IR system.
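
One concrete refinement in this spirit (an illustration, not the paper's exact operators) is to break ties in the weight sum by comparing the term-weight vectors sorted in decreasing order, so a document with one strongly matching term is preferred to one with several weak matches:

    # Sketch: sum-based ranking refined by a leximax comparison of weight vectors.

    def ranking_key(weights):
        """Total weight first, then the decreasing weight vector for ties."""
        return (sum(weights), sorted(weights, reverse=True))

    docs = {
        "d1": [0.9, 0.1, 0.0],   # one strongly matching query term
        "d2": [0.4, 0.3, 0.3],   # several weak matches, same sum as d1
        "d3": [0.5, 0.4, 0.0],
    }
    for name in sorted(docs, key=lambda n: ranking_key(docs[n]), reverse=True):
        print(name, docs[name])  # order: d1, d2, d3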

Pp. 255-267

A Qualitative Bipolar Argumentative View of Trust

Henri Prade

The paper views trust as a matter of levels ranging from full trust to full distrust on a finite scale. The level of trust of an agent w.r.t. an information source or another agent may not be precisely assessed, due to lack of information. Assessing a level of trust is viewed as a decision problem, which is handled in an argumentative manner. The interest of an argumentation approach is the ability to articulate reasons for supporting trust or distrust. Moreover, the trust updating process takes advantage of the bipolar nature of the representation and is based on information fusion methods that are able to handle conflicts. The integration of these different features provides a novel view of trust evaluation.

Pp. 268-276