Publication catalogue - books
Logic Programming: 21st International Conference, ICLP 2005, Sitges, Spain, October 2-5, 2005, Proceedings
Maurizio Gabbrielli; Gopal Gupta (eds.)
Conference: 21st International Conference on Logic Programming (ICLP), Sitges, Spain, October 2-5, 2005
Abstract/Description - provided by the publisher
Not available.
Keywords - provided by the publisher
Not available.
Availability
| Detected institution | Publication year | Browse | Download | Request |
|---|---|---|---|---|
| Not detected | 2005 | SpringerLink | | |
Information
Resource type:
books
Print ISBN
978-3-540-29208-1
Electronic ISBN
978-3-540-31947-4
Publisher
Springer Nature
Country of publication
United Kingdom
Publication date
2005
Publication rights information
© Springer-Verlag Berlin Heidelberg 2005
Table of contents
doi: 10.1007/11562931_11
Parallelizing Union-Find in Constraint Handling Rules Using Confluence Analysis
Thom Frühwirth
Constraint Handling Rules (CHR) is a logical concurrent committed-choice rule-based language. Recently it was shown that the classical union-find algorithm can be implemented in CHR with optimal time complexity. Here we investigate whether a parallel implementation of this algorithm is also possible in CHR. The problem is hard for several reasons:
- Up to now, no parallel computation model for CHR was defined.
- Tarjan’s optimal union-find is known to be hard to parallelize.
- The parallel code should be as close as possible to the sequential one.
It turns out that confluence analysis of the sequential implementation gives almost all the information needed to parallelize the union-find algorithm under a rather general parallel computation model for CHR.
Pp. 113-127
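As background for the abstract above, Tarjan's optimal union-find (union by rank plus path compression) is the sequential algorithm whose CHR implementation the paper sets out to parallelize. A minimal Python sketch of that sequential algorithm (the paper itself works in CHR, so this is only for orientation):

```python
class UnionFind:
    """Tarjan-style union-find: union by rank plus path compression."""

    def __init__(self):
        self.parent = {}
        self.rank = {}

    def make(self, x):
        # Each new element starts as the root of its own singleton tree.
        self.parent[x] = x
        self.rank[x] = 0

    def find(self, x):
        # Path compression: point every node on the path at the root.
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])
        return self.parent[x]

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        # Union by rank: attach the shallower tree under the deeper one.
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
```

After `union("a", "b")`, `find("a")` and `find("b")` return the same representative; it is precisely the interleaving of such `find`/`union` steps that the paper's confluence analysis licenses for parallel execution.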
doi: 10.1007/11562931_12
An Optimised Semantic Web Query Language Implementation in Prolog
Jan Wielemaker
The Semantic Web is a rapidly growing research area aiming at the exchange of information over the World Wide Web. The Semantic Web is built on top of RDF, an XML-based language representing a triple-based data model. Higher languages such as the description logic based OWL language family are defined on top of RDF. Making inferences over triple collections is a promising application area for Prolog.
In this article we study query translation and optimisation in the context of the SeRQL RDF query language. Queries are translated to Prolog goals, which are optimised by reordering. We study the domain-specific issues of this general problem. Conjunctions are often large, but the danger of poor optimiser performance can be avoided by exploiting the nature of the triple store. We discuss the optimisation algorithms as well as the information required from the low-level storage engine.
Pp. 128-142
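The reordering idea can be illustrated with a toy greedy optimiser; this is an assumption for illustration, not the paper's actual algorithm or cost model. It orders triple patterns so that the pattern with the fewest estimated matches, given the variables bound so far, runs first; the cardinality statistics stand in for the information the abstract says must come from the storage engine:

```python
def estimate(pattern, bound, stats):
    # A pattern is an (s, p, o) triple; terms starting with '?' are
    # variables. 'stats' maps a predicate to a rough triple count, a
    # stand-in for statistics supplied by the triple store.
    s, p, o = pattern
    n = stats.get(p, 10**6)
    # Crude selectivity heuristic: each constant or already-bound
    # argument is assumed to shrink the match count by a factor of 10.
    for term in (s, o):
        if not term.startswith("?") or term in bound:
            n = max(1, n // 10)
    return n

def reorder(patterns, stats):
    # Greedy ordering: repeatedly pick the cheapest remaining pattern
    # under the current set of bound variables.
    bound, ordered = set(), []
    remaining = list(patterns)
    while remaining:
        best = min(remaining, key=lambda pat: estimate(pat, bound, stats))
        remaining.remove(best)
        ordered.append(best)
        bound |= {t for t in best if t.startswith("?")}
    return ordered
```

With statistics saying `foaf:name` triples are rarer than `rdf:type` triples, the name pattern is scheduled first, after which its bindings make the type pattern cheap.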
doi: 10.1007/11562931_13
A Distributed and Probabilistic Concurrent Constraint Programming Language
Luca Bortolussi; Herbert Wiklicky
We present a version of the CCP paradigm which is both distributed and probabilistic. We consider networks with a fixed number of nodes, each possessing a local and independent constraint store. While the local computations evolve asynchronously, following the usual rules of (probabilistic) CCP, communications among different nodes are synchronous. Different objects can be exchanged through channels: constraints, agents, and channels themselves. In addition, all these activities are embedded in a probabilistic scheme based on a discrete model of time, both locally and globally. Finally, we enhance the language with the capability of performing an automatic remote synchronization of variables belonging to different constraint stores.
Pp. 143-158
doi: 10.1007/11562931_14
HYPROLOG: A New Logic Programming Language with Assumptions and Abduction
Henning Christiansen; Veronica Dahl
We present HYPROLOG, a novel integration of Prolog with assumptions and abduction, which is implemented in and partly borrows syntax from Constraint Handling Rules (CHR) for integrity constraints. Assumptions are a mechanism inspired by linear logic and taken over from Assumption Grammars. The language shows a novel flexibility in the interaction between the different paradigms, including all additional built-in predicates and constraint solvers that may be available. Assumptions and abduction are especially useful for language processing, and we show how HYPROLOG works seamlessly together with the grammar notation provided by the underlying Prolog system. An operational semantics is given which complies with standard declarative semantics for the “pure” sublanguages, while for the full HYPROLOG language it must be taken as a definition. The implementation is straightforward and appears to provide the most efficient known implementation of abduction; the price, however, is a limited use of negation. The main difference with respect to previous implementations of abduction is that we avoid any level of meta-interpretation by having Prolog execute the deductive steps directly and by treating abducibles (and assumptions as well) as CHR constraints.
Pp. 159-173
doi: 10.1007/11562931_15
Abduction of Linear Arithmetic Constraints
Michael J. Maher
Abduction is usually carried out on partially-defined predicates. In this paper we investigate abduction applied to fully-defined predicates, specifically linear arithmetic constraints over the real numbers. Abduction in this context has application to query answering using views and type inference, and potential relevance to analysis of concurrent/constraint/logic programs. We show that only rarely do abduction problems over linear arithmetic constraints have unique most general answers. We characterize the cases where most general answers exist. In general there may be infinitely many maximally general answers, or even answers that are not represented by maximally general answers. We take steps towards representing such answers finitely.
Pp. 174-188
doi: 10.1007/11562931_16
Towards Implementations for Advanced Equivalence Checking in Answer-Set Programming
Hans Tompits; Stefan Woltran
In recent work, a general framework for specifying program correspondences under the answer-set semantics has been defined. The framework makes it possible to define different notions of equivalence, including the well-known notions of strong and uniform equivalence, as well as refined equivalence notions based on the projection of answer sets, where not all parts of an answer set are of relevance (like, e.g., the removal of auxiliary letters). In the general case, deciding the correspondence of two programs lies on the fourth level of the polynomial hierarchy, and therefore this task can (presumably) not be efficiently reduced to answer-set programming. In this paper, we describe an approach to computing program correspondences in this general framework by means of linear-time constructible reductions to quantified propositional logic. We can thus use extant solvers for the latter language as back-end inference engines for computing program correspondence problems. We also describe how our translations provide a method to construct counterexamples in case a program correspondence does not hold.
Pp. 189-203
doi: 10.1007/11562931_17
Hybrid Probabilistic Logic Programs with Non-monotonic Negation
Emad Saad; Enrico Pontelli
In [22], a new Hybrid Probabilistic Logic Programs framework has been proposed, and a new semantics has been developed to enable encoding of and reasoning about real-world applications. In this paper, the language of the Hybrid Probabilistic Logic Programs framework of [22] is extended to allow non-monotonic negation, and two alternative semantics are defined: stable probabilistic model semantics and probabilistic well-founded semantics. These generalize the stable model semantics and well-founded semantics of traditional normal logic programs, and they reduce to the semantics of the original Hybrid Probabilistic Logic Programs framework of [22] for programs without negation. This is the first time that two different semantics for Hybrid Probabilistic Programs with non-monotonic negation, as well as their relationships, have been described. This development provides a foundational ground for developing computational methods for computing the proposed semantics. Furthermore, it clarifies how to characterize non-monotonic negation in probabilistic logic programming frameworks for commonsense reasoning.
Pp. 204-220
doi: 10.1007/11562931_18
Reducing Inductive Definitions to Propositional Satisfiability
Nikolay Pelov; Eugenia Ternovska
The FO(ID) logic is an extension of classical first-order logic with a uniform representation of various forms of inductive definitions. The definitions are represented as sets of rules and they are interpreted by two-valued well-founded models. For a large class of combinatorial and search problems, knowledge representation in FO(ID) offers a viable alternative to the paradigm of Answer Set Programming. The main reasons are that (i) the logic is an extension of classical logic and (ii) the semantics of the language is based on well-understood principles of mathematical induction.
In this paper, we define a reduction from the propositional fragment of FO(ID) to SAT. The reduction is based on a novel characterization of two-valued well-founded models using a set of inequality constraints on level mappings associated with the atoms. We also show how the reduction to SAT can be adapted for logic programs under the stable model semantics. Our experiments show that, when using a state-of-the-art SAT solver, both reductions are competitive with other answer-set programming systems, both direct implementations and SAT-based ones.
Pp. 221-234
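For the stable-model case mentioned in the abstract, the semantics being encoded can be pinned down with a brute-force checker; this is a hypothetical mini-checker for ground normal programs, not the paper's SAT reduction. An interpretation is a stable model iff it equals the least model of its Gelfond-Lifschitz reduct:

```python
from itertools import combinations

def reduct(rules, model):
    # rules: list of (head, positive_body, negative_body) triples of atoms.
    # Gelfond-Lifschitz reduct: drop rules whose negative body intersects
    # the candidate model; strip the negative body from the rest.
    return [(h, pos) for h, pos, neg in rules if not (neg & model)]

def least_model(definite):
    # Least model of a definite program: fixpoint of the
    # immediate-consequence operator T_P.
    m = set()
    changed = True
    while changed:
        changed = False
        for h, pos in definite:
            if pos <= m and h not in m:
                m.add(h)
                changed = True
    return m

def stable_models(rules, atoms):
    # Enumerate all interpretations (exponential, for tiny programs only)
    # and keep those that reproduce themselves through the reduct.
    models = []
    for r in range(len(atoms) + 1):
        for subset in combinations(sorted(atoms), r):
            m = set(subset)
            if least_model(reduct(rules, m)) == m:
                models.append(m)
    return models
```

For the classic two-rule program `p :- not q.` and `q :- not p.` this yields the two expected stable models, {p} and {q}; the paper's contribution is to capture the same test inside a SAT formula via level mappings rather than by enumeration.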
doi: 10.1007/11562931_19
Symbolic Support Graph: A Space Efficient Data Structure for Incremental Tabled Evaluation
Diptikalyan Saha; C. R. Ramakrishnan
In an earlier paper, we described a data structure, called the support graph, for efficient incremental evaluation of tabled logic programs. The support graph records the dependencies between answers in the tables and is crucial for efficiently propagating changes to the tables when facts are deleted. Incremental computation with support graphs is hundreds of times faster than from-scratch evaluation for small changes in the program. However, the graph typically grows faster than the tables themselves, making it impractical to maintain the full support graph for large applications.
In this paper we present a data structure, called symbolic support graph, which represents support information compactly. For a variety of useful tabled logic programs, the size of the symbolic support graph grows no faster than the table size. We demonstrate its effectiveness using a large application: a logic-programming-based points-to analyzer for C programs. The incremental analyzer shows very good scalability in terms of space usage, and is hundreds of times faster than from-scratch analysis for small changes to the program.
Pp. 235-249
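The support-propagation idea from the earlier paper can be sketched as follows; this is an illustrative toy for acyclic derivations, far simpler than the symbolic support graph the paper introduces. Each derived answer records the sets of premises (facts or other answers) that support it, and deleting a fact recursively deletes any answer left with no surviving support:

```python
from collections import defaultdict

class SupportGraph:
    """Toy explicit support graph; names and API are illustrative."""

    def __init__(self):
        self.supports = defaultdict(set)  # answer -> set of frozenset supports
        self.used_in = defaultdict(set)   # premise -> answers it supports

    def add(self, answer, support):
        # Record one derivation of 'answer' from the premises in 'support'.
        sup = frozenset(support)
        self.supports[answer].add(sup)
        for premise in sup:
            self.used_in[premise].add(answer)

    def delete(self, fact):
        # Remove every support mentioning the deleted premise; an answer
        # left with no support is deleted too, recursively. (Assumes
        # acyclic derivations, unlike the paper's full machinery.)
        for answer in list(self.used_in.pop(fact, ())):
            remaining = {s for s in self.supports.get(answer, set())
                         if fact not in s}
            if remaining:
                self.supports[answer] = remaining
            else:
                self.supports.pop(answer, None)
                self.delete(answer)

    def holds(self, answer):
        return bool(self.supports.get(answer))
```

Deleting `edge(a,b)` below kills both `reach(a,b)` and the answer derived through it, while `reach(b,c)` survives; the paper's problem is that storing such supports explicitly can dwarf the tables, hence the symbolic representation.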
doi: 10.1007/11562931_20
Dynamic Mixed-Strategy Evaluation of Tabled Logic Programs
Ricardo Rocha; Fernando Silva; Vítor Santos Costa
Tabling is an implementation technique that improves the declarativeness and expressiveness of Prolog by reusing answers to subgoals. During tabled execution, several decisions have to be made; these are determined by the scheduling strategy. Whereas a strategy can achieve very good performance for certain applications, for others it might add overheads and even lead to unacceptable inefficiency. The ability to use multiple strategies within the same evaluation can be a means of achieving the best possible performance. In this work, we present how the YapTab system was designed to support dynamic mixed-strategy evaluation of the two most successful tabling scheduling strategies: batched scheduling and local scheduling.
Pp. 250-264
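A loose Python analogy for the two strategies (not YapTab itself; the names are illustrative): under batched scheduling a subgoal hands each answer to its consumer as soon as it is found, while under local scheduling the subgoal's table is completed before any answer is returned.

```python
def solve_batched(produce):
    # Eager propagation: the consumer sees each answer as it is derived
    # and may stop early, leaving the producer's work unfinished.
    for answer in produce():
        yield answer

def solve_local(produce):
    # Complete the table first, then release all answers at once.
    table = list(produce())
    yield from table
```

A consumer that needs only the first answer triggers a single derivation step under the batched version but fills the entire table under the local one, which mirrors the trade-off that motivates mixing both strategies in one evaluation.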