KI 2006: Advances in Artificial Intelligence: 29th Annual German Conference on AI, KI 2006, Bremen, Germany, June 14-17, 2006. Proceedings
Christian Freksa; Michael Kohlhase; Kerstin Schill (eds.)
Conference: 29th Annual German Conference on Artificial Intelligence (KI). Bremen, Germany. June 14-17, 2006
Abstract/Description – provided by the publisher
Not available.
Keywords – provided by the publisher
Artificial Intelligence (incl. Robotics); Robotics and Automation
Availability
| Detected institution | Publication year | Browse | Download | Request |
|---|---|---|---|---|
| Not detected | 2007 | SpringerLink | | |
Information
Resource type:
books
Print ISBN
978-3-540-69911-8
Electronic ISBN
978-3-540-69912-5
Publisher
Springer Nature
Country of publication
United Kingdom
Publication date
2007
Publication rights information
© Springer-Verlag Berlin Heidelberg 2007
Table of contents
Finding Models for Blocked 3-SAT Problems in Linear Time by Systematical Refinement of a Sub-model
Gábor Kusper
We report a polynomial-time SAT problem class, the Blocked SAT problem. A blocked clause set, an instance of the Blocked SAT problem, contains only blocked clauses. A clause is blocked (for resolution) if it has a literal on which no resolution is possible in the clause set. We know from the work of O. Kullmann that a blocked clause can be added to or deleted from a clause set without changing its satisfiability. Hence, any blocked clause set is satisfiable, but it is not obvious how to find a satisfying assignment for it. We introduce the Blocked SAT Solver algorithm, which provides a model for Blocked SAT problems in linear time, provided we know at least one blocked literal per clause. Collecting this information takes polynomial time in general. We show that in the case of 3-SAT it can be collected in linear time, which means that the Blocked 3-SAT problem is a linear-time problem. We also discuss how to use blocked clauses when the whole clause set is not blocked.
- Session 4B. Reasoning | Pp. 128-142
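The blocking condition described in the abstract can be made concrete with a short sketch. The following is an illustrative Python check (not the paper's implementation) using the standard tautological-resolvent formulation due to Kullmann: a literal blocks its clause if every resolution on it yields a tautology, and clauses are encoded as lists of signed integers in the usual DIMACS style.

```python
def resolvent_is_tautology(c, d, lit):
    # Resolve clauses c and d on lit (lit in c, -lit in d),
    # then check whether the resolvent contains a complementary pair.
    res = (set(c) - {lit}) | (set(d) - {-lit})
    return any(-x in res for x in res)

def blocked_literals(clause, clause_set):
    """Literals of `clause` on which no (non-tautological) resolution
    is possible in `clause_set` -- i.e., the literals that block it."""
    out = []
    for lit in clause:
        partners = [d for d in clause_set if -lit in d]
        if all(resolvent_is_tautology(clause, d, lit) for d in partners):
            out.append(lit)
    return out

# In {{1, 2}, {-1, -2}} every resolution yields the tautology {2, -2}
# (or {1, -1}), so both literals of the first clause are blocked.
cnf = [[1, 2], [-1, -2]]
print(blocked_literals([1, 2], cnf))  # → [1, 2]
```

A clause set in which every clause has at least one blocked literal is a blocked clause set in the sense above, and hence satisfiable.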
Towards the Computation of Stable Probabilistic Model Semantics
Emad Saad
In [22], a stable model semantics extension of the language of hybrid probabilistic logic programs [21] with non-monotonic negation, normal hybrid probabilistic programs (NHPP), was developed by introducing the notion of stable probabilistic model semantics. It was shown in [22] that the stable probabilistic model semantics is a natural extension of the stable model semantics for normal logic programs, and that the language of normal logic programs is a subset of the language of NHPP. This suggests that efficient algorithms and implementations for computing the stable probabilistic model semantics of NHPP can be developed by extending the efficient algorithms and implementations for computing the stable model semantics of normal logic programs, e.g., SMODELS [17]. In this paper, we present an algorithm, based on the SMODELS [17] algorithms, for computing the stable probabilistic model semantics of NHPP, along with its auxiliary functions. We provide the conditions that these auxiliary functions must satisfy to guarantee the soundness and completeness of the proposed algorithm, and we prove its soundness and completeness. This is the first algorithm developed for computing the stable probabilistic model semantics of hybrid probabilistic logic programs with non-monotonic negation.
- Session 4B. Reasoning | Pp. 143-158
DiaWOz-II – A Tool for Wizard-of-Oz Experiments in Mathematics
Christoph Benzmüller; Helmut Horacek; Ivana Kruijff-Korbayová; Henri Lesourd; Marvin Schiller; Magdalena Wolska
We present DiaWOz-II, a configurable software environment for Wizard-of-Oz studies in mathematics and engineering. Its interface is based on a structural editor which allows the input of complex mathematical formulae. This allows the collection of dialog corpora consisting of natural language interleaved with non-trivial mathematical expressions, which is not offered by other Wizard-of-Oz tools in the field. We illustrate the application of DiaWOz-II in an empirical study on tutorial dialogs about mathematical proofs, summarize our experience with DiaWOz-II and briefly present some preliminary observations on the collected dialogs.
- Session 4B. Reasoning | Pp. 159-173
Applications of Automated Reasoning
Ulrich Furbach; Claudia Obermaier
This paper offers an informal overview and discussion of first-order predicate logic reasoning systems, together with a description of applications carried out in the Artificial Intelligence Research Group at the University of Koblenz. Furthermore, the technique of knowledge compilation is briefly introduced.
- Session 5. Invited Talk | Pp. 174-187
On the Scalability of Description Logic Instance Retrieval
Ralf Möller; Volker Haarslev; Michael Wessel
Although description logic systems can adequately be used for representing and reasoning about incomplete information (e.g., for John we know he is French or Italian), in practical applications it can be assumed that the expressivity of description logics really comes into play only for some tasks, whereas building complete applications often requires effectively solving instance retrieval problems with respect to largely deterministic knowledge. In this paper we present and analyze our main results on how to address this kind of scalability problem. We assume familiarity with description logics in general and tableau provers in particular.
- Session 6A. Ontologies | Pp. 188-201
Relation Instantiation for Ontology Population Using the Web
Viktor de Boer; Maarten van Someren; Bob J. Wielinga
The Semantic Web requires automatic ontology population methods. We developed an approach that, given existing ontologies, extracts instances of ontology relations, a specific subtask of ontology population. We use generic, domain-independent techniques to extract candidate relation instances from the Web and exploit the redundancy of information on the Web to compensate for the loss of precision caused by these generic methods. The candidate relation instances are then ranked based on co-occurrence with a seed set. In an experiment, we extracted instances of the relation between artists and art styles; the results were manually evaluated against selected art resources.
- Session 6A. Ontologies | Pp. 202-213
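The seed-based ranking step in the abstract can be sketched in a few lines. The counts and names below are illustrative stand-ins (not the paper's data or scoring function): each candidate relation instance is scored by how often it co-occurs with known seed instances, as a proxy for Web co-occurrence counts.

```python
def cooccurrence_score(candidate, seeds, hits):
    """Score a candidate relation instance by its total co-occurrence
    count with the seed instances (hypothetical hit counts)."""
    return sum(hits.get((candidate, s), 0) for s in seeds)

seeds = [("Monet", "Impressionism")]  # known-correct relation instances
hits = {  # hypothetical (candidate, seed) co-occurrence counts
    (("Renoir", "Impressionism"), ("Monet", "Impressionism")): 120,
    (("Renoir", "Cubism"), ("Monet", "Impressionism")): 3,
}
candidates = [("Renoir", "Cubism"), ("Renoir", "Impressionism")]
ranked = sorted(candidates,
                key=lambda c: cooccurrence_score(c, seeds, hits),
                reverse=True)
print(ranked[0])  # → ('Renoir', 'Impressionism')
```

The intuition is that a correct artist-style pair tends to appear in the same contexts as the seed pairs far more often than a spurious one.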
GeTS – A Specification Language for Geo-Temporal Notions
Hans Jürgen Ohlbach
This paper contains a brief overview of the ‘Geo-Temporal’ specification language GeTS. The objects which can be described and manipulated with this language are time points, crisp and fuzzy time intervals, and labeled partitionings of the time axis. The partitionings are used to represent periodic temporal notions like months, semesters, etc., and also whole calendar systems. GeTS is essentially a typed functional language with a few imperative constructs and many built-ins. GeTS can be used to specify and compute with many different kinds of temporal notions, from simple arithmetic operations on time points up to complex fuzzy relations between fuzzy time intervals. A parser, a compiler and an abstract machine for GeTS have been implemented.
- Session 6B. Spatio-temporal Reasoning | Pp. 214-228
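The fuzzy time intervals mentioned in the abstract can be pictured as membership functions over the time axis. The following is a minimal Python sketch of that idea only; the names and the piecewise-linear representation are illustrative assumptions, not GeTS syntax or its internal model.

```python
def fuzzy_interval(rise_start, rise_end, fall_start, fall_end):
    """A trapezoidal fuzzy time interval: membership rises linearly
    from 0 to 1, stays at 1, then falls linearly back to 0."""
    def member(t):
        if t < rise_start or t > fall_end:
            return 0.0
        if t < rise_end:
            return (t - rise_start) / (rise_end - rise_start)
        if t <= fall_start:
            return 1.0
        return (fall_end - t) / (fall_end - fall_start)
    return member

# A vague notion like "early afternoon": fully applicable between
# 13:00 and 15:00, fading out at the edges.
early_afternoon = fuzzy_interval(12.0, 13.0, 15.0, 16.0)
print(early_afternoon(14.0))   # → 1.0
print(early_afternoon(12.5))   # → 0.5
```

Crisp intervals are the special case where the rise and fall are instantaneous, and fuzzy relations between such intervals (before, during, overlaps) can then be defined as operations on the membership functions.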
Active Monte Carlo Recognition
Felix von Hundelshausen; Manuela Veloso
In this paper we introduce Active Monte Carlo Recognition (AMCR), a new approach for object recognition. The method is based on seeding and propagating "relational" particles that represent hypothetical relations between low-level perception and high-level object knowledge. AMCR acts as a filter, with each individual step verifying fragments of different objects and the sequence of resulting steps producing the overall recognition. In addition to the object label, AMCR also yields the point correspondences between the input object and the stored object. AMCR does not assume a given segmentation of the input object, and it effectively handles object transformations in scale, translation, rotation, and affine and non-affine distortion. We describe the general AMCR framework in detail, introduce a particular implementation, and present illustrative empirical results.
- Session 6B. Spatio-temporal Reasoning | Pp. 229-243
Cross System Personalization and Collaborative Filtering by Learning Manifold Alignments
Bhaskar Mehta; Thomas Hofmann
Today, personalization in digital libraries and other information systems occurs separately within each system one interacts with. However, such isolated approaches leave room for several improvements. A user's investment in personalizing a system, whether through explicit provision of information or through long and regular use, is not transferable to other systems. Moreover, users have little or no control over the information that defines their profile, since user profiles are deeply buried in personalization engines. Cross-system personalization, i.e., personalization that shares personalization information across different systems in a user-centric way, overcomes these problems. Information about users, originally scattered across multiple systems, is combined to obtain maximum leverage. The key idea is that when a large number of users cross over from one system to another, carrying their user profiles with them, a mapping between the user profiles of the two systems can be discovered. In this paper, we discuss the use of manifold learning for computing recommendations for a new user crossing over from one system to another.
- Session 7A. Machine Learning | Pp. 244-259
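The key idea of the abstract, learning a mapping between two systems' profile spaces from users known to both, can be illustrated with a much-simplified linear stand-in for manifold alignment (the paper's method is more general; the data below is made up).

```python
import numpy as np

# Profiles of users known to BOTH systems (the "crossing" users):
# one row per user, columns are system-specific profile features.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # system 1 profiles
B = np.array([[2.0, 1.0], [0.5, 3.0], [2.5, 4.0]])  # same users, system 2

# Least-squares linear mapping M with A @ M ≈ B, learned from the
# shared users; this plays the role of the alignment between spaces.
M, *_ = np.linalg.lstsq(A, B, rcond=None)

# A new user crossing over from system 1: predict a starting profile
# in system 2 instead of facing it cold.
new_user = np.array([2.0, 0.0])
print(new_user @ M)  # → [4. 2.]
```

With the mapping in hand, system 2 can bootstrap recommendations for the newcomer from the translated profile rather than from an empty one.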
A Partitioning Method for Mixed Feature-Type Symbolic Data Using a Squared Euclidean Distance
Renata Maria Cardoso Rodrigues de Souza; Francisco de Assis Tenorio de Carvalho; Daniel F. Pizzato
A partitioning clustering method for mixed feature-type symbolic data is presented. The method requires a pre-processing step that transforms Boolean symbolic data into modal symbolic data. The dynamic clustering algorithm then takes as input a set of vectors of modal symbolic data (weight distributions) and furnishes a partition and a prototype for each class by optimizing an adequacy criterion based on a suitable squared Euclidean distance. To show the usefulness of this method, examples with synthetic symbolic data sets and applications with real symbolic data sets are considered.
- Session 7A. Machine Learning | Pp. 260-273
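The allocation step of such a dynamic clustering scheme can be sketched briefly. The toy weight distributions below are illustrative, and only the squared-Euclidean assignment step is shown, not the paper's full criterion or prototype update.

```python
def sq_euclid(u, v):
    """Squared Euclidean distance between two weight distributions."""
    return sum((a - b) ** 2 for a, b in zip(u, v))

def assign(objects, prototypes):
    """One allocation step: each object (a modal symbolic datum,
    i.e., a weight distribution) joins its closest prototype."""
    return [min(range(len(prototypes)),
                key=lambda k: sq_euclid(obj, prototypes[k]))
            for obj in objects]

# Toy modal symbolic data: each object is a distribution over
# three categories (e.g., a colour feature recoded as weights).
objs = [[0.9, 0.1, 0.0], [0.1, 0.8, 0.1], [0.8, 0.2, 0.0]]
protos = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
print(assign(objs, protos))  # → [0, 1, 0]
```

A full dynamic clustering algorithm would alternate this allocation step with recomputing each prototype to minimize the adequacy criterion over its class, until the partition stabilizes.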