Publications catalog - books



Journal on Data Semantics II

Stefano Spaccapietra; Elisa Bertino; Sushil Jajodia; Roger King; Dennis McLeod; Maria E. Orlowska; Leon Strous (eds.)

Abstract/Description - provided by the publisher

Not available.

Keywords - provided by the publisher

Not available.

Availability

Detected institution: not detected
Year of publication: 2005
Browse: SpringerLink

Information

Resource type:

books

Printed ISBN

978-3-540-24208-6

Electronic ISBN

978-3-540-30567-5

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© Springer-Verlag Berlin Heidelberg 2005

Table of contents

Incrementally Maintaining Materializations of Ontologies Stored in Logic Databases

Raphael Volz; Steffen Staab; Boris Motik

This article presents a technique to incrementally maintain materializations of ontological entailments. Materialization consists in precomputing and storing a set of implicit entailments, such that frequent and/or crucial queries to the ontology can be solved more efficiently. The central problem that arises with materialization is its maintenance when axioms change, viz. the process of propagating changes in explicit axioms to the stored implicit entailments.

When considering rule-enabled ontology languages that are operationalized in logic databases, we can distinguish two types of changes. Changes to the ontology will typically manifest themselves in changes to the rules of the logic program, whereas changes to facts will typically lead to changes in the extensions of logical predicates. The incremental maintenance of the latter type of changes has been studied extensively in the deductive database context and we apply the technique proposed in [30] for our purpose. The former type of changes has, however, not been tackled before.

In this article we elaborate on our previous papers [32, 33], which extend the approach of [30] to deal with changes in the logic program. Our approach is not limited to a particular ontology language but can be generally applied to arbitrary ontology languages that can be translated to Datalog programs, such as O-Telos, F-Logic [16], RDF(S), or Description Logic Programs [34].
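The fact-level case described above (propagating an inserted fact through the rules rather than recomputing all entailments) can be illustrated with a minimal sketch. The sketch below assumes a toy transitive-closure program; the function name and data shapes are illustrative, not taken from the paper.

```python
# Minimal sketch: maintaining the materialization of the Datalog program
#   path(x, y) :- edge(x, y).
#   path(x, z) :- edge(x, y), path(y, z).
# On a fact insertion, only the delta is propagated (semi-naive style)
# instead of recomputing all entailments from scratch.

def insert_edge(edge, path, new_fact):
    """Add new_fact to edge and propagate its consequences into path."""
    edge.add(new_fact)
    delta = {new_fact}
    while delta:
        path |= delta
        nxt = set()
        for (a, b) in delta:
            # delta fact used as path(a, b) in rule 2: join with edge(x, a)
            for (x, y) in edge:
                if y == a:
                    nxt.add((x, b))
            # delta fact used as edge(a, b) in rule 2: join with path(b, d)
            if (a, b) in edge:
                for (c, d) in path:
                    if c == b:
                        nxt.add((a, d))
        delta = nxt - path          # keep only genuinely new entailments
    return path

edge = {(1, 2), (2, 3)}
path = {(1, 2), (2, 3), (1, 3)}     # current materialization
insert_edge(edge, path, (3, 4))     # incremental update, no recomputation
```

After the call, `path` additionally contains (3, 4), (2, 4) and (1, 4); facts derived earlier are reused rather than rederived.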

- International Conference on Ontologies, DataBases, and Applications of Semantics for Large Scale Information Systems (ODBase 2003) | Pp. 1-34

Ontology Translation on the Semantic Web

Dejing Dou; Drew McDermott; Peishen Qi

Ontologies are a crucial tool for formally specifying the vocabulary and the relationships among concepts used on the Semantic Web. In order to share information, agents that use different vocabularies must be able to translate data from one ontological framework to another. Ontology translation is required when translating datasets, generating ontology extensions, and querying through different ontologies. OntoMerge, an online system for ontology merging and automated reasoning, can implement ontology translation with inputs and outputs in OWL or other web languages. Ontology translation can be thought of in terms of formal inference in a merged ontology. The merge of two related ontologies is obtained by taking the union of the concepts and the axioms defining them, and then adding bridging axioms that relate their concepts. The resulting merged ontology then serves as an inferential medium within which translation can occur. Our internal representation, Web-PDDL, is a strongly typed first-order logic language for web applications. Using a uniform notation for all problems allows us to factor out syntactic and semantic translation problems, and focus on the latter. Syntactic translation is done by an automatic translator between Web-PDDL and OWL or other web languages. Semantic translation is implemented using an inference engine which processes assertions and queries in Web-PDDL syntax, running in either a data-driven (forward chaining) or demand-driven (backward chaining) way.
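The translation-by-inference idea can be pictured with a tiny sketch: bridging axioms rewrite a fact's predicate (and argument order) until the fact is expressed in the target ontology's vocabulary. The predicates, prefixes, and the `BRIDGE` table below are hypothetical, not the paper's actual axioms or Web-PDDL syntax.

```python
# Hypothetical miniature of ontology translation as inference in a merged
# ontology: each bridging axiom maps a predicate of one ontology to a
# predicate of the other, possibly reordering arguments.

BRIDGE = {
    "onto1:authorOf":  ("onto2:hasAuthor", (1, 0)),   # swap subject/object
    "onto2:hasAuthor": ("onto1:authorOf", (1, 0)),
}

def translate(fact, target_prefix):
    """Forward-chain one fact until its predicate is in the target vocabulary."""
    pred, args = fact
    while not pred.startswith(target_prefix):
        new_pred, order = BRIDGE[pred]
        args = tuple(args[i] for i in order)
        pred = new_pred
    return (pred, args)

translate(("onto1:authorOf", ("McDermott", "paper42")), "onto2:")
# -> ("onto2:hasAuthor", ("paper42", "McDermott"))
```

A real system chains many such axioms, which is why the merged ontology serves as the inferential medium rather than a fixed one-step mapping.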

- International Conference on Ontologies, DataBases, and Applications of Semantics for Large Scale Information Systems (ODBase 2003) | Pp. 35-57

Compound Term Composition Algebra: The Semantics

Yannis Tzitzikas; Anastasia Analyti; Nicolas Spyratos

The Compound Term Composition Algebra (CTCA) is an algebra with four algebraic operators, whose composition can be used to specify the meaningful (valid) compound terms (conjunctions of terms) in a given faceted taxonomy in an efficient and flexible manner. The “positive” operations allow the derivation of valid compound terms through the declaration of a small set of valid compound terms. The “negative” operations allow the derivation of valid compound terms through the declaration of a small set of invalid compound terms. In this paper, we formally define the model-theoretic semantics of the operations and the closed-world assumptions adopted in each operation. We prove that CTCA is monotonic with respect to both valid and invalid compound terms, meaning that the valid and invalid compound terms of a subexpression are not invalidated by a larger expression. We show that CTCA cannot be directly represented in Description Logics. However, we show how we could design a metasystem on top of Description Logics in order to implement this algebra.

- International Conference on Ontologies, DataBases, and Applications of Semantics for Large Scale Information Systems (ODBase 2003) | Pp. 58-84

Dynamic Pattern Mining: An Incremental Data Clustering Approach

Seokkyung Chung; Dennis McLeod

We propose a mining framework that supports the identification of useful patterns based on incremental data clustering. Given the popularity of Web news services, we focus our attention on news stream mining. News articles are retrieved from Web news services, and processed by data mining tools to produce useful higher-level knowledge, which is stored in a content description database. Instead of interacting with a Web news service directly, an information delivery agent can exploit the knowledge in the database to present an answer in response to a user request. A key challenge in news repository management is the high rate of document insertion. To address this problem, we present a sophisticated incremental hierarchical document clustering algorithm using a neighborhood search. The novelty of the proposed algorithm is its ability to identify meaningful patterns (e.g., news events and news topics) while reducing the amount of computation by maintaining the cluster structure incrementally. In addition, to overcome the lack of topical relations in conceptual ontologies, we propose a topic ontology learning framework that utilizes the obtained document hierarchy. Experimental results demonstrate that the proposed clustering algorithm produces high-quality clusters, and that the topic ontology provides interpretations of news topics at different levels of abstraction.
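The incremental-insertion idea can be conveyed with a generic single-pass sketch: each arriving document joins its most similar existing cluster if the similarity clears a threshold, and otherwise starts a new cluster. This is a simplified illustration of incremental clustering in general, not the authors' neighborhood-search algorithm; the threshold value is arbitrary.

```python
# Generic single-pass incremental clustering over sparse term-frequency
# vectors (dicts). New documents never trigger a full re-clustering.

def cosine(a, b):
    """Cosine similarity of two sparse term-frequency vectors."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = sum(v * v for v in a.values()) ** 0.5
    nb = sum(v * v for v in b.values()) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def insert_doc(clusters, doc, threshold=0.3):
    """Place doc into the most similar cluster, or open a new one."""
    best, best_sim = None, 0.0
    for c in clusters:
        sim = max(cosine(doc, member) for member in c)
        if sim > best_sim:
            best, best_sim = c, sim
    if best is not None and best_sim >= threshold:
        best.append(doc)
    else:
        clusters.append([doc])
    return clusters

clusters = []
insert_doc(clusters, {"election": 2, "vote": 1})
insert_doc(clusters, {"election": 1, "ballot": 2})   # joins the first cluster
insert_doc(clusters, {"soccer": 3})                  # opens a new cluster
```

A hierarchical variant would additionally merge or split clusters as they grow, yielding the event/topic levels the abstract mentions.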

- International Conference on Ontologies, DataBases, and Applications of Semantics for Large Scale Information Systems (ODBase 2003) | Pp. 85-112

A Knowledge Network Approach for Implementing Active Virtual Marketplaces

Minsoo Lee; Stanley Y. W. Su; Herman Lam

The current Web technology is suitable neither for representing knowledge nor for sharing it among organizations over the Web. There is a rapidly increasing need for exchanging and linking knowledge over the Web, especially when several sellers and buyers come together on the Web to form a virtual marketplace. Virtual marketplaces are increasingly required to become more intelligent and active, leading to an active virtual marketplace concept. This paper explains an infrastructure called the knowledge network that enables sharing of knowledge over the Web and thus effectively supports the formation of virtual marketplaces on the Web. The concept of an active virtual marketplace can be realized using this infrastructure by allowing buyers and sellers to effectively specify their knowledge in the form of events, triggers, and rules. The knowledge network can actively distribute and process these knowledge elements to help buyers and sellers easily find each other. An example active virtual marketplace application has been developed using the knowledge network.
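The event/trigger/rule combination can be sketched as a small publish-and-react loop: a posted event fires the triggers subscribed to it, and each rule's condition/action pair runs against the event payload. The class and method names here are illustrative, not the paper's API.

```python
# Hedged sketch of event-trigger-rule processing in an active marketplace:
# buyers register standing rules; seller events fire matching rules.

class KnowledgeNetwork:
    def __init__(self):
        self.triggers = {}            # event name -> list of (condition, action)

    def add_rule(self, event, condition, action):
        self.triggers.setdefault(event, []).append((condition, action))

    def post_event(self, event, payload):
        """Fire every rule whose trigger matches the event and condition holds."""
        fired = []
        for condition, action in self.triggers.get(event, []):
            if condition(payload):
                fired.append(action(payload))
        return fired

net = KnowledgeNetwork()
# a buyer's standing rule: notify me when a matching offer appears
net.add_rule(
    "new_offer",
    condition=lambda o: o["item"] == "steel" and o["price"] <= 500,
    action=lambda o: f"notify buyer: {o['seller']} offers steel at {o['price']}",
)
net.post_event("new_offer", {"item": "steel", "price": 450, "seller": "ACME"})
```

Distributing such rules across sites, rather than evaluating them centrally, is what makes the marketplace "active" in the paper's sense.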

- International Conference on Cooperative Information Systems (CoopIS 2003) | Pp. 113-135

Stream Integration Techniques for Grid Monitoring

Andy Cooke; Alasdair J. G. Gray; Werner Nutt

Grids are distributed systems that provide access to computational resources in a transparent fashion. Providing information about the status of the Grid itself is called Grid monitoring. As an approach to this problem, we present the Relational Grid Monitoring Architecture (R-GMA), which tackles Grid monitoring as an information integration problem.

A novel feature of R-GMA is its support for integrating stream data via a simple “local as view” approach. We describe the infrastructure that R-GMA provides for publishing and querying monitoring data. In this context, we discuss the semantics of continuous queries, provide characterisations of query plans, and present an algorithm for computing such plans.

The concepts and mechanisms offered by R-GMA are general and can be applied in other areas where there is a need for publishing and querying information in a distributed fashion.
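The essence of a continuous query, as opposed to a one-shot query, can be shown with a minimal broker: a consumer registers a predicate once, and every tuple subsequently published is pushed to all consumers whose predicate it satisfies. This is a sketch of the general idea only, not R-GMA's actual interfaces or its "local as view" plan computation.

```python
# Minimal continuous-query broker over a stream of monitoring tuples.

class StreamBroker:
    def __init__(self):
        self.queries = []             # list of (predicate, results list)

    def continuous_query(self, predicate):
        """Register a standing query; its result list grows over time."""
        results = []
        self.queries.append((predicate, results))
        return results

    def publish(self, tup):
        """Producers push tuples; matching standing queries receive them."""
        for predicate, results in self.queries:
            if predicate(tup):
                results.append(tup)

broker = StreamBroker()
busy = broker.continuous_query(lambda t: t["load"] > 0.9)   # running query
broker.publish({"site": "ral", "load": 0.95})
broker.publish({"site": "cern", "load": 0.40})
broker.publish({"site": "ral", "load": 0.97})
# busy now holds the two high-load tuples, and keeps growing as data arrives
```

Query planning in a real system additionally decides which distributed producers can answer a query at all; here a single broker stands in for that machinery.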

- International Conference on Cooperative Information Systems (CoopIS 2003) | Pp. 136-175

Information Release Control: A Learning-Based Architecture

Claudio Bettini; X. Sean Wang; Sushil Jajodia

Modern information system applications involve collaboration in the form of information flow across organization boundaries. Indeed, organizations hold vast amounts of information that is shared with other organizations, and even the general public, for various purposes. In addition to standard network-level protections, systems usually use access control mechanisms to protect data. However, access control systems are not designed to deal with the deliberate or accidental release of information that a user is authorized to access but that is not supposed to be released. Moreover, effective access control assumes a perfect categorization of information, which is increasingly difficult in a complex information system. Information release control is viewed as complementary to access control, and aims at restricting the outgoing information flow at the boundary of information systems. This paper presents a general architectural view of a release control system, and discusses the integration into the proposed architecture of a module for learning release control constraints. Continuous learning is applied to adjust the release control constraints in order to reduce the number of both mistakenly released and mistakenly restricted documents. The paper describes in detail the process of learning keyword-based release control constraints.
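The learning loop can be pictured with a toy version of keyword-based release control: each sensitive keyword carries a weight, a document is blocked when its score passes a threshold, and feedback on a mistaken decision nudges the weights. The perceptron-style update rule, keyword names, and parameters below are assumptions for illustration, not the paper's actual learning procedure.

```python
# Toy keyword-based release control with feedback-driven weight adjustment.

def score(doc_terms, weights):
    return sum(weights.get(t, 0.0) for t in doc_terms)

def blocked(doc_terms, weights, threshold=1.0):
    """Restrict the document when its sensitivity score reaches the threshold."""
    return score(doc_terms, weights) >= threshold

def feedback(doc_terms, weights, should_block, lr=0.5, threshold=1.0):
    """On a wrong decision, shift the weights of the document's keywords."""
    decided = blocked(doc_terms, weights, threshold)
    if decided != should_block:
        delta = lr if should_block else -lr
        for t in doc_terms:
            if t in weights:
                weights[t] += delta
    return weights

weights = {"merger": 0.4, "payroll": 0.4}
doc = {"merger", "payroll", "meeting"}
blocked(doc, weights)                      # score 0.8 < 1.0: mistakenly released
feedback(doc, weights, should_block=True)  # both keyword weights rise to 0.9
blocked(doc, weights)                      # score 1.8 >= 1.0: now restricted
```

Symmetric negative feedback on mistakenly restricted documents pulls weights back down, which is the two-sided error reduction the abstract describes.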

- 6th IFIP TC 11 WG 11.5 Working Conference on Integrity and Internal Control in Information Systems (IICIS 2003) | Pp. 176-198

Enforcing Semantics-Aware Security in Multimedia Surveillance

Naren Kodali; Csilla Farkas; Duminda Wijesekera

Continuous audio-visual surveillance is utilized to ensure the physical safety of critical infrastructures such as airports, nuclear power plants and national laboratories. In order to do so, traditional surveillance systems place cameras, microphones and other sensory input devices in appropriate locations [Sch99]. These facilities are arranged in a hierarchy of physical zones reflecting the secrecy of the guarded information. Guards in these facilities carry clearances that permit them access only to appropriate zones of the hierarchy, and they monitor the facilities by using devices such as hand-held displays that receive streaming media of the guarded zones, possibly with some instructions. The main security constraint applicable to this model is that any guard can see streams emanating from locations with secrecy levels equal to or lower than their own, but not higher. We show how to model these surveillance requirements using the synchronized multimedia integration language (SMIL) [Aya01] with appropriate security enhancements. Our solution consists of imposing a multi-level security model on SMIL documents to specify surveillance requirements. Our access control model ensures that a multimedia stream can only be displayed on a device if the security clearance of the display device dominates the security clearance of the monitored zone. Additionally, we pre-process a set of cover stories that can be released during emergency situations, allowing the services of guards with lower clearances to be used without disclosing data with higher sensitivity levels. For this, we create a view for each level, and show that these views are semantically coherent and comply with the specified security policies.
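The dominance constraint at the heart of the model is simple to state in code: a zone's stream may be rendered on a display only if the display's clearance is at least as high as the zone's secrecy level. The level names and zone labels below are illustrative, assuming a totally ordered lattice for simplicity.

```python
# Minimal multi-level dominance check for surveillance streams.

LEVELS = {"public": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def dominates(clearance, level):
    """True if the clearance is equal to or higher than the secrecy level."""
    return LEVELS[clearance] >= LEVELS[level]

def viewable_streams(clearance, zone_levels):
    """Zones whose streams a display with this clearance may render."""
    return [z for z, lvl in zone_levels.items() if dominates(clearance, lvl)]

zones = {"lobby": "public", "control_room": "secret", "reactor": "top_secret"}
viewable_streams("secret", zones)      # ['lobby', 'control_room']
```

The per-level views and cover stories in the paper refine this check: instead of simply dropping non-dominated streams, each clearance level receives a coherent substitute document.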

- 6th IFIP TC 11 WG 11.5 Working Conference on Integrity and Internal Control in Information Systems (IICIS 2003) | Pp. 199-221