Publications catalog - books



Web Information Systems Engineering: WISE 2005: 6th International Conference on Web Information Systems Engineering, New York, NY, USA, November 20-22, 2005, Proceedings

Anne H. H. Ngu ; Masaru Kitsuregawa ; Erich J. Neuhold ; Jen-Yao Chung ; Quan Z. Sheng (eds.)

In conference: 6th International Conference on Web Information Systems Engineering (WISE), New York, NY, USA, November 20-22, 2005

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Popular Computer Science; Information Systems Applications (incl. Internet); Information Storage and Retrieval; Database Management; Artificial Intelligence (incl. Robotics); Computers and Society

Availability

Institution detected: Not detected
Year of publication: 2005
Browse at: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-30017-5

Electronic ISBN

978-3-540-32286-3

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© Springer-Verlag Berlin Heidelberg 2005

Table of contents

Handling Transactional Properties in Web Service Composition

Marie-Christine Fauvet; Helga Duarte; Marlon Dumas; Boualem Benatallah

The development of new services by composition of existing ones has gained considerable momentum as a means of integrating heterogeneous applications and realising business collaborations. Services that enter into compositions with other services may have transactional properties, especially those in the broad area of resource management (e.g. booking services). These transactional properties may be exploited in order to derive composite services which themselves exhibit certain transactional properties. This paper presents a model for composing services that expose transactional properties and, more specifically, services that support tentative holds and/or atomic execution. The proposed model is based on a high-level service composition operator that produces composite services satisfying specified atomicity constraints. The model supports selecting the services that enter into a composition at runtime, depending on their ability to provide resource reservations at a given point in time and taking into account user preferences.

Keywords: Composition Operator; Service Composition; Component Service; Service Type; Composite Service.

- Web Service Structure | Pp. 273-289
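
The interplay between tentative holds and atomicity constraints described above can be illustrated with a small, self-contained sketch. The `BookingService` class, the `compose_atomically` helper and the capacity values are hypothetical stand-ins, not the composition operator defined in the paper: every component service is asked for a tentative hold, and the composite either confirms all holds or releases the ones already granted.

```python
from dataclasses import dataclass, field


@dataclass
class BookingService:
    """Toy component service supporting tentative holds and atomic confirm/release."""
    name: str
    capacity: int
    holds: set = field(default_factory=set)

    def tentative_hold(self, request_id: str) -> bool:
        if len(self.holds) < self.capacity:
            self.holds.add(request_id)
            return True
        return False

    def release(self, request_id: str) -> None:
        self.holds.discard(request_id)

    def confirm(self, request_id: str) -> None:
        # In a real service this would turn the hold into a firm booking.
        self.holds.discard(request_id)


def compose_atomically(services, request_id: str) -> bool:
    """All-or-nothing composition: confirm only if every service grants a hold."""
    granted = []
    for svc in services:
        if svc.tentative_hold(request_id):
            granted.append(svc)
        else:
            for held in granted:          # atomicity constraint violated:
                held.release(request_id)  # release every hold obtained so far
            return False
    for svc in granted:
        svc.confirm(request_id)
    return True


if __name__ == "__main__":
    flight = BookingService("flight", capacity=1)
    hotel = BookingService("hotel", capacity=0)   # will refuse the hold
    print(compose_atomically([flight, hotel], "req-42"))  # False: flight hold released
```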

XFlow: An XML-Based Document-Centric Workflow

Andrea Marchetti; Maurizio Tesconi; Salvatore Minutoli

This paper investigates a framework that allows the definition of workflows for collaborative document procedures. In this framework, called XFlow and largely based on the XSLT processing model, workflows are described by means of a new XML application called XFlowML (XFlow Markup Language). XFlowML describes the document workflow using an agent-based approach. Each agent can participate in the workflow with one or more roles, defined as XPath expressions over a hierarchical role chart. An XFlowML document contains as many templates as there are agent roles participating in the workflow. The document workflow engine provides the run-time execution support for document processing by implementing the XFlowML constructs. A prototype of XFlow has been implemented with extensive use of XML technologies (XSLT, XPath, XForms, SVG) and open-source tools (Cocoon, Tomcat, MySQL).

Keywords: External Agent; Agent Role; XPath Expression; Document Instance; Document Flow.

- Collaborative Methodology | Pp. 290-303
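
A toy illustration of defining agent roles as XPath expressions over a hierarchical role chart follows. The role-chart markup and the XPath strings are invented for the example and do not reproduce XFlowML's actual schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical role chart; XFlowML's real schema is not reproduced here.
ROLE_CHART = ET.fromstring("""
<roles>
  <editorial>
    <agent name="alice" role="editor"/>
    <agent name="bob" role="reviewer"/>
  </editorial>
  <administration>
    <agent name="carol" role="publisher"/>
  </administration>
</roles>
""")

# A role, as in XFlowML, is expressed as an XPath expression over the role chart.
ROLE_XPATHS = {
    "reviewer": ".//editorial/agent[@role='reviewer']",
    "any-editorial": ".//editorial/agent",
}


def agents_for_role(role_name: str) -> list[str]:
    """Resolve a role definition (an XPath expression) to the matching agents."""
    return [a.get("name") for a in ROLE_CHART.findall(ROLE_XPATHS[role_name])]


print(agents_for_role("reviewer"))       # ['bob']
print(agents_for_role("any-editorial"))  # ['alice', 'bob']
```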

Optimization of XSLT by Compact Specialization and Combination

Ce Dong; James Bailey

In recent times, there has been an increased utilization of server-side XSLT systems as part of e-commerce and e-publishing applications. For the high volumes of data in these applications, effective optimization techniques for XSLT are particularly important. In this paper, we propose two new optimization approaches, Specialization Combination and Specialization Set Compaction, to help improve performance. We describe rules for combining specialized XSLT stylesheets and provide methods for generating a more compact specialization set. An experimental evaluation of our methods is undertaken, where we show our methods to be particularly effective for cases with very large XML input and different varieties of user queries.

Keywords: Specialization Combination; Query Term; User Query; Compact Specialization; Distinct Query.

- Collaborative Methodology | Pp. 304-317

Extracting Web Data Using Instance-Based Learning

Yanhong Zhai; Bing Liu

This paper studies structured data extraction from Web pages, e.g., online product description pages. Existing approaches to data extraction include wrapper induction and automatic methods. In this paper, we propose an instance-based learning method, which performs extraction by comparing each new instance (or page) to be extracted with labeled instances (or pages). The key advantage of our method is that it does not need an initial set of labeled pages to learn extraction rules as in wrapper induction. Instead, the algorithm is able to start extraction from a single labeled instance (or page). Only when a new page cannot be extracted does the page need labeling. This avoids unnecessary page labeling and thus addresses a major problem with inductive learning (or wrapper induction): the set of labeled pages may not be representative of all other pages. The instance-based approach is natural because structured data on the Web usually follow fixed templates, and pages of the same template can usually be extracted using a single page instance of the template. The key issue is the similarity or distance measure. Traditional measures based on the Euclidean distance or text similarity are not easily applicable in this context because items to be extracted from different pages can be entirely different. This paper proposes a novel similarity measure for the purpose, which is suitable for templated Web pages. Experimental results with product data extraction from 1200 pages in 24 diverse Web sites show that the approach is surprisingly effective. It significantly outperforms existing state-of-the-art systems.

Keywords: Target Item; Inductive Learning; Semistructured Data; Comparative Shopping; Item Price.

- Collaborative Methodology | Pp. 318-331
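
As a rough illustration of extracting from a single labeled instance, the sketch below learns the tokens immediately surrounding each labeled item and reuses them as landmarks on a new page of the same template. The tokenizer, the landmark choice and the toy pages are assumptions made for the example; the paper's actual similarity measure is more elaborate.

```python
import re


def tokenize(html: str) -> list[str]:
    """Crude tokenization into tags and words; just for illustration."""
    return re.findall(r"<[^>]+>|[^<\s]+", html)


# One labeled instance (hypothetical) and the items marked on it.
labeled_page = '<html><b>Name:</b> iPod <b>Price:</b> $299 </html>'
labels = {"name": "iPod", "price": "$299"}


def learn_landmarks(page: str, labels: dict) -> dict:
    """Remember the token just before and just after each labeled item."""
    toks = tokenize(page)
    landmarks = {}
    for item, value in labels.items():
        i = toks.index(value)
        landmarks[item] = (toks[i - 1], toks[i + 1])
    return landmarks


def extract(page: str, landmarks: dict) -> dict:
    """Extract from a new page of the same template by matching the landmarks."""
    toks = tokenize(page)
    result = {}
    for item, (before, after) in landmarks.items():
        for i in range(1, len(toks) - 1):
            if toks[i - 1] == before and toks[i + 1] == after:
                result[item] = toks[i]
                break
    return result


lm = learn_landmarks(labeled_page, labels)
new_page = '<html><b>Name:</b> Walkman <b>Price:</b> $99 </html>'
print(extract(new_page, lm))   # {'name': 'Walkman', 'price': '$99'}
```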

PRoBe: Multi-dimensional Range Queries in P2P Networks

O. D. Sahin; S. Antony; D. Agrawal; A. El Abbadi

Structured P2P systems are effective for exact key searches in a distributed environment as they offer scalability, self-organization, and dynamicity. These valuable properties also make them a candidate for more complex queries, such as range queries. In this paper, we describe PRoBe, a system that supports range queries over multiple attributes in P2P networks. PRoBe uses a multi-dimensional logical space for this purpose and maps data items onto this space based on their attribute values. The logical space is divided into hyper-rectangles, each maintained by a peer in the system. Range queries also correspond to hyper-rectangles and are answered by forwarding the query to the peers responsible for overlapping regions of the logical space. We also propose load balancing techniques and show how cached query answers can be utilized for the efficient evaluation of similar range queries. The performance of PRoBe and the effects of various parameters are analyzed through a simulation study.

Keywords: Load Balance; Data Item; Range Query; Logical Space; Hilbert Curve.

- P2P, Ubiquitous and Mobile | Pp. 332-346
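
The core routing step, forwarding a range query to every peer whose hyper-rectangular zone overlaps the query region, can be sketched as follows. The `Peer` class and the fixed 2-D partitioning are illustrative assumptions; PRoBe assigns and rebalances zones dynamically.

```python
from dataclasses import dataclass


@dataclass
class Peer:
    """A peer owning one hyper-rectangular zone of the logical space."""
    name: str
    lo: tuple   # lower corner, one value per dimension
    hi: tuple   # upper corner


def overlaps(lo1, hi1, lo2, hi2) -> bool:
    """Two hyper-rectangles overlap iff their intervals overlap in every dimension."""
    return all(a <= d and c <= b for a, b, c, d in zip(lo1, hi1, lo2, hi2))


def route_range_query(peers, q_lo, q_hi):
    """A range query is itself a hyper-rectangle; forward it to every peer
    whose zone overlaps the query region."""
    return [p.name for p in peers if overlaps(p.lo, p.hi, q_lo, q_hi)]


# Toy 2-D logical space split among three peers.
peers = [
    Peer("p1", (0, 0), (50, 100)),
    Peer("p2", (50, 0), (100, 50)),
    Peer("p3", (50, 50), (100, 100)),
]
print(route_range_query(peers, (40, 40), (60, 60)))   # ['p1', 'p2', 'p3']
```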

An Infrastructure for Reactive Information Environments

Rudi Belotti; Corsin Decurtins; Michael Grossniklaus; Moira C. Norrie

We introduce the concept of reactive information environments and a general infrastructure for experimentation with such systems. Its asynchronous state-based processing model is described along with the architectural requirements and main components of our infrastructure. These include a general context engine coupled together with a web publishing platform. An application for a public news service is used to motivate the requirements, explain the processing model and show how an application is implemented using the platform.

Keywords: Resource Description Framework; Server Side; Client Side; Context Element; Application Database.

- P2P, Ubiquitous and Mobile | Pp. 347-360
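
A minimal sketch of a state-based context engine coupled to a publishing component is given below. The class and the `audience.size` context element are hypothetical and only illustrate the subscribe/notify style of processing, not the infrastructure described in the paper.

```python
from typing import Callable, Dict, List


class ContextEngine:
    """Minimal state-based context engine: components register interest in
    context elements and are notified whenever an element actually changes."""

    def __init__(self) -> None:
        self._state: Dict[str, object] = {}
        self._observers: Dict[str, List[Callable]] = {}

    def subscribe(self, element: str, callback: Callable) -> None:
        self._observers.setdefault(element, []).append(callback)

    def update(self, element: str, value: object) -> None:
        if self._state.get(element) != value:      # only react to real changes
            self._state[element] = value
            for cb in self._observers.get(element, []):
                cb(element, value)


def publishing_platform(element: str, value: object) -> None:
    # Stand-in for the web publishing side: re-render pages that depend on context.
    print(f"re-publishing pages affected by {element} = {value}")


engine = ContextEngine()
engine.subscribe("audience.size", publishing_platform)
engine.update("audience.size", 12)   # triggers re-publication
engine.update("audience.size", 12)   # unchanged state, no reaction
```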

LoT-RBAC: A Location and Time-Based RBAC Model

Suroop Mohan Chandran; J. B. D. Joshi

Recent growth in location-based mobile services has introduced a significant need for location- and time-based access control to resources. The high mobility of users and services in emerging mobile applications in particular makes the issue of controlling who can access what information and resources from which locations a daunting challenge. Several RBAC-based models have been proposed that attempt to capture the location-based and/or time-based access control requirements of various applications. However, they have limited flexibility and granularity. In this paper, we propose a Location and Time-based RBAC (LoT-RBAC) model to address the access control requirements of highly mobile, dynamic environments and provide both location- and time-based control.

Keywords: location based access; role based access; temporal constraint.

- P2P, Ubiquitous and Mobile | Pp. 361-375
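
The basic intent of location- and time-based role activation can be shown with a short sketch. The `ward_nurse` role, its location set and its time window are invented for the example and are far simpler than the LoT-RBAC constraint language.

```python
from datetime import datetime, time


def role_enabled(role: dict, user_location: str, now: datetime) -> bool:
    """A role may be activated only if the user's current location and the
    current time satisfy the constraints attached to the role."""
    in_location = user_location in role["allowed_locations"]
    start, end = role["allowed_hours"]
    in_time = start <= now.time() <= end
    return in_location and in_time


# Hypothetical role definition with one location and one time constraint.
ward_nurse = {
    "allowed_locations": {"hospital.ward3", "hospital.pharmacy"},
    "allowed_hours": (time(8, 0), time(20, 0)),
}

print(role_enabled(ward_nurse, "hospital.ward3",
                   datetime(2005, 11, 21, 14, 30)))   # True
print(role_enabled(ward_nurse, "cafe.downtown",
                   datetime(2005, 11, 21, 14, 30)))   # False: wrong location
```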

Document Re-ranking by Generality in Bio-medical Information Retrieval

Xin Yan; Xue Li; Dawei Song

Document ranking is an important process in information retrieval (IR). It presents retrieved documents in an order of their estimated degrees of relevance to the query. Traditional document ranking methods are mostly based on similarity computations between documents and the query. In this paper we argue that similarity-based document ranking is insufficient in some cases, for two reasons. The first is the increased variety of information: there are far too many different types of documents available now for users to search. The second is the variety of users: in many cases a user may want to retrieve documents that are not only similar but also general or broad regarding a certain topic. This is particularly the case in some domains such as bio-medical IR. In this paper we propose a novel approach to re-rank the retrieved documents by combining similarity with their generality. Through an ontology-based analysis of the semantic cohesion of text, document generality can be quantified. The retrieved documents are then re-ranked by their combined scores of similarity and the closeness of the documents' generality to the query's. Our experiments have shown an encouraging performance on a large bio-medical document collection, OHSUMED, containing 348,566 medical journal references and 101 test queries.

Keywords: Generality; Relevance; Document Ranking.

- Document Retrieval Applications | Pp. 376-389
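
A hedged sketch of the re-ranking step: each document's final score combines its original similarity with how close its generality is to the query's generality. The linear combination, the `alpha` weight and the toy scores are assumptions for illustration, not the formula used in the paper.

```python
def rerank(docs, query_generality: float, alpha: float = 0.5):
    """Re-rank retrieved documents by a combined score of the original
    similarity and the closeness of each document's generality to the query's."""
    def combined(doc):
        closeness = 1.0 - abs(doc["generality"] - query_generality)
        return alpha * doc["similarity"] + (1.0 - alpha) * closeness
    return sorted(docs, key=combined, reverse=True)


docs = [
    {"id": "d1", "similarity": 0.90, "generality": 0.20},  # similar but narrow
    {"id": "d2", "similarity": 0.80, "generality": 0.75},  # less similar, broad
]
for d in rerank(docs, query_generality=0.8):
    print(d["id"])   # d2 first: generality closeness outweighs the similarity gap
```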

Representing and Reasoning About Privacy Abstractions

Yin Hua Li; Salima Benbernou

The emerging next generation Web technologies offer tremendous opportunities for automating information management in a variety of application domains including office tasks, travel, and digital government. One of the main challenges facing effective automation is privacy. Verifying the correct usage of collected personal data is a major concern for both individuals and organizations. In this paper, we present a framework for reasoning about privacy models, including providers' privacy policies and users' privacy preferences. More specifically, we use a Description Logic (DL) based notation to specify privacy abstractions. We provide a formalization of matching a user's privacy preferences against a provider's privacy policies using DLs' reasoning mechanisms. We have implemented a Privacy Match Engine (PME) which is based on RACER.

Keywords: Privacy Policy; Description Logic; Reasoning Mechanism; Concept Description; Privacy Preference.

- Document Retrieval Applications | Pp. 390-403
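
As a very rough stand-in for DL-based matching, the sketch below checks that a provider policy stays within a user's preferences by set containment on collected items and purposes plus a bound on retention. The dictionaries and field names are invented for the example; this is not the PME's RACER-based reasoning.

```python
def policy_matches_preference(policy: dict, preference: dict) -> bool:
    """The policy is acceptable when every item it collects, every purpose it
    serves and its retention period stay within what the user allows."""
    return (set(policy["collects"]) <= set(preference["allows_collection"])
            and set(policy["purposes"]) <= set(preference["allows_purposes"])
            and policy["retention_days"] <= preference["max_retention_days"])


provider_policy = {
    "collects": {"email", "postcode"},
    "purposes": {"billing"},
    "retention_days": 90,
}
user_preference = {
    "allows_collection": {"email", "postcode", "name"},
    "allows_purposes": {"billing", "support"},
    "max_retention_days": 180,
}
print(policy_matches_preference(provider_policy, user_preference))   # True
```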

Conceptual Query Refinement: The Basic Model

Nenad Stojanovic

In this paper we present a novel approach for the refinement of Boolean queries by using ontologies. We introduce a conceptual model for defining users' queries, which enables the disambiguation (and consequently the refinement) of a query to be performed at the level of the query's meaning. In that way the refinement process results in a set of meaningful, conceptual extensions of the initial query. Moreover, since a query is represented as a set of logic formulas, the query refinement process can be modeled as an inference process. This opens up a palette of additional services that can enrich the query refinement process, such as cooperative answering.

- Document Retrieval Applications | Pp. 404-417
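
The flavour of ontology-driven refinement can be illustrated with a small sketch that replaces a query term by each of its subconcepts. The toy ontology and the purely syntactic replacement are assumptions made for the example rather than the inference-based process described in the paper.

```python
# Tiny hand-made ontology: concept -> direct subconcepts.
ONTOLOGY = {
    "vehicle": ["car", "bicycle"],
    "car": ["electric car", "sports car"],
}


def refinements(query_terms: list[str]) -> list[list[str]]:
    """Generate conceptual refinements of a query: one new query per subconcept
    of each query term that the ontology knows about."""
    refined = []
    for i, term in enumerate(query_terms):
        for sub in ONTOLOGY.get(term, []):
            refined.append(query_terms[:i] + [sub] + query_terms[i + 1:])
    return refined


print(refinements(["cheap", "vehicle"]))
# [['cheap', 'car'], ['cheap', 'bicycle']]
```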