Publications catalog - books



Advanced Distributed Systems: 5th International School and Symposium, ISSADS 2005, Guadalajara, Mexico, January 24-28, 2005, Revised Selected Papers

Félix F. Ramos ; Victor Larios Rosillo ; Herwig Unger (eds.)

In conference: 5th International Symposium and School on Advanced Distributed Systems (ISSADS). Guadalajara, Mexico. January 24, 2005 - January 28, 2005

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Theory of Computation; Computer Communication Networks; Software Engineering; Operating Systems; Information Storage and Retrieval; Information Systems Applications (incl. Internet)

Availability

Detected institution | Publication year | Browse | Download | Request
Not detected | 2005 | SpringerLink | |

Information

Resource type:

books

Print ISBN

978-3-540-28063-7

Electronic ISBN

978-3-540-31674-9

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© Springer-Verlag Berlin Heidelberg 2005

Table of contents

Injecting Communication Faults to Experimentally Validate Java Distributed Applications

Gabriela Jacques-Silva; Roberto Jung Drebes; Taisy Silva Weber; Eliane Martins

The use of Java to develop highly available network services and the inherent unreliability of communication systems require validating its fault tolerance mechanisms to avoid unexpected behavior during execution. Tools for testing distributed programs that must remain operational in a faulty environment are uncommon and generally difficult to operate. One possible solution is to apply a fault injector that emulates communication faults and tests the behavior of the distributed application under these faults. We developed Jaca.net, an extension of Jaca, a fault injector based on computational reflection, that adds UDP communication faults to its fault model. Potential target applications of Jaca and Jaca.net are those developed in Java. Using Jaca.net, software developers can complete the test cycle by validating the high-level fault detection and recovery mechanisms built into their applications.

Keywords: Transmission Control Protocol; Fault Model; Network Application; Fault Injection; Java Virtual Machine.

- Fault Tolerance | Pp. 235-245
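The communication-fault emulation described in the abstract can be illustrated with a minimal sketch: a wrapper that probabilistically drops outgoing UDP datagrams. The class and parameter names below are hypothetical and do not reflect Jaca.net's actual API.

```python
import random
import socket

class FaultySocket:
    """Illustrative wrapper that emulates UDP communication faults by
    probabilistically dropping outgoing datagrams (names are hypothetical,
    not Jaca.net's API)."""

    def __init__(self, sock, drop_rate=0.3, seed=None):
        self.sock = sock
        self.drop_rate = drop_rate
        self.rng = random.Random(seed)
        self.dropped = 0
        self.sent = 0

    def sendto(self, data, addr):
        # Emulate a message-omission fault: silently discard the datagram.
        if self.rng.random() < self.drop_rate:
            self.dropped += 1
            return len(data)  # the caller still sees a "successful" send
        self.sent += 1
        return self.sock.sendto(data, addr)

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    faulty = FaultySocket(sock, drop_rate=0.5, seed=42)
    for i in range(100):
        faulty.sendto(b"ping", ("127.0.0.1", 9999))
    print(faulty.sent, faulty.dropped)  # roughly half of the sends dropped
    sock.close()
```

A high-level recovery mechanism (e.g. an acknowledgment/retransmission layer) can then be tested against the wrapper instead of the raw socket.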

Implementing Rollback-Recovery Coordinated Checkpoints

Clairton Buligon; Sérgio Cechin; Ingrid Jansch-Pôrto

Recovering from processor failures in distributed systems is an important problem in the design of reliable systems. The processes should coordinate their operation to guarantee that the set of local checkpoints taken by the individual processes forms a consistent global checkpoint (recovery line). This allows the system to resume operation from a consistent global state when recovering from a failure. This paper shows the results of the implementation of a transparent (no special requirements on applications) and coordinated (non-blocking) rollback-recovery distributed algorithm. As it does not block applications, overhead is reduced during failure-free operation. Furthermore, the rollback procedure can be executed quickly, as a recovery line is always available and well identified. Our preliminary experimental results show that the algorithm causes very low performance overhead (less than 2%) and a high dependency on the checkpoint size. We are now studying optimizations of the implementation to reduce checkpoint latency.

Keywords: Execution Time; Stable Storage; Application Message; Message Rate; Stable Memory.

- Fault Tolerance | Pp. 246-257
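A minimal simulation of the recovery-line idea (not the paper's non-blocking algorithm, which coordinates checkpoints without stopping the applications): all processes save their state together, and on failure every process rolls back to that line.

```python
class Process:
    """Toy process whose state is a counter; checkpoints save the state."""
    def __init__(self, pid):
        self.pid = pid
        self.state = 0
        self.checkpoint = 0  # stand-in for stable storage

    def work(self, steps):
        self.state += steps

    def take_checkpoint(self):
        self.checkpoint = self.state

    def rollback(self):
        self.state = self.checkpoint

def coordinated_checkpoint(processes):
    # All processes save state together, so the set of local checkpoints
    # forms a consistent global checkpoint (the recovery line).
    for p in processes:
        p.take_checkpoint()

def recover(processes):
    # On failure, every process resumes from the recovery line.
    for p in processes:
        p.rollback()

if __name__ == "__main__":
    procs = [Process(i) for i in range(3)]
    for p in procs:
        p.work(5)
    coordinated_checkpoint(procs)   # recovery line at state 5
    for p in procs:
        p.work(3)                   # progress not yet checkpointed
    recover(procs)                  # simulate a failure
    print([p.state for p in procs])  # [5, 5, 5]
```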

An Identity-Based Model for Grid Security Infrastructure

Xiaoqin Huang; Lin Chen; Linpeng Huang; Minglu Li

In this paper, we propose a grid security infrastructure based on identity-based cryptography. We mainly discuss the grid security authentication and authorization architecture using the Tate pairing. We propose a private key generator security infrastructure and a secure group communication scheme using a non-interactive secret sharing protocol and a one-round tripartite Diffie-Hellman protocol. Finally, we present the advantages and disadvantages of our ID-based security infrastructure compared with the public key infrastructure in a grid setting.

Keywords: Identity Cryptography; Grid Security Infrastructure; Secure Group Communication.

- Information Retrieval | Pp. 258-266

Lineage Tracing in Mediator-Based Information Integration Systems

Nematollaah Shiri; Ali Taghizadeh-Azari

The problem of identifying the data that contributed to a query answer is referred to as lineage tracing. While this has been studied extensively in data warehouse systems, it is identified as a research topic in the mediator-based approach to information integration [10]. A main problem in this context is that a mediator does not store data; hence, for query processing and tracing, it has to communicate with the data sources. While this communication can be expensive, the real issue is that in some situations, after a query has been processed, lineage tracing may be more difficult, e.g., when the schema of a source has changed, or may even be impossible, e.g., when a source becomes unavailable. In this paper, we study the lineage tracing problem in mediator-based systems and propose a solution that collects “enough” data and metadata during query processing so that tracing remains possible in such situations. We have developed a system prototype, called ELIT (for Exploration and LIneage Tracing). To allow more flexibility, ELIT supports lineage tracing in two modes: batch and interactive. Due to the distributed nature of the setting, efficiency is a primary concern for practical reasons. We therefore investigate ways to reduce the overhead of lineage tracing in the proposed framework while processing queries. Using some basic query optimization techniques in ELIT, our preliminary experimental results show a considerable increase in efficiency. This indicates that the ideas proposed in the framework of ELIT could lend themselves to powerful lineage tracing and data analysis tools by incorporating more sophisticated query optimization techniques.

Keywords: Query Processing; Transformation Function; User Query; Query Evaluation; Atomic Data.

- Information Retrieval | Pp. 267-282
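The core bookkeeping idea, collecting enough data during query processing so that lineage can be traced afterward, can be sketched as follows. This is an illustration of the concept only; ELIT's actual mechanism and names are not reproduced here.

```python
def evaluate_with_lineage(rows, predicate, key, agg):
    """Evaluate a select/group-by/aggregate query over `rows` while
    recording, for each result group, the indices of the source rows
    that contributed to it (an illustrative sketch, not ELIT's design)."""
    groups, lineage = {}, {}
    for i, row in enumerate(rows):
        if not predicate(row):
            continue
        k = key(row)
        groups.setdefault(k, []).append(row)
        lineage.setdefault(k, []).append(i)  # remember the source row
    return {k: agg(v) for k, v in groups.items()}, lineage

if __name__ == "__main__":
    sales = [
        {"region": "north", "amount": 10},
        {"region": "south", "amount": 7},
        {"region": "north", "amount": 5},
    ]
    answer, lineage = evaluate_with_lineage(
        sales,
        predicate=lambda r: r["amount"] > 4,
        key=lambda r: r["region"],
        agg=lambda group: sum(r["amount"] for r in group),
    )
    print(answer)            # {'north': 15, 'south': 7}
    print(lineage["north"])  # [0, 2]: rows behind the 'north' total
```

With the lineage map stored alongside the answer, tracing no longer requires re-contacting a source whose schema has changed or that has become unavailable.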

Combining Sources of Evidence for Recognition of Relevant Passages in Texts

Alexander Gelbukh; NamO Kang; SangYong Han

Automatically recognizing, in large electronic texts, short self-contained passages relevant to a user query is necessary for fast and accurate information access to large text archives. Surprisingly, most search engines provide practically no help to the user in this tedious task, presenting only a list of whole documents that supposedly contain the requested information. We show how different sources of evidence can be combined to assess the quality of different passages in a document and present the highest-ranked ones to the user. Specifically, we take into account the relevance of a passage to the user query, the structural integrity of the passage with respect to paragraphs and sections of the document, and topic integrity with respect to topic changes and topic threads in the text. Our experiments show that the results are promising.

Keywords: Query Term; User Query; Question Answering; Word Sense Disambiguation; Document Retrieval.

- Information Retrieval | Pp. 283-290
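Combining sources of evidence into a single passage score can be sketched as a weighted sum, here with two of the signals the abstract mentions: query relevance and structural integrity. The weights and names are illustrative assumptions, not values from the paper.

```python
def passage_score(passage, query_terms, starts_paragraph,
                  w_rel=0.8, w_struct=0.2):
    """Combine two sources of evidence into one score: relevance
    (fraction of query terms present) and structural integrity
    (whether the passage starts at a paragraph boundary).
    Weights are illustrative, not taken from the paper."""
    words = set(passage.lower().split())
    relevance = sum(1 for t in query_terms if t in words) / len(query_terms)
    structure = 1.0 if starts_paragraph else 0.0
    return w_rel * relevance + w_struct * structure

if __name__ == "__main__":
    query = ["fault", "injection"]
    candidates = [
        ("Fault injection emulates communication faults.", True),
        ("the rest of a sentence about fault handling", False),
    ]
    ranked = sorted(candidates,
                    key=lambda c: passage_score(c[0], query, c[1]),
                    reverse=True)
    print(ranked[0][0])  # the self-contained, relevant passage wins
```

A third term for topic integrity could be added to the sum in the same way.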

A Hierarchical and by Role Multi-agent Organization: Application to the Information Retrieval

Emmanuel Adam; René Mandiau

Web retrieval is becoming increasingly important for the knowledge management area, and we believe that multi-agent systems are a good answer to this problem. In this paper, we propose a centralized information multi-agent system, called CIMASTEWA, to help the actors of technological watch cells. This system is an evolution of a previous project and is set up within an n-tier architecture that follows the STRUTS framework. It has been developed to answer demands from technological watch cells; for example, to secure the search, notably against surveillance by spies, we have proposed particular search strategies.

Keywords: Multiagent System; Information Agent; Coordinator Agent; Technological Watch; Holonic Manufacture System.

- Information Retrieval | Pp. 291-300

Evaluating a Scientific SPMD Application on a Computational Grid with Different Load Balancing Techniques

André Oliveira; Gabriel Argolo; Pablo Iglesias; Simone Martins; Alexandre Plastino

The performance of SPMD programs is strongly affected by dynamic load imbalance factors. The use of a suitable load balancing algorithm is essential for overcoming the effects of these imbalance factors. In this work, we evaluate the performance of a scientific SPMD parallel application executed on a computational grid with different kinds of load balancing strategies. The developed SPMD application computes the macroscopic thermal dispersion in porous media. A set of experiments was conducted on a computational grid composed of two geographically separated clusters. The main contribution of this work is the performance evaluation and comparison of a large variety of load balancing techniques under dynamic environment conditions. The experimental results show the importance of choosing appropriate load balancing strategies when developing SPMD applications in a grid environment.

Keywords: Load Balance; Computational Grid; Central Processor; Parallel Application; Grid Environment.

- Modeling and Simulation | Pp. 301-311
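One common dynamic load balancing strategy of the kind such comparisons include is a master-worker task queue, where faster nodes automatically pull more work. A minimal sketch, illustrative only and not one of the paper's specific implementations:

```python
from queue import Queue, Empty
from threading import Thread

def dynamic_balance(tasks, n_workers):
    """Master-worker dynamic load balancing: workers pull tasks from a
    shared queue, so faster workers automatically take on more work."""
    q = Queue()
    for t in tasks:
        q.put(t)
    done = [[] for _ in range(n_workers)]

    def worker(wid):
        while True:
            try:
                t = q.get_nowait()
            except Empty:
                return
            done[wid].append(t)  # here: just record; in SPMD, compute on t

    threads = [Thread(target=worker, args=(i,)) for i in range(n_workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return done

if __name__ == "__main__":
    parts = dynamic_balance(list(range(10)), 3)
    print(sum(len(p) for p in parts))  # 10: every task processed exactly once
```

A static block distribution would instead assign len(tasks) / n_workers tasks up front, which is exactly what suffers under the dynamic imbalance factors the abstract describes.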

Increasing the Training Speed of SVM, the Zoutendijk Algorithm Case

Rodolfo E. Ibarra Orozco; Neil Hernández-Gress; Juan Frausto-Solís; Jaime Mora Vargas

The Support Vector Machine (SVM) is a well-known method used for classification, regression, and density estimation. Training an SVM consists in solving a Quadratic Programming (QP) problem. The QP problem is very resource-consuming (in computational time and memory), because the quadratic form is dense and the memory requirements grow with the square of the number of data points. The support vectors found when training an SVM represent a small subgroup of the training patterns. If an algorithm could approximate beforehand which points will stand as support vectors, we could train the SVM only with those data and obtain the same results as when training with the entire database. This paper introduces an original initialization based on the Zoutendijk method, called ZQP, to train SVMs faster than classical approaches. The ZQP method first makes a fast approximation to the solution using the Zoutendijk algorithm. As a result of this approximation, a reduced set of training patterns is obtained. Finally, a QP algorithm performs the training with this subset of the data. Results show the improvement of the methodology compared with a plain QP algorithm and with chunking combined with a QP algorithm. The ideas presented here can be extended to other problems such as resource allocation, considering allocation as a combinatorial problem that could be solved using artificial intelligence techniques such as genetic algorithms or simulated annealing; in such an approach, ZQP would be used as a measure of effective fitness.

Keywords: Support Vector Machine; Quadratic Programming; Quadratic Programming Problem; Training Pattern; Chunk Size.

- Modeling and Simulation | Pp. 312-320
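The pre-selection idea, guessing beforehand which points will stand as support vectors, can be illustrated with a simple heuristic: keep the points closest to the opposite class, since support vectors lie near the margin. ZQP instead uses a Zoutendijk-method approximation; the filter below is only a stand-in for that step.

```python
def boundary_candidates(points, labels, keep_frac=0.5):
    """Heuristic pre-selection of likely support vectors: keep the
    points closest to the opposite class. (Illustration only; ZQP's
    actual approximation uses the Zoutendijk algorithm.)"""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    margins = []
    for i, (p, y) in enumerate(zip(points, labels)):
        # Squared distance to the nearest point of the opposite class.
        nearest = min(dist2(p, q) for q, z in zip(points, labels) if z != y)
        margins.append((nearest, i))
    margins.sort()
    n_keep = max(1, int(keep_frac * len(points)))
    return sorted(i for _, i in margins[:n_keep])

if __name__ == "__main__":
    pts = [(0, 0), (1, 0), (3, 0), (4, 0)]
    ys  = [-1, -1, 1, 1]
    # The two inner points straddle the class boundary:
    print(boundary_candidates(pts, ys, keep_frac=0.5))  # [1, 2]
```

The QP solver would then be run only on the indices returned, shrinking the dense quadratic form from n x n to roughly (keep_frac * n) squared.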

Video Motion Detection Using the Algorithm of Discrimination and the Hamming Distance

Josué A. Hernández-García; Héctor Pérez-Meana; Mariko Nakano-Miyatake

Several video detection systems that use a simple motion detection scheme (if something moves, an alarm is generated) have been proposed; as a consequence, part of the process is entrusted to human interpretation. Recent studies have shown that it is almost impossible for a person to watch a static scene on a monitor attentively for more than 20 minutes, which makes traditional video monitoring systems such as CCTV unreliable; in addition, numerous annoying false alarms are generated by the poor elimination of irrelevant information (color, light, shade, etc.) within the scene. Artificial vision nowadays makes it possible to have an automatic monitoring system capable of identifying real threats and alerting security as they happen. This paper presents a video motion detection method based on a discrimination algorithm able to eliminate the irrelevant information caused by natural effects (sun, moon, wind, etc.) or animals while maintaining the maximum of detail in the image, allowing better motion detection through a doubly justified Hamming distance. This reduces the false alarm rate and yields an automatic and reliable motion detection method. The paper also compares the proposed method with other techniques, showing that it gives better results; the results obtained exhibit the basic characteristics of this detection method.

- Modeling and Simulation | Pp. 321-330
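The Hamming-distance step of such a method can be sketched minimally: binarize consecutive frames and raise an alarm when enough bits differ. The paper's discrimination algorithm for filtering out natural effects is not reproduced; thresholds and names below are illustrative.

```python
def binarize(frame, threshold=128):
    """Turn a grayscale frame (list of pixel rows) into a bitmap."""
    return [[1 if px >= threshold else 0 for px in row] for row in frame]

def hamming(a, b):
    """Number of differing bits between two binarized frames."""
    return sum(x != y for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def motion_detected(prev, curr, alarm_bits=2, threshold=128):
    """Alarm when the Hamming distance between consecutive binarized
    frames reaches alarm_bits. (Sketch of the Hamming-distance step
    only; the discrimination filtering is not reproduced here.)"""
    return hamming(binarize(prev, threshold),
                   binarize(curr, threshold)) >= alarm_bits

if __name__ == "__main__":
    frame1 = [[200, 200, 10], [10, 10, 10]]
    frame2 = [[200, 200, 10], [10, 10, 10]]  # identical: no motion
    frame3 = [[10, 10, 200], [200, 10, 10]]  # bright region moved
    print(motion_detected(frame1, frame2))  # False
    print(motion_detected(frame1, frame3))  # True
```

In a full system, the discrimination stage would run before this step so that changes from wind, shadows, or lighting never reach the bitmaps being compared.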

An Efficient and Grain Preservation Mapping Algorithm: From ER Diagram to Multidimensional Model

Yen-Ting Chen; Ping-Yu Hsu

Many practitioners and researchers advocate that the data models of data warehouses should incorporate the source data as much as possible in order to answer queries at the finest levels. On the other hand, the source data are very likely to come from systems designed with ER diagrams. Therefore, much research has been devoted to design methodologies that build multidimensional models from the corresponding source ER diagrams. However, to the best of our knowledge, no algorithm has been proposed that systematically translates an entire ER diagram into a multidimensional model with hierarchical snowflake structures. The algorithm proposed in this paper promises to do so with two characteristics, namely, grain preservation and minimal distance from each table to the fact table. The grain preservation characteristic guarantees that the translated multidimensional model has cohesive granularity among entities. The minimal distance characteristic guarantees that if an entity can be connected to the fact table in the derived model by more than one path, the path with the fewest hops is always chosen. The first characteristic is achieved by translating problematic relationships between entities with weight_factor attributes in bridging tables and by enhancing fact tables with unique primary keys. The second characteristic is achieved by including a revised shortest path algorithm in the translation algorithm, with the distance calculated as the number of relationships required between entities.

Keywords: Weight Factor; Data Warehouse; Dimension Table; Multidimensional Model; Short Path Algorithm.

- Modeling and Simulation | Pp. 331-346
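The minimal-distance characteristic amounts to a shortest path computation over the relationship graph, which a breadth-first search from the fact table realizes when every relationship counts as one hop. The table names below are illustrative, not taken from the paper.

```python
from collections import deque

def shortest_hops(edges, fact_table):
    """BFS from the fact table over the relationship graph, so every
    entity is connected along the path with the fewest hops, as the
    minimal-distance characteristic requires. `edges` is a list of
    undirected (table, table) relationships; names are illustrative."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    dist = {fact_table: 0}
    q = deque([fact_table])
    while q:
        node = q.popleft()
        for nb in graph.get(node, ()):
            if nb not in dist:
                dist[nb] = dist[node] + 1
                q.append(nb)
    return dist

if __name__ == "__main__":
    # Sales is the fact table; Customer is reachable directly or via Order.
    rels = [("Sales", "Customer"), ("Sales", "Order"),
            ("Order", "Customer"), ("Customer", "Region")]
    print(sorted(shortest_hops(rels, "Sales").items()))
    # [('Customer', 1), ('Order', 1), ('Region', 2), ('Sales', 0)]
    # Customer is connected at 1 hop (direct), not 2 (via Order).
```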