Publications catalog - books



Data Management in a Connected World: Essays Dedicated to Hartmut Wedekind on the Occasion of His 70th Birthday

Theo Härder; Wolfgang Lehner (eds.)

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Database Management; Computer Communication Networks; Information Systems Applications (incl. Internet); Software Engineering; Management of Computing and Information Systems

Availability

Detected institution: not detected
Year of publication: 2005
Browse at: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-26295-4

Electronic ISBN

978-3-540-31654-1

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2005

Publication rights information

© Springer-Verlag Berlin Heidelberg 2005

Table of contents

Processes, Workflows, Web Service Flows: A Reconstruction

Stefan Jablonski

The last decade focused heavily on process-oriented approaches for application integration. It started with the advent of workflow management technology at the beginning of the nineties, and this development has continued with the definition of flow concepts for Web services. In this article, we discuss the purpose and advantages of process-oriented concepts for application integration. To this end, workflow technology and Web service flows are briefly introduced. Then we assess both technologies with respect to their role in application integration. In particular, we reconstruct the fundamental differences between workflows and Web services.

III - APPLICATION DESIGN | Pp. 201-213

Pros and Cons of Distributed Workflow Execution Algorithms

Hans Schuster

As an implementation of business processes, workflows are inherently distributed. Consequently, a considerable number of commercial products and research prototypes address distribution issues in workflow execution and workflow management systems (WfMS). However, most of these approaches provide results focused on the properties of a specific workflow model, workflow application, and/or WfMS implementation. An analysis of the generic requirements on distributed workflow execution algorithms and of their applicability, advantages, and disadvantages in different workflow scenarios is still missing; this paper provides such an analysis. A comprehensive requirements analysis of distributed workflow execution forms the basis of our discussion. In contrast to existing work that primarily focuses on non-functional requirements, this paper explicitly considers issues that originate in the workflow model as well. Subsequently, four basic algorithms for distributed workflow execution are presented, namely remote access, workflow migration, workflow partitioning, and subworkflow distribution; existing WfMS approaches use combinations and/or variants of these basic algorithms. The properties of these algorithms with respect to the aforementioned requirements are discussed in detail. As a primary result, subworkflow distribution proves to be a well-suited, application-independent and thus generally applicable distributed execution model, while application-specific optimizations can still be accomplished by other models.
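To make the contrast between the basic algorithms more concrete, the following minimal Python sketch illustrates the idea behind subworkflow distribution: a complete subworkflow is handed over to another engine, which then controls its internal steps itself, instead of every remote activity being accessed individually. All class, site, and workflow names are invented for illustration and are not taken from the chapter.

    from dataclasses import dataclass

    @dataclass
    class Activity:
        name: str

    @dataclass
    class Subworkflow:
        name: str
        steps: list                       # Activities and/or nested Subworkflows

    class LocalEngine:
        """Executes every step on the site where this engine runs."""
        def __init__(self, site="local"):
            self.site = site

        def run(self, step):
            if isinstance(step, Subworkflow):
                for s in step.steps:
                    self.run(s)
            else:
                print(f"[{self.site}] executing activity '{step.name}'")

    class DistributingEngine(LocalEngine):
        """Subworkflow distribution: whole subworkflows are delegated to another
        engine; only their start and end are coordinated by this engine."""
        def __init__(self, remote_engines, site="coordinator"):
            super().__init__(site)
            self.remote_engines = remote_engines   # subworkflow name -> engine

        def run(self, step):
            if isinstance(step, Subworkflow) and step.name in self.remote_engines:
                self.remote_engines[step.name].run(step)   # internal control flow runs remotely
            else:
                super().run(step)

    order = Subworkflow("order_processing", [
        Activity("check_credit"),
        Subworkflow("shipping", [Activity("pick"), Activity("ship")]),
    ])
    DistributingEngine({"shipping": LocalEngine("shipping site")}).run(order)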

III - APPLICATION DESIGN | Pp. 215-234

Business-to-Business Integration Technology

Christoph Bussler

Business-to-Business (B2B) integration technology refers to software systems that enable the communication of electronic business events between organizations across computer networks like the Internet or specialized networks like SWIFT [19]. A typical example of a business event is a purchase order sent from a buyer to a seller with the intent that the seller eventually delivers the ordered products, or an invoice sent from a supplier to a buyer with the intent that the buyer fulfills his obligation to pay for the delivered products. Business events carry business data as well as the sender’s intent about what it expects the receiver to do. As business events are mission-critical for the success of private, public, and government organizations, their reliable and dependable processing and transmission is paramount.

Database technology is a platform technology that has proven to be reliable and dependable for the management of large sets of dynamic data across a huge variety of applications. In recent years, functionality beyond data management has been added to database technology, making it a feasible platform for business event processing in addition to data processing itself. New functionality like complex data types, audit trails, message queuing, remote message transmission, or publish/subscribe communication fulfills basic requirements for business event processing and is thus directly relevant for B2B integration technology.

This contribution investigates the use of database technology for business event processing between organizations. First, a high-level conceptual model for B2B integration is introduced, from which basic business event processing requirements are derived. An outline of a B2B integration system architecture is then provided that defines the B2B integration system boundaries, before specific database functionality is discussed as implementation technology for business event processing. Some future trends as well as some proposals for extended database functionality are presented as a conclusion of this chapter.
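As a rough illustration of how basic database functionality such as transactional message queuing can carry business events, the following Python sketch stores purchase-order events in a SQLite queue table and consumes them inside transactions. The schema, event kinds, and function names are assumptions made for this example, not the architecture proposed in the chapter.

    import json, sqlite3

    # one shared database file stands in for the integration system's event store
    db = sqlite3.connect("b2b_events.db")
    db.execute("""CREATE TABLE IF NOT EXISTS event_queue (
        id        INTEGER PRIMARY KEY AUTOINCREMENT,
        kind      TEXT NOT NULL,           -- e.g. 'purchase_order', 'invoice'
        payload   TEXT NOT NULL,           -- business data carried by the event
        processed INTEGER NOT NULL DEFAULT 0)""")

    def enqueue(kind, payload):
        # the sender's intent (kind) and the business data travel together;
        # the commit makes the event durable before it is transmitted further
        with db:
            db.execute("INSERT INTO event_queue (kind, payload) VALUES (?, ?)",
                       (kind, json.dumps(payload)))

    def dequeue():
        # consume one unprocessed event inside a transaction, so a crash before
        # the commit leaves the event in the queue (reliable processing)
        with db:
            row = db.execute("SELECT id, kind, payload FROM event_queue "
                             "WHERE processed = 0 ORDER BY id LIMIT 1").fetchone()
            if row is None:
                return None
            db.execute("UPDATE event_queue SET processed = 1 WHERE id = ?", (row[0],))
            return row[1], json.loads(row[2])

    enqueue("purchase_order", {"buyer": "ACME", "items": [{"sku": "42", "qty": 3}]})
    print(dequeue())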

III - APPLICATION DESIGN | Pp. 235-254

Information Dissemination in Modern Banking Applications

Peter Peinl; Uta Störl

Requirements for information systems, especially in the banking and finance industry, have changed drastically in the past few years to cope with phenomena like globalization and the growing impact of financial markets. Nowadays, flexibility and profitability in this segment of the economy depend on the availability of ready, current, and accurate information at the workplace of every single employee. These theses are exemplified by outlining two different, modern real-life banking applications. Their business value is founded on the rapid dissemination of accurate information in a global, distributed working environment. To succeed technically, they employ a combination of modern database, networking, and software engineering concepts. One case study centers on the swift dissemination of structured financial data to hundreds of investment bankers; the other deals with the rapid dissemination of semi-structured and/or unstructured information in a knowledge retrieval context.
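A minimal sketch of the dissemination idea, assuming a simple in-process publish/subscribe hub; the class, instrument names, and prices are invented for illustration and do not describe the systems discussed in the chapter.

    from collections import defaultdict

    class DisseminationHub:
        """Tiny publish/subscribe hub: desks subscribe to instruments,
        and every price update is pushed to all matching subscribers."""
        def __init__(self):
            self.subscribers = defaultdict(list)    # instrument -> callbacks

        def subscribe(self, instrument, callback):
            self.subscribers[instrument].append(callback)

        def publish(self, instrument, price):
            for deliver in self.subscribers[instrument]:
                deliver(instrument, price)

    hub = DisseminationHub()
    hub.subscribe("EUR/USD", lambda i, p: print(f"desk A sees {i} at {p}"))
    hub.subscribe("EUR/USD", lambda i, p: print(f"desk B sees {i} at {p}"))
    hub.publish("EUR/USD", 1.2871)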

IV - APPLICATION SCENARIOS | Pp. 257-276

An Intermediate Information System Forms Mutual Trust

Dieter Steinbauer

On the Internet, business transactions between anonymous parties are concluded by the minute. How can trust between such business partners be established? For this purpose, an organization called a “credit bureau” exists in every country with a functioning free market. In Germany, the leading credit bureau is SCHUFA.

On the one hand, a credit bureau operates an information system which supplies the credit grantor with data about the creditworthiness of his clients. On the other hand, the credit bureau offers the customer the possibility to document his reliability to the contractor or the credit grantor, respectively. Of its own accord, the credit bureau strictly commits itself to neutrality and only gives credit grantors data that are relevant for the credit granting itself. This procedure prevents the system from being abused and customers from being alienated as a result.

In many sectors, the credit-granting process is highly automated. Using statistical methods, the data of the credit bureaus are condensed into scoring systems. Via correlation of scores, equivalence classes of customers are formed according to their non-payment risk.

The final credit decision is not only based on the data and the score of the customer in question but, of course, also on the data which the credit grantor already possesses or has been collecting since the contract was concluded.

An integrated decision support system for credit processing starts at the point of sale. It supports an appropriate computer-based dialogue and includes a rule engine in which the rules for risk assessment are integrated. The information system of the credit bureau can be used interactively in this process.

While a credit is in use, the non-payment risk and its probability are of substantial interest. For this purpose, a special monitoring process has to be established.

In summary, the credit-bureau system combines several techniques of computer science in an interesting way, ranging from database technology via mathematical/statistical methods and rule-based systems to Web-based communication.
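The following small Python sketch illustrates, with invented thresholds and rules, how a bureau score can be mapped to risk equivalence classes and combined with the credit grantor's own data in a tiny rule engine. It is an illustrative toy, not SCHUFA's actual scoring or decision logic.

    # illustrative thresholds only; real scorecards are derived statistically
    RISK_CLASSES = [(700, "A"), (600, "B"), (500, "C"), (0, "D")]

    def risk_class(score):
        # customers with similar non-payment risk fall into the same class
        for threshold, label in RISK_CLASSES:
            if score >= threshold:
                return label

    def credit_decision(score, own_data):
        """Combine the bureau score with data the credit grantor already has,
        evaluated by a (very small) rule engine: first matching rule decides."""
        rules = [
            lambda: "reject" if own_data.get("payments_overdue", 0) > 2 else None,
            lambda: "reject" if risk_class(score) == "D" else None,
            lambda: "manual review" if risk_class(score) == "C" else None,
            lambda: "approve",
        ]
        for rule in rules:
            outcome = rule()
            if outcome is not None:
                return outcome

    print(credit_decision(640, {"payments_overdue": 0}))   # -> approve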

IV - APPLICATION SCENARIOS | Pp. 277-292

Data Refinement in a Market Research Applications’ Data Production Process

Thomas Ruf; Thomas Kirsche

In this contribution, we show how empirically collected field data for a market research application are refined in a stepwise manner and enriched into end-user market reports and charts. The collected data are treated by selections, transformations, enrichments, and aggregations to finally derive new market knowledge from the raw data material. Besides data-oriented aspects, process- and organization-related aspects have to be considered as well to ensure the required product quality for GfK Marketing Services’ customers, who have known GfK for decades as a top-10 player in the international market research area. Based on a running example from the panel-based Retail & Technology application domain, we show how decentrally collected and pre-processed data are transformed into integrated, global market knowledge in a worldwide network of companies.
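A minimal sketch of such a stepwise refinement, using invented sample records and standard-library Python only; the field names, lookup table, and figures are illustrative, not GfK data. Selection, transformation, enrichment, and aggregation are applied in sequence to turn raw field data into an aggregated market view.

    from collections import defaultdict

    # invented sample of reported retail sales records (raw field data)
    raw = [
        {"shop": "S1", "country": "DE", "item": "TV-A", "units": 5, "price": 499.0},
        {"shop": "S2", "country": "DE", "item": "TV-A", "units": 0, "price": 499.0},
        {"shop": "S3", "country": "FR", "item": "TV-B", "units": 3, "price": 899.0},
    ]
    brand_of = {"TV-A": "BrandX", "TV-B": "BrandY"}    # enrichment lookup (master data)

    # selection: keep only records with actual sales
    selected = [r for r in raw if r["units"] > 0]

    # transformation + enrichment: compute revenue, attach master data
    refined = [{**r, "revenue": r["units"] * r["price"], "brand": brand_of[r["item"]]}
               for r in selected]

    # aggregation: condense record-level data into a market view per country and brand
    market = defaultdict(float)
    for r in refined:
        market[(r["country"], r["brand"])] += r["revenue"]

    for (country, brand), revenue in sorted(market.items()):
        print(country, brand, round(revenue, 2))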

IV - APPLICATION SCENARIOS | Pp. 293-314

Information Management in Distributed Healthcare Networks

Richard Lenz

Providing healthcare is increasingly changing from isolated treatment episodes towards a continuous treatment process involving multiple healthcare professionals and various institutions. Information management plays a crucial role in this interdisciplinary process. The use of information technology (IT) pursues several goals: to decrease the overall costs of healthcare, to improve healthcare quality, and to consolidate patient-related information from different sources.

Consolidation of patient data is ultimately aimed at a lifetime patient record which serves as the basis for healthcare processes involving multiple healthcare professionals and different institutions. To enable seamless integration of various kinds of IT applications into a healthcare network, a commonly accepted framework is needed. Powerful standards and middleware technology are already at hand to develop a technical and syntactical infrastructure for such a framework. Yet, semantic heterogeneity is a limiting factor for system interoperability. Existing standards do support semantic interoperability of healthcare IT systems to some degree, but standards alone are not sufficient to support an evidence-based cooperative patient treatment process across organizational borders.

Medicine is a rapidly evolving scientific domain, and medical experts develop and agree on new guidelines as new evidence emerges. Unfortunately, there is a gap between guideline development and guideline usage at the point of care. Medical treatment today is still characterized by a large diversity of opinions and treatment plans. Medical pathways and reminder systems are an attempt to reduce this diversity in medical treatment and to bring evidence to the point of care. Developing such pathways, however, is primarily a process of achieving consensus among the participating healthcare professionals. IT support for pathways thus requires a responsive IT infrastructure that enables a demand-driven system evolution.

This article describes modern approaches for “integrated care” as well as the major challenges that are yet to be solved to adequately support distributed healthcare networks with IT services.

IV - APPLICATION SCENARIOS | Pp. 315-334

Data Management for Engineering Applications

Hans-Peter Steiert

Current database technology has proven to fulfill the requirements of business applications, i.e., processing a high number of short transactions on more or less simply structured data. Unfortunately, the requirements of engineering applications are quite different. A car’s bill of material, for example, is a deep tree with many branches at every level. Data objects become even more complex if we consider the engineered design objects themselves, for example a gear box with its parts and how they are related to each other. Supporting complex data objects has many implications for the underlying data management system and needs to be reflected at nearly every layer, from the API down to the storage system. Besides complex objects, the way design objects are processed in engineering applications differs from business applications. Because engineering is an explorative task, the concept of short transactions does not fit here. Working with design objects is a task of days, which leads to a different programming model for engineering applications. In addition, the data management system needs to support versioning of objects and configuration management. Furthermore, engineering is done in collaborative teams. Hence, sharing of design objects within a team is necessary while, at the same time, the collaborative work has to be synchronized. All these special requirements have to be considered in data management systems for engineering applications. In this contribution, the special requirements sketched above are characterized, and the approaches developed to cope with them are described.
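As a small illustration of why a bill of material is a complex object rather than a flat record, the following Python sketch recursively explodes an invented, tiny BOM tree into total quantities of base parts; a single logical design object thus spans many stored rows. The part names and quantities are made up for this example.

    # invented bill of material: component -> list of (subcomponent, quantity)
    BOM = {
        "gear_box": [("housing", 1), ("gear_set", 2)],
        "gear_set": [("gear", 4), ("shaft", 1)],
        "housing":  [],
        "gear":     [],
        "shaft":    [],
    }

    def explode(part, factor=1, totals=None):
        """Recursively resolve the BOM tree into total quantities of base parts."""
        if totals is None:
            totals = {}
        children = BOM[part]
        if not children:                       # leaf part
            totals[part] = totals.get(part, 0) + factor
        for child, qty in children:
            explode(child, factor * qty, totals)
        return totals

    print(explode("gear_box"))   # {'housing': 1, 'gear': 8, 'shaft': 2}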

IV - APPLICATION SCENARIOS | Pp. 335-356