Publications catalog - books



Conceptual Modeling: ER 2006: 25th International Conference on Conceptual Modeling, Tucson, AZ, USA, November 6-9, 2006, Proceedings

David W. Embley; Antoni Olivé; Sudha Ram (eds.)

In conference: 25th International Conference on Conceptual Modeling (ER), Tucson, AZ, USA, November 6-9, 2006

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Database Management; Information Systems Applications (incl. Internet); Mathematical Logic and Formal Languages; Artificial Intelligence (incl. Robotics); Models and Principles; Software Engineering

Availability

Detected institution: not detected | Year of publication: 2006 | Access: SpringerLink

Información

Resource type:

books

Print ISBN

978-3-540-47224-7

Electronic ISBN

978-3-540-47227-8

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2006

Publication rights information

© Springer-Verlag Berlin Heidelberg 2006

Table of contents

Eliciting Data Semantics Via Top-Down and Bottom-Up Approaches: Challenges and Opportunities

Lois Delcambre; Vijay Khatri; Yair Wand; Barbara Williams; Carson Woo; Mark Zozulia

Data semantics can be defined as the meaning and use of data [2]. In the context of databases, data semantics refers to the set of mappings from a representation language to agreed-upon concepts in the real world [1]. Eliciting and capturing data semantics can enable better management of enterprise data. Additionally, eliciting data semantics can improve understanding of applications, reduce maintenance and testing costs, and ease application administration. “Bad” data, or data whose semantics are not known or are not clear, is considered a major cause of failures such as “botched marketing campaigns, failed CRM and data warehouse projects, angry customers, and lunkhead decisions” [3]. To investigate the practical challenges and to propose future research opportunities, this discussion panel, moderated by Vijay Khatri and Carson Woo, will present: 1) views from Management Information Systems (MIS) and Computer Science (CS) research, as well as 2) methods, tools, and approaches employed in practice.

- Panels | Pp. 548-551

The ADO.NET Entity Framework: Making the Conceptual Level Real

José A. Blakeley; S. Muralidhar; Anil Nori

This paper describes the ADO.NET Entity Framework, a platform for programming against data that raises the level of abstraction from the logical (relational) level to the conceptual (entity) level, and thereby significantly reduces the impedance mismatch for applications and data services such as reporting, analysis, and replication. The conceptual data model is made real by a runtime that implements an extended relational model (the Entity Data Model, or EDM) that embraces entities and relationships as first-class concepts; a query language for the EDM; a comprehensive mapping engine that translates from the conceptual to the logical (relational) level; and a set of model-driven tools that help create entity-object, object-xml, and entity-xml transformers.

- Industrial Track | Pp. 552-565
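
As a rough illustration only of the conceptual-to-relational mapping idea this abstract describes, the following minimal Python sketch uses SQLAlchemy's declarative mapping, which plays an analogous role to the EDM runtime; it is not the ADO.NET Entity Framework API, and all entity, table, and column names are invented.

```python
# Illustrative analogy only: SQLAlchemy stands in for the idea of declaring
# entities and relationships at the conceptual level and letting a mapping
# engine translate down to the logical (relational) level. Not the EF API.
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, relationship, sessionmaker

Base = declarative_base()

class Customer(Base):                          # conceptual entity
    __tablename__ = "customers"                # logical (relational) table
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)
    orders = relationship("Order", back_populates="customer")

class Order(Base):                             # conceptual entity
    __tablename__ = "orders"
    id = Column(Integer, primary_key=True)
    customer_id = Column(Integer, ForeignKey("customers.id"))
    customer = relationship("Customer", back_populates="orders")

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)               # the mapping layer emits the DDL

with sessionmaker(engine)() as session:
    session.add(Customer(name="Ada", orders=[Order()]))
    session.commit()
    # Query at the entity level; the library translates it to relational SQL.
    ada = session.query(Customer).filter_by(name="Ada").one()
    print(len(ada.orders))  # -> 1
```

The analogy is only to the division of labor the abstract names: the program works with entities and relationships as first-class concepts, and a mapping engine owns the translation to the relational level.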

XMeta Repository and Services

Lee Scheffler

The XMeta repository is an emerging industrial-strength model and instance persistence, management, access, query, update, upgrade, and mapping facility based on EMF modeling technology. It is actively used as the foundation of several commercial metadata-intensive products within IBM, as well as several research efforts involving conceptual modeling. This talk covers both the features of XMeta and its services, and some of its current uses. It is expected that a version of XMeta will be made more widely available in some external form in the future.

- Industrial Track | Pp. 566-566

IBM Industry Models: Experience, Management and Challenges

Pat G. O’Sullivan; Dan Wolfson

IBM’s Industry Models for Banking and Insurance continue to evolve to encompass our accumulated experience with our customers, the changing needs of the industry, and the changing directions in technologies. With over 15 years of use, these models represent a wealth of information about the information models, process models, and integration models for these industries. The models are in use today by more than 300 leading banks and insurance companies, where they serve in a variety of capacities, from supporting data consolidation initiatives and business process re-design to addressing risk and compliance issues such as Anti-Money Laundering, Sarbanes-Oxley, or Basel II.

As successful as these models have been, technical challenges remain. Such challenges include:

1. the extension of the models to formally incorporate (and relate) additional dimensions (ontologies, states, KPIs, rules, etc.)

2. how to establish both inter-model relationships and traceability (and perhaps round-tripping) from models to runtime

3. how to better identify and reuse common model snippets across domains

4. how to facilitate better understanding of complex models

5. automating the deployment of run-time artifacts based on models

While the IBM Industry Models represent a significant body of content, the management of this content, and more broadly the management of metadata, are also key concerns. In this talk, we will review the IBM Industry Models, discuss how this work is evolving, outline the metadata management roadmap, and discuss some of the key ongoing technical challenges to be addressed by both the research and development communities.

- Industrial Track | Pp. 567-567

Community Semantics for Ultra-Scale Information Management

Scott Renner

The U.S. Department of Defense (DoD) presents an instance of an ultra-scale information management problem: thousands of information systems, millions of users, billions of dollars for procurement and operations. Military organizations are often viewed as the ultimate in rigid hierarchical control. In fact, authority over users and developers is widely distributed, and centralized control is quite difficult – or even impossible, as many of the DoD core functions involve an extended enterprise that includes completely independent entities, such as allied military forces, for-profit corporations, and non-governmental organizations. For this reason, information management within the DoD must take place in an environment of limited autonomy, one in which influence and negotiation are as necessary as top-down direction and control.

This presentation examines the DoD’s information management problems in the context of its transformation to network-centric warfare (NCW). The key tenet of NCW holds that “seamless” information sharing leads to increased combat power. We examine several implications of the net-centric transformation and show how each depends upon shared semantic understanding within communities of interest. Opportunities for research and for commercial tool development in the area of conceptual modeling will be apparent as we go along.

- Industrial Track | Pp. 568-568

Managing Data in High Throughput Laboratories: An Experience Report from Proteomics

Thodoros Topaloglou

Scientific laboratories are rich in data management challenges. This paper describes an end-to-end information management infrastructure for a high throughput proteomics industrial laboratory. A unique feature of the platform is a data and applications integration framework that is employed for the integration of heterogeneous data, applications and processes across the entire laboratory production workflow. We also define a reference architecture for implementing similar solutions organized according to the laboratory data lifecycle phases. Each phase is modeled by a set of workflows integrating programs and databases in sequences of steps and associated communication and data transfers. We discuss the issues associated with each phase, and describe how these issues were approached in the proteomics implementation.

- Industrial Track | Pp. 569-580
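
The reference architecture described here, lifecycle phases realized as workflows that chain programs and data handoffs in sequence, can be caricatured in a few lines. The following is a hypothetical sketch with invented phase and step names, not the paper's implementation.

```python
# Hypothetical sketch of the lifecycle-phase idea: each phase is a workflow,
# an ordered list of steps, and each step's output record feeds the next step.
from typing import Callable, Iterable

Step = Callable[[dict], dict]

def run_workflow(steps: Iterable[Step], record: dict) -> dict:
    """Push one data record through a phase's steps in order."""
    for step in steps:
        record = step(record)
    return record

# Invented steps for an imagined sample-acquisition phase.
def register_sample(r: dict) -> dict:
    return {**r, "sample_id": f"S-{r['barcode']}"}

def attach_instrument_run(r: dict) -> dict:
    return {**r, "run": {"instrument": "MS-1", "spectra": 512}}

acquisition_phase = [register_sample, attach_instrument_run]
print(run_workflow(acquisition_phase, {"barcode": "0042"}))
```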

Policy Models for Data Sharing

Ken Smith

Data sharing has become an enabler of a diverse and important set of activities in areas such as science, law enforcement, and commerce. Data sharing scenarios frequently involve issues such as personal confidentiality, data misinterpretation, the potential for malicious exploitation of shared data, data with proprietary or first-use value, secret data, and governmental regulation. For these reasons, the need to state and enforce data sharing policy has grown increasingly significant. In this talk, we discuss models for data sharing policy and their key concepts.

- Industrial Track | Pp. 581-582
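
The abstract stays at the level of concepts, so purely as a hedged illustration of what stating and then enforcing a data sharing policy can look like, here is a minimal sketch of a rule list with a default-deny evaluator; the rule shape, field names, and recipient classes are all invented, not drawn from the talk.

```python
# Invented illustration: a data-sharing policy as explicit rules over
# (data element, recipient class), evaluated deny-by-default.
from dataclasses import dataclass

@dataclass
class Rule:
    field: str        # which data element the rule governs
    recipient: str    # class of recipient the rule applies to
    allow: bool

def may_share(rules: list[Rule], field: str, recipient: str) -> bool:
    """Deny unless a matching rule allows; a later match overrides an earlier one."""
    decision = False
    for rule in rules:
        if rule.field == field and rule.recipient == recipient:
            decision = rule.allow
    return decision

policy = [
    Rule("location", "public", allow=False),
    Rule("location", "law_enforcement", allow=True),
]
print(may_share(policy, "location", "public"))           # False
print(may_share(policy, "location", "law_enforcement"))  # True
```

Real policy models also carry purposes, obligations, and conditions; the sketch shows only the stated-then-enforced shape the talk concerns.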

Protocol Analysis for Exploring the Role of Application Domain in Conceptual Schema Understanding

Vijay Khatri; Iris Vessey

In keeping with prior research [1], which suggests that the objective of empirical research in conceptual modeling is to understand the cognitive model created by viewing conceptual schemas (e.g., ER diagrams), this research contributes to the foundation for developing a cognitive model of conceptual modelers. The aspect we focus on is the role of the application domain in conceptual schema understanding.

- Demos and Posters | Pp. 583-583

Auto-completion of Underspecified SQL Queries

Terrence Mason; Ramon Lawrence

Formulating SQL queries involving joins is tedious, error-prone, and requires in-depth schema knowledge. We demonstrate a modified version of SQL [2] that does not require specification of table references and joins. The Schema-Free SQL system can express queries not supported in keyword-based searches [1]. Unlike Universal Relation approaches, the system is scalable to large schemas, and it has built-in mechanisms for handling ambiguity and ranking interpretations for the user. The auto-completion feature is not intended to remove all of the complexity of building SQL queries, just as auto-completion of code fragments does not remove the challenges of programming. However, it does make it easier to build SQL queries. Thus, the system provides a value-added feature to SQL querying that increases its flexibility and usability with no sacrifice in expressiveness or performance. The amount of the final SQL code that is auto-completed depends on the number of joins and the complexity of the rest of the SQL expression. Completing a query takes around 5 milliseconds [3].

- Demos and Posters | Pp. 584-584
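
The abstract does not spell out the completion algorithm, but the core task, recovering the joins the user left unstated, can be illustrated. The following is a hypothetical sketch that breadth-first-searches a foreign-key graph for the shortest join path between the tables a query mentions; the schema is invented, and the real system's ambiguity handling and interpretation ranking are not modeled.

```python
# Hypothetical sketch (invented schema, not the paper's algorithm): infer the
# join path between mentioned tables by BFS over the foreign-key graph, then
# emit the table references and join conditions the user never wrote.
from collections import deque

FK_EDGES = [  # (table_a, table_b, join condition)
    ("orders", "customers", "orders.customer_id = customers.id"),
    ("items", "orders", "items.order_id = orders.id"),
]

def neighbors(table):
    for a, b, cond in FK_EDGES:
        if a == table:
            yield b, cond
        elif b == table:
            yield a, cond

def infer_join_path(src, dst):
    """Shortest chain of foreign-key joins connecting src to dst."""
    queue, seen = deque([(src, [src], [])]), {src}
    while queue:
        table, tables, conds = queue.popleft()
        if table == dst:
            return tables, conds
        for nxt, cond in neighbors(table):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, tables + [nxt], conds + [cond]))
    raise ValueError(f"no join path from {src} to {dst}")

# The user writes only the SELECT list; FROM and WHERE are auto-completed.
tables, conds = infer_join_path("customers", "items")
print("SELECT customers.name, items.sku\n"
      f"FROM {', '.join(tables)}\n"
      f"WHERE {' AND '.join(conds)}")
```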

iQL: A Query Language for the Instance-Based Data Model

Jeffrey Parsons; Jianmin Su

Unlike class-based models, such as the relational model and object-oriented models, the instance-based data model (IBDM) [1] separates data stored about individual things (instances) from how those instances are classified, resulting in a ‘two-layered’ approach (instance and class layers). In this poster and system demonstration, we illustrate the use of iQL (instance-based query language) for the IBDM. This language supports unique query capabilities that exploit the separation of data about instances from their classification. The tool has been implemented in conjunction with a prototype DBMS to support the IBDM [2].

- Demos and Posters | Pp. 585-585
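
The separation iQL exploits, instance data stored apart from how instances are classified, can be suggested in a few lines. The following is a hypothetical Python sketch of the two layers; it is not iQL syntax or the prototype DBMS's API, and all names are invented.

```python
# Hypothetical sketch of the IBDM's two layers: the instance layer holds
# property values per instance; the class layer is a separate mapping from
# class names to member instances, so classification can change freely.
instances = {                                   # instance layer
    1: {"name": "Rex", "legs": 4, "barks": True},
    2: {"name": "Tweety", "legs": 2, "flies": True},
}
classes = {"Pet": {1, 2}, "Dog": {1}}           # class layer

def query_instances(predicate):
    """Class-independent query: runs over instances, ignoring classification."""
    return [i for i, props in instances.items() if predicate(props)]

def query_class(name, predicate=lambda props: True):
    """Class-scoped query resolved through the separate classification layer."""
    return [i for i in classes.get(name, set()) if predicate(instances[i])]

# Find four-legged things regardless of how (or whether) they are classified.
print(query_instances(lambda p: p.get("legs") == 4))  # [1]

# Reclassify without touching any instance data.
classes.setdefault("Bird", set()).add(2)
print(query_class("Bird"))                            # [2]
```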