Publications catalogue - books
Task Models and Diagrams for Users Interface Design: 5th International Workshop, TAMODIA 2006, Hasselt, Belgium, October 23-24, 2006. Revised Papers
Karin Coninx; Kris Luyten; Kevin A. Schneider (eds.)
In conference: 5th International Workshop on Task Models and Diagrams for User Interface Design (TAMODIA). Hasselt, Belgium. October 23, 2006 - October 24, 2006
Abstract/Description – provided by the publisher
Not available.
Keywords – provided by the publisher
User Interfaces and Human Computer Interaction; Multimedia Information Systems; Software Engineering; Simulation and Modeling; Logics and Meanings of Programs; Computer Communication Networks
Availability
Detected institution | Publication year | Browse | Download | Request |
---|---|---|---|---|
Not detected | 2007 | SpringerLink | | |
Information
Resource type:
books
Print ISBN
978-3-540-70815-5
Electronic ISBN
978-3-540-70816-2
Publisher
Springer Nature
Country of publication
United Kingdom
Publication date
2007
Publication rights information
© Springer-Verlag Berlin Heidelberg 2007
Subject coverage
Table of contents
An MDA Approach for Generating Web Interfaces with UML ConcurTaskTrees and Canonical Abstract Prototypes
Duarte Costa; Leonel Nóbrega; Nuno Jardim Nunes
UML has become the standard language for modelling in different areas and domains, but it is widely recognized that it lacks support for User Interface Design (UID). On the other hand, ConcurTaskTree (CTT) is one of the most widely used notations for task and dialogue modelling. An important achievement is the proposed notation and semantics for CTT obtained by extending the UML metamodel, proving that task modelling in user interface design can be accomplished with a UML-compliant notation. For the interface structure design, it was proposed that UML's CTT be complemented with Canonical Abstract Prototypes (CAP), leading to a model-based user interface design method co-specified by the presentation (CAP) and behaviour (UML's CTT) perspectives. In this paper we propose another step in this UID method by defining a specific model compliant with the OMG-recommended Model Driven Architecture (MDA), which will be the intermediary between the design model and an implementation of the user interface. This proposal aligns the UID method with the MDA recommendation, making it possible to automatically generate interface prototypes from conceptual models.
- Bridging the Gap: Driven by Models | Pp. 137-152
High-Level Modeling of Multi-user Interactive Applications
Jan Van den Bergh; Kris Luyten; Karin Coninx
With the shift of networked software applications away from desktop computers and into home appliances comes the challenge of finding new ways to model this type of software. The most important home appliance with newfound computing capabilities is the television set. Through television, people can now be participants instead of viewers, and use interactive software to enter the participative stage. Staged participatory multimedia events are a subset of this kind of interactive software with a predefined temporal structure that lets television viewers and mobile users become participants in a distributed multimedia event. In this paper, we introduce an interaction model that is part of a modelling language for staged participatory multimedia events. We show how the language can be mapped onto UML constructs and briefly discuss related work using a common example.
- Bridging the Gap: Driven by Models | Pp. 153-168
Goals: Interactive Multimedia Documents Modeling
Pedro Valente; Paulo N. M. Sampaio
Multimedia has been widely applied to develop attractive and functional applications that allow users to accomplish useful tasks. These sophisticated applications are usually developed using bottom-up approaches regardless of the complexity of the implementation. Furthermore, the design of complex Interactive Multimedia Documents (IMDs) introduces an additional complexity: the authoring of these applications can be an error-prone task considering the increasing number of media objects participating in these documents and the synchronization among them. For this reason, the authoring of IMDs should be supported by a structured methodology based on an intuitive abstraction that facilitates the design of complex IMDs. This paper presents a top-down, use-case-driven, architecture-centric, UML-based methodology that allows for the intuitive authoring of complex IMDs through the structured modeling of the presentation aspects, content, and complete behavior of these documents.
- Bridging the Gap: Driven by Models | Pp. 169-185
Using Task Models for Cascading Selective Undo
Aaron G. Cass; Chris S. T. Fernandes
Many studies have shown that selective undo, a variant of the widely-implemented linear undo, has many advantages over the prevailing model. In this paper, we define a task model for implementing selective undo in the face of dependencies that may exist between the undone action and other subsequent user actions. Our model accounts for these dependencies by identifying other actions besides the undone one that should also be undone to keep the application in a stable state. Our approach is built upon a process-programming language originally designed in the software engineering community. The result is a formal analytical framework by which the semantics of selective undo can be represented separately from the application itself. We present our task model and the selective undo algorithm, and discuss extensions that account for differing kinds of inter-action dependencies.
- Task-Centered Design | Pp. 186-201
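The abstract does not give the paper's task-model formalism, but the cascading rule it describes (undo the target action plus every later action that depends on it, directly or transitively) can be sketched as follows; the action names and dependency representation here are illustrative assumptions, not the authors' notation:

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    """A user action recorded in the history."""
    name: str
    # Names of earlier actions this action depends on.
    depends_on: list = field(default_factory=list)

def cascading_undo(history, target):
    """Return the set of action names that must be undone together
    with `target` to keep the application in a stable state."""
    undone = {target}
    changed = True
    while changed:  # iterate until the transitive closure is reached
        changed = False
        for action in history:
            if action.name not in undone and any(d in undone for d in action.depends_on):
                undone.add(action.name)
                changed = True
    return undone

history = [
    Action("create-box"),
    Action("color-box", depends_on=["create-box"]),
    Action("create-circle"),
    Action("group", depends_on=["color-box", "create-circle"]),
]

# Undoing "create-box" cascades through "color-box" to "group",
# but leaves the independent "create-circle" intact.
print(sorted(cascading_undo(history, "create-box")))
# → ['color-box', 'create-box', 'group']
```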
Exploring Interaction Space as Abstraction Mechanism for Task-Based User Interface Design
Christian M. Nielsen; Michael Overgaard; Michael B. Pedersen; Jan Stage; Sigge Stenild
Designing a user interface is often a complex undertaking. Model-based user interface design is an approach where models and mappings between them form the basis for creating and specifying the design of a user interface. Such models usually include descriptions of the tasks of the prospective user, but there is considerable variation in the other models that are employed. This paper explores the extent to which the notion of interaction space is useful as an abstraction mechanism to reduce the complexity of creating and specifying a user interface design. We present how we designed a specific user interface through use of design techniques and models that employ the notion of interaction space. This design effort departed from the task models in an object-oriented model of the users' problem and application domains. The lessons learned emphasize that the notion of interaction spaces is a useful abstraction mechanism that can help user interface designers exploit object-oriented analysis results and reduce the complexity of designing a user interface.
- Task-Centered Design | Pp. 202-216
Comparing NiMMiT and Data-Driven Notations for Describing Multimodal Interaction
Joan De Boeck; Chris Raymaekers; Karin Coninx
In the past few years, multimodal interaction has been gaining importance in virtual environments. Although multimodality makes interaction with the environment more intuitive and natural for the user, the development cycle of such an environment is often a long and expensive process. In our overall field of research, we investigate how model-based design can help shorten this process by designing the application with the use of high-level diagrams. In this scope, we developed 'NiMMiT', a graphical notation especially suitable for expressing multimodal user interaction. We have already experienced the benefits of NiMMiT in several in-house applications, and are currently assessing the value of NiMMiT with respect to existing notations. In this paper we report on our comparison of NiMMiT against some well-known data-driven modeling notations.
- Multi-modal User Interfaces | Pp. 217-229
Incorporating Tilt-Based Interaction in Multimodal User Interfaces for Mobile Devices
Jani Mäntyjärvi; Fabio Paternò; Carmen Santoro
Emerging ubiquitous environments raise the need to support multiple interaction modalities in diverse types of devices. Designing multimodal interfaces for ubiquitous environments using development tools creates challenges, since target platforms support different resources and interfaces. Model-based approaches have been recognized as useful for managing the increasing complexity consequent to the many available interaction platforms. However, they have usually focused on graphical and/or vocal modalities. This paper presents a solution for enabling the development of tilt-based hand gesture and graphical modalities for mobile devices in a multimodal user interface development tool. The challenges related to developing gesture-based applications for various types of devices, including mobile devices, are discussed in detail. The solution presented is based on a logical description language for hand-gesture user interfaces. Such a language allows us to obtain a user interface implementation on the target mobile platform. The solution is illustrated with an example application that can be accessed from both desktop and mobile devices supporting tilt-based gesture interaction.
- Multi-modal User Interfaces | Pp. 230-244
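As a rough illustration of the kind of binding such a logical description language might express (the function, thresholds, and action names below are assumptions for illustration, not the paper's notation), device tilt readings can be mapped to abstract interface events:

```python
def tilt_to_action(pitch, roll, threshold=15.0):
    """Map device tilt angles (in degrees) to an abstract navigation
    action. Pitch (forward/back tilt) scrolls; roll (sideways tilt)
    moves between items; small tilts below the threshold are ignored."""
    if pitch > threshold:
        return "scroll-down"
    if pitch < -threshold:
        return "scroll-up"
    if roll > threshold:
        return "next-item"
    if roll < -threshold:
        return "previous-item"
    return "idle"

print(tilt_to_action(20.0, 0.0))   # → scroll-down
print(tilt_to_action(0.0, -30.0))  # → previous-item
```

Keeping the mapping at this abstract level is what lets the same logical gesture description be retargeted to platforms with different sensors.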
An HCI Model for Usability of Sonification Applications
Ag. Asri Ag. Ibrahim; Andy Hunt
Sonification is a representation of data using sounds with the intention of communication and interpretation. The process and technique of converting the data into sound is called the sonification technique. One or more techniques might be required by a sonification application. However, sonification techniques are not always suitable for all kinds of data, and often custom techniques are used, where the design is tailored to the domain and nature of the data as well as the users' required tasks within the application. Therefore, it is important to assure the usability of the technique for the specific domain application being developed. This paper describes a new HCI Model for usability of sonification applications. It consists of two other models, namely the Sonification Application (SA) Model and the User Interpretation Construction (UIC) Model. The SA model will be used to explain the application from the designer's point of view. The UIC Model will be used to explain what the user might perceive and understand.
- Multi-modal User Interfaces | Pp. 245-258
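A minimal sketch of the data-to-sound conversion the abstract refers to, assuming the simplest parameter-mapping technique (data values linearly mapped to pitch, which is not the paper's SA/UIC models but a common baseline sonification):

```python
def map_to_pitch(values, f_min=220.0, f_max=880.0):
    """Linearly map data values onto a frequency range so that
    larger values sound higher: the simplest parameter-mapping
    sonification technique."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for flat data
    return [f_min + (v - lo) / span * (f_max - f_min) for v in values]

data = [3.0, 7.0, 5.0, 9.0]
print([round(f, 1) for f in map_to_pitch(data)])
# → [220.0, 660.0, 440.0, 880.0]
```

Whether such a mapping is usable for a given domain is exactly the question the paper's usability model addresses.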
Non-functional User Interface Requirements Notation (NfRn) for Modeling the Global Execution Context of Tasks
Demosthenes Akoumianakis; Athanasios Katsis; Nikolas Vidakis
This paper describes the rationale behind a user interface requirements management notation and a supporting tool suite. The notation is being developed to facilitate the design of interactions based on an account of non-functional requirements (NFRs), thus the acronym NfRn for the technique. NfRn is a graphical notation which is used to specify an interactive system’s global execution context (GEC). The resulting depiction is referred to as the Global Execution Context graph (GECg). The GECg is a visual construction, which consists of nodes, representing interaction scenarios, and directed links representing scenario relationships designating alternate execution, concurrency, ordering, and set-oriented relationships between two scenario nodes. The technique is particularly useful for specifying certain NFRs - such as adaptability, adaptivity, scalability and portability - which are especially relevant for anytime, anywhere access. In the paper, we demonstrate the application of the technique in the context of an on-going research project aiming to build an ‘electronic village’ of local interest in the region of Crete.
- Reflections on Tasks and Activities in Modeling | Pp. 259-274
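The GECg described above (scenario nodes joined by directed, labelled links) can be approximated with a toy structure; the scenario names and the relation labels below are illustrative assumptions based only on the relationship kinds the abstract lists:

```python
class ScenarioGraph:
    """A directed graph whose nodes are interaction scenarios and whose
    labelled edges record the relationship between two scenarios
    (alternate execution, concurrency, ordering, ...)."""

    def __init__(self):
        self.edges = []  # list of (source, relation, target) triples

    def relate(self, source, relation, target):
        self.edges.append((source, relation, target))

    def related(self, source, relation):
        """Scenarios reachable from `source` via edges with `relation`."""
        return [t for s, r, t in self.edges if s == source and r == relation]

g = ScenarioGraph()
g.relate("desktop-access", "alternate", "mobile-access")   # adaptability
g.relate("browse-catalogue", "ordering", "checkout")
g.relate("browse-catalogue", "concurrency", "chat-support")

print(g.related("browse-catalogue", "ordering"))
# → ['checkout']
```

Encoding the alternate-execution links explicitly is what makes NFRs such as portability and anytime/anywhere access visible in the specification rather than implicit in prose.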
Requirements Elicitation and Elaboration in Task-Based Design Needs More Than Task Modelling: A Case Study
Anke Dittmar; Andreas Gellendin; Peter Forbrig
In this paper, a small case study is presented to illustrate our conceptual understanding of a task-based requirements process. We argue that sub-models as known in model-based design (e.g. task models, dialog models) support the reflection about an existing work situation at a conceptual level and allow a formal specification of requirements. However, it is also shown that the integration of complementary analysis approaches facilitates a richer consideration of social as well as technical aspects. An intertwined creation of models differing in their focus and in the degree of abstraction and formality supports a more effective requirements elicitation and elaboration.
In addition, the paper discusses some crucial issues in task- and model-based design such as the ‘myth’ of generalised task models, the different roles of task and dialog models, or the influence of intentions on models of current situations. We hope to contribute to a further clarification of the problem space.
- Reflections on Tasks and Activities in Modeling | Pp. 275-291