Publications catalog - books

Artificial Intelligence for Human Computing: ICMI 2006 and IJCAI 2007 International Workshops, Banff, Canada, November 3, 2006, Hyderabad, India, January 6, 2007, Revised Selected and Invited Papers

Thomas S. Huang; Anton Nijholt; Maja Pantic; Alex Pentland (eds.)

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

User Interfaces and Human Computer Interaction; Artificial Intelligence (incl. Robotics); Computer Graphics; Pattern Recognition; Image Processing and Computer Vision

Availability

Detected institution: Not detected
Publication year: 2007
Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-72346-2

Electronic ISBN

978-3-540-72348-6

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2007

Publication rights information

© Springer-Verlag Berlin Heidelberg 2007

Table of contents

Feedback Loops in Communication and Human Computing

Rieks op den Akker; Dirk Heylen

Building systems that are able to analyse communicative behaviours or take part in conversations requires a sound methodology in which the complex organisation of conversations is understood and tested on real-life samples. Data-driven approaches to human computing are valuable not only for the engineering of systems, but can also provide feedback to the study of conversations between humans and between humans and machines.

III - Anthropocentric Interaction Models for Human Computing | Pp. 215-233

Evaluating the Future of HCI: Challenges for the Evaluation of Emerging Applications

Ronald Poppe; Rutger Rienks; Betsy van Dijk

Current evaluation methods are inappropriate for emerging HCI applications. In this paper, we give three examples of such applications and show that traditional evaluation methods fail. We identify trends in HCI development and discuss the issues that arise with evaluation. We aim to raise awareness that evaluation, too, has to evolve in order to support the emerging trends in HCI systems.

III - Anthropocentric Interaction Models for Human Computing | Pp. 234-250

Gaze-X: Adaptive, Affective, Multimodal Interface for Single-User Office Scenarios

Ludo Maat; Maja Pantic

This paper describes an intelligent system that we developed to support affective multimodal human-computer interaction (AMM-HCI), in which the user's actions and emotions are modeled and then used to adapt the interaction and support the user in his or her activity. The proposed system, named Gaze-X, is based on sensing and interpreting the human part of the computer's context, known as W5+ (who, where, what, when, why, how). It integrates a number of natural human communicative modalities, including speech, eye-gaze direction, face and facial expression, with standard HCI modalities such as keystrokes, mouse movements, and active-software identification. These inputs feed decision-making processes that adapt the HCI to support the user in his or her activity according to his or her preferences. A usability study conducted in an office scenario with a number of users indicates that Gaze-X is perceived as effective, easy to use, useful, and affectively qualitative.
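
As a rough illustration of the W5+ fusion and adaptation loop the abstract describes, the following Python sketch shows one possible structure. All class, function, and label names (ModalityObservation, fuse_context, choose_adaptation, the "frustrated" label, and so on) are hypothetical stand-ins, not taken from Gaze-X itself.

from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class ModalityObservation:
    modality: str        # e.g. "gaze", "speech", "face", "keystrokes"
    label: str           # e.g. "screen_left", "frustrated", "fast_typing"
    confidence: float    # detector confidence in [0, 1]

@dataclass
class W5PlusContext:
    who: str = "unknown"                  # identified user
    where: str = "unknown"                # gaze / attention target
    what: str = "unknown"                 # active application or task
    when: datetime = field(default_factory=datetime.now)
    why: str = "unknown"                  # inferred goal or affective state
    how: List[str] = field(default_factory=list)  # modalities in use

def fuse_context(user: str, active_app: str,
                 observations: List[ModalityObservation],
                 threshold: float = 0.5) -> W5PlusContext:
    """Keep confident observations and slot them into the W5+ record."""
    ctx = W5PlusContext(who=user, what=active_app)
    for obs in observations:
        if obs.confidence < threshold:
            continue
        ctx.how.append(obs.modality)
        if obs.modality == "gaze":
            ctx.where = obs.label
        elif obs.modality in ("speech", "face"):
            ctx.why = obs.label          # crude affect-as-goal proxy
    return ctx

def choose_adaptation(ctx: W5PlusContext) -> str:
    """Toy decision step: adapt the interface to the sensed context."""
    if ctx.why == "frustrated":
        return "offer_help_dialog"
    if ctx.where == "screen_left" and ctx.what == "editor":
        return "enlarge_left_pane"
    return "no_change"

ctx = fuse_context("alice", "editor",
                   [ModalityObservation("gaze", "screen_left", 0.9),
                    ModalityObservation("face", "frustrated", 0.3)])
print(choose_adaptation(ctx))  # -> enlarge_left_pane (face cue below threshold)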

III - Anthropocentric Interaction Models for Human Computing | Pp. 251-271

SmartWeb Handheld — Multimodal Interaction with Ontological Knowledge Bases and Semantic Web Services

Daniel Sonntag; Ralf Engel; Gerd Herzog; Alexander Pfalzgraf; Norbert Pfleger; Massimo Romanelli; Norbert Reithinger

SmartWeb aims to provide intuitive multimodal access to a rich selection of Web-based information services. We report on the current prototype with a smartphone client interface to the Semantic Web. An advanced ontology-based representation of facts and media structures serves as the central description for rich media content. The underlying content is accessed through conventional web service middleware, which connects the ontological knowledge base to an intelligent web service composition module for external web services; this module is able to translate between ordinary XML-based data structures and explicit semantic representations for user queries and system responses. The presentation module renders the media content and the results generated from the services, and provides a detailed description of the content and its layout to the fusion module. The user can then employ multiple modalities, such as speech and gestures, to interact with the presented multimedia material.
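
The translation step between ordinary XML-based service responses and explicit semantic representations could look roughly like the following Python sketch. The XML layout, the smartweb:/onto: prefixes, and the function name are invented for illustration; they are not SmartWeb's actual interfaces.

import xml.etree.ElementTree as ET
from typing import List, Tuple

Triple = Tuple[str, str, str]  # (subject, predicate, object)

SAMPLE_RESPONSE = """
<weatherResult>
  <city>Berlin</city>
  <temperature unit="celsius">21</temperature>
  <condition>sunny</condition>
</weatherResult>
"""

def xml_to_triples(xml_text: str, instance_id: str) -> List[Triple]:
    """Lift a flat XML service response into subject-predicate-object triples."""
    root = ET.fromstring(xml_text)
    subject = f"smartweb:{instance_id}"
    triples: List[Triple] = [(subject, "rdf:type", f"onto:{root.tag}")]
    for child in root:
        triples.append((subject, f"onto:{child.tag}", child.text.strip()))
        # XML attributes become qualifying triples alongside the element value.
        for attr, value in child.attrib.items():
            triples.append((subject, f"onto:{child.tag}_{attr}", value))
    return triples

for triple in xml_to_triples(SAMPLE_RESPONSE, "weather42"):
    print(triple)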

III - Anthropocentric Interaction Models for Human Computing | Pp. 272-295

A Learning-Based High-Level Human Computer Interface for Face Modeling and Animation

Volker Blanz

This paper describes a system for the animation and modeling of faces in images or in 3D. It gives users high-level control of facial appearance, thanks to a learning-based approach that extracts class-specific information from a database of 3D scans. The modification tools include changes of facial attributes, such as body weight, a masculine or feminine look, or overall head shape. Facial expressions are learned from examples and can be applied to new individuals. The system is intrinsically based on 3D face shapes and surface colors, but it can be applied to existing images as well, using a 3D shape reconstruction algorithm that operates on single images. After reconstruction, faces can be modified and drawn back into the original image, so users can manipulate, animate and exchange faces in images at any given pose and illumination. The system can be used to create face models or images from a vague description or mental image, for example based on the recollection of eyewitnesses in forensic applications. For this specific problem, we present a software tool and a user study with a forensic artist. Our model-based approach may be considered a prototype implementation of a high-level user interface for controlling meaningful attributes of human faces.
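
A minimal numpy sketch of the learning-based attribute control described above: faces are coefficient vectors in a model space learned from a scan database, an attribute direction (e.g. body weight) is estimated from labeled examples by least squares, and a face is edited by moving along that direction. The data and function names here are placeholders, not the paper's implementation.

import numpy as np

rng = np.random.default_rng(0)

# Placeholder "database": 200 faces as 50-dimensional model coefficients,
# each with a scalar attribute label (e.g. rated body weight).
coeffs = rng.normal(size=(200, 50))
labels = coeffs @ rng.normal(size=50) + 0.1 * rng.normal(size=200)

def learn_attribute_direction(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Least-squares direction in coefficient space that predicts the label."""
    X_c = X - X.mean(axis=0)
    y_c = y - y.mean()
    w, *_ = np.linalg.lstsq(X_c, y_c, rcond=None)
    return w / np.linalg.norm(w)

def edit_face(face: np.ndarray, direction: np.ndarray,
              strength: float) -> np.ndarray:
    """Move a face's coefficients along the learned attribute direction."""
    return face + strength * direction

weight_axis = learn_attribute_direction(coeffs, labels)
heavier_face = edit_face(coeffs[0], weight_axis, strength=2.0)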

III - Anthropocentric Interaction Models for Human Computing | Pp. 296-315

Challenges for Virtual Humans in Human Computing

Dennis Reidsma; Zsófia Ruttkay; Anton Nijholt

The vision of Ambient Intelligence (AmI) presumes a plethora of embedded services and devices that all endeavor to support humans in their daily activities as unobtrusively as possible. Hardware gets distributed throughout the environment, occupying even the fabric of our clothing. The environment is equipped with a diversity of sensors, the information of which can be accessed from all over the AmI network. Individual services are distributed over hardware, share sensors with other services and are generally detached from the traditional single-access-point computer (see also the paper of Pantic et al. in this volume [51]).

III - Anthropocentric Interaction Models for Human Computing | Pp. 316-338

Affect Detection and an Automated Improvisational AI Actor in E-Drama

Li Zhang; Marco Gillies; John A. Barnden; Robert J. Hendley; Mark G. Lee; Alan M. Wallington

Enabling machines to understand the emotions and feelings that human users express in their natural-language textual input during interaction is a challenging issue in Human Computing. The work presented here is our contribution toward such automation. We report work on adding affect detection to an existing e-drama program, a text-based software system for dramatic improvisation in simple virtual scenarios, intended primarily for use in learning contexts. The system allows a human director to monitor improvisations and make interventions, for instance in reaction to excessive, insufficient or inappropriate emotions in the characters' speeches. As part of an endeavour to partially automate the director's functions, and to allow for automated affective bit-part characters, we have developed an affect-detection module. It is aimed at detecting affective aspects (concerning emotions, moods, value judgments, etc.) of human-controlled characters' textual "speeches". The work also accompanies basic research into how affect is conveyed linguistically; a distinctive feature of the project is a focus on the metaphorical ways in which affect is conveyed. Moreover, we also describe how the detected affective states activate the animation engine to produce gestures for human-controlled characters. The description of our approach in this paper is taken in part from our previous publications [1, 2], with new contributions mainly on metaphorical language processing (practical and theoretical), 3D emotional animation generation, and user-testing evaluation. Finally, our work on affect detection in open-ended improvisational text contributes to the development of automatic understanding of human language and emotion. The generation of believable emotional animations based on detected affective states, and the production of appropriate responses for the automated affective bit-part character, contribute greatly to an easy-to-use and innovative user interface in e-drama, leading to a high level of user engagement and enjoyment.
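
As a toy stand-in for the affect-detection-to-animation pipeline described above, the following Python sketch maps a character's textual "speech" to an affective state via a small cue-word lexicon and then to a gesture tag. The real module is far richer (handling metaphor, moods, and value judgments), and every lexicon entry and gesture name below is invented for illustration.

AFFECT_LEXICON = {
    "hate": "angry", "furious": "angry", "stupid": "angry",
    "love": "happy", "great": "happy", "wonderful": "happy",
    "scared": "afraid", "terrified": "afraid",
    "sorry": "sad", "miserable": "sad",
}

GESTURES = {
    "angry": "clench_fists",
    "happy": "open_arms",
    "afraid": "step_back",
    "sad": "lower_head",
    "neutral": "idle",
}

def detect_affect(speech: str) -> str:
    """Return the first affect whose cue word appears in the speech."""
    for word in speech.lower().split():
        cue = word.strip(".,!?")
        if cue in AFFECT_LEXICON:
            return AFFECT_LEXICON[cue]
    return "neutral"

def animate(speech: str) -> str:
    """Map the detected affect to a gesture tag for the animation engine."""
    return GESTURES[detect_affect(speech)]

print(animate("I hate this place!"))  # -> clench_fists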

III - Anthropocentric Interaction Models for Human Computing | Pp. 339-358