Publications catalog – books

Universal Access in Human-Computer Interaction. Ambient Interaction: 4th International Conference on Universal Access in Human-Computer Interaction, UAHCI 2007, Held as Part of HCI International 2007, Beijing, China, July 22-27, 2007, Proceedings, Part II

Constantine Stephanidis (ed.)

Conference: 4th International Conference on Universal Access in Human-Computer Interaction (UAHCI), Beijing, China, July 22-27, 2007

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

User Interfaces and Human Computer Interaction; Multimedia Information Systems; Information Storage and Retrieval; Computer Communication Networks; Software Engineering; Logics and Meanings of Programs

Availability

Detected institution: Not detected
Publication year: 2007
Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-73280-8

Electronic ISBN

978-3-540-73281-5

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© Springer-Verlag Berlin Heidelberg 2007

Table of contents

Towards a Physical Based Interaction-Model for Information Visualization

Roland Blach; Günter Wenzel; Manfred Dangelmaier; Jörg Frohnmayer

The ongoing collection and storage of knowledge with computer technology leads to highly complex information environments. Efficient access to the information, and to its structure, becomes more and more complicated. The presented work investigates the usefulness of physically based interaction and representation behavior in immersive environments for information visualization. A framework is presented for mapping physical behavior onto abstract data entities and interactions. This framework is applied to an early prototype for market research.

- Part IV: Interaction Techniques and Devices | Pp. 750-758
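
A minimal sketch of the core idea, assuming a mass-spring reading of "physical behavior" (the abstract does not specify the paper's actual model): data entities become point masses, relations become springs, and layout emerges from damped dynamics. All names and parameters below are illustrative.

```python
# Illustrative sketch (not the authors' framework): abstract data entities as
# point masses, relations as springs; one explicit-Euler step of the dynamics.
import numpy as np

def spring_layout_step(pos, edges, vel, k=0.1, rest=1.0, damping=0.9, dt=0.05):
    """Advance a mass-spring system one time step.

    pos   : (n, 2) array of entity positions
    edges : list of (i, j) index pairs, the data relations
    vel   : (n, 2) array of velocities
    """
    force = np.zeros_like(pos)
    for i, j in edges:
        d = pos[j] - pos[i]
        dist = np.linalg.norm(d) + 1e-9
        f = k * (dist - rest) * d / dist      # Hooke's law along the edge
        force[i] += f
        force[j] -= f
    vel = damping * (vel + dt * force)        # damped explicit Euler
    return pos + dt * vel, vel

# Example: three entities in a chain settle toward unit spacing.
pos = np.array([[0.0, 0.0], [0.2, 0.0], [0.4, 0.0]])
vel = np.zeros_like(pos)
for _ in range(200):
    pos, vel = spring_layout_step(pos, [(0, 1), (1, 2)], vel)
print(pos)
```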

A Note on Brain Actuated Spelling with the Berlin Brain-Computer Interface

Benjamin Blankertz; Matthias Krauledat; Guido Dornhege; John Williamson; Roderick Murray-Smith; Klaus-Robert Müller

Brain-Computer Interfaces (BCIs) are systems capable of decoding neural activity in real time, thereby allowing a computer application to be directly controlled by the brain. Since the characteristics of such direct brain-to-computer interaction are limited in several aspects, one major challenge in BCI research is intelligent front-end design. Here we present the mental text entry application ‘Hex-o-Spell’, which incorporates principles of Human-Computer Interaction research into BCI feedback design. The system utilises the high visual display bandwidth to help compensate for the extremely limited control bandwidth, which operates with only two mental states; the timing of the state changes encodes most of the information. The display is visually appealing, and control is robust. The effectiveness and robustness of the interface was demonstrated at CeBIT 2006 (the world’s largest IT fair), where two subjects operated the mental text entry system at a speed of up to 7.6 char/min.

- Part IV: Interaction Techniques and Devices | Pp. 759-768
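
For orientation, a toy simulation of the two-state, timing-driven selection principle the abstract describes. This is an illustrative sketch, not the Berlin BCI implementation; the cell layout and thresholds are invented for the example.

```python
# Toy two-state selector in the spirit of Hex-o-Spell: state 0 rotates a
# pointer over six cells, state 1 grows it; holding state 1 long enough
# commits the currently pointed cell. Timing of state changes carries the info.
ROTATE, EXTEND = 0, 1
CELLS = ["ABCDE", "FGHIJ", "KLMNO", "PQRST", "UVWXY", "Z_.,?"]

def select_cell(control_stream, rotate_period=5, commit_length=8):
    """control_stream: iterable of binary classifier outputs, one per time step."""
    pointer, rotate_timer, extend_timer = 0, 0, 0
    for state in control_stream:
        if state == ROTATE:
            extend_timer = 0
            rotate_timer += 1
            if rotate_timer >= rotate_period:   # how long we rotate = how far we turn
                pointer = (pointer + 1) % len(CELLS)
                rotate_timer = 0
        elif state == EXTEND:
            extend_timer += 1
            if extend_timer >= commit_length:   # sustained state 1 = selection
                return CELLS[pointer]
    return None

# Rotate twice (10 steps of state 0), then hold state 1 to pick cell 2.
print(select_cell([0] * 10 + [1] * 8))  # -> "KLMNO"
```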

EOG Pattern Recognition Trial for a Human Computer Interface

Sara Brunner; Sten Hanke; Siegfried Wassertheuer; Andreas Hochgatterer

The setup of a human-computer interaction electrooculography (EOG) measurement trial for developing pattern recognition algorithms is described. With an easy-to-wear EOG measurement device, we realized performance tests with a group of normal individuals as well as with one individual suffering from multiple sclerosis (MS). The individuals had to perform different eye movement patterns for coding information to control the environment. Different pattern recognition approaches in the time domain have been tried and implemented to perform online performance tests. The aim is to develop an EOG-based communication device based on pattern recognition algorithms for users with limited functionality.

- Part IV: Interaction Techniques and Devices | Pp. 769-776
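
A hedged sketch of what time-domain EOG pattern recognition can look like, assuming a single horizontal channel and simple slope thresholding; the authors' actual algorithms, channels, and parameters are not given in the abstract.

```python
# Assumed (generic) approach: detect left/right saccades from a horizontal EOG
# channel by thresholding the signal's derivative; a sequence of detected
# saccades then forms a coded command.
import numpy as np

def detect_saccades(eog, fs, thresh=150.0, refractory=0.2):
    """Return a list of 'L'/'R' events from a horizontal EOG trace (in µV)."""
    dv = np.diff(eog) * fs                 # approximate derivative, µV/s
    events, last_t = [], -np.inf
    for n, slope in enumerate(dv):
        t = n / fs
        if t - last_t < refractory:        # ignore the tail of the same saccade
            continue
        if slope > thresh:
            events.append("R"); last_t = t
        elif slope < -thresh:
            events.append("L"); last_t = t
    return events

# Synthetic trace: baseline, a rightward step, then a return leftward step.
fs = 250
sig = np.concatenate([np.zeros(fs), np.full(fs, 300.0), np.zeros(fs)])
print(detect_saccades(sig, fs))  # -> ['R', 'L']
```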

Continuous Recognition of Human Facial Expressions Using Active Appearance Model

Kyoung-Sic Cho; Yong-Guk Kim

Recognizing human facial expressions continuously is useful since it has many potential applications. We have implemented a continuous facial expression recognition system using the Active Appearance Model (AAM). AAM has been widely used in face tracking, face recognition, and object recognition tasks. In this study, we adopt an independent AAM using the Inverse Compositional Image Alignment method. The evaluation of this system was carried out with the standard Cohn-Kanade facial expression database. Results show that it could be useful for many applications.

- Part IV: Interaction Techniques and Devices | Pp. 777-783
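
The Inverse Compositional Image Alignment step mentioned above can be illustrated in isolation. The sketch below assumes a pure translation warp rather than a full AAM shape/appearance warp, but shows the defining trick: gradients and the Gauss-Newton Hessian are precomputed once on the template, and each iteration composes the inverse of the incremental warp.

```python
# Minimal inverse-compositional alignment (Baker-Matthews style), restricted
# to a translation warp for brevity; a real AAM also updates shape/appearance
# parameters. Not the paper's implementation.
import numpy as np
from scipy.ndimage import map_coordinates, sobel

def ic_translation_align(template, image, iters=30):
    """Estimate the (dy, dx) translation warping `template` onto `image`."""
    # Precomputed once, on the template side -- the point of the IC formulation:
    gy = sobel(template, axis=0) / 8.0               # Sobel ~ 8x the derivative
    gx = sobel(template, axis=1) / 8.0
    J = np.stack([gy.ravel(), gx.ravel()], axis=1)   # steepest-descent images
    H_inv = np.linalg.inv(J.T @ J)                   # Gauss-Newton Hessian
    ys, xs = np.mgrid[0:template.shape[0], 0:template.shape[1]]
    p = np.zeros(2)                                  # current warp (dy, dx)
    for _ in range(iters):
        warped = map_coordinates(image, [ys + p[0], xs + p[1]], order=1)
        dp = H_inv @ (J.T @ (warped - template).ravel())
        p -= dp                                      # compose inverted increment
        if np.linalg.norm(dp) < 1e-4:
            break
    return p
```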

Robust Extraction of Moving Objects Based on Hue and Hue Gradient

Yoo-Joo Choi; Je-Sung Lee; We-Duke Cho

This paper presents a new method for robustly extracting moving objects in an environment with varying illumination. The proposed method applies a background subtraction scheme based on hue and hue gradient to minimize the effect of illumination changes. First, we train on the background images in the HSI color space and build a Gaussian background model with respect to hue and hue gradient. Next, image subtraction is performed between the trained background image and the current input image based on the Gaussian background model. Finally, morphological operations are applied to remove background noise. In this paper, we compare previous subtraction schemes with our method, applied to both hand and body tracking, in order to prove the robustness of the proposed method under sudden illumination changes.

- Part IV: Interaction Techniques and Devices | Pp. 784-791
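
A compact sketch of the described pipeline under stated assumptions (per-pixel Gaussian statistics, a k-sigma subtraction test, binary morphology for cleanup); function names and thresholds are illustrative, not the paper's.

```python
# Hue / hue-gradient background subtraction, sketched with numpy and scipy.
import numpy as np
from scipy import ndimage

def hue_gradient(hue):
    gy, gx = np.gradient(hue)                  # spatial gradient of the hue channel
    return np.hypot(gy, gx)

def train_background(hue_frames):
    """hue_frames: (T, H, W) stack of hue images of the empty scene."""
    feats = np.stack([np.stack([h, hue_gradient(h)]) for h in hue_frames])
    return feats.mean(axis=0), feats.std(axis=0) + 1e-6   # per-pixel mu, sigma

def extract_foreground(hue, mu, sigma, k=2.5):
    feat = np.stack([hue, hue_gradient(hue)])              # (2, H, W) features
    mask = (np.abs(feat - mu) > k * sigma).any(axis=0)     # k-sigma subtraction test
    mask = ndimage.binary_opening(mask)                    # remove salt noise
    return ndimage.binary_closing(mask)                    # close small holes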

Detailed Monitoring of User’s Gaze and Interaction to Improve Future E-Learning

Heiko Drewes; Richard Atterer; Albrecht Schmidt

In this paper, we investigate how to use future interaction technologies to enhance learning technologies. We examine in detail how tracking the mouse pointer and observing the user’s gaze can help to monitor the use of web applications and in particular E-learning applications. To improve learning and teaching, it is of interest to understand in what order and to what extent users read texts, how much time they spend on individual parts of the teaching materials, and where they get stuck. Based on a standard web browser as an application platform, extended with a gaze tracking facility, we conducted studies to explore the feasibility of this novel approach. The concept includes an extension of current technologies to allow JavaScript code running in the browser to access the current gaze position of the user. Our work shows how pieces of web technology and eye gaze tracking can be put together to create a new platform for E-learning that provides additional benefits for learners and teachers.

- Part IV: Interaction Techniques and Devices | Pp. 802-811
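
The kind of measure this enables is easy to illustrate. The sketch below (in Python for brevity, although the paper's platform is browser-side JavaScript) maps gaze samples to page regions and accumulates per-region reading time; the region names and sampling rate are assumptions.

```python
# Illustrative post-processing, not the paper's system: accumulate dwell time
# of gaze samples over labeled page regions (e.g. paragraph bounding boxes).
def dwell_per_region(samples, regions, dt=1 / 60):
    """samples: iterable of (x, y) gaze points at rate 1/dt;
    regions: {name: (x0, y0, x1, y1)} bounding boxes in page coordinates."""
    dwell = {name: 0.0 for name in regions}
    for x, y in samples:
        for name, (x0, y0, x1, y1) in regions.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                dwell[name] += dt
                break
    return dwell

regions = {"para1": (0, 0, 800, 200), "para2": (0, 200, 800, 400)}
gaze = [(400, 100)] * 120 + [(400, 300)] * 240   # 2 s on para1, 4 s on para2
print(dwell_per_region(gaze, regions))
```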

Facial Expression Recognition Based on Color Lines Model and Region Based Processing

GeonAe Eom; Hyun-Seung Yang

Facial expression involves various movements. We represent various facial expressions as simple regions of the facial area and measure the recognition rate. There are two steps for obtaining the expression regions. The first step is to extract the facial area from the input image with the color lines model, and the second step is to find regions of expression on the extracted facial area with the active contour without edges method as a region-based process. We have tested this representation method on facial expressions from the open facial expression database JAFFE (Japanese Female Facial Expressions). With this method, we can obtain a facial expression region without any manual work. In the future, we will compensate for ambiguities among expressions with a person-specific method, and this method will be extended to image sequences.

- Part IV: Interaction Techniques and Devices | Pp. 812-817
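
As a rough illustration of the two-step structure, the sketch below substitutes a crude skin-hue gate for the paper's color lines model and uses scikit-image's Chan-Vese implementation for the "active contour without edges" step; the thresholds are invented for the example.

```python
# Two-step sketch under stated assumptions: (1) gate the face area by color,
# (2) run region-based segmentation (Chan-Vese) on the face crop.
import numpy as np
from skimage.color import rgb2gray, rgb2hsv
from skimage.segmentation import chan_vese

def expression_regions(rgb):
    """rgb: float image in [0, 1]; returns a binary segmentation of the face crop."""
    hsv = rgb2hsv(rgb)
    skin = (hsv[..., 0] < 0.11) & (hsv[..., 1] > 0.2)      # rough skin-hue gate
    if not skin.any():
        return None
    ys, xs = np.nonzero(skin)                              # face bounding box
    face = rgb2gray(rgb)[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    return chan_vese(face, mu=0.1)                         # region-based step
```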

Enhancing Human-Computer Interaction with Embodied Conversational Agents

Mary Ellen Foster

We survey recent research in which the impact of an embodied conversational agent on human-computer interaction has been assessed through a human evaluation. In some cases, the evaluation involved comparing different versions of the agent against itself in the context of a full interactive system; in others, it measured the effect on user perception of spoken output of specific aspects of the embodied agent’s behaviour. In almost all of the studies, an embodied agent that displays appropriate non-verbal behaviour was found to enhance the interaction.

- Part IV: Interaction Techniques and Devices | Pp. 828-837

Comparison Between Event Related Potentials Obtained by Syllable Recall Tasks and by Associative Recall Tasks

Mariko F. Funada; Miki Shibukawa; Tadashi Funada; Satoki P. Ninomija; Yoshihide Igarashi

The final goal of this research is to establish useful verbal communication systems between computers and persons, or between handicapped persons and normal persons. As a substantial step toward this goal, we investigate Event-Related Potentials (ERPs, for short) derived from electroencephalograms (EEGs, for short). By observing ERPs, we estimate recalled words, phrases, or sentences that may contain homonyms or related words. In particular, we pay attention to the difference between ERPs caused by recalling a single syllable individually, without other syllables, and ERPs caused by recalling a syllable together with a word containing that syllable. From our observation of this difference, we believe it may be useful to discuss the possibility of estimating a recalled word by combining ERPs caused by syllables.

- Part IV: Interaction Techniques and Devices | Pp. 838-847
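
For readers unfamiliar with ERPs: an ERP is conventionally obtained by averaging EEG epochs time-locked to stimulus onsets after baseline correction, so comparing the two recall conditions means comparing two such averages. A generic sketch (standard practice, not the paper's code):

```python
# Standard ERP computation: epoch, baseline-correct, average.
import numpy as np

def compute_erp(eeg, onsets, fs, pre=0.2, post=0.8):
    """eeg: 1-D signal for one channel; onsets: stimulus times in seconds."""
    n_pre, n_post = int(pre * fs), int(post * fs)
    epochs = []
    for t in onsets:
        i = int(t * fs)
        if i - n_pre < 0 or i + n_post > len(eeg):
            continue                                  # skip truncated epochs
        ep = eeg[i - n_pre : i + n_post].astype(float)
        ep -= ep[:n_pre].mean()                       # baseline correction
        epochs.append(ep)
    return np.mean(epochs, axis=0)                    # the event-related potential

# Comparing conditions then reduces to comparing two averages, e.g.:
# erp_syllable = compute_erp(eeg, syllable_onsets, fs)
# erp_in_word  = compute_erp(eeg, word_onsets, fs)
# difference   = erp_in_word - erp_syllable
```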

Gaze as a Supplementary Modality for Interacting with Ambient Intelligence Environments

Daniel Gepner; Jérôme Simonin; Noëlle Carbonell

We present our current research on the implementation of gaze as an efficient and usable pointing modality, supplementary to speech, for interacting with augmented objects in our daily environment or with large displays, especially immersive virtual reality environments such as reality centres and CAVEs. We are also addressing issues relating to the use of gaze as the main interaction input modality. We have designed and developed two operational user interfaces: one for providing motor-disabled users with easy gaze-based access to map applications and graphical software; the other for iteratively testing and improving the usability of gaze-contingent displays.

- Part IV: Interaction Techniques and Devices | Pp. 848-857
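
A common way to turn gaze into a pointing modality is dwell-time selection; the sketch below is an assumed, generic policy, not the authors' implementation, and the radius and dwell threshold are illustrative.

```python
# Dwell-time gaze pointing: fire a selection when gaze stays within a small
# radius for a threshold duration.
import math

def dwell_select(samples, radius=40.0, dwell_s=0.5, dt=1 / 60):
    """Yield (x, y) selection events from an iterable of gaze points."""
    anchor, held = None, 0.0
    for x, y in samples:
        if anchor is not None and math.dist((x, y), anchor) <= radius:
            held += dt
            if held >= dwell_s:
                yield anchor                  # fixation held long enough: select
                anchor, held = None, 0.0
        else:
            anchor, held = (x, y), 0.0        # fixation broken: restart the timer

# 45 samples at 60 Hz on one spot (0.75 s) -> one selection event.
print(list(dwell_select([(100, 200)] * 45)))
```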