Publications catalog - books
Universal Access in Human-Computer Interaction. Ambient Interaction: 4th International Conference on Universal Access in Human-Computer Interaction, UAHCI 2007 Held as Part of HCI International 2007 Beijing, China, July 22-27, 2007 Proceedings, Part
Constantine Stephanidis (ed.)
Conference: 4th International Conference on Universal Access in Human-Computer Interaction (UAHCI). Beijing, China. July 22-27, 2007
Abstract/Description – provided by the publisher
Not available.
Keywords – provided by the publisher
User Interfaces and Human Computer Interaction; Multimedia Information Systems; Information Storage and Retrieval; Computer Communication Networks; Software Engineering; Logics and Meanings of Programs
Availability
Detected institution | Publication year | Browse | Download | Request |
---|---|---|---|---|
Not detected | 2007 | SpringerLink | | |
Information
Resource type:
books
Print ISBN
978-3-540-73280-8
Electronic ISBN
978-3-540-73281-5
Publisher
Springer Nature
Country of publication
United Kingdom
Publication date
2007
Publication rights information
© Springer-Verlag Berlin Heidelberg 2007
Subject coverage
Table of contents
Integrating Multimodal Cues Using Grammar Based Models
Manuel Giuliani; Alois Knoll
Multimodal systems must process several input streams efficiently and represent the input in a way that allows the establishment of connections between modalities. This paper describes a multimodal system that uses Combinatory Categorial Grammars to parse several input streams and translate them into logical formulas. These logical formulas are expressed in Hybrid Logic, which is very suitable for multimodal integration because it can represent temporal relationships between modes in an abstract way. This level of abstraction makes it possible to define rules for multimodal processing in a straightforward way.
- Part IV: Interaction Techniques and Devices | Pp. 858-867
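The core idea of the abstract above — naming each modality's contribution and recording a temporal relation between modes before fusing them — can be illustrated with a small sketch. This is not the authors' CCG/Hybrid Logic implementation: the `Event` class, the `sync`/`seq` relation names, and the formula syntax are invented here purely to show the shape of the approach.

```python
from dataclasses import dataclass

@dataclass
class Event:
    modality: str   # e.g. "speech" or "gesture"
    content: str    # logical content contributed by this input
    start: float    # onset time in seconds
    end: float      # offset time in seconds

def overlaps(a: Event, b: Event) -> bool:
    """Allen-style overlap check between two event intervals."""
    return a.start < b.end and b.start < a.end

def integrate(speech: Event, gesture: Event) -> str:
    """Fuse two modalities into a hybrid-logic-style formula string.
    Nominals (@s, @g) name the events; 'sync' marks temporal overlap."""
    relation = "sync" if overlaps(speech, gesture) else "seq"
    return f"@s({speech.content}) & @g({gesture.content}) & {relation}(s, g)"

formula = integrate(Event("speech", "take(this)", 0.0, 1.2),
                    Event("gesture", "point(obj3)", 0.4, 0.9))
print(formula)  # @s(take(this)) & @g(point(obj3)) & sync(s, g)
```

Because the temporal relation is computed once and stored abstractly in the formula, later processing rules only need to match on `sync`/`seq` rather than on raw timestamps — the abstraction the abstract credits to Hybrid Logic.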
New Type of Auditory Progress Bar: Exploration, Design and Evaluation
Shuo Hsiu Hsu; Cécile Le Prado; Stéphane Natkin; Claude Liard
In this paper, we explore a new type of auditory progress bar by analyzing the characteristics of the visual progress bar and the contexts of auditory applications. A bearing scenario in the forward/reverse modes of a digital compass was selected to implement the auditory progress bar. The auditory cues play an interactive role in the bearing task: they change according to the user's operating behavior. Composed of sound signals and silent pauses, the auditory cues are generated from a formula based on the warm/cold metaphor. A method incorporating foreground/background sounds is also designed to provide different ranges of progress information and progress updates expressed through auditory cues. Finally, four versions of auditory cues are presented as solutions for the interactive auditory progress bar, and a pilot study evaluating them is reported.
- Part IV: Interaction Techniques and Devices | Pp. 868-877
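The warm/cold metaphor described above — cues that come faster as the user's bearing approaches the target — can be sketched as a mapping from angular distance to silent-pause length. The function below is an illustrative assumption, not the paper's actual formula; the linear mapping and the parameter names are invented.

```python
def cue_pause(current, target, full_scale=360.0,
              min_pause=0.1, max_pause=1.0):
    """Map bearing distance-to-target to a silent-pause length in seconds.

    Warm/cold metaphor: the closer the current bearing is to the target,
    the 'warmer' the feedback, so pulses repeat faster (shorter pauses).
    """
    distance = abs(target - current) % full_scale
    distance = min(distance, full_scale - distance)  # shortest arc
    ratio = distance / (full_scale / 2)              # 0 (on target) .. 1
    return min_pause + ratio * (max_pause - min_pause)

print(round(cue_pause(0, 0), 2))     # 0.1 -> on target, fastest pulses
print(round(cue_pause(90, 270), 2))  # 1.0 -> opposite bearing, slowest
```

A player loop would then emit one sound signal, sleep for `cue_pause(...)`, and repeat — so the pulse rate itself carries the progress information without any visual display.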
Factors Influencing the Usability of Icons in the LCD Touch Screens
Hsinfu Huang; Wang-Chin Tsai; Hsin-His Lai
The purpose of this study was to investigate factors influencing the usability of icons on LCD touch screens. Subjects filled in a questionnaire and rated questions on 7-point Likert scales; 20 evaluation items were collected from relevant interface design guidelines. A total of 30 subjects participated in the investigation: 10 users with no experience, 10 with click experience (PDA users), and 10 with touch experience (LCD touch screen users). As the main statistical test, a principal component analysis (PCA) was performed with SPSS/PC. The results of the PCA showed that the usability of touch icons was affected by seven factors: touch field, semantics quality, dynamics, hit quality, tactility, color quality and shape quality. Among these, touch field was the most important. Finally, the results of correlation analyses indicated that experience was related to importance ratings for usability; in particular, subjects showed a significant difference for the size element (p<0.05). Furthermore, users with pen-click experience, such as PDA users, still performed better on the touch screen even when using smaller icons.
- Part IV: Interaction Techniques and Devices | Pp. 878-887
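The analysis pipeline the study describes — Likert ratings reduced to a handful of usability factors by PCA — can be sketched in a few lines. The ratings below are random stand-ins for the questionnaire data (30 subjects x 20 items), not the study's results; only the procedure is illustrated.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical 30 subjects x 20 Likert items (1..7); random placeholder data.
ratings = rng.integers(1, 8, size=(30, 20)).astype(float)

# PCA via eigendecomposition of the correlation matrix
# (the default treatment of standardized items in SPSS).
corr = np.corrcoef(ratings, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Kaiser criterion: retain components with eigenvalue > 1.
n_factors = int(np.sum(eigvals > 1.0))
explained = eigvals[:n_factors].sum() / eigvals.sum()
print(n_factors, round(float(explained), 2))
```

With real questionnaire data, the loadings in `eigvecs[:, :n_factors]` are what would be inspected (typically after rotation) to name factors such as "touch field" or "hit quality".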
Smart SoftPhone Device for the Network Quality Parameters Discovery and Measurement
Jinsul Kim; Minsoo Hahn; Hyun-Woo Lee
Due to the shared nature of current network structures, guaranteeing end-to-end quality of service (QoS) for Internet applications is sometimes difficult, which has driven the development of smart devices with multi-modal functionality for ubiquitous network and computing environments. In this paper, we design a smart SoftPhone device for guaranteeing QoS that can discover and measure various network parameters during a real-time phone-call service over an IP network. The SoftPhone consists of four main blocks that control and measure various parameters independently, based on the UDP/SIP/RTP protocols, during an end-to-end voice service. We also provide critical-message reporting procedures and management schemes to guarantee QoS when using the smart SoftPhone device. To report quality parameters optimally while call sessions of a VoIP service are being established, we design management module blocks for call sessions and for quality reporting. Finally, to evaluate the performance of the smart SoftPhone with scientific exactitude, we examine the proposed technique in a real-time phone-call service over a heterogeneous network. The experimental results confirm that the developed smart SoftPhone is very useful for quality measurement in quality-guaranteed real-time VoIP services, and that it could also be applied as a packet-compensation device to improve speech quality.
- Part IV: Interaction Techniques and Devices | Pp. 898-907
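One concrete quality parameter such a SoftPhone would measure from its RTP stream is interarrival jitter, which RFC 3550 defines with a 1/16-gain smoothed estimator. The sketch below shows that standard computation; the packet trace is invented, and how the actual device aggregates and reports the value is not specified in the abstract.

```python
def rfc3550_jitter(packets, ticks_per_ms=8):
    """Interarrival jitter per RFC 3550, as an RTP endpoint would
    maintain it for quality reporting.

    packets: list of (arrival_ms, rtp_timestamp) pairs.
    Returns jitter in RTP timestamp units (8 ticks/ms at 8 kHz audio).
    """
    jitter, prev_transit = 0.0, None
    for arrival_ms, ts in packets:
        transit = arrival_ms * ticks_per_ms - ts   # relative transit time
        if prev_transit is not None:
            d = abs(transit - prev_transit)
            jitter += (d - jitter) / 16.0          # 1/16 smoothing gain
        prev_transit = transit
    return jitter

# 20 ms packetization (160 ticks at 8 kHz); the second packet arrives 5 ms late.
print(rfc3550_jitter([(0, 0), (25, 160), (40, 320)]))  # 4.84375
```

Packet loss and round-trip delay, the other usual QoS factors, come similarly from RTP sequence-number gaps and RTCP report timestamps.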
BloNo: A New Mobile Text-Entry Interface for the Visually Impaired
Paulo Lagoá; Pedro Santana; Tiago Guerreiro; Daniel Gonçalves; Joaquim Jorge
We present a new mobile text-entry method that relies on alphabet navigation and requires no memorization, offering visually impaired individuals an easy writing mechanism. Current mobile text-entry interfaces are not suitable for blind users, and special Braille devices are too heavy, large and cumbersome to be used in a mobile context. With the enormous growth of mobile communications and applications, it has become urgent to offer visually impaired individuals the ability to operate these devices. Evaluation studies were carried out and validated the navigation method as a new mobile text-entry interface for the target population.
- Part IV: Interaction Techniques and Devices | Pp. 908-917
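The "alphabet navigation" idea — stepping through letter groups with a few keys and audio feedback instead of memorizing key-to-letter mappings — can be sketched as follows. The grouping, key assignments, and selection key here are assumptions for illustration, not BloNo's exact layout.

```python
# Assumed vowel-anchored letter groups; each row starts at a vowel.
GROUPS = ["ABCD", "EFGH", "IJKLM", "NOPQRST", "UVWXYZ"]

def navigate(keys):
    """Simulate two-level alphabet navigation with five keys:
    'R'/'L' move between groups (resetting to the group's first letter),
    'r'/'l' move within the current group, and '.' selects the focused
    letter. A real interface would speak each focused letter aloud."""
    text, g, i = [], 0, 0
    for k in keys:
        if k == 'R':
            g, i = (g + 1) % len(GROUPS), 0
        elif k == 'L':
            g, i = (g - 1) % len(GROUPS), 0
        elif k == 'r':
            i = (i + 1) % len(GROUPS[g])
        elif k == 'l':
            i = (i - 1) % len(GROUPS[g])
        elif k == '.':
            text.append(GROUPS[g][i])
    return "".join(text)

print(navigate("Rr.Lrrr."))  # selects 'F' then 'D' -> FD
```

Because every letter is reachable by relative moves announced via speech, the user never needs to recall which physical key hides which letter — the property the abstract highlights.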
Low-Cost Portable Text Recognition and Speech Synthesis with Generic Laptop Computer, Digital Camera and Software
Lauri Lahti; Jaakko Kurhila
Blind persons and people with reduced eyesight could benefit from a portable system that can interpret textual information in the surrounding environment and speak it directly to the user. The need for such a system was surveyed with a questionnaire, and a prototype was built using generic, inexpensive, readily available components. The system architecture is component-based, so every module can be replaced with another generic module. Even though the system partly misrecognizes text in varied environments, an evaluation with five actual users suggested that it can provide genuine added value in coping with everyday tasks outdoors.
- Part IV: Interaction Techniques and Devices | Pp. 918-927
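The component-based architecture described above — camera, text recognizer, and speech synthesizer as independently replaceable modules — can be sketched as a simple composition of three callables. The stage names and the dummy stages below are hypothetical stand-ins; a real build would wrap a camera driver, an OCR engine, and a TTS engine behind the same signatures.

```python
from typing import Callable

def make_pipeline(capture: Callable[[], bytes],
                  recognize: Callable[[bytes], str],
                  speak: Callable[[str], None]) -> Callable[[], str]:
    """Compose the three replaceable stages into one 'read aloud' step."""
    def read_aloud() -> str:
        text = recognize(capture())
        speak(text)
        return text
    return read_aloud

# Dummy stages to show the wiring; each can be swapped independently
# without touching the others.
spoken = []
pipeline = make_pipeline(lambda: b"EXIT",
                         lambda img: img.decode(),
                         spoken.append)
text = pipeline()
print(text, spoken)  # EXIT ['EXIT']
```

Swapping, say, the recognizer for a different OCR engine only requires a new `bytes -> str` callable — which is exactly the replaceability claim the abstract makes for its generic modules.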
Human Interface for the Robot Control in Networked and Multi-sensored Environment
Hyun-Gu Lee; Yong-Guk Kim; Ho-Dong Lee; Joo-Hyung Kim; Gwi-Tae Park
In this paper, we propose a human-robot interface for networked and multi-sensored environments. The human-robot interface is an essential part of an intelligent robotic system: through it, a human being can interact with the robot. In a multi-sensored environment in particular, the interface can be developed with remarkably extended functionality. A handheld device such as a PDA is generally suitable as a human-robot interface because of its mobility and networking ability, so we select a PDA as the interface device. In the implemented framework, the robot user can monitor what happens in the multi-sensored environment and control the mobile robot easily and intuitively.
- Part IV: Interaction Techniques and Devices | Pp. 928-935
Gesture-Based Interactions on Multiple Large Displays with a Tabletop Interface
Jangho Lee; Jun Lee; HyungSeok Kim; Jee-In Kim
Large displays are popular, and it is not unusual to equip an office with several of them so that multiple tasks can be carried out in parallel. As a result, many widgets appear across multiple large displays, and users want to select and manipulate them in more convenient and faster ways. Because the widgets are physically spread over several displays, they are not easy for users to reach, so new interaction techniques must be provided [1]. This paper proposes new interaction techniques for accessing distant widgets on multiple large displays using a tabletop interface called 'u-Table' [2]. Hand gestures are used on the tabletop interface because of their intuitive, non-invasive and easy operation. We combine the advantages of existing approaches: the intuitiveness of tabletop interfaces and the speed and simultaneity of interaction techniques such as Drag-and-pick [10] and the Vacuum [11]. The proposed techniques include fetching, sending, and manipulating distant widgets on multiple large displays. We expect our techniques can be applied to various interfaces using hand gestures and heterogeneous displays.
- Part IV: Interaction Techniques and Devices | Pp. 936-942
3D Model Based Face Recognition by Face Representation Using PVM and Pose Approximation
Yang-Bok Lee; Taehwa Hong; Hyeon-Joon Moon; Yong-Guk Kim
Since a generative 3D face model consists of a large number of vertex points and polygons, a 3D model based face recognition system is generally inefficient in computation time. In this paper, we present a novel 3D face representation method that reduces the number of vertices and optimizes computation time, and we generate a 3D Korean face model based on this representation. We also describe a pose approximation method for obtaining the initial fitting parameters. Finally, we evaluate the performance of the proposed method on face databases collected using a stereo-camera based 3D face capturing device and a web camera.
- Part IV: Interaction Techniques and Devices | Pp. 943-951
The Use of Interactive Visual Metaphors to Enhance Group Discussions Using Mobile Devices
John McGinn; Rich Picking; Liz Picking; Vic Grout
In this paper, we consider the problems of group discussions and collaborative decision-making where one or more of the participants are using restrictive interfaces such as mobile phones or PDAs. We suggest possible solutions to some of these problems and present MAVis (Mobile Argumentation Visualizer), a web-based interface built upon a balance-beam visual metaphor. We report on user experiences of interacting with the visual metaphor, and on the challenges of transferring it to a multi-user environment supporting mobile devices.
- Part IV: Interaction Techniques and Devices | Pp. 952-960