Publications catalog - books



Virtual Reality: Second International Conference, ICVR 2007, Held as Part of HCI International 2007, Beijing, China, July 22-27, 2007. Proceedings

Randall Shumaker (ed.)

Conference: 2nd International Conference on Virtual Reality (ICVR), Beijing, China, July 22, 2007 - July 27, 2007

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

User Interfaces and Human Computer Interaction; Computer Graphics; Artificial Intelligence; Special Purpose and Application-Based Systems; Information Systems Applications (incl. Internet); Multimedia Information Systems

Availability

Detected institution: not detected
Year of publication: 2007
Browse: SpringerLink

Information

Resource type:

Books

Print ISBN

978-3-540-73334-8

Electronic ISBN

978-3-540-73335-5

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© Springer-Verlag Berlin Heidelberg 2007

Table of contents

Virtual Gaze. A Pilot Study on the Effects of Computer Simulated Gaze in Avatar-Based Conversations

Gary Bente; Felix Eschenburg; Nicole C. Krämer

The paper introduces a novel methodology for the computer simulation of gaze behavior in net-based interactions. Thirty-eight female participants interacted using a special avatar platform allowing for the real-time transmission of nonverbal behavior (head and body movement, gestures, gaze) as captured by motion trackers, eye-tracking devices and data gloves. During interaction, the eye movement and gaze direction of one partner were substituted by computer-simulated data. The simulated duration of directed gaze (looking into the face of the vis-à-vis) was varied, lasting 2 seconds in one condition and 4 seconds in the other. Mutual person perception (impression ratings) and individual experience of social presence (short questionnaire) were measured as dependent variables. The results underline the validity of the computer animation approach. Consistent with the literature, the longer gaze duration was found to cause significantly better evaluations of the interaction partners and higher levels of co-presence.

- Part 2: Interacting and Navigating in Virtual and Augmented Environments | Pp. 185-194

Evaluating the Need for Display-Specific and Device-Specific 3D Interaction Techniques

Doug A. Bowman; Brian Badillo; Dhruv Manek

There are many visual display devices and input devices available to designers of immersive virtual environment (VE) applications. Most 3D interaction techniques, however, were designed using a particular combination of devices. The effects of migrating these techniques to different displays and input devices are not known. In this paper, we report on a series of studies designed to determine these effects. The studies show that while 3D interaction techniques are quite robust under some conditions, migration to different displays and input devices can cause serious usability problems in others. This implies that display-specific and/or device-specific versions of these techniques are necessary. In addition to the studies, we describe our display- and device-specific designs for two common 3D manipulation techniques.

- Part 2: Interacting and Navigating in Virtual and Augmented Environments | Pp. 195-204

Human Computer Intelligent Interaction Using Augmented Cognition and Emotional Intelligence

Jim X. Chen; Harry Wechsler

Human-Computer Interaction (HCI) has mostly developed along two competing methodologies: direct manipulation and intelligent agents. Other possible but complementary methodologies are those of augmented cognition and affective computing and their adaptive combination. Augmented cognition harnesses computation to exploit explicit or implicit knowledge about context, mental state, and motivation for the user, while affective computing provides the means to recognize emotional intelligence and affect in the human-computer interfaces and interactions people are engaged with. Most HCI studies elicit emotions in relatively simple settings, whereas augmented cognition and affective computing include bodily (physical) events embedded within mental (cognitive) and emotional ones. Recognition of affective states currently focuses on their physical form (e.g., blinking or face distortions underlying human emotions) rather than implicit behavior and function (their impact on how the user employs the interface or communicates with others). Augmented cognition and affective computing are examined throughout this paper regarding design, implementation, and benefits. Towards that end we have designed an HCII interface that diagnoses and predicts whether the user is fatigued, confused, frustrated, momentarily distracted, or even alive through non-verbal information, namely paralanguage, in a virtual reality (VR) learning environment.

- Part 2: Interacting and Navigating in Virtual and Augmented Environments | Pp. 205-214

Interactive Haptic Rendering of High-Resolution Deformable Objects

Nico Galoppo; Serhat Tekin; Miguel A. Otaduy; Markus Gross; Ming C. Lin

We present an efficient algorithm for haptic rendering of deformable bodies with highly detailed surface geometry using a fast contact handling algorithm. We exploit a layered deformable representation to augment the physically based deformation simulation with efficient collision detection, contact handling and interactive haptic force feedback.
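The abstract does not spell out the contact model, but the force-feedback side of such systems is commonly a penalty-based spring-damper response evaluated at haptic rates. The Python sketch below is a minimal, generic illustration of that idea; the constants and function names are illustrative, not taken from the paper, and the authors' layered deformable representation is not reproduced here.

```python
import numpy as np

# Illustrative stiffness and damping; real values depend on device and material.
STIFFNESS = 800.0  # N/m
DAMPING = 2.0      # N*s/m

def contact_force(penetration_depth, normal, relative_velocity):
    """Penalty-based (spring-damper) contact force at the haptic interaction point.

    penetration_depth: scalar depth in metres (> 0 when inside the surface)
    normal: unit surface normal pointing out of the object (3-vector)
    relative_velocity: velocity of the haptic proxy relative to the surface (3-vector)
    """
    if penetration_depth <= 0.0:
        return np.zeros(3)
    normal = np.asarray(normal, dtype=float)
    v_n = np.dot(relative_velocity, normal)  # approach speed along the normal
    magnitude = STIFFNESS * penetration_depth - DAMPING * v_n
    return max(magnitude, 0.0) * normal
```

Haptic devices typically expect force updates at around 1 kHz, well above usual deformable-body simulation rates, which is why fast collision detection and contact handling are the bottleneck such systems must address.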

- Part 2: Interacting and Navigating in Virtual and Augmented Environments | Pp. 215-223

Collaborative Virtual Environments: You Can’t Do It Alone, Can You?

Arturo S. García; Diego Martínez; José P. Molina; Pascual González

Many Collaborative Virtual Environments (CVEs) have been developed to date. However, when focusing our attention on the way users perform their task in these systems, there is still little research on understanding the impact of different platforms on the collaboration experience. This paper describes not only a CVE, in this case one that reproduces a block-building game, but also an experiment that was carried out to evaluate both the interaction using different input and output technologies and the impact of collaboration on task performance and overall experience.

- Part 2: Interacting and Navigating in Virtual and Augmented Environments | Pp. 224-233

Development of Wide-Area Tracking System for Augmented Reality

Hirotake Ishii; Hidenori Fujino; Bian Zhiqiang; Tomoki Sekiyama; Toshinori Nakai; Hiroshi Shimoda

In this study, two types of marker-based tracking systems for Augmented Reality were developed: a tracking system using line markers and a tracking system using circular markers. Both markers were designed for use inside buildings such as nuclear power plants, which are very complicated and in which it is difficult to paste many large markers. To enlarge the area in which tracking is available using only a limited number of markers, a hybrid tracking method and a two-layer tracking method were also developed. The experimental evaluation shows that both methods can increase the available area compared to legacy methods such as single-camera tracking and simple square markers.
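The abstract does not detail how the line and circular markers are processed, but marker-based AR tracking generally reduces to a perspective-n-point (PnP) problem: the known 3D positions of marker features are matched with their detected 2D image positions to recover the camera pose. A minimal sketch with OpenCV follows, assuming a square marker with known corner coordinates; the marker geometry and names are illustrative only, not the paper's method.

```python
import numpy as np
import cv2  # OpenCV, used here only to illustrate the standard PnP step

# Known 3D positions of one marker's corners in the world frame (metres).
# In a wide-area system, every registered marker contributes such a set.
marker_corners_world = np.array([
    [0.0, 0.0, 0.0],
    [0.1, 0.0, 0.0],
    [0.1, 0.1, 0.0],
    [0.0, 0.1, 0.0],
], dtype=np.float64)

def estimate_camera_pose(detected_corners_px, camera_matrix, dist_coeffs):
    """Recover the camera pose from the detected 2D corners of one marker.

    detected_corners_px: 4x2 array of the marker corners found in the image.
    Returns (R, t): rotation matrix and translation of the world in the camera frame.
    """
    ok, rvec, tvec = cv2.solvePnP(
        marker_corners_world, detected_corners_px, camera_matrix, dist_coeffs)
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)  # axis-angle -> 3x3 rotation matrix
    return R, tvec
```

Enlarging the trackable area then amounts to combining the poses obtained from whichever markers (or other sensors, in the hybrid method) are currently visible.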

- Part 2: Interacting and Navigating in Virtual and Augmented Environments | Pp. 234-243

An Efficient Navigation Algorithm of Large Scale Distributed VRML/X3D Environments

Jinyuan Jia; Guanghua Lu; Yuan Pan

A typical shortest-path search algorithm, e.g. Dijkstra's algorithm, is difficult to implement directly in a VRML/X3D world due to the simplicity of VRML/X3D programming. Using JavaScript, which is cross-platform and compatible with VRML, this paper proposes an efficient back-traceable climbing (BTC) based navigation algorithm that improves the Hill Climbing search algorithm with destination-oriented guidance and loop removal, and augments it with a simple data structure and flexible interfaces. The BTC-based navigation algorithm performs considerably better than Dijkstra's algorithm in terms of efficiency, memory consumption and the number of accessed nodes. It also has the merits of simplicity, easy implementation and reliability. Experimental results show that it can provide real-time virtual navigation services with sufficient precision for large-scale VRML/X3D environments.
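The BTC pseudocode is not given in the abstract; the Python sketch below is one plausible reading of it: greedy hill climbing that always moves to the unvisited neighbor closest to the destination (destination-oriented guidance), never revisits a node (loop removal), and backtracks one step when it hits a dead end. The graph and coordinate data are invented for the example.

```python
import math

def btc_navigate(graph, positions, start, goal):
    """Hypothetical reconstruction of back-traceable climbing navigation.

    graph: node -> list of neighbor nodes
    positions: node -> (x, y, z) coordinates in the VRML/X3D scene
    Returns a path from start to goal, or None if the goal is unreachable.
    """
    def dist_to_goal(n):
        return math.dist(positions[n], positions[goal])

    path = [start]      # current path; popping from it is the "back-trace"
    visited = {start}   # loop removal: never revisit a node

    while path:
        current = path[-1]
        if current == goal:
            return path
        # Destination-oriented guidance: prefer the unvisited neighbor
        # closest (straight-line) to the goal.
        candidates = [n for n in graph[current] if n not in visited]
        if candidates:
            nxt = min(candidates, key=dist_to_goal)
            visited.add(nxt)
            path.append(nxt)
        else:
            path.pop()  # dead end: back-trace one step
    return None

# Tiny example graph:
graph = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A"], "D": ["B"]}
positions = {"A": (0, 0, 0), "B": (1, 0, 0), "C": (0, 1, 0), "D": (2, 0, 0)}
print(btc_navigate(graph, positions, "A", "D"))  # ['A', 'B', 'D']
```

Unlike Dijkstra's algorithm, such a greedy search does not guarantee the globally shortest path, which is consistent with the abstract's emphasis on "enough precision" rather than optimality.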

- Part 2: Interacting and Navigating in Virtual and Augmented Environments | Pp. 244-252

Development of a Handheld User Interface Framework for Virtual Environments

Seokhwan Kim; Yongjoo Cho; Kyoung Shin Park; Joasang Lim

This paper describes the design and implementation of a new handheld user interface framework called HIVE. HIVE provides a familiar 2D user interface on a mobile handheld computer to support user interactions in a virtual environment. It provides a scripting language based on XML and Lua to ease the development of handheld user interfaces and to increase the reusability of the interface components. The paper also discusses the use of the HIVE framework to develop a couple of handheld user interface applications for virtual environments to demonstrate its usability.
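The HIVE scripting schema is not given in the abstract; purely as a hypothetical illustration of the general idea, the Python sketch below parses an XML interface description into reusable widget records whose event attributes would be handed to an embedded script interpreter (Lua, in the paper's framework). Element and attribute names are invented for the example.

```python
import xml.etree.ElementTree as ET

# Hypothetical XML interface description in the spirit of the abstract;
# the actual HIVE schema and its Lua bindings are not specified there.
UI_XML = """
<interface name="lightControl">
  <button id="toggle" label="Toggle light" onpress="toggleLight()"/>
  <slider id="intensity" min="0" max="100" onchange="setIntensity(value)"/>
</interface>
"""

def load_interface(xml_text):
    """Parse an XML UI description into plain widget records.

    The 'onpress'/'onchange' strings would be forwarded to a script
    interpreter that relays the events to the virtual environment.
    """
    root = ET.fromstring(xml_text)
    widgets = [{"type": elem.tag, **elem.attrib} for elem in root]
    return root.get("name"), widgets

name, widgets = load_interface(UI_XML)
print(name, widgets)
```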

- Part 2: Interacting and Navigating in Virtual and Augmented Environments | Pp. 253-261

Time-Varying Factors Model with Different Time-Scales for Studying Cybersickness

Tohru Kiryu; Eri Uchiyama; Masahiro Jimbo; Atsuhiko Iijima

We have investigated cybersickness in terms of image motion vectors, visual characteristics, and autonomic nervous regulation. We obtained RR-interval, respiration, and blood-pressure time series and estimated the low-frequency (LF) and high-frequency (HF) power components to determine the sensation intervals. We then traced the time series of the LF component backwards to find the local minimum marking the onset. An experiment consisted of five consecutive exposure sessions with the same first-person-view video image. In the unpleasant group among fifteen healthy young subjects, the LF/HF ratio increased with the number of trials, and a significant difference was confirmed between the two groups. The trigger points concentrated around specific segments. Within the unpleasant group, the eyes did not follow the camera motion around the trigger points. Accordingly, we recommend monitoring image motion vectors as a trigger factor and autonomic nervous regulation as an accumulation factor when studying cybersickness.
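The LF and HF components referred to here are standard heart-rate-variability measures. As a sketch of how an LF/HF ratio can be computed from measured RR intervals (the paper's exact frequency bands and windowing are not stated; the conventional 0.04-0.15 Hz LF and 0.15-0.40 Hz HF bands and a 4 Hz resampling rate are assumed):

```python
import numpy as np
from scipy.signal import welch
from scipy.interpolate import interp1d

FS = 4.0                # Hz, resampling rate for the evenly spaced RR series (assumed)
LF_BAND = (0.04, 0.15)  # Hz, conventional HRV low-frequency band
HF_BAND = (0.15, 0.40)  # Hz, conventional HRV high-frequency band

def lf_hf_ratio(rr_intervals_s):
    """Estimate the LF/HF power ratio from a sequence of RR intervals (seconds)."""
    t = np.cumsum(rr_intervals_s)              # beat times
    t_even = np.arange(t[0], t[-1], 1.0 / FS)  # evenly spaced time grid
    rr_even = interp1d(t, rr_intervals_s)(t_even)
    freqs, psd = welch(rr_even - rr_even.mean(), fs=FS,
                       nperseg=min(256, len(rr_even)))
    def band_power(lo, hi):
        mask = (freqs >= lo) & (freqs < hi)
        return np.trapz(psd[mask], freqs[mask])
    return band_power(*LF_BAND) / band_power(*HF_BAND)
```

Computing this ratio over successive windows yields the time trend whose local minimum the authors trace backwards to as the onset of the unpleasant sensation.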

- Part 2: Interacting and Navigating in Virtual and Augmented Environments | Pp. 262-269

A True Spatial Sound System for CAVE-Like Displays Using Four Loudspeakers

Torsten Kuhlen; Ingo Assenmacher; Tobias Lentz

The paper introduces an audio rendering system based on the binaural approach, which allows a real-time simulation of spatially distributed sound sources and, in addition to that, near-to-head sources in room-mounted virtual environments. We have been developing a dynamic crosstalk cancellation, allowing the listener to freely move around without wearing any headphones. The paper gives a comprehensive description of the system, concentrating on the dual dynamic crosstalk cancellation and aspects of the integration of a real-time room acoustical simulation. Finally, two applications are described to show the wide applicability of the system.
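Crosstalk cancellation for binaural reproduction over loudspeakers is conventionally formulated, per frequency bin, as inverting the 2x2 matrix of acoustic transfer functions from a loudspeaker pair to the listener's two ears. The sketch below shows a regularized inversion for one pair; the paper's dual dynamic system combines two such pairs and re-computes the filters as the tracked head moves, which is not reproduced here.

```python
import numpy as np

def crosstalk_cancellation_filters(H, beta=0.005):
    """Per-frequency regularized inverse of the loudspeaker-to-ear transfer matrix.

    H: complex array of shape (n_freqs, 2, 2); H[f][ear][speaker] is the acoustic
       transfer function at bin f (e.g. from measured or modelled HRTFs).
    beta: regularization constant limiting filter gain at ill-conditioned bins.
    Returns C of shape (n_freqs, 2, 2) such that H @ C approximates the identity,
    i.e. the binaural signals reach the ears with the crosstalk paths cancelled.
    """
    C = np.empty_like(H)
    eye = np.eye(2)
    for f in range(H.shape[0]):
        Hf = H[f]
        HfH = Hf.conj().T
        # Tikhonov-regularized pseudo-inverse: (H^H H + beta I)^-1 H^H
        C[f] = np.linalg.solve(HfH @ Hf + beta * eye, HfH)
    return C
```

In a dynamic system these filters must be recomputed (or interpolated) whenever head tracking reports a new listener pose, since H depends on the head position and orientation.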

- Part 2: Interacting and Navigating in Virtual and Augmented Environments | Pp. 270-279