Publications catalogue - books

Advances in Artificial Reality and Tele-Existence: 16th International Conference on Artificial Reality and Telexistence, ICAT 2006, Hangzhou, China, November 28 - December 1, 2006, Proceedings

Zhigeng Pan ; Adrian Cheok ; Michael Haller ; Rynson W. H. Lau ; Hideo Saito ; Ronghua Liang (eds.)

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

User Interfaces and Human Computer Interaction; Artificial Intelligence (incl. Robotics); Information Systems Applications (incl. Internet); Multimedia Information Systems; Image Processing and Computer Vision; Computer Appl. in Arts and Humanities

Availability

Detected institution: not detected
Year of publication: 2006
Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-49776-9

Electronic ISBN

978-3-540-49779-0

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2006

Publication rights information

© Springer-Verlag Berlin Heidelberg 2006

Table of contents

AR Table Tennis: A Video-Based Augmented Reality Sports Game

Jong-Seung Park; TaeYong Kim; Jong-Hyun Yoon

Interactions in augmented reality systems are based on human gestures rather than classical input devices. Because current gesture recognition technologies are unstable, previous development processes for augmented reality games have introduced many ad hoc methods to work around the shortcomings, and the resulting game architectures have become highly irregular and inefficient. This paper describes an efficient development methodology for gesture-based augmented reality games by prototyping a table tennis game with a gesture interface. We also verify the applicability of the prototyping mechanism by implementing and demonstrating an augmented reality table tennis game played with real rackets; the implementation requires real-time tracking of the rackets so that fast movements and interactions can be handled without delay.

- Augmented Reality/Mixed Reality | Pp. 197-206
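The abstract does not say how the real rackets are tracked; purely as an illustration of the kind of low-latency tracking such a game needs, the following Python sketch (OpenCV and a distinctly coloured racket blade are my assumptions, not the paper's setup) estimates the racket's image position by colour thresholding, which is cheap enough to run on every frame.

```python
# Illustrative sketch (not the paper's method): track a brightly coloured racket
# blade in each video frame by HSV colour thresholding.
import cv2
import numpy as np

# Hypothetical HSV range for a red racket rubber; would need per-setup tuning.
LOWER = np.array([0, 120, 80])
UPPER = np.array([10, 255, 255])

def track_racket(frame_bgr):
    """Return the (x, y) image position of the racket blade, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    ys, xs = np.nonzero(mask)
    if len(xs) < 50:          # too few pixels: racket not visible
        return None
    return float(xs.mean()), float(ys.mean())   # centroid of the blade pixels
```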

Enhancing Immersiveness in AR-Based Product Design

Taejin Ha; Yeongmi Kim; Jeha Ryu; Woontack Woo

Recently, various AR-based product design methodologies have been introduced. In this paper, we propose technologies for enhancing the immersive realization of virtual objects: by adopting tangible objects we can provide touch sensation to users. A 3D model of the same scale overlays the whole area of the tangible object so that the marker area is invisible, which contributes to enhanced immersion. The hand occlusion problem that arises when virtual objects overlay the user's hands is also partially solved, providing more immersive and natural images to users. Finally, multimodal feedback further improves immersion: both vibrotactile feedback through pager motors and sound feedback are considered. In our scenario, a game-phone model is selected, and the proposed augmented vibrotactile feedback, occlusion-reduced visual effects, and sound feedback are provided to users. These methodologies will contribute to a more immersive realization of conventional AR systems.

- Augmented Reality/Mixed Reality | Pp. 207-216
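The abstract states that the hand occlusion problem is only partially solved but does not give the mechanism; a common approach, sketched below under that assumption, is to segment skin-coloured pixels and exclude them when compositing the virtual model, so the user's hands stay visible in front of the overlay.

```python
# Illustrative sketch only: keep the user's hands visible by excluding
# skin-coloured pixels when compositing the virtual model over the camera image.
import numpy as np

def composite_with_hands(camera_rgb, virtual_rgb, virtual_mask, skin_mask):
    """Overlay the virtual object everywhere it is rendered, except on skin pixels.

    camera_rgb, virtual_rgb : HxWx3 uint8 images
    virtual_mask            : HxW bool, True where the virtual object was rendered
    skin_mask               : HxW bool, True where the camera pixel looks like skin
    """
    draw = virtual_mask & ~skin_mask          # virtual object, minus the hands
    out = camera_rgb.copy()
    out[draw] = virtual_rgb[draw]
    return out
```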

Personal Information Annotation on Wearable Computer Users with Hybrid Peer-to-Peer Communication

Koji Makita; Masayuki Kanbara; Naokazu Yokoya

This paper proposes a wearable annotation overlay system that can correctly annotate dynamic users of wearable computers. To provide users with the newest annotation information, a network-shared database system for wearable AR systems has previously been proposed, and based on this database a wearable annotation overlay system that dynamically annotates users of wearable systems has been investigated. In such conventional systems, since a dynamic user's position is transmitted to wearable AR systems via a shared database server, it is difficult to overlay annotations at the correct position because of the low update frequency and the delay of client-server communication. In this paper, we propose a new, effective method for wearable AR systems to obtain dynamic users' positions by using hybrid peer-to-peer (P2P) communication. In experiments, annotations on dynamic users were shown to be overlaid accurately enough to indicate where the users are.

- Augmented Reality/Mixed Reality | Pp. 217-227
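The hybrid P2P protocol itself is not detailed in the abstract; the sketch below only illustrates the underlying idea of sending pose updates directly between peers instead of routing them through the shared database server. The peer list, message fields, and UDP transport are all assumptions.

```python
# Minimal sketch of direct peer-to-peer position updates: instead of waiting for
# a shared database server to relay positions, each wearable client sends its
# latest pose straight to its peers. Message format and peers are hypothetical.
import json
import socket
import time

PEERS = [("192.168.0.12", 9000), ("192.168.0.13", 9000)]   # hypothetical peers

def broadcast_pose(user_id, position, orientation):
    msg = json.dumps({
        "user": user_id,
        "pos": position,          # e.g. (x, y, z) in the shared world frame
        "ori": orientation,       # e.g. quaternion (w, x, y, z)
        "t": time.time(),         # timestamp so receivers can drop stale data
    }).encode()
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for addr in PEERS:
        sock.sendto(msg, addr)
    sock.close()
```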

An Initialization Tool for Installing Visual Markers in Wearable Augmented Reality

Yusuke Nakazato; Masayuki Kanbara; Naokazu Yokoya

It is necessary to precisely measure the pose (position and orientation) of a user in order to realize an augmented reality (AR) system with a wearable computer. One of the major methods for measuring the user's pose in AR is the visual marker-based approach, which computes the pose by recognizing markers pasted on the ceilings or walls. The method needs the 3D pose information of the visual markers in advance; however, calibrating visual markers pasted on the ceiling of a wide environment is costly. In this paper, an initialization tool for installing visual markers for wearable AR is proposed. The tool assists the administrator in installing visual markers in a wide environment: it calibrates the alignment of the visual markers in the real environment with high accuracy by recognizing them in images captured by a high-resolution still camera. Additionally, the tool assists the administrator in repairing incorrect marker patterns using a wearable AR system.

- Augmented Reality/Mixed Reality | Pp. 228-238
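The abstract describes calibrating marker alignment from high-resolution still images; a minimal sketch of the chaining step such a tool could rely on is given below, assuming each photo yields camera-relative 4x4 poses for one marker whose global pose is already known and for one new marker. The marker detection itself, and the paper's actual optimisation, are omitted.

```python
# Illustrative sketch of pose chaining for marker calibration. 4x4 homogeneous
# matrices; T_cam_X maps marker X coordinates into the still camera frame.
import numpy as np

def calibrate_new_marker(T_world_ref, T_cam_ref, T_cam_new):
    """Return the new marker's pose in the world frame.

    T_world_ref : known global pose of the reference marker (4x4)
    T_cam_ref   : reference marker pose in the still camera frame (4x4)
    T_cam_new   : new marker pose in the same camera frame (4x4)
    """
    # new marker -> camera -> reference marker -> world
    return T_world_ref @ np.linalg.inv(T_cam_ref) @ T_cam_new
```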

Increasing Camera Pose Estimation Accuracy Using Multiple Markers

Jong-Hyun Yoon; Jong-Seung Park; Chungkyue Kim

If the geometry of a marker is known and camera parameters are available, it is possible to recover a camera pose. The transformation between a camera and a marker is defined relative to the local coordinate system of the marker. This paper proposes a real-time camera tracking method using multiple markers while the camera is allowed to move freely in a 3D space. We utilize multiple markers to improve the accuracy of the pose estimation. We also present a coordinate registration algorithm to obtain a global optimal camera pose from local transformations of multiple markers. For the registration, a reference marker is automatically chosen among multiple markers and the global camera pose is computed using all local transforms weighted by marker detection confidence rates. Experimental results show that the proposed method provides more accurate camera poses than those from other methods.

- Augmented Reality/Mixed Reality | Pp. 239-248
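The abstract summarises the registration as a confidence-weighted combination of the local transforms; the sketch below shows one generic way such a weighted fusion can look (weighted mean of positions plus a normalised weighted quaternion sum), not the paper's exact algorithm.

```python
# Illustrative sketch of confidence-weighted fusion of per-marker camera poses.
# Each marker i yields a camera position t_i, an orientation quaternion q_i,
# and a detection confidence w_i.
import numpy as np

def fuse_poses(positions, quaternions, confidences):
    """positions: Nx3, quaternions: Nx4 unit quaternions (w, x, y, z), confidences: N."""
    w = np.asarray(confidences, dtype=float)
    w = w / w.sum()

    t = (w[:, None] * np.asarray(positions)).sum(axis=0)   # weighted mean position

    q = np.asarray(quaternions, dtype=float).copy()
    # q and -q encode the same rotation: flip all quaternions into the same
    # hemisphere as the first one, then take a normalised weighted sum as an
    # approximate mean rotation (adequate when the estimates are close).
    q[np.dot(q, q[0]) < 0] *= -1
    q_mean = (w[:, None] * q).sum(axis=0)
    q_mean /= np.linalg.norm(q_mean)
    return t, q_mean
```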

Exploring Human-Robot Interaction Through Telepresence Board Games

Min Xin; Ehud Sharlin

This paper presents an experimental test bed for exploring and evaluating human-robot interaction (HRI). Our system is designed around the concept of playing board games involving collaboration between humans and robots in a shared physical environment. Unlike the classic human-versus-machine situation often established in computer-based board games, our test bed takes advantage of the rich interaction opportunities that arise when humans and robots play collaboratively as a team. To facilitate interaction within a shared physical environment, our game is played on a large checkerboard where human and robotic players can be situated and play as game pieces. With meaningful interaction occurring within this controlled setup, various aspects of human-robot interaction can be easily explored and evaluated, such as interaction methods and robot behaviour. In this paper we present our test bed, which uses a telepresence interface for playing the game, and the results of a user study demonstrating the sensitivity of our system in assessing the effect of different robot behaviours on users.

- Augmented Reality/Mixed Reality | Pp. 249-261

Projected Reality – Content Delivery Right onto Objects of Daily Life

Jochen Ehnes; Michitaka Hirose

Spatial augmented reality using controllable projector-camera systems to project onto objects directly, or Projected Reality as we call it, offers the possibility of augmenting objects without requiring their users to wear or carry any devices. In order to give users freedom of movement, we developed an architecture that lets projected content roam a network of projection units so that it can follow the users. The concept of connecting physical objects with content, although devised for projection-based augmented reality systems, can be applied to HMD-based or hand-held AR systems as well. After describing our AR projection system and an example application that could provide location-specific information to travelers using public transportation systems, we lay out our vision of a system that may be able to provide content on a global scale.

- Augmented Reality/Mixed Reality | Pp. 262-271
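The roaming architecture is only outlined in the abstract; as a rough sketch of the handoff idea, the code below keeps content on the current projection unit while it still covers the user and otherwise picks another unit whose coverage contains the user's position. Unit names and rectangular coverage areas are hypothetical, not the paper's design.

```python
# Sketch of the roaming/handoff idea for a network of projection units.
from dataclasses import dataclass

@dataclass
class ProjectionUnit:
    name: str
    x_range: tuple      # (min_x, max_x) of the floor area this unit can reach
    y_range: tuple

    def covers(self, x, y):
        return (self.x_range[0] <= x <= self.x_range[1]
                and self.y_range[0] <= y <= self.y_range[1])

def choose_unit(units, user_xy, current=None):
    """Keep the current unit while it still covers the user; otherwise hand off."""
    if current is not None and current.covers(*user_xy):
        return current
    for unit in units:
        if unit.covers(*user_xy):
            return unit
    return None   # user is outside every unit's coverage
```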

An Evaluation of an Augmented Reality Multimodal Interface Using Speech and Paddle Gestures

Sylvia Irawati; Scott Green; Mark Billinghurst; Andreas Duenser; Heedong Ko

This paper discusses an evaluation of an augmented reality (AR) multimodal interface that uses combined speech and paddle gestures for interaction with virtual objects in the real world. We briefly describe our AR multimodal interface architecture and multimodal fusion strategies, which are based on the combination of time-based and domain semantics. Then, we present the results of a user study comparing multimodal input with gesture-only input. The results show that a combination of speech and paddle gestures improves the efficiency of user interaction. Finally, we describe some design recommendations for developing other multimodal AR interfaces.

- Augmented Reality/Mixed Reality | Pp. 272-283
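The fusion strategy is described as combining time-based and domain semantics; the sketch below illustrates only the time-based half, pairing each recognised speech command with the nearest paddle gesture inside a short window. The window length and the 'command'/'object' event fields are assumptions, not the paper's data model.

```python
# Illustrative sketch of time-based multimodal fusion: pair a speech command
# with the paddle gesture closest to it in time within a short fusion window.
FUSION_WINDOW = 1.5   # seconds (assumed)

def fuse(speech_events, gesture_events):
    """Each event is a dict with at least a 'time' key; returns fused commands."""
    fused = []
    for s in speech_events:
        candidates = [g for g in gesture_events
                      if abs(g["time"] - s["time"]) <= FUSION_WINDOW]
        if not candidates:
            continue   # speech with no matching gesture is left unresolved here
        g = min(candidates, key=lambda g: abs(g["time"] - s["time"]))
        fused.append({"action": s["command"], "target": g["object"],
                      "time": max(s["time"], g["time"])})
    return fused
```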

An Augmented Reality System for Computer Maintenance

Bo Kang; Peng Ren; Changzhi Ke

Augmented reality (AR) aims to create user interfaces in which virtual objects are overlaid on the physical environment. Equipment maintenance and training tasks are excellent domains for augmented reality applications. In this paper, a prototype of a video see-through AR system for training and support in computer maintenance is presented. A hybrid tracking method based on vision and inertial sensing is adopted in the prototype and discussed in detail. A WIMP-based user interface that is simple, convenient, and informative is designed for the system. Experimental results illustrate the effectiveness of the prototype.

- Augmented Reality/Mixed Reality | Pp. 284-291
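The vision and inertial hybrid tracking method is not specified beyond its name in the abstract; one common scheme, sketched here as an assumption, is a complementary filter that integrates gyroscope rates for fast motion and blends in the marker-based (vision) orientation whenever it is available, to cancel gyro drift.

```python
# Sketch of a complementary filter blending inertial prediction with a vision fix.
# Works on yaw/pitch/roll angles for brevity; angle wrap-around is ignored.
import numpy as np

ALPHA = 0.98   # trust the gyro for fast motion, the vision fix for drift correction

def update_orientation(prev_angles, gyro_rates, dt, vision_angles=None):
    """prev_angles, vision_angles: length-3 arrays (rad); gyro_rates: rad/s; dt: s."""
    predicted = np.asarray(prev_angles) + np.asarray(gyro_rates) * dt
    if vision_angles is None:              # no marker visible this frame
        return predicted
    return ALPHA * predicted + (1.0 - ALPHA) * np.asarray(vision_angles)
```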

Multi-view Video Composition with Computer Graphics

Artem Ignatov; Manbae Kim

Multi-view video has recently gained much attention from academic and commercial fields because it can deliver immersive viewing of natural scenes. This paper presents a composition method for generating composite views combined with graphic objects. First we generate virtual views between multi-view cameras using the depth and texture images of the input videos. Then we mix graphic objects into the generated view images. A distinctive feature of our approach is that we use an identical coordinate system for camera, virtual, and graphics views. Using the same coordinate system for all types of cameras enables full interaction between the real scene and graphic objects. Another merit is that occlusion between them is handled automatically by the graphics engine, because the z values of the real scene and the graphic objects are stored in the same z buffer. We present experimental results that validate the proposed method and show that graphic objects can become an inalienable part of the multi-view video. For the validation we used multi-view sequences in which a graphic object is mixed into camera and virtual images.

- Augmented Reality/Mixed Reality | Pp. 292-301
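The occlusion handling the abstract describes (real scene and graphic objects sharing one z buffer) amounts to a per-pixel depth comparison; a minimal numpy sketch of that comparison is below, assuming aligned depth maps and a smaller-z-is-closer convention.

```python
# Illustrative sketch of shared-z-buffer occlusion: since the real scene depth
# and the rendered graphics depth are in the same coordinate system, a per-pixel
# comparison decides which source is visible.
import numpy as np

def composite(scene_rgb, scene_z, graphics_rgb, graphics_z):
    """scene_rgb/graphics_rgb: HxWx3 images, scene_z/graphics_z: HxW depth maps."""
    graphics_in_front = graphics_z < scene_z          # graphic object occludes scene
    out = scene_rgb.copy()
    out[graphics_in_front] = graphics_rgb[graphics_in_front]
    return out
```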