Publications catalog - books
Advances in Artificial Reality and Tele-Existence: 16th International Conference on Artificial Reality and Telexistence, ICAT 2006, Hangzhou, China, November 28 - December 1, 2006, Proceedings
Zhigeng Pan; Adrian Cheok; Michael Haller; Rynson W. H. Lau; Hideo Saito; Ronghua Liang (eds.)
Abstract/Description – provided by the publisher
Not available.
Keywords – provided by the publisher
User Interfaces and Human Computer Interaction; Artificial Intelligence (incl. Robotics); Information Systems Applications (incl. Internet); Multimedia Information Systems; Image Processing and Computer Vision; Computer Appl. in Arts and Humanities
Availability
Detected institution | Publication year | Browse | Download | Request
---|---|---|---|---
Not detected | 2006 | SpringerLink | |
Information
Resource type:
books
Print ISBN
978-3-540-49776-9
Electronic ISBN
978-3-540-49779-0
Publisher
Springer Nature
Country of publication
United Kingdom
Publication date
2006
Publication rights information
© Springer-Verlag Berlin Heidelberg 2006
Table of contents
doi: 10.1007/11941354_11
VR-Based Simulation for the Learning of Gynaecological Examination
Liliane dos Santos Machado; Ronei Marcos de Moraes
Training is an effective way to acquire skills. Recently, virtual reality systems have been proposed and developed to provide training in medical procedures. This work presents SITEG, a virtual reality simulator for education in the practice of gynaecological examinations. SITEG offers a virtual environment where the user can practice the two steps present in a real exam and learn the different stages of diseases such as herpes and human papillomavirus. To that end, the system integrates stereoscopic viewing and a haptic device to allow the identification of malignancies commonly found in this kind of examination. The simulator also includes an assessment tool to classify the user's training.
- Anthropomorphic Intelligent Robotics, Artificial Life | Pp. 97-104
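As a hedged illustration of the assessment idea mentioned in the abstract, the sketch below classifies a trainee's session from simple performance metrics. The metric names, thresholds, and class labels are assumptions for illustration, not the authors' method.

```python
# Hypothetical sketch: classifying a trainee's simulated exam session.
# Metrics, thresholds, and labels are illustrative assumptions.

def classify_session(mean_force_error: float, coverage: float) -> str:
    """Map simple performance metrics to a coarse training class."""
    if coverage >= 0.9 and mean_force_error <= 0.1:
        return "expert"
    if coverage >= 0.7 and mean_force_error <= 0.25:
        return "intermediate"
    return "novice"

print(classify_session(mean_force_error=0.08, coverage=0.95))  # expert
```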
doi: 10.1007/11941354_12
Visual Perception Modeling for Intelligent Avatars
Ronghua Liang; Tinan Huang; Meleagros Krokos; Jinglong Fan
Visual perception modeling for intelligent avatars can play a subsidiary role in controlling the behavior and animation of avatars in Intelligent Virtual Environments (IVEs). We present a visual perception recognition model that integrates a color model, 3D object matching, sensor perception, and path planning, enabling the avatar to make appropriate decisions in real time in an IVE. In our approach, the 3D objects in the IVE are recognized, and the avatar's moving directions are obtained using the avatar's perception viewpoint and sensor. The avatar's path planning employs our previously proposed motion path synthesis, called two-stage process synthesis. Experimental results show the efficiency and promise of our approach.
- Anthropomorphic Intelligent Robotics, Artificial Life | Pp. 105-112
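The abstract's notion of a perception sensor can be illustrated with a simple geometric visibility test: an object is "perceived" if it falls within the avatar's field-of-view cone and sensing range. This is only a sketch of the viewpoint/sensor step; the paper's full model also integrates a color model and 3D object matching, which are not reproduced here.

```python
import math

# Minimal sketch of a 2D perception sensor: an object is visible if it
# lies within the avatar's field-of-view cone and sensing range.

def in_view(avatar_pos, facing_deg, obj_pos, fov_deg=120.0, max_range=10.0) -> bool:
    dx, dy = obj_pos[0] - avatar_pos[0], obj_pos[1] - avatar_pos[1]
    dist = math.hypot(dx, dy)
    if dist > max_range:
        return False
    angle_to_obj = math.degrees(math.atan2(dy, dx))
    # Smallest signed angular difference between facing direction and object bearing.
    diff = (angle_to_obj - facing_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= fov_deg / 2.0

print(in_view((0, 0), 0.0, (5, 2)))   # True: within cone and range
print(in_view((0, 0), 0.0, (-5, 0)))  # False: behind the avatar
```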
doi: 10.1007/11941354_13
Steering Behavior Model of Visitor NPCs in Virtual Exhibition
Kyungkoo Jun; Meeyoung Sung; Byoungjo Choi
We propose a model to simulate virtual characters (NPCs) that walk around a virtual exhibition environment and view exhibits one by one, just as humans do. A steering behavior model for such environments is significant in that it raises the level of realism of virtually recreated exhibition spaces, such as museums or city areas with points of attraction. However, currently available steering behavior models and techniques are ill-suited to generating such behavior patterns. Our model is autonomous and individual-centric: virtual characters autonomously determine which exhibits they are going to see, in what order, from which viewpoint, and for how long, based on their personal preferences. We also describe the implementation of our model and provide graphical results of simulations.
- Anthropomorphic Intelligent Robotics, Artificial Life | Pp. 113-121
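A minimal sketch of the individual-centric idea follows: each NPC ranks exhibits by personal preference and visits them in that order, dwelling longer at better-liked ones. The exhibit names, preference scale, and dwell-time formula are illustrative assumptions, not the paper's model.

```python
# Each NPC plans its own tour from personal preferences: visit order is by
# descending preference, and dwell time grows with how much the NPC likes
# the exhibit. All values here are made up for illustration.

exhibits = ["fresco", "bronze statue", "pottery", "coin cabinet"]

def plan_visit(preferences: dict, base_dwell_s: float = 30.0):
    order = sorted(exhibits, key=lambda e: preferences.get(e, 0.0), reverse=True)
    return [(e, base_dwell_s * (1.0 + preferences.get(e, 0.0))) for e in order]

npc_prefs = {"fresco": 0.9, "pottery": 0.4, "coin cabinet": 0.1, "bronze statue": 0.7}
for exhibit, dwell in plan_visit(npc_prefs):
    print(f"view {exhibit} for {dwell:.0f}s")
```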
doi: 10.1007/11941354_14
Scalable Architecture and Content Description Language for Mobile Mixed Reality Systems
Fumihisa Shibata; Takashi Hashimoto; Koki Furuno; Asako Kimura; Hideyuki Tamura
We propose a new scalable architecture for mobile mixed reality (MR) systems and a content description language to be used with it. Several architectures already exist for mobile MR systems; however, most of them are device-specific. The architecture we propose here can accommodate a variety of devices, from mobile phones to notebook PCs. We have designed a concrete specification for our architecture and content description language, and have confirmed their viability by implementing several applications on various mobile devices.
- Augmented Reality/Mixed Reality | Pp. 122-131
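The paper's actual content description language is not reproduced in this abstract, so the sketch below uses a hypothetical XML fragment to illustrate the scalability idea: one content file carries per-device-class variants, and each device selects the variant it can handle at load time.

```python
import xml.etree.ElementTree as ET

# Hypothetical scene description: one file, variants per device class.
# Element names and attributes are assumptions, not the paper's language.
CONTENT = """
<scene id="campus-guide">
  <object id="arrow">
    <variant device="phone"    model="arrow_low.md2"  texture="none"/>
    <variant device="pda"      model="arrow_mid.md2"  texture="small"/>
    <variant device="notebook" model="arrow_high.md2" texture="full"/>
  </object>
</scene>
"""

def load_for_device(xml_text: str, device_class: str):
    """Pick, for each object, the variant matching the device class."""
    scene = ET.fromstring(xml_text)
    chosen = []
    for obj in scene.iter("object"):
        for variant in obj.iter("variant"):
            if variant.get("device") == device_class:
                chosen.append((obj.get("id"), variant.get("model")))
    return chosen

print(load_for_device(CONTENT, "phone"))  # [('arrow', 'arrow_low.md2')]
```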
doi: 10.1007/11941354_15
A Novel Sound Localization Experiment for Mobile Audio Augmented Reality Applications
Nick Mariette
This paper describes a subjective experiment in progress that studies human sound localization using mobile audio augmented reality systems. The experiment also serves to validate a new methodology for studying sound localization in which the subject is outdoors and freely mobile, experiencing virtual sound objects corresponding to real visual objects. Subjects indicate the perceived location of a static virtual sound source presented over headphones by walking to a position where the auditory image coincides with a real visual object. This novel response method accounts for multimodal perception and interaction via self-motion, both of which are ignored by traditional sound localization experiments performed indoors with a seated subject and minimal visual stimuli. Results for six subjects give a mean localization error of approximately thirteen degrees, significantly lower error for discrete binaural rendering than for ambisonic rendering, and no significant variation across filter lengths of 64, 128, and 200 samples.
- Augmented Reality/Mixed Reality | Pp. 132-142
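The error metric implied by the experiment can be sketched as the angular difference between the true bearing of the virtual source and the bearing of the position the subject walks to, both taken from the listener's location. The coordinates below are illustrative.

```python
import math

# Sketch of an angular localization-error metric for a walking response.

def bearing_deg(origin, target):
    """Compass-style bearing (degrees) from origin to target."""
    return math.degrees(math.atan2(target[0] - origin[0], target[1] - origin[1]))

def localization_error_deg(listener, true_source, indicated_point):
    diff = bearing_deg(listener, true_source) - bearing_deg(listener, indicated_point)
    return abs((diff + 180.0) % 360.0 - 180.0)  # wrap to [0, 180]

# A subject at the origin mislocates a source about 15 degrees clockwise:
print(localization_error_deg((0, 0), (0, 10), (2.7, 10)))  # ~15.1
```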
doi: 10.1007/11941354_16
A Tangible User Interface for Remote Collaboration System Using Mixed Reality
Yuichi Bannai; Hidekazu Tamaki; Yuji Suzuki; Hiroshi Shigeno; Kenichi Okada
In this paper we propose a remote collaboration system mediated by a tangible object held by each user. The process and result of interaction with the object are depicted through overlaid CG images using mixed reality and shared between users. This system enables the user to interact with the object naturally via touch. We implemented a pointing function for the object and conducted an experimental evaluation to investigate its effectiveness. The results show that the pointing task was accomplished correctly within a practical length of time. Finally, we describe the applicability of our concept to the field of remote communication mediated by tangible objects.
- Augmented Reality/Mixed Reality | Pp. 143-154
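One plausible reading of the pointing function is a ray cast from the pointing device intersected with the tangible object, with the hit point shared so both users' overlaid CG can highlight the same spot. The sphere approximation and data layout below are assumptions, not the paper's implementation.

```python
# Hypothetical pointing sketch: intersect a pointing ray with the tangible
# object, approximated as a sphere, and return the hit point to share.

def point_on_object(ray_origin, ray_dir, center, radius):
    """Standard ray-sphere intersection; nearest hit point or None.
    Assumes ray_dir is unit length (so the quadratic's a == 1)."""
    ox, oy, oz = (ray_origin[i] - center[i] for i in range(3))
    b = 2 * (ray_dir[0] * ox + ray_dir[1] * oy + ray_dir[2] * oz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return None  # ray misses the object
    t = (-b - disc ** 0.5) / 2
    if t < 0:
        return None  # object is behind the ray origin
    return tuple(ray_origin[i] + t * ray_dir[i] for i in range(3))

print(point_on_object((0, 0, -5), (0, 0, 1), (0, 0, 0), 1.0))  # (0.0, 0.0, -1.0)
```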
doi: 10.1007/11941354_17
Multi-sensor Data Fusion Based on Fuzzy Integral in AR System
Yan Feng; Yimin Chen; Minghui Wang
In this paper, a data fusion model based on the notion of the fuzzy integral is presented to combine the results of multiple tracking sensors in augmented reality (AR). Given the application characteristics of AR, the tracking range and the tracking error are chosen as evaluation factors. A method for dynamically assigning weighting factors, using a comprehensive performance evaluation of the individual sensors, is also proposed. The fuzzy integral approach relieves the user of the burden of tuning fusion parameters. Experiments demonstrate that our fusion algorithm markedly improves tracking precision and consequently enhances the three-dimensional realism of the AR system.
- Augmented Reality/Mixed Reality | Pp. 155-162
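The fusion step can be sketched with a discrete Choquet fuzzy integral over per-sensor confidence scores. The toy fuzzy measure below is hand-picked for illustration; the paper derives its weighting dynamically from tracking range and tracking error, which is not reproduced here.

```python
# Sketch: fusing per-sensor scores with a discrete Choquet fuzzy integral.
# Sensor names and the fuzzy measure are illustrative assumptions.

def choquet(scores: dict, measure) -> float:
    """Discrete Choquet integral of per-sensor scores w.r.t. a set measure."""
    items = sorted(scores.items(), key=lambda kv: kv[1])  # ascending scores
    total, prev = 0.0, 0.0
    for i, (_, score) in enumerate(items):
        subset = frozenset(name for name, _ in items[i:])  # sensors scoring >= current
        total += (score - prev) * measure(subset)
        prev = score
    return total

def toy_measure(subset: frozenset) -> float:
    # Hand-picked weights, clipped so the full set measures 1.0.
    weights = {"camera": 0.5, "gyro": 0.3, "magnetic": 0.4}
    return min(1.0, sum(weights[s] for s in subset))

print(choquet({"camera": 0.9, "gyro": 0.6, "magnetic": 0.7}, toy_measure))  # 0.79
```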
doi: 10.1007/11941354_18
AR Baseball Presentation System with Integrating Multiple Planar Markers
Yuko Uematsu; Hideo Saito
This paper presents an “On-line AR Baseball Presentation System”, a vision-based AR application for entertainment. In this system, a user can watch a virtual baseball game scene on a real baseball field model placed on a tabletop, through a web camera attached to a hand-held LCD monitor. The virtual baseball scene is synthesized from the input history data of an actual baseball game; visualizing this history data helps the user understand the contents of the game. To align the coordinate system of the virtual baseball game scene with that of the real field model, we use multiple planar markers manually distributed over the real field model. In contrast with most AR approaches that use multiple markers, we do not need any manual measurement of the geometrical relationship of the markers, so the user can easily start and enjoy this system.
- Augmented Reality/Mixed Reality | Pp. 163-174
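The claim that no manual measurement of the marker layout is needed rests on a standard observation: if the camera sees two markers in the same frame, their relative pose can be derived from the two camera-from-marker poses. A sketch with 4x4 homogeneous transforms follows; the pose values are made up for illustration.

```python
import numpy as np

# Deriving the relative pose of two planar markers seen in one frame,
# given each marker's camera-from-marker pose (4x4 homogeneous matrices).

def relative_pose(T_cam_from_a: np.ndarray, T_cam_from_b: np.ndarray) -> np.ndarray:
    """Pose of marker b expressed in marker a's coordinate system."""
    return np.linalg.inv(T_cam_from_a) @ T_cam_from_b

def translation(tx, ty, tz):
    T = np.eye(4)
    T[:3, 3] = (tx, ty, tz)
    return T

T_cam_a = translation(0.0, 0.0, 1.0)  # marker a, 1 m in front of the camera
T_cam_b = translation(0.3, 0.0, 1.0)  # marker b, 30 cm to the side
print(relative_pose(T_cam_a, T_cam_b)[:3, 3])  # [0.3 0.  0. ]
```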
doi: 10.1007/11941354_19
An Efficient 3D Registration Method Using Markerless Image in AR-Based CNC Machining Simulation
Z. Y. Ma; Y. P. Chen; C. M. Yuan; Z. D. Zhou
In this paper we present a model-based approach for real-time recognition and registration in augmented reality applications. It does not need any artificial markers to track the target; all the system needs is a 3D model of the target and some representative learning images. The main contribution of our work is the idea of transferring the construction of corresponding point pairs between the model and the real scene to the calculation of an affine matrix between a pair of 2D images. Our method is a two-stage process: in the first stage, a set of features is learned from the training images; the second stage matches the learned features with those obtained from the real scene. If the target is recognized, the final correspondences used for registration are built with the help of the calculated affine matrix. The system is robust to large viewpoint changes and partial occlusions, and, while maintaining stability, performs well in reducing the computational burden.
- Augmented Reality/Mixed Reality | Pp. 175-184
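The core step the abstract describes, computing an affine matrix between a learned image and the live view, can be sketched as a linear least-squares fit over matched point pairs. The correspondences below are synthetic; a real pipeline would obtain them from feature matching.

```python
import numpy as np

# Sketch: estimating a 2D affine transform from matched point pairs.

def fit_affine(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Least-squares 2x3 affine A such that dst ~= A @ [src; 1]."""
    n = len(src)
    X = np.hstack([src, np.ones((n, 1))])        # n x 3 homogeneous source points
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)  # solves X @ A = dst, A is 3 x 2
    return A.T                                   # 2 x 3

src = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], float)
true_A = np.array([[1.2, -0.1, 3.0], [0.2, 0.9, -1.0]])
dst = (true_A @ np.hstack([src, np.ones((4, 1))]).T).T  # synthetic matches
print(np.round(fit_affine(src, dst), 3))  # recovers true_A
```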
doi: 10.1007/11941354_20
Shared Design Space: Sketching Ideas Using Digital Pens and a Large Augmented Tabletop Setup
Michael Haller; Peter Brandl; Daniel Leithinger; Jakob Leitner; Thomas Seifried; Mark Billinghurst
Collaborative Augmented Reality (AR) setups are becoming increasingly popular. We have developed a collaborative tabletop environment designed for brainstorming and discussion meetings. Using a digital pen, participants can annotate not only virtual paper but also real printouts. By integrating both physical and digital paper, we combine virtual and real 2D drawings and digital data, overlaid into a single information space. In this paper, we describe why we have integrated these devices in a unique way and how they can be used efficiently during a design process.
- Augmented Reality/Mixed Reality | Pp. 185-196