Publication catalog - books
Advances in Artificial Reality and Tele-Existence: 16th International Conference on Artificial Reality and Telexistence, ICAT 2006, Hangzhou, China, November 28 - December 1, 2006, Proceedings
Zhigeng Pan ; Adrian Cheok ; Michael Haller ; Rynson W. H. Lau ; Hideo Saito ; Ronghua Liang (eds.)
Abstract/Description – provided by the publisher
Not available.
Keywords – provided by the publisher
User Interfaces and Human Computer Interaction; Artificial Intelligence (incl. Robotics); Information Systems Applications (incl. Internet); Multimedia Information Systems; Image Processing and Computer Vision; Computer Appl. in Arts and Humanities
Availability
Detected institution | Publication year | Browse | Download | Request |
---|---|---|---|---|
Not detected | 2006 | SpringerLink | | |
Information
Resource type:
books
Print ISBN
978-3-540-49776-9
Electronic ISBN
978-3-540-49779-0
Publisher
Springer Nature
Country of publication
United Kingdom
Publication date
2006
Publication rights information
© Springer-Verlag Berlin Heidelberg 2006
Table of contents
doi: 10.1007/11941354_101
Wearable Multimodal Interface for Helping Visually Handicapped Persons
Byeong-Seok Shin; Honam Ahn; Gye Young Kim
Persons who partially or entirely lose a sense struggle in daily life because the missing sensory channel cannot easily be replaced. We propose a system that provides alternative sensory channels for visually handicapped persons through a wearable multimodal interface. It acquires visual information with a variety of sensors, converts it to high-level information, and presents that information through alternative senses such as sound or vibration. The system consists of two sub-systems. The first is an obstacle detection and avoidance module based on multiple ultrasound sensors and a tracker, which helps the user reach a target place easily and safely (a simplified sketch of such a distance-to-feedback mapping follows this entry). The second is a stereo vision system that tracks target objects, recognizes persons around the user, and reports the color and gray level of a specific object. Since the system is designed for wearable computers, visually handicapped persons can carry and operate it easily.
- Ubiquitous/Wearable Computing | Pp. 980-988
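The sketch referenced above, in Python: a minimal distance-to-vibration mapping of the kind the obstacle module implies. The sensor names, the 4 m range, and the linear intensity ramp are illustrative assumptions, not details from the paper.

```python
# Minimal sketch (not the authors' code): mapping ultrasound range readings
# to vibration intensity for the obstacle-avoidance module. Sensor names,
# range, and the linear ramp are illustrative assumptions.

def vibration_level(distance_m, max_range_m=4.0):
    """Map an ultrasound distance reading to a vibration intensity in [0, 1].

    Closer obstacles produce stronger feedback; readings beyond the
    sensor's useful range produce none.
    """
    if distance_m >= max_range_m:
        return 0.0
    # Linear ramp: 1.0 at contact, 0.0 at the edge of the sensing range.
    return 1.0 - distance_m / max_range_m

# Example: readings from three hypothetical chest-mounted sensors
# (left, center, right), each driving its own vibration motor.
readings = {"left": 2.8, "center": 0.6, "right": 3.9}
for sensor, d in readings.items():
    print(f"{sensor}: distance {d:.1f} m -> vibration {vibration_level(d):.2f}")
```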
doi: 10.1007/11941354_102
Personalized Primary Port: Analysis of User View for the Smart Environment
Wei-jin Park; Junhwan Kim; Doo-Hwan Bae
In this paper, a user-side view of the smart space based on a hand-held device is proposed. We give precedence to the personalized primary port (PPP), currently the cell phone, which is the most popular and most capable personal device in the ubiquitous computing environment. It can serve as the gateway to ubiquitous computing in the smart environment. The implementation focuses on combining short-term sensing using an accelerometer with long-term user-preference sensing through a software robot (a sketch of this combination follows this entry).
- Ubiquitous/Wearable Computing | Pp. 989-997
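The sketch referenced above: one way short-term accelerometer sensing could be combined with a long-term preference profile on the PPP. The activity thresholds, labels, and preference structure are hypothetical; the abstract only describes the combination at a high level.

```python
# Illustrative sketch only: short-term accelerometer sensing plus a
# long-term preference profile. Thresholds and labels are assumptions.
import math

def classify_motion(accel_samples):
    """Crude short-term activity guess from accelerometer magnitudes (in g)."""
    energy = sum(abs(math.sqrt(x*x + y*y + z*z) - 1.0) for x, y, z in accel_samples)
    avg = energy / len(accel_samples)
    if avg < 0.05:
        return "still"
    return "walking" if avg < 0.5 else "running"

class PrimaryPort:
    """Hand-held 'personalized primary port' holding long-term preferences."""
    def __init__(self):
        self.preferences = {}  # learned over time, e.g. by a software robot

    def observe(self, context, choice):
        self.preferences[context] = choice  # remember the user's choice

    def suggest(self, accel_samples):
        context = classify_motion(accel_samples)
        return context, self.preferences.get(context, "no preference yet")

port = PrimaryPort()
port.observe("walking", "audio-only notifications")
samples = [(0.1, 0.9, 0.5), (0.3, 1.2, 0.2), (0.0, 0.8, 0.6)]
print(port.suggest(samples))  # -> ('walking', 'audio-only notifications')
```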
doi: 10.1007/11941354_103
Mobile Information Presentation Schemes for Supra-adaptive Logistics Applications
Björn Schwerdtfeger; Troels Frimor; Daniel Pustka; Gudrun Klinker
In supra-adaptive logistics systems, operative workers must be able to quickly and precisely retrieve objects from large storage areas with minimal training. We explore concepts for providing workers with location-based wayfinding and picking information on mobile and spatial displays, varying both the visualization scheme (1D: textual descriptions, 2D: map-based overviews, 3D: AR-based arrows) and the display device (head-attached, hand-held, and spatial displays); the resulting condition grid is sketched after this entry. In this paper we describe the system and report on first user evaluations.
- Ubiquitous/Wearable Computing | Pp. 998-1007
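The condition grid referenced above, as it might be enumerated in code. The 3 × 3 crossing of schemes and displays is taken from the abstract; how the study actually organized its conditions is an assumption.

```python
# Sketch of the experimental condition space implied by the abstract:
# three visualization schemes crossed with three display types.
from itertools import product

SCHEMES = {
    "1D": "textual descriptions",
    "2D": "map-based overview",
    "3D": "AR-based arrows",
}
DISPLAYS = ["head-attached", "hand-held", "spatial"]

def condition_grid():
    """Yield every (scheme, display) pairing for a within-subjects study."""
    for (scheme, desc), display in product(SCHEMES.items(), DISPLAYS):
        yield {"scheme": scheme, "rendering": desc, "display": display}

for cond in condition_grid():  # prints all nine study conditions
    print(cond)
```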
doi: 10.1007/11941354_104
Interactive Collaborative Scene Assembly Using AR on Mobile Phones
Miroslav Andel; Alexander Petrovski; Anders Henrysson; Mark Ollila
In this paper we present and evaluate a platform for interactive, collaborative, face-to-face Augmented Reality using a distributed scene graph on mobile phones (a minimal sketch of such scene-graph replication follows this entry). The results of individual actions are shown on the screen of every connected phone in real time. We show how multiple collaborators can use consumer camera phones to furnish a room together in an Augmented Reality environment. We also present a user case study investigating how untrained users adopt this novel technology and how multiple users collaborate. The platform is fully independent of a PC server, though a PC client can be connected for high-quality visualization on a large display device such as a projector or a plasma display.
- Ubiquitous/Wearable Computing | Pp. 1008-1017
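The scene-graph replication sketch referenced above: a toy replicated scene graph where every peer applies the same serialized edits, so all phones converge on the same state. The message format and operations are assumptions; the platform's actual protocol is not described in the abstract.

```python
# Toy replicated scene graph: every peer applies the same serialized
# edits, so all connected phones converge on identical state.
import json

class SharedSceneGraph:
    """Tiny replicated scene graph keyed by node id."""
    def __init__(self):
        self.nodes = {}

    def apply(self, edit):
        """Apply a serialized edit received from any peer."""
        op = json.loads(edit)
        if op["action"] in ("add", "move"):
            self.nodes[op["id"]] = op["transform"]
        elif op["action"] == "remove":
            self.nodes.pop(op["id"], None)

def make_edit(action, node_id, transform=None):
    return json.dumps({"action": action, "id": node_id, "transform": transform})

# Two "phones" replicating the same room-furnishing session.
phone_a, phone_b = SharedSceneGraph(), SharedSceneGraph()
edit = make_edit("add", "sofa-1", {"pos": [1.0, 0.0, 2.5], "yaw": 90})
for peer in (phone_a, phone_b):   # in reality sent over the network
    peer.apply(edit)
print(phone_a.nodes == phone_b.nodes)  # True: state kept consistent
```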
doi: 10.1007/11941354_105
Design of the Multimedia Communication Protocol and System for Wireless LAN
Qingzhang Chen; Jianghong Han; Keji Mao
The IEEE 802.11 wireless LAN standard provides DCF/PCF for real-time transmission, but these mechanisms do not meet the needs of multimedia transmission, so a new MAC protocol is developed to solve this problem. In this paper we discuss how multimedia data streams are implemented and processed in a WLAN. We model multimedia transmission as synchronized multi-channel access. Starting from QoS parameters, we map the multimedia data streams to different priorities and use a negotiation approach to allocate resources (a simplified priority mapping is sketched after this entry).
- Ubiquitous/Wearable Computing | Pp. 1018-1025
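The priority mapping referenced above: a simplified sketch of assigning MAC priorities from QoS parameters and admitting streams by negotiation. The delay classes, thresholds, and admission rule are assumptions, not the paper's protocol.

```python
# Illustrative sketch: mapping stream QoS requirements to access
# priorities and admitting streams against a channel budget.

def priority_for(stream):
    """Assign a MAC priority from simple QoS parameters (lower = served first)."""
    if stream["max_delay_ms"] <= 30:      # e.g. interactive voice
        return 0
    if stream["max_delay_ms"] <= 100:     # e.g. streaming video
        return 1
    return 2                              # best-effort data

def negotiate(streams, capacity_kbps):
    """Admit streams in priority order until the channel budget is spent."""
    admitted, used = [], 0
    for s in sorted(streams, key=priority_for):
        if used + s["rate_kbps"] <= capacity_kbps:
            admitted.append(s["name"])
            used += s["rate_kbps"]
    return admitted

streams = [
    {"name": "voice", "max_delay_ms": 20, "rate_kbps": 64},
    {"name": "video", "max_delay_ms": 80, "rate_kbps": 1500},
    {"name": "ftp",   "max_delay_ms": 5000, "rate_kbps": 2000},
]
print(negotiate(streams, capacity_kbps=2000))  # -> ['voice', 'video']
```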
doi: 10.1007/11941354_106
Representation of Color in Ancient Noh Masks
Xin Yin; Yasuo Miichi; Hiromi T. Tanaka
Noh is a traditional Japanese dramatic art. The color of a Noh mask constantly changes in appearance because of natural factors. In this paper, a digital technique for recovering the original color of Noh masks is introduced. First, an image showing the surface normals is generated from the reconstructed 3D Noh mask mesh. Then, the color of the Noh mask is updated by reference to the colors of surrounding points (a simplified version of this neighborhood update is sketched after this entry), yielding the original color. Finally, the technique is applied to a modern Noh mask and an ancient Noh mask. The technique introduced in this paper can also be used to recover the original color of other 3D objects.
- Virtual Heritage, Virtual Medicine and Health Science | Pp. 1026-1034
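The neighborhood update referenced above, reduced to plain color averaging on a small grid. The real method also exploits a normal image derived from the 3D mesh, which this sketch omits.

```python
# Simplified neighborhood update: replace damaged cells with the mean
# color of their valid 4-neighbors. The paper's normal-image step is
# omitted; this illustrates only the "surrounding points" idea.

def restore_color(grid, damaged):
    """grid: 2D list of (r, g, b); damaged: set of (row, col) to repair."""
    h, w = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for (i, j) in damaged:
        neighbors = [grid[i+di][j+dj]
                     for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                     if 0 <= i+di < h and 0 <= j+dj < w
                     and (i+di, j+dj) not in damaged]
        if neighbors:  # average each RGB channel over surviving neighbors
            out[i][j] = tuple(sum(c[k] for c in neighbors) // len(neighbors)
                              for k in range(3))
    return out

# 3x3 patch with one faded pixel at the center.
patch = [[(200, 150, 120)] * 3 for _ in range(3)]
patch[1][1] = (90, 60, 50)
print(restore_color(patch, {(1, 1)})[1][1])  # -> (200, 150, 120)
```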
doi: 10.1007/11941354_107
Modeling and Immersive Display of Cultural Heritage
Lei Zhao; Duan-qing Xu; Yi Li; Yi-min Yu
In order to display virtual cultural heritage environments in a CAVE based on multiple inexpensive PCs, we implement a collaborative display platform called Multipro. The application developer does not need to know anything about stereo display principles or the platform architecture. We reconstruct the model of cavern No. 285 of Dunhuang and the Hemudu virtual environments using a variety of software and techniques. The paper also addresses the modeling and display of immersive virtual heritage environments (VHEs). We describe in detail the methods we use to model the heritage objects, fill the holes in the raw heritage models, and remove noise from the models (a generic smoothing step of this kind is sketched after this entry). This work is being developed in the context of the National Grand Fundamental Research 973 Program of China.
- Virtual Heritage, Virtual Medicine and Health Science | Pp. 1035-1044
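The smoothing step referenced above: Laplacian smoothing is one standard way to remove noise from raw scanned models. The paper does not state that this is its exact method, so treat this purely as a generic illustration of the mesh clean-up the abstract mentions.

```python
# Generic mesh denoising sketch: Laplacian smoothing pulls each vertex
# toward the average of its neighbors. Not the paper's stated method.

def laplacian_smooth(vertices, neighbors, lam=0.5, iterations=5):
    """vertices: list of (x, y, z); neighbors: adjacency list of indices.

    Vertices with an empty neighbor list are kept fixed (pinned).
    """
    verts = [list(v) for v in vertices]
    for _ in range(iterations):
        new = []
        for i, v in enumerate(verts):
            ns = neighbors[i]
            if not ns:
                new.append(v)
                continue
            centroid = [sum(verts[n][k] for n in ns) / len(ns) for k in range(3)]
            # Blend each vertex toward its neighborhood centroid.
            new.append([v[k] + lam * (centroid[k] - v[k]) for k in range(3)])
        verts = new
    return verts

# Tiny example: a noisy midpoint on a straight edge, endpoints pinned.
vs = [(0, 0, 0), (0.5, 0.3, 0.0), (1, 0, 0)]
adj = [[], [0, 2], []]
print(laplacian_smooth(vs, adj)[1])  # -> [0.5, 0.009375, 0.0]
```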
doi: 10.1007/11941354_108
Personalized Content Presentation for Virtual Gallery
Wonil Kim; Hanku Lee; Kyoungro Yoon; Hyungseok Kim; Changduk Jung
Utilizing Virtual Reality technologies for virtual museums brings new ways of interactive content presentation. In addition to interactivity, personalization is an important emerging issue in digital content management, especially with virtual reality. For a virtual museum or gallery, selecting and presenting personalized content improves the user experience when navigating huge collections such as the Musée du Louvre or the British Museum. In this paper, we present a personalization method for massive multimedia content in virtual galleries. The proposed method targets pictures, which make up the bulk of the source material in galleries. The method is based on classified image features extracted using standard MPEG-7 visual descriptors. Using neural networks, the best-matching pictures are selected and presented in the virtual gallery by choosing styles similar to the extracted preference features (a stand-in for this classification stage is sketched after this entry). The simulation results show that the proposed system classifies images into the correct classes at a rate of over 75%, depending on the employed features. We apply the result in a virtual gallery application that enables automatically personalized gallery browsing.
- Virtual Heritage, Virtual Medicine and Health Science | Pp. 1045-1054
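The classification stand-in referenced above: feature vectors (standing in for MPEG-7 visual descriptors, which are not computed here) are assigned to style classes. A nearest-centroid rule replaces the paper's neural network for brevity.

```python
# Stand-in for the classification stage: hypothetical feature vectors
# are matched to style classes by nearest centroid (the paper uses a
# neural network over MPEG-7 descriptors instead).

def centroid(vectors):
    return [sum(xs) / len(xs) for xs in zip(*vectors)]

def classify(features, class_examples):
    """Return the class whose example centroid is closest to `features`."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    cents = {label: centroid(vecs) for label, vecs in class_examples.items()}
    return min(cents, key=lambda label: dist2(features, cents[label]))

# Hypothetical 3-dimensional descriptors for two painting styles.
examples = {
    "impressionist": [[0.8, 0.2, 0.5], [0.7, 0.3, 0.6]],
    "baroque":       [[0.2, 0.9, 0.1], [0.3, 0.8, 0.2]],
}
print(classify([0.75, 0.25, 0.55], examples))  # -> 'impressionist'
```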
doi: 10.1007/11941354_109
Accessible Information from Radiated Sound of Extracorporeal Shock Wave Lithotriptor for Improving Fragmentation Efficiency
Yun Seok Jang; Jin Ho Choi
This paper investigates the information carried by the sounds radiated during treatment with an extracorporeal shock wave lithotriptor (ESWL). First, we observe the vibration of objects through the radiated sounds and extract information about the change of the peak frequency from the relation between the vibration and the radiated sounds (peak-frequency extraction is sketched after this entry). Next, we observe how the peak frequency changes with the position of the focus and demonstrate that it is more difficult to vibrate objects exactly at the focal point than above or below the focus of the ESWL device. Lastly, we analyze the relation between cavitation and the radiated sounds and examine the conditions that benefit object fragmentation.
- Virtual Heritage, Virtual Medicine and Health Science | Pp. 1055-1061
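The peak-frequency extraction referenced above: a windowed FFT over a recorded signal. The synthetic 5 kHz tone stands in for a real ESWL sound recording; sampling rate and signal length are arbitrary choices.

```python
# Peak-frequency extraction via a windowed FFT. The synthetic tone is
# a stand-in for a real ESWL radiated-sound recording. Requires NumPy.
import numpy as np

def peak_frequency(signal, sample_rate):
    """Return the dominant frequency (Hz) of a real-valued signal."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

fs = 100_000                       # 100 kHz sampling (illustrative)
t = np.arange(4096) / fs
tone = np.sin(2 * np.pi * 5_000 * t) + 0.1 * np.random.randn(t.size)
print(f"peak ~ {peak_frequency(tone, fs):.0f} Hz")   # ~ 5000 Hz
```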
doi: 10.1007/11941354_110
Visualization of Segmented Color Volume Data Using GPU
Koo-Joo Kwon; Byeong-Seok Shin
Recently, several color volume datasets, such as the Visible Human, have become available for generating realistic images. These datasets are commonly processed on the CPU; however, CPU rendering is time-consuming. A GPU-based volume rendering method can visualize color volume data more easily and quickly because the GPU provides 3D textures with RGB channels. In this paper, we present a GPU-based visualization method for segmented color volume data. The rendering stage normally needs two volume datasets, the color volume and the segmented volume, but the segmented volume requires additional memory. In our method, we use only one 3D texture on the GPU, encoding three kinds of values: color, segment index, and tag values (the packing idea is sketched after this entry). The segment index identifies the internal organ, and the tag values mark the region of interest. We can quickly visualize a color image of the real human body without additional memory.
- Virtual Heritage, Virtual Medicine and Health Science | Pp. 1062-1069
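The packing idea referenced above: one plausible layout that stores color, segment index, and tag values in a single RGBA volume so only one 3D texture is needed. Sharing the alpha byte between index and tag is an assumption; the paper's exact encoding is not given.

```python
# Plausible single-texture packing: RGB color plus an alpha byte shared
# by segment index (high nibble) and tag (low nibble). Requires NumPy.
import numpy as np

def pack_volume(rgb, segment_index, tag):
    """Pack an RGB volume plus per-voxel metadata into one RGBA uint8 volume.

    rgb: (D, H, W, 3) uint8; segment_index and tag: (D, H, W), values 0..15.
    """
    alpha = (segment_index.astype(np.uint8) << 4) | (tag.astype(np.uint8) & 0x0F)
    return np.concatenate([rgb, alpha[..., None]], axis=-1)

def unpack_meta(rgba):
    """Recover segment index and tag from the alpha channel (as a shader would)."""
    alpha = rgba[..., 3]
    return alpha >> 4, alpha & 0x0F

d = h = w = 2
rgb = np.full((d, h, w, 3), 128, dtype=np.uint8)
seg = np.full((d, h, w), 3, dtype=np.uint8)    # organ id 3
tag = np.full((d, h, w), 1, dtype=np.uint8)    # marked as region of interest
vol = pack_volume(rgb, seg, tag)               # one RGBA texture upload
idx, roi = unpack_meta(vol)
print(vol.shape, idx[0, 0, 0], roi[0, 0, 0])   # (2, 2, 2, 4) 3 1
```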