Publications catalog - books
Advances in Artificial Reality and Tele-Existence: 16th International Conference on Artificial Reality and Telexistence, ICAT 2006, Hangzhou, China, November 28 - December 1, 2006, Proceedings
Zhigeng Pan ; Adrian Cheok ; Michael Haller ; Rynson W. H. Lau ; Hideo Saito ; Ronghua Liang (eds.)
Abstract/Description – provided by the publisher
Not available.
Keywords – provided by the publisher
User Interfaces and Human Computer Interaction; Artificial Intelligence (incl. Robotics); Information Systems Applications (incl. Internet); Multimedia Information Systems; Image Processing and Computer Vision; Computer Appl. in Arts and Humanities
Availability
Detected institution | Publication year | Browse | Download | Request |
---|---|---|---|---|
Not detected | 2006 | SpringerLink | | |
Information
Resource type:
books
Print ISBN
978-3-540-49776-9
Electronic ISBN
978-3-540-49779-0
Publisher
Springer Nature
Country of publication
United Kingdom
Publication date
2006
Copyright information
© Springer-Verlag Berlin Heidelberg 2006
Table of contents
doi: 10.1007/11941354_1
Gesture Recognition Based on Context Awareness for Human-Robot Interaction
Seok-Ju Hong; Nurul Arif Setiawan; Song-Gook Kim; Chil-Woo Lee
In this paper, we describe a vision-based algorithm that enables natural communication between humans and robots for Human-Robot Interaction. We propose a state transition model using attentive features for gesture recognition. This method defines the recognition procedure as five different states: NULL, OBJECT, POSE, Local Gesture and Global Gesture. We first infer the situation of the system by estimating the transitions of the state model, and then apply different recognition algorithms according to the system state for robust recognition. We also propose an Active Plane Model (APM) that can represent 3D and 2D gesture information simultaneously. The method constructs a gesture space by analyzing the statistical information of training images with PCA, and the symbolized images are recognized with an HMM as one of the model gestures. The proposed algorithm can therefore be used efficiently in real-world applications such as controlling intelligent home appliances and humanoid robots.
- Anthropomorphic Intelligent Robotics, Artificial Life | Pp. 1-10
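The PCA stage sketched in the abstract above (building a gesture space from training images) could look like the following; this is an illustrative reconstruction, not the paper's code, and every function name and parameter here is an assumption. The HMM recognition stage is omitted.

```python
import numpy as np

def build_gesture_space(train_images, n_components=8):
    """PCA over flattened training images: returns (mean, basis).

    train_images: array of shape (n_samples, n_pixels).
    The top principal components span the "gesture space"
    described in the abstract.
    """
    X = np.asarray(train_images, dtype=float)
    mean = X.mean(axis=0)
    # SVD of the centered data yields the principal axes in Vt.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def symbolize(image, mean, basis):
    """Project one image into the gesture space (its PCA coefficients)."""
    return basis @ (np.ravel(image) - mean)

# Toy usage: 20 random 16x16 "images" reduced to 8 coefficients each.
rng = np.random.default_rng(0)
train = rng.random((20, 256))
mean, basis = build_gesture_space(train, n_components=8)
coeffs = symbolize(train[0], mean, basis)
print(coeffs.shape)  # (8,)
```

In a full pipeline, sequences of such coefficient vectors would be quantized into symbols and fed to per-gesture HMMs for classification.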
doi: 10.1007/11941354_2
Evolving Creatures in Virtual Ecosystems
Nicolas Lassabe; Herve Luga; Yves Duthen
In this article, we introduce the first steps of our 3D ecosystem project. We present the latest research on the morphology, behavior and virtual environments of evolving creatures, and investigate future possibilities for evolving more complex lifeforms in dynamic environments. We propose a model of an ecosystem that uses a new type of classifier-based controller for evolving creatures. Our model comprises three different lifeforms: insects, plants and creatures, all based on a genetic coding and evolving by evolutionary methods. In the future, we intend to add more lifeforms and more interactions. To this end, we have chosen Breve, an existing physics engine centered on artificial-life simulation. In this article, we show results of evolving creatures able to crawl, walk and climb stairs.
- Anthropomorphic Intelligent Robotics, Artificial Life | Pp. 11-20
doi: 10.1007/11941354_3
The Application of Affective Virtual Human in Intelligent Web Tutoring System
Huiling Peng
To address the difficulty of emotional interaction in traditional Intelligent Web Tutoring Systems, this paper presents an intelligent affective Web Tutoring System based on the theory of artificial emotion for virtual humans. The key modules of the system are designed and the Affective Virtual Human is modeled. Finally, a simulation is carried out; the results show that the model is consistent with human emotional behavior.
- Anthropomorphic Intelligent Robotics, Artificial Life | Pp. 21-27
doi: 10.1007/11941354_4
Extracting Behavior Knowledge and Modeling Based on Virtual Agricultural Mobile Robot
Xiangjun Zou; Jun Lu; Lufeng Luo; Xiwen Luo; Yanqiong Zhou
To cope with the complexity, unknowns and uncertainty of the picking targets of agricultural intelligent mobile robots, behavior knowledge extraction and modeling based on the robot is put forward to help the robot obtain information effectively during operation and thereby make decisions about its behaviors. Firstly, behavior is studied based on the picking behavior of the robot in a virtual environment. A method for extracting behavior knowledge in a virtual environment based on association rules is proposed, and entities such as robots, fruit trees and litchi are classified and expressed. Secondly, knowledge bases and models are built for reasoning. Thirdly, a systematic classification of behavior knowledge based on rough sets is put forward for the first time, classifying behaviors into obstacle-avoidance, picking, reasoning and fusion behaviors to reduce redundant knowledge. Finally, an example of reasoning and simulation of the behavior is given; the picking behavior is realized through message and route mechanisms.
- Anthropomorphic Intelligent Robotics, Artificial Life | Pp. 28-37
doi: 10.1007/11941354_5
Human Motion Interpolation Using Space-Time Registration
Zhihua Chen; Lizhuang Ma; Yan Gao; Xiaomao Wu
Human motion interpolation plays an important role in virtual reality and human animation. In this paper, we propose a motion interpolation algorithm with space-time registration. Firstly, time registration based on motion cycles is used to find the corresponding frames between the input motions. Secondly, space registration based on an analytic method is employed to spatially align the motion frames. The corresponding frames are then interpolated using separate interpolation methods for translations and angles. Finally, we reconstruct the constraints of the resulting motion. Experimental results demonstrate that our algorithm succeeds in interpolating input motions to create new motions with a new style.
- Anthropomorphic Intelligent Robotics, Artificial Life | Pp. 38-47
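The frame-blending step described in the abstract above (interpolating corresponding frames with separate handling of translations and angles) might look like this sketch. All names are hypothetical, and representing joint rotations as quaternions is an assumption; the time/space registration steps are omitted.

```python
import numpy as np

def lerp(a, b, t):
    """Linear interpolation, used here for the root translation."""
    return (1.0 - t) * a + t * b

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions."""
    q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
    dot = np.dot(q0, q1)
    if dot < 0.0:            # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:         # nearly parallel: fall back to lerp
        q = lerp(q0, q1, t)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def blend_frames(frame_a, frame_b, t):
    """Blend two corresponding frames: root position linearly,
    joint rotations spherically."""
    root = lerp(frame_a["root"], frame_b["root"], t)
    joints = [slerp(qa, qb, t)
              for qa, qb in zip(frame_a["joints"], frame_b["joints"])]
    return {"root": root, "joints": joints}

# Toy usage: halfway blend between two one-joint frames.
fa = {"root": np.zeros(3), "joints": [np.array([1.0, 0.0, 0.0, 0.0])]}
fb = {"root": np.ones(3),  "joints": [np.array([0.7071, 0.7071, 0.0, 0.0])]}
mid = blend_frames(fa, fb, 0.5)
print(mid["root"])  # [0.5 0.5 0.5]
```

Blending only time-registered, spatially aligned frames is what keeps the interpolated motion free of foot-skate and orientation popping.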
doi: 10.1007/11941354_6
Multi-stream Based Rendering Resource Interception and Reconstruction in D3DPR for High-Resolution Display
Zhen Liu; Jiaoying Shi; Hua Xiong; Aihong Qin
In order to run Direct3D9 applications on PC clusters for high-resolution display without modification, we first presented the D3DPR parallel rendering system. Following the multi-stream scene data organization and storage mode of Direct3D9, rendering resources in D3DPR are classified as a command stream, vertex stream, index stream and texture stream, and rendering resource interception and reconstruction becomes the most crucial problem. In this paper, we first give an overview of the D3DPR system architecture. Subsequently, we present multi-stream based rendering resource interception in D3DPR: we design DPGL, the Direct3D9-based Parallel Graphics Library, through which the rendering information of a Direct3D9 application is converted into the rendering resources described above. Finally, we focus on multi-stream based rendering resource reconstruction in D3DPR, through which the rendering commands and scene data are reconstructed.
- Anthropomorphic Intelligent Robotics, Artificial Life | Pp. 48-57
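The interception-and-reconstruction idea in the abstract above can be illustrated with a toy model: calls are captured into the four named streams and later replayed in order. This is a language-neutral sketch of the concept only; the class names and payload format are invented, and a real system would serialize actual Direct3D9 calls and data.

```python
from collections import defaultdict
from enum import Enum

class Stream(Enum):
    """The four rendering-resource stream types named in the abstract."""
    COMMAND = "command"
    VERTEX = "vertex"
    INDEX = "index"
    TEXTURE = "texture"

class Interceptor:
    """Toy interception layer: records rendering resources per stream
    so they could later be shipped to render nodes and replayed."""
    def __init__(self):
        self.streams = defaultdict(list)

    def capture(self, stream, payload):
        # Called in place of the real graphics API entry point.
        self.streams[stream].append(payload)

    def reconstruct(self, stream):
        # Replay in capture order; a real system would deserialize
        # and re-issue the corresponding Direct3D9 calls.
        return list(self.streams[stream])

# Toy usage: one command and one vertex buffer are intercepted.
icpt = Interceptor()
icpt.capture(Stream.COMMAND, "SetRenderState(ZENABLE, TRUE)")
icpt.capture(Stream.VERTEX, b"\x00\x01\x02\x03")
print(len(icpt.reconstruct(Stream.COMMAND)))  # 1
```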
doi: 10.1007/11941354_7
: A Text Input Tool for Virtual Environments
Jaewoo Ahn; Kyungha Min
This paper presents an effective text input tool for virtual environments. The key idea is to reuse the text input mechanism of mobile phones. We present a 3 × 3 keypad widget whose user interface is similar to a mobile-phone keypad, where each key on the widget carries several characters. We present indexing schemes by which users locate the desired key on the keypad, and selection schemes by which users choose the desired character from those on the key, using diverse input devices.
- Anthropomorphic Intelligent Robotics, Artificial Life | Pp. 58-64
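A phone-style keypad selection scheme like the one the abstract above builds on can be sketched as multi-tap input. The 3 × 3 layout and key-to-character assignment below are invented for illustration; the paper's actual layout and its indexing/selection schemes for VR devices are not reproduced here.

```python
# Hypothetical 3x3 keypad layout in the style of a phone keypad.
KEYPAD = {
    1: "abc", 2: "def", 3: "ghi",
    4: "jkl", 5: "mno", 6: "pqr",
    7: "stu", 8: "vwx", 9: "yz ",
}

def select_char(key, presses):
    """Multi-tap selection: repeated presses cycle through the key's characters."""
    chars = KEYPAD[key]
    return chars[(presses - 1) % len(chars)]

def type_word(taps):
    """taps: list of (key, press_count) pairs -> typed string."""
    return "".join(select_char(k, n) for k, n in taps)

# Toy usage: h=key 3 twice, e=key 2 twice, l=key 4 three times (x2), o=key 5 three times.
print(type_word([(3, 2), (2, 2), (4, 3), (4, 3), (5, 3)]))  # hello
```

In a VR setting, "key" and "press" would be driven by whatever indexing and selection gestures the tracked input device supports, which is exactly the split the abstract describes.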
doi: 10.1007/11941354_8
An Animation System for Imitation of Object Grasping in Virtual Reality
Matthias Weber; Guido Heumer; Heni Ben Amor; Bernhard Jung
Interactive virtual characters are nowadays commonplace in games, animations, and Virtual Reality (VR) applications. However, relatively little work has so far considered the animation of interactive object manipulations performed by virtual humans. In this paper, we first present a hierarchical control architecture incorporating plans, behaviors, and motor programs that enables virtual humans to accurately manipulate scene objects using different grasp types. As a second main contribution, we introduce a method by which virtual humans learn to imitate object manipulations performed by human VR users. To this end, movements of the VR user are analyzed and processed into abstract actions. A new data structure is used for storing information about user interactions with scene objects. High-level plans are instantiated based on grasp events to drive the virtual humans’ animation. Due to their high-level representation, recorded manipulations often adapt naturally to new situations without losing plausibility.
- Anthropomorphic Intelligent Robotics, Artificial Life | Pp. 65-76
doi: 10.1007/11941354_9
Generating Different Realistic Humanoid Motion
Zhenbo Li; Yu Deng; Hua Li
Humanoid motions with different levels of realism can be used in various situations in animation, and also play an important role in virtual reality. In this paper, we propose a novel method to generate such motions automatically. Firstly, the eigenvectors of a motion sequence are computed using principal component analysis; the principal components serve as “virtual joints” in our system, and their number can be used to control the realism level of the motions. Given the number of “virtual joints”, the actual joint parameters of the new motion are computed from the selected “virtual joints”. The experiments show that this method can generate motions at different realism levels.
- Anthropomorphic Intelligent Robotics, Artificial Life | Pp. 77-84
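The "virtual joints" idea in the abstract above, PCA components of the joint-parameter trajectories whose count controls reconstruction fidelity, can be sketched as follows. This is an illustrative reading, not the paper's code; all names and shapes are assumptions.

```python
import numpy as np

def virtual_joints(motion, k):
    """PCA over per-frame joint-parameter vectors.

    motion: (n_frames, n_params). The top-k principal components act
    as "virtual joints"; fewer components give a coarser, less
    realistic reconstruction.
    """
    mean = motion.mean(axis=0)
    _, _, Vt = np.linalg.svd(motion - mean, full_matrices=False)
    basis = Vt[:k]                      # (k, n_params)
    coeffs = (motion - mean) @ basis.T  # per-frame virtual-joint values
    return mean, basis, coeffs

def reconstruct(mean, basis, coeffs):
    """Map virtual-joint values back to actual joint parameters."""
    return mean + coeffs @ basis

# Toy usage: 100 frames of 30 joint parameters, reduced to 5 virtual joints.
rng = np.random.default_rng(1)
motion = rng.random((100, 30))
mean, basis, coeffs = virtual_joints(motion, k=5)
approx = reconstruct(mean, basis, coeffs)
print(approx.shape)  # (100, 30)
```

Raising k toward the full parameter count recovers the original motion exactly, which matches the abstract's claim that the virtual-joint count controls the realism level.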
doi: 10.1007/11941354_10
Handheld AR for Collaborative Edutainment
Daniel Wagner; Dieter Schmalstieg; Mark Billinghurst
Handheld Augmented Reality (AR) is expected to provide ergonomic, intuitive user interfaces for untrained users. Yet no comparative study has evaluated these assumptions against more traditional user interfaces for an education task. In this paper we compare the suitability of a handheld AR arts-history learning game against more traditional variants. We present results from a user study that demonstrate not only the effectiveness of AR for untrained users but also its fun-factor and suitability in environments such as public museums. Based on these results we provide design guidelines that can inform the design of future collaborative handheld AR applications.
- Anthropomorphic Intelligent Robotics, Artificial Life | Pp. 85-96