Publications catalog: books



Advances in Multimedia Information Processing: 6th Pacific Rim Conference on Multimedia, Jeju Island, Korea, November 13-16, 2005, Proceedings, Part I

Yo-Sung Ho; Hyoung Joong Kim (eds.)

Conference: 6th Pacific-Rim Conference on Multimedia (PCM). Jeju Island, South Korea. November 13-16, 2005

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Multimedia Information Systems; Information Storage and Retrieval; Computer Communication Networks; Information Systems Applications (incl. Internet); Computer Graphics; Image Processing and Computer Vision

Availability

Detected institution: not detected
Publication year: 2005
Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-30027-4

Electronic ISBN

978-3-540-32130-9

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© Springer-Verlag Berlin Heidelberg 2005

Table of contents

Image-Based Generation of Facial Skin Texture with Make-Up

Sang Min Kim; Kyoung Chin Seo; Sang Wook Lee

This paper presents a new approach to the realistic rendering of facial skin and to transforming bare-skin colors into made-up ones, based on BTF (Bidirectional Texture Function) measurements from 2D photographs. We develop an apparatus to capture skin texture images under various lighting and viewing directions. Instead of physically modeling the layers of human skin, our approach uses the BTF for skin texture generation and rendering. We present a method to add make-up effects to bare skin using a color transformation between the BTFs of bare skin and skin with make-up. Experimental results demonstrate that the presented method generates realistic skin texture and cosmetic effects.
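
The abstract does not specify the form of the color transformation between the bare-skin and make-up BTFs. One plausible reading is a per-sample affine color map fitted by least squares; the sketch below assumes that form (the 3×4 affine model and NumPy fitting are assumptions, not details from the paper):

```python
import numpy as np

def fit_color_transform(bare, makeup):
    """Fit an affine transform mapping bare-skin RGB samples (N, 3)
    to made-up RGB samples (N, 3) by least squares. Returns a (4, 3)
    matrix acting on homogeneous [r, g, b, 1] rows."""
    ones = np.ones((bare.shape[0], 1))
    A = np.hstack([bare, ones])                 # (N, 4) homogeneous coords
    T, *_ = np.linalg.lstsq(A, makeup, rcond=None)
    return T

def apply_color_transform(T, rgb):
    """Apply the fitted affine map to RGB samples (N, 3)."""
    ones = np.ones((rgb.shape[0], 1))
    return np.hstack([rgb, ones]) @ T
```

In practice such a map would be fitted per lighting/viewing direction of the measured BTF rather than globally.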

Keywords: Skin Region; Facial Skin; Lighting Direction; Texture Synthesis; Viewing Direction.

Pp. 350-360

Responsive Multimedia System for Virtual Storytelling

Youngho Lee; Sejin Oh; Youngmin Park; Beom-Chan Lee; Jeung-Chul Park; Yoo Rhee Oh; Seokhee Lee; Han Oh; Jeha Ryu; Kwan H. Lee; Hong Kook Kim; Yong-Gu Lee; JongWon Kim; Yo-Sung Ho; Woontack Woo

In this paper, we propose a Responsive Multimedia System (RMS) for virtual storytelling. It consists of three key components: a Multi-modal Tangible User Interface (MTUI), a Unified Context-aware Application Model for Virtual Environments (vr-UCAM), and a Virtual Environment Manager (VEManager). MTUI allows users to interact with virtual environments (VE) through multiple senses by exploiting tangible, haptic, and vision-based interfaces. vr-UCAM decides the reactions of the VE according to multi-modal input. VEManager generates a dynamic VE by applying the reactions and displays it through 3D graphics, 3D sound, etc. To demonstrate the effectiveness of the proposed system, we implemented a virtual storytelling system that unfolds the legend of Unju Temple. We believe the proposed system can play an important role in implementing various entertainment applications.

Pp. 361-372

Communication and Control of a Home Robot Using a Mobile Phone

Kuniya Shinozaki; Hajime Sakamoto; Takaho Tanaka; Ryohei Nakatsu

Research and development of humanoid/animal robots is being actively pursued, and it is expected that in the near future these robots will be introduced into our homes as “home robots.” The key function of these robots is their ability to communicate with humans. Since communication via mobile phones is becoming increasingly common, it is essential for these robots to be able to communicate using mobile phones as well. In this paper we propose the concept of a home robot and describe the functions that we consider essential for it. Then, as one of these key functions, we explain the basic architecture and construction of the robot's communication/control function, which uses mobile phones as its primary medium.

Palabras clave: Mobile Phone; Humanoid Robot; Servo Motor; Communication Capability; Robot Communication.

Pp. 373-383

Real-Time Stereo Using Foreground Segmentation and Hierarchical Disparity Estimation

Hansung Kim; Dong Bo Min; Kwanghoon Sohn

We propose a fast disparity estimation algorithm using background registration and object segmentation for stereo sequences from fixed cameras. Dense background disparity information is calculated in an initialization step so that only disparities of moving object regions are updated in the main process. We propose a real-time segmentation technique using background subtraction and inter-frame differences, and a hierarchical disparity estimation using a region-dividing technique and shape-adaptive matching windows. Experimental results show that the proposed algorithm provides accurate disparity vector fields with an average processing speed of 15 frames/sec for 320×240 stereo sequences on a common PC.
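
The segmentation step combines background subtraction with inter-frame differencing. A minimal sketch of one way to combine the two cues is shown below; the AND combination and the threshold values are assumptions, not details taken from the paper:

```python
import numpy as np

def segment_foreground(frame, prev_frame, background,
                       bg_thresh=25, diff_thresh=15):
    """Label a pixel as foreground when it differs both from the
    registered background (background subtraction) and from the
    previous frame (inter-frame difference), i.e. a moving object.
    Inputs are 2-D uint8 grayscale arrays; returns a boolean mask."""
    f = frame.astype(np.int16)
    bg_mask = np.abs(f - background.astype(np.int16)) > bg_thresh
    motion_mask = np.abs(f - prev_frame.astype(np.int16)) > diff_thresh
    return bg_mask & motion_mask
```

Real systems would add morphological cleanup and background-model updates on top of this.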

Pp. 384-395

Multi-view Video Coding Using Illumination Change-Adaptive Motion Estimation and 2-D Direct Mode

Yung-Lyul Lee; Yung-Ki Lee; Dae-Yeon Kim

An MVC (multi-view video coding) method is proposed that uses both illumination change-adaptive ME (motion estimation)/MC (motion compensation) and a 2-D (two-dimensional) direct mode. A new SAD (sum of absolute differences) measure for ME/MC is proposed to compensate for luma pixel value changes in spatio-temporal motion vector prediction. Illumination change-adaptive (ICA) ME/MC uses the new SAD to improve both MV (motion vector) accuracy and bit savings. The proposed 2-D direct mode, which can be used in inter-view prediction, is an extended version of the temporal direct mode in MPEG-4 AVC. The proposed MVC method obtains approximately a 0.8 dB PSNR increase compared with MPEG-4 AVC simulcast coding.
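
The abstract does not give the new SAD formula. A common illumination-compensated variant in the literature is the mean-removed SAD, shown here purely as an illustrative stand-in:

```python
import numpy as np

def mr_sad(cur_block, ref_block):
    """Mean-removed SAD: subtract each block's mean before taking
    absolute differences, so a uniform luma (illumination) offset
    between views does not penalize an otherwise good match.
    This specific measure is an assumption, not the paper's formula."""
    c = cur_block.astype(np.float64)
    r = ref_block.astype(np.float64)
    return np.abs((c - c.mean()) - (r - r.mean())).sum()
```

A block and the same block brightened by a constant score zero under this measure, while plain SAD grows with the offset.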

Keywords: Motion Vector; Direct Mode; Arrow Direction; Reference Picture; Camera Arrangement.

Pp. 396-407

Fast Ray-Space Interpolation with Depth Discontinuity Preserving for Free Viewpoint Video System

Gangyi Jiang; Liangzhong Fan; Mei Yu; Xien Ye; Rangding Wang; Yong-Deak Kim

Ray-space representation is the main technology for realizing free viewpoint video systems with complicated scenes, and ray-space interpolation is one of the key problems to be solved. A new fast ray-space interpolation method that preserves depth discontinuity is proposed in this paper. Discontinuity features of the ray-space data are first extracted using an adaptive threshold; then a cross-check operation is carried out between neighboring epipolar lines to refine the extracted feature points. After that, the ray directions of the feature points are determined, and ray-space interpolation is performed within the regions segmented by the feature points. Experimental results show that the proposed method achieves much higher PSNR than pixel-matching-based and block-matching-based interpolation, and the quality of the rendered intermediate viewpoint images is also greatly improved. In addition, the proposed method has low computational cost and is suitable for hardware implementation.
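
The discontinuity-preserving idea can be illustrated in 1-D: interpolate missing samples along an epipolar line only within segments bounded by the detected feature points, never across them. This is a simplified sketch (the NaN-as-missing convention and piecewise-linear interpolation are assumptions):

```python
import numpy as np

def interpolate_epipolar_line(line, feature_idx):
    """Fill missing samples (NaN) on one epipolar line by linear
    interpolation restricted to segments bounded by feature
    (discontinuity) points, so depth discontinuities are preserved."""
    out = line.astype(np.float64).copy()
    bounds = [0] + sorted(feature_idx) + [len(out)]
    for a, b in zip(bounds[:-1], bounds[1:]):
        seg = out[a:b]                     # view into `out`
        idx = np.arange(len(seg))
        known = ~np.isnan(seg)
        if known.any():
            seg[~known] = np.interp(idx[~known], idx[known], seg[known])
    return out
```

Note the jump between values 2.0 and 10.0 below survives because index 3 is a segment boundary.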

Keywords: Mean Square Error; Feature Point; Interpolation Method; Virtual View; Epipolar Line.

Pp. 408-419

Haptic Interaction with Depth Video Media

Jongeun Cha; Seung-man Kim; Ian Oakley; Jeha Ryu; Kwan H. Lee

In this paper we propose a touch-enabled video player system. A conventional video player only allows viewers to passively experience visual and audio media. In virtual environments, touch or haptic interaction has been shown to convey a powerful illusion of the tangible nature, the reality, of the displayed environments, and we feel the same benefits may be conferred on a broadcast viewing domain. To this end, this paper describes a system that uses a video representation based on depth images to add a haptic component to an audio-visual stream. We generate this stream by combining a regular RGB image with a synchronized depth image composed of per-pixel depth-from-camera information. The depth video, a unified stream of the color and depth images, can be synthesized from a computer graphics animation by rendering with commercial packages, or captured from a real environment using an active depth camera such as the ZCam. To provide a haptic representation of this data, we propose a modified proxy graph algorithm for depth video streams. The modified proxy graph algorithm can (i) detect collisions between a moving virtual proxy and time-varying video scenes, (ii) generate a smooth touch sensation by handling the radically different display update rates required by visual (30 Hz) and haptic (on the order of 1000 Hz) systems, and (iii) avoid sudden changes in contact forces. A sample experiment shows the effectiveness of the proposed system.
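
Points (ii) and (iii) can be illustrated with a toy 1-D haptic loop step: the surface height is interpolated between consecutive 30 Hz depth frames at each 1 kHz tick, and the force change per tick is rate-limited. All names, the penalty-force model, and the limit value are illustrative assumptions, not the paper's proxy graph algorithm:

```python
def haptic_force(surface_z, probe_z, stiffness=1.0):
    """Penalty force when the probe penetrates the surface (z up)."""
    penetration = surface_z - probe_z
    return stiffness * penetration if penetration > 0 else 0.0

def haptic_loop_step(z_prev_frame, z_next_frame, alpha, probe_z,
                     last_force, max_delta=0.05):
    """One 1 kHz haptic step against 30 Hz depth video: interpolate
    the surface between consecutive depth frames (alpha in [0, 1]),
    then rate-limit the force change to avoid sudden jumps."""
    surface_z = (1 - alpha) * z_prev_frame + alpha * z_next_frame
    force = haptic_force(surface_z, probe_z)
    delta = max(-max_delta, min(max_delta, force - last_force))
    return last_force + delta
```

The rate limit trades a small force lag for stability when a new depth frame abruptly moves the surface.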

Palabras clave: Depth Image; Collision Detection; Virtual Object; Haptic Interface; Haptic Interaction.

Pp. 420-430

A Framework for Multi-view Video Coding Using Layered Depth Images

Seung-Uk Yoon; Eun-Kyung Lee; Sung-Yeol Kim; Yo-Sung Ho

A multi-view video is a collection of multiple videos capturing the same scene at different viewpoints. Since the data size of multi-view video increases linearly with the number of cameras, it is necessary to compress multi-view video data for efficient storage and transmission. Multi-view video can be coded using the concept of the layered depth image (LDI). In this paper, we describe a procedure to generate an LDI from natural multi-view video and present a framework for multi-view video coding based on the LDI concept.
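
An LDI stores, at each pixel of a reference view, every surface sample that projects to that pixel's ray, ordered by depth. A minimal per-pixel structure might look like the following sketch (the duplicate-merge threshold is an illustrative assumption):

```python
from dataclasses import dataclass, field

@dataclass
class LDIPixel:
    """One pixel of a layered depth image: a front-to-back list of
    (depth, color) samples gathered from all warped input views."""
    layers: list = field(default_factory=list)

    def insert(self, depth, color, eps=1e-3):
        # Skip samples closer than eps to an existing layer: the same
        # surface point seen from two cameras should become one layer.
        for d, _ in self.layers:
            if abs(d - depth) < eps:
                return
        self.layers.append((depth, color))
        self.layers.sort(key=lambda dc: dc[0])
```

Warping each camera's depth map into the reference view and inserting every pixel this way yields the full LDI, which can then be compressed layer by layer.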

Keywords: Multi-view video coding; layered depth image; MPEG 3DAV.

Pp. 431-442

A Proxy-Based Distributed Approach for Reliable Content Sharing Among UPnP-Enabled Home Networks

HyunRyong Lee; JongWon Kim

We propose a proxy-based distributed scheme for reliable streaming services among UPnP (Universal Plug and Play)-enabled home networks. We design a “SHARE” module that extends the HG (home gateway) with a UPnP-compatible protocol. By relaying the SSDP (Simple Service Discovery Protocol) messages used in the UPnP device architecture, the SHARE module provides the connectivity needed to control other UPnP devices for streaming services among home networks. To provide reliable streaming services in a distributed way, the SHARE module coordinates the distribution of streaming loads among multiple senders through many-to-one distributed streaming. It also aims to provide reliable streaming based on the system and network resource status of each sender by leveraging the UPnP QoS services. Built on standard UPnP components, the SHARE module offers transparent content sharing to users. Through design-level verification and partial implementations of the proposed SHARE module, we validate the feasibility of our work.

Keywords: Home Network; Streaming Service; SHARE Module; Content Sharing; Home Gateway.

Pp. 443-454

Adaptive Distributed Video Coding for Video Applications in Ad-Hoc Networks

Ke Liang; Lifeng Sun; Yuzhuo Zhong

In today's distributed video coding systems, side information is generated at the decoder using motion estimation; the high computational complexity is therefore shifted from the encoder to the decoder. To reduce the computational complexity at the decoder, generating the side information by extrapolation may be a compromise, but it degrades rate-distortion performance. To compensate for this drawback, we propose an Adaptive Distributed Video Codec (ADVC) based on multilevel coset codes. In our implementation, the temporal similarities among successive frames are exploited substantially, and the side information is also available at the encoder, which achieves a more accurate correlation model. Simulation results show that the proposed ADVC has better rate-distortion performance than a non-adaptive distributed video codec (DVC), especially in low-rate scenarios.
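
Side-information generation by extrapolation can be sketched as block-based motion estimation between the two previously decoded frames, with each block of the newer frame continued along its vector to predict the current frame. This is a minimal sketch; the block size, search range, and full-search SAD matching are assumptions:

```python
import numpy as np

def extrapolate_side_info(f_prev2, f_prev1, block=8, search=4):
    """Predict frame t from decoded frames t-2 and t-1: for each block
    of t-1, find its best SAD match in t-2, then shift the block by the
    same displacement again to extrapolate its position at time t."""
    h, w = f_prev1.shape
    pred = np.zeros_like(f_prev1)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            cur = f_prev1[by:by+block, bx:bx+block].astype(np.int32)
            best, mv = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y and y + block <= h and 0 <= x and x + block <= w:
                        ref = f_prev2[y:y+block, x:x+block].astype(np.int32)
                        sad = np.abs(cur - ref).sum()
                        if best is None or sad < best:
                            best, mv = sad, (dy, dx)
            # The block moved by -mv from t-2 to t-1; continue that
            # motion one more frame interval to place it at time t.
            ty = min(max(by - mv[0], 0), h - block)
            tx = min(max(bx - mv[1], 0), w - block)
            pred[ty:ty+block, tx:tx+block] = f_prev1[by:by+block, bx:bx+block]
    return pred
```

Unlike decoder-side motion-compensated interpolation, this needs no future frame, which is what keeps the decoder light at the cost of the rate-distortion penalty the paper addresses.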

Keywords: Side Information; Previous Frame; Video Application; Distributed Video Coding; Foreman Sequence.

Pp. 455-465