

Entertainment Computing: ICEC 2007: 6th International Conference, Shanghai, China, September 15-17, 2007. Proceedings

Lizhuang Ma ; Matthias Rauterberg ; Ryohei Nakatsu (eds.)

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Computer Applications; User Interfaces and Human Computer Interaction; Information Systems Applications (incl. Internet); Multimedia Information Systems; Artificial Intelligence (incl. Robotics); Computer Graphics

Availability

Available via SpringerLink; publication year 2007 (no institution detected).

Information

Resource type:

books

Print ISBN

978-3-540-74872-4

Electronic ISBN

978-3-540-74873-1

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2007

Publication rights information

© IFIP International Federation for Information Processing 2007

Table of contents

Video Processing and Retrieval on Cell Processor Architecture

Junqing Yu; Haitao Wei

A multi-level parallel partition schema and three mapping models – Service, Streaming, and OpenMP – are proposed to map video processing and retrieval (VPR) workloads onto the Cell processor. We present a task- and data-parallel partition scheme that distributes the computation-intensive VPR workloads across Cell's processing cores, exploiting the parallelism of a sequential program. To facilitate VPR programming on Cell, the OpenMP programming model is ported to the platform. Effective mapping strategies are also presented to handle thread creation and data exchange between the different processors and to reduce system overhead. Experimental results show that this parallel partition schema and mapping model effectively speed up VPR processing on the Cell multi-core architecture.

- Session 7: Video Processing | Pp. 255-262
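
The task- and data-parallel partition described above can be illustrated in miniature. The sketch below is only an illustration in Python, with a thread pool standing in for the Cell cores and the OpenMP runtime; `process_frame` is a hypothetical per-frame kernel, not the paper's workload.

```python
from concurrent.futures import ThreadPoolExecutor

def process_frame(frame):
    # Hypothetical per-frame kernel: a 4-bin histogram of pixel values.
    hist = [0] * 4
    for px in frame:
        hist[px % 4] += 1
    return hist

def parallel_vpr(frames, workers=4):
    # Data-parallel partition: the frame list is split across workers,
    # loosely mirroring the mapping of VPR workloads onto processing cores.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_frame, frames))
```

`pool.map` preserves input order, so the results line up with the original frame sequence even though frames are processed concurrently.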

A Hybrid Image Coding in Overdriving for Motion Blur Reduction in LCD

Jun Wang; Kyeongyuk Min; Jongwha Chong

The overdriving technique enlarges the desired change in pixel value, but it also enlarges the error introduced by general compression methods. We therefore propose a novel Adaptive Quantization Coding (AQC) scheme to reduce the compression error in overdriving for motion blur reduction. With hardware implementation in mind, we develop a hybrid image coding that first applies a color transform, then uses AQC to compress the luminance data and Block Truncation Coding (BTC) to compress the chrominance data. Simulation results show that the average PSNR was improved by 5.676 dB and the average standard deviation of the error was reduced by 50.2% compared with BTC alone. The proposed algorithm is implemented in Verilog HDL and synthesized with the Synopsys Design Compiler using a 0.13 µm Samsung library.

- Session 7: Video Processing | Pp. 263-270
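
Block Truncation Coding, the baseline compared against here, is compact enough to sketch. The abstract does not specify the AQC details, so the following shows only classic (absolute-moment) BTC: threshold a block at its mean and keep one reconstruction level per bit group.

```python
def btc_encode(block):
    # Threshold at the block mean; each pixel becomes one bit.
    mean = sum(block) / len(block)
    bits = [1 if p >= mean else 0 for p in block]
    # One reconstruction level per group, preserving the group means.
    hi = [p for p, bit in zip(block, bits) if bit]
    lo = [p for p, bit in zip(block, bits) if not bit]
    a = sum(lo) / len(lo) if lo else mean
    b = sum(hi) / len(hi) if hi else mean
    return bits, a, b

def btc_decode(bits, a, b):
    # Reconstruct: level b where the bit is 1, level a where it is 0.
    return [b if bit else a for bit in bits]
```

The bitmap plus two levels per block is what makes the scheme cheap enough for in-display hardware, at the cost of the quantization error the AQC variant aims to reduce.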

See, Hear or Read the Film

Carlos Teixeira; Ana Respicio

Films have been the most entertaining art form of the past century. Sometimes they were inspired by written novels; sometimes they have inspired new ones. Film scripts stand halfway between the film on the screen and the pure world of written imagination. Real time is one of the dimensions lost in the script, breaking the anchors to the time signals from which films are made. This paper presents a complete approach for merging these two worlds along the real-time dimension. Using subtitle time stamps and a new parallel text alignment algorithm, a time-stamped script is produced. This is then used to create new, enriched narrative films, also presented in the paper.

- Session 7: Video Processing | Pp. 271-281
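
The parallel text alignment at the heart of this approach can be approximated with a longest-common-subsequence alignment between script tokens and subtitle tokens; this is a simplification, since the paper's actual algorithm is not detailed in the abstract. Matched subtitle tokens then carry their time stamps over to the script.

```python
def lcs_pairs(a, b):
    # Longest-common-subsequence DP table, then a backtrack that
    # yields matched (index_in_a, index_in_b) pairs.
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n):
        for j in range(m):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if a[i] == b[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    pairs, i, j = [], n, m
    while i and j:
        if a[i - 1] == b[j - 1]:
            pairs.append((i - 1, j - 1))
            i, j = i - 1, j - 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return pairs[::-1]

def stamp_script(script_words, subtitle_words, subtitle_times):
    # Transfer each matched subtitle token's time stamp onto the script;
    # unmatched script words stay unstamped (None).
    stamps = [None] * len(script_words)
    for i, j in lcs_pairs(script_words, subtitle_words):
        stamps[i] = subtitle_times[j]
    return stamps
```

Unstamped words could then be interpolated between their stamped neighbors to yield a fully time-stamped script.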

A Practical Framework for Virtual Viewing and Relighting

Qi Duan; Jianjun Yu; Xubo Yang; Shuangjiu Xiao

Many practical applications are concerned with observing objects that have specular reflection properties, and in particular with how the specular reflections and other details vary under different lighting conditions and view positions. In this paper, we propose a framework combining a novel view synthesis algorithm with a relighting algorithm to fulfill these requirements. Another important feature of this framework is that all the algorithms are purely image-based and require no 3D model, which might contain highly confidential information. An image measurement criterion is also proposed to verify the feasibility and effectiveness of the framework.

- Session 8: Rendering | Pp. 282-287

Interactive Image Based Relighting with Physical Light Acquisition

Jianjun Yu; Xubo Yang; Shuangjiu Xiao

We present an interactive image-based technique to relight a real scene with acquisition of physical light sources. First, basis images of the real scene are acquired from a fixed viewpoint. Second, the light direction is estimated and a physical light image is captured. Third, a measurement image between the reference light image and the novel light image is computed for relighting the scene. We demonstrate the technique by interactively relighting a pre-captured real scene using a hand-held spotlight projecting light onto a white paper.

- Session 8: Rendering | Pp. 288-293
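
Image-based relighting of this kind rests on the linearity of light transport: a scene under any combination of the acquired lights is a weighted sum of the basis images. A minimal sketch of that superposition, with grayscale images as nested lists (the names here are illustrative, not the paper's API):

```python
def relight(basis_images, weights):
    # Each basis image was captured under one light source; a novel
    # lighting condition is a weighted sum of the basis images.
    h, w = len(basis_images[0]), len(basis_images[0][0])
    out = [[0.0] * w for _ in range(h)]
    for img, wt in zip(basis_images, weights):
        for y in range(h):
            for x in range(w):
                out[y][x] += wt * img[y][x]
    return out
```

Estimating the weights for a captured physical light (here left as given) is the part the paper's measurement image addresses.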

Real-Time Rendering of Daylight Sky Scene for Virtual Environment

Changbo Wang

Realistic rendering of sky scenes is important in virtual environments. Traditional methods are mostly based on skyboxes and thus fail to realistically simulate how the sky changes under different weather conditions. In this paper, a new sky light model for virtual environments under different weather conditions is proposed. We first analyze the atmospheric characteristics of sky scenes and calculate the light path and the light intensity reaching the eye through the atmosphere, taking atmospheric scattering and refraction into account. Then, for each weather condition, the intensity distribution of sky light is obtained by adapting a path tracing algorithm. We also adopt a scattering volume model and GPU techniques to accelerate the calculation. Finally, various sky scenes on sunny, foggy, and rainy days, as well as scenes with a rainbow, are realistically rendered in real time under different conditions.

- Session 8: Rendering | Pp. 294-303
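
Two of the standard ingredients of such atmospheric scattering models have simple closed forms and can be sketched directly; the paper's full path-traced model involves considerably more than this.

```python
import math

def rayleigh_phase(cos_theta):
    # Rayleigh phase function: (3 / (16*pi)) * (1 + cos^2(theta)),
    # the angular distribution of light scattered by air molecules.
    return 3.0 / (16.0 * math.pi) * (1.0 + cos_theta * cos_theta)

def transmittance(beta, distance):
    # Beer-Lambert attenuation of light along a path: exp(-beta * d),
    # where beta is the extinction coefficient of the medium.
    return math.exp(-beta * distance)
```

Foggy or rainy conditions correspond to larger extinction coefficients (and different phase functions), which is one way a weather-dependent sky model can be parameterized.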

Robust Dense Depth Acquisition Using 2-D De Bruijn Structured Light

Zhiliang Xu; Lizhuang Ma; Wuzheng Tan

We present a new dense depth acquisition method using 2-D De Bruijn structured light, which is robust to various textures and able to reconstruct dense depth maps of moving and deforming objects. A 2-D binary De Bruijn pattern is projected onto the target object by an off-the-shelf projector. Fast dynamic-programming-based stereo matching is performed on images taken from two different views, and the depth is obtained by robust least-squares triangulation. The advantages are that we neither need to take image sequences under different illumination patterns nor assume that the surface being reconstructed has uniform texture. Experimental results show that shapes can be obtained efficiently and in good quality by the proposed approach. We believe our approach is a good choice for acquiring depth maps of moving scenes with inexpensive equipment.

- Session 8: Rendering | Pp. 304-314
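
The 1-D building block of such a pattern is a De Bruijn sequence, in which every length-n window appears exactly once per cycle, so each projected window is locally identifiable. Below is the textbook recursive construction of B(k, n); how the authors tile it into their 2-D pattern is not specified in the abstract.

```python
def de_bruijn(k, n):
    # Generate a De Bruijn sequence B(k, n): a cyclic k-ary sequence of
    # length k**n containing every length-n string exactly once.
    a = [0] * (k * n)
    seq = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return seq
```

For the binary case used here, `de_bruijn(2, 3)` gives `00010111`: every 3-bit window occurs exactly once around the cycle.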

Semiautomatic Rule Assist Architecture Modeling

Hua Liu; Hongxin Zhang; Hujun Bao

This paper presents a novel rule-driven architecture modeling technique. Unlike grammar-based procedural modeling approaches, our proposed method integrates user interactions with implied modeling rules. Construction rules, configuration rules, and constraint rules are introduced to minimize user interaction and enhance modeling efficiency. Experimental results demonstrate the efficiency and flexibility of our method in generating villas and skyscrapers.

- Session 9: Computer Animation and Networks | Pp. 315-323

Online Expression Mapping for Performance-Driven Facial Animation

Hae Won Byun

Recently, performance-driven facial animation has become popular in various entertainment areas, such as games, animated movies, and advertising. With the ease of using motion capture data from a performer's face, the resulting animated faces are far more natural and lifelike. However, when the characteristic features of the live performer and the animated character differ considerably, expression mapping becomes a difficult problem. Much previous research focuses on facial motion capture alone or facial animation alone; little attention has been paid to mapping motion capture data onto a 3D face model.

We therefore present a new expression mapping approach for performance-driven facial animation, paying particular attention to the online aspect of expression mapping for real-time applications. Our basic idea is to capture the facial motion of a real performer and adapt it to a virtual character in real time. For this purpose, we address three issues: facial expression capture, expression mapping, and facial animation. We first propose a comprehensive solution for real-time facial expression capture without devices such as head-mounted cameras or face-attached markers. By analyzing the facial expression, the facial motion can be effectively mapped onto another 3D face model. We present a novel example-based approach for creating facial expressions of the model that mimic those of the performer. Finally, real-time facial animation is produced with multiple face models, called "facial examples". Each example reflects both a facial expression of a different type and the designer's insight, serving as a good guideline for animation. The resulting animation preserves the facial expressions of the performer as well as the characteristic features of the target examples.

- Session 9: Computer Animation and Networks | Pp. 324-338
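
The abstract leaves the interpolation scheme of the example-based mapping unspecified, so the sketch below uses simple inverse-distance weights over the facial examples as a stand-in: the captured expression features select a blend of example meshes.

```python
def blend_weights(captured, examples):
    # Stand-in interpolation: weight each facial example by the inverse
    # of its Euclidean distance to the captured feature vector.
    dists = [sum((c - e) ** 2 for c, e in zip(captured, ex)) ** 0.5
             for ex in examples]
    if any(d == 0 for d in dists):
        # Captured expression coincides with an example: use it directly.
        return [1.0 if d == 0 else 0.0 for d in dists]
    inv = [1.0 / d for d in dists]
    s = sum(inv)
    return [v / s for v in inv]

def map_expression(weights, example_meshes):
    # Target face = weighted blend of the example meshes (flat vertex lists).
    return [sum(w * m[i] for w, m in zip(weights, example_meshes))
            for i in range(len(example_meshes[0]))]
```

Because each example mesh carries the designer's insight for its expression type, the blended result inherits that styling while following the performer's motion.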

Predicting Peer Offline Probability in BitTorrent Using Nonlinear Regression

Dongdong Nie; Qinyong Ma; Lizhuang Ma; Wuzheng Tan

BitTorrent is a popular and scalable P2P content distribution tool. This study analyzes the factors that affect the offline probability of a BitTorrent peer and expresses that probability in terms of these factors. We first collect a large data set of BitTorrent peers' activities. We then use nonlinear least-squares regression to determine the probability distribution function for each of three factors (download percentage, download speed, and local time) as well as their joint probability distribution function, and use another large data set to verify the predictions.

- Session 9: Computer Animation and Networks | Pp. 339-344
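
As a minimal stand-in for the nonlinear least-squares fit (the paper's actual distribution functions are not given in the abstract), one can fit a model of the form p(x) = exp(a + b·x) by ordinary least squares in log space:

```python
import math

def fit_exponential(xs, ys):
    # Fit p(x) = exp(a + b*x) by linear least squares on log p.
    # Requires all ys > 0; a stand-in for a true nonlinear solver.
    lys = [math.log(y) for y in ys]
    n = len(xs)
    mx, my = sum(xs) / n, sum(lys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, lys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

def predict(a, b, x):
    return math.exp(a + b * x)
```

A genuine nonlinear solver (e.g. Gauss-Newton, or SciPy's `curve_fit`) minimizes the residuals in probability space rather than log space, which matters once the data are noisy; the log-space fit is only a convenient closed-form approximation.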