Publications catalog - books

Advanced Concepts for Intelligent Vision Systems: 8th International Conference, ACIVS 2006, Antwerp, Belgium, September 18-21, 2006, Proceedings

Jacques Blanc-Talon; Wilfried Philips; Dan Popescu; Paul Scheunders (eds.)

Conference: 8th International Conference on Advanced Concepts for Intelligent Vision Systems (ACIVS). Antwerp, Belgium. September 18-21, 2006

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Image Processing and Computer Vision; Pattern Recognition; Computer Graphics; Artificial Intelligence (incl. Robotics)

Availability
Detected institution: Not detected
Publication year: 2006
Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-44630-9

Electronic ISBN

978-3-540-44632-3

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© Springer-Verlag Berlin Heidelberg 2006

Table of contents

Complexity Reduction Algorithm for Intra Mode Selection in H.264/AVC Video Coding

Jongho Kim; Donghyung Kim; Jechang Jeong

The emerging H.264/AVC video coding standard improves coding performance significantly by adopting many advanced techniques. This is achieved at the expense of a great increase in encoder complexity. Specifically, the intra prediction using RDO examines all possible combinations of coding modes, which depend on the spatial directional correlation with adjacent blocks. There are 9 modes for a 4×4 luma block, and 4 modes each for a 16×16 luma block and an 8×8 chroma block. Therefore the number of mode combinations for each MB is 592. This paper proposes a complexity reduction algorithm using simple directional masks and neighboring modes. The proposed method reduces the number of mode combinations to at most 132. Simulation results show that the proposed method reduces the encoding time by up to 70% with negligible PSNR loss and bit-rate increase compared with the exhaustive H.264/AVC search.

- Video Processing and Coding | Pp. 454-465
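
The 592 figure quoted in the abstract follows directly from the per-block mode counts it lists. A minimal arithmetic check (values taken from the abstract; no H.264/AVC encoder is involved):

```python
# Intra mode combinations examined per macroblock by exhaustive RDO in H.264/AVC,
# using the counts cited in the abstract above.
modes_4x4 = 9       # intra prediction modes per 4x4 luma block
blocks_4x4 = 16     # number of 4x4 luma blocks in a 16x16 macroblock
modes_16x16 = 4     # intra prediction modes for the 16x16 luma block
modes_chroma = 4    # intra prediction modes for the 8x8 chroma blocks

exhaustive = modes_chroma * (blocks_4x4 * modes_4x4 + modes_16x16)
print(exhaustive)   # 4 * (16*9 + 4) = 592; the paper reports at most 132 after pruning
```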

Simple and Effective Filter to Remove Corner Outlier Artifacts in Highly Compressed Video

Jongho Kim; Donghyung Kim; Jechang Jeong

We propose a method for detecting corner outlier artifacts and a simple, effective filter for removing them from highly compressed video. We detect the corner outlier artifacts based on the direction of edges passing through a block corner and the properties of the blocks around those edges. Based on the detection results, we remove the stair-shaped discontinuities, i.e., corner outlier artifacts, using the neighboring pixels of the artifact in the spatial domain. Simulation results show that the proposed method improves both objective performance and subjective visual quality, particularly in combination with a deblocking filter.

- Video Processing and Coding | Pp. 466-473
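
As a rough illustration of repairing a stair-shaped discontinuity from neighboring pixels, the toy filter below replaces a pixel at a detected block-corner position with the mean of its 3×3 neighborhood. The paper's detection step and exact filter are not reproduced; the function and its parameters are assumptions for this sketch.

```python
import numpy as np

def smooth_block_corner(frame, y, x):
    """Toy corner-outlier repair: overwrite the pixel at the detected block-corner
    position (y, x) with the mean of its 3x3 neighborhood. Illustrative only; the
    paper's filter is more selective about which pixels it touches."""
    h, w = frame.shape
    y0, y1 = max(0, y - 1), min(h, y + 2)
    x0, x1 = max(0, x - 1), min(w, x + 2)
    out = frame.astype(np.float32).copy()
    out[y, x] = out[y0:y1, x0:x1].mean()
    return out.astype(frame.dtype)

# Example: an artificial stair-shaped outlier at the corner of an 8x8 block grid.
frame = np.full((16, 16), 100, dtype=np.uint8)
frame[7, 7] = 180
print(smooth_block_corner(frame, 7, 7)[7, 7])  # pulled back toward its neighbors
```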

Content-Based Model Template Adaptation and Real-Time System for Behavior Interpretation in Sports Video

Jungong Han; Peter H. N. de With

In this paper, we present a sports analysis system that not only recognizes semantic events but also infers behavior, such as a player’s tactics. To this end, we propose an advanced multiple-player tracking algorithm that addresses two practical problems: (1) updating the player template so that it remains a good model over time, and (2) adaptively scaling the template size depending on the player’s motion. In this algorithm, we obtain the initial locations of the players in the first frame. The tracking is performed by considering both the kinematic constraints of the player and the color distribution of the appearance, thereby achieving promising results. We demonstrate the performance of the proposed system by evaluating it on doubles tennis matches, where the player count and the resulting occlusions are challenging.

- Video Processing and Coding | Pp. 474-484
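
The two tracking improvements named in the abstract (template updating and motion-dependent scaling) can be illustrated with a generic running-average template update. This is a common tracking heuristic, not the paper's exact scheme; the blending factor and the scale law below are assumptions.

```python
import numpy as np

def update_player_template(template, patch, motion_px, alpha=0.1):
    """Generic template maintenance sketch: blend the stored appearance template
    toward the currently tracked patch, and grow a scale factor with the observed
    motion magnitude (in pixels) so the template/search window adapts to fast
    players. Both the blending factor and the scale law are illustrative choices."""
    template = (1.0 - alpha) * template.astype(np.float32) + alpha * patch.astype(np.float32)
    scale = 1.0 + 0.05 * motion_px
    return template, scale
```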

New Approach to Wireless Video Compression with Low Complexity

Gangyi Jiang; Zhipeng Jin; Mei Yu; Tae-Young Choi

Because mobile video devices are limited in power and computational capability, it is important to develop energy-efficient wireless video compression methods for mobile video systems. In this paper, a new approach to low-complexity wireless video compression is proposed, based on the Wyner-Ziv coding theorem. The proposed method encodes video using only intra-coding techniques and detection of regions of interest, without complicated motion estimation and compensation in the mobile video terminal. Thus, the computational burden is considerably reduced and the low-power requirement of mobile video devices can be satisfied. Experimental results show that the proposed method is quite effective.

- Video Processing and Coding | Pp. 485-492
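
A minimal sketch of region-of-interest detection without motion estimation, in the spirit of the abstract: flag 16×16 blocks whose frame difference is large so they can be coded more finely. The block size and threshold are assumptions; the Wyner-Ziv coding of those regions is not shown.

```python
import numpy as np

def roi_blocks(prev_frame, cur_frame, block=16, threshold=20.0):
    """Flag blocks with large mean absolute frame difference as regions of interest.
    Illustrates low-complexity ROI detection without motion estimation; the paper's
    Wyner-Ziv-based coding of those regions is not reproduced here."""
    diff = np.abs(cur_frame.astype(np.int16) - prev_frame.astype(np.int16))
    h, w = diff.shape
    mask = np.zeros((h // block, w // block), dtype=bool)
    for by in range(h // block):
        for bx in range(w // block):
            mask[by, bx] = diff[by*block:(by+1)*block, bx*block:(bx+1)*block].mean() > threshold
    return mask
```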

Fast Multi-view Disparity Estimation for Multi-view Video Systems

Gangyi Jiang; Mei Yu; Feng Shao; You Yang; Haitao Dong

Disparity estimation can be used to eliminate redundancy among different views in multi-view video compression to obtain high compression efficiency. However, the high computational complexity of disparity estimation, which limits real-time applications of multi-view systems, needs to be addressed. In this paper, a novel fast multi-view disparity estimation algorithm based on the Hadamard similarity coefficient is proposed for multi-view video coding, using initial search point prediction, reference view selection, determination of the best disparity vector, and search termination strategies. Experimental results show that the proposed algorithm significantly reduces the computational complexity of multi-view disparity estimation.

- Video Processing and Coding | Pp. 493-500
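
The Hadamard similarity coefficient is not defined in the abstract; the sketch below shows the related, widely used Hadamard-transform matching cost (SATD-style) that a disparity search could evaluate for each candidate vector. Treat it as an assumed stand-in, not the paper's exact measure.

```python
import numpy as np

# 4x4 Hadamard matrix (entries +/-1), commonly used for transform-domain block
# matching costs in video coding.
H4 = np.array([[1,  1,  1,  1],
               [1, -1,  1, -1],
               [1,  1, -1, -1],
               [1, -1, -1,  1]], dtype=np.float32)

def hadamard_cost(block_a, block_b):
    """Sum of absolute Hadamard-transformed differences between two 4x4 blocks;
    a lower cost indicates a better disparity candidate."""
    diff = block_a.astype(np.float32) - block_b.astype(np.float32)
    return float(np.abs(H4 @ diff @ H4.T).sum())
```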

AddCanny: Edge Detector for Video Processing

Luis Antón-Canalís; Mario Hernández-Tejera; Elena Sánchez-Nielsen

In this paper, we present AddCanny, an Anisotropic Diffusion and Dynamic reformulation of the Canny edge detector. The proposal makes two modifications to the classical Canny detector. The first replaces Canny's Gaussian pre-filter with an anisotropic diffusion filter in order to obtain better edge detection and localization. The second replaces the hysteresis step with a dynamic threshold process, in order to reduce the blinking effect of edges across successive frames and therefore generate more stable edges in sequences. A new performance measure based on the Euclidean Distance Transform is also proposed to evaluate the consistency of the computed edges. The paper includes experimental evaluations with different video streams that illustrate the advantages of AddCanny over the classical Canny edge detector.

- Video Processing and Coding | Pp. 501-512
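
For reference, the classical Perona-Malik scheme is the standard way to realize the anisotropic diffusion pre-filter that AddCanny substitutes for Canny's Gaussian smoothing. The iteration below is that standard formulation, not code from the paper; kappa and lambda are assumed parameters.

```python
import numpy as np

def perona_malik_step(img, kappa=15.0, lam=0.2):
    """One iteration of Perona-Malik anisotropic diffusion: smooths within regions
    while the conduction function g() suppresses diffusion across strong edges."""
    img = img.astype(np.float32)
    # finite differences toward the four neighbors (borders replicated)
    n = np.vstack([img[:1], img[:-1]]) - img
    s = np.vstack([img[1:], img[-1:]]) - img
    e = np.hstack([img[:, 1:], img[:, -1:]]) - img
    w = np.hstack([img[:, :1], img[:, :-1]]) - img
    g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping conduction function
    return img + lam * (g(n) * n + g(s) * s + g(e) * e + g(w) * w)
```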

Video-Based Facial Expression Hallucination: A Two-Level Hierarchical Fusion Approach

Jian Zhang; Yueting Zhuang; Fei Wu

Facial expression hallucination is an important approach to facial expression synthesis. Existing work has mainly focused on synthesizing a static facial expression image from one face image with a neutral expression. In this paper, we propose a novel two-level hierarchical fusion approach to hallucinate dynamic expression video sequences given only one neutral-expression face image. By fusing local linear and global nonlinear subspace learning, the two-level approach provides a sound solution to organizing the complex video sample space. Experiments show that our approach generates reasonable facial expression sequences in both the temporal and spatial domains, with fewer artifacts than existing approaches.

- Video Processing and Coding | Pp. 513-521

Blue Sky Detection for Picture Quality Enhancement

Bahman Zafarifar; Peter H. N. de With

Content analysis of video and still images is attractive for multiple reasons, such as enabling content-based actions and image manipulation. This paper presents a new algorithm and feature model for blue-sky detection. The algorithm classifies sky areas by computing a pixel-accurate sky probability. Such a probabilistic measure matches well with the requirements of typical video enhancement functions in TVs. The algorithm enables not only content-adaptive picture quality improvement, but also more advanced applications such as content-based annotation of, and retrieval from, image and video databases. Compared to existing algorithms, our proposal shows considerable improvements in the correct detection/rejection rate of sky areas and improved consistency of the segmentation results.

- Video Processing and Coding | Pp. 522-532
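
A toy version of a pixel-accurate sky probability map can be formed by combining a blueness measure with vertical position (sky tends to appear near the top of the frame). The weighting below is an assumption for illustration; the paper's feature model is considerably richer.

```python
import numpy as np

def sky_probability(rgb):
    """Toy per-pixel blue-sky probability in [0, 1]: blueness (blue channel dominating
    red and green) weighted by vertical position. Illustrative only."""
    rgb = rgb.astype(np.float32) / 255.0
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    blueness = np.clip(2.0 * (b - np.maximum(r, g)), 0.0, 1.0)
    h = rgb.shape[0]
    vertical = 1.0 - np.arange(h, dtype=np.float32)[:, None] / max(h - 1, 1)  # 1 at top row
    return blueness * vertical
```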

Requantization Transcoding in Pixel and Frequency Domain for Intra 16×16 in H.264/AVC

Jan De Cock; Stijn Notebaert; Peter Lambert; Davy De Schrijver; Rik Van de Walle

In the context of Universal Multimedia Access, efficient techniques are needed for the adaptation of video content. An important example is reducing the bitrate in order to satisfy the bandwidth constraints imposed by the network or the decoding capability of the terminal devices. Requantization transcoding is a fast technique for bitrate reduction and has been applied successfully in previous video coding standards such as MPEG-2. In this paper, we examine requantization in H.264/AVC, focusing on the intra 16×16 prediction modes. Due to the newly introduced coding tools in H.264/AVC, new techniques are needed that can lower the bitrate with minimal quality loss. We propose two novel architectures, one in the pixel domain and one in the frequency domain, that reuse the information from the incoming bitstream in an efficient way and perform approximately as well as a cascade of a decoder and an encoder. Due to their low computational complexity, the introduced architectures are highly suitable for on-the-fly video adaptation scenarios.

- Video Processing and Coding | Pp. 533-544
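
At its core, requantization transcoding reconstructs the transmitted coefficient levels and quantizes them again with a coarser step size. The sketch below shows only that core operation with generic step sizes; the drift handling that the paper's pixel- and frequency-domain architectures add for intra 16×16 prediction is not included.

```python
import numpy as np

def requantize(levels, qstep_in, qstep_out):
    """Requantize transform coefficient levels: inverse-quantize with the incoming
    step size, then forward-quantize with a coarser one to reduce the bitrate.
    Generic illustration; real H.264/AVC quantization uses QP-dependent scaling."""
    coeffs = levels.astype(np.float32) * qstep_in      # reconstructed coefficients
    return np.rint(coeffs / qstep_out).astype(int)     # coarser quantization

# Example: requantizing a few coefficient levels with a doubled step size.
print(requantize(np.array([12, -7, 3, 0, 1]), qstep_in=10.0, qstep_out=20.0))
```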

Motion-Compensated Deinterlacing Using Edge Information

Taeuk Jeong; Chulhee Lee

In this paper, we propose a new deinterlacing method using motion estimation and compensation of edge regions. Although motion-compensated deinterlacing methods provide good results in interlaced-to-progressive conversion, they still produce undesirable artifacts in fast-moving areas and edge regions. The proposed method mitigates these problems by applying motion estimation and compensation to edge regions with a suitably small search range. After filling the missing lines with conventional spatial and temporal methods, motion estimation and compensation are applied to the predefined edge areas. Experimental results show that the proposed method produces noticeable improvements over existing motion-compensated deinterlacing methods.

- Video Processing and Coding | Pp. 545-553
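
A minimal sketch of combining a spatial estimate with a motion-compensated one for a missing interlaced line is given below; the edge-region detection and the small-search-range motion estimation described in the abstract are not reproduced, and the 50/50 blend is an assumption.

```python
import numpy as np

def fill_missing_line(prev_frame, cur_field, y, mv):
    """Reconstruct missing line y of the current field as the average of a spatial
    estimate (line averaging of the lines above and below) and a motion-compensated
    copy from the previous frame shifted by mv = (dy, dx). Illustrative only."""
    spatial = 0.5 * (cur_field[y - 1].astype(np.float32) + cur_field[y + 1].astype(np.float32))
    dy, dx = mv
    temporal = np.roll(prev_frame, shift=(dy, dx), axis=(0, 1))[y].astype(np.float32)
    return 0.5 * (spatial + temporal)
```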