Publications catalog - books
Advanced Concepts for Intelligent Vision Systems: 8th International Conference, ACIVS 2006, Antwerp, Belgium, September 18-21, 2006, Proceedings
Jacques Blanc-Talon ; Wilfried Philips ; Dan Popescu ; Paul Scheunders (eds.)
Conference: 8th International Conference on Advanced Concepts for Intelligent Vision Systems (ACIVS). Antwerp, Belgium. September 18-21, 2006
Abstract/Description – provided by the publisher
Not available.
Keywords – provided by the publisher
Image Processing and Computer Vision; Pattern Recognition; Computer Graphics; Artificial Intelligence (incl. Robotics)
Availability
Detected institution | Year of publication | Browse | Download | Request |
---|---|---|---|---|
Not detected | 2006 | SpringerLink | | |
Information
Resource type:
books
Print ISBN
978-3-540-44630-9
Electronic ISBN
978-3-540-44632-3
Publisher
Springer Nature
Country of publication
United Kingdom
Publication date
2006
Publication rights information
© Springer-Verlag Berlin Heidelberg 2006
Table of contents
doi: 10.1007/11864349_31
New Intra Luma Prediction Mode in H.264/AVC Using Collocated Weighted Chroma Pixel Value
Ik-Hwan Cho; Jung-Ho Lee; Woong-Ho Lee; Dong-Seok Jeong
Intra coding in current hybrid video coding methods provides important functionality such as low decoder delay, random access, and error resilience. Unfortunately, the coding efficiency of intra frames is very low relative to inter-frame coding because of the mismatch between the current block and its predicted block. In this paper, a new intra luma prediction algorithm that improves intra coding efficiency is proposed. The proposed additional intra luma prediction mode uses collocated chroma pixels and weight values to estimate the correct spatial pattern of the coded block. A weight value relating the chroma and luma values is calculated from the neighboring blocks, and the predicted luma block is then obtained by multiplying this weight by the collocated, upsampled chroma block. The proposed method is effective for complex or non-directional macroblocks, and experimental results show that the efficiency of intra coding is increased by up to 0.6 dB.
- Video Processing and Coding | Pp. 344-353
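The following is a minimal illustrative sketch, in Python/NumPy, of the chroma-weighted luma prediction described in the abstract above: a single scale factor is estimated on the neighbouring, already-reconstructed pixels and applied to the collocated, upsampled chroma block. The 4:2:0 sampling, the nearest-neighbour upsampling, the least-squares weight and all function names are assumptions for illustration, not the authors' reference implementation.

```python
import numpy as np

def upsample_chroma(chroma_block):
    """Nearest-neighbour 2x upsampling of a chroma block to luma resolution."""
    return np.repeat(np.repeat(chroma_block, 2, axis=0), 2, axis=1)

def estimate_weight(neigh_luma, neigh_chroma_up):
    """Least-squares scale factor between neighbouring luma and upsampled chroma."""
    num = float(np.sum(neigh_luma * neigh_chroma_up))
    den = float(np.sum(neigh_chroma_up ** 2)) + 1e-9
    return num / den

def predict_luma_from_chroma(cur_chroma, neigh_luma, neigh_chroma):
    """Predicted luma block = weight * collocated, upsampled chroma block."""
    w = estimate_weight(neigh_luma, upsample_chroma(neigh_chroma))
    return w * upsample_chroma(cur_chroma)

# Toy usage: an 8x8 chroma block collocated with a 16x16 luma macroblock.
rng = np.random.default_rng(0)
cur_chroma = rng.integers(0, 256, (8, 8)).astype(float)
neigh_chroma = rng.integers(0, 256, (8, 8)).astype(float)
neigh_luma = 1.1 * upsample_chroma(neigh_chroma)   # loosely correlated neighbours
print(predict_luma_from_chroma(cur_chroma, neigh_luma, neigh_chroma).shape)  # (16, 16)
```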
doi: 10.1007/11864349_32
Fast Mode Decision for H.264/AVC Using Mode Prediction
Song-Hak Ri; Joern Ostermann
In this paper, we present a new method to speed up the mode decision process using mode prediction. In general, video coding exploits spatial and temporal redundancies between video blocks; temporal redundancy in particular is crucial for compressing a video sequence with little loss of image quality. The proposed method determines the best coding mode of a given macroblock by predicting the mode and its rate-distortion (RD) cost from neighboring macroblocks in time and space. Compared to the H.264/AVC reference software, simulation results show that the proposed method saves up to 53% of the total encoding time with up to a 2.4% bit-rate increase at the same PSNR.
- Video Processing and Coding | Pp. 354-363
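As a rough illustration of mode prediction for fast mode decision, the sketch below tries the mode voted for by the spatial and temporal neighbours first and falls back to an exhaustive rate-distortion search only when the predicted mode does not look competitive. The voting rule, the threshold and all names are hypothetical; the paper's actual decision rule may differ.

```python
from collections import Counter

def predict_mode(spatial_neigh, temporal_neigh):
    """Most frequent coding mode among the available neighbouring macroblocks."""
    modes = [mb["mode"] for mb in spatial_neigh + temporal_neigh if mb]
    return Counter(modes).most_common(1)[0][0] if modes else None

def fast_mode_decision(mb, spatial_neigh, temporal_neigh, rd_cost, all_modes,
                       threshold=1.05):
    """Try the predicted mode first; accept it if its RD cost is close to the
    neighbours' average cost, otherwise fall back to the exhaustive search."""
    predicted = predict_mode(spatial_neigh, temporal_neigh)
    if predicted is not None:
        neigh_costs = [n["cost"] for n in spatial_neigh + temporal_neigh if n]
        avg_cost = sum(neigh_costs) / len(neigh_costs)
        cost = rd_cost(mb, predicted)
        if cost <= threshold * avg_cost:          # early termination
            return predicted, cost
    # Slow path: exhaustive RD search over all candidate modes.
    return min(((m, rd_cost(mb, m)) for m in all_modes), key=lambda t: t[1])

# Toy usage with a dummy RD cost function.
toy_cost = {"SKIP": 60.0, "P16x16": 41.0, "P8x8": 44.0, "I4x4": 80.0}
neigh = [{"mode": "P16x16", "cost": 40.0}, {"mode": "P16x16", "cost": 42.0}]
print(fast_mode_decision("mb0", neigh, [], lambda mb, m: toy_cost[m],
                         list(toy_cost)))        # ('P16x16', 41.0)
```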
doi: 10.1007/11864349_33
Performing Deblocking in Video Coding Based on Spatial-Domain Motion-Compensated Temporal Filtering
Adrian Munteanu; Joeri Barbarien; Jan Cornelis; Peter Schelkens
Employing block-based motion models in scalable video coding based on spatial-domain motion-compensated temporal filtering (SDMCTF) introduces artificial block-boundary discontinuities that adversely affect compression performance, particularly at low bit rates. This paper focuses on the problem of deblocking in the context of SDMCTF-based video coding. One possible solution is overlapped-block motion compensation (OBMC); an alternative is an adaptive deblocking filter similar to the one used in H.264. In this respect, a novel adaptive deblocking filter tailored to SDMCTF video coding is proposed. In terms of visual quality, both approaches yield similar performance; however, adaptive deblocking is less complex than OBMC, requiring up to 34% less processing time. Experimental results show that the two techniques significantly improve the subjective and objective quality of the decoded sequences, confirming the expected benefits of deblocking in SDMCTF-based video coding.
- Video Processing and Coding | Pp. 364-374
doi: 10.1007/11864349_34
Improving DCT-Based Coders Through Block Oriented Transforms
Antoine Robert; Isabelle Amonou; Béatrice Pesquet-Popescu
This paper describes a pre-processing stage for DCT-based coders, and more generally for block-based image or video coders, that takes advantage of the orientation of the blocks. Contrary to most solutions proposed so far, it is not the transform that adapts to the signal but the signal that is pre-processed to fit the transform. The blocks are oriented using circular shifts at the pixel level. Before applying these shifts, the orientation of each block is evaluated with the help of a selection based on a rate-distortion criterion. We show that inserting this pre-processing stage into an H.264 coder and applying it to residual intra frames can improve its rate-distortion performance.
- Video Processing and Coding | Pp. 375-383
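A much-simplified sketch of the idea of orienting a block with circular pixel shifts before the transform: each row is cyclically shifted by an amount proportional to its index, and the shift slope is chosen with a cheap proxy criterion (smallest sum of absolute AC coefficients), standing in for the paper's rate-distortion-based selection. The block size, candidate slopes and the cost proxy are all assumptions.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of size n x n."""
    k = np.arange(n)
    mat = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n)) * np.sqrt(2.0 / n)
    mat[0] /= np.sqrt(2.0)
    return mat

def dct2(block):
    """2-D DCT of a square block."""
    m = dct_matrix(block.shape[0])
    return m @ block @ m.T

def orient_block(block, slope):
    """Cyclically shift row r by round(slope * r) pixels."""
    out = np.empty_like(block)
    for r in range(block.shape[0]):
        out[r] = np.roll(block[r], int(round(slope * r)))
    return out

def best_orientation(block, slopes=(-2, -1, -0.5, 0, 0.5, 1, 2)):
    """Pick the slope whose oriented block has the most compact DCT (AC-energy proxy)."""
    def cost(slope):
        coeffs = dct2(orient_block(block, slope))
        return np.abs(coeffs).sum() - abs(coeffs[0, 0])    # ignore the DC term
    return min(slopes, key=cost)

# Toy usage: a diagonally striped 8x8 block should prefer a non-zero slope.
x = np.fromfunction(lambda r, c: ((r + c) % 8 < 4) * 255.0, (8, 8))
print(best_orientation(x))
```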
doi: 10.1007/11864349_35
Improvement of Conventional Deinterlacing Methods with Extrema Detection and Interpolation
Jérôme Roussel; Pascal Bertolino; Marina Nicolas
This article presents a new algorithm for spatial deinterlacing that can easily be integrated into a more complete deinterlacing system, typically a spatio-temporal motion-adaptive one. The spatial interpolation part often fails to reconstruct near-horizontal lines with proper continuity, leading to highly visible artifacts. Our system preserves structure continuity by taking into account that the mis-interpolated points usually correspond to local value extrema. The processing is based on chained lists and connected-graph construction. The new interpolation method is restricted to such structures; for the rest of the image, a traditional directional spatial interpolation already gives satisfactory results. Although the number of pixels affected by the extrema interpolation is relatively small, the overall image quality is noticeably improved. Moreover, our solution regains one of the major advantages of motion-compensation methods without having to pay their complexity cost.
- Video Processing and Coding | Pp. 384-395
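The sketch below only illustrates the two ingredients the abstract names: a traditional directional (ELA-style) spatial interpolation of the missing field lines, and a detector that flags interpolated pixels ending up as local vertical extrema, the kind of point the paper's chained-list/graph-based interpolation would then revisit. It is a crude simplification, not the authors' algorithm.

```python
import numpy as np

def ela_deinterlace(field):
    """field: the even lines of a frame. Returns a full-height frame with the
    odd lines filled by simple directional (ELA) interpolation."""
    h, w = field.shape
    frame = np.zeros((2 * h, w), dtype=float)
    frame[0::2] = field
    for y in range(1, 2 * h - 1, 2):
        up, down = frame[y - 1], frame[y + 1]
        for x in range(1, w - 1):
            # Choose the direction (left diagonal, vertical, right diagonal)
            # with the smallest mismatch between the lines above and below.
            diffs = [abs(up[x - 1] - down[x + 1]),
                     abs(up[x] - down[x]),
                     abs(up[x + 1] - down[x - 1])]
            d = int(np.argmin(diffs)) - 1
            frame[y, x] = 0.5 * (up[x + d] + down[x - d])
        frame[y, 0] = 0.5 * (up[0] + down[0])
        frame[y, -1] = 0.5 * (up[-1] + down[-1])
    frame[-1] = frame[-2]          # last line has no line below: repeat
    return frame

def extremum_mask(frame):
    """True where an interpolated-line pixel is a strict vertical extremum."""
    odd = frame[1:-1:2]            # interpolated rows with both neighbours
    above, below = frame[0:-2:2], frame[2::2]
    return (odd > np.maximum(above, below)) | (odd < np.minimum(above, below))

# Toy usage: deinterlace 8 even lines of a horizontal ramp.
field = np.tile(np.linspace(0.0, 255.0, 16), (8, 1))
frame = ela_deinterlace(field)
print(frame.shape, int(extremum_mask(frame).sum()))
```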
doi: 10.1007/11864349_36
Adaptive Macroblock Mode Selection for Reducing the Encoder Complexity in H.264
Donghyung Kim; Jongho Kim; Jechang Jeong
The H.264/AVC standard is a video compression standard that was jointly developed by the ITU-T Video Coding Experts Group and the ISO/IEC Moving Picture Experts Group. The H.264 video coding standard uses new coding tools such as variable block sizes, quarter-pixel-accuracy motion estimation, intra prediction, and a loop filter. Using these coding tools, H.264 achieves a significant improvement in coding efficiency compared with existing standards; encoder complexity, however, also increases tremendously. Among these tools, macroblock mode selection and motion estimation contribute most to the total encoder complexity. This paper focuses on reducing the complexity of macroblock mode selection. Of the macroblock modes that can be selected, inter8×8 and intra4×4 have the highest complexity. We propose two methods that reduce the complexity of inter8×8 and intra4×4 by using the costs of the other macroblock modes. Simulation results show that the proposed methods save about 55% and 74% of the total encoding time compared with the H.264 reference implementation when using a full search and a fast motion estimation scheme, respectively, while maintaining comparable PSNR.
- Video Processing and Coding | Pp. 396-405
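The sketch below illustrates the general shape of such an early-termination rule: the cheap macroblock modes are evaluated first, and the costly inter8×8 and intra4×4 searches are entered only when the costs already measured suggest they could win. The thresholds, the specific conditions and the mode names are assumptions, not the two decision rules proposed in the paper.

```python
def select_mb_mode(mb, rd_cost, alpha=1.2, beta=1.5):
    """Evaluate cheap modes first; visit inter8x8 / intra4x4 only when needed.
    rd_cost(mb, mode) is assumed to return the rate-distortion cost of a mode."""
    cheap_inter = ["SKIP", "P16x16", "P16x8", "P8x16"]
    costs = {m: rd_cost(mb, m) for m in cheap_inter}
    costs["I16x16"] = rd_cost(mb, "I16x16")

    best_inter = min(costs[m] for m in cheap_inter)
    if best_inter > alpha * costs["SKIP"]:               # large partitions fit poorly
        costs["P8x8"] = rd_cost(mb, "P8x8")              # -> worth trying inter8x8
    if costs["I16x16"] < beta * min(costs.values()):     # intra already competitive
        costs["I4x4"] = rd_cost(mb, "I4x4")              # -> worth trying intra4x4

    return min(costs, key=costs.get)

# Toy usage: here neither expensive mode is visited and P16x16 wins.
toy = {"SKIP": 120.0, "P16x16": 95.0, "P16x8": 98.0, "P8x16": 97.0,
       "I16x16": 180.0, "P8x8": 90.0, "I4x4": 160.0}
print(select_mb_mode("mb0", lambda mb, m: toy[m]))       # P16x16
```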
doi: 10.1007/11864349_37
Dynamic Light Field Compression Using Shared Fields and Region Blocks for Streaming Service
Yebin Liu; Qionghai Dai; Wenli Xu; Zhihong Liao
Multi-view simul-switching is one of the most important features of dynamic light field (DLF) streaming. In this paper, we jointly consider light field rendering and compression and propose a novel DLF compression scheme based on the requirements of DLF streaming. In this scheme, successive temporal prediction chains are broken, and a shared field is used as the reference for all later P frames in a group of fields. Meanwhile, considering the region of interest for DLF rendering, we partition all P frames into region blocks and code them so that any region block can be transmitted independently for bandwidth-efficient streaming. Based on this coding scheme, a multi-camera DLF system is developed to verify the streaming performance of the proposed scheme. Experimental results show that our scheme substantially reduces the per-user transmission bandwidth compared with other DLF compression schemes.
- Video Processing and Coding | Pp. 406-417
doi: 10.1007/11864349_38
Complexity Scalability in Motion-Compensated Wavelet-Based Video Coding
T. Clerckx; A. Munteanu; J. Cornelis; P. Schelkens
Scalable wavelet-based video codecs based on motion-compensated temporal filtering (MCTF) require complexity scalability to cope with the growing heterogeneity of devices on which video has to be processed. The computational and memory complexity of two video codec instantiations, spatial-domain MCTF (SDMCTF) and in-band MCTF (IBMCTF), is examined in this paper. Comparisons in terms of complexity versus performance are presented for both types of codecs. Some of the trade-offs between complexity and coding performance are analyzed, and it is indicated how complexity scalability can be achieved in such video codecs. Furthermore, a new approach is presented to obtain complexity scalability in IBMCTF video coding by targeting the complexity of the complete-to-overcomplete discrete wavelet transform, at the cost of a limited and controllable penalty on the overall coding performance.
- Video Processing and Coding | Pp. 418-430
doi: 10.1007/11864349_39
Spatial Error Concealment with Low Complexity in the H.264 Standard
Donghyung Kim; Seungjong Kim; Jechang Jeong
H.264 adopts new coding tools such as intra prediction, variable block sizes, quarter-pixel-accuracy motion estimation, and a loop filter. The adoption of these tools enables an H.264-coded bitstream to carry more information than previous standards. In this paper we propose an effective spatial error concealment method with low complexity. Among the information included in an H.264-coded bitstream, we use the prediction modes of intra blocks to recover a damaged block, because the prediction direction of each mode is highly correlated with the edge direction. We first estimate the edge direction of a damaged block using the prediction modes of the adjacent intra blocks and classify the area inside the damaged block into an edge area and a flat area. Our method then recovers pixel values in the edge area using edge-directed interpolation and pixel values in the flat area using weighted interpolation. Simulation results show that the proposed method yields better video quality than conventional approaches.
- Video Processing and Coding | Pp. 431-441
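As a strongly simplified sketch of the concealment idea: the intra 4×4 prediction modes of the undamaged neighbours vote for an edge direction, a DC-dominated neighbourhood is treated as flat and concealed by distance-weighted averaging of the four boundaries, and otherwise the block is filled by 1-D interpolation along the dominant direction (only the vertical and horizontal cases are shown). The mode-to-direction mapping and the interpolation formulas are illustrative assumptions, not the paper's classification or edge-directed interpolator.

```python
import numpy as np
from collections import Counter

# H.264 intra 4x4 mode indices used for the vote (vertical, horizontal, DC).
VERTICAL, HORIZONTAL, DC = 0, 1, 2

def conceal_block(top, bottom, left, right, neighbour_modes, n=16):
    """top/bottom/left/right: 1-D arrays of boundary pixels around the lost
    n x n block, taken from correctly decoded neighbouring blocks."""
    dominant = Counter(neighbour_modes).most_common(1)[0][0]
    y, x = np.mgrid[0:n, 0:n].astype(float)
    if dominant == VERTICAL:                 # vertical structures: blend top/bottom
        w = (y + 1.0) / (n + 1.0)
        return (1.0 - w) * top[np.newaxis, :] + w * bottom[np.newaxis, :]
    if dominant == HORIZONTAL:               # horizontal structures: blend left/right
        w = (x + 1.0) / (n + 1.0)
        return (1.0 - w) * left[:, np.newaxis] + w * right[:, np.newaxis]
    # Flat area (DC dominant): distance-weighted average of all four boundaries.
    wt, wb = 1.0 / (y + 1.0), 1.0 / (n - y)
    wl, wr = 1.0 / (x + 1.0), 1.0 / (n - x)
    num = (wt * top[np.newaxis, :] + wb * bottom[np.newaxis, :]
           + wl * left[:, np.newaxis] + wr * right[:, np.newaxis])
    return num / (wt + wb + wl + wr)

# Toy usage: neighbours voting "vertical" blend the top and bottom boundaries.
top = np.full(16, 50.0); bottom = np.full(16, 200.0)
left = right = np.linspace(0.0, 255.0, 16)
print(conceal_block(top, bottom, left, right, [VERTICAL, VERTICAL, DC]).shape)
```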
doi: 10.1007/11864349_40
A Real-Time Content Adaptation Framework for Exploiting ROI Scalability in H.264/AVC
Peter Lambert; Davy De Schrijver; Davy Van Deursen; Wesley De Neve; Yves Dhondt; Rik Van de Walle
In many application scenarios, the use of Regions of Interest (ROIs) within video sequences is a useful concept. This paper shows how Flexible Macroblock Ordering (FMO), defined in H.264/AVC as an error resilience tool, can be used to code arbitrary-shaped ROIs. In order to exploit the coding of ROIs in an H.264/AVC bitstream, a description-driven content adaptation framework is introduced that is able to extract the ROIs of a given bitstream.
The results of a series of tests indicate that the ROI extraction process significantly reduces the bit rate of the bitstreams and increases the decoding speed. In the case of a fixed camera and a static background, the impact of this reduction on the visual quality of the video sequence is negligible. Regarding the adaptation framework itself, it is shown that the framework operates in real time in all cases and that it is suited for streaming scenarios by design.
- Video Processing and Coding | Pp. 442-453
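A minimal sketch of the FMO idea underlying this approach, assuming explicit macroblock-to-slice-group maps (slice group map type 6): ROI macroblocks are assigned to one slice group and the background to another, so an adaptation engine can drop the background slices to extract the ROI. The data structures and the extraction rule are illustrative, not the paper's description-driven framework.

```python
def fmo_map_from_roi(roi_mask_mb):
    """roi_mask_mb: 2-D list of booleans, one per macroblock (True = ROI).
    Returns the macroblock-to-slice-group map in raster-scan order
    (group 0 = ROI, group 1 = background)."""
    return [0 if in_roi else 1 for row in roi_mask_mb for in_roi in row]

def keep_roi_slices(slices, mb_to_group):
    """Keep only the slices whose macroblocks all belong to slice group 0.
    Each slice is assumed to list the macroblock addresses it covers."""
    return [s for s in slices
            if all(mb_to_group[addr] == 0 for addr in s["mb_addrs"])]

# Toy usage: a QCIF-sized picture (11 x 9 macroblocks) with a 3 x 3 ROI.
mask = [[3 <= x <= 5 and 2 <= y <= 4 for x in range(11)] for y in range(9)]
mb_map = fmo_map_from_roi(mask)
slices = [{"mb_addrs": [2 * 11 + 3, 2 * 11 + 4, 2 * 11 + 5]},  # an ROI slice
          {"mb_addrs": [0, 1, 2]}]                             # a background slice
print(len(keep_roi_slices(slices, mb_map)))                    # 1
```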