Journal ArticleDOI

Hyper-Trellis Decoding of Pixel-Domain Wyner–Ziv Video Coding

01 May 2008-IEEE Transactions on Circuits and Systems for Video Technology (IEEE)-Vol. 18, Iss: 5, pp 557-568

TL;DR: A new decoding algorithm based on a hyper-trellis, in which multiple states of the original code trellis are combined, significantly improves performance without changing the complexity of the decoder.

Abstract: In this paper, we present a new decoding algorithm for the Wyner-Ziv (WZ) video coding scheme based on turbo codes. In this scheme, a video frame is encoded using a turbo code, and only a subset of the parity bits are sent to the decoder. At the decoder, the temporal correlation of the video sequence is exploited by using the previous frame as noisy side information (SI) for the current frame. However, there is a mismatch between the SI, which is available as pixel values, and the binary code bits. Previous implementations of the decoder use suboptimal approaches that convert pixel values to soft information for code bits. We present a new decoding algorithm for this application based on decoding on a hyper-trellis, in which multiple states of the original code trellis are combined. We show that this approach significantly improves performance without changing the complexity of the decoder. We also introduce a new technique for the WZ decoder to exploit the spatial correlation within a frame without requiring transform-domain encoding at the encoder, thereby reducing its complexity. Simulation results for fixed-rate transmission show a 9-10-dB improvement in the peak signal-to-noise ratio when compared to a WZ video codec that does bitwise decoding and utilizes only the temporal correlation.
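The abstract notes a mismatch between pixel-valued side information and binary code bits, with earlier decoders converting each pixel into per-bit soft information. The sketch below illustrates that bitwise conversion (the suboptimal baseline the paper improves upon), assuming a Laplacian correlation-noise model between the SI pixel and the true pixel; the function name, parameter values, and quantizer layout are illustrative assumptions, not taken from the paper:

```python
import math

def bit_llrs_from_pixel_si(y, alpha=0.1, n_bits=4):
    """Per-bit log-likelihood ratios for an n_bits-quantized pixel, given
    side-information value y, under an assumed Laplacian correlation-noise
    model p(x | y) proportional to exp(-alpha * |x - y|)."""
    n_levels = 2 ** n_bits
    # Posterior weight of each quantization level (pixel range mapped
    # onto the level indices 0 .. n_levels-1 for simplicity).
    weights = [math.exp(-alpha * abs(q - y)) for q in range(n_levels)]
    total = sum(weights)
    probs = [w / total for w in weights]
    llrs = []
    for k in range(n_bits):  # k = 0 is the most significant bit
        p0 = sum(p for q, p in enumerate(probs)
                 if ((q >> (n_bits - 1 - k)) & 1) == 0)
        p1 = 1.0 - p0
        llrs.append(math.log(p0 / p1))
    return llrs
```

Treating each bit independently this way discards the joint pixel-level statistics, which is exactly the loss the hyper-trellis decoder avoids by combining trellis states so that whole pixels, not bits, are matched against the side information.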

Topics: Soft-decision decoder (71%), Distributed source coding (66%), Residual frame (64%), Decoding methods (59%), Turbo code (56%)



Citations

Journal ArticleDOI
TL;DR: The higher the estimation granularity, the better the rate-distortion performance, since the decoding process adapts more closely to the video's statistical characteristics; accordingly, the pixel and coefficient levels perform best for the PDWZ and TDWZ solutions, respectively.
Abstract: In recent years, practical Wyner-Ziv (WZ) video coding solutions have been proposed with promising results. Most of the solutions available in the literature model the correlation noise (CN) between the original frame and its estimation made at the decoder, which is the so-called side information (SI), by a given distribution whose relevant parameters are estimated using an offline process, assuming that the SI is available at the encoder or the originals are available at the decoder. The major goal of this paper is to propose a more realistic WZ video coding approach by performing online estimation of the CN model parameters at the decoder, for pixel and transform domain WZ video codecs. In this context, several new techniques are proposed based on metrics which explore the temporal correlation between frames with different levels of granularity. For pixel-domain WZ (PDWZ) video coding, three levels of granularity are proposed: frame, block, and pixel levels. For transform-domain WZ (TDWZ) video coding, DCT bands and coefficients are the two granularity levels proposed. The higher the estimation granularity, the better the rate-distortion performance, since the decoding process adapts more closely to the video's statistical characteristics; accordingly, the pixel and coefficient levels are the best performing for the PDWZ and TDWZ solutions, respectively.
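The abstract above estimates correlation-noise model parameters online at several granularities. As a hedged illustration only (the helper names are invented, a standard moment-matching fit stands in for the paper's metrics, and a flattened 1-D residual stands in for true 2-D blocks), a Laplacian parameter can be fit to a residual sample at frame or block granularity:

```python
import math

def laplacian_alpha(residuals):
    """Moment-matching Laplacian parameter for a residual sample:
    a Laplacian with parameter alpha has variance 2/alpha^2,
    so alpha = sqrt(2 / var)."""
    n = len(residuals)
    mean = sum(residuals) / n
    var = sum((r - mean) ** 2 for r in residuals) / n
    return math.sqrt(2.0 / var) if var > 0 else float('inf')

def blockwise_alphas(residual_frame, block=8):
    """Finer granularity: one alpha per block-sized segment of a
    flattened residual frame (a 1-D stand-in for real 2-D blocks)."""
    return [laplacian_alpha(residual_frame[i:i + block])
            for i in range(0, len(residual_frame), block)]
```

A frame-level estimate would simply call `laplacian_alpha` on the whole residual; the finer the partition, the more closely the model tracks local statistics, mirroring the granularity trade-off described above.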

236 citations


Cites background from "Hyper-Trellis Decoding of Pixel-Dom..."

  • ..., [4], [7], and [20], and, therefore, it will be also adopted in this paper....

    [...]


Journal ArticleDOI
TL;DR: A surveillance video compression system with a low-complexity encoder based on Wyner-Ziv coding principles, together with an error-resilience scheme for BCAWZ to address reliable transmission over the backward channel, which is essential to the quality of video data for real-time and reliable object detection and event analysis.
Abstract: Video surveillance has been widely used in recent years to enhance public safety and privacy protection. A video surveillance system that deals with content analysis and activity monitoring needs efficient transmission and storage of the surveillance video data. Video compression techniques can be used to achieve this goal by reducing the size of the video with no or small quality loss. State-of-the-art video compression methods such as H.264/AVC often lead to high computational complexity at the encoder, which is generally implemented in a video camera in a surveillance system. This can significantly increase the cost of a surveillance system, especially when a mass deployment of end cameras is needed. In this paper, we discuss the specific considerations for surveillance video compression. We present a surveillance video compression system with a low-complexity encoder based on Wyner-Ziv coding principles to address the tradeoff between computational complexity and coding efficiency. In addition, we propose a backward-channel aware Wyner-Ziv (BCAWZ) video coding approach to improve the coding efficiency while maintaining low complexity at the encoder. The experimental results show that for surveillance video contents, BCAWZ can achieve significantly higher coding efficiency than H.264/AVC intra coding as well as existing Wyner-Ziv video coding methods and is close to H.264/AVC inter coding, while maintaining coding complexity similar to that of intra coding. This shows that the low-motion characteristics of many surveillance video contents and the low-complexity encoding requirement make our scheme a particularly suitable candidate for surveillance video compression. We further propose an error resilience scheme for BCAWZ to address the concern of reliable transmission in the backward channel, which is essential to the quality of video data for real-time and reliable object detection and event analysis.

49 citations


Journal ArticleDOI
TL;DR: This paper reviews the state-of-the-art DVC architectures with a focus on understanding their opportunities and gaps in addressing the operational requirements and application needs of WVSNs.
Abstract: Distributed video coding (DVC) is a relatively new video coding architecture originated from two fundamental theorems namely, Slepian–Wolf and Wyner–Ziv. Recent research developments have made DVC attractive for applications in the emerging domain of wireless video sensor networks (WVSNs). This paper reviews the state-of-the-art DVC architectures with a focus on understanding their opportunities and gaps in addressing the operational requirements and application needs of WVSNs.

17 citations


Cites methods from "Hyper-Trellis Decoding of Pixel-Dom..."

  • ...• Hyper-Trellis decoding for PDWZ video coding is proposed in (Avudainayagam et al. 2008) to optimize the approach for the reconstruction of WZ frames....

    [...]


Book ChapterDOI
17 Dec 2007
TL;DR: In the proposed scheme, the Wyner-Ziv decoder compensates wrong blocks in the side information using side matching and bi-directional searching; the resulting noise reduction in the side information yields coding improvements in both bitrate and PSNR.
Abstract: To make the encoder extremely simple by eliminating motion prediction/compensation from it, source coding with side information has been investigated, based on the Wyner-Ziv theorem as the basic coding principle. However, the frame interpolation at the decoder, which is essential for redundancy elimination, produces erroneous side information when the basic assumption of linear motion between frames is not satisfied. In this paper, we propose a new Wyner-Ziv video coding scheme featuring side matching in the frame interpolation to improve the side information. In the proposed scheme, the Wyner-Ziv decoder compensates wrong blocks in the side information using side matching and bi-directional searching. The noise reduction in the side information allows the proposed algorithm to achieve coding improvements not only in bitrate but also in PSNR. Results of our experiments show an improvement in PSNR of up to 0.4 dB.
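The bi-directional search described above can be illustrated with a toy 1-D sketch: for each block, symmetric offsets into the previous and next frames are tried under the linear-motion assumption, and the best-matching pair is averaged. The helper names, SAD cost, block size, and search range are illustrative assumptions, not the authors' algorithm:

```python
def sad(a, b):
    """Sum of absolute differences between two equal-length blocks."""
    return sum(abs(x - y) for x, y in zip(a, b))

def bidirectional_match(prev, nxt, center, block=4, search=2):
    """For a block starting at `center` in a 1-D signal, find the symmetric
    offset d minimising SAD(prev[center-d ..], nxt[center+d ..]) and return
    the average of the two matched blocks as the interpolated block."""
    best = None
    for d in range(-search, search + 1):
        if center - d < 0 or center + d < 0:
            continue  # skip offsets that fall outside the signal
        p = prev[center - d: center - d + block]
        n = nxt[center + d: center + d + block]
        if len(p) < block or len(n) < block:
            continue
        cost = sad(p, n)
        if best is None or cost < best[0]:
            best = (cost, p, n)
    _, p, n = best
    return [(x + y) / 2 for x, y in zip(p, n)]
```

A real implementation would operate on 2-D blocks and add the side-matching criterion (comparing block borders against already-reconstructed neighbours) to reject wrong matches; the symmetric-offset loop above only captures the bi-directional search component.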

17 citations


Cites methods or result from "Hyper-Trellis Decoding of Pixel-Dom..."

  • ...i) Conventional: Hyper-trellis-based PDWZ with frame interpolation[ 5 ] ii) Proposed: Hyper-trellis-based PDWZ with frame interpolation using the proposed side matching The test conditions are as follows....

    [...]

  • ...In this respect, the independence assumption of bit planes taken by previously proposed bitplane-based turbo coding has a problem in calculating channel likelihood [ 5 ]....

    [...]

  • ...Table 1 shows average performance improvements which lead us to conclude that our proposed scheme is better than the conventional scheme[ 5 ], and as we mentioned above, performance improvements occur in both PSNR and bit-rate....

    [...]

  • ...Second, we use the hyper-trellis turbo coding [ 5 ] instead of turbo coding based on bit-plane....

    [...]

  • ...Table 1. Average performance improvements of proposed scheme over the conventional one [ 5 ]...

    [...]


Proceedings ArticleDOI
23 Sep 2010
TL;DR: The major research challenges and objectives of image coding for WMSN are discussed, and distributed image coding schemes, including Wyner-Ziv coding and collaborative coding, together with some theoretical explorations, are investigated and classified.
Abstract: A large amount of video data has to be processed and transmitted in resource-constrained wireless multimedia sensor networks (WMSN). One possible way of achieving maximum utilization of those resources is to apply an adaptive image coding scheme, which must consider the trade-off between energy consumption and image quality. The major research challenges and objectives of image coding for WMSN are discussed. Distributed image coding schemes especially designed for WMSN, along with some theoretical explorations, are investigated and classified, including Wyner-Ziv coding and collaborative coding.

13 citations


Cites background from "Hyper-Trellis Decoding of Pixel-Dom..."

  • ...[21] also exploit the spatial correlation within a frame at decoder....

    [...]


References

Book
01 Jan 1991
TL;DR: The authors examine the role of entropy, inequalities, and randomness in the design and construction of codes.
Abstract: Preface to the Second Edition. Preface to the First Edition. Acknowledgments for the Second Edition. Acknowledgments for the First Edition. 1. Introduction and Preview. 1.1 Preview of the Book. 2. Entropy, Relative Entropy, and Mutual Information. 2.1 Entropy. 2.2 Joint Entropy and Conditional Entropy. 2.3 Relative Entropy and Mutual Information. 2.4 Relationship Between Entropy and Mutual Information. 2.5 Chain Rules for Entropy, Relative Entropy, and Mutual Information. 2.6 Jensen's Inequality and Its Consequences. 2.7 Log Sum Inequality and Its Applications. 2.8 Data-Processing Inequality. 2.9 Sufficient Statistics. 2.10 Fano's Inequality. Summary. Problems. Historical Notes. 3. Asymptotic Equipartition Property. 3.1 Asymptotic Equipartition Property Theorem. 3.2 Consequences of the AEP: Data Compression. 3.3 High-Probability Sets and the Typical Set. Summary. Problems. Historical Notes. 4. Entropy Rates of a Stochastic Process. 4.1 Markov Chains. 4.2 Entropy Rate. 4.3 Example: Entropy Rate of a Random Walk on a Weighted Graph. 4.4 Second Law of Thermodynamics. 4.5 Functions of Markov Chains. Summary. Problems. Historical Notes. 5. Data Compression. 5.1 Examples of Codes. 5.2 Kraft Inequality. 5.3 Optimal Codes. 5.4 Bounds on the Optimal Code Length. 5.5 Kraft Inequality for Uniquely Decodable Codes. 5.6 Huffman Codes. 5.7 Some Comments on Huffman Codes. 5.8 Optimality of Huffman Codes. 5.9 Shannon-Fano-Elias Coding. 5.10 Competitive Optimality of the Shannon Code. 5.11 Generation of Discrete Distributions from Fair Coins. Summary. Problems. Historical Notes. 6. Gambling and Data Compression. 6.1 The Horse Race. 6.2 Gambling and Side Information. 6.3 Dependent Horse Races and Entropy Rate. 6.4 The Entropy of English. 6.5 Data Compression and Gambling. 6.6 Gambling Estimate of the Entropy of English. Summary. Problems. Historical Notes. 7. Channel Capacity. 7.1 Examples of Channel Capacity. 7.2 Symmetric Channels. 7.3 Properties of Channel Capacity. 
7.4 Preview of the Channel Coding Theorem. 7.5 Definitions. 7.6 Jointly Typical Sequences. 7.7 Channel Coding Theorem. 7.8 Zero-Error Codes. 7.9 Fano's Inequality and the Converse to the Coding Theorem. 7.10 Equality in the Converse to the Channel Coding Theorem. 7.11 Hamming Codes. 7.12 Feedback Capacity. 7.13 Source-Channel Separation Theorem. Summary. Problems. Historical Notes. 8. Differential Entropy. 8.1 Definitions. 8.2 AEP for Continuous Random Variables. 8.3 Relation of Differential Entropy to Discrete Entropy. 8.4 Joint and Conditional Differential Entropy. 8.5 Relative Entropy and Mutual Information. 8.6 Properties of Differential Entropy, Relative Entropy, and Mutual Information. Summary. Problems. Historical Notes. 9. Gaussian Channel. 9.1 Gaussian Channel: Definitions. 9.2 Converse to the Coding Theorem for Gaussian Channels. 9.3 Bandlimited Channels. 9.4 Parallel Gaussian Channels. 9.5 Channels with Colored Gaussian Noise. 9.6 Gaussian Channels with Feedback. Summary. Problems. Historical Notes. 10. Rate Distortion Theory. 10.1 Quantization. 10.2 Definitions. 10.3 Calculation of the Rate Distortion Function. 10.4 Converse to the Rate Distortion Theorem. 10.5 Achievability of the Rate Distortion Function. 10.6 Strongly Typical Sequences and Rate Distortion. 10.7 Characterization of the Rate Distortion Function. 10.8 Computation of Channel Capacity and the Rate Distortion Function. Summary. Problems. Historical Notes. 11. Information Theory and Statistics. 11.1 Method of Types. 11.2 Law of Large Numbers. 11.3 Universal Source Coding. 11.4 Large Deviation Theory. 11.5 Examples of Sanov's Theorem. 11.6 Conditional Limit Theorem. 11.7 Hypothesis Testing. 11.8 Chernoff-Stein Lemma. 11.9 Chernoff Information. 11.10 Fisher Information and the Cram-er-Rao Inequality. Summary. Problems. Historical Notes. 12. Maximum Entropy. 12.1 Maximum Entropy Distributions. 12.2 Examples. 12.3 Anomalous Maximum Entropy Problem. 12.4 Spectrum Estimation. 
12.5 Entropy Rates of a Gaussian Process. 12.6 Burg's Maximum Entropy Theorem. Summary. Problems. Historical Notes. 13. Universal Source Coding. 13.1 Universal Codes and Channel Capacity. 13.2 Universal Coding for Binary Sequences. 13.3 Arithmetic Coding. 13.4 Lempel-Ziv Coding. 13.5 Optimality of Lempel-Ziv Algorithms. Compression. Summary. Problems. Historical Notes. 14. Kolmogorov Complexity. 14.1 Models of Computation. 14.2 Kolmogorov Complexity: Definitions and Examples. 14.3 Kolmogorov Complexity and Entropy. 14.4 Kolmogorov Complexity of Integers. 14.5 Algorithmically Random and Incompressible Sequences. 14.6 Universal Probability. 14.7 Kolmogorov complexity. 14.9 Universal Gambling. 14.10 Occam's Razor. 14.11 Kolmogorov Complexity and Universal Probability. 14.12 Kolmogorov Sufficient Statistic. 14.13 Minimum Description Length Principle. Summary. Problems. Historical Notes. 15. Network Information Theory. 15.1 Gaussian Multiple-User Channels. 15.2 Jointly Typical Sequences. 15.3 Multiple-Access Channel. 15.4 Encoding of Correlated Sources. 15.5 Duality Between Slepian-Wolf Encoding and Multiple-Access Channels. 15.6 Broadcast Channel. 15.7 Relay Channel. 15.8 Source Coding with Side Information. 15.9 Rate Distortion with Side Information. 15.10 General Multiterminal Networks. Summary. Problems. Historical Notes. 16. Information Theory and Portfolio Theory. 16.1 The Stock Market: Some Definitions. 16.2 Kuhn-Tucker Characterization of the Log-Optimal Portfolio. 16.3 Asymptotic Optimality of the Log-Optimal Portfolio. 16.4 Side Information and the Growth Rate. 16.5 Investment in Stationary Markets. 16.6 Competitive Optimality of the Log-Optimal Portfolio. 16.7 Universal Portfolios. 16.8 Shannon-McMillan-Breiman Theorem (General AEP). Summary. Problems. Historical Notes. 17. Inequalities in Information Theory. 17.1 Basic Inequalities of Information Theory. 17.2 Differential Entropy. 17.3 Bounds on Entropy and Relative Entropy. 17.4 Inequalities for Types. 
17.5 Combinatorial Bounds on Entropy. 17.6 Entropy Rates of Subsets. 17.7 Entropy and Fisher Information. 17.8 Entropy Power Inequality and Brunn-Minkowski Inequality. 17.9 Inequalities for Determinants. 17.10 Inequalities for Ratios of Determinants. Summary. Problems. Historical Notes. Bibliography. List of Symbols. Index.

42,928 citations


"Hyper-Trellis Decoding of Pixel-Dom..." refers background in this paper

  • ...The first approach [6]–[9] is based on explicit random binning techniques [10], [11] that are typically used in achievability proofs in information theory....

    [...]


Book
01 Jan 1983

25,004 citations


Book
01 Jan 1990
TL;DR: The updated new edition of the classic Introduction to Algorithms is intended primarily for use in undergraduate or graduate courses in algorithms or data structures and presents a rich variety of algorithms and covers them in considerable depth while making their design and analysis accessible to all levels of readers.
Abstract: From the Publisher: The updated new edition of the classic Introduction to Algorithms is intended primarily for use in undergraduate or graduate courses in algorithms or data structures. Like the first edition, this text can also be used for self-study by technical professionals since it discusses engineering issues in algorithm design as well as the mathematical aspects. In its new edition, Introduction to Algorithms continues to provide a comprehensive introduction to the modern study of algorithms. The revision has been updated to reflect changes in the years since the book's original publication. New chapters on the role of algorithms in computing and on probabilistic analysis and randomized algorithms have been included. Sections throughout the book have been rewritten for increased clarity, and material has been added wherever a fuller explanation has seemed useful or new information warrants expanded coverage. As in the classic first edition, this new edition of Introduction to Algorithms presents a rich variety of algorithms and covers them in considerable depth while making their design and analysis accessible to all levels of readers. Further, the algorithms are presented in pseudocode to make the book easily accessible to students from all programming language backgrounds. Each chapter presents an algorithm, a design technique, an application area, or a related topic. The chapters are not dependent on one another, so the instructor can organize his or her use of the book in the way that best suits the course's needs. Additionally, the new edition offers a 25% increase over the first edition in the number of problems, giving the book 155 problems and over 900 exercises that reinforce the concepts the students are learning.

21,642 citations


"Hyper-Trellis Decoding of Pixel-Dom..." refers background in this paper

  • ...A proper vertex coloring is harmonious if every pair of colors appears on at most one pair of adjacent vertices [28]....

    [...]

  • ...A vertex coloring is proper if no two adjacent vertices7 are assigned the same color [28]....

    [...]


01 Jan 2005

19,237 citations


01 Nov 1985
TL;DR: This month's guest columnist, Steve Bible, N7HPR, is completing a master’s degree in computer science at the Naval Postgraduate School in Monterey, California, and his research area closely follows his interest in amateur radio.
Abstract: Spread Spectrum It’s not just for breakfast anymore! Don't blame me, the title is the work of this month's guest columnist, Steve Bible, N7HPR (n7hpr@tapr.org). While cruising the net recently, I noticed a sudden bump in the number of times Spread Spectrum (SS) techniques were mentioned in the amateur digital areas. While QEX has discussed SS in the past, we haven't touched on it in this forum. Steve was a frequent cogent contributor, so I asked him to give us some background. Steve enlisted in the Navy in 1977 and became a Data Systems Technician, a repairman of shipboard computer systems. In 1985 he was accepted into the Navy’s Enlisted Commissioning Program and attended the University of Utah where he studied computer science. Upon graduation in 1988 he was commissioned an Ensign and entered Nuclear Power School. His subsequent assignment was onboard the USS Georgia, a trident submarine stationed in Bangor, Washington. Today Steve is a Lieutenant and he is completing a master’s degree in computer science at the Naval Postgraduate School in Monterey, California. His areas of interest are digital communications, amateur satellites, VHF/UHF contesting, and QRP. His research area closely follows his interest in amateur radio. His thesis topic is Multihop Packet Radio Routing Protocol Using Dynamic Power Control. Steve is also the AMSAT Area Coordinator for the Monterey Bay area. Here's Steve, I'll have some additional comments at the end.

8,574 citations


"Hyper-Trellis Decoding of Pixel-Dom..." refers methods in this paper

  • ...channels are combined using diversity combining techniques [21] in order to improve performance....

    [...]