Author

Fuling Liu

Other affiliations: Baker Hughes, Western Atlas
Bio: Fuling Liu is an academic researcher from the University of Nebraska–Lincoln. The author has contributed to research in topics including Error detection and correction and Redundancy (engineering), has an h-index of 3, and has co-authored 4 publications receiving 83 citations. Previous affiliations of Fuling Liu include Baker Hughes and Western Atlas.

Papers
Journal ArticleDOI
TL;DR: A family of nonbinary encoders is developed that more efficiently uses the residual redundancy in the source coder output; the proposed systems outperform conventional source-channel coder pairs with gains of greater than 9 dB in reconstruction SNR at high probability of error.
Abstract: The design of joint source/channel coders in situations where there is residual redundancy at the output of the source coder is examined. It has previously been shown that this residual redundancy can be used to provide error protection without a channel coder. In this paper, this approach is extended to conventional source coder/convolutional coder combinations. A family of nonbinary encoders is developed which more efficiently use the residual redundancy in the source coder output. It is shown through simulation results that the proposed systems outperform conventional source-channel coder pairs with gains of greater than 9 dB in the reconstruction SNR at high probability of error.
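
As a rough illustration of the underlying idea (a sketch under assumed parameters, not the encoders or decoders designed in the paper), a joint decoder can fold the residual redundancy, modeled here as a first-order Markov chain over quantizer indices, into a Viterbi-style search by adding a source transition log-probability to the channel log-likelihood of each candidate index:

```python
# Illustrative sketch only: joint sequence decoding of quantizer indices sent
# over a binary symmetric channel, using the residual (first-order Markov)
# redundancy of the index stream as a prior. The bit mapping, channel model,
# and parameters are assumptions, not those used in the paper.
import numpy as np

def to_bits(symbol, width):
    return [(symbol >> i) & 1 for i in reversed(range(width))]

def joint_viterbi(received_bits, trans_prob, init_prob, p_err, width):
    """Most probable index sequence given hard BSC outputs and a Markov source prior."""
    q = trans_prob.shape[0]
    n = len(received_bits) // width
    log_trans = np.log(trans_prob)
    cost = np.log(np.asarray(init_prob, dtype=float))   # per-symbol log metric
    back = np.zeros((n, q), dtype=int)
    for t in range(n):
        rx = received_bits[t * width:(t + 1) * width]
        # channel log-likelihood of each candidate index
        loglik = np.array([
            sum(np.log(1 - p_err) if b == r else np.log(p_err)
                for b, r in zip(to_bits(s, width), rx))
            for s in range(q)
        ])
        if t == 0:
            cost = cost + loglik
        else:
            cand = cost[:, None] + log_trans             # previous index -> current index
            back[t] = np.argmax(cand, axis=0)
            cost = cand[back[t], np.arange(q)] + loglik
    path = [int(np.argmax(cost))]
    for t in range(n - 1, 0, -1):                        # traceback
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

With no channel code at all, the source prior alone is what pulls the decoded sequence back toward likely index patterns; the nonbinary encoders developed in the paper are designed to work together with this mechanism.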

69 citations

Proceedings ArticleDOI
01 Jan 1988
TL;DR: A MAP approach to joint source/channel coding has been proposed that uses a MAP decoder and a modification of the source coder to provide error correction; implementation strategies and results for an image coding application are presented.
Abstract: One of Shannon's many fundamental contributions was his result that source coding and channel coding can be implemented separately without any loss of optimality. However, the assumption underlying this result may at times be violated in practice. Various joint source/channel coding approaches have been developed for handling such situations. A MAP approach to joint source/channel coding has been proposed which uses a MAP decoder and a modification of the source coder to provide error correction. We present various implementation strategies for this approach and provide results for an image coding application.
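
At the level of a single symbol, the MAP principle amounts to weighting each candidate's channel likelihood by its prior from the source model and taking the argmax. The numbers below are made up purely to show how a strong prior can overturn a maximum-likelihood decision:

```python
# Toy per-symbol MAP decision (illustrative values only).
import numpy as np

prior = np.array([0.7, 0.1, 0.1, 0.1])       # assumed non-uniform source statistics
likelihood = np.array([0.2, 0.5, 0.2, 0.1])  # P(received | symbol) from the channel

posterior = likelihood * prior
posterior /= posterior.sum()
print(posterior)                  # approx. [0.64, 0.23, 0.09, 0.05]
print(int(np.argmax(posterior)))  # MAP picks symbol 0; maximum likelihood would pick 1
```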

9 citations

Proceedings ArticleDOI
04 Nov 1991
TL;DR: The authors examine the situation where there is residual redundancy at the source coder output and present the design of encoders for this situation, showing that the proposed coders consistently outperform conventional source-channel coder pairs with gains of up to 12 dB at a high probability of error.
Abstract: Source coders and channel coders are generally designed separately without reference to each other. This approach is justified by a famous result of C.E. Shannon (1948). However, there are many situations in practice in which the assumptions upon which this result is based are violated. Specifically, the authors examine the situation where there is residual redundancy at the source coder output. They have previously shown that this residual redundancy can be used to provide error correction using a Viterbi decoder. In this work, they present the second half of the design: the design of encoders for this situation. They show through simulation results that the proposed coders consistently outperform conventional source-channel coder pairs with gains of up to 12 dB at a high probability of error.
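
The specific encoder designs are not reproduced here, but a toy sketch of the general structure, an encoder operating directly on q-ary quantizer indices rather than on bits, might look as follows (the memory-1, rate-1 form and the parameter a are assumptions for illustration only):

```python
# Illustrative only: a toy rate-1 nonbinary "convolutional" encoder over the
# integers mod q, acting on quantizer indices. Not the encoders designed in the paper.
def nonbinary_encode(indices, q, a=1):
    prev, out = 0, []
    for x in indices:
        out.append((x + a * prev) % q)   # memory-1: y_n = x_n + a * x_{n-1} (mod q)
        prev = x
    return out

def nonbinary_decode(symbols, q, a=1):
    # Exact inverse in the absence of channel errors, for sanity-checking.
    prev, out = 0, []
    for y in symbols:
        x = (y - a * prev) % q
        out.append(x)
        prev = x
    return out

idx = [3, 3, 2, 3, 0, 1]
assert nonbinary_decode(nonbinary_encode(idx, q=4), q=4) == idx
```

In this toy rate-1 form, any error resilience would have to come from a decoder that also exploits the residual redundancy of the index stream.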

3 citations

Proceedings ArticleDOI
23 May 1993
TL;DR: The situation in which there is residual redundancy at the source coder output is examined, and a design for nonbinary encoders for this situation is developed; the proposed systems consistently outperform conventional source-channel coder pairs with gains of greater than 10 dB at high probability of error.
Abstract: The situation in which there is residual redundancy at the source coder output is examined. It has previously been shown that this residual redundancy can be used to provide error correction without a channel encoder. This approach is extended to conventional source coder/convolutional coder combinations. A design for nonbinary encoders for this situation is developed. It is shown through simulation results that the proposed systems consistently outperform conventional source-channel coder pairs with gains of greater than 10 dB at high probability of error.

2 citations


Cited by
Journal ArticleDOI
TL;DR: This work chronicles the development of rate-distortion theory and provides an overview of its influence on the practice of lossy source coding.
Abstract: Lossy coding of speech, high-quality audio, still images, and video is commonplace today. However, in 1948, few lossy compression systems were in service. Shannon introduced and developed the theory of source coding with a fidelity criterion, also called rate-distortion theory. For the first 25 years of its existence, rate-distortion theory had relatively little impact on the methods and systems actually used to compress real sources. Today, however, rate-distortion theoretic concepts are an important component of many lossy compression techniques and standards. We chronicle the development of rate-distortion theory and provide an overview of its influence on the practice of lossy source coding.
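
For reference, the quantity at the center of this theory, Shannon's rate-distortion function for a source $X$, reconstruction $\hat{X}$, and distortion measure $d$, is (standard textbook form, not this paper's notation):

$$ R(D) = \min_{p(\hat{x} \mid x)\,:\ \mathbb{E}[d(X,\hat{X})] \le D} I(X; \hat{X}) $$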

213 citations

01 Dec 1978
TL;DR: In this paper, a combined source-channel coding approach is described for the encoding, transmission and remote reconstruction of image data, where the source encoder employs two-dimensional (2-D) differential pulse code modulation (DPCM).
Abstract: A combined source-channel coding approach is described for the encoding, transmission and remote reconstruction of image data. The source encoder employs two-dimensional (2-D) differential pulse code modulation (DPCM). This is a relatively efficient encoding scheme in the absence of channel errors. In the presence of channel errors, however, the performance degrades rapidly. By providing error control protection to those encoded bits which contribute most significantly to image reconstruction, it is possible to minimize this degradation without sacrificing transmission bandwidth. The result is a relatively robust design which is reasonably insensitive to channel errors and yet provides performance approaching the rate-distortion bound. Analytical results are provided for assumed 2-D autoregressive image models while simulation results are described for real-world images.
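
A minimal sketch of the 2-D DPCM side of such a system (the planar predictor and uniform quantizer here are assumptions, not necessarily the paper's choices):

```python
# 2-D DPCM sketch: predict each pixel from already-reconstructed neighbors and
# transmit only the quantized prediction error. Illustrative, not the paper's design.
import numpy as np

def dpcm2d_encode(img, step):
    h, w = img.shape
    recon = np.zeros((h, w))
    residual_idx = np.zeros((h, w), dtype=int)
    for r in range(h):
        for c in range(w):
            left = recon[r, c - 1] if c > 0 else 0.0
            up = recon[r - 1, c] if r > 0 else 0.0
            up_left = recon[r - 1, c - 1] if (r > 0 and c > 0) else 0.0
            pred = left + up - up_left           # simple planar predictor
            q_idx = int(np.round((img[r, c] - pred) / step))
            residual_idx[r, c] = q_idx
            recon[r, c] = pred + q_idx * step    # matches the decoder's reconstruction
    return residual_idx, recon

idx, recon = dpcm2d_encode(np.random.randint(0, 256, (8, 8)).astype(float), step=8.0)
```

In a combined design of this kind, the sign and most significant bits of each quantizer index, which dominate the reconstruction error, are the natural candidates for the error control protection described in the abstract.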

167 citations

Journal ArticleDOI
01 Oct 1999
TL;DR: This paper advocates the use of rate-compatible punctured systematic recursive convolutional (RCPRSC) codes, which are shown to lead to a straightforward and versatile unequal error protection (UEP) design.
Abstract: Multimedia transmission has to handle a variety of compressed and uncompressed source signals such as data, text, image, audio, and video. On wireless channels the error rates are high and joint source/channel coding and decoding methods are advantageous. Also, the system architecture has to adapt to the bad channel conditions. Several examples of a joint design are given. We especially advocate the use of rate-compatible punctured systematic recursive convolutional (RCPRSC) codes, which are shown to lead to a straightforward and versatile unequal error protection (UEP) design. In addition, the high-end receiver could use soft outputs and source-controlled channel decoding for even better performance.
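
Rate-compatible puncturing derives a family of code rates from a single mother code such that every bit transmitted at a higher (weaker) rate is also transmitted at every lower (stronger) rate, which makes unequal error protection a matter of switching patterns. A toy sketch with made-up puncturing patterns:

```python
# Illustrative RCPC-style puncturing of a rate-1/2 mother code's output
# (4 information bits -> 8 coded bits per period). Patterns are invented but
# satisfy rate compatibility: bits kept at a weaker rate are kept at all stronger rates.
PATTERNS = {
    "rate 1/2": [1, 1, 1, 1, 1, 1, 1, 1],   # keep everything (strongest protection)
    "rate 2/3": [1, 1, 1, 0, 1, 1, 1, 0],   # keep 6 of 8
    "rate 4/5": [1, 1, 1, 0, 1, 0, 1, 0],   # keep 5 of 8 (weakest protection)
}

def puncture(coded_bits, pattern):
    reps = len(coded_bits) // len(pattern) + 1
    return [b for b, keep in zip(coded_bits, pattern * reps) if keep]

# UEP: important bits (e.g. headers) keep "rate 1/2"; less important data uses "rate 4/5".
```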

123 citations

Journal ArticleDOI
TL;DR: This paper proposes a technique which utilizes the residual redundancy at the output of the source coder to provide error protection for entropy coded systems.
Abstract: When using entropy coding over a noisy channel, it is customary to protect the highly vulnerable bitstream with an error correcting code. In this paper, we propose a technique which utilizes the residual redundancy at the output of the source coder to provide error protection for entropy coded systems.

118 citations

Journal ArticleDOI
TL;DR: The decoding scheme proposed can be viewed as a turbo algorithm using alternately the intersymbol correlation due to the Markov source and the redundancy introduced by the channel code, which is used as a translator of soft information from the bit clock to the symbol clock.
Abstract: We analyze the dependencies between the variables involved in the source and channel coding chain. This analysis is carried out in the framework of Bayesian networks, which provide both an intuitive representation for the global model of the coding chain and a way of deriving joint (soft) decoding algorithms. Three sources of dependencies are involved in the chain: (1) the source model, a Markov chain of symbols; (2) the source coder model, based on a variable length code (VLC), for example a Huffman code; and (3) the channel coder, based on a convolutional error correcting code. Joint decoding relying on the hidden Markov model (HMM) of the global coding chain is intractable, except in trivial cases. We advocate instead an iterative procedure inspired by serial turbo codes, in which the three models of the coding chain are used alternately. This idea of using separately each factor of a big product model inside an iterative procedure usually requires the presence of an interleaver between successive components. We show that only one interleaver is necessary here, placed between the source coder and the channel coder. The decoding scheme we propose can be viewed as a turbo algorithm using alternately the intersymbol correlation due to the Markov source and the redundancy introduced by the channel code. The intermediary element, the source coder model, is used as a translator of soft information from the bit clock to the symbol clock.
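
As a rough sketch of the coding chain being modeled (the source statistics, code tables, and channel code below are all assumed for illustration; the iterative decoder itself is not shown), the three components and the single interleaver placed between the source coder and the channel coder can be laid out as:

```python
# Illustrative coding chain: Markov symbol source -> variable-length (prefix-free)
# source coder -> block interleaver -> rate-1/2 convolutional channel coder.
import random

TRANS = {"a": {"a": 0.8, "b": 0.15, "c": 0.05},   # assumed Markov source model
         "b": {"a": 0.3, "b": 0.6, "c": 0.1},
         "c": {"a": 0.3, "b": 0.1, "c": 0.6}}
VLC = {"a": "0", "b": "10", "c": "11"}            # Huffman-like prefix-free code

def markov_source(n, start="a"):
    sym, out = start, []
    for _ in range(n):
        sym = random.choices(list(TRANS[sym]), weights=TRANS[sym].values())[0]
        out.append(sym)
    return out

def vlc_encode(symbols):                          # symbol clock -> bit clock
    return [int(b) for s in symbols for b in VLC[s]]

def interleave(bits, depth=8):                    # simple block interleaver
    bits = bits + [0] * ((-len(bits)) % depth)
    rows = [bits[i:i + depth] for i in range(0, len(bits), depth)]
    return [rows[r][c] for c in range(depth) for r in range(len(rows))]

def conv_encode(bits, g1=0b111, g2=0b101):        # (7,5) rate-1/2 feedforward code
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & 0b111
        out.append(bin(state & g1).count("1") % 2)
        out.append(bin(state & g2).count("1") % 2)
    return out

tx = conv_encode(interleave(vlc_encode(markov_source(50))))
```

A joint decoder in the spirit of the paper would iterate between a channel-code stage working at the bit clock and a source-model stage working at the symbol clock, with the source coder model translating soft information between the two.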

117 citations