Showing papers on "Noisy-channel coding theorem published in 2000"


Dissertation
01 Jan 2000
TL;DR: The reciprocal-channel approximation, based on dualizing LDPC codes, provides a very accurate model of density evolution for the AWGN channel, and another approximation method, the Gaussian approximation, is developed, which enables us to visualize infinite-dimensional density evolution and optimization of LDPC codes.
Abstract: This thesis proposes two constructive methods of approaching the Shannon limit very closely. Interestingly, these two methods operate in opposite regimes: one has a block length of one, and the other has a block length approaching infinity. The first approach is based on novel memoryless joint source-channel coding schemes. We first show some examples of sources and channels where no coding is optimal for all values of the signal-to-noise ratio (SNR). When the source bandwidth is greater than the channel bandwidth, joint coding schemes based on space-filling curves and other families of curves are proposed. For uniform sources and modulo channels, our coding scheme based on space-filling curves operates within 1.1 dB of Shannon's rate-distortion bound. For Gaussian sources and additive white Gaussian noise (AWGN) channels, we can achieve within 0.9 dB of the rate-distortion bound. The second scheme is based on low-density parity-check (LDPC) codes. We first demonstrate that we can translate threshold values of an LDPC code between channels accurately using a simple mapping. We develop some models for density evolution from this observation, namely erasure-channel, Gaussian-capacity, and reciprocal-channel approximations. The reciprocal-channel approximation, based on dualizing LDPC codes, provides a very accurate model of density evolution for the AWGN channel. We also develop another approximation method, the Gaussian approximation, which enables us to visualize infinite-dimensional density evolution and optimization of LDPC codes. We also develop other tools to better understand density evolution. Using these tools, we design some LDPC codes that approach the Shannon limit extremely closely. For multilevel AWGN channels, we design a rate-1/2 code that has a threshold within 0.0063 dB of the Shannon limit of the noisiest level. For binary-input AWGN channels, our best rate-1/2 LDPC code has a threshold within 0.0045 dB of the Shannon limit. Simulation results show that we can achieve within 0.04 dB of the Shannon limit at a bit error rate of 10^-6 using a block length of 10^7. (Copies available exclusively from MIT Libraries, Rm. 14-0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)
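
The rate-1/2 thresholds quoted above are measured against the binary-input AWGN Shannon limit, which sits near 0.19 dB in Eb/N0. A minimal Monte Carlo sketch of how that reference point can be computed; the sample size and bisection bracket are arbitrary choices:

```python
import numpy as np

# Fixed noise sample reused across sigma values (common random numbers),
# so the capacity estimate is a deterministic function of sigma.
rng = np.random.default_rng(0)
z = rng.standard_normal(400_000)

def biawgn_capacity(sigma):
    """Binary-input AWGN capacity in bits per channel use.

    Uses C = 1 - E[log2(1 + exp(-L))] with L = 2y/sigma^2 and y = 1 + noise;
    conditioning on x = +1 suffices by channel symmetry.
    """
    llr = 2.0 * (1.0 + sigma * z) / sigma**2
    return 1.0 - np.mean(np.logaddexp(0.0, -llr)) / np.log(2.0)

# Bisect for the noise level where capacity equals the code rate R = 1/2
# (capacity decreases monotonically in sigma).
lo, hi = 0.5, 2.0
for _ in range(50):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if biawgn_capacity(mid) > 0.5 else (lo, mid)
sigma_star = 0.5 * (lo + hi)

# Eb/N0 = 1 / (2 * R * sigma^2); with R = 1/2 this is simply 1 / sigma^2.
print(f"rate-1/2 BI-AWGN Shannon limit: {10 * np.log10(1 / sigma_star**2):.2f} dB")
```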

394 citations


Journal ArticleDOI
TL;DR: An overview of the novel class of channel codes referred to as turbo codes, which have been shown to be capable of performing close to the Shannon limit, is provided.
Abstract: We provide an overview of the novel class of channel codes referred to as turbo codes, which have been shown to be capable of performing close to the Shannon limit. We commence with a discussion on turbo encoding, and then move on to describing the form of the iterative decoder most commonly used to decode turbo codes. We then elaborate on various decoding algorithms that can be used in an iterative decoder, and give an example of the operation of such a decoder using the so-called soft-output Viterbi algorithm (SOVA). Lastly, the effect of a range of system parameters is investigated in a systematic fashion, in order to gauge their performance ramifications.
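
The essential mechanics of the iterative decoder, two soft-in/soft-out component decoders exchanging extrinsic information through the interleaver, can be sketched structurally. In the sketch below, siso_stub is a hypothetical stand-in for a real SOVA or MAP component decoder; only the extrinsic bookkeeping is meant to be representative:

```python
import numpy as np

rng = np.random.default_rng(1)
K = 1024                          # information block length
perm = rng.permutation(K)         # pseudo-random turbo interleaver

def siso_stub(ch_llr, apriori):
    # Hypothetical stand-in for a SOVA/MAP component decoder: a real SISO
    # decoder would run on the convolutional code's trellis. This stub only
    # returns something so the extrinsic bookkeeping can be demonstrated.
    return 1.5 * (ch_llr + apriori)

def turbo_decode(sys_llr, par1_llr, par2_llr, n_iter=8):
    ext2_deint = np.zeros(K)      # extrinsic info from decoder 2, deinterleaved
    for _ in range(n_iter):
        # Decoder 1 (natural order): a-priori input is decoder 2's extrinsic.
        apost1 = siso_stub(sys_llr + par1_llr, ext2_deint)
        ext1 = apost1 - sys_llr - ext2_deint         # strip intrinsic terms
        # Decoder 2 (interleaved order).
        apost2 = siso_stub(sys_llr[perm] + par2_llr, ext1[perm])
        ext2 = apost2 - sys_llr[perm] - ext1[perm]
        ext2_deint = np.empty(K)
        ext2_deint[perm] = ext2                      # deinterleave
    return np.sign(sys_llr + ext1 + ext2_deint)      # final hard decisions

llrs = rng.standard_normal((3, K))                   # toy channel LLRs
print(turbo_decode(*llrs)[:8])
```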

342 citations


15 Aug 2000
TL;DR: This article computes the capacity of various idealized soft-decision channels modeling an optical channel using an avalanche photodiode detector (APD) and pulse-position modulation (PPM), and shows that soft-output versions of the APD-detected optical channel offer a 3-dB advantage over hard-output versions.
Abstract: This article computes the capacity of various idealized soft-decision channels modeling an optical channel using an avalanche photodiode detector (APD) and pulse-position modulation (PPM). The capacity of this optical channel depends in a complicated way on the physical parameters of the APD and the constraints imposed by the PPM orthogonal signaling set. This article attempts to identify and separate the effects of several fundamental parameters on the capacity of the APD-detected optical PPM channel. First, an overall signal-to-noise ratio (SNR) parameter is defined such that the capacity as a function of a bit-normalized version of this SNR drops precipitously toward zero at quasi-brick-wall limits on bit SNR that are numerically the same as the well-understood brick-wall limits for the standard additive white Gaussian noise (AWGN) channel. A second parameter is used to quantify the effects on capacity of one unique facet of the optical PPM channel (as compared with the standard AWGN channel) that causes the noise variance to be higher in signal slots than in nonsignal slots. This nonuniform noise variance yields interesting capacity effects even when the channel model is AWGN. A third parameter is used to measure the effects on capacity of the difference between an AWGN model and a non-Gaussian model proposed by Webb (see reference in [2]) for approximating the statistics of the APD-detected optical channel. Finally, a fourth parameter is used to quantify the blending of a Webb model with a pure AWGN model to account for thermal noise. Numerical results show that the capacity of M-ary orthogonal signaling on the Webb channel exhibits the same brick-wall Shannon limit, (M ln 2)/(M − 1), as on the AWGN channel (≈ −1.59 dB for large M). Results also compare the capacity obtained by hard- and soft-output channels and indicate that soft-output channels offer a 3-dB advantage.
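
The quoted brick-wall limit is easy to evaluate; a quick numerical check of (M ln 2)/(M − 1) in dB:

```python
import numpy as np

# Minimum bit SNR (the "brick-wall" limit) for M-ary orthogonal signaling.
for M in [2, 16, 256, 4096]:
    limit = M * np.log(2) / (M - 1)
    print(f"M = {M:4d}: Eb/N0 >= {10 * np.log10(limit):6.2f} dB")
# As M grows the limit approaches ln 2, i.e. about -1.59 dB, the familiar
# low-rate Shannon limit of the AWGN channel.
```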

78 citations


Proceedings ArticleDOI
27 Nov 2000
TL;DR: This paper presents the first algebraic method for constructing LDPC codes systematically, based on finite analytic geometries; four classes of finite-geometry LDPC codes with relatively good minimum distances are constructed, all of them cyclic or quasi-cyclic.
Abstract: Low density parity check (LDPC) codes with iterative decoding based on belief propagation (IDBP) achieve astonishing error performance close to the Shannon limit. Until now there has been no known method for constructing these Shannon limit approaching codes systematically. Good LDPC codes are largely generated by computer search. As a result, the encoding of long LDPC codes is in general very complex. This paper presents the first algebraic method for constructing LDPC codes systematically based on finite analytic geometries. Four classes of finite geometry LDPC codes with relatively good minimum distances are constructed. These codes are either cyclic or quasi-cyclic and therefore their encoding can be implemented with simple linear feedback shift registers. Long finite geometry LDPC codes have been constructed and they achieve an error performance only a few tenths of a dB away from the Shannon limit. Finite geometry LDPC codes are strong competitors to turbo codes for error control in communication and digital data storage systems.
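
The encoding advantage mentioned above comes from the cyclic structure: systematic encoding reduces to polynomial division over GF(2), which a linear feedback shift register performs one bit per clock. A minimal software sketch of that division; the (7,4) Hamming code used here is just a small illustrative cyclic code, not one of the paper's finite-geometry codes:

```python
def cyclic_encode(msg_bits, gen_poly):
    """Systematic encoding of a cyclic code via GF(2) polynomial division.

    gen_poly lists coefficients highest degree first,
    e.g. x^3 + x + 1 -> [1, 0, 1, 1]. In hardware this loop is exactly
    a linear feedback shift register clocked once per message bit.
    """
    r = len(gen_poly) - 1
    reg = list(msg_bits) + [0] * r          # msg(x) * x^r
    for i in range(len(msg_bits)):          # long division by gen(x)
        if reg[i]:
            for j, g in enumerate(gen_poly):
                reg[i + j] ^= g
    return list(msg_bits) + reg[-r:]        # message followed by parity bits

# (7,4) Hamming code, a cyclic code with generator x^3 + x + 1
print(cyclic_encode([1, 0, 0, 1], [1, 0, 1, 1]))   # -> [1, 0, 0, 1, 1, 1, 0]
```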

62 citations


Proceedings ArticleDOI
17 Apr 2000
TL;DR: This paper presents an overview of these coding schemes, then discusses the issues involved in building an LDPC decoder using reconfigurable hardware, and presents a hypothetical LDPC implementation using a commercial FPGA, which will give an idea of future research issues and performance gains.
Abstract: Error correcting codes (ECCs) are widely used in digital communications. New types of ECCs have been proposed which permit error-free data transmission over noisy channels at rates which approach the Shannon capacity. For wireless communication, these new codes allow more data to be carried in the same spectrum, lower transmission power, and higher data security and compression. One new type of ECC, referred to as Turbo Codes, has received a lot of attention, but is computationally expensive to decode and difficult to realize in hardware. Low density parity check codes (LDPCs), another ECC, also provide near Shannon limit error correction ability. However, LDPCs use a decoding scheme which is much more amenable to hardware implementation. This paper first presents an overview of these coding schemes, then discusses the issues involved in building an LDPC decoder using reconfigurable hardware. It presents a hypothetical LDPC implementation using a commercial FPGA, which will give an idea of future research issues and performance gains.
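
Part of what makes LDPC decoding amenable to hardware is that the check-node operation can be approximated using comparisons only, with no transcendental functions. A sketch of that update; the min-sum simplification shown is a common hardware choice rather than something this particular paper specifies:

```python
import numpy as np

def min_sum_check_update(llrs):
    """Check-node update of the min-sum LDPC decoding algorithm.

    Hardware-friendly: needs only sign logic and two running minima.
    Returns the extrinsic message sent back along each connected edge.
    """
    llrs = np.asarray(llrs, dtype=float)
    signs = np.sign(llrs)
    total_sign = np.prod(signs)
    mags = np.abs(llrs)
    order = np.argsort(mags)
    min1, min2 = mags[order[0]], mags[order[1]]
    # each edge sees the minimum over the *other* edges' magnitudes
    out = np.where(np.arange(len(llrs)) == order[0], min2, min1)
    # multiplying by the edge's own sign removes it from the sign product
    return total_sign * signs * out

print(min_sum_check_update([-1.5, 0.8, 2.3, -0.4]))   # [-0.4  0.4  0.4 -0.8]
```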

58 citations


Patent
08 Sep 2000
TL;DR: In this article, the data streams for transmission may be interleaved among the transmit antenna elements in order to reduce decision errors, and the receiver can select number and/or identity of receive antenna elements from among a larger group.
Abstract: Communication systems which employ multiple transmit and receive antenna-element arrays. Data streams for transmission may be interleaved among the transmit antenna elements in order to reduce decision errors. Turbo processing of equalizer output from a number of layers in a layered space-time processing architecture may be employed to reduce decision errors. Additionally, space-time equalization may be performed to maximize the signal-to-noise ratio, such as via minimum mean square error processing rather than zero forcing, in order to achieve the Shannon limit, reduce multi-path effects, and/or reduce intersymbol interference. Moreover, the receiver can select the number and/or identity of receive antenna elements from among a larger group in order to optimize performance of the system.
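
The MMSE-versus-zero-forcing distinction the abstract draws shows up as a single regularization term in the linear receive filter. A hedged numpy sketch under an idealized flat-fading model; the channel and noise variance below are illustrative, not from the patent:

```python
import numpy as np

def mmse_weights(H, noise_var):
    """Linear MMSE receive weights for y = H x + n (unit per-stream power).

    W = (H^H H + noise_var * I)^{-1} H^H. Unlike zero forcing (pinv(H)),
    the noise_var * I term keeps the filter from amplifying noise on
    poorly conditioned channels.
    """
    Nt = H.shape[1]
    return np.linalg.solve(H.conj().T @ H + noise_var * np.eye(Nt),
                           H.conj().T)

# toy 4x4 i.i.d. Rayleigh channel
rng = np.random.default_rng(7)
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
W = mmse_weights(H, noise_var=0.1)
print(np.round(W @ H, 2))   # close to identity, with controlled noise gain
```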

43 citations


01 Jan 2000
TL;DR: It is shown that there is no performance loss compared to the floating-point software approach, and that a lower-complexity, higher-speed implementation of the LDPC decoder can thus be achieved using a fixed-point DSP implementation.

Abstract: It has been shown earlier, and rediscovered recently, that low-density parity-check codes can achieve bit error rates near the Shannon limit. They can be decoded using a soft-decision iterative decoding scheme. As low-cost, high-performance DSPs are widespread, a DSP implementation of an LDPC decoder is a viable option. Floating-point implementation has higher costs, so a fixed-point DSP implementation is considered. Several optimal and sub-optimal implementations of the algorithm are considered and compared for the trade-off between performance loss and complexity. It is shown that there is no performance loss compared to the floating-point software approach. A lower-complexity, higher-speed implementation of the LDPC decoder can thus be achieved using a fixed-point DSP implementation.
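
The core of a fixed-point port is quantizing the decoder's LLR messages to a few bits with saturation. A toy sketch; the 6-bit format below is a commonly studied width, not necessarily the one this paper used:

```python
import numpy as np

def quantize_llr(llr, total_bits=6, frac_bits=2):
    """Uniform fixed-point quantizer for decoder LLR messages.

    Rounds to a grid of 2^-frac_bits and saturates symmetrically, which is
    what a fixed-point DSP datapath does implicitly. Widths are illustrative.
    """
    scale = 2 ** frac_bits
    max_q = 2 ** (total_bits - 1) - 1          # symmetric saturation level
    q = np.clip(np.round(llr * scale), -max_q, max_q)
    return q / scale                           # back to a 'real' LLR value

llrs = np.array([-9.7, -0.3, 0.12, 3.14159])
print(quantize_llr(llrs))    # [-7.75 -0.25  0.    3.25]
```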

18 citations


Proceedings ArticleDOI
25 Jun 2000
TL;DR: A Gaussian approximation is used for analyzing the sum-product algorithm for low-density parity-check (LDPC) codes and memoryless binary-input continuous-output additive white Gaussian noise channels to calculate the threshold quickly and to understand the behavior of the decoder better.
Abstract: We use a Gaussian approximation (GA) for analyzing the sum-product algorithm for low-density parity-check (LDPC) codes and memoryless binary-input continuous-output additive white Gaussian noise (AWGN) channels. This simplification allows us to calculate the threshold quickly and to understand the behavior of the decoder better. We have also designed high rate LDPC codes using the Gaussian approximation that have thresholds less than 0.05 dB from the Shannon limit.
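
For a regular (dv, dc) code, the Gaussian approximation collapses density evolution to a one-dimensional recursion on the message mean, via phi(m) = 1 − E[tanh(u/2)] with u ~ N(m, 2m). A Monte Carlo sketch of that recursion for the standard (3,6) ensemble; the sample size, tolerances, and test noise levels are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(3)
z = rng.standard_normal(200_000)      # fixed sample: phi becomes deterministic

def phi(m):
    """phi(m) = 1 - E[tanh(u/2)] with u ~ N(m, 2m), estimated by Monte Carlo."""
    if m <= 0:
        return 1.0
    return 1.0 - np.mean(np.tanh((m + np.sqrt(2 * m) * z) / 2))

def phi_inv(y, lo=1e-6, hi=100.0):
    for _ in range(60):               # phi is decreasing, so bisection works
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if phi(mid) > y else (lo, mid)
    return 0.5 * (lo + hi)

def converges(sigma, dv=3, dc=6, iters=300):
    """One-dimensional GA density evolution for a regular (dv, dc) code."""
    m0, m = 2 / sigma**2, 0.0         # mean channel LLR; initial message mean
    for _ in range(iters):
        m_new = phi_inv(1 - (1 - phi(m0 + (dv - 1) * m)) ** (dc - 1))
        if m_new > 50:                # means diverging: decoding succeeds
            return True
        if m_new - m < 1e-4:          # fixed point below threshold: failure
            return False
        m = m_new
    return False

for sigma in (0.80, 0.95):            # the GA threshold for (3,6) is near 0.87
    print(f"sigma = {sigma}: converges = {converges(sigma)}")
```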

16 citations


Proceedings ArticleDOI
22 Oct 2000
TL;DR: Numerical results show that the capacity of M-ary orthogonal signaling on the Webb channel exhibits the same brick-wall Shannon limit as on the AWGN channel, and that soft output channels offer a 3 dB advantage over hard output channels.
Abstract: This report defines the fundamental parameters affecting the capacity of a soft-decision optical channel, and relates them to corresponding parameters for the well-understood AWGN channel. For example, just as the performance on a standard additive white Gaussian noise (AWGN) channel is fully characterized by its SNR, a corresponding Webb channel is fully characterized by its SNR and a single additional skewness parameter δ² which depends on the photon detector. In fact, this Webb channel reduces to the standard AWGN channel when δ² → ∞. Numerical results show that the capacity of M-ary orthogonal signaling on the Webb channel exhibits the same brick-wall Shannon limit (M ln 2)/(M − 1) as on the AWGN channel (≈ −1.59 dB for large M), and that soft-output channels offer a 3-dB advantage over hard-output channels.

9 citations


Proceedings ArticleDOI
02 May 2000
TL;DR: In this paper, the capacity of an optical channel employing PPM and an APD detector is determined for the X2000 second delivery, where the detector output is characterized by a Webb-plus-Gaussian distribution, not a Poisson distribution.

Abstract: The capacity is determined for an optical channel employing pulse-position modulation (PPM) and an avalanche photodiode (APD) detector. This channel differs from the usual optical channel in that the detector output is characterized by a Webb-plus-Gaussian distribution, not a Poisson distribution. The capacity is expressed as a function of the PPM order, slot width, laser dead time, average number of incident signal and background photons received, and APD parameters. Based on a system using a laser and detector proposed for X2000 second delivery, numerical results provide upper bounds on the data rate and level of background noise that the channel can support while operating at a given BER. For the particular case studied, the capacity-maximizing PPM order is near 2048 for nighttime reception and 16 for daytime reception. Reed-Solomon codes can handle background levels 2.3 to 7.6 dB below the ultimate level that can be handled by codes operating at the Shannon limit.
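
The strong dependence of the optimal PPM order on background light reflects a basic trade: each M-ary PPM symbol spends one pulsed slot out of M to carry log2(M) bits. A purely illustrative sketch of that trade, not the paper's APD capacity computation:

```python
import numpy as np

# Each M-ary PPM symbol carries log2(M) bits using a single pulsed slot.
for M in [16, 256, 2048]:
    bits = np.log2(M)
    print(f"M = {M:5d}: {bits:4.1f} bits per pulse, {bits / M:.4f} bits per slot")
# Large M buys energy (photon) efficiency per pulse at the cost of slot and
# bandwidth efficiency, which is consistent with low-background (nighttime)
# conditions favoring much higher orders than high-background (daytime) ones.
```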

9 citations


Proceedings ArticleDOI
25 Jun 2000
TL;DR: An interactive concatenated turbo coding system is presented in which a Reed-Solomon outer code is concatenated with a binary turbo inner code, along with an effective criterion for stopping the iterative decoding process and a new reliability-based decoding algorithm, called the Chase-GMD algorithm, for nonbinary codes.
Abstract: Although turbo codes with iterative decoding have been shown to achieve bit-error rates (BER) close to the Shannon limit, they suffer from three disadvantages: a large decoding delay, an error floor at low BER, and a relatively poor frame error performance (FER). This paper presents an interactive concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit error and frame error performances. Also presented is an effective criterion for stopping the iterative decoding process and a new reliability-based decoding algorithm called the Chase-GMD algorithm for nonbinary codes.
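
A stopping rule of this kind can exploit the outer decoder: once the Reed-Solomon code accepts the inner decoder's current hard decisions, further turbo iterations are wasted. A generic sketch of that idea; the callables and the convergence test below are illustrative, not the paper's exact criterion:

```python
import numpy as np

def iterate_with_stopping(decode_iteration, outer_accepts, max_iter=10):
    """Run inner turbo iterations until the outer code accepts the word.

    decode_iteration() returns the current a-posteriori LLRs;
    outer_accepts(bits) plays the role of a Reed-Solomon decoding check.
    Also stops when hard decisions repeat (a cheap convergence test).
    """
    prev = None
    for i in range(1, max_iter + 1):
        bits = (decode_iteration() < 0).astype(int)
        if outer_accepts(bits):
            return bits, i                  # outer code satisfied: stop early
        if prev is not None and np.array_equal(bits, prev):
            return bits, i                  # decisions stabilized: stop
        prev = bits
    return prev, max_iter

# toy usage with stand-in callables
rng = np.random.default_rng(5)
batches = iter([rng.standard_normal(16) for _ in range(10)])
bits, n = iterate_with_stopping(lambda: next(batches),
                                lambda b: int(b.sum()) % 2 == 0)
print(f"stopped after {n} iterations")
```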

Proceedings ArticleDOI
22 Oct 2000
TL;DR: This paper presents a survey of turbo coding designs based on existing research journals and publications, and investigates the key design parameters for each coding scheme, such as choice of component codes, memory size, interleaver size, and the number of decoding iterations.
Abstract: Turbo codes were first proposed by Berrou and Glavieux (1993), and shown to have a near Shannon limit error correction capability. Since then, turbo codes have become the focus of research and study among the coding community. Turbo codes are particularly attractive for higher data rate applications, where the additional coding gain is necessary to maintain the link performance level with limited power. For instance, the Advanced EHF satellite system is a candidate for implementing turbo codes, which offer superior performance compared to the convolutional codes currently used in the Milstar system. This paper presents a survey of turbo coding designs based on existing research journals and publications. It investigates the key design parameters for each coding scheme, such as the choice of component codes, memory size, interleaver size, and the number of decoding iterations. In addition, it examines the trade-offs between improvement in code performance and the overall delay and computational complexity of the coding algorithm. This paper also presents bit error rate (BER) performance comparisons between different turbo code designs, in both additive white Gaussian noise and Rayleigh fading environments.

Book ChapterDOI
01 Jan 2000
TL;DR: The emphasis is on connecting coding theories for Hamming and Euclidean space and on future challenges, specifically in data networking, wireless communication, and quantum information theory.
Abstract: In 1948 Shannon developed fundamental limits on the efficiency of communication over noisy channels. The coding theorem asserts that there are block codes with code rates arbitrarily close to channel capacity and probabilities of error arbitrarily close to zero. Fifty years later, codes for the Gaussian channel have been discovered that come close to these fundamental limits. There is now a substantial algebraic theory of error-correcting codes with as many connections to mathematics as to engineering practice, and the last 20 years have seen the construction of algebraic-geometry codes that can be encoded and decoded in polynomial time, and that beat the Gilbert-Varshamov bound. Given the size of coding theory as a subject, this review is of necessity a personal perspective, and the focus is reliable communication, and not source coding or cryptography. The emphasis is on connecting coding theories for Hamming and Euclidean space and on future challenges, specifically in data networking, wireless communication, and quantum information theory.

Proceedings ArticleDOI
E.F.C. LaBerge
07 Oct 2000
TL;DR: The utility of turbo codes for aeronautical satellite communications is examined using a superset of block and convolutional codes familiar to most digital communications engineers.
Abstract: Turbo codes have performance that approaches the Shannon limit. The key to this near-optimum performance lies in the iterated decoding of the received data stream. Turbo codes are a superset of the block and convolutional codes familiar to most digital communications engineers. Despite their obvious advantages, however, turbo codes do have some limitations that make them more or less suited to specific applications. This paper examines the utility of turbo codes for aeronautical satellite communications. A brief introduction to turbo code principles is followed by a high-level examination of turbo codes and a presentation of simple examples of turbo code applications to aeronautical communications.

Journal ArticleDOI
TL;DR: A family of concatenated Hadamard codes is presented which performs close to the low-rate Shannon limit; a BER of ~10^-5 at Eb/N0 ≈ −1.15 dB has been observed using an interleaver length of 60000.

Abstract: A family of concatenated Hadamard codes is presented which performs close to the low-rate Shannon limit. A BER of ~10^-5 at Eb/N0 ≈ −1.15 dB has been observed using an interleaver length of 60000.
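
The component codebook here is built from Hadamard matrices, and the Sylvester construction makes it two lines of code. A small sketch; the concatenation and interleaving that give the near-limit performance are not shown:

```python
import numpy as np

def hadamard_codewords(k):
    """Rows of the 2^k x 2^k Sylvester Hadamard matrix as binary codewords.

    Any two distinct rows disagree in exactly 2^(k-1) positions, so these
    2^k words of length 2^k form a low-rate code with large minimum distance.
    """
    H = np.array([[1]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])   # Sylvester doubling step
    return (1 - H) // 2                   # map {+1, -1} to bits {0, 1}

C = hadamard_codewords(3)                 # 8 codewords of length 8
print(C)
```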

05 Oct 2000
TL;DR: These codes and modulation formats substantially close the large gap between state of the art systems and the Shannon limit by taking maximum advantage of inherent physical constraints on the communications link.
Abstract: We present new codes and modulation formats for the deep-space optical channel. By taking maximum advantage of inherent physical constraints on the communications link, these codes and modulation formats substantially close the large gap between state-of-the-art systems and the Shannon limit.

Journal ArticleDOI
15 Jul 2000
TL;DR: A receiver is proposed which is effective at extracting sampling instants in binary turbo coding; it is composed of a non-linear element followed by a notch filter matched to the data rate of the binary sequences.

Abstract: Turbo coding is a contemporary technique, first proposed in 1993, whose performance comes close to the Shannon limit. In binary turbo coding, a jitter problem arises because of phase distortion through the channel and the imperfect receiver filters, which cannot pick out the exact coding instants. In this paper, a receiver is proposed which is effective at extracting sampling instants. The receiver is composed of a non-linear element followed by a notch filter matched to the data rate of the binary sequences.
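
The proposed structure, a non-linearity that regenerates a spectral line at the data rate followed by a narrow filter that isolates it, is a classic timing-extraction arrangement. A hedged scipy sketch with a squarer and a bandpass filter standing in for the paper's non-linear element and notch filter; all rates and filter orders are illustrative:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs, rb, n_bits = 64_000, 1_000, 200      # sample rate, bit rate (illustrative)
sps = fs // rb
rng = np.random.default_rng(2)
nrz = np.repeat(2 * rng.integers(0, 2, n_bits) - 1, sps).astype(float)

# Receiver front end band-limits the waveform, rounding the bit transitions.
b, a = butter(4, 0.7 * rb / (fs / 2))
shaped = filtfilt(b, a, nrz)

squared = shaped**2                      # non-linearity: tone appears at rb Hz

# A narrow bandpass around the bit rate isolates the recovered clock tone.
b2, a2 = butter(2, [0.95 * rb / (fs / 2), 1.05 * rb / (fs / 2)], btype="band")
clock = filtfilt(b2, a2, squared)
print(f"recovered clock power ratio: {np.var(clock) / np.var(squared):.3f}")
```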

01 Jan 2000
TL;DR: A variance transfer function approach is presented to the analysis of the concatenation of these three codes, which captures the behavior of the component decoders in the overall iterative decoding system.
Abstract: Joint iterative decoding of multiple forward error control (FEC) encoded data streams is studied for linear multiple-access channels, such as CDMA. It is shown that such systems can be viewed as serially concatenated coding systems, and that iterative soft-decision decoding can be performed successfully. In order to achieve good power efficiency, powerful error control codes are used. These FEC codes are themselves serially concatenated. The overall system can be viewed as the concatenation of two error control codes with the linear multiple-access channel. We present a variance transfer function approach to the analysis of the concatenation of these three codes, which captures the behavior of the component decoders in the overall iterative decoding system. We show that this approach forms a methodology for designing the component codes as well as the iteration schedule. This is used to modify a serially concatenated FEC code designed for the AWGN channel to achieve performance close to the Shannon limit on the multiple-access channel.
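
A variance-transfer analysis of this kind tracks a single scalar, the residual interference variance, around the loop of detector and decoder. A schematic sketch with made-up monotone transfer curves, intended only to show the fixed-point iteration the method is built on (the paper's curves are measured decoder characteristics):

```python
import numpy as np

def detector_vt(v_in):
    # Illustrative transfer curve: residual variance out of the soft detector.
    return 0.6 * v_in / (1 + 2 * (1 - v_in))

def decoder_vt(v_in):
    # Illustrative transfer curve: residual variance out of the FEC decoder.
    return v_in**2 / (v_in + 0.1)

v = 1.0                  # start with no prior information (unit variance)
for it in range(20):
    v = decoder_vt(detector_vt(v))
    print(f"iter {it + 1}: residual variance {v:.6f}")
    if v < 1e-4:         # trajectory reached the decoding fixed point
        break
```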

Proceedings ArticleDOI
25 Jun 2000
TL;DR: Progress has occurred in establishing the optimal maximum likelihood performance of iteratively decodable very long block-length codes.
Abstract: As new classes of iteratively decodable very long block-length codes are discovered, they appear to have the potential of achieving extremely low error probabilities close to the Shannon limit. While the degree to which the suboptimal iterative decoding process degrades this performance is not yet well determined, progress has occurred in establishing the optimal maximum likelihood performance.