
Showing papers on "Noisy-channel coding theorem published in 2003"


Proceedings ArticleDOI
01 Dec 2003
TL;DR: This paper presents two classes of quasi-cyclic low-density parity-check codes that perform close to the Shannon limit.
Abstract: The paper presents two classes of quasi-cyclic low-density parity-check (LDPC) codes which perform close to the Shannon limit. The construction of these codes is based on decomposition of circulant matrices constructed from finite geometries.

78 citations
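The circulant-based construction above can be illustrated with a small sketch: a quasi-cyclic parity-check matrix is assembled from circulant permutation blocks. The block size and shift values below are illustrative placeholders, not taken from the paper's finite-geometry designs.

```python
import numpy as np

def circulant(N, shift):
    """N x N circulant permutation matrix: the identity with its
    columns cyclically shifted by `shift`."""
    return np.roll(np.eye(N, dtype=int), shift, axis=1)

# Toy quasi-cyclic parity-check matrix built from a 2x3 array of
# circulant blocks (shift values are illustrative, not from the paper).
shifts = [[0, 1, 2],
          [0, 2, 4]]
N = 5
H = np.block([[circulant(N, s) for s in row] for row in shifts])
print(H.shape)   # (10, 15): every row of H has weight 3, every column weight 2
```

Because each block contributes exactly one 1 per row and column, the row and column weights of H are fixed by the block layout, which is what makes such codes easy to store and encode.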


Journal ArticleDOI
TL;DR: In this article, two eight-state 7-bit soft-output Viterbi decoders matched to an EPR4 channel and a rate-8/9 convolutional code are implemented in a 0.18-μm CMOS technology.
Abstract: Two eight-state 7-bit soft-output Viterbi decoders matched to an EPR4 channel and a rate-8/9 convolutional code are implemented in a 0.18-μm CMOS technology. The throughput of the decoders is increased through architectural transformation of the add-compare-select recursion, with a small area overhead. The survivor-path decoding logic of a conventional register-exchange Viterbi decoder is adapted to detect the two most likely paths. The 4-mm² chip has been verified to decode at 500 Mb/s with a 1.8-V supply. These decoders can be used as constituent decoders for Turbo codes in high-performance applications requiring information rates very close to the Shannon limit.

63 citations
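The add-compare-select recursion mentioned above is the core of any Viterbi decoder: for each state, add the branch metric to each predecessor's path metric, compare, and keep the survivor. A minimal sketch on a toy two-state trellis (not the paper's eight-state EPR4 design):

```python
def acs_step(pm, transitions):
    """One add-compare-select update of the Viterbi recursion.

    pm: current path metric per state.
    transitions: for each next-state, a list of (prev_state, branch_metric)
    candidate transitions.  Returns the new metrics and, for each state,
    the surviving predecessor (toy illustration, not a hardware design).
    """
    new_pm, survivors = [], []
    for cands in transitions:
        best_prev, best_metric = min(
            ((p, pm[p] + bm) for p, bm in cands), key=lambda t: t[1])
        new_pm.append(best_metric)
        survivors.append(best_prev)
    return new_pm, survivors

# Two-state toy trellis: each state is reachable from both states.
pm = [0.0, 1.0]
transitions = [[(0, 0.5), (1, 0.2)],   # into state 0
               [(0, 1.0), (1, 0.1)]]   # into state 1
print(acs_step(pm, transitions))       # → ([0.5, 1.0], [0, 0])
```

The architectural transformations the paper refers to restructure exactly this recursion, since its feedback loop limits the decoder's clock rate.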


Proceedings ArticleDOI
01 Dec 2003
TL;DR: It is shown that the proposed coding scheme for holographic memories has an easy design procedure and results in efficient codes, and the capacity bound and the stability condition for the proposed codes over the binary erasure channel are derived.
Abstract: We propose a technique for designing low-density parity-check (LDPC) codes over non-uniform channels. In particular, we investigate LDPC codes for volume holographic memory (VHM) systems. We show that the proposed coding scheme for holographic memories has an easy design procedure and results in efficient codes. An important property of the proposed technique is that we can design simple codes whose performance is close to the Shannon limit while also remaining very good in terms of the error floor. We also derive a capacity bound and the stability condition for the proposed codes over the binary erasure channel. We briefly discuss other applications such as punctured codes, OFDM systems and multilevel coding.

23 citations
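Over the binary erasure channel mentioned in the abstract, LDPC decoding reduces to a simple peeling procedure: any parity check with exactly one erased bit pins that bit down, and the process repeats. A toy sketch with a hypothetical 3x6 parity-check matrix (not a code from the paper):

```python
def peel_bec(H, y):
    """Fill erasures (None) using checks that have exactly one erased bit."""
    y = list(y)
    progress = True
    while progress:
        progress = False
        for row in H:
            idx = [j for j, h in enumerate(row) if h]
            erased = [j for j in idx if y[j] is None]
            if len(erased) == 1:
                j = erased[0]
                # the parity of the known bits determines the erased one
                y[j] = sum(y[k] for k in idx if k != j) % 2
                progress = True
    return y

# Toy 3x6 parity-check matrix and received word (illustrative only)
H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [0, 0, 1, 1, 0, 1]]
rx = [1, None, 0, None, 1, 0]   # codeword [1,1,0,0,1,0] with two erasures
print(peel_bec(H, rx))          # → [1, 1, 0, 0, 1, 0]
```

Decoding fails only when every remaining check covers two or more erasures, which is what the stability condition and capacity bound in the abstract characterize.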


Proceedings ArticleDOI
11 May 2003
TL;DR: The AITC-UWB-IR system using low-rate turbo codes improves the BER on an AWGN channel and a real Gaussian fading channel; for the same bit rate, lowering the turbo-code rate while reducing the number of repetitions achieves a better BER.
Abstract: As a new spread spectrum system, ultra wideband-impulse radio (UWB-IR) has attracted much attention in high-speed indoor multiple-access radio communications. Since the UWB-IR system repeats and transmits pulses for each bit, it can be considered a coded scheme with a simple repetition block code. As an error-correcting code, turbo codes, proposed by C. Berrou et al. in 1993, have attracted much attention beyond the field of coding theory, because their performance is very close to the Shannon limit with practical decoding complexity. In this paper, we propose an adaptive internally turbo-coded UWB-IR (AITC-UWB-IR) system. The proposed system employs a turbo code in addition to a repetition block code and shares the transmission bandwidth adaptively between them. We evaluate the performance of the AITC-UWB-IR by theoretical analysis and computer simulation. From our numerical and simulation results, we show that the BER of the AITC-UWB-IR is superior to that of the SOC-UWB-IR on an AWGN channel when the bit rate is high. We also show that the AITC-UWB-IR system using low-rate turbo codes improves the BER on an AWGN channel and a real Gaussian fading channel. Consequently, we confirm that, for the same bit rate, lowering the code rate of the turbo code while reducing the number of repetitions of the repetition code achieves a better BER.

22 citations
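The repetition-plus-majority structure that UWB-IR implicitly builds on can be sketched in a few lines; the repetition factor of 5 below is arbitrary, chosen only for illustration.

```python
def rep_encode(bits, n):
    """Repeat each bit n times (each repetition maps to one UWB pulse)."""
    return [b for b in bits for _ in range(n)]

def rep_decode(chips, n):
    """Majority vote over each group of n received chips."""
    return [int(sum(chips[i:i + n]) > n // 2)
            for i in range(0, len(chips), n)]

bits = [1, 0, 1]
tx = rep_encode(bits, 5)
rx = tx[:]
rx[1] ^= 1; rx[7] ^= 1          # flip one chip in two different bit groups
print(rep_decode(rx, 5))        # → [1, 0, 1]: both errors are voted out
```

A rate-1/n repetition code corrects up to (n-1)//2 chip errors per bit but gains no coding advantage beyond that, which is why the paper reallocates some of that bandwidth to a turbo code.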


Proceedings ArticleDOI
22 Apr 2003
TL;DR: A concatenation scheme of low-density parity-check (LDPC) codes and STBC with multiple transmit antennas is proposed and it is shown that when the block length is relatively large, the error rate performance of the LDPC codes is better than that of the turbo codes with almost identical code rate and block length.
Abstract: Space-time transmit diversity (STTD) and space-time block coding (STBC) are attractive techniques for high-bit-rate, high-capacity transmission. A concatenation scheme of turbo codes and STBC (turbo-STBC) was previously proposed, and it has been shown that the turbo-STBC can achieve good error rate performance. Recently, low-density parity-check (LDPC) codes have attracted much attention as good error-correcting codes that, like turbo codes, achieve near-Shannon-limit performance. The decoding algorithm of LDPC codes is less complex than that of turbo codes. Furthermore, it has been shown that when the block length is relatively large, the error rate performance of LDPC codes is better than that of turbo codes with almost identical code rate and block length. In this paper, we propose a concatenation scheme of LDPC codes and STBC with multiple transmit antennas, which we refer to as the LDPC-STBC. We evaluate the frame error rate (FER) of the LDPC-STBC with multiple transmit antennas in a flat Rayleigh fading channel by computer simulation. We show that the FER of the LDPC-STBC is worse than that of the turbo-STBC in a quasi-static Rayleigh fading channel, while that of the LDPC-STBC is better than that of the turbo-STBC in a flat Rayleigh fading channel. Furthermore, we show that the FER of the LDPC-STBC without a channel interleaver (CI) is better than that of the turbo-STBC with CI in a fast fading channel.

20 citations
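As a concrete instance of STBC, the classic two-antenna Alamouti code (a standard example, not necessarily the exact code used in the paper) maps each symbol pair onto two antennas over two time slots so that the transmit matrix has orthogonal rows, which is what allows simple linear combining at the receiver:

```python
import numpy as np

def alamouti_encode(s):
    """Alamouti G2 space-time block code.

    Maps consecutive symbol pairs (s1, s2) to a 2 x 2 block:
    slot 1 sends (s1, s2) from antennas (1, 2); slot 2 sends (-s2*, s1*).
    Returns an array of shape (2, T): row = antenna, column = time slot.
    """
    blocks = []
    for s1, s2 in zip(s[0::2], s[1::2]):
        blocks.append(np.array([[s1, -np.conj(s2)],
                                [s2,  np.conj(s1)]]))
    return np.hstack(blocks)

syms = np.array([1 + 1j, 1 - 1j])
X = alamouti_encode(syms)
# Orthogonality: X @ X^H = (|s1|^2 + |s2|^2) * I, here 4 * I
print(X @ X.conj().T)
```

This orthogonality gives full transmit diversity at rate one, and an outer LDPC or turbo code is then concatenated on top of it as in the paper.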


01 Jan 2003
TL;DR: This dissertation finds that analog error control decoders have quite regular structures and can be built using a small number of basic cells from a cell library, facilitating automatic synthesis, and presents the cell library and how to automatically synthesize analog decoders from a factor graph description.
Abstract: In order to reach the Shannon limit, researchers have found more efficient error control coding schemes. However, the computational complexity of such error control coding schemes is a barrier to implementing them. Recently, researchers have found that bioinspired analog network decoding is a good approach with better combined power/speed performance than its digital counterparts. However, the lack of CAD (computer aided design) tools makes the analog implementation quite time consuming and error prone. Meanwhile, the performance loss due to the nonidealities of the analog circuits has not been systematically analyzed. Also, how to organize analog circuits so that the nonideal effects are minimized has not been discussed. In designing analog error control decoders, simulation is a time-consuming task because the bit error rate is quite low at high SNR (signal to noise ratio), requiring a large number of simulations. By using high-level VHDL simulations, the simulation is done both accurately and efficiently. Many researchers have found that error control decoders can be interpreted as operations of the sum-product algorithm on probability propagation networks, which is a kind of factor graph. Of course, analog error control decoders can also be described at a high-level using factor graphs. As a result, an automatic simulation tool is built. From its high-level factor graph description, the VHDL simulation files for an analog error control decoder can be automatically generated, making the simulation process simple and efficient. After analyzing the factor graph representations of analog error control decoders, we found that analog error control decoders have quite regular structures and can be built by using a small number of basic cells in a cell library, facilitating automatic synthesis. This dissertation also presents the cell library and how to automatically synthesize analog decoders from a factor graph description. 
All substantial nonideal effects of the analog circuit are also discussed in the dissertation. How to organize the circuit to minimize these effects and make the circuit optimized in a combined consideration of speed, performance, and power is also provided.

20 citations


Proceedings ArticleDOI
15 Jun 2003
TL;DR: A method for optimizing the average spectral efficiency of an ACM system is presented, showing that only a small number of optimally designed codes is needed to yield throughput close to the Shannon limit.
Abstract: Adaptive coded modulation (ACM) is a promising tool for transmission in a fading environment. The main motivation for employing ACM schemes is to improve the spectral efficiency of wireless communications. In this paper, we present a method for optimizing the average spectral efficiency of an ACM system. One important result of this work is that only a small number of optimally designed codes is needed to yield a throughput close to the Shannon limit.

18 citations
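The idea of covering the SNR range with a handful of codes can be sketched as a simple rate lookup compared against Shannon capacity. The rate/threshold pairs below are hypothetical placeholders, not the paper's optimized designs:

```python
import math

# Hypothetical ACM code set: (spectral efficiency in bits/s/Hz,
# required SNR threshold in dB).  Illustrative values only.
codes = [(0.5, 0.0), (1.0, 3.0), (2.0, 8.0), (3.0, 13.0)]

def acm_rate(snr_db):
    """Highest supported rate at this SNR; 0.0 means outage (no code fits)."""
    feasible = [r for r, thr in codes if snr_db >= thr]
    return max(feasible, default=0.0)

for snr_db in (2, 10, 15):
    capacity = math.log2(1 + 10 ** (snr_db / 10))   # Shannon limit, bits/s/Hz
    print(snr_db, acm_rate(snr_db), round(capacity, 2))
```

Averaging `acm_rate` over the fading distribution of the SNR gives the average spectral efficiency that the paper's method optimizes; even four well-placed thresholds track the capacity curve fairly closely.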


Proceedings ArticleDOI
15 Sep 2003
TL;DR: A class of deterministic interleavers for turbo codes based on permutation polynomials over Z_N is introduced; near-Shannon-limit performance can be achieved with these interleavers for large frame sizes.
Abstract: In this paper, we introduce a class of deterministic interleavers for turbo codes based on permutation polynomials over Z_N. A search for interleavers has been performed based on a subset of error events with input weight 2m, and the simulated performance is compared with S-random interleavers and quadratic interleavers. A typical turbo code (TC) is constructed by parallel concatenating two convolutional codes via an interleaver. The design of the interleaver is critical to the performance of the turbo code. Interleavers for TCs can in general be separated into random interleavers and deterministic interleavers. The basic random interleaver permutes the information bits in a pseudo-random manner. Near-Shannon-limit performance can be achieved with these interleavers for large frame sizes. The S-random interleaver proposed in (3) is an improvement to the random interleaver. Deterministic interleavers are constructed using deterministic rules. In (4), quadratic interleavers were introduced. Quadratic interleavers achieve the average performance of random interleavers. However, they are still not as good as S-random interleavers. We propose another class of deterministic interleavers, based on permutation polynomials (PP) over Z_N. A PP P(x) = a_0 + a_1 x + ... + a_m x^m over Z_N is a polynomial with integer coefficients such that, when computed modulo N, it permutes the elements of Z_N.

11 citations
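Whether a candidate polynomial actually permutes Z_N is easy to verify by brute force. A sketch (the quadratic example below is a standard permutation polynomial, not necessarily one of the interleavers found in the paper):

```python
def is_permutation_poly(coeffs, N):
    """True if P(x) = sum(a_i * x**i), computed mod N, permutes {0,...,N-1}.

    coeffs: [a_0, a_1, ..., a_m] with integer entries.
    """
    image = {sum(a * pow(x, i, N) for i, a in enumerate(coeffs)) % N
             for x in range(N)}
    return len(image) == N   # injective on a finite set => a permutation

# P(x) = x + 2x^2 permutes Z_8, so it defines a valid interleaver of length 8
print(is_permutation_poly([0, 1, 2], 8))   # → True
# P(x) = 2x does not: it only reaches the even residues
print(is_permutation_poly([0, 2], 8))      # → False
```

In an interleaver, position x of the input frame is mapped to position P(x) mod N, so the permutation property is exactly what guarantees no two bits collide.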


Journal ArticleDOI
TL;DR: For a discrete-time, binary-input Gaussian channel with finite intersymbol interference, it is proved that reliable communication can be achieved if, and only if, E_b/N_0 > (log 2)/G_opt, for some constant G_opt that depends on the channel.
Abstract: For a discrete-time, binary-input, Gaussian channel with finite intersymbol interference, we prove that reliable communication can be achieved if, and only if, E_b/N_0 > (log 2)/G_opt, for some constant G_opt that depends on the channel. To determine this constant, we consider the finite-state machine which represents the output sequences of the channel filter when driven by binary inputs. We then define G_opt as the maximum output power achieved by a simple cycle in this graph, and show that no other cycle or asymptotically long sequence can achieve an output power greater than this. We provide examples where the binary input constraint leads to suboptimality, and other cases where binary signaling is just as effective as real signaling at very low signal-to-noise ratios.

10 citations
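Reading the abstract's "log 2" as the natural logarithm, the threshold can be evaluated directly; that reading is my assumption, made because G_opt = 1 (no ISI gain) then recovers the classic -1.59 dB Shannon limit.

```python
import math

def ebno_limit_db(g_opt):
    """Minimum Eb/N0 in dB for reliable communication: (ln 2)/G_opt,
    per the paper's threshold (interpreting its log 2 as natural log)."""
    return 10 * math.log10(math.log(2) / g_opt)

print(round(ebno_limit_db(1.0), 2))   # → -1.59, the classic ISI-free limit
print(round(ebno_limit_db(2.0), 2))   # a channel whose best cycle doubles power
```

A channel whose best simple cycle boosts output power (G_opt > 1) lowers the required Eb/N0 by 10*log10(G_opt) dB relative to the ISI-free case.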


Proceedings ArticleDOI
04 May 2003
TL;DR: A family of simple analog circuits used to implement soft-output decoding algorithms will be discussed and challenges in the design of analog decoders, such as mismatch and input interfaces will be addressed.
Abstract: Since their introduction in 1993, turbo codes and iterative decoding have made a significant impact in the area of coding theory by providing for the first time near Shannon limit decoding at practical hardware complexity levels. Turbo codes and other iteratively decoded codes have recently been incorporated into several digital communications standards such as DVB-RCS, DVB-RCT, and 3GPP. Because of the iterative nature of the decoding algorithm, turbo decoders are prone to long decoding latency and large power consumption. For these reasons, much research has been directed toward developing novel turbo decoder architectures in order to make them viable for power and speed-conscious applications such as wireless products. This paper presents a review of analog iterative decoding techniques. A family of simple analog circuits used to implement soft-output decoding algorithms will be discussed. Challenges in the design of analog decoders, such as mismatch and input interfaces will also be addressed. Finally, a survey of existing analog decoder integrated circuits will be presented.

8 citations


Patent
28 Aug 2003
TL;DR: In this article, an encoder and a decoder are provided that perform high-performance encoding and decoding closer to the theoretical limit: the encoder comprises an outer code with encoding rate k/p, interleavers that reorder the encoded p-bit data series, and an inner code with encoding rate p/n.
Abstract: PROBLEM TO BE SOLVED: To provide an encoder and a decoder that perform high-performance encoding and decoding closer to the theoretical limit. SOLUTION: The encoder includes an outer code that performs encoding at rate k/p, interleavers that permute and reorder the encoded p-bit data series, and an inner code that performs encoding at rate p/n. When an EXIT chart is drawn relative to the outer code, the inner code has a single intersection with it in the region where the output mutual information of the outer code is at least 0.9 and below 1.0, at an E_b/N_0 value no more than 0.4 dB above the Shannon limit for binary phase-shift keying. COPYRIGHT: (C)2005,JPO&NCIPI

DissertationDOI
01 Jan 2003
TL;DR: Log-MAP based turbo decoding offers the best compromise among the different turbo decoding algorithms investigated in this thesis and is verified by comparing its simulation results with those obtained from a behavioral model of the same turbo decoder written in C language.
Abstract: Turbo coding is one of the most significant achievements in coding theory during the last decade. It has been shown in the literature that transmission systems employing turbo codes could achieve a performance close to the Shannon limit. Turbo decoding is the major contributor to the overall complexity of turbo coding. Therefore, the challenge is to implement turbo coding in various communications systems at affordable decoding complexity using current VLSI technology. Four different turbo decoding algorithms were investigated in this thesis. Comparisons on both their performances and implementation complexities were performed. Log-MAP based turbo decoding offers the best compromise among the different turbo decoding algorithms. A Register-Transfer-Level (RTL) fixed-point turbo decoder model based on Log-MAP algorithm was designed and simulated using VHDL as the hardware description language. The RTL model was verified by comparing its simulation results with those obtained from a behavioral model of the same turbo decoder written in C language.
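The Log-MAP algorithm's key primitive is the Jacobian logarithm (the max-star operation); Max-Log-MAP simply drops its correction term, which is the complexity/performance trade-off such a comparison weighs. A minimal sketch:

```python
import math

def max_star(a, b):
    """Jacobian logarithm used in Log-MAP: computes ln(e^a + e^b)
    exactly as a max plus a bounded correction term.
    Max-Log-MAP approximates this as just max(a, b)."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

# Equal inputs give the largest correction: ln(2e) = 1 + ln 2
print(round(max_star(1.0, 1.0), 4))   # → 1.6931
# Widely separated inputs make the correction negligible
print(round(max_star(10.0, 0.0), 4))  # ≈ 10.0
```

In hardware, the correction term is typically read from a small lookup table on |a - b|, which is how Log-MAP keeps near-MAP performance at close to Max-Log-MAP cost.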

Proceedings ArticleDOI
13 Oct 2003
TL;DR: In this paper, the authors describe an experimental implementation of an interference avoidance waveform, suitable for mobile or fixed wireless channels, and comprised of orthogonal signaling using wavelet packets in cohesive combination with a multidimensional channel error coding technique.
Abstract: This paper describes an experimental implementation of an interference avoidance waveform, suitable for mobile or fixed wireless channels, comprising orthogonal signaling using wavelet packets in cohesive combination with a multidimensional channel error coding technique. The patent-pending adaptive waveform, known as circular simplex turbo block coded wavelet packet modulation (CSTBC-WPM), possesses an unrivaled time-frequency localization and agility capability for avoiding joint narrowband/impulsive interference patterns. Excision of residual interference at the receiver is facilitated by isolation to a sparse number of time-frequency cells and removal using energy thresholding. Moreover, the time dilation of symbols in the WPM subbands is useful in mitigating the adverse effects of multipath-induced, frequency-selective fading. For an even more potent countermeasure to non-Gaussian interference sources and channel propagation anomalies, the patent-pending CSTBC forward error correction component is distinctly mapped onto the orthogonally multiplexed WPM symbols and interleaved to exploit the subband frequency diversity. The shorter block sizes of CSTBC provide a bit error rate performance competitive with turbo product coding's large code blocks, approaching the Shannon limit but with considerably lower latency (up to a 20-fold improvement).


Proceedings ArticleDOI
15 Oct 2003
TL;DR: The performance of turbo codes is studied using a new modified technique for UPA (unequal power allocation) to optimize the power allocation for each bit stream in the 3G wireless communication system.
Abstract: Turbo codes are part of the standardization for the 3G wireless communication system. They have been chosen for their error-correcting ability to approach the Shannon limit. In a standard turbo coding system, all bits are transmitted with equal energy. This strategy does not guarantee optimum allocation. UPA (unequal power allocation) has been adopted to optimize the power allocation for each bit stream. Using a new modified technique for UPA, the performance of turbo codes is studied.

Proceedings ArticleDOI
20 Oct 2003
TL;DR: A cross-layer dual adaptive coded modulation architecture using turbo codes for mobile multimedia communication is proposed, which adapts to both the varying channel characteristics and the QoS of various multimedia services simultaneously to increase the average system throughput substantially.
Abstract: We propose a cross-layer dual adaptive coded modulation architecture using turbo codes for mobile multimedia communication, which adapts to both the varying channel characteristics and the QoS of various multimedia services simultaneously to increase the average system throughput substantially. A pragmatic channel-adaptive turbo coded modulation scheme, which comes within 2.5 dB of the Shannon limit, is optimally designed, and then a QoS-adaptive scheme is superimposed to build the dual adaptive architecture. Simulation results show that the novel dual adaptation reduces the gap from the fading channel capacity to 2 dB when different services occur with equal probability and service durations follow an exponential distribution.

Book ChapterDOI
TL;DR: This paper will present the operation of a multifunctional turbo-based receiver, whose structure is reminiscent of a data-aided turbo synchronizer, both of which are described in detail.
Abstract: A multifunctional system comprising a turbo decoder, low complexity turbo equalizer and data-aided frame synchronizer is developed to compensate for frequency-selective channels with fading. The turbo codes are based on Partial Unit Memory Codes, which may be constructed with higher free distance than equivalent recursive systematic convolutional codes and can achieve better performance than the latter. The purely digital multifunctional receiver is targeted for end-applications such as combat-radio and radio-relay, providing robust communication at low signal-to-noise ratio with performance approaching the Shannon limit. This paper will present the operation of a multifunctional turbo-based receiver, whose structure is reminiscent of a data-aided turbo synchronizer, both of which are described in detail.

Proceedings ArticleDOI
22 Apr 2003
TL;DR: Through simulation results, it is shown that the performance of PAA codes with multiple code rates is close to the Shannon limit over the AWGN channel.
Abstract: In this paper we introduce multi-rate PAA (parity accumulator accumulator) codes obtained from replacing single parity check codes with multi-rate cycle-free codes in the conventional PAA codes. They can be easily decoded by applying the sum-product algorithm to their factor graph. Using the Gaussian approximation, we investigate the noise thresholds of multi-rate PAA codes on the additive white Gaussian noise (AWGN) channel. Through simulation results, it is shown that the performance of PAA codes with multiple code rates is close to the Shannon limit over the AWGN channel.

01 Dec 2003
TL;DR: Results indicate that the quality of the watermark is improved greatly and the proposed system based on LDPC codes is very robust to attacks.
Abstract: With the rapid growth of internet technologies and the wide availability of multimedia computing facilities, the enforcement of multimedia copyright protection becomes an important issue. Digital watermarking is viewed as an effective way to deter content users from illegal distribution. In recent years, digital watermarking has been intensively studied to achieve this goal. However, when the watermarked media is transmitted over channels modeled as the additive white Gaussian noise (AWGN) channel, the watermark information is often interfered with by the channel noise, producing a large number of errors. Many error-correcting codes have therefore been applied in digital watermarking systems to protect the embedded message from the disturbance of the noise, such as BCH codes, Reed-Solomon (RS) codes and turbo codes. Recently, low-density parity-check (LDPC) codes were demonstrated to be good error-correcting codes achieving near-Shannon-limit performance and outperforming turbo codes with low decoding complexity. In this paper, in order to mitigate the channel conditions and improve the quality of the watermark, we propose the application of LDPC codes in implementing a fairly robust digital image watermarking system. The implemented watermarking system operates in the spectrum domain, where a subset of the discrete wavelet transform (DWT) coefficients is modified by the watermark without using the original image during watermark extraction. The quality of the watermark is evaluated by taking into account the trade-off between the chip rate and the rate of the LDPC codes. Many simulation results are presented in this paper; these results indicate that the quality of the watermark is improved greatly and that the proposed system based on LDPC codes is very robust to attacks.