scispace - formally typeset

Showing papers on "Noisy-channel coding theorem published in 2007"


Journal ArticleDOI
30 Jul 2007
TL;DR: The contributions that have led to the most significant improvements in performance versus complexity for practical applications are focused on, particularly on the additive white Gaussian noise channel.
Abstract: Starting from Shannon's celebrated 1948 channel coding theorem, we trace the evolution of channel coding from Hamming codes to capacity-approaching codes. We focus on the contributions that have led to the most significant improvements in performance versus complexity for practical applications, particularly on the additive white Gaussian noise channel. We discuss algebraic block codes, and why they did not prove to be the way to get to the Shannon limit. We trace the antecedents of today's capacity-approaching codes: convolutional codes, concatenated codes, and other probabilistic coding schemes. Finally, we sketch some of the practical applications of these codes.
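For reference, the "Shannon limit" these surveys measure against has a simple closed form on the band-limited AWGN channel: setting capacity equal to the code rate R in C = 0.5·log2(1 + 2R·Eb/N0) gives the minimum Eb/N0 for reliable transmission. A minimal sketch (the function name is ours):

```python
import math

def shannon_limit_ebno_db(rate):
    """Minimum Eb/N0 (in dB) for reliable transmission at code rate `rate`
    on the band-limited AWGN channel, from C = 0.5*log2(1 + 2*R*Eb/N0)
    with C = R, i.e. Eb/N0 = (2**(2*R) - 1) / (2*R)."""
    ebno = (2 ** (2 * rate) - 1) / (2 * rate)
    return 10 * math.log10(ebno)

print(shannon_limit_ebno_db(0.5))   # 0.0 dB for a rate-1/2 code
print(shannon_limit_ebno_db(1e-6))  # approaches -1.59 dB as rate -> 0
```

The often-quoted ultimate limit of −1.59 dB is the low-rate end of this curve.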

390 citations


Journal ArticleDOI
TL;DR: Two algebraic methods for systematic construction of structured regular and irregular low-density parity-check (LDPC) codes with girth of at least six and good minimum distances are presented, based on geometry decomposition and a masking technique.
Abstract: Two algebraic methods for systematic construction of structured regular and irregular low-density parity-check (LDPC) codes with girth of at least six and good minimum distances are presented. These two methods are based on geometry decomposition and a masking technique. Numerical results show that the codes constructed by these methods perform close to the Shannon limit and as well as random-like LDPC codes. Furthermore, they have low error floors and their iterative decoding converges very fast. The masking technique greatly simplifies the random-like construction of irregular LDPC codes designed on the basis of the degree distributions of their code graphs.

159 citations


Journal ArticleDOI
TL;DR: The pioneering work of Shannon provides fundamental bounds on the rate limitations of communicating information reliably over noisy channels, as well as the compressibility of data subject to distortion constraints, which have witnessed dramatic advances in practical constructions and algorithms.
Abstract: The pioneering work of Shannon provides fundamental bounds on the rate limitations of communicating information reliably over noisy channels (the channel coding problem), as well as the compressibility of data subject to distortion constraints (the lossy source coding problem). However, Shannon's theory is nonconstructive in that it only establishes the existence of coding schemes that can achieve the fundamental bounds but provides neither concrete codes nor computationally efficient algorithms. In the case of channel coding, the past two decades have witnessed dramatic advances in practical constructions and algorithms, including the invention of turbo codes and the surge of interest in low-density parity check (LDPC) codes. Both these classes of codes are based on sparse graphs and yield excellent error-correction performance when decoded using computationally efficient methods such as the message-passing sum-product algorithm. Moreover, their performance limits are well characterized, at least in the asymptotic limit of large block lengths, via the density evolution method.
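The message-passing idea behind these sparse-graph codes can be illustrated with its simplest hard-decision cousin, Gallager's bit-flipping decoder; the tiny parity-check matrix below is ours for illustration (a real LDPC matrix is far larger and sparser, and practical decoders use the soft sum-product algorithm the abstract mentions):

```python
import numpy as np

# Toy parity-check matrix, invented here purely for illustration.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

def bit_flip_decode(H, r, max_iter=10):
    """Gallager-style hard-decision bit flipping: repeatedly flip the bit
    involved in the most unsatisfied parity checks until the syndrome
    H*r (mod 2) is zero."""
    r = r.copy()
    for _ in range(max_iter):
        syndrome = H @ r % 2
        if not syndrome.any():
            break
        counts = H.T @ syndrome        # unsatisfied checks touching each bit
        r[np.argmax(counts)] ^= 1      # flip the worst offender
    return r

codeword = np.array([0, 0, 0, 0, 0, 0])
received = codeword.copy()
received[2] ^= 1                       # inject a single bit error
print(bit_flip_decode(H, received))    # recovers the all-zero codeword
```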

32 citations


Journal ArticleDOI
TL;DR: The design of adaptable as well as efficient LDPC decoders with low bit-error rate (BER) in low signal-to-noise ratio (SNR) channels for CR environments is discussed.
Abstract: With the rapid growth of multimedia communication systems during the last decade, there has been an increasing demand for improved Error Correcting Code (ECC) technology to enable communication systems to achieve reliable transmission over noisy channels. Low Density Parity Check (LDPC) codes are the best known ECC codes that can achieve data rates very close to the Shannon limit. In addition, superior error correction performance and parallelizable decoding algorithms have made LDPC codes a powerful competitor to turbo codes for reliable high speed communication applications. Recently, Cognitive Radio (CR) has been proposed as a promising technology to solve today's spectrum scarcity problem. CR promises to alleviate this spectrum shortage by dynamically accessing free spectrum resources. This implies that the radio has to work in multiple bands, cope with various wireless channels and support various services such as voice, data and video. The basic requirement for CR is a reconfigurable architecture to support multi-band and frequency adaptive operations. One of the ambitious design goals of future wireless systems, including 4G and the IEEE 802.11n/802.16 standards, is to reliably provide very high data rate transmission in hostile environments: for example, around 100 Mb/s peak rate for downlink and around 30 Mb/s sum rate for uplink transmission with a low frame error rate (FER), typically less than 5×10⁻⁴. To ensure reliable and error-free communication, there is a need to consider implementing LDPC decoders in CR and frequency agile environments. In this article we discuss the design of adaptable as well as efficient LDPC decoders with low bit-error rate (BER) in low signal-to-noise ratio (SNR) channels for CR environments.

13 citations


01 Jan 2007
TL;DR: The contributions that have led to the most significant improvements in performance versus complexity for practical applications are focused on, particularly on the additive white Gaussian noise channel.
Abstract: Starting from Shannon's celebrated 1948 channel coding theorem, we trace the evolution of channel coding from Hamming codes to capacity-approaching codes. We focus on the contributions that have led to the most significant improvements in performance versus complexity for practical applications, particularly on the additive white Gaussian noise channel. We discuss algebraic block codes, and why they did not prove to be the way to get to the Shannon limit. We trace the antecedents of today's capacity-approaching codes: convolutional codes, concatenated codes, and other probabilistic coding schemes. Finally, we sketch some of the practical applications of these codes.

13 citations


Journal ArticleDOI
TL;DR: It turns out that the estimation accuracy is very close to the theoretical limits up to relatively low signal-to-noise ratios, which makes the algorithm well suited for turbo-coded transmissions operating near the Shannon limit.
Abstract: This paper investigates the joint maximum likelihood (ML) estimation of the carrier frequency offset, timing error, and carrier phase in burst-mode satellite transmissions over an AWGN channel. The synchronization process is assisted by a training sequence appended in front of each burst and composed of alternating binary symbols. The use of this particular pilot pattern results in an estimation algorithm of affordable complexity that operates in a decoupled fashion. In particular, the frequency offset is measured first and independently of the other parameters. Timing and phase estimates are subsequently computed through simple closed-form expressions. The performance of the proposed scheme is investigated by computer simulation and compared with Cramer-Rao bounds. It turns out that the estimation accuracy is very close to the theoretical limits down to relatively low signal-to-noise ratios. This makes the algorithm well suited for turbo-coded transmissions operating near the Shannon limit.
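The data-aided frequency step can be sketched with a generic delay-and-multiply estimator (a standard textbook technique, not necessarily the paper's exact ML algorithm; the pilot length, offset, and sampling interval below are invented for the demo):

```python
import numpy as np

def freq_estimate(rx, pilot, ts):
    """Data-aided frequency offset estimate: strip the known pilot
    modulation, then take the angle of the lag-1 autocorrelation of
    the result (delay-and-multiply)."""
    z = rx * np.conj(pilot)                  # remove pilot modulation
    acc = np.sum(z[1:] * np.conj(z[:-1]))    # average phase increment
    return np.angle(acc) / (2 * np.pi * ts)  # radians/sample -> Hz

ts = 1e-6                                    # sample interval (s)
n = np.arange(64)
pilot = np.array([1, -1] * 32, dtype=complex)  # alternating binary pilot
f0 = 20e3                                    # true offset (Hz)
rx = pilot * np.exp(2j * np.pi * f0 * n * ts)  # noiseless received burst
print(freq_estimate(rx, pilot, ts))          # ~20000.0 Hz
```

With noise added, the same estimator degrades gracefully and is a common baseline against the Cramer-Rao bound.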

12 citations


Posted ContentDOI
TL;DR: In this paper, the authors proposed to adopt a particular family of QC-LDPC codes in the McEliece cryptosystem to reduce the key size and increase the transmission rate.
Abstract: The McEliece cryptosystem is a public-key cryptosystem based on coding theory that has successfully resisted cryptanalysis for thirty years. The original version, based on Goppa codes, is able to guarantee a high level of security, and is faster than competing solutions, like RSA. Despite this, it has rarely been considered in practical applications, due to two major drawbacks: i) the large size of the public key and ii) the low transmission rate. Low-Density Parity-Check (LDPC) codes are state-of-the-art forward error correcting codes that make it possible to approach the Shannon limit while ensuring limited complexity. Quasi-Cyclic (QC) LDPC codes are a particular class of LDPC codes that combine the low-complexity encoding of QC codes with the high-performance, low-complexity decoding of LDPC codes. In a previous work it was proposed to adopt a particular family of QC-LDPC codes in the McEliece cryptosystem to reduce the key size and increase the transmission rate. Recently, however, new attacks have been found that are able to exploit a flaw in the transformation from the private key to the public one. Such attacks can be effectively countered by changing the form of some constituent matrices, without altering the system parameters. This work gives an overview of the QC-LDPC codes-based McEliece cryptosystem and its cryptanalysis. Two recent versions are considered, and their ability to counter all the currently known attacks is discussed. A third version able to reach a higher security level is also proposed. Finally, it is shown that the new QC-LDPC codes-based cryptosystem scales favorably with the key length.

12 citations


Proceedings ArticleDOI
20 Oct 2007
TL;DR: Genetic algorithms are investigated as a promising optimization method to find well-performing interleavers for large frame sizes in the turbo codes field.
Abstract: Since their appearance in 1993, when they were the first codes to closely approach the Shannon limit, turbo codes have given a new direction to the channel coding field, especially since they were adopted in multiple telecommunications standards, such as deep-space communication. To obtain excellent performance it is necessary to design a robust turbo code interleaver. We investigate genetic algorithms as a promising optimization method to find well-performing interleavers for large frame sizes. In this paper, we present our work, compare it with several previous approaches and present experimental results.
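A genetic search over interleavers can be sketched as follows, using the minimum "spread" of the permutation as a stand-in fitness (the paper's actual fitness function and GA operators are not given here; this mutation-only sketch with invented parameters is illustrative):

```python
import random

def min_spread(perm):
    """Minimum of |i-j| + |perm[i]-perm[j]| over nearby index pairs:
    a common figure of merit for turbo-code interleavers (larger is
    better, since it separates neighboring bits after interleaving)."""
    n = len(perm)
    best = n
    for i in range(n):
        for j in range(i + 1, min(i + 8, n)):  # only close pairs matter
            best = min(best, abs(i - j) + abs(perm[i] - perm[j]))
    return best

def evolve(n=64, pop_size=30, gens=200, seed=1):
    """Tiny GA over permutations: keep the best half, refill the
    population with swap-mutated copies of survivors."""
    rng = random.Random(seed)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=min_spread, reverse=True)
        pop = pop[:pop_size // 2]              # selection
        while len(pop) < pop_size:
            child = pop[rng.randrange(len(pop))][:]
            i, j = rng.sample(range(n), 2)     # swap mutation
            child[i], child[j] = child[j], child[i]
            pop.append(child)
    return max(pop, key=min_spread)

best = evolve()
print(min_spread(best))
```

Real interleaver design would also fold in distance-spectrum or extrinsic-information criteria, and usually a crossover operator adapted to permutations.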

5 citations


Journal ArticleDOI
TL;DR: This paper proposes a new scheme, called 'low density parity check coded-continuous phase frequency shift keying (LDPCC-CPFSK)', in which LDPC codes and CPFSK, a special type of CPM, are considered together to improve both error performance and bandwidth efficiency.
Abstract: In this paper, in order to improve bit error performance and bandwidth efficiency and to reduce complexity compared to related schemes such as turbo codes, we combine low density parity check (LDPC) codes and continuous phase frequency shift keying (CPFSK) modulation and introduce a new scheme, called 'low density parity check coded-continuous phase frequency shift keying (LDPCC-CPFSK)'. Since LDPC codes have very large Euclidean distance and use iterative decoding algorithms, they have high error correcting capacity and perform very close to the Shannon limit. In all communication systems, phase discontinuities of modulated signals result in extra bandwidth requirements. Continuous phase modulation (CPM) is a powerful solution for this problem. Besides providing good bandwidth efficiency, CPM also improves bit error performance with its memory unit. In our proposed scheme, LDPC and CPFSK, which is a special type of CPM, are considered together to improve both error performance and bandwidth efficiency. We also obtain error performance curves of LDPCC-CPFSK via computer simulations for both regular and irregular LDPC codes. Simulation results are drawn for 4-ary, 8-ary and 16-ary CPFSK over AWGN, Rician and Rayleigh fading channels for a maximum of 100 iterations, with the frame size chosen as 504. Copyright © 2006 John Wiley & Sons, Ltd.
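The continuous-phase property that saves bandwidth is easy to see in a minimal baseband CPFSK generator (a generic sketch; the modulation index and samples-per-symbol below are arbitrary demo choices, not the paper's parameters):

```python
import numpy as np

def cpfsk(symbols, m, h=0.5, sps=8):
    """Baseband M-ary CPFSK: map symbols 0..M-1 to odd levels
    -(M-1), ..., +(M-1), then integrate the frequency so the phase
    never jumps at symbol boundaries."""
    levels = 2 * np.array(symbols) - (m - 1)        # odd frequency levels
    freqs = np.repeat(levels, sps) * h / (2 * sps)  # cycles per sample
    phase = 2 * np.pi * np.cumsum(freqs)            # continuous phase
    return np.exp(1j * phase)

s = cpfsk([0, 3, 1, 2], m=4)
print(np.allclose(np.abs(s), 1.0))   # constant envelope
```

The constant envelope and the absence of phase discontinuities are exactly what keeps the spectrum compact relative to switched-phase keying.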

5 citations


Proceedings ArticleDOI
Feng Man, Wu Lenan
01 Dec 2007
TL;DR: The aim of the study was to extend Shannon's channel capacity formula to the case of different signal and noise bandwidths; capacity values calculated by this extension are larger than those given by Shannon's equation, which is presented as the reason for "breaking" Shannon's capacity in ultra narrow band systems.
Abstract: Shannon's channel capacity equation, a very important theory, defines the maximum transmission rate of communication systems. However, a new communication scheme, named ultra narrow band, is said to "break" Shannon's limit. During the research, a special kind of filter having different signal and noise bandwidths was found; therefore, the aim of our study was to extend Shannon's channel capacity formula to the case of different signal and noise bandwidths. Considering the conversion of time and space domains, a new capacity formula was deduced based on MIMO theory. Results indicated that the extended equation is in good agreement with Shannon's channel capacity equation under the assumption of equal signal and noise bandwidths, but when the signal and noise bandwidths differ, capacity values calculated by this extension are larger than those given by Shannon's equation, which is presented as the reason for "breaking" Shannon's capacity in ultra narrow band systems.

5 citations


Proceedings ArticleDOI
14 May 2007
TL;DR: This paper presents a new design methodology/process for low-density parity-check (LDPC) codes; the proposed distributions with low (λ, ρ) degrees outperform other comparable distributions.
Abstract: This paper presents a new design methodology/process for low-density parity-check (LDPC) codes. To minimize the gap to the Shannon limit, a particle swarm optimizer is applied to optimize the variable and check node degree distributions λ(x) and ρ(x), respectively, in the case of irregular LDPC codes. Discrete fast density evolution (FDE) is used as the analysis technique to compute the threshold value of the LDPC code, and the Shannon limit is evaluated based on the Butman and McEliece formula. The results show that our proposed distributions with low (λ, ρ) degrees outperform other comparable distributions.
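The optimizer itself is generic and can be sketched independently of the coding application; in the paper's setting the objective would be the density-evolution threshold gap as a function of the degree distribution, while here a toy quadratic stands in (all parameter values are ours):

```python
import random

def pso(f, dim, n_particles=20, iters=100, lo=0.0, hi=1.0, seed=0):
    """Minimal particle swarm optimizer minimizing f over [lo, hi]^dim.
    Each particle is pulled toward its personal best and the swarm's
    global best, with inertia damping the velocity."""
    rng = random.Random(seed)
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]
    pval = [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                v[i][d] = (w * v[i][d]
                           + c1 * rng.random() * (pbest[i][d] - x[i][d])
                           + c2 * rng.random() * (gbest[d] - x[i][d]))
                x[i][d] = min(hi, max(lo, x[i][d] + v[i][d]))
            val = f(x[i])
            if val < pval[i]:
                pbest[i], pval[i] = x[i][:], val
                if val < gval:
                    gbest, gval = x[i][:], val
    return gbest, gval

# toy objective: squared distance from the point (0.3, 0.7)
best, val = pso(lambda p: (p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2, dim=2)
print(val)   # close to 0
```

For degree-distribution design the search space additionally carries normalization and rate constraints, which are typically handled by projecting candidates back onto the feasible set.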

Journal Article
TL;DR: This paper studies the reasoning behind the Shannon equation in detail and points out a casually adopted hypothesis in Shannon's reasoning which results in the conflict with UNB communication.
Abstract: This paper studies the reasoning behind the Shannon equation in detail and points out a casually adopted hypothesis in Shannon's reasoning which results in the conflict with UNB communication. After advancing an improved method and applying it to UNB communication theory, we find the correct result.

Proceedings ArticleDOI
A. Mackie
04 Dec 2007
TL;DR: In this article, the authors describe measurement of the low frequency amplitude and phase response for current signals on a distribution network associated with a 6 MW co-generation plant, which includes an LV/MV (240 V/22 kV) distribution transformer.
Abstract: This paper describes measurement of the low frequency amplitude and phase response for current signals on a distribution network associated with a 6 MW co-generation plant. The channel includes an LV/MV (240 V/22 kV) distribution transformer. Measurements were made between 100 Hz and 3 kHz and a parametric model is derived. Noise measurements for the channel are also described, and the maximum information capacity (Shannon limit) of this "through transformer" channel, when used with practical signal levels, is estimated.

Proceedings ArticleDOI
25 Jun 2007
TL;DR: A new approach called 'localized decoding' to resolve the phase ambiguity over sub-blocks, which involves modifying the parity-check matrix of the existing LDPC code to create 'local check nodes' and operating on them to resolve phase ambiguity.
Abstract: Low-density parity-check (LDPC) codes are known to perform close to the Shannon limit as the block length increases. Residual frequency offsets make iterative decoding difficult for longer block lengths. Sub-block decoding techniques provide a practical, low-complexity approach to deal with residual frequency offsets. However, they require phase ambiguity resolution (PAR) in every sub-block. This paper presents a new approach called 'localized decoding' to resolve the phase ambiguity over sub-blocks. The algorithm involves modifying the parity-check matrix of the existing LDPC code to create 'local check nodes' and operating on them to resolve the phase ambiguity. Simulation results of this approach show a negligible loss for small residual frequency offsets.

Proceedings ArticleDOI
24 Apr 2007
TL;DR: Genetic algorithms are investigated as a promising optimization method to find well-performing interleavers for larger frame sizes, and some experimental results are presented.
Abstract: Since their appearance in 1993, when they were the first codes to closely approach the Shannon limit, turbo codes have given a new direction to the channel coding field, especially since they were adopted in multiple telecommunications standards, such as deep-space communication. To obtain excellent performance it is necessary to design a robust turbo code interleaver. We investigate genetic algorithms as a promising optimization method to find well-performing interleavers for larger frame sizes. In this paper, we present our work, compare it with several previous approaches and present some experimental results.

Journal IssueDOI
TL;DR: This paper proposes a new scheme, called 'low density parity check coded-continuous phase frequency shift keying (LDPCC-CPFSK)', in which LDPC codes and CPFSK, a special type of CPM, are considered together to improve both error performance and bandwidth efficiency.
Abstract: In this paper, in order to improve bit error performance and bandwidth efficiency and to reduce complexity compared to related schemes such as turbo codes, we combine low density parity check (LDPC) codes and continuous phase frequency shift keying (CPFSK) modulation and introduce a new scheme, called 'low density parity check coded-continuous phase frequency shift keying (LDPCC-CPFSK)'. Since LDPC codes have very large Euclidean distance and use iterative decoding algorithms, they have high error correcting capacity and perform very close to the Shannon limit. In all communication systems, phase discontinuities of modulated signals result in extra bandwidth requirements. Continuous phase modulation (CPM) is a powerful solution for this problem. Besides providing good bandwidth efficiency, CPM also improves bit error performance with its memory unit. In our proposed scheme, LDPC and CPFSK, which is a special type of CPM, are considered together to improve both error performance and bandwidth efficiency. We also obtain error performance curves of LDPCC-CPFSK via computer simulations for both regular and irregular LDPC codes. Simulation results are drawn for 4-ary, 8-ary and 16-ary CPFSK over AWGN, Rician and Rayleigh fading channels for a maximum of 100 iterations, with the frame size chosen as 504. Copyright © 2006 John Wiley & Sons, Ltd.

Proceedings ArticleDOI
15 Apr 2007
TL;DR: The minimum mean-square error (MSE) distributed linear transceiver is derived and the performance loss of linear source-channel codes with respect to the Shannon limit is quantified.
Abstract: We consider distributed linear transceivers for sending a second-order wide-sense stationary process observed by two noisy sensors over a Gaussian multiple-access channel (MAC). We derive the minimum mean-square error (MSE) distributed linear transceiver. The optimal linear transmitter exploits bandwidth expansion by repeating transmission, and the transmitters at the two sensors are the same except for a constant factor. When the source is white, uncoded transmission is the best linear code for any SNR. But for a colored source, a whitening transmit filter is sub-optimal. In the high SNR regime, the magnitude response of the optimal transmission filter is inversely proportional to the fourth root of the power spectrum of the process (while that of the whitening filter is inversely proportional to the square root of the spectrum). In the special case of a single sensor with a Gaussian source, we also quantify the performance loss of linear source-channel codes with respect to the Shannon limit.

Proceedings ArticleDOI
Feng Man, Wu Lenan
01 Nov 2007
TL;DR: This paper shows that the geometric feature filter satisfies this assumption; therefore, within the framework of information theory, the feasibility of high-efficiency modulation in ultra narrow band systems and the rationality of the extension of Shannon's limit can be explained.
Abstract: Shannon's channel capacity equation, a very important theory, defines the maximum transmission rate of communication systems. However, if we assume that the signal bandwidth is larger than the noise bandwidth, then it can be shown that the channel capacity is larger than Shannon's limit. This paper shows that the geometric feature filter satisfies this assumption; therefore, within the framework of information theory, the feasibility of high-efficiency modulation in ultra narrow band systems and the rationality of the extension of Shannon's limit can be explained.

Proceedings ArticleDOI
05 Nov 2007
TL;DR: Simulation results show that turbo codes perform well in impulsive noise channels, which require error correction techniques.
Abstract: Data transmission in impulsive noise channels, such as mobile, power line communications, ADSL, etc., requires error correction techniques. Among error correction codes, turbo codes have a performance that approaches the Shannon limit on the capacity of a band-limited communication channel. Simulation results show that turbo codes have good performance in this channel. A turbo code with K=4, R=1/2 and 3 iterations is proposed for the impulsive noise channel.

Proceedings ArticleDOI
29 Oct 2007
TL;DR: Genetic algorithms are investigated as a promising optimization method to find well-performing interleavers for large frame sizes in the turbo codes field.
Abstract: Since their appearance in 1993, when they were the first codes to closely approach the Shannon limit, turbo codes have given a new direction to the channel coding field, especially since they were adopted in multiple telecommunications standards, such as deep-space communication. To obtain excellent performance it is necessary to design a robust turbo code interleaver. We investigate genetic algorithms as a promising optimization method to find well-performing interleavers for large frame sizes. In this paper, we present our work, compare it with several previous approaches and present experimental results.

Journal ArticleDOI
01 May 2007
TL;DR: Simulation results show that substantial benefits can be obtained by direct-sequence spreading with space-time coding of the coded bits in Rayleigh fading and narrowband jamming environments.
Abstract: This paper considers the performance enhancement of a space-time turbo code processing system employing a combined turbo code and antenna transmission diversity scheme for a direct sequence spread spectrum communication system. Turbo codes have been shown to approach the Shannon limit for error correcting capability at low signal-to-noise ratios given large frame sizes. Extensive research efforts have examined ways to enhance the performance of turbo codes for short frames (e.g., voice transmission). The proposed system is investigated over Rayleigh fading channels alone and also with narrowband jamming in addition to Rayleigh fading. This paper focuses on the implementation and performance of a modified turbo decoder for this model. Furthermore, the concatenation of a turbo code and direct sequence spread spectrum with space-time block codes is described. Simulation results show that substantial benefits can be obtained by direct-sequence spreading with space-time coding of the coded bits in Rayleigh fading and narrowband jamming environments.