Showing papers by Johannes B. Huber published in 2006


Journal ArticleDOI
TL;DR: A low-complexity single antenna interference cancellation (SAIC) algorithm for real-valued modulation formats, referred to as mono interference cancellation (MIC), is introduced which is well suited for practical applications.
Abstract: In mobile communications networks, system capacity is often limited by cochannel interference. Therefore, receiver algorithms for the cancellation of cochannel interference have recently attracted much interest. At the mobile terminal, algorithms can usually rely on only one received signal, delivered by a single receive antenna. In this letter, a low-complexity single antenna interference cancellation (SAIC) algorithm for real-valued modulation formats, referred to as mono interference cancellation (MIC), is introduced which is well suited for practical applications. Field trials in commercial GSM networks using prototype terminals with the proposed MIC algorithm have demonstrated that the novel concept may yield capacity improvements of up to 80%. The underlying principle is also beneficial for adjacent channel interference and for receivers with multiple antennas. Furthermore, in coverage-limited scenarios, there is no performance degradation compared with conventional receivers.
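
As an illustration of the projection principle behind SAIC for real-valued modulation formats (a hedged sketch, not the field-trial MIC implementation; channel coefficients and noise levels are hypothetical): because desired and interfering symbols are both real-valued after derotation, a complex weight can rotate the interferer's contribution onto the imaginary axis, so that taking the real part removes it entirely.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical flat-fading coefficients (illustration only).
h = 1.0 + 0.5j      # desired user
g = 0.8 - 0.6j      # cochannel interferer

n_sym = 10_000
s = rng.choice([-1.0, 1.0], n_sym)   # desired symbols (real-valued, e.g. derotated GMSK ~ BPSK)
i = rng.choice([-1.0, 1.0], n_sym)   # interfering symbols, also real-valued
noise = (rng.normal(scale=0.1, size=n_sym)
         + 1j * rng.normal(scale=0.1, size=n_sym))

r = h * s + g * i + noise

# Choose w so that w*g is purely imaginary: then Re{w*g*i} = 0 for any real i.
w = 1j * np.conj(g) / abs(g)

z = np.real(w * r)                            # interferer projected out
s_hat = np.sign(z) * np.sign(np.real(w * h))  # resolve the sign of the effective gain

print("symbol errors:", np.count_nonzero(s_hat != s))
```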

84 citations


Journal ArticleDOI
01 Nov 2006
TL;DR: This book introduces the concept of mutual information profiles, revisits the well-known Jensen's inequality, and derives bounds on information combining, from an information-theory point of view, for single parity-check codes and repetition codes.
Abstract: Consider coded transmission over a binary-input symmetric memoryless channel. The channel decoder uses the noisy observations of the code symbols to reproduce the transmitted code symbols. Thus, it combines the information about individual code symbols to obtain overall information about each code symbol, which may be the reproduced code symbol or its a-posteriori probability. This tutorial addresses the problem of "information combining" from an information-theory point of view: the decoder combines the mutual information between channel input symbols and channel output symbols (observations) into the mutual information between one transmitted symbol and all channel output symbols. The actual value of the combined information depends on the statistical structure of the channels. However, it can be upper and lower bounded for the assumed class of channels. This book first introduces the concept of mutual information profiles and revisits the well-known Jensen's inequality. Using these tools, the bounds on information combining are derived for single parity-check codes and for repetition codes. The application of the bounds is illustrated in four examples: information processing characteristics of coding schemes, including extrinsic information transfer (EXIT) functions; design of multiple turbo codes; bounds for the decoding threshold of low-density parity-check codes; EXIT function of the accumulator.
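
As a sketch of what such bounds look like for a single parity-check code: the BEC and the BSC are the extremal channels, giving closed-form upper and lower bounds on the extrinsic mutual information of one code symbol given the observations of the others. This is a minimal Python illustration under the usual binary-input symmetric-channel assumptions; the book's derivation is more general.

```python
import math

def h2(p):
    """Binary entropy function."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def h2_inv(y):
    """Inverse of h2 on [0, 0.5] via bisection."""
    lo, hi = 0.0, 0.5
    for _ in range(60):
        mid = (lo + hi) / 2
        if h2(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def spc_extrinsic_bounds(infos):
    """Bounds on the extrinsic MI of one symbol of a single parity-check
    code, given the MIs of the other symbols' observation channels."""
    # BEC is one extreme: I = prod(I_j) is the upper bound for the SPC code.
    upper = math.prod(infos)
    # BSC is the other extreme: a serial concatenation of BSCs.
    eps = [h2_inv(1.0 - I) for I in infos]
    prod_term = math.prod(1.0 - 2.0 * e for e in eps)
    lower = 1.0 - h2((1.0 - prod_term) / 2.0)
    return lower, upper

print(spc_extrinsic_bounds([0.7, 0.8, 0.9]))
```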

64 citations


Journal ArticleDOI
TL;DR: A new combined precoding/shaping technique for fast digital transmission over twisted-pair lines is proposed, which may simplify all kinds of high-speed data communication via copper lines, such as LANs, ADSL, CDDI, etc.
Abstract: A new combined precoding/shaping technique for fast digital transmission over twisted-pair lines is proposed. The major advantages of this “dynamics shaping” are: the dynamics of the signal at the input of the decision device are greatly reduced, which considerably facilitates A/D conversion, adaptive equalization, and symbol timing recovery. A trade-off is offered between the signal dynamics at the transmitter output and the decision device input, and the SNR gain from noise whitening. For the dynamics limitations relevant in practice, gains of up to 6 dB are achieved. Additionally, the transmitter can be fixed to a typical application because, in contrast to Tomlinson-Harashima and other precoding techniques, blind adaptive equalization is practicable for removing residual intersymbol interference in the case of a mismatch between the precoding and the actual cable characteristics. The residual SNR loss is negligible in most applications. SNR gains due to noise prediction, channel coding, and signal shaping can simply be combined with dynamics shaping. Nevertheless, system complexity is of the order of other precoding/shaping techniques. Although numerical results are presented only for an HDSL application in the German Telekom subscriber network, the proposed transmission scheme may simplify all kinds of high-speed data communication via copper lines, such as LANs, ADSL, CDDI, etc.
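
The dynamics-shaping scheme itself is not reproduced here; as a point of reference, a minimal sketch of classical Tomlinson-Harashima precoding, which the abstract contrasts against, shows the basic precoding operation. Filter coefficients and the symbol alphabet below are illustrative.

```python
import numpy as np

def thp_precode(symbols, b, M):
    """Tomlinson-Harashima precoding sketch for M-ary ASK symbols
    (..., -3, -1, 1, 3, ...) and a monic feedback filter
    B(z) = 1 + b[0] z^-1 + b[1] z^-2 + ...  (illustrative coefficients)."""
    x = np.zeros(len(symbols))
    for k, a in enumerate(symbols):
        # Pre-subtract the ISI caused by already-precoded samples.
        isi = sum(b[i] * x[k - 1 - i] for i in range(min(k, len(b))))
        v = a - isi
        # Modulo-2M reduction keeps the transmit signal bounded in [-M, M).
        x[k] = (v + M) % (2 * M) - M
    return x

a = np.random.default_rng(1).choice([-3, -1, 1, 3], size=8)  # 4-ASK
print(thp_precode(a, b=[0.9, -0.3], M=4))
```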

62 citations


Proceedings ArticleDOI
01 Nov 2006
TL;DR: These findings show that a large number of iterative decoding errors in the Margulis code, confined to point trapping sets in the standard Tanner graph, can be corrected if only one redundant parity-check equation is added to the decoder's matrix.
Abstract: We analyze the effect of redundant parity-check equations on the error-floor performance of low-density parity- check (LDPC) codes used over the additive white Gaussian noise (AWGN) channel. Our findings show that a large number of iterative decoding errors in the [2640,1320] Margulis code, confined to point trapping sets in the standard Tanner graph, can be corrected if only one redundant parity-check equation is added to the decoder's matrix. We also derive an analytic expression relating the number of rows in the parity-check matrix of a code and the parameters of trapping sets in the code's graph.
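
The operation described, adding a redundant parity-check equation, amounts to appending a GF(2) linear combination of existing rows to the decoder's matrix; the code is unchanged, but the Tanner graph gains a check node. A toy sketch (the matrix and the row choice are hypothetical, not the Margulis code construction):

```python
import numpy as np

def add_redundant_check(H, rows):
    """Append the GF(2) sum of the given rows of H as one redundant
    parity-check equation. The row space (and hence the code) is
    unchanged, but the decoder's Tanner graph gains a check node."""
    new_row = np.bitwise_xor.reduce(H[list(rows)], axis=0)
    return np.vstack([H, new_row])

# Toy [7,4] Hamming code parity-check matrix (not the Margulis code).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)

H_ext = add_redundant_check(H, rows=(0, 1))
print(H_ext)
```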

32 citations


Journal ArticleDOI
TL;DR: The notion of the trapping redundancy of a code is introduced, which quantifies the relationship between the number of redundant rows in any parity-check matrix of a given code and the size of its smallest trapping set.
Abstract: We generalize the notion of the stopping redundancy in order to study the smallest size of a trapping set in Tanner graphs of linear block codes. In this context, we introduce the notion of the trapping redundancy of a code, which quantifies the relationship between the number of redundant rows in any parity-check matrix of a given code and the size of its smallest trapping set. Trapping sets of certain sizes are known to cause error floors in the performance curves of iterative belief-propagation decoders, and it is therefore important to identify decoding matrices that avoid such sets. Bounds on the trapping redundancy are obtained using probabilistic and constructive methods, and the analysis covers both general and elementary trapping sets. Numerical values for these bounds are computed for the [2640,1320] Margulis code and the class of projective geometry codes, and compared with some new code-specific trapping set size estimates.
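
For concreteness, an (a, b) trapping set is a set S of a variable nodes whose induced subgraph contains exactly b odd-degree check nodes; a short sketch that measures (a, b) for a candidate set (toy matrix, not one analyzed in the paper):

```python
import numpy as np

def trapping_set_params(H, var_set):
    """Return (a, b) for a candidate set of variable nodes:
    a = |S|, b = number of check nodes with an odd number of
    neighbors in S (the unsatisfied checks when exactly S is in error)."""
    S = sorted(var_set)
    check_degrees = H[:, S].sum(axis=1)
    b = int(np.count_nonzero(check_degrees % 2 == 1))
    return len(S), b

# Toy [7,4] Hamming code parity-check matrix.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)
print(trapping_set_params(H, {0, 2}))   # -> (2, 1)
```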

28 citations


Proceedings Article
01 Jan 2006
TL;DR: This work derives lower and upper bounds for the i-th stopping redundancy of a code by using probabilistic methods and Bonferroni-type inequalities, specializes the findings to cyclic codes, and shows that parity-check matrices in cyclic form have some desirable redundancy properties.
Abstract: We extend the framework for studying the stopping redundancy of a linear block code by introducing and analyzing the stopping redundancy hierarchy. The stopping redundancy hierarchy of a code represents a measure of the trade-off between performance and complexity of iteratively decoding a code used over the binary erasure channel. It is defined as an ordered list of positive integers in which the i-th entry, termed the i-th stopping redundancy, corresponds to the minimum number of rows in any parity-check matrix of the code that has stopping distance at least i. In particular, we derive lower and upper bounds for the i-th stopping redundancy of a code by using probabilistic methods and Bonferroni-type inequalities. Furthermore, we specialize the findings for cyclic codes, and show that parity-check matrices in cyclic form have some desirable redundancy properties. We also propose to investigate the influence of the generator codeword of the cyclic parity-check matrix on its stopping distance properties.
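
A stopping set is a set S of variable nodes such that every check node connected to S is connected to it at least twice, and the i-th stopping redundancy asks for the fewest parity-check rows achieving stopping distance at least i. A brute-force sketch of the stopping-distance computation, feasible only for short codes (toy matrix for illustration):

```python
import numpy as np
from itertools import combinations

def is_stopping_set(H, S):
    """True if every check touching S touches it at least twice."""
    touched = H[:, sorted(S)].sum(axis=1)
    return bool(np.all((touched == 0) | (touched >= 2)))

def stopping_distance(H):
    """Size of the smallest nonempty stopping set (exhaustive search)."""
    n = H.shape[1]
    for size in range(1, n + 1):
        for S in combinations(range(n), size):
            if is_stopping_set(H, S):
                return size
    return None

# Toy [7,4] Hamming code parity-check matrix.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)
print(stopping_distance(H))   # -> 3 for this matrix
```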

15 citations


03 Apr 2006
TL;DR: A solution is introduced for combining this vector quantization scheme with DPCM in the sense of a stepwise spherical quantization, which results in a drastic reduction of computational effort with respect to the state of the art.
Abstract: Spherical logarithmic quantization (SLQ) is a vector quantization method for efficiently digitizing analog signals with a high dynamic range as well as a high signal-to-noise ratio (SNR) while preserving the original waveform as closely as possible. Short vectors of samples are represented in spherical coordinates, while correlations within the source signal are exploited by means of differential pulse-code modulation (DPCM). This paper introduces a solution for combining this vector quantization scheme with DPCM in the sense of a stepwise spherical quantization, which results in a drastic reduction of computational effort with respect to the state of the art. Moreover, an optimal indexing of the quantization cells covering the surface of a multidimensional unit sphere is presented for both the encoder and the decoder side.
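
Not the paper's stepwise scheme, but a minimal sketch of the underlying idea for a three-dimensional sample vector: the radius is quantized logarithmically (roughly constant relative error, hence roughly constant SNR over the dynamic range) while the direction is quantized via its spherical angles. Bit allocations and ranges below are illustrative.

```python
import numpy as np

def slq_encode(x, r_bits=6, angle_bits=5, r_min=1e-3, r_max=1.0):
    """Sketch: quantize a 3-D sample vector as (log-radius, angles)."""
    r = np.clip(np.linalg.norm(x), r_min, r_max)
    # Uniform quantization of log(r) gives constant relative error.
    t = (np.log(r) - np.log(r_min)) / (np.log(r_max) - np.log(r_min))
    qr = round(t * (2**r_bits - 1))
    # Spherical angles of the direction vector.
    theta = np.arccos(x[2] / max(r, 1e-12))        # [0, pi]
    phi = np.arctan2(x[1], x[0])                   # (-pi, pi]
    qt = round(theta / np.pi * (2**angle_bits - 1))
    qp = round((phi + np.pi) / (2 * np.pi) * (2**angle_bits - 1))
    return qr, qt, qp

def slq_decode(qr, qt, qp, r_bits=6, angle_bits=5, r_min=1e-3, r_max=1.0):
    t = qr / (2**r_bits - 1)
    r = np.exp(np.log(r_min) + t * (np.log(r_max) - np.log(r_min)))
    theta = qt / (2**angle_bits - 1) * np.pi
    phi = qp / (2**angle_bits - 1) * 2 * np.pi - np.pi
    return r * np.array([np.sin(theta) * np.cos(phi),
                         np.sin(theta) * np.sin(phi),
                         np.cos(theta)])

x = np.array([0.3, -0.2, 0.1])
print(x, "->", slq_decode(*slq_encode(x)))
```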

7 citations


Journal ArticleDOI
TL;DR: The method of extrinsic information transfer charts, which is limited to the case of a concatenation of two component codes, is extended to the case of multiple turbo codes.
Abstract: In the low signal-to-noise ratio regime, the performance of concatenated coding schemes is limited by the convergence properties of the iterative decoder. Idealizing the model of iterative decoding by an independence assumption, which represents the case in which the codeword length is infinitely large, leads to analyzable structures from which this performance limit can be predicted. Mutual information transfer characteristics of the constituent coding schemes, comprising convolutional encoders and soft-in/soft-out decoders, have been shown to be sufficient to characterize the components within this model. Serial and parallel concatenations can be analyzed using these characteristics alone. In this paper, we extend the method of extrinsic information transfer charts, which is limited to the case of a concatenation of two component codes, to the case of multiple turbo codes. Multiple turbo codes are parallel concatenations of three or more constituent codes, which, in general, may not be identical and may not have identical code rates. For the construction of low-rate codes, this concept seems very favorable, as power efficiencies close to the Shannon limit can be achieved with reasonable complexity.
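
A sketch of the extension described here: in a multiple turbo code with three component decoders, the a priori information for one decoder combines the extrinsic outputs of the other two; under the usual Gaussian assumption the equivalent LLR variances add, which is expressed via the J-function and its inverse. One common closed-form J approximation (Brannstrom et al.) is used, and the component EXIT function below is a placeholder, not a real code's characteristic.

```python
import numpy as np

# Closed-form approximation of the J-function (Brannstrom et al., 2005).
H1, H2, H3 = 0.3073, 0.8935, 1.1064

def J(sigma):
    return (1 - 2 ** (-H1 * sigma ** H2)) ** H3

def J_inv(I):
    I = np.clip(I, 1e-12, 1 - 1e-12)
    return (-np.log2(1 - I ** (1 / H3)) / H1) ** (1 / H2)

def apriori_from_others(I_ext, j):
    """A priori MI for decoder j: combine the extrinsic MI of all other
    component decoders by adding the equivalent Gaussian LLR variances."""
    sigmas2 = [J_inv(I) ** 2 for k, I in enumerate(I_ext) if k != j]
    return J(np.sqrt(sum(sigmas2)))

# Placeholder component EXIT function (monotone toy model, not a real code).
def exit_component(I_A, I_ch=0.5):
    return J(np.sqrt(J_inv(I_A) ** 2 + J_inv(I_ch) ** 2))

I_ext = [0.0, 0.0, 0.0]          # three component decoders
for it in range(10):
    for j in range(3):
        I_A = apriori_from_others(I_ext, j)
        I_ext[j] = exit_component(I_A)
print([round(float(I), 3) for I in I_ext])
```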

6 citations


03 Apr 2006
TL;DR: This work provides an overview of codes operating near the Shannon limit when transmitted over the additive white Gaussian noise (AWGN) channel with erasures, and uses the PEG algorithm, improved by a novel method, to design better LDPC codes for this channel.
Abstract: This paper provides an overview of codes operating near the Shannon limit when transmitted over the additive white Gaussian noise (AWGN) channel with erasures. We compare the performance of standardized low-density parity-check (LDPC) codes and parallel-concatenated (turbo) codes to two progressive edge growth (PEG) optimized codes and a new design. The assumed channel, an AWGN channel with erasures, plays an important role in the field of satellite communications. The standardized codes we chose for comparison purposes are the DVB-S2 LDPC code and a previously designed turbo code of 3GPP2. Furthermore, we use the PEG algorithm, improved by a novel method, to design better LDPC codes for this channel.
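
The core progressive edge-growth step (without the paper's novel improvement, which the abstract does not specify) can be sketched as follows: each new edge of a variable node is attached to a check node at the largest possible graph distance from it, breaking ties by the lowest current check degree. Node counts and degrees below are illustrative.

```python
def peg(n_var, n_chk, var_degrees):
    """Progressive edge-growth sketch: greedily build a Tanner graph."""
    chk_deg = [0] * n_chk
    adj_v = [[] for _ in range(n_var)]   # checks adjacent to each variable
    adj_c = [[] for _ in range(n_chk)]   # variables adjacent to each check

    def check_depths(v):
        """BFS from variable v; depth at which each check is first reached."""
        depth, seen_v = {}, {v}
        frontier, d = [v], 0
        while frontier:
            nxt = []
            for u in frontier:
                for c in adj_v[u]:
                    if c not in depth:
                        depth[c] = d
                        for w in adj_c[c]:
                            if w not in seen_v:
                                seen_v.add(w)
                                nxt.append(w)
            frontier, d = nxt, d + 1
        return depth

    for v in range(n_var):
        for _ in range(var_degrees[v]):
            depth = check_depths(v)
            # Deepest (or unreachable) check first, then the lowest degree.
            c = min((c for c in range(n_chk) if c not in adj_v[v]),
                    key=lambda c: (-depth.get(c, float("inf")), chk_deg[c]))
            adj_v[v].append(c)
            adj_c[c].append(v)
            chk_deg[c] += 1
    return adj_v

print(peg(n_var=6, n_chk=3, var_degrees=[2] * 6))
```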

2 citations


Proceedings ArticleDOI
09 Jul 2006
TL;DR: The concept of "information combining" is extended to multiple access schemes where the receiver is equipped with multiple receive antennas, and the connection of the respective rate region to that of the scalar MACs, which are present if the receive antennas are treated separately, is derived.
Abstract: The concept of "information combining" is extended to multiple access schemes where the receiver is equipped with multiple receive antennas. In particular, the connection of the respective rate region to that of the scalar MACs, which are present if the receive antennas are treated separately, is derived. This interpretation leads to new, expedient insights, and the gains over "scalar combining", i.e., the synergy available in MIMO channels, are quantified from a new point of view. The theoretical results are illustrated by numerical examples.
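
A small numerical sketch of the two quantities being related: the sum rate of a two-user MAC whose N receive antennas are processed jointly, versus the per-antenna sum rates obtained when each antenna is treated as a separate scalar MAC. Channel realizations are illustrative i.i.d. draws and Gaussian signaling is assumed; note that the per-antenna rates overcount, since all antennas observe the same transmitted symbols.

```python
import numpy as np

rng = np.random.default_rng(2)

K, N = 2, 3            # users, receive antennas (illustrative)
P, N0 = 1.0, 0.5       # per-user power, noise variance per antenna
H = (rng.normal(size=(N, K)) + 1j * rng.normal(size=(N, K))) / np.sqrt(2)

# Joint processing of all antennas: vector MAC sum rate.
G = sum(P * np.outer(H[:, k], H[:, k].conj()) for k in range(K))
R_joint = np.log2(np.linalg.det(np.eye(N) + G / N0).real)

# Each antenna treated as a separate scalar MAC: per-antenna sum rates.
R_scalar = sum(np.log2(1 + P * np.sum(np.abs(H[i]) ** 2) / N0)
               for i in range(N))

print(f"joint: {R_joint:.3f} bit/use, per-antenna scalar MACs: {R_scalar:.3f}")
```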

03 Apr 2006
TL;DR: This paper presents an additional, easily understandable methodology for calculating the lower bound on the required mutual information (denoted the “best performance” bound in [1]); the method is intuitive, as its approach is based on EXIT charts.
Abstract: Belief-propagation decoding for low-density parity-check (LDPC) codes only performs well if the decoder does not get stuck during the information exchange between variable and check nodes, i.e., if it operates above a certain threshold. Since the binary erasure channel and the binary symmetric channel are the least and the most informative channels from an information-combining point of view (which of the two is extremal depends on whether variable or check nodes are considered), one can calculate upper and lower bounds on the mutual information that has to be provided by the channel for successful iterative decoding, see [1]. In this paper we present an additional, easily understandable methodology for calculating the lower bound on the required mutual information (denoted the “best performance” bound in [1]). Our method is intuitive, as its approach is based on EXIT charts.
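
The flavor of the EXIT-chart machinery can be sketched for a (dv, dc)-regular LDPC ensemble. The toy below uses the BEC combining rules at both node types, where the EXIT functions have simple closed forms, and bisects for the smallest channel mutual information at which the iteration converges; the paper's actual bound mixes the BEC/BSC extremes per node type, which is not reproduced here.

```python
def converges(I_ch, dv, dc, iters=2000, tol=1e-9):
    """EXIT iteration with BEC combining rules at both node types."""
    I_vc = 0.0  # MI passed from variable to check nodes
    for _ in range(iters):
        I_cv = I_vc ** (dc - 1)                          # check node rule
        I_new = 1 - (1 - I_ch) * (1 - I_cv) ** (dv - 1)  # variable node rule
        if abs(I_new - I_vc) < tol:
            break
        I_vc = I_new
    return I_vc > 1 - 1e-6

def min_required_mi(dv, dc):
    """Bisect for the smallest channel MI with successful decoding."""
    lo, hi = 0.0, 1.0
    for _ in range(40):
        mid = (lo + hi) / 2
        if converges(mid, dv, dc):
            hi = mid
        else:
            lo = mid
    return hi

# ~0.5706 for the (3,6)-regular ensemble (BEC threshold ~0.4294 erasures).
print(min_required_mi(3, 6))
```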