Author

E. Offer

Bio: E. Offer is an academic researcher from Ludwig Maximilian University of Munich. The author has contributed to research in topics: Sequential decoding & Decoding methods. The author has an h-index of 4 and has co-authored 4 publications receiving 149 citations.

Papers
Journal ArticleDOI
TL;DR: Using analog, non-linear and highly parallel networks, this work attempts to perform decoding of block and convolutional codes, equalization of certain frequency-selective channels, decoding of multi-level coded modulation and reconstruction of coded PCM signals.
Abstract: Using analog, non-linear and highly parallel networks, we attempt to perform decoding of block and convolutional codes, equalization of certain frequency-selective channels, decoding of multi-level coded modulation and reconstruction of coded PCM signals. This is in contrast to common practice where these tasks are performed by sequentially operating processors. Our advantage is that we operate fully on soft values for input and output, similar to what is done in 'turbo' decoding. However, we do not have explicit iterations because the networks float freely in continuous time. The decoder has almost no latency in time because we are only restricted by the time constants from the parasitic RC values of integrated circuits. Simulation results for several simple examples are shown which, in some cases, achieve the performance of a conventional MAP detector. For more complicated codes we indicate promising solutions with more complex analog networks based on the simple ones. Furthermore, we discuss the principles of the analog VLSI implementation of these networks.
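The abstract's central idea — a free-running analog network whose fixed point delivers soft decoding decisions, with no explicit iterations — can be sketched numerically for the simplest possible case, a single-parity-check code. This is an illustrative toy (Euler-integrated relaxation dynamics, not the paper's actual circuit networks):

```python
import numpy as np

def analog_spc_decoder(chan_llr, dt=0.01, t_end=8.0):
    """Toy continuous-time 'analog' decoder for a single-parity-check code.

    State x_i tracks the a-posteriori LLR of bit i.  Euler integration of
        dx_i/dt = -x_i + chan_llr_i + extrinsic_i
    mimics a free-running network settling to its fixed point: there are no
    explicit decoding iterations, only a relaxation time constant.  For an
    SPC code the fixed point is the exact MAP posterior LLR.
    """
    chan_llr = np.asarray(chan_llr, dtype=float)
    target = np.empty_like(chan_llr)
    for i in range(len(chan_llr)):
        others = np.delete(chan_llr, i)
        # box-plus (tanh-rule) extrinsic LLR from the parity check
        target[i] = chan_llr[i] + 2.0 * np.arctanh(np.prod(np.tanh(others / 2.0)))
    x = np.zeros_like(chan_llr)
    for _ in range(int(t_end / dt)):
        x += dt * (-x + target)   # relaxation toward the MAP posterior LLRs
    return x
```

In the paper's richer networks the extrinsic terms themselves depend on the evolving state, coupling the dynamics; here they are constants, so the sketch only captures the settling behavior.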

70 citations

Proceedings ArticleDOI
02 Dec 1991
TL;DR: A static code design with unequal error protection (UEP) is presented that also takes auxiliary data services into account and emphasis is on the use of source-adapted channel coding with rate-compatible punctured convolutional (RCPC) codes.
Abstract: A system proposal for DAB is investigated. The kernel is orthogonal frequency division multiplexing (OFDM) with 4-DPSK (differential phase shift keying) modulation, rectangular pulse-shaping, and a guard interval to reject multipath distortions. Emphasis is on the use of source-adapted channel coding with rate-compatible punctured convolutional (RCPC) codes. Based on analytical and simulated BER (bit error rate) curves for several propagation conditions and on preliminary source significance information (SSI), a static code design with unequal error protection (UEP) is presented that also takes auxiliary data services into account. The gain due to UEP is on the order of 8 dB in signal power or 25% in bandwidth.
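The RCPC idea behind the abstract — one mother code, nested puncturing patterns giving several protection levels for UEP — can be sketched as follows. The patterns below are illustrative only, not the actual DAB code design:

```python
# Rate-compatible punctured convolutional (RCPC) codes in miniature: one
# rate-1/2 mother code, several puncturing patterns.  Rate compatibility
# means every higher-rate pattern transmits a subset of the bits kept by
# every lower-rate pattern, so protection levels can be switched without
# changing the mother code.
PERIOD = 4                       # puncturing period, in info bits
PATTERNS = {                     # over 2*PERIOD coded bits: 1 = send, 0 = drop
    "1/2": [1, 1, 1, 1, 1, 1, 1, 1],   # keep all 8      -> rate 1/2
    "2/3": [1, 1, 1, 0, 1, 1, 1, 0],   # keep 6 of 8     -> rate 4/6 = 2/3
    "4/5": [1, 1, 1, 0, 1, 0, 1, 0],   # keep 5 of 8     -> rate 4/5
}

def puncture(coded_bits, pattern):
    """Drop the coded bits masked out by the (periodic) puncturing pattern."""
    return [b for i, b in enumerate(coded_bits) if pattern[i % len(pattern)]]

def rate(pattern):
    """Effective code rate: PERIOD info bits per sum(pattern) transmitted bits."""
    return PERIOD / sum(pattern)
```

UEP then amounts to assigning the perceptually important source bits the low-rate (strongly protected) patterns and the less important bits the high-rate ones.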

69 citations

Journal ArticleDOI
TL;DR: The development of soft output algorithms over the last two decades along with associated state-of-the-art applications and an outlook towards novel applications of the soft principle are outlined.
Abstract: A major breakthrough in digital communications was the provisioning of soft outputs at each processing stage, with appropriate capabilities to use this as soft inputs in the next processing stage. This allowed for much more performant receivers especially in difficult mobile radio channel conditions, and set the stage for iterative processing. This article will outline the development of soft output algorithms over the last two decades along with associated state-of-the-art applications and conclude with an outlook towards novel applications of the soft principle.
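The "soft output" principle the article surveys — passing reliabilities rather than hard bits between processing stages — reduces, in the textbook BPSK-over-AWGN case, to the demodulator emitting log-likelihood ratios. A minimal sketch, assuming the bit mapping 0 → +1, 1 → −1 and known noise variance:

```python
import numpy as np

def bpsk_llr(y, sigma2):
    """Soft demodulator output for BPSK over an AWGN channel.

    With bit mapping 0 -> +1, 1 -> -1 and noise variance sigma2, the
    log-likelihood ratio log P(b=0|y) / P(b=1|y) is simply 2*y/sigma2.
    The sign is the hard decision; the magnitude is a reliability that a
    downstream soft-input stage (e.g. a channel decoder) can exploit.
    """
    return 2.0 * np.asarray(y, dtype=float) / sigma2
```
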

7 citations

Book ChapterDOI
01 Jan 2001
TL;DR: It is explained exactly how suboptimal algorithms approximate the optimal, and it is shown how good these approximations are in some special cases.
Abstract: Several popular, suboptimal algorithms for bit decoding of binary block codes, such as turbo decoding, threshold decoding, and message passing for LDPC codes, were developed almost as a common-sense approach to decoding of some specially designed codes. After their introduction, these algorithms have been studied by mathematical tools pertinent more to computer science than to conventional algebraic coding theory. We give an algebraic description of the optimal and suboptimal bit decoders and of the optimal and suboptimal message passing. We explain exactly how the suboptimal algorithms approximate the optimal, and show how good these approximations are in some special cases.

5 citations


Cited by
Book
06 Oct 2003
TL;DR: A fun and exciting textbook on the mathematics underpinning the most dynamic areas of modern science and engineering.
Abstract: Fun and exciting textbook on the mathematics underpinning the most dynamic areas of modern science and engineering.

8,091 citations

Journal ArticleDOI
TL;DR: This work explains how to obtain region-based free energy approximations that improve the Bethe approximation, and corresponding generalized belief propagation (GBP) algorithms, and describes empirical results showing that GBP can significantly outperform BP.
Abstract: Important inference problems in statistical physics, computer vision, error-correcting coding theory, and artificial intelligence can all be reformulated as the computation of marginal probabilities on factor graphs. The belief propagation (BP) algorithm is an efficient way to solve these problems that is exact when the factor graph is a tree, but only approximate when the factor graph has cycles. We show that BP fixed points correspond to the stationary points of the Bethe approximation of the free energy for a factor graph. We explain how to obtain region-based free energy approximations that improve the Bethe approximation, and corresponding generalized belief propagation (GBP) algorithms. We emphasize the conditions a free energy approximation must satisfy in order to be a "valid" or "maxent-normal" approximation. We describe the relationship between four different methods that can be used to generate valid approximations: the "Bethe method", the "junction graph method", the "cluster variation method", and the "region graph method". Finally, we explain how to tell whether a region-based approximation, and its corresponding GBP algorithm, is likely to be accurate, and describe empirical results showing that GBP can significantly outperform BP.
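The claim that BP is exact when the factor graph is a tree can be checked directly on a toy chain of three binary variables (the factor values below are arbitrary, for illustration only):

```python
import itertools
import numpy as np

# Chain factor graph  x0 -- f01 -- x1 -- f12 -- x2, all variables binary.
# Sum-product BP marginals are exact on a tree; verify against brute force.
f01 = np.array([[1.0, 0.5], [0.5, 2.0]])   # factor over (x0, x1)
f12 = np.array([[0.3, 1.0], [1.0, 0.3]])   # factor over (x1, x2)

def bp_marginal_x1():
    # Leaf variables send uniform messages, so the factor-to-variable
    # messages into x1 simply sum out the leaf variable.
    m_f01_to_x1 = f01.sum(axis=0)          # sum over x0
    m_f12_to_x1 = f12.sum(axis=1)          # sum over x2
    belief = m_f01_to_x1 * m_f12_to_x1     # product of incoming messages
    return belief / belief.sum()

def brute_force_marginal_x1():
    p = np.zeros(2)
    for x0, x1, x2 in itertools.product((0, 1), repeat=3):
        p[x1] += f01[x0, x1] * f12[x1, x2]
    return p / p.sum()
```

On a graph with cycles the two would in general disagree, which is exactly the gap the Bethe/region-graph analysis in the paper quantifies.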

1,827 citations

Journal ArticleDOI
TL;DR: In this contribution the transmission of M-PSK and M-QAM modulated orthogonal frequency division multiplexed (OFDM) signals over an additive white Gaussian noise (AWGN) channel is considered and the degradation of the bit error rate is evaluated.
Abstract: In this contribution the transmission of M-PSK and M-QAM modulated orthogonal frequency division multiplexed (OFDM) signals over an additive white Gaussian noise (AWGN) channel is considered. The degradation of the bit error rate (BER), caused by the presence of carrier frequency offset and carrier phase noise, is analytically evaluated. It is shown that for a given BER degradation, the values of the frequency offset and the linewidth of the carrier generator that are allowed for OFDM are orders of magnitude smaller than for single carrier systems carrying the same bit rate.
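The sensitivity described can be seen in a small numerical experiment: a carrier frequency offset of even a few percent of the subcarrier spacing breaks subcarrier orthogonality and leaks energy into inter-carrier interference. The parameter values below are illustrative, not taken from the paper:

```python
import numpy as np

N = 64                                    # number of subcarriers
eps = 0.05                                # CFO as a fraction of subcarrier spacing
k0 = 10                                   # the single active subcarrier

X = np.zeros(N, dtype=complex)
X[k0] = 1.0
x = np.fft.ifft(X) * N                    # time-domain OFDM symbol (unit amplitude)

n = np.arange(N)
y = x * np.exp(2j * np.pi * eps * n / N)  # residual carrier frequency offset
Y = np.fft.fft(y) / N                     # receiver FFT

signal_power = np.abs(Y[k0]) ** 2                    # attenuated useful component
ici_power = np.sum(np.abs(Y) ** 2) - signal_power    # leaked into the other bins
```

With `eps = 0` the off-bin energy is exactly zero; with a nonzero offset the useful component is attenuated and every other bin picks up interference, which is the mechanism behind the BER degradation analyzed in the paper.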

1,816 citations

Journal ArticleDOI
TL;DR: The authors discuss the potential of OFDM signaling, with its limitations and inherent problems, as well as another potential technique that has so far been overlooked: single-carrier transmission with frequency- domain equalization, and introduces coded-OFDM (COFDM), which makes use of channel coding and frequency-domain interleaving.
Abstract: The authors discuss the potential of OFDM signaling, with its limitations and inherent problems, as well as another potential technique that has so far been overlooked: single-carrier transmission with frequency-domain equalization. The carrier synchronisation issue is dealt with before the authors introduce coded-OFDM (COFDM), which makes use of channel coding and frequency-domain interleaving.

1,423 citations

Journal ArticleDOI
TL;DR: This work uses Forney-style factor graphs, which support hierarchical modeling and are compatible with standard block diagrams, and uses them to derive practical detection/estimation algorithms in a wide area of applications.
Abstract: Graphical models such as factor graphs allow a unified approach to a number of key topics in coding and signal processing such as the iterative decoding of turbo codes, LDPC codes and similar codes, joint decoding, equalization, parameter estimation, hidden-Markov models, Kalman filtering, and recursive least squares. Graphical models can represent complex real-world systems, and such representations help to derive practical detection/estimation algorithms in a wide area of applications. Most known signal processing techniques, including gradient methods, Kalman filtering, and particle methods, can be used as components of such algorithms. Unlike most of the previous literature, we use Forney-style factor graphs, which support hierarchical modeling and are compatible with standard block diagrams.

959 citations