Dissertation

On Linear Transmission Systems

TL;DR: The object in Part I is to study the impact of both the signaling rate and the pulse shape on the information rate of single antenna, single carrier linear modulation systems, and an iterative optimization method is developed, which produces precoders improving upon the best known ones in the literature.
Abstract: This thesis is divided into two parts. Part I analyzes the information rate of single antenna, single carrier linear modulation systems. The information rate of a system is the maximum number of bits that can be transmitted per channel use, and is achieved by Gaussian symbols. It depends on the underlying pulse shape in a linearly modulated signal and also on the signaling rate, the rate at which the Gaussian symbols are transmitted. The object in Part I is to study the impact of both the signaling rate and the pulse shape on the information rate. Part II of the thesis is devoted to multiple antenna (MIMO) systems, and more specifically to linear precoders for MIMO channels. Linear precoding is a practical scheme for improving the performance of a MIMO system, and has been studied intensively during the last four decades. In practical applications, the symbols to be transmitted are taken from a discrete alphabet, such as quadrature amplitude modulation (QAM), and it is of interest to find the optimal linear precoder for a certain performance measure of the MIMO channel. The design problem depends on the particular performance measure and the receiver structure. The main difficulty in finding the optimal precoders is the discrete nature of the problem, and mostly suboptimal solutions have been proposed. The problem has been well investigated when linear receivers are employed, for which optimal precoders were found for many different performance measures. However, in the case of the optimal maximum likelihood (ML) receiver, only suboptimal constructions have been possible so far. Part II starts by proposing novel, low-complexity, suboptimal precoders, which provide a low bit error rate (BER) at the receiver. Later, an iterative optimization method is developed, which produces precoders improving upon the best known ones in the literature.
The resulting precoders turn out to exhibit a certain structure, which is then analyzed and proved to be optimal for large alphabets.
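
The information rate achieved by Gaussian symbols over a known linear channel has a standard closed form, log2 det(I + snr · HHᴴ). The following sketch is illustrative only (the 2x2 identity channel and unit SNR are assumptions, not from the thesis):

```python
import numpy as np

def gaussian_info_rate(H, snr):
    """Information rate in bits per channel use of a linear channel H
    driven by i.i.d. Gaussian symbols: log2 det(I + snr * H H^H)."""
    n_r = H.shape[0]
    gram = H @ H.conj().T
    return float(np.log2(np.linalg.det(np.eye(n_r) + snr * gram).real))

# Illustrative 2x2 identity channel at 0 dB SNR: each stream carries 1 bit.
print(gaussian_info_rate(np.eye(2), 1.0))  # → 2.0
```

A linear precoder F would enter this expression by replacing H with HF, which is what the optimization in Part II acts on.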


Citations
Book Chapter
01 Jan 2004

33 citations

Dissertation
01 Jan 2013
TL;DR: A framework is proposed for designing reduced-complexity receivers for FTN and general linear channels that achieve optimal or near-optimal performance, together with an improved minimum-phase conversion that concentrates the energy of the ISI model.
Abstract: Fast and reliable data transmission together with high bandwidth efficiency are important design aspects in a modern digital communication system. Many different approaches exist, but in this thesis bandwidth efficiency is obtained by increasing the data transmission rate with the faster-than-Nyquist (FTN) framework while keeping a fixed power spectral density (PSD). In FTN, consecutive information-carrying symbols can overlap in time and in that way introduce a controlled amount of intentional intersymbol interference (ISI). This technique was introduced by Mazo in 1975 and has since been extended in many directions. Since the ISI stemming from practical FTN signaling can be of significant duration, optimum detection with traditional methods is often prohibitively complex, and alternative equalization methods with acceptable complexity-performance tradeoffs are needed. The key objective of this thesis is therefore to design reduced-complexity receivers for FTN and general linear channels that achieve optimal or near-optimal performance. Although the performance of a detector can be measured by several means, this thesis is restricted to bit error rate (BER) and mutual information results. FTN signaling is applied in two ways: as a separate uncoded narrowband communication system, or in a coded scenario consisting of a convolutional encoder, interleaver and the inner ISI mechanism in serial concatenation. Turbo equalization, where soft information in the form of log likelihood ratios (LLRs) is exchanged between the equalizer and the decoder, is a commonly used decoding technique for coded FTN signals. The first part of the thesis considers receivers and arising stability problems when working within the white noise constraint. New M-BCJR algorithms for turbo equalization are proposed and compared to reduced-trellis Viterbi algorithm (VA) and BCJR benchmarks based on an offset label idea.
By adding a third low-complexity M-BCJR recursion, LLR quality is improved for practical values of M; here M measures the reduced number of BCJR computations per data symbol. An improved minimum-phase conversion that concentrates the energy of the ISI model is proposed. When combined with a delayed and slightly mismatched receiver, the decoding allows a smaller M without significant loss in BER. The second part analyzes the effect of the internal metric calculations on the performance of Forney- and Ungerboeck-based reduced-complexity equalizers of the M-algorithm type for both ISI and multiple-input multiple-output (MIMO) channels. Even though the final output of a full-complexity equalizer is identical for both models, the internal metric calculations are in general different. Hence, suboptimum methods need not produce the same final output. Additionally, new models working in between the two extremes are proposed and evaluated. Note that the choice of observation model does not impact the detection complexity, as the underlying algorithm is unaltered. The last part of the thesis is devoted to a different complexity-reducing approach: channel-shortening detectors for linear channels are optimized from an information-theoretic perspective. The achievable information rates of the shortened models, as well as closed-form expressions for all components of the optimal detector of the class, are derived. The framework used in this thesis is more general than what has previously been used within the area.
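
The controlled ISI that FTN introduces can be seen directly: with ideal T-orthogonal sinc pulses sent every τT seconds and a matched-filter receiver, the taps at the symbol instants are samples of sinc at multiples of τ, which vanish off-centre only for τ = 1. A minimal sketch (the pulse choice and τ = 0.8 are illustrative assumptions, not values from the thesis):

```python
import numpy as np

def ftn_isi_taps(tau, span=8):
    """ISI taps after matched filtering when T-orthogonal sinc pulses
    are sent every tau*T seconds; tau < 1 is faster-than-Nyquist.
    Returns sinc(k*tau) for k = -span..span."""
    k = np.arange(-span, span + 1)
    return np.sinc(k * tau)  # np.sinc(x) = sin(pi*x)/(pi*x)

# At the Nyquist rate (tau = 1) only the centre tap survives: no ISI.
print(np.count_nonzero(np.abs(ftn_isi_taps(1.0, span=2)) > 1e-9))  # → 1

# At tau = 0.8 neighbouring taps are nonzero: controlled, intentional ISI.
print(np.count_nonzero(np.abs(ftn_isi_taps(0.8, span=2)) > 1e-9))  # → 5
```

In practice the ISI response can span many symbols, which is why the thesis focuses on reduced-complexity M-BCJR equalization rather than full-trellis detection.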

2 citations

References
Journal Article
TL;DR: It is shown that the lattice basis reduction algorithm of Lenstra, Lenstra and Lovász (LLL) can significantly improve the performance of suboptimal lattice decoders such as the zero-forcing and V-BLAST detectors.
Abstract: The idea of formulating the detection of a lattice-type modulation, such as M-PAM and M-QAM, transmitted over a linear channel as the so-called universal lattice decoding problem dates back to at least the early 1990s. The applications of such lattice decoders have proliferated in the last few years because of the growing importance of some linear channel models, such as multiple-antenna fading channels and multi-user CDMA channels. The principle of universal lattice decoding can trace its roots back to the theory and algorithms developed for solving the shortest/closest lattice vector problem for integer programming and cryptanalysis applications. In this semi-tutorial paper, this principle as well as some related recent advances will be reviewed and extended. It will be shown that the lattice basis reduction algorithm of Lenstra, Lenstra and Lovász (LLL) can significantly improve the performance of suboptimal lattice decoders such as the zero-forcing and V-BLAST detectors. In addition, a new implementation of the optimal lattice decoder that is particularly efficient at moderate signal-to-noise ratios will also be presented.
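
The effect of basis reduction is easiest to see in two dimensions, where LLL specializes to the classical Lagrange (Gauss) reduction. The sketch below is a simplified stand-in for full LLL, not the paper's algorithm: it reduces a skewed basis while preserving the lattice, so a zero-forcing detector working in the reduced basis faces a much better-conditioned matrix.

```python
import numpy as np

def lagrange_reduce(B):
    """2-D lattice basis reduction (the n = 2 special case of LLL).
    Columns of B are the basis vectors; returns a basis of the same
    lattice with shorter, more orthogonal columns."""
    b1, b2 = B[:, 0].astype(float), B[:, 1].astype(float)
    if np.dot(b1, b1) > np.dot(b2, b2):
        b1, b2 = b2, b1
    while True:
        mu = int(np.rint(np.dot(b1, b2) / np.dot(b1, b1)))
        b2 = b2 - mu * b1               # size-reduce b2 against b1
        if np.dot(b2, b2) >= np.dot(b1, b1):
            return np.column_stack([b1, b2])
        b1, b2 = b2, b1                 # swap and continue

# A badly conditioned basis of the integer lattice Z^2.
B = np.array([[1.0, 4.0], [1.0, 5.0]])
R = lagrange_reduce(B)
print(round(abs(float(np.linalg.det(R))), 6))  # → 1.0 (lattice volume kept)
print(float(np.dot(R[:, 0], R[:, 0])))         # → 1.0 (unit-norm basis vector)
```

Full LLL applies the same size-reduce/swap idea column by column in higher dimensions, which is what the paper exploits for MIMO detection.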

107 citations

Monograph
18 Dec 2008
TL;DR: In this book, the authors take the reader step by step through the connections with lattice sphere packing and covering problems, and show how computers may help to gain new insights, including the classification of totally real thin number fields, connections to the Minkowski conjecture, and the discovery of new, sometimes surprising, properties of exceptional structures such as the Leech lattice or the root lattices.
Abstract: Starting from classical arithmetical questions on quadratic forms, this book takes the reader step by step through the connections with lattice sphere packing and covering problems. As a model for polyhedral reduction theories of positive definite quadratic forms, Minkowski's classical theory is presented, including an application to multidimensional continued fraction expansions. The reduction theories of Voronoi are described in great detail, including full proofs, new views, and generalizations that cannot be found elsewhere. Based on Voronoi's second reduction theory, the local analysis of sphere coverings and several of its applications are presented. These include the classification of totally real thin number fields, connections to the Minkowski conjecture, and the discovery of new, sometimes surprising, properties of exceptional structures such as the Leech lattice or the root lattices. Throughout this book, special attention is paid to algorithms and computability, allowing computer-assisted treatments. Although dealing with relatively classical topics that have been worked on extensively by numerous authors, this book is exemplary in showing how computers may help to gain new insights.

104 citations

Proceedings Article
28 Jun 2009
TL;DR: The design of the precoder that maximizes the mutual information in linear vector Gaussian channels with an arbitrary input distribution is studied, and the optimal precoder's left singular vectors and singular values are derived.
Abstract: The design of the precoder that maximizes the mutual information in linear vector Gaussian channels with an arbitrary input distribution is studied. Precisely, the optimal precoder's left singular vectors and singular values are derived. The characterization of the right singular vectors is left, in general, as an open problem whose computational complexity is then studied in three cases: Gaussian signaling, low SNR, and high SNR. For the Gaussian signaling case and the low SNR regime, the dependence of the mutual information on the right singular vectors vanishes, making the optimal precoder design problem easy to solve. In the high SNR regime, however, the dependence on the right singular vectors cannot be avoided, and we show the difficulty of computing the optimal precoder through an NP-hardness analysis.
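
For the Gaussian-signaling case, the optimal singular values of the precoder follow from the classical water-filling allocation over the channel's eigenvalue gains. A hedged sketch of that allocation (the function name and test values are illustrative; the paper's general construction is more involved):

```python
import numpy as np

def waterfill(gains, power):
    """Water-filling over channel eigenvalue gains: maximize
    sum log2(1 + g_i * p_i) subject to sum p_i = power, p_i >= 0.
    Returns the allocation in decreasing-gain order."""
    g = np.sort(np.asarray(gains, dtype=float))[::-1]
    for k in range(len(g), 0, -1):           # try k active modes
        mu = (power + np.sum(1.0 / g[:k])) / k   # water level
        p = mu - 1.0 / g[:k]
        if p[-1] >= 0:                       # weakest active mode feasible
            out = np.zeros(len(g))
            out[:k] = p
            return out
    return np.zeros(len(g))

# Two equal-gain modes split the power evenly.
print(waterfill([1.0, 1.0], 2.0))  # → [1. 1.]
```

With discrete (e.g. QAM) inputs this simple allocation is no longer optimal, which is exactly the difficulty the paper's NP-hardness analysis addresses.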

101 citations

Book
31 Jan 2003
TL;DR: Coded Modulation Systems is an introduction to the subject of coded modulation in digital communication designed for classroom use and for anyone wanting to learn the ideas behind this modern kind of coding.
Abstract: Coded Modulation Systems is an introduction to the subject of coded modulation in digital communication. It is designed for classroom use and for anyone wanting to learn the ideas behind this modern kind of coding. Coded modulation is signal encoding that takes into account the nature of the channel over which it is used. Traditional error correcting codes work with bits and add redundant bits in order to correct transmission errors. In coded modulation, continuous time signals and their phases and amplitudes play the major role. The coding can be seen as a patterning of these quantities. The object is still to correct errors, but more fundamentally, it is to conserve signal energy and bandwidth at a given error performance. The book divides coded modulation into three major parts. Trellis coded modulation (TCM) schemes encode the points of QAM constellations; lattice coding and set-partition techniques play major roles here. Continuous-phase modulation (CPM) codes encode the signal phase, and create constant envelope RF signals. The partial-response signaling (PRS) field includes intersymbol interference problems, signals generated by real convolution, and signals created by lowpass filtering. In addition to these topics, the book covers coding techniques of several kinds for fading channels, spread spectrum and repeat-request systems. The history of the subject is fully traced back to the formative work of Shannon in 1949. Full explanation of the basics and complete homework problems make the book ideal for self-study or classroom use.

93 citations

Journal Article
TL;DR: The conditions necessary to achieve undistorted transmission of a pulse signal over a channel of finite bandwidth have been set down by Nyquist and are extended in this paper to eliminate the bandwidth restrictions.
Abstract: The conditions necessary to achieve undistorted transmission of a pulse signal over a channel of finite bandwidth have been set down by Nyquist. These conditions are extended in this paper to eliminate the bandwidth restrictions. Conditions on the real and imaginary parts of the overall system characteristic which lead to the elimination of intersymbol amplitude and pulse width distortion are found. These generalized constraints do not depend on any sharp band limitation and permit one to find ideal conditions for band pass and gradual cutoff systems. The application of Nyquist's conditions usually amounts to equalizing the transmission characteristics in order to approximate an overall linear phase and some sort of symmetrical amplitude roll-off. This paper shows that the principles of channel shaping for distortionless transmission are a good deal more flexible than this. The application of this more general interpretation of Nyquist's theory is illustrated by several examples.
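
Nyquist's zero-ISI condition is commonly met by a raised-cosine overall characteristic, whose impulse response is one at t = 0 and vanishes at every other symbol instant. A small numerical check (T = 1 and the roll-off β = 0.35 are illustrative choices, not from the paper):

```python
import numpy as np

def raised_cosine(t, beta):
    """Raised-cosine pulse with symbol time T = 1. It satisfies Nyquist's
    zero-ISI criterion: p(0) = 1 and p(k) = 0 for every other integer k."""
    t = np.asarray(t, dtype=float)
    denom = 1.0 - (2.0 * beta * t) ** 2
    sing = np.isclose(denom, 0.0)            # removable singularity
    safe = np.where(sing, 1.0, denom)
    p = np.sinc(t) * np.cos(np.pi * beta * t) / safe
    # Known limit value at |t| = 1/(2*beta)
    return np.where(sing, (np.pi / 4.0) * np.sinc(1.0 / (2.0 * beta)), p)

k = np.arange(1, 6, dtype=float)             # nonzero symbol instants
print(bool(np.max(np.abs(raised_cosine(k, 0.35))) < 1e-9))  # → True
print(float(raised_cosine(np.array([0.0]), 0.35)[0]))       # → 1.0
```

The paper's point is that such symmetric roll-off shaping is only one instance of a much more flexible family of distortionless channel characteristics.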

92 citations