
Dissertation

On Linear Transmission Systems

01 Jan 2012

TL;DR: The objective in Part I is to study the impact of both the signaling rate and the pulse shape on the information rate of single-antenna, single-carrier linear modulation systems. In Part II, an iterative optimization method is developed that produces precoders improving upon the best known ones in the literature.

Abstract: This thesis is divided into two parts. Part I analyzes the information rate of single-antenna, single-carrier linear modulation systems. The information rate of a system is the maximum number of bits that can be transmitted per channel use; it is achieved by Gaussian symbols and depends on the underlying pulse shape of the linearly modulated signal as well as on the signaling rate, the rate at which the Gaussian symbols are transmitted. The objective in Part I is to study the impact of both the signaling rate and the pulse shape on the information rate.

Part II of the thesis is devoted to multiple-antenna (MIMO) systems, and more specifically to linear precoders for MIMO channels. Linear precoding is a practical scheme for improving the performance of a MIMO system, and has been studied intensively during the last four decades. In practical applications, the symbols to be transmitted are taken from a discrete alphabet, such as quadrature amplitude modulation (QAM), and it is of interest to find the optimal linear precoder for a certain performance measure of the MIMO channel. The design problem depends on the particular performance measure and the receiver structure. The main difficulty in finding the optimal precoders is the discrete nature of the problem, and mostly suboptimal solutions have been proposed. The problem has been well investigated when linear receivers are employed, for which optimal precoders were found for many different performance measures. However, in the case of the optimal maximum likelihood (ML) receiver, only suboptimal constructions have been possible so far. Part II starts by proposing novel, low-complexity, suboptimal precoders that provide a low bit error rate (BER) at the receiver. An iterative optimization method is then developed, which produces precoders improving upon the best known ones in the literature. The resulting precoders turn out to exhibit a certain structure, which is then analyzed and proved to be optimal for large alphabets.

Topics: Precoding (58%), MIMO (56%), Quadrature amplitude modulation (56%), Code rate (54%), QAM (52%)
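To make the Part I question concrete, here is a minimal numerical sketch (not taken from the thesis) of how the information rate of linear modulation with Gaussian symbols varies with the signaling rate: the achievable rate per symbol follows from the folded spectrum of the pulse's Gram sequence, and packing symbols closer (tau < 1, i.e. faster-than-Nyquist) trades rate per symbol against symbols per second. The root-raised-cosine pulse, the SNR, and all function names below are illustrative assumptions.

```python
# Hedged sketch (not from the thesis): information rate of linear modulation
# with i.i.d. complex Gaussian symbols vs. signaling rate. tau < 1 compresses
# the symbol period (faster-than-Nyquist signaling).
import numpy as np

def rrc_pulse(t, beta=0.3, T=1.0):
    """Root-raised-cosine pulse; an assumed pulse shape for illustration."""
    t = np.asarray(t, float)
    out = np.empty_like(t)
    for i, ti in enumerate(t):
        if abs(ti) < 1e-9:
            out[i] = 1 + beta*(4/np.pi - 1)
        elif abs(abs(ti) - T/(4*beta)) < 1e-9:
            out[i] = (beta/np.sqrt(2))*((1 + 2/np.pi)*np.sin(np.pi/(4*beta))
                                        + (1 - 2/np.pi)*np.cos(np.pi/(4*beta)))
        else:
            num = (np.sin(np.pi*ti*(1 - beta)/T)
                   + 4*beta*ti/T*np.cos(np.pi*ti*(1 + beta)/T))
            out[i] = num/(np.pi*ti/T*(1 - (4*beta*ti/T)**2))
    return out

def info_rate_per_second(tau, snr_db=10.0, T=1.0, os=32, span=40, nfft=4096):
    """Bits per T seconds via the folded spectrum of the Gram taps g[k]."""
    dt = T/os
    t = np.arange(-span*T, span*T + dt/2, dt)
    h = rrc_pulse(t, T=T)
    h /= np.sqrt(np.sum(h**2)*dt)                        # unit-energy pulse
    K = 30                                               # ISI memory retained
    # g[k] = <h(t), h(t - k*tau*T)>: pulse autocorrelation at symbol spacing
    g = [np.sum(h*np.interp(t - k*tau*T, t, h, 0, 0))*dt for k in range(-K, K + 1)]
    lam = np.fft.fft(np.fft.ifftshift(g), nfft).real     # folded spectrum >= 0
    snr = 10**(snr_db/10)
    bits_per_symbol = np.mean(np.log2(1 + snr*np.clip(lam, 0, None)))
    return bits_per_symbol/(tau*T)

for tau in (1.0, 0.9, 0.8):
    print(f"tau = {tau}: {info_rate_per_second(tau):.3f} bits per T seconds")
```

In this setup the rate per second does not decrease as the signaling rate grows while the PSD stays fixed, which is the kind of effect Part I quantifies.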



Citations



Dissertation
01 Jan 2013
TL;DR: A framework is proposed for designing reduced-complexity receivers for FTN and general linear channels that achieve optimal or near-optimal performance, together with an improvement of the minimum-phase conversion that sharpens the focus of the ISI model energy.
Abstract: Fast and reliable data transmission together with high bandwidth efficiency are important design aspects in a modern digital communication system. Many different approaches exist, but in this thesis bandwidth efficiency is obtained by increasing the data transmission rate with the faster-than-Nyquist (FTN) framework while keeping a fixed power spectral density (PSD). In FTN, consecutive information-carrying symbols can overlap in time and in that way introduce a controlled amount of intentional intersymbol interference (ISI). This technique was introduced already in 1975 by Mazo and has since been extended in many directions. Since the ISI stemming from practical FTN signaling can be of significant duration, optimum detection with traditional methods is often prohibitively complex, and alternative equalization methods with acceptable complexity-performance tradeoffs are needed. The key objective of this thesis is therefore to design reduced-complexity receivers for FTN and general linear channels that achieve optimal or near-optimal performance. Although the performance of a detector can be measured by several means, this thesis is restricted to bit error rate (BER) and mutual information results. FTN signaling is applied in two ways: as a separate uncoded narrowband communication system, or in a coded scenario consisting of a convolutional encoder, interleaver and the inner ISI mechanism in serial concatenation. Turbo equalization, where soft information in the form of log likelihood ratios (LLRs) is exchanged between the equalizer and the decoder, is a commonly used decoding technique for coded FTN signals.

The first part of the thesis considers receivers and arising stability problems when working within the white noise constraint. New M-BCJR algorithms for turbo equalization are proposed and compared to reduced-trellis VA and BCJR benchmarks based on an offset label idea. By adding a third low-complexity M-BCJR recursion, LLR quality is improved for practical values of M, where M measures the reduced number of BCJR computations for each data symbol. An improvement of the minimum-phase conversion that sharpens the focus of the ISI model energy is proposed. When combined with a delayed and slightly mismatched receiver, the decoding allows a smaller M without significant loss in BER.

The second part analyzes the effect of the internal metric calculations on the performance of Forney- and Ungerboeck-based reduced-complexity equalizers of the M-algorithm type for both ISI and multiple-input multiple-output (MIMO) channels. Even though the final output of a full-complexity equalizer is identical for both models, the internal metric calculations are in general different; hence, suboptimum methods need not produce the same final output. Additionally, new models working in between the two extremes are proposed and evaluated. Note that the choice of observation model does not impact the detection complexity, as the underlying algorithm is unaltered.

The last part of the thesis is devoted to a different complexity-reducing approach: channel-shortening detectors for linear channels are optimized from an information-theoretic perspective. The achievable information rates of the shortened models, as well as closed-form expressions for all components of the optimal detector of the class, are derived. The framework used here is more general than what has previously been used within the area.

2 citations
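Since the M-algorithm is central to the abstract above, a toy hard-decision version may help: at each trellis depth only the M best partial paths survive. This is a hedged sketch of the generic M-algorithm with a Forney-style Euclidean metric, not the thesis's M-BCJR (which additionally produces soft LLRs for turbo equalization); the channel taps and parameters are invented for illustration.

```python
# Hedged sketch: M-algorithm sequence detection over an ISI channel with a
# Forney (Euclidean) metric. Illustrative only; taps and M are made up.
import heapq
import numpy as np

def m_algorithm_detect(y, h, M=8, alphabet=(-1.0, 1.0)):
    """Keep at most M partial paths per depth; metric is sum |y - h*x|^2."""
    L = len(h)
    paths = [(0.0, ())]                     # (accumulated metric, decisions)
    for n in range(len(y)):
        cand = []
        for metric, syms in paths:
            for a in alphabet:
                ext = syms + (a,)
                # convolve the decided symbols with the ISI taps at time n
                s = sum(h[k]*ext[n - k] for k in range(L) if n - k >= 0)
                cand.append((metric + (y[n] - s)**2, ext))
        paths = heapq.nsmallest(M, cand, key=lambda p: p[0])
    return np.array(min(paths, key=lambda p: p[0])[1])

# toy usage: BPSK over a 3-tap ISI channel
rng = np.random.default_rng(0)
x = rng.choice([-1.0, 1.0], size=50)
h = np.array([0.8, 0.5, 0.3])
y = np.convolve(x, h)[:len(x)] + 0.1*rng.standard_normal(len(x))
print(np.mean(m_algorithm_detect(y, h, M=8) != x))   # symbol error rate
```

With unbounded M this becomes an exhaustive search; the interest is in how small M can be made before the BER degrades, which is the complexity-performance tradeoff the thesis studies.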


References

Book
Elements of Information Theory
Thomas M. Cover, Joy A. Thomas
01 Jan 1991
TL;DR: The authors examine the role of entropy, information inequalities, and randomness in the fundamental limits of data compression and channel coding, and in the design and construction of codes.
Abstract: Preface to the Second Edition. Preface to the First Edition. Acknowledgments for the Second Edition. Acknowledgments for the First Edition.
1. Introduction and Preview. 1.1 Preview of the Book.
2. Entropy, Relative Entropy, and Mutual Information. 2.1 Entropy. 2.2 Joint Entropy and Conditional Entropy. 2.3 Relative Entropy and Mutual Information. 2.4 Relationship Between Entropy and Mutual Information. 2.5 Chain Rules for Entropy, Relative Entropy, and Mutual Information. 2.6 Jensen's Inequality and Its Consequences. 2.7 Log Sum Inequality and Its Applications. 2.8 Data-Processing Inequality. 2.9 Sufficient Statistics. 2.10 Fano's Inequality. Summary. Problems. Historical Notes.
3. Asymptotic Equipartition Property. 3.1 Asymptotic Equipartition Property Theorem. 3.2 Consequences of the AEP: Data Compression. 3.3 High-Probability Sets and the Typical Set. Summary. Problems. Historical Notes.
4. Entropy Rates of a Stochastic Process. 4.1 Markov Chains. 4.2 Entropy Rate. 4.3 Example: Entropy Rate of a Random Walk on a Weighted Graph. 4.4 Second Law of Thermodynamics. 4.5 Functions of Markov Chains. Summary. Problems. Historical Notes.
5. Data Compression. 5.1 Examples of Codes. 5.2 Kraft Inequality. 5.3 Optimal Codes. 5.4 Bounds on the Optimal Code Length. 5.5 Kraft Inequality for Uniquely Decodable Codes. 5.6 Huffman Codes. 5.7 Some Comments on Huffman Codes. 5.8 Optimality of Huffman Codes. 5.9 Shannon-Fano-Elias Coding. 5.10 Competitive Optimality of the Shannon Code. 5.11 Generation of Discrete Distributions from Fair Coins. Summary. Problems. Historical Notes.
6. Gambling and Data Compression. 6.1 The Horse Race. 6.2 Gambling and Side Information. 6.3 Dependent Horse Races and Entropy Rate. 6.4 The Entropy of English. 6.5 Data Compression and Gambling. 6.6 Gambling Estimate of the Entropy of English. Summary. Problems. Historical Notes.
7. Channel Capacity. 7.1 Examples of Channel Capacity. 7.2 Symmetric Channels. 7.3 Properties of Channel Capacity. 7.4 Preview of the Channel Coding Theorem. 7.5 Definitions. 7.6 Jointly Typical Sequences. 7.7 Channel Coding Theorem. 7.8 Zero-Error Codes. 7.9 Fano's Inequality and the Converse to the Coding Theorem. 7.10 Equality in the Converse to the Channel Coding Theorem. 7.11 Hamming Codes. 7.12 Feedback Capacity. 7.13 Source-Channel Separation Theorem. Summary. Problems. Historical Notes.
8. Differential Entropy. 8.1 Definitions. 8.2 AEP for Continuous Random Variables. 8.3 Relation of Differential Entropy to Discrete Entropy. 8.4 Joint and Conditional Differential Entropy. 8.5 Relative Entropy and Mutual Information. 8.6 Properties of Differential Entropy, Relative Entropy, and Mutual Information. Summary. Problems. Historical Notes.
9. Gaussian Channel. 9.1 Gaussian Channel: Definitions. 9.2 Converse to the Coding Theorem for Gaussian Channels. 9.3 Bandlimited Channels. 9.4 Parallel Gaussian Channels. 9.5 Channels with Colored Gaussian Noise. 9.6 Gaussian Channels with Feedback. Summary. Problems. Historical Notes.
10. Rate Distortion Theory. 10.1 Quantization. 10.2 Definitions. 10.3 Calculation of the Rate Distortion Function. 10.4 Converse to the Rate Distortion Theorem. 10.5 Achievability of the Rate Distortion Function. 10.6 Strongly Typical Sequences and Rate Distortion. 10.7 Characterization of the Rate Distortion Function. 10.8 Computation of Channel Capacity and the Rate Distortion Function. Summary. Problems. Historical Notes.
11. Information Theory and Statistics. 11.1 Method of Types. 11.2 Law of Large Numbers. 11.3 Universal Source Coding. 11.4 Large Deviation Theory. 11.5 Examples of Sanov's Theorem. 11.6 Conditional Limit Theorem. 11.7 Hypothesis Testing. 11.8 Chernoff-Stein Lemma. 11.9 Chernoff Information. 11.10 Fisher Information and the Cramér-Rao Inequality. Summary. Problems. Historical Notes.
12. Maximum Entropy. 12.1 Maximum Entropy Distributions. 12.2 Examples. 12.3 Anomalous Maximum Entropy Problem. 12.4 Spectrum Estimation. 12.5 Entropy Rates of a Gaussian Process. 12.6 Burg's Maximum Entropy Theorem. Summary. Problems. Historical Notes.
13. Universal Source Coding. 13.1 Universal Codes and Channel Capacity. 13.2 Universal Coding for Binary Sequences. 13.3 Arithmetic Coding. 13.4 Lempel-Ziv Coding. 13.5 Optimality of Lempel-Ziv Algorithms. Summary. Problems. Historical Notes.
14. Kolmogorov Complexity. 14.1 Models of Computation. 14.2 Kolmogorov Complexity: Definitions and Examples. 14.3 Kolmogorov Complexity and Entropy. 14.4 Kolmogorov Complexity of Integers. 14.5 Algorithmically Random and Incompressible Sequences. 14.6 Universal Probability. 14.7 Kolmogorov Complexity. 14.9 Universal Gambling. 14.10 Occam's Razor. 14.11 Kolmogorov Complexity and Universal Probability. 14.12 Kolmogorov Sufficient Statistic. 14.13 Minimum Description Length Principle. Summary. Problems. Historical Notes.
15. Network Information Theory. 15.1 Gaussian Multiple-User Channels. 15.2 Jointly Typical Sequences. 15.3 Multiple-Access Channel. 15.4 Encoding of Correlated Sources. 15.5 Duality Between Slepian-Wolf Encoding and Multiple-Access Channels. 15.6 Broadcast Channel. 15.7 Relay Channel. 15.8 Source Coding with Side Information. 15.9 Rate Distortion with Side Information. 15.10 General Multiterminal Networks. Summary. Problems. Historical Notes.
16. Information Theory and Portfolio Theory. 16.1 The Stock Market: Some Definitions. 16.2 Kuhn-Tucker Characterization of the Log-Optimal Portfolio. 16.3 Asymptotic Optimality of the Log-Optimal Portfolio. 16.4 Side Information and the Growth Rate. 16.5 Investment in Stationary Markets. 16.6 Competitive Optimality of the Log-Optimal Portfolio. 16.7 Universal Portfolios. 16.8 Shannon-McMillan-Breiman Theorem (General AEP). Summary. Problems. Historical Notes.
17. Inequalities in Information Theory. 17.1 Basic Inequalities of Information Theory. 17.2 Differential Entropy. 17.3 Bounds on Entropy and Relative Entropy. 17.4 Inequalities for Types. 17.5 Combinatorial Bounds on Entropy. 17.6 Entropy Rates of Subsets. 17.7 Entropy and Fisher Information. 17.8 Entropy Power Inequality and Brunn-Minkowski Inequality. 17.9 Inequalities for Determinants. 17.10 Inequalities for Ratios of Determinants. Summary. Problems. Historical Notes.
Bibliography. List of Symbols. Index.

42,928 citations
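As a small executable companion to the book's data-compression chapters, the sketch below builds Huffman codeword lengths and checks the optimal-code bound H(X) <= L < H(X) + 1; the toy distribution and function names are my own, not from the book.

```python
# Hedged sketch: Huffman codeword lengths for a toy source, checking the
# optimal-code bound H(X) <= L < H(X) + 1. Toy values are invented.
import heapq
import itertools
import numpy as np

def huffman_lengths(probs):
    """Return codeword lengths of a binary Huffman code for 'probs'."""
    count = itertools.count()                  # tie-breaker for the heap
    heap = [(p, next(count), [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0]*len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)
        p2, _, s2 = heapq.heappop(heap)
        for i in s1 + s2:                      # every merge adds one bit
            lengths[i] += 1
        heapq.heappush(heap, (p1 + p2, next(count), s1 + s2))
    return lengths

p = [0.5, 0.25, 0.125, 0.125]
L = huffman_lengths(p)                         # [1, 2, 3, 3] for this dyadic p
H = -sum(pi*np.log2(pi) for pi in p)
avg = sum(pi*li for pi, li in zip(p, L))
print(H, avg)                                  # 1.75 and 1.75: bound met with equality
```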


Journal Article
Capacity of Multi-antenna Gaussian Channels
Emre Telatar
01 Nov 1999
Abstract: We investigate the use of multiple transmitting and/or receiving antennas for single-user communications over the additive Gaussian channel with and without fading. We derive formulas for the capacities and error exponents of such channels, and describe computational procedures to evaluate such formulas. We show that the potential gains of such multi-antenna systems over single-antenna systems are rather large under independence assumptions for the fades and noises at different receiving antennas.

12,396 citations
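The capacity formula analyzed in this paper is easy to probe numerically. Below is a minimal Monte Carlo sketch (my own, with assumed parameter names) of the ergodic capacity E[log2 det(I + (rho/nt) H H*)] for i.i.d. Rayleigh fading with equal per-antenna power and channel knowledge at the receiver only.

```python
# Hedged sketch: Monte Carlo estimate of the ergodic MIMO capacity
# E[log2 det(I + (rho/nt) H H^H)] under i.i.d. Rayleigh fading.
import numpy as np

def ergodic_capacity(nt, nr, snr_db, trials=20000, seed=0):
    rng = np.random.default_rng(seed)
    rho = 10**(snr_db/10)
    total = 0.0
    for _ in range(trials):
        # CN(0,1) entries: independent fades between antenna pairs
        H = (rng.standard_normal((nr, nt))
             + 1j*rng.standard_normal((nr, nt)))/np.sqrt(2)
        _, logdet = np.linalg.slogdet(np.eye(nr) + (rho/nt)*(H @ H.conj().T))
        total += logdet/np.log(2)
    return total/trials

# Capacity grows roughly linearly in min(nt, nr) at fixed SNR:
for n in (1, 2, 4):
    print(n, round(ergodic_capacity(n, n, 10.0), 2))
```

The printed values grow roughly linearly in min(nt, nr), illustrating the large multi-antenna gains the paper establishes.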


Book
Discrete-Time Signal Processing
Alan V. Oppenheim, Ronald W. Schafer
01 Jan 1989
Abstract: For senior/graduate-level courses in discrete-time signal processing. The definitive, authoritative text on DSP, ideal for those with an introductory-level knowledge of signals and systems. Written by prominent DSP pioneers, it provides a thorough treatment of the fundamental theorems and properties of discrete-time linear systems, filtering, sampling, and discrete-time Fourier analysis. By focusing on the general and universal concepts in discrete-time signal processing, it remains vital and relevant to the new challenges arising in the field, without limiting itself to specific technologies with relatively short life spans.

10,383 citations


Journal Article
On Limits of Wireless Communications in a Fading Environment when Using Multiple Antennas
Gerard J. Foschini, Michael J. Gans
Abstract: This paper is motivated by the need for fundamental understanding of ultimate limits of bandwidth-efficient delivery of higher bit-rates in digital wireless communications, and to also begin to look into how these limits might be approached. We examine exploitation of multi-element array (MEA) technology, that is, processing the spatial dimension (not just the time dimension) to improve wireless capacities in certain applications. Specifically, we present some basic information theory results that promise great advantages of using MEAs in wireless LANs and building-to-building wireless communication links. We explore the important case when the channel characteristic is not available at the transmitter but the receiver knows (tracks) the characteristic, which is subject to Rayleigh fading. Fixing the overall transmitted power, we express the capacity offered by MEA technology and we see how the capacity scales with increasing SNR for a large but practical number, n, of antenna elements at both transmitter and receiver. We investigate the case of independent Rayleigh-faded paths between antenna elements and find that with high probability extraordinary capacity is available. Compared to the baseline n = 1 case, which by Shannon's classical formula scales as one more bit/cycle for every 3 dB of signal-to-noise ratio (SNR) increase, remarkably, with MEAs the scaling is almost like n more bits/cycle for each 3 dB increase in SNR. To illustrate how great this capacity is, even for small n, take the cases n = 2, 4 and 16 at an average received SNR of 21 dB. For over 99% of the channels the capacity is about 7, 19 and 88 bits/cycle respectively, while if n = 1 there is only about 1.2 bits/cycle at the 99% level. For a symbol rate equal to the channel bandwidth, since it is the bits/symbol/dimension that is relevant for signal constellations, these higher capacities are not unreasonable: the 19 bits/cycle for n = 4 amounts to 4.75 bits/symbol/dimension, while 88 bits/cycle for n = 16 amounts to 5.5 bits/symbol/dimension. Standard approaches such as selection and optimum combining are seen to be deficient when compared to what will ultimately be possible. New codecs need to be invented to realize a hefty portion of the great capacity promised.

10,358 citations
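The "99% of the channels" figures quoted above correspond to a 1%-outage capacity, which can be estimated by simulation. This is a hedged sketch under the same assumptions as the paper's setting (n x n i.i.d. Rayleigh fading, 21 dB average SNR, receiver-only channel knowledge, equal power split); the resulting numbers are my numerical estimates, not the paper's.

```python
# Hedged sketch: 1%-outage capacity of an n x n i.i.d. Rayleigh MIMO channel,
# the quantity behind the "capacity at the 99% level" figures quoted above.
import numpy as np

def outage_capacity(n, snr_db=21.0, outage=0.01, trials=20000, seed=1):
    rng = np.random.default_rng(seed)
    rho = 10**(snr_db/10)
    caps = np.empty(trials)
    for i in range(trials):
        H = (rng.standard_normal((n, n))
             + 1j*rng.standard_normal((n, n)))/np.sqrt(2)
        _, logdet = np.linalg.slogdet(np.eye(n) + (rho/n)*(H @ H.conj().T))
        caps[i] = logdet/np.log(2)
    return np.quantile(caps, outage)   # exceeded with probability 1 - outage

for n in (1, 2, 4, 16):
    print(n, round(outage_capacity(n), 1))   # roughly 1.2, 7, 19, 88 bits/cycle
```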

