Author

Anthony G. Constantinides

Other affiliations: General Post Office, University of Hull, City University London
Bio: Anthony G. Constantinides is an academic researcher from Imperial College London. His research focuses on adaptive filters and signal processing. He has an h-index of 28, has co-authored 319 publications, and has received 5,012 citations. Previous affiliations include the General Post Office and the University of Hull.


Papers
Book
01 Jan 2007
TL;DR: This book presents a detailed introduction to the analysis and design of multiple-input multiple-output (MIMO) wireless systems and examines their fundamental capacity limits.
Abstract: Multiple-input multiple-output (MIMO) technology constitutes a breakthrough in the design of wireless communications systems, and is already at the core of several wireless standards. Exploiting multipath scattering, MIMO techniques deliver significant performance enhancements in terms of data transmission rate and interference reduction. This book is a detailed introduction to the analysis and design of MIMO wireless systems. Beginning with an overview of MIMO technology, the authors then examine the fundamental capacity limits of MIMO systems. Transmitter design, including precoding and space-time coding, is then treated in depth, and the book closes with two chapters devoted to receiver design. Written by a team of leading experts, the book blends theoretical analysis with physical insights, and highlights a range of key design challenges. It can be used as a textbook for advanced courses on wireless communications, and will also appeal to researchers and practitioners working on MIMO wireless systems.
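The capacity analysis such a book opens with rests on the well-known log-det formula, C = log2 det(I + (ρ/Nt) H Hᴴ), for equal power allocation across Nt transmit antennas. A minimal sketch (function names and the Monte Carlo setup are illustrative, not taken from the book):

```python
import numpy as np

def mimo_capacity(H, snr):
    """MIMO capacity (bit/s/Hz) with equal power allocation:
    C = log2 det(I + (snr/Nt) * H @ H^H)."""
    nr, nt = H.shape
    m = np.eye(nr) + (snr / nt) * (H @ H.conj().T)
    return float(np.real(np.log2(np.linalg.det(m))))

# Ergodic capacity of a 4x4 i.i.d. Rayleigh channel at 10 dB SNR,
# estimated by Monte Carlo (illustrative setup)
rng = np.random.default_rng(0)
snr = 10.0  # 10 dB in linear scale
caps = []
for _ in range(2000):
    H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
    caps.append(mimo_capacity(H, snr))
ergodic_capacity = float(np.mean(caps))
```

The multipath-scattering gain mentioned in the abstract shows up here directly: the 4x4 ergodic capacity is several times the single-antenna capacity at the same SNR.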

721 citations

Journal ArticleDOI
TL;DR: A class of neural networks appropriate for general nonlinear programming, i.e., problems including both equality and inequality constraints, is analyzed in detail and the methodology is based on the Lagrange multiplier theory in optimization and seeks to provide solutions satisfying the necessary conditions of optimality.
Abstract: A class of neural networks appropriate for general nonlinear programming, i.e., problems including both equality and inequality constraints, is analyzed in detail. The methodology is based on the Lagrange multiplier theory in optimization and seeks to provide solutions satisfying the necessary conditions of optimality. The equilibrium point of the network satisfies the Kuhn-Tucker condition for the problem. No explicit restriction is imposed on the form of the cost function apart from some general regularity and convexity conditions. The stability of the neural networks is analyzed in detail. The transient behavior of the network is simulated and the validity of the approach is verified for a practical problem, maximum entropy image restoration.
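The paper's network realises Lagrange-multiplier dynamics in analog form; a discrete-time caricature of the same idea is gradient descent on the primal variables coupled with gradient ascent on the multiplier. The sketch below (equality constraint only; names, step sizes, and the example problem are assumptions, not the paper's construction) settles at a point satisfying the first-order optimality conditions:

```python
import numpy as np

# Discrete-time caricature of Lagrange-multiplier network dynamics
# (assumed form, not the paper's circuit): primal descent, dual ascent,
# for  min f(x)  subject to  h(x) = 0.
def lagrangian_dynamics(grad_f, h, grad_h, x, lam, lr=0.05, steps=5000):
    for _ in range(steps):
        x = x - lr * (grad_f(x) + lam * grad_h(x))  # primal descent
        lam = lam + lr * h(x)                       # dual ascent
    return x, lam

# Example: min x1^2 + x2^2  s.t.  x1 + x2 = 1  ->  x* = (0.5, 0.5), lam* = -1
x_opt, lam_opt = lagrangian_dynamics(
    grad_f=lambda x: 2.0 * x,
    h=lambda x: x[0] + x[1] - 1.0,
    grad_h=lambda x: np.ones(2),
    x=np.zeros(2),
    lam=0.0,
)
```

The equilibrium of these dynamics is exactly a stationary point of the Lagrangian, mirroring the paper's claim that the network's equilibrium satisfies the Kuhn-Tucker conditions.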

366 citations

Journal ArticleDOI
01 Aug 1970
TL;DR: In this paper, the authors describe general transformations for digital filters in the frequency domain, which operate on a lowpass-digital-filter prototype to give either another lowpass or a highpass, bandpass or band-elimination characteristic.
Abstract: The paper describes certain general transformations for digital filters in the frequency domain. The term digital filter is used to denote a processing unit operating on a sampled waveform, so that the input, output and intermediate signals are only defined at discrete intervals of time; the signals may be either p.a.m. or p.c.m. The transformations discussed operate on a lowpass-digital-filter prototype to give either another lowpass or a highpass, bandpass or band-elimination characteristic. The transformations are carried out by mapping the lowpass complex variable Z^-1 [where Z^-1 = exp(-jωT) and T is the time interval between samples] by functions of the form e^(jθ) ∏_{i=1}^{n} (z^-1 - α_i)/(1 - α_i* z^-1), known as unit functions.
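A minimal sketch of the lowpass-to-lowpass case, using the standard closed form for the real coefficient α that moves the prototype band edge θp to a new band edge ωp (function names are illustrative):

```python
import cmath
import math

def lp_to_lp(zinv, alpha):
    """First-order unit function for the lowpass-to-lowpass case:
    Z^-1 = (z^-1 - alpha) / (1 - alpha * z^-1), real |alpha| < 1."""
    return (zinv - alpha) / (1 - alpha * zinv)

def alpha_for(theta_p, omega_p):
    """Standard closed form for alpha that maps the prototype band
    edge theta_p onto the new band edge omega_p."""
    return math.sin((theta_p - omega_p) / 2) / math.sin((theta_p + omega_p) / 2)

# Unit functions are allpass: the unit circle maps onto itself,
a = alpha_for(math.pi / 2, math.pi / 4)             # move edge pi/2 -> pi/4
# and the new band edge maps back to the prototype band edge:
mapped = lp_to_lp(cmath.exp(-1j * math.pi / 4), a)  # equals exp(-j*pi/2)
```

Substituting this function for z^-1 in the prototype transfer function yields the transformed filter; the allpass property is what preserves the magnitude response shape while warping the frequency axis.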

285 citations

Journal ArticleDOI
01 Aug 1990
TL;DR: A new motion compensation scheme based on block matching is presented, where the size of each block is variable, and the proposed algorithm adaptively divides the image into blocks of variable size to meet the assumption of uniform motion for all blocks.
Abstract: In block matching type motion compensation schemes, the image is divided into blocks of the same size. For each block a search is conducted in the previous frame to locate the best correspondence. For the scheme to succeed, an implicit assumption has to be made that the motion within each block is uniform, an assumption which may not necessarily be correct, and as a result the quality of the prediction suffers. In the paper, a new motion compensation scheme based on block matching is presented, where the size of each block is variable. The proposed algorithm adaptively divides the image into blocks of variable size to meet the assumption of uniform motion for all blocks. The scheme has been successfully applied to simple interframe video coding. It is shown that the proposed algorithm can be extended to form the basis of a complete and efficient codec with low complexity. The possibility of combining the scheme with novel hybrid coding techniques to form sophisticated systems with low bit-rate performance, which compare favourably with other existing schemes, is also demonstrated.
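Fixed-size block matching, which the paper generalises, can be sketched as an exhaustive search minimising the sum of absolute differences (SAD); the names and search setup below are illustrative, not the paper's implementation:

```python
import numpy as np

def block_match(ref, cur, top, left, bsize, radius):
    """Exhaustive block matching: for the bsize x bsize block of `cur`
    at (top, left), find the displacement (dy, dx) within +/-radius
    whose window in `ref` minimises the sum of absolute differences."""
    block = cur[top:top + bsize, left:left + bsize].astype(np.int64)
    best, best_sad = (0, 0), None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bsize > ref.shape[0] or x + bsize > ref.shape[1]:
                continue  # candidate window falls outside the frame
            window = ref[y:y + bsize, x:x + bsize].astype(np.int64)
            sad = int(np.abs(window - block).sum())
            if best_sad is None or sad < best_sad:
                best, best_sad = (dy, dx), sad
    return best, best_sad

# Recover a known shift between two frames
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(32, 32))
cur = np.roll(ref, (2, 3), axis=(0, 1))   # cur[y, x] = ref[y-2, x-3]
motion, sad = block_match(ref, cur, top=8, left=8, bsize=8, radius=4)
```

A variable-size scheme in the spirit of the paper would then recursively split any block whose best SAD remains high, restoring the uniform-motion assumption locally.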

228 citations

Journal ArticleDOI
TL;DR: A new family of stochastic gradient adaptive filter algorithms is proposed which is based on mixed error norms, combining the advantages of different error norms, for example the conventional, relatively well-behaved, least mean square algorithm and the more sensitive, but better converging, least mean fourth algorithm.
Abstract: A new family of stochastic gradient adaptive filter algorithms is proposed which is based on mixed error norms. These algorithms combine the advantages of different error norms, for example the conventional, relatively well-behaved, least mean square algorithm and the more sensitive, but better converging, least mean fourth algorithm. A mixing parameter is included which controls the proportions of the error norms and offers an extra degree of freedom within the adaptation. A system identification simulation is used to demonstrate the performance of a least mean mixed-norm (square and fourth) algorithm.
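A sketch of the mixed-norm update the abstract describes, taking stochastic gradient steps on λE[e²] + (1−λ)E[e⁴] with constant factors absorbed into the step size (parameter values and the identification setup are illustrative):

```python
import numpy as np

def lmmn(x, d, order, mu=0.005, lam=0.8):
    """Least mean mixed-norm (square + fourth) adaptive filter sketch:
    stochastic gradient on  lam*E[e^2] + (1-lam)*E[e^4], with constant
    factors folded into mu. lam=1 reduces to LMS, lam=0 to least mean
    fourth."""
    w = np.zeros(order)
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]  # regressor, most recent sample first
        e = d[n] - w @ u                  # a priori error
        w = w + mu * (lam * e + 2.0 * (1.0 - lam) * e ** 3) * u
    return w

# System identification: estimate an FIR channel from input/output data
rng = np.random.default_rng(1)
h = np.array([1.0, 0.5, -0.3, 0.2])       # unknown channel (illustrative)
x = rng.standard_normal(8000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w = lmmn(x, d, order=len(h))
```

The mixing parameter λ is the extra degree of freedom the abstract mentions: the cubic error term accelerates convergence when the error is large, while the linear term keeps the adaptation well-behaved near convergence.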

187 citations


Cited by
Book ChapterDOI
01 Jan 2011
TL;DR: Weak convergence methods in metric spaces were studied in this article, with applications sufficient to show their power and utility, and the results of the first three chapters are used in Chapter 4 to derive a variety of limit theorems for dependent sequences of random variables.
Abstract: The author's preface gives an outline: "This book is about weak convergence methods in metric spaces, with applications sufficient to show their power and utility. The Introduction motivates the definitions and indicates how the theory will yield solutions to problems arising outside it. Chapter 1 sets out the basic general theorems, which are then specialized in Chapter 2 to the space C[0, 1] of continuous functions on the unit interval and in Chapter 3 to the space D[0, 1] of functions with discontinuities of the first kind. The results of the first three chapters are used in Chapter 4 to derive a variety of limit theorems for dependent sequences of random variables." The book develops and expands on Donsker's 1951 and 1952 papers on the invariance principle and empirical distributions. The basic random variables remain real-valued although, of course, measures on C[0, 1] and D[0, 1] are vitally used. Within this framework, there are various possibilities for a different and apparently better treatment of the material. More of the general theory of weak convergence of probabilities on separable metric spaces would be useful. Metrizability of the convergence is not brought up until late in the Appendix. The close relation of the Prokhorov metric and a metric for convergence in probability is (hence) not mentioned (see V. Strassen, Ann. Math. Statist. 36 (1965), 423-439; the reviewer, ibid. 39 (1968), 1563-1572). This relation would illuminate and organize such results as Theorems 4.1, 4.2 and 4.4 which give isolated, ad hoc connections between weak convergence of measures and nearness in probability. In the middle of p. 16, it should be noted that C*(S) consists of signed measures which need only be finitely additive if S is not compact. On p. 239, where the author twice speaks of separable subsets having nonmeasurable cardinal, he means "discrete" rather than "separable."
Theorem 1.4 is Ulam's theorem that a Borel probability on a complete separable metric space is tight. Theorem 1 of Appendix 3 weakens completeness to topological completeness. After mentioning that probabilities on the rationals are tight, the author says it is an

3,554 citations

Journal ArticleDOI
TL;DR: A texture segmentation algorithm inspired by the multi-channel filtering theory for visual information processing in the early stages of the human visual system is presented; the segmentation is based on reconstruction of the input image from the filtered images.

2,351 citations

Journal ArticleDOI
TL;DR: This paper describes the statistical models of fading channels which are frequently used in the analysis and design of communication systems, and focuses on the information theory of fading channels, emphasizing capacity as the most important performance measure.
Abstract: In this paper we review the most peculiar and interesting information-theoretic and communications features of fading channels. We first describe the statistical models of fading channels which are frequently used in the analysis and design of communication systems. Next, we focus on the information theory of fading channels, by emphasizing capacity as the most important performance measure. Both single-user and multiuser transmission are examined. Further, we describe how the structure of fading channels impacts code design, and finally overview equalization of fading multipath channels.

2,017 citations

Journal ArticleDOI
01 Oct 1980

1,565 citations

Journal ArticleDOI
TL;DR: The paper shows that the precoding matrix used by the base station in one cell becomes corrupted by the channel between that base station and the users in other cells in an undesirable manner and develops a new multi-cell MMSE-based precoding method that mitigates this problem.
Abstract: This paper considers a multi-cell multiple antenna system with precoding used at the base stations for downlink transmission. Channel state information (CSI) is essential for precoding at the base stations. An effective technique for obtaining this CSI is time-division duplex (TDD) operation where uplink training in conjunction with reciprocity simultaneously provides the base stations with downlink as well as uplink channel estimates. This paper mathematically characterizes the impact that uplink training has on the performance of such multi-cell multiple antenna systems. When non-orthogonal training sequences are used for uplink training, the paper shows that the precoding matrix used by the base station in one cell becomes corrupted by the channel between that base station and the users in other cells in an undesirable manner. This paper analyzes this fundamental problem of pilot contamination in multi-cell systems. Furthermore, it develops a new multi-cell MMSE-based precoding method that mitigates this problem. In addition to being linear, this precoding method has a simple closed-form expression that results from an intuitive optimization. Numerical results show significant performance gains compared to certain popular single-cell precoding methods.
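The pilot-contamination effect itself is easy to reproduce in a toy model: when two cells reuse the same pilot sequence, a least-squares channel estimate at one base station picks up the cross-cell channel as well. The setup below is an illustrative single-antenna caricature, not the paper's model:

```python
import numpy as np

# Two cells reuse the SAME uplink pilot (non-orthogonal training).
rng = np.random.default_rng(2)
tau = 8                                    # pilot sequence length
p = rng.standard_normal(tau) + 1j * rng.standard_normal(tau)
p /= np.linalg.norm(p)                     # unit-norm shared pilot

h_own = 1.2 - 0.4j      # channel from base station 1's own user
h_cross = 0.5 + 0.3j    # channel from the other cell's user

# Noiseless received pilot signal at base station 1: both users transmit p.
y = h_own * p + h_cross * p

# Least-squares estimate of h_own (correlate with the pilot, ||p|| = 1):
h_hat = p.conj() @ y
# h_hat = h_own + h_cross -- the cross-cell channel leaks into the
# estimate, so any precoder built from h_hat is likewise contaminated.
```

With orthogonal pilots the cross term would vanish; the paper's setting forces pilot reuse across cells, and its multi-cell MMSE precoder is designed with this contamination in mind.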

1,306 citations