Journal ArticleDOI

# Wiener filters in canonical coordinates for transform coding, filtering, and quantizing

01 Mar 1998--Vol. 46, Iss: 3, pp 647-654

TL;DR: Canonical correlations are used to decompose the Wiener filter into a whitening transform coder, a canonical filter, and a coloring transform decoder; this decomposition produces new formulas for error covariance, spectral flatness, and entropy.

Abstract: Canonical correlations are used to decompose the Wiener filter into a whitening transform coder, a canonical filter, and a coloring transform decoder. The outputs of the whitening transform coder are called canonical coordinates; these are the coordinates that are reduced in rank and quantized in our finite-precision version of the Gauss-Markov theorem. Canonical correlations are, in fact, cosines of the canonical angles between a source vector and a measurement vector. They produce new formulas for error covariance, spectral flatness, and entropy.

Topics: Wiener filter (57%), Transform coding (53%), Spectral flatness (52%)

### Introduction

• Quantizing is done by independently quantizing the canonical measurement coordinates to produce a quantized Wiener filter or a quantized Gauss–Markov theorem.
• The abstract motivation for studying canonical correlations is that they provide a minimal description of the correlation between a source vector and a measurement vector.
• The coherence vector is the cross correlation between a whitened scalar coordinate and the whitened vector of remaining coordinates (7). Iterating this idea writes the determinant of the covariance matrix as a product of prediction error variances (8), (9), each expressed in terms of a squared coherence.
• The source vector and the measurement vector are generated by Mother Nature.

### A. Standard Coordinates

• In standard coordinates, the Wiener filter and the error covariance matrix are given by (14) and (15). The authors call Fig. 2(a) the Wiener filter in standard coordinates.
• The linear transformation (16) resolves the source vector and the measurement vector into orthogonal vectors, with the respective covariances given in (17).

### B. Coherence Coordinates

• The coherence matrix measures the cross-correlation between the white vectors (20). Using coherence, the authors refine the Wiener filter and its corresponding error covariance matrix as (21) and (22).
• The corresponding Wiener filter, in coherence coordinates, is illustrated in Fig. 2(b).
• The first stage whitens both the source and the measurement to produce the coherence coordinates, the second stage filters with the coherence filter to produce the estimator error and the estimator, and the third stage colors these to produce the final error and estimate.
• The refined linear transformation from to is (23).
• It also shows that the covariance matrix for the error in coherence coordinates is determined by the squared coherence matrix.

### C. Canonical Coordinates

• The authors achieve one more level of refinement by replacing the coherence matrix by its SVD (27)–(29). The corresponding Wiener filter, in canonical coordinates, is illustrated in Fig. 2(c).
• The diagonal structure of this covariance matrix shows that the estimator error and the measurement are also uncorrelated, meaning that the estimator and the error orthogonally decompose the canonical coordinate.
• This canonical correlation matrix is also the Wiener filter for estimating the canonical source coordinates from the canonical measurement coordinates.
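The decomposition these bullets describe is easy to exercise numerically. Below is a minimal NumPy sketch, using our own hypothetical names (`Rxx`, `Ryy`, `sqrtm`) rather than the paper's notation: the SVD of the coherence matrix supplies the transform coders and the canonical correlations, and recomposing whitener, canonical filter, and colorer recovers the ordinary Wiener filter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical joint covariance for a 3-dim source x and 4-dim measurement y.
A = rng.standard_normal((7, 7))
R = A @ A.T
Rxx, Rxy, Ryy = R[:3, :3], R[:3, 3:], R[3:, 3:]

def sqrtm(M):
    """Symmetric square root of a positive-definite matrix."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(w)) @ V.T

Sx, Sy = sqrtm(Rxx), sqrtm(Ryy)

# Coherence matrix: cross-correlation between the whitened source
# and the whitened measurement.
C = np.linalg.inv(Sx) @ Rxy @ np.linalg.inv(Sy)

# SVD: transform coders F, G and canonical correlations k
# (cosines of the canonical angles, so each lies in [0, 1]).
F, k, Gt = np.linalg.svd(C)

# Recompose whitening coder, canonical filter, coloring decoder.
K = np.zeros((3, 4))
np.fill_diagonal(K, k)
W_canonical = Sx @ F @ K @ Gt @ np.linalg.inv(Sy)

# The ordinary Wiener filter, for comparison.
W_standard = Rxy @ np.linalg.inv(Ryy)
print(np.allclose(W_canonical, W_standard))
```

The canonical coordinates themselves are `F.T @ inv(Sx) @ x` and `G.T @ inv(Sy) @ y`; both are white, and their cross correlation is the diagonal matrix of canonical correlations.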

### A. Linear Dependence

• This formula tells us that what matters is the intradependence within the source as measured by its direction cosines, the intradependence within the measurement as measured by its direction cosines, and the interdependence between source and measurement as measured by the direction cosines between them.
• These latter direction cosines are measured in canonical coordinates, much as principal angles between subspaces are measured in something akin to canonical coordinates.

### B. Relative Filtering Errors

• The prior error covariance for the message vector is its source covariance, and the posterior error covariance is that of the Wiener filtering error.
• The volumes of the concentration ellipses associated with these covariances are proportional to their determinants.
• The relative volumes depend only on the direction cosines (40).

### C. Entropy and Rate

• The entropy of the random vector is given by (41). Normally, the authors write this entropy as the conditional entropy of the source given the measurement, plus the entropy of the measurement.
• Thus, the rate at which the measurement brings information about the source is determined by the direction cosines, or squared canonical correlations, between the source and the measurement.
• The authors observe that tr consists of three terms: the infinite-precision filtering error, the bias-squared introduced by rank reduction, and the variance introduced by quantizing.
• If the bit rate is specified, then the slicing level is adjusted to achieve it.
• The SVD representation for the coherence matrix becomes a Fourier representation (55), in which the coherence spectrum plays the role of the canonical correlations, a diagonal spectral mask plays the role of the canonical filter, and Fourier matrices play the role of the transform coders.

### A. Error Variance, Spectral Flatness, and Entropy

• The Toeplitz error covariance matrix has the error variance on its diagonal.
• This formula shows the error spectrum to be the product of the source spectrum and a factor determined by the squared coherence spectrum.
• The spectral flatness of the error spectrum is (62), which is the ratio of prediction error variance to prior variance.


IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 46, NO. 3, MARCH 1998 647
Wiener Filters in Canonical Coordinates for
Transform Coding, Filtering, and Quantizing
Louis L. Scharf, Fellow, IEEE, and John K. Thomas
Abstract—Canonical correlations are used to decompose the
Wiener ﬁlter into a whitening transform coder, a canonical ﬁlter,
and a coloring transform decoder. The outputs of the whitening
transform coder are called canonical coordinates; these are the
coordinates that are reduced in rank and quantized in our
ﬁnite-precision version of the Gauss–Markov theorem. Canonical
correlations are, in fact, cosines of the canonical angles between
a source vector and a measurement vector. They produce new
formulas for error covariance, spectral ﬂatness, and entropy.
Index Terms—Adaptive filtering, canonical coordinates, canon-
ical correlations, quantizing, transform coding, Wiener ﬁlters.
I. INTRODUCTION
CANONICAL correlations were introduced by Hotelling
[1], [2] and further developed by Anderson [3]. They are
now a standard topic in texts on multivariate analysis [4], [5].
Canonical correlations are closely related to coherency spectra,
and these spectra have engaged the interest of acousticians
and others for decades. In this paper, we take a fresh look at
canonical correlations, in a ﬁltering context, and discover that
they provide a natural decomposition of the Wiener ﬁlter. In
this decomposition, the singular value decomposition (SVD)
of a coherence matrix plays a central role: The right singular
vectors are used in a whitening transform coder to produce
canonical coordinates of the measurement vector; the diagonal
singular value matrix is used as a canonical Wiener ﬁlter to
estimate the canonical source coordinates from the canonical
measurement coordinates; and the left singular vectors are used
in a coloring transform decoder to reconstruct the estimate of
the source. The canonical source coordinates and the canonical
measurement coordinates are white, but their cross correlation
is the diagonal singular value matrix of the SVD, which is
also called the canonical correlation matrix.
The Wiener ﬁlter is reduced in rank by purging subdominant
canonical measurement coordinates that have small squared-
canonical correlation with the canonical source coordinates.
Quantizing is done by independently quantizing the canonical
Manuscript received September 24, 1996; revised May 21, 1997. This work
was supported by the National Science Foundation under Contract MIP-9529050
and by the Office of Naval Research under Contract N00014-89-J-1070. The
associate editor coordinating the review of this paper and approving it for
publication was Dr. José Principe.
L. L. Scharf is with the Department of Electrical and Computer Engi-
neering, University of Colorado, Boulder, CO 80309-0425 USA (e-mail:
J. K. Thomas is with the Data Fusion Corporation, Westminster, CO 80021
USA (e-mail: thomasjk@datafusion.com).
Publisher Item Identiﬁer S 1053-587X(98)01999-0.
measurement coordinates to produce a quantized Wiener ﬁlter
or a quantized Gauss–Markov theorem.
The abstract motivation for studying canonical correlations
is that they provide a minimal description of the correlation
between a source vector and a measurement vector. Canonical
correlations are also cosines of canonical angles; therefore,
some very illuminating geometrical insights are gained from a
study of Wiener ﬁlters in canonical coordinates. The concrete
motivation for studying canonical correlations is that they are
the variables that determine how a Wiener ﬁlter can be reduced
in rank and quantized for a ﬁnite-precision implementation.
Canonical correlations decompose formulas for error co-
variance, spectral ﬂatness, and entropy, and they produce
geometrical interpretations of all three. These decompositions
show that canonical correlations play the role of direction
cosines between random vectors, lending new insights into
old formulas. All of these ﬁnite-dimensional results generalize
to cyclic time series and to wide-sense stationary time series.
Finally, experimental training data may be used in place of
second-order information to produce formulas for adaptive
Wiener ﬁlters in adaptive canonical coordinates.
II. PRELIMINARY OBSERVATIONS
Let us begin our discussion of canonical coordinates by revisiting an old problem in linear prediction. The zero-mean random vector $\mathbf{y} = (y_1, y_2, \ldots, y_N)^T$ has covariance matrix
$R = E[\mathbf{y}\mathbf{y}^T] = \{r_{mn}\}$. (1)
The determinant of $R$ may be written as
$\det R = \prod_{n=1}^{N} \sigma_n^2$ (2)
where $\sigma_n^2$ is the error variance for estimating the scalar $y_n$ from the vector $\mathbf{y}_{n+1} = (y_{n+1}, \ldots, y_N)^T$. This error variance may be written as
$\sigma_n^2 = r_{nn} - \mathbf{r}_n^T R_{n+1}^{-1}\mathbf{r}_n$ (3)
$\phantom{\sigma_n^2} = r_{nn}(1 - \rho_n^2)$ (4)
where $\mathbf{r}_n = E[\mathbf{y}_{n+1} y_n]$ and $R_{n+1} = E[\mathbf{y}_{n+1}\mathbf{y}_{n+1}^T]$. We call $\rho_n^2$ the squared coherence between the scalar $y_n$ and the vector $\mathbf{y}_{n+1}$ because it may be written as the product
$\rho_n^2 = (r_{nn}^{-1/2}\mathbf{r}_n^T R_{n+1}^{-T/2})(R_{n+1}^{-1/2}\mathbf{r}_n r_{nn}^{-1/2})$ (5)
$\phantom{\rho_n^2} = \mathbf{c}_n^T \mathbf{c}_n$. (6)
1053–587X/98\$10.00 1998 IEEE

The vector $\mathbf{c}_n = R_{n+1}^{-1/2}\mathbf{r}_n r_{nn}^{-1/2}$ is the coherence between $y_n$ and $\mathbf{y}_{n+1}$, or the cross correlation between the white random scalar $r_{nn}^{-1/2}y_n$ and the white random vector $R_{n+1}^{-1/2}\mathbf{y}_{n+1}$:
$\mathbf{c}_n = E[(R_{n+1}^{-1/2}\mathbf{y}_{n+1})(r_{nn}^{-1/2}y_n)]$. (7)
This basic idea may be iterated to write $\det R$ as
$\det R = \prod_{n=1}^{N}\sigma_n^2$ (8)
$\phantom{\det R} = \prod_{n=1}^{N} r_{nn}(1-\rho_n^2)$ (9)
where $\rho_n^2$ is the squared coherence between the scalar $y_n$ and the vector $\mathbf{y}_{n+1}$. This formula for $\det R$ is the Gram determinant, with each prediction error variance written in terms of squared coherence. It provides a fine-grained resolution of entropy (10) and spectral flatness (11). Therefore, entropy is near its maximum, and spectral flatness is near 1, when the squared coherences between $y_n$ and $\mathbf{y}_{n+1}$ are near zero for all $n$.
The sequence of Wiener filters that underlies this decomposition of $\det R$ is given in (12), which is a decomposition of the filter into a whitener, a coherence filter, and a colorer. This idea is fundamental.
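The Gram-determinant identity above is easy to check numerically. A small sketch under assumed names (`R` and `sigma2` are ours, not the paper's): each Schur complement gives the error variance for predicting one coordinate from the coordinates after it, and the product of these variances equals $\det R$.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
R = A @ A.T  # hypothetical covariance of a zero-mean random vector y

# sigma2[n]: error variance for estimating y[n] from y[n+1:], i.e. the
# Schur complement of the trailing block of R.
sigma2 = []
for n in range(4):
    r00 = R[n, n]
    r = R[n, n + 1:]
    Rsub = R[n + 1:, n + 1:]
    sigma2.append(r00 - r @ np.linalg.solve(Rsub, r))
sigma2.append(R[4, 4])  # the last coordinate is left unpredicted

# Gram-determinant identity: det R is the product of the
# prediction error variances.
print(np.isclose(np.prod(sigma2), np.linalg.det(R)))
```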
III. CANONICAL CORRELATIONS IN A FILTERING CONTEXT
The context for our further development of canonical correlations is illustrated in Fig. 1. The source vector $\mathbf{x}$ and the measurement vector $\mathbf{y}$ are generated by Mother Nature. Father Nature views only the measurement vector $\mathbf{y}$, and from it, he must estimate Mother Nature's source vector $\mathbf{x}$. This problem is meaningful because the zero-mean random vectors $\mathbf{x}$ and $\mathbf{y}$ share the covariance matrix $R$:
$R = E\begin{bmatrix}\mathbf{x}\\ \mathbf{y}\end{bmatrix}\begin{bmatrix}\mathbf{x}^T & \mathbf{y}^T\end{bmatrix} = \begin{bmatrix}R_{xx} & R_{xy}\\ R_{yx} & R_{yy}\end{bmatrix}$. (13)
Fig. 1. Filtering problem.
Fig. 2. Wiener filter in various coordinate systems: (a) standard coordinates; (b) coherence coordinates; (c) canonical coordinates.
A. Standard Coordinates
The linear MMSE estimator of $\mathbf{x}$ from $\mathbf{y}$ is $\hat{\mathbf{x}} = W\mathbf{y}$, and the corresponding (orthogonal) error is $\mathbf{e} = \mathbf{x} - W\mathbf{y}$. In standard coordinates, the Wiener filter $W$ and the error covariance matrix $Q$ are
$W = R_{xy}R_{yy}^{-1}$ (14)
$Q = R_{xx} - R_{xy}R_{yy}^{-1}R_{yx}$. (15)
We shall call Fig. 2(a) the Wiener filter in standard coordinates.
The linear transformation
$\begin{bmatrix}\mathbf{e}\\ \mathbf{y}\end{bmatrix} = \begin{bmatrix}I & -R_{xy}R_{yy}^{-1}\\ 0 & I\end{bmatrix}\begin{bmatrix}\mathbf{x}\\ \mathbf{y}\end{bmatrix}$ (16)
resolves the source vector $\mathbf{x}$ and the measurement vector $\mathbf{y}$ into orthogonal vectors $\mathbf{e}$ and $\mathbf{y}$, with respective covariances $Q$ and $R_{yy}$:
$E\begin{bmatrix}\mathbf{e}\\ \mathbf{y}\end{bmatrix}\begin{bmatrix}\mathbf{e}^T & \mathbf{y}^T\end{bmatrix} = \begin{bmatrix}Q & 0\\ 0 & R_{yy}\end{bmatrix}$. (17)
This is one of the Schur decompositions of $R$. From this formula, it follows that $\det R$ may be written as
$\det R = \det Q \,\det R_{yy}$ (18)
$\phantom{\det R} = \det(R_{xx} - R_{xy}R_{yy}^{-1}R_{yx})\,\det R_{yy}$. (19)

B. Coherence Coordinates
The coherence matrix $C$ measures the cross-correlation between the white vectors $R_{xx}^{-1/2}\mathbf{x}$ and $R_{yy}^{-1/2}\mathbf{y}$:
$C = R_{xx}^{-1/2}R_{xy}R_{yy}^{-T/2}$. (20)
Using coherence, we can refine the Wiener filter $W$ and its corresponding error covariance matrix $Q$ as
$W = R_{xx}^{1/2} C R_{yy}^{-1/2}$ (21)
$Q = R_{xx}^{1/2}(I - CC^T)R_{xx}^{T/2}$. (22)
We shall call the matrix $CC^T$ the squared coherence matrix.
The corresponding Wiener filter, in coherence coordinates, is illustrated in Fig. 2(b). It resolves the source vector $\mathbf{x}$ and the measurement vector $\mathbf{y}$ into the error vector $\mathbf{e}$ and the estimate $\hat{\mathbf{x}}$ in three stages. The first stage whitens both $\mathbf{x}$ and $\mathbf{y}$ to produce the coherence coordinates $\mathbf{u} = R_{xx}^{-1/2}\mathbf{x}$ and $\mathbf{v} = R_{yy}^{-1/2}\mathbf{y}$, the second stage filters $\mathbf{v}$ with the coherence filter $C$ to produce the estimator error $\mathbf{u} - C\mathbf{v}$ and the estimator $C\mathbf{v}$, and the third stage colors these to produce $\mathbf{e}$ and $\hat{\mathbf{x}}$. We shall call this the Wiener filter in coherence coordinates.
The refined linear transformation from $(\mathbf{x}, \mathbf{y})$ to $(\mathbf{e}, \mathbf{y})$ is
$\begin{bmatrix}\mathbf{e}\\ \mathbf{y}\end{bmatrix} = \begin{bmatrix}R_{xx}^{1/2} & -R_{xx}^{1/2}C\\ 0 & R_{yy}^{1/2}\end{bmatrix}\begin{bmatrix}R_{xx}^{-1/2} & 0\\ 0 & R_{yy}^{-1/2}\end{bmatrix}\begin{bmatrix}\mathbf{x}\\ \mathbf{y}\end{bmatrix}$. (23)
The corresponding refinement for the covariance matrix, in coherence coordinates, is
$E\begin{bmatrix}\mathbf{u}-C\mathbf{v}\\ \mathbf{v}\end{bmatrix}\begin{bmatrix}(\mathbf{u}-C\mathbf{v})^T & \mathbf{v}^T\end{bmatrix} = \begin{bmatrix}I - CC^T & 0\\ 0 & I\end{bmatrix}$. (24)
The diagonal structure of this covariance matrix shows that the estimator error and the measurement, in coherence coordinates, are also uncorrelated, providing an orthogonal decomposition of the coherence coordinate into the estimator and the error. It also shows that the covariance matrix for the error in coherence coordinates is $I - CC^T$.
The formula for $\det R$ is now
$\det R = \det\!\left(R_{xx}^{1/2}(I - CC^T)R_{xx}^{T/2}\right)\det R_{yy}$ (25)
$\phantom{\det R} = \det(I - CC^T)\,\det R_{xx}\,\det R_{yy}$. (26)
C. Canonical Coordinates
We achieve one more level of refinement by replacing the coherence matrix $C$ by its SVD:
$C = FKG^T$ (27)
$F^TF = I, \quad G^TG = I$ (28)
$K = \operatorname{diag}(k_1, k_2, \ldots, k_N)$. (29)
We shall call the orthogonal matrices $F$ and $G$ transform coders, the matrix $K$ the canonical correlation matrix, and the matrix $KK^T$ the squared canonical correlation matrix.
The canonical correlation matrix $K$ is the cross correlation between the white vector $\mathbf{u} = F^TR_{xx}^{-1/2}\mathbf{x}$ and the white vector $\mathbf{v} = G^TR_{yy}^{-1/2}\mathbf{y}$:
$K = E[\mathbf{u}\mathbf{v}^T]$. (30)
The Wiener filter $W$ and error covariance matrix $Q$ in these canonical coordinates are
$W = R_{xx}^{1/2}FKG^TR_{yy}^{-1/2}$ (31)
$Q = R_{xx}^{1/2}F(I - KK^T)F^TR_{xx}^{T/2}$. (32)
The corresponding Wiener filter, in canonical coordinates, is illustrated in Fig. 2(c). It resolves the source vector $\mathbf{x}$ and the measurement vector $\mathbf{y}$ into the error vector $\mathbf{e}$ and the estimator $\hat{\mathbf{x}}$ in five stages. The first stage whitens both $\mathbf{x}$ and $\mathbf{y}$ to produce the coherence coordinates, the second stage transforms the coherence coordinates into the canonical coordinates $\mathbf{u}$ and $\mathbf{v}$, the third stage filters $\mathbf{v}$ with the canonical filter $K$ to produce the estimator $K\mathbf{v}$ and the estimator error $\mathbf{u} - K\mathbf{v}$, the fourth stage transforms these back into the coherence coordinates, and the fifth stage colors them to produce $\hat{\mathbf{x}}$ and $\mathbf{e}$. We shall call this the Wiener filter in canonical coordinates.
The refined linear transformation from $(\mathbf{x}, \mathbf{y})$ to $(\mathbf{e}, \mathbf{y})$ is
$\begin{bmatrix}\mathbf{e}\\ \mathbf{y}\end{bmatrix} = \begin{bmatrix}R_{xx}^{1/2}F & -R_{xx}^{1/2}FK\\ 0 & R_{yy}^{1/2}G\end{bmatrix}\begin{bmatrix}F^TR_{xx}^{-1/2} & 0\\ 0 & G^TR_{yy}^{-1/2}\end{bmatrix}\begin{bmatrix}\mathbf{x}\\ \mathbf{y}\end{bmatrix}$. (33)
The corresponding refinement of the covariance matrix, in canonical coordinates, is
$E\begin{bmatrix}\mathbf{u}-K\mathbf{v}\\ \mathbf{v}\end{bmatrix}\begin{bmatrix}(\mathbf{u}-K\mathbf{v})^T & \mathbf{v}^T\end{bmatrix} = \begin{bmatrix}I - KK^T & 0\\ 0 & I\end{bmatrix}$. (34)
The diagonal structure of this covariance matrix shows that the estimator error $\mathbf{u} - K\mathbf{v}$ and the measurement $\mathbf{v}$ are also uncorrelated, meaning that the estimator $K\mathbf{v}$ and the error $\mathbf{u} - K\mathbf{v}$ orthogonally decompose the canonical coordinate $\mathbf{u}$. It also shows that the covariance matrix for the error in canonical coordinates is $I - KK^T$. The formula for $\det R$ is now
$\det R = \det(I - KK^T)\,\det R_{xx}\,\det R_{yy}$ (35)
$\phantom{\det R} = \prod_{n=1}^{N}(1 - k_n^2)\,\det R_{xx}\,\det R_{yy}$. (36)
This formula shows that the squared canonical correlations $k_n^2$ are objects of fundamental importance for filtering. We pursue this point in Section IV.
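As a numerical sanity check on the canonical-coordinate formulas (the names `Sx`, `F`, `k` below are our own assumptions, not the paper's notation): the error covariance computed the standard way agrees with its canonical-coordinate form built from the identity minus the squared canonical correlations.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 6))
R = A @ A.T  # hypothetical joint covariance; x and y are 3-dim each
Rxx, Rxy, Ryy = R[:3, :3], R[:3, 3:], R[3:, 3:]

def sqrtm(M):
    """Symmetric square root of a positive-definite matrix."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(w)) @ V.T

Sx, Sy = sqrtm(Rxx), sqrtm(Ryy)
C = np.linalg.inv(Sx) @ Rxy @ np.linalg.inv(Sy)  # coherence matrix
F, k, Gt = np.linalg.svd(C)                      # transform coders and k_n

# Error covariance in standard coordinates ...
Q_standard = Rxx - Rxy @ np.linalg.solve(Ryy, Rxy.T)
# ... equals its canonical-coordinate form Sx F (I - K K^T) F^T Sx.
Q_canonical = Sx @ F @ np.diag(1 - k**2) @ F.T @ Sx
print(np.allclose(Q_standard, Q_canonical))
```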

IV. FILTERING FORMULAS IN CANONICAL COORDINATES
We summarize as follows. The Wiener filter in canonical coordinates replaces the source and measurement vectors in standard coordinates with source and measurement vectors in canonical coordinates. In these coordinates, the source and measurement are white but diagonally cross correlated according to the canonical correlation matrix $K$. This canonical correlation matrix is also the Wiener filter for estimating the canonical source coordinates from the canonical measurement coordinates. The error covariance matrix associated with Wiener filtering in these canonical coordinates is just $I - KK^T$.
Recall that the canonical correlations are defined as
$k_n = E[u_n v_n]$ (37)
$\phantom{k_n} = \cos\theta_n$ (38)
so that each canonical correlation $k_n$ measures the cosine of the angle between two unit variance random variables: one drawn from the canonical source coordinates and one drawn from the canonical measurement coordinates. For this reason, we call the squared canonical correlations $k_n^2$ direction cosines. By making the canonical variables diagonally correlated, we have uncoupled the measurement of one direction cosine from the measurement of another.
A. Linear Dependence
We think of the Hadamard ratio $\det R / \prod_i r_{ii}$ as a measure of linear dependence of the variables $\mathbf{x}$ and $\mathbf{y}$. Using the results of (8) and (35), we may write the Hadamard ratio as the product
$\dfrac{\det R}{\prod_i r_{ii}} = \dfrac{\det R_{xx}}{\prod_i (R_{xx})_{ii}} \cdot \dfrac{\det R_{yy}}{\prod_i (R_{yy})_{ii}} \cdot \prod_{n=1}^{N}(1 - k_n^2)$. (39)
This formula tells us that what matters is the intradependence within $\mathbf{x}$ as measured by its direction cosines, the intradependence within $\mathbf{y}$ as measured by its direction cosines, and the interdependence between $\mathbf{x}$ and $\mathbf{y}$ as measured by the direction cosines between $\mathbf{x}$ and $\mathbf{y}$. These latter direction cosines are measured in canonical coordinates, much as principal angles between subspaces are measured in something akin to canonical coordinates. They are scale invariant.
B. Relative Filtering Errors
The prior error covariance for the message vector $\mathbf{x}$ is $R_{xx}$, and the posterior error covariance for the error $\mathbf{e}$ is $Q$. The volumes of the concentration ellipses associated with these covariances are proportional to $(\det R_{xx})^{1/2}$ and $(\det Q)^{1/2}$. The relative volumes depend only on the direction cosines $k_n^2$:
$\dfrac{\det Q}{\det R_{xx}} = \prod_{n=1}^{N}(1 - k_n^2)$. (40)
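The relative-volume formula can be verified directly (again with our own hypothetical names `Rxx`, `sqrtm`, `ratio`): the ratio of the posterior to the prior determinant depends only on the direction cosines.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 6))
R = A @ A.T  # hypothetical joint covariance; x and y are 3-dim each
Rxx, Rxy, Ryy = R[:3, :3], R[:3, 3:], R[3:, 3:]

def sqrtm(M):
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(w)) @ V.T

C = np.linalg.inv(sqrtm(Rxx)) @ Rxy @ np.linalg.inv(sqrtm(Ryy))
k = np.linalg.svd(C, compute_uv=False)  # canonical correlations

# Prior vs. posterior concentration-ellipse volumes: the ratio of
# det Q to det Rxx depends only on the direction cosines k_n^2.
Q = Rxx - Rxy @ np.linalg.solve(Ryy, Rxy.T)
ratio = np.linalg.det(Q) / np.linalg.det(Rxx)
print(np.isclose(ratio, np.prod(1 - k**2)))
```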
C. Entropy and Rate
The entropy of the random vector $(\mathbf{x}^T\ \mathbf{y}^T)^T$ is
$H = \frac{1}{2}\log\left[(2\pi e)^{2N}\det R\right]$. (41)
Normally, we write this entropy as the conditional entropy of $\mathbf{x}$ given $\mathbf{y}$, plus the entropy of $\mathbf{y}$. The conditional entropy, or equivocation, is therefore
$H(\mathbf{x}\,|\,\mathbf{y}) = \frac{1}{2}\log\left[(2\pi e)^{N}\det R_{xx}\right] + \frac{1}{2}\sum_{n=1}^{N}\log(1 - k_n^2)$ (42)
and the direction cosines determine how $\mathbf{y}$ brings information to reduce the entropy of $\mathbf{x}$ from its prior value. The second term on the right-hand side of this equation is the negative of rate in canonical coordinates. Thus, the rate at which $\mathbf{y}$ brings information about $\mathbf{x}$ is determined by the direction cosines or squared canonical correlations between the source and the measurement.
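For Gaussian vectors, the rate in question is the mutual information, and it can be computed two ways that must agree: from the determinants, or purely from the direction cosines. A sketch with our own assumed names (`I_from_dets`, `I_from_k`):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((6, 6))
R = A @ A.T  # hypothetical joint covariance; x and y are 3-dim each
Rxx, Rxy, Ryy = R[:3, :3], R[:3, 3:], R[3:, 3:]

def sqrtm(M):
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(w)) @ V.T

C = np.linalg.inv(sqrtm(Rxx)) @ Rxy @ np.linalg.inv(sqrtm(Ryy))
k = np.linalg.svd(C, compute_uv=False)

# The rate at which y brings information about x, in nats:
# I(x; y) = (1/2) log(det Rxx det Ryy / det R), which reduces to
# -(1/2) sum log(1 - k_n^2): a function of the direction cosines alone.
I_from_dets = 0.5 * np.log(np.linalg.det(Rxx) * np.linalg.det(Ryy)
                           / np.linalg.det(R))
I_from_k = -0.5 * np.sum(np.log(1 - k**2))
print(np.isclose(I_from_dets, I_from_k))
```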
V. RANK REDUCTION FOR TRANSFORM CODING, FILTERING, AND QUANTIZING
The Wiener filter in canonical coordinates is a filterbank idea. That is, the measurement is decomposed into canonical coordinates that bring information about the canonical coordinates of the source. It is also a spread-spectrum idea because the canonical coordinates are white. The question of rank reduction and bit allocation for finite-precision Wiener filtering or, equivalently, for source coding from noisy measurements is clarified in canonical coordinates. The problem is to quantize the canonical coordinates $\mathbf{v}$ so that the trace of the error covariance matrix $Q$ is minimized. The error covariance matrix and its trace are given in (43) and (44), where the $e_i$ are the energies of the "impulse responses" for the coloring (or synthesizing) transform decoder:
$e_i = \|(R_{xx}^{1/2}F)_{\cdot i}\|^2$. (45)
If the canonical measurement coordinates $v_i$ that are weakly correlated with the canonical source coordinates are purged

and the remaining are uniformly quantized with $b_i$ bits, then the resulting error covariance matrix for estimating the source vector $\mathbf{x}$ from the reduced-rank and quantized canonical measurement vector is given in (46). In this latter form, we observe that $\mathrm{tr}\,Q$ consists of three terms: the infinite-precision filtering error, the bias-squared introduced by rank reduction, and the variance introduced by quantizing. The trick is to properly balance the second and third. To this end, we will consider the rate-distortion problem
minimize $\mathrm{tr}\,Q$ under the constraint $\sum_i b_i \le B$. (47)
Using the standard procedure for minimizing with constraint (see, for example, [9] and [10]), we obtain the solution given in (48)-(51). These formulas generalize the formulas of [9] by providing a solution to the problem of uniformly quantizing the Wiener filter or quantizing the Gauss–Markov theorem. They may be interpreted as follows.
If the bit rate $B$ is specified, then the slicing level is adjusted to achieve it. The slicing level determines the bit allocation $b_i$, the rank $r$, and the minimum achievable distortion. Conversely, if the distortion is specified, the slicing level is adjusted to achieve it. This determines $b_i$, $r$, and the minimum rate. These formulas are illustrated in Fig. 3 for the idealized case where the $e_i$ are unity. The components of distortion illustrate the tradeoff between bias and variance.
Fig. 3. Components of distortion. (a) Squared canonical correlation. (b) Infinite-precision distortion. (c) Extra components of distortion due to rank reduction and quantizing. (d) Finite-precision distortion.
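The bias-squared component of the distortion can be sketched numerically. This reflects our reading of the section, with hypothetical helper names (`err_trace`, `sqrtm`): zeroing the weakest canonical correlation raises the error trace by exactly that coordinate's squared correlation times the energy of its coloring "impulse response."

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((6, 6))
R = A @ A.T  # hypothetical joint covariance; x and y are 3-dim each
Rxx, Rxy, Ryy = R[:3, :3], R[:3, 3:], R[3:, 3:]

def sqrtm(M):
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(w)) @ V.T

Sx, Sy = sqrtm(Rxx), sqrtm(Ryy)
C = np.linalg.inv(Sx) @ Rxy @ np.linalg.inv(Sy)
F, k, Gt = np.linalg.svd(C)

def err_trace(keep):
    """Trace of the error covariance when only `keep` canonical
    coordinates are retained (the rest are purged)."""
    Kr = np.diag(np.where(np.arange(3) < keep, k, 0.0))
    W = Sx @ F @ Kr @ Gt @ np.linalg.inv(Sy)
    Q = Rxx - W @ Rxy.T - Rxy @ W.T + W @ Ryy @ W.T
    return np.trace(Q)

# Purging the weakest coordinate adds a bias-squared term k_3^2 * e_3,
# where e_3 is the energy of column 3 of the coloring decoder Sx @ F.
f3 = (Sx @ F)[:, 2]
bias2 = k[2] ** 2 * (f3 @ f3)
print(np.isclose(err_trace(2), err_trace(3) + bias2))
```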
VI. CANONICAL TIME SERIES
If $\mathbf{x}$ and $\mathbf{y}$ are jointly stationary random vectors whose dimensions increase without bound (that is, they are stationary time series), then all of the correlation matrices in these formulas are infinite Toeplitz matrices with Fourier representations (52). Furthermore, if the time series are not perfectly predictable (that is, the power spectra satisfy the Szegö conditions), then $R_{xx}$ and $R_{yy}$ may be spectrally factored as in (53), where the spectral factors are minimum phase, meaning that they and their inverses are causal and stable filters. Then, the various square roots in the filtering formulas have the Fourier representations (54a)-(54c). The SVD representation for the coherence matrix becomes a Fourier representation; therefore (55) holds, where the coherence spectrum plays the role of the canonical correlations, a diagonal spectral mask plays the role of the canonical filter, and Fourier matrices play the role of the transform coders.

##### Citations

Journal ArticleDOI
TL;DR: It is demonstrated that the cross-spectral metric is optimal in the sense that it maximizes mutual information between the observed and desired processes and is capable of outperforming the more complex eigendecomposition-based methods.
Abstract: The Wiener filter is analyzed for stationary complex Gaussian signals from an information theoretic point of view. A dual-port analysis of the Wiener filter leads to a decomposition based on orthogonal projections and results in a new multistage method for implementing the Wiener filter using a nested chain of scalar Wiener filters. This new representation of the Wiener filter provides the capability to perform an information-theoretic analysis of previous, basis-dependent, reduced-rank Wiener filters. This analysis demonstrates that the cross-spectral metric is optimal in the sense that it maximizes mutual information between the observed and desired processes. A new reduced-rank Wiener filter is developed based on this new structure which evolves a basis using successive projections of the desired signal onto orthogonal, lower dimensional subspaces. The performance is evaluated using a comparative computer analysis model and it is demonstrated that the low-complexity multistage reduced-rank Wiener filter is capable of outperforming the more complex eigendecomposition-based methods.

803 citations

• ...where the squared canonical correlation [8]–[11] is...


Journal ArticleDOI
TL;DR: The concepts of propriety and joint propriety are linked to eigenanalysis and canonical correlation analysis and applied to the problem of rank reduction through principal components of complex random vectors and wide-sense stationary signals.
Abstract: We present a comprehensive treatment of the second-order theory of complex random vectors and wide-sense stationary (WSS) signals. The main focus is on the improper case, in which the complementary covariance does not vanish. Accounting for the information present in the complementary covariance requires the use of widely linear transformations. Based on these, we present the eigenanalysis of complex vectors and apply it to the problem of rank reduction through principal components. We also investigate joint properties of two complex vectors by introducing canonical correlations, which paves the way for a discussion of the Wiener filter and its rank-reduced version. We link the concepts of propriety and joint propriety to eigenanalysis and canonical correlation analysis, respectively. Our treatment is extended to WSS signals. In particular, we give a result on the asymptotic distribution of eigenvalues and examine the connection between WSS, proper, and analytic signals.

369 citations

Journal ArticleDOI
TL;DR: The large system output signal-to-interference plus noise ratio (SINR) is evaluated as a function of filter rank D for the multistage Wiener filter (MSWF) and it is shown that for large systems, the MSWF allows a dramatic reduction in rank relative to the other techniques considered.
Abstract: The performance of reduced-rank linear filtering is studied for the suppression of multiple-access interference. A reduced-rank filter resides in a lower dimensional space, relative to the full-rank filter, which enables faster convergence and tracking. We evaluate the large system output signal-to-interference plus noise ratio (SINR) as a function of filter rank D for the multistage Wiener filter (MSWF) presented by Goldstein and Reed. The large system limit is defined by letting the number of users K and the number of dimensions N tend to infinity with K/N fixed. For the case where all users are received with the same power, the reduced-rank SINR converges to the full-rank SINR as a continued fraction. An important conclusion from this analysis is that the rank D needed to achieve a desired output SINR does not scale with system size. Numerical results show that D=8 is sufficient to achieve near-full-rank performance even under heavy loads (K/N=1). We also evaluate the large system output SINR for other reduced-rank methods, namely, principal components and cross-spectral, which are based on an eigendecomposition of the input covariance matrix, and partial despreading. For those methods, the large system limit lets D/spl rarr//spl infin/ with D/N fixed. Our results show that for large systems, the MSWF allows a dramatic reduction in rank relative to the other techniques considered.

249 citations

### Cites methods from "Wiener filters in canonical coordin..."

• ...We remark that other reduced-rank methods have been proposed in [5], [10], [23], [24], [4], [25]....


• ..., see [1]–[4] and references therein)....


Journal ArticleDOI
TL;DR: It is the early, nonasymptotic elements of the generated sequence of estimators that offer favorable bias covariance balance and are seen to outperform in mean-square estimation error, constraint-LMS, RLS-type, orthogonal multistage decomposition, as well as plain and diagonally loaded SMI estimates.
Abstract: Statistical conditional optimization criteria lead to the development of an iterative algorithm that starts from the matched filter (or constraint vector) and generates a sequence of filters that converges to the minimum-variance-distortionless-response (MVDR) solution for any positive definite input autocorrelation matrix. Computationally, the algorithm is a simple, noninvasive, recursive procedure that avoids any form of explicit autocorrelation matrix inversion, decomposition, or diagonalization. Theoretical analysis reveals basic properties of the algorithm and establishes formal convergence. When the input autocorrelation matrix is replaced by a conventional sample-average (positive definite) estimate, the algorithm effectively generates a sequence of MVDR filter estimators; the bias converges rapidly to zero and the covariance trace rises slowly and asymptotically to the covariance trace of the familiar sample-matrix-inversion (SMI) estimator. In fact, formal convergence of the estimator sequence to the SMI estimate is established. However, for short data records, it is the early, nonasymptotic elements of the generated sequence of estimators that offer favorable bias covariance balance and are seen to outperform in mean-square estimation error, constraint-LMS, RLS-type, orthogonal multistage decomposition, as well as plain and diagonally loaded SMI estimates. An illustrative interference suppression example is followed throughout this presentation.

236 citations

### Cites methods from "Wiener filters in canonical coordin..."

• ...Non-eigenbased, -stage, , orthogonal decomposition and synthesis of was pursued in [30] and [31], filter decomposition using canonical correlations was considered in [32], and modular designs through factorization of the orthogonal projection operator were developed in [33]....


Proceedings Article
01 Jan 2000
TL;DR: The optimal linear transform is derived to combine the audio and visual information and an implementation that avoids the numerical problems caused by computing the correlation matrices is described.
Abstract: FaceSync is an optimal linear algorithm that finds the degree of synchronization between the audio and image recordings of a human speaker. Using canonical correlation, it finds the best direction to combine all the audio and image data, projecting them onto a single axis. FaceSync uses Pearson's correlation to measure the degree of synchronization between the audio and image data. We derive the optimal linear transform to combine the audio and visual information and describe an implementation that avoids the numerical problems caused by computing the correlation matrices.

158 citations

##### References

Journal ArticleDOI
TL;DR: This final installment of the paper considers the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now.
Abstract: In this final installment of the paper we consider the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now. To a considerable extent the continuous case can be obtained through a limiting process from the discrete case by dividing the continuum of messages and signals into a large but finite number of small regions and calculating the various parameters involved on a discrete basis. As the size of the regions is decreased these parameters in general approach as limits the proper values for the continuous case. There are, however, a few new effects that appear and also a general change of emphasis in the direction of specialization of the general results to particular cases.

60,029 citations

Book
14 Sep 1984
Abstract: Preface to the Third Edition.Preface to the Second Edition.Preface to the First Edition.1. Introduction.2. The Multivariate Normal Distribution.3. Estimation of the Mean Vector and the Covariance Matrix.4. The Distributions and Uses of Sample Correlation Coefficients.5. The Generalized T2-Statistic.6. Classification of Observations.7. The Distribution of the Sample Covariance Matrix and the Sample Generalized Variance.8. Testing the General Linear Hypothesis: Multivariate Analysis of Variance9. Testing Independence of Sets of Variates.10. Testing Hypotheses of Equality of Covariance Matrices and Equality of Mean Vectors and Covariance Matrices.11. Principal Components.12. Cononical Correlations and Cononical Variables.13. The Distributions of Characteristic Roots and Vectors.14. Factor Analysis.15. Pattern of Dependence Graphical Models.Appendix A: Matrix Theory.Appendix B: Tables.References.Index.

9,680 citations


Book ChapterDOI
Abstract: Concepts of correlation and regression may be applied not only to ordinary one-dimensional variates but also to variates of two or more dimensions. Marksmen side by side firing simultaneous shots at targets, so that the deviations are in part due to independent individual errors and in part to common causes such as wind, provide a familiar introduction to the theory of correlation; but only the correlation of the horizontal components is ordinarily discussed, whereas the complex consisting of horizontal and vertical deviations may be even more interesting. The wind at two places may be compared, using both components of the velocity in each place. A fluctuating vector is thus matched at each moment with another fluctuating vector. The study of individual differences in mental and physical traits calls for a detailed study of the relations between sets of correlated variates. For example the scores on a number of mental tests may be compared with physical measurements on the same persons. The questions then arise of determining the number and nature of the independent relations of mind and body shown by these data to exist, and of extracting from the multiplicity of correlations in the system suitable characterizations of these independent relations. As another example, the inheritance of intelligence in rats might be studied by applying not one but s different mental tests to N mothers and to a daughter of each

5,568 citations

### "Wiener filters in canonical coordin..." refers methods in this paper

• ...INTRODUCTION CANONICAL correlations were introduced by Hotelling [1], [2] and further developed by Anderson [3]....


