Journal ArticleDOI

Wiener filters in canonical coordinates for transform coding, filtering, and quantizing

01 Mar 1998-IEEE Transactions on Signal Processing (IEEE)-Vol. 46, Iss: 3, pp 647-654
TL;DR: Canonical correlations are used to decompose the Wiener filter into a whitening transform coder, a canonical filter, and a coloring transform decoder, which produces new formulas for error covariance, spectral flatness, and entropy.
Abstract: Canonical correlations are used to decompose the Wiener filter into a whitening transform coder, a canonical filter, and a coloring transform decoder. The outputs of the whitening transform coder are called canonical coordinates; these are the coordinates that are reduced in rank and quantized in our finite-precision version of the Gauss-Markov theorem. Canonical correlations are, in fact, cosines of the canonical angles between a source vector and a measurement vector. They produce new formulas for error covariance, spectral flatness, and entropy.

Summary (2 min read)

Introduction

  • The Wiener filter is reduced in rank and quantized in canonical coordinates: subdominant canonical measurement coordinates are purged, and the remaining coordinates are independently quantized to produce a quantized Wiener filter or a quantized Gauss–Markov theorem.
  • The abstract motivation for studying canonical correlations is that they provide a minimal description of the correlation between a source vector and a measurement vector.
  • The coherence vector between a scalar and a vector is the cross correlation between the whitened scalar and the whitened vector (7). This basic idea may be iterated to write the determinant of the covariance matrix as a product of prediction-error variances (8), (9), each written in terms of a squared coherence.
  • The source vector and the measurement vector are generated by Mother Nature.

A. Standard Coordinates

  • In standard coordinates, the Wiener filter and the error covariance matrix are given by (14) and (15); Fig. 2(a) illustrates the Wiener filter in standard coordinates.
  • The linear transformation (16) resolves the source vector and the measurement vector into orthogonal vectors, the error and the measurement, with respective covariances given in (17).

B. Coherence Coordinates

  • The coherence matrix measures the cross correlation between the whitened source and measurement vectors (20). Using coherence, the authors refine the Wiener filter and its corresponding error covariance matrix as (21) and (22).
  • The corresponding Wiener filter, in coherence coordinates, is illustrated in Fig. 2(b).
  • The first stage whitens both the source and the measurement to produce the coherence coordinates, the second stage filters with the coherence filter to produce the estimator error and the estimator, and the third stage colors these to recover the original coordinates.
  • The refined linear transformation between the original and coherence coordinates is given by (23).
  • The block-diagonal covariance (24) shows that the covariance matrix for the error in coherence coordinates is $I - CC^T$, where $C$ is the coherence matrix.

C. Canonical Coordinates

  • The authors achieve one more level of refinement by replacing the coherence matrix with its SVD, $C = FKG^T$ with $K = \mathrm{diag}(k_1, \ldots, k_m)$, in (27)–(29). The corresponding Wiener filter, in canonical coordinates, is illustrated in Fig. 2(c).
  • The diagonal structure of this covariance matrix shows that the estimator error and the measurement are also uncorrelated, meaning that the estimator and the error orthogonally decompose the canonical coordinate.
  • This canonical correlation matrix is also the Wiener filter for estimating the canonical source coordinates from the canonical measurement coordinates.

A. Linear Dependence

  • This formula tells us that what matters is the intradependence within the source as measured by its direction cosines, the intradependence within the measurement as measured by its direction cosines, and the interdependence between source and measurement as measured by the direction cosines between them.
  • These latter direction cosines are measured in canonical coordinates, much as principal angles between subspaces are measured in something akin to canonical coordinates.

B. Relative Filtering Errors

  • The prior error covariance for the message vector is the source covariance $R_{xx}$, and the posterior error covariance is the Wiener error covariance $Q$.
  • The volumes of the concentration ellipses associated with these covariances are proportional to $\det^{1/2} R_{xx}$ and $\det^{1/2} Q$.
  • The relative volumes depend only on the direction cosines, as shown in (40).

C. Entropy and Rate

  • The entropy of the composite random vector is given in (41). Normally, the authors write this entropy as the conditional entropy of the source given the measurement, plus the entropy of the measurement.
  • Thus, the rate at which brings information about is determined by the direction cosines or squared canonical correlations between the source and the measurement.
  • The authors observe that the trace of the quantized error covariance consists of three terms: the infinite-precision filtering error, the bias-squared introduced by rank reduction, and the variance introduced by quantizing.
  • If the bit rate is specified, then the slicing level is adjusted to achieve it.
  • In the stationary limit, the SVD representation of the coherence matrix becomes a Fourier representation (55), in which the canonical correlations become the coherence spectrum, the canonical correlation matrix becomes a diagonal spectral mask, and the transform coders become Fourier matrices.

A. Error Variance, Spectral Flatness, and Entropy

  • The Toeplitz matrix has the error variance on its diagonal.
  • This formula shows the error spectrum to be the product of the source spectrum and a factor determined by the squared coherence spectrum.
  • The spectral flatness of the error spectrum is (62) which is the ratio of prediction error variance to prior variance.



Wiener Filters in Canonical Coordinates for
Transform Coding, Filtering, and Quantizing
Louis L. Scharf, Fellow, IEEE, and John K. Thomas
Abstract—Canonical correlations are used to decompose the Wiener filter into a whitening transform coder, a canonical filter, and a coloring transform decoder. The outputs of the whitening transform coder are called canonical coordinates; these are the coordinates that are reduced in rank and quantized in our finite-precision version of the Gauss–Markov theorem. Canonical correlations are, in fact, cosines of the canonical angles between a source vector and a measurement vector. They produce new formulas for error covariance, spectral flatness, and entropy.

Index Terms—Adaptive filtering, canonical coordinates, canonical correlations, quantizing, transform coding, Wiener filters.
I. INTRODUCTION

CANONICAL correlations were introduced by Hotelling [1], [2] and further developed by Anderson [3]. They are now a standard topic in texts on multivariate analysis [4], [5]. Canonical correlations are closely related to coherency spectra, and these spectra have engaged the interest of acousticians and others for decades. In this paper, we take a fresh look at canonical correlations, in a filtering context, and discover that they provide a natural decomposition of the Wiener filter. In this decomposition, the singular value decomposition (SVD) of a coherence matrix plays a central role: The right singular vectors are used in a whitening transform coder to produce canonical coordinates of the measurement vector; the diagonal singular value matrix is used as a canonical Wiener filter to estimate the canonical source coordinates from the canonical measurement coordinates; and the left singular vectors are used in a coloring transform decoder to reconstruct the estimate of the source. The canonical source coordinates and the canonical measurement coordinates are white, but their cross correlation is the diagonal singular value matrix of the SVD, which is also called the canonical correlation matrix.

The Wiener filter is reduced in rank by purging subdominant canonical measurement coordinates that have small squared canonical correlation with the canonical source coordinates. Quantizing is done by independently quantizing the canonical measurement coordinates to produce a quantized Wiener filter or a quantized Gauss–Markov theorem.
Manuscript received September 24, 1996; revised May 21, 1997. This work was supported by the National Science Foundation under Contract MIP-9529050 and by the Office of Naval Research under Contract N00014-89-J-1070. The associate editor coordinating the review of this paper and approving it for publication was Dr. José Principe.
L. L. Scharf is with the Department of Electrical and Computer Engineering, University of Colorado, Boulder, CO 80309-0425 USA (e-mail: scharf@boulder.colorado.edu).
J. K. Thomas is with the Data Fusion Corporation, Westminster, CO 80021 USA (e-mail: thomasjk@datafusion.com).
Publisher Item Identifier S 1053-587X(98)01999-0.
The abstract motivation for studying canonical correlations is that they provide a minimal description of the correlation between a source vector and a measurement vector. Canonical correlations are also cosines of canonical angles; therefore, some very illuminating geometrical insights are gained from a study of Wiener filters in canonical coordinates. The concrete motivation for studying canonical correlations is that they are the variables that determine how a Wiener filter can be reduced in rank and quantized for a finite-precision implementation.

Canonical correlations decompose formulas for error covariance, spectral flatness, and entropy, and they produce geometrical interpretations of all three. These decompositions show that canonical correlations play the role of direction cosines between random vectors, lending new insights into old formulas. All of these finite-dimensional results generalize to cyclic time series and to wide-sense stationary time series. Finally, experimental training data may be used in place of second-order information to produce formulas for adaptive Wiener filters in adaptive canonical coordinates.
II. PRELIMINARY OBSERVATIONS

Let us begin our discussion of canonical coordinates by revisiting an old problem in linear prediction. The zero-mean random vector $y = [y_1, y_2, \ldots, y_N]^T$ has covariance matrix

$R = E[yy^T].$   (1)

The determinant of $R$ may be written as

$\det R = \det R_{N-1} \cdot \sigma_N^2$   (2)

where $\sigma_N^2$ is the error variance for estimating the scalar $y_N$ from the vector $y_{N-1} = [y_1, \ldots, y_{N-1}]^T$, whose covariance is the leading block $R_{N-1}$. This error variance may be written as

$\sigma_N^2 = r_{NN} - r^T R_{N-1}^{-1} r$   (3)
$= r_{NN}(1 - \rho_N^2).$   (4)

We call $\rho_N^2$ the squared coherence between the scalar $y_N$ and the vector $y_{N-1}$ because it may be written as the product

$\rho_N^2 = \frac{r^T R_{N-1}^{-1} r}{r_{NN}}$   (5)
$= c^T c.$   (6)

The vector $c$ is the coherence between $y_N$ and $y_{N-1}$, or the cross correlation between the white random scalar $r_{NN}^{-1/2} y_N$ and the white random vector $R_{N-1}^{-1/2} y_{N-1}$:

$c = E\left[(R_{N-1}^{-1/2} y_{N-1})(r_{NN}^{-1/2} y_N)\right].$   (7)

This basic idea may be iterated to write $\det R$ as

$\det R = \prod_{n=1}^{N} \sigma_n^2$   (8)
$= \prod_{n=1}^{N} r_{nn}(1 - \rho_n^2)$   (9)

where $\rho_n^2$ is the squared coherence between the scalar $y_n$ and the vector $y_{n-1} = [y_1, \ldots, y_{n-1}]^T$. This formula for $\det R$ is the Gram determinant, with each prediction error variance written in terms of squared coherence. It provides a fine-grained resolution of entropy and spectral flatness:

$H = \frac{1}{2}\ln\left[(2\pi e)^N \det R\right] = \frac{N}{2}\ln(2\pi e) + \frac{1}{2}\sum_{n=1}^{N}\ln\left[r_{nn}(1 - \rho_n^2)\right]$   (10)

$\gamma = \frac{\det R}{\prod_{n=1}^{N} r_{nn}} = \prod_{n=1}^{N}(1 - \rho_n^2).$   (11)

Therefore, entropy is near its maximum, and spectral flatness is near 1, when the squared coherences between $y_n$ and $y_{n-1}$ are near zero for all $n$.

The sequence of Wiener filters that underlies this decomposition of $\det R$ is

$w_n = R_{n-1}^{-1} r_n = R_{n-1}^{-1/2}\, c_n\, r_{nn}^{1/2}$   (12)

which is a decomposition of the filter into a whitener $R_{n-1}^{-1/2}$, a coherence filter $c_n$, and a colorer $r_{nn}^{1/2}$. This idea is fundamental.
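The decomposition (8)–(9) is easy to verify numerically. The following sketch (numpy only; the matrix and variable names are ours, chosen for illustration) accumulates the prediction-error variances $r_{nn}(1-\rho_n^2)$ over the leading principal blocks of a covariance matrix and compares their product with $\det R$:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
R = A @ A.T  # a generic positive definite covariance matrix

det_product = 1.0
for n in range(R.shape[0]):
    Rn, r, rnn = R[:n, :n], R[:n, n], R[n, n]
    # squared coherence between y_n and its predecessors, as in (5)
    rho2 = (r @ np.linalg.solve(Rn, r)) / rnn if n > 0 else 0.0
    det_product *= rnn * (1.0 - rho2)  # prediction-error variance, as in (4)

print(np.isclose(det_product, np.linalg.det(R)))  # True
```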
III. CANONICAL CORRELATIONS IN A FILTERING CONTEXT

The context for our further development of canonical correlations is illustrated in Fig. 1. The source vector $x$ and the measurement vector $y$ are generated by Mother Nature. Father Nature views only the measurement vector $y$, and from it, he must estimate Mother Nature's source vector $x$. This problem is meaningful because the zero-mean random vectors $x$ and $y$ share the covariance matrix $R$:

$R = E\begin{bmatrix} x \\ y \end{bmatrix}[x^T \; y^T] = \begin{bmatrix} R_{xx} & R_{xy} \\ R_{yx} & R_{yy} \end{bmatrix}.$   (13)

Fig. 1. Filtering problem.

Fig. 2. Wiener filter in various coordinate systems: (a) standard coordinates; (b) coherence coordinates; (c) canonical coordinates.
A. Standard Coordinates

The linear MMSE estimator of $x$ from $y$ is $\hat{x} = Wy$, and the corresponding (orthogonal) error is $e = x - \hat{x}$. In standard coordinates, the Wiener filter $W$ and the error covariance matrix $Q$ are

$W = R_{xy} R_{yy}^{-1}$   (14)
$Q = R_{xx} - R_{xy} R_{yy}^{-1} R_{yx}.$   (15)

We shall call Fig. 2(a) the Wiener filter in standard coordinates.

The linear transformation

$\begin{bmatrix} e \\ y \end{bmatrix} = \begin{bmatrix} I & -W \\ 0 & I \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix}$   (16)

resolves the source vector $x$ and the measurement vector $y$ into orthogonal vectors $e$ and $y$, with respective covariances $Q$ and $R_{yy}$:

$E\begin{bmatrix} e \\ y \end{bmatrix}[e^T \; y^T] = \begin{bmatrix} Q & 0 \\ 0 & R_{yy} \end{bmatrix}.$   (17)

This is one of the Schur decompositions of $R$. From this formula, it follows that $\det R$ may be written as

$\det R = \det Q \cdot \det R_{yy}$   (18)
$= \det(R_{xx} - R_{xy} R_{yy}^{-1} R_{yx}) \det R_{yy}.$   (19)
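A minimal numpy sketch of (14), (15), and the Schur identity (18); the block names Rxx, Rxy, Ryy are our labels for the partition in (13):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 4
T = rng.standard_normal((n + m, n + m))
R = T @ T.T                               # joint covariance of (x, y)
Rxx, Rxy = R[:n, :n], R[:n, n:]
Ryx, Ryy = R[n:, :n], R[n:, n:]

W = Rxy @ np.linalg.inv(Ryy)              # Wiener filter, (14)
Q = Rxx - Rxy @ np.linalg.inv(Ryy) @ Ryx  # error covariance, (15)

# Schur identity behind (18)-(19): det R = det Q * det Ryy
print(np.isclose(np.linalg.det(R), np.linalg.det(Q) * np.linalg.det(Ryy)))
```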

B. Coherence Coordinates

The coherence matrix $C$ measures the cross-correlation between the white vectors $R_{xx}^{-1/2} x$ and $R_{yy}^{-1/2} y$:

$C = E\left[(R_{xx}^{-1/2} x)(R_{yy}^{-1/2} y)^T\right] = R_{xx}^{-1/2} R_{xy} R_{yy}^{-T/2}.$   (20)

Using coherence, we can refine the Wiener filter $W$ and its corresponding error covariance matrix $Q$ as

$W = R_{xx}^{1/2}\, C\, R_{yy}^{-1/2}$   (21)
$Q = R_{xx}^{1/2}(I - CC^T)R_{xx}^{T/2}.$   (22)

We shall call the matrix $CC^T$ the squared coherence matrix.

The corresponding Wiener filter, in coherence coordinates, is illustrated in Fig. 2(b). It resolves the source vector $x$ and the measurement vector $y$ into the error vector $e$ and the estimate $\hat{x}$ in three stages. The first stage whitens both $x$ and $y$ to produce the coherence coordinates $R_{xx}^{-1/2} x$ and $R_{yy}^{-1/2} y$, the second stage filters with the coherence filter $C$ to produce the estimator error and the estimator in coherence coordinates, and the third stage colors these to produce $e$ and $\hat{x}$. We shall call this the Wiener filter in coherence coordinates.

The refined linear transformation from $(x, y)$ to $(e, y)$ is

$\begin{bmatrix} R_{xx}^{-1/2} e \\ R_{yy}^{-1/2} y \end{bmatrix} = \begin{bmatrix} I & -C \\ 0 & I \end{bmatrix}\begin{bmatrix} R_{xx}^{-1/2} x \\ R_{yy}^{-1/2} y \end{bmatrix}.$   (23)

The corresponding refinement for the covariance matrix of $R_{xx}^{-1/2} e$ and $R_{yy}^{-1/2} y$ is

$\begin{bmatrix} I - CC^T & 0 \\ 0 & I \end{bmatrix}.$   (24)

The diagonal structure of this covariance matrix shows that the estimator error and the measurement, in coherence coordinates, are also uncorrelated, providing an orthogonal decomposition of the coherence coordinate $R_{xx}^{-1/2} x$ into the estimator and the error. It also shows that the covariance matrix for the error in coherence coordinates is $I - CC^T$.

The formula for $\det R$ is now

$\det R = \det R_{yy} \det Q$   (25)
$= \det R_{xx} \det R_{yy} \det(I - CC^T).$   (26)
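The coherence coordinates are equally direct to compute. The sketch below uses symmetric matrix square roots built from eigendecompositions, an assumption on our part (any factor satisfying $R^{1/2}R^{T/2} = R$ would do), and confirms that (21) and (22) reproduce the standard-coordinate formulas:

```python
import numpy as np

def spd_power(M, p):
    """M**p for a symmetric positive definite matrix M."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(w ** p) @ V.T

rng = np.random.default_rng(2)
n = 3
T = rng.standard_normal((2 * n, 2 * n))
R = T @ T.T
Rxx, Rxy, Ryx, Ryy = R[:n, :n], R[:n, n:], R[n:, :n], R[n:, n:]

C = spd_power(Rxx, -0.5) @ Rxy @ spd_power(Ryy, -0.5)  # coherence matrix, (20)
W = spd_power(Rxx, 0.5) @ C @ spd_power(Ryy, -0.5)     # (21)
Q = spd_power(Rxx, 0.5) @ (np.eye(n) - C @ C.T) @ spd_power(Rxx, 0.5)  # (22)

print(np.allclose(W, Rxy @ np.linalg.inv(Ryy)))              # True
print(np.allclose(Q, Rxx - Rxy @ np.linalg.inv(Ryy) @ Ryx))  # True
```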
C. Canonical Coordinates

We achieve one more level of refinement by replacing the coherence matrix $C$ by its SVD:

$C = FKG^T$   (27)
$F^T F = I = G^T G$   (28)
$K = \mathrm{diag}(k_1, k_2, \ldots, k_m).$   (29)

We shall call the orthogonal matrices $F$ and $G$ transform coders, the matrix $K$ the canonical correlation matrix, and the matrix $KK^T$ the squared canonical correlation matrix. The canonical correlation matrix $K$ is the cross correlation between the white vector $u = F^T R_{xx}^{-1/2} x$ and the white vector $v = G^T R_{yy}^{-1/2} y$:

$K = E[uv^T].$   (30)

The Wiener filter $W$ and error covariance matrix $Q$ in these canonical coordinates are

$W = R_{xx}^{1/2}\, F K G^T\, R_{yy}^{-1/2}$   (31)
$Q = R_{xx}^{1/2}\, F(I - KK^T)F^T\, R_{xx}^{T/2}.$   (32)

The corresponding Wiener filter, in canonical coordinates, is illustrated in Fig. 2(c). It resolves the source vector $x$ and the measurement vector $y$ into the error vector and the estimator in five stages. The first stage whitens both $x$ and $y$ to produce the coherence coordinates, the second stage transforms the coherence coordinates into the canonical coordinates $u$ and $v$, the third stage filters $v$ with the canonical filter $K$ to produce the estimator and the estimator error in canonical coordinates, the fourth stage transforms these back into the coherence coordinates, and the fifth stage colors them to produce $e$ and $\hat{x}$. We shall call this the Wiener filter in canonical coordinates.

The refined linear transformation from $(x, y)$ to $(e, y)$ is

$\begin{bmatrix} F^T R_{xx}^{-1/2} e \\ G^T R_{yy}^{-1/2} y \end{bmatrix} = \begin{bmatrix} I & -K \\ 0 & I \end{bmatrix}\begin{bmatrix} u \\ v \end{bmatrix}.$   (33)

The corresponding refinement of the covariance matrix for the error and the measurement in canonical coordinates is

$\begin{bmatrix} I - KK^T & 0 \\ 0 & I \end{bmatrix}.$   (34)

The diagonal structure of this covariance matrix shows that the estimator error and the measurement are also uncorrelated, meaning that the estimator $Kv$ and the error $u - Kv$ orthogonally decompose the canonical coordinate $u$. It also shows that the covariance matrix for the error in canonical coordinates is $I - KK^T$. The formula for $\det R$ is now

$\det R = \det R_{xx} \det R_{yy} \det(I - KK^T)$   (35)
$= \det R_{xx} \det R_{yy} \prod_{i=1}^{m}(1 - k_i^2).$   (36)

This formula shows that the squared canonical correlations $k_i^2$ are objects of fundamental importance for filtering. We pursue this point in Section IV.
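Continuing the sketch above (it reuses spd_power and the covariance blocks), the canonical decomposition (27)–(32) is one SVD away, and the determinant identity (35)–(36) falls out of the singular values:

```python
F, k, Gt = np.linalg.svd(C)  # C = F K G^T, (27)-(29)
K = np.diag(k)               # canonical correlation matrix

# Wiener filter reassembled from whitener, coders, canonical filter, colorer, (31)
W_canonical = spd_power(Rxx, 0.5) @ F @ K @ Gt @ spd_power(Ryy, -0.5)
print(np.allclose(W_canonical, Rxy @ np.linalg.inv(Ryy)))  # True

# error covariance colored out of canonical coordinates, (32)
Q_canonical = spd_power(Rxx, 0.5) @ F @ (np.eye(n) - K @ K.T) @ F.T @ spd_power(Rxx, 0.5)
print(np.allclose(Q_canonical, Q))                         # True

# det R = det Rxx * det Ryy * prod(1 - k_i^2), (35)-(36)
print(np.isclose(np.linalg.det(R),
                 np.linalg.det(Rxx) * np.linalg.det(Ryy) * np.prod(1 - k**2)))
```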

IV. FILTERING FORMULAS IN CANONICAL COORDINATES

We summarize as follows. The Wiener filter in canonical coordinates replaces the source and measurement vectors in standard coordinates with source and measurement vectors in canonical coordinates. In these coordinates, the source and measurement are white but diagonally cross correlated according to the canonical correlation matrix $K$. This canonical correlation matrix is also the Wiener filter for estimating the canonical source coordinates from the canonical measurement coordinates. The error covariance matrix associated with Wiener filtering in these canonical coordinates is just $I - KK^T$.

Recall that the canonical correlations are defined as

$k_i = E[u_i v_i]$   (37)
$= \cos\theta_i$   (38)

so that each canonical correlation $k_i$ measures the cosine of the angle between two unit variance random variables: one drawn from the canonical source coordinates and one drawn from the canonical measurement coordinates. For this reason, we call the squared canonical correlations $k_i^2$ direction cosines. By making the canonical variables diagonally correlated, we have uncoupled the measurement of one direction cosine from the measurement of another.
A. Linear Dependence

We think of the Hadamard ratio $\det R / \prod_i R_{ii}$ as a measure of linear dependence of the variables $x$ and $y$. Using the results of (8) and (35), we may write the Hadamard ratio as the product

$\frac{\det R}{\prod_i (R_{xx})_{ii} \prod_j (R_{yy})_{jj}} = \frac{\det R_{xx}}{\prod_i (R_{xx})_{ii}} \cdot \frac{\det R_{yy}}{\prod_j (R_{yy})_{jj}} \cdot \prod_{i=1}^{m}(1 - k_i^2).$   (39)

This formula tells us that what matters is the intradependence within $x$ as measured by its direction cosines, the intradependence within $y$ as measured by its direction cosines, and the interdependence between $x$ and $y$ as measured by the direction cosines between $x$ and $y$. These latter direction cosines are measured in canonical coordinates, much as principal angles between subspaces are measured in something akin to canonical coordinates. They are scale invariant.
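A self-contained numeric check of the Hadamard-ratio factorization (39); all names are illustrative, and the canonical correlations come from the SVD of the coherence matrix:

```python
import numpy as np

def hadamard(M):
    """Hadamard ratio det M / prod(diag M)."""
    return np.linalg.det(M) / np.prod(np.diag(M))

def spd_power(M, p):
    w, V = np.linalg.eigh(M)
    return V @ np.diag(w ** p) @ V.T

rng = np.random.default_rng(3)
n = 4
T = rng.standard_normal((2 * n, 2 * n))
R = T @ T.T
Rxx, Rxy, Ryy = R[:n, :n], R[:n, n:], R[n:, n:]

k = np.linalg.svd(spd_power(Rxx, -0.5) @ Rxy @ spd_power(Ryy, -0.5),
                  compute_uv=False)  # canonical correlations

print(np.isclose(hadamard(R),
                 hadamard(Rxx) * hadamard(Ryy) * np.prod(1 - k**2)))  # True
```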
B. Relative Filtering Errors

The prior error covariance for the message vector $x$ is $R_{xx}$, and the posterior error covariance for the error $e$ is $Q$. The volumes of the concentration ellipses associated with these covariances are proportional to $\det^{1/2} R_{xx}$ and $\det^{1/2} Q$. The relative volumes depend only on the direction cosines $k_i^2$:

$\frac{\det^{1/2} Q}{\det^{1/2} R_{xx}} = \prod_{i=1}^{m}(1 - k_i^2)^{1/2}.$   (40)
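Continuing with the blocks and canonical correlations k from the check above, the volume ratio (40) is two lines:

```python
Q = Rxx - Rxy @ np.linalg.solve(Ryy, Rxy.T)  # posterior error covariance
ratio = np.sqrt(np.linalg.det(Q) / np.linalg.det(Rxx))
print(np.isclose(ratio, np.prod(np.sqrt(1 - k**2))))  # True, per (40)
```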
C. Entropy and Rate

The entropy of the random vector $[x^T \; y^T]^T$ is

$H = \frac{1}{2}\ln\left[(2\pi e)^{N+M} \det R\right].$   (41)

Normally, we write this entropy as the conditional entropy of $x$ given $y$, plus the entropy of $y$. The conditional entropy, or equivocation, is therefore

$H(x \mid y) = \frac{1}{2}\ln\left[(2\pi e)^N \det R_{xx}\right] + \frac{1}{2}\ln\prod_{i=1}^{m}(1 - k_i^2)$   (42)

and the direction cosines determine how $y$ brings information about $x$ to reduce its entropy from its prior value of $\frac{1}{2}\ln[(2\pi e)^N \det R_{xx}]$. The second term on the right-hand side of this equation is the negative of rate in canonical coordinates. Thus, the rate at which $y$ brings information about $x$ is determined by the direction cosines or squared canonical correlations between the source and the measurement.
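For Gaussian vectors, (41) and (42) can be checked with the same variables: the equivocation is the entropy of the posterior covariance Q, and it equals the prior entropy plus the (negative) rate term built from the direction cosines.

```python
def gaussian_entropy(M):
    """Entropy (in nats) of a zero-mean Gaussian with covariance M."""
    N = M.shape[0]
    return 0.5 * np.log((2 * np.pi * np.e) ** N * np.linalg.det(M))

rate_term = 0.5 * np.log(np.prod(1 - k**2))  # negative of the rate, as in (42)
print(np.isclose(gaussian_entropy(Q), gaussian_entropy(Rxx) + rate_term))  # True
```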
V. RANK REDUCTION FOR TRANSFORM CODING, FILTERING, AND QUANTIZING

The Wiener filter in canonical coordinates is a filterbank idea. That is, the measurement is decomposed into canonical coordinates that bring information about the canonical coordinates of the source. It is also a spread-spectrum idea because the canonical coordinates are white. The question of rank reduction and bit allocation for finite-precision Wiener filtering or, equivalently, for source coding from noisy measurements, is clarified in canonical coordinates. The problem is to quantize the canonical coordinates $v$ so that the trace of the error covariance matrix $Q$ is minimized. The error covariance matrix and its trace are

$Q = R_{xx}^{1/2}\, F(I - KK^T)F^T\, R_{xx}^{T/2}$   (43)
$\mathrm{tr}\, Q = \sum_{i=1}^{m} e_i(1 - k_i^2)$   (44)

where the $e_i = \|f_i\|^2$ are the energies of the “impulse responses” $f_i$ for the coloring (or synthesizing) transform decoder

$R_{xx}^{1/2} F = [f_1 \; f_2 \; \cdots \; f_m].$   (45)

If the canonical measurement coordinates that are weakly correlated with the canonical source coordinates are purged
and the remaining are uniformly quantized with $b_i$ bits, then the resulting error covariance matrix for estimating the source vector $x$ from the reduced-rank and quantized canonical measurement vector has trace

$\mathrm{tr}\, Q_Q = \sum_{i=1}^{m} e_i(1 - k_i^2) + \sum_{i > r} e_i k_i^2 + \sum_{i \le r} e_i k_i^2\, 2^{-2b_i}.$   (46)

In this latter form, we observe that $\mathrm{tr}\, Q_Q$ consists of three terms: the infinite-precision filtering error, the bias-squared introduced by rank reduction, and the variance introduced by quantizing. The trick is to properly balance the second and third. To this end, we will consider the rate-distortion problem

minimize $\mathrm{tr}\, Q_Q$ under the constraint $\sum_{i=1}^{r} b_i = B.$   (47)

Using the standard procedure for minimizing with a constraint (see, for example, [9] and [10]), we obtain the solution (48)–(51) for the bit allocation $b_i$, the slicing level, the rank $r$, and the minimum achievable distortion $D$. These formulas generalize the formulas of [9] by providing a solution to the problem of uniformly quantizing the Wiener filter or quantizing the Gauss–Markov theorem. They may be interpreted as follows.

If the bit rate $B$ is specified, then the slicing level is adjusted to achieve it. The slicing level determines the bit allocation $b_i$, the rank $r$, and the minimum achievable distortion $D$. Conversely, if the distortion $D$ is specified, the slicing level is adjusted to achieve it. This determines $b_i$, $r$, and the minimum rate $B$. These formulas are illustrated in Fig. 3 for the idealized case where the $e_i$ are unity. The components of distortion illustrate the tradeoff between bias and variance.
Fig. 3. Components of distortion. (a) Squared canonical correlation. (b) Infinite-precision distortion. (c) Extra components of distortion due to rank reduction and quantizing. (d) Finite-precision distortion.
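The distortion bookkeeping in (44)–(46) is easy to simulate. The sketch below models the three terms for a given bit allocation and spends a bit budget greedily; the unit quantizer constant and the greedy search are our illustrative stand-ins, not the paper's closed-form solution (48)–(51).

```python
import numpy as np

def distortion(k, e, bits):
    """tr Q_Q in the spirit of (46): filtering error + rank-reduction bias
    + quantizing variance. k: canonical correlations; e: coloring energies
    ||f_i||^2; bits: nonnegative bit allocation b_i (0 bits = purged)."""
    bits = np.asarray(bits)
    kept = bits > 0
    filt = np.sum(e * (1 - k**2))          # infinite-precision filtering error
    bias = np.sum(e[~kept] * k[~kept]**2)  # coordinates purged by rank reduction
    var = np.sum(e[kept] * k[kept]**2 * 2.0 ** (-2 * bits[kept]))
    return filt + bias + var

def greedy_bits(k, e, B):
    """Spend B bits one at a time on the coordinate with the best gain."""
    bits = np.zeros(len(k), dtype=int)
    for _ in range(B):
        trials = [bits + np.eye(len(k), dtype=int)[i] for i in range(len(k))]
        bits = min(trials, key=lambda b: distortion(k, e, b))
    return bits

k = np.array([0.95, 0.80, 0.50, 0.10])  # canonical correlations
e = np.ones(4)                          # idealized unit energies, as in Fig. 3
b = greedy_bits(k, e, B=8)
print(b, distortion(k, e, b))  # most bits go to the strongest coordinates
```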
VI. CANONICAL TIME SERIES

If $x$ and $y$ are jointly stationary random vectors whose dimensions increase without bound (that is, they are stationary time series), then all of the correlation matrices in these formulas are infinite Toeplitz matrices with Fourier representations

$R_{xy} \leftrightarrow S_{xy}(e^{j\theta}).$   (52)

Furthermore, if the time series are not perfectly predictable (that is, the power spectra $S_{xx}(e^{j\theta})$ and $S_{yy}(e^{j\theta})$ satisfy the Szegő conditions), then $S_{xx}$ and $S_{yy}$ may be spectrally factored as

$S_{xx}(e^{j\theta}) = \sigma_x^2 H_x(e^{j\theta}) H_x^*(e^{j\theta}), \quad S_{yy}(e^{j\theta}) = \sigma_y^2 H_y(e^{j\theta}) H_y^*(e^{j\theta})$   (53)

where the filters $H_x$ and $H_y$ are minimum phase, meaning that $H_x$, $H_x^{-1}$, $H_y$, and $H_y^{-1}$ are causal and stable filters. Then, the various square roots in the filtering formulas have the Fourier representations

$R_{xx}^{1/2} \leftrightarrow \sigma_x H_x(e^{j\theta})$   (54a)
$R_{yy}^{1/2} \leftrightarrow \sigma_y H_y(e^{j\theta})$   (54b)
$C \leftrightarrow \frac{S_{xy}(e^{j\theta})}{\left[S_{xx}(e^{j\theta}) S_{yy}(e^{j\theta})\right]^{1/2}}.$   (54c)

The SVD representation for $C$ becomes a Fourier representation; therefore

$C = VKV^*$   (55)

where $K$ is the coherence spectrum arranged as a diagonal spectral mask and $V$, $V^*$ are Fourier matrices.
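In this stationary limit, the canonical correlations become the coherence spectrum of (54c), which can be estimated directly from data. A brief sketch using scipy.signal's Welch-type estimators (the two test signals are hypothetical):

```python
import numpy as np
from scipy import signal

fs = 1000.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(4)
common = np.sin(2 * np.pi * 50 * t)  # component shared by both series
x = common + rng.standard_normal(t.size)
y = 0.5 * common + rng.standard_normal(t.size)

f, Sxy = signal.csd(x, y, fs=fs, nperseg=1024)
_, Sxx = signal.welch(x, fs=fs, nperseg=1024)
_, Syy = signal.welch(y, fs=fs, nperseg=1024)

coh = np.abs(Sxy) / np.sqrt(Sxx * Syy)  # coherence spectrum k(e^{j theta})
print(f[np.argmax(coh)])                # peaks near the shared 50-Hz line
```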

Citations
Journal ArticleDOI
TL;DR: The results show that exploiting both audio and visual modalities in a multistream hidden Markov model based scheme clearly improves performance relative to either audio or visual-only estimation.
Abstract: We are interested in recovering aspects of vocal tract's geometry and dynamics from speech, a problem referred to as speech inversion. Traditional audio-only speech inversion techniques are inherently ill-posed since the same speech acoustics can be produced by multiple articulatory configurations. To alleviate the ill-posedness of the audio-only inversion process, we propose an inversion scheme which also exploits visual information from the speaker's face. The complex audiovisual-to-articulatory mapping is approximated by an adaptive piecewise linear model. Model switching is governed by a Markovian discrete process which captures articulatory dynamic information. Each constituent linear mapping is effectively estimated via canonical correlation analysis. In the described multimodal context, we investigate alternative fusion schemes which allow interaction between the audio and visual modalities at various synchronization levels. For facial analysis, we employ active appearance models (AAMs) and demonstrate fully automatic face tracking and visual feature extraction. Using the AAM features in conjunction with audio features such as Mel frequency cepstral coefficients (MFCCs) or line spectral frequencies (LSFs) leads to effective estimation of the trajectories followed by certain points of interest in the speech production system. We report experiments on the QSMT and MOCHA databases which contain audio, video, and electromagnetic articulography data recorded in parallel. The results show that exploiting both audio and visual modalities in a multistream hidden Markov model based scheme clearly improves performance relative to either audio or visual-only estimation.

43 citations

Journal ArticleDOI
TL;DR: This work proposes finite-length multi-input multi-output (MIMO) equalization methods for "smart" antenna arrays using the statistical theory of canonical correlations and shows that the proposed methods are related to maximum likelihood reduced-rank channel and noise estimation algorithms in unknown spatially correlated noise.
Abstract: We propose finite-length multi-input multi-output (MIMO) equalization methods for "smart" antenna arrays using the statistical theory of canonical correlations. We show that the proposed methods are related to maximum likelihood (ML) reduced-rank channel and noise estimation algorithms in unknown spatially correlated noise as well as to several previously developed equalization schemes.

40 citations

Proceedings ArticleDOI
15 Apr 2007
TL;DR: The proposed technique provides a higher spectral resolution than the well-known Welch's method, and it also avoids the signal mismatch problem associated with the minimum variance distortionless response (MVDR) based approach.
Abstract: In this paper, a new technique for the estimation of the magnitude squared coherence (MSC) spectrum is proposed. The method is based on the relationship between the MSC and the canonical correlation analysis (CCA) of stationary time series. Particularly, the canonical correlations coincide asymptotically with the square roots of the MSC, which is exploited in the paper to obtain an estimate of the MSC based on a reduced-rank version of the estimated coherence matrix. The proposed technique provides a higher spectral resolution than the well-known Welch's method, and it also avoids the signal mismatch problem associated with the minimum variance distortionless response (MVDR) based approach. Finally, the performance of the proposed method is evaluated by means of some numerical examples.

39 citations


Cites background or methods from "Wiener filters in canonical coordin..."

  • ...Canonical correlation analysis (CCA) is a well-known technique in multivariate statistical analysis which has been widely used in communications and statistical signal processing [1, 10, 11] problems....

    [...]

  • ..., x2[n]] T are associated to stationary time series, and in the cases of n → ∞ [1], or circulant channels [9], the associated whitened canonical vectors are the Fourier vectors fk, and the MSC is given by the square of the canonical correlations....

    [...]

  • ...For instance, the MSC provides a measure of the mutual information between two signals [1]....

    [...]

  • ...$\gamma^2_{x_1 x_2}(\omega) = |S_{x_1 x_2}(\omega)|^2 / \left[S_{x_1 x_1}(\omega)\, S_{x_2 x_2}(\omega)\right] \le 1$, and it provides a measure of the rate at which one signal brings information about the other [1]....

    [...]

Journal ArticleDOI
TL;DR: An iterative quadratic minimum distance (IQMD) algorithm for computing the reduced-rank Wiener filter is presented, shown to be globally and exponentially convergent under some weak conditions.
Abstract: The reduced-rank Wiener filter (RRWF) is a generic tool for data compression and filtering. This letter presents an iterative quadratic minimum distance (IQMD) algorithm for computing the RRWF. Although it is iterative in nature, the IQMD algorithm is shown to be globally and exponentially convergent under some weak conditions. While the conventional algorithms for computing the RRWF require on the order of n^3 flops, the IQMD algorithm requires only on the order of n^2 flops at each iteration, where n is the dimension of the data. The number of iterations required in practice is often small due to the exponential convergence rate of the IQMD.

37 citations


Cites background from "Wiener filters in canonical coordin..."

  • ...One such development currently under way is to consider the computation of the reduced rank maximum likelihood estimator of a multivariate regression system [6] and the computation of the canonical coordinates elaborated in [7]....

    [...]

Journal ArticleDOI
TL;DR: Maximum likelihood methods for space-time fading channel estimation with an antenna array in spatially correlated noise having unknown covariance are presented and coherent matched-filter and concentrated-likelihood receivers are proposed that account for the spatial noise covariance and analyze their performance.
Abstract: We present maximum likelihood (ML) methods for space-time fading channel estimation with an antenna array in spatially correlated noise having unknown covariance; the results are applied to symbol detection. The received signal is modeled as a linear combination of multipath-delayed and Doppler-shifted copies of the transmitted waveform. We consider structured and unstructured array response models and derive the Cramer-Rao bound (CRB) for the unknown directions of arrival, time delays, and Doppler shifts. We also develop methods for spatial and temporal interference suppression. Finally, we propose coherent matched-filter and concentrated-likelihood receivers that account for the spatial noise covariance and analyze their performance.

37 citations


Cites background from "Wiener filters in canonical coordin..."

  • ...th components of estimated canonical coordinates of the data and basis functions, which is defined (see [36]) as...

    [...]

References
Journal ArticleDOI
TL;DR: This final installment of the paper considers the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now.
Abstract: In this final installment of the paper we consider the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now. To a considerable extent the continuous case can be obtained through a limiting process from the discrete case by dividing the continuum of messages and signals into a large but finite number of small regions and calculating the various parameters involved on a discrete basis. As the size of the regions is decreased these parameters in general approach as limits the proper values for the continuous case. There are, however, a few new effects that appear and also a general change of emphasis in the direction of specialization of the general results to particular cases.

65,425 citations

Book
14 Sep 1984
TL;DR: In this article, the distribution of the Mean Vector and the Covariance Matrix and the Generalized T2-Statistic is analyzed. But the distribution is not shown to be independent of sets of Variates.
Abstract: Preface to the Third Edition. Preface to the Second Edition. Preface to the First Edition. 1. Introduction. 2. The Multivariate Normal Distribution. 3. Estimation of the Mean Vector and the Covariance Matrix. 4. The Distributions and Uses of Sample Correlation Coefficients. 5. The Generalized T2-Statistic. 6. Classification of Observations. 7. The Distribution of the Sample Covariance Matrix and the Sample Generalized Variance. 8. Testing the General Linear Hypothesis: Multivariate Analysis of Variance. 9. Testing Independence of Sets of Variates. 10. Testing Hypotheses of Equality of Covariance Matrices and Equality of Mean Vectors and Covariance Matrices. 11. Principal Components. 12. Canonical Correlations and Canonical Variables. 13. The Distributions of Characteristic Roots and Vectors. 14. Factor Analysis. 15. Patterns of Dependence; Graphical Models. Appendix A: Matrix Theory. Appendix B: Tables. References. Index.

9,693 citations

Book ChapterDOI
TL;DR: The concept of correlation and regression may be applied not only to ordinary one-dimensional variates but also to variates of two or more dimensions as discussed by the authors, where the correlation of the horizontal components is ordinarily discussed, whereas the complex consisting of horizontal and vertical deviations may be even more interesting.
Abstract: Concepts of correlation and regression may be applied not only to ordinary one-dimensional variates but also to variates of two or more dimensions. Marksmen side by side firing simultaneous shots at targets, so that the deviations are in part due to independent individual errors and in part to common causes such as wind, provide a familiar introduction to the theory of correlation; but only the correlation of the horizontal components is ordinarily discussed, whereas the complex consisting of horizontal and vertical deviations may be even more interesting. The wind at two places may be compared, using both components of the velocity in each place. A fluctuating vector is thus matched at each moment with another fluctuating vector. The study of individual differences in mental and physical traits calls for a detailed study of the relations between sets of correlated variates. For example the scores on a number of mental tests may be compared with physical measurements on the same persons. The questions then arise of determining the number and nature of the independent relations of mind and body shown by these data to exist, and of extracting from the multiplicity of correlations in the system suitable characterizations of these independent relations. As another example, the inheritance of intelligence in rats might be studied by applying not one but s different mental tests to N mothers and to a daughter of each

6,122 citations


"Wiener filters in canonical coordin..." refers methods in this paper

  • ...INTRODUCTION CANONICAL correlations were introduced by Hotelling [1], [2] and further developed by Anderson [3]....

    [...]

Journal ArticleDOI
01 Jan 1985

992 citations