1684 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 53, NO. 5, MAY 2005
Robust Minimum Variance Beamforming
Robert G. Lorenz, Member, IEEE, and Stephen P. Boyd, Fellow, IEEE
Abstract—This paper introduces an extension of minimum variance beamforming that explicitly takes into account variation or uncertainty in the array response. Sources of this uncertainty include imprecise knowledge of the angle of arrival and uncertainty in the array manifold.
In our method, uncertainty in the array manifold is explicitly modeled via an ellipsoid that gives the possible values of the array for a particular look direction. We choose weights that minimize the total weighted power output of the array, subject to the constraint that the gain should exceed unity for all array responses in this ellipsoid. The robust weight selection process can be cast as a second-order cone program that can be solved efficiently using Lagrange multiplier techniques. If the ellipsoid reduces to a single point, the method coincides with Capon's method.
We describe in detail several methods that can be used to derive an appropriate uncertainty ellipsoid for the array response. We form separate uncertainty ellipsoids for each component in the signal path (e.g., antenna, electronics) and then determine an aggregate uncertainty ellipsoid from these. We give new results for modeling the element-wise products of ellipsoids. We demonstrate the robust beamforming and the ellipsoidal modeling methods with several numerical examples.
Index Terms—Ellipsoidal calculus, Hadamard product, robust beamforming, second-order cone programming.
I. INTRODUCTION
CONSIDER an array of n sensors. Let a(θ) ∈ C^n denote the response of the array to a plane wave of unit amplitude arriving from direction θ; we will refer to a(θ) as the array manifold. We assume that a narrowband source s(t) is impinging on the array from angle θ and that the source is in the far field of the array. The vector array output y(t) ∈ C^n is then

    y(t) = a(θ)s(t) + v(t)    (1)

where a(θ) includes effects such as coupling between elements and subsequent amplification; v(t) ∈ C^n is a vector of additive noises representing the effect of undesired signals, such as thermal noise or interference. We denote the sampled array output by y(t_i). Similarly, the combined beamformer output is given by

    y_c(t) = w*y(t)

where w ∈ C^n is a vector of weights, i.e., design variables, and (·)* denotes the conjugate transpose. The goal is to make w*a(θ) ≈ 1 and w*v(t) small, in which case y_c(t) recovers s(t), i.e., y_c(t) ≈ s(t). The gain of the
Manuscript received January 20, 2002; revised April 5, 2004. This work was
supported by Thales Navigation. The associate editor coordinating the review
of this manuscript and approving it for publication was Dr. Joseph Tabrikian.
R. G. Lorenz is with Beceem Communications, Inc., Santa Clara, CA 95054
USA (e-mail: blorenz@beceem.com).
S. P. Boyd is with the Department of Electrical Engineering, Stanford Uni-
versity, Stanford, CA 94305 USA (e-mail: boyd@stanford.edu).
Digital Object Identifier 10.1109/TSP.2005.845436
weighted array response in direction θ is w*a(θ); the expected effect of the noise and interferences at the combined output is given by E|w*v|² = w*R_v w, where R_v = E[vv*], and E denotes the expected value. If we presume that a(θ) and R_v are known, we may choose w as the optimal solution of

    minimize   w*R_v w
    subject to w*a(θ) = 1.    (2)
Minimum variance beamforming is a variation on (2) in which we replace R_v with an estimate of the received signal covariance derived from recently received samples of the array output, e.g.,

    R = (1/N) Σ_{i=1}^{N} y(t_i) y(t_i)*.    (3)

The minimum variance beamformer (MVB) is chosen as the optimal solution of

    minimize   w*Rw
    subject to w*a(θ) = 1.    (4)

This is commonly referred to as Capon's method [1]. Equation (4) has an analytical solution given by

    w_mv = R^{-1} a(θ) / (a(θ)* R^{-1} a(θ)).    (5)

Equation (4) also differs from (2) in that the power expression we are minimizing includes the effect of the desired signal plus noise. The constraint w*a(θ) = 1 in (4) prevents the gain in the direction of the signal from being reduced.
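As a concrete illustration (not from the paper), the sample covariance (3) and the Capon solution (5) can be sketched in NumPy for a half-wavelength uniform linear array; the steering-vector model and all parameter values below are assumptions chosen for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 8, 200                      # elements, snapshots (assumed values)

def steer(theta):
    """Steering vector of a half-wavelength-spaced ULA (assumed model)."""
    k = np.arange(n)
    return np.exp(-1j * np.pi * k * np.sin(theta))

theta0 = 0.0                       # look direction
a = steer(theta0)

# Simulate snapshots: unit-power source plus white noise, then form (3).
s = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
V = (rng.standard_normal((n, N)) + 1j * rng.standard_normal((n, N))) * 0.3
Y = np.outer(a, s) + V
R = (Y @ Y.conj().T) / N           # sample covariance, eq. (3)

# Capon / MVB weights, eq. (5): w = R^{-1} a / (a^* R^{-1} a)
Ria = np.linalg.solve(R, a)
w = Ria / (a.conj() @ Ria)

assert abs(w.conj() @ a - 1) < 1e-10   # unity gain in the look direction
```

The normalization in the last step enforces the constraint of (4) exactly, whatever the covariance estimate.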
A measure of the effectiveness of a beamformer is given by the signal-to-interference-plus-noise ratio (SINR)

    SINR = σ_s² |w*a(θ)|² / (w* R_v w)    (6)

where σ_s² is the power of the signal of interest. The assumed value of the array manifold a(θ) may differ from the actual value for a host of reasons, including imprecise knowledge of the signal's angle of arrival θ. Unfortunately, the SINR of Capon's method can degrade catastrophically for modest differences between the assumed and actual values of the array manifold. We now review several techniques for minimizing the sensitivity of the MVB to modeling errors in the array manifold.
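This sensitivity is easy to reproduce numerically. The sketch below (assumed ULA model, arbitrary SNR and mismatch values, not from the paper) evaluates (6) for Capon weights computed from the exact covariance, steered first to the true angle and then to a slightly wrong one.

```python
import numpy as np

n = 10
k = np.arange(n)
steer = lambda th: np.exp(-1j * np.pi * k * np.sin(th))

a_true = steer(0.0)                      # actual arrival angle
a_bad = steer(np.deg2rad(3.0))           # assumed (mismatched) angle
sigma_s2 = 100.0                         # strong signal (assumed value)

Rv = np.eye(n)                           # white-noise covariance
R = sigma_s2 * np.outer(a_true, a_true.conj()) + Rv   # exact covariance

def capon(R, a):
    x = np.linalg.solve(R, a)
    return x / (a.conj() @ x)

def sinr(w):                             # eq. (6)
    return sigma_s2 * abs(w.conj() @ a_true) ** 2 / (w.conj() @ Rv @ w).real

w_good = capon(R, a_true)
w_bad = capon(R, a_bad)

# A 3-degree pointing error collapses the SINR of Capon's method.
assert sinr(w_bad) < 0.01 * sinr(w_good)
```

The collapse occurs because, under mismatch, the strong desired signal looks like interference to the beamformer, which then nulls it.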
A. Previous Work
One popular method to address uncertainty in the array re-
sponse or angle of arrival is to impose a set of unity-gain con-
straints for a small spread of angles around the nominal look
direction. These are known in the literature as point mainbeam
1053-587X/$20.00 © 2005 IEEE

constraints or neighboring location constraints [2]. The beamforming problem with point mainbeam constraints can be expressed as

    minimize   w*Rw
    subject to A*w = f    (7)

where A is an n × L matrix of array responses in the L constrained directions, and f is an L × 1 vector specifying the desired response in each constrained direction. To achieve wider responses, additional constraint points are added. We may similarly constrain the derivative of the weighted array output to be zero at the desired look angle. This constraint can be expressed in the same framework as (7); in this case, we let A be the derivative of the array manifold with respect to look angle and f = 0. These are called derivative mainbeam constraints; this derivative may be approximated using regularization methods. Point and derivative mainbeam constraints may also be used in conjunction with one another. The minimizer of (7) has an analytical solution given by

    w = R^{-1} A (A* R^{-1} A)^{-1} f.    (8)
Each constraint removes one of the remaining degrees of freedom available to reject undesired signals; this is particularly significant for an array with a small number of elements. We may overcome this limitation by using a low-rank approximation to the constraints [3]. The best rank-k approximation to A, in a least-squares sense, is given by A_k = UΣV*, where Σ ∈ R^{k×k} is a diagonal matrix consisting of the k largest singular values, U ∈ C^{n×k} is a matrix whose columns are the corresponding left singular vectors of A, and V ∈ C^{L×k} is a matrix whose columns are the corresponding right singular vectors of A. The reduced-rank constraint equations can be written as (UΣV*)*w = f or, equivalently,

    U*w = (VΣ)† f    (9)

where (·)† denotes the Moore-Penrose pseudoinverse. Using (8), we compute the beamformer using the reduced-rank constraints as

    w_epc = R^{-1} U (U* R^{-1} U)^{-1} (VΣ)† f.

This technique, which is used in source localization, is referred to as MVB with environmental perturbation constraints (MV-EPC); see [2] and the references contained therein.
Unfortunately, it is not clear how best to pick the additional constraints or, in the case of the MV-EPC, the rank of the constraints. The effect of additional constraints on the design specifications appears to be difficult to predict.
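For concreteness, here is one reading of the reduced-rank construction as a NumPy sketch; the ULA model, the rank choice, the placeholder covariance, and the interpretation of (9) as U*w = (VΣ)†f are assumptions made for illustration.

```python
import numpy as np

n, L, k = 8, 5, 2                       # elements, constraints, rank (assumed)
kk = np.arange(n)
steer = lambda th: np.exp(-1j * np.pi * kk * np.sin(th))

# Constraint matrix A: unity gain over a small angular spread, as in (7).
angles = np.deg2rad(np.linspace(-2, 2, L))
A = np.column_stack([steer(t) for t in angles])
f = np.ones(L)

R = np.eye(n) + 0.5 * np.diag(np.arange(1, n + 1))   # placeholder covariance

# Best rank-k approximation of A via the SVD.
U, s, Vh = np.linalg.svd(A, full_matrices=False)
Uk, Sk, Vk = U[:, :k], np.diag(s[:k]), Vh[:k, :].conj().T

# Reduced-rank constraints (9): Uk^* w = (Vk Sk)^† f, then the
# closed form (8) applied to the reduced constraint set.
g = np.linalg.pinv(Vk @ Sk) @ f
RiU = np.linalg.solve(R, Uk)
w = RiU @ np.linalg.solve(Uk.conj().T @ RiU, g)

assert np.allclose(Uk.conj().T @ w, g)   # reduced constraints hold exactly
```

Only k degrees of freedom are consumed by the constraints, leaving n − k for interference rejection.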
Regularization methods have also been used in beamforming. One technique, referred to in the literature as diagonal loading, chooses the beamformer to minimize the sum of the weighted array output power plus a penalty term proportional to the square of the norm of the weight vector. The gain in the assumed angle of arrival (AOA) of the desired signal is constrained to be unity. The beamformer is chosen as the optimal solution of

    minimize   w*Rw + μ w*w
    subject to w*a(θ) = 1.    (10)

The parameter μ > 0 penalizes large values of w and has the general effect of detuning the beamformer response. The regularized least-squares problem (10) has an analytical solution given by

    w_reg = (R + μI)^{-1} a(θ) / (a(θ)* (R + μI)^{-1} a(θ)).    (11)
Gershman [4] and Johnson and Dudgeon [5] provide a survey of
these methods; see also the references contained therein. Similar
ideas have been used in adaptive algorithms; see [6].
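A minimal NumPy sketch of diagonal loading, eqs. (10)-(11); the array model and the loading value μ below are arbitrary stand-ins.

```python
import numpy as np

rng = np.random.default_rng(2)
n, N, mu = 8, 100, 1.0                 # mu: loading level (assumed value)
k = np.arange(n)
a = np.exp(-1j * np.pi * k * np.sin(0.1))

Y = rng.standard_normal((n, N)) + 1j * rng.standard_normal((n, N))
R = (Y @ Y.conj().T) / N               # sample covariance

# Diagonally loaded beamformer, eq. (11): (R + mu*I)^{-1} a,
# normalized so that the gain in the assumed AOA is unity.
x = np.linalg.solve(R + mu * np.eye(n), a)
w_reg = x / (a.conj() @ x)
assert abs(w_reg.conj() @ a - 1) < 1e-10

# Very large mu detunes the response toward the conventional beamformer a/n.
x_big = np.linalg.solve(R + 1e6 * np.eye(n), a)
w_big = x_big / (a.conj() @ x_big)
assert np.allclose(w_big, a / n, atol=1e-3)
```

The limiting behavior in the last two lines is one way to see the "detuning" effect of μ described above.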
Beamformers using eigenvalue thresholding methods to achieve robustness have also been used; see [7]. The beamformer is computed according to Capon's method, using a covariance matrix that has been modified to ensure that no eigenvalue is less than a factor α times the largest, where 0 ≤ α ≤ 1. Specifically, let R = QΛQ* denote the eigenvalue/eigenvector decomposition of R, where Λ is a diagonal matrix whose ith entry (eigenvalue) is λ_i, i.e.,

    Λ = diag(λ_1, …, λ_n).

Without loss of generality, assume λ_1 ≥ λ_2 ≥ ⋯ ≥ λ_n. We form the diagonal matrix Λ_thr, the ith entry of which is max(λ_i, αλ_1); viz.,

    Λ_thr = diag(λ_1, max(λ_2, αλ_1), …, max(λ_n, αλ_1)).

The modified covariance matrix is computed according to R_thr = QΛ_thr Q*. The beamformer using eigenvalue thresholding is given by

    w_thr = R_thr^{-1} a(θ) / (a(θ)* R_thr^{-1} a(θ)).    (12)

The parameter α corresponds to the reciprocal of the condition number of the modified covariance matrix. A variation on this approach is to use a fixed value for the minimum eigenvalue threshold. One interpretation of this approach is that it incorporates a priori knowledge of the presence of additive white noise when the sample covariance is unable to observe said white-noise floor due to short observation time [7]. The performance of this beamformer appears to be similar to that of the regularized beamformer using diagonal loading; both usually work well for an appropriate choice of the regularization parameter (μ or α, respectively).
We see two limitations with regularization techniques for beamformers. First, it is not clear how to efficiently pick the regularization parameter μ. Second, this technique does not take into account any knowledge we may have about variation in the array manifold, e.g., that the variation may not be isotropic.
In Section I-C, we describe a beamforming method that explicitly uses information about the variation in the array response a(θ), which we model explicitly as an uncertainty ellipsoid. Prior to this, we introduce some notation for describing ellipsoids.

B. Ellipsoid Descriptions
An n-dimensional ellipsoid can be defined as the image of an m-dimensional Euclidean ball under an affine mapping from R^m to R^n, i.e.,

    E = {Au + c : ‖u‖ ≤ 1}    (13)

where A ∈ R^{n×m} and c ∈ R^n. The set E describes an ellipsoid whose center is c and whose principal semiaxes are the unit-norm left singular vectors of A scaled by the corresponding singular values. We say that an ellipsoid is flat if this mapping is not injective, i.e., not one-to-one. Flat ellipsoids can be described by (13) in the proper affine subspaces of R^n; in this case, A ∈ R^{n×m} and c ∈ R^n with m < n.
Unless otherwise specified, an ellipsoid in R^n will be parameterized in terms of its center c and a symmetric non-negative definite configuration matrix P as

    E(c, P) = {P^{1/2} u + c : ‖u‖ ≤ 1}    (14)

where P^{1/2} is any matrix square root satisfying P^{1/2}(P^{1/2})^T = P. When P is full rank, the nondegenerate ellipsoid E(c, P) may also be expressed as

    E(c, P) = {x : (x − c)^T P^{-1} (x − c) ≤ 1}.    (15)

The first representation (14) is more natural when P is degenerate or poorly conditioned. Using the second description (15), one may quickly determine whether a point is within the ellipsoid.
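Both ellipsoid descriptions can be exercised in a few lines of NumPy; the configuration matrix and center below are random stand-ins, and the Cholesky factor serves as one valid square root P^{1/2}.

```python
import numpy as np

rng = np.random.default_rng(4)
m = 4
G = rng.standard_normal((m, m))
P = G @ G.T + 0.1 * np.eye(m)      # symmetric positive definite (assumed data)
c = rng.standard_normal(m)

# One valid square root for (14): the Cholesky factor S, with S @ S.T == P.
S = np.linalg.cholesky(P)

# A point generated by the image representation (14) ...
u = rng.standard_normal(m)
u /= max(np.linalg.norm(u), 1.0)    # force ||u|| <= 1
x = S @ u + c

# ... satisfies the quadratic membership test (15):
q = (x - c) @ np.linalg.solve(P, x - c)
assert q <= 1 + 1e-9

# while a point generated from ||u|| = 2 lies outside.
y = S @ (2 * u / np.linalg.norm(u)) + c
qy = (y - c) @ np.linalg.solve(P, y - c)
assert qy > 1
```

Note that (x − c)^T P^{-1}(x − c) collapses to ‖u‖² here, which is exactly why the quadratic test of (15) is the quick membership check.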
As in (18), we will express the values of the array manifold as the direct sum of its real and imaginary components in R^{2n}; i.e.,

    a = [Re a(θ)^T  Im a(θ)^T]^T ∈ R^{2n}.    (16)

While it is possible to cover the field of values with a complex ellipsoid in C^n, doing so implies a symmetry between the real and imaginary components, which generally results in a larger ellipsoid than if the direct sum of the real and imaginary components is covered in R^{2n}.
C. Robust Minimum Variance Beamforming
A generalization of (4) that captures our desire to minimize the weighted power output of the array in the presence of uncertainties in a(θ) is then

    minimize   w*Rw
    subject to Re(w*a) ≥ 1 for all a ∈ E    (17)

where Re(·) denotes the real part. Here, E is an ellipsoid that covers the possible range of values of a(θ) due to imprecise knowledge of the array manifold, uncertainty in the angle of arrival θ, or other factors. We will refer to the optimal solution of (17) as the robust minimum variance beamformer (RMVB).
We use the constraint Re(w*a) ≥ 1 for all a ∈ E in (17) for two reasons. First, while normally considered a semi-infinite constraint, we show in Section II that it can be expressed as a second-order cone constraint; as a result, the robust MVB problem (17) can be solved efficiently. Second, the real part of the response is an efficient lower bound for the magnitude of the response |w*a|, as the objective w*Rw is unchanged if the weight vector w is multiplied by an arbitrary phase shift. This is particularly true when the uncertainty in the array response is relatively small. It is unnecessary to constrain the imaginary part of the response to be nominally zero; the same rotation that maximizes the real part for a given level of w*Rw simultaneously minimizes the imaginary component of the response.
Our approach differs from the previously mentioned beam-
forming techniques in that the weight selection uses the a priori
uncertainties in the array manifold in a precise way; the RMVB
is guaranteed to satisfy the minimum gain constraint for all
values in the uncertainty ellipsoid.
Wu and Zhang [8] observe that the array manifold may be
described as a polyhedron and that the robust beamforming
problem can be cast as a quadratic program. While the polyhe-
dron approach is less conservative, the size of the description
and, hence, the complexity of solving the problem grows with
the number of vertices. Vorobyov et al. [9], [10] have described
the use of second-order cone programming for robust beam-
forming in the case where the uncertainty in the array response
is isotropic. In this paper, we consider the case in which the
uncertainty is anisotropic [11], [12]. We also show how this
problem can be solved efciently in practice.
D. Outline of the Paper
The rest of this paper is organized as follows. In Section II, we discuss the RMVB. A numerically efficient technique based on Lagrange multiplier methods is described; we will see that the RMVB can be computed with the same order of complexity as its nonrobust counterpart. A numerical example is given in Section III. In Section IV, we describe ellipsoidal modeling methods that make use of simulated or measured values of the array manifold. In Section V, we discuss more sophisticated techniques, based on ellipsoidal calculus, for propagating uncertainty ellipsoids. In particular, we describe a numerically efficient method for approximating the numerical range of the Hadamard (element-wise) product of two ellipsoids. This form of uncertainty arises when the array outputs are subject to multiplicative uncertainties. Our conclusions are given in Section VI.
II. ROBUST WEIGHT SELECTION
For purposes of computation, we will express the weight vector w and the values of the array manifold a as the direct sum of the corresponding real and imaginary components in R^{2n}:

    w := [Re w^T  Im w^T]^T,   a := [Re a^T  Im a^T]^T.    (18)

The real component of the product w*a can be written as w^T a; the quadratic form w*Rw may be expressed in terms of these real quantities as w^T R w, where

    R := [ Re R   −Im R
           Im R    Re R ].

We will assume R is positive definite.

Let E(c, P) ⊂ R^{2n} be an ellipsoid covering the possible values of a, i.e., the real and imaginary components of a(θ). The ellipsoid is centered at c; the configuration matrix P determines its size and shape. The constraint Re(w*a) ≥ 1 for all a ∈ E in (17) can be expressed as

    w^T a ≥ 1 for all a ∈ E(c, P)    (19)

which is equivalent to

    w^T (c + P^{1/2} u) ≥ 1 for all ‖u‖ ≤ 1.    (20)

Now, (20) holds for all ‖u‖ ≤ 1 if and only if it holds for the value of u that minimizes w^T P^{1/2} u, namely, u = −(P^{1/2})^T w / ‖(P^{1/2})^T w‖. By the Cauchy-Schwarz inequality, we see that (19) is equivalent to the constraint

    c^T w − ‖(P^{1/2})^T w‖ ≥ 1    (21)

which is called a second-order cone constraint [13]. We can then express the robust minimum variance beamforming problem (17) as

    minimize   w^T R w
    subject to c^T w − ‖(P^{1/2})^T w‖ ≥ 1    (22)

which is a second-order cone program; see [13]-[16]. The subject of robust convex optimization is covered in [17]-[21].
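The worst-case step above (replacing the semi-infinite constraint by its Cauchy-Schwarz closed form) can be checked numerically with a small NumPy sketch; all matrices and vectors below are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(5)
m = 6
G = rng.standard_normal((m, m))
P = G @ G.T + np.eye(m)                 # configuration matrix (assumed)
c = rng.standard_normal(m) * 3
w = rng.standard_normal(m)
S = np.linalg.cholesky(P)               # a square root with S @ S.T == P

# Closed-form worst case of w^T (c + S u) over ||u|| <= 1, as in (21):
worst = w @ c - np.linalg.norm(S.T @ w)

# The minimizing u is -S^T w / ||S^T w||; it attains the bound exactly,
# and no other unit-ball u does worse.
u_star = -S.T @ w / np.linalg.norm(S.T @ w)
assert np.isclose(w @ (c + S @ u_star), worst)
for _ in range(200):
    u = rng.standard_normal(m)
    u /= max(np.linalg.norm(u), 1.0)
    assert w @ (c + S @ u) >= worst - 1e-12
```

Requiring `worst >= 1` for a candidate w is precisely the second-order cone constraint of (22).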
By assumption, R is positive definite, and the constraint c^T w − ‖(P^{1/2})^T w‖ ≥ 1 in (22) precludes the trivial minimizer w = 0 of w^T R w. Hence, this constraint will be tight for any optimal solution, and we may express (22) in terms of real-valued quantities as

    minimize   w^T R w
    subject to c^T w = 1 + ‖(P^{1/2})^T w‖.    (23)

In the case of no uncertainty, where E is a singleton whose center is c, (23) reduces to Capon's method and admits an analytical solution given by the MVB (5). Compared to the MVB, the RMVB adds a margin that scales with the size of the uncertainty. In the case of an isotropic array uncertainty, the optimal solution of (17) yields the same weight vector (to within a scale factor) as the regularized beamformer for the proper choice of μ.
A. Lagrange Multiplier Methods
It is natural to suspect that we may compute the RMVB efficiently using Lagrange multiplier methods; see, for example, [14] and [22]-[26]. Indeed, this is the case.
The RMVB is the optimal solution of

    minimize   w^T R w
    subject to (c^T w − 1)² = w^T P w    (24)

if we impose the additional constraint that c^T w ≥ 1. We define the Lagrangian L(w, λ) associated with (24) as

    L(w, λ) = w^T R w + λ(w^T P w − (c^T w − 1)²)    (25)

where λ ≥ 0. To calculate the stationary points, we differentiate L(w, λ) with respect to w and λ; setting these partial derivatives equal to zero, we have, respectively,

    (R + λ(P − cc^T)) w = −λc    (26)

and

    (c^T w − 1)² = w^T P w    (27)

which are known as the Lagrange equations. To solve for the Lagrange multiplier λ, we note that (26) has an analytical solution given by

    w(λ) = −λ (R + λ(P − cc^T))^{-1} c.

Applying this to (27) yields the secular function

    f(λ) = w(λ)^T P w(λ) − (c^T w(λ) − 1)².    (28)

The optimal value of the Lagrange multiplier λ is then a zero of (28).
We proceed by computing a matrix V that simultaneously diagonalizes the quadratic forms in (28), i.e.,

    V^T R V = I,   V^T (P − cc^T) V = Γ    (29)

where Γ = diag(γ_1, …, γ_{2n}). Equation (29) reduces (28) to the following scalar secular equation:

    f(λ) = −Σ_{i=1}^{2n} c̃_i² λ(λγ_i + 2) / (1 + λγ_i)² − 1 = 0    (30)

where c̃ = V^T c and the γ_i are the diagonal elements of Γ. The values γ_i are known as the generalized eigenvalues of P − cc^T and R and are the roots of the equation det(P − cc^T − γR) = 0. Having computed the value λ* of λ satisfying f(λ*) = 0, the RMVB is computed according to

    w_rmvb = −λ* (R + λ*(P − cc^T))^{-1} c.    (31)

Similar techniques have been used in the design of filters for radar applications; see Stutt and Spafford [27] and Abramovich and Sverdlik [28].
In principle, we could solve for all the roots of (30) and choose the one that results in the smallest objective value w^T R w and satisfies the constraint c^T w ≥ 1, which is assumed in (24). In the next section, however, we show that this constraint is met for all values of the Lagrange multiplier λ greater than a minimum value λ_min. We will see that there is a single value of λ > λ_min that satisfies the Lagrange equations.
B. Lower Bound on the Lagrange Multiplier
We begin by establishing the conditions under which (30) has a solution. Assume R ≻ 0, i.e., R is symmetric and positive definite.
Lemma 1: For P full rank, there exists a λ ≥ 0 for which f(λ) = 0 if and only if c^T P^{-1} c > 1.
Proof: To prove the if direction, define

    α(λ) = λ c^T (R + λP)^{-1} c.    (32)

By the matrix inversion lemma, we have

    α(λ) = c^T (P + R/λ)^{-1} c.    (33)

For λ > 0, α(λ) is a monotonically increasing function of λ; therefore, for c^T P^{-1} c > 1, there exists a λ_min ≥ 0 for which

    α(λ_min) = λ_min c^T (R + λ_min P)^{-1} c = 1.    (34)

This implies that the matrix R + λ_min(P − cc^T) is singular. Since α(λ) is monotonically increasing, α(λ) > 1 for all λ > λ_min.
As in (28) and (30), let f(λ) denote the secular function. Examining (28), we see that f(λ) → −∞ as λ approaches λ_min from above. Evaluating (28) or (30), we see f(0) = −1. For all λ > λ_min, f(λ) is increasing, and f is continuous. Hence, f assumes the value 0, establishing the existence of a λ ≥ 0 for which f(λ) = 0.
To show the only if direction, assume that λ ≥ 0 satisfies f(λ) = 0. This condition is equivalent to

    c^T w − ‖(P^{1/2})^T w‖ = 1.    (35)

For (35) to hold, the origin cannot be contained in the ellipsoid E(c, P), which implies c^T P^{-1} c > 1.
Remark: The constraints (c^T w − 1)² = w^T P w and c^T w ≥ 1 in (24), taken together, are equivalent to the constraint c^T w = 1 + ‖(P^{1/2})^T w‖ in (23). For R ≻ 0, P full rank, and c^T P^{-1} c > 1, (23) has a unique minimizer. For λ > λ_min, R + λ(P − cc^T) is full rank, and the Lagrange equation (26) holds for only a single value of w. This implies that there is a unique value λ* of λ for which the secular equation (30) equals zero.
Lemma 2: For R ≻ 0, P full rank with c^T P^{-1} c > 1, and λ > 0, λ c^T (R + λP)^{-1} c > 1 if and only if the matrix R + λ(P − cc^T) has a negative eigenvalue.
Proof: Consider the matrix

    X = [ R + λP   λc
          λc^T     λ ].

We define the inertia of X as the triple In(X) = (i₊(X), i₋(X), i₀(X)), where i₊ is the number of positive eigenvalues, i₋ is the number of negative eigenvalues, and i₀ is the number of zero eigenvalues of X. See Kailath et al. [29, pp. 729-730].
Since both block diagonal elements of X are invertible,

    In(X) = In(R + λP) + In(S₁) = In(λ) + In(S₂)    (36)

where S₁ = λ − λ² c^T (R + λP)^{-1} c, which is the Schur complement of the (1,1) block in X, and S₂ = R + λ(P − cc^T), which is the Schur complement of the (2,2) block in X. We conclude that S₁ < 0 if and only if the matrix S₂ = R + λ(P − cc^T) has a negative eigenvalue. By the matrix inversion lemma,

    S₁^{-1} = 1/λ + c^T (R + λ(P − cc^T))^{-1} c.    (37)

Inverting a scalar preserves its sign; therefore,

    λ c^T (R + λP)^{-1} c > 1    (38)

if and only if R + λ(P − cc^T) has a negative eigenvalue.
Remark: Applying Sylvester's law of inertia to (28) and (30), we see that

    λ_min = −1/γ_min    (39)

where γ_min is the single negative generalized eigenvalue. Using this fact and (30), we can readily verify the limiting behavior of f at λ_min stated in Lemma 1.
Two immediate consequences follow from Lemma 2. First, we may exclude from consideration any value of λ less than λ_min. Second, for all λ > λ_min, the matrix R + λ(P − cc^T) has a single negative eigenvalue. We now use these facts to obtain a tighter lower bound on the value of the optimal Lagrange multiplier.
We begin by rewriting (30) as

    Σ_{i=1}^{2n} c̃_i² λ(λγ_i + 2) / (1 + λγ_i)² = −1.    (40)

Recall that exactly one of the generalized eigenvalues γ_i in the secular equation (40) is negative. We rewrite (40) as

    c̃_j² λ(λγ_j + 2) / (1 + λγ_j)² + Σ_{i≠j} c̃_i² λ(λγ_i + 2) / (1 + λγ_i)² = −1    (41)

where j denotes the index associated with this negative eigenvalue.
A lower bound on λ* can be found by ignoring the terms involving the non-negative eigenvalues in (41) and solving

    c̃_j² λ(λγ_j + 2) / (1 + λγ_j)² = −1.

This yields a quadratic equation in λ

    γ_j(γ_j + c̃_j²) λ² + 2(γ_j + c̃_j²) λ + 1 = 0    (42)

the roots of which are given by
