NOVEMBER 1998                                    COHN ET AL.                                    2913

Assessing the Effects of Data Selection with the DAO Physical-Space Statistical
Analysis System*

STEPHEN E. COHN, ARLINDO DA SILVA, JING GUO,¹ META SIENKIEWICZ,¹ AND DAVID LAMICH¹

Data Assimilation Office, NASA/Goddard Space Flight Center, Greenbelt, Maryland

(Manuscript received 3 April 1997, in final form 22 December 1997)
ABSTRACT
Conventional optimal interpolation (OI) analysis systems solve the standard statistical analysis equations
approximately, by invoking a local approximation and a data selection procedure. Although solution of the
analysis equations is essentially exact in the recent generation of global spectral variational analysis systems,
these new systems also include substantial changes in error covariance modeling, making it difficult to discern
whether improvements in analysis and forecast quality are due to exact, global solution of the analysis equations,
or to changes in error covariance modeling.
The formulation and implementation of a new type of global analysis system at the Data Assimilation Office,
termed the Physical-space Statistical Analysis System (PSAS), is described in this article. Since this system
operates directly in physical space, it is capable of employing error covariance models identical to those of the
predecessor OI system, as well as more advanced models. To focus strictly on the effect of global versus local
solution of the analysis equations, a comparison between PSAS and OI analyses is carried out with both systems
using identical error covariance models and identical data. Spectral decomposition of the analysis increments
reveals that, relative to the PSAS increments, the OI increments have too little power at large horizontal scales
and excessive power at small horizontal scales. The OI increments also display an unrealistically large ratio of
divergence to vorticity. Dynamical imbalances in the OI-analyzed state can therefore be attributed in part to the
approximate local method of solution, and are not entirely due to the simple geostrophic constraint built into
the forecast error covariance model. Root-mean-square observation minus 6-h forecast errors in the zonal wind
component are substantially smaller for the PSAS system than for the OI system.
1. Introduction
Practical implementation of statistical analysis
schemes requires many simplifying assumptions and ap-
proximations for computational feasibility. In conven-
tional optimal interpolation (OI) schemes the analysis
problem is localized: a local approximation is employed
to solve the analysis equations either grid point by grid
point (e.g., Bergman 1979) or in small volumes (Lorenc
1981), and a data selection procedure is invoked to re-
duce the quantity of observations available locally to a
sufficiently small number capable of being handled by
the computational resources. The purpose of this article
is to examine the limitations of this localization of the
* Dedicated to the memory of Dr. James W. Pfaendtner, who es-
tablished much of the computational foundation for the Physical-
space Statistical Analysis System.
¹ Additional affiliation: General Sciences Corporation, Laurel, Maryland, a subsidiary of Science Applications International Corporation.
Corresponding author address: Dr. Stephen E. Cohn, Data Assim-
ilation Office, Code 910.3, NASA/GSFC, Greenbelt, MD 20771.
E-mail: cohn@dao.gsfc.nasa.gov
analysis problem in an operational data assimilation sys-
tem.
The term optimal interpolation is generally used to
refer to a statistical analysis scheme that takes the fol-
lowing as basic simplifications: (a) isotropy: horizontal
error correlation functions are isotropic; (b) separability:
three-dimensional error correlation functions are the
product of vertical and horizontal correlation functions;
(c) geostrophy: analyses are multivariate in the wind
and mass variables, with a geostrophic-like balance con-
straint built into the wind/mass error covariance model;
(d) local approximation: the analysis at each grid point
or in each volume incorporates observational data only
in some neighborhood of that grid point or volume; (e)
data selection: only some portion of the observations in
that neighborhood is actually included in the analysis.
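The separability assumption (b) can be made concrete with a short sketch; the Gaussian correlation shapes and length scales below are invented for illustration and are not those of the GEOS-1 OI system:

```python
import numpy as np

# Horizontal correlation: isotropic Gaussian in great-circle distance
# (an invented stand-in for an OI horizontal correlation function).
def horiz_corr(r_km, L=500.0):
    return np.exp(-0.5 * (r_km / L) ** 2)

# Vertical correlation: Gaussian in log-pressure separation (also invented).
def vert_corr(dlnp, Lv=0.5):
    return np.exp(-0.5 * (dlnp / Lv) ** 2)

# Separability: the 3-D correlation is the product of the two factors.
def corr_3d(r_km, dlnp):
    return horiz_corr(r_km) * vert_corr(dlnp)
```

Under separability, specifying the two one-dimensional factors fully determines the three-dimensional correlation.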
As of this writing, many numerical weather prediction
centers have replaced (or will soon replace) OI schemes
with global variational analysis systems that relax or
remove the local approximation and avoid data selection
altogether (Parrish and Derber 1992; Courtier et al.
1998; Rabier et al. 1998; Andersson et al. 1998). Since
these new analysis schemes are formulated directly in
spectral (spherical harmonic) space, rather than in phys-
ical space like OI schemes, they also include changes

in error covariance modeling and imposed wind/mass
balance constraints. In the process of replacing OI
schemes by global analysis schemes, therefore, estab-
lishing the impact of each individual change on overall
data assimilation system performance is not always im-
mediate.
The Physical-space Statistical Analysis System
(PSAS) being developed at the Data Assimilation Office
(DAO) of NASA's Goddard Space Flight Center is a
new type of global analysis system designed to replace
the OI analysis component of the Goddard Earth Ob-
serving System Data Assimilation System (GEOS DAS;
Pfaendtner et al. 1995). It differs substantially from cur-
rent global variational analysis systems in that it is for-
mulated directly in physical space, rather than in a spec-
tral space. This new system is designed specifically to
accommodate a number of incremental improvements
over the OI component of the GEOS DAS. In particular,
the initial implementation described in this article em-
ploys error covariance statistics identical to those of the
OI system, including the simple geostrophic balance
constraint relating height and wind error statistics. This
first implementation of PSAS differs from the OI system
only in the numerical method used to solve for the anal-
ysis increments: a global conjugate gradient solver in-
cludes all available observations to produce the ana-
lyzed fields. While improved error covariance models
are being developed, we can isolate and study the impact
of a global analysis scheme on the performance of the
GEOS DAS.
This article is organized as follows. The design goals
of PSAS and its numerical algorithm are described in
section 2. This section also details the relationship be-
tween PSAS and OI schemes, and between PSAS and
global spectral variational analysis schemes. In section
3, we outline the components of version 1 of the GEOS
DAS (GEOS-1 DAS), the original OI-based data assim-
ilation system developed at the DAO. Section 4 de-
scribes the design of our experiments and presents the
results of comparisons between PSAS analyses and
those of the GEOS-1 DAS. Concluding remarks appear
in section 5.
2. The Physical-space Statistical Analysis System
a. Design objectives
At the time the DAO was formed, in February 1992,
plans were initiated to develop a new statistical analysis
system called the Physical-space Statistical Analysis
System. PSAS was designed to meet the following five
requirements.
1) To establish and remove the effects of data selection
in the GEOS-1 OI system. This objective requires
PSAS to be capable of using forecast and observation
error covariance models identical to those specified
in the OI system, but to solve the analysis equations
globally rather than locally.
2) To obtain proper sensitivity to all data and to all
error covariance specifications. In the OI implemen-
tation of Baker et al. (1987), for instance, introducing
geographically dependent forecast error covariances
had little impact on OI analyses. It is likely that
global solution of the analysis equations demanded
by objective 1 would reveal much more responsive-
ness, forcing one to pay careful attention to error
covariance formulations, in particular to global wind/
mass balance constraints. Recent experiments with
the PSAS system (not described here) have in fact
demonstrated strong sensitivity to these formulations
and will be described in future publications.
3) To permit assimilation of new data types that are not
state variables. A great wealth of data, mostly from
spaceborne remote-sensing devices, will become
available in coming years. Data selection would be-
come an increasingly onerous and ad hoc procedure
for these data. More importantly, many of these data,
especially if assimilated in raw form (e.g., radiances
or backscatter) rather than as retrieved products, are
neither state variables nor linearly related to state
variables. While some types of data that are not state
variables, such as total precipitable water, have been
successfully assimilated with the OI methodology
(Ledvina and Pfaendtner 1995), global formulation
of the analysis problem, in which observation op-
erators are defined explicitly, provides a natural
framework for assimilating these data types (e.g.,
Eyre et al. 1993; Derber and Wu 1998; Joiner and
da Silva 1998). The version of PSAS described in
this article incorporates linear (i.e., state-indepen-
dent) observation operators only. A version of the
PSAS algorithm for nonlinear observation operators
is described in Cohn (1997, section 5).
4) To allow maximum flexibility in forecast and ob-
servation error covariance modeling. While much
effort has been directed toward covariance modeling
in recent years, it is likely that additional efforts will
result in improved analyses. For instance, while cur-
rent global spectral variational analysis schemes rely
explicitly on an assumption that forecast errors are
horizontally isotropic, or on a slightly relaxed ver-
sion of this assumption (Courtier et al. 1998), it is
well known (e.g., Courtier et al. 1994; Thépaut et
al. 1996; Cohn and Todling 1996) that these errors
are in fact highly anisotropic and flow dependent.
Formulation of the analysis problem directly in phys-
ical space, rather than spectral space, renders fully
anisotropic correlation modeling straightforward
(e.g., Derber and Rosati 1989; Carton and Hackert
1990). The PSAS numerical algorithm makes no as-
sumption of isotropy, although the implementation
described in this article employs the isotropic cor-
relation functions specified by the GEOS-1 OI sys-
tem. Much of the current and future development is
directed toward improved error correlation modeling

in PSAS (Dee and Gaspari 1996; Lou et al. 1996;
Gaspari and Cohn 1998).
5) To enable flexibility for future developments in data
assimilation methodology. The PSAS system was en-
visioned from the outset to provide a computational
framework for the development of techniques for
approximate fixed-lag Kalman smoothing (Todling
et al. 1998; Cohn et al. 1994), approximate Kalman
filtering (e.g., Cohn and Todling 1996), forecast bias
estimation (Dee and da Silva 1998), and other topics
known from the estimation theory literature but not
yet implemented in operational data assimilation sys-
tems. Solution of the innovation covariance equa-
tion, a key component of the PSAS algorithm de-
scribed below, is a need common to all of these
techniques.
Because of these design features PSAS has the fol-
lowing attributes.
1) PSAS solves the analysis equations globally rather
than locally. The local approximation and data se-
lection of the GEOS-1 OI system are eliminated. In
this respect, PSAS is similar to the global spectral
variational analysis systems that have recently re-
placed OI schemes at the U.S. National Centers for
Environmental Prediction (NCEP; Parrish and Der-
ber 1992) and at the European Centre for Medium-
Range Weather Forecasts (ECMWF; Courtier et al.
1998; Rabier et al. 1998; Andersson et al. 1998).
2) PSAS is formulated directly in physical space, like
OI schemes but unlike spectral analysis schemes.
3) PSAS performs a large part of its calculations in
observation space, also unlike operational spectral
analysis schemes, which operate in state space. This
results in computational savings, since the dimension
of the observation space is currently an order of
magnitude smaller than that of the forecast model
state. The computational efficiency of the current
generation of spectral analysis schemes arises from
an assumption that horizontal forecast error covari-
ances or correlations are either isotropic or have el-
lipsoidal isolines; that is, are diagonal or block-di-
agonal in spectral space (Courtier et al. 1998), an
assumption that is not made in the PSAS algorithm.
4) PSAS is fundamentally independent of the forecast
model formulation, and hence is a portable algorithm
suitable for diverse applications. Although PSAS is
compatible with the gridpoint system of the GEOS
general circulation model, the design does not re-
strict PSAS applications to this grid. In particular,
the PSAS algorithm is suitable for global spectral
models, as well as for regional data assimilation and
for problems on irregular or stretched grids such as
oceanic data assimilation.
b. Background: The statistical analysis equations
A statistical analysis scheme attempts to obtain an
optimal estimate, or analysis, of the state of a dynamical
system by combining observations of the system with
a forecast model first guess. Let $w^f \in \mathbb{R}^n$ denote the vector representing the forecast first guess, defined on a grid in our case, and let $w^t \in \mathbb{R}^n$ denote the discrete true state approximated by $w^f$:

$w^f = w^t + \varepsilon^f$,   (1)

where $\varepsilon^f \in \mathbb{R}^n$ denotes the forecast error. A time index is omitted in this equation and in those to follow for notational simplicity. Let $w^o \in \mathbb{R}^p$ denote the vector of $p$ observations available at the analysis time, assumed in this article to be related linearly to the state variables:

$w^o = H w^t + \varepsilon^o$.   (2)

Here $H \in \mathbb{R}^{p \times n}$ is the observation operator, or generalized interpolation operator; $\varepsilon^o \in \mathbb{R}^p$ denotes the observation error, which is the sum of the measurement error and the error of representativeness (e.g., Lorenc 1986; Cohn 1997). In the GEOS-1 DAS, the number of model degrees of freedom is $n \sim 10^6$ and the current observing system has $p \sim 10^5$.
The probabilistic assumptions common to most operational analysis systems are that $\varepsilon^f$ and $\varepsilon^o$ are Gaussian distributed with zero mean, and are not correlated with either the state or with each other. Although these assumptions can be relaxed in a variety of ways (cf. Cohn 1997 and references therein), the implementation of PSAS described in this article invokes all of them. Efforts directed toward relaxing the assumption that $\varepsilon^f$ has zero mean ($\langle \varepsilon^f \rangle = 0$), that is, that the forecast is unbiased, are described in Dee and da Silva (1998).

The two most common optimality criteria, arising from minimum variance estimation and maximum likelihood estimation, lead to identical analysis equations under these assumptions (e.g., Lorenc 1986; Cohn 1997). These equations also yield the best linear unbiased estimate, or analysis, without an assumption that the errors $\varepsilon^f$ and $\varepsilon^o$ are Gaussian distributed.
The minimum variance analysis $w^a \in \mathbb{R}^n$ is obtained by requiring the scalar functional $\langle (w^a - w^t)^T S (w^a - w^t) \rangle$ to be minimum for all positive definite matrices $S \in \mathbb{R}^{n \times n}$, and under the stated assumptions is given by the analysis equations:

$w^a = w^f + K (w^o - H w^f)$   (3)

$K = P^f H^T (H P^f H^T + R)^{-1}$.   (4)

Here the matrix $K \in \mathbb{R}^{n \times p}$ is the gain matrix, which ascribes appropriate weights to the observations by acting on the innovation vector¹ $w^o - H w^f$. The gain matrix depends on the forecast error covariance matrix:

¹ Strictly speaking, the innovation vector is defined by the properties of being white in time and Gaussian with zero mean, even for nonlinear dynamics and observation operators (cf. Frost and Kailath 1971; Daley 1992). In this article we adopt the term innovation vector with the caveat that these properties are perhaps goals but not yet realities for operational data assimilation systems.

$P^f \equiv \langle (\varepsilon^f - \langle \varepsilon^f \rangle)(\varepsilon^f - \langle \varepsilon^f \rangle)^T \rangle \in \mathbb{R}^{n \times n}$   (5)

and on the observation error covariance matrix:

$R \equiv \langle (\varepsilon^o - \langle \varepsilon^o \rangle)(\varepsilon^o - \langle \varepsilon^o \rangle)^T \rangle \in \mathbb{R}^{p \times p}$.   (6)

Both are symmetric and positive semidefinite by definition; $R$ is in fact positive definite under an assumption that no linear combination of the observations is perfect. Although these matrices are defined as above, in practice they must be modeled.
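As a concrete illustration of Eqs. (3)–(6), the following toy example forms the gain matrix explicitly with NumPy; every size, value, and covariance model here is invented for illustration and bears no relation to the GEOS-1 statistics:

```python
import numpy as np

# Toy sizes (invented): n state variables, p observations.
n, p = 6, 3
rng = np.random.default_rng(0)

# Forecast error covariance P^f: symmetric positive definite, cf. Eq. (5).
A = rng.standard_normal((n, n))
Pf = A @ A.T + n * np.eye(n)

# Observation error covariance R: diagonal, positive definite, cf. Eq. (6).
R = 0.5 * np.eye(p)

# Linear observation operator H: here it simply observes 3 of the 6 state elements.
H = np.zeros((p, n))
H[0, 1] = H[1, 3] = H[2, 5] = 1.0

# Gain matrix, Eq. (4): K = P^f H^T (H P^f H^T + R)^{-1}.
K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)

# Analysis, Eq. (3): w^a = w^f + K (w^o - H w^f).
wf = rng.standard_normal(n)           # forecast first guess
wo = H @ wf + rng.standard_normal(p)  # synthetic observations
wa = wf + K @ (wo - H @ wf)
```

Forming $K$ explicitly is feasible only at toy dimensions; at the operational sizes quoted above ($n \sim 10^6$, $p \sim 10^5$) an iterative approach is required.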
c. The global PSAS solver
The PSAS algorithm solves the analysis equations
(3)–(4) in a straightforward manner. First, one $p \times p$ linear system is solved for the quantity $y$,

$(H P^f H^T + R)\, y = w^o - H w^f$,   (7)

and then the analyzed state $w^a$ is obtained from the equation

$w^a = w^f + P^f H^T y$.   (8)

Equations (7) and (8) will be referred to as the PSAS equations. The innovation covariance matrix

$M \equiv H P^f H^T + R$   (9)

is symmetric positive definite, making a standard preconditioned conjugate gradient (CG) algorithm (Golub and van Loan 1989) the method of choice for solving the large linear system (7), referred to as the innovation covariance equation. For the current observing system ($p \sim n/10$), setting up and solving the linear system (7) costs about half the computational effort of PSAS, and involves computation in observation space: $M \in \mathbb{R}^{p \times p}$ and $y \in \mathbb{R}^p$, requiring $O(N_{\mathrm{cg}} p^2)$ operations, where $N_{\mathrm{cg}} \sim 10$ is the number of CG iterations (the convergence criterion is described later). The other half of the computational expense is taken by step (8), which transfers the solution $y$ to the state space: $P^f H^T y \in \mathbb{R}^n$, requiring $O(np)$ operations.
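The two-step solve of Eqs. (7)–(8) can be sketched on a toy problem; the sizes, covariances, and the bare-bones unpreconditioned CG below are invented stand-ins for the operational components. The result agrees with the explicit gain-matrix form of Eqs. (3)–(4):

```python
import numpy as np

def conjugate_gradient(M, b, tol=1e-10, max_iter=100):
    """Minimal unpreconditioned CG for a symmetric positive definite M
    (a sketch, not the DAO implementation)."""
    y = np.zeros_like(b)
    r = b - M @ y
    d = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Md = M @ d
        alpha = rs / (d @ Md)
        y += alpha * d
        r -= alpha * Md
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        d = r + (rs_new / rs) * d
        rs = rs_new
    return y

# Toy problem (all sizes and values invented).
rng = np.random.default_rng(1)
n, p = 8, 4
A = rng.standard_normal((n, n))
Pf = A @ A.T + n * np.eye(n)     # forecast error covariance (SPD)
R = 0.25 * np.eye(p)             # observation error covariance
H = rng.standard_normal((p, n))  # linear observation operator

wf = rng.standard_normal(n)
wo = rng.standard_normal(p)

# Step 1, Eq. (7): solve (H P^f H^T + R) y = w^o - H w^f in observation space.
M = H @ Pf @ H.T + R
y = conjugate_gradient(M, wo - H @ wf)

# Step 2, Eq. (8): map the solution back to state space.
wa = wf + Pf @ H.T @ y

# Reference: the explicit gain-matrix form, Eqs. (3)-(4).
K = Pf @ H.T @ np.linalg.inv(M)
wa_gain = wf + K @ (wo - H @ wf)
```

The point of the two-step form is that only one $p \times p$ system is ever solved, and $K$ is never formed.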
For typical models of $P^f$ and $R$ the innovation covariance matrix $M$ is not sparse, although entries associated with remote pairs of observation locations are negligibly small. To introduce some sparseness in $M$ and thereby to save computational effort, the sphere is divided into $N$ regions, and matrix blocks associated with regions separated by more than 6000 km are assumed to be zero; these blocks never enter the CG computations. The same procedure is applied to the matrix $P^f$ itself in (8). This is a covariance modeling assumption, rather than a local approximation like that of OI schemes, and is justified on the basis of observational studies (Hollingsworth and Lönnberg 1986; Lönnberg and Hollingsworth 1986). Although this procedure could in principle destroy the positive-definiteness of $M$, causing lack of convergence of the CG solver, this has not been observed in the experiments reported in section 4 using the covariance models $P^f$ and $R$ of the GEOS-1 OI system. A rigorous approach based on space-limited covariance functions (Gaspari and Cohn 1998), which are exactly zero beyond a specified distance, has already been implemented in PSAS, but for the purposes of a clean comparison with the OI system is not part of the implementation described in this article.
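The distance-based masking can be caricatured as follows. This is a hedged toy version: equatorial observation locations, an invented Gaussian correlation model, and entrywise masking, whereas PSAS masks whole regional blocks of $M$:

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0
CUTOFF_KM = 6000.0  # separation beyond which entries are assumed zero

# Toy observation locations: longitudes (deg) on the equator (invented).
lons = np.array([0.0, 10.0, 20.0, 90.0, 180.0, 270.0])

# Great-circle distance between equatorial points.
dlon = np.deg2rad(np.abs(lons[:, None] - lons[None, :]))
dlon = np.minimum(dlon, 2 * np.pi - dlon)
dist_km = EARTH_RADIUS_KM * dlon

# A simple isotropic Gaussian correlation (stand-in for the OI model).
L = 500.0  # correlation length scale in km (invented)
M = np.exp(-0.5 * (dist_km / L) ** 2)

# Zero every entry whose pair of locations is farther apart than the cutoff,
# mimicking the masking of remote matrix blocks.
M_sparse = np.where(dist_km > CUTOFF_KM, 0.0, M)
```

As the text notes, a hard cutoff like this is not guaranteed to preserve positive-definiteness; the space-limited covariance functions of Gaspari and Cohn (1998) are the rigorous remedy.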
An effective preconditioner for CG algorithms must
have two important characteristics: 1) it must be inex-
pensive to compute, and 2) it must retain the essentials
of the original matrix problem if it is to improve sub-
stantially the convergence rate of the overall CG al-
gorithm. For the statistical interpolation problem that
PSAS implements, a natural preconditioner is an OI-
like approximation, in which the problem is solved sep-
arately for each of the N regions used to partition the
data. For the current serial implementation, the globe is
divided into N = 80 equal-area regions using an icosahedral grid (Pfaendtner 1996).² With p ~ 100 000 observations, each of these regional problems has on average more than 1000 observations, which is too many for efficient direct solution. These regional problems are therefore solved by a preconditioned conjugate gradient algorithm; we refer to this solver as the CG level 2 solver. As a preconditioner for CG level 2 the regional problems are solved univariately for each data type — that is, observations of u wind, υ wind, geopotential
height, etc. are treated in isolation. However, these uni-
variate problems are still too large to be solved effi-
ciently by direct methods, and yet another iterative solv-
er is used; this is the CG level 1 algorithm. As a pre-
conditioner for CG level 1 we make use of the standard
numerical linear algebra package (Anderson et al. 1992)
to perform a direct Cholesky factorization of diagonal
blocks of the CG level 1 matrix. These diagonal blocks
are typically of size 32, and are chosen carefully to
include complete vertical profiles. The overall nested
preconditioned conjugate gradient algorithm is illus-
trated in Fig. 1. Additional details concerning this al-
gorithm can be found in da Silva and Guo (1996).
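One level of this nesting can be sketched as a block-Jacobi preconditioned CG, with direct Cholesky solves of diagonal blocks standing in for the regional subproblems (an illustrative sketch with invented sizes, not the DAO implementation):

```python
import numpy as np

def block_jacobi_pcg(M, b, block_size, tol=1e-8, max_iter=200):
    """Preconditioned CG in which the preconditioner solves the diagonal
    blocks of M exactly via Cholesky factorization, analogous to solving
    the regional (or vertical-profile) subproblems directly."""
    p = len(b)
    # Precompute Cholesky factors of the diagonal blocks (cf. CG level 1).
    factors = []
    for start in range(0, p, block_size):
        end = min(start + block_size, p)
        factors.append((start, end, np.linalg.cholesky(M[start:end, start:end])))

    def apply_preconditioner(r):
        # Solve each diagonal block independently: z_block = M_block^{-1} r_block.
        z = np.empty_like(r)
        for start, end, Lc in factors:
            z[start:end] = np.linalg.solve(Lc.T, np.linalg.solve(Lc, r[start:end]))
        return z

    y = np.zeros_like(b)
    r = b - M @ y
    z = apply_preconditioner(r)
    d = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Md = M @ d
        alpha = rz / (d @ Md)
        y += alpha * d
        r -= alpha * Md
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = apply_preconditioner(r)
        rz_new = r @ z
        d = z + (rz_new / rz) * d
        rz = rz_new
    return y

# Toy SPD system standing in for the innovation covariance equation (7).
rng = np.random.default_rng(2)
p = 12
A = rng.standard_normal((p, p))
M = A @ A.T + p * np.eye(p)
b = rng.standard_normal(p)
y = block_jacobi_pcg(M, b, block_size=4)
```

The actual PSAS scheme nests this idea twice more, replacing the direct block solves with further preconditioned CG iterations.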
In the serial implementation of PSAS, the matrix $M$ is first normalized by its main diagonal, the normalized matrix is provided to the global CG solver as an operator, and matrix elements are recomputed at each CG iteration, as needed. In the prototype parallel implementation of PSAS developed at the Jet Propulsion Laboratory (Ding and Ferraro 1996), blocks of the matrix $M$ are precomputed and stored in memory. As a convergence criterion for the global CG solver, we specify that the residual must be reduced by one to two orders of magnitude. Experiments with reduction of the residual beyond two orders of magnitude produced differences in the resulting analyses much smaller than expected analysis errors. This is due to the filtering property of the operator $P^f H^T$ in (8), which attenuates small-scale details in the linear system variable $y$.

² In the prototype massively parallel implementation of PSAS developed at the Jet Propulsion Laboratory, the globe is divided into 256 or 512 geographically irregular regions, each having approximately the same number of observations. This is one strategy to achieve load balance (Ding and Ferraro 1996).

FIG. 1. PSAS nested preconditioned conjugate gradient solver. Routine cgpmain() contains the main conjugate gradient driver. This routine is preconditioned by cgplevel2(), which solves a similar problem for each region. This routine is in turn preconditioned by cgplevel1(), which solves the linear system univariately. See text for details.
d. Relationship of PSAS, OI, and spectral variational
schemes
In this section we contrast the PSAS approach to
solving the analysis equations (3)–(4) with the approach
of OI schemes and the approach of spectral variational
schemes.
1) OPTIMAL INTERPOLATION SCHEMES

Optimal interpolation schemes solve Eqs. (3)–(4) approximately, as follows. Denote by $k_j$ the $j$th column of the transposed gain matrix $K^T$ defined by (4), so that $k_j \in \mathbb{R}^p$. Then (4) can be written as

$(H P^f H^T + R)\, k_j = (H P^f)_j$   (10)

for $j = 1, \ldots, n$, where $(H P^f)_j \in \mathbb{R}^p$ denotes the $j$th column of the matrix $H P^f$. This equation represents $n$ linear systems, each of the same form as the PSAS equation (7). Similarly, Eq. (3) can be written as $n$ scalar equations,

$w^a_j = w^f_j + (k_j)^T (w^o - H w^f)$   (11)

for $j = 1, \ldots, n$, where $w^a_j$ and $w^f_j$ denote the $j$th elements of $w^a$ and $w^f$, respectively. This equation makes it clear that the weight vector $k_j$ solved for in (10) determines the correction, or analysis increment, at the $j$th grid point.

Equations (10) and (11) would yield the same analysis $w^a$ as the PSAS equations (7) and (8), but at far greater computational expense since there are $n$ linear systems to be solved in (10) but only one in (7). Optimal interpolation schemes³ do in fact solve (10) and (11), but with a local approximation and hence the need for data selection. These schemes differ widely in the details of the local approximation and the data selection algorithm (cf. McPherson et al. 1979; Lorenc 1981; Baker et al. 1987; Pfaendtner et al. 1995), but all can be described in a generic way as follows.

Instead of involving all $p$ observations in the solution of Eqs. (10) and (11) for each $j$, some much smaller number of observations $q \ll p$ nearby the $j$th grid location is selected for the analysis at that location, and in general a different subset of observations, $q = q(j)$, is selected for different locations $j$. Thus $w^o$, $H$, and $R$ become lower-dimensional and are made to depend on the gridpoint index $j$: $w^o_j \in \mathbb{R}^q$, $H_j \in \mathbb{R}^{q \times n}$,

³ It should be noted that not all implementations of OI compute the weights $k_j$ explicitly (cf. Daley 1991, section 4.2).
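The generic local procedure just described can be sketched in a one-dimensional toy setting; the covariance model, selection rule, and all parameters below are invented for illustration:

```python
import numpy as np

def local_oi_analysis(x_grid, wf, x_obs, wo, corr_len=1.0, obs_var=0.25, q=3):
    """For each grid point j, select the q nearest observations and solve a
    small local analysis system (the local analogs of Eqs. 10-11), rather
    than one global system as in PSAS."""
    def cov(xa, xb):
        # Isotropic Gaussian covariance (a stand-in for the OI model).
        return np.exp(-0.5 * ((xa[:, None] - xb[None, :]) / corr_len) ** 2)

    # Innovations at observation locations; here H interpolates the first
    # guess to the observation sites.
    innov = wo - np.interp(x_obs, x_grid, wf)

    wa = wf.copy()
    for j, xj in enumerate(x_grid):
        # Data selection: the q observations nearest to grid point j.
        sel = np.argsort(np.abs(x_obs - xj))[:q]
        xs, ds = x_obs[sel], innov[sel]
        # Local analog of Eq. (10): (H P^f H^T + R) k_j = (H P^f)_j.
        Mloc = cov(xs, xs) + obs_var * np.eye(len(sel))
        kj = np.linalg.solve(Mloc, cov(xs, np.array([xj]))[:, 0])
        # Local analog of Eq. (11): increment at grid point j.
        wa[j] = wf[j] + kj @ ds
    return wa

# Toy 1-D analysis: a zero first guess corrected by three observations.
x_grid = np.linspace(0.0, 10.0, 21)
wf = np.zeros_like(x_grid)
x_obs = np.array([2.0, 5.0, 8.0])
wo = np.array([1.0, -1.0, 0.5])
wa = local_oi_analysis(x_grid, wf, x_obs, wo)
```

Because each grid point sees a different observation subset, adjacent increments can be mutually inconsistent, which is the source of the small-scale noise and imbalance examined in this article.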
