
Showing papers by "Eugenia Kalnay published in 1997"


Journal ArticleDOI
TL;DR: In this paper, it is shown that the analysis cycle is like a breeding cycle: it acts as a nonlinear perturbation model upon the evolution of the real atmosphere, and the perturbation (i.e., the analysis error), carried forward in the first-guess forecasts, is scaled down at regular intervals by the use of observations.
Abstract: The breeding method has been used to generate perturbations for ensemble forecasting at the National Centers for Environmental Prediction (formerly known as the National Meteorological Center) since December 1992. At that time a single breeding cycle with a pair of bred forecasts was implemented. In March 1994, the ensemble was expanded to seven independent breeding cycles on the Cray C90 supercomputer, and the forecasts were extended to 16 days. This provides 17 independent global forecasts valid for two weeks every day. For efficient ensemble forecasting, the initial perturbations to the control analysis should adequately sample the space of possible analysis errors. It is shown that the analysis cycle is like a breeding cycle: it acts as a nonlinear perturbation model upon the evolution of the real atmosphere. The perturbation (i.e., the analysis error), carried forward in the first-guess forecasts, is ‘‘scaled down’’ at regular intervals by the use of observations. Because of this, growing errors associated with the evolving state of the atmosphere develop within the analysis cycle and dominate subsequent forecast error growth. The breeding method simulates the development of growing errors in the analysis cycle. A difference field between two nonlinear forecasts is carried forward (and scaled down at regular intervals) upon the evolving atmospheric analysis fields. By construction, the bred vectors are superpositions of the leading local (time-dependent) Lyapunov vectors (LLVs) of the atmosphere. An important property is that all random perturbations assume the structure of the leading LLVs after a transient period, which for large-scale atmospheric processes is about 3 days. When several independent breeding cycles are performed, the phases and amplitudes of individual (and regional) leading LLVs are random, which ensures quasi-orthogonality among the global bred vectors from independent breeding cycles.
Experimental runs with a 10-member ensemble (five independent breeding cycles) show that the ensemble mean is superior to an optimally smoothed control and to randomly generated ensemble forecasts, and compares favorably with the medium-range double horizontal resolution control. Moreover, a potentially useful relationship between ensemble spread and forecast error is also found in both the spatial and time domains. The improvement in skill of 0.04‐0.11 in pattern anomaly correlation for forecasts at and beyond 7 days, together with the potential for estimation of the skill, indicates that this system is a useful operational forecast tool. The two methods used so far to produce operational ensemble forecasts—that is, breeding and the adjoint (or ‘‘optimal perturbations’’) technique applied at the European Centre for Medium-Range Weather Forecasts—have several significant differences, but they both attempt to estimate the subspace of fast growing perturbations. The bred vectors provide estimates of fastest sustainable growth and thus represent probable growing analysis errors. The optimal perturbations, on the other hand, estimate vectors with fastest transient growth in the future. A practical difference between the two methods for ensemble forecasting is that breeding is simpler and less expensive than the adjoint technique.
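The breeding cycle described above can be sketched in a few lines. The following is a toy illustration, not the NCEP implementation: a Lorenz-63 system stands in for the forecast model, and the difference between a control and a perturbed run is rescaled to a fixed amplitude at regular intervals. All names and parameter values are illustrative assumptions.

```python
import numpy as np

def model_step(x, dt=0.01):
    """One forward-Euler step of the Lorenz-63 system, a stand-in for the forecast model."""
    s, r, b = 10.0, 28.0, 8.0 / 3.0
    dxdt = np.array([s * (x[1] - x[0]),
                     x[0] * (r - x[2]) - x[1],
                     x[0] * x[1] - b * x[2]])
    return x + dt * dxdt

def breed(x0, seed, pert_size=1e-3, cycles=200, steps_per_cycle=25):
    """Breeding: carry a perturbed forecast alongside the control and, at regular
    intervals, 'scale down' the difference to a fixed amplitude, mimicking the
    effect of observations in the analysis cycle."""
    rng = np.random.default_rng(seed)
    ctrl = x0.copy()
    pert = ctrl + pert_size * rng.standard_normal(3)
    for _ in range(cycles):
        for _ in range(steps_per_cycle):
            ctrl = model_step(ctrl)
            pert = model_step(pert)
        diff = pert - ctrl
        diff *= pert_size / np.linalg.norm(diff)   # periodic rescaling
        pert = ctrl + diff
    return diff / np.linalg.norm(diff)             # bred vector, unit norm

# Two breeding cycles started from independent random perturbations...
bv1 = breed(np.array([1.0, 1.0, 1.0]), seed=1)
bv2 = breed(np.array([1.0, 1.0, 1.0]), seed=2)
# ...converge to the same leading local Lyapunov direction after the transient.
alignment = abs(np.dot(bv1, bv2))
```

In this low-dimensional toy, `alignment` ends up close to 1; in the global model, the abstract notes, independent cycles instead pick up different regional LLVs with random phases and amplitudes, which keeps the global bred vectors quasi-orthogonal.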

1,067 citations


Journal ArticleDOI
TL;DR: Ensemble forecasting has been operational at the National Centers for Environmental Prediction (NCEP, formerly the National Meteorological Center) since December 1992; as discussed in this paper, 17 global forecasts are now run daily out to 16 days, with initial perturbations from seven independent breeding cycles.
Abstract: Ensemble forecasting has been operational at NCEP (formerly the National Meteorological Center) since December 1992. In March 1994, more ensemble forecast members were added. In the new configuration, 17 forecasts with the NCEP global model are run every day, out to 16-day lead time. Beyond the 3 control forecasts (a T126 and a T62 resolution control at 0000 UTC and a T126 control at 1200 UTC), 14 perturbed forecasts are made at the reduced T62 resolution. Global products from the ensemble forecasts are available from NCEP via anonymous FTP. The initial perturbation vectors are derived from seven independent breeding cycles, where the fast-growing nonlinear perturbations grow freely, apart from the periodic rescaling that keeps their magnitude compatible with the estimated uncertainty within the control analysis. The breeding process is an integral part of the extended-range forecasts, and the generation of the initial perturbations for the ensemble is done at no computational cost beyond that of...

108 citations


Journal ArticleDOI
TL;DR: In this article, a quasi-inverse linear method was developed to study the sensitivity of forecast errors to initial conditions for the National Centers for Environmental Prediction's (NCEP) global spectral model.
Abstract: A quasi-inverse linear method has been developed to study the sensitivity of forecast errors to initial conditions for the National Centers for Environmental Prediction’s (NCEP) global spectral model. The inverse is approximated by running the tangent linear model (TLM) of the nonlinear forecast model with a negative time step, but reversing the sign of friction and diffusion terms, in order to avoid the computational instability that would be associated with these terms if they were run backward. As is usually done with adjoint model integrations, the quasi-inverse TLM is started at the time of the verified forecast error and integrated backward to the corresponding initial time. First, a numerical experiment shows that this quasi-inverse linear estimation is able to trace back the differences between two perturbed forecasts from the NCEP ensemble forecasting system and recover with good accuracy the known difference between the two forecasts at the initial time. This result shows that both the linear estimation and the quasi-inverse linear estimation are quite close to the nonlinear evolution of the perturbation in the nonlinear forecast model, suggesting that it should be possible to apply the method to the study of the sensitivity of forecast errors to initial conditions. The authors then calculate the perturbation field at the initial time (estimate the initial error) by tracing back a 1-day forecast error using the TLM quasi-inverse estimation. As could be expected from the previous experiment, when the estimated error is subtracted from the original analysis, the new initial conditions lead to an almost perfect 1-day forecast. The forecasts beyond the first day are also considerably improved, indicating that the initial conditions have indeed been improved. In the remainder of the paper, this quasi-inverse linear method is compared with the adjoint sensitivity method (Rabier et al., Pu et al.) for medium-range weather forecasting.
The authors find that both methods are able to trace back the forecast error to perturbations that improve the initial conditions. However, the forecast improvement obtained by the quasi-inverse linear method is considerably better than that obtained with a single adjoint iteration and similar to the one obtained using five iterations of the adjoint method, even though each adjoint iteration requires at least twice the computer resources of the quasi-inverse TLM estimation. Whereas the adjoint forecast sensitivities are closely related to singular vectors, the quasi-inverse linear perturbations are associated with the bred (Lyapunov) vectors used for ensemble forecasting at NCEP (Toth and Kalnay). The features of the two types of perturbations are also compared in this study. Finally, the possibility of the use of the sensitivity perturbation to improve future forecast skill is discussed, and preliminary experiments encourage further testing of this rather inexpensive method for possible operational use. The model used in this study is the NCEP operational global spectral model at a resolution of T62/L28. The corresponding TLM, and its adjoint, are based on an adiabatic version of the model but include both horizontal and vertical diffusion.
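The quasi-inverse idea — a negative time step with the sign of the dissipative terms reversed — can be sketched with a two-variable linear toy model in place of the TLM. The coefficients and names below are illustrative assumptions, not NCEP code.

```python
import numpy as np

omega, nu, dt, nsteps = 1.0, 0.05, 0.01, 500

R = np.array([[0.0, omega], [-omega, 0.0]])   # conservative (advection-like) part
D = -nu * np.eye(2)                           # dissipative (friction/diffusion-like) part

def integrate(x, A, dt, nsteps):
    """Forward-Euler integration of the linear system dx/dt = A @ x."""
    for _ in range(nsteps):
        x = x + dt * (A @ x)
    return x

x0 = np.array([1.0, 0.0])                     # known initial perturbation
xT = integrate(x0, R + D, dt, nsteps)         # forward TLM to time T

# The exact inverse would integrate -(R + D), turning damping into anti-damping
# (computationally unstable for strong diffusion). The quasi-inverse reverses
# time but flips the sign of the dissipative term so that it keeps damping:
x0_est = integrate(xT, -(R - D), dt, nsteps)

cosine = np.dot(x0, x0_est) / (np.linalg.norm(x0) * np.linalg.norm(x0_est))
```

In this toy the direction of the initial perturbation is recovered essentially exactly (`cosine` is close to 1), while its amplitude is underestimated because the dissipation is applied on both the forward and the backward leg.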

80 citations


Journal ArticleDOI
01 Mar 1997-Tellus A
TL;DR: In this article, the Lanczos algorithm was used to compare the first local Lyapunov vector (LLV) and the leading optimal vectors in a T10/18 level truncated version of the National Centers for Environmental Prediction global spectral model.
Abstract: We compare the first local Lyapunov vector (LLV) and the leading optimal vectors in a T10/18 level truncated version of the National Centers for Environmental Prediction global spectral model. The leading LLV is a vector toward which all other perturbations turn and hence it is characterized by the fastest possible growth over infinitely long time periods, while the optimal vectors are perturbations that maximize growth for a finite time period, with respect to a chosen norm. Linear tangent model breeding experiments without convection at T10 resolution show that arbitrary random perturbations converge within a transition period of 3 to 4 days to a single LLV. We computed optimal vectors with the Lanczos algorithm, using the total energy norm. For optimization periods shorter than the transition period (about 3 days), the horizontal structure of the leading initial optimal vectors differs substantially from that of the leading LLV, which provides maximum sustainable growth. There are also profound differences between the two types of vectors in their vertical structure. While the 24-h optimal vectors rapidly become similar to the LLV in their vertical structure, changes in their horizontal structure are very slow. As a consequence, their amplification factor drops and stays well below that of the LLV for an extended period after the optimization period ends. This may have an adverse effect when optimal vectors with short optimization periods are used as initial perturbations for medium-range ensemble forecasts. The optimal vectors computed for 3 days or longer are different. In these vectors, the fastest growing initial perturbation has a horizontal structure similar to that of the leading LLV, and its major difference from the LLV, in the vertical structure, tends to disappear by the end of the optimization period.
Initially, the optimal vectors are highly unbalanced and the rapid changes in their vertical structure are associated with geostrophic adjustment. The kinetic energy of the initial optimal vectors peaks in the lower troposphere, whereas in the LLV the maximum is around the jet level. During the integration the phase of the streamfunction field of the optimal vectors, with respect to their corresponding temperature field, is rapidly shifted by 180° and, due to drastic changes that also take place in the vertical temperature distribution, the maximum baroclinic shear shifts from the lower troposphere to just below the jet level. Just after initial time, when the geostrophic adjustment dominates, the leading optimal vectors exhibit a growth rate significantly higher than that of the LLV. By the end of the period of optimization, however, the growth rate associated with the leading optimal vectors drops to or below the level of the Lyapunov exponent. The transient super-Lyapunov growth associated with the leading optimal vectors is due to a one-time-only rapid rotation of the optimal vectors toward the leading LLVs. The nature of this rapid rotation depends on the length of the optimization period and the norm chosen. We speculate that the initial optimal vectors computed with commonly used norms may not be realizable. DOI: 10.1034/j.1600-0870.1997.00004.x
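The distinction between optimal vectors and the leading LLV can be illustrated with a small nonnormal propagator. The paper uses the Lanczos algorithm with a total energy norm; in this toy sketch a dense SVD in the Euclidean norm plays the same role, and the matrix is purely illustrative.

```python
import numpy as np

# Toy nonnormal linear propagator over one optimization interval.
M = np.array([[0.9, 5.0],
              [0.0, 0.8]])

# Optimal (singular) vectors: the leading right singular vector is the initial
# perturbation that maximizes growth over the interval in the chosen norm.
U, s, Vt = np.linalg.svd(M)
optimal_vector = Vt[0]
optimal_growth = s[0]

# Sustained (Lyapunov-like) growth per interval is bounded by the spectral radius.
sustained_growth = max(abs(np.linalg.eigvals(M)))
```

Here `optimal_growth` exceeds 1 while `sustained_growth` is below 1: transient super-Lyapunov growth arising from a one-time rotation of the perturbation, after which growth falls back to the eigenvalue rate — the behaviour the abstract describes.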

73 citations


Journal ArticleDOI
TL;DR: In this paper, the authors focus on the impact of modifying the error statistics by including effects of the "errors of the day" on the analysis system, and propose a method to estimate the spatially and temporally varying degree of uncertainty in the first-guess forecasts used in the analysis.
Abstract: The errors in the first-guess (forecast field) of an analysis system vary from day to day, but, as in all current operational data assimilation systems, forecast error covariances are assumed to be constant in time in the NCEP operational three-dimensional variational analysis system (known as a spectral statistical interpolation or SSI). This study focuses on the impact of modifying the error statistics by including effects of the ‘‘errors of the day’’ on the analysis system. An estimate of forecast uncertainty, as defined from the bred growing vectors of the NCEP operational global ensemble forecast, is applied in the NCEP operational SSI analysis system. The growing vectors are used to estimate the spatially and temporally varying degree of uncertainty in the first-guess forecasts used in the analysis. The measure of uncertainty is defined by a ratio of the local amplitude of the growing vectors, relative to a background amplitude measure over a large area. This ratio is used in the SSI system for adjusting the observational error term (giving more weight to observations in regions of larger forecast errors). Preliminary experiments with the low-resolution global system show positive impact of this virtually cost-free method on the quality of the analysis and medium-range weather forecasts, encouraging further tests for operational use. The results of a 45-day parallel run, and a discussion of other methods to take advantage of the knowledge of the day-to-day variation in forecast uncertainties provided by the NCEP ensemble forecast system, are also presented in the paper.
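The ratio-based weighting can be sketched as follows. This is a schematic on a hypothetical 1-D grid, not the SSI code; the variable names, the base error variances, and the use of the domain mean as the "large area" background measure are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Local bred-vector amplitude on a hypothetical 1-D grid (uncertainty proxy).
bred_amplitude = np.abs(rng.standard_normal(100)) + 0.1

# Background amplitude measure over a large area (here, the whole domain).
background = bred_amplitude.mean()

# Local-to-background amplitude ratio: the measure of first-guess uncertainty.
ratio = bred_amplitude / background

# Shrink the observation-error variance where the first guess is more uncertain,
# so that observations there receive more weight in the analysis.
base_obs_var = 1.0
adjusted_obs_var = base_obs_var / ratio

# Scalar analysis weight given to an observation relative to the first guess:
fg_var = 1.0
obs_weight = fg_var / (fg_var + adjusted_obs_var)
```

`obs_weight` rises monotonically with the bred-vector ratio, so regions flagged by the ensemble as having larger likely first-guess errors draw the analysis closer to the observations.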

25 citations


Journal ArticleDOI
TL;DR: In this article, a simple, relatively inexpensive technique has been developed for using past forecast errors to improve the future forecast skill, which can be considered as a simplified 4-dimensional variational (4-D VAR) system.
Abstract: A simple, relatively inexpensive technique has been developed for using past forecast errors to improve the future forecast skill. The method uses the forecast model and its adjoint and can be considered as a simplified 4-dimensional variational (4-D VAR) system. One- or two-day forecast errors are used to calculate a small perturbation (sensitivity perturbation) to the analyses that minimizes the forecast error. The longer forecasts started from the corrected initial conditions, although better than the original forecasts, are still significantly worse than the shorter forecasts started from the latest analysis, even though they both had access to information covering the same period. As a much less expensive alternative to 4-D VAR, the adjusted initial conditions from one or two days ago are used as a starting point for a second iteration of the regular NCEP analysis and forecast cycle until the present time (t = 0) analysis is reached. Forecast experiments indicate that the new analyses result in improvements to medium-range forecast skill, and suggest that the technique can be used in operations, since it increases the cost of the regular analysis cycle by a maximum factor of about 4 to 8, depending on the length of the analysis cycle that is repeated. Several possible operational configurations are also tested. The model used in these experiments is NCEP's operational global spectral model with 62 waves triangular truncation and 28 sigma vertical levels. An adiabatic version of the adjoint was modified to make it more consistent with the complete forecast model, including only a few simple physical parametrizations (horizontal diffusion and vertical mixing). This adjoint model was used to compute the gradient of the forecast error with respect to initial conditions.
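The core step — using the adjoint to turn a forecast error into a sensitivity perturbation of the initial conditions — can be sketched with a linear toy model, where the matrix transpose plays the role of the adjoint. The model, the step-size rule, and all names are illustrative assumptions, not the NCEP system.

```python
import numpy as np

rng = np.random.default_rng(0)

M = rng.standard_normal((5, 5)) * 0.5               # toy linear forecast model: x_T = M @ x_0
truth0 = rng.standard_normal(5)
analysis0 = truth0 + 0.3 * rng.standard_normal(5)   # analysis containing an error
verification = M @ truth0                           # verifying state at forecast time

def forecast_error(x0):
    """Quadratic forecast-error cost at time T."""
    e = M @ x0 - verification
    return 0.5 * e @ e

# Adjoint (here, the transpose) applied to the forecast error gives the gradient
# of the cost with respect to the initial conditions.
grad = M.T @ (M @ analysis0 - verification)

# Exact line search for this quadratic cost (guarantees a decrease).
Hg = M.T @ (M @ grad)
alpha = (grad @ grad) / (grad @ Hg)

# Sensitivity perturbation: correct the initial conditions against the gradient.
corrected0 = analysis0 - alpha * grad
```

Subtracting the sensitivity perturbation reduces the forecast error in this linear toy; the nonlinear, multi-day case motivates the iterated analysis-and-forecast cycle described in the abstract.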

15 citations


Journal ArticleDOI
TL;DR: One of the most significant impediments to progress in forecasting weather over North America is the relative paucity of routine observations over data-sparse regions adjacent to the United States as discussed by the authors.
Abstract: One of the most significant impediments to progress in forecasting weather over North America is the relative paucity of routine observations over data-sparse regions adjacent to the United States....

12 citations


01 Jan 1997
TL;DR: In this article, the authors focus on what the next major areas of research in data assimilation should be, and present a survey of the state-of-the-art methods for data preprocessing.
Abstract: As part of the International Symposium on Assimilation of Observations in Meteorology and Oceanography, a panel discussion was held on the evening of 15 March 1995. The purpose of this panel discussion was to focus on what the next major areas of research in data assimilation should be. The panelists had five minutes each for short presentations (Kalman filters, representers, etc.), and this was followed by an open discussion. This preprocessing will require a good understanding of the fine-scale phenomena. Least squares methods such as Kalman filters and variational schemes are inefficient estimators of non-Gaussian fields such as chemical tracers. Regardless of the modeling technique employed (Lagrangian methods seem best), a least squares assimilation scheme will smear fine structure. The estimator of maximum likelihood must be sought, by examination of tracer probability distributions.

5 citations