scispace - formally typeset
Search or ask a question

Showing papers in "Weather and Forecasting in 1999"


Journal ArticleDOI
TL;DR: Several possible hypothesis test methods are evaluated: the paired t test, the nonparametric Wilcoxon signed-rank test, and two resampling tests, which indicate the more involved resampled test methodology is the most appropriate when testing threat scores from nonprobabilistic forecasts.
Abstract: When evaluating differences between competing precipitation forecasts, formal hypothesis testing is rarely performed. This may be due to the difficulty in applying common tests given the spatial correlation of and non-normality of errors. Possible ways around these difficulties are explored here. Two datasets of precipitation forecasts are evaluated, a set of two competing gridded precipitation forecasts from operational weather prediction models and sets of competing probabilistic quantitative precipitation forecasts from model output statistics and from an ensemble of forecasts. For each test, data from each competing forecast are collected into one sample for each case day to avoid problems with spatial correlation. Next, several possible hypothesis test methods are evaluated: the paired t test, the nonparametric Wilcoxon signed-rank test, and two resampling tests. The more involved resampling test methodology is the most appropriate when testing threat scores from nonprobabilistic forecasts. ...

354 citations


Journal ArticleDOI
TL;DR: The Statistical Hurricane Intensity Prediction Scheme (SHIPS) as discussed by the authors combines climatological, persistence, and synoptic predictors to forecast intensity changes using a multiple regression technique.
Abstract: Updates to the Statistical Hurricane Intensity Prediction Scheme (SHIPS) for the Atlantic basin are described. SHIPS combines climatological, persistence, and synoptic predictors to forecast intensity changes using a multiple regression technique. The original version of the model was developed for the Atlantic basin and was run in near–real time at the Hurricane Research Division beginning in 1993. In 1996, the model was incorporated into the National Hurricane Center operational forecast cycle, and a version was developed for the eastern North Pacific basin. Analysis of the forecast errors for the period 1993–96 shows that SHIPS had little skill relative to forecasts based upon climatology and persistence. However, SHIPS had significant skill in both the Atlantic and east Pacific basins during the 1997 hurricane season. The regression coefficients for SHIPS were rederived after each hurricane season since 1993 so that the previous season’s forecast cases were included in the sample. Modificatio...

346 citations


Journal ArticleDOI
TL;DR: In this article, an alternative method to the ROC curve is proposed that represents forecast quality when expressed in terms of probabilities of events occurring contingent upon the warnings provided, which is based on ratios that measure the proportions of events and nonevents for which warnings were provided.
Abstract: The relative operating characteristic (ROC) curve is a highly flexible method for representing the quality of dichotomous, categorical, continuous, and probabilistic forecasts. The method is based on ratios that measure the proportions of events and nonevents for which warnings were provided. These ratios provide estimates of the probabilities that an event will be forewarned and that an incorrect warning will be provided for a nonevent. Some guidelines for interpreting the ROC curve are provided. While the ROC curve is of direct interest to the user, the warning is provided in advance of the outcome and so there is additional value in knowing the probability of an event occurring contingent upon a warning being provided or not provided. An alternative method to the ROC curve is proposed that represents forecast quality when expressed in terms of probabilities of events occurring contingent upon the warnings provided. The ratios used provide estimates of the probability of an event occurring give...

320 citations


Journal ArticleDOI
TL;DR: In this paper, the forecast skill of the European Centre for Medium-Range Weather Forecasts Ensemble Prediction System (EPS) in predicting precipitation probabilities is discussed and four seasons are analyzed in detail using signal detection theory and reliability diagrams to define objective measure of predictive skill.
Abstract: The forecast skill of the European Centre for Medium-Range Weather Forecasts Ensemble Prediction System (EPS) in predicting precipitation probabilities is discussed. Four seasons are analyzed in detail using signal detection theory and reliability diagrams to define objective measure of predictive skill. First, the EPS performance during summer 1997 is discussed. Attention is focused on Europe and two European local regions, one centered around the Alps and the other around Ireland. Results indicate that for Europe the EPS can give skillful prediction of low precipitation amounts [i.e., lower than 2 mm (12 h)−1] up to forecast day 6, and of high precipitation amounts [i.e., between 2 and 10 mm (12 h)−1] up to day 4. Lower levels of skill are achieved for smaller local areas. Then, the EPS performance during summer 1996 (i.e., prior to the enhancement introduced on 10 December 1996 from 33 to 51 members and to resolution increase from T63 L19 to TL159 L31) and summer 1997 are compared. Results sho...

187 citations


Journal ArticleDOI
TL;DR: In this paper, a neural network was developed for the probability of precipitation (PoP) and quantitative precipitation forecast (QPF) for the Dallas-Fort Worth, Texas, area.
Abstract: A neural network, using input from the Eta Model and upper air soundings, has been developed for the probability of precipitation (PoP) and quantitative precipitation forecast (QPF) for the Dallas‐Fort Worth, Texas, area. Forecasts from two years were verified against a network of 36 rain gauges. The resulting forecasts were remarkably sharp, with over 70% of the PoP forecasts being less than 5% or greater than 95%. Of the 436 days with forecasts of less than 5% PoP, no rain occurred on 435 days. On the 111 days with forecasts of greater than 95% PoP, rain always occurred. The linear correlation between the forecast and observed precipitation amount was 0.95. Equitable threat scores for threshold precipitation amounts from 0.05 in. ( ;1 mm) to 1 in. (;25 mm) are 0.63 or higher, with maximum values over 0.86. Combining the PoP and QPF products indicates that for very high PoPs, the correlation between the QPF and observations is higher than for lower PoPs. In addition, 61 of the 70 observed rains of at least 0.5 in. (12.7 mm) are associated with PoPs greater than 85%. As a result, the system indicates a potential for more accurate precipitation forecasting.

174 citations


Journal ArticleDOI
TL;DR: In this article, the effects of increasing horizontal resolution, the spatial variations in model skill across the region, and the relative differences in performance between the two modeling systems are verified over the Pacific Northwest.
Abstract: Precipitation forecasts from the Pennsylvania State University–National Center for Atmospheric Research Mesoscale Model (MM5) and NCEP’s 10-km resolution Eta Model (Eta-10) are verified over the Pacific Northwest in order to show the effects of increasing horizontal resolution, the spatial variations in model skill across the region, and the relative differences in performance between the two modeling systems. The MM5 is verified at 36- and 12-km resolution for 9 December 1996 through 30 April 1997 using approximately 150 cooperative observer and National Weather Service precipitation sites across the Pacific Northwest. A noticeable improvement in bias, equitable threat, and root-mean-square (rms) error scores occurs as the horizontal resolution is increased. The spatial distribution of bias and equitable threat scores across Washington and Oregon indicate that the 12-km MM5 generates too much precipitation along the steep windward slopes and not enough precipitation in the lee of major barriers....

173 citations


Journal ArticleDOI
TL;DR: In this paper, surface wind observations analyzed by the Hurricane Research Division (HRD) were compared to those computed by the parametric wind model used in the National Weather Service Sea, Lake, and Overland Surges from Hurricanes (SLOSH) model's storm surge computations for seven cases in five recent hurricanes.
Abstract: Surface wind observations analyzed by the Hurricane Research Division (HRD) were compared to those computed by the parametric wind model used in the National Weather Service Sea, Lake, and Overland Surges from Hurricanes (SLOSH) model’s storm surge computations for seven cases in five recent hurricanes. In six cases, the differences between the SLOSH and HRD surface peak wind speeds were 6% or less, but in one case (Hurricane Emily of 1993) the SLOSH computed peak wind speeds were 15% less than the HRD. In all seven cases, statistics for the modeled and analyzed wind fields showed that for the region of strongest winds, the mean SLOSH wind speed was 14% greater than that of the HRD and the mean inflow angle for SLOSH was 198 less than that of the HRD. The radii beyond the region of strongest winds in the seven cases had mean wind speed and inflow angle differences that were very small. The SLOSH computed peak storm surges usually compared closely to the observed values of storm surge in the region of the maximum wind speeds, except Hurricane Emily where SLOSH underestimated the peak surge. HRD’s observation-based wind fields were input to SLOSH for storm surge hindcasts of Hurricanes Emily and Opal (1995). In Opal, the HRD input produced nearly the same computed storm surges as those computed from the SLOSH parametric wind model, and the calculated surge was insensitive to perturbations in the HRD wind field. For Emily, observation-based winds produced a computed storm surge that was closer to the peak observed surge, confirming that the computed surge in Pamlico Sound was sensitive to atmospheric forcing. Using real-time, observation-based winds in SLOSH would likely improve storm surge computations in landfalling hurricanes affected by synoptic and mesoscale factors that are not accounted for in parametric models (e.g., a strongly sheared environment, convective asymmetries, and stably stratified boundary layers). 
An accurate diagnosis of storm surge flooding, based on the actual track and wind fields could be supplied to emergency management agencies, government officials, and utilities to help with damage assessment and recovery efforts.

134 citations


Journal ArticleDOI
TL;DR: The history of storm spotting and public awareness of the tornado threat is reviewed in this paper, where it is shown that a downward trend in fatalities apparently began after the famous “Tri-State” tornado of 1925.
Abstract: The history of storm spotting and public awareness of the tornado threat is reviewed. It is shown that a downward trend in fatalities apparently began after the famous “Tri-State” tornado of 1925. Storm spotting’s history begins in World War II as an effort to protect the nation’s military installations, but became a public service with the resumption of public tornado forecasting, pioneered in 1948 by the Air Force’s Fawbush and Miller and begun in the public sector in 1952. The current spotter program, known generally as SKYWARN, is a civilian-based volunteer organization. Responsibility for spotter training has rested with the national forecasting services (originally, the Weather Bureau and now the National Weather Service). That training has evolved with (a) the proliferation of widespread film and (recently) video footage of severe storms; (b) growth in the scientific knowledge about tornadoes and tornadic storms, as well as a better understanding of how tornadoes produce damage; and (c) th...

129 citations


Journal ArticleDOI
TL;DR: In this paper, a versatile workstation version of the NCEP Eta Model is used to simulate three excessive precipitation episodes in the central United States, including 16−17 June 1996 in the upper Midwest, 17 July 1996 in western Iowa and 27 May 1997 in Texas.
Abstract: A versatile workstation version of the NCEP Eta Model is used to simulate three excessive precipitation episodes in the central United States. These events all resulted in damaging flash flooding and include 16‐17 June 1996 in the upper Midwest, 17 July 1996 in western Iowa, and 27 May 1997 in Texas. The episodes reflect a wide range of meteorological situations: (i) a warm core cyclone in June 1996 generated a meso-b-scale region of excessive rainfall from echo training in its warm sector while producing excessive overrunning rainfall to the north of its warm front, (ii) a mesoscale convective complex in July 1996 produced excessive rainfall, and (iii) tornadic thunderstorms in May 1997 resulted in small-scale excessive rains. Model sensitivity to horizontal resolution is investigated using a range of horizontal resolutions comparable to those used in operational and quasi-operational forecasting models. Sensitivity tests are also performed using both the Betts‐Miller‐Janjic convective scheme (operational at NCEP in 1998) and the Kain‐Fritsch scheme. Variations in predicted peak precipitation as resolution is refined are found to be highly case dependent, suggesting forecaster interpretation of increasingly higher resolution model quantitative precipitation forecast (QPF) information will not be straightforward. In addition, precipitation forecasts and QPF response to changing resolution are both found to vary significantly with choice of convective parameterization.

105 citations


Journal ArticleDOI
TL;DR: In this article, the authors examined the operational potential for predicting the onset of cloud-to-ground lightning in 39 airmass thunderstorms that developed over the NASA Kennedy Space Center, Florida, to determine the best lightning initiation signature.
Abstract: The operational potential for predicting the onset of cloud-to-ground lightning is examined. WSR-88D reflectivity echoes were analyzed for 39 airmass thunderstorms that developed over the NASA Kennedy Space Center, Florida, to determine the best lightning initiation signature. This study examined thunderstorms in the months of May–September 1992–97. These storms were studied in conjunction with cloud-to-ground (CG) lightning flash locations from the National Lightning Detection Network. From a time series of radar echoes, it was found that the 40-dBZ echo detected at the −10°C temperature height is the best indicator for predicting the beginning of CG lightning activity. The observed median lag time, or warning time, between this lightning initiation signature and the first CG lightning flashes was 7.5 min. Other lightning initiation signatures examined at −15° and −20°C temperature heights yielded shorter warning times.

103 citations


Journal ArticleDOI
TL;DR: In this paper, the authors provided a report on the behavior of these cyclonic systems during three special observing periods (SOPs) as revealed in the twice-daily ECMWF operational analyses.
Abstract: The data collected during the three special observing periods (SOPs) of the Antarctic First Regional Observing Study of the Troposphere project provide an excellent base upon which to study the behavior of cyclonic systems in winter, spring, and summer in the Southern Hemisphere. This paper provides a report on the behavior of these cyclonic systems during the three SOPs as revealed in the twice-daily ECMWF operational analyses. The study has been undertaken with an objective cyclone tracking algorithm applied to the digital analyses. The results revealed cyclone behavior generally in accord with long-term climatologies developed with this scheme. In the SOPs the authors observed many systems to be generated in the western part of the ocean basins and then to move east and, to a lesser extent, south. In the three periods they found a concentration of tracks just to the north of the Antarctic continent, being particularly noticeable in the Indian Ocean. At the same time, they found significant dif...

Journal ArticleDOI
TL;DR: In this paper, a 46-case composite spanning the years 1962-88 was constructed for a 6-day period centered at 1200 UTC on the day of heavy precipitation onset (denoted τ0).
Abstract: Warm, moist southwesterly airflow into the northwestern United States during the cold season can result in rapid snowmelt and flooding. The objectives of this research are to document characteristic synoptic flow patterns accompanying cold-season (November–March) flooding events, and isolate flow anomalies associated with the moisture transport during a representative event. The first objective is accomplished through a 46-case composite spanning the years 1962–88; the second objective is addressed through diagnosis of a flooding event that occurred on 17–18 January 1986. The 46-case composite is constructed for a 6-day period centered at 1200 UTC on the day of heavy precipitation onset (denoted τ0). Composite 500-hPa geopotential height anomaly fields reveal anomalous ridging over the Bering Sea preceding the precipitation event, a negative anomaly over the Gulf of Alaska throughout the composite evolution, and a positive anomaly over the southwestern Unites States and adjacent eastern Pacific O...

Journal ArticleDOI
TL;DR: In this article, a collocation study of meteorological reports from commercial aircraft relayed through the Aircraft Communications, Addressing, and Reporting System (ACARS) has been performed to estimate standard deviations of observation errors for wind and temperature.
Abstract: A collocation study of meteorological reports from commercial aircraft relayed through the Aircraft Communications, Addressing, and Reporting System (ACARS) has been performed to estimate standard deviations of observation errors for wind and temperature. ACARS observations were collected over an area in the western and central United States for a 13-month period, and this dataset was examined for pairs of reports within small spatial (#10 km) and temporal (#10 min) windows. The results showed an observation error of a single horizontal component of wind of 1.1 m s21 and 0.5 K for temperature above the boundary layer. Within the boundary layer, the rms difference of wind and temperature between aircraft was larger, presumably due to larger smallscale variations in the atmosphere and, in the case of wind, from aircraft maneuvers. These observation error estimates are valuable for use in data assimilation and for determination of forecast error from ACARS observation-minus-forecast differences. By comparing standard deviations at different levels, estimates of mesoscale variability at a 10-km scale in the lower troposphere were also calculated. These values (rms vector error of 1.8 ms 21 for wind, rms error of 0.5 K for temperature) can be interpreted as estimates of the 10-km lowertropospheric error of representativeness, also useful for data assimilation.

Journal ArticleDOI
TL;DR: In this paper, the mesoscale snowbands formed in close correlation to the intense midlevel frontogenesis and deep layer of negative equivalent potential vorticity (EPV) in three northeastern United States snowstorms.
Abstract: The National Centers for Environmental Prediction’s 29-km version Meso Eta Model and Weather Surveillance Radar-1988 Doppler base reflectivity data were used to diagnose intense mesoscale snowbands in three northeastern United States snowstorms. Snowfall rates within these snowbands were extreme and, in one case, were close to 15 cm (6 in.) per hour. The heaviest total snowfall with each snowstorm was largely associated with the positioning of these mesoscale snowbands. Each snowstorm exhibited strong midlevel frontogenesis in conjunction with a deep layer of negative equivalent potential vorticity (EPV). The frontogenesis and negative EPV were found in the deformation zone, north of the developing midlevel cyclone. Cross-sectional analyses (oriented perpendicular to the isotherms) indicated that the mesoscale snowbands formed in close correlation to the intense midlevel frontogenesis and deep layer of negative EPV. It was found that the EPV was significantly reduced on the warm side of the midle...

Journal ArticleDOI
TL;DR: A synoptic-dynamic climatology was constructed using all 24-h 2.5in. (50.8 mm) or greater rainfall events in nine states affected by heavy rains and flooding from June through September 1993 using 6- or 12-h gridded analyses from the Regional Data Assimilation System and geostationary satellite imagery.
Abstract: A synoptic–dynamic climatology was constructed using all 24-h 2-in. (50.8 mm) or greater rainfall events in nine states affected by heavy rains and flooding from June through September 1993 using 6- or 12-h gridded analyses from the Regional Data Assimilation System and geostationary satellite imagery. Each of the 85 events was assigned a category (0–4) based on the areal coverage of the 3-in. (76.2 mm) or greater observed precipitation isohyet. A variety of meteorological fields and rules of thumb used by forecasters at the Hydrometeorological Prediction Center are investigated that may help identify the most likely location and scale of a convective precipitation event. The heaviest rain usually fell to the north (downwind) of the axis of highest 850-mb winds and moisture flux in an area of 850-mb warm temperature and equivalent potential temperature advection. The rainfall maximum also usually occurred to the north or northeast of the axis of highest 850-mb equivalent potential temperature. Th...

Journal ArticleDOI
TL;DR: In this article, the short-term forecast accuracy of six different forecast models over the western United States is described for January, February, and March 1996, and the forecast errors are described in terms of bias error and mean square error (mse) as computed relative to gridded objective analyses and raw-insonde observations.
Abstract: The short-term forecast accuracy of six different forecast models over the western United States is described for January, February, and March 1996. Four of the models are operational products from the National Centers for Environmental Prediction (NCEP) and the other two are research models with initial and boundary conditions obtained from NCEP models. Model resolutions vary from global wavenumber 126 (∼100 km equivalent horizontal resolution) for the Medium Range Forecast model (MRF) to about 30 km for the Meso Eta, Utah Local Area Model (Utah LAM), and Pennsylvania State University–National Center for Atmospheric Research Mesoscale Model Version 5 (MM5). Forecast errors are described in terms of bias error and mean square error (mse) as computed relative to (i) gridded objective analyses and (ii) rawinsonde observations. Bias error and mse fields computed relative to gridded analyses show considerable variation from model to model, with the largest errors produced by the most highly resolved ...

Journal ArticleDOI
TL;DR: A long-lived highly organized highly organized squall line moved rapidly across the middle Mississippi and Ohio Valleys on 15 April 1994 within a moderately unstable, strongly sheared environment as discussed by the authors.
Abstract: A long-lived highly organized squall line moved rapidly across the middle Mississippi and Ohio Valleys on 15 April 1994 within a moderately unstable, strongly sheared environment. Over Kentucky and southern Indiana, the line contained several bowing segments (bow echoes) that resulted in widespread wind damage, numerous shear vortices/rotational circulations, and several tornadoes that produced F0–F2 damage. In this study, the Louisville–Fort Knox WSR-88D is used to present a thorough discussion of a particularly long-tracked bowing line segment over central Kentucky that exhibited a very complex and detailed evolution, more so than any other segment throughout the life span of the squall line. Specifically, this segment produced abundant straight-line wind damage; cyclic, multiple core cyclonic circulations, some of which met mesocyclone criteria; several tornadoes; and embedded high precipitation supercell-like structure that evolved into a rotating comma head–comma tail pattern. The bowing seg...

Journal ArticleDOI
TL;DR: Tornadic vortex signatures (TVSs) of 52 tornadoes were identified and analyzed, then characterized as either descending or nondescending as discussed by the authors, which refers to a known tendency of radar-observed tornadic vortices, namely, that of their initial detection aloft and then of their subsequent descent leading to tornadogenesis.
Abstract: Tornadic vortex signatures (TVSs) of 52 tornadoes were identified and analyzed, then characterized as either descending or nondescending. This characterization refers to a known tendency of radar-observed tornadic vortices, namely, that of their initial detection aloft and then of their subsequent descent leading to tornadogenesis. Only 52% of the sampled TVSs descended according to this archetypal model. The remaining 48% were detected first near the ground and grew upward or appeared nearly simultaneously over a several kilometer depth; these represent primary modes of tornado development that have been explained theoretically. The descending–nondescending TVSs were stratified according to attributes of the tornado and TVS. Significantly, tornadoes within quasi-linear convective systems tended to be associated with nondescending TVSs, identification of which provided a mean tornado lead time of 5 min. Two case studies are presented for illustrative purposes. On 1 July 1997 in southern Minnesota...

Journal ArticleDOI
TL;DR: In this paper, two mesoscale convective systems that occurred on 27 June 1995 were primarily responsible for the severe event, which resulted in three fatalities and millions of dollars in damage.
Abstract: Between 25 and 27 June 1995, excessive rainfall and associated flash flooding across portions of western Virginia resulted in three fatalities and millions of dollars in damage. Although many convective storms occurred over this region during this period, two particular mesoscale convective systems that occurred on 27 June were primarily responsible for the severe event. The first system (the Piedmont storm) developed over Madison County, Virginia (eastern slopes of the Blue Ridge Mountains), and propagated slowly southward producing 100–300 mm of rain over a narrow swath of the Virginia foothills and Piedmont. The second system (the Madison storm) developed over the same area but remained quasi-stationary along the eastern slopes of the Blue Ridge for nearly 8 h producing more than 600 mm of rain. Analysis of this event indicates that the synoptic conditions responsible for initiating and maintaining the Madison storm were very similar to the Big Thompson and Fort Collins floods along the Front ...

Journal ArticleDOI
TL;DR: In this paper, an objective method of forecasting precipitation coverage with a neural network is presented, which uses as predictors all available data at local weather stations including both numerical model results and weather data obtained later than the model initial time.
Abstract: An objective method of forecasting precipitation coverage with a neural network is presented. This method uses as predictors all available data at local weather stations including both numerical model results and weather data obtained later than the model initial time, which sometimes contradict each other and hence have to be handled subjectively by well-experienced forecasters. Since the method gives an objective and also realistic forecast of areal precipitation coverage, its skill scores are better than those of the persistence forecast (after 3 h), the linear regression forecasts, and numerical model precipitation prediction.

Journal ArticleDOI
TL;DR: The main goals of the first ground-based severe storm intercept field programs in the 1970s at the National Severe Storms Laboratory and at the University of Oklahoma were to verify severe weather signatures detected by a remote Doppler radar, to identify cloud features that could aid storm spotters, and to estimate wind speeds in tornadoes based on the photogrammetric analysis of tornado debris movies as discussed by the authors.
Abstract: Efforts to study severe convective storms and tornadoes by intercepting them either on the ground or on airborne platforms are highlighted. Airborne sorties into or near waterspouts in the Florida Keys with instruments were made in the late 1960s and the 1970s. The main goals of the first organized ground-based severe storm intercept field programs in the 1970s at the National Severe Storms Laboratory and at the University of Oklahoma were to verify severe weather signatures detected by a remote Doppler radar, to identify cloud features that could aid storm spotters, and to estimate wind speeds in tornadoes based on the photogrammetric analysis of tornado debris movies. Instruments were subsequently developed that could be carried along on intercept vehicles to measure in situ electric field change, thermodynamic variables, and wind, near the ground and aloft. Beginning in the late 1980s, portable and mobile Doppler radars were developed that could be used to make estimates of the maximum wind sp...

Journal ArticleDOI
TL;DR: In this paper, the authors examined Lake-effect snowstorms in northern Utah and western New York with and without lightning/thunder and found that the lake effect snowstorms with lightning have significantly higher temperatures and dewpoints in the lower troposphere and significantly lower lifted indices than those without lightning.
Abstract: Lake-effect snowstorms in northern Utah and western New York with and without lightning/thunder are examined. Lake-effect snowstorms with lightning have significantly higher temperatures and dewpoints in the lower troposphere and significantly lower lifted indices than lake-effect snowstorms without lightning. In contrast, there is little difference in dewpoint depressions between events with and without lightning. Surface-to-700-hPa temperature differences (a surrogate for lower-tropospheric lapse rate) for events with and without lightning differ significantly for events in northern Utah, but not for those in western New York. Nearly all events have no convective available potential energy, regardless of the presence of lightning. These results are discussed in the context of current models of storm electrification.

Journal ArticleDOI
TL;DR: In this paper, the performance of the WSR-88D rainfall algorithm, Precipitation Processing System, was examined in detail to determine how well it performed, in particular the sensitivity to the algorithm's rain-rate threshold (hail cap) parameter and the performance on the resulting radar rainfall estimates.
Abstract: A strong thunderstorm produced a flash flood on the evening of 12 July 1996 in Buffalo Creek, Colorado, that caused two deaths and significant property damage. Most of the rain fell in a 1-h time period from 2000 to 2100 MDT. The performance of the WSR-88D rainfall algorithm, Precipitation Processing System, was examined in detail to determine how well it performed. In particular the sensitivity to the algorithm’s rain-rate threshold (hail cap) parameter and the performance of the gauge–radar adjustment subalgorithm on the resulting radar rainfall estimates were examined by comparison with available rain gauge data. It was determined that the WSR-88D rainfall algorithm overestimated the rainfall in general over the radar scanning domain for this event by about 60% relative to the rain gauges although the radar-derived rainfall for the flood-producing storm cell nearly matched the single gauge that sampled it. The derived rainfall over the radar scanning domain was not very sensitive to the settin...

Journal ArticleDOI
TL;DR: The U.S. Navy's operational implementation of the hurricane prediction system developed at the National Oceanic and Atmospheric Administration's Geophysical Fluid Dynamics Laboratory is described, and the performance of the model during the 1996 western North Pacific tropical cyclone season is analyzed.
Abstract: The U.S. Navy’s operational implementation of the hurricane prediction system developed at the National Oceanic and Atmospheric Administration’s Geophysical Fluid Dynamics Laboratory is described, and the performance of the model during the 1996 western North Pacific tropical cyclone season is analyzed. The model was highly reliable in terms of maintaining and tracking tropical cyclones, retaining 96%, 93%, and 93% of all tropical storms and typhoons at the 24-, 48-, and 72-h forecast periods. Subsequent model improvements raised these percentages to 96%, 93%, and 100%. Overall track errors were 176, 316, and 466 km for the same periods. Errors for tropical storms and typhoons were 75–150 km smaller than those for tropical depressions, a difference that generally grew with forecast length. Large track errors were generally associated with a sheared environment, spurious interactions with elevated terrain, or poorly timed recurvature. On average, the model slightly underforecast intensity, but in...

Journal ArticleDOI
TL;DR: In this article, the first in a series of articles devoted to a comprehensive verification of these forecasts is presented, where the property verified is calibration: a match between forecast probabilities and empirical frequencies of events.
Abstract: From 1 August 1990 to 31 July 1995, the Weather Service Forecast Office in Pittsburgh prepared 6159 probabilistic quantitative precipitation forecasts. Forecasts were made twice a day for 24-h periods beginning at 0000 and 1200 UTC for two river basins. This is the first in a series of articles devoted to a comprehensive verification of these forecasts. The property verified herein is calibration: a match between forecast probabilities and empirical frequencies of events. Monthly time series of calibration statistics are analyzed to infer (i) trends in calibration over time, (ii) the forecasters’ skill in quantifying uncertainty, (iii) the adaptability of forecasters’ judgments to nonstationarities of the predictand, (iv) the possibility of reducing biases through dynamic recalibration, and (v) the potential for improving calibration through individualized training.
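Calibration in the sense used above can be checked with a simple tally: group forecasts by their stated probability and compare each group's stated probability with the empirical frequency of the event. A minimal sketch, with hypothetical data (the paper's actual stratification by river basin, forecast period, and month is not reproduced here):

```python
from collections import defaultdict

def calibration_table(forecast_probs, outcomes):
    """Group forecasts by stated probability and compare with the
    empirical event frequency in each group.
    Returns {stated_prob: (n_cases, observed_frequency)}."""
    groups = defaultdict(list)
    for p, hit in zip(forecast_probs, outcomes):
        groups[p].append(hit)
    return {p: (len(v), sum(v) / len(v)) for p, v in sorted(groups.items())}

# A forecaster is well calibrated when observed frequency ~ stated probability.
table = calibration_table(
    [0.2, 0.2, 0.2, 0.2, 0.2, 0.8, 0.8, 0.8, 0.8, 0.8],
    [0,   0,   1,   0,   0,   1,   1,   1,   1,   0])
```

In this toy sample the 0.2 forecasts verify 20% of the time and the 0.8 forecasts 80% of the time, i.e., perfect calibration; systematic departures in either direction are the biases that dynamic recalibration would aim to remove.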

Journal ArticleDOI
TL;DR: The surface energy budget in Antarctic latitudes is evaluated for the medium-range numerical weather forecasts produced by the National Centers for Environmental Prediction (NCEP) and for the NCEP-National Center for Atmospheric Research reanalysis project during the winter, spring, and summer special observing periods (SOPs) of the Antarctic First Regional Observing Study of Troposphere project.
Abstract: The surface energy budget in Antarctic latitudes is evaluated for the medium-range numerical weather forecasts produced by the National Centers for Environmental Prediction (NCEP) and for the NCEP–National Center for Atmospheric Research reanalysis project during the winter, spring, and summer special observing periods (SOPs) of the Antarctic First Regional Observing Study of Troposphere project. A significant change in the energy balance resulted from an extensive model update beginning with the forecasts initialized on 11 January 1995 during the summer SOP. Both the forecasts and the reanalysis include significant errors in the surface energy balance over Antarctica. The errors often tend to cancel and thus produce reasonable surface temperature fields. General errors include downward longwave radiation that is about 30–50 W m−2 too small. Lower-than-observed cloudiness contributes to this error and to excessive downward shortwave radiation at the surface. The model albedo over Antarctica, about 75%, i...

Journal ArticleDOI
TL;DR: The daily evolution of local surface conditions at Phoenix, Arizona and the characteristics of the 1200 UTC sounding at Tucson, Arizona, have been examined to determine important meteorological features that lead to thunderstorm occurrence over the low deserts of central Arizona.
Abstract: The daily evolution of local surface conditions at Phoenix, Arizona, and the characteristics of the 1200 UTC sounding at Tucson, Arizona, have been examined to determine important meteorological features that lead to thunderstorm occurrence over the low deserts of central Arizona. Each day of July and August during the period 1990–95 has been stratified based upon daily mean surface moisture conditions at Phoenix, Arizona, and the occurrence of afternoon and evening convective activity in the Phoenix metropolitan area. The nearest operational sounding, taken 160 km to the southeast at Tucson, is shown not to be representative of low-level thermodynamic conditions in central Arizona. Thus, Phoenix forecasters’ ability to identify precursor conditions for the development of thunderstorms is impaired. On days that convective storms occur in the Phoenix area, there is a decrease in the diurnal amplitude of surface dewpoint changes, signifying increased/deeper boundary layer moisture. This signal is ...

Journal ArticleDOI
TL;DR: In this article, it was shown that a mechanism of downstream amplification across the Pacific into South America is generally accompanied by troughs and ridges that propagate eastward, and that the growth of the long stationary waves during the freeze events may be due to scale interaction among wave c...
Abstract: Many frost events over southeastern Brazil are accompanied by a large-amplitude upper trough of the middle latitudes that extends well into the Tropics. This paper first illustrates that a mechanism of downstream amplification across the Pacific into South America generally accompanies these situations. This is manifested by troughs and ridges that propagate eastward. An analysis of these situations during frost events shows that these features of downstream amplification, illustrated on a Hovmöller (x–t) plot, can be decomposed into a family of synoptic-scale waves that propagate eastward and a family of planetary-scale waves that acquire a quasi-stationary character during the freeze event. It is shown that a global model, at a resolution of 70 km, can be used to predict these features on the decomposition of scales during freeze events. It became apparent from these features that the growth of the long stationary waves during the freeze events may be due to scale interaction among wave c...
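The decomposition into planetary-scale and synoptic-scale families described above amounts to a spectral filter along the periodic longitude axis. A minimal sketch using an FFT band split; the wavenumber bands chosen here are illustrative assumptions, not the paper's exact choices:

```python
import numpy as np

def scale_decompose(field, planetary_max=3, synoptic_max=9):
    """Split a field along a periodic longitude axis (last axis) into a
    planetary-scale part (zonal wavenumbers 1..planetary_max) and a
    synoptic-scale part (planetary_max+1..synoptic_max) via FFT.
    The zonal mean (wavenumber 0) is excluded from both parts."""
    spec = np.fft.rfft(field, axis=-1)

    def band(lo, hi):
        mask = np.zeros(spec.shape[-1])
        mask[lo:hi + 1] = 1.0
        return np.fft.irfft(spec * mask, n=field.shape[-1], axis=-1)

    return band(1, planetary_max), band(planetary_max + 1, synoptic_max)
```

Applied to a time–longitude (Hovmöller) array, the planetary band isolates the quasi-stationary waves while the synoptic band retains the eastward-propagating troughs and ridges.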

Journal ArticleDOI
TL;DR: In this article, a variational method adjusts the LAPS moisture analysis by minimizing differences between forward model-computed radiances and radiances from Advanced Weather Interactive Processing System (AWIPS) image-grade data from Geostationary Operational Environmental Satellite 8 (GOES-8).
Abstract: The Local Analysis and Prediction System (LAPS) analyzes three-dimensional moisture as one component of its system. This paper describes the positive impact that simple 8-bit, remapped, routinely available imagery has on the LAPS moisture analysis above 500 hPa. A variational method adjusts the LAPS moisture analysis by minimizing differences between forward model-computed radiances and radiances from Advanced Weather Interactive Processing System (AWIPS) image-grade data from Geostationary Operational Environmental Satellite 8 (GOES-8). The three infrared channels used in the analysis will be routinely available to AWIPS workstations every 15 min. This technique improves LAPS upper-level dewpoint, reducing dewpoint temperature bias and root-mean-square error on the order of 0.5 and 1.5 K, respectively, as compared to Denver radiosonde observation data. Furthermore, it clearly demonstrates the benefit of image-grade data for objective analysis, in addition to the data's well-known subjective utility.
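The variational adjustment described above balances a prior (background) moisture estimate against observed radiances through a forward model. A one-variable analog makes the idea concrete; the quadratic cost function, the error weights, and the linear forward operator `h` are illustrative assumptions, not the LAPS formulation:

```python
def variational_adjust(q_b, y_obs, h, sig_b, sig_o):
    """One-variable analog of a variational moisture adjustment:
    minimize J(q) = ((q - q_b) / sig_b)**2 + ((h * q - y_obs) / sig_o)**2,
    balancing the background moisture q_b (error sig_b) against an
    observed radiance y_obs (error sig_o) seen through a linear forward
    operator h. Returns the closed-form minimizer of J."""
    w_b = 1.0 / sig_b**2           # background weight
    w_o = h**2 / sig_o**2          # observation weight
    return (w_b * q_b + (h / sig_o**2) * y_obs) / (w_b + w_o)
```

As the observation error shrinks, the analysis is pulled toward the moisture value whose forward-modeled radiance matches the satellite; as it grows, the analysis relaxes back to the background, which is the trade-off the LAPS minimization navigates channel by channel.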

Journal ArticleDOI
TL;DR: In this article, an optimal model output calibration (MOC) algorithm for surface air temperature forecasts is proposed and tested with the National Centers for Environmental Prediction Regional Spectral Model (RSM) using forecasts and observations of the most recent 2-4 weeks to objectively estimate and adjust the current model forecast errors and make refined predictions.
Abstract: An optimal model output calibration (MOC) algorithm suitable for surface air temperature forecasts is proposed and tested with the National Centers for Environmental Prediction Regional Spectral Model (RSM). Differing from existing methodologies and the traditional model output statistics (MOS) technique, the MOC algorithm uses forecasts and observations of the most recent 2–4 weeks to objectively estimate and adjust the current model forecast errors and make refined predictions. The MOC equation, a multivariate linear regression equation with forecast error being the predictand, objectively screens as many as 30 candidates of predictors and optimally selects no more than 6. The equation varies from day to day and from site to site. Since it does not rely on long-term statistics of stable model runs, the MOC minimizes the influence of changes in model physics and spatial resolution on the forecast refinement process. Forecast experiments were conducted for six major urban centers in the Tennessee...
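The screening step described above, taking up to 30 candidate predictors and optimally selecting no more than 6, can be sketched as a greedy forward regression on the forecast error. The selection criterion used here (residual sum of squares) and the synthetic setup are assumptions; the paper's actual screening statistic is not specified in the abstract:

```python
import numpy as np

def screen_predictors(X, y, max_predictors=6):
    """Greedy forward screening: from the candidate predictor columns of
    X, repeatedly add the one that most reduces the least-squares
    residual of the predictand y (here, the model forecast error),
    stopping at max_predictors. Returns the selected column indices."""
    n, p = X.shape
    selected = []
    for _ in range(min(max_predictors, p)):
        best_j, best_rss = None, None
        for j in range(p):
            if j in selected:
                continue
            # Fit intercept + currently selected columns + candidate j.
            A = np.column_stack([np.ones(n), X[:, selected + [j]]])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = float(np.sum((y - A @ coef) ** 2))
            if best_rss is None or rss < best_rss:
                best_j, best_rss = j, rss
        selected.append(best_j)
    return selected
```

Refitting this selection every day on a trailing 2–4 week window, as the abstract describes, is what lets the calibration track recent changes in model physics or resolution rather than depending on a long stable archive as MOS does.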