
Showing papers in "Journal of Geophysical Research in 1998"


Journal ArticleDOI
TL;DR: In this article, a thorough description of observed monsoon variability and the physical processes that are thought to be important is presented, and some strategies that may help achieve improvement are discussed.
Abstract: The Tropical Ocean-Global Atmosphere (TOGA) program sought to determine the predictability of the coupled ocean-atmosphere system. The World Climate Research Programme's (WCRP) Global Ocean-Atmosphere-Land System (GOALS) program seeks to explore predictability of the global climate system through investigation of the major planetary heat sources and sinks, and interactions between them. The Asian-Australian monsoon system, which undergoes aperiodic and high amplitude variations on intraseasonal, annual, biennial and interannual timescales, is a major focus of GOALS. Empirical seasonal forecasts of the monsoon have been made with moderate success for over 100 years. More recent modeling efforts have not been successful. Even simulation of the mean structure of the Asian monsoon has proven elusive and the observed ENSO-monsoon relationship has been difficult to replicate. Divergence in simulation skill occurs between integrations by different models or between members of ensembles of the same model. This degree of spread is surprising given the relative success of empirical forecast techniques. Two possible explanations are presented: difficulty in modeling the monsoon regions and nonlinear error growth due to regional hydrodynamical instabilities. It is argued that the reconciliation of these explanations is imperative for prediction of the monsoon to be improved. To this end, a thorough description of observed monsoon variability and the physical processes that are thought to be important is presented. Prospects of improving prediction and some strategies that may help achieve improvement are discussed.

2,632 citations


Journal ArticleDOI
TL;DR: In this article, a large data set containing coincident in situ chlorophyll and remote sensing reflectance measurements was used to evaluate the accuracy, precision, and suitability of a wide variety of ocean color algorithms for use by SeaWiFS (Sea-viewing Wide Field-of-view Sensor).
Abstract: A large data set containing coincident in situ chlorophyll and remote sensing reflectance measurements was used to evaluate the accuracy, precision, and suitability of a wide variety of ocean color chlorophyll algorithms for use by SeaWiFS (Sea-viewing Wide Field-of-view Sensor). The radiance-chlorophyll data were assembled from various sources during the SeaWiFS Bio-optical Algorithm Mini-Workshop (SeaBAM) and comprise 919 stations encompassing chlorophyll concentrations between 0.019 and 32.79 μg L−1. Most of the observations are from Case I nonpolar waters, and ∼20 observations are from more turbid coastal waters. A variety of statistical and graphical criteria were used to evaluate the performances of 2 semianalytic and 15 empirical chlorophyll/pigment algorithms subjected to the SeaBAM data. The empirical algorithms generally performed better than the semianalytic ones. Cubic polynomial formulations were generally superior to other kinds of equations. Empirical algorithms with increasing complexity (number of coefficients and wavebands) were calibrated to the SeaBAM data and evaluated to illustrate the relative merits of different formulations. The ocean chlorophyll 2 algorithm (OC2), a modified cubic polynomial (MCP) function which uses Rrs490/Rrs555, simulates well the sigmoidal pattern evident between log-transformed radiance ratios and chlorophyll, and has been chosen as the at-launch SeaWiFS operational chlorophyll a algorithm. Improved performance was obtained using the ocean chlorophyll 4 algorithm (OC4), a four-band (443, 490, 510, 555 nm) maximum band ratio formulation. This maximum band ratio (MBR) is a new approach in empirical ocean color algorithms and has the potential advantage of maintaining the highest possible satellite sensor signal-to-noise ratio over a 3-orders-of-magnitude range in chlorophyll concentration.
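A minimal sketch of the two algorithm forms described above. The coefficients below are illustrative placeholders rather than the published SeaWiFS values, and the rrs* arguments are remote sensing reflectances in the named bands:

```python
import numpy as np

def oc2_chl(rrs490, rrs555, a=(0.341, -3.001, 2.811, -2.041, -0.040)):
    """Modified cubic polynomial (MCP) of the log-transformed band ratio.
    The coefficients here are placeholders; use the published OC2 values."""
    r = np.log10(rrs490 / rrs555)
    return 10.0 ** (a[0] + a[1] * r + a[2] * r ** 2 + a[3] * r ** 3) + a[4]

def oc4_max_band_ratio(rrs443, rrs490, rrs510, rrs555):
    """OC4-style maximum band ratio (MBR): use whichever blue band is
    brightest, which keeps the sensor signal-to-noise ratio as high as
    possible over the full range of chlorophyll concentration."""
    return np.log10(np.maximum.reduce([rrs443, rrs490, rrs510]) / rrs555)
```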

2,441 citations


Journal ArticleDOI
TL;DR: In this article, the authors use output from hydrological, oceanographic, and atmospheric models to estimate the variability in the gravity field (i.e., in the geoid) due to those sources.
Abstract: The GRACE satellite mission, scheduled for launch in 2001, is designed to map out the Earth's gravity field to high accuracy every 2–4 weeks over a nominal lifetime of 5 years. Changes in the gravity field are caused by the redistribution of mass within the Earth and on or above its surface. GRACE will thus be able to constrain processes that involve mass redistribution. In this paper we use output from hydrological, oceanographic, and atmospheric models to estimate the variability in the gravity field (i.e., in the geoid) due to those sources. We develop a method for constructing surface mass estimates from the GRACE gravity coefficients. We show the results of simulations, where we use synthetic GRACE gravity data, constructed by combining estimated geophysical signals and simulated GRACE measurement errors, to attempt to recover hydrological and oceanographic signals. We show that GRACE may be able to recover changes in continental water storage and in seafloor pressure, at scales of a few hundred kilometers and larger and at timescales of a few weeks and longer, with accuracies approaching 2 mm in water thickness over land, and 0.1 mbar or better in seafloor pressure.
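The geoid-to-surface-mass step can be summarized compactly. The sketch below follows the standard load Love number scaling of the Stokes coefficient changes; the Legendre normalization is deliberately glossed over (a real implementation must use 4π fully normalized functions consistent with the coefficients), and dividing the result by the density of water (1000 kg/m³) gives equivalent water thickness:

```python
import numpy as np
from scipy.special import lpmn  # associated Legendre functions (unnormalized)

def surface_mass_change(dC, dS, love_k, theta, phi, a=6.371e6, rho_ave=5517.0):
    """Sketch of converting geoid (Stokes) coefficient changes to a surface
    mass density change (kg/m^2) at colatitude theta, longitude phi.

    dC, dS : (lmax+1, lmax+1) arrays of coefficient changes, indexed [l, m]
    love_k : load Love numbers k_l for l = 0..lmax
    Normalization caveat: lpmn returns unnormalized P_lm; a real code must
    use fully normalized functions matching the coefficient convention."""
    lmax = dC.shape[0] - 1
    pnm, _ = lpmn(lmax, lmax, np.cos(theta))  # pnm[m, l]
    total = 0.0
    for l in range(lmax + 1):
        for m in range(l + 1):
            total += ((2 * l + 1) / (1.0 + love_k[l]) * pnm[m, l]
                      * (dC[l, m] * np.cos(m * phi) + dS[l, m] * np.sin(m * phi)))
    return a * rho_ave / 3.0 * total
```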

1,821 citations


Journal ArticleDOI
TL;DR: In this paper, the authors used three statistically based methods: optimal smoothing (OS), the Kalman filter (KF), and optimal interpolation (OI), along with estimates of the error covariance of the analyzed fields.
Abstract: Global analyses of monthly sea surface temperature (SST) anomalies from 1856 to 1991 are produced using three statistically based methods: optimal smoothing (OS), the Kalman filter (KF) and optimal interpolation (OI). Each of these is accompanied by estimates of the error covariance of the analyzed fields. The spatial covariance function these methods require is estimated from the available data; the time-marching model is a first-order autoregressive model again estimated from data. The data input for the analyses are monthly anomalies from the United Kingdom Meteorological Office historical sea surface temperature data set (MOHSST5) (Parker et al., 1994) of the Global Ocean Surface Temperature Atlas (GOSTA) (Bottomley et al., 1990). These analyses are compared with each other, with GOSTA, and with an analysis generated by projection (P) onto a set of empirical orthogonal functions (as in Smith et al. (1996)). In theory, the quality of the analyses should rank in the order OS, KF, OI, P, and GOSTA. It is found that the first four give comparable results in the data-rich periods (1951-1991), but at times when data are sparse the first three differ significantly from P and GOSTA. At these times the latter two often have extreme and fluctuating values, prima facie evidence of error. The statistical schemes are also verified against data not used in any of the analyses (proxy records derived from corals and air temperature records from coastal and island stations). We also present evidence that the analysis error estimates are indeed indicative of the quality of the products. At most times the OS and KF products are close to the OI product, but at times of especially poor coverage their use of information from other times is advantageous. The methods appear to reconstruct the major features of the global SST field from very sparse data. Comparison with other indications of the El Niño-Southern Oscillation cycle shows that the analyses provide usable information on interannual variability as far back as the 1860s.
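To make the filtering idea concrete, here is a deliberately low-dimensional sketch of a Kalman filter with the first-order autoregressive forecast model mentioned above. In the actual analysis the state is a vector of spatial modes and the observation operator maps to irregularly located ship data; everything here is scalar for clarity:

```python
def kalman_ar1(obs, H_list, R_list, phi, Q, x0, P0):
    """Scalar Kalman filter with an AR(1) time-marching model,
    x_t = phi * x_{t-1} + noise (noise variance Q).

    obs    : sequence of observations (None where a month has no data)
    H_list : observation operators, one per time step
    R_list : observation error variances, one per time step
    Returns a list of (analysis, analysis error variance) pairs."""
    x, P = x0, P0
    analyses = []
    for y, H, R in zip(obs, H_list, R_list):
        x, P = phi * x, phi * P * phi + Q      # AR(1) forecast step
        if y is not None:                      # skip update in data voids
            K = P * H / (H * P * H + R)        # Kalman gain
            x = x + K * (y - H * x)
            P = (1.0 - K * H) * P
        analyses.append((x, P))                # analysis plus error estimate
    return analyses
```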

1,561 citations


Journal ArticleDOI
TL;DR: A review of tropical-extratropical teleconnections with a focus on developments over the Tropical Ocean-Global Atmosphere (TOGA) decade and the current state of understanding can be found in this article.
Abstract: The primary focus of this review is tropical-extratropical interactions and especially the issues involved in determining the response of the extratropical atmosphere to tropical forcing associated with sea surface temperature (SST) anomalies. The review encompasses observations, empirical studies, theory and modeling of the extratropical teleconnections with a focus on developments over the Tropical Ocean-Global Atmosphere (TOGA) decade and the current state of understanding. In the tropical atmosphere, anomalous SSTs force anomalies in convection and large-scale overturning with subsidence in the descending branch of the local Hadley circulation. The resulting strong upper tropospheric divergence in the tropics and convergence in the subtropics act as a Rossby wave source. The climatological stationary planetary waves and associated jet streams, especially in the northern hemisphere, can make the total Rossby wave sources somewhat insensitive to the position of the tropical heating that induces them and thus can create preferred teleconnection response patterns, such as the Pacific-North American (PNA) pattern. However, a number of factors influence the dispersion and propagation of Rossby waves through the atmosphere, including zonal asymmetries in the climatological state, transients, and baroclinic and nonlinear effects. Internal midlatitude sources can amplify perturbations. Observations, modeling, and theory have clearly shown how storm tracks change in response to changes in quasi-stationary waves and how these changes generally feed back to maintain or strengthen the dominant perturbations through vorticity and momentum transports. The response of the extratropical atmosphere naturally induces changes in the underlying surface, so that there are changes in extratropical SSTs and changes in land surface hydrology and moisture availability that can feed back and influence the total response. Land surface processes are believed to be especially important in spring and summer. Anomalous SSTs and tropical forcing have tended to be strongest in the northern winter, and teleconnections in the southern hemisphere are weaker and more variable and thus more inclined to be masked by natural variability. Occasional strong forcing in seasons other than winter can produce strong and identifiable signals in the northern hemisphere and, because the noise of natural variability is less, the signal-to-noise ratio can be large. The relative importance of tropical versus extratropical SST forcings has been established through numerical experiments with atmospheric general circulation models (AGCMs). Predictability of anomalous circulation and associated surface temperature and precipitation in the extratropics is somewhat limited by the difficulty of finding a modest signal embedded in the high level of noise from natural variability in the extratropics, and the complexity and variety of the possible feedbacks. Accordingly, ensembles of AGCM runs and time averaging are needed to identify signals and make predictions. Strong anomalous tropical forcing provides opportunities for skillful forecasts, and the accuracy and usefulness of forecasts is expected to improve as the ability to forecast the anomalous SSTs improves, as models improve, and as the information available from the mean and the spread of ensemble forecasts is better utilized.

1,523 citations


Journal ArticleDOI
TL;DR: The MODIS cloud mask algorithm as discussed by the authors uses several cloud detection tests to indicate a level of confidence that MODIS is observing clear skies, which is ancillary input to MODIS land, ocean, and atmosphere science algorithms to suggest processing options.
Abstract: The MODIS cloud mask uses several cloud detection tests to indicate a level of confidence that MODIS is observing clear skies. It will be produced globally at single-pixel resolution; the algorithm uses as many as 14 of the 36 MODIS spectral bands to maximize reliable cloud detection and to mitigate past difficulties experienced by sensors with coarser spatial resolution or fewer spectral bands. The MODIS cloud mask is ancillary input to the MODIS land, ocean, and atmosphere science algorithms to suggest processing options. The MODIS cloud mask algorithm will operate in near real time in a limited computer processing and storage facility with simple, easy-to-follow algorithm paths. The algorithm identifies several conceptual domains according to surface type and solar illumination, including land, water, snow/ice, desert, and coast for both day and night. Once a pixel has been assigned to a particular domain (defining an algorithm path), a series of threshold tests attempts to detect the presence of clouds in the instrument field of view. Each cloud detection test returns a confidence level that the pixel is clear, ranging in value from 1 (high) to 0 (low). There are several types of tests, where detection of different cloud conditions relies on different tests. Tests capable of detecting similar cloud conditions are grouped together. While these groups are arranged so that independence between them is maximized, few, if any, spectral tests are completely independent. The minimum confidence from all tests within a group is taken to be representative of that group. These confidences indicate absence of particular cloud types. The product of all the group confidences is used to determine the confidence of finding clear-sky conditions. This paper outlines the MODIS cloud masking algorithm. While no present sensor has all of the spectral bands necessary for testing the complete MODIS cloud mask, initial validation of some of the individual cloud tests is presented using existing remote sensing data sets.
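The confidence combination rule in the abstract (minimum within a group of similar tests, product across groups) is simple enough to state directly:

```python
def clear_sky_confidence(group_confidences):
    """Combine per-test clear-sky confidences (each in [0, 1]) as described
    above: the minimum within each group of similar tests, then the product
    across groups. Returns the overall clear-sky confidence."""
    q = 1.0
    for group in group_confidences:
        q *= min(group)
    return q

# e.g. three groups of tests on one pixel:
# clear_sky_confidence([[0.9, 0.95], [1.0], [0.8, 0.85, 0.99]]) -> 0.72
```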

1,198 citations


Journal ArticleDOI
TL;DR: This paper reviewed many published works and presented a compilation of quantitative earthquake interaction studies from a stress change perspective, which provided some clues about certain aspects of earthquake mechanics, but much work remains before we can understand the complete story of how earthquakes work.
Abstract: Many aspects of earthquake mechanics remain an enigma as we enter the closing years of the twentieth century. One potential bright spot is the realization that simple calculations of stress changes may explain some earthquake interactions, just as previous and ongoing studies of stress changes have begun to explain human-induced seismicity. This paper, which introduces the special section "Stress Triggers, Stress Shadows, and Implications for Seismic Hazard," reviews many published works and presents a compilation of quantitative earthquake interaction studies from a stress change perspective. This synthesis supplies some clues about certain aspects of earthquake mechanics. It also demonstrates that much work remains before we can understand the complete story of how earthquakes work.
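The stress-change calculations reviewed here largely reduce to the change in Coulomb failure stress resolved on a receiver fault; a sketch, with a commonly assumed effective friction value:

```python
def coulomb_stress_change(d_tau, d_sigma_n, mu_eff=0.4):
    """Change in Coulomb failure stress on a receiver fault:

        dCFS = d_tau + mu_eff * d_sigma_n

    d_tau     : shear stress change in the slip direction (positive promotes slip)
    d_sigma_n : normal stress change (positive = unclamping in this convention)
    mu_eff    : effective friction coefficient (0.4 is a commonly assumed value)

    Positive dCFS brings the fault closer to failure (a stress trigger);
    negative dCFS casts a stress shadow."""
    return d_tau + mu_eff * d_sigma_n
```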

1,031 citations


Journal ArticleDOI
TL;DR: A major accomplishment of the recently completed Tropical Ocean-Global Atmosphere (TOGA) Program was the development of an ocean observing system to support seasonal-to-interannual climate studies.
Abstract: A major accomplishment of the recently completed Tropical Ocean-Global Atmosphere (TOGA) Program was the development of an ocean observing system to support seasonal-to-interannual climate studies. This paper reviews the scientific motivations for the development of that observing system, the technological advances that made it possible, and the scientific advances that resulted from the availability of a significantly expanded observational database. A primary phenomenological focus of TOGA was interannual variability of the coupled ocean-atmosphere system associated with El Nino and the Southern Oscillation (ENSO).Prior to the start of TOGA, our understanding of the physical processes responsible for the ENSO cycle was limited, our ability to monitor variability in the tropical oceans was primitive, and the capability to predict ENSO was nonexistent. TOGA therefore initiated and/or supported efforts to provide real-time measurements of the following key oceanographic variables: surface winds, sea surface temperature, subsurface temperature, sea level and ocean velocity. Specific in situ observational programs developed to provide these data sets included the Tropical Atmosphere-Ocean (TAO) array of moored buoys in the Pacific, a surface drifting buoy program, an island and coastal tide gauge network, and a volunteer observing ship network of expendable bathythermograph measurements. Complementing these in situ efforts were satellite missions which provided near-global coverage of surface winds, sea surface temperature, and sea level. These new TOGA data sets led to fundamental progress in our understanding of the physical processes responsible for ENSO and to the development of coupled ocean-atmosphere models for ENSO prediction.
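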

1,028 citations


Journal ArticleDOI
TL;DR: In this article, the authors employed an irregular grid of nonoverlapping cells adapted to the heterogeneous sampling of the Earth's mantle by seismic waves to resolve lateral heterogeneity on scales as small as 0.6° and 1.2°.
Abstract: Recent global travel time tomography studies by Zhou [1996] and van der Hilst et al. [1997] have been performed with cell parameterizations of the order of those frequently used in regional tomography studies (i.e., with cell sizes of 1°–2°). These new global models constitute a considerable improvement over previous results that were obtained with rather coarse parameterizations (5° cells). The inferred structures are, however, of larger scale than is usually obtained in regional models, and it is not clear where and if individual cells are actually resolved. This study aims at resolving lateral heterogeneity on scales as small as 0.6° in the upper mantle and 1.2°–3° in the lower mantle. This allows for the adequate mapping of expected small-scale structures induced by, for example, lithosphere subduction, deep mantle upwellings, and mid-ocean ridges. There are three major contributions that allow for this advancement. First, we employ an irregular grid of nonoverlapping cells adapted to the heterogeneous sampling of the Earth's mantle by seismic waves [Spakman and Bijwaard, 1998]. Second, we exploit the global data set of Engdahl et al. [1998], which is a reprocessed version of the global data set of the International Seismological Centre. Their reprocessing included hypocenter redetermination and phase reidentification. Finally, we combine all data used (P, pP, and pwP phases) into nearly 5 million ray bundles with a limited spatial extent such that averaging over large mantle volumes is prevented while the signal-to-noise ratio is improved. In the approximate solution of the huge inverse problem we obtain a variance reduction of 57.1%. Synthetic sensitivity tests indicate horizontal resolution on the scale of the smallest cells (0.6° or 1.2°) in the shallow parts of subduction zones decreasing to approximately 2°–3° resolution in well-sampled regions in the lower mantle. Vertical resolution can be worse (up to several hundreds of kilometers) in subduction zones with rays predominantly pointing along dip. Important features of the solution are as follows: 100–200 km thick high-velocity slabs beneath all major subduction zones, sometimes flattening in the transition zone and sometimes directly penetrating into the lower mantle; large high-velocity anomalies in the lower mantle that have been attributed to subduction of the Tethys ocean and the Farallon plate; and low-velocity anomalies continuing across the 660 km discontinuity to hotspots at the surface under Iceland, east Africa, the Canary Islands, Yellowstone, and the Society Islands. Our findings corroborate that the 660 km boundary may resist but not prevent (present day) large-scale mass transfer from upper to lower mantle or vice versa. This observation confirms the results of previous, global mantle studies that employed coarser parameterizations.
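For reference, the variance-reduction statistic quoted above (57.1%) measures the fraction of delay-time variance explained by the 3-D solution; a minimal sketch:

```python
import numpy as np

def variance_reduction(d_obs, d_pred):
    """Percent variance reduction of a tomographic solution.

    d_obs  : travel time residuals relative to the 1-D starting model
    d_pred : residuals predicted by the 3-D velocity model"""
    r = d_obs - d_pred
    return 100.0 * (1.0 - np.sum(r ** 2) / np.sum(d_obs ** 2))
```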

1,018 citations


Journal ArticleDOI
TL;DR: The U.S. National Lightning Detection Network (NLDN) has provided real-time and historical lightning data to the electric utility industry, the National Weather Service, and other government and commercial users.
Abstract: The U.S. National Lightning Detection Network™ (NLDN) has provided lightning data covering the continental United States since 1989. Using information gathered from more than 100 sensors, the NLDN provides both real-time and historical lightning data to the electric utility industry, the National Weather Service, and other government and commercial users. It is also the primary source of lightning data for use in research and climatological studies in the United States. In this paper we discuss the design, implementation, and data from the time-of-arrival/magnetic direction finder (TOA/MDF) network following a recent system-wide upgrade. The location accuracy (the maximum dimension of a confidence region around the stroke location) has been improved by a factor of 4 to 8 since 1991, resulting in a median accuracy of 500 m. The expected flash detection efficiency ranges from 80% to 90% for those events with peak currents above 5 kA, varying slightly by region. Subsequent strokes and strokes with peak currents less than 5 kA can now be detected and located; however, the detection efficiency for these events is not quantified in this study because their peak current distribution is not well known.

1,010 citations


Journal ArticleDOI
TL;DR: In this article, a new global model for the Earth's crust based on seismic refraction data published in the period 1948-1995 and a detailed compilation of ice and sediment thickness is presented.
Abstract: We present a new global model for the Earth's crust based on seismic refraction data published in the period 1948-1995 and a detailed compilation of ice and sediment thickness. An extensive compilation of seismic refraction measurements has been used to determine the crustal structure on continents and their margins. Oceanic crust is modeled with both a standard model for normal oceanic crust, and variants for nonstandard regions, such as oceanic plateaus. Our model (CRUST 5.1) consists of 2592 5° x 5° tiles in which the crust and uppermost mantle are described by eight layers: (1) ice, (2) water, (3) soft sediments, (4) hard sediments, (5) crystalline upper, (6) middle, (7) lower crust, and (8) uppermost mantle. Topography and bathymetry are adopted from a standard database (ETOPO-5). Compressional wave velocity in each layer is based on field measurements, and shear wave velocity and density are estimated using recently published empirical Vp-Vs and Vp-density relationships. The crustal model differs from previous models in that (1) the thickness and seismic/density structure of sedimentary basins is accounted for more completely, (2) the velocity structure of unmeasured regions is estimated using statistical averages that are based on a significantly larger database of crustal structure, (3) the compressional wave, shear wave, and density structure have been explicitly specified using newly available constraints from field and laboratory studies. Thus this global crustal model is based on substantially more data than previous models and differs from them in many important respects. A new map of the thickness of the Earth's crust is presented, and we illustrate the application of this model by using it to provide the crustal correction for surface wave phase velocity maps. Love waves at 40 s are dominantly sensitive to crustal structure, and there is a very close correspondence between observed phase velocities at this period and those predicted by CRUST 5.1. We find that the application of crustal corrections to long-period (167 s) Rayleigh waves significantly increases the variance in the phase velocity maps and strengthens the upper mantle velocity anomalies beneath stable continental regions. A simple calculation of crustal isostasy indicates significant lateral variations in upper mantle density. The model CRUST 5.1 provides a complete description of the physical properties of the Earth's crust at a scale of 5° x 5° and can be used for a wide range of seismological and nonseismological problems.
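A hypothetical container for a model of this kind, showing how a 5° x 5° tiled, eight-layer crustal model might be indexed; the names and layout are illustrative, not the CRUST 5.1 file format:

```python
# Eight layers per tile, as in the abstract; each would hold
# (thickness_km, vp, vs, rho) values.
LAYERS = ["ice", "water", "soft_sed", "hard_sed",
          "upper_crust", "middle_crust", "lower_crust", "mantle"]

def tile_index(lat, lon):
    """Map a (lat, lon) in degrees to the (row, col) of its 5-degree tile.
    Rows run from the north pole southward, columns eastward from -180."""
    row = int((90.0 - lat) // 5)            # 0..35
    col = int(((lon + 180.0) % 360) // 5)   # 0..71, wraps at the date line
    return min(row, 35), col

# A model array of shape (36, 72, 8, 4) could then be addressed as
# model[row, col, layer, property] for any point on the globe.
```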

Journal ArticleDOI
TL;DR: In this article, the authors compare predictions of two models: the Petrinec and Russell [1996] model and the Shue et al. [1997] model along the flank.
Abstract: During the solar wind dynamic pressure enhancement around 0200 UT on January 11, 1997, at the end of the January 6-11 magnetic cloud event, the magnetopause was pushed inside geosynchronous orbit. The LANL 1994-084 and GMS 4 geosynchronous satellites crossed the magnetopause and moved into the magnetosheath. Also, the Geotail satellite was in the magnetosheath while the Interball 1 satellite observed magnetopause crossings. This event provides an excellent opportunity to test and validate the prediction capabilities and accuracy of existing models of the magnetopause location for producing space weather forecasts. In this paper, we compare predictions of two models: the Petrinec and Russell [1996] model and the Shue et al. [1997] model. These two models correctly predict the magnetopause crossings on the dayside; however, there are some differences in the predictions along the flank. The Shue et al. [1997] model correctly predicts the Geotail magnetopause crossings and partially predicts the Interball 1 crossings. The Petrinec and Russell [1996] model correctly predicts the Interball 1 crossings and is partially consistent with the Geotail observations. We further found that some of the inaccuracy in Shue et al.'s predictions is due to the inappropriate linear extrapolation from the parameter range for average solar wind conditions to that for extreme conditions. To improve predictions under extreme solar wind conditions, we introduce a nonlinear dependence of the parameters on the solar wind conditions to represent the saturation effects of the solar wind dynamic pressure on the flaring of the magnetopause and saturation effects of the interplanetary magnetic field Bz on the subsolar standoff distance. These changes lead to better agreement with the Interball 1 observations for this event.
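For context, the Shue et al. family of models describes the magnetopause as r = r0 (2 / (1 + cos θ))^α. The sketch below uses the nonlinear r0 and α dependence commonly cited from this work; the coefficient values are quoted from memory and should be verified against the paper before use:

```python
import numpy as np

def magnetopause_r(theta, bz, dp):
    """Magnetopause radial distance (Earth radii) at angle theta (rad) from
    the Sun-Earth line, for IMF Bz (nT) and dynamic pressure dp (nPa).
    The tanh and log terms are the nonlinear saturation effects the
    abstract describes; coefficients are assumptions to be checked."""
    r0 = (10.22 + 1.29 * np.tanh(0.184 * (bz + 8.14))) * dp ** (-1.0 / 6.6)
    alpha = (0.58 - 0.007 * bz) * (1.0 + 0.024 * np.log(dp))
    return r0 * (2.0 / (1.0 + np.cos(theta))) ** alpha

# Subsolar standoff distance for Bz = -5 nT, Dp = 4 nPa:
# magnetopause_r(0.0, -5.0, 4.0)
```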

Journal ArticleDOI
TL;DR: In this paper, the spectral variations of the backscattered radiances are used to separate aerosol absorption from scattering effects, which can be used to identify several aerosol types, ranging from nonabsorbing sulfates to highly UV-absorbing mineral dust.
Abstract: We discuss the theoretical basis of a recently developed technique to characterize aerosols from space. We show that the interaction between aerosols and the strong molecular scattering in the near ultraviolet produces spectral variations of the backscattered radiances that can be used to separate aerosol absorption from scattering effects. This capability allows identification of several aerosol types, ranging from nonabsorbing sulfates to highly UV-absorbing mineral dust, over both land and water surfaces. Two ways of using the information contained in the near-UV radiances are discussed. In the first method, a residual quantity, which measures the departure of the observed spectral contrast from that of a molecular atmosphere, is computed. Since clouds yield nearly zero residues, this method is a useful way of separately mapping the spatial distribution of UV-absorbing and nonabsorbing particles. To convert the residue to optical depth, the aerosol type must be known. The second method is an inversion procedure that uses forward calculations of backscattered radiances for an ensemble of aerosol models. Using a look-up table approach, a set of measurements, given by the ratio of the backscattered radiances at 340 and 380 nm together with the 380 nm radiance, is associated, within the domain of the candidate aerosol models, with values of optical depth and single-scattering albedo. No previous knowledge of aerosol type is required. We present a sensitivity analysis of various error sources contributing to the estimation of aerosol properties by the two methods.
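A sketch of the first (residue) method under the usual aerosol-index convention; the -100 scaling and sign convention are assumptions to be checked against the paper:

```python
import numpy as np

def uv_residue(i340_meas, i380_meas, i340_rayleigh, i380_rayleigh):
    """Departure of the measured 340/380 nm spectral contrast from that
    computed for a pure molecular (Rayleigh) atmosphere. With this sign
    convention, positive residues flag UV-absorbing aerosols (dust, smoke),
    near-zero values clouds, and negative values nonabsorbing aerosols."""
    return -100.0 * (np.log10(i340_meas / i380_meas)
                     - np.log10(i340_rayleigh / i380_rayleigh))
```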

Journal ArticleDOI
TL;DR: In this paper, a model was proposed to account for the observed variations in the flux and pitch angle distribution of relativistic electrons during geomagnetic storms by combining pitch angle scattering by intense EMIC waves and energy diffusion during cyclotron resonant interaction with whistler mode chorus outside the plasmasphere.
Abstract: Resonant diffusion curves for electron cyclotron resonance with field-aligned electromagnetic R mode and L mode electromagnetic ion cyclotron (EMIC) waves are constructed using a fully relativistic treatment. Analytical solutions are derived for the case of a single-ion plasma, and a numerical scheme is developed for the more realistic case of a multi-ion plasma. Diffusion curves are presented, for plasma parameters representative of the Earth's magnetosphere at locations both inside and outside the plasmapause. The results obtained indicate minimal electron energy change along the diffusion curves for resonant interaction with L mode waves. Intense storm time EMIC waves are therefore ineffective for electron stochastic acceleration, although these waves could induce rapid pitch angle scattering for ≳ 1 MeV electrons near the duskside plasmapause. In contrast, significant energy change can occur along the diffusion curves for interaction between resonant electrons and whistler (R mode) waves. The energy change is most pronounced in regions of low plasma density. This suggests that whistler mode waves could provide a viable mechanism for electron acceleration from energies near 100 keV to above 1 MeV in the region outside the plasmapause during the recovery phase of geomagnetic storms. A model is proposed to account for the observed variations in the flux and pitch angle distribution of relativistic electrons during geomagnetic storms by combining pitch angle scattering by intense EMIC waves and energy diffusion during cyclotron resonant interaction with whistler mode chorus outside the plasmasphere.
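The diffusion-curve construction rests on the standard relativistic resonance condition; compactly, and as usual in this literature:

```latex
% Relativistic cyclotron resonance of an electron with a field-aligned
% wave of frequency \omega and parallel wavenumber k_\parallel
% (n = harmonic number, \gamma = Lorentz factor, \Omega_e = |e|B/m_e):
\omega - k_\parallel v_\parallel = \frac{n\,\Omega_e}{\gamma}
% Resonant diffusion approximately conserves particle energy in the frame
% moving at the parallel phase velocity, so diffusion curves satisfy
\gamma m_e c^2 - \frac{\omega}{k_\parallel}\, p_\parallel = \mathrm{const.}
```

The energy change along such a curve is small when the phase speed is small compared with the particle speed, which is the sense in which L mode EMIC waves scatter pitch angle without accelerating the electrons.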

Journal ArticleDOI
TL;DR: In this article, a synergistic algorithm was proposed to produce global leaf area index and fraction of absorbed photosynthetically active radiation fields from canopy reflectance data measured by MODIS (moderate resolution imaging spectroradiometer) and MISR (multiangle imaging spectraladiometer).
Abstract: A synergistic algorithm for producing global leaf area index and fraction of absorbed photosynthetically active radiation fields from canopy reflectance data measured by MODIS (moderate resolution imaging spectroradiometer) and MISR (multiangle imaging spectroradiometer) instruments aboard the EOS-AM 1 platform is described here. The proposed algorithm is based on a three-dimensional formulation of the radiative transfer process in vegetation canopies. It allows the use of information provided by MODIS (single angle and up to 7 shortwave spectral bands) and MISR (nine angles and four shortwave spectral bands) instruments within one algorithm. By accounting for features specific to the problem of radiative transfer in plant canopies, powerful techniques developed in reactor theory and atmospheric physics are adapted to split a complicated three-dimensional radiative transfer problem into two independent, simpler subproblems, the solutions of which are stored in the form of a look-up table. The theoretical background required for the design of the synergistic algorithm is discussed. Large-scale ecosystem modeling is used to simulate a range of ecological responses to changes in climate and chemical composition of the atmosphere, including changes in the distribution of terrestrial plant communities across the globe in response to climate changes. Leaf area index (LAI) is a state parameter in all models describing the exchange of fluxes of energy, mass (e.g., water and CO2), and momentum between the surface and the planetary boundary layer. Analyses of the global carbon budget indicate a large terrestrial middle- to high-latitude sink, without which the accumulation of carbon in the atmosphere would be higher than the present rate. The problem of accurately evaluating the exchange of carbon between the atmosphere and the terrestrial vegetation therefore requires special attention. In this context the fraction of photosynthetically active radiation (FPAR) absorbed by global vegetation is a key state variable in most ecosystem productivity models and in global models of climate, hydrology, biogeochemistry, and ecology (Sellers et al., 1997). Therefore these variables that describe vegetation canopy structure and its energy absorption capacity are required by many of the EOS Interdisciplinary Projects (Myneni et al., 1997a). In order to quantitatively and accurately model the global dynamics of these processes, to differentiate short-term from long-term trends, and to distinguish regional from global phenomena, these two variables must be estimated accurately and consistently at the global scale.
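A schematic of the look-up-table retrieval logic described above; the names, shapes, and acceptance rule are illustrative assumptions, not the operational algorithm:

```python
import numpy as np

def retrieve_lai(observed_refl, lut_refl, lut_lai, sigma):
    """Keep every candidate canopy realization whose modeled reflectances
    agree with the observations within the measurement uncertainty, and
    report the mean LAI over the accepted set (its spread is a retrieval
    uncertainty).

    observed_refl : (n_bands,) measured reflectances
    lut_refl      : (n_entries, n_bands) modeled reflectances from the LUT
    lut_lai       : (n_entries,) LAI of each LUT entry
    sigma         : (n_bands,) per-band uncertainties"""
    chi2 = np.sum(((lut_refl - observed_refl) / sigma) ** 2, axis=1)
    accepted = chi2 <= len(sigma)        # roughly within 1 sigma per band
    if not accepted.any():
        return None                      # would fall back to a backup method
    return lut_lai[accepted].mean(), lut_lai[accepted].std()
```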

Journal ArticleDOI
TL;DR: In this paper, the fine particle composition data from seven National Park Service locations in Alaska for the period from 1986 to 1995 were analyzed using a new type of factor analysis, positive matrix factorization (PMF), which uses the estimates of the error in the data to provide optimum data point scaling and permits a better treatment of missing and below detection limit values.
Abstract: The fine particle (<2.5 μm) composition data from seven National Park Service locations in Alaska for the period from 1986 to 1995 were analyzed using a new type of factor analysis, positive matrix factorization (PMF). This method uses the estimates of the error in the data to provide optimum data point scaling and permits a better treatment of missing and below detection limit values. Eight source components were obtained for data sets from the Northwest Alaska Areas and the Bering Land Bridge sites. Five to seven components were obtained for the other Alaskan sites. The solutions were normalized by using aerosol fine mass concentration data. Squared correlation coefficients between the reconstructed mass obtained from aerosol composition data for the sites and the measured mass were in the range of 0.74–0.95. Two factors identified as soils were obtained for all of the sites. Concentrations for these factors for most of the sites have maxima in the summer and minima in the winter. A sea-salt component was found at five locations. A factor with the highest concentrations of black carbon (BC), H+, and K identified as forest fire smoke was obtained for all data sets except at Katmai. Factors with high concentrations of S, BC-Na-S, and Zn-Cu were obtained at all sites. At three sites, the solutions also contained a factor with high Pb and Br values. The factors with the high S, Pb, and BC-Na-S values at most sites show an annual cycle with maxima during the winter-spring season and minima in the summer. The seasonal variations and elemental compositions of these factors suggest anthropogenic origins, with the spatial pattern suggesting that the sources are distant from the receptor sites. The seasonal maxima/minima ratios of these factors were higher for more northerly locations. Four main sources contribute to the observed concentrations at these locations: long-range transported anthropogenic aerosol (Arctic haze aerosol), sea-salt aerosol, local soil dust, and aerosol with high BC concentrations from regional forest fires or local wood smoke. A northwest to southeast negative gradient suggesting long-range transport of air masses from regions north or northwest of Alaska dominated the spatial distribution of the high S factor concentrations.
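The PMF objective is a chi-square weighted by the per-value error estimates, which is what lets missing and below-detection-limit values be downweighted. The published PMF2 solver is a dedicated least-squares code; the multiplicative-update routine below is only a stand-in that minimizes the same weighted objective:

```python
import numpy as np

def weighted_nmf(X, S, k, n_iter=500, seed=0):
    """Illustrative solver for the PMF-style objective

        Q = sum_ij [ (X_ij - (G F)_ij) / S_ij ]^2,  with G, F >= 0,

    where X is the samples-by-species data matrix, S the matching error
    estimates, and k the number of source components.

    Returns G (source contributions per sample) and F (source profiles)."""
    rng = np.random.default_rng(seed)
    W = 1.0 / S ** 2                    # weights from the error estimates
    G = rng.random((X.shape[0], k))
    F = rng.random((k, X.shape[1]))
    eps = 1e-12                         # guards against division by zero
    for _ in range(n_iter):
        G *= ((W * X) @ F.T) / (((W * (G @ F)) @ F.T) + eps)
        F *= (G.T @ (W * X)) / ((G.T @ (W * (G @ F))) + eps)
    return G, F
```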

Journal ArticleDOI
TL;DR: In this paper, spatial and temporal variability of the stable isotope composition of precipitation in the southeast Asia and western Pacific region is discussed, with emphasis on the China territory, based on the database of the International Atomic Energy Agency/World Meteorological Organization Global Network Isotopes in Precipitation and the available information on the regional climatology and atmospheric circulation patterns.
Abstract: Spatial and temporal variability of the stable isotope composition of precipitation in the southeast Asia and western Pacific region is discussed, with emphasis on the China territory, based on the database of the International Atomic Energy Agency/World Meteorological Organization Global Network of Isotopes in Precipitation and the available information on the regional climatology and atmospheric circulation patterns. The meteorological and pluviometric regime of southeast Asia is controlled by five different air masses: (1) polar air mass originating in the Arctic, (2) continental air mass originating over central Asia, (3) tropical-maritime air mass originating in the northern Pacific, (4) equatorial-maritime air mass originating in the western equatorial Pacific, and (5) equatorial-maritime air mass originating in the Indian Ocean. The relative importance of different air masses in the course of a given year is modulated by the monsoon activity and the seasonal displacement of the Intertropical Convergence Zone (ITCZ). Gradual rain-out of moist, oceanic air masses moving inland, associated with monsoon circulation, constitutes a powerful mechanism capable of producing large isotopic depletions in rainfall, often completely overshadowing the dependence of δ18O and δ2H on temperature. For instance, precipitation at Lhasa station (Tibetan Plateau) during the rainy period (June-September) is depleted in 18O by more than 6‰ with respect to winter rainfall, despite a 10°C higher surface air temperature in summer. This characteristic isotopic imprint of monsoon activity is seen over large areas of the region. The oceanic air masses forming the two monsoon systems, Pacific and Indian monsoon, differ in their isotope signatures, as demonstrated by the average δ18O of rainfall, which in the south of China (Haikou, Hong Kong) is about 2.5‰ more negative than in the Bay of Bengal (Yangon). Strong seasonal variations of the deuterium excess values in precipitation observed in some areas of the studied region result from a complete reversal of atmospheric circulation over these areas and a changing source of atmospheric moisture. High d-excess values observed at Tokyo and Pohang during winter (15-25‰) result from interaction of dry air masses from the northern Asian continent passing the Sea of Japan and the China Sea and picking up moisture under reduced relative humidity. The isotopic composition of precipitation also provides information about the maximum extent of the ITCZ on the continent during summer.

Journal ArticleDOI
TL;DR: In this paper, the authors examined wind observations of inertial and dissipation range spectra in an attempt to better understand the processes that form the dissipation ranges and how these processes depend on the ambient solar wind parameters (interplanetary magnetic field intensity, ambient proton density and temperature, etc.).
Abstract: The dissipation range for interplanetary magnetic field fluctuations is formed by those fluctuations with spatial scales comparable to the gyroradius or ion inertial length of a thermal ion. It is reasonable to assume that the dissipation range represents the final fate of magnetic energy that is transferred from the largest spatial scales via nonlinear processes until kinetic coupling with the background plasma removes the energy from the spectrum and heats the background distribution. Typically, the dissipation range at 1 AU sets in at spacecraft frame frequencies of a few tenths of a hertz. It is characterized by a steepening of the power spectrum and often demonstrates a bias of the polarization or magnetic helicity spectrum. We examine Wind observations of inertial and dissipation range spectra in an attempt to better understand the processes that form the dissipation range and how these processes depend on the ambient solar wind parameters (interplanetary magnetic field intensity, ambient proton density and temperature, etc.). We focus on stationary intervals with well-defined inertial and dissipation range spectra. Our analysis shows that parallel-propagating waves, such as Alfven waves, are inconsistent with the data. MHD turbulence consisting of a partly slab and partly two-dimensional (2-D) composite geometry is consistent with the observations, while thermal particle interactions with the 2-D component may be responsible for the formation of the dissipation range. Kinetic Alfven waves propagating at large angles to the background magnetic field are also consistent with the observations and may form some portion of the 2-D turbulence component.
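The spectral steepening can be quantified by fitting separate power laws on either side of an assumed break frequency; a minimal sketch:

```python
import numpy as np

def spectral_slopes(freq, psd, f_break=0.3):
    """Fit power laws P(f) ~ f^q to the inertial and dissipation ranges,
    split at an assumed spacecraft-frame break frequency (a few tenths of
    a hertz at 1 AU, per the text). A steeper (more negative) slope above
    the break is the dissipation-range signature discussed above."""
    lo = (freq > 0) & (freq < f_break)
    hi = freq >= f_break
    q_inertial = np.polyfit(np.log10(freq[lo]), np.log10(psd[lo]), 1)[0]
    q_dissipation = np.polyfit(np.log10(freq[hi]), np.log10(psd[hi]), 1)[0]
    return q_inertial, q_dissipation
```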

Journal ArticleDOI
TL;DR: In this article, the authors provided formulae for estimating the number of years necessary to detect trends, along with the estimates of the impact of interventions on trend detection, and the uncertainty associated with these estimates is also explored.
Abstract: Detection of long-term, linear trends is affected by a number of factors, including the size of trend to be detected, the time span of available data, and the magnitude of variability and autocorrelation of the noise in the data. The number of years of data necessary to detect a trend is strongly dependent on, and increases with, the magnitude of variance (σN²) and autocorrelation coefficient (ϕ) of the noise. For a typical range of values of σN² and ϕ the number of years of data needed to detect a trend of 5%/decade can vary from ∼10 to >20 years, implying that in choosing sites to detect trends some locations are likely to be more efficient and cost-effective than others. Additionally, some environmental variables allow for an earlier detection of trends than other variables because of their low variability and autocorrelation. The detection of trends can be confounded when sudden changes occur in the data, such as when an instrument is changed or a volcano erupts. Sudden level shifts in data sets, whether due to artificial sources, such as changes in instrumentation or site location, or natural sources, such as volcanic eruptions or local changes to the environment, can strongly impact the number of years necessary to detect a given trend, increasing the number of years by as much as 50% or more. This paper provides formulae for estimating the number of years necessary to detect trends, along with the estimates of the impact of interventions on trend detection. The uncertainty associated with these estimates is also explored. The results presented are relevant for a variety of practical decisions in managing a monitoring station, such as whether to move an instrument, change monitoring protocols in the middle of a long-term monitoring program, or try to reduce uncertainty in the measurements by improved calibration techniques. The results are also useful for establishing reasonable expectations for trend detection and can be helpful in selecting sites and environmental variables for the detection of trends. An important implication of these results is that it will take several decades of high-quality data to detect the trends likely to occur in nature.
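The central approximation can be written in a few lines; the factor 3.3 corresponds (approximately) to a 90% probability of detecting the trend at the two-sigma significance level, and the exact derivation should be taken from the paper:

```python
def years_to_detect(omega, sigma_n, phi):
    """Approximate number of years of data needed to detect a linear trend
    omega (units per year) in noise with standard deviation sigma_n and
    lag-1 autocorrelation phi:

        n* = [ (3.3 * sigma_n / |omega|) * sqrt((1 + phi) / (1 - phi)) ]^(2/3)

    Larger noise variance or autocorrelation pushes n* up, as the abstract
    describes."""
    return ((3.3 * sigma_n / abs(omega))
            * ((1.0 + phi) / (1.0 - phi)) ** 0.5) ** (2.0 / 3.0)

# A 5%/decade trend (0.5%/yr) with sigma_n = 5% and phi = 0.5:
# years_to_detect(0.5, 5.0, 0.5) -> about 15 years
```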

Journal ArticleDOI
TL;DR: In this paper, a method of deriving large-scale convection maps based on all the available velocity data is described, which is used to determine a solution for the distribution of electrostatic potential, expressed as a series expansion in spherical harmonics.
Abstract: The HF radars of the Super Dual Auroral Radar Network (SuperDARN) provide measurements of the E × B drift of ionospheric plasma over extended regions of the high-latitude ionosphere. With the recent augmentation of the northern hemisphere component to six radars, a sizable fraction of the entire convection zone (approximately one-third) can be imaged nearly instantaneously (∼2 min). To date, the two-dimensional convection velocity has been mapped by combining line-of-sight velocity measurements obtained from pairs of radars within common-volume areas. We describe a new method of deriving large-scale convection maps based on all the available velocity data. The measurements are used to determine a solution for the distribution of electrostatic potential, Φ, expressed as a series expansion in spherical harmonics. The addition of data from a statistical model constrains the solution in regions of no data coverage. For low-order expansions the results provide a gross characterization of the global convection. We discuss the processing of the radar velocity data, the factors that condition the fitting, and the reliability of the results. We present examples of imaging that demonstrate the response of the global convection to variations in the interplanetary magnetic field (IMF). In the case of a sudden polarity change from northward to southward IMF, the convection is seen to reconfigure globally on very short (<6 min) timescales.
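Schematically, the fit is a weighted least-squares problem for the spherical-harmonic coefficients of Φ. The geometry that builds the design-matrix rows (gradients of each harmonic projected onto each radar beam through the E x B relation) is omitted here, and all names are illustrative:

```python
import numpy as np

def fit_potential(A_obs, v_obs, A_model, v_model, model_weight=0.1):
    """Least-squares skeleton of a map-potential fit.

    A_obs   : (n_obs, n_coeff) line-of-sight response of each range gate to
              a unit value of each potential coefficient
    v_obs   : (n_obs,) measured line-of-sight velocities
    A_model, v_model : same quantities sampled from the statistical model,
              included at low weight to constrain regions with no data
    Returns the potential expansion coefficients."""
    A = np.vstack([A_obs, model_weight * A_model])
    v = np.concatenate([v_obs, model_weight * v_model])
    coeffs, *_ = np.linalg.lstsq(A, v, rcond=None)
    return coeffs
```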

Journal ArticleDOI
TL;DR: The National Aeronautics and Space Administration (NASA) plans to launch the moderate resolution imaging spectroradiometer (MODIS) on the polar-orbiting Earth Observing System (EOS), providing morning and evening global observations in 1999 and afternoon and night observations in 2000, as discussed by the authors.
Abstract: The National Aeronautics and Space Administration (NASA) plans to launch the moderate resolution imaging spectroradiometer (MODIS) on the polar-orbiting Earth Observing System (EOS) platforms, providing morning and evening global observations in 1999 and afternoon and night observations in 2000. These four MODIS daily fire observations will advance global fire monitoring with special 1 km resolution fire channels at 4 and 11 μm, with high saturation of about 450 and 400 K, respectively. MODIS data will also be used to monitor burn scars, vegetation type and condition, smoke aerosols, water vapor, and clouds for overall monitoring of the fire process and its effects on ecosystems, the atmosphere, and the climate. The MODIS fire science team is preparing algorithms that use the thermal signature to separate the fire signal from the background signal. A database of active fire products will be generated and archived at a 1 km resolution and summarized on 10 km and 0.5° grids at daily, 8-day, and monthly intervals. It includes the fire occurrence and location, the rate of emission of thermal energy from the fire, and a rough estimate of the smoldering/flaming ratio. This information will be used in monitoring the spatial and temporal distribution of fires in different ecosystems, detecting changes in fire distribution and identifying new fire frontiers, wildfires, and changes in the frequency of the fires or their relative strength. We plan to combine the MODIS fire measurements with a detailed diurnal cycle of the fires from geostationary satellites. Sensitivity studies and analyses of aircraft and satellite data from the Yellowstone wildfire of 1988 and prescribed fires in the Smoke, Clouds, and Radiation (SCAR) aircraft field experiments are used to evaluate and validate the fire algorithms and to establish the relationship between the fire thermal properties, the rate of biomass consumption, and the emissions of aerosol and trace gases from fires.
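An illustrative threshold test in the spirit of the thermal-signature separation described above; the numeric thresholds are placeholders, not the MODIS values:

```python
def is_fire_pixel(t4, t11, t4_bg, dt_bg, t4_min=360.0, dt_min=10.0):
    """Flag a pixel as fire when its 4-um brightness temperature stands out
    both absolutely and against the 11-um channel and the local background.

    t4, t11 : 4-um and 11-um brightness temperatures (K)
    t4_bg   : mean 4-um background of neighboring non-fire pixels
    dt_bg   : mean (t4 - t11) of the background
    t4_min, dt_min : placeholder thresholds, not operational values"""
    return (t4 > t4_min) or (t4 - t11 > dt_bg + dt_min and t4 > t4_bg + dt_min)
```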

Journal ArticleDOI
TL;DR: In this article, the power laws that link a(p) and a(phi) to [chl] show striking similarities, and the spectral dependence of absorption by these nonalgal particles follows an exponential increase toward short wavelengths, with a weakly variable slope (0.011 +/- 0.0025 nm(-1)).
Abstract: Spectral absorption coefficients of total particulate matter, a_p(λ), were determined using the in vitro filter technique. The present analysis deals with a set of 1166 spectra, determined in various oceanic (case 1) waters, with field chl a concentrations ([chl]) spanning 3 orders of magnitude (0.02-25 mg m⁻³). As previously shown [Bricaud et al., 1995] for the absorption coefficients of living phytoplankton, a_φ(λ), the a_p(λ) coefficients also increase nonlinearly with [chl]. The relationships (power laws) that link a_p(λ) and a_φ(λ) to [chl] show striking similarities. Despite large fluctuations, the relative contribution of nonalgal particles to total absorption oscillates around an average value of 25-30% throughout the [chl] range. The spectral dependence of absorption by these nonalgal particles follows an exponential increase toward short wavelengths, with a weakly variable slope (0.011 ± 0.0025 nm⁻¹). The empirical relationships linking a_p(λ) to [chl] can be used in bio-optical models. This parameterization based on in vitro measurements leads to good agreement with a former modeling of the diffuse attenuation coefficient based on in situ measurements. This agreement is worth noting, as independent methods and data sets are compared. It is stressed that for a given [chl], the a_p(λ) coefficients show large residual variability around the regression lines (for instance, by a factor of 3 at 440 nm). The consequences of such variability, when predicting or interpreting the diffuse reflectance of the ocean, are examined, according to whether or not these variations in a_p are associated with concomitant variations in particle scattering. In most situations the deviations in a_p are not compensated by those in particle scattering, so the amplitude of reflectance is affected by these variations.
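The parameterization has the form a_p(λ) = A(λ) [chl]^E(λ); a sketch with placeholder coefficients (the published A and E tables should be used in practice):

```python
import numpy as np

def particulate_absorption(chl, A=0.05, E=0.65):
    """Power-law parameterization of the type fitted in this work,
        a_p(lambda) = A(lambda) * [chl] ** E(lambda).
    A and E are wavelength dependent; the defaults are illustrative values
    for the blue, not the published coefficients. E < 1 encodes the
    nonlinearity: absorption per unit chlorophyll decreases as [chl] rises."""
    return A * np.asarray(chl) ** E

# particulate_absorption([0.1, 1.0, 10.0]) spans the case 1 [chl] range
```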

Journal ArticleDOI
TL;DR: In this article, the authors show that simultaneous eTor modeling and calibration can be achieved by using triple collocations, in sire, ERS scatterometer, and Ibrecast model winds.
Abstract: Wind is a very important geophysical variable to accurately measure. However, a statistical phenomenon important for the validation or calibration of winds is the small dynamic range relative to the typical measurement uncertainty, i.e., the generally small signal-to-noise ratio. In such cases, pseudobiases may occur when standard validation or calibration methods are applied, such as regression or bin-average analyses. Moreover, nonlinear transformation of random error, for instance, between wind components and speed and direction, may give rise to substantial pseudobiases. In fact, validation or calibration can only be done properly when the full error characteristics of the data are known. In practice, the problem is that prior knowledge on the error characteristics is seldom available. In this paper we show that simultaneous error modeling and calibration can be achieved by using triple collocations. This is a fundamental finding that is generally relevant to all geophysical validation. To illustrate the statistical analysis using triple collocations, in situ, ERS scatterometer, and forecast model winds are used. Wind component error analysis is shown to be more convenient than wind speed and direction error analysis. The anemometer winds from the National Oceanic and Atmospheric Administration (NOAA) buoys are shown to have the largest error variance, followed by the scatterometer, while the National Centers for Environmental Prediction (NCEP) forecast model winds proved the most accurate. When using the in situ winds as a reference, the scatterometer wind components are biased low by ~4%, and the NCEP forecast model winds are found to be biased high by ~6%. After applying a higher-order calibration procedure, an improved ERS scatterometer wind retrieval is proposed. The systematic and random error analysis is relevant for the use of near-surface winds to compute fluxes of momentum, humidity, or heat or to drive ocean wave or circulation models.
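The core of the triple-collocation error model is that, for three collocated systems with mutually uncorrelated errors, cross-covariances isolate each system's error variance. A minimal sketch; the simultaneous calibration solved in the paper is omitted, so the inputs are assumed already cross-calibrated:

```python
import numpy as np

def triple_collocation_errors(x1, x2, x3):
    """Random error variances of three collocated measurement systems
    (e.g. buoy, scatterometer, and forecast-model wind components), under
    the assumptions of a common signal and mutually uncorrelated errors."""
    a1, a2, a3 = (np.asarray(x) - np.mean(x) for x in (x1, x2, x3))
    e1 = np.mean((a1 - a2) * (a1 - a3))   # error variance of system 1
    e2 = np.mean((a2 - a1) * (a2 - a3))   # error variance of system 2
    e3 = np.mean((a3 - a1) * (a3 - a2))   # error variance of system 3
    return e1, e2, e3
```

Working in wind components rather than speed and direction, as the abstract recommends, avoids the nonlinear-transformation pseudobiases discussed above.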

Journal ArticleDOI
TL;DR: A time-averaged inventory of subaerial volcanic sulfur (S) emissions was compiled primarily for the use of global S and sulfate modelers as discussed by the authors, which relies upon the 25-year history of S, primarily sulfur dioxide (SO2), measurements at volcanoes.
Abstract: A time-averaged inventory of subaerial volcanic sulfur (S) emissions was compiled primarily for the use of global S and sulfate modelers. This inventory relies upon the 25-year history of S, primarily sulfur dioxide (SO2), measurements at volcanoes. Subaerial volcanic SO2 emissions indicate a 13 Tg/a SO2 time-averaged flux, based upon an early 1970s to 1997 time frame. When considering other S species present in volcanic emissions, a time-averaged inventory of subaerial volcanic S fluxes is 10.4 Tg/a S. These time-averaged fluxes are conservative minimum fluxes since they rely upon actual measurements. The temporal, spatial, and chemical inhomogeneities inherent to this system gave higher S fluxes in specific years. Despite its relatively small proportion in the atmospheric S cycle, the temporal and spatial distribution of volcanic S emissions provide disproportionate effects at local, regional, and global scales. This work contributes to the Global Emissions Inventory Activity.

Journal ArticleDOI
TL;DR: In this article, the authors investigated how the Kobe earthquake transferred stress to nearby faults, altering their proximity to failure and thus changing earthquake probabilities, and found that relative to the pre-Kobe seismicity, Kobe aftershocks were concentrated in regions of calculated Coulomb stress increase and less common in areas of stress decrease.
Abstract: The Kobe earthquake struck at the edge of the densely populated Osaka-Kyoto corridor in southwest Japan. We investigate how the earthquake transferred stress to nearby faults, altering their proximity to failure and thus changing earthquake probabilities. We find that relative to the pre-Kobe seismicity, Kobe aftershocks were concentrated in regions of calculated Coulomb stress increase and less common in regions of stress decrease. We quantify this relationship by forming the spatial correlation between the seismicity rate change and the Coulomb stress change. The correlation is significant for stress changes greater than 0.2–1.0 bars (0.02–0.1 MPa), and the nonlinear dependence of seismicity rate change on stress change is compatible with a state- and rate-dependent formulation for earthquake occurrence. We extend this analysis to future mainshocks by resolving the stress changes on major faults within 100 km of Kobe and calculating the change in probability caused by these stress changes. Transient effects of the stress changes are incorporated by the state-dependent constitutive relation, which amplifies the permanent stress changes during the aftershock period. Earthquake probability framed in this manner is highly time-dependent, much more so than is assumed in current practice. Because the probabilities depend on several poorly known parameters of the major faults, we estimate uncertainties of the probabilities by Monte Carlo simulation. This enables us to include uncertainties on the elapsed time since the last earthquake, the repeat time and its variability, and the period of aftershock decay. We estimate that a calculated 3-bar (0.3-MPa) stress increase on the eastern section of the Arima-Takatsuki Tectonic Line (ATTL) near Kyoto causes a fivefold increase in the 30-year probability of a subsequent large earthquake near Kyoto; a 2-bar (0.2-MPa) stress decrease on the western section of the ATTL results in a reduction in probability by a factor of 140 to 2000. The probability of a Mw = 6.9 earthquake within 50 km of Osaka during 1997–2007 is estimated to have risen from 5–6% before the Kobe earthquake to 7–11% afterward; during 1997–2027, it is estimated to have risen from 14–16% before Kobe to 16–22%.
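A stripped-down sketch of how a permanent stress step enters a renewal-model probability as a clock advance (stress change divided by the tectonic stressing rate). The lognormal recurrence model is one common choice, and the transient rate-and-state amplification and Monte Carlo parameter sampling used in the paper are omitted:

```python
import numpy as np
from scipy import stats

def conditional_prob(t_elapsed, horizon, mean_rt, cov, clock_advance=0.0):
    """Conditional probability of rupture in the next `horizon` years, given
    `t_elapsed` years since the last event, for a lognormal recurrence-time
    model with mean mean_rt and coefficient of variation cov. A permanent
    Coulomb stress step d_tau enters as clock_advance = d_tau / stressing_rate
    added to the elapsed time."""
    sigma = np.sqrt(np.log(1.0 + cov ** 2))
    mu = np.log(mean_rt) - 0.5 * sigma ** 2
    rt = stats.lognorm(s=sigma, scale=np.exp(mu))
    t = t_elapsed + clock_advance
    denom = rt.sf(t)                     # probability of having survived to t
    return (rt.cdf(t + horizon) - rt.cdf(t)) / denom if denom > 0 else 1.0
```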

Journal ArticleDOI
TL;DR: In this paper, the structure and dynamics of magnetic reconnection were studied in the premidnight sector of the magnetotail at 20-30 RE for substorm onsets in Geotail observations.
Abstract: Fast tailward ion flows with strongly southward magnetic fields are frequently observed near the neutral sheet in the premidnight sector of the magnetotail at 20–30 RE for substorm onsets in Geotail observations. These fast tailward flows are occasionally accompanied by a few keV electrons. With these events, we study the structure and dynamics of magnetic reconnection. The plasma sheet near the magnetic reconnection site can be divided into three regions: the neutral sheet region (near the neutral sheet, where the absolute magnitude of Bx is less than 10 nT), the boundary region (near the plasma sheet-lobe boundary), and the off-equatorial plasma sheet (the rest). In the neutral sheet region, plasmas are transported with strong convection, and accelerated electrons show nearly isotropic distributions. In the off-equatorial plasma sheet, two ion components coexist: ions being accelerated and heated during convection toward the neutral sheet and ions flowing at a high speed almost along the magnetic field. In this region, highly accelerated electrons are observed. Although electron distributions are basically isotropic, high-energy (higher than 10 keV) electrons show streaming away from the reconnection site along the magnetic field line. In the boundary region, ions also show two components: ions with convection toward the neutral sheet and field-aligned ions flowing out of the reconnection region, although acceleration and heating during convection are weak. In the boundary region, high-energy (10 keV) electrons stream away, while medium-energy (3 keV) electrons stream into the reconnection site. Magnetic reconnection usually starts in the premidnight sector of the magnetotail between XGSM = −20 RE and XGSM = −30 RE prior to an onset signature identified with Pi 2 pulsation on the ground. Magnetic reconnection proceeds on a timescale of 10 min. After magnetic reconnection ends, adjacent plasmas are transported into the postreconnection site, and plasmas can become stationary even in the expansion phase.

Journal ArticleDOI
TL;DR: In this paper, a global three-dimensional model for tropospheric O3-NOx-hydrocarbon chemistry with synoptic-scale resolution is presented, which includes state-of-the-art inventories of anthropogenic emissions and process-based formulations of natural emissions and deposition that are tied to the model meteorology.
Abstract: We describe a global three-dimensional model for tropospheric O3-NOx-hydrocarbon chemistry with synoptic-scale resolution. A suite of 15 chemical tracers, including O3, NOx, PAN, HNO3, CO, H2O2, and various hydrocarbons, is simulated in the model. For computational expediency, chemical production and loss of tracers are parameterized as polynomial functions to fit the results of a detailed O3-NOx-hydrocarbon mechanism. The model includes state-of-the-art inventories of anthropogenic emissions and process-based formulations of natural emissions and deposition that are tied to the model meteorology. Improvements are made to existing schemes for computing biogenic emissions of isoprene and NO. Our best estimates of global emissions include, among others, 42 Tg N yr−1 for NOx (21 Tg N yr−1 from fossil fuel combustion, 12 Tg N yr−1 from biomass burning, 6 Tg N yr−1 from soils, and 3 Tg N yr−1 from lightning) and 37 Tg C yr−1 for acetone (1 Tg C yr−1 from industry, 9 Tg C yr−1 from biomass burning, 15 Tg C yr−1 from vegetation, and 12 Tg C yr−1 from oxidation of propane and higher alkanes).
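
The parameterization step, fitting polynomials to the output of a detailed chemical mechanism, can be illustrated in one dimension. In the Python sketch below, a cubic polynomial in log10(NOx) is fitted to a synthetic production curve; the functional form, concentration range, and noise level are invented for illustration, and the actual model fits multivariate polynomials in many variables (precursor concentrations, photolysis conditions, and so on).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "detailed mechanism" output: an O3 production rate as a
# function of NOx alone (a hypothetical stand-in for the full mechanism).
nox = np.logspace(-2, 1, 200)                       # ppb, assumed range
p_o3 = 5.0 * nox / (1.0 + 0.8 * nox**1.5) + 0.02 * rng.normal(size=nox.size)

# Fit a cubic polynomial in log10(NOx) to the mechanism output: the same
# kind of parameterization the abstract describes, reduced to 1-D.
x = np.log10(nox)
coeffs = np.polyfit(x, p_o3, deg=3)
p_fit = np.polyval(coeffs, x)

rms_err = np.sqrt(np.mean((p_fit - p_o3) ** 2))
print(f"cubic-fit coefficients: {coeffs}")
print(f"rms fit error: {rms_err:.3f} (production-rate units)")
```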

Journal ArticleDOI
TL;DR: In this article, a comparison of two tomographic methods for determining 3D velocity structure from first-arrival travel time data is presented: backprojection, in which travel time residuals are distributed along their ray paths independently of all other rays, and regularized inversion, in which a combination of data misfit and model roughness is minimized.
Abstract: This paper presents a comparison of two tomographic methods for determining three-dimensional (3-D) velocity structure from first-arrival travel time data. The first method is backprojection in which travel time residuals are distributed along their ray paths independently of all other rays. The second method is regularized inversion in which a combination of data misfit and model roughness is minimized to provide the smoothest model appropriate for the data errors. Both methods are nonlinear in that a starting model is required and new ray paths are calculated at each iteration. Travel times are calculated using an efficient implementation of an existing method for solving the eikonal equation by finite differencing. Both inverse methods are applied to 3-D ocean bottom seismometer (OBS) data collected in 1993 over the Faeroe Basin, consisting of 53,479 travel times recorded at 29 OBSs. This is one of the most densely spaced, large-scale, 3-D seismic refraction experiments to date. Different starting models and values for the free parameters of each tomographic method are tested. A new form of backprojection that converges more rapidly than similar methods compares favorably with regularized inversion, but the latter method provides a simpler model for little additional computational expense when applied to the Faeroe Basin data. Bounds on two model features are assessed using regularized inversion with combined smoothness and flatness constraints. An inversion of synthetic data corresponding to 100% data recovery from the real experiment shows a marked improvement in lateral resolution at greater depths and demonstrates the potential of currently feasible 3-D refraction experiments to provide well-resolved, long-wavelength velocity models. The similarity of the final models derived from the two tomographic methods suggests that the results from the new form of backprojection can be relied on when limited computational resources rule out regularized inversion.
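
The regularized inversion described here minimizes a weighted sum of data misfit and model roughness. The Python sketch below does the same for a toy linear tomography problem by stacking the data equations and a scaled second-difference (roughness) operator into a single least-squares system; the geometry, sizes, noise level, and trade-off parameter are all illustrative, not values from the Faeroe Basin experiment.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linearized tomography: d = G m + noise, where m is a 1-D slowness
# perturbation and each row of G is a "ray" averaging 10 adjacent cells.
n_model, n_data = 60, 40
G = np.zeros((n_data, n_model))
for i in range(n_data):
    start = rng.integers(0, n_model - 10)
    G[i, start:start + 10] = 1.0
m_true = np.sin(np.linspace(0, 3 * np.pi, n_model))
d = G @ m_true + 0.1 * rng.normal(size=n_data)

# Regularized inversion: minimize |d - G m|^2 + lam^2 |L m|^2, with L a
# second-difference (roughness) operator, as in the abstract's second method.
L = np.diff(np.eye(n_model), n=2, axis=0)
lam = 2.0                                  # trade-off parameter (assumed)
A = np.vstack([G, lam * L])
b = np.concatenate([d, np.zeros(L.shape[0])])
m_est, *_ = np.linalg.lstsq(A, b, rcond=None)

print(f"model rms error: {np.sqrt(np.mean((m_est - m_true)**2)):.3f}")
```

Increasing lam smooths the model at the cost of data fit; scanning lam and picking the smoothest model consistent with the data errors is the selection rule the abstract describes.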

Journal ArticleDOI
TL;DR: A patch of sulfur hexafluoride was released in May 1992 in the eastern North Atlantic on an isopycnal surface near 300 m depth and was surveyed over a period of 30 months as it dispersed across and along isopycnal surfaces as discussed by the authors.
Abstract: A patch of sulfur hexafluoride was released in May 1992 in the eastern North Atlantic on an isopycnal surface near 300 m depth and was surveyed over a period of 30 months as it dispersed across and along isopycnal surfaces. The diapycnal eddy diffusivity K estimated for the first 6 months was 0.12±0.02 cm2/s, while for the subsequent 24 months it was 0.17±0.02 cm2/s. The vertical tracer distribution remained very close to Gaussian for the full 30 months, as the root mean square (rms) dispersion grew from 5 to 50 m. Lateral dispersion was measured on several scales. The growth of the rms width of the tracer streaks from less than 100 m to approximately 300 m within 2 weeks implies an isopycnal diffusivity of 0.07 m2/s at scales of 0.1 to 1 km, larger than expected from the interaction between vertical shear of the internal waves and diapycnal mixing. Teasing of the overall patch, initially about 25 km across, into streaks with an overall length of 1800 km within 6 months supports predictions of exponential growth by the mesoscale strain field at a rate of 3±0.5 × 10−7 s−1. The rms width of these streaks, estimated as 3 km and maintained in the face of the streak growth, indicates an isopycnal diffusivity of 2 m2/s at scales of 1 to 10 km, much greater than expected from internal wave shear dispersion. The patch was painted in, albeit streakily, by 12 months, confirming expectations from analytical and numerical models. Homogenization of the patch continued during the subsequent 18 months, while the patch continued to spread with an effective isopycnal eddy diffusivity on the order of 1000 m2/s, acting at scales of 30 to 300 km.
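
The quoted diffusivities follow from standard relations: diffusive spreading grows the Gaussian variance as sigma^2(t) = sigma0^2 + 2Kt, and mesoscale strain stretches the patch as L(t) = L0 exp(gamma t). The short Python check below recovers the order of magnitude of the reported diapycnal diffusivity and strain rate from the numbers in the abstract; the only assumption introduced is the mean month length used to convert to seconds.

```python
import numpy as np

# Diapycnal K from the growth of the Gaussian rms dispersion (5 m -> 50 m
# over 30 months): sigma^2(t) = sigma0^2 + 2*K*t.
month = 30.44 * 86400.0                      # seconds (assumed mean month)
sigma0, sigma1 = 5.0, 50.0                   # m, rms dispersion
dt = 30 * month
K = (sigma1**2 - sigma0**2) / (2.0 * dt)     # m^2/s
print(f"diapycnal K ~ {K * 1e4:.2f} cm2/s")  # ~0.16, cf. 0.12-0.17 cm2/s

# Mesoscale strain rate from exponential streak growth (25 km patch
# teased into 1800 km of streaks within 6 months): gamma = ln(L/L0) / t.
L0, L1 = 25.0, 1800.0                        # km
gamma = np.log(L1 / L0) / (6 * month)        # 1/s
print(f"strain rate gamma ~ {gamma:.1e} s-1")  # ~2.7e-7, cf. 3e-7 s-1
```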

Journal ArticleDOI
TL;DR: In this article, the authors unified the record by using the same measurement procedure and calibration scale for all samples and by ensuring high age resolution and accuracy of the ice core and firn air, and calculated an average total CH4 source of 250 Tg yr−1 for 1000-1800 A.D.
Abstract: Atmospheric methane mixing ratios from 1000 A.D. to present are measured in three Antarctic ice cores, two Greenland ice cores, the Antarctic firn layer, and archived air from Tasmania, Australia. The record is unified by using the same measurement procedure and calibration scale for all samples and by ensuring high age resolution and accuracy of the ice core and firn air. In this way, methane mixing ratios, growth rates, and interpolar differences are accurately determined. From 1000 to 1800 A.D. the global mean methane mixing ratio averaged 695 ppb and varied by about 40 ppb, contemporaneous with climatic variations. Interpolar (N-S) differences varied between 24 and 58 ppb. The industrial period is marked by high methane growth rates from 1945 to 1990, peaking at about 17 ppb yr−1 in 1981 and decreasing significantly since then. We calculate an average total methane source of 250 Tg yr−1 for 1000–1800 A.D., reaching near stabilization at about 560 Tg yr−1 in the 1980s and 1990s. The isotopic ratio, δ13CH4, measured in the archived air and firn air, has increased since 1978, but the rate of increase slowed in the mid-1980s. The combined CH4 and δ13CH4 trends support the stabilization of the total CH4 source.
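
A quick consistency check on the preindustrial source estimate: with the mixing ratio roughly stable from 1000 to 1800 A.D., the total source should approximately balance the atmospheric burden divided by the CH4 lifetime. The Python sketch below uses an assumed ppb-to-Tg conversion factor and an assumed lifetime, neither of which is given in the abstract, and lands close to the reported 250 Tg yr−1.

```python
# Steady-state budget: source ~ burden / lifetime when the mixing ratio
# is flat. Both constants below are assumptions, not values from the paper.
ppb_to_Tg = 2.78     # Tg CH4 per ppb of global mean mixing ratio (assumed)
lifetime_yr = 7.8    # atmospheric CH4 lifetime in years (assumed)

burden = 695 * ppb_to_Tg          # Tg, from the 1000-1800 A.D. mean of 695 ppb
source = burden / lifetime_yr     # Tg/yr
print(f"implied preindustrial source: {source:.0f} Tg/yr (abstract: ~250)")
```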