Journal ArticleDOI

Very high resolution interpolated climate surfaces for global land areas.

TL;DR: In this paper, the authors developed interpolated climate surfaces for global land areas (excluding Antarctica) at a spatial resolution of 30 arc s (often referred to as 1-km spatial resolution).
Abstract: We developed interpolated climate surfaces for global land areas (excluding Antarctica) at a spatial resolution of 30 arc s (often referred to as 1-km spatial resolution). The climate elements considered were monthly precipitation and mean, minimum, and maximum temperature. Input data were gathered from a variety of sources and, where possible, were restricted to records from the 1950–2000 period. We used the thin-plate smoothing spline algorithm implemented in the ANUSPLIN package for interpolation, using latitude, longitude, and elevation as independent variables. We quantified uncertainty arising from the input data and the interpolation by mapping weather station density, elevation bias in the weather stations, and elevation variation within grid cells and through data partitioning and cross validation. Elevation bias tended to be negative (stations lower than expected) at high latitudes but positive in the tropics. Uncertainty is highest in mountainous and in poorly sampled areas. Data partitioning showed high uncertainty of the surfaces on isolated islands, e.g. in the Pacific. Aggregating the elevation and climate data to 10 arc min resolution showed an enormous variation within grid cells, illustrating the value of high-resolution surfaces. A comparison with an existing data set at 10 arc min resolution showed overall agreement, but with significant variation in some regions. A comparison with two high-resolution data sets for the United States also identified areas with large local differences, particularly in mountainous areas. Compared to previous global climatologies, ours has the following advantages: the data are at a higher spatial resolution (400 times greater or more); more weather station records were used; improved elevation data were used; and more information about spatial patterns of uncertainty in the data is available. 
Owing to the overall low density of available climate stations, our surfaces do not capture all of the variation that may occur at a resolution of 1 km, particularly for precipitation in mountainous areas. In future work, such variation might be captured through knowledge-based methods and the inclusion of additional covariates, particularly layers obtained through remote sensing. Copyright © 2005 Royal Meteorological Society.
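The interpolation approach above (thin-plate smoothing splines in ANUSPLIN, with latitude, longitude, and elevation as independent variables) can be illustrated with a minimal sketch. This is not ANUSPLIN: it fits an exact 2-D thin-plate spline in longitude and latitude only, omits elevation and the smoothing term, and uses hypothetical station values.

```python
import numpy as np

def fit_tps(xy, z):
    """Fit an exact 2-D thin-plate spline z = f(x, y) to scattered points.

    Solves the standard augmented linear system
      [K  P] [w]   [z]
      [P' 0] [a] = [0]
    with kernel phi(r) = r^2 * log(r) and a linear polynomial part.
    """
    n = len(xy)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    with np.errstate(divide="ignore", invalid="ignore"):
        K = np.where(d > 0, d**2 * np.log(d, where=d > 0), 0.0)
    P = np.hstack([np.ones((n, 1)), xy])              # [1, x, y]
    A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    rhs = np.concatenate([z, np.zeros(3)])
    sol = np.linalg.solve(A, rhs)
    return sol[:n], sol[n:]                           # kernel weights, poly coeffs

def tps_predict(xy_fit, w, a, xy_new):
    d = np.linalg.norm(xy_new[:, None, :] - xy_fit[None, :, :], axis=-1)
    with np.errstate(divide="ignore", invalid="ignore"):
        K = np.where(d > 0, d**2 * np.log(d, where=d > 0), 0.0)
    return K @ w + a[0] + xy_new @ a[1:]

# Hypothetical stations: (lon, lat) positions and mean temperatures (deg C)
stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.3]])
temps = np.array([12.0, 14.0, 11.0, 13.0, 12.5])
w, a = fit_tps(stations, temps)
recovered = tps_predict(stations, w, a, stations)     # exact at the stations
```

ANUSPLIN additionally chooses a smoothing parameter by generalized cross validation, which relaxes the exact-interpolation constraint used here.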


Citations
Journal ArticleDOI
TL;DR: A novel jackknife validation approach is developed and tested to assess the ability to predict species occurrence when fewer than 25 occurrence records are available, since the minimum sample sizes required to yield useful predictions remain difficult to determine.
Abstract: Aim: Techniques that predict species potential distributions by combining observed occurrence records with environmental variables show much potential for application across a range of biogeographical analyses. Some of the most promising applications relate to species for which occurrence records are scarce, due to cryptic habits, locally restricted distributions or low sampling effort. However, the minimum sample sizes required to yield useful predictions remain difficult to determine. Here we developed and tested a novel jackknife validation approach to assess the ability to predict species occurrence when fewer than 25 occurrence records are available. Location: Madagascar. Methods: Models were developed and evaluated for 13 species of secretive leaf-tailed geckos (Uroplatus spp.) that are endemic to Madagascar, for which available sample sizes range from 4 to 23 occurrence localities (at 1 km2 grid resolution). Predictions were based on 20 environmental data layers and were generated using two modelling approaches: a method based on the principle of maximum entropy (Maxent) and a genetic algorithm (GARP). Results: We found high success rates and statistical significance in jackknife tests with sample sizes as low as five when the Maxent model was applied. Results for GARP at very low sample sizes (less than c. 10) were less good. When sample sizes were experimentally reduced for those species with the most records, variability among predictions using different combinations of localities demonstrated that models were greatly influenced by exactly which observations were included. Main conclusions: We emphasize that models developed using this approach with small sample sizes should be interpreted as identifying regions that have similar environmental conditions to where the species is known to occur, and not as predicting actual limits to the range of a species. 
The jackknife validation approach proposed here enables assessment of the predictive ability of models built using very small sample sizes, although use of this test with larger sample sizes may lead to overoptimistic estimates of predictive power. Our analyses demonstrate that geographical predictions developed from small numbers of occurrence records may be of great value, for example in targeting field surveys to accelerate the discovery of unknown populations and species. © 2007 The Authors.
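The jackknife procedure described above can be sketched as a leave-one-out loop. In the actual study the withheld record is scored by Maxent or GARP and assessed with a significance test; here a toy min-max environmental envelope stands in for the model, and the occurrence records are invented.

```python
import numpy as np

def jackknife_success_rate(env, inside_model):
    """Leave-one-out jackknife: for each occurrence record, train on the
    remaining n-1 records and check whether the withheld record is
    predicted present. Returns the fraction of successful predictions."""
    n = len(env)
    successes = 0
    for i in range(n):
        train = np.delete(env, i, axis=0)
        if inside_model(train, env[i]):
            successes += 1
    return successes / n

def envelope_model(train, point):
    """Toy stand-in for a real SDM: predict 'present' when the withheld
    point falls inside the min-max environmental envelope of the training
    records (the study itself used Maxent and GARP)."""
    return bool(np.all(point >= train.min(axis=0)) and
                np.all(point <= train.max(axis=0)))

# Hypothetical occurrence records with two environmental variables
# (mean temperature, annual precipitation); each extreme value occurs
# twice, so every withheld record stays inside the remaining envelope.
records = np.array([[20, 100], [20, 100], [22, 110],
                    [22, 110], [21, 105], [21, 105]], dtype=float)
rate = jackknife_success_rate(records, envelope_model)
```

With real data and a real model, the success rate is compared against what random prediction would achieve, which is where the significance test in the paper comes in.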

2,647 citations


Cites methods from "Very high resolution interpolated climate surfaces for global land areas"

  • ...FEWS precipitation data were considered advantageous over estimates derived by interpolation from weather station records (e.g. Hijmans et al., 2005), since merging data from multiple sources has been shown to reduce bias and random error significantly compared to individual precipitation data...

  • ...Eleven temperature-derived variables were extracted from the WorldClim data base (Hijmans et al., 2005; http://www.worldclim.org/), which is a set of global climate layers generated through interpolation of climate data from weather stations on a 30 arc s grid (c. 1 km2 resolution)....

Journal ArticleDOI
TL;DR: In this paper, the authors used the PRISM (Parameter-elevation relationships on independent slopes model) interpolation method to develop data sets that reflected, as closely as possible, the current state of knowledge of spatial climate patterns in the United States.
Abstract: Spatial climate data sets of 1971–2000 mean monthly precipitation and minimum and maximum temperature were developed for the conterminous United States. These 30-arcsec (∼800-m) grids are the official spatial climate data sets of the US Department of Agriculture. The PRISM (Parameter-elevation Relationships on Independent Slopes Model) interpolation method was used to develop data sets that reflected, as closely as possible, the current state of knowledge of spatial climate patterns in the United States. PRISM calculates a climate–elevation regression for each digital elevation model (DEM) grid cell, and stations entering the regression are assigned weights based primarily on the physiographic similarity of the station to the grid cell. Factors considered are location, elevation, coastal proximity, topographic facet orientation, vertical atmospheric layer, topographic position, and orographic effectiveness of the terrain. Surface stations used in the analysis numbered nearly 13 000 for precipitation and 10 000 for temperature. Station data were spatially quality controlled, and short-period-of-record averages adjusted to better reflect the 1971–2000 period. PRISM interpolation uncertainties were estimated with cross-validation (C-V) mean absolute error (MAE) and the 70% prediction interval of the climate–elevation regression function. The two measures were not well correlated at the point level, but were similar when averaged over large regions. The PRISM data set was compared with the WorldClim and Daymet spatial climate data sets. The comparison demonstrated that using a relatively dense station data set and the physiographically sensitive PRISM interpolation process resulted in substantially improved climate grids over those of WorldClim and Daymet. The improvement varied, however, depending on the complexity of the region. Mountainous and coastal areas of the western United States, characterized by sparse data coverage, large elevation gradients, rain shadows, inversions, cold air drainage, and coastal effects, showed the greatest improvement. The PRISM data set benefited from a peer review procedure that incorporated local knowledge and data into the development process. Copyright © 2008 Royal Meteorological Society.
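PRISM's core step, a weighted climate–elevation regression per grid cell, can be sketched as weighted least squares. The station weights below are invented placeholders for PRISM's physiographic-similarity weighting; the elevations and temperatures are synthetic, following an exact lapse rate so the fit is checkable.

```python
import numpy as np

def prism_like_regression(elev, temp, weights):
    """Weighted least-squares fit of temperature on elevation,
    T = b0 + b1 * z, with per-station weights (in PRISM the weights
    encode physiographic similarity: distance, elevation difference,
    coastal proximity, facet orientation, atmospheric layer, etc.)."""
    X = np.column_stack([np.ones_like(elev), elev])
    sw = np.sqrt(weights)                      # weight rows of the system
    coef, *_ = np.linalg.lstsq(X * sw[:, None], temp * sw, rcond=None)
    return coef                                # [intercept, lapse-rate slope]

# Hypothetical stations: elevations (m) and temperatures following an
# exact -6.5 degC/km lapse rate, so the fit should recover b1 = -0.0065.
elev = np.array([100.0, 450.0, 900.0, 1500.0, 2200.0])
temp = 25.0 - 0.0065 * elev
weights = np.array([1.0, 0.8, 0.9, 0.5, 0.7])  # assumed similarity weights
b0, b1 = prism_like_regression(elev, temp, weights)
```

In PRISM this regression is recomputed for every DEM cell with cell-specific weights, which is what lets the fitted lapse rate vary across terrain.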

2,447 citations

Journal ArticleDOI
TL;DR: New global maps of the Köppen-Geiger climate classification at an unprecedented 1-km resolution for the present-day and for projected future conditions under climate change are presented, providing valuable indications of the reliability of the classifications.
Abstract: We present new global maps of the Köppen-Geiger climate classification at an unprecedented 1-km resolution for the present-day (1980–2016) and for projected future conditions (2071–2100) under climate change. The present-day map is derived from an ensemble of four high-resolution, topographically-corrected climatic maps. The future map is derived from an ensemble of 32 climate model projections (scenario RCP8.5), by superimposing the projected climate change anomaly on the baseline high-resolution climatic maps. For both time periods we calculate confidence levels from the ensemble spread, providing valuable indications of the reliability of the classifications. The new maps exhibit a higher classification accuracy and substantially more detail than previous maps, particularly in regions with sharp spatial or elevation gradients. We anticipate the new maps will be useful for numerous applications, including species and vegetation distribution modeling. The new maps including the associated confidence maps are freely available via www.gloh2o.org/koppen.
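The Köppen-Geiger major classes turn on simple monthly-temperature thresholds, which a short sketch can show. This covers only the A/C/D/E split from temperature and deliberately omits the arid (B) classes, which require precipitation and an aridity threshold; note also that threshold conventions vary between schemes (e.g. 0 °C vs −3 °C for the C/D boundary — the 0 °C convention is assumed here).

```python
def koppen_major_class(monthly_temp_c):
    """Assign the Koppen-Geiger *major* class from 12 monthly mean
    temperatures (deg C). Simplified: the arid (B) test is omitted,
    and the C/D boundary uses the 0 degC convention."""
    t_min, t_max = min(monthly_temp_c), max(monthly_temp_c)
    if t_max < 10:
        return "E"   # polar: no month reaches 10 degC
    if t_min >= 18:
        return "A"   # tropical: every month at least 18 degC
    if t_min > 0:
        return "C"   # temperate: coldest month above freezing
    return "D"       # continental: cold winters, warm summers

tropical = koppen_major_class([25] * 12)
polar = koppen_major_class([-20 + i for i in range(12)])   # max -9 degC
```

A full classifier adds the B test before A/C/D/E and then subdivides each class by precipitation seasonality and summer warmth.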

2,434 citations

Journal ArticleDOI
TL;DR: A detailed explanation of how MaxEnt works and a prospectus on modeling options are provided to enable users to make informed decisions when preparing data, choosing settings, and interpreting output, highlighting the need for biologically motivated modeling decisions.
Abstract: The MaxEnt software package is one of the most popular tools for species distribution and environmental niche modeling, with over 1000 published applications since 2006. Its popularity is likely for two reasons: 1) MaxEnt typically outperforms other methods based on predictive accuracy and 2) the software is particularly easy to use. MaxEnt users must make a number of decisions about how they should select their input data and choose from a wide variety of settings in the software package to build models from these data. The underlying basis for making these decisions is unclear in many studies, and default settings are apparently chosen, even though alternative settings are often more appropriate. In this paper, we provide a detailed explanation of how MaxEnt works and a prospectus on modeling options to enable users to make informed decisions when preparing data, choosing settings and interpreting output. We explain how the choice of background samples reflects prior assumptions, how nonlinear functions of environmental variables (features) are created and selected, how to account for environmentally biased sampling, the interpretation of the various types of model output and the challenges for model evaluation. We demonstrate MaxEnt’s calculations using both simplified simulated data and occurrence data from South Africa on species of the flowering plant family Proteaceae. Throughout, we show how MaxEnt’s outputs vary in response to different settings to highlight the need for making biologically motivated modeling decisions.
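The core calculation the paper explains — fitting a maximum-entropy (Gibbs) distribution over background cells whose feature expectations match those at the presence records — can be sketched with plain gradient ascent. This is a deliberately tiny, hypothetical example; real Maxent adds feature classes (hinge, quadratic, product), L1 regularization, and much more.

```python
import numpy as np

def fit_maxent(features, presence_idx, lr=0.1, iters=5000):
    """Minimal maximum-entropy fit over a finite set of background cells:
    find weights lam so that p(cell) proportional to exp(features[cell] @ lam)
    reproduces the mean feature values observed at the presence records."""
    target = features[presence_idx].mean(axis=0)   # empirical constraint
    lam = np.zeros(features.shape[1])
    for _ in range(iters):
        logits = features @ lam
        p = np.exp(logits - logits.max())          # stabilized softmax
        p /= p.sum()
        lam += lr * (target - features.T @ p)      # ascend the log-likelihood
    return lam, p

# Tiny hypothetical landscape: three background cells with one
# environmental feature; the species was recorded in cells 1 and 2,
# so the fitted distribution must have mean feature value 1.5.
features = np.array([[0.0], [1.0], [2.0]])
lam, p = fit_maxent(features, presence_idx=[1, 2])
model_mean = float(features.T @ p)
```

The choice of background cells fixes the reference distribution the model is fit against, which is why the paper stresses that background selection encodes prior assumptions.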

2,370 citations

Journal ArticleDOI
01 Jun 2018, Science
TL;DR: Cumulatively, the findings support an approach where producers monitor their own impacts, flexibly meet environmental targets by choosing from multiple practices, and communicate their impacts to consumers.
Abstract: Food’s environmental impacts are created by millions of diverse producers. To identify solutions that are effective under this heterogeneity, we consolidated data covering five environmental indicators; 38,700 farms; and 1600 processors, packaging types, and retailers. Impact can vary 50-fold among producers of the same product, creating substantial mitigation opportunities. However, mitigation is complicated by trade-offs, multiple ways for producers to achieve low impacts, and interactions throughout the supply chain. Producers have limits on how far they can reduce impacts. Most strikingly, impacts of the lowest-impact animal products typically exceed those of vegetable substitutes, providing new evidence for the importance of dietary change. Cumulatively, our findings support an approach where producers monitor their own impacts, flexibly meet environmental targets by choosing from multiple practices, and communicate their impacts to consumers.

2,353 citations

References
Journal ArticleDOI
TL;DR: In this paper, a database of monthly climate observations from meteorological stations is constructed and checked for inhomogeneities in the station records using an automated method that refines previous methods by using incomplete and partially overlapping records and by detecting inhomogeneities with opposite signs in different seasons.
Abstract: A database of monthly climate observations from meteorological stations is constructed. The database includes six climate elements and extends over the global land surface. The database is checked for inhomogeneities in the station records using an automated method that refines previous methods by using incomplete and partially overlapping records and by detecting inhomogeneities with opposite signs in different seasons. The method includes the development of reference series using neighbouring stations. Information from different sources about a single station may be combined, even without an overlapping period, using a reference series. Thus, a longer station record may be obtained and fragmentation of records reduced. The reference series also enables 1961–90 normals to be calculated for a larger proportion of stations. The station anomalies are interpolated onto a 0.5° grid covering the global land surface (excluding Antarctica) and combined with a published normal from 1961–90. Thus, climate grids are constructed for nine climate variables (temperature, diurnal temperature range, daily minimum and maximum temperatures, precipitation, wet-day frequency, frost-day frequency, vapour pressure, and cloud cover) for the period 1901–2002. This dataset is known as CRU TS 2.1 and is publicly available (http://www.cru.uea.ac.uk/). Copyright © 2005 Royal Meteorological Society.
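The anomaly approach described above — interpolate station departures from their 1961–90 normals, then add the gridded normal back — can be sketched with simple inverse-distance weighting standing in for the paper's actual interpolation. All station values and the gridded normal below are hypothetical.

```python
import numpy as np

def idw_anomaly_grid(st_xy, st_anom, grid_xy, power=2.0):
    """Inverse-distance-weighted interpolation of station anomalies onto
    grid points (interpolating departures from normals, rather than
    absolute values, is the key idea of the CRU TS construction)."""
    d = np.linalg.norm(grid_xy[:, None, :] - st_xy[None, :, :], axis=-1)
    out = np.empty(len(grid_xy))
    for i, di in enumerate(d):
        if di.min() == 0:                       # grid point on a station
            out[i] = st_anom[di.argmin()]
        else:
            w = 1.0 / di**power
            out[i] = (w * st_anom).sum() / w.sum()
    return out

# Hypothetical stations: observed monthly temps and their 1961-90 normals
st_xy = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 2.0]])
obs = np.array([13.2, 14.1, 11.8])
normals = np.array([12.5, 13.9, 12.0])
anom = obs - normals                            # interpolate *departures*

grid_xy = np.array([[0.0, 0.0], [1.0, 1.0]])
grid_normal = np.array([12.5, 12.8])            # assumed gridded climatology
grid_abs = grid_normal + idw_anomaly_grid(st_xy, anom, grid_xy)
```

Working in anomalies means stations with short or fragmented records can still contribute, since only their departure from a shared baseline is interpolated.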

4,011 citations


"Very high resolution interpolated c..." refers background in this paper

  • ...…we have made significant progress, additional efforts to compile and capture climate data are needed to improve spatial and temporal coverage of the available climate data and quality control (Mitchell and Jones, 2005), and interpolation methods can be further refined to better use these data....


Journal ArticleDOI
TL;DR: In this paper, the construction of a 10' latitude/longitude data set of mean monthly surface climate over global land areas, excluding Antarctica, was described, which includes 8 climate elements: precipitation, wet-day frequency, temperature, diurnal temperature range, relative humidity, sunshine duration, ground frost frequency and windspeed.
Abstract: We describe the construction of a 10' latitude/longitude data set of mean monthly surface climate over global land areas, excluding Antarctica. The climatology includes 8 climate elements —precipitation, wet-day frequency, temperature, diurnal temperature range, relative humidity, sunshine duration, ground frost frequency and windspeed—and was interpolated from a data set of station means for the period centred on 1961 to 1990. Precipitation was first defined in terms of the parameters of the Gamma distribution, enabling the calculation of monthly precipitation at any given return period. The data are compared to an earlier data set at 0.5° latitude/longitude resolution and show added value over most regions. The data will have many applications in applied climatology, biogeochemical modelling, hydrology and agricultural meteorology and are available through the International Water Management Institute World Water and Climate Atlas (http://www.iwmi.org) and the Climatic Research Unit (http://www.cru.uea.ac.uk).
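Defining precipitation via Gamma parameters, as in this climatology, can be sketched with a method-of-moments fit; the return-period quantile is approximated here by Monte Carlo sampling (scipy's `gamma.ppf` would give it directly, and the paper's own fitting procedure may differ). The data values are made up for checkability.

```python
import numpy as np

def gamma_moments_fit(precip):
    """Method-of-moments fit of a Gamma distribution to monthly
    precipitation totals: shape = mean^2 / var, scale = var / mean.
    Fitting Gamma parameters lets precipitation at any return period
    be read off the fitted distribution."""
    m = precip.mean()
    v = precip.var()
    return m * m / v, v / m                    # shape (k), scale (theta)

# Deterministic check data: mean 2.5, population variance 1.25,
# so the fit must return shape = 5.0 and scale = 0.5 exactly.
data = np.array([1.0, 2.0, 3.0, 4.0])
shape, scale = gamma_moments_fit(data)

# Precipitation for a ~20-year return period corresponds to the 95th
# percentile of the fitted Gamma, approximated here by sampling.
samples = np.random.default_rng(0).gamma(shape, scale, 200_000)
p20 = np.percentile(samples, 95.0)
```

The Gamma parameterization is what makes the data set usable for applications that need extremes, not just means.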

2,206 citations


"Very high resolution interpolated c..." refers background or methods or result in this paper

  • ...We aggregated the climate surfaces to a 10 arc min resolution to illustrate the benefits of higher resolution surfaces and to compare our results to those of New et al. (2002)....


  • ...This resolution was chosen because it is the highest resolution global climate data set that was available before our study (New et al., 2002) and in order to compare with that data set....

  • ...Within-grid cell variation in elevation was evaluated by mapping the range of elevations of the 3 arc s resolution grid cells within each 30 arc s cell....


  • ...Other differences are related to the use of a different set of weather stations, and, no doubt, to some residual errors in our data set and that of New et al. (2002)....


  • ...Our surfaces have a 30 arc s spatial resolution; this is equivalent to about 0.86 km2 at the equator and less elsewhere and commonly referred to as ‘1-km’ resolution....


Journal ArticleDOI
TL;DR: In this article, a 0.5° lat × 0.5° long surface climatology of global land areas, excluding Antarctica, is described, which represents the period 1961–90 and comprises a suite of nine variables: precipitation, wet-day frequency, mean temperature, diurnal temperature range, vapor pressure, sunshine, cloud cover, ground frost frequency, and wind speed.
Abstract: The construction of a 0.5° lat × 0.5° long surface climatology of global land areas, excluding Antarctica, is described. The climatology represents the period 1961–90 and comprises a suite of nine variables: precipitation, wet-day frequency, mean temperature, diurnal temperature range, vapor pressure, sunshine, cloud cover, ground frost frequency, and wind speed. The climate surfaces have been constructed from a new dataset of station 1961–90 climatological normals, numbering between 19 800 (precipitation) and 3615 (wind speed). The station data were interpolated as a function of latitude, longitude, and elevation using thin-plate splines. The accuracy of the interpolations are assessed using cross validation and by comparison with other climatologies. This new climatology represents an advance over earlier published global terrestrial climatologies in that it is strictly constrained to the period 1961–90, describes an extended suite of surface climate variables, explicitly incorporates elevation...

1,880 citations


"Very high resolution interpolated c..." refers background or methods in this paper

  • ...For many applications, data at a fine (≤1 km2) spatial resolution are necessary to capture environmental variability that can be partly lost at lower resolutions, particularly in mountainous and other areas with steep climate gradients....


  • ...We chose this method because it has been used in other global studies (New et al., 1999, 2002), performed well in comparative tests of multiple interpolation techniques (Hartkamp et al., 1999; Jarvis and Stuart, 2001), and because it is computationally efficient and easy to run....


  • ...Leemans and Cramer (1991) and New et al. (1999) created important earlier data sets, at a spatial resolution of 0.5° (55.6 km at the equator)....


  • ...This database includes monthly mean (3084 stations), minimum and maximum (both 2504 stations) temperature and precipitation (4261 stations)....


Journal ArticleDOI
TL;DR: In this paper, a method for generating daily surfaces of temperature, precipitation, humidity, and radiation over large regions of complex terrain is presented, based on the spatial convolution of a truncated Gaussian weighting filter with the set of station locations.
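The truncated Gaussian weighting filter named in the TL;DR can be sketched directly. The functional form follows Thornton et al. (1997), but the shape parameter and truncation radius below are assumed illustrative values; Daymet actually adapts the radius iteratively to local station density.

```python
import numpy as np

def truncated_gaussian_weight(r, rp, alpha=6.25):
    """Daymet-style station weighting:
        W(r) = exp(-alpha * (r/Rp)^2) - exp(-alpha)   for r < Rp, else 0,
    so the weight decays smoothly with distance and reaches exactly zero
    at the truncation radius Rp. alpha controls how peaked the filter is;
    6.25 is an assumed illustrative value."""
    r = np.asarray(r, dtype=float)
    w = np.exp(-alpha * (r / rp) ** 2) - np.exp(-alpha)
    return np.where(r < rp, w, 0.0)

rp = 50.0                                    # truncation radius (km), assumed
w0 = truncated_gaussian_weight(0.0, rp)      # maximum weight: 1 - exp(-alpha)
w_mid = truncated_gaussian_weight(25.0, rp)
w_edge = truncated_gaussian_weight(50.0, rp) # exactly zero at the radius
```

Subtracting `exp(-alpha)` is what makes the filter reach zero at the truncation radius instead of being cut off with a jump.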

1,309 citations


"Very high resolution interpolated c..." refers background or methods in this paper

  • ...sets of high-resolution climate surfaces for the conterminous United States: the 1-km-resolution Daymet database of means for 1980–1997 (http://www.daymet.org/; Thornton et al., 1997) and the 2.5 arc min (∼5 km) PRISM climate database for 1970–2000 (http://www.ocs.orst.edu/; Daly et al., 2002)....

  • ...Thornton et al. (1997) used a truncated Gaussian weighting filter in combination with spatially and temporally explicit empirically determined relationships of temperature and precipitation to elevation....


  • ...for the United States (http://www.daymet.org/; Thornton et al., 1997)....

  • ...We then used SPLINA to build continuous climate surfaces for the training data and interrogated these surfaces for the locations of the test data....


  • ...Weather station data were assembled from a large number of sources: (1) The Global Historical Climate Network Dataset (GHCN) version 2....


Journal ArticleDOI
TL;DR: In this article, the authors present a knowledge-based framework for climate mapping using a statistical regression model known as PRISM (parameter-elevation regressions on independent slopes model).
Abstract: The demand for spatial climate data in digital form has risen dramatically in recent years. In response to this need, a variety of statistical techniques have been used to facilitate the production of GIS-compatible climate maps. However, observational data are often too sparse and unrepresentative to directly support the creation of high-quality climate maps and data sets that truly represent the current state of knowledge. An effective approach is to use the wealth of expert knowledge on the spatial patterns of climate and their relationships with geographic features, termed 'geospatial climatology', to help enhance, control, and parameterize a statistical technique. Described here is a dynamic knowledge-based framework that allows for the effective accumulation, application, and refinement of climatic knowledge, as expressed in a statistical regression model known as PRISM (parameter-elevation regressions on independent slopes model). The ultimate goal is to develop an expert system capable of reproducing the process a knowledgeable climatologist would use to create high-quality climate maps, with the added benefits of consistency and repeatability. However, knowledge must first be accumulated and evaluated through an ongoing process of model application; development of knowledge prototypes, parameters and parameter settings; testing; evaluation; and modification. This paper describes the current state of a knowledge-based framework for climate mapping and presents specific algorithms from PRISM to demonstrate how this framework is applied and refined to accommodate difficult climate mapping situations. A weighted climate-elevation regression function acknowledges the dominant influence of elevation on climate. Climate stations are assigned weights that account for other climatically important factors besides elevation.
Aspect and topographic exposure, which affect climate at a variety of scales, from hill slope to windward and leeward sides of mountain ranges, are simulated by dividing the terrain into topographic facets. A coastal proximity measure is used to account for sharp climatic gradients near coastlines. A 2-layer model structure divides the atmosphere into a lower boundary layer and an upper free atmosphere layer, allowing the simulation of temperature inversions, as well as mid-slope precipitation maxima. The effectiveness of various terrain configurations at producing orographic precipitation enhancement is also estimated. Climate mapping examples are presented.
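The multi-factor station weighting described above — distance, elevation difference, facet, coastal proximity, and so on, combined into one weight per station — can be sketched as a normalized product. Both the factor values and the multiplicative combination rule here are simplified assumptions; PRISM's documented scheme is more elaborate.

```python
import numpy as np

def combine_station_weights(w_dist, w_elev, w_facet, w_coast):
    """Sketch of PRISM-style station weighting: per-factor similarity
    weights in [0, 1] are combined multiplicatively and normalized to
    sum to 1, so a station must score well on *every* factor to carry
    much weight in the climate-elevation regression."""
    w = w_dist * w_elev * w_facet * w_coast
    return w / w.sum()

# Hypothetical weights for three candidate stations: station 0 is close,
# at a similar elevation, on the same topographic facet, and comparably
# coastal, so it should dominate the regression.
w_dist  = np.array([1.0, 0.5, 0.3])
w_elev  = np.array([0.9, 0.8, 0.2])
w_facet = np.array([1.0, 0.4, 1.0])
w_coast = np.array([1.0, 1.0, 0.9])
w = combine_station_weights(w_dist, w_elev, w_facet, w_coast)
```

A multiplicative combination makes each factor act as a veto: a station on the wrong side of a ridge (low facet weight) is down-weighted no matter how close it is.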

1,074 citations


"Very high resolution interpolated c..." refers background or methods in this paper

  • ...GHCN has data for precipitation (20 590 stations), mean temperature (7280 stations), and minimum and maximum temperature (4966 stations)....


  • ...We then used SPLINA to build continuous climate surfaces for the training data and interrogated these surfaces for the locations of the test data....


  • ...…sets of high-resolution climate surfaces for the conterminous United States: the 1- km-resolution Daymet database of means for 1980–1997 (http://www.daymet.org/; Thornton et al., 1997) and the 2.5 arc min (∼5 km) PRISM climate database for 1970–2000 (http://www.ocs.orst.edu/; Daly et al., 2002)....


  • ...Daly et al. (2002) used the PRISM method, which allows for incorporation of expert knowledge about the climate and can be particularly useful when data points are sparse....
