Author

Benjamin Bechtel

Other affiliations: University of Hamburg
Bio: Benjamin Bechtel is an academic researcher from Ruhr University Bochum. The author has contributed to research in the topics of urban climate and the urban heat island. The author has an h-index of 29 and has co-authored 101 publications receiving 3,703 citations. Previous affiliations of Benjamin Bechtel include the University of Hamburg.


Papers
Posted Content · DOI
TL;DR: The wide spectrum of scientific applications of SAGA is highlighted in a review of published studies, with special emphasis on the core application areas digital terrain analysis, geomorphology, soil science, climatology and meteorology, as well as remote sensing.
Abstract: The System for Automated Geoscientific Analyses (SAGA) is an open source geographic information system (GIS), mainly licensed under the GNU General Public License. Since its first release in 2004, SAGA has rapidly developed from a specialized tool for digital terrain analysis to a comprehensive and globally established GIS platform for scientific analysis and modeling. SAGA is coded in C++ in an object-oriented design and runs under several operating systems including Windows and Linux. Key functional features of the modular software architecture comprise an application programming interface for the development and implementation of new geoscientific methods, a user-friendly graphical user interface with many visualization options, a command line interpreter, and interfaces to interpreted languages like R and Python. The current version 2.1.4 offers more than 600 tools, which are implemented in dynamically loadable libraries or shared objects and represent the broad scope of SAGA in numerous fields of geoscientific endeavor and beyond. In this paper, we describe the system's architecture, functionality, and its current state of development and implementation. Furthermore, we highlight the wide spectrum of scientific applications of SAGA in a review of published studies, with special emphasis on the core application areas of digital terrain analysis, geomorphology, soil science, climatology and meteorology, as well as remote sensing.
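
The abstract mentions SAGA's command line interpreter and its interfaces to Python and R. As a minimal sketch, one way to drive a SAGA tool from Python is via the saga_cmd executable; the tool library, tool index, and parameter names below are assumptions about common SAGA builds and should be checked against saga_cmd --help on the installed version.

```python
# Minimal sketch: driving SAGA's command line interpreter (saga_cmd) from Python.
# The library "ta_morphometry", tool index 0 ("Slope, Aspect, Curvature") and the
# parameter flags are assumptions; verify them with `saga_cmd ta_morphometry`.
import subprocess

def saga_slope(dem_path: str, slope_path: str) -> None:
    """Compute a slope grid from a DEM using SAGA's terrain analysis tools."""
    cmd = [
        "saga_cmd", "ta_morphometry", "0",   # assumed tool: Slope, Aspect, Curvature
        "-ELEVATION", dem_path,              # input DEM grid (.sgrd/.sdat)
        "-SLOPE", slope_path,                # output slope grid
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    saga_slope("dem.sgrd", "slope.sgrd")
```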

1,459 citations

Journal Article · DOI
TL;DR: The WUDAPT protocol developed here provides an easy-to-understand workflow, uses freely available data and software, and can be applied by someone without specialist knowledge in spatial analysis or urban climate science.
Abstract: Progress in urban climate science is severely restricted by the lack of useful information that describes aspects of the form and function of cities at a detailed spatial resolution. To overcome this shortcoming we are initiating an international effort to develop the World Urban Database and Access Portal Tools (WUDAPT) to gather and disseminate this information in a consistent manner for urban areas worldwide. The first step in developing WUDAPT is a description of cities based on the Local Climate Zone (LCZ) scheme, which classifies natural and urban landscapes into categories based on climate-relevant surface properties. This methodology provides a culturally neutral framework for collecting information about the internal physical structure of cities. Moreover, studies have shown that remote sensing data can be used for supervised LCZ mapping. Mapping of LCZs is complicated because similar LCZs in different regions have dissimilar spectral properties due to differences in vegetation, building materials and other variations in cultural and physical environmental factors. The WUDAPT protocol developed here provides an easy-to-understand workflow, uses freely available data and software, and can be applied by someone without specialist knowledge in spatial analysis or urban climate science. The paper also provides an example application of the WUDAPT project results.
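
To make the supervised LCZ mapping step more concrete, here is a minimal sketch of classifying a satellite band stack against digitized training labels. It uses scikit-learn's RandomForestClassifier purely as a stand-in for the free software the protocol relies on, and the input arrays are hypothetical.

```python
# Sketch of a supervised LCZ classification step, assuming a stack of satellite bands
# and a partially labelled training raster (0 = unlabelled, 1..17 = LCZ class).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def classify_lcz(bands: np.ndarray, training: np.ndarray) -> np.ndarray:
    """bands: (n_bands, height, width) image stack; training: (height, width) labels."""
    n_bands, h, w = bands.shape
    X_all = bands.reshape(n_bands, -1).T          # one row of band values per pixel
    y_all = training.ravel()
    mask = y_all > 0                              # pixels with digitized training labels
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_all[mask], y_all[mask])
    return clf.predict(X_all).reshape(h, w)       # full LCZ map

# Usage with synthetic data:
# bands = np.random.rand(6, 100, 100)
# training = np.zeros((100, 100), int); training[:10, :10] = 2
# lcz_map = classify_lcz(bands, training)
```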

439 citations

Journal Article · DOI
TL;DR: The World Urban Database and Access Portal Tools (WUDAPT) is an international community-based initiative to acquire and disseminate climate-relevant data on the physical geographies of cities for modeling and analysis purposes.
Abstract: The World Urban Database and Access Portal Tools (WUDAPT) is an international community-based initiative to acquire and disseminate climate-relevant data on the physical geographies of cities for modeling and analysis purposes. The current lack of globally consistent information on cities is a major impediment to urban climate science and to informing and developing climate mitigation and adaptation strategies at urban scales. WUDAPT consists of a database and a portal system; its database is structured into a hierarchy representing different levels of detail, and the data are acquired using innovative protocols that utilize crowdsourcing approaches, Geowiki tools, freely accessible data, and building typology archetypes. The base level of information (L0) consists of local climate zone (LCZ) maps of cities; each LCZ category is associated with a range of values for model-relevant surface descriptors (roughness, impervious surface cover, roof area, building heights, etc.). Levels 1 (L1) and 2 (L2) will provide specific intra-urban values for other relevant descriptors at greater precision, such as data on morphological forms, material composition, and energy usage. This article describes the status of the WUDAPT project and demonstrates its potential value using observations and models. As a community-based project, other researchers are encouraged to participate to help create a global urban database of value to urban climate scientists.
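
The L0 level described above attaches ranges of model-relevant surface descriptors to each LCZ class. A small sketch of such a lookup structure is shown below; the classes and numeric ranges are illustrative placeholders only, not the published LCZ parameter table.

```python
# Sketch of the kind of lookup WUDAPT's base level (L0) implies: each LCZ class maps to
# ranges of model-relevant surface descriptors. Values are illustrative placeholders.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class LCZDescriptors:
    impervious_fraction: Tuple[float, float]   # min, max fraction of impervious cover
    building_height_m: Tuple[float, float]     # typical building height range [m]
    roughness_length_m: Tuple[float, float]    # aerodynamic roughness length range [m]

LCZ_TABLE = {
    "LCZ 1 (compact high-rise)": LCZDescriptors((0.4, 0.6), (25.0, 100.0), (1.0, 2.0)),
    "LCZ 6 (open low-rise)":     LCZDescriptors((0.2, 0.5), (3.0, 9.0),   (0.5, 1.0)),
    "LCZ D (low plants)":        LCZDescriptors((0.0, 0.1), (0.0, 0.0),   (0.03, 0.1)),
}

def midpoint(lo: float, hi: float) -> float:
    """A model might take the midpoint of a descriptor range as a first-guess parameter."""
    return 0.5 * (lo + hi)
```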

244 citations

Journal Article · DOI
TL;DR: In this study, Local Climate Zones, a system of thermally homogeneous urban structures introduced by Stewart and Oke, were used in a pixel-based classification approach and showed considerable potential for an automated classification of LCZ.
Abstract: Considerable progress has recently been made in determining urban morphologies or structural types from different Earth observation (EO) datasets. A relevant field of application for such methods is urban climatology, since specific urban morphologies produce distinct microclimates. However, application and comparability have so far been limited by the variety of typologies used for the description of urban surfaces in EO. In this study, Local Climate Zones (LCZ), a system of thermally homogeneous urban structures introduced by Stewart and Oke, were used in a pixel-based classification approach. Further, different EO datasets (including satellite multitemporal thermal and multispectral data as well as a normalized digital surface model (NDSM) from airborne Interferometric Synthetic Aperture Radar) and different classifiers (including Support Vector Machines, Neural Networks and Random Forest) were evaluated for their performance in a common framework. The multitemporal thermal and spectral features in particular yielded high potential for the discrimination of LCZ, but morphological profiles from the NDSM also performed well. Further, sets of 10-100 features were selected from the multiple EO data with the Minimum Redundancy Maximum Relevance (mRMR) approach. Overall classification accuracies of up to 97.4% and 95.3% were obtained with a Neural Network and a Random Forest classifier, respectively. This provides some evidence that LCZ can be derived from multiple EO data. Hence, we propose the typology and the method for the automated extraction of urban structures in urban climatology applications. Further, the chosen EO data and classifiers show considerable potential for an automated classification of LCZ.
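
As a rough illustration of the feature-selection-plus-classifier-comparison workflow, the sketch below ranks features by mutual information (a simple stand-in for the mRMR selection used in the study) and compares a Random Forest with a small neural network under cross-validation; the feature matrix and labels are synthetic.

```python
# Sketch: rank synthetic EO features, keep the top 10, and compare two classifiers.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((500, 40))                 # 500 pixels x 40 EO features (thermal, spectral, NDSM)
y = rng.integers(1, 11, size=500)         # 10 LCZ classes, synthetic labels

# Rank features by mutual information with the class label and keep the top 10.
scores = mutual_info_classif(X, y, random_state=0)
top = np.argsort(scores)[::-1][:10]
X_sel = X[:, top]

for name, clf in [("Random Forest", RandomForestClassifier(n_estimators=200, random_state=0)),
                  ("Neural Network", MLPClassifier(hidden_layer_sizes=(50,), max_iter=500, random_state=0))]:
    acc = cross_val_score(clf, X_sel, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```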

188 citations

Journal Article · DOI
TL;DR: In this paper, the authors explore the potential of these data for use in the Weather Research and Forecasting (WRF) model, which incorporates the Building Effect Parameterization and Building Energy Model (BEP-BEM) schemes.
Abstract: The absence of data that describe the urban landscape in climate-relevant terms for climatic models is currently a significant impediment to progress, even though the physics that underpins these models is universal. To address this data gap, the World Urban Database and Access Portal Tools (WUDAPT) project focuses on creating a global database on cities suited for urban climate studies. The first phase of WUDAPT has established a protocol using the Local Climate Zone (LCZ) classification system to partition the urban landscape of cities into neighbourhood types that can inform parameter selection in model applications. In this paper, we explore the potential of these data for use in the Weather Research and Forecasting (WRF) model, which incorporates the Building Effect Parameterization and Building Energy Model (BEP-BEM) schemes. The test is conducted for Madrid (Spain) during winter and summer, and the results of using LCZ-derived data are compared with those using CORINE land-cover data. The results are indicative but show that the LCZ scheme improves model performance. The paper emphasizes the need for further work to extend the value of these models for decisions on urban planning; however, such work will need useful urban data to make progress.
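
Using LCZ data in WRF with BEP-BEM implies remapping the ten urban LCZ classes onto dedicated urban land-use categories whose parameters the model reads from a table. The sketch below shows only that remapping step; the 31-40 index convention follows common WUDAPT-to-WRF practice but is an assumption here, as is the default non-urban code.

```python
# Sketch of the remapping step implied by driving WRF/BEP-BEM with LCZ data: urban LCZ
# classes (1-10) are mapped onto dedicated urban land-use indices so that per-class
# parameters (e.g. from a table such as URBPARM.TBL) can be looked up. The 31-40
# convention and the non-urban default are assumptions; check your WRF configuration.
import numpy as np

LCZ_TO_WRF_LU = {lcz: 30 + lcz for lcz in range(1, 11)}   # LCZ 1..10 -> LU_INDEX 31..40 (assumed)

def lcz_to_lu_index(lcz_map: np.ndarray, default_lu: int = 12) -> np.ndarray:
    """Replace urban LCZ codes with land-use indices; non-urban pixels get a default code."""
    lu = np.full_like(lcz_map, default_lu)
    for lcz, lu_index in LCZ_TO_WRF_LU.items():
        lu[lcz_map == lcz] = lu_index
    return lu

# Example: a tiny synthetic LCZ map with compact mid-rise (2) and open low-rise (6) pixels.
# print(lcz_to_lu_index(np.array([[2, 6], [6, 0]])))
```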

156 citations


Cited by
Journal Article · DOI
TL;DR: This review has revealed that the RF classifier can successfully handle high data dimensionality and multicollinearity, being both fast and insensitive to overfitting.
Abstract: A random forest (RF) classifier is an ensemble classifier that produces multiple decision trees, using a randomly selected subset of training samples and variables. This classifier has become popular within the remote sensing community due to the accuracy of its classifications. The overall objective of this work was to review the utilization of the RF classifier in remote sensing. The review has revealed that the RF classifier can successfully handle high data dimensionality and multicollinearity, being both fast and insensitive to overfitting. It is, however, sensitive to the sampling design. The variable importance (VI) measure provided by the RF classifier has been extensively exploited in different scenarios, for example to reduce the dimensionality of hyperspectral data, to identify the most relevant multisource remote sensing and geographic data, and to select the most suitable season for classifying particular target classes. Further investigations are required into less commonly exploited uses of this classifier, such as sample proximity analysis to detect and remove outliers in the training samples.
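
The variable-importance use case described above can be sketched in a few lines: fit a random forest, rank features by importance, and keep only the top-ranked ones. The data below are synthetic stand-ins for hyperspectral bands.

```python
# Sketch of using random forest variable importance to prune a high-dimensional feature set.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.random((1000, 200))                      # 1000 samples x 200 "bands"
y = (X[:, 5] + 0.5 * X[:, 42] > 1.0).astype(int) # labels driven by two informative bands

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
importances = rf.feature_importances_
keep = np.argsort(importances)[::-1][:20]        # retain the 20 most important bands
print("Top-ranked bands:", sorted(keep[:5].tolist()))
X_reduced = X[:, keep]                           # reduced feature matrix for re-training
```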

3,244 citations

Journal Article · DOI
16 Feb 2017 · PLOS ONE
TL;DR: Relative accuracy improvements, measured by the amount of variation explained, range from 60 to 230% in comparison to the previous version of SoilGrids at 1 km spatial resolution.
Abstract: This paper describes the technical development and accuracy assessment of the most recent and improved version of the SoilGrids system at 250 m resolution (June 2016 update). SoilGrids provides global predictions for standard numeric soil properties (organic carbon, bulk density, Cation Exchange Capacity (CEC), pH, soil texture fractions and coarse fragments) at seven standard depths (0, 5, 15, 30, 60, 100 and 200 cm), in addition to predictions of depth to bedrock and distribution of soil classes based on the World Reference Base (WRB) and USDA classification systems (ca. 280 raster layers in total). Predictions were based on ca. 150,000 soil profiles used for training and a stack of 158 remote sensing-based soil covariates (primarily derived from MODIS land products, SRTM DEM derivatives, climatic images and global landform and lithology maps), which were used to fit an ensemble of machine learning methods (random forest, gradient boosting and/or multinomial logistic regression) as implemented in the R packages ranger, xgboost, nnet and caret. The results of 10-fold cross-validation show that the ensemble models explain between 56% (coarse fragments) and 83% (pH) of variation, with an overall average of 61%. Relative accuracy improvements, measured by the amount of variation explained, range from 60 to 230% in comparison to the previous version of SoilGrids at 1 km spatial resolution. Improvements can be attributed to: (1) the use of machine learning instead of linear regression, (2) considerable investments in preparing finer resolution covariate layers, and (3) the insertion of additional soil profiles. Further development of SoilGrids could include refinement of methods to incorporate input uncertainties and derivation of posterior probability distributions (per pixel), and further automation of spatial modeling so that soil maps can be generated for potentially hundreds of soil variables. Another area of future research is the development of methods for multiscale merging of SoilGrids predictions with local and/or national gridded soil products (e.g. up to 50 m spatial resolution) so that increasingly more accurate, complete and consistent global soil information can be produced. SoilGrids is available under the Open Database License.
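
The evaluation idea, fitting a tree ensemble to soil covariates and reporting the variation explained under 10-fold cross-validation, can be sketched as follows. SoilGrids itself uses the R packages named above; this is only a Python analogue on synthetic data.

```python
# Sketch: fit a tree ensemble to soil covariates and report variation explained (R^2)
# under 10-fold cross-validation, on synthetic stand-in data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.random((2000, 25))                       # 2000 profiles x 25 covariates (stand-ins)
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.5, size=2000)  # pH-like target

model = RandomForestRegressor(n_estimators=200, random_state=0)
r2_scores = cross_val_score(model, X, y, cv=10, scoring="r2")
print(f"Variation explained (10-fold CV): {r2_scores.mean():.2f}")
```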

2,228 citations

Journal Article · DOI

1,101 citations

Proceedings Article · DOI
18 Jun 2018
TL;DR: The DeepGlobe 2018 Satellite Image Understanding Challenge is presented, which includes three public competitions for segmentation, detection, and classification tasks on satellite images; characteristics of each dataset are analyzed, and evaluation criteria for each task are defined.
Abstract: We present the DeepGlobe 2018 Satellite Image Understanding Challenge, which includes three public competitions for segmentation, detection, and classification tasks on satellite images (Figure 1). Similar to other challenges in the computer vision domain such as DAVIS [21] and COCO [33], DeepGlobe proposes three datasets and corresponding evaluation methodologies, coherently bundled in three competitions with a dedicated workshop co-located with CVPR 2018. We observed that satellite imagery is a rich and structured source of information, yet it is less investigated by computer vision researchers than everyday images. However, bridging modern computer vision with remote sensing data analysis could have a critical impact on the way we understand our environment and lead to major breakthroughs in global urban planning or climate change research. With this bridging objective in mind, DeepGlobe aims to bring together researchers from different domains to raise awareness of remote sensing in the computer vision community and vice versa. We aim to improve and evaluate state-of-the-art satellite image understanding approaches, which can hopefully serve as reference benchmarks for future research on the same topic. In this paper, we analyze characteristics of each dataset, define the evaluation criteria of the competitions, and provide baselines for each task.
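
A common way to score segmentation entries in such challenges is per-class intersection over union (IoU). The sketch below illustrates that metric in general terms; it is not DeepGlobe's exact evaluation code.

```python
# Sketch of the per-class intersection-over-union (IoU) metric commonly used to score
# segmentation challenges, computed on tiny synthetic masks.
import numpy as np

def class_iou(pred: np.ndarray, truth: np.ndarray, cls: int) -> float:
    """IoU for one class: intersection over union of the pixels assigned to that class."""
    p, t = (pred == cls), (truth == cls)
    union = np.logical_or(p, t).sum()
    return float(np.logical_and(p, t).sum() / union) if union else float("nan")

# Example with classes {0: background, 1: road}:
pred  = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
print(f"road IoU = {class_iou(pred, truth, 1):.2f}")   # 2 intersecting / 4 union pixels = 0.50
```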

652 citations