
Showing papers in "ISPRS Journal of Photogrammetry and Remote Sensing in 2010"


Journal ArticleDOI
TL;DR: This paper gives an overview of the development of object based methods, which aim to delineate readily usable objects from imagery while at the same time combining image processing and GIS functionalities in order to utilize spectral and contextual information in an integrative way.
Abstract: Remote sensing imagery needs to be converted into tangible information which can be utilised in conjunction with other data sets, often within widely used Geographic Information Systems (GIS). As long as pixel sizes remained typically coarser than, or at best similar in size to, the objects of interest, emphasis was placed on per-pixel analysis, or even sub-pixel analysis for this conversion, but with increasing spatial resolutions alternative paths have been followed, aimed at deriving objects that are made up of several pixels. This paper gives an overview of the development of object based methods, which aim to delineate readily usable objects from imagery while at the same time combining image processing and GIS functionalities in order to utilize spectral and contextual information in an integrative way. The most common approach used for building objects is image segmentation, which dates back to the 1970s. Around the year 2000 GIS and image processing started to grow together rapidly through object based image analysis (OBIA - or GEOBIA for geospatial object based image analysis). In contrast to typical Landsat resolutions, high resolution images support analysis at several scales within a single image. Through a comprehensive literature review several thousand abstracts have been screened, and more than 820 OBIA-related articles comprising 145 journal papers, 84 book chapters and nearly 600 conference papers are analysed in detail. It becomes evident that the first years of the OBIA/GEOBIA developments were characterised by the dominance of ‘grey’ literature, but that the number of peer-reviewed journal articles has increased sharply over the last four to five years. The pixel paradigm is beginning to show cracks and the OBIA methods are making considerable progress towards a spatially explicit information extraction workflow, such as is required for spatial planning as well as for many monitoring programmes.
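The object-building step the overview centres on is image segmentation. As a hedged illustration only (not a method from the paper), the sketch below groups pixels into spectrally homogeneous objects with scikit-image's SLIC superpixels and then derives per-object statistics of the kind OBIA feeds into GIS-style analysis; the image and parameter values are stand-ins.

```python
# Minimal OBIA-flavoured sketch: build objects by segmentation, then
# compute per-object attributes. SLIC is one of many possible algorithms.
import numpy as np
from skimage.segmentation import slic
from skimage.measure import regionprops

image = np.random.rand(256, 256, 3)  # stand-in for a 3-band scene

# Group pixels into spectrally homogeneous segments ("objects").
segments = slic(image, n_segments=200, compactness=10.0, start_label=1)

# Per-object spectral and geometric attributes for downstream analysis.
for region in regionprops(segments, intensity_image=image[..., 0]):
    area, mean_band1 = region.area, region.mean_intensity
```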

3,809 citations


Journal ArticleDOI
TL;DR: While traditional mapping is nearly exclusively coordinated and often also carried out by large organisations, crowdsourcing geospatial data refers to generating a map using informal social networks and web 2.0 technology.
Abstract: In this paper we review recent developments in crowdsourcing geospatial data. While traditional mapping is nearly exclusively coordinated, and often also carried out, by large organisations, crowdsourcing geospatial data refers to generating a map using informal social networks and web 2.0 technology. Key differences are that users lacking formal training in map making create the geospatial data themselves rather than relying on professional services; that potentially very large user groups collaborate voluntarily and often without financial compensation, with the result that open datasets become available at very low monetary cost; and that mapping and change detection occur in real time. This situation is similar to that found in the Open Source software environment. We briefly explain the basic technology needed for crowdsourcing geospatial data, discuss the underlying concepts including quality issues, and give some examples of this novel way of generating geospatial data. We also point to applications where alternatives do not exist, such as live traffic information systems. Finally we explore the future of crowdsourcing geospatial data and give some concluding remarks.
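As a hedged, concrete illustration of the basic technology involved (the review does not prescribe a specific service), the snippet below queries OpenStreetMap, a canonical crowdsourced mapping project, through its public Overpass API; the bounding box and tag are arbitrary examples.

```python
# Fetch volunteer-contributed map features from OpenStreetMap's Overpass API.
import requests

query = """
[out:json][timeout:25];
node["amenity"="drinking_water"](52.50,13.35,52.53,13.42);
out body;
"""
resp = requests.post("https://overpass-api.de/api/interpreter",
                     data={"data": query})
resp.raise_for_status()
for node in resp.json()["elements"][:5]:
    print(node["lat"], node["lon"], node.get("tags", {}))
```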

433 citations


Journal ArticleDOI
TL;DR: The paper reviews a number of current approaches in order to comprehensively elaborate the state of the art of reconstruction methods and their respective principles, as well as the generation of more detailed facade geometries from terrestrial data collection.
Abstract: The development of tools for the generation of 3D city models started almost two decades ago. From the beginning, fully automatic reconstruction systems were envisioned to fulfil the need for efficient data collection. However, research on automatic city modelling is still a very active area. The paper reviews a number of current approaches in order to comprehensively elaborate the state of the art of reconstruction methods and their respective principles. Originally, automatic city modelling aimed only at polyhedral building objects, which mainly reflect the respective roof shapes and building footprints. For this purpose, airborne images or laser scans are used. In addition to these developments, the paper also reviews current approaches for the generation of more detailed facade geometries from terrestrial data collection.

407 citations


Journal ArticleDOI
TL;DR: A review of the latest developments in different fields of remote sensing for forest biomass mapping is presented in this article, where the authors focus on the potential of advanced remote sensing techniques to assess forest biomass.
Abstract: This is a review of the latest developments in different fields of remote sensing for forest biomass mapping. The main fields of research within the last decade have focused on the use of small footprint airborne laser scanning systems, polarimetric synthetic aperture radar interferometry and hyperspectral data. Parallel developments in the field of digital airborne camera systems, digital photogrammetry and very high resolution multispectral data have taken place and have also proven themselves suitable for forest mapping issues. Forest mapping is a wide field and a variety of forest parameters can be mapped or modelled based on remote sensing information alone or combined with field data. The most common information required about a forest is related to its wood production and environmental aspects. In this paper, we focus on the potential of advanced remote sensing techniques to assess forest biomass. This information is especially required by the REDD (Reducing Emissions from Deforestation and Degradation) process. For this reason, new types of remote sensing data such as full-waveform laser scanning data, polarimetric synthetic aperture radar interferometry (PolInSAR) and hyperspectral data are the focus of the research. In recent times, a few state-of-the-art articles in the field of airborne laser scanning for forest applications have been published. The current paper provides a state-of-the-art review of remote sensing with a particular focus on biomass estimation, including new findings with full-waveform airborne laser scanning, hyperspectral and polarimetric synthetic aperture radar interferometry. A synthesis of the actual findings and an outline of future developments are presented.

329 citations


Journal ArticleDOI
TL;DR: A novel low-cost mini-UAV-based laser scanning system capable of not only recording point cloud data giving the geometry of the objects, but also simultaneously collecting image data, including overlapping images and the intensity of laser backscatter, as well as hyperspectral and thermal data is presented.
Abstract: This paper presents a novel low-cost mini-UAV-based laser scanning system, which is also capable of performing car-based mobile mapping. The quality of the system and its feasibility for tree measurements were tested using the system’s laser scanner. The system was constructed as a modular measurement system consisting of a number of measurement instruments: a GPS/IMU positioning system, two laser scanners, a CCD camera, a spectrometer and a thermal camera. An Ibeo Lux and a Sick LMS151 profile laser were integrated into the system to provide dense point clouds; intensities of the reflected echoes can also be obtained with the Sick LMS. In our tests, when using a car as a platform, the pole-type object extraction algorithm that was developed achieved 90% completeness and 86% correctness. The heights of pole-type objects were obtained with a bias of −1.6 cm and a standard deviation of 5.4 cm. Using a mini-UAV as the platform, the standard deviation of individual tree heights was about 30 cm. Digital elevation model extraction was also tested with the UAV data, resulting in a height offset of about 3.1 cm and a standard deviation of 9.2 cm. With a multitemporal point cloud, we demonstrated a method to derive the biomass change of a coniferous tree with an R² value of 0.92. The proposed system is capable of not only recording point cloud data giving the geometry of the objects, but also simultaneously collecting image data, including overlapping images and the intensity of laser backscatter, as well as hyperspectral and thermal data. Therefore, we believe that the system is feasible for new algorithm and concept development and for basic research, especially when data are recorded multitemporally.
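One of the reported measurements, individual tree height from laser echoes, reduces to a simple point-cloud computation. The sketch below is a toy version under stated assumptions (synthetic points, a percentile-based ground estimate), not the authors' processing chain.

```python
# Toy tree-height estimate: highest crown echo minus local ground level.
import numpy as np

rng = np.random.default_rng(0)
ground = rng.normal(0.0, 0.05, size=200)     # ground returns near z = 0 m
crown = rng.uniform(2.0, 12.0, size=300)     # crown returns up to ~12 m
z = np.concatenate([ground, crown])

ground_level = np.percentile(z, 5)           # robust local ground estimate
tree_height = z.max() - ground_level
print(f"estimated tree height: {tree_height:.2f} m")
```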

322 citations


Journal ArticleDOI
TL;DR: In this article, two masks are obtained from the LIDAR data: a primary building mask and a secondary building mask, where the primary mask indicates the void areas where the laser does not reach below a certain height threshold.
Abstract: This paper presents an automatic building detection technique using LIDAR data and multispectral imagery. Two masks are obtained from the LIDAR data: a 'primary building mask' and a 'secondary building mask'. The primary building mask indicates the void areas where the laser does not reach below a certain height threshold. The secondary building mask indicates the filled areas, from where the laser reflects, above the same threshold. Line segments are extracted from around the void areas in the primary building mask. Line segments around trees are removed using the normalized difference vegetation index derived from the orthorectified multispectral images. The initial building positions are obtained based on the remaining line segments. The complete buildings are detected from their initial positions using the two masks and multispectral images in the YIQ colour system. It is experimentally shown that the proposed technique can successfully detect urban residential buildings, when assessed in terms of 15 indices including completeness, correctness and quality.
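The vegetation-removal step relies on the standard normalized difference vegetation index, NDVI = (NIR − Red) / (NIR + Red). A minimal sketch, with numpy arrays standing in for the orthorectified multispectral bands and an illustrative threshold that is not taken from the paper:

```python
# Compute NDVI and mask likely-vegetation pixels (e.g. tree line segments).
import numpy as np

nir = np.array([[0.60, 0.55], [0.20, 0.18]])  # near-infrared reflectance
red = np.array([[0.10, 0.12], [0.15, 0.16]])  # red reflectance

ndvi = (nir - red) / (nir + red + 1e-9)       # epsilon avoids divide-by-zero
vegetation_mask = ndvi > 0.4                  # threshold chosen for illustration
```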

252 citations


Journal ArticleDOI
TL;DR: In this article, the authors summarized recent developments and applications of digital photogrammetry in industrial measurement, focusing on higher dynamic applications, integration of systems into production chains, multi-sensor solutions and still higher accuracy and lower costs.
Abstract: This article summarizes recent developments and applications of digital photogrammetry in industrial measurement. Industrial photogrammetry covers a wide field of different practical challenges in terms of specified accuracy, measurement speed, automation, process integration, cost-performance ratio, sensor integration and analysis. On-line and off-line systems are available, offering general purpose systems on the one hand and specific turnkey systems for individual measurement tasks on the other. Verification of accuracy and traceability to standard units with respect to national and international standards is indispensable in industrial practice. System solutions can be divided into the measurement of discrete points, deformations and motions, 6DOF parameters, 3D contours and 3D surfaces. Recent and future developments concentrate on higher dynamic applications, integration of systems into production chains, multi-sensor solutions and still higher accuracy and lower costs.

252 citations


Journal ArticleDOI
TL;DR: Some basic physical concepts commonly used by the remote sensing community for modelling scattering and reflection processes are reviewed, and the backscattering coefficient γ is recommended for the radiometric calibration of small-footprint full-waveform airborne laser scanners.
Abstract: Small-footprint (0.2–2 m) airborne laser scanners are lidar instruments originally developed for topographic mapping. While the first airborne laser scanners only allowed determining the range from the sensor to the target, the latest sensor generation records the complete echo waveform. The waveform provides important information about the backscattering properties of the observed targets and may be useful for geophysical parameter retrieval and advanced geometric modelling. However, to fully utilise the potential of the waveform measurements in applications, it is necessary to perform a radiometric calibration. As there are as yet no calibration standards, this paper reviews some basic physical concepts commonly used by the remote sensing community for modelling scattering and reflection processes. Based purely on theoretical arguments, it is recommended to use the backscattering coefficient γ, which is the backscatter cross-section normalised relative to the laser footprint area, for the radiometric calibration of small-footprint full-waveform airborne laser scanners. The presented concepts are, with some limitations, also applicable to conventional airborne laser scanners that measure the range and intensity of multiple echoes.
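Restated in LaTeX for reference (the footprint-area expression assumes a circular footprint produced by a beam divergence β_t at range R, a common lidar convention rather than wording quoted from the abstract):

```latex
\gamma = \frac{\sigma}{A_{\mathrm{fp}}},
\qquad
A_{\mathrm{fp}} = \frac{\pi R^{2} \beta_{t}^{2}}{4},
```

where σ is the backscatter cross-section of the target.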

213 citations


Journal ArticleDOI
TL;DR: A new method is proposed for change detection of buildings in urban environments from very high spatial resolution images (VHSR) using existing digital cartographic data; it could be integrated into a cartographic update process or used for the quality assessment of a geodatabase.
Abstract: The updating of geodatabases (GDB) in urban environments is a difficult and expensive task. It may be facilitated by an automatic change detection method. Several methods have been developed for medium and low spatial resolution images. This study proposes a new method for change detection of buildings in urban environments from very high spatial resolution (VHSR) images using existing digital cartographic data. The proposed methodology is composed of several stages. The existing knowledge on the buildings and the other urban objects is first modelled and saved in a knowledge base. Some change detection rules are defined at this stage. Then, the image is segmented. The parameters of segmentation are computed through the integration of the image and the geodatabase. Thereafter, the segmented image is analyzed using the knowledge base to localize the segments where building change is likely to have occurred. The change detection rules are then applied to these segments to identify the segments that represent changes of buildings. These changes represent the updates of buildings to be added to the geodatabase. The data used in this research concern the city of Sherbrooke (Quebec, Canada) and the city of Rabat (Morocco). For Sherbrooke, we used an Ikonos image acquired in October 2006 and a GDB at the scale of 1:20,000. For Rabat, a QuickBird image acquired in August 2006 has been used with a GDB at the scale of 1:10,000. The rate of good detection is 90%. The proposed method presents some limitations in the detection of the exact contours of the buildings. It could be improved by including a shape post-analysis of detected buildings. The proposed method could be integrated into a cartographic update process or used as a method for the quality assessment of a geodatabase. It could also be used to identify illegal building work or to monitor urban growth.

199 citations


Journal ArticleDOI
TL;DR: This work presents a novel vision-based system for automatic detection and extraction of complex road networks from various sensor resources such as aerial photographs, satellite images, and LiDAR that merges the power of perceptual grouping theory and optimization techniques into a unified framework to address the challenging problems of geospatial feature detection and classification.
Abstract: In this work we present a novel vision-based system for automatic detection and extraction of complex road networks from various sensor resources such as aerial photographs, satellite images, and LiDAR. Uniquely, the proposed system is an integrated solution that merges the power of perceptual grouping theory (Gabor filtering, tensor voting) and optimized segmentation techniques (global optimization using graph-cuts) into a unified framework to address the challenging problems of geospatial feature detection and classification. Firstly, the local precision of the Gabor filters is combined with the global context of the tensor voting to produce accurate classification of the geospatial features. In addition, the tensorial representation used for the encoding of the data eliminates the need for any thresholds, therefore removing any data dependencies. Secondly, a novel orientation-based segmentation is presented which incorporates the classification of the perceptual grouping, and results in segmentations with better defined boundaries and continuous linear segments. Finally, a set of Gaussian-based filters are applied to automatically extract centerline information (magnitude, width and orientation). This information is then used for creating road segments and transforming them into their polygonal representations.
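As a hedged sketch of the first stage only (oriented Gabor filtering; the tensor voting and graph-cut stages are not shown), the snippet below runs a small bank of Gabor filters over a stand-in image and keeps the per-pixel dominant orientation, which hints at local road direction. The frequency and orientation count are illustrative.

```python
# Oriented Gabor filter bank; road-like linear structure responds strongly
# at the filter orientation aligned with the road.
import numpy as np
from skimage.filters import gabor

image = np.random.rand(128, 128)  # stand-in for an aerial/satellite band

responses = []
for theta in np.linspace(0, np.pi, 8, endpoint=False):
    real, imag = gabor(image, frequency=0.2, theta=theta)
    responses.append(np.hypot(real, imag))   # orientation-wise energy

dominant_orientation = np.argmax(np.stack(responses), axis=0)
```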

139 citations


Journal ArticleDOI
TL;DR: In this article, a hybrid theoretical-empirical model was developed for modeling the error in LiDAR-derived digital elevation models (DEMs) of non-open terrain, where interpolation is performed using an inverse distance weighting (IDW) method with the local support of the five closest neighbours, although it would be possible to utilize other interpolation methods.
Abstract: A hybrid theoretical–empirical model has been developed for modelling the error in LiDAR-derived digital elevation models (DEMs) of non-open terrain. The theoretical component seeks to model the propagation of the sample data error (SDE), i.e. the error from light detection and ranging (LiDAR) data capture of ground sampled points in open terrain, towards interpolated points. The interpolation methods used for infilling gaps may produce a non-negligible error that is referred to as gridding error. In this case, interpolation is performed using an inverse distance weighting (IDW) method with the local support of the five closest neighbours, although it would be possible to utilize other interpolation methods. The empirical component refers to what is known as “information loss”. This is the error purely due to modelling the continuous terrain surface from only a discrete number of points plus the error arising from the interpolation process. The SDE must be previously calculated from a suitable number of check points located in open terrain and assumes that the LiDAR point density was sufficiently high to neglect the gridding error. For model calibration, data for 29 study sites, 200×200 m in size, belonging to different areas around Almeria province, south-east Spain, were acquired by means of stereo photogrammetric methods. The developed methodology was validated against two different LiDAR datasets. The first dataset used was an Ordnance Survey (OS) LiDAR survey carried out over a region of Bristol in the UK. The second dataset was an area located at Gador mountain range, south of Almeria province, Spain. Both terrain slope and sampling density were incorporated in the empirical component through the calibration phase, resulting in a very good agreement between predicted and observed data (R² = 0.9856; p < 0.001). In validation, Bristol observed vertical errors, corresponding to different LiDAR point densities, offered a reasonably good fit to the predicted errors. Even better results were achieved in the more rugged morphology of the Gador mountain range dataset. The findings presented in this article could be used as a guide for the selection of appropriate operational parameters (essentially point density in order to optimize survey cost), in projects related to LiDAR survey in non-open terrain, for instance those projects dealing with forestry applications.
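The interpolation named above, inverse distance weighting with the five closest neighbours, is compact enough to sketch. The power parameter below is assumed to be 2, which the abstract does not specify, and the data are synthetic.

```python
# IDW gridding with k nearest neighbours, as used for infilling gaps.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
pts = rng.uniform(0, 100, size=(500, 2))                  # ground points (x, y)
z = np.sin(pts[:, 0] / 10) + 0.1 * rng.normal(size=500)   # elevations

tree = cKDTree(pts)

def idw(query_xy, k=5, power=2.0):
    dist, idx = tree.query(query_xy, k=k)
    dist = np.maximum(dist, 1e-12)            # guard against exact hits
    w = 1.0 / dist**power
    return np.sum(w * z[idx], axis=-1) / np.sum(w, axis=-1)

print(idw(np.array([50.0, 50.0])))            # interpolated DEM node
```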

Journal ArticleDOI
TL;DR: The paper shows that only a small number of additional parameters is needed to model both elements and to preserve the collinearity relation, and that no unique setup is needed for estimating the additional parameters and that the estimation is insensitive to noise or first approximations.
Abstract: Underwater photogrammetry provides an efficient nondestructive means for measurement in environments with limited accessibility. With the growing use of consumer cameras, its application is becoming easier, thus benefiting a wide variety of disciplines. However, utilizing cameras for underwater photogrammetry poses some nontrivial modeling problems due to the refraction effect and the extension of the imaging system into a unit of both the camera and the protecting housing device. This paper studies the effect that the underwater environment has on the photogrammetric process, and proposes a model for describing the geometric distortions and for estimating the additional parameters involved. The proposed model accounts not only for the multimedia effect, but also for inaccuracies related to the setting of the camera and housing device. The paper shows that only a small number of additional parameters is needed to model both elements and to preserve the collinearity relation. The results show that no unique setup is needed for estimating the additional parameters and that the estimation is insensitive to noise or first approximations. Experiments show that high levels of accuracy can be achieved.
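The underlying physics of the multimedia effect is refraction at each interface (air, housing port, water), governed by Snell's law; a one-line reminder in standard physics notation rather than the paper's:

```latex
n_{1}\sin\theta_{1} = n_{2}\sin\theta_{2}
```

Because water's refractive index (≈1.33) exceeds air's, rays bend at the housing interface, which is why the collinearity model must be augmented with additional parameters.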

Journal ArticleDOI
TL;DR: In this article, a comparative analysis of different methods for automated building detection in aerial images and laser data at different spatial resolutions is presented, based on error measures obtained by superimposing the results on a manually generated reference map of each area.
Abstract: Automated approaches to building detection in multi-source aerial data are important in many applications, including map updating, city modeling, urban growth analysis and monitoring of informal settlements. This paper presents a comparative analysis of different methods for automated building detection in aerial images and laser data at different spatial resolutions. Five methods are tested in two study areas using features extracted at both pixel level and object level, but with the strong prerequisite of using the same training set for all methods. The evaluation of the methods is based on error measures obtained by superimposing the results on a manually generated reference map of each area. The results in both study areas show a better performance of the Dempster-Shafer and the AdaBoost methods, although these two methods also yield a number of unclassified pixels. The method of thresholding a normalized DSM performs well in terms of the detection rate and reliability in the less vegetated Mannheim study area, but also yields a high rate of false positive errors. The Bayesian methods perform better in the Memmingen study area where buildings have more or less the same heights.
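The simplest of the compared methods, thresholding a normalized DSM, is easy to sketch. A hedged toy version with synthetic rasters and an illustrative height threshold (the paper's exact settings are not given in the abstract):

```python
# Normalized DSM baseline: object heights = surface model - terrain model;
# pixels above a height threshold become building candidates.
import numpy as np

dsm = np.random.rand(100, 100) * 30     # stand-in surface heights (m)
dtm = np.random.rand(100, 100) * 2      # stand-in bare-earth terrain (m)

ndsm = dsm - dtm                        # heights above ground
building_candidates = ndsm > 2.5        # illustrative threshold
```

In practice this baseline also flags tall vegetation, which is consistent with the high false positive rate reported above.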

Journal ArticleDOI
TL;DR: In this article, the performance of two schemes for orientation bias correction (i.e., RPCs modification and RPCs regeneration) is presented based on one separate-orbit QuickBird stereo image pair in Shanghai, and four cases for bias correction, including shift bias correction and second-order polynomial bias correction are examined.
Abstract: The rational function model (RFM) is widely used as an alternative to physical sensor models for 3D ground point determination with high-resolution satellite imagery (HRSI). However, owing to the sensor orientation bias inherent in the vendor-provided rational polynomial coefficients (RPCs), the geo-positioning accuracy obtained from these RPCs is limited. In this paper, the performances of two schemes for orientation bias correction (i.e., RPCs modification and RPCs regeneration) are presented based on one separate-orbit QuickBird stereo image pair in Shanghai, and four cases for bias correction, including shift bias correction, shift and drift bias correction, affine model bias correction and second-order polynomial bias correction, are examined. A 2-step least squares adjustment method is adopted for correction parameter estimation, with a comparison with the RPC bundle adjustment method. The experimental results demonstrate that in general the accuracy of the 2-step least squares adjustment method is almost identical to that of the RPC bundle adjustment method. With the shift bias correction method and a minimum of one ground control point (GCP), the modified RPCs improve the accuracy from the original 23 m to 3 m in planimetry and from 17 m to 4 m in height. With the shift and drift bias correction method, the regenerated RPCs achieve a further improved positioning accuracy of 0.6 m in planimetry and 1 m in height with a minimum of 2 well-distributed GCPs. The affine model bias correction yields a geo-positioning accuracy of better than 0.5 m in planimetry and 1 m in height with 3 well-positioned GCPs. Further tests with the second-order polynomial bias correction model indicate the existence of potential high-order error signals in the vendor-provided RPCs, and on condition that an adequate redundancy in GCP number is available, an accuracy of 0.4 m in planimetry and 0.8 m in height is attainable.
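For context, the RFM expresses image coordinates as ratios of polynomials in normalized object-space coordinates. The third-order, 20-coefficient form below is the common RPC convention rather than wording quoted from this abstract:

```latex
r = \frac{P_{1}(X, Y, Z)}{P_{2}(X, Y, Z)},
\qquad
c = \frac{P_{3}(X, Y, Z)}{P_{4}(X, Y, Z)},
```

where (r, c) are normalized image row and column, (X, Y, Z) are normalized latitude, longitude and height, and each P_i is a third-order polynomial with 20 coefficients. The bias corrections examined above act in image space, e.g. a shift Δr = a₀ or an affine correction Δr = a₀ + a₁r + a₂c.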

Journal ArticleDOI
TL;DR: In this paper, the authors examined two normalization procedures affecting LiDAR intensity through the scanning geometry and the system settings, namely, range normalization and the effects of the automatic gain control (AGC) in the Optech ALTM3100 and Leica ALS50-II sensors.
Abstract: Recently, the intensity characteristics of discrete-return LiDAR sensors were studied for vegetation classification. We examined two normalization procedures affecting LiDAR intensity through the scanning geometry and the system settings, namely, range normalization and the effects of the automatic gain control (AGC) in the Optech ALTM3100 and Leica ALS50-II sensors. Range normalization corresponds to weighting of the observed intensities with the term (R/R_Ref)^a, where R is the range, R_Ref is a mean reference range, and a ∈ [2, 4] is the exponent that is, according to theory, dependent on the target geometry. LiDAR points belonging to individual tree crowns were extracted for 13,887 trees in southern Finland. The coefficient of variation (CV) of the intensity was analyzed for a range of values of exponent a. The tree species classification performance using 13 intensity variables was also used for sensitivity analysis of the effect of a. The results were in line with the established theory, since the optimal level of a was lower (a ≈ 2) for trees with large or clumped leaves and higher (a ≈ 3) for diffuse coniferous crowns. Different echo groups also showed varying responses. Single-return pulses that represented strong reflections had a lower optimal value of a than the first and all echoes in a pulse. The gain in classification accuracy from the optimal selection of the exponent was 2%–3%, and the optimum for classification was different from that obtained using the CV analysis. In the ALS50-II sensor, the combined and optimized AGC and R normalizations had a notably larger effect (6%–9%) on classification accuracy. Our study demonstrates the ambiguity of R normalization in vegetation canopies.
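The range normalization described above is a one-line correction; the sketch below applies it with an assumed exponent inside the reported optimum interval (the intensity and range values are synthetic).

```python
# Normalize observed LiDAR intensities to a common reference range:
# I_norm = I * (R / R_ref) ** a, with a in [2, 4] depending on target geometry.
import numpy as np

def normalize_intensity(intensity, range_m, ref_range_m, a=2.5):
    return intensity * (range_m / ref_range_m) ** a

I = np.array([120.0, 95.0, 140.0])     # raw per-echo intensities
R = np.array([480.0, 510.0, 465.0])    # per-echo ranges (m)
I_norm = normalize_intensity(I, R, ref_range_m=500.0)
```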

Journal ArticleDOI
TL;DR: A detailed study into the sources of correlation in terrestrial laser scanner self-calibration for a basic additional parameter set is presented and a set of recommended design measures to reduce parameter correlation is presented.
Abstract: Instrument calibration is recognised as an important process to assure the quality of data captured with a terrestrial laser scanner. While the self-calibration approach can provide optimal estimates of systematic error parameters without the need for specialised equipment or facilities, its success is somewhat hindered by high correlations between model variables. This paper presents the findings of a detailed study into the sources of correlation in terrestrial laser scanner self-calibration for a basic additional parameter set. Several pertinent outcomes, resulting from experiments conducted with simulated data, and 12 real calibration datasets captured with a Faro 880 terrestrial laser scanner, are presented. First, it is demonstrated that panoramic-type scanner self-calibration from only two instrument locations is possible so long as the scans have orthogonal orientation in the horizontal plane. Second, the importance of including scanner tilt angle observations in the adjustment for parameter de-correlation is demonstrated. Third, a new network measure featuring an asymmetric distribution of object points that does not rely upon a priori observation of the instrument position is proposed. It is shown to be an effective means to reduce the correlation between the rangefinder offset and the scanner position parameters. Fourth, the roles of several other influencing variables on parameter correlation are revealed. The paper concludes with a set of recommended design measures to reduce parameter correlation in terrestrial laser scanner self-calibration.

Journal ArticleDOI
TL;DR: In this article, the amplitude, backscatter cross section and backscatter coefficient of echoes from airborne laser scanning (ALS) point cloud data collected from two different sites are analysed based on urban land cover classes.
Abstract: Airborne laser scanning (ALS) data are increasingly being used for land cover classification. The amplitudes of echoes from targets, available from full-waveform ALS data, have been found to be useful in the classification of land cover. However, the amplitude of an echo is dependent on various factors such as the range and incidence angle, which makes it difficult to develop a classification method which can be applied to full-waveform ALS data from different sites, scanning geometries and sensors. Additional information available from full-waveform ALS data, such as range and echo width, can be used for radiometric calibration, and to derive the backscatter cross section. The backscatter cross section of a target is the physical cross sectional area of an idealised isotropic target which has the same intensity as the selected target. The backscatter coefficient is the backscatter cross section per unit area. In this study, the amplitude, backscatter cross section and backscatter coefficient of echoes from ALS point cloud data collected from two different sites are analysed based on urban land cover classes. The application of decision tree classifiers developed using data from the first study area to the second demonstrates the advantage of using the backscatter coefficient in classification methods, along with spatial attributes. It is shown that the accuracy of classification of the second study area using the backscatter coefficient (kappa coefficient 0.89) is higher than that using the amplitude (kappa coefficient 0.67) or the backscatter cross section (kappa coefficient 0.68). This attribute is especially useful for separating road and grass.
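A hedged sketch of the experiment's flavour: train a decision tree on per-echo attributes from one site, evaluate on another, and score with the kappa coefficient. The features and data below are synthetic stand-ins, not the paper's.

```python
# Decision-tree land cover classification scored with Cohen's kappa.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 2))    # e.g. backscatter coefficient, echo width
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # two synthetic classes

clf = DecisionTreeClassifier(max_depth=4).fit(X[:200], y[:200])
kappa = cohen_kappa_score(y[200:], clf.predict(X[200:]))
print(f"kappa on held-out data: {kappa:.2f}")
```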

Journal ArticleDOI
TL;DR: In this article, the authors developed methods and protocols for mapping irrigated areas using a Moderate Resolution Imaging Spectroradiometer (MODIS) 500 m time series, to generate irrigated area statistics, and to compare these with ground- and census-based statistics.
Abstract: The overarching goal of this research was to develop methods and protocols for mapping irrigated areas using a Moderate Resolution Imaging Spectroradiometer (MODIS) 500 m time series, to generate irrigated area statistics, and to compare these with ground- and census-based statistics. The primary mega-file data-cube (MFDC), comparable to a hyper-spectral data cube, used in this study consisted of 952 bands of data in a single file that were derived from MODIS 500 m, 7-band reflectance data acquired every 8 days during 2001–2003. The methods consisted of (a) segmenting the 952-band MFDC based not only on elevation-precipitation-temperature zones but also on major and minor irrigated command area boundaries obtained from India’s Central Board of Irrigation and Power (CBIP), (b) developing a large ideal spectral data bank (ISDB) of irrigated areas for India, (c) adopting quantitative spectral matching techniques (SMTs) such as the spectral correlation similarity (SCS) R²-value, (d) establishing a comprehensive set of protocols for class identification and labeling, and (e) comparing the results with the National Census data of India and field-plot data gathered during this project for determining accuracies, uncertainties and errors. The study produced irrigated area maps and statistics of India at the national and the subnational (e.g., state, district) levels based on MODIS data from 2001–2003. The Total Area Available for Irrigation (TAAI) and Annualized Irrigated Areas (AIA) were 113 and 147 million hectares (MHa), respectively. The TAAI does not consider the intensity of irrigation, and its nearest equivalent is the net irrigated area in the Indian National Statistics. The AIA considers intensity of irrigation and is the equivalent of the “irrigation potential utilized (IPU)” reported by India’s Ministry of Water Resources (MoWR). The field-plot data collected during this project showed that the accuracy of the TAAI classes was 88%, with a 12% error of omission and a 32% error of commission. Comparisons between the AIA and IPU produced an R²-value of 0.84. However, AIA was consistently higher than IPU. The causes for the differences lay in both the traditional and the remote sensing approaches. The causes of uncertainties unique to traditional approaches were (a) inadequate accounting of minor irrigation (groundwater, small reservoirs and tanks), (b) unwillingness to share irrigated area statistics by the individual Indian states because of their stakes, (c) absence of comprehensive statistical analyses of reported data, and (d) subjectivity involved in the observation-based data collection process. The causes of uncertainties unique to remote sensing approaches were (a) irrigated area fraction estimates and related sub-pixel area computations and (b) the resolution of the imagery. The causes of uncertainties common to both traditional and remote sensing approaches were definitions and methodological issues.
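The spectral matching step scores a pixel's time series against class ideal spectra with a correlation-based similarity. A hedged, minimal version of an SCS-style R²-value (the series below are made up):

```python
# Spectral correlation similarity: squared correlation between a pixel's
# reflectance time series and an ideal class time series.
import numpy as np

pixel_ts = np.array([0.12, 0.18, 0.35, 0.52, 0.48, 0.30])
ideal_ts = np.array([0.10, 0.20, 0.33, 0.55, 0.45, 0.28])

r = np.corrcoef(pixel_ts, ideal_ts)[0, 1]
scs_r2 = r ** 2     # values near 1 indicate a good match to the class
```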

Journal ArticleDOI
TL;DR: Small satellites are already powerful tools for monitoring global, regional and local phenomena, and their application spectrum will even broaden based on the ongoing development in many areas of technology and observation techniques.
Abstract: There is an increasing need for Earth Observation (EO) missions to meet the information requirements in connection with Global Change studies. Small and cost-effective missions are powerful tools for reacting flexibly to information requirements with space-borne solutions. Small satellite missions can be conducted relatively quickly and inexpensively by using commercial off-the-shelf technologies, or they can be enhanced by using advanced technologies. A new class of advanced small satellites, including autonomously operating “intelligent” satellites, may be created, opening new fields of application. The increasing number of small satellites and their applications drive developments in the fields of small launchers, small ground station networks, cost-effective data distribution methods, and cost-effective management and quality assurance procedures. Small satellite missions have many advantages, such as more frequent mission opportunities, a faster return of data, a larger variety of missions, more rapid expansion of the technical and/or scientific knowledge base, greater involvement of small industry, and feasibility for universities and other institutions. This paper deals with general trends in the field of small satellite missions for Earth observation. Special attention is given to the potential spatial, spectral, and temporal resolution of small-satellite-based systems. Examples show that small satellites also offer the unique possibility of installing affordable constellations that provide good daily coverage of the globe and/or allow us to observe dynamic phenomena. The facts and examples given in this paper lead to the conclusion that small satellites are already powerful tools for monitoring global, regional and local phenomena, and that in the future their application spectrum will broaden further, based on ongoing developments in many areas of technology and observation techniques.

Journal ArticleDOI
TL;DR: In this article, the authors compare the results derived from these two levels of sampling intensities (one, the global, for the Brazilian Amazon; the other, national, for French Guiana) to estimates derived from the official inventories.
Abstract: A global systematic sampling scheme has been developed by the UN FAO and the EC TREES project to estimate rates of deforestation at global or continental levels at intervals of 5 to 10 years. This global scheme can be intensified to produce results at the national level. In this paper, using surrogate observations, we compare the deforestation estimates derived from these two levels of sampling intensity (one, the global, for the Brazilian Amazon; the other, national, for French Guiana) to estimates derived from the official inventories. We also report the precisions that are achieved given the sampling errors and, in the case of French Guiana, compare such precision with the official inventory precision. We extract nine sample data sets from the official wall-to-wall deforestation map derived from satellite interpretations produced for the Brazilian Amazon for the years 2002 to 2003. This global sampling scheme gives an estimate of 2.81 million ha of deforestation (mean of nine simulated replicates) with a standard error of 0.10 million ha. This compares with the full population estimate from the wall-to-wall interpretations of 2.73 million ha deforested, which is within one standard error of our sampling test estimate. The relative difference between the mean estimate from the sampling approach and the full population estimate is 3.1%, and the standard error represents 4.0% of the full population estimate. This global sampling is then intensified to a territorial level with a case study over French Guiana to estimate deforestation between the years 1990 and 2006. For the historical reference period, 1990, Landsat-5 Thematic Mapper data were used. A coverage of SPOT-HRV imagery at 20 m × 20 m resolution acquired at the Cayenne receiving station in French Guiana was used for the year 2006. Our estimates from the intensified global sampling scheme over French Guiana are compared with those produced by the national authority to report on deforestation rates under the Kyoto protocol rules for its overseas department. The latter estimates come from a sample of nearly 17,000 plots analyzed from the same spatial imagery acquired between 1990 and 2006. This sampling scheme is derived from the traditional forest inventory methods carried out by the IFN (Inventaire Forestier National). Our intensified global sampling scheme leads to an estimate of 96,650 ha deforested between 1990 and 2006, which is within the 95% confidence interval of the IFN sampling scheme; the latter gives an estimate of 91,722 ha, representing a relative difference from the IFN of 5.4%. These results demonstrate that the intensification of the global sampling scheme can provide forest area change estimates close to those achieved by official forest inventories. Such methods could be used by developing countries to demonstrate that they are fulfilling requirements for reducing emissions from deforestation in the framework of a REDD (Reducing Emissions from Deforestation in Developing Countries) mechanism under discussion within the United Nations Framework Convention on Climate Change (UNFCCC). Monitoring systems at national levels in tropical countries can also benefit from pan-tropical and regional observations, to ensure consistency between different national monitoring systems.
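The core statistical machinery is ordinary design-based estimation: a systematic sample of interpreted cells yields a total with a standard error. A toy numerical version under stated assumptions (synthetic population, simple random-sampling variance formula):

```python
# Estimate a deforested-area total from a systematic 1% sample.
import numpy as np

rng = np.random.default_rng(4)
population = rng.binomial(1, 0.03, size=100_000)   # 1 = deforested cell
sample = population[::100]                          # systematic sample

p_hat = sample.mean()
se = np.sqrt(p_hat * (1 - p_hat) / sample.size)     # SRS approximation
print(f"estimate: {p_hat * population.size:.0f} cells "
      f"± {se * population.size:.0f} (1 s.e.)")
```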

Journal ArticleDOI
TL;DR: In this paper, the authors use Dijkstra's algorithm to find a seam-line that minimizes an objective function, and propose a quality measure defined as the sum of a fixed number of top mismatch scores.
Abstract: This paper presents a novel algorithm that selects seam-lines for mosaicking image patches. This technique uses Dijkstra’s algorithm to find a seam-line with the minimal objective function. Since a segment of seam-line with significant mismatch, even if it is short, is more visible than a lengthy one with small differences, a direct summation of the mismatch scores is inadequate. Limiting the level of the maximum difference along a seam-line should be part of the objective in the seam-line selection process. Our technique first determines this desired level of maximum difference, then applies Dijkstra’s algorithm to find the best seam-line. A quantitative measure to evaluate a seam-line is proposed. The measure is defined as the sum of a fixed number of top mismatch scores. The proposed algorithm is compared with other techniques quantitatively and visually on various types of images.
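A hedged sketch of the core search (Dijkstra over a per-pixel mismatch grid, without the paper's cap on the maximum difference along the path): find the cheapest 4-connected seam from the top row to the bottom row of the overlap region.

```python
# Dijkstra seam search on a mismatch-cost grid.
import heapq
import numpy as np

def best_seam_cost(cost):
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    dist[0, :] = cost[0, :]
    pq = [(dist[0, j], 0, j) for j in range(w)]
    heapq.heapify(pq)
    while pq:
        d, i, j = heapq.heappop(pq)
        if d > dist[i, j]:
            continue
        if i == h - 1:                       # reached the bottom row
            return d
        for di, dj in ((1, 0), (0, -1), (0, 1)):
            ni, nj = i + di, j + dj
            if 0 <= nj < w and d + cost[ni, nj] < dist[ni, nj]:
                dist[ni, nj] = d + cost[ni, nj]
                heapq.heappush(pq, (dist[ni, nj], ni, nj))

overlap_a = np.random.rand(64, 64)
overlap_b = np.random.rand(64, 64)
print(f"best seam cost: {best_seam_cost(np.abs(overlap_a - overlap_b)):.3f}")
```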

Journal ArticleDOI
TL;DR: A unified approach to self-calibration of terrestrial laser scanners is presented, where the parameters in a least-squares adjustment are treated as observations by assigning appropriate weights to them, and precise knowledge of the horizontal coordinates of the scanner centre helped greatly to achieve low correlation between these parameters and the zero error.
Abstract: In recent years, the method of self-calibration widely used in photogrammetry has been found suitable for the estimation of systematic errors in terrestrial laser scanners. Since high correlations can be present between the estimated parameters, ways to reduce them have to be found. This paper presents a unified approach to self-calibration of terrestrial laser scanners, where the parameters in a least-squares adjustment are treated as observations by assigning appropriate weights to them. The higher these weights are, the lower the parameter correlations are expected to be. Self-calibration of a pulsed laser scanner, a Leica Scan Station, was performed with the unified approach. The scanner position and orientation were determined during the measurements with the help of a total station, and the point clouds were directly georeferenced. The significant systematic errors were the zero error in the laser rangefinder and the vertical circle index error. Most parameter correlations were comparatively low. In particular, precise knowledge of the horizontal coordinates of the scanner centre helped greatly to achieve low correlation between these parameters and the zero error. The approach was shown to be advantageous compared to an adjustment with stochastic (weighted) inner constraints, where the parameter correlations were higher. At the same time, the collimation error could not be estimated reliably due to its high correlation with the scanner azimuth, caused by the limited vertical distribution of the targets in the calibration field. While this problem can be solved for a scanner with a nearly spherical field-of-view, it will complicate the calibration of scanners with a limited vertical field-of-view. Investigations into the influence of the precision of the scanner position and levelling on the adjustment results led to two important findings. First, it is not necessary to level the scanner during the measurements when using the unified approach, since the parameter correlations are relatively low anyway. Second, the scanner position has to be known with a precision of about 1 mm in order to get a reliable estimate of the zero error.

Journal ArticleDOI
TL;DR: In this paper, an energy model for building footprint extraction from high-resolution digital elevation models (≤1 m) in urban areas is presented, which is based on stochastic geometry and in particular on marked point processes of rectangles.
Abstract: In the past two decades, building detection and reconstruction from remotely sensed data has been an active research topic in the photogrammetric and remote sensing communities. Recently, effective high level approaches have been developed, i.e., those involving the minimization of an energetic formulation. Yet, their efficiency has to be balanced against the amount of processing power required to obtain good results. In this paper, we introduce an original energetic model for building footprint extraction from high resolution digital elevation models (≤1 m) in urban areas. Our goal is to formulate the energy in an efficient way, easy to parametrize and fast to compute, in order to get an effective process that still provides good results. Our work is based on stochastic geometry, and in particular on marked point processes of rectangles. We therefore try to obtain a reliable object configuration described by a collection of rectangular building footprints. To do so, an energy function made up of two terms is defined: the first term measures the adequacy of the objects with respect to the data and the second one has the ability to favour or penalize some footprint configurations based on prior knowledge (alignment, overlapping, …). To minimize the global energy, we use a Reversible Jump Markov Chain Monte Carlo (RJMCMC) sampler coupled with a simulated annealing algorithm, leading to an optimal configuration of objects. Various results from different areas and resolutions are presented and evaluated. Our work is also compared with an already existing methodology based on the same mathematical framework that uses a much more complex energy function. We show how we obtain similarly good results with a high computational efficiency (between 50 and 100 times faster) using a simplified energy that requires a single data-independent parameter, compared to more than 20 inter-related and hard-to-tune parameters.
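The optimization combines an RJMCMC sampler with simulated annealing. The paper's sampler proposes births, deaths and moves of rectangles; the toy below shows only the annealing skeleton (Metropolis acceptance plus geometric cooling) on a one-dimensional stand-in energy.

```python
# Simulated-annealing skeleton: propose, accept by the Metropolis rule, cool.
import math
import random

def energy(x):                 # stand-in for the footprint-configuration energy
    return (x - 3.0) ** 2

random.seed(0)
x, T = 10.0, 5.0
for _ in range(5000):
    cand = x + random.uniform(-0.5, 0.5)      # stand-in for a birth/death/move
    dE = energy(cand) - energy(x)
    if dE < 0 or random.random() < math.exp(-dE / T):
        x = cand
    T *= 0.999                                 # geometric cooling schedule
print(f"annealed estimate: {x:.2f} (optimum 3.0)")
```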

Journal ArticleDOI
TL;DR: This study develops a structured analysis method to generalize DEM data through the identification of minor valleys and the filling of the corresponding depressions, and is able to retain the main geographical characteristics more effectively in terrain representation.
Abstract: As an important method of terrain representation, a DEM usually needs to be generalized at multiple resolutions in order to adapt to different applications. The preservation of main landscape features is an important constraint in DEM generalization. The traditional generalization method based on signal processing by resampling or low-pass filtering is just a data compression operation rather than the abstraction of real information. This study develops a structured analysis method to generalize DEM data through the identification of minor valleys and filling the corresponding depression positions. The generalization process has two steps: geographic decision and geometric operation. According to their hydrological significance, the unimportant valley branches are detected and their corresponding coverage is filled by raising the terrain to make the terrain surface smoother. In contrast to the conventional algorithms based on image processing, this method is able to retain the main geographical characteristics more effectively in terrain representation.

Journal ArticleDOI
TL;DR: This paper shows that terrain-induced shadows can enhance bi-directional reflectance distribution function variation and negatively bias the clumping index (i.e., indicating more vegetation clumping) in rugged terrain.
Abstract: The clumping index measures the spatial aggregation (clumped, random and regular) of foliage elements. The global mapping of the clumping index with a limited eight-month multi-angular POLDER 1 dataset is expanded by integrating new, complete year-round observations from POLDER 3. We show that terrain-induced shadows can enhance bi-directional reflectance distribution function variation and negatively bias the clumping index (i.e. indicating more vegetation clumping) in rugged terrain. Using a global high-resolution digital elevation model, a topographic compensation function is devised to correct for this terrain effect. The clumping index reductions can reach up to 30% from the topographically non-compensated values, depending on terrain complexity and land cover type. The new global clumping index map is compared with an assembled set of field measurements from 32 different sites, covering four continents and diverse biomes.

Journal ArticleDOI
TL;DR: A stereo system built around a probabilistic environment model which fuses evidence from dense 3D reconstruction and image-based pedestrian detection into a consistent interpretation of the observed scene, and a multi-hypothesis tracker to reconstruct the pedestrians’ trajectories in 3D coordinates over time.
Abstract: We report on a stereo system for 3D detection and tracking of pedestrians in urban traffic scenes. The system is built around a probabilistic environment model which fuses evidence from dense 3D reconstruction and image-based pedestrian detection into a consistent interpretation of the observed scene, and a multi-hypothesis tracker to reconstruct the pedestrians’ trajectories in 3D coordinates over time. Experiments on real stereo sequences recorded in busy inner-city scenarios are presented, in which the system achieves promising results.

Journal ArticleDOI
Qi Chen
TL;DR: In this paper, the authors used airborne lidar data to assess the lowest two GLAS Gaussian peaks for terrain elevation estimation over mountainous forest areas in North Carolina, and found that the lowest peak tends to underestimate ground elevation; terrain steepness (slope) and canopy height have the highest correlation with the underestimation.
Abstract: Gaussian decomposition has been used to extract terrain elevation from waveforms of the satellite lidar GLAS (Geoscience Laser Altimeter System), on board ICESat (Ice, Cloud, and land Elevation Satellite). The common assumption is that one of the extracted Gaussian peaks, especially the lowest one, corresponds to the ground. However, Gaussian decomposition is usually complicated over sloped areas due to the broadened signals from both the terrain and the objects above it. It is a critical and pressing research issue to quantify and understand the correspondence between Gaussian peaks and ground elevation. This study uses ∼2000 km² of airborne lidar data to assess the lowest two GLAS Gaussian peaks for terrain elevation estimation over mountainous forest areas in North Carolina. Airborne lidar data were used to extract not only ground elevation, but also terrain and canopy features such as slope and canopy height. Based on the analysis of a total of ∼500 GLAS shots, it was found that (1) the lowest peak tends to underestimate ground elevation, with terrain steepness (slope) and canopy height having the highest correlation with the underestimation, (2) the second-lowest peak is, on average, closer to the ground elevation over mountainous forest areas, and (3) the stronger peak among the lowest two is closest to the ground for both open terrain and mountainous forest areas. It is expected that this assessment will shed light on future algorithm improvements and/or better use of the GLAS products for terrain elevation estimation.
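Gaussian decomposition itself is a curve fit: model the waveform as a sum of Gaussians and read off the peak positions. A minimal two-peak sketch on a synthetic waveform (real GLAS processing fits more components and handles noise and slope broadening, as discussed above):

```python
# Fit a two-Gaussian model to a synthetic lidar waveform.
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(t, a1, m1, s1, a2, m2, s2):
    g = lambda a, m, s: a * np.exp(-0.5 * ((t - m) / s) ** 2)
    return g(a1, m1, s1) + g(a2, m2, s2)

t = np.linspace(0, 100, 500)                       # relative time bins
waveform = two_gaussians(t, 1.0, 40, 4, 0.6, 70, 6)
waveform += np.random.default_rng(3).normal(0, 0.01, t.size)

params, _ = curve_fit(two_gaussians, t, waveform, p0=[1, 35, 5, 0.5, 75, 5])
lowest_peak = max(params[1], params[4])            # latest return = lowest surface
```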

Journal ArticleDOI
TL;DR: In this article, a pre-screened and normalized multiple endmember spectral mixture analysis (PNMESMA) method was proposed for estimating the impervious surface area (ISA) fraction in Lake Kasumigaura Basin, Japan.
Abstract: The impervious surface area (ISA) has emerged not only as an indicator of the degree of urbanization, but also as a major indicator of environmental quality for drainage basin management. However, since almost all of the methods for estimating ISA have been developed for urban environments, it is questionable whether these methods can be successfully applied to drainage basins, such as those found in Japan, which usually have more complicated vegetation components (e.g. paddy field, plowed field and dense forest). This paper presents a pre-screened and normalized multiple endmember spectral mixture analysis (PNMESMA) method, which includes a new endmember selection strategy and an integration of the normalized spectral mixture analysis (NSMA) and multiple endmember spectral mixture analysis (MESMA), for estimating the ISA fraction in Lake Kasumigaura Basin, Japan. The proposed method is superior to previous SMA- or NSMA-based methods for drainage basin environments in that its estimation error is much smaller. The overall root mean square error was reduced to 5.2%, and no obvious underestimation or overestimation occurred for high or low ISA areas. The assessment of environmental quality in Lake Kasumigaura Basin using the ISA fraction showed that the basin has been in the impacted category since 1987, and that in the two decades since, the environmental quality has continued to decline. If this decline continues, then Lake Kasumigaura Basin will fall into the degraded category by 2017.
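At the heart of any MESMA-style method is per-pixel linear unmixing: express each pixel's spectrum as a non-negative combination of endmember spectra. A hedged sketch with made-up endmembers (the paper's endmember selection and normalization steps are not shown):

```python
# Linear spectral unmixing by non-negative least squares.
import numpy as np
from scipy.optimize import nnls

# Columns: endmember spectra (impervious, vegetation, soil) over 4 bands.
E = np.array([[0.30, 0.05, 0.20],
              [0.35, 0.08, 0.25],
              [0.40, 0.45, 0.30],
              [0.45, 0.30, 0.35]])

pixel = 0.5 * E[:, 0] + 0.4 * E[:, 1] + 0.1 * E[:, 2]   # synthetic mixture
fractions, _ = nnls(E, pixel)
print(fractions)    # recovers approximately [0.5, 0.4, 0.1]
```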

Journal ArticleDOI
TL;DR: In this paper, simulated TLS data are collected from surfaces up to ∼1 m² created from regular arrays of uniform spheres (sphere diameters of 10 to 100 mm) and irregular arrays of mixed spheres (median sphere diameters of 16 to 94 mm).
Abstract: Terrestrial Laser Scanning (TLS) is increasingly being used to collect mm-resolution surface data from a broad range of environments. When scanning complex surfaces, interactions between the surface topography, laser footprint and scanner precision can introduce errors into the point cloud. Quantification of these errors is, however, limited by the availability of independent measurement techniques. This research presents simulated TLS as a new approach to error quantification. Two sets of experiments are presented. The first set demonstrates that simulated TLS is able to reproduce real TLS data from a plane and a pebble. The second set uses simulated TLS to assess a methodology developed for the collection and processing of field TLS data. Simulated TLS data were collected from surfaces up to ∼1 m² created from regular arrays of uniform spheres (sphere diameters of 10 to 100 mm) and irregular arrays of mixed spheres (median sphere diameters of 16 to 94 mm). These data were analysed to (i) assess the effectiveness of the processing methodology at removing erroneous points; (ii) quantify the magnitude of errors in a digital surface model (DSM) interpolated from the processed point cloud; and (iii) investigate the extent to which the interpolated DSMs retained the geometric properties of the original surfaces. The processing methodology was found to be effective, especially on data from coarser surfaces, with the retained points typically having an inter-quartile range (IQR) of point errors of ∼2 mm. DSM errors varied as a function of sphere size and packing, with DSM errors having an IQR of ∼2 mm for the regular surfaces and ∼4 mm for the irregular surfaces. Finally, whilst for the finer surfaces point and DSM errors were a substantial proportion of the sphere diameters, geometrical analysis indicated that the DSMs still reproduced properties of the original surface such as semivariance and some percentiles of the surface elevation distribution.

Journal ArticleDOI
TL;DR: A low-cost outdoor mobile AR application to integrate buildings of different urban spaces with real-time orientation and tracking in combined physical and virtual city environments, merging close-range photogrammetry and AR.
Abstract: Close-range photogrammetry is based on the acquisition of imagery to make accurate measurements and, eventually, three-dimensional (3D) photo-realistic models. These models are a photogrammetric product per se. They are usually integrated into virtual reality scenarios where additional data such as sound, text or video can be introduced, leading to multimedia virtual environments. These environments allow users both to navigate and to interact on different platforms such as desktop PCs, laptops and small hand-held devices (mobile phones or PDAs). In recent years, a new technology derived from virtual reality has emerged: Augmented Reality (AR), which is based on mixing real and virtual environments to enhance human interaction and real-life navigation. The synergy of AR and photogrammetry opens up new possibilities in the field of 3D data visualization, navigation and interaction, far beyond the traditional static navigation and interaction in front of a computer screen. In this paper we introduce a low-cost outdoor mobile AR application that integrates buildings of different urban spaces. High-accuracy 3D photo-models derived from close-range photogrammetry are integrated into real (physical) urban worlds. The augmented environment presented herein requires a see-through video head-mounted display (HMD) for visualization, whereas the user's movement and navigation in the real world are tracked with the help of an inertial navigation sensor. After introducing the basics of AR technology, the paper deals with real-time orientation and tracking in combined physical and virtual city environments, merging close-range photogrammetry and AR. There are, however, some complex software issues, which are discussed in the paper.