
Showing papers in "Isprs Journal of Photogrammetry and Remote Sensing in 2005"


Journal ArticleDOI
TL;DR: This surface matching technique is a generalization of the least squares image matching concept and offers high flexibility for any kind of 3D surface correspondence problem, as well as statistical tools for the analysis of the quality of final matching results.
Abstract: The automatic co-registration of point clouds, representing 3D surfaces, is a relevant problem in 3D modeling. This multiple registration problem can be defined as a surface matching task. We treat it as least squares matching of overlapping surfaces. The surface may have been digitized/sampled point by point using a laser scanner device, a photogrammetric method or other surface measurement techniques. Our proposed method estimates the transformation parameters of one or more 3D search surfaces with respect to a 3D template surface, using the Generalized Gauss–Markoff model, minimizing the sum of squares of the Euclidean distances between the surfaces. This formulation gives the opportunity of matching arbitrarily oriented 3D surface patches. It fully considers 3D geometry. Besides the mathematical model and execution aspects we address the further extensions of the basic model. We also show how this method can be used for curve matching in 3D space and matching of curves to surfaces. Some practical examples based on the registration of close-range laser scanner and photogrammetric point clouds are presented for the demonstration of the method. This surface matching technique is a generalization of the least squares image matching concept and offers high flexibility for any kind of 3D surface correspondence problem, as well as statistical tools for the analysis of the quality of final matching results.
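
A minimal sketch of the underlying idea, not the authors' implementation: estimate a rigid transform that aligns a "search" point cloud to a template by least-squares minimization of point-to-nearest-point Euclidean distances. The six-parameter parameterization, the synthetic clouds and the nearest-neighbour distance proxy are all illustrative assumptions; the paper's Generalized Gauss–Markoff formulation and its quality statistics are not reproduced here.

```python
# Sketch only: least-squares alignment of a search point cloud to a template
# cloud by minimizing Euclidean distances to the nearest template points.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial import cKDTree

def rotation(rx, ry, rz):
    # rotation matrix from three Euler angles (radians)
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def residuals(params, search, tree):
    tx, ty, tz, rx, ry, rz = params
    moved = search @ rotation(rx, ry, rz).T + np.array([tx, ty, tz])
    dist, _ = tree.query(moved)      # Euclidean distance to nearest template point
    return dist

template = np.random.default_rng(0).uniform(0, 1, (2000, 3))   # stand-in template points
search = template + np.array([0.05, -0.02, 0.03])              # search cloud = shifted copy

sol = least_squares(residuals, x0=np.zeros(6), args=(search, cKDTree(template)))
print("estimated translation:", sol.x[:3])   # roughly the negative of the applied shift
```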

569 citations


Journal ArticleDOI
TL;DR: Satellite remote sensing is providing a systematic, synoptic framework for advancing scientific knowledge of the Earth as a complex system of geophysical phenomena that, directly and through interacting processes, often lead to natural hazards as discussed by the authors.
Abstract: Satellite remote sensing is providing a systematic, synoptic framework for advancing scientific knowledge of the Earth as a complex system of geophysical phenomena that, directly and through interacting processes, often lead to natural hazards. Improved and integrated measurements along with numerical modeling are enabling a greater understanding of where and when a particular hazard event is most likely to occur and result in significant socioeconomic impact. Geospatial information products derived from this research increasingly are addressing the operational requirements of decision support systems used by policy makers, emergency managers and responders from international and federal to regional, state and local jurisdictions. This forms the basis for comprehensive risk assessments and better-informed mitigation planning, disaster assessment and response prioritization. Space-based geodetic measurements of the solid Earth with the Global Positioning System, for example, combined with ground-based seismological measurements, are yielding the principal data for modeling lithospheric processes and for accurately estimating the distribution of potentially damaging strong ground motions which is critical for earthquake engineering applications. Moreover, integrated with interferometric synthetic aperture radar, these measurements provide spatially continuous observations of deformation with sub-centimeter accuracy. Seismic and in situ monitoring, geodetic measurements, high-resolution digital elevation models (e.g. from InSAR, Lidar and digital photogrammetry) and imaging spectroscopy (e.g. using ASTER, MODIS and Hyperion) are contributing significantly to volcanic hazard risk assessment, with the potential to aid land use planning in developing countries where the impact of volcanic hazards to populations and lifelines is continually increasing. Remotely sensed data play an integral role in reconstructing the recent history of the land surface and in predicting hazards due to flood and landslide events. Satellite data are addressing diverse observational requirements that are imposed by the need for surface, subsurface and hydrologic characterization, including the delineation of flood and landslide zones for risk assessments. Short- and long-term sea-level change and the impact of ocean-atmosphere processes on the coastal land environment, through flooding, erosion and storm surge for example, define further requirements for hazard monitoring and mitigation planning. The continued development and application of a broad spectrum of satellite remote sensing systems and attendant data management infrastructure will contribute needed baseline and time series data, as part of an integrated global observation strategy that includes airborne and in situ measurements of the solid Earth. Multi-hazard modeling capabilities, in turn, will result in more accurate forecasting and visualizations for improving the decision support tools and systems used by the international disaster management community.

444 citations


Journal ArticleDOI
TL;DR: The purpose of this paper is to discuss how SOLAP concepts support spatio-temporal exploration of data and to present the geovisualization, interactivity, and animation features of the SOLAP software developed by the research group.
Abstract: To support their analytical processes, today's organizations deploy data warehouses and client tools such as OLAP (On-Line Analytical Processing) to access, visualize, and analyze their integrated, aggregated and summarized data. Since a large part of these data have a spatial component, better client tools are required to take full advantage of the geometry of the spatial phenomena or objects being analyzed. In this regard, Spatial OLAP (SOLAP) technology offers promising possibilities. A SOLAP tool can be defined as “a type of software that allows rapid and easy navigation within spatial databases and that offers many levels of information granularity, many themes, many epochs and many display modes synchronized or not: maps, tables and diagrams” [Bédard, Y., Proulx, M.J., Rivest, S., 2005. Enrichissement du OLAP pour l'analyse géographique: exemples de réalisation et différentes possibilités technologiques. In: Bentayeb, F., Boussaid, O., Darmont, J., Rabaseda, S. (Eds.), Entrepôts de Données et Analyse en ligne, RNTI B_1. Paris: Cépaduès, pp. 1–20]. SOLAP tools offer a new user interface and are meant to be client applications sitting on top of multi-scale spatial data warehouses or datacubes. As they are based on the multidimensional paradigm, they facilitate the interactive spatio-temporal exploration of data. The purpose of this paper is to discuss how SOLAP concepts support spatio-temporal exploration of data and then to present the geovisualization, interactivity, and animation features of the SOLAP software developed by our research group. This paper first reviews the general concepts behind OLAP and SOLAP systems. This is followed by a discussion of how these SOLAP concepts support spatio-temporal exploration of data. In the subsequent section, SOLAP software is introduced along with features that enable geovisualization, interactivity and animation.

175 citations


Journal ArticleDOI
TL;DR: Qualitative and quantitative results obtained for benchmark image pairs show that the proposed algorithm outperforms most state-of-the-art matching algorithms currently listed on the Middlebury stereo evaluation website.
Abstract: This work describes a stereo algorithm that takes advantage of image segmentation, assuming that disparity varies smoothly inside a segment of homogeneous colour and depth discontinuities coincide with segment borders. Image segmentation allows our method to generate correct disparity estimates in large untextured regions and precisely localize depth boundaries. The disparity inside a segment is represented by a planar equation. To derive the plane model, an initial disparity map is generated. We use a window-based approach that exploits the results of segmentation. The size of the match window is chosen adaptively. A segment's planar model is then derived by robust least squared error fitting using the initial disparity map. In a layer extraction step, disparity segments that are found to be similar according to a plane dissimilarity measurement are combined to form a single robust layer. We apply a modified mean-shift algorithm to extract clusters of similar disparity segments. Segments of the same cluster build a layer, the plane parameters of which are computed from its spatial extent using the initial disparity map. We then optimize the assignment of segments to layers using a global cost function. The quality of the disparity map is measured by warping the reference image to the second view and comparing it with the real image. Z-buffering enforces visibility and allows the explicit detection of occlusions. The cost function measures the colour dissimilarity between the warped and real views, and penalizes occlusions and neighbouring segments that are assigned to different layers. Since the problem of finding the assignment of segments to layers that minimizes this cost function is NP-complete, an efficient greedy algorithm is applied to find a local minimum. Layer extraction and assignment are alternately applied. Qualitative and quantitative results obtained for benchmark image pairs show that the proposed algorithm outperforms most state-of-the-art matching algorithms currently listed on the Middlebury stereo evaluation website. The technique achieves particularly good results in areas with depth discontinuities and related occlusions, where missing stereo information is substituted from surrounding regions. Furthermore, we apply the algorithm to a self-recorded image set and show 3D visualizations of the derived results.
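
As a rough illustration of the per-segment plane model (an assumed stand-in, not the paper's robust fitting routine), the sketch below fits a disparity plane d = a·x + b·y + c to one segment's initial disparities and rejects outliers iteratively.

```python
# Sketch only: fit a planar disparity model to the pixels of one segment,
# with a simple iterative outlier-rejection loop standing in for robust fitting.
import numpy as np

def fit_disparity_plane(x, y, d, n_iter=3, thresh=1.0):
    mask = np.ones(len(d), dtype=bool)
    for _ in range(n_iter):
        A = np.column_stack([x[mask], y[mask], np.ones(mask.sum())])
        coeffs, *_ = np.linalg.lstsq(A, d[mask], rcond=None)
        resid = np.abs(np.column_stack([x, y, np.ones(len(d))]) @ coeffs - d)
        mask = resid < thresh                 # drop pixels far from the current plane
    return coeffs                             # (a, b, c)

# toy segment: disparity plane 0.1*x - 0.05*y + 20 plus noise and a few mismatches
rng = np.random.default_rng(1)
x, y = rng.uniform(0, 100, 500), rng.uniform(0, 100, 500)
d = 0.1 * x - 0.05 * y + 20 + rng.normal(0, 0.2, 500)
d[:20] += 15                                  # occluded / mismatched pixels
print(fit_disparity_plane(x, y, d))           # close to (0.1, -0.05, 20)
```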

164 citations


Journal ArticleDOI
TL;DR: In this article, a new individual tree-based algorithm for determining forest biomass using small footprint LiDAR data was developed and tested, which combines computer vision and optimization techniques to determine the biomass present in areas without ground measurements.
Abstract: A new individual tree-based algorithm for determining forest biomass using small footprint LiDAR data was developed and tested. This algorithm combines computer vision and optimization techniques to become the first training data-based algorithm specifically designed for processing forest LiDAR data. The computer vision portion of the algorithm uses generic properties of trees in small footprint LiDAR canopy height models (CHMs) to locate trees and find their crown boundaries and heights. The ways in which these generic properties are used for a specific scene and image type are dependent on 11 parameters, nine of which are set using training data and the Nelder–Mead simplex optimization procedure. Training data consist of small sections of the LiDAR data and corresponding ground data. After training, the biomass present in areas without ground measurements is determined by developing a regression equation between properties derived from the LiDAR data of the training stands and biomass, and then applying the equation to the new areas. A first test of this technique was performed using 25 plots (radius = 15 m) in a loblolly pine plantation in central Virginia, USA (37.42N, 78.68W) that was not intensively managed, together with corresponding data from a LiDAR canopy height model (resolution = 0.5 m). Results show correlations (r) between actual and predicted aboveground biomass ranging between 0.59 and 0.82, and RMSEs between 13.6 and 140.4 t/ha depending on the selection of training and testing plots, and the minimum diameter at breast height (7 or 10 cm) of trees included in the biomass estimate. Correlations between LiDAR-derived and field-measured plot density estimates were low (0.22 ≤ r ≤ 0.56) but generally significant (at a 95% confidence level in most cases, based on a one-tailed test), suggesting that the program is able to properly identify trees. Based on the results it is concluded that the validation of the first training data-based algorithm for determining forest biomass using small footprint LiDAR data was a success, and future refinement and testing are merited.
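
The training step can be pictured as in the sketch below: a hypothetical two-parameter detector tuned with the Nelder–Mead simplex against training plots, followed by a regression of biomass on the LiDAR-derived metric. The detector, the data and all numbers are invented for illustration; the paper's nine tuned parameters and CHM processing are not reproduced.

```python
# Sketch only: Nelder-Mead tuning against training plots, then a biomass regression.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
true_counts = rng.integers(20, 60, size=10)               # field tree counts per plot
chm_peaks = true_counts + rng.normal(0, 3, size=10)       # stand-in CHM peak counts

def detected_counts(params):
    gain, offset = params                                  # hypothetical detector parameters
    return gain * chm_peaks + offset

def training_error(params):
    return np.sqrt(np.mean((detected_counts(params) - true_counts) ** 2))

fit = minimize(training_error, x0=[1.0, 0.0], method="Nelder-Mead")

# regression between the LiDAR-derived counts and plot biomass (toy values, t/ha)
biomass = 2.5 * true_counts + rng.normal(0, 5, size=10)
slope, intercept = np.polyfit(detected_counts(fit.x), biomass, 1)
print("tuned parameters:", fit.x)
print(f"biomass ~ {slope:.2f} * count + {intercept:.2f}")
```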

159 citations


Journal ArticleDOI
TL;DR: In this article, an exploratory data analysis is used to assess the potential of laser return type and return intensity as variables for classification of individual trees or forest stands according to species.
Abstract: Understanding your data through exploratory data analysis is a necessary first stage of data analysis particularly for observational data. The checking of data integrity and understanding the distributions, correlations and relationships between potentially important variables is a fundamental part of the analysis process prior to model development and hypothesis testing. In this paper, exploratory data analysis is used to assess the potential of laser return type and return intensity as variables for classification of individual trees or forest stands according to species. For narrow footprint lidar instruments that record up to two return amplitudes for each output pulse, the usual pre-classification of return data into first and last intensity returns camouflages the fact that a number of the return signals have only “single amplitude” (singular) returns. The importance of singular returns for species discrimination has received little discussion in the remote sensing literature. A map view of the different types of returns overlaid on field species data indicated that it is possible to visually distinguish between vegetation types that produce a high proportion of singular returns, compared to vegetation types that produce a lower proportion of singular returns, at least when using a specific laser footprint size. Using lidar data and the corresponding field data derived from a subtropical woodland area of South East Queensland, Australia, map scatterplots of return types combined with field data enabled, in some cases, visual discrimination at the individual tree level between White Cypress Pine (Callitris glaucophylla) and Poplar Box (Eucalyptus populnea). While a clear distinction between these two species was not always visually obvious at the individual tree level, due to other extraneous sources of variation in the dataset, the observation was supported in general at the site level. Sites dominated by Poplar Box generally exhibited a lower proportion of singular returns compared to sites dominated by Cypress Pine. While return intensity statistics for this particular dataset were not found to be as useful for classification as the proportions of laser return types, an examination of the return intensity data leads to an explanation of how return intensity statistics are affected by forest structure. Exploratory data analysis indicated that a large component of variation in the intensity of the return signals from a forest canopy is associated with reflections of only part of the laser footprint. Consequently, intensity return statistics for the forest canopy, such as average and standard deviation, are related not only to the reflective properties of the vegetation, but also to the larger scale properties of the forest such as canopy openness and the spacing and type of foliage components within individual tree crowns.
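
A small sketch of the exploratory statistic discussed above, with toy pulses and assumed proportions: the per-site fraction of singular returns, which the paper finds higher for Cypress Pine sites than for Poplar Box sites.

```python
# Sketch only: per-site proportion of "singular" returns (pulses with a single
# recorded amplitude) versus pulses with distinct first and last returns.
import numpy as np

rng = np.random.default_rng(3)
site = np.array([0] * 500 + [1] * 500)          # 0: Cypress-Pine-like, 1: Poplar-Box-like
n_returns = np.concatenate([
    rng.choice([1, 2], 500, p=[0.7, 0.3]),      # assumed higher singular proportion
    rng.choice([1, 2], 500, p=[0.4, 0.6]),      # assumed lower singular proportion
])

for s in np.unique(site):
    singular = np.mean(n_returns[site == s] == 1)
    print(f"site {s}: proportion of singular returns = {singular:.2f}")
```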

145 citations


Journal ArticleDOI
TL;DR: The paper presents a method of generating epipolar images which are suitable for stereo-processing with a field of view larger than 180° in vertical and horizontal viewing directions and surveys mathematical models to describe the projection.
Abstract: The paper describes calibration and epipolar rectification for stereo with fish-eye optics. While stereo processing of classical cameras is state of the art for many applications, stereo with fish-eye cameras have been much less discussed in literature. This paper discusses the geometric calibration and the epipolar rectification as pre-requisite for stereo processing with fish-eyes. First, it surveys mathematical models to describe the projection. Then the paper presents a method of generating epipolar images which are suitable for stereo-processing with a field of view larger than 180° in vertical and horizontal viewing directions. One example with 3D-point measuring from real fish-eye images demonstrates the feasibility of the calibration and rectification procedure.
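
One projection model commonly used for fish-eye lenses, and a plausible member of the family the paper surveys (though not necessarily its chosen model), is the equidistant mapping r = f·θ. The sketch below shows why it keeps rays more than 90° off-axis at a finite image radius, which is what makes fields of view beyond 180° representable; focal length and principal point are assumed values.

```python
# Sketch only: equidistant fish-eye projection r = f * theta.
import numpy as np

def project_equidistant(X, f, cx, cy):
    """Project camera-frame 3D points (N,3) with an equidistant fish-eye model."""
    x, y, z = X[:, 0], X[:, 1], X[:, 2]
    theta = np.arctan2(np.hypot(x, y), z)      # angle from the optical axis
    phi = np.arctan2(y, x)                     # azimuth around the optical axis
    r = f * theta                              # equidistant mapping
    return np.column_stack([cx + r * np.cos(phi), cy + r * np.sin(phi)])

# a ray 100 degrees off-axis, i.e. outside any pinhole camera's field of view
theta = np.deg2rad(100)
P = np.array([[np.sin(theta), 0.0, np.cos(theta)]])
print(project_equidistant(P, f=300.0, cx=512.0, cy=512.0))   # finite pixel coordinates
```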

142 citations


Journal ArticleDOI
TL;DR: In this paper, the downwelling diffuse attenuation coefficient of water around Roatan Island, Honduras was determined by analyzing the optical properties of seawater, and the results showed that a typical satellite sensor can penetrate up to 8 m in the blue band, 6 m in the green and 2 m in the red region.
Abstract: To characterize the water column, the diffuse attenuation coefficient of downwelling irradiance, Kd(z, λ) (m⁻¹), is one of the most important optical properties of seawater. The purpose of this research was to determine the downwelling diffuse attenuation coefficient of water around Roatan Island, Honduras. In situ Kd analysis showed low attenuation coefficient values in the green and blue, increasing exponentially beyond 570 nm. The blue, green and red portions of the spectrum showed Kd values of 0.138, 0.158, and 0.503 m⁻¹, respectively. Error analysis revealed a significantly high uncertainty in the red region (600–700 nm) and, as expected, low estimation uncertainty in the blue and green. When compared with IKONOS-derived Kd (490 nm), the differences were negligible, being 0.0084 and 0.0054 m⁻¹ for stations #1 and #2, respectively. Based on the fact that 90% of the diffusely reflected light from a water body comes from a surface layer of water of depth 1/Kd, the results showed that a typical satellite sensor (such as IKONOS) can penetrate up to 8 m in the blue band, 6 m in the green, and 2 m in the red region.
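
A quick arithmetic check of the 1/Kd penetration rule quoted above, using the reported band-averaged Kd values:

```python
# 90% of the diffusely reflected signal comes from the top 1/Kd of the water column.
Kd = {"blue": 0.138, "green": 0.158, "red": 0.503}   # m^-1, values from the abstract
for band, k in Kd.items():
    print(f"{band}: 1/Kd = {1.0 / k:.1f} m")
# blue ~7.2 m, green ~6.3 m, red ~2.0 m, consistent with the quoted
# ~8 m / 6 m / 2 m effective penetration depths for an IKONOS-class sensor.
```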

109 citations


Journal ArticleDOI
TL;DR: In this paper, the temporal backscatter of the rice crop in a predominant rice-growing region of West Bengal, India was analysed using RADARSAT Standard beam SAR data of different incidence angles.
Abstract: Temporal RADARSAT Standard beam SAR data of different incidence angles were analysed to study the temporal backscatter of the rice crop in a predominant rice-growing region of West Bengal, India. Correlation studies of backscatter with crop growth parameters were carried out. A second-order polynomial was the best fit obtained for crop age and crop height. Shallow angle data (> 40°) were found to be better correlated with crop height than steep angle (23°) data. An inversion algorithm was used to generate spatial maps of crop height and age. The results, validated over a village, showed an overall 90% accuracy.
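
A minimal sketch of the fit-and-invert idea with invented backscatter values (the paper's coefficients are not given here): fit a second-order polynomial of backscatter against crop height, then solve it for height given an observed backscatter.

```python
# Sketch only: second-order polynomial fit of backscatter vs. crop height and its inversion.
import numpy as np

height = np.array([10, 30, 50, 70, 90, 110])                    # crop height (cm), toy values
sigma0 = np.array([-16.0, -13.5, -11.8, -10.6, -10.0, -9.8])    # backscatter (dB), toy values

coeffs = np.polyfit(height, sigma0, 2)                    # sigma0 = a*h^2 + b*h + c

def invert_height(s0):
    a, b, c = coeffs
    roots = np.roots([a, b, c - s0])                      # solve a*h^2 + b*h + (c - s0) = 0
    real = roots[np.isreal(roots)].real
    return real[(real >= 0) & (real <= 150)]              # keep the physically plausible root

print(invert_height(-12.0))                               # estimated crop height in cm
```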

108 citations


Journal ArticleDOI
TL;DR: In this paper, a rule-based regression tree model was used to identify relationships between satellite-derived vegetation conditions, climatic drought indices, and biophysical data, including land-cover type, available soil water capacity, percent of irrigated farm land, and ecological type.
Abstract: Droughts are normal climate episodes, yet they are among the most expensive natural disasters in the world. Knowledge about the timing, severity, and pattern of droughts on the landscape can be incorporated into effective planning and decision-making. In this study, we present a data mining approach to modeling vegetation stress due to drought and mapping its spatial extent during the growing season. Rule-based regression tree models were generated that identify relationships between satellite-derived vegetation conditions, climatic drought indices, and biophysical data, including land-cover type, available soil water capacity, percent of irrigated farm land, and ecological type. The data mining method builds numerical rule-based models that find relationships among the input variables. Because the models can be applied iteratively with input data from previous time periods, the method makes it possible to provide predictions of vegetation conditions farther into the growing season based on earlier conditions. Visualizing the model outputs as mapped information (called VegPredict) provides a means to evaluate the model. We present prototype maps for the 2002 drought year for Nebraska and South Dakota and discuss potential uses for these maps.
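
As a loose stand-in for the rule-based regression tree (here a plain regression tree, with invented predictors and an invented vegetation-condition target), the sketch below shows the kind of model that maps drought indices and biophysical variables to vegetation response.

```python
# Sketch only: a regression tree relating drought indices and biophysical
# variables to a vegetation-condition index (all data synthetic).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(4)
n = 2000
spi = rng.normal(0, 1, n)                 # standardized precipitation index
pdsi = rng.normal(0, 2, n)                # Palmer drought severity index
soil_awc = rng.uniform(0.05, 0.25, n)     # available soil water capacity
irrigated = rng.uniform(0, 1, n)          # fraction of irrigated farmland
veg = 0.4 * spi + 0.2 * pdsi + 2.0 * soil_awc + 0.5 * irrigated + rng.normal(0, 0.2, n)

X = np.column_stack([spi, pdsi, soil_awc, irrigated])
model = DecisionTreeRegressor(max_depth=5).fit(X[:1500], veg[:1500])
print("R^2 on held-out pixels:", round(model.score(X[1500:], veg[1500:]), 3))
```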

101 citations


Journal ArticleDOI
TL;DR: The University of Florida and the Florida Department of Environmental Protection (FDEP) are collaborating on a program to improve the quantitative monitoring of Florida's beaches, which are subject to erosion and catastrophic damage from seasonal storms as discussed by the authors.
Abstract: The University of Florida (UF) and the Florida Department of Environmental Protection (FDEP) are collaborating on a program to improve the quantitative monitoring of Florida's beaches, which are subject to erosion and catastrophic damage from seasonal storms. Each year, a segment of the Florida coastline will be mapped using Airborne Laser Swath Mapping (ALSM) technology (also referred to as LIDAR). The ALSM surveys, conducted by UF staff and students, nominally extend from a few hundred feet offshore to about 1500 ft inland. GPS observations are manually collected by FDEP personnel and used to generate profiles across the beach for comparison with profiles generated from the ALSM observations. Results from the first survey segment completed under the program, covering approximately 35 miles of beaches in northeast Florida, are presented. Additional results that demonstrate the ability to precisely quantify changes in beach topography and volume using ALSM data are also presented.

Journal ArticleDOI
TL;DR: Calibration equipment and concepts for the airborne hyper-spectral push broom imaging spectrometer APEX (Airborne Prism EXperiment) are outlined and the high relevance to scientific objectives is highlighted.
Abstract: ESA is currently building the airborne hyper-spectral push broom imaging spectrometer APEX (Airborne Prism EXperiment), operating in the spectral range from 380 to 2500 nm. In the scope of the APEX project a large variety of characterization measurements will be performed, e.g., on-board characterization, frequent laboratory characterization, and vicarious calibration. The APEX instrument will only achieve its challenging measurement accuracy through regular calibration of the instrument between flight cycles. For this on-ground characterisation, a dedicated characterisation and calibration facility is necessary to enable a comprehensive and accurate calibration of the instrument. In view of the high relevance to scientific objectives, ESA is funding a "Calibration Home Base" (CHB) for APEX. It is located at DLR Oberpfaffenhofen and will be operational from 2006 on. The CHB provides all hard- and software tools required for radiometric, spectral and geometric on-ground characterisation and calibration of the instrument, its internal references and on-board attachments, and for measurements of polarisation and straylight sensitivity. This includes a test bed and the provision of the infrastructure. In this paper the calibration equipment and concepts are outlined.

Journal ArticleDOI
TL;DR: In this article, the authors used the digital photogrammetric technique to extract high-resolution digital elevation models and large-scale orthophotos from the volcano of Stromboli.
Abstract: After a period of anomalous activity affecting the Volcano of Stromboli (Aeolian volcanic arc, Italy), the “Sciara del Fuoco” slope, situated on the north-east flank of the island, was affected by major landslides on December 30, 2002. Recent lava accumulations starting from the beginning of the eruption (December 28, 2002) and a portion of the subaerial and submarine deposits were detached. As a result, tsunami waves several meters high affected the coasts of the island. After the event, monitoring activities, coordinated by the Italian Civil Protection Department, included systematic photogrammetric surveys. The digital photogrammetric technique was adopted to extract high-resolution digital elevation models and large-scale orthophotos. The comparison between the data collected before the eruption and those acquired on January 5, 2003, together with bathymetric data, made it possible to define the geometry and to estimate the volume of the surfaces involved in the landslides. The following 13 photogrammetric surveys (from January to September 2003) enabled the monitoring of the continuous and substantial morphological changes induced by both the lava flow and the evolution of the instability phenomena. The method adopted for the data analysis and the results obtained are described in the paper.
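
The volume estimation step can be sketched as a difference of co-registered elevation grids integrated over the cell area (synthetic DEMs and an assumed 1 m grid spacing; the actual surveys also combine bathymetric data):

```python
# Sketch only: volume change from the difference of two co-registered DEMs.
import numpy as np

cell = 1.0                                   # grid spacing in metres (assumed)
rng = np.random.default_rng(5)
dem_before = rng.uniform(0, 5, size=(200, 200))
dem_after = dem_before.copy()
dem_after[50:120, 60:150] -= 3.0             # a detached slab, 3 m deep on average

dz = dem_after - dem_before
removed = -dz[dz < 0].sum() * cell**2        # volume lost (m^3)
added = dz[dz > 0].sum() * cell**2           # volume gained (m^3)
print(f"removed ~{removed:.0f} m^3, added ~{added:.0f} m^3")
```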

Journal ArticleDOI
TL;DR: In this article, change detection techniques using co-registered high-resolution satellite imagery and archival digital aerial photographs have been used in conjunction with GPS to constrain the magnitude and timing of previously undocumented historical motion of the Salmon Falls landslide in south-central Idaho, USA.
Abstract: Change detection techniques using co-registered high-resolution satellite imagery and archival digital aerial photographs have been used in conjunction with GPS to constrain the magnitude and timing of previously undocumented historical motion of the Salmon Falls landslide in south-central Idaho, USA. The landslide has created natural dams of Salmon Falls Creek, resulting in the development of large lakes and a potential flooding hazard. Rapid motion (cm/year–m/year) of the relatively remote landslide was first reported in 1999, but significant horizontal motion (up to 10.8 m) is demonstrated between 1990 and 1998 by measuring changes in the locations of ground control points in a time-series of images. The total (three-dimensional) motion of the landslide prior to 2002 was calculated using the horizontal (two-dimensional) velocities obtained in the image change detection study and horizontal-to-vertical ratios of motion derived for the landslide in 2003–2004 collected from a network of autonomous GPS stations. The total historical motion that was estimated using this method averages about 12 m, which is in agreement with field observations.
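
A worked sketch of the combination step: the 10.8 m horizontal displacement quoted above, combined with an assumed horizontal-to-vertical ratio (the GPS-derived ratio is not stated here), gives a total three-dimensional displacement.

```python
# Sketch only: total 3D motion from a horizontal displacement and an H/V ratio.
import math

horizontal = 10.8      # m, image-derived horizontal displacement (from the abstract)
hv_ratio = 3.0         # horizontal-to-vertical ratio of motion (assumed value)

vertical = horizontal / hv_ratio
total = math.hypot(horizontal, vertical)
print(f"vertical ~{vertical:.1f} m, total 3D motion ~{total:.1f} m")
```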

Journal ArticleDOI
TL;DR: In this paper, a sampling method is proposed with the following criteria: unbiased estimators are easy to compute; it can be combined with stratification; within each stratum, sampling probability is proportional to the area of the sampling unit; and the geographic distribution of the sample is reasonably homogeneous.
Abstract: Sampling satellite images presents some specific characteristics: images overlap and many of them fall partially outside the studied region. A careless sampling may introduce an important bias. This paper illustrates the risk of bias and the efficiency improvements of systematic, pps (probability proportional to size) and stratified sampling. A sampling method is proposed with the following criteria: (a) unbiased estimators are easy to compute; (b) it can be combined with stratification; (c) within each stratum, sampling probability is proportional to the area of the sampling unit; and (d) the geographic distribution of the sample is reasonably homogeneous. Thiessen polygons computed on image centres are sampled through a systematic grid of points. The sampling rates in different strata are tuned by dividing the systematic grid into subgrids or replicates and taking for each stratum a certain number of replicates. The approach is illustrated with an application to the estimation of the geometric accuracy of Image2000, a Landsat ETM+ mosaic of the European Union.
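
A minimal sketch of criterion (c), with toy image centres and an assumed grid spacing: laying a systematic point grid over the region and assigning each grid point to its nearest image centre (i.e. to the Thiessen polygon it falls in) selects each image with probability proportional to its polygon area.

```python
# Sketch only: area-proportional selection of image footprints via a
# systematic point grid over their Thiessen (Voronoi) polygons.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(6)
centres = rng.uniform(0, 100, size=(40, 2))        # toy image centres

x0, y0 = rng.uniform(0, 10, size=2)                # random start of the systematic grid
xs, ys = np.meshgrid(np.arange(x0, 100, 10), np.arange(y0, 100, 10))
grid = np.column_stack([xs.ravel(), ys.ravel()])

_, nearest = cKDTree(centres).query(grid)          # Thiessen membership of each grid point
sampled = np.unique(nearest)                       # images hit by at least one grid point
print(len(sampled), "images selected:", sampled)
```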

Journal ArticleDOI
TL;DR: A novel approach for object recognition is presented based on neuro-fuzzy modelling where the learning capability of neural networks is introduced to the fuzzy recognition process by taking adaptable parameter sets into account which leads to the neuro- fuzzy approach.
Abstract: Three-dimensional object recognition and reconstruction (ORR) is a research area of major interest in computer vision and photogrammetry. Virtual cities, for example, are one of the exciting application fields of ORR that became very popular during the last decade. Natural and man-made objects of cities such as trees and buildings are complex structures, and the automatic recognition and reconstruction of these objects from digital aerial images, as well as from other data sources, is a big challenge. In this paper a novel approach for object recognition is presented based on neuro-fuzzy modelling. Structural, textural and spectral information is extracted and integrated in a fuzzy reasoning process. The learning capability of neural networks is introduced to the fuzzy recognition process by taking adaptable parameter sets into account, which leads to the neuro-fuzzy approach. Object reconstruction follows recognition seamlessly by using the recognition output and the descriptors which have been extracted for recognition. A first successful application of this new ORR approach is demonstrated for the three object classes ‘buildings’, ‘cars’ and ‘trees’ by using aerial colour images of an urban area of the town of Engen in Germany.

Journal ArticleDOI
TL;DR: A new algorithm, starting from a triangular mesh in 3D and following the subdivision paradigm will be presented, which yields a series of triangulated terrain surfaces with increasing point density and smaller angles between adjacent triangles, converging to a smooth surface.
Abstract: Current terrain modelling algorithms are not capable of reconstructing 3D surfaces, but are restricted to so-called 2.5D surfaces: for one planimetric position only one height may exist. The objective of this paper is to extend terrain relief modelling to 3D. In a 3D terrain model, overhangs and caves, cliffs and tunnels can be represented correctly. Random measurement errors, limitations in data sampling and the requirement for a smooth surface rule out a triangulation of the original measurements as the final terrain model. A new algorithm, starting from a triangular mesh in 3D and following the subdivision paradigm, will be presented. It is a stepwise refinement of a polygonal mesh, in which the location of the vertices on the next level is computed from the vertices on the current level. This yields a series of triangulated terrain surfaces with increasing point density and smaller angles between adjacent triangles, converging to a smooth surface. With the proposed algorithm, the special requirements of terrain modelling, e.g. breaklines, can be considered. The refinement process can be stopped as soon as a resolution suitable for a specific application is obtained. Examples of an overhang, of a bridge which is modelled as part of the terrain surface, and of a 2.5D terrain surface are presented. The implications of extending modelling to 3D are discussed for typical terrain model applications.

Journal ArticleDOI
TL;DR: In this paper, a detailed atmospheric correction method for NOAA/AVHRR images using 6S code, easily accessible data sets, and the images themselves is proposed, which includes a broad range of precipitation, elevation, and vegetation types.
Abstract: A detailed atmospheric correction method for NOAA/AVHRR images using the 6S code, easily accessible data sets, and the images themselves is proposed. The parameters that 6S requires (aerosol optical depth, precipitable water, ozone content, and elevation) were obtained using data from the images, the Total Ozone Mapping Spectrometer, and the GTOPO30 elevation data set, with reference to existing studies. The proposed methodology is validated through a case study of the Marsabit District, northern Kenya, which includes a broad range of precipitation, elevation, and vegetation types. The Normalized Difference Vegetation Index (NDVI) of desert, grassland, and forested areas from August 1987 to August 1988 was calculated after making the atmospheric correction. The intercept of the regression line between the reflectance before and after correction is almost the same for each land cover type; while the slope is around 1.7 to 1.8 for grassland/bushland, it is smaller in the desert, where the range of the NDVI is limited. The NDVI in dense vegetation is more sensitive to the atmospheric correction, which is a result of the effect of path radiance. After the atmospheric correction, the range of NDVI increased, with the characteristic that the greater the NDVI, the larger was the atmospheric effect. In the case study, as well, the NDVI increased after the atmospheric correction, especially in pixels with initially high NDVI. A more detailed, entirely pixel-by-pixel atmospheric correction requires individual pixel information for aerosol optical depth and precipitable water. This will be possible using data collected by recent sensors such as MODIS.
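
A schematic illustration of the before/after regression and the resulting NDVI shift described above, with an invented, purely linear "correction" standing in for the 6S output:

```python
# Sketch only: regress corrected on uncorrected red reflectance and compare NDVI.
import numpy as np

rng = np.random.default_rng(7)
red_toa = rng.uniform(0.05, 0.15, 1000)              # top-of-atmosphere reflectance
nir_toa = rng.uniform(0.20, 0.50, 1000)
red_cor = (red_toa - 0.03) * 1.1                     # assumed path-radiance removal + rescaling
nir_cor = (nir_toa - 0.02) * 1.15

ndvi_toa = (nir_toa - red_toa) / (nir_toa + red_toa)
ndvi_cor = (nir_cor - red_cor) / (nir_cor + red_cor)

slope, intercept = np.polyfit(red_toa, red_cor, 1)
print(f"red band: corrected = {slope:.2f} * uncorrected + {intercept:.3f}")
print(f"mean NDVI before {ndvi_toa.mean():.3f}, after {ndvi_cor.mean():.3f}")
```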

Journal ArticleDOI
TL;DR: In this paper, the problem of quantifying these errors is solved for rotating polygon mirror type lidar systems, and it is shown that the horizontal component of misalignment results in a scan line first being shifted across the track and then rotated around the vertical at the new center of the scan line.
Abstract: One of the major lidar error sources not yet analyzed in the literature is the tolerance of the laser beam alignment with respect to the scanning mirror. In this paper, the problem of quantifying these errors is solved for rotating polygon mirror type lidar systems. An arbitrary deviation of the beam from its design direction, the vector of beam misalignment, can be described by two independent parameters. We choose these as the horizontal and vertical components of the misalignment vector in the body frame. Either component affects both horizontal and vertical lidar accuracy. Horizontal lidar errors appear as scan line distortions: along- and across-track shifts, rotations and scaling. It is shown that the horizontal component of misalignment results in a scan line first being shifted across the track and then rotated around the vertical at the new center of the scan line. The resulting vertical lidar error, being a linear function of the scan angle, is similar to that produced by a roll bias. The vertical component of the beam misalignment causes scan line scaling and an along-track shift. The corresponding vertical error is quadratic with respect to the scan angle. The magnitude of these effects is significant even at tight alignment tolerances and cannot be realistically accounted for in the conventional calibration model, which includes only range, attitude and GPS biases. Therefore, in order to attain better accuracy, this model must be expanded to include the beam misalignment parameters as well. The addition of new parameters into the model raises the question of whether they can be reliably solved for. To give a positive answer to this question, a calibration method must utilize not only ground control information, which is typically very limited, but also the relative accuracy information from the overlapping flight lines.

Journal ArticleDOI
TL;DR: This paper concentrates on concepts and requirements for the development of a suitable software architecture using case studies and use cases seen from a GIS-based perspective.
Abstract: Environmental processes often vary in space and time and act over several scales. Current software applications dealing with aspects of these processes emphasize properties specific to their domain and tend to neglect other issues. For example, GIS prefers a static view and generally lacks the representation of dynamics, temporal simulation systems emphasize the temporal component but ignore space to a great extent, and virtual reality tends to “forget” the underlying data and models. In order to remedy this situation we present an approach that aims to bring together the three domains (temporal simulation systems, GIS, and virtual reality) and to foster the integration of particular functionalities. This paper concentrates on concepts and requirements for the development of a suitable software architecture, using case studies and use cases seen from a GIS-based perspective.

Journal ArticleDOI
TL;DR: In this paper, a method for sampling bidirectional reflectance information from multi-angular airborne images is described, which uses high resolution surface models to determine the location of the imaged point on the ground and the orientation of the measured surface fragment.
Abstract: This paper describes a method for sampling bidirectional reflectance information from multiangular airborne images. The method uses high resolution surface models to determine the location of the imaged point on the ground and the orientation of the measured surface fragment. Since natural surfaces scatter incident radiation anisotropically, viewing and illumination conditions play a critical role in the interpretation of remotely sensed images. Thus, directionally defined reflectance data are needed for the modelling and correction of bidirectional effects on airborne optical images. Two test sites were imaged with a wide range of viewing azimuth angles at two different times. A high resolution HRSC-A stereo camera was used for image acquisition. Algorithms were implemented to reconstruct the image acquisition and retrieve the image samples from the HRSC-A image data. Combined with GPS and INS data, automatically derived high resolution digital surface models, including vegetation canopies, houses, etc., were used to determine the viewing and illumination geometry on the target surface. The brightness of a sample point was recorded as a measure for reflectance. A large number of directionally defined samples and a wide angular range of sample geometry were obtained. The images were first classified. Sampled reflectance data were verified by investigating the bidirectional reflectance of five agricultural and forest targets. Errors affecting the data quality, such as angular uncertainty, were studied. The multiangular image data, the developed sampling methods and the obtained bidirectional dataset proved to be applicable in investigations of bidirectional reflectance effects of natural targets. Airborne imagery combined with high resolution digital surface models permits extensive investigation of the bidirectional reflectance of a wide range of natural objects and large habitats.
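
The geometric core of the sampling, reduced to a sketch (synthetic DSM, an assumed 0.5 m grid spacing, arbitrary sun and view directions): estimate the surface normal from the DSM gradient and express illumination and viewing directions relative to it.

```python
# Sketch only: local sun and view zenith angles relative to DSM-derived surface normals.
import numpy as np

cell = 0.5                                                  # DSM grid spacing (m, assumed)
rng = np.random.default_rng(8)
dsm = np.cumsum(rng.normal(0, 0.05, (100, 100)), axis=0)    # toy gently sloping surface

dz_dy, dz_dx = np.gradient(dsm, cell)                       # elevation gradients
normals = np.dstack([-dz_dx, -dz_dy, np.ones_like(dsm)])
normals /= np.linalg.norm(normals, axis=2, keepdims=True)

sun_dir = np.array([0.3, 0.2, 0.93]);  sun_dir /= np.linalg.norm(sun_dir)
view_dir = np.array([-0.1, 0.4, 0.91]); view_dir /= np.linalg.norm(view_dir)

local_sun_zenith = np.degrees(np.arccos(normals @ sun_dir))
local_view_zenith = np.degrees(np.arccos(normals @ view_dir))
print(f"mean local sun zenith {local_sun_zenith.mean():.1f} deg, "
      f"mean local view zenith {local_view_zenith.mean():.1f} deg")
```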

Journal ArticleDOI
TL;DR: ACENT (Attribute-aided Classification of Entangled Trajectories), a novel framework that comprises classification, clustering, and neural net processes to progressively reconstruct elongated trajectories using as input spatiotemporal coordinates of image patches and corresponding attribute values, is developed.
Abstract: In motion imagery-based tracking applications, it is common to extract locations of moving objects without any knowledge about the identity of the objects they correspond to. The identification of individual spatiotemporal trajectories from such data sets is far from trivial when these trajectories intersect in space, time, or attributes. In this paper, we present a novel approach for the reconstruction of entangled spatiotemporal trajectories of moving objects captured in motion imagery data sets. We have developed ACENT (Attribute-aided Classification of Entangled Trajectories), a novel framework that comprises classification, clustering, and neural net processes to progressively reconstruct elongated trajectories using as input spatiotemporal coordinates of image patches and corresponding attribute values. ACENT proceeds by first forming brief fragments and then linking them and adding points to them. An initial classification allows us to form brief segments corresponding to distinct objects. These segments are then linked together through clustering to form longer trajectories. Back-propagation neural network classification and geometric/self-organizing map (SOM) analysis refine these trajectories by removing misclassified points and redistributing unassigned points. Thus, ACENT integrates some established classification and clustering tools to devise a novel approach that can address the tracking challenges of busy environments. Furthermore, ACENT allows us to use spatiotemporal (ST) thresholds to cluster trajectories according to their spatial and temporal extent. In the paper, we present in detail our framework and experimental results that support the application potential of our approach.