
Showing papers in "The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences in 2014"


Journal ArticleDOI
TL;DR: The paper compares several stereo matching algorithms (local and global) that are popular in both photogrammetry and computer vision, with particular attention to Semi-Global Matching (SGM), which performs pixel-wise matching and applies consistency constraints during matching cost aggregation.
Abstract: . Encouraged by the growing interest in automatic 3D image-based reconstruction, the development and improvement of robust stereo matching techniques has been one of the most investigated research topics of recent years in photogrammetry and computer vision. The paper focuses on the comparison of some stereo matching algorithms (local and global) which are very popular in both photogrammetry and computer vision. In particular, the Semi-Global Matching (SGM), which realizes a pixel-wise matching and relies on the application of consistency constraints during the matching cost aggregation, will be discussed. The results of some tests performed on real and simulated stereo image datasets, evaluating in particular the accuracy of the obtained digital surface models, will be presented. Several algorithms and different implementations are considered in the comparison, using open-source software such as MICMAC and OpenCV, commercial software (e.g. Agisoft PhotoScan) and proprietary codes implementing Least Squares and Semi-Global Matching algorithms. The comparisons also consider the completeness and the level of detail within fine structures, and the reliability and repeatability of the obtainable data.
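
Since the abstract cites OpenCV among the compared implementations, a minimal sketch of how a disparity map can be computed with OpenCV's semi-global matcher is shown below; the file names and parameter values are illustrative assumptions, not the paper's actual test configuration.

```python
# Minimal sketch: computing a disparity map with OpenCV's semi-global matcher
# (cv2.StereoSGBM). File names and parameter values are illustrative only.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical rectified stereo pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

block_size = 5
matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,              # must be divisible by 16
    blockSize=block_size,
    P1=8 * block_size ** 2,          # penalty for small disparity changes
    P2=32 * block_size ** 2,         # penalty for large disparity changes (P2 > P1)
    uniquenessRatio=10,
    disp12MaxDiff=1,                 # left-right consistency check
)

# OpenCV returns fixed-point disparities scaled by 16
disparity = matcher.compute(left, right).astype(np.float32) / 16.0
```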

49 citations


Journal ArticleDOI
TL;DR: The results of the fit highlight the potential to use this method as a bespoke track monitoring tool during major redevelopment projects where traditional methods, such as robotic total stations, result in missed information, for example due to passing trains or knocked prisms.
Abstract: . This paper presents the capabilities of detecting relevant geometry of railway track for monitoring purposes from static terrestrial laser scanning (TLS) systems at platform level. The quality of the scans from a phase-based scanner (Scanner A) and a hybrid time-of-flight scanner (Scanner B) is compared by fitting different sections of the track profile to the matching standardised rail model. The various sections of track investigated are able to fit the model with an RMS of less than 3 mm. Both scanners show that once obvious noise and artefacts have been removed from the data, the most confident fit of the point cloud to the model is the section closest to the scanner position. The results of the fit highlight the potential to use this method as a bespoke track monitoring tool during major redevelopment projects where traditional methods, such as robotic total stations, result in missed information, for example due to passing trains or knocked prisms, and must account for offset target locations to compute track parameters.

30 citations


Journal ArticleDOI
TL;DR: In this article, the SkySat-1 satellite was used to image a target in high-definition panchromatic video for up to 90 seconds; using super-resolution techniques, sub-meter accuracy was reached for the still imagery.
Abstract: . The SkySat-1 satellite, launched by Skybox Imaging on November 21, 2013, opens a new chapter in civilian earth observation as it is the first civilian satellite to image a target in high-definition panchromatic video for up to 90 seconds. The small satellite with a mass of 100 kg carries a telescope with 3 frame sensors. Two products are available: panchromatic video with a resolution of around 1 meter and a frame size of 2560 × 1080 pixels at 30 frames per second. Additionally, the satellite can collect still imagery with a swath of 8 km in the panchromatic band, and multispectral images with 4 bands. Using super-resolution techniques, sub-meter accuracy is reached for the still imagery. The paper provides an overview of the satellite design and imaging products. The still imagery product consists of 3 stripes of frame images with a footprint of approximately 2.6 × 1.1 km. Using bundle block adjustment, the frames are registered, and their accuracy is evaluated. The image quality of the panchromatic, multispectral and pansharpened products is evaluated. The video product used in this evaluation consists of a 60 second gazing acquisition of Las Vegas. A DSM is generated by dense stereo matching. Multiple techniques such as pairwise matching or multi-image matching are used and compared. As no ground truth height reference model is available to the authors, comparisons on flat surfaces and between differently matched DSMs are performed. Additionally, visual inspection of the DSM and DSM profiles shows a detailed reconstruction of small features and large skyscrapers.

26 citations


Journal ArticleDOI
TL;DR: In this article, the authors performed model-based analysis to assess rainfall trends using the Mann-Kendall (MK) test and Sen's slope estimator, a robust nonparametric estimator.
Abstract: . Climate science is a complex field as climate is governed by processes that interact and operate on a vast array of time and space scales. The processes involving radiative transfer, chemistry and phase changes of water are most easily described at atomic and molecular scales; the influence of ice sheets, continents and planetary-scale circulations controlling the basic energy balance of the planet operates at continental scales; even planetary orbital and solar variations operating at millennial time scales cannot be ignored. Global as well as regional climate has changed due to human activities such as land use changes, production of industrial effluents and other activities accompanying the development of society. The consequences of these changes have a massive impact on atmospheric phenomena such as precipitation and temperature. Present and future information on precipitation is therefore required to develop adaptation and mitigation strategies at national and international levels. Precipitation is one of the major phenomena of the atmosphere, so its prediction and trend analysis are essential for understanding climate change. The study carries out model-based analysis to assess the prediction of rainfall trends using the Mann-Kendall (MK) test and Sen's slope estimator.
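
For readers unfamiliar with the two statistics, the Mann-Kendall test and Sen's slope estimator can be computed from an annual rainfall series as in the following sketch (synthetic data, tie corrections omitted); it illustrates the general method, not the authors' exact implementation.

```python
# Minimal sketch of the Mann-Kendall trend test and Sen's slope estimator
# for an annual rainfall series (synthetic data; tie correction omitted).
import numpy as np

def mann_kendall(x):
    n = len(x)
    # S statistic: sum of signs of all pairwise later-minus-earlier differences
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0      # variance without tie correction
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    return s, z                                    # |z| > 1.96 -> significant at 5 %

def sens_slope(x):
    # Median of all pairwise slopes (x[j] - x[i]) / (j - i)
    slopes = [(x[j] - x[i]) / (j - i)
              for i in range(len(x) - 1) for j in range(i + 1, len(x))]
    return np.median(slopes)

rain = np.array([1250.0, 1190.0, 1310.0, 1275.0, 1402.0,
                 1360.0, 1330.0, 1415.0, 1390.0, 1450.0])  # hypothetical mm/year
s, z = mann_kendall(rain)
print(f"S = {s}, Z = {z:.2f}, Sen's slope = {sens_slope(rain):.1f} mm/year")
```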

19 citations


Journal ArticleDOI
TL;DR: In this article, the authors used data from the Multiple Altimeter Beam Experimental Lidar (MABEL), an airborne photon counting lidar sensor developed by NASA Goddard, to estimate canopy height in savannas at MABEL's signal and noise levels.
Abstract: . Discrete return and waveform lidar have demonstrated a capability to measure vegetation height and the associated structural attributes such as aboveground biomass and carbon storage. Since discrete return lidar (DRL) is mainly suitable for small scale studies and the only existing spaceborne lidar sensor (ICESat-GLAS) has been decommissioned, the current question is what the future holds in terms of large scale lidar remote sensing studies. The earliest planned future spaceborne lidar mission is ICESat-2, which will use a photon counting technique. To pre-validate the capability of this mission for studying three-dimensional vegetation structure in savannas, we assessed the potential of the measurement approach to estimate canopy height in a typical savanna landscape. We used data from the Multiple Altimeter Beam Experimental Lidar (MABEL), an airborne photon counting lidar sensor developed by NASA Goddard. MABEL fires laser pulses in the green (532 nm) and near infrared (1064 nm) bands at a nominal repetition rate of 10 kHz and records the travel time of individual photons that are reflected back to the sensor. The photons' time of arrival and the instrument's GPS positions and Inertial Measurement Unit (IMU) orientation are used to calculate the distance the light travelled and hence the elevation of the surface below. A few transects flown over the Tejon Ranch Conservancy in Kern County, California, USA were used for this work. For each transect we extracted the data from the near infrared channel that had the highest number of photons. We segmented each transect into 50 m, 25 m and 10 m long blocks and aggregated the photons in each block into a histogram based on their elevation values. We then used an expansion window algorithm to identify cut-off points where the cumulative density of photons from the highest elevation resembles the canopy top and, likewise, where such cumulative density from the lowest elevation resembles the mean ground elevation. These cut-off points were compared to DRL-derived canopy and mean ground elevations. The correlation between MABEL and DRL derived metrics ranged from R^2 = 0.70, RMSE = 7.9 m to R^2 = 0.83, RMSE = 2.9 m. Overall, the results were better when the analysis was done at smaller block sizes, mainly due to the large variability of terrain relief associated with increased block size. However, the increase in accuracy was more dramatic when the block size was reduced from 50 m to 25 m than it was from 25 m to 10 m. Our work has demonstrated the capability of photon counting lidar to estimate canopy height in savannas at MABEL's signal and noise levels. However, analysis of the Advanced Topographic Laser Altimeter System (ATLAS) sensor on ICESat-2 indicates that signal photon counts will be substantially lower than those of MABEL, while sensor noise will vary as a function of solar illumination, altitude and declination, as well as the topographic and reflectance properties of surfaces. Therefore, there are reasons to believe that the actual data from ICESat-2 will give poorer results due to a lower sampling rate and use of only the green wavelength. Further analysis using simulated ATLAS data is required before more definitive results are possible, and these analyses are ongoing.
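
The block-wise aggregation of photon elevations and the cumulative-density cut-offs can be illustrated roughly as follows; this is a simplified sketch of the general idea rather than the authors' expansion window algorithm, and the block length, bin width and threshold are assumptions.

```python
# Simplified sketch: aggregate photon elevations per along-track block and
# derive canopy-top and ground elevations from cumulative photon densities.
# Not the authors' expansion-window algorithm; thresholds are assumptions.
import numpy as np

def block_heights(along_track, elevation, block_len=25.0, bin_width=0.5, frac=0.05):
    results = []
    for start in np.arange(along_track.min(), along_track.max(), block_len):
        sel = (along_track >= start) & (along_track < start + block_len)
        if sel.sum() < 20:                              # skip sparse blocks
            continue
        z = np.sort(elevation[sel])
        bins = np.arange(z[0], z[-1] + bin_width, bin_width)
        hist, edges = np.histogram(z, bins=bins)
        cum_top = np.cumsum(hist[::-1]) / hist.sum()    # from highest elevation down
        cum_bot = np.cumsum(hist) / hist.sum()          # from lowest elevation up
        canopy_top = edges[::-1][np.argmax(cum_top >= frac)]
        ground = edges[np.argmax(cum_bot >= frac)]
        results.append((start, canopy_top, ground, canopy_top - ground))
    return results  # (block start, canopy elevation, ground elevation, canopy height)
```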

18 citations


Journal ArticleDOI
Wenhui Wan1, Z. X. Liu1, Kaichang Di1, B. Wang, Jin Zhou 
TL;DR: Wan et al. developed a visual localization method for the Chang'e-3 rover, which is capable of deriving accurate localization results from cross-site stereo images.
Abstract: . Localization of the rover is critical to support science and engineering operations in planetary rover missions, such as rover traverse planning and hazard avoidance. It is desirable for a planetary rover to have visual localization capability with a high degree of automation and quick turnaround time. In this research, we developed a visual localization method for the lunar rover which is capable of deriving accurate localization results from cross-site stereo images. Tie points are searched in correspondent areas predicted by initial localization results and determined by the ASIFT matching algorithm. Accurate localization results are derived from bundle adjustment based on an image network constructed from the tie points. In order to investigate the performance of the proposed method, a theoretical accuracy analysis is implemented by means of error propagation principles. Field experiments were conducted to verify the effectiveness of the proposed method in practical applications. Experimental results show that the proposed method provides more accurate localization results (1 %~4 %) than dead-reckoning. After further validations and enhancements, the developed rover localization method has been successfully used in Chang'e-3 mission operations.

17 citations


Journal ArticleDOI
TL;DR: This paper presents and compares five methods which are able to derive stress levels from hyperspectral images and shows that Linear Ordinal SVM is a powerful tool for applications which require high prediction performance under limited resources.
Abstract: . Detection of crop stress from hyperspectral images is of high importance for breeding and precision crop protection. However, the continuous monitoring of stress in phenotyping facilities by hyperspectral imagers produces huge amounts of uninterpreted data. In order to derive a stress description from the images, interpreting algorithms with high prediction performance are required. Based on a static model, the local stress state of each pixel has to be predicted. Due to their low computational complexity, linear models are preferable. In this paper, we focus on drought-induced stress, which is represented by discrete stages of ordinal order. We present and compare five methods which are able to derive stress levels from hyperspectral images: one-vs.-one Support Vector Machine (SVM), one-vs.-all SVM, Support Vector Regression (SVR), Support Vector Ordinal Regression (SVORIM) and Linear Ordinal SVM classification. The methods are applied to two data sets - a real-world set of drought stress in single barley plants and a simulated data set. It is shown that Linear Ordinal SVM is a powerful tool for applications which require high prediction performance under limited resources. It is significantly more efficient than the one-vs.-one SVM and even more efficient than the less accurate one-vs.-all SVM. Compared to the very compact SVORIM model, it represents the senescence process much more accurately.
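
For reference, the one-vs.-one and one-vs.-all linear SVM baselines compared in the paper can be set up with scikit-learn roughly as follows; the data are placeholders and the sketch does not reproduce the SVORIM or Linear Ordinal SVM models.

```python
# Sketch of the one-vs.-one and one-vs.-all linear SVM baselines on
# pixel spectra labelled with ordinal stress stages (placeholder data).
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 120))        # 1000 pixels x 120 hypothetical spectral bands
y = rng.integers(0, 4, size=1000)       # ordinal stress stages 0..3 (random here)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

ovo = OneVsOneClassifier(LinearSVC()).fit(X_tr, y_tr)   # one binary SVM per class pair
ova = OneVsRestClassifier(LinearSVC()).fit(X_tr, y_tr)  # one binary SVM per class

print("one-vs-one accuracy:", ovo.score(X_te, y_te))
print("one-vs-all accuracy:", ova.score(X_te, y_te))
```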

15 citations


Journal ArticleDOI
TL;DR: In this article, thermal imagery collected via a lightweight remote sensing Unmanned Aerial Vehicle (UAV) was used to create a surface temperature map for the purpose of providing wildland firefighting crews with a cost-effective and time-saving resource.
Abstract: . Thermal remote sensing has a wide range of applications, though the extent of its use is inhibited by cost. Robotic and computer components are now widely available to consumers on a scale that makes thermal data a readily accessible resource. In this project, thermal imagery collected via a lightweight remote sensing Unmanned Aerial Vehicle (UAV) was used to create a surface temperature map for the purpose of providing wildland firefighting crews with a cost-effective and time-saving resource. The UAV system proved to be flexible, allowing for customized sensor packages to be designed that could include visible or infrared cameras, GPS, temperature sensors, and rangefinders, in addition to many data management options. Altogether, such a UAV system could be used to rapidly collect thermal and aerial data, with a geographic accuracy of less than one meter.

14 citations


Journal ArticleDOI
TL;DR: This work takes advantage of certain recently developed mathematical tools and of photogrammetry techniques to achieve a very computationally efficient Euclidean 3D reconstruction of the scene, and, thanks to the presence of instrumentation for localization embedded in the device, the obtained 3D reconstruction can be properly georeferenced.
Abstract: The development of Mobile Mapping systems over the last decades has made it possible to quickly collect georeferenced spatial measurements by means of sensors mounted on mobile vehicles. Despite the large number of applications that can potentially take advantage of such systems, their use is currently limited, because of their cost, to certain specialized organizations, companies, and universities. However, the recent worldwide diffusion of powerful mobile devices typically embedded with GPS, Inertial Navigation System (INS), and imaging sensors is enabling the development of small and compact mobile mapping systems. More specifically, this paper considers the development of a 3D reconstruction system based on photogrammetry methods for smartphones (or other similar mobile devices). The limited computational resources available in such systems and the users' request for real-time reconstructions impose very stringent requirements on the computational burden of the 3D reconstruction procedure. This work takes advantage of certain recently developed mathematical tools (incremental singular value decomposition) and of photogrammetry techniques (structure from motion, Tomasi–Kanade factorization) to achieve a very computationally efficient Euclidean 3D reconstruction of the scene. Furthermore, thanks to the presence of instrumentation for localization embedded in the device, the obtained 3D reconstruction can be properly georeferenced.
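
The Tomasi–Kanade factorization mentioned above recovers camera motion and scene structure (up to an affine ambiguity) from a matrix of tracked image points via a rank-3 SVD; a minimal sketch, assuming W stacks the x and y coordinates of P points over F frames, is given below.

```python
# Minimal sketch of Tomasi-Kanade factorization: given a 2F x P measurement
# matrix W of tracked points, recover affine motion M (2F x 3) and structure
# S (3 x P) via a rank-3 SVD.
import numpy as np

def tomasi_kanade(W):
    # 1. Register: subtract the per-row mean (centroid of the tracks)
    W_centered = W - W.mean(axis=1, keepdims=True)
    # 2. Rank-3 truncated SVD
    U, s, Vt = np.linalg.svd(W_centered, full_matrices=False)
    U3, S3, Vt3 = U[:, :3], np.diag(s[:3]), Vt[:3, :]
    # 3. Split the singular values between motion and structure
    M = U3 @ np.sqrt(S3)          # affine camera matrices, stacked
    S = np.sqrt(S3) @ Vt3         # 3D points, up to an affine transformation
    return M, S

# The metric (Euclidean) upgrade and the incremental SVD used in the paper
# would be applied on top of this affine reconstruction.
```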

13 citations


Journal ArticleDOI
TL;DR: In this article, a new INS-LiDAR boresight parameter calibration method is presented; it is based on a rigorous mathematical model, which makes it possible to provide reliable boresight quality factors.
Abstract: . This paper presents a new INS-LiDAR boresight parameter calibration method that differs from traditional methods in two main aspects. First, the method is static, which avoids being affected by GPS errors and enables the extraction of scanlines. For terrestrial laser scanning, this aspect is extremely important since ranges are short and GPS errors are the first source of error during the calibration process. Second, the method is based on a rigorous mathematical model, which makes it possible to provide reliable boresight quality factors. After presenting the boresight determination problem, this paper will introduce the existing calibration procedures. Then, it will describe the new procedure and explain how it overcomes the limitations of the traditional approaches. Finally, some results from both simulations and real datasets are presented to illustrate our approach.

12 citations


Journal ArticleDOI
TL;DR: In this article, a feasibility study on the implementation and performance assessment of time-lapse processing of a monoscopic image sequence, acquired by a calibrated camera at the Perito Moreno Glacier in Argentina, is presented.
Abstract: . This research provides a feasibility study on the implementation and performance assessment of time-lapse processing of a monoscopic image sequence, acquired by a calibrated camera at the Perito Moreno Glacier in Argentina. The glacier is located at 50°28'23" S, 73°02'10" W in the Parque Nacional Los Glaciares, South Patagonia Icefield, Santa Cruz, and has experienced minor fluctuations and unusual behavior from the early 1960s to the present. The objective of this study was to determine the evolution and changes in the ice-dam of the Perito Moreno glacier that started on November 23, 2012 and collapsed on January 19, 2013. Two images every 24 hours were acquired from October 2012 until February 2013, a total of 135 days. Image information was supported by ground data. Image and ground data were correlated with a 2D affine transformation. This technique allows the determination of the distortions in the images and the estimation of scale factors. This, along with an accurate time-lapse interval, has produced accurate data for the analysis. In addition, changes in the level of the Brazo Rico lake were validated with direct data in order to determine the degree of uncertainty in the estimation of changes in the glacier. Based on the calculations, the advance rate of the front of the Perito Moreno glacier was estimated at 0.67 ± 0.003 m/d, and the tunnel evolution was also recorded.
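
The 2D affine transformation used to correlate image and ground data can be estimated by ordinary least squares from corresponding point pairs, as in the following sketch; the coordinates are placeholders, not the glacier dataset.

```python
# Minimal sketch: estimating a 2D affine transformation (6 parameters)
# between image coordinates and ground coordinates by least squares.
import numpy as np

def fit_affine(src, dst):
    """src, dst: (N, 2) arrays of corresponding points, N >= 3."""
    n = src.shape[0]
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = src
    A[0::2, 2] = 1.0
    A[1::2, 3:5] = src
    A[1::2, 5] = 1.0
    b = dst.reshape(-1)
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params.reshape(2, 3)            # [[a, b, tx], [c, d, ty]]

def apply_affine(T, pts):
    return pts @ T[:, :2].T + T[:, 2]

# Placeholder correspondences (image pixel -> local ground coordinates)
img = np.array([[120.0, 340.0], [880.0, 360.0], [500.0, 90.0], [640.0, 700.0]])
gnd = np.array([[10.5, 22.1], [95.2, 24.8], [52.3, -5.6], [68.9, 61.0]])
T = fit_affine(img, gnd)
residuals = apply_affine(T, img) - gnd     # used to assess the transformation quality
```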

Journal ArticleDOI
TL;DR: In this paper, a single channel based classification method is proposed to extract a digital terrain model (DTM) from satellite stereo imagery. And the proposed method adopts the random forests method to get initial probability maps of the four main classes in forest regions (high forest, low forest, ground and buildings).
Abstract: Satellite stereo imagery is becoming a popular data source for the derivation of height information. Many new Digital Surface Model (DSM) generation and evaluation methods have been proposed based on these data. A novel Digital Terrain Model (DTM) extraction method based on the DSM from satellite stereo imagery is proposed in this paper. Instead of directly filtering the DSM, firstly a single-channel-based classification method is proposed. In this step, no multi-spectral information is used, because for some stereo sensors, like Cartosat-1, only panchromatic channels are available. The proposed classification method adopts the random forests method to obtain initial probability maps of the four main classes in forest regions (high forest, low forest, ground, and buildings). To counter the salt-and-pepper effect of this pixel-based classification method, the probability maps are further filtered using adaptive Wiener filtering. Then a cube-based greedy strategy is applied to generate the final classification map from these refined probability maps. Secondly, the height distances between neighboring regions are calculated along the boundary regions. These height distances can be used to estimate the relative region heights. Thirdly, the DTM is extracted by subtracting these relative region heights from the DSM in the order: buildings - low forest - high forest. In the end, the extracted DTM is further smoothed using a median filter. The proposed DTM extraction method is finally tested on satellite stereo imagery captured by Cartosat-1. Quality evaluation is performed by comparing the extracted DTMs to a reference DTM generated from the last-return airborne laser scanning point cloud.
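
The per-pixel classification step can be sketched with a random forest producing per-class probability maps that are then smoothed, broadly following the abstract's description; the features, labels and filter window below are assumptions.

```python
# Sketch of the single-channel pixel classification step: a random forest
# produces per-class probability maps which are then smoothed (here with
# scipy's adaptive Wiener filter). Features and labels are placeholders.
import numpy as np
from scipy.signal import wiener
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
rows, cols, n_feat = 200, 200, 8
features = rng.normal(size=(rows, cols, n_feat))     # e.g. DSM height, texture, ...
labels = rng.integers(0, 4, size=(rows, cols))       # 0 ground, 1 low forest,
                                                     # 2 high forest, 3 building
X = features.reshape(-1, n_feat)
y = labels.reshape(-1)

rf = RandomForestClassifier(n_estimators=100, n_jobs=-1).fit(X, y)
proba = rf.predict_proba(X).reshape(rows, cols, -1)  # one probability map per class

# Smooth each probability map to suppress salt-and-pepper noise
proba_smooth = np.stack([wiener(proba[:, :, c], mysize=5)
                         for c in range(proba.shape[2])], axis=-1)
class_map = np.argmax(proba_smooth, axis=-1)
```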

Journal ArticleDOI
TL;DR: The proposed framework is easily extensible and supports geoindexes to speed up the querying, and the experimental results show that the proposed 3D city management system can efficiently fulfil the analysis and visualization requirements.
Abstract: To manage increasingly complicated 3D city models, a framework based on a NoSQL database is proposed in this paper. The framework supports import and export of 3D city models according to international standards such as CityGML, KML/COLLADA and X3D. We also suggest and implement 3D model analysis and visualization in the framework. For city model analysis, 3D geometry data and semantic information (such as name, height, area, price and so on) are stored and processed separately. We use a Map-Reduce method to deal with the 3D geometry data since it is more complex, while the semantic analysis is mainly based on database query operations. For visualization, a multiple-representation 3D city structure, CityTree, is implemented within the framework to support dynamic LODs based on the user viewpoint. Also, the proposed framework is easily extensible and supports geoindexes to speed up querying. Our experimental results show that the proposed 3D city management system can efficiently fulfil the analysis and visualization requirements.

Journal ArticleDOI
TL;DR: In this article, a Markov Chain within Cellular Automata (CA) was used to simulate the urban/non-urban growth process for the year 2040 in the city of Istanbul, and the results indicated that urban expansion will mainly influence forest areas during the period 1972-2040.
Abstract: . Urban growth is a complex dynamical process associated with landscape-change driving forces such as the environment, politics, geography and many others that affect the city at multiple spatial and temporal scales. Istanbul, one of the largest agglomerations in Europe and the fifth-largest city in the world in terms of population within city limits, has been growing very rapidly over the late 20th century at a rate of 3.45 %, causing many environmental issues. Recently, Istanbul's new third bridge and the proposed new routes across the Bosphorus are foreseen not only to threaten the ecology of the city, but also to open the way to new areas of unplanned urbanization. The dimensions of this threat are affirmed by the urban sprawl observed especially after the construction of the second bridge and connections such as the Trans European Motorway (TEM). Since the spatial and temporal components of urbanization can be more simply identified through modeling, this study aims to analyze urban change and assess the ecological threats in Istanbul through modeling for the year 2040. For this purpose, a commonly used urban modeling approach, the Markov Chain within Cellular Automata (CA), was selected to simulate the urban/non-urban growth process. CA is a simple and effective tool to capture and simulate the complexity of urban system dynamics. The key factors for a Markov model are the transition probability matrix, which defines the change trend from the past to today and into the future for a certain class type, and land use suitability maps for urban areas. Multi-Criteria Analysis was used to build these suitability maps. The distance from each pixel to the urban, road and water classes, plus the elevation, slope and land use maps (as an excluded layer), were defined as factors. Calibration data were obtained from remotely sensed data recorded in 1972, 1986 and 2013. Validation was performed by overlaying the simulated and actual 2013 urban maps, and the Kappa Index of Agreement was calculated. The results indicate that urban expansion will mainly influence forest areas during the period 1972–2040.
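
The transition probability matrix at the core of the Markov model is obtained by cross-tabulating two co-registered classified maps from different dates, as in the following sketch; the class codes and rasters are synthetic placeholders.

```python
# Minimal sketch: deriving a Markov transition probability matrix from two
# co-registered land cover maps (e.g. 1986 and 2013). Rasters are placeholders.
import numpy as np

def transition_matrix(map_t0, map_t1, n_classes):
    counts = np.zeros((n_classes, n_classes))
    for c0 in range(n_classes):
        mask = map_t0 == c0
        for c1 in range(n_classes):
            counts[c0, c1] = np.sum(map_t1[mask] == c1)
    # Normalise rows so each row gives P(class at t1 | class at t0)
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

rng = np.random.default_rng(2)
lc_1986 = rng.integers(0, 3, size=(500, 500))   # 0 urban, 1 forest, 2 other (synthetic)
lc_2013 = rng.integers(0, 3, size=(500, 500))
P = transition_matrix(lc_1986, lc_2013, n_classes=3)
print(P)   # each populated row sums to 1; feeds the CA simulation of 2040 growth
```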

Journal ArticleDOI
TL;DR: A new GIS tool is developed that uses the well-known Prim's algorithm to construct the minimum spanning tree (MST) of a connected, undirected and weighted road network, helping to solve complex network MST problems easily, efficiently and effectively.
Abstract: . A minimum spanning tree (MST) of a connected, undirected and weighted network is a tree of that network consisting of all its nodes such that the sum of the weights of its edges is minimum among all possible spanning trees of the same network. In this study, we have developed a new GIS tool using the well-known Prim's algorithm to construct the minimum spanning tree of a connected, undirected and weighted road network. This algorithm is based on the weight (adjacency) matrix of a weighted network and helps to solve the complex network MST problem easily, efficiently and effectively. The selection of the appropriate algorithm is essential, otherwise it will be very hard to get an optimal result. In the case of a road transportation network, it is essential to find optimal results by considering all the necessary points based on a cost factor (time or distance). This paper addresses the Minimum Spanning Tree (MST) problem of a road network by finding its minimum span while considering all the important network junction points. GIS technology is usually used to solve network-related problems such as the optimal path problem, the travelling salesman problem, vehicle routing problems, location-allocation problems etc. Therefore, in this study we have developed a customized GIS tool using a Python script in ArcGIS software for the solution of the MST problem for the road transportation network of Dehradun city, considering distance and time as the impedance (cost) factors. The tool has a number of advantages: users do not need deep knowledge of the subject as the tool is user-friendly, and it allows access to varied information adapted to the needs of the users. This GIS tool for MST can be applied to a nationwide plan called Prime Minister Gram Sadak Yojana in India to provide optimal all-weather road connectivity to unconnected villages (points). The tool is also useful for constructing highways or railways spanning several cities optimally, or for connecting all cities with minimum total road length.
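
Prim's algorithm on a weight (adjacency) matrix, as used by the tool, can be sketched as follows; the example matrix is a small hypothetical road network, not the Dehradun dataset, and the actual tool is an ArcGIS Python script rather than plain NumPy.

```python
# Sketch of Prim's algorithm on a weight (adjacency) matrix. A value of
# inf means no direct road between two junctions. Example data is hypothetical.
import numpy as np

def prim_mst(W):
    n = W.shape[0]
    in_tree = [0]                       # start from an arbitrary junction
    edges, total = [], 0.0
    while len(in_tree) < n:
        best = (np.inf, None, None)
        for u in in_tree:               # cheapest edge leaving the current tree
            for v in range(n):
                if v not in in_tree and W[u, v] < best[0]:
                    best = (W[u, v], u, v)
        w, u, v = best
        in_tree.append(v)
        edges.append((u, v, w))
        total += w
    return edges, total

inf = np.inf
W = np.array([[inf, 4, 2, inf],
              [4, inf, 1, 5],
              [2, 1, inf, 8],
              [inf, 5, 8, inf]], dtype=float)   # symmetric cost matrix (time or distance)
edges, total = prim_mst(W)
print(edges, total)    # [(0, 2, 2.0), (2, 1, 1.0), (1, 3, 5.0)], total 8.0
```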

Journal ArticleDOI
TL;DR: The authors analyzed the morphologic change caused by sand dredging in the Poyang Lake basin by overlaying two DEMs acquired in 1952 and 2010, and showed that the reflectance of the middle infrared band for sand-dredging vessels is much higher than that of the water surface, so the vessels can be counted in 12 Landsat images acquired in the flooding seasons during 2000~2010.
Abstract: . Sand dredging has been practiced in rivers, lakes, harbours and coastal areas in recent years in China, mostly because of demand from the construction industry for building material. Sand dredging has disturbed aquatic ecosystems by affecting hydrological processes, increasing the content of suspended sediments and reducing water clarity. Poyang Lake, connected with the Yangtze River in its lower reaches, is the largest freshwater lake in China. Sand dredging in Poyang Lake has intensified since 2001 because the practice was banned in the Yangtze River and is profitable. In this study, the morphologic change caused by sand dredging in the Poyang Lake basin was analysed by overlaying two DEMs acquired in 1952 and 2010 respectively. Since the reflectance of the middle infrared band for a sand-dredging vessel is much higher than that of the water surface, sand-dredging vessels appear as isolated grey points and can be counted in the middle infrared band of 12 Landsat images acquired in the flooding seasons during 2000~2010. Another two Landsat images (with low water levels, before 2000 and after 2010) were used to evaluate the morphologic change by comparing inundation extent and shoreline shape. The following results were obtained: (1) vessels for sand dredging were mainly distributed in the north of Poyang Lake before 2007, but the dredging area was later enlarged to the central region and even to the Gan River; (2) the sand dredging area reached about 260.4 km^2, mainly distributed north of Songmen Mountain, and has been enlarged to the central part of Poyang Lake according to the distribution of sand vessels since 2007. Sand dredged from Poyang Lake amounted to about 1.99 × 10^9 m^3 or 2448 Mt assuming a sediment bulk density of 1.23 t/m^3. This means that the magnitude of sand mining during 2001–2010 is almost ten times the sand deposition in Poyang Lake during 1955–2010; (3) sand dredging in Poyang Lake has altered the lake capacity and discharge cross-section area, and some of the watercourse in the northern channel was enlarged by more than 1 km at low lake levels. This study is useful for understanding the change of the hydrological system, especially the drying-up trend in Poyang Lake in recent autumns and winters.
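
The volume and mass figures quoted above follow from differencing the two DEMs and applying the assumed bulk density, as in the following sketch; the arrays and cell size are placeholders, not the 1952 and 2010 datasets.

```python
# Minimal sketch: estimating dredged volume and mass from two co-registered
# DEMs (1952 and 2010). Arrays and cell size below are placeholders.
import numpy as np

cell_area = 30.0 * 30.0                 # m^2 per DEM cell (assumed grid spacing)
bulk_density = 1.23                     # t / m^3, as assumed in the paper

rng = np.random.default_rng(3)
dem_1952 = rng.uniform(8.0, 12.0, size=(1000, 1000))   # synthetic lake-bed elevations (m)
dem_2010 = dem_1952 - rng.uniform(0.0, 2.0, size=dem_1952.shape)

lowering = np.clip(dem_1952 - dem_2010, 0.0, None)     # only cells that were deepened
volume_m3 = np.sum(lowering) * cell_area
mass_mt = volume_m3 * bulk_density / 1e6                # megatonnes

print(f"dredged volume = {volume_m3:.3e} m^3, mass = {mass_mt:.0f} Mt")
```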

Journal ArticleDOI
TL;DR: This work is focused on a fast foreign object recognition algorithm for an onboard foreign object detection system based on 3D object minimal boundary extraction that is used for an automatic recognition of a foreign object from the predefined bank of 3D models.
Abstract: . Systems for the detection of foreign objects on a runway during the landing of an aircraft are in high demand. Such systems could be installed at the airport or mounted on board an aircraft. This work is focused on a fast foreign object recognition algorithm for an onboard foreign object detection system. The algorithm is based on 3D object minimal boundary extraction. The boundary is estimated through an iterative process of minimization of the difference between a pair of orthophotos. During the landing an onboard camera produces a sequence of images from which a number of stereo pairs can be extracted. For each frame the runway lines are automatically detected and the external orientation of the camera relative to the runway is estimated. Using the external orientation parameters, the runway region is projected as an orthophoto onto the runway plane. The difference of the orthophotos shows the objects that do not coincide with the runway plane. After that, the position of the foreign object relative to the runway plane and its minimal 3D boundary can be calculated. The minimal 3D boundary for each object is estimated by projection of the runway region onto a modified model of the runway. The extracted boundary is used for automatic recognition of a foreign object from a predefined bank of 3D models.

Journal ArticleDOI
TL;DR: In this article, the impact of camera radiometric resolution on spectral reflectance values at chosen wavelengths was analyzed using two monochromatic XEVA video sensors with different radiometric resolutions (12 and 14 bits).
Abstract: Nowadays remote sensing plays a very important role in many different study fields, i.e. environmental studies, hydrology, mineralogy, ecosystem studies, etc. One of the key areas of remote sensing applications is water quality monitoring. Understanding and monitoring water quality parameters and detecting different water contaminants is an important issue in water management and in the protection of the whole environment, especially the water ecosystem. There are many remote sensing methods to monitor water quality and detect water pollutants. One of the most widely used methods for substance detection with remote sensing techniques is based on the use of spectral reflectance coefficients. They are usually acquired using discrete methods such as spectrometric measurements. These, however, can be very time consuming, and therefore image-based methods are used more and more often. In order to work out a proper methodology for obtaining spectral reflectance coefficients from hyperspectral and multispectral images, it is necessary to verify the impact of the cameras' radiometric resolution on the accuracy of their determination. This paper presents laboratory experiments that were conducted using two monochromatic XEVA video sensors (400–1700 nm spectral data registration) with two different radiometric resolutions (12 and 14 bits). In view of determining spectral characteristics from images, the research team used a set of interferometric filters. All data collected with the multispectral digital video cameras were compared with spectral reflectance coefficients obtained with a spectroradiometer. The objective of this research is to find the impact of the cameras' radiometric resolution on reflectance values at chosen wavelengths. The main topic of this study is the analysis of the accuracy of spectral coefficients from sensors with different radiometric resolutions. By comparing values collected from images acquired with the XEVA sensors with the curves obtained with the spectroradiometer, it is possible to determine the accuracy of image-based spectral reflectance coefficients and decide which sensor will be more accurate for determining them for the protection of the water aquatic environment.

Journal ArticleDOI
TL;DR: A dense matching algorithm based on Self-Adaptive Patches, designed for urban scenes, is proposed, and the mesh generated by Graph Cuts is a 2-manifold surface which satisfies the half-edge data structure.
Abstract: . We describe multi-view stereo reconstruction and simplification algorithms for urban scene models based on oblique images. The complexity, diversity, and density of the urban scene increase the difficulty of building city models from oblique images, but many flat surfaces exist in the urban scene. One of our key contributions is a dense matching algorithm based on Self-Adaptive Patches designed for urban scenes. The basic idea of match propagation based on Self-Adaptive Patches is to build patches centred on seed points which are already matched. The extent and shape of the patches adapt to the objects of the urban scene automatically: when the surface is flat, the extent of the patch becomes larger; when the surface is rough, the extent of the patch becomes smaller. The other contribution is that the mesh generated by Graph Cuts is a 2-manifold surface that satisfies the half-edge data structure. This is achieved by clustering and re-marking tetrahedrons in the s-t graph. The purpose of obtaining a 2-manifold surface is to simplify the mesh with an edge-collapse algorithm which can preserve and highlight the features of buildings.

Journal ArticleDOI
TL;DR: In this article, the authors present the initial results of the VAST project (VAlorizzazione Storia e Territorio - Valorization of History and Landscape), which aims to digitally reconstruct the forts located on the plateaus of Luserna, Lavarone and Folgaria.
Abstract: . 2014 is the hundredth anniversary of the outbreak of the First World War (WWI) – or Great War – in Europe, and a number of initiatives have been planned to commemorate the tragic event. Until 1918, the Italian Trentino – Alto Adige region was under the Austro-Hungarian Empire and represented one of the most crucial and bloody war fronts between the Austrian and Italian territories. The region's borders were dotted with military fortresses, theatres of battles between the two opposing armies. Unfortunately, most of these military buildings are now ruined and their architecture can hardly be appreciated. The paper presents the initial results of the VAST project (VAlorizzazione Storia e Territorio – Valorization of History and Landscape), which aims to digitally reconstruct the forts located on the plateaus of Luserna, Lavarone and Folgaria. An integrated methodology has been adopted to collect and employ all possible sources of information in order to derive precise and photo-realistic 3D digital representations of the WWI forts.

Journal ArticleDOI
TL;DR: This research implements a crowd-sourced Web-based real estate support system and creates a Spatial Data Infrastructure (SDI) within it for collecting, updating and integrating all official property data.
Abstract: . The importance of property in various respects, especially its impact on different sectors of the economy and on the country's macroeconomy, is clear. Because of the real, multi-dimensional and heterogeneous nature of housing as a commodity, the lack of an integrated system containing comprehensive property information, the lack of awareness among some actors in this field of such comprehensive information, and the lack of clear and comprehensive rules and regulations for trading and pricing, several problems arise for the people involved in this field. This research implements a crowd-sourced Web-based real estate support system. Creating a Spatial Data Infrastructure (SDI) within this system for collecting, updating and integrating all official data about property is also an aim of this study. In this system a Web 2.0 broker and technologies such as Web services and service composition have been used. This work aims to provide comprehensive and diverse information about property from different sources. For this purpose a five-level real estate support system architecture is used. The PostgreSQL DBMS is used to implement the system. GeoServer software is used as the map server and as a reference implementation of OGC (Open Geospatial Consortium) standards, and an Apache server is used to run the web pages and user interfaces. Integrating the introduced methods and technologies provides a proper environment for various users to use the system and share their information. This goal can only be achieved through cooperation among all organizations involved in real estate, with implementation of their required infrastructures as interoperable Web services.

Journal ArticleDOI
TL;DR: The paper is based on the concept of a spatial fuzzy logic on topological spaces that contributes to the development of an adaptive Early Warning And Response System (EWARS) providing decision support for the current or future spatial distribution of a disease.
Abstract: A Spatial Decision Support System (SDSS) provides support for decision makers and should not be viewed as replacing human intelligence with machines. Therefore it is reasonable that decision makers are able to use a feature to analyze the provided spatial decision support in detail, to crosscheck the digital support of the SDSS with their own expertise. Spatial decision support is based on risk and resource maps in a Geographic Information System (GIS) with relevant layers, e.g. environmental, health and socio-economic data. Spatial fuzzy logic allows the representation of spatial properties with a value of truth in the range between 0 and 1. Decision makers can refer to the visualization of the spatial truth of single risk variables of a disease. Spatial fuzzy logic rules that support the allocation of limited resources according to risk can be evaluated with measure theory on topological spaces, which allows the applicability of these rules to be visualized in a map as well. Our paper is based on the concept of a spatial fuzzy logic on topological spaces that contributes to the development of an adaptive Early Warning And Response System (EWARS) providing decision support for the current or future spatial distribution of a disease. It supports the decision maker in testing interventions based on available resources, applying risk mitigation strategies and providing guidance tailored to the geo-location of the user via mobile devices. The software component of the system would be based on open source software, and the software developed during this project will also be in the open source domain, so that an open community can build on the results and tailor further work to regional or international requirements and constraints. A freely available EWARS Spatial Fuzzy Logic Demo was developed which enables a user to visualize risk and resource maps based on individual data in several data formats.

Journal ArticleDOI
TL;DR: In this article, the authors used the integration of the Multi-Resolution Landscape Characterization Consortium's National Land Cover Database 2011 and LANDFIRE's Disturbance Products to update the 2001 National GAP Vegetation Dataset to reflect 2011 conditions.
Abstract: . Over the past decade, great progress has been made in developing national-extent land cover mapping products to address natural resource issues. One of the core products of the GAP Program is range-wide species distribution models for nearly 2000 terrestrial vertebrate species in the U.S. We rely on deductive modeling of habitat affinities using these products to create models of habitat availability. That approach requires a thematically rich and ecologically meaningful map legend to support the modeling effort. In this work, we tested the integration of the Multi-Resolution Landscape Characterization Consortium's National Land Cover Database 2011 and LANDFIRE's Disturbance Products to update the 2001 National GAP Vegetation Dataset to reflect 2011 conditions. The revised product can then be used to update the species models. We tested the update approach in three geographic areas (Northeast, Southeast, and Interior Northwest). We used the NLCD product to identify areas where the cover type mapped in 2011 was different from that in the 2001 land cover map. We used Google Earth and ArcGIS base maps as reference imagery in order to label areas identified as "changed" with the appropriate class from our map legend. Areas mapped as urban or water in the 2011 NLCD map that were mapped differently in the 2001 GAP map were accepted without further validation and recoded to the corresponding GAP class. We used LANDFIRE's Disturbance Products to identify changes that are the result of recent disturbance and to inform the reassignment of areas to their updated thematic label. We ran habitat models for three species: Lewis's Woodpecker (Melanerpes lewis), the White-tailed Jackrabbit (Lepus townsendii) and the Brown-headed Nuthatch (Sitta pusilla). For each of the three vertebrate species we found important differences in the amount and location of suitable habitat between the 2001 and 2011 habitat maps. Specifically, Brown-headed Nuthatch habitat in 2011 had changed by −14% relative to the 2001 modeled habitat, whereas Lewis's Woodpecker habitat increased by 4%. The White-tailed Jackrabbit had a net change of −1% (11% decline, 10% gain). For that species we found the updates related to the opening of forest due to burning and to regenerating shrubs following harvest to be the main locally important transitions. In the Southeast, updates related to timber management and urbanization are locally important.

Journal ArticleDOI
TL;DR: The aim of the survey is to create a 3D multiscale database; therefore, different restitution scales, from the architectural-urban scale to a detail scale, are taken into consideration, and the efficiency of the multi-range approach, obtained by employing and integrating different types of acquisition technologies, is illustrated.
Abstract: . In this paper, a Cultural Heritage survey performed by employing and integrating different types of acquisition technologies (image-based and active-sensor-based) is presented. The aim of the survey is to create a 3D multiscale database; therefore, different restitution scales, from the architectural-urban scale to a detail scale, are taken into consideration. This research is part of a project financed by UNESCO for the study of the historical gardens located in Mantua and Sabbioneta, and in particular of the Palazzo Te renaissance gardens in Mantua, which are reported in this paper. First of all, a general survey of the area was carried out using classical aerial photogrammetry in order to document the actual arboreal and urban furniture conditions of the gardens (1:500 scale). Next, a detailed photogrammetric survey of the Esedra courtyard in Palazzo Te was performed using a UAV system. Finally, laser scanning and traditional topography were used for the terrestrial detailed acquisition of the gardens and architectural facades (1:50–1:20 scale). The aim of this research is to create a suitable graphical documentation support for the study of the structure of the gardens, to analyze how they have been modified over the years, and to provide effective support for possible future re-design. Moreover, the research has involved a number of botanic and archaeological investigations, which have been duly acquired and modeled with image-based systems. Starting from the acquired datasets with their acquisition scales, a series of comparative analyses has been performed, especially for those areas in which all the systems were employed. The comparisons were carried out by analyzing point cloud models obtained by using a topographical network. As a result, the efficiency of the multi-range approach, obtained by employing the currently available technologies, is illustrated in the present work.

Journal ArticleDOI
Yahui Liu1, Y. Shen1
TL;DR: The authors collected data on the number and spatial distribution of colleges and universities with a GIS specialty in China, made a comparative analysis of the GIS curricula of three typical universities, and discussed the classification of enrolments and graduates, with emphasis on the industrial and regional distribution of the graduates.
Abstract: The first GIS Bachelor's Program arose in China in the 1980s. After more than 30 years of exploration and practice, it is now in a period of full-scale development. The paper collects data on the number and spatial distribution of colleges and universities with a GIS specialty in China. It then makes a comparative analysis of the GIS curricula of three typical universities. Then, on the basis of enrolment and employment data from Wuhan University, it discusses the classification of enrolments and graduates, with emphasis on the industrial and regional distribution of the graduates. Through the above investigation and analysis, the paper finally draws conclusions on the features and problems of China's GIS higher education.

Journal ArticleDOI
F. Barouni1, B. Moulin1
TL;DR: A novel approach to reasoning with spatial proximity is proposed; it is based on contextual information, uses a neurofuzzy classifier to handle the uncertainty of proximity, and is integrated into a GIS, enhancing it with proximity reasoning.
Abstract: In this paper, we propose a novel approach to reasoning with spatial proximity. The approach is based on contextual information and uses a neurofuzzy classifier to handle the uncertainty aspect of proximity. Neurofuzzy systems are a combination of neural networks and fuzzy systems and incorporate the advantages of both techniques. Although fuzzy systems are focused on knowledge representation, they do not allow the estimation of membership functions. Conversely, neural networks use powerful learning techniques but are not able to explain how results are obtained. Neurofuzzy systems benefit from both techniques by using training data to generate membership functions and by using fuzzy rules to represent expert knowledge. Moreover, contextual information is collected from a knowledge base. The complete solution that we propose is integrated into a GIS, enhancing it with proximity reasoning. From an application perspective, the proposed approach was used in the telecommunication domain and particularly in fiber optic monitoring systems. In such systems, a user needs to qualify the distance between a fiber break and the surrounding objects of the environment to optimize the assignment of emergency crews. The neurofuzzy classifier has been used to compute the membership function parameters of the contextual information inputs using a training data set and fuzzy rules.

Journal ArticleDOI
TL;DR: In this paper, a collaborative web map called "Where I Was Robbed", published by students from the Federal University of Bahia, Brazil, is studied; the students' initial efforts to publicize the web map were restricted to announcing it on a local radio station as a tool of social interest.
Abstract: . In July 2013 a group of undergraduate students from the Federal University of Bahia, Brazil, published a collaborative web map called "Where I Was Robbed". Their initial efforts to publicize the web map were restricted to announcing it on a local radio station as a tool of social interest. In two months the map had almost 10,000 reports (155 reports per day), and people from more than 350 cities had already reported a crime. The present study investigates the spatial correlation of this collaborative web map with official robbery data registered in the Secretary of Public Safety database for the city of Salvador, Bahia. A kernel density estimator combined with map algebra was used for the investigation. Spatial correlations with official robbery data for the city of Salvador were not found initially, but after standardizing the collaborative data and mining the official registers, both datasets pointed to very similar areas as the main hot spots for pedestrian robbery. Both areas are located in two of the most economically active parts of the city, although web map crime reports were more concentrated in an area with a higher-income population. These results and discussions indicate that this collaborative application is being used mainly by the middle- and upper-class segments of the city population, but it can still provide significant information on public safety priority areas. Therefore, wider publicity for the collaborative crime map application in local papers, radio and TV, as well as partnerships with official agencies, are strongly recommended.
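
The kernel density comparison between crowd-sourced and official reports can be sketched with a Gaussian KDE evaluated on a common grid followed by a simple map-algebra difference; the coordinates below are random placeholders, and the standardization step is only an assumption about how the published analysis was normalized.

```python
# Sketch: kernel density surfaces for crowd-sourced and official robbery
# points on a common grid, plus their difference (map algebra).
# Coordinates are random placeholders, not real report locations.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)
crowd = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(500, 2))     # (x, y) of reports
official = rng.normal(loc=[0.2, -0.1], scale=1.2, size=(800, 2))

xg, yg = np.meshgrid(np.linspace(-4, 4, 200), np.linspace(-4, 4, 200))
grid = np.vstack([xg.ravel(), yg.ravel()])

kde_crowd = gaussian_kde(crowd.T)(grid).reshape(xg.shape)
kde_official = gaussian_kde(official.T)(grid).reshape(xg.shape)

# Standardise each surface before differencing so the two data volumes are comparable
z_crowd = (kde_crowd - kde_crowd.mean()) / kde_crowd.std()
z_official = (kde_official - kde_official.mean()) / kde_official.std()
difference = z_crowd - z_official       # positive where crowd reports over-represent
```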

Journal ArticleDOI
Yu Liu1, B. Liu1, Bing Xu1, Z. X. Liu1, Kaichang Di1, Jin Zhou1 
TL;DR: In this article, the trajectory of the CE-3 descent images is recovered using self-calibrating free-net bundle adjustment, and then the topographic data are rectified by absolute orientation with GCPs selected from the adjusted CE-2 DEM and DOM.
Abstract: Chang'e-3 (CE-3) is the first lander and rover of China, following the success of the Chang'e-1 and Chang'e-2 (CE-2) orbiters. High precision topographic mapping can provide detailed terrain information to ensure the safety of the rover as well as to support scientific investigations. In this research, multi-source data are co-registered into a uniform geographic framework for high precision topographic mapping at the CE-3 landing site. CE-2 CCD images with 7 m and 1.5 m resolutions are registered using a self-calibration bundle adjustment method with ground control points (GCPs) selected from the LRO WAC mosaic map and LOLA DTM. The trajectory of the CE-3 descent images is recovered using self-calibrating free-net bundle adjustment, and then the topographic data are rectified by absolute orientation with GCPs selected from the adjusted CE-2 DEM and DOM. Finally, these topographic data are integrated into the same geographic framework for unified, multi-scale, high precision mapping of the CE-3 landing site. Key technologies and the mapping products of this research have been used to support the surface operations of the CE-3 mission.

Journal ArticleDOI
TL;DR: In this article, a new approach for the detection of accurate building boundaries by merging point clouds acquired by airborne laser scanning and aerial photographs is presented, which comprises two major parts: reconstruction of initial roof models from point clouds only, and refinement of their boundaries.
Abstract: . Geometrically and topologically correct 3D building models are required to satisfy new demands such as 3D cadastre, map updating, and decision making. Increasing attention has been paid to building reconstruction using Airborne Laser Scanning (ALS) point cloud data. The planimetric accuracy of roof outlines, including step-edges, is questionable in building models derived from point clouds alone. This paper presents a new approach for the detection of accurate building boundaries by merging point clouds acquired by ALS with aerial photographs. It comprises two major parts: reconstruction of initial roof models from point clouds only, and refinement of their boundaries. A shortest closed circle (graph) analysis method is employed to generate building models in the first step. Having the advantage of high reliability, this method provides reconstruction without prior knowledge of primitive building types, even when complex height jumps and various types of building roof are present. The accurate position of the boundaries of the initial models is determined by the integration of the edges extracted from aerial photographs. In this process, scene constraints defined on the basis of the initial roof models are introduced, as the initial roof models represent explicit, unambiguous geometries of the scene. Experiments were conducted using the ISPRS benchmark test data. Based on the test results, we show that the proposed approach can reconstruct 3D building models with higher geometrical (planimetric and vertical) and topological accuracy.

Journal ArticleDOI
TL;DR: In this paper, the authors present a data warehouse to store relevant flood prediction data which may be accessed via Hazus. This data warehouse will contain tools for On-Line Analytical Processing (OLAP) and knowledge discovery to quantitatively determine areas at risk and discover unexpected dependencies between datasets.
Abstract: In New Brunswick, flooding typically occurs during the spring freshet, though in recent years midwinter thaws have led to flooding in January or February. Municipalities are therefore facing a pressing need to perform risk assessments in order to identify communities at risk of flooding. In addition to the identification of communities at risk, quantitative measures of potential structural damage and societal losses are necessary for these identified communities. Furthermore, tools which allow for analysis and processing of possible mitigation plans are needed. Natural Resources Canada is in the process of adapting Hazus-MH to respond to the need for risk management. This requires extensive data from a variety of municipal, provincial, and national agencies in order to provide valid estimates. The aim is to establish a data warehouse to store relevant flood prediction data which may be accessed through Hazus. Additionally, this data warehouse will contain tools for On-Line Analytical Processing (OLAP) and knowledge discovery to quantitatively determine areas at risk and discover unexpected dependencies between datasets. The third application of the data warehouse is to provide data for online visualization capabilities: web-based thematic maps of Hazus results, historical flood visualizations, and mitigation tools, thus making flood hazard information and tools more accessible to emergency responders, planners, and residents. This paper represents the first step of the process: locating and collecting the appropriate datasets.