
Showing papers on "Change detection published in 2014"


Journal ArticleDOI
TL;DR: In this article, a two-step cloud, cloud shadow, and snow masking algorithm is used to eliminate noisy observations, and a time series model with seasonality, trend, and break components is used to estimate surface reflectance and brightness temperature.

981 citations


Proceedings ArticleDOI
23 Jun 2014
TL;DR: The latest release of the changedetection.net dataset is presented, which includes 22 additional videos spanning 5 new categories that incorporate challenges encountered in many surveillance settings and highlights strengths and weaknesses of these methods and identifies remaining issues in change detection.
Abstract: Change detection is one of the most important low-level tasks in video analytics. In 2012, we introduced the changedetection.net (CDnet) benchmark, a video dataset devoted to the evaluation of change and motion detection approaches. Here, we present the latest release of the CDnet dataset, which includes 22 additional videos (70,000 pixel-wise annotated frames) spanning 5 new categories that incorporate challenges encountered in many surveillance settings. We describe these categories in detail and provide an overview of the results of more than a dozen methods submitted to the IEEE Change Detection Workshop 2014. We highlight strengths and weaknesses of these methods and identify remaining issues in change detection.

680 citations


Journal ArticleDOI
TL;DR: Theoretical analysis and experimental results on real SAR datasets show that the proposed approach can detect real changes while mitigating the effect of speckle noise, and it is computationally simple in all the steps involved.
Abstract: In this paper, we put forward a novel approach for change detection in synthetic aperture radar (SAR) images. The approach classifies changed and unchanged regions by fuzzy c-means (FCM) clustering with a novel Markov random field (MRF) energy function. In order to reduce the effect of speckle noise, a novel form of the MRF energy function with an additional term is established to modify the membership of each pixel. In addition, the degree of modification is determined by the relationship of the neighborhood pixels. The specific form of the additional term depends on the situation and is ultimately established using the least-squares method. Our contributions are twofold. First, in order to reduce the effect of speckle noise, the proposed approach focuses on modifying the membership instead of modifying the objective function. It is computationally simple in all the steps involved, and its objective function reduces to the original form of FCM, which makes it faster than several recently improved FCM algorithms. Second, the proposed approach modifies the membership of each pixel according to a novel form of the MRF energy function through which the neighbors of each pixel, as well as their relationships, are taken into account. Theoretical analysis and experimental results on real SAR datasets show that the proposed approach can detect real changes while mitigating the effect of speckle noise. Theoretical analysis and experiments also demonstrate its low time complexity.

270 citations
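The clustering backbone this abstract builds on is standard fuzzy c-means applied to a difference image; the paper's novel MRF energy term and membership-modification step are not reproduced here. A minimal NumPy sketch of that backbone, assuming a log-ratio difference image and two clusters (the fuzziness exponent m=2 and the +1.0 intensity offset are illustrative choices):

```python
import numpy as np

def fcm_change_map(img1, img2, m=2.0, n_iter=100, tol=1e-5):
    """Two-class fuzzy c-means on a log-ratio difference image.

    Only the plain FCM step is sketched; the paper's MRF-based membership
    modification is omitted."""
    # Log-ratio operator: turns multiplicative speckle into an additive term.
    diff = np.abs(np.log((img1 + 1.0) / (img2 + 1.0))).ravel()

    centers = np.array([diff.min(), diff.max()], dtype=float)
    u = np.full((diff.size, 2), 0.5)

    for _ in range(n_iter):
        # Update memberships from distances to the current centres.
        d = np.abs(diff[:, None] - centers[None, :]) + 1e-12
        u = 1.0 / d ** (2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)
        # Update centres from the fuzzified memberships.
        um = u ** m
        new_centers = (um.T @ diff) / um.sum(axis=0)
        if np.abs(new_centers - centers).max() < tol:
            centers = new_centers
            break
        centers = new_centers

    # The cluster with the larger centre is taken as the changed class.
    changed = np.argmax(u, axis=1) == int(np.argmax(centers))
    return changed.reshape(img1.shape)
```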


Journal ArticleDOI
TL;DR: This paper proposes a novel slow feature analysis (SFA) algorithm for change detection that performs better in detecting changes than the other state-of-the-art change detection methods.
Abstract: Change detection was one of the earliest and is also one of the most important applications of remote sensing technology. For multispectral images, an effective solution for the change detection problem is to exploit all the available spectral bands to detect the spectral changes. However, in practice, the temporal spectral variance makes it difficult to separate changes and nonchanges. In this paper, we propose a novel slow feature analysis (SFA) algorithm for change detection. Compared with changed pixels, the unchanged ones should be spectrally invariant and varying slowly across the multitemporal images. SFA extracts the most temporally invariant component from the multitemporal images to transform the data into a new feature space. In this feature space, the differences in the unchanged pixels are suppressed so that the changed pixels can be better separated. Three SFA change detection approaches, comprising unsupervised SFA, supervised SFA, and iterative SFA, are constructed. Experiments on two groups of real Enhanced Thematic Mapper data sets show that our proposed method performs better in detecting changes than the other state-of-the-art change detection methods.

244 citations
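A worked reading of the unsupervised variant: SFA here amounts to a generalized eigenvalue problem that finds projections in which the temporal difference varies as slowly as possible relative to the data, so that unchanged pixels are suppressed and changed ones stand out. The sketch below follows that reading only; band-wise standardisation, the averaged image covariance used as the normalising matrix, and the chi-square-style change magnitude are assumptions of this illustration, not the paper's exact formulation.

```python
import numpy as np
from scipy.linalg import eigh

def sfa_change_magnitude(x1, x2):
    """x1, x2: (n_pixels, n_bands) multispectral images of the same scene
    at two dates. Returns a per-pixel change magnitude."""
    # Standardise each band so the two dates are comparable.
    x1 = (x1 - x1.mean(0)) / x1.std(0)
    x2 = (x2 - x2.mean(0)) / x2.std(0)

    diff = x1 - x2
    # Covariance of the temporal difference (to be minimised) ...
    a = np.cov(diff, rowvar=False)
    # ... relative to the covariance of the data itself (normalisation).
    b = 0.5 * (np.cov(x1, rowvar=False) + np.cov(x2, rowvar=False))

    # Generalised eigenproblem: smallest eigenvalues = slowest features.
    eigvals, w = eigh(a, b)

    # Project the difference into the slow-feature space and normalise each
    # component by its variance before taking the squared magnitude.
    sfa_diff = diff @ w
    return np.sum(sfa_diff ** 2 / (eigvals + 1e-12), axis=1)
```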


Proceedings ArticleDOI
23 Jun 2014
TL;DR: A moving object detection system named Flux Tensor with Split Gaussian models (FTSG) that exploits the benefits of fusing a motion computation method based on spatio-temporal tensor formulation, a novel foreground and background modeling scheme, and a multi-cue appearance comparison is presented.
Abstract: In this paper, we present a moving object detection system named Flux Tensor with Split Gaussian models (FTSG) that exploits the benefits of fusing a motion computation method based on spatio-temporal tensor formulation, a novel foreground and background modeling scheme, and a multi-cue appearance comparison. This hybrid system can handle challenges such as shadows, illumination changes, dynamic background, and stopped and removed objects. Extensive testing performed on the CVPR 2014 Change Detection benchmark dataset shows that FTSG outperforms state-of-the-art methods.

223 citations


Journal ArticleDOI
TL;DR: This paper proposes a change detection method based on stereo imagery and digital surface models generated with stereo matching methodology and provides a solution by the joint use of height changes and Kullback-Leibler divergence similarity measure between the original images.
Abstract: Building change detection is a major issue for urban area monitoring. Due to different imaging conditions and sensor parameters, 2-D information delivered by satellite images from different dates is often not sufficient when dealing with building changes. Moreover, due to similar spectral characteristics, it is often difficult to distinguish buildings from other man-made constructions, like roads and bridges, during the change detection procedure. Therefore, stereo imagery is important for providing the height component, which is very helpful in analyzing 3-D building changes. In this paper, we propose a change detection method based on stereo imagery and digital surface models (DSMs) generated with stereo matching methodology and provide a solution by the joint use of height changes and the Kullback-Leibler divergence similarity measure between the original images. The Dempster-Shafer fusion theory is adopted to combine these two change indicators to improve the accuracy. In addition, vegetation and shadow classifications are used as no-building change indicators for refining the change detection results. In the end, an object-based building extraction method based on shape features is performed. For evaluation purposes, the proposed method is applied in two test areas: one is an industrial area in Korea with stereo imagery from the same sensor, and the other is a dense urban area in Germany with stereo imagery from different sensors with different resolutions. Our experimental results confirm the efficiency and high accuracy of the proposed methodology even for different kinds and combinations of stereo images and consequently different DSM qualities.

202 citations


Journal ArticleDOI
TL;DR: This work proposes to apply principal component analysis (PCA) for feature extraction prior to detecting changes in multidimensional unlabeled data and shows that feature extraction through PCA is beneficial, specifically for data with multiple balanced classes.
Abstract: When classifiers are deployed in real-world applications, it is assumed that the distribution of the incoming data matches the distribution of the data used to train the classifier. This assumption is often incorrect, which necessitates some form of change detection or adaptive classification. While there has been a lot of work on change detection based on the classification error monitored over the course of the operation of the classifier, finding changes in multidimensional unlabeled data is still a challenge. Here, we propose to apply principal component analysis (PCA) for feature extraction prior to the change detection. Supported by a theoretical example, we argue that the components with the lowest variance should be retained as the extracted features because they are more likely to be affected by a change. We chose a recently proposed semiparametric log-likelihood change detection criterion that is sensitive to changes in both mean and variance of the multidimensional distribution. An experiment with 35 datasets and an illustration with a simple video segmentation demonstrate the advantage of using extracted features compared to raw data. Further analysis shows that feature extraction through PCA is beneficial, specifically for data with multiple balanced classes.

174 citations
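The argument, that low-variance principal components are the most change-sensitive, translates into a short pipeline: fit PCA on a reference window, keep the components with the smallest eigenvalues, and score new windows on those features. A minimal sketch (the fraction of retained components and the Gaussian log-likelihood-style score stand in for the paper's semiparametric log-likelihood criterion and are assumptions of this illustration):

```python
import numpy as np

def low_variance_pca_features(w1, frac=0.5):
    """Fit PCA on a reference window w1 (n_samples, n_features) and return
    the projection onto the LOWEST-variance components, argued to be the
    most change-sensitive."""
    mu = w1.mean(0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(w1, rowvar=False))  # ascending
    k = max(1, int(frac * w1.shape[1]))
    return mu, eigvecs[:, :k], eigvals[:k]

def change_score(w2, mu, components, variances):
    """Gaussian log-likelihood-style statistic on the extracted features of
    a new window w2 (a stand-in for the paper's semiparametric criterion)."""
    z = (w2 - mu) @ components
    return float(np.mean(np.sum(z ** 2 / (variances + 1e-12), axis=1)))

# Usage sketch: a score well above the reference-window baseline signals change.
rng = np.random.default_rng(0)
ref = rng.normal(size=(500, 10))
new = rng.normal(loc=0.3, size=(500, 10))
mu, comps, var = low_variance_pca_features(ref)
print(change_score(ref, mu, comps, var), change_score(new, mu, comps, var))
```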


Journal ArticleDOI
TL;DR: A novel building change detection approach for multitemporal high-resolution images is proposed based on a recently developed morphological building index (MBI), which is able to automatically indicate the presence of buildings from high- Resolution images.
Abstract: In this study, urban building change detection is investigated, considering that buildings are one of the most dynamic structures in urban areas. To this aim, a novel building change detection approach for multitemporal high-resolution images is proposed based on a recently developed morphological building index (MBI), which is able to automatically indicate the presence of buildings in high-resolution images. In the MBI-based change detection framework, the changed building information is decomposed into MBI, spectral, and shape conditions. A variation of the MBI is the basic condition for the indication of changed buildings. In addition, the spectral information is used as a mask, since building changes are primarily related to spectral variation, and the shape condition is then used as a post-filter to remove irregular structures such as noise and road-like narrow objects. The change detection framework is carried out with threshold-based processing at both the feature and decision levels. The advantages of the proposed method are that it does not need any training samples and that it reduces human labor, considering that current building change detection methods rely heavily on visual interpretation. The proposed method is evaluated with a QuickBird dataset from 2002 and 2005 covering the Hongshan District of Wuhan City, China. The experiments show that the proposed change detection algorithms can achieve satisfactory correctness rates (over 80%) with a low level of total errors (less than 10%), and give better results than supervised change detection using a support vector machine (SVM).

149 citations


Journal ArticleDOI
TL;DR: A simple and effective unsupervised approach based on the combined difference image and k-means clustering is proposed for the synthetic aperture radar (SAR) image change detection task, and local consistency and edge information of the difference image are considered.
Abstract: In this letter, a simple and effective unsupervised approach based on a combined difference image and k-means clustering is proposed for the synthetic aperture radar (SAR) image change detection task. First, we use one of the most popular denoising methods, the probabilistic-patch-based algorithm, for speckle noise reduction of the two multitemporal SAR images, and the subtraction operator and the log-ratio operator are applied to generate two kinds of simple change maps. The mean filter and the median filter are then applied to the two change maps, respectively, where the mean filter makes the change map smooth and locally consistent, and the median filter preserves the edge information. Next, a simple combination framework that uses the maps obtained by the mean filter and the median filter is proposed to generate a better change map. Finally, the k-means clustering algorithm with k = 2 is used to cluster it into two classes, changed and unchanged areas. Local consistency and edge information of the difference image are considered in this method. Experimental results obtained on four real SAR image data sets confirm the effectiveness of the proposed approach.

148 citations
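The pipeline in this abstract is compact enough to sketch end to end: two difference operators, a mean filter on one map and a median filter on the other, a simple fusion, and two-class k-means. The version below uses SciPy's filters and a tiny inline k-means; the speckle pre-filtering with the probabilistic-patch-based algorithm is omitted, and the 3x3 filter size and equal-weight fusion are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def combined_difference_change_map(img1, img2, size=3):
    """Sketch of the combined-difference-image pipeline described above."""
    sub = np.abs(img1.astype(float) - img2.astype(float))       # subtraction operator
    logr = np.abs(np.log((img1 + 1.0) / (img2 + 1.0)))          # log-ratio operator

    smooth = ndimage.uniform_filter(sub, size=size)   # mean filter: local consistency
    edges = ndimage.median_filter(logr, size=size)    # median filter: edge preservation

    combined = 0.5 * (smooth / (smooth.max() + 1e-12) + edges / (edges.max() + 1e-12))

    # Two-class k-means (k = 2) on the combined difference image.
    x = combined.ravel()
    c = np.array([x.min(), x.max()])
    for _ in range(50):
        labels = np.abs(x[:, None] - c[None, :]).argmin(1)
        new_c = np.array([x[labels == j].mean() if np.any(labels == j) else c[j]
                          for j in (0, 1)])
        if np.allclose(new_c, c):
            break
        c = new_c
    return labels.reshape(combined.shape) == int(c.argmax())    # True = changed
```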


Proceedings ArticleDOI
08 Jun 2014
TL;DR: The novel approach is an extension of the Naïve Bayesian approach and results in a generative model that builds on the relations to the directly surrounding vehicles and to the static traffic environment.
Abstract: Risk estimation for the current traffic situation is crucial for safe autonomous driving systems. One part of the uncertainty in risk estimation is the behavior of the surrounding traffic participants. In this paper we focus on highway scenarios, where possible behaviors consist of a change in acceleration and lane change maneuvers. We present a novel approach for the recognition of lane change intentions of traffic participants. Our novel approach is an extension of the Naive Bayesian approach and results in a generative model. It builds on the relations to the directly surrounding vehicles and to the static traffic environment. We obtain the conditional probabilities of all relevant features using Gaussian mixtures with a flexible number of components. We systematically reduce the number of features by selecting the most powerful ones. Furthermore, we investigate the predictive power of each feature with respect to the time before a lane change event. In a large-scale experiment on real-world data with over 160,781 samples collected on a test drive of 1100 km, we trained and validated our intention prediction model and achieved a significant improvement in the recognition performance of lane change intentions compared to current state-of-the-art methods.

141 citations


Journal ArticleDOI
TL;DR: A novel unsupervised change detection method for SAR images based on an image fusion strategy and compressed projection is presented, which is effective in terms of both shape preservation of the detected changes and numerical results.
Abstract: Multitemporal synthetic aperture radar (SAR) images have been successfully used for the detection of different types of terrain changes. SAR image change detection has recently become a challenging problem due to the existence of speckle and the complex mixture of the terrain environment. This paper presents a novel unsupervised change detection method for SAR images based on an image fusion strategy and compressed projection. First, a Gauss-log ratio operator is proposed to generate a difference image. In order to obtain a better difference map, an image fusion strategy is applied using complementary information from the Gauss-log ratio and log-ratio difference images. Second, the nonsubsampled contourlet transform (NSCT) is used to reduce the noise of the fused difference image, and compressed projection is employed to extract features for each pixel. The final change detection map is obtained by partitioning the feature vectors into “changed” and “unchanged” classes using simple k-means clustering. Experimental results show that the proposed method is effective for SAR image change detection in terms of both shape preservation of the detected changes and numerical results.

Journal ArticleDOI
TL;DR: The major steps involved in change detection are overviewed, major change detection methods are summarised, the impacts of scale and the complexity of study areas on the selection of remote-sensing data and change detection algorithms are discussed, and the need to develop new change detection methods is outlined.
Abstract: Research on change detection techniques has long been an active topic and many techniques have been developed. In reality, change detection is a comprehensive procedure that requires careful consideration of many factors such as the nature of change detection problems, image preprocessing, and the selection of suitable variables and algorithms. This paper briefly overviews the major steps involved in change detection, summarises major change detection methods, discusses the impacts of scale and the complexity of study areas on the selection of remote-sensing data and change detection algorithms, and finally discusses the need to develop new change detection methods. As high spatial resolution images have become easily available in the past decade, texture- and object-based methods have become valuable for improving change detection performance. At national and global scales, coarse spatial resolution satellite images such as MODIS have become important data sources for rapidly detecting land-cover change, but results have high unce...

Journal ArticleDOI
TL;DR: In this paper, a polynomial fitting-based scheme was used to detect changes in vegetation from satellite data for North Africa (including the Sahel) for the period 1982-2006, where the change detection approach retains more complex signatures embedded in long-term time series by preserving details about change rates.

Journal ArticleDOI
TL;DR: A method is presented that utilizes residuals from harmonic regression over years of Landsat data, in conjunction with statistical quality control charts, to signal subtle disturbances in vegetative cover; it is able to detect changes from both deforestation and subtler forest degradation and thinning.
Abstract: One challenge to implementing spectral change detection algorithms using multitemporal Landsat data is that key dates and periods are often missing from the record due to weather disturbances and lapses in continuous coverage. This paper presents a method that utilizes residuals from harmonic regression over years of Landsat data, in conjunction with statistical quality control charts, to signal subtle disturbances in vegetative cover. These charts are able to detect changes from both deforestation and subtler forest degradation and thinning. First, harmonic regression residuals are computed after fitting models to interannual training data. These residual time series are then subjected to Shewhart X-bar control charts and exponentially weighted moving average charts. The Shewhart X-bar charts are also utilized in the algorithm to generate a data-driven cloud filter, effectively removing clouds and cloud shadows on a location-specific basis. Disturbed pixels are indicated when the charts signal a deviation from data-driven control limits. The methods are applied to a collection of loblolly pine ( Pinus taeda) stands in Alabama, USA. The results are compared with stands for which known thinning has occurred at known times. The method yielded an overall accuracy of 85%, with the particular result that it provided afforestation/deforestation maps on a per-image basis, producing new maps with each successive incorporated image. These maps matched very well with observed changes in aerial photography over the test period. Accordingly, the method is highly recommended for on-the-fly change detection, for changes in both land use and land management within a given land use.
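The monitoring logic described here (harmonic regression residuals fed into control charts) can be sketched for a single pixel's time series. The sketch below uses a first-order harmonic fit on training years plus a plain Shewhart-style threshold on monitoring residuals; the paper's X-bar and EWMA charts, the data-driven cloud filter, and the per-image map generation are not reproduced, and the synthetic series is purely illustrative.

```python
import numpy as np

def harmonic_design(t, period=365.25):
    """First-order harmonic regression design matrix: intercept + annual sine/cosine."""
    w = 2.0 * np.pi / period
    return np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])

def control_chart_signals(t_train, y_train, t_mon, y_mon, n_sigma=3.0):
    """Fit the harmonic model on training years, then flag monitoring-period
    observations whose residuals leave Shewhart-style control limits."""
    beta, *_ = np.linalg.lstsq(harmonic_design(t_train), y_train, rcond=None)
    sigma = (y_train - harmonic_design(t_train) @ beta).std(ddof=1)
    resid_mon = y_mon - harmonic_design(t_mon) @ beta
    return np.abs(resid_mon) > n_sigma * sigma

# Usage sketch on a synthetic 16-day index series with a drop after day 1500.
t = np.arange(0.0, 2500.0, 16.0)
y = 0.5 + 0.2 * np.cos(2 * np.pi * t / 365.25) + np.random.normal(0, 0.02, t.size)
y[t > 1500] -= 0.15                      # simulated thinning/degradation drop
train = t < 1000
flags = control_chart_signals(t[train], y[train], t[~train], y[~train])
print("first flagged monitoring index:", int(np.argmax(flags)))
```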

Proceedings ArticleDOI
23 Jun 2014
TL;DR: The main features of the proposed WNN method are its dynamic adaptability to background change, due to the WNN model adopted, and the introduction of pixel color histories to improve system behavior in videos characterized by objects appearing or disappearing in the scene and/or sudden changes in lighting and in background brightness and shape.
Abstract: In this paper a pixel-based Weightless Neural Network (WNN) method is proposed to address the problem of change detection in the field of view of a camera. The main features of the proposed method are 1) dynamic adaptability to background change, due to the WNN model adopted, and 2) the introduction of pixel color histories to improve system behavior in videos characterized by objects appearing or disappearing in the scene and/or sudden changes in lighting and in background brightness and shape. The WNN approach is very simple and straightforward, and it ranks highly in comparison with other approaches applied to the ChangeDetection.net 2014 benchmark dataset.

Journal ArticleDOI
TL;DR: In this article, the authors proposed a method for change detection at street level by using a combination of mobile laser scanning (MLS) point clouds and terrestrial images: the accurate but expensive MLS data acquired from an early epoch serve as the reference, and terrestrial or photogrammetric images captured from an image-based mobile mapping system at a later epoch are used to detect the geometrical changes between different epochs.
Abstract: Automatic change detection and geo-database updating in the urban environment are difficult tasks. There has been much research on detecting changes with satellite and aerial images, but studies have rarely been performed at the street level, which is complex in its 3D geometry. Contemporary geo-databases include 3D street-level objects, which demand frequent data updating. Terrestrial images provide rich texture information for change detection, but change detection with terrestrial images from different epochs sometimes faces problems with illumination changes, perspective distortions and unreliable 3D geometry caused by the limited performance of automatic image matchers, while mobile laser scanning (MLS) data acquired from different epochs provide accurate 3D geometry for change detection but are very expensive to acquire periodically. This paper proposes a new method for change detection at street level by using a combination of MLS point clouds and terrestrial images: the accurate but expensive MLS data acquired from an early epoch serve as the reference, and terrestrial images or photogrammetric images captured from an image-based mobile mapping system (MMS) at a later epoch are used to detect the geometrical changes between different epochs. The method automatically marks the possible changes in each view, which provides a cost-efficient method for frequent data updating. The methodology is divided into several steps. In the first step, the point clouds are recorded by the MLS system and processed, with the data cleaned and classified by semi-automatic means. In the second step, terrestrial images or mobile mapping images at a later epoch are taken and registered to the point cloud, and the point clouds are then projected onto each image by a weighted-window-based z-buffering method for view-dependent 2D triangulation. In the next step, stereo pairs of the terrestrial images are rectified and re-projected between each other to check the geometrical consistency between point clouds and stereo images. Finally, an over-segmentation-based graph cut optimization is carried out, taking into account the color, depth and class information to compute the changed area in image space. The proposed method is invariant to light changes, robust to small co-registration errors between images and point clouds, and can be applied straightforwardly to 3D polyhedral models. This method can be used for 3D street data updating, city infrastructure management and damage monitoring in complex urban scenes.

Journal ArticleDOI
TL;DR: The iterated conditional modes (ICM) framework for the optimization of the maximum a posteriori (MAP-MRF) criterion function is extended to include a nonlocal probability maximization step, which has the potential to preserve spatial details and to reduce speckle effects.
Abstract: In remote sensing change detection, Markov random field (MRF) has been used successfully to model the prior probability using class-label dependencies. MRF has played an important role in the detection of complex urban changes using optical images. However, the preservation of details in urban change analysis turns out to be a highly complex task if multitemporal SAR images with their speckle are to be used. Here, the ability of MRF to preserve geometric details and to combat speckle effect at the same time becomes questionable. Blob-region phenomenon and fine-structure removal are common consequences of the application of the traditional MRF-based change detection algorithm. To overcome these limitations, the iterated conditional modes (ICM) framework for the optimization of the maximum a posteriori (MAP-MRF) criterion function is extended to include a nonlocal probability maximization step. This probability model, which characterizes the relationship between pixels' class-labels on a nonlocal scale, has the potential to preserve spatial details and to reduce speckle effects. Two multitemporal SAR datasets were used to assess the proposed algorithm. Experimental results using three density functions [i.e., the log normal (LN), generalized Gaussian (GG), and normal distributions (ND)] have demonstrated the efficiency of the proposed approach in terms of detail preservation and noise suppression. Compared with the traditional MRF algorithm, the proposed approach proved to be less sensitive to the value of the contextual parameter and the chosen density function. The proposed approach has also shown less sensitivity to the quality of the initial change map when compared with the ICM algorithm.

Journal ArticleDOI
TL;DR: This research uses reservoir sampling to build a sequential change detection model that offers statistically sound guarantees on false positive and false negative rates but has much smaller computational complexity than the ADWIN concept drift detector.
Abstract: In this research we present a novel approach to the concept change detection problem. Change detection is a fundamental issue with data stream mining as classification models generated need to be updated when significant changes in the underlying data distribution occur. A number of change detection approaches have been proposed but they all suffer from limitations with respect to one or more key performance factors such as high computational complexity, poor sensitivity to gradual change, or the opposite problem of high false positive rate. Our approach uses reservoir sampling to build a sequential change detection model that offers statistically sound guarantees on false positive and false negative rates but has much smaller computational complexity than the ADWIN concept drift detector. Extensive experimentation on a wide variety of datasets reveals that the scheme also has a smaller false detection rate while maintaining a competitive true detection rate to ADWIN.
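The building block named in this abstract, a fixed-size reservoir that remains a uniform random sample of the stream, is simple to sketch; the paper's sequential test with its false-positive/false-negative guarantees is not reproduced here.

```python
import random

class Reservoir:
    """Fixed-size uniform sample of a stream (Vitter's Algorithm R).
    Serves as the sampling building block only; the statistical change
    test applied on top of it in the paper is not reproduced."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.items = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, x):
        """Offer one stream element; each element seen so far is retained
        with probability capacity / seen."""
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(x)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = x

# Usage sketch: the reservoir summarises the reference distribution, which a
# detector can then compare against a recent window of the stream.
res = Reservoir(capacity=100)
for value in range(10_000):
    res.add(value)
print(len(res.items), min(res.items), max(res.items))
```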

Proceedings ArticleDOI
01 Dec 2014
TL;DR: This paper proposes a new less conservative and more sensitive condition for anomaly detection, quite different from the traditional “nσ” type conditions, and points to some possible applications which will be the domain of future work.
Abstract: In this paper, we propose a new eccentricity-based anomaly detection principle and algorithm. It is based on a further development of the recently introduced data analytics framework (TEDA - from typicality and eccentricity data analytics). We compare TEDA with the traditional statistical approach and prove that TEDA is a generalization of it in regards to the well-known “nσ” analysis (TEDA gives exactly the same result as the traditional “nσ” analysis but it does not require the restrictive prior assumptions that are made for the traditional approach to be in place). Moreover, it offers non-parametric, closed-form analytical descriptions (models of the data distribution) to be extracted from the real data realizations, not pre-assumed. In addition to that, for several types of proximity/similarity measures (such as Euclidean, cosine, Mahalanobis) it can be calculated recursively, thus computationally very efficiently, and is suitable for real-time and online algorithms. Building on the per-data-sample, exact information about the data distribution in a closed analytical form, in this paper we propose a new, less conservative and more sensitive condition for anomaly detection. It is quite different from the traditional “nσ” type conditions. We demonstrate an example where traditional conditions would lead to an increased amount of false negatives or false positives in comparison with the proposed condition. The new condition is intuitive and easy to check for an arbitrary data distribution and an arbitrarily small (but not less than 3) number of data samples/points. Finally, because anomaly/novelty/change detection is a very important and basic data analysis operation which underpins such higher-level tasks as fault detection, drift detection in data streams, clustering, outlier detection, autonomous video analytics, particle physics, etc., we point to some possible applications which will be the domain of future work.
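The recursive quantities TEDA relies on are easy to sketch: a running mean, a running mean squared norm, and from those the eccentricity of each new sample. The sketch below implements that recursion; the paper's new, less conservative anomaly condition is not reproduced, so the threshold in the usage example is only an nσ-style placeholder, and the injected outlier is synthetic.

```python
import numpy as np

class TedaEccentricity:
    """Recursive eccentricity in the spirit of TEDA. The decision rule used
    below is an illustrative placeholder, not the paper's new condition."""
    def __init__(self):
        self.k = 0
        self.mean = None
        self.sq_norm = 0.0        # running mean of squared norms

    def update(self, x):
        x = np.asarray(x, dtype=float)
        self.k += 1
        if self.k == 1:
            self.mean = x.copy()
            self.sq_norm = float(x @ x)
            return 0.0
        # Recursive mean and mean squared norm.
        self.mean += (x - self.mean) / self.k
        self.sq_norm += (float(x @ x) - self.sq_norm) / self.k
        var = self.sq_norm - float(self.mean @ self.mean)      # scatter estimate
        d2 = float((x - self.mean) @ (x - self.mean))
        # Eccentricity of the current sample; its average over all samples is 2/k.
        return 1.0 / self.k + d2 / (self.k * max(var, 1e-12))

# Usage sketch: samples far more eccentric than the 2/k average are flagged.
rng = np.random.default_rng(1)
teda = TedaEccentricity()
for i in range(200):
    x = rng.normal(size=2) if i != 150 else np.array([8.0, 8.0])  # injected anomaly
    ecc = teda.update(x)
    if teda.k >= 3 and ecc > 5 * (2.0 / teda.k):                  # placeholder threshold
        print("anomalous sample at", i)
```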

Journal ArticleDOI
TL;DR: In this article, the authors employed parameter adaptive estimators to provide analytical redundancies and designed a dedicated diagnosis scheme for airspeed sensor faults with pitot tube clogging or icing being the most common causes.
Abstract: Airspeed sensor faults are common causes for incidents with unmanned aerial vehicles (UAV) with pitot tube clogging or icing being the most common causes. Timely diagnosis of such faults or other artifacts in signals from airspeed sensing systems could potentially prevent crashes. This paper employs parameter adaptive estimators to provide analytical redundancies and a dedicated diagnosis scheme is designed. Robustness is investigated on sets of flight data to estimate distributions of test statistics. The result is robust diagnosis with adequate balance between false alarm rate and fault detectability.

Journal ArticleDOI
TL;DR: Various aspects of the new data set, quantitative performance metrics used, and comparative results for over two dozen change detection algorithms are discussed, including important conclusions on solved and remaining issues in change detection, and future challenges for the scientific community are described.
Abstract: Change detection is one of the most commonly encountered low-level tasks in computer vision and video processing. A plethora of algorithms have been developed to date, yet no widely accepted, realistic, large-scale video data set exists for benchmarking different methods. Presented here is a unique change detection video data set consisting of nearly 90,000 frames in 31 video sequences representing six categories selected to cover a wide range of challenges in two modalities (color and thermal infrared). A distinguishing characteristic of this benchmark video data set is that each frame is meticulously annotated by hand for ground-truth foreground, background, and shadow area boundaries, an effort that goes well beyond a simple binary label denoting the presence of change. This enables objective and precise quantitative comparison and ranking of video-based change detection algorithms. This paper discusses various aspects of the new data set, quantitative performance metrics used, and comparative results for over two dozen change detection algorithms. It draws important conclusions on solved and remaining issues in change detection, and describes future challenges for the scientific community. The data set, evaluation tools, and algorithm rankings are available to the public on a website and will be updated with feedback from academia and industry in the future.
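Because the value of the dataset lies in pixel-accurate ground truth, the basic quantitative comparison it enables reduces to confusion-matrix statistics per frame. A simplified sketch of that scoring (the official CDnet protocol also handles "unknown"/boundary labels and reports further metrics such as specificity and percentage of wrong classifications):

```python
import numpy as np

def change_detection_scores(detected, ground_truth):
    """Per-frame precision/recall/F-measure for binary change masks."""
    det = detected.astype(bool)
    gt = ground_truth.astype(bool)
    tp = np.logical_and(det, gt).sum()
    fp = np.logical_and(det, ~gt).sum()
    fn = np.logical_and(~det, gt).sum()
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```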

Journal ArticleDOI
TL;DR: An automatic and effective approach to the thresholding of the log-ratio change indicator whose histogram may have one mode or more than one mode, and results obtained on multitemporal SAR images of Toronto and Beijing demonstrate the effectiveness of the proposed approach.
Abstract: Unsupervised change detection in multitemporal single-polarization synthetic aperture radar (SAR) images often involves thresholding of the image change indicator. If one class, which is usually the unchanged class, comprises a disproportionately large part of the scene, the image change indicator may have a unimodal histogram. Image thresholding of such a change indicator is a challenging task. In this paper, we present an automatic and effective approach to the thresholding of the log-ratio change indicator whose histogram may have one mode or more than one mode. A bimodality test is performed to determine whether the histogram of the log-ratio image is unimodal or not. If it has more than one mode, the generalized Kittler and Illingworth thresholding (GKIT) algorithm based on the generalized Gaussian model (GG-GKIT) is used to detect the optimal threshold values. If it is unimodal, the log-ratio image is divided into small regions and a multiscale region selection process is carried out to select regions which are a balanced mixture of unchanged and changed classes. The selected regions are combined to generate a new histogram. The optimal threshold value obtained from the new histogram is then used to separate unchanged pixels from changed pixels in the log-ratio image. Experimental results obtained on multitemporal SAR images of Toronto and Beijing demonstrate the effectiveness of the proposed approach.
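For the multimodal case this abstract relies on Kittler-Illingworth minimum-error thresholding; the sketch below is the classic two-Gaussian version of that criterion applied to a log-ratio image. The paper's GKIT variant models the classes with generalized Gaussian distributions and adds the bimodality test and multiscale region selection, none of which are reproduced here; the 256-bin histogram is an illustrative choice.

```python
import numpy as np

def kittler_illingworth_threshold(log_ratio, n_bins=256):
    """Minimum-error (Kittler-Illingworth) threshold on a log-ratio image,
    under a two-Gaussian class assumption."""
    hist, edges = np.histogram(log_ratio.ravel(), bins=n_bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])

    best_t, best_cost = centers[0], np.inf
    for i in range(1, n_bins - 1):
        p1, p2 = p[:i].sum(), p[i:].sum()
        if p1 <= 0 or p2 <= 0:
            continue
        m1 = (p[:i] * centers[:i]).sum() / p1
        m2 = (p[i:] * centers[i:]).sum() / p2
        v1 = (p[:i] * (centers[:i] - m1) ** 2).sum() / p1
        v2 = (p[i:] * (centers[i:] - m2) ** 2).sum() / p2
        if v1 <= 0 or v2 <= 0:
            continue
        # Kittler-Illingworth criterion: minimise the classification-error bound.
        cost = p1 * np.log(np.sqrt(v1) / p1) + p2 * np.log(np.sqrt(v2) / p2)
        if cost < best_cost:
            best_cost, best_t = cost, centers[i]
    return best_t
```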

Journal ArticleDOI
TL;DR: The approach provides a non-destructive tool for monitoring important forest characteristics without laborious biomass sampling and enables the repeating of these measurements over time for a large number of samples, providing a fast and effective means for monitoring forest growth, mortality, and biomass in 3D.
Abstract: We present a new application of terrestrial laser scanning and mathematical modelling for the quantitative change detection of tree biomass, volume, and structure. We investigate the feasibility of the approach with two case studies on trees, assess the accuracy with laboratory reference measurements, and identify the main sources of error and ways to mitigate their effect on the results. We show that changes in the tree branching structure can be reproduced with about ±10% accuracy. As current biomass detection is based on destructive sampling and change detection on empirical models, our approach provides a non-destructive tool for monitoring important forest characteristics without laborious biomass sampling. The efficiency of the approach enables these measurements to be repeated over time for a large number of samples, providing a fast and effective means for monitoring forest growth, mortality, and biomass in 3D.

Journal ArticleDOI
Maoguo Gong, Yu Li, Licheng Jiao, Meng Jia, Linzhi Su
TL;DR: In this paper, a novel change detection approach is proposed for multitemporal synthetic aperture radar (SAR) images, based on two difference images, which are constructed through intensity and texture information, respectively.
Abstract: In this paper, a novel change detection approach is proposed for multitemporal synthetic aperture radar (SAR) images. The approach is based on two difference images, which are constructed through intensity and texture information, respectively. In the extraction of the texture differences, the robust principal component analysis technique is used to separate irrelevant and noisy elements from Gabor responses. Graph cuts are then improved by a novel energy function based on a multivariate generalized Gaussian model for more accurate fitting. The effectiveness of the proposed method is demonstrated by experimental results obtained on several real SAR image data sets.

Journal ArticleDOI
TL;DR: A fast, well performing and theoretically tractable method for detecting multiple change points in the structure of an auto‐regressive conditional heteroscedastic model for financial returns with piecewise constant parameter values is proposed.
Abstract: The emergence of the recent financial crisis, during which markets frequently underwent changes in their statistical structure over a short period of time, illustrates the importance of non-stationary modelling in financial time series. Motivated by this observation, we propose a fast, well performing and theoretically tractable method for detecting multiple change points in the structure of an auto-regressive conditional heteroscedastic model for financial returns with piecewise constant parameter values. Our method, termed BASTA (binary segmentation for transformed auto-regressive conditional heteroscedasticity), proceeds in two stages: process transformation and binary segmentation. The process transformation decorrelates the original process and lightens its tails; the binary segmentation consistently estimates the change points. We propose and justify two particular transformations and use simulation to fine-tune their parameters as well as the threshold parameter for the binary segmentation stage. A comparative simulation study illustrates good performance in comparison with the state of the art, and the analysis of the Financial Times Stock Exchange FTSE 100 index reveals an interesting correspondence between the estimated change points and major events of the recent financial crisis. Although the method is easy to implement, ready-made R software is provided.

Journal ArticleDOI
TL;DR: In this paper, the authors present an approach for change-based land cover time series (LCTS) development following from previous research, but with significant advancements in change detection, training, classification, and evidence-based refinement.

Book ChapterDOI
01 Jan 2014
TL;DR: It is shown that little can be done at the single sensor level unless strong hypotheses are made and that the situation is different if the embedded system mounts a rich sensor platform or is inserted in a sensor network.
Abstract: Sensors and real apparatus are prone to faults that, in turn, affect the quality of retrieved data. Detection of faults or erroneous behaviors in sensor data streams must be anticipated to prevent drastic side effects (recall that we make decisions out of incoming data). Cognitive fault diagnosis systems aim at detecting, identifying, and isolating the occurrence of faults without assuming that the process generating the data is known. It is shown that little can be done at the single-sensor level unless strong hypotheses are made. However, the situation is different if the embedded system mounts a rich sensor platform or is inserted in a sensor network. In such a case, redundancy in the information content and functional dependencies among sensors can be exploited to classify a change as a fault, a change in the environment, or an inefficiency of the change detection method (model bias).

Journal ArticleDOI
TL;DR: An expectation-maximization-based level set method (EMLS) is proposed to detect changes, and experimental results confirm the EMLS effectiveness when compared to state-of-the-art unsupervised change detection methods.
Abstract: The level set method, because of its implicit handling of topological changes and low sensitivity to noise, is one of the most effective unsupervised change detection techniques for remotely sensed images. In this letter, an expectation-maximization-based level set method (EMLS) is proposed to detect changes. First, the distribution of the difference image generated from the multitemporal images is assumed to follow a Gaussian mixture model, and expectation-maximization (EM) is then used to estimate the mean values of changed and unchanged pixels in the difference image. Second, two new energy terms, based on the estimated means, are defined and added into the level set method to detect changes without initial contours and to improve final accuracy. Finally, the improved level set method is implemented to partition the image into changed and unchanged pixels. Landsat and QuickBird images were tested, and experimental results confirm the effectiveness of EMLS when compared to state-of-the-art unsupervised change detection methods.
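The first step of the EMLS pipeline, estimating the means of the changed and unchanged classes with EM under a two-component Gaussian mixture on the difference image, can be sketched on its own; the level-set energy terms and the contour evolution that use these estimates are not reproduced here.

```python
import numpy as np

def em_two_gaussian_means(diff, n_iter=100, tol=1e-6):
    """EM for a two-component Gaussian mixture on a difference image,
    returning the estimated means of the unchanged and changed classes."""
    x = diff.ravel().astype(float)
    mu = np.array([x.min(), x.max()])
    var = np.array([x.var(), x.var()]) + 1e-12
    pi = np.array([0.5, 0.5])

    for _ in range(n_iter):
        # E-step: responsibilities under the current Gaussian parameters.
        pdf = (pi / np.sqrt(2 * np.pi * var)) * \
              np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
        resp = pdf / (pdf.sum(axis=1, keepdims=True) + 1e-300)
        # M-step: re-estimate weights, means and variances.
        nk = resp.sum(axis=0)
        new_mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - new_mu) ** 2).sum(axis=0) / nk + 1e-12
        pi = nk / x.size
        if np.abs(new_mu - mu).max() < tol:
            mu = new_mu
            break
        mu = new_mu
    return np.sort(mu)    # [unchanged-class mean, changed-class mean]
```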

Proceedings ArticleDOI
23 Jun 2014
TL;DR: This paper presents and assesses a novel physics-based change detection technique, Spectral-360, which is based on the dichromatic color reflectance model; an objective evaluation performed using the 'changedetection.net 2014' dataset shows that it outperforms most state-of-the-art methods.
Abstract: This paper presents and assesses a novel physics-based change detection technique, Spectral-360, which is based on the dichromatic color reflectance model. This approach uses image formation models to computationally estimate, from the camera output, a consistent physics-based color descriptor of the spectral reflectance of surfaces visible in the image, and then to measure the similarity between the full-spectrum reflectance of the background and foreground pixels to segment the foreground from a static background. This method represents a new approach to change detection, using explicit hypotheses about the physics that create images. The assumptions made are that only diffuse reflection is present and that a dominant illuminant exists. The objective evaluation performed using the 'changedetection.net 2014' dataset shows that our Spectral-360 method outperforms most state-of-the-art methods.

Journal ArticleDOI
Rongjun Qin
TL;DR: A novel method is proposed to detect changes directly on LOD (Level of Detail) 2 building models with VHR spaceborne stereo images from a different date, with particular focus on addressing the special characteristics of the 3D models.
Abstract: Due to the fast development of the urban environment, the need for efficient maintenance and updating of 3D building models is ever increasing. Change detection is an essential step for spotting changed areas for data (map/3D model) updating and urban monitoring. Traditional methods based on 2D images are no longer suitable for change detection at the building scale, owing to the increased spectral variability of the building roofs and the larger perspective distortion of very high resolution (VHR) imagery. Change detection in 3D is increasingly being investigated using airborne laser scanning data or matched Digital Surface Models (DSM), but few studies have been conducted on change detection for 3D city models with VHR images, which is more informative but also more complicated. This is due to the fact that the 3D models are abstracted geometric representations of the urban reality, while the VHR images record everything. In this paper, a novel method is proposed to detect changes directly on LOD (Level of Detail) 2 building models with VHR spaceborne stereo images from a different date, with particular focus on addressing the special characteristics of the 3D models. In the first step, the 3D building models are projected onto a raster grid, encoded with building object, terrain object, and planar faces. The DSM is extracted from the stereo imagery by hierarchical semi-global matching (SGM). In the second step, a multi-channel change indicator is extracted between the 3D models and stereo images, considering the inherent geometric consistency (IGC), height difference, and texture similarity for each planar face. Each channel of the indicator is then clustered with the Self-organizing Map (SOM), with “change”, “non-change” and “uncertain change” status labeled through a voting strategy. The “uncertain changes” are then determined with a Markov Random Field (MRF) analysis considering the geometric relationship between faces. In the third step, buildings are extracted by combining the multispectral images and the DSM with morphological operators, and the new buildings are determined by excluding the verified unchanged buildings from the second step. Both a synthetic experiment with Worldview-2 stereo imagery and a real experiment with IKONOS stereo imagery are carried out to demonstrate the effectiveness of the proposed method. It is shown that the proposed method can be applied as an effective way to monitor building changes, as well as to update 3D models from one epoch to another.