scispace - formally typeset

Showing papers on "Change detection published in 2017"


Journal ArticleDOI
TL;DR: This survey article enumerates, categorizes, and compares many of the methods that have been proposed to detect change points in time series, and presents some grand challenges for the community to consider.
Abstract: Change points are abrupt variations in time series data. Such abrupt changes may represent transitions that occur between states. Detection of change points is useful in modelling and prediction of time series and is found in application areas such as medical condition monitoring, climate change detection, speech and image analysis, and human activity analysis. This survey article enumerates, categorizes, and compares many of the methods that have been proposed to detect change points in time series. The methods examined include both supervised and unsupervised algorithms that have been introduced and evaluated. We introduce several criteria to compare the algorithms. Finally, we present some grand challenges for the community to consider.
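One of the simplest unsupervised families such surveys cover is the CUSUM-style sequential test. The sketch below is purely illustrative (the function name, reference-window size `k`, `threshold`, and `drift` are assumptions, not from the paper): it standardizes against an initial window assumed to be pre-change, then accumulates deviations minus a drift term and flags the first index that crosses the threshold.

```python
import numpy as np

def cusum_change_point(x, k=20, threshold=5.0, drift=0.5):
    """Two-sided CUSUM: standardize against the first k samples
    (assumed pre-change), accumulate positive/negative deviations
    minus a drift allowance, and flag the first threshold crossing."""
    x = np.asarray(x, dtype=float)
    mu, sd = x[:k].mean(), x[:k].std() + 1e-12
    z = (x - mu) / sd
    s_pos = s_neg = 0.0
    for i in range(k, len(x)):
        s_pos = max(0.0, s_pos + z[i] - drift)
        s_neg = max(0.0, s_neg - z[i] - drift)
        if s_pos > threshold or s_neg > threshold:
            return i
    return None

# Alternating 0/1 "noise" for 50 steps, then an abrupt shift to 5.0.
series = np.concatenate([np.tile([0.0, 1.0], 25), np.full(50, 5.0)])
print(cusum_change_point(series))  # → 50
```

A real deployment would tune the drift and threshold to trade detection delay against false alarms, which is exactly the comparison axis the survey formalizes.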

788 citations


Journal ArticleDOI
Zhe Zhu
TL;DR: It is observed that the more recent the study, the higher the frequency of the Landsat time series used; some of the widely used change detection algorithms are also discussed, including thresholding, differencing, segmentation, trajectory classification, and regression.
Abstract: The free and open access to all archived Landsat images in 2008 has completely changed the way Landsat data are used. Many novel change detection algorithms based on Landsat time series have been developed. We present a comprehensive review of four important aspects of change detection studies based on Landsat time series: frequencies, preprocessing, algorithms, and applications. We observed the trend that the more recent the study, the higher the frequency of the Landsat time series used. We reviewed a series of image preprocessing steps, including atmospheric correction, cloud and cloud shadow detection, and composite/fusion/metrics techniques. We divided all change detection algorithms into six categories: thresholding, differencing, segmentation, trajectory classification, statistical boundary, and regression. Within each category, six major characteristics of different algorithms, such as frequency, change index, univariate/multivariate, online/offline, abrupt/gradual change, and sub-pixel/pixel/spatial, were analyzed. Moreover, some of the widely used change detection algorithms were also discussed. Finally, we reviewed different change detection applications by dividing them into two categories: change target and change agent detection.
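The "differencing" and "thresholding" categories above can be sketched in a few lines of NumPy. The k-sigma rule here is an illustrative stand-in, not any specific published threshold selection scheme:

```python
import numpy as np

def difference_change_map(band_t1, band_t2, k=2.0):
    """Per-pixel differencing of one spectral band: flag pixels whose
    difference deviates more than k standard deviations from the
    mean difference (an illustrative k-sigma threshold)."""
    d = band_t2.astype(float) - band_t1.astype(float)
    return (np.abs(d - d.mean()) > k * d.std()).astype(np.uint8)

t1 = np.zeros((4, 4))
t2 = t1.copy()
t2[1:3, 1:3] = 10.0          # a 2x2 "changed" patch
change = difference_change_map(t1, t2, k=1.0)
print(int(change.sum()))      # → 4
```

The more sophisticated categories in the review (trajectory classification, statistical boundary, regression) replace this single-date difference with models fitted over the full time series.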

521 citations


Journal ArticleDOI
Yang Zhan, Kun Fu, Menglong Yan, Xian Sun, Hongqi Wang, Xiaosong Qiu
TL;DR: A novel supervised change detection method based on a deep siamese convolutional network for optical aerial images that performs comparably to, or even better than, two state-of-the-art methods in terms of F-measure.
Abstract: In this letter, we propose a novel supervised change detection method based on a deep siamese convolutional network for optical aerial images. We train a siamese convolutional network using the weighted contrastive loss. The novelty of the method is that the siamese network is learned to extract features directly from the image pairs. Compared with hand-crafted features used by conventional change detection methods, the extracted features are more abstract and robust. Furthermore, because of the advantage of the weighted contrastive loss function, the features have a unique property: the feature vectors of a changed pixel pair are far away from each other, while those of an unchanged pixel pair are close. Therefore, we use the distance of the feature vectors to detect changes between the image pair. Simple threshold segmentation on the distance map can even obtain good performance. For improvement, we use a k-nearest neighbor approach to update the initial result. Experimental results show that the proposed method produces results comparable to, and even better than, those of two state-of-the-art methods in terms of F-measure.
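The detection step described above, thresholding the distance between learned feature vectors, can be sketched as follows. The feature maps here are dummy arrays; in the paper they would come from the trained siamese network:

```python
import numpy as np

def distance_change_map(feat_a, feat_b, thr):
    """feat_*: (H, W, D) per-pixel feature maps from the two dates.
    Contrastive training pushes changed pairs apart and unchanged
    pairs together, so a large feature distance signals change."""
    dist = np.linalg.norm(feat_a - feat_b, axis=-1)
    return (dist > thr).astype(np.uint8), dist

fa = np.zeros((2, 2, 3))
fb = np.zeros((2, 2, 3))
fb[0, 0] = [1.0, 1.0, 1.0]   # one "changed" pixel
cmap, dist = distance_change_map(fa, fb, thr=1.0)
print(int(cmap.sum()))        # → 1
```

The paper's k-nearest neighbor refinement would then relabel each pixel by majority vote over its spatial neighbors in this initial map.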

402 citations


Journal ArticleDOI
TL;DR: In this study, a sparse autoencoder, convolutional neural networks (CNN), and unsupervised clustering are combined to solve the ternary change detection problem without any supervision; results on real datasets validate the effectiveness and superiority of the proposed framework.
Abstract: Ternary change detection aims to detect changes and group them into positive change and negative change. It is of great significance in the joint interpretation of spatial-temporal synthetic aperture radar images. In this study, a sparse autoencoder, convolutional neural networks (CNN), and unsupervised clustering are combined to solve the ternary change detection problem without any supervision. First, the sparse autoencoder is used to transform the log-ratio difference image into a suitable feature space for extracting key changes and suppressing outliers and noise. The learned features are then clustered into three classes, which are taken as pseudo labels for training a CNN model as a change feature classifier. Reliable training samples for the CNN are selected from the feature maps learned by the sparse autoencoder with certain selection rules. Given the training samples and the corresponding pseudo labels, the CNN model can be trained by back propagation with stochastic gradient descent. During training, the CNN is driven to learn the concept of change, and a more powerful model is established to distinguish different types of changes. Unlike traditional methods, the proposed framework integrates the merits of the sparse autoencoder and the CNN to learn more robust difference representations and the concept of change for ternary change detection. Experimental results on real datasets validate the effectiveness and superiority of the proposed framework.
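The log-ratio difference image the pipeline starts from is a standard SAR operator: taking the ratio turns multiplicative speckle into an additive term, so genuine changes stand out. A minimal version (the epsilon guard is an illustrative addition):

```python
import numpy as np

def log_ratio(img1, img2, eps=1e-6):
    """Log-ratio operator for SAR difference images: multiplicative
    speckle becomes additive, so changes dominate the magnitude."""
    return np.abs(np.log((img2 + eps) / (img1 + eps)))

a = np.full((3, 3), 100.0)
b = a.copy()
b[0, 0] = 400.0              # 4x backscatter increase at one pixel
d = log_ratio(a, b)
print(round(float(d[0, 0]), 3))  # → 1.386 (= ln 4)
```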

191 citations


Journal ArticleDOI
TL;DR: A new post-classification method with iterative slow feature analysis (ISFA) and Bayesian soft fusion is proposed to obtain reliable and accurate change detection maps and achieve a clearly higher change detection accuracy than the current state-of-the-art methods.

189 citations


Journal ArticleDOI
TL;DR: A novel change detection framework for high-resolution remote sensing images, which incorporates superpixel-based change feature extraction and hierarchical difference representation learning by neural networks is presented.
Abstract: With the rapid technological development of various satellite sensors, high-resolution remotely sensed imagery has been an important source of data for change detection in land cover transition. However, it is still a challenging problem to effectively exploit the available spectral information to highlight changes. In this paper, we present a novel change detection framework for high-resolution remote sensing images, which incorporates superpixel-based change feature extraction and hierarchical difference representation learning by neural networks. First, highly homogenous and compact image superpixels are generated using superpixel segmentation, which makes these image blocks adhere well to image boundaries. Second, change features are extracted to represent the difference information using spectrum, texture, and spatial features between the corresponding superpixels. Third, motivated by the fact that a deep neural network has the ability to learn from data sets with few labeled data, we use it to learn the semantic difference between the changed and unchanged pixels. The labeled data can be selected from the bitemporal multispectral images via a preclassification map generated in advance. Then, a neural network is built to learn the difference and classify the uncertain samples into changed or unchanged ones. Finally, a robust and high-contrast change detection result can be obtained from the network. The experimental results on real data sets demonstrate the effectiveness, feasibility, and superiority of the proposed technique.

167 citations


Journal ArticleDOI
TL;DR: This paper analyzes imagery data from remote sensing satellites to detect forest cover changes over a period of 29 years, and automatically learns region representations using a deep neural network in a data-driven fashion.
Abstract: Land cover change monitoring is an important task from the perspective of regional resource monitoring, disaster management, land development, and environmental planning. In this paper, we analyze imagery data from remote sensing satellites to detect forest cover changes over a period of 29 years (1987–2015). Since the original data are severely incomplete and contaminated with artifacts, we first devise a spatiotemporal inpainting mechanism to recover the missing surface reflectance information. The spatial filling process makes use of the available data of the nearby temporal instances followed by a sparse encoding-based reconstruction. We formulate the change detection task as a region classification problem. We build a multiresolution profile (MRP) of the target area and generate a candidate set of bounding-box proposals that enclose potential change regions. In contrast to existing methods that use handcrafted features, we automatically learn region representations using a deep neural network in a data-driven fashion. Based on these highly discriminative representations, we determine forest changes and predict their onset and offset timings by labeling the candidate set of proposals. Our approach achieves the state-of-the-art average patch classification rate of 91.6% (an improvement of ~16%) and the mean onset/offset prediction error of 4.9 months (an error reduction of five months) compared with a strong baseline. We also qualitatively analyze the detected changes in the unlabeled image regions, which demonstrate that the proposed forest change detection approach is scalable to new regions.

161 citations


Journal ArticleDOI
TL;DR: An improved sparse coding method for change detection that minimizes the reconstruction errors of the changed pixels without the prior assumption of the spectral signature, which can adapt to different data due to the characteristic of joint dictionary learning.
Abstract: Change detection is one of the most important applications of remote sensing technology. It is a challenging task due to the obvious variations in the radiometric value of the spectral signature and the limited capability of utilizing spectral information. In this paper, an improved sparse coding method for change detection is proposed. The intuition of the proposed method is that unchanged pixels in different images can be well reconstructed by the joint dictionary, which corresponds to knowledge of unchanged pixels, while changed pixels cannot. First, a query image pair is projected onto the joint dictionary to constitute the knowledge of unchanged pixels. Then the reconstruction error is obtained to discriminate between the changed and unchanged pixels in the different images. To select proper thresholds for determining changed regions, an automatic threshold selection strategy is presented that minimizes the reconstruction errors of the changed pixels. Adequate experiments on multispectral data have been conducted, and the experimental results, compared with state-of-the-art methods, prove the superiority of the proposed method. The contributions of the proposed method can be summarized as follows: 1) joint dictionary learning is proposed to explore the intrinsic information of different images for change detection, so change detection can be transformed into a sparse representation problem; to the authors' knowledge, few publications utilize joint dictionary learning in change detection; 2) an automatic threshold selection strategy is presented, which minimizes the reconstruction errors of the changed pixels without any prior assumption about the spectral signature; as a result, the threshold value provided by the proposed method can adapt to different data due to the characteristic of joint dictionary learning; and 3) the proposed method makes no prior assumption about the modeling and the handling of the spectral signature, so it can be adapted to different data.
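The core discriminant, that unchanged pixel pairs reconstruct well from a joint dictionary while changed pairs do not, can be sketched with plain least squares standing in for the sparse coder. The toy dictionary and pixel pairs below are invented for illustration:

```python
import numpy as np

def reconstruction_errors(pairs, dictionary):
    """pairs: (N, 2*B) stacked bitemporal pixel vectors.
    dictionary: (K, 2*B) atoms built from unchanged pixel pairs.
    Unchanged pairs reconstruct well from the joint dictionary;
    changed pairs leave a large residual (the change score)."""
    coeffs, *_ = np.linalg.lstsq(dictionary.T, pairs.T, rcond=None)
    recon = (dictionary.T @ coeffs).T
    return np.linalg.norm(pairs - recon, axis=1)

# Toy joint dictionary whose atoms encode "x_t2 equals x_t1".
D = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
unchanged = np.array([[2.0, 3.0, 2.0, 3.0]])   # same spectrum twice
changed   = np.array([[2.0, 3.0, 9.0, 0.0]])   # spectrum altered at t2
errs = reconstruction_errors(np.vstack([unchanged, changed]), D)
print(errs[0] < 1e-8, errs[1] > 1.0)  # → True True
```

The paper's automatic threshold would then be placed on exactly this error distribution.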

156 citations


Journal ArticleDOI
TL;DR: Wang et al. propose a change detection method based on an improved MRF, in which linear weights are designed for dividing unchanged, uncertain, and changed pixels of the difference image, and a spatial attraction model is introduced to refine the spatial neighborhood relations, aiming to enhance the accuracy of spatial information in the MRF.
Abstract: The fixed weights between the center pixel and neighboring pixels used in the traditional Markov random field (MRF) for change detection can easily cause the overuse of spatial neighborhood information. Besides, the traditional label field cannot accurately identify the spatial relations between neighborhood pixels. To solve these problems, this study proposes a change detection method based on an improved MRF. Linear weights are designed for dividing unchanged, uncertain, and changed pixels of the difference image, and a spatial attraction model is introduced to refine the spatial neighborhood relations, which aims to enhance the accuracy of spatial information in the MRF. The experimental results indicate that the proposed method can effectively enhance the accuracy of change detection.

146 citations


Journal ArticleDOI
TL;DR: In this article, the authors used past and recent satellite data to evaluate typical landscape change over the decades; both pre-classification and post-classification change detection approaches were used to assess change from 1980 to 2010.

138 citations


Book ChapterDOI
11 Sep 2017
TL;DR: Results demonstrate that, starting from simple algorithms, the proposed IUTIS combination strategy can achieve results comparable to those of more complex state-of-the-art change detection algorithms, while keeping the computational complexity affordable for real-time applications.
Abstract: Given the existence of many change detection algorithms, each with its own peculiarities and strengths, we propose a combination strategy, which we term IUTIS (In Unity There Is Strength), based on a genetic programming framework. This combination strategy is aimed at leveraging the strengths of the algorithms and compensating for their weaknesses. In this paper we show our findings in applying the proposed strategy in two different scenarios. The first scenario is purely performance-based; in the second, performance and efficiency must be balanced. Results demonstrate that, starting from simple algorithms, we can achieve results comparable to those of more complex state-of-the-art change detection algorithms, while keeping the computational complexity affordable for real-time applications.

Journal ArticleDOI
TL;DR: This paper presents a novel object-based approach for unsupervised change detection with a focus on individual buildings; a unique procedure determines the number of relevant principal components, and k-means clustering is applied for discrimination of changed and unchanged buildings.

Journal ArticleDOI
TL;DR: A novel scene change detection method via kernel slow feature analysis (KSFA) and postclassification fusion, which integrates independent scene classification with scene change detection to accurately determine scene changes and identify the “from-to” transition type.
Abstract: Scene change detection between multitemporal image scenes can be used to interpret the variation of regional land use, and has significant potential in the application of urban development monitoring at the semantic level. The traditional methods directly comparing the independent semantic classes neglect the temporal correlation, and thus suffer from accumulated classification errors. In this paper, we propose a novel scene change detection method via kernel slow feature analysis (KSFA) and postclassification fusion, which integrates independent scene classification with scene change detection to accurately determine scene changes and identify the “from-to” transition type. After representation with the bag-of-visual-words model, KSFA is proposed to extract the nonlinear temporally invariant features, to better measure the change probability between corresponding multitemporal image scenes. Two postclassification fusion methods, which are based on Bayesian theory and predefined rules, respectively, are then employed to identify the optimal coupled class combinations of multitemporal scene pairs. Furthermore, in addition to identifying semantic changes, the proposed method can also improve the performance of scene classification, since the unchanged scenes are more likely to belong to the same class. Two experiments with high-resolution remote sensing image scene data sets confirm that the proposed method can increase the accuracy of scene change detection, scene transition identification, and scene classification.

Posted Content
TL;DR: In this paper, the authors proposed the Margin Density Drift Detection (MD3) algorithm, which tracks the number of samples in the uncertainty region of a classifier, as a metric to detect drift.
Abstract: Classifiers deployed in the real world operate in a dynamic environment, where the data distribution can change over time. These changes, referred to as concept drift, can cause the predictive performance of the classifier to drop over time, thereby making it obsolete. To be of any real use, these classifiers need to detect drifts and be able to adapt to them, over time. Detecting drifts has traditionally been approached as a supervised task, with labeled data constantly being used for validating the learned model. Although effective in detecting drifts, these techniques are impractical, as labeling is a difficult, costly and time consuming activity. On the other hand, unsupervised change detection techniques are unreliable, as they produce a large number of false alarms. The inefficacy of the unsupervised techniques stems from the exclusion of the characteristics of the learned classifier, from the detection process. In this paper, we propose the Margin Density Drift Detection (MD3) algorithm, which tracks the number of samples in the uncertainty region of a classifier, as a metric to detect drift. The MD3 algorithm is a distribution independent, application independent, model independent, unsupervised and incremental algorithm for reliably detecting drifts from data streams. Experimental evaluation on 6 drift induced datasets and 4 additional datasets from the cybersecurity domain demonstrates that the MD3 approach can reliably detect drifts, with significantly fewer false alarms compared to unsupervised feature based drift detectors. The reduced false alarms enables the signaling of drifts only when they are most likely to affect classification performance. As such, the MD3 approach leads to a detection scheme which is credible, label efficient and general in its applicability.
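The margin-density signal at the heart of MD3 can be sketched for a linear classifier. The weights, thresholds, and the synthetic drift below are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np

def margin_density(X, w, b, margin=1.0):
    """Fraction of samples falling inside the classifier's margin
    (|w.x + b| < margin): the MD3 drift signal. A rising density
    suggests the data distribution has moved toward the boundary."""
    scores = X @ w + b
    return float(np.mean(np.abs(scores) < margin))

rng = np.random.default_rng(0)
w, b = np.array([1.0, 0.0]), 0.0
before = rng.normal(loc=[3.0, 0.0], scale=0.5, size=(200, 2))  # far from boundary
after = rng.normal(loc=[0.2, 0.0], scale=0.5, size=(200, 2))   # drifted toward it
print(margin_density(before, w, b), margin_density(after, w, b))
```

MD3 monitors this statistic over a sliding window and requests labels only when it deviates, which is why it is label-efficient compared to supervised drift detectors.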

Journal ArticleDOI
TL;DR: The analyses suggest that the new multispectral ALS data acquired in Finland have high potential for further increasing the automation level in mapping, especially in automated object-based land cover classification and change detection in a suburban area.
Abstract: During the last 20 years, airborne laser scanning (ALS), often combined with passive multispectral information from aerial images, has shown its high feasibility for automated mapping processes. The main benefits have been achieved in the mapping of elevated objects such as buildings and trees. Recently, the first multispectral airborne laser scanners have been launched, and active multispectral information is for the first time available for 3D ALS point clouds from a single sensor. This article discusses the potential of this new technology in map updating, especially in automated object-based land cover classification and change detection in a suburban area. For our study, Optech Titan multispectral ALS data over a suburban area in Finland were acquired. Results from an object-based random forests analysis suggest that the multispectral ALS data are very useful for land cover classification, considering both elevated classes and ground-level classes. The overall accuracy of the land cover classification results with six classes was 96% compared with validation points. The classes under study included building, tree, asphalt, gravel, rocky area and low vegetation. Compared to classification of single-channel data, the main improvements were achieved for ground-level classes. According to feature importance analyses, multispectral intensity features based on several channels were more useful than those based on one channel. Automatic change detection for buildings and roads was also demonstrated by utilising the new multispectral ALS data in combination with old map vectors. In change detection of buildings, an old digital surface model (DSM) based on single-channel ALS data was also used. Overall, our analyses suggest that the new data have high potential for further increasing the automation level in mapping. 
Unlike passive aerial imaging commonly used in mapping, the multispectral ALS technology is independent of external illumination conditions, and there are no shadows on intensity images produced from the data. These are significant advantages in developing automated classification and change detection procedures.

Journal ArticleDOI
TL;DR: The Margin Density Drift Detection (MD3) algorithm, which tracks the number of samples in the uncertainty region of a classifier, as a metric to detect drift, is proposed, which leads to a detection scheme which is credible, label efficient and general in its applicability.
Abstract: Highlights: a new classifier-independent, dynamic, unsupervised approach for detecting concept drift; a reduced number of false alarms and increased relevance of drift detection; results comparable to supervised approaches, which require fully labeled streams; the approach generalizes the notion of margin density as a signal to detect drifts; experiments on cybersecurity datasets show efficacy for detecting adversarial drifts. Classifiers deployed in the real world operate in a dynamic environment, where the data distribution can change over time. These changes, referred to as concept drift, can cause the predictive performance of the classifier to drop over time, thereby making it obsolete. To be of any real use, these classifiers need to detect drifts and be able to adapt to them, over time. Detecting drifts has traditionally been approached as a supervised task, with labeled data constantly being used for validating the learned model. Although effective in detecting drifts, these techniques are impractical, as labeling is a difficult, costly and time consuming activity. On the other hand, unsupervised change detection techniques are unreliable, as they produce a large number of false alarms. The inefficacy of the unsupervised techniques stems from the exclusion of the characteristics of the learned classifier, from the detection process. In this paper, we propose the Margin Density Drift Detection (MD3) algorithm, which tracks the number of samples in the uncertainty region of a classifier, as a metric to detect drift. The MD3 algorithm is a distribution independent, application independent, model independent, unsupervised and incremental algorithm for reliably detecting drifts from data streams. Experimental evaluation on 6 drift induced datasets and 4 additional datasets from the cybersecurity domain demonstrates that the MD3 approach can reliably detect drifts, with significantly fewer false alarms compared to unsupervised feature based drift detectors.
At the same time, it produces performance comparable to that of a fully labeled drift detector. The reduced false alarms enables the signaling of drifts only when they are most likely to affect classification performance. As such, the MD3 approach leads to a detection scheme which is credible, label efficient and general in its applicability.

Journal ArticleDOI
27 Jan 2017
TL;DR: A new detection method that predicts a vehicle's trajectory and uses it for detecting lane changes of surrounding vehicles; experiments confirm that the proposed method can considerably improve the detection performance.
Abstract: We propose a new detection method to predict a vehicle's trajectory and use it for detecting lane changes of surrounding vehicles. According to the previous research, more than 90% of the car crashes are caused by human errors, and lane changes are the main factor. Therefore, if a lane change can be detected before a vehicle crosses the centerline, accident rates will decrease. Previously reported detection methods have the problem of frequent false alarms caused by zigzag driving that can result in user distrust in driving safety support systems. Most cases of zigzag driving are caused by the abortion of a lane change due to the presence of adjacent vehicles on the next lane. Our approach reduces false alarms by considering the possibility of a crash with adjacent vehicles by applying trajectory prediction when the target vehicle attempts to change a lane, and it reflects the result of lane-change detection. We used a traffic dataset with more than 500 lane changes and confirmed that the proposed method can considerably improve the detection performance.

Proceedings ArticleDOI
14 Sep 2017
TL;DR: The notion of semantic background subtraction is introduced, a novel framework for motion detection in video sequences that combines the information of a semantic segmentation algorithm with the output of any background subtracted algorithm to reduce false positive detections produced by illumination changes, dynamic backgrounds, strong shadows, and ghosts.
Abstract: We introduce the notion of semantic background subtraction, a novel framework for motion detection in video sequences. The key innovation consists in leveraging object-level semantics to address the variety of challenging scenarios for background subtraction. Our framework combines the information of a semantic segmentation algorithm, expressed by a probability for each pixel, with the output of any background subtraction algorithm to reduce false positive detections produced by illumination changes, dynamic backgrounds, strong shadows, and ghosts. In addition, it maintains a fully semantic background model to improve the detection of camouflaged foreground objects. Experiments conducted on the CDNet dataset show that we managed to improve, significantly, almost all background subtraction algorithms of the CDNet leaderboard, and reduce the mean overall error rate of all the 34 algorithms (resp. of the best 5 algorithms) by roughly 50% (resp. 20%).
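The combination rule can be sketched as two semantic thresholds applied on top of any BGS mask. The threshold values and arrays here are illustrative; the paper's actual rules additionally compare semantic probabilities against a maintained semantic background model:

```python
import numpy as np

def semantic_bgs(bgs_mask, sem_prob, tau_bg=0.2, tau_fg=0.8):
    """Combine any BGS output with per-pixel semantic foreground
    probabilities: low semantic evidence vetoes false positives
    (shadows, ghosts), high evidence restores camouflaged foreground;
    otherwise the original BGS decision is kept."""
    out = bgs_mask.copy()
    out[sem_prob <= tau_bg] = 0   # semantics says background
    out[sem_prob >= tau_fg] = 1   # semantics says foreground
    return out

bgs = np.array([[1, 1], [0, 0]], dtype=np.uint8)   # shadow FP at (0,1)
prob = np.array([[0.9, 0.1], [0.95, 0.5]])         # camouflage at (1,0)
print(semantic_bgs(bgs, prob))
```

Because the rule is a per-pixel post-process, it can wrap any of the CDNet algorithms without retraining them, which is how the paper improves nearly the whole leaderboard.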

Journal ArticleDOI
TL;DR: This letter proposes an unsupervised change detection method based on generative adversarial networks (GANs), which has the ability of recovering the training data distribution from noise input and demonstrates the effectiveness and robustness of the proposed method.
Abstract: Change detection can be treated as a generative learning procedure, in which the connection between bitemporal images and the desired change map can be modeled as a generative one. In this letter, we propose an unsupervised change detection method based on generative adversarial networks (GANs), which has the ability of recovering the training data distribution from noise input. Here, the joint distribution of the two images to be detected is taken as input and an initial difference image (DI), generated by traditional change detection method such as change vector analysis, is used to provide prior knowledge for sampling the training data based on Bayesian theorem and GAN’s min–max game theory. Through the continuous adversarial learning, the shared mapping function between the training data and their corresponding image patches can be built in GAN’s generator, from which a better DI can be generated. Finally, an unsupervised clustering algorithm is used to analyze the better DI to obtain the desired binary change map. Theoretical analysis and experimental results demonstrate the effectiveness and robustness of the proposed method.

Journal ArticleDOI
TL;DR: The proposed method incorporates low-rank-based saliency computation and deep feature representation, and a multiscale fusion strategy is employed to produce more reliable detection results.
Abstract: In this letter, we address the problem of change detection for remote sensing images from the perspective of visual saliency computation. The proposed method incorporates low-rank-based saliency computation and deep feature representation. First, multilevel convolutional neural network (CNN) features are extracted for superpixels generated using SLIC, in which a fixed-size CNN feature can be formed to represent each superpixel. Then, low-rank decomposition is applied to the change features of the two input images to generate saliency maps that indicate change probabilities of each pixel. Finally, binarized change map can be obtained with a simple threshold. To deal with scale variations, a multiscale fusion strategy is employed to produce more reliable detection results. Extensive experiments on Google Earth and GF-2 images demonstrate the feasibility and effectiveness of the proposed method.
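The low-rank intuition, that unchanged content is redundant across pixels while changes are sparse residuals, can be sketched with a truncated SVD standing in for a robust low-rank decomposition. The feature matrix below is a toy stand-in for the CNN superpixel features:

```python
import numpy as np

def low_rank_saliency(feature_matrix, rank=1):
    """Split a (pixels x features) change-feature matrix into a
    low-rank part (shared, unchanged background) plus a residual;
    the residual magnitude serves as per-pixel change saliency.
    Plain truncated SVD stands in for robust PCA here."""
    U, s, Vt = np.linalg.svd(feature_matrix, full_matrices=False)
    background = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return np.linalg.norm(feature_matrix - background, axis=1)

# Nine "unchanged" pixels share one feature pattern; pixel 7 changed.
F = np.tile([1.0, 1.0, 1.0], (10, 1))
F[7] = [5.0, -2.0, 0.0]
sal = low_rank_saliency(F, rank=1)
print(int(sal.argmax()))  # → 7
```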

Journal ArticleDOI
TL;DR: An approximately symmetric deep neural network with two sides containing the same number of coupled layers to transform the two images into the same feature space, in which their features are more discriminative and the difference image can be generated by comparing paired features pixel by pixel.
Abstract: Driven by application requirements, change detection based on heterogeneous remote sensing images has received increasing attention. However, detecting changes between two heterogeneous images is challenging, as they cannot be compared in a low-dimensional space. In this paper, we construct an approximately symmetric deep neural network with two sides containing the same number of coupled layers to transform the two images into the same feature space. The two images are connected with the two sides and transformed into the same feature space, in which their features are more discriminative and the difference image can be generated by comparing paired features pixel by pixel. The network is first built by stacked restricted Boltzmann machines, and then the parameters are updated in a special way based on clustering. This special way, motivated by the fact that two heterogeneous images share the same reality in unchanged areas and retain their respective properties in changed areas, shrinks the distance between paired features transformed from unchanged positions and enlarges the distance between paired features extracted from changed positions. It is achieved by introducing two types of labels and updating parameters with an adaptively changed learning rate. This differs from existing deep learning-based methods, which operate only on positions predicted to be unchanged and extract just one type of label. The whole process is completely unsupervised, without any prior knowledge. Besides, the method can also be applied to homogeneous images. We test our method on both heterogeneous and homogeneous images, and the proposed method achieves quite high accuracy.

Proceedings ArticleDOI
02 Apr 2017
TL;DR: A detailed case study about model-based attack detection procedures for Cyber-Physical Systems (CPSs) using EPANET and an input-output Linear Time Invariant (LTI) model for the network to derive a Kalman filter to estimate the evolution of the system dynamics.
Abstract: In this manuscript, we present a detailed case study about model-based attack detection procedures for Cyber-Physical Systems (CPSs). In particular, using EPANET (a simulation tool for water distribution systems), we simulate a Water Distribution Network (WDN). Using this data and sub-space identification techniques, an input-output Linear Time Invariant (LTI) model for the network is obtained. This model is used to derive a Kalman filter to estimate the evolution of the system dynamics. Then, residual variables are constructed by subtracting data coming from EPANET and the estimates of the Kalman filter. We use these residuals and the Bad-Data and the dynamic Cumulative Sum (CUSUM) change detection procedures for attack detection. Simulation results are presented - considering false data injection and zero-alarm attacks on sensor readings, and attacks on control input - to evaluate the performance of our model-based attack detection schemes. Finally, we derive upper bounds on the estimator-state deviation that zero-alarm attacks can induce.
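The final detection stage, running a dynamic CUSUM over the Kalman-filter residuals, can be sketched on a scalar residual stream. The bias and threshold values, and the injected offset, are illustrative assumptions rather than the paper's tuned parameters:

```python
import numpy as np

def cusum_alarm(residuals, bias=0.5, threshold=4.0):
    """Dynamic CUSUM on estimator residuals: accumulate |r| minus a
    bias allowance and raise an alarm when the cumulative statistic
    crosses the threshold."""
    s = 0.0
    for k, r in enumerate(residuals):
        s = max(0.0, s + abs(r) - bias)
        if s > threshold:
            return k
    return None

# Residuals near zero under normal operation; a sensor attack
# injects a constant offset from step 30 onward.
res = np.concatenate([np.zeros(30), np.full(20, 1.5)])
print(cusum_alarm(res))  # → 34
```

The bias term is what lets zero-alarm attacks hide: an attacker keeping |r| just below the bias never accumulates, which is exactly why the paper derives bounds on the state deviation such attacks can induce.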

Journal ArticleDOI
TL;DR: A novel multiscale morphological compressed change vector analysis method is proposed to address the multiple-change detection problem in bitemporal remote sensing images by jointly analyzing the spectral-spatial change information.
Abstract: A novel multiscale morphological compressed change vector analysis (M2C2VA) method is proposed to address the multiple-change detection problem (i.e., identifying different classes of changes) in bitemporal remote sensing images. The proposed approach extends the state-of-the-art spectrum-based compressed change vector analysis (C2VA) method by jointly analyzing the spectral-spatial change information. In greater detail, reconstructed spectral change vector features are built according to a morphological analysis, so more geometrical details of the change classes are preserved while exploiting the interaction of a pixel with its adjacent regions. Two multiscale ensemble strategies, i.e., data-level and decision-level fusion, are designed to integrate the change information represented at different feature scales or to combine the change detection results obtained by the detector at different scales, respectively. A detailed scale sensitivity analysis is carried out to investigate its impact on the performance of the proposed method. The proposed method is designed in an unsupervised fashion without requiring any ground reference data. M2C2VA is tested on one simulated and three real bitemporal remote sensing images, showing its properties with respect to different image sizes and spatial resolutions. Experimental results confirm its effectiveness.
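The spectrum-based compression step that C2VA builds on reduces each multiband change vector to a magnitude and a direction relative to the all-ones reference vector. A minimal sketch (function and variable names are illustrative):

```python
import numpy as np

def c2va(x1, x2, eps=1e-12):
    """Compress multiband spectral change vectors into (magnitude, direction).

    x1, x2: (H, W, B) bitemporal images. The direction is the angle between
    each change vector and the all-ones reference vector, as in C2VA."""
    d = x2.astype(float) - x1.astype(float)        # spectral change vectors
    rho = np.linalg.norm(d, axis=-1)               # change magnitude
    b = d.shape[-1]
    cos_theta = d.sum(axis=-1) / (np.sqrt(b) * np.maximum(rho, eps))
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
    return rho, theta
```

A pixel whose bands all increase equally has direction 0, while a pixel whose bands change in opposite directions lands near pi/2; magnitude separates change from no-change, and direction separates change classes.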

Journal ArticleDOI
25 May 2017
TL;DR: This study addresses the lane-change detection problem by using vehicle dynamic signals extracted from the CAN-bus, which is collected with 58 drivers around Dallas, TX area, to propose a machine learning-based segmentation and classification algorithm.
Abstract: In order to formulate a high-level understanding of driver behavior from massive naturalistic driving data, an effective approach is needed to automatically process or segregate data into low-level maneuvers. Besides traditional computer vision processing, this study addresses the lane-change detection problem by using vehicle dynamic signals (steering angle and vehicle speed) extracted from the CAN-bus, which was collected from 58 drivers around the Dallas, TX area. After reviewing the literature, this study proposes a machine learning-based segmentation and classification algorithm, which is stratified into three stages. The first stage is preprocessing and prefiltering, which is intended to reduce noise and remove clear left and right turning events. Second, a spectral time-frequency analysis segmentation approach is employed to generalize all potential time-variant lane-change and lane-keeping candidates. The final stage compares two possible classification methods: 1) dynamic time warping feature with k-nearest neighbor classifier and 2) hidden state sequence prediction with a combined hidden Markov model. The overall optimal classification accuracy can be obtained at 80.36% for lane-change-left and 83.22% for lane-change-right. The effectiveness and issues of failures are also discussed. With the availability of future large-scale naturalistic driving data, such as SHRP2, this proposed effective lane-change detection approach can further contribute to characterizing both automatic route recognition as well as distracted driving state analysis.
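The first classification method, a DTW distance with a nearest-neighbour vote, can be sketched as follows. The template signals and labels are invented for illustration and are not taken from the study.

```python
import numpy as np

def dtw(a, b):
    """Dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def knn_classify(query, templates, labels, k=1):
    """Label a candidate segment by its k nearest templates under DTW."""
    dists = [dtw(query, t) for t in templates]
    order = np.argsort(dists)[:k]
    votes = [labels[i] for i in order]
    return max(set(votes), key=votes.count)
```

Because DTW aligns sequences elastically in time, a slightly stretched steering-angle profile still matches its template closely, which is why it suits time-variant lane-change candidates.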

Journal ArticleDOI
TL;DR: The proposed framework for testing the equality of covariance matrices is applied to the relevant signal processing application of multipass Coherent Change Detection in polarimetric synthetic aperture radar and demonstrated on both simulated and live data.
Abstract: This paper deals with the problem of testing the equality of $M$ covariance matrices. We first identify a suitable group of transformations leaving the problem invariant and obtain the corresponding maximal invariant statistic. Then, the Generalized Likelihood Ratio Test (GLRT) is recalled and explicit expressions for Rao, Wald, Gradient, and Durbin tests are provided. Also, equivalences among them and with other well-known tests proposed in open literature (mostly for the real-valued case) are analyzed and compared. Finally, the application of the proposed framework to the relevant signal processing application of multipass Coherent Change Detection (CCD) in polarimetric synthetic aperture radar is demonstrated both on simulated and on live data.
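The GLRT for equality of covariance matrices compares per-set sample covariances against the pooled one. A minimal sketch of the statistic, up to constant factors (function and variable names are illustrative, not the paper's notation):

```python
import numpy as np

def glr_equal_cov(samples):
    """GLR statistic for testing equality of M covariance matrices.

    samples: list of (n_m, p) arrays, one per data set. Returns
    sum_m n_m * (ln det S_pooled - ln det S_m), which is zero when the
    sample covariances all coincide and grows as they diverge."""
    covs = [np.cov(x, rowvar=False, bias=True) for x in samples]
    ns = np.array([len(x) for x in samples])
    pooled = sum(n * c for n, c in zip(ns, covs)) / ns.sum()
    _, logdet_pooled = np.linalg.slogdet(pooled)
    stat = 0.0
    for n, c in zip(ns, covs):
        _, logdet = np.linalg.slogdet(c)
        stat += n * (logdet_pooled - logdet)
    return stat
```

The statistic is non-negative by concavity of the log-determinant, so thresholding it gives a one-sided test: small values are consistent with equal covariances, large values reject.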

Proceedings ArticleDOI
01 May 2017
TL;DR: A novel 3D reconstruction algorithm based on an extended Truncated Signed Distance Function (TSDF) that enables continuous refinement of the static map while simultaneously obtaining 3D reconstructions of dynamic objects in the scene.
Abstract: Robots that are operating for extended periods of time need to be able to deal with changes in their environment and represent them adequately in their maps. In this paper, we present a novel 3D reconstruction algorithm based on an extended Truncated Signed Distance Function (TSDF) that enables continuous refinement of the static map while simultaneously obtaining 3D reconstructions of dynamic objects in the scene. This is a challenging problem because map updates happen incrementally and are often incomplete. Previous work typically performs change detection on point clouds, surfels, or maps, which are not able to distinguish between unexplored and empty space. In contrast, our TSDF-based representation naturally contains this information and thus allows us to more robustly solve the scene differencing problem. We demonstrate the algorithm's performance as part of a system for unsupervised object discovery and class recognition. We evaluated our algorithm on challenging datasets that we recorded over several days with RGB-D enabled tablets. To stimulate further research in this area, all of our datasets are publicly available.
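The key property the abstract relies on, a TSDF with per-voxel weights where zero weight distinguishes unexplored from observed-empty space, can be illustrated in one dimension. This is a toy sketch along a single ray, not the paper's implementation.

```python
import numpy as np

class TSDFVolume:
    """Minimal 1-D TSDF: truncated signed distances with per-voxel weights.

    Weight 0 marks unexplored space, which lets scene differencing separate
    'never observed' from 'observed empty' (illustrative sketch only)."""
    def __init__(self, n, trunc):
        self.d = np.zeros(n)       # truncated signed distance per voxel
        self.w = np.zeros(n)       # integration weight (0 = unexplored)
        self.trunc = trunc

    def integrate(self, surface, voxel_size=1.0):
        """Fuse one depth observation of a surface at index `surface`."""
        idx = np.arange(len(self.d))
        sdf_raw = (surface - idx) * voxel_size      # signed distance along the ray
        observed = sdf_raw >= -self.trunc           # voxels not fully occluded
        sdf = np.clip(sdf_raw, -self.trunc, self.trunc)
        new_w = self.w + observed
        self.d = np.where(observed,
                          (self.d * self.w + sdf) / np.maximum(new_w, 1), self.d)
        self.w = new_w

    def changed(self, other, tol):
        """Voxels whose TSDF moved by more than tol, in both-explored space."""
        both = (self.w > 0) & (other.w > 0)
        return both & (np.abs(self.d - other.d) > tol)
```

Because `changed` requires both weights to be positive, voxels that were simply never seen in one epoch are not reported as change, which is the point the abstract makes against point-cloud differencing.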

Journal ArticleDOI
TL;DR: HCDTs achieve a far more advantageous tradeoff between false-positive rate and detection delay than their single-layered, more traditional counterpart, and are able to reveal further departures from the postchange state of the data-generating process.
Abstract: We present hierarchical change-detection tests (HCDTs), as effective online algorithms for detecting changes in datastreams. HCDTs are characterized by a hierarchical architecture composed of a detection layer and a validation layer. The detection layer steadily analyzes the input datastream by means of an online, sequential CDT, which operates as a low-complexity trigger that promptly detects possible changes in the process generating the data. The validation layer is activated when the detection one reveals a change, and performs an offline, more sophisticated analysis on recently acquired data to reduce false alarms. Our experiments show that, when the process generating the datastream is unknown, as it is mostly the case in the real world, HCDTs achieve a far more advantageous tradeoff between false-positive rate and detection delay than their single-layered, more traditional counterpart. Moreover, the successful interplay between the two layers permits HCDTs to automatically reconfigure after having detected and validated a change. Thus, HCDTs are able to reveal further departures from the postchange state of the data-generating process.
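The two-layer idea can be sketched as a cheap CUSUM trigger followed by an offline validation on a recent window. All names, thresholds, and the validation statistic are illustrative stand-ins, not the paper's algorithms.

```python
import numpy as np

def hierarchical_cdt(stream, drift, trigger_thr, win, val_thr):
    """Two-layer change detection: online CUSUM trigger + offline validation.

    The detection layer runs a one-sided CUSUM against the mean of the
    first `win` reference samples; on a trigger, the validation layer
    compares the last `win` samples with the reference via a standardized
    mean difference, discarding false alarms."""
    ref = np.asarray(stream[:win])
    g, confirmed = 0.0, []
    for k in range(win, len(stream)):
        g = max(0.0, g + (stream[k] - ref.mean()) - drift)
        if g > trigger_thr:                      # detection layer fires
            recent = np.asarray(stream[max(win, k - win + 1):k + 1])
            z = abs(recent.mean() - ref.mean()) / (ref.std() + 1e-12)
            if z > val_thr:                      # validation layer confirms
                confirmed.append(k)
            g = 0.0                              # reset either way
    return confirmed
```

The trigger is deliberately sensitive (low complexity, prompt detection) while the validation layer absorbs its false positives, which is the trade-off the abstract describes.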

Journal ArticleDOI
TL;DR: In this article, a robust distance measurement is developed, which deals with surface roughness and areas of lower point densities, and the level of detection (LOD) is based on a confidence interval considering the spatial variability of TLS errors caused by large laser footprints.
Abstract: Long-range terrestrial laser scanning (TLS) is an emerging method for the monitoring of alpine slopes in the vicinity of infrastructure. Nevertheless, deformation monitoring of alpine natural terrain is difficult and becomes even more challenging with larger scan distances. In this study we present approaches for the handling of spatially variable measurement uncertainties in the context of geomorphological change detection using multi-temporal data sets. A robust distance measurement is developed, which deals with surface roughness and areas of lower point densities. The level of detection (LOD), i.e. the threshold distinguishing between real surface change and data noise, is based on a confidence interval considering the spatial variability of TLS errors caused by large laser footprints, low incidence angles and surface roughness. Spatially variable positional uncertainties are modelled for each point according to its range and the object geometry hit. The local point cloud roughness is estimated in the distance calculation process from the variance of least-squares fitted planes. Distance calculation and LOD assessment are applied in two study areas in the Eastern Alps (Austria) using multi-temporal laser scanning data sets of slopes surrounding reservoir lakes. At Finstertal, two TLS point clouds of high alpine terrain, scanned from ranges between 300 and 1800 m, are compared. At Gepatsch, the comparison is done between an airborne laser scanning (ALS) and a TLS point cloud of a vegetated mountain slope scanned from ranges between 600 and 3600 m. Although these data sets feature different conditions regarding the scan setup and the surface conditions, the presented approach makes it possible to reliably analyse the geomorphological activity. This includes the automatic detection of rock glacier movement, rockfall and debris slides, even in areas where a difference in vegetation cover could be observed.
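The roughness estimate from least-squares plane fitting and a confidence-interval LOD can be sketched as follows. The way the error terms are combined here is an illustrative assumption, not the authors' exact formula.

```python
import numpy as np

def plane_fit_roughness(points):
    """Least-squares plane through a point neighbourhood; returns the
    standard deviation of the orthogonal residuals as local roughness."""
    centered = points - points.mean(axis=0)
    # smallest right-singular vector of the centered cloud = plane normal
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    residuals = centered @ normal
    return residuals.std()

def level_of_detection(rough1, rough2, reg_err, k=1.96):
    """LOD as a confidence interval on the inter-epoch distance: change is
    significant only where |distance| exceeds this threshold (illustrative
    quadrature combination of roughness and registration error)."""
    return k * np.sqrt(rough1 ** 2 + rough2 ** 2 + reg_err ** 2)
```

Because roughness enters per neighbourhood, the LOD varies spatially: smooth rock faces get a tight threshold while vegetated or rough terrain is judged against a looser one.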

Journal ArticleDOI
TL;DR: First, Gabor wavelet features are extracted from two temporal VHR images to obtain spatial and contextual information, and the Gabor-wavelet-based difference measure (GWDM) is designed to generate the difference image.
Abstract: In this letter, we propose a change detection method based on Gabor wavelet features for very high resolution (VHR) remote sensing images. First, Gabor wavelet features are extracted from two temporal VHR images to obtain spatial and contextual information. Then, the Gabor-wavelet-based difference measure (GWDM) is designed to generate the difference image. In GWDM, a new local similarity measure is defined, in which the Markov random field neighborhood system is incorporated to obtain a local relationship, and the coefficient of variation method is applied to discriminate contributions from different features. Finally, the fuzzy c-means cluster algorithm is employed to obtain the final change map. Experiments employing QuickBird and SPOT5 images demonstrate the effectiveness of the proposed approach.
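The final step, two-class fuzzy c-means clustering of a difference image, can be sketched with the standard FCM update equations (function and variable names are illustrative):

```python
import numpy as np

def fcm_two_class(values, m=2.0, iters=50, tol=1e-6):
    """Two-cluster fuzzy c-means on a 1-D array of difference values;
    returns each value's membership in the 'changed' (higher-mean) cluster."""
    v = np.asarray(values, float)
    c = np.array([v.min(), v.max()])          # init centers at the extremes
    for _ in range(iters):
        d = np.abs(v[:, None] - c[None, :]) + 1e-12
        u = 1.0 / (d ** (2.0 / (m - 1.0)))    # inverse-distance memberships
        u /= u.sum(axis=1, keepdims=True)
        new_c = (u ** m * v[:, None]).sum(0) / (u ** m).sum(0)
        if np.abs(new_c - c).max() < tol:
            c = new_c
            break
        c = new_c
    return u[:, int(np.argmax(c))]            # membership in higher cluster
```

Thresholding the membership at 0.5 yields the binary change map; unlike hard k-means, the soft memberships also convey how ambiguous each pixel's assignment is.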

Journal ArticleDOI
Yaoguo Zheng1, Licheng Jiao1, Hongying Liu1, Xiangrong Zhang1, Biao Hou1, Shuang Wang1 
TL;DR: A novel unsupervised saliency-guided synthetic aperture radar (SAR) image change detection method that combines the principal component analysis (PCA) method and k-means clustering to obtain the change map on the extracted features, which are clustered into two classes: changed areas and unchanged areas.
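A minimal sketch in the spirit of PCA-plus-k-means change detection: patch features are taken from a log-ratio difference image, projected by PCA, and split by 2-means. All parameters, the log-ratio index, and the saliency-free pipeline are illustrative simplifications of the paper's method.

```python
import numpy as np

def pca_kmeans_change_map(im1, im2, block=4, n_comp=3, iters=20):
    """Unsupervised change map: PCA on local patches of a log-ratio
    difference image, then 2-class k-means in the projected space."""
    diff = np.abs(np.log((im1 + 1.0) / (im2 + 1.0)))   # log-ratio difference
    h, w = diff.shape
    pad = np.pad(diff, block // 2, mode="reflect")
    feats = np.stack([pad[i:i + h, j:j + w]
                      for i in range(block) for j in range(block)], axis=-1)
    feats = feats.reshape(-1, block * block)
    feats = feats - feats.mean(axis=0)
    _, _, vt = np.linalg.svd(feats, full_matrices=False)   # PCA directions
    proj = feats @ vt[:n_comp].T
    # plain 2-means, seeded at the extremes of the first component
    centers = proj[[int(np.argmin(proj[:, 0])), int(np.argmax(proj[:, 0]))]]
    for _ in range(iters):
        labels = np.argmin(((proj[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.stack([proj[labels == c].mean(axis=0) for c in (0, 1)])
    # the cluster with the larger mean difference value is 'changed'
    means = [diff.ravel()[labels == c].mean() for c in (0, 1)]
    return labels.reshape(h, w) == int(np.argmax(means))
```

Working on patches rather than single pixels gives the clustering some robustness to speckle, which is the usual motivation for this family of SAR change detectors.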