
Showing papers on "Change detection" published in 2015


Journal ArticleDOI
TL;DR: This survey aims to provide a general, comprehensive, and structured overview of the state-of-the-art methods for anomaly detection in data represented as graphs, and gives a general framework for the algorithms categorized under various settings.
Abstract: Detecting anomalies in data is a vital task, with numerous high-impact applications in areas such as security, finance, health care, and law enforcement. While numerous techniques have been developed over the years for spotting outliers and anomalies in unstructured collections of multi-dimensional points, graph data have become ubiquitous, and techniques for structured graph data have recently become a focus. As objects in graphs have long-range correlations, a suite of novel techniques has been developed for anomaly detection in graph data. This survey aims to provide a general, comprehensive, and structured overview of the state-of-the-art methods for anomaly detection in data represented as graphs. As a key contribution, we give a general framework for the algorithms categorized under various settings: unsupervised versus (semi-)supervised approaches, for static versus dynamic graphs, for attributed versus plain graphs. We highlight the effectiveness, scalability, generality, and robustness aspects of the methods. Moreover, we stress the importance of anomaly attribution and highlight the major techniques that facilitate digging out the root cause, or the 'why', of the detected anomalies for further analysis and sense-making. Finally, we present several real-world applications of graph-based anomaly detection in diverse domains, including financial, auction, computer traffic, and social networks. We conclude our survey with a discussion on open theoretical and practical challenges in the field.

998 citations


Journal ArticleDOI
TL;DR: This paper presents a universal pixel-level segmentation method that relies on spatiotemporal binary features as well as color information to detect changes, which allows camouflaged foreground objects to be detected more easily while most illumination variations are ignored.
Abstract: Foreground/background segmentation via change detection in video sequences is often used as a stepping stone in high-level analytics and applications. Despite the wide variety of methods that have been proposed for this problem, none has been able to fully address the complex nature of dynamic scenes in real surveillance tasks. In this paper, we present a universal pixel-level segmentation method that relies on spatiotemporal binary features as well as color information to detect changes. This allows camouflaged foreground objects to be detected more easily while most illumination variations are ignored. Moreover, instead of using manually set, frame-wide constants to dictate model sensitivity and adaptation speed, we use pixel-level feedback loops to dynamically adjust our method's internal parameters without user intervention. These adjustments are based on the continuous monitoring of model fidelity and local segmentation noise levels. This new approach enables us to outperform all 32 previously tested state-of-the-art methods on the 2012 and 2014 versions of the ChangeDetection.net dataset in terms of overall F-Measure. The use of local binary image descriptors for pixel-level modeling also facilitates high-speed parallel implementations: our own version, which uses no low-level or architecture-specific instructions, reaches real-time processing speed on a mid-level desktop CPU. A complete C++ implementation based on OpenCV is available online.

603 citations
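
To illustrate the pixel-level feedback idea described above, here is a minimal Python sketch. It is not the authors' method: a running-average grayscale background model stands in for their spatiotemporal binary features and sample-based model, and all controller constants are illustrative assumptions.

```python
import numpy as np

def feedback_segment(frames, r0=30.0, alpha=0.05):
    """Toy pixel-level change detector with feedback-controlled thresholds.

    `frames`: iterable of 2-D grayscale arrays in [0, 255]. A per-pixel
    threshold map R is raised where segmentation is noisy ("blinking"
    pixels) and relaxed where the model fits well.
    """
    frames = iter(frames)
    bg = next(frames).astype(float)            # initial background model
    R = np.full(bg.shape, r0)                  # per-pixel distance threshold
    noise = np.zeros(bg.shape)                 # running segmentation noise
    masks = []
    for f in frames:
        d = np.abs(f.astype(float) - bg)
        fg = d > R                             # pixel-level segmentation
        if masks:                              # feedback from mask instability
            noise = 0.9 * noise + 0.1 * (fg ^ masks[-1])
        R = np.clip(R + np.where(noise > 0.1, 1.0, -0.5), 10.0, 90.0)
        bg = np.where(fg, bg, (1 - alpha) * bg + alpha * f)  # conservative update
        masks.append(fg)
    return masks

# Usage: a static scene in which a square region brightens from frame 30 on.
rng = np.random.default_rng(0)
frames = [np.full((48, 48), 100.0) + rng.normal(0, 2, (48, 48)) for _ in range(60)]
for f in frames[30:]:
    f[16:32, 16:32] += 80.0
masks = feedback_segment(frames)
print(masks[-1][24, 24], masks[-1][4, 4])      # expected: True False
```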


Journal ArticleDOI
TL;DR: This review organises the literature by the unit of analysis and the comparison method used to identify change, significantly reducing the conceptual overlap present in previous reviews and providing a succinct nomenclature with which to understand and apply change detection workflows.

409 citations


Journal ArticleDOI
TL;DR: Two main approaches to handle concept drift regardless of the learning model are proposed: the first involves moving averages and is more suitable for detecting abrupt changes, while the second follows a widespread intuitive idea to deal with gradual changes using weighted moving averages.
Abstract: Incremental and online learning algorithms are increasingly relevant in the data mining context because of the growing necessity to process data streams. In this context, the target function may change over time, an inherent problem of online learning known as concept drift. In order to handle concept drift regardless of the learning model, we propose new methods that monitor the performance metrics measured during the learning process and trigger drift signals when a significant variation is detected. To monitor this performance, we apply probability inequalities that assume only independent, univariate and bounded random variables, obtaining theoretical guarantees for the detection of such distributional changes. Some common restrictions for online change detection, as well as the relevant types of change (abrupt and gradual), are considered. Two main approaches are proposed: the first involves moving averages and is more suitable for detecting abrupt changes; the second follows a widespread intuitive idea for dealing with gradual changes using weighted moving averages. The simplicity of the proposed methods, together with their computational efficiency, makes them very advantageous. We use a Naive Bayes classifier and a Perceptron to evaluate the performance of the methods on synthetic and real data.

259 citations
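
As a concrete illustration of the first approach (moving averages with probability-inequality thresholds), here is a minimal Python sketch. It uses Hoeffding's inequality for [0,1]-bounded variables; the window size, the confidence level delta, and the overlap between the historical and recent windows are illustrative simplifications, not the authors' exact formulation.

```python
import math
from collections import deque

def hoeffding_eps(n, m, delta=0.002):
    """Hoeffding bound on the gap between two means of [0,1]-bounded samples."""
    return math.sqrt(0.5 * (1.0 / n + 1.0 / m) * math.log(2.0 / delta))

class MovingAverageDriftDetector:
    """Signal drift when the recent moving average of the error stream exceeds
    the historical average by more than the Hoeffding threshold."""

    def __init__(self, window=100, delta=0.002):
        self.recent = deque(maxlen=window)
        self.hist_sum = 0.0
        self.hist_n = 0
        self.delta = delta

    def update(self, err):                      # err must lie in [0, 1]
        self.recent.append(err)
        self.hist_sum += err
        self.hist_n += 1
        if self.hist_n < 2 * self.recent.maxlen:
            return False                        # not enough history yet
        hist_mean = self.hist_sum / self.hist_n
        recent_mean = sum(self.recent) / len(self.recent)
        eps = hoeffding_eps(self.hist_n, len(self.recent), self.delta)
        return recent_mean - hist_mean > eps    # one-sided: error increased

# Usage: feed the 0/1 misclassification stream of an online learner.
det = MovingAverageDriftDetector()
stream = [0] * 500 + [1] * 60                   # abrupt accuracy drop
alarms = [t for t, e in enumerate(stream) if det.update(e)]
print(alarms[:1])                               # first detection time, if any
```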


Journal ArticleDOI
TL;DR: In this article, an integrated protocol is proposed to produce spatially exhaustive annual best-available-pixel (BAP) image composites that are seasonally constrained and free of atmospheric perturbations, which can be used for mapping and monitoring land cover and land cover change.

239 citations


Journal ArticleDOI
TL;DR: In this article, the authors apply spectral trend analysis of Landsat Thematic Mapper (TM) and Enhanced Thematic Mapper Plus (ETM+) data from 1984 to 2012 to detect, characterize, and attribute forest changes in the province of Saskatchewan, Canada.

237 citations


Journal ArticleDOI
TL;DR: DBEST, as discussed by the authors, uses a novel segmentation algorithm which simplifies the trend into linear segments using one of three user-defined parameters: a generalisation-threshold parameter δ, the m largest changes, or a threshold β for the magnitude of changes of interest for detection.

224 citations


Journal ArticleDOI
TL;DR: This work proposes the sparsified binary segmentation algorithm, which aggregates the cumulative sum statistics by adding only those that pass a certain threshold; this reduces the influence of irrelevant noisy contributions, which is particularly beneficial in high dimensions.
Abstract: Time series segmentation, a.k.a. multiple change-point detection, is a well-established problem. However, few solutions are designed specifically for high-dimensional situations. In this paper, our interest is in segmenting the second-order structure of a high-dimensional time series. In a generic step of a binary segmentation algorithm for multivariate time series, one natural solution is to combine CUSUM statistics obtained from local periodograms and cross-periodograms of the components of the input time series. However, the standard “maximum” and “average” methods for doing so often fail in high dimensions when, for example, the change-points are sparse across the panel or the CUSUM statistics are spuriously large. In this paper, we propose the Sparsified Binary Segmentation (SBS) algorithm which aggregates the CUSUM statistics by adding only those that pass a certain threshold. This “sparsifying” step reduces the impact of irrelevant, noisy contributions, which is particularly beneficial in high dimensions. In order to show the consistency of SBS, we introduce the multivariate Locally Stationary Wavelet model for time series, which is a separate contribution of this work.

215 citations
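
A minimal sketch of the sparsified aggregation step, applied to a mean-change analogue for brevity (the paper works with CUSUMs of local periodograms and cross-periodograms for second-order structure, and derives principled thresholds; the threshold here is an ad hoc choice):

```python
import numpy as np

def cusum_stats(x):
    """|CUSUM| statistics of a 1-D series at every candidate split 1..n-1."""
    n = len(x)
    b = np.arange(1, n)
    csum = np.cumsum(x)[:-1]
    return np.abs(csum - b / n * x.sum()) * np.sqrt(n / (b * (n - b)))

def sbs_single_step(X, threshold):
    """One step of sparsified binary segmentation on a (d, n) panel:
    aggregate only the per-coordinate CUSUM values that exceed `threshold`,
    then locate the change point at the argmax of the aggregated curve."""
    curves = np.array([cusum_stats(row) for row in X])
    keep = curves * (curves > threshold)      # sparsify: drop weak contributions
    agg = keep.sum(axis=0)
    if agg.max() == 0:
        return None                           # no coordinate passes: stop
    return int(np.argmax(agg)) + 1            # estimated change point

# Toy panel: 3 of 50 series shift mean at t = 120 (a sparse change).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 200))
X[:3, 120:] += 2.0
print(sbs_single_step(X, threshold=3.0))      # expected near 120
```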


Proceedings ArticleDOI
05 Jan 2015
TL;DR: A new type of word-based approach is proposed that regulates its own internal parameters using feedback mechanisms to withstand difficult conditions while keeping sensitivity intact in regular situations.
Abstract: Although there has long been interest in foreground/background segmentation based on change detection for video surveillance applications, the issue of inconsistent performance across different scenarios remains a serious concern. To address this, we propose a new type of word-based approach that regulates its own internal parameters using feedback mechanisms to withstand difficult conditions while keeping sensitivity intact in regular situations. Coined "PAWCS", this method's key advantages lie in its highly persistent and robust dictionary model based on color and local binary features, as well as its ability to automatically adjust pixel-level segmentation behavior. Experiments using the 2012 ChangeDetection.net dataset show that it outranks numerous recently proposed solutions in terms of overall performance as well as in each category. A complete C++ implementation based on OpenCV is available online.

206 citations


Journal ArticleDOI
TL;DR: The R package cpm is described, which provides a fast implementation of all the above change point models in both batch (Phase I) and sequential (Phase II) settings, where the sequences may contain either a single or multiple change points.
Abstract: The change point model framework introduced in Hawkins, Qiu, and Kang (2003) and Hawkins and Zamba (2005a) provides an effective and computationally efficient method for detecting multiple mean or variance change points in sequences of Gaussian random variables, when no prior information is available regarding the parameters of the distribution in the various segments. It has since been extended in various ways by Hawkins and Deng (2010), Ross, Tasoulis, and Adams (2011), and Ross and Adams (2012) to allow for fully nonparametric change detection in non-Gaussian sequences, when no knowledge is available regarding even the distributional form of the sequence. Other extensions, from Ross and Adams (2011) and Ross (2014), allow change detection in streams of Bernoulli and Exponential random variables, respectively, again when the values of the parameters are unknown. This paper describes the R package cpm, which provides a fast implementation of all the above change point models in both batch (Phase I) and sequential (Phase II) settings, where the sequences may contain either a single or multiple change points.

199 citations
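
The package itself is in R; the Python sketch below only illustrates the Phase II (sequential) idea the abstract describes: after each new observation, the maximum standardized two-sample statistic over all candidate split points is compared against a control limit. The statistic and the fixed limit are simplified stand-ins for the package's calibrated, ARL-controlled thresholds.

```python
import numpy as np

def max_split_stat(x):
    """Max over split points of the standardized two-sample mean difference,
    a simplified analogue of a change point model test statistic."""
    n = len(x)
    best = 0.0
    for b in range(2, n - 1):
        a, c = x[:b], x[b:]
        pooled = np.sqrt(x.var(ddof=1) * (1.0 / b + 1.0 / (n - b)))
        if pooled > 0:
            best = max(best, abs(a.mean() - c.mean()) / pooled)
    return best

def monitor(stream, limit=3.5, burn_in=20):
    """Phase II monitoring: test after every new observation, stop at alarm.
    `limit` is an ad hoc control limit, not the package's calibrated value."""
    seen = []
    for t, obs in enumerate(stream):
        seen.append(obs)
        if t >= burn_in and max_split_stat(np.asarray(seen)) > limit:
            return t                           # detection time
    return None

rng = np.random.default_rng(1)
stream = np.concatenate([rng.normal(0, 1, 100), rng.normal(1.5, 1, 50)])
print(monitor(stream))                         # alarm shortly after t = 100
```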


Journal ArticleDOI
TL;DR: In this article, the authors investigated the accuracy that can be obtained from several processing pipelines for 3D surface reconstruction of landslides and the detection of changes over time, based on open-source libraries for multi-view stereo-photogrammetry and Structure-from-Motion.

Journal ArticleDOI
TL;DR: Harmonic analysis of a multi-temporal time series of >500 ENVISAT Advanced SAR scenes with a spatial resolution of 150 m was used to characterise the seasonality in backscatter under non-flooded conditions, and an outlook for the proposed algorithm is given in light of the Sentinel-1 mission.

Journal ArticleDOI
TL;DR: This paper presents and analyses the most relevant literature contributions on the image fusion concept in the context of multitemporal remote sensing image processing by considering images acquired by optical and SAR systems at medium, high and very high spatial resolution.
Abstract: This paper presents an overview of the image fusion concept in the context of multitemporal remote sensing image processing. In the remote sensing literature, multitemporal image analysis mainly deals with the detection of changes and land-cover transitions; thus, the paper presents and analyses the most relevant literature contributions on these topics. From the perspective of change detection and the detection of land-cover transitions, multitemporal image analysis techniques can be divided into two main groups: i) those based on the fusion of the multitemporal information at feature level, and ii) those based on the fusion of the multitemporal information at decision level. The former mainly exploit multitemporal image comparison techniques, which aim at highlighting the presence/absence of changes by generating change indices. These indices are then analyzed by unsupervised algorithms to extract the change information. The latter rely mainly on classification and include both supervised and semi/partially-supervised/unsupervised methods. The paper focuses on both standard (and widely used) methods and techniques proposed in the recent literature. The analysis is conducted by considering images acquired by optical and SAR systems at medium, high and very high spatial resolution.
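
A minimal sketch of the first group (feature-level fusion): co-registered images are compared to generate a change index, here the change vector magnitude, which an unsupervised algorithm then thresholds. Otsu's method is an illustrative choice of unsupervised analysis, not a recommendation from the paper.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Otsu's unsupervised threshold on a 1-D array of change-index values."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                         # class-0 weight
    m = np.cumsum(p * centers)                # cumulative first moment
    mt = m[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mt * w0 - m) ** 2 / (w0 * (1.0 - w0))
    return centers[np.nanargmax(between)]     # maximize between-class variance

def change_map(img_t1, img_t2):
    """Feature-level fusion via change vector magnitude + Otsu thresholding.
    Inputs are co-registered (H, W, bands) arrays from two dates."""
    diff = img_t2.astype(float) - img_t1.astype(float)
    magnitude = np.linalg.norm(diff, axis=-1)      # change index
    return magnitude > otsu_threshold(magnitude.ravel())

# Toy example: a bright square "appears" in band 0 of the second date.
rng = np.random.default_rng(2)
t1 = rng.normal(100, 5, size=(64, 64, 3))
t2 = t1 + rng.normal(0, 5, size=t1.shape)
t2[20:40, 20:40, 0] += 60
cm = change_map(t1, t2)
print(cm[30, 30], cm[5, 5])                        # expected: True False
```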

Journal ArticleDOI
TL;DR: A novel hierarchical CD approach is proposed, aimed at identifying all the possible change classes present between the considered images, which exploits spectral change information to identify the change classes having discriminable spectral behaviors.
Abstract: The new generation of satellite hyperspectral (HS) sensors can acquire very detailed spectral information directly related to land surface materials. Thus, when multitemporal images are considered, they allow us to detect many potential changes in land covers. This paper addresses the change-detection (CD) problem in multitemporal HS remote sensing images, analyzing the complexity of this task. A novel hierarchical CD approach is proposed, which is aimed at identifying all the possible change classes present between the considered images. In greater detail, in order to formalize the CD problem in HS images, an analysis of the concept of "change" is given from the perspective of pixel spectral behaviors. The proposed hierarchical scheme is developed by considering spectral change information to identify the change classes having discriminable spectral behaviors. Because reference samples are often unavailable in real applications, the proposed approach is designed to be unsupervised. Experimental results obtained on both simulated and real multitemporal HS images demonstrate the effectiveness of the proposed CD method.
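
A minimal sketch of the spectral change information such a hierarchy can build on: per-pixel spectral change vectors summarized by magnitude (how much changed) and spectral angle (what kind of change). The two-class magnitude split below is a crude stand-in for the first level of the proposed hierarchy, not the paper's actual procedure.

```python
import numpy as np

def spectral_change_features(hs_t1, hs_t2, eps=1e-12):
    """Magnitude and direction of spectral change vectors for two
    co-registered hyperspectral images of shape (H, W, bands)."""
    d = hs_t2.astype(float) - hs_t1.astype(float)
    magnitude = np.linalg.norm(d, axis=-1)
    # Spectral angle between the two signatures: separates change classes
    # that have similar magnitude but different spectral behaviour.
    dot = (hs_t1 * hs_t2).sum(axis=-1)
    norms = np.linalg.norm(hs_t1, axis=-1) * np.linalg.norm(hs_t2, axis=-1)
    angle = np.arccos(np.clip(dot / (norms + eps), -1.0, 1.0))
    return magnitude, angle

# First level of a hierarchical scheme: separate changed from unchanged
# pixels on magnitude, then cluster the changed pixels on (magnitude, angle).
rng = np.random.default_rng(3)
t1 = rng.uniform(0.2, 0.8, size=(32, 32, 120))
t2 = t1.copy()
t2[:16] *= 0.5                       # one change class: darkening
mag, ang = spectral_change_features(t1, t2)
changed = mag > mag.mean()           # crude stand-in for a learned threshold
print(changed[:16].mean(), changed[16:].mean())   # ~1.0 vs ~0.0
```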

Proceedings ArticleDOI
01 Jan 2015
TL;DR: Supplementary figures for the Panoramic Change Detection Dataset showing, for each scene, the change detection ground truth, the final change detection results, the superpixel segmentation results, the feature distances between grid cells computed from pool-5 layer features, and the probabilities of the sky and the ground estimated using Geometric Context.
Abstract: Figures 1–100 show the results for the scenes of the Panoramic Change Detection Dataset. The rows show, from top to bottom: the input image pair, the change detection ground truth, the final change detection results, the superpixel segmentation results, the feature distance between grid cells using pool-5 layer features, the feature distance projected onto the superpixel segmentation of each input image, and the probabilities of the sky and the ground estimated using Geometric Context.

Journal ArticleDOI
TL;DR: In this paper, the authors used LandTrendr Landsat time-series-based algorithms to identify abrupt disturbances, applied spatial rules to aggregate these into patches, and derived a suite of spectral, patch-shape, and landscape position variables for each patch.

Proceedings Article
25 Jan 2015
TL;DR: This method combines a generalized hierarchical random graph model with a Bayesian hypothesis test to quantitatively determine if, when, and precisely how a change point has occurred and is applied to two high-resolution evolving social networks.
Abstract: Interactions among people or objects are often dynamic in nature and can be represented as a sequence of networks, each providing a snapshot of the interactions over a brief period of time. An important task in analyzing such evolving networks is change-point detection, in which we both identify the times at which the large-scale pattern of interactions changes fundamentally and quantify how large and what kind of change occurred. Here, we formalize for the first time the network change-point detection problem within an online probabilistic learning framework and introduce a method that can reliably solve it. This method combines a generalized hierarchical random graph model with a Bayesian hypothesis test to quantitatively determine if, when, and precisely how a change point has occurred. We analyze the detectability of our method using synthetic data with known change points of different types and magnitudes, and show that this method is more accurate than several previously used alternatives. Applied to two high-resolution evolving social networks, this method identifies a sequence of change points that align with known external "shocks" to these networks.
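
A drastically simplified sketch of the idea: the generalized hierarchical random graph model and Bayesian hypothesis test are replaced here by a single Erdős-Rényi edge probability per window and a log-likelihood-ratio scan over candidate split points, which is a stand-in technique, not the paper's method.

```python
import numpy as np

def bernoulli_ll(k, n):
    """Log-likelihood of k successes in n Bernoulli trials at the MLE rate."""
    if k == 0 or k == n:
        return 0.0
    p = k / n
    return k * np.log(p) + (n - k) * np.log(1 - p)

def network_change_point(snapshots):
    """Scan a sequence of adjacency matrices for the split that most improves
    a two-parameter edge-density model over a single-parameter one."""
    edges = np.array([np.triu(a, 1).sum() for a in snapshots], dtype=float)
    n_nodes = snapshots[0].shape[0]
    pairs = n_nodes * (n_nodes - 1) / 2          # possible edges per snapshot
    T = len(snapshots)
    null = bernoulli_ll(edges.sum(), pairs * T)  # no-change model
    best_t, best_lr = None, 0.0
    for t in range(1, T):
        lr = (bernoulli_ll(edges[:t].sum(), pairs * t)
              + bernoulli_ll(edges[t:].sum(), pairs * (T - t)) - null)
        if lr > best_lr:
            best_t, best_lr = t, lr
    return best_t, float(2.0 * best_lr)          # split and LR statistic

# Toy stream of network snapshots whose density triples at t = 15.
rng = np.random.default_rng(4)
snaps = [(rng.random((40, 40)) < p).astype(int) for p in [0.05] * 15 + [0.15] * 15]
print(network_change_point(snaps))               # expected split near t = 15
```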

Journal ArticleDOI
TL;DR: A new approach for surface water change detection, based on the integration of pixel-level image fusion and image classification techniques, has the advantages of producing a pansharpened multispectral image while simultaneously highlighting the changed areas, as well as providing a high-accuracy result.

Journal ArticleDOI
TL;DR: In this paper, a nonparametric graph-based approach is proposed to detect change points in a data sequence, which can be applied to any data set as long as an informative similarity measure on the sample space can be defined.
Abstract: We consider the testing and estimation of change-points (locations where the distribution abruptly changes) in a data sequence. A new approach, based on scan statistics utilizing graphs representing the similarity between observations, is proposed. The graph-based approach is nonparametric and can be applied to any data set as long as an informative similarity measure on the sample space can be defined. Accurate analytic approximations to the significance of graph-based scan statistics are provided for both the single change-point and the changed-interval alternatives. Simulations reveal that the new approach has better power than existing approaches when the dimension of the data is moderate to high. The new approach is illustrated on two applications: the determination of authorship of a classic novel, and the detection of change in a network over time.
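
A minimal sketch of the graph-based scan statistic: build a similarity graph (k-nearest-neighbour here) over all observations and, for each candidate change time, compare the number of edges crossing the before/after split with its expectation under random permutation. The paper's analytic p-value approximations are omitted, and a crude standardization stands in for the exact variance.

```python
import numpy as np
from scipy.spatial import cKDTree

def graph_scan(X, k=5):
    """For each split t, measure the deficit of k-NN graph edges joining the
    first t observations to the rest; a large deficit indicates a change."""
    n = len(X)
    _, idx = cKDTree(X).query(X, k=k + 1)          # 0th neighbour is the point
    edges = {(min(i, j), max(i, j)) for i, row in enumerate(idx) for j in row[1:]}
    t = np.arange(1, n)
    cross = np.array([sum((i < s) != (j < s) for i, j in edges) for s in t])
    expected = len(edges) * 2.0 * t * (n - t) / (n * (n - 1.0))
    z = (expected - cross) / np.sqrt(expected)     # crude standardization
    return int(t[np.argmax(z)]), z

# Applicable to any data with an informative similarity measure; here,
# 10-dimensional observations whose mean shifts at t = 60.
rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 1, (60, 10)), rng.normal(0.8, 1, (40, 10))])
print(graph_scan(X)[0])                            # estimated change near 60
```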

Journal ArticleDOI
TL;DR: A simple yet effective unsupervised change detection approach for multitemporal synthetic aperture radar images from the perspective of clustering that jointly exploits the robust Gabor wavelet representation and the advanced cascade clustering.
Abstract: In this letter, we propose a simple yet effective unsupervised change detection approach for multitemporal synthetic aperture radar images from the perspective of clustering. This approach jointly exploits the robust Gabor wavelet representation and the advanced cascade clustering. First, a log-ratio image is generated from the multitemporal images. Then, to integrate contextual information in the feature extraction process, Gabor wavelets are employed to yield the representation of the log-ratio image at multiple scales and orientations, whose maximum magnitude over all orientations in each scale is concatenated to form the Gabor feature vector. Next, a cascade clustering algorithm is designed in this discriminative feature space by successively combining the first-level fuzzy c-means clustering with the second-level nearest neighbor rule. Finally, the two-level combination of the changed and unchanged results generates the final change map. Experimental results are presented to demonstrate the effectiveness of the proposed approach.
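
A minimal sketch of the pipeline: log-ratio image, per-scale maximum Gabor magnitude over orientations, then two-class clustering of the feature vectors. The tiny fuzzy c-means below covers only the first level of the paper's cascade (the nearest-neighbour second level is omitted), and all parameter values and the toy data are illustrative.

```python
import numpy as np
from skimage.filters import gabor

def gabor_feature_stack(img, frequencies=(0.1, 0.2, 0.4), n_theta=4):
    """Per-scale maximum Gabor magnitude over orientations."""
    feats = []
    for f in frequencies:
        mags = []
        for theta in np.linspace(0, np.pi, n_theta, endpoint=False):
            re, im = gabor(img, frequency=f, theta=theta)
            mags.append(np.hypot(re, im))
        feats.append(np.max(mags, axis=0))       # max over orientations
    return np.stack(feats, axis=-1)              # (H, W, n_scales)

def fuzzy_cmeans(X, c=2, m=2.0, iters=50, seed=0):
    """Minimal fuzzy c-means (first level of the cascade clustering)."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))   # membership matrix
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=-1) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centers

# Toy SAR-like pair: speckled intensities; the central patch changes texture.
rng = np.random.default_rng(6)
t1 = rng.gamma(4.0, 25.0, size=(64, 64))
t2 = t1 * rng.gamma(4.0, 0.25, size=t1.shape)    # unchanged: ratio noise only
t2[24:40, 24:40] = rng.gamma(0.5, 200.0, size=(16, 16))
lr = np.log(t2 + 1.0) - np.log(t1 + 1.0)         # log-ratio image
F = gabor_feature_stack(lr).reshape(-1, 3)
U, centers = fuzzy_cmeans(F)
labels = U.argmax(axis=1).reshape(64, 64)
changed = labels == centers.sum(axis=1).argmax() # stronger-response cluster
print(changed[24:40, 24:40].mean(), changed[:16, :16].mean())  # high vs low
```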

Journal ArticleDOI
TL;DR: This paper proposes a new approach for similarity measurement between images acquired by heterogeneous sensors that exploits the physical properties of the sensors considered, especially the associated measurement noise models and local joint distributions.
Abstract: Remote sensing images are commonly used to monitor the evolution of the earth's surface. This surveillance can be conducted by detecting changes between images acquired at different times, possibly by different kinds of sensors. A representative case is when an optical image of a given area is available and a new image is acquired in an emergency situation (resulting, for instance, from a natural disaster) by a radar satellite. In such a case, images with heterogeneous properties have to be compared for change detection. This paper proposes a new approach for measuring the similarity between images acquired by heterogeneous sensors. The approach exploits the physical properties of the sensors considered, especially the associated measurement noise models and local joint distributions. These properties are inferred through manifold learning. The resulting similarity measure has been successfully applied to detect changes between many kinds of images, including pairs of optical images and optical-radar pairs.

Journal ArticleDOI
TL;DR: An approach to perform relative spectral alignment between optical cross-sensor acquisitions, with a completely automatic strategy to select the hyperparameters of the system as well as the dimensionality of the transformed (latent) space.
Abstract: In this paper we present an approach to perform relative spectral alignment between optical cross-sensor acquisitions. The proposed method aims at projecting the images from two different and possibly disjoint input spaces into a common latent space, in which standard change detection algorithms can be applied. The system relies on the regularized kernel canonical correlation analysis (kCCA) transformation, which can accommodate nonlinear dependencies between pixels by means of kernel functions. To learn the projections, the method employs a subset of samples belonging to unchanged areas or to uninteresting radiometric differences. Since the availability of ground truth information to perform model selection is limited, we propose a completely automatic strategy to select the hyperparameters of the system as well as the dimensionality of the transformed (latent) space. The proposed scheme is fully automatic and allows the use of any change detection algorithm in the transformed latent space. A synthetic problem built from real images and a case study involving a real cross-sensor change detection problem illustrate the capabilities of the proposed method. Results show that the proposed system outperforms the linear baseline and provides accuracies close to those obtained with a fully supervised strategy. We provide a MATLAB implementation of the proposed method, as well as the real cross-sensor data we prepared and employed, at https://sites.google.com/site/michelevolpiresearch/codes/cross-sensor.

Proceedings ArticleDOI
10 Aug 2015
TL;DR: This paper proposes a framework for detecting changes in multidimensional data streams based on principal component analysis, which is used to project data into a lower-dimensional space, thereby facilitating density estimation and change-score calculations; the framework has advantages over existing approaches.
Abstract: Detecting changes in multidimensional data streams is an important and challenging task. In unsupervised change detection, changes are usually detected by comparing the distribution in a current (test) window with that in a reference window. It is thus essential to design divergence metrics and density estimators for comparing the data distributions, which has mostly been done for univariate data. Detecting changes in multidimensional data streams complicates density estimation and comparison. In this paper, we propose a framework for detecting changes in multidimensional data streams based on principal component analysis, which is used to project data into a lower-dimensional space, thus facilitating density estimation and change-score calculations. The proposed framework also has advantages over existing approaches: it reduces computational costs with an efficient density estimator, promotes the change-score calculation by introducing effective divergence metrics, and minimizes the effort required from users for threshold setting by using the Page-Hinkley test. Evaluation results on synthetic and real data show that our framework outperforms two baseline methods in terms of both detection accuracy and computational cost.
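
A minimal sketch of the pipeline under stated assumptions: PCA fitted on a reference window, both windows projected onto the leading components, densities estimated with smoothed histograms, a symmetrized KL divergence as the change score, and the score stream monitored with the Page-Hinkley test. The divergence choice and all constants are illustrative, not necessarily the paper's.

```python
import numpy as np

def pca_project(ref, test, k=2):
    """Project both windows onto the top-k principal axes of the reference."""
    mu = ref.mean(axis=0)
    _, _, vt = np.linalg.svd(ref - mu, full_matrices=False)
    return (ref - mu) @ vt[:k].T, (test - mu) @ vt[:k].T

def kl_score(ref_p, test_p, bins=20):
    """Change score: symmetrized KL divergence between per-component
    histogram density estimates of the two projected windows."""
    score = 0.0
    for c in range(ref_p.shape[1]):
        lo = min(ref_p[:, c].min(), test_p[:, c].min())
        hi = max(ref_p[:, c].max(), test_p[:, c].max())
        p, _ = np.histogram(ref_p[:, c], bins, (lo, hi))
        q, _ = np.histogram(test_p[:, c], bins, (lo, hi))
        p = (p + 1.0) / (p.sum() + bins)        # Laplace smoothing
        q = (q + 1.0) / (q.sum() + bins)
        score += np.sum((p - q) * np.log(p / q))
    return score

class PageHinkley:
    """Page-Hinkley test on the change-score stream; `delta` and `lam` are
    illustrative settings, tuned per application."""
    def __init__(self, delta=0.05, lam=1.0):
        self.delta, self.lam = delta, lam
        self.mean, self.n, self.cum, self.cum_min = 0.0, 0, 0.0, 0.0
    def update(self, x):
        self.n += 1
        self.mean += (x - self.mean) / self.n
        self.cum += x - self.mean - self.delta
        self.cum_min = min(self.cum_min, self.cum)
        return self.cum - self.cum_min > self.lam

rng = np.random.default_rng(7)
stream = np.vstack([rng.normal(0, 1, (600, 10)), rng.normal(1, 1, (400, 10))])
ref, ph, W = stream[:200], PageHinkley(), 100
for t in range(200, len(stream) - W, W):
    r, s = pca_project(ref, stream[t:t + W])
    if ph.update(kl_score(r, s)):
        print("change detected near t =", t)    # expected after t = 600
        break
```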

Journal ArticleDOI
TL;DR: In this paper, the authors provide a detailed discussion about the use of self-normalization in different contexts, highlighting the distinctive features associated with each problem and the connections among these recent developments.
Abstract: This article reviews some recent developments on the inference of time series data using the self-normalized approach. We aim to provide a detailed discussion about the use of self-normalization in different contexts and to highlight the distinctive features associated with each problem, as well as the connections among these recent developments. The topics covered include: confidence interval construction for a parameter in a weakly dependent stationary time series setting, change point detection in the mean, robust inference in regression models with weakly dependent errors, inference for nonparametric time series regression, inference for long memory time series, locally stationary time series and near-integrated time series, change point detection, and two-sample inference for functional time series, as well as the use of self-normalization for spatial data and spatial-temporal data. Some new variations of the self-normalized approach are also introduced, with additional simulation results. We also provide a brief review ...

Journal ArticleDOI
TL;DR: This paper reviews recent methodology in the form of a method breakdown, distinguishing methods that aim at pure binary change detection from methods that additionally aim to quantify change.
Abstract: Laser scanning is rapidly evolving as a surveying technique and is used not only to assess the geometrical state of a scene but also to assess changes in that state. Change detection is, however, a challenging application for several reasons. First, laser scanning does not measure fixed points, as a total station does; therefore, in general, some interpolation or object-extraction method is required. Second, errors that are inevitably present when determining the geometric state of a scene of interest in one epoch will add up when comparing the state between epochs. In addition, data volumes are constantly increasing, so processing methods should be computationally efficient. This paper reviews recent methodology in the form of a method breakdown, distinguishing methods aiming at pure binary change detection from methods that additionally aim to quantify change. The direction of a change is also discussed, notably in connection with the measurement geometry, as is the reference state, which can be in the form of a free-form surface or of some idealized mathematical primitive such as a plane. The different methods are presented in connection with applications in fields such as structural monitoring, geomorphology, urban inventory and forestry, as considered by the original authors.

Journal ArticleDOI
TL;DR: The newly proposed DCD method is faster, requires less user input, and is better able to handle high-dimensional data; it overcomes the shortcomings of DCR by adopting a simplified sparse matrix estimation approach and a different hypothesis testing procedure to determine change points.
Abstract: Recently there has been increased interest in using fMRI data to study the dynamic nature of brain connectivity. In this setting, the activity in a set of regions of interest (ROIs) is often modeled using a multivariate Gaussian distribution, with a mean vector and covariance matrix that are allowed to vary as the experiment progresses, representing changing brain states. In this work, we introduce the Dynamic Connectivity Detection (DCD) algorithm, a data-driven technique to detect temporal change points in functional connectivity and to estimate a graph between ROIs for the data within each segment defined by the change points. DCD builds upon the framework of the recently developed Dynamic Connectivity Regression (DCR) algorithm, which has proven efficient at detecting changes in connectivity for problems with a small to medium number of ROIs (<100). The newly proposed DCD method is faster, requires less user input, and is better able to handle high-dimensional data. It overcomes the shortcomings of DCR by adopting a simplified sparse matrix estimation approach and a different hypothesis testing procedure to determine change points. The application of DCD to simulated data, as well as to fMRI data, illustrates the efficacy of the proposed method.
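
A minimal sketch of the two ingredients the abstract names, with simplifications: sparse graph estimation per segment via scikit-learn's graphical lasso, and a crude likelihood split score computed at the graphical-lasso estimates in place of DCD's actual hypothesis testing procedure.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

def segment_graph(X, alpha=0.1):
    """Sparse inverse-covariance (connectivity graph) for one data segment."""
    gl = GraphicalLasso(alpha=alpha).fit(X)
    return gl.precision_ != 0                    # adjacency of the ROI graph

def split_score(X, t, alpha=0.1):
    """Gain in Gaussian log-likelihood (at graphical-lasso estimates) from
    splitting the series at t; a crude stand-in for DCD's change-point test."""
    def ll(seg):
        gl = GraphicalLasso(alpha=alpha).fit(seg)
        s = np.cov(seg.T)
        _, logdet = np.linalg.slogdet(gl.precision_)
        return len(seg) * 0.5 * (logdet - np.trace(s @ gl.precision_))
    return ll(X[:t]) + ll(X[t:]) - ll(X)

# Toy fMRI-like data: connectivity between variables 0 and 1 appears at t=150.
rng = np.random.default_rng(8)
cov1, cov2 = np.eye(5), np.eye(5)
cov2[0, 1] = cov2[1, 0] = 0.7
X = np.vstack([rng.multivariate_normal(np.zeros(5), cov1, 150),
               rng.multivariate_normal(np.zeros(5), cov2, 150)])
scores = {t: split_score(X, t) for t in range(50, 251, 50)}
print(max(scores, key=scores.get))               # best split near t = 150
```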

Journal ArticleDOI
TL;DR: The proposed object-based approach has been tested for a sub-area of the Baichi catchment in northern Taiwan, with a focus on the mapping of landslides and debris flows/sediment transport areas caused by Typhoon Aere in 2004 and Typhoon Matsa in 2005.
Abstract: Earth observation (EO) data are very useful for the detection of landslides after triggering events, especially if they occur in remote and hardly accessible terrain. To fully exploit the potential of the wide range of existing remote sensing data, innovative and reliable landslide (change) detection methods are needed. Recently, object-based image analysis (OBIA) has been employed for EO-based landslide (change) mapping. The proposed object-based approach has been tested for a sub-area of the Baichi catchment in northern Taiwan. The focus is on the mapping of landslides and debris flows/sediment transport areas caused by Typhoon Aere in 2004 and Typhoon Matsa in 2005. For both events, pre- and post-disaster optical satellite images (SPOT-5 with 2.5 m spatial resolution) were analysed. A Digital Elevation Model (DEM) with 5 m spatial resolution and its derived products, i.e., slope and curvature, were additionally integrated in the analysis to support the semi-automated object-based landslide mapping. Changes were identified by comparing the normalised values of the Normalized Difference Vegetation Index (NDVI) and the Green Normalized Difference Vegetation Index (GNDVI) of segmentation-derived image objects between pre- and post-event images, and were attributed to landslide classes.
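
A minimal sketch of the change measure: NDVI (and, analogously, GNDVI) computed per date and differenced per image object. A fixed two-object label image stands in for the segmentation-derived objects, and the decision threshold is illustrative; the real workflow also integrates the DEM-derived slope and curvature layers.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    return (nir - red) / (nir + red + eps)

def gndvi(nir, green, eps=1e-9):
    return (nir - green) / (nir + green + eps)

def object_index_change(pre, post, seg):
    """Mean NDVI change per image object. `pre`/`post` are dicts of bands and
    `seg` an integer label image; a real OBIA workflow would derive `seg`
    from image segmentation and combine NDVI with GNDVI and terrain layers."""
    d = ndvi(post["nir"], post["red"]) - ndvi(pre["nir"], pre["red"])
    return {int(l): d[seg == l].mean() for l in np.unique(seg)}

# Toy scene: object 1 loses vegetation (NDVI drop), object 0 is stable.
seg = np.zeros((50, 50), dtype=int); seg[:, 25:] = 1
pre = {"red": np.full((50, 50), 0.1), "nir": np.full((50, 50), 0.5)}
post = {"red": pre["red"].copy(), "nir": pre["nir"].copy()}
post["nir"][:, 25:] = 0.15                      # landslide strips vegetation
changes = object_index_change(pre, post, seg)
print([l for l, dv in changes.items() if dv < -0.2])   # object 1 flagged
```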

Journal ArticleDOI
TL;DR: A novel technique is presented for parameter estimation of the Rayleigh-Rice density, based on a specific definition of the expectation-maximization algorithm; the technique is characterized by good theoretical properties, iteratively updates the parameters, and does not depend on specific optimization routines.
Abstract: The problem of estimating the parameters of a Rayleigh-Rice mixture density is often encountered in image analysis (e.g., remote sensing and medical image processing). In this paper, we address this general problem in the framework of change detection (CD) in multitemporal and multispectral images. One widely used approach to CD in multispectral images is based on the change vector analysis. Here, the distribution of the magnitude of the difference image can be theoretically modeled by a Rayleigh-Rice mixture density. However, given the complexity of this model, in applications, a Gaussian-mixture approximation is often considered, which may affect the CD results. In this paper, we present a novel technique for parameter estimation of the Rayleigh-Rice density that is based on a specific definition of the expectation-maximization algorithm. The proposed technique, which is characterized by good theoretical properties, iteratively updates the parameters and does not depend on specific optimization routines. Several numerical experiments on synthetic data demonstrate the effectiveness of the method, which is general and can be applied to any image processing problem involving the Rayleigh-Rice mixture density. In the CD context, the Rayleigh-Rice model (which is theoretically derived) outperforms other empirical models. Experiments on real multitemporal and multispectral remote sensing images confirm the validity of the model by returning significantly higher CD accuracies than those obtained by using the state-of-the-art approaches.
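
A minimal EM sketch for the Rayleigh-Rice mixture using SciPy's densities. The Rayleigh and weight updates are closed form; the Rice parameters are updated by numerical optimization here, unlike the paper's dedicated update equations (which avoid such routines), and the initialization is an ad hoc assumption.

```python
import numpy as np
from scipy import stats, optimize

def em_rayleigh_rice(x, iters=30):
    """EM for p(x) = w*Rayleigh(x; b) + (1-w)*Rice(x; nu, sigma), fitted to
    the magnitude of a difference image."""
    w, b = 0.5, np.percentile(x, 25) / 1.2533      # Rayleigh mean = b*sqrt(pi/2)
    nu, sigma = np.percentile(x, 75), x.std() / 2  # ad hoc initialization
    for _ in range(iters):
        # E-step: responsibilities of the Rayleigh (unchanged) component
        pr = w * stats.rayleigh.pdf(x, scale=b)
        pc = (1 - w) * stats.rice.pdf(x, nu / sigma, scale=sigma)
        g = pr / (pr + pc + 1e-300)
        # M-step: closed form for w and b, numerical for the Rice parameters
        w = g.mean()
        b = np.sqrt((g * x ** 2).sum() / (2.0 * g.sum()))
        def nll(p):
            n, s = p
            return -((1 - g) * stats.rice.logpdf(x, n / s, scale=s)).sum()
        nu, sigma = optimize.minimize(nll, [nu, sigma], method="L-BFGS-B",
                                      bounds=[(1e-6, None), (1e-6, None)]).x
    return float(w), float(b), float(nu), float(sigma)

# Toy magnitude data: unchanged pixels (Rayleigh) + changed pixels (Rice).
x = np.concatenate([stats.rayleigh.rvs(scale=1.0, size=4000, random_state=1),
                    stats.rice.rvs(5.0 / 1.5, scale=1.5, size=1000,
                                   random_state=2)])
print(em_rayleigh_rice(x))       # roughly w~0.8, b~1.0, nu~5.0, sigma~1.5
```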

Journal ArticleDOI
TL;DR: Experimental results show that the proposed method outperforms the state-of-the-art change detection methods under both “ideal” and “noisy” conditions.

Journal ArticleDOI
TL;DR: A point cloud de-noising and calibration approach that takes advantage of point redundancy in both space and time to improve the ability to detect small changes in many disciplines, such as rock slope pre-failure deformation, deformation in civil infrastructure, and small-scale geomorphological change.
Abstract: This study presents a point cloud de-noising and calibration approach that takes advantage of point redundancy in both space and time (4D). The purpose is to detect displacements using terrestrial laser scanner data at the sub-mm scale or smaller, similar to radar systems, for the study of very small natural changes, e.g., pre-failure deformation in rock slopes, small-scale failures or talus flux. The algorithm calculates distances using a multi-scale normal distance approach and uses a set of calibration point clouds to remove systematic errors. The median is used to filter distance values over a neighbourhood in space and time to reduce random errors. The choice of space and time neighbours does need to be optimized for the signal being studied, in order to avoid smoothing in either the spatial or the temporal domain. This is demonstrated by applying the algorithm to synthetic and experimental case examples. Optimal combinations of space and time neighbours in practical applications can lead to an improvement of one to two orders of magnitude in the level of detection for change, which will greatly improve our ability to detect small changes in many disciplines, such as rock slope pre-failure deformation, deformation in civil infrastructure and small-scale geomorphological change.
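
A minimal sketch of the space-time median filtering step on a gridded stack of per-epoch change distances (the paper works on point clouds with multi-scale normal distances and calibration clouds; a regular grid is used here for brevity). The window sizes are exactly the tuning knobs the authors warn about: too large and real signal is smoothed away in space or time.

```python
import numpy as np
from scipy.ndimage import median_filter

def spacetime_median(dist_stack, t_win=3, s_win=5):
    """Median-filter a (T, H, W) stack of per-epoch change distances over a
    combined temporal x spatial neighbourhood."""
    return median_filter(dist_stack, size=(t_win, s_win, s_win))

# Toy: a 1 mm/epoch creeping signal buried in 5 mm random noise.
rng = np.random.default_rng(11)
T, H, W = 12, 40, 40
noise = rng.normal(0, 5.0, size=(T, H, W))              # mm, random error
signal = np.zeros((T, H, W))
signal[:, 10:20, 10:20] = np.arange(T)[:, None, None] * 1.0  # slow creep
raw = signal + noise
smooth = spacetime_median(raw)
print(raw[-1, 15, 15].round(1), smooth[-1, 15, 15].round(1))  # noisy vs ~11
```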