
Proceedings ArticleDOI

Simultaneous registration and change detection in multitemporal, very high resolution remote sensing data

07 Jun 2015-pp 61-69

TL;DR: This paper proposes a modular, scalable, metric-free, single-shot change detection/registration method that exploits a decomposed interconnected graphical model formulation where registration similarity constraints are relaxed in the presence of change detection.

Abstract: In order to exploit the currently continuous streams of massive, multi-temporal, high-resolution remote sensing datasets there is an emerging need to address efficiently the image registration and change detection challenges. To this end, in this paper we propose a modular, scalable, metric free single shot change detection/registration method. The approach exploits a decomposed interconnected graphical model formulation where registration similarity constraints are relaxed in the presence of change detection. The deformation space is discretized, while efficient linear programming and duality principles are used to optimize a joint solution space where local consistency is imposed on the deformation and the detection space. Promising results on large scale experiments demonstrate the extreme potentials of our method.

Topics: Change detection (58%), Object detection (57%), Image registration (57%), Metric (mathematics) (51%)

Summary

1. Introduction

  • The current generation of space-borne and airborne sensors are generating nearly continuous streams of massive multi-temporal high resolution remote sensing data.
  • Most remote sensing and GIS software still employ semi-automated registration procedures when it comes to very large, multispectral, very high resolution satellite data [17, 7].
  • Among the various recently proposed methods, those based on Markov Random Fields [8, 24, 1], kernels [2, 26] and neural networks [22, 23] have gained important attention.
  • Focusing on man-made object change detection [4, 20] in urban and peri-urban regions, several approaches have been proposed based on very high resolution optical and radar data [19, 23, 6, 20].

2.1. MRF formulation

  • The authors have designed and built an MRF model over two different graphs of the same dimensions.
  • The interaction between the two graphs is performed by the similarity cost, which connects the registration with the change detection terms.
  • Each graph is superimposed on the image [9] and therefore every node of the graph acts and depends on a subset of pixels in the vicinity (depending on the chosen interpolation strategy).
  • In particular, the dimensions of the graph are related to the image dimensions forming a trade off between accuracy and computational complexity.
  • An energy function E_reg,ch = E_reg + E_ch is formulated, coupling the two different graphs into one.

2.2. The Registration Energy Term

  • The goal of image registration is to define a transformation map T which will project the source image to the target image.
  • The energy formulation for the registration comprises a similarity cost (which seeks to satisfy equation 2) and a smoothness penalty on the deformation domain.
  • The similarity cost depends on the presence of changes and will be subsequently defined.
  • The smoothness term penalises neighbouring nodes that have different displacement labels, depending on the distance of the labelled displacements.

2.3. The Change Detection Energy Term

  • The goal of the change detection term is to estimate the changed and unchanged image regions.
  • The authors employ two different labels in order to address the change detection problem, l^c_p ∈ {0, 1}.
  • The energy formulation for the change detection corresponds to a smoothness term which penalizes neighbouring nodes with different change labels.

2.4. Coupling the Energy Terms

  • The coupling between change detection and registration is achieved through the interconnection between the two graphs.
  • These two terms are integrated as in (equation 5) which simply uses a fixed cost in the presence of changes and the image matching cost in their absence.
  • With a slight abuse of notation the authors consider a node with an index p ∈ G (they recall that the two graphs are identical) corresponding to the same node throughout the two graphs (Greg, Gch).
  • In such a setting, optimizing an objective function seeking similarity correspondences is not meaningful and deformation vectors should be the outcome of the smoothness constraint on the displacement space.
  • Let us consider that this value is known and that it is independent from the image displacements, so the authors can distinguish the regions that have been changed.

2.5. Optimization

  • There are several techniques for the minimization of an MRF model; they can generally be grouped into those based on message passing and those based on graph cuts.
  • The first category is related to the linear programming relaxation [14].
  • The optimization in the implementation is performed by FastPD, which is based on the duality theorem of linear programming [15, 16].

3. Implementation

  • Concerning the image, different levels of Gaussian image pyramids are used iteratively.
  • In all their experiments, 2 image and 3 grid levels were found adequate for the very high resolution satellite data.
  • Regarding the label space, a search for possible displacements along 8 directions (x, y and diagonal axes) is performed, while the change labels are always two and correspond to change or no change description.
  • Depending on the label-factor parameter, the registration label values are refined towards the optimal ones.
  • One of the problems in traditional change detection techniques, is that change in intensities does not directly mean semantic change.

4.1. Dataset

  • The developed framework was applied to several pairs of multispectral VHR images from different satellite sensors (i.e., Quickbird and WorldView-2).
  • The multi-temporal dataset covers approximately a 9 km2 region in the East Prefecture of Attica in Greece.
  • The dataset is quite challenging both due to its size and the pictured complexity derived from the different acquisition angles.
  • For the quantitative evaluation the ground truth was manually collected and annotated after an attentive and laborious photointerpretation done by an expert.

4.2. Experimental Results

  • Regarding the evaluation for the man-made change detection task, experimental results after the application of the developed method are shown in Figure 3 and Figure 4.
  • In particular, in Figure 3 the detected changes are shown with a red color while the ground truth polygons are shown with green.
  • The behaviour of the developed method can be further observed in Figure 5, where certain examples with True Positives, False Negatives and False Positives cases are presented.
  • The selected metric affects, also, the computational time significantly.

5. Conclusions

  • The authors developed and validated a novel framework which addresses concurrently the registration and change detection tasks in very high resolution multispectral multitemporal optical satellite data.
  • The developed method is modular, scalable and metric free.
  • The formulation exploits a decomposed interconnected graphical model formulation where registration similarity constraints are relaxed in the presence of change detection.
  • The framework was optimized for the detection of changes related to man-made objects in urban and peri-urban environments.


HAL Id: hal-01264072
https://hal.archives-ouvertes.fr/hal-01264072
Submitted on 16 Feb 2016
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
Simultaneous Registration and Change Detection in Multitemporal, Very High Resolution Remote Sensing Data
Maria Vakalopoulou, Konstantinos Karantzalos, Nikos Komodakis, Nikos Paragios
To cite this version:
Maria Vakalopoulou, Konstantinos Karantzalos, Nikos Komodakis, Nikos Paragios. Simultaneous Registration and Change Detection in Multitemporal, Very High Resolution Remote Sensing Data. 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Jun 2015, Boston, United States. pp.61-69, 10.1109/CVPRW.2015.7301384. hal-01264072

Simultaneous Registration and Change Detection in Multitemporal, Very High Resolution Remote Sensing Data

Maria Vakalopoulou¹,², Konstantinos Karantzalos¹, Nikos Komodakis², Nikos Paragios³
¹ Remote Sensing Laboratory, National Technical University of Athens
² Ecole des Ponts ParisTech, University Paris-Est
³ Center for Visual Computing, Ecole Centrale de Paris
mariavak@mail.ntua.gr, karank@central.ntua.gr, nikos.komodakis@enpc.fr, nikos.paragios@ecp.fr
Abstract
In order to exploit the currently continuous streams of massive, multi-temporal, high-resolution remote sensing datasets there is an emerging need to address efficiently the image registration and change detection challenges. To this end, in this paper we propose a modular, scalable, metric free single shot change detection/registration method. The approach exploits a decomposed interconnected graphical model formulation where registration similarity constraints are relaxed in the presence of change detection. The deformation space is discretized, while efficient linear programming and duality principles are used to optimize a joint solution space where local consistency is imposed on the deformation and the detection space. Promising results on large scale experiments demonstrate the extreme potentials of our method.
1. Introduction
The current generation of space-borne and airborne sensors are generating nearly continuous streams of massive multi-temporal high resolution remote sensing data. However, in order to efficiently exploit these datasets their accurate co-registration is the first indispensable processing step. Despite the fact that image registration is among the most studied problems in computer vision, most remote sensing and GIS software still employ semi-automated registration procedures when it comes to very large, multispectral, very high resolution satellite data [17, 7]. This is, however, far from a cost-effective solution, especially if we consider huge multi-temporal datasets that require accurate co-registration [13, 12].
In addition, the primary goal of the analysis of multi-temporal datasets is the detection of changes between different land cover types [3, 11]. In particular, change detection of man-made objects is still an emerging challenge due to its significant importance for various engineering and environmental applications [18, 5, 10, 25, 3]. Apart from national and local government applications like the update of cadastral and other GIS databases, companies like Google and Microsoft are seeking to include extensively up-to-date 2D and 3D urban models in their products (e.g., Microsoft Virtual Earth and Google Earth).

Figure 1. The developed framework addresses simultaneously the registration and change detection tasks: unregistered multi-temporal satellite data (July 2006, July 2011) enter the joint registration and change detection framework (E = E_reg + E_chng, with E_reg,ch = V_reg,ch + V_pq,ch + V_pq,reg) and yield registered data and detected changes.

Change detection, however, from multi-temporal earth observation data is not a trivial task and still remains a challenge. Among the various recently proposed methods, those based on Markov Random Fields [8, 24, 1], kernels [2, 26] and neural networks [22, 23] have gained important attention. Focusing on man-made object change detection [4, 20] in urban and peri-urban regions, several approaches have been proposed based on very high resolution optical and radar data [19, 23, 6, 20]. However, these change detection techniques assume and require accurately co-registered data in order to perform pixel-by-pixel or region-by-region multi-temporal data fusion, correlation or any change analysis.
In this paper, we propose a simultaneous registration and change detection approach motivated by the fact that, on one hand, the registration of very high resolution data seems to be optimally addressed through deformation grids and powerful discrete optimization [12], while on the other hand, the desired changes are located in the regions for which correspondences between the unregistered multi-temporal data cannot be established (Figure 1).
To this end, we have designed, developed and evaluated a modular, scalable, metric-free single shot change detection/registration method. The approach exploits a decomposed interconnected graphical model formulation where registration similarity constraints are relaxed in the presence of change detection. We employ a discretized, grid-based deformation space. State-of-the-art linear programming and duality principles have been employed to optimize the joint solution space where local consistency is imposed on the deformation and the detection space. The unsupervised framework has been designed to handle and process large very high resolution multispectral remote sensing data, while optimized for man-made object change detection in urban and peri-urban regions. The developed method has been validated through large scale experiments on several multi-temporal very high resolution optical satellite datasets.
The main contributions of the developed method are (i) the novel, single and modular joint registration and change detection framework, (ii) the metric-free formulation which allows numerous and change-specific implementations, and (iii) the low computational complexity which allows near real-time performance with a GPU implementation. It should be mentioned that the detected changes cannot be directly employed for the update of, e.g., a geospatial database, since the developed unsupervised framework does not include any prior information about the geometry of the man-made objects.
Figure 2. Each graph (G_reg, G_ch) contains a smoothness term which imposes the necessary homogeneity within the graph. The interaction between the two graphs is performed by the similarity cost V_reg,ch, which connects the registration with the change detection terms.
2. Methodology
2.1. MRF formulation
We have designed and built an MRF model over two different graphs of the same dimensions. The first deformable graph corresponds to the registration term ($G_{reg}$) and the second one to the change detection term ($G_{ch}$). Each graph contains a smoothness term which imposes the necessary homogeneity within the graph. The interaction between the two graphs is performed by the similarity cost, which connects the registration with the change detection terms.
Each graph is superimposed on the image [9] and therefore every node of the graph acts on and depends on a subset of pixels in its vicinity (depending on the chosen interpolation strategy). In such a manner every pixel can participate in the graph through a weight related to its distance from the nodes. The computational complexity is therefore lower, as the graph's dimensions are smaller than those of the unregistered raw images. In particular, the dimensions of the graph are related to the image dimensions, forming a trade-off between accuracy and computational complexity. In such a setting the deformation of a pixel is defined through an interpolation of the displacements of the proximal graph nodes as follows:

$T(x) = x + \sum_{p \in G} \eta(\|x - p\|) \, d_p$   (1)

where $d_p$ is a displacement value, $\eta(\cdot)$ is the projection function, $p$ is a control point and $x$ is a pixel in the image. After the optimization, the optimal labels will be projected to the image pixels using the same projection function $\eta(\cdot)$. Once the similarity criterion has been defined, the next step consists of imposing certain continuity on the deformation space, which is discussed in the next subsection. That way, we formulate an energy function $E_{reg,ch} = E_{reg} + E_{ch}$ and we couple the two different graphs into one.
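The interpolation in equation (1) can be sketched in a few lines; the Gaussian weight below is an illustrative stand-in for the projection function η(·) (the paper projects with cubic B-splines, per Section 3):

```python
import numpy as np

def deform_pixel(x, nodes, displacements, sigma=8.0):
    """Sketch of equation (1): T(x) = x + sum_p eta(||x - p||) * d_p.

    The Gaussian weight here is an illustrative stand-in for the
    projection function eta; the paper projects with cubic B-splines.
    """
    x = np.asarray(x, dtype=float)
    dists = np.array([np.linalg.norm(x - p) for p in nodes])
    weights = np.exp(-dists ** 2 / (2 * sigma ** 2))
    weights /= weights.sum()                      # weights sum to 1
    offset = (weights[:, None] * displacements).sum(axis=0)
    return x + offset

# A pixel midway between two nodes receives the average displacement.
nodes = np.array([[0.0, 0.0], [16.0, 0.0]])
disps = np.array([[2.0, 0.0], [4.0, 0.0]])
print(deform_pixel([8.0, 0.0], nodes, disps))     # -> [11.  0.]
```

After optimization, the same weights would project the optimal node labels back onto the image pixels.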

The label for each node $p$ belonging to the graph $G$ will be $l_p = [l^c, l^{reg}]$, where $l^c$ are the labels for the change detection, $l^c \in \{0, 1\}$, and $l^{reg}$ are the labels for the registration, $l^{reg} \in \Theta = [d^1, \ldots, d^n]$, corresponding to all possible displacements. Concluding, the label space can be summarized as $L = \{0, 1\} \times \Theta$.
2.2. The Registration Energy Term
Let us denote a pair of images where $A$ is the source image and $V$ is the target image, defined on a domain $\Omega$. The goal of image registration is to define a transformation map $T$ which will project the source image to the target image:

$V(x) = A \circ T(x)$   (2)

Let us consider a discrete set of labels $L^{reg} = [1, \ldots, n]$ and a set of discrete displacements $\Theta = [d^1, \ldots, d^n]$. We seek to assign a label $l^{reg}_p$ to each grid node $p$, where each label corresponds to a discrete displacement $d^{l^{reg}_p}$.

The energy formulation for the registration comprises a similarity cost (which seeks to satisfy equation 2) and a smoothness penalty on the deformation domain. The similarity cost depends on the presence of changes and will be defined subsequently. The smoothness term penalises neighbouring nodes that have different displacement labels, depending on the distance of the labelled displacements:

$V_{pq,reg}(l^{reg}_p, l^{reg}_q) = \|d^{l^{reg}_p} - d^{l^{reg}_q}\|$   (3)

where $p$ and $q$ are neighbouring nodes.
2.3. The Change Detection Energy Term
The goal of the change detection term is to estimate the changed and unchanged image regions. We employ two different labels in order to address the change detection problem, $l^c_p \in \{0, 1\}$. The energy formulation for the change detection corresponds to a smoothness term which penalizes neighbouring nodes with different change labels:

$V_{pq,ch}(l^c_p, l^c_q) = \|l^c_p - l^c_q\|$   (4)
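The two pairwise terms translate almost directly; this sketch assumes displacements given as 2D vectors and change labels as 0/1 integers:

```python
import numpy as np

def pairwise_reg(d_p, d_q):
    """Equation (3): distance between neighbouring displacement labels."""
    return float(np.linalg.norm(np.asarray(d_p, float) - np.asarray(d_q, float)))

def pairwise_ch(l_p, l_q):
    """Equation (4): penalty when neighbouring change labels differ (0/1)."""
    return abs(l_p - l_q)

assert pairwise_reg([2, 0], [2, 0]) == 0.0   # identical displacements: free
assert pairwise_reg([2, 0], [0, 0]) == 2.0
assert pairwise_ch(0, 1) == 1                # a change boundary is penalized
```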
2.4. Coupling the Energy Terms
The coupling between change detection and registration is achieved through the interconnection between the two graphs. Assuming a pair of corresponding nodes of the two graphs, one would expect that in the absence of change the similarity cost should be satisfied, in which case the potential will be:

$V_{reg,ch}(l^{reg}_p, l^c_p) = (1 - l^c_p) \cdot \int \hat{\eta}(\|x - p\|) \, \rho(V(x), A(x + d^{l^{reg}_p})) \, dx + l^c_p \cdot C$   (5)

where we simply take all pixels in the vicinity of the graph node and project them back to the grid node with a weight that is proportional to the distance. In the presence of change, we use a fixed cost $C$. These two terms are integrated as in equation 5, which simply uses a fixed cost in the presence of changes and the image matching cost in their absence.
With a slight abuse of notation we consider a node with an index $p \in G$ (we recall that the two graphs are identical) corresponding to the same node throughout the two graphs ($G_{reg}$, $G_{ch}$). We can now integrate all terms into a single energy which detects changes, establishes correspondences and imposes smoothness on the change detection and the deformation map as follows:

$E_{reg,ch}(l^c, l^{reg}) = w_1 \cdot \sum_{p \in G} V_{reg,ch}(l^{reg}_p, l^c_p) + w_2 \cdot \sum_{p \in G_{reg}} \sum_{q \in N(p)} V_{pq,reg}(l^{reg}_p, l^{reg}_q) + w_3 \cdot \sum_{p \in G_{ch}} \sum_{q \in N(p)} V_{pq,ch}(l^c_p, l^c_q)$   (6)
where $V_{reg,ch}(l^{reg}_p, l^c_p)$ represents the coupling term for each node at each label, $V_{pq,reg}(l^{reg}_p, l^{reg}_q)$ the pairwise or binary term for the registration and $V_{pq,ch}(l^c_p, l^c_q)$ the pairwise term for the change detection.

In particular, a similarity function $\rho(\cdot)$ is used in order to compare the two images, while a constant value $C$ is used in order to define the changes. In the presence of change, optimizing an objective function seeking similarity correspondences is not meaningful, and deformation vectors should be the outcome of the smoothness constraint on the displacement space. However, the areas of change are unknown, and identifying them is one of the objectives of the optimization process. Without loss of generality we can assume that the matching cost corresponding to change can correspond to a value determined from the distribution of these costs on the entire domain (it is metric dependent). Let us consider that this value is known and that it is independent of the image displacements, so that we can distinguish the regions that have been changed. The advantage of the methodology is that by solving the two problems simultaneously, first, there are fewer false changes caused by the misregistration of the images and, second, the changed regions do not affect the entire displacement map, since we do not calculate the displacement there and their final displacement is driven by the unchanged neighbouring regions.

Finally, the pairwise costs for both terms have been described in equations 3 and 4.
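A sketch of how equations (5) and (6) combine, assuming precomputed matching costs ρ per node, a 4-connected neighbourhood, and the weights w2 = 35 and w3 = 3.5 reported in Section 3 (w1 = 1 and the array layout are assumptions for illustration):

```python
import numpy as np

def coupling_cost(match_cost, l_c, C=100.0):
    """Equation (5): matching cost where unchanged (l_c = 0),
    fixed cost C where changed (l_c = 1)."""
    return (1 - l_c) * match_cost + l_c * C

def total_energy(match, l_c, disp, w=(1.0, 35.0, 3.5)):
    """Equation (6) on a 4-connected grid.

    match : (H, W) precomputed matching costs rho per node (assumed given)
    l_c   : (H, W) binary change labels
    disp  : (H, W, 2) displacement selected by each node's label
    w     : (w1, w2, w3); w2, w3 from Section 3, w1 = 1 is an assumption
    """
    w1, w2, w3 = w
    e = w1 * coupling_cost(match, l_c).sum()
    for axis in (0, 1):                              # right/down neighbours
        e += w2 * np.linalg.norm(np.diff(disp, axis=axis), axis=-1).sum()
        e += w3 * np.abs(np.diff(l_c, axis=axis)).sum()
    return float(e)

# One changed node on an otherwise perfect grid costs C plus its
# two change-boundary edges: 1*100 + 3.5*2 = 107.
match = np.zeros((2, 2)); l_c = np.zeros((2, 2), int); l_c[0, 0] = 1
print(total_energy(match, l_c, np.zeros((2, 2, 2))))   # -> 107.0
```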
2.5. Optimization
There are several techniques for the minimization of an MRF model, which can generally be summarised into those based on message passing and those based on graph-cut methods. The first category is related to the linear programming relaxation [14]. The optimization in our implementation is performed by FastPD, which is based on the duality theorem of linear programming [15, 16].
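FastPD itself rests on primal-dual linear programming machinery that does not fit in a short excerpt; purely as an illustration of MRF label optimization, a naive ICM-style descent over a generic energy might look like this (this is not the paper's optimizer):

```python
def icm(nodes, labels, energy, iters=10):
    """Naive iterated conditional modes: greedily relabel one node at a
    time while the total energy decreases.  Illustration only; the paper
    minimizes its energy with FastPD, a primal-dual LP-based method."""
    assign = {p: labels[0] for p in nodes}
    for _ in range(iters):
        changed = False
        for p in nodes:
            best = min(labels, key=lambda l: energy({**assign, p: l}))
            if best != assign[p]:
                assign[p] = best
                changed = True
        if not changed:
            break
    return assign

# Toy two-node chain: unary terms prefer label 1, pairwise prefers agreement.
chain = lambda a: 2 * (1 - a[0]) + 2 * (1 - a[1]) + abs(a[0] - a[1])
print(icm([0, 1], [0, 1], chain))   # -> {0: 1, 1: 1}
```

Unlike ICM, FastPD provides optimality bounds via LP duality; the sketch only conveys what "assigning joint labels to minimize equation (6)" means.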
3. Implementation
The minimization of the MRF energy is performed within a multi-scale framework. Concerning the image, different levels of Gaussian image pyramids are used iteratively. Concerning the grid, in a similar way we consider different levels of it, beginning with a sparser grid. For very large remote sensing imagery the multi-scale approach diminishes the computational complexity without losing in terms of accuracy. The different levels of the images and of the grid, together with the consistency of nodes in the grid, are defined by the user. In all our experiments, 2 image and 3 grid levels were found adequate for the very high resolution satellite data.

Regarding the label space, a search for possible displacements along 8 directions (x, y and diagonal axes) is performed, while the change labels are always two and correspond to a change or no-change description. The number of registration labels is the same at each level. Depending on the label-factor parameter, the values of the registration labels change towards the optimal ones. The source image is deformed according to the optimal labels and is updated for the next level. The value 0.8 was employed for updating the registration labels. Last but not least, the maximum displacement was set smaller than 0.4 times the distance of two consecutive nodes in order to preserve the right displacement of every node.
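The label-space construction described above might be sketched as follows; the exact refinement scheme is an assumption based on the description (8 directions, a label factor of 0.8, and a step bounded by 0.4 times the node spacing):

```python
import itertools
import numpy as np

def displacement_labels(step, label_factor=0.8, iterations=3):
    """Sketch of the registration label space: the zero displacement plus
    moves along x, y and the diagonals (8 directions).  The step shrinks
    by `label_factor` between iterations; this refinement scheme is an
    assumption based on the description in Section 3."""
    directions = [np.array(d, float)
                  for d in itertools.product((-1, 0, 1), repeat=2)
                  if d != (0, 0)]                    # the 8 directions
    spaces, s = [], step
    for _ in range(iterations):
        spaces.append([np.zeros(2)] + [s * d for d in directions])
        s *= label_factor                            # refine towards optimum
    return spaces

# With a 16 px node spacing, the 0.4 x spacing cap gives step = 6.4 px.
spaces = displacement_labels(step=6.4)
print(len(spaces), len(spaces[0]))   # -> 3 9
```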
In addition, several block matching methods can be employed. Semantic changes in multitemporal imagery affect the local intensities and also change the structure of the region. One of the problems of traditional change detection techniques is that a change in intensities does not directly mean a semantic change. This was crucial since the focus here was on urban and peri-urban regions and man-made object changes. The optimal displacements using the SADG function are calculated using the weighted sum of the difference between the pair and the gradient inner product. On the other hand, any other similarity measure such as mutual information, normalized cross correlation or correlation ratio can be used. In Section 4 we have tested different similarity functions. For the SADG metric, focused on man-made object changes, the fixed cost C was set to 100. In particular, higher C values result in fewer changes. The parameter is not very sensitive, since values between 90 and 120 lead to comparable results.
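The text does not give a closed form for SADG beyond "a weighted sum of the difference between the pair and the gradient inner product", so the weight `alpha` and the normalization in this sketch are assumptions:

```python
import numpy as np

def sadg(patch_a, patch_b, alpha=0.5):
    """Sketch of the SADG matching cost: a weighted sum of absolute
    differences (SAD) and a gradient inner product term.  The weight
    `alpha` and the normalization are assumptions; the paper only states
    that the two terms are combined as a weighted sum."""
    a = np.asarray(patch_a, float)
    b = np.asarray(patch_b, float)
    sad = np.abs(a - b).mean()
    ga, gb = np.gradient(a), np.gradient(b)
    # The gradient inner product is large when edges agree, so it is
    # negated to act as a cost (lower = more similar).
    gip = -(ga[0] * gb[0] + ga[1] * gb[1]).mean()
    return alpha * sad + (1 - alpha) * gip

# Identical patches score lower (better) than a patch and its mirror.
ramp = np.tile(np.arange(8.0), (8, 1))
assert sadg(ramp, ramp) < sadg(ramp, ramp[:, ::-1])
```

Combining intensities with gradients is what makes the cost sensitive to structural (and hence man-made) change rather than to intensity change alone.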
Last but not least, the number of iterations per level was set to 10, the regularization parameter for the registration task to 35 and for the change detection one to 3.5. The function used for the projection from pixels to nodes and the reverse was cubic B-splines.
4. Experimental Results and Evaluation
4.1. Dataset
The developed framework was applied to several pairs of multispectral VHR images from different satellite sensors (i.e., Quickbird and WorldView-2). The multi-temporal dataset covers approximately a 9 km² region in the East Prefecture of Attica in Greece. All datasets were acquired between the years 2006 and 2011. The dataset is quite challenging both due to its size and the pictured complexity derived from the different acquisition angles. For the quantitative evaluation, the ground truth was manually collected and annotated after an attentive and laborious photointerpretation done by an expert.
4.2. Experimental Results
Extensive experiments were performed over several image pairs and based on several similarity metrics, namely the Sum of Absolute Differences (SAD), the Sum of Square Differences (SSD), the Normalized Cross Correlation (NCC), the Normalized Mutual Information (NMI), the Correlation Ratio (CR), the Sum of Gradient Inner Products (GRAD), the Normalized Correlation Coefficient plus Sum of Gradient Inner Products (CCGIP), the Hellinger Distance (HD), the Jensen-Renyi Divergence (JRD), the Mutual Information (MI) and the Sum of Absolute Differences plus Gradient Inner Products (SADG). The experimental results were evaluated both qualitatively and quantitatively for the registration and the change detection tasks.

Moreover, in order to evaluate the developed algorithm quantitatively, the standard quality metrics of Completeness, Correctness and Quality were calculated at the detected-object level. The True Positives (TP), False Negatives (FN) and False Positives (FP) were calculated in all cases.
$Completeness = \frac{TP}{TP + FN}$   (7)

$Correctness = \frac{TP}{TP + FP}$   (8)

$Quality = \frac{TP}{TP + FP + FN}$   (9)

where TP is the number of correctly detected changes, FN is the number of changes that have not been detected by the algorithm and FP is the number of false alarms.
Regarding the evaluation of the registration, a number of ground control points (GCPs) were manually collected in both unregistered and registered data. In particular, the GCPs contained several points on building roof tops, which usually have the largest displacements. The displacement

Citations

Journal ArticleDOI
07 Aug 2015
TL;DR: A taxonomical view of the field is provided and current methodologies for multimodal classification of remote sensing images are reviewed, highlighting the most recent advances, which exploit synergies with machine learning and signal processing.
Abstract: Earth observation through remote sensing images allows the accurate characterization and identification of materials on the surface from space and airborne platforms. Multiple and heterogeneous image sources can be available for the same geographical region: multispectral, hyperspectral, radar, multitemporal, and multiangular images can today be acquired over a given scene. These sources can be combined/fused to improve classification of the materials on the surface. Even if this type of systems is generally accurate, the field is about to face new challenges: the upcoming constellations of satellite sensors will acquire large amounts of images of different spatial, spectral, angular, and temporal resolutions. In this scenario, multimodal image fusion stands out as the appropriate framework to address these problems. In this paper, we provide a taxonomical view of the field and review the current methodologies for multimodal classification of remote sensing images. We also highlight the most recent advances, which exploit synergies with machine learning and signal processing: sparse methods, kernel-based fusion, Markov modeling, and manifold alignment. Then, we illustrate the different approaches in seven challenging remote sensing applications: 1) multiresolution fusion for multispectral image classification; 2) image downscaling as a form of multitemporal image fusion and multidimensional interpolation among sensors of different spatial, spectral, and temporal resolutions; 3) multiangular image classification; 4) multisensor image fusion exploiting physically-based feature extractions; 5) multitemporal image classification of land covers in incomplete, inconsistent, and vague image sources; 6) spatiospectral multisensor fusion of optical and radar images for change detection; and 7) cross-sensor adaptation of classifiers. The adoption of these techniques in operational settings will help to monitor our planet from space in the very near future.

222 citations


Journal ArticleDOI
TL;DR: This paper reviews the recent developments and applications of 3D CD using remote sensing and close-range data, in support of both academia and industry researchers who seek solutions in detecting and analyzing 3D dynamics of various objects of interest.
Abstract: Due to the unprecedented technology development of sensors, platforms and algorithms for 3D data acquisition and generation, 3D spaceborne, airborne and close-range data, in the form of image-based and Light Detection and Ranging (LiDAR) based point clouds, Digital Elevation Models (DEM) and 3D city models, become more accessible than ever before. Change detection (CD) or time-series data analysis in 3D has gained great attention due to its capability of providing volumetric dynamics to facilitate more applications and provide more accurate results. The state-of-the-art CD reviews aim to provide a comprehensive synthesis and to simplify the taxonomy of the traditional remote sensing CD techniques, which mainly sit within the boundary of 2D image/spectrum analysis, largely ignoring the particularities of 3D aspects of the data. The inclusion of 3D data for change detection (termed 3D CD) not only provides a source with a different modality for analysis, but also transcends the border of traditional top-view 2D pixel/object-based analysis to highly detailed, oblique-view or voxel-based geometric analysis. This paper reviews the recent developments and applications of 3D CD using remote sensing and close-range data, in support of both academia and industry researchers who seek solutions in detecting and analyzing 3D dynamics of various objects of interest. We first describe the general considerations of 3D CD problems in different processing stages and identify CD types based on the information used, being the geometric comparison and geometric-spectral analysis. We then summarize relevant works and practices in urban, environment, ecology and civil applications, etc. Given the broad spectrum of applications and different types of 3D data, we discuss important issues in 3D CD methods. Finally, we present concluding remarks in algorithmic aspects of 3D CD.

125 citations


Cites background from "Simultaneous registration and chang..."

  • ...A few of them investigated the possibility of using very high resolution (VHR) images for 2D CD in a finer level (Bouziani et al., 2010; Brunner et al., 2010; Huang et al., 2014; Košecka, 2012; Vakalopoulou et al., 2015)....



Journal ArticleDOI
TL;DR: This paper presents the first large scale very high resolution semantic change detection dataset, which enables the usage of deep supervised learning methods for semantic change detection with very high resolution images, and presents a network architecture that performs change detection and land cover mapping simultaneously.
Abstract: Change detection is one of the main problems in remote sensing, and is essential to the accurate processing and understanding of the large scale Earth observation data available. Most of the recently proposed change detection methods bring deep learning to this context, but change detection labelled datasets which are openly available are still very scarce, which limits the methods that can be proposed and tested. In this paper we present the first large scale very high resolution semantic change detection dataset, which enables the usage of deep supervised learning methods for semantic change detection with very high resolution images. The dataset contains coregistered RGB image pairs, pixel-wise change information and land cover information. We then propose several supervised learning methods using fully convolutional neural networks to perform semantic change detection. Most notably, we present a network architecture that performs change detection and land cover mapping simultaneously, while using the predicted land cover information to help to predict changes. We also describe a sequential training scheme that allows this network to be trained without setting a hyperparameter that balances different loss functions and achieves the best overall results.

54 citations


Cites methods from "Simultaneous registration and chang..."

  • ...Unsupervised methods have been used for change detection in many different ways (Hussain et al., 2013; Vakalopoulou et al., 2015; Liu et al., 2019)....



Journal ArticleDOI
TL;DR: The winning team developed a deep learning method to jointly address spatial scene labeling and temporal activity modeling using the available image and video data, and the second place team proposed a random field model to simultaneously perform coregistration of multitemporal data, semantic segmentation, and change detection.
Abstract: In this paper, the scientific outcomes of the 2016 Data Fusion Contest organized by the Image Analysis and Data Fusion Technical Committee of the IEEE Geoscience and Remote Sensing Society are discussed. The 2016 Contest was an open topic competition based on a multitemporal and multimodal dataset, which included a temporal pair of very high resolution panchromatic and multispectral Deimos-2 images and a video captured by the Iris camera on-board the International Space Station. The problems addressed and the techniques proposed by the participants to the Contest spanned across a rather broad range of topics, and mixed ideas and methodologies from the remote sensing, video processing, and computer vision. In particular, the winning team developed a deep learning method to jointly address spatial scene labeling and temporal activity modeling using the available image and video data. The second place team proposed a random field model to simultaneously perform coregistration of multitemporal data, semantic segmentation, and change detection. The methodological key ideas of both these approaches and the main results of the corresponding experimental validation are discussed in this paper.

47 citations


Cites background or methods from "Simultaneous registration and chang..."

  • ...Following the notations of [6] and [55], the first graph, G_reg, involved nodes where the labels corresponded to deformation vectors from the registration process, i....


  • ...1) the registration (V_pq,reg(l_p^reg, l_q^reg)) and change detection (V_pq,ch(l_p^ch, l_q^ch)) pairwise terms followed the same formulation as in [6] and [55] and penalized neighboring nodes


  • ...In particular, the formulation of [6] and [55], was extended...



Journal ArticleDOI
03 Feb 2018-Sensors
TL;DR: A new approach for change detection in 3D point clouds that combines classification and CD in one step using machine learning is suggested.
Abstract: This paper suggests a new approach for change detection (CD) in 3D point clouds. It combines classification and CD in one step using machine learning. The point cloud data of both epochs are merged for computing features of four types: features describing the point distribution, a feature relating to relative terrain elevation, features specific for the multi-target capability of laser scanning, and features combining the point clouds of both epochs to identify the change. All these features are merged in the points and then training samples are acquired to create the model for supervised classification, which is then applied to the whole study area. The final results reach an overall accuracy of over 90% for both epochs of eight classes: lost tree, new tree, lost building, new building, changed ground, unchanged building, unchanged tree, and unchanged ground.
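The pipeline above merges both epochs and learns a supervised classifier on hand-crafted features; one of the simplest cross-epoch geometric cues such features build on is the distance from each point to the other epoch's cloud. A minimal nearest-neighbour sketch (illustrative only, not the paper's method; the threshold and toy clouds are invented):

```python
import numpy as np

def nn_change_flags(points_a, points_b, threshold=0.5):
    """Flag points of epoch B whose nearest neighbour in epoch A is far away.

    points_a, points_b: (N, 3) arrays. Brute-force O(N*M) distances, fine
    for a sketch; a k-d tree would be used on real scans.
    """
    # pairwise distances from every B point to every A point
    d = np.linalg.norm(points_b[:, None, :] - points_a[None, :, :], axis=2)
    nn_dist = d.min(axis=1)          # distance to the closest epoch-A point
    return nn_dist > threshold       # True = candidate "new" point

# toy clouds: epoch B repeats epoch A and adds one far-away point
a = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
b = np.vstack([a, [[5., 5., 5.]]])
flags = nn_change_flags(a, b)
```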

44 citations


References

Journal ArticleDOI
Masroor Hussain, Dongmei Chen, Angela Cheng, Hui Wei, David Stanley
TL;DR: This paper begins with a discussion of the traditionally pixel-based and (mostly) statistics-oriented change detection techniques which focus mainly on the spectral values and mostly ignore the spatial context, followed by a review of object-based change detection techniques.
Abstract: The appetite for up-to-date information about earth’s surface is ever increasing, as such information provides a base for a large number of applications, including local, regional and global resources monitoring, land-cover and land-use change monitoring, and environmental studies. The data from remote sensing satellites provide opportunities to acquire information about land at varying resolutions and has been widely used for change detection studies. A large number of change detection methodologies and techniques, utilizing remotely sensed data, have been developed, and newer techniques are still emerging. This paper begins with a discussion of the traditionally pixel-based and (mostly) statistics-oriented change detection techniques which focus mainly on the spectral values and mostly ignore the spatial context. This is succeeded by a review of object-based change detection techniques. Finally there is a brief discussion of spatial data mining techniques in image processing and change detection from remote sensing data. The merits and issues of different techniques are compared. The importance of the exponential increase in the image data volume and multiple sensors and associated challenges on the development of change detection techniques are highlighted. With the wide use of very-high-resolution (VHR) remotely sensed images, object-based methods and data mining techniques may have more potential in change detection.
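The pixel-based, statistics-oriented techniques this review opens with reduce to differencing spectral values and thresholding the result; a minimal sketch (the k-sigma global threshold and the toy images are invented for illustration, not taken from the review):

```python
import numpy as np

def difference_change_map(img_t1, img_t2, k=2.0):
    """Classic pixel-based change detection by image differencing.

    A pixel is flagged as changed when the magnitude of its spectral
    difference exceeds the scene mean by more than k standard
    deviations -- the usual global-threshold heuristic.
    """
    diff = img_t2.astype(float) - img_t1.astype(float)
    if diff.ndim == 3:                       # multispectral: magnitude over bands
        diff = np.linalg.norm(diff, axis=2)
    else:
        diff = np.abs(diff)
    thr = diff.mean() + k * diff.std()
    return diff > thr

# toy single-band pair: one bright new object on an unchanged background
t1 = np.zeros((8, 8))
t2 = t1.copy()
t2[3, 4] = 100.0
changed = difference_change_map(t1, t2)
```

As the review notes, such per-pixel rules ignore spatial context, which is what the object-based methods it surveys address.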

909 citations


"Simultaneous registration and chang..." refers background in this paper

  • ...of man-made objects is still an emerging challenge due to the significant importance for various engineering and environmental applications [18, 5, 10, 25, 3]....



Journal ArticleDOI
TL;DR: New extensions to the previously published multivariate alteration detection (MAD) method for change detection in bi-temporal, multi- and hypervariate data such as remote sensing imagery and three regularization schemes are described.
Abstract: This paper describes new extensions to the previously published multivariate alteration detection (MAD) method for change detection in bi-temporal, multi- and hypervariate data such as remote sensing imagery. Much like boosting methods often applied in data mining work, the iteratively reweighted (IR) MAD method in a series of iterations places increasing focus on "difficult" observations, here observations whose change status over time is uncertain. The MAD method is based on the established technique of canonical correlation analysis: for the multivariate data acquired at two points in time and covering the same geographical region, we calculate the canonical variates and subtract them from each other. These orthogonal differences contain maximum information on joint change in all variables (spectral bands). The change detected in this fashion is invariant to separate linear (affine) transformations in the originally measured variables at the two points in time, such as 1) changes in gain and offset in the measuring device used to acquire the data, 2) data normalization or calibration schemes that are linear (affine) in the gray values of the original variables, or 3) orthogonal or other affine transformations, such as principal component (PC) or maximum autocorrelation factor (MAF) transformations. The IR-MAD method first calculates ordinary canonical and original MAD variates. In the following iterations we apply different weights to the observations, large weights being assigned to observations that show little change, i.e., for which the sum of squared, standardized MAD variates is small, and small weights being assigned to observations for which the sum is large. Like the original MAD method, the iterative extension is invariant to linear (affine) transformations of the original variables. To stabilize solutions to the (IR-)MAD problem, some form of regularization may be needed. This is especially useful for work on hyperspectral data. 
This paper describes ordinary two-set canonical correlation analysis, the MAD transformation, the iterative extension, and three regularization schemes. A simple case with real Landsat Thematic Mapper (TM) data at one point in time and (partly) constructed data at the other point in time that demonstrates the superiority of the iterative scheme over the original MAD method is shown. Also, examples with SPOT High Resolution Visible data from an agricultural region in Kenya, and hyperspectral airborne HyMap data from a small rural area in southeastern Germany are given. The latter case demonstrates the need for regularization.
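The (non-iterated) MAD transformation the abstract builds on can be sketched directly from sample covariance matrices. An illustrative numpy version, assuming well-conditioned covariances and omitting the iterative reweighting and regularization described above:

```python
import numpy as np

def mad_variates(X, Y):
    """MAD variates via canonical correlation analysis.

    X, Y: (n_pixels, n_bands) arrays from the two acquisition dates.
    Returns the MAD variates (differences of paired canonical variates)
    and the canonical correlations in decreasing order.
    """
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = Xc.T @ Xc / (n - 1)
    Syy = Yc.T @ Yc / (n - 1)
    Sxy = Xc.T @ Yc / (n - 1)
    # canonical directions from (Sxy Syy^-1 Syx) a = rho^2 Sxx a
    M = np.linalg.solve(Sxx, Sxy @ np.linalg.solve(Syy, Sxy.T))
    rho2, A = np.linalg.eig(M)
    order = np.argsort(rho2.real)[::-1]
    rho = np.sqrt(np.clip(rho2[order].real, 0.0, 1.0))
    A = A[:, order].real
    A = A / np.sqrt(np.sum(A * (Sxx @ A), axis=0))   # unit-variance variates
    B = np.linalg.solve(Syy, Sxy.T @ A)              # paired directions for Y
    B = B / np.sqrt(np.sum(B * (Syy @ B), axis=0))
    U, V = Xc @ A, Yc @ B
    return U - V, rho

# sanity example: identical dates -> no change, all correlations equal 1
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
mad, rho = mad_variates(X, X.copy())
```

The IR-MAD extension would re-run this with per-observation weights derived from the chi-square statistic of the standardized MAD variates.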

452 citations


Additional excerpts

  • ...Results have been also compared with the unsupervised IRMAD [21] change detection algorithm....


  • ...Results table (excerpt):

    Method   Complet. %   Corr. %   Quality %
    IRMAD      67.2         36.8      30.1
    SADG       92.2         80.1      74.4
    SAD        95.2         64.9      60.01
    SSD        94.1         67.3      61.4
    NCC        77.7         40.5      34.8
    NMI        55.3         62.8      50.1
    CR         60.5         30.3      25.2
    GRAD       35.1         49.3      23.1
    CCGIP      77.8         40.4      34.9
    JRD        39.6         50.1      30.4
    HD         83.6         60.1      57.8
    MI         41.9         52.8      30.1

    Table 2....

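The Completeness, Correctness and Quality columns in the excerpted comparison are the standard object-extraction scores computed from true positives, false positives and false negatives. A small sketch (the counts are invented for illustration):

```python
def extraction_scores(tp, fp, fn):
    """Completeness, correctness and quality as used in change detection
    evaluation: completeness = TP/(TP+FN), correctness = TP/(TP+FP),
    quality = TP/(TP+FP+FN)."""
    completeness = tp / (tp + fn)
    correctness = tp / (tp + fp)
    quality = tp / (tp + fp + fn)
    return completeness, correctness, quality

# hypothetical counts: 8 changed pixels found, 2 false alarms, 2 missed
comp, corr, qual = extraction_scores(tp=8, fp=2, fn=2)
```

Quality is the strictest of the three, since it penalizes both false alarms and misses at once.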


BookDOI
23 Oct 2009
TL;DR: This edited volume surveys kernel learning methods for remote sensing data analysis, including SVM-based supervised and semi-supervised image classification (among them a domain adaptation SVM with a circular validation strategy for land-cover map updating), target and anomaly detection, regression, and kernel-based feature extraction.
Abstract: About the editors. List of authors. Preface. Acknowledgments. List of symbols. List of abbreviations.
I. Introduction: 1. Machine learning techniques in remote sensing data analysis (Bjorn Waske, Mathieu Fauvel, Jon Atli Benediktsson and Jocelyn Chanussot); 2. An introduction to kernel learning algorithms (Peter V. Gehler and Bernhard Scholkopf).
II. Supervised image classification: 3. The Support Vector Machine (SVM) algorithm for supervised classification of hyperspectral remote sensing data (J. Anthony Gualtieri); 4. On training and evaluation of SVM for remote sensing applications (Giles M. Foody); 5. Kernel Fisher's Discriminant with heterogeneous kernels (M. Murat Dundar and Glenn Fung); 6. Multi-temporal image classification with kernels (Jordi Munoz-Mari, Luis Gomez-Chova, Manel Martinez-Ramon, Jose Luis Rojo-Alvarez, Javier Calpe-Maravilla and Gustavo Camps-Valls); 7. Target detection with kernels (Nasser M. Nasrabadi); 8. One-class SVMs for hyperspectral anomaly detection (Amit Banerjee, Philippe Burlina and Chris Diehl).
III. Semi-supervised image classification: 9. A domain adaptation SVM and a circular validation strategy for land-cover maps updating (Mattia Marconcini and Lorenzo Bruzzone); 10. Mean kernels for semi-supervised remote sensing image classification (Luis Gomez-Chova, Javier Calpe-Maravilla, Lorenzo Bruzzone and Gustavo Camps-Valls).
IV. Function approximation and regression: 11. Kernel methods for unmixing hyperspectral imagery (Joshua Broadwater, Amit Banerjee and Philippe Burlina); 12. Kernel-based quantitative remote sensing inversion (Yanfei Wang, Changchun Yang and Xiaowen Li); 13. Land and sea surface temperature estimation by support vector regression (Gabriele Moser and Sebastiano B. Serpico).
V. Kernel-based feature extraction: 14. Kernel multivariate analysis in remote sensing feature extraction (Jeronimo Arenas-Garcia and Kaare Brandt Petersen); 15. KPCA algorithm for hyperspectral target/anomaly detection (Yanfeng Gu); 16. Remote sensing data classification with kernel nonparametric feature extractions (Bor-Chen Kuo, Jinn-Min Yang and Cheng-Hsuan Li). Index.

378 citations


Journal ArticleDOI
TL;DR: It is shown that by appropriately choosing what subproblems to use, one can design novel and very powerful MRF optimization algorithms that generalize and extend state-of-the-art message-passing methods and take full advantage of the special structure that may exist in particular MRFs.
Abstract: This paper introduces a new rigorous theoretical framework to address discrete MRF-based optimization in computer vision. Such a framework exploits the powerful technique of Dual Decomposition. It is based on a projected subgradient scheme that attempts to solve an MRF optimization problem by first decomposing it into a set of appropriately chosen subproblems, and then combining their solutions in a principled way. In order to determine the limits of this method, we analyze the conditions that these subproblems have to satisfy and demonstrate the extreme generality and flexibility of such an approach. We thus show that by appropriately choosing what subproblems to use, one can design novel and very powerful MRF optimization algorithms. For instance, in this manner we are able to derive algorithms that: 1) generalize and extend state-of-the-art message-passing methods, 2) optimize very tight LP-relaxations to MRF optimization, and 3) take full advantage of the special structure that may exist in particular MRFs, allowing the use of efficient inference techniques such as, e.g., graph-cut-based methods. Theoretical analysis on the bounds related with the different algorithms derived from our framework and experimental results/comparisons using synthetic and real data for a variety of tasks in computer vision demonstrate the extreme potentials of our approach.
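The projected-subgradient scheme the abstract describes can be illustrated on the smallest nontrivial case: a 3-node chain MRF split into two single-edge slaves that share the middle node. This is a hedged sketch with hand-picked toy potentials (not one of the paper's benchmark MRFs), solving each slave by enumeration:

```python
import numpy as np

def dual_decomposition_chain(unary, pairwise, iters=100, step0=1.0):
    """Dual decomposition by projected subgradient for a 3-node chain MRF.

    The chain x1-x2-x3 is decomposed into two edge slaves sharing node 2
    (its unary potential is split in half); a Lagrange multiplier lam is
    updated by subgradient ascent until both copies of x2 agree.
    unary: (3, L) array; pairwise: (L, L) array shared by both edges.
    """
    L = unary.shape[1]
    half = unary[1] / 2.0
    lam = np.zeros(L)                        # multiplier on the shared node

    def solve_edge(u, v):
        # exhaustively minimize u[x] + pairwise[x, y] + v[y]
        e = u[:, None] + pairwise + v[None, :]
        x, y = np.unravel_index(np.argmin(e), e.shape)
        return e[x, y], x, y

    for t in range(iters):
        ea, x1, x2a = solve_edge(unary[0], half + lam)
        eb, x2b, x3 = solve_edge(half - lam, unary[2])
        bound = ea + eb                      # dual lower bound on the energy
        if x2a == x2b:                       # slaves agree -> bound is tight
            break
        g = np.zeros(L)                      # subgradient of the dual
        g[x2a] += 1.0
        g[x2b] -= 1.0
        lam += (step0 / (t + 1)) * g         # diminishing-step ascent
    return bound, (x1, x2a, x2b, x3)

# toy instance: 2 labels, unaries favouring label 0, Potts smoothness
un = np.array([[0., 2.], [1., 0.], [0., 2.]])
pw = np.array([[0., 1.], [1., 0.]])
bound, (x1, x2a, x2b, x3) = dual_decomposition_chain(un, pw)
```

On larger graphs the slaves would be trees solved by dynamic programming, and the returned bound is only guaranteed to be a lower bound, matching the optimum when the relaxation is tight.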

347 citations


"Simultaneous registration and chang..." refers background in this paper

  • ...The first category is related to the linear programming relaxation [14]....



Book
20 Mar 2019
Abstract: Images, Arrays, and Vectors. Image Statistics. Transformations. Radiometric Enhancement. Topographic Modeling. Image Registration. Image Sharpening. Change Detection. Unsupervised Classification. Supervised Classification. Hyperspectral Analysis.

276 citations


"Simultaneous registration and chang..." refers background in this paper

  • ...In addition, the primary goal of the analysis of multitemporal datasets is the detection of changes between different land cover types [3, 11]....


  • ...of man-made objects is still an emerging challenge due to the significant importance for various engineering and environmental applications [18, 5, 10, 25, 3]....



Frequently Asked Questions (2)
Q1. What have the authors contributed in "Simultaneous registration and change detection in multitemporal, very high resolution remote sensing data" ?

To this end, in this paper the authors propose a modular, scalable, metric free single shot change detection/registration method. Promising results on large scale experiments demonstrate the extreme potentials of their method. 

The integration of prior knowledge regarding texture and geometric features is currently under development, and a GPU implementation is among the future perspectives as well.