
JOURNAL OF GEOPHYSICAL RESEARCH: SOLID EARTH, VOL. 118, 2397–2407, doi:10.1002/jgrb.50152, 2013
Detecting offsets in GPS time series: First results from the detection of offsets in GPS experiment

Julien Gazeaux,1 Simon Williams,2 Matt King,1,3 Machiel Bos,4 Rolf Dach,5 Manoj Deo,6 Angelyn W. Moore,7 Luca Ostini,5 Elizabeth Petrie,1 Marco Roggero,8 Felix Norman Teferle,9 German Olivares,9 and Frank H. Webb7
Received 4 December 2012; revised 7 March 2013; accepted 8 March 2013; published 8 May 2013.
[1] The accuracy of Global Positioning System (GPS) time series is degraded by the presence of offsets. To assess the effectiveness of methods that detect and remove these offsets, we designed and managed the Detection of Offsets in GPS Experiment. We simulated time series that mimicked realistic GPS data consisting of a velocity component, offsets, and white and flicker noises (1/f spectrum noises) composed in an additive model. The data set was made available to the GPS analysis community without revealing the offsets, and several groups conducted blind tests with a range of detection approaches. The results show that, at present, manual methods (where offsets are hand picked) almost always give better results than automated or semi-automated methods (two automated methods give velocity biases quite similar to the best manual solutions). For instance, the fifth percentile range (5% to 95%) in velocity bias for automated approaches is equal to 4.2 mm/yr (most commonly ±0.4 mm/yr from the truth), whereas it is equal to 1.8 mm/yr for the manual solutions (most commonly ±0.2 mm/yr from the truth). The magnitude of offsets detectable by manual solutions is smaller than for automated solutions, with the smallest detectable offsets for the best manual and automatic solutions equal to 5 mm and 8 mm, respectively. Assuming the simulated time series noise levels are representative of real GPS time series, geophysical interpretation of individual site velocities lower than 0.2–0.4 mm/yr is therefore certainly not robust, although a limit nearer 1 mm/yr would be a more conservative choice. Further work to improve offset detection in GPS coordinate time series is required before we can routinely interpret sub-mm/yr velocities for single GPS stations.
Citation: Gazeaux, J., et al. (2013), Detecting offsets in GPS time series: first results from the detection of offsets in GPS
experiment, J. Geophys. Res. Solid Earth, 118, 2397–2407, doi:10.1002/jgrb.50152.
1 School of Civil Engineering and Geosciences, Newcastle University, Newcastle, UK.
2 National Oceanography Centre, Liverpool, UK.
3 School of Geography and Environmental Studies, University of Tasmania, Hobart, Tas, Australia.
4 CIIMAR/CIMAR, Interdisciplinary Centre of Marine and Environmental Research, University of Porto, Porto, Portugal.
5 Astronomical Institute, University of Bern, Bern, Switzerland.
6 National Geospatial Reference Systems, Earth Monitoring Group, Canberra, ACT, Australia.
7 Jet Propulsion Laboratory, California Institute of Technology, Pasadena, California, USA.
8 Politecnico di Torino, Torino, Italy.
9 Geophysics Laboratory, University of Luxembourg, Luxembourg.

Corresponding author: Julien Gazeaux, School of Civil Engineering and Geosciences, Newcastle University, Newcastle, UK. (julien.gazeaux@ncl.ac.uk)

©2013. American Geophysical Union. All Rights Reserved.
2169-9313/13/10.1002/jgrb.50152
1. Introduction
[2] Since the 1980s, GPS receivers have been established
at a variety of geophysical sites to measure positions and
velocities of Earth’s surface. As data analysis approaches
have improved, the time series have achieved increasingly high precision [Santamaria-Gomez et al., 2011]. However, further improvements are necessary in order to measure small geophysical signals or test competing models. For example, intra-plate deformations may be as small as a few tenths of a millimeter per year, requiring precision below 0.1 mm/yr [e.g., Frankel et al., 2011], and
tide gauge vertical land movements need to be obtained
with a precision and accuracy of around 0.1–0.2 mm/yr
[Wöppelmann et al., 2009] in order not to degrade measure-
ments of sea level change. In such cases, even small errors
in the GPS coordinate time series may be important.
[3] However, GPS coordinate time series remain disrupted by offsets occurring at times that are known (e.g., documented equipment changes) or unknown, and with magnitudes that are at best known imprecisely. Offsets in coordinate time series are defined as a sharp change in the mean that has a long-lasting effect on estimated parameters such as velocity. Depending on their locations in the time series, undetected offsets may have a detrimental effect on velocity estimation. For example, when estimating uplift rates in East Antarctica, Thomas et al. [2011] reported velocities approximately 2.1 mm/yr lower than Argus et al. [2011], leading to very different interpretations of the data. Thomas et al. [2011] suggest that about 50% of the difference was due to differences in handling offsets. Both approaches are described in their respective auxiliary material. This velocity difference illustrates the difficulty of detecting offsets robustly.

Figure 1. SOPAC offset (a) description and (b) magnitude distribution since 1995 over 340 sites (560 offsets). [Panel (a) categorizes offsets by cause: equipment change (29%), seismic event (34%), unknown (33%), and <1% each for environmental change, creep event, monument disruption, and metadata change. Panel (b) shows the offset size distribution (mm) for unknown, seismic, and equipment-change offsets, with a Gaussian reference curve.]
[4] As the length of time series increases, the number
of offsets is likely to increase and the cumulative effect
of even small offsets can significantly alter position and
velocity estimates. The detection of offsets is therefore an
important challenge when attempting to obtain an accurate
understanding of Earth surface deformation.
[5] Offset detection, also known as data segmentation or
homogenization, is a problem investigated in a large num-
ber of scientific studies. These include climate/meteorology
[Beaulieu et al., 2008; Gazeaux et al., 2011], bio-
statistics research [Olshen et al., 2004], image processing
[Pham et al., 2000], and quantitative marketing [Fong and
DeSarbo, 2007]. Some studies have also been dedicated to
the importance of offsets in GPS studies [Williams, 2003a]
and their detection [Khodabandeh et al., 2012; Williams,
2003a; Borghi et al., 2012; Vitti, 2012]. Williams [2003b]
notably highlights the role of offsets in velocity estimation
of GPS time series as well as the impact of the position and
magnitude of the offsets in the time series.
[6] As shown in Figure 1a, according to the SOPAC
archive (the Scripps Orbit and Permanent Array Center,
http://sopac.ucsd.edu/), around two thirds of the offsets have
understood reasons (either equipment changes or seismic
activity), with metadata available to determine their timings
and hence allow estimation of their magnitudes. However,
the last third of offsets are due to unknown reasons; these
are unknown to the analyst and have to be detected by some
post-processing or possibly pre-processing approach. Figure
1b shows the size distribution of offsets from Figure 1a,
taking all coordinate components together. From these data, all types of offsets have a distribution that is symmetric around zero. Assuming that SOPAC analysts have correctly identified the majority of offsets, the main difference lies in the variance, which is higher for "equipment change" and "seismic event" offsets than for "unknown" offsets. As such, "unknown" offsets tend to be smaller than those of other types, making them especially difficult to detect. Offset detection methods are therefore essential to remove the spurious effects of accumulated unknown events and to accurately estimate relevant site parameters such as
position and velocity.
[7] In this paper, GPS-derived time series are simulated
and are then examined by different offset detection methods.
Realistic characteristics of the time series such as velocity,
noise amplitude, timing, and magnitude of offsets are simulated with regard to state-of-the-art knowledge of GPS time
series (based on articles such as Williams [2003a], Langbein
[2008] or Santamaria-Gomez et al. [2011]). The time series
are then blind-tested by analysts through a range of recently
developed detection methods. The detected offset epochs
and the consequent site velocities computed after consider-
ing them are compared to the actual simulated offset epochs
and velocities. Finally, the methods are compared to each
other in order to highlight the relative merits of each.
2. Methodology Description
[8] The Detection of Offsets in GPS Experiment
(DOGEx) aims to consistently and objectively compare a
range of offset detection methods applied to GPS time series
analysis. To establish a known truth, we simulated three-
dimensional coordinate time series containing known and
realistic GPS signal, noise, offsets, and data gaps. We produced up to 18 years of simulated GPS daily coordinate time
series for the three components (North, East, and Up) and
for 50 idealized sites.
Figure 2. Example of simulated GPS daily coordinate time series (North, East, and Up components, in mm, from 1992 to 2010). The vertical dashed black lines show dates of simulated offsets. The red and blue markers represent, respectively, the offsets detected by two different methods discussed in the text (SDPWMANL and MAK2PIEE). These methods, respectively, turn out to be the best and worst methods among those under investigation in this article. Note that, due to its location at the very beginning of the time series, the first offset occurring in 1992 was not detected by either method. The effect of offset location on velocity estimates is discussed in Williams [2003b].
[9] For each created site, we combine an intercept (a), trend (f), cycle (c), noise (ε), gaps, and offsets (δ) in an additive model (1) for each component (North, East, and Up) in daily time series (y), given by

y(t) = a + f(t) + c(t) + δ(t) + ε(t),    (1)

defined for each t when there is no gap in the data. The velocities were chosen randomly from a reasonable distribution of Earth surface velocities, and the annual and semi-annual terms were chosen randomly from past estimates of annual and semi-annual signals in real GPS data.
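For readers who want to reproduce the structure of equation (1), the following minimal Python sketch builds one coordinate component from an intercept, a linear trend, annual and semi-annual cycles, a step function of offsets, and noise. All parameter values are illustrative assumptions, and only white noise is included here; the flicker noise and gap models described in the following paragraphs would be added separately.

```python
import numpy as np

def simulate_component(n_days, intercept_mm=0.0, velocity_mm_yr=3.0,
                       annual_mm=2.0, semiannual_mm=0.5,
                       offset_epochs=(2000, 4500), offset_mm=(6.0, -9.0),
                       white_sigma_mm=1.5, seed=0):
    """One coordinate component following y(t) = a + f(t) + c(t) + delta(t) + eps(t).

    Illustrative parameters only; white noise stands in for the full
    white + flicker noise model used in DOGEx.
    """
    rng = np.random.default_rng(seed)
    t_yr = np.arange(n_days) / 365.25

    trend = intercept_mm + velocity_mm_yr * t_yr                  # a + f(t)
    cycle = (annual_mm * np.sin(2 * np.pi * t_yr)
             + semiannual_mm * np.sin(4 * np.pi * t_yr))          # c(t)

    delta = np.zeros(n_days)                                      # delta(t): stepwise offsets
    for epoch, size in zip(offset_epochs, offset_mm):
        delta[epoch:] += size

    eps = rng.normal(0.0, white_sigma_mm, n_days)                 # eps(t)
    return trend + cycle + delta + eps

north = simulate_component(18 * 365)   # ~18 years of daily positions, in mm
```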
[10] Based on Hosking [1981], simulated noise characteristics (ε in equation (1)) are based on those present in state-of-the-art GPS reprocessing solutions, using a white plus flicker noise model [see Williams, 2003a]. The noise is not necessarily time-constant at each site [cf. Langbein, 2008]. This time-dependent property of the noise allows the simulation of the decreasing GPS data uncertainty over the decades thanks to instrument and data analysis improvements [see Santamaria-Gomez et al., 2011]. Note that the definition of ε does not allow for the simulation of outliers, and hence they are not taken into account in this initial DOGEx.
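The noise generator itself is not reproduced in the paper, but power-law noise such as flicker noise is commonly simulated by fractional differencing following Hosking [1981]. The sketch below is a generic implementation of that idea (spectral index −1, i.e., d = 0.5, gives flicker noise); it is an illustration under those assumptions, not the DOGEx code, and the noise amplitudes are placeholders.

```python
import numpy as np

def powerlaw_noise(n, spectral_index=-1.0, sigma=1.0, seed=1):
    """Coloured noise via fractional differencing (after Hosking [1981]).

    spectral_index = -1 (d = 0.5) gives flicker noise; 0 gives white noise.
    """
    rng = np.random.default_rng(seed)
    d = -spectral_index / 2.0
    # MA coefficients of (1 - B)^(-d): psi_0 = 1, psi_i = psi_{i-1} * (i - 1 + d) / i
    psi = np.ones(n)
    for i in range(1, n):
        psi[i] = psi[i - 1] * (i - 1 + d) / i
    w = rng.normal(0.0, sigma, n)
    return np.convolve(psi, w)[:n]      # truncated convolution of white noise with psi

# white + flicker noise model of paragraph [10] (amplitudes are placeholders)
n = 18 * 365
noise = np.random.default_rng(2).normal(0.0, 1.0, n) + 0.8 * powerlaw_noise(n)
```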
[11] Offset time series (δ) are generated randomly over time, and the dates of the offsets follow a binomial distribution. The offsets occur in all three components, and their magnitudes vary according to component. Offsets are modeled as a stepwise signal with magnitudes changing at every offset, or remaining constant when no offset occurs. Amplitudes of the offsets are modeled as a symmetric Pareto distribution (see DeGroot [1970] for details on the Pareto distribution), that is, a Pareto distribution multiplied by ±1 with probability equal to 1/2. While the difference from a Gaussian distribution is generally quite small, the adopted distribution allows for a better representation of the smallest magnitude offsets, which would not be well represented by a Gaussian distribution. Using a Pareto distribution results in the frequency of very large offsets being reduced (offsets as large as 42 mm or 52 mm are shown in Figure 1b), but we do not consider this a weakness, as such offsets are easier to detect than very small ones.
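A minimal sketch of this offset generator (dates from daily Bernoulli trials, so that the number of offsets is binomial, and amplitudes from a sign-symmetric Pareto draw) is given below. The occurrence probability, shape, and scale values are placeholders, since the actual DOGEx parameters are not stated here.

```python
import numpy as np

def simulate_offsets(n_days, p_offset=1.0 / 1500.0,
                     pareto_shape=2.0, pareto_scale=3.0, seed=3):
    """Stepwise offset signal delta(t) with symmetric-Pareto amplitudes (illustrative)."""
    rng = np.random.default_rng(seed)
    epochs = np.flatnonzero(rng.random(n_days) < p_offset)      # offset dates
    # Pareto amplitudes with minimum value `pareto_scale`, given a random sign
    sizes = pareto_scale * (1.0 + rng.pareto(pareto_shape, epochs.size))
    sizes *= rng.choice([-1.0, 1.0], epochs.size)
    delta = np.zeros(n_days)
    for e, s in zip(epochs, sizes):
        delta[e:] += s
    return epochs, sizes, delta
```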
[12] Finally, gaps are created at random times with random lengths following a Zeta distribution with parameter s = 2.0 (see Lin and Hu [2001] for details on the Zeta distribution). The periods between gaps follow a Zipf-Mandelbrot distribution [Mouillot and Lepretre, 2000]; during these periods, the data are continuous. Gaps that are only one epoch long are given an increased probability, to model short events such as an instrument change.
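As an aside, a Zeta-distributed gap-length sampler is available in SciPy (where the Zeta distribution is exposed as `scipy.stats.zipf`); the short sketch below illustrates why one-day gaps dominate for s = 2.0. The Zipf-Mandelbrot spacing between gaps is not reproduced here.

```python
from scipy.stats import zipf

def sample_gap_lengths(n_gaps, s=2.0, seed=4):
    """Gap lengths (days) from a Zeta distribution; P(length = 1) is about 0.61 for s = 2."""
    return zipf.rvs(s, size=n_gaps, random_state=seed)

print(sample_gap_lengths(10))   # mostly 1s, with occasional long gaps
```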
[13] An example of a simulated time series for one site is
represented in Figure 2. The three components (North, East,
and Up) are displayed and simulated offsets are highlighted
by dashed vertical black lines.
[14] Once the time series were simulated, the experiment
was announced through an open call to the GPS community
for analysts to submit solutions. The request to analysts was
to estimate as accurately as possible the time and magnitude
of offset occurrences and determine the three components
of the velocities of each site from the simulated time series
described above. Fifty simulated GPS site time series were
tested through a range of commonly used detection meth-
ods often modified to suit GPS time series in some way.
Both manual and automatic solutions were requested and no
information about the sites, other than the three-component
time series, was provided to the analysts. The exact velocity, type and temporal variation of noise, number of offsets, and offset timings were known to the experiment designers but were not revealed to solution providers.
3. Tested Solutions
[15] A total of 25 solutions were submitted, some of
which represent variants of the same solution strategy. Some
solutions were provided after preliminary results were released. These results revealed how some solutions performed in terms of true or false offset detection but did not reveal offset epochs or the true velocities. For solutions provided later in the experiment, therefore, the experiment was no longer entirely blind, as solution providers could learn from these results how the methods were performing. For example, AIUBCOD2-3
and ULGLFD02-3 are adapted versions of initial versions
AIUBCOD1 and ULGLFD01, respectively.
[16] Solutions can be split into two groups: a group
of manual solutions and a group of automated or semi-
automated methods. All solutions provided epochs, esti-
mates of offset magnitudes, and three-component velocities
for each site.
[17] In Figure 2, the outputs of two methods are shown as an illustration of performance differences. We notice, for example, that the "SDPWMANL" (red dashes) manual solution gives better results than the "MAK2PIEE" (blue dashes) automated method in terms of offset timing detection.
[18] In the following, we describe the detection methods
used in the article and give details of solution methodologies
and any assumptions made. Note that the CPU-time required
by each automated solution is not discussed in the article.
Indeed, it has not been an issue because all solutions require
less than a few tens of seconds to run for each time series.
It is also worth specifying that all approaches work in the
case of data gaps, although we do not compare offset detec-
tion correctness between time series where gaps do and do
not occur.
3.1. Manual Solutions
[19] The first group of solutions consists of individual GPS experts providing solutions they obtained manually on a site-by-site basis. No automated or semi-automated methods were used; instead, experts were asked to graphically
detect offsets. GPS-specialized software such as Tsview (see
Herring [2003] for more details) was used, which allowed
the removal of annual and semi-annual signals and, in the
case of Tsview, data averaging. Velocity rates were esti-
mated using common approaches such as Maximum Like-
lihood [Le Cam, 1990]. In the article, these methods are
referenced as BOSM_MLE, EJP_MANL, NOCLMANL,
SDPWMANL, and ULGLM001.
[20] Inspired by Bos et al. [2008], BOSM_MLE uses the
Maximum Likelihood Estimation method, with a standard
power law plus a white noise model for estimating the veloc-
ities. Gaps in the time series were filled using simple linear
interpolation and an annual signal was considered. Offsets
were detected by visual inspection of the difference between
the raw time series and the estimated signal. Offsets were
added in an iterative process until the residual plots looked
free of offsets.
[21] ULGLM001 uses a "subjective and liberal approach" where anything that could graphically be identified as an offset was taken to be an offset. Offsets identified in one coordinate component were also included in the other components. Finally, the standard deviation (σ) was calculated, and offsets were disregarded when their magnitudes were smaller than three times their uncertainty (3σ).
[22] The other handpicked solutions (EJP_MANL, NOCLMANL, and SDPWMANL) use a simple linear regression (with or without annual or semi-annual components) under an independent Gaussian noise assumption. These three solutions used the Tsview software, allowing interactive picking of offsets after the removal of a combination of a linear trend and annual and semi-annual signals.
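The regression underlying these hand-picked solutions amounts to ordinary least squares on a design matrix containing an intercept, a velocity term, and annual and semi-annual terms, after which the analyst inspects the residuals for remaining steps. A minimal sketch (assuming independent Gaussian noise, as these solutions do) follows; it is not the Tsview implementation.

```python
import numpy as np

def fit_trend_and_seasonal(t_yr, y):
    """Least squares fit of intercept, velocity, and annual/semi-annual terms.

    Returns the estimated velocity (y-units per year) and the residuals an
    analyst would inspect visually for remaining offsets.
    """
    X = np.column_stack([
        np.ones_like(t_yr), t_yr,
        np.sin(2 * np.pi * t_yr), np.cos(2 * np.pi * t_yr),
        np.sin(4 * np.pi * t_yr), np.cos(4 * np.pi * t_yr),
    ])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs[1], y - X @ coeffs
```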
3.2. Automated Solutions
3.2.1. Picard and Lavielle Solutions
[23] A subset of solutions called MAK1PIXX or
MAK2PIXX uses a likelihood maximization approach with
different penalty functions, either under constant variance
assumption over time (homoscedasticity, denoted by an “O”
as the second last letter) or under varying variance assump-
tion over time (heteroscedasticity, denoted by an “E” as
the second last letter). Penalized likelihood refers to the
following equation,

L_K = L − β · pen(K),    (2)

where L represents the initial likelihood of the model, β is a weighting constant, and the penalty function pen is a function of the number of offsets K. The penalty function allows the optimal esti-
mation of the number of offsets. The penalty increases
with the increasing number of offsets, and thereby prevents
over segmentation of the time series during the likelihood
maximization process. Based on Picard et al. [2005], the
pen function of equation (2) is either based on Lavielle
[2005] (in this case the last letter of the solution name
is “A”) or on Lebarbier [2005] (“E” as last letter name).
Both penalizing functions are based on Birgé and Massart
[2001], but Lavielle [2005] uses the additional assumption
that the number of change points is small compared to the length of the series. This assumption allows the use of the asymptotic version of the Picard et al. [2005] penalty function. The
heteroscedasticity assumption is relevant if one wants to
model the decreasing uncertainty on actual GPS data over
decades [cf. Santamaria-Gomez et al., 2011]. Dynamic programming is used to optimize the speed of the algorithm. For instance, the MAK1PIAO method uses the Lavielle penalty function under the homoscedastic assumption.
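To make equation (2) concrete, the toy sketch below selects the number of piecewise-constant segments of a (detrended) series by dynamic programming and a simple penalized Gaussian criterion. The penalty used here is a generic stand-in, not the Lavielle [2005] or Lebarbier [2005] penalties of the MAK1PIXX/MAK2PIXX solutions, and the code favours clarity over speed.

```python
import numpy as np

def segmentation_costs(y, k_max):
    """Minimal residual sum of squares of y split into k = 1..k_max segments
    (piecewise-constant mean), computed by dynamic programming."""
    n = len(y)
    c1 = np.concatenate([[0.0], np.cumsum(y)])
    c2 = np.concatenate([[0.0], np.cumsum(y ** 2)])

    def sse(i, j):                       # within-segment SSE for y[i..j] (inclusive)
        s, s2, m = c1[j + 1] - c1[i], c2[j + 1] - c2[i], j - i + 1
        return s2 - s * s / m

    best = np.full((k_max + 1, n), np.inf)
    best[1] = [sse(0, j) for j in range(n)]
    for k in range(2, k_max + 1):
        for j in range(k - 1, n):
            best[k, j] = min(best[k - 1, i - 1] + sse(i, j) for i in range(k - 1, j + 1))
    return best[1:, n - 1]               # optimal cost for 1..k_max segments

def choose_n_offsets(y, k_max=6, beta=None):
    """Pick the number of offsets with a penalised criterion of the form of equation (2)."""
    n = len(y)
    beta = 2.0 * np.log(n) if beta is None else beta   # generic penalty weight (assumption)
    costs = segmentation_costs(y, k_max)
    crit = n * np.log(costs / n) + beta * np.arange(1, k_max + 1)
    return int(np.argmin(crit))          # segments - 1 = number of offsets
```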
3.2.2. GA Solution
[24] The GA (named after the Geoscience Australia agency)
offset detection algorithm is based on a moving filter. For each point in the time series, t, two sets of n data points are selected prior to and after this epoch. A least squares linear trend is fitted to each of the two data sets. The value at t is predicted from the linear trend of the first data set, and the difference between the predicted value and the actual value is designated as d1. The same procedure is repeated at t using the second data set to yield a value d2. The value d is adopted as the larger of d1 and d2 and is considered an outlier if (1) it is greater than 3 times the larger of the standard deviations of the two data sets, or (2) it is detected as an outlier based on Student's t-distribution, carried out at the 95% level of statis-
tical significance. The values of d, which are above either of
these thresholds, are tracked and the time when it achieves
a maximum is identified as a candidate offset point. When
the point, t, is close to the beginning or end of the data set,
n is reduced to however many data points are available. As
the sample size gets reduced at the start and end of the full
data set, the outlier test becomes less reliable as it becomes
sensitive to noisy data. Thus, the point t in the moving filter begins a few points after the initial epoch and ends a few points before the final epoch.
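A rough sketch of this moving-filter test is given below. It follows the description above (fit lines to the windows before and after t, extrapolate to t, compare the larger misfit to a 3-sigma threshold or a Student-t quantile), but the exact thresholds and the reduction of flagged epochs to candidate offsets are simplified assumptions rather than the GA implementation.

```python
import numpy as np
from scipy import stats

def moving_filter_scores(y, n=100, level=0.95):
    """Offset score d(t) from windows of up to n points before and after each epoch."""
    flagged, score = [], np.zeros(len(y))
    for t in range(len(y)):
        before = np.arange(max(0, t - n), t)
        after = np.arange(t + 1, min(len(y), t + n + 1))
        if len(before) < 3 or len(after) < 3:
            continue                      # too close to the edges for a stable fit
        d = 0.0
        for idx in (before, after):
            slope, intercept, *_ = stats.linregress(idx, y[idx])
            d = max(d, abs(y[t] - (slope * t + intercept)))
        sigma = max(y[before].std(ddof=1), y[after].std(ddof=1))
        tq = stats.t.ppf(level, df=min(len(before), len(after)) - 2)
        if d > 3.0 * sigma or d > tq * sigma:   # rough stand-ins for the two outlier tests
            flagged.append(t)
            score[t] = d
    # candidate offsets would then be taken where the tracked score is locally maximal
    return flagged, score
```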
3.2.3. MAK2CS3D Solution
[25] This method uses a cumulative sum to find the offsets and remove them. The approach removes offsets of decreasing size until a size threshold is reached, with each iteration composed of a series of steps:
[26] 1. Compute a 3-D displacement vector from the three
components of the data at each site.
[27] 2. Compute the cumulative sum and detrend.
[28] 3. Identify peaks in the first differences of the series ≥ 10 mm.
[29] 4. Remove the largest offset (by estimating its magnitude in the original time series) and then iterate until all offsets are found.
[30] The simple 10 mm threshold is chosen arbitrarily and is designed to capture large offsets only; this value would need to be changed for time series with very different noise characteristics. This method does not attempt to find small offsets (< 10 mm). A minimal sketch of one such iteration is given below.
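The sketch below implements one iteration of a generic cumulative-sum change-point search on the 3-D displacement series; it follows the spirit of the listed steps but is not the submitted MAK2CS3D code, and the way the detrended cumulative sum is turned into a candidate epoch is an assumption.

```python
import numpy as np

def cusum_largest_offset(north, east, up, threshold_mm=10.0):
    """Return (epoch, size_mm) of the largest cumulative-sum change point, or None."""
    disp = np.sqrt(north ** 2 + east ** 2 + up ** 2)        # step 1: 3-D displacement
    t = np.arange(len(disp))
    resid = disp - np.polyval(np.polyfit(t, disp, 1), t)    # step 2: detrend
    csum = np.cumsum(resid - resid.mean())                  # cumulative sum of residuals
    k = min(int(np.argmax(np.abs(csum))), len(disp) - 2)    # step 3: candidate change epoch
    size = resid[k + 1:].mean() - resid[:k + 1].mean()      # offset size from the mean shift
    if abs(size) < threshold_mm:                            # ignore small offsets (< 10 mm)
        return None
    return k + 1, size                                      # step 4: remove it and iterate
```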
3.2.4. MRPCV1 Solution
[31] In this solution, GPS time series are modeled as a stochastic process plus a step function that represents the time series offsets. The offset detection is based on a hypothesis test that assumes as null hypothesis H0 that the time series does not contain any offset. This hypothesis is tested against a certain number of alternative hypotheses HA, each with a jump at a given epoch. An alternative hypothesis can be formulated for each observation epoch or for candidate epochs only. The adequacy of the model can be verified using the ratio test, which is known to follow the χ2 distribution. After detecting the offsets, they can be estimated and removed.
[32] The presented approach focuses on four steps and
is intended to detect, estimate, and remove the level shifts,
performing iteratively the so-called detection, identification,
and adaptation procedure (DIA), presented in Teunissen and
Kleusberg [1998], as applied in Perfetti [2006].
[33] 1. The horizontal coordinates E and N are trans-
formed to radial and tangential coordinates through prin-
cipal component analysis in order to highlight offsets in
the horizontal components of time series. Height coordi-
nates are not transformed in any way. Further information
in the context of deformation monitoring may be found in
Teunissen [2006], Teunissen and Kleusberg [1998] and
Perfetti [2006].
[34] 2. Detect and remove the offsets through least squares estimation, testing the null hypothesis of no discontinuities against a number of alternative hypotheses in the presence of discontinuities.
[35] 3. Detect velocity changes within the time series.
[36] 4. Fit one or more linear models to remove the trend.
[37] However, instead of assuming an a priori functional
model, the station motion is represented as a discrete-time
Markov process. The state vector can be designed in 3-D
and it is estimated by least squares, constraining the system dynamics by setting the system noise to a low value
(the process is detailed in Roggero [2006]). Because offsets
do not necessarily affect horizontal and vertical components
similarly, the vertical component is studied separately using
the same approach. This approach also makes it possible to
consider documented and undocumented offsets, to predict
the station coordinates in data gaps, and to correctly rep-
resent pre-seismic and post-seismic deformations or other
nonlinear behaviors.
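The principal-component rotation of step 1 amounts to projecting the detrended East/North coordinates onto the eigenvectors of their covariance matrix, so that a common horizontal offset is concentrated in a single series. A minimal sketch follows; the "radial" and "tangential" labels echo the text above, and the rest of the DIA machinery is not reproduced.

```python
import numpy as np

def rotate_horizontal_to_principal_axes(east, north):
    """Rotate East/North onto their principal axes (step 1 of MRPCV1, sketch only)."""
    xy = np.column_stack([east - east.mean(), north - north.mean()])
    eigvals, eigvecs = np.linalg.eigh(np.cov(xy, rowvar=False))   # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]                             # largest variance first
    rotated = xy @ eigvecs[:, order]
    return rotated[:, 0], rotated[:, 1]    # "radial" (max-variance) and "tangential" axes
```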
3.2.5. Kehagias and Fortin Solution
[38] The MAK1KF99 and MAK2KF99 solutions use, respectively, version-1 and version-2 of a shifting-means Hidden Markov Model approach together with the Expectation-Maximization (EM) algorithm [e.g., Kehagias and Fortin, 2006]. Both models describe a Gaussian process, the mean
of which shifts at random epochs and with random ampli-
tudes. Version-1 describes a zero-mean Gaussian process
impacted by a random walk series, whereas version-2
describes a succession of piecewise Gaussian processes with
shifting means. Hence, the two versions behave very simi-
larly. The main difference is that, in version-2, values of the
series after offsets do not depend on the pre-offset value of
the series. The use of an EM algorithm makes the method
fast and particularly easy to use.
[39] This approach is based on the assumption that the
time series is driven by independent Gaussian noise which
is randomly disrupted every epoch with a certain probabil-
ity by another Gaussian distribution. However, this method
assumes that the signal is piecewise stationary which does
not allow a linear trend in the time series to be taken into
account. To overcome this, the first step was applied to
remove the nonzero velocity from each coordinate compo-
nent. Each component was treated independently.
3.2.6. FODITS Solutions
[40] AIUBCOD1, AIUBCOD2, and AIUBCOD3 refer to
solutions based on versions 1, 2, and 3 of FODITS (Find
Outliers and Discontinuities in Time Series), which is a
detection tool integrated within the Bernese Global Naviga-
tion Satellite Systems software [see Ostini et al., 2008].
[41] The FODITS algorithm iteratively adapts a func-
tional model to the time series of station coordinates where
all three coordinate components (North, East, and Up) are
treated at the same time. The principle is to reduce, step-
by-step, the discrepancy between the functional model and the time series, using a statistical test to identify the next element to be added to the functional model. This pro-
cess is based on the DIA procedure. The identification step
was reformulated as an absolute value of the sum of the
residuals. Parameters are added to the functional model, starting with the previously estimated components (velocity and cycle), to compensate the largest discrepancy between the current status of the model and the time series. New parameters are added (and insignificant parameters are removed) until a certain level of agreement between the model and the time series is achieved. FODITS assumes a functional model containing offsets, velocity changes, outliers, and annual and semi-annual periodic functions.
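One identification/adaptation cycle of this kind of algorithm can be sketched as follows: fit the current functional model (trend, cycle, and the offsets already accepted) by least squares, then propose the epoch whose added step function most reduces the residual misfit. This is only an illustrative reconstruction of the idea, not the FODITS implementation; the candidate grid and stopping rule are assumptions.

```python
import numpy as np

def propose_next_offset(t_yr, y, offset_epochs, step=25):
    """Return (epoch, misfit_reduction) for the most beneficial new offset (sketch only)."""
    def misfit(epochs):
        cols = [np.ones_like(t_yr), t_yr,
                np.sin(2 * np.pi * t_yr), np.cos(2 * np.pi * t_yr)]
        cols += [(np.arange(len(t_yr)) >= e).astype(float) for e in epochs]  # Heaviside terms
        X = np.column_stack(cols)
        coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
        return np.sum((y - X @ coeffs) ** 2)

    base = misfit(offset_epochs)
    candidates = range(10, len(y) - 10, step)        # coarse grid for speed (assumption)
    best = min(candidates, key=lambda e: misfit(list(offset_epochs) + [e]))
    return best, base - misfit(list(offset_epochs) + [best])
```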
[42] The AIUBCOD1 solution was obtained by the algorithm presented in Ostini et al. [2008], where the parameters were removed step-by-step from the normal equation representing the model, and where the most probable offset was sought in the whole interval of the post-fit residual
time series. In order to achieve more reliable results, for
AIUBCOD2, the algorithm was modified to add elements
References
Altamimi, Z., X. Collilieux, and L. Métivier (2011), ITRF2008: An improved solution of the International Terrestrial Reference Frame, J. Geod., 85, 457–473.
Birgé, L., and P. Massart (2001), Gaussian model selection, J. Eur. Math. Soc., 3, 203–268.
DeGroot, M. H. (1970), Optimal Statistical Decisions, McGraw-Hill, New York.
Olshen, A. B., E. S. Venkatraman, R. Lucito, and M. Wigler (2004), Circular binary segmentation for the analysis of array-based DNA copy number data, Biostatistics, 5, 557–572.
Pham, D. L., C. Xu, and J. L. Prince (2000), Current methods in medical image segmentation, Annu. Rev. Biomed. Eng., 2, 315–337.
The effect of adding FPs to solutions (red lines) is quantified through the ratio of total offsets (TP + FP + FN) to time series length, as shown near the left axis.