
Showing papers in "Journal of Geodesy in 2014"


Journal ArticleDOI
TL;DR: It is shown that with the combined system, when more satellites are available, cut-off elevations much larger than the customary ones can be used, which significantly increases the GNSS applicability in constrained environments such as urban canyons or areas with low-elevation multipath.
Abstract: As the Chinese BeiDou Navigation Satellite System (BDS) has become operational in the Asia-Pacific region, it is of importance to better understand as well as demonstrate the capabilities that a combination of BeiDou with GPS brings to positioning. In this contribution, a formal and empirical analysis is given of the single-epoch RTK positioning capabilities of such a combined system. This is done for the single- and dual-frequency case, and in comparison with the BDS-only and GPS-only performances. It is shown that with the combined system, when more satellites are available, cut-off elevations much larger than the customary ones can be used. This is important, as such a measurement set-up significantly increases the GNSS applicability in constrained environments, such as urban canyons or areas with low-elevation multipath.

188 citations


Journal ArticleDOI
TL;DR: A global GPS reanalysis program is described in which two general classes of trajectory model are used, tuned on a station-by-station basis; the network trajectory model is defined as the set of station trajectory models encompassing every station in the network.
Abstract: We sketch the evolution of station trajectory models used in crustal motion geodesy over the last several decades, and describe some recent generalizations of these models that allow geodesists and geophysicists to parameterize accelerating patterns of displacement in general, and postseismic transient deformation in particular. Modern trajectory models are composed of three sub-models that represent secular trends, annual oscillations, and instantaneous jumps in coordinate time series. Traditionally the trend model invoked constant station velocity. This can be generalized by assuming that position is a polynomial function of time. The trajectory model can also be augmented as needed, by including one or more logarithmic transients in order to account for typical multi-year patterns of postseismic transient motion. Many geodetic and geophysical research groups are using general classes of trajectory model to characterize their crustal displacement time series, but few if any of them are using these trajectory models to define and realize the terrestrial reference frames (RFs) in which their time series are expressed. We describe a global GPS reanalysis program in which we use two general classes of trajectory model, tuned on a station-by-station basis. We define the network trajectory model as the set of station trajectory models encompassing every station in the network. We use the network trajectory model from each global analysis to assign prior position estimates for the next round of GPS data processing. We allow our daily orbital solutions to relax so as to maintain their consistency with the network polyhedron. After several iterations we produce GPS time series expressed in a RF similar to, but not identical with, ITRF2008. We find that each iteration produces an improvement in the daily repeatability of our global time series and in the predictive power of our trajectory models.
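For concreteness, a generic trajectory model of the kind sketched above (polynomial trend, seasonal oscillations, jumps, and logarithmic postseismic transients) can be written as follows; the notation is illustrative, not necessarily the authors' own:

```latex
x(t) = \sum_{i=0}^{n} p_i\,(t-t_R)^i
     + \sum_{j} \left[ a_j \sin(\omega_j t) + b_j \cos(\omega_j t) \right]
     + \sum_{k} d_k\, H(t-t_k)
     + \sum_{l} c_l \ln\!\left( 1 + \frac{t-t_l}{T_l} \right) H(t-t_l)
```

where t_R is a reference epoch, the ω_j correspond to the annual (and, if desired, semi-annual) periods, H is the Heaviside step function, the t_k are epochs of instantaneous jumps, and the T_l are relaxation times of the logarithmic postseismic transients.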

170 citations


Journal ArticleDOI
TL;DR: In this article, a robust Kalman filter scheme is proposed to resist the influence of the outliers in the observations, where a judging index is defined as the square of the Mahalanobis distance from the observation to its prediction.
Abstract: A robust Kalman filter scheme is proposed to resist the influence of outliers in the observations. Two kinds of observation error are studied, i.e., outliers in the actual observations and a heavy-tailed distribution of the observation noise. Either of the two kinds of errors can seriously degrade the performance of the standard Kalman filter. In the proposed method, a judging index is defined as the square of the Mahalanobis distance from the observation to its prediction. Assuming that the observation is Gaussian distributed with mean and covariance given by the observation prediction and its associated covariance, the judging index should be Chi-square distributed with the dimension of the observation vector as the degrees of freedom. A hypothesis test is performed on the actual observation, treating the above Gaussian distribution as the null hypothesis and the judging index as the test statistic. If the null hypothesis is rejected, it is concluded that outliers exist in the observations. In the presence of outliers, scaling factors can be introduced to rescale the covariance of the observation noise or of the innovation vector, both resulting in a decreased filter gain. The scaling factors can be solved using Newton's iterative method or in an analytical manner. The harmful influence of either of the two kinds of errors can be effectively resisted by the proposed method, so robustness is achieved. Moreover, as the number of iterations needed in the iterative method may be rather large, the analytically calculated scaling factor should be preferred.
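As an illustration of the mechanism described above (a sketch, not the authors' code), the robust update can be organized as follows in Python: the judging index is the squared Mahalanobis distance of the innovation, it is tested against a Chi-square quantile, and when the test fails an analytically computed scaling factor inflates the innovation covariance just enough to pull the index back to the threshold, which decreases the gain. The significance level and all names are illustrative assumptions.

```python
import numpy as np
from scipy.stats import chi2

def robust_update(x_pred, P_pred, z, H, R, alpha=0.01):
    """One robust measurement update with a Chi-square gate on the innovation."""
    v = z - H @ x_pred                         # innovation (observation minus prediction)
    S = H @ P_pred @ H.T + R                   # innovation covariance
    gamma = float(v @ np.linalg.solve(S, v))   # judging index: squared Mahalanobis distance
    threshold = chi2.ppf(1.0 - alpha, df=z.size)
    if gamma > threshold:                      # null hypothesis rejected: outlier suspected
        S = (gamma / threshold) * S            # analytic scaling factor -> decreased gain
    K = P_pred @ H.T @ np.linalg.inv(S)        # (robustified) Kalman gain
    x = x_pred + K @ v
    P = (np.eye(x_pred.size) - K @ H) @ P_pred
    return x, P
```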

159 citations


Journal ArticleDOI
TL;DR: In this article, it was proved that only the off-diagonal entries of the second-order derivative of the potential do exhibit a non-eliminable singularity when the observation point is aligned with an edge of a face.
Abstract: On the basis of recent analytical results we derive new formulas for computing the gravity effects of polyhedral bodies which are expressed solely as functions of the coordinates of the vertices of the relevant faces. We thus prove that such formulas exhibit no singularity whenever the position of the observation point is not aligned with an edge of a face. In the opposite case, the contribution of the edge to the potential, to its first-order derivative, and to the diagonal entries of the second-order derivative is deemed to be zero on the basis of some claims which still require a rigorous mathematical proof. In contrast with a common statement in the literature, it is proved that only the off-diagonal entries of the second-order derivative of the potential exhibit a non-eliminable singularity when the observation point is aligned with an edge of a face. The analytical provisions on the range of validity of the derived formulas have been fully confirmed by the Matlab® program which has been coded and thoroughly tested by computing the gravity effects induced by real asteroids at arbitrarily placed observation points.

92 citations


Journal ArticleDOI
TL;DR: In this article, the authors presented new insights on the time-averaged surface velocities, convergence and extension rates along arc-normal transects in Kumaon, Garhwal and Kashmir-Himachal regions in the Indian Himalaya from 13 years of high-precision Global Positioning System (GPS) time series (1995-2008) derived from GPS data at 14 permanent and 42 campaign stations.
Abstract: We present new insights on the time-averaged surface velocities, convergence and extension rates along arc-normal transects in the Kumaon, Garhwal and Kashmir–Himachal regions in the Indian Himalaya from 13 years of high-precision Global Positioning System (GPS) time series (1995–2008) derived from GPS data at 14 permanent and 42 campaign stations between 29.5–35°N and 76–81°E. The GPS surface horizontal velocities vary significantly from the Higher to the Lesser Himalaya and are of the order of 30 to 48 mm/year NE in the ITRF 2005 reference frame, and 17 to 2 mm/year SW in an India-fixed reference frame, indicating that this region is accommodating less than 2 cm/year of the India–Eurasia plate motion (~4 cm/year). The total arc-normal shortening varies between ~10–14 mm/year along the different transects of the northwest Himalayan wedge, between the Indo-Tsangpo suture to the north and the Indo-Gangetic foreland to the south, indicating high strain accumulation in the Himalayan wedge. This convergence is being accommodated differentially along the arc-normal transects: ~5–10 mm/year in the Lesser Himalaya and 3–4 mm/year in the Higher Himalaya south of the South Tibetan Detachment. Most of the convergence in the Lesser Himalaya of Garhwal and Kumaon is being accommodated just south of the Main Central Thrust fault trace, indicating high strain accumulation there, which is consistent with the high seismic activity in this region. In addition, for the first time an arc-normal extension of ~6 mm/year has been observed in the Tethyan Himalaya of Kumaon. Inverse modeling of GPS-derived surface deformation rates in the Garhwal and Kumaon Himalaya using a single dislocation indicates that the Main Himalayan Thrust is locked from the surface to a depth of ~15–20 km over a width of 110 km with an associated slip rate of ~16–18 mm/year. These results indicate that the arc-normal rates in the Northwest Himalaya have a complex deformation pattern involving both convergence and extension, and rigorous seismo-tectonic models in the Himalaya are necessary to account for this pattern. The results also give an estimate of co-seismic and post-seismic motion associated with the 1999 Chamoli earthquake, which is modeled to derive the slip and geometry of the rupture plane.

89 citations


Journal ArticleDOI
TL;DR: In this paper, the satellite laser ranging (SLR) validation states an orbit accuracy of 2.42 cm for the kinematic and 1.84 cm for the reduced-dynamic orbits over the entire mission of the Gravity field and steady-state Ocean Circulation Explorer (GOCE).
Abstract: The Gravity field and steady-state Ocean Circulation Explorer (GOCE) was the first Earth explorer core mission of the European Space Agency. It was launched on March 17, 2009 into a Sun-synchronous dusk-dawn orbit and re-entered into the Earth's atmosphere on November 11, 2013. The satellite altitude was between 255 and 225 km for the measurement phases. The European GOCE Gravity consortium is responsible for the Level 1b to Level 2 data processing in the frame of the GOCE High-level processing facility (HPF). The Precise Science Orbit (PSO) is one Level 2 product, which was produced under the responsibility of the Astronomical Institute of the University of Bern within the HPF. This PSO product has been continuously delivered during the entire mission. Regular checks guaranteed a high consistency and quality of the orbits. A correlation between solar activity, GPS data availability and quality of the orbits was found. The accuracy of the kinematic orbit primarily suffers from this. Improvements in modeling the range corrections at the retro-reflector array for the SLR measurements were made and implemented in the independent SLR validation for the GOCE PSO products. The satellite laser ranging (SLR) validation finally states an orbit accuracy of 2.42 cm for the kinematic and 1.84 cm for the reduced-dynamic orbits over the entire mission. The common-mode accelerations from the GOCE gradiometer were not used for the official PSO product, but in addition to the operational HPF work a study was performed to investigate to what extent common-mode accelerations improve the reduced-dynamic orbit determination results. The accelerometer data may be used to derive realistic constraints for the empirical accelerations estimated for the reduced-dynamic orbit determination, which already improves the orbit quality. On top of that, the accelerometer data may further improve the orbit quality if realistic constraints and state-of-the-art background models such as gravity field and ocean tide models are used for the reduced-dynamic orbit determination.

87 citations


Journal ArticleDOI
TL;DR: The AR reliability can be efficiently improved, with a rigorously controllable probability of incorrectly fixed ambiguities, through efficient procedures for improved float solutions and ambiguity fixing.
Abstract: Many large-scale GNSS CORS networks have been deployed around the world to support various commercial and scientific applications. To make use of these networks for real-time kinematic positioning services, one of the major challenges is the ambiguity resolution (AR) over long inter-station baselines in the presence of considerable atmosphere biases. Usually, the widelane ambiguities are fixed first, followed by the procedure of determining the narrowlane ambiguity integers based on the ionosphere-free model in which the widelane integers are introduced as known quantities. This paper seeks to improve the AR performance over long baselines through efficient procedures for improved float solutions and ambiguity fixing. The contribution is threefold: (1) instead of using the ionosphere-free measurements, the absolute and/or relative ionospheric constraints are introduced in the ionosphere-constrained model to enhance the model strength, thus resulting in better float solutions; (2) the realistic widelane ambiguity precision is estimated by capturing the multipath effects due to the observation complexity, leading to improved reliability of widelane AR; (3) for the narrowlane AR, partial AR is applied for a subset of ambiguities selected according to successively increased elevation. For fixing the scalar ambiguity, an error-probability-controllable rounding method is proposed. The established ionosphere-constrained model can be efficiently solved based on the sequential Kalman filter. It can be either reduced to some special models simply by adjusting the variances of the ionospheric constraints, or extended with more parameters and constraints. The presented methodology is tested over seven baselines of around 100 km from the USA CORS network. The results show that the new widelane AR scheme can obtain a 99.4 % successful fixing rate with a 0.6 % failure rate, while the new rounding method for narrowlane AR can obtain a fixing rate of 89 % with a failure rate of 0.8 %. In summary, the AR reliability can be efficiently improved with a rigorously controllable probability of incorrectly fixed ambiguities.
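For context, the success probability of rounding a single float ambiguity is classically bounded as follows (a standard result, not necessarily the exact formulation used in the paper). For a float ambiguity with standard deviation σ (in cycles) and bias b,

```latex
P(\text{correct integer}) = \Phi\!\left(\frac{1-2b}{2\sigma}\right)
                          + \Phi\!\left(\frac{1+2b}{2\sigma}\right) - 1
```

where Φ is the standard normal distribution function. An error-probability-controllable rounding scheme can invert such a relation, fixing an ambiguity only when the implied failure probability stays below a preset tolerance.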

86 citations


Journal ArticleDOI
TL;DR: In this article, three GPS+GLONASS solutions of 8 years (2004-2011) were computed which differ only in the solar radiation pressure (SRP) and satellite attitude models, and they showed that part of the draconitic errors currently found in GNSS geodetic products are definitely induced by the CODE radiation pressure orbit modeling deficiencies.
Abstract: Systematic errors at harmonics of the GPS draconitic year have been found in diverse GPS-derived geodetic products like the geocenter Z-component, station coordinates, Y-pole rate and orbits (i.e. orbit overlaps). The GPS draconitic year is the repeat period of the GPS constellation w.r.t. the Sun, which is about 351 days. Different error sources have been proposed which could generate these spurious signals at the draconitic harmonics. In this study, we focus on one of these error sources, namely radiation pressure orbit modeling deficiencies. For this purpose, three GPS+GLONASS solutions of 8 years (2004–2011) were computed which differ only in the solar radiation pressure (SRP) and satellite attitude models. The models employed in the solutions are: (1) the CODE (5-parameter) radiation pressure model widely used within the International GNSS Service community, (2) the adjustable box-wing model for SRP impacting GPS (and GLONASS) satellites, and (3) the adjustable box-wing model upgraded to use non-nominal yaw attitude, especially for satellites in eclipse seasons. When comparing the first solution with the third one, we achieved the following improvements in the GNSS geodetic products. Orbits: the draconitic errors in the orbit overlaps are reduced for the GPS satellites in all the harmonics on average by 46, 38 and 57 % for the radial, along-track and cross-track components, while for GLONASS satellites they are mainly reduced in the cross-track component, by 39 %. Geocenter Z-component: all the odd draconitic harmonics found when the CODE model is used show a very important reduction (almost disappearing, with a 92 % average reduction) with the new radiation pressure models. Earth orientation parameters: the draconitic errors are reduced for the X-pole rate and especially for the Y-pole rate, by 24 and 50 % respectively. Station coordinates: all the draconitic harmonics (except the 2nd harmonic in the North component) are reduced in the North, East and Height components, with average reductions of 41, 39 and 35 % respectively. This shows that part of the draconitic errors currently found in GNSS geodetic products are definitely induced by the CODE radiation pressure orbit modeling deficiencies.

82 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed an approach to optimize voxel distribution in both vertical and horizontal domains by searching the maximum number of ray-crossing voxels in both latitude and longitude directions.
Abstract: Water vapor tomography has been developed as a powerful tool to model spatial and temporal distribution of atmospheric water vapor. Global navigation satellite systems (GNSS) water vapor tomography refers to the 3D structural construction of tropospheric water vapor using a large number of GNSS signals that penetrate the tomographic modeling area from different positions. The modeling area is usually discretized into a number of voxels. A major issue involved is that some voxels are not crossed by any GNSS signal rays, resulting in an undetermined solution to the tomographic system. To alleviate this problem, the number of voxels crossed by GNSS signal rays should be as large as possible. An important way to achieve this is to optimize the geographic distribution of tomographic voxels. We propose an approach to optimize voxel distribution in both vertical and horizontal domains. In the vertical domain, water vapor profiles derived from radiosonde data are exploited to identify the maximum height of tomography and the optimal vertical resolution. In the horizontal domain, the optimal horizontal distribution of voxels is obtained by searching the maximum number of ray-crossing voxels in both latitude and longitude directions. The water vapor tomography optimization procedures are implemented using GPS water vapor data from the Hong Kong Satellite Positioning Reference Station Network. The tomographic water vapor fields solved from the optimized tomographic voxels are evaluated using radiosonde data and a numerical weather prediction non-hydrostatic model (NHM) obtained for the Hong Kong station. The comparisons of tomographic integrated water vapor (IWV) with the radiosonde and NHM IWV show that RMS errors of their differences are 1.41 and 3.09 mm, respectively. Moreover, the tomographic water vapor density results are compared with those of radiosonde and NHM. The RMS error of the density differences between tomography and radiosonde data is 1.05 g/m³. For the comparison between tomography and NHM, an overall RMS error of 1.43 g/m³ is achieved.

81 citations


Journal ArticleDOI
TL;DR: In this article, the authors focus on improving the quality of gravity field models in the form of spherical harmonic representation via alternative configuration scenarios applied in future gravimetric satellite missions, performing full-scale simulations of various mission scenarios within the framework of the German joint research project "Concepts for future gravity field satellite missions".
Abstract: The goal of this contribution is to focus on improving the quality of gravity field models in the form of spherical harmonic representation via alternative configuration scenarios applied in future gravimetric satellite missions. We performed full-scale simulations of various mission scenarios within the framework of the German joint research project "Concepts for future gravity field satellite missions" as part of the Geotechnologies Program, funded by the German Federal Ministry of Education and Research and the German Research Foundation. In contrast to most previous simulation studies, including our own previous work, we extended the simulated time span from one to three consecutive months to improve the robustness of the assessed performance. New is that we performed simulations for seven dedicated satellite configurations in addition to the GRACE scenario, serving as a reference baseline. These scenarios include a "GRACE Follow-on" mission (with some modifications to the currently implemented GRACE-FO mission), and an in-line "Bender" mission, in addition to five mission scenarios that include additional cross-track and radial information. Our results clearly confirm the benefit of radial and cross-track measurement information compared to the GRACE along-track observable: the gravity fields recovered from the related alternative mission scenarios are superior in terms of error level and error isotropy. In fact, one of our main findings is that although the noise levels achievable with the particular configurations do vary between the simulated months, their order of performance remains the same. Our findings also show that the advanced pendulums provide the best performance of the investigated single formations; however, an accuracy reduced by about 2–4 times in the important long-wavelength part of the spectrum (for spherical harmonic degrees <50), compared to the Bender mission, can be observed. Concerning state-of-the-art mission constraints, in particular the severe restriction of heterodyne lasers on maximum range-rates, only the moderate Pendulum and the Bender mission are beneficial options, of course in addition to GRACE and GRACE-FO. Furthermore, a Bender-type constellation would result in the most accurate gravity field solution, by a factor of about 12 at long wavelengths (up to degree/order 40) and by a factor of about 200 at short wavelengths (up to degree/order 120), compared to the present GRACE solution. Finally, we suggest the Pendulum and the Bender missions as candidate mission configurations depending on the available budget and technological progress.

63 citations


Journal ArticleDOI
TL;DR: In this paper, the authors developed a modified principal component analysis to extract the Common Mode Error (CME) from incomplete position time series, which is equivalent to the method of Dong et al. in case of no missing data in the time series and to the extended stacking approach under the assumption of a uniform spatial response.
Abstract: The existing spatiotemporal analysis methods suppose that the involved time series are complete and have the same data interval. However, missing data inevitably occur in the position time series of Global Navigation Satellite Systems networks for many reasons. In this paper, we develop a modified principal component analysis to extract the Common Mode Error (CME) from the incomplete position time series. The principle of the proposed method is that a time series can be reproduced from its principal components. The method is equivalent to the method of Dong et al. (J Geophys Res 111:3405–3421, 2006) in case of no missing data in the time series and to the extended 'stacking' approach under the assumption of a uniform spatial response. The new method is first applied to extract the CME from the position time series of the Crustal Movement Observation Network of China (CMONOC) over the period of 1999–2009, where missing data occur at all stations with different gaps. The results show that the CMEs are significant in CMONOC. The sizes of the first principal components for the North, East and Up coordinates are as large as 40, 41 and 37 % of the total principal components, and their spatial responses are not uniform. The minimum amplitudes of the first eigenvectors are only 41, 15 and 29 % for the North, East and Up coordinate components, respectively. The extracted CMEs of our method are close to those of the data filling method, and the root mean squared error (RMS) values computed from the differences of maximum CMEs between the two methods are only 0.31, 0.52 and 1.55 mm for the North, East and Up coordinates, respectively. The RMS of the position time series is greatly reduced after filtering out the CMEs. The accuracies of the reconstructed missing data using the two methods are also comparable. To further test the efficiency of our method, repeated experiments are carried out by randomly deleting different percentages of data at some stations. The results show that the CMEs can be extracted with high accuracy at epochs without missing data. At epochs with missing data, the accuracy of the extracted CMEs depends strongly on the number of stations with missing data.
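A minimal sketch of the underlying idea, reproducing the series from its principal components and iterating over the gaps, might look as follows; the data layout (epochs by stations, NaNs at missing epochs) and all names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def extract_cme(X, n_comp=1, n_iter=50):
    """X: (epochs x stations) residual series, NaN where data are missing.
    Iteratively fills the gaps from a rank-n_comp PCA reconstruction and
    returns the common mode error (CME) plus the gap-filled series."""
    mask = np.isnan(X)
    Xf = np.where(mask, np.nanmean(X, axis=0), X)   # initial fill: station means
    for _ in range(n_iter):
        mean = Xf.mean(axis=0)
        U, s, Vt = np.linalg.svd(Xf - mean, full_matrices=False)
        recon = (U[:, :n_comp] * s[:n_comp]) @ Vt[:n_comp] + mean
        Xf[mask] = recon[mask]                      # update only the missing entries
    mean = Xf.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xf - mean, full_matrices=False)
    cme = (U[:, :n_comp] * s[:n_comp]) @ Vt[:n_comp]  # leading-component part
    return cme, Xf
```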

Journal ArticleDOI
TL;DR: A collinearity diagnosis, based on the notion of variance inflation factor, is developed and allows handling several peculiarities of the GNSS geocenter determination problem.
Abstract: The problem of observing geocenter motion from global navigation satellite system (GNSS) solutions through the network shift approach is addressed from the perspective of collinearity (or multicollinearity) among the parameters of a least-squares regression. A collinearity diagnosis, based on the notion of variance inflation factor, is therefore developed and allows handling several peculiarities of the GNSS geocenter determination problem. Its application reveals that the determination of all three components of geocenter motion with GNSS suffers from serious collinearity issues, with a comparable level as in the problem of determining the terrestrial scale simultaneously with the GNSS satellite phase center offsets. The inability of current GNSS, as opposed to satellite laser ranging, to properly sense geocenter motion is mostly explained by the estimation, in the GNSS case, of epoch-wise station and satellite clock offsets simultaneously with tropospheric parameters. The empirical satellite accelerations, as estimated by most Analysis Centers of the International GNSS Service, slightly amplify the collinearity of the Z geocenter coordinate, but their role remains secondary.
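For reference, the variance inflation factor invoked here reads, in its standard regression form (the paper adapts the notion to the GNSS setting):

```latex
\mathrm{VIF}_j = \frac{1}{1 - R_j^2}
```

where R_j² is the coefficient of determination obtained when the j-th column of the design matrix is regressed on all other columns; it quantifies by how much collinearity inflates the variance of the j-th estimated parameter compared with an orthogonal design.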

Journal ArticleDOI
TL;DR: In this paper, the authors studied the relation between weighted mean temperature and surface temperature, surface water vapor pressure and surface pressure, and determined a relationship between the weighted mean temperature on the one hand and the surface temperature and surface water vapor pressure on the other.
Abstract: In ground-based GPS meteorology, weighted mean temperature is the key parameter to calculate the conversion factor which maps zenith wet delay to precipitable water vapor. In practical applications, we can hardly obtain the vertical profiles of meteorological parameters over the site, and thus cannot use the integration method to calculate weighted mean temperature. In order to calculate weighted mean temperature accurately from a few meteorological parameters, this paper studies the relation between weighted mean temperature and surface temperature, surface water vapor pressure and surface pressure, and determines a relationship between the weighted mean temperature on the one hand and the surface temperature and surface water vapor pressure on the other. Considering the seasonal and geographic variations in the relationship, we employ trigonometric functions with an annual cycle and a semi-annual cycle to fit the residuals (seasonal and geographic variations are reflected in the residuals). Through the above work, we finally established the GTm-I model and the PTm-I model with a 2° × 2.5° (lat × lon) resolution. Test results show that the two models both show a consistently high accuracy around the globe, about 1.0 K better than the widely used Bevis weighted mean temperature–surface temperature relationship in terms of root mean square error.
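For context, the weighted mean temperature is conventionally defined as a vertical integral over the water vapor profile, and the Bevis relationship mentioned above is a simple linear regression on surface temperature (coefficients from Bevis et al. 1992):

```latex
T_m = \frac{\int (e/T)\, dz}{\int (e/T^2)\, dz}, \qquad
T_m \approx 70.2 + 0.72\, T_s
```

where e is the water vapor pressure and T the temperature along the vertical profile, and T_s is the surface temperature; T_m then enters the conversion factor that maps zenith wet delay to precipitable water vapor.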

Journal ArticleDOI
TL;DR: The new VLBI scheduling software with its alternative scheduling strategy offers a promising option with respect to applications of the VGOS and introduces different configurations of source-based scheduling options.
Abstract: In connection with the work for the next generation VLBI2010 Global Observing System (VGOS) of the International VLBI Service for Geodesy and Astrometry, a new scheduling package (Vie_Sched) has been developed at the Vienna University of Technology as a part of the Vienna VLBI Software. In addition to the classical station-based approach it is equipped with a new scheduling strategy based on the radio sources to be observed. We introduce different configurations of source-based scheduling options and investigate the implications on present and future VLBI2010 geodetic schedules. By comparison to existing VLBI schedules of the continuous campaign CONT11, we find that the source-based approach with two sources has a performance similar to the station-based approach in terms of number of observations, sky coverage, and geodetic parameters. For an artificial 16 station VLBI2010 network, the source-based approach with four sources provides an improved distribution of source observations on the celestial sphere. Monte Carlo simulations yield slightly better repeatabilities of station coordinates with the source-based approach with two sources or four sources than the classical strategy. The new VLBI scheduling software with its alternative scheduling strategy offers a promising option with respect to applications of the VGOS.

Journal ArticleDOI
TL;DR: In this paper, a combined reprocessing of GPS and GLONASS observations was performed; the derived GNSS orbits are used to estimate combined GPS+GLONASS satellite clocks, with first results presented in the paper.
Abstract: The International GNSS Service (IGS) provides operational products for the GPS and GLONASS constellations. Homogeneously processed time series of parameters from the IGS are only available for GPS. Reprocessed GLONASS series are provided only by individual Analysis Centers (i.e., CODE and ESA), making it difficult to fully include the GLONASS system into a rigorous GNSS analysis. In view of the increasing number of active GLONASS satellites and a steadily growing number of GPS+GLONASS-tracking stations available over the past few years, Technische Universität Dresden, Technische Universität München, Universität Bern and Eidgenössische Technische Hochschule Zürich performed a combined reprocessing of GPS and GLONASS observations. SLR observations to GPS and GLONASS are also included in this reprocessing effort. Here, we show only SLR results from a GNSS orbit validation. In total, 18 years of data (1994–2011) have been processed from altogether 340 GNSS and 70 SLR stations. The use of GLONASS observations in addition to GPS has no impact on the estimated linear terrestrial reference frame parameters. However, daily station positions show an RMS reduction of 0.3 mm on average for the height component when additional GLONASS observations can be used for the time series determination. Analyzing satellite orbit overlaps, the rigorous combination of GPS and GLONASS neither improves nor degrades the GPS orbit precision. For GLONASS, however, the quality of the microwave-derived GLONASS orbits improves due to the combination. These findings are confirmed using independent SLR observations for a GNSS orbit validation. In comparison to previous studies, mean SLR biases for satellites GPS-35 and GPS-36 could be reduced in magnitude from −35 and −38 mm to −12 and −13 mm, respectively. Our results show that remaining SLR biases depend on the satellite type and the use of coated or uncoated retro-reflectors. For Earth rotation parameters, the increasing number of GLONASS satellites and tracking stations over the past few years leads to differences between GPS-only and GPS+GLONASS combined solutions which are most pronounced in the pole rate estimates, with a maximum of 0.2 mas/day in magnitude. At the same time, the difference between GLONASS-only and combined solutions decreases. Derived GNSS orbits are used to estimate combined GPS+GLONASS satellite clocks, with first results presented in this paper. Phase observation residuals from a precise point positioning are at the level of 2 mm and particularly reveal poorly modeled yaw maneuver periods.

Journal ArticleDOI
TL;DR: In this paper, the impact of low-orbiting geodetic satellites on SLR-derived parameters was investigated, and the authors found that the repeatability of the East and North components of station coordinates, the quality of pole coordinates, and the scale estimates of the reference frame are improved when combining LAGEOS with low-orbiting SLR satellites.
Abstract: The contribution of Starlette, Stella, and AJISAI is currently neglected when defining the International Terrestrial Reference Frame, despite a long time series of precise SLR observations and a huge amount of available data. The inferior accuracy of the orbits of low orbiting geodetic satellites is the main reason for this neglect. The Analysis Centers of the International Laser Ranging Service (ILRS ACs) do, however, consider including low orbiting geodetic satellites for deriving the standard ILRS products based on LAGEOS and Etalon satellites, instead of the sparsely observed, and thus, virtually negligible Etalons. We process ten years of SLR observations to Starlette, Stella, AJISAI, and LAGEOS and we assess the impact of these Low Earth Orbiting (LEO) SLR satellites on the SLR-derived parameters. We study different orbit parameterizations, in particular different arc lengths and the impact of pseudo-stochastic pulses and dynamical orbit parameters on the quality of the solutions. We find that the repeatability of the East and North components of station coordinates, the quality of pole coordinates, and the scale estimates of the reference frame are improved when combining LAGEOS with low orbiting SLR satellites. In the multi-SLR solutions, the scale and the Z component of geocenter coordinates are less affected by deficiencies in solar radiation pressure modeling than in the LAGEOS-1/2 solutions, due to substantially reduced correlations between the Z geocenter coordinate and empirical orbit parameters. Finally, we find that the standard values of center-of-mass corrections (CoM) for geodetic LEO satellites are not valid for the currently operating SLR systems. The variations of station-dependent differential range biases reach 52 and 25 mm for AJISAI and Starlette/Stella, respectively, which is why estimating station-dependent range biases or using station-dependent CoM, instead of one value for all SLR stations, is strongly recommended. This clearly indicates that the ILRS effort to produce CoM corrections for each satellite, which are site-specific and depend on the system characteristics at the time of tracking, is very important and needs to be implemented in the SLR data analysis.

Journal ArticleDOI
Xing Fang
TL;DR: In this article, a symmetrical positive-definite cofactor matrix for the random coefficient matrix and the random observation vector with linear inequality constraints is considered, and an active set method without combinatorial tests as well as a method based on sequential quadratic programming (SQP) are presented.
Abstract: Observation systems known as errors-in-variables (EIV) models with model parameters estimated by total least squares (TLS) have been discussed for more than a century, though the terms EIV and TLS were coined much more recently. So far, it has only been shown that the inequality-constrained TLS (ICTLS) solution can be obtained by the combinatorial methods, assuming that the weight matrices of observations involved in the data vector and the data matrix are identity matrices. Although the previous works test all combinations of active sets or solution schemes in a clear way, some aspects have received little or no attention such as admissible weights, solution characteristics and numerical efficiency. Therefore, the aim of this study was to adjust the EIV model, subject to linear inequality constraints. In particular, (1) This work deals with a symmetrical positive-definite cofactor matrix that could otherwise be quite arbitrary. It also considers cross-correlations between cofactor matrices for the random coefficient matrix and the random observation vector. (2) From a theoretical perspective, we present first-order Karush–Kuhn–Tucker (KKT) necessary conditions and the second-order sufficient conditions of the inequality-constrained weighted TLS (ICWTLS) solution by analytical formulation. (3) From a numerical perspective, an active set method without combinatorial tests as well as a method based on sequential quadratic programming (SQP) is established. By way of applications, computational costs of the proposed algorithms are shown to be significantly lower than the currently existing ICTLS methods. It is also shown that the proposed methods can treat the ICWTLS problem in the case of more general weight matrices. Finally, we study the ICWTLS solution in terms of non-convex weighted TLS contours from a geometrical perspective.

Journal ArticleDOI
TL;DR: In this paper, the Kalman filter for linear systems with colored measurement noises is revisited, and another measurement time difference-based approach is introduced, which is easy to be implemented and generalized to nonlinear system, and can provide filtering solutions directly.
Abstract: The Kalman filter for linear systems with colored measurement noises is revisited. Besides two well-known approaches, i.e., Bryson's and Petovello's, another approach based on measurement time differences is introduced. This approach is easy to implement and to generalize to nonlinear systems, and it provides filtering solutions directly. A unified view of these approaches is provided, and the equivalence between any two of the three is proved. In the case study it is validated that, compared to the approach that neglects the time correlations, the approaches that take them into account not only avoid an overly optimistic evaluation of the estimate, but also improve the transient accuracy of the estimate.
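A compact statement of the measurement time-differencing idea, written here in generic notation for a first-order Gauss–Markov measurement noise (a textbook construction in the spirit of Bryson's approach, not necessarily the paper's exact formulation):

```latex
z_k = H_k x_k + \eta_k, \qquad \eta_k = \psi_k\, \eta_{k-1} + \zeta_k, \qquad
x_k = \Phi_{k-1} x_{k-1} + w_{k-1}
```

```latex
z_k^{*} \equiv z_k - \psi_k z_{k-1}
       = \left( H_k \Phi_{k-1} - \psi_k H_{k-1} \right) x_{k-1}
       + H_k w_{k-1} + \zeta_k
```

The differenced measurement z*_k thus carries only white noise (ζ_k plus a term correlated with the process noise w_{k−1}), so a standard filter can be applied once this correlation is accounted for.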

Journal ArticleDOI
TL;DR: In this article, it is proved that the variance components of the EIV stochastic model are not estimable if the elements of the random coefficient matrix can be classified into two or more groups of data of the same accuracy.
Abstract: Although total least squares has been substantially investigated theoretically and widely applied in practical applications, almost nothing has been done to simultaneously address the estimation of parameters and the errors-in-variables (EIV) stochastic model. We prove that the variance components of the EIV stochastic model are not estimable, if the elements of the random coefficient matrix can be classified into two or more groups of data of the same accuracy. This result of inestimability is surprising as it indicates that we have no way of gaining any knowledge on such an EIV stochastic model. We demonstrate that the linear equations for the estimation of variance components could be ill-conditioned, if the variance components are theoretically estimable. Finally, if the variance components are estimable, we derive the biases of their estimates, which could be significantly amplified due to a large condition number.

Journal ArticleDOI
TL;DR: This model is derived using the short arc approach, which allows a very effective decorrelation of the highly correlated GOCE gradiometer and orbit data noise by introducing a full empirical covariance matrix for each arc, and makes it possible to downweight 'bad' arcs.
Abstract: In this contribution, we describe the global GOCE-only gravity field model ITG-Goce02 derived from 7.5 months of gradiometer and orbit data. This model represents an alternative to the official ESA products, as it is computed completely independently, using a different processing strategy and a separate software package. Our model is derived using the short arc approach, which allows a very effective decorrelation of the highly correlated GOCE gradiometer and orbit data noise by introducing a full empirical covariance matrix for each arc, and makes it possible to downweight 'bad' arcs. For the processing of the orbit data we rely on the integral equation approach instead of the energy integral method, which has been applied in several other GOCE models. An evaluation against high-resolution global gravity field models shows very similar differences of our model compared to the official GOCE results published by ESA (release 2), especially to the model derived by the time-wise approach. This conclusion is confirmed by comparison of the GOCE models to GPS/levelling and altimetry data.

Journal ArticleDOI
TL;DR: In this paper, the potential impact of using an improved set of 6-hourly atmospheric de-aliasing products on the computations of linear trends as well as the amplitude of annual and semi-annual mass changes from GRACE is assessed.
Abstract: There are two spurious jumps in the atmospheric part of the Gravity Recovery and Climate Experiment-Atmosphere and Ocean De-aliasing level 1B (GRACE-AOD1B) products, which occurred in January–February of the years 2006 and 2010, as a result of the vertical level and horizontal resolution changes in the ECMWFop (European Centre for Medium-Range Weather Forecasts operational analysis). These jumps cause a systematic error in the estimation of mass changes from GRACE time-variable level 2 products, since GRACE-AOD1B mass variations are removed during the computation of GRACE level 2. In this short note, the potential impact of using an improved set of 6-hourly atmospheric de-aliasing products on the computation of linear trends as well as the amplitudes of annual and semi-annual mass changes from GRACE is assessed. These improvements result from (1) employing a modified 3D integration approach (ITG3D), and (2) using long-term consistent atmospheric fields from the ECMWF reanalysis (ERA-Interim). The monthly averages of the new ITG3D-ERA-Interim de-aliasing products are then compared to the atmospheric part of GRACE-AOD1B, covering January 2003 to December 2010. These comparisons include the 33 largest river basins of the world along with the Greenland and Antarctic ice sheets. The results indicate a considerable difference in total atmospheric mass derived from the two products over some of the mentioned regions. We suggest that future GRACE studies account for these differences by updating uncertainty budgets or by applying corrections to estimated trends and amplitudes/phases.

Journal ArticleDOI
TL;DR: In this article, a new approach to estimate precise long-term vertical land motion (VLM) based on double-differences of long tide gauge (TG) and short altimetry data is presented.
Abstract: We present a new approach to estimate precise long-term vertical land motion (VLM) based on double-differences of long tide gauge (TG) and short altimetry data. We identify and difference rates of pairs of highly correlated sea level records providing relative VLM estimates that are less dependent on record length and benefit from reduced uncertainty and mitigated biases (e.g. altimeter drift). This approach also overcomes the key limitation of previous techniques in that it is not geographically limited to semi-enclosed seas and can thus be applied to estimate VLM at TGs along any coast, provided data of sufficient quality are available. Using this approach, we have estimated VLM at a global set of 86 TGs with a median precision of 0.7 mm/year in a conventional reference frame. These estimates were compared to previous VLM estimates at TGs in the Baltic Sea and to estimates from co-located Global Positioning System (GPS) stations and Glacial Isostatic Adjustment (GIA) predictions. Differences with respect to the GPS and VLM estimates from previous studies resulted in a scatter of around 0.6 mm/year. Differences with respect to GIA predictions had a larger scatter in excess of 1 mm/year. Until satellite altimetry records reach enough length to estimate precise VLM at each TG, this new approach constitutes a substantial advance in the geodetic monitoring of TGs with major applications in long-term sea level change and climate change studies.
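The geometric identity underlying such tide gauge/altimetry combinations is worth recalling (a standard relation, not specific to this paper): altimetry observes geocentric sea level while a TG observes sea level relative to the land, so at a given coastal site

```latex
\dot{u} \approx \dot{s}_{\mathrm{ALT}} - \dot{s}_{\mathrm{TG}}
```

where u̇ is the VLM rate and ṡ denotes the respective sea level trends. Differencing the rates of two highly correlated TG records additionally cancels the common ocean signal, leaving the relative VLM between the two sites.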

Journal ArticleDOI
TL;DR: In this paper, the authors investigated a new approach, which is based on a frequent (e.g. weekly) estimation of station positions and EOP from a combination of epoch normal equations of the space geodetic techniques Global Positioning System (GPS), Satellite Laser Ranging (SLR), and Very Long Baseline Interferometry (VLBI).
Abstract: In the conventions of the International Earth Rotation and Reference Systems Service (e.g. IERS Conventions 2010), it is recommended that the instantaneous station position, which is fixed to the Earth's crust, is described by a regularized station position and conventional correction models. Current realizations of the International Terrestrial Reference Frame use a station position at a reference epoch and a constant velocity to describe the motion of the regularized station position in time. An advantage of this parameterization is the possibility to provide station coordinates of high accuracy over a long time span. Various publications have shown that residual non-linear station motions can reach a magnitude of a few centimeters due to loading effects that are not considered. Consistently estimated parameters like the Earth Orientation Parameters (EOP) may be affected if these non-linear station motions are neglected. In this paper, we investigate a new approach, which is based on a frequent (e.g. weekly) estimation of station positions and EOP from a combination of epoch normal equations of the space geodetic techniques Global Positioning System (GPS), Satellite Laser Ranging (SLR) and Very Long Baseline Interferometry (VLBI). The resulting time series of epoch reference frames are studied in detail and are compared with the conventional secular approach. It is shown that both approaches have specific advantages and disadvantages, which are discussed in the paper. A major advantage of the frequently estimated epoch reference frames is that the non-linear station motions are implicitly taken into account, which is a major limiting factor for the accuracy of the secular frames. Various test computations and comparisons between the epoch and secular approach are performed. The authors found that the consistently estimated EOP are systematically affected by the two different combination approaches. The differences between the epoch and secular frames reach magnitudes of 23.6 μas (0.73 mm) and 39.8 μas (1.23 mm) for the x-pole and y-pole, respectively, in case of the combined solutions. For the SLR-only solutions, significant differences with amplitudes of 77.3 μas (2.39 mm) can be found.

Journal ArticleDOI
TL;DR: In this article, a new robust alternative method called REDOD is proposed; unlike classical robust methods such as IWST, it completely eliminates the effect of constant errors from deformation analysis results.
Abstract: Deformation measurements have a repeatable nature. This means that deformation measurements are often performed with the same equipment, methods, geometric conditions and in a similar environment in epochs 1 and 2 (e.g., fully automated, continuous control measurements). It is, therefore, reasonable to assume that the results of deformation measurements can be distorted both by random errors and by some non-random errors, which are constant in both epochs. In other words, there is a high probability that the difference in the accuracy and precision of measurement of the same geometric element of the network in both epochs has a constant value and sign. Constant errors are understood conceptually, but their manifestation is difficult to determine in practice. For free control networks (the group of potential reference points in absolute control networks or the group of potential stable points in relative networks), the results of deformation measurements are most often processed using robust methods. Classical robust methods do not completely eliminate the effect of constant errors. This paper proposes a new robust alternative method called REDOD. The performed tests showed that if the results of deformation measurements were additionally distorted by constant errors, the REDOD method completely eliminated their effect from the deformation analysis results. If the results of deformation measurements are only distorted by random errors, the REDOD method yields very similar deformation analysis results as the classical IWST method. The numerical tests were preceded by a theoretical part, which describes the algorithm of classical robust methods. Particular attention was paid to the IWST method. In relation to classical robust methods, the optimization problem of the new REDOD method was formulated and the algorithm for its solution was derived.

Journal ArticleDOI
TL;DR: In this article, the effects of the random errors of the design matrix on weighted LS adjustment were investigated and a bias-corrected weighted LS estimate of the variance of unit weight was proposed.
Abstract: Although total least squares (TLS) is more rigorous than the weighted least squares (LS) method to estimate the parameters in an errors-in-variables (EIV) model, it is computationally much more complicated than the weighted LS method. For some EIV problems, the TLS and weighted LS methods have been shown to produce practically negligible differences in the estimated parameters. To understand under what conditions we can safely use the usual weighted LS method, we systematically investigate the effects of the random errors of the design matrix on weighted LS adjustment. We derive the effects of EIV on the estimated quantities of geodetic interest, in particular, the model parameters, the variance–covariance matrix of the estimated parameters and the variance of unit weight. By simplifying our bias formulae, we can readily show that the corresponding statistical results obtained by Hodges and Moore (Appl Stat 21:185–195, 1972) and Davies and Hutton (Biometrika 62:383–391, 1975) are actually the special cases of our study. The theoretical analysis of bias has shown that the effect of random matrix on adjustment depends on the design matrix itself, the variance–covariance matrix of its elements and the model parameters. Using the derived formulae of bias, we can remove the effect of the random matrix from the weighted LS estimate and accordingly obtain the bias-corrected weighted LS estimate for the EIV model. We derive the bias of the weighted LS estimate of the variance of unit weight. The random errors of the design matrix can significantly affect the weighted LS estimate of the variance of unit weight. The theoretical analysis successfully explains all the anomalously large estimates of the variance of unit weight reported in the geodetic literature. We propose bias-corrected estimates for the variance of unit weight. Finally, we analyze two examples of coordinate transformation and climate change, which have shown that the bias-corrected weighted LS method can perform numerically as well as the weighted TLS method.
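To make the mechanism concrete in the simplest setting (i.i.d. errors of variance σ_A² in all elements of the design matrix, independent of the observation errors; an illustrative special case, not the paper's general formulae): with y = Āξ + e_y and A = Ā + E_A,

```latex
E\left[ A^{\mathsf T} A \right] = \bar{A}^{\mathsf T} \bar{A} + n\, \sigma_A^2 I
```

so the naive normal matrix is systematically inflated and the plain LS estimate is biased. A first-order bias-corrected estimate subtracts the inflation,

```latex
\hat{\xi} = \left( A^{\mathsf T} A - n\, \sigma_A^2 I \right)^{-1} A^{\mathsf T} y
```

with n the number of observations; the abstract above describes the weighted generalization with an arbitrary variance-covariance matrix of the design-matrix elements.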

Journal ArticleDOI
TL;DR: In this article, the authors investigate the sources of the trend differences, focusing on the averaging methods used to generate the GMSL, and find that the main difference comes from the averaging method, with significant differences depending on latitude.
Abstract: Determining how the global mean sea level (GMSL) evolves with time is of primary importance to understand one of the main consequences of global warming and its potential impact on populations living near coasts or in low-lying islands. Five groups are routinely providing satellite altimetry-based estimates of the GMSL over the altimetry era (since late 1992). Because each group developed its own approach to compute the GMSL time series, this leads to some differences in the GMSL interannual variability and linear trend. While over the whole high-precision altimetry time span (1993–2012), good agreement is noticed for the computed GMSL linear trend (of 3.1 ± 0.4 mm/year), on shorter time spans (e.g., <10 years), trend differences are significantly larger than the 0.4 mm/year uncertainty. Here we investigate the sources of the trend differences, focusing on the averaging methods used to generate the GMSL. For that purpose, we consider outputs from two different groups: the Colorado University (CU) and Archiving, Validation and Interpretation of Satellite Oceanographic Data (AVISO) because associated processing of each group is largely representative of all other groups. For this investigation, we use the high-resolution MERCATOR ocean circulation model with data assimilation (version Glorys2-v1) and compute synthetic sea surface height (SSH) data by interpolating the model grids at the time and location of "true" along-track satellite altimetry measurements, focusing on the Jason-1 operating period (i.e., 2002–2009). These synthetic SSH data are then treated as "real" altimetry measurements, allowing us to test the different averaging methods used by the two processing groups for computing the GMSL: (1) averaging along-track altimetry data (as done by CU) or (2) gridding the along-track data into 2° × 2° meshes and then geographical averaging of the gridded data (as done by AVISO). We also investigate the effect of considering or not SSH data at shallow depths (<120 m) as well as the editing procedure. We find that the main difference comes from the averaging method with significant differences depending on latitude. In the tropics, the 2° × 2° gridding method used by AVISO overestimates by 11 % the GMSL trend. At high latitudes (above 60°N/S), both methods underestimate the GMSL trend. Our calculation shows that the CU method (along-track averaging) and AVISO gridding process underestimate the trend in high latitudes of the northern hemisphere by 0.9 and 1.2 mm/year, respectively. While we were able to attribute the AVISO trend overestimation in the tropics to grid cells with too few data, the cause of underestimation at high latitudes remains unclear and needs further investigation.
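To illustrate the second averaging method (gridding followed by an area-weighted mean), here is a minimal sketch assuming along-track SSH anomalies with latitude/longitude in degrees; the cell size, cosine weighting and all names are illustrative assumptions, not the processing code of either group:

```python
import numpy as np

def gmsl_gridded(lat, lon, ssh, cell=2.0):
    """Bin along-track SSH anomalies into cell x cell meshes, average each
    mesh, then form a cosine(latitude)-weighted mean of the mesh averages."""
    nlat, nlon = int(180 / cell), int(360 / cell)
    ilat = np.clip(((lat + 90.0) / cell).astype(int), 0, nlat - 1)
    ilon = np.clip(((lon % 360.0) / cell).astype(int), 0, nlon - 1)
    sums = np.zeros((nlat, nlon))
    counts = np.zeros((nlat, nlon))
    np.add.at(sums, (ilat, ilon), ssh)    # accumulate SSH per mesh
    np.add.at(counts, (ilat, ilon), 1.0)  # count observations per mesh
    filled = counts > 0                   # meshes with at least one observation
    mesh_mean = sums[filled] / counts[filled]
    lat_centers = (np.arange(nlat) + 0.5) * cell - 90.0
    weights = np.broadcast_to(
        np.cos(np.deg2rad(lat_centers))[:, None], (nlat, nlon))[filled]
    return np.sum(weights * mesh_mean) / np.sum(weights)

# The first method amounts to a direct (possibly weighted) mean of the
# along-track values themselves, e.g. np.mean(ssh).
```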

Journal ArticleDOI
TL;DR: In this article, the authors rescue historical sea level data from a tide gauge especially devised for geodesy, installed in Marseille in 1884 with the primary objective of defining the origin of the height system in France; hourly values have been digitized from the original tidal charts.
Abstract: This paper describes the historical sea level data that we have rescued from a tide gauge especially devised originally for geodesy. This gauge was installed in Marseille in 1884 with the primary objective of defining the origin of the height system in France. Hourly values for 1885–1988 have been digitized from the original tidal charts. They are supplemented by hourly values from an older tide gauge record (1849–1851) that was rediscovered during a survey in 2009. Both recovered data sets have been critically edited for errors and their reliability assessed. The hourly values are thoroughly analysed for the first time since their original recording. A consistent high-frequency time series is reported, notably increasing the length of one of the few European sea level records in the Mediterranean Sea spanning more than one hundred years. Changes in sea levels are examined, and previous results revisited with the extended time series. The rate of relative sea level change for the period 1849–2012 is estimated to have been 1.08 ± 0.04 mm/year at Marseille, a value that is slightly lower but in close agreement with the longest time series of Brest over the common period (1.26 ± 0.04 mm/year). The data from a permanent global positioning system station installed on the roof of the solid tide gauge building suggest a remarkable stability of the ground (−0.04 ± 0.25 mm/year) since 1998, confirming the choice made by our predecessor geodesists in the nineteenth century regarding this site selection.

Journal ArticleDOI
TL;DR: In this article, robust estimation by the EM (expectation maximization) algorithm is derived for the nonlinear Gauss–Helmert (GH) model, which is more general than the linear model and contains the errors-in-variables model as a special case.
Abstract: For deriving robust estimation by the EM (expectation maximization) algorithm for a model which is more general than the linear model, the nonlinear Gauss–Helmert (GH) model is chosen. It contains the errors-in-variables model as a special case. The nonlinear GH model is difficult to handle because of the linearization and the Gauss–Newton iterations. Approximate values for the observations have to be introduced for the linearization. Robust estimates by the EM algorithm based on the variance-inflation model and the mean-shift model have been derived for the linear model in the case of homoscedasticity. To derive these two EM algorithms for the GH model, different variances are introduced for the observations, and the expectations of the measurements defined by the linear model are replaced by those of the GH model. The two robust methods are applied to fit, by means of the GH model, a polynomial surface of second degree to the measured three-dimensional coordinates of a laser scanner. This results in detecting more outliers than with the linear model.

Journal ArticleDOI
TL;DR: In this article, the authors study M-estimation with probabilistic models of geodetic observations (M_P estimation), whose influence function is given by the differential equation defining the Pearson system of distributions, thus accounting for the asymmetry and excess kurtosis of empirical error distributions.
Abstract: The paper concerns M-estimation with probabilistic models of geodetic observations, which is called M_P estimation. Special attention is paid to M_P estimation that includes the asymmetry and the excess kurtosis, which are basic anomalies of empirical distributions of errors of geodetic or astrometric observations (in comparison to Gaussian errors). It is assumed that the influence function of M_P estimation is equal to the differential equation that defines the system of Pearson distributions. The central moments μ_k, k = 2, 3, 4, are the parameters of that system and thus also the parameters of the chosen influence function. The M_P estimation that includes the Pearson type IV and VII distributions (M_PD(l) method) is analyzed in great detail from a theoretical point of view as well as by applying numerical tests. The chosen distributions are leptokurtic with asymmetry, which reflects the general characteristics of empirical distributions. Considering M-estimation with probabilistic models, the Gram–Charlier series are also applied to approximate the models in question (M_G-C method). The paper shows that M_P estimation with the application of probabilistic models belongs to the class of robust estimations; the M_PD(l) method is especially effective in that case. It is suggested that even in the absence of significant anomalies the method in question should be regarded as robust against gross errors, while its robustness is controlled by the pseudo-kurtosis.

Journal ArticleDOI
TL;DR: In this paper, the authors report on the susceptibility of Scintrex CG-5 relative gravimeters to tilting, that is, the tendency of the instrument to provide incorrect readings after being tilted (even by small angles) for a moderate period of time.
Abstract: We report on the susceptibility of the Scintrex CG-5 relative gravimeters to tilting, that is, the tendency of the instrument to provide incorrect readings after being tilted (even by small angles) for a moderate period of time. Tilting of the instrument can occur when in transit between sites, usually on the back seat of a car, even when using the specially designed transport case. Based on a series of experiments with different instruments, we demonstrate that the readings may be offset by tens of μGal. In addition, it may take hours before the first reliable readings can be taken, with the actual time depending on how long the instrument had been tilted. This sensitivity to tilt, in combination with the long time required for the instrument to provide reliable readings, has not yet been reported in the literature and is not addressed adequately in the Scintrex CG-5 user manual. In particular, the inadequate instrument state cannot easily be detected by checking the readings during the observation or by reviewing the final data before leaving a site, precautions suggested by Scintrex Ltd. In regional surveys with car transportation over periods of tens of minutes to hours, the gravity measurements can be degraded by some 10 μGal. To obtain high-quality results in line with the CG-5 specifications, the gravimeters must remain in an upright position to within a few degrees during transits. This requirement may often be unrealistic during field observations, particularly when observing in hilly terrain or when walking with the instrument in a backpack.