
Showing papers in "Journal of Geodesy" in 2005


Journal ArticleDOI
TL;DR: A new generation of Earth gravity field models called GGM02 are derived using approximately 14 months of data spanning from April 2002 to December 2003 from the Gravity Recovery And Climate Experiment (GRACE) as discussed by the authors.
Abstract: A new generation of Earth gravity field models called GGM02 are derived using approximately 14 months of data spanning from April 2002 to December 2003 from the Gravity Recovery And Climate Experiment (GRACE). Relative to the preceding generation, GGM01, there have been improvements to the data products, the gravity estimation methods and the background models. Based on the calibrated covariances, GGM02 (both the GRACE-only model GGM02S and the combination model GGM02C) represents an improvement greater than a factor of two over the previous GGM01 models. Error estimates indicate a cumulative error less than 1 cm geoid height to spherical harmonic degree 70, which can be said to have met the GRACE minimum mission goals.

632 citations


Journal ArticleDOI
TL;DR: A modified LAMBDA (MLAMBDA) method is presented, with several strategies proposed to reduce the computational complexity of the LAMBDA method; relations between LAMBDA and relevant methods in the information theory literature are also pointed out.
Abstract: The least-squares ambiguity decorrelation adjustment (LAMBDA) method has been widely used in GNSS for fixing integer ambiguities. It can also solve any integer least squares (ILS) problem arising from other applications. For real-time applications with high dimensions, computational speed is crucial. A modified LAMBDA (MLAMBDA) method is presented. Several strategies are proposed to reduce the computational complexity of the LAMBDA method. Numerical simulations show that MLAMBDA is (much) faster than LAMBDA. The relations between the LAMBDA method and some relevant methods in the information theory literature are pointed out when we introduce its main procedures.

346 citations
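The ILS problem that LAMBDA and MLAMBDA solve can be illustrated with a brute-force sketch. This is not the LAMBDA algorithm itself (whose decorrelating Z-transform and tree search are what make it fast); the function name and toy numbers are illustrative only:

```python
import numpy as np
from itertools import product

def ils_search(a_float, Q, radius=1):
    """Brute-force integer least squares: minimise
    (a - z)^T Q^{-1} (a - z) over integer vectors z near round(a).
    LAMBDA/MLAMBDA solve the same problem far more efficiently by
    first decorrelating Q with an integer Z-transformation."""
    Qinv = np.linalg.inv(Q)
    base = np.round(a_float).astype(int)
    best_z, best_f = None, np.inf
    # enumerate integer candidates within +/- radius of the rounded solution
    for offset in product(range(-radius, radius + 1), repeat=len(a_float)):
        z = base + np.array(offset)
        r = a_float - z
        f = r @ Qinv @ r
        if f < best_f:
            best_z, best_f = z, f
    return best_z

# highly correlated float ambiguities: naive rounding is suboptimal here
a = np.array([1.40, 2.70])
Q = np.array([[2.0, 1.9], [1.9, 2.0]])
print(ils_search(a, Q))   # → [2 3], whereas naive rounding gives [1 3]
```

The strong off-diagonal correlation in Q is exactly the situation where component-wise rounding fails and a proper ILS search (or decorrelation first) pays off.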


Journal ArticleDOI
TL;DR: In this paper, a new approach is presented, which selects the double-difference ambiguities according to their probability of being fixed to the nearest integer, computed from the estimates and variances of wide-lane and narrow-lane ambiguities.
Abstract: Integer carrier-phase ambiguity resolution is one of the critical issues for precise GPS applications in geodesy and geodynamics. To resolve as many integer ambiguities as possible, the ‘most-easy-to-fix’ double-difference ambiguities have to be defined. For this purpose, several strategies are implemented in existing GPS software packages, such as choosing the ambiguities according to the baseline length or the variances of the estimated real-valued ambiguities. Although their efficiencies are demonstrated in practice, it is proven in this paper that they do not reflect all effects of varying data quality, because they are based on theoretical considerations of GPS data processing. Therefore, a new approach is presented, which selects the double-difference ambiguities according to their probability of being fixed to the nearest integer. The probability is computed from estimates and variances of wide-lane and narrow-lane ambiguities. Together with an optimized ambiguity fixing procedure, the new approach is implemented in the routine data processing for the International GPS Service (IGS) at GeoForschungsZentrum (GFZ) Potsdam. Within a sub-network of about 90 IGS stations, it is demonstrated that more than 97% of the independent ambiguities are fixed correctly compared to 75% by a commonly used method, and that the additionally fixed ambiguities improve the repeatability of the station coordinates by 10–26% in regions with sparse site distribution.

120 citations
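The "probability of being fixed to the nearest integer" that drives the selection can be illustrated with a standard rounding success-rate formula for a normally distributed float ambiguity; the exact ranking statistic used in the GFZ implementation may differ, and the numbers below are illustrative:

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def fix_probability(b_float, sigma):
    """Probability that rounding the float ambiguity b_float to the nearest
    integer gives the correct value, assuming a normal estimate with std dev
    sigma and observed fractional residual d:
        P = Phi((0.5 - d)/sigma) + Phi((0.5 + d)/sigma) - 1
    (a standard rounding success-rate formula, used here for illustration)."""
    d = abs(b_float - round(b_float))
    return norm_cdf((0.5 - d) / sigma) + norm_cdf((0.5 + d) / sigma) - 1.0

# an ambiguity close to an integer with a small variance is 'easy to fix'
print(fix_probability(5.03, 0.08))   # close to 1: a good fixing candidate
print(fix_probability(5.45, 0.15))   # much lower: fix this one later, if at all
```

Sorting the double-difference ambiguities by this probability, rather than by baseline length or variance alone, is the essence of the "most-easy-to-fix" selection described above.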


Journal ArticleDOI
Peiliang Xu1
TL;DR: Sign-constrained robust least squares as discussed by the authors is a robust estimation method that employs a maximum possible number of good data to derive the robust solution, and thus will not be affected by partial near-multi-collinearity among part of the data or if some data are clustered together.
Abstract: The findings of this paper are summarized as follows: (1) We propose a sign-constrained robust estimation method, which can tolerate 50% of data contamination and meanwhile achieve high, least-squares-comparable efficiency. Since the objective function is identical with least squares, the method may also be called sign-constrained robust least squares. An iterative version of the method has been implemented and shown to be capable of resisting more than 50% of contamination. As a by-product, a robust estimate of the scale parameter can also be obtained. Unlike the least median of squares method and repeated medians, which use the least possible number of data to derive the solution, the sign-constrained robust least squares method attempts to employ the maximum possible number of good data to derive the robust solution, and thus will not be affected by partial near multi-collinearity among part of the data or by some of the data being clustered together; (2) although M-estimates have been reported to have a breakdown point of 1/(t+1), we have shown that the weights of observations can readily deteriorate such results and bring the breakdown point of M-estimates of Huber's type to zero. The same zero breakdown point of the L1-norm method is also derived, again due to the weights of observations; (3) by assuming a prior distribution for the signs of outliers, we have developed the concept of subjective breakdown point, which may be thought of as an extension of stochastic breakdown by Donoho and Huber but can be important in explaining real-life problems in Earth sciences and image reconstruction; and finally, (4) we have shown that the least median of squares method can still break down with a single outlier, even if neither highly concentrated good data nor highly concentrated outliers exist.

113 citations


Journal ArticleDOI
TL;DR: In this article, the authors present the analyses of the network-derived ionospheric correction accuracy under extremely varying – quiet and stormy – geomagnetic and ionospheric conditions and investigate the influence of the correction accuracy on the instantaneous (single-epoch) and on-the-fly (OTF) AR in long-range RTK GPS positioning.
Abstract: The network-based GPS technique provides a broad spectrum of corrections to support RTK (real-time kinematic) surveying and geodetic applications. The most important among them are the ionospheric corrections generated in the reference network. The accuracy of these corrections depends upon the ionospheric conditions and may not always be sufficient to support ambiguity resolution (AR), and hence accurate GPS positioning. This paper presents the analyses of the network-derived ionospheric correction accuracy under extremely varying – quiet and stormy – geomagnetic and ionospheric conditions. In addition, the influence of the correction accuracy on the instantaneous (single-epoch) and on-the-fly (OTF) AR in long-range RTK GPS positioning is investigated, and the results, based on post-processed GPS data, are provided. The network used here to generate the ionospheric corrections consists of three permanent stations selected from the Ohio Continuously Operating Reference Stations (CORS) network. The average separation between the reference stations was ∼200 km and the test baseline was 121 km long. The results show that, during the severe ionospheric storm, the correction accuracy deteriorates to the point when the instantaneous AR is no longer possible, and the OTF AR requires much more time to fix the integers. The analyses presented here also outline the importance of the correct selection of the stochastic constraints in the rover solution applied to the network-derived ionospheric corrections.

100 citations


Journal ArticleDOI
TL;DR: A new data filtering method, based on the Vondrak filter and the technique of cross-validation, is developed for separating signals from noise in data series, and applied to mitigate GPS multipath effects in applications such as deformation monitoring.
Abstract: Multipath disturbance is one of the most important error sources in high-accuracy global positioning system (GPS) positioning and navigation. A new data filtering method, based on the Vondrak filter and the technique of cross-validation, is developed for separating signals from noise in data series, and applied to mitigate GPS multipath effects in applications such as deformation monitoring. Both simulated data series and real GPS observations are used to test the proposed method. It is shown that the method can be used to successfully separate signals from noise at different noise levels, and for varying signal frequencies as long as the noise level is lower than the magnitude of the signals. A multipath model can be derived, based on the current-day GPS observations, with the proposed method and used to remove multipath errors in subsequent days of GPS observations when taking advantage of the sidereal day-to-day repeating characteristics of GPS multipath signals. Tests have shown that the reduction in the root mean square (RMS) values of the GPS errors ranges from 20% to 40% when the method is applied.

89 citations
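The sidereal day-to-day repetition exploited in the last step can be sketched with synthetic data. A simple moving average stands in for the Vondrak/cross-validation filter, and all signal parameters (amplitudes, periods, noise level) are invented for illustration:

```python
import numpy as np

SHIFT = 86400 - 86164          # ≈236 s: sidereal advance of GPS geometry per solar day
rng = np.random.default_rng(0)

t = np.arange(4000.0)          # 1 Hz epochs
def mp(tt):                    # synthetic multipath signature (metres)
    return 0.02 * np.sin(2.0 * np.pi * tt / 700.0)

day1 = mp(t) + 0.004 * rng.standard_normal(t.size)          # model day
day2 = mp(t + SHIFT) + 0.004 * rng.standard_normal(t.size)  # next day: signature repeats, shifted

# crude moving-average smoother standing in for the Vondrak/cross-validation filter
kernel = np.ones(51) / 51.0
model = np.convolve(day1, kernel, mode="same")

# apply the sidereally shifted day-1 multipath model to day 2
n = t.size - SHIFT
corrected = day2[:n] - model[SHIFT:SHIFT + n]

def rms(x):
    return float(np.sqrt(np.mean(x ** 2)))

print(rms(day2[:n]), rms(corrected))   # corrected series has a markedly smaller RMS
```

The RMS reduction in this toy setup is larger than the 20–40% reported above only because the synthetic multipath repeats perfectly; real signatures decorrelate from day to day.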


Journal ArticleDOI
TL;DR: In this article, the variance component estimation (VCE) is implemented in the combined least squares (LS) adjustment of heterogeneous height data (ellipsoidal, orthometric and geoid), for the purpose of calibrating geoid error models.
Abstract: The well-known statistical tool of variance component estimation (VCE) is implemented in the combined least-squares (LS) adjustment of heterogeneous height data (ellipsoidal, orthometric and geoid), for the purpose of calibrating geoid error models. This general treatment of the stochastic model offers the flexibility of estimating more than one variance and/or covariance component to improve the covariance information. Specifically, the iterative minimum norm quadratic unbiased estimation (I-MINQUE) and the iterative almost unbiased estimation (I-AUE) schemes are implemented in case studies with observed height data from Switzerland and parts of Canada. The effect of correlation among measurements of the same height type and the role of the systematic effects and datum inconsistencies in the combined adjustment of ellipsoidal, geoid and orthometric heights on the estimated variance components are investigated in detail. Results give valuable insight into the usefulness of the VCE approach for calibrating geoid error models and the challenges encountered when implementing such a scheme in practice. In all cases, the estimated variance component corresponding to the geoid height data was less than or equal to 1, indicating that an overall downscaling of the initial covariance (CV) matrix was necessary. It was also shown that overly optimistic CV matrices are obtained when diagonal-only cofactor matrices are implemented in the stochastic model for the observations. Finally, the divergence of the VCE solution and/or the computation of negative variance components provide insight into the effectiveness of the selected parametric model.

88 citations
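A toy version of the iterative VCE idea, for the simplest possible model (k observation groups measuring one common unknown, identity cofactor matrices), might look like the sketch below. It is a stand-in for, not a reproduction of, the I-MINQUE/I-AUE schemes used in the paper:

```python
import numpy as np

def iterative_vce(groups, n_iter=30):
    """Simplified iterative (Helmert-type) variance component estimation:
    each group's variance component is re-estimated as v'v divided by the
    group's redundancy, then the common unknown is re-adjusted with the
    updated weights, until the components stabilise."""
    groups = [np.asarray(g, float) for g in groups]
    sig2 = np.ones(len(groups))                 # a priori components, start at 1
    x = 0.0
    for _ in range(n_iter):
        w = np.array([g.size / s for g, s in zip(groups, sig2)])   # group weights
        x = sum(g.sum() / s for g, s in zip(groups, sig2)) / w.sum()
        for i, g in enumerate(groups):
            v = g - x                           # residuals of group i
            r = g.size - w[i] / w.sum()         # redundancy of group i
            sig2[i] = (v @ v) / r               # Helmert-type update
    return sig2, x

# group 2 is generated 3x noisier than its (unit) a priori variance claims
rng = np.random.default_rng(1)
g1 = 10.0 + rng.standard_normal(200)
g2 = 10.0 + 3.0 * rng.standard_normal(200)
sig2, x = iterative_vce([g1, g2])
print(sig2)   # roughly [1, 9]: the second group's covariance is scaled up
```

A component near 1 means the a priori covariance was realistic; a component below 1, as found for the geoid heights in the study above, signals an overly pessimistic (to-be-downscaled) initial CV matrix.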


Journal ArticleDOI
TL;DR: It is shown here that the RCR technique fails to tune down the long-wavelength gravity signal from the terrestrial data, and the EGM actually only reduces, in a non-optimised way, the truncation error committed by limiting the Stokes integration to a small region around the computation point.
Abstract: The remove-compute-restore (RCR) technique is the most well known method for regional gravimetric geoid determination today. Its basic theory is the first-order approximation of either Molodensky’s method for quasi-geoid determination or the classical geoid modelling by Helmert’s second method of condensing the topography onto the geoid. Although the basic approximate formulae do not meet today’s demands for a 1-cm geoid, it is sometimes assumed that the removal of the less precise long-wavelength terrestrial gravity anomaly field from Stokes’s integral by utilising a higher-order reference field represented by a more precise Earth gravity model (EGM) and the restoration of the EGM as a low-degree geoid contribution will produce a geoid model of the desired accuracy. Further improvement is achieved also by removing and restoring a residual topographic effect, which favourably smoothes the gravity anomaly to be integrated in Stokes’s formula. However, it is shown here that the RCR technique fails to tune down the long-wavelength gravity signal from the terrestrial data, and the EGM actually only reduces, in a non-optimised way, the truncation error committed by limiting the Stokes integration to a small region around the computation point. Hence, in order to take full advantage of a precise EGM, especially one from new dedicated satellite gravimetry, Stokes’s kernel must be modified in a suitable way to match the errors of terrestrial gravity, EGM and truncation. In addition, topographic, atmospheric and ellipsoidal effects must be carefully applied.

87 citations


Journal ArticleDOI
TL;DR: In this article, three different global gravity field models have been derived from kinematic orbits up to a maximum degree of n=90 of a spherical harmonic expansion of the gravitational potential, with regularization based on either EGM96 or Kaula's rule of thumb.
Abstract: Global gravity field models have been determined based on kinematic orbits covering an observation period of one year beginning from March 2002. Three different models have been derived up to a maximum degree of n=90 of a spherical harmonic expansion of the gravitational potential. One version, ITG-CHAMP01E, has been regularized beginning from degree n=40 upwards, based on the potential coefficients of the gravity field model EGM96. A second model, ITG-CHAMP01K, has been determined based on Kaula’s rule of thumb, also beginning from degree n=40. A third version, ITG-CHAMP01S, has been determined without any regularization. The physical model of the gravity field recovery technique is based on Newton’s equation of motion, formulated as a boundary value problem in the form of a Fredholm-type integral equation. The observation equations are formulated in the space domain by dividing the one-year orbit into short sections of approximately 30-minute arcs. For every short arc, a variance factor has been determined by an iterative computation procedure. The three gravity field models have been validated based on various criteria, and demonstrate the quality of not only the gravity field recovery technique but also the kinematically determined orbits.

84 citations


Journal ArticleDOI
TL;DR: In this article, a feed-forward ANN structure was used to model the local GPS/levelling geoid surface, and the results showed that ANNs can produce results that are comparable to polynomial fitting and LSC.
Abstract: The use of GPS for establishing height control in an area where levelling data are available can involve the so-called GPS/levelling technique. Modelling of the GPS/levelling geoid undulations has usually been carried out using polynomial surface fitting, least-squares collocation (LSC) and finite-element methods. Artificial neural networks (ANNs) have recently been used for many investigations, and proven to be effective in solving complex problems represented by noisy and missing data. In this study, a feed-forward ANN structure, learning the characteristics of the training data through the back-propagation algorithm, is employed to model the local GPS/levelling geoid surface. The GPS/levelling geoid undulations for Istanbul, Turkey, were estimated from GPS and precise levelling measurements obtained during a field study in the period 1998–99. The results are compared to those produced by two well-known conventional methods, namely polynomial fitting and LSC, in terms of root mean square error (RMSE) that ranged from 3.97 to 5.73 cm. The results show that ANNs can produce results that are comparable to polynomial fitting and LSC. The main advantage of the ANN-based surfaces seems to be the low deviations from the GPS/levelling data surface, which is particularly important for distorted levelling networks.

73 citations
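Of the two conventional baselines, polynomial surface fitting is easy to sketch. The coordinates and noise level below are invented, loosely sized to the Istanbul case (undulations of a few tens of metres, centimetre-level noise):

```python
import numpy as np

def fit_poly_surface(lon, lat, N, degree=2):
    """Least-squares polynomial surface N(lon, lat) of the given total
    degree -- one of the two conventional methods the ANN is compared
    against. Coordinates are reduced to their means for conditioning."""
    x, y = lon - lon.mean(), lat - lat.mean()
    cols = [x**i * y**j for i in range(degree + 1) for j in range(degree + 1 - i)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, N, rcond=None)
    return A @ coef, coef

# synthetic GPS/levelling geoid undulations over an Istanbul-sized area
rng = np.random.default_rng(2)
lon = rng.uniform(28.5, 29.5, 100)
lat = rng.uniform(40.8, 41.3, 100)
true = 36.0 + 0.8 * (lon - 29.0) - 1.2 * (lat - 41.0) + 0.5 * (lon - 29.0) * (lat - 41.0)
N_obs = true + 0.04 * rng.standard_normal(100)   # ~4 cm noise, as in the study's RMSE range

N_fit, _ = fit_poly_surface(lon, lat, N_obs)
rmse = float(np.sqrt(np.mean((N_fit - N_obs) ** 2)))
print(rmse)   # near the 4 cm noise floor
```

An ANN plays the same role as this surface, but learns the shape from the training points instead of assuming a fixed polynomial form, which is why it can hug a distorted levelling network more closely.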


Journal ArticleDOI
TL;DR: In this article, the propagation of unmodelled systematic errors into coordinate time series computed using least squares is investigated, to improve the understanding of unexplained signals and apparent noise in geodetic (especially GPS) coordinates.
Abstract: The propagation of unmodelled systematic errors into coordinate time series computed using least squares is investigated, to improve the understanding of unexplained signals and apparent noise in geodetic (especially GPS) coordinate time series. Such coordinate time series are invariably based on a functional model linearised using only the zero- and first-order terms of a (Taylor) series expansion about the approximate coordinates of the unknown point. The effect of such truncation errors is investigated through the derivation of a generalised systematic error model for the simple case of range observations from a single known reference point to a point which is assumed to be at rest by the least squares model but is in fact in motion. The systematic error function for a one-pseudo-satellite, two-dimensional case, designed to be as simple as possible while remaining analogous to GPS positioning, is quantified. It is shown that the combination of a moving reference point and unmodelled periodic displacement at the unknown point of interest, due to ocean tide loading, for example, results in an output coordinate time series containing many periodic terms when only zero- and first-order expansion terms are used in the linearisation of the functional model. The amplitude, phase and period of these terms are dependent on the input amplitude, the locations of the unknown point and reference point, and the period of the reference point's motion. The dominant output signals that arise due to truncation errors match those found in coordinate time series obtained from both simulated data and real three-dimensional GPS data.

Journal ArticleDOI
TL;DR: In this article, the authors compared the GPS-based La Plata ionospheric model (LPIM) and the International Reference Ionosphere (IRI95) model to estimates from the dual-frequency altimeter onboard the TOPEX/Poseidon (T/P) satellite.
Abstract: Total electron content (TEC) predictions made with the GPS-based La Plata ionospheric model (LPIM) and the International Reference Ionosphere (IRI95) model were compared to estimates from the dual-frequency altimeter onboard the TOPEX/Poseidon (T/P) satellite. LPIM and IRI95 were evaluated for the location and time of available T/P data, from January 1997 to December 1998. To investigate temporal and spatial variations of the TEC bias between T/P and each model, the region covered by T/P observations was divided into ten latitude bands. For both models and for all latitudes, the bias was mainly positive (i.e. T/P values were larger); the LPIM bias was lower and less variable than the IRI95 bias. To perform a detailed analysis of temporal and spatial variability of the T/P-LPIM TEC bias, the Earth’s surface was divided into spherical triangles with 9°-sides, and a temporally varying regression model was fitted to every triangle. The highest TEC bias was found over the equatorial anomalies, which is attributed to errors in LPIM. A significant TEC bias was found at 40°N latitude, which is attributed to errors in the T/P Sea State Bias (SSB) correction. To separate systematic errors in the T/P TEC from those caused by LPIM, altimeter range biases estimated by other authors were analysed in connection with the TEC bias. This suggested that LPIM underestimates the TEC, particularly during the Southern Hemisphere summer, while T/P C-band SSB calibration is worse during the Southern Hemisphere winter.

Journal ArticleDOI
TL;DR: In this article, the Bruns formula was used to compute the mean normal gravity, the mean values of gravity generated by topographical and atmo-spheric masses, and the mean gravity disturbance generated by the masses contained within geoid.
Abstract: The main problem of the rigorous definition of the orthometric height is the evaluation of the mean value of the Earth's gravity acceleration along the plumbline within the topography. To find the exact relation between rigor- ous orthometric and Molodensky's normal heights, the mean gravity is decomposed into: the mean normal gravity, the mean values of gravity generated by topographical and atmo- spheric masses, and the mean gravity disturbance generated by the masses contained within geoid. The mean normal gravity is evaluated according to Somigliana-Pizzetti's the- ory of the normal gravity field generated by the ellipsoid of revolution. Using the Bruns formula, the mean values of gravity along the plumbline generated by topographical and atmospheric masses can be computed as the integral mean between the Earth's surface and geoid. Since the disturb- ing gravity potential generated by masses inside the geoid is harmonic above the geoid, the mean value of the gravity disturbance generated by the geoid is defined by applying the Poisson integral equation to the integral mean. Numeri- cal results for a test area in the Canadian Rocky Mountains show that the difference between the rigorously defined or- thometric height and the Molodensky normal height reaches ∼0.5 m.

Journal ArticleDOI
TL;DR: In this paper, the authors evaluate the effects of digital elevation model (DEM) errors on the accuracy of the geoid model in the application of Stokes's formula, showing that these errors propagate into the geoid model through the topographic and downward continuation (DWC) corrections.
Abstract: Any errors in digital elevation models (DEMs) will introduce errors directly in gravity anomalies and geoid models when used in interpolating Bouguer gravity anomalies. Errors are also propagated into the geoid model by the topographic and downward continuation (DWC) corrections in the application of Stokes’s formula. The effects of these errors are assessed by evaluating the absolute accuracy of nine independent DEMs for the Iran region. It is shown that the improvement in using the high-resolution Shuttle Radar Topography Mission (SRTM) data versus previously available DEMs in gridding of gravity anomalies, terrain corrections and DWC effects for the geoid model is significant. Based on the Iranian GPS/levelling network data, we estimate the absolute vertical accuracy of the SRTM in Iran to be 6.5 m, which is much better than the estimated global accuracy of the SRTM (say 16 m). Hence, this DEM has an accuracy comparable to a current photogrammetric high-resolution DEM of Iran under development. We also found very large differences between the GLOBE and SRTM models, in the range of −750 to 550 m. This difference causes an error in the range of −160 to 140 mGal in interpolating surface gravity anomalies and −60 to 60 mGal in simple Bouguer anomaly correction terms. In terms of geoid heights, we found large differences between the use of GLOBE and SRTM DEMs, in the range of −1.1 to 1 m for the study area. The terrain correction of the geoid model at selected GPS/levelling points differs by only 3 cm between these two DEMs.
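The quoted sensitivity of the simple Bouguer term to DEM errors can be cross-checked directly from the Bouguer slab formula, δg = 2πGρ·δh:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
rho = 2670.0         # standard topographic density, kg/m^3
# simple Bouguer slab: delta_g = 2*pi*G*rho*dh; 1 mGal = 1e-5 m/s^2
mgal_per_m = 2.0 * math.pi * G * rho * 1e5
dh = 550.0           # largest GLOBE-SRTM height difference quoted above (m)

print(mgal_per_m)        # ≈ 0.1119 mGal per metre of height error
print(mgal_per_m * dh)   # ≈ 62 mGal: same order as the quoted ±60 mGal Bouguer-term error
```

So a height discrepancy of several hundred metres between DEMs translates almost linearly into tens of mGal in the Bouguer correction, consistent with the figures in the abstract.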

Journal ArticleDOI
TL;DR: In this paper, the authors estimate seasonal global mean sea level changes using different data resources, including sea level anomalies from satellite radar altimetry, ocean temperature and salinity from the World Ocean Atlas 2001, time-variable gravity observations from the Gravity Recovery and Climate Experiment (GRACE) mission, and terrestrial water storage and atmospheric water vapor changes from the NASA global land data assimilation system and National Centers for Environmental Prediction reanalysis atmospheric model.
Abstract: We estimate seasonal global mean sea level changes using different data resources, including sea level anomalies from satellite radar altimetry, ocean temperature and salinity from the World Ocean Atlas 2001, time-variable gravity observations from the Gravity Recovery and Climate Experiment (GRACE) mission, and terrestrial water storage and atmospheric water vapor changes from the NASA global land data assimilation system and National Centers for Environmental Prediction reanalysis atmospheric model. The results from all estimates are consistent in amplitude and phase at the annual period, in some cases with remarkably good agreement. The results provide a good measure of average annual variation of water stored within atmospheric, land, and ocean reservoirs. We examine how varied treatments of degree-2 and degree-1 spherical harmonics from GRACE, laser ranging, and Earth rotation variations affect GRACE mean sea level change estimates. We also show that correcting the standard equilibrium ocean pole tide correction for mass conservation is needed when using satellite altimeter data in global mean sea level studies. These encouraging results indicate that it is reasonable to consider estimating longer-term time series of water storage in these reservoirs, as a way of tracking climate change.

Journal ArticleDOI
TL;DR: In this article, the error contributions within the ocean tide loading (OTL) convolution integral computation were determined to be able to estimate the numerical accuracy of the gravity OTL values.
Abstract: The error contributions within the ocean tide loading (OTL) convolution integral computation were determined to be able to estimate the numerical accuracy of the gravity OTL values. First, the comparison of four OTL programs by different authors (CONMODB, GOTIC2, NLOADF and OLFG/OLMPP) at ten globally distributed gravity stations using exactly the same input values shows discrepancies between 2% and 5%. A new program, called CARGA, was written that is able to reproduce the results of these programs to a level of 0.1%. This has given us the ability to state with certainty the cause of the discrepancies among the four programs. It is shown that by choosing an appropriate interpolation of the Green’s function, refinement of the integration mesh and a high-resolution coastline, an accuracy level of better than 1% can be obtained for stations in Europe. Besides this numerical accuracy, there are errors in the ocean tide model such as a 1% uncertainty in the mean value of the sea-water density and the lack of conservation of tidal water mass, which can produce offsets of around 0.04 μGal.

Journal ArticleDOI
TL;DR: This work uses simulation results to show that Newton’s method usually converges faster than the iteratively reweighted least squares (IRLS) method, which is often used in geodesy for computing robust estimates of parameters.
Abstract: When GPS signal measurements have outliers, using least squares (LS) estimation is likely to give poor position estimates. One of the typical approaches to handle this problem is to use robust estimation techniques. We study the computational issues of Huber’s M-estimation applied to relative GPS positioning. First, for code-based relative positioning, we use simulation results to show that Newton’s method usually converges faster than the iteratively reweighted least squares (IRLS) method, which is often used in geodesy for computing robust estimates of parameters. Then, for code- and carrier-phase-based relative positioning, we present a recursive modified Newton method to compute Huber’s M-estimates of the positions. The structures of the model are exploited to make the method efficient, and orthogonal transformations are used to ensure numerical reliability of the method. Economical use of computer memory is also taken into account in designing the method. Simulation results show that the method is effective.
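The IRLS baseline that Newton's method is compared against can be sketched for Huber's M-estimation. The data are synthetic, and the tuning constant k = 1.345 is the conventional choice for 95% Gaussian efficiency, not necessarily the paper's:

```python
import numpy as np

def huber_irls(A, y, k=1.345, n_iter=50):
    """Huber M-estimation by iteratively reweighted least squares (IRLS).
    Observations with |residual| > k*scale are down-weighted by k*scale/|r|;
    the scale is re-estimated robustly from the MAD at every iteration."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]                  # plain LS start
    for _ in range(n_iter):
        r = y - A @ x
        scale = 1.4826 * np.median(np.abs(r - np.median(r)))  # robust sigma (MAD)
        u = np.abs(r) / (scale + 1e-12)
        w = np.where(u <= k, 1.0, k / u)                      # Huber weights
        Aw = A * w[:, None]
        x = np.linalg.solve(A.T @ Aw, Aw.T @ y)               # weighted normal equations
    return x

# a line fit with 20% gross outliers: IRLS stays near the truth, plain LS does not
rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 50)
A = np.column_stack([np.ones_like(t), t])
y = 2.0 + 3.0 * t + 0.05 * rng.standard_normal(50)
y[::5] += 5.0                                                 # inject outliers
print(huber_irls(A, y))   # close to the true parameters [2, 3]
```

Each IRLS iteration is one weighted LS solve; Newton's method applied to the same Huber objective typically needs fewer iterations, which is the computational point the paper makes.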

Journal ArticleDOI
TL;DR: This procedure was designed to guard against internal distortion of the two space-geodetic networks and takes advantage of the reduction in tie information needed with the time-series combination method by using the very strong contribution due to co-location of the daily pole of rotation.
Abstract: We have compared the VLBI and GPS terrestrial reference frames, realized using 5 years of time-series observations of station positions and polar motion, with surveyed co-location tie vectors for 25 sites. The goal was to assess the overall quality of the ties and to determine whether a subset of co-location sites might be found with VLBI–GPS ties that are self-consistent within a few millimeters. Our procedure was designed to guard against internal distortion of the two space-geodetic networks and takes advantage of the reduction in tie information needed with the time-series combination method by using the very strong contribution due to co-location of the daily pole of rotation. The general quality of the available ties is somewhat discouraging in that most have residuals, compared to the space-geodetic frames, at the level of 1–2 cm. However, by a careful selection process, we have identified a subset of nine local VLBI–GPS ties that are consistent with each other and with space geodesy to better than 4 mm (RMS) in each component. While certainly promising, it is not possible to confidently assess the reliability of this particular subset without new information to verify the absolute accuracy of at least a few of the highest-quality ties. Particular care must be taken to demonstrate that possible systematic errors within the VLBI and GPS systems have been properly accounted for. A minimum of two (preferably three or four) ties must be measured with accuracies of 1 mm or better in each component, including any potential systematic effects. If this can be done, then the VLBI and GPS frames can be globally aligned to less than 1 mm in each Helmert component using our subset of nine ties. In any case, the X and Y rotations are better determined, to about 0.5 mm, due to the contribution of co-located polar motion.

Journal ArticleDOI
TL;DR: In this article, four different basin functions are developed to estimate water storage variations within individual river basins from time variations in the Stokes coefficients now available from the GRACE mission.
Abstract: Four different basin functions are developed to estimate water storage variations within individual river basins from time variations in the Stokes coefficients now available from the GRACE mission. The four basin functions are evaluated using simulated data. Basin functions differ in how they minimize effects of three major error sources: measurement error; leakage of signal from one region to another; and errors in the atmospheric pressure field removed during GRACE data processing. Three of the basin functions are constant in time, while the fourth changes monthly using information about the signal (hydrologic and oceanic load variations). To test basin functions’ performance, Stokes coefficient variations from land and ocean models are synthesized, and error levels 50 and 100 times greater than pre-launch GRACE error estimate are used to corrupt them. Errors at 50 times pre-launch estimates approximately simulate current GRACE data. GRACE recovery of water storage variations is attempted for five different river basins (Amazon, Mississippi, Lena, Huang He and Oranje), representing a variety of sizes, locations, and signal variance. In the large basins (Amazon, Mississippi and Lena), water storage variations are recovered successfully at both error levels. As the error level increases from 50 to 100 times, basin functions change their shape, yielding less atmospheric pressure error and more leakage error. Amplitude spectra of measurement and atmospheric pressure errors have different shapes, but the best results are obtained when both are used in basin function design. When high-quality information about the signal is available, for example from climate and ocean models, changing the basin function each month can reduce leakage error and improve estimates of time variable water storage within basins.

Journal ArticleDOI
TL;DR: In this article, the topographical potential and its vertical gradient are modeled as a sum of a spherical term and corresponding ellipsoidal correction, and the spectral (series) representation of the potential is introduced.
Abstract: Due to the Global Positioning System (GPS), points on and above the Earth’s surface are readily given by means of a triplet of the Gauss surface normal coordinates L, B and H, called ellipsoidal longitude, ellipsoidal latitude, and ellipsoidal height, respectively. For geodetic applications, these curvilinear coordinates refer to the international reference ellipsoid GRS80, which is an equipotential surface of the Somigliana–Pizzetti reference potential field. Here, we aim at representing the gravitational potential generated by the ‘topographical’ masses above GRS80, and its vertical gradient, i.e. its effect on measured gravity, in terms of the Gauss surface normal coordinates (L, B, H). The spatial (integral) formulas for the topographical potential and its vertical gradient are presented as a sum of a spherical term and a corresponding ellipsoidal correction. The formulas for the terrain contribution are evaluated over a test area in the Canadian Rocky Mountains using area-limited discrete integration. The spectral (series) representation of the topographical potential is also introduced, and the condensation of the topographical masses on or inside the reference ellipsoid is discussed in terms of a simple layer density. The ellipsoidal corrections might seem to be of limited significance in view of the relatively low accuracy of currently available topographical data, especially the mass density. However, the representation of the topographical potential and its vertical gradient using coordinates that are directly observable by GPS with a high level of accuracy certainly has advantages.
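The spherical term of the topographic Newton integral can be sketched as a discrete sum over DEM cells. This crude planar point-mass approximation, with an assumed standard crustal density of 2670 kg/m³, is only illustrative and is far simpler than the paper's ellipsoidal formulation:

```python
import numpy as np

G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
RHO = 2670.0       # assumed standard topographic mass density [kg m^-3]

def topo_potential(xp, yp, zp, x, y, h, cell_area):
    """Discrete Newton integral in planar approximation: each DEM cell of
    height h is condensed to a point mass rho*h*dA at mid-height h/2.
    Returns the gravitational potential at (xp, yp, zp) in m^2/s^2."""
    dm = RHO * h * cell_area                                  # cell masses
    dist = np.sqrt((x - xp)**2 + (y - yp)**2 + (h / 2 - zp)**2)
    return G * np.sum(dm / dist)
```

For a single distant cell this reduces to the point-mass potential G·m/ℓ, which is a quick sanity check on any discrete integration scheme of this kind.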

Journal ArticleDOI
TL;DR: In this article, a linear combination of the gravitational gradient tensor components is proposed for the recovery of the global gravity field and the results are compared with those of LOS acceleration differences.
Abstract: The GRACE mission has substantiated the low–low satellite-to-satellite tracking (LL-SST) concept. The LL-SST configuration can be combined with the previously realized high–low SST concept of the CHAMP mission to provide a much higher accuracy. The line-of-sight (LOS) acceleration difference between the GRACE satellite pair, the simplest form of the combined observable, is mostly used for mapping the global gravity field of the Earth in terms of spherical harmonic coefficients. As an alternative observable, a linear combination of the gravitational gradient tensor components is proposed. Being a one-point function and having a direct relation with the field geometry (the curvature of the field at the point) are two noteworthy advantages of the alternative formulation. In addition, using an observation quantity that is related to the second- instead of the first-order derivatives of the gravitational potential amplifies the high-frequency part of the signal. Since the transition from the first- to the second-order derivatives includes the application of a finite-differences scheme, the high-frequency part of the noise is also amplified. Nevertheless, due to the different spectral behaviour of signal and noise, in the end the second-order approach leads to improved gravitational field resolution. Mathematical formulae for the gradiometry approach, for both linear and higher-degree approximations, are derived. The proposed approach is implemented for recovery of the global gravitational field and the results are compared with those of LOS acceleration differences. Moreover, LOS acceleration difference residuals are calculated, which are at the level of a few tenths of a mGal. Error analysis shows that the residuals of the estimated degree variances are less than 10⁻³. Furthermore, the gravity anomaly residuals are less than 2 mGal for most points on the Earth.
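The LOS (range) acceleration observable can be formed from the relative satellite state via the standard relation ρ̈ = e·Δa + (|Δv|² − ρ̇²)/ρ, with e the unit LOS vector. A minimal sketch, with assumed variable names:

```python
import numpy as np

def range_acceleration(dx, dv, da):
    """Inter-satellite range acceleration from the relative state.
    dx, dv, da : relative position, velocity and acceleration (3-vectors).
    Uses rho_ddot = e.da + (|dv|^2 - rho_dot^2)/rho, with e = dx/|dx|."""
    rho = np.linalg.norm(dx)
    e = dx / rho
    rho_dot = e @ dv                       # range rate
    return e @ da + (dv @ dv - rho_dot**2) / rho
```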

Journal ArticleDOI
TL;DR: In this paper, a new bootstrap-based method for Global Navigation Satellite System (GNSS) carrier-phase ambiguity resolution is introduced, referred to as integer aperture bootstrapping.
Abstract: In this contribution, we introduce a new bootstrap-based method for Global Navigation Satellite System (GNSS) carrier-phase ambiguity resolution. Integer bootstrapping is known to be one of the simplest methods for integer ambiguity estimation with close-to-optimal performance. Its outcome is easy to compute due to the absence of an integer search, and its performance is close to optimal if the decorrelating Z-transformation of the LAMBDA method is used. Moreover, the bootstrapped estimator is presently the only integer estimator for which an exact and easy-to-compute expression of its fail-rate can be given. A possible disadvantage is, however, that the user has only a limited control over the fail-rate. Once the underlying mathematical model is given, the user has no freedom left in changing the value of the fail-rate. Here, we present an ambiguity estimator for which the user is given additional freedom. For this purpose, use is made of the class of integer aperture estimators as introduced in Teunissen (2003). This class is larger than the class of integer estimators. Integer aperture estimators are of a hybrid nature and can have integer outcomes as well as non-integer outcomes. The new estimator is referred to as integer aperture bootstrapping. This new estimator has all the advantages known from integer bootstrapping with the additional advantage that its fail-rate can be controlled by the user. This is made possible by giving the user the freedom over the aperture of the pull-in region. We also give an exact and easy-to-compute expression for its controllable fail-rate.
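A minimal sketch of plain integer bootstrapping and its exact success rate P = Π(2Φ(1/(2σ_{i|I})) − 1), conditioning in the given component order; the decorrelating Z-transformation and the aperture mechanism of the paper are omitted here:

```python
import numpy as np
from math import erf, sqrt

def bootstrap(a_float, Q):
    """Sequential integer bootstrapping of a float ambiguity vector with
    covariance Q, using the LDL^T factors obtained from a Cholesky step."""
    Gc = np.linalg.cholesky(Q)          # Q = Gc Gc^T, Gc lower triangular
    d = np.diag(Gc)**2                  # conditional variances sigma_{i|I}^2
    L = Gc / np.diag(Gc)                # unit lower triangular factor
    n = len(a_float)
    a_cond = a_float.copy()
    a_fix = np.zeros(n)
    for i in range(n):
        # condition on the already-fixed components, then round
        a_cond[i] = a_float[i] - L[i, :i] @ (a_cond[:i] - a_fix[:i])
        a_fix[i] = np.round(a_cond[i])
    # exact bootstrapped success rate (Teunissen)
    Phi = lambda t: 0.5 * (1.0 + erf(t / sqrt(2.0)))
    p_success = float(np.prod([2.0 * Phi(0.5 / sqrt(di)) - 1.0 for di in d]))
    return a_fix, p_success
```

Integer aperture bootstrapping would additionally compare the residual a_cond − a_fix against a user-chosen aperture region and return the float solution when the residual falls outside it, which is what makes the fail-rate controllable.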

Journal ArticleDOI
TL;DR: Novel outlier detection algorithms are derived and statistical methods are presented that may be used for this purpose, and a combined algorithm, based on wavelets and a statistical method, shows best performance with an identification rate of about 99%.
Abstract: The satellite missions CHAMP, GRACE, and GOCE mark the beginning of a new era in gravity field determination and modeling. They provide unique models of the global stationary gravity field and its variation in time. Due to inevitable measurement errors, sophisticated pre-processing steps have to be applied before further use of the satellite measurements. In the framework of the GOCE mission, this includes outlier detection, absolute calibration and validation of the SGG (satellite gravity gradiometry) measurements, and removal of temporal effects. In general, outliers are defined as observations that appear to be inconsistent with the remainder of the data set. One goal is to evaluate the effect of additive, innovative and bulk outliers on the estimates of the spherical harmonic coefficients. It can be shown that even a small number of undetected outliers (<0.2% of all data points) can have an adverse effect on the coefficient estimates. Consequently, concepts for the identification and removal of outliers have to be developed. Novel outlier detection algorithms are derived and statistical methods are presented that may be used for this purpose. The methods aim at high outlier identification rates as well as small failure rates. A combined algorithm, based on wavelets and a statistical method, shows best performance with an identification rate of about 99%. To further reduce the influence of undetected outliers, an outlier detection algorithm is implemented inside the gravity field solver (the Quick-Look Gravity Field Analysis tool was used). This results in spherical harmonic coefficient estimates that are of similar quality to those obtained without outliers in the input data.
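As an illustration of additive-outlier screening (a simple robust filter, not the paper's wavelet-based combined algorithm), samples can be compared against a running median with a threshold derived from the median absolute deviation (MAD):

```python
import numpy as np

def flag_outliers(x, window=25, k=4.0):
    """Flag additive outliers in a 1-D series: compare each sample with a
    running median; the detection threshold is k times a robust sigma
    estimated from the MAD of the residuals."""
    x = np.asarray(x, float)
    med = np.array([np.median(x[max(0, i - window):i + window + 1])
                    for i in range(len(x))])
    r = x - med
    sigma = 1.4826 * np.median(np.abs(r - np.median(r)))  # MAD -> sigma
    return np.abs(r) > k * sigma
```

Innovative and bulk outliers require model-based (e.g. filter-innovation) tests rather than this purely sample-wise screen.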

Journal ArticleDOI
TL;DR: In this article, a statistical test procedure based on uncorrelated least squares residuals, which allows verification of the hypothesis of a heterogeneous variance, is provided for GPS carrier-phase observations.
Abstract: Due to increased demands on the quality of the results of Global Positioning System (GPS) evaluations, various authors have studied improvements of the stochastic model of GPS carrier-phase observations. These improvements are based on the reasonable assumption that the commonly used stochastic model with independent and homoscedastic (i.e. equal variance) errors is unrealistic. However, this has not been proved rigorously so far. A statistical test procedure based on uncorrelated least-squares residuals, which allows verification of the hypothesis of a heterogeneous variance, is provided. The statistical test procedure is of interest in its own right, and is independent of the practical problem considered. The presented technique is applied to GPS carrier-phase observations. Results show that the variances of the investigated observations are far from homogeneous. It is indicated that the error variances of the presented data increase with decreasing GPS satellite elevation. These results confirm the assumption that the commonly used stochastic model of GPS observations is inadequate and has to be improved.
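For illustration, the homoscedasticity hypothesis can be checked with Bartlett's classical test on residuals grouped by satellite elevation. The synthetic elevation-dependent noise below is an assumption for the toy example; the paper's own test procedure differs:

```python
import numpy as np

def bartlett_stat(groups):
    """Bartlett's test statistic for equal variances across groups.
    Under homoscedasticity it is approximately chi-squared with k-1
    degrees of freedom (k = number of groups)."""
    k = len(groups)
    n = np.array([len(g) for g in groups])
    s2 = np.array([np.var(g, ddof=1) for g in groups])
    N = n.sum()
    sp2 = np.sum((n - 1) * s2) / (N - k)                 # pooled variance
    num = (N - k) * np.log(sp2) - np.sum((n - 1) * np.log(s2))
    corr = 1.0 + (np.sum(1.0 / (n - 1)) - 1.0 / (N - k)) / (3.0 * (k - 1))
    return num / corr

# toy example: phase-residual noise grows as 1/sin(elevation)
rng = np.random.default_rng(1)
elev = rng.uniform(10.0, 90.0, 4000)                     # elevations [deg]
resid = rng.normal(0.0, 1.0 / np.sin(np.radians(elev)))
groups = [resid[(elev >= lo) & (elev < lo + 20.0)] for lo in (10, 30, 50, 70)]
T = bartlett_stat(groups)
# T far above the 1% chi-squared(3) critical value (11.34) rejects
# the homoscedastic model
```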

Journal ArticleDOI
TL;DR: The result is a robust and well-distributed DORIS core network of 118 stations (DPOD2000) suitable for POD during the 1993–2008 period considered here, together with a selection of the ITRF2000 realization based on specific criteria defined here.
Abstract: In view of the future adoption of the new precise orbit determination (POD) standards for the TOPEX/Poseidon and Jason-1 satellites, we propose a method to evaluate terrestrial reference frames for POD. We applied this method to the ITRF2000 realization of the DORIS network using local geodetic ties, plate motion models, the recent DORIS IGN04D02 cumulative solution and DORIS weekly time-series of coordinates. We propose to adopt a selection of the ITRF2000 realization based on specific criteria that we define here, and to extend it with ground stations for which we propose new coordinates and velocities. Only 13 out of 131 stations were considered to be inappropriate for POD activities. The result is a robust and well-distributed DORIS core network of 118 stations (DPOD2000) suitable for POD during the 1993–2008 period considered here.

Journal ArticleDOI
TL;DR: In this article, the scalar-valued B-spline wavelets are used to represent spatial time-series of GPS-derived global ionosphere maps (GIMs) of the vertical total electron content (TEC) from the Earth's surface to the mean altitudes of GPS satellites over Japan.
Abstract: Wavelet expansion has been demonstrated to be suitable for the representation of spatial functions. Here we propose the so-called B-spline wavelets to represent spatial time-series of GPS-derived global ionosphere maps (GIMs) of the vertical total electron content (TEC) from the Earth’s surface to the mean altitudes of GPS satellites, over Japan. The scalar-valued B-spline wavelets can be defined in a two-dimensional, but not necessarily planar, domain. Generated by a sequence of knots, different degrees of B-splines can be implemented: degree 1 represents the Haar wavelet; degree 2, the linear B-spline wavelet; and degree 4, the cubic B-spline wavelet. A non-uniform version of these wavelets allows us to handle data on a bounded domain without any edge effects. B-splines are easily extended with great computational efficiency to domains of arbitrary dimensions, while preserving their properties. This generalization employs tensor products of B-splines, defined as a linear superposition of products of univariate B-splines in different directions. The data and model may be identical at the locations of the data points if the number of wavelet coefficients is equal to the number of grid points. In addition, data compression is made efficient by eliminating the wavelet coefficients with negligible magnitudes, thereby reducing the observational noise. We applied the developed methodology to the representation of the spatial and temporal variations of GIM from an extremely dense GPS network, the GPS Earth Observation Network (GEONET) in Japan. Since the sampling of the TEC is registered regularly in time, we use a two-dimensional B-spline wavelet representation in space and a one-dimensional spline interpolation in time. Over the Japan region, the B-spline wavelet method can overcome the problem of bias for the spherical harmonic model at the boundary, caused by the non-compact support.
The hierarchical decomposition not only allows an inexpensive calculation, but also separates visualisation at different levels of detail. Each level corresponds to a certain spatial frequency band, leading to a detection of structures and enhancement in the ionosphere at different resolutions.
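A tensor-product B-spline representation of a gridded TEC snapshot can be sketched with a cubic-by-cubic surface; the grid extent and values below are synthetic assumptions, and SciPy's RectBivariateSpline stands in for the paper's wavelet construction:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# synthetic gridded TEC snapshot over a Japan-sized lat-lon box (assumed values)
lat = np.linspace(30.0, 46.0, 17)        # degrees, 1-degree spacing
lon = np.linspace(128.0, 146.0, 19)
tec = (20.0 + 5.0 * np.sin(np.radians(lat))[:, None]
               * np.cos(np.radians(lon))[None, :])

# cubic-by-cubic tensor-product B-spline surface through the grid
# (s=0: interpolating, so data and model agree at the grid points)
surf = RectBivariateSpline(lat, lon, tec, kx=3, ky=3, s=0)
```

Thresholding small coefficients of such a surface (e.g. via surf.get_coeffs()) is one way to realize the data compression mentioned above; a true wavelet scheme would do this level by level.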

Journal ArticleDOI
TL;DR: The results of simulation studies and field experiments indicate that the proposed procedure improves the performance of single-frequency ambiguity resolution in terms of both reliability and time-to-fix-ambiguity.
Abstract: In order to achieve a precise positioning solution from GPS, the carrier-phase measurements with correctly resolved integer ambiguities must be used. Based on the integration of GPS with pseudolites and Inertial Navigation Systems (INS), this paper proposes an effective procedure for single-frequency carrier-phase integer ambiguity resolution. With the inclusion of pseudolite and INS measurements, the proposed procedure can speed up the ambiguity resolution process and increase the reliability of the resolved ambiguities. In addition, a recently developed ambiguity validation test and a stochastic modelling scheme (based on on-line covariance matrix estimation) are adapted to enhance the quality of ambiguity resolution. The results of simulation studies and field experiments indicate that the proposed procedure indeed improves the performance of single-frequency ambiguity resolution in terms of both reliability and time-to-fix-ambiguity.
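The validation step can be illustrated with the widely used ratio test (the paper employs a more recently developed validation test); the threshold value is an assumption:

```python
def ratio_test(q_best, q_second, threshold=2.0):
    """Simple ambiguity validation: accept the best integer candidate only
    if the squared norm of the second-best candidate exceeds that of the
    best by at least the given factor."""
    return q_second / q_best >= threshold
```

A fixed threshold is the classical choice; model-driven (fixed fail-rate) thresholds are the more modern alternative.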

Journal ArticleDOI
TL;DR: In this paper, the authors compared two deterministic and three stochastic modification methods for computing a regional geoid over the Baltic countries; the final selection of the best method is made by means of two accuracy estimates: the expected global mean square error of the geoid estimator, and the statistics of the post-fit residuals between the computed geoid models and precise GPS-levelling data.
Abstract: In regional gravimetric geoid determination, it is customary to use the modified Stokes formula that combines local terrestrial data with a global geopotential model. This study compares two deterministic and three stochastic modification methods for computing a regional geoid over the Baltic countries. The final selection of the best modification method is made by means of two accuracy estimates: the expected global mean square error of the geoid estimator, and the statistics of the post-fit residuals between the computed geoid models and precise GPS-levelling data. Numerical results show that the modification methods tested do not provide substantially different results, although the stochastic approaches appear formally better in the selected study area. The 2.8–5.3 cm (RMS) post-fit residuals to the GPS-levelling points indicate the suitability of the new geoid model for many practical applications. Moreover, the numerical comparisons reveal a one-dimensional offset between the regional vertical datum and the geoid models based upon the new GRACE-only geopotential model GGM01s. This suggests a greater reliability of the new model compared to the earlier, EGM96-based and somewhat tilted, regional geoid models for the same study area.
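One deterministic modification of the Stokes kernel, the Wong–Gore truncation of the low degrees, can be sketched as follows; it is only one of the family of methods the paper compares:

```python
import numpy as np

def stokes_kernel(psi):
    """Closed-form spherical Stokes kernel S(psi), psi in radians."""
    s = np.sin(psi / 2.0)
    return (1.0 / s - 6.0 * s + 1.0 - 5.0 * np.cos(psi)
            - 3.0 * np.cos(psi) * np.log(s + s * s))

def stokes_kernel_wg(psi, L):
    """Wong-Gore modification: remove the Legendre terms of the kernel's
    series up to degree L, since those low degrees come from the global
    geopotential model instead of the local data."""
    t = np.cos(psi)
    S = stokes_kernel(psi)
    p_prev, p = 1.0, t                       # P_0(t), P_1(t)
    for n in range(2, L + 1):
        p_prev, p = p, ((2 * n - 1) * t * p - (n - 1) * p_prev) / n
        S -= (2 * n + 1) / (n - 1) * p       # subtract the degree-n term
    return S
```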

Journal ArticleDOI
TL;DR: In this paper, an algorithm to transform from 3D Cartesian to geodetic coordinates is obtained by solving the equation of the Lagrange parameter; geodetic height can be recovered to 0.5 mm precision over the range from −6×10⁶ m to 10¹⁰ m.
Abstract: The algorithm to transform from 3D Cartesian to geodetic coordinates is obtained by solving the equation of the Lagrange parameter. Numerical experiments show that geodetic height can be recovered to 0.5 mm precision over the range from −6×10⁶ m to 10¹⁰ m.
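For comparison, a standard iterative (fixed-point) conversion for the GRS80 ellipsoid can be sketched; this is a conventional alternative scheme, not the paper's Lagrange-parameter algorithm:

```python
import math

# GRS80 parameters
A = 6378137.0
F = 1.0 / 298.257222101
E2 = F * (2.0 - F)                            # first eccentricity squared

def geodetic_to_cartesian(lat, lon, h):
    """Exact forward transformation (lat, lon in radians, h in metres)."""
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)
    return ((n + h) * math.cos(lat) * math.cos(lon),
            (n + h) * math.cos(lat) * math.sin(lon),
            (n * (1.0 - E2) + h) * math.sin(lat))

def cartesian_to_geodetic(x, y, z, iters=10):
    """Iterative inverse transformation (not valid at the poles)."""
    lon = math.atan2(y, x)
    p = math.hypot(x, y)
    lat = math.atan2(z, p * (1.0 - E2))       # initial value
    for _ in range(iters):
        n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)
        h = p / math.cos(lat) - n
        lat = math.atan2(z, p * (1.0 - E2 * n / (n + h)))
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)
    h = p / math.cos(lat) - n
    return lat, lon, h
```

The iteration converges rapidly for near-Earth points; closed-form solvers such as the one above in the paper are preferred when the full −6×10⁶ m to 10¹⁰ m height range must be covered.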

Journal ArticleDOI
TL;DR: In this article, the problem of global height datum unification is solved in the gravity potential space based on: (1) high-resolution local gravity field modeling, (2) geocentric coordinates of the reference benchmark, and (3) a known value of the geoid's potential.
Abstract: The problem of “global height datum unification” is solved in the gravity potential space based on: (1) high-resolution local gravity field modeling, (2) geocentric coordinates of the reference benchmark, and (3) a known value of the geoid’s potential. The high-resolution local gravity field model is derived based on a solution of the fixed-free two-boundary-value problem of the Earth’s gravity field using (a) potential difference values (from precise leveling), (b) the modulus of the gravity vector (from gravimetry), (c) astronomical longitude and latitude (from geodetic astronomy and/or a combination of Global Navigation Satellite System (GNSS) observations with total station measurements), and (d) satellite altimetry. Knowing the height of the reference benchmark in the national height system and its geocentric GNSS coordinates, and using the derived high-resolution local gravity field model, the gravity potential value of the zero point of the height system is computed. The difference between the derived gravity potential value of the zero point of the height system and the geoid’s potential value is computed. This potential difference gives the offset of the zero point of the height system from the geoid in the “potential space”, which is transferred into the “geometry space” using the transformation formula derived in this paper. The method was applied to the computation of the offset of the zero point of the Iranian height datum from the geoid, using the geoid potential value W₀ = 62636855.8 m²/s². According to the geometry-space computations, the height datum of Iran is 0.09 m below the geoid.
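The final transfer from potential space to geometry space can be sketched with the adopted W₀ and an assumed mean normal gravity value (the paper derives its own transformation formula):

```python
W0 = 62636855.8        # adopted geoid potential [m^2/s^2], from the paper
GAMMA_BAR = 9.806      # assumed mean normal gravity along the plumb line [m/s^2]

def datum_offset_metres(w_datum_zero):
    """First-order transfer of a potential-space offset to geometry space:
    dH = (W0 - W_datum) / gamma_bar. Since potential increases downward
    near the geoid, dH is negative when the datum zero point lies below
    the geoid (its potential exceeds W0)."""
    return (W0 - w_datum_zero) / GAMMA_BAR
```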