
Showing papers in "Journal of Geodesy in 1999"


Journal ArticleDOI
TL;DR: The detailed development of an innovation-based adaptive Kalman filter for an integrated inertial navigation system/global positioning system (INS/GPS) is given, based on the maximum likelihood criterion for the proper choice of the filter weight and hence the filter gain factors.
Abstract: After reviewing the two main approaches of adaptive Kalman filtering, namely, innovation-based adaptive estimation (IAE) and multiple-model-based adaptive estimation (MMAE), the detailed development of an innovation-based adaptive Kalman filter for an integrated inertial navigation system/global positioning system (INS/GPS) is given. The developed adaptive Kalman filter is based on the maximum likelihood criterion for the proper choice of the filter weight and hence the filter gain factors. Results from two kinematic field tests in which the INS/GPS was compared to highly precise reference data are presented. Results show that the adaptive Kalman filter outperforms the conventional Kalman filter by tuning either the system noise variance–covariance (V–C) matrix 'Q' or the update measurement noise V–C matrix 'R' or both of them.
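A minimal sketch of the innovation-based adaptation idea, in Python (not the authors' INS/GPS implementation): the sample covariance of recent innovations is used to re-estimate the measurement noise V–C matrix R inside an otherwise conventional Kalman step. Function and variable names, and the window length, are illustrative assumptions.

```python
import numpy as np

def iae_kalman_step(x, P, z, F, H, Q, R, innovations, window=30):
    """One Kalman step with innovation-based adaptive estimation (IAE) of R.

    Sketch only: R is re-estimated from the sample covariance of the last
    `window` innovations via R = C_v - H P^- H^T; in practice the result
    must be checked for positive definiteness.
    """
    # Prediction
    x = F @ x
    P = F @ P @ F.T + Q
    # Innovation (measurement minus prediction) and its sample statistics
    v = z - H @ x
    innovations.append(v)
    if len(innovations) >= window:
        V = np.array(innovations[-window:])
        C_v = V.T @ V / window            # sample innovation covariance
        R = C_v - H @ P @ H.T             # adapted measurement-noise V-C matrix
    # Measurement update with the (possibly adapted) R
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ v
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P, R
```

Tuning the system noise matrix Q follows the same pattern, using the statistics of the estimated state corrections instead.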

949 citations


Journal ArticleDOI
TL;DR: In this article, a probabilistic justification is given for using the integer least-squares estimator; for global positioning system ambiguity resolution, this implies that the success rate of any other integer estimator of the carrier phase ambiguities will be smaller than or at most equal to that of the integer least-squares estimator.
Abstract: A probabilistic justification is given for using the integer least-squares (LS) estimator. The class of admissible integer estimators is introduced and classical adjustment theory is extended by proving that the integer LS estimator is best in the sense of maximizing the probability of correct integer estimation. For global positioning system ambiguity resolution, this implies that the success rate of any other integer estimator of the carrier phase ambiguities will be smaller than or at the most equal to the ambiguity success rate of the integer LS estimator. The success rates of any one of these estimators may therefore be used to provide lower bounds for the LS success rate. This is particularly useful in case of the bootstrapped estimator.
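The bootstrapped success rate mentioned in the last sentence has a simple closed form in terms of the conditional standard deviations of the (preferably decorrelated) ambiguities, P = ∏ᵢ (2Φ(1/(2σᵢ)) − 1); a small sketch of that standard formula (variable names assumed):

```python
from scipy.stats import norm

def bootstrap_success_rate(cond_stds):
    """Ambiguity success rate of integer bootstrapping, computed from the
    conditional standard deviations sigma_{i|I} (in cycles) of the
    ambiguities. By the paper's result this is a lower bound for the
    integer least-squares success rate."""
    p = 1.0
    for s in cond_stds:
        p *= 2.0 * norm.cdf(1.0 / (2.0 * s)) - 1.0
    return p

# e.g. three ambiguities with conditional standard deviations in cycles
print(bootstrap_success_rate([0.05, 0.10, 0.15]))  # ~0.999
```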

412 citations


Journal ArticleDOI
TL;DR: The SIGMA-Δ model has been developed for stochastic modelling of global positioning system (GPS) signal diffraction errors in high precision GPS surveys as mentioned in this paper, where the basic information used in the SIGMA-Δ model is the measured carrier-to-noise power-density ratio (C/N0).
Abstract: The SIGMA-Δ model has been developed for stochastic modelling of global positioning system (GPS) signal diffraction errors in high precision GPS surveys. The basic information used in the SIGMA-Δ model is the measured carrier-to-noise power-density ratio (C/N0). Using the C/N0 data and a template technique, the proper variances are derived for all phase observations. Thus the quality of the measured phase is automatically assessed and if phase observations are suspected to be contaminated by diffraction effects they are weighted down in the least-squares adjustment. The ability of the SIGMA-Δ model to reduce signal diffraction effects is demonstrated on two static GPS surveys as well as on a kinematic high-precision GPS railway survey. In cases of severe signal diffraction the accuracy of the GPS positions is improved by more than 50% compared to standard GPS processing techniques.
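The core of such C/N0-based weighting is the mapping from measured C/N0 to a phase variance. A sketch of the commonly published relation σ² = C · 10^(−C/N0 / 10); the constant C and the paper's template calibration are not reproduced here and are assumptions:

```python
def phase_variance(cn0_dbhz, C=0.00224):
    """Variance of a carrier-phase observation from measured C/N0 [dB-Hz]:
    sigma^2 = C * 10**(-C/N0 / 10). A diffracted signal shows a reduced
    C/N0, so its variance grows and the observation is down-weighted
    (weight = 1/sigma^2) in the least-squares adjustment. The constant C
    is an illustrative value, not the authors' calibrated template."""
    return C * 10.0 ** (-cn0_dbhz / 10.0)

# clean signal at 50 dB-Hz vs. a diffraction-suspected one at 30 dB-Hz
print(phase_variance(50.0), phase_variance(30.0))  # variance ratio 1:100
```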

182 citations


Book ChapterDOI
TL;DR: A detailed treatment of adjustment problems in combined global positioning system (GPS)/levelling/geoid networks is given and two modelling alternatives for the correction field are presented, namely a pure deterministic parametric model, and a hybrid deterministic and stochastic model.
Abstract: A detailed and statistically rigorous treatment of adjustment problems in combined GPS/levelling/geoid networks is given in this paper. The two main types of 'unknowns' in this kind of multi-data 1D networks are the gravimetric geoid accuracy and a 2D spatial field that describes all the systematic distortions among the available height data sets. An accurate knowledge of the latter becomes especially important when we consider employing GPS techniques for levelling purposes with respect to a local vertical datum. Various modelling alternatives for the correction field are presented, namely a pure discrete deterministic model, a hybrid deterministic and stochastic model, and finally a pure stochastic model. Variance component estimation is also introduced as an important tool for assessing the actual geoid noise level, and checking a-priori given geoid error models. In addition, theoretical comparisons are made with some of the already established adjustment models that have been used in practice. The problem of statistical testing of various model components (data noise, deterministic model, stochastic model) in such networks is also discussed. Finally, some conclusions are drawn and a few recommendations for further study are pointed out.
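As one concrete instance of the deterministic correction-field alternative, the classical four-parameter corrector surface can be fitted to the misclosures h − H − N; a hedged sketch (the paper's hybrid and stochastic models are not reproduced):

```python
import numpy as np

def fit_corrector_surface(phi, lam, h, H, N):
    """Least-squares fit of the classical 4-parameter corrector surface to
    the misclosures l = h - H - N of a combined GPS/levelling/geoid
    network; phi/lam in radians, heights in metres. This is one common
    choice for a deterministic parametric model, shown for illustration."""
    l = h - H - N                                   # misclosures [m]
    A = np.column_stack([np.ones_like(phi),
                         np.cos(phi) * np.cos(lam),
                         np.cos(phi) * np.sin(lam),
                         np.sin(phi)])
    x, *_ = np.linalg.lstsq(A, l, rcond=None)       # datum-distortion parameters
    return x, l - A @ x                             # parameters and residuals
```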

165 citations


Journal ArticleDOI
TL;DR: In this article, the robust estimation of geodetic datum transformation is discussed, where the robust initial estimates of the transformation parameters should have a high breakdown point in order to provide reliable residuals for the following estimation.
Abstract: The robust estimation of geodetic datum transformation is discussed. The basic principle of robust estimation is introduced. The error influence functions of the robust estimators, together with those of least-squares estimators, are given. Particular attention is given to the robust initial estimates of the transformation parameters, which should have a high breakdown point in order to provide reliable residuals for the following estimation. The median method is applied to solve for robust initial estimates of transformation parameters since it has the highest breakdown point. A smooth weight function is then used to improve the efficiency of the parameter estimates in successive iterative computations. A numerical example is given on a datum transformation between a global positioning system network and the corresponding geodetic network in China. The results show that when the coordinates are contaminated by outliers, the proposed method can still give reasonable results.
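A sketch of the overall scheme described here: a high-breakdown initial estimate (e.g. from the median method) followed by iteratively reweighted least squares with a smooth weight function. The Huber-type weights and MAD scale below are common choices, assumed for illustration:

```python
import numpy as np

def irls_robust_fit(A, l, x0, k=1.5, iters=20):
    """Iteratively reweighted least squares with a smooth (Huber-type)
    weight function, started from a high-breakdown initial estimate x0,
    e.g. obtained with the median method as in the paper. Names and
    constants are illustrative assumptions."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        v = l - A @ x                                  # residuals
        s = max(1.4826 * np.median(np.abs(v)), 1e-12)  # robust scale (MAD)
        u = np.maximum(np.abs(v) / s, 1e-12)           # standardized residuals
        w = np.where(u <= k, 1.0, k / u)               # smooth down-weighting
        x = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * l))
    return x
```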

149 citations


Journal ArticleDOI
TL;DR: In this paper, the Slepian approach is used to solve the polar gap problem in satellite geodesy, which enables the capture of the maximum amount of information from non-polar gravity field missions.
Abstract: The Slepian problem consists of determining a sequence of functions that constitute an orthonormal basis of a subset of ℝ (or ℝ²) concentrating the maximum information in the subspace of square-integrable functions with a band-limited spectrum. The same problem can be stated and solved on the sphere. The relation between the new basis and the ordinary spherical harmonic basis can be explicitly written and numerically studied. The new base functions are orthogonal on both the subspace and the whole sphere. Numerical tests show the applicability of the Slepian approach with regard to solvability and stability in the case of polar data gaps, even in the presence of aliasing. This tool turns out to be a natural solution to the polar gap problem in satellite geodesy. It enables capture of the maximum amount of information from non-polar gravity field missions.

91 citations


Journal ArticleDOI
TL;DR: The theoretical differences between the Helmert deflection of the vertical and that computed from a truncated spherical harmonic series of the gravity field, aside from the limited spectral content in the latter, include the curvature of the normal plumb line, the permanent tidal effect, and datum origin and orientation offsets as discussed by the authors.
Abstract: The theoretical differences between the Helmert deflection of the vertical and that computed from a truncated spherical harmonic series of the gravity field, aside from the limited spectral content in the latter, include the curvature of the normal plumb line, the permanent tidal effect, and datum origin and orientation offsets. A numerical comparison between deflections derived from spherical harmonic model EGM96 and astronomic deflections in the conterminous United States (CONUS) shows that correcting these systematic effects reduces the mean differences in some areas. Overall, the mean difference in CONUS is reduced from −0.219 arcsec to −0.058 arcsec for the south–north deflection, and from +0.016 arcsec to +0.004 arcsec for the west–east deflection. Further analysis of the root-mean-square differences indicates that the high-degree spectrum of the EGM96 model has significantly less power than implied by the deflection data.

90 citations


Journal ArticleDOI
TL;DR: In this article, it is shown that a correction for the quasigeoid-to-geoid separation, amounting to about 3 cm for our area of interest, has to be considered.
Abstract: The definition of the mean Helmert anomaly is reviewed and the theoretically correct procedure for computing this quantity on the Earth's surface and on the Helmert co-geoid is suggested. This includes a discussion of the role of the direct topographical and atmospherical effects, primary and secondary indirect topographical and atmospherical effects, ellipsoidal corrections to the gravity anomaly, its downward continuation and other effects. For the rigorous derivations it was found necessary to treat the gravity anomaly systematically as a point function, defined by means of the fundamental gravimetric equation. It is this treatment that allows one to formulate the corrections necessary for computing the 'one-centimetre geoid'. Compared to the standard treatment, it is shown that a 'correction for the quasigeoid-to-geoid separation', amounting to about 3 cm for our area of interest, has to be considered. It is also shown that the 'secondary indirect effect' has to be evaluated at the topography rather than at the geoid level. This results in another difference of the order of several centimetres in the area of interest. An approach is then proposed for determining the mean Helmert anomalies from gravity data observed on the Earth's surface. This approach is based on the widely-held belief that complete Bouguer anomalies are generally fairly smooth and thus particularly useful for interpolation, approximation and averaging. Numerical results from the Canadian Rocky Mountains for all the corrections as well as the downward continuation are shown.

84 citations


Journal ArticleDOI
TL;DR: In this paper, the least squares spectral analysis method is reviewed, with emphasis on its remarkable property to accept time series with an associated, fully populated covariance matrix, and criteria for the statistical significance of the least-squares spectral peaks are formulated.
Abstract: The least-squares spectral analysis method is reviewed, with emphasis on its remarkable property to accept time series with an associated, fully populated covariance matrix. Two distinct cases for the input covariance matrix are examined: (a) it is known absolutely (a-priori variance factor known); and (b) it is known up to a scale factor (a-priori variance factor unknown), thus the estimated covariance matrix is used. For each case, the probability density function that underlies the least-squares spectrum is derived and criteria for the statistical significance of the least-squares spectral peaks are formulated. It is shown that for short series (up to about 150 values) with an estimated covariance matrix (case b), the spectral peaks must be stronger to be statistically significant than in the case of a known covariance matrix (case a): the shorter the series and the lower the significance level, the higher the difference becomes. For long series (more than about 150 values), case (b) converges to case (a) and the least-squares spectrum follows the beta distribution. The results of this investigation are formulated in two new theorems.
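A compact sketch of a least-squares spectrum that accepts a fully populated covariance matrix, following the usual definition (projection of the series onto a cos/sin pair, measured in the metric P = C⁻¹); names are illustrative:

```python
import numpy as np

def lssa_spectrum(t, l, C, omegas):
    """Least-squares spectrum of a series l(t) with fully populated
    covariance matrix C: for each trial frequency, the relative reduction
    of the weighted norm achieved by a cos/sin pair. A sketch of the
    standard definition, not the paper's full significance machinery."""
    P = np.linalg.inv(C)
    denom = l @ P @ l
    spec = []
    for w in omegas:
        A = np.column_stack([np.cos(w * t), np.sin(w * t)])
        Nmat = A.T @ P @ A                 # normal matrix for this frequency
        u = A.T @ P @ l
        spec.append((u @ np.linalg.solve(Nmat, u)) / denom)
    return np.array(spec)
```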

78 citations


Journal ArticleDOI
TL;DR: The 2 arc-minute × 2 arc-minute geoid model (GEOID96) for the United States supports the conversion between North American Datum 1983 (NAD 83) ellipsoid heights and North American Vertical Datum 1988 (NAVD 88) Helmert heights.
Abstract: The 2 arc-minute × 2 arc-minute geoid model (GEOID96) for the United States supports the conversion between North American Datum 1983 (NAD 83) ellipsoid heights and North American Vertical Datum 1988 (NAVD 88) Helmert heights. GEOID96 includes information from global positioning system (GPS) height measurements at optically leveled benchmarks. A separate geocentric gravimetric geoid, G96SSS, was first calculated, then datum transformations and least-squares collocation were used to convert from G96SSS to GEOID96. Fits of 2951 GPS/level (ITRF94/NAVD 88) benchmarks to G96SSS show a 15.1-cm root mean square (RMS) around a tilted plane (0.06 ppm, 178° azimuth), with a mean value of −31.4 cm (15.6-cm RMS without plane). This mean represents a bias in NAVD 88 from global mean sea level, remaining nearly constant when computed from subsets of benchmarks. Fits of 2951 GPS/level (NAD 83/NAVD 88) benchmarks to GEOID96 show a 5.5-cm RMS (no tilts, zero average), due primarily to GPS error. The correlated error was 2.5 cm, decorrelating at 40 km, and is due to gravity, geoid and GPS errors. Differences between GEOID96 and GEOID93 range from −122 to +374 cm due primarily to the non-geocentricity of NAD 83.

76 citations


Journal ArticleDOI
TL;DR: Comparison of the results from the least-squares method with those of the robust method shows that the results of the station systematic errors from the robust estimator are more reliable.
Abstract: Methods for analyzing laser-ranging residuals to estimate station-dependent systematic errors and to eliminate outliers in satellite laser ranges are discussed. A robust estimator based on an M-estimation principle is introduced. A practical calculation procedure which provides a robust criterion with high breakdown point and produces robust initial residuals for following iterative robust estimation is presented. Comparison of the results from the least-squares method with those of the robust method shows that the results of the station systematic errors from the robust estimator are more reliable.

Book ChapterDOI
TL;DR: Based on the current best estimates of the fundamental geodetic parameters W0 (Grafarend and Ardalan 1997), GM (Ries et al. 1992), J2 (Lemoine et al. 1996) and Ω (IAG/IUGG Special Commission 3, Darmstadt 1997), the form parameters of a Somigliana-Pizzetti level ellipsoid are computed.
Abstract: Based on the current best estimates of the fundamental geodetic parameters, i.e., W0 (Grafarend and Ardalan, Journal of Geodesy 71 (1997) 673-679), GM (Ries et al., Geophys. Res. Letters 19 (1992) 529-531), J2 (Lemoine et al., GRAGEO-MAR 1996, International Association of Geodesy, Symposia 117 (1996) 461-469) and Ω (given in Internal Communications of IAG/IUGG Special Commission 3, Darmstadt (1997)), the form parameters of a Somigliana-Pizzetti level ellipsoid, namely the semi-major axis a and semi-minor axis b (or equivalently the linear eccentricity ε = √(a² − b²)), are computed. There are six parameters, namely the four fundamental geodetic parameters {W0, GM, J2, Ω} and the two form parameters {a, b} or {a, ε}, which determine the ellipsoidal reference gravity field of Somigliana-Pizzetti type, constrained to two nonlinear condition equations. Their iterative solution leads to best estimates a = (6378136.572 ± 0.053) m, b = (6356751.920 ± 0.052) m, ε = (521853.580 ± 0.013) m for the tide-free geoid of reference and a = (6378136.602 ± 0.053) m, b = (6356751.860 ± 0.052) m, ε = (521854.674 ± 0.015) m for the zero-frequency tide geoid of reference. The best estimates of the form parameters of a Somigliana-Pizzetti level ellipsoid, {a, b}, differ significantly by −0.398 m, −0.454 m, respectively, from the data of the Geodetic Reference System 1980.

Journal ArticleDOI
TL;DR: The necessary theory is presented to diagnose the expected performance of global positioning system (GPS) ambiguity resolution and to evaluate the probabilistic properties of the computed baseline.
Abstract: In current global positioning system (GPS) ambiguity resolution practice there is not yet a rigorous procedure in place to diagnose its expected performance and to evaluate the probabilistic properties of the computed baseline. The necessary theory to bridge this gap is presented. Probabilistic statements about the 'fixed' GPS baseline can be made once its probability distribution is known. This distribution is derived for a class of integer ambiguity estimators. Members from this class are the ambiguity estimators that follow from 'integer rounding', 'integer bootstrapping' and 'integer least squares' respectively. It is also shown how this distribution differs from the one which is usually used in practice. The approximations involved are identified and ways of evaluating them are given. In this comparison the precise role of GPS ambiguity resolution is clarified.

Journal ArticleDOI
TL;DR: The Extended Center for Orbit Determination in Europe (CODE) Orbit Model, an empirical orbit model proposed by Beutler and colleagues in 1994, has been tested extensively since January 1996 as mentioned in this paper.
Abstract: The Extended Center for Orbit Determination in Europe (CODE) Orbit Model, an empirical orbit model proposed by Beutler and colleagues in 1994, has been tested extensively since January 1996. Apart from six osculating Keplerian elements, this orbit model consists of nine (instead of the conventional two) parameters to take into account the deterministic part of the force field acting on the satellites. Based on the test results an improved orbit parameterization is proposed. The new orbit parameterization consists of the conventional two parameters plus three additional parameters, a constant and two periodic terms (a cosine and a sine term), in the X-direction to model the effects of the solar radiation pressure. Results based on one full year of routine orbit estimation, using the original and the new orbit parameterization, are presented to demonstrate the superiority of the new approach. An improvement of the orbit estimates by at least a factor of two is observed.

Journal ArticleDOI
TL;DR: The sensitivity and reliability of GPS positioning over long baselines when temporal correlations are modelled with the aid of a correlation function are discussed and exemplary results are given to illustrate the effects on position and accuracy of GPS stations.
Abstract: Due to the steady progress in global positioning system (GPS) technology and methods of data evaluation, it is possible to obtain highly precise relative point positions also for extensive geodetic networks. However, some limiting influences such as temporal correlations of observational data are neglected in most of the GPS processing programs. Therefore, it is necessary to consider the impact of these neglected correlations on the coordinates and their accuracy measures. In this paper the sensitivity and reliability of GPS positioning over long baselines when temporal correlations are modelled with the aid of a correlation function are discussed. The implementation in the variance–covariance matrix and the subsequent evaluation process require a considerable amount of computing time and memory. Therefore it is necessary to use appropriate numerical methods such as approximated matrix inversion in order to reduce the numerical requirements. After the description of the methodical and numerical handling of the temporal correlations, exemplary results are given to illustrate the effects on position and accuracy of GPS stations.
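To make the modelling step concrete, here is a sketch of how a temporal correlation function turns into a fully populated V–C matrix; the exponential form and parameters are assumptions, not the paper's calibrated function, and the approximate-inversion machinery is omitted:

```python
import numpy as np

def temporal_vcv(times, sigma, corr_time):
    """Variance-covariance matrix of observations from an assumed
    exponential correlation function r(tau) = exp(-|tau|/T). Shows only
    how a correlation function enters the V-C matrix; for long series this
    matrix is large and dense, which is why the paper resorts to
    approximate inversion."""
    tau = np.abs(np.subtract.outer(times, times))   # all time lags |t_i - t_j|
    return sigma**2 * np.exp(-tau / corr_time)
```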

Journal ArticleDOI
TL;DR: In this article, a new iterative procedure to transform geocentric rectangular coordinates to geodetic coordinates is derived, which is sufficiently precise because the resulting relative error is less than 10⁻¹⁵.
Abstract: A new iterative procedure to transform geocentric rectangular coordinates to geodetic coordinates is derived. The procedure solves a modification of Borkowski's quartic equation by the Newton method from a set of stable starters. The new method runs a little faster than the single application of Bowring's formula, which has been known as the most efficient procedure. The new method is sufficiently precise because the resulting relative error is less than 10⁻¹⁵, and this method is stable in the sense that the iteration converges for all coordinates including the near-geocenter region where Bowring's iterative method diverges and the near-polar axis region where Borkowski's non-iterative method suffers a loss of precision.
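For orientation, the classical fixed-point iteration that such methods compete with looks as follows; this is the textbook scheme, not the paper's Newton solver of the modified Borkowski quartic, and it lacks that method's stability near the geocenter and the polar axis:

```python
import math

def xyz_to_geodetic(X, Y, Z, a=6378137.0, f=1/298.257223563, tol=1e-12):
    """Classical fixed-point iteration for geocentric -> geodetic
    coordinates (latitude, longitude in radians; height in metres).
    Shown for orientation only; it behaves poorly near the geocenter and
    the polar axis, which is what the paper's method fixes."""
    e2 = f * (2 - f)                            # first eccentricity squared
    p = math.hypot(X, Y)
    lam = math.atan2(Y, X)
    phi = math.atan2(Z, p * (1 - e2))           # initial latitude guess
    while True:
        N = a / math.sqrt(1 - e2 * math.sin(phi)**2)
        h = p / math.cos(phi) - N
        phi_new = math.atan2(Z, p * (1 - e2 * N / (N + h)))
        if abs(phi_new - phi) < tol:
            return phi_new, lam, h
        phi = phi_new
```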

Journal ArticleDOI
TL;DR: In this paper, a new concept called united ambiguity decorrelation is proposed for multi-dimensional ambiguity decorrelation; the HL process, an approach to united ambiguity decorrelation, performs very well in high-dimension ambiguity decorrelation tests.
Abstract: Ambiguity decorrelation is a useful technique for rapid integer ambiguity fixing. It plays an important role in the least-squares ambiguity decorrelation adjustment (Lambda) method. An approach to multi-dimension ambiguity decorrelation is proposed by the introduction of a new concept: united ambiguity decorrelation. It is found that united ambiguity decorrelation can provide a rapid and effective route to ambiguity decorrelation. An approach to united ambiguity decorrelation, the HL process, is described in detail. The HL process performs very well in high-dimension ambiguity decorrelation tests.
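The goal any such decorrelation pursues can be illustrated with repeated integer Gauss transformations on the ambiguity V–C matrix (the mechanism underlying Lambda-style Z-transforms); this sketch is not the HL process itself:

```python
import numpy as np

def integer_decorrelate(Q, iters=10):
    """Repeated integer Gauss transformations that reduce the correlation
    of an ambiguity V-C matrix Q while keeping the transformation Z integer
    (volume-preserving, so integer estimates can be mapped back). This
    illustrates what any ambiguity-decorrelation step is after; it is not
    the paper's HL process."""
    n = Q.shape[0]
    Z = np.eye(n, dtype=int)
    for _ in range(iters):
        changed = False
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                mu = int(round(Q[i, j] / Q[j, j]))   # integer Gauss multiplier
                if mu != 0:
                    G = np.eye(n, dtype=int)
                    G[i, j] = -mu                    # subtract mu * ambiguity j
                    Q = G @ Q @ G.T                  # decorrelated covariance
                    Z = G @ Z
                    changed = True
        if not changed:
            break
    return Q, Z
```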

Journal ArticleDOI
TL;DR: In this article, the authors present a better understanding of constructive approximation in terms of radial basis functions such as splines and wavelets, as well as the uncertainty principle for the quantification of space and momentum localization of trial functions.
Abstract: Current activities and recent progress on constructive approximation and numerical analysis in physical geodesy are reported upon. Two major topics of interest are focused upon, namely trial systems for purposes of global and local approximation and methods for adequate geodetic application. A fundamental tool is an uncertainty principle, which gives appropriate bounds for the quantification of space and momentum localization of trial functions. The essential outcome is a better understanding of constructive approximation in terms of radial basis functions such as splines and wavelets.

Journal ArticleDOI
TL;DR: In this article, it was shown that the atmospheric geoid correction is mainly of order H of terrain elevation, while the term of order H² is within a few millimetres.
Abstract: The well-known International Association of Geodesy (IAG) approach to the atmospheric geoid correction in connection with Stokes' integral formula leads to a very significant bias, of the order of 3.2 m, if Stokes' integral is truncated to a limited region around the computation point. The derived truncation error can be used to correct old results. For future applications a new strategy is recommended, where the total atmospheric geoid correction is estimated as the sum of the direct and indirect effects. This strategy implies computational gains as it avoids the correction of direct effect for each gravity observation, and it does not suffer from the truncation bias mentioned above. It can also easily be used to add the atmospheric correction to old geoid estimates, where this correction was omitted. In contrast to the terrain correction, it is shown that the atmospheric geoid correction is mainly of order H of terrain elevation, while the term of order H² is within a few millimetres.

Journal ArticleDOI
TL;DR: In this paper, a 3-day period in the Canadian Rocky Mountains over a single 100 × 100 km area which was flown with 10-km line spacing was used to investigate the long-term accuracy and repeatability of the system, as well as its potential for geoid and vertical gradient of gravity determination.
Abstract: In September 1996 the University of Calgary tested a combination of strapdown inertial navigation systems and differential global positioning system (DGPS) receivers for their suitability to determine gravity at aircraft flying altitudes. The purpose of this test was to investigate the long-term accuracy and repeatability of the system, as well as its potential for geoid and vertical gradient of gravity determination. The test took place during a 3-day period in the Canadian Rocky Mountains over a single 100 × 100 km area which was flown with 10-km line spacing. Two flights were done at 4350 m in E–W and N–S profile directions, respectively, and one at 7300 m with E–W profiles. Two strapdown inertial systems, the Honeywell LASEREF III and the Litton-101 Flagship, were flown side by side. Comparison of the system estimates with an upward-continued reference showed root-mean-square (RMS) agreement at the level of 3.5 mGal for 90- and 120-s filter lengths. The LASEREF III, however, performed significantly better than the Litton 101 for shorter filtering periods of 30 and 60 s. A comparison between the two systems results in an RMS agreement of 2.8 and 2.3 mGal for the 90- and 120-s filters. The better agreement between the two systems is mainly due to the fact that the upward-continued reference has not been filtered identically to the system gravity disturbance estimates. Additional low-frequency differences seem to point to an error in the upward-continued reference. Finally, an analysis of crossover points between flight days for the LASEREF III shows a standard deviation of 1.6 mGal, which is near the noise level of the INS and GPS data. Further improvements to the system are possible, and some ideas for future work are briefly presented.

Journal ArticleDOI
TL;DR: In this article, the authors used geodetic measurements from 1963 through 1994 to estimate horizontal strain rates across the Red River fault near Thac Ba, Vietnam, where the estimated rates of shear strain in ten triangular subnetworks surrounding the fault trace are not significantly different from zero at 95% confidence.
Abstract: Geodetic measurements from 1963 through 1994 are used to estimate horizontal strain rates across the Red River fault near Thac Ba, Vietnam. Whether or not this fault system is currently active is a subject of some debate. By combining: (1) triangulation from 1963, (2) triangulation in 1983, and (3) Global Positioning System (GPS) observations in 1994, horizontal shear strain rates are estimated without imposing any prior information on fixed stations. The estimated rates of shear strain in ten triangular subnetworks surrounding the fault trace are not significantly different from zero at 95% confidence. The maximum rate of dextral shear is less than 0.3 μrad/year in all but one of the triangles. The estimates help bound the slip rate in a simple elastic dislocation model for a locked, vertical strike-slip fault. By assuming a locking depth of 5–20 km, the most likely values for the deep slip rate are between 1 and 5 mm/year of right-lateral motion. These values delimit the 23% confidence interval. At 95% confidence, the slip rate estimate falls between 7 mm/year of left-lateral motion and 15 mm/year of right-lateral motion.
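The elastic dislocation model invoked here is the classical screw-dislocation profile v(x) = (s/π)·arctan(x/D) for a vertical strike-slip fault locked to depth D; a tiny sketch with example values:

```python
import numpy as np

def interseismic_velocity(x_km, slip_rate_mm_yr, locking_depth_km):
    """Fault-parallel surface velocity of the classical elastic
    screw-dislocation model for a locked vertical strike-slip fault:
    v(x) = (s/pi) * arctan(x/D), x being the distance from the fault
    trace. Parameter values below are examples only."""
    return (slip_rate_mm_yr / np.pi) * np.arctan(np.asarray(x_km) / locking_depth_km)

# e.g. 3 mm/yr of deep slip below a 10-km locking depth
print(interseismic_velocity([-50, -10, 0, 10, 50], 3.0, 10.0))
```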

Journal ArticleDOI
TL;DR: In this paper, the horizontal distribution of the wet delay due to water vapour is modeled using ground surface data and is thus often estimated from the GPS data, and three such models are evaluated through simulations, one of which is developed in this paper.
Abstract: Permanently operating Global Positioning System (GPS) receivers are used today, for example, in precise positioning and determination of atmospheric water vapour content. The GPS signals are delayed by various gases when traversing the atmosphere. The delay due to water vapour, the wet delay, is difficult to model using ground surface data and is thus often estimated from the GPS data. In order to obtain the most accurate results from the GPS processing, a modelling of the horizontal distribution of the wet delay may be necessary. Through simulations, three such models are evaluated, one of which is developed in this paper. In the first model the water vapour is assumed to be horizontally stratified, thus the wet delay can be described by only one zenith parameter. The second model gives the wet delay with one zenith and two horizontal gradient parameters. The third model uses the correlation between the wet-delay values in different directions. It is found that for large gradients and strong turbulence the two latter models yield lower errors in the estimated vertical coordinate and wet-delay parameters. For large gradients this improvement is up to 7 mm in the zenith wet-delay parameter, from 9 mm down to 2 and 4 mm for the second and third models, respectively.
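The second model (one zenith parameter plus two horizontal gradients) is typically written as L(e, A) = m(e)·[Z + cot(e)·(G_N cos A + G_E sin A)]; a sketch with a deliberately crude 1/sin(e) mapping function (an assumption, not the paper's choice):

```python
import math

def slant_wet_delay(elev_rad, azim_rad, zwd_m, gn_m=0.0, ge_m=0.0):
    """Slant wet delay under the 'zenith + horizontal gradient' model:
    a wet mapping function times the zenith wet delay, plus a cot(e)-type
    gradient term with north/east gradient parameters. The 1/sin(e)
    mapping function is a simplification for illustration."""
    m = 1.0 / math.sin(elev_rad)                    # crude wet mapping function
    gradient = gn_m * math.cos(azim_rad) + ge_m * math.sin(azim_rad)
    return m * (zwd_m + (1.0 / math.tan(elev_rad)) * gradient)
```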

Journal ArticleDOI
TL;DR: In this article, a combination of the classical integral formula and a set of spherical harmonic coefficients of the topography to, say, degree and order 360 is recommended for practical use.
Abstract: The classical integral formula for determining the indirect effect in connection with the Stokes–Helmert method is related to a planar approximation of the sea level. A strict integral formula, as well as some approximations to it, are derived. It is concluded that the cap-size truncated integral formulas will suffer from the omission of some long-wavelength contributions, of the order of 50 cm in high mountains for the classical formula. This long-wavelength information can be represented by a set of spherical harmonic coefficients of the topography to, say, degree and order 360. Hence, for practical use, a combination of the classical formula and a set of spherical harmonics is recommended.

Journal ArticleDOI
TL;DR: In this article, the sensitivity of some characteristic results of least-squares adjustments such as the estimated values of the parameters and their variance-covariance matrix due to imminent uncertainties of the stochastic model is discussed in detail.
Abstract: A proper perturbation theory of a mathematical model and the quantities derived by means of least-squares adjustments is indispensable if the results have to be interpreted in a wider context. The sensitivity of some characteristic results of least-squares adjustments such as the estimated values of the parameters and their variance–covariance matrix due to imminent uncertainties of the stochastic model is discussed in detail. Linearizations are used with rigorous error measures and interval mathematics. Numerical examples conclude the investigations.

Journal ArticleDOI
TL;DR: In this paper, a 2×2 arc-minute resolution geoid model, CARIB97, has been computed covering the Caribbean Sea covering the GRS-80 ellipsoid, centered at the ITRF94 (1996) origin.
Abstract: A 2×2 arc-minute resolution geoid model, CARIB97, has been computed covering the Caribbean Sea. The geoid undulations refer to the GRS-80 ellipsoid, centered at the ITRF94 (1996.0) origin. The geoid level is defined by adopting the gravity potential on the geoid as W0=62 636 856.88 m2/s2 and a gravity-mass constant of GM=3.986 004 418×1014 m3/s2. The geoid model was computed by applying high-frequency corrections to the Earth Gravity Model 1996 global geopotential model in a remove-compute-restore procedure. The permanent tide system of CARIB97 is non-tidal. Comparison of CARIB97 geoid heights to 31 GPS/tidal (ITRF94/local) benchmarks shows an average offset (h–H–N) of 51 cm, with an Root Mean Square (RMS) of 62 cm about the average. This represents an improvement over the use of a global geoid model for the region. However, because the measured orthometric heights (H) refer to many differing tidal datums, these comparisons are biased by localized permanent ocean dynamic topography (PODT). Therefore, we interpret the 51 cm as partially an estimate of the average PODT in the vicinity of the 31 island benchmarks. On an island-by-island basis, CARIB97 now offers the ability to analyze local datum problems which were previously unrecognized due to a lack of high-resolution geoid information in the area.

Journal ArticleDOI
TL;DR: In this article, a high-precision device was properly built to set up known displacements along three orthogonal axes of a GPS antenna, and one of the antennas in the considered GPS networks was moved according to centimeter and sub-centimeter displacements; after careful GPS data processing, it was evaluated whether these simulated deformations were correctly a posteriori detected and at which probability level.
Abstract: This paper illustrates the surveys and the results obtained in an experiment whose goal is to evaluate the Global Positioning System (GPS) sensitivity and accuracy for deformation control on non-permanent networks of different extensions. To this aim a high-precision device was purpose-built to set up known displacements along three orthogonal axes of a GPS antenna. One of the antennas in the considered GPS networks was moved according to centimeter and sub-centimeter displacements; after careful GPS data processing, it was evaluated whether these simulated deformations were correctly detected a posteriori, and at which probability level. This experiment was carried out both on a local (baselines ranging between 3 and 30 km) and on a regional (baselines ranging between 300 and 600 km) GPS network. The results show that in the local network it is possible to identify the displacements at a level of 10 mm in height and at a level of 5 mm in horizontal position. The analysis of the regional network showed that it is fundamental to investigate new strategies to model the troposphere; in fact, it is necessary to improve the precision of the height in order to correctly identify displacements lower than 60–80 mm; on the contrary, horizontal displacements can be evidenced at the level of 20 mm.

Journal ArticleDOI
TL;DR: An algorithm is given for constructing a generating independent set of double differences from the Boolean array of receiving-station/satellite connections and characterizations of generator equivalence allow alternative generating sets to be identified and selected.
Abstract: When a collection of double differences is used to compute global-positioning-system satellite orbits from a permanent network of receiving stations, linear dependence among the double-differenced observations reduces the number of double differences that contribute new information to the computations. A maximal linearly independent subset of a large collection of double differences contains all the information content of the full set. If r is the number of receivers and s is the number of satellites, the original collection of double differences may have size O(r²s²), whereas the linearly independent subset has size no greater than O(rs). Only such a smaller independent subset needs to participate in the more expensive double-precision matrix computations to correctly correlate all double differences, detect cycle slips, resolve ambiguities, and compute satellite orbits and station positions and relative velocities. Dependence among double differences is characterized using vector space methods together with geometric characterizations of Boolean matrices. These characterizations lend themselves to fast, robust algorithms for computing maximal linearly independent sets (bases) of double differences. An algorithm is given for constructing a generating independent set of double differences from the Boolean array of receiving-station/satellite connections. Characterizations of generator equivalence allow alternative generating sets to be identified and selected. An updating algorithm to handle local changes in the satellite–receiver connection matrix is also described.
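The counting behind the independent subset can be seen on the bipartite receiver–satellite graph: every observation edge that closes a cycle over a spanning forest contributes one independent double difference. A union-find sketch of that idea (not the paper's Boolean-matrix algorithm):

```python
def independent_loops(observations):
    """Select a maximal independent set of observation 'loops' from the
    station/satellite connection list (pairs (station, satellite)).
    Each edge of the bipartite graph that closes a cycle over a spanning
    forest corresponds to one independent double difference; tree edges
    carry no new DD information. A sketch of the graph idea only."""
    parent = {}

    def find(u):
        parent.setdefault(u, u)
        while parent[u] != u:
            parent[u] = parent[parent[u]]            # path halving
            u = parent[u]
        return u

    loops = []
    for stn, sat in observations:
        ru, rv = find(('stn', stn)), find(('sat', sat))
        if ru == rv:
            loops.append((stn, sat))                 # closes a cycle -> independent DD
        else:
            parent[ru] = rv                          # spanning-forest edge
    return loops

# 2 stations x 2 satellites, all 4 observations -> exactly one independent DD
print(independent_loops([(1, 'G01'), (1, 'G02'), (2, 'G01'), (2, 'G02')]))
```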

Journal ArticleDOI
TL;DR: In this paper, an alternative expression for the variance factors when estimating weights according to the Iterated Almost Unbiased Estimation (IAUE) technique is derived for the first-order leveling network of Taiwan.
Abstract: An alternative expression has been derived for the variance factors when estimating weights according to the Iterated Almost Unbiased Estimation (IAUE) technique. A variance factor can be approximately estimated by finding the ratio of the group redundancy numbers at any two successive iterations. The numerical example of the first-order leveling network of Taiwan indicates: that stabilization of the redundancy number of an observation occurs as the variance factor associated with it converges to unity; that variance factors tend to approach unity for individual groups with large redundancy; and that for those variance factors which fail to converge to unity, the redundancy numbers decrease monotonically as the iteration proceeds.

Journal ArticleDOI
TL;DR: In this paper, a new method for the estimation of variance components is presented, which combines the concept of maximum-likelihood estimation with the Bayesian approach and facilitates computationally efficient introduction of prior information into the estimation process.
Abstract: A new method for the estimation of variance components is presented. The proposed method combines the concept of maximum-likelihood estimation with the Bayesian approach and facilitates computationally efficient introduction of prior information into the estimation process.

Book ChapterDOI
Peiliang Xu
TL;DR: A number of approaches have been proposed for nonlinear filtering with different accuracies as discussed by the authors; they are mainly investigated from the Bayesian point of view, and may be classified into three kinds of methods: linearization, statistical approximation and Monte Carlo simulation.
Abstract: Dynamical systems encountered in reality are essentially nonlinear. A number of approaches with different accuracies have been proposed for nonlinear filtering. They are mainly investigated from the Bayesian point of view, and may be classified into three kinds of methods: linearization, statistical approximation and Monte Carlo simulation (Jazwinski 1970; Tanizaki 1993; Gelb 1994). Very often, linearization of the nonlinear system is done using either a precomputed nominal trajectory or the estimate of the state vector. This second linearization approach is better known as the extended Kalman filter (Jazwinski 1970). Since the solution based on one-step linearization may be poor for a highly nonlinear system, iteration in this case is expected in order to obtain a more accurate estimate. Higher-order approaches should probably be taken into account. It should be noted, however, that if the system presents a significant nonlinearity, even the mean and covariance matrix of the nonlinear filter can be misleading, since the means of the estimated state variables may deviate appreciably from their true parameter values. The basic idea of statistical approximation is to replace a nonlinear function of random variables by a series expansion (Gelb 1994) or to approximate the a posteriori conditional probability density function of the state vector (Sorenson & Stubberud 1968; Kramer & Sorenson 1988). The Monte Carlo simulation technique may be used to determine the mean and covariance matrix of the nonlinear filter (if properly designed), which requires a large sample to obtain statistically meaningful results (see e.g. Brown & Mariano 1989; Carlin et al. 1992; Gelb 1994; Tanizaki 1993).
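Since the extended Kalman filter is the reference point for the linearization class, here is a minimal sketch of one EKF step with linearization about the current state estimate; all function names are supplied by the caller and illustrative:

```python
import numpy as np

def ekf_step(x, P, z, f, F_jac, h, H_jac, Q, R):
    """One step of the extended Kalman filter: the nonlinear system is
    linearized about the current state estimate, the 'second linearization
    approach' described in the text. f/h are the nonlinear state and
    measurement functions, F_jac/H_jac their Jacobians."""
    # Prediction through the nonlinear model, covariance through its Jacobian
    F = F_jac(x)
    x = f(x)
    P = F @ P @ F.T + Q
    # Update: linearize the measurement model about the predicted state
    H = H_jac(x)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - h(x))
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```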