
Showing papers in "Acta Geodaetica Et Geophysica Hungarica in 2019"


Journal ArticleDOI
TL;DR: An edge detector method for the enhancement of potential field anomalies, based on the logistic function of the total horizontal gradient, is tested on synthetic data calculated using 3 models and on real magnetic and gravity data from Vietnam, demonstrating that the method is a useful tool for the qualitative interpretation of potential field data.
Abstract: Locating the edges of anomalous bodies provides a fundamental tool in the geologic interpretation of potential field data. This paper compares the effectiveness of the commonly used edge detection methods such as the total horizontal gradient, analytic signal, tilt angle, theta map and their modified versions in terms of their accuracy on the determination of edges of source bodies. This paper also introduces an edge detector method for the enhancement of potential field anomalies, which is based on the logistic function of the total horizontal gradient. The new method is tested on synthetic data calculated using 3 models, and also on real magnetic and gravity data from Vietnam. The effectiveness of the method is evaluated by comparing the results with those of other popular methods. These results demonstrate that the method is a useful tool for the qualitative interpretation of potential field data.
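As a rough illustration of the idea rather than the paper's exact formula, the sketch below computes the total horizontal gradient (THG) of a gridded field by finite differences and passes a normalized version through a logistic (sigmoid) function; the grid, spacing and the sharpness parameter `alpha` are assumed for the example.

```python
import numpy as np

def lthg(field, dx=1.0, dy=1.0, alpha=10.0):
    """Illustrative logistic-of-THG edge map for a gridded potential field.

    field : 2-D array of gravity/magnetic values on a regular grid
    dx, dy: grid spacing; alpha: sharpness of the logistic function
    (all assumed here for the sketch).
    """
    gy, gx = np.gradient(field, dy, dx)           # derivatives along rows/cols
    thg = np.hypot(gx, gy)                        # total horizontal gradient
    t = thg / (thg.max() + 1e-12)                 # dimensionless logistic argument
    # the logistic function sharpens the THG maxima that mark source edges
    return 1.0 / (1.0 + np.exp(-alpha * (t - 0.5)))

# toy example: a smooth synthetic anomaly over a buried body
x, y = np.meshgrid(np.linspace(-50, 50, 201), np.linspace(-50, 50, 201))
g = np.exp(-((x / 15) ** 2 + (y / 25) ** 2))
edges = lthg(g, dx=0.5, dy=0.5)
print(edges.shape, float(edges.min()), float(edges.max()))
```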

47 citations


Journal ArticleDOI
TL;DR: In this paper, gravity anomalies over the Burdur sedimentary basin were inverted for the first time in terms of mapping its basement relief, and the algorithm used for inverting the gravity anomalies provided accurate depth estimates by incorporating an exponential increase in density with depth into its inversion procedure.
Abstract: The study area comprises the NE–SW trending Burdur Basin situated at the tectonically active northeastern part of the Fethiye–Burdur Fault Zone (FBFZ), SW Turkey. The basin demonstrates a half-graben geometry hosting lacustrine sedimentary deposits from the Late Miocene onward and is bounded on its southern side by normal faults, namely the Burdur Fault Zone. In this study, gravity anomalies over the Burdur sedimentary basin were inverted for the first time in terms of mapping its basement relief. The algorithm used for inverting the gravity anomalies provides accurate depth estimates by incorporating an exponential increase in density with depth into its inversion procedure. The obtained depth configuration thus also yields a major improvement over the depths of the sedimentary infill reported previously by other studies that used a constant density contrast in their interpretation. Along the east of the Burdur Fault, from south to north, the basement depth at the southern end of the Burdur Basin is ca 1.8 km and gets shallower to ca 0.6 km towards the north around the Burdur city. The deepest section of the basin is ca 3.2 km on the western side of the Burdur Fault, close to the southern end of the Burdur Lake. Towards the north, outside the depression area of the Burdur Basin, the sedimentary infill is in the range of 0.4–1.2 km. The lateral limits of the basin structure have also been outlined by a recent edge detection method based on the logistic function of the total horizontal gradient (LTHG). The LTHG map related to the Burdur Basin shows maximal amplitudes trending NE–SW as two major lines that clearly delineate the segments of the Burdur Fault Zone to the S-SE of Burdur Lake. A cross-section through the inverted basin depth model, perpendicular to the regional strike of the basin, shows a two-step depositional area of the sedimentary fill, confirming a half-graben geometry.
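As a back-of-the-envelope illustration of why a depth-varying density matters, the sketch below inverts a residual gravity value for basement depth under an infinite-slab approximation in which the sediment–basement density contrast decays exponentially with depth (i.e., sediment density increases toward the basement value); the contrast `rho0`, decay constant `lam` and anomaly values are assumed, and this is not the prism-based inversion algorithm of the paper.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def slab_depth(g_mgal, rho0=-500.0, lam=0.5e-3):
    """First-guess basement depth (m) from a residual gravity anomaly (mGal)
    for an infinite slab whose density contrast decays exponentially,
    rho(z) = rho0 * exp(-lam * z).  rho0 (kg/m^3) and lam (1/m) are assumed
    example values, not those of the Burdur study."""
    g = np.asarray(g_mgal, dtype=float) * 1e-5        # mGal -> m/s^2
    # slab effect: g = 2*pi*G*rho0*(1 - exp(-lam*h))/lam  ->  solve for h
    arg = 1.0 - lam * g / (2.0 * np.pi * G * rho0)
    return -np.log(arg) / lam

print(slab_depth([-10.0, -25.0, -40.0]))   # depths in metres
```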

33 citations


Journal ArticleDOI
TL;DR: The performance of incremental smoothing is compared with state-of-the-art fusion algorithms based on the EKF, and it is shown that the incremental smoothing algorithm can achieve real-time positioning while exhibiting stronger robustness against intermittent noise, continuous noise and continuous interruptions of UWB data.
Abstract: Ultra wide band (UWB) sensors are widely used for indoor positioning; however, in many practical scenarios UWB signals are obscured by people, goods or other obstacles. This results in signal intensity attenuation, multipath effects and even signal loss, which causes a sharp decline in positioning accuracy. Fusion of pedestrian dead-reckoning (PDR) and UWB is an effective method to achieve high-accuracy positioning under non-line-of-sight conditions. While traditionally Bayesian filters, such as the extended Kalman filter (EKF) and the particle filter, have been used for UWB/PDR fusion, recently incremental smoothing has been shown to achieve high accuracy in other application domains. In this paper, incremental smoothing based on the Tukey kernel function is proposed to fuse UWB and PDR data. We compare the performance of incremental smoothing with state-of-the-art fusion algorithms based on the EKF, and show that the incremental smoothing algorithm can achieve real-time positioning while exhibiting stronger robustness against intermittent noise, continuous noise and continuous interruptions of UWB data.
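The Tukey (biweight) kernel mentioned above drives the weight of large residuals to zero. The sketch below shows the kernel and a toy iteratively reweighted estimate of a 1-D position from UWB ranges containing one outlier; it is a generic robust-estimation illustration, not the paper's factor-graph smoother, and the tuning constant and range values are assumed.

```python
import numpy as np

def tukey_weight(residual, c=4.685):
    """Tukey (biweight) kernel: weight of a normalised residual.
    c is the usual tuning constant; applied here in a generic robust
    estimator, not tied to any specific smoothing library."""
    r = np.abs(residual) / c
    w = (1.0 - r ** 2) ** 2
    w[r >= 1.0] = 0.0           # residuals beyond c are rejected entirely
    return w

# robust 1-D position estimate from noisy UWB ranges (toy illustration)
ranges = np.array([5.0, 5.1, 4.9, 9.5, 5.05])    # one gross outlier
x = ranges.mean()
for _ in range(10):                               # iteratively reweighted mean
    res = ranges - x
    sigma = 1.4826 * np.median(np.abs(res)) + 1e-12
    w = tukey_weight(res / sigma)
    x = np.sum(w * ranges) / np.sum(w)
print(round(x, 3))    # close to 5.0, the outlier is down-weighted
```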

21 citations


Journal ArticleDOI
TL;DR: REFMC is a comprehensive catalogue of focal mechanisms for Romanian earthquakes which occurred between 1929 and 2000 in the Carpathian Orogen, the Moesian and Moldavian Platforms, and the Transylvanian Basin.
Abstract: The purpose of this paper is to present the most comprehensive catalogue of focal mechanisms for Romanian earthquakes which occurred between 1929 and 2000 in the Carpathian Orogen, the Moesian and Moldavian Platforms, and the Transylvanian Basin. The present catalogue (REFMC) is a first step toward creating a centralized and continuous database of earthquake mechanisms in Romania by revising and updating existing data for the twentieth century, which, together with the Romanian earthquake catalogue (ROMPLUS), continuously updated by the National Institute for Earth Physics, provides the fundamental information for any seismicity or seismic hazard assessment. In order to produce a close-to-definitive version compatible with more recent and less uncertain focal-mechanism solutions, we revised multiple sets of data (some of them newly found), recalculated and corrected some of the fault-plane solutions and reached a consensus. The catalogue comprises 250 crustal events and 416 intermediate-depth events recorded in the twentieth century starting from 1929. On the basis of the new catalogue data and a seismotectonic investigation, we propose a reconfiguration of the seismogenic zones located along the Southern Carpathians toward the western side of Romania.

13 citations


Journal ArticleDOI
TL;DR: In this article, the authors constructed a high-resolution 3D P-wave velocity model of the crust and uppermost mantle in the Pannonian Basin which may help us to understand better the structure and evolution of the region.
Abstract: The Pannonian region is a back-arc basin located within the arcuate Alpine–Carpathian mountain chain in central Europe. Beneath the basin both the crust and the lithosphere are thinner than the continental average. During the last few decades several studies have been published to explain the formation of the Pannonian Basin, but several key questions remain unanswered. In this study we construct a new high-resolution 3D P-wave velocity model of the crust and uppermost mantle in the Pannonian Basin, which may help us to better understand the structure and evolution of the region. For the 3D P-wave velocity structure estimation, over 32 thousand traveltime picks were derived from the ISC bulletin and the local Hungarian National Seismological Bulletin, and altogether we used more than 3200 seismic events (local, near-regional and regional) and more than 150 seismic stations from the time period between 2004 and 2014. For the 3D velocity field inversion we used the FMTOMO software package, which uses the so-called Fast Marching Method for traveltime estimation and the subspace inversion method to recover the model parameters. We also performed several checkerboard tests, both to select the appropriate regularization parameters and to help the interpretation of the resulting P-wave velocity model. On the resulting tomographic image the seismic velocity anomalies resolve well the effects of deep sedimentary basins as well as the Moho topography and the associated updomings of the asthenosphere below the Pannonian Basin. Major tectonic units and the fault zones separating them seem to show characteristic velocity anomalies. Subrecent volcanic activity or the associated melt and fluid percolation, and heat transfer in the upper mantle and crust, may also have an impact on the propagation of seismic waves.

12 citations


Journal ArticleDOI
TL;DR: In this paper, the authors presented a rigid iterative algorithm of Helmert transformation using a unit dual quaternion and showed that the accuracy of the computed parameters is comparable to that of the classic Procrustes algorithm from Grafarend and Awange.
Abstract: The rigid motion involving both rotation and translation in 3D space can be simultaneously described by a unit dual quaternion. Considering this excellent property, the paper constructs the Helmert transformation (seven-parameter similarity transformation) model based on a unit dual quaternion and then presents a rigid iterative algorithm of Helmert transformation using a unit dual quaternion. Because of the singularity of the coefficient matrix of the normal equation, the nine-parameter (one scale factor and eight parameters of a dual quaternion) Helmert transformation model is reduced to a five-parameter (one scale factor and four parameters of a unit quaternion, which represents the rotation matrix) Helmert transformation model. Besides, since a good initial estimate of the parameters is required for the iterative algorithm, another algorithm employed to compute this initial value is put forward. Numerical experiments involving a case of small rotation angles, i.e. geodetic coordinate transformation, and a case of big rotation angles, i.e. the registration of LIDAR points, are studied. The results show that the presented algorithms are correct and valid for both cases, regardless of whether the rotation angles are big or small, and that the accuracy of the computed parameters is comparable to the classic Procrustes algorithm from Grafarend and Awange (J Geod 77:66–76, 2003), the orthonormal matrix algorithm from Zeng (Earth Planets Space 67:105, 2015), and the algorithm from Wang et al. (J Photogramm Remote Sens 94:63–69, 2014).
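For reference, the forward model being estimated is the seven-parameter similarity transform X' = s R X + t with the rotation parameterized by a unit quaternion. The sketch below builds R from a quaternion and applies the transform to a few points; it illustrates the model only, not the dual-quaternion estimation algorithm of the paper, and the sample scale, quaternion and translation are assumed.

```python
import numpy as np

def quat_to_rot(q):
    """Rotation matrix from a unit quaternion q = (q0, q1, q2, q3)."""
    q0, q1, q2, q3 = q / np.linalg.norm(q)
    return np.array([
        [q0*q0 + q1*q1 - q2*q2 - q3*q3, 2*(q1*q2 - q0*q3),             2*(q1*q3 + q0*q2)],
        [2*(q1*q2 + q0*q3),             q0*q0 - q1*q1 + q2*q2 - q3*q3, 2*(q2*q3 - q0*q1)],
        [2*(q1*q3 - q0*q2),             2*(q2*q3 + q0*q1),             q0*q0 - q1*q1 - q2*q2 + q3*q3],
    ])

def helmert(points, scale, q, t):
    """Forward 7-parameter similarity transform X' = s * R(q) * X + t."""
    return scale * points @ quat_to_rot(q).T + t

# toy check: small rotation about z, scale close to 1, translation
src = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
q = np.array([np.cos(0.01), 0.0, 0.0, np.sin(0.01)])   # ~1.15 deg about z
dst = helmert(src, 1.0 + 5e-6, q, np.array([10.0, 20.0, 30.0]))
print(dst)
```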

10 citations


Journal ArticleDOI
TL;DR: Experimental results show that clock offset prediction accuracy can be improved by 7.3% compared with that of the traditional method when the method that considers correlation among satellite clocks is used.
Abstract: During the on-orbit operation of BeiDou satellites, the onboard atomic clocks of these satellites are easily affected by changes in the space environment. Moreover, the clock offsets of BeiDou satellites derived from multi-satellite clock estimation exhibit correlation with one another. In this study, the correlation among the clock offsets of BeiDou satellites is analyzed and the influence of this correlation on clock offset prediction accuracy is investigated. To obtain accurate analysis results, the Baarda outlier detection method is first improved. The improved method can effectively eliminate small errors in clock offset data. Then, the correlation coefficients among BeiDou satellites are calculated, and a method based on the correlation among BeiDou satellite clocks is used to predict the clock offsets of the satellites. Experimental results show that clock offset prediction accuracy can be improved by 7.3% compared with that of the traditional method when the method that considers correlation among satellite clocks is used.

10 citations


Journal ArticleDOI
TL;DR: An improved model for BDS satellite ultra-rapid clock offset prediction based on BDS-2 and BDS-3 combined estimation was found to have a significant effect on optimizing the ultra-rapid clock products of the International GNSS Monitoring and Assessment Service and GNSS analysis centers.
Abstract: Ultra-rapid clock products provide the main parameters for real-time or near real-time precise point positioning services. However, it has been found that BeiDou ultra-rapid clock offsets do not meet the requirements for high-accuracy applications because of their low accuracy, especially regarding the prediction parts. This study proposes an improved model for BDS satellite ultra-rapid clock offset prediction based on BDS-2 and BDS-3 combined estimation. First, a preprocessing of the clock offsets based on frequency data and a denoising method employing a Tikhonov regularization algorithm were introduced to refine the observed series for predictive modeling. Second, given the coexistence of BDS-2 and BDS-3 satellites and the advantages of the BDS-3 onboard atomic clock, inter-satellite correlations between different satellites were used to adjust the stochastic function in estimating the coefficients of the prediction model. Third, to further improve the accuracy of the prediction model, the residuals of the clock offsets were analyzed by partial least squares regression, in which the main components related to the clock offsets were modeled by a back-propagation neural network. Six experimental schemes were introduced to verify the improved model. The experiments were divided into two groups to compare the preprocessing strategy and the prediction model. The experimental results indicated: (1) both the BDS-2 and BDS-3 predicted clock offsets were mutually beneficial in the improved model; (2) because of the lower quality of the observed clock offsets from BDS-3, preprocessing improved the prediction accuracy by 1.0–15.2% for BDS-2 and by 23.2–31.9% for BDS-3; (3) the accuracy of the clock offsets was improved by 30.7–47.3% for BDS-2 and by 49.9–59.3% for BDS-3 within an 18-h period. The proposed improved model was found to have a significant effect on optimizing the ultra-rapid clock products of the International GNSS Monitoring and Assessment Service and GNSS analysis centers.

9 citations


Journal ArticleDOI
TL;DR: Fault plane solutions for earthquakes recorded in Romania (1929–2012) are analyzed on three depth levels: crust (0–50 km), upper segment (50–110 km) and lower segment (110–201 km), as mentioned in this paper.
Abstract: Fault plane solutions for earthquakes recorded in Romania (1929–2012) are analysed on three depth levels: crust (0–50 km), upper segment (50–110 km) and lower segment (110–201 km). For the Vrancea intermediate-depth source, reverse faulting is predominant. However, local-scale variations occur at the upper and lower limits of the active volume: normal faulting (upper side) and strike-slip with normal faulting (lower side). These edge effects are probably caused by the interaction of the cold descending lithosphere with the hot surrounding asthenosphere. Fault plane solutions of crustal earthquakes reflect complicated patterns associated with local stress sources perturbing the regional field. One important result of our analysis is the delimitation of specific active alignments in the North Dobrogea Orogen, the Bârlad Depression, and the Danubian and Banat zones, while seismicity is diffuse and close to a random distribution in the other seismogenic zones. The polar diagrams for azimuthal and dip angle distributions and the ternary diagrams for the P, T and B axes show a prevalence of reverse faulting in the Vrancea intermediate-depth source, strike-slip in combination with normal faulting in the South Carpathians and Banat region, and a deficit of strike-slip faulting south-east of the Carpathians. The lack of a strike-slip component suggests that the deformation field in the Carpathians Foredeep is controlled not by transcurrent deformation along the major faults crossing the region, but rather by subsidence and folding processes acting as stress-release mechanisms in the crust in response to the intense tectonic processes beneath the Vrancea region.

9 citations


Journal ArticleDOI
Zhangzhen Sun, Tianhe Xu, Chunhua Jiang, Yuguo Yang, Nan Jiang
TL;DR: In this paper, the main period term of the Polar Motion (PM) is extracted and reconstructed by the Fourier Transform Band-Pass Filter, which indicates that the Chandler amplitude decayed to its lowest state in 2016 and has since entered the next growth stage.
Abstract: Earth Rotation Parameters (ERP) are indispensable in the transformation between the Celestial Reference Frame and the Terrestrial Reference Frame, and significant for high-precision space navigation and positioning. As a key parameter in ERP, Polar Motion (PM) is of great importance in analyzing and understanding the dynamic interaction between the solid Earth, atmosphere, ocean and other geophysical fluids. The diverse excitations and complex motion mechanisms of PM make its high-precision prediction difficult. In this study, the characteristics of PM from 1962 to 2018 are first analyzed. The main period term of PM is extracted and reconstructed by the Fourier Transform Band-Pass Filter, which indicates that the Chandler amplitude decayed to its lowest state in 2016 and has since entered the next growth stage. More importantly, a Retrograde Semi-annual Wobble (RSAW) is detected and confirmed for the first time. Secondly, the contributions of the Retrograde Annual Wobble (RAW) and RSAW terms to PM are analyzed and compared. Results demonstrate that the magnitudes of the RAW and RSAW terms in PM from 1962 to 2018 are about 3–8 mas. Finally, in view of the existence of RAW and RSAW in PM, an improved PM prediction algorithm based on least squares and an autoregressive model (LS + AR), considering the influence of RAW and RSAW, is developed. The results show that the inclusion of the RAW term can effectively improve the accuracy of the LS + AR model in the prediction span of 1–360 days for both components of PM. Besides, considering the RSAW term, the prediction accuracy can be further improved in the prediction spans of 50–310 days for the x component of PM, and in the prediction spans of 50–180 days for the y component of PM.
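To make the band-pass idea concrete, the sketch below isolates a Chandler-like term from an evenly sampled polar-motion-style series by zeroing Fourier coefficients outside a chosen period band; the band limits, sampling interval and synthetic amplitudes are assumed, and the paper's actual filter design may differ.

```python
import numpy as np

def fft_bandpass(series, dt_days, period_band=(410.0, 460.0)):
    """Extract one periodic term from an evenly sampled series by zeroing
    Fourier coefficients outside a period band (days).  The Chandler band
    used here (410-460 d) is an assumed example, not the paper's setting."""
    n = len(series)
    spec = np.fft.rfft(series - series.mean())
    freq = np.fft.rfftfreq(n, d=dt_days)              # cycles per day
    lo, hi = 1.0 / period_band[1], 1.0 / period_band[0]
    mask = (freq >= lo) & (freq <= hi)
    spec[~mask] = 0.0
    return np.fft.irfft(spec, n)

# synthetic polar-motion-like series: Chandler (433 d) + annual (365.25 d) + noise
t = np.arange(0, 20 * 365.25, 1.0)
pm = 150 * np.cos(2 * np.pi * t / 433.0) + 90 * np.cos(2 * np.pi * t / 365.25)
pm += 5 * np.random.default_rng(0).standard_normal(t.size)
chandler = fft_bandpass(pm, 1.0)
print(chandler[:3])
```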

8 citations


Journal ArticleDOI
TL;DR: In this article, the authors demonstrate a strategy of investigating the recharge potential of a shallow aquifer in a hard rock terrain through flow in a fractured zone using the self-potential (SP) anomaly and other ancillary (published) data related to the area of investigation.
Abstract: We demonstrate a strategy of investigating the recharge potential of a shallow aquifer in a hard rock terrain through flow in a fractured zone using the self-potential (SP) anomaly and other ancillary (published) data related to the area of investigation. Our study area is Vilarelho da Raia, northern Portugal, which possesses a shallow aquifer within a granitic terrain. We explain the phenomenon responsible for generating the SP anomaly due to groundwater flow through a fractured conduit associated with a fault. We propose delineating the parameters associated with such a structure from an SP profile across it using a thin 2D conducting sheet model buried below the surface. We propose a data-based robust interpretation using derivative analysis of the SP anomaly. We demonstrate that, for a sufficiently smooth SP anomaly across a 2D buried sheet model, the first and second order vertical derivatives of the SP anomaly are the only necessary tools for delineating the model parameters, using characteristic features of the derivative curves and a set of closed-form formulas. We propose using the Savitzky–Golay derivative filter to obtain robust estimates of the first and second order horizontal derivatives. The first and second order vertical derivatives are obtained using the Hilbert transform and Laplace's equation, respectively. We applied the proposed technique to an SP anomaly profile across a fault at Vilarelho da Raia to estimate the parameters of a sheet model representing the fault plane. We used the estimated parameters and the ancillary data taken from the published literature to delineate hydrogeological parameters related to the recharging of an aquifer within a granitic terrain.
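The horizontal-derivative step is straightforward to reproduce with a standard Savitzky–Golay filter, as sketched below on an invented SP profile; the window length, polynomial order and sample spacing are assumed, and the Hilbert-transform step for the vertical derivatives is not shown.

```python
import numpy as np
from scipy.signal import savgol_filter

# Self-potential profile sampled every 5 m (synthetic example values, mV)
dx = 5.0
sp = np.array([-1.2, -1.5, -2.1, -3.4, -5.6, -8.9, -12.0, -9.1,
               -5.8, -3.5, -2.2, -1.6, -1.3, -1.1, -1.0, -0.9])

# Robust first and second horizontal derivatives via a Savitzky-Golay filter
# (window length and polynomial order are assumed for the sketch).
d1 = savgol_filter(sp, window_length=7, polyorder=3, deriv=1, delta=dx)
d2 = savgol_filter(sp, window_length=7, polyorder=3, deriv=2, delta=dx)
print(d1.round(3))
print(d2.round(4))
```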

Journal ArticleDOI
TL;DR: This work comprehensively assesses the performance of four mathematical models for VTEC modeling, i.e., polynomial, trigonometric series, spherical harmonic and multi-surface function, and shows that the performances of all models are insensitive to the model orders but sensitive to the ionosphere activity.
Abstract: The ground-based GNSS VTEC model can adequately capture the spatiotemporal characteristics of ionosphere activity. However, it is difficult to precisely model VTECs with a unified mathematical model. We comprehensively assess the performance of four mathematical models for VTEC modeling, i.e., polynomial, trigonometric series, spherical harmonic and multi-surface function. To capture varying ionosphere conditions, three typical regions in Western Europe, Southeast China, and North America are chosen. To reflect precision and accuracy from different aspects, four evaluation measures are defined, based on the comparison with CODE GIM products, the post-fit residuals of the VTEC modeling, the comparison with high-precision quasi-VTEC computed from L1–L2 phase observations, as well as the analysis of the precision and stability of the receiver and satellite DCB by-products. The results show that, in small regions, the performance of all models is insensitive to the model order but sensitive to the ionosphere activity. On the whole, the polynomial and spherical harmonic functions are comparable in terms of their performance and computation efficiency; the trigonometric series is unstable, with systematic biases in large regions, owing to its inability to describe the spatial ionosphere variations; the multi-surface function outperforms the others thanks to its epoch-wise solution, at the cost of lower computation efficiency. In addition, the annual solutions of satellite and receiver DCBs indicate their sufficient stability, so they can be applied as known values for single-epoch ionosphere modeling.
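As an illustration of the simplest of the four models, the sketch below fits a low-order regional polynomial VTEC surface by least squares; the polynomial order, the latitude/longitude offsets and the synthetic TEC values are assumed, and the other three model types are not shown.

```python
import numpy as np

def fit_poly_vtec(dlat, dlon, vtec, order=2):
    """Fit a regional polynomial model V(dlat, dlon) = sum a_ij * dlat^i * dlon^j,
    where dlat, dlon are offsets (deg) from the region centre; the order and the
    toy data below are assumed for the sketch."""
    terms = [(i, j) for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.column_stack([dlat ** i * dlon ** j for i, j in terms])
    coef, *_ = np.linalg.lstsq(A, vtec, rcond=None)
    return terms, coef

rng = np.random.default_rng(6)
dlat, dlon = rng.uniform(-5, 5, 200), rng.uniform(-5, 5, 200)
vtec = 20 + 0.8 * dlat - 0.5 * dlon + 0.05 * dlat * dlon + rng.normal(0, 0.3, 200)
terms, coef = fit_poly_vtec(dlat, dlon, vtec)
print(dict(zip(terms, coef.round(2))))
```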

Journal ArticleDOI
TL;DR: In this article, an experimental study is conducted to check the effect of different data volumes on the final prediction performance and hence to select an optimal data portion for the AR model, and the experimental results showed that although short-term predictions were not improved, calculating the AR model parameters from an appropriate data volume can effectively improve the accuracy of long-term prediction of polar motion.
Abstract: The least-squares extrapolation of harmonic models and autoregressive (LS + AR) prediction is currently considered to be one of the best prediction models for polar motion parameters. In this method, LS fitting residuals are treated as data to train an AR model. It is well known that using too much data results in learning a poorly relevant AR model, i.e., increasing the model bias. Using too little data, on the other hand, results in a lower estimation accuracy of the AR model, i.e., increasing the model variance. Selecting the data is therefore a critical issue for the compromise between bias and variance, and hence for obtaining a model with optimized prediction performance. In this paper, an experimental study is conducted to check the effect of different data volumes on the final prediction performance and hence to select an optimal data portion for the AR model. The earth orientation parameter products released by the International Earth Rotation and Reference Systems Service were used as primary data to predict changes in polar motion parameters over spans of 1–500 days for 800 experiments. The experimental results showed that although short-term predictions were not improved, calculating the AR model parameters from an appropriate data volume can effectively improve the accuracy of long-term prediction of polar motion.
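A minimal LS + AR sketch is given below: a bias-trend-harmonic model is fitted by least squares, an AR(p) model is fitted to the residuals via the Yule-Walker equations, and both parts are extrapolated. Daily sampling, the harmonic periods, the AR order and the synthetic series are assumed; the operational implementations evaluated in the paper are more elaborate.

```python
import numpy as np

def ls_ar_predict(t, y, horizon, periods=(365.25, 433.0), p=20):
    """Minimal LS+AR sketch for a daily series (assumed periods and AR order)."""
    # least-squares design matrix: offset, trend, cos/sin of each period
    cols = [np.ones_like(t), t]
    for P in periods:
        cols += [np.cos(2 * np.pi * t / P), np.sin(2 * np.pi * t / P)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef

    # AR(p) coefficients from the Yule-Walker equations
    r = np.correlate(resid, resid, mode="full")[resid.size - 1:] / resid.size
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    phi = np.linalg.solve(R, r[1:p + 1])

    # recursive AR extrapolation of the residuals
    buf = list(resid[-p:])
    ar_fc = []
    for _ in range(horizon):
        nxt = np.dot(phi, buf[::-1][:p])   # most recent value matches phi[0]
        ar_fc.append(nxt)
        buf.append(nxt)

    # extrapolate the deterministic LS part and add the AR forecast
    t_fc = t[-1] + np.arange(1, horizon + 1)
    cols_fc = [np.ones_like(t_fc), t_fc]
    for P in periods:
        cols_fc += [np.cos(2 * np.pi * t_fc / P), np.sin(2 * np.pi * t_fc / P)]
    return np.column_stack(cols_fc) @ coef + np.array(ar_fc)

# toy usage with a synthetic polar-motion-like series
t = np.arange(3000.0)
y = (100 * np.sin(2 * np.pi * t / 433.0) + 80 * np.cos(2 * np.pi * t / 365.25)
     + np.random.default_rng(1).normal(0, 2, t.size))
print(ls_ar_predict(t, y, horizon=10)[:5])
```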

Journal ArticleDOI
TL;DR: In this article, a new adaptive Kalman filter is proposed for tropospheric delay processing, in which the variance of the process noise for the zenith tropospheric delay (ZTD) dynamics model is tuned in real time using the least-squares variance component estimation technique.
Abstract: The Global Navigation Satellite System (GNSS) precise point positioning (PPP) technology is currently used to process GNSS water vapor observations in real time or near real time. Further developments are required to improve the accuracy and real-time performance of processing the tropospheric delays from which the water vapor observations are extracted. In real-time BDS/GPS precise clock correction estimation with square-root information filtering and in the PPP solution with Kalman filtering, a fixed variance is often assigned for the process noise of the troposphere dynamics model. However, this fixed value may deviate from reality as the weather conditions change, especially in extreme weather. In this paper, a new adaptive Kalman filter is proposed for tropospheric delay processing, in which the variance of the process noise for the zenith tropospheric delay (ZTD) dynamics model is tuned in real time using the least-squares variance component estimation (VCE) technique. MGEX/IGS data of 15 consecutive days were processed in a stepwise manner: real-time BDS/GPS precise clock corrections were estimated first, followed by a comparison among the ZTD solutions of four schemes based on these real-time clocks from the first step and on GFZ multi-GNSS precise clock (GBM) final clocks: (1) real-time solution of ZTD (G (GPS), GC (GPS and BDS)) without VCE; (2) real-time solution of ZTD (G, GC) with VCE; (3) final solution of ZTD (G, GC, GR (GPS and GLONASS), GE (GPS and GALILEO), GRC (GPS, GLONASS and BDS), GREC (GPS, GLONASS, GALILEO and BDS)) without VCE; and (4) final solution of ZTD (G, GC, GR, GE, GRC, GREC) with VCE. The performance of the ZTD and positioning solutions was analyzed. Results showed that the accuracy of the estimated real-time satellite clock corrections was 0.27, 1.31, 0.29, and 0.21 ns for GPS, BDS/GEO, BDS/IGSO, and BDS/MEO, respectively. For the ZTD solutions, the results of schemes 1 and 2 were 12.8 mm (GC) and 10.1 mm (GC) in terms of mean root-mean-square (RMS) values and 2.1 mm (GC) and 1.6 mm (GC) in terms of minimum RMS values, respectively, thereby showing an improvement for scheme 2 of 1.8–81.4% over scheme 1 with average increasing rates of 20.7% (GC) and 20.2% (G). The results for schemes 3 and 4 were 7.6 mm (G) and 6.3 mm (GRC) in terms of mean RMS values and 2.1 mm (G) and 1.9 mm (GRCE) in terms of minimum RMS values, respectively, thereby showing an improvement in scheme 4 over scheme 3 by 1.3–85.6% with average increasing rates of 22.1% (GRCE), 21.9% (GRC), 18.4% (GR), 15.9% (GC), 15.2% (GE), and 12.1% (G). Similar results can be observed for the positioning solution, especially in the height component. These findings clearly show the advantage of the proposed method, which is consistent with the theoretical analysis. Notably, the advantage of the adaptive VCE becomes significant with the inclusion of additional satellite systems.
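To show the adaptive idea in its simplest form, the sketch below runs a one-state random-walk Kalman filter over a synthetic ZTD series and re-tunes the process-noise variance from recent innovations; this innovation-based adaptation is a simple stand-in for the least-squares variance component estimation used in the paper, and the noise levels, window length and series are assumed.

```python
import numpy as np

def adaptive_kf_ztd(z, r_obs=4e-6, q0=1e-8, window=30):
    """1-state random-walk Kalman filter for a ZTD series (metres).
    The process-noise variance q is re-tuned from the recent innovation
    sample variance; r_obs, q0 and the window length are assumed values."""
    x, P, q = z[0], 1e-4, q0
    innovations, out = [], []
    for zk in z:
        P = P + q                 # prediction: random walk, variance grows by q
        S = P + r_obs             # innovation variance
        K = P / S                 # Kalman gain
        v = zk - x                # innovation
        x = x + K * v
        P = (1.0 - K) * P
        innovations.append(v)
        if len(innovations) >= window:
            c = np.var(innovations[-window:])
            q = max(c - r_obs, q0)    # crude re-tuning of the process noise
        out.append(x)
    return np.array(out)

# toy ZTD series (m): slow drift plus observation noise
rng = np.random.default_rng(2)
truth = 2.4 + 0.0005 * np.sin(np.arange(500) / 50.0)
obs = truth + rng.normal(0, 0.002, 500)
print(adaptive_kf_ztd(obs)[-3:])
```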

Journal ArticleDOI
TL;DR: An assessment of NavIC from the aspects of data quality, usability and single point positioning (SPP) performance is carried out using real measured data collected from four sites within both NavIC's primary and secondary service areas, and shows that the usability and SPP performance can be improved significantly in the GPS/NavIC mode compared with either single mode.
Abstract: The Navigation with Indian Constellation (NavIC), also known as the Indian regional navigation satellite system, is a regional navigation satellite system recently developed by India. Its service area covers from 30°E to 130°E and from 30°S to 50°N. In this contribution, an assessment of NavIC from the aspects of data quality, usability and single point positioning (SPP) performance is carried out using real measured data collected from four sites within both NavIC's primary and secondary service areas. The data quality of NavIC's signal is assessed by measuring its carrier-to-noise-density ratio, and each satellite's orbital period is calculated using its broadcast ephemeris. The number of visible satellites and the DOP values of each site in the NavIC-only, GPS-only and GPS/NavIC modes are counted and calculated, respectively. SPP solutions in the NavIC-only, GPS-only and GPS/NavIC modes are also carried out for these four sites. The results show that: the signal strength of NavIC's L5 frequency generally equals that of GPS at site IISC; the mean orbital periods of the IGSO and GEO satellites are 86160.70 s and 86152.03 s, respectively; at site IISC (within the primary service area), the NavIC system can currently provide an independent positioning service with an accuracy of less than 1 m in the east direction and less than 2 m in the north and up directions; and the usability and SPP performance can be improved significantly in the GPS/NavIC mode compared with either single mode.
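The DOP values mentioned above follow directly from the single point positioning design matrix. The sketch below computes GDOP/PDOP/HDOP/VDOP from a handful of receiver-to-satellite unit vectors in a local east-north-up frame; the satellite geometry is invented for the example and is not a real NavIC/GPS epoch.

```python
import numpy as np

def dops(unit_vectors):
    """GDOP/PDOP/HDOP/VDOP from receiver-to-satellite unit vectors (local ENU)."""
    e = np.asarray(unit_vectors, dtype=float)
    A = np.hstack([-e, np.ones((e.shape[0], 1))])   # SPP design matrix (pos + clock)
    Q = np.linalg.inv(A.T @ A)                      # cofactor matrix
    gdop = np.sqrt(np.trace(Q))
    pdop = np.sqrt(Q[0, 0] + Q[1, 1] + Q[2, 2])
    hdop = np.sqrt(Q[0, 0] + Q[1, 1])
    vdop = np.sqrt(Q[2, 2])
    return gdop, pdop, hdop, vdop

# five approximately unit line-of-sight vectors (east, north, up) - assumed geometry
sats = [[0.3, 0.4, 0.866], [-0.5, 0.2, 0.843], [0.1, -0.7, 0.707],
        [-0.2, -0.3, 0.933], [0.6, 0.1, 0.794]]
print([round(v, 2) for v in dops(sats)])
```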

Journal ArticleDOI
TL;DR: The map obtained by the Back Propagation Artificial Neural Network was compared with a current map of the Nile, and the result was that the ANN technique can also be used in the absence of a scale.
Abstract: For many years, the adaptation of historical maps to those used today has been one of the fields of study of cartography and geodesy. In this process, the first problem is that the coordinate system of historical maps differs from modern coordinate systems. For such problems, many mathematical methods have been developed and implemented for the georeferencing process. The second problem, more complicated than the first, is how to resolve the situation when the mapped area was drawn on more than one sheet of paper and without a scale. This study focuses on the map drawn in 1521 by the Ottoman admiral Piri Reis of the Nile, which gave life to Egypt, the cradle of civilizations. The Nile was drawn from Cairo to Rosetta on four map sheets in five parts, without a coordinate reference or scale factor. A method that has performed well in solving problems that cannot be expressed mathematically is the artificial neural network (ANN) technique, which is widely used in solving engineering problems. The current study reports on the use of the Back Propagation Artificial Neural Network (BPANN). The map obtained by BPANN was compared with a current map of the Nile, and the result was that the ANN technique can also be used in the absence of a scale.
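As an illustration of the back-propagation approach, the sketch below trains a small multilayer perceptron to map sheet coordinates of control points to geographic coordinates; the control-point values, network size and training settings are all invented for the example and are unrelated to the Piri Reis data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical control points: (x, y) measured on the scanned map sheet versus
# (lon, lat) of the same features on a modern map (invented numbers).
sheet_xy = np.array([[0.10, 0.20], [0.80, 0.25], [0.45, 0.60],
                     [0.15, 0.85], [0.75, 0.90], [0.50, 0.10],
                     [0.30, 0.40], [0.65, 0.55]])
lon_lat = np.array([[31.20, 30.10], [31.80, 30.15], [31.50, 30.45],
                    [31.25, 30.70], [31.75, 30.75], [31.55, 30.05],
                    [31.35, 30.30], [31.65, 30.40]])

# A small back-propagation network learns the sheet-to-ground mapping without
# an explicit scale or projection.
net = MLPRegressor(hidden_layer_sizes=(20, 20), activation="tanh",
                   solver="lbfgs", max_iter=5000, random_state=0)
net.fit(sheet_xy, lon_lat)
print(net.predict([[0.40, 0.50]]))   # georeference a new sheet point
```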

Journal ArticleDOI
TL;DR: It is found that the GPS-only ZTD estimates show very good agreement with the CODE final ZTD products, but that a systematic negative bias of around 3 mm shows up between the ZTD estimates from combined GPS/BeiDou data and those from GPS-only data.
Abstract: In this study, done in the frame of the MGEX (Multi-GNSS Experiment) project, for 20 selected worldwide stations and for the whole year of 2015, we compare zenith total delay (ZTD) values estimated from combined BeiDou/GPS signals to ZTD estimates from GPS-only signals, and also to ZTD estimates from the IGS analysis center CODE (CODE products), in order to assess the intrinsic accuracy of these ZTD estimates. We used the PANDA software from Wuhan University for all our data processing, with precise orbits (PPP) from the GFZ IGS analysis center. We found that the GPS-only ZTD estimates show very good agreement with the CODE final ZTD products, but that a systematic negative bias of around 3 mm shows up between the ZTD estimates from combined GPS/BeiDou data and those from GPS-only data. This indicates that the accuracy of the BeiDou satellite orbits and clocks still needs to be improved with respect to IGS standards.

Journal ArticleDOI
TL;DR: The new applicability of the IRLS-FT is demonstrated in the reduction to the pole of synthetic magnetic data generated on a regular equidistant array and subsequently randomized to produce non-equidistant measurements along a survey line.
Abstract: A comprehensive robust inversion-based Fourier transformation algorithm, known as the iteratively reweighted least squares Fourier transformation (IRLS-FT) method, has been proposed based on the advantages of Hermite functions, for processing even random-walk sampled data. By using Hermite functions as the basis functions of the discretization, the Fourier spectrum was discretized using a series expansion whose expansion coefficients were given by the solution of a linear inverse problem. The method enables a quicker determination of the Jacobi matrix, because the Hermite functions are eigenfunctions of the inverse Fourier transformation. The process was robustified using the iteratively reweighted least squares (IRLS) method with Steiner weights. The result is a very efficient, robust and resistant procedure with a higher noise reduction capability irrespective of the data acquisition protocol, i.e., whether a regular or irregular sampling procedure was used in acquiring the data. The Fourier transformation was employed in developing the new method because it facilitates data conversion from the time to the frequency domain. To reduce the noise sensitivity that characterizes the traditional DFT method, the Fourier transformation was formulated in the IRLS-FT as an overdetermined inverse problem, permitting the required noise reduction tools to be applied. Traditionally, geophysical data are acquired on a regular equidistant grid, but the continual improvement in survey equipment and processing tools permits non-equidistant measurements. The new applicability of the IRLS-FT is demonstrated in the reduction to the pole of synthetic magnetic data generated on a regular equidistant array and subsequently randomized to produce non-equidistant measurements along a survey line. In a one-dimensional study, the IRLS-FT processed waveforms were similar for both equidistant and non-equidistant sampling. An application on magnetic data showed similar anomalies for DFT-processed equidistant sampling and IRLS-FT-processed non-equidistant sampling, indicating that the new method is applicable irrespective of the sampling protocol applied in the field survey or data acquisition process. This data processing ability of the IRLS-FT method simplifies and speeds up field data acquisition, as measurements need not be taken on a regular grid, which gives it a competitive advantage over the traditional DFT method.
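A stripped-down version of the inversion-based idea is sketched below: cosine/sine amplitudes at chosen frequencies are estimated from irregularly sampled data by solving an overdetermined least-squares problem with iterative reweighting. A Cauchy-type weight stands in for the Steiner weights, and the plain cos/sin basis stands in for the Hermite-function discretization of the paper; the frequencies and the synthetic signal are assumed.

```python
import numpy as np

def irls_fourier(t, y, freqs, n_iter=10):
    """Robust estimation of cos/sin amplitudes from irregularly sampled data
    by iteratively reweighted least squares (Cauchy-type weights)."""
    A = np.column_stack([f(2 * np.pi * fr * t) for fr in freqs
                         for f in (np.cos, np.sin)])
    w = np.ones_like(y)
    for _ in range(n_iter):
        W = np.sqrt(w)[:, None]
        coef, *_ = np.linalg.lstsq(A * W, y * np.sqrt(w), rcond=None)
        res = y - A @ coef
        scale = 1.4826 * np.median(np.abs(res)) + 1e-12
        w = 1.0 / (1.0 + (res / scale) ** 2)       # Cauchy-type robust weight
    return coef

# irregular sampling of a 2-component signal with one gross outlier
rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0, 10, 300))
y = 2.0 * np.cos(2 * np.pi * 0.5 * t) + 0.7 * np.sin(2 * np.pi * 1.2 * t)
y[50] += 25.0
print(irls_fourier(t, y, freqs=[0.5, 1.2]).round(3))   # ~[2, 0, 0, 0.7]
```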

Journal ArticleDOI
TL;DR: A new floor identification method based on confidence intervals of Wi-Fi signals was developed; it proved better than other methods in large complex indoor scenes, and it can significantly reduce the size of the fingerprint database and further improve the efficiency of the algorithm.
Abstract: Indoor positioning technology is one of the hotspots of location-based services (LBS). However, most indoor positioning systems are two-dimensional and cannot meet the requirements of today's LBS. Complex indoor structures and environments make floor positioning, rather than altitude positioning, the relevant task in the vertical direction, so floor identification is the key to three-dimensional indoor positioning systems. The existing floor identification methods based on barometers or inertial sensors have many restrictions: they need comparable reference data in advance, or must track changes in the test data over a certain period of time for accurate identification. The current floor identification methods based on ordinary Wi-Fi fingerprints are less effective in complex environments. Therefore, a new floor identification method based on confidence intervals of Wi-Fi signals was developed in this paper, divided into an offline stage and an online stage. In the offline stage, dynamic Wi-Fi signal sequences were collected rapidly. Then, an adaptive partitioning of the Wi-Fi signal intervals was carried out according to the RSSI distribution characteristics in the multi-floor environment. Finally, the confidence levels were calculated and the fingerprint database was constructed. In the online stage, the test fingerprints were matched against those in the database to obtain the confidence of the APs on each floor monitored by the test fingerprints. The sums of the confidences of the APs on each floor were calculated, and the floor corresponding to the maximum value was judged to be the target floor. To verify the performance of the proposed method, it was compared with the majority voting committees, K-means, Naive Bayes and KNN methods. The results indicate that it was better than the other methods in large complex indoor scenes. Its identification accuracy was 92.2%, and the 7.8% of erroneous identifications were only one floor away. Moreover, it could also significantly reduce the size of the fingerprint database and further improve the efficiency of the algorithm.
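The online decision rule reduces to summing, per floor, the confidence values of the access points whose stored intervals cover the observed RSSI, and picking the maximum, as sketched below; the database structure, interval bounds and confidence values are assumed for the illustration.

```python
from collections import defaultdict

def identify_floor(test_fp, floor_db):
    """Pick the floor whose stored AP confidence intervals best cover the
    observed RSSI values.  floor_db maps floor -> {ap: (rssi_lo, rssi_hi, conf)};
    the structure and the numbers below are assumed for the sketch."""
    scores = defaultdict(float)
    for floor, aps in floor_db.items():
        for ap, rssi in test_fp.items():
            if ap in aps:
                lo, hi, conf = aps[ap]
                if lo <= rssi <= hi:        # observation falls in the interval
                    scores[floor] += conf   # add that AP's confidence level
    return max(scores, key=scores.get) if scores else None

floor_db = {
    1: {"ap1": (-75, -55, 0.9), "ap2": (-90, -70, 0.6)},
    2: {"ap1": (-95, -80, 0.7), "ap3": (-70, -50, 0.9)},
}
print(identify_floor({"ap1": -60, "ap2": -80, "ap3": -92}, floor_db))   # -> 1
```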

Journal ArticleDOI
TL;DR: An improved method of SIS anomaly detection with precise ephemeris is presented, showing that constellation faults caused by erroneous clock data can be avoided and that SIS anomalies can be detected using the proposed method.
Abstract: The signal-in-space (SIS) anomalies caused by satellites and control segments can greatly affect the reliability and safety of navigation and positioning users. The prior failure information required by the Advanced Receiver Autonomous Integrity Monitoring (ARAIM) algorithm is obtained by evaluating the failure rates of the SIS broadcast with the navigation ephemeris, in order to investigate the integrity of navigation and positioning. In the existing ARAIM algorithm, the failure rate of satellites in the BeiDou Navigation Satellite System (BDS) is a conservative estimate, which is inconsistent with the actual SIS performance of BDS. Only an accurate detection of the SIS anomalies of BDS satellites can provide an effective reference for this failure information. Therefore, to improve the accuracy of SIS anomaly detection for BDS satellites, and to provide higher-integrity services for users, this study presents an improved method of SIS anomaly detection with precise ephemeris. The median method was used to detect gross errors in the clock data before the calculation of the clock datum, and the combination of an experience-based threshold and a trimmed mean was used to determine the anomaly detection threshold. The feasibility and efficiency of the proposed method were analyzed using data collected between 2015 and 2016. The detection results show that the constellation fault caused by erroneous clock data can be avoided, and the SIS anomalies can be detected using the proposed method. Additionally, through the comprehensive tests performed in this study, it was found that from 2015 to 2016 the average accumulated duration of anomalies for BDS satellites was 10 h for geostationary orbit (GEO) and inclined geosynchronous orbit (IGSO) satellites and 55 h for medium earth orbit (MEO) satellites. These anomalies were primarily caused by the satellite clocks.
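One simple reading of the median-based gross-error screening is sketched below: epoch-to-epoch differences of a clock offset series are compared with their median using a MAD-scaled threshold; the threshold factor and the toy series are assumed, and the paper's combination with a trimmed mean is not reproduced.

```python
import numpy as np

def median_outlier_filter(clock_offsets, k=5.0):
    """Flag gross errors in a satellite clock offset series using the median
    and the median absolute deviation of its epoch-to-epoch differences
    (k is an assumed threshold factor)."""
    diffs = np.diff(clock_offsets)
    med = np.median(diffs)
    mad = 1.4826 * np.median(np.abs(diffs - med)) + 1e-15
    bad = np.abs(diffs - med) > k * mad
    # an outlying difference implicates the later of the two epochs
    flags = np.zeros(clock_offsets.size, dtype=bool)
    flags[1:] = bad
    return flags

clk = np.array([1.000e-4, 1.001e-4, 1.002e-4, 9.500e-4, 1.004e-4, 1.005e-4])
print(median_outlier_filter(clk))   # the jump at index 3 (and its return) is flagged
```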

Journal ArticleDOI
TL;DR: The real-time NavIC signal is examined under intentional software-defined-radio-based chirp jamming by different methods, and the jamming scenario is analysed through the power spectral density by calculating the spectral energy distribution per unit time.
Abstract: In the near future, the upcoming Navigation with Indian Constellation (NavIC) system will be used for navigation services in India. All services provided by a NavIC receiver must be reliable and resistant to threats. Therefore, the real-time NavIC signal is examined under intentional software-defined-radio-based chirp jamming by different methods. Since the NavIC signal is available at all times, the jamming scenario has been analysed through the power spectral density by calculating the spectral energy distribution per unit time. Furthermore, the received signal is examined through the auto-correlation function and the signal strength monitored at the receiver. A dynamic jamming scenario is used for the survey and leads to the main observations. For example, the satellites IRNSS-1E, 1F and 1G are more susceptible to interference in such a scenario, and if the dynamic receiver is close to the interference source, the jammer becomes a serious threat that can block the NavIC signal.

Journal ArticleDOI
TL;DR: In this paper, a new method of parameter estimation for multivariate errors-in-variables (MEIV) model was proposed based on the principle of maximum likelihood estimation, and two iterative algorithms were presented.
Abstract: In this paper, a new method of parameter estimation for the multivariate errors-in-variables (MEIV) model was proposed. The formulae of the parameter solution for the MEIV model were deduced based on the principle of maximum likelihood estimation, and two iterative algorithms were presented. Since the iterative process is similar to classical least squares, both of the proposed algorithms are easy to program and understand. Finally, real and simulated datasets of affine coordinate transformation were employed to verify the applicability of the proposed algorithms. The results show that both of the proposed algorithms achieve parameter estimates identical to those obtained by the Lagrange algorithm and the Newton algorithm. Additionally, the proposed Algorithm 2 can solve the MEIV model with higher convergence efficiency than Algorithm 1.

Journal ArticleDOI
TL;DR: In this article, a combined empirical mode decomposition (EMD) and multichannel singular spectrum analysis (MSSA) model was used for extraction of the gravity tide correction without a priori information (e.g., station coordinates) from static relative gravimetric data.
Abstract: A combined empirical mode decomposition (EMD) and multichannel singular spectrum analysis (MSSA) model (EMD–MSSA model) was used to extract the gravity tide correction, without a priori information (e.g., station coordinates), from static relative gravimetric data. Static observational data acquired using a CG-5 relative gravimeter over 16 days were used to investigate the feasibility and reliability of the proposed method. The singular spectrum analysis (SSA) method and the EMD–independent component analysis (ICA) method were also adopted for comparison. Experimental results show that the time series of the gravity tide correction estimated using the EMD–MSSA, SSA and EMD–ICA methods are consistent with a theoretical reference (the Longman formula). The gravity tide correction estimated using the EMD–MSSA method is closer to the theoretical model than those of the other methods, the root-mean-square of the residuals between estimated and theoretical values is smallest, and the accuracy of the gravity tide correction time series derived using the EMD–MSSA method is thus highest. The correlation coefficient between the extraction results and the theoretical gravity tide is also highest for the EMD–MSSA method. The experimental results show that the EMD–MSSA model, which combines the advantages of the MSSA and EMD signal processing methods, improves the estimation accuracy and reliability of the gravity tide correction extracted from relative gravimetric data.

Journal ArticleDOI
TL;DR: In this article, the authors proposed a method to detect future zones of high-stress accumulation caused by the interaction of active faults within a 3D topographic geological block based on finite-element analysis.
Abstract: As a preliminary three-dimensional numerical analysis, this research aims to detect future zones of high-stress accumulation caused by the interaction of active faults within a 3D topographic geological block, based on finite-element analysis. Stress analysis of the three-dimensional topographic model covers both static and dynamic loading caused by topographic loads and crustal movements, and can provide more realistic results. There are many applications for creating topographic models from xyz data. Nevertheless, these models do not have the properties required by analytical software. Solid meshing of topographic blocks is difficult and consumes much time and CPU capacity. Therefore, we first create a validated topographic shell model through the introduced methods, including nodal projection and statistical analysis, and then upgrade it to a solid model. The stress equations are then assigned to each element of the solid model. The outputs include stress accumulation zones in both pre-failure and failure mode for the whole model. In addition, energy diagrams show the rate of the main energies over time and accordingly represent the power of each energy output. The energy drop during the initial run time is consistent with the collision of the blocks of the model.

Journal ArticleDOI
TL;DR: The heuristic SSA approach performs better at extracting signals from SSC time series contaminated with multiplicative noise: all the mean root-mean-squared errors and mean absolute errors derived are smaller than those of the traditional and homomorphic log-transformation SSA, which indicates that the extracted signals are much closer to the real signals than those of the other two approaches.
Abstract: This paper proposes a heuristic singular spectrum analysis (SSA) approach to extract signals from suspended sediment concentration (SSC) time series contaminated by multiplicative noise, in which the multiplicative noise is converted to approximately additive noise by multiplying it with the signal estimate of the time series. Both the signal and noise components therefore need to be recursively estimated. Since the converted additive noise is heterogeneous, a weight factor is introduced according to the variance of the additive noise. The proposed heuristic SSA approach is employed to process the SSC series in San Francisco Bay and is compared to the traditional SSA and homomorphic log-transformation SSA approaches. Using our heuristic SSA approach, the first 10 principal components can capture 96.49% of the total variance with a fitting error of 6.17 mg/L, better than the traditional SSA and homomorphic log-transformation SSA approaches, which capture 88.97% and 87.35% of the total variance with fitting errors of 14.47 mg/L and 15.03 mg/L, respectively. Therefore, our heuristic SSA approach can extract more signal than the traditional SSA and homomorphic log-transformation SSA approaches. Furthermore, the results of the simulation cases show that all the mean root-mean-squared errors and mean absolute errors derived by our heuristic SSA are smaller than those of the traditional and homomorphic log-transformation SSA, which indicates that the signals extracted by the heuristic SSA approach are much closer to the real signals than those of the other two approaches. It can therefore be concluded that our heuristic SSA approach performs better in extracting signals from SSC time series contaminated with multiplicative noise.

Journal ArticleDOI
TL;DR: The VCV matrices obtained from Bernese v5.2 were investigated for GPS measurements to estimate appropriate scale factor (SF) values and, according to the results, SF values do not depend on baseline lengths; but they vary depending on the session durations.
Abstract: The Global Positioning System (GPS) provides positioning, timing and navigation services for different engineering applications. GPS positioning accuracies vary depending on several parameters, such as the surveying method, the data processing strategy and the software packages. The Bernese v5.2 software package is an important tool for processing and analyzing GPS measurements, especially for precise applications in the scientific community. Although the accuracies of the estimated coordinates are sufficient, the Variance–Covariance (VCV) matrices obtained from Bernese v5.2 are very optimistic, because the correlations between different observables may be ignored by choosing identical weights for each measurement type in the analysis. This situation causes wrong interpretations in statistical analyses based on these VCV matrices. Therefore, the VCV matrices obtained from the software should be scaled. In this study, the VCV matrices obtained from Bernese v5.2 were investigated for GPS measurements to estimate appropriate scale factor (SF) values. Baselines with lengths ranging from 55 to 268 km and session durations between 2 and 24 h were processed with the single-baseline strategy for 31 consecutive days. According to the results, SF values do not depend on baseline lengths, but they vary depending on the session durations. A logarithmic function was defined for the time-dependent SF values. This function was tested in the global test step of a deformation analysis, and the results obtained when the SF values are taken into account are more reliable than those obtained when the unscaled VCV matrices are used.
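A time-dependent scale factor function of the kind described can be obtained with an ordinary least-squares fit of SF(t) = a + b ln(t), as sketched below; the session durations and scale factor values are invented for the illustration and are not the values estimated in the study.

```python
import numpy as np

# Hypothetical scale factors estimated for different session durations (hours);
# the numbers are illustrative only.
hours = np.array([2, 4, 6, 8, 12, 24], dtype=float)
sf    = np.array([95.0, 62.0, 47.0, 38.0, 28.0, 17.0])

# Fit SF(t) = a + b * ln(t) by ordinary least squares on ln(t)
b, a = np.polyfit(np.log(hours), sf, 1)
print(f"SF(t) ~ {a:.1f} + {b:.1f} * ln(t)")
print("predicted SF for a 10-h session:", round(a + b * np.log(10.0), 1))
```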

Journal ArticleDOI
TL;DR: In this paper, the first-digit or significant-digit law is investigated for the residuals and the normalized residuals estimated by the least-squares estimation (LSE) method.
Abstract: Benford's law (BL), also known as the first-digit or significant-digit law, is an intriguing pattern in data sets: it concerns the frequency of occurrence of first digits, which are not uniformly distributed as might be expected, but instead follow a specific theoretical distribution. According to BL, the occurrence of the first non-zero digit in numerical data generated or found in nature follows a logarithmic distribution. The least-squares estimation (LSE) method is usually preferred for the estimation of unknown parameters from different types of geodetic data. The residuals and the normalized residuals of the LSE method, which follow a normal distribution with zero expectation, are used in the outlier detection problem. In this study, BL is investigated for the residuals and the normalized residuals estimated by the LSE method. Three types of geodetic data are used: (1) simulated regression models, (2) global positioning system (GPS) data, (3) a leveling network. The first group of data sets is simulated based on linear regression and univariate models, and each simulated group is generated for 100, 1000, and 10,000 samples. To generate the second group, data from an International GNSS Service (IGS) station (ISTA) are processed by the kinematic PPP approach using the GIPSY OASIS II v6.4 software. Here, the observation duration of the GPS data is 4 days. For the last data set, a leveling network with 55 points involving 110 observations of height differences is simulated. BL has been applied to the residuals (v) and normalized residuals (w) estimated by the LSE method. A goodness-of-fit test has been implemented to determine whether a population has the specified BL distribution or not. This test is based on how good a fit there is between the frequency of occurrence of residuals and normalized residuals in an observed sample and the expected frequencies obtained from the hypothesized distribution. The results of the statistical test show that each data set (residuals and normalized residuals) used in this study follows BL.
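The goodness-of-fit check can be reproduced with a chi-square test of observed first-digit frequencies against the Benford probabilities P(d) = log10(1 + 1/d), as sketched below on synthetic residual-like values; the lognormal toy data are assumed and are not the geodetic data sets of the study.

```python
import numpy as np
from scipy.stats import chisquare

def benford_test(values):
    """Chi-square goodness-of-fit of the first significant digits of |values|
    against Benford's law P(d) = log10(1 + 1/d)."""
    v = np.abs(np.asarray(values, dtype=float))
    v = v[v > 0]
    first = (v / 10.0 ** np.floor(np.log10(v))).astype(int)   # first digit 1..9
    observed = np.array([(first == d).sum() for d in range(1, 10)])
    expected = np.log10(1.0 + 1.0 / np.arange(1, 10)) * v.size
    return chisquare(observed, expected)

# toy example: residual-like values spanning several orders of magnitude
rng = np.random.default_rng(4)
residuals = rng.lognormal(mean=0.0, sigma=2.5, size=5000)
print(benford_test(residuals))
```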

Journal ArticleDOI
TL;DR: In this paper, the authors present the results from the static part of the load test for the Zglavje viaduct, located on the A1 motorway in Slovenia, and the feasible usage of modern geodetic instruments for determining the dynamic response of a structure.
Abstract: The article presents the results from the static part of the load test for the Zglavje viaduct, located on the A1 motorway in Slovenia, and the feasible usage of modern geodetic instruments for determining the dynamic response of a structure. A static analysis was employed to compare the implemented methods by which vertical displacements were measured, and the measured displacements were compared to the calculated ones; high agreement of the results was established. In addition, the experimental part on geodetic non-contact methods for determining the dynamic response of a structure was implemented on the railway bridge across the Mura river on the Ormož–Hodos railway line in Slovenia. Geodetic non-contact methods are becoming more and more applicable to the determination of dynamic response, owing to their technological development in measurement speed and continuous data capture. Our results were obtained by employing a Robotic Total Station (RTS).

Journal ArticleDOI
TL;DR: In this article, the authors argue that the spectral properties of low-frequency seismic noise are best represented by percentiles of the data instead of the mode, because the mode is noisy and sensitive to discretization and intrinsic averaging, and is therefore less suitable for a robust characterisation.
Abstract: The site characterisation of future underground gravitational wave detectors is based on the spectral properties of the low-frequency seismic noise. The evaluation of the long-term seismological data collected in the Matra Gravitational and Geophysical Laboratory revealed some aspects that are not apparent in short-term spectral noise characterisation. In this paper we survey the methodology. In particular, we argue that the spectral properties are best represented by percentiles of the data instead of the mode, because the mode is noisy and sensitive to the discretization and intrinsic averaging, and is therefore less suitable for a robust characterisation. Suitable cumulative measures are also scrutinized.
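Percentile spectra of a long record can be formed by computing one PSD per segment and taking percentiles over the segments at each frequency, as sketched below; the segment length, percentile levels, sampling rate and synthetic record are assumed and do not reproduce the Matra data processing.

```python
import numpy as np
from scipy.signal import welch

def psd_percentiles(x, fs, nperseg=4096, percentiles=(10, 50, 90)):
    """One Welch PSD per non-overlapping segment of a long record, then
    percentiles over the segments at each frequency (assumed settings)."""
    n_seg = x.size // nperseg
    psds = []
    for i in range(n_seg):
        f, p = welch(x[i * nperseg:(i + 1) * nperseg], fs=fs, nperseg=nperseg)
        psds.append(p)
    psds = np.array(psds)
    return f, {q: np.percentile(psds, q, axis=0) for q in percentiles}

# toy record: white noise with a short high-noise episode
rng = np.random.default_rng(5)
x = rng.standard_normal(40 * 4096)
x[10 * 4096:12 * 4096] *= 10.0
f, pct = psd_percentiles(x, fs=100.0)
print(pct[50][:3], pct[90][:3])
```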

Journal ArticleDOI
TL;DR: In this article, the gravity gradients from the orbital ceiling to the depth of the Mohorovicic discontinuity (Moho) for Central Europe were derived by using the gridded data with a resolution of 0.2°.
Abstract: We discuss the determination of gravity gradients from the orbital ceiling down to the depth of the Mohorovicic discontinuity (Moho) for Central Europe. Components of the Eotvos tensor were derived from the “Heterogeneous gravity data combination for Earth interior and geophysical exploration research” project (“GOCE+”) by using the gridded data with a resolution of 0.2° × 0.2°. Gravity gradients down to the Moho boundary depth were forward modelled at the 255 km orbital height. We calculated the gradient sensitivity using a 3D model divided into sediments and consolidated crust, including the precise location of the Moho boundary. To define the tesseroids of the mathematical model, two parameters of the crust must be set for each spherical layer separately: density and thickness. Altitudes for topography/bathymetry were derived from the ETOPO1 model, sediment thickness from the EuCRUST-07 model, and the Moho boundary from the seismic map of Grad and Tiira (Geophys J Int 176(1):279–292, 2009. https://doi.org/10.1111/j.1365-246x.2008.03919.x ). For high latitudes, we noted the largest changes in the gradients towards the poles, with values of 689.07 mE (milli-eotvos) and 1138.19 mE for the VXX and VZZ gradients, respectively. We obtained extreme values for the locations of the deep and shallow areas of the crust (Alps, north-eastern Poland and sea areas), equal to − 3 E and + 1.5 E, respectively. Most of the gradients showed a strong correlation with anomalies in crustal density, of − 2.5 E for VZZ and + 1.5 E for VYY in the extreme cases. We showed that changes in crustal density and thickness of 50 kg/m3 and 10 km, respectively, entail changes in the gradient values of 15% for density and 10% for depth. A numerical analysis considering the Preliminary Reference Earth Model (PREM) showed the importance of density modeling for the determination of gravity gradients.