
Showing papers in "Pure and Applied Geophysics in 2010"


Journal ArticleDOI
TL;DR: A global monitoring system for atmospheric xenon radioactivity is being established as part of the International Monitoring System that will verify compliance with the Comprehensive Nuclear-Test-Ban Treaty (CTBT) once the treaty has entered into force as discussed by the authors.
Abstract: A global monitoring system for atmospheric xenon radioactivity is being established as part of the International Monitoring System that will verify compliance with the Comprehensive Nuclear-Test-Ban Treaty (CTBT) once the treaty has entered into force. This paper studies isotopic activity ratios to support the interpretation of observed atmospheric concentrations of 135Xe, 133mXe, 133Xe and 131mXe. The goal is to distinguish nuclear explosion sources from civilian releases. Simulations of nuclear explosions and reactors, empirical data for both test and reactor releases as well as observations by measurement stations of the International Noble Gas Experiment (INGE) are used to provide a proof of concept for the isotopic ratio based method for source discrimination.
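As a rough sketch of how such a ratio-based screen works, the snippet below places a sample in the (135Xe/133Xe, 133mXe/131mXe) log-ratio plane and compares it against a straight discrimination line; the line coefficients and sample activities are illustrative placeholders, not values from the paper.

```python
import math

def log_ratios(a_135xe, a_133xe, a_133mxe, a_131mxe):
    """Return the log10 activity ratios used for source discrimination."""
    return math.log10(a_135xe / a_133xe), math.log10(a_133mxe / a_131mxe)

def looks_explosion_like(x, y, slope=1.0, intercept=0.0):
    """Classify relative to an assumed straight discrimination line:
    fresh explosion debris is enriched in the short-lived isotopes."""
    return y > slope * x + intercept

x, y = log_ratios(a_135xe=2.0, a_133xe=1.0, a_133mxe=0.5, a_131mxe=0.05)
print(looks_explosion_like(x, y))
```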

137 citations


Journal ArticleDOI
TL;DR: The models under evaluation are described and, for the first time, preliminary results are presented from this unique experiment, which was designed to compare time-invariant 5-year earthquake rate forecasts in California.
Abstract: The ability to successfully predict the future behavior of a system is a strong indication that the system is well understood. Certainly many details of the earthquake system remain obscure, but several hypotheses related to earthquake occurrence and seismic hazard have been proffered, and predicting earthquake behavior is a worthy goal that is demanded by society. Along these lines, one of the primary objectives of the Regional Earthquake Likelihood Models (RELM) working group was to formalize earthquake occurrence hypotheses in the form of prospective earthquake rate forecasts in California. RELM members, working in small research groups, developed more than a dozen 5-year forecasts; they also outlined a performance evaluation method and provided a conceptual description of a Testing Center in which to perform predictability experiments. Subsequently, researchers working within the Collaboratory for the Study of Earthquake Predictability (CSEP) have begun implementing Testing Centers in different locations worldwide, and the RELM predictability experiment—a truly prospective earthquake prediction effort—is underway within the U.S. branch of CSEP. The experiment, designed to compare time-invariant 5-year earthquake rate forecasts, is now approximately halfway to its completion. In this paper, we describe the models under evaluation and present, for the first time, preliminary results of this unique experiment. While these results are preliminary—the forecasts were meant to be evaluated over the full 5-year period—we find interesting results: most of the models are consistent with the observations and one model forecasts the distribution of earthquakes best. We discuss the observed sample of target earthquakes in the context of historical seismicity within the testing region, highlight potential pitfalls of the current tests, and suggest plans for future revisions to experiments such as this one.
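The forecast-consistency testing described here can be sketched compactly if each space-magnitude bin is treated as an independent Poisson variable; the toy L-test-style check below uses invented rates and counts.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)
forecast = np.array([0.5, 1.2, 0.1, 2.0])   # expected counts per bin (toy values)
observed = np.array([1, 0, 0, 3])           # observed counts per bin (toy values)

def joint_loglik(counts, rates):
    """Joint log-likelihood of binned counts under Poisson forecast rates."""
    return poisson.logpmf(counts, rates).sum()

# Locate the observed likelihood within the distribution of likelihoods of
# catalogs simulated from the forecast; a very low quantile flags inconsistency.
L_obs = joint_loglik(observed, forecast)
sims = np.array([joint_loglik(rng.poisson(forecast), forecast)
                 for _ in range(10000)])
print(f"L-test quantile: {(sims <= L_obs).mean():.3f}")
```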

123 citations


Journal ArticleDOI
TL;DR: In this article, Brazilian tests are conducted statically with a material testing machine and dynamically with a split Hopkinson pressure bar system to measure both static and dynamic tensile strength of Barre granite.
Abstract: Granitic rocks usually exhibit strong anisotropy due to pre-existing microcracks induced by long-term geological loadings. Understanding this anisotropy in mechanical properties is critical to a variety of rock engineering applications. In this paper, Brazilian tests are conducted statically with a material testing machine and dynamically with a split Hopkinson pressure bar system to measure both static and dynamic tensile strength of Barre granite. To understand the anisotropy in tensile strength, samples are cored and labelled using the three principal directions of Barre granite to form six sample groups. For dynamic tests, a pulse shaping technique is used to achieve dynamic equilibrium in the samples during the dynamic test. The finite element method is then implemented to formulate equations that relate the failure load to the material tensile strength by employing an orthotropic elastic material model. For samples in the same orientation group, the tensile strength shows clear loading rate dependence. The tensile strengths also exhibit clear anisotropy under static loading while the anisotropy diminishes as the loading rate increases, which may be due to the interaction of pre-existing microcracks.
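For comparison with the paper's orthotropic finite-element treatment, the conventional isotropic closed form for Brazilian tensile strength is a one-liner; the disc dimensions and failure load below are illustrative.

```python
import math

def brazilian_tensile_strength(P, D, t):
    """Isotropic closed-form estimate: sigma_t = 2P / (pi * D * t),
    with failure load P (N), disc diameter D (m) and thickness t (m)."""
    return 2.0 * P / (math.pi * D * t)

# e.g. a 20 kN failure load on a 40 mm diameter, 20 mm thick disc
sigma_t = brazilian_tensile_strength(P=20e3, D=0.040, t=0.020)
print(f"{sigma_t / 1e6:.1f} MPa")  # ~15.9 MPa
```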

107 citations


Journal ArticleDOI
TL;DR: In this paper, a hard gabbro was tested in the laboratory and it was found that there is a critical temperature above which drastic changes in mechanical properties occur and microcracks start developing due to a difference in the thermal expansion coefficients of the crystals.
Abstract: Thermal loading of rocks at high temperatures induces changes in their mechanical properties. In this study, a hard gabbro was tested in the laboratory. Specimens were slowly heated to a maximum temperature of 1,000°C. Subsequent to the thermal loading, specimens were subjected to uniaxial compression. A drastic decrease of both unconfined compressive strength and elastic moduli was observed. The thermal damage of the rock was also highlighted by measuring elastic wave velocities and by monitoring acoustic emissions during testing. The micromechanisms of rock degradation were investigated by analysis of thin sections after each stage of thermal loading. It was found that there is a critical temperature above which drastic changes in mechanical properties occur. Indeed, below a temperature of 600°C, microcracks start developing due to a difference in the thermal expansion coefficients of the crystals. At higher temperatures (above 600°C), oxidation of Fe2+ and Mg2+, as well as bursting of fluid inclusions, are the principal causes of damage. Such mechanical degradation may have dramatic consequences for many geoengineering structures.

105 citations


Journal ArticleDOI
TL;DR: In this article, the authors employ a computationally efficient fault system earthquake simulator, RSQSim, to explore the effects of earthquake nucleation and fault system geometry on earthquake occurrence.
Abstract: We employ a computationally efficient fault system earthquake simulator, RSQSim, to explore effects of earthquake nucleation and fault system geometry on earthquake occurrence. The simulations incorporate rate- and state-dependent friction, high-resolution representations of fault systems, and quasi-dynamic rupture propagation. Faults are represented as continuous planar surfaces, surfaces with a random fractal roughness, and discontinuous fractally segmented faults. Simulated earthquake catalogs have up to 10⁶ earthquakes that span a magnitude range from ∼M4.5 to M8. The seismicity has strong temporal and spatial clustering in the form of foreshocks and aftershocks and occasional large-earthquake pairs. Fault system geometry plays the primary role in establishing the characteristics of stress evolution that control earthquake recurrence statistics. Empirical density distributions of earthquake recurrence times at a specific point on a fault depend strongly on magnitude and take a variety of complex forms that change with position within the fault system. Because fault system geometry is an observable that greatly impacts recurrence statistics, we propose using fault system earthquake simulators to define the empirical probability density distributions for use in regional assessments of earthquake probabilities.
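The proposed use of simulator output can be sketched as follows: collect inter-event times at a fault location and form an empirical recurrence-time density. The toy catalog below is Weibull-distributed rather than RSQSim output.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy event times (years) at one fault location; a simulator would supply these
event_times = np.cumsum(150.0 * rng.weibull(1.5, size=500))
recurrence = np.diff(event_times)

# Empirical probability density of recurrence times
density, edges = np.histogram(recurrence, bins=30, density=True)
mode = 0.5 * (edges[np.argmax(density)] + edges[np.argmax(density) + 1])
print(f"modal recurrence ~ {mode:.0f} yr, mean ~ {recurrence.mean():.0f} yr")
```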

92 citations


Journal ArticleDOI
TL;DR: In this article, the authors extended existing branching models for earthquake occurrences by incorporating potentially important estimates of tectonic deformation and by allowing the parameters in the models to vary across different regimes.
Abstract: We extend existing branching models for earthquake occurrences by incorporating potentially important estimates of tectonic deformation and by allowing the parameters in the models to vary across different tectonic regimes. We partition the Earth’s surface into five regimes: trenches (including subduction zones, oceanic convergent boundaries, and earthquakes in the outer rise or overriding plate); fast spreading ridges and oceanic transforms; slow spreading ridges and transforms; active continental zones; and plate interiors (everything not included in the previous categories). Our purpose is to specialize the models to give them the greatest possible predictive power for use in earthquake forecasts. We expected the parameters of the branching models to be significantly different in the various tectonic regimes, because earlier studies (Bird and Kagan in Bull Seismol Soc Am 94(6):2380–2399, 2004) found that the magnitude limits and other parameters differed between similar categories. We compiled subsets of the CMT and PDE earthquake catalogs corresponding to each tectonic regime and optimized the parameters for each regime, and for the whole Earth, using a maximum likelihood procedure. We also analyzed branching models for California and Nevada using regional catalogs. Our estimates of parameters that can be compared to those of other models were consistent with published results. Examples include the proportion of triggered earthquakes and the exponent describing the temporal decay of triggered earthquakes. We also estimated epicentral location uncertainty and rupture zone size, and our results are consistent with independent estimates. Contrary to our expectation, we found no dramatic differences in the branching parameters for the various tectonic regimes. We did find some modest differences between regimes that were robust under changes in earthquake catalog and lower magnitude threshold. Subduction zones have the highest earthquake rates, the largest upper magnitude limit, and the highest proportion of triggered events. Fast spreading ridges have the smallest upper magnitude limit and the lowest proportion of triggered events. The statistical significance of these variations cannot be assessed until methods are developed for estimating confidence limits reliably. Some results apparently depend on arbitrary decisions adopted in the analysis. For example, the proportion of triggered events decreases as the lower magnitude limit is increased, possibly because our procedure for assigning independence probability favors larger earthquakes. In some tests we censored earthquakes occurring near and just after a previous event, to account for the fact that most such earthquakes will be missing from the catalog. Fortunately, the branching model parameters were hardly affected, suggesting that the inability to measure immediate aftershocks does not cause a serious estimation bias. We compare our branching model with the ETAS model and discuss differences in the models' parametrization and in the results of the earthquake catalog analysis.

79 citations


Journal ArticleDOI
TL;DR: In this paper, the authors estimate kappa from S-wave spectra recorded by the French national strong-motion network (RAP) at various sites in different regions of mainland France.
Abstract: An important parameter for the characterization of strong ground motion at high frequencies (>1 Hz) is kappa, κ, which models a linear decay of the acceleration spectrum, a(f), in log-linear space (i.e. a(f) = A_0 exp(−πκf) for f > f_E, where f is frequency, f_E is a low-frequency limit and A_0 controls the amplitude of the spectrum). κ is a key input parameter in the stochastic method for the simulation of strong ground motion, which is particularly useful for areas with insufficient strong-motion data to enable the derivation of robust empirical ground motion prediction equations, such as mainland France. Numerous studies using strong-motion data from western North America (WNA) (an active tectonic region where surface rock is predominantly soft) and eastern North America (ENA) (a stable continental region where surface rock is predominantly very hard) have demonstrated that κ varies with region and surface geology, with WNA rock sites having a κ of about 0.04 s and ENA rock sites having a κ of about 0.006 s. Lower κs are one reason why high-frequency strong ground motions in stable regions are generally higher than in active regions for the same magnitude and distance. Few, if any, estimates of κs for French sites have been published. Therefore, the purpose of this study is to estimate κ using data recorded by the French national strong-motion network (RAP) for various sites in different regions of mainland France. For each record, a value of κ is estimated by following the procedure developed by Anderson and Hough (Bull Seismol Soc Am 74:1969–1993, 1984): this method is based on the analysis of the S-wave spectrum, which has to be performed manually, thus leading to some uncertainties. For the three French regions where most records are available (the Pyrenees, the Alps and the Côte d'Azur), a regional κ model is developed using weighted regression on the local geology (soil or rock) and source-to-site distance. It is found that the studied regions have a mean κ between the values found for WNA and ENA. For example, for the Alps region a κ value of 0.0254 s is found for rock sites, an estimate reasonably consistent with previous studies.
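The Anderson and Hough (1984) procedure amounts to fitting a straight line to the log acceleration spectrum above f_E; a minimal sketch on a synthetic spectrum with assumed band limits:

```python
import numpy as np

def estimate_kappa(freq, amp, f_e, f_max):
    """Fit ln a(f) = ln A0 - pi*kappa*f over [f_e, f_max]; return kappa in s."""
    sel = (freq >= f_e) & (freq <= f_max)
    slope, _ = np.polyfit(freq[sel], np.log(amp[sel]), 1)
    return -slope / np.pi

# A synthetic spectrum with kappa = 0.03 s recovers the input value
f = np.linspace(1.0, 30.0, 200)
a = 5.0 * np.exp(-np.pi * 0.03 * f)
print(estimate_kappa(f, a, f_e=5.0, f_max=25.0))  # ~0.03
```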

79 citations


Journal ArticleDOI
TL;DR: In this paper, an emission inventory of the four relevant xenon isotopes has been created, which specifies source terms for each power plant, and based on the emissions known, the resulting 133Xe concentration levels at all noble gas stations of the final CTBT verification network were calculated and found to be consistent with observations.
Abstract: Monitoring of radioactive noble gases, in particular xenon isotopes, is a crucial element of the verification of the Comprehensive Nuclear-Test-Ban Treaty (CTBT). The capability of the noble gas network, which is currently under construction, to detect signals from a nuclear explosion critically depends on the background created by other sources. Therefore, the global distribution of these isotopes based on emissions and transport patterns needs to be understood. A significant xenon background exists in the reactor regions of North America, Europe and Asia. An emission inventory of the four relevant xenon isotopes has recently been created, which specifies source terms for each power plant. A few medical radioisotope production facilities have recently been identified as the major emitters of xenon isotopes worldwide, in particular the facilities in Chalk River (Canada), Fleurus (Belgium), Pelindaba (South Africa) and Petten (Netherlands). Emissions from these sites are expected to exceed those of the other sources by orders of magnitude. In this study, emphasis is put on 133Xe, which is the most prevalent xenon isotope. First, based on the known emissions, the resulting 133Xe concentration levels at all noble gas stations of the final CTBT verification network were calculated and found to be consistent with observations. Second, it turned out that emissions from the radioisotope facilities can explain a number of observed peaks, meaning that atmospheric transport modelling is an important tool for the categorization of measurements. Third, it became evident that nuclear power plant emissions are more difficult to treat in the models, since their temporal variation is high and not generally reported. Fourth, there are indications that the assumed annual emissions may be underestimated by factors of two to ten, while the general emission patterns seem to be well understood. Finally, it became evident that 133Xe sources mainly influence the sensitivity of the monitoring system in the mid-latitudes, where the network coverage is particularly good.

67 citations


Journal ArticleDOI
TL;DR: In this paper, the authors summarized the physical and technical principles upon which the radioxenon technology is based and the advances the technology has undergone during the last 10 years, and presented a new generation of radio-enon monitoring equipment for automated and continuous operation in remote field locations.
Abstract: Atmospheric measurement of radioactive xenon isotopes (radioxenon) plays a key role in remote monitoring of nuclear explosions, since it has a high capability to capture radioactive debris for a wide range of explosion scenarios. It is therefore a powerful tool in providing evidence for nuclear testing, and is one of the key components of the verification regime of the Comprehensive Nuclear-Test-Ban Treaty (CTBT). The reliability of this method is largely based on a well-developed measurement technology. In the 1990s, with the prospect of the build-up of a monitoring network for the CTBT, new development of radioxenon equipment started. This article summarizes the physical and technical principles upon which the radioxenon technology is based and the advances the technology has undergone during the last 10 years. In contrast to previously used equipment, which was manually operated, the new generation of radioxenon monitoring equipment is designed for automated and continuous operation in remote field locations. The analytical capabilities of the equipment were also strongly enhanced. Minimum detectable concentrations of the recently developed systems are well below 1 mBq/m³ for the key nuclide 133Xe for sampling periods between 8 and 24 h. All the systems described here are also able to separately measure with low detection limits the radioxenon isotopes 131mXe, 133mXe and 135Xe, which are also relevant for the detection of nuclear tests. The equipment has been extensively tested during recent years by operation in a laboratory environment and in field locations, by performing comparison measurements with laboratory type equipment and by parallel operation. These tests demonstrate that the equipment has reached a sufficiently high technical standard for deployment in the global CTBT verification regime.

66 citations


Journal ArticleDOI
TL;DR: In this article, the magnitude-frequency distribution of a single longwall in Hamm was studied in detail and showed a maximum at ML = 1.4 corresponding to an estimated characteristic source area of about 2,200 m2.
Abstract: Over the last 25 years mining-induced seismicity in the Ruhr area has been continuously monitored by the Ruhr-University Bochum. About 1,000 seismic events with local magnitudes 0.7 ≤ ML ≤ 3.3 are located every year. For example, 1,336 events were located in 2006. General characteristics of induced seismicity in the entire Ruhr area are spatial and temporal correlation with mining activity and a nearly constant energy release per unit time. This suggests that induced stresses are released rapidly by many small events. The magnitude–frequency distribution follows a Gutenberg–Richter relation, which results from combining the distributions of single longwalls that themselves show large variability. A high b-value of about 2 was found, indicating a lack of large-magnitude events. Local analyses of single longwalls indicate that various factors such as local geology and mine layout lead to significant differences in seismicity. Stress redistribution acts very locally, since differences on a small scale of some hundreds of meters are observed. A regional relation between seismic moment M0 and local magnitude ML was derived. The magnitude–frequency distribution of a single longwall in Hamm was studied in detail and shows a maximum at ML = 1.4, corresponding to an estimated characteristic source area of about 2,200 m². Sandstone layers in the hanging or foot wall of the active longwall might fail in these characteristic events. Source mechanisms can mostly be explained by shear failure of two different types above and below the longwall. Fault plane solutions of typical events are consistent with steeply dipping fracture planes parallel to the longwall face and nearly vertical dislocation in the direction of the goaf. We also derive an empirical relation for the decay of ground velocity with epicentral distance and compare maximum observed ground velocity to local magnitude. This is of considerable public interest because about 30 events with ML ≥ 1.2 are felt each month by people living in the mining regions. Our relations indicate, for example, that an event in Hamm with a peak ground velocity of 6 mm/s, which corresponds to a local magnitude ML between 1.7 and 2.3, is likely to be felt within a radius of about 2.3 km from the event.
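A b-value like the ~2 reported here is commonly obtained with the Aki/Utsu maximum-likelihood estimator; a minimal sketch with invented magnitudes (dm is the magnitude binning width):

```python
import numpy as np

def b_value_mle(mags, m_c, dm=0.1):
    """Aki/Utsu maximum-likelihood b-value for a catalog complete above m_c."""
    m = np.asarray(mags)
    m = m[m >= m_c]
    return np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))

mags = [0.8, 1.1, 0.9, 1.4, 0.7, 1.0, 2.1, 0.9, 1.2, 0.8]
print(f"b = {b_value_mle(mags, m_c=0.7):.2f}")
```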

65 citations


Journal ArticleDOI
TL;DR: A new code is presented that integrates the local (range-independent) τp ray equations to provide travel time, range, turning point, and azimuth deviation for any location on the globe given a G2S vector spherical harmonic coefficient set.
Abstract: Expert knowledge suggests that the performance of automated infrasound event association and source location algorithms could be greatly improved by the ability to continually update station travel-time curves to properly account for the hourly, daily, and seasonal changes of the atmospheric state. With the goal of reducing false alarm rates and improving network detection capability, we endeavor to develop, validate, and integrate this capability into infrasound processing operations at the International Data Centre of the Comprehensive Nuclear-Test-Ban Treaty Organization. Numerous studies have demonstrated that incorporation of hybrid ground-to-space (G2S) environmental specifications in numerical calculations of infrasound signal travel time and azimuth deviation yields significantly improved results over those of climatological atmospheric specifications, specifically for tropospheric and stratospheric modes. A robust infrastructure currently exists to generate hybrid G2S vector spherical harmonic coefficients, based on existing operational and empirical models, on a real-time basis (every 3 to 6 hours) (Drob et al., 2003). Thus the next requirement in this endeavor is to refine numerical procedures to calculate infrasound propagation characteristics for robust automatic infrasound arrival identification and network detection, location, and characterization algorithms. We present results from a new code that integrates the local (range-independent) τp ray equations to provide travel time, range, turning point, and azimuth deviation for any location on the globe given a G2S vector spherical harmonic coefficient set. The code employs an accurate numerical technique capable of handling square-root singularities. We investigate the seasonal variability of propagation characteristics over a five-year time series for two different stations within the International Monitoring System with the aim of understanding the capabilities of current working knowledge of the atmosphere and infrasound propagation models. The statistical behavior, or occurrence frequency, of various propagation configurations is discussed. Representative examples of some of these propagation configuration states are also shown.
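A minimal sketch of range-independent τp ray integration for a stratified atmosphere is shown below; the effective sound-speed profile is an invented stand-in for a G2S specification, and the square-root singularity at the turning point is handled only crudely (the paper's code treats it accurately).

```python
import numpy as np

def c_eff(z):
    """Toy effective sound-speed profile (m/s) vs altitude z (m); an invented
    stand-in for a G2S specification with a stratospheric duct."""
    return (340.0
            - 40.0 * np.exp(-((z - 20e3) / 12e3) ** 2)
            + 80.0 * np.exp(-((z - 55e3) / 12e3) ** 2))

def tau_p_travel(p, dz=10.0):
    """Travel time T(p) and ground range X(p) for a ray of parameter p (s/m)
    in a range-independent atmosphere, flat geometry. The square-root
    singularity is handled crudely by stopping one sample short of turning."""
    z = np.arange(0.0, 120e3, dz)
    s = 1.0 / c_eff(z)                    # slowness profile
    if not (s <= p).any():
        return None                       # ray never turns
    n = np.argmax(s <= p)                 # first sample past the turning point
    root = np.sqrt(s[:n] ** 2 - p ** 2)
    T = 2.0 * np.sum(s[:n] ** 2 / root) * dz  # 2 * int s^2 / sqrt(s^2 - p^2) dz
    X = 2.0 * np.sum(p / root) * dz           # 2 * int p / sqrt(s^2 - p^2) dz
    return T, X

T, X = tau_p_travel(p=1.0 / 350.0)
print(f"T ~ {T:.0f} s over X ~ {X / 1e3:.0f} km")
```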

Journal ArticleDOI
TL;DR: The New Zealand Earthquake Forecast Testing Centre aims to encourage the development of testable models of time-varying earthquake occurrence in the New Zealand region, and to conduct verifiable prospective tests of their performance over a period of five or more years.
Abstract: The New Zealand Earthquake Forecast Testing Centre is being established as one of several similar regional testing centres under the umbrella of the Collaboratory for the Study of Earthquake Predictability (CSEP). The Centre aims to encourage the development of testable models of time-varying earthquake occurrence in the New Zealand region, and to conduct verifiable prospective tests of their performance over a period of five or more years. The test region, data-collection region and requirements for testing are described herein. Models must specify in advance the expected number of earthquakes with epicentral depths h ≤ 40 km in bins of time, magnitude and location within the test region. Short-term models will be tested using 24-h time bins at magnitude M ≥ 4. Intermediate-term and long-term models will be tested at M ≥ 5 using 3-month, 6-month and 5-year bins. The tests applied will be the same as at other CSEP testing centres: the so-called N test of the total number of earthquakes expected over the test period; the L test of the likelihood of the earthquake catalogue under the model; and the R test of the ratio of the likelihoods under alternative models. Four long-term, three intermediate-term and two short-term models have been installed to date in the testing centre, with tests of these models commencing on the New Zealand earthquake catalogue from the beginning of 2008. Submission of models is open to researchers worldwide. New models can be submitted at any time. The New Zealand testing centre makes extensive use of software produced by the CSEP testing centre in California. It is envisaged that, in time, the scope of the testing centre will be expanded to include new testing methods and differently specified models, and that the New Zealand testing centre will develop in parallel with other regional testing centres through the CSEP international collaborative process.
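Treating a forecast's total expected number as a Poisson rate gives the N test in closed form; the sketch below follows one common CSEP formulation with illustrative numbers.

```python
from scipy.stats import poisson

def n_test(n_observed, n_forecast):
    """Two-sided Poisson N-test quantiles: delta1 = P(N >= n_obs) and
    delta2 = P(N <= n_obs) under the forecast rate; very small values of
    either quantile indicate under- or over-prediction."""
    delta1 = 1.0 - poisson.cdf(n_observed - 1, n_forecast)
    delta2 = poisson.cdf(n_observed, n_forecast)
    return delta1, delta2

print(n_test(n_observed=12, n_forecast=8.5))
```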

Journal ArticleDOI
TL;DR: In this article, the authors investigated the impact of canyon geometry on the temperature regime and nocturnal heat island development in the very dense urban area of Athens, Greece during the night period of the summer and autumn of 2007.
Abstract: The present paper investigates the impact of canyon geometry on the temperature regime and nocturnal heat island development in the very dense urban area of Athens, Greece. Detailed measurements of air temperature have been carried out within three deep urban canyons of different aspect ratios (H/W = 3, 2.1 and 1.7) during the night period of the summer and autumn of 2007. An analysis was carried out to investigate the relative impact of the canyon geometry, the undisturbed wind velocity, ambient temperature, and cloud cover on the development of a nocturnal heat island. A clear increase of the median, maximum and minimum values of the cooling rates has been observed for decreasing aspect ratios. Under low ambient temperatures, high wind speeds correspond to a substantial rise of the cooling rate in the urban canyons mainly because of the increased convective losses. On the contrary, cooling rates decrease substantially under high undisturbed wind speeds and ambient temperatures because of the important convective gains. The impact of cloud cover was found to be important, as cloudy skies cause a substantial decrease of the cooling rates in the urban canyons. Comparisons were performed between the temperature data collected in the three studied urban canyons and temperatures recorded at an urban and a suburban open-space station.

Journal ArticleDOI
TL;DR: In this paper, location-dependent Rayleigh- and Love-wave group velocities were obtained by inverting the path-averaged group times by means of a damped least-squares approach.
Abstract: Surface wave data were initially collected from events of magnitude Ms ≥ 5.0 and shallow or moderate focal depth that occurred between 1980 and 2002: 713 of them generated Rayleigh waves and 660 Love waves, which were recorded by 13 broadband digital stations in Eurasia and India. In total, 1,525 source-station Rayleigh waveforms and 1,464 Love wave trains have been processed by frequency-time analysis to obtain group velocities. After inverting the path-averaged group times by means of a damped least-squares approach, we have retrieved location-dependent group velocities on a 2° × 2° grid and constructed Rayleigh- and Love-wave group velocity maps at periods of 10.4–105.0 s. Resolution and covariance matrices and the rms group velocity misfit have been computed in order to check the quality of the results. Afterwards, depth-dependent SV- and SH-wave velocity models of the crust and upper mantle are obtained by inversion of local Rayleigh- and Love-wave group velocities using a differential damped least-squares method. The results provide: (a) Rayleigh- and Love-wave group velocities at various periods; (b) SV- and SH-wave differential velocity maps at different depths; (c) sharp images of the subducted lithosphere by velocity cross sections along prefixed profiles; (d) regionalized dispersion curves and velocity-depth models related to the main geological formations. The lithospheric root reaches a depth of ~140 km (Qiangtang Block) and exceptionally ~180 km in some places (Lhasa Block), and exhibits laterally varying fast velocities very close to those of some shields, even reaching ~4.8 km/s under the northern Lhasa Block and the Qiangtang Block. Slow-velocity anomalies of 7–10% or more beneath southern Tibet and the eastern edge of the Plateau support the idea of a mechanically weak middle-to-lower crust and the existence of crustal flow in Tibet.
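The damped least-squares step reduces to regularized normal equations; G, d and the damping below are toy values, not the study's 2° × 2° system.

```python
import numpy as np

def damped_least_squares(G, d, damping):
    """Solve min ||G m - d||^2 + damping^2 ||m||^2 via the normal equations
    (G^T G + damping^2 I) m = G^T d."""
    A = G.T @ G + damping**2 * np.eye(G.shape[1])
    return np.linalg.solve(A, G.T @ d)

# Toy tomography: rows are ray paths, columns are grid cells,
# d holds path-averaged group travel times (illustrative numbers only)
G = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
d = np.array([2.1, 1.9, 2.0])
print(damped_least_squares(G, d, damping=0.1))
```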

Journal ArticleDOI
TL;DR: In this paper, the probability of an earthquake with magnitude M ≥ 7.0 during a specified interval of time has been estimated on the basis of three probabilistic models, namely, Weibull, Gamma and Lognormal, with the help of the earthquake catalogue spanning the period 1846 to 1995.
Abstract: Northeast India and adjoining regions (20°–32° N and 87°–100° E) are highly vulnerable to earthquake hazard in the Indian sub-continent, falling under seismic zones V, IV and III in the seismic zoning map of India with magnitudes M exceeding 8, 7 and 6, respectively. The region has experienced two devastating earthquakes, namely, the Shillong Plateau earthquake of June 12, 1897 (Mw 8.1) and the Assam earthquake of August 15, 1950 (Mw 8.5), that caused huge loss of lives and property in the Indian sub-continent. In the present study, the probabilities of the occurrence of earthquakes with magnitude M ≥ 7.0 during a specified interval of time have been estimated on the basis of three probabilistic models, namely, Weibull, Gamma and Lognormal, with the help of the earthquake catalogue spanning the period 1846 to 1995. The method of maximum likelihood has been used to estimate the earthquake hazard parameters. The logarithm of the likelihood function (ln L) is estimated and used to compare the suitability of the models, and the Gamma model was found to fit the actual data best. The sample mean interval of occurrence of such earthquakes is estimated as 7.82 years in the northeast India region, and the expected mean values for the Weibull, Gamma and Lognormal distributions are estimated as 7.837, 7.820 and 8.269 years, respectively. The estimated cumulative probability for an earthquake M ≥ 7.0 reaches 0.8 after about 15–16 years (2010–2011) and 0.9 after about 18–20 years (2013–2015) from the occurrence of the last earthquake (1995) in the region. The estimated conditional probability also reaches 0.8 to 0.9 after about 13–17 years (2008–2012) in the considered region for an earthquake M ≥ 7.0 when the elapsed time is zero years. However, the conditional probability reaches 0.8 to 0.9 after about 9–13 years (2018–2022) for an earthquake M ≥ 7.0 when the elapsed time is 14 years (i.e. 2009).
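Once a renewal distribution is fitted, the quoted conditional probabilities follow directly; the sketch below assumes a Gamma model whose mean matches the reported 7.82-year interval (the shape and scale are illustrative, not the fitted values).

```python
from scipy.stats import gamma

shape, scale = 2.0, 3.91          # toy parameters; mean = shape*scale = 7.82 yr
model = gamma(shape, scale=scale)

def conditional_probability(t_elapsed, dt):
    """P(event within the next dt years | quiet for t_elapsed years)."""
    return (model.cdf(t_elapsed + dt) - model.cdf(t_elapsed)) / model.sf(t_elapsed)

print(conditional_probability(t_elapsed=14.0, dt=10.0))
```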

Journal ArticleDOI
TL;DR: In this article, the authors examined the first six episodes of reservoir-induced seismicity and found that critical excess pore pressures >~300 kPa and >~600 kPa were needed to induce Episodes I-III and Episodes IV-VI, respectively, suggesting the presence of stronger faults in the region.
Abstract: Continuous reservoir-induced seismicity has been observed in the Koyna–Warna region in western India following the beginning of impoundment of Koyna and Warna Reservoirs in 1961 and 1985, respectively. This seismicity includes 19 events with M ≥ 5.0 which occurred in 7 episodes (I–VII) between 1967 and 2005 at non-repeating hypocentral locations. In this study, we examined the first six episodes. The seismicity occurs by diffusion of pore pressures from the reservoirs to hypocentral locations along a saturated, critically stressed network of NE trending faults and NW trending fractures. We used the daily lake levels in the two reservoirs, from impoundment to 2000, to calculate the time history of the diffused pore pressures and their daily rate of change at the hypocentral locations. The results of our analysis indicate that Episodes I and IV are primarily associated with the initial filling of the two reservoirs. The diffused pore pressures are generated by the large (20–45 m) annual fluctuations of lake levels. We interpret that critical excess pore pressures >~300 kPa and >~600 kPa were needed to induce Episodes I–III and Episodes IV–VI, respectively, suggesting the presence of stronger faults in the region. The exceedance of the previous water level maxima (stress memory) was found to be the most important, although not the determining, factor in inducing the episodes. The annual rise of 40 m or more, rapid filling rates, and elevated dp/dt values over a filling cycle contributed to the rapid increase in pore pressure.
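The diffusion of a step pressure change from a reservoir has a standard 1-D closed form; the distance, diffusivity and load below are assumed for illustration, not the study's calibrated values.

```python
import numpy as np
from scipy.special import erfc

def diffused_pressure(p0, r, t, c):
    """1-D diffusion of a step load p0 applied at the boundary:
    p(r, t) = p0 * erfc(r / sqrt(4 c t)), with r (m), t (s), c (m^2/s)."""
    return p0 * erfc(r / np.sqrt(4.0 * c * t))

year = 365.25 * 24 * 3600.0
# ~30 m of water (~300 kPa) diffusing 8 km through rock with c = 1 m^2/s
print(diffused_pressure(p0=300e3, r=8000.0, t=5 * year, c=1.0) / 1e3, "kPa")
```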

Journal ArticleDOI
TL;DR: In this paper, spectral analysis of magnetic anomaly data from the Erciyes region was used to estimate the Curie-point depth and heat flow and thereby characterize the thermal regime of Central Anatolia.
Abstract: Curie-point depth and heat flow values of the Erciyes region are determined to identify the thermal regime of Central Anatolia by applying the spectral analysis method to the magnetic anomaly data. To compute the spectrum of the data, the magnetic anomaly of the region is transformed into the 2-D Fourier domain to attain the average Curie depth. This method is useful in determining the top boundary of magnetic anomaly sources and reveals the Curie depth as 13.7 km in the study area. The obtained results imply a high thermal gradient (42.3°C km⁻¹) and corresponding heat flow values (88.8 mW m⁻²) in the research area. Using the temperature value measured at a borehole drilled by the General Directorate of Mineral Research and Exploration of Turkey (MTA), the thermal gradient and heat flow were computed as 50.7°C km⁻¹ and 106.5 mW m⁻², respectively. From the heat flow value, the Curie-point depth was determined as 11.4 km in this region. It is concluded from the obtained values that the region has very high geothermal potential caused by partial melting of the lower crust.
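The quoted numbers follow from simple arithmetic once a Curie temperature (580°C, magnetite) and a thermal conductivity (assumed here as 2.1 W m⁻¹ K⁻¹, which approximately reproduces the stated values) are adopted:

```python
T_CURIE = 580.0   # deg C, Curie temperature of magnetite
k = 2.1           # W/(m K), assumed thermal conductivity

z_b = 13.7                 # Curie-point depth from spectral analysis, km
grad = T_CURIE / z_b       # thermal gradient, deg C per km (~42.3)
q = k * grad               # numerically mW/m^2, since 1 degC/km = 1e-3 K/m
print(f"gradient = {grad:.1f} C/km, heat flow = {q:.1f} mW/m^2")

# Inverting the borehole gradient gives the shallower Curie-depth estimate
print(f"z_b from 50.7 C/km: {T_CURIE / 50.7:.1f} km")   # ~11.4 km
```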

Journal ArticleDOI
TL;DR: In this paper, an underground cavern containing several million cubic meters of brine was monitored with a staggered array of 36 one-component, 15 Hz geophones installed in 12 boreholes about 160-360 m deep.
Abstract: Several decades of faulty exploitation of salt through solution mining led to the creation of an underground cavern containing several million cubic meters of brine. To eliminate the huge hazard near a densely inhabited area, a technical solution was implemented to resolve this instability concern through the controlled collapse of the roof while pumping the brine out and filling the cavern with sterile material. To supervise this, an area of over 1 km² was monitored with a staggered array of 36 one-component, 15 Hz geophones installed in 12 boreholes about 160–360 m deep. A total of 2,392 seismic events with Mw −2.6 to 0.2 occurred from July 2005 to March 2006, located with an average accuracy of 18 m. The b-value of the frequency-magnitude distribution exhibited a time variation from 0.5 to 1 and from there to 1.5, suggesting that the collapse initiated as a linear fracture pattern, followed by shear planar fragmentations and finally a 3-D failure process. The branching ratio of the seismicity is indicative of a super-critical process, except for a short period in mid-February when temporary stability existed. Event relocation through the use of a collapsing technique shows that major clusters of seismicity were associated with the main cavern collapse, whereas smaller clusters were generated by the fracturing of smaller nearby caverns. It is shown that one-component recordings allow for stable and reliable point-source event mechanism solutions through automatic moment tensor inversion using time-domain estimates of low-frequency amplitudes with first polarities attached. Detailed analysis of the failure mechanism components uses 912 solutions with condition number CN 0.5. The largest pure shear (DC) components characterize the events surrounding the cavern ceiling, which exhibit normal and strike-slip failures. The majority of mechanism solutions include up to 30% explosional failure components, which correspond to roof caving under gravitational collapsing. The largest vertical deformation rate relates closely to the cavern roof and floor, as well as the rest of the salt formation, whereas the horizontal deformation rate is most prominent in areas of detected collapses.

Journal ArticleDOI
TL;DR: In this article, a 2D electrical resistivity survey was carried out along a 2.5 km baseline, and a takeout of 40 m was used to assess the potential of this method to detect faults from the ground surface.
Abstract: The Experimental platform of Tournemire (Aveyron, France) developed by IRSN (French Institute for Radiological Protection and Nuclear Safety) is located in a tunnel excavated in a clay-rock formation interbedded between two limestone formations. A well-identified regional fault crosscuts this subhorizontal sedimentary succession, and a subvertical secondary fault zone is intercepted in the clay-rock by drifts and boreholes in the tunnel at a depth of about 250 m. A 2D electrical resistivity survey was carried out along a 2.5 km baseline, and a takeout of 40 m was used to assess the potential of this method to detect faults from the ground surface. In the 300 m-thick zone investigated by the survey, electrical resistivity images reveal several subvertical low-resistivity discontinuities. One of these discontinuities corresponds to the position of the Cernon fault, a major regional fault. One of the subvertical conductive discontinuities crossing the upper limestone formation is consistent with the prolongation towards the ground surface of the secondary fault zone identified in the clay-rock formation from the tunnel. Moreover, this secondary fault zone corresponds to the upward prolongation of a subvertical fault identified in the lower limestone using a 3D high-resolution seismic reflection survey. This type of large-scale electrical resistivity survey is therefore a useful tool for identifying faults in superficial layers from the ground surface and is complementary to 3D seismic reflection surveys.

Journal ArticleDOI
TL;DR: In this article, a method for recognizing the background and the induced seismicity statistically is proposed; the method is based on the ETAS model and is applied to the seismicity of southern California, and the sensitivity of the results to the free parameters of the algorithm is analyzed.
Abstract: The concept of background seismicity is strictly related to the identification of spontaneous and triggered earthquakes. The definition of foreshocks, main shocks and aftershocks is currently based on procedures depending on parameters whose values are notoriously assumed by subjective criteria. We propose a method for recognizing the background and the induced seismicity statistically. Rather than using a binary distinction of the events in these two categories, we prefer to assign to each of them a probability of being independent or triggered. This probability comes from an algorithm based on the ETAS model. A certain degree of subjectivity is still present in this procedure, but it is limited by the possibility of adjusting the free parameters of the algorithm by rigorous statistical criteria such as maximum likelihood. We applied the method to the seismicity of southern California and analyzed the sensitivity of the results to the free parameters in the algorithm. Finally, we show how our statistical declustering algorithm may be used for mapping the background seismicity, or the moment rate in a seismic area.
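The core of ETAS-based stochastic declustering is the ratio of background to total intensity at each event; a time-only sketch with toy parameters (the spatial kernel is omitted for brevity):

```python
import numpy as np

# lambda(t) = mu + sum_{t_j < t} K * 10**(alpha*(m_j - m0)) * (t - t_j + c)**(-p)
mu, K, alpha, p, c, m0 = 0.2, 0.02, 1.0, 1.1, 0.01, 3.0   # toy parameters

def background_probability(times, mags):
    """P(event i is background) = mu / lambda(t_i); the complement is its
    probability of having been triggered by earlier events."""
    times, mags = np.asarray(times), np.asarray(mags)
    probs = []
    for i, t in enumerate(times):
        prev = times < t
        g = K * 10 ** (alpha * (mags[prev] - m0)) * (t - times[prev] + c) ** (-p)
        probs.append(mu / (mu + g.sum()))
    return np.array(probs)

print(background_probability([0.0, 0.05, 0.30, 5.00], [5.0, 3.2, 3.5, 4.0]).round(3))
```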

Journal ArticleDOI
TL;DR: In this article, the authors associate waveform-relocated background seismicity and aftershocks with the 3-D shapes of late Quaternary fault zones in southern California.
Abstract: We associate waveform-relocated background seismicity and aftershocks with the 3-D shapes of late Quaternary fault zones in southern California. Major earthquakes that can slip more than several meters, aftershocks, and near-fault background seismicity mostly rupture different surfaces within these fault zones. Major earthquakes rupture along the mapped traces of the late Quaternary faults, called the principal slip zones (PSZs). Aftershocks occur either on or in the immediate vicinity of the PSZs, typically within zones that are ±2 km wide. In contrast, the near-fault background seismicity is mostly accommodated on a secondary heterogeneous network of small slip surfaces, and forms spatially decaying distributions extending out to distances of ±10 km from the PSZs. We call the regions where the enhanced rate of background seismicity occurs seismic damage zones. One possible explanation for the presence of the seismic damage zones and associated seismicity is that the damage develops as faults accommodate bends and geometrical irregularities in the PSZs. The seismic damage zones mature and reach their finite width early in the history of a fault, during the first few kilometers of cumulative offset. Alternatively, the similarity in width of seismic damage zones suggests that most fault zones are of almost equal strength, although the amount of cumulative offset varies widely. It may also depend on the strength of the fault zone, the time since the last major earthquake, as well as other parameters. In addition, the seismic productivity appears to be influenced by the crustal structure and heat flow, with more extensive fault networks in regions of thin crust and high heat flow.

Journal ArticleDOI
TL;DR: In this article, a new interpretation approach is proposed that bypasses Q(f) and uses the attenuation coefficient χ(f) = γ + πf/Q_e(f), directly measuring the residual geometrical spreading, denoted γ, and the effective attenuation, Q_e(f).
Abstract: Variability of the Earth’s structure has a first-order impact on attenuation measurements which often does not receive adequate attention. Geometrical spreading (GS) can be used as a simple measure of the effects of such structure. The traditional simplified GS compensation is insufficiently accurate for attenuation measurements, and the residual GS appears as biases in both the Q_0 and η parameters in the frequency-dependent attenuation law Q(f) = Q_0 f^η. A new interpretation approach bypassing Q(f) and using the attenuation coefficient χ(f) = γ + πf/Q_e(f) resolves this problem by directly measuring the residual GS, denoted γ, and the effective attenuation, Q_e. The approach is illustrated by re-interpreting several published datasets, including nuclear-explosion and local-earthquake codas, Pn, and synthetic 50–300-s surface waves. Some of these examples were key to establishing the Q(f) concept. In all examples considered, χ(f) shows a linear dependence on frequency, γ ≠ 0, and Q_e can be considered frequency-independent. Short-period crustal body waves are characterized by positive γ_SP values of (0.6–2.0) × 10⁻² s⁻¹ interpreted as related to the downward upper-crustal reflectivity. Long-period surface waves show negative γ_LP ≈ −1.9 × 10⁻⁵ s⁻¹, which could be caused by insufficient modeling accuracy at long periods. The above γ values also provide a simple explanation for the absorption band observed within the Earth. The band is interpreted as apparent and formed by levels of Q_e ≈ 1,100 within the crust decreasing to Q_e ≈ 120 within the uppermost mantle, with the frequencies of its flanks corresponding to γ_LP and γ_SP. Therefore, the observed absorption band could be purely geometrical in nature, and relaxation or scattering models may not be necessary for explaining the observed apparent Q(f). Linearity of the attenuation coefficient suggests that at all periods, the attenuation of both Rayleigh and Love waves should be principally accumulated at sub-crustal depths (~38–100 km).
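Fitting the linear attenuation-coefficient model is a two-parameter regression; the sketch below recovers γ and Q_e from a synthetic χ(f):

```python
import numpy as np

def fit_attenuation_coefficient(f, chi):
    """Fit chi(f) = gamma + pi*f/Q_e; return the residual geometrical-spreading
    term gamma (1/s) and the frequency-independent effective Q_e."""
    slope, gamma = np.polyfit(f, chi, 1)
    return gamma, np.pi / slope

# Synthetic short-period data with gamma = 0.01 1/s and Q_e = 1100
f = np.linspace(0.5, 10.0, 40)
chi = 0.01 + np.pi * f / 1100.0
print(fit_attenuation_coefficient(f, chi))  # ~(0.01, 1100.0)
```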

Journal ArticleDOI
TL;DR: In this paper, activity concentration data from ambient radioxenon measurements in ground level air, which were carried out in Europe in the framework of the International Noble Gas Experiment (INGE) in support of the development and build-up of a radio xenon monitoring network for the Comprehensive Nuclear-Test-Ban Treaty verification regime are presented and discussed.
Abstract: Activity concentration data from ambient radioxenon measurements in ground level air, which were carried out in Europe in the framework of the International Noble Gas Experiment (INGE) in support of the development and build-up of a radioxenon monitoring network for the Comprehensive Nuclear-Test-Ban Treaty verification regime are presented and discussed. Six measurement stations provided data from 5 years of measurements performed between 2003 and 2008: Longyearbyen (Spitsbergen, Norway), Stockholm (Sweden), Dubna (Russian Federation), Schauinsland Mountain (Germany), Bruyères-le-Châtel and Marseille (both France). The noble gas systems used within the INGE are designed to continuously measure low concentrations of the four radioxenon isotopes which are most relevant for detection of nuclear explosions: 131mXe, 133mXe, 133Xe and 135Xe with a time resolution less than or equal to 24 h and a minimum detectable concentration of 133Xe less than 1 mBq/m³. This European cluster of six stations is particularly interesting because it is highly influenced by a high density of nuclear power reactors and some radiopharmaceutical production facilities. The activity concentrations at the European INGE stations are studied to characterise the influence of civilian releases, to be able to distinguish them from possible nuclear explosions. It was found that the mean activity concentration of the most frequently detected isotope, 133Xe, was 5–20 mBq/m³ within Central Europe where most nuclear installations are situated (Bruyères-le-Châtel and Schauinsland), 1.4–2.4 mBq/m³ just outside that region (Stockholm, Dubna and Marseille) and 0.2 mBq/m³ in the remote polar station of Spitsbergen. No seasonal trends could be observed from the data. Two interesting events have been examined and their source regions have been identified using atmospheric backtracking methods that deploy Lagrangian particle dispersion modelling and inversion techniques. The results are consistent with known releases of a radiopharmaceutical facility.

Journal ArticleDOI
TL;DR: In this paper, a long distance measurement of radioxenon in Yellowknife, Canada, in late October 2006 was used to enhance the resolution of the DPRK event's xenon source reconstruction.
Abstract: The announced October 2006 nuclear test explosion in the Democratic People’s Republic of Korea (DPRK) has been the first real test regarding the technical capabilities of the verification system built up by the Vienna-based Provisional Technical Secretariat (PTS) of the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) to detect and locate a nuclear test event. This paper enhances the resolution of the DPRK event’s xenon source reconstruction published by Saey et al. (2007, “A long distance measurement of radioxenon in Yellowknife, Canada, in late October 2006”, GRL, Vol. 34, L20802) that was based solely on radioxenon measurements taken at the remote radionuclide station in Yellowknife, Canada, by involving additional measurements taken by a mobile noble gas system deployed quite close to the event location in the Republic of Korea (ROK). Moreover, the horizontal resolution of the forward and backward atmospheric transport modelling methods applied for the source scenario reconstruction has been enhanced appropriately to reflect the considerably shorter source-receptor distances examined in comparison to the previously published source reconstruction. It is shown that the 133Xe measurements in Yellowknife could register 133Xe traces from the nuclear explosion during the first 3 days after the event, while the mobile measurements were rather sensitive to releases during days 2–4 after the explosion. According to the analysis, the most likely source scenario would consist of an initial (possibly up to 21 h delayed) venting of 1 × 10¹⁵ Bq of 133Xe during the first 24 h, followed by a two orders of magnitude weaker seepage during the following 3 days. Both measurements corroborate the scenario of a rather rapid venting and soil diffusion of the 133Xe yielded during the explosion. While the Swedish mobile measurements were crucial to enhancement of the reconstruction of the source scenario, given the installation status of the IMS xenon network at the time of the event, a sensitivity analysis revealed that the fully developed network would have been able to detect 133Xe traces from the Korean explosion at a number of stations and allowed for an even better constraint on the release function. The station Ussuriysk, Russia, being in operation in 2006, would have registered 133Xe within 1 day and with a three orders of magnitude stronger signal compared to the detection at Yellowknife.

Journal ArticleDOI
TL;DR: A preliminary procedure is described for comparing earthquake prediction strategies based on alarm functions, together with simulation procedures for numerical earthquake predictability experiments that involve discretization of the study region and observations.
Abstract: Rigorous predictability experimentation requires a statistical characterization of the performance metric used to evaluate the participating models. We explore the properties of the area skill score measure and consider issues related to experimental discretization. For the case of continuous alarm functions and continuous observations, we present exact analytical solutions that describe the distribution of the area skill score for unskilled predictors, and we also describe how a Gaussian distribution with known mean and variance can be used to approximate the area skill score distribution. We quantify the deviation of the exact distribution from the Gaussian approximation by specifying the kurtosis excess as a function of the number of observed target earthquakes. For numerical earthquake predictability experiments that involve discretization of the study region and observations, we explore simulation procedures for estimating the area skill score distribution, and we present efficient algorithms for various experimental scenarios. When more than one target earthquake occurs within a given space/time/magnitude bin, the probabilities of predicting individual events are not independent, and this requires special consideration. Having presented the statistical properties of the area skill score, we describe and illustrate a preliminary procedure for comparing earthquake prediction strategies based on alarm functions.
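The unskilled-case statistics are easy to check by Monte Carlo under the simplification that each of N target earthquakes falls at an independent, uniformly distributed alarm fraction, making the area skill score an average of N uniforms (mean 1/2, variance 1/(12N)):

```python
import numpy as np

rng = np.random.default_rng(42)
N, trials = 10, 100_000

# Each row is one simulated unskilled experiment; score = mean of N uniforms
scores = rng.random((trials, N)).mean(axis=1)
print(scores.mean(), scores.var(), 1.0 / (12 * N))  # ~0.5, matching variances
```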

Journal ArticleDOI
TL;DR: A simple algorithm is suggested to find the (n, τ_w) representation of all random guess strategies, the set D, and it is proved that there exists a unique choice of w for which D degenerates to the diagonal n + τ_w = 1.
Abstract: The quality of earthquake prediction is usually characterized by a two-dimensional diagram n versus τ, where n is the rate of failures-to-predict and τ is a characteristic of the space-time alarm. Unlike the time prediction case, the quantity τ is not defined uniquely. We start from the case in which τ is a vector with components related to the local alarm times and find a simple structure of the space-time diagram in terms of local time diagrams. This key result is used to analyze the usual 2-d error sets (n, τ_w) in which τ_w is a weighted mean of the τ components and w is the weight vector. We suggest a simple algorithm to find the (n, τ_w) representation of all random guess strategies, the set D, and prove that there exists a unique case of w in which D degenerates to the diagonal n + τ_w = 1. We also find a confidence zone of D on the (n, τ_w) plane for when the local target rates are known only roughly. These facts are important for correct interpretation of (n, τ_w) diagrams when the prediction capability of the data or of prediction methods is discussed.
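The random-guess diagonal can be checked numerically: alarms covering a random fraction τ of space-time miss each event with probability 1 − τ, so the point (n, τ) scatters around n + τ = 1. A toy sketch:

```python
import numpy as np

rng = np.random.default_rng(7)
n_events = rng.poisson(50)   # number of target events in the experiment

def random_guess_point(tau):
    """Declare alarms on a random fraction tau of space-time; each event is
    then predicted with probability tau, independently of the others."""
    missed = rng.random(n_events) > tau
    return missed.mean(), tau

n, tau = random_guess_point(0.3)
print(n + tau)   # scatters around 1: the random-guess diagonal
```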

Journal ArticleDOI
TL;DR: In this paper, the authors proposed an observation network of 12 stations in and around Shikoku and the Kii Peninsula to conduct research for forecasting Tonankai and Nankai earthquakes.
Abstract: In 2006, we started construction of an observation network of 12 stations in and around Shikoku and the Kii Peninsula to conduct research for forecasting Tonankai and Nankai earthquakes. The purpose of the network is to clarify the mechanism of past preseismic groundwater changes and crustal deformation related to Tonankai and Nankai earthquakes. Construction of the network of 12 stations was completed in January 2009. Work on two stations, Hongu-Mikoshi (HGM) and Ichiura (ICU), was finished earlier and they began observations in 2007. These two stations detected strain changes caused by the slow-slip events on the plate boundary in June 2008, although related changes in groundwater levels were not clearly recognized.

Journal ArticleDOI
TL;DR: In this paper, a model for studying aftershock sequences that integrates Coulomb static stress change analysis, seismicity equations based on rate-state friction nucleation of earthquakes, slip of geometrically complex faults, and fractal-like, spatially heterogeneous models of crustal stress is presented.
Abstract: In this paper, we present a model for studying aftershock sequences that integrates Coulomb static stress change analysis, seismicity equations based on rate-state friction nucleation of earthquakes, slip of geometrically complex faults, and fractal-like, spatially heterogeneous models of crustal stress. In addition to modeling instantaneous aftershock seismicity rate patterns with initial clustering on the Coulomb stress increase areas and an approximately 1/t diffusion back to the pre-mainshock background seismicity, the simulations capture previously unmodeled effects. These include production of a significant number of aftershocks in the traditional Coulomb stress shadow zones and temporal changes in aftershock focal mechanism statistics. The occurrence of aftershock stress shadow zones arises from two sources. The first source is spatially heterogeneous initial crustal stress, and the second is slip on geometrically rough faults, which produces localized positive Coulomb stress changes within the traditional stress shadow zones. Temporal changes in simulated aftershock focal mechanisms result in inferred stress rotations that greatly exceed the true stress rotations due to the main shock, even for a moderately strong crust (mean stress 50 MPa) when stress is spatially heterogeneous. This arises from biased sampling of the crustal stress by the synthetic aftershocks due to the non-linear dependence of seismicity rates on stress changes. The model indicates that one cannot use focal mechanism inversion rotations to conclusively demonstrate low crustal strength (≤10 MPa); therefore, studies of crustal strength following a stress perturbation may significantly underestimate the mean crustal stress state for regions with spatially heterogeneous stress.
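The rate-state response of seismicity to a Coulomb stress step has the closed form given by Dieterich (1994); the parameters below are illustrative.

```python
import numpy as np

def dieterich_rate(t, dCFS, A_sigma, t_a, r=1.0):
    """Seismicity rate after a stress step dCFS (same units as A_sigma):
    R(t) = r / (1 + (exp(-dCFS/A_sigma) - 1) * exp(-t/t_a)), where t_a is
    the aftershock decay time; dCFS < 0 yields a stress-shadow rate deficit."""
    return r / (1.0 + (np.exp(-dCFS / A_sigma) - 1.0) * np.exp(-t / t_a))

t = np.logspace(-3, 2, 6)                                  # years
print(dieterich_rate(t, dCFS=0.5, A_sigma=0.05, t_a=10.0)) # Omori-like decay
```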

Journal ArticleDOI
TL;DR: In this paper, the authors simulate high-frequency (>5 Hz) Love-wave propagation in layered earth models using the staggered-grid finite-difference (FD) method and analyze dispersion characteristics for near-surface applications.
Abstract: Love-wave propagation has been a topic of interest to crustal, earthquake, and engineering seismologists for many years because it is independent of Poisson’s ratio and more sensitive to shear (S)-wave velocity changes and layer thickness changes than are Rayleigh waves. It is well known that Love-wave generation requires the existence of a low S-wave velocity layer in a multilayered earth model. In order to study numerically the propagation of Love waves in a layered earth model and dispersion characteristics for near-surface applications, we simulate high-frequency (>5 Hz) Love waves by the staggered-grid finite-difference (FD) method. The air–earth boundary (the shear stress above the free surface) is treated using the stress-imaging technique. We use a two-layer model to demonstrate the accuracy of the staggered-grid modeling scheme. We also simulate four-layer models including a low-velocity layer (LVL) or a high-velocity layer (HVL) to analyze dispersive energy characteristics for near-surface applications. Results demonstrate that: (1) the staggered-grid FD code and stress-imaging technique are suitable for treating the free-surface boundary conditions for Love-wave modeling, (2) Love-wave inversion should be treated with extra care when a LVL exists because of a lack of LVL information in dispersions aggravating uncertainties in the inversion procedure, and (3) energy of high modes in a low-frequency range is very weak, so that it is difficult to estimate the cutoff frequency accurately, and “mode-crossing” occurs between the second higher and third higher modes when a HVL exists.
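The staggered-grid idea, including a stress-imaging free surface, can be conveyed by a 1-D SH analogue (the paper's scheme is 2-D); all material values below are toys.

```python
import numpy as np

nz, dz, dt, nt = 300, 1.0, 2e-4, 2000        # grid cells, step (m), step (s), steps
rho = np.full(nz, 2000.0)                    # density, kg/m^3
mu = np.full(nz, 2000.0 * 200.0**2)          # rigidity for Vs = 200 m/s
mu[:40] = 2000.0 * 100.0**2                  # low-velocity surface layer

v = np.zeros(nz)          # particle velocity at integer nodes
sigma = np.zeros(nz + 1)  # shear stress at half nodes; sigma[0] is the image

for it in range(nt):
    sigma[1:-1] += dt * mu[:-1] * (v[1:] - v[:-1]) / dz
    sigma[0] = -sigma[1]                      # stress imaging: free surface
    v += dt * (sigma[1:] - sigma[:-1]) / (dz * rho)
    v[50] += np.exp(-((it * dt - 0.04) / 0.01) ** 2)   # band-limited source

print(np.abs(v).max())
```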

Journal ArticleDOI
TL;DR: In this article, a study was conducted in three villages (Tumba, Kabazi, and Ndaiga) of Nakasongola District, central Uganda to investigate the hydrogeological characteristics of the basement aquifers.
Abstract: Knowledge of aquifer parameters is essential for management of groundwater resources. Conventionally, these parameters are estimated through pumping tests carried out on water wells. This paper presents a study that was conducted in three villages (Tumba, Kabazi, and Ndaiga) of Nakasongola District, central Uganda, to investigate the hydrogeological characteristics of the basement aquifers. Our objective was to correlate surface resistivity data with aquifer properties in order to reveal the groundwater potential in the district. Existing electrical resistivity and borehole data from 20 villages in Nakasongola District were used to correlate the aquifer apparent resistivity (ρ_e) with its hydraulic conductivity (K_e), and aquifer transverse resistance (TR) with its transmissivity (T_e). K_e was found to be related to ρ_e by Log(K_e) = −0.002ρ_e + 2.692. Similarly, TR was found to be related to T_e by TR = −0.07T_e + 2260. Using these expressions, aquifer parameters (T_c and K_c) were extrapolated from measurements obtained from surface resistivity surveys. Our results show very low resistivities for the presumed water-bearing aquifer zones, possibly because of deteriorating quality of the groundwater and their packing and grain size. Drilling at the preferred VES spots was conducted before the pumping tests to reveal the aquifer characteristics. Aquifer parameters (T_o and K_o) as obtained from pumping tests gave values of (29.4247 m²/day, 3.743 m/day), (9.8011 m²/day, 4.370 m/day) and (31.8524 m²/day, 3.929 m/day). The estimated aquifer parameters (T_c and K_c), extrapolated from surface geoelectrical data, gave (7.1429 m²/day, 3.819 m/day), (28.2000 m²/day, 4.634 m/day) and (19.4286 m²/day, 4.592 m/day) for Tumba, Kabazi, and Ndaiga villages, respectively. Interestingly, the differences between the K_c and K_o pairs were not significant. We observed no significant relationship between the T_c and T_o pairs. The root mean square errors were estimated to be 18.159 m²/day and 0.414 m/day.
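The fitted relation converts a surface-derived apparent resistivity directly into hydraulic conductivity; a minimal sketch using the coefficients quoted in the abstract (the fit is site-specific to Nakasongola, so not transferable elsewhere):

```python
def hydraulic_conductivity(rho_e):
    """K_e (m/day) from apparent resistivity rho_e (ohm-m), using the
    empirical fit log10(K_e) = -0.002 * rho_e + 2.692."""
    return 10 ** (-0.002 * rho_e + 2.692)

for rho in (100.0, 500.0, 1000.0):
    print(rho, round(hydraulic_conductivity(rho), 2))
```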