
Showing papers in "Seismological Research Letters in 2019"


Journal ArticleDOI
TL;DR: Five research areas in seismology are surveyed in which ML classification, regression, and clustering algorithms show promise: earthquake detection and phase picking, earthquake early warning, ground‐motion prediction, seismic tomography, and earthquake geodesy.
Abstract: This article provides an overview of current applications of machine learning (ML) in seismology. ML techniques are becoming increasingly widespread in seismology, with applications ranging from identifying unseen signals and patterns to extracting features that might improve our physical understanding. The survey of the applications in seismology presented here serves as a catalyst for further use of ML. Five research areas in seismology are surveyed in which ML classification, regression, and clustering algorithms show promise: earthquake detection and phase picking, earthquake early warning (EEW), ground‐motion prediction, seismic tomography, and earthquake geodesy. We conclude by discussing the need for a hybrid approach combining data‐driven ML with traditional physical modeling.

287 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigated the possible involvement of the Republic of Korea's first enhanced geothermal system (EGS) project in the 2017 Pohang earthquake, where the epicenter of the earthquake was located near the project's drill site.
Abstract: On the afternoon of 15 November 2017, the coastal city of Pohang, Korea, was rocked by a moment magnitude (Mw) 5.5 earthquake (U.S. Geological Survey). Questions soon arose about the possible involvement in the earthquake of the Republic of Korea's first enhanced geothermal system (EGS) project because the epicenter of the earthquake was located near the project's drill site. The Pohang EGS project was intended to create an artificial geothermal reservoir within low-permeability crystalline basement by hydraulically stimulating the rock to form a connected network of fractures between two wells, PX-1 and PX-2, at a depth of ∼4 km. Forensic examination of the tectonic stress conditions, local geology, well drilling data, the five high-pressure well stimulations undertaken to create the EGS reservoir, and the seismicity induced by injection produced definitive evidence that earthquakes induced by high-pressure injection into the PX-2 well activated a previously unmapped fault that triggered the Mw 5.5 earthquake. Important lessons of a general nature can be learned from the Pohang experience and can serve to increase the safety of future EGS projects in Korea and elsewhere.

153 citations



Journal ArticleDOI
TL;DR: In this article, the authors proposed a new method, named rapid earthquake association and location (REAL), for associating seismic phases and locating seismic events rapidly, simultaneously, and automatically.
Abstract: Rapid association of seismic phases and event location are crucial for real‐time seismic monitoring. We propose a new method, named rapid earthquake association and location (REAL), for associating seismic phases and locating seismic events rapidly, simultaneously, and automatically. REAL combines the advantages of both pick‐based and waveform‐based detection and location methods. It associates arrivals of different seismic phases and locates seismic events primarily through counting the number of P and S picks and secondarily from travel‐time residuals. A group of picks are associated with a particular earthquake if there are enough picks within the theoretical travel‐time windows. The location is determined to be at the grid point with the most picks; if multiple locations have the same maximum number of picks, the grid point among them with the smallest travel‐time residuals is chosen. We refine seismic locations using a least‐squares location method (VELEST) and a high‐precision relative location method (hypoDD). REAL can be used for rapid seismic characterization due to its computational efficiency. As an example application, we apply REAL to earthquakes in the 2016 central Apennines, Italy, earthquake sequence occurring during a five‐day period in October 2016, midway in time between the two largest earthquakes. We associate and locate more than three times as many events (3341) as are in Italy's National Institute of Geophysics and Volcanology routine catalog (862). The spatial distribution of these relocated earthquakes shows a similar but more concentrated pattern relative to the cataloged events. Our study demonstrates that it is possible to characterize seismicity automatically and quickly using REAL and seismic picks.
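
The pick-counting logic described in the abstract can be sketched as follows. This is a simplified, hypothetical illustration of the idea, not the REAL code: the velocities, pick tolerance, and 2D surface grid are all assumed here.

```python
import math

VP, VS = 6.0, 3.46  # assumed constant crustal velocities (km/s)

def associate(picks, grid, origin_time, tol=1.0):
    """Count picks consistent with each candidate grid point.

    picks: list of ((x, y) station coords in km, phase 'P'/'S', arrival time s)
    grid:  list of candidate epicenter (x, y) grid points
    """
    best = None
    for gx, gy in grid:
        matched, residual = 0, 0.0
        for (sx, sy), phase, t_obs in picks:
            dist = math.hypot(sx - gx, sy - gy)
            t_pred = origin_time + dist / (VP if phase == "P" else VS)
            if abs(t_obs - t_pred) <= tol:  # inside the theoretical window
                matched += 1
                residual += abs(t_obs - t_pred)
        key = (matched, -residual)  # most picks first, then smallest residual
        if best is None or key > best[0]:
            best = (key, (gx, gy))
    return best[1], best[0][0]

# toy example: three picks generated from a source at (0, 0), origin time 0
picks = [((10.0, 0.0), "P", 10.0 / VP),
         ((0.0, 20.0), "P", 20.0 / VP),
         ((15.0, 15.0), "S", math.hypot(15.0, 15.0) / VS)]
loc, n_picks = associate(picks, [(0.0, 0.0), (5.0, 5.0)], origin_time=0.0)
```

The tuple comparison implements the abstract's two-level criterion directly: maximize the pick count, and break ties by the smaller summed residual.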

94 citations


Journal ArticleDOI
TL;DR: In this article, the authors describe the latest update to the algorithm, ElarmS v.3.0 (ElarmS-3 or E3), which analyzes the frequency content of incoming signals to better differentiate between teleseismic and local earthquakes.
Abstract: The University of California Berkeley’s (UCB) Earthquake Alert Systems (ElarmS) is a network-based earthquake early warning (EEW) algorithm that was one of the original algorithms developed for the U.S. west-coast-wide ShakeAlert EEW system. Here, we describe the latest update to the algorithm, ElarmS v.3.0 (ElarmS-3 or E3). A new teleseismic filter has been developed for E3 that analyzes the frequency content of incoming signals to better differentiate between teleseismic and local earthquakes. A series of trigger filters, including amplitude-based checks and a horizontal-to-vertical ratio check, have also been added to E3 to improve the quality of triggers that are used to create events. Because of its excellent performance, E3 is now the basis for EPIC, the only ShakeAlert point-source algorithm going forward. We can therefore also use the performance of E3 described here to assess the likely performance of ShakeAlert in the coming public rollout. We should expect false events with magnitudes between M 5 and 6 less than once per year. False events with M ≥ 6 will be even less frequent, with none having been observed in testing. We do not expect to miss any M ≥ 6 onshore earthquakes, though the system may miss some large offshore events and may miss one onshore earthquake between M 5 and 6 per year. Finally, in the metropolitan regions where the station spacing is on the order of 10 km, we expect users 20, 30, and 40 km from an earthquake epicenter to get 3, 6, and 9 s of warning, respectively, before the S-wave shaking begins. Electronic Supplement: Screenshot of the Earthquake Alert Systems (ElarmS) review tool, and example histograms and tables of algorithm performance created by the review tool.
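
The quoted warning times are consistent with a simple back-of-envelope model: warning time is the S-wave arrival minus the total alert latency. The S-wave speed, source depth, and latency below are illustrative assumptions, not parameters of ElarmS-3:

```python
import math

VS = 3.5             # assumed S-wave speed (km/s)
ALERT_LATENCY = 2.7  # assumed detection + processing + delivery time (s)
DEPTH_KM = 8.0       # assumed hypocentral depth (km)

def warning_time(epicentral_km):
    """Seconds of warning before S-wave shaking at a given epicentral distance."""
    hypo_dist = math.hypot(epicentral_km, DEPTH_KM)
    return hypo_dist / VS - ALERT_LATENCY

for d in (20, 30, 40):
    print(f"{d} km: {warning_time(d):.1f} s of warning")
```

With these assumed values, the model gives roughly 3, 6, and 9 s at 20, 30, and 40 km, in line with the abstract's figures.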

Journal ArticleDOI
TL;DR: This work applies convolutional neural networks (CNNs) to the problem of associations to read earthquake waveform arrival pairs between two stations and predict the binary classification of whether the two waveforms are from a common source or different sources.
Abstract: Correctly determining the association of seismic phases across a network is crucial for developing accurate earthquake catalogs. Nearly all established methods use travel-time information as the main criterion for determining associations, and in problems in which earthquake rates are high and many false arrivals are present, many standard techniques may fail to resolve the problem accurately. As an alternative approach, in this work we apply convolutional neural networks (CNNs) to the problem of associations; we train CNNs to read earthquake waveform arrival pairs between two stations and predict the binary classification of whether the two waveforms are from a common source or different sources. Applying the method to a large training dataset of previously cataloged earthquakes in Chile, we obtain >80% true-positive prediction rates for high-frequency data (>2 Hz) and stations separated in excess of 100 km. As a secondary benefit, the output of the neural network can also be used to infer predicted phase types of arrivals. The method is ideally applied in conjunction with standard travel-time-based association routines and can be adapted for arbitrary network geometries and applications, so long as sufficient training data are available. Electronic Supplement: Figures of neural network training histories, and the results of testing the convolutional neural network (CNN) predictions under several different conditions.
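
The pair-input idea can be sketched in miniature: extract features from each station's waveform with 1D convolutions, concatenate them, and score the pair with a logistic output for "same source or not". The kernels and weights below are arbitrary placeholders (an untrained toy to show the data flow, not the paper's CNN):

```python
import math

def conv1d(x, kernel):
    # valid-mode 1D convolution (correlation) of a signal with a small kernel
    k = len(kernel)
    return [sum(x[i + j] * kernel[j] for j in range(k))
            for i in range(len(x) - k + 1)]

def relu(x):
    return [max(0.0, v) for v in x]

def features(waveform, kernels):
    # one scalar feature per kernel: global max of the rectified conv output
    return [max(relu(conv1d(waveform, k))) for k in kernels]

def same_source_prob(wave_a, wave_b, kernels, weights, bias):
    # concatenate the two stations' features, apply a logistic output unit
    z = bias
    feats = features(wave_a, kernels) + features(wave_b, kernels)
    for w, f in zip(weights, feats):
        z += w * f
    return 1.0 / (1.0 + math.exp(-z))

# toy example: identical pulses recorded at two stations (placeholder weights)
kernels = [[0.5, 1.0, 0.5], [1.0, -1.0, 0.0]]
wave = [0.0, 0.1, 1.0, 0.2, 0.0, 0.0]
p = same_source_prob(wave, wave, kernels, weights=[0.4] * 4, bias=-0.5)
```

In a real system the kernels and output weights would be learned from labeled arrival pairs; here they only illustrate the two-branch architecture.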

Journal ArticleDOI
TL;DR: The sparse autoencoder method introduced in this article is effective in attenuating the seismic noise and is capable of preserving subtle features of the data, while removing the spatially incoherent random noise.
Abstract: Seismic waves that are recorded by near-surface sensors are usually disturbed by strong noise. Hence, the recorded seismic data are sometimes of poor quality; this phenomenon can be characterized as a low signal-to-noise ratio (SNR). The low SNR of the seismic data may lower the quality of many subsequent seismological analyses, such as inversion and imaging. Thus, the removal of unwanted seismic noise has significant importance. In this article, we intend to improve the SNR of many seismological datasets by developing a new denoising framework that is based on an unsupervised machine-learning technique. We leverage the unsupervised learning philosophy of the autoencoding method to adaptively learn the seismic signals from the noisy observations. This could potentially enable us to better represent the true seismic-wave components. To mitigate the influence of the seismic noise on the learned features and suppress the trivial components associated with low-amplitude neurons in the hidden layer, we introduce a sparsity constraint to the autoencoder neural network. The sparse autoencoder method introduced in this article is effective in attenuating the seismic noise. More importantly, it is capable of preserving subtle features of the data, while removing the spatially incoherent random noise. We apply the proposed denoising framework to a reflection seismic image, a depth-domain receiver function gather, and an earthquake stack dataset. The purpose of this study is to demonstrate the framework’s potential in real-world applications.

INTRODUCTION

Seismic phases from the discontinuities in the Earth’s interior contain significant constraints for high-resolution deep Earth imaging; however, they sometimes arrive as weak-amplitude waveforms (Rost and Weber, 2001; Rost and Thomas, 2002; Deuss, 2009; Saki et al., 2015; Guan and Niu, 2017, 2018; Schneider et al., 2017; Chai et al., 2018).
The detection of these weak-amplitude seismic phases is sometimes challenging for three main reasons: (1) the amplitude of these phases is very small and can easily be overlooked next to the much larger amplitudes of neighboring phases; (2) the coherency of the weak-amplitude seismic phases is seriously degraded by insufficient array coverage and spatial sampling; and (3) the strong random background noise, which can exceed the weak phases in amplitude, makes the detection even harder. As an example of these challenges, failure to detect the weak reflection phases from mantle discontinuities could result in a misunderstanding of the mineralogy or temperature properties of the Earth's interior. To overcome the challenges in detecting weak seismic phases, we need to develop specific processing techniques. In earthquake seismology, in order to highlight a specific weak phase, recordings in seismic arrays are often shifted and stacked for different slowness and back-azimuth values (Rost and Thomas, 2002). Stacking serves as one of the most widely used approaches for enhancing the energy of target signals. Shearer (1991a) stacked long-period seismograms of shallow earthquakes recorded by the Global Digital Seismograph Network over 5 yr and obtained a gather that clearly shows typical arrivals from the deep Earth. Morozov and Dueker (2003) investigated the effectiveness of stacking in enhancing the signals of receiver functions. They defined a signal-to-noise ratio (SNR) metric based on the multichannel coherency of the signals and the incoherency of the random noise, and they showed that stacking can significantly improve the SNR of the stacked seismic trace. However, stacking methods have some drawbacks. First, they do not necessarily remove the noise present in the signal. Second, they require a large array of seismometers.
Third, they require coherency of arrivals across the array, which is not always available in earthquake seismology. From this point of view, a single-channel method seems to be a better substitute for improving the SNR of seismograms (Mousavi and Langston, 2016, 2017). In the reflection seismology community, many noise attenuation methods have been proposed and implemented in field applications over the past several decades. Prediction-based methods utilize the predictive property of the seismic signal to construct a predictive filter that suppresses noise. Median filters and their variants use statistical principles to reject Gaussian white noise or impulsive noise (Mi et al., 2000; Bonar and Sacchi, 2012). Dictionary-learning-based methods adaptively learn a basis from the data to sparsify the noisy seismic data, which in turn suppresses the noise (Zhang, van der Baan, et al., 2018). These methods require solving dictionary-updating and sparse-coding problems and can be computationally expensive. Decomposition-based methods decompose the noisy data into constitutive components, so that one can easily select the components that primarily represent the signal and remove those associated with noise. This category includes singular value decomposition (SVD)-based methods (Bai et al., 2018), empirical-mode decomposition (Chen, 2016), the continuous wavelet transform (Mousavi et al., 2016), morphological decomposition (Huang et al., 2017), and so on. Rank-reduction-based methods assume that seismic data have a low-rank structure (Kumar et al., 2015; Zhou et al., 2017).
If the data consist of κ complex linear events, the constructed Hankel matrix of the frequency-domain data is a matrix of rank κ (Hua, 1992). Noise will increase the rank of the Hankel matrix of the data and can therefore be attenuated via rank reduction. Such methods include Cadzow filtering (Cadzow, 1988; Zu et al., 2017) and SVD (Vautard et al., 1992). Most of these denoising methods are largely effective in processing reflection seismic images. Applications to more general seismological datasets are seldom reported, partially because many seismological datasets have extremely low data quality; that is, they are characterized by low SNR and poor spatial sampling. Besides, most traditional denoising algorithms rely on carefully tuned parameters to obtain satisfactory performance. These parameters are usually data dependent and require a great deal of experiential knowledge, so such algorithms are not flexible enough for many real-world problems. More research effort has been dedicated to using machine-learning methods for seismological data processing (Chen, 2018a,b; Zhang, Wang, et al., 2018; Bergen et al., 2019; Lomax et al., 2019; McBrearty et al., 2019). Recently, supervised learning (Zhu et al., 2018) has been successfully applied to denoising of seismic signals. However, supervised methods with deep networks require very large training datasets (sometimes on the order of a billion) of clean signals and their noisy contaminated realizations. In this article, we develop a new automatic denoising framework for improving the SNR of seismological datasets based on an unsupervised machine-learning (UML) approach: the autoencoder method. We leverage the autoencoder neural network to adaptively learn features from the raw noisy seismological datasets during the encoding process, and then we optimally represent the data using these learned features during the decoding process.
To effectively suppress the random noise, we use the sparsity constraint to regularize the neurons in the hidden layer. We apply the proposed UML-based denoising framework to a group of seismological datasets, including a reflection seismic image, a receiver function gather, and an earthquake stack. We observe a very encouraging performance, which demonstrates the framework's great potential in a wide range of applications.

METHOD

Unsupervised Autoencoder Method

We will first introduce the autoencoder neural network that we use for denoising seismological datasets. Autoencoders are neural networks that consist of two connected parts (an encoder and a decoder) and try to copy their input to the output layer. Hence, they can automatically learn the main features of the data in an unsupervised manner. In this article, the network is simply a three-layer architecture with an input layer, a hidden layer, and an output layer. The encoding process in the autoencoder neural network can be expressed as follows:

    h = ξ(W1 x + b1),    (1)

in which x is the training sample (x ∈ R^n), ξ is the activation function, and h is the hidden-layer response. The decoding process can be expressed as follows:

    x̂ = ξ(W2 h + b2).    (2)

In equations (1) and (2), W1 is the weighting matrix between the input layer and the hidden layer; b1 is the forward bias vector; W2 is the weighting matrix between the hidden layer and the output layer; and b2 is the backward bias vector. In this study, we use the softplus function as the activation function:

    ξ(x) = log(1 + e^x).    (3)

Sparsity Regularized Autoencoder

To mitigate the influence of the seismic noise on the learned features and suppress the trivial components associated with low-amplitude neurons in the hidden layer, we apply a sparsity constraint to the hidden layer, that is, the output (last) layer of the encoder. The sparsity constraint helps drop out trivial extracted features, which correspond to noise and take small values in the hidden units, and thus highlights the most dominant features in the data: the useful signals. The sparse penalty term (equation 4) applies a penalty function R to the hidden-unit responses.
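
Equations (1)-(3) can be checked numerically with a tiny forward pass. The weights below are arbitrary, and because the exact penalty function R is not reproduced in this excerpt, an L1 penalty on the hidden activations stands in for the sparsity term:

```python
import math

def softplus(z):
    # activation xi in equation (3): xi(x) = log(1 + e^x)
    return math.log(1.0 + math.exp(z))

def affine(W, x, b):
    # matrix-vector product plus bias, W x + b, with W as a list of rows
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi
            for row, bi in zip(W, b)]

def autoencode(x, W1, b1, W2, b2, lam=0.1):
    h = [softplus(z) for z in affine(W1, x, b1)]      # encoder, equation (1)
    x_hat = [softplus(z) for z in affine(W2, h, b2)]  # decoder, equation (2)
    mse = sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)
    sparsity = lam * sum(abs(v) for v in h)  # assumed stand-in for R (eq. 4)
    return x_hat, mse + sparsity

# arbitrary 3-2-3 network: 3 inputs, 2 hidden units, 3 outputs
x = [0.5, 1.0, 0.2]
W1, b1 = [[0.3, 0.1, -0.2], [0.0, 0.4, 0.2]], [0.0, 0.1]
W2, b2 = [[0.5, -0.1], [0.2, 0.3], [-0.4, 0.6]], [0.0, 0.0, 0.0]
x_hat, loss = autoencode(x, W1, b1, W2, b2)
```

Training would minimize this loss over many noisy windows; the sketch only verifies that the encode-decode dimensions and the penalty compose as the equations state.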

Journal ArticleDOI
TL;DR: A database of fully curated HR‐GNSS displacement waveforms for significant earthquakes, including data from HR‐GNSS networks at near‐source to regional distances (1–1000 km) for 29 earthquakes between M_w 6.0 and 9.0 worldwide.
Abstract: Displacement waveforms derived from Global Navigation Satellite System (GNSS) data have become more commonly used by seismologists in the past 15 yrs. Unlike strong‐motion accelerometer recordings that are affected by baseline offsets during very strong shaking, GNSS data record displacement with fidelity down to 0 Hz. Unfortunately, fully processed GNSS waveform data are still scarce because of limited public availability and the highly technical nature of GNSS processing. In an effort to further the use and adoption of high‐rate (HR) GNSS for earthquake seismology, ground‐motion studies, and structural monitoring applications, we describe and make available a database of fully curated HR‐GNSS displacement waveforms for significant earthquakes. We include data from HR‐GNSS networks at near‐source to regional distances (1–1000 km) for 29 earthquakes between M_w 6.0 and 9.0 worldwide. As a demonstration of the utility of this dataset, we model the magnitude scaling properties of peak ground displacements (PGDs) for these events. In addition to tripling the number of earthquakes used in previous PGD scaling studies, the number of data points over a range of distances and magnitudes is dramatically increased. The data are made available as a compressed archive with the article.
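
The PGD magnitude-scaling exercise can be illustrated with a commonly used functional form, log10(PGD) = A + B·Mw + C·Mw·log10(R). The coefficients below are invented for demonstration and are not the regression results of this study:

```python
import math

# illustrative (made-up) coefficients for log10(PGD [cm]) as a function of
# magnitude Mw and hypocentral distance R [km]
A, B, C = -4.4, 1.0, -0.2

def predicted_log_pgd(mw, dist_km):
    """Scaling-law prediction of log10(PGD) at a given distance."""
    return A + B * mw + C * mw * math.log10(dist_km)

def invert_magnitude(pgd_cm, dist_km):
    """Solve the scaling law for Mw given an observed PGD."""
    return (math.log10(pgd_cm) - A) / (B + C * math.log10(dist_km))

# round trip: synthesize a PGD for Mw 7.0 at 100 km, then recover Mw
pgd = 10 ** predicted_log_pgd(7.0, 100.0)
mw_back = invert_magnitude(pgd, 100.0)
```

The distance-dependent magnitude term is what lets a single PGD observation at a known distance yield a rapid magnitude estimate, which is the practical appeal of the scaling relationship.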

Journal ArticleDOI
TL;DR: In this article, the authors demonstrate the performance of the P-alert network during the 2018 Hualien earthquake and demonstrate that the results were in good agreement with the patterns of observed damage in the area.
Abstract: On 6 February 2018, an Mw 6.4 earthquake struck near the city of Hualien, in eastern Taiwan, with a focal depth of 10.4 km. The earthquake caused strong shaking and severe damage to many buildings in Hualien. The maximum intensity during this earthquake reached VII (>0.4g) in the epicentral region, which is extreme in Taiwan and capable of causing damage in built structures. About 17 people died and approximately 285 were injured. Taiwan was one of the first countries to implement an earthquake early warning (EEW) system that is capable of issuing a warning prior to strong shaking. In addition to the official EEW run by the Central Weather Bureau (CWB), a low-cost EEW system (P-alert) has been deployed by National Taiwan University (NTU). The P-alert network is currently operational and is capable of providing on-site EEW as well as a map of expected ground shaking. In the present work, we demonstrate the performance of the P-alert network during the 2018 Hualien earthquake. The shake maps generated by the P-alert network were available within 2 min and are in good agreement with the patterns of observed damage in the area. These shake maps provide insights into rupture directivity that are crucial for earthquake engineering. During this earthquake, individual P-alert stations acted as on-site EEW systems and provided 2–8 s lead time in the blind zone around the epicenter. The coseismic deformation (Cd) is estimated using the records of P-alert stations. The higher Cd values (Cd > 2) in the epicentral region are very helpful to authorities for damage-mitigation response.


Journal ArticleDOI
TL;DR: The Raspberry Shake 4D (RS-4D) as discussed by the authors is a low-cost, all-in-one package that includes a vertical-component geophone, three-component accelerometer, digitizer, and near-real-time miniSEED data transmission and costs only a few hundred dollars per unit.
Abstract: Seismologists have recently begun using low-cost nodal sensors in dense deployments to sample the seismic wavefield at unprecedented spatial resolution. Earthquake early warning systems and other monitoring networks (e.g., wastewater injection) would also benefit from network densification; however, current nodal sensors lack power systems or the real-time data transmission required for these applications. A candidate sensor for these networks may instead be a low-cost, all-in-one package such as the OSOP Raspberry Shake 4D (RS-4D). The RS-4D includes a vertical-component geophone, three-component accelerometer, digitizer, and near-real-time miniSEED data transmission and costs only a few hundred dollars per unit. Here, we step through instrument testing of three RS-4Ds at the Albuquerque Seismological Laboratory (ASL). We find that the geophones have sensitivities constrained to within 4% of nominal, but that they have relatively high self-noise levels compared with the broadband sensors typically used in seismic networks. To demonstrate the impact this would have on characterizing nearby events, we estimate local magnitudes of earthquakes in Oklahoma using Trillium Compact broadband sensor data from U.S. Geological Survey aftershock deployments as well as 23 Raspberry Shakes operated by hobbyists and private owners within the state. We find that for ML 2.0–4.0 earthquakes at distances of 20–100 km from seismic stations, the Raspberry Shakes require events of magnitude ∼0.3 larger than the broadband sensors to reliably estimate ML at a given distance from the epicenter. We conclude that RS-4Ds are suitable for densifying backbone networks designed for studies of local and regional events.
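
The reported ∼0.3-unit shift is what one expects from a sensor whose usable noise floor is roughly twice as high in amplitude, because ML is logarithmic in amplitude. A sketch using Hutton-and-Boore-style attenuation terms (illustrative values, not the study's calibration):

```python
import math

def local_magnitude(amp_mm, dist_km):
    """ML from a Wood-Anderson-equivalent amplitude (mm) at dist_km.

    The attenuation terms follow the Hutton & Boore (1987) form and are
    used here purely for illustration.
    """
    return (math.log10(amp_mm)
            + 1.11 * math.log10(dist_km / 100.0)
            + 0.00189 * (dist_km - 100.0)
            + 3.0)

# a sensor whose smallest usable amplitude is 2x higher can only resolve
# events about log10(2) ~ 0.30 magnitude units larger at the same distance
shift = local_magnitude(2.0 * 0.01, 50.0) - local_magnitude(0.01, 50.0)
```

The distance terms cancel in the difference, so the offset depends only on the amplitude ratio: a factor-of-two noise floor maps directly to ~0.3 magnitude units, consistent in spirit with the abstract's finding.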

Journal ArticleDOI
TL;DR: In this article, the authors used microseismic observations to populate statistical models that forecast expected event magnitudes and used them to make operational decisions during hydraulic fracturing operations in a shale gas site.
Abstract: Earthquakes induced by subsurface fluid injection pose a significant issue across a range of industries. Debate continues as to the most effective methods to mitigate the resulting seismic hazard. Observations of induced seismicity indicate that the rate of seismicity scales with the injection volume and that events follow the Gutenberg–Richter distribution. These two inferences permit us to populate statistical models of the seismicity and extrapolate them to make forecasts of the expected event magnitudes as injection continues. Here, we describe a shale gas site where this approach was used in real time to make operational decisions during hydraulic fracturing operations. Microseismic observations revealed the intersection between hydraulic fracturing and a pre‐existing fault or fracture network that became seismically active. Although “red light” events, requiring a pause to the injection program, occurred on several occasions, the observed event magnitudes fell within expected levels based on the extrapolated statistical models, and the levels of seismicity remained within acceptable limits as defined by the regulator. To date, induced seismicity has typically been regulated using retroactive traffic light schemes. This study shows that the use of high‐quality microseismic observations to populate statistical models that forecast expected event magnitudes can provide a more effective approach.
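
The forecasting logic described above (event count scaling with injected volume, magnitudes following Gutenberg-Richter) can be sketched as follows; all parameter values are illustrative, not the site's calibration:

```python
import math

def expected_max_magnitude(n_events, b):
    """Magnitude at which the expected number of larger events drops to 1,
    assuming a Gutenberg-Richter distribution N(>=M) = 10**(a - b*M) with
    a = log10(total event count)."""
    return math.log10(n_events) / b

EVENTS_PER_M3 = 0.05  # assumed event count per injected m^3 (illustrative)
B_VALUE = 1.2         # assumed Gutenberg-Richter b-value (illustrative)

def forecast(volume_m3):
    """Largest expected magnitude after injecting volume_m3."""
    return expected_max_magnitude(EVENTS_PER_M3 * volume_m3, B_VALUE)
```

A useful property falls out directly: every tenfold increase in injected volume raises the expected maximum magnitude by 1/b units, which is why extrapolating the statistical model ahead of planned injection gives an operator actionable forecasts.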



Journal ArticleDOI
TL;DR: In this paper, a large-scale template-matching catalog between 2010 and 2016 is relocated using the GrowClust algorithm and used to identify approximately 2500 seismogenic fault segments that are in general agreement with focal mechanisms and optimally oriented relative to maximum principal stress measurements.
Abstract: Oklahoma is one of the most seismically active places in the United States as a result of industry activities. To characterize the fault networks responsible for these earthquakes in Oklahoma, we relocated a large-scale template-matching catalog between 2010 and 2016 using the GrowClust algorithm. This relocated catalog is currently the most complete statewide catalog for Oklahoma during this seven-year window. Using this relocated catalog, we identified seismogenic fault segments by developing an algorithm (FaultID) that clusters earthquakes and then identifies linear trends within each cluster. Considering the large number of earthquakes in Oklahoma, this algorithm made the process of identifying previously unmapped seismogenic faults more approachable and objective. We identify approximately 2500 seismogenic fault segments that are in general agreement with focal mechanisms and optimally oriented relative to maximum principal stress measurements. We demonstrate that these fault orientations can be used to approximate the maximum principal stress orientations. Supplemental Content: Relocated earthquake catalog in Oklahoma between 2010 and 2016 and a table of the seismogenic fault segments identified with the FaultID algorithm.
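
The cluster-to-segment step can be illustrated by fitting the dominant linear trend of a cluster of epicenters and converting it to a strike. This is a toy version of the idea, not the FaultID algorithm itself:

```python
import math

def strike_of_cluster(points):
    """Strike (degrees clockwise from north, 0-180) of the dominant linear
    trend of a cluster of (east, north) epicenters in km."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    syy = sum((y - my) ** 2 for _, y in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    # orientation of the principal axis of the 2x2 covariance matrix
    angle = 0.5 * math.atan2(2.0 * sxy, sxx - syy)
    # convert from math convention (CCW from east) to strike (CW from north)
    return (90.0 - math.degrees(angle)) % 180.0

# toy cluster of epicenters along a NE-SW line -> strike of 45 degrees
pts = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0), (3.0, 3.0)]
```

Using total least squares (the principal axis) rather than ordinary regression matters here: a near-north-striking fault would break a y-on-x fit but is handled symmetrically by the covariance eigenvector.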

Journal ArticleDOI
TL;DR: Rodgers et al. as mentioned in this paper reported on high-performance computing (HPC) fully deterministic simulation of ground motions for a moment magnitude (Mw) 7.0 scenario earthquake on the Hayward fault resolved to 5 Hz using the SW4 finite-difference code.
Abstract: Author(s): Rodgers, AJ; Pitarka, A; Anders Petersson, N; Sjogreen, B; McCallen, DB; Abrahamson, N | Abstract: We report on high-performance computing (HPC) fully deterministic simulation of ground motions for a moment magnitude (Mw) 7.0 scenario earthquake on the Hayward fault resolved to 5 Hz using the SW4 finite-difference code. We computed motions obeying physics-based 3D wave propagation at a regional scale with an Mw 7.0 kinematic rupture model generated following Graves and Pitarka (2016). Both plane-layered (1D) and 3D Earth models were considered, with 3D subsurface material properties and topography interpolated from a model of the U.S. Geological Survey (USGS). The resulting ground-motion intensities cover a broader frequency range than typically considered in regional-scale simulations, including higher frequencies relevant for engineering analysis of structures. Median intensities for sites across the domain are within the reported between-event uncertainties (τ) of ground-motion models (GMMs) across spectral periods 0.2-10 s (frequencies 0.1-5 Hz). The within-event standard deviation ϕ of ground-motion intensity measurement residuals ranges from 0.2 to 0.5 natural log units, with values consistently larger for the 3D model. Source-normalized ratios of intensities (3D/1D) reveal patterns of path and site effects that are correlated with known geologic structure. These results demonstrate that earthquake simulations with fully deterministic wave propagation in 3D Earth models on HPC platforms produce broadband ground motions with median and within-event aleatory variability consistent with empirical models. Systematic intensity variations for the 3D model caused by path and site effects suggest that these epistemic effects can be estimated and removed to reduce variation in site-specific hazard estimates.
This study motivates future work to evaluate the validity of the USGS 3D model and investigate the development of path and site corrections by running more scenarios.


Journal ArticleDOI
TL;DR: In this paper, the authors adopt a new, more holistic analysis by linking produced water (PW) volumes, disposal, and seismicity in all major U.S. unconventional oil plays (Bakken, Eagle Ford, and Permian plays, and Oklahoma) and provide guidance for long-term management.
Abstract: With the U.S. unconventional oil revolution, adverse impacts from subsurface disposal of coproduced water, such as induced seismicity, have markedly increased, particularly in Oklahoma. Here, we adopt a new, more holistic analysis by linking produced water (PW) volumes, disposal, and seismicity in all major U.S. unconventional oil plays (Bakken, Eagle Ford, and Permian plays, and Oklahoma) and provide guidance for long-term management. Results show that monthly PW injection volumes doubled across the plays since 2009. We show that the shift in PW disposal to nonproducing geologic zones related to low-permeability unconventional reservoirs is a fundamental driver of induced seismicity. We statistically associate seismicity in Oklahoma with (1) PW injection rates, (2) cumulative PW volumes, and (3) proximity to basement with updated data through 2017. The major difference between intensive seismicity in Oklahoma versus low seismicity levels in the Bakken, Eagle Ford, and Permian basin plays is attributed to proximity to basement, with deep injection near basement in Oklahoma relative to shallower injection distant from basement in other plays. Directives to mitigate Oklahoma seismicity are consistent with our findings: reducing (1) PW injection rates and (2) regional injection volumes by 40% relative to the 2014 total in wells near the basement resulted in a 70% reduction in the number of M ≥ 3 earthquakes in 2017 relative to the 2015 peak seismicity. Understanding linkages between PW management and seismicity allows us to develop a portfolio of strategies to reduce future adverse impacts of PW management, including reuse of PW for hydraulic fracturing in the oil and gas sector.

Journal ArticleDOI
TL;DR: In this paper, the authors use unsupervised machine learning (ML) to identify patterns in the acoustic signal during the laboratory seismic cycle and precursors to labquakes.
Abstract: Recent work shows that machine learning (ML) can predict failure time and other aspects of laboratory earthquakes using the acoustic signal emanating from the fault zone. These approaches use supervised ML to construct a mapping between features of the acoustic signal and fault properties such as the instantaneous frictional state and time to failure. We build on this work by investigating the potential for unsupervised ML to identify patterns in the acoustic signal during the laboratory seismic cycle and precursors to labquakes. We use data from friction experiments showing repetitive stick-slip failure (the lab equivalent of earthquakes) conducted at constant normal stress (2.0 MPa) and constant shearing velocity (10 μm/s). Acoustic emission signals are recorded continuously throughout the experiment at 4 MHz using broadband piezoceramic sensors. Statistical features of the acoustic signal are used with unsupervised ML clustering algorithms to identify patterns (clusters) within the data. We find consistent trends and systematic transitions in the ML clusters throughout the seismic cycle, including some evidence for precursors to labquakes. Further work is needed to connect the ML clustering patterns to physical mechanisms of failure and estimates of the time to failure. Supplemental Content: Figures and text that describe the statistical features, sensitivity analysis of the moving windows, effects of the bandwidth parameter, and additional clustering results.

PRECURSORS TO EARTHQUAKES

Earthquake forecasting is an important problem for mitigating seismic hazard, and it can help illuminate the physics of earthquake nucleation. Forecasts could be based on physical models of the nucleation process or changes in fault-zone properties (so-called precursors) before failure.
However, with current monitoring techniques and models of earthquake nucleation, we are far from forecasting earthquakes or even identifying reliable precursors despite long-standing interest in the problem (Milne, 1899; Marzocchi, 2018) and a broad range of related and direct observations ranging from landslides (Poli, 2017) to glacial motion (e.g., Faillettaz et al., 2015, 2016), geochemical signals (Cui et al., 2017; Martinelli and Dadomo, 2017), geodesy (Chen et al., 2010; Xie et al., 2016; Moro et al., 2017), and seismology (Antonioli et al., 2005; Niu et al., 2008; Rivet et al., 2011; Bouchon et al., 2013). The situation is somewhat better for labquakes. Laboratory friction experiments coupled with ultrasonic measurements have been used to document the approach to failure (Scholz, 1968; Weeks et al., 1978; Chen et al., 1993), with important recent advances in documenting precursors based on spatiotemporal changes in rock properties before failure (Pyrak-Nolte, 2006; Mair et al., 2007; Goebel et al., 2013, 2015; Johnson et al., 2013; Kaproth and Marone, 2013; Hedayat et al., 2014; McLaskey and Lockner, 2014; Scuderi et al., 2016; Jiang et al., 2017; Rouet-Leduc et al., 2017, 2018; Hulbert et al., 2019; Renard et al., 2018; Rivière et al., 2018). Laboratory observations of precursors before earthquake-like failure encompass a variety of measurements, including high-resolution images that illuminate the failure nucleation process. These include passive measurements of acoustic emissions (AEs) (e.g., McLaskey and Lockner, 2014; Goebel et al., 2015), active measurements of fault-zone elastic properties (e.g., Scuderi et al., 2016; Tinti et al., 2016), and direct observations, using x-ray microtomography (micro-CT), of damage evolution in the failure zone (Renard et al., 2017). The micro-CT work reveals microfracture patterns and the interplay between shear deformation and local volume strain (Renard et al., 2017, 2018).
The AE studies show that the Gutenberg–Richter b-value decreases systematically during the laboratory seismic cycle (Goebel et al., 2013; Rivière et al., 2018). In addition, active-source measurements of elastic wavespeed and travel time show systematic changes throughout the laboratory seismic cycle and distinct precursors to failure for the complete spectrum of failure modes from slow to fast elastodynamic events (Kaproth and Marone, 2013; Scuderi et al., 2016; Tinti et al., 2016). These studies include measurements for dozens of repetitive stick-slip failure events showing that elastic wavespeed and transmitted amplitude increase during the linear-elastic loading stage and decrease during inelastic loading. (Seismological Research Letters, Volume 90, Number 3, May/June 2019, p. 1088; doi: 10.1785/0220180367.)

MACHINE LEARNING AND ACOUSTIC SIGNALS BEFORE FAILURE

Recent developments in the application of machine learning (ML) to seismic data suggest a number of possible benefits for seismic hazard analysis and earthquake prediction. One approach shows systematic changes in event occurrence patterns and seismic spectra that could illuminate the earthquake nucleation process (e.g., Holtzman et al., 2018; Wu et al., 2018). Another approach, using laboratory data similar to those that we focus on in this article, has shown that supervised ML can predict stick-slip frictional failure events—the lab equivalent of earthquakes (Rouet-Leduc et al., 2017). These works show that the timing of failure events can be predicted with fidelity using continuous records of the acoustic emissions generated within the fault zone (Rouet-Leduc et al., 2017, 2018; Hulbert et al., 2019). Stick-slip failure events are preceded by a cascade of microfailure events that radiate elastic energy in a manner that foretells catastrophic failure.
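The b-value trend cited above is conventionally quantified with Aki's maximum-likelihood estimator, b = log10(e) / (mean(M) − Mc). A minimal sketch on a synthetic catalog follows; this is the standard estimator for continuous magnitudes, not necessarily the exact procedure of the cited AE studies:

```python
import numpy as np

def b_value(mags, m_min):
    """Aki (1965) maximum-likelihood b-value for magnitudes >= m_min.
    Magnitudes are treated as continuous, so no bin-width correction is applied."""
    m = np.asarray(mags, dtype=float)
    m = m[m >= m_min]
    return np.log10(np.e) / (m.mean() - m_min)

# Synthetic catalog drawn from a Gutenberg-Richter law with b = 1.0:
# P(M >= m) ~ 10**(-b*(m - m_min)), i.e., M - m_min is exponential
# with scale log10(e)/b.
rng = np.random.default_rng(42)
true_b = 1.0
mags = 2.0 + rng.exponential(scale=np.log10(np.e) / true_b, size=50_000)
b_hat = b_value(mags, m_min=2.0)
```

With 50,000 synthetic events the estimate recovers the input b to within a few percent; tracking b_hat in moving windows of an AE catalog is how the systematic decrease before failure is typically measured.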
Remarkably, this signal predicts the time of failure; the slip duration; and for some events, the magnitude of slip. However, successful implementation of a supervised ML algorithm demands access to a large labeled training dataset. Unsupervised ML offers an alternative approach that can be applied when labeled data are not available. The purpose of this article is to explore the application of unsupervised ML to characterize acoustic emissions during the laboratory seismic cycle and search for precursors to failure. This approach differs significantly from previous work using supervised ML in which statistical features are used to build a function that maps an input (statistics of the acoustic signal) to an output (e.g., time to failure). Supervised ML involves a training stage followed by a stage in which the algorithm is tested against new observations. In unsupervised ML, the task at hand is quite different. In our case, the goal is to find structure (clusters) within the seismic signal and track its evolution throughout the seismic cycle. Clusters are characterized and identified within an n-dimensional feature space via an ML clustering algorithm. We use a mean-shift ML clustering algorithm (Cheng, 1995; Comaniciu and Meer, 2002) to assess statistical features of the acoustic signal and compare our results with those obtained using the commonly used k-means clustering algorithm (Tan et al., 2006). We apply both clustering algorithms to 43 statistical features after conducting a principal component analysis (PCA). For comparison to our previous work, we perform a second analysis using only the variance and kurtosis of the acoustic signal, identified as the most significant features in the supervised-ML analysis (Rouet-Leduc et al., 2017, 2018; Hulbert et al., 2019). That is, they improved the accuracy of the ML regression analysis the most out of ∼100 statistical features.
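A minimal numpy-only sketch of the clustering pipeline described above, using window variance and kurtosis as features. It substitutes Lloyd's k-means for mean-shift, uses synthetic data instead of the 4 MHz AE records, and omits the PCA step, so it illustrates the idea rather than reproducing the article's implementation:

```python
import numpy as np

def window_features(signal, win):
    """Variance and kurtosis of non-overlapping windows -- the two statistics
    reported as most informative in the supervised-ML studies."""
    n = len(signal) // win
    w = np.asarray(signal[: n * win], dtype=float).reshape(n, win)
    c = w - w.mean(axis=1, keepdims=True)
    var = (c ** 2).mean(axis=1)
    kurt = (c ** 4).mean(axis=1) / var ** 2
    return np.column_stack([var, kurt])

def kmeans(X, centers, iters=50):
    """Minimal Lloyd's k-means with user-supplied initial centers; a stand-in
    for the mean-shift / k-means clustering used in the article."""
    centers = centers.copy()
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d2.argmin(axis=1)
        for j in range(len(centers)):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Synthetic "acoustic" record: quiet loading followed by a bursty, heavy-tailed
# interval standing in for precursory emissions (not real experimental data).
rng = np.random.default_rng(1)
quiet = rng.normal(0.0, 1.0, 40_000)
bursty = rng.standard_t(df=10, size=40_000) * 5.0
X = window_features(np.concatenate([quiet, bursty]), win=400)
X = (X - X.mean(axis=0)) / X.std(axis=0)      # standardize features
labels, _ = kmeans(X, centers=X[[0, -1]])     # seed one center in each regime
```

The cluster labels separate the two regimes almost perfectly here because the synthetic contrast is large; on real AE data the interesting result is how the cluster memberships evolve within each seismic cycle.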
Our goal is to assess how robust these features are when attempting to identify precursors to failure via unsupervised ML. We acknowledge that using results from a supervised ML study as inputs to an unsupervised ML analysis may violate the truly unsupervised nature of the analysis. However, we argue that this approach is well warranted because it can help connect unsupervised and supervised ML approaches. Our work has the potential to improve the understanding of laboratory precursors and ultimately to improve methods for seismic hazard analysis.

FRICTION STICK-SLIP EXPERIMENTS

We use data from frictional experiments conducted in a biaxial deformation apparatus (Fig. 1a) using the double-direct shear configuration (e.g., Rathbun and Marone, 2010). Two layers of simulated fault gouge are sheared simultaneously within three forcing blocks that contain grooves perpendicular to the shear direction to prevent shear at the layer boundary. The grooves are 0.8 mm deep and spaced every 1.0 mm. The initial gouge layer thickness is ∼5 mm, and the nominal contact area is 100 × 100 mm². The center forcing block (15 cm) is longer than the side blocks (10 cm) so that the friction area remains constant during shear. Our experiment used glass beads with particle diameters in the 104- to 149-μm range to simulate granular fault gouge (Anthony and Marone, 2005). The gouge layers are bounded by cellophane tape around the edges, and a thin rubber jacket is placed around the bottom half of the
[Figure 1a: biaxial double-direct shear apparatus; labeled components include the horizontal DCDT, vertical DCDT, and multichannel PZT blocks]
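For orientation, the double-direct shear geometry means the applied shear load is carried by two gouge layers, so friction follows from a simple force balance over the nominal face area given in the text. The shear-force value below is hypothetical, chosen only to make the arithmetic concrete:

```python
# Double-direct shear force balance. Area and normal stress are the article's
# nominal values; the shear force at failure is a hypothetical example number.
area = 0.100 * 0.100          # nominal contact area per face: 100 mm x 100 mm, in m^2
sigma_n = 2.0e6               # applied normal stress, Pa (2.0 MPa)
f_normal = sigma_n * area     # normal force on each gouge layer, N

f_shear = 24.0e3              # example shear force on the center block, N (hypothetical)
tau = f_shear / (2.0 * area)  # shear stress: the load is shared by TWO layers
mu = tau / sigma_n            # friction coefficient
```

With these numbers the friction coefficient comes out to 0.6, a typical value for sheared glass-bead gouge; forgetting the factor of two for the paired layers would overestimate it by the same factor.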

Journal ArticleDOI
TL;DR: In this article, the authors used the direct current components of accelerometers recording the gravitational acceleration to estimate the sensor orientation at the seafloor observation network for earthquakes and tsunamis along the Japan Trench.
Abstract: The Seafloor Observation Network for Earthquakes and Tsunamis along the Japan Trench (S‐net) is a novel cabled ocean‐bottom station network covering a broad offshore region east of northeastern Japan. To best use the S‐net data, we estimated sensor orientations of all 150 S‐net stations, because without this information the orientations of the measurements in geodetic coordinates cannot be specified. We determined three parameters of the sensor orientation at each station: the tilt angle of the long axis of the cable, the rotation angle around the long axis, and the azimuth of the long axis. We estimated the tilt and rotation angles by using the direct current components of accelerometers recording the gravitational acceleration. The tilt and rotation angles varied only slightly, within the range of 0.001°–0.1°, for most stations during the period from 2016 to 2018, except for coseismic steps in rotation angle greater than 1° caused by the 20 August 2016 Mw 6.0 off‐Sanriku and 20 November 2016 Mw 6.9 off‐Fukushima earthquakes. The long‐axis azimuths were estimated from the particle motions of long‐period Rayleigh waves. We used the accelerometer records in 0.01–0.03 Hz of 7–14 teleseismic earthquakes with Mw 7.0–8.2. The azimuths were constrained with 95% confidence intervals of ±3°–12°. After correcting the original waveforms based on the estimated sensor orientations, we confirmed coherent waveforms across the whole S‐net and separation of Rayleigh and Love waves into radial and transverse components. The waveforms were also coherent with those of on‐land broadband stations. We provide the estimated sensor orientations and rotation matrices for conversion from the XYZ components to the east, north, and up components. The estimated orientations can serve as a fundamental resource for further seismic and geodetic explorations based on S‐net data.
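The tilt and rotation estimation from DC accelerometer components can be sketched as follows. The angle conventions used here (tilt as the long-axis dip from horizontal, rotation as roll about that axis) are illustrative assumptions and may differ from the article's exact definitions:

```python
import numpy as np

def tilt_and_rotation(acc_dc):
    """Tilt of the sensor long axis (taken as X) and rotation about that axis,
    from the DC (gravity) part of a 3-component accelerometer record.
    acc_dc: array of shape (n_samples, 3) holding (x, y, z) accelerations.
    Angle conventions are illustrative, not necessarily the article's."""
    gx, gy, gz = np.mean(np.asarray(acc_dc, dtype=float), axis=0)
    g = np.sqrt(gx**2 + gy**2 + gz**2)
    tilt = np.degrees(np.arcsin(gx / g))        # long-axis dip from horizontal
    rotation = np.degrees(np.arctan2(gy, gz))   # roll about the long axis
    return tilt, rotation

# Synthetic DC record: the gravity vector as seen by a sensor whose long axis
# dips 2 degrees and which is rolled 30 degrees about that axis.
g0 = 9.81
tilt_true, roll_true = np.radians(2.0), np.radians(30.0)
dc = np.tile([g0 * np.sin(tilt_true),
              g0 * np.cos(tilt_true) * np.sin(roll_true),
              g0 * np.cos(tilt_true) * np.cos(roll_true)], (1000, 1))
tilt_est, roll_est = tilt_and_rotation(dc)
```

Averaging the DC components over a long quiet window suppresses transient ground motion, which is why the sub-0.1° repeatability reported in the abstract is achievable from the gravity signal alone.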



Journal ArticleDOI
TL;DR: In this paper, an artificial neural network (ANN) framework was proposed to develop ground motion models for natural and induced earthquakes in Oklahoma, Kansas, and Texas, which can predict peak ground acceleration, peak ground velocity, and spectral accelerations at different frequencies given earthquake magnitude, hypocentral distance, and site condition.
Abstract: This article puts forward an artificial neural network (ANN) framework to develop ground-motion models (GMMs) for natural and induced earthquakes in Oklahoma, Kansas, and Texas. The developed GMMs are mathematical equations that predict peak ground acceleration, peak ground velocity, and spectral accelerations at different frequencies given earthquake magnitude, hypocentral distance, and site condition. The motivation for this research stems from the recent increase in the seismicity rate of this particular region, which is mainly believed to be the result of human activities related to petroleum production and wastewater disposal. The literature has shown that such events generally have shallow depths, leading to large-amplitude shaking, especially at short hypocentral distances. Thus, there is a pressing need to develop site-specific GMMs for this region. This study proposes an ANN-based framework to develop GMMs using a selected database of 4528 ground motions, including 376 seismic events with magnitudes of 3 to 5.8, recorded over the 4- to 500-km hypocentral distance range in these three states since 2005. The results show that the proposed GMMs lead to accurate estimations and have generalization capability for ground motions with a range of seismic characteristics similar to those considered in the database. The sensitivity of the equations to predictive parameters is also presented. Finally, the attenuation of ground motions in this particular region is compared with that in other areas of North America. Electronic Supplement: Text and figures describing the selection of the hidden layer size of the artificial neural network (ANN) models, as well as the sensitivity of ANN models to modeling assumptions.
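A stripped-down illustration of an ANN ground-motion model: a one-hidden-layer network trained by plain numpy gradient descent to fit a synthetic attenuation relation of the form ln PGA ≈ a + b·M − c·ln R. The coefficients, network size, and two-parameter input (magnitude and distance only, omitting site condition) are placeholder assumptions, not the article's trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "catalog": magnitudes and hypocentral distances over the ranges in
# the abstract, with a toy target mimicking a GMM (not a fitted model).
n = 512
mag = rng.uniform(3.0, 5.8, n)
dist = rng.uniform(4.0, 500.0, n)
y = 2.0 + 1.1 * mag - 1.6 * np.log(dist)      # toy ln PGA, arbitrary coefficients

X = np.column_stack([mag, np.log(dist)])
X = (X - X.mean(0)) / X.std(0)                # standardize inputs
t = (y - y.mean()) / y.std()                  # standardize target

# One-hidden-layer network (tanh), trained by full-batch gradient descent.
h = 8
W1 = rng.normal(0, 0.5, (2, h)); b1 = np.zeros(h)
W2 = rng.normal(0, 0.5, h);      b2 = 0.0
lr = 0.05

def forward(X):
    A = np.tanh(X @ W1 + b1)
    return A, A @ W2 + b2

_, pred0 = forward(X)
mse0 = np.mean((pred0 - t) ** 2)              # error before training
for _ in range(2000):
    A, pred = forward(X)
    err = pred - t                            # residual (lr absorbs the factor 2)
    gW2 = A.T @ err / n; gb2 = err.mean()
    dA = np.outer(err, W2) * (1 - A ** 2)     # backprop through tanh
    gW1 = X.T @ dA / n; gb1 = dA.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
_, pred = forward(X)
mse = np.mean((pred - t) ** 2)                # error after training
```

The real models map magnitude, distance, and site condition to several intensity measures at once and are trained on the 4528-record database; this sketch only shows the function-approximation mechanism.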

Journal ArticleDOI
Abstract: Author(s): Shakibay Senobari, Nader; Funning, Gareth J; Keogh, Eamonn; Zhu, Yan; Yeh, Chin-Chia Michael; Zimmerman, Zachary; Mueen, Abdullah