Open access · Journal Article · DOI: 10.1038/S41467-021-21670-X

Real-time determination of earthquake focal mechanism via deep learning.

04 Mar 2021 - Nature Communications (Nature Publishing Group) - Vol. 12, Iss. 1, Article 1432
Abstract: An immediate, fully automated report of the source focal mechanism after a destructive earthquake is crucial for timely characterization of the faulting geometry, evaluation of the stress perturbation, and assessment of aftershock patterns. Advanced technologies such as Artificial Intelligence (AI) have been introduced to solve various problems in real-time seismology, but real-time determination of the source focal mechanism remains a challenge. Here we propose a novel deep learning method, the Focal Mechanism Network (FMNet), to address this problem. The FMNet, trained with 787,320 synthetic samples, successfully estimates the focal mechanisms of four 2019 Ridgecrest earthquakes with magnitudes larger than Mw 5.4. The network learns global waveform characteristics from theoretical data, thereby allowing extensive application of the proposed method to regions of potential seismic hazard with or without historical earthquake data. After receiving data, the network takes less than two hundred milliseconds to predict the source focal mechanism reliably on a single CPU. The authors here present a deep learning method to determine the source focal mechanism of earthquakes in real time. They trained their network with approximately 800k synthetic samples and successfully estimated the focal mechanisms of four 2019 Ridgecrest earthquakes with magnitudes larger than Mw 5.4.
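The paper's FMNet architecture is not reproduced on this page. As a purely illustrative sketch (every name, kernel size, and weight below is a hypothetical toy, not the published network), a 1-D convolutional model mapping a waveform to strike/dip/rake angles could be wired as:

```python
import math

def conv1d(x, kernel):
    """Valid 1-D convolution (cross-correlation, as in deep learning)."""
    k = len(kernel)
    return [sum(x[i + j] * kernel[j] for j in range(k)) for i in range(len(x) - k + 1)]

def relu(v):
    return [max(0.0, a) for a in v]

def global_avg(v):
    return sum(v) / len(v)

def predict_mechanism(waveform, kernels, head):
    """Map a single waveform to (strike, dip, rake) in degrees.
    One conv layer -> ReLU -> global average pooling -> linear head,
    squashed into the usual parameter ranges with a sigmoid."""
    feats = [global_avg(relu(conv1d(waveform, k))) for k in kernels]
    raw = [sum(w * f for w, f in zip(row, feats)) + b for row, b in head]
    sig = [1.0 / (1.0 + math.exp(-r)) for r in raw]
    strike = 360.0 * sig[0]        # 0..360 deg
    dip = 90.0 * sig[1]            # 0..90 deg
    rake = 360.0 * sig[2] - 180.0  # -180..180 deg
    return strike, dip, rake

# Toy weights; a real model would learn these from synthetic waveforms.
kernels = [[0.5, -0.5, 0.25], [0.1, 0.2, -0.3]]
head = [([0.4, -0.2], 0.1), ([0.3, 0.5], -0.2), ([-0.1, 0.2], 0.0)]
wave = [math.sin(0.3 * t) for t in range(200)]
strike, dip, rake = predict_mechanism(wave, kernels, head)
```

Because the output layer is bounded by the sigmoid, predictions always land inside the valid angle ranges; the sub-200 ms CPU latency claimed for FMNet is plausible for such a feed-forward pass, since inference involves no iterative search.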


Topics: Focal mechanism (61%), Aftershock (50%)
Citations

5 results found



Posted Content · DOI: 10.1002/ESSOAR.10507941.1
08 Sep 2021
Abstract: Surface-wave seismograms are widely used by researchers to study Earth’s interior and earthquakes. Reliable results require effective waveform quality control to reduce artifacts from signal comple...


Topics: Waveform (60%), Signal (53%)

Open access · Journal Article · DOI: 10.1016/J.TECTO.2021.229140
Xin Zhao, Chao Wang, Hong Zhang, Yixian Tang +2 more
15 Nov 2021 - Tectonophysics
Abstract: Satellite interferometric synthetic aperture radar (InSAR) has been playing an important role in earthquake surface-deformation observation and source-model inversion, thanks to its fine spatial resolution compared with seismological data. At present, a two-step inversion approach, comprising a nonlinear step that determines the fault geometry (e.g., length, width, depth, strike, dip, rake, slip) and a linear step that estimates the slip distribution, is widely used to obtain seismic source parameters from satellite InSAR data. However, the nonlinear step has weaknesses: it requires prior knowledge, can converge to local minima, and is complex and time-consuming. In previous work, coseismic differential interferograms were fed into a back-propagation neural network (BPNN) for real-time inversion of fault geometry, but because of the simple network structure, the rake and slip parameters were fixed and inversion accuracy was limited. In this paper, we propose a deep learning approach for Earthquake Source Parameter Inversion using ResNet (abbreviated ESPI-ResNet) from satellite InSAR data. ESPI-ResNet first classifies the four fault types and then inverts the seven fault geometry parameters based on a uniform-slip model. We train the model on a large dataset of simulated interferograms whose source-parameter ranges are constrained by seismological and geological data. On the simulated test set, fault-type classification accuracy is 99.6% and the root mean squared error (RMSE) of the inverted fault geometry parameters is low. To find the most suitable model for source-parameter inversion, we further compare the accuracy of different networks, including BPNN, VGG-16, ResNet-18, ResNet-34, ResNet-50, and DenseNet.
We find that a moderate increase in network depth, together with convolution and deep residual learning, improves performance for source-parameter inversion, and therefore ResNet-34 is chosen as the backbone network in this study. Finally, real differential interferograms of the Yutian (2020 Mw 6.3), Jiuzhaigou (2017 Mw 6.5), and Menyuan (2016 Mw 5.9) earthquakes are used to validate the proposed method. For all three real events the fault type is classified correctly, and the inverted results are consistent with existing seismic source-parameter data.
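The deep residual learning the authors credit is simple to state: each block adds a learned correction to its input rather than replacing it, which is what makes the deeper ResNet variants trainable. A minimal sketch (toy values, not ESPI-ResNet's actual layers):

```python
def residual_block(x, f):
    """A residual block computes x + f(x): the layers learn only the residual."""
    return [xi + fi for xi, fi in zip(x, f(x))]

def plain_block(x, f):
    """A plain block must learn the entire mapping f(x) itself."""
    return f(x)

# If the desired mapping is close to the identity, the residual branch can
# simply output ~0 and the signal passes through the block unchanged --
# which is why very deep residual stacks remain easy to optimize.
zero_branch = lambda x: [0.0] * len(x)
x = [1.0, -2.0, 3.5]
out = residual_block(x, zero_branch)
```

A plain block given the same zero branch would destroy the signal (`plain_block(x, zero_branch)` returns all zeros), illustrating why adding depth helps residual networks but can hurt plain ones.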




Open access · Journal Article · DOI: 10.1029/2021JB022685
Abstract: We present an approach for estimating in near real-time full moment tensors of earthquakes and their parameter uncertainties based on short time windows of recorded seismic waveform data by conside...
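The abstract is truncated here, but as standard seismological background (not necessarily this paper's exact formulation): for a fixed source location, the observed waveform samples are linear in the moment-tensor components, d = G·m, so m follows from least squares. A toy normal-equations solver, with a 2-parameter stand-in for the full 6-component tensor:

```python
def lstsq(G, d):
    """Solve min ||G m - d||^2 via the normal equations (G^T G) m = G^T d."""
    n = len(G[0])
    A = [[sum(G[k][i] * G[k][j] for k in range(len(G))) for j in range(n)]
         for i in range(n)]
    b = [sum(G[k][i] * d[k] for k in range(len(G))) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    m = [0.0] * n
    for i in range(n - 1, -1, -1):
        m[i] = (b[i] - sum(A[i][j] * m[j] for j in range(i + 1, n))) / A[i][i]
    return m

# Toy: 2 Green's-function columns, 4 "stations"; the true model is [2.0, -1.0]
# and d is noise-free, so least squares recovers it exactly.
G = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, -1.0]]
d = [2.0, -1.0, 1.0, 5.0]
m = lstsq(G, d)
```

The linearity is what makes near-real-time estimation feasible: once Green's functions are precomputed for candidate locations, each new time window costs only a small linear solve.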


Topics: Moment (mathematics) (62%), Seismic moment (57%), Waveform (52%)
References

62 results found


Journal Article · DOI: 10.1038/NATURE14539
Yann LeCun, Yoshua Bengio, Geoffrey E. Hinton
28 May 2015 - Nature
Abstract: Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
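The backpropagation algorithm the abstract describes is the chain rule applied layer by layer, from the output back to the input. A toy two-layer scalar network, with its analytic gradients checked against finite differences (all numbers here are illustrative):

```python
def forward(x, w1, w2):
    h = max(0.0, w1 * x)   # hidden layer with ReLU
    return h, w2 * h       # (hidden representation, output)

def grads(x, t, w1, w2):
    """Backpropagation: chain rule applied output-to-input."""
    h, y = forward(x, w1, w2)
    dy = 2.0 * (y - t)                    # dL/dy for L = (y - t)^2
    dw2 = dy * h                          # dL/dw2
    dh = dy * w2                          # gradient flowing into hidden layer
    dw1 = dh * x if w1 * x > 0 else 0.0   # ReLU gates the gradient
    return dw1, dw2

# Compare against a central finite-difference approximation.
x, t, w1, w2, eps = 1.5, 2.0, 0.8, -0.5, 1e-6

def loss(a, b):
    return (forward(x, a, b)[1] - t) ** 2

g1, g2 = grads(x, t, w1, w2)
num1 = (loss(w1 + eps, w2) - loss(w1 - eps, w2)) / (2 * eps)
num2 = (loss(w1, w2 + eps) - loss(w1, w2 - eps)) / (2 * eps)
```

The finite-difference check is the standard way to validate a hand-derived backward pass before trusting it for training.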


33,931 Citations


Open access · Proceedings Article · DOI: 10.1109/CVPR.2015.7298965
07 Jun 2015
Abstract: Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20% relative improvement to 62.2% mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.
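The skip architecture described here fuses an upsampled coarse, semantic map with a shallow, fine-grained map. A minimal sketch with toy 2x2 and 4x4 score maps (nearest-neighbor upsampling stands in for the paper's learned deconvolution):

```python
def upsample2x(m):
    """Nearest-neighbor 2x upsampling of a 2-D score map."""
    out = []
    for row in m:
        wide = [v for v in row for _ in range(2)]
        out.append(wide)
        out.append(list(wide))
    return out

def skip_combine(coarse, fine):
    """FCN-style skip connection: upsample the coarse (deep, semantic) map
    and add the fine (shallow, appearance) map element-wise."""
    up = upsample2x(coarse)
    return [[a + b for a, b in zip(ur, fr)] for ur, fr in zip(up, fine)]

coarse = [[1.0, 2.0],
          [3.0, 4.0]]          # deep-layer scores at low resolution
fine = [[0.1] * 4 for _ in range(4)]   # shallow-layer scores at full resolution
fused = skip_combine(coarse, fine)
```

The fused map keeps the fine map's 4x4 resolution while inheriting the coarse map's semantics, which is exactly why the skip architecture sharpens segmentation boundaries.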


Topics: Scale-space segmentation (55%)

18,335 Citations


Journal Article · DOI: 10.1162/NECO.2006.18.7.1527
01 Jul 2006 - Neural Computation
Abstract: We show how to use "complementary priors" to eliminate the explaining-away effects that make inference difficult in densely connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. The fast, greedy algorithm is used to initialize a slower learning procedure that fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of handwritten digit images and their labels. This generative model gives better digit classification than the best discriminative learning algorithms. The low-dimensional manifolds on which the digits lie are modeled by long ravines in the free-energy landscape of the top-level associative memory, and it is easy to explore these ravines by using the directed connections to display what the associative memory has in mind.
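The greedy, one-layer-at-a-time recipe can be sketched with a stand-in layer model. Here each layer is a tied-weight linear autoencoder trained by gradient descent, rather than the paper's RBMs trained by a contrastive algorithm, but the stacking logic is the same: train a layer, freeze it, and train the next layer on its outputs.

```python
def train_layer(data, lr=0.05, steps=200):
    """Train one tied-weight linear autoencoder layer: encode h = w*x,
    decode x_hat = w*h, and minimise mean (x - w*w*x)^2 by gradient descent.
    (A toy stand-in for an RBM layer.)"""
    w = 0.5
    for _ in range(steps):
        g = sum(2.0 * (w * w * x - x) * (2.0 * w * x) for x in data) / len(data)
        w -= lr * g
    return w

def greedy_pretrain(data, n_layers=3):
    """Greedy layer-wise training: fit each layer on the representation
    produced by the already-trained layers below it, then freeze it."""
    weights = []
    for _ in range(n_layers):
        w = train_layer(data)
        weights.append(w)
        data = [w * x for x in data]  # representations feed the next layer
    return weights

weights = greedy_pretrain([1.0, -0.5, 2.0])
```

Each layer here converges to w ≈ 1 (perfect reconstruction of its inputs); in the paper, the same greedy stack initializes a deep belief net that is then fine-tuned end to end.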


Topics: Deep belief network (63%), Convolutional Deep Belief Networks (61%), Generative model (58%)

13,005 Citations


Open access · Proceedings Article · DOI: 10.1109/CVPR.2016.308
27 Jun 2016
Abstract: Convolutional networks are at the core of most state-of-the-art computer vision solutions for a wide variety of tasks. Since 2014, very deep convolutional networks have become mainstream, yielding substantial gains on various benchmarks. Although increased model size and computational cost tend to translate into immediate quality gains for most tasks (as long as enough labeled data is provided for training), computational efficiency and low parameter count remain enabling factors for use cases such as mobile vision and big-data scenarios. Here we explore ways to scale up networks that aim to use the added computation as efficiently as possible, through suitably factorized convolutions and aggressive regularization. We benchmark our methods on the ILSVRC 2012 classification challenge validation set and demonstrate substantial gains over the state of the art: 21.2% top-1 and 5.6% top-5 error for single-frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference and fewer than 25 million parameters. With an ensemble of 4 models and multi-crop evaluation, we report 3.5% top-5 and 17.3% top-1 error on the validation set, and 3.6% top-5 error on the official test set.
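The saving from factorized convolutions is simple arithmetic: two stacked 3x3 convolutions cover the same 5x5 receptive field with 18/25 of the weights, and gain an extra non-linearity between them.

```python
def conv_params(k, c_in, c_out):
    """Weight count of a k x k convolution layer (biases ignored)."""
    return k * k * c_in * c_out

c = 64  # example channel width (an arbitrary illustration, not from the paper)
five = conv_params(5, c, c)            # one 5x5 layer:      25 * c * c weights
two_threes = 2 * conv_params(3, c, c)  # two 3x3 layers: 2 *  9 * c * c weights
ratio = two_threes / five              # 18/25 = 0.72, independent of c
```

The same counting argument motivates the paper's further asymmetric factorizations (e.g. splitting an n x n convolution into 1 x n followed by n x 1).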


Topics: Test set (51%)

12,684 Citations


Journal Article · DOI: 10.1038/NATURE16961
David Silver, Aja Huang, Chris J. Maddison, Arthur Guez +16 more
28 Jan 2016 - Nature
Abstract: The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. Here we introduce a new approach to computer Go that uses ‘value networks’ to evaluate board positions and ‘policy networks’ to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of state-of-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play. We also introduce a new search algorithm that combines Monte Carlo simulation with value and policy networks. Using this search algorithm, our program AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away.
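Combining a policy-network prior with simulation-based value estimates during tree search is commonly written as a PUCT-style selection rule. The sketch below is a generic toy (constants and numbers are illustrative, not AlphaGo's actual configuration):

```python
import math

def select_child(prior, value, visits, c_puct=1.0):
    """PUCT-style selection: argmax over children of Q + U, where
    U = c_puct * P * sqrt(total visits) / (1 + n) mixes the policy-network
    prior P with the simulation-derived value estimate Q."""
    total = sum(visits)
    best, best_score = 0, float("-inf")
    for i, (p, q, n) in enumerate(zip(prior, value, visits)):
        u = c_puct * p * math.sqrt(total) / (1 + n)
        if q + u > best_score:
            best, best_score = i, q + u
    return best

# The high-prior move (index 0) wins here despite a lower value estimate,
# because its exploration bonus U is still large relative to its visit count.
prior  = [0.6, 0.3, 0.1]   # policy-network probabilities
value  = [0.2, 0.5, 0.0]   # mean simulation values Q
visits = [10, 50, 1]       # visit counts n
move = select_child(prior, value, visits)
```

As visits accumulate, U shrinks and the choice is driven increasingly by Q, so the prior guides early exploration without overriding the search in the long run.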


Topics: Monte Carlo tree search (70%), Computer Go (67%), Game mechanics (63%)

10,555 Citations


Performance Metrics
No. of citations received by the paper in previous years:

Year    Citations
2021    5