Journal ArticleDOI

Real-time determination of earthquake focal mechanism via deep learning.

04 Mar 2021-Nature Communications (Nature Publishing Group)-Vol. 12, Iss: 1, pp 1432-1432
TL;DR: In this paper, the authors proposed a novel deep learning method named Focal Mechanism Network (FMNet) to address the problem of real-time source focal mechanism prediction in earthquakes.
Abstract: An immediate, fully automated report of the source focal mechanism after a destructive earthquake is crucial for timely characterizing the faulting geometry, evaluating the stress perturbation, and assessing the aftershock patterns. Advanced technologies such as Artificial Intelligence (AI) have been introduced to solve various problems in real-time seismology, but real-time determination of the source focal mechanism remains a challenge. Here we propose a novel deep learning method named Focal Mechanism Network (FMNet) to address this problem. The FMNet, trained with 787,320 synthetic samples, successfully estimates the focal mechanisms of four 2019 Ridgecrest earthquakes with magnitudes larger than Mw 5.4. The network learns the global waveform characteristics from theoretical data, thereby allowing extensive application of the proposed method to regions of potential seismic hazard with or without historical earthquake data. After receiving data, the network takes less than two hundred milliseconds to predict the source focal mechanism reliably on a single CPU. The authors here present a deep learning method to determine the source focal mechanism of earthquakes in real time. They trained their network with approximately 800k synthetic samples and successfully estimated the focal mechanisms of four 2019 Ridgecrest earthquakes with magnitudes larger than Mw 5.4.
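For context on what the FMNet predicts: a "focal mechanism" is conventionally parameterized by strike, dip, and rake angles, which map to a double-couple moment tensor via the standard Aki & Richards (1980) formulas. A minimal sketch of that conversion (not part of the paper's method; function name and conventions are illustrative, using north-east-down axes):

```python
import math

def dc_moment_tensor(strike, dip, rake, m0=1.0):
    """Double-couple moment tensor (x=north, y=east, z=down) from
    strike/dip/rake in degrees, after Aki & Richards (1980)."""
    phi, delta, lam = (math.radians(a) for a in (strike, dip, rake))
    sd, cd = math.sin(delta), math.cos(delta)
    s2d, c2d = math.sin(2 * delta), math.cos(2 * delta)
    sf, cf = math.sin(phi), math.cos(phi)
    s2f, c2f = math.sin(2 * phi), math.cos(2 * phi)
    sl, cl = math.sin(lam), math.cos(lam)
    mxx = -m0 * (sd * cl * s2f + s2d * sl * sf * sf)
    mxy = m0 * (sd * cl * c2f + 0.5 * s2d * sl * s2f)
    mxz = -m0 * (cd * cl * cf + c2d * sl * sf)
    myy = m0 * (sd * cl * s2f - s2d * sl * cf * cf)
    myz = -m0 * (cd * cl * sf - c2d * sl * cf)
    mzz = m0 * s2d * sl
    return [[mxx, mxy, mxz], [mxy, myy, myz], [mxz, myz, mzz]]

# A pure double couple is symmetric and traceless (no volume change):
m = dc_moment_tensor(strike=40.0, dip=75.0, rake=-10.0)
trace = m[0][0] + m[1][1] + m[2][2]
```

For a vertical left-lateral strike-slip fault (strike 0°, dip 90°, rake 0°) this reduces to the familiar tensor with Mxy = M0 and all other components zero.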


Citations
Journal ArticleDOI
TL;DR: In this paper, the authors reviewed the experimental study of fault slip through field, laboratory, and numerical experiments, which helps provide a clear understanding of earthquake mechanics, and showed that there are five main influencing factors in the study of faults and earthquakes: stress, velocity, material, fluid, and temperature.

29 citations

Journal ArticleDOI
TL;DR: In this paper, a hybrid deep-learning network (HybridNet), composed of CNN and RNN feature extraction blocks, is proposed for predicting on-site peak ground velocity (PGV).
Abstract: Rapidly and accurately predicting on-site peak ground velocity (PGV) is important for earthquake hazard mitigation. Traditional methods used to predict PGV involve a single physics-based parameter, like the peak displacement (Pd) or squared velocity integral (IV2) techniques; deep-learning methods involve a single neural network model, like the convolutional neural network (CNN) or recurrent neural network (RNN), to extract features for estimating the PGV. Here, based on a training dataset of earthquake events that occurred in Japan, we construct a hybrid deep-learning network (HybridNet) for predicting on-site PGV, which consists of CNN and RNN feature extraction blocks. We use physics-based feature time series, waveforms, and a site feature from a single station as the input of the HybridNet model; we then concatenate the features from the CNN block, the RNN block, and the site feature to predict the on-site PGV. We show that, in terms of the standard deviation of error, the mean absolute error, and the coefficient of determination for PGV prediction, the HybridNet model exhibits better performance on the test dataset than the baseline models. Additionally, the potential damage zone (PDZ) can be predicted by interpolating the predicted PGVs at the stations. Based on the predicted PGV of the HybridNet model, we investigate the feasibility of PDZ estimation for five earthquakes (M≥6.5). We find that, within a few seconds after the arrival of the P wave, the predicted PDZ agrees well with the PGV ShakeMap obtained from the U.S. Geological Survey.
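The fusion step the abstract describes — concatenating CNN features, RNN features, and a site feature before a final regression head — can be sketched conceptually. This is not the paper's architecture: the real HybridNet uses trained convolutional and recurrent blocks, whereas the stand-ins below are trivial feature extractors chosen only to make the fusion step concrete; all names are illustrative.

```python
# Conceptual sketch only: each "block" is a stand-in mapping a waveform
# to a small feature vector; the point is the concatenation before the
# final (here linear) regression head.

def cnn_block(waveform, width=3):
    """Stand-in for a conv feature extractor: windowed averages."""
    return [sum(waveform[i:i + width]) / width
            for i in range(0, len(waveform) - width + 1, width)]

def rnn_block(waveform, decay=0.5):
    """Stand-in for a recurrent extractor: exponentially smoothed state."""
    state, states = 0.0, []
    for x in waveform:
        state = decay * state + (1 - decay) * x
        states.append(state)
    return states[-3:]  # keep the last few hidden states

def predict_pgv(waveform, site_feature, weights):
    """Fuse the three feature groups, then apply a linear head."""
    features = cnn_block(waveform) + rnn_block(waveform) + [site_feature]
    return sum(w * f for w, f in zip(weights, features))
```

In the real model the head's weights would be learned jointly with both blocks; here they are simply passed in.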

26 citations

Journal ArticleDOI
TL;DR: The results show that TNNA‐AUS successfully reduces the inversion bias and improves the computational efficiency and inversion accuracy, compared with the global improvement strategy of adding training samples according to the prior distribution of model parameters.

25 citations

Journal ArticleDOI
TL;DR: This work presents an approach for estimating, in near real time, full moment tensors of earthquakes and their parameter uncertainties from short time windows of recorded seismic waveform data.
Abstract: We present an approach for estimating in near real-time full moment tensors of earthquakes and their parameter uncertainties based on short time windows of recorded seismic waveform data by conside...

10 citations

References
Journal ArticleDOI
28 May 2015-Nature
TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Abstract: Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
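The backpropagation idea summarized above — using the gradient of the loss to indicate how a machine should change its internal parameters — can be illustrated on the smallest possible "network", a single weight fit by gradient descent. This toy example is ours, not the review's:

```python
# One-parameter "network" y_hat = w * x with squared-error loss.
# The chain-rule gradient dL/dw tells us which way to move w.

def train(pairs, w=0.0, lr=0.1, epochs=50):
    for _ in range(epochs):
        for x, y in pairs:
            y_hat = w * x               # forward pass
            grad = 2 * (y_hat - y) * x  # backward pass: dL/dw
            w -= lr * grad              # parameter update
    return w

# Data generated by y = 3x; gradient descent should recover w close to 3.
w = train([(1.0, 3.0), (2.0, 6.0), (-1.0, -3.0)])
```

Deep networks apply exactly this update rule, but the chain rule is propagated backward through many layers of parameters instead of one.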

46,982 citations

Proceedings ArticleDOI
07 Jun 2015
TL;DR: The key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning.
Abstract: Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20% relative improvement to 62.2% mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.
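The skip architecture described in this abstract combines a deep, coarse score map with a shallow, fine one. A minimal sketch of that fusion, assuming nearest-neighbour upsampling for simplicity (the paper uses learned deconvolution, and real score maps have many channels):

```python
# Sketch of the FCN skip idea: upsample a coarse, semantically rich map
# and sum it element-wise with a finer-resolution map to recover detail.

def upsample2x(grid):
    """Nearest-neighbour 2x upsampling of a 2-D list-of-lists."""
    out = []
    for row in grid:
        wide = [v for v in row for _ in (0, 1)]
        out.append(wide)
        out.append(list(wide))
    return out

def fuse(coarse, fine):
    """Element-wise sum of the upsampled coarse map and the fine map."""
    up = upsample2x(coarse)
    return [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(up, fine)]
```

In the actual network this fused map is upsampled further (and possibly fused again) until it matches the input resolution, giving a per-pixel prediction.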

28,225 citations

Proceedings ArticleDOI
27 Jun 2016
TL;DR: In this article, the authors explore ways to scale up networks that aim to use the added computation as efficiently as possible, through suitably factorized convolutions and aggressive regularization.
Abstract: Convolutional networks are at the core of most state-of-the-art computer vision solutions for a wide variety of tasks. Since 2014 very deep convolutional networks have started to become mainstream, yielding substantial gains in various benchmarks. Although increased model size and computational cost tend to translate to immediate quality gains for most tasks (as long as enough labeled data is provided for training), computational efficiency and low parameter count are still enabling factors for various use cases such as mobile vision and big-data scenarios. Here we explore ways to scale up networks that aim to use the added computation as efficiently as possible, through suitably factorized convolutions and aggressive regularization. We benchmark our methods on the ILSVRC 2012 classification challenge validation set and demonstrate substantial gains over the state of the art: 21.2% top-1 and 5.6% top-5 error for single-frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference and fewer than 25 million parameters. With an ensemble of 4 models and multi-crop evaluation, we report 3.5% top-5 error and 17.3% top-1 error on the validation set and 3.6% top-5 error on the official test set.
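The factorization argument in this abstract can be made concrete with parameter arithmetic: the paper's canonical example replaces one 5x5 convolution with two stacked 3x3 convolutions, keeping the 5x5 receptive field while cutting weights. A quick check, for C input and C output channels (the channel count 64 is just an example):

```python
# Parameter count of a single k x k convolution (bias ignored).
def conv_params(k, c_in, c_out):
    return k * k * c_in * c_out

c = 64
single_5x5 = conv_params(5, c, c)       # 25 * c * c weights
stacked_3x3 = 2 * conv_params(3, c, c)  # 18 * c * c weights
savings = 1 - stacked_3x3 / single_5x5  # 28% fewer parameters
```

The same arithmetic motivates the paper's further asymmetric factorization of an n x n convolution into 1 x n followed by n x 1.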

16,962 citations

Journal ArticleDOI
TL;DR: A fast, greedy algorithm is derived that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory.
Abstract: We show how to use "complementary priors" to eliminate the explaining-away effects that make inference difficult in densely connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. The fast, greedy algorithm is used to initialize a slower learning procedure that fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of handwritten digit images and their labels. This generative model gives better digit classification than the best discriminative learning algorithms. The low-dimensional manifolds on which the digits lie are modeled by long ravines in the free-energy landscape of the top-level associative memory, and it is easy to explore these ravines by using the directed connections to display what the associative memory has in mind.
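The greedy, one-layer-at-a-time control flow described above can be sketched schematically. The per-layer "training" below is a trivial stand-in (learning and subtracting feature means), not contrastive divergence on an RBM; only the outer loop — train a layer, freeze it, transform the data, move up — matches the procedure in the abstract, and all names are illustrative.

```python
# Schematic of greedy layer-wise pretraining: each layer is fit on the
# representation produced by the (frozen) layers below it.

def fit_layer(data):
    """Stand-in for unsupervised layer training: learn feature means."""
    n, dims = len(data), len(data[0])
    return [sum(row[j] for row in data) / n for j in range(dims)]

def apply_layer(means, data):
    """Frozen layer forward pass: center each feature."""
    return [[x - m for x, m in zip(row, means)] for row in data]

def greedy_pretrain(data, n_layers=3):
    layers = []
    for _ in range(n_layers):
        params = fit_layer(data)          # train this layer only
        layers.append(params)
        data = apply_layer(params, data)  # freeze it, transform the data
    return layers
```

In the paper, the stacked layers initialized this way are then fine-tuned jointly with a contrastive version of the wake-sleep algorithm.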

15,055 citations

Journal ArticleDOI
28 Jan 2016-Nature
TL;DR: Using this search algorithm, the program AlphaGo achieved a 99.8% winning rate against other Go programs and defeated the human European Go champion by 5 games to 0, the first time that a computer program has defeated a human professional player in the full-sized game of Go.
Abstract: The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. Here we introduce a new approach to computer Go that uses ‘value networks’ to evaluate board positions and ‘policy networks’ to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of state-of-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play. We also introduce a new search algorithm that combines Monte Carlo simulation with value and policy networks. Using this search algorithm, our program AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away.

14,377 citations