Open Access · Journal Article · DOI

Learning macroscopic parameters in nonlinear multiscale simulations using nonlocal multicontinua upscaling techniques

TL;DR: A novel nonlocal nonlinear coarse-grid approximation using a machine learning algorithm is presented for unsaturated and two-phase flow problems in heterogeneous and fractured porous media, where the mathematical models are formulated as general multicontinuum models.
About
This article was published in the Journal of Computational Physics on 2020-07-01 and is currently open access. It has received 25 citations to date. The article focuses on the topics: Nonlinear system & Mathematical model.
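The article's specific nonlocal multicontinua upscaling scheme is not reproduced on this page. As a loosely related toy sketch (my own construction, not taken from the article), the following fits a regression from fine-scale permeability statistics to an effective coarse-cell parameter, using the harmonic mean as a stand-in "true" upscaled value; all field sizes and features here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic fine-grid permeability fields (one row per coarse cell), log-normal.
K = np.exp(rng.normal(0.0, 1.0, size=(500, 64)))

# Stand-in target: an effective coarse-cell permeability. The harmonic mean
# is the classical upscaling rule for 1D flow through cells in series.
y = K.shape[1] / np.sum(1.0 / K, axis=1)

# Simple features of the fine-scale field: intercept, arithmetic mean,
# and geometric mean. Fit a linear model by least squares.
X = np.column_stack([
    np.ones(len(K)),
    K.mean(axis=1),
    np.exp(np.log(K).mean(axis=1)),
])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

pred = X @ coef
rel_err = np.mean(np.abs(pred - y) / y)  # mean relative prediction error
```

In the actual paper a neural network learns nonlocal, nonlinear coarse-grid parameters coupling multiple continua; the linear model above only illustrates the general learn-the-upscaled-parameter idea.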


Citations
Journal ArticleDOI

Generalized Multiscale Finite Element method for multicontinua unsaturated flow problems in fractured porous media

TL;DR: Numerical results illustrate that the presented method provides an accurate solution of the unsaturated multicontinua problem on the coarse grid with a substantial reduction in the size of the discrete system.
Posted Content

Preconditioning Markov Chain Monte Carlo Method for Geomechanical Subsidence using multiscale method and machine learning technique

TL;DR: The numerical solution of the poroelasticity problem with stochastic properties is considered, and a two-stage Markov chain Monte Carlo method for geomechanical subsidence is presented.
Journal ArticleDOI

A benchmark study of the multiscale and homogenization methods for fully implicit multiphase flow simulations

TL;DR: This paper develops the first benchmark comparison study of advanced multiscale methods for the simulation of coupled processes in porous media and extends their applicability to fully implicit simulations via the algebraic dynamic multilevel (ADM) method.
Journal ArticleDOI

Developing a homogenization approach for estimation of in-plane effective elastic moduli of hexagonal honeycombs

TL;DR: In this article, a computational technique is developed for determining the effective in-plane properties of hexagonal-core honeycombs, using a homogenization multiscale technique based on averaging theorems.
Journal ArticleDOI

A multi-stage deep learning based algorithm for multiscale model reduction

TL;DR: It is numerically shown that using different reduced-order models as inputs to each stage can improve training, and it is found that the mathematical approach is a systematic way of decoupling information and gives the best result.
References
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
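The adaptive moment estimates that the summary describes can be sketched concretely. Below is a minimal NumPy implementation of the Adam update rule (following the standard published algorithm; the step counts and learning rate in the usage line are arbitrary choices for illustration):

```python
import numpy as np

def adam_minimize(grad, x0, lr=0.001, beta1=0.9, beta2=0.999,
                  eps=1e-8, steps=2000):
    """Minimize a function given its gradient using the Adam update rule."""
    x = np.asarray(x0, dtype=float).copy()
    m = np.zeros_like(x)  # first-moment (mean of gradient) estimate
    v = np.zeros_like(x)  # second-moment (uncentered variance) estimate
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g          # update biased first moment
        v = beta2 * v + (1 - beta2) * g * g      # update biased second moment
        m_hat = m / (1 - beta1 ** t)             # bias correction
        v_hat = v / (1 - beta2 ** t)
        x -= lr * m_hat / (np.sqrt(v_hat) + eps)  # parameter update
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2(x - 3).
x_min = adam_minimize(lambda x: 2 * (x - 3.0), np.array([0.0]),
                      lr=0.05, steps=2000)
```

Dividing the bias-corrected mean by the square root of the bias-corrected second moment gives each parameter its own effective step size, which is what makes the method well suited to sparse or noisy gradients.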
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: A deep convolutional neural network achieved state-of-the-art classification performance; the network consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax.
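The layer structure in the summary can be traced numerically. This sketch walks the spatial resolution through the five convolutional layers and interleaved max-pooling stages using the standard output-size formula; the 227x227 input resolution, kernel sizes, strides, and paddings are the commonly cited values for this architecture, stated here as assumptions rather than quoted from the paper:

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Output spatial size of a convolution or pooling layer."""
    return (size + 2 * pad - kernel) // stride + 1

size = 227                                # assumed input resolution
size = conv_out(size, 11, stride=4)       # conv1 -> 55
size = conv_out(size, 3, stride=2)        # max-pool -> 27
size = conv_out(size, 5, pad=2)           # conv2 -> 27
size = conv_out(size, 3, stride=2)        # max-pool -> 13
size = conv_out(size, 3, pad=1)           # conv3 -> 13
size = conv_out(size, 3, pad=1)           # conv4 -> 13
size = conv_out(size, 3, pad=1)           # conv5 -> 13
size = conv_out(size, 3, stride=2)        # max-pool -> 6

flattened = size * size * 256             # inputs to the first FC layer
fc_layers = [4096, 4096, 1000]            # three FC layers, 1000-way softmax
```

With these assumed hyperparameters the flattened feature map feeding the first fully connected layer has 6 x 6 x 256 = 9216 values.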
Journal ArticleDOI

Deep learning

TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Book

Deep Learning

TL;DR: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts; it is used in many applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and video games.