Open Access · Journal Article (DOI)

B-PINNs: Bayesian Physics-Informed Neural Networks for Forward and Inverse PDE Problems with Noisy Data

TLDR
Compared with PINNs, B-PINNs obtain more accurate predictions in scenarios with large noise thanks to their ability to avoid overfitting; moreover, the dropout approach employed in PINNs can hardly provide accurate predictions with reasonable uncertainty estimates.
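
To make the idea concrete, here is a minimal sketch of a B-PINN log-posterior for a toy 1D Poisson problem u''(x) = -sin(x). The network architecture, noise scales, and prior width below are illustrative assumptions, not taken from the paper; in practice this density would be sampled with HMC or approximated by variational inference.

import torch

# Illustrative sketch (assumed setup, not the paper's code): Gaussian
# likelihoods for noisy data and PDE residuals, plus a Gaussian prior
# over the network weights.
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)

def pde_residual(x):
    # Residual of the toy PDE u''(x) = -sin(x) via automatic differentiation.
    x = x.requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    return d2u + torch.sin(x)

def log_posterior(x_u, u_obs, x_f, sig_u=0.1, sig_f=0.1, sig_w=1.0):
    log_lik_u = -0.5 * (((net(x_u) - u_obs) / sig_u) ** 2).sum()  # data fit
    log_lik_f = -0.5 * ((pde_residual(x_f) / sig_f) ** 2).sum()   # physics fit
    log_prior = -0.5 * sum(((p / sig_w) ** 2).sum() for p in net.parameters())
    return log_lik_u + log_lik_f + log_prior  # target for HMC or VI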
About
This article was published in the Journal of Computational Physics on 2021-01-15 and is currently open access. It has received 410 citations to date. The article focuses on the topics of uncertainty quantification and dropout (neural networks).


Citations
Journal Article (DOI)

Physics-informed machine learning

TL;DR: This review surveys prevailing trends in embedding physics into machine learning, presents current capabilities and limitations, and discusses diverse applications of physics-informed learning to both forward and inverse problems, including discovering hidden physics and tackling high-dimensional problems.
Posted Content

When and why PINNs fail to train: A neural tangent kernel perspective

TL;DR: A novel gradient descent algorithm is proposed that utilizes the eigenvalues of the NTK to adaptively calibrate the convergence rate of the total training error; a series of numerical experiments verifies the correctness of the theory and the practical effectiveness of the proposed algorithm.
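
As a rough illustration of the idea (the function and variable names below are our assumptions, not the authors' code), the weight of each loss term can be set from the trace of its block of the NTK, estimated from per-point parameter gradients:

import torch

# Hedged sketch: tr(K) for one loss term equals the summed squared
# parameter-gradients of its per-point outputs.
def ntk_trace(outputs, params):
    trace = 0.0
    for o in outputs:
        grads = torch.autograd.grad(o.sum(), params, retain_graph=True)
        trace += sum(g.pow(2).sum() for g in grads)
    return trace

def ntk_weights(u_b, r_f, params):
    # u_b: per-point boundary outputs; r_f: per-point PDE residuals.
    tr_b = ntk_trace(u_b, params)  # boundary block of the NTK
    tr_r = ntk_trace(r_f, params)  # residual block of the NTK
    total = tr_b + tr_r
    return total / tr_b, total / tr_r  # lambda_b, lambda_r

# Weighted loss: loss = lam_b * mse_boundary + lam_r * mse_residual,
# with the weights re-estimated periodically during training.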
Journal Article (DOI)

Physics-Informed Neural Networks for Heat Transfer Problems

TL;DR: In this paper, physics-informed neural networks (PINNs) have been applied to various prototype heat transfer problems, targeting in particular realistic conditions not readily tackled with traditional computational methods.
Journal Article (DOI)

hp-VPINNs: Variational physics-informed neural networks with domain decomposition

TL;DR: A general framework for hp-variational physics-informed neural networks (hp-VPINNs) is formulated, based on the nonlinear approximation of shallow and deep neural networks and on hp-refinement via domain decomposition and projection onto a space of high-order polynomials.
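
As a sketch of the variational formulation (a simplified, assumed rendering, not the authors' implementation), the strong-form residual is projected onto Legendre test functions on each subdomain via Gauss quadrature, and the squared projections are penalized:

import numpy as np

def variational_loss(residual_fn, subdomains, n_test=5, n_quad=20):
    nodes, weights = np.polynomial.legendre.leggauss(n_quad)  # on [-1, 1]
    loss = 0.0
    for a, b in subdomains:  # hp-style domain decomposition
        x = 0.5 * (b - a) * nodes + 0.5 * (a + b)  # map quadrature to [a, b]
        w = 0.5 * (b - a) * weights
        r = residual_fn(x)  # strong-form PDE residual at quadrature points
        for k in range(n_test):
            v_k = np.polynomial.legendre.Legendre.basis(k)(nodes)  # test fn
            loss += np.sum(w * r * v_k) ** 2  # squared weak-form residual
    return loss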
Posted Content

Integrating Physics-Based Modeling with Machine Learning: A Survey

TL;DR: This survey provides an overview of techniques for integrating machine learning with physics-based modeling and, from a machine learning standpoint, classifies the methodologies used to construct physics-guided machine learning models and hybrid physics-machine learning frameworks.
References
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
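
For reference, a compact sketch of the update rule the abstract describes (variable names are ours): exponential moving averages of the gradient and its square, with bias correction.

import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad          # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - b1 ** t)             # bias correction, t >= 1
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v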
Journal Article (DOI)

Deep learning

TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Journal Article (DOI)

Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations

TL;DR: In this article, the authors introduce physics-informed neural networks, which are trained to solve supervised learning tasks while respecting any given laws of physics described by general nonlinear partial differential equations.
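
The training objective the abstract refers to combines a data-misfit term with a PDE-residual term; a minimal sketch (with `net` and `residual` as placeholder names) looks like:

import torch

def pinn_loss(net, residual, x_data, u_data, x_colloc):
    mse_data = ((net(x_data) - u_data) ** 2).mean()  # supervised data fit
    mse_pde = (residual(net, x_colloc) ** 2).mean()  # physics constraint
    return mse_data + mse_pde

# Typical training loop (Adam, cf. the reference above):
# opt = torch.optim.Adam(net.parameters(), lr=1e-3)
# for _ in range(10000):
#     opt.zero_grad()
#     pinn_loss(net, residual, x_d, u_d, x_c).backward()
#     opt.step()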
Book

Bayesian learning for neural networks

TL;DR: Bayesian Learning for Neural Networks shows that Bayesian methods allow complex neural network models to be used without fear of the "overfitting" that can occur with traditional neural network learning methods.
Proceedings Article

Dropout as a Bayesian approximation: representing model uncertainty in deep learning

TL;DR: A new theoretical framework is developed that casts dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes, mitigating the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy.
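
In practice this amounts to keeping dropout active at test time and reading uncertainty off the spread of stochastic forward passes; a minimal sketch follows (the architecture and dropout rate are illustrative assumptions):

import torch

net = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.ReLU(),
    torch.nn.Dropout(p=0.1),
    torch.nn.Linear(64, 1),
)

def mc_dropout_predict(x, n_samples=100):
    net.train()  # keep dropout stochastic at inference time
    with torch.no_grad():
        preds = torch.stack([net(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)  # predictive mean and spread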