Abstract: Tomographic background oriented schlieren (Tomo-BOS) imaging measures density or temperature fields in 3D using multiple camera BOS projections, and is particularly useful for instantaneous flow visualizations of complex fluid dynamics problems. We propose a new method based on physics-informed neural networks (PINNs) to infer the full continuous 3D velocity and pressure fields from snapshots of 3D temperature fields obtained by Tomo-BOS imaging. PINNs seamlessly integrate the underlying physics of the observed fluid flow and the visualization data, hence enabling the inference of latent quantities using limited experimental data. In this hidden fluid mechanics paradigm, we train the neural network by minimizing a loss function composed of a data mismatch term and residual terms associated with the coupled Navier-Stokes and heat transfer equations. We first quantify the accuracy of the proposed method based on a 2D synthetic data set for buoyancy-driven flow, and subsequently apply it to the Tomo-BOS data set, where we are able to infer the instantaneous velocity and pressure fields of the flow over an espresso cup based only on the temperature field provided by the Tomo-BOS imaging. Moreover, we conduct an independent PIV experiment to validate the PINN inference for the unsteady velocity field at a center plane. To explain the observed flow physics, we also perform systematic PINN simulations at different Reynolds and Richardson numbers and quantify the variations in velocity and pressure fields. The results in this paper indicate that the proposed deep learning technique can become a promising direction in experimental fluid mechanics.
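The training objective described above — a data-mismatch term plus residual terms for the governing equations — can be sketched as follows. The helper names are hypothetical, and the residual lists stand in for the Navier-Stokes, continuity, and heat-equation residuals that the actual method evaluates by automatic differentiation at collocation points:

```python
# Minimal sketch of the composite PINN loss (names hypothetical; residual
# lists stand in for autodiff-evaluated PDE residuals at collocation points).

def mse(values):
    """Mean of squared entries."""
    return sum(v * v for v in values) / len(values)

def pinn_loss(temp_pred, temp_data, momentum_res, continuity_res, heat_res,
              w_data=1.0, w_pde=1.0):
    """Data-mismatch term on observed temperature plus PDE residual terms."""
    data_term = mse([p - d for p, d in zip(temp_pred, temp_data)])
    pde_term = mse(momentum_res) + mse(continuity_res) + mse(heat_res)
    return w_data * data_term + w_pde * pde_term

# Perfect data fit, small residuals at the sampled points:
loss = pinn_loss([1.0, 2.0], [1.0, 2.0], [0.1], [0.0], [0.2])
```

Minimizing this combined loss is what lets the network infer the latent velocity and pressure fields from temperature data alone.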


Topics: Fluid mechanics (58%), Fluid dynamics (55%), Flow (mathematics) (52%)



George Em Karniadakis, Ioannis G. Kevrekidis, Lu Lu, Paris Perdikaris, +2 more

01 Jun 2021

Abstract: Despite great progress in simulating multiphysics problems using the numerical discretization of partial differential equations (PDEs), one still cannot seamlessly incorporate noisy data into existing algorithms, mesh generation remains complex, and high-dimensional problems governed by parameterized PDEs cannot be tackled. Moreover, solving inverse problems with hidden physics is often prohibitively expensive and requires different formulations and elaborate computer codes. Machine learning has emerged as a promising alternative, but training deep neural networks requires big data, not always available for scientific problems. Instead, such networks can be trained from additional information obtained by enforcing the physical laws (for example, at random points in the continuous space-time domain). Such physics-informed learning integrates (noisy) data and mathematical models, and implements them through neural networks or other kernel-based regression networks. Moreover, it may be possible to design specialized network architectures that automatically satisfy some of the physical invariants for better accuracy, faster training and improved generalization. Here, we review some of the prevailing trends in embedding physics into machine learning, present some of the current capabilities and limitations and discuss diverse applications of physics-informed learning both for forward and inverse problems, including discovering hidden physics and tackling high-dimensional problems. The rapidly developing field of physics-informed learning integrates data and mathematical models seamlessly, enabling accurate inference of realistic and high-dimensional multiphysics problems. This Review discusses the methodology and provides diverse examples and an outlook for further developments.


Topics: Artificial neural network (56%)

96 Citations


Abstract: We propose a discretization-free approach based on the physics-informed neural network (PINN) method for solving coupled advection-dispersion and Darcy flow equations with space-dependent hydraulic conductivity. In this approach, the hydraulic conductivity, hydraulic head, and concentration fields are approximated with deep neural networks (DNNs). We assume that the conductivity field is given by its values on a grid, and we use these values to train the conductivity DNN. The head and concentration DNNs are trained by minimizing the residuals of the flow equation and ADE and using the initial and boundary conditions as additional constraints. The PINN method is applied to one- and two-dimensional forward advection-dispersion equations (ADEs), where its performance for various Peclet numbers ($Pe$) is compared with the analytical and numerical solutions. We find that the PINN method is accurate with errors of less than 1% and outperforms some conventional discretization-based methods for $Pe$ larger than 100. Next, we demonstrate that the PINN method remains accurate for the backward ADEs, with the relative errors in most cases staying under 5% compared to the reference concentration field. Finally, we show that when available, the concentration measurements can be easily incorporated in the PINN method and significantly improve (by more than 50% in the considered cases) the accuracy of the PINN solution of the backward ADE.
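For reference, the Peclet number used in the comparison above is the ratio of advective to dispersive transport, Pe = uL/D (velocity, characteristic length, dispersion coefficient); the values below are illustrative only:

```python
def peclet(velocity, length, dispersion):
    """Pe = u * L / D: advective over dispersive transport rate."""
    return velocity * length / dispersion

# Illustrative values placing the flow in the regime (Pe > 100) where the
# abstract reports PINN outperforming some discretization-based methods.
pe = peclet(velocity=1.0, length=1.0, dispersion=0.005)
```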


2 Citations



Abstract: Continuous reconstructions of periodic phenomena provide powerful tools to understand, predict and model natural situations and engineering problems. Building on the recent physics-informed neural network (PINN) method, in which a multilayer perceptron directly approximates a physical quantity as a function of time and space coordinates, we present an extension, ModalPINN, that encodes the approximation of a limited number of Fourier mode shapes. Beyond the added interpretability, this representation is in some cases up to two orders of magnitude more precise for a similar number of degrees of freedom and training time, as illustrated on the test case of laminar vortex shedding over a cylinder. This added simplicity also proves robust for flow reconstruction from only a limited number of sensors with asymmetric data that simulate an experimental configuration, even when Gaussian noise or a random delay is added to imitate imperfect and sparse information.
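The ModalPINN ansatz — a truncated Fourier expansion in time whose spatial mode shapes are each represented by a neural network — can be sketched as follows. Plain functions stand in for the mode-shape networks, and all values are illustrative:

```python
import math

# Sketch of a modal reconstruction:
#   u(x, t) = sum_k [a_k(x) * cos(k*w*t) + b_k(x) * sin(k*w*t)],
# where in ModalPINN each mode shape (a_k, b_k here) is itself a network.

def modal_reconstruction(mode_shapes, omega, x, t):
    total = 0.0
    for k, (a_k, b_k) in enumerate(mode_shapes):
        total += a_k(x) * math.cos(k * omega * t) + b_k(x) * math.sin(k * omega * t)
    return total

# Two modes: a steady component and one oscillating mode (hypothetical shapes).
modes = [
    (lambda x: 1.0, lambda x: 0.0),          # k = 0: mean field
    (lambda x: 0.5 * x, lambda x: 0.1 * x),  # k = 1: shedding mode
]
u = modal_reconstruction(modes, omega=2.0, x=1.0, t=0.0)
```

Restricting the representation to a few physically meaningful modes is what gives the reported gains in precision per degree of freedom.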


Topics: Fourier series (53%), Degrees of freedom (mechanics) (52%), Gaussian noise (51%)


Ameya D. Jagtap, Yeonjong Shin, Kenji Kawaguchi, George Em Karniadakis

Abstract: We propose a new type of neural network, Kronecker neural networks (KNNs), that forms a general framework for neural networks with adaptive activation functions. KNNs employ the Kronecker product, which provides an efficient way of constructing a very wide network while keeping the number of parameters low. Our theoretical analysis reveals that under suitable conditions, KNNs induce a faster decay of the loss than feed-forward networks, which we also verify empirically through a set of computational examples. Furthermore, under certain technical assumptions, we establish global convergence of gradient descent for KNNs. As a specific case, we propose the Rowdy activation function, designed to eliminate saturation regions by injecting sinusoidal fluctuations with trainable parameters. The Rowdy activation function can be employed in any neural network architecture, such as feed-forward, recurrent, and convolutional neural networks. The effectiveness of KNNs with Rowdy activations is demonstrated through various computational experiments, including function approximation with feed-forward networks, solution inference of partial differential equations with physics-informed neural networks, and standard deep learning benchmarks with convolutional and fully connected networks.
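A minimal sketch of the idea behind the Rowdy activation — a standard base activation augmented with trainable sinusoidal terms so the output never flattens into a saturation region — is shown below. The exact functional form and scaling used in the paper may differ; the amplitudes and frequency factor here are stand-ins for trainable parameters:

```python
import math

# Sketch of a Rowdy-style activation: tanh plus trainable sinusoidal
# perturbations (amplitudes `amplitudes`, frequency factor `n`), so the
# derivative does not vanish in tanh's saturation regions.

def rowdy_tanh(x, amplitudes, n=1.0):
    base = math.tanh(x)
    wiggle = sum(a * math.sin(k * n * x) for k, a in enumerate(amplitudes, start=1))
    return base + wiggle

y = rowdy_tanh(0.0, amplitudes=[0.1, 0.05])
```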


Topics: Deep learning (65%), Recurrent neural network (64%), Activation function (63%)



Abstract: Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.


Topics: Deep learning (59%), Object detection (53%), Cognitive neuroscience of visual object recognition (51%)

33,931 Citations


Abstract: We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has modest memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, by which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm.
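A single Adam update on a scalar parameter, following the moment estimates and bias correction described above:

```python
import math

# One Adam step: exponentially decaying estimates of the first moment (m)
# and uncentered second moment (v) of the gradient, with bias correction.

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad          # first moment estimate
    v = beta2 * v + (1 - beta2) * grad * grad   # second moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction (step t >= 1)
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

theta, m, v = 1.0, 0.0, 0.0
theta, m, v = adam_step(theta, grad=2.0, m=m, v=v, t=1)
```

Note how the bias correction matters most at early steps, when m and v are still dominated by their zero initialization.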


Topics: Stochastic optimization (63%), Convex optimization (54%), Rate of convergence (52%)

23,369 Citations


11 Jun 2002

Abstract: This practical guide provides comprehensive information on the PIV technique, which over the past decade has gained significant popularity throughout engineering and scientific fields involving fluid mechanics. Relevant theoretical background information directly supports the practical aspects associated with the planning, performance and understanding of experiments employing the PIV technique. The second edition includes extensive revisions taking into account significant progress on the technique as well as the continuously broadening range of possible applications, which are illustrated by a multitude of examples. Among the new topics covered are high-speed imaging, three-component methods, advanced evaluation and post-processing techniques, as well as microscopic PIV, the latter made possible by extending the group of authors to include an internationally recognized expert. This book is primarily intended for engineers, scientists and students who already have some basic knowledge of fluid mechanics and non-intrusive optical measurement techniques. It should guide researchers and engineers to design and perform their experiments successfully without requiring them to first become specialists in the field. Nonetheless, many of the basic properties of PIV are explained, as they must be well understood before a correct interpretation of the results is possible.


4,809 Citations


Abstract: We introduce physics-informed neural networks – neural networks that are trained to solve supervised learning tasks while respecting any given laws of physics described by general nonlinear partial differential equations. In this work, we present our developments in the context of solving two main classes of problems: data-driven solution and data-driven discovery of partial differential equations. Depending on the nature and arrangement of the available data, we devise two distinct types of algorithms, namely continuous time and discrete time models. The first type of model forms a new family of data-efficient spatio-temporal function approximators, while the latter allows the use of arbitrarily accurate implicit Runge–Kutta time stepping schemes with an unlimited number of stages. The effectiveness of the proposed framework is demonstrated through a collection of classical problems in fluids, quantum mechanics, reaction–diffusion systems, and the propagation of nonlinear shallow-water waves.
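The continuous-time model penalizes the PDE residual of the network output. A library-free sketch for Burgers' equation is given below, using central finite differences in place of the automatic differentiation employed in the actual method:

```python
# Sketch of the PDE-residual idea: differentiate a candidate solution
# u(x, t) (autodiff in practice; central finite differences here) and
# evaluate the Burgers residual u_t + u*u_x - nu*u_xx, which PINN training
# drives toward zero at collocation points.

def burgers_residual(u, x, t, nu=0.01, h=1e-4):
    u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)
    u_x = (u(x + h, t) - u(x - h, t)) / (2 * h)
    u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / (h * h)
    return u_t + u(x, t) * u_x - nu * u_xx

# The linear trial field u = x - t has u_t = -1, u_x = 1, u_xx = 0, so the
# residual equals u - 1; at (x, t) = (1, 0) it vanishes.
r = burgers_residual(lambda x, t: x - t, x=1.0, t=0.0)
```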


Topics: Nonlinear system (59%), Discrete time and continuous time (56%), Deep learning (55%)

1,914 Citations


Abstract: High-dimensional PDEs have been a longstanding computational challenge. We propose to solve high-dimensional PDEs by approximating the solution with a deep neural network which is trained to satisfy the differential operator, initial condition, and boundary conditions. Our algorithm is meshfree, which is key since meshes become infeasible in higher dimensions. Instead of forming a mesh, the neural network is trained on batches of randomly sampled time and space points. The algorithm is tested on a class of high-dimensional free boundary PDEs, which we are able to accurately solve in up to 200 dimensions. The algorithm is also tested on a high-dimensional Hamilton–Jacobi–Bellman PDE and Burgers' equation. The deep learning algorithm approximates the general solution to the Burgers' equation for a continuum of different boundary conditions and physical conditions (which can be viewed as a high-dimensional space). We call the algorithm a “Deep Galerkin Method (DGM)” since it is similar in spirit to Galerkin methods, with the solution approximated by a neural network instead of a linear combination of basis functions. In addition, we prove a theorem regarding the approximation power of neural networks for a class of quasilinear parabolic PDEs.
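The mesh-free sampling at the core of the algorithm can be sketched as follows — each training batch draws random space-time points from the domain rather than a grid, which is what scales to hundreds of dimensions (function name and domain bounds are illustrative):

```python
import random

# Sketch of DGM-style mesh-free sampling: a batch of random interior
# points (t, x_1, ..., x_dim) drawn uniformly from [0, t_max] x [0, 1]^dim.
# A full implementation would also sample boundary and initial-time points.

def sample_batch(dim, n_interior, t_max=1.0, seed=0):
    rng = random.Random(seed)
    return [[rng.uniform(0.0, t_max)] + [rng.uniform(0.0, 1.0) for _ in range(dim)]
            for _ in range(n_interior)]

# A 200-dimensional spatial domain, as in the free-boundary tests above.
batch = sample_batch(dim=200, n_interior=64)
```

A grid with even two points per dimension would need 2^200 nodes here; random batches keep the cost linear in batch size regardless of dimension.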


Topics: Partial differential equation (57%), Boundary value problem (54%), Boundary (topology) (54%)

783 Citations