Journal ArticleDOI

Flow over an espresso cup: Inferring 3D velocity and pressure fields from tomographic background oriented schlieren videos via physics-informed neural networks

TL;DR: In this article, the authors propose a new method based on physics-informed neural networks (PINNs) to infer the full continuous 3D velocity and pressure fields from snapshots of 3D temperature fields obtained by Tomo-BOS imaging.
Abstract: Tomographic background oriented schlieren (Tomo-BOS) imaging measures density or temperature fields in 3D using multiple camera BOS projections, and is particularly useful for instantaneous flow visualizations of complex fluid dynamics problems. We propose a new method based on physics-informed neural networks (PINNs) to infer the full continuous 3D velocity and pressure fields from snapshots of 3D temperature fields obtained by Tomo-BOS imaging. PINNs seamlessly integrate the underlying physics of the observed fluid flow and the visualization data, hence enabling the inference of latent quantities using limited experimental data. In this hidden fluid mechanics paradigm, we train the neural network by minimizing a loss function composed of a data mismatch term and residual terms associated with the coupled Navier-Stokes and heat transfer equations. We first quantify the accuracy of the proposed method based on a 2D synthetic data set for buoyancy-driven flow, and subsequently apply it to the Tomo-BOS data set, where we are able to infer the instantaneous velocity and pressure fields of the flow over an espresso cup based only on the temperature field provided by the Tomo-BOS imaging. Moreover, we conduct an independent PIV experiment to validate the PINN inference for the unsteady velocity field at a center plane. To explain the observed flow physics, we also perform systematic PINN simulations at different Reynolds and Richardson numbers and quantify the variations in velocity and pressure fields. The results in this paper indicate that the proposed deep learning technique can become a promising direction in experimental fluid mechanics.
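The loss described in the abstract — a data-mismatch term plus residual terms for the governing equations — can be sketched in miniature. This is a hedged 1D toy, not the authors' implementation: a real PINN differentiates a neural network via automatic differentiation and enforces the coupled 3D Navier-Stokes and heat transfer equations, whereas here the governing equation is assumed to be steady 1D heat conduction (u'' = 0), derivatives are central finite differences, and all function names are illustrative.

```python
import numpy as np

def pinn_style_loss(u, x_data, u_data, x_colloc, h=1e-4):
    """Toy 1D analogue of the PINN loss: a data-mismatch term plus the
    squared residual of a governing equation (here steady heat conduction,
    u'' = 0), with derivatives taken by central finite differences instead
    of the automatic differentiation a real PINN would use."""
    # Data term: mismatch between the candidate field u and measurements.
    loss_data = np.mean((u(x_data) - u_data) ** 2)
    # Physics term: squared PDE residual at interior collocation points.
    u_xx = (u(x_colloc + h) - 2.0 * u(x_colloc) + u(x_colloc - h)) / h**2
    loss_pde = np.mean(u_xx ** 2)
    return loss_data + loss_pde

# A linear temperature profile satisfies u'' = 0 exactly and matches the
# boundary data, so both loss terms vanish.
u_exact = lambda x: 2.0 * x + 1.0
x_d = np.array([0.0, 1.0])
u_d = u_exact(x_d)
x_c = np.linspace(0.1, 0.9, 9)
loss_good = pinn_style_loss(u_exact, x_d, u_d, x_c)
loss_bad = pinn_style_loss(lambda x: x ** 2, x_d, u_d, x_c)
```

A candidate that satisfies both the data and the physics drives the loss to zero, while one that violates either is penalized — this is what lets the method infer latent fields from limited measurements.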
Citations
Journal ArticleDOI
01 Jun 2021
TL;DR: This review surveys prevailing trends in embedding physics into machine learning, presents current capabilities and limitations, and discusses diverse applications of physics-informed learning for both forward and inverse problems, including discovering hidden physics and tackling high-dimensional problems.
Abstract: Despite great progress in simulating multiphysics problems using the numerical discretization of partial differential equations (PDEs), one still cannot seamlessly incorporate noisy data into existing algorithms, mesh generation remains complex, and high-dimensional problems governed by parameterized PDEs cannot be tackled. Moreover, solving inverse problems with hidden physics is often prohibitively expensive and requires different formulations and elaborate computer codes. Machine learning has emerged as a promising alternative, but training deep neural networks requires big data, not always available for scientific problems. Instead, such networks can be trained from additional information obtained by enforcing the physical laws (for example, at random points in the continuous space-time domain). Such physics-informed learning integrates (noisy) data and mathematical models, and implements them through neural networks or other kernel-based regression networks. Moreover, it may be possible to design specialized network architectures that automatically satisfy some of the physical invariants for better accuracy, faster training and improved generalization. Here, we review some of the prevailing trends in embedding physics into machine learning, present some of the current capabilities and limitations and discuss diverse applications of physics-informed learning both for forward and inverse problems, including discovering hidden physics and tackling high-dimensional problems. The rapidly developing field of physics-informed learning integrates data and mathematical models seamlessly, enabling accurate inference of realistic and high-dimensional multiphysics problems. This Review discusses the methodology and provides diverse examples and an outlook for further developments.

1,114 citations

Journal ArticleDOI
TL;DR: In this article, the authors develop a distributed framework for physics-informed neural networks (PINNs) based on two recent extensions, conservative PINNs (cPINNs) and extended PINNs (XPINNs), which employ domain decomposition in space and in space-time, respectively.

56 citations

Journal ArticleDOI
TL;DR: In this article, a physics-informed neural network (PINN) is proposed to reconstruct the dense velocity field from sparse experimental data; the PINN not only improves the velocity resolution but also predicts the pressure field.
Abstract: The velocities measured by particle image velocimetry (PIV) and particle tracking velocimetry (PTV) commonly provide sparse information on flow motions. A dense velocity field with high resolution is indispensable for data visualization and analysis. In the present work, a physics-informed neural network (PINN) is proposed to reconstruct the dense velocity field from sparse experimental data. A PINN is a network-based data assimilation method. Within the PINN, both the velocity and pressure are approximated by minimizing a loss function consisting of the residuals of the data and the Navier–Stokes equations. Therefore, the PINN can not only improve the velocity resolution but also predict the pressure field. The performance of the PINN is investigated using two-dimensional (2D) Taylor's decaying vortices and turbulent channel flow with and without measurement noise. For the case of 2D Taylor's decaying vortices, the activation functions, optimization algorithms, and some parameters of the proposed method are assessed. For the case of turbulent channel flow, the ability of the PINN to reconstruct wall-bounded turbulence is explored. Finally, the PINN is applied to reconstruct dense velocity fields from the experimental tomographic PIV (Tomo-PIV) velocity in the three-dimensional wake flow of a hemisphere. The results indicate that the proposed PINN has great potential for extending the capabilities of PIV/PTV.

50 citations

Journal ArticleDOI
TL;DR: In this paper, physics-informed neural networks (PINNs) are applied to solve the Reynolds-averaged Navier-Stokes equations for incompressible turbulent flows, after first validating the approach on the laminar Falkner-Skan boundary layer.
Abstract: Physics-informed neural networks (PINNs) are successful machine-learning methods for the solution and identification of partial differential equations. We employ PINNs for solving the Reynolds-averaged Navier–Stokes equations for incompressible turbulent flows without any specific model or assumption for turbulence and by taking only the data on the domain boundaries. We first show the applicability of PINNs for solving the Navier–Stokes equations for laminar flows by solving the Falkner–Skan boundary layer. We then apply PINNs for the simulation of four turbulent-flow cases, i.e., zero-pressure-gradient boundary layer, adverse-pressure-gradient boundary layer, and turbulent flows over a NACA4412 airfoil and the periodic hill. Our results show the excellent applicability of PINNs for laminar flows with strong pressure gradients, where predictions with less than 1% error can be obtained. For turbulent flows, we also obtain very good accuracy on simulation results even for the Reynolds-stress components.

47 citations

Journal ArticleDOI
TL;DR: In this article, the authors review the applications of ML in aerodynamic shape optimization (ASO) and provide a perspective on the state of the art and future directions.

44 citations

References
Journal ArticleDOI
28 May 2015-Nature
TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Abstract: Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.

46,982 citations

Posted Content
TL;DR: The authors introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments.
Abstract: We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has little memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm.
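The update rule described above can be sketched for a single scalar parameter. The hyperparameter defaults (beta1 = 0.9, beta2 = 0.999, eps = 1e-8) follow the paper; the scalar setting, step count, and test function are illustrative assumptions — in practice the updates are vectorized and the gradients come from stochastic minibatches.

```python
import math

def adam_minimize(grad, x0, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8, steps=1000):
    """Adam for a single scalar parameter: first-order updates scaled by
    bias-corrected estimates of the first and second moments of the gradient."""
    x, m, v = x0, 0.0, 0.0
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g       # first-moment (mean) estimate
        v = beta2 * v + (1 - beta2) * g * g   # second-moment estimate
        m_hat = m / (1 - beta1 ** t)          # bias correction for m
        v_hat = v / (1 - beta2 ** t)          # bias correction for v
        x -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2(x - 3).
x_star = adam_minimize(lambda x: 2.0 * (x - 3.0), x0=0.0)
```

Dividing the first-moment estimate by the square root of the second makes the step size roughly invariant to the scale of the gradients, which is why Adam typically needs little tuning.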

23,486 citations

Journal ArticleDOI
TL;DR: In this article, the authors introduce physics-informed neural networks, which are trained to solve supervised learning tasks while respecting any given laws of physics described by general nonlinear partial differential equations.

5,448 citations

Book
11 Jun 2002
TL;DR: In this paper, the authors present a practical guide to the planning, performance and understanding of experiments employing the PIV technique, intended primarily for engineers, scientists and students who already have some basic knowledge of fluid mechanics and non-intrusive optical measurement techniques.
Abstract: This practical guide intends to provide comprehensive information on the PIV technique that in the past decade has gained significant popularity throughout engineering and scientific fields involving fluid mechanics. Relevant theoretical background information directly supports the practical aspects associated with the planning, performance and understanding of experiments employing the PIV technique. The second edition includes extensive revisions taking into account significant progress on the technique as well as the continuously broadening range of possible applications, which are illustrated by a multitude of examples. Among the new topics covered are high-speed imaging, three-component methods, advanced evaluation and post-processing techniques as well as microscopic PIV, the latter made possible by extending the group of authors by an internationally recognized expert. This book is primarily intended for engineers, scientists and students, who already have some basic knowledge of fluid mechanics and non-intrusive optical measurement techniques. It shall guide researchers and engineers to design and perform their experiments successfully without requiring them to first become specialists in the field. Nonetheless many of the basic properties of PIV are provided, as they must be well understood before a correct interpretation of the results is possible.

4,811 citations

Journal ArticleDOI
TL;DR: A deep learning-based approach for general high-dimensional parabolic PDEs: the PDEs are reformulated using backward stochastic differential equations, and the gradient of the unknown solution is approximated by neural networks, in the spirit of deep reinforcement learning with the gradient acting as the policy function.
Abstract: Developing algorithms for solving high-dimensional partial differential equations (PDEs) has been an exceedingly difficult task for a long time, due to the notoriously difficult problem known as the “curse of dimensionality.” This paper introduces a deep learning-based approach that can handle general high-dimensional parabolic PDEs. To this end, the PDEs are reformulated using backward stochastic differential equations and the gradient of the unknown solution is approximated by neural networks, very much in the spirit of deep reinforcement learning with the gradient acting as the policy function. Numerical results on examples including the nonlinear Black–Scholes equation, the Hamilton–Jacobi–Bellman equation, and the Allen–Cahn equation suggest that the proposed algorithm is quite effective in high dimensions, in terms of both accuracy and cost. This opens up possibilities in economics, finance, operational research, and physics, by considering all participating agents, assets, resources, or particles together at the same time, instead of making ad hoc assumptions on their interrelationships.
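The reformulation above builds on the probabilistic representation of parabolic PDEs. The sketch below illustrates only that stochastic link — the Feynman-Kac identity for the simplest parabolic PDE — and not the deep BSDE algorithm itself, which additionally trains neural networks to represent the solution's gradient along simulated paths; the function names and parameters here are illustrative assumptions.

```python
import math
import random

def feynman_kac_estimate(g, x0, T, n_paths=200_000, seed=0):
    """Monte Carlo estimate of u(0, x0) for the backward heat equation
    u_t + 0.5 * u_xx = 0 with terminal condition u(T, x) = g(x), using
    the Feynman-Kac identity u(0, x0) = E[g(x0 + W_T)]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        w_T = rng.gauss(0.0, math.sqrt(T))  # Brownian motion sampled at time T
        total += g(x0 + w_T)
    return total / n_paths

# For g(x) = x^2 the exact solution is u(0, x0) = x0^2 + T.
u0 = feynman_kac_estimate(lambda x: x * x, x0=1.0, T=1.0)
```

Because the Monte Carlo average only ever samples paths, its cost grows with the number of paths rather than exponentially with the spatial dimension — the property the deep BSDE method exploits to sidestep the curse of dimensionality.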

1,309 citations

Trending Questions (1)
What does the flow look like in espresso pucks?

The paper proposes a method to infer the velocity and pressure fields of the flow over an espresso cup from temperature fields obtained by Tomo-BOS imaging; it does not address the flow inside espresso pucks.