Author

Ameya D. Jagtap

Bio: Ameya D. Jagtap is an academic researcher from Brown University. The author has contributed to research in topics: Artificial neural network & Nonlinear system. The author has an h-index of 7 and has co-authored 21 publications receiving 503 citations. Previous affiliations of Ameya D. Jagtap include Tata Institute of Fundamental Research & TIFR Centre for Applicable Mathematics.

Papers
Journal ArticleDOI
TL;DR: In this article, a physics-informed neural network (PINN) was used to approximate the Euler equations that model high-speed aerodynamic flows in one-dimensional and two-dimensional domains.

485 citations

Journal ArticleDOI
TL;DR: It is theoretically proved that in the proposed method, gradient descent algorithms are not attracted to suboptimal critical points or local minima, and the proposed adaptive activation functions are shown to accelerate the minimization process of the loss values in standard deep learning benchmarks with and without data augmentation.

405 citations

Journal ArticleDOI
TL;DR: In cPINN, locally adaptive activation functions are used, which train the model faster than its fixed counterparts, and the method lends itself efficiently to parallelized computation, where each sub-domain can be assigned to a different computational node.

369 citations
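
The cPINN summary above mentions decomposing the domain into sub-domains, each of which can be handled by its own sub-network (and hence its own compute node), with agreement enforced at the interfaces. The sketch below is a minimal, illustrative version of that idea, not the authors' implementation: it assumes PyTorch, a toy problem u'(x) = cos(x) on [0, π] split at π/2, and a simplified interface penalty (solution continuity plus residual matching) in place of the paper's conservative flux conditions.

```python
# Minimal sketch of a domain-decomposed PINN (illustrative, not the authors' code).
# Assumed toy problem: u'(x) = cos(x) on [0, pi], u(0) = 0, split at x = pi/2.
import torch
import torch.nn as nn

def mlp():
    return nn.Sequential(nn.Linear(1, 20), nn.Tanh(),
                         nn.Linear(20, 20), nn.Tanh(),
                         nn.Linear(20, 1))

net1, net2 = mlp(), mlp()  # one sub-network per sub-domain
opt = torch.optim.Adam(list(net1.parameters()) + list(net2.parameters()), lr=1e-3)

def residual(net, x):
    x = x.requires_grad_(True)
    u = net(x)
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    return u_x - torch.cos(x)  # PDE residual u' - cos(x)

x1 = torch.rand(64, 1) * (torch.pi / 2)                  # collocation points, sub-domain 1
x2 = torch.pi / 2 + torch.rand(64, 1) * (torch.pi / 2)   # collocation points, sub-domain 2
xi = torch.full((8, 1), torch.pi / 2)                    # interface points
x0 = torch.zeros(1, 1)                                   # boundary point

for step in range(2000):
    opt.zero_grad()
    loss = (residual(net1, x1).pow(2).mean()
            + residual(net2, x2).pow(2).mean()
            + (net1(x0) - 0.0).pow(2).mean()                           # boundary condition u(0) = 0
            + (net1(xi) - net2(xi)).pow(2).mean()                      # solution continuity at interface
            + (residual(net1, xi) - residual(net2, xi)).pow(2).mean()) # simplified stand-in for flux matching
    loss.backward()
    opt.step()
```

Because the two sub-networks only couple through the interface terms, the bulk of each sub-domain's work can in principle be placed on a separate computational node, which is the parallelization the TL;DR refers to.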

Journal ArticleDOI
TL;DR: It is proved that in the proposed method, the gradient descent algorithms are not attracted to sub-optimal critical points or local minima under practical conditions on the initialization and learning rate, and that the gradient dynamics of the proposed method are not achievable by base methods with any (adaptive) learning rates.
Abstract: We propose two approaches of locally adaptive activation functions, namely layer-wise and neuron-wise locally adaptive activation functions, which improve the performance of deep and physics-informed neural networks. The local adaptation of the activation function is achieved by introducing a scalable parameter in each layer (layer-wise) and for every neuron (neuron-wise) separately, and then optimizing it using a variant of the stochastic gradient descent algorithm. To further increase the training speed, a slope recovery term based on the activation slope is added to the loss function, which further accelerates convergence, thereby reducing the training cost. On the theoretical side, we prove that in the proposed method, the gradient descent algorithms are not attracted to sub-optimal critical points or local minima under practical conditions on the initialization and learning rate, and that the gradient dynamics of the proposed method are not achievable by base methods with any (adaptive) learning rates. We further show that the adaptive activation methods accelerate convergence by implicitly multiplying conditioning matrices to the gradient of the base method, without any explicit computation of the conditioning matrix or the matrix-vector product. The different adaptive activation functions are shown to induce different implicit conditioning matrices. Furthermore, the proposed methods with the slope recovery are shown to accelerate the training process.

159 citations
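
As a rough illustration of the layer-wise variant described in the abstract above, the sketch below adds one trainable slope parameter per hidden layer and a simple slope recovery penalty to the loss. The network sizes, the fixed scale factor n, the toy regression task, and the exact form of the recovery term are assumptions made for illustration; this is not the paper's reference implementation.

```python
# Sketch of a layer-wise adaptive activation network (illustrative only).
import torch
import torch.nn as nn

class AdaptiveTanhNet(nn.Module):
    def __init__(self, width=20, depth=3, n=10.0):
        super().__init__()
        self.n = n  # fixed scale factor (assumed value)
        layers = [nn.Linear(1, width)] + [nn.Linear(width, width) for _ in range(depth - 1)]
        self.hidden = nn.ModuleList(layers)
        self.out = nn.Linear(width, 1)
        # one trainable slope parameter per hidden layer (layer-wise adaptation)
        self.a = nn.Parameter(torch.full((depth,), 1.0 / n))

    def forward(self, x):
        for k, layer in enumerate(self.hidden):
            x = torch.tanh(self.n * self.a[k] * layer(x))  # tanh(n * a_k * z)
        return self.out(x)

    def slope_recovery(self):
        # penalty that rewards larger slopes; one simple variant of the recovery term
        return 1.0 / torch.exp(self.a).mean()

# usage: fit noisy samples of sin(x), with the recovery term added to the data loss
model = AdaptiveTanhNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.linspace(-3, 3, 128).unsqueeze(1)
y = torch.sin(x) + 0.01 * torch.randn_like(x)
for step in range(2000):
    opt.zero_grad()
    loss = (model(x) - y).pow(2).mean() + model.slope_recovery()
    loss.backward()
    opt.step()
```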

Journal ArticleDOI
TL;DR: In this article, a distributed framework for physics-informed neural networks (PINNs) is developed based on two recent extensions, namely conservative PINNs (cPINNs) and extended PINNs (XPINNs), which employ domain decomposition in space and in space-time, respectively.

56 citations


Cited by
Journal ArticleDOI
01 Jun 2021
TL;DR: Some of the prevailing trends in embedding physics into machine learning are reviewed, some of the current capabilities and limitations are presented, and diverse applications of physics-informed learning for both forward and inverse problems, including discovering hidden physics and tackling high-dimensional problems, are discussed.
Abstract: Despite great progress in simulating multiphysics problems using the numerical discretization of partial differential equations (PDEs), one still cannot seamlessly incorporate noisy data into existing algorithms, mesh generation remains complex, and high-dimensional problems governed by parameterized PDEs cannot be tackled. Moreover, solving inverse problems with hidden physics is often prohibitively expensive and requires different formulations and elaborate computer codes. Machine learning has emerged as a promising alternative, but training deep neural networks requires big data, not always available for scientific problems. Instead, such networks can be trained from additional information obtained by enforcing the physical laws (for example, at random points in the continuous space-time domain). Such physics-informed learning integrates (noisy) data and mathematical models, and implements them through neural networks or other kernel-based regression networks. Moreover, it may be possible to design specialized network architectures that automatically satisfy some of the physical invariants for better accuracy, faster training and improved generalization. Here, we review some of the prevailing trends in embedding physics into machine learning, present some of the current capabilities and limitations and discuss diverse applications of physics-informed learning both for forward and inverse problems, including discovering hidden physics and tackling high-dimensional problems. The rapidly developing field of physics-informed learning integrates data and mathematical models seamlessly, enabling accurate inference of realistic and high-dimensional multiphysics problems. This Review discusses the methodology and provides diverse examples and an outlook for further developments.

1,114 citations
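
The review's abstract describes training networks from "additional information obtained by enforcing the physical laws (for example, at random points in the continuous space-time domain)". The sketch below shows a minimal version of that recipe under assumptions not taken from the review itself: a toy ODE u'(x) = -u(x) on [0, 2] with u(0) = 1 (exact solution exp(-x)) and a plain PyTorch training loop.

```python
# Minimal physics-informed training loop (illustrative sketch).
# Assumed toy problem: u'(x) = -u(x) on [0, 2], u(0) = 1.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(),
                    nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
x0 = torch.zeros(1, 1)  # initial-condition point

for step in range(3000):
    opt.zero_grad()
    # enforce the physical law at random points in the domain (no labelled data needed)
    x = (2.0 * torch.rand(128, 1)).requires_grad_(True)
    u = net(x)
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    pde_loss = (u_x + u).pow(2).mean()       # residual of u' + u = 0
    bc_loss = (net(x0) - 1.0).pow(2).mean()  # initial condition u(0) = 1
    (pde_loss + bc_loss).backward()
    opt.step()

# after training, net(x) approximates exp(-x) on [0, 2]
```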

Journal ArticleDOI
TL;DR: In this article, a physics-informed neural network (PINN) was used to approximate the Euler equations that model high-speed aerodynamic flows in one-dimensional and two-dimensional domains.

485 citations

Journal ArticleDOI
TL;DR: Compared with PINNs, B-PINNs obtain more accurate predictions in scenarios with large noise due to their capability of avoiding overfitting; moreover, the dropout employed in PINNs can hardly provide accurate predictions with reasonable uncertainty.

410 citations

Journal ArticleDOI
TL;DR: In cPINN, locally adaptive activation functions are used, which train the model faster than its fixed counterparts, and the method lends itself efficiently to parallelized computation, where each sub-domain can be assigned to a different computational node.

369 citations

Book
06 May 1998
TL;DR: Orthogonal approximations in Sobolev spaces; stability and convergence; spectral methods and pseudospectral methods; spectral methods for multi-dimensional and high-order problems; mixed spectral methods; combined spectral methods; spectral methods on the spherical surface, as discussed by the authors.
Abstract: Orthogonal approximations in Sobolev spaces; stability and convergence; spectral methods and pseudospectral methods; spectral methods for multi-dimensional and high-order problems; mixed spectral methods; combined spectral methods; spectral methods on the spherical surface.

365 citations