Author

Qizhi He

Bio: Qizhi He is an academic researcher from Pacific Northwest National Laboratory. The author has contributed to research in topics: Estimation theory & Topology optimization. The author has an h-index of 7 and has co-authored 22 publications receiving 300 citations. Previous affiliations of Qizhi He include Dalian University of Technology & Wuhan University.

Papers
Journal ArticleDOI
TL;DR: It is demonstrated that physics-informed deep neural networks used to estimate space-dependent hydraulic conductivity, hydraulic head, and concentration fields from sparse measurements are significantly more accurate than standard data-driven DNNs trained on the same sparse data.

144 citations
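
The summary above hinges on combining sparse measurements with the governing flow physics in a single training objective. The sketch below illustrates that general idea, not the paper's implementation: two small PyTorch networks for conductivity K(x) and head h(x) are trained jointly on data-misfit terms plus a 1D Darcy-flow residual; the network sizes, synthetic measurements, and domain are hypothetical placeholders.

```python
# Minimal sketch (not the paper's implementation): jointly train surrogate
# networks for conductivity K(x) and head h(x) on sparse data plus a 1D
# Darcy residual d/dx( K(x) dh/dx ) = 0. All data below are placeholders.
import torch

torch.manual_seed(0)

def mlp():
    return torch.nn.Sequential(
        torch.nn.Linear(1, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 1))

K_net, h_net = mlp(), mlp()

# Hypothetical sparse measurements (field data in a real application).
x_K, K_obs = torch.rand(5, 1), 1.0 + 0.5 * torch.rand(5, 1)
x_h, h_obs = torch.rand(8, 1), 1.0 - torch.rand(8, 1)
x_c = torch.linspace(0.0, 1.0, 100).reshape(-1, 1).requires_grad_(True)  # collocation points

opt = torch.optim.Adam(list(K_net.parameters()) + list(h_net.parameters()), lr=1e-3)
for it in range(5000):
    opt.zero_grad()
    h = h_net(x_c)
    dh = torch.autograd.grad(h.sum(), x_c, create_graph=True)[0]
    flux = K_net(x_c) * dh
    residual = torch.autograd.grad(flux.sum(), x_c, create_graph=True)[0]  # d(flux)/dx
    loss = (torch.mean((K_net(x_K) - K_obs) ** 2)    # misfit on K measurements
            + torch.mean((h_net(x_h) - h_obs) ** 2)  # misfit on h measurements
            + torch.mean(residual ** 2))             # Darcy-flow residual
    loss.backward()
    opt.step()
```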

Journal ArticleDOI
TL;DR: This method improves the boundary description quality of the optimal result with far fewer design variables than global refinement, and therefore greatly reduces the computational burden of the sensitivity analysis and optimization process.

77 citations

Journal ArticleDOI
TL;DR: In this article, an adaptive method for topology optimization of structures is proposed that uses independent error control for the separately discretized displacement and material density fields; it can achieve high-quality, high-accuracy optimal solutions comparable to those obtained with fixed, globally fine analysis meshes and finely distributed density points, but at much lower computational cost.

61 citations

Journal ArticleDOI
TL;DR: In this paper, a new data-driven simulation approach coupled with a locally convex reconstruction is proposed to enhance accuracy and robustness against noise and outliers in the data sets used for data-driven computing.

55 citations
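
The locally convex reconstruction mentioned in the summary above replaces a noisy query state with the closest point in the convex hull of nearby material data, which is what gives the method its robustness to outliers. The sketch below shows one assumed form of that step, not the paper's algorithm: find the k nearest neighbors of a query state in a hypothetical material data set and solve a small constrained least-squares problem for convex weights.

```python
# Illustrative sketch of a locally convex reconstruction step (assumed form,
# not the paper's algorithm): approximate a query state by the best convex
# combination of its k nearest neighbors in a material data set.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
data = rng.normal(size=(200, 2))        # hypothetical (strain, stress) pairs
query = np.array([0.3, 0.1])            # trial state from the solver
k = 6

# k nearest neighbors of the query in the data set
idx = np.argsort(np.linalg.norm(data - query, axis=1))[:k]
neighbors = data[idx]                   # shape (k, 2)

# convex weights w >= 0 with sum(w) = 1 minimizing || query - w @ neighbors ||^2
def objective(w):
    return np.sum((query - w @ neighbors) ** 2)

w0 = np.full(k, 1.0 / k)
res = minimize(objective, w0, bounds=[(0.0, 1.0)] * k,
               constraints=[{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}])
reconstructed = res.x @ neighbors       # locally convex approximation of the state
```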

Journal ArticleDOI
TL;DR: Based on the element-free Galerkin (EFG) method, an analysis-independent density variable approach is proposed for topology optimization of geometrically nonlinear structures.
Abstract: Based on the element-free Galerkin (EFG) method, an analysis-independent density variable approach is proposed for topology optimization of geometrically nonlinear structures. This method eliminates the mesh distortion problem often encountered in finite element analysis of large deformations. The topology optimization problem is formulated on the basis of a point-wise description of the material density field. This density field is constructed by a physical-meaning-preserving interpolation of the density values at the design variable points, which can be positioned freely and independently of the field points used in the displacement analysis. An energy criterion of convergence is used to resolve the well-known convergence difficulty usually encountered in low-density regions, where displacements oscillate severely during the optimization process. Numerical examples are given to demonstrate the effectiveness of the developed approach. It is shown that relatively clear optimal solutions can be achieved, without numerical instabilities such as the so-called "layering" or "islanding" phenomena, even in large-deformation cases. This study not only confirms the potential of the EFG method in topology optimization involving large deformations, but also provides a novel topology optimization framework based on element-free discretization of the displacement and density fields, which can easily incorporate other meshless analysis methods for specific purposes.

45 citations
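
The abstract above rests on interpolating densities stored at freely placed design-variable points onto the points used in the displacement analysis, with weights that keep the density physically meaningful (within [0, 1]). The sketch below is a minimal illustration of that idea using assumed Shepard-type compact-support weights; it is not the paper's interpolation scheme, and the point sets are hypothetical.

```python
# Minimal sketch (assumed Shepard-type weights, not necessarily the paper's
# interpolation): build a point-wise density field at analysis points from
# density values stored at independently placed design-variable points.
import numpy as np

def density_field(x_eval, x_design, rho_design, radius=0.3, eps=1e-9):
    """Interpolate design densities onto evaluation points.

    Weights are non-negative and normalized to sum to one, so the
    interpolated density stays within [0, 1] like the design values.
    """
    d = np.linalg.norm(x_eval[:, None, :] - x_design[None, :, :], axis=2)
    w = np.maximum(1.0 - d / radius, 0.0) ** 2          # compact-support weights
    w_sum = w.sum(axis=1, keepdims=True) + eps
    return (w / w_sum) @ rho_design

# Hypothetical usage: design points and analysis (field) points are independent.
x_design = np.random.rand(50, 2)          # design-variable points
rho_design = np.random.rand(50)           # densities in [0, 1]
x_eval = np.random.rand(200, 2)           # displacement-analysis evaluation points
rho = density_field(x_eval, x_design, rho_design)
```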


Cited by
Journal ArticleDOI
TL;DR: A comprehensive review of the literature on physics-informed neural networks is provided, with the primary goal of characterizing these networks and their related advantages and disadvantages, while also incorporating publications on a broader range of collocation-based physics-informed neural networks.
Abstract: Physics-Informed Neural Networks (PINNs) are neural networks (NNs) that encode model equations, such as partial differential equations (PDEs), as a component of the neural network itself. PINNs are nowadays used to solve PDEs, fractional equations, integro-differential equations, and stochastic PDEs. This novel methodology has arisen as a multi-task learning framework in which an NN must fit observed data while reducing a PDE residual. This article provides a comprehensive review of the literature on PINNs; its primary goal is to characterize these networks and their related advantages and disadvantages. The review also incorporates publications on a broader range of collocation-based physics-informed neural networks, starting from the vanilla PINN and covering many other variants, such as physics-constrained neural networks (PCNN), variational hp-VPINNs, and conservative PINNs (CPINN). The study indicates that most research has focused on customizing the PINN through different activation functions, gradient optimization techniques, neural network structures, and loss function structures. Although PINNs have been used for a wide range of applications and have been shown to be more feasible in some contexts than classical numerical techniques such as the finite element method (FEM), advancements are still possible, most notably on theoretical issues that remain unresolved.

216 citations
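
The multi-task loss described in the abstract above, a data or boundary misfit combined with a PDE residual obtained by automatic differentiation, is the core of the vanilla PINN. The sketch below is a minimal PyTorch example of that loss for a 1D Poisson problem; the PDE, network size, and training settings are illustrative choices and are not taken from the review.

```python
# Minimal vanilla-PINN sketch of the two-term loss (boundary misfit plus PDE
# residual) for u''(x) = f(x) on (0, 1) with u(0) = u(1) = 0; all settings
# here are illustrative, not taken from the review.
import math
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))

x_b = torch.tensor([[0.0], [1.0]])                       # boundary points
x_c = torch.rand(200, 1, requires_grad=True)             # collocation points
f = -(math.pi ** 2) * torch.sin(math.pi * x_c).detach()  # exact solution is sin(pi x)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for it in range(5000):
    opt.zero_grad()
    u = net(x_c)
    du = torch.autograd.grad(u.sum(), x_c, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x_c, create_graph=True)[0]
    loss_pde = torch.mean((d2u - f) ** 2)                # PDE residual term
    loss_bc = torch.mean(net(x_b) ** 2)                  # boundary-condition term
    (loss_pde + loss_bc).backward()                      # multi-task loss
    opt.step()
```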

Journal ArticleDOI
TL;DR: This review is expected to provide a clear picture of layerwise theories for the modeling of composite laminated structures and to serve as a useful resource and guide for researchers who intend to extend their work into these areas.

170 citations

Posted Content
TL;DR: An overview of physics-informed neural networks (PINNs), which embed a PDE into the loss of the neural network using automatic differentiation, is presented, together with a new residual-based adaptive refinement (RAR) method to improve the training efficiency of PINNs.
Abstract: Deep learning has achieved remarkable success in diverse applications; however, its use in solving partial differential equations (PDEs) has emerged only recently. Here, we present an overview of physics-informed neural networks (PINNs), which embed a PDE into the loss of the neural network using automatic differentiation. The PINN algorithm is simple, and it can be applied to different types of PDEs, including integro-differential equations, fractional PDEs, and stochastic PDEs. Moreover, from the implementation point of view, PINNs solve inverse problems as easily as forward problems. We propose a new residual-based adaptive refinement (RAR) method to improve the training efficiency of PINNs. For pedagogical reasons, we compare the PINN algorithm to a standard finite element method. We also present a Python library for PINNs, DeepXDE, which is designed to serve both as an education tool to be used in the classroom and as a research tool for solving problems in computational science and engineering. Specifically, DeepXDE can solve forward problems given initial and boundary conditions, as well as inverse problems given some extra measurements. DeepXDE supports complex-geometry domains based on the technique of constructive solid geometry, and enables the user code to be compact, closely resembling the mathematical formulation. We introduce the usage of DeepXDE and its customizability, and we also demonstrate the capability of PINNs and the user-friendliness of DeepXDE for five different examples. More broadly, DeepXDE contributes to the more rapid development of the emerging Scientific Machine Learning field.

144 citations
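
The residual-based adaptive refinement (RAR) described in the abstract above periodically evaluates the PDE residual at candidate points and adds the points with the largest residuals to the training set. The sketch below is a generic version of that selection step; `pde_residual`, the point counts, and the surrounding training loop are placeholders, and this is not the DeepXDE API.

```python
# Generic sketch of a residual-based adaptive refinement (RAR) step, in the
# spirit of the abstract above; `pde_residual` and the point counts are
# placeholders, and this is not the DeepXDE API.
import numpy as np

def rar_step(train_pts, pde_residual, n_candidates=1000, n_add=10, dim=1):
    """Add the candidate collocation points with the largest |residual|."""
    candidates = np.random.rand(n_candidates, dim)       # random candidate points
    r = np.abs(pde_residual(candidates)).reshape(-1)     # residual magnitude
    worst = candidates[np.argsort(r)[-n_add:]]           # hardest points
    return np.vstack([train_pts, worst])                 # enlarged training set

# Hypothetical usage inside a training loop:
# for outer in range(20):
#     train_model(train_pts)                             # train the PINN for a while
#     train_pts = rar_step(train_pts, pde_residual)      # refine where residual is large
```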

Journal ArticleDOI
TL;DR: The goal of this review paper is to provide the first comprehensive overview of recently developed machine learning (ML) methods and to describe their application to porous media and geoscience.

130 citations

Journal ArticleDOI
TL;DR: In this article, a physics-informed neural network (PINN) framework is proposed to predict the temperature and melt pool dynamics during metal additive manufacturing (AM) processes with only a moderate amount of labeled data.
Abstract: The recent explosion of machine learning (ML) and artificial intelligence (AI) shows great potential for breakthroughs in metal additive manufacturing (AM) process modeling, an indispensable step in deriving the process-structure-property relationship. However, the success of conventional machine learning tools in data science is primarily attributed to the unprecedented large amounts of labeled data (big data), obtained either from experiments or from first-principle simulations. Unfortunately, such labeled data are expensive to obtain in AM due to the high cost of AM experiments and the prohibitive computational cost of high-fidelity simulations, which hinders the direct application of big-data-based ML tools to metal AM problems. To fully exploit the power of machine learning for metal AM while alleviating the dependence on "big data", we put forth a physics-informed neural network (PINN) framework that fuses both data and first physical principles, including the conservation laws of momentum, mass, and energy, into the neural network to inform the learning process. To the best of the authors' knowledge, this is the first application of physics-informed deep learning to three-dimensional AM process modeling. In addition, we propose a hard-type approach for Dirichlet boundary conditions (BCs) based on a Heaviside function, which not only enforces the BCs exactly but also accelerates the learning process. The PINN framework is applied to two representative metal manufacturing problems, including the 2018 NIST AM-Benchmark test series. We carefully assess the performance of the PINN model by comparing its predictions with available experimental data and with high-fidelity simulation results obtained using a finite element based variational multiscale formulation. The investigations show that the PINN, owing to the additional physical knowledge, can accurately predict the temperature and melt pool dynamics during metal AM processes with only a moderate amount of labeled data. The foray of PINN into metal AM shows the great potential of physics-informed deep learning for broader applications to advanced manufacturing. All the data sets and the PINN code will be made open source at https://yan.cee.illinois.edu/ once the paper is published.

123 citations
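
The hard-type treatment of Dirichlet boundary conditions mentioned in the abstract above builds the prescribed boundary values into the trial solution so the network never has to learn them. The sketch below illustrates that idea in an assumed 1D form, u(x) = g(x) + x(1 - x)·NN(x); the paper's Heaviside-based three-dimensional construction is not reproduced here.

```python
# Sketch of hard Dirichlet-BC enforcement in a PINN, using an assumed 1D trial
# solution u(x) = g(x) + x(1 - x) * NN(x); the paper's Heaviside-based 3D
# construction is not reproduced here.
import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))

def u(x, g_left=1.0, g_right=0.0):
    """Trial solution satisfying u(0) = g_left and u(1) = g_right exactly."""
    g = g_left + (g_right - g_left) * x      # interpolates the boundary values
    return g + x * (1.0 - x) * net(x)        # correction term vanishes on the boundary

x = torch.rand(16, 1, requires_grad=True)    # interior points (a physics loss would use these)
print(u(torch.zeros(1, 1)).item(), u(torch.ones(1, 1)).item())  # prints 1.0 and 0.0
```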