DGM: A deep learning algorithm for solving partial differential equations
Citations
Cites background or methods from "DGM: A deep learning algorithm for ..."
...(6) by minimizing the residual loss where the exact derivatives are calculated with automatic differentiation [32, 39, 33, 37]....
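The snippet above describes minimizing a residual loss where derivatives are obtained exactly by automatic differentiation. A minimal, self-contained sketch of that idea, using forward-mode dual numbers in place of a deep-learning framework (the ODE u'(x) = u(x), the trial function `u`, and the sample points are all illustrative, not from the paper):

```python
# Sketch: residual loss for the ODE u'(x) = u(x), with u'(x) computed
# exactly via forward-mode automatic differentiation (dual numbers).

class Dual:
    """Dual number a + b*eps with eps**2 = 0; `der` carries the derivative."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)

    __rmul__ = __mul__

def u(x):
    # Illustrative trial solution (a stand-in for a network output).
    return 1.0 + x + 0.5 * x * x

def residual_loss(points):
    # Mean squared residual r(x) = u'(x) - u(x) over sampled points;
    # u'(x) comes out of a single dual-number forward pass.
    total = 0.0
    for x in points:
        d = u(Dual(x, 1.0))      # d.val = u(x), d.der = u'(x)
        r = d.der - d.val
        total += r * r
    return total / len(points)
```

In frameworks such as the ones the cited works use, the same residual would be formed with reverse-mode autodiff over a network's output, but the loss construction is identical.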
[...]
...Given one input x = [K(s1), · · · , K(sns)], most previous works [32, 39, 33, 37] use FC-NNs to represent the solution as...
[...]
...analytical and meshfree [33, 34]; (2) the loss function can be derived from the variational form [35, 36]; (3) stochastic gradient descent is used to train the network by randomly sampling mini-batches of inputs (spatial locations and/or time instances) [37, 35]; (4) deeper networks are used to break the curse of dimensionality [38] allowing for several high-dimensional PDEs to be solved with high accuracy and speed [39, 40, 37, 41]; (5) multiscale numerical solvers are enhanced by replacing the linear basis with learned ones with DNNs [42, 43]; (6) surrogate modeling for PDEs [44, 45, 36]....
[...]
References
"DGM: A deep learning algorithm for ..." refers methods in this paper
...Parameters are updated using the well-known ADAM algorithm (see [26]) with a decaying learning rate schedule (more details on the learning rate are provided below)....
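The snippet mentions ADAM with a decaying learning-rate schedule. A minimal sketch of one common form, step decay (the base rate, decay factor, and interval here are illustrative placeholders, not the paper's values):

```python
# Hypothetical step-decay schedule: multiply the base learning rate by
# `decay` after every `every` optimizer steps.

def learning_rate(step, base_lr=1e-4, decay=0.1, every=100_000):
    """Return the learning rate to use at the given optimizer step."""
    return base_lr * decay ** (step // every)
```

The resulting rate is then passed to the optimizer at each update, so early training takes large steps and later training refines with progressively smaller ones.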
[...]
"DGM: A deep learning algorithm for ..." refers background in this paper
...2) is similar to the architecture for LSTM networks (see [23]) and highway networks (see [46])....
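The snippet compares the architecture to LSTM and highway networks. The idea those share is a learned gate that blends a transformed value with the unchanged input; a minimal scalar sketch (the weights and function names here are hypothetical, not the paper's architecture):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def highway_unit(x, w_h, b_h, w_t, b_t):
    """Scalar highway-style unit: gate g blends H(x) with the input x."""
    h = math.tanh(w_h * x + b_h)     # candidate transformation H(x)
    g = sigmoid(w_t * x + b_t)       # transform gate T(x), in (0, 1)
    return g * h + (1.0 - g) * x     # g -> 0 passes x through unchanged
```

When the gate saturates near zero the unit acts as an identity connection, which is what makes such gated architectures easy to train at depth.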
[...]
"DGM: A deep learning algorithm for ..." refers background in this paper
...If g ≠ 0 such that g is the trace of some appropriately smooth function, say φ, then one can reduce the inhomogeneous boundary conditions on ∂ΩT to the homogeneous one by introducing in place of u the new function u − φ, see Section 4 of Chapter V in [27] or Chapter 8 of [19] for details on such considerations....
[...]