Konstantinos Spiliopoulos

Researcher at Boston University

Publications: 149
Citations: 3351

Konstantinos Spiliopoulos is an academic researcher from Boston University. The author has contributed to research in topics: Stochastic differential equation & Large deviations theory. The author has an h-index of 23 and has co-authored 139 publications receiving 2439 citations. Previous affiliations of Konstantinos Spiliopoulos include the University of Maryland, College Park and Heriot-Watt University.

Papers
Journal ArticleDOI

DGM: A deep learning algorithm for solving partial differential equations

TL;DR: A deep learning algorithm similar in spirit to Galerkin methods, using a deep neural network instead of linear combinations of basis functions, is proposed and implemented for American options in up to 100 dimensions.
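
The one-line summary describes the method only at a high level. As a concrete illustration, the following is a minimal sketch of a DGM-style solver in PyTorch for a one-dimensional heat equation u_t = u_xx on (0, 1); the network size, sampling scheme, and optimizer settings are illustrative assumptions, not the paper's configuration (the paper targets high-dimensional problems, including American options in up to 100 dimensions).

import torch
import torch.nn as nn

# Network ansatz: u(t, x) approximated by a small fully connected net.
net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(5000):
    # Resample random interior points each iteration: training on sampled
    # minibatches replaces a fixed mesh or basis expansion.
    t = torch.rand(256, 1, requires_grad=True)
    x = torch.rand(256, 1, requires_grad=True)
    u = net(torch.cat([t, x], dim=1))
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    residual = ((u_t - u_xx) ** 2).mean()  # PDE residual loss

    # Initial condition u(0, x) = sin(pi x), an illustrative choice.
    x0 = torch.rand(256, 1)
    init = ((net(torch.cat([torch.zeros_like(x0), x0], dim=1))
             - torch.sin(torch.pi * x0)) ** 2).mean()

    # Zero Dirichlet boundary values at x = 0 and x = 1.
    tb = torch.rand(256, 1)
    xb = (torch.rand(256, 1) > 0.5).float()
    bdry = (net(torch.cat([tb, xb], dim=1)) ** 2).mean()

    loss = residual + init + bdry
    opt.zero_grad()
    loss.backward()
    opt.step()

Minimizing the sampled residual plays the role of the Galerkin projection: instead of forcing the residual to be orthogonal to a fixed basis, the network is trained so the residual is small at randomly sampled points.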
Journal ArticleDOI

Mean field analysis of neural networks: A law of large numbers

TL;DR: Machine learning and neural networks have revolutionized fields such as image, text, and speech recognition; this paper proves a law of large numbers for single-hidden-layer neural networks, showing that the empirical distribution of the network parameters converges to the solution of a nonlinear partial differential equation.
Posted Content

Mean Field Analysis of Neural Networks: A Central Limit Theorem

TL;DR: In this paper, a central limit theorem for neural networks with a single hidden layer is proved in the asymptotic regime of simultaneously (a) a large number of hidden units and (b) a large number of stochastic gradient descent training iterations.
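
In illustrative notation (the precise function spaces and assumptions are the paper's), the fluctuation statement takes the form

\[
\eta^N_t = \sqrt{N}\,\bigl(\mu^N_t - \bar{\mu}_t\bigr) \Longrightarrow \eta_t \qquad (N \to \infty),
\]

where \(\mu^N_t\) is the empirical measure of the \(N\) hidden-unit parameters after roughly \(Nt\) stochastic gradient descent steps, \(\bar{\mu}_t\) is its law-of-large-numbers limit, and the Gaussian limit \(\eta_t\) solves a stochastic partial differential equation.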
Posted Content

Mean Field Analysis of Neural Networks: A Law of Large Numbers

TL;DR: It is rigorously proved that the empirical distribution of the neural network parameters converges to the solution of a nonlinear partial differential equation, which can be considered a law of large numbers for neural networks.
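
In illustrative notation matching this summary, for a single-hidden-layer network with \(N\) units,

\[
g^N(x) = \frac{1}{N}\sum_{i=1}^{N} c^i \,\sigma\!\bigl(w^i \cdot x\bigr),
\qquad
\mu^N_t = \frac{1}{N}\sum_{i=1}^{N} \delta_{(c^i_{\lfloor Nt \rfloor},\, w^i_{\lfloor Nt \rfloor})},
\]

the empirical measure \(\mu^N_t\) of the parameters after roughly \(Nt\) stochastic gradient descent steps converges, as \(N \to \infty\), to a limit \(\bar{\mu}_t\) that solves a nonlinear partial differential equation.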
Journal ArticleDOI

Mean field analysis of neural networks: A central limit theorem

TL;DR: In this article, a central limit theorem for neural networks with a single hidden layer is proved in the asymptotic regime of simultaneously (a) a large number of hidden units and (b) a large number of stochastic gradient descent training iterations.