Justin Sirignano

Researcher at University of Illinois at Urbana–Champaign

Publications: 63
Citations: 2955

Justin Sirignano is an academic researcher at the University of Illinois at Urbana–Champaign. He has contributed to research on topics including artificial neural networks and stochastic gradient descent. He has an h-index of 20 and has co-authored 54 publications receiving 1965 citations. His previous affiliations include Imperial College London and Stanford University.

Papers
Journal Article

DGM: A deep learning algorithm for solving partial differential equations

TL;DR: A deep learning algorithm similar in spirit to Galerkin methods is proposed, using a deep neural network in place of a linear combination of basis functions; the method is implemented for American options in up to 100 dimensions.
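To make the method concrete, the following is a minimal sketch of a DGM-style solver in PyTorch, under illustrative assumptions: the 1-D heat equation u_t = u_xx stands in for the paper's high-dimensional PDEs, the network size and sampling scheme are arbitrary choices, and boundary/initial-condition terms are omitted from the loss for brevity.

# Minimal DGM-style sketch (assumptions: PyTorch; 1-D heat equation u_t = u_xx
# as a stand-in for the paper's PDEs; boundary/initial terms omitted).
import torch
import torch.nn as nn

class Net(nn.Module):
    """Small fully connected network approximating the solution u(t, x)."""
    def __init__(self, width=50):
        super().__init__()
        self.model = nn.Sequential(
            nn.Linear(2, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 1),
        )

    def forward(self, t, x):
        return self.model(torch.cat([t, x], dim=1))

net = Net()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(1000):
    # Sample interior points uniformly at random instead of using a fixed mesh.
    t = torch.rand(256, 1, requires_grad=True)
    x = torch.rand(256, 1, requires_grad=True)
    u = net(t, x)
    # Automatic differentiation supplies the PDE derivatives.
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    # Penalize the squared PDE residual u_t - u_xx at the sampled points.
    loss = ((u_t - u_xx) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

The idea visible here, in the spirit of the paper, is that randomly sampled points replace a mesh and a neural network replaces a basis-function expansion, with automatic differentiation providing the derivatives that enter the residual loss.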
Journal Article

Universal Features of Price Formation in Financial Markets: Perspectives From Deep Learning

TL;DR: Using a large-scale deep learning approach applied to a high-frequency database containing billions of market quotes and transactions for US equities, the authors uncover nonparametric evidence for the existence of a universal and stationary price formation mechanism.
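As a toy illustration of the kind of model involved (the paper's exact architecture and features are not reproduced here), the sketch below assumes a recurrent classifier mapping a window of hypothetical order-book features to the direction of the next price move.

# Toy sketch (assumptions: PyTorch; 10 hypothetical order-book features per
# time step, e.g. best bid/ask prices and depths; binary up/down label).
import torch
import torch.nn as nn

class PriceMoveLSTM(nn.Module):
    def __init__(self, n_features=10, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)  # logits for down/up next price move

    def forward(self, x):                 # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # classify from the last hidden state

model = PriceMoveLSTM()
x = torch.randn(32, 100, 10)              # a batch of 100-step order-book windows
logits = model(x)                         # shape (32, 2)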
Posted Content

Deep Learning for Mortgage Risk

TL;DR: This paper analyzes multi-period mortgage risk at the loan and pool levels using an unprecedented dataset of over 120 million prime and subprime mortgages originated across the United States between 1995 and 2014. The dataset includes the individual characteristics of each loan, monthly updates on loan performance over the life of each loan, and a number of time-varying economic variables at the zip-code level.
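A hedged sketch of the general modeling pattern: a neural network mapping loan-level and local economic features to probabilities over next-period loan states. The feature count and state set below are hypothetical stand-ins, not the paper's specification.

# Illustrative sketch only (assumptions: PyTorch; the feature count and the
# set of loan states are hypothetical stand-ins for the paper's variables).
import torch
import torch.nn as nn

STATES = ["current", "30dd", "60dd", "90+dd", "foreclosure", "prepaid"]

# Feedforward network mapping loan-level and local economic features to a
# probability distribution over next-month loan states.
model = nn.Sequential(
    nn.Linear(20, 128), nn.ReLU(),    # 20 hypothetical features (FICO, LTV, ...)
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, len(STATES)),      # logits; softmax gives transition probs
)

features = torch.randn(64, 20)             # a batch of 64 loan-months
probs = model(features).softmax(dim=1)     # next-state probabilities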
Journal Article

Mean field analysis of neural networks: A law of large numbers

TL;DR: Machine learning and neural networks have revolutionized fields such as image, text, and speech recognition, and many important real-world applications in these areas are built on neural networks. This paper develops a mean-field analysis of single-hidden-layer neural networks trained with stochastic gradient descent, proving a law of large numbers for the empirical distribution of the network parameters.
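In rough terms, for a single-hidden-layer network the result concerns the empirical measure of the trained parameters; the display below is a hedged paraphrase of the statement, not the paper's exact notation:

\[
f^N(x) = \frac{1}{N}\sum_{i=1}^{N} c^i \,\sigma(w^i \cdot x),
\qquad
\nu^N_t = \frac{1}{N}\sum_{i=1}^{N} \delta_{(c^i_t,\, w^i_t)},
\]

and the law of large numbers states that, as the number of hidden units N \to \infty (with the number of SGD steps scaled accordingly), \nu^N_t converges to a deterministic limit \bar{\nu}_t characterized as the solution of a nonlinear evolution equation of McKean–Vlasov type.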
Posted Content

Mean Field Analysis of Neural Networks: A Central Limit Theorem

TL;DR: In this paper, a central limit theorem for neural networks with a single hidden layer is proved in the asymptotic regime of simultaneously (a) a large number of hidden units and (b) a large number of stochastic gradient descent training iterations.
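In the same notation as above, the central limit theorem describes the fluctuations of the empirical measure around its mean-field limit; again a hedged paraphrase rather than the paper's exact statement:

\[
\eta^N_t = \sqrt{N}\,\bigl(\nu^N_t - \bar{\nu}_t\bigr) \;\Rightarrow\; \eta_t
\quad \text{as } N \to \infty,
\]

where the limit \eta_t is a Gaussian process solving a stochastic evolution equation driven by the mean-field limit.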