
Lu Lu

Researcher at Massachusetts Institute of Technology

Publications: 56
Citations: 5802

Lu Lu is an academic researcher at the Massachusetts Institute of Technology. The author has contributed to research on topics including artificial neural networks and deep learning, has an h-index of 19, and has co-authored 49 publications receiving 1640 citations. Previous affiliations of Lu Lu include the University of Pennsylvania and Brown University.

Papers
Journal Article (DOI)

MIONet: Learning multiple-input operators via tensor product

TL;DR: A universal approximation theorem for continuous multiple-input operators is proved, and a novel neural operator, MIONet, is proposed that can learn solution operators for systems governed by ordinary and partial differential equations.
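The following is a minimal sketch, not the authors' code, of a MIONet-style forward pass: separate branch networks encode each input function (sampled at fixed sensor points), a trunk network encodes the output coordinate, and the feature vectors are combined by an element-wise (tensor/Hadamard) product and summed. Layer widths, sensor counts, and activation choices are illustrative assumptions.

```python
import torch
import torch.nn as nn

def mlp(sizes):
    # Simple fully connected network with Tanh between hidden layers.
    layers = []
    for i in range(len(sizes) - 1):
        layers.append(nn.Linear(sizes[i], sizes[i + 1]))
        if i < len(sizes) - 2:
            layers.append(nn.Tanh())
    return nn.Sequential(*layers)

class MIONetSketch(nn.Module):
    def __init__(self, n_sensors1=100, n_sensors2=100, p=128):
        super().__init__()
        self.branch1 = mlp([n_sensors1, 128, p])  # encodes input function v1
        self.branch2 = mlp([n_sensors2, 128, p])  # encodes input function v2
        self.trunk = mlp([1, 128, p])             # encodes output location y
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, v1, v2, y):
        # v1: (batch, n_sensors1), v2: (batch, n_sensors2), y: (batch, 1)
        b1 = self.branch1(v1)
        b2 = self.branch2(v2)
        t = self.trunk(y)
        # Element-wise product of the three feature vectors, summed over features.
        return (b1 * b2 * t).sum(dim=-1, keepdim=True) + self.bias
```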
Journal Article (DOI)

DeepM&Mnet for hypersonics: Predicting the coupled flow and finite-rate chemistry behind a normal shock using neural-network approximation of operators

TL;DR: DeepM&Mnet, as discussed by the authors, employs a special neural network for approximating nonlinear operators, the DeepONet, which is used to separately predict each individual field given inputs from the remaining fields of the coupled multiphysics system.
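Below is an illustrative sketch, not the paper's implementation, of a single DeepONet, the operator-approximation building block that DeepM&Mnet assembles: a branch network encodes an input field sampled at m sensor points, a trunk network encodes the query location, and their dot product gives the output field. The sensor count and layer widths are assumptions.

```python
import torch
import torch.nn as nn

class DeepONetSketch(nn.Module):
    def __init__(self, m=100, p=64):
        super().__init__()
        self.branch = nn.Sequential(nn.Linear(m, 64), nn.Tanh(), nn.Linear(64, p))
        self.trunk = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, p))

    def forward(self, u_sensors, x):
        # u_sensors: (batch, m) samples of the input field; x: (batch, 1) query point
        return (self.branch(u_sensors) * self.trunk(x)).sum(dim=-1, keepdim=True)

# A DeepM&Mnet-style assembly would evaluate one such pretrained DeepONet per
# physical field (e.g. velocity, temperature, species), feeding each the
# available inputs from the other fields of the coupled system.
```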
Journal Article (DOI)

Probing the Twisted Structure of Sickle Hemoglobin Fibers via Particle Simulations.

TL;DR: A particle model, resembling a coarse-grained molecular model, is constructed to match the intermolecular contacts between HbS molecules. The model predicts the formation of HbS polymer fibers by attachment of monomers to rough fiber ends, with a growth rate that increases linearly with HbS concentration.
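The toy stochastic sketch below, far simpler than the paper's coarse-grained particle model, illustrates only the reported growth law: fibers lengthen by monomer attachment at their ends, so the mean growth rate scales linearly with HbS concentration. The rate constant, time step, and trial counts are arbitrary illustrative values.

```python
import random

def grow_fiber(concentration, k_on=0.1, steps=10_000, dt=0.01):
    # Each step, one monomer attaches with probability proportional to concentration.
    length = 0
    for _ in range(steps):
        if random.random() < k_on * concentration * dt:
            length += 1
    return length / (steps * dt)  # average growth rate (monomers per unit time)

for c in (1.0, 2.0, 4.0):
    rate = sum(grow_fiber(c) for _ in range(20)) / 20
    print(f"concentration {c:.1f}: mean growth rate ~ {rate:.3f}")
```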
Posted Content

Collapse of Deep and Narrow Neural Nets

TL;DR: This work demonstrates the collapse of deep and narrow NNs both numerically and theoretically, provides estimates of the probability of collapse, and constructs a diagram of a safe region for designing NNs that avoid collapse to erroneous states.
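The snippet below is a minimal numerical probe, not the paper's experiments, of the flavor of this phenomenon: very deep, very narrow ReLU networks frequently degenerate to (nearly) constant functions. It randomly initializes such networks and measures the standard deviation of their outputs over the input domain; a near-zero value flags a collapsed, constant network. The depths, width, and trial count are illustrative choices.

```python
import torch
import torch.nn as nn

def narrow_relu_net(depth, width=2):
    # A ReLU network with `depth` hidden layers of the given (narrow) width.
    layers = [nn.Linear(1, width), nn.ReLU()]
    for _ in range(depth - 1):
        layers += [nn.Linear(width, width), nn.ReLU()]
    layers.append(nn.Linear(width, 1))
    return nn.Sequential(*layers)

x = torch.linspace(-1, 1, 200).unsqueeze(1)
for depth in (2, 10, 50):
    collapsed = 0
    for _ in range(100):
        net = narrow_relu_net(depth)
        with torch.no_grad():
            y = net(x)
        if y.std() < 1e-6:  # output is essentially constant over [-1, 1]
            collapsed += 1
    print(f"depth={depth:3d}: {collapsed}/100 nets collapsed to a constant at init")
```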
Journal Article (DOI)

Deep transfer learning and data augmentation improve glucose levels prediction in type 2 diabetes patients

TL;DR: In this article, the authors developed deep learning methods that use patient-specific glucose measurements, collected every 30 minutes by continuous glucose monitoring (CGM), to predict blood glucose levels over horizons from 5 minutes to 1 hour in the immediate future.
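The following is a hedged sketch, not the authors' code, of the transfer-learning recipe the summary describes: pretrain a recurrent forecaster on pooled CGM series from many patients, then fine-tune the same model on the target patient's own measurements to predict glucose at a future horizon. The model size, window length, optimizer settings, and the placeholder data loaders are assumptions.

```python
import torch
import torch.nn as nn

class GlucoseForecaster(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.rnn = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        # x: (batch, window, 1) past glucose readings
        out, _ = self.rnn(x)
        return self.head(out[:, -1])  # predicted glucose at the target horizon

def train(model, loader, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

# Usage sketch; pooled_loader and patient_loader are hypothetical DataLoaders
# yielding (past_window, future_value) pairs of CGM readings.
# model = GlucoseForecaster()
# train(model, pooled_loader, epochs=50, lr=1e-3)    # pretraining on many patients
# train(model, patient_loader, epochs=10, lr=1e-4)   # patient-specific fine-tuning
```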