
Courant Institute of Mathematical Sciences

Education · New York, New York, United States
About: Courant Institute of Mathematical Sciences is an education organization based in New York, New York, United States. It is known for research contributions in the topics Nonlinear system and Boundary value problem. The organization has 2,414 authors who have published 7,759 publications receiving 439,773 citations. It is also known as CIMS and the New York University Department of Mathematics.


Papers
Proceedings Article
17 Jul 2017
TL;DR: This work introduces a new algorithm named WGAN, an alternative to traditional GAN training that can improve the stability of learning, get rid of problems like mode collapse, and provide meaningful learning curves useful for debugging and hyperparameter searches.
Abstract: We introduce a new algorithm named WGAN, an alternative to traditional GAN training. In this new model, we show that we can improve the stability of learning, get rid of problems like mode collapse, and provide meaningful learning curves useful for debugging and hyperparameter searches. Furthermore, we show that the corresponding optimization problem is sound, and provide extensive theoretical work highlighting the deep connections to different distances between distributions.

5,667 citations
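The abstract above describes the WGAN critic: an estimator of the Wasserstein distance trained under a Lipschitz constraint enforced by weight clipping. Below is a minimal sketch of one critic update, assuming PyTorch and pre-built `critic` and `generator` modules (names are ours, not the authors'); the clip value 0.01 follows the paper's defaults, everything else is illustrative.

```python
import torch

def critic_iteration(critic, generator, real, opt_critic, clip=0.01, z_dim=100):
    """One critic update: raise E[f(x_real)] - E[f(G(z))], then clip weights."""
    z = torch.randn(real.size(0), z_dim, device=real.device)
    fake = generator(z).detach()  # do not backprop into the generator here
    # Optimizers minimize, so negate to ascend the Wasserstein estimate.
    loss = -(critic(real).mean() - critic(fake).mean())
    opt_critic.zero_grad()
    loss.backward()
    opt_critic.step()
    # Weight clipping: the paper's (crude) way to keep the critic Lipschitz.
    for p in critic.parameters():
        p.data.clamp_(-clip, clip)
    return -loss.item()  # current estimate of the Wasserstein distance
```

In the paper's training loop this update runs several times (n_critic = 5 by default) before each generator step, which is what yields the meaningful learning curves mentioned in the abstract.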

Journal ArticleDOI
TL;DR: This paper is concerned with the mathematical structure of the immersed boundary (IB) method, which is intended for the computer simulation of fluid–structure interaction, especially in biological fluid dynamics.
Abstract: This paper is concerned with the mathematical structure of the immersed boundary (IB) method, which is intended for the computer simulation of fluid–structure interaction, especially in biological fluid dynamics. The IB formulation of such problems, derived here from the principle of least action, involves both Eulerian and Lagrangian variables, linked by the Dirac delta function. Spatial discretization of the IB equations is based on a fixed Cartesian mesh for the Eulerian variables, and a moving curvilinear mesh for the Lagrangian variables. The two types of variables are linked by interaction equations that involve a smoothed approximation to the Dirac delta function. Eulerian/Lagrangian identities govern the transfer of data from one mesh to the other. Temporal discretization is by a second-order Runge–Kutta method. Current and future research directions are pointed out, and applications of the IB method are briefly discussed.

Introduction: The immersed boundary (IB) method was introduced to study flow patterns around heart valves and has evolved into a generally useful method for problems of fluid–structure interaction. The IB method is both a mathematical formulation and a numerical scheme. The mathematical formulation employs a mixture of Eulerian and Lagrangian variables. These are related by interaction equations in which the Dirac delta function plays a prominent role. In the numerical scheme motivated by the IB formulation, the Eulerian variables are defined on a fixed Cartesian mesh, and the Lagrangian variables are defined on a curvilinear mesh that moves freely through the fixed Cartesian mesh without being constrained to adapt to it in any way at all.

4,164 citations
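The Eulerian/Lagrangian coupling described above comes down to two transfers through a smoothed delta function: spreading Lagrangian forces onto the Cartesian mesh, and interpolating Eulerian velocities back to the moving points. The following is a minimal 1-D sketch using Peskin's standard four-point kernel; the grid, spacing `h`, and point layout are illustrative assumptions, not details from the paper.

```python
import numpy as np

def phi(r):
    """Peskin's four-point regularized delta kernel (support |r| <= 2)."""
    r = np.abs(np.asarray(r, dtype=float))
    out = np.zeros_like(r)
    inner = r <= 1.0
    outer = (r > 1.0) & (r < 2.0)
    out[inner] = (3 - 2 * r[inner] + np.sqrt(1 + 4 * r[inner] - 4 * r[inner] ** 2)) / 8
    out[outer] = (5 - 2 * r[outer] - np.sqrt(-7 + 12 * r[outer] - 4 * r[outer] ** 2)) / 8
    return out

def spread(F, X, x, h):
    """Spread Lagrangian forces F at points X onto the Eulerian grid x:
    f(x_i) = sum_k F_k * delta_h(x_i - X_k), with delta_h(r) = phi(r/h)/h."""
    return np.array([np.sum(F * phi((xi - X) / h)) / h for xi in x])

def interpolate(u, X, x, h):
    """Interpolate the Eulerian velocity u back to the Lagrangian points X:
    U_k = sum_i u_i * delta_h(x_i - X_k) * h."""
    return np.array([np.sum(u * phi((x - Xk) / h)) * h for Xk in X])
```

The two transfers use the same kernel, which is what gives the Eulerian/Lagrangian identities mentioned in the abstract (for example, conservation of total force under spreading).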

Posted Content
TL;DR: This work proposes an alternative to weight clipping: penalizing the norm of the gradient of the critic with respect to its input. The proposed method performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning.
Abstract: Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability. The recently proposed Wasserstein GAN (WGAN) makes progress toward stable training of GANs, but sometimes can still generate only low-quality samples or fail to converge. We find that these problems are often due to the use of weight clipping in WGAN to enforce a Lipschitz constraint on the critic, which can lead to undesired behavior. We propose an alternative to clipping weights: penalize the norm of the gradient of the critic with respect to its input. Our proposed method performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning, including 101-layer ResNets and language models over discrete data. We also achieve high quality generations on CIFAR-10 and LSUN bedrooms.

4,133 citations

Proceedings Article
04 Dec 2017
TL;DR: The authors proposed to penalize the norm of the gradient of the critic with respect to its input to improve the training stability of Wasserstein GANs and achieve stable training of a wide variety of GAN architectures with almost no hyperparameter tuning.
Abstract: Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability. The recently proposed Wasserstein GAN (WGAN) makes progress toward stable training of GANs, but sometimes can still generate only poor samples or fail to converge. We find that these problems are often due to the use of weight clipping in WGAN to enforce a Lipschitz constraint on the critic, which can lead to undesired behavior. We propose an alternative to clipping weights: penalize the norm of the gradient of the critic with respect to its input. Our proposed method performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning, including 101-layer ResNets and language models with continuous generators. We also achieve high quality generations on CIFAR-10 and LSUN bedrooms.

3,622 citations
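The gradient penalty described in the two entries above replaces weight clipping with a soft constraint: the critic's gradient norm is pushed toward 1 at points interpolated between real and generated samples. A minimal PyTorch sketch follows; this is not the authors' released code. The penalty coefficient lambda = 10 matches the paper, while the helper name `gradient_penalty` is ours.

```python
import torch

def gradient_penalty(critic, real, fake, lam=10.0):
    """Penalize (||grad_xhat critic(xhat)||_2 - 1)^2 at random interpolates
    xhat between real and generated samples."""
    # One uniform mixing coefficient per sample, broadcast over feature dims.
    eps = torch.rand(real.size(0), *([1] * (real.dim() - 1)), device=real.device)
    xhat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(xhat)
    # create_graph=True so the penalty itself can be backpropagated through.
    grads, = torch.autograd.grad(scores.sum(), xhat, create_graph=True)
    norms = grads.flatten(1).norm(2, dim=1)
    return lam * ((norms - 1) ** 2).mean()
```

The term is simply added to the critic loss in place of the clipping step, which is what allows the deep architectures mentioned in the abstracts to train without tuning.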

Proceedings Article
07 Dec 2015
TL;DR: This paper empirically explores character-level convolutional networks (ConvNets) for text classification and compares them with traditional models such as bag of words, n-grams and their TFIDF variants.
Abstract: This article offers an empirical exploration on the use of character-level convolutional networks (ConvNets) for text classification. We constructed several large-scale datasets to show that character-level convolutional networks could achieve state-of-the-art or competitive results. Comparisons are offered against traditional models such as bag of words, n-grams and their TFIDF variants, and deep learning models such as word-based ConvNets and recurrent neural networks.

3,052 citations
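The character-level encoding behind the paper above is simple: each character becomes a one-hot column, and a 1-D ConvNet reads the resulting matrix. The sketch below is illustrative, not the paper's nine-layer architecture; the fixed input length of 1014 follows the paper, while the alphabet string only approximates its 70-symbol vocabulary.

```python
import torch
import torch.nn as nn

ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789-,;.!?:'\"/\\|_@#$%^&*~`+=<>()[]{}"
CHAR_TO_IDX = {c: i for i, c in enumerate(ALPHABET)}
SEQ_LEN = 1014  # fixed input length used in the paper

def quantize(text):
    """One-hot encode characters; unknown characters map to all-zero columns."""
    x = torch.zeros(len(ALPHABET), SEQ_LEN)
    for pos, ch in enumerate(text.lower()[:SEQ_LEN]):
        idx = CHAR_TO_IDX.get(ch)
        if idx is not None:
            x[idx, pos] = 1.0
    return x

# A deliberately small stand-in for the paper's architecture.
model = nn.Sequential(
    nn.Conv1d(len(ALPHABET), 256, kernel_size=7), nn.ReLU(),
    nn.MaxPool1d(3),
    nn.Conv1d(256, 256, kernel_size=7), nn.ReLU(),
    nn.AdaptiveMaxPool1d(1), nn.Flatten(),
    nn.Linear(256, 4),  # e.g. 4 output classes
)

logits = model(quantize("character-level convnets read raw text").unsqueeze(0))
```

Because the input is raw characters, the model needs no tokenizer or pretrained embeddings, which is what makes the comparison against bag-of-words and word-based models interesting.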


Authors

Showing all 2441 results

Name                                      H-index   Papers   Citations
Xiang Zhang                               154       1733     117576
Yann LeCun                                121       369      171211
Benoît Roux                               120       493      62215
Alan S. Perelson                          118       632      66767
Thomas J. Spencer                         116       531      52743
Salvatore Torquato                        104       552      40208
Joel L. Lebowitz                          101       754      39713
Bo Huang                                  97        728      40135
Amir Pnueli                               94        331      43351
Rolf D. Reitz                             93        611      36618
Michael Q. Zhang                          93        378      42008
Samuel Karlin                             89        396      41432
David J. Heeger                           88        268      38154
Luis A. Caffarelli                        87        353      32440
Weinan E                                  84        323      22887
Network Information
Related Institutions (5)

Princeton University · 146.7K papers, 9.1M citations · 87% related
Massachusetts Institute of Technology · 268K papers, 18.2M citations · 87% related
Carnegie Mellon University · 104.3K papers, 5.9M citations · 85% related
ETH Zurich · 122.4K papers, 5.1M citations · 85% related
University of California, Santa Barbara · 80.8K papers, 4.6M citations · 85% related

Performance Metrics
No. of papers from the institution in previous years

Year   Papers
2023   17
2022   44
2021   299
2020   291
2019   355
2018   301