Institution

Courant Institute of Mathematical Sciences

Education · New York, New York, United States
About: Courant Institute of Mathematical Sciences is an education organization based in New York, New York, United States. It is known for research contributions in the topics: Nonlinear system & Boundary value problem. The organization has 2414 authors who have published 7759 publications receiving 439773 citations. The organization is also known as: CIMS & New York University Department of Mathematics.


Papers
Journal ArticleDOI
TL;DR: An automatic, adaptive mesh refinement strategy for solving hyperbolic conservation laws in two dimensions is developed, along with a way of organizing the algorithm to minimize memory and CPU overhead. A one-dimensional sketch of the flag-and-refine idea follows.
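
The sketch below illustrates the flag-and-refine idea in one dimension only; the gradient-based error indicator, the refinement ratio, and all names here are assumptions for illustration, not the paper's actual two-dimensional algorithm.

```python
# Minimal 1D sketch of flag-and-refine adaptive mesh refinement (AMR).
# Illustration only: the error indicator and refinement ratio are assumed.
import numpy as np

def flag_cells(u, dx, tol):
    """Flag cells whose local solution gradient exceeds a tolerance."""
    return np.abs(np.gradient(u, dx)) > tol

def refine_patch(x, u, flags, ratio=2):
    """Cover the flagged region with a grid `ratio` times finer (interpolated)."""
    idx = np.where(flags)[0]
    if idx.size == 0:
        return None, None
    x_fine = np.linspace(x[idx[0]], x[idx[-1]], ratio * idx.size)
    return x_fine, np.interp(x_fine, x, u)

x = np.linspace(0.0, 1.0, 101)
u = np.tanh(50.0 * (x - 0.5))        # sharp front that calls for refinement
x_fine, u_fine = refine_patch(x, u, flag_cells(u, x[1] - x[0], tol=5.0))
```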

2,650 citations

Proceedings Article
08 Dec 2008
TL;DR: The problem of finding a best code for a given dataset is closely related to the problem of graph partitioning and can be shown to be NP-hard; relaxing it yields a spectral method whose solutions are simply a subset of thresholded eigenvectors of the graph Laplacian.
Abstract: Semantic hashing[1] seeks compact binary codes of data-points so that the Hamming distance between codewords correlates with semantic similarity. In this paper, we show that the problem of finding a best code for a given dataset is closely related to the problem of graph partitioning and can be shown to be NP hard. By relaxing the original problem, we obtain a spectral method whose solutions are simply a subset of thresholded eigenvectors of the graph Laplacian. By utilizing recent results on convergence of graph Laplacian eigenvectors to the Laplace-Beltrami eigenfunctions of manifolds, we show how to efficiently calculate the code of a novel data-point. Taken together, both learning the code and applying it to a novel point are extremely simple. Our experiments show that our codes outperform the state of the art.
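
The abstract's core construction, binary codes obtained by thresholding graph Laplacian eigenvectors, can be sketched in a few lines. This is a minimal illustration under assumed choices (a Gaussian affinity, 8 bits); the paper's efficient out-of-sample coding of novel points via Laplace-Beltrami eigenfunctions is not shown.

```python
# Minimal sketch: binary codes as thresholded eigenvectors of the graph
# Laplacian. The Gaussian affinity and bit count are assumptions.
import numpy as np
from scipy.spatial.distance import cdist

def spectral_codes(X, n_bits=8, sigma=1.0):
    W = np.exp(-cdist(X, X, "sqeuclidean") / (2.0 * sigma**2))  # affinity
    L = np.diag(W.sum(axis=1)) - W          # unnormalized graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)    # eigenvalues in ascending order
    # Skip the trivial constant eigenvector; threshold the next n_bits at 0.
    return (eigvecs[:, 1:n_bits + 1] > 0).astype(np.uint8)

X = np.random.default_rng(0).normal(size=(200, 16))
codes = spectral_codes(X)                   # (200, 8) binary codes
```

Codes produced this way give Hamming distances that track the graph structure of the training set, which is the correlation with similarity that the abstract asks for.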

2,641 citations

Journal ArticleDOI
TL;DR: In this paper, the finite difference methods of Godunov, Hyman, Lax and Wendroff (two-step), MacCormack, Rusanov, the upwind scheme, the hybrid scheme of Harten and Zwas, the antidiffusion method of Boris and Book, and Glimm's method, a random choice method, are discussed.
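
As a concrete example of the class of schemes surveyed, here is a minimal sketch of the first-order upwind scheme applied to the linear advection equation u_t + a u_x = 0 with a > 0 and periodic boundaries; the grid, time step, and initial data are assumptions for illustration.

```python
# First-order upwind scheme for u_t + a u_x = 0 (a > 0), periodic domain.
import numpy as np

def upwind_advection(u0, a, dx, dt, n_steps):
    """March the solution forward; stable for CFL number 0 <= a*dt/dx <= 1."""
    u = u0.copy()
    c = a * dt / dx                       # CFL number (0.5 here)
    for _ in range(n_steps):
        u -= c * (u - np.roll(u, 1))      # backward difference for a > 0
    return u

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u0 = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)   # square pulse
u = upwind_advection(u0, a=1.0, dx=x[1] - x[0], dt=0.0025, n_steps=200)
```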

2,448 citations

Proceedings ArticleDOI
01 Sep 2009
TL;DR: It is shown that using non-linearities that include rectification and local contrast normalization is the single most important ingredient for good accuracy on object recognition benchmarks and that two stages of feature extraction yield better accuracy than one.
Abstract: In many recent object recognition systems, feature extraction stages are generally composed of a filter bank, a non-linear transformation, and some sort of feature pooling layer. Most systems use only one stage of feature extraction in which the filters are hard-wired, or two stages where the filters in one or both stages are learned in supervised or unsupervised mode. This paper addresses three questions: (1) How do the non-linearities that follow the filter banks influence the recognition accuracy? (2) Does learning the filter banks in an unsupervised or supervised manner improve the performance over random filters or hardwired filters? (3) Is there any advantage to using an architecture with two stages of feature extraction, rather than one? We show that using non-linearities that include rectification and local contrast normalization is the single most important ingredient for good accuracy on object recognition benchmarks. We show that two stages of feature extraction yield better accuracy than one. Most surprisingly, we show that a two-stage system with random filters can yield almost 63% recognition rate on Caltech-101, provided that the proper non-linearities and pooling layers are used. Finally, we show that with supervised refinement, the system achieves state-of-the-art performance on the NORB dataset (5.6%), and that unsupervised pre-training followed by supervised refinement produces good accuracy on Caltech-101 (> 65%) and the lowest known error rate on the undistorted, unprocessed MNIST dataset (0.53%).
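
The pipeline the abstract describes (filter bank, rectification, local contrast normalization, pooling) can be sketched for a single stage as follows. The random filters echo the paper's finding that random filter banks can work surprisingly well; the filter size, normalization window, and pooling size here are assumptions, not the paper's settings.

```python
# One feature-extraction stage: filter bank -> rectification -> local
# contrast normalization -> average pooling. Sizes are illustrative only.
import numpy as np
from scipy.ndimage import uniform_filter
from scipy.signal import convolve2d

def feature_stage(img, filters, pool=4, eps=1e-5):
    maps = []
    for f in filters:
        r = np.abs(convolve2d(img, f, mode="same"))   # filter + rectification
        mean = uniform_filter(r, size=9)              # local contrast
        var = uniform_filter((r - mean) ** 2, size=9) #   normalization
        n = (r - mean) / np.sqrt(var + eps)
        # Average pooling over non-overlapping pool x pool blocks.
        h, w = (n.shape[0] // pool) * pool, (n.shape[1] // pool) * pool
        maps.append(n[:h, :w]
                    .reshape(h // pool, pool, w // pool, pool)
                    .mean(axis=(1, 3)))
    return np.stack(maps)

rng = np.random.default_rng(0)
filters = rng.normal(size=(8, 7, 7))                  # random filter bank
features = feature_stage(rng.normal(size=(64, 64)), filters)  # (8, 16, 16)
```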

2,317 citations

Journal ArticleDOI
TL;DR: It has long been assumed that sensory neurons are adapted to the statistical properties of the signals to which they are exposed, but recent developments in statistical modeling have enabled researchers to study more sophisticated statistical models for visual images, to validate these models empirically against large sets of data, and to begin experimentally testing the efficient coding hypothesis.
Abstract: It has long been assumed that sensory neurons are adapted, through both evolutionary and developmental processes, to the statistical properties of the signals to which they are exposed. Attneave (1954) and Barlow (1961) proposed that information theory could provide a link between environmental statistics and neural responses through the concept of coding efficiency. Recent developments in statistical modeling, along with powerful computational tools, have enabled researchers to study more sophisticated statistical models for visual images, to validate these models empirically against large sets of data, and to begin experimentally testing the efficient coding hypothesis for both individual neurons and populations of neurons.

2,280 citations


Authors

Showing all 2441 results

Name | H-index | Papers | Citations
Xiang Zhang | 154 | 1733 | 117576
Yann LeCun | 121 | 369 | 171211
Benoît Roux | 120 | 493 | 62215
Alan S. Perelson | 118 | 632 | 66767
Thomas J. Spencer | 116 | 531 | 52743
Salvatore Torquato | 104 | 552 | 40208
Joel L. Lebowitz | 101 | 754 | 39713
Bo Huang | 97 | 728 | 40135
Amir Pnueli | 94 | 331 | 43351
Rolf D. Reitz | 93 | 611 | 36618
Michael Q. Zhang | 93 | 378 | 42008
Samuel Karlin | 89 | 396 | 41432
David J. Heeger | 88 | 268 | 38154
Luis A. Caffarelli | 87 | 353 | 32440
Weinan E | 84 | 323 | 22887

Network Information
Network Information
Related Institutions (5)
Princeton University
146.7K papers, 9.1M citations

87% related

Massachusetts Institute of Technology
268K papers, 18.2M citations

87% related

Carnegie Mellon University
104.3K papers, 5.9M citations

85% related

ETH Zurich
122.4K papers, 5.1M citations

85% related

University of California, Santa Barbara
80.8K papers, 4.6M citations

85% related

Performance Metrics
No. of papers from the Institution in previous years
Year | Papers
2023 | 17
2022 | 44
2021 | 299
2020 | 291
2019 | 355
2018 | 301