Institution

Massachusetts Institute of Technology

Education · Cambridge, Massachusetts, United States
About: Massachusetts Institute of Technology is an education organization based in Cambridge, Massachusetts, United States. It is known for its research contributions in the topics of Population and Laser. The organization has 116,795 authors who have published 268,000 publications, receiving 18,272,025 citations. The organization is also known as MIT and M.I.T.


Papers
Journal ArticleDOI
TL;DR: Three kinds of algorithms that learn axis-parallel rectangles to solve the multiple instance problem are described and compared, giving 89% correct predictions on a musk odor prediction task.
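The rectangle decision rule is easy to sketch: a bag is labeled positive exactly when at least one of its instances lies inside the learned axis-parallel box. Below is a minimal illustration with hypothetical bounds and data; the paper's three algorithms differ in how the rectangle itself is fit, which this sketch does not attempt.

```python
import numpy as np

def bag_is_positive(bag, lower, upper):
    # A bag is positive iff some instance (row) lies inside the
    # axis-parallel rectangle [lower, upper] in every feature dimension.
    inside = np.all((bag >= lower) & (bag <= upper), axis=1)
    return bool(inside.any())

# Hypothetical 2-D rectangle and bags (instances are rows).
lower, upper = np.array([0.0, 0.0]), np.array([1.0, 1.0])
pos_bag = np.array([[2.0, 2.0], [0.5, 0.5]])   # one instance inside
neg_bag = np.array([[2.0, 2.0], [3.0, -1.0]])  # no instance inside

print(bag_is_positive(pos_bag, lower, upper))  # True
print(bag_is_positive(neg_bag, lower, upper))  # False
```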

2,767 citations

Proceedings ArticleDOI
17 Jun 1997
TL;DR: A decomposition algorithm that guarantees global optimality and can be used to train SVMs over very large data sets is presented, and the feasibility of the approach is demonstrated on a face detection problem involving a data set of 50,000 points.
Abstract: We investigate the application of Support Vector Machines (SVMs) in computer vision. SVM is a learning technique developed by V. Vapnik and his team (AT&T Bell Labs., 1985) that can be seen as a new method for training polynomial, neural network, or Radial Basis Function classifiers. The decision surfaces are found by solving a linearly constrained quadratic programming problem. This optimization problem is challenging because the quadratic form is completely dense and the memory requirements grow with the square of the number of data points. We present a decomposition algorithm that guarantees global optimality and can be used to train SVMs over very large data sets. The main idea behind the decomposition is the iterative solution of sub-problems and the evaluation of optimality conditions, which are used both to generate improved iterates and to establish the stopping criteria for the algorithm. We present experimental results of our implementation of SVM and demonstrate the feasibility of our approach on a face detection problem that involves a data set of 50,000 data points.
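The decomposition described in the abstract can be illustrated at its smallest scale: a working set of two dual multipliers, whose sub-problem has a closed-form solution (the idea later popularized as SMO). The sketch below is a simplified variant on a linear kernel, not the authors' solver; for clarity it even builds the dense Gram matrix that the paper's method is designed to avoid, and all names and defaults are illustrative.

```python
import numpy as np

def smo_train(X, y, C=1.0, tol=1e-3, max_passes=5, seed=0):
    """Simplified SMO: decompose the SVM dual into two-variable
    sub-problems solved in closed form; KKT violations drive both
    the working-set selection and the stopping criterion."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    K = X @ X.T                      # dense linear-kernel Gram matrix
    alpha, b, passes = np.zeros(n), 0.0, 0
    while passes < max_passes:       # stop after max_passes with no update
        changed = 0
        for i in range(n):
            Ei = (alpha * y) @ K[:, i] + b - y[i]
            # Does alpha_i violate its KKT condition (within tol)?
            if (y[i] * Ei < -tol and alpha[i] < C) or \
               (y[i] * Ei > tol and alpha[i] > 0):
                j = rng.integers(n - 1)
                j += (j >= i)        # random j != i
                Ej = (alpha * y) @ K[:, j] + b - y[j]
                ai_old, aj_old = alpha[i], alpha[j]
                if y[i] != y[j]:     # box constraints for the pair
                    L, H = max(0, aj_old - ai_old), min(C, C + aj_old - ai_old)
                else:
                    L, H = max(0, ai_old + aj_old - C), min(C, ai_old + aj_old)
                eta = 2 * K[i, j] - K[i, i] - K[j, j]
                if L == H or eta >= 0:
                    continue
                # Closed-form solution of the two-variable sub-problem.
                alpha[j] = np.clip(aj_old - y[j] * (Ei - Ej) / eta, L, H)
                if abs(alpha[j] - aj_old) < 1e-5:
                    continue
                alpha[i] += y[i] * y[j] * (aj_old - alpha[j])
                # Update the bias from whichever multiplier is interior.
                b1 = b - Ei - y[i]*(alpha[i]-ai_old)*K[i, i] - y[j]*(alpha[j]-aj_old)*K[i, j]
                b2 = b - Ej - y[i]*(alpha[i]-ai_old)*K[i, j] - y[j]*(alpha[j]-aj_old)*K[j, j]
                b = b1 if 0 < alpha[i] < C else b2 if 0 < alpha[j] < C else (b1 + b2) / 2
                changed += 1
        passes = passes + 1 if changed == 0 else 0
    w = (alpha * y) @ X              # recover primal weights (linear kernel)
    return w, b
```

A production decomposition method would keep only a small working set's kernel columns in memory and select violators greedily rather than at random, which is precisely what makes the approach viable at the paper's 50,000-point scale.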

2,764 citations

Journal ArticleDOI
TL;DR: This study shows that these r-RuO2 and r-IrO2 NPs can serve as a benchmark in the development of active OER catalysts for electrolyzers, metal-air batteries, and photoelectrochemical water splitting applications.
Abstract: The activities of the oxygen evolution reaction (OER) on iridium-oxide- and ruthenium-oxide-based catalysts are among the highest known to date. However, the OER activities of thermodynamically stable rutile iridium oxide (r-IrO2) and rutile ruthenium oxide (r-RuO2), normalized to catalyst mass or true surface area, are not well-defined. Here we report a synthesis of r-IrO2 and r-RuO2 nanoparticles (NPs) of ∼6 nm and examine their OER activities in acid and alkaline solutions. Both r-IrO2 and r-RuO2 NPs were highly active for OER, with r-RuO2 exhibiting up to 10 A/g_oxide at 1.48 V versus the reversible hydrogen electrode. When comparing the two, r-RuO2 NPs were found to have slightly higher intrinsic and mass OER activities than r-IrO2 in both acid and basic solutions. Interestingly, these oxide NPs showed higher stability under OER conditions than commercial Ru/C and Ir/C catalysts. Our study shows that these r-RuO2 and r-IrO2 NPs can serve as a benchmark in the development of active OER catalysts for electrolyzers, metal-air batteries, and photoelectrochemical water splitting applications.
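The mass activity quoted above is simply the measured OER current divided by the oxide loading on the electrode. A back-of-the-envelope check, in which only the 10 A/g_oxide figure comes from the abstract and the electrode numbers are assumed:

```python
# Hypothetical electrode numbers; only the 10 A/g figure is from the abstract.
current_A = 0.10e-3   # 0.10 mA of OER current at 1.48 V vs. RHE (assumed)
loading_g = 10e-6     # 10 micrograms of oxide on the electrode (assumed)
mass_activity = current_A / loading_g
print(f"{mass_activity:.1f} A/g_oxide")  # 10.0, matching the reported value
```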

2,762 citations

Proceedings ArticleDOI
18 Apr 2019
TL;DR: This work presents SpecAugment, a simple data augmentation method for speech recognition that is applied directly to the feature inputs of a neural network (i.e., filter bank coefficients) and achieves state-of-the-art performance on the LibriSpeech 960h and Switchboard 300h tasks, outperforming all prior work.
Abstract: We present SpecAugment, a simple data augmentation method for speech recognition. SpecAugment is applied directly to the feature inputs of a neural network (i.e., filter bank coefficients). The augmentation policy consists of warping the features, masking blocks of frequency channels, and masking blocks of time steps. We apply SpecAugment on Listen, Attend and Spell networks for end-to-end speech recognition tasks. We achieve state-of-the-art performance on the LibriSpeech 960h and Switchboard 300h tasks, outperforming all prior work. On LibriSpeech, we achieve 6.8% WER on test-other without the use of a language model, and 5.8% WER with shallow fusion with a language model. This compares to the previous state-of-the-art hybrid system of 7.5% WER. For Switchboard, we achieve 7.2%/14.6% on the Switchboard/CallHome portion of the Hub5'00 test set without the use of a language model, and 6.8%/14.1% with shallow fusion, which compares to the previous state-of-the-art hybrid system at 8.3%/17.3% WER.
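The masking half of the augmentation policy is easy to sketch: zero out a random block of mel channels and a random block of frames. The function below is a minimal illustration, not the paper's implementation; time warping is omitted, and the mask-size defaults are merely in the ballpark of the published policies.

```python
import numpy as np

def spec_augment(spec, F=27, T=100, n_freq_masks=1, n_time_masks=1, seed=0):
    """SpecAugment-style masking on a spectrogram of shape
    (num_mel_bins, num_frames): zero random frequency-channel blocks
    and random time-step blocks. Time warping is omitted here."""
    rng = np.random.default_rng(seed)
    out = spec.copy()
    n_mels, n_frames = out.shape
    for _ in range(n_freq_masks):                  # frequency masking
        f = rng.integers(0, F + 1)                 # mask width in [0, F]
        f0 = rng.integers(0, max(n_mels - f, 0) + 1)
        out[f0:f0 + f, :] = 0.0
    for _ in range(n_time_masks):                  # time masking
        t = rng.integers(0, T + 1)                 # mask width in [0, T]
        t0 = rng.integers(0, max(n_frames - t, 0) + 1)
        out[:, t0:t0 + t] = 0.0
    return out

# 80 mel bins x 500 frames of fake features, masked once per axis.
masked = spec_augment(np.random.randn(80, 500))
```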

2,758 citations

Journal ArticleDOI
TL;DR: This paper finds evidence consistent with managers manipulating real activities to avoid reporting annual losses: offering price discounts to temporarily boost sales, overproducing to report lower cost of goods sold, and cutting discretionary expenditures to improve reported margins.

2,752 citations


Authors

Showing 15 of 117,442 authors, ranked by H-index (see the sketch after the table for how H-index is computed).

Name                     H-index  Papers  Citations
Eric S. Lander           301      826     525,976
Robert Langer            281      2,324   326,306
George M. Whitesides     240      1,739   269,833
Trevor W. Robbins        231      1,137   164,437
George Davey Smith       224      2,540   248,373
Yi Cui                   220      1,015   199,725
Robert J. Lefkowitz      214      860     147,995
David J. Hunter          213      1,836   207,050
Daniel Levy              212      933     194,778
Rudolf Jaenisch          206      606     178,436
Mark J. Daly             204      763     304,452
David Miller             203      2,573   204,840
David Baltimore          203      876     162,955
Rakesh K. Jain           200      1,467   177,727
Ronald M. Evans          199      708     166,722
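The H-index column counts the largest h such that the author has h papers with at least h citations each; a minimal sketch of that computation:

```python
def h_index(citations):
    # Largest h such that at least h papers have >= h citations each.
    cites = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(cites, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with >= 4 citations
```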
Network Information
Related Institutions (5)
University of California, Berkeley: 265.6K papers, 16.8M citations (96% related)
Stanford University: 320.3K papers, 21.8M citations (95% related)
University of Illinois at Urbana–Champaign: 225.1K papers, 10.1M citations (95% related)
University of California, San Diego: 204.5K papers, 12.3M citations (95% related)
Columbia University: 224K papers, 12.8M citations (94% related)

Performance Metrics
No. of papers from the institution in previous years:

Year  Papers
2023  240
2022  1,124
2021  10,595
2020  11,922
2019  11,207
2018  10,883