Institution
Massachusetts Institute of Technology
About: Massachusetts Institute of Technology is an education and research organization based in Cambridge, Massachusetts, United States. It is known for research contributions in the topics of Population and Laser. The organization has 116,795 authors who have published 268,000 publications receiving 18,272,025 citations. The organization is also known as MIT or M.I.T.
Topics: Population, Laser, Context (language use), Computer science, Gene
Papers published on a yearly basis
Papers
TL;DR: Three kinds of algorithms that learn axis-parallel rectangles to solve the multiple instance problem are described and compared, giving 89% correct predictions on a musk odor prediction task.
2,767 citations
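The axis-parallel-rectangle idea in the TL;DR above can be sketched in a few lines. This is a deliberately simplified covering step, not Dietterich et al.'s full iterated-APR algorithms (which also grow and shrink the box against negative bags); `fit_apr` and `bag_is_positive` are hypothetical names for illustration:

```python
import numpy as np

def fit_apr(positive_bags, negative_bags):
    """Fit a naive axis-parallel rectangle (APR): the tightest box covering
    one representative instance per positive bag. The real MIL algorithms
    additionally shrink the box to exclude instances from negative_bags;
    that step is omitted in this sketch, so negative_bags is unused here."""
    # Take the first instance of each positive bag as its representative
    # (a simplification; the paper searches over candidate instances).
    reps = np.array([bag[0] for bag in positive_bags])
    lo, hi = reps.min(axis=0), reps.max(axis=0)
    return lo, hi

def bag_is_positive(bag, lo, hi):
    """Multiple-instance rule: a bag is positive if at least one of its
    instances falls inside the rectangle [lo, hi]."""
    inside = np.all((bag >= lo) & (bag <= hi), axis=1)
    return bool(inside.any())
```

The multiple-instance twist is entirely in `bag_is_positive`: labels attach to bags, and a single covered instance suffices to make a bag positive.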
17 Jun 1997
TL;DR: A decomposition algorithm that guarantees global optimality and can be used to train SVMs over very large data sets is presented, and the feasibility of the approach is demonstrated on a face detection problem involving a data set of 50,000 points.
Abstract: We investigate the application of Support Vector Machines (SVMs) in computer vision. SVM is a learning technique developed by V. Vapnik and his team (AT&T Bell Labs., 1985) that can be seen as a new method for training polynomial, neural network, or Radial Basis Function classifiers. The decision surfaces are found by solving a linearly constrained quadratic programming problem. This optimization problem is challenging because the quadratic form is completely dense and the memory requirements grow with the square of the number of data points. We present a decomposition algorithm that guarantees global optimality and can be used to train SVMs over very large data sets. The main idea behind the decomposition is the iterative solution of sub-problems and the evaluation of optimality conditions, which are used both to generate improved iterates and to establish the stopping criteria for the algorithm. We present experimental results of our implementation of SVM, and demonstrate the feasibility of our approach on a face detection problem that involves a data set of 50,000 data points.
2,764 citations
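The decomposition idea in the abstract above — fix most dual variables, solve a small subproblem exactly, and use the optimality (KKT) conditions both to pick the next working set and to decide when to stop — can be sketched with the smallest possible working set of two multipliers, in the style of Platt's simplified SMO. This is a minimal illustration under assumed simplifications (linear kernel, naive second-variable choice), not the exact algorithm from the paper:

```python
import numpy as np

def smo_train(X, y, C=10.0, tol=1e-4, max_passes=50):
    """Train a linear SVM on the dual problem by decomposition: repeatedly
    pick a multiplier alpha_i that violates the KKT conditions, pair it with
    a second multiplier alpha_j, and solve that two-variable subproblem
    analytically while all other multipliers stay fixed."""
    n = len(y)
    K = X @ X.T                       # linear kernel matrix
    alpha, b = np.zeros(n), 0.0
    passes = 0
    while passes < max_passes:
        changed = 0
        for i in range(n):
            Ei = (alpha * y) @ K[:, i] + b - y[i]   # prediction error on i
            # KKT violation test for alpha_i (the working-set selection).
            if (y[i] * Ei < -tol and alpha[i] < C) or (y[i] * Ei > tol and alpha[i] > 0):
                j = (i + 1) % n                     # naive second choice
                Ej = (alpha * y) @ K[:, j] + b - y[j]
                ai_old, aj_old = alpha[i], alpha[j]
                # Box constraints confine (alpha_i, alpha_j) to a segment.
                if y[i] != y[j]:
                    L, H = max(0.0, aj_old - ai_old), min(C, C + aj_old - ai_old)
                else:
                    L, H = max(0.0, ai_old + aj_old - C), min(C, ai_old + aj_old)
                eta = 2 * K[i, j] - K[i, i] - K[j, j]
                if L == H or eta >= 0:
                    continue
                # Closed-form solution of the two-variable subproblem.
                aj_new = np.clip(aj_old - y[j] * (Ei - Ej) / eta, L, H)
                if abs(aj_new - aj_old) < 1e-5:
                    continue
                alpha[j] = aj_new
                alpha[i] = ai_old + y[i] * y[j] * (aj_old - aj_new)
                # Update the bias from the subproblem's KKT conditions.
                b1 = b - Ei - y[i]*(alpha[i]-ai_old)*K[i, i] - y[j]*(alpha[j]-aj_old)*K[i, j]
                b2 = b - Ej - y[i]*(alpha[i]-ai_old)*K[i, j] - y[j]*(alpha[j]-aj_old)*K[j, j]
                b = (b1 + b2) / 2
                changed += 1
        # Stop once a full sweep finds no KKT violations, repeatedly.
        passes = passes + 1 if changed == 0 else 0
    w = (alpha * y) @ X               # recover the primal weight vector
    return w, b
```

Predictions are then `np.sign(X_new @ w + b)`. The memory point from the abstract is visible here: only rows of `K` touched by the working set are needed per step, so a full dense kernel matrix never has to be held for large data sets.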
TL;DR: This study shows that these r-RuO2 and r-IrO2 NPs can serve as a benchmark in the development of active OER catalysts for electrolyzers, metal-air batteries, and photoelectrochemical water splitting applications.
Abstract: The activities of the oxygen evolution reaction (OER) on iridium-oxide- and ruthenium-oxide-based catalysts are among the highest known to date. However, the OER activities of thermodynamically stable rutile iridium oxide (r-IrO2) and rutile ruthenium oxide (r-RuO2), normalized to catalyst mass or true surface area, are not well-defined. Here we report a synthesis of r-IrO2 and r-RuO2 nanoparticles (NPs) of ∼6 nm, and examine their OER activities in acid and alkaline solutions. Both r-IrO2 and r-RuO2 NPs were highly active for OER, with r-RuO2 exhibiting up to 10 A/goxide at 1.48 V versus the reversible hydrogen electrode. When comparing the two, r-RuO2 NPs were found to have slightly higher intrinsic and mass OER activities than r-IrO2 in both acid and basic solutions. Interestingly, these oxide NPs showed higher stability under OER conditions than commercial Ru/C and Ir/C catalysts. Our study shows that these r-RuO2 and r-IrO2 NPs can serve as a benchmark in the development of active OER catalysts for electrolyzers, metal-air batteries, and photoelectrochemical water splitting applications.
2,762 citations
18 Apr 2019
TL;DR: This work presents SpecAugment, a simple data augmentation method for speech recognition that is applied directly to the feature inputs of a neural network (i.e., filter bank coefficients) and achieves state-of-the-art performance on the LibriSpeech 960h and Switchboard 300h tasks, outperforming all prior work.
Abstract: We present SpecAugment, a simple data augmentation method for speech recognition. SpecAugment is applied directly to the feature inputs of a neural network (i.e., filter bank coefficients). The augmentation policy consists of warping the features, masking blocks of frequency channels, and masking blocks of time steps. We apply SpecAugment on Listen, Attend and Spell networks for end-to-end speech recognition tasks. We achieve state-of-the-art performance on the LibriSpeech 960h and Switchboard 300h tasks, outperforming all prior work. On LibriSpeech, we achieve 6.8% WER on test-other without the use of a language model, and 5.8% WER with shallow fusion with a language model. This compares to the previous state-of-the-art hybrid system of 7.5% WER. For Switchboard, we achieve 7.2%/14.6% on the Switchboard/CallHome portion of the Hub5'00 test set without the use of a language model, and 6.8%/14.1% with shallow fusion, which compares to the previous state-of-the-art hybrid system at 8.3%/17.3% WER.
2,758 citations
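The masking part of the SpecAugment policy described above is easy to sketch directly on a spectrogram array. This covers frequency and time masking only; the time-warping step is omitted, and the function name and default widths are illustrative choices, not the paper's tuned policies:

```python
import numpy as np

rng = np.random.default_rng(0)

def spec_augment(spec, num_freq_masks=2, F=8, num_time_masks=2, T=10):
    """SpecAugment-style masking (time warping omitted): zero out random
    blocks of frequency channels and of time steps in a spectrogram of
    shape (freq_bins, time_steps), e.g. log-mel filter bank features."""
    out = spec.copy()
    n_freq, n_time = out.shape
    for _ in range(num_freq_masks):
        f = rng.integers(0, F + 1)             # mask width in [0, F]
        f0 = rng.integers(0, n_freq - f + 1)   # mask start channel
        out[f0:f0 + f, :] = 0.0
    for _ in range(num_time_masks):
        t = rng.integers(0, T + 1)             # mask width in [0, T]
        t0 = rng.integers(0, n_time - t + 1)   # mask start frame
        out[:, t0:t0 + t] = 0.0
    return out
```

Because the augmentation acts on the features rather than the waveform, it costs almost nothing per training step and needs no extra audio data, which is much of the method's appeal.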
TL;DR: This paper found evidence consistent with managers manipulating real activities to avoid reporting annual losses: offering price discounts to temporarily increase sales, overproducing to report lower cost of goods sold, and cutting discretionary expenditures to improve reported margins.
2,752 citations
Authors
Showing all 117442 results
Name | H-index | Papers | Citations |
---|---|---|---|
Eric S. Lander | 301 | 826 | 525976 |
Robert Langer | 281 | 2324 | 326306 |
George M. Whitesides | 240 | 1739 | 269833 |
Trevor W. Robbins | 231 | 1137 | 164437 |
George Davey Smith | 224 | 2540 | 248373 |
Yi Cui | 220 | 1015 | 199725 |
Robert J. Lefkowitz | 214 | 860 | 147995 |
David J. Hunter | 213 | 1836 | 207050 |
Daniel Levy | 212 | 933 | 194778 |
Rudolf Jaenisch | 206 | 606 | 178436 |
Mark J. Daly | 204 | 763 | 304452 |
David Miller | 203 | 2573 | 204840 |
David Baltimore | 203 | 876 | 162955 |
Rakesh K. Jain | 200 | 1467 | 177727 |
Ronald M. Evans | 199 | 708 | 166722 |