Institution

Massachusetts Institute of Technology

Education
Cambridge, Massachusetts, United States
About: Massachusetts Institute of Technology is an education organization based in Cambridge, Massachusetts, United States. It is known for its research contributions in the topics of Population and Laser. The organization has 116,795 authors who have published 268,000 publications receiving 18,272,025 citations. The organization is also known as MIT and M.I.T.


Papers
Book
01 Jan 1967
TL;DR: In this book, the author develops an abstract theory of what computing machines can and cannot do, giving sound theoretical or practical grounds for each judgment. The theory shows in no uncertain terms that the machines' potential range is enormous and that their theoretical limitations are of the subtlest and most elusive sort.
Abstract: From the Preface (See Front Matter for full Preface) Man has within a single generation found himself sharing the world with a strange new species: the computers and computer-like machines. Neither history, nor philosophy, nor common sense will tell us how these machines will affect us, for they do not do "work" as did machines of the Industrial Revolution. Instead of dealing with materials or energy, we are told that they handle "control" and "information" and even "intellectual processes." There are very few individuals today who doubt that the computer and its relatives are developing rapidly in capability and complexity, and that these machines are destined to play important (though not as yet fully understood) roles in society's future. Though only some of us deal directly with computers, all of us are falling under the shadow of their ever-growing sphere of influence, and thus we all need to understand their capabilities and their limitations. It would indeed be reassuring to have a book that categorically and systematically described what all these machines can do and what they cannot do, giving sound theoretical or practical grounds for each judgment. However, although some books have purported to do this, it cannot be done for the following reasons: a) Computer-like devices are utterly unlike anything which science has ever considered---we still lack the tools necessary to fully analyze, synthesize, or even think about them; and b) The methods discovered so far are effective in certain areas, but are developing much too rapidly to allow a useful interpretation and interpolation of results. The abstract theory---as described in this book---tells us in no uncertain terms that the machines' potential range is enormous, and that its theoretical limitations are of the subtlest and most elusive sort. There is no reason to suppose machines have any limitations not shared by man.

2,219 citations

Journal ArticleDOI
25 Jun 2000
TL;DR: It is shown that QIM is "provably good" against arbitrary bounded and fully informed attacks and achieves provably better rate-distortion-robustness tradeoffs than the currently popular spread-spectrum and low-bit(s) modulation methods.
Abstract: We consider the problem of embedding one signal (e.g., a digital watermark), within another "host" signal to form a third, "composite" signal. The embedding is designed to achieve efficient tradeoffs among the three conflicting goals of maximizing the information-embedding rate, minimizing the distortion between the host signal and composite signal, and maximizing the robustness of the embedding. We introduce new classes of embedding methods, termed quantization index modulation (QIM) and distortion-compensated QIM (DC-QIM), and develop convenient realizations in the form of what we refer to as dither modulation. Using deterministic models to evaluate digital watermarking methods, we show that QIM is "provably good" against arbitrary bounded and fully informed attacks, which arise in several copyright applications, and in particular it achieves provably better rate distortion-robustness tradeoffs than currently popular spread-spectrum and low-bit(s) modulation methods. Furthermore, we show that for some important classes of probabilistic models, DC-QIM is optimal (capacity-achieving) and regular QIM is near-optimal. These include both additive white Gaussian noise (AWGN) channels, which may be good models for hybrid transmission applications such as digital audio broadcasting, and mean-square-error-constrained attack channels that model private-key watermarking applications.

2,218 citations
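The dither-modulation realization described in the abstract can be sketched with a pair of scalar quantizers offset by half a step: embedding a bit selects which quantizer rounds the host sample, and the decoder picks whichever lattice the received sample is closer to. The following is our minimal illustrative sketch, not the paper's implementation; the function names and the scalar, single-bit setting are our simplifications:

```python
def qim_embed(x, bit, delta):
    """Embed one bit in sample x by quantizing to the lattice for that bit.

    The two lattices are offset from each other by delta/2, so the decoder
    can tell which quantizer produced the composite sample.
    """
    offset = delta / 4 if bit else -delta / 4
    return delta * round((x - offset) / delta) + offset

def qim_decode(y, delta):
    # Re-quantize with both lattices and pick the closer one.
    d0 = abs(y - qim_embed(y, 0, delta))
    d1 = abs(y - qim_embed(y, 1, delta))
    return 1 if d1 < d0 else 0
```

In this scalar sketch the embedding distortion is at most delta/2 per sample, and decoding survives any perturbation smaller than delta/4, a toy version of the rate-distortion-robustness tradeoff the paper analyzes in full generality.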

Journal ArticleDOI
TL;DR: Comparative genomic approaches were developed to systematically identify both miRNAs and their targets that are conserved in Arabidopsis thaliana and rice; the expression of miR395, the sulfurylase-targeting miRNA, increases upon sulfate starvation, showing that miRNAs can be induced by environmental stress.

2,217 citations

Journal ArticleDOI
03 Jul 2013-Cell
TL;DR: This Review outlines the emerging understanding of lincRNAs in vertebrate animals, with emphases on how they are being identified and current conclusions and questions regarding their genomics, evolution and mechanisms of action.

2,213 citations

Posted Content
TL;DR: The main contribution of the paper is the introduction of new algorithms, namely, PRM* and RRT*, which are provably asymptotically optimal, i.e., such that the cost of the returned solution converges almost surely to the optimum.
Abstract: During the last decade, sampling-based path planning algorithms, such as Probabilistic RoadMaps (PRM) and Rapidly-exploring Random Trees (RRT), have been shown to work well in practice and possess theoretical guarantees such as probabilistic completeness. However, little effort has been devoted to the formal analysis of the quality of the solution returned by such algorithms, e.g., as a function of the number of samples. The purpose of this paper is to fill this gap, by rigorously analyzing the asymptotic behavior of the cost of the solution returned by stochastic sampling-based algorithms as the number of samples increases. A number of negative results are provided, characterizing existing algorithms, e.g., showing that, under mild technical conditions, the cost of the solution returned by broadly used sampling-based algorithms converges almost surely to a non-optimal value. The main contribution of the paper is the introduction of new algorithms, namely, PRM* and RRT*, which are provably asymptotically optimal, i.e., such that the cost of the returned solution converges almost surely to the optimum. Moreover, it is shown that the computational complexity of the new algorithms is within a constant factor of that of their probabilistically complete (but not asymptotically optimal) counterparts. The analysis in this paper hinges on novel connections between stochastic sampling-based path planning algorithms and the theory of random geometric graphs.

2,210 citations
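The RRT* idea summarized above lends itself to a compact sketch: grow a tree of random samples, connect each new sample to the cheapest nearby parent, and rewire neighbors through the new node whenever that lowers their cost. The following is our simplified illustration in an obstacle-free 2D square, not the paper's implementation; in particular, the paper shrinks the connection radius as the number of samples grows, while this sketch fixes it:

```python
import math
import random

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def rrt_star(start, goal, n_iters=400, step=0.5, radius=1.0,
             bounds=(0.0, 10.0), seed=1):
    """Minimal RRT* sketch; returns (nodes, parent, cost)."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}
    cost = {0: 0.0}
    for _ in range(n_iters):
        # Sample a random point, occasionally biasing toward the goal.
        q = goal if rng.random() < 0.05 else (rng.uniform(*bounds),
                                              rng.uniform(*bounds))
        nearest = min(range(len(nodes)), key=lambda i: dist(nodes[i], q))
        nq = nodes[nearest]
        d = dist(nq, q)
        if d < 1e-12:
            continue
        if d > step:  # steer: advance at most `step` toward the sample
            q = (nq[0] + (q[0] - nq[0]) * step / d,
                 nq[1] + (q[1] - nq[1]) * step / d)
        near = [i for i in range(len(nodes)) if dist(nodes[i], q) <= radius]
        # Choose the cheapest parent among nearby nodes, not just the nearest.
        best = min(near, key=lambda i: cost[i] + dist(nodes[i], q))
        new = len(nodes)
        nodes.append(q)
        parent[new] = best
        cost[new] = cost[best] + dist(nodes[best], q)
        # Rewire: re-parent nearby nodes through `new` when that is cheaper.
        for i in near:
            c = cost[new] + dist(q, nodes[i])
            if c < cost[i] - 1e-12:
                parent[i] = new
                cost[i] = c
                # Propagate the cheaper cost to i's descendants.
                stack = [i]
                while stack:
                    u = stack.pop()
                    for v, p in parent.items():
                        if p == u:
                            cost[v] = cost[u] + dist(nodes[u], nodes[v])
                            stack.append(v)
    return nodes, parent, cost
```

Dropping the choose-parent and rewire steps recovers plain RRT, which is probabilistically complete but, as the paper shows, converges almost surely to a non-optimal cost.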


Authors

Showing all 117,442 results

Name                    H-index  Papers  Citations
Eric S. Lander          301      826     525,976
Robert Langer           281      2,324   326,306
George M. Whitesides    240      1,739   269,833
Trevor W. Robbins       231      1,137   164,437
George Davey Smith      224      2,540   248,373
Yi Cui                  220      1,015   199,725
Robert J. Lefkowitz     214      860     147,995
David J. Hunter         213      1,836   207,050
Daniel Levy             212      933     194,778
Rudolf Jaenisch         206      606     178,436
Mark J. Daly            204      763     304,452
David Miller            203      2,573   204,840
David Baltimore         203      876     162,955
Rakesh K. Jain          200      1,467   177,727
Ronald M. Evans         199      708     166,722
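For reference, the H-index in the table above is the largest h such that the author has h papers with at least h citations each. A quick illustrative sketch of the computation (our own helper, not part of this page's data):

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank  # the rank-th most-cited paper still has >= rank citations
        else:
            break
    return h
```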
Network Information
Related Institutions (5)
University of California, Berkeley
265.6K papers, 16.8M citations

96% related

Stanford University
320.3K papers, 21.8M citations

95% related

University of Illinois at Urbana–Champaign
225.1K papers, 10.1M citations

95% related

University of California, San Diego
204.5K papers, 12.3M citations

95% related

Columbia University
224K papers, 12.8M citations

94% related

Performance Metrics
No. of papers from the institution in previous years

Year  Papers
2023  240
2022  1,124
2021  10,595
2020  11,922
2019  11,207
2018  10,883