Institution

California Institute of Technology

Education - Pasadena, California, United States
About: California Institute of Technology is an education organization based in Pasadena, California, United States. It is known for its research contributions in the topics: Galaxy & Population. The organization has 57,649 authors who have published 146,691 publications receiving 8,620,287 citations. The organization is also known as Caltech and Cal Tech.


Papers
Journal Article (DOI)
16 Mar 2006 - Nature
TL;DR: This work describes a simple method for folding long, single-stranded DNA molecules into arbitrary two-dimensional shapes, which can be programmed to bear complex patterns such as words and images on their surfaces.
Abstract: 'Bottom-up fabrication', which exploits the intrinsic properties of atoms and molecules to direct their self-organization, is widely used to make relatively simple nanostructures. A key goal for this approach is to create nanostructures of high complexity, matching that routinely achieved by 'top-down' methods. The self-assembly of DNA molecules provides an attractive route towards this goal. Here I describe a simple method for folding long, single-stranded DNA molecules into arbitrary two-dimensional shapes. The design for a desired shape is made by raster-filling the shape with a 7-kilobase single-stranded scaffold and by choosing over 200 short oligonucleotide 'staple strands' to hold the scaffold in place. Once synthesized and mixed, the staple and scaffold strands self-assemble in a single step. The resulting DNA structures are roughly 100 nm in diameter and approximate desired shapes such as squares, disks and five-pointed stars with a spatial resolution of 6 nm. Because each oligonucleotide can serve as a 6-nm pixel, the structures can be programmed to bear complex patterns such as words and images on their surfaces. Finally, individual DNA structures can be programmed to form larger assemblies, including extended periodic lattices and a hexamer of triangles (which constitutes a 30-megadalton molecular complex).

6,141 citations
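As a back-of-envelope check (ours, not the paper's), the numbers quoted in the abstract are mutually consistent: roughly 200 staples of about 35 nucleotides each account for a 7-kilobase scaffold, and a ~100 nm shape tiled at 6-nm resolution contains a few hundred pixels, about one per staple.

```python
# Rough consistency check using only the figures quoted in the abstract above;
# the even split of the scaffold among staples is an assumption, not a result.
scaffold_nt = 7_000            # ~7-kilobase single-stranded scaffold
n_staples = 200                # "over 200 short oligonucleotide 'staple strands'"
pixel_nm = 6                   # each staple acts as a ~6-nm pixel
shape_nm = 100                 # structures are roughly 100 nm in diameter

staple_nt = scaffold_nt / n_staples            # ~35 nt per staple (assumed)
pixels_in_square = (shape_nm / pixel_nm) ** 2  # ~280 pixels in a filled square

print(f"approximate staple length: {staple_nt:.0f} nt")
print(f"approximate pixels in a {shape_nm} nm square: {pixels_in_square:.0f}")
```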

Posted Content
TL;DR: In this paper, it was shown that under suitable conditions on the coding matrix, the input vector can be recovered exactly by solving a simple convex optimization problem (which one can recast as a linear program).
Abstract: This paper considers the classical error correcting problem which is frequently discussed in coding theory. We wish to recover an input vector $f \in \R^n$ from corrupted measurements $y = A f + e$. Here, $A$ is an $m$ by $n$ (coding) matrix and $e$ is an arbitrary and unknown vector of errors. Is it possible to recover $f$ exactly from the data $y$? We prove that under suitable conditions on the coding matrix $A$, the input $f$ is the unique solution to the $\ell_1$-minimization problem ($\|x\|_{\ell_1} := \sum_i |x_i|$) $$ \min_{g \in \R^n} \| y - Ag \|_{\ell_1} $$ provided that the support of the vector of errors is not too large, $\|e\|_{\ell_0} := |\{i : e_i \neq 0\}| \le \rho \cdot m$ for some $\rho > 0$. In short, $f$ can be recovered exactly by solving a simple convex optimization problem (which one can recast as a linear program). In addition, numerical experiments suggest that this recovery procedure works unreasonably well; $f$ is recovered exactly even in situations where a significant fraction of the output is corrupted.

6,136 citations
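The decoding step described in the abstract can be sketched numerically: the $\ell_1$-minimization is recast as a linear program by introducing slack variables $t$ with $-t \le y - Ag \le t$. The snippet below is an illustrative toy using a random Gaussian coding matrix and scipy's LP solver, not the authors' code; the matrix sizes and corruption level are arbitrary choices.

```python
# Toy l1-decoding: recover f from y = A f + e, where e is sparse but arbitrary.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n = 128, 64                                   # m-by-n coding matrix, m > n
A = rng.standard_normal((m, n)) / np.sqrt(m)
f = rng.standard_normal(n)                       # input vector to encode
e = np.zeros(m)
bad = rng.choice(m, size=m // 10, replace=False)
e[bad] = 10.0 * rng.standard_normal(bad.size)    # sparse, gross corruption
y = A @ f + e

# LP over variables (g, t): minimize sum(t) subject to -t <= y - A g <= t.
c = np.concatenate([np.zeros(n), np.ones(m)])
A_ub = np.block([[ A, -np.eye(m)],
                 [-A, -np.eye(m)]])
b_ub = np.concatenate([y, -y])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(None, None))
f_hat = res.x[:n]
print("max recovery error:", np.abs(f_hat - f).max())   # ~0 when decoding succeeds
```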

Posted Content
TL;DR: In this article, it was shown that if the objects of interest are sparse or compressible in the sense that the reordered entries of a signal $f \in {\cal F}$ decay like a power-law, then it is possible to reconstruct $f$ to within very high accuracy from a small number of random measurements.
Abstract: Suppose we are given a vector $f$ in $\R^N$. How many linear measurements do we need to make about $f$ to be able to recover $f$ to within precision $\epsilon$ in the Euclidean ($\ell_2$) metric? Or more exactly, suppose we are interested in a class ${\cal F}$ of such objects--discrete digital signals, images, etc; how many linear measurements do we need to recover objects from this class to within accuracy $\epsilon$? This paper shows that if the objects of interest are sparse or compressible in the sense that the reordered entries of a signal $f \in {\cal F}$ decay like a power-law (or if the coefficient sequence of $f$ in a fixed basis decays like a power-law), then it is possible to reconstruct $f$ to within very high accuracy from a small number of random measurements.

5,693 citations
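A minimal sketch of the kind of recovery the abstract describes, in the simplest setting of an exactly sparse signal: measure with a random Gaussian matrix and reconstruct by $\ell_1$ minimization (basis pursuit). This is an illustrative toy, not the paper's experiments; the dimensions and sparsity level are arbitrary.

```python
# Toy basis pursuit: recover a sparse signal from a small number of random
# linear measurements by minimizing the l1 norm subject to Phi x = y.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
N, k, m = 256, 8, 80                             # signal length, sparsity, measurements
f = np.zeros(N)
f[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, N)) / np.sqrt(m)   # random measurement matrix
y = Phi @ f                                      # m << N linear measurements

# Split x = x_pos - x_neg with x_pos, x_neg >= 0, so ||x||_1 = sum(x_pos + x_neg).
c = np.ones(2 * N)
A_eq = np.hstack([Phi, -Phi])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
f_hat = res.x[:N] - res.x[N:]
print("max reconstruction error:", np.abs(f_hat - f).max())
```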

Journal Article (DOI)
TL;DR: The generalized least squares approach of Parks produces standard errors that lead to extreme overconfidence, often underestimating variability by 50% or more, and a new method is offered that is easier to implement and produces accurate standard errors.
Abstract: We examine some issues in the estimation of time-series cross-section models, calling into question the conclusions of many published studies, particularly in the field of comparative political economy. We show that the generalized least squares approach of Parks produces standard errors that lead to extreme overconfidence, often underestimating variability by 50% or more. We also provide an alternative estimator of the standard errors that is correct when the error structures show complications found in this type of model. Monte Carlo analysis shows that these “panel-corrected standard errors” perform well. The utility of our approach is demonstrated via a reanalysis of one “social democratic corporatist” model.

5,670 citations
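A compact numpy sketch of the panel-corrected standard errors idea, under the usual textbook setup of a balanced panel with N units each observed over T periods, stacked unit by unit with the time index varying fastest. This follows the published formula in spirit but is not the authors' code; the function name and data layout are our own choices.

```python
# Panel-corrected standard errors (sketch): pooled OLS point estimates combined
# with a sandwich covariance that allows contemporaneous correlation across units.
import numpy as np

def pcse(X, y, N, T):
    """Return pooled-OLS coefficients and panel-corrected standard errors."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]       # pooled OLS fit
    E = (y - X @ beta).reshape(N, T)                  # residuals, one row per unit
    Sigma = E @ E.T / T                               # N x N contemporaneous covariance
    Omega = np.kron(Sigma, np.eye(T))                 # NT x NT error covariance
    XtX_inv = np.linalg.inv(X.T @ X)
    cov = XtX_inv @ X.T @ Omega @ X @ XtX_inv         # sandwich estimator
    return beta, np.sqrt(np.diag(cov))

# Usage sketch: X is an (N*T, p) design matrix, y an (N*T,) outcome vector.
# beta, se = pcse(X, y, N=20, T=30)
```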

Journal Article (DOI)
TL;DR: A series of improvements to the spectroscopic reductions is described, including better flat fielding and improved wavelength calibration at the blue end, better processing of objects with extremely strong narrow emission lines, and an improved determination of stellar metallicities.
Abstract: This paper describes the Seventh Data Release of the Sloan Digital Sky Survey (SDSS), marking the completion of the original goals of the SDSS and the end of the phase known as SDSS-II. It includes 11,663 deg^2 of imaging data, with most of the ~2000 deg^2 increment over the previous data release lying in regions of low Galactic latitude. The catalog contains five-band photometry for 357 million distinct objects. The survey also includes repeat photometry on a 120° long, 2°.5 wide stripe along the celestial equator in the Southern Galactic Cap, with some regions covered by as many as 90 individual imaging runs. We include a co-addition of the best of these data, going roughly 2 mag fainter than the main survey over 250 deg^2. The survey has completed spectroscopy over 9380 deg^2; the spectroscopy is now complete over a large contiguous area of the Northern Galactic Cap, closing the gap that was present in previous data releases. There are over 1.6 million spectra in total, including 930,000 galaxies, 120,000 quasars, and 460,000 stars. The data release includes improved stellar photometry at low Galactic latitude. The astrometry has all been recalibrated with the second version of the USNO CCD Astrograph Catalog, reducing the rms statistical errors at the bright end to 45 milliarcseconds per coordinate. We further quantify a systematic error in bright galaxy photometry due to poor sky determination; this problem is less severe than previously reported for the majority of galaxies. Finally, we describe a series of improvements to the spectroscopic reductions, including better flat fielding and improved wavelength calibration at the blue end, better processing of objects with extremely strong narrow emission lines, and an improved determination of stellar metallicities.

5,665 citations
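As a rough, back-of-envelope check (ours, not the paper's) on the quoted co-add depth: if stacking N independent exposures averages the noise down by sqrt(N), the depth gain is about 2.5 log10(sqrt(N)) magnitudes, so a few tens of overlapping runs already reaches roughly 2 mag fainter than a single pass.

```python
# Rough depth gain from co-adding N exposures, assuming sqrt(N) noise averaging.
import math

for n_runs in (10, 40, 90):            # some stripe regions have up to ~90 runs
    gain_mag = 2.5 * math.log10(math.sqrt(n_runs))
    print(f"{n_runs:3d} runs -> ~{gain_mag:.1f} mag deeper")
```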


Authors

Showing all 58,155 results

Name | H-index | Papers | Citations
Eric S. Lander | 301 | 826 | 525,976
Donald P. Schneider | 242 | 1,622 | 263,641
George M. Whitesides | 240 | 1,739 | 269,833
Yi Chen | 217 | 4,342 | 293,080
David Baltimore | 203 | 876 | 162,955
Edward Witten | 202 | 602 | 204,199
George Efstathiou | 187 | 637 | 156,228
Michael A. Strauss | 185 | 1,688 | 208,506
Jing Wang | 184 | 4,046 | 202,769
Ruedi Aebersold | 182 | 879 | 141,881
Douglas Scott | 178 | 1,111 | 185,229
Hyun-Chul Kim | 176 | 4,076 | 183,227
Phillip A. Sharp | 172 | 614 | 117,126
Timothy M. Heckman | 170 | 754 | 141,237
Zhenan Bao | 169 | 865 | 106,571
Network Information

Related Institutions (5)

Massachusetts Institute of Technology: 268K papers, 18.2M citations (95% related)
Princeton University: 146.7K papers, 9.1M citations (94% related)
Max Planck Society: 406.2K papers, 19.5M citations (93% related)
University of California, Berkeley: 265.6K papers, 16.8M citations (93% related)
Centre national de la recherche scientifique: 382.4K papers, 13.6M citations (93% related)

Performance
Metrics
No. of papers from the Institution in previous years
Year | Papers
2023 | 176
2022 | 737
2021 | 4,682
2020 | 5,519
2019 | 5,321
2018 | 5,133