
Showing papers by "California Institute of Technology" published in 2006


Journal ArticleDOI
TL;DR: In this paper, the authors considered the model problem of reconstructing an object from incomplete frequency samples and showed that with probability at least 1 − O(N^{−M}), f can be reconstructed exactly as the solution to the ℓ1 minimization problem.
Abstract: This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal f ∈ C^N and a randomly chosen set of frequencies Ω. Is it possible to reconstruct f from the partial knowledge of its Fourier coefficients on the set Ω? A typical result of this paper is as follows. Suppose that f is a superposition of |T| spikes, f(t) = Σ_{τ∈T} f(τ)δ(t − τ), obeying |T| ≤ C_M · (log N)^{−1} · |Ω| for some constant C_M > 0. We do not know the locations of the spikes nor their amplitudes. Then with probability at least 1 − O(N^{−M}), f can be reconstructed exactly as the solution to the ℓ1 minimization problem. In short, exact recovery may be obtained by solving a convex optimization problem. We give numerical values for C_M which depend on the desired probability of success. Our result may be interpreted as a novel kind of nonlinear sampling theorem. In effect, it says that any signal made out of |T| spikes may be recovered by convex programming from almost every set of frequencies of size O(|T| · log N). Moreover, this is nearly optimal in the sense that any method succeeding with probability 1 − O(N^{−M}) would in general require a number of frequency samples at least proportional to |T| · log N. The methodology extends to a variety of other situations and higher dimensions. For example, we show how one can reconstruct a piecewise constant (one- or two-dimensional) object from incomplete frequency samples - provided that the number of jumps (discontinuities) obeys the condition above - by minimizing other convex functionals such as the total variation of f.
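As a concrete illustration of the recovery-by-convex-programming result, the sketch below solves the basis-pursuit linear program min ‖x‖₁ subject to Ax = y for a 3-spike signal. It is a simplified stand-in for the paper's setting: a real partial DCT replaces the complex partial Fourier transform, and SciPy's generic LP solver replaces a dedicated ℓ1 solver; all sizes are illustrative.

```python
# Sketch of l1-minimization recovery in the spirit of the paper's result.
# Assumptions: a real partial-DCT measurement matrix stands in for complex
# Fourier samples, and a generic LP solver stands in for a dedicated method.
import numpy as np
from scipy.fft import dct
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, m = 64, 40                       # signal length, number of samples

# Sparse "spike train": 3 spikes at unknown locations/amplitudes
x_true = np.zeros(N)
x_true[[7, 21, 50]] = [1.5, -2.0, 0.8]

# Random rows of an orthonormal DCT matrix play the role of random frequencies
F = dct(np.eye(N), axis=0, norm="ortho")
rows = rng.choice(N, size=m, replace=False)
A = F[rows]
y = A @ x_true

# Basis pursuit: min ||x||_1 s.t. Ax = y, as an LP with x = u - v, u, v >= 0
c = np.ones(2 * N)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * N))
x_hat = res.x[:N] - res.x[N:]

print(np.max(np.abs(x_hat - x_true)))   # exact recovery up to solver tolerance
```

With |T| = 3 spikes and 40 of 64 transform coefficients, well inside the regime the theorem describes, the LP returns the spikes exactly up to solver tolerance.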

14,587 citations


Journal ArticleDOI
TL;DR: The Two Micron All Sky Survey (2MASS) as mentioned in this paper collected 25.4 Tbytes of raw imaging data from two dedicated 1.3 m diameter telescopes located at Mount Hopkins, Arizona and Cerro Tololo, Chile.
Abstract: Between 1997 June and 2001 February the Two Micron All Sky Survey (2MASS) collected 25.4 Tbytes of raw imaging data covering 99.998% of the celestial sphere in the near-infrared J (1.25 μm), H (1.65 μm), and Ks (2.16 μm) bandpasses. Observations were conducted from two dedicated 1.3 m diameter telescopes located at Mount Hopkins, Arizona, and Cerro Tololo, Chile. The 7.8 s of integration time accumulated for each point on the sky and strict quality control yielded a 10σ point-source detection level of better than 15.8, 15.1, and 14.3 mag at the J, H, and Ks bands, respectively, for virtually the entire sky. Bright source extractions have 1σ photometric uncertainty of <0.03 mag and astrometric accuracy of order 100 mas. Calibration offsets between any two points in the sky are <0.02 mag. The 2MASS All-Sky Data Release includes 4.1 million compressed FITS images covering the entire sky, 471 million source extractions in a Point Source Catalog, and 1.6 million objects identified as extended in an Extended Source Catalog.

12,126 citations


Journal ArticleDOI
TL;DR: Solar energy is by far the largest exploitable resource, delivering more energy to the earth in 1 hour than all of the energy consumed by humans in an entire year; if solar energy is to be a major primary energy source, it must be stored and dispatched on demand to the end user.
Abstract: Global energy consumption is projected to increase, even in the face of substantial declines in energy intensity, at least 2-fold by midcentury relative to the present because of population and economic growth. This demand could be met, in principle, from fossil energy resources, particularly coal. However, the cumulative nature of CO2 emissions in the atmosphere demands that holding atmospheric CO2 levels to even twice their preanthropogenic values by midcentury will require invention, development, and deployment of schemes for carbon-neutral energy production on a scale commensurate with, or larger than, the entire present-day energy supply from all sources combined. Among renewable energy resources, solar energy is by far the largest exploitable resource, providing more energy in 1 hour to the earth than all of the energy consumed by humans in an entire year. In view of the intermittency of insolation, if solar energy is to be a major primary energy source, it must be stored and dispatched on demand to the end user. An especially attractive approach is to store solar-converted energy in the form of chemical bonds, i.e., in a photosynthetic process at a year-round average efficiency significantly higher than current plants or algae, to reduce land-area requirements. Scientific challenges involved with this process include schemes to capture and convert solar energy and then store the energy in the form of chemical bonds, producing oxygen from water and a reduced fuel such as hydrogen, methane, methanol, or other hydrocarbon species.
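The one-hour-versus-one-year comparison can be checked with a back-of-envelope calculation. The solar constant and Earth radius are standard values; the annual human energy consumption figure (~5 × 10^20 J) is an assumed round number for the mid-2000s.

```python
# Back-of-envelope check of the "one hour of sunlight vs. one year of
# consumption" claim. The consumption figure is an assumed round number
# (~5e20 J/yr, roughly the mid-2000s global primary energy supply).
import math

S = 1361.0          # solar constant, W/m^2 (top of atmosphere)
R = 6.371e6         # Earth radius, m
P = S * math.pi * R**2          # power intercepted by Earth's cross-section, W

E_hour = P * 3600.0             # energy delivered in one hour, J
E_year_human = 5e20             # assumed annual human energy consumption, J

print(f"{E_hour:.2e} J in one hour vs {E_year_human:.2e} J per year "
      f"(ratio {E_hour / E_year_human:.2f})")
```

Even before accounting for atmospheric losses, the intercepted energy in one hour exceeds the assumed annual consumption.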

7,076 citations


Journal ArticleDOI
TL;DR: In this paper, the authors considered the problem of recovering a vector x ∈ R^m from incomplete and contaminated observations y = Ax + e, where e is an error term.
Abstract: Suppose we wish to recover a vector x_0 ∈ R^m (e.g., a digital signal or image) from incomplete and contaminated observations y = Ax_0 + e; A is an n by m matrix with far fewer rows than columns (n ≪ m) and e is an error term. Is it possible to recover x_0 accurately based on the data y? To recover x_0, we consider the solution x^# to the ℓ1-regularization problem min ‖x‖_ℓ1 subject to ‖Ax − y‖_ℓ2 ≤ ε, where ε is the size of the error term e. We show that if A obeys a uniform uncertainty principle (with unit-normed columns) and if the vector x_0 is sufficiently sparse, then the solution is within the noise level: ‖x^# − x_0‖_ℓ2 ≤ C·ε. As a first example, suppose that A is a Gaussian random matrix; then stable recovery occurs for almost all such A's provided that the number of nonzeros of x_0 is of about the same order as the number of observations. As a second instance, suppose one observes few Fourier samples of x_0; then stable recovery occurs for almost any set of n coefficients provided that the number of nonzeros is of the order of n/[log m]^6. In the case where the error term vanishes, the recovery is of course exact, and this work actually provides novel insights into the exact recovery phenomenon discussed in earlier papers. The methodology also explains why one can also very nearly recover approximately sparse signals.
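A minimal numerical sketch of stable recovery from noisy data: the paper analyzes the constrained program min ‖x‖₁ subject to ‖Ax − y‖₂ ≤ ε, while the code below solves the closely related penalized (LASSO) form with iterative soft-thresholding; matrix size, sparsity, noise level, and penalty weight are all illustrative choices.

```python
# Sketch of stable sparse recovery from noisy measurements y = A x0 + e.
# The paper analyzes the constrained form min ||x||_1 s.t. ||Ax - y||_2 <= eps;
# this sketch solves the closely related penalized (LASSO) form with
# iterative soft-thresholding (ISTA). Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 80, 200, 5            # rows, columns, sparsity

A = rng.standard_normal((n, m))
A /= np.linalg.norm(A, axis=0)  # unit-normed columns, as in the paper
x0 = np.zeros(m)
x0[rng.choice(m, k, replace=False)] = rng.standard_normal(k) + 2.0
e = 0.01 * rng.standard_normal(n)
y = A @ x0 + e

lam = 0.05                              # penalty weight (illustrative)
t = 1.0 / np.linalg.norm(A, 2) ** 2     # step size = 1 / ||A||^2
x = np.zeros(m)
for _ in range(2000):
    g = x - t * A.T @ (A @ x - y)                          # gradient step
    x = np.sign(g) * np.maximum(np.abs(g) - t * lam, 0.0)  # soft threshold

print(np.linalg.norm(x - x0))   # error on the order of the noise level
```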

6,727 citations


Journal ArticleDOI
TL;DR: If the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct f to within very high accuracy from a small number of random measurements by solving a simple linear program.
Abstract: Suppose we are given a vector f in a class F ⊆ R^N, e.g., a class of digital signals or digital images. How many linear measurements do we need to make about f to be able to recover f to within precision ε in the Euclidean (ℓ2) metric? This paper shows that if the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct f to within very high accuracy from a small number of random measurements by solving a simple linear program. More precisely, suppose that the nth largest entry of the vector |f| (or of its coefficients in a fixed basis) obeys |f|_(n) ≤ R · n^{−1/p}, where R > 0 and p > 0. Suppose that we take measurements y_k = ⟨f, X_k⟩, k = 1, …, K, where the X_k are N-dimensional Gaussian vectors with independent standard normal entries. Then for each f obeying the decay estimate above for some 0 < p < 1 …

6,342 citations


Journal ArticleDOI
16 Mar 2006-Nature
TL;DR: This work describes a simple method for folding long, single-stranded DNA molecules into arbitrary two-dimensional shapes, which can be programmed to bear complex patterns such as words and images on their surfaces.
Abstract: 'Bottom-up fabrication', which exploits the intrinsic properties of atoms and molecules to direct their self-organization, is widely used to make relatively simple nanostructures. A key goal for this approach is to create nanostructures of high complexity, matching that routinely achieved by 'top-down' methods. The self-assembly of DNA molecules provides an attractive route towards this goal. Here I describe a simple method for folding long, single-stranded DNA molecules into arbitrary two-dimensional shapes. The design for a desired shape is made by raster-filling the shape with a 7-kilobase single-stranded scaffold and by choosing over 200 short oligonucleotide 'staple strands' to hold the scaffold in place. Once synthesized and mixed, the staple and scaffold strands self-assemble in a single step. The resulting DNA structures are roughly 100 nm in diameter and approximate desired shapes such as squares, disks and five-pointed stars with a spatial resolution of 6 nm. Because each oligonucleotide can serve as a 6-nm pixel, the structures can be programmed to bear complex patterns such as words and images on their surfaces. Finally, individual DNA structures can be programmed to form larger assemblies, including extended periodic lattices and a hexamer of triangles (which constitutes a 30-megadalton molecular complex).
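The staple-strand idea can be caricatured in a few lines: each staple is the reverse complement of two scaffold segments, so hybridization pins those segments together. Everything below (sequences, segment coordinates, staple length) is hypothetical, and real origami design must additionally respect helical twist and crossover geometry.

```python
# Cartoon of the staple-strand idea: each staple is the reverse complement of
# two scaffold segments, so it pins those segments next to each other when it
# binds. Sequences and segment coordinates here are hypothetical; real origami
# design must also respect helical geometry and crossover spacing.
import random

COMP = str.maketrans("ACGT", "TGCA")

def revcomp(seq: str) -> str:
    """Reverse complement of a DNA sequence."""
    return seq.translate(COMP)[::-1]

def staple(scaffold: str, a: int, b: int, half: int = 8) -> str:
    """A staple whose halves bind scaffold[a:a+half] and scaffold[b:b+half]."""
    return revcomp(scaffold[b:b + half]) + revcomp(scaffold[a:a + half])

random.seed(0)
scaffold = "".join(random.choice("ACGT") for _ in range(64))

# One staple folding position 0 against position 32 (a single "crossover")
s = staple(scaffold, 0, 32)
print(s)
# Each half of the staple is complementary to its scaffold segment:
assert revcomp(s[8:]) == scaffold[0:8]
assert revcomp(s[:8]) == scaffold[32:40]
```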

6,141 citations


Journal ArticleDOI
TL;DR: In this article, a spin-1/2 system on a honeycomb lattice is studied, where the interactions between nearest neighbors are of XX, YY or ZZ type, depending on the direction of the link; different types of interactions may differ in strength.

4,032 citations


Journal ArticleDOI
TL;DR: A role is proposed for miR-146 in control of Toll-like receptor and cytokine signaling through a negative feedback regulation loop involving down-regulation of IL-1 receptor-associated kinase 1 and TNF receptor- associated factor 6 protein levels.
Abstract: Activation of mammalian innate and acquired immune responses must be tightly regulated by elaborate mechanisms to control their onset and termination. MicroRNAs have been implicated as negative regulators controlling diverse biological processes at the level of posttranscriptional repression. Expression profiling of 200 microRNAs in human monocytes revealed that several of them (miR-146a/b, miR-132, and miR-155) are endotoxin-responsive genes. Analysis of miR-146a and miR-146b gene expression unveiled a pattern of induction in response to a variety of microbial components and proinflammatory cytokines. By means of promoter analysis, miR-146a was found to be a NF-κB-dependent gene. Importantly, miR-146a/b were predicted to base-pair with sequences in the 3′ UTRs of the TNF receptor-associated factor 6 and IL-1 receptor-associated kinase 1 genes, and we found that these UTRs inhibit expression of a linked reporter gene. These genes encode two key adapter molecules downstream of Toll-like and cytokine receptors. Thus, we propose a role for miR-146 in control of Toll-like receptor and cytokine signaling through a negative feedback regulation loop involving down-regulation of IL-1 receptor-associated kinase 1 and TNF receptor-associated factor 6 protein levels.

3,947 citations


Proceedings Article
04 Dec 2006
TL;DR: A new bottom-up visual saliency model, Graph-Based Visual Saliency (GBVS), is proposed, which powerfully predicts human fixations on 749 variations of 108 natural images, achieving 98% of the ROC area of a human-based control, whereas the classical algorithms of Itti & Koch achieve only 84%.
Abstract: A new bottom-up visual saliency model, Graph-Based Visual Saliency (GBVS), is proposed. It consists of two steps: first forming activation maps on certain feature channels, and then normalizing them in a way which highlights conspicuity and admits combination with other maps. The model is simple, and biologically plausible insofar as it is naturally parallelized. This model powerfully predicts human fixations on 749 variations of 108 natural images, achieving 98% of the ROC area of a human-based control, whereas the classical algorithms of Itti & Koch ([2], [3], [4]) achieve only 84%.
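The graph-based activation step can be sketched as follows: treat feature-map locations as nodes of a fully connected graph, weight each edge by dissimilarity times a Gaussian falloff in distance, and take the stationary distribution of the resulting Markov chain as the activation map. The absolute-difference dissimilarity and the parameter values below are simplifications (the paper uses a log-ratio dissimilarity).

```python
# Minimal sketch of the graph-based activation step of GBVS: build a Markov
# chain over feature-map locations with edge weights that grow with
# dissimilarity and decay with distance, then take its stationary distribution
# as the activation map. The abs-difference dissimilarity and sigma are
# simplifications of the paper's choices.
import numpy as np

def gbvs_activation(fmap, sigma=3.0, iters=500):
    h, w = fmap.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pos = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    vals = fmap.ravel().astype(float)

    d2 = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(axis=-1)  # sq. distances
    dissim = np.abs(vals[:, None] - vals[None, :])                # simplified
    W = dissim * np.exp(-d2 / (2 * sigma ** 2)) + 1e-12           # edge weights
    P = W / W.sum(axis=0, keepdims=True)                          # column-stochastic

    v = np.full(h * w, 1.0 / (h * w))
    for _ in range(iters):
        v = 0.5 * v + 0.5 * (P @ v)   # lazy power iteration (avoids oscillation)
    return v.reshape(h, w)

fmap = np.full((8, 8), 0.1)
fmap[2, 5] = 1.0                    # one conspicuous location
act = gbvs_activation(fmap)
print(np.unravel_index(act.argmax(), act.shape))  # mass concentrates at (2, 5)
```

Because edges into dissimilar nodes carry most of the weight, the chain's stationary mass piles up at locations that differ from their surround, which is exactly the conspicuity-highlighting normalization the abstract describes.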

3,475 citations


Journal ArticleDOI
TL;DR: This work presents a distributed random linear network coding approach for transmission and compression of information in general multisource multicast networks, and shows that this approach can take advantage of redundant network capacity for improved success probability and robustness.
Abstract: We present a distributed random linear network coding approach for transmission and compression of information in general multisource multicast networks. Network nodes independently and randomly select linear mappings from inputs onto output links over some field. We show that this achieves capacity with probability exponentially approaching 1 with the code length. We also demonstrate that random linear coding performs compression when necessary in a network, generalizing error exponents for linear Slepian-Wolf coding in a natural way. Benefits of this approach are decentralized operation and robustness to network changes or link failures. We show that this approach can take advantage of redundant network capacity for improved success probability and robustness. We illustrate some potential advantages of random linear network coding over routing in two examples of practical scenarios: distributed network operation and networks with dynamically varying connections. Our derivation of these results also yields a new bound on required field size for centralized network coding on general multicast networks.
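The core mechanism, nodes forwarding random linear combinations and receivers decoding by Gaussian elimination once enough independent combinations arrive, can be sketched over a small prime field. The field size GF(257) and the packet contents are illustrative.

```python
# Toy random linear network coding over the prime field GF(257): nodes forward
# random linear combinations of the packets they hold, and a receiver that
# collects k independent combinations decodes by Gaussian elimination.
# Field size and packet contents are illustrative.
import random

P = 257  # prime field GF(257); inverses via Fermat: a^(P-2) mod P

def random_combine(packets, rng):
    """One coded packet: a random linear combination of the originals."""
    k, length = len(packets), len(packets[0])
    c = [rng.randrange(P) for _ in range(k)]
    data = [sum(c[i] * packets[i][j] for i in range(k)) % P
            for j in range(length)]
    return c, data

def decode(coeffs, coded):
    """Solve C X = Y over GF(P) by Gauss-Jordan elimination; None if singular."""
    k = len(coeffs)
    M = [coeffs[i][:] + coded[i][:] for i in range(k)]
    for col in range(k):
        piv = next((r for r in range(col, k) if M[r][col] != 0), None)
        if piv is None:
            return None
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], P - 2, P)
        M[col] = [v * inv % P for v in M[col]]
        for r in range(k):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [(a - f * b) % P for a, b in zip(M[r], M[col])]
    return [row[k:] for row in M]

rng = random.Random(42)
packets = [[10, 20, 30, 40], [5, 6, 7, 8], [99, 100, 101, 102]]
k = len(packets)

decoded = None
while decoded is None:          # redraw if the random combinations are dependent
    drawn = [random_combine(packets, rng) for _ in range(k)]
    coeffs = [c for c, _ in drawn]
    coded = [d for _, d in drawn]
    decoded = decode(coeffs, coded)

print(decoded == packets)       # True: k independent combinations suffice
```

Over a field this large, k random combinations are independent with high probability, which mirrors the paper's point that no coordination between nodes is needed.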

2,806 citations


Journal ArticleDOI
TL;DR: In this article, distance measurements to 71 high redshift type Ia supernovae discovered during the first year of the 5-year Supernova Legacy Survey (SNLS) were presented.
Abstract: We present distance measurements to 71 high redshift type Ia supernovae discovered during the first year of the 5-year Supernova Legacy Survey (SNLS). These events were detected and their multi-color light-curves measured using the MegaPrime/MegaCam instrument at the Canada-France-Hawaii Telescope (CFHT), by repeatedly imaging four one-square degree fields in four bands. Follow-up spectroscopy was performed at the VLT, Gemini and Keck telescopes to confirm the nature of the supernovae and to measure their redshift. With this data set, we have built a Hubble diagram extending to z = 1, with all distance measurements involving at least two bands. Systematic uncertainties are evaluated making use of the multiband photometry obtained at CFHT. Cosmological fits to this first year SNLS Hubble diagram give the following results: Ω_M = 0.263 ± 0.042 (stat) ± 0.032 (sys) for a flat ΛCDM model; and w = −1.023 ± 0.090 (stat) ± 0.054 (sys) for a flat cosmology with constant equation of state w when combined with the constraint from the recent Sloan Digital Sky Survey measurement of baryon acoustic oscillations.
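For reference, the distance-modulus computation underlying such a Hubble diagram can be sketched for the flat ΛCDM fit quoted above (Ω_M = 0.263). The Hubble constant H0 = 70 km/s/Mpc is an assumed value; the abstract does not quote one.

```python
# Sketch of the distance-modulus calculation behind a Hubble diagram, using
# the flat LambdaCDM best fit quoted in the abstract (Omega_M = 0.263).
# H0 = 70 km/s/Mpc is an assumed value; the abstract does not quote one.
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458     # speed of light, km/s
H0 = 70.0               # assumed Hubble constant, km/s/Mpc
OMEGA_M = 0.263         # flat LambdaCDM fit from the abstract
OMEGA_L = 1.0 - OMEGA_M

def E(z):
    """Dimensionless expansion rate for flat LambdaCDM."""
    return np.sqrt(OMEGA_M * (1 + z) ** 3 + OMEGA_L)

def distance_modulus(z):
    """mu = 5 log10(d_L / 10 pc) with d_L = (1+z) * comoving distance."""
    d_c, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)   # in units of c/H0
    d_l_mpc = (1 + z) * (C_KM_S / H0) * d_c         # luminosity distance, Mpc
    return 5.0 * np.log10(d_l_mpc * 1e6 / 10.0)     # 10 pc zero point

for z in (0.1, 0.5, 1.0):
    print(f"z = {z}: mu = {distance_modulus(z):.2f}")
```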

Journal ArticleDOI
TL;DR: The von Neumann entropy of ρ, a measure of the entanglement of the interior and exterior variables, has the form S(ρ) = αL − γ + …, where the ellipsis represents terms that vanish in the limit L → ∞.
Abstract: We formulate a universal characterization of the many-particle quantum entanglement in the ground state of a topologically ordered two-dimensional medium with a mass gap. We consider a disk in the plane, with a smooth boundary of length L, large compared to the correlation length. In the ground state, by tracing out all degrees of freedom in the exterior of the disk, we obtain a marginal density operator ρ for the degrees of freedom in the interior. The von Neumann entropy of ρ, a measure of the entanglement of the interior and exterior variables, has the form S(ρ) = αL − γ + …, where the ellipsis represents terms that vanish in the limit L → ∞. We show that −γ is a universal constant characterizing a global feature of the entanglement in the ground state. Using topological quantum field theory methods, we derive a formula for γ in terms of properties of the superselection sectors of the medium.
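The quantity at the center of this result is the von Neumann entropy of a marginal density operator, S(ρ) = −Tr ρ ln ρ. The toy below computes it for one qubit of a Bell pair; it has nothing topological about it and only illustrates how the entropy of a reduced state is evaluated.

```python
# The quantity studied here is the von Neumann entropy S(rho) = -Tr(rho ln rho).
# As a minimal illustration of tracing out the exterior and computing S, take a
# two-qubit Bell pair and trace out one qubit (a toy with nothing topological
# about it; it only shows how the entropy of a marginal is computed).
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -sum_i lambda_i ln lambda_i over nonzero eigenvalues."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log(lam)))

# |phi+> = (|00> + |11>) / sqrt(2)
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

# Partial trace over the second qubit: rho_A[i, j] = sum_k rho[ik, jk]
rho_A = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

print(von_neumann_entropy(rho_A))   # ln 2 for a maximally entangled qubit
```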

Journal ArticleDOI
15 Jun 2006-Nature
TL;DR: It is shown, in a gambling task, that human subjects' choices can be characterized by a computationally well-regarded strategy for addressing the explore/exploit dilemma, and a model of action selection under uncertainty that involves switching between exploratory and exploitative behavioural modes is suggested.
Abstract: Humans are remarkably curious, and that is useful in helping us to learn about new environments and possibilities. But curiosity killed the cat, they say, and it also carries with it substantial potential risks and costs for us. Statisticians, engineers and economists have long considered ways of balancing the costs and benefits of exploration. Tests involving a gambling task and an fMRI brain scanner now show that humans appear to obey similar principles when considering their options. The players had to balance the desire to select the richest option based on accumulated experience against the desire to choose a less familiar option that might have a larger payoff. The frontopolar cortex, a brain area known to be involved in cognitive control, was preferentially active during exploratory decisions. The results suggest a neurobiological account of human exploration and point to a new area for behavioural and neural investigations.
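A softmax action-selection rule, one of the computational strategies such studies compare against human choices, can be sketched on a toy multi-armed bandit. The payoffs, learning rate, and inverse temperature below are hypothetical.

```python
# Sketch of softmax action selection on a multi-armed bandit, the kind of
# explore/exploit rule compared against human choices in such studies.
# Payoffs, learning rate, and inverse temperature are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([1.0, 2.0, 0.5, 1.5])   # hidden slot-machine payoffs
Q = np.zeros(4)                               # learned value estimates
alpha, beta = 0.1, 2.0                        # learning rate, inverse temperature

choices = []
for t in range(2000):
    p = np.exp(beta * Q - (beta * Q).max())   # softmax (numerically stabilized)
    p /= p.sum()
    a = rng.choice(4, p=p)                    # mostly exploit, sometimes explore
    r = true_means[a] + 0.1 * rng.standard_normal()
    Q[a] += alpha * (r - Q[a])                # delta-rule value update
    choices.append(a)

print(Q.round(2))       # estimates approach the true means
```

Higher β makes the rule more exploitative; lower β spreads choice probability over less familiar options, which is the trade-off the gambling task probes.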

Journal ArticleDOI
30 Jun 2006-Cell
TL;DR: Recent work is discussed suggesting that the dynamics (fusion and fission) of mitochondria are important in development and disease.

Journal ArticleDOI
26 Jul 2006-Nature
TL;DR: Devices in which optics and fluidics are used synergistically to synthesize novel functionalities are described, categorized according to three broad categories of interactions: fluid–solid interfaces, purely fluidic interfaces and colloidal suspensions.
Abstract: We describe devices in which optics and fluidics are used synergistically to synthesize novel functionalities. Fluidic replacement or modification leads to reconfigurable optical systems, whereas the implementation of optics through the microfluidic toolkit gives highly compact and integrated devices. We categorize optofluidics according to three broad categories of interactions: fluid–solid interfaces, purely fluidic interfaces and colloidal suspensions. We describe examples of optofluidic devices in each category.

Journal ArticleDOI
28 Jun 2006-Nature
TL;DR: Removal of Drosophila PINK1 homologue function results in male sterility, apoptotic muscle degeneration, defects in mitochondrial morphology and increased sensitivity to multiple stresses including oxidative stress, which underscores the importance of mitochondrial dysfunction as a central mechanism of Parkinson's disease pathogenesis.
Abstract: Parkinson's disease is the second most common neurodegenerative disorder and is characterized by the degeneration of dopaminergic neurons in the substantia nigra. Mitochondrial dysfunction has been implicated as an important trigger for Parkinson's disease-like pathogenesis because exposure to environmental mitochondrial toxins leads to Parkinson's disease-like pathology. Recently, multiple genes mediating familial forms of Parkinson's disease have been identified, including PTEN-induced kinase 1 (PINK1 ; PARK6 ) and parkin (PARK2 ), which are also associated with sporadic forms of Parkinson's disease. PINK1 encodes a putative serine/threonine kinase with a mitochondrial targeting sequence. So far, no in vivo studies have been reported for pink1 in any model system. Here we show that removal of Drosophila PINK1 homologue (CG4523; hereafter called pink1) function results in male sterility, apoptotic muscle degeneration, defects in mitochondrial morphology and increased sensitivity to multiple stresses including oxidative stress. Pink1 localizes to mitochondria, and mitochondrial cristae are fragmented in pink1 mutants. Expression of human PINK1 in the Drosophila testes restores male fertility and normal mitochondrial morphology in a portion of pink1 mutants, demonstrating functional conservation between human and Drosophila Pink1. Loss of Drosophila parkin shows phenotypes similar to loss of pink1 function. Notably, overexpression of parkin rescues the male sterility and mitochondrial morphology defects of pink1 mutants, whereas double mutants removing both pink1 and parkin function show muscle phenotypes identical to those observed in either mutant alone. These observations suggest that pink1 and parkin function, at least in part, in the same pathway, with pink1 functioning upstream of parkin. 
The role of the pink1–parkin pathway in regulating mitochondrial function underscores the importance of mitochondrial dysfunction as a central mechanism of Parkinson's disease pathogenesis.

Journal ArticleDOI
26 Jan 2006-Nature
TL;DR: It is shown that in men (at least) empathic responses are shaped by valuation of other people's social behaviour, such that they empathize with fair opponents while favouring the physical punishment of unfair opponents, a finding that echoes recent evidence for altruistic punishment.
Abstract: The neural processes underlying empathy are a subject of intense interest within the social neurosciences. However, very little is known about how brain empathic responses are modulated by the affective link between individuals. We show here that empathic responses are modulated by learned preferences, a result consistent with economic models of social preferences. We engaged male and female volunteers in an economic game, in which two confederates played fairly or unfairly, and then measured brain activity with functional magnetic resonance imaging while these same volunteers observed the confederates receiving pain. Both sexes exhibited empathy-related activation in pain-related brain areas (fronto-insular and anterior cingulate cortices) towards fair players. However, these empathy-related responses were significantly reduced in males when observing an unfair person receiving pain. This effect was accompanied by increased activation in reward-related areas, correlated with an expressed desire for revenge. We conclude that in men (at least) empathic responses are shaped by valuation of other people's social behaviour, such that they empathize with fair opponents while favouring the physical punishment of unfair opponents, a finding that echoes recent evidence for altruistic punishment.

Journal ArticleDOI
TL;DR: In this paper, the authors employed a matrix-based power spectrum estimation method using pseudo-Karhunen-Loeve eigenmodes, producing uncorrelated minimum-variance measurements in 20 k-bands of both the clustering power and its anisotropy due to redshift-space distortions.
Abstract: We measure the large-scale real-space power spectrum P(k) using luminous red galaxies (LRGs) in the Sloan Digital Sky Survey (SDSS) and use this measurement to sharpen constraints on cosmological parameters from the Wilkinson Microwave Anisotropy Probe (WMAP). We employ a matrix-based power spectrum estimation method using Pseudo-Karhunen-Loeve eigenmodes, producing uncorrelated minimum-variance measurements in 20 k-bands of both the clustering power and its anisotropy due to redshift-space distortions, with narrow and well-behaved window functions in the range 0.01 h/Mpc < k < 0.1 h/Mpc. Our analysis is conservative in that it avoids k > 0.1 h/Mpc and associated nonlinear complications, yet agrees well with more aggressive published analyses where nonlinear modeling is crucial.

Journal ArticleDOI
08 Dec 2006-Science
TL;DR: The design and experimental implementation of DNA-based digital logic circuits using single-stranded nucleic acids as inputs and outputs are reported, suggesting applications in biotechnology and bioengineering.
Abstract: Biological organisms perform complex information processing and control tasks using sophisticated biochemical circuits, yet the engineering of such circuits remains ineffective compared with that of electronic circuits. To systematically create complex yet reliable circuits, electrical engineers use digital logic, wherein gates and subcircuits are composed modularly and signal restoration prevents signal degradation. We report the design and experimental implementation of DNA-based digital logic circuits. We demonstrate AND, OR, and NOT gates, signal restoration, amplification, feedback, and cascading. Gate design and circuit construction are modular. The gates use single-stranded nucleic acids as inputs and outputs, and the mechanism relies exclusively on sequence recognition and strand displacement. Biological nucleic acids such as microRNAs can serve as inputs, suggesting applications in biotechnology and bioengineering.
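At a cartoon level, an AND gate of this kind releases its output strand only after both input strands have displaced the protecting strands stacked above it. The domain labels below are hypothetical, the displacements are strictly sequential in this toy, and toehold kinetics are ignored entirely.

```python
# Cartoon of a strand-displacement AND gate: the output strand is released only
# after both input strands have sequentially displaced the strands stacked on
# top of it. Domain names are hypothetical labels, not real sequences, and the
# kinetics of toehold-mediated displacement are ignored entirely.
class AndGate:
    """Gate complex: output at the bottom, protecting strands stacked above."""

    def __init__(self, inputs, output):
        self.stack = list(inputs)   # strand i is displaced by matching input i
        self.output = output
        self.released = None

    def add_input(self, strand):
        # The next protecting strand exposes a toehold only for its own input.
        if self.stack and self.stack[0] == strand:
            self.stack.pop(0)
            if not self.stack:      # bottom exposed: output strand released
                self.released = self.output
        return self.released

g = AndGate(inputs=["in1", "in2"], output="out")
assert g.add_input("in2") is None       # wrong order: toehold not yet exposed
assert g.add_input("in1") is None       # first input strips the first strand
assert g.add_input("in2") == "out"      # both inputs seen: output released
```

Cascading follows the same pattern: the released output strand can serve as the input to a downstream gate, which is what makes the construction modular.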

Journal ArticleDOI
23 Jun 2006-Science
TL;DR: Experimental results from 15 diverse populations show that all populations demonstrate some willingness to administer costly punishment as unequal behavior increases; that the magnitude of this punishment varies substantially across populations; and that costly punishment positively covaries with altruistic behavior across populations.
Abstract: Recent behavioral experiments aimed at understanding the evolutionary foundations of human cooperation have suggested that a willingness to engage in costly punishment, even in one-shot situations, may be part of human psychology and a key element in understanding our sociality. However, because most experiments have been confined to students in industrialized societies, generalizations of these insights to the species have necessarily been tentative. Here, experimental results from 15 diverse populations show that (i) all populations demonstrate some willingness to administer costly punishment as unequal behavior increases, (ii) the magnitude of this punishment varies substantially across populations, and (iii) costly punishment positively covaries with altruistic behavior across populations. These findings are consistent with models of the gene-culture coevolution of human altruism and further sharpen what any theory of human cooperation needs to explain.

Journal ArticleDOI
TL;DR: It is demonstrated that the suggested model can enable a model of object recognition in cortex to expand from recognizing individual objects in isolation to sequentially recognizing all objects in a more complex scene.

Journal ArticleDOI
17 Feb 2006-Science
TL;DR: Satellite radar interferometry observations of Greenland reveal widespread glacier acceleration below 66° north between 1996 and 2000, which rapidly expanded to 70° north in 2005; as more glaciers accelerate farther north, the contribution of Greenland to sea-level rise will continue to increase.
Abstract: Using satellite radar interferometry observations of Greenland, we detected widespread glacier acceleration below 66° north between 1996 and 2000, which rapidly expanded to 70° north in 2005. Accelerated ice discharge in the west and particularly in the east doubled the ice sheet mass deficit in the last decade from 90 to 220 cubic kilometers per year. As more glaciers accelerate farther north, the contribution of Greenland to sea-level rise will continue to increase.
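The quoted mass deficit converts to a sea-level-rise rate with a short calculation; the ice density and global ocean area used below are assumed round values.

```python
# Converting the quoted ice-sheet mass deficit (220 km^3 of ice per year) into
# a sea-level-rise rate. Ice density and ocean area are assumed round values.
ICE_DENSITY = 917.0        # kg/m^3 (assumed)
OCEAN_AREA = 3.61e14       # m^2 (assumed global ocean area)
WATER_DENSITY = 1000.0     # kg/m^3

deficit_km3 = 220.0
mass_kg = deficit_km3 * 1e9 * ICE_DENSITY          # km^3 -> m^3 -> kg
rise_m_per_yr = mass_kg / WATER_DENSITY / OCEAN_AREA

print(f"{mass_kg / 1e12:.0f} Gt/yr ~ {rise_m_per_yr * 1000:.2f} mm/yr")
```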

Journal ArticleDOI
TL;DR: FAST TCP is described, a new TCP congestion control algorithm for high-speed long-latency networks, from design to implementation, and its equilibrium and stability properties are characterized.
Abstract: We describe FAST TCP, a new TCP congestion control algorithm for high-speed long-latency networks, from design to implementation. We highlight the approach taken by FAST TCP to address the four difficulties which the current TCP implementation has at large windows. We describe the architecture and summarize some of the algorithms implemented in our prototype. We characterize its equilibrium and stability properties. We evaluate it experimentally in terms of throughput, fairness, stability, and responsiveness.
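For a single flow on a single bottleneck, the FAST TCP window update can be sketched as w ← min{2w, (1 − γ)w + γ((baseRTT/RTT)·w + α)}; at equilibrium the flow keeps α packets queued at the link. The link model and all parameter values below are illustrative.

```python
# Sketch of the FAST TCP window update for a single flow on a single
# bottleneck: w <- min(2w, (1-gamma) w + gamma ((baseRTT/RTT) w + alpha)).
# The link model (RTT = baseRTT + backlog/capacity) and all parameters are
# illustrative; at equilibrium the flow keeps alpha packets queued.
capacity = 100.0     # packets per second
base_rtt = 0.1       # propagation delay, seconds
alpha, gamma = 20.0, 0.5

w = 1.0
for _ in range(300):
    backlog = max(w - capacity * base_rtt, 0.0)   # packets queued at the link
    rtt = base_rtt + backlog / capacity
    w = min(2 * w, (1 - gamma) * w + gamma * ((base_rtt / rtt) * w + alpha))

print(w)   # settles near capacity * base_rtt + alpha = 30 packets
```

Because the update is driven by queueing delay (RTT minus baseRTT) rather than loss, the window converges smoothly instead of oscillating, which is the equilibrium property the abstract refers to.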

Journal ArticleDOI
TL;DR: In this article, the spectral energy distributions (SEDs) of 259 quasars with both Sloan Digital Sky Survey (SDS) and Spitzer photometry were analyzed.
Abstract: We present an analysis of the mid-infrared (MIR) and optical properties of type 1 (broad-line) quasars detected by the Spitzer Space Telescope. The MIR color-redshift relation is characterized to z ~ 3, with predictions to z = 7. We demonstrate how combining MIR and optical colors can yield even more efficient selection of active galactic nuclei (AGNs) than MIR or optical colors alone. Composite spectral energy distributions (SEDs) are constructed for 259 quasars with both Sloan Digital Sky Survey and Spitzer photometry, supplemented by near-IR, GALEX, VLA, and ROSAT data, where available. We discuss how the spectral diversity of quasars influences the determination of bolometric luminosities and accretion rates; assuming the mean SED can lead to errors as large as 50% for individual quasars when inferring a bolometric luminosity from an optical luminosity. Finally, we show that careful consideration of the shape of the mean quasar SED and its redshift dependence leads to a lower estimate of the fraction of reddened/obscured AGNs missed by optical surveys as compared to estimates derived from a single mean MIR to optical flux ratio.

Journal ArticleDOI
TL;DR: In this paper, a sample of 87 rest-frame UV-selected star-forming galaxies with mean spectroscopic redshift z = 2.26 ± 0.17 was used to study the correlation between metallicity and stellar mass at high redshift.
Abstract: We use a sample of 87 rest-frame UV-selected star-forming galaxies with mean spectroscopic redshift z = 2.26 ± 0.17 to study the correlation between metallicity and stellar mass at high redshift. Using stellar masses determined from SED fitting to observed 0.3-8 μm photometry, we divide the sample into six bins in stellar mass and construct six composite Hα + [N II] spectra from all of the objects in each bin. We estimate the mean oxygen abundance in each bin from the [N II]/Hα ratio and find a monotonic increase in metallicity with increasing stellar mass, from 12 + log(O/H) < 8.2 for galaxies with M = 2.7 × 10^9 M☉ to 12 + log(O/H) = 8.6 for galaxies with M = 1.0 × 10^11 M☉. We use the empirical relation between SFR density and gas density to estimate the gas fractions of the galaxies, finding an increase in gas fraction with decreasing stellar mass. These gas fractions, combined with the observed metallicities, allow the estimation of the effective yield y_eff as a function of stellar mass; in contrast to observations in the local universe, which show a decrease in y_eff with decreasing baryonic mass, we find a slight increase. Such a variation of metallicity with gas fraction is best fitted by a model with supersolar yield and an outflow rate ~4 times higher than the SFR. We conclude that the mass-metallicity relation at high redshift is driven by the increase in metallicity as the gas fraction decreases through star formation and is likely modulated by metal loss from strong outflows in galaxies of all masses.
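The oxygen abundances quoted are inferred from the [N II]/Hα ratio. A widely used conversion for this index is the Pettini & Pagel N2 calibration, 12 + log(O/H) = 8.90 + 0.57 log₁₀(N2); whether this exact calibration matches the paper's choice should be treated as an assumption here.

```python
# Converting an [N II]/H-alpha flux ratio (the N2 index) into an oxygen
# abundance with the Pettini & Pagel N2 calibration. Treating this exact
# calibration as the paper's choice is an assumption.
import math

def oxygen_abundance_n2(n2_ratio: float) -> float:
    """12 + log(O/H) from the [N II] 6584 / H-alpha flux ratio (N2 index)."""
    return 8.90 + 0.57 * math.log10(n2_ratio)

for n2 in (0.05, 0.1, 0.3):
    print(f"N2 = {n2}: 12 + log(O/H) = {oxygen_abundance_n2(n2):.2f}")
```

Ratios in the range N2 ≈ 0.05-0.3 map onto the abundance span quoted in the abstract, roughly 12 + log(O/H) of 8.2 to 8.6.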

Journal ArticleDOI
TL;DR: In this article, a numerical analysis of surface plasmon waveguides exhibiting both long-range propagation and spatial confinement of light with lateral dimensions of less than 10% of the free-space wavelength is presented.
Abstract: We present a numerical analysis of surface plasmon waveguides exhibiting both long-range propagation and spatial confinement of light with lateral dimensions of less than 10% of the free-space wavelength. Attention is given to characterizing the dispersion relations, wavelength-dependent propagation, and energy density decay in two-dimensional Ag/SiO2/Ag structures with waveguide thicknesses ranging from 12 nm to 250 nm. As in conventional planar insulator-metal-insulator (IMI) surface plasmon waveguides, analytic dispersion results indicate a splitting of plasmon modes—corresponding to symmetric and antisymmetric electric field distributions—as SiO2 core thickness is decreased below 100 nm. However, unlike IMI structures, surface plasmon momentum of the symmetric mode does not always exceed photon momentum, with thicker films (d~50 nm) achieving effective indices as low as n=0.15. In addition, antisymmetric mode dispersion exhibits a cutoff for films thinner than d=20 nm, terminating at least 0.25 eV below resonance. From visible to near infrared wavelengths, plasmon propagation exceeds tens of microns with fields confined to within 20 nm of the structure. As the SiO2 core thickness is increased, propagation distances also increase with localization remaining constant. Conventional waveguiding modes of the structure are not observed until the core thickness approaches 100 nm. At such thicknesses, both transverse magnetic and transverse electric modes can be observed. Interestingly, for nonpropagating modes (i.e., modes where propagation does not exceed the micron scale), considerable field enhancement in the waveguide core is observed, rivaling the intensities reported in resonantly excited metallic nanoparticle waveguides.

Journal ArticleDOI
TL;DR: This work uses simulations with model lattice proteins to demonstrate how extra stability increases evolvability by allowing a protein to accept a wider range of beneficial mutations while still folding to its native structure.
Abstract: The biophysical properties that enable proteins to so readily evolve to perform diverse biochemical tasks are largely unknown. Here, we show that a protein’s capacity to evolve is enhanced by the mutational robustness conferred by extra stability. We use simulations with model lattice proteins to demonstrate how extra stability increases evolvability by allowing a protein to accept a wider range of beneficial mutations while still folding to its native structure. We confirm this view experimentally by mutating marginally stable and thermostable variants of cytochrome P450 BM3. Mutants of the stabilized parent were more likely to exhibit new or improved functions. Only the stabilized P450 parent could tolerate the highly destabilizing mutations needed to confer novel activities such as hydroxylating the anti-inflammatory drug naproxen. Our work establishes a crucial link between protein stability and evolution. We show that we can exploit this link to discover protein functions, and we suggest how natural evolution might do the same.
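The stability-evolvability argument above can be caricatured in a few lines: a mutation is tolerated only while the protein still folds, so a larger stability margin absorbs more destabilizing mutations. The toy model below uses hypothetical numbers (a Gaussian ddG distribution skewed toward destabilization), not the paper's lattice-protein simulations.

```python
import random

def fraction_tolerated(extra_stability, n_trials=20000, seed=0):
    """Fraction of random mutations that leave the protein folded.

    A mutation with destabilization ddG is tolerated while
    ddG <= extra_stability (the parent's stability margin).
    The ddG distribution (mean +1, sd 1.7, in arbitrary kcal/mol-like
    units) is an assumption for illustration only.
    """
    rng = random.Random(seed)
    tolerated = sum(1 for _ in range(n_trials)
                    if rng.gauss(1.0, 1.7) <= extra_stability)
    return tolerated / n_trials

marginal = fraction_tolerated(0.5)  # marginally stable parent
stable = fraction_tolerated(3.0)    # thermostable parent
print(marginal, stable)
```

The thermostable parent tolerates a substantially larger fraction of mutations, mirroring the paper's finding that only the stabilized P450 parent could accept the highly destabilizing mutations that confer novel activities.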

Journal ArticleDOI
10 Nov 2006-Science
TL;DR: The sequence and analysis of the 814-megabase genome of the sea urchin Strongylocentrotus purpuratus, a model for developmental and systems biology, are reported; the genome provides an evolutionary outgroup for the chordates and yields insights into the evolution of deuterostomes.
Abstract: We report the sequence and analysis of the 814-megabase genome of the sea urchin Strongylocentrotus purpuratus, a model for developmental and systems biology. The sequencing strategy combined whole-genome shotgun and bacterial artificial chromosome (BAC) sequences. This use of BAC clones, aided by a pooling strategy, overcame difficulties associated with high heterozygosity of the genome. The genome encodes about 23,300 genes, including many previously thought to be vertebrate innovations or known only outside the deuterostomes. This echinoderm genome provides an evolutionary outgroup for the chordates and yields insights into the evolution of deuterostomes.

Journal ArticleDOI
TL;DR: The application of strong electric fields in water and organic liquids has been studied for several years because of its importance in electrical transmission processes and its practical applications in biology, chemistry, and electrochemistry, as discussed by the authors.
Abstract: The application of strong electric fields in water and organic liquids has been studied for several years, because of its importance in electrical transmission processes and its practical applications in biology, chemistry, and electrochemistry. More recently, liquid-phase electrical discharge reactors have been investigated, and are being developed, for many environmental applications, including drinking water and wastewater treatment, as well as, potentially, for environmentally benign chemical processes. This paper reviews the current status of research on the application of high-voltage electrical discharges for promoting chemical reactions in the aqueous phase, with particular emphasis on applications to water cleaning.

Journal ArticleDOI
TL;DR: The idea of space-time coding devised for multiple-antenna systems is applied to the problem of communications over a wireless relay network with Rayleigh fading channels, and it is shown that for high SNR the pairwise error probability (PEP) behaves as (log P/P)^{min{T,R}}, with T the coherence interval and R the number of relay nodes.
Abstract: We apply the idea of space-time coding devised for multiple-antenna systems to the problem of communications over a wireless relay network with Rayleigh fading channels. We use a two-stage protocol, where in one stage the transmitter sends information and in the other, the relays encode their received signals into a "distributed" linear dispersion (LD) code, and then transmit the coded signals to the receive node. We show that for high SNR, the pairwise error probability (PEP) behaves as (log P/P)^{min{T,R}}, with T the coherence interval, that is, the number of symbol periods during which the channels remain constant, R the number of relay nodes, and P the total transmit power. Thus, apart from the log P factor, the system has the same diversity as a multiple-antenna system with R transmit antennas, which is the same as assuming that the R relays can fully cooperate and have full knowledge of the transmitted signal. We further show that for a network with a large number of relays and a fixed total transmit power across the entire network, the optimal power allocation is for the transmitter to expend half the power and for the relays to collectively expend the other half. We also show that at low and high SNR, the coding gain is the same as that of a multiple-antenna system with R antennas. However, at intermediate SNR, it can be quite different, which has implications for the design of distributed space-time codes.
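The high-SNR scaling law above can be evaluated numerically to see the diversity effect. The sketch below treats PEP ~ (log P/P)^{min{T,R}} up to the constants hidden in the "~", so only the trend, not the absolute error probability, is meaningful.

```python
import math

def pep_scaling(P, T, R):
    """High-SNR pairwise-error-probability scaling (log P / P)^{min(T, R)}.

    P: total transmit power (linear scale), T: coherence interval in
    symbol periods, R: number of relay nodes. Constants are ignored, so
    this captures only the decay rate, i.e. the diversity order min(T, R).
    """
    return (math.log(P) / P) ** min(T, R)

# With T = 8, adding relays increases the diversity order up to min(T, R):
# four relays give a much steeper PEP falloff than two at the same power.
print(pep_scaling(1000.0, T=8, R=2), pep_scaling(1000.0, T=8, R=4))
```

This mirrors the abstract's point: apart from the log P factor, R cooperating relays deliver the same diversity as R colocated transmit antennas, but only up to the coherence interval T.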