
Showing papers by "California Institute of Technology published in 2005"


Journal ArticleDOI
TL;DR: The new HITRAN is greatly extended in terms of accuracy, spectral coverage, additional absorption phenomena, added line-shape formalisms, and validity; molecules, isotopologues, and perturbing gases have also been added to address atmospheres beyond the Earth.
Abstract: This paper describes the contents of the 2016 edition of the HITRAN molecular spectroscopic compilation. The new edition replaces the previous HITRAN edition of 2012 and its updates during the intervening years. The HITRAN molecular absorption compilation is composed of five major components: the traditional line-by-line spectroscopic parameters required for high-resolution radiative-transfer codes, infrared absorption cross-sections for molecules not yet amenable to representation in a line-by-line form, collision-induced absorption data, aerosol indices of refraction, and general tables such as partition sums that apply globally to the data. The new HITRAN is greatly extended in terms of accuracy, spectral coverage, additional absorption phenomena, added line-shape formalisms, and validity. Moreover, molecules, isotopologues, and perturbing gases have been added that address the issues of atmospheres beyond the Earth. Of considerable note, experimental IR cross-sections for almost 300 additional molecules important in different areas of atmospheric science have been added to the database. The compilation can be accessed through www.hitran.org. Most of the HITRAN data have now been cast into an underlying relational database structure that offers many advantages over the long-standing sequential text-based structure. The new structure empowers the user in many ways. It enables the incorporation of an extended set of fundamental parameters per transition, sophisticated line-shape formalisms, easy user-defined output formats, and very convenient searching, filtering, and plotting of data. A powerful application programming interface making use of structured query language (SQL) features for higher-level applications of HITRAN is also provided.
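
The application programming interface mentioned in the abstract presumably refers to the HITRAN Application Programming Interface (HAPI) distributed through hitran.org. The sketch below is a hedged illustration of the kind of call sequence HAPI supports for pulling a line list and computing an absorption coefficient; the exact function signatures and options should be checked against the HAPI documentation.

```python
# Hedged sketch: assumes the HAPI package from hitran.org is installed as `hapi`.
from hapi import db_begin, fetch, absorptionCoefficient_Lorentz

db_begin('hitran_data')             # local cache directory for downloaded line lists
fetch('H2O', 1, 1, 3400, 4100)      # H2O (molecule 1, isotopologue 1), 3400-4100 cm^-1
nu, coef = absorptionCoefficient_Lorentz(SourceTables='H2O')  # Lorentz-profile absorption coefficient
```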

7,638 citations


Journal ArticleDOI
TL;DR: The input f can be recovered exactly by solving a simple convex optimization problem (which one can recast as a linear program), and numerical experiments suggest that this recovery procedure works unreasonably well; f is recovered exactly even in situations where a significant fraction of the output is corrupted.
Abstract: This paper considers a natural error correcting problem with real valued input/output. We wish to recover an input vector $f \in \R^n$ from corrupted measurements $y = Af + e$. Here, $A$ is an $m$ by $n$ (coding) matrix and $e$ is an arbitrary and unknown vector of errors. Is it possible to recover $f$ exactly from the data $y$? We prove that under suitable conditions on the coding matrix $A$, the input $f$ is the unique solution to the $\ell_1$-minimization problem ($\|x\|_{\ell_1} := \sum_i |x_i|$) $$ \min_{g \in \R^n} \| y - Ag \|_{\ell_1} $$ provided that the support of the vector of errors is not too large, $\|e\|_{\ell_0} := |\{i : e_i \neq 0\}| \le \rho \cdot m$ for some $\rho > 0$. In short, $f$ can be recovered exactly by solving a simple convex optimization problem (which one can recast as a linear program). In addition, numerical experiments suggest that this recovery procedure works unreasonably well; $f$ is recovered exactly even in situations where a significant fraction of the output is corrupted. This work is related to the problem of finding sparse solutions to vastly underdetermined systems of linear equations. There are also significant connections with the problem of recovering signals from highly incomplete measurements. In fact, the results introduced in this paper improve on our earlier work. Finally, underlying the success of $\ell_1$ is a crucial property we call the uniform uncertainty principle that we shall describe in detail.
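
As a concrete illustration of the decoding program described above, the sketch below solves $\min_{g} \|y - Ag\|_{\ell_1}$ by recasting it as a linear program with SciPy's linprog. The dimensions, the Gaussian coding matrix, and the corruption level are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Hedged sketch: solve min_g ||y - A g||_1 as a linear program.
rng = np.random.default_rng(0)
m, n, n_err = 128, 32, 12
A = rng.standard_normal((m, n))                      # coding matrix
f = rng.standard_normal(n)                           # input vector
e = np.zeros(m)
e[rng.choice(m, n_err, replace=False)] = 10 * rng.standard_normal(n_err)  # gross errors
y = A @ f + e                                        # corrupted measurements

# LP variables (g, t) with t >= |y - A g| componentwise: minimize sum(t)
# subject to  A g - t <= y  and  -A g - t <= -y.
c = np.concatenate([np.zeros(n), np.ones(m)])
A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
b_ub = np.concatenate([y, -y])
bounds = [(None, None)] * n + [(0, None)] * m
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
g_hat = res.x[:n]
print("max |g_hat - f| =", np.max(np.abs(g_hat - f)))  # essentially zero when few entries are corrupted
```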

6,853 citations


Posted Content
TL;DR: It is shown that it is possible to recover x_0 accurately from the incomplete and contaminated observations y.
Abstract: Suppose we wish to recover an n-dimensional real-valued vector x_0 (e.g. a digital signal or image) from incomplete and contaminated observations y = A x_0 + e; A is a n by m matrix with far fewer rows than columns (n << m) and e is an error term. Is it possible to recover x_0 accurately based on the data y? To recover x_0, we consider the solution x* to the l1-regularization problem min \|x\|_1 subject to \|Ax-y\|_2 <= epsilon, where epsilon is the size of the error term e. We show that if A obeys a uniform uncertainty principle (with unit-normed columns) and if the vector x_0 is sufficiently sparse, then the solution is within the noise level \|x* - x_0\|_2 \le C epsilon. As a first example, suppose that A is a Gaussian random matrix, then stable recovery occurs for almost all such A's provided that the number of nonzeros of x_0 is of about the same order as the number of observations. Second, suppose one observes few Fourier samples of x_0, then stable recovery occurs for almost any set of p coefficients provided that the number of nonzeros is of the order of n/[\log m]^6. In the case where the error term vanishes, the recovery is of course exact, and this work actually provides novel insights on the exact recovery phenomenon discussed in earlier papers. The methodology also explains why one can also very nearly recover approximately sparse signals.
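
A minimal sketch of the stated recovery program, min ||x||_1 subject to ||Ax - y||_2 <= epsilon, is given below using the cvxpy modelling package (an assumed convenience; any second-order cone solver would do). The problem sizes, sparsity, and noise level are illustrative, not taken from the paper.

```python
import numpy as np
import cvxpy as cp

# Hedged sketch of min ||x||_1 s.t. ||Ax - y||_2 <= eps.
rng = np.random.default_rng(0)
n, m, k, sigma = 64, 256, 8, 0.05                  # n rows << m columns, k nonzeros
A = rng.standard_normal((n, m)) / np.sqrt(n)       # columns are roughly unit-normed
x0 = np.zeros(m)
x0[rng.choice(m, k, replace=False)] = rng.standard_normal(k)
y = A @ x0 + sigma * rng.standard_normal(n)
eps = 1.1 * sigma * np.sqrt(n)                     # a bound on ||e||_2

x = cp.Variable(m)
prob = cp.Problem(cp.Minimize(cp.norm1(x)), [cp.norm(A @ x - y, 2) <= eps])
prob.solve()
print("||x* - x0||_2 =", np.linalg.norm(x.value - x0))   # small: stable recovery
```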

6,226 citations


Posted Content
TL;DR: In this paper, it was shown that under suitable conditions on the coding matrix, the input vector can be recovered exactly by solving a simple convex optimization problem (which one can recast as a linear program).
Abstract: This paper considers the classical error correcting problem which is frequently discussed in coding theory. We wish to recover an input vector $f \in \R^n$ from corrupted measurements $y = A f + e$. Here, $A$ is an $m$ by $n$ (coding) matrix and $e$ is an arbitrary and unknown vector of errors. Is it possible to recover $f$ exactly from the data $y$? We prove that under suitable conditions on the coding matrix $A$, the input $f$ is the unique solution to the $\ell_1$-minimization problem ($\|x\|_{\ell_1} := \sum_i |x_i|$) $$ \min_{g \in \R^n} \| y - Ag \|_{\ell_1} $$ provided that the support of the vector of errors is not too large, $\|e\|_{\ell_0} := |\{i : e_i \neq 0\}| \le \rho \cdot m$ for some $\rho > 0$. In short, $f$ can be recovered exactly by solving a simple convex optimization problem (which one can recast as a linear program). In addition, numerical experiments suggest that this recovery procedure works unreasonably well; $f$ is recovered exactly even in situations where a significant fraction of the output is corrupted.

6,136 citations


Journal ArticleDOI
TL;DR: This paper considers the requirements and implementation constraints on a framework that simultaneously enables an efficient discretization with associated hierarchical indexation and fast analysis/synthesis of functions defined on the sphere and demonstrates how these are explicitly satisfied by HEALPix.
Abstract: HEALPix, the Hierarchical Equal Area isoLatitude Pixelization, is a versatile structure for the pixelization of data on the sphere. An associated library of computational algorithms and visualization software supports fast scientific applications executable directly on discretized spherical maps generated from very large volumes of astronomical data. Originally developed to address the data processing and analysis needs of the present generation of cosmic microwave background experiments (e.g., BOOMERANG, WMAP), HEALPix can be expanded to meet many of the profound challenges that will arise in confrontation with the observational output of future missions and experiments, including, e.g., Planck, Herschel, SAFIR, and the Beyond Einstein inflation probe. In this paper we consider the requirements and implementation constraints on a framework that simultaneously enables an efficient discretization with associated hierarchical indexation and fast analysis/synthesis of functions defined on the sphere. We demonstrate how these are explicitly satisfied by HEALPix.
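
For readers who want to try the scheme, the healpy package provides Python bindings to the HEALPix library; the short sketch below (with arbitrary example values) shows the hierarchical pixel indexing and a spherical-harmonic analysis call.

```python
import numpy as np
import healpy as hp   # Python bindings to the HEALPix library (assumed installed)

nside = 64                                        # resolution parameter; npix = 12 * nside**2
npix = hp.nside2npix(nside)                       # 49152 equal-area pixels
print(npix, hp.nside2resol(nside, arcmin=True))   # pixel count and approximate resolution

# Map a direction (colatitude theta, longitude phi, in radians) to its pixel index
theta, phi = np.radians(90.0 - 41.3), np.radians(210.8)   # an arbitrary sky position
ipix_ring = hp.ang2pix(nside, theta, phi)                 # RING ordering (default)
ipix_nest = hp.ang2pix(nside, theta, phi, nest=True)      # hierarchical NESTED ordering

# Bin a set of arbitrary points into a map and compute its angular power spectrum
m = np.zeros(npix)
pts_theta = np.random.uniform(0.0, np.pi, 1000)
pts_phi = np.random.uniform(0.0, 2.0 * np.pi, 1000)
np.add.at(m, hp.ang2pix(nside, pts_theta, pts_phi), 1.0)
cl = hp.anafast(m - m.mean())                     # fast spherical-harmonic analysis
```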

5,518 citations


Journal ArticleDOI
TL;DR: A review of the physics of small volumes (nanoliters) of fluids is presented, as parametrized by a series of dimensionless numbers expressing the relative importance of various physical phenomena as mentioned in this paper.
Abstract: Microfabricated integrated circuits revolutionized computation by vastly reducing the space, labor, and time required for calculations. Microfluidic systems hold similar promise for the large-scale automation of chemistry and biology, suggesting the possibility of numerous experiments performed rapidly and in parallel, while consuming little reagent. While it is too early to tell whether such a vision will be realized, significant progress has been achieved, and various applications of significant scientific and practical interest have been developed. Here a review of the physics of small volumes (nanoliters) of fluids is presented, as parametrized by a series of dimensionless numbers expressing the relative importance of various physical phenomena. Specifically, this review explores the Reynolds number Re, addressing inertial effects; the Peclet number Pe, which concerns convective and diffusive transport; the capillary number Ca expressing the importance of interfacial tension; the Deborah, Weissenberg, and elasticity numbers De, Wi, and El, describing elastic effects due to deformable microstructural elements like polymers; the Grashof and Rayleigh numbers Gr and Ra, describing density-driven flows; and the Knudsen number, describing the importance of noncontinuum molecular effects. Furthermore, the long-range nature of viscous flows and the small device dimensions inherent in microfluidics mean that the influence of boundaries is typically significant. A variety of strategies have been developed to manipulate fluids by exploiting boundary effects; among these are electrokinetic effects, acoustic streaming, and fluid-structure interactions. The goal is to describe the physics behind the rich variety of fluid phenomena occurring on the nanoliter scale using simple scaling arguments, with the hopes of developing an intuitive sense for this occasionally counterintuitive world.
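
To make the scaling arguments concrete, here is a back-of-the-envelope calculation of three of the dimensionless numbers listed above for an assumed water-filled microchannel; the numerical values are generic illustrative estimates, not taken from the review.

```python
# Illustrative numbers (assumed): water flowing at U = 1 mm/s
# through a channel of characteristic dimension L = 100 micrometres.
rho   = 1.0e3    # density, kg/m^3
mu    = 1.0e-3   # dynamic viscosity, Pa s
U     = 1.0e-3   # flow speed, m/s
L     = 1.0e-4   # channel dimension, m
D     = 1.0e-9   # solute diffusivity, m^2/s (small molecule)
gamma = 7.0e-2   # air-water interfacial tension, N/m

Re = rho * U * L / mu    # inertial vs viscous forces
Pe = U * L / D           # convective vs diffusive transport
Ca = mu * U / gamma      # viscous stress vs interfacial tension

print(f"Re = {Re:.2g}  (inertia negligible, flow is laminar)")
print(f"Pe = {Pe:.2g}  (mixing is diffusion-limited)")
print(f"Ca = {Ca:.2g}  (interfacial tension dominates viscous stress)")
```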

4,044 citations


Proceedings ArticleDOI
20 Jun 2005
TL;DR: This work proposes a novel approach to learn and recognize natural scene categories by representing the image of a scene by a collection of local regions, denoted as codewords obtained by unsupervised learning.
Abstract: We propose a novel approach to learn and recognize natural scene categories. Unlike previous work, it does not require experts to annotate the training set. We represent the image of a scene by a collection of local regions, denoted as codewords obtained by unsupervised learning. Each region is represented as part of a "theme". In previous work, such themes were learnt from hand-annotations of experts, while our method learns the theme distributions as well as the codewords distribution over the themes without supervision. We report satisfactory categorization performances on a large set of 13 categories of complex scenes.
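
The codeword/theme pipeline can be sketched with standard tools: unsupervised clustering builds the codebook, each image becomes a histogram of codewords, and a topic model learns the themes without annotation. The sketch below uses scikit-learn's KMeans and LatentDirichletAllocation as stand-ins for the paper's Bayesian theme model, with synthetic descriptors in place of real image patches; it illustrates the structure of the approach, not the authors' exact implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
descriptors = rng.standard_normal((5000, 128))      # stand-in for local patch descriptors
image_index = rng.integers(0, 100, size=5000)       # which of 100 images each patch came from

K = 50                                              # codebook size (illustrative)
codebook = KMeans(n_clusters=K, n_init=4, random_state=0).fit(descriptors)
codewords = codebook.predict(descriptors)           # quantize each patch to its nearest codeword

# Bag-of-codewords histogram per image
counts = np.zeros((100, K), dtype=int)
np.add.at(counts, (image_index, codewords), 1)

# Learn "themes" over codewords without supervision (LDA as an analogue of the paper's model)
themes = LatentDirichletAllocation(n_components=10, random_state=0).fit(counts)
theme_mix = themes.transform(counts)                # per-image theme distribution
print(theme_mix.shape)                              # (100, 10)
```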

3,920 citations


Proceedings Article
01 Jan 2005
TL;DR: In quantum optical devices, microcavities can coax atoms or quantum dots to emit spontaneous photons in a desired direction or can provide an environment where dissipative mechanisms such as spontaneous emission are overcome so that quantum entanglement of radiation and matter is possible.
Abstract: Microcavity physics and design will be reviewed. Following an overview of applications in quantum optics, communications and biosensing, recent advances in ultra-high-Q research will be presented.

2,857 citations


Journal ArticleDOI
Joseph Adams, Madan M. Aggarwal, Zubayer Ahammed, J. Amonett, and 363 more authors (46 institutions)
TL;DR: In this paper, the most important experimental results from the first three years of nucleus-nucleus collision studies at RHIC were reviewed, with emphasis on results of the STAR experiment.

2,750 citations


Journal ArticleDOI
TL;DR: Galaxy Evolution Explorer (GALEX) as mentioned in this paper performed the first space UV sky survey, including imaging and grism surveys in two bands (1350-1750 and 1750-2750 Å).
Abstract: We give an overview of the Galaxy Evolution Explorer (GALEX), a NASA Explorer Mission launched on 2003 April 28. GALEX is performing the first space UV sky survey, including imaging and grism surveys in two bands (1350-1750 and 1750-2750 Å). The surveys include an all-sky imaging survey (mAB ≲ 20.5), a medium imaging survey of 1000 deg² (mAB ≲ 23), a deep imaging survey of 100 deg² (mAB ≲ 25), and a nearby galaxy survey. Spectroscopic (slitless) grism surveys (R = 100-200) are underway with various depths and sky coverage. Many targets overlap existing or planned surveys in other bands. We will use the measured UV properties of local galaxies, along with corollary observations, to calibrate the relationship of the UV and global star formation rate in local galaxies. We will apply this calibration to distant galaxies discovered in the deep imaging and spectroscopic surveys to map the history of star formation in the universe over the redshift range 0 < z < 2 and probe the physical drivers of star formation in galaxies. The GALEX mission includes a guest investigator program, supporting the wide variety of programs made possible by the first UV sky survey.

2,410 citations


Journal ArticleDOI
TL;DR: This review focuses on the composition, regulation and function of cullin–RING ligases, and describes how these enzymes can be characterized by a set of general principles.
Abstract: Cullin–RING complexes comprise the largest known class of ubiquitin ligases. Owing to the great diversity of their substrate-receptor subunits, it is possible that there are hundreds of distinct cullin–RING ubiquitin ligases in eukaryotic cells, which establishes these enzymes as key mediators of post-translational protein regulation. In this review, we focus on the composition, regulation and function of cullin–RING ligases, and describe how these enzymes can be characterized by a set of general principles.

Journal ArticleDOI
TL;DR: In this paper, a power-spectrum analysis of the final 2dF Galaxy Redshift Survey (2dFGRS) employing a direct Fourier method is presented, and the covariance matrix is determined using two different approaches to the construction of mock surveys, which are used to demonstrate that the input cosmological model can be correctly recovered.
Abstract: We present a power-spectrum analysis of the final 2dF Galaxy Redshift Survey (2dFGRS), employing a direct Fourier method. The sample used comprises 221 414 galaxies with measured redshifts. We investigate in detail the modelling of the sample selection, improving on previous treatments in a number of respects. A new angular mask is derived, based on revisions to the photometric calibration. The redshift selection function is determined by dividing the survey according to rest-frame colour, and deducing a self-consistent treatment of k-corrections and evolution for each population. The covariance matrix for the power-spectrum estimates is determined using two different approaches to the construction of mock surveys, which are used to demonstrate that the input cosmological model can be correctly recovered. We discuss in detail the possible differences between the galaxy and mass power spectra, and treat these using simulations, analytic models and a hybrid empirical approach. Based on these investigations, we are confident that the 2dFGRS power spectrum can be used to infer the matter content of the universe. On large scales, our estimated power spectrum shows evidence for the ‘baryon oscillations’ that are predicted in cold dark matter (CDM) models. Fitting to a CDM model, assuming a primordial n_s = 1 spectrum, h = 0.72 and negligible neutrino mass, the preferred

Journal ArticleDOI
23 Jun 2005-Nature
TL;DR: A remarkable subset of MTL neurons are selectively activated by strikingly different pictures of given individuals, landmarks or objects and in some cases even by letter strings with their names, which suggest an invariant, sparse and explicit code, which might be important in the transformation of complex visual percepts into long-term and more abstract memories.
Abstract: It takes a fraction of a second to recognize a person or an object even when seen under strikingly different conditions. How such a robust, high-level representation is achieved by neurons in the human brain is still unclear. In monkeys, neurons in the upper stages of the ventral visual pathway respond to complex images such as faces and objects and show some degree of invariance to metric properties such as the stimulus size, position and viewing angle. We have previously shown that neurons in the human medial temporal lobe (MTL) fire selectively to images of faces, animals, objects or scenes. Here we report on a remarkable subset of MTL neurons that are selectively activated by strikingly different pictures of given individuals, landmarks or objects and in some cases even by letter strings with their names. These results suggest an invariant, sparse and explicit code, which might be important in the transformation of complex visual percepts into long-term and more abstract memories.

Journal ArticleDOI
TL;DR: A cross-cultural study of behavior in ultimatum, public goods, and dictator games in a range of small-scale societies exhibiting a wide variety of economic and cultural conditions found the canonical model – based on self-interest – fails in all of the societies studied.
Abstract: Researchers from across the social sciences have found consistent deviations from the predictions of the canonical model of self-interest in hundreds of experiments from around the world. This research, however, cannot determine whether the uniformity results from universal patterns of human behavior or from the limited cultural variation available among the university students used in virtually all prior experimental work. To address this, we undertook a cross-cultural study of behavior in ultimatum, public goods, and dictator games in a range of small-scale societies exhibiting a wide variety of economic and cultural conditions. We found, first, that the canonical model - based on self-interest - fails in all of the societies studied. Second, our data reveal substantially more behavioral variability across social groups than has been found in previous research. Third, group-level differences in economic organization and the structure of social interactions explain a substantial portion of the behavioral variation across societies: the higher the degree of market integration and the higher the payoffs to cooperation in everyday life, the greater the level of prosociality expressed in experimental games. Fourth, the available individual-level economic and demographic variables do not consistently explain game behavior, either within or across groups. Fifth, in many cases experimental play appears to reflect the common interactional patterns of everyday life.

Journal ArticleDOI
TL;DR: In this paper, the authors use brain imaging, the behavior of patients with localized brain lesions, animal behavior, single-neuron recordings, and game theory to understand human decision making.
Abstract: Neuroeconomics uses knowledge about brain mechanisms to inform economic analysis, and roots economics in biology. It opens up the "black box" of the brain, much as organizational economics adds detail to the theory of the firm. Neuroscientists use many tools— including brain imaging, behavior of patients with localized brain lesions, animal behavior, and recording single neuron activity. The key insight for economics is that the brain is composed of multiple systems which interact. Controlled systems ("executive function") interrupt automatic ones. Emotions and cognition both guide decisions. Just as prices and allocations emerge from the interaction of two processes—supply and demand— individual decisions can be modeled as the result of two (or more) processes interacting. Indeed, "dual-process" models of this sort are better rooted in neuroscientific fact, and more empirically accurate, than single-process models (such as utility-maximization). We discuss how brain evidence complicates standard assumptions about basic preference, to include homeostasis and other kinds of state-dependence. We also discuss applications to intertemporal choice, risk and decision making, and game theory. Intertemporal choice appears to be domain-specific and heavily influenced by emotion. The simplified β-δ model of quasi-hyperbolic discounting is supported by activation in distinct regions of limbic and cortical systems. In risky decision, imaging data tentatively support the idea that gains and losses are coded separately, and that ambiguity is distinct from risk, because it activates fear and discomfort regions. (Ironically, lesion patients who do not receive fear signals in prefrontal cortex are "rationally" neutral toward ambiguity.) Game theory studies show the effect of brain regions implicated in "theory of mind", correlates of strategic skill, and effects of hormones and other biological variables. Finally, economics can contribute to neuroscience because simple rational-choice models are useful for understanding highly-evolved behavior like motor actions that earn rewards, and Bayesian integration of sensorimotor information.

Journal ArticleDOI
TL;DR: This work proposes a scheme that constructs M random beams and that transmits information to the users with the highest signal-to-noise-plus-interference ratios (SINRs), which can be made available to the transmitter with very little feedback.
Abstract: In multiple-antenna broadcast channels, unlike point-to-point multiple-antenna channels, the multiuser capacity depends heavily on whether the transmitter knows the channel coefficients to each user. For instance, in a Gaussian broadcast channel with M transmit antennas and n single-antenna users, the sum rate capacity scales like M log log n for large n if perfect channel state information (CSI) is available at the transmitter, yet only logarithmically with M if it is not. In systems with large n, obtaining full CSI from all users may not be feasible. Since lack of CSI does not lead to multiuser gains, it is therefore of interest to investigate transmission schemes that employ only partial CSI. We propose a scheme that constructs M random beams and that transmits information to the users with the highest signal-to-noise-plus-interference ratios (SINRs), which can be made available to the transmitter with very little feedback. For fixed M and n increasing, the throughput of our scheme scales as M log log(nN), where N is the number of receive antennas of each user. This is precisely the same scaling obtained with perfect CSI using dirty paper coding. We furthermore show that a linear increase in throughput with M can be obtained provided that M does not grow faster than log n. We also study the fairness of our scheduling in a heterogeneous network and show that, when M is large enough, the system becomes interference dominated and the probability of transmitting to any user converges to 1/n, irrespective of its path loss. In fact, using M = α log n transmit antennas emerges as a desirable operating point, both in terms of providing linear scaling of the throughput with M as well as in guaranteeing fairness.
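
A minimal Monte-Carlo sketch of the random-beamforming idea follows, assuming M orthonormal random beams, equal power per beam, single-antenna users, and i.i.d. Rayleigh channels; each beam is served to the user reporting the largest SINR on it. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
M, n, snr = 4, 200, 10.0              # transmit antennas / beams, users, total power rho

# M orthonormal random beams from the QR factorization of a complex Gaussian matrix
G = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
Phi, _ = np.linalg.qr(G)              # columns are the beams phi_1, ..., phi_M

# i.i.d. Rayleigh channels h_k for n single-antenna users (rows of H)
H = (rng.standard_normal((n, M)) + 1j * rng.standard_normal((n, M))) / np.sqrt(2)

# With power rho/M per beam, user k's SINR on beam m is
#   |h_k^H phi_m|^2 / (M/rho + sum_{j != m} |h_k^H phi_j|^2)
proj = np.abs(H.conj() @ Phi) ** 2                              # n x M
sinr = proj / (M / snr + proj.sum(axis=1, keepdims=True) - proj)

# Each beam goes to the user with the largest reported SINR on that beam
throughput = np.log2(1.0 + sinr.max(axis=0)).sum()
print(f"sum throughput ~ {throughput:.2f} bit/s/Hz with M={M} beams and n={n} users")
```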

Proceedings ArticleDOI
24 Apr 2005
TL;DR: This work proposes a simple distributed iterative scheme, based on distributed average consensus in the network, to compute the maximum-likelihood estimate of the parameters, and shows that it works in a network with dynamically changing topology, provided that the infinitely occurring communication graphs are jointly connected.
Abstract: We consider a network of distributed sensors, where each sensor takes a linear measurement of some unknown parameters, corrupted by independent Gaussian noises. We propose a simple distributed iterative scheme, based on distributed average consensus in the network, to compute the maximum-likelihood estimate of the parameters. This scheme doesn't involve explicit point-to-point message passing or routing; instead, it diffuses information across the network by updating each node's data with a weighted average of its neighbors' data (they maintain the same data structure). At each step, every node can compute a local weighted least-squares estimate, which converges to the global maximum-likelihood solution. This scheme is robust to unreliable communication links. We show that it works in a network with dynamically changing topology, provided that the infinitely occurring communication graphs are jointly connected.
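
A toy version of the scheme, assuming a fixed ring topology, Metropolis consensus weights, and equal noise variances (so the ML estimate reduces to ordinary least squares), is sketched below; all sizes and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
p, n_nodes = 3, 20                       # unknown parameters, sensors

x_true = rng.standard_normal(p)
A = rng.standard_normal((n_nodes, p))    # a_i^T is row i
y = A @ x_true + 0.1 * rng.standard_normal(n_nodes)

# Ring communication graph; Metropolis weights give a symmetric doubly stochastic W
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    for j in ((i - 1) % n_nodes, (i + 1) % n_nodes):
        W[i, j] = 1.0 / 3.0              # 1 / (max(deg_i, deg_j) + 1) with deg = 2
W += np.diag(1.0 - W.sum(axis=1))

# Each node i holds a_i a_i^T and a_i y_i and repeatedly replaces them with a
# weighted average of its neighbours' values (distributed average consensus).
P = np.einsum('ij,ik->ijk', A, A)        # node-wise outer products a_i a_i^T
q = A * y[:, None]                       # node-wise a_i y_i
for _ in range(200):                     # consensus iterations
    P = np.einsum('ij,jkl->ikl', W, P)
    q = W @ q

# After consensus, every node holds ~ (1/n) sum a_i a_i^T and (1/n) sum a_i y_i,
# so each node can form the global least-squares / ML estimate locally.
x_hat_node0 = np.linalg.solve(P[0], q[0])
print(np.round(x_true, 3), np.round(x_hat_node0, 3))
```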

Journal ArticleDOI
TL;DR: In this paper, a device physics model for radial p-n junction nanorod solar cells was developed, in which densely packed nanorods, each having a p-n junction in the radial direction, are oriented with the rod axis parallel to the incident light direction.
Abstract: A device physics model has been developed for radial p-n junction nanorod solar cells, in which densely packed nanorods, each having a p-n junction in the radial direction, are oriented with the rod axis parallel to the incident light direction. High-aspect-ratio (length/diameter) nanorods allow the use of a sufficient thickness of material to obtain good optical absorption while simultaneously providing short collection lengths for excited carriers in a direction normal to the light absorption. The short collection lengths facilitate the efficient collection of photogenerated carriers in materials with low minority-carrier diffusion lengths. The modeling indicates that the design of the radial p-n junction nanorod device should provide large improvements in efficiency relative to a conventional planar geometry p-n junction solar cell, provided that two conditions are satisfied: (1) In a planar solar cell made from the same absorber material, the diffusion length of minority carriers must be too low to allow for extraction of most of the light-generated carriers in the absorber thickness needed to obtain full light absorption. (2) The rate of carrier recombination in the depletion region must not be too large (for silicon this means that the carrier lifetimes in the depletion region must be longer than ~10 ns). If only condition (1) is satisfied, the modeling indicates that the radial cell design will offer only modest improvements in efficiency relative to a conventional planar cell design. Application to Si and GaAs nanorod solar cells is also discussed in detail.

Journal ArticleDOI
Abstract: We have obtained spectroscopic redshifts using the Keck I telescope for a sample of 73 submillimeter galaxies (SMGs), with a median 850 μm flux density of 5.7 mJy, for which precise positions are available through their faint radio emission. The galaxies lie at redshifts out to z = 3.6, with a median redshift of 2.2 and an interquartile range z = 1.7-2.8. Modeling a purely submillimeter flux-limited sample, based on the expected selection function for our radio-identified sample, suggests a median redshift of 2.3, with a redshift distribution remarkably similar to the optically and radio-selected quasars. The observed redshift distributions are similar for the active galactic nucleus (AGN) and starburst subsamples. The median R_AB is 24.6 for the sample. However, the dust-corrected ultraviolet (UV) luminosities of the galaxies rarely hint at the huge bolometric luminosities indicated by their radio/submillimeter emission, with the effect that the true luminosity can be underestimated by a median factor of ~120 for SMGs with pure starburst spectra. Radio and submillimeter observations are thus essential to select the most luminous high-redshift galaxies. The 850 μm, radio, and redshift data are used to estimate the dust temperatures and characterize photometric redshifts. Using 450 μm measurements for a subset of our sample, we confirm that a median dust temperature of T_d = 36 ± 7 K, derived on the assumption that the local far-infrared (FIR)-radio correlation applies at high redshift, is reasonable. Individual 450 μm detections are consistent with the local radio-FIR relation holding at z ~ 2. This median T_d is lower than that estimated for similarly luminous IRAS 60 μm galaxies locally. We demonstrate that dust temperature variations make it impossible to estimate redshifts for individual SMGs to better than Δz ≈ 1 using simple long-wavelength photometric methods. We calculate total infrared and bolometric luminosities (the median infrared luminosity estimated from the radio is 8.5 × 10^12 L☉), construct a luminosity function, and quantify the strong evolution of the submillimeter population across z = 0.5-3.5 relative to local IRAS galaxies. We use the bolometric luminosities and UV-spectral classifications to determine a lower limit to the AGN content of the population and measure directly the varying contribution of highly obscured, luminous galaxies to the luminosity density history of the universe for the first time. We conclude that bright submillimeter galaxies contribute a comparable star formation density to Lyman break galaxies at z = 2-3, and including galaxies below our submillimeter flux limit, this population may be the dominant site of massive star formation at this epoch. The rapid evolution of SMGs and QSO populations contrasts with that seen in bolometrically lower luminosity galaxy samples selected in the rest-frame UV and suggests a close link between SMGs and the formation and evolution of the galactic halos that host QSOs.

Journal ArticleDOI
TL;DR: The results of improving application performance through workflow restructuring which clusters multiple tasks in a workflow into single entities are presented.
Abstract: This paper describes the Pegasus framework that can be used to map complex scientific workflows onto distributed resources. Pegasus enables users to represent the workflows at an abstract level without needing to worry about the particulars of the target execution systems. The paper describes general issues in mapping applications and the functionality of Pegasus. We present the results of improving application performance through workflow restructuring which clusters multiple tasks in a workflow into single entities. A real-life astronomy application is used as the basis for the study.
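
Task clustering of the kind described above can be illustrated on a toy abstract workflow; the sketch below groups tasks by topological depth (level-based clustering) in plain Python. The task names and grouping rule are hypothetical illustrations, not the Pegasus API or its actual clustering algorithm.

```python
from collections import defaultdict

# Toy abstract workflow: task -> set of tasks it depends on (hypothetical names).
dag = {
    "extract_a": set(), "extract_b": set(),
    "project_a": {"extract_a"}, "project_b": {"extract_b"},
    "coadd": {"project_a", "project_b"},
}

def level_cluster(dag):
    """Group tasks by topological depth; each group could run as one clustered job."""
    depth = {}
    def d(task):
        if task not in depth:
            depth[task] = 0 if not dag[task] else 1 + max(d(p) for p in dag[task])
        return depth[task]
    levels = defaultdict(list)
    for task in dag:
        levels[d(task)].append(task)
    return [sorted(levels[k]) for k in sorted(levels)]

for i, cluster in enumerate(level_cluster(dag)):
    print(f"clustered job {i}: {cluster}")
# clustered job 0: ['extract_a', 'extract_b']
# clustered job 1: ['project_a', 'project_b']
# clustered job 2: ['coadd']
```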

Journal ArticleDOI
TL;DR: The Lagrangian Coherent Structures (LCS) as mentioned in this paper are defined as ridges of Finite-Time Lyapunov Exponent (FTLE) fields, which can be seen as finite-time mixing templates.
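
As a worked example of the FTLE definition behind these ridges, the sketch below advects a grid of tracers through an assumed steady two-gyre test velocity field (not a flow taken from the paper), finite-differences the flow map, and evaluates FTLE = (1/|T|) ln sqrt(lambda_max(C)), with C the Cauchy-Green deformation tensor.

```python
import numpy as np

def velocity(t, x, y):
    # Assumed steady two-gyre test field on [0,2] x [0,1] (stream function sin(pi x) sin(pi y)).
    u = -np.pi * np.sin(np.pi * x) * np.cos(np.pi * y)
    v = np.pi * np.cos(np.pi * x) * np.sin(np.pi * y)
    return u, v

def flow_map(x0, y0, t0, T, steps=200):
    # Advect a grid of particles with RK4 to approximate the flow map over [t0, t0 + T].
    x, y = x0.copy(), y0.copy()
    h = T / steps
    for k in range(steps):
        t = t0 + k * h
        k1 = velocity(t, x, y)
        k2 = velocity(t + h/2, x + h/2*k1[0], y + h/2*k1[1])
        k3 = velocity(t + h/2, x + h/2*k2[0], y + h/2*k2[1])
        k4 = velocity(t + h, x + h*k3[0], y + h*k3[1])
        x = x + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        y = y + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return x, y

# FTLE = (1/|T|) ln sqrt(lambda_max(C)), with C = (dPhi/dx)^T (dPhi/dx)
nx, ny, T = 201, 101, 2.0
X, Y = np.meshgrid(np.linspace(0, 2, nx), np.linspace(0, 1, ny))
fx, fy = flow_map(X, Y, 0.0, T)
dxdx, dxdy = np.gradient(fx, X[0], Y[:, 0], axis=(1, 0))   # flow-map gradient by finite differences
dydx, dydy = np.gradient(fy, X[0], Y[:, 0], axis=(1, 0))
C11 = dxdx**2 + dydx**2
C12 = dxdx*dxdy + dydx*dydy
C22 = dxdy**2 + dydy**2
lam_max = 0.5*(C11 + C22) + np.sqrt(0.25*(C11 - C22)**2 + C12**2)
ftle = np.log(np.sqrt(lam_max)) / abs(T)
print("FTLE range:", ftle.min(), ftle.max())   # ridges (local maxima) of this field approximate the LCS
```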

Journal ArticleDOI
TL;DR: For the "sphere decoding" algorithm of Fincke and Pohst, a closed-form expression is found for the expected complexity, both for the infinite and finite lattice, which suggests that maximum-likelihood decoding, which was hitherto thought to be computationally intractable, can be implemented in real time.
Abstract: The problem of finding the least-squares solution to a system of linear equations where the unknown vector is comprised of integers, but the matrix coefficient and given vector are comprised of real numbers, arises in many applications: communications, cryptography, GPS, to name a few. The problem is equivalent to finding the closest lattice point to a given point and is known to be NP-hard. In communications applications, however, the given vector is not arbitrary but rather is an unknown lattice point that has been perturbed by an additive noise vector whose statistical properties are known. Therefore, in this paper, rather than dwell on the worst-case complexity of the integer least-squares problem, we study its expected complexity, averaged over the noise and over the lattice. For the "sphere decoding" algorithm of Fincke and Pohst, we find a closed-form expression for the expected complexity, both for the infinite and finite lattice. It is demonstrated in the second part of this paper that, for a wide range of signal-to-noise ratios (SNRs) and numbers of antennas, the expected complexity is polynomial, in fact, often roughly cubic. Since many communications systems operate at noise levels for which the expected complexity turns out to be polynomial, this suggests that maximum-likelihood decoding, which was hitherto thought to be computationally intractable, can, in fact, be implemented in real time, a result with many practical implications.
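
For orientation, a bare-bones sphere decoder for the integer least-squares problem is sketched below: a depth-first search with radius pruning over a finite symbol alphabet. It is a didactic illustration, not the Fincke-Pohst/Schnorr-Euchner implementation analysed in the paper; the expected-complexity result concerns the number of lattice points such a search visits.

```python
import numpy as np

def sphere_decode(y, H, alphabet, radius=np.inf):
    """Minimize ||y - H s||^2 over s with entries in `alphabet`, pruning branches
    whose partial distance already exceeds the best distance found so far."""
    m, n = H.shape
    Q, R = np.linalg.qr(H)        # H = Q R, R upper triangular (n x n)
    z = Q.T @ y                   # rotated received vector
    best = {"s": None, "d": radius}

    def search(level, s_partial, dist):
        if dist >= best["d"]:
            return                # prune: already worse than the best found
        if level < 0:
            best["s"], best["d"] = s_partial.copy(), dist
            return
        for cand in alphabet:
            s_partial[level] = cand
            resid = z[level] - R[level, level:] @ s_partial[level:]
            search(level - 1, s_partial, dist + resid**2)

    search(n - 1, np.zeros(n), 0.0)
    return best["s"], best["d"]

# Illustrative use: decode a 4-PAM-like system in Gaussian noise
rng = np.random.default_rng(0)
H = rng.standard_normal((6, 4))
s = rng.choice([-3, -1, 1, 3], size=4)
y = H @ s + 0.1 * rng.standard_normal(6)
s_hat, d = sphere_decode(y, H, alphabet=(-3, -1, 1, 3))
print(s, s_hat)
```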

Journal ArticleDOI
09 Dec 2005-Science
TL;DR: Using functional brain imaging, it is shown that the level of ambiguity in choices correlates positively with activation in the amygdala and orbitofrontal cortex, and negatively with the striatal system, suggesting a general neural circuit responding to degrees of uncertainty, contrary to decision theory.
Abstract: Much is known about how people make decisions under varying levels of probability (risk). Less is known about the neural basis of decision-making when probabilities are uncertain because of missing information (ambiguity). In decision theory, ambiguity about probabilities should not affect choices. Using functional brain imaging, we show that the level of ambiguity in choices correlates positively with activation in the amygdala and orbitofrontal cortex, and negatively with a striatal system. Moreover, striatal activity correlates positively with expected reward. Neurological subjects with orbitofrontal lesions were insensitive to the level of ambiguity and risk in behavioral choices. These data suggest a general neural circuit responding to degrees of uncertainty, contrary to decision theory.

Journal ArticleDOI
TL;DR: It is shown that cells with targeted null mutations in Mfn1 or Mfn2 retained low levels of mitochondrial fusion and escaped major cellular dysfunction, suggesting a requirement for mitochondrial fusion, beyond maintenance of organelle morphology.

Journal ArticleDOI
06 Jan 2005-Nature
TL;DR: It is shown that SM, a patient with rare bilateral amygdala damage, shows an inability to make normal use of information from the eye region of faces when judging emotions, a defect the authors trace to a lack of spontaneous fixations on the eyes during free viewing of faces.
Abstract: Ten years ago, we reported that SM, a patient with rare bilateral amygdala damage, showed an intriguing impairment in her ability to recognize fear from facial expressions. Since then, the importance of the amygdala in processing information about facial emotions has been borne out by a number of lesion and functional imaging studies. Yet the mechanism by which amygdala damage compromises fear recognition has not been identified. Returning to patient SM, we now show that her impairment stems from an inability to make normal use of information from the eye region of faces when judging emotions, a defect we trace to a lack of spontaneous fixations on the eyes during free viewing of faces. Although SM fails to look normally at the eye region in all facial expressions, her selective impairment in recognizing fear is explained by the fact that the eyes are the most important feature for identifying this emotion. Notably, SM's recognition of fearful faces became entirely normal when she was instructed explicitly to look at the eyes. This finding provides a mechanism to explain the amygdala's role in fear recognition, and points to new approaches for the possible rehabilitation of patients with defective emotion perception.

Journal ArticleDOI
01 Apr 2005-Science
TL;DR: Using a multiround version of an economic exchange (trust game), it is reported that reciprocity expressed by one player strongly predicts future trust expressed by their partner—a behavioral finding mirrored by neural responses in the dorsal striatum that extends previous model-based functional magnetic resonance imaging studies into the social domain.
Abstract: Using a multiround version of an economic exchange (trust game), we report that reciprocity expressed by one player strongly predicts future trust expressed by their partner—a behavioral finding mirrored by neural responses in the dorsal striatum. Here, analyses within and between brains revealed two signals—one encoded by response magnitude, and the other by response timing. Response magnitude correlated with the “intention to trust” on the next play of the game, and the peak of these “intention to trust” responses shifted its time of occurrence by 14 seconds as player reputations developed. This temporal transfer resembles a similar shift of reward prediction errors common to reinforcement learning models, but in the context of a social exchange. These data extend previous model-based functional magnetic resonance imaging studies into the social domain and broaden our view of the spectrum of functions implemented by the dorsal striatum.

Journal ArticleDOI
TL;DR: Early success is described in the evolution of binary black-hole spacetimes with a numerical code based on a generalization of harmonic coordinates capable of evolving binary systems for enough time to extract information about the orbit, merger, and gravitational waves emitted during the event.
Abstract: We describe early success in the evolution of binary black-hole spacetimes with a numerical code based on a generalization of harmonic coordinates. Indications are that with sufficient resolution this scheme is capable of evolving binary systems for enough time to extract information about the orbit, merger, and gravitational waves emitted during the event. As an example we show results from the evolution of a binary composed of two equal mass, nonspinning black holes, through a single plunge orbit, merger, and ringdown. The resultant black hole is estimated to be a Kerr black hole with angular momentum parameter a ≈ 0.70. At present, lack of resolution far from the binary prevents an accurate estimate of the energy emitted, though a rough calculation suggests on the order of 5% of the initial rest mass of the system is radiated as gravitational waves during the final orbit and ringdown.

Journal ArticleDOI
28 Apr 2005-Nature
TL;DR: A synthetic multicellular system in which genetically engineered ‘receiver’ cells are programmed to form ring-like patterns of differentiation based on chemical gradients of an acyl-homoserine lactone signal that is synthesized by ‘sender” cells is shown.
Abstract: Pattern formation is a hallmark of coordinated cell behaviour in both single and multicellular organisms. It typically involves cell–cell communication and intracellular signal processing. Here we show a synthetic multicellular system in which genetically engineered ‘receiver’ cells are programmed to form ring-like patterns of differentiation based on chemical gradients of an acyl-homoserine lactone (AHL) signal that is synthesized by ‘sender’ cells. In receiver cells, ‘band-detect’ gene networks respond to user-defined ranges of AHL concentrations. By fusing different fluorescent proteins as outputs of network variants, an initially undifferentiated ‘lawn’ of receivers is engineered to form a bullseye pattern around a sender colony. Other patterns, such as ellipses and clovers, are achieved by placing senders in different configurations. Experimental and theoretical analyses reveal which kinetic parameters most significantly affect ring development over time. Construction and study of such synthetic multicellular systems can improve our quantitative understanding of naturally occurring developmental processes and may foster applications in tissue engineering, biomaterial fabrication and biosensing.

Journal ArticleDOI
25 Mar 2005-Science
TL;DR: It is found that protein production rates fluctuate over a time scale of about one cell cycle, while intrinsic noise decays rapidly, which can form a basis for quantitative modeling of natural gene circuits and for design of synthetic ones.
Abstract: The quantitative relation between transcription factor concentrations and the rate of protein production from downstream genes is central to the function of genetic networks. Here we show that this relation, which we call the gene regulation function (GRF), fluctuates dynamically in individual living cells, thereby limiting the accuracy with which transcriptional genetic circuits can transfer signals. Using fluorescent reporter genes and fusion proteins, we characterized the bacteriophage lambda promoter P_R in Escherichia coli. A novel technique based on binomial errors in protein partitioning enabled calibration of in vivo biochemical parameters in molecular units. We found that protein production rates fluctuate over a time scale of about one cell cycle, while intrinsic noise decays rapidly. Thus, biochemical parameters, noise, and slowly varying cellular states together determine the effective single-cell GRF. These results can form a basis for quantitative modeling of natural gene circuits and for design of synthetic ones.

Journal ArticleDOI
TL;DR: In this paper, the authors considered a model of quantum computation in which the set of elementary operations is limited to Clifford unitaries, the creation of the state |0>, and qubit measurement in the computational basis.
Abstract: We consider a model of quantum computation in which the set of elementary operations is limited to Clifford unitaries, the creation of the state |0>, and qubit measurement in the computational basis. In addition, we allow the creation of a one-qubit ancilla in a mixed state rho, which should be regarded as a parameter of the model. Our goal is to determine for which rho universal quantum computation (UQC) can be efficiently simulated. To answer this question, we construct purification protocols that consume several copies of rho and produce a single output qubit with higher polarization. The protocols allow one to increase the polarization only along certain "magic" directions. If the polarization of rho along a magic direction exceeds a threshold value (about 65%), the purification asymptotically yields a pure state, which we call a magic state. We show that the Clifford group operations combined with magic states preparation are sufficient for UQC. The connection of our results with the Gottesman-Knill theorem is discussed.
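
As a small bookkeeping aid for the geometry used above, the sketch below checks whether a one-qubit ancilla lies inside the octahedron of stabilizer mixtures (|x| + |y| + |z| <= 1 in Bloch coordinates) and reports its polarization along a T-type magic direction (1,1,1)/sqrt(3); the example state and tolerance are illustrative, and the actual distillation threshold (about 65%, per the abstract) comes from the purification protocols themselves.

```python
import numpy as np

# Pauli matrices and the T-type magic direction n = (1, 1, 1)/sqrt(3)
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)
N_MAGIC = np.ones(3) / np.sqrt(3)

def bloch_vector(rho):
    """Bloch vector (x, y, z) of a one-qubit density matrix."""
    return np.real([np.trace(rho @ s) for s in (SX, SY, SZ)])

def is_stabilizer_mixture(rho, tol=1e-9):
    """Mixtures of the six stabilizer states fill the octahedron |x|+|y|+|z| <= 1;
    an ancilla must lie strictly outside it to be useful raw material."""
    x, y, z = bloch_vector(rho)
    return abs(x) + abs(y) + abs(z) <= 1 + tol

def polarization_along_magic(rho):
    """Projection of the Bloch vector onto the T-type magic direction."""
    return float(bloch_vector(rho) @ N_MAGIC)

# Illustrative example: a depolarized T-type state rho = p |T><T| + (1 - p) I/2
p = 0.8                        # polarization, above the ~65% threshold quoted in the abstract
T_state = 0.5 * (np.eye(2) + N_MAGIC[0]*SX + N_MAGIC[1]*SY + N_MAGIC[2]*SZ)
rho = p * T_state + (1 - p) * np.eye(2) / 2
print(is_stabilizer_mixture(rho), round(polarization_along_magic(rho), 3))   # False 0.8
```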