
Showing papers by "University of California, Santa Barbara" published in 2014


Journal ArticleDOI
Peter A. R. Ade1, Nabila Aghanim2, C. Armitage-Caplan3, Monique Arnaud4  +324 moreInstitutions (70)
TL;DR: In this paper, the authors present the first cosmological results based on Planck measurements of the cosmic microwave background (CMB) temperature and lensing-potential power spectra, which are extremely well described by the standard spatially-flat six-parameter ΛCDM cosmology with a power-law spectrum of adiabatic scalar perturbations.
Abstract: This paper presents the first cosmological results based on Planck measurements of the cosmic microwave background (CMB) temperature and lensing-potential power spectra. We find that the Planck spectra at high multipoles (l ≳ 40) are extremely well described by the standard spatially-flat six-parameter ΛCDM cosmology with a power-law spectrum of adiabatic scalar perturbations. Within the context of this cosmology, the Planck data determine the cosmological parameters to high precision: the angular size of the sound horizon at recombination, the physical densities of baryons and cold dark matter, and the scalar spectral index are estimated to be θ∗ = (1.04147 ± 0.00062) × 10^-2, Ωb h^2 = 0.02205 ± 0.00028, Ωc h^2 = 0.1199 ± 0.0027, and ns = 0.9603 ± 0.0073, respectively (note that in this abstract we quote 68% errors on measured parameters and 95% upper limits on other parameters). For this cosmology, we find a low value of the Hubble constant, H0 = (67.3 ± 1.2) km s^-1 Mpc^-1, and a high value of the matter density parameter, Ωm = 0.315 ± 0.017. These values are in tension with recent direct measurements of H0 and the magnitude-redshift relation for Type Ia supernovae, but are in excellent agreement with geometrical constraints from baryon acoustic oscillation (BAO) surveys. Including curvature, we find that the Universe is consistent with spatial flatness to percent-level precision using Planck CMB data alone. We use high-resolution CMB data together with Planck to provide greater control on extragalactic foreground components in an investigation of extensions to the six-parameter ΛCDM model. We present selected results from a large grid of cosmological models, using a range of additional astrophysical data sets in addition to Planck and high-resolution CMB data. None of these models are favoured over the standard six-parameter ΛCDM cosmology. The deviation of the scalar spectral index from unity is insensitive to the addition of tensor modes and to changes in the matter content of the Universe. We find an upper limit of r_0.002 < 0.11 on the tensor-to-scalar ratio. There is no evidence for additional neutrino-like relativistic particles beyond the three families of neutrinos in the standard model. Using BAO and CMB data, we find Neff = 3.30 ± 0.27 for the effective number of relativistic degrees of freedom, and an upper limit of 0.23 eV for the sum of neutrino masses. Our results are in excellent agreement with big bang nucleosynthesis and the standard value of Neff = 3.046. We find no evidence for dynamical dark energy; using BAO and CMB data, the dark energy equation of state parameter is constrained to be w = -1.13^{+0.13}_{-0.10}. We also use the Planck data to set limits on a possible variation of the fine-structure constant, dark matter annihilation and primordial magnetic fields. Despite the success of the six-parameter ΛCDM model in describing the Planck data at high multipoles, we note that this cosmology does not provide a good fit to the temperature power spectrum at low multipoles. The unusual shape of the spectrum in the multipole range 20 ≲ l ≲ 40 was seen previously in the WMAP data and is a real feature of the primordial CMB anisotropies. The poor fit to the spectrum at low multipoles is not of decisive significance, but is an “anomaly” in an otherwise self-consistent analysis of the Planck temperature data.
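As a quick consistency check (added here for orientation, not part of the abstract), the quoted baryon and cold dark matter densities together with h = H0/(100 km s^-1 Mpc^-1) roughly reproduce the quoted matter density parameter:

    \Omega_m \simeq \frac{\Omega_b h^2 + \Omega_c h^2}{h^2} = \frac{0.02205 + 0.1199}{0.673^2} \approx 0.313,

which agrees with the quoted Ωm = 0.315 ± 0.017 once the small neutrino contribution is included.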

7,060 citations


Journal ArticleDOI
25 Jul 2014-Science
TL;DR: Defaunation is both a pervasive component of the planet’s sixth mass extinction and also a major driver of global ecological change.
Abstract: We live amid a global wave of anthropogenically driven biodiversity loss: species and population extirpations and, critically, declines in local species abundance. Particularly, human impacts on animal biodiversity are an under-recognized form of global environmental change. Among terrestrial vertebrates, 322 species have become extinct since 1500, and populations of the remaining species show 25% average decline in abundance. Invertebrate patterns are equally dire: 67% of monitored populations show 45% mean abundance decline. Such animal declines will cascade onto ecosystem functioning and human well-being. Much remains unknown about this “Anthropocene defaunation”; these knowledge gaps hinder our capacity to predict and limit defaunation impacts. Clearly, however, defaunation is both a pervasive component of the planet’s sixth mass extinction and also a major driver of global ecological change.

2,697 citations


Journal ArticleDOI
27 Nov 2014-Nature
TL;DR: Alternative diets that offer substantial health benefits could, if widely adopted, reduce global agricultural greenhouse gas emissions, reduce land clearing and resultant species extinctions, and help prevent such diet-related chronic non-communicable diseases.
Abstract: Diets link environmental and human health. Rising incomes and urbanization are driving a global dietary transition in which traditional diets are replaced by diets higher in refined sugars, refined fats, oils and meats. By 2050 these dietary trends, if unchecked, would be a major contributor to an estimated 80 per cent increase in global agricultural greenhouse gas emissions from food production and to global land clearing. Moreover, these dietary shifts are greatly increasing the incidence of type II diabetes, coronary heart disease and other chronic non-communicable diseases that lower global life expectancies. Alternative diets that offer substantial health benefits could, if widely adopted, reduce global agricultural greenhouse gas emissions, reduce land clearing and resultant species extinctions, and help prevent such diet-related chronic non-communicable diseases. The implementation of dietary solutions to the tightly linked diet–environment– health trilemma is a global challenge, and opportunity, of great environmental and public health importance.

2,200 citations


Journal ArticleDOI
D. S. Akerib1, Henrique Araujo2, X. Bai3, A. J. Bailey2, J. Balajthy4, S. Bedikian5, Ethan Bernard5, A. Bernstein6, Alexander Bolozdynya1, A. W. Bradley1, D. Byram7, Sidney Cahn5, M. C. Carmona-Benitez8, C. Chan9, J.J. Chapman9, A. A. Chiller7, C. Chiller7, K. Clark1, T. Coffey1, A. Currie2, A. Curioni5, Steven Dazeley6, L. de Viveiros10, A. Dobi4, J. E. Y. Dobson11, E. M. Dragowsky1, E. Druszkiewicz12, B. N. Edwards5, C. H. Faham13, S. Fiorucci9, C. E. Flores14, R. J. Gaitskell9, V. M. Gehman13, C. Ghag15, K.R. Gibson1, Murdock Gilchriese13, C. R. Hall4, M. Hanhardt3, S. A. Hertel5, M. Horn5, D. Q. Huang9, M. Ihm16, R. G. Jacobsen16, L. Kastens5, K. Kazkaz6, R. Knoche4, S. Kyre8, R. L. Lander14, N. A. Larsen5, C. Lee1, David Leonard4, K. T. Lesko13, A. Lindote10, M.I. Lopes10, A. Lyashenko5, D.C. Malling9, R. L. Mannino17, Daniel McKinsey5, Dongming Mei7, J. Mock14, M. Moongweluwan12, J. A. Morad14, M. Morii18, A. St. J. Murphy11, C. Nehrkorn8, H. N. Nelson8, F. Neves10, James Nikkel5, R. A. Ott14, M. Pangilinan9, P. D. Parker5, E. K. Pease5, K. Pech1, P. Phelps1, L. Reichhart15, T. A. Shutt1, C. Silva10, W. Skulski12, C. Sofka17, V. N. Solovov10, P. Sorensen6, T.M. Stiegler17, K. O'Sullivan5, T. J. Sumner2, Robert Svoboda14, M. Sweany14, Matthew Szydagis14, D. J. Taylor, B. P. Tennyson5, D. R. Tiedt3, Mani Tripathi14, S. Uvarov14, J.R. Verbus9, N. Walsh14, R. C. Webb17, J. T. White17, D. White8, M. S. Witherell8, M. Wlasenko18, F.L.H. Wolfs12, M. Woods14, Chao Zhang7 
TL;DR: The first WIMP search data set is reported, taken during the period from April to August 2013, presenting the analysis of 85.3 live days of data, finding that the LUX data are in disagreement with low-mass WIMP signal interpretations of the results from several recent direct detection experiments.
Abstract: The Large Underground Xenon (LUX) experiment is a dual-phase xenon time-projection chamber operating at the Sanford Underground Research Facility (Lead, South Dakota). The LUX cryostat was filled for the first time in the underground laboratory in February 2013. We report results of the first WIMP search data set, taken during the period from April to August 2013, presenting the analysis of 85.3 live days of data with a fiducial volume of 118 kg. A profile-likelihood analysis technique shows our data to be consistent with the background-only hypothesis, allowing 90% confidence limits to be set on spin-independent WIMP-nucleon elastic scattering with a minimum upper limit on the cross section of 7.6 × 10^-46 cm^2 at a WIMP mass of 33 GeV/c^2. We find that the LUX data are in disagreement with low-mass WIMP signal interpretations of the results from several recent direct detection experiments.

1,962 citations


Journal ArticleDOI
TL;DR: In this article, the authors presented cosmological constraints from a joint analysis of type Ia supernova (SN Ia) observations obtained by the SDSS-II and SNLS collaborations.
Abstract: Aims. We present cosmological constraints from a joint analysis of type Ia supernova (SN Ia) observations obtained by the SDSS-II and SNLS collaborations. The dataset includes several low-redshift samples (z < 0.1), all three seasons from the SDSS-II (0.05 < z < 0.4), and three years from SNLS (0.2 < z < 1), totalling 740 spectroscopically confirmed type Ia supernovae with high-quality light curves.

1,939 citations


Journal ArticleDOI
TL;DR: The theoretical modeling of point defects in crystalline materials by means of electronic-structure calculations, with an emphasis on approaches based on density functional theory (DFT), is reviewed in this paper.
Abstract: Point defects and impurities strongly affect the physical properties of materials and have a decisive impact on their performance in applications. First-principles calculations have emerged as a powerful approach that complements experiments and can serve as a predictive tool in the identification and characterization of defects. The theoretical modeling of point defects in crystalline materials by means of electronic-structure calculations, with an emphasis on approaches based on density functional theory (DFT), is reviewed. A general thermodynamic formalism is laid down to investigate the physical properties of point defects independent of the materials class (semiconductors, insulators, and metals), indicating how the relevant thermodynamic quantities, such as formation energy, entropy, and excess volume, can be obtained from electronic structure calculations. Practical aspects such as the supercell approach and efficient strategies to extrapolate to the isolated-defect or dilute limit are discussed. Recent advances in tractable approximations to the exchange-correlation functional ($\mathrm{DFT}+U$, hybrid functionals) and approaches beyond DFT are highlighted. These advances have largely removed the long-standing uncertainty of defect formation energies in semiconductors and insulators due to the failure of standard DFT to reproduce band gaps. Two case studies illustrate how such calculations provide new insight into the physics and role of point defects in real materials.
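For reference, the central quantity in this formalism, the formation energy of a defect X in charge state q, is commonly written as (standard notation for this kind of calculation, not quoted from the review):

    E^f[X^q] = E_{\rm tot}[X^q] - E_{\rm tot}[{\rm bulk}] - \sum_i n_i \mu_i + q\,(E_F + E_{\rm VBM}) + E_{\rm corr},

where n_i atoms with chemical potential μ_i are added (n_i > 0) or removed (n_i < 0), E_F is the Fermi level referenced to the valence-band maximum E_VBM, and E_corr collects finite-size (supercell) corrections.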

1,846 citations


Journal ArticleDOI
Peter A. R. Ade1, Nabila Aghanim2, M. I. R. Alves2, C. Armitage-Caplan3  +469 moreInstitutions (89)
TL;DR: The European Space Agency's Planck satellite, dedicated to studying the early Universe and its subsequent evolution, was launched 14 May 2009 and has been scanning the microwave and submillimetre sky continuously since 12 August 2009 as discussed by the authors.
Abstract: The European Space Agency’s Planck satellite, dedicated to studying the early Universe and its subsequent evolution, was launched 14 May 2009 and has been scanning the microwave and submillimetre sky continuously since 12 August 2009. In March 2013, ESA and the Planck Collaboration released the initial cosmology products based on the first 15.5 months of Planck data, along with a set of scientific and technical papers and a web-based explanatory supplement. This paper gives an overview of the mission and its performance, the processing, analysis, and characteristics of the data, the scientific results, and the science data products and papers in the release. The science products include maps of the cosmic microwave background (CMB) and diffuse extragalactic foregrounds, a catalogue of compact Galactic and extragalactic sources, and a list of sources detected through the Sunyaev-Zeldovich effect. The likelihood code used to assess cosmological models against the Planck data and a lensing likelihood are described. Scientific results include robust support for the standard six-parameter ΛCDM model of cosmology and improved measurements of its parameters, including a highly significant deviation from scale invariance of the primordial power spectrum. The Planck values for these parameters and others derived from them are significantly different from those previously determined. Several large-scale anomalies in the temperature distribution of the CMB, first detected by WMAP, are confirmed with higher confidence. Planck sets new limits on the number and mass of neutrinos, and has measured gravitational lensing of CMB anisotropies at greater than 25σ. Planck finds no evidence for non-Gaussianity in the CMB. Planck’s results agree well with results from the measurements of baryon acoustic oscillations. Planck finds a lower Hubble constant than found in some more local measures. Some tension is also present between the amplitude of matter fluctuations (σ8) derived from CMB data and that derived from Sunyaev-Zeldovich data. The Planck and WMAP power spectra are offset from each other by an average level of about 2% around the first acoustic peak. Analysis of Planck polarization data is not yet mature, therefore polarization results are not released, although the robust detection of E-mode polarization around CMB hot and cold spots is shown graphically.

1,719 citations


Journal ArticleDOI
24 Apr 2014-Nature
TL;DR: The results demonstrate that Josephson quantum computing is a high-fidelity technology, with a clear path to scaling up to large-scale, fault-tolerant quantum circuits.
Abstract: A quantum computer can solve hard problems, such as prime factoring, database searching and quantum simulation, at the cost of needing to protect fragile quantum states from error. Quantum error correction provides this protection by distributing a logical state among many physical quantum bits (qubits) by means of quantum entanglement. Superconductivity is a useful phenomenon in this regard, because it allows the construction of large quantum circuits and is compatible with microfabrication. For superconducting qubits, the surface code approach to quantum computing is a natural choice for error correction, because it uses only nearest-neighbour coupling and rapidly cycled entangling gates. The gate fidelity requirements are modest: the per-step fidelity threshold is only about 99 per cent. Here we demonstrate a universal set of logic gates in a superconducting multi-qubit processor, achieving an average single-qubit gate fidelity of 99.92 per cent and a two-qubit gate fidelity of up to 99.4 per cent. This places Josephson quantum computing at the fault-tolerance threshold for surface code error correction. Our quantum processor is a first step towards the surface code, using five qubits arranged in a linear array with nearest-neighbour coupling. As a further demonstration, we construct a five-qubit Greenberger-Horne-Zeilinger state using the complete circuit and full set of gates. The results demonstrate that Josephson quantum computing is a high-fidelity technology, with a clear path to scaling up to large-scale, fault-tolerant quantum circuits.
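Expressed as error per step (1 − fidelity), the quoted gate fidelities sit below the roughly 1% surface-code threshold cited in the abstract (simple arithmetic on the quoted values):

    \epsilon_{1q} = 1 - 0.9992 = 8\times10^{-4}, \qquad \epsilon_{2q} = 1 - 0.994 = 6\times10^{-3}, \qquad \epsilon_{\rm th} \approx 1\times10^{-2}.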

1,710 citations


Book ChapterDOI
01 Jul 2014
TL;DR: This paper argues that multimedia instructional messages that are designed in light of how the human mind works are more likely to lead to meaningful learning than those that are not.
Abstract: A fundamental hypothesis underlying research on multimedia learning is that multimedia instructional messages that are designed in light of how the human mind works are more likely to lead to meaningful learning than those that are not. The cognitive theory of multimedia learning (CTML) is based on three cognitive science principles of learning: the human information processing system includes dual channels for visual/pictorial and auditory/verbal processing (i.e., dual-channels assumption); each channel has limited capacity for processing (i.e., limited capacity assumption); and active learning entails carrying out a coordinated set of cognitive processes during learning (i.e., active processing assumption). The cognitive theory of multimedia learning specifies five cognitive processes in multimedia learning: selecting relevant words from the presented text or narration, selecting relevant images from the presented illustrations, organizing the selected words into a coherent verbal representation, organizing selected images into a coherent pictorial representation, and integrating the pictorial and verbal representations and prior knowledge. Multimedia instructional messages should be designed to prime these processes. The Case for Multimedia Learning: What is the rationale for a theory of multimedia learning? People learn more deeply from words and pictures than from words alone. This assertion – which can be called the multimedia principle – underlies much of the interest in multimedia learning. For thousands of years, words have been the major format for instruction – including spoken words, and within the last few hundred years, printed words.

1,705 citations


Journal ArticleDOI
TL;DR: The status of understanding of the operation of bulk heterojunction (BHJ) solar cells is reviewed and a summary of the problems to be solved to achieve the predicted power conversion efficiencies of >20% for a single cell is concluded.
Abstract: The status of understanding of the operation of bulk heterojunction (BHJ) solar cells is reviewed. Because the carrier photoexcitation recombination lengths are typically 10 nm in these disordered materials, the length scale for self-assembly must be of order 10–20 nm. Experiments have verified the existence of the BHJ nanostructure, but the morphology remains complex and a limiting factor. Three steps are required for generation of electrical power: i) absorption of photons from the sun; ii) photoinduced charge separation and the generation of mobile carriers; iii) collection of electrons and holes at opposite electrodes. The ultrafast charge transfer process arises from fundamental quantum uncertainty; mobile carriers are directly generated (electrons in the acceptor domains and holes in the donor domains) by the ultrafast charge transfer (≈70%) with ≈30% generated by exciton diffusion to a charge separating heterojunction. Sweep-out of the mobile carriers by the internal field prior to recombination is essential for high performance. Bimolecular recombination dominates in materials where the donor and acceptor phases are pure. Impurities degrade performance by introducing Shockley–Read–Hall decay. The review concludes with a summary of the problems to be solved to achieve the predicted power conversion efficiencies of >20% for a single cell.

1,492 citations


Journal ArticleDOI
TL;DR: In this article, the authors reported that NGC 2617 went through a dramatic outburst, during which its X-ray flux increased by over an order of magnitude followed by an increase of its optical/ultraviolet (UV) continuum flux.
Abstract: After the All-Sky Automated Survey for SuperNovae discovered a significant brightening of the inner region of NGC 2617, we began a ∼70 day photometric and spectroscopic monitoring campaign from the X-ray through near-infrared (NIR) wavelengths. We report that NGC 2617 went through a dramatic outburst, during which its X-ray flux increased by over an order of magnitude followed by an increase of its optical/ultraviolet (UV) continuum flux by almost an order of magnitude. NGC 2617, classified as a Seyfert 1.8 galaxy in 2003, is now a Seyfert 1 due to the appearance of broad optical emission lines and a continuum blue bump. Such 'changing look active galactic nuclei (AGNs)' are rare and provide us with important insights about AGN physics. Based on the Hβ line width and the radius-luminosity relation, we estimate the mass of the central black hole (BH) to be (4 ± 1) × 10^7 M_⊙. When we cross-correlate the light curves, we find that the disk emission lags the X-rays, with the lag becoming longer as we move from the UV (2-3 days) to the NIR (6-9 days). Also, the NIR is more heavily temporally smoothed than the UV. This can largely be explained by a simple model of a thermally emitting thin disk around a BH of the estimated mass that is illuminated by the observed, variable X-ray fluxes.
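A minimal sketch of the kind of cross-correlation lag estimate described above, applied to synthetic, irregularly sampled light curves; this is illustrative only and not the campaign's actual analysis pipeline (the lag, cadence, and noise level below are assumptions):

    # Minimal interpolation cross-correlation lag estimate between two light curves
    # (e.g., an X-ray "driving" band and a UV/NIR "responding" band). Synthetic data only.
    import numpy as np

    rng = np.random.default_rng(0)

    # Irregularly sampled driving light curve over ~70 days
    t_x = np.sort(rng.uniform(0, 70, 120))
    f_x = np.sin(2 * np.pi * t_x / 25.0) + 0.1 * rng.normal(size=t_x.size)

    # Responding light curve: same signal delayed by a true lag of 3 days, plus noise
    true_lag = 3.0
    t_uv = np.sort(rng.uniform(0, 70, 100))
    f_uv = np.sin(2 * np.pi * (t_uv - true_lag) / 25.0) + 0.1 * rng.normal(size=t_uv.size)

    def ccf(lag):
        """Pearson correlation after shifting the responding curve back by `lag`
        and interpolating the driving curve onto its time stamps."""
        x_interp = np.interp(t_uv - lag, t_x, f_x)
        return np.corrcoef(x_interp, f_uv)[0, 1]

    lags = np.arange(-10.0, 10.0, 0.1)
    correlations = np.array([ccf(l) for l in lags])
    best_lag = lags[np.argmax(correlations)]
    print(f"recovered lag ~ {best_lag:.1f} days (true lag = {true_lag} days)")

Shifting the responding band over a grid of trial lags and keeping the lag that maximizes the correlation is the basic idea behind the UV and NIR lags relative to the X-rays quoted above.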

Journal ArticleDOI
Peter A. R. Ade1, Nabila Aghanim2, Monique Arnaud3, Frederico Arroja4  +321 moreInstitutions (79)
TL;DR: In this article, the authors present the implications for cosmic inflation of the Planck measurements of the cosmic microwave background (CMB) anisotropies in both temperature and polarization based on the full Planck survey.
Abstract: We present the implications for cosmic inflation of the Planck measurements of the cosmic microwave background (CMB) anisotropies in both temperature and polarization based on the full Planck survey, which includes more than twice the integration time of the nominal survey used for the 2013 release papers. The Planck full mission temperature data and a first release of polarization data on large angular scales measure the spectral index of curvature perturbations to be ns = 0.968 ± 0.006 and tightly constrain its scale dependence to dns/dlnk = −0.003 ± 0.007 when combined with the Planck lensing likelihood. When the Planck high-l polarization data are included, the results are consistent and uncertainties are further reduced. The upper bound on the tensor-to-scalar ratio is r_0.002 < 0.11 (95% CL). This upper limit is consistent with the B-mode polarization constraint r < 0.12 (95% CL) obtained from a joint analysis of the BICEP2/Keck Array and Planck data. These results imply that V(φ) ∝ φ^2 and natural inflation are now disfavoured compared to models predicting a smaller tensor-to-scalar ratio, such as R^2 inflation. We search for several physically motivated deviations from a simple power-law spectrum of curvature perturbations, including those motivated by a reconstruction of the inflaton potential not relying on the slow-roll approximation. We find that such models are not preferred, either according to a Bayesian model comparison or according to a frequentist simulation-based analysis. Three independent methods reconstructing the primordial power spectrum consistently recover a featureless and smooth spectrum over the range of scales 0.008 Mpc^-1 ≲ k ≲ 0.1 Mpc^-1. At large scales, each method finds deviations from a power law, connected to a deficit at multipoles l ≈ 20−40 in the temperature power spectrum, but at an uncompelling statistical significance owing to the large cosmic variance present at these multipoles. By combining power spectrum and non-Gaussianity bounds, we constrain models with generalized Lagrangians, including Galileon models and axion monodromy models. The Planck data are consistent with adiabatic primordial perturbations, and the estimated values for the parameters of the base Λ cold dark matter (ΛCDM) model are not significantly altered when more general initial conditions are admitted. In correlated mixed adiabatic and isocurvature models, the 95% CL upper bound for the non-adiabatic contribution to the observed CMB temperature variance is |α_non-adi| < 1.9%, 4.0%, and 2.9% for CDM, neutrino density, and neutrino velocity isocurvature modes, respectively. We have tested inflationary models producing an anisotropic modulation of the primordial curvature power spectrum, finding that the dipolar modulation in the CMB temperature field induced by a CDM isocurvature perturbation is not preferred at a statistically significant level. We also establish tight constraints on a possible quadrupolar modulation of the curvature perturbation. These results are consistent with the Planck 2013 analysis based on the nominal mission data and further constrain slow-roll single-field inflationary models, as expected from the increased precision of Planck data using the full set of observations.
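For orientation, the standard slow-roll relations (textbook results, not quoted in the abstract) connect the observables to the inflaton potential: ns ≈ 1 − 6ε + 2η and r = 16ε. For V(φ) ∝ φ^2 and N ≈ 50–60 e-folds of inflation,

    n_s \simeq 1 - \frac{2}{N} \approx 0.96{-}0.97, \qquad r \simeq \frac{8}{N} \approx 0.13{-}0.16,

which is why the bound r_0.002 < 0.11 disfavours φ^2 inflation relative to models such as R^2 inflation that predict a much smaller tensor-to-scalar ratio.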

Journal ArticleDOI
TL;DR: Recent advances in formulation and delivery strategies, such as the use of microsphere-based controlled-release technologies, protein modification methods that make use of polyethylene glycol and other polymers, and genetic manipulation of biopharmaceutical drugs are highlighted and discussed.
Abstract: The formulation and delivery of biopharmaceutical drugs, such as monoclonal antibodies and recombinant proteins, poses substantial challenges owing to their large size and susceptibility to degradation. In this Review we highlight recent advances in formulation and delivery strategies — such as the use of microsphere-based controlled-release technologies, protein modification methods that make use of polyethylene glycol and other polymers, and genetic manipulation of biopharmaceutical drugs — and discuss their advantages and limitations. We also highlight current and emerging delivery routes that provide an alternative to injection, including transdermal, oral and pulmonary delivery routes. In addition, the potential of targeted and intracellular protein delivery is discussed.

Journal ArticleDOI
TL;DR: Progress in the field of electroorganic synthesis is described, including the design of new redox mediators, which can be accomplished more efficiently and purposefully using modern computational tools, and recent advances are summarized.
Abstract: Electroorganic synthesis has become an established, useful, and environmentally benign alternative to classic organic synthesis for the oxidation or the reduction of organic compounds. In this context, the use of redox mediators to achieve indirect processes is attaining increased significance, since it offers many advantages compared to a direct electrolysis. Kinetic inhibitions that are associated with the electron transfer at the electrode/electrolyte interface, for example, can be eliminated and higher or totally different selectivity can be achieved. In many cases, a mediated electron transfer can occur against a potential gradient, meaning that lower potentials are needed, reducing the probability of undesired side-reactions. In addition, the use of electron transfer mediators can help to avoid electrode passivation resulting from polymer film formation on the electrode surface. Although the principle of indirect electrolysis was established many years ago, new, exciting and useful developments continue to be made. In recent years, several new types of redox mediators have been designed and examined, a process that can be accomplished more efficiently and purposefully using modern computational tools. New protocols, including the development of double mediatory systems in biphasic media, enantioselective mediation, and heterogeneous electrocatalysis using immobilized mediators, have been established. Furthermore, the understanding of mediated electron transfer reaction mechanisms has advanced. This review describes progress in the field of electroorganic synthesis and summarizes recent advances.

Journal ArticleDOI
Peter A. R. Ade1, Nabila Aghanim2, C. Armitage-Caplan3, Monique Arnaud4  +273 moreInstitutions (59)
TL;DR: In this article, the authors characterized the effective beams, the effective beam window functions and the associated errors for the Planck High Frequency Instrument (HFI) detectors, including the effect of the optics, detectors, data processing and the scan strategy.
Abstract: This paper characterizes the effective beams, the effective beam window functions and the associated errors for the Planck High Frequency Instrument (HFI) detectors. The effective beam is the angular response including the effect of the optics, detectors, data processing and the scan strategy. The window function is the representation of this beam in the harmonic domain which is required to recover an unbiased measurement of the cosmic microwave background angular power spectrum. The HFI is a scanning instrument and its effective beams are the convolution of: a) the optical response of the telescope and feeds; b) the processing of the time-ordered data and deconvolution of the bolometric and electronic transfer function; and c) the merging of several surveys to produce maps. The time response transfer functions are measured using observations of Jupiter and Saturn and by minimizing survey difference residuals. The scanning beam is the post-deconvolution angular response of the instrument, and is characterized with observations of Mars. The main beam solid angles are determined to better than 0.5% at each HFI frequency band. Observations of Jupiter and Saturn limit near sidelobes (within 5 degrees) to about 0.1% of the total solid angle. Time response residuals remain as long tails in the scanning beams, but contribute less than 0.1% of the total solid angle. The bias and uncertainty in the beam products are estimated using ensembles of simulated planet observations that include the impact of instrumental noise and known systematic effects. The correlation structure of these ensembles is well described by five error eigenmodes that are sub-dominant to sample variance and instrumental noise in the harmonic domain. A suite of consistency tests provides confidence that the error model represents a sufficient description of the data. The total error in the effective beam window functions is below 1% at 100 GHz up to multipole l ~ 1500, and below 0.5% at 143 and 217 GHz up to l ~ 2000.
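Schematically (shown here to illustrate the role of the window function; not a formula quoted in the abstract), the effective beam window function W_l relates the ensemble-averaged observed power spectrum to the underlying sky spectrum,

    \tilde{C}_\ell \simeq W_\ell\, C_\ell^{\rm sky}, \qquad W_\ell \simeq B_\ell^2,

so a fractional error in W_l propagates directly into the recovered CMB angular power spectrum at that multipole, which is why the sub-percent beam errors quoted above matter.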


Journal ArticleDOI
TL;DR: This survey reviews the vast literature on the theory and the applications of complex oscillator networks, focusing on phase oscillator models that are widespread in real-world synchronization phenomena, that generalize the celebrated Kuramoto model, and that feature a rich phenomenology.
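For readers unfamiliar with the Kuramoto model mentioned above, a minimal simulation sketch is given below (all parameter values are illustrative and not taken from the survey):

    # Minimal Kuramoto-model simulation: N phase oscillators with all-to-all
    # sinusoidal coupling. Parameter values are illustrative only.
    import numpy as np

    rng = np.random.default_rng(1)
    N, K, dt, steps = 100, 2.0, 0.01, 5000      # oscillators, coupling, time step, iterations
    omega = rng.normal(0.0, 1.0, N)             # natural frequencies
    theta = rng.uniform(0.0, 2.0 * np.pi, N)    # initial phases

    for _ in range(steps):
        # dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i), Euler stepping
        coupling = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
        theta += dt * (omega + (K / N) * coupling)

    # Order parameter r in [0, 1] measures phase coherence
    r = np.abs(np.exp(1j * theta).mean())
    print(f"order parameter r = {r:.2f}")

The order parameter r measures phase coherence: r near 0 corresponds to an incoherent population, r near 1 to synchronized oscillators.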

Journal ArticleDOI
06 Jun 2014-Science
TL;DR: The recent 70% decline in deforestation in the Brazilian Amazon suggests that it is possible to manage the advance of a vast agricultural frontier. Enforcement of laws, interventions in soy and beef supply chains, restrictions on access to credit, and expansion of protected areas appear to have contributed to this decline, as did a decline in the demand for new deforestation, as mentioned in this paper.
Abstract: The recent 70% decline in deforestation in the Brazilian Amazon suggests that it is possible to manage the advance of a vast agricultural frontier. Enforcement of laws, interventions in soy and beef supply chains, restrictions on access to credit, and expansion of protected areas appear to have contributed to this decline, as did a decline in the demand for new deforestation. The supply chain interventions that fed into this deceleration are precariously dependent on corporate risk management, and public policies have relied excessively on punitive measures. Systems for delivering positive incentives for farmers to forgo deforestation have been designed but not fully implemented. Territorial approaches to deforestation have been effective and could consolidate progress in slowing deforestation while providing a framework for addressing other important dimensions of sustainable development.

Journal ArticleDOI
TL;DR: In this paper, the authors present results for the equation of state in ($2+1$)-flavor QCD using the highly improved staggered quark action and lattices with temporal extent.
Abstract: We present results for the equation of state in ($2+1$)-flavor QCD using the highly improved staggered quark action and lattices with temporal extent ${N}_{\ensuremath{\tau}}=6$, 8, 10, and 12. We show that these data can be reliably extrapolated to the continuum limit and obtain a number of thermodynamic quantities and the speed of sound in the temperature range 130--400 MeV. We compare our results with previous calculations and provide an analytic parameterization of the pressure, from which other thermodynamic quantities can be calculated, for use in phenomenology. We show that the energy density in the crossover region, $145\text{ }\text{ }\mathrm{MeV}\ensuremath{\le}T\ensuremath{\le}163\text{ }\text{ }\mathrm{MeV}$, defined by the chiral transition, is ${\ensuremath{\epsilon}}_{c}=(0.18--0.5)\text{ }\text{ }\mathrm{GeV}/{\mathrm{fm}}^{3}$, i.e., $(1.2--3.1)\text{ }{\ensuremath{\epsilon}}_{\text{nuclear}}$. At high temperatures, we compare our results with resummed and dimensionally reduced perturbation theory calculations. As a byproduct of our analyses, we obtain the values of the scale parameters ${r}_{0}$ from the static quark potential and ${w}_{0}$ from the gradient flow.
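For context, the other thermodynamic quantities mentioned follow from the pressure through standard relations (standard thermodynamics at zero chemical potential, not quoted from the paper):

    \frac{\epsilon - 3p}{T^4} = T\,\frac{\partial}{\partial T}\!\left(\frac{p}{T^4}\right), \qquad s = \frac{\epsilon + p}{T}, \qquad c_s^2 = \frac{\mathrm{d}p}{\mathrm{d}\epsilon},

which is how an analytic parameterization of the pressure suffices to generate the energy density, entropy density, and speed of sound.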

Journal ArticleDOI
12 Mar 2014-ACS Nano
TL;DR: This paper introduces and demonstrates FET biosensors based on molybdenum disulfide (MoS2), which provides extremely high sensitivity and at the same time offers easy patternability and device fabrication, due to its 2D atomically layered structure.
Abstract: Biosensors based on field-effect transistors (FETs) have attracted much attention, as they offer rapid, inexpensive, and label-free detection. While the low sensitivity of FET biosensors based on bulk 3D structures has been overcome by using 1D structures (nanotubes/nanowires), the latter face severe fabrication challenges, impairing their practical applications. In this paper, we introduce and demonstrate FET biosensors based on molybdenum disulfide (MoS2), which provides extremely high sensitivity and at the same time offers easy patternability and device fabrication, due to its 2D atomically layered structure. A MoS2-based pH sensor achieving sensitivity as high as 713 for a pH change by 1 unit along with efficient operation over a wide pH range (3–9) is demonstrated. Ultrasensitive and specific protein sensing is also achieved with a sensitivity of 196 even at 100 femtomolar concentration. While graphene is also a 2D material, we show here that it cannot compete with a MoS2-based FET biosensor, which ...

Journal ArticleDOI
TL;DR: A numerical and experimental investigation of D-Wave One showed evidence consistent with quantum annealing with 108 qubits, as discussed by the authors.
Abstract: Quantum annealing is expected to solve certain optimization problems more efficiently, but there are still open questions regarding the functioning of devices such as D-Wave One. A numerical and experimental investigation of its performance shows evidence for quantum annealing with 108 qubits.

Journal ArticleDOI
31 Jan 2014-Science
TL;DR: The time dependence of the separation of photogenerated electron hole pairs across the donor-acceptor heterojunction in OPV model systems is reported, consistent with charge separation through access to delocalized π-electron states in ordered regions of the fullerene acceptor material.
Abstract: Understanding the charge-separation mechanism in organic photovoltaic cells (OPVs) could facilitate optimization of their overall efficiency. Here we report the time dependence of the separation of photogenerated electron hole pairs across the donor-acceptor heterojunction in OPV model systems. By tracking the modulation of the optical absorption due to the electric field generated between the charges, we measure ~200 millielectron volts of electrostatic energy arising from electron-hole separation within 40 femtoseconds of excitation, corresponding to a charge separation distance of at least 4 nanometers. At this separation, the residual Coulomb attraction between charges is at or below thermal energies, so that electron and hole separate freely. This early time behavior is consistent with charge separation through access to delocalized π-electron states in ordered regions of the fullerene acceptor material.

Journal ArticleDOI
TL;DR: This review focuses on the current understanding of the penetration of nanoparticles (NPs) through biological barriers, and emphasis is placed on transport barriers rather than immunological barriers.

Journal ArticleDOI
Alain Abergel1, Peter A. R. Ade2, Nabila Aghanim1, M. I. R. Alves1  +307 moreInstitutions (66)
TL;DR: In this article, the authors presented an all-sky model of dust emission from the Planck 857, 545 and 353 GHz, and IRAS 100 micron data.
Abstract: This paper presents an all-sky model of dust emission from the Planck 857, 545 and 353 GHz, and IRAS 100 micron data. Using a modified black-body fit to the data we present all-sky maps of the dust optical depth, temperature, and spectral index over the 353-3000 GHz range. This model is a tight representation of the data at 5 arcmin. It shows variations of the order of 30 % compared with the widely-used model of Finkbeiner, Davis, and Schlegel. The Planck data allow us to estimate the dust temperature uniformly over the whole sky, providing an improved estimate of the dust optical depth compared to previous all-sky dust model, especially in high-contrast molecular regions. An increase of the dust opacity at 353 GHz, tau_353/N_H, from the diffuse to the denser interstellar medium (ISM) is reported. It is associated with a decrease in the observed dust temperature, T_obs, that could be due at least in part to the increased dust opacity. We also report an excess of dust emission at HI column densities lower than 10^20 cm^-2 that could be the signature of dust in the warm ionized medium. In the diffuse ISM at high Galactic latitude, we report an anti-correlation between tau_353/N_H and T_obs while the dust specific luminosity, i.e., the total dust emission integrated over frequency (the radiance) per hydrogen atom, stays about constant. The implication is that in the diffuse high-latitude ISM tau_353 is not as reliable a tracer of dust column density as we conclude it is in molecular clouds where the correlation of tau_353 with dust extinction estimated using colour excess measurements on stars is strong. To estimate Galactic E(B-V) in extragalactic fields at high latitude we develop a new method based on the thermal dust radiance, instead of the dust optical depth, calibrated to E(B-V) using reddening measurements of quasars deduced from Sloan Digital Sky Survey data.
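The modified blackbody fit referred to above has the schematic form (written out here for reference; normalization conventions vary):

    I_\nu = \tau_{353}\, B_\nu(T_{\rm obs}) \left(\frac{\nu}{353\ {\rm GHz}}\right)^{\beta},

with the optical depth τ_353 referenced to 353 GHz, B_ν the Planck function at the observed dust temperature T_obs, and β the spectral index; these three parameters are fit pixel by pixel to the 353, 545, and 857 GHz and 100 micron intensities.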

Journal ArticleDOI
TL;DR: The present review of self-affirmation interventions highlights both connections with other disciplines and lessons for a social psychological understanding of intervention and change.
Abstract: People have a basic need to maintain the integrity of the self, a global sense of personal adequacy. Events that threaten self-integrity arouse stress and self-protective defenses that can hamper performance and growth. However, an intervention known as self-affirmation can curb these negative outcomes. Self-affirmation interventions typically have people write about core personal values. The interventions bring about a more expansive view of the self and its resources, weakening the implications of a threat for personal integrity. Timely affirmations have been shown to improve education, health, and relationship outcomes, with benefits that sometimes persist for months and years. Like other interventions and experiences, self-affirmations can have lasting benefits when they touch off a cycle of adaptive potential, a positive feedback loop between the self-system and the social system that propagates adaptive outcomes over time. The present review highlights both connections with other disciplines and lessons for a social psychological understanding of intervention and change.

Book
13 Feb 2014
TL;DR: In this paper, the authors identify two potential sources of excessive control effort in Lyapunov design techniques and show how such effort can be greatly reduced, and present a variety of control design methods suitable for systems described by low-order nonlinear ordinary differential equations.
Abstract: Presenting advances in the theory and design of robust nonlinear control systems, this volume identifies two potential sources of excessive control effort in Lyapunov design techniques and shows how such effort can be greatly reduced. Within the framework of Lyapunov design techniques the authors develop a variety of control design methods suitable for systems described by low-order nonlinear ordinary differential equations. There is an emphasis on global controller designs, that is, designs for the entire region of model validity.
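A textbook-style illustration of the "excessive control effort" issue (an illustrative example, not an excerpt from the book): for the scalar system

    \dot{x} = -x^3 + u, \qquad V(x) = \tfrac{1}{2}x^2,

a design that cancels the nonlinearity, u = x^3 − kx, discards the beneficial damping term −x^3 and requires control effort growing like x^3, whereas the simple choice u = −kx already yields V̇ = −x^4 − kx^2 ≤ 0, achieving global stability with far less effort.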

Journal ArticleDOI
Peter A. R. Ade1, Nabila Aghanim2, Yashar Akrami3, Yashar Akrami4  +310 moreInstitutions (70)
TL;DR: In this article, the statistical isotropy and Gaussianity of the cosmic microwave background (CMB) anisotropies using observations made by the Planck satellite were investigated.
Abstract: We test the statistical isotropy and Gaussianity of the cosmic microwave background (CMB) anisotropies using observations made by the Planck satellite. Our results are based mainly on the full Planck mission for temperature, but also include some polarization measurements. In particular, we consider the CMB anisotropy maps derived from the multi-frequency Planck data by several component-separation methods. For the temperature anisotropies, we find excellent agreement between results based on these sky maps over both a very large fraction of the sky and a broad range of angular scales, establishing that potential foreground residuals do not affect our studies. Tests of skewness, kurtosis, multi-normality, N-point functions, and Minkowski functionals indicate consistency with Gaussianity, while a power deficit at large angular scales is manifested in several ways, for example low map variance. The results of a peak statistics analysis are consistent with the expectations of a Gaussian random field. The “Cold Spot” is detected with several methods, including map kurtosis, peak statistics, and mean temperature profile. We thoroughly probe the large-scale dipolar power asymmetry, detecting it with several independent tests, and address the subject of a posteriori correction. Tests of directionality suggest the presence of angular clustering from large to small scales, but at a significance that is dependent on the details of the approach. We perform the first examination of polarization data, finding the morphology of stacked peaks to be consistent with the expectations of statistically isotropic simulations. Where they overlap, these results are consistent with the Planck 2013 analysis based on the nominal mission data and provide our most thorough view of the statistics of the CMB fluctuations to date.
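As a toy illustration of the simplest one-point statistics mentioned above (skewness and kurtosis), applied here to a simulated Gaussian "map" rather than to Planck data (the map size and number of simulations are arbitrary choices):

    # Toy one-point Gaussianity test: skewness and excess kurtosis of a simulated
    # Gaussian "map", compared against the scatter from Gaussian simulations.
    # Illustrative only; the Planck analysis uses component-separated sky maps,
    # masks, and many additional statistics.
    import numpy as np
    from scipy.stats import skew, kurtosis

    rng = np.random.default_rng(2)
    npix, nsims = 10_000, 200

    data_map = rng.normal(0.0, 1.0, npix)            # stand-in for an observed map
    print(f"map: skewness = {skew(data_map):+.3f}, excess kurtosis = {kurtosis(data_map):+.3f}")

    sims = rng.normal(0.0, 1.0, (nsims, npix))       # null distribution from simulations
    sim_skew = skew(sims, axis=1)
    sim_kurt = kurtosis(sims, axis=1)
    print(f"sims: skewness = {sim_skew.mean():+.3f} +/- {sim_skew.std():.3f}, "
          f"excess kurtosis = {sim_kurt.mean():+.3f} +/- {sim_kurt.std():.3f}")

In the real analysis, the observed values are compared against the distribution obtained from statistically isotropic Gaussian simulations, as in the last lines of the sketch.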

Journal ArticleDOI
TL;DR: A metal-free ATRP process, mediated by light and catalyzed by an organic-based photoredox catalyst, is reported; block copolymer formation was facile and could be combined with other controlled radical processes, leading to structural and synthetic versatility.
Abstract: Overcoming the challenge of metal contamination in traditional ATRP systems, a metal-free ATRP process, mediated by light and catalyzed by an organic-based photoredox catalyst, is reported. Polymerization of vinyl monomers is efficiently activated and deactivated with light, leading to excellent control over the molecular weight, polydispersity, and chain ends of the resulting polymers. Significantly, block copolymer formation was facile and could be combined with other controlled radical processes, leading to structural and synthetic versatility. We believe that these new organic-based photoredox catalysts will enable new applications for controlled radical polymerizations and also be of further value in both small molecule and polymer chemistry.

Journal ArticleDOI
14 Feb 2014-Science
TL;DR: Methane emissions from U.S. and Canadian natural gas systems appear larger than official estimates, and global atmospheric CH4 concentrations are on the rise, with the causes still poorly understood.
Abstract: Natural gas (NG) is a potential “bridge fuel” during transition to a decarbonized energy system: It emits less carbon dioxide during combustion than other fossil fuels and can be used in many industries. However, because of the high global warming potential of methane (CH4, the major component of NG), climate benefits from NG use depend on system leakage rates. Some recent estimates of leakage have challenged the benefits of switching from coal to NG, a large near-term greenhouse gas (GHG) reduction opportunity (1–3). Also, global atmospheric CH4 concentrations are on the rise, with the causes still poorly understood (4).

Journal ArticleDOI
06 Nov 2014-Nature
TL;DR: Without a more integrated framework, fire will never operate as a natural ecosystem process and its impact on society will continue to grow; a more coordinated approach to risk management and land-use planning in these coupled systems is needed.
Abstract: The impacts of escalating wildfire in many regions - the lives and homes lost, the expense of suppression and the damage to ecosystem services - necessitate a more sustainable coexistence with wildfire. Climate change and continued development on fire-prone landscapes will only compound current problems. Emerging strategies for managing ecosystems and mitigating risks to human communities provide some hope, although greater recognition of their inherent variation and links is crucial. Without a more integrated framework, fire will never operate as a natural ecosystem process, and the impact on society will continue to grow. A more coordinated approach to risk management and land-use planning in these coupled systems is needed.