
Showing papers from the University of California, Santa Barbara published in 2020


Journal ArticleDOI
Nabila Aghanim1, Yashar Akrami2, Yashar Akrami3, Yashar Akrami4  +229 moreInstitutions (70)
TL;DR: In this article, the authors present cosmological parameter results from the full-mission Planck measurements of the cosmic microwave background (CMB) anisotropies, combining information from the temperature and polarization maps and the lensing reconstruction.
Abstract: We present cosmological parameter results from the final full-mission Planck measurements of the cosmic microwave background (CMB) anisotropies, combining information from the temperature and polarization maps and the lensing reconstruction. Compared to the 2015 results, improved measurements of large-scale polarization allow the reionization optical depth to be measured with higher precision, leading to significant gains in the precision of other correlated parameters. Improved modelling of the small-scale polarization leads to more robust constraints on many parameters, with residual modelling uncertainties estimated to affect them only at the 0.5σ level. We find good consistency with the standard spatially-flat 6-parameter ΛCDM cosmology having a power-law spectrum of adiabatic scalar perturbations (denoted "base ΛCDM" in this paper), from polarization, temperature, and lensing, separately and in combination. A combined analysis gives dark matter density Ωch2 = 0.120±0.001, baryon density Ωbh2 = 0.0224±0.0001, scalar spectral index ns = 0.965±0.004, and optical depth τ = 0.054±0.007 (in this abstract we quote 68% confidence regions on measured parameters and 95% on upper limits). The angular acoustic scale is measured to 0.03% precision, with 100θ∗ = 1.0411±0.0003. These results are only weakly dependent on the cosmological model and remain stable, with somewhat increased errors, in many commonly considered extensions. Assuming the base-ΛCDM cosmology, the inferred (model-dependent) late-Universe parameters are: Hubble constant H0 = (67.4±0.5) km s−1 Mpc−1; matter density parameter Ωm = 0.315±0.007; and matter fluctuation amplitude σ8 = 0.811±0.006. We find no compelling evidence for extensions to the base-ΛCDM model. Combining with baryon acoustic oscillation (BAO) measurements (and considering single-parameter extensions) we constrain the effective extra relativistic degrees of freedom to be Neff = 2.99±0.17, in agreement with the Standard Model prediction Neff = 3.046, and find that the neutrino mass is tightly constrained to ∑mν < 0.12 eV. The CMB spectra continue to prefer higher lensing amplitudes than predicted in base ΛCDM at over 2σ, which pulls some parameters that affect the lensing amplitude away from the ΛCDM model; however, this is not supported by the lensing reconstruction or (in models that also change the background geometry) BAO data. The joint constraint with BAO measurements on spatial curvature is consistent with a flat universe, ΩK = 0.001±0.002. Also combining with Type Ia supernovae (SNe), the dark-energy equation of state parameter is measured to be w0 = −1.03±0.03, consistent with a cosmological constant. We find no evidence for deviations from a purely power-law primordial spectrum, and combining with data from BAO, BICEP2, and Keck Array data, we place a limit on the tensor-to-scalar ratio r0.002 < 0.06. Standard big-bang nucleosynthesis predictions for the helium and deuterium abundances for the base-ΛCDM cosmology are in excellent agreement with observations. The Planck base-ΛCDM results are in good agreement with BAO, SNe, and some galaxy lensing observations, but in slight tension with the Dark Energy Survey's combined-probe results including galaxy clustering (which prefers lower fluctuation amplitudes or matter density parameters), and in significant, 3.6σ, tension with local measurements of the Hubble constant (which prefer a higher value). Simple model extensions that can partially resolve these tensions are not favoured by the Planck data.
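As a quick arithmetic cross-check of the quoted parameters (our own illustration, not part of the paper), the total matter density Ωmh2 should approximately equal the sum of the cold dark matter and baryon densities plus a small neutrino term. A minimal Python sketch:

```python
# Illustrative consistency check of the quoted base-LambdaCDM values
# (our arithmetic, not the paper's): Omega_m h^2 should roughly equal
# Omega_c h^2 + Omega_b h^2 plus a small neutrino contribution.
h = 0.674                 # from H0 = 67.4 km/s/Mpc
omega_c = 0.120           # Omega_c h^2 (cold dark matter)
omega_b = 0.0224          # Omega_b h^2 (baryons)
omega_m = 0.315 * h**2    # Omega_m h^2 from Omega_m = 0.315

print(f"Omega_m h^2               = {omega_m:.4f}")            # ~0.1431
print(f"Omega_c h^2 + Omega_b h^2 = {omega_c + omega_b:.4f}")  # 0.1424
# The ~0.0007 residual matches the small neutrino density implied by
# sum(m_nu) < 0.12 eV, since omega_nu = sum(m_nu) / 93.14 eV < 0.0013.
```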

4,688 citations


Journal ArticleDOI
28 Jan 2020-ACS Nano
TL;DR: Prominent authors from all over the world joined efforts to summarize the current state-of-the-art in understanding and using SERS, as well as to propose what can be expected in the near future, in terms of research, applications, and technological development.
Abstract: The discovery of the enhancement of Raman scattering by molecules adsorbed on nanostructured metal surfaces is a landmark in the history of spectroscopic and analytical techniques. Significant experimental and theoretical effort has been directed toward understanding the surface-enhanced Raman scattering (SERS) effect and demonstrating its potential in various types of ultrasensitive sensing applications in a wide variety of fields. In the 45 years since its discovery, SERS has blossomed into a rich area of research and technology, but additional efforts are still needed before it can be routinely used analytically and in commercial products. In this Review, prominent authors from around the world joined together to summarize the state of the art in understanding and using SERS and to predict what can be expected in the near future in terms of research, applications, and technological development. This Review is dedicated to SERS pioneer and our coauthor, the late Prof. Richard Van Duyne, whom we lost during the preparation of this article.

1,768 citations


Journal ArticleDOI
Yashar Akrami1, Yashar Akrami2, M. Ashdown3, J. Aumont4  +180 moreInstitutions (59)
TL;DR: In this paper, a power-law fit to the angular power spectra of dust polarization at 353 GHz for six nested sky regions covering from 24 to 71 % of the sky is presented.
Abstract: The study of polarized dust emission has become entwined with the analysis of the cosmic microwave background (CMB) polarization. We use new Planck maps to characterize Galactic dust emission as a foreground to the CMB polarization. We present Planck EE, BB, and TE power spectra of dust polarization at 353 GHz for six nested sky regions covering from 24 to 71 % of the sky. We present power-law fits to the angular power spectra, yielding evidence for statistically significant variations of the exponents over sky regions and a difference between the values for the EE and BB spectra. The TE correlation and E/B power asymmetry extend to low multipoles that were not included in earlier Planck polarization papers. We also report evidence for a positive TB dust signal. Combining data from Planck and WMAP, we determine the amplitudes and spectral energy distributions (SEDs) of polarized foregrounds, including the correlation between dust and synchrotron polarized emission, for the six sky regions as a function of multipole. This quantifies the challenge of the component separation procedure required for detecting the reionization and recombination peaks of primordial CMB B modes. The SED of polarized dust emission is fit well by a single-temperature modified blackbody emission law from 353 GHz to below 70 GHz. For a dust temperature of 19.6 K, the mean spectral index for dust polarization is $\beta_{\rm d}^{P} = 1.53\pm0.02 $. By fitting multi-frequency cross-spectra, we examine the correlation of the dust polarization maps across frequency. We find no evidence for decorrelation. If the Planck limit for the largest sky region applies to the smaller sky regions observed by sub-orbital experiments, then decorrelation might not be a problem for CMB experiments aiming at a primordial B-mode detection limit on the tensor-to-scalar ratio $r\simeq0.01$ at the recombination peak.
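The "single-temperature modified blackbody emission law" quoted above has the standard form Iν ∝ ν^βd Bν(Td). A minimal sketch (our illustration, not the Planck pipeline) evaluating it with the quoted Td = 19.6 K and βd = 1.53, normalized at the 353 GHz reference frequency:

```python
import numpy as np

# Standard one-temperature modified blackbody used for polarized dust:
#   I_nu ∝ nu^beta_d * B_nu(T_d)
# (our illustration with the quoted T_d and beta_d, not the Planck code).
h, k, c = 6.626e-34, 1.381e-23, 2.998e8  # SI constants

def planck(nu_hz, T):
    """Planck brightness B_nu(T) in W m^-2 Hz^-1 sr^-1."""
    return 2.0 * h * nu_hz**3 / c**2 / np.expm1(h * nu_hz / (k * T))

def dust_mbb(nu_ghz, T_d=19.6, beta_d=1.53, nu0_ghz=353.0):
    """Modified blackbody, normalized to 1 at the 353 GHz reference."""
    nu, nu0 = nu_ghz * 1e9, nu0_ghz * 1e9
    return (nu / nu0) ** beta_d * planck(nu, T_d) / planck(nu0, T_d)

# Relative dust brightness at a few CMB frequencies:
for f_ghz in (70, 100, 143, 217, 353):
    print(f"{f_ghz:3d} GHz: {dust_mbb(f_ghz):.4f}")
```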

1,749 citations


Journal ArticleDOI
13 Nov 2020-Science
TL;DR: It is found that neuropilin-1 (NRP1), known to bind furin-cleaved substrates, significantly potentiates SARS-CoV-2 infectivity, an effect blocked by a monoclonal blocking antibody against NRP1.
Abstract: The causative agent of coronavirus disease 2019 (COVID-19) is the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). For many viruses, tissue tropism is determined by the availability of virus receptors and entry cofactors on the surface of host cells. In this study, we found that neuropilin-1 (NRP1), known to bind furin-cleaved substrates, significantly potentiates SARS-CoV-2 infectivity, an effect blocked by a monoclonal blocking antibody against NRP1. A SARS-CoV-2 mutant with an altered furin cleavage site did not depend on NRP1 for infectivity. Pathological analysis of olfactory epithelium obtained from human COVID-19 autopsies revealed that SARS-CoV-2 infected NRP1-positive cells facing the nasal cavity. Our data provide insight into SARS-CoV-2 cell infectivity and define a potential target for antiviral intervention.

1,304 citations


Journal ArticleDOI
Nabila Aghanim1, Yashar Akrami2, Yashar Akrami3, Frederico Arroja4  +251 moreInstitutions (72)
TL;DR: In this paper, the authors present the cosmological legacy of the Planck satellite, which provides the strongest constraints on the parameters of the standard cosmology model and some of the tightest limits available on deviations from that model.
Abstract: The European Space Agency’s Planck satellite, which was dedicated to studying the early Universe and its subsequent evolution, was launched on 14 May 2009. It scanned the microwave and submillimetre sky continuously between 12 August 2009 and 23 October 2013, producing deep, high-resolution, all-sky maps in nine frequency bands from 30 to 857 GHz. This paper presents the cosmological legacy of Planck, which currently provides our strongest constraints on the parameters of the standard cosmological model and some of the tightest limits available on deviations from that model. The 6-parameter ΛCDM model continues to provide an excellent fit to the cosmic microwave background data at high and low redshift, describing the cosmological information in over a billion map pixels with just six parameters. With 18 peaks in the temperature and polarization angular power spectra constrained well, Planck measures five of the six parameters to better than 1% (simultaneously), with the best-determined parameter (θ*) now known to 0.03%. We describe the multi-component sky as seen by Planck, the success of the ΛCDM model, and the connection to lower-redshift probes of structure formation. We also give a comprehensive summary of the major changes introduced in this 2018 release. The Planck data, alone and in combination with other probes, provide stringent constraints on our models of the early Universe and the large-scale structure within which all astrophysical objects form and evolve. We discuss some lessons learned from the Planck mission, and highlight areas ripe for further experimental advances.

879 citations


Journal ArticleDOI
T. Aoyama1, Nils Asmussen2, M. Benayoun3, Johan Bijnens4  +146 moreInstitutions (64)
TL;DR: The current status of the Standard Model calculation of the anomalous magnetic moment of the muon is reviewed in this paper, where the authors present a detailed account of recent efforts to improve the calculation of these two contributions with either a data-driven, dispersive approach, or a first-principle, lattice approach.

801 citations


Journal ArticleDOI
04 Jun 2020-Nature
TL;DR: The results obtained by seventy different teams analysing the same functional magnetic resonance imaging dataset show substantial variation, highlighting the influence of analytical choices and the importance of sharing workflows publicly and performing multiple analyses.
Abstract: Data analysis workflows in many scientific domains have become increasingly complex and flexible. Here we assess the effect of this flexibility on the results of functional magnetic resonance imaging by asking 70 independent teams to analyse the same dataset, testing the same 9 ex-ante hypotheses1. The flexibility of analytical approaches is exemplified by the fact that no two teams chose identical workflows to analyse the data. This flexibility resulted in sizeable variation in the results of hypothesis tests, even for teams whose statistical maps were highly correlated at intermediate stages of the analysis pipeline. Variation in reported results was related to several aspects of analysis methodology. Notably, a meta-analytical approach that aggregated information across teams yielded a significant consensus in activated regions. Furthermore, prediction markets of researchers in the field revealed an overestimation of the likelihood of significant findings, even by researchers with direct knowledge of the dataset2-5. Our findings show that analytical flexibility can have substantial effects on scientific conclusions, and identify factors that may be related to variability in the analysis of functional magnetic resonance imaging. The results emphasize the importance of validating and sharing complex analysis workflows, and demonstrate the need for performing and reporting multiple analyses of the same data. Potential approaches that could be used to mitigate issues related to analytical variability are discussed.

551 citations


Journal ArticleDOI
Nabila Aghanim1, Yashar Akrami2, Yashar Akrami3, Yashar Akrami4  +213 moreInstitutions (66)
TL;DR: In this article, the legacy Planck cosmic microwave background (CMB) likelihoods derived from the 2018 data release are described, built with a hybrid method using different approximations at low (ℓ < 30) and high (ℓ ≥ 30) multipoles and implementing several methodological and data-analysis refinements compared to previous releases.
Abstract: We describe the legacy Planck cosmic microwave background (CMB) likelihoods derived from the 2018 data release. The overall approach is similar in spirit to the one retained for the 2013 and 2015 data releases, with a hybrid method using different approximations at low (ℓ < 30) and high (ℓ ≥ 30) multipoles, implementing several methodological and data-analysis refinements compared to previous releases. With more realistic simulations, and better correction and modelling of systematic effects, we can now make full use of the CMB polarization observed in the High Frequency Instrument (HFI) channels. The low-multipole EE cross-spectra from the 100 GHz and 143 GHz data give a constraint on the ΛCDM reionization optical-depth parameter τ to better than 15% (in combination with the TT low-ℓ data and the high-ℓ temperature and polarization data), tightening constraints on all parameters with posterior distributions correlated with τ. We also update the weaker constraint on τ from the joint TEB likelihood using the Low Frequency Instrument (LFI) channels, which was used in 2015 as part of our baseline analysis. At higher multipoles, the CMB temperature spectrum and likelihood are very similar to previous releases. A better model of the temperature-to-polarization leakage and corrections for the effective calibrations of the polarization channels (i.e., the polarization efficiencies) allow us to make full use of the polarization spectra, improving the ΛCDM constraints on the parameters θMC, ωc, ωb, and H0 by more than 30%, and ns by more than 20%, compared to TT-only constraints. Extensive tests on the robustness of the modelling of the polarization data demonstrate good consistency, with some residual modelling uncertainties. At high multipoles, we are now limited mainly by the accuracy of the polarization efficiency modelling. Using our various tests, simulations, and comparison between different high-multipole likelihood implementations, we estimate the consistency of the results to be better than the 0.5σ level on the ΛCDM parameters, as well as classical single-parameter extensions, for the joint likelihood (to be compared to the 0.3σ levels we achieved in 2015 for the temperature data alone on ΛCDM only). Minor curiosities already present in the previous releases remain, such as the differences between the best-fit ΛCDM parameters for the ℓ < 800 and ℓ > 800 ranges of the power spectrum, or the preference for more smoothing of the power-spectrum peaks than predicted in ΛCDM fits. These are shown to be driven by the temperature power spectrum and are not significantly modified by the inclusion of the polarization data. Overall, the legacy Planck CMB likelihoods provide a robust tool for constraining the cosmological model and represent a reference for future CMB observations.

523 citations


Journal ArticleDOI
TL;DR: In this article, the status and prospects for flat-band engineering in van der Waals heterostructures are reviewed, exploring how both strongly correlated states and superconductivity emerge from the moire flat bands.
Abstract: Strongly correlated systems can give rise to spectacular phenomenology, from high-temperature superconductivity to the emergence of states of matter characterized by long-range quantum entanglement. Low-density flat-band systems play a vital role because the energy range of the band is so narrow that the Coulomb interactions dominate over kinetic energy, putting these materials in the strongly-correlated regime. Experimentally, when a band is narrow in both energy and momentum, its filling may be tuned in situ across the whole range, from empty to full. Recently, one particular flat-band system—that of van der Waals heterostructures, such as twisted bilayer graphene—has exhibited strongly correlated states and superconductivity, but it is still not clear to what extent the two are linked. Here, we review the status and prospects for flat-band engineering in van der Waals heterostructures and explore how both phenomena emerge from the moire flat bands. The identification of superconductivity and strong interactions in twisted bilayer 2D materials prompted many questions about the interplay of these phenomena. This Perspective presents the status of the field and the urgent issues for future study.
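The claim that a narrow band puts a material in the strongly correlated regime can be made concrete by comparing the Coulomb scale against the bandwidth. A back-of-envelope sketch follows; the moiré lattice constant, dielectric screening, and bandwidth are our assumed illustrative numbers, not values from the article:

```python
# Order-of-magnitude estimate (ours, with assumed numbers) of the
# interaction-to-bandwidth ratio U/W in a moire flat band:
#   U ~ e^2 / (4*pi*eps0 * eps_r * a_M)   vs   bandwidth W.
e2_over_4pieps0 = 1.44   # eV*nm (Coulomb constant in convenient units)
a_moire_nm = 13.0        # assumed moire lattice constant near the magic angle
eps_r = 5.0              # assumed effective dielectric screening
W_meV = 10.0             # assumed flat-band width

U_meV = 1e3 * e2_over_4pieps0 / (eps_r * a_moire_nm)
print(f"U ~ {U_meV:.0f} meV vs W ~ {W_meV:.0f} meV (U/W ~ {U_meV / W_meV:.1f})")
# U/W > 1 is the strongly correlated regime described above.
```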

506 citations


Journal ArticleDOI
20 Mar 2020
TL;DR: This article reviews the mainstream compression approaches such as compact model, tensor decomposition, data quantization, and network sparsification, answers the question of how to leverage these methods in the design of neural network accelerators, and presents the state-of-the-art hardware architectures.
Abstract: Domain-specific hardware is becoming a promising topic in the backdrop of improvement slow down for general-purpose processors due to the foreseeable end of Moore’s Law. Machine learning, especially deep neural networks (DNNs), has become the most dazzling domain witnessing successful applications in a wide spectrum of artificial intelligence (AI) tasks. The incomparable accuracy of DNNs is achieved by paying the cost of hungry memory consumption and high computational complexity, which greatly impedes their deployment in embedded systems. Therefore, the DNN compression concept was naturally proposed and widely used for memory saving and compute acceleration. In the past few years, a tremendous number of compression techniques have sprung up to pursue a satisfactory tradeoff between processing efficiency and application accuracy. Recently, this wave has spread to the design of neural network accelerators for gaining extremely high performance. However, the amount of related works is incredibly huge and the reported approaches are quite divergent. This research chaos motivates us to provide a comprehensive survey on the recent advances toward the goal of efficient compression and execution of DNNs without significantly compromising accuracy, involving both the high-level algorithms and their applications in hardware design. In this article, we review the mainstream compression approaches such as compact model, tensor decomposition, data quantization, and network sparsification. We explain their compression principles, evaluation metrics, sensitivity analysis, and joint-way use. Then, we answer the question of how to leverage these methods in the design of neural network accelerators and present the state-of-the-art hardware architectures. In the end, we discuss several existing issues such as fair comparison, testing workloads, automatic compression, influence on security, and framework/hardware-level support, and give promising topics in this field and the possible challenges as well. This article attempts to enable readers to quickly build up a big picture of neural network compression and acceleration, clearly evaluate various methods, and confidently get started in the right way.
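As a concrete illustration of two of the surveyed compression families, the following minimal sketch (ours, not from the article) applies magnitude-based network sparsification and uniform 8-bit data quantization to a stand-in weight matrix:

```python
import numpy as np

# Toy demonstration of two compression techniques from the survey above
# (our illustration): magnitude pruning and symmetric 8-bit quantization.
rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256)).astype(np.float32)  # stand-in weight matrix

# 1) Network sparsification: zero the 90% of weights smallest in magnitude.
threshold = np.quantile(np.abs(W), 0.90)
W_sparse = np.where(np.abs(W) >= threshold, W, 0.0)

# 2) Data quantization: symmetric uniform 8-bit quantization.
scale = np.abs(W).max() / 127.0
W_q = np.clip(np.round(W / scale), -127, 127).astype(np.int8)
W_deq = W_q.astype(np.float32) * scale  # dequantize to measure the error

print(f"sparsity after pruning: {(W_sparse == 0).mean():.1%}")
print(f"quantization RMSE:      {np.sqrt(np.mean((W - W_deq) ** 2)):.5f}")
```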

499 citations


Journal ArticleDOI
TL;DR: A geodatabase of 35 environmental, socioeconomic, topographic, and demographic variables that could explain the spatial variability of disease incidence across the continental United States is compiled; the results suggest that although incorporating spatial autocorrelation significantly improves the performance of the global ordinary least squares model, global models still perform significantly worse than local models.

Journal ArticleDOI
Nabila Aghanim1, Yashar Akrami2, Yashar Akrami3, Yashar Akrami4  +202 moreInstitutions (63)
TL;DR: In this article, the authors presented an extensive set of tests of the robustness of the lensing-potential power spectrum, and constructed a minimum-variance estimator likelihood over lensing multipoles 8 ≤ L ≤ 400.
Abstract: We present measurements of the cosmic microwave background (CMB) lensing potential using the final Planck 2018 temperature and polarization data. Using polarization maps filtered to account for the noise anisotropy, we increase the significance of the detection of lensing in the polarization maps from 5σ to 9σ. Combined with temperature, lensing is detected at 40σ. We present an extensive set of tests of the robustness of the lensing-potential power spectrum, and construct a minimum-variance estimator likelihood over lensing multipoles 8 ≤ L ≤ 400 (extending the range to lower L compared to 2015), which we use to constrain cosmological parameters. We find good consistency between lensing constraints and the results from the Planck CMB power spectra within the ΛCDM model. Combined with baryon density and other weak priors, the lensing analysis alone constrains σ8 Ωm^0.25 = 0.589 ± 0.020 (1σ errors). Also combining with baryon acoustic oscillation data, we find tight individual parameter constraints, including σ8 = 0.811 ± 0.019. Combining with Planck CMB power spectrum data, we measure σ8 to better than 1% precision, finding σ8 = 0.811 ± 0.006. CMB lensing reconstruction data are complementary to galaxy lensing data at lower redshift, having a different degeneracy direction in σ8 − Ωm space; we find consistency with the lensing results from the Dark Energy Survey, and give combined lensing-only parameter constraints that are tighter than joint results using galaxy clustering. Using the Planck cosmic infrared background (CIB) maps as an additional tracer of high-redshift matter, we make a combined Planck-only estimate of the lensing potential over 60% of the sky with considerably more small-scale signal. We additionally demonstrate delensing of the Planck power spectra using the joint and individual lensing potential estimates, detecting a maximum removal of 40% of the lensing-induced power in all spectra. The improvement in the sharpening of the acoustic peaks by including both CIB and the quadratic lensing reconstruction is detected at high significance.

Journal ArticleDOI
TL;DR: The electronic properties of CsV3Sb5 are presented, demonstrating bulk superconductivity in single crystals with Tc = 2.5 K, and the implications for the formation of unconventional superconductivity in this material are discussed.
Abstract: A cesium-rich "kagome" metal is both a topological insulator and a superconductor, making it a compelling material for future quantum technologies.

Journal ArticleDOI
28 Feb 2020-Science
TL;DR: This article integrates new data from different sources that have emerged recently and suggests more modest growth in global data center energy use than is commonly claimed, providing policy-makers and energy analysts a recalibrated understanding of global data center energy use, its drivers, and near-term efficiency potential.
Abstract: Growth in energy use has slowed owing to efficiency gains that smart policies can help maintain in the near term. Data centers represent the information backbone of an increasingly digitalized world. Demand for their services has been rising rapidly (1), and data-intensive technologies such as artificial intelligence, smart and connected energy systems, distributed manufacturing systems, and autonomous vehicles promise to increase demand further (2). Given that data centers are energy-intensive enterprises, estimated to account for around 1% of worldwide electricity use, these trends have clear implications for global energy demand and must be analyzed rigorously. Several oft-cited yet simplistic analyses claim that the energy used by the world's data centers has doubled over the past decade and that their energy use will triple or even quadruple within the next decade (3–5). Such estimates contribute to a conventional wisdom (5, 6) that as demand for data center services rises rapidly, so too must their global energy use. But such extrapolations based on recent service demand growth indicators overlook strong countervailing energy efficiency trends that have occurred in parallel (see the first figure). Here, we integrate new data from different sources that have emerged recently and suggest more modest growth in global data center energy use (see the second figure). This provides policy-makers and energy analysts a recalibrated understanding of global data center energy use, its drivers, and near-term efficiency potential.
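The "around 1% of worldwide electricity use" figure is straightforward to sanity-check. The totals below are assumed round numbers of roughly the right magnitude for 2018, not values quoted from the article:

```python
# Rough arithmetic (ours, with assumed round numbers) behind the
# "around 1% of worldwide electricity use" estimate quoted above.
datacenter_twh = 205.0            # assumed global data center use, TWh/yr
world_electricity_twh = 23_000.0  # assumed global electricity use, TWh/yr
print(f"data center share: {datacenter_twh / world_electricity_twh:.2%}")  # ~0.89%
```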

Journal ArticleDOI
21 Feb 2020-Science
TL;DR: The quantum anomalous Hall (QAH) effect combines topology and magnetism to produce precisely quantized Hall resistance at zero magnetic field as mentioned in this paper, driven by intrinsic strong interactions, which polarize the electrons into a single spin and valley-resolved moire miniband with Chern number C = 1.
Abstract: The quantum anomalous Hall (QAH) effect combines topology and magnetism to produce precisely quantized Hall resistance at zero magnetic field. We report the observation of a QAH effect in twisted bilayer graphene aligned to hexagonal boron nitride. The effect is driven by intrinsic strong interactions, which polarize the electrons into a single spin- and valley-resolved moire miniband with Chern number C = 1. In contrast to magnetically doped systems, the measured transport energy gap is larger than the Curie temperature for magnetic ordering, and quantization to within 0.1% of the von Klitzing constant persists to temperatures of several kelvin at zero magnetic field. Electrical currents as small as 1 nanoampere controllably switch the magnetic order between states of opposite polarization, forming an electrically rewritable magnetic memory.
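Quantization of the C = 1 state means the Hall resistance locks to the von Klitzing constant RK = h/e2. Evaluating it from the exact SI constants shows the target value and the 0.1% window quoted above (our illustration):

```python
# Hall resistance of a Chern number C = 1 state: R_xy = h / (C * e^2),
# i.e. the von Klitzing constant R_K (our evaluation from SI constants).
h = 6.62607015e-34   # Planck constant, J s (exact in SI 2019)
e = 1.602176634e-19  # elementary charge, C (exact in SI 2019)

R_K = h / e**2
print(f"R_K = {R_K:.3f} ohm")                    # ~25812.807 ohm
print(f"0.1% window: +/- {1e-3 * R_K:.1f} ohm")  # ~ +/- 25.8 ohm
```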

Journal ArticleDOI
Yashar Akrami1, Frederico Arroja2, M. Ashdown3, J. Aumont4  +187 moreInstitutions (59)
TL;DR: In this paper, the Planck full-mission cosmic microwave background (CMB) temperature and E-mode polarization maps were used to obtain constraints on primordial non-Gaussianity.
Abstract: We analyse the Planck full-mission cosmic microwave background (CMB) temperature and E-mode polarization maps to obtain constraints on primordial non-Gaussianity (NG). We compare estimates obtained from separable template-fitting, binned, and optimal modal bispectrum estimators, finding consistent values for the local, equilateral, and orthogonal bispectrum amplitudes. Our combined temperature and polarization analysis produces the following final results: $f_{NL}^{local}$ = −0.9 ± 5.1; $f_{NL}^{equil}$ = −26 ± 47; and $f_{NL}^{ortho}$ = −38 ± 24 (68% CL, statistical). These results include low-multipole (4 ≤ l < 40) polarization data that are not included in our previous analysis. The results also pass an extensive battery of tests (with additional tests regarding foreground residuals compared to 2015), and they are stable with respect to our 2015 measurements (with small fluctuations, at the level of a fraction of a standard deviation, which is consistent with changes in data processing). Polarization-only bispectra display a significant improvement in robustness; they can now be used independently to set primordial NG constraints with a sensitivity comparable to WMAP temperature-based results and they give excellent agreement. In addition to the analysis of the standard local, equilateral, and orthogonal bispectrum shapes, we consider a large number of additional cases, such as scale-dependent feature and resonance bispectra, isocurvature primordial NG, and parity-breaking models, where we also place tight constraints but do not detect any signal. The non-primordial lensing bispectrum is, however, detected with an improved significance compared to 2015, excluding the null hypothesis at 3.5σ. Beyond estimates of individual shape amplitudes, we also present model-independent reconstructions and analyses of the Planck CMB bispectrum. Our final constraint on the local primordial trispectrum shape is $g_{NL}^{local}$ = (−5.8 ± 6.5) × 10$^4$ (68% CL, statistical), while constraints for other trispectrum shapes are also determined. Exploiting the tight limits on various bispectrum and trispectrum shapes, we constrain the parameter space of different early-Universe scenarios that generate primordial NG, including general single-field models of inflation, multi-field models (e.g. curvaton models), models of inflation with axion fields producing parity-violation bispectra in the tensor sector, and inflationary models involving vector-like fields with directionally-dependent bispectra. Our results provide a high-precision test for structure-formation scenarios, showing complete agreement with the basic picture of the ΛCDM cosmology regarding the statistics of the initial conditions, with cosmic structures arising from adiabatic, passive, Gaussian, and primordial seed perturbations.
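For reference, the local amplitude quoted above refers to the standard parametrization of local-type primordial non-Gaussianity (standard convention, not restated from the paper):

```latex
% Local-type non-Gaussianity: Bardeen potential built from a Gaussian field
\Phi(\mathbf{x}) = \phi(\mathbf{x})
  + f_{\mathrm{NL}}^{\mathrm{local}}
    \left[ \phi^{2}(\mathbf{x}) - \langle \phi^{2} \rangle \right]
```

Here φ is a Gaussian random field; the equilateral and orthogonal amplitudes correspond to different bispectrum shape templates rather than different expansions of Φ.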

Journal ArticleDOI
T. Aoyama1, Nils Asmussen2, M. Benayoun3, Johan Bijnens4  +146 moreInstitutions (64)
TL;DR: The current status of the Standard Model calculation of the anomalous magnetic moment of the muon has been reviewed in this paper, where the authors present a detailed account of recent efforts to improve the calculation of these two contributions with either a data-driven, dispersive approach, or a first-principle, lattice-QCD approach.
Abstract: We review the present status of the Standard Model calculation of the anomalous magnetic moment of the muon. This is performed in a perturbative expansion in the fine-structure constant $\alpha$ and is broken down into pure QED, electroweak, and hadronic contributions. The pure QED contribution is by far the largest and has been evaluated up to and including $\mathcal{O}(\alpha^5)$ with negligible numerical uncertainty. The electroweak contribution is suppressed by $(m_\mu/M_W)^2$ and only shows up at the level of the seventh significant digit. It has been evaluated up to two loops and is known to better than one percent. Hadronic contributions are the most difficult to calculate and are responsible for almost all of the theoretical uncertainty. The leading hadronic contribution appears at $\mathcal{O}(\alpha^2)$ and is due to hadronic vacuum polarization, whereas at $\mathcal{O}(\alpha^3)$ the hadronic light-by-light scattering contribution appears. Given the low characteristic scale of this observable, these contributions have to be calculated with nonperturbative methods, in particular, dispersion relations and the lattice approach to QCD. The largest part of this review is dedicated to a detailed account of recent efforts to improve the calculation of these two contributions with either a data-driven, dispersive approach, or a first-principle, lattice-QCD approach. The final result reads $a_\mu^\text{SM}=116\,591\,810(43)\times 10^{-11}$ and is smaller than the Brookhaven measurement by 3.7$\sigma$. The experimental uncertainty will soon be reduced by up to a factor four by the new experiment currently running at Fermilab, and also by the future J-PARC experiment. This and the prospects to further reduce the theoretical uncertainty in the near future-which are also discussed here-make this quantity one of the most promising places to look for evidence of new physics.
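The quoted 3.7σ discrepancy follows directly from the stated theory value and the Brookhaven measurement; the arithmetic below is ours, with the E821 number supplied as the widely cited published value:

```python
# Reproducing the quoted 3.7-sigma theory-vs-experiment discrepancy.
a_mu_sm,  err_sm  = 116_591_810e-11, 43e-11  # Standard Model value quoted above
a_mu_exp, err_exp = 116_592_089e-11, 63e-11  # published BNL E821 measurement

delta = a_mu_exp - a_mu_sm
sigma = (err_sm**2 + err_exp**2) ** 0.5
print(f"delta = {delta:.2e}, significance = {delta / sigma:.1f} sigma")  # ~3.7
```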

Journal ArticleDOI
TL;DR: Reversible-deactivation radical polymerization (RDRP) is one of the most widely used techniques in polymer synthesis, but it has not yet been widely used in the field of biomedical applications.

Journal ArticleDOI
06 Nov 2020-Science
TL;DR: It is shown that even if fossil fuel emissions were eliminated immediately, emissions from the global food system alone would make it impossible to limit warming to 1.5°C and difficult even to realize the 2°C target.
Abstract: The Paris Agreement’s goal of limiting the increase in global temperature to 1.5° or 2°C above preindustrial levels requires rapid reductions in greenhouse gas emissions. Although reducing emissions from fossil fuels is essential for meeting this goal, other sources of emissions may also preclude its attainment. We show that even if fossil fuel emissions were immediately halted, current trends in global food systems would prevent the achievement of the 1.5°C target and, by the end of the century, threaten the achievement of the 2°C target. Meeting the 1.5°C target requires rapid and ambitious changes to food systems as well as to all nonfood sectors. The 2°C target could be achieved with less-ambitious changes to food systems, but only if fossil fuel and other nonfood emissions are eliminated soon.

Journal ArticleDOI
TL;DR: In this article, a plasmonic photocatalyst consisting of a Cu nanoparticle "antenna" with single-Ru atomic "reactor" sites on the nanoparticle surface was proposed for low-temperature, light-driven methane dry reforming.
Abstract: Syngas, an extremely important chemical feedstock composed of carbon monoxide and hydrogen, can be generated through methane (CH4) dry reforming with CO2. However, traditional thermocatalytic processes require high temperatures and suffer from coke-induced instability. Here, we report a plasmonic photocatalyst consisting of a Cu nanoparticle ‘antenna’ with single-Ru atomic ‘reactor’ sites on the nanoparticle surface, ideal for low-temperature, light-driven methane dry reforming. This catalyst provides high light energy efficiency when illuminated at room temperature. In contrast to thermocatalysis, long-term stability (50 h) and high selectivity (>99%) were achieved in photocatalysis. We propose that light-excited hot carriers, together with single-atom active sites, cause the observed performance. Quantum mechanical modelling suggests that single-atom doping of Ru on the Cu(111) surface, coupled with excited-state activation, results in a substantial reduction in the barrier for CH4 activation. This photocatalyst design could be relevant for future energy-efficient industrial processes. Syngas is a mixture of CO and H2 that can be converted into a variety of fuels. Syngas can be produced thermocatalytically from CH4 and CO2, but this requires high temperatures and coke formation can be a problem. Here the authors demonstrate lower temperature, light-driven production of syngas using a coke-resistant plasmonic photocatalyst.
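For reference, the overall dry-reforming reaction (standard stoichiometry and standard-state enthalpy, not specific to this paper) is strongly endothermic, which is why purely thermocatalytic routes demand high temperatures:

```latex
% Methane dry reforming (standard stoichiometry):
\mathrm{CH_4 + CO_2 \longrightarrow 2\,CO + 2\,H_2},
\qquad \Delta H^{\circ}_{298} \approx +247~\mathrm{kJ\,mol^{-1}}
```

The plasmonic route described above supplies the activation photochemically, via hot carriers at the single-Ru sites, rather than thermally.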

Posted Content
TL;DR: StereoSet, a large-scale natural English dataset to measure stereotypical biases in four domains: gender, profession, race, and religion, is presented, and it is shown that popular models like BERT, GPT-2, RoBERTa, and XLNet exhibit strong stereotypical biases.
Abstract: A stereotype is an over-generalized belief about a particular group of people, e.g., Asians are good at math or Asians are bad drivers. Such beliefs (biases) are known to hurt target groups. Since pretrained language models are trained on large real world data, they are known to capture stereotypical biases. In order to assess the adverse effects of these models, it is important to quantify the bias captured in them. Existing literature on quantifying bias evaluates pretrained language models on a small set of artificially constructed bias-assessing sentences. We present StereoSet, a large-scale natural dataset in English to measure stereotypical biases in four domains: gender, profession, race, and religion. We evaluate popular models like BERT, GPT-2, RoBERTa, and XLNet on our dataset and show that these models exhibit strong stereotypical biases. We also present a leaderboard with a hidden test set to track the bias of future language models at this https URL
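A hedged sketch of the kind of comparison StereoSet formalizes: score a stereotypical and an anti-stereotypical completion under a masked language model and see which the model prefers. The pseudo-log-likelihood scoring below is a common simplification, not the paper's exact Context Association Test metric, and the example sentences are our own:

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

def pseudo_log_likelihood(sentence: str) -> float:
    """Sum log-probs of each token with that token masked in turn."""
    ids = tok(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    for i in range(1, len(ids) - 1):  # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tok.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total

# A higher score for the first sentence would indicate the kind of
# stereotypical preference the benchmark quantifies (toy example).
for s in ("The math professor was a man.", "The math professor was a woman."):
    print(f"{pseudo_log_likelihood(s):9.2f}  {s}")
```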

Journal ArticleDOI
30 Oct 2020-Carbon
TL;DR: In this paper, a review of recent achievements in manufacturing EM microwave absorption materials is presented, particularly focusing on the unique and key factors in the design and control of structures and components, and current challenges and prospects for future development in this rapidly blossoming field are discussed.

Journal ArticleDOI
TL;DR: Estimates of the status of fish stocks from all available scientific assessments are compiled, showing that, on average, fish stocks are increasing where they are assessed; where fisheries management is less intense, stock status and trends are worse.
Abstract: Marine fish stocks are an important part of the world food system and are particularly important for many of the poorest people of the world. Most existing analyses suggest overfishing is increasing, and there is widespread concern that fish stocks are decreasing throughout most of the world. We assembled trends in abundance and harvest rate of stocks that are scientifically assessed, constituting half of the reported global marine fish catch. For these stocks, on average, abundance is increasing and is at proposed target levels. Compared with regions that are intensively managed, regions with less-developed fisheries management have, on average, 3-fold greater harvest rates and half the abundance as assessed stocks. Available evidence suggests that the regions without assessments of abundance have little fisheries management, and stocks are in poor shape. Increased application of area-appropriate fisheries science recommendations and management tools are still needed for sustaining fisheries in places where they are lacking.

Journal ArticleDOI
19 Aug 2020-Nature
TL;DR: Modelled supply curves show that, with policy reform and technological innovation, the production of food from the sea may increase sustainably, perhaps supplying 25% of the increase in demand for meat products by 2050.
Abstract: Global food demand is rising, and serious questions remain about whether supply can increase sustainably1. Land-based expansion is possible but may exacerbate climate change and biodiversity loss, and compromise the delivery of other ecosystem services2–6. As food from the sea represents only 17% of the current production of edible meat, we ask how much food we can expect the ocean to sustainably produce by 2050. Here we examine the main food-producing sectors in the ocean—wild fisheries, finfish mariculture and bivalve mariculture—to estimate ‘sustainable supply curves’ that account for ecological, economic, regulatory and technological constraints. We overlay these supply curves with demand scenarios to estimate future seafood production. We find that under our estimated demand shifts and supply scenarios (which account for policy reform and technology improvements), edible food from the sea could increase by 21–44 million tonnes by 2050, a 36–74% increase compared to current yields. This represents 12–25% of the estimated increase in all meat needed to feed 9.8 billion people by 2050. Increases in all three sectors are likely, but are most pronounced for mariculture. Whether these production potentials are realized sustainably will depend on factors such as policy reforms, technological innovation and the extent of future shifts in demand. Modelled supply curves show that, with policy reform and technological innovation, the production of food from the sea may increase sustainably, perhaps supplying 25% of the increase in demand for meat products by 2050.

Journal ArticleDOI
TL;DR: It is shown that social distancing following US state-level emergency declarations substantially varies by income, and this general pattern holds across income quantiles, data sources, and mobility measures.
Abstract: In the absence of a vaccine, social distancing measures are one of the primary tools to reduce the transmission of the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) virus, which causes coronavirus disease 2019 (COVID-19). We show that social distancing following US state-level emergency declarations substantially varies by income. Using mobility measures derived from mobile device location pings, we find that wealthier areas decreased mobility significantly more than poorer areas, and this general pattern holds across income quantiles, data sources, and mobility measures. Using an event study design focusing on behavior subsequent to state emergency orders, we document a reversal in the ordering of social distancing by income: Wealthy areas went from most mobile before the pandemic to least mobile, while, for multiple measures, the poorest areas went from least mobile to most. Previous research has shown that lower income communities have higher levels of preexisting health conditions and lower access to healthcare. Combining this with our core finding-that lower income communities exhibit less social distancing-suggests a double burden of the COVID-19 pandemic with stark distributional implications.

Journal ArticleDOI
TL;DR: In this review, a detailed snapshot of current progress in quantum algorithms for ground-state, dynamics, and thermal-state simulation is taken and their strengths and weaknesses for future developments are analyzed.
Abstract: As we begin to reach the limits of classical computing, quantum computing has emerged as a technology that has captured the imagination of the scientific world. While for many years, the ability to execute quantum algorithms was only a theoretical possibility, recent advances in hardware mean that quantum computing devices now exist that can carry out quantum computation on a limited scale. Thus, it is now a real possibility, and of central importance at this time, to assess the potential impact of quantum computers on real problems of interest. One of the earliest and most compelling applications for quantum computers is Feynman's idea of simulating quantum systems with many degrees of freedom. Such systems are found across chemistry, physics, and materials science. The particular way in which quantum computing extends classical computing means that one cannot expect arbitrary simulations to be sped up by a quantum computer, thus one must carefully identify areas where quantum advantage may be achieved. In this review, we briefly describe central problems in chemistry and materials science, in areas of electronic structure, quantum statistical mechanics, and quantum dynamics that are of potential interest for solution on a quantum computer. We then take a detailed snapshot of current progress in quantum algorithms for ground-state, dynamics, and thermal-state simulation and analyze their strengths and weaknesses for future developments.
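As a minimal numerical illustration of the dynamics-simulation ideas discussed above (ours, not from the review), first-order Trotterization approximates exp(-iHt) by alternating evolutions under the non-commuting terms of H, with the error shrinking as the number of steps grows:

```python
import numpy as np
from scipy.linalg import expm

# First-order Trotterization of H = X(x)X + Z(x)I + I(x)Z on two qubits
# (our toy example), compared against the exact propagator.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2, dtype=complex)

A = np.kron(X, X)                  # coupling term
B = np.kron(Z, I) + np.kron(I, Z)  # single-qubit field terms
H = A + B

t, n = 1.0, 50                     # total time and Trotter step count
U_exact = expm(-1j * t * H)
U_step = expm(-1j * (t / n) * A) @ expm(-1j * (t / n) * B)
U_trotter = np.linalg.matrix_power(U_step, n)

err = np.linalg.norm(U_exact - U_trotter, 2)  # spectral-norm error
print(f"first-order Trotter error at n = {n}: {err:.2e}")  # shrinks ~ 1/n
```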

Journal ArticleDOI
TL;DR: KV3Sb5 shows enhanced skew scattering that scales quadratically, not linearly, with the longitudinal conductivity, possibly arising from the combination of highly conductive Dirac quasiparticles with a frustrated magnetic sublattice, which allows the possibility of reaching an anomalous Hall angle of 90° in metals.
Abstract: The anomalous Hall effect (AHE) is one of the most fundamental phenomena in physics. In the highly conductive regime, ferromagnetic metals have been the focus of past research. Here, we report a giant extrinsic AHE in KV3Sb5, an exfoliable, highly conductive semimetal with Dirac quasiparticles and a vanadium Kagome net. Even without report of long range magnetic order, the anomalous Hall conductivity reaches 15,507 Ω-1 cm-1 with an anomalous Hall ratio of ≈ 1.8%; an order of magnitude larger than Fe. Defying theoretical expectations, KV3Sb5 shows enhanced skew scattering that scales quadratically, not linearly, with the longitudinal conductivity, possibly arising from the combination of highly conductive Dirac quasiparticles with a frustrated magnetic sublattice. This allows the possibility of reaching an anomalous Hall angle of 90° in metals. This observation raises fundamental questions about AHEs and opens new frontiers for AHE and spin Hall effect exploration, particularly in metallic frustrated magnets.
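For reference, the quantities quoted above follow standard definitions (not restated from the paper): the Hall angle is fixed by the ratio of the anomalous Hall conductivity to the longitudinal conductivity,

```latex
\tan\theta_{H} = \frac{\sigma^{A}_{xy}}{\sigma_{xx}},
\qquad
\sigma^{A}_{xy} \propto \sigma_{xx}^{2} \ \text{(reported here)}
\quad \text{vs.} \quad
\sigma^{A}_{xy} \propto \sigma_{xx} \ \text{(conventional skew scattering)}
```

so a quadratic rather than linear scaling lets the Hall angle itself grow with sample conductivity, which is what makes the 90° limit conceivable.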

Journal ArticleDOI
Abstract: Author(s): Bivins, Aaron; North, Devin; Ahmad, Arslan; Ahmed, Warish; Alm, Eric; Been, Frederic; Bhattacharya, Prosun; Bijlsma, Lubertus; Boehm, Alexandria B; Brown, Joe; Buttiglieri, Gianluigi; Calabro, Vincenza; Carducci, Annalaura; Castiglioni, Sara; Cetecioglu Gurol, Zeynep; Chakraborty, Sudip; Costa, Federico; Curcio, Stefano; de Los Reyes, Francis L; Delgado Vela, Jeseth; Farkas, Kata; Fernandez-Casi, Xavier; Gerba, Charles; Gerrity, Daniel; Girones, Rosina; Gonzalez, Raul; Haramoto, Eiji; Harris, Angela; Holden, Patricia A; Islam, Md Tahmidul; Jones, Davey L; Kasprzyk-Hordern, Barbara; Kitajima, Masaaki; Kotlarz, Nadine; Kumar, Manish; Kuroda, Keisuke; La Rosa, Giuseppina; Malpei, Francesca; Mautus, Mariana; McLellan, Sandra L; Medema, Gertjan; Meschke, John Scott; Mueller, Jochen; Newton, Ryan J; Nilsson, David; Noble, Rachel T; van Nuijs, Alexander; Peccia, Jordan; Perkins, T Alex; Pickering, Amy J; Rose, Joan; Sanchez, Gloria; Smith, Adam; Stadler, Lauren; Stauber, Christine; Thomas, Kevin; van der Voorn, Tom; Wigginton, Krista; Zhu, Kevin; Bibby, Kyle

Journal ArticleDOI
TL;DR: In this article, the authors propose that soil carbon persistence can be understood through the lens of decomposers as a result of functional complexity derived from the interplay between spatial and temporal variation of molecular diversity and composition, which suggests soil management should be based on constant care rather than one-time action to lock away carbon in soils.
Abstract: Soil organic carbon management has the potential to aid climate change mitigation through drawdown of atmospheric carbon dioxide. To be effective, such management must account for processes influencing carbon storage and re-emission at different space and time scales. Achieving this requires a conceptual advance in our understanding to link carbon dynamics from the scales at which processes occur to the scales at which decisions are made. Here, we propose that soil carbon persistence can be understood through the lens of decomposers as a result of functional complexity derived from the interplay between spatial and temporal variation of molecular diversity and composition. For example, co-location alone can determine whether a molecule is decomposed, with rapid changes in moisture leading to transport of organic matter and constraining the fitness of the microbial community, while greater molecular diversity may increase the metabolic demand of, and thus potentially limit, decomposition. This conceptual shift accounts for emergent behaviour of the microbial community and would enable soil carbon changes to be predicted without invoking recalcitrant carbon forms that have not been observed experimentally. Functional complexity as a driver of soil carbon persistence suggests soil management should be based on constant care rather than one-time action to lock away carbon in soils. Dynamic interactions between chemical and biological controls govern the stability of soil organic carbon and drive complex, emergent patterns in soil carbon persistence.

Journal ArticleDOI
TL;DR: In this paper, the authors propose a theory for the area-law to volume-law entanglement transition in many-body systems that undergo both random unitary evolutions and projective measurements.
Abstract: A new class of quantum entanglement transitions separating phases with different entanglement entropy scaling has been observed in recent numerical studies. Despite the numerical efforts, an analytical understanding of such transitions has remained elusive. Here, the authors propose a theory for the area-law to volume-law entanglement transition in many-body systems that undergo both random unitary evolutions and projective measurements. Using the replica method, the authors map analytically this entanglement transition to an ordering transition in a classical statistical mechanics model. They derive the general entanglement scaling properties at the transition and show a solvable limit where this transition can be mapped onto two-dimensional percolation.
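For reference, the two phases separated by this transition are defined by standard entanglement-entropy scaling forms for a subsystem A of linear size L in d spatial dimensions:

```latex
% Standard scaling forms (definitions, not results from the paper):
S_A \sim L^{d-1} \quad \text{(area law, measurement-dominated phase)}
\qquad
S_A \sim L^{d} \quad \text{(volume law, unitary-dominated phase)}
```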