
Showing papers by "Massachusetts Institute of Technology" published in 2021


Journal ArticleDOI
TL;DR: Advances in nanoparticle design that overcome heterogeneous barriers to delivery are discussed; the authors argue that intelligent nanoparticle design can improve efficacy in general delivery applications while enabling tailored designs for precision applications, ultimately improving patient outcomes overall.
Abstract: In recent years, the development of nanoparticles has expanded into a broad range of clinical applications. Nanoparticles have been developed to overcome the limitations of free therapeutics and navigate biological barriers - systemic, microenvironmental and cellular - that are heterogeneous across patient populations and diseases. Overcoming this patient heterogeneity has also been accomplished through precision therapeutics, in which personalized interventions have enhanced therapeutic efficacy. However, nanoparticle development continues to focus on optimizing delivery platforms with a one-size-fits-all solution. As lipid-based, polymeric and inorganic nanoparticles are engineered in increasingly specified ways, they can begin to be optimized for drug delivery in a more personalized manner, entering the era of precision medicine. In this Review, we discuss advanced nanoparticle designs utilized in both non-personalized and precision applications that could be applied to improve precision therapies. We focus on advances in nanoparticle design that overcome heterogeneous barriers to delivery, arguing that intelligent nanoparticle design can improve efficacy in general delivery applications while enabling tailored designs for precision applications, thereby ultimately improving patient outcome overall.

2,179 citations


Journal ArticleDOI
23 Jun 2021
TL;DR: In this article, the authors describe the state-of-the-art in the field of federated learning from the perspective of distributed optimization, cryptography, security, differential privacy, fairness, compressed sensing, systems, information theory, and statistics.
Abstract: The term Federated Learning was coined as recently as 2016 to describe a machine learning setting where multiple entities collaborate in solving a machine learning problem, under the coordination of a central server or service provider. Each client’s raw data is stored locally and not exchanged or transferred; instead, focused updates intended for immediate aggregation are used to achieve the learning objective. Since then, the topic has gathered much interest across many different disciplines, along with the realization that solving many of these interdisciplinary problems likely requires not just machine learning but also techniques from distributed optimization, cryptography, security, differential privacy, fairness, compressed sensing, systems, information theory, statistics, and more. This monograph has contributions from leading experts across the disciplines, who describe the latest state of the art from their perspective. These contributions have been carefully curated into a comprehensive treatment that enables the reader to understand the work that has been done and get pointers to where effort is required to solve many of the problems before Federated Learning can become a reality in practical systems. Researchers working in the area of distributed systems will find this monograph an enlightening read that may inspire them to work on the many challenging issues that are outlined. This monograph will get the reader up to speed quickly and easily on what is likely to become an increasingly important topic: Federated Learning.
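
The coordination pattern described above (clients compute focused updates on locally held data; a central server aggregates them, and raw data never leaves the client) is commonly instantiated as federated averaging. Below is a minimal sketch under that interpretation, using a simple linear model and synthetic client data; all function names, parameters, and data are illustrative assumptions, not code from the monograph.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's focused update: a few gradient steps on its local data only."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # least-squares gradient
        w -= lr * grad
    return w

def federated_averaging(clients, rounds=20, dim=3):
    """Server coordinates training; only model updates are exchanged."""
    w_global = np.zeros(dim)
    for _ in range(rounds):
        sizes, updates = [], []
        for X, y in clients:                     # each client trains locally
            updates.append(local_update(w_global, X, y))
            sizes.append(len(y))
        # immediate aggregation, weighted by local dataset size
        w_global = np.average(updates, axis=0, weights=sizes)
    return w_global

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([1.0, -2.0, 0.5])
    clients = []
    for _ in range(4):                           # four clients with private data
        X = rng.normal(size=(50, 3))
        y = X @ true_w + 0.1 * rng.normal(size=50)
        clients.append((X, y))
    print(federated_averaging(clients))          # approaches true_w
```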

2,144 citations


Journal ArticleDOI
24 Feb 2021-Nature
TL;DR: In this paper, an electron transport layer with an ideal film coverage, thickness and composition was developed by tuning the chemical bath deposition of tin dioxide (SnO2) to improve the performance of metal halide perovskite solar cells.
Abstract: Metal halide perovskite solar cells (PSCs) are an emerging photovoltaic technology with the potential to disrupt the mature silicon solar cell market. Great improvements in device performance over the past few years, thanks to the development of fabrication protocols1-3, chemical compositions4,5 and phase stabilization methods6-10, have made PSCs one of the most efficient and low-cost solution-processable photovoltaic technologies. However, the light-harvesting performance of these devices is still limited by excessive charge carrier recombination. Despite much effort, the performance of the best-performing PSCs is capped by relatively low fill factors and high open-circuit voltage deficits (the radiative open-circuit voltage limit minus the open-circuit voltage)11. Improvements in charge carrier management, which is closely tied to the fill factor and the open-circuit voltage, thus provide a path towards increasing the device performance of PSCs, and reaching their theoretical efficiency limit12. Here we report a holistic approach to improving the performance of PSCs through enhanced charge carrier management. First, we develop an electron transport layer with an ideal film coverage, thickness and composition by tuning the chemical bath deposition of tin dioxide (SnO2). Second, we decouple the passivation strategy between the bulk and the interface, leading to improved properties, while minimizing the bandgap penalty. In forward bias, our devices exhibit an electroluminescence external quantum efficiency of up to 17.2 per cent and an electroluminescence energy conversion efficiency of up to 21.6 per cent. As solar cells, they achieve a certified power conversion efficiency of 25.2 per cent, corresponding to 80.5 per cent of the thermodynamic limit of its bandgap.

1,557 citations


Journal ArticleDOI
01 Jun 2021
TL;DR: Some of the prevailing trends in embedding physics into machine learning are reviewed, some of the current capabilities and limitations are presented, and diverse applications of physics-informed learning for both forward and inverse problems, including discovering hidden physics and tackling high-dimensional problems, are discussed.
Abstract: Despite great progress in simulating multiphysics problems using the numerical discretization of partial differential equations (PDEs), one still cannot seamlessly incorporate noisy data into existing algorithms, mesh generation remains complex, and high-dimensional problems governed by parameterized PDEs cannot be tackled. Moreover, solving inverse problems with hidden physics is often prohibitively expensive and requires different formulations and elaborate computer codes. Machine learning has emerged as a promising alternative, but training deep neural networks requires big data, not always available for scientific problems. Instead, such networks can be trained from additional information obtained by enforcing the physical laws (for example, at random points in the continuous space-time domain). Such physics-informed learning integrates (noisy) data and mathematical models, and implements them through neural networks or other kernel-based regression networks. Moreover, it may be possible to design specialized network architectures that automatically satisfy some of the physical invariants for better accuracy, faster training and improved generalization. Here, we review some of the prevailing trends in embedding physics into machine learning, present some of the current capabilities and limitations and discuss diverse applications of physics-informed learning both for forward and inverse problems, including discovering hidden physics and tackling high-dimensional problems. The rapidly developing field of physics-informed learning integrates data and mathematical models seamlessly, enabling accurate inference of realistic and high-dimensional multiphysics problems. This Review discusses the methodology and provides diverse examples and an outlook for further developments.
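
To make the collocation idea described above concrete, here is a minimal sketch that trains a tiny tanh network to satisfy a simple ODE purely from the physics residual enforced at random collocation points, using SciPy's generic optimizer in place of the deep-learning stack a real physics-informed network would use. The toy problem, network size, and all names are illustrative assumptions, not taken from the review.

```python
import numpy as np
from scipy.optimize import minimize

# Toy physics-informed problem: learn u(x) on [0, 2] satisfying u'(x) = -u(x),
# u(0) = 1, from the ODE residual alone, evaluated at random collocation points.
H = 10                                    # hidden units of a 1-hidden-layer tanh net
rng = np.random.default_rng(0)
x_col = rng.uniform(0.0, 2.0, size=40)    # random collocation points in the domain

def unpack(theta):
    w, b, v = theta[:H], theta[H:2*H], theta[2*H:3*H]
    return w, b, v, theta[3*H]

def u(theta, x):
    w, b, v, c = unpack(theta)
    return np.tanh(np.outer(x, w) + b) @ v + c

def du_dx(theta, x):
    # analytic derivative of the network output with respect to x
    w, b, v, _ = unpack(theta)
    t = np.tanh(np.outer(x, w) + b)
    return (1.0 - t**2) @ (v * w)

def loss(theta):
    residual = du_dx(theta, x_col) + u(theta, x_col)   # enforce u' + u = 0
    bc = u(theta, np.array([0.0]))[0] - 1.0            # enforce u(0) = 1
    return np.mean(residual**2) + bc**2

theta0 = 0.1 * rng.normal(size=3*H + 1)
result = minimize(loss, theta0, method="BFGS")

x_test = np.linspace(0.0, 2.0, 5)
print(np.c_[u(result.x, x_test), np.exp(-x_test)])     # compare with exact exp(-x)
```

The same loss structure (data term plus physics residual at scattered points) carries over when the small network is replaced by a deep network trained with automatic differentiation.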

1,114 citations


Journal ArticleDOI
25 Feb 2021-Nature
TL;DR: It is demonstrated that relatively low antibody titers are sufficient for protection against SARS-CoV-2 in rhesus macaques, and that cellular immune responses may also contribute to protection if antibody responses are suboptimal.
Abstract: Recent studies have reported the protective efficacy of both natural1 and vaccine-induced2–7 immunity against challenge with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) in rhesus macaques. However, the importance of humoral and cellular immunity for protection against infection with SARS-CoV-2 remains to be determined. Here we show that the adoptive transfer of purified IgG from convalescent rhesus macaques (Macaca mulatta) protects naive recipient macaques against challenge with SARS-CoV-2 in a dose-dependent fashion. Depletion of CD8+ T cells in convalescent macaques partially abrogated the protective efficacy of natural immunity against rechallenge with SARS-CoV-2, which suggests a role for cellular immunity in the context of waning or subprotective antibody titres. These data demonstrate that relatively low antibody titres are sufficient for protection against SARS-CoV-2 in rhesus macaques, and that cellular immune responses may contribute to protection if antibody responses are suboptimal. We also show that higher antibody titres are required for treatment of SARS-CoV-2 infection in macaques. These findings have implications for the development of SARS-CoV-2 vaccines and immune-based therapeutic agents. Adoptive transfer of purified IgG from convalescent macaques protects naive macaques against SARS-CoV-2 infection, and cellular immune responses contribute to protection against rechallenge with SARS-CoV-2.

881 citations


Journal ArticleDOI
TL;DR: A review of lipid nanoparticles for mRNA delivery can be found in this paper, where the authors discuss the design of nanoparticles and examine physiological barriers and possible administration routes for lipid nanoparticle-mRNA systems.
Abstract: Messenger RNA (mRNA) has emerged as a new category of therapeutic agent to prevent and treat various diseases. To function in vivo, mRNA requires safe, effective and stable delivery systems that protect the nucleic acid from degradation and that allow cellular uptake and mRNA release. Lipid nanoparticles have successfully entered the clinic for the delivery of mRNA; in particular, lipid nanoparticle-mRNA vaccines are now in clinical use against coronavirus disease 2019 (COVID-19), which marks a milestone for mRNA therapeutics. In this Review, we discuss the design of lipid nanoparticles for mRNA delivery and examine physiological barriers and possible administration routes for lipid nanoparticle-mRNA systems. We then consider key points for the clinical translation of lipid nanoparticle-mRNA formulations, including good manufacturing practice, stability, storage and safety, and highlight preclinical and clinical studies of lipid nanoparticle-mRNA therapeutics for infectious diseases, cancer and genetic disorders. Finally, we give an outlook to future possibilities and remaining challenges for this promising technology.

758 citations


Journal ArticleDOI
TL;DR: In this article, the authors developed an analytical framework to examine mask usage, synthesizing the relevant literature to inform multiple areas: population impact, transmission characteristics, source control, wearer protection, sociological considerations, and implementation considerations.
Abstract: The science around the use of masks by the public to impede COVID-19 transmission is advancing rapidly. In this narrative review, we develop an analytical framework to examine mask usage, synthesizing the relevant literature to inform multiple areas: population impact, transmission characteristics, source control, wearer protection, sociological considerations, and implementation considerations. A primary route of transmission of COVID-19 is via respiratory particles, and it is known to be transmissible from presymptomatic, paucisymptomatic, and asymptomatic individuals. Reducing disease spread requires two things: limiting contacts of infected individuals via physical distancing and other measures and reducing the transmission probability per contact. The preponderance of evidence indicates that mask wearing reduces transmissibility per contact by reducing transmission of infected respiratory particles in both laboratory and clinical contexts. Public mask wearing is most effective at reducing spread of the virus when compliance is high. Given the current shortages of medical masks, we recommend the adoption of public cloth mask wearing, as an effective form of source control, in conjunction with existing hygiene, distancing, and contact tracing strategies. Because many respiratory particles become smaller due to evaporation, we recommend increasing focus on a previously overlooked aspect of mask usage: mask wearing by infectious people ("source control") with benefits at the population level, rather than only mask wearing by susceptible people, such as health care workers, with focus on individual outcomes. We recommend that public officials and governments strongly encourage the use of widespread face masks in public, including the use of appropriate regulation.

679 citations


Journal ArticleDOI
TL;DR: A new deep neural network called DeepONet can learn various mathematical operators with small generalization error, including explicit operators such as integrals and fractional Laplacians, as well as implicit operators that represent deterministic and stochastic differential equations.
Abstract: It is widely known that neural networks (NNs) are universal approximators of continuous functions. However, a less known but powerful result is that a NN with a single hidden layer can accurately approximate any nonlinear continuous operator. This universal approximation theorem of operators is suggestive of the structure and potential of deep neural networks (DNNs) in learning continuous operators or complex systems from streams of scattered data. Here, we thus extend this theorem to DNNs. We design a new network with small generalization error, the deep operator network (DeepONet), which consists of a DNN for encoding the discrete input function space (branch net) and another DNN for encoding the domain of the output functions (trunk net). We demonstrate that DeepONet can learn various explicit operators, such as integrals and fractional Laplacians, as well as implicit operators that represent deterministic and stochastic differential equations. We study different formulations of the input function space and their effect on the generalization error for 16 diverse applications. Neural networks are known as universal approximators of continuous functions, but they can also approximate any mathematical operator (mapping a function to another function), which is an important capability for complex systems such as robotics control. A new deep neural network called DeepONet can learn various mathematical operators with small generalization error.
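
As a rough illustration of the branch/trunk structure described above, the sketch below shows an untrained forward pass in NumPy: the branch net encodes the input function sampled at fixed sensors, the trunk net encodes an output location, and the operator value is their dot product. Layer sizes and names are assumptions for illustration (the published architecture also includes a final bias term, omitted here), not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_init(sizes):
    """Random weights for a small fully connected network (illustration only)."""
    return [(0.5 * rng.normal(size=(m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp_apply(params, x):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

m, p = 50, 20                       # input-function sensors, embedding width
x_sensors = np.linspace(0.0, 1.0, m)

branch = mlp_init([m, 64, p])       # branch net: encodes the sampled input function
trunk  = mlp_init([1, 64, p])       # trunk net: encodes the output location y

def deeponet(u_sensors, y_points):
    """G(u)(y) ~ <branch(u), trunk(y)>: dot product of the two embeddings."""
    b = mlp_apply(branch, u_sensors[None, :])[0]   # shape (p,)
    t = mlp_apply(trunk, y_points[:, None])        # shape (n_y, p)
    return t @ b                                   # shape (n_y,)

# Example: feed a sampled input function and query the operator at a few locations.
u = np.sin(2 * np.pi * x_sensors)                  # discretized input function
y = np.linspace(0.0, 1.0, 5)
print(deeponet(u, y))   # untrained output; weights would be fit to operator data
```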

675 citations


Journal ArticleDOI
TL;DR: In this article, the authors propose an alternative estimator that is free of contamination, and illustrate the relative shortcomings of two-way fixed effects regressions with leads and lags through an empirical application.

570 citations


Journal Article
TL;DR: This work introduces benchmarks specifically designed for the offline setting, guided by key properties of datasets relevant to real-world applications of offline RL, and releases benchmark tasks and datasets with a comprehensive evaluation of existing algorithms and an evaluation protocol together with an open-source codebase.
Abstract: The offline reinforcement learning (RL) problem, also known as batch RL, refers to the setting where a policy must be learned from a static dataset, without additional online data collection. This setting is compelling as it potentially allows RL methods to take advantage of large, pre-collected datasets, much like how the rise of large datasets has fueled results in supervised learning in recent years. However, existing online RL benchmarks are not tailored towards the offline setting, making progress in offline RL difficult to measure. In this work, we introduce benchmarks specifically designed for the offline setting, guided by key properties of datasets relevant to real-world applications of offline RL. Examples of such properties include: datasets generated via hand-designed controllers and human demonstrators, multi-objective datasets where an agent can perform different tasks in the same environment, and datasets consisting of mixtures of policies. To facilitate research, we release our benchmark tasks and datasets with a comprehensive evaluation of existing algorithms and an evaluation protocol together with an open-source codebase. We hope that our benchmark will focus research effort on methods that drive improvements not just on simulated tasks, but ultimately on the kinds of real-world problems where offline RL will have the largest impact.
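
As a rough sketch of the offline setting itself (not the released benchmark code), the snippet below builds a static dataset of transitions in the common observations/actions/rewards layout and fits the simplest offline baseline, behaviour cloning, with no further environment interaction. All data, dimensions, and names are synthetic assumptions for illustration.

```python
import numpy as np

# Offline RL: the policy is learned purely from a pre-collected, static dataset
# of transitions -- there is no further interaction with the environment.
rng = np.random.default_rng(0)
N, obs_dim, act_dim = 5000, 4, 2
dataset = {
    "observations":      rng.normal(size=(N, obs_dim)),
    "actions":           rng.normal(size=(N, act_dim)),
    "rewards":           rng.normal(size=N),
    "next_observations": rng.normal(size=(N, obs_dim)),
    "terminals":         rng.random(N) < 0.01,
}

# Simplest offline baseline: behaviour cloning with a linear policy,
# fit by least squares to the logged state-action pairs.
X, A = dataset["observations"], dataset["actions"]
W, *_ = np.linalg.lstsq(X, A, rcond=None)

def policy(obs):
    return obs @ W

print(policy(dataset["observations"][:3]))   # actions the cloned policy would take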

563 citations


Journal ArticleDOI
TL;DR: Practical guidance is provided to researchers employing synthetic control methods; the advantages of the synthetic control framework as a research design are discussed, and the settings where synthetic controls provide reliable estimates and those where they may fail are described.
Abstract: Probably because of their interpretability and transparent nature, synthetic controls have become widely applied in empirical research in economics and the social sciences. This article aims to provide practical guidance to researchers employing synthetic control methods. The article starts with an overview and an introduction to synthetic control estimation. The main sections discuss the advantages of the synthetic control framework as a research design, and describe the settings where synthetic controls provide reliable estimates and those where they may fail. The article closes with a discussion of recent extensions, related methods, and avenues for future research.
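
As a rough illustration of the basic estimator underlying this framework, the sketch below fits nonnegative donor weights that sum to one so that the weighted donor units reproduce the treated unit's pre-treatment outcomes, then uses the weighted donors as the post-treatment counterfactual. The data, panel dimensions, and variable names are synthetic assumptions for illustration, not from the article.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal synthetic control sketch: weights are nonnegative and sum to one.
rng = np.random.default_rng(0)
T0, T1, J = 20, 10, 8                     # pre-periods, post-periods, donor units
donors = rng.normal(size=(T0 + T1, J)).cumsum(axis=0)
treated = donors[:, :3] @ np.array([0.5, 0.3, 0.2])   # toy treated unit
treated[T0:] += 2.0                                    # treatment effect after T0

def pretreatment_gap(w):
    """Squared distance between treated and synthetic outcomes before treatment."""
    return np.sum((treated[:T0] - donors[:T0] @ w) ** 2)

w0 = np.full(J, 1.0 / J)
res = minimize(
    pretreatment_gap, w0,
    bounds=[(0.0, 1.0)] * J,
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
    method="SLSQP",
)
w = res.x
effect = treated[T0:] - donors[T0:] @ w   # estimated effect in each post-period
print(np.round(w, 3), np.round(effect.mean(), 2))      # weights and average effect
```

The nonnegativity and sum-to-one constraints are what give the method the transparency noted in the abstract: the counterfactual is an explicit, interpretable weighted average of donor units.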

Journal ArticleDOI
27 Jul 2021-ACS Nano
TL;DR: A comprehensive review of metal-halide perovskite nanocrystals can be found in this article, where researchers having expertise in different fields (chemistry, physics, and device engineering) have joined together to provide a state-of-the-art overview and future prospects of metal-halide perovskite nanocrystal research.
Abstract: Metal-halide perovskites have rapidly emerged as one of the most promising materials of the 21st century, with many exciting properties and great potential for a broad range of applications, from photovoltaics to optoelectronics and photocatalysis. The ease with which metal-halide perovskites can be synthesized in the form of brightly luminescent colloidal nanocrystals, as well as their tunable and intriguing optical and electronic properties, has attracted researchers from different disciplines of science and technology. In the last few years, there has been significant progress in the shape-controlled synthesis of perovskite nanocrystals and in the understanding of their properties and applications. In this comprehensive review, researchers having expertise in different fields (chemistry, physics, and device engineering) of metal-halide perovskite nanocrystals have joined together to provide a state-of-the-art overview and future prospects of metal-halide perovskite nanocrystal research.

Journal ArticleDOI
Richard J. Abbott1, T. D. Abbott2, Sheelu Abraham3, Fausto Acernese4  +1428 moreInstitutions (155)
TL;DR: In this article, the population of 47 compact binary mergers detected with a false-alarm rate of 0.614 were dynamically assembled, and the authors found that the BBH rate likely increases with redshift, but not faster than the star formation rate.
Abstract: We report on the population of 47 compact binary mergers detected with a false-alarm rate of less than one per year in the second LIGO–Virgo Gravitational-Wave Transient Catalog. […] of binary black holes with non-vanishing effective spins |χeff| > 0.01 are dynamically assembled. Third, we estimate merger rates, finding RBBH = 23.9 (+14.3/−8.6) Gpc−3 yr−1 for binary black holes (BBHs) and RBNS = 320 (+490/−240) Gpc−3 yr−1 for binary neutron stars. We find that the BBH rate likely increases with redshift (85% credibility) but not faster than the star formation rate (86% credibility). Additionally, we examine recent exceptional events in the context of our population models, finding that the asymmetric masses of GW190412 and the high component masses of GW190521 are consistent with our models, but the low secondary mass of GW190814 makes it an outlier.

Journal ArticleDOI
TL;DR: The space charge mechanism revealed by in situ magnetometry can be generalized to a broad range of transition metal compounds for which a large electron density of states is accessible, and provides pivotal guidance for creating advanced energy storage systems.
Abstract: In lithium-ion batteries (LIBs), many promising electrodes that are based on transition metal oxides exhibit anomalously high storage capacities beyond their theoretical values. Although this phenomenon has been widely reported, the underlying physicochemical mechanism in such materials remains elusive and is still a matter of debate. In this work, we use in situ magnetometry to demonstrate the existence of strong surface capacitance on metal nanoparticles, and to show that a large number of spin-polarized electrons can be stored in the already-reduced metallic nanoparticles (that are formed during discharge at low potentials in transition metal oxide LIBs), which is consistent with a space charge mechanism. Through quantification of the surface capacitance by the variation in magnetism, we further show that this charge capacity of the surface is the dominant source of the extra capacity in the Fe3O4/Li model system, and that it also exists in CoO, NiO, FeF2 and Fe2N systems. The space charge mechanism revealed by in situ magnetometry can therefore be generalized to a broad range of transition metal compounds for which a large electron density of states is accessible, and provides pivotal guidance for creating advanced energy storage systems.

Journal ArticleDOI
TL;DR: TEASER++ as mentioned in this paper uses a truncated least squares (TLS) cost that makes the estimation insensitive to a large fraction of spurious correspondences and provides a general graph-theoretic framework to decouple scale, rotation and translation estimation, which allows solving in cascade for the three transformations.
Abstract: We propose the first fast and certifiable algorithm for the registration of two sets of three-dimensional (3-D) points in the presence of large amounts of outlier correspondences. A certifiable algorithm is one that attempts to solve an intractable optimization problem (e.g., robust estimation with outliers) and provides readily checkable conditions to verify if the returned solution is optimal (e.g., if the algorithm produced the most accurate estimate in the face of outliers) or bound its suboptimality or accuracy. Toward this goal, we first reformulate the registration problem using a truncated least squares (TLS) cost that makes the estimation insensitive to a large fraction of spurious correspondences. Then, we provide a general graph-theoretic framework to decouple scale, rotation, and translation estimation, which allows solving in cascade for the three transformations. Despite the fact that each subproblem (scale, rotation, and translation estimation) is still nonconvex and combinatorial in nature, we show that 1) TLS scale and (component-wise) translation estimation can be solved in polynomial time via an adaptive voting scheme, 2) TLS rotation estimation can be relaxed to a semidefinite program (SDP) and the relaxation is tight, even in the presence of extreme outlier rates, and 3) the graph-theoretic framework allows drastic pruning of outliers by finding the maximum clique. We name the resulting algorithm TEASER (Truncated least squares Estimation And SEmidefinite Relaxation). While solving large SDP relaxations is typically slow, we develop a second fast and certifiable algorithm, named TEASER++, that uses graduated nonconvexity to solve the rotation subproblem and leverages Douglas-Rachford Splitting to efficiently certify global optimality. For both algorithms, we provide theoretical bounds on the estimation errors, which are the first of their kind for robust registration problems. Moreover, we test their performance on standard benchmarks, object detection datasets, and the 3DMatch scan matching dataset, and show that 1) both algorithms dominate the state-of-the-art (e.g., RANSAC, branch-&-bound, heuristics) and are robust to more than 99% outliers when the scale is known, 2) TEASER++ can run in milliseconds and it is currently the fastest robust registration algorithm, and 3) TEASER++ is so robust it can also solve problems without correspondences (e.g., hypothesizing all-to-all correspondences), where it largely outperforms ICP and it is more accurate than Go-ICP while being orders of magnitude faster. We release a fast open-source C++ implementation of TEASER++.
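
To illustrate the adaptive voting idea used for the TLS scale (and component-wise translation) subproblems, here is a simplified one-dimensional sketch: the measurement confidence intervals partition the line into segments, the inlier set is fixed within each segment, and the best segment-wise estimate under the truncated cost is kept. This is not the authors' released TEASER++ implementation; the toy data, tolerances, and names are assumptions for illustration.

```python
import numpy as np

def tls_estimate_1d(measurements, noise_bounds):
    """
    Truncated-least-squares estimation of a scalar (e.g. the registration scale)
    by adaptive voting: enumerate the intervals defined by the measurement
    confidence intervals; inside each interval the inlier set is fixed, so the
    candidate estimate is the (clamped) inlier-weighted mean.
    """
    lo, hi = measurements - noise_bounds, measurements + noise_bounds
    endpoints = np.sort(np.concatenate([lo, hi]))
    best_cost, best_est = np.inf, None
    for a, b in zip(endpoints[:-1], endpoints[1:]):
        mid = 0.5 * (a + b)
        inliers = (lo <= mid) & (mid <= hi)        # measurements consistent here
        if not inliers.any():
            continue
        est = np.average(measurements[inliers],
                         weights=1.0 / noise_bounds[inliers] ** 2)
        est = np.clip(est, a, b)                   # minimizer restricted to segment
        residuals = (est - measurements[inliers]) / noise_bounds[inliers]
        cost = np.sum(residuals ** 2) + np.sum(~inliers)   # truncated LS cost
        if cost < best_cost:
            best_cost, best_est = cost, est
    return best_est

# Example: 70% of scale measurements near the true value 2.0, 30% gross outliers.
rng = np.random.default_rng(0)
s = np.concatenate([2.0 + 0.01 * rng.normal(size=70),
                    rng.uniform(0.5, 5.0, size=30)])
eps = np.full_like(s, 0.05)
print(tls_estimate_1d(s, eps))    # close to 2.0 despite 30% outliers
```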

Journal ArticleDOI
Toni Delorey1, Carly G. K. Ziegler, Graham Heimberg1, Rachelly Normand, Yiming Yang2, Yiming Yang1, Asa Segerstolpe1, Domenic Abbondanza1, Stephen J. Fleming1, Ayshwarya Subramanian1, Daniel T. Montoro1, Karthik A. Jagadeesh1, Kushal K. Dey2, Pritha Sen, Michal Slyper1, Yered Pita-Juárez, Devan Phillips1, Jana Biermann3, Zohar Bloom-Ackermann1, Nikolaos Barkas1, Andrea Ganna4, Andrea Ganna2, James Gomez1, Johannes C. Melms3, Igor Katsyv3, Erica Normandin1, Erica Normandin2, Pourya Naderi5, Pourya Naderi2, Yury Popov2, Yury Popov5, Siddharth S. Raju2, Siddharth S. Raju1, Sebastian Niezen5, Sebastian Niezen2, Linus T.-Y. Tsai, Katherine J. Siddle1, Katherine J. Siddle2, Malika Sud1, Victoria M. Tran1, Shamsudheen K. Vellarikkal1, Shamsudheen K. Vellarikkal6, Yiping Wang3, Liat Amir-Zilberstein1, Deepak Atri1, Deepak Atri6, Joseph M. Beechem7, Olga R. Brook5, Jonathan H. Chen1, Jonathan H. Chen2, Prajan Divakar7, Phylicia Dorceus1, Jesse M. Engreitz8, Jesse M. Engreitz1, Adam Essene5, Donna M. Fitzgerald2, Robin Fropf7, Steven Gazal9, Joshua Gould1, John Grzyb6, Tyler Harvey1, Jonathan L. Hecht2, Jonathan L. Hecht5, Tyler Hether7, Judit Jané-Valbuena1, Michael Leney-Greene1, Hui Ma2, Hui Ma1, Cristin McCabe1, Daniel E. McLoughlin2, Eric M. Miller7, Christoph Muus1, Christoph Muus2, Mari Niemi4, Robert F. Padera6, Robert F. Padera10, Robert F. Padera2, Liuliu Pan7, Deepti Pant5, Carmel Pe’er1, Jenna Pfiffner-Borges1, Christopher J. Pinto2, Jacob Plaisted6, Jason Reeves7, Marty Ross7, Melissa Rudy1, Erroll H. Rueckert7, Michelle Siciliano6, Alexander Sturm1, Ellen Todres1, Avinash Waghray2, Sarah Warren7, Shuting Zhang1, Daniel R. Zollinger7, Lisa A. Cosimi6, Rajat M. Gupta6, Rajat M. Gupta1, Nir Hacohen2, Nir Hacohen1, Hanina Hibshoosh3, Winston Hide, Alkes L. Price2, Jayaraj Rajagopal2, Purushothama Rao Tata11, Stefan Riedel2, Stefan Riedel5, Gyongyi Szabo2, Gyongyi Szabo5, Gyongyi Szabo1, Timothy L. Tickle1, Patrick T. Ellinor1, Deborah T. Hung2, Deborah T. Hung1, Pardis C. Sabeti, Richard M. Novak12, Robert S. Rogers2, Robert S. Rogers5, Donald E. Ingber13, Donald E. Ingber2, Donald E. Ingber12, Z. Gordon Jiang2, Z. Gordon Jiang5, Dejan Juric2, Mehrtash Babadi1, Samouil L. Farhi1, Benjamin Izar, James R. Stone2, Ioannis S. Vlachos, Isaac H. Solomon6, Orr Ashenberg1, Caroline B. M. Porter1, Bo Li1, Bo Li2, Alex K. Shalek, Alexandra-Chloé Villani, Orit Rozenblatt-Rosen1, Orit Rozenblatt-Rosen14, Aviv Regev 
29 Apr 2021-Nature
TL;DR: In this article, single-cell analysis of lung, heart, kidney and liver autopsy samples shows the molecular and cellular changes and immune response resulting from severe SARS-CoV-2 infection.
Abstract: COVID-19, which is caused by SARS-CoV-2, can result in acute respiratory distress syndrome and multiple organ failure1–4, but little is known about its pathophysiology. Here we generated single-cell atlases of 24 lung, 16 kidney, 16 liver and 19 heart autopsy tissue samples and spatial atlases of 14 lung samples from donors who died of COVID-19. Integrated computational analysis uncovered substantial remodelling in the lung epithelial, immune and stromal compartments, with evidence of multiple paths of failed tissue regeneration, including defective alveolar type 2 differentiation and expansion of fibroblasts and putative TP63+ intrapulmonary basal-like progenitor cells. Viral RNAs were enriched in mononuclear phagocytic and endothelial lung cells, which induced specific host programs. Spatial analysis in lung distinguished inflammatory host responses in lung regions with and without viral RNA. Analysis of the other tissue atlases showed transcriptional alterations in multiple cell types in heart tissue from donors with COVID-19, and mapped cell types and genes implicated with disease severity based on COVID-19 genome-wide association studies. Our foundational dataset elucidates the biological effect of severe SARS-CoV-2 infection across the body, a key step towards new treatments. Single-cell analysis of lung, heart, kidney and liver autopsy samples shows the molecular and cellular changes and immune response resulting from severe COVID-19 infection.

Journal ArticleDOI
Richard J. Abbott1, T. D. Abbott2, Sheelu Abraham3, Fausto Acernese4  +1692 moreInstitutions (195)
TL;DR: In this article, the authors reported the observation of gravitational waves from two compact binary coalescences in LIGO's and Virgo's third observing run with properties consistent with neutron star-black hole (NSBH) binaries.
Abstract: We report the observation of gravitational waves from two compact binary coalescences in LIGO’s and Virgo’s third observing run with properties consistent with neutron star–black hole (NSBH) binaries. The two events are named GW200105_162426 and GW200115_042309, abbreviated as GW200105 and GW200115; the first was observed by LIGO Livingston and Virgo and the second by all three LIGO–Virgo detectors. The source of GW200105 has component masses 8.9 (+1.2/−1.5) M⊙ and 1.9 (+0.3/−0.2) M⊙, whereas the source of GW200115 has component masses 5.7 (+1.8/−2.1) M⊙ and 1.5 (+0.7/−0.3) M⊙ (all measurements quoted at the 90% credible level). The probability that the secondary’s mass is below the maximal mass of a neutron star is 89%–96% and 87%–98%, respectively, for GW200105 and GW200115, with the ranges arising from different astrophysical assumptions. The source luminosity distances are 280 (+110/−110) Mpc and 300 (+150/−100) Mpc, respectively. The magnitude of the primary spin of GW200105 is less than 0.23 at the 90% credible level, and its orientation is unconstrained. For GW200115, the primary spin has a negative spin projection onto the orbital angular momentum at 88% probability. We are unable to constrain the spin or tidal deformation of the secondary component for either event. We infer an NSBH merger rate density of 45 (+75/−33) Gpc−3 yr−1 when assuming that GW200105 and GW200115 are representative of the NSBH population, or 130 (+112/−69) Gpc−3 yr−1 under the assumption of a broader distribution of component masses.

Journal ArticleDOI
TL;DR: A review of recent progress in carbon materials for supercapacitor electrodes is presented in this paper, where the characteristics and fabrication methods of these materials and their performance as capacitor electrodes are discussed.
Abstract: Increased energy consumption stimulates the development of various energy sources. As a result, the storage of these different types of energy becomes a key issue. Supercapacitors, as one important class of energy storage device, have gained much attention and found a wide range of applications by taking advantage of their small size, light weight, high power density and long cycle life. From this perspective, numerous studies, especially on electrode materials, have been reported, and great progress has been achieved in both the fundamental and applied aspects of supercapacitors. Herein, a review of recent progress in carbon materials for supercapacitor electrodes is presented. First, the two charge storage mechanisms of supercapacitors are briefly introduced. Then, research on carbon-based electrode materials for supercapacitors in recent years is summarized, including carbon-based materials of different dimensionalities and biomass-derived carbon materials. The characteristics and fabrication methods of these materials and their performance as capacitor electrodes are discussed. On the basis of these materials, many supercapacitor devices have been developed; therefore, in the third part, the supercapacitor devices based on these carbon materials are summarized. A brief overview of the two types of conventional supercapacitor, classified according to the charge storage mechanism, is compiled, including their development, their merits and drawbacks, and the principle of expanding the potential range. Additionally, hybrid ion capacitors, a fast-developing class of capacitor that offers a good compromise between batteries and supercapacitors, are also discussed. Finally, future prospects and challenges for carbon-based materials as supercapacitor electrodes are presented.

Journal ArticleDOI
11 Feb 2021-Nature
TL;DR: In this paper, the authors show that the superconducting phase is suppressed and bounded at the Van Hove singularities that partially surround the broken-symmetry phase, which is difficult to reconcile with weak-coupling Bardeen-Cooper-Schrieffer theory.
Abstract: Moire superlattices1,2 have recently emerged as a platform upon which correlated physics and superconductivity can be studied with unprecedented tunability3–6. Although correlated effects have been observed in several other moire systems7–17, magic-angle twisted bilayer graphene remains the only one in which robust superconductivity has been reproducibly measured4–6. Here we realize a moire superconductor in magic-angle twisted trilayer graphene (MATTG)18, which has better tunability of its electronic structure and superconducting properties than magic-angle twisted bilayer graphene. Measurements of the Hall effect and quantum oscillations as a function of density and electric field enable us to determine the tunable phase boundaries of the system in the normal metallic state. Zero-magnetic-field resistivity measurements reveal that the existence of superconductivity is intimately connected to the broken-symmetry phase that emerges from two carriers per moire unit cell. We find that the superconducting phase is suppressed and bounded at the Van Hove singularities that partially surround the broken-symmetry phase, which is difficult to reconcile with weak-coupling Bardeen–Cooper–Schrieffer theory. Moreover, the extensive in situ tunability of our system allows us to reach the ultrastrong-coupling regime, characterized by a Ginzburg–Landau coherence length that reaches the average inter-particle distance, and very large TBKT/TF values, in excess of 0.1 (where TBKT and TF are the Berezinskii–Kosterlitz–Thouless transition and Fermi temperatures, respectively). These observations suggest that MATTG can be electrically tuned close to the crossover to a two-dimensional Bose–Einstein condensate. Our results establish a family of tunable moire superconductors that have the potential to revolutionize our fundamental understanding of and the applications for strongly coupled superconductivity. Highly tunable moire superconductivity is observed in magic-angle twisted trilayer graphene, and observations suggest that this superconductor can be tuned close to the crossover to a two-dimensional Bose–Einstein condensate.

Journal ArticleDOI
TL;DR: The Q-Chem quantum chemistry program package as discussed by the authors provides a suite of tools for modeling core-level spectroscopy, methods for describing metastable resonances, and methods for computing vibronic spectra, the nuclear-electronic orbital method, and several different energy decomposition analysis techniques.
Abstract: This article summarizes technical advances contained in the fifth major release of the Q-Chem quantum chemistry program package, covering developments since 2015. A comprehensive library of exchange-correlation functionals, along with a suite of correlated many-body methods, continues to be a hallmark of the Q-Chem software. The many-body methods include novel variants of both coupled-cluster and configuration-interaction approaches along with methods based on the algebraic diagrammatic construction and variational reduced density-matrix methods. Methods highlighted in Q-Chem 5 include a suite of tools for modeling core-level spectroscopy, methods for describing metastable resonances, methods for computing vibronic spectra, the nuclear-electronic orbital method, and several different energy decomposition analysis techniques. High-performance capabilities including multithreaded parallelism and support for calculations on graphics processing units are described. Q-Chem boasts a community of well over 100 active academic developers, and the continuing evolution of the software is supported by an "open teamware" model and an increasingly modular design.

Journal ArticleDOI
12 May 2021-Nature
TL;DR: In this paper, the metal-induced gap states (MIGS) are suppressed and degenerate states in the transition metal dichalcogenides (TMDs) are spontaneously formed in contact with bismuth.
Abstract: Advanced beyond-silicon electronic technology requires both channel materials and also ultralow-resistance contacts to be discovered1,2. Atomically thin two-dimensional semiconductors have great potential for realizing high-performance electronic devices1,3. However, owing to metal-induced gap states (MIGS)4–7, energy barriers at the metal–semiconductor interface—which fundamentally lead to high contact resistance and poor current-delivery capability—have constrained the improvement of two-dimensional semiconductor transistors so far2,8,9. Here we report ohmic contact between semimetallic bismuth and semiconducting monolayer transition metal dichalcogenides (TMDs) where the MIGS are sufficiently suppressed and degenerate states in the TMD are spontaneously formed in contact with bismuth. Through this approach, we achieve zero Schottky barrier height, a contact resistance of 123 ohm micrometres and an on-state current density of 1,135 microamps per micrometre on monolayer MoS2; these two values are, to the best of our knowledge, the lowest and highest yet recorded, respectively. We also demonstrate that excellent ohmic contacts can be formed on various monolayer semiconductors, including MoS2, WS2 and WSe2. Our reported contact resistances are a substantial improvement for two-dimensional semiconductors, and approach the quantum limit. This technology unveils the potential of high-performance monolayer transistors that are on par with state-of-the-art three-dimensional semiconductors, enabling further device downscaling and extending Moore’s law. Electric contacts of semimetallic bismuth on monolayer semiconductors are shown to suppress metal-induced gap states and thus have very low contact resistance and a zero Schottky barrier height.

Journal ArticleDOI
17 Mar 2021-Nature
TL;DR: It is found that the veracity of headlines has little effect on sharing intentions, despite having a large effect on judgments of accuracy, and that subtly shifting attention to accuracy increases the quality of news that people subsequently share.
Abstract: In recent years, there has been a great deal of concern about the proliferation of false and misleading news on social media1–4. Academics and practitioners alike have asked why people share such misinformation, and sought solutions to reduce the sharing of misinformation5–7. Here, we attempt to address both of these questions. First, we find that the veracity of headlines has little effect on sharing intentions, despite having a large effect on judgments of accuracy. This dissociation suggests that sharing does not necessarily indicate belief. Nonetheless, most participants say it is important to share only accurate news. To shed light on this apparent contradiction, we carried out four survey experiments and a field experiment on Twitter; the results show that subtly shifting attention to accuracy increases the quality of news that people subsequently share. Together with additional computational analyses, these findings indicate that people often share misinformation because their attention is focused on factors other than accuracy—and therefore they fail to implement a strongly held preference for accurate sharing. Our results challenge the popular claim that people value partisanship over accuracy8,9, and provide evidence for scalable attention-based interventions that social media platforms could easily implement to counter misinformation online. Surveys and a field experiment with Twitter users show that prompting people to think about the accuracy of news sources increases the quality of the news that they share online.

Journal ArticleDOI
01 Jul 2021-Nature
TL;DR: In this paper, a programmable quantum simulator based on deterministically prepared two-dimensional arrays of neutral atoms, featuring strong interactions controlled by coherent atomic excitation into Rydberg states, is presented.
Abstract: Motivated by far-reaching applications ranging from quantum simulations of complex processes in physics and chemistry to quantum information processing1, a broad effort is currently underway to build large-scale programmable quantum systems. Such systems provide insights into strongly correlated quantum matter2–6, while at the same time enabling new methods for computation7–10 and metrology11. Here we demonstrate a programmable quantum simulator based on deterministically prepared two-dimensional arrays of neutral atoms, featuring strong interactions controlled by coherent atomic excitation into Rydberg states12. Using this approach, we realize a quantum spin model with tunable interactions for system sizes ranging from 64 to 256 qubits. We benchmark the system by characterizing high-fidelity antiferromagnetically ordered states and demonstrating quantum critical dynamics consistent with an Ising quantum phase transition in (2 + 1) dimensions13. We then create and study several new quantum phases that arise from the interplay between interactions and coherent laser excitation14, experimentally map the phase diagram and investigate the role of quantum fluctuations. Offering a new lens into the study of complex quantum matter, these observations pave the way for investigations of exotic quantum phases, non-equilibrium entanglement dynamics and hardware-efficient realization of quantum algorithms. A programmable quantum simulator with 256 qubits is created using neutral atoms in two-dimensional optical tweezer arrays, demonstrating a quantum phase transition and revealing new quantum phases of matter.

Journal ArticleDOI
07 Apr 2021-Nature
TL;DR: In this article, the authors used positron emission tomography (PET) tracers to measure the access to and uptake of glucose and glutamine by specific cell subsets in the TME.
Abstract: Cancer cells characteristically consume glucose through Warburg metabolism1, a process that forms the basis of tumour imaging by positron emission tomography (PET). Tumour-infiltrating immune cells also rely on glucose, and impaired immune cell metabolism in the tumour microenvironment (TME) contributes to immune evasion by tumour cells2–4. However, whether the metabolism of immune cells is dysregulated in the TME by cell-intrinsic programs or by competition with cancer cells for limited nutrients remains unclear. Here we used PET tracers to measure the access to and uptake of glucose and glutamine by specific cell subsets in the TME. Notably, myeloid cells had the greatest capacity to take up intratumoral glucose, followed by T cells and cancer cells, across a range of cancer models. By contrast, cancer cells showed the highest uptake of glutamine. This distinct nutrient partitioning was programmed in a cell-intrinsic manner through mTORC1 signalling and the expression of genes related to the metabolism of glucose and glutamine. Inhibiting glutamine uptake enhanced glucose uptake across tumour-resident cell types, showing that glutamine metabolism suppresses glucose uptake without glucose being a limiting factor in the TME. Thus, cell-intrinsic programs drive the preferential acquisition of glucose and glutamine by immune and cancer cells, respectively. Cell-selective partitioning of these nutrients could be exploited to develop therapies and imaging strategies to enhance or monitor the metabolic programs and activities of specific cell populations in the TME. Positron emission tomography measurements of nutrient uptake in cells of the tumour microenvironment reveal cell-intrinsic partitioning in which glucose uptake is higher in myeloid cells, whereas glutamine is preferentially acquired by cancer cells.

Journal ArticleDOI
TL;DR: In this paper, the authors present a set of guidelines for analysing critical data from lignin-first approaches, including feedstock analysis and process parameters, with the ambition of uniting the lignIN-first research community around a common set of reportable metrics, including fractionation efficiency, product yields, solvent mass balances, catalyst efficiency, and requirements for additional reagents such as reducing, oxidising, or capping agents.
Abstract: The valorisation of the plant biopolymer lignin is now recognised as essential to enabling the economic viability of the lignocellulosic biorefining industry. In this context, the “lignin-first” biorefining approach, in which lignin valorisation is considered in the design phase, has demonstrated the fullest utilisation of lignocellulose. We define lignin-first methods as active stabilisation approaches that solubilise lignin from native lignocellulosic biomass while avoiding condensation reactions that lead to more recalcitrant lignin polymers. This active stabilisation can be accomplished by solvolysis and catalytic conversion of reactive intermediates to stable products or by protection-group chemistry of lignin oligomers or reactive monomers. Across the growing body of literature in this field, there are disparate approaches to report and analyse the results from lignin-first approaches, thus making quantitative comparisons between studies challenging. To that end, we present herein a set of guidelines for analysing critical data from lignin-first approaches, including feedstock analysis and process parameters, with the ambition of uniting the lignin-first research community around a common set of reportable metrics. These guidelines comprise standards and best practices or minimum requirements for feedstock analysis, stressing reporting of the fractionation efficiency, product yields, solvent mass balances, catalyst efficiency, and the requirements for additional reagents such as reducing, oxidising, or capping agents. Our goal is to establish best practices for the research community at large primarily to enable direct comparisons between studies from different laboratories. The use of these guidelines will be helpful for the newcomers to this field and pivotal for further progress in this exciting research area.

Journal ArticleDOI
Richard J. Abbott1, T. D. Abbott2, Sheelu Abraham3, Fausto Acernese4  +1335 moreInstitutions (144)
TL;DR: The data recorded by the Advanced LIGO and Advanced Virgo detectors during their first and second observing runs are described, including the gravitational-wave strain arrays, released as time series sampled at 16384 Hz.

Journal ArticleDOI
TL;DR: The review discusses the new classes of RiPPs (ribosomally synthesized and post-translationally modified peptides) that have been discovered, the advances in the understanding of the installation of both primary and secondary post-translational modifications, and the mechanisms by which the enzymes recognize the leader peptides in their substrates.

ReportDOI
TL;DR: In this article, targeted lockdowns are studied in a multi-group SIR model where infection, hospitalization and fatality rates vary between groups. The authors find that optimal policies differentially targeting risk/age groups significantly outperform optimal uniform policies, and that most of the gains can be realized by having stricter lockdown policies on the oldest group.
Abstract: We study targeted lockdowns in a multi-group SIR model where infection, hospitalization and fatality rates vary between groups—in particular between the “young”, the “middle-aged” and the “old”. Our model enables a tractable quantitative analysis of optimal policy. For baseline parameter values for the COVID-19 pandemic applied to the US, we find that optimal policies differentially targeting risk/age groups significantly outperform optimal uniform policies, and most of the gains can be realized by having stricter lockdown policies on the oldest group. Intuitively, a strict and long lockdown for the most vulnerable group both reduces infections and enables less strict lockdowns for the lower-risk groups. We also study the impacts of group distancing, testing and contact tracing, the matching technology and the expected arrival time of a vaccine on optimal policies. Overall, targeted policies that are combined with measures that reduce interactions between groups and increase testing and isolation of the infected can minimize both economic losses and deaths in our model.
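
As a rough illustration of the modelling setup (not the paper's calibrated model), the sketch below simulates a three-group SIR epidemic in which a per-group lockdown scales each group's contacts, so a uniform policy and an age-targeted policy with the same population-weighted severity can be compared. All parameter values, group shares, and fatality rates are assumptions for illustration only.

```python
import numpy as np

# Toy multi-group SIR with group-specific lockdowns. Lockdown L_g in [0, 1]
# scales down group g's contact activity; deaths are a fixed fraction of removals.
groups = ["young", "middle", "old"]
N = np.array([0.35, 0.45, 0.20])          # population shares (assumed)
ifr = np.array([0.001, 0.005, 0.06])      # infection fatality rates (assumed)
beta0, gamma, T, dt = 0.3, 0.1, 365, 1.0  # transmission, recovery, horizon (days)

def simulate(lockdown):
    """Return total deaths (share of population) under per-group lockdowns."""
    S, I, R = N.copy(), np.full(3, 1e-4) * N, np.zeros(3)
    S -= I
    deaths = 0.0
    activity = 1.0 - lockdown
    for _ in range(int(T / dt)):
        # force of infection under proportional mixing, scaled by both sides' activity
        prevalence = np.sum(activity * I) / np.sum(activity * N)
        new_inf = beta0 * activity * S * prevalence * dt
        rec = gamma * I * dt
        S -= new_inf
        I += new_inf - rec
        R += rec
        deaths += np.sum(ifr * rec)
    return deaths

# Both policies have the same population-weighted average lockdown intensity (0.3);
# the targeted policy shields the oldest group and relaxes the others.
uniform  = simulate(np.array([0.3, 0.3, 0.3]))
targeted = simulate(np.array([0.175, 0.175, 0.8]))
print(f"deaths (uniform):  {uniform:.4%}")
print(f"deaths (targeted): {targeted:.4%}")
```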

Journal ArticleDOI
TL;DR: In this paper, a review is presented that synergistically reports: (i) general design principles for hydrogels to achieve extreme mechanical and physical properties, (ii) implementation strategies for the design principles using unconventional polymer networks, and (iii) future directions for the orthogonal design of hydrogels to achieve multiple combined mechanical, physical, chemical, and biological properties.
Abstract: Hydrogels are polymer networks infiltrated with water. Many biological hydrogels in animal bodies such as muscles, heart valves, cartilages, and tendons possess extreme mechanical properties including being extremely tough, strong, resilient, adhesive, and fatigue-resistant. These mechanical properties are also critical for hydrogels' diverse applications ranging from drug delivery, tissue engineering, medical implants, wound dressings, and contact lenses to sensors, actuators, electronic devices, optical devices, batteries, water harvesters, and soft robots. Whereas numerous hydrogels have been developed over the last few decades, a set of general principles that can rationally guide the design of hydrogels using different materials and fabrication methods for various applications remain a central need in the field of soft materials. This review is aimed at synergistically reporting: (i) general design principles for hydrogels to achieve extreme mechanical and physical properties, (ii) implementation strategies for the design principles using unconventional polymer networks, and (iii) future directions for the orthogonal design of hydrogels to achieve multiple combined mechanical, physical, chemical, and biological properties. Because these design principles and implementation strategies are based on generic polymer networks, they are also applicable to other soft materials including elastomers and organogels. Overall, the review will not only provide comprehensive and systematic guidelines on the rational design of soft materials, but also provoke interdisciplinary discussions on a fundamental question: why does nature select soft materials with unconventional polymer networks to constitute the major parts of animal bodies?

Journal ArticleDOI
TL;DR: It is found that honoring the physics leads to improved robustness: when trained only on a few parameters, the PINN model can accurately predict the solution for a wide range of parameters new to the network—thus pointing to an important application of this framework to sensitivity analysis and surrogate modeling.