Showing papers by "École Polytechnique Fédérale de Lausanne" published in 2021


Journal ArticleDOI
23 Jun 2021
TL;DR: In this article, the authors describe the state-of-the-art in the field of federated learning from the perspective of distributed optimization, cryptography, security, differential privacy, fairness, compressed sensing, systems, information theory, and statistics.
Abstract: The term Federated Learning was coined as recently as 2016 to describe a machine learning setting where multiple entities collaborate in solving a machine learning problem, under the coordination of a central server or service provider. Each client’s raw data is stored locally and not exchanged or transferred; instead, focused updates intended for immediate aggregation are used to achieve the learning objective. Since then, the topic has gathered much interest across many different disciplines, along with the realization that solving many of these interdisciplinary problems likely requires not just machine learning but techniques from distributed optimization, cryptography, security, differential privacy, fairness, compressed sensing, systems, information theory, statistics, and more. This monograph has contributions from leading experts across the disciplines, who describe the latest state of the art from their perspective. These contributions have been carefully curated into a comprehensive treatment that enables the reader to understand the work that has been done and get pointers to where effort is required to solve many of the problems before Federated Learning can become a reality in practical systems. Researchers working in the area of distributed systems will find this monograph an enlightening read that may inspire them to work on the many challenging issues that are outlined. This monograph will get the reader up to speed quickly and easily on what is likely to become an increasingly important topic: Federated Learning.
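
To make the setting described above concrete, here is a minimal federated-averaging sketch: clients fit local models on data that never leaves them, and a server aggregates only the resulting updates, weighted by local sample counts. The function names and the toy linear model are illustrative assumptions, not taken from the monograph.

```python
# Minimal federated-averaging sketch (illustrative only; names are hypothetical).
# Each client computes a focused update on its local data; only the update,
# never the raw data, is sent to the server for immediate aggregation.
import numpy as np

def client_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local gradient steps on a linear model (squared loss)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(weights, clients):
    """Server aggregates client models, weighted by local sample counts."""
    updates, sizes = [], []
    for X, y in clients:          # raw (X, y) stays on the client in a real system
        updates.append(client_update(weights, X, y))
        sizes.append(len(y))
    sizes = np.array(sizes, dtype=float)
    return np.average(updates, axis=0, weights=sizes / sizes.sum())

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(5):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + 0.1 * rng.normal(size=50)))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, clients)
print(w)   # approaches [2, -1] without any client sharing its raw data
```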

2,144 citations


Journal ArticleDOI
05 Apr 2021-Nature
TL;DR: In this paper, the pseudo-halide anion formate (HCOO−) was used to suppress anion-vacancy defects that are present at grain boundaries and at the surface of the perovskite films.
Abstract: Metal halide perovskites of the general formula ABX3—where A is a monovalent cation such as caesium, methylammonium or formamidinium; B is divalent lead, tin or germanium; and X is a halide anion—have shown great potential as light harvesters for thin-film photovoltaics1–5. Among a large number of compositions investigated, the cubic α-phase of formamidinium lead triiodide (FAPbI3) has emerged as the most promising semiconductor for highly efficient and stable perovskite solar cells6–9, and maximizing the performance of this material in such devices is of vital importance for the perovskite research community. Here we introduce an anion engineering concept that uses the pseudo-halide anion formate (HCOO−) to suppress anion-vacancy defects that are present at grain boundaries and at the surface of the perovskite films and to augment the crystallinity of the films. The resulting solar cell devices attain a power conversion efficiency of 25.6 per cent (certified 25.2 per cent), have long-term operational stability (450 hours) and show intense electroluminescence with external quantum efficiencies of more than 10 per cent. Our findings provide a direct route to eliminate the most abundant and deleterious lattice defects present in metal halide perovskites, providing a facile access to solution-processable films with improved optoelectronic performance. Incorporation of the pseudo-halide anion formate during the fabrication of α-FAPbI3 perovskite films eliminates deleterious iodide vacancies, yielding solar cell devices with a certified power conversion efficiency of 25.21 per cent and long-term operational stability.

1,616 citations


Journal ArticleDOI
Shadab Alam1, Marie Aubert, Santiago Avila2, Christophe Balland3, Julian E. Bautista4, Matthew A. Bershady5, Matthew A. Bershady6, Dmitry Bizyaev7, Dmitry Bizyaev8, Michael R. Blanton9, Adam S. Bolton10, Jo Bovy11, Jonathan Brinkmann7, Joel R. Brownstein10, Etienne Burtin12, Solène Chabanier12, Michael J. Chapman13, Peter Doohyun Choi14, Chia-Hsun Chuang15, Johan Comparat16, M. C. Cousinou, Andrei Cuceu17, Kyle S. Dawson10, Sylvain de la Torre, Arnaud de Mattia12, Victoria de Sainte Agathe3, Hélion du Mas des Bourboux10, Stephanie Escoffier, Thomas Etourneau12, James Farr17, Andreu Font-Ribera17, Peter M. Frinchaboy18, S. Fromenteau19, Héctor Gil-Marín20, Jean Marc Le Goff12, Alma X. Gonzalez-Morales21, Alma X. Gonzalez-Morales22, Violeta Gonzalez-Perez4, Violeta Gonzalez-Perez23, Kathleen Grabowski7, Julien Guy24, Adam J. Hawken, Jiamin Hou16, Hui Kong25, James C. Parker7, Mark A. Klaene7, Jean-Paul Kneib26, Sicheng Lin9, Daniel Long7, Brad W. Lyke27, Axel de la Macorra19, Paul Martini25, Karen L. Masters28, Faizan G. Mohammad13, Jeongin Moon14, Eva Maria Mueller29, Andrea Muñoz-Gutiérrez19, Adam D. Myers27, Seshadri Nadathur4, Richard Neveux12, Jeffrey A. Newman30, P. Noterdaeme3, Audrey Oravetz7, Daniel Oravetz7, Nathalie Palanque-Delabrouille12, Kaike Pan7, Romain Paviot, Will J. Percival31, Will J. Percival13, Ignasi Pérez-Ràfols3, Patrick Petitjean3, Matthew M. Pieri, Abhishek Prakash32, Anand Raichoor26, Corentin Ravoux12, Mehdi Rezaie33, J. Rich12, Ashley J. Ross25, Graziano Rossi14, Rossana Ruggeri4, Rossana Ruggeri34, V. Ruhlmann-Kleider12, Ariel G. Sánchez16, F. Javier Sánchez35, José R. Sánchez-Gallego36, Conor Sayres36, Donald P. Schneider, Hee-Jong Seo33, Arman Shafieloo37, Anže Slosar38, Alex Smith12, Julianna Stermer3, Amélie Tamone26, Jeremy L. Tinker9, Rita Tojeiro39, Mariana Vargas-Magaña19, Andrei Variu26, Yuting Wang, Benjamin A. Weaver, Anne-Marie Weijmans39, C. Yeche12, Pauline Zarrouk12, Pauline Zarrouk40, Cheng Zhao26, Gong-Bo Zhao, Zheng Zheng10 
TL;DR: In this article, the authors present the cosmological implications from final measurements of clustering using galaxies, quasars, and Lyα forests from the completed SDSS lineage of experiments in large-scale structure.
Abstract: We present the cosmological implications from final measurements of clustering using galaxies, quasars, and Lyα forests from the completed Sloan Digital Sky Survey (SDSS) lineage of experiments in large-scale structure. These experiments, composed of data from SDSS, SDSS-II, BOSS, and eBOSS, offer independent baryon acoustic oscillation (BAO) measurements of angular-diameter distances and Hubble distances relative to the sound horizon, rd, from eight different samples, and six measurements of the growth rate parameter, fσ8, from redshift-space distortions (RSD). This composite sample is the most constraining of its kind and allows us to perform a comprehensive assessment of the cosmological model after two decades of dedicated spectroscopic observation. We show that the BAO data alone are able to rule out dark-energy-free models at more than eight standard deviations in an extension to the flat, ΛCDM model that allows for curvature. When combined with Planck Cosmic Microwave Background (CMB) measurements of temperature and polarization, under the same model, the BAO data provide nearly an order of magnitude improvement on curvature constraints relative to primary CMB constraints alone. Independent of distance measurements, the SDSS RSD data complement weak lensing measurements from the Dark Energy Survey (DES) in demonstrating a preference for a flat ΛCDM cosmological model when combined with Planck measurements. The combined BAO and RSD measurements indicate σ8=0.85±0.03, implying a growth rate that is consistent with predictions from Planck temperature and polarization data and with General Relativity. When combining the results of SDSS BAO and RSD, Planck, Pantheon Type Ia supernovae (SNe Ia), and DES weak lensing and clustering measurements, all multiple-parameter extensions remain consistent with a ΛCDM model. Regardless of cosmological model, the precision on each of the three parameters, ΩΛ, H0, and σ8, remains at roughly 1%, showing changes of less than 0.6% in the central values between models. In a model that allows for free curvature and a time-evolving equation of state for dark energy, the combined samples produce a constraint Ωk=-0.0022±0.0022. The dark energy constraints lead to w0=-0.909±0.081 and wa=-0.49^{+0.35}_{-0.30}, corresponding to an equation of state of wp=-1.018±0.032 at a pivot redshift zp=0.29 and a Dark Energy Task Force Figure of Merit of 94. The inverse distance ladder measurement under this model yields H0=68.18±0.79 km s-1 Mpc-1, remaining in tension with several direct determination methods; the BAO data allow Hubble constant estimates that are robust against the assumption of the cosmological model. In addition, the BAO data allow estimates of H0 that are independent of the CMB data, with similar central values and precision under a ΛCDM model. Our most constraining combination of data gives the upper limit on the sum of neutrino masses at Σmν<0.115 eV (95% confidence). Finally, we consider the improvements in cosmology constraints over the last decade by comparing our results to a sample representative of the period 2000-2010. We compute the relative gain across the five dimensions spanned by w, Ωk, Σmν, H0, and σ8 and find that the SDSS BAO and RSD data reduce the total posterior volume by a factor of 40 relative to the previous generation. Adding again the Planck, DES, and Pantheon SN Ia samples leads to an overall contraction in the five-dimensional posterior volume of 3 orders of magnitude.
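
As a quick consistency check of the dark-energy numbers quoted above (assuming the standard Chevallier-Polarski-Linder form w(z) = w0 + wa*z/(1+z), which the abstract does not state explicitly), the pivot value wp follows directly from w0, wa and zp:

```python
# Consistency check of the quoted dark-energy constraints, assuming the
# CPL parametrization w(z) = w0 + wa * z / (1 + z) (an assumption; the
# abstract does not name the parametrization).
w0, wa, zp = -0.909, -0.49, 0.29
wp = w0 + wa * zp / (1.0 + zp)
print(round(wp, 3))   # -1.019, matching the quoted wp = -1.018 +/- 0.032
```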

575 citations


Journal ArticleDOI
06 Jan 2021-Nature
TL;DR: In this paper, the authors demonstrate a computationally specific integrated photonic hardware accelerator (tensor core) that is capable of operating at speeds of trillions of multiply-accumulate operations per second.
Abstract: With the proliferation of ultrahigh-speed mobile networks and internet-connected devices, along with the rise of artificial intelligence (AI)1, the world is generating exponentially increasing amounts of data that need to be processed in a fast and efficient way. Highly parallelized, fast and scalable hardware is therefore becoming progressively more important2. Here we demonstrate a computationally specific integrated photonic hardware accelerator (tensor core) that is capable of operating at speeds of trillions of multiply-accumulate operations per second (1012 MAC operations per second or tera-MACs per second). The tensor core can be considered as the optical analogue of an application-specific integrated circuit (ASIC). It achieves parallelized photonic in-memory computing using phase-change-material memory arrays and photonic chip-based optical frequency combs (soliton microcombs3). The computation is reduced to measuring the optical transmission of reconfigurable and non-resonant passive components and can operate at a bandwidth exceeding 14 gigahertz, limited only by the speed of the modulators and photodetectors. Given recent advances in hybrid integration of soliton microcombs at microwave line rates3-5, ultralow-loss silicon nitride waveguides6,7, and high-speed on-chip detectors and modulators, our approach provides a path towards full complementary metal-oxide-semiconductor (CMOS) wafer-scale integration of the photonic tensor core. Although we focus on convolutional processing, more generally our results indicate the potential of integrated photonics for parallel, fast, and efficient computational hardware in data-heavy AI applications such as autonomous driving, live video processing, and next-generation cloud computing services.
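
Since the accelerator described above is characterized by the number of multiply-accumulate (MAC) operations it performs, the short NumPy sketch below shows how the convolutional processing mentioned at the end reduces to exactly such MACs. This is purely an illustration of the arithmetic, not a model of the photonic hardware; in the paper the kernel weights are encoded in phase-change memory cells and the inputs on comb lines.

```python
# Illustrative reduction of a convolution to multiply-accumulate (MAC) operations,
# the primitive that the photonic tensor core evaluates in the optical domain.
import numpy as np

def conv2d_as_macs(image, kernel):
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    w = kernel.ravel()                              # weights (stored on-chip in PCM cells)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i+kh, j:j+kw].ravel()   # inputs (comb lines, in the paper)
            out[i, j] = np.dot(w, patch)            # one MAC per weight-input pair
    return out

img = np.arange(25, dtype=float).reshape(5, 5)
edge = np.array([[1., 0., -1.]] * 3)                # simple edge-detection kernel
print(conv2d_as_macs(img, edge))
```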

478 citations


Journal ArticleDOI
TL;DR: The cGAS-STING pathway has emerged as a key mediator of inflammation in the settings of infection, cellular stress and tissue damage, as discussed by the authors; insights into its structural and molecular biology have enabled the development of selective small-molecule inhibitors with the potential to target the cGAS-STING axis in a number of inflammatory diseases.
Abstract: The cGAS-STING signalling pathway has emerged as a key mediator of inflammation in the settings of infection, cellular stress and tissue damage. Underlying this broad involvement of the cGAS-STING pathway is its capacity to sense and regulate the cellular response towards microbial and host-derived DNAs, which serve as ubiquitous danger-associated molecules. Insights into the structural and molecular biology of the cGAS-STING pathway have enabled the development of selective small-molecule inhibitors with the potential to target the cGAS-STING axis in a number of inflammatory diseases in humans. Here, we outline the principal elements of the cGAS-STING signalling cascade and discuss the general mechanisms underlying the association of cGAS-STING activity with various autoinflammatory, autoimmune and degenerative diseases. Finally, we outline the chemical nature of recently developed cGAS and STING antagonists and summarize their potential clinical applications.

399 citations


Journal ArticleDOI
TL;DR: Quantum ESPRESSO as mentioned in this paper is an open-source distribution of computer codes for quantum-mechanical materials modeling, based on density-functional theory, pseudopotentials, and plane waves.
Abstract: Quantum ESPRESSO is an open-source distribution of computer codes for quantum-mechanical materials modeling, based on density-functional theory, pseudopotentials, and plane waves, and renowned for its performance on a wide range of hardware architectures, from laptops to massively parallel computers, as well as for the breadth of its applications. In this paper we present a motivation and brief review of the ongoing effort to port Quantum ESPRESSO onto heterogeneous architectures based on hardware accelerators, which will overcome the energy constraints that are currently hindering the way towards exascale computing.

356 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a set of guidelines for analysing critical data from lignin-first approaches, including feedstock analysis and process parameters, with the ambition of uniting the lignin-first research community around a common set of reportable metrics, including fractionation efficiency, product yields, solvent mass balances, catalyst efficiency, and requirements for additional reagents such as reducing, oxidising, or capping agents.
Abstract: The valorisation of the plant biopolymer lignin is now recognised as essential to enabling the economic viability of the lignocellulosic biorefining industry. In this context, the “lignin-first” biorefining approach, in which lignin valorisation is considered in the design phase, has demonstrated the fullest utilisation of lignocellulose. We define lignin-first methods as active stabilisation approaches that solubilise lignin from native lignocellulosic biomass while avoiding condensation reactions that lead to more recalcitrant lignin polymers. This active stabilisation can be accomplished by solvolysis and catalytic conversion of reactive intermediates to stable products or by protection-group chemistry of lignin oligomers or reactive monomers. Across the growing body of literature in this field, there are disparate approaches to report and analyse the results from lignin-first approaches, thus making quantitative comparisons between studies challenging. To that end, we present herein a set of guidelines for analysing critical data from lignin-first approaches, including feedstock analysis and process parameters, with the ambition of uniting the lignin-first research community around a common set of reportable metrics. These guidelines comprise standards and best practices or minimum requirements for feedstock analysis, stressing reporting of the fractionation efficiency, product yields, solvent mass balances, catalyst efficiency, and the requirements for additional reagents such as reducing, oxidising, or capping agents. Our goal is to establish best practices for the research community at large primarily to enable direct comparisons between studies from different laboratories. The use of these guidelines will be helpful for the newcomers to this field and pivotal for further progress in this exciting research area.

320 citations


Journal ArticleDOI
TL;DR: Daunorubicin is a well-known anthracycline anticancer chemotherapy drug with many side effects, making its measurement in biological samples important; the authors developed an electrochemical biosensor for its detection.
Abstract: Daunorubicin is a famous anthracycline anticancer chemotherapy drug with many side effects that is very important to measure in biological samples. A daunorubicin electrochemical biosensor was fabr...

312 citations


Journal ArticleDOI
03 Jun 2021
TL;DR: This Primer explains the central concepts of single-molecule localization microscopy before discussing experimental considerations regarding fluorophores, optics and data acquisition, processing and analysis, and describes recent high-impact discoveries made by SMLM techniques.
Abstract: Single-molecule localization microscopy (SMLM) describes a family of powerful imaging techniques that dramatically improve spatial resolution over standard, diffraction-limited microscopy techniques and can image biological structures at the molecular scale. In SMLM, individual fluorescent molecules are computationally localized from diffraction-limited image sequences and the localizations are used to generate a super-resolution image or a time course of super-resolution images, or to define molecular trajectories. In this Primer, we introduce the basic principles of SMLM techniques before describing the main experimental considerations when performing SMLM, including fluorescent labelling, sample preparation, hardware requirements and image acquisition in fixed and live cells. We then explain how low-resolution image sequences are computationally processed to reconstruct super-resolution images and/or extract quantitative information, and highlight a selection of biological discoveries enabled by SMLM and closely related methods. We discuss some of the main limitations and potential artefacts of SMLM, as well as ways to alleviate them. Finally, we present an outlook on advanced techniques and promising new developments in the fast-evolving field of SMLM. We hope that this Primer will be a useful reference for both newcomers and practitioners of SMLM. This Primer explains the central concepts of single-molecule localization microscopy (SMLM) before discussing experimental considerations regarding fluorophores, optics and data acquisition, processing and analysis. The Primer further describes recent high-impact discoveries made by SMLM techniques and concludes by discussing emerging methodologies.

246 citations


Journal ArticleDOI
TL;DR: In this article, the authors present a metadata analysis of methane fluxes from all major natural, impacted and human-made aquatic ecosystems and conclude that aquatic emissions will probably increase due to urbanization, eutrophication and positive climate feedbacks.
Abstract: Atmospheric methane is a potent greenhouse gas that plays a major role in controlling the Earth's climate. The causes of the renewed increase of methane concentration since 2007 are uncertain given the multiple sources and complex biogeochemistry. Here, we present a metadata analysis of methane fluxes from all major natural, impacted and human-made aquatic ecosystems. Our revised bottom-up global aquatic methane emissions combine diffusive, ebullitive and/or plant-mediated fluxes from 15 aquatic ecosystems. We emphasize the high variability of methane fluxes within and between aquatic ecosystems and a positively skewed distribution of empirical data, making global estimates sensitive to statistical assumptions and sampling design. We find aquatic ecosystems contribute (median) 41% or (mean) 53% of total global methane emissions from anthropogenic and natural sources. We show that methane emissions increase from natural to impacted aquatic ecosystems and from coastal to freshwater ecosystems. We argue that aquatic emissions will probably increase due to urbanization, eutrophication and positive climate feedbacks and suggest changes in land-use management as potential mitigation strategies to reduce aquatic methane emissions. Methane emissions from aquatic systems contribute approximately half of global methane emissions, according to this meta-analysis of natural, impacted and human-made aquatic ecosystems, which also indicates potential mitigation strategies to reduce emissions.
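
The sensitivity to statistical assumptions highlighted above stems from the positively skewed flux distribution: the mean is pulled upward by rare, very large emitters while the median is not. A small illustration with synthetic (made-up) numbers:

```python
# Illustration (with synthetic, made-up numbers) of why a positively skewed
# flux distribution makes mean-based and median-based upscaling diverge.
import numpy as np

rng = np.random.default_rng(1)
fluxes = rng.lognormal(mean=0.0, sigma=1.5, size=10_000)  # right-skewed sample

print(f"median flux: {np.median(fluxes):.2f}")   # ~1.0
print(f"mean flux:   {np.mean(fluxes):.2f}")     # ~3.1, pulled up by rare large emitters
# Scaling up by area with the mean versus the median therefore yields very
# different global totals, mirroring the 41% (median) vs 53% (mean) contrast above.
```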

239 citations


Journal ArticleDOI
TL;DR: It is shown how biofabrication and organoid technology can be merged to control tissue self-organization from millimetre to centimetre scales, opening new avenues for drug discovery, diagnostics and regenerative medicine.
Abstract: Bioprinting promises enormous control over the spatial deposition of cells in three dimensions1–7, but current approaches have had limited success at reproducing the intricate micro-architecture, cell-type diversity and function of native tissues formed through cellular self-organization. We introduce a three-dimensional bioprinting concept that uses organoid-forming stem cells as building blocks that can be deposited directly into extracellular matrices conducive to spontaneous self-organization. By controlling the geometry and cellular density, we generated centimetre-scale tissues that comprise self-organized features such as lumens, branched vasculature and tubular intestinal epithelia with in vivo-like crypts and villus domains. Supporting cells were deposited to modulate morphogenesis in space and time, and different epithelial cells were printed sequentially to mimic the organ boundaries present in the gastrointestinal tract. We thus show how biofabrication and organoid technology can be merged to control tissue self-organization from millimetre to centimetre scales, opening new avenues for drug discovery, diagnostics and regenerative medicine. A 3D bioprinting approach has been developed to facilitate tissue morphogenesis by directly depositing organoid-forming stem cells in an extracellular matrix, with the ability to generate intestinal epithelia and branched vascular tissue constructs.

Journal ArticleDOI
TL;DR: In this paper, the authors provide an introduction to Gaussian process regression (GPR) machine learning methods in computational materials science and chemistry, focusing on the regression of atomistic properties: in particular, on the construction of interatomic potentials, or force fields, in the Gaussian approximation potential (GAP) framework.
Abstract: We provide an introduction to Gaussian process regression (GPR) machine-learning methods in computational materials science and chemistry. The focus of the present review is on the regression of atomistic properties: in particular, on the construction of interatomic potentials, or force fields, in the Gaussian Approximation Potential (GAP) framework; beyond this, we also discuss the fitting of arbitrary scalar, vectorial, and tensorial quantities. Methodological aspects of reference data generation, representation, and regression, as well as the question of how a data-driven model may be validated, are reviewed and critically discussed. A survey of applications to a variety of research questions in chemistry and materials science illustrates the rapid growth in the field. A vision is outlined for the development of the methodology in the years to come.
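
As a toy-scale illustration of the regression framework the review introduces, the sketch below fits a one-dimensional energy curve with Gaussian process regression using a squared-exponential kernel. It is only a sketch under simplifying assumptions; GAP models use kernels between atomic environments (for example SOAP), not raw scalar distances.

```python
# Minimal GPR sketch: fit a toy 1-D "energy vs distance" curve with a
# squared-exponential kernel. Illustrative only; GAP models use kernels
# between atomic environments (e.g. SOAP), not raw distances.
import numpy as np

def rbf(xa, xb, length=0.3, sigma=1.0):
    d2 = (xa[:, None] - xb[None, :]) ** 2
    return sigma**2 * np.exp(-0.5 * d2 / length**2)

# Toy training data: a Lennard-Jones-like curve sampled at a few points
x_train = np.linspace(0.9, 2.5, 8)
y_train = 4 * (x_train**-12 - x_train**-6)

noise = 1e-6
K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
alpha = np.linalg.solve(K, y_train)          # weights of the kernel expansion

x_test = np.linspace(0.9, 2.5, 5)
y_pred = rbf(x_test, x_train) @ alpha        # GP posterior mean
print(np.round(y_pred, 3))
print(np.round(4 * (x_test**-12 - x_test**-6), 3))   # reference values for comparison
```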

Journal ArticleDOI
TL;DR: The field of enantioselective ring opening of cyclopropanes has grown from a proof-of-concept stage to a broad range of methods for accessing enantioenriched building blocks, and further extensive developments can be expected in the future.
Abstract: This review describes the development of enantioselective methods for the ring opening of cyclopropanes. Both approaches based on the reaction of nonchiral cyclopropanes and (dynamic) kinetic resolutions and asymmetric transformations of chiral substrates are presented. The review is organized according to substrate classes, starting by the more mature field of donor-acceptor cyclopropanes. Emerging methods for enantioselective ring opening of acceptor- or donor-only cyclopropanes are then presented. The last part of the review describes the ring opening of more reactive three-membered rings substituted with unsaturations with a particular focus on vinylcyclopropanes, alkylidenecyclopropanes, and vinylidenecyclopropanes. In the last two decades, the field has grown from a proof of concept stage to a broad range of methods for accessing enantioenriched building blocks, and further extensive developments can be expected in the future.

ReportDOI
TL;DR: The COVID-19 shock creates a sudden temporary sharp shortfall in revenue for firms, as discussed by the authors, and they expect firms with greater financial flexibility to be better able to fund ...
Abstract: The COVID-19 shock creates a sudden temporary sharp shortfall in revenue for firms. We expect firms with greater financial flexibility to be better able to fund ...

Journal ArticleDOI
TL;DR: This article showed that the relative performance of these methods is contingent on their ability to account for variation between biological replicates, and that the most widely used methods can discover hundreds of differentially expressed genes in the absence of biological differences.
Abstract: Differential expression analysis in single-cell transcriptomics enables the dissection of cell-type-specific responses to perturbations such as disease, trauma, or experimental manipulations. While many statistical methods are available to identify differentially expressed genes, the principles that distinguish these methods and their performance remain unclear. Here, we show that the relative performance of these methods is contingent on their ability to account for variation between biological replicates. Methods that ignore this inevitable variation are biased and prone to false discoveries. Indeed, the most widely used methods can discover hundreds of differentially expressed genes in the absence of biological differences. To exemplify these principles, we exposed true and false discoveries of differentially expressed genes in the injured mouse spinal cord.
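
A compact sketch of the principle described above: aggregating cells into per-replicate pseudobulk values before testing makes biological replicates, rather than individual cells, the unit of analysis. The data are simulated and the plain t-test is only a stand-in; the paper benchmarks dedicated single-cell and pseudobulk methods that are not reproduced here.

```python
# Pseudobulk sketch: account for variation between biological replicates by
# aggregating cells per replicate before testing. Hypothetical data; the paper
# benchmarks real single-cell DE methods, which are not reproduced here.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def simulate_replicate(n_cells, mean_shift):
    # replicate-level biological variation plus cell-level noise, for one gene
    replicate_effect = rng.normal(0, 1.0)
    return rng.normal(mean_shift + replicate_effect, 0.5, size=n_cells)

control = [simulate_replicate(200, 0.0) for _ in range(4)]
injured = [simulate_replicate(200, 0.0) for _ in range(4)]   # no true difference

# Cell-level test ignores replicate structure -> spuriously tiny p-values
_, p_cells = stats.ttest_ind(np.concatenate(control), np.concatenate(injured))

# Pseudobulk: one value per biological replicate -> calibrated test
_, p_pseudo = stats.ttest_ind([c.mean() for c in control],
                              [r.mean() for r in injured])
print(f"cell-level p = {p_cells:.2e}, pseudobulk p = {p_pseudo:.2f}")
```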

Journal ArticleDOI
23 Mar 2021-Nature
TL;DR: In this article, the authors used live-cell structured illumination microscopy to capture mitochondrial dynamics and discovered two functionally and mechanistically distinct types of fission in African green monkey Cos-7 cells and mouse cardiomyocytes.
Abstract: Mitochondrial fission is a highly regulated process that, when disrupted, can alter metabolism, proliferation and apoptosis1-3. Dysregulation has been linked to neurodegeneration3,4, cardiovascular disease3 and cancer5. Key components of the fission machinery include the endoplasmic reticulum6 and actin7, which initiate constriction before dynamin-related protein 1 (DRP1)8 binds to the outer mitochondrial membrane via adaptor proteins9-11, to drive scission12. In the mitochondrial life cycle, fission enables both biogenesis of new mitochondria and clearance of dysfunctional mitochondria through mitophagy1,13. Current models of fission regulation cannot explain how those dual fates are decided. However, uncovering fate determinants is challenging, as fission is unpredictable, and mitochondrial morphology is heterogeneous, with ultrastructural features that are below the diffraction limit. Here, we used live-cell structured illumination microscopy to capture mitochondrial dynamics. By analysing hundreds of fissions in African green monkey Cos-7 cells and mouse cardiomyocytes, we discovered two functionally and mechanistically distinct types of fission. Division at the periphery enables damaged material to be shed into smaller mitochondria destined for mitophagy, whereas division at the midzone leads to the proliferation of mitochondria. Both types are mediated by DRP1, but endoplasmic reticulum- and actin-mediated pre-constriction and the adaptor MFF govern only midzone fission. Peripheral fission is preceded by lysosomal contact and is regulated by the mitochondrial outer membrane protein FIS1. These distinct molecular mechanisms explain how cells independently regulate fission, leading to distinct mitochondrial fates.

Journal ArticleDOI
TL;DR: The results indicate that antibody responses against viral S and N proteins were equally sensitive in the acute phase of infection, but that responses against N appear to wane in the postinfection phase where those against the S protein persist over time.
Abstract: Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2)-specific antibody responses to the spike (S) protein monomer, S protein native trimeric form, or the nucleocapsid (N) proteins were evaluated in cohorts of individuals with acute infection (n = 93) and in individuals enrolled in a postinfection seroprevalence population study (n = 578) in Switzerland. Commercial assays specific for the S1 monomer, for the N protein, or within a newly developed Luminex assay using the S protein trimer were found to be equally sensitive in antibody detection in the acute-infection-phase samples. Interestingly, compared to anti-S antibody responses, those against the N protein appear to wane in the postinfection cohort. Seroprevalence in a "positive patient contacts" group (n = 177) was underestimated by N protein assays by 10.9 to 32.2%, while the "randomly selected" general population group (n = 311) was reduced by up to 45% relative to the S protein assays. The overall reduction in seroprevalence targeting only anti-N antibodies for the total cohort ranged from 9.4 to 31%. Of note, the use of the S protein in its native trimer form was significantly more sensitive compared to monomeric S proteins. These results indicate that the assessment of anti-S IgG antibody responses against the native trimeric S protein should be implemented to estimate SARS-CoV-2 infections in population-based seroprevalence studies. IMPORTANCE In the present study, we have determined SARS-CoV-2-specific antibody responses in sera of acute and postinfection phase subjects. Our results indicate that antibody responses against viral S and N proteins were equally sensitive in the acute phase of infection, but that responses against N appear to wane in the postinfection phase where those against the S protein persist over time. The most sensitive serological assay in both acute and postinfection phases used the native S protein trimer as the binding antigen, which has significantly greater conformational epitopes for antibody binding compared to the S1 monomer protein used in other assays. We believe these results are extremely important in order to generate correct estimates of SARS-CoV-2 infections in the general population. Furthermore, the assessment of antibody responses against the trimeric S protein will be critical to evaluate the durability of the antibody response and for the characterization of a vaccine-induced antibody response.

Journal ArticleDOI
TL;DR: In this paper, the authors report world averages of measurements of b-hadron, c-hadron, and τ-lepton properties obtained by the Heavy Flavour Averaging Group using results available through September 2018.
Abstract: This paper reports world averages of measurements of b-hadron, c-hadron, and τ-lepton properties obtained by the Heavy Flavour Averaging Group using results available through September 2018. In rare cases, significant results obtained several months later are also used. For the averaging, common input parameters used in the various analyses are adjusted (rescaled) to common values, and known correlations are taken into account. The averages include branching fractions, lifetimes, neutral meson mixing parameters, CP violation parameters, parameters of semileptonic decays, and Cabibbo–Kobayashi–Maskawa matrix elements.

Journal ArticleDOI
TL;DR: The Roadmap on Magnonics as mentioned in this paper is a collection of 22 sections written by leading experts in this field who review and discuss the current status but also present their vision of future perspectives.
Abstract: Magnonics is a rather young physics research field in nanomagnetism and nanoscience that addresses the use of spin waves (magnons) to transmit, store, and process information. After several papers and review articles published in the last decade, with a steady increase in the number of citations, we are presenting the first Roadmap on Magnonics. This is a collection of 22 sections written by leading experts in this field who review and discuss the current status but also present their vision of future perspectives. Today, the principal challenges in applied magnonics are the excitation of sub-100 nm wavelength magnons, their manipulation on the nanoscale and the creation of sub-micrometre devices using low-Gilbert-damping magnetic materials and the interconnections to standard electronics. In this respect, magnonics offers lower energy consumption, easier integrability and compatibility with CMOS structure, reprogrammability, shorter wavelength, smaller device features, anisotropic properties, negative group velocity, non-reciprocity and efficient tunability by various external stimuli, to name a few. Hence, despite being a young research field, magnonics has come a long way since its early inception. This Roadmap represents a milestone for future emerging research directions in magnonics and hopefully it will be followed by a series of articles on the same topic.

Journal ArticleDOI
TL;DR: This Review surveys the basic principles, recent advances and promising future directions for wave-based-metamaterial analogue computing systems, and describes some of the most exciting applications suggested for these Computing metamaterials, including image processing, edge detection, equation solving and machine learning.
Abstract: Despite their widespread use for performing advanced computational tasks, digital signal processors suffer from several restrictions, including low speed, high power consumption and complexity, caused by costly analogue-to-digital converters. For this reason, there has recently been a surge of interest in performing wave-based analogue computations that avoid analogue-to-digital conversion and allow massively parallel operation. In particular, novel schemes for wave-based analogue computing have been proposed based on artificially engineered photonic structures, that is, metamaterials. Such kinds of computing systems, referred to as computational metamaterials, can be as fast as the speed of light and as small as its wavelength, yet, impart complex mathematical operations on an incoming wave packet or even provide solutions to integro-differential equations. These much-sought features promise to enable a new generation of ultra-fast, compact and efficient processing and computing hardware based on light-wave propagation. In this Review, we discuss recent advances in the field of computational metamaterials, surveying the state-of-the-art metastructures proposed to perform analogue computation. We further describe some of the most exciting applications suggested for these computing systems, including image processing, edge detection, equation solving and machine learning. Finally, we provide an outlook for the possible directions and the key problems for future research. Metamaterials provide a platform to leverage optical signals for performing specific-purpose computational tasks with ultra-fast speeds. This Review surveys the basic principles, recent advances and promising future directions for wave-based-metamaterial analogue computing systems.
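
One of the applications listed above, edge detection, corresponds to spatial differentiation, which such a metastructure realizes as a transfer function proportional to ik applied to the incident field. The numerical sketch below reproduces that operation in software only, as an illustration, and does not model any specific metamaterial design.

```python
# Edge detection as spatial differentiation: apply the transfer function
# H(k) = i*k in the Fourier domain, the operation a differentiating
# metastructure imparts on an incident field (numerical illustration only).
import numpy as np

x = np.linspace(-5, 5, 512, endpoint=False)
field = np.where(np.abs(x) < 2, 1.0, 0.0)        # input "image": a bright stripe

k = 2 * np.pi * np.fft.fftfreq(x.size, d=x[1] - x[0])
derivative = np.fft.ifft(1j * k * np.fft.fft(field)).real

edges = np.argsort(np.abs(derivative))[-2:]       # strongest responses
print(np.sort(x[edges]))                          # ~[-2, 2]: the edges of the stripe
```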

Journal ArticleDOI
TL;DR: In this paper, the authors highlight the recent applications of biochar in removing organic and inorganic pollutants present in industrial effluents and discuss the possible optimizations (such as the pyrolysis temperature and solution pH) that increase the adsorption capability of biochar for the removal of organic contaminants.
Abstract: Currently, due to the rapid growth of urbanization and industrialization in developing countries, a large volume of wastewater is produced by industries; it contains chemicals that pose high environmental risks to human health and the economy if not treated properly. Consequently, the development of sustainable, low-cost wastewater treatment approaches has attracted increasing attention from policymakers and scientists. The present review highlights the recent applications of biochar in removing organic and inorganic pollutants present in industrial effluents. The recent modes of preparation, physicochemical properties and adsorption mechanisms of biochar in removing organic and inorganic industrial pollutants are also reviewed comprehensively. Biochar showed adsorption of industrial dyes of up to 80%. The review also discusses the recent application and mechanism of biochar-supported photocatalytic materials for the degradation of organic contaminants in wastewater. We also review the possible optimizations (such as the pyrolysis temperature and solution pH) that increase the adsorption capability of biochar for the removal of organic contaminants. In addition, increasing the pyrolysis temperature of the biochar increases its surface area but decreases its content of oxygen-containing functional groups, consequently leading to a decrease in the adsorption of metal(loid) ions present in the medium. Finally, the review suggests that more research should be carried out to optimize the main parameters involved in biochar production and its regeneration methods. Future efforts should also be directed towards process engineering to improve the adsorption capacity of biochar and thus increase the economic benefits of its implementation.

Journal ArticleDOI
TL;DR: All the manual verification, data management and data visualization components of RDP5 have been extensively updated to minimize the amount of time needed by users to individually verify and refine the program’s interpretation of each of the individual recombination events that it detects.
Abstract: For the past 20 years, the recombination detection program (RDP) project has focused on the development of a fast, flexible, and easy to use Windows-based recombination analysis tool. Whereas previous versions of this tool have relied on considerable user-mediated verification of detected recombination events, the latest iteration, RDP5, is automated enough that it can be integrated within analysis pipelines and run without any user input. The main innovation enabling this degree of automation is the implementation of statistical tests to identify recombination signals that could be attributable to evolutionary processes other than recombination. The additional analysis time required for these tests has been offset by algorithmic improvements throughout the program such that, relative to RDP4, RDP5 will still run up to five times faster and be capable of analyzing alignments containing twice as many sequences (up to 5000) that are five times longer (up to 50 million sites). For users wanting to remove signals of recombination from their datasets before using them for downstream phylogenetics-based molecular evolution analyses, RDP5 can disassemble detected recombinant sequences into their constituent parts and output a variety of different recombination-free datasets in an array of different alignment formats. For users that are interested in exploring the recombination history of their datasets, all the manual verification, data management and data visualization components of RDP5 have been extensively updated to minimize the amount of time needed by users to individually verify and refine the program's interpretation of each of the individual recombination events that it detects.

Journal ArticleDOI
TL;DR: The authors summarize the current understanding of the nature and characteristics of the most commonly used structural and chemical descriptions of atomistic structures, highlighting the deep underlying connections between different frameworks and the ideas that lead to computationally efficient and universally applicable models.
Abstract: The first step in the construction of a regression model or a data-driven analysis, aiming to predict or elucidate the relationship between the atomic-scale structure of matter and its properties, involves transforming the Cartesian coordinates of the atoms into a suitable representation. The development of atomic-scale representations has played, and continues to play, a central role in the success of machine-learning methods for chemistry and materials science. This review summarizes the current understanding of the nature and characteristics of the most commonly used structural and chemical descriptions of atomistic structures, highlighting the deep underlying connections between different frameworks and the ideas that lead to computationally efficient and universally applicable models. It emphasizes the link between properties, structures, their physical chemistry, and their mathematical description, provides examples of recent applications to a diverse set of chemical and materials science problems, and outlines the open questions and the most promising research directions in the field.
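
As a toy version of the "first step" described above, the sketch below maps Cartesian coordinates to a representation that is invariant to rotation and translation, here simply the sorted list of interatomic distances. Real descriptors discussed in the review (symmetry functions, SOAP and related frameworks) are far richer; this only illustrates the invariance requirement.

```python
# Toy structural representation: sorted interatomic distances are invariant
# to rotation and translation of the structure (real descriptors such as
# symmetry functions or SOAP are far richer, but share this requirement).
import numpy as np

def sorted_distances(coords):
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    iu = np.triu_indices(len(coords), k=1)
    return np.sort(d[iu])

water = np.array([[0.000, 0.000, 0.000],     # O
                  [0.757, 0.586, 0.000],     # H
                  [-0.757, 0.586, 0.000]])   # H

# Rotate by 40 degrees about z and translate: the representation is unchanged
theta = np.deg2rad(40)
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
moved = water @ R.T + np.array([1.0, -2.0, 3.0])

print(np.allclose(sorted_distances(water), sorted_distances(moved)))  # True
```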

Journal ArticleDOI
20 Jan 2021-Nature
TL;DR: In this paper, a tileable mechanical metamaterial with stable memory at the unit-cell level is presented, where each m-bit can be independently and reversibly switched between two stable states (acting as memory) using magnetic actuation to move between the equilibria of a bistable shell.
Abstract: Metamaterials are designed to realize exotic physical properties through the geometric arrangement of their underlying structural layout1,2. Traditional mechanical metamaterials achieve functionalities such as a target Poisson's ratio3 or shape transformation4-6 through unit-cell optimization7-9, often with spatial heterogeneity10-12. These functionalities are programmed into the layout of the metamaterial in a way that cannot be altered. Although recent efforts have produced means of tuning such properties post-fabrication13-19, they have not demonstrated mechanical reprogrammability analogous to that of digital devices, such as hard disk drives, in which each unit can be written to or read from in real time as required. Here we overcome this challenge by using a design framework for a tileable mechanical metamaterial with stable memory at the unit-cell level. Our design comprises an array of physical binary elements (m-bits), analogous to digital bits, with clearly delineated writing and reading phases. Each m-bit can be independently and reversibly switched between two stable states (acting as memory) using magnetic actuation to move between the equilibria of a bistable shell20-25. Under deformation, each state is associated with a distinctly different mechanical response that is fully elastic and can be reversibly cycled until the system is reprogrammed. Encoding a set of binary instructions onto the tiled array yields markedly different mechanical properties; specifically, the stiffness and strength can be made to range over an order of magnitude. We expect that the stable memory and on-demand reprogrammability of mechanical properties in this design paradigm will facilitate the development of advanced forms of mechanical metamaterials.

Journal ArticleDOI
TL;DR: In this paper, the authors review recent advances in the development of biodegradable plastics and their safe degradation potential, covering their applicability, degradation and role in sustainable development.

Journal ArticleDOI
TL;DR: A comprehensive review of the microstructure and mechanical properties of nickel-based superalloys manufactured using the two principal PBF techniques, Laser Powder Bed Fusion (LPBF) and Electron Beam Melting (EBM), is presented in this article.
Abstract: Powder Bed Fusion (PBF) techniques constitute a family of Additive Manufacturing (AM) processes, which are characterised by high design flexibility and no tooling requirement. This makes PBF techniques attractive to many modern manufacturing sectors (e.g. aerospace, defence, energy and automotive) where some materials, such as nickel-based superalloys, cannot be easily processed using conventional subtractive techniques. Nickel-based superalloys are crucial materials in modern engineering and underpin the performance of many advanced mechanical systems. Their physical properties (high mechanical integrity at high temperature) make them difficult to process via traditional techniques. Consequently, manufacture of nickel-based superalloys using PBF platforms has attracted significant attention. To permit a wider application, a deep understanding of their mechanical behaviour and its relation to the process needs to be achieved. The motivation for this paper is to provide a comprehensive review of the mechanical properties of PBF nickel-based superalloys and how process parameters affect these, and to aid practitioners in identifying the shortcomings and the opportunities in this field. Therefore, this paper aims to review research contributions regarding the microstructure and mechanical properties of nickel-based superalloys, manufactured using the two principal PBF techniques: Laser Powder Bed Fusion (LPBF) and Electron Beam Melting (EBM). The ‘target’ microstructures are introduced alongside the characteristics of those produced by the PBF process, followed by an overview of the most used building processes, as well as build quality inspection techniques. The mechanical properties of PBF nickel-based superalloys, including tensile strength, hardness, shear strength, fatigue resistance, creep resistance and fracture toughness, are comprehensively analysed. This work concludes with summary tables for data published on these properties, serving as a quick reference for scholars. Characteristic process factors influencing functional performance are also discussed and compared throughout for the purpose of identifying research opportunities and directing the research community toward the end goal of achieving part integrity that extends beyond static components only.

Journal ArticleDOI
TL;DR: In this paper, the authors highlight recent evidence of collective behaviors induced by higher-order interactions and outline three key challenges for the physics of higher order complex networks, which is the main paradigm for modeling the dynamics of interacting systems.
Abstract: Complex networks have become the main paradigm for modelling the dynamics of interacting systems. However, networks are intrinsically limited to describing pairwise interactions, whereas real-world systems are often characterized by higher-order interactions involving groups of three or more units. Higher-order structures, such as hypergraphs and simplicial complexes, are therefore a better tool to map the real organization of many social, biological and man-made systems. Here, we highlight recent evidence of collective behaviours induced by higher-order interactions, and we outline three key challenges for the physics of higher-order systems. Network representations of complex systems are limited to pairwise interactions, but real-world systems often involve higher-order interactions. This Perspective looks at the new physics emerging from attempts to characterize these interactions.
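
A minimal illustration of the limitation described above: a three-way interaction encoded as a single hyperedge is not the same object as the three pairwise edges obtained by projecting it onto an ordinary graph. The code is purely illustrative and not taken from the paper.

```python
# A three-way (higher-order) interaction versus its pairwise projection.
# The hyperedge {A, B, C} encodes a genuine group interaction; projecting it
# to a graph keeps only pairwise edges and loses that information.
from itertools import combinations

hyperedges = [frozenset({"A", "B", "C"}), frozenset({"C", "D"})]

pairwise = {frozenset(p) for h in hyperedges for p in combinations(sorted(h), 2)}
print(pairwise)
# {frozenset({'A','B'}), frozenset({'A','C'}), frozenset({'B','C'}), frozenset({'C','D'})}
# The same pairwise graph also arises from three separate 2-way interactions
# among A, B and C; the projection cannot distinguish the two situations.
```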

Journal ArticleDOI
28 Jul 2021-Nature
TL;DR: In this article, a comprehensive single-cell transcriptomic atlas of the embryonic mouse brain between gastrulation and birth is presented, identifying almost eight hundred cellular states that describe a developmental program for the functional elements of the brain and its enclosing membranes.
Abstract: The mammalian brain develops through a complex interplay of spatial cues generated by diffusible morphogens, cell–cell interactions and intrinsic genetic programs that result in probably more than a thousand distinct cell types. A complete understanding of this process requires a systematic characterization of cell states over the entire spatiotemporal range of brain development. The ability of single-cell RNA sequencing and spatial transcriptomics to reveal the molecular heterogeneity of complex tissues has therefore been particularly powerful in the nervous system. Previous studies have explored development in specific brain regions1–8, the whole adult brain9 and even entire embryos10. Here we report a comprehensive single-cell transcriptomic atlas of the embryonic mouse brain between gastrulation and birth. We identified almost eight hundred cellular states that describe a developmental program for the functional elements of the brain and its enclosing membranes, including the early neuroepithelium, region-specific secondary organizers, and both neurogenic and gliogenic progenitors. We also used in situ mRNA sequencing to map the spatial expression patterns of key developmental genes. Integrating the in situ data with our single-cell clusters revealed the precise spatial organization of neural progenitors during the patterning of the nervous system. A comprehensive single-cell transcriptomic atlas of the mouse brain between gastrulation and birth identifies hundreds of cellular states and reveals the spatiotemporal organization of brain development.

Journal ArticleDOI
14 Jan 2021
TL;DR: This Primer summarizes the basic principles of NMR as applied to the wide range of solid systems, and describes the most common MAS NMR experiments and data analysis approaches for investigating biological macromolecules, organic materials, and inorganic solids.
Abstract: Solid-state nuclear magnetic resonance (NMR) spectroscopy is an atomic-level method used to determine the chemical structure, three-dimensional structure, and dynamics of solids and semi-solids. This Primer summarizes the basic principles of NMR as applied to the wide range of solid systems. The fundamental nuclear spin interactions and the effects of magnetic fields and radiofrequency pulses on nuclear spins are the same as in liquid-state NMR. However, because of the anisotropy of the interactions in the solid state, the majority of high-resolution solid-state NMR spectra is measured under magic-angle spinning (MAS), which has profound effects on the types of radiofrequency pulse sequences required to extract structural and dynamical information. We describe the most common MAS NMR experiments and data analysis approaches for investigating biological macromolecules, organic materials, and inorganic solids. Continuing development of sensitivity-enhancement approaches, including 1H-detected fast MAS experiments, dynamic nuclear polarization, and experiments tailored to ultrahigh magnetic fields, is described. We highlight recent applications of solid-state NMR to biological and materials chemistry. The Primer ends with a discussion of current limitations of NMR to study solids, and points to future avenues of development to further enhance the capabilities of this sophisticated spectroscopy for new applications.
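
The magic angle referred to above is the spinning-axis angle at which the second-order Legendre term (3cos^2(theta) - 1)/2, which governs the orientation dependence of the anisotropic interactions, averages to zero. A one-line numerical check (illustrative only):

```python
# The magic angle: spinning the sample about an axis at theta_m to the field
# averages interactions scaling as (3*cos^2(theta) - 1)/2 to zero.
import numpy as np

theta_m = np.degrees(np.arccos(1 / np.sqrt(3)))
print(round(theta_m, 2))                                  # 54.74 degrees
print(round(3 * np.cos(np.radians(theta_m))**2 - 1, 12))  # 0.0
```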