Showing papers by "École Polytechnique Fédérale de Lausanne" published in 2020
••
TL;DR: Global health has steadily improved over the past 30 years as measured by age-standardised DALY rates, and there has been a marked shift towards a greater proportion of burden due to YLDs from non-communicable diseases and injuries.
5,802 citations
••
Christopher J L Murray, Aleksandr Y. Aravkin +2269 more • Institutions (286)
TL;DR: The largest declines in risk exposure from 2010 to 2019 were among a set of risks that are strongly linked to social and economic development, including household air pollution; unsafe water, sanitation, and handwashing; and child growth failure.
3,059 citations
••
TL;DR: The COVID Human Genetic Effort, established to test the general hypothesis that life-threatening COVID-19 in some or most patients may be caused by monogenic inborn errors of immunity to SARS-CoV-2 with incomplete or complete penetrance, finds an enrichment in variants predicted to be loss-of-function (pLOF) with a minor allele frequency <0.001.
Abstract: Clinical outcome upon infection with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) ranges from silent infection to lethal coronavirus disease 2019 (COVID-19). We have found an enrichment in rare variants predicted to be loss-of-function (LOF) at the 13 human loci known to govern Toll-like receptor 3 (TLR3)- and interferon regulatory factor 7 (IRF7)-dependent type I interferon (IFN) immunity to influenza virus in 659 patients with life-threatening COVID-19 pneumonia relative to 534 subjects with asymptomatic or benign infection. By testing these and other rare variants at these 13 loci, we experimentally defined LOF variants underlying autosomal-recessive or autosomal-dominant deficiencies in 23 patients (3.5%) 17 to 77 years of age. We show that human fibroblasts with mutations affecting this circuit are vulnerable to SARS-CoV-2. Inborn errors of TLR3- and IRF7-dependent type I IFN immunity can underlie life-threatening COVID-19 pneumonia in patients with no prior severe infection.
1,659 citations
••
TL;DR: The authors show that PHD1 controls muscle mass in a hydroxylation-independent manner and prevents the degradation of the leucine sensor LRS during oxygen and amino acid depletion to ensure effective mTORC1 activation in response to leucine.
Abstract: mTORC1 is an important regulator of muscle mass but how it is modulated by oxygen and nutrients is not completely understood. We show that loss of the prolyl hydroxylase domain isoform 1 oxygen sensor in mice (PHD1KO) reduces muscle mass. PHD1KO muscles show impaired mTORC1 activation in response to leucine, whereas mTORC1 activation by growth factors or eccentric contractions is preserved. The ability of PHD1 to promote mTORC1 activity is independent of its hydroxylation activity but is caused by decreased protein content of the leucyl tRNA synthetase (LRS) leucine sensor. Mechanistically, PHD1 interacts with and stabilizes LRS. This interaction is promoted during oxygen and amino acid depletion and protects LRS from degradation. Finally, muscle from aged human subjects shows lower PHD1 levels and LRS activity than muscle from young subjects. In conclusion, PHD1 ensures an optimal mTORC1 response to leucine after episodes of metabolic scarcity.
1,466 citations
••
TL;DR: This Critical Review comparatively examines the activation mechanisms of peroxymonosulfate and peroxydisulfate and the formation pathways of oxidizing species, as well as the impacts of water parameters and constituents such as pH, background organic matter, halide, phosphate, and carbonate on persulfate-driven chemistry.
Abstract: Reports that promote persulfate-based advanced oxidation process (AOP) as a viable alternative to hydrogen peroxide-based processes have been rapidly accumulating in recent water treatment literature. Various strategies to activate peroxide bonds in persulfate precursors have been proposed and the capacity to degrade a wide range of organic pollutants has been demonstrated. Compared to traditional AOPs in which hydroxyl radical serves as the main oxidant, persulfate-based AOPs have been claimed to involve different in situ generated oxidants such as sulfate radical and singlet oxygen as well as nonradical oxidation pathways. However, there exist controversial observations and interpretations around some of these claims, challenging robust scientific progress of this technology toward practical use. This Critical Review comparatively examines the activation mechanisms of peroxymonosulfate and peroxydisulfate and the formation pathways of oxidizing species. Properties of the main oxidizing species are scrutinized and the role of singlet oxygen is debated. In addition, the impacts of water parameters and constituents such as pH, background organic matter, halide, phosphate, and carbonate on persulfate-driven chemistry are discussed. The opportunity for niche applications is also presented, emphasizing the need for parallel efforts to remove currently prevalent knowledge roadblocks.
1,412 citations
••
TL;DR: Although a number of assumptions need to be reexamined, like age structure in social mixing patterns and in the distribution of mobility, hospitalization, and fatality, it is concluded that verifiable evidence exists to support the planning of emergency measures.
Abstract: The spread of coronavirus disease 2019 (COVID-19) in Italy prompted drastic measures for transmission containment. We examine the effects of these interventions, based on modeling of the unfolding epidemic. We test modeling options of the spatially explicit type, suggested by the wave of infections spreading from the initial foci to the rest of Italy. We estimate parameters of a metacommunity Susceptible-Exposed-Infected-Recovered (SEIR)-like transmission model that includes a network of 107 provinces connected by mobility at high resolution, and the critical contribution of presymptomatic and asymptomatic transmission. We estimate a generalized reproduction number ([Formula: see text] = 3.60 [3.49 to 3.84]), the spectral radius of a suitable next-generation matrix that measures the potential spread in the absence of containment interventions. The model includes the implementation of progressive restrictions after the first case confirmed in Italy (February 21, 2020) and runs until March 25, 2020. We account for uncertainty in epidemiological reporting, and time dependence of human mobility matrices and awareness-dependent exposure probabilities. We draw scenarios of different containment measures and their impact. Results suggest that the sequence of restrictions posed to mobility and human-to-human interactions have reduced transmission by 45% (42 to 49%). Averted hospitalizations are measured by running scenarios obtained by selectively relaxing the imposed restrictions and total about 200,000 individuals (as of March 25, 2020). Although a number of assumptions need to be reexamined, like age structure in social mixing patterns and in the distribution of mobility, hospitalization, and fatality, we conclude that verifiable evidence exists to support the planning of emergency measures.
948 citations
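The generalized reproduction number described above is defined as the spectral radius of a next-generation matrix. As a hedged sketch (the 3×3 matrix below is invented for illustration, not the paper's 107-province network), power iteration recovers that spectral radius in pure Python:

```python
# Toy next-generation matrix K for a hypothetical 3-province network:
# K[i][j] is the expected number of secondary infections in province i
# caused by one infected individual in province j; the off-diagonal
# entries stand in for mobility coupling between provinces.
K = [
    [3.0, 0.4, 0.1],
    [0.4, 2.8, 0.3],
    [0.1, 0.3, 2.5],
]

def spectral_radius(M, iters=200):
    """Power iteration; converges for non-negative irreducible matrices."""
    n = len(M)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)   # max-norm estimate of the eigenvalue
        v = [x / lam for x in w]       # renormalize the iterate
    return lam

R0 = spectral_radius(K)  # generalized reproduction number of the toy network
print(round(R0, 2))
```

Because the off-diagonal coupling is positive, the spectral radius exceeds the largest single-province reproduction number, which is why mobility restrictions lower the network-wide potential for spread.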
•
12 Jul 2020 • TL;DR: This work obtains tight convergence rates for FedAvg and proves that it suffers from 'client-drift' when the data is heterogeneous (non-iid), resulting in unstable and slow convergence, and proposes a new algorithm (SCAFFOLD) which uses control variates (variance reduction) to correct for the 'client-drift' in its local updates.
Abstract: Federated Averaging (FedAvg) has emerged as the algorithm of choice for federated learning due to its simplicity and low communication cost. However, in spite of recent research efforts, its performance is not fully understood. We obtain tight convergence rates for FedAvg and prove that it suffers from 'client-drift' when the data is heterogeneous (non-iid), resulting in unstable and slow convergence.
As a solution, we propose a new algorithm (SCAFFOLD) which uses control variates (variance reduction) to correct for the 'client-drift' in its local updates. We prove that SCAFFOLD requires significantly fewer communication rounds and is not affected by data heterogeneity or client sampling. Further, we show that (for quadratics) SCAFFOLD can take advantage of similarity in the clients' data, yielding even faster convergence. The latter is the first result to quantify the usefulness of local steps in distributed optimization.
913 citations
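A minimal 1-D sketch of the control-variate idea (our own toy setup, not the paper's code): each client i minimizes f_i(x) = ½(x − a_i)², so the global optimum is mean(a). Plain local steps would drift toward each client's own optimum, while SCAFFOLD's corrected steps recover the global one:

```python
# 1-D SCAFFOLD sketch, full client participation, 'option II' control
# variate update. All names (a, x, c, ci, K, lr) are our own choices.
a = [0.0, 2.0, 10.0]           # heterogeneous client optima (non-iid data)
x = 0.0                        # server model
c = 0.0                        # server control variate
ci = [0.0] * len(a)            # per-client control variates
K, lr, rounds = 10, 0.1, 100   # local steps, local step size, rounds

for _ in range(rounds):
    dx_sum, dc_sum = 0.0, 0.0
    for i in range(len(a)):
        y = x
        for _ in range(K):                        # drift-corrected local steps
            g = y - a[i]                          # local gradient of f_i
            y -= lr * (g - ci[i] + c)
        ci_new = ci[i] - c + (x - y) / (K * lr)   # option II update
        dx_sum += y - x
        dc_sum += ci_new - ci[i]
        ci[i] = ci_new
    x += dx_sum / len(a)       # server averages the client deltas
    c += dc_sum / len(a)

print(round(x, 4))  # → 4.0, the global optimum mean(a)
```

The correction term ci − c cancels each client's gradient bias, so the server iterate contracts toward mean(a) every round even though the clients' data are heterogeneous.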
••
TL;DR: The results suggest that most of the population of Geneva remained uninfected during this wave of the pandemic, despite the high prevalence of COVID-19 in the region, and highlight that the epidemic is far from ending through depletion of the susceptible population.
883 citations
••
University of Tokyo1, Max Planck Society2, Technische Universität München3, Academia Sinica Institute of Astronomy and Astrophysics4, University of California, Davis5, Subaru6, École Polytechnique Fédérale de Lausanne7, University of Cambridge8, University of California, Los Angeles9, Ludwig Maximilian University of Munich10, Niels Bohr Institute11, Leiden University12, Stanford University13, Kapteyn Astronomical Institute14
TL;DR: In this paper, the authors present a measurement of the Hubble constant and other cosmological parameters from a joint analysis of six gravitationally lensed quasars with measured time delays.
Abstract: We present a measurement of the Hubble constant ($H_{0}$) and other cosmological parameters from a joint analysis of six gravitationally lensed quasars with measured time delays. All lenses except the first are analyzed blindly with respect to the cosmological parameters. In a flat $\Lambda$CDM cosmology, we find $H_{0} = 73.3_{-1.8}^{+1.7}~\mathrm{km~s^{-1}~Mpc^{-1}}$, a 2.4% precision measurement, in agreement with local measurements of $H_{0}$ from type Ia supernovae calibrated by the distance ladder, but in $3.1\sigma$ tension with $Planck$ observations of the cosmic microwave background (CMB). This method is completely independent of both the supernovae and CMB analyses. A combination of time-delay cosmography and the distance ladder results is in $5.3\sigma$ tension with $Planck$ CMB determinations of $H_{0}$ in flat $\Lambda$CDM. We compute Bayes factors to verify that all lenses give statistically consistent results, showing that we are not underestimating our uncertainties and are able to control our systematics. We explore extensions to flat $\Lambda$CDM using constraints from time-delay cosmography alone, as well as combinations with other cosmological probes, including CMB observations from $Planck$, baryon acoustic oscillations, and type Ia supernovae. Time-delay cosmography improves the precision of the other probes, demonstrating the strong complementarity. Allowing for spatial curvature does not resolve the tension with $Planck$. Using the distance constraints from time-delay cosmography to anchor the type Ia supernova distance scale, we reduce the sensitivity of our $H_0$ inference to cosmological model assumptions. For six different cosmological models, our combined inference on $H_{0}$ ranges from $\sim73$-$78~\mathrm{km~s^{-1}~Mpc^{-1}}$, which is consistent with the local distance ladder constraints.
875 citations
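The quoted σ-tension can be estimated by treating the two measurements as Gaussians. Note the Planck 2018 flat-ΛCDM value (67.4 ± 0.5 km s⁻¹ Mpc⁻¹) is supplied from outside this abstract, and the asymmetric H0LiCOW error bar is symmetrized, so this is only a back-of-the-envelope check:

```python
import math

# Gaussian-approximation estimate of the H0 tension quoted above.
h0_lens, sig_lens = 73.3, 1.75   # H0LiCOW: 73.3 (+1.7/-1.8), symmetrized
h0_cmb, sig_cmb = 67.4, 0.5      # Planck 2018 flat-LCDM (external value)

tension = abs(h0_lens - h0_cmb) / math.sqrt(sig_lens**2 + sig_cmb**2)
print(round(tension, 2))  # → 3.24
```

The simple Gaussian estimate lands near the 3.1σ the authors derive from the full posteriors; the small difference comes from the asymmetric, non-Gaussian shape of the lensing likelihood.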
••
Paris 12 Val de Marne University1, University Medical Center Groningen2, Eindhoven University of Technology3, University Hospital of Lausanne4, French Institute of Health and Medical Research5, Università Campus Bio-Medico6, University of Belgrade7, University of Cologne8, Ludwig Maximilian University of Munich9, École Polytechnique Fédérale de Lausanne10, Turku University Hospital11, University of Regensburg12, Università telematica San Raffaele13, Paris Descartes University14, Paracelsus Private Medical University of Salzburg15, University of Bern16, Universidade Nova de Lisboa17, Medical Park18, University of Göttingen19, University of Messina20, Central European Institute of Technology21, University of Siena22, University of Turku23, University of Tübingen24
TL;DR: These updated recommendations take into account all rTMS publications, including data prior to 2014 as well as the currently reviewed literature through the end of 2018, and are based on the differences in therapeutic efficacy achieved by real vs. sham rTMS protocols.
822 citations
••
Romina Ahumada, Carlos Allende Prieto, Andres Almeida +342 more • Institutions (94)
TL;DR: This paper presents DR16, the most recent data release from the Sloan Digital Sky Surveys (SDSS-IV) and the fourth and penultimate release from the fourth phase of the survey.
Abstract: This paper documents the sixteenth data release (DR16) from the Sloan Digital Sky Surveys; the fourth and penultimate from the fourth phase (SDSS-IV). This is the first release of data from the southern hemisphere survey of the Apache Point Observatory Galactic Evolution Experiment 2 (APOGEE-2); new data from APOGEE-2 North are also included. DR16 is also notable as the final data release for the main cosmological program of the Extended Baryon Oscillation Spectroscopic Survey (eBOSS), and all raw and reduced spectra from that project are released here. DR16 also includes all the data from the Time Domain Spectroscopic Survey (TDSS) and new data from the SPectroscopic IDentification of ERosita Survey (SPIDERS) programs, both of which were co-observed on eBOSS plates. DR16 has no new data from the Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) survey (or the MaNGA Stellar Library "MaStar"). We also preview future SDSS-V operations (due to start in 2020), and summarize plans for the final SDSS-IV data release (DR17).
••
TL;DR: In this article, the authors provide a detailed review of the deformation mechanisms of high-entropy alloys (HEAs) and complex concentrated alloys (CCAs) with FCC and BCC structures, highlighting both successes and limitations.
••
TL;DR: By providing a fully integrated framework and evaluation of the impacts of high VPD on plant function, improvements in forecasting and long-term projections of climate impacts can be made.
Abstract: Recent decades have been characterized by increasing temperatures worldwide, resulting in an exponential climb in vapor pressure deficit (VPD). VPD has been identified as an increasingly important driver of plant functioning in terrestrial biomes and has been established as a major contributor in recent drought-induced plant mortality independent of other drivers associated with climate change. Despite this, few studies have isolated the physiological response of plant functioning to high VPD, thus limiting our understanding and ability to predict future impacts on terrestrial ecosystems. An abundance of evidence suggests that stomatal conductance declines under high VPD and transpiration increases in most species up until a given VPD threshold, leading to a cascade of subsequent impacts including reduced photosynthesis and growth, and higher risks of carbon starvation and hydraulic failure. Incorporation of photosynthetic and hydraulic traits in 'next-generation' land-surface models has the greatest potential for improved prediction of VPD responses at the plant- and global-scale, and will yield more mechanistic simulations of plant responses to a changing climate. By providing a fully integrated framework and evaluation of the impacts of high VPD on plant function, improvements in forecasting and long-term projections of climate impacts can be made.
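For context, VPD itself is a simple function of air temperature and relative humidity. The sketch below uses the standard Tetens (FAO-56) saturation-vapor-pressure formula, a general meteorological relation rather than anything specific to this review:

```python
import math

def vpd_kpa(temp_c, rh_percent):
    """Vapor pressure deficit in kPa from temperature (deg C) and RH (%),
    using the Tetens saturation-vapor-pressure formula (FAO-56 form)."""
    es = 0.6108 * math.exp(17.27 * temp_c / (temp_c + 237.3))  # kPa
    return es * (1.0 - rh_percent / 100.0)

# Saturation vapor pressure rises exponentially with temperature, so
# VPD climbs steeply under warming even at constant relative humidity:
print(round(vpd_kpa(25, 50), 2))  # → 1.58 kPa
print(round(vpd_kpa(35, 50), 2))  # → 2.81 kPa
```

At fixed 50% relative humidity, a 10 °C warming nearly doubles VPD, which illustrates the "exponential climb" the abstract describes.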
••
Ben-Gurion University of the Negev1, Helmholtz-Zentrum Berlin2, National Renewable Energy Laboratory3, Forschungszentrum Jülich4, University of Erlangen-Nuremberg5, University of Rome Tor Vergata6, Massachusetts Institute of Technology7, Princeton University8, Chulalongkorn University9, Wuhan University of Technology10, Karlsruhe Institute of Technology11, University of Grenoble12, Commonwealth Scientific and Industrial Research Organisation13, University of Michigan14, Sapienza University of Rome15, École Polytechnique Fédérale de Lausanne16, VU University Amsterdam17, University of Jena18, Bangor University19, University of Maryland, College Park20, University of California, Davis21, Shaanxi Normal University22, Dalian Institute of Chemical Physics23, Chinese Academy of Sciences24, University of Southern Denmark25, University of Colorado Boulder26, State University of Campinas27, Boğaziçi University28, Sungkyunkwan University29, Swansea University30, Technische Universität Darmstadt31, University of Oxford32, University of Cambridge33, Skolkovo Institute of Science and Technology34, Yonsei University35, Imperial College London36
TL;DR: A consensus between researchers in the field is reported on procedures for testing perovskite solar cell stability, which are based on the International Summit on Organic Photovoltaic Stability (ISOS) protocols, and additional procedures to account for properties specific to PSCs are proposed.
Abstract: Improving the long-term stability of perovskite solar cells is critical to the deployment of this technology. Despite the great emphasis laid on stability-related investigations, publications lack consistency in experimental procedures and parameters reported. It is therefore challenging to reproduce and compare results and thereby develop a deep understanding of degradation mechanisms. Here, we report a consensus between researchers in the field on procedures for testing perovskite solar cell stability, which are based on the International Summit on Organic Photovoltaic Stability (ISOS) protocols. We propose additional procedures to account for properties specific to PSCs, such as ion redistribution under electric fields and reversible degradation, and to distinguish ambient-induced degradation from other stress factors. These protocols are not intended as a replacement of the existing qualification standards, but rather they aim to unify the stability assessment and to understand failure modes. Finally, we identify key procedural information which we suggest reporting in publications to improve reproducibility and enable large data set analysis. Reliability of stability data for perovskite solar cells is undermined by a lack of consistency in the test conditions and reporting. This Consensus Statement outlines practices for testing and reporting stability, tailoring the ISOS protocols to perovskite devices.
••
Tokyo Institute of Technology1, Cornell University2, Yonsei University3, Uppsala University4, Linköping University5, Memorial Sloan Kettering Cancer Center6, National Institutes of Health7, University of Porto8, Rockefeller University9, École Polytechnique Fédérale de Lausanne10, Carlos III Health Institute11, University of Tokyo12, Ohio State University13, Princeton University14, University of Oviedo15, Icahn School of Medicine at Mount Sinai16, Weizmann Institute of Science17, Lucile Packard Children's Hospital18, New York University19, Champalimaud Foundation20, University of Nebraska Medical Center21, Fred Hutchinson Cancer Research Center22, University of Pennsylvania23, University of California, San Diego24, University of Vermont25, University of Southern California26, City of Hope National Medical Center27, Lawrence Berkeley National Laboratory28
TL;DR: EVP proteins can serve as reliable biomarkers for cancer detection and determining cancer type, and a panel of tumor-type-specific EVP proteins in TEs and plasma are defined, which can classify tumors of unknown primary origin.
••
TL;DR: A motivation and brief review of the ongoing effort to port Quantum ESPRESSO onto heterogeneous architectures based on hardware accelerators, which will overcome the energy constraints that are currently hindering the way toward exascale computing are presented.
Abstract: Quantum ESPRESSO is an open-source distribution of computer codes for quantum-mechanical materials modeling, based on density-functional theory, pseudopotentials, and plane waves, and renowned for its performance on a wide range of hardware architectures, from laptops to massively parallel computers, as well as for the breadth of its applications. In this paper, we present a motivation and brief review of the ongoing effort to port Quantum ESPRESSO onto heterogeneous architectures based on hardware accelerators, which will overcome the energy constraints that are currently hindering the way toward exascale computing.
•
TL;DR: This work expresses the self-attention as a linear dot-product of kernel feature maps and makes use of the associativity property of matrix products to reduce the complexity from O(N^2) to O(N), where N is the sequence length.
Abstract: Transformers achieve remarkable performance in several tasks but due to their quadratic complexity, with respect to the input's length, they are prohibitively slow for very long sequences. To address this limitation, we express the self-attention as a linear dot-product of kernel feature maps and make use of the associativity property of matrix products to reduce the complexity from $\mathcal{O}\left(N^2\right)$ to $\mathcal{O}\left(N\right)$, where $N$ is the sequence length. We show that this formulation permits an iterative implementation that dramatically accelerates autoregressive transformers and reveals their relationship to recurrent neural networks. Our linear transformers achieve similar performance to vanilla transformers and they are up to 4000x faster on autoregressive prediction of very long sequences.
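The associativity trick can be checked numerically in a few lines. This sketch uses the paper's elu(x)+1 feature map, but the dimensions and random data are toy choices of our own, and it shows the non-causal case (the autoregressive variant replaces the global sums with running prefix sums):

```python
import numpy as np

# Kernelized attention: softmax(Q K^T) V is replaced by
# phi(Q) [phi(K)^T V], and the product is re-associated so the
# N x N attention matrix is never materialized.
rng = np.random.default_rng(0)
N, D = 6, 4                                   # sequence length, feature dim
Q, K, V = rng.normal(size=(3, N, D))

def phi(x):
    return np.where(x > 0, x + 1.0, np.exp(x))  # elu(x) + 1, as in the paper

Qf, Kf = phi(Q), phi(K)

# Quadratic form: build the (N, N) attention matrix, then normalize rows.
A = Qf @ Kf.T
out_quadratic = (A @ V) / A.sum(axis=1, keepdims=True)

# Linear form: associate the other way; only a (D, D) summary and a
# D-vector normalizer are carried, so the cost is linear in N.
S = Kf.T @ V
z = Kf.sum(axis=0)
out_linear = (Qf @ S) / (Qf @ z)[:, None]

print(np.allclose(out_quadratic, out_linear))  # True: identical results
```

Both orderings compute the same quantity; only the second avoids the $N \times N$ intermediate, which is where the $\mathcal{O}(N^2) \to \mathcal{O}(N)$ reduction comes from.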
••
École Polytechnique Fédérale de Lausanne1, University of Cambridge2, Imperial College London3, University of Tokyo4, University of Geneva5, ETH Zurich6, National Presto Industries7, Tohoku University8, University of the Basque Country9, Korea Institute for Advanced Study10, Seoul National University11, University of Mainz12, University of California, Berkeley13, University of Paris14, University of Oxford15, Research Institute for Symbolic Computation16, Beihang University17, University of Zurich18, Polish Academy of Sciences19, Rutgers University20, Ikerbasque21
TL;DR: Wannier90 is an open-source computer program for calculating maximally-localised Wannier functions (MLWFs) from a set of Bloch states, and it is interfaced to many widely used electronic-structure codes thanks to its independence from the basis sets representing these Bloch states.
Abstract: Wannier90 is an open-source computer program for calculating maximally-localised Wannier functions (MLWFs) from a set of Bloch states. It is interfaced to many widely used electronic-structure codes thanks to its independence from the basis sets representing these Bloch states. In the past few years the development of Wannier90 has transitioned to a community-driven model; this has resulted in a number of new developments that have been recently released in Wannier90 v3.0. In this article we describe these new functionalities, that include the implementation of new features for wannierisation and disentanglement (symmetry-adapted Wannier functions, selectively-localised Wannier functions, selected columns of the density matrix) and the ability to calculate new properties (shift currents and Berry-curvature dipole, and a new interface to many-body perturbation theory); performance improvements, including parallelisation of the core code; enhancements in functionality (support for spinor-valued Wannier functions, more accurate methods to interpolate quantities in the Brillouin zone); improved usability (improved plotting routines, integration with high-throughput automation frameworks), as well as the implementation of modern software engineering practices (unit testing, continuous integration, and automatic source-code documentation). These new features, capabilities, and code development model aim to further sustain and expand the community uptake and range of applicability, that nowadays spans complex and accurate dielectric, electronic, magnetic, optical, topological and transport properties of materials.
••
TL;DR: A deposition method using methylammonium thiocyanate vapor treatment to convert δ-FAPbI3 to the desired pure α-phase below the thermodynamic phase-transition temperature is shown.
Abstract: Mixtures of cations or halides with FAPbI3 (where FA is formamidinium) lead to high efficiency in perovskite solar cells (PSCs) but also to blue-shifted absorption and long-term stability issues caused by loss of volatile methylammonium (MA) and phase segregation. We report a deposition method using MA thiocyanate (MASCN) or FASCN vapor treatment to convert yellow δ-FAPbI3 perovskite films to the desired pure α-phase. NMR quantifies MA incorporation into the framework. Molecular dynamics simulations show that SCN- anions promote the formation and stabilization of α-FAPbI3 below the thermodynamic phase-transition temperature. We used these low-defect-density α-FAPbI3 films to make PSCs with >23% power-conversion efficiency and long-term operational and thermal stability, as well as a low (330 millivolts) open-circuit voltage loss and a low (0.75 volt) turn-on voltage of electroluminescence.
••
University of Melbourne1, Potsdam Institute for Climate Impact Research2, International Institute for Applied Systems Analysis3, Commonwealth Scientific and Industrial Research Organisation4, ETH Zurich5, Earth System Research Laboratory6, École Polytechnique Fédérale de Lausanne7, National Oceanic and Atmospheric Administration8, Swiss Federal Laboratories for Materials Science and Technology9, Joint Global Change Research Institute10, Netherlands Environmental Assessment Agency11, Utrecht University12, Georgia Institute of Technology13
TL;DR: In this paper, the authors provided the greenhouse gas concentrations for these SSP scenarios, using the reduced-complexity climate-carbon-cycle model MAGICC7.0, and extended historical, observationally based concentration data with SSP trajectory projections from 2015 to 2500 for 43 greenhouse gases with monthly and latitudinal resolution.
Abstract: Anthropogenic increases in atmospheric greenhouse gas concentrations are the main driver of current and future climate change. The integrated assessment community has quantified anthropogenic emissions for the shared socio-economic pathway (SSP) scenarios, each of which represents a different future socio-economic projection and political environment. Here, we provide the greenhouse gas concentrations for these SSP scenarios, using the reduced-complexity climate-carbon-cycle model MAGICC7.0. We extend historical, observationally based concentration data with SSP concentration projections from 2015 to 2500 for 43 greenhouse gases with monthly and latitudinal resolution. CO2 concentrations by 2100 range from 393 to 1135 ppm for the lowest (SSP1-1.9) and highest (SSP5-8.5) emission scenarios, respectively. We also provide the concentration extensions beyond 2100 based on assumptions regarding the trajectories of fossil fuel and land use change emissions, net negative emissions, and the fraction of non-CO2 emissions. By 2150, CO2 concentrations in the lowest emission scenario are approximately 350 ppm and approximately plateau at that level until 2500, whereas the highest fossil-fuel-driven scenario projects CO2 concentrations of 1737 ppm and reaches concentrations beyond 2000 ppm by 2250. We estimate that the share of CO2 in the total radiative forcing contribution of all 43 considered long-lived greenhouse gases increases from 66% for the present day to roughly 68% to 85% by the time of maximum forcing in the 21st century. For this estimation, we updated simple radiative forcing parameterizations to reflect the Oslo Line-By-Line model results. In comparison to the representative concentration pathways (RCPs), the five main SSPs (SSP1-1.9, SSP1-2.6, SSP2-4.5, SSP3-7.0, and SSP5-8.5) are more evenly spaced and extend to lower 2100 radiative forcing and temperatures. Performing two pairs of six-member historical ensembles with CESM1.2.2, we estimate the effect on surface air temperatures of applying latitudinally and seasonally resolved GHG concentrations. We find that the ensemble differences in the March-April-May (MAM) season show a regional warming in higher northern latitudes of up to 0.4 K over the historical period, about 0.1 K when averaged latitudinally, which we estimate to be comparable to the upper bound (∼5% level) of natural variability. Compared with the relatively flat trajectory of the last 2000 years, the greenhouse gas concentrations since the onset of the industrial period and this study's projections over the next 100 to 500 years unequivocally depict a "hockey-stick" upward shape. The SSP concentration time series derived in this study provide a harmonized set of input assumptions for long-term climate science analysis; they also indicate the wide range of futures that societal developments and policy implementations can lead to, ranging from multiple degrees of future warming on one side to approximately 1.5 °C warming on the other.
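To get a feel for the forcing implied by the quoted 2100 concentration range, the classic simplified expression ΔF = 5.35 ln(C/C₀) W m⁻² (Myhre et al., 1998) can be applied. Note this is a rougher relation than the updated Oslo Line-By-Line-based parameterizations the paper itself uses, and the 278 ppm pre-industrial baseline is our own assumption:

```python
import math

# Classic simplified CO2 radiative forcing: dF = 5.35 * ln(C / C0) W m^-2.
C0 = 278.0  # assumed approximate pre-industrial CO2 concentration, ppm

for c in (393.0, 1135.0):  # 2100 range, SSP1-1.9 to SSP5-8.5
    print(round(5.35 * math.log(c / C0), 2))  # → 1.85 then 7.53
```

The two values sit close to the nominal forcing labels of the scenarios (1.9 and 8.5 W m⁻²); the remaining gap in the high scenario is made up by the non-CO2 greenhouse gases the paper also tracks.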
••
TL;DR: This review summarizes recent experimental, computational, and theoretical research efforts that have contributed to improving the understanding and ability to predict the interactions of ABL flow with wind turbines and wind farms.
Abstract: Wind energy, together with other renewable energy sources, are expected to grow substantially in the coming decades and play a key role in mitigating climate change and achieving energy sustainability. One of the main challenges in optimizing the design, operation, control, and grid integration of wind farms is the prediction of their performance, owing to the complex multiscale two-way interactions between wind farms and the turbulent atmospheric boundary layer (ABL). From a fluid mechanical perspective, these interactions are complicated by the high Reynolds number of the ABL flow, its inherent unsteadiness due to the diurnal cycle and synoptic-forcing variability, the ubiquitous nature of thermal effects, and the heterogeneity of the terrain. Particularly important is the effect of ABL turbulence on wind-turbine wake flows and their superposition, as they are responsible for considerable turbine power losses and fatigue loads in wind farms. These flow interactions affect, in turn, the structure of the ABL and the turbulent fluxes of momentum and scalars. This review summarizes recent experimental, computational, and theoretical research efforts that have contributed to improving our understanding and ability to predict the interactions of ABL flow with wind turbines and wind farms.
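One reason wake superposition matters for farm power losses can be sketched with the classic Jensen (top-hat) wake model, a far simpler tool than the LES, experiments, and theory this review covers; the thrust coefficient, wake-decay constant, and geometry below are illustrative values of our own:

```python
import math

def jensen_deficit(ct, k, x, d):
    """Fractional velocity deficit at distance x downstream of a turbine
    with rotor diameter d: (1 - sqrt(1 - ct)) / (1 + 2*k*x/d)**2."""
    return (1.0 - math.sqrt(1.0 - ct)) / (1.0 + 2.0 * k * x / d) ** 2

u0 = 8.0                     # freestream wind speed, m/s
ct, k, d = 0.8, 0.05, 80.0   # thrust coefficient, decay constant, rotor dia.

for x in (400.0, 800.0):     # 5 and 10 rotor diameters downstream
    u = u0 * (1.0 - jensen_deficit(ct, k, x, d))
    print(round(u, 2))       # → 6.03 then 6.89 m/s
```

Even ten diameters downstream the waked turbine sees markedly slower inflow, and since power scales with the cube of wind speed, such deficits translate into the considerable farm-level power losses the abstract mentions. The decay constant k is where ABL turbulence enters: more turbulent boundary layers recover the wake faster.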
••
Karlsruhe Institute of Technology1, University of Grenoble2, University of Trieste3, Qatar Airways4, University of Florida5, La Jolla Institute for Allergy and Immunology6, Rice University7, University of Toronto8, University of Pennsylvania9, Veterans Health Administration10, Ankara University11, Imperial College London12, Spanish National Research Council13, Catalan Institution for Research and Advanced Studies14, University of Texas MD Anderson Cancer Center15, Houston Methodist Hospital16, Drexel University17, École Polytechnique Fédérale de Lausanne18, University of Padua19
TL;DR: Nanoimmunity by design can help to design materials for immune modulation, either stimulating or suppressing the immune response, which would find applications in the context of vaccine development for SARS-CoV-2 or in counteracting the cytokine storm, respectively.
Abstract: The COVID-19 outbreak has fueled a global demand for effective diagnosis and treatment as well as mitigation of the spread of infection, all through large-scale approaches such as specific alternative antiviral methods and classical disinfection protocols. Based on an abundance of engineered materials identifiable by their useful physicochemical properties through versatile chemical functionalization, nanotechnology offers a number of approaches to cope with this emergency. Here, through a multidisciplinary Perspective encompassing diverse fields such as virology, biology, medicine, engineering, chemistry, materials science, and computational science, we outline how nanotechnology-based strategies can support the fight against COVID-19, as well as infectious diseases in general, including future pandemics. Considering what we know so far about the life cycle of the virus, we envision key steps where nanotechnology could counter the disease. First, nanoparticles (NPs) can offer alternative methods to classical disinfection protocols used in healthcare settings, thanks to their intrinsic antipathogenic properties and/or their ability to inactivate viruses, bacteria, fungi, or yeasts either photothermally or via photocatalysis-induced reactive oxygen species (ROS) generation. Nanotechnology tools to inactivate SARS-CoV-2 in patients could also be explored. In this case, nanomaterials could be used to deliver drugs to the pulmonary system to inhibit interaction between angiotensin-converting enzyme 2 (ACE2) receptors and viral S protein. Moreover, the concept of "nanoimmunity by design" can help us to design materials for immune modulation, either stimulating or suppressing the immune response, which would find applications in the context of vaccine development for SARS-CoV-2 or in counteracting the cytokine storm, respectively. 
In addition to disease prevention and therapeutic potential, nanotechnology has important roles in diagnostics, with potential to support the development of simple, fast, and cost-effective nanotechnology-based assays to monitor the presence of SARS-CoV-2 and related biomarkers. In summary, nanotechnology is critical in counteracting COVID-19 and will be vital when preparing for future pandemics.
••
New York University1, Johns Hopkins University2, University of Washington3, University of North Carolina at Chapel Hill4, Duke University5, Scripps Research Institute6, Hebrew University of Jerusalem7, Ohio State University8, University of California, San Francisco9, École Polytechnique Fédérale de Lausanne10, Baylor College of Medicine11, Vanderbilt University12, Rutgers University13, Swiss Institute of Bioinformatics14, Fred Hutchinson Cancer Research Center15, Rensselaer Polytechnic Institute16, Northeastern University17, Stanford University18, DSM19, Fox Chase Cancer Center20, University of Maryland, College Park21, University of Warsaw22, University of Denver23, Australian National University24, University of Kansas25, University of Zurich26, University of Massachusetts Dartmouth27, University of Tokyo28, Franklin & Marshall College29, Weizmann Institute of Science30, Lund University31, University of California, Santa Cruz32, University of California, Davis33
TL;DR: This Perspective reviews tools developed over the past five years in the Rosetta software, including over 80 methods, and discusses improvements to the score function, user interfaces and usability.
Abstract: The Rosetta software for macromolecular modeling, docking and design is extensively used in laboratories worldwide. During two decades of development by a community of laboratories at more than 60 institutions, Rosetta has been continuously refactored and extended. Its advantages include its performance and the interoperability of its broad modeling capabilities. Here we review tools developed in the last 5 years, including over 80 methods. We discuss improvements to the score function, user interfaces and usability. Rosetta is available at http://www.rosettacommons.org.
••
TL;DR: Explains why the testing strategy in Switzerland should be strengthened urgently as a core component of a combination approach to control COVID-19.
Abstract: Switzerland is among the countries with the highest number of coronavirus disease-2019 (COVID-19) cases per capita in the world. There are likely many people with undetected SARS-CoV-2 infection because testing efforts are currently not detecting all infected people, including some with clinical disease compatible with COVID-19. Testing on its own will not stop the spread of SARS-CoV-2. Testing is part of a strategy. The World Health Organization recommends a combination of measures: rapid diagnosis and immediate isolation of cases, rigorous tracking and precautionary self-isolation of close contacts. In this article, we explain why the testing strategy in Switzerland should be strengthened urgently, as a core component of a combination approach to control COVID-19.
••
University of Waterloo1, National Technical University of Athens2, University of Alaska Fairbanks3, École Polytechnique Fédérale de Lausanne4, University of Kiel5, RMIT University6, University of St. Thomas (Minnesota)7, Nanyang Technological University8, Pacific Northwest National Laboratory9, Microsoft10, University of South Florida11, University of Chile12, South Dakota State University13
TL;DR: In this paper, definitions and classification of microgrid stability are presented and discussed, considering pertinent microgrid features such as voltage-frequency dependence, unbalancing, low inertia, and generation intermittency.
Abstract: This document is a summary of a report prepared by the IEEE PES Task Force (TF) on Microgrid Stability Definitions, Analysis, and Modeling, IEEE Power and Energy Society, Piscataway, NJ, USA, Tech. Rep. PES-TR66, Apr. 2018, which defines concepts and identifies relevant issues related to stability in microgrids. In this paper, definitions and classification of microgrid stability are presented and discussed, considering pertinent microgrid features such as voltage-frequency dependence, unbalancing, low inertia, and generation intermittency. A few examples are also presented, highlighting some of the stability classes defined in this paper. Further examples, along with discussions on microgrid component modeling and stability analysis tools, can be found in the TF report.
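The low-inertia issue the abstract highlights can be illustrated with the classical swing equation. The sketch below is our own toy illustration, not taken from the Task Force report, and the parameter values are made up: it shows that for the same power imbalance, a grid with little rotating mass (low inertia constant H) suffers a much deeper frequency dip than a conventional, high-inertia one.

```python
# Illustrative only: first-order swing equation 2H/f0 * df/dt = dP
# with a simple droop-style primary response. All values are assumptions.
f0 = 50.0        # nominal frequency, Hz
dP = -0.1        # sudden generation deficit, per unit
dt, T = 0.001, 1.0

def min_frequency(H, droop=20.0):
    """Integrate the swing equation and return the frequency nadir (Hz)."""
    f, nadir = f0, f0
    for _ in range(int(T / dt)):
        imbalance = dP + droop * (f0 - f) / f0   # deficit minus droop response
        f += (f0 / (2 * H)) * imbalance * dt
        nadir = min(nadir, f)
    return nadir

# A bulk grid (H = 5 s) dips far less than an inverter-dominated
# microgrid (H = 0.5 s) for the same lost generation.
assert min_frequency(H=5.0) > min_frequency(H=0.5)
```

The same mechanism is why inverter-dominated microgrids often need synthetic inertia or fast frequency response, one of the stability issues the TF report classifies.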
••
TL;DR: In this article, a fully homomorphic encryption scheme over the torus (TFHE) is described, which revisits, generalizes and improves the FHE based on GSW and its ring variants.
Abstract: This work describes a fast fully homomorphic encryption scheme over the torus (TFHE) that revisits, generalizes and improves the fully homomorphic encryption (FHE) based on GSW and its ring variants. The simplest FHE schemes consist of bootstrapped binary gates. In this gate bootstrapping mode, we show that the scheme FHEW of Ducas and Micciancio (Eurocrypt, 2015) can be expressed only in terms of an external product between a GSW and an LWE ciphertext. As a consequence of this result and of other optimizations, we decrease the running time of their bootstrapping from 690 ms to 13 ms on a single core, using a 16 MB bootstrapping key instead of 1 GB, while preserving the security parameter. In leveled homomorphic mode, we propose two methods to manipulate packed data, in order to decrease the ciphertext expansion and to optimize the evaluation of lookup tables and arbitrary functions in RingGSW-based homomorphic schemes. We also extend the automata logic, introduced in Gama et al. (Eurocrypt, 2016), to the efficient leveled evaluation of weighted automata, and present a new homomorphic counter called TBSR that supports all the elementary operations that occur in a multiplication. These improvements speed up the evaluation of most arithmetic functions in a packed leveled mode, with a noise overhead that remains additive. We then present a new circuit bootstrapping that converts LWE ciphertexts into low-noise RingGSW ciphertexts in just 137 ms, which makes the leveled mode of TFHE composable and fast enough to speed up arithmetic functions compared to the gate bootstrapping approach. Finally, we provide an alternative practical analysis of LWE-based schemes that directly relates the security parameter to the error rate of LWE and the entropy of the LWE secret key, and we propose concrete parameter sets and timing comparisons for all our constructions.
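The LWE ciphertexts that TFHE manipulates can be demonstrated with a from-scratch toy. This is illustrative only: the parameters below are far too small to be secure, and real TFHE encodes messages on the torus rather than modulo an integer. A bit is hidden by a noisy inner product with a secret key and recovered by rounding the phase.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy parameters -- far too small for any real security.
n, q = 16, 2048          # secret dimension and ciphertext modulus

def keygen():
    """Binary LWE secret key, as used in TFHE-style schemes."""
    return rng.integers(0, 2, size=n)

def encrypt(bit, s):
    """Encrypt one bit as an LWE sample (a, b = <a,s> + e + bit*q/2 mod q)."""
    a = rng.integers(0, q, size=n)
    e = int(rng.normal(0, 2))          # small Gaussian noise
    b = (int(a @ s) + e + bit * q // 2) % q
    return a, b

def decrypt(ct, s):
    """Recover the bit by rounding the phase b - <a,s> toward 0 or q/2."""
    a, b = ct
    phase = (b - int(a @ s)) % q
    return int(q // 4 <= phase < 3 * q // 4)

s = keygen()
assert all(decrypt(encrypt(bit, s), s) == bit for bit in (0, 1, 1, 0))
```

Each homomorphic gate adds to the noise term e; bootstrapping is exactly the operation that resets this noise before it overwhelms the q/4 decryption margin, which is why the 690 ms to 13 ms speedup matters.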
••
TL;DR: Recent work on optical computing for artificial intelligence applications is reviewed and its promise and challenges are discussed.
Abstract: Artificial intelligence tasks across numerous applications require accelerators for fast and low-power execution. Optical computing systems may be able to meet these domain-specific needs but, despite half a century of research, general-purpose optical computing systems have yet to mature into a practical technology. Artificial intelligence inference, however, especially for visual computing applications, may offer opportunities for inference based on optical and photonic systems. In this Perspective, we review recent work on optical computing for artificial intelligence applications and discuss its promise and challenges.
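The inference workloads discussed above are dominated by matrix-vector products, which optical hardware computes in analog and therefore with noise. A small numerical sketch (our own illustration, not from the paper) shows a linear classifier whose predictions survive additive readout noise in the multiply:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two well-separated point clouds and a least-squares linear classifier.
X = np.vstack([rng.normal(-2, 1, (200, 8)), rng.normal(2, 1, (200, 8))])
y = np.hstack([-np.ones(200), np.ones(200)])
W, *_ = np.linalg.lstsq(X, y, rcond=None)

def analog_matvec(W, x, noise=0.1):
    """Model an optical matrix-vector product: the exact result plus
    additive Gaussian readout noise from the analog components."""
    exact = x @ W
    return exact + rng.normal(0, noise, size=np.shape(exact))

# The sign of the noisy analog product still classifies almost every point.
acc = np.mean(np.sign(analog_matvec(W, X)) == y)
assert acc > 0.95
```

This tolerance of imprecise arithmetic is one reason inference, rather than training, is seen as the near-term opportunity for optical accelerators.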
••
TL;DR: MaSIF (molecular surface interaction fingerprinting) is presented, a conceptual framework based on a geometric deep learning method to capture fingerprints that are important for specific biomolecular interactions that will lead to improvements in the understanding of protein function and design.
Abstract: Predicting interactions between proteins and other biomolecules solely based on structure remains a challenge in biology. A high-level representation of protein structure, the molecular surface, displays patterns of chemical and geometric features that fingerprint a protein's modes of interactions with other biomolecules. We hypothesize that proteins participating in similar interactions may share common fingerprints, independent of their evolutionary history. Fingerprints may be difficult to grasp by visual analysis but could be learned from large-scale datasets. We present MaSIF (molecular surface interaction fingerprinting), a conceptual framework based on a geometric deep learning method to capture fingerprints that are important for specific biomolecular interactions. We showcase MaSIF with three prediction challenges: protein pocket-ligand prediction, protein-protein interaction site prediction and ultrafast scanning of protein surfaces for prediction of protein-protein complexes. We anticipate that our conceptual framework will lead to improvements in our understanding of protein function and design.
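Once surface patches are embedded as fingerprint vectors, scanning for a binding partner reduces to a nearest-neighbor query in descriptor space. The sketch below uses random vectors as hypothetical fingerprints (stand-ins, not real MaSIF descriptors) purely to show that search step:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical 16-dim fingerprints: training a la MaSIF pushes descriptors
# of interacting patch pairs close together, so we model the true binder
# as a near-duplicate of the target patch among unrelated decoys.
d, n_decoys = 16, 1000
target = rng.normal(size=d)
partner = target + rng.normal(scale=0.1, size=d)   # true binding partner
decoys = rng.normal(size=(n_decoys, d))

library = np.vstack([decoys, partner])
dists = np.linalg.norm(library - target, axis=1)
assert np.argmin(dists) == n_decoys   # the true partner is the closest hit
```

Because the comparison is a plain vector distance, the search scales to large surface libraries, which is what enables the "ultrafast scanning" prediction challenge mentioned in the abstract.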
••
TL;DR: A neuronal model that reproduces the key events leading to the formation of inclusions that recapitulate the biochemical, structural, and organizational features of bona fide LBs is described, providing a powerful platform for evaluating therapeutics targeting α-synuclein aggregation and LB formation and for identifying and validating therapeutic targets for the treatment of Parkinson’s disease.
Abstract: Parkinson’s disease (PD) is characterized by the accumulation of misfolded and aggregated α-synuclein (α-syn) into intraneuronal inclusions named Lewy bodies (LBs). Although it is widely believed that α-syn plays a central role in the pathogenesis of PD, the processes that govern α-syn fibrillization and LB formation remain poorly understood. In this work, we sought to dissect the spatiotemporal events involved in the biogenesis of the LBs at the genetic, molecular, biochemical, structural, and cellular levels. Toward this goal, we further developed a seeding-based model of α-syn fibrillization to generate a neuronal model that reproduces the key events leading to LB formation, including seeding, fibrillization, and the formation of inclusions that recapitulate many of the biochemical, structural, and organizational features of bona fide LBs. Using an integrative omics, biochemical and imaging approach, we dissected the molecular events associated with the different stages of LB formation and their contribution to neuronal dysfunction and degeneration. In addition, we demonstrate that LB formation involves a complex interplay between α-syn fibrillization, posttranslational modifications, and interactions between α-syn aggregates and membranous organelles, including mitochondria, the autophagosome, and endolysosome. Finally, we show that the process of LB formation, rather than simply fibril formation, is one of the major drivers of neurodegeneration, through disruption of cellular functions, induction of mitochondrial damage and deficits, and synaptic dysfunction. We believe that this model represents a powerful platform to further investigate the mechanisms of LB formation and clearance and to screen and evaluate therapeutics targeting α-syn aggregation and LB formation.
••
TL;DR: Large-scale collection of data could help curb the COVID-19 pandemic, but it should not neglect privacy and public trust.
Abstract: Large-scale collection of data could help curb the COVID-19 pandemic, but it should not neglect privacy and public trust. Best practices should be identified to maintain responsible data-collection and data-processing standards at a global scale.