
Showing papers by "Georgia Institute of Technology published in 2021"


Journal ArticleDOI
23 Jun 2021
TL;DR: In this article, the authors describe the state-of-the-art in the field of federated learning from the perspective of distributed optimization, cryptography, security, differential privacy, fairness, compressed sensing, systems, information theory, and statistics.
Abstract: The term Federated Learning was coined as recently as 2016 to describe a machine learning setting where multiple entities collaborate in solving a machine learning problem, under the coordination of a central server or service provider. Each client’s raw data is stored locally and not exchanged or transferred; instead, focused updates intended for immediate aggregation are used to achieve the learning objective. Since then, the topic has gathered much interest across many different disciplines, along with the realization that solving many of these interdisciplinary problems likely requires not just machine learning but also techniques from distributed optimization, cryptography, security, differential privacy, fairness, compressed sensing, systems, information theory, statistics, and more. This monograph has contributions from leading experts across these disciplines, who describe the latest state of the art from their perspectives. These contributions have been carefully curated into a comprehensive treatment that enables the reader to understand the work that has been done and get pointers to where effort is still required to solve many of the open problems before Federated Learning can become a reality in practical systems. Researchers working in the area of distributed systems will find this monograph an enlightening read that may inspire them to work on the many challenging issues that are outlined. This monograph will get the reader up to speed quickly and easily on what is likely to become an increasingly important topic: Federated Learning.
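The setting described above, where clients compute updates on locally held data and a server only aggregates them, is commonly instantiated as federated averaging. Below is a minimal illustrative sketch on a toy least-squares problem; it is not from the monograph, and all names, data, and hyperparameters are synthetic:

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's local gradient steps on its private data (never shared)."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def fedavg_round(w_global, clients):
    """Server aggregates client models, weighted by local dataset size."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_update(w_global, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(4):  # four clients, each holding private synthetic data
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.01 * rng.normal(size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):  # communication rounds; only model weights travel
    w = fedavg_round(w, clients)
print(np.round(w, 2))  # converges toward true_w
```

Only model parameters cross the network here; the raw `(X, y)` pairs stay with each client, which is the core of the setting the monograph studies.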

2,144 citations


Journal ArticleDOI
24 Feb 2021-Nature
TL;DR: In this paper, an electron transport layer with an ideal film coverage, thickness and composition was developed by tuning the chemical bath deposition of tin dioxide (SnO2) to improve the performance of metal halide perovskite solar cells.
Abstract: Metal halide perovskite solar cells (PSCs) are an emerging photovoltaic technology with the potential to disrupt the mature silicon solar cell market. Great improvements in device performance over the past few years, thanks to the development of fabrication protocols1-3, chemical compositions4,5 and phase stabilization methods6-10, have made PSCs one of the most efficient and low-cost solution-processable photovoltaic technologies. However, the light-harvesting performance of these devices is still limited by excessive charge carrier recombination. Despite much effort, the performance of the best-performing PSCs is capped by relatively low fill factors and high open-circuit voltage deficits (the radiative open-circuit voltage limit minus the open-circuit voltage)11. Improvements in charge carrier management, which is closely tied to the fill factor and the open-circuit voltage, thus provide a path towards increasing the device performance of PSCs, and reaching their theoretical efficiency limit12. Here we report a holistic approach to improving the performance of PSCs through enhanced charge carrier management. First, we develop an electron transport layer with an ideal film coverage, thickness and composition by tuning the chemical bath deposition of tin dioxide (SnO2). Second, we decouple the passivation strategy between the bulk and the interface, leading to improved properties, while minimizing the bandgap penalty. In forward bias, our devices exhibit an electroluminescence external quantum efficiency of up to 17.2 per cent and an electroluminescence energy conversion efficiency of up to 21.6 per cent. As solar cells, they achieve a certified power conversion efficiency of 25.2 per cent, corresponding to 80.5 per cent of the thermodynamic limit for their bandgap.

1,557 citations


Journal ArticleDOI
TL;DR: 6G with additional technical requirements beyond those of 5G will enable faster and further communications to the extent that the boundary between physical and cyber worlds disappears.
Abstract: The fifth generation (5G) wireless communication networks have been deployed worldwide since 2020, and more capabilities are in the process of being standardized, such as mass connectivity, ultra-reliability, and guaranteed low latency. However, 5G will not meet all requirements of the future in 2030 and beyond, and sixth generation (6G) wireless communication networks are expected to provide global coverage, enhanced spectral/energy/cost efficiency, higher intelligence levels, better security, and more. To meet these requirements, 6G networks will rely on new enabling technologies, i.e., air interface and transmission technologies and novel network architectures, such as waveform design, multiple access, channel coding schemes, multi-antenna technologies, network slicing, cell-free architecture, and cloud/fog/edge computing. Our vision of 6G is that it will involve four new paradigm shifts. First, to satisfy the requirement of global coverage, 6G will not be limited to terrestrial communication networks, which will need to be complemented with non-terrestrial networks such as satellite and unmanned aerial vehicle (UAV) communication networks, thus achieving a space-air-ground-sea integrated communication network. Second, all spectra will be fully explored to further increase data rates and connection density, including the sub-6 GHz, millimeter wave (mmWave), terahertz (THz), and optical frequency bands. Third, facing the big datasets generated by the use of extremely heterogeneous networks, diverse communication scenarios, large numbers of antennas, wide bandwidths, and new service requirements, 6G networks will enable a new range of smart applications with the aid of artificial intelligence (AI) and big data technologies. Fourth, network security will have to be strengthened when developing 6G networks. This article provides a comprehensive survey of recent advances and future trends in these four aspects.
Clearly, 6G with additional technical requirements beyond those of 5G will enable faster and further communications to the extent that the boundary between physical and cyber worlds disappears.

935 citations


Journal ArticleDOI
TL;DR: Hifiasm as discussed by the authors is a de novo assembler that takes advantage of long high-fidelity sequence reads to faithfully represent the haplotype information in a phased assembly graph.
Abstract: Haplotype-resolved de novo assembly is the ultimate solution to the study of sequence variations in a genome. However, existing algorithms either collapse heterozygous alleles into one consensus copy or fail to cleanly separate the haplotypes to produce high-quality phased assemblies. Here we describe hifiasm, a de novo assembler that takes advantage of long high-fidelity sequence reads to faithfully represent the haplotype information in a phased assembly graph. Unlike other graph-based assemblers that only aim to maintain the contiguity of one haplotype, hifiasm strives to preserve the contiguity of all haplotypes. This feature enables the development of a graph trio binning algorithm that substantially improves on standard trio binning. On three human and five nonhuman datasets, including California redwood with a ~30-Gb hexaploid genome, we show that hifiasm frequently delivers better assemblies than existing tools and consistently outperforms others on haplotype-resolved assembly.

884 citations


Journal ArticleDOI
Carole Escartin1, Elena Galea2, Andras Lakatos3, James P. O'Callaghan4, Gabor C. Petzold5, Gabor C. Petzold6, Alberto Serrano-Pozo7, Christian Steinhäuser5, Andrea Volterra8, Giorgio Carmignoto9, Giorgio Carmignoto10, Amit Agarwal11, Nicola J. Allen12, Alfonso Araque13, Luis Barbeito14, Ari Barzilai15, Dwight E. Bergles16, Gilles Bonvento1, Arthur M. Butt17, Wei Ting Chen18, Martine Cohen-Salmon19, Colm Cunningham20, Benjamin Deneen21, Bart De Strooper22, Bart De Strooper18, Blanca Diaz-Castro23, Cinthia Farina, Marc R. Freeman24, Vittorio Gallo25, James E. Goldman26, Steven A. Goldman27, Steven A. Goldman28, Magdalena Götz29, Antonia Gutierrez30, Philip G. Haydon31, Dieter Henrik Heiland32, Elly M. Hol33, Matthew Holt18, Masamitsu Iino34, Ksenia V. Kastanenka7, Helmut Kettenmann35, Baljit S. Khakh36, Schuichi Koizumi37, C. Justin Lee, Shane A. Liddelow38, Brian A. MacVicar39, Pierre J. Magistretti40, Pierre J. Magistretti8, Albee Messing41, Anusha Mishra24, Anna V. Molofsky42, Keith K. Murai43, Christopher M. Norris44, Seiji Okada45, Stéphane H. R. Oliet46, João Filipe Oliveira47, João Filipe Oliveira48, Aude Panatier46, Vladimir Parpura49, Marcela Pekna50, Milos Pekny50, Luc Pellerin51, Gertrudis Perea52, Beatriz G. Pérez-Nievas53, Frank W. Pfrieger54, Kira E. Poskanzer42, Francisco J. Quintana7, Richard M. Ransohoff, Miriam Riquelme-Perez1, Stefanie Robel55, Christine R. Rose56, Jeffrey D. Rothstein16, Nathalie Rouach19, David H. Rowitch3, Alexey Semyanov57, Alexey Semyanov58, Swetlana Sirko29, Harald Sontheimer55, Raymond A. Swanson42, Javier Vitorica59, Ina B. Wanner36, Levi B. Wood60, Jia Qian Wu61, Binhai Zheng62, Eduardo R. Zimmer63, Robert Zorec64, Michael V. Sofroniew36, Alexei Verkhratsky65, Alexei Verkhratsky66 
Université Paris-Saclay1, Autonomous University of Barcelona2, University of Cambridge3, National Institute for Occupational Safety and Health4, University of Bonn5, German Center for Neurodegenerative Diseases6, Harvard University7, University of Lausanne8, National Research Council9, University of Padua10, Heidelberg University11, Salk Institute for Biological Studies12, University of Minnesota13, Pasteur Institute14, Tel Aviv University15, Johns Hopkins University16, University of Portsmouth17, Katholieke Universiteit Leuven18, PSL Research University19, Trinity College, Dublin20, Baylor College of Medicine21, University College London22, University of Edinburgh23, Oregon Health & Science University24, National Institutes of Health25, Columbia University26, University of Rochester27, University of Copenhagen28, Ludwig Maximilian University of Munich29, University of Málaga30, Tufts University31, University of Freiburg32, Utrecht University33, Nihon University34, Max Delbrück Center for Molecular Medicine35, University of California, Los Angeles36, University of Yamanashi37, New York University38, University of British Columbia39, King Abdullah University of Science and Technology40, University of Wisconsin-Madison41, University of California, San Francisco42, McGill University43, University of Kentucky44, Kyushu University45, University of Bordeaux46, Polytechnic Institute of Cávado and Ave47, University of Minho48, University of Alabama at Birmingham49, University of Gothenburg50, University of Poitiers51, Cajal Institute52, King's College London53, University of Strasbourg54, Virginia Tech55, University of Düsseldorf56, I.M. 
Sechenov First Moscow State Medical University57, Russian Academy of Sciences58, University of Seville59, Georgia Institute of Technology60, University of Texas Health Science Center at Houston61, University of California, San Diego62, Universidade Federal do Rio Grande do Sul63, University of Ljubljana64, University of Manchester65, Ikerbasque66
TL;DR: In this article, the authors point out the shortcomings of binary divisions of reactive astrocytes into good-vs-bad, neurotoxic-vs-neuroprotective, or A1-vs-A2.
Abstract: Reactive astrocytes are astrocytes undergoing morphological, molecular, and functional remodeling in response to injury, disease, or infection of the CNS. Although this remodeling was first described over a century ago, uncertainties and controversies remain regarding the contribution of reactive astrocytes to CNS diseases, repair, and aging. It is also unclear whether fixed categories of reactive astrocytes exist and, if so, how to identify them. We point out the shortcomings of binary divisions of reactive astrocytes into good-vs-bad, neurotoxic-vs-neuroprotective or A1-vs-A2. We advocate, instead, that research on reactive astrocytes include assessment of multiple molecular and functional parameters-preferably in vivo-plus multivariate statistics and determination of impact on pathological hallmarks in relevant models. These guidelines may spur the discovery of astrocyte-based biomarkers as well as astrocyte-targeting therapies that abrogate detrimental actions of reactive astrocytes, potentiate their neuro- and glioprotective actions, and restore or augment their homeostatic, modulatory, and defensive functions.

797 citations


Journal ArticleDOI
Richard J. Abbott1, T. D. Abbott2, Sheelu Abraham3, Fausto Acernese4, +1428 more (155 institutions)
TL;DR: In this article, the authors analyze the population of 47 compact binary mergers detected by the LIGO–Virgo network, finding that the BBH merger rate likely increases with redshift, but not faster than the star formation rate.
Abstract: We report on the population of 47 compact binary mergers detected with a false-alarm rate below 1 per year. We find evidence that a fraction of the binary black hole (BBH) systems are dynamically assembled. We estimate merger rates, finding RBBH = 23.9 (+14.3/−8.6) Gpc−3 yr−1 for BBHs and RBNS = 320 (+490/−240) Gpc−3 yr−1 for binary neutron stars. We find that the BBH rate likely increases with redshift (85% credibility) but not faster than the star formation rate (86% credibility). Additionally, we examine recent exceptional events in the context of our population models, finding that the asymmetric masses of GW190412 and the high component masses of GW190521 are consistent with our models, but the low secondary mass of GW190814 makes it an outlier.

468 citations


Journal ArticleDOI
06 Jan 2021
TL;DR: The BRAKER2 pipeline as mentioned in this paper generates and integrates external protein support into the iterative process of training and gene prediction by GeneMark-EP+ and AUGUSTUS, and it compares favorably with other pipelines such as MAKER2 in terms of accuracy and performance.
Abstract: The task of eukaryotic genome annotation remains challenging. Only a few genomes could serve as standards of annotation, achieved through a tremendous investment of human curation effort. Still, the correctness of all alternative isoforms, even in the best-annotated genomes, remains a good subject for further investigation. The new BRAKER2 pipeline generates and integrates external protein support into the iterative process of training and gene prediction by GeneMark-EP+ and AUGUSTUS. BRAKER2 continues the line started by BRAKER1, where self-training GeneMark-ET and AUGUSTUS made gene predictions supported by transcriptomic data. Among the challenges addressed by the new pipeline was the generation of reliable hints to protein-coding exon boundaries from likely homologous but evolutionarily distant proteins. In comparison with other pipelines for eukaryotic genome annotation, BRAKER2 is fully automatic. Under equal conditions it compares favorably with other pipelines, e.g. MAKER2, in terms of accuracy and performance. Development of BRAKER2 should facilitate harmonizing the annotation of protein-coding genes across the genomes of different eukaryotic species. However, we fully understand that several more innovations are needed in transcriptomic and proteomic technologies, as well as in algorithmic development, to reach the goal of highly accurate annotation of eukaryotic genomes.

455 citations


Journal ArticleDOI
TL;DR: Recent progress in deep-learning-based photonic design is reviewed by providing the historical background, algorithm fundamentals and key applications, with the emphasis on various model architectures for specific photonic tasks.
Abstract: Innovative approaches and tools play an important role in shaping design, characterization and optimization in the field of photonics. As a subset of machine learning that learns multilevel abstractions of data using hierarchically structured layers, deep learning offers an efficient means to design photonic structures, spawning data-driven approaches complementary to conventional physics- and rule-based methods. Here, we review recent progress in deep-learning-based photonic design by providing the historical background, algorithm fundamentals and key applications, with emphasis on various model architectures for specific photonic tasks. We also comment on the challenges and perspectives of this emerging research direction.

446 citations


Journal ArticleDOI
TL;DR: In this paper, a deep learning based semantic communication system, named DeepSC, for text transmission based on the Transformer, aims at maximizing the system capacity and minimizing the semantic errors by recovering the meaning of sentences, rather than bit- or symbol-errors in traditional communications.
Abstract: Recently, deep learning enabled end-to-end communication systems have been developed to merge all physical layer blocks of traditional communication systems, making joint transceiver optimization possible. Powered by deep learning, natural language processing has achieved great success in analyzing and understanding large amounts of language text. Inspired by research results in both areas, we aim to provide a new view on communication systems from the semantic level. In particular, we propose a deep learning based semantic communication system, named DeepSC, for text transmission. Based on the Transformer, DeepSC aims at maximizing the system capacity and minimizing semantic errors by recovering the meaning of sentences, rather than minimizing bit or symbol errors as in traditional communications. Moreover, transfer learning is used to make DeepSC applicable to different communication environments and to accelerate model training. To measure the performance of semantic communication accurately, we also introduce a new metric, named sentence similarity. Compared with a traditional communication system that does not consider semantic information exchange, the proposed DeepSC is more robust to channel variation and achieves better performance, especially in the low signal-to-noise ratio (SNR) regime, as demonstrated by extensive simulation results.
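The sentence-similarity metric mentioned above is computed in the paper from learned sentence representations; its core operation is a cosine similarity between embedded sentences. The sketch below illustrates only that mechanic with toy random word vectors standing in for a trained encoder, so the absolute scores are meaningless; it is not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
vocab = {}  # toy word-embedding table; DeepSC uses learned embeddings instead

def embed(sentence, dim=16):
    """Mean of per-word vectors: a crude stand-in for a sentence encoder."""
    vecs = []
    for word in sentence.lower().split():
        if word not in vocab:
            vocab[word] = rng.normal(size=dim)
        vecs.append(vocab[word])
    return np.mean(vecs, axis=0)

def sentence_similarity(sent_a, sent_b):
    """Cosine similarity of the two sentence embeddings, in [-1, 1]."""
    a, b = embed(sent_a), embed(sent_b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sent = "the cat sat on the mat"
print(sentence_similarity(sent, sent))              # close to 1.0 for identical sentences
print(sentence_similarity(sent, "dogs bark loudly"))  # lower for unrelated sentences
```

A transmitted sentence that is decoded with the same meaning but different wording can still score high under such a metric, which is why it suits semantic (rather than bit-level) evaluation.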

377 citations


Journal ArticleDOI
Richard J. Abbott1, T. D. Abbott2, Sheelu Abraham3, Fausto Acernese4, +1692 more (195 institutions)
TL;DR: In this article, the authors reported the observation of gravitational waves from two compact binary coalescences in LIGO's and Virgo's third observing run with properties consistent with neutron star-black hole (NSBH) binaries.
Abstract: We report the observation of gravitational waves from two compact binary coalescences in LIGO’s and Virgo’s third observing run with properties consistent with neutron star–black hole (NSBH) binaries. The two events are named GW200105_162426 and GW200115_042309, abbreviated as GW200105 and GW200115; the first was observed by LIGO Livingston and Virgo and the second by all three LIGO–Virgo detectors. The source of GW200105 has component masses 8.9 (+1.2/−1.5) M⊙ and 1.9 (+0.3/−0.2) M⊙, whereas the source of GW200115 has component masses 5.7 (+1.8/−2.1) M⊙ and 1.5 (+0.7/−0.3) M⊙ (all measurements quoted at the 90% credible level). The probability that the secondary’s mass is below the maximal mass of a neutron star is 89%–96% and 87%–98%, respectively, for GW200105 and GW200115, with the ranges arising from different astrophysical assumptions. The source luminosity distances are 280 (+110/−110) Mpc and 300 (+150/−100) Mpc, respectively. The magnitude of the primary spin of GW200105 is less than 0.23 at the 90% credible level, and its orientation is unconstrained. For GW200115, the primary spin has a negative spin projection onto the orbital angular momentum at 88% probability. We are unable to constrain the spin or tidal deformation of the secondary component for either event. We infer an NSBH merger rate density of 45 (+75/−33) Gpc−3 yr−1 when assuming that GW200105 and GW200115 are representative of the NSBH population, or 130 (+112/−69) Gpc−3 yr−1 under the assumption of a broader distribution of component masses.

374 citations


Book ChapterDOI
27 Sep 2021
TL;DR: TransFuse as discussed by the authors combines Transformers and CNNs in a parallel style, where both global dependency and low-level spatial details can be efficiently captured in a much shallower manner.
Abstract: Medical image segmentation - the prerequisite of numerous clinical needs - has benefited significantly from recent advances in convolutional neural networks (CNNs). However, CNNs exhibit general limitations in modeling explicit long-range relations, and existing cures, which resort to building deep encoders with aggressive downsampling operations, lead to redundant, deepened networks and loss of localized details. Hence, the segmentation task awaits a better solution that improves the efficiency of modeling global contexts while maintaining a strong grasp of low-level details. In this paper, we propose a novel parallel-in-branch architecture, TransFuse, to address this challenge. TransFuse combines Transformers and CNNs in a parallel style, where both global dependency and low-level spatial details can be efficiently captured in a much shallower manner. In addition, a novel fusion technique, the BiFusion module, is created to efficiently fuse the multi-level features from both branches. Extensive experiments demonstrate that TransFuse achieves new state-of-the-art results on both 2D and 3D medical image segmentation datasets, including polyp, skin lesion, hip, and prostate segmentation, with a significant reduction in parameters and improvement in inference speed.

Journal ArticleDOI
Urmo Võsa1, Annique Claringbould2, Annique Claringbould3, Harm-Jan Westra1, Marc Jan Bonder1, Patrick Deelen, Biao Zeng4, Holger Kirsten5, Ashis Saha6, Roman Kreuzhuber7, Roman Kreuzhuber3, Roman Kreuzhuber8, Seyhan Yazar9, Harm Brugge1, Roy Oelen1, Dylan H. de Vries1, Monique G. P. van der Wijst1, Silva Kasela10, Natalia Pervjakova10, Isabel Alves11, Marie-Julie Favé11, Mawusse Agbessi11, Mark W. Christiansen12, Rick Jansen13, Ilkka Seppälä, Lin Tong14, Alexander Teumer15, Katharina Schramm16, Gibran Hemani17, Joost Verlouw18, Hanieh Yaghootkar19, Hanieh Yaghootkar20, Hanieh Yaghootkar21, Reyhan Sönmez Flitman22, Reyhan Sönmez Flitman23, Andrew A. Brown24, Andrew A. Brown25, Viktorija Kukushkina10, Anette Kalnapenkis10, Sina Rüeger22, Eleonora Porcu22, Jaanika Kronberg10, Johannes Kettunen, Bernett Lee26, Futao Zhang27, Ting Qi27, Jose Alquicira Hernandez9, Wibowo Arindrarto28, Frank Beutner5, Peter A C 't Hoen29, Joyce B. J. van Meurs18, Jenny van Dongen13, Maarten van Iterson28, Morris A. Swertz, Julia Dmitrieva30, Mahmoud Elansary30, Benjamin P. Fairfax31, Michel Georges30, Bastiaan T. Heijmans28, Alex W. Hewitt32, Mika Kähönen, Yungil Kim6, Yungil Kim33, Julian C. Knight31, Peter Kovacs5, Knut Krohn5, Shuang Li1, Markus Loeffler5, Urko M. Marigorta34, Urko M. Marigorta4, Hailang Mei28, Yukihide Momozawa30, Martina Müller-Nurasyid16, Matthias Nauck15, Michel G. Nivard35, Brenda W.J.H. Penninx13, Jonathan K. Pritchard36, Olli T. Raitakari37, Olli T. Raitakari38, Olaf Rötzschke26, Eline Slagboom28, Coen D.A. Stehouwer39, Michael Stumvoll5, Patrick F. Sullivan40, Joachim Thiery5, Anke Tönjes5, Jan H. Veldink41, Uwe Völker15, Robert Warmerdam1, Cisca Wijmenga1, Morris Swertz, Anand Kumar Andiappan26, Grant W. Montgomery27, Samuli Ripatti42, Markus Perola43, Zoltán Kutalik22, Emmanouil T. Dermitzakis25, Emmanouil T. Dermitzakis23, Sven Bergmann23, Sven Bergmann22, Timothy M. Frayling21, Holger Prokisch44, Habibul Ahsan14, Brandon L. 
Pierce14, Terho Lehtimäki, Dorret I. Boomsma13, Bruce M. Psaty12, Sina A. Gharib12, Philip Awadalla11, Lili Milani10, Willem H. Ouwehand45, Willem H. Ouwehand7, Willem H. Ouwehand8, Kate Downes8, Kate Downes7, Oliver Stegle3, Oliver Stegle46, Alexis Battle6, Peter M. Visscher27, Jian Yang27, Jian Yang47, Markus Scholz5, Joseph E. Powell48, Joseph E. Powell9, Greg Gibson4, Tõnu Esko10, Lude Franke1 
TL;DR: In this article, the authors performed cis- and trans-expression quantitative trait locus (eQTL) analyses using blood-derived expression from 31,684 individuals through the eQTLGen Consortium.
Abstract: Trait-associated genetic variants affect complex phenotypes primarily via regulatory mechanisms on the transcriptome. To investigate the genetics of gene expression, we performed cis- and trans-expression quantitative trait locus (eQTL) analyses using blood-derived expression from 31,684 individuals through the eQTLGen Consortium. We detected cis-eQTL for 88% of genes, and these were replicable in numerous tissues. Distal trans-eQTL (detected for 37% of 10,317 trait-associated variants tested) showed lower replication rates, partially due to low replication power and confounding by cell type composition. However, replication analyses in single-cell RNA-seq data prioritized intracellular trans-eQTL. Trans-eQTL exerted their effects via several mechanisms, primarily through regulation by transcription factors. Expression of 13% of the genes correlated with polygenic scores for 1,263 phenotypes, pinpointing potential drivers for those traits. In summary, this work represents a large eQTL resource, and its results serve as a starting point for in-depth interpretation of complex phenotypes.
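In its simplest additive form, the eQTL test described above regresses a gene's expression level on genotype dosage (0, 1 or 2 copies of an allele) across individuals. The sketch below illustrates that statistical idea on synthetic data; it is not the consortium's pipeline, which additionally handles covariates, multiple testing, and cross-cohort meta-analysis:

```python
import math
import numpy as np

rng = np.random.default_rng(42)
n = 500                                    # individuals
# Genotype dosage at one SNP (0/1/2 copies), Hardy-Weinberg with allele freq 0.3
dosage = rng.choice([0, 1, 2], size=n, p=[0.49, 0.42, 0.09]).astype(float)
beta_true = 0.4                            # assumed true effect of the variant
expression = beta_true * dosage + rng.normal(size=n)  # expression with noise

# Additive-model eQTL test: least-squares slope and its standard error
x = dosage - dosage.mean()
y = expression - expression.mean()
beta_hat = (x @ y) / (x @ x)
resid = y - beta_hat * x
se = math.sqrt((resid @ resid) / (n - 2) / (x @ x))
z = beta_hat / se
p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided, normal approximation

print(f"beta_hat = {beta_hat:.2f}, p = {p_value:.1e}")
```

A cis-eQTL scan repeats this test for variants near each gene; trans-eQTL scans do the same genome-wide, which is why replication power and cell-type confounding become the dominant concerns at that scale.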

Journal ArticleDOI
Richard J. Abbott1, T. D. Abbott2, Sheelu Abraham3, Fausto Acernese4, +1335 more (144 institutions)
TL;DR: The data recorded by these instruments during their first and second observing runs are described, including the gravitational-wave strain arrays, released as time series sampled at 16384 Hz.

Journal ArticleDOI
TL;DR: A comprehensive review of the surface reconstruction of transition metal-based OER catalysts including oxides, non-oxides, hydroxides and alloys can be found in this article.
Abstract: Rapid progress has recently been made in the development of cost-effective and high-efficiency transition metal-based electrocatalysts for sustainable energy and related conversion technologies. In this regard, structure-activity relationships based on several descriptors have already been proposed to rationally design electrocatalysts. However, the dynamic reconstruction of the surface structures and compositions of catalysts during electrocatalytic water oxidation, especially during the anodic oxygen evolution reaction (OER), complicates the streamlined prediction of catalytic activity. With the achievements in operando and in situ techniques, it has been found that electrocatalysts undergo surface reconstruction to form the actual active species in situ, accompanied by an increase in their oxidation state, during the OER in alkaline solution. Accordingly, a thorough understanding of the surface reconstruction process plays a critical role in establishing unambiguous structure-composition-property relationships in pursuit of high-efficiency electrocatalysts. However, several issues still need to be explored before high electrocatalytic activities can be realized: (1) the identification of initiators and pathways for surface reconstruction, (2) establishing the relationships between structure, composition, and electrocatalytic activity, and (3) the rational manipulation of in situ catalyst surface reconstruction. In this review, recent progress in the surface reconstruction of transition metal-based OER catalysts, including oxides, non-oxides, hydroxides and alloys, is summarized, emphasizing the fundamental understanding of reconstruction behavior from the original precatalysts to the actual catalysts based on operando analysis and theoretical calculations.
The state-of-the-art strategies to tailor surface reconstruction, such as substituting/doping with metals, introducing anions, incorporating oxygen vacancies, tuning morphologies and exploiting plasmonic/thermal/photothermal effects, are then introduced. Notably, comprehensive operando/in situ characterization together with computational calculations is key to unveiling the improvement mechanisms for the OER. By presenting this progress together with strategies, insights, techniques, and perspectives, this review provides a comprehensive understanding of surface reconstruction in transition metal-based OER catalysts and guidelines for their future rational development.

Journal ArticleDOI
TL;DR: This review article provides a comprehensive account of recent progress in the development of noble-metal nanocrystals with controlled shapes, in addition to their remarkable performance in a large number of catalytic and electrocatalytic reactions.
Abstract: The successful synthesis of noble-metal nanocrystals with controlled shapes offers many opportunities to not only maneuver their physicochemical properties but also optimize their figures of merit in a wide variety of applications. In particular, heterogeneous catalysis and surface science have benefited enormously from the availability of this new class of nanomaterials as the atomic structure presented on the surface of a nanocrystal is ultimately determined by its geometric shape. The immediate advantages may include significant enhancement in catalytic activity and/or selectivity and substantial reduction in materials cost while providing a well-defined model system for mechanistic study. With a focus on the monometallic system, this review article provides a comprehensive account of recent progress in the development of noble-metal nanocrystals with controlled shapes, in addition to their remarkable performance in a large number of catalytic and electrocatalytic reactions. We hope that this review article offers the impetus and roadmap for the development of next-generation catalysts vital to a broad range of industrial applications.

Journal ArticleDOI
TL;DR: In this article, an efficient single-atom electrocatalyst with Fe-N-C sites embedded in 3D N-doped ordered mesoporous carbon framework has been proposed for oxygen reduction reaction (ORR) in alkaline electrolyte.
Abstract: Single-atom Fe-N-C electrocatalysts have emerged as the most promising oxygen reduction reaction (ORR) catalysts. However, low Fe loading and the inaccessibility of Fe-N-C sites limit their overall ORR activity. Here, we report an efficient single-atom electrocatalyst (Fe-N-C/N-OMC) with Fe-N-C sites embedded in a three-dimensional (3D) N-doped ordered mesoporous carbon framework. Fe-N-C/N-OMC shows high half-wave potential, kinetic current density, turnover frequency and mass activity towards the ORR in alkaline electrolyte. Experiments and theoretical calculations suggest that the ultra-high ORR activity stems from the boosted intrinsic activity of FeN4 sites by graphitic N dopants; the high density of accessible active sites generated by high Fe and N loadings and the ordered mesoporous carbon structure; and facilitated mass and electron transport in the 3D interconnected pores. Fe-N-C/N-OMC also shows ORR activity comparable to that of Pt/C in acidic electrolyte. As the cathode of a zinc-air battery, Fe-N-C/N-OMC exhibits a high open-circuit voltage, high power density and remarkable durability.

Journal ArticleDOI
TL;DR: A comprehensive analysis of scientific information and experimental data reported in recent years on the applications of peracetic acid (PAA)-based advanced oxidation processes (AOPs) for the removal of chemical and microbiological micropollutants from water and wastewater is presented to facilitate an in-depth understanding of these processes.

Journal ArticleDOI
TL;DR: The Texas freeze of February 2021 left more than 4.5 million customers (more than 10 million people) without electricity at its peak, some for several days as discussed by the authors, and had cascading effects on other services reliant upon electricity including drinking water treatment and medical services.
Abstract: The Texas freeze of February 2021 left more than 4.5 million customers (more than 10 million people) without electricity at its peak, some for several days. The freeze had cascading effects on other services reliant upon electricity including drinking water treatment and medical services. Economic losses from lost output and damage are estimated to be $130 billion in Texas alone. In the wake of the freeze, there has been major fallout among regulators and utilities as actors sought to apportion blame and utilities and generators began to settle up accounts. This piece offers a retrospective on what caused the blackouts and the knock-on effects on other services, the subsequent financial and political effects of the freeze, and the implications for Texas and the country going forward. Texas failed to sufficiently winterize its electricity and gas systems after 2011. Feedback between failures in the two systems made the situation worse. Overall, the state faced outages of 30 GW of electricity as demand reached unprecedented highs. The gap between production and demand forced the non-profit grid manager, the Electric Reliability Council of Texas (ERCOT), to cut off supply to millions of customers or face a systems collapse that by some accounts was minutes away. The 2021 freeze suggests a need to rethink the state’s regulatory approach to energy to avoid future such outcomes. Weatherization, demand response, and expanded interstate interconnections are potential solutions Texas should consider to avoid generation losses, reduce demand, and tap neighboring states’ capacity.

Journal ArticleDOI
TL;DR: An overview of the PPOH system is provided, the antibiotic degradation performance and internal mechanisms of the coupled oxidation system are analyzed comprehensively, and future research directions for the practical treatment of antibiotic wastewater are outlined.

Journal ArticleDOI
TL;DR: This work addresses these issues by providing a standard, multi-institutional database and a novel scoring metric through a public competition, the PhysioNet/Computing in Cardiology Challenge 2020, setting a new bar in reproducibility for public data science competitions.
Abstract: Objective: Vast 12-lead ECG repositories provide opportunities to develop new machine learning approaches for creating accurate and automatic diagnostic systems for cardiac abnormalities. However, most 12-lead ECG classification studies are trained, tested, or developed in single, small, or relatively homogeneous datasets. In addition, most algorithms focus on identifying small numbers of cardiac arrhythmias that do not represent the complexity and difficulty of ECG interpretation. This work addresses these issues by providing a standard, multi-institutional database and a novel scoring metric through a public competition: the PhysioNet/Computing in Cardiology Challenge 2020. Approach: A total of 66,361 12-lead ECG recordings were sourced from six hospital systems from four countries across three continents; 43,101 recordings were posted publicly with a focus on 27 diagnoses. For the first time in a public competition, we required teams to publish open-source code for both training and testing their algorithms, ensuring full scientific reproducibility. Main results: A total of 217 teams submitted 1,395 algorithms during the Challenge, representing a diversity of approaches for identifying cardiac abnormalities from both academia and industry. As with previous Challenges, high-performing algorithms exhibited significant drops (≈ 10%) in performance on the hidden test data. Significance: Data from diverse institutions allowed us to assess algorithmic generalizability. A novel evaluation metric considered different misclassification errors for different cardiac abnormalities, capturing the outcomes and risks of different diagnoses. Requiring both trained models and code for training models improved the generalizability of submissions, setting a new bar in reproducibility for public data science competitions.
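The evaluation metric described above weights different misclassifications by how clinically similar the diagnoses are, then rescales the score so that an always-inactive classifier scores 0 and a perfect classifier scores 1. The sketch below illustrates that general scheme with a hypothetical 2×2 weight matrix; it is not the Challenge's exact weight definition:

```python
import numpy as np

def weighted_score(A, W):
    """Unnormalized score: element-wise product of a multi-class
    confusion matrix A with a clinical weight matrix W, summed."""
    return float(np.sum(W * A))

def normalized_score(A, W, A_inactive, A_perfect):
    """Rescale so an always-inactive classifier scores 0 and a
    perfect classifier scores 1."""
    s, s0, s1 = (weighted_score(x, W) for x in (A, A_inactive, A_perfect))
    return (s - s0) / (s1 - s0)

# Toy 2-class example: full reward on the diagonal,
# partial credit (0.5) for confusing clinically similar classes
W = np.array([[1.0, 0.5],
              [0.5, 1.0]])
A_perfect  = np.array([[3, 0], [0, 2]])   # all 5 recordings classified correctly
A_inactive = np.array([[0, 0], [0, 0]])   # no positive labels predicted
A          = np.array([[2, 1], [0, 2]])   # one partial-credit error
print(normalized_score(A, W, A_inactive, A_perfect))
```

A classifier that makes one "similar-class" error still receives most of the credit (0.9 here), which is the property the Challenge metric uses to reflect differing clinical risks.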

Journal ArticleDOI
TL;DR: In this article, a facile and cost-effective method is developed to construct an all-round hydrogel electrolyte by using cotton as the raw material, tetraethyl orthosilicate as the crosslinker, and glycerol as the antifreezing agent.
Abstract: Flexible energy storage devices are at the forefront of next-generation power supplies, one of the most important components of which is the gel electrolyte. However, shortcomings exist, more or less, for all the currently developed hydrogel electrolytes. Herein, a facile and cost-effective method is developed to construct an all-round hydrogel electrolyte by using cotton as the raw material, tetraethyl orthosilicate as the crosslinker, and glycerol as the antifreezing agent. The obtained hydrogel electrolyte has high ionic conductivity, excellent mechanical properties (e.g., high tensile strength and elasticity), ultralow freezing point, good self-healing ability, high adhesion, and good heat-resistance ability. Remarkably, this hydrogel electrolyte can provide a record-breaking high ionic conductivity of 19.4 mS cm-1 at -40 °C compared with previously reported aqueous electrolytes for zinc-ion batteries. In addition, this hydrogel electrolyte can significantly inhibit zinc dendritic growth and parasitic side reactions from -40 to 60 °C. With this hydrogel electrolyte, a flexible quasi-solid-state Zn-MnO2 battery is assembled, which shows remarkable energy densities from -40 to 60 °C. The battery also exhibits outstanding cycling durability and has high endurance under various harsh conditions. This work opens new opportunities for the development of hydrogel electrolytes.

Journal ArticleDOI
TL;DR: The Roadmap on Magnonics as mentioned in this paper is a collection of 22 sections written by leading experts in this field who review and discuss the current status but also present their vision of future perspectives.
Abstract: Magnonics is a rather young physics research field in nanomagnetism and nanoscience that addresses the use of spin waves (magnons) to transmit, store, and process information. After several papers and review articles published in the last decade, with a steady increase in the number of citations, we are presenting the first Roadmap on Magnonics. This is a collection of 22 sections written by leading experts in this field who review and discuss the current status but also present their vision of future perspectives. Today, the principal challenges in applied magnonics are the excitation of sub-100 nm wavelength magnons, their manipulation on the nanoscale and the creation of sub-micrometre devices using low-Gilbert damping magnetic materials and the interconnections to standard electronics. In this respect, magnonics offers lower energy consumption, easier integrability and compatibility with CMOS structure, reprogrammability, shorter wavelength, smaller device features, anisotropic properties, negative group velocity, non-reciprocity and efficient tunability by various external stimuli to name a few. Hence, despite being a young research field, magnonics has come a long way since its early inception. This Roadmap represents a milestone for future emerging research directions in magnonics and hopefully it will be followed by a series of articles on the same topic.

Journal ArticleDOI
TL;DR: In this paper, a triboelectric nanogenerator (TENG) is used for energy harvesting and self-powered sensing in the field of intelligent sports, and the working mechanism of TENG and its association with athletic big data are introduced.
Abstract: In the new era of the Internet-of-Things, athletic big data collection and analysis based on widely distributed sensing networks are particularly important in the development of intelligent sports. Conventional sensors usually require an external power supply, with limitations such as limited lifetime and high maintenance cost. As a newly developed mechanical energy harvesting and self-powered sensing technology, the triboelectric nanogenerator (TENG) shows great potential to overcome these limitations. Most importantly, TENGs can be fabricated using wood, paper, fibers, and polymers, which are the most frequently used materials for sports. Recent progress on the development of TENGs for the field of intelligent sports is summarized. First, the working mechanism of TENG and its association with athletic big data are introduced. Subsequently, the development of TENG-based sports sensing systems, including smart sports facilities and wearable equipment, is highlighted. Finally, the remaining challenges and open opportunities are discussed.

Journal ArticleDOI
M. G. Aartsen1, Rasha Abbasi2, Markus Ackermann, Jenni Adams1  +440 moreInstitutions (60)
TL;DR: In this article, the authors present an overview of a next-generation instrument, IceCube-Gen2, which will sharpen our understanding of the processes and environments that govern the Universe at the highest energies.
Abstract: The observation of electromagnetic radiation from radio to γ-ray wavelengths has provided a wealth of information about the Universe. However, at PeV (10^15 eV) energies and above, most of the Universe is impenetrable to photons. New messengers, namely cosmic neutrinos, are needed to explore the most extreme environments of the Universe where black holes, neutron stars, and stellar explosions transform gravitational energy into non-thermal cosmic rays. These energetic particles have millions of times higher energies than those produced in the most powerful particle accelerators on Earth. As neutrinos can escape from regions otherwise opaque to radiation, they allow a unique view deep into exploding stars and the vicinity of the event horizons of black holes. The discovery of cosmic neutrinos with IceCube has opened this new window on the Universe. IceCube has been successful in finding first evidence for cosmic particle acceleration in the jet of an active galactic nucleus. Yet, ultimately, its sensitivity is too limited to detect even the brightest neutrino sources with high significance, or to detect populations of less luminous sources. In this white paper, we present an overview of a next-generation instrument, IceCube-Gen2, which will sharpen our understanding of the processes and environments that govern the Universe at the highest energies. IceCube-Gen2 is designed to: (a) resolve the high-energy neutrino sky from TeV to EeV energies; (b) investigate cosmic particle acceleration through multi-messenger observations; (c) reveal the sources and propagation of the highest energy particles in the Universe; and (d) probe fundamental physics with high-energy neutrinos. IceCube-Gen2 will enhance the existing IceCube detector at the South Pole. It will increase the annual rate of observed cosmic neutrinos by a factor of ten compared to IceCube, and will be able to detect sources five times fainter than its predecessor.
Furthermore, through the addition of a radio array, IceCube-Gen2 will extend the energy range by several orders of magnitude compared to IceCube. Construction will take 8 years and cost about $350M. The goal is to have IceCube-Gen2 fully operational by 2033. IceCube-Gen2 will play an essential role in shaping the new era of multi-messenger astronomy, fundamentally advancing our knowledge of the high-energy Universe. This challenging mission can be fully addressed only through the combination of the information from the neutrino, electromagnetic, and gravitational wave emission of high-energy sources, in concert with the new survey instruments across the electromagnetic spectrum and gravitational wave detectors which will be available in the coming years.

Journal ArticleDOI
B. P. Abbott1, Richard J. Abbott1, T. D. Abbott2, Sheelu Abraham3  +1273 moreInstitutions (140)
TL;DR: In this article, the first and second observing runs of the Advanced LIGO and Virgo detector network were used to obtain the first standard-siren measurement of the Hubble constant (H0).
Abstract: This paper presents the gravitational-wave measurement of the Hubble constant (H0) using the detections from the first and second observing runs of the Advanced LIGO and Virgo detector network. The presence of the transient electromagnetic counterpart of the binary neutron star GW170817 led to the first standard-siren measurement of H0. Here we additionally use binary black hole detections in conjunction with galaxy catalogs and report a joint measurement. Our updated measurement is H0 = km s−1 Mpc−1 (68.3% of the highest density posterior interval with a flat-in-log prior), which is an improvement by a factor of 1.04 (about 4%) over the GW170817-only value of km s−1 Mpc−1. A significant additional contribution currently comes from GW170814, a loud and well-localized detection from a part of the sky thoroughly covered by the Dark Energy Survey. With numerous detections anticipated over the upcoming years, an exhaustive understanding of other systematic effects is also going to become increasingly important. These results establish the path to cosmology using gravitational-wave observations with and without transient electromagnetic counterparts.
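The standard-siren logic underlying this measurement is that the gravitational waveform yields the luminosity distance directly, while the electromagnetic counterpart (or a galaxy catalog) supplies the redshift; at low redshift the Hubble law then fixes H0. Schematically:

```latex
% Low-redshift Hubble law: recession velocity vs. luminosity distance
v_H \simeq c\,z \simeq H_0\, d_L
\qquad\Longrightarrow\qquad
H_0 \simeq \frac{c\,z}{d_L}\,,
```

with the luminosity distance d_L inferred from the gravitational-wave amplitude and the redshift z from the host galaxy, so no external distance-ladder calibration is needed.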

Journal ArticleDOI
Abstract: Author(s): Albertus, P; Anandan, V; Ban, C; Balsara, N; Belharouak, I; Buettner-Garrett, J; Chen, Z; Daniel, C; Doeff, M; Dudney, NJ; Dunn, B; Harris, SJ; Herle, S; Herbert, E; Kalnaus, S; Libera, JA; Lu, D; Martin, S; McCloskey, BD; McDowell, MT; Meng, YS; Nanda, J; Sakamoto, J; Self, EC; Tepavcevic, S; Wachsman, E; Wang, C; Westover, AS; Xiao, J; Yersak, T

Journal ArticleDOI
TL;DR: DeepPurpose as discussed by the authors is a comprehensive and easy-to-use DL library for drug-target interaction prediction, which supports training of customized DTI prediction models by implementing 15 compound and protein encoders and over 50 neural architectures.
Abstract: Accurate prediction of drug-target interactions (DTI) is crucial for drug discovery. Recently, deep learning (DL) models have shown promising performance for DTI prediction. However, these models can be difficult to use for both computer scientists entering the biomedical field and bioinformaticians with limited DL experience. We present DeepPurpose, a comprehensive and easy-to-use DL library for DTI prediction. DeepPurpose supports training of customized DTI prediction models by implementing 15 compound and protein encoders and over 50 neural architectures, along with providing many other useful features. We demonstrate state-of-the-art performance of DeepPurpose on several benchmark datasets. Availability and implementation https://github.com/kexinhuang12345/DeepPurpose. Supplementary information Supplementary data are available at Bioinformatics online.
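The design described above pairs a compound encoder with a protein encoder and feeds the concatenated embeddings to a prediction head. The toy sketch below illustrates only that encoder-pairing pattern; the function names (encode_compound, encode_protein, predict_affinity) and the encoders themselves are hypothetical stand-ins and do not reproduce DeepPurpose's actual API:

```python
import numpy as np

def encode_compound(smiles, dim=8):
    """Toy fingerprint: hash SMILES characters into a fixed-length count
    vector (stand-in for a real compound encoder such as Morgan fingerprints)."""
    v = np.zeros(dim)
    for ch in smiles:
        v[ord(ch) % dim] += 1.0
    return v

def encode_protein(sequence, dim=8):
    """Toy amino-acid composition vector (stand-in for a CNN or
    transformer protein-sequence encoder)."""
    v = np.zeros(dim)
    for aa in sequence:
        v[ord(aa) % dim] += 1.0
    return v / max(len(sequence), 1)

def predict_affinity(smiles, sequence, W, b):
    """Concatenate the two embeddings and apply one linear layer
    (a real model would use an MLP trained on binding data)."""
    x = np.concatenate([encode_compound(smiles), encode_protein(sequence)])
    return float(x @ W + b)

# Untrained, randomly initialized head: output is meaningless but shows the flow
rng = np.random.default_rng(0)
W = rng.normal(size=16)
print(predict_affinity("CCO", "MKTAYIAKQR", W, b=0.0))
```

Swapping in different `encode_compound`/`encode_protein` implementations while keeping the pairing fixed is the modularity that lets a library like this offer many encoder-architecture combinations.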

Journal ArticleDOI
TL;DR: This Review discusses the use of self-powered technology that harvests energy from the body and its ambient environment to power implantable and wearable CEDs, and presents the current challenges and future perspectives for the field.
Abstract: Cardiovascular electronic devices have enormous benefits for health and quality of life but the long-term operation of these implantable and wearable devices remains a huge challenge owing to the limited life of batteries, which increases the risk of device failure and causes uncertainty among patients. A possible approach to overcoming the challenge of limited battery life is to harvest energy from the body and its ambient environment, including biomechanical, solar, thermal and biochemical energy, so that the devices can be self-powered. This strategy could allow the development of advanced features for cardiovascular electronic devices, such as extended life, miniaturization to improve comfort and conformability, and functions that integrate with real-time data transmission, mobile data processing and smart power utilization. In this Review, we present an update on self-powered cardiovascular implantable electronic devices and wearable active sensors. We summarize the existing self-powered technologies and their fundamental features. We then review the current applications of self-powered electronic devices in the cardiovascular field, which have two main goals. The first is to harvest energy from the body as a sustainable power source for cardiovascular electronic devices, such as cardiac pacemakers. The second is to use self-powered devices with low power consumption and high performance as active sensors to monitor physiological signals (for example, for active endocardial monitoring). Finally, we present the current challenges and future perspectives for the field. The design and limited life of batteries curtail the use of many cardiovascular electronic devices (CEDs). In this Review, Li and colleagues discuss the use of self-powered technology that harvests energy from the body and its ambient environment to power implantable and wearable CEDs.

Journal ArticleDOI
TL;DR: In 2018, it was discovered how easy it is to use this technology for unethical and malicious applications, such as the spread of misinformation, impersonation of political leaders, and the defamation of innocent individuals as discussed by the authors.
Abstract: Generative deep learning algorithms have progressed to a point where it is difficult to tell the difference between what is real and what is fake. In 2018, it was discovered how easy it is to use this technology for unethical and malicious applications, such as the spread of misinformation, impersonation of political leaders, and the defamation of innocent individuals. Since then, these “deepfakes” have advanced significantly. In this article, we explore the creation and detection of deepfakes and provide an in-depth view as to how these architectures work. The purpose of this survey is to provide the reader with a deeper understanding of (1) how deepfakes are created and detected, (2) the current trends and advancements in this domain, (3) the shortcomings of the current defense solutions, and (4) the areas that require further research and attention.

Journal ArticleDOI
01 Jul 2021
TL;DR: In this paper, a photocatalytic process is proposed to selectively retrieve seven precious metals from waste: silver, gold, palladium, platinum, rhodium, ruthenium, and iridium.
Abstract: Precious metals such as gold and platinum are valued materials for a variety of important applications, but their scarcity poses a risk of supply disruption. Recycling precious metals from waste provides a promising solution; however, conventional metallurgical methods bear high environmental costs and energy consumption. Here, we report a photocatalytic process that enables one to selectively retrieve seven precious metals—silver (Ag), gold (Au), palladium (Pd), platinum (Pt), rhodium (Rh), ruthenium (Ru) and iridium (Ir)—from waste circuit boards, ternary automotive catalysts and ores. The whole process does not involve strong acids or bases or toxic cyanide, but needs only light and photocatalysts such as titanium dioxide (TiO2). More than 99% of the targeted elements in the waste sources can be dissolved, and the precious metals can be recovered with high purity (≥98%) after a simple reduction reaction. By demonstrating success at the kilogram scale and showing that the catalysts can be reused more than 100 times, we suggest that this approach might be industry compatible. This research opens up a new path in the development of sustainable technologies for recycling the Earth’s resources and contributing to a circular economy. Recovering precious resources from waste is essential to implement a circular economy, but the available methods carry environmental costs. In this Article, a greener photocatalytic process is shown to recover up to seven precious metals from waste successfully, offering the potential for wide application.