
Showing papers by "University of Massachusetts Amherst" published in 2019


Journal ArticleDOI
TL;DR: SciPy as discussed by the authors is an open source scientific computing library for the Python programming language, which includes functionality spanning clustering, Fourier transforms, integration, interpolation, file I/O, linear algebra, image processing, orthogonal distance regression, minimization algorithms, signal processing, sparse matrix handling, computational geometry, and statistics.
Abstract: SciPy is an open source scientific computing library for the Python programming language. SciPy 1.0 was released in late 2017, about 16 years after the original version 0.1 release. SciPy has become a de facto standard for leveraging scientific algorithms in the Python programming language, with more than 600 unique code contributors, thousands of dependent packages, over 100,000 dependent repositories, and millions of downloads per year. This includes usage of SciPy in almost half of all machine learning projects on GitHub, and usage by high profile projects including LIGO gravitational wave analysis and creation of the first-ever image of a black hole (M87). The library includes functionality spanning clustering, Fourier transforms, integration, interpolation, file I/O, linear algebra, image processing, orthogonal distance regression, minimization algorithms, signal processing, sparse matrix handling, computational geometry, and statistics. In this work, we provide an overview of the capabilities and development practices of the SciPy library and highlight some recent technical developments.
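As a quick illustration of the breadth listed above, the following Python sketch exercises three of the named subpackages (integration, minimization and linear algebra); the particular functions and toy inputs are chosen for illustration, not taken from the paper.

```python
# Minimal sketch touching a few SciPy subpackages mentioned above.
import numpy as np
from scipy import integrate, optimize, linalg

# Numerical integration: quad returns (value, error estimate).
value, err = integrate.quad(np.sin, 0.0, np.pi)        # ~2.0

# Scalar minimization: find the minimum of a simple quadratic.
res = optimize.minimize_scalar(lambda x: (x - 3.0) ** 2)

# Dense linear algebra: solve A x = b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = linalg.solve(A, b)

print(value, res.x, x)
```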

12,774 citations


Journal ArticleDOI
Kazunori Akiyama, Antxon Alberdi1, Walter Alef2, Keiichi Asada3  +403 moreInstitutions (82)
TL;DR: In this article, the Event Horizon Telescope was used to reconstruct event-horizon-scale images of the supermassive black hole candidate in the center of the giant elliptical galaxy M87.
Abstract: When surrounded by a transparent emission region, black holes are expected to reveal a dark shadow caused by gravitational light bending and photon capture at the event horizon. To image and study this phenomenon, we have assembled the Event Horizon Telescope, a global very long baseline interferometry array observing at a wavelength of 1.3 mm. This allows us to reconstruct event-horizon-scale images of the supermassive black hole candidate in the center of the giant elliptical galaxy M87. We have resolved the central compact radio source as an asymmetric bright emission ring with a diameter of 42 ± 3 μas, which is circular and encompasses a central depression in brightness with a flux ratio ≳10:1. The emission ring is recovered using different calibration and imaging schemes, with its diameter and width remaining stable over four different observations carried out on different days. Overall, the observed image is consistent with expectations for the shadow of a Kerr black hole as predicted by general relativity. The asymmetry in brightness in the ring can be explained in terms of relativistic beaming of the emission from a plasma rotating close to the speed of light around a black hole. We compare our images to an extensive library of ray-traced general-relativistic magnetohydrodynamic simulations of black holes and derive a central mass of M = (6.5 ± 0.7) × 10^9 M_⊙. Our radio-wave observations thus provide powerful evidence for the presence of supermassive black holes in the centers of galaxies and as the central engines of active galactic nuclei. They also present a new tool to explore gravity in its most extreme limit and on a mass scale that was previously not accessible.

2,589 citations


Journal ArticleDOI
Frank Arute1, Kunal Arya1, Ryan Babbush1, Dave Bacon1, Joseph C. Bardin1, Joseph C. Bardin2, Rami Barends1, Rupak Biswas3, Sergio Boixo1, Fernando G. S. L. Brandão1, Fernando G. S. L. Brandão4, David A. Buell1, B. Burkett1, Yu Chen1, Zijun Chen1, Ben Chiaro5, Roberto Collins1, William Courtney1, Andrew Dunsworth1, Edward Farhi1, Brooks Foxen1, Brooks Foxen5, Austin G. Fowler1, Craig Gidney1, Marissa Giustina1, R. Graff1, Keith Guerin1, Steve Habegger1, Matthew P. Harrigan1, Michael J. Hartmann6, Michael J. Hartmann1, Alan Ho1, Markus R. Hoffmann1, Trent Huang1, Travis S. Humble7, Sergei V. Isakov1, Evan Jeffrey1, Zhang Jiang1, Dvir Kafri1, Kostyantyn Kechedzhi1, Julian Kelly1, Paul V. Klimov1, Sergey Knysh1, Alexander N. Korotkov8, Alexander N. Korotkov1, Fedor Kostritsa1, David Landhuis1, Mike Lindmark1, E. Lucero1, Dmitry I. Lyakh7, Salvatore Mandrà3, Jarrod R. McClean1, Matt McEwen5, Anthony Megrant1, Xiao Mi1, Kristel Michielsen9, Kristel Michielsen10, Masoud Mohseni1, Josh Mutus1, Ofer Naaman1, Matthew Neeley1, Charles Neill1, Murphy Yuezhen Niu1, Eric Ostby1, Andre Petukhov1, John Platt1, Chris Quintana1, Eleanor Rieffel3, Pedram Roushan1, Nicholas C. Rubin1, Daniel Sank1, Kevin J. Satzinger1, Vadim Smelyanskiy1, Kevin J. Sung11, Kevin J. Sung1, Matthew D. Trevithick1, Amit Vainsencher1, Benjamin Villalonga1, Benjamin Villalonga12, Theodore White1, Z. Jamie Yao1, Ping Yeh1, Adam Zalcman1, Hartmut Neven1, John M. Martinis1, John M. Martinis5 
24 Oct 2019-Nature
TL;DR: Quantum supremacy is demonstrated using a programmable superconducting processor known as Sycamore, taking approximately 200 seconds to sample one instance of a quantum circuit a million times, which would take a state-of-the-art supercomputer around ten thousand years to compute.
Abstract: The promise of quantum computers is that certain computational tasks might be executed exponentially faster on a quantum processor than on a classical processor. A fundamental challenge is to build a high-fidelity processor capable of running quantum algorithms in an exponentially large computational space. Here we report the use of a processor with programmable superconducting qubits to create quantum states on 53 qubits, corresponding to a computational state-space of dimension 2^53 (about 10^16). Measurements from repeated experiments sample the resulting probability distribution, which we verify using classical simulations. Our Sycamore processor takes about 200 seconds to sample one instance of a quantum circuit a million times; our benchmarks currently indicate that the equivalent task for a state-of-the-art classical supercomputer would take approximately 10,000 years. This dramatic increase in speed compared to all known classical algorithms is an experimental realization of quantum supremacy for this specific computational task, heralding a much-anticipated computing paradigm.
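The scale quoted above is easy to verify with a line of arithmetic: 53 qubits span 2^53 basis states, roughly 9 x 10^15. A tiny sketch, where the year-to-second conversion used for the comparison is the only added assumption:

```python
# State-space dimension for 53 qubits, as quoted in the abstract.
dim = 2 ** 53
print(f"{dim:.3e}")    # ~9.007e+15, i.e. about 10^16

# Rough ratio of the quoted classical estimate (~10,000 years) to the
# 200-second quantum sampling time; illustrative arithmetic only.
ratio = (10_000 * 365.25 * 24 * 3600) / 200
print(f"{ratio:.2e}")  # ~1.6e+09
```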

2,527 citations


Journal ArticleDOI
TL;DR: This amended and improved digestion method (INFOGEST 2.0) avoids challenges associated with the original method, such as the inclusion of the oral phase and the use of gastric lipase.
Abstract: Developing a mechanistic understanding of the impact of food structure and composition on human health has increasingly involved simulating digestion in the upper gastrointestinal tract. These simulations have used a wide range of different conditions that often have very little physiological relevance, and this impedes the meaningful comparison of results. The standardized protocol presented here is based on an international consensus developed by the COST INFOGEST network. The method is designed to be used with standard laboratory equipment and requires limited experience to encourage a wide range of researchers to adopt it. It is a static digestion method that uses constant ratios of meal to digestive fluids and a constant pH for each step of digestion. This makes the method simple to use but not suitable for simulating digestion kinetics. Using this method, food samples are subjected to sequential oral, gastric and intestinal digestion while parameters such as electrolytes, enzymes, bile, dilution, pH and time of digestion are based on available physiological data. This amended and improved digestion method (INFOGEST 2.0) avoids challenges associated with the original method, such as the inclusion of the oral phase and the use of gastric lipase. The method can be used to assess the endpoints resulting from digestion of foods by analyzing the digestion products (e.g., peptides/amino acids, fatty acids, simple sugars) and evaluating the release of micronutrients from the food matrix. The whole protocol can be completed in ~7 d, including ~5 d required for the determination of enzyme activities.
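Because the protocol is a fixed sequence of phases, each with a constant meal-to-fluid ratio, pH and duration, it can be summarised as a small configuration table. The sketch below encodes such a schedule; the specific 1:1 ratios, pH values and durations shown are typical INFOGEST parameters recalled from memory and are illustrative only, so the published protocol tables remain the authoritative source for electrolytes, enzymes, bile and timing.

```python
# Illustrative encoding of a static three-phase digestion schedule.
# Parameter values are indicative only; consult the INFOGEST 2.0
# protocol tables for the authoritative specifications.
SIMULATED_DIGESTION = [
    # phase,        meal:fluid ratio, target pH, duration (min)
    ("oral",        "1:1",            7.0,        2),
    ("gastric",     "1:1",            3.0,        120),
    ("intestinal",  "1:1",            7.0,        120),
]

def describe(schedule):
    for phase, ratio, ph, minutes in schedule:
        print(f"{phase:11s} ratio {ratio}, pH {ph}, {minutes} min")

describe(SIMULATED_DIGESTION)
```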

1,394 citations


Posted Content
TL;DR: This paper quantifies the approximate financial and environmental costs of training a variety of recently successful neural network models for NLP and proposes actionable recommendations to reduce costs and improve equity in NLP research and practice.
Abstract: Recent progress in hardware and methodology for training neural networks has ushered in a new generation of large networks trained on abundant data. These models have obtained notable gains in accuracy across many NLP tasks. However, these accuracy improvements depend on the availability of exceptionally large computational resources that necessitate similarly substantial energy consumption. As a result these models are costly to train and develop, both financially, due to the cost of hardware and electricity or cloud compute time, and environmentally, due to the carbon footprint required to fuel modern tensor processing hardware. In this paper we bring this issue to the attention of NLP researchers by quantifying the approximate financial and environmental costs of training a variety of recently successful neural network models for NLP. Based on these findings, we propose actionable recommendations to reduce costs and improve equity in NLP research and practice.
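The kind of estimate described above reduces to back-of-the-envelope arithmetic: hardware power draw times training time times a datacenter PUE gives energy, which a grid emission factor converts to CO2. Here is a hedged sketch of that bookkeeping; the default power draw, PUE, emission factor and electricity price are illustrative assumptions rather than figures from the paper.

```python
# Back-of-the-envelope training-cost estimate (illustrative defaults).
def training_footprint(avg_power_kw, hours, pue=1.58,
                       kg_co2_per_kwh=0.43, usd_per_kwh=0.12):
    """Return (energy_kwh, kg_co2, electricity_cost_usd).

    avg_power_kw    -- mean draw of accelerators/CPUs/DRAM during training
    pue             -- datacenter power usage effectiveness (assumed)
    kg_co2_per_kwh  -- grid emission factor (assumed average)
    usd_per_kwh     -- electricity price (assumed)
    """
    energy_kwh = avg_power_kw * hours * pue
    return energy_kwh, energy_kwh * kg_co2_per_kwh, energy_kwh * usd_per_kwh

# e.g. eight accelerators drawing ~2.4 kW in total for one week:
print(training_footprint(avg_power_kw=2.4, hours=168))
```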

1,318 citations


Proceedings ArticleDOI
15 Jun 2019
TL;DR: The objective is to learn feature embeddings that generalize well under a linear classification rule for novel categories and this work exploits two properties of linear classifiers: implicit differentiation of the optimality conditions of the convex problem and the dual formulation of the optimization problem.
Abstract: Many meta-learning approaches for few-shot learning rely on simple base learners such as nearest-neighbor classifiers. However, even in the few-shot regime, discriminatively trained linear predictors can offer better generalization. We propose to use these predictors as base learners to learn representations for few-shot learning and show they offer better tradeoffs between feature size and performance across a range of few-shot recognition benchmarks. Our objective is to learn feature embeddings that generalize well under a linear classification rule for novel categories. To efficiently solve the objective, we exploit two properties of linear classifiers: implicit differentiation of the optimality conditions of the convex problem and the dual formulation of the optimization problem. This allows us to use high-dimensional embeddings with improved generalization at a modest increase in computational overhead. Our approach, named MetaOptNet, achieves state-of-the-art performance on miniImageNet, tieredImageNet, CIFAR-FS, and FC100 few-shot learning benchmarks.
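To make the idea of a differentiable linear base learner concrete, here is a minimal sketch that fits a ridge-regression classifier in closed form using its dual (kernel) formulation, so the linear solve scales with the number of support examples rather than the embedding dimension. This is a simplified stand-in for the paper's SVM-with-implicit-differentiation base learner, and the shapes and hyperparameters below are illustrative.

```python
import torch

def ridge_base_learner(support_x, support_y, query_x, lam=1.0):
    """Few-shot linear classifier fit in closed form (dual ridge regression).

    support_x: (n_s, d) embeddings, support_y: (n_s, c) one-hot labels,
    query_x: (n_q, d) embeddings. The solve is differentiable w.r.t. the
    embeddings, so it can sit on top of a feature extractor in meta-training.
    """
    n_s = support_x.shape[0]
    gram = support_x @ support_x.t()                        # (n_s, n_s)
    alpha = torch.linalg.solve(gram + lam * torch.eye(n_s), support_y)
    weights = support_x.t() @ alpha                         # (d, c)
    return query_x @ weights                                # query logits

# Toy 5-way 1-shot episode with 64-dim embeddings (shapes are illustrative).
sx = torch.randn(5, 64, requires_grad=True)
sy = torch.eye(5)
qx = torch.randn(15, 64)
logits = ridge_base_learner(sx, sy, qx)
logits.sum().backward()   # gradients flow back into the embeddings
```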

1,084 citations


Journal ArticleDOI
Kazunori Akiyama, Antxon Alberdi1, Walter Alef2, Keiichi Asada3  +251 moreInstitutions (56)
TL;DR: In this article, the authors present measurements of the properties of the central radio source in M87 using Event Horizon Telescope data obtained during the 2017 campaign, and find that >50% of the total flux at arcsecond scales comes from near the horizon and that the emission is dramatically suppressed interior to this region by a factor >10, providing direct evidence of the predicted shadow of a black hole.
Abstract: We present measurements of the properties of the central radio source in M87 using Event Horizon Telescope data obtained during the 2017 campaign. We develop and fit geometric crescent models (asymmetric rings with interior brightness depressions) using two independent sampling algorithms that consider distinct representations of the visibility data. We show that the crescent family of models is statistically preferred over other comparably complex geometric models that we explore. We calibrate the geometric model parameters using general relativistic magnetohydrodynamic (GRMHD) models of the emission region and estimate physical properties of the source. We further fit images generated from GRMHD models directly to the data. We compare the derived emission region and black hole parameters from these analyses with those recovered from reconstructed images. There is a remarkable consistency among all methods and data sets. We find that >50% of the total flux at arcsecond scales comes from near the horizon, and that the emission is dramatically suppressed interior to this region by a factor >10, providing direct evidence of the predicted shadow of a black hole. Across all methods, we measure a crescent diameter of 42 ± 3 μas and constrain its fractional width to be <0.5. Associating the crescent feature with the emission surrounding the black hole shadow, we infer an angular gravitational radius of GM/(Dc²) = 3.8 ± 0.4 μas. Folding in a distance measurement of 16.8 (+0.8/−0.7) Mpc gives a black hole mass of M = 6.5 ± 0.2 (stat) ± 0.7 (sys) × 10^9 M_⊙. This measurement from lensed emission near the event horizon is consistent with the presence of a central Kerr black hole, as predicted by the general theory of relativity.
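The quoted mass follows from simple geometry: the linear gravitational radius is the angular radius times the distance, and M = r_g c^2 / G. A rough numerical check using only the numbers in the abstract (uncertainties ignored):

```python
import math

# Reproduce the quoted mass from theta_g = GM/(D c^2) = 3.8 microarcseconds
# and D = 16.8 Mpc.
G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m/s
M_sun = 1.989e30       # kg
Mpc = 3.086e22         # m
muas_to_rad = math.pi / (180 * 3600 * 1e6)

theta_g = 3.8 * muas_to_rad
D = 16.8 * Mpc
M = theta_g * D * c**2 / G
print(M / M_sun)       # ~6.5e9 solar masses, matching the quoted value
```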

1,024 citations


Proceedings ArticleDOI
05 Jun 2019
TL;DR: In this article, the authors quantified the approximate financial and environmental costs of training a variety of recently successful neural network models for NLP and proposed actionable recommendations to reduce costs and improve equity in NLP research and practice.
Abstract: Recent progress in hardware and methodology for training neural networks has ushered in a new generation of large networks trained on abundant data. These models have obtained notable gains in accuracy across many NLP tasks. However, these accuracy improvements depend on the availability of exceptionally large computational resources that necessitate similarly substantial energy consumption. As a result these models are costly to train and develop, both financially, due to the cost of hardware and electricity or cloud compute time, and environmentally, due to the carbon footprint required to fuel modern tensor processing hardware. In this paper we bring this issue to the attention of NLP researchers by quantifying the approximate financial and environmental costs of training a variety of recently successful neural network models for NLP. Based on these findings, we propose actionable recommendations to reduce costs and improve equity in NLP research and practice.

998 citations


Journal ArticleDOI
TL;DR: The GVG proposes a new Global Anatomic Staging System (GLASS), which involves defining a preferred target artery path (TAP) and then estimating limb-based patency (LBP) resulting in three stages of complexity for intervention.

993 citations


Journal ArticleDOI
Kazunori Akiyama, Antxon Alberdi1, Walter Alef2, Keiichi Asada3  +251 moreInstitutions (58)
TL;DR: In this article, the first Event Horizon Telescope (EHT) images of M87 were presented, using observations from April 2017 at 1.3 mm wavelength, showing a prominent ring with a diameter of ~40 μas, consistent with the size and shape of the lensed photon orbit encircling the "shadow" of a supermassive black hole.
Abstract: We present the first Event Horizon Telescope (EHT) images of M87, using observations from April 2017 at 1.3 mm wavelength. These images show a prominent ring with a diameter of ~40 μas, consistent with the size and shape of the lensed photon orbit encircling the "shadow" of a supermassive black hole. The ring is persistent across four observing nights and shows enhanced brightness in the south. To assess the reliability of these results, we implemented a two-stage imaging procedure. In the first stage, four teams, each blind to the others' work, produced images of M87 using both an established method (CLEAN) and a newer technique (regularized maximum likelihood). This stage allowed us to avoid shared human bias and to assess common features among independent reconstructions. In the second stage, we reconstructed synthetic data from a large survey of imaging parameters and then compared the results with the corresponding ground truth images. This stage allowed us to select parameters objectively to use when reconstructing images of M87. Across all tests in both stages, the ring diameter and asymmetry remained stable, insensitive to the choice of imaging technique. We describe the EHT imaging procedures, the primary image features in M87, and the dependence of these features on imaging assumptions.

952 citations


Journal ArticleDOI
TL;DR: The challenges in the integration and use in computation of large-scale memristive neural networks are discussed, both as accelerators for deep learning and as building blocks for spiking neural networks.
Abstract: With their working mechanisms based on ion migration, the switching dynamics and electrical behaviour of memristive devices resemble those of synapses and neurons, making these devices promising candidates for brain-inspired computing. Built into large-scale crossbar arrays to form neural networks, they perform efficient in-memory computing with massive parallelism by directly using physical laws. The dynamical interactions between artificial synapses and neurons equip the networks with both supervised and unsupervised learning capabilities. Moreover, their ability to interface with analogue signals from sensors without analogue/digital conversions reduces the processing time and energy overhead. Although numerous simulations have indicated the potential of these networks for brain-inspired computing, experimental implementation of large-scale memristive arrays is still in its infancy. This Review looks at the progress, challenges and possible solutions for efficient brain-inspired computation with memristive implementations, both as accelerators for deep learning and as building blocks for spiking neural networks.
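The in-memory computing described above rests on Ohm's and Kirchhoff's laws: applying a voltage vector to the rows of a conductance crossbar produces output currents equal to a matrix-vector product in a single step. A minimal numerical sketch of that idea; the conductance scale and the 5% programming variation are assumed values for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target weights mapped onto device conductances (siemens); the scale and
# the 5% device-to-device programming variation are illustrative assumptions.
W = rng.uniform(0.0, 1.0, size=(4, 8))          # 4 outputs x 8 inputs
G = 1e-6 * W * (1 + 0.05 * rng.standard_normal(W.shape))

v = rng.uniform(0.0, 0.2, size=8)               # input voltages (V)

# Kirchhoff's current law sums I = G*V in one analogue step.
i_out = G @ v                                    # measured currents (A)
ideal = 1e-6 * W @ v                             # ideal (noise-free) result
print(np.max(np.abs(i_out - ideal) / np.abs(ideal)))  # error from variation
```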

Journal ArticleDOI
Kazunori Akiyama, Antxon Alberdi1, Walter Alef2, Keiichi Asada3  +259 moreInstitutions (62)
TL;DR: In this article, a large library of models based on general relativistic magnetohydrodynamic (GRMHD) simulations and synthetic images produced by general relativistic ray tracing was constructed and compared with the observed visibilities.
Abstract: The Event Horizon Telescope (EHT) has mapped the central compact radio source of the elliptical galaxy M87 at 1.3 mm with unprecedented angular resolution. Here we consider the physical implications of the asymmetric ring seen in the 2017 EHT data. To this end, we construct a large library of models based on general relativistic magnetohydrodynamic (GRMHD) simulations and synthetic images produced by general relativistic ray tracing. We compare the observed visibilities with this library and confirm that the asymmetric ring is consistent with earlier predictions of strong gravitational lensing of synchrotron emission from a hot plasma orbiting near the black hole event horizon. The ring radius and ring asymmetry depend on black hole mass and spin, respectively, and both are therefore expected to be stable when observed in future EHT campaigns. Overall, the observed image is consistent with expectations for the shadow of a spinning Kerr black hole as predicted by general relativity. If the black hole spin and M87's large scale jet are aligned, then the black hole spin vector is pointed away from Earth. Models in our library of non-spinning black holes are inconsistent with the observations as they do not produce sufficiently powerful jets. At the same time, in those models that produce a sufficiently powerful jet, the latter is powered by extraction of black hole spin energy through mechanisms akin to the Blandford-Znajek process. We briefly consider alternatives to a black hole for the central compact object. Analysis of existing EHT polarization data and data taken simultaneously at other wavelengths will soon enable new tests of the GRMHD models, as will future EHT campaigns at 230 and 345 GHz.

Proceedings ArticleDOI
19 May 2019
TL;DR: The reasons why deep learning models may leak information about their training data are investigated and new algorithms tailored to the white-box setting are designed by exploiting the privacy vulnerabilities of the stochastic gradient descent algorithm, which is the algorithm used to train deep neural networks.
Abstract: Deep neural networks are susceptible to various inference attacks as they remember information about their training data. We design white-box inference attacks to perform a comprehensive privacy analysis of deep learning models. We measure the privacy leakage through parameters of fully trained models as well as the parameter updates of models during training. We design inference algorithms for both centralized and federated learning, with respect to passive and active inference attackers, and assuming different adversary prior knowledge. We evaluate our novel white-box membership inference attacks against deep learning algorithms to trace their training data records. We show that a straightforward extension of the known black-box attacks to the white-box setting (through analyzing the outputs of activation functions) is ineffective. We therefore design new algorithms tailored to the white-box setting by exploiting the privacy vulnerabilities of the stochastic gradient descent algorithm, which is the algorithm used to train deep neural networks. We investigate the reasons why deep learning models may leak information about their training data. We then show that even well-generalized models are significantly susceptible to white-box membership inference attacks, by analyzing state-of-the-art pre-trained and publicly available models for the CIFAR dataset. We also show how adversarial participants, in the federated learning setting, can successfully run active membership inference attacks against other participants, even when the global model achieves high prediction accuracies.
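As a toy illustration of the white-box signal exploited above (not a reimplementation of the paper's attack), one can compare the norm of the per-example loss gradient with respect to the parameters for training members versus non-members; on an overfitted model, members tend to yield smaller gradients. The model, data and threshold idea below are entirely synthetic.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()

def grad_norm(x, y):
    """White-box membership feature: norm of dLoss/dParams for one example."""
    model.zero_grad()
    loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
    grads = torch.autograd.grad(loss, list(model.parameters()))
    return torch.sqrt(sum(g.pow(2).sum() for g in grads)).item()

# Overfit a tiny "training set", then compare gradient norms.
xs, ys = torch.randn(64, 20), torch.randint(0, 2, (64,))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(300):
    opt.zero_grad(); loss_fn(model(xs), ys).backward(); opt.step()

member = grad_norm(xs[0], ys[0])
non_member = grad_norm(torch.randn(20), torch.randint(0, 2, (1,))[0])
print(member, non_member)   # thresholding this score is a crude attack
```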

Journal ArticleDOI
Kazunori Akiyama, Antxon Alberdi1, Walter Alef2, Keiichi Asada3  +394 moreInstitutions (78)
TL;DR: The Event Horizon Telescope (EHT) as mentioned in this paper is a very long baseline interferometry (VLBI) array that comprises millimeter and submillimeter-wavelength telescopes separated by distances comparable to the diameter of the Earth.
Abstract: The Event Horizon Telescope (EHT) is a very long baseline interferometry (VLBI) array that comprises millimeter- and submillimeter-wavelength telescopes separated by distances comparable to the diameter of the Earth. At a nominal operating wavelength of ~1.3 mm, EHT angular resolution (λ/D) is ~25 μas, which is sufficient to resolve nearby supermassive black hole candidates on spatial and temporal scales that correspond to their event horizons. With this capability, the EHT scientific goals are to probe general relativistic effects in the strong-field regime and to study accretion and relativistic jet formation near the black hole boundary. In this Letter we describe the system design of the EHT, detail the technology and instrumentation that enable observations, and provide measures of its performance. Meeting the EHT science objectives has required several key developments that have facilitated the robust extension of the VLBI technique to EHT observing wavelengths and the production of instrumentation that can be deployed on a heterogeneous array of existing telescopes and facilities. To meet sensitivity requirements, high-bandwidth digital systems were developed that process data at rates of 64 gigabit s^(−1), exceeding those of currently operating cm-wavelength VLBI arrays by more than an order of magnitude. Associated improvements include the development of phasing systems at array facilities, new receiver installation at several sites, and the deployment of hydrogen maser frequency standards to ensure coherent data capture across the array. These efforts led to the coordination and execution of the first Global EHT observations in 2017 April, and to event-horizon-scale imaging of the supermassive black hole candidate in M87.

Journal ArticleDOI
Nasim Mavaddat1, Kyriaki Michailidou1, Kyriaki Michailidou2, Joe Dennis1  +307 moreInstitutions (105)
TL;DR: This PRS, optimized for prediction of estrogen receptor (ER)-specific disease, from the largest available genome-wide association dataset is developed and empirically validated and is a powerful and reliable predictor of breast cancer risk that may improve breast cancer prevention programs.
Abstract: Stratification of women according to their risk of breast cancer based on polygenic risk scores (PRSs) could improve screening and prevention strategies. Our aim was to develop PRSs, optimized for prediction of estrogen receptor (ER)-specific disease, from the largest available genome-wide association dataset and to empirically validate the PRSs in prospective studies. The development dataset comprised 94,075 case subjects and 75,017 control subjects of European ancestry from 69 studies, divided into training and validation sets. Samples were genotyped using genome-wide arrays, and single-nucleotide polymorphisms (SNPs) were selected by stepwise regression or lasso penalized regression. The best performing PRSs were validated in an independent test set comprising 11,428 case subjects and 18,323 control subjects from 10 prospective studies and 190,040 women from UK Biobank (3,215 incident breast cancers). For the best PRSs (313 SNPs), the odds ratio for overall disease per 1 standard deviation in ten prospective studies was 1.61 (95%CI: 1.57-1.65) with area under receiver-operator curve (AUC) = 0.630 (95%CI: 0.628-0.651). The lifetime risk of overall breast cancer in the top centile of the PRSs was 32.6%. Compared with women in the middle quintile, those in the highest 1% of risk had 4.37- and 2.78-fold risks, and those in the lowest 1% of risk had 0.16- and 0.27-fold risks, of developing ER-positive and ER-negative disease, respectively. Goodness-of-fit tests indicated that this PRS was well calibrated and predicts disease risk accurately in the tails of the distribution. This PRS is a powerful and reliable predictor of breast cancer risk that may improve breast cancer prevention programs.
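A polygenic risk score of the kind described above is a weighted sum of allele dosages, and the per-standard-deviation odds ratio converts a standardized score into relative odds. A minimal sketch with synthetic effect sizes and genotypes (not the published 313-SNP weights); only the 1.61 odds ratio per standard deviation is taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

n_snps, n_women = 313, 1000
betas = rng.normal(0.0, 0.02, n_snps)             # synthetic log-odds weights
dosages = rng.binomial(2, 0.3, size=(n_women, n_snps)).astype(float)

prs = dosages @ betas
prs_z = (prs - prs.mean()) / prs.std()            # standardize within cohort

# Relative odds vs. the cohort mean, using the reported OR of 1.61 per SD.
or_per_sd = 1.61
relative_odds = or_per_sd ** prs_z
print(relative_odds[prs_z.argmax()])              # woman with the highest score
```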

Journal ArticleDOI
Kazunori Akiyama, Antxon Alberdi1, Walter Alef2, Keiichi Asada3  +243 moreInstitutions (60)
TL;DR: In this paper, the Event Horizon Telescope (EHT) 1.3 mm radio wavelength observations of the supermassive black hole candidate at the center of the radio galaxy M87 and the quasar 3C 279, taken during the 2017 April 5-11 observing campaign are presented.
Abstract: We present the calibration and reduction of Event Horizon Telescope (EHT) 1.3 mm radio wavelength observations of the supermassive black hole candidate at the center of the radio galaxy M87 and the quasar 3C 279, taken during the 2017 April 5–11 observing campaign. These global very long baseline interferometric observations include for the first time the highly sensitive Atacama Large Millimeter/submillimeter Array (ALMA); reaching an angular resolution of 25 μas, with characteristic sensitivity limits of ~1 mJy on baselines to ALMA and ~10 mJy on other baselines. The observations present challenges for existing data processing tools, arising from the rapid atmospheric phase fluctuations, wide recording bandwidth, and highly heterogeneous array. In response, we developed three independent pipelines for phase calibration and fringe detection, each tailored to the specific needs of the EHT. The final data products include calibrated total intensity amplitude and phase information. They are validated through a series of quality assurance tests that show consistency across pipelines and set limits on baseline systematic errors of 2% in amplitude and 1° in phase. The M87 data reveal the presence of two nulls in correlated flux density at ~3.4 and ~8.3 Gλ and temporal evolution in closure quantities, indicating intrinsic variability of compact structure on a timescale of days, or several light-crossing times for a few billion solar-mass black hole. These measurements provide the first opportunity to image horizon-scale structure in M87.

Journal ArticleDOI
TL;DR: A comprehensive review of recent progress in salient object detection is provided and this field is situate among other closely related areas such as generic scene segmentation, object proposal generation, and saliency for fixation prediction.
Abstract: Detecting and segmenting salient objects from natural scenes, often referred to as salient object detection, has attracted great interest in computer vision. While many models have been proposed and several applications have emerged, a deep understanding of achievements and issues remains lacking. We aim to provide a comprehensive review of recent progress in salient object detection and situate this field among other closely related areas such as generic scene segmentation, object proposal generation, and saliency for fixation prediction. Covering 228 publications, we survey i) roots, key concepts, and tasks, ii) core techniques and main modeling trends, and iii) datasets and evaluation metrics for salient object detection. We also discuss open problems such as evaluation metrics and dataset bias in model performance, and suggest future research directions.
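Since evaluation metrics are one of the surveyed topics, here is a compact sketch of two metrics commonly used for salient object detection, mean absolute error and the thresholded F-measure; the beta^2 = 0.3 weighting follows the usual convention in this literature, and the maps below are placeholders.

```python
import numpy as np

def mae(pred, gt):
    """Mean absolute error between a [0,1] saliency map and a binary mask."""
    return np.abs(pred - gt).mean()

def f_measure(pred, gt, thresh=0.5, beta2=0.3):
    """Precision/recall F-measure after binarizing the prediction."""
    binary = pred >= thresh
    tp = np.logical_and(binary, gt > 0.5).sum()
    precision = tp / max(binary.sum(), 1)
    recall = tp / max((gt > 0.5).sum(), 1)
    if precision + recall == 0:
        return 0.0
    return (1 + beta2) * precision * recall / (beta2 * precision + recall)

pred = np.random.default_rng(0).random((64, 64))   # placeholder prediction
gt = (pred > 0.7).astype(float)                    # placeholder ground truth
print(mae(pred, gt), f_measure(pred, gt))
```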

Journal ArticleDOI
TL;DR: This tutorial review has highlighted multiple nanoparticle-based approaches to eliminate bacterial infections, providing crucial insight into the design of elements that play critical roles in creating antimicrobial nanotherapeutics, including NP-surface functionality in designing nanomaterials as self-therapeutic agents and delivery vehicles for antimicrobial cargo.
Abstract: The dramatic increase in antimicrobial resistance for pathogenic bacteria constitutes a key threat to human health. The Centers for Disease Control and Prevention has recently stated that the world is on the verge of entering the "post-antibiotic era", one where more people will die from bacterial infections than from cancer. Recently, nanoparticles (NPs) have emerged as new tools that can be used to combat deadly bacterial infections. Nanoparticle-based strategies can overcome the barriers faced by traditional antimicrobials, including antibiotic resistance. In this tutorial review, we have highlighted multiple nanoparticle-based approaches to eliminate bacterial infections, providing crucial insight into the design of elements that play critical roles in creating antimicrobial nanotherapeutics. In particular, we have focused on the pivotal role played by NP-surface functionality in designing nanomaterials as self-therapeutic agents and delivery vehicles for antimicrobial cargo.

14 Jun 2019
TL;DR: In this chapter, Oppenheimer et al. assess sea level rise and its implications for low-lying islands, coasts and communities.
Abstract: Chapter 4: Sea Level Rise and Implications for Low Lying Islands, Coasts and Communities. Coordinating Lead Authors: Michael Oppenheimer (USA), Bruce Glavovic (New Zealand), Tuhin Ghosh (India). Lead Authors: Amro Abd-Elgawad (Egypt), Rongshuo Cai (China), Miguel Cifuentes-Jara (Costa Rica), Rob Deconto (USA), John Hay (Cook Islands), Jochen Hinkel (Germany), Federico Isla (Argentina), Alexandre K. Magnan (France), Ben Marzeion (Germany), Benoit Meyssignac (France), Zita Sebesvari (Hungary), AJ Smit (South Africa), Roderik van de Wal (Netherlands). Contributing Authors: Maya Buchanan (USA), Gonéri Le Cozannet (France), Catia Domingues (Australia), Virginie Duvat (France), Tamsin Edwards (UK), Miguel D. Fortes (Philippines), Thomas Frederikse (Netherlands), Jean-Pierre Gattuso (France), Robert Kopp (USA), Erwin Lambert (Netherlands), Elizabeth McLeod (USA), Mark Merrifield (USA), Siddharth Narayan (USA), Robert J. Nicholls (UK), Fabrice Renaud (UK), Jonathan Simm (UK), Jon Woodruff (USA), Poh Poh Wong (Singapore), Siyuan Xian (USA). Review Editors: Ayako Abe-Ouchi (Japan), Kapil Gupta (India), Joy Pereira (Malaysia). Chapter Scientist: Maya Buchanan (USA). Date of Draft: 20 April 2018.

Journal ArticleDOI
A. Abada1, Marcello Abbrescia2, Marcello Abbrescia3, Shehu S. AbdusSalam4  +1491 moreInstitutions (239)
TL;DR: In this article, the authors present the second volume of the Future Circular Collider Conceptual Design Report, devoted to the electron-positron collider FCC-ee, and present the accelerator design, performance reach, a staged operation scenario, the underlying technologies, civil engineering, technical infrastructure, and an implementation plan.
Abstract: In response to the 2013 Update of the European Strategy for Particle Physics, the Future Circular Collider (FCC) study was launched, as an international collaboration hosted by CERN. This study covers a highest-luminosity high-energy lepton collider (FCC-ee) and an energy-frontier hadron collider (FCC-hh), which could, successively, be installed in the same 100 km tunnel. The scientific capabilities of the integrated FCC programme would serve the worldwide community throughout the 21st century. The FCC study also investigates an LHC energy upgrade, using FCC-hh technology. This document constitutes the second volume of the FCC Conceptual Design Report, devoted to the electron-positron collider FCC-ee. After summarizing the physics discovery opportunities, it presents the accelerator design, performance reach, a staged operation scenario, the underlying technologies, civil engineering, technical infrastructure, and an implementation plan. FCC-ee can be built with today’s technology. Most of the FCC-ee infrastructure could be reused for FCC-hh. Combining concepts from past and present lepton colliders and adding a few novel elements, the FCC-ee design promises outstandingly high luminosity. This will make the FCC-ee a unique precision instrument to study the heaviest known particles (Z, W and H bosons and the top quark), offering great direct and indirect sensitivity to new physics.

Journal ArticleDOI
23 Aug 2019-Science
TL;DR: The state of science relating soil organisms to biogeochemical processes is reviewed, focusing particularly on the importance of microbial community variation on decomposition and turnover of soil organic matter.
Abstract: Soil organisms represent the most biologically diverse community on land and govern the turnover of the largest organic matter pool in the terrestrial biosphere. The highly complex nature of these communities at local scales has traditionally obscured efforts to identify unifying patterns in global soil biodiversity and biogeochemistry. As a result, environmental covariates have generally been used as a proxy to represent the variation in soil community activity in global biogeochemical models. Yet over the past decade, broad-scale studies have begun to see past this local heterogeneity to identify unifying patterns in the biomass, diversity, and composition of certain soil groups across the globe. These unifying patterns provide new insights into the fundamental distribution and dynamics of organic matter on land.

Journal ArticleDOI
TL;DR: In this paper, the authors estimate the effect of minimum wages on low-wage jobs using 138 prominent state-level minimum wage changes between 1979 and 2016 in the U.S. and a difference-in-differences approach.
Abstract: We estimate the effect of minimum wages on low-wage jobs using 138 prominent state-level minimum wage changes between 1979 and 2016 in the U.S. and a difference-in-differences approach. We first estimate the effect of the minimum wage increase on employment changes by wage bins throughout the hourly wage distribution. We then focus on the bottom part of the wage distribution and compare the number of excess jobs paying at or slightly above the new minimum wage to the missing jobs paying below it to infer the employment effect. We find that the overall number of low-wage jobs remained essentially unchanged over the five years following the increase. At the same time, the direct effect of the minimum wage on average earnings was amplified by modest wage spillovers at the bottom of the wage distribution. Our estimates by detailed demographic groups show that the lack of job loss is not explained by labor-labor substitution at the bottom of the wage distribution. We also find no evidence of disemployment when we consider higher levels of minimum wages. However, we do find some evidence of reduced employment in tradable sectors. We also show how decomposing the overall employment effect by wage bins allows a transparent way of assessing the plausibility of estimates.
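The bookkeeping behind this approach fits in a few lines: given employment changes by wage bin relative to a counterfactual, sum the missing jobs below the new minimum and the excess jobs at or just above it, and their sum is the estimated employment effect. The bin edges and counts in this sketch are invented purely to show the mechanics.

```python
# Toy "bunching" bookkeeping with invented numbers (per-bin employment
# changes relative to a counterfactual, in thousands of jobs).
wage_bins = [7.00, 7.50, 8.00, 8.50, 9.00, 9.50, 10.00]   # bin lower edges ($/h)
delta_jobs = [-30.0, -45.0, +40.0, +25.0, +8.0, +2.0, 0.0]
new_minimum = 8.00
spillover_ceiling = 12.00   # how far above the minimum to count excess jobs

missing_below = sum(d for w, d in zip(wage_bins, delta_jobs) if w < new_minimum)
excess_at_or_above = sum(d for w, d in zip(wage_bins, delta_jobs)
                         if new_minimum <= w < spillover_ceiling)

employment_effect = missing_below + excess_at_or_above
print(missing_below, excess_at_or_above, employment_effect)
# -75.0 + 75.0 = 0.0: jobs relocated above the new minimum, none destroyed.
```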

Journal ArticleDOI
TL;DR: This manuscript describes the most recommendable methodologies for the fabrication, characterization, and simulation of RS devices, as well as the proper methods to display the data obtained.
Abstract: Resistive switching (RS) is an interesting property shown by some materials systems that, especially during the last decade, has gained a lot of interest for the fabrication of electronic devices, with electronic nonvolatile memories being those that have received the most attention. The presence and quality of the RS phenomenon in a materials system can be studied using different prototype cells, performing different experiments, displaying different figures of merit, and developing different computational analyses. Therefore, the real usefulness and impact of the findings presented in each study for the RS technology will also be different. This manuscript describes the most recommendable methodologies for the fabrication, characterization, and simulation of RS devices, as well as the proper methods to display the data obtained. The idea is to help the scientific community to evaluate the real usefulness and impact of an RS study for the development of RS technology.

Journal ArticleDOI
25 Oct 2019-Science
TL;DR: This Review explores grand challenges in wind energy research that must be addressed to enable wind energy to supply one-third to one-half, or even more, of the world’s electricity needs.
Abstract: Harvested by advanced technical systems honed over decades of research and development, wind energy has become a mainstream energy resource. However, continued innovation is needed to realize the potential of wind to serve the global demand for clean energy. Here, we outline three interdependent, cross-disciplinary grand challenges underpinning this research endeavor. The first is the need for a deeper understanding of the physics of atmospheric flow in the critical zone of plant operation. The second involves science and engineering of the largest dynamic, rotating machines in the world. The third encompasses optimization and control of fleets of wind plants working synergistically within the electricity grid. Addressing these challenges could enable wind power to provide as much as half of our global electricity needs and perhaps beyond.

Journal ArticleDOI
TL;DR: Although substantial progress has been made in reducing neonatal mortality since 1990, increased efforts are still needed to achieve the SDG target of 12 deaths per 1000 livebirths or fewer by 2030, a target that more than 60 countries must accelerate their progress to reach.

Journal ArticleDOI
A. Abada1, Marcello Abbrescia2, Marcello Abbrescia3, Shehu S. AbdusSalam4  +1496 moreInstitutions (238)
TL;DR: In this paper, the authors describe the detailed design and preparation of a construction project for a post-LHC circular energy frontier collider in collaboration with national institutes, laboratories and universities worldwide, and enhanced by a strong participation of industrial partners.
Abstract: Particle physics has arrived at an important moment of its history. The discovery of the Higgs boson, with a mass of 125 GeV, completes the matrix of particles and interactions that has constituted the “Standard Model” for several decades. This model is a consistent and predictive theory, which has so far proven successful at describing all phenomena accessible to collider experiments. However, several experimental facts do require the extension of the Standard Model and explanations are needed for observations such as the abundance of matter over antimatter, the striking evidence for dark matter and the non-zero neutrino masses. Theoretical issues such as the hierarchy problem, and, more in general, the dynamical origin of the Higgs mechanism, do likewise point to the existence of physics beyond the Standard Model. This report contains the description of a novel research infrastructure based on a highest-energy hadron collider with a centre-of-mass collision energy of 100 TeV and an integrated luminosity of at least a factor of 5 larger than the HL-LHC. It will extend the current energy frontier by almost an order of magnitude. The mass reach for direct discovery will reach several tens of TeV, and allow, for example, to produce new particles whose existence could be indirectly exposed by precision measurements during the earlier preceding e+e– collider phase. This collider will also precisely measure the Higgs self-coupling and thoroughly explore the dynamics of electroweak symmetry breaking at the TeV scale, to elucidate the nature of the electroweak phase transition. WIMPs as thermal dark matter candidates will be discovered, or ruled out. As a single project, this particle collider infrastructure will serve the world-wide physics community for about 25 years and, in combination with a lepton collider (see FCC conceptual design report volume 2), will provide a research tool until the end of the 21st century. Collision energies beyond 100 TeV can be considered when using high-temperature superconductors. The European Strategy for Particle Physics (ESPP) update 2013 stated “To stay at the forefront of particle physics, Europe needs to be in a position to propose an ambitious post-LHC accelerator project at CERN by the time of the next Strategy update”. The FCC study has implemented the ESPP recommendation by developing a long-term vision for an “accelerator project in a global context”. This document describes the detailed design and preparation of a construction project for a post-LHC circular energy frontier collider “in collaboration with national institutes, laboratories and universities worldwide”, and enhanced by a strong participation of industrial partners. Now, a coordinated preparation effort can be based on a core of an ever-growing consortium of already more than 135 institutes worldwide. The technology for constructing a high-energy circular hadron collider can be brought to the technology readiness level required for constructing within the coming ten years through a focused R&D programme. 
The FCC-hh concept comprises in the baseline scenario a power-saving, low-temperature superconducting magnet system based on an evolution of the Nb3Sn technology pioneered at the HL-LHC, an energy-efficient cryogenic refrigeration infrastructure based on a neon-helium (Nelium) light gas mixture, a high-reliability and low loss cryogen distribution infrastructure based on Invar, high-power distributed beam transfer using superconducting elements and local magnet energy recovery and re-use technologies that are already gradually introduced at other CERN accelerators. On a longer timescale, high-temperature superconductors can be developed together with industrial partners to achieve an even more energy efficient particle collider or to reach even higher collision energies. The re-use of the LHC and its injector chain, which also serve for a concurrently running physics programme, is an essential lever to come to an overall sustainable research infrastructure at the energy frontier. Strategic R&D for FCC-hh aims at minimising construction cost and energy consumption, while maximising the socio-economic impact. It will mitigate technology-related risks and ensure that industry can benefit from an acceptable utility. Concerning the implementation, a preparatory phase of about eight years is both necessary and adequate to establish the project governance and organisation structures, to build the international machine and experiment consortia, to develop a territorial implantation plan in agreement with the host-states' requirements, to optimise the disposal of land and underground volumes, and to prepare the civil engineering project. Such a large-scale, international fundamental research infrastructure, tightly involving industrial partners and providing training at all education levels, will be a strong motor of economic and societal development in all participating nations. The FCC study has implemented a set of actions towards a coherent vision for the world-wide high-energy and particle physics community, providing a collaborative framework for topically complementary and geographically well-balanced contributions. This conceptual design report lays the foundation for a subsequent infrastructure preparatory and technical design phase.

Journal ArticleDOI
10 May 2019-Science
TL;DR: An ionic floating-gate memory array based on a polymer redox transistor connected to a conductive-bridge memory (CBM) is introduced, enabling linear and symmetric weight updates in parallel over an entire crossbar array at megahertz rates over 10^9 write-read cycles.
Abstract: Neuromorphic computers could overcome efficiency bottlenecks inherent to conventional computing through parallel programming and readout of artificial neural network weights in a crossbar memory array. However, this requires weight updates that are selective, linear and symmetric. Here, an ionic floating-gate memory array based on a polymer redox transistor connected to a conductive-bridge memory (CBM) enables such updates in parallel over an entire crossbar array, endures more than 1 billion write-read operations, and supports >1-megahertz write-read frequencies.

Journal ArticleDOI
A. Abada1, Marcello Abbrescia2, Marcello Abbrescia3, Shehu S. AbdusSalam4  +1501 moreInstitutions (239)
TL;DR: In this article, the physics opportunities of the Future Circular Collider (FC) were reviewed, covering its e+e-, pp, ep and heavy ion programs, and the measurement capabilities of each FCC component, addressing the study of electroweak, Higgs and strong interactions.
Abstract: We review the physics opportunities of the Future Circular Collider, covering its e+e-, pp, ep and heavy ion programmes. We describe the measurement capabilities of each FCC component, addressing the study of electroweak, Higgs and strong interactions, the top quark and flavour, as well as phenomena beyond the Standard Model. We highlight the synergy and complementarity of the different colliders, which will contribute to a uniquely coherent and ambitious research programme, providing an unmatchable combination of precision and sensitivity to new physics.

Journal ArticleDOI
TL;DR: Recommendations arising from community discussions emerging out of the first International Conference on Hydrogen-Exchange Mass Spectrometry (IC-HDX; 2017) are provided, meant to represent both a consensus viewpoint and an opportunity to stimulate further additions and refinements as the field advances.
Abstract: Hydrogen deuterium exchange mass spectrometry (HDX-MS) is a powerful biophysical technique being increasingly applied to a wide variety of problems. As the HDX-MS community continues to grow, adoption of best practices in data collection, analysis, presentation and interpretation will greatly enhance the accessibility of this technique to nonspecialists. Here we provide recommendations arising from community discussions emerging out of the first International Conference on Hydrogen-Exchange Mass Spectrometry (IC-HDX; 2017). It is meant to represent both a consensus viewpoint and an opportunity to stimulate further additions and refinements as the field advances.
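For readers new to the technique, the core quantity in an HDX-MS experiment is simple: deuterium uptake is the centroid mass shift of a peptide relative to its undeuterated control, optionally normalized by a fully deuterated control to correct for back-exchange. A small sketch of that calculation with made-up masses:

```python
def deuterium_uptake(m_t, m_undeuterated):
    """Absolute uptake (Da): centroid mass shift vs. the undeuterated control."""
    return m_t - m_undeuterated

def relative_uptake(m_t, m_undeuterated, m_fully_deuterated):
    """Back-exchange-corrected fractional uptake using a fully deuterated control."""
    return (m_t - m_undeuterated) / (m_fully_deuterated - m_undeuterated)

# Example: a peptide centroid at 1250.40 Da after labeling, 1248.20 Da
# undeuterated and 1252.60 Da fully deuterated (made-up masses).
print(deuterium_uptake(1250.40, 1248.20))            # 2.2 Da
print(relative_uptake(1250.40, 1248.20, 1252.60))    # 0.5
```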

Journal ArticleDOI
03 Sep 2019-JAMA
TL;DR: Among outpatient health care personnel, N95 respirators vs medical masks as worn by participants in this trial resulted in no significant difference in the incidence of laboratory-confirmed influenza.
Abstract: Importance Clinical studies have been inconclusive about the effectiveness of N95 respirators and medical masks in preventing health care personnel (HCP) from acquiring workplace viral respiratory infections. Objective To compare the effect of N95 respirators vs medical masks for prevention of influenza and other viral respiratory infections among HCP. Design, Setting, and Participants A cluster randomized pragmatic effectiveness study conducted at 137 outpatient study sites at 7 US medical centers between September 2011 and May 2015, with final follow-up in June 2016. Each year for 4 years, during the 12-week period of peak viral respiratory illness, pairs of outpatient sites (clusters) within each center were matched and randomly assigned to the N95 respirator or medical mask groups. Interventions Overall, 1993 participants in 189 clusters were randomly assigned to wear N95 respirators (2512 HCP-seasons of observation) and 2058 in 191 clusters were randomly assigned to wear medical masks (2668 HCP-seasons) when near patients with respiratory illness. Main Outcomes and Measures The primary outcome was the incidence of laboratory-confirmed influenza. Secondary outcomes included incidence of acute respiratory illness, laboratory-detected respiratory infections, laboratory-confirmed respiratory illness, and influenzalike illness. Adherence to interventions was assessed. Results Among 2862 randomized participants (mean [SD] age, 43 [11.5] years; 2369 [82.8%] women), 2371 completed the study and accounted for 5180 HCP-seasons. There were 207 laboratory-confirmed influenza infection events (8.2% of HCP-seasons) in the N95 respirator group and 193 (7.2% of HCP-seasons) in the medical mask group (difference, 1.0% [95% CI, −0.5% to 2.5%]; P = .18) (adjusted odds ratio [OR], 1.18 [95% CI, 0.95-1.45]). There were 1556 acute respiratory illness events in the respirator group vs 1711 in the mask group (difference, −21.9 per 1000 HCP-seasons [95% CI, −48.2 to 4.4]; P = .10); 679 laboratory-detected respiratory infections in the respirator group vs 745 in the mask group (difference, −8.9 per 1000 HCP-seasons [95% CI, −33.3 to 15.4]; P = .47); 371 laboratory-confirmed respiratory illness events in the respirator group vs 417 in the mask group (difference, −8.6 per 1000 HCP-seasons [95% CI, −28.2 to 10.9]; P = .39); and 128 influenzalike illness events in the respirator group vs 166 in the mask group (difference, −11.3 per 1000 HCP-seasons [95% CI, −23.8 to 1.3]; P = .08). In the respirator group, 89.4% of participants reported “always” or “sometimes” wearing their assigned devices vs 90.2% in the mask group. Conclusions and Relevance Among outpatient health care personnel, N95 respirators vs medical masks as worn by participants in this trial resulted in no significant difference in the incidence of laboratory-confirmed influenza. Trial Registration ClinicalTrials.gov Identifier: NCT01249625
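The headline comparison can be sanity-checked from the reported counts: 207 laboratory-confirmed influenza events over 2512 HCP-seasons with N95 respirators versus 193 over 2668 with medical masks. The sketch below recomputes the risk difference with an unadjusted normal-approximation confidence interval, which lands close to the published −0.5% to 2.5% (the trial's own analysis additionally accounts for design effects such as clustering).

```python
import math

# Laboratory-confirmed influenza events per HCP-season, from the abstract.
events_n95, seasons_n95 = 207, 2512
events_mask, seasons_mask = 193, 2668

p1 = events_n95 / seasons_n95          # ~0.082
p2 = events_mask / seasons_mask        # ~0.072
diff = p1 - p2                         # ~0.010

# Unadjusted Wald 95% CI for a difference in proportions (ignores clustering).
se = math.sqrt(p1 * (1 - p1) / seasons_n95 + p2 * (1 - p2) / seasons_mask)
ci = (diff - 1.96 * se, diff + 1.96 * se)
print(f"{diff:.3f}  ({ci[0]:.3f}, {ci[1]:.3f})")   # ~0.010 (-0.005, 0.025)
```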