Journal ArticleDOI
TL;DR: In this paper, nucleosynthesis, light curves, explosion energies, and remnant masses are calculated for a grid of supernovae resulting from massive stars with solar metallicity and masses from 9.0 to 120 solar masses.
Abstract: Nucleosynthesis, light curves, explosion energies, and remnant masses are calculated for a grid of supernovae resulting from massive stars with solar metallicity and masses from 9.0 to 120 solar masses. The full evolution is followed using an adaptive reaction network of up to 2000 nuclei. A novel aspect of the survey is the use of a one-dimensional neutrino transport model for the explosion. This explosion model has been calibrated to give the observed energy for SN 1987A, using several standard progenitors, and for the Crab supernova using a 9.6 solar mass progenitor. As a result of using a calibrated central engine, the final kinetic energy of the supernova is variable and sensitive to the structure of the presupernova star. Many progenitors with extended core structures do not explode, but become black holes, and the masses of exploding stars do not form a simply connected set. The resulting nucleosynthesis agrees reasonably well with the sun provided that a reasonable contribution from Type Ia supernovae is also allowed, but with a deficiency of light s-process isotopes. The resulting neutron star IMF has a mean gravitational mass near 1.4 solar masses. The average black hole mass is about 9 solar masses if only the helium core implodes, and 14 solar masses if the entire presupernova star collapses. Only ~10% of supernovae come from stars over 20 solar masses and some of these are Type Ib or Ic. Some useful systematics of Type IIp light curves are explored.

892 citations


Journal ArticleDOI
TL;DR: In this article, a comprehensive first-principles study of the electronic structure of 51 semiconducting monolayer transition-metal dichalcogenides and -oxides in the 2H and 1T hexagonal phases is presented.
Abstract: We present a comprehensive first-principles study of the electronic structure of 51 semiconducting monolayer transition-metal dichalcogenides and -oxides in the 2H and 1T hexagonal phases. The quasiparticle (QP) band structures with spin–orbit coupling are calculated in the G0W0 approximation, and comparison is made with different density functional theory descriptions. Pitfalls related to the convergence of GW calculations for two-dimensional (2D) materials are discussed together with possible solutions. The monolayer band edge positions relative to vacuum are used to estimate the band alignment at various heterostructure interfaces. The sensitivity of the band structures to the in-plane lattice constant is analyzed and rationalized in terms of the electronic structure. Finally, the q-dependent dielectric functions and effective electron and hole masses are obtained from the QP band structure and used as input to a 2D hydrogenic model to estimate exciton binding energies. Throughout the paper we focus on...
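
As a rough companion to the last point, the strict two-dimensional hydrogenic model gives a closed-form binding energy E_b = 4 μ/ε² Ry, where μ is the reduced electron-hole mass (in units of the free-electron mass) and ε an effective dielectric constant. The sketch below only illustrates that textbook limit with assumed values for μ and ε; the paper's actual estimates use the computed QP masses and a q-dependent 2D dielectric function.

```python
# Back-of-the-envelope 2D hydrogenic exciton binding energy.
# mu and eps below are illustrative assumptions, not values from the paper.
RYDBERG_EV = 13.606

def exciton_binding_2d(mu: float, eps: float) -> float:
    """Strict-2D hydrogenic ground-state binding energy in eV: E_b = 4 * mu / eps^2 * Ry."""
    return 4.0 * mu / eps**2 * RYDBERG_EV

# e.g. mu ~ 0.25 m_e and eps ~ 7 give a binding energy of a few hundred meV,
# the order commonly quoted for monolayer TMDs.
print(f"{exciton_binding_2d(0.25, 7.0):.3f} eV")
```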

892 citations


Journal ArticleDOI
B. P. Abbott, Richard J. Abbott, T. D. Abbott, Fausto Acernese, and 1,319 more authors (78 institutions)
02 Nov 2017-Nature
TL;DR: A measurement of the Hubble constant is reported that combines the distance to the source inferred purely from the gravitational-wave signal with the recession velocity inferred from measurements of the redshift using the electromagnetic data.
Abstract: On 17 August 2017, the Advanced LIGO and Virgo detectors observed the gravitational-wave event GW170817—a strong signal from the merger of a binary neutron-star system. Less than two seconds after the merger, a γ-ray burst (GRB 170817A) was detected within a region of the sky consistent with the LIGO–Virgo-derived location of the gravitational-wave source. This sky region was subsequently observed by optical astronomy facilities, resulting in the identification of an optical transient signal within about ten arcseconds of the galaxy NGC 4993. This detection of GW170817 in both gravitational waves and electromagnetic waves represents the first ‘multi-messenger’ astronomical observation. Such observations enable GW170817 to be used as a ‘standard siren’ (meaning that the absolute distance to the source can be determined directly from the gravitational-wave measurements) to measure the Hubble constant. This quantity represents the local expansion rate of the Universe, sets the overall scale of the Universe and is of fundamental importance to cosmology. Here we report a measurement of the Hubble constant that combines the distance to the source inferred purely from the gravitational-wave signal with the recession velocity inferred from measurements of the redshift using the electromagnetic data. In contrast to previous measurements, ours does not require the use of a cosmic ‘distance ladder’: the gravitational-wave analysis can be used to estimate the luminosity distance out to cosmological scales directly, without the use of intermediate astronomical distance measurements. We determine the Hubble constant to be about 70 kilometres per second per megaparsec. This value is consistent with existing measurements, while being completely independent of them. Additional standard siren measurements from future gravitational-wave sources will enable the Hubble constant to be constrained to high precision.
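
The logic of the 'standard siren' measurement reduces, at low redshift, to Hubble's law H0 ≈ v/d, with the distance d taken from the gravitational-wave signal and the velocity v from the electromagnetic redshift. The minimal sketch below uses illustrative placeholder numbers of roughly the right magnitude, not the paper's posterior values.

```python
# Minimal numerical sketch of the "standard siren" idea: H0 ~ v / d_L at low
# redshift. The numbers are illustrative placeholders; the real analysis
# marginalizes over peculiar velocity and the GW distance-inclination degeneracy.

recession_velocity_km_s = 3.0e3   # Hubble-flow velocity of the host galaxy (assumed)
luminosity_distance_mpc = 43.0    # distance inferred from the GW signal alone (assumed)

H0 = recession_velocity_km_s / luminosity_distance_mpc
print(f"H0 ~ {H0:.1f} km/s/Mpc")  # ~70 km/s/Mpc, the order of the reported value
```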

892 citations


Posted Content
TL;DR: It is found that with optimal learning rate, the "winning ticket" initialization as used in Frankle & Carbin (2019) does not bring improvement over random initialization, and the need for more careful baseline evaluations in future research on structured pruning methods is suggested.
Abstract: Network pruning is widely used for reducing the heavy inference cost of deep models in low-resource settings. A typical pruning algorithm is a three-stage pipeline, i.e., training (a large model), pruning and fine-tuning. During pruning, according to a certain criterion, redundant weights are pruned and important weights are kept to best preserve the accuracy. In this work, we make several surprising observations which contradict common beliefs. For all state-of-the-art structured pruning algorithms we examined, fine-tuning a pruned model only gives comparable or worse performance than training that model with randomly initialized weights. For pruning algorithms which assume a predefined target network architecture, one can get rid of the full pipeline and directly train the target network from scratch. Our observations are consistent for multiple network architectures, datasets, and tasks, which imply that: 1) training a large, over-parameterized model is often not necessary to obtain an efficient final model, 2) learned "important" weights of the large model are typically not useful for the small pruned model, 3) the pruned architecture itself, rather than a set of inherited "important" weights, is more crucial to the efficiency in the final model, which suggests that in some cases pruning can be useful as an architecture search paradigm. Our results suggest the need for more careful baseline evaluations in future research on structured pruning methods. We also compare with the "Lottery Ticket Hypothesis" (Frankle & Carbin 2019), and find that with optimal learning rate, the "winning ticket" initialization as used in Frankle & Carbin (2019) does not bring improvement over random initialization.
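
A hedged sketch of the comparison the abstract describes: derive a smaller architecture by L1-norm channel pruning, then either fine-tune the inherited weights (the classic pipeline) or train the same architecture from scratch with fresh random weights (the baseline the paper argues for). Layer sizes and the pruning ratio below are arbitrary illustrations, not settings from the paper.

```python
import torch
import torch.nn as nn

def l1_channel_ranking(conv: nn.Conv2d):
    """Rank output channels of a conv layer by the L1 norm of their filters."""
    scores = conv.weight.detach().abs().sum(dim=(1, 2, 3))
    return scores.argsort(descending=True)

big = nn.Conv2d(3, 64, kernel_size=3, padding=1)   # "large" layer (illustrative size)
keep_ratio = 0.5
keep = l1_channel_ranking(big)[: int(64 * keep_ratio)]

# Option A (classic pipeline): copy the surviving filters, then fine-tune.
pruned = nn.Conv2d(3, len(keep), kernel_size=3, padding=1)
with torch.no_grad():
    pruned.weight.copy_(big.weight[keep])
    pruned.bias.copy_(big.bias[keep])

# Option B (the comparison the paper advocates): keep only the *architecture*
# and train it from scratch with randomly initialized weights.
scratch = nn.Conv2d(3, len(keep), kernel_size=3, padding=1)
```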

892 citations


Journal ArticleDOI
TL;DR: In this large, population-based study, the use of ACE inhibitors and ARBs was more frequent among patients with Covid-19 than among controls because of their higher prevalence of cardiovascular disease, and there was no evidence that ACE inhibitors or ARBs affected the risk of Covid-19.
Abstract: Background A potential association between the use of angiotensin-receptor blockers (ARBs) and angiotensin-converting–enzyme (ACE) inhibitors and the risk of coronavirus disease 2019 (Covi...

892 citations


Journal ArticleDOI
01 Mar 2019

892 citations



Proceedings ArticleDOI
01 Oct 2019
TL;DR: Temporal Shift Module (TSM) as mentioned in this paper shifts part of the channels along the temporal dimension to facilitate information exchange among neighboring frames; it can be inserted into 2D CNNs to achieve temporal modeling at zero computation and zero parameters.
Abstract: The explosive growth in video streaming gives rise to challenges in performing video understanding at high accuracy and low computation cost. Conventional 2D CNNs are computationally cheap but cannot capture temporal relationships; 3D-CNN-based methods can achieve good performance but are computationally intensive, making them expensive to deploy. In this paper, we propose a generic and effective Temporal Shift Module (TSM) that enjoys both high efficiency and high performance. Specifically, it can achieve the performance of 3D CNNs while maintaining 2D CNN complexity. TSM shifts part of the channels along the temporal dimension, thus facilitating information exchange among neighboring frames. It can be inserted into 2D CNNs to achieve temporal modeling at zero computation and zero parameters. We also extend TSM to the online setting, which enables real-time, low-latency online video recognition and video object detection. TSM is accurate and efficient: it ranked first on the Something-Something leaderboard upon publication; on Jetson Nano and Galaxy Note8, it achieves low latencies of 13 ms and 35 ms for online video recognition. The code is available at: https://github.com/mit-han-lab/temporal-shift-module.
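
The shift operation itself is simple enough to sketch: a fraction of the channels is moved one step toward earlier frames, another fraction one step toward later frames, and the vacated slots are zero-filled. The tensor layout and the 1/8 shift fraction below follow the common description of TSM but are assumptions here, not the authors' reference code.

```python
import torch

def temporal_shift(x: torch.Tensor, fold_div: int = 8) -> torch.Tensor:
    """x: [batch, time, channels, height, width]; returns a tensor of the same shape."""
    b, t, c, h, w = x.shape
    fold = c // fold_div
    out = torch.zeros_like(x)
    out[:, :-1, :fold] = x[:, 1:, :fold]                  # shift toward earlier frames
    out[:, 1:, fold:2 * fold] = x[:, :-1, fold:2 * fold]  # shift toward later frames
    out[:, :, 2 * fold:] = x[:, :, 2 * fold:]             # remaining channels stay in place
    return out

frames = torch.randn(2, 8, 64, 56, 56)
shifted = temporal_shift(frames)  # same shape, zero extra parameters, zero extra FLOPs
```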

892 citations


Journal ArticleDOI
TL;DR: The results published in this report were obtained with the data available as of mid-May 2020 and are thus based on an incomplete data set, as some deals had not yet reported relevant data; the results can therefore be affected by outliers.
Abstract: Most charts in this document will be made available in our COVID-19 Tracker, which we intend to update and complete on a regular basis as more data becomes available. We expect delinquencies to rise substantially over the next months, and as of the end of May, the April data indeed shows soaring arrears in several markets. We also expect that payment holidays and other types of loan modifications will become increasingly visible in our upcoming LLD, given that clear reporting instructions are now available. Starting from a few cases in early February 2020, COVID-19 spread throughout Europe, forcing governments to enact severe social distancing measures. By the end of April, four of the countries with the greatest number of confirmed COVID-19 deaths in the world were European countries (Spain, Italy, France, UK), with more than 100,000 documented COVID-19-related deaths in total. As the death curves start to flatten (Exhibit 1), the damage to the economy takes centre stage. PLEASE NOTE: The results published in this report were obtained with the data available as of mid-May 2020 and are thus based on an incomplete data set, as some deals had not yet reported relevant data. Our results can thus be affected by outliers. We intend to update these charts as more data becomes available, and the results may change.

891 citations


Journal ArticleDOI
TL;DR: It is shown that higher plant diversity increases rhizosphere carbon inputs into the microbial community resulting in both increased microbial activity and carbon storage, indicating that the increase in carbon storage is mainly limited by the integration of new carbon into soil and less by the decomposition of existing soil carbon.
Abstract: Plant diversity strongly influences ecosystem functions and services, such as soil carbon storage. However, the mechanisms underlying the positive plant diversity effects on soil carbon storage are poorly understood. We explored this relationship using long-term data from a grassland biodiversity experiment (The Jena Experiment) and radiocarbon ((14)C) modelling. Here we show that higher plant diversity increases rhizosphere carbon inputs into the microbial community resulting in both increased microbial activity and carbon storage. Increases in soil carbon were related to the enhanced accumulation of recently fixed carbon in high-diversity plots, while plant diversity had less pronounced effects on the decomposition rate of existing carbon. The present study shows that elevated carbon storage at high plant diversity is a direct function of the soil microbial community, indicating that the increase in carbon storage is mainly limited by the integration of new carbon into soil and less by the decomposition of existing soil carbon.

891 citations


Proceedings Article
15 Feb 2018
TL;DR: Capsule Networks as mentioned in this paper use a logistic unit to represent the presence of an entity and a 4x4 matrix to learn the relationship between that entity and the viewer (the pose).
Abstract: A capsule is a group of neurons whose outputs represent different properties of the same entity. Each layer in a capsule network contains many capsules [a group of capsules forms a capsule layer and can be used in place of a traditional layer in a neural net]. We describe a version of capsules in which each capsule has a logistic unit to represent the presence of an entity and a 4x4 matrix which could learn to represent the relationship between that entity and the viewer (the pose). A capsule in one layer votes for the pose matrix of many different capsules in the layer above by multiplying its own pose matrix by trainable viewpoint-invariant transformation matrices that could learn to represent part-whole relationships. Each of these votes is weighted by an assignment coefficient. These coefficients are iteratively updated for each image using the Expectation-Maximization algorithm such that the output of each capsule is routed to a capsule in the layer above that receives a cluster of similar votes. The transformation matrices are trained discriminatively by backpropagating through the unrolled iterations of EM between each pair of adjacent capsule layers. On the smallNORB benchmark, capsules reduce the number of test errors by 45% compared to the state-of-the-art. Capsules also show far more resistance to white-box adversarial attack than our baseline convolutional neural network.
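
A rough sketch of the vote computation described above: every lower-level capsule's 4x4 pose matrix is multiplied by a trainable, viewpoint-invariant transformation matrix for each higher-level capsule. The EM routing step that weights and clusters these votes is omitted, and the layer sizes are assumptions for illustration only.

```python
import torch

num_in, num_out = 32, 16                     # capsules per layer (assumed)
poses = torch.randn(num_in, 4, 4)            # pose matrices of the lower layer
transforms = torch.randn(num_in, num_out, 4, 4, requires_grad=True)  # learned W_ij

# vote of capsule i for capsule j: V_ij = M_i @ W_ij
votes = torch.einsum('iab,ijbc->ijac', poses, transforms)  # [num_in, num_out, 4, 4]
# EM routing would now weight each vote by an assignment coefficient and
# cluster similar votes to form the poses of the layer above (not shown).
```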

Journal ArticleDOI
01 Dec 2016-Nature
TL;DR: The HITI method presented here establishes new avenues for basic research and targeted gene therapies and demonstrates the efficacy of HITI in improving visual function using a rat model of the retinal degeneration condition retinitis pigmentosa.
Abstract: Targeted genome editing via engineered nucleases is an exciting area of biomedical research and holds potential for clinical applications. Despite rapid advances in the field, in vivo targeted transgene integration is still infeasible because current tools are inefficient, especially for non-dividing cells, which compose most adult tissues. This poses a barrier for uncovering fundamental biological principles and developing treatments for a broad range of genetic disorders. Based on clustered regularly interspaced short palindromic repeat/Cas9 (CRISPR/Cas9) technology, here we devise a homology-independent targeted integration (HITI) strategy, which allows for robust DNA knock-in in both dividing and non-dividing cells in vitro and, more importantly, in vivo (for example, in neurons of postnatal mammals). As a proof of concept of its therapeutic potential, we demonstrate the efficacy of HITI in improving visual function using a rat model of the retinal degeneration condition retinitis pigmentosa. The HITI method presented here establishes new avenues for basic research and targeted gene therapies.

Journal ArticleDOI
10 Mar 2016-Cell
TL;DR: How new technologies can enable metabolic engineering to be scaled up to the industrial level, either by cutting off the lines of control for endogenous metabolism or by infiltrating the system with disruptive, heterologous pathways that overcome cellular regulation is discussed.

Journal ArticleDOI
TL;DR: A survey of techniques for approximate computing (AC), which discusses strategies for finding approximable program portions and monitoring output quality, techniques for using AC in different processing units, processor components, memory technologies, and so forth, as well as programming frameworks for AC.
Abstract: Approximate computing trades off computation quality with effort expended, and as rising performance demands confront plateauing resource budgets, approximate computing has become not merely attractive, but even imperative. In this article, we present a survey of techniques for approximate computing (AC). We discuss strategies for finding approximable program portions and monitoring output quality, techniques for using AC in different processing units (e.g., CPU, GPU, and FPGA), processor components, memory technologies, and so forth, as well as programming frameworks for AC. We classify these techniques based on several key characteristics to emphasize their similarities and differences. The aim of this article is to provide researchers with insights into the working of AC techniques and to inspire more efforts in this area to make AC the mainstream computing approach in future systems.
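
One concrete flavour of approximate computing covered by surveys of this kind is loop perforation: skip a fraction of loop iterations and rescale the result, trading output quality for effort. The sketch below is a generic illustration of that idea, not code from the article.

```python
import random

data = [random.random() for _ in range(1_000_000)]

exact = sum(data)

stride = 4                              # process only every 4th element
approx = sum(data[::stride]) * stride   # rescale to compensate for skipped work

rel_error = abs(approx - exact) / exact
print(f"relative error with {100 * (1 - 1 / stride):.0f}% of work skipped: {rel_error:.4%}")
```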

Journal ArticleDOI
TL;DR: Some of the main strategies in cancer immunotherapy (cancer vaccines, adoptive cellular immunotherapy, immune checkpoint blockade, and oncolytic viruses) are outlined and the progress in the synergistic design of immune-targeting combination therapies is discussed.
Abstract: These are exciting times for cancer immunotherapy. After many years of disappointing results, the tide has finally changed and immunotherapy has become a clinically validated treatment for many cancers. Immunotherapeutic strategies include cancer vaccines, oncolytic viruses, adoptive transfer of ex vivo activated T and natural killer cells, and administration of antibodies or recombinant proteins that either costimulate cells or block the so-called immune checkpoint pathways. The recent success of several immunotherapeutic regimes, such as monoclonal antibody blocking of cytotoxic T lymphocyte-associated protein 4 (CTLA-4) and programmed cell death protein 1 (PD1), has boosted the development of this treatment modality, with the consequence that new therapeutic targets and schemes which combine various immunological agents are now being described at a breathtaking pace. In this review, we outline some of the main strategies in cancer immunotherapy (cancer vaccines, adoptive cellular immunotherapy, immune checkpoint blockade, and oncolytic viruses) and discuss the progress in the synergistic design of immune-targeting combination therapies.

Journal ArticleDOI
TL;DR: New targets have been set for value-based payment: 85% of Medicare fee-for-service payments should be tied to quality or value by 2016, and 30% of Medicare payments should flow through alternative payment models by 2016, rising to 50% by 2018.
Abstract: New targets have been set for value-based payment: 85% of Medicare fee-for-service payments should be tied to quality or value by 2016, and 30% of Medicare payments should be tied to quality or value through alternative payment models by 2016 (50% by 2018).

Book ChapterDOI
19 Apr 2018
TL;DR: Part One: PRINCIPLES OF GENDER CONSTRUCTION Doing Gender - Candace West and Don H Zimmerman Social Location and Gender-Role Attitudes - Karen Dugger A Comparison of Black and White Women Masculinities and Athletic Careers - Michael Messner Part Two: GENDER CONSTRUCTION IN FAMILY LIFE 'Helped Put in a Quilt' - Karen V Hansen Men's Work and Male Intimacy in Nineteenth Century New England Bargaining with Patriarchy - Deniz Kandiyoti Family, Feminism, and Race in America
Abstract: PART ONE: PRINCIPLES OF GENDER CONSTRUCTION Doing Gender - Candace West and Don H Zimmerman Social Location and Gender-Role Attitudes - Karen Dugger A Comparison of Black and White Women Masculinities and Athletic Careers - Michael Messner PART TWO: GENDER CONSTRUCTION IN FAMILY LIFE 'Helped Put in a Quilt' - Karen V Hansen Men's Work and Male Intimacy in Nineteenth Century New England Bargaining with Patriarchy - Deniz Kandiyoti Family, Feminism, and Race in America - Maxine Baca Zinn PART THREE: GENDER CONSTRUCTION IN THE WORK PLACE Bringing the Men Back in - Barbara F Reskin Sex Differentiation and the Devaluation of Women's Work Hierarchies, Jobs, Bodies - Joan Acker A Theory of Gendered Organizations Labor-Market Gendered Inequality in Minority Groups - Elizabeth M Almquist Feminization of Poverty and Comparable Worth - Johanna Brenner Radical Versus Liberal Approaches PART FOUR: FEMINIST RESEARCH STRATEGIES When Gender is Not Enough - Catherine Kohler Riessman Women Interviewing Women Race and Class Bias in Qualitative Research on Women - Lynn Weber Cannon, Elizabeth Higgenbotham and Maryanne Leung PART FIVE: RACIAL ETHNIC IDENTITY AND FEMINIST POLITICS The Development of Feminist Consciousness Among Asian American Women - Esther Ngan-Ling Chow The Development of Chicana Feminist Discourse, 1970-1980 - Alma M Garcia New Bedford Massachusetts - Lynn S Chaucer March 6 - March 22, 1984 The 'Before and After' of a Group Rape PART SIX: DECONSTRUCTING GENDER Theories of Gender Equality - Judith Buber Agassi Lessons from the Israeli Kibbutz 'It's Our Church, Too!' Women's Position in the Catholic Church Today - Susan A Farrell Dismantling Noah's Ark - Judith Lorber

Posted Content
TL;DR: This paper surveys the recent advanced techniques for compacting and accelerating CNN models, roughly categorized into four schemes: parameter pruning and quantization, low-rank factorization, transferred/compact convolutional filters, and knowledge distillation.
Abstract: Deep neural networks (DNNs) have recently achieved great success in many visual recognition tasks. However, existing deep neural network models are computationally expensive and memory intensive, hindering their deployment in devices with low memory resources or in applications with strict latency requirements. Therefore, a natural thought is to perform model compression and acceleration in deep networks without significantly decreasing the model performance. During the past five years, tremendous progress has been made in this area. In this paper, we review the recent techniques for compacting and accelerating DNN models. In general, these techniques are divided into four categories: parameter pruning and quantization, low-rank factorization, transferred/compact convolutional filters, and knowledge distillation. Methods of parameter pruning and quantization are described first, after which the other techniques are introduced. For each category, we also provide insightful analysis of the performance, related applications, advantages, and drawbacks. Then we go through some very recent successful methods, for example, dynamic capacity networks and stochastic depth networks. After that, we survey the evaluation metrics, the main datasets used for evaluating the model performance, and recent benchmark efforts. Finally, we conclude the paper and discuss the remaining challenges and possible directions for future work.
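
To make one of the four categories concrete, the sketch below shows low-rank factorization of a dense weight matrix via truncated SVD, which replaces one large matrix by two thin ones and so cuts parameters and multiply-adds. Matrix sizes and the chosen rank are arbitrary illustrations, not values from the survey.

```python
import numpy as np

W = np.random.randn(1024, 512)           # original fully connected weight matrix
U, s, Vt = np.linalg.svd(W, full_matrices=False)

rank = 64
A = U[:, :rank] * s[:rank]               # 1024 x 64
B = Vt[:rank, :]                         # 64 x 512

params_before = W.size                   # 524,288 parameters
params_after = A.size + B.size           # 98,304 parameters
approx_error = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
print(params_before, params_after, round(approx_error, 3))
```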

Journal ArticleDOI
TL;DR: Due to the rapid spread and increasing number of coronavirus disease 2019 (COVID-19) cases caused by a new coronavirus, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), rapid and accurate...
Abstract: Due to the rapid spread and increasing number of coronavirus disease 2019 (COVID-19) cases caused by a new coronavirus, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), rapid and accurate...

Journal ArticleDOI
TL;DR: In this paper, a method for generating one-pixel adversarial perturbations based on differential evolution (DE) is proposed, which requires less adversarial information (a black-box attack) and can fool more types of networks due to the inherent features of DE.
Abstract: Recent research has revealed that the output of Deep Neural Networks (DNN) can be easily altered by adding relatively small perturbations to the input vector. In this paper, we analyze an attack in an extremely limited scenario where only one pixel can be modified. For that, we propose a novel method for generating one-pixel adversarial perturbations based on differential evolution (DE). It requires less adversarial information (a black-box attack) and can fool more types of networks due to the inherent features of DE. The results show that 67.97% of the natural images in the Kaggle CIFAR-10 test dataset and 16.04% of the ImageNet (ILSVRC 2012) test images can be perturbed to at least one target class by modifying just one pixel, with 74.03% and 22.91% confidence on average. We also show the same vulnerability on the original CIFAR-10 dataset. Thus, the proposed attack explores a different take on adversarial machine learning in an extremely limited scenario, showing that current DNNs are also vulnerable to such low-dimensional attacks. In addition, we illustrate an important application of DE (or, broadly speaking, evolutionary computation) in the domain of adversarial machine learning: creating tools that can effectively generate low-cost adversarial attacks against neural networks for evaluating robustness.
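
A hedged sketch of the attack's outer loop: differential evolution searches over a single pixel's coordinates and RGB value so as to minimize the classifier's confidence in the true class. The predict_proba function below is a placeholder standing in for a real network; only the scipy differential_evolution call reflects the DE machinery.

```python
import numpy as np
from scipy.optimize import differential_evolution

H, W = 32, 32
image = np.random.rand(H, W, 3)          # stand-in for a CIFAR-10 image
true_label = 3

def predict_proba(img):
    """Placeholder classifier: returns a fake 10-class probability vector."""
    rng = np.random.default_rng(int(img.sum() * 1e6) % (2**32))
    p = rng.random(10)
    return p / p.sum()

def fitness(z):
    x, y, r, g, b = z
    perturbed = image.copy()
    perturbed[int(x), int(y)] = [r, g, b]         # modify exactly one pixel
    return predict_proba(perturbed)[true_label]   # lower is better for the attacker

bounds = [(0, H - 1), (0, W - 1), (0, 1), (0, 1), (0, 1)]
result = differential_evolution(fitness, bounds, maxiter=20, popsize=10, seed=0)
print("confidence in true class after attack:", result.fun)
```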

Journal ArticleDOI
TL;DR: The analysis indicates that the prevalence of adult obesity and severe obesity will continue to increase nationwide, with large disparities across states and demographic subgroups.
Abstract: Background Although the national obesity epidemic has been well documented, less is known about obesity at the U.S. state level. Current estimates are based on body measures reported by pe...

Journal ArticleDOI
TL;DR: 'Pandemic fear' and COVID-19: mental health burden and strategies

Journal ArticleDOI
24 Feb 2017-Science
TL;DR: The effects of the expansion of Tet2-mutant cells in atherosclerosis-prone, low-density lipoprotein receptor–deficient mice are studied and it is found that partial bone marrow reconstitution with TET2-deficient cells was sufficient for their clonal expansion and led to a marked increase in atherosclerotic plaque size.
Abstract: Human aging is associated with an increased frequency of somatic mutations in hematopoietic cells. Several of these recurrent mutations, including those in the gene encoding the epigenetic modifier enzyme TET2, promote expansion of the mutant blood cells. This clonal hematopoiesis correlates with an increased risk of atherosclerotic cardiovascular disease. We studied the effects of the expansion of Tet2-mutant cells in atherosclerosis-prone, low-density lipoprotein receptor–deficient (Ldlr–/–) mice. We found that partial bone marrow reconstitution with TET2-deficient cells was sufficient for their clonal expansion and led to a marked increase in atherosclerotic plaque size. TET2-deficient macrophages exhibited an increase in NLRP3 inflammasome–mediated interleukin-1β secretion. An NLRP3 inhibitor showed greater atheroprotective activity in chimeric mice reconstituted with TET2-deficient cells than in nonchimeric mice. These results support the hypothesis that somatic TET2 mutations in blood cells play a causal role in atherosclerosis.

Journal ArticleDOI
Jingjing Liang1, Thomas W. Crowther2, Nicolas Picard3, Susan K. Wiser4, Mo Zhou1, Giorgio Alberti5, Ernst Detlef Schulze6, A. David McGuire7, Fabio Bozzato, Hans Pretzsch8, Sergio de-Miguel, Alain Paquette9, Bruno Hérault10, Michael Scherer-Lorenzen11, Christopher B. Barrett12, Henry B. Glick2, Geerten M. Hengeveld13, Gert-Jan Nabuurs13, Sebastian Pfautsch14, Helder Viana15, Helder Viana16, Alexander Christian Vibrans, Christian Ammer17, Peter Schall17, David David Verbyla7, N. M. Tchebakova18, Markus Fischer19, James V. Watson1, Han Y. H. Chen20, Xiangdong Lei, Mart-Jan Schelhaas13, Huicui Lu13, Damiano Gianelle, Elena I. Parfenova18, Christian Salas21, Eungul Lee1, Boknam Lee22, Hyun-Seok Kim, Helge Bruelheide23, David A. Coomes24, Daniel Piotto, Terry Sunderland25, Terry Sunderland26, Bernhard Schmid27, Sylvie Gourlet-Fleury, Bonaventure Sonké28, Rebecca Tavani3, Jun Zhu29, Susanne Brandl8, Jordi Vayreda, Fumiaki Kitahara, Eric B. Searle20, Victor J. Neldner30, Michael R. Ngugi30, Christopher Baraloto31, Christopher Baraloto32, Lorenzo Frizzera, Radomir Bałazy33, Jacek Oleksyn34, Jacek Oleksyn35, Tomasz Zawiła-Niedźwiecki36, Olivier Bouriaud37, Filippo Bussotti38, Leena Finér, Bogdan Jaroszewicz39, Tommaso Jucker24, Fernando Valladares40, Fernando Valladares41, Andrzej M. Jagodziński34, Pablo Luis Peri42, Pablo Luis Peri43, Pablo Luis Peri44, Christelle Gonmadje28, William Marthy45, Timothy G. O'Brien45, Emanuel H. Martin46, Andrew R. Marshall47, Francesco Rovero, Robert Bitariho, Pascal A. Niklaus27, Patricia Alvarez-Loayza48, Nurdin Chamuya49, Renato Valencia50, Frédéric Mortier, Verginia Wortel, Nestor L. Engone-Obiang51, Leandro Valle Ferreira52, David E. Odeke, R. Vásquez, Simon L. Lewis53, Simon L. Lewis54, Peter B. Reich35, Peter B. Reich14 
West Virginia University1, Yale University2, Food and Agriculture Organization3, Landcare Research4, University of Udine5, Max Planck Society6, University of Alaska Fairbanks7, Technische Universität München8, Université du Québec à Montréal9, University of the French West Indies and Guiana10, University of Freiburg Faculty of Biology11, Cornell University12, Wageningen University and Research Centre13, University of Sydney14, Polytechnic Institute of Viseu15, University of Trás-os-Montes and Alto Douro16, University of Göttingen17, Russian Academy of Sciences18, Oeschger Centre for Climate Change Research19, Lakehead University20, University of La Frontera21, Seoul National University22, Martin Luther University of Halle-Wittenberg23, University of Cambridge24, James Cook University25, Center for International Forestry Research26, University of Zurich27, University of Yaoundé I28, University of Wisconsin-Madison29, Queensland Government30, Florida International University31, Institut national de la recherche agronomique32, Forest Research Institute33, Polish Academy of Sciences34, University of Minnesota35, Warsaw University of Life Sciences36, Ştefan cel Mare University of Suceava37, University of Florence38, University of Warsaw39, King Juan Carlos University40, Spanish National Research Council41, National University of Austral Patagonia42, National Scientific and Technical Research Council43, International Trademark Association44, Wildlife Conservation Society45, College of African Wildlife Management46, University of York47, Durham University48, Ontario Ministry of Natural Resources49, Pontificia Universidad Católica del Ecuador50, Centre national de la recherche scientifique51, Museu Paraense Emílio Goeldi52, University College London53, University of Leeds54
14 Oct 2016-Science
TL;DR: A consistent positive concave-down effect of biodiversity on forest productivity across the world is revealed, showing that a continued biodiversity loss would result in an accelerating decline in forest productivity worldwide.
Abstract: The biodiversity-productivity relationship (BPR) is foundational to our understanding of the global extinction crisis and its impacts on ecosystem functioning. Understanding BPR is critical for the accurate valuation and effective conservation of biodiversity. Using ground-sourced data from 777,126 permanent plots, spanning 44 countries and most terrestrial biomes, we reveal a globally consistent positive concave-down BPR, showing that continued biodiversity loss would result in an accelerating decline in forest productivity worldwide. The value of biodiversity in maintaining commercial forest productivity alone (US$166 billion to 490 billion per year according to our estimation) is more than twice what it would cost to implement effective global conservation. This highlights the need for a worldwide reassessment of biodiversity values, forest management strategies, and conservation priorities.

Posted Content
TL;DR: This work shows that failure to converge typically is not due to a suboptimal estimation algorithm, but is a consequence of attempting to fit a model that is too complex to be properly supported by the data, irrespective of whether estimation is based on maximum likelihood or on Bayesian hierarchical modeling with uninformative or weakly informative priors.
Abstract: The analysis of experimental data with mixed-effects models requires decisions about the specification of the appropriate random-effects structure. Recently, Barr, Levy, Scheepers, and Tily (2013) recommended fitting 'maximal' models with all possible random effect components included. Estimation of maximal models, however, may not converge. We show that failure to converge typically is not due to a suboptimal estimation algorithm, but is a consequence of attempting to fit a model that is too complex to be properly supported by the data, irrespective of whether estimation is based on maximum likelihood or on Bayesian hierarchical modeling with uninformative or weakly informative priors. Importantly, even under convergence, overparameterization may lead to uninterpretable models. We provide diagnostic tools for detecting overparameterization and guiding model simplification.
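
The practical advice translates into a simple fit-check-simplify loop. The sketch below transposes it to Python's statsmodels (the original discussion is set in R/lme4): fit a model with a random intercept and slope, check convergence, and fall back to a random intercept only when the richer structure is not supported by the data. Column names and the simulated data are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj, n_obs = 20, 10
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_obs),
    "x": np.tile(np.linspace(0, 1, n_obs), n_subj),
})
df["y"] = 1.0 + 2.0 * df["x"] + rng.normal(0, 0.5, len(df))

# Richer structure: random intercept and random slope per subject.
maximal = smf.mixedlm("y ~ x", df, groups=df["subject"], re_formula="~x").fit()

if not maximal.converged:
    # Likely overparameterized for these data: simplify to a random intercept only.
    reduced = smf.mixedlm("y ~ x", df, groups=df["subject"]).fit()
```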

Journal ArticleDOI
TL;DR: In this article, the superconducting properties of NbSe2 as it approaches the monolayer limit are investigated by means of magnetotransport measurements, uncovering evidence of spin-momentum locking.
Abstract: The superconducting properties of NbSe2 as it approaches the monolayer limit are investigated by means of magnetotransport measurements, uncovering evidence of spin–momentum locking.

Proceedings ArticleDOI
01 Jan 2019
TL;DR: The proposed memory-augmented autoencoder (MemAE) is free of assumptions on the data type and thus general enough to be applied to different tasks; experiments prove its excellent generalization and high effectiveness.
Abstract: Deep autoencoders have been extensively used for anomaly detection. Trained on normal data, an autoencoder is expected to produce a higher reconstruction error for abnormal inputs than for normal ones, which is adopted as a criterion for identifying anomalies. However, this assumption does not always hold in practice. It has been observed that sometimes the autoencoder "generalizes" so well that it can also reconstruct anomalies well, leading to the missed detection of anomalies. To mitigate this drawback of autoencoder-based anomaly detectors, we propose to augment the autoencoder with a memory module and develop an improved autoencoder called the memory-augmented autoencoder, i.e. MemAE. Given an input, MemAE first obtains the encoding from the encoder and then uses it as a query to retrieve the most relevant memory items for reconstruction. At the training stage, the memory contents are updated and are encouraged to represent the prototypical elements of the normal data. At the test stage, the learned memory is fixed, and the reconstruction is obtained from a few selected memory records of the normal data. The reconstruction will thus tend to be close to a normal sample, and the reconstruction errors on anomalies will be amplified for anomaly detection. MemAE is free of assumptions on the data type and is thus general enough to be applied to different tasks. Experiments on various datasets prove the excellent generalization and high effectiveness of the proposed MemAE.
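
A hedged sketch of the memory-addressing step described above: the encoder output queries a learned memory via cosine similarity, the resulting attention weights are sparsified by hard shrinkage, and the decoder input is a weighted sum of memory items. Dimensions and the shrinkage threshold are assumptions, not values from the paper.

```python
import torch
import torch.nn.functional as F

mem_size, feat_dim = 100, 64
memory = torch.nn.Parameter(torch.randn(mem_size, feat_dim))  # learned prototypes

def address_memory(z, memory, shrink_thres=0.02):
    # cosine similarity between the query z [batch, feat_dim] and each memory slot
    att = F.softmax(F.linear(F.normalize(z, dim=1),
                             F.normalize(memory, dim=1)), dim=1)  # [batch, mem_size]
    # hard shrinkage: suppress small weights so only a few prototypes are used
    att = F.relu(att - shrink_thres)
    att = F.normalize(att, p=1, dim=1)
    return att @ memory                                           # [batch, feat_dim]

z = torch.randn(8, feat_dim)          # stand-in for encoder outputs
z_hat = address_memory(z, memory)      # fed to the decoder for reconstruction
```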

Journal ArticleDOI
07 Aug 2015-Science
TL;DR: Laser-induced phase patterning is used to fabricate an ohmic heterophase homojunction between semiconducting hexagonal and metallic monoclinic molybdenum ditelluride that is stable up to 300°C and increases the carrier mobility of the MoTe2 transistor by a factor of about 50, while retaining a high on/off current ratio of 10^6.
Abstract: Artificial van der Waals heterostructures with two-dimensional (2D) atomic crystals are promising as an active channel or as a buffer contact layer for next-generation devices. However, genuine 2D heterostructure devices remain limited because of impurity-involved transfer processes and metastable and inhomogeneous heterostructure formation. We used laser-induced phase patterning, a polymorph-engineering technique, to fabricate an ohmic heterophase homojunction between semiconducting hexagonal (2H) and metallic monoclinic (1T') molybdenum ditelluride (MoTe2) that is stable up to 300°C and increases the carrier mobility of the MoTe2 transistor by a factor of about 50, while retaining a high on/off current ratio of 10^6. In situ scanning transmission electron microscopy results combined with theoretical calculations reveal that the Te vacancy triggers the local phase transition in MoTe2, achieving a true 2D device with an ohmic contact.

Journal ArticleDOI
TL;DR: This paper presents an overview of SA and its links to uncertainty analysis, model calibration and evaluation, and robust decision-making, and provides practical guidelines by developing a workflow for the application of SA.
Abstract: Sensitivity Analysis (SA) investigates how the variation in the output of a numerical model can be attributed to variations of its input factors. SA is increasingly being used in environmental modelling for a variety of purposes, including uncertainty assessment, model calibration and diagnostic evaluation, dominant control analysis and robust decision-making. In this paper we review the SA literature with the goal of providing: (i) a comprehensive view of SA approaches also in relation to other methodologies for model identification and application; (ii) a systematic classification of the most commonly used SA methods; (iii) practical guidelines for the application of SA. The paper aims at delivering an introduction to SA for non-specialist readers, as well as practical advice with best practice examples from the literature; and at stimulating the discussion within the community of SA developers and users regarding the setting of good practices and on defining priorities for future research. We present an overview of SA and its link to uncertainty analysis, model calibration and evaluation, robust decision-making. We provide a systematic review of existing approaches, which can support users in the choice of an SA method. We provide practical guidelines by developing a workflow for the application of SA and discuss critical choices. We give best practice examples from the literature and highlight trends and gaps for future research.
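
The simplest SA flavour touched on in such reviews is a local one-at-a-time (OAT) screening: perturb each input factor around a baseline and record the relative change in the model output. The toy model and perturbation size below are assumptions; variance-based methods such as Sobol indices, which the review also covers, go well beyond this.

```python
import numpy as np

def model(x):
    """Toy numerical model with three input factors."""
    return x[0] ** 2 + 2.0 * x[1] + 0.1 * x[2]

baseline = np.array([1.0, 1.0, 1.0])
y0 = model(baseline)

sensitivities = {}
for i in range(len(baseline)):
    x = baseline.copy()
    x[i] *= 1.10                                   # +10% perturbation of factor i
    sensitivities[f"x{i}"] = (model(x) - y0) / y0  # relative change in output

print(sensitivities)  # the largest entry points to the locally dominant factor
```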

Journal ArticleDOI
31 Aug 2018-Science
TL;DR: The basic mechanisms that set the CRISPR-Cas toolkit apart from other programmable gene-editing technologies are described, highlighting the diverse and naturally evolved systems now functionalized as biotechnologies.
Abstract: The diversity, modularity, and efficacy of CRISPR-Cas systems are driving a biotechnological revolution. RNA-guided Cas enzymes have been adopted as tools to manipulate the genomes of cultured cells, animals, and plants, accelerating the pace of fundamental research and enabling clinical and agricultural breakthroughs. We describe the basic mechanisms that set the CRISPR-Cas toolkit apart from other programmable gene-editing technologies, highlighting the diverse and naturally evolved systems now functionalized as biotechnologies. We discuss the rapidly evolving landscape of CRISPR-Cas applications, from gene editing to transcriptional regulation, imaging, and diagnostics. Continuing functional dissection and an expanding landscape of applications position CRISPR-Cas tools at the cutting edge of nucleic acid manipulation that is rewriting biology.