
Showing papers by "Imperial College London" published in 2019


Proceedings ArticleDOI
15 Jun 2019
TL;DR: This paper presents arguably the most extensive experimental evaluation against all recent state-of-the-art face recognition methods on ten face recognition benchmarks, and shows that ArcFace consistently outperforms the state of the art and can be easily implemented with negligible computational overhead.
Abstract: One of the main challenges in feature learning using Deep Convolutional Neural Networks (DCNNs) for large-scale face recognition is the design of appropriate loss functions that can enhance the discriminative power. Centre loss penalises the distance between deep features and their corresponding class centres in the Euclidean space to achieve intra-class compactness. SphereFace assumes that the linear transformation matrix in the last fully connected layer can be used as a representation of the class centres in the angular space and therefore penalises the angles between deep features and their corresponding weights in a multiplicative way. Recently, a popular line of research is to incorporate margins in well-established loss functions in order to maximise face class separability. In this paper, we propose an Additive Angular Margin Loss (ArcFace) to obtain highly discriminative features for face recognition. The proposed ArcFace has a clear geometric interpretation due to its exact correspondence to geodesic distance on a hypersphere. We present arguably the most extensive experimental evaluation against all recent state-of-the-art face recognition methods on ten face recognition benchmarks which includes a new large-scale image database with trillions of pairs and a large-scale video dataset. We show that ArcFace consistently outperforms the state of the art and can be easily implemented with negligible computational overhead. To facilitate future research, the code has been made available.
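The core operation is a one-line change to the softmax logits: add a fixed angular margin to the angle between each feature and its target-class weight. Below is a minimal PyTorch-style sketch of that idea; the module name is illustrative, and the defaults s=64 and m=0.5 follow values reported in the paper rather than its released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcMarginHead(nn.Module):
    """Sketch of an additive angular margin head in the spirit of ArcFace.

    Logits are cosines between L2-normalised features and class weights;
    the margin m is added to the target-class angle before scaling by s.
    """
    def __init__(self, feat_dim, num_classes, s=64.0, m=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.s, self.m = s, m

    def forward(self, features, labels):
        # cosine of the angle between normalised embeddings and class centres
        cos = F.linear(F.normalize(features), F.normalize(self.weight))
        theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
        # apply the additive angular margin only to the target class
        target = F.one_hot(labels, cos.size(1)).bool()
        logits = torch.where(target, torch.cos(theta + self.m), cos)
        return self.s * logits

# training then reduces to: F.cross_entropy(head(embeddings, labels), labels)
```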

4,312 citations


Journal ArticleDOI
TL;DR: This work proposes a new neural network module suitable for CNN-based high-level tasks on point clouds, including classification and segmentation called EdgeConv, which acts on graphs dynamically computed in each layer of the network.
Abstract: Point clouds provide a flexible geometric representation suitable for countless applications in computer graphics; they also comprise the raw output of most 3D data acquisition devices. While hand-designed features on point clouds have long been proposed in graphics and vision, the recent overwhelming success of convolutional neural networks (CNNs) for image analysis suggests the value of adapting insight from CNNs to the point cloud world. Point clouds inherently lack topological information, so designing a model to recover topology can enrich the representation power of point clouds. To this end, we propose a new neural network module dubbed EdgeConv, suitable for CNN-based high-level tasks on point clouds, including classification and segmentation. EdgeConv acts on graphs dynamically computed in each layer of the network. It is differentiable and can be plugged into existing architectures. Compared to existing modules operating in extrinsic space or treating each point independently, EdgeConv has several appealing properties: it incorporates local neighborhood information; it can be stacked to learn global shape properties; and in multi-layer systems affinity in feature space captures semantic characteristics over potentially long distances in the original embedding. We show the performance of our model on standard benchmarks, including ModelNet40, ShapeNetPart, and S3DIS.
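Since the abstract describes EdgeConv only operationally, a compact rendering may help: recompute a kNN graph from the layer's own features (hence "dynamic"), form the edge feature [x_i, x_j - x_i], apply a shared MLP, and max-aggregate over neighbours. This PyTorch sketch is an independent reconstruction with arbitrary layer sizes, not the authors' released implementation.

```python
import torch
import torch.nn as nn

def knn(x, k):
    """Indices of the k nearest neighbours of each point (self excluded)."""
    dist = torch.cdist(x, x)                                  # (B, N, N)
    return dist.topk(k + 1, largest=False).indices[..., 1:]   # (B, N, k)

class EdgeConv(nn.Module):
    def __init__(self, in_dim, out_dim, k=20):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Linear(2 * in_dim, out_dim), nn.ReLU())

    def forward(self, x):                              # x: (B, N, C)
        idx = knn(x, self.k)                           # graph rebuilt per layer
        batch = torch.arange(x.size(0), device=x.device)[:, None, None]
        neighbours = x[batch, idx]                     # (B, N, k, C)
        centre = x.unsqueeze(2).expand_as(neighbours)
        edge = torch.cat([centre, neighbours - centre], dim=-1)
        return self.mlp(edge).max(dim=2).values        # (B, N, out_dim)
```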

3,727 citations


Journal ArticleDOI
TL;DR: In patients with type 2 diabetes and kidney disease, the risk of kidney failure and cardiovascular events was lower in the canagliflozin group than in the placebo group at a median follow-up of 2.62 years.
Abstract: Background Type 2 diabetes mellitus is the leading cause of kidney failure worldwide, but few effective long-term treatments are available. In cardiovascular trials of inhibitors of sodium...

3,233 citations


Journal ArticleDOI
TL;DR: A series of major new developments in the BEAST 2 core platform and model hierarchy that have occurred since the first release of the software, culminating in the recent 2.5 release, are described.
Abstract: Elaboration of Bayesian phylogenetic inference methods has continued at pace in recent years with major new advances in nearly all aspects of the joint modelling of evolutionary data. It is increasingly appreciated that some evolutionary questions can only be adequately answered by combining evidence from multiple independent sources of data, including genome sequences, sampling dates, phenotypic data, radiocarbon dates, fossil occurrences, and biogeographic range information among others. Including all relevant data into a single joint model is very challenging both conceptually and computationally. Advanced computational software packages that allow robust development of compatible (sub-)models which can be composed into a full model hierarchy have played a key role in these developments. Developing such software frameworks is increasingly a major scientific activity in its own right, and comes with specific challenges, from practical software design, development and engineering challenges to statistical and conceptual modelling challenges. BEAST 2 is one such computational software platform, and was first announced over 4 years ago. Here we describe a series of major new developments in the BEAST 2 core platform and model hierarchy that have occurred since the first release of the software, culminating in the recent 2.5 release.

2,045 citations


Journal ArticleDOI
TL;DR: This article presents a comprehensive review of the potential role that hydrogen could play in the provision of electricity, heat, industry, transport and energy storage in a low-carbon energy system, together with an assessment of the status of hydrogen in being able to fulfil that potential.
Abstract: Hydrogen technologies have experienced cycles of excessive expectations followed by disillusion. Nonetheless, a growing body of evidence suggests these technologies form an attractive option for the deep decarbonisation of global energy systems, and that recent improvements in their cost and performance point towards economic viability as well. This paper is a comprehensive review of the potential role that hydrogen could play in the provision of electricity, heat, industry, transport and energy storage in a low-carbon energy system, and an assessment of the status of hydrogen in being able to fulfil that potential. The picture that emerges is one of qualified promise: hydrogen is well established in certain niches such as forklift trucks, while mainstream applications are now forthcoming. Hydrogen vehicles are available commercially in several countries, and 225 000 fuel cell home heating systems have been sold. This represents a step change from the situation of only five years ago. This review shows that challenges around cost and performance remain, and considerable improvements are still required for hydrogen to become truly competitive. But such competitiveness in the medium-term future no longer seems an unrealistic prospect, which fully justifies the growing interest and policy support for these technologies around the world.

1,938 citations


Journal ArticleDOI
TL;DR: This work demonstrates progress towards the in-situ applicability of EMMARM, which provides real-time information about concrete mechanical properties such as E-modulus and compressive strength.

1,480 citations


Journal ArticleDOI
TL;DR: This paper aims to provide a detailed survey of different indoor localization techniques, such as angle of arrival (AoA), time of flight (ToF), return time of flight (RTOF), and received signal strength (RSS), based on technologies that have been proposed in the literature.
Abstract: Indoor localization has recently witnessed an increase in interest, due to the potential wide range of services it can provide by leveraging the Internet of Things (IoT) and ubiquitous connectivity. Different techniques, wireless technologies and mechanisms have been proposed in the literature to provide indoor localization services in order to improve the services provided to the users. However, there is a lack of an up-to-date survey paper that incorporates some of the recently proposed accurate and reliable localization systems. In this paper, we aim to provide a detailed survey of different indoor localization techniques, such as angle of arrival (AoA), time of flight (ToF), return time of flight (RTOF), and received signal strength (RSS), based on technologies such as WiFi, radio frequency identification (RFID), ultra-wideband (UWB), and Bluetooth, and of the systems that have been proposed in the literature. This paper primarily discusses localization and positioning of human users and their devices. We highlight the strengths of the existing systems proposed in the literature. In contrast with the existing surveys, we also evaluate different systems from the perspective of energy efficiency, availability, cost, reception range, latency, scalability, and tracking accuracy. Rather than comparing the technologies or techniques, we compare the localization systems and summarize their working principle. We also discuss remaining challenges to accurate indoor localization.
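To make the ranging idea concrete: a ToF measurement t to an anchor gives a distance d = c·t, and a position fix follows from distances to a few anchors with known coordinates. A NumPy sketch of the standard linearised least-squares trilateration (not code from the survey; the anchor layout and target are made up):

```python
import numpy as np

def trilaterate(anchors, distances):
    """Least-squares position from ranges to known anchors.

    Subtracting the first range equation ||p - a_0||^2 = d_0^2 from the
    others linearises the system into A p = b.
    """
    a0, d0 = anchors[0], distances[0]
    A = 2 * (anchors[1:] - a0)
    b = (d0**2 - distances[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# three anchors at known positions; ranges as would come from ToF (d = c * t)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
true_pos = np.array([3.0, 4.0])
distances = np.linalg.norm(anchors - true_pos, axis=1)
print(trilaterate(anchors, distances))   # ~ [3. 4.]
```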

1,447 citations


Journal ArticleDOI
TL;DR: In this paper, the authors consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place, and propose a control algorithm that determines the best tradeoff between local update and global parameter aggregation to minimize the loss function under a given resource budget.
Abstract: Emerging technologies and applications including Internet of Things, social networking, and crowd-sourcing generate large amounts of data at the network edge. Machine learning models are often built from the collected data, to enable the detection, classification, and prediction of future events. Due to bandwidth, storage, and privacy concerns, it is often impractical to send all the data to a centralized location. In this paper, we consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place. Our focus is on a generic class of machine learning models that are trained using gradient-descent-based approaches. We analyze the convergence bound of distributed gradient descent from a theoretical point of view, based on which we propose a control algorithm that determines the best tradeoff between local update and global parameter aggregation to minimize the loss function under a given resource budget. The performance of the proposed algorithm is evaluated via extensive experiments with real datasets, both on a networked prototype system and in a larger-scale simulated environment. The experimentation results show that our proposed approach performs near to the optimum with various machine learning models and different data distributions.
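The structure that the paper's control algorithm tunes — tau local gradient steps on each node between global aggregations — can be sketched briefly. This toy NumPy version fixes tau and uses a least-squares objective as a stand-in for each node's loss; the paper's actual contribution, choosing tau adaptively under a resource budget, is omitted here.

```python
import numpy as np

def local_sgd(w, node, tau, lr=0.1):
    """tau local gradient steps on one node's quadratic loss 0.5*||Xw - y||^2."""
    X, y = node
    for _ in range(tau):
        w = w - lr * X.T @ (X @ w - y) / len(y)
    return w

def federated_round(w_global, nodes, tau):
    """One round: tau local updates per node, then a size-weighted average."""
    sizes = [len(y) for _, y in nodes]
    updates = [local_sgd(w_global.copy(), node, tau) for node in nodes]
    return np.average(updates, axis=0, weights=sizes)

rng = np.random.default_rng(0)
dim = 5
w_true = rng.normal(size=dim)

def make_node(n=40):
    X = rng.normal(size=(n, dim))
    return X, X @ w_true + 0.01 * rng.normal(size=n)

nodes = [make_node() for _ in range(3)]   # three edge nodes, non-identical data

w = np.zeros(dim)
for _ in range(50):            # larger tau = fewer (communication-costly) rounds
    w = federated_round(w, nodes, tau=5)
print(np.linalg.norm(w - w_true))         # small residual error
```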

1,441 citations


Journal ArticleDOI
TL;DR: Current prevalence and trends of insufficient physical activity among school-going adolescents aged 11–17 years are described by country, region, and globally, and urgent scaling up of the implementation of known effective policies and programmes is needed.

1,293 citations


Journal ArticleDOI
21 Oct 2019-Nature
TL;DR: The FSP1–CoQ10–NAD(P)H pathway exists as a stand-alone parallel system, which co-operates with GPX4 and glutathione to suppress phospholipid peroxidation and ferroptosis in cells.
Abstract: Ferroptosis is an iron-dependent form of necrotic cell death marked by oxidative damage to phospholipids1,2. To date, ferroptosis has been thought to be controlled only by the phospholipid hydroperoxide-reducing enzyme glutathione peroxidase 4 (GPX4)3,4 and radical-trapping antioxidants5,6. However, elucidation of the factors that underlie the sensitivity of a given cell type to ferroptosis7 is crucial to understand the pathophysiological role of ferroptosis and how it may be exploited for the treatment of cancer. Although metabolic constraints8 and phospholipid composition9,10 contribute to ferroptosis sensitivity, no cell-autonomous mechanisms have been identified that account for the resistance of cells to ferroptosis. Here we used an expression cloning approach to identify genes in human cancer cells that are able to complement the loss of GPX4. We found that the flavoprotein apoptosis-inducing factor mitochondria-associated 2 (AIFM2) is a previously unrecognized anti-ferroptotic gene. AIFM2, which we renamed ferroptosis suppressor protein 1 (FSP1) and which was initially described as a pro-apoptotic gene11, confers protection against ferroptosis elicited by GPX4 deletion. We further demonstrate that the suppression of ferroptosis by FSP1 is mediated by ubiquinone (also known as coenzyme Q10, CoQ10): the reduced form, ubiquinol, traps lipid peroxyl radicals that mediate lipid peroxidation, whereas FSP1 catalyses the regeneration of CoQ10 using NAD(P)H. Pharmacological targeting of FSP1 strongly synergizes with GPX4 inhibitors to trigger ferroptosis in a number of cancer entities. In conclusion, the FSP1–CoQ10–NAD(P)H pathway exists as a stand-alone parallel system, which co-operates with GPX4 and glutathione to suppress phospholipid peroxidation and ferroptosis. In the absence of GPX4, FSP1 regenerates ubiquinol from the oxidized form, ubiquinone, using NAD(P)H and suppresses phospholipid peroxidation and ferroptosis in cells.

1,256 citations


Journal ArticleDOI
31 Oct 2019-Cell
TL;DR: A consensus from the International Cell Senescence Association (ICSA) is presented, defining and discussing key cellular and molecular features of senescence and offering recommendations on how to use them as biomarkers.

Journal ArticleDOI
TL;DR: In this 8th release of JASPAR, the CORE collection has been expanded with 245 new PFMs and 156 updated PFMs, and the genomic tracks, inference tool, and TF-binding profile similarity clusters have been updated.
Abstract: JASPAR (http://jaspar.genereg.net) is an open-access database of curated, non-redundant transcription factor (TF)-binding profiles stored as position frequency matrices (PFMs) for TFs across multiple species in six taxonomic groups. In this 8th release of JASPAR, the CORE collection has been expanded with 245 new PFMs (169 for vertebrates, 42 for plants, 17 for nematodes, 10 for insects, and 7 for fungi), and 156 PFMs were updated (125 for vertebrates, 28 for plants and 3 for insects). These new profiles represent an 18% expansion compared to the previous release. JASPAR 2020 comes with a novel collection of unvalidated TF-binding profiles for which our curators did not find orthogonal supporting evidence in the literature. This collection has a dedicated web form to engage the community in the curation of unvalidated TF-binding profiles. Moreover, we created a Q&A forum to ease the communication between the user community and JASPAR curators. Finally, we updated the genomic tracks, inference tool, and TF-binding profile similarity clusters. All the data is available through the JASPAR website, its associated RESTful API, and through the JASPAR2020 R/Bioconductor package.
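As a worked example of the stored format: a PFM of nucleotide counts becomes a position weight matrix by normalising each column (with a pseudocount) and taking log-odds against a background model. The counts below are invented for illustration, not a real JASPAR profile, and the pseudocount value is a common convention rather than JASPAR's prescription.

```python
import numpy as np

# illustrative position frequency matrix; rows are A, C, G, T
pfm = np.array([
    [ 4, 19,  0,  0, 18,  1],
    [16,  0, 20,  0,  1,  2],
    [ 0,  1,  0, 20,  0, 17],
    [ 0,  0,  0,  0,  1,  0],
], dtype=float)

background = np.full(4, 0.25)      # uniform background model
pseudocount = 0.8                  # avoids log(0) for unseen bases

# counts -> probabilities -> log2 odds against the background
probs = (pfm + pseudocount * background[:, None]) / (pfm.sum(axis=0) + pseudocount)
pwm = np.log2(probs / background[:, None])

def score(seq, pwm, index={'A': 0, 'C': 1, 'G': 2, 'T': 3}):
    """Log-odds score of a sequence the same length as the motif."""
    return sum(pwm[index[base], i] for i, base in enumerate(seq))

print(score("CACGAG", pwm))        # the consensus sequence scores highest
```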

Journal ArticleDOI
TL;DR: The STROCSS 2019 guideline is presented as a considered update, with improved content and readability, to strengthen the reporting of cohort, cross-sectional and case-control studies in surgery.

Journal ArticleDOI
27 Sep 2019-Gut
TL;DR: Comprehensive up-to-date guidance is provided regarding indications for, initiation and monitoring of immunosuppressive therapies, nutrition interventions, pre-, peri- and postoperative management, as well as structure and function of the multidisciplinary team and integration between primary and secondary care.
Abstract: Ulcerative colitis and Crohn’s disease are the principal forms of inflammatory bowel disease. Both represent chronic inflammation of the gastrointestinal tract, which displays heterogeneity in inflammatory and symptomatic burden between patients and within individuals over time. Optimal management relies on understanding and tailoring evidence-based interventions by clinicians in partnership with patients. This guideline for management of inflammatory bowel disease in adults over 16 years of age was developed by Stakeholders representing UK physicians (British Society of Gastroenterology), surgeons (Association of Coloproctology of Great Britain and Ireland), specialist nurses (Royal College of Nursing), paediatricians (British Society of Paediatric Gastroenterology, Hepatology and Nutrition), dietitians (British Dietetic Association), radiologists (British Society of Gastrointestinal and Abdominal Radiology), general practitioners (Primary Care Society for Gastroenterology) and patients (Crohn’s and Colitis UK). A systematic review of 88 247 publications and a Delphi consensus process involving 81 multidisciplinary clinicians and patients was undertaken to develop 168 evidence- and expert opinion-based recommendations for pharmacological, non-pharmacological and surgical interventions, as well as optimal service delivery in the management of both ulcerative colitis and Crohn’s disease. Comprehensive up-to-date guidance is provided regarding indications for, initiation and monitoring of immunosuppressive therapies, nutrition interventions, pre-, peri- and postoperative management, as well as structure and function of the multidisciplinary team and integration between primary and secondary care. Twenty research priorities to inform future clinical management are presented, alongside objective measurement of priority importance, determined by 2379 electronic survey responses from individuals living with ulcerative colitis and Crohn’s disease, including patients, their families and friends.

Journal ArticleDOI
Peter A. R. Ade, James E. Aguirre, Z. Ahmed, Simone Aiola, +276 more (53 institutions)
TL;DR: The Simons Observatory (SO) is a new cosmic microwave background experiment being built on Cerro Toco in Chile, due to begin observations in the early 2020s.
Abstract: The Simons Observatory (SO) is a new cosmic microwave background experiment being built on Cerro Toco in Chile, due to begin observations in the early 2020s. We describe the scientific goals of the experiment, motivate the design, and forecast its performance. SO will measure the temperature and polarization anisotropy of the cosmic microwave background in six frequency bands centered at: 27, 39, 93, 145, 225 and 280 GHz. The initial configuration of SO will have three small-aperture 0.5-m telescopes and one large-aperture 6-m telescope, with a total of 60,000 cryogenic bolometers. Our key science goals are to characterize the primordial perturbations, measure the number of relativistic species and the mass of neutrinos, test for deviations from a cosmological constant, improve our understanding of galaxy evolution, and constrain the duration of reionization. The small aperture telescopes will target the largest angular scales observable from Chile, mapping ≈ 10% of the sky to a white noise level of 2 μK-arcmin in combined 93 and 145 GHz bands, to measure the primordial tensor-to-scalar ratio, r, at a target level of σ(r)=0.003. The large aperture telescope will map ≈ 40% of the sky at arcminute angular resolution to an expected white noise level of 6 μK-arcmin in combined 93 and 145 GHz bands, overlapping with the majority of the Large Synoptic Survey Telescope sky region and partially with the Dark Energy Spectroscopic Instrument. With up to an order of magnitude lower polarization noise than maps from the Planck satellite, the high-resolution sky maps will constrain cosmological parameters derived from the damping tail, gravitational lensing of the microwave background, the primordial bispectrum, and the thermal and kinematic Sunyaev-Zel'dovich effects, and will aid in delensing the large-angle polarization signal to measure the tensor-to-scalar ratio. The survey will also provide a legacy catalog of 16,000 galaxy clusters and more than 20,000 extragalactic sources.

Journal ArticleDOI
TL;DR: This paper bridges the gap between deep learning and mobile and wireless networking research by presenting a comprehensive survey of the crossovers between the two areas, and provides an encyclopedic review of mobile and wireless networking research based on deep learning, categorized by domain.
Abstract: The rapid uptake of mobile devices and the rising popularity of mobile applications and services pose unprecedented demands on mobile and wireless networking infrastructure. Upcoming 5G systems are evolving to support exploding mobile traffic volumes, real-time extraction of fine-grained analytics, and agile management of network resources, so as to maximize user experience. Fulfilling these tasks is challenging, as mobile environments are increasingly complex, heterogeneous, and evolving. One potential solution is to resort to advanced machine learning techniques, in order to help manage the rise in data volumes and algorithm-driven applications. The recent success of deep learning underpins new and powerful tools that tackle problems in this space. In this paper, we bridge the gap between deep learning and mobile and wireless networking research, by presenting a comprehensive survey of the crossovers between the two areas. We first briefly introduce essential background and state-of-the-art in deep learning techniques with potential applications to networking. We then discuss several techniques and platforms that facilitate the efficient deployment of deep learning onto mobile systems. Subsequently, we provide an encyclopedic review of mobile and wireless networking research based on deep learning, which we categorize by different domains. Drawing from our experience, we discuss how to tailor deep learning to mobile environments. We complete this survey by pinpointing current challenges and open future directions for research.

Journal ArticleDOI
TL;DR: Experimental results show that attention gate (AG) models consistently improve the prediction performance of the base architectures across different datasets and training sizes while preserving computational efficiency.

Journal ArticleDOI
13 Dec 2019-Science
TL;DR: The first integrated global-scale intergovernmental assessment of the status, trends, and future of the links between people and nature provides an unprecedented picture of the extent of the mutual dependence between people and nature, the breadth and depth of the ongoing and impending crisis, and the interconnectedness among sectors and regions.
Abstract: The human impact on life on Earth has increased sharply since the 1970s, driven by the demands of a growing population with rising average per capita income. Nature is currently supplying more materials than ever before, but this has come at the high cost of unprecedented global declines in the extent and integrity of ecosystems, distinctness of local ecological communities, abundance and number of wild species, and the number of local domesticated varieties. Such changes reduce vital benefits that people receive from nature and threaten the quality of life of future generations. Both the benefits of an expanding economy and the costs of reducing nature's benefits are unequally distributed. The fabric of life on which we all depend-nature and its contributions to people-is unravelling rapidly. Despite the severity of the threats and lack of enough progress in tackling them to date, opportunities exist to change future trajectories through transformative action. Such action must begin immediately, however, and address the root economic, social, and technological causes of nature's deterioration.

Journal ArticleDOI
07 Nov 2019-Nature
TL;DR: The capture and use of carbon dioxide to create valuable products might lower the net costs of reducing emissions or removing carbon dioxide from the atmosphere, but barriers to implementation remain substantial and resource constraints prevent the simultaneous deployment of all pathways.
Abstract: The capture and use of carbon dioxide to create valuable products might lower the net costs of reducing emissions or removing carbon dioxide from the atmosphere. Here we review ten pathways for the utilization of carbon dioxide. Pathways that involve chemicals, fuels and microalgae might reduce emissions of carbon dioxide but have limited potential for its removal, whereas pathways that involve construction materials can both utilize and remove carbon dioxide. Land-based pathways can increase agricultural output and remove carbon dioxide. Our assessment suggests that each pathway could scale to over 0.5 gigatonnes of carbon dioxide utilization annually. However, barriers to implementation remain substantial and resource constraints prevent the simultaneous deployment of all pathways. Ten pathways for the utilization of carbon dioxide are reviewed, considering their potential scale, economics and barriers to implementation.

Journal ArticleDOI
TL;DR: The adverse-event profile of nintedanib observed in this trial was similar to that observed in patients with idiopathic pulmonary fibrosis; gastrointestinal adverse events, including diarrhea, were more common with nintinganib than with placebo.
Abstract: Background Interstitial lung disease (ILD) is a common manifestation of systemic sclerosis and a leading cause of systemic sclerosis-related death. Nintedanib, a tyrosine kinase inhibitor, has been shown to have antifibrotic and antiinflammatory effects in preclinical models of systemic sclerosis and ILD. Methods We conducted a randomized, double-blind, placebo-controlled trial to investigate the efficacy and safety of nintedanib in patients with ILD associated with systemic sclerosis. Patients who had systemic sclerosis with an onset of the first non-Raynaud's symptom within the past 7 years and a high-resolution computed tomographic scan that showed fibrosis affecting at least 10% of the lungs were randomly assigned, in a 1:1 ratio, to receive 150 mg of nintedanib, administered orally twice daily, or placebo. The primary end point was the annual rate of decline in forced vital capacity (FVC), assessed over a 52-week period. Key secondary end points were absolute changes from baseline in the modified Rodnan skin score and in the total score on the St. George's Respiratory Questionnaire (SGRQ) at week 52. Results A total of 576 patients received at least one dose of nintedanib or placebo; 51.9% had diffuse cutaneous systemic sclerosis, and 48.4% were receiving mycophenolate at baseline. In the primary end-point analysis, the adjusted annual rate of change in FVC was -52.4 ml per year in the nintedanib group and -93.3 ml per year in the placebo group (difference, 41.0 ml per year; 95% confidence interval [CI], 2.9 to 79.0; P = 0.04). Sensitivity analyses based on multiple imputation for missing data yielded P values for the primary end point ranging from 0.06 to 0.10. The change from baseline in the modified Rodnan skin score and the total score on the SGRQ at week 52 did not differ significantly between the trial groups, with differences of -0.21 (95% CI, -0.94 to 0.53; P = 0.58) and 1.69 (95% CI, -0.73 to 4.12 [not adjusted for multiple comparisons]), respectively. Diarrhea, the most common adverse event, was reported in 75.7% of the patients in the nintedanib group and in 31.6% of those in the placebo group. Conclusions Among patients with ILD associated with systemic sclerosis, the annual rate of decline in FVC was lower with nintedanib than with placebo; no clinical benefit of nintedanib was observed for other manifestations of systemic sclerosis. The adverse-event profile of nintedanib observed in this trial was similar to that observed in patients with idiopathic pulmonary fibrosis; gastrointestinal adverse events, including diarrhea, were more common with nintedanib than with placebo. (Funded by Boehringer Ingelheim; SENSCIS ClinicalTrials.gov number, NCT02597933.).

Journal ArticleDOI
22 May 2019-Nature
TL;DR: A protocol for the electrochemical reduction of nitrogen to ammonia enables isotope-sensitive quantification of the ammonia produced and the identification and removal of contaminants, and should help to prevent false positives from appearing in the literature.
Abstract: The electrochemical synthesis of ammonia from nitrogen under mild conditions using renewable electricity is an attractive alternative1–4 to the energy-intensive Haber–Bosch process, which dominates industrial ammonia production. However, there are considerable scientific and technical challenges5,6 facing the electrochemical alternative, and most experimental studies reported so far have achieved only low selectivities and conversions. The amount of ammonia produced is usually so small that it cannot be firmly attributed to electrochemical nitrogen fixation7–9 rather than contamination from ammonia that is either present in air, human breath or ion-conducting membranes9, or generated from labile nitrogen-containing compounds (for example, nitrates, amines, nitrites and nitrogen oxides) that are typically present in the nitrogen gas stream10, in the atmosphere or even in the catalyst itself. Although these sources of experimental artefacts are beginning to be recognized and managed11,12, concerted efforts to develop effective electrochemical nitrogen reduction processes would benefit from benchmarking protocols for the reaction and from a standardized set of control experiments designed to identify and then eliminate or quantify the sources of contamination. Here we propose a rigorous procedure using 15N2 that enables us to reliably detect and quantify the electrochemical reduction of nitrogen to ammonia. We demonstrate experimentally the importance of various sources of contamination, and show how to remove labile nitrogen-containing compounds from the nitrogen gas as well as how to perform quantitative isotope measurements with cycling of 15N2 gas to reduce both contamination and the cost of isotope measurements. Following this protocol, we find that no ammonia is produced when using the most promising pure-metal catalysts for this reaction in aqueous media, and we successfully confirm and quantify ammonia synthesis using lithium electrodeposition in tetrahydrofuran13. The use of this rigorous protocol should help to prevent false positives from appearing in the literature, thus enabling the field to focus on viable pathways towards the practical electrochemical reduction of nitrogen to ammonia.
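Quantification in such experiments is usually reported as a faradaic efficiency: the fraction of the passed charge accounted for by the detected ammonia, at three electrons per NH3 molecule. A one-function sketch of this standard electrochemistry relation (the numbers are invented, not results from the paper):

```python
FARADAY = 96485.0   # C per mole of electrons

def faradaic_efficiency(n_nh3_mol, charge_coulomb):
    """Fraction of passed charge converted to NH3 (3 e- per NH3)."""
    return 3 * FARADAY * n_nh3_mol / charge_coulomb

# e.g. 0.2 micromole of NH3 detected after passing 10 C of charge
print(f"{faradaic_efficiency(0.2e-6, 10.0):.2%}")   # ~0.58%
```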

Journal ArticleDOI
TL;DR: The 2019 report of The Lancet Countdown on health and climate change: ensuring that the health of a child born today is not defined by a changing climate.

Journal ArticleDOI
TL;DR: In this paper, the authors identify three key themes related to digitization (openness, affordances, and generativity) and outline broad research issues relating to each, suggesting that such themes, being innate to digital technologies, could serve as a common conceptual platform that allows for connections between issues at different levels as well as the integration of ideas from different disciplines/areas.

Posted Content
TL;DR: This work proposes pre-training large Transformer-based encoder-decoder models on massive text corpora with a new self-supervised objective, PEGASUS, and demonstrates it achieves state-of-the-art performance on all 12 downstream datasets measured by ROUGE scores.
Abstract: Recent work pre-training Transformers with self-supervised objectives on large text corpora has shown great success when fine-tuned on downstream NLP tasks including text summarization. However, pre-training objectives tailored for abstractive text summarization have not been explored. Furthermore, there is a lack of systematic evaluation across diverse domains. In this work, we propose pre-training large Transformer-based encoder-decoder models on massive text corpora with a new self-supervised objective. In PEGASUS, important sentences are removed/masked from an input document and are generated together as one output sequence from the remaining sentences, similar to an extractive summary. We evaluated our best PEGASUS model on 12 downstream summarization tasks spanning news, science, stories, instructions, emails, patents, and legislative bills. Experiments demonstrate it achieves state-of-the-art performance on all 12 downstream datasets measured by ROUGE scores. Our model also shows surprising performance on low-resource summarization, surpassing previous state-of-the-art results on 6 datasets with only 1000 examples. Finally, we validated our results using human evaluation and show that our model summaries achieve human performance on multiple datasets.
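A rough sketch of the gap-sentence idea: score each sentence against the rest of the document, mask the top-scoring ones, and train the model to generate them from what remains. The unigram F1 below is a bare-bones stand-in for ROUGE-1, and the 30% selection ratio is illustrative; the paper compares several selection strategies.

```python
from collections import Counter

def rouge1_f(candidate, reference):
    """Unigram-overlap F1, a simplified stand-in for ROUGE-1."""
    c, r = Counter(candidate.split()), Counter(reference.split())
    overlap = sum((c & r).values())
    if overlap == 0:
        return 0.0
    p, rec = overlap / sum(c.values()), overlap / sum(r.values())
    return 2 * p * rec / (p + rec)

def gap_sentences(sentences, ratio=0.3):
    """Mask the sentences most representative of the rest of the document."""
    scored = [(rouge1_f(s, " ".join(sentences[:i] + sentences[i + 1:])), i)
              for i, s in enumerate(sentences)]
    k = max(1, int(len(sentences) * ratio))
    picked = {i for _, i in sorted(scored, reverse=True)[:k]}
    masked = " ".join("<MASK>" if i in picked else s
                      for i, s in enumerate(sentences))
    target = " ".join(sentences[i] for i in sorted(picked))
    return masked, target

doc = ["the storm closed three highways in the north .",
       "officials said the storm was the worst in a decade .",
       "the storm dropped two feet of snow overnight .",
       "schools will reopen on monday ."]
masked, target = gap_sentences(doc)
print(masked)   # document with the pseudo-summary sentence masked out
print(target)   # the generation target
```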

Journal ArticleDOI
TL;DR: This review explores the structure-property relationships of a library of non-fullerene acceptors, highlighting the important chemical modifications that have led to progress in the field and provides an outlook for future innovations in electron acceptors for use in organic photovoltaics.
Abstract: Fullerenes have formed an integral part of high performance organic solar cells over the last 20 years, however their inherent limitations in terms of synthetic flexibility, cost and stability have acted as a motivation to develop replacements; the so-called non-fullerene electron acceptors. A rapid evolution of such materials has taken place over the last few years, yielding a number of promising candidates that can exceed the device performance of fullerenes and provide opportunities to improve upon the stability and processability of organic solar cells. In this review we explore the structure–property relationships of a library of non-fullerene acceptors, highlighting the important chemical modifications that have led to progress in the field and provide an outlook for future innovations in electron acceptors for use in organic photovoltaics.

Journal ArticleDOI
TL;DR: This protocol provides an overview of all new features of the COBRA Toolbox and can be adapted to generate and analyze constraint-based models in a wide variety of scenarios.
Abstract: Constraint-based reconstruction and analysis (COBRA) provides a molecular mechanistic framework for integrative analysis of experimental molecular systems biology data and quantitative prediction of physicochemically and biochemically feasible phenotypic states. The COBRA Toolbox is a comprehensive desktop software suite of interoperable COBRA methods. It has found widespread application in biology, biomedicine, and biotechnology because its functions can be flexibly combined to implement tailored COBRA protocols for any biochemical network. This protocol is an update to the COBRA Toolbox v.1.0 and v.2.0. Version 3.0 includes new methods for quality-controlled reconstruction, modeling, topological analysis, strain and experimental design, and network visualization, as well as network integration of chemoinformatic, metabolomic, transcriptomic, proteomic, and thermochemical data. New multi-lingual code integration also enables an expansion in COBRA application scope via high-precision, high-performance, and nonlinear numerical optimization solvers for multi-scale, multi-cellular, and reaction kinetic modeling, respectively. This protocol provides an overview of all these new features and can be adapted to generate and analyze constraint-based models in a wide variety of scenarios. The COBRA Toolbox v.3.0 provides an unparalleled depth of COBRA methods.
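At the core of most COBRA methods is flux balance analysis, a linear program: maximise an objective flux c^T v subject to steady-state mass balance S v = 0 and flux bounds. The Toolbox itself is MATLAB-based; this SciPy sketch of the underlying LP uses an invented three-reaction network purely for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# toy network: uptake -> A (v0);  A -> B (v1);  B -> biomass (v2)
S = np.array([
    [1, -1,  0],    # mass balance on metabolite A
    [0,  1, -1],    # mass balance on metabolite B
])
c = np.array([0, 0, 1])                  # objective: maximise biomass flux v2
bounds = [(0, 10), (0, 5), (0, None)]    # v1 is capacity-limited at 5

res = linprog(-c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)  # linprog minimises
print(res.x)   # optimal fluxes [5, 5, 5]: the v1 bound is the bottleneck
```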

Journal ArticleDOI
01 Aug 2019
TL;DR: Robust model-based charging optimisation strategies are identified as key to enabling fast charging in all conditions, with a particular focus on techniques capable of achieving high speeds and good temperature homogeneities.
Abstract: In recent years, lithium-ion batteries have become the battery technology of choice for portable devices, electric vehicles and grid storage. While increasing numbers of car manufacturers are introducing electrified models into their offering, range anxiety and the length of time required to recharge the batteries are still a common concern. The high currents needed to accelerate the charging process have been known to reduce energy efficiency and cause accelerated capacity and power fade. Fast charging is a multiscale problem, therefore insights from atomic to system level are required to understand and improve fast charging performance. The present paper reviews the literature on the physical phenomena that limit battery charging speeds, the degradation mechanisms that commonly result from charging at high currents, and the approaches that have been proposed to address these issues. Special attention is paid to low temperature charging. Alternative fast charging protocols are presented and critically assessed. Safety implications are explored, including the potential influence of fast charging on thermal runaway characteristics. Finally, knowledge gaps are identified and recommendations are made for the direction of future research. The need to develop reliable onboard methods to detect lithium plating and mechanical degradation is highlighted. Robust model-based charging optimisation strategies are identified as key to enabling fast charging in all conditions. Thermal management strategies to both cool batteries during charging and preheat them in cold weather are acknowledged as critical, with a particular focus on techniques capable of achieving high speeds and good temperature homogeneities.
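For reference, the baseline most fast-charging protocols are compared against is constant-current/constant-voltage (CC-CV) charging. A toy simulation on a resistor-plus-OCV cell model shows the two phases; every parameter value here is invented for illustration, not fitted to a real cell.

```python
capacity_ah = 3.0
r0 = 0.03                               # ohmic resistance (ohm)
v_max, i_cc, i_cut = 4.2, 3.0, 0.15     # CV limit (V), CC current (A), cutoff (A)
ocv = lambda soc: 3.0 + 1.2 * soc       # crude open-circuit-voltage curve

soc, t, dt, phase = 0.1, 0.0, 1.0, "CC"   # start at 10% state of charge
while True:
    if phase == "CC":
        i = i_cc
        if ocv(soc) + i * r0 >= v_max:  # terminal voltage reaches the limit
            phase = "CV"
    if phase == "CV":
        i = (v_max - ocv(soc)) / r0     # current that holds v = v_max
        if i <= i_cut:
            break
    soc += i * dt / (capacity_ah * 3600)  # coulomb counting
    t += dt

print(f"charged to {soc:.1%} in {t / 60:.0f} min")
```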

Journal ArticleDOI
Andrea Cossarizza, Hyun-Dong Chang, Andreas Radbruch, Andreas Acs, +459 more (160 institutions)
TL;DR: These guidelines are a consensus work of a considerable number of members of the immunology and flow cytometry community providing the theory and key practical aspects offlow cytometry enabling immunologists to avoid the common errors that often undermine immunological data.
Abstract: These guidelines are a consensus work of a considerable number of members of the immunology and flow cytometry community. They provide the theory and key practical aspects of flow cytometry enabling immunologists to avoid the common errors that often undermine immunological data. Notably, there are comprehensive sections of all major immune cell types with helpful Tables detailing phenotypes in murine and human cells. The latest flow cytometry techniques and applications are also described, featuring examples of the data that can be generated and, importantly, how the data can be analysed. Furthermore, there are sections detailing tips, tricks and pitfalls to avoid, all written and peer-reviewed by leading experts in the field, making this an essential research companion.

Journal ArticleDOI
29 Mar 2019-Science
TL;DR: A global, quantitative assessment of the amphibian chytridiomycosis panzootic demonstrates its role in the decline of at least 501 amphibian species over the past half-century and represents the greatest recorded loss of biodiversity attributable to a disease.
Abstract: Anthropogenic trade and development have broken down dispersal barriers, facilitating the spread of diseases that threaten Earth's biodiversity. We present a global, quantitative assessment of the amphibian chytridiomycosis panzootic, one of the most impactful examples of disease spread, and demonstrate its role in the decline of at least 501 amphibian species over the past half-century, including 90 presumed extinctions. The effects of chytridiomycosis have been greatest in large-bodied, range-restricted anurans in wet climates in the Americas and Australia. Declines peaked in the 1980s, and only 12% of declined species show signs of recovery, whereas 39% are experiencing ongoing decline. There is risk of further chytridiomycosis outbreaks in new areas. The chytridiomycosis panzootic represents the greatest recorded loss of biodiversity attributable to a disease.

Journal ArticleDOI
TL;DR: The numerous beneficial effects of GLP-1 render this hormone an interesting candidate for the development of pharmacotherapies to treat obesity, diabetes, and neurodegenerative disorders.
Abstract: Background The glucagon-like peptide-1 (GLP-1) is a multifaceted hormone with broad pharmacological potential. Among the numerous metabolic effects of GLP-1 are the glucose-dependent stimulation of insulin secretion, decrease of gastric emptying, inhibition of food intake, increase of natriuresis and diuresis, and modulation of rodent β-cell proliferation. GLP-1 also has cardio- and neuroprotective effects, decreases inflammation and apoptosis, and has implications for learning and memory, reward behavior, and palatability. Biochemically modified for enhanced potency and sustained action, GLP-1 receptor agonists are successfully in clinical use for the treatment of type-2 diabetes, and several GLP-1-based pharmacotherapies are in clinical evaluation for the treatment of obesity. Scope of review In this review, we provide a detailed overview on the multifaceted nature of GLP-1 and its pharmacology and discuss its therapeutic implications on various diseases. Major conclusions Since its discovery, GLP-1 has emerged as a pleiotropic hormone with a myriad of metabolic functions that go well beyond its classical identification as an incretin hormone. The numerous beneficial effects of GLP-1 render this hormone an interesting candidate for the development of pharmacotherapies to treat obesity, diabetes, and neurodegenerative disorders.