
Journal ArticleDOI
TL;DR: The aim of this article is to provide guidance on the use and interpretation of Bland-Altman analysis in method comparison studies.
Abstract: In a contemporary clinical laboratory it is very common to have to assess the agreement between two quantitative methods of measurement. The correct statistical approach to assess this degree of agreement is not obvious. Correlation and regression studies are frequently proposed. However, correlation studies the relationship between two variables, not the differences between them, and it is not recommended as a method for assessing the comparability between methods. In 1983 Altman and Bland (B&A) proposed an alternative analysis, based on quantifying the agreement between two quantitative measurements by studying the mean difference and constructing limits of agreement. The B&A plot analysis is a simple way to evaluate a bias between the mean differences, and to estimate an agreement interval within which 95% of the differences of the second method, compared to the first one, fall. Data can be analyzed both as a unit-differences plot and as a percentage-differences plot. The B&A plot method only defines the intervals of agreement; it does not say whether those limits are acceptable or not. Acceptable limits must be defined a priori, based on clinical necessity, biological considerations or other goals. The aim of this article is to provide guidance on the use and interpretation of Bland-Altman analysis in method comparison studies.
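The bias and 95% limits of agreement described in this abstract can be sketched in a few lines; this is a minimal illustration of the computation (the function name `bland_altman` is ours, not from the article):

```python
import statistics

def bland_altman(m1, m2):
    """Bland-Altman statistics for two paired measurement series.

    Returns the mean difference (bias) between method 1 and method 2
    and the 95% limits of agreement, bias +/- 1.96 * SD of the
    differences; in the B&A plot these are drawn as horizontal lines
    against the pairwise means (m1 + m2) / 2.
    """
    diffs = [a - b for a, b in zip(m1, m2)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

bias, lo, hi = bland_altman([10, 12, 11, 13, 14], [9, 11, 12, 12, 13])
```

As the abstract stresses, the computed interval only describes the data; whether `lo` and `hi` are clinically acceptable must be decided a priori.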

2,399 citations


Journal ArticleDOI
TL;DR: This AASLD 2018 Hepatitis B Guidance provides a data-supported approach to screening, prevention, diagnosis, and clinical management of patients with hepatitis B.

2,399 citations


Proceedings ArticleDOI
06 May 2019
TL;DR: MobileNetV3 as mentioned in this paper is the next generation of MobileNets based on a combination of complementary search techniques as well as a novel architecture design and achieves state-of-the-art results for mobile classification, detection and segmentation.
Abstract: We present the next generation of MobileNets based on a combination of complementary search techniques as well as a novel architecture design. MobileNetV3 is tuned to mobile phone CPUs through a combination of hardware-aware network architecture search (NAS) complemented by the NetAdapt algorithm and then subsequently improved through novel architecture advances. This paper starts the exploration of how automated search algorithms and network design can work together to harness complementary approaches improving the overall state of the art. Through this process we create two new MobileNet models for release: MobileNetV3-Large and MobileNetV3-Small which are targeted for high and low resource use cases. These models are then adapted and applied to the tasks of object detection and semantic segmentation. For the task of semantic segmentation (or any dense pixel prediction), we propose a new efficient segmentation decoder Lite Reduced Atrous Spatial Pyramid Pooling (LR-ASPP). We achieve new state of the art results for mobile classification, detection and segmentation. MobileNetV3-Large is 3.2% more accurate on ImageNet classification while reducing latency by 20% compared to MobileNetV2. MobileNetV3-Small is 6.6% more accurate compared to a MobileNetV2 model with comparable latency. MobileNetV3-Large detection is over 25% faster at roughly the same accuracy as MobileNetV2 on COCO detection. MobileNetV3-Large LR-ASPP is 34% faster than MobileNetV2 R-ASPP at similar accuracy for Cityscapes segmentation.

2,397 citations


Journal ArticleDOI
21 Oct 2015-Nature
TL;DR: The data imply statistically significant rejection of the local-realist null hypothesis and could be used for testing less-conventional theories, and for implementing device-independent quantum-secure communication and randomness certification.
Abstract: More than 50 years ago, John Bell proved that no theory of nature that obeys locality and realism can reproduce all the predictions of quantum theory: in any local-realist theory, the correlations between outcomes of measurements on distant particles satisfy an inequality that can be violated if the particles are entangled. Numerous Bell inequality tests have been reported; however, all experiments reported so far required additional assumptions to obtain a contradiction with local realism, resulting in 'loopholes'. Here we report a Bell experiment that is free of any such additional assumption and thus directly tests the principles underlying Bell's inequality. We use an event-ready scheme that enables the generation of robust entanglement between distant electron spins (estimated state fidelity of 0.92 ± 0.03). Efficient spin read-out avoids the fair-sampling assumption (detection loophole), while the use of fast random-basis selection and spin read-out combined with a spatial separation of 1.3 kilometres ensure the required locality conditions. We performed 245 trials that tested the CHSH-Bell inequality S ≤ 2 and found S = 2.42 ± 0.20 (where S quantifies the correlation between measurement outcomes). A null-hypothesis test yields a probability of at most P = 0.039 that a local-realist model for space-like separated sites could produce data with a violation at least as large as we observe, even when allowing for memory in the devices. Our data hence imply statistically significant rejection of the local-realist null hypothesis. This conclusion may be further consolidated in future experiments; for instance, reaching a value of P = 0.001 would require approximately 700 trials for an observed S = 2.4. With improvements, our experiment could be used for testing less-conventional theories, and for implementing device-independent quantum-secure communication and randomness certification.
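The CHSH quantity S tested above is a fixed combination of four correlators. A minimal sketch, using the textbook singlet-state prediction E(a,b) = −cos(a − b) rather than the experiment's data, shows how the quantum value exceeds the local-realist bound of 2:

```python
import math

def chsh_s(e_ab, e_abp, e_apb, e_apbp):
    """CHSH combination S = E(a,b) + E(a,b') + E(a',b) - E(a',b')."""
    return e_ab + e_abp + e_apb - e_apbp

def singlet_correlator(a, b):
    """Ideal quantum prediction E(a,b) = -cos(a - b) for spin
    measurements along angles a and b on a singlet pair (an
    idealised model, not the reported measurement outcomes)."""
    return -math.cos(a - b)

# Standard optimal angle choices: any local-realist theory gives
# |S| <= 2, while the quantum prediction reaches 2*sqrt(2) ~= 2.83.
a, ap, b, bp = 0.0, math.pi / 2, math.pi / 4, -math.pi / 4
s = chsh_s(singlet_correlator(a, b), singlet_correlator(a, bp),
           singlet_correlator(ap, b), singlet_correlator(ap, bp))
```

The reported S = 2.42 ± 0.20 sits between the classical bound of 2 and this ideal quantum maximum, consistent with the estimated state fidelity of 0.92.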

2,397 citations


Journal ArticleDOI
TL;DR: A summary of the technical advances that are incorporated in the fourth major release of the Q-Chem quantum chemistry program is provided in this paper, covering approximately the last seven years, including developments in density functional theory and algorithms, nuclear magnetic resonance (NMR) property evaluation, coupled cluster and perturbation theories, methods for electronically excited and open-shell species, tools for treating extended environments, algorithms for walking on potential surfaces, analysis tools, energy and electron transfer modelling, parallel computing capabilities, and graphical user interfaces.
Abstract: A summary of the technical advances that are incorporated in the fourth major release of the Q-Chem quantum chemistry program is provided, covering approximately the last seven years. These include developments in density functional theory methods and algorithms, nuclear magnetic resonance (NMR) property evaluation, coupled cluster and perturbation theories, methods for electronically excited and open-shell species, tools for treating extended environments, algorithms for walking on potential surfaces, analysis tools, energy and electron transfer modelling, parallel computing capabilities, and graphical user interfaces. In addition, a selection of example case studies that illustrate these capabilities is given. These include extensive benchmarks of the comparative accuracy of modern density functionals for bonded and non-bonded interactions, tests of attenuated second order Møller–Plesset (MP2) methods for intermolecular interactions, a variety of parallel performance benchmarks, and tests of the accuracy of implicit solvation models. Some specific chemical examples include calculations on the strongly correlated Cr2 dimer, exploring zeolite-catalysed ethane dehydrogenation, energy decomposition analysis of a charged ter-molecular complex arising from glycerol photoionisation, and natural transition orbitals for a Frenkel exciton state in a nine-unit model of a self-assembling nanotube.

2,396 citations


Journal ArticleDOI
TL;DR: CD19-CAR T cell therapy is feasible, safe, and mediates potent anti-leukaemic activity in children and young adults with chemotherapy-resistant B-precursor acute lymphoblastic leukaemia and non-Hodgkin lymphoma.

2,394 citations


Proceedings ArticleDOI
01 Jul 2017
TL;DR: In this paper, the authors introduce a new training strategy, iCaRL, that allows learning in such a class-incremental way: only the training data for a small number of classes has to be present at the same time and new classes can be added progressively.
Abstract: A major open problem on the road to artificial intelligence is the development of incrementally learning systems that learn about more and more concepts over time from a stream of data. In this work, we introduce a new training strategy, iCaRL, that allows learning in such a class-incremental way: only the training data for a small number of classes has to be present at the same time and new classes can be added progressively. iCaRL learns strong classifiers and a data representation simultaneously. This distinguishes it from earlier works that were fundamentally limited to fixed data representations and therefore incompatible with deep learning architectures. We show by experiments on CIFAR-100 and ImageNet ILSVRC 2012 data that iCaRL can learn many classes incrementally over a long period of time where other strategies quickly fail.
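One of iCaRL's components is its nearest-mean-of-exemplars classification rule: each class keeps a small exemplar set, and a sample is assigned to the class whose (normalised) exemplar mean is closest in feature space. A sketch under that description (the function name and toy feature vectors are illustrative assumptions, not the paper's code):

```python
import numpy as np

def nearest_mean_classify(features, exemplar_sets):
    """iCaRL-style nearest-mean-of-exemplars classification.

    exemplar_sets: dict mapping class label -> list of exemplar
    feature vectors. Exemplars and class means are L2-normalised,
    and each input feature vector is assigned to the class whose
    normalised mean is nearest in Euclidean distance.
    """
    def norm(v):
        return v / np.linalg.norm(v, axis=-1, keepdims=True)

    labels = sorted(exemplar_sets)
    means = np.stack([norm(norm(np.asarray(exemplar_sets[c])).mean(axis=0))
                      for c in labels])  # one normalised mean per class
    x = norm(np.asarray(features))
    dists = np.linalg.norm(x[:, None, :] - means[None, :, :], axis=-1)
    return [labels[i] for i in dists.argmin(axis=1)]

preds = nearest_mean_classify(
    [[0.9, 0.1], [0.1, 0.9]],
    {0: [[1.0, 0.0], [1.0, 0.1]], 1: [[0.0, 1.0], [0.1, 1.0]]})
```

Because the class means are recomputed from the current exemplar sets whenever the representation changes, new classes can be added without retraining a fixed classifier head.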

2,393 citations


Journal ArticleDOI
07 Jun 2016-JAMA
TL;DR: Analyses of changes over the decade from 2005 through 2014, adjusted for age, race/Hispanic origin, smoking status, and education, showed significant increasing linear trends among women for overall obesity and for class 3 obesity but not among men.
Abstract: Importance Between 1980 and 2000, the prevalence of obesity increased significantly among adult men and women in the United States; further significant increases were observed through 2003-2004 for men but not women. Subsequent comparisons of data from 2003-2004 with data through 2011-2012 showed no significant increases for men or women. Objective To examine obesity prevalence for 2013-2014 and trends over the decade from 2005 through 2014 adjusting for sex, age, race/Hispanic origin, smoking status, and education. Design, Setting, and Participants Analysis of data obtained from the National Health and Nutrition Examination Survey (NHANES), a cross-sectional, nationally representative health examination survey of the US civilian noninstitutionalized population that includes measured weight and height. Exposures Survey period. Main Outcomes and Measures Prevalence of obesity (body mass index ≥30) and class 3 obesity (body mass index ≥40). Results This report is based on data from 2638 adult men (mean age, 46.8 years) and 2817 women (mean age, 48.4 years) from the most recent 2 years (2013-2014) of NHANES and data from 21 013 participants in previous NHANES surveys from 2005 through 2012. For the years 2013-2014, the overall age-adjusted prevalence of obesity was 37.7% (95% CI, 35.8%-39.7%); among men, it was 35.0% (95% CI, 32.8%-37.3%); and among women, it was 40.4% (95% CI, 37.6%-43.3%). The corresponding prevalence of class 3 obesity overall was 7.7% (95% CI, 6.2%-9.3%); among men, it was 5.5% (95% CI, 4.0%-7.2%); and among women, it was 9.9% (95% CI, 7.5%-12.3%). Analyses of changes over the decade from 2005 through 2014, adjusted for age, race/Hispanic origin, smoking status, and education, showed significant increasing linear trends among women for overall obesity (P = .004) and for class 3 obesity (P = .01) but not among men (P = .30 for overall obesity; P = .14 for class 3 obesity).
Conclusions and Relevance In this nationally representative survey of adults in the United States, the age-adjusted prevalence of obesity in 2013-2014 was 35.0% among men and 40.4% among women. The corresponding values for class 3 obesity were 5.5% for men and 9.9% for women. For women, the prevalence of overall obesity and of class 3 obesity showed significant linear trends for increase between 2005 and 2014; there were no significant trends for men. Other studies are needed to determine the reasons for these trends.
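The two outcome definitions used above are simple BMI thresholds. A minimal sketch (the standard BMI formula, weight in kilograms divided by height in metres squared, is assumed; it is not spelled out in the abstract):

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def obesity_class(b):
    """Classify using the study's thresholds: BMI >= 30 is obesity,
    BMI >= 40 is class 3 obesity."""
    if b >= 40:
        return "class 3 obesity"
    if b >= 30:
        return "obesity"
    return "not obese"
```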

2,392 citations


Journal ArticleDOI
Per Nilsen1
TL;DR: A taxonomy that distinguishes between different categories of theories, models and frameworks in implementation science is proposed to facilitate appropriate selection and application of relevant approaches in implementation research and practice and to foster cross-disciplinary dialogue among implementation researchers.
Abstract: Implementation science has progressed towards increased use of theoretical approaches to provide better understanding and explanation of how and why implementation succeeds or fails. The aim of this article is to propose a taxonomy that distinguishes between different categories of theories, models and frameworks in implementation science, to facilitate appropriate selection and application of relevant approaches in implementation research and practice and to foster cross-disciplinary dialogue among implementation researchers. Theoretical approaches used in implementation science have three overarching aims: describing and/or guiding the process of translating research into practice (process models); understanding and/or explaining what influences implementation outcomes (determinant frameworks, classic theories, implementation theories); and evaluating implementation (evaluation frameworks). This article proposes five categories of theoretical approaches to achieve three overarching aims. These categories are not always recognized as separate types of approaches in the literature. While there is overlap between some of the theories, models and frameworks, awareness of the differences is important to facilitate the selection of relevant approaches. Most determinant frameworks provide limited “how-to” support for carrying out implementation endeavours since the determinants usually are too generic to provide sufficient detail for guiding an implementation process. And while the relevance of addressing barriers and enablers to translating research into practice is mentioned in many process models, these models do not identify or systematically structure specific determinants associated with implementation success. Furthermore, process models recognize a temporal sequence of implementation endeavours, whereas determinant frameworks do not explicitly take a process perspective of implementation.

2,392 citations


Journal ArticleDOI
20 Nov 2017
TL;DR: In this paper, the authors provide a comprehensive tutorial and survey about the recent advances toward the goal of enabling efficient processing of DNNs, and discuss various hardware platforms and architectures that support DNN, and highlight key trends in reducing the computation cost of deep neural networks either solely via hardware design changes or via joint hardware and DNN algorithm changes.
Abstract: Deep neural networks (DNNs) are currently widely used for many artificial intelligence (AI) applications including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, it comes at the cost of high computational complexity. Accordingly, techniques that enable efficient processing of DNNs to improve energy efficiency and throughput without sacrificing application accuracy or increasing hardware cost are critical to the wide deployment of DNNs in AI systems. This article aims to provide a comprehensive tutorial and survey about the recent advances toward the goal of enabling efficient processing of DNNs. Specifically, it will provide an overview of DNNs, discuss various hardware platforms and architectures that support DNNs, and highlight key trends in reducing the computation cost of DNNs either solely via hardware design changes or via joint hardware design and DNN algorithm changes. It will also summarize various development resources that enable researchers and practitioners to quickly get started in this field, and highlight important benchmarking metrics and design considerations that should be used for evaluating the rapidly growing number of DNN hardware designs, optionally including algorithmic codesigns, being proposed in academia and industry. The reader will take away the following concepts from this article: understand the key design considerations for DNNs; be able to evaluate different DNN hardware implementations with benchmarks and comparison metrics; understand the tradeoffs between various hardware architectures and platforms; be able to evaluate the utility of various DNN design techniques for efficient processing; and understand recent implementation trends and opportunities.
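A benchmarking metric commonly used in this kind of survey is the multiply-accumulate (MAC) and parameter count of a convolutional layer, which dominates DNN compute cost. A sketch of the standard counting formulas (the helper names are ours):

```python
def conv2d_macs(h_out, w_out, c_in, c_out, k):
    """Multiply-accumulate count for a standard k x k convolution
    producing an h_out x w_out x c_out output from c_in input
    channels: one k*k*c_in dot product per output element."""
    return h_out * w_out * c_out * k * k * c_in

def conv2d_params(c_in, c_out, k, bias=True):
    """Weight (and optional bias) count for the same layer."""
    return k * k * c_in * c_out + (c_out if bias else 0)

# Example: a 3x3 conv from 64 to 128 channels on a 56x56 feature map.
macs = conv2d_macs(56, 56, 64, 128, 3)
params = conv2d_params(64, 128, 3)
```

Counts like these let hardware-only and joint hardware/algorithm optimisations be compared on the same footing, though (as the article's discussion of metrics implies) MACs alone do not capture memory traffic or achievable utilisation.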

2,391 citations


19 May 2016
TL;DR: Suggested reading taken from the last 12 months of the Commission’s weekly publication “On the Radar” highlights papers and reporting exploring the evidence for the effectiveness of antimicrobial stewardship in hospitals, the scope of the problem of antimicrobial resistance, and some specific stewardship strategies.
Abstract: Below is a selection of suggested reading taken from the last 12 months of the Commission’s weekly publication “On the Radar”. The selection highlights papers and reporting exploring the evidence for the effectiveness of antimicrobial stewardship in hospitals, the scope of the problem of antimicrobial resistance, and some specific stewardship strategies. Commission projects supporting antimicrobial stewardship are also highlighted.

Journal ArticleDOI
TL;DR: Through logical differential diagnosis, levels of evidence for autoimmune encephalitis (possible, probable, or definite) are achieved, which can lead to prompt immunotherapy.
Abstract: Summary Encephalitis is a severe inflammatory disorder of the brain with many possible causes and a complex differential diagnosis. Advances in autoimmune encephalitis research in the past 10 years have led to the identification of new syndromes and biomarkers that have transformed the diagnostic approach to these disorders. However, existing criteria for autoimmune encephalitis are too reliant on antibody testing and response to immunotherapy, which might delay the diagnosis. We reviewed the literature and gathered the experience of a team of experts with the aims of developing a practical, syndrome-based diagnostic approach to autoimmune encephalitis and providing guidelines to navigate through the differential diagnosis. Because autoantibody test results and response to therapy are not available at disease onset, we based the initial diagnostic approach on neurological assessment and conventional tests that are accessible to most clinicians. Through logical differential diagnosis, levels of evidence for autoimmune encephalitis (possible, probable, or definite) are achieved, which can lead to prompt immunotherapy.

Proceedings ArticleDOI
07 Jun 2015
TL;DR: This paper expands the internal patch search space by allowing geometric variations, and proposes a compositional model to simultaneously handle both types of transformations to accommodate local shape variations.
Abstract: Self-similarity based super-resolution (SR) algorithms are able to produce visually pleasing results without extensive training on external databases. Such algorithms exploit the statistical prior that patches in a natural image tend to recur within and across scales of the same image. However, the internal dictionary obtained from the given image may not always be sufficiently expressive to cover the textural appearance variations in the scene. In this paper, we extend self-similarity based SR to overcome this drawback. We expand the internal patch search space by allowing geometric variations. We do so by explicitly localizing planes in the scene and using the detected perspective geometry to guide the patch search process. We also incorporate additional affine transformations to accommodate local shape variations. We propose a compositional model to simultaneously handle both types of transformations. We extensively evaluate the performance in both urban and natural scenes. Even without using any external training databases, we achieve significantly superior results on urban scenes, while maintaining comparable performance on natural scenes as other state-of-the-art SR algorithms.

Journal ArticleDOI
TL;DR: This document is developed for physicians and healthcare providers who are involved in athlete care, whether at a recreational, elite or professional level, and provides an overview of issues that may be of importance to healthcare providers involved in the management of SRC.
Abstract: The 2017 Concussion in Sport Group (CISG) consensus statement is designed to build on the principles outlined in the previous statements1–4 and to develop further conceptual understanding of sport-related concussion (SRC) using an expert consensus-based approach. This document is developed for physicians and healthcare providers who are involved in athlete care, whether at a recreational, elite or professional level. While agreement exists on the principal messages conveyed by this document, the authors acknowledge that the science of SRC is evolving and therefore individual management and return-to-play decisions remain in the realm of clinical judgement. This consensus document reflects the current state of knowledge and will need to be modified as new knowledge develops. It provides an overview of issues that may be of importance to healthcare providers involved in the management of SRC. This paper should be read in conjunction with the systematic reviews and methodology paper that accompany it. First and foremost, this document is intended to guide clinical practice; however, the authors feel that it can also help form the agenda for future research relevant to SRC by identifying knowledge gaps. A series of specific clinical questions were developed as part of the consensus process for the Berlin 2016 meeting. Each consensus question was the subject of a specific formal systematic review, which is published concurrently with this summary statement. Readers are directed to these background papers in conjunction with this summary statement as they provide the context for the issues and include the scope of published research, search strategy and citations reviewed for each question. This 2017 consensus statement also summarises each topic and recommendations in the context of all five CISG meetings (that is, 2001, 2004, 2008, 2012 as well as 2016). Approximately 60 000 published articles were screened by the expert panels for the Berlin …

Proceedings ArticleDOI
21 Jul 2017
TL;DR: It is concluded that the NTIRE 2017 challenge pushes the state-of-the-art in single-image super-resolution, reaching the best results to date on the popular Set5, Set14, B100, Urban100 datasets and on the authors' newly proposed DIV2K.
Abstract: This paper introduces a novel large dataset for example-based single image super-resolution and studies the state-of-the-art as emerged from the NTIRE 2017 challenge. The challenge is the first challenge of its kind, with 6 competitions, hundreds of participants and tens of proposed solutions. Our newly collected DIVerse 2K resolution image dataset (DIV2K) was employed by the challenge. In our study we compare the solutions from the challenge to a set of representative methods from the literature and evaluate them using diverse measures on our proposed DIV2K dataset. Moreover, we conduct a number of experiments and draw conclusions on several topics of interest. We conclude that the NTIRE 2017 challenge pushes the state-of-the-art in single-image super-resolution, reaching the best results to date on the popular Set5, Set14, B100, Urban100 datasets and on our newly proposed DIV2K.
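The headline comparisons on Set5, Set14, B100, Urban100 and DIV2K are typically reported in PSNR, one of the "diverse measures" mentioned above. A minimal sketch of the metric (images here are flat lists of pixel values for brevity):

```python
import math

def psnr(img1, img2, max_val=255.0):
    """Peak signal-to-noise ratio between two equally sized images:
    10 * log10(MAX^2 / MSE), higher is better; identical images
    give infinity."""
    mse = sum((a - b) ** 2 for a, b in zip(img1, img2)) / len(img1)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(max_val ** 2 / mse)

score = psnr([0, 0, 0, 0], [10, 10, 10, 10])
```

In super-resolution benchmarks PSNR is usually computed on the luminance channel after cropping border pixels, conventions that vary between challenges and are omitted here.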

Journal ArticleDOI
TL;DR: Recommendations for specific organ system-based toxicity diagnosis and management are presented and, in general, permanent discontinuation of ICPis is recommended with grade 4 toxicities, with the exception of endocrinopathies that have been controlled by hormone replacement.
Abstract: Purpose To increase awareness, outline strategies, and offer guidance on the recommended management of immune-related adverse events in patients treated with immune checkpoint inhibitor (ICPi) therapy. Methods A multidisciplinary, multi-organizational panel of experts in medical oncology, dermatology, gastroenterology, rheumatology, pulmonology, endocrinology, urology, neurology, hematology, emergency medicine, nursing, trialist, and advocacy was convened to develop the clinical practice guideline. Guideline development involved a systematic review of the literature and an informal consensus process. The systematic review focused on guidelines, systematic reviews and meta-analyses, randomized controlled trials, and case series published from 2000 through 2017. Results The systematic review identified 204 eligible publications. Much of the evidence consisted of systematic reviews of observational data, consensus guidelines, case series, and case reports. Due to the paucity of high-quality evidence on management

Journal ArticleDOI
TL;DR: The key feature of RDP4 that differentiates it from other recombination detection tools is its flexibility, which can be run either in fully automated mode from the command line interface or with a graphically rich user interface that enables detailed exploration of both individual recombination events and overall recombination patterns.
Abstract: RDP4 is the latest version of recombination detection program (RDP), a Windows computer program that implements an extensive array of methods for detecting and visualising recombination in, and stripping evidence of recombination from, virus genome sequence alignments. RDP4 is capable of analysing twice as many sequences (up to 2,500) that are up to three times longer (up to 10 Mb) than those that could be analysed by older versions of the program. RDP4 is therefore also applicable to the analysis of bacterial full-genome sequence datasets. Other novelties in RDP4 include (1) the capacity to differentiate between recombination and genome segment reassortment, (2) the estimation of recombination breakpoint confidence intervals, (3) a variety of ‘recombination aware’ phylogenetic tree construction and comparison tools, (4) new matrix-based visualisation tools for examining both individual recombination events and the overall phylogenetic impacts of multiple recombination events and (5) new tests to detect the influences of gene arrangements, encoded protein structure, nucleic acid secondary structure, nucleotide composition, and nucleotide diversity on recombination breakpoint patterns. The key feature of RDP4 that differentiates it from other recombination detection tools is its flexibility. It can be run either in fully automated mode from the command line interface or with a graphically rich user interface that enables detailed exploration of both individual recombination events and overall recombination patterns.

Journal ArticleDOI
TL;DR: This article provides an overview of signal processing challenges in mmWave wireless systems, with an emphasis on those faced by using MIMO communication at higher carrier frequencies.
Abstract: Communication at millimeter wave (mmWave) frequencies is defining a new era of wireless communication. The mmWave band offers higher bandwidth communication channels versus those presently used in commercial wireless systems. The applications of mmWave are immense: wireless local and personal area networks in the unlicensed band, 5G cellular systems, not to mention vehicular area networks, ad hoc networks, and wearables. Signal processing is critical for enabling the next generation of mmWave communication. Due to the use of large antenna arrays at the transmitter and receiver, combined with radio frequency and mixed signal power constraints, new multiple-input multiple-output (MIMO) communication signal processing techniques are needed. Because of the wide bandwidths, low complexity transceiver algorithms become important. There are opportunities to exploit techniques like compressed sensing for channel estimation and beamforming. This article provides an overview of signal processing challenges in mmWave wireless systems, with an emphasis on those faced by using MIMO communication at higher carrier frequencies.

Journal ArticleDOI
TL;DR: This extends OrthoFinder’s high accuracy orthogroup inference to provide phylogenetic inference of orthologs, rooted gene trees, gene duplication events, the rooted species tree, and comparative genomics statistics.
Abstract: Here, we present a major advance of the OrthoFinder method. This extends OrthoFinder’s high accuracy orthogroup inference to provide phylogenetic inference of orthologs, rooted gene trees, gene duplication events, the rooted species tree, and comparative genomics statistics. Each output is benchmarked on appropriate real or simulated datasets, and where comparable methods exist, OrthoFinder is equivalent to or outperforms these methods. Furthermore, OrthoFinder is the most accurate ortholog inference method on the Quest for Orthologs benchmark test. Finally, OrthoFinder’s comprehensive phylogenetic analysis is achieved with equivalent speed and scalability to the fastest, score-based heuristic methods. OrthoFinder is available at https://github.com/davidemms/OrthoFinder.

Journal ArticleDOI
Christina Fitzmaurice1, Christina Fitzmaurice2, Daniel Dicker1, Daniel Dicker2, Amanda W Pain1, Hannah Hamavid1, Maziar Moradi-Lakeh1, Michael F. MacIntyre1, Michael F. MacIntyre3, Christine Allen1, Gillian M. Hansen1, Rachel Woodbrook1, Charles D.A. Wolfe1, Randah R. Hamadeh4, Ami R. Moore5, A. Werdecker6, Bradford D. Gessner, Braden Te Ao, Brian J. McMahon7, Chante Karimkhani8, Chuanhua Yu9, Graham S Cooke10, David C. Schwebel11, David O. Carpenter12, David M. Pereira13, Denis Nash, Dhruv S. Kazi14, Diego De Leo15, Dietrich Plass16, Kingsley N. Ukwaja17, George D. Thurston, Kim Yun Jin18, Edgar P. Simard19, Edward J Mills20, Eun-Kee Park21, Ferrán Catalá-López22, Gabrielle deVeber, Carolyn C. Gotay23, Gulfaraz Khan24, H. Dean Hosgood25, Itamar S. Santos26, Janet L Leasher27, Jasvinder A. Singh28, James Leigh12, Jost B. Jonas29, Juan R. Sanabria30, Justin Beardsley31, Justin Beardsley32, Kathryn H. Jacobsen33, Ken Takahashi34, Richard C. Franklin, Luca Ronfani35, Marcella Montico36, Luigi Naldi36, Marcello Tonelli, Johanna M. Geleijnse37, Max Petzold38, Mark G. Shrime39, Mark G. Shrime40, Mustafa Z. Younis41, Naohiro Yonemoto42, Nicholas J K Breitborde, Paul S. F. Yip43, Farshad Pourmalek44, Paulo A. Lotufo24, Alireza Esteghamati27, Graeme J. Hankey45, Raghib Ali46, Raimundas Lunevicius33, Reza Malekzadeh47, Robert P. Dellavalle45, Robert G. Weintraub48, Robert G. Weintraub49, Robyn M. Lucas50, Robyn M. Lucas51, Roderick J Hay52, David Rojas-Rueda, Ronny Westerman, Sadaf G. Sepanlou53, Sandra Nolte, Scott B. Patten54, Scott Weichenthal37, Semaw Ferede Abera55, Seyed-Mohammad Fereshtehnejad56, Ivy Shiue57, Tim Driscoll58, Tim Driscoll59, Tommi J. Vasankari29, Ubai Alsharif, Vafa Rahimi-Movaghar54, Vasiliy Victorovich Vlassov45, W. S. Marcenes60, Wubegzier Mekonnen61, Yohannes Adama Melaku62, Yuichiro Yano56, Al Artaman63, Ismael Campos, Jennifer H MacLachlan41, Ulrich O Mueller, Daniel Kim53, Matias Trillini64, Babak Eshrati65, Hywel C Williams66, Kenji Shibuya67, Rakhi Dandona68, Kinnari S. Murthy69, Benjamin C Cowie69, Azmeraw T. Amare, Carl Abelardo T. Antonio70, Carlos A Castañeda-Orjuela71, Coen H. Van Gool, Francesco Saverio Violante, In-Hwan Oh72, Kedede Deribe73, Kjetil Søreide74, Kjetil Søreide62, Luke D. Knibbs75, Luke D. Knibbs76, Maia Kereselidze77, Mark Green78, Rosario Cardenas79, Nobhojit Roy80, Taavi Tillmann57, Yongmei Li81, Hans Krueger82, Lorenzo Monasta24, Subhojit Dey36, Sara Sheikhbahaei, Nima Hafezi-Nejad45, G Anil Kumar45, Chandrashekhar T Sreeramareddy69, Lalit Dandona83, Haidong Wang1, Haidong Wang69, Stein Emil Vollset1, Ali Mokdad75, Ali Mokdad84, Joshua A. Salomon1, Rafael Lozano41, Theo Vos1, Mohammad H. Forouzanfar1, Alan D. Lopez1, Christopher J L Murray51, Mohsen Naghavi1
Institute for Health Metrics and Evaluation1, University of Washington2, Iran University of Medical Sciences3, King's College London4, Arabian Gulf University5, University of North Texas6, Auckland University of Technology7, Alaska Native Tribal Health Consortium8, Columbia University9, Wuhan University10, Imperial College London11, University of Alabama at Birmingham12, University at Albany, SUNY13, City University of New York14, University of California, San Francisco15, Griffith University16, Environment Agency17, New York University18, Southern University College19, Emory University20, University of Ottawa21, Kosin University22, University of Toronto23, University of British Columbia24, United Arab Emirates University25, Albert Einstein College of Medicine26, University of São Paulo27, Nova Southeastern University28, University of Sydney29, Heidelberg University30, Case Western Reserve University31, Cancer Treatment Centers of America32, University of Oxford33, George Mason University34, James Cook University35, University of Trieste36, University of Calgary37, Wageningen University and Research Centre38, University of the Witwatersrand39, University of Gothenburg40, Harvard University41, Jackson State University42, University of Arizona43, University of Hong Kong44, Tehran University of Medical Sciences45, University of Western Australia46, Aintree University Hospitals NHS Foundation Trust47, Veterans Health Administration48, University of Colorado Denver49, Royal Children's Hospital50, University of Melbourne51, Australian National University52, University of Marburg53, Charité54, Health Canada55, College of Health Sciences, Bahrain56, Karolinska Institutet57, Northumbria University58, University of Edinburgh59, National Research University – Higher School of Economics60, Queen Mary University of London61, Addis Ababa University62, Northwestern University63, Northeastern University64, Mario Negri Institute for Pharmacological Research65, Arak University of Medical Sciences66, University of Nottingham67, University of Tokyo68, Public Health Foundation of India69, University of Groningen70, University of the Philippines Manila71, University of Bologna72, Kyung Hee University73, Brighton and Sussex Medical School74, University of Bergen75, Stavanger University Hospital76, University of Queensland77, National Centre for Disease Control78, University of Sheffield79, Universidad Autónoma Metropolitana80, University College London81, Genentech82, Universiti Tunku Abdul Rahman83, Norwegian Institute of Public Health84
TL;DR: To estimate mortality, incidence, years lived with disability, years of life lost, and disability-adjusted life-years for 28 cancers in 188 countries by sex from 1990 to 2013, the general methodology of the Global Burden of Disease 2013 study was used.
Abstract: Importance: Cancer is among the leading causes of death worldwide. Current estimates of cancer burden in individual countries and regions are necessary to inform local cancer control strategies. Objective: To estimate mortality, incidence, years lived with disability (YLDs), years of life lost (YLLs), and disability-adjusted life-years (DALYs) for 28 cancers in 188 countries by sex from 1990 to 2013. Evidence Review: The general methodology of the Global Burden of Disease (GBD) 2013 study was used. Cancer registries were the source for cancer incidence data as well as mortality-to-incidence (MI) ratios. Sources for cause of death data include vital registration system data, verbal autopsy studies, and other sources. The MI ratios were used to transform incidence data to mortality estimates and cause of death estimates to incidence estimates. Cancer prevalence was estimated using MI ratios as surrogates for survival data; YLDs were calculated by multiplying prevalence estimates with disability weights, which were derived from population-based surveys; YLLs were computed by multiplying the number of estimated cancer deaths at each age with a reference life expectancy; and DALYs were calculated as the sum of YLDs and YLLs. Findings: In 2013 there were 14.9 million incident cancer cases, 8.2 million deaths, and 196.3 million DALYs. Prostate cancer was the leading cause for cancer incidence (1.4 million) for men and breast cancer for women (1.8 million). Tracheal, bronchus, and lung (TBL) cancer was the leading cause for cancer death in men and women, with 1.6 million deaths. For men, TBL cancer was the leading cause of DALYs (24.9 million). For women, breast cancer was the leading cause of DALYs (13.1 million).
Age-standardized incidence rates (ASIRs) per 100 000 and age-standardized death rates (ASDRs) per 100 000 for both sexes in 2013 were higher in developing vs developed countries for stomach cancer (ASIR, 17 vs 14; ASDR, 15 vs 11), liver cancer (ASIR, 15 vs 7; ASDR, 16 vs 7), esophageal cancer (ASIR, 9 vs 4; ASDR, 9 vs 4), cervical cancer (ASIR, 8 vs 5; ASDR, 4 vs 2), lip and oral cavity cancer (ASIR, 7 vs 6; ASDR, 2 vs 2), and nasopharyngeal cancer (ASIR, 1.5 vs 0.4; ASDR, 1.2 vs 0.3). Between 1990 and 2013, ASIRs for all cancers combined (except nonmelanoma skin cancer and Kaposi sarcoma) increased by more than 10% in 113 countries and decreased by more than 10% in 12 of 188 countries. Conclusions and Relevance: Cancer poses a major threat to public health worldwide, and incidence rates have increased in most countries since 1990. The trend is a particular threat to developing nations with health systems that are ill-equipped to deal with complex and expensive cancer treatments. The annual update on the Global Burden of Cancer will provide all stakeholders with timely estimates to guide policy efforts in cancer prevention, screening, treatment, and palliation.
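
The burden calculation this abstract describes (MI ratios to convert incidence to mortality, then YLDs, YLLs, and DALYs) is simple arithmetic once the inputs are in hand. A minimal sketch, using made-up illustrative numbers rather than actual GBD 2013 inputs:

```python
# Sketch of the GBD-style burden pipeline described above.
# All numbers are hypothetical, for illustration only.

def mortality_from_incidence(incidence, mi_ratio):
    """MI ratios transform incidence estimates into mortality estimates."""
    return incidence * mi_ratio

def ylds(prevalence, disability_weight):
    """Years lived with disability = prevalence x disability weight."""
    return prevalence * disability_weight

def ylls(deaths_by_age, reference_life_expectancy):
    """Years of life lost = deaths at each age x remaining reference life expectancy."""
    return sum(d * le for d, le in zip(deaths_by_age, reference_life_expectancy))

def dalys(yld, yll):
    """Disability-adjusted life-years = YLDs + YLLs."""
    return yld + yll

# Illustrative example (hypothetical cancer site, one country-year)
incidence = 10_000                      # incident cases
deaths = mortality_from_incidence(incidence, mi_ratio=0.4)   # 4000.0
yld = ylds(prevalence=25_000, disability_weight=0.05)        # 1250.0
yll = ylls([1_000, 2_000, 1_000], [40.0, 25.0, 10.0])        # 100000.0
print(dalys(yld, yll))                  # 101250.0
```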

Journal ArticleDOI
TL;DR: The first global map (228 countries) of antibiotic consumption in livestock is presented and it is projected that antimicrobial consumption will rise by 67% by 2030, and nearly double in Brazil, Russia, India, China, and South Africa.
Abstract: Demand for animal protein for human consumption is rising globally at an unprecedented rate. Modern animal production practices are associated with regular use of antimicrobials, potentially increasing selection pressure on bacteria to become resistant. Despite the significant potential consequences for antimicrobial resistance, there has been no quantitative measurement of global antimicrobial consumption by livestock. We address this gap by using Bayesian statistical models combining maps of livestock densities, economic projections of demand for meat products, and current estimates of antimicrobial consumption in high-income countries to map antimicrobial use in food animals for 2010 and 2030. We estimate that the global average annual consumption of antimicrobials per kilogram of animal produced was 45 mg⋅kg⁻¹, 148 mg⋅kg⁻¹, and 172 mg⋅kg⁻¹ for cattle, chicken, and pigs, respectively. Starting from this baseline, we estimate that between 2010 and 2030, the global consumption of antimicrobials will increase by 67%, from 63,151 ± 1,560 tons to 105,596 ± 3,605 tons. Up to a third of the increase in consumption in livestock between 2010 and 2030 is imputable to shifting production practices in middle-income countries where extensive farming systems will be replaced by large-scale intensive farming operations that routinely use antimicrobials in subtherapeutic doses. For Brazil, Russia, India, China, and South Africa, the increase in antimicrobial consumption will be 99%, up to seven times the projected population growth in this group of countries. Better understanding of the consequences of the uninhibited growth in veterinary antimicrobial consumption is needed to assess its potential effects on animal and human health.
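
The headline 67% projection can be checked directly from the tonnage figures quoted in the abstract:

```python
# Verify the quoted projection: 63,151 tons (2010) to 105,596 tons (2030)
# is roughly a 67% increase. Central estimates only; uncertainty
# intervals (± tons) are ignored in this back-of-envelope check.
consumption_2010 = 63_151   # tons
consumption_2030 = 105_596  # tons
increase = (consumption_2030 - consumption_2010) / consumption_2010
print(f"{increase:.0%}")    # → 67%
```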

Journal ArticleDOI
TL;DR: The burden of CKD was much higher than expected for the level of development, whereas the disease burden in western, eastern, and central sub-Saharan Africa, east Asia, south Asia, central and eastern Europe, Australasia, and western Europe was lower than expected.

Posted Content
TL;DR: Context Encoders as mentioned in this paper is a convolutional neural network trained to generate the contents of an arbitrary image region conditioned on its surroundings, which can be used for semantic inpainting tasks, either stand-alone or as initialization for nonparametric methods.
Abstract: We present an unsupervised visual feature learning algorithm driven by context-based pixel prediction. By analogy with auto-encoders, we propose Context Encoders -- a convolutional neural network trained to generate the contents of an arbitrary image region conditioned on its surroundings. In order to succeed at this task, context encoders need both to understand the content of the entire image and to produce a plausible hypothesis for the missing part(s). When training context encoders, we have experimented with both a standard pixel-wise reconstruction loss and a reconstruction plus an adversarial loss. The latter produces much sharper results because it can better handle multiple modes in the output. We found that a context encoder learns a representation that captures not just appearance but also the semantics of visual structures. We quantitatively demonstrate the effectiveness of our learned features for CNN pre-training on classification, detection, and segmentation tasks. Furthermore, context encoders can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods.
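
The combined objective described above (pixel-wise reconstruction over the missing region plus an adversarial term) can be sketched with plain NumPy. The loss weights and the scalar discriminator score below are stand-ins for illustration, not the paper's actual settings:

```python
import numpy as np

def joint_loss(pred, target, mask, d_score, lambda_rec=0.999, lambda_adv=0.001):
    """Sketch of a reconstruction + adversarial objective.
    pred/target: image arrays; mask: 1.0 inside the missing region;
    d_score: discriminator's probability that the inpainted image is real."""
    # Masked pixel-wise L2 reconstruction loss (averaged over the hole)
    rec = np.sum(mask * (pred - target) ** 2) / max(mask.sum(), 1)
    # Non-saturating GAN-style term: encourage the generator to fool D
    adv = -np.log(d_score + 1e-12)
    return lambda_rec * rec + lambda_adv * adv

rng = np.random.default_rng(0)
target = rng.random((64, 64, 3))                 # "ground-truth" image
pred = target + 0.1                              # imperfect reconstruction
mask = np.zeros((64, 64, 3))
mask[16:48, 16:48, :] = 1.0                      # central missing region
loss = joint_loss(pred, target, mask, d_score=0.5)
```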

Journal ArticleDOI
TL;DR: The objective-response rate and the progression-free survival among patients with advanced melanoma who had not previously received treatment were significantly greater with nivolumab combined with ipilimumab than with ipILimumab monotherapy.
Abstract: Background: In a phase 1 dose-escalation study, combined inhibition of T-cell checkpoint pathways by nivolumab and ipilimumab was associated with a high rate of objective response, including complete responses, among patients with advanced melanoma. Methods: In this double-blind study involving 142 patients with metastatic melanoma who had not previously received treatment, we randomly assigned patients in a 2:1 ratio to receive ipilimumab (3 mg per kilogram of body weight) combined with either nivolumab (1 mg per kilogram) or placebo once every 3 weeks for four doses, followed by nivolumab (3 mg per kilogram) or placebo every 2 weeks until the occurrence of disease progression or unacceptable toxic effects. The primary end point was the rate of investigator-assessed, confirmed objective response among patients with BRAF V600 wild-type tumors. Results: Among patients with BRAF wild-type tumors, the rate of confirmed objective response was 61% (44 of 72 patients) in the group that received both ipilimumab and ni...

Posted Content
TL;DR: Stacked hourglass networks as mentioned in this paper were proposed for human pose estimation, where features are processed across all scales and consolidated to best capture the various spatial relationships associated with the body, and repeated bottom-up, top-down processing with intermediate supervision is critical to improving the performance of the network.
Abstract: This work introduces a novel convolutional network architecture for the task of human pose estimation. Features are processed across all scales and consolidated to best capture the various spatial relationships associated with the body. We show how repeated bottom-up, top-down processing used in conjunction with intermediate supervision is critical to improving the performance of the network. We refer to the architecture as a "stacked hourglass" network based on the successive steps of pooling and upsampling that are done to produce a final set of predictions. State-of-the-art results are achieved on the FLIC and MPII benchmarks outcompeting all recent methods.
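
The repeated bottom-up, top-down processing with per-resolution skip connections can be illustrated with a toy NumPy sketch. Identity operations stand in for the convolutional blocks a real hourglass module would use:

```python
import numpy as np

def pool2x(x):
    """2x average pooling (bottom-up step)."""
    h, w = x.shape[0] // 2, x.shape[1] // 2
    return x[:2 * h, :2 * w].reshape(h, 2, w, 2).mean(axis=(1, 3))

def up2x(x):
    """2x nearest-neighbor upsampling (top-down step)."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def hourglass(x, depth=3):
    """One hourglass stage: recurse down `depth` resolutions, then come
    back up, merging a skip connection at each resolution on the way."""
    if depth == 0:
        return x
    skip = x                                  # feature map kept at this scale
    low = hourglass(pool2x(x), depth - 1)     # process at half resolution
    return skip + up2x(low)                   # merge skip with upsampled path

feat = np.random.default_rng(1).random((64, 64))
out = hourglass(feat)
assert out.shape == feat.shape                # output matches input resolution
```

Stacking several such stages end to end, with supervision applied to the intermediate heatmaps between stages, is the "intermediate supervision" the abstract refers to.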

Proceedings Article
30 Apr 2020
TL;DR: This work presents two parameter-reduction techniques to lower memory consumption and increase the training speed of BERT, and uses a self-supervised loss that focuses on modeling inter-sentence coherence.
Abstract: Increasing model size when pretraining natural language representations often results in improved performance on downstream tasks. However, at some point further model increases become harder due to GPU/TPU memory limitations, longer training times, and unexpected model degradation. To address these problems, we present two parameter-reduction techniques to lower memory consumption and increase the training speed of BERT. Comprehensive empirical evidence shows that our proposed methods lead to models that scale much better compared to the original BERT. We also use a self-supervised loss that focuses on modeling inter-sentence coherence, and show it consistently helps downstream tasks with multi-sentence inputs. As a result, our best model establishes new state-of-the-art results on the GLUE, RACE, and SQuAD benchmarks while having fewer parameters compared to BERT-large.
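
One of the paper's two parameter-reduction techniques is a factorized embedding parameterization: instead of a V × H embedding table tied to the hidden size, use a small V × E lookup followed by an E × H projection with E ≪ H. The arithmetic below uses illustrative BERT-like sizes, not the paper's exact configuration:

```python
# Illustrative parameter count for factorized embeddings.
# Sizes are BERT-like but chosen for illustration only.
V, H, E = 30_000, 4_096, 128   # vocab size, hidden size, embedding size

naive = V * H                  # direct V x H embedding table
factorized = V * E + E * H     # V x E lookup + E x H projection

print(naive, factorized)       # 122880000 4364288
```

With these sizes the embedding parameters shrink by more than 25x, which is one reason the models "scale much better" as the hidden size grows.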

Journal ArticleDOI
TL;DR: IDSA considers adherence to these guidelines to be voluntary, with the ultimate determination regarding their application to be made by the physician in the light of each patient's individual circumstances.
Abstract: It is important to realize that guidelines cannot always account for individual variation among patients. They are not intended to supplant physician judgment with respect to particular patients or special clinical situations. IDSA considers adherence to these guidelines to be voluntary, with the ultimate determination regarding their application to be made by the physician in the light of each patient's individual circumstances.

Journal ArticleDOI
TL;DR: This method probes DNA accessibility with hyperactive Tn5 transposase, which inserts sequencing adapters into accessible regions of chromatin, which can be used to infer regions of increased accessibility, as well as to map regions of transcription‐factor binding and nucleosome position.
Abstract: This unit describes Assay for Transposase-Accessible Chromatin with high-throughput sequencing (ATAC-seq), a method for mapping chromatin accessibility genome-wide. This method probes DNA accessibility with hyperactive Tn5 transposase, which inserts sequencing adapters into accessible regions of chromatin. Sequencing reads can then be used to infer regions of increased accessibility, as well as to map regions of transcription-factor binding and nucleosome position. The method is a fast and sensitive alternative to DNase-seq for assaying chromatin accessibility genome-wide, or to MNase-seq for assaying nucleosome positions in accessible regions of the genome.

Journal ArticleDOI
TL;DR: This work demonstrates the presence of exosomal and nonexosomal subpopulations within small EVs, and proposes their differential separation by immuno-isolation using either CD63, CD81, or CD9, and provides guidelines to define subtypes of EVs for future functional studies.
Abstract: Extracellular vesicles (EVs) have become the focus of rising interest because of their numerous functions in physiology and pathology. Cells release heterogeneous vesicles of different sizes and intracellular origins, including small EVs formed inside endosomal compartments (i.e., exosomes) and EVs of various sizes budding from the plasma membrane. Specific markers for the analysis and isolation of different EV populations are missing, imposing important limitations to understanding EV functions. Here, EVs from human dendritic cells were first separated by their sedimentation speed, and then either by their behavior upon upward floatation into iodixanol gradients or by immuno-isolation. Extensive quantitative proteomic analysis allowing comparison of the isolated populations showed that several classically used exosome markers, like major histocompatibility complex, flotillin, and heat-shock 70-kDa proteins, are similarly present in all EVs. We identified proteins specifically enriched in small EVs, and define a set of five protein categories displaying different relative abundance in distinct EV populations. We demonstrate the presence of exosomal and nonexosomal subpopulations within small EVs, and propose their differential separation by immuno-isolation using either CD63, CD81, or CD9. Our work thus provides guidelines to define subtypes of EVs for future functional studies.

Posted Content
TL;DR: The task of free-form and open-ended Visual Question Answering (VQA) is proposed, given an image and a natural language question about the image, the task is to provide an accurate natural language answer.
Abstract: We propose the task of free-form and open-ended Visual Question Answering (VQA). Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and complex reasoning than a system producing generic image captions. Moreover, VQA is amenable to automatic evaluation, since many open-ended answers contain only a few words or a closed set of answers that can be provided in a multiple-choice format. We provide a dataset containing ~0.25M images, ~0.76M questions, and ~10M answers (this http URL), and discuss the information it provides. Numerous baselines and methods for VQA are provided and compared with human performance. Our VQA demo is available on CloudCV (this http URL).