Journal ArticleDOI
TL;DR: Olfactory and gustatory disorders are prevalent symptoms in European COVID-19 patients, who may not have nasal symptoms, and sudden anosmia or ageusia needs to be recognized by the international scientific community as an important symptom of COVID-19 infection.
Abstract: To investigate the occurrence of olfactory and gustatory dysfunctions in patients with laboratory-confirmed COVID-19 infection. Patients with laboratory-confirmed COVID-19 infection were recruited from 12 European hospitals. The following epidemiological and clinical outcomes were studied: age, sex, ethnicity, comorbidities, and general and otolaryngological symptoms. Patients completed olfactory and gustatory questionnaires based on the smell and taste component of the National Health and Nutrition Examination Survey, and the short version of the Questionnaire of Olfactory Disorders-Negative Statements (sQOD-NS). A total of 417 mild-to-moderate COVID-19 patients completed the study (263 females). The most prevalent general symptoms were cough, myalgia, and loss of appetite. Face pain and nasal obstruction were the most disease-related otolaryngological symptoms. 85.6% and 88.0% of patients reported olfactory and gustatory dysfunctions, respectively. There was a significant association between the two disorders (p < 0.001). Olfactory dysfunction (OD) appeared before the other symptoms in 11.8% of cases. The sQOD-NS scores were significantly lower in patients with anosmia than in normosmic or hyposmic individuals (p = 0.001). Among the 18.2% of patients without nasal obstruction or rhinorrhea, 79.7% were hyposmic or anosmic. The early olfactory recovery rate was 44.0%. Females were significantly more affected by olfactory and gustatory dysfunctions than males (p = 0.001). Olfactory and gustatory disorders are prevalent symptoms in European COVID-19 patients, who may not have nasal symptoms. Sudden anosmia or ageusia needs to be recognized by the international scientific community as an important symptom of COVID-19 infection.

2,030 citations


Journal ArticleDOI
TL;DR: Both the previous and these new guidelines specifically aim to achieve standardised uptake value harmonisation in multicentre settings.
Abstract: The purpose of these guidelines is to assist physicians in recommending, performing, interpreting and reporting the results of FDG PET/CT for oncological imaging of adult patients. PET is a quantitative imaging technique and therefore requires a common quality control (QC)/quality assurance (QA) procedure to maintain the accuracy and precision of quantitation. Repeatability and reproducibility are two essential requirements for any quantitative measurement and/or imaging biomarker. Repeatability relates to the uncertainty in obtaining the same result in the same patient when he or she is examined more than once on the same system. However, imaging biomarkers should also have adequate reproducibility, i.e. the ability to yield the same result in the same patient when that patient is examined on different systems and at different imaging sites. Adequate repeatability and reproducibility are essential for the clinical management of patients and the use of FDG PET/CT within multicentre trials. A common standardised imaging procedure will help promote the appropriate use of FDG PET/CT imaging and increase the value of publications and, therefore, their contribution to evidence-based medicine. Moreover, consistency in numerical values between platforms and institutes that acquire the data will potentially enhance the role of semiquantitative and quantitative image interpretation. Precision and accuracy are additionally important as FDG PET/CT is used to evaluate tumour response as well as for diagnosis, prognosis and staging. Therefore both the previous and these new guidelines specifically aim to achieve standardised uptake value harmonisation in multicentre settings.
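For reference, the standardised uptake value that these guidelines seek to harmonise is conventionally the measured tissue activity concentration normalised to the injected activity per unit body weight (the guidelines also discuss variants normalised to, e.g., lean body mass); a sketch of the standard body-weight definition:

```latex
\mathrm{SUV}_{\mathrm{bw}} \;=\; \frac{C_{\mathrm{tissue}}(t)\ [\mathrm{kBq/mL}]}{A_{\mathrm{injected}}\ [\mathrm{MBq}]\,/\,\mathrm{body\ weight}\ [\mathrm{kg}]}
```

Harmonisation then amounts to ensuring that the calibration, reconstruction, and region-of-interest conventions entering this ratio are comparable across scanners and sites.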

2,029 citations


Journal ArticleDOI
TL;DR: In this article, the first set of parton distribution functions (PDFs) determined with a methodology validated by a closure test is presented; the results are based on LO, NLO and NNLO QCD theory and also include electroweak corrections.
Abstract: We present NNPDF3.0, the first set of parton distribution functions (PDFs) determined with a methodology validated by a closure test. NNPDF3.0 uses a global dataset including HERA-II deep-inelastic inclusive cross-sections, the combined HERA charm data, jet production from ATLAS and CMS, vector boson rapidity and transverse momentum distributions from ATLAS, CMS and LHCb, W+c data from CMS and top quark pair production total cross sections from ATLAS and CMS. Results are based on LO, NLO and NNLO QCD theory and also include electroweak corrections. To validate our methodology, we show that PDFs determined from pseudo-data generated from a known underlying law correctly reproduce the statistical distributions expected on the basis of the assumed experimental uncertainties. This closure test ensures that our methodological uncertainties are negligible in comparison to the generic theoretical and experimental uncertainties of PDF determination. This enables us to determine with confidence PDFs at different perturbative orders and using a variety of experimental datasets ranging from HERA-only up to a global set including the latest LHC results, all using precisely the same validated methodology. We explore some of the phenomenological implications of our results for the upcoming 13 TeV Run of the LHC, in particular for Higgs production cross-sections.
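To make the closure-test idea concrete in miniature, here is a toy sketch (not the NNPDF machinery; the linear model and uncertainties are invented for illustration): generate pseudo-data from a known law with the assumed experimental uncertainties, refit with the same procedure, and check that the pulls on the fitted parameters follow a standard normal distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Known "underlying law": a straight line with fixed parameters.
a_true, b_true = 1.5, 0.3
x = np.linspace(0.0, 1.0, 25)
sigma = 0.05 * np.ones_like(x)              # assumed experimental uncertainties

pulls = []
for _ in range(2000):
    # Pseudo-data generated from the known law with the assumed uncertainties.
    y = a_true * x + b_true + sigma * rng.standard_normal(x.size)
    # Refit with the same model; 'unscaled' ties the covariance to sigma.
    coeffs, cov = np.polyfit(x, y, 1, w=1.0 / sigma, cov="unscaled")
    pulls.append((coeffs[0] - a_true) / np.sqrt(cov[0, 0]))

pulls = np.asarray(pulls)
# For a faithful methodology the pulls are ~ N(0, 1): mean near 0, std near 1.
print(f"pull mean = {pulls.mean():.3f}, pull std = {pulls.std():.3f}")
```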

2,028 citations


Journal ArticleDOI
TL;DR: The ERIC study aimed to refine a published compilation of implementation strategy terms and definitions by systematically gathering input from a wide range of stakeholders with expertise in implementation science and clinical practice to generate consensus on implementation strategies and definitions.
Abstract: Identifying, developing, and testing implementation strategies are important goals of implementation science. However, these efforts have been complicated by the use of inconsistent language and inadequate descriptions of implementation strategies in the literature. The Expert Recommendations for Implementing Change (ERIC) study aimed to refine a published compilation of implementation strategy terms and definitions by systematically gathering input from a wide range of stakeholders with expertise in implementation science and clinical practice. Purposive sampling was used to recruit a panel of experts in implementation and clinical practice who engaged in three rounds of a modified Delphi process to generate consensus on implementation strategies and definitions. The first and second rounds involved Web-based surveys soliciting comments on implementation strategy terms and definitions. After each round, iterative refinements were made based upon participant feedback. The third round involved a live polling and consensus process via a Web-based platform and conference call. Participants identified substantial concerns with 31% of the terms and/or definitions and suggested five additional strategies. Seventy-five percent of definitions from the originally published compilation of strategies were retained after voting. Ultimately, the expert panel reached consensus on a final compilation of 73 implementation strategies. This research advances the field by improving the conceptual clarity, relevance, and comprehensiveness of implementation strategies that can be used in isolation or combination in implementation research and practice. Future phases of ERIC will focus on developing conceptually distinct categories of strategies as well as ratings for each strategy’s importance and feasibility. Next, the expert panel will recommend multifaceted strategies for hypothetical yet real-world scenarios that vary by sites’ endorsement of evidence-based programs and practices and the strength of contextual supports that surround the effort.

2,028 citations


Journal ArticleDOI
TL;DR: Prevention and early detection of lung cancer, with an emphasis on lung cancer screening, are discussed, and the importance of smoking prevention and cessation is acknowledged.

2,027 citations


Book
21 Dec 2021
TL;DR: Part 2, Linear inverse problems: examples of linear inverse problems; singular value decomposition (SVD); inversion methods revisited; Fourier-based methods for specific problems; comments and concluding remarks.
Abstract: This is a graduate textbook on the principles of linear inverse problems, methods of their approximate solution, and practical application in imaging. The level of mathematical treatment is kept as low as possible to make the book suitable for a wide range of readers from different backgrounds in science and engineering. Mathematical prerequisites are first courses in analysis, geometry, linear algebra, probability theory, and Fourier analysis. The authors concentrate on presenting easily implementable and fast solution algorithms. With examples and exercises throughout, the book will provide the reader with the appropriate background for a clear understanding of the essence of inverse problems (ill-posedness and its cure) and, consequently, for an intelligent assessment of the rapidly growing literature on these problems.
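A minimal numerical sketch of the SVD-based inversion the book treats, assuming NumPy (the operator, noise level, and truncation index below are purely illustrative):

```python
import numpy as np

def truncated_svd_solve(A, y, k):
    """Regularized solution of the linear inverse problem A x = y obtained by
    keeping only the k largest singular values (truncated SVD)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Invert only the retained singular values; the discarded ones would
    # otherwise amplify noise, which is the essence of ill-posedness.
    s_inv = np.zeros_like(s)
    s_inv[:k] = 1.0 / s[:k]
    return Vt.T @ (s_inv * (U.T @ y))

# Usage on a mildly ill-conditioned toy operator.
rng = np.random.default_rng(0)
A = np.vander(np.linspace(0.0, 1.0, 20), 10, increasing=True)  # ill-conditioned
x_true = rng.standard_normal(10)
y = A @ x_true + 1e-3 * rng.standard_normal(20)
x_hat = truncated_svd_solve(A, y, k=6)
```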

2,027 citations


Journal ArticleDOI
TL;DR: The three-part survey paper aims to give a comprehensive review of real-time fault diagnosis and fault-tolerant control, with particular attention to the results reported in the last decade.
Abstract: With the continuous increase in the complexity and expense of industrial systems, there is less tolerance for performance degradation, productivity decrease, and safety hazards, which makes it essential to detect and identify any kind of potential abnormality or fault as early as possible and to implement real-time fault-tolerant operation that minimizes performance degradation and avoids dangerous situations. During the last four decades, fruitful results have been reported on fault diagnosis and fault-tolerant control methods and their applications in a variety of engineering systems. This three-part survey paper aims to give a comprehensive review of real-time fault diagnosis and fault-tolerant control, with particular attention to the results reported in the last decade. In this paper, fault diagnosis approaches and their applications are comprehensively reviewed from the model-based and signal-based perspectives.

2,026 citations


Journal ArticleDOI
29 Nov 2017-Nature
TL;DR: This work demonstrates a method for creating controlled many-body quantum matter that combines deterministically prepared, reconfigurable arrays of individually trapped cold atoms with strong, coherent interactions enabled by excitation to Rydberg states, and realizes a programmable Ising-type quantum spin model with tunable interactions and system sizes of up to 51 qubits.
Abstract: Controllable, coherent many-body systems can provide insights into the fundamental properties of quantum matter, enable the realization of new quantum phases and could ultimately lead to computational systems that outperform existing computers based on classical approaches. Here we demonstrate a method for creating controlled many-body quantum matter that combines deterministically prepared, reconfigurable arrays of individually trapped cold atoms with strong, coherent interactions enabled by excitation to Rydberg states. We realize a programmable Ising-type quantum spin model with tunable interactions and system sizes of up to 51 qubits. Within this model, we observe phase transitions into spatially ordered states that break various discrete symmetries, verify the high-fidelity preparation of these states and investigate the dynamics across the phase transition in large arrays of atoms. In particular, we observe robust many-body dynamics corresponding to persistent oscillations of the order after a rapid quantum quench that results from a sudden transition across the phase boundary. Our method provides a way of exploring many-body phenomena on a programmable quantum simulator and could enable realizations of new quantum algorithms.
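For orientation, the Ising-type spin model realised on such Rydberg-atom arrays is commonly written in the following form (standard Rydberg-array notation, which may differ from the paper's in detail):

```latex
\frac{H}{\hbar} \;=\; \sum_i \frac{\Omega_i}{2}\,\sigma_x^{(i)}
\;-\; \sum_i \Delta_i\, n_i
\;+\; \sum_{i<j} V_{ij}\, n_i n_j ,
\qquad n_i = \lvert r_i\rangle\langle r_i\rvert,\quad V_{ij}\propto 1/R_{ij}^{6},
```

where the Rabi frequency Ω and detuning Δ of the driving laser play the roles of transverse and longitudinal fields, and the van der Waals interaction V_ij between atoms excited to the Rydberg state supplies the tunable Ising coupling.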

2,026 citations


Journal ArticleDOI
TL;DR: In this article, Scolnic et al. present optical light curves, redshifts, and classifications for 365 spectroscopically confirmed Type Ia supernovae (SNe Ia) discovered by the Pan-STARRS1 (PS1) Medium Deep Survey.
Abstract: Author(s): Scolnic, DM; Jones, DO; Rest, A; Pan, YC; Chornock, R; Foley, RJ; Huber, ME; Kessler, R; Narayan, G; Riess, AG; Rodney, S; Berger, E; Brout, DJ; Challis, PJ; Drout, M; Finkbeiner, D; Lunnan, R; Kirshner, RP; Sanders, NE; Schlafly, E; Smartt, S; Stubbs, CW; Tonry, J; Wood-Vasey, WM; Foley, M; Hand, J; Johnson, E; Burgett, WS; Chambers, KC; Draper, PW; Hodapp, KW; Kaiser, N; Kudritzki, RP; Magnier, EA; Metcalfe, N; Bresolin, F; Gall, E; Kotak, R; McCrum, M; Smith, KW | Abstract: We present optical light curves, redshifts, and classifications for 365 spectroscopically confirmed Type Ia supernovae (SNe Ia) discovered by the Pan-STARRS1 (PS1) Medium Deep Survey. We detail improvements to the PS1 SN photometry, astrometry, and calibration that reduce the systematic uncertainties in the PS1 SN Ia distances. We combine the subset of 279 PS1 SNe Ia (0.03 < z < 0.68) with useful distance estimates of SNe Ia from the Sloan Digital Sky Survey (SDSS), SNLS, and various low-z and Hubble Space Telescope samples to form the largest combined sample of SNe Ia, consisting of a total of 1048 SNe Ia in the range of 0.01 < z < 2.3, which we call the Pantheon Sample. When combining Planck 2015 cosmic microwave background (CMB) measurements with the Pantheon SN sample, we find Ωm = 0.307 ± 0.012 and w = -1.026 ± 0.041 for the wCDM model. When the SN and CMB constraints are combined with constraints from BAO and local H0 measurements, the analysis yields the most precise measurement of dark energy to date: w0 = -1.007 ± 0.089 and wa = -0.222 ± 0.407 for the w0waCDM model. Tension with a cosmological constant previously seen in an analysis of PS1 and low-z SNe has diminished after an increase of 2× in the statistics of the PS1 sample, improved calibration and photometry, and stricter light-curve quality cuts. We find that the systematic uncertainties in our measurements of dark energy are almost as large as the statistical uncertainties, primarily due to limitations of modeling the low-redshift sample. This must be addressed for future progress in using SNe Ia to measure dark energy.
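As background, SALT2-based analyses of this kind standardise each supernova with the Tripp relation for the distance modulus (the full Pantheon analysis adds further corrections, e.g. for distance biases and host-galaxy mass, on top of this basic form):

```latex
\mu \;=\; m_B \;-\; M \;+\; \alpha\, x_1 \;-\; \beta\, c ,
```

where m_B is the apparent peak B-band magnitude, x_1 the light-curve stretch, c the colour, and M, α, β are nuisance parameters fitted together with the cosmological parameters.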

2,025 citations


Posted Content
Yulun Zhang, Kunpeng Li, Kai Li, Lichen Wang, Bineng Zhong, Yun Fu
TL;DR: This work proposes a residual in residual (RIR) structure to form a very deep network, which consists of several residual groups with long skip connections, and proposes a channel attention mechanism to adaptively rescale channel-wise features by considering interdependencies among channels.
Abstract: Convolutional neural network (CNN) depth is of crucial importance for image super-resolution (SR). However, we observe that deeper networks for image SR are more difficult to train. The low-resolution inputs and features contain abundant low-frequency information, which is treated equally across channels, hence hindering the representational ability of CNNs. To solve these problems, we propose the very deep residual channel attention networks (RCAN). Specifically, we propose a residual in residual (RIR) structure to form a very deep network, which consists of several residual groups with long skip connections. Each residual group contains some residual blocks with short skip connections. Meanwhile, RIR allows abundant low-frequency information to be bypassed through multiple skip connections, making the main network focus on learning high-frequency information. Furthermore, we propose a channel attention mechanism to adaptively rescale channel-wise features by considering interdependencies among channels. Extensive experiments show that our RCAN achieves better accuracy and visual improvements against state-of-the-art methods.
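A minimal PyTorch sketch of the channel attention and short-skip residual block described above (illustrative, not the authors' released code; layer widths and the reduction ratio are typical choices):

```python
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention: global pooling followed by
    a small bottleneck that produces per-channel rescaling weights."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)              # B x C x 1 x 1 channel statistics
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),                                # per-channel weights in (0, 1)
        )

    def forward(self, x):
        return x * self.fc(self.pool(x))                 # adaptively rescale channels

class ResidualChannelAttentionBlock(nn.Module):
    """Residual block with a short skip connection and channel attention inside."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            ChannelAttention(channels),
        )

    def forward(self, x):
        return x + self.body(x)                          # short skip connection
```

In the full RCAN design, groups of such blocks are wrapped in long skip connections (the residual-in-residual structure), which is what lets low-frequency content bypass most of the network.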

2,025 citations


Journal ArticleDOI
TL;DR: It is hoped that the definitions resulting from this comprehensive, transparent, and broad-based participatory process will result in standardized terminology that is widely supported and adopted, thereby advancing future research, interventions, policies, and practices related to sedentary behaviors.
Abstract: Background: The prominence of sedentary behavior research in health science has grown rapidly. With this growth there is increasing urgency for clear, common and accepted terminology and definitions. Such standardization is difficult to achieve, especially across multi-disciplinary researchers, practitioners, and industries. The Sedentary Behavior Research Network (SBRN) undertook a Terminology Consensus Project to address this need. Method: First, a literature review was completed to identify key terms in sedentary behavior research. These key terms were then reviewed and modified by a Steering Committee formed by SBRN. Next, SBRN members were invited to contribute to this project and interested participants reviewed and provided feedback on the proposed list of terms and draft definitions through an online survey. Finally, a conceptual model and consensus definitions (including caveats and examples for all age groups and functional abilities) were finalized based on the feedback received from the 87 SBRN member participants who responded to the original invitation and survey. Results: Consensus definitions for the terms physical inactivity, stationary behavior, sedentary behavior, standing, screen time, non-screen-based sedentary time, sitting, reclining, lying, sedentary behavior pattern, as well as how the terms bouts, breaks, and interruptions should be used in this context are provided. Conclusion: It is hoped that the definitions resulting from this comprehensive, transparent, and broad-based participatory process will result in standardized terminology that is widely supported and adopted, thereby advancing future research, interventions, policies, and practices related to sedentary behaviors.

Journal ArticleDOI
TL;DR: Author(s): Saran, Rajiv; Robinson, Bruce; Abbott, Kevin C; Agodoa, Lawrence YC; Ayanian, John; Balkrishnan, Rajesh; Bragg-Gresham, Jennifer; Cao, Jie; Chen, Joline LT; Cope, Elizabeth; Dharmarajan, Sai; Dietrich, Xue; Eckard, Ashley; Eggers, Paul W; Gaber, Charles; Gillen, Daniel;

Journal ArticleDOI
TL;DR: In this paper, the authors provide a detailed overview and historical perspective on state-of-the-art solutions, and elaborate on the fundamental differences from other technologies, the most important open research issues to tackle, and the reasons why the use of reconfigurable intelligent surfaces necessitates rethinking the communication-theoretic models currently employed in wireless networks.
Abstract: The future of mobile communications looks exciting with the potential new use cases and challenging requirements of future 6th generation (6G) and beyond wireless networks. Since the beginning of the modern era of wireless communications, the propagation medium has been perceived as a randomly behaving entity between the transmitter and the receiver, which degrades the quality of the received signal due to the uncontrollable interactions of the transmitted radio waves with the surrounding objects. The recent advent of reconfigurable intelligent surfaces in wireless communications, by contrast, enables network operators to control the scattering, reflection, and refraction characteristics of the radio waves, thereby overcoming the negative effects of natural wireless propagation. Recent results have revealed that reconfigurable intelligent surfaces can effectively control the wavefront, e.g., the phase, amplitude, frequency, and even polarization, of the impinging signals without the need of complex decoding, encoding, and radio frequency processing operations. Motivated by the potential of this emerging technology, the present article aims to provide the reader with a detailed overview and historical perspective on state-of-the-art solutions, and to elaborate on the fundamental differences from other technologies, the most important open research issues to tackle, and the reasons why the use of reconfigurable intelligent surfaces necessitates rethinking the communication-theoretic models currently employed in wireless networks. This article also explores theoretical performance limits of reconfigurable intelligent surface-assisted communication systems using mathematical techniques and elaborates on the potential use cases of intelligent surfaces in 6G and beyond wireless networks.
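As a concrete illustration of why the controllable phases matter, a commonly used narrowband model (not necessarily this article's notation) writes the signal received through an N-element reconfigurable intelligent surface as

```latex
y \;=\; \Bigl( h_d \;+\; \sum_{n=1}^{N} g_n\, e^{j\theta_n}\, h_n \Bigr)\, s \;+\; w ,
```

where h_d is the direct transmitter-receiver channel, h_n and g_n are the channels to and from the n-th surface element, and θ_n is its adjustable phase shift. Choosing θ_n to co-phase every reflected path with the direct one turns the scattering environment from a source of random fading into a coherent beamforming gain, which is the basic mechanism behind the performance limits analysed in such works.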

Journal ArticleDOI
TL;DR: This publication describes how to perform a meta-analysis with the freely available statistical software environment R, using a working example taken from the field of mental health.
Abstract: Objective Meta-analysis is of fundamental importance for obtaining an unbiased assessment of the available evidence. In general, the use of meta-analysis has been increasing over the last three decades, with mental health as a major research topic. It is therefore essential to understand its methodology well and to interpret its results correctly. In this publication, we describe how to perform a meta-analysis with the freely available statistical software environment R, using a working example taken from the field of mental health. Methods The R package meta is used to conduct a standard meta-analysis. Sensitivity analyses for missing binary outcome data and potential selection bias are conducted with the R package metasens. All essential R commands are provided and clearly described to conduct and report analyses. Results The working example considers a binary outcome: we show how to conduct a fixed effect and random effects meta-analysis and subgroup analysis, produce forest and funnel plots, and test and adjust for funnel plot asymmetry. All these steps work similarly for other outcome types. Conclusions R represents a powerful and flexible tool to conduct meta-analyses. This publication gives a brief glimpse into the topic and provides directions to more advanced meta-analysis methods available in R.
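The paper's worked example uses the R packages meta and metasens; as a language-neutral illustration of the underlying fixed effect model, here is a plain-Python sketch of inverse-variance pooling (the study effects and variances below are made up for the example):

```python
import numpy as np

def fixed_effect_meta(effects, variances):
    """Inverse-variance fixed effect pooling of per-study effect estimates."""
    effects = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)   # inverse-variance weights
    pooled = np.sum(w * effects) / np.sum(w)       # pooled effect estimate
    se = np.sqrt(1.0 / np.sum(w))                  # standard error of the pooled estimate
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

# Hypothetical log odds ratios and variances from three studies.
pooled, ci_low, ci_high = fixed_effect_meta([0.30, 0.10, 0.25], [0.05, 0.02, 0.04])
print(f"pooled log OR = {pooled:.3f}, 95% CI ({ci_low:.3f}, {ci_high:.3f})")
```

A random effects analysis additionally estimates the between-study variance (e.g. with the DerSimonian-Laird method) and adds it to each study's variance before weighting.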

Journal ArticleDOI
TL;DR: This work reviews studies that explored the association between an abnormal expansion of Proteobacteria and a compromised ability to maintain a balanced gut microbial community, and proposes that an increased prevalence of Proteobacteria is a potential diagnostic signature of dysbiosis and risk of disease.

Posted Content
TL;DR: This paper showed that even in physical-world scenarios, machine learning systems are vulnerable to adversarial examples; the authors demonstrate this by feeding adversarial images obtained from a cell-phone camera to an ImageNet Inception classifier and measuring the classification accuracy of the system.
Abstract: Most existing machine learning classifiers are highly vulnerable to adversarial examples. An adversarial example is a sample of input data which has been modified very slightly in a way that is intended to cause a machine learning classifier to misclassify it. In many cases, these modifications can be so subtle that a human observer does not even notice them, yet the classifier still makes a mistake. Adversarial examples pose security concerns because they could be used to perform an attack on machine learning systems, even if the adversary has no access to the underlying model. Up to now, all previous work has assumed a threat model in which the adversary can feed data directly into the machine learning classifier. This is not always the case for systems operating in the physical world, for example those using signals from cameras and other sensors as input. This paper shows that even in such physical-world scenarios, machine learning systems are vulnerable to adversarial examples. We demonstrate this by feeding adversarial images obtained from a cell-phone camera to an ImageNet Inception classifier and measuring the classification accuracy of the system. We find that a large fraction of adversarial examples are classified incorrectly even when perceived through the camera.
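For context, such perturbations are typically crafted by taking a gradient step that increases the classifier's loss; a minimal PyTorch sketch of the fast gradient sign method, one of the attack families studied in this line of work (function and parameter names are illustrative):

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps):
    """Craft an adversarial example from input batch x (pixels in [0, 1]) and
    labels y by a single signed-gradient step of size eps."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clip to the valid pixel range.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
```

The physical-world experiments described above add a further step: the crafted images are printed or displayed, photographed with a phone camera, and only then fed to the classifier.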

Proceedings ArticleDOI
18 Jun 2018
TL;DR: The Siamese region proposal network (Siamese-RPN) is proposed, which is trained end-to-end offline with large-scale image pairs for visual object tracking and consists of a Siamese subnetwork for feature extraction and a region proposal subnetwork comprising a classification branch and a regression branch.
Abstract: Visual object tracking has been a fundamental topic in recent years, and many deep learning based trackers have achieved state-of-the-art performance on multiple benchmarks. However, most of these trackers can hardly achieve top performance at real-time speed. In this paper, we propose the Siamese region proposal network (Siamese-RPN), which is trained end-to-end offline with large-scale image pairs. Specifically, it consists of a Siamese subnetwork for feature extraction and a region proposal subnetwork including a classification branch and a regression branch. In the inference phase, the proposed framework is formulated as a local one-shot detection task. We can pre-compute the template branch of the Siamese subnetwork and formulate the correlation layers as trivial convolution layers to perform online tracking. Benefiting from the proposal refinement, traditional multi-scale testing and online fine-tuning can be discarded. The Siamese-RPN runs at 160 FPS while achieving leading performance in the VOT2015, VOT2016 and VOT2017 real-time challenges.
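The step of formulating the correlation layers as trivial convolution layers can be sketched as follows (illustrative PyTorch; the actual Siamese-RPN adds per-anchor classification and regression heads on top of this correlation):

```python
import torch
import torch.nn.functional as F

def cross_correlation(search_feat, template_feat):
    """Correlate a template feature map against a search-region feature map by
    treating the template as a convolution kernel, as in Siamese trackers.
    search_feat: 1 x C x Hs x Ws, template_feat: 1 x C x Ht x Wt."""
    # The template acts as a single kernel; the output is a response map whose
    # peak indicates the most likely target location in the search region.
    return F.conv2d(search_feat, template_feat)   # 1 x 1 x (Hs-Ht+1) x (Ws-Wt+1)

# Toy usage with random features standing in for the Siamese backbone outputs.
search_feat = torch.randn(1, 256, 24, 24)
template_feat = torch.randn(1, 256, 6, 6)
response = cross_correlation(search_feat, template_feat)
```

Because the template branch depends only on the first frame, it can be pre-computed once, which is what makes the per-frame tracking step a cheap one-shot detection.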

Journal ArticleDOI
22 May 2020-BMJ
TL;DR: Age and comorbidities were found to be strong predictors of hospital admission and, to a lesser extent, of critical illness and mortality in people with coronavirus disease 2019 in the United States; however, impaired oxygenation on admission and markers of inflammation were most strongly associated with critical illness and mortality.
Abstract: Objective To describe outcomes of people admitted to hospital with coronavirus disease 2019 (covid-19) in the United States, and the clinical and laboratory characteristics associated with severity of illness. Design Prospective cohort study. Setting Single academic medical center in New York City and Long Island. Participants 5279 patients with laboratory confirmed severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection between 1 March 2020 and 8 April 2020. The final date of follow up was 5 May 2020. Main outcome measures Outcomes were admission to hospital, critical illness (intensive care, mechanical ventilation, discharge to hospice care, or death), and discharge to hospice care or death. Predictors included patient characteristics, medical history, vital signs, and laboratory results. Multivariable logistic regression was conducted to identify risk factors for adverse outcomes, and competing risk survival analysis for mortality. Results Of 11 544 people tested for SARS-CoV-2, 5566 (48.2%) were positive. After exclusions, 5279 were included. 2741 of these 5279 (51.9%) were admitted to hospital, of whom 1904 (69.5%) were discharged alive without hospice care and 665 (24.3%) were discharged to hospice care or died. Of 647 (23.6%) patients requiring mechanical ventilation, 391 (60.4%) died and 170 (26.2%) were extubated or discharged. The strongest risk for hospital admission was associated with age, with an odds ratio of >2 for all age groups older than 44 years and 37.9 (95% confidence interval 26.1 to 56.0) for ages 75 years and older. Other risks were heart failure (4.4, 2.6 to 8.0), male sex (2.8, 2.4 to 3.2), chronic kidney disease (2.6, 1.9 to 3.6), and any increase in body mass index (BMI) (eg, for BMI >40: 2.5, 1.8 to 3.4). The strongest risks for critical illness besides age were associated with heart failure (1.9, 1.4 to 2.5), BMI >40 (1.5, 1.0 to 2.2), and male sex (1.5, 1.3 to 1.8). Low admission oxygen saturation, a troponin level >1 (4.8, 2.1 to 10.9), a C reactive protein level >200 (5.1, 2.8 to 9.2), and a D-dimer level >2500 (3.9, 2.6 to 6.0) were, however, more strongly associated with critical illness than age or comorbidities. Risk of critical illness decreased significantly over the study period. Similar associations were found for mortality alone. Conclusions Age and comorbidities were found to be strong predictors of hospital admission and, to a lesser extent, of critical illness and mortality in people with covid-19; however, impairment of oxygen on admission and markers of inflammation were most strongly associated with critical illness and mortality. Outcomes seem to be improving over time, potentially suggesting improvements in care.

Journal ArticleDOI
TL;DR: At a median of 10 years, prostate-cancer-specific mortality was low irrespective of the treatment assigned, with no significant difference among treatments.
Abstract: BACKGROUND The comparative effectiveness of treatments for prostate cancer that is detected by prostate-specific antigen (PSA) testing remains uncertain. METHODS We compared active monitoring, radical prostatectomy, and external-beam radiotherapy for the treatment of clinically localized prostate cancer. Between 1999 and 2009, a total of 82,429 men 50 to 69 years of age received a PSA test; 2664 received a diagnosis of localized prostate cancer, and 1643 agreed to undergo randomization to active monitoring (545 men), surgery (553), or radiotherapy (545). The primary outcome was prostate-cancer mortality at a median of 10 years of follow-up. Secondary outcomes included the rates of disease progression, metastases, and all-cause deaths. RESULTS There were 17 prostate-cancer–specific deaths overall: 8 in the active-monitoring group (1.5 deaths per 1000 person-years; 95% confidence interval [CI], 0.7 to 3.0), 5 in the surgery group (0.9 per 1000 person-years; 95% CI, 0.4 to 2.2), and 4 in the radiotherapy group (0.7 per 1000 person-years; 95% CI, 0.3 to 2.0); the difference among the groups was not significant (P=0.48 for the overall comparison). In addition, no significant difference was seen among the groups in the number of deaths from any cause (169 deaths overall; P=0.87 for the comparison among the three groups). Metastases developed in more men in the active-monitoring group (33 men; 6.3 events per 1000 person-years; 95% CI, 4.5 to 8.8) than in the surgery group (13 men; 2.4 per 1000 person-years; 95% CI, 1.4 to 4.2) or the radiotherapy group (16 men; 3.0 per 1000 person-years; 95% CI, 1.9 to 4.9) (P=0.004 for the overall comparison). Higher rates of disease progression were seen in the active-monitoring group (112 men; 22.9 events per 1000 person-years; 95% CI, 19.0 to 27.5) than in the surgery group (46 men; 8.9 events per 1000 person-years; 95% CI, 6.7 to 11.9) or the radiotherapy group (46 men; 9.0 events per 1000 person-years; 95% CI, 6.7 to 12.0) (P<0.001 for the overall comparison). CONCLUSIONS At a median of 10 years, prostate-cancer–specific mortality was low irrespective of the treatment assigned, with no significant difference among treatments. Surgery and radiotherapy were associated with lower incidences of disease progression and metastases than was active monitoring.

Journal ArticleDOI
05 Jun 2015-Science
TL;DR: Examination of the news that millions of Facebook users' peers shared, what information these users were presented with, and what they ultimately consumed found that friends shared substantially less cross-cutting news from sources aligned with an opposing ideology.
Abstract: Exposure to news, opinion and civic information increasingly occurs through social media. How do these online networks influence exposure to perspectives that cut across ideological lines? Using de-identified data, we examined how 10.1 million U.S. Facebook users interact with socially shared news. We directly measured ideological homophily in friend networks, and examined the extent to which heterogeneous friends could potentially expose individuals to cross-cutting content. We then quantified the extent to which individuals encounter comparatively more or less diverse content while interacting via Facebook’s algorithmically ranked News Feed, and further studied users’ choices to click through to ideologically discordant content. Compared to algorithmic ranking, individuals’ choices about what to consume had a stronger effect limiting exposure to cross-cutting content.

Journal ArticleDOI
TL;DR: The International League Against Epilepsy presents a revised operational classification of seizure types to recognize that some seizure types can have either a focal or generalized onset, to allow classification when the onset is unobserved, to include some missing seizure types, and to adopt more transparent names.
Abstract: The International League Against Epilepsy (ILAE) presents a revised operational classification of seizure types. The purpose of such a revision is to recognize that some seizure types can have either a focal or generalized onset, to allow classification when the onset is unobserved, to include some missing seizure types, and to adopt more transparent names. Because current knowledge is insufficient to form a scientifically based classification, the 2017 Classification is operational (practical) and based on the 1981 Classification, extended in 2010. Changes include the following: (1) "partial" becomes "focal"; (2) awareness is used as a classifier of focal seizures; (3) the terms dyscognitive, simple partial, complex partial, psychic, and secondarily generalized are eliminated; (4) new focal seizure types include automatisms, behavior arrest, hyperkinetic, autonomic, cognitive, and emotional; (5) atonic, clonic, epileptic spasms, myoclonic, and tonic seizures can be of either focal or generalized onset; (6) focal to bilateral tonic-clonic seizure replaces secondarily generalized seizure; (7) new generalized seizure types are absence with eyelid myoclonia, myoclonic absence, myoclonic-atonic, myoclonic-tonic-clonic; and (8) seizures of unknown onset may have features that can still be classified. The new classification does not represent a fundamental change, but allows greater flexibility and transparency in naming seizure types.

Journal ArticleDOI
TL;DR: In this article, a review of recent progress in cognitive science suggests that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn and how they learn it.
Abstract: Recent progress in artificial intelligence has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats that of humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn and how they learn it. Specifically, we argue that these machines should (1) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (2) ground learning in intuitive theories of physics and psychology to support and enrich the knowledge that is learned; and (3) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes toward these goals that can combine the strengths of recent neural network advances with more structured cognitive models.

Posted Content
Ziyu Wang, Tom Schaul, Matteo Hessel, Hado van Hasselt, Marc Lanctot, Nando de Freitas
TL;DR: This paper presents a new neural network architecture for model-free reinforcement learning that leads to better policy evaluation in the presence of many similar-valued actions and enables the RL agent to outperform the state-of-the-art on the Atari 2600 domain.
Abstract: In recent years there have been many successes of using deep representations in reinforcement learning. Still, many of these applications use conventional architectures, such as convolutional networks, LSTMs, or auto-encoders. In this paper, we present a new neural network architecture for model-free reinforcement learning. Our dueling network represents two separate estimators: one for the state value function and one for the state-dependent action advantage function. The main benefit of this factoring is to generalize learning across actions without imposing any change to the underlying reinforcement learning algorithm. Our results show that this architecture leads to better policy evaluation in the presence of many similar-valued actions. Moreover, the dueling architecture enables our RL agent to outperform the state-of-the-art on the Atari 2600 domain.
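A minimal PyTorch sketch of the dueling head described above (layer sizes are illustrative); the mean-advantage subtraction is what keeps the value and advantage streams identifiable when they are recombined into Q-values:

```python
import torch
import torch.nn as nn

class DuelingHead(nn.Module):
    """Dueling network head: separate state-value and action-advantage streams,
    recombined into Q-values without changing the underlying RL algorithm."""
    def __init__(self, feature_dim, num_actions, hidden=256):
        super().__init__()
        self.value = nn.Sequential(nn.Linear(feature_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, 1))
        self.advantage = nn.Sequential(nn.Linear(feature_dim, hidden), nn.ReLU(),
                                       nn.Linear(hidden, num_actions))

    def forward(self, features):
        v = self.value(features)                        # B x 1 state values
        a = self.advantage(features)                    # B x num_actions advantages
        # Subtracting the mean advantage removes the ambiguity between V and A.
        return v + a - a.mean(dim=1, keepdim=True)      # B x num_actions Q-values
```

In the full agent this head sits on top of the usual convolutional feature extractor, so what is learned about the state's value is shared across all actions.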

Journal ArticleDOI
TL;DR: Six cycles of docetaxel at the beginning of ADT for metastatic prostate cancer resulted in significantly longer overall survival than that with ADT alone.
Abstract: BACKGROUND Androgen-deprivation therapy (ADT) has been the backbone of treatment for metastatic prostate cancer since the 1940s. We assessed whether concomitant treatment with ADT plus docetaxel would result in longer overall survival than that with ADT alone. METHODS We assigned men with metastatic, hormone-sensitive prostate cancer to receive either ADT plus docetaxel (at a dose of 75 mg per square meter of body-surface area every 3 weeks for six cycles) or ADT alone. The primary objective was to test the hypothesis that the median overall survival would be 33.3% longer among patients receiving docetaxel added to ADT early during therapy than among patients receiving ADT alone. RESULTS A total of 790 patients (median age, 63 years) underwent randomization. After a median follow-up of 28.9 months, the median overall survival was 13.6 months longer with ADT plus docetaxel (combination therapy) than with ADT alone (57.6 months vs. 44.0 months; hazard ratio for death in the combination group, 0.61; 95% confidence interval [CI], 0.47 to 0.80; P<0.001). The median time to biochemical, symptomatic, or radiographic progression was 20.2 months in the combination group, as compared with 11.7 months in the ADT-alone group (hazard ratio, 0.61; 95% CI, 0.51 to 0.72; P<0.001). The rate of a prostate-specific antigen level of less than 0.2 ng per milliliter at 12 months was 27.7% in the combination group versus 16.8% in the ADT-alone group (P<0.001). In the combination group, the rate of grade 3 or 4 febrile neutropenia was 6.2%, the rate of grade 3 or 4 infection with neutropenia was 2.3%, and the rate of grade 3 sensory neuropathy and of grade 3 motor neuropathy was 0.5%. CONCLUSIONS Six cycles of docetaxel at the beginning of ADT for metastatic prostate cancer resulted in significantly longer overall survival than that with ADT alone. (Funded by the National Cancer Institute and others; ClinicalTrials.gov number, NCT00309985.)

Proceedings ArticleDOI
14 Jun 2020
TL;DR: In this paper, the authors propose to redesign the generator normalization, revisit progressive growing, and regularize the generator to encourage good conditioning in the mapping from latent codes to images.
Abstract: The style-based GAN architecture (StyleGAN) yields state-of-the-art results in data-driven unconditional generative image modeling. We expose and analyze several of its characteristic artifacts, and propose changes in both model architecture and training methods to address them. In particular, we redesign the generator normalization, revisit progressive growing, and regularize the generator to encourage good conditioning in the mapping from latent codes to images. In addition to improving image quality, this path length regularizer yields the additional benefit that the generator becomes significantly easier to invert. This makes it possible to reliably attribute a generated image to a particular network. We furthermore visualize how well the generator utilizes its output resolution, and identify a capacity problem, motivating us to train larger models for additional quality improvements. Overall, our improved model redefines the state of the art in unconditional image modeling, both in terms of existing distribution quality metrics as well as perceived image quality.
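The path length regularizer referred to above penalizes variation in how strongly the generator's output responds to perturbations of its latent input; a commonly quoted form (paraphrased; a is a running average of the norm and y a random image-space direction) is

```latex
\mathcal{L}_{\mathrm{pl}} \;=\; \mathbb{E}_{w,\; y \sim \mathcal{N}(0, I)}
\Bigl( \bigl\lVert J_w^{\mathsf T} y \bigr\rVert_2 - a \Bigr)^{2},
\qquad J_w = \frac{\partial g(w)}{\partial w},
```

encouraging a fixed-size step in latent space to produce a fixed-magnitude change in the image regardless of w or the direction of the change.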


Journal ArticleDOI
TL;DR: Understanding of the pathogenic mechanisms and clinical features of NAFLD is driving progress in therapeutic strategies now in clinical trials; the emerging targets for drug development, which involve either single agents or combination therapies intended to arrest or reverse disease progression, are discussed.
Abstract: There has been a rise in the prevalence of nonalcoholic fatty liver disease (NAFLD), paralleling a worldwide increase in diabetes and metabolic syndrome. NAFLD, a continuum of liver abnormalities from nonalcoholic fatty liver (NAFL) to nonalcoholic steatohepatitis (NASH), has a variable course but can lead to cirrhosis and liver cancer. Here we review the pathogenic and clinical features of NAFLD, its major comorbidities, clinical progression and risk of complications and in vitro and animal models of NAFLD enabling refinement of therapeutic targets that can accelerate drug development. We also discuss evolving principles of clinical trial design to evaluate drug efficacy and the emerging targets for drug development that involve either single agents or combination therapies intended to arrest or reverse disease progression.

Journal ArticleDOI
25 Oct 2019-Science
TL;DR: It is suggested that the choice of convenient, seemingly effective proxies for ground truth can be an important source of algorithmic bias in many contexts.
Abstract: Health systems rely on commercial prediction algorithms to identify and help patients with complex health needs. We show that a widely used algorithm, typical of this industry-wide approach and affecting millions of patients, exhibits significant racial bias: At a given risk score, Black patients are considerably sicker than White patients, as evidenced by signs of uncontrolled illnesses. Remedying this disparity would increase the percentage of Black patients receiving additional help from 17.7 to 46.5%. The bias arises because the algorithm predicts health care costs rather than illness, but unequal access to care means that we spend less money caring for Black patients than for White patients. Thus, despite health care cost appearing to be an effective proxy for health by some measures of predictive accuracy, large racial biases arise. We suggest that the choice of convenient, seemingly effective proxies for ground truth can be an important source of algorithmic bias in many contexts.

Journal ArticleDOI
TL;DR: Consistent evidence from numerous and multiple different types of clinical and genetic studies unequivocally establishes that LDL causes ASCVD.
Abstract: Aims To appraise the clinical and genetic evidence that low-density lipoproteins (LDLs) cause atherosclerotic cardiovascular disease (ASCVD).

Journal ArticleDOI
27 Mar 2020-JAMA
TL;DR: In this preliminary uncontrolled case series of 5 critically ill patients with COVID-19 and ARDS, administration of convalescent plasma containing neutralizing antibody was followed by improvement in their clinical status, and these observations require evaluation in clinical trials.
Abstract: Importance Coronavirus disease 2019 (COVID-19) is a pandemic with no specific therapeutic agents and substantial mortality. It is critical to find new treatments. Objective To determine whether convalescent plasma transfusion may be beneficial in the treatment of critically ill patients with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection. Design, Setting, and Participants Case series of 5 critically ill patients with laboratory-confirmed COVID-19 and acute respiratory distress syndrome (ARDS) who met the following criteria: severe pneumonia with rapid progression and continuously high viral load despite antiviral treatment; Pao2/Fio2 less than 300; and receiving mechanical ventilation. Exposures Patients received transfusion with convalescent plasma with a SARS-CoV-2–specific antibody (IgG) binding titer greater than 1:1000 (end point dilution titer, by enzyme-linked immunosorbent assay [ELISA]) and a neutralization titer greater than 40 (end point dilution titer) that had been obtained from 5 patients who recovered from COVID-19. Convalescent plasma was administered between 10 and 22 days after admission. Main Outcomes and Measures Changes of body temperature, Sequential Organ Failure Assessment (SOFA) score (range 0-24, with higher scores indicating more severe illness), Pao2/Fio2, viral load, serum antibody titer, routine blood biochemical index, ARDS, and ventilatory and extracorporeal membrane oxygenation (ECMO) supports before and after convalescent plasma transfusion. Results All 5 patients (age range, 36-65 years; 2 women) were receiving mechanical ventilation at the time of treatment and all had received antiviral agents and methylprednisolone. Following plasma transfusion, body temperature normalized within 3 days in 4 of 5 patients, the SOFA score decreased, and Pao2/Fio2 increased within 12 days (range, 172-276 before and 284-366 after). Viral loads also decreased and became negative within 12 days after the transfusion, and SARS-CoV-2–specific ELISA and neutralizing antibody titers increased following the transfusion (range, 40-60 before and 80-320 on day 7). ARDS resolved in 4 patients at 12 days after transfusion, and 3 patients were weaned from mechanical ventilation within 2 weeks of treatment. Of the 5 patients, 3 have been discharged from the hospital (length of stay: 53, 51, and 55 days), and 2 are in stable condition at 37 days after transfusion. Conclusions and Relevance In this preliminary uncontrolled case series of 5 critically ill patients with COVID-19 and ARDS, administration of convalescent plasma containing neutralizing antibody was followed by improvement in their clinical status. The limited sample size and study design preclude a definitive statement about the potential effectiveness of this treatment, and these observations require evaluation in clinical trials.