scispace - formally typeset

Journal ArticleDOI
TL;DR: Considering the evidence-based literature review, the National Osteoporosis Foundation recommends lifestyle choices that promote maximal bone health from childhood through young to late adolescence and outlines a research agenda to address current gaps in knowledge.
Abstract: Lifestyle choices influence 20–40 % of adult peak bone mass. Therefore, optimization of lifestyle factors known to influence peak bone mass and strength is an important strategy aimed at reducing risk of osteoporosis or low bone mass later in life. The National Osteoporosis Foundation has issued this scientific statement to provide evidence-based guidance and a national implementation strategy for the purpose of helping individuals achieve maximal peak bone mass early in life. In this scientific statement, we (1) report the results of an evidence-based review of the literature since 2000 on factors that influence achieving the full genetic potential for skeletal mass; (2) recommend lifestyle choices that promote maximal bone health throughout the lifespan; (3) outline a research agenda to address current gaps; and (4) identify implementation strategies. We conducted a systematic review of the role of individual nutrients, food patterns, special issues, contraceptives, and physical activity on bone mass and strength development in youth. An evidence grading system was applied to describe the strength of available evidence on these individual modifiable lifestyle factors that may (or may not) influence the development of peak bone mass (Table 1). A summary of the grades for each of these factors is given below. We describe the underpinning biology of these relationships as well as other factors for which a systematic review approach was not possible. Articles published since 2000, all of which followed the report by Heaney et al. [1] published in that year, were considered for this scientific statement. This current review is a systematic update of the previous review conducted by the National Osteoporosis Foundation [1]. Considering the evidence-based literature review, we recommend lifestyle choices that promote maximal bone health from childhood through young to late adolescence and outline a research agenda to address current gaps in knowledge. 
The best evidence (grade A) is available for positive effects of calcium intake and physical activity, especially during the late childhood and peripubertal years—a critical period for bone accretion. Good evidence is also available for a role of vitamin D and dairy consumption and a detriment of DMPA injections. However, more rigorous trial data on many other lifestyle choices are needed and this need is outlined in our research agenda. Implementation strategies for lifestyle modifications to promote development of peak bone mass and strength within one’s genetic potential require a multisectored (i.e., family, schools, healthcare systems) approach.

759 citations


Posted Content
TL;DR: In this article, the authors propose to explicitly learn to extract image representations that are partitioned into two subspaces: one component which is private to each domain and one which is shared across domains.
Abstract: The cost of large scale data collection and annotation often makes the application of machine learning algorithms to new tasks or datasets prohibitively expensive. One approach circumventing this cost is training models on synthetic data where annotations are provided automatically. Despite their appeal, such models often fail to generalize from synthetic to real images, necessitating domain adaptation algorithms to manipulate these models before they can be successfully applied. Existing approaches focus either on mapping representations from one domain to the other, or on learning to extract features that are invariant to the domain from which they were extracted. However, by focusing only on creating a mapping or shared representation between the two domains, they ignore the individual characteristics of each domain. We suggest that explicitly modeling what is unique to each domain can improve a model's ability to extract domain-invariant features. Inspired by work on private-shared component analysis, we explicitly learn to extract image representations that are partitioned into two subspaces: one component which is private to each domain and one which is shared across domains. Our model is trained not only to perform the task we care about in the source domain, but also to use the partitioned representation to reconstruct the images from both domains. Our novel architecture results in a model that outperforms the state-of-the-art on a range of unsupervised domain adaptation scenarios and additionally produces visualizations of the private and shared representations enabling interpretation of the domain adaptation process.
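The partitioned private/shared representation described above is typically encouraged by a soft orthogonality penalty between the two subspaces. A minimal sketch of one plausible such term (the function name `difference_loss` and the exact form are our illustrative choice, not necessarily the authors' implementation):

```python
import numpy as np

def difference_loss(H_shared, H_private):
    """Soft orthogonality penalty between shared and private
    representations: the squared Frobenius norm of H_shared^T @ H_private.
    It is zero exactly when every shared feature is orthogonal to every
    private feature over the batch.
    H_shared, H_private: (batch, dim) activation matrices.
    """
    return float(np.sum((H_shared.T @ H_private) ** 2))

# orthogonal activations incur no penalty; identical ones are penalized
H_s = np.array([[1.0, 0.0], [0.0, 0.0]])
H_p = np.array([[0.0, 0.0], [0.0, 1.0]])
print(difference_loss(H_s, H_p))  # 0.0
print(difference_loss(H_s, H_s))  # 1.0
```

Minimizing such a term pushes the private and shared encoders to capture complementary, non-redundant information.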

759 citations


Journal ArticleDOI
TL;DR: Visible light-responsive photocatalytic technology holds great potential in water treatment to enhance purification efficiency, as well as to augment water supply through the safe usage of unconventional water sources as mentioned in this paper.
Abstract: Visible light-responsive photocatalytic technology holds great potential in water treatment to enhance purification efficiency, as well as to augment water supply through the safe usage of unconventional water sources. This review summarizes the recent progress in the design and fabrication of visible light-responsive photocatalysts via various synthetic strategies, including the modification of traditional photocatalysts by doping, dye sensitization, or by forming a heterostructure, coupled with π-conjugated architecture, as well as the great efforts made within the exploration of novel visible light-responsive photocatalysts. Background information on the fundamentals of heterogeneous photocatalysis, the pathways of visible light-responsive photocatalysis, and the unique features of visible light-responsive photocatalysts are presented. The photocatalytic properties of the resulting visible light-responsive photocatalysts are also covered in relation to the water treatment, i.e., regarding the photocatalytic degradation of organic compounds and inorganic pollutants, as well as photocatalytic disinfection. Finally, this review concludes with a summary and perspectives on the current challenges faced and new directions in this emerging area of research.

759 citations


Journal ArticleDOI
TL;DR: In this paper, the authors provide a survey covering existing techniques to increase interpretability of machine learning models and discuss crucial issues that the community should consider in future work such as designing user-friendly explanations and developing comprehensive evaluation metrics to further push forward the area of interpretable machine learning.
Abstract: Interpretable machine learning tackles the important problem that humans cannot understand the behaviors of complex machine learning models and how these models arrive at a particular decision. Although many approaches have been proposed, a comprehensive understanding of the achievements and challenges is still lacking. We provide a survey covering existing techniques to increase the interpretability of machine learning models. We also discuss crucial issues that the community should consider in future work such as designing user-friendly explanations and developing comprehensive evaluation metrics to further push forward the area of interpretable machine learning.

759 citations


Journal ArticleDOI
28 Dec 2017-Cell
TL;DR: It is demonstrated that selenolate-based catalysis of the essential mammalian selenoprotein GPX4 is unexpectedly dispensable for normal embryogenesis, whereas the survival of a specific type of interneuron depends exclusively on selenocysteine-containing GPX4, thereby preventing fatal epileptic seizures.

759 citations


Journal ArticleDOI
25 May 2018-Science
TL;DR: It is proposed that the nucleus is a buffered system in which high RNA concentrations keep RBPs soluble, and low RNA/protein ratios promote phase separation into liquid droplets, whereas high ratios prevent droplet formation in vitro.
Abstract: Prion-like RNA binding proteins (RBPs) such as TDP43 and FUS are largely soluble in the nucleus but form solid pathological aggregates when mislocalized to the cytoplasm. What keeps these proteins soluble in the nucleus and promotes aggregation in the cytoplasm is still unknown. We report here that RNA critically regulates the phase behavior of prion-like RBPs. Low RNA/protein ratios promote phase separation into liquid droplets, whereas high ratios prevent droplet formation in vitro. Reduction of nuclear RNA levels or genetic ablation of RNA binding causes excessive phase separation and the formation of cytotoxic solid-like assemblies in cells. We propose that the nucleus is a buffered system in which high RNA concentrations keep RBPs soluble. Changes in RNA levels or RNA binding abilities of RBPs cause aberrant phase transitions.

759 citations


Posted Content
TL;DR: The Tree-LSTM is introduced, a generalization of LSTMs to tree-structured network topologies that outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences and sentiment classification.
Abstract: Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
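As a rough illustration of the Child-Sum variant, each node sums its children's hidden states when computing the input, output, and candidate gates, but applies a separate forget gate per child. A NumPy sketch under assumed parameter names (W, U, b per gate); this is not the authors' code:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def child_sum_node(x, children, W, U, b):
    """One Child-Sum Tree-LSTM node update.
    x: input vector (e.g. a word embedding); children: list of (h, c)
    pairs from child nodes (empty for a leaf).
    W, U, b: per-gate parameter dicts with keys 'i', 'f', 'o', 'u'.
    """
    h_sum = sum((h for h, _ in children), np.zeros_like(b['i']))
    i = sigmoid(W['i'] @ x + U['i'] @ h_sum + b['i'])  # input gate
    o = sigmoid(W['o'] @ x + U['o'] @ h_sum + b['o'])  # output gate
    u = np.tanh(W['u'] @ x + U['u'] @ h_sum + b['u'])  # candidate update
    c = i * u
    # unlike a chain LSTM, there is one forget gate per child, so the
    # node can selectively keep memory from individual subtrees
    for h_k, c_k in children:
        f_k = sigmoid(W['f'] @ x + U['f'] @ h_k + b['f'])
        c = c + f_k * c_k
    h = o * np.tanh(c)
    return h, c

# toy usage: a parent combining two leaves
rng = np.random.default_rng(0)
d, m = 4, 3  # input and memory dimensions
W = {g: rng.standard_normal((m, d)) for g in "ifou"}
U = {g: rng.standard_normal((m, m)) for g in "ifou"}
b = {g: np.zeros(m) for g in "ifou"}
leaf_h, leaf_c = child_sum_node(rng.standard_normal(d), [], W, U, b)
h, c = child_sum_node(rng.standard_normal(d), [(leaf_h, leaf_c)] * 2, W, U, b)
```

With an empty `children` list the update reduces to an ordinary LSTM cell with zero previous state, so leaves and internal nodes share one code path.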

759 citations


Journal ArticleDOI
Xu Chen1
TL;DR: This paper designs a decentralized computation offloading mechanism that achieves a Nash equilibrium of the game, quantifies its efficiency ratio over the centralized optimal solution, and demonstrates that the proposed mechanism achieves efficient computation offloading performance and scales well as the system size increases.
Abstract: Mobile cloud computing is envisioned as a promising approach to augment computation capabilities of mobile devices for emerging resource-hungry mobile applications. In this paper, we propose a game theoretic approach for achieving efficient computation offloading for mobile cloud computing. We formulate the decentralized computation offloading decision making problem among mobile device users as a decentralized computation offloading game. We analyze the structural property of the game and show that the game always admits a Nash equilibrium. We then design a decentralized computation offloading mechanism that can achieve a Nash equilibrium of the game and quantify its efficiency ratio over the centralized optimal solution. Numerical results demonstrate that the proposed mechanism can achieve efficient computation offloading performance and scale well as the system size increases.
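To make the game-theoretic idea concrete, here is a toy best-response dynamic (our illustrative model, not the paper's: each user has a fixed local cost, and the cloud cost grows linearly with the number of offloaders, i.e. congestion):

```python
def best_response_dynamics(local_cost, cloud_base, max_rounds=100):
    """Iterated best responses in a toy offloading game.
    local_cost[i]: user i's cost of computing locally.
    cloud_base[i]: user i's base cloud cost, scaled by congestion.
    Returns each user's offloading decision once no user wants to switch,
    i.e. a Nash equilibrium of this toy game.
    """
    n = len(local_cost)
    offload = [False] * n
    for _ in range(max_rounds):
        changed = False
        for i in range(n):
            others = sum(offload) - offload[i]  # other offloaders
            # cloud cost user i would face if it offloads now
            cloud_cost = cloud_base[i] * (others + 1)
            best = cloud_cost < local_cost[i]
            if best != offload[i]:
                offload[i] = best
                changed = True
        if not changed:  # no profitable unilateral deviation
            return offload
    return offload

# two users with expensive local computation offload; congestion keeps
# the third (cheap local cost) on its own device
eq = best_response_dynamics([10.0, 10.0, 3.0], [4.0, 4.0, 4.0])
print(eq)  # [True, True, False]
```

In congestion games of this shape, sequential best responses converge in finitely many steps; `max_rounds` is only a safety cap.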

759 citations


Proceedings ArticleDOI
15 Jun 2019
TL;DR: This work solves the problem of salient object detection by investigating how to expand the role of pooling in convolutional neural networks, building a global guidance module (GGM) and designing a feature aggregation module (FAM) to fuse coarse-level semantic information with fine-level features from the top-down pathway.
Abstract: We solve the problem of salient object detection by investigating how to expand the role of pooling in convolutional neural networks. Based on the U-shape architecture, we first build a global guidance module (GGM) upon the bottom-up pathway, aiming at providing layers at different feature levels the location information of potential salient objects. We further design a feature aggregation module (FAM) to make the coarse-level semantic information well fused with the fine-level features from the top-down pathway. By adding FAMs after the fusion operations in the top-down pathway, coarse-level features from the GGM can be seamlessly merged with features at various scales. These two pooling-based modules allow the high-level semantic features to be progressively refined, yielding detail-enriched saliency maps. Experiment results show that our proposed approach can more accurately locate the salient objects with sharpened details and hence substantially improve the performance compared to the previous state of the art. Our approach is fast as well and can run at a speed of more than 30 FPS when processing a 300×400 image. Code can be found at http://mmcheng.net/poolnet/.

759 citations


Journal ArticleDOI
TL;DR: Three-dimensional brain magnetic resonance imaging data was meta-analyzed to identify subcortical brain volumes that robustly discriminate major depressive disorder patients from healthy controls and showed robust smaller hippocampal volumes in MDD patients, moderated by age of onset and first episode versus recurrent episode status.
Abstract: The pattern of structural brain alterations associated with major depressive disorder (MDD) remains unresolved. This is in part due to small sample sizes of neuroimaging studies resulting in limited statistical power, disease heterogeneity and the complex interactions between clinical characteristics and brain morphology. To address this, we meta-analyzed three-dimensional brain magnetic resonance imaging data from 1728 MDD patients and 7199 controls from 15 research samples worldwide, to identify subcortical brain volumes that robustly discriminate MDD patients from healthy controls. Relative to controls, patients had significantly lower hippocampal volumes (Cohen's d=-0.14, % difference=-1.24). This effect was driven by patients with recurrent MDD (Cohen's d=-0.17, % difference=-1.44), and we detected no differences between first episode patients and controls. Age of onset ⩽21 was associated with a smaller hippocampus (Cohen's d=-0.20, % difference=-1.85) and a trend toward smaller amygdala (Cohen's d=-0.11, % difference=-1.23) and larger lateral ventricles (Cohen's d=0.12, % difference=5.11). Symptom severity at study inclusion was not associated with any regional brain volumes. Sample characteristics such as mean age, proportion of antidepressant users and proportion of remitted patients, and methodological characteristics did not significantly moderate alterations in brain volumes in MDD. Samples with a higher proportion of antipsychotic medication users showed larger caudate volumes in MDD patients compared with controls. This currently largest worldwide effort to identify subcortical brain alterations showed robust smaller hippocampal volumes in MDD patients, moderated by age of onset and first episode versus recurrent episode status.
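The effect sizes quoted above are Cohen's d values: group mean differences expressed in pooled-standard-deviation units. A minimal sketch of the standard pooled formula (the study's covariate-adjusted computation may differ):

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d: standardized mean difference between two groups,
    using the pooled standard deviation."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(pooled_var)

# with equal SDs, d is simply the mean difference in SD units
print(cohens_d(10.0, 2.0, 50, 12.0, 2.0, 50))  # -1.0
```

By this convention an effect like d = -0.14 (the hippocampal finding) is small in absolute terms, which is why the large pooled sample was needed to detect it.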

759 citations


Journal ArticleDOI
TL;DR: It is shown that QS signals are stochastically produced in young biofilms of Pseudomonas putida and act mainly as self-regulatory signals rather than inducing neighbouring cells, and that heterogeneity in QS can serve as a mechanism to drive phenotypic heterogeneity in self-directed behaviour.
Abstract: Bacteria secrete signalling molecules (AHLs) to coordinate actions such as biofilm formation and the release of public goods, in a process called quorum sensing. Here, the authors show that AHLs are stochastically produced and control asocial (self-directed) traits in young biofilms of P. putida.

Journal ArticleDOI
22 Oct 2015-Nature
TL;DR: Large sequencing data sets of clinically informative samples enable the discovery of novel genes associated with cancer, the network of relationships between the driver events, and their impact on disease relapse and clinical outcome.
Abstract: Which genetic alterations drive tumorigenesis and how they evolve over the course of disease and therapy are central questions in cancer biology. Here we identify 44 recurrently mutated genes and 11 recurrent somatic copy number variations through whole-exome sequencing of 538 chronic lymphocytic leukaemia (CLL) and matched germline DNA samples, 278 of which were collected in a prospective clinical trial. These include previously unrecognized putative cancer drivers (RPS15, IKZF3), and collectively identify RNA processing and export, MYC activity, and MAPK signalling as central pathways involved in CLL. Clonality analysis of this large data set further enabled reconstruction of temporal relationships between driver events. Direct comparison between matched pre-treatment and relapse samples from 59 patients demonstrated highly frequent clonal evolution. Thus, large sequencing data sets of clinically informative samples enable the discovery of novel genes associated with cancer, the network of relationships between the driver events, and their impact on disease relapse and clinical outcome.

Journal ArticleDOI
TL;DR: The burden of COVID-19 infection in North American PICUs is described, confirming that severe illness in children is significant but far less frequent than in adults and that prehospital comorbidities appear to be an important factor in children.
Abstract: Importance The recent and ongoing coronavirus disease 2019 (COVID-19) pandemic has taken an unprecedented toll on adults critically ill with COVID-19 infection. While there is evidence that the burden of COVID-19 infection in hospitalized children is lesser than in their adult counterparts, to date, there are only limited reports describing COVID-19 in pediatric intensive care units (PICUs). Objective To provide an early description and characterization of COVID-19 infection in North American PICUs, focusing on mode of presentation, presence of comorbidities, severity of disease, therapeutic interventions, clinical trajectory, and early outcomes. Design, Setting, and Participants This cross-sectional study included children positive for COVID-19 admitted to 46 North American PICUs between March 14 and April 3, 2020, with follow-up to April 10, 2020. Main Outcomes and Measures Prehospital characteristics, clinical trajectory, and hospital outcomes of children admitted to PICUs with confirmed COVID-19 infection. Results Of the 48 children with COVID-19 admitted to participating PICUs, 25 (52%) were male, and the median (range) age was 13 (4.2-16.6) years. Forty patients (83%) had significant preexisting comorbidities; 35 (73%) presented with respiratory symptoms and 18 (38%) required invasive ventilation. Eleven patients (23%) had failure of 2 or more organ systems. Extracorporeal membrane oxygenation was required for 1 patient (2%). Targeted therapies were used in 28 patients (61%), with hydroxychloroquine being the most commonly used agent either alone (11 patients) or in combination (10 patients). At the completion of the follow-up period, 2 patients (4%) had died and 15 (31%) were still hospitalized, with 3 still requiring ventilatory support and 1 receiving extracorporeal membrane oxygenation. The median (range) PICU and hospital lengths of stay for those who had been discharged were 5 (3-9) days and 7 (4-13) days, respectively.
Conclusions and Relevance This early report describes the burden of COVID-19 infection in North American PICUs and confirms that severe illness in children is significant but far less frequent than in adults. Prehospital comorbidities appear to be an important factor in children. These preliminary observations provide an important platform for larger and more extensive studies of children with COVID-19 infection.

Journal ArticleDOI
TL;DR: In this article, a new frequency-domain phenomenological model of the gravitational-wave signal from the inspiral, merger and ringdown of non-precessing (aligned-spin) black-hole binaries is presented.
Abstract: We present a new frequency-domain phenomenological model of the gravitational-wave signal from the inspiral, merger and ringdown of nonprecessing (aligned-spin) black-hole binaries. The model is calibrated to 19 hybrid effective-one-body–numerical-relativity waveforms up to mass ratios of 1∶18 and black-hole spins of |a/m|∼0.85 (0.98 for equal-mass systems). The inspiral part of the model consists of an extension of frequency-domain post-Newtonian expressions, using higher-order terms fit to the hybrids. The merger ringdown is based on a phenomenological ansatz that has been significantly improved over previous models. The model exhibits mismatches of typically less than 1% against all 19 calibration hybrids and an additional 29 verification hybrids, which provide strong evidence that, over the calibration region, the model is sufficiently accurate for all relevant gravitational-wave astronomy applications with the Advanced LIGO and Virgo detectors. Beyond the calibration region the model produces physically reasonable results, although we recommend caution in assuming that any merger-ringdown waveform model is accurate outside its calibration region. As an example, we note that an alternative nonprecessing model, SEOBNRv2 (calibrated up to spins of only 0.5 for unequal-mass systems), exhibits mismatch errors of up to 10% for high spins outside its calibration region. We conclude that waveform models would benefit most from a larger number of numerical-relativity simulations of high-aligned-spin unequal-mass binaries.

Proceedings Article
01 Jan 2015
TL;DR: The Variational dropout method is proposed, a generalization of Gaussian dropout, but with a more flexibly parameterized posterior, often leading to better generalization in stochastic gradient variational Bayes.
Abstract: We explore an as yet unexploited opportunity for drastically improving the efficiency of stochastic gradient variational Bayes (SGVB) with global model parameters. Regular SGVB estimators rely on sampling of parameters once per minibatch of data, and have variance that is constant w.r.t. the minibatch size. The efficiency of such estimators can be drastically improved upon by translating uncertainty about global parameters into local noise that is independent across datapoints in the minibatch. Such reparameterizations with local noise can be trivially parallelized and have variance that is inversely proportional to the minibatch size, generally leading to much faster convergence. We find an important connection with regularization by dropout: the original Gaussian dropout objective corresponds to SGVB with local noise, a scale-invariant prior and proportionally fixed posterior variance. Our method allows inference of more flexibly parameterized posteriors; specifically, we propose "variational dropout", a generalization of Gaussian dropout, but with a more flexibly parameterized posterior, often leading to better generalization. The method is demonstrated through several experiments.
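The core trick, translating global weight uncertainty into per-datapoint activation noise, can be sketched as follows (assuming a Gaussian-dropout-style posterior W ~ N(M, α·M²); variable names are ours):

```python
import numpy as np

def local_reparam_layer(X, M, alpha, rng):
    """Local reparameterization: rather than sampling one weight matrix
    W ~ N(M, alpha * M**2) for the whole minibatch, sample the Gaussian
    noise it induces on the pre-activations, independently per datapoint.
    X: (batch, d_in) inputs; M: (d_in, d_out) weight means.
    """
    mean = X @ M                       # E[X @ W]
    var = (X ** 2) @ (alpha * M ** 2)  # Var[X @ W], elementwise
    return mean + np.sqrt(var) * rng.standard_normal(mean.shape)

# with alpha = 0 the posterior collapses and the layer is deterministic
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
M = rng.standard_normal((3, 2))
out = local_reparam_layer(X, M, 0.0, rng)
print(np.allclose(out, X @ M))  # True
```

Because each row of the noise is independent, gradient-estimator variance shrinks with minibatch size, which is the efficiency gain the abstract describes.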

Proceedings ArticleDOI
15 Jun 2019
TL;DR: A novel Cascaded Partial Decoder (CPD) framework for fast and accurate salient object detection and applies the proposed framework to optimize existing multi-level feature aggregation models and significantly improve their efficiency and accuracy.
Abstract: Existing state-of-the-art salient object detection networks rely on aggregating multi-level features of pre-trained convolutional neural networks (CNNs). However, compared to high-level features, low-level features contribute less to performance. Meanwhile, they raise more computational cost because of their larger spatial resolutions. In this paper, we propose a novel Cascaded Partial Decoder (CPD) framework for fast and accurate salient object detection. On the one hand, the framework constructs partial decoder which discards larger resolution features of shallow layers for acceleration. On the other hand, we observe that integrating features of deep layers will obtain relatively precise saliency map. Therefore we directly utilize generated saliency map to recurrently optimize features of deep layers. This strategy efficiently suppresses distractors in the features and significantly improves their representation ability. Experiments conducted on five benchmark datasets exhibit that the proposed model not only achieves state-of-the-art but also runs much faster than existing models. Besides, we apply the proposed framework to optimize existing multi-level feature aggregation models and significantly improve their efficiency and accuracy.

Journal ArticleDOI
TL;DR: In this paper, a new global fit of neutrino oscillation parameters within the simplest three-neutrino picture was presented, including new data which appeared since their previous analysis.

Journal ArticleDOI
TL;DR: Experts assembled to review, debate and summarize the challenges of IB validation and qualification produced 14 key recommendations for accelerating the clinical translation of IBs, which highlight the role of parallel (rather than sequential) tracks of technical validation, biological/clinical validation and assessment of cost-effectiveness.
Abstract: Imaging biomarkers (IBs) are integral to the routine management of patients with cancer. IBs used daily in oncology include clinical TNM stage, objective response and left ventricular ejection fraction. Other CT, MRI, PET and ultrasonography biomarkers are used extensively in cancer research and drug development. New IBs need to be established either as useful tools for testing research hypotheses in clinical trials and research studies, or as clinical decision-making tools for use in healthcare, by crossing 'translational gaps' through validation and qualification. Important differences exist between IBs and biospecimen-derived biomarkers and, therefore, the development of IBs requires a tailored 'roadmap'. Recognizing this need, Cancer Research UK (CRUK) and the European Organisation for Research and Treatment of Cancer (EORTC) assembled experts to review, debate and summarize the challenges of IB validation and qualification. This consensus group has produced 14 key recommendations for accelerating the clinical translation of IBs, which highlight the role of parallel (rather than sequential) tracks of technical (assay) validation, biological/clinical validation and assessment of cost-effectiveness; the need for IB standardization and accreditation systems; the need to continually revisit IB precision; an alternative framework for biological/clinical validation of IBs; and the essential requirements for multicentre studies to qualify IBs for clinical use.

Proceedings Article
29 Jan 2018
TL;DR: This work proposes a method based on a semidefinite relaxation that outputs a certificate that for a given network and test input, no attack can force the error to exceed a certain value, providing an adaptive regularizer that encourages robustness against all attacks.
Abstract: While neural networks have achieved high accuracy on standard image classification benchmarks, their accuracy drops to nearly zero in the presence of small adversarial perturbations to test inputs. Defenses based on regularization and adversarial training have been proposed, but often followed by new, stronger attacks that defeat these defenses. Can we somehow end this arms race? In this work, we study this problem for neural networks with one hidden layer. We first propose a method based on a semidefinite relaxation that outputs a certificate that for a given network and test input, no attack can force the error to exceed a certain value. Second, as this certificate is differentiable, we jointly optimize it with the network parameters, providing an adaptive regularizer that encourages robustness against all attacks. On MNIST, our approach produces a network and a certificate that no attack that perturbs each pixel by at most ε = 0.1 can cause more than 35% test error.

Journal ArticleDOI
TL;DR: As Amazon's Mechanical Turk (MTurk) has surged in popularity throughout political science, scholars have increasingly challenged the external validity of inferences made drawing upon MTurk samples as mentioned in this paper.
Abstract: As Amazon’s Mechanical Turk (MTurk) has surged in popularity throughout political science, scholars have increasingly challenged the external validity of inferences made drawing upon MTurk samples....

Journal ArticleDOI
TL;DR: This article reviews static and dynamic interfacial effects in magnetism, focusing on interfacially-driven magnetic effects and phenomena associated with spin-orbit coupling and intrinsic symmetry breaking at interfaces, identifying the most exciting new scientific results and pointing to promising future research directions.
Abstract: This article reviews static and dynamic interfacial effects in magnetism, focusing on interfacially-driven magnetic effects and phenomena associated with spin-orbit coupling and intrinsic symmetry breaking at interfaces. It provides a historical background and literature survey, but focuses on recent progress, identifying the most exciting new scientific results and pointing to promising future research directions. It starts with an introduction and overview of how basic magnetic properties are affected by interfaces, then turns to a discussion of charge and spin transport through and near interfaces and how these can be used to control the properties of the magnetic layer. Important concepts include spin accumulation, spin currents, spin transfer torque, and spin pumping. An overview is provided to the current state of knowledge and existing review literature on interfacial effects such as exchange bias, exchange spring magnets, spin Hall effect, oxide heterostructures, and topological insulators. The article highlights recent discoveries of interface-induced magnetism and non-collinear spin textures, non-linear dynamics including spin torque transfer and magnetization reversal induced by interfaces, and interfacial effects in ultrafast magnetization processes.

Journal ArticleDOI
TL;DR: This technical review covers radiomic application areas and technical issues, as well as proper practices for the design of radiomic studies.
Abstract: Radiomics is an emerging field in quantitative imaging that uses advanced imaging features to objectively and quantitatively describe tumour phenotypes. Radiomic features have recently drawn considerable interest due to their potential predictive power for treatment outcomes and cancer genetics, which may have important applications in personalized medicine. In this technical review, we describe applications and challenges of the radiomic field. We will review radiomic application areas and technical issues, as well as proper practices for the design of radiomic studies.


Journal ArticleDOI
01 Jun 2016-BMJ Open
TL;DR: Chronic pain affects between one-third and one-half of the population of the UK, corresponding to just under 28 million adults, based on data from the best available published studies; this figure is likely to increase further in line with an ageing population.
Abstract: Objectives There is little consensus regarding the burden of pain in the UK. The purpose of this review was to synthesise existing data on the prevalence of various chronic pain phenotypes in order to produce accurate and contemporary national estimates. Design Major electronic databases were searched for articles published after 1990, reporting population-based prevalence estimates of chronic pain (pain lasting >3 months), chronic widespread pain, fibromyalgia and chronic neuropathic pain. Pooled prevalence estimates were calculated for chronic pain and chronic widespread pain. Results Of the 1737 articles generated through our searches, 19 studies matched our inclusion criteria, presenting data from 139 933 adult residents of the UK. The prevalence of chronic pain, derived from 7 studies, ranged from 35.0% to 51.3% (pooled estimate 43.5%, 95% CIs 38.4% to 48.6%). The prevalence of moderate-severely disabling chronic pain (Von Korff grades III/IV), based on 4 studies, ranged from 10.4% to 14.3%. 12 studies stratified chronic pain prevalence by age group, demonstrating a trend towards increasing prevalence with increasing age from 14.3% in 18–25 years old, to 62% in the over 75 age group, although the prevalence of chronic pain in young people (18–39 years old) may be as high as 30%. Reported prevalence estimates were summarised for chronic widespread pain (pooled estimate 14.2%, 95% CI 12.3% to 16.1%; 5 studies), chronic neuropathic pain (8.2% to 8.9%; 2 studies) and fibromyalgia (5.4%; 1 study). Chronic pain was more common in female than male participants, across all measured phenotypes. Conclusions Chronic pain affects between one-third and one-half of the population of the UK, corresponding to just under 28 million adults, based on data from the best available published studies. This figure is likely to increase further in line with an ageing population.
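For intuition on how the pooled estimates above are formed, here is a minimal fixed-effect inverse-variance pooling sketch (illustrative only; the review itself will have used a formal meta-analysis model, likely random-effects):

```python
import math

def pool_prevalence(studies):
    """Fixed-effect inverse-variance pooling of prevalence estimates.
    studies: list of (prevalence, sample_size) pairs.
    Returns the pooled prevalence and its 95% confidence interval.
    """
    sum_w = sum_wp = 0.0
    for p, n in studies:
        var = p * (1.0 - p) / n  # binomial variance of a proportion
        w = 1.0 / var            # weight = inverse variance
        sum_w += w
        sum_wp += w * p
    pooled = sum_wp / sum_w
    se = math.sqrt(1.0 / sum_w)
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# studies reporting the same prevalence pool to that prevalence,
# with a CI narrower than any single study's
pooled, ci = pool_prevalence([(0.40, 1000), (0.40, 2000)])
print(round(pooled, 3))  # 0.4
```

Larger studies get proportionally more weight because their variance is smaller, which is why the pooled CI (e.g. 38.4% to 48.6% for chronic pain) is tighter than the spread of the individual estimates.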

Journal Article
TL;DR: Automatic differentiation (AD) is a family of techniques similar to but more general than backpropagation for efficiently and accurately evaluating derivatives of numeric functions expressed as computer programs, as discussed by the authors; it is a small but established field with applications in areas including computational fluid dynamics, atmospheric sciences, and engineering design optimization.
Abstract: Derivatives, mostly in the form of gradients and Hessians, are ubiquitous in machine learning. Automatic differentiation (AD), also called algorithmic differentiation or simply "auto-diff", is a family of techniques similar to but more general than backpropagation for efficiently and accurately evaluating derivatives of numeric functions expressed as computer programs. AD is a small but established field with applications in areas including computational fluid dynamics, atmospheric sciences, and engineering design optimization. Until very recently, the fields of machine learning and AD have largely been unaware of each other and, in some cases, have independently discovered each other's results. Despite its relevance, general-purpose AD has been missing from the machine learning toolbox, a situation slowly changing with its ongoing adoption under the names "dynamic computational graphs" and "differentiable programming". We survey the intersection of AD and machine learning, cover applications where AD has direct relevance, and address the main implementation techniques. By precisely defining the main differentiation techniques and their interrelationships, we aim to bring clarity to the usage of the terms "autodiff", "automatic differentiation", and "symbolic differentiation" as these are encountered more and more in machine learning settings.
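One of the implementation techniques the survey covers is forward-mode AD, which can be realised with dual numbers: each value carries its derivative alongside it, and operator overloading propagates both through the program. The following is a minimal sketch of that idea (not code from the paper; the class and function names are my own):

```python
# Minimal forward-mode automatic differentiation via dual numbers.
# A dual number a + b*eps (with eps**2 == 0) carries a value and its
# derivative together; arithmetic propagates both simultaneously.

class Dual:
    def __init__(self, value, deriv=0.0):
        self.value = value   # primal value
        self.deriv = deriv   # tangent (derivative) carried alongside

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Sum rule: (u + v)' = u' + v'
        return Dual(self.value + other.value, self.deriv + other.deriv)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (u * v)' = u' * v + u * v'
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate df/dx at x by seeding the dual part with 1."""
    return f(Dual(x, 1.0)).deriv

# f(x) = 3x^2 + 2x, so f'(x) = 6x + 2 and f'(4) = 26
print(derivative(lambda x: 3 * x * x + 2 * x, 4.0))  # -> 26.0
```

Unlike symbolic differentiation, no expression for f' is ever built, and unlike numerical differencing there is no truncation error: the derivative is exact to machine precision, which is the property that distinguishes AD in the terminological discussion above.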

Journal ArticleDOI
TL;DR: Advances in resolution of phenotype and gene expression facilitate the integration of mouse and human immunology, support efforts to unravel human DC function in vivo and continue to present new translational opportunities to medicine.
Abstract: Dendritic cells (DC) are a class of bone-marrow-derived cells arising from lympho-myeloid haematopoiesis that form an essential interface between the innate sensing of pathogens and the activation of adaptive immunity. This task requires a wide range of mechanisms and responses, which are divided between three major DC subsets: plasmacytoid DC (pDC), myeloid/conventional DC1 (cDC1) and myeloid/conventional DC2 (cDC2). Each DC subset develops under the control of a specific repertoire of transcription factors involving differential levels of IRF8 and IRF4 in collaboration with PU.1, ID2, E2-2, ZEB2, KLF4, IKZF1 and BATF3. DC haematopoiesis is conserved between mammalian species and is distinct from monocyte development. Although monocytes can differentiate into DC, especially during inflammation, most quiescent tissues contain significant resident populations of DC lineage cells. An extended range of surface markers facilitates the identification of specific DC subsets although it remains difficult to dissociate cDC2 from monocyte-derived DC in some settings. Recent studies based on an increasing level of resolution of phenotype and gene expression have identified pre-DC in human blood and heterogeneity among cDC2. These advances facilitate the integration of mouse and human immunology, support efforts to unravel human DC function in vivo and continue to present new translational opportunities to medicine.

Proceedings ArticleDOI
22 May 2016
TL;DR: This paper presents a binary analysis framework offering a systematized implementation of a number of previously proposed analysis techniques, unified so that other researchers can compose them and develop new approaches.
Abstract: Finding and exploiting vulnerabilities in binary code is a challenging task. The lack of high-level, semantically rich information about data structures and control constructs makes the analysis of program properties harder to scale. However, the importance of binary analysis is on the rise. In many situations binary analysis is the only possible way to prove (or disprove) properties about the code that is actually executed. In this paper, we present a binary analysis framework that implements a number of analysis techniques that have been proposed in the past. We present a systematized implementation of these techniques, which allows other researchers to compose them and develop new approaches. In addition, the implementation of these techniques in a unifying framework allows for the direct comparison of these approaches and the identification of their advantages and disadvantages. The evaluation included in this paper is performed using a recent dataset created by DARPA for evaluating the effectiveness of binary vulnerability analysis techniques. Our framework has been open-sourced and is available to the security community.

Journal ArticleDOI
TL;DR: This update of a previously published Cochrane review sought to critically appraise and summarise current evidence on the effectiveness and resource use of CGA for older adults admitted to hospital, and to estimate its cost-effectiveness.
Abstract: Background Comprehensive geriatric assessment (CGA) is a multi-dimensional, multi-disciplinary diagnostic and therapeutic process conducted to determine the medical, mental, and functional problems of older people with frailty so that a co-ordinated and integrated plan for treatment and follow-up can be developed. This is an update of a previously published Cochrane review. Objectives We sought to critically appraise and summarise current evidence on the effectiveness and resource use of CGA for older adults admitted to hospital, and to use these data to estimate its cost-effectiveness. Search methods We searched CENTRAL, MEDLINE, Embase, three other databases, and two trials registers on 5 October 2016; we also checked reference lists and contacted study authors. Selection criteria We included randomised trials that compared inpatient CGA (delivered on geriatric wards or by mobile teams) versus usual care on a general medical ward or on a ward for older people, usually admitted to hospital for acute care or for inpatient rehabilitation after an acute admission. Data collection and analysis We followed standard methodological procedures expected by Cochrane and Effective Practice and Organisation of Care (EPOC). We used the GRADE approach to assess the certainty of evidence for the most important outcomes. For this update, we requested individual patient data (IPD) from trialists, and we conducted a survey of trialists to obtain details of delivery of CGA. We calculated risk ratios (RRs), mean differences (MDs), or standardised mean differences (SMDs), and combined data using fixed-effect meta-analysis. We estimated cost-effectiveness by comparing inpatient CGA versus hospital admission without CGA in terms of cost per quality-adjusted life year (QALY) gained, cost per life year (LY) gained, and cost per life year living at home (LYLAH) gained. Main results We included 29 trials recruiting 13,766 participants across nine, mostly high-income countries. 
CGA increases the likelihood that patients will be alive and in their own homes at 3 to 12 months' follow-up (risk ratio (RR) 1.06, 95% confidence interval (CI) 1.01 to 1.10; 16 trials, 6799 participants; high-certainty evidence), results in little or no difference in mortality at 3 to 12 months' follow-up (RR 1.00, 95% CI 0.93 to 1.07; 21 trials, 10,023 participants; high-certainty evidence), decreases the likelihood that patients will be admitted to a nursing home at 3 to 12 months' follow-up (RR 0.80, 95% CI 0.72 to 0.89; 14 trials, 6285 participants; high-certainty evidence) and results in little or no difference in dependence (RR 0.97, 95% CI 0.89 to 1.04; 14 trials, 6551 participants; high-certainty evidence). CGA may make little or no difference to cognitive function (SMD ranged from -0.22 to 0.35 (5 trials, 3534 participants; low-certainty evidence)). Mean length of stay ranged from 1.63 days to 40.7 days in the intervention group, and ranged from 1.8 days to 42.8 days in the comparison group. Healthcare costs per participant in the CGA group were on average GBP 234 (95% CI GBP -144 to GBP 605) higher than in the usual care group (17 trials, 5303 participants; low-certainty evidence). CGA may lead to a slight increase in QALYs of 0.012 (95% CI -0.024 to 0.048) at GBP 19,802 per QALY gained (3 trials; low-certainty evidence), a slight increase in LYs of 0.037 (95% CI 0.001 to 0.073), at GBP 6305 per LY gained (4 trials; low-certainty evidence), and a slight increase in LYLAH of 0.019 (95% CI -0.019 to 0.155) at GBP 12,568 per LYLAH gained (2 trials; low-certainty evidence). The probability that CGA would be cost-effective at a GBP 20,000 ceiling ratio for QALY, LY, and LYLAH was 0.50, 0.89, and 0.47, respectively (17 trials, 5303 participants; low-certainty evidence). Authors' conclusions Older patients are more likely to be alive and in their own homes at follow-up if they received CGA on admission to hospital.
We are uncertain whether data show a difference in effect between wards and teams, as this analysis was underpowered. CGA may lead to a small increase in costs, and evidence for cost-effectiveness is of low certainty due to imprecision and inconsistency among studies. Further research reporting setting-specific cost estimates across different sectors of care is required.
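The cost-effectiveness figures in the abstract follow the standard incremental cost-effectiveness ratio (ICER) arithmetic: incremental cost divided by incremental effect. A quick back-of-envelope check using only the rounded point estimates reported above roughly reproduces the published ratios (exact agreement is not expected, since the published ICERs were computed from unrounded trial-level data):

```python
# Back-of-envelope check of the ICERs reported in the abstract:
# ICER = incremental cost / incremental effect.
# Small discrepancies vs the published figures are expected because
# the abstract reports rounded point estimates.
incremental_cost = 234  # GBP, mean extra cost per participant for CGA

icer_qaly  = incremental_cost / 0.012   # vs reported GBP 19,802 per QALY
icer_ly    = incremental_cost / 0.037   # vs reported GBP 6,305 per LY
icer_lylah = incremental_cost / 0.019   # vs reported GBP 12,568 per LYLAH

print(round(icer_qaly), round(icer_ly), round(icer_lylah))
```

The three ratios land within a few percent of the published GBP 19,802, GBP 6,305, and GBP 12,568, which is consistent with rounding of the inputs rather than a different calculation.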

Journal ArticleDOI
TL;DR: The roles of APA in diverse cellular processes, including mRNA metabolism, protein diversification and protein localization, and more generally in gene regulation are discussed, and the molecular mechanisms underlying APA are discussed.
Abstract: Alternative polyadenylation (APA) is an RNA-processing mechanism that generates distinct 3' termini on mRNAs and other RNA polymerase II transcripts. It is widespread across all eukaryotic species and is recognized as a major mechanism of gene regulation. APA exhibits tissue specificity and is important for cell proliferation and differentiation. In this Review, we discuss the roles of APA in diverse cellular processes, including mRNA metabolism, protein diversification and protein localization, and more generally in gene regulation. We also discuss the molecular mechanisms underlying APA, such as variation in the concentration of core processing factors and RNA-binding proteins, as well as transcription-based regulation.

Journal ArticleDOI
TL;DR: In this paper, the mass and radius of the isolated 205.53 Hz millisecond pulsar PSR J0030+0451 were estimated using a Bayesian inference approach to analyze its energy-dependent thermal X-ray waveform, which was observed using the Neutron Star Interior Composition Explorer (NICER).
Abstract: Neutron stars are not only of astrophysical interest, but are also of great interest to nuclear physicists because their attributes can be used to determine the properties of the dense matter in their cores. One of the most informative approaches for determining the equation of state (EoS) of this dense matter is to measure both a star’s equatorial circumferential radius R e and its gravitational mass M. Here we report estimates of the mass and radius of the isolated 205.53 Hz millisecond pulsar PSR J0030+0451 obtained using a Bayesian inference approach to analyze its energy-dependent thermal X-ray waveform, which was observed using the Neutron Star Interior Composition Explorer (NICER). This approach is thought to be less subject to systematic errors than other approaches for estimating neutron star radii. We explored a variety of emission patterns on the stellar surface. Our best-fit model has three oval, uniform-temperature emitting spots and provides an excellent description of the pulse waveform observed using NICER. The radius and mass estimates given by this model are km and (68%). The independent analysis reported in the companion paper by Riley et al. explores different emitting spot models, but finds spot shapes and locations and estimates of R e and M that are consistent with those found in this work. We show that our measurements of R e and M for PSR J0030+0451 improve the astrophysical constraints on the EoS of cold, catalyzed matter above nuclear saturation density.