
Journal ArticleDOI
TL;DR: Ranger, as mentioned in this paper, is a fast C++ and R implementation of random forests for high-dimensional data that supports ensembles of classification, regression and survival trees.
Abstract: We introduce the C++ application and R package ranger. The software is a fast implementation of random forests for high dimensional data. Ensembles of classification, regression and survival trees are supported. We describe the implementation, provide examples, validate the package with a reference implementation, and compare runtime and memory usage with other implementations. The new software proves to scale best with the number of features, samples, trees, and features tried for splitting. Finally, we show that ranger is the fastest and most memory efficient implementation of random forests to analyze data on the scale of a genome-wide association study.
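ranger itself is an R/C++ package; as a rough sketch of the same workflow in another ecosystem, the snippet below fits a random-forest ensemble on p >> n data, with scikit-learn standing in for ranger (all data and parameter choices are illustrative, not from the paper):

```python
# Minimal sketch of the random-forest workflow ranger implements,
# using scikit-learn as a stand-in (ranger itself is an R/C++ package).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5000))   # high-dimensional: p >> n, as in GWAS-scale data
y = rng.integers(0, 2, size=200)   # binary phenotype

forest = RandomForestClassifier(
    n_estimators=500,              # number of trees in the ensemble
    max_features="sqrt",           # features tried per split ("mtry" in ranger)
    n_jobs=-1,                     # grow trees in parallel
)
forest.fit(X, y)
print(forest.score(X, y))
```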

1,423 citations


Journal ArticleDOI
08 Sep 2015-JAMA
TL;DR: The recent prevalence of, and trends in, total diabetes, diagnosed diabetes, and undiagnosed diabetes in the US were estimated from National Health and Nutrition Examination Survey (NHANES) data, using cross-sectional surveys conducted between 1988-1994 and 1999-2012.
Abstract: Importance: Previous studies have shown increasing prevalence of diabetes in the United States. New US data are available to estimate prevalence of and trends in diabetes. Objective: To estimate the recent prevalence and update US trends in total diabetes, diagnosed diabetes, and undiagnosed diabetes using National Health and Nutrition Examination Survey (NHANES) data. Design, setting, and participants: Cross-sectional surveys conducted between 1988-1994 and 1999-2012 of nationally representative samples of the civilian, noninstitutionalized US population; 2781 adults from 2011-2012 were used to estimate recent prevalence and an additional 23,634 adults from 1988-2010 were used to estimate trends. Main outcomes and measures: The prevalence of diabetes was defined using a previous diagnosis of diabetes or, if diabetes was not previously diagnosed, by (1) a hemoglobin A1c level of 6.5% or greater or a fasting plasma glucose (FPG) level of 126 mg/dL or greater (hemoglobin A1c or FPG definition) or (2) additionally including 2-hour plasma glucose (2-hour PG) level of 200 mg/dL or greater (hemoglobin A1c, FPG, or 2-hour PG definition). Prediabetes was defined as a hemoglobin A1c level of 5.7% to 6.4%, an FPG level of 100 mg/dL to 125 mg/dL, or a 2-hour PG level of 140 mg/dL to 199 mg/dL. Results: In the overall 2011-2012 population, the unadjusted prevalence (using the hemoglobin A1c, FPG, or 2-hour PG definitions for diabetes and prediabetes) was 14.3% (95% CI, 12.2%-16.8%) for total diabetes, 9.1% (95% CI, 7.8%-10.6%) for diagnosed diabetes, 5.2% (95% CI, 4.0%-6.9%) for undiagnosed diabetes, and 38.0% (95% CI, 34.7%-41.3%) for prediabetes; among those with diabetes, 36.4% (95% CI, 30.5%-42.7%) were undiagnosed. The unadjusted prevalence of total diabetes (using the hemoglobin A1c or FPG definition) was 12.3% (95% CI, 10.8%-14.1%); among those with diabetes, 25.2% (95% CI, 21.1%-29.8%) were undiagnosed. Compared with non-Hispanic white participants (11.3% [95% CI, 9.0%-14.1%]), the age-standardized prevalence of total diabetes (using the hemoglobin A1c, FPG, or 2-hour PG definition) was higher among non-Hispanic black participants (21.8% [95% CI, 17.7%-26.7%]; P …). Conclusions and relevance: In 2011-2012, the estimated prevalence of diabetes was 12% to 14% among US adults, depending on the criteria used, with a higher prevalence among participants who were non-Hispanic black, non-Hispanic Asian, and Hispanic. Between 1988-1994 and 2011-2012, the prevalence of diabetes increased in the overall population and in all subgroups evaluated.
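As a compact restatement of the diagnostic cutoffs above, here is a minimal Python sketch encoding only the laboratory criteria (a previous diagnosis also counts as diabetes in the study; the function name and interface are illustrative, not from the paper):

```python
def glycemic_status(hba1c, fpg, two_hr_pg=None):
    """Classify glycemic status using the cutoffs stated in the abstract.

    hba1c in percent; fpg and two_hr_pg in mg/dL. Pass two_hr_pg to use the
    broader (HbA1c, FPG, or 2-hour PG) definition; omit it for the
    HbA1c-or-FPG definition. Illustrative only, not clinical guidance.
    """
    diabetic = hba1c >= 6.5 or fpg >= 126
    prediabetic = 5.7 <= hba1c <= 6.4 or 100 <= fpg <= 125
    if two_hr_pg is not None:
        diabetic = diabetic or two_hr_pg >= 200
        prediabetic = prediabetic or 140 <= two_hr_pg <= 199
    if diabetic:
        return "diabetes"
    if prediabetic:
        return "prediabetes"
    return "normal"

print(glycemic_status(6.0, 110))      # prediabetes by both HbA1c and FPG
print(glycemic_status(5.4, 95, 205))  # diabetes by the 2-hour PG criterion only
```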

1,422 citations


Journal ArticleDOI
B. P. Abbott, Richard J. Abbott, T. D. Abbott, M. R. Abernathy, and 976 more authors (107 institutions)
TL;DR: It is found that the final remnant's mass and spin, as determined from the low-frequency and high-frequency phases of the signal, are mutually consistent with the binary black-hole solution in general relativity.
Abstract: The LIGO detection of GW150914 provides an unprecedented opportunity to study the two-body motion of a compact-object binary in the large-velocity, highly nonlinear regime, and to witness the final merger of the binary and the excitation of uniquely relativistic modes of the gravitational field. We carry out several investigations to determine whether GW150914 is consistent with a binary black-hole merger in general relativity. We find that the final remnant’s mass and spin, as determined from the low-frequency (inspiral) and high-frequency (postinspiral) phases of the signal, are mutually consistent with the binary black-hole solution in general relativity. Furthermore, the data following the peak of GW150914 are consistent with the least-damped quasinormal mode inferred from the mass and spin of the remnant black hole. By using waveform models that allow for parametrized general-relativity violations during the inspiral and merger phases, we perform quantitative tests on the gravitational-wave phase in the dynamical regime and we determine the first empirical bounds on several high-order post-Newtonian coefficients. We constrain the graviton Compton wavelength, assuming that gravitons are dispersed in vacuum in the same way as particles with mass, obtaining a 90%-confidence lower bound of 10^13 km. In conclusion, within our statistical uncertainties, we find no evidence for violations of general relativity in the genuinely strong-field regime of gravity.
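The quoted wavelength bound converts directly to a graviton mass bound via the Compton relation the analysis assumes (h is Planck's constant, c the speed of light, m_g the graviton mass):

$$ \lambda_g = \frac{h}{m_g c} > 10^{13}\,\mathrm{km} \;\Rightarrow\; m_g c^2 < \frac{hc}{\lambda_g} = \frac{(4.14\times10^{-15}\,\mathrm{eV\,s})(3.0\times10^{5}\,\mathrm{km\,s^{-1}})}{10^{13}\,\mathrm{km}} \approx 1.2\times10^{-22}\,\mathrm{eV}, $$

consistent with the mass bound usually quoted for this analysis.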

1,421 citations


Journal ArticleDOI
Abstract: No abstract available. Keywords: European Society of Cardiology; arrhythmias; cancer therapy; cardio-oncology; cardiotoxicity; chemotherapy; early detection; ischaemia; myocardial dysfunction; surveillance.

1,421 citations


Proceedings ArticleDOI
07 Dec 2015
TL;DR: This paper proposes to map an input image to a small number of key perception indicators that directly relate to the affordance of a road/traffic state for driving and argues that the direct perception representation provides the right level of abstraction.
Abstract: Today, there are two major paradigms for vision-based autonomous driving systems: mediated perception approaches that parse an entire scene to make a driving decision, and behavior reflex approaches that directly map an input image to a driving action by a regressor. In this paper, we propose a third paradigm: a direct perception approach to estimate the affordance for driving. We propose to map an input image to a small number of key perception indicators that directly relate to the affordance of a road/traffic state for driving. Our representation provides a set of compact yet complete descriptions of the scene to enable a simple controller to drive autonomously. Falling in between the two extremes of mediated perception and behavior reflex, we argue that our direct perception representation provides the right level of abstraction. To demonstrate this, we train a deep Convolutional Neural Network using recording from 12 hours of human driving in a video game and show that our model can work well to drive a car in a very diverse set of virtual environments. We also train a model for car distance estimation on the KITTI dataset. Results show that our direct perception approach can generalize well to real driving images. Source code and data are available on our project website.
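To make the paradigm concrete, here is a toy sketch of the direct-perception pipeline: a perception stub returns a few affordance indicators and a hand-written controller consumes them. The indicator names and controller gains are assumptions for illustration; in the paper a deep CNN predicts its own set of indicators.

```python
# Illustrative sketch of "direct perception": a perception model (stubbed
# here) outputs a handful of affordance indicators, and a trivial controller
# maps them to driving commands.
def perceive(image):
    # In the paper this is a deep CNN; here we return dummy indicators.
    return {"angle_to_road": 0.05,     # radians, heading error (assumed name)
            "dist_to_center": -0.3,    # meters, lateral offset (assumed name)
            "dist_to_lead_car": 25.0}  # meters to preceding car (assumed name)

def control(ind, desired_gap=30.0):
    # Simple proportional steering and gap-keeping throttle.
    steer = -2.0 * ind["angle_to_road"] - 0.5 * ind["dist_to_center"]
    throttle = max(0.0, min(1.0,
        (ind["dist_to_lead_car"] - desired_gap) / desired_gap + 0.5))
    return steer, throttle

print(control(perceive(None)))
```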

1,420 citations


Journal ArticleDOI
TL;DR: Rsubread is presented, a Bioconductor software package that provides high-performance alignment and read counting functions for RNA-seq reads; it integrates read mapping and quantification in a single package and has no software dependencies other than R itself.
Abstract: We present Rsubread, a Bioconductor software package that provides high-performance alignment and read counting functions for RNA-seq reads. Rsubread is based on the successful Subread suite with the added ease-of-use of the R programming environment, creating a matrix of read counts directly as an R object ready for downstream analysis. It integrates read mapping and quantification in a single package and has no software dependencies other than R itself. We demonstrate Rsubread's ability to detect exon-exon junctions de novo and to quantify expression at the level of either genes, exons or exon junctions. The resulting read counts can be input directly into a wide range of downstream statistical analyses using other Bioconductor packages. Using SEQC data and simulations, we compare Rsubread to TopHat2, STAR and HTSeq as well as to counting functions in the Bioconductor infrastructure packages. We consider the performance of these tools on the combined quantification task starting from raw sequence reads through to summary counts, and in particular evaluate the performance of different combinations of alignment and counting algorithms. We show that Rsubread is faster and uses less memory than competitor tools and produces read count summaries that more accurately correlate with true values.
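A toy sketch of the counting step described above, in Python rather than R (the interval data and overlap rule are illustrative; Rsubread's counting functions additionally handle strandedness, multi-mappers, and junction reads):

```python
# Toy sketch of gene-level read counting: assign each aligned read to the
# gene whose exons it overlaps, then tally a count vector ready for
# downstream analysis.
from collections import defaultdict

exons = {"geneA": [(100, 200), (300, 400)],  # (start, end) exon intervals
         "geneB": [(500, 650)]}

def assign(read_start, read_end):
    # Return the first gene with any exon overlapping the read, if one exists.
    for gene, ivals in exons.items():
        if any(read_start < e and read_end > s for s, e in ivals):
            return gene
    return None

counts = defaultdict(int)
for start, end in [(120, 170), (350, 390), (520, 560), (700, 750)]:
    gene = assign(start, end)
    if gene:
        counts[gene] += 1
print(dict(counts))  # {'geneA': 2, 'geneB': 1}; last read maps to no gene
```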

1,420 citations


Proceedings ArticleDOI
01 Jul 2017
TL;DR: This paper addresses the problem of 3D reconstruction from a single image, generating an unorthodox but straightforward form of output (point cloud coordinates), and designs a novel and effective architecture, loss function and learning paradigm capable of predicting multiple plausible 3D point clouds from an input image.
Abstract: Generation of 3D data by deep neural network has been attracting increasing attention in the research community. The majority of extant works resort to regular representations such as volumetric grids or collections of images; however, these representations obscure the natural invariance of 3D shapes under geometric transformations, and also suffer from a number of other issues. In this paper we address the problem of 3D reconstruction from a single image, generating a straightforward form of output – point cloud coordinates. Along with this problem arises a unique and interesting issue: the groundtruth shape for an input image may be ambiguous. Driven by this unorthodox output form and the inherent ambiguity in groundtruth, we design an architecture, loss function and learning paradigm that are novel and effective. Our final solution is a conditional shape sampler, capable of predicting multiple plausible 3D point clouds from an input image. In experiments not only can our system outperform state-of-the-art methods on single image based 3D reconstruction benchmarks, but it also shows strong performance for 3D shape completion and promising ability in making multiple plausible predictions.
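A loss over unordered point sets is central here; this line of work compares predicted and groundtruth sets with distances such as the Chamfer distance, sketched below with NumPy (set sizes are illustrative):

```python
# Symmetric Chamfer distance between two point sets: for each point, find
# its nearest neighbor in the other set, and average both directions.
import numpy as np

def chamfer(P, Q):
    """P is (n, 3), Q is (m, 3); returns a scalar set-to-set distance."""
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)  # (n, m) pairwise
    return d.min(axis=1).mean() + d.min(axis=0).mean()

P = np.random.rand(1024, 3)
Q = P + 0.01 * np.random.randn(1024, 3)  # a slightly perturbed copy
print(chamfer(P, Q))                     # small, since Q is close to P
```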

1,419 citations


Proceedings ArticleDOI
04 Apr 2018
TL;DR: The most recent edition of the dermoscopic image analysis benchmark challenge as discussed by the authors was organized to support research and development of algorithms for automated diagnosis of melanoma, the most lethal skin cancer.
Abstract: This article describes the design, implementation, and results of the latest installment of the dermoscopic image analysis benchmark challenge. The goal is to support research and development of algorithms for automated diagnosis of melanoma, the most lethal skin cancer. The challenge was divided into 3 tasks: lesion segmentation, feature detection, and disease classification. Participation involved 593 registrations, 81 pre-submissions, 46 finalized submissions (including a 4-page manuscript), and approximately 50 attendees, making this the largest standardized and comparative study in this field to date. While the official challenge duration and ranking of participants have concluded, the dataset snapshots remain available for further research and development.

1,419 citations


Journal ArticleDOI
TL;DR: Cross-talk between cancer cells and the proximal immune cells ultimately results in an environment that fosters tumor growth and metastasis, and understanding the nature of this dialog will allow for improved therapeutics that simultaneously target multiple components of the TME, increasing the likelihood of favorable patient outcomes.
Abstract: Cancer development and progression occurs in concert with alterations in the surrounding stroma. Cancer cells can functionally sculpt their microenvironment through the secretion of various cytokines, chemokines, and other factors. This results in a reprogramming of the surrounding cells, enabling them to play a determinative role in tumor survival and progression. Immune cells are important constituents of the tumor stroma and critically take part in this process. Growing evidence suggests that the innate immune cells (macrophages, neutrophils, dendritic cells, innate lymphoid cells, myeloid-derived suppressor cells, and natural killer cells) as well as adaptive immune cells (T cells and B cells) contribute to tumor progression when present in the tumor microenvironment (TME). Cross-talk between cancer cells and the proximal immune cells ultimately results in an environment that fosters tumor growth and metastasis. Understanding the nature of this dialog will allow for improved therapeutics that simultaneously target multiple components of the TME, increasing the likelihood of favorable patient outcomes.

1,418 citations


Posted Content
TL;DR: This paper develops an extension of Spectral Networks which incorporates a Graph Estimation procedure, tested on large-scale classification problems, matching or improving over Dropout Networks with far fewer parameters to estimate.
Abstract: Deep Learning's recent successes have mostly relied on Convolutional Networks, which exploit fundamental statistical properties of images, sounds and video data: the local stationarity and multi-scale compositional structure, which allow expressing long range interactions in terms of shorter, localized interactions. However, there exist other important examples, such as text documents or bioinformatic data, that may lack some or all of these strong statistical regularities. In this paper we consider the general question of how to construct deep architectures with small learning complexity on general non-Euclidean domains, which are typically unknown and need to be estimated from the data. In particular, we develop an extension of Spectral Networks which incorporates a Graph Estimation procedure, that we test on large-scale classification problems, matching or improving over Dropout Networks with far fewer parameters to estimate.
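For context, the spectral construction being extended filters signals in the eigenbasis of a graph Laplacian. The sketch below applies a fixed spectral filter on a toy graph; in a spectral network the filter coefficients are learned per layer, and in this paper the graph itself is estimated from data (the adjacency matrix and filter here are illustrative):

```python
# Spectral filtering of a graph signal in the eigenbasis of the
# normalized graph Laplacian (the "graph Fourier transform").
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)    # toy adjacency matrix
d = A.sum(1)
L = np.eye(4) - A / np.sqrt(np.outer(d, d))  # normalized Laplacian
w, U = np.linalg.eigh(L)                     # eigenvalues and Fourier basis

x = np.array([1.0, 0.0, 0.0, 0.0])           # a signal on the 4 nodes
g = np.exp(-2.0 * w)                         # smooth low-pass spectral filter
                                             # (stand-in for learned coefficients)
y = U @ (g * (U.T @ x))                      # transform, scale, invert
print(y)                                     # the impulse has been diffused
```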

1,418 citations


Journal ArticleDOI
11 Mar 2016-Science
TL;DR: In this paper, a new bacterium, Ideonella sakaiensis 201-F6, was found to be able to use PET as its major energy and carbon source, producing two enzymes capable of hydrolyzing PET and the reaction intermediate, mono(2-hydroxyethyl) terephthalic acid.
Abstract: Poly(ethylene terephthalate) (PET) is used extensively worldwide in plastic products, and its accumulation in the environment has become a global concern. Because the ability to enzymatically degrade PET has been thought to be limited to a few fungal species, biodegradation is not yet a viable remediation or recycling strategy. By screening natural microbial communities exposed to PET in the environment, we isolated a novel bacterium, Ideonella sakaiensis 201-F6, that is able to use PET as its major energy and carbon source. When grown on PET, this strain produces two enzymes capable of hydrolyzing PET and the reaction intermediate, mono(2-hydroxyethyl) terephthalic acid. Both enzymes are required to enzymatically convert PET efficiently into its two environmentally benign monomers, terephthalic acid and ethylene glycol.

Posted Content
TL;DR: This paper proposes the Laplacian Pyramid Super-Resolution Network (LapSRN) to progressively reconstruct the sub-band residuals of high-resolution images; it generates multi-scale predictions in one feed-forward pass through the progressive reconstruction, thereby facilitating resource-aware applications.
Abstract: Convolutional neural networks have recently demonstrated high-quality reconstruction for single-image super-resolution. In this paper, we propose the Laplacian Pyramid Super-Resolution Network (LapSRN) to progressively reconstruct the sub-band residuals of high-resolution images. At each pyramid level, our model takes coarse-resolution feature maps as input, predicts the high-frequency residuals, and uses transposed convolutions for upsampling to the finer level. Our method does not require the bicubic interpolation as the pre-processing step and thus dramatically reduces the computational complexity. We train the proposed LapSRN with deep supervision using a robust Charbonnier loss function and achieve high-quality reconstruction. Furthermore, our network generates multi-scale predictions in one feed-forward pass through the progressive reconstruction, thereby facilitating resource-aware applications. Extensive quantitative and qualitative evaluations on benchmark datasets show that the proposed algorithm performs favorably against the state-of-the-art methods in terms of speed and accuracy.
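The Charbonnier loss named above is a smooth, robust variant of the L1 penalty; a minimal sketch (the epsilon value is a conventional choice, not taken from the paper):

```python
# Charbonnier loss: sqrt(r^2 + eps^2) behaves like |r| for large residuals
# but is differentiable at zero, making it robust yet easy to optimize.
import numpy as np

def charbonnier(pred, target, eps=1e-3):
    return np.mean(np.sqrt((pred - target) ** 2 + eps ** 2))

hr = np.random.rand(64, 64)               # stand-in high-resolution image
sr = hr + 0.05 * np.random.randn(64, 64)  # noisy "reconstruction"
print(charbonnier(sr, hr))
```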

Journal ArticleDOI
TL;DR: The results demonstrate the feasibility of selectively ablating senescent cells and the efficacy of senolytics for alleviating symptoms of frailty and extending healthspan.
Abstract: The healthspan of mice is enhanced by killing senescent cells using a transgenic suicide gene. Achieving the same using small molecules would have a tremendous impact on quality of life and the burden of age-related chronic diseases. Here, we describe the rationale for identification and validation of a new class of drugs termed senolytics, which selectively kill senescent cells. By transcript analysis, we discovered increased expression of pro-survival networks in senescent cells, consistent with their established resistance to apoptosis. Using siRNA to silence expression of key nodes of this network, including ephrins (EFNB1 or 3), PI3Kδ, p21, BCL-xL, or plasminogen-activated inhibitor-2, killed senescent cells, but not proliferating or quiescent, differentiated cells. Drugs targeting these same factors selectively killed senescent cells. Dasatinib eliminated senescent human fat cell progenitors, while quercetin was more effective against senescent human endothelial cells and mouse BM-MSCs. The combination of dasatinib and quercetin was effective in eliminating senescent MEFs. In vivo, this combination reduced senescent cell burden in chronologically aged, radiation-exposed, and progeroid Ercc1(-/Δ) mice. In old mice, cardiac function and carotid vascular reactivity were improved 5 days after a single dose. Following irradiation of one limb in mice, a single dose led to improved exercise capacity for at least 7 months following drug treatment. Periodic drug administration extended healthspan in Ercc1(-/∆) mice, delaying age-related symptoms and pathology, osteoporosis, and loss of intervertebral disk proteoglycans. These results demonstrate the feasibility of selectively ablating senescent cells and the efficacy of senolytics for alleviating symptoms of frailty and extending healthspan.

Journal ArticleDOI
22 Jan 2016-Science
TL;DR: Lewis reviews the status of solar thermal and solar fuels approaches for harnessing solar energy, as well as technology gaps for achieving cost-effective scalable deployment combined with storage technologies to provide reliable, dispatchable energy.
Abstract: Major developments, as well as remaining challenges and the associated research opportunities, are evaluated for three technologically distinct approaches to solar energy utilization: solar electricity, solar thermal, and solar fuels technologies. Much progress has been made, but research opportunities are still present for all approaches. Both evolutionary and revolutionary technology development, involving foundational research, applied research, learning by doing, demonstration projects, and deployment at scale will be needed to continue this technology-innovation ecosystem. Most of the approaches still offer the potential to provide much higher efficiencies, much lower costs, improved scalability, and new functionality, relative to the embodiments of solar energy-conversion systems that have been developed to date.

Journal ArticleDOI
TL;DR: An update on CoV infections and relevant diseases, particularly the host defense against CoV‐induced inflammation of lung tissue, as well as the role of the innate immune system in the pathogenesis and clinical treatment is provided.
Abstract: Coronaviruses (CoVs) are by far the largest group of known positive-sense RNA viruses having an extensive range of natural hosts. In the past few decades, newly evolved Coronaviruses have posed a global threat to public health. The immune response is essential to control and eliminate CoV infections, however, maladjusted immune responses may result in immunopathology and impaired pulmonary gas exchange. Gaining a deeper understanding of the interaction between Coronaviruses and the innate immune systems of the hosts may shed light on the development and persistence of inflammation in the lungs and hopefully can reduce the risk of lung inflammation caused by CoVs. In this review, we provide an update on CoV infections and relevant diseases, particularly the host defense against CoV-induced inflammation of lung tissue, as well as the role of the innate immune system in the pathogenesis and clinical treatment.

Journal ArticleDOI
30 Jun 2015-eLife
TL;DR: In this paper, the authors compile the largest contemporary database for both species and pair it with relevant environmental variables predicting their global distribution, showing Aedes distributions to be the widest ever recorded; now extensive in all continents, including North America and Europe.
Abstract: Dengue and chikungunya are increasing global public health concerns due to their rapid geographical spread and increasing disease burden. Knowledge of the contemporary distribution of their shared vectors, Aedes aegypti and Aedes albopictus, remains incomplete and is complicated by an ongoing range expansion fuelled by increased global trade and travel. Mapping the global distribution of these vectors and the geographical determinants of their ranges is essential for public health planning. Here we compile the largest contemporary database for both species and pair it with relevant environmental variables predicting their global distribution. We show Aedes distributions to be the widest ever recorded; now extensive in all continents, including North America and Europe. These maps will help define the spatial limits of current autochthonous transmission of dengue and chikungunya viruses. It is only with this kind of rigorous entomological baseline that we can hope to project future health impacts of these viruses.

Journal ArticleDOI
TL;DR: This paper showed that the two-way fixed effects estimator equals a weighted average of all possible two-group/two-period DD estimators in the data, decomposed the difference between two specifications, and provided a new analysis of models that include time-varying controls.
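In symbols, the claimed result (sketched here in generic notation, not the paper's exact indexing) writes the two-way fixed-effects difference-in-differences estimate as a convex combination of all two-group/two-period comparisons:

$$ \hat{\beta}^{DD} \;=\; \sum_{k}\sum_{l \neq k} s_{kl}\, \hat{\beta}^{\,2\times 2}_{kl}, \qquad s_{kl} \ge 0, \quad \sum_{k}\sum_{l \neq k} s_{kl} = 1, $$

where the weights s_{kl} depend on group sizes and the variance of the treatment indicator within each pair of groups.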

Journal ArticleDOI
28 Jul 2016-Cell
TL;DR: It is reported how cancer-driven alterations identified in 11,289 tumors from 29 tissues can be mapped onto 1,001 molecularly annotated human cancer cell lines and correlated with sensitivity to 265 drugs.

Journal ArticleDOI
TL;DR: In patients with SMA1, a single intravenous infusion of adeno-associated viral vector containing DNA coding for SMN resulted in longer survival, superior achievement of motor milestones, and better motor function than in historical cohorts.
Abstract: Background: Spinal muscular atrophy type 1 (SMA1) is a progressive, monogenic motor neuron disease with an onset during infancy that results in failure to achieve motor milestones and in death or the need for mechanical ventilation by 2 years of age. We studied functional replacement of the mutated gene encoding survival motor neuron 1 (SMN1) in this disease. Methods: Fifteen patients with SMA1 received a single dose of intravenous adeno-associated virus serotype 9 carrying SMN complementary DNA encoding the missing SMN protein. Three of the patients received a low dose (6.7×10^13 vg per kilogram of body weight), and 12 received a high dose (2.0×10^14 vg per kilogram). The primary outcome was safety. The secondary outcome was the time until death or the need for permanent ventilatory assistance. In exploratory analyses, we compared scores on the CHOP INTEND (Children’s Hospital of Philadelphia Infant Test of Neuromuscular Disorders) scale of motor function (ranging from 0 to 64, with higher scores indicating better motor function) …

Journal ArticleDOI
01 Jan 2020-Nature
TL;DR: A robust assessment of the AI system paves the way for clinical trials to improve the accuracy and efficiency of breast cancer screening; using a combination of AI and human input could help to improve screening efficiency.
Abstract: Screening mammography aims to identify breast cancer at earlier stages of the disease, when treatment can be more successful1. Despite the existence of screening programmes worldwide, the interpretation of mammograms is affected by high rates of false positives and false negatives2. Here we present an artificial intelligence (AI) system that is capable of surpassing human experts in breast cancer prediction. To assess its performance in the clinical setting, we curated a large representative dataset from the UK and a large enriched dataset from the USA. We show an absolute reduction of 5.7% and 1.2% (USA and UK) in false positives and 9.4% and 2.7% in false negatives. We provide evidence of the ability of the system to generalize from the UK to the USA. In an independent study of six radiologists, the AI system outperformed all of the human readers: the area under the receiver operating characteristic curve (AUC-ROC) for the AI system was greater than the AUC-ROC for the average radiologist by an absolute margin of 11.5%. We ran a simulation in which the AI system participated in the double-reading process that is used in the UK, and found that the AI system maintained non-inferior performance and reduced the workload of the second reader by 88%. This robust assessment of the AI system paves the way for clinical trials to improve the accuracy and efficiency of breast cancer screening. An artificial intelligence (AI) system performs as well as or better than radiologists at detecting breast cancer from mammograms, and using a combination of AI and human inputs could help to improve screening efficiency.

Journal ArticleDOI
27 Oct 2016-Nature
TL;DR: A machine learning model called a differentiable neural computer (DNC), which consists of a neural network that can read from and write to an external memory matrix, analogous to the random-access memory in a conventional computer.
Abstract: Artificial neural networks are remarkably adept at sensory processing, sequence learning and reinforcement learning, but are limited in their ability to represent variables and data structures and to store data over long timescales, owing to the lack of an external memory. Here we introduce a machine learning model called a differentiable neural computer (DNC), which consists of a neural network that can read from and write to an external memory matrix, analogous to the random-access memory in a conventional computer. Like a conventional computer, it can use its memory to represent and manipulate complex data structures, but, like a neural network, it can learn to do so from data. When trained with supervised learning, we demonstrate that a DNC can successfully answer synthetic questions designed to emulate reasoning and inference problems in natural language. We show that it can learn tasks such as finding the shortest path between specified points and inferring the missing links in randomly generated graphs, and then generalize these tasks to specific graphs such as transport networks and family trees. When trained with reinforcement learning, a DNC can complete a moving blocks puzzle in which changing goals are specified by sequences of symbols. Taken together, our results demonstrate that DNCs have the capacity to solve complex, structured tasks that are inaccessible to neural networks without external read-write memory.
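One ingredient of the DNC's external memory is content-based addressing: the controller emits a key, memory rows are scored by cosine similarity, and a softmax turns the scores into read weights. A minimal NumPy sketch (sizes and the sharpness parameter are illustrative; the full model adds write heads and temporal link tracking):

```python
# Content-based read from an external memory matrix: compare a key against
# every row by cosine similarity, softmax into read weights, return the
# weighted sum as the read vector.
import numpy as np

def content_read(M, key, beta=5.0):
    sim = (M @ key) / (np.linalg.norm(M, axis=1) * np.linalg.norm(key) + 1e-8)
    w = np.exp(beta * sim)
    w /= w.sum()          # read weighting over memory rows (sums to 1)
    return w @ M          # read vector: convex combination of rows

M = np.random.randn(16, 8)    # 16 memory slots, each of width 8
print(content_read(M, M[3]))  # a key equal to row 3 reads back ~row 3
```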

Journal ArticleDOI
TL;DR: This Critical Review comparatively examines the activation mechanisms of peroxymonosulfate and peroxydisulfate and the formation pathways of oxidizing species, as well as the impacts of water parameters and constituents such as pH, background organic matter, halide, phosphate, and carbonate on persulfate-driven chemistry.
Abstract: Reports that promote persulfate-based advanced oxidation process (AOP) as a viable alternative to hydrogen peroxide-based processes have been rapidly accumulating in recent water treatment literature. Various strategies to activate peroxide bonds in persulfate precursors have been proposed and the capacity to degrade a wide range of organic pollutants has been demonstrated. Compared to traditional AOPs in which hydroxyl radical serves as the main oxidant, persulfate-based AOPs have been claimed to involve different in situ generated oxidants such as sulfate radical and singlet oxygen as well as nonradical oxidation pathways. However, there exist controversial observations and interpretations around some of these claims, challenging robust scientific progress of this technology toward practical use. This Critical Review comparatively examines the activation mechanisms of peroxymonosulfate and peroxydisulfate and the formation pathways of oxidizing species. Properties of the main oxidizing species are scrutinized and the role of singlet oxygen is debated. In addition, the impacts of water parameters and constituents such as pH, background organic matter, halide, phosphate, and carbonate on persulfate-driven chemistry are discussed. The opportunity for niche applications is also presented, emphasizing the need for parallel efforts to remove currently prevalent knowledge roadblocks.

Journal ArticleDOI
23 Sep 2015-PLOS ONE
TL;DR: Racism was associated with poorer mental health, including depression, anxiety, psychological stress and various other outcomes, and the association between racism and negative mental health was significantly stronger for Asian American and Latino(a) American participants compared with African American participants.
Abstract: Despite a growing body of epidemiological evidence in recent years documenting the health impacts of racism, the cumulative evidence base has yet to be synthesized in a comprehensive meta-analysis focused specifically on racism as a determinant of health. This meta-analysis reviewed the literature focusing on the relationship between reported racism and mental and physical health outcomes. Data from 293 studies reported in 333 articles published between 1983 and 2013, and conducted predominately in the U.S., were analysed using random effects models and mean weighted effect sizes. Racism was associated with poorer mental health (negative mental health: r = -.23, 95% CI [-.24,-.21], k = 227; positive mental health: r = -.13, 95% CI [-.16,-.10], k = 113), including depression, anxiety, psychological stress and various other outcomes. Racism was also associated with poorer general health (r = -.13 (95% CI [-.18,-.09], k = 30), and poorer physical health (r = -.09, 95% CI [-.12,-.06], k = 50). Moderation effects were found for some outcomes with regard to study and exposure characteristics. Effect sizes of racism on mental health were stronger in cross-sectional compared with longitudinal data and in non-representative samples compared with representative samples. Age, sex, birthplace and education level did not moderate the effects of racism on health. Ethnicity significantly moderated the effect of racism on negative mental health and physical health: the association between racism and negative mental health was significantly stronger for Asian American and Latino(a) American participants compared with African American participants, and the association between racism and physical health was significantly stronger for Latino(a) American participants compared with African American participants. Protocol PROSPERO registration number: CRD42013005464.
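The pooled estimates above come from random-effects meta-analysis of weighted effect sizes; below is a minimal sketch of the standard DerSimonian-Laird procedure (the input numbers are invented for illustration, and real analyses of correlations typically pool Fisher-z transformed values):

```python
# DerSimonian-Laird random-effects pooling: inverse-variance weights,
# a heterogeneity statistic Q, a between-study variance tau^2, and a
# re-weighted pooled estimate.
import numpy as np

def random_effects(effects, variances):
    w = 1 / variances
    fixed = np.sum(w * effects) / np.sum(w)        # fixed-effect pooled estimate
    Q = np.sum(w * (effects - fixed) ** 2)         # heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(effects) - 1)) / c)  # between-study variance
    w_re = 1 / (variances + tau2)                  # random-effects weights
    return np.sum(w_re * effects) / np.sum(w_re)

r = np.array([-0.25, -0.20, -0.18, -0.30])  # per-study effect sizes (made up)
v = np.array([0.004, 0.006, 0.003, 0.008])  # per-study variances (made up)
print(random_effects(r, v))
```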

Journal ArticleDOI
TL;DR: BRAF V600 appears to be a targetable oncogene in some, but not all, nonmelanoma cancers and preliminary vemurafenib activity was observed in non-small-cell lung cancer and in Erdheim-Chester disease and Langerhans'-cell histiocytosis.
Abstract: Background: BRAF V600 mutations occur in various nonmelanoma cancers. We undertook a histology-independent phase 2 “basket” study of vemurafenib in BRAF V600 mutation–positive nonmelanoma cancers. Methods: We enrolled patients in six prespecified cancer cohorts; patients with all other tumor types were enrolled in a seventh cohort. A total of 122 patients with BRAF V600 mutation–positive cancer were treated, including 27 patients with colorectal cancer who received vemurafenib and cetuximab. The primary end point was the response rate; secondary end points included progression-free and overall survival. Results: In the cohort with non–small-cell lung cancer, the response rate was 42% (95% confidence interval [CI], 20 to 67) and median progression-free survival was 7.3 months (95% CI, 3.5 to 10.8). In the cohort with Erdheim–Chester disease or Langerhans’-cell histiocytosis, the response rate was 43% (95% CI, 18 to 71); the median treatment duration was 5.9 months (range, 0.6 to 18.6), and no patients had disease …

Proceedings ArticleDOI
Guoliang Ji, Shizhu He, Liheng Xu, Kang Liu, Jun Zhao
01 Jul 2015
TL;DR: A more fine-grained model named TransD is proposed, an improvement of TransR/CTransR that considers the diversity of not only relations but also entities, which allows it to be applied to large-scale graphs.
Abstract: Knowledge graphs are useful resources for numerous AI applications, but they are far from complete. Previous work such as TransE, TransH and TransR/CTransR regards a relation as a translation from head entity to tail entity, and CTransR achieves state-of-the-art performance. In this paper, we propose a more fine-grained model named TransD, which is an improvement of TransR/CTransR. In TransD, we use two vectors to represent a named symbol object (entity and relation). The first one represents the meaning of an entity (relation); the other one is used to construct a mapping matrix dynamically. Compared with TransR/CTransR, TransD considers the diversity of not only relations but also entities. TransD has fewer parameters and no matrix-vector multiplication operations, which allows it to be applied to large-scale graphs. In experiments, we evaluate our model on two typical tasks including triplets classification and link prediction. Evaluation results show that our approach outperforms state-of-the-art methods.
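A minimal sketch of TransD's dynamic mapping as described above, with equal entity and relation dimensions for simplicity (all vectors are random stand-ins). The mapping matrix M = r_p e_p^T + I is applied without ever being materialized, which is why no matrix-vector multiplication is needed:

```python
# TransD scoring sketch: each entity/relation has a semantic vector and a
# projection vector; entities are projected into relation space on the fly.
import numpy as np

d = 4                                    # equal entity/relation dims (simplification)
rng = np.random.default_rng(0)
h, h_p = rng.normal(size=d), rng.normal(size=d)  # head: semantic + projection
t, t_p = rng.normal(size=d), rng.normal(size=d)  # tail: semantic + projection
r, r_p = rng.normal(size=d), rng.normal(size=d)  # relation: semantic + projection

def project(e, e_p):
    # e_perp = (r_p e_p^T + I) e, computed without forming the matrix
    return r_p * (e_p @ e) + e

score = np.linalg.norm(project(h, h_p) + r - project(t, t_p)) ** 2
print(score)  # lower score means the triple (h, r, t) is more plausible
```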

Journal ArticleDOI
TL;DR: Therapy with CPAP plus usual care, as compared with usual care alone, did not prevent cardiovascular events in patients with moderate-to-severe obstructive sleep apnea and established cardiovascular disease.
Abstract: Background: Obstructive sleep apnea is associated with an increased risk of cardiovascular events; whether treatment with continuous positive airway pressure (CPAP) prevents major cardiovascular events is uncertain. Methods: After a 1-week run-in period during which the participants used sham CPAP, we randomly assigned 2717 eligible adults between 45 and 75 years of age who had moderate-to-severe obstructive sleep apnea and coronary or cerebrovascular disease to receive CPAP treatment plus usual care (CPAP group) or usual care alone (usual-care group). The primary composite end point was death from cardiovascular causes, myocardial infarction, stroke, or hospitalization for unstable angina, heart failure, or transient ischemic attack. Secondary end points included other cardiovascular outcomes, health-related quality of life, snoring symptoms, daytime sleepiness, and mood. Results: Most of the participants were men who had moderate-to-severe obstructive sleep apnea and minimal sleepiness. In the CPAP group, the…

Journal ArticleDOI
TL;DR: Critical Supply Shortages: U.S. hospitals are already reporting shortages of key equipment needed to care for critically ill patients with Covid-19, including ventilators and personal protective equipment.
Abstract: Critical Supply Shortages: U.S. hospitals are already reporting shortages of key equipment needed to care for critically ill patients with Covid-19, including ventilators and personal protective equipment …

Journal ArticleDOI
TL;DR: In this autopsy series, the authors found that SARS-CoV-2 has an organotropism beyond the respiratory tract, including the kidneys, heart, liver, and brain.
Abstract: Multiorgan and Renal Tropism of SARS-CoV-2: In this autopsy series, the authors found that SARS-CoV-2 has an organotropism beyond the respiratory tract, including the kidneys, heart, liver, and brain …

Journal ArticleDOI
27 Aug 2015-PeerJ
TL;DR: MetaBAT as mentioned in this paper integrates empirical probabilistic distances of genome abundance and tetranucleotide frequency for accurate metagenome binning, and automatically forms hundreds of high-quality genome bins on a very large assembly consisting of millions of contigs.
Abstract: Grouping large genomic fragments assembled from shotgun metagenomic sequences to deconvolute complex microbial communities, or metagenome binning, enables the study of individual organisms and their interactions. Because of the complex nature of these communities, existing metagenome binning methods often miss a large number of microbial species. In addition, most of the tools are not scalable to large datasets. Here we introduce automated software called MetaBAT that integrates empirical probabilistic distances of genome abundance and tetranucleotide frequency for accurate metagenome binning. MetaBAT outperforms alternative methods in accuracy and computational efficiency on both synthetic and real metagenome datasets. It automatically forms hundreds of high quality genome bins on a very large assembly consisting of millions of contigs in a matter of hours on a single node. MetaBAT is open source software and available at https://bitbucket.org/berkeleylab/metabat.
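One of the two signals MetaBAT combines is tetranucleotide frequency (TNF); here is a toy sketch of computing a TNF vector for a contig (the real tool collapses reverse complements down to 136 canonical tetramers; this sketch keeps all 256 for brevity):

```python
# Tetranucleotide frequency (TNF): normalized counts of every 4-mer along a
# contig, a compositional signature that tends to be genome-specific.
from collections import Counter
from itertools import product

def tnf(seq):
    kmers = [seq[i:i + 4] for i in range(len(seq) - 3)]
    counts = Counter(k for k in kmers if set(k) <= set("ACGT"))  # skip Ns etc.
    total = sum(counts.values()) or 1
    return {"".join(p): counts["".join(p)] / total
            for p in product("ACGT", repeat=4)}  # 256-dim frequency vector

v = tnf("ATGCGATACGCTTAGGCTAACGATCGATCGTAGCTAGCTA")
print(round(sum(v.values()), 6), v["ATGC"])      # frequencies sum to 1
```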

Journal ArticleDOI
08 Jan 2015-Nature
TL;DR: It is shown that development of resources in the Arctic and any increase in unconventional oil production are incommensurate with efforts to limit average global warming to 2 °C, and policy makers’ instincts to exploit rapidly and completely their territorial fossil fuels are inconsistent with this temperature limit.
Abstract: Policy makers have generally agreed that the average global temperature rise caused by greenhouse gas emissions should not exceed 2 °C above the average global temperature of pre-industrial times. It has been estimated that to have at least a 50 per cent chance of keeping warming below 2 °C throughout the twenty-first century, the cumulative carbon emissions between 2011 and 2050 need to be limited to around 1,100 gigatonnes of carbon dioxide (Gt CO2). However, the greenhouse gas emissions contained in present estimates of global fossil fuel reserves are around three times higher than this, and so the unabated use of all current fossil fuel reserves is incompatible with a warming limit of 2 °C. Here we use a single integrated assessment model that contains estimates of the quantities, locations and nature of the world's oil, gas and coal reserves and resources, and which is shown to be consistent with a wide variety of modelling approaches with different assumptions, to explore the implications of this emissions limit for fossil fuel production in different regions. Our results suggest that, globally, a third of oil reserves, half of gas reserves and over 80 per cent of current coal reserves should remain unused from 2010 to 2050 in order to meet the target of 2 °C. We show that development of resources in the Arctic and any increase in unconventional oil production are incommensurate with efforts to limit average global warming to 2 °C. Our results show that policy makers' instincts to exploit rapidly and completely their territorial fossil fuels are, in aggregate, inconsistent with their commitments to this temperature limit. Implementation of this policy commitment would also render unnecessary continued substantial expenditure on fossil fuel exploration, because any new discoveries could not lead to increased aggregate production.
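The abstract's headline arithmetic, made explicit: if the emissions embodied in reserves are roughly three times the 1,100 Gt CO2 budget, then about two-thirds of reserves must stay unused, a rough aggregate consistent with the stated one-third of oil, half of gas, and over 80 per cent of coal:

$$ \frac{3 \times 1100 - 1100}{3 \times 1100} \;=\; \frac{2}{3} \;\approx\; 67\%. $$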