Showing papers by "Stanford University" published in 2015
••
TL;DR: The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) is a benchmark in object category classification and detection on hundreds of object categories and millions of images; it has been run annually from 2010 to the present, attracting participation from more than fifty institutions.
Abstract: The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements.
30,811 citations
••
TL;DR: The 1000 Genomes Project set out to provide a comprehensive description of common human genetic variation by applying whole-genome sequencing to a diverse set of individuals from multiple populations, and has reconstructed the genomes of 2,504 individuals from 26 populations using a combination of low-coverage whole-genome sequencing, deep exome sequencing, and dense microarray genotyping.
Abstract: The 1000 Genomes Project set out to provide a comprehensive description of common human genetic variation by applying whole-genome sequencing to a diverse set of individuals from multiple populations. Here we report completion of the project, having reconstructed the genomes of 2,504 individuals from 26 populations using a combination of low-coverage whole-genome sequencing, deep exome sequencing, and dense microarray genotyping. We characterized a broad spectrum of genetic variation, in total over 88 million variants (84.7 million single nucleotide polymorphisms (SNPs), 3.6 million short insertions/deletions (indels), and 60,000 structural variants), all phased onto high-quality haplotypes. This resource includes >99% of SNP variants with a frequency of >1% for a variety of ancestries. We describe the distribution of genetic variation across the global sample, and discuss the implications for common disease studies.
12,661 citations
••
17 Aug 2015
TL;DR: Two simple attentional mechanisms are examined: a global approach which always attends to all source words, and a local one that only looks at a subset of source words at a time; both approaches are shown to be effective on the WMT translation tasks between English and German in both directions.
Abstract: An attentional mechanism has lately been used to improve neural machine translation (NMT) by selectively focusing on parts of the source sentence during translation. However, there has been little work exploring useful architectures for attention-based NMT. This paper examines two simple and effective classes of attentional mechanism: a global approach which always attends to all source words and a local one that only looks at a subset of source words at a time. We demonstrate the effectiveness of both approaches on the WMT translation tasks between English and German in both directions. With local attention, we achieve a significant gain of 5.0 BLEU points over non-attentional systems that already incorporate known techniques such as dropout. Our ensemble model using different attention architectures yields a new state-of-the-art result in the WMT’15 English to German translation task with 25.9 BLEU points, an improvement of 1.0 BLEU points over the existing best system backed by NMT and an n-gram reranker.
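The global/local contrast the abstract describes can be sketched in a few lines of NumPy. This is an illustrative dot-product-score version with made-up dimensions, not the paper's implementation (the paper also explores other scoring functions and a predictive alignment for the local window):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def global_attention(h_t, h_src):
    """Global attention: score the target state h_t against every source state."""
    scores = h_src @ h_t              # one dot-product score per source word
    a = softmax(scores)               # alignment weights over all source words
    return a @ h_src                  # context vector: weighted sum of source states

def local_attention(h_t, h_src, p_t, D=2):
    """Local attention: attend only to a window [p_t-D, p_t+D] around position p_t."""
    lo, hi = max(0, p_t - D), min(len(h_src), p_t + D + 1)
    window = h_src[lo:hi]
    a = softmax(window @ h_t)
    return a @ window

rng = np.random.default_rng(0)
h_src = rng.standard_normal((7, 4))   # 7 source words, hidden size 4
h_t = rng.standard_normal(4)          # current target hidden state
c_global = global_attention(h_t, h_src)
c_local = local_attention(h_t, h_src, p_t=3)
```

Both variants return a context vector of the same size as the hidden state; they differ only in how many source positions contribute to it.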
8,055 citations
••
TL;DR: CIBERSORT outperformed other methods with respect to noise, unknown mixture content and closely related cell types when applied to enumeration of hematopoietic subsets in RNA mixtures from fresh, frozen and fixed tissues, including solid tumors.
Abstract: We introduce CIBERSORT, a method for characterizing cell composition of complex tissues from their gene expression profiles. When applied to enumeration of hematopoietic subsets in RNA mixtures from fresh, frozen and fixed tissues, including solid tumors, CIBERSORT outperformed other methods with respect to noise, unknown mixture content and closely related cell types. CIBERSORT should enable large-scale analysis of RNA mixtures for cellular biomarkers and therapeutic targets (http://cibersort.stanford.edu/).
6,967 citations
••
TL;DR: This study showed that mismatch-repair status predicted clinical benefit of immune checkpoint blockade with pembrolizumab, and high somatic mutation loads were associated with prolonged progression-free survival.
Abstract: Background: Somatic mutations have the potential to encode “non-self” immunogenic antigens. We hypothesized that tumors with a large number of somatic mutations due to mismatch-repair defects may be susceptible to immune checkpoint blockade. Methods: We conducted a phase 2 study to evaluate the clinical activity of pembrolizumab, an anti–programmed death 1 immune checkpoint inhibitor, in 41 patients with progressive metastatic carcinoma with or without mismatch-repair deficiency. Pembrolizumab was administered intravenously at a dose of 10 mg per kilogram of body weight every 14 days in patients with mismatch repair–deficient colorectal cancers, patients with mismatch repair–proficient colorectal cancers, and patients with mismatch repair–deficient cancers that were not colorectal. The coprimary end points were the immune-related objective response rate and the 20-week immune-related progression-free survival rate. Results: The immune-related objective response rate and immune-related progression-free survival ...
6,835 citations
••
TL;DR: The Global Burden of Disease Study 2013 (GBD 2013) used the GBD 2010 methods, with some refinements to improve accuracy, applied to an updated database of vital registration, survey, and census data.
5,792 citations
••
Mohammad H. Forouzanfar, Lily Alexander, H. Ross Anderson, Victoria F. Bachman, +733 more • Institutions (289)
TL;DR: The Global Burden of Diseases, Injuries, and Risk Factors Study 2013 (GBD 2013) provides a timely opportunity to update the comparative risk assessment with new data for exposure, relative risks, and evidence on the appropriate counterfactual risk distribution.
5,668 citations
••
Alexander A. Aarts, Joanna E. Anderson, Christopher J. Anderson, Peter Raymond Attridge, +287 more • Institutions (116)
TL;DR: A large-scale assessment of 100 replications suggests that the reproducibility of experimental psychology is lower than commonly assumed; correlational tests suggest that replication success was better predicted by the strength of the original evidence than by characteristics of the original and replication teams.
Abstract: Reproducibility is a defining feature of science, but the extent to which it characterizes current research is unknown. We conducted replications of 100 experimental and correlational studies published in three psychology journals using high-powered designs and original materials when available. Replication effects were half the magnitude of original effects, representing a substantial decline. Ninety-seven percent of original studies had statistically significant results. Thirty-six percent of replications had statistically significant results; 47% of original effect sizes were in the 95% confidence interval of the replication effect size; 39% of effects were subjectively rated to have replicated the original result; and if no bias in original results is assumed, combining original and replication results left 68% with statistically significant effects. Correlational tests suggest that replication success was better predicted by the strength of original evidence than by characteristics of the original and replication teams.
5,532 citations
••
Memorial Sloan Kettering Cancer Center; Institut Gustave Roussy; Harvard University; Roswell Park Cancer Institute; Johns Hopkins University; Stanford University; University of Washington; Vanderbilt University; Fox Chase Cancer Center; Macquarie University; Aarhus University; University of Helsinki; The Royal Marsden NHS Foundation Trust; University of Duisburg-Essen; Niigata University; Swansea University; University of British Columbia; Bristol-Myers Squibb; University of Texas MD Anderson Cancer Center
TL;DR: Overall survival was longer and fewer grade 3 or 4 adverse events occurred with nivolumab than with everolimus among patients with previously treated advanced renal-cell carcinoma.
Abstract: Background: Nivolumab, a programmed death 1 (PD-1) checkpoint inhibitor, was associated with encouraging overall survival in uncontrolled studies involving previously treated patients with advanced renal-cell carcinoma. This randomized, open-label, phase 3 study compared nivolumab with everolimus in patients with renal-cell carcinoma who had received previous treatment. Methods: A total of 821 patients with advanced clear-cell renal-cell carcinoma for which they had received previous treatment with one or two regimens of antiangiogenic therapy were randomly assigned (in a 1:1 ratio) to receive 3 mg of nivolumab per kilogram of body weight intravenously every 2 weeks or a 10-mg everolimus tablet orally once daily. The primary end point was overall survival. The secondary end points included the objective response rate and safety. Results: The median overall survival was 25.0 months (95% confidence interval [CI], 21.8 to not estimable) with nivolumab and 19.6 months (95% CI, 17.6 to 23.1) with everolimus. The haz...
4,643 citations
••
TL;DR: The Global Burden of Disease Study 2013 (GBD 2013) estimated the burden of acute and chronic diseases and injuries for 188 countries between 1990 and 2013.
4,510 citations
••
University of California, Los Angeles; University of Calgary; University of Duisburg-Essen; University at Buffalo; University of Toronto; Stanford University; University of Missouri–Kansas City; Heidelberg University; University of Kiel; University of Pittsburgh; University of Bern; Emory University; University of Tennessee; Rush University Medical Center; Goethe University Frankfurt
TL;DR: In patients receiving intravenous t-PA for acute ischemic stroke, thrombectomy with the use of a stent retriever within 6 hours after onset improved functional outcomes at 90 days.
Abstract: BACKGROUND Among patients with acute ischemic stroke due to occlusions in the proximal anterior intracranial circulation, less than 40% regain functional independence when treated with intravenous tissue plasminogen activator (t-PA) alone. Thrombectomy with the use of a stent retriever, in addition to intravenous t-PA, increases reperfusion rates and may improve long-term functional outcome. METHODS We randomly assigned eligible patients with stroke who were receiving or had received intravenous t-PA to continue with t-PA alone (control group) or to undergo endovascular thrombectomy with the use of a stent retriever within 6 hours after symptom onset (intervention group). Patients had confirmed occlusions in the proximal anterior intracranial circulation and an absence of large ischemic-core lesions. The primary outcome was the severity of global disability at 90 days, as assessed by means of the modified Rankin scale (with scores ranging from 0 [no symptoms] to 6 [death]). RESULTS The study was stopped early because of efficacy. At 39 centers, 196 patients underwent randomization (98 patients in each group). In the intervention group, the median time from qualifying imaging to groin puncture was 57 minutes, and the rate of substantial reperfusion at the end of the procedure was 88%. Thrombectomy with the stent retriever plus intravenous t-PA reduced disability at 90 days over the entire range of scores on the modified Rankin scale (P<0.001). The rate of functional independence (modified Rankin scale score, 0 to 2) was higher in the intervention group than in the control group (60% vs. 35%, P<0.001). There were no significant between-group differences in 90-day mortality (9% vs. 12%, P = 0.50) or symptomatic intracranial hemorrhage (0% vs. 3%, P = 0.12). 
CONCLUSIONS In patients receiving intravenous t-PA for acute ischemic stroke due to occlusions in the proximal anterior intracranial circulation, thrombectomy with a stent retriever within 6 hours after onset improved functional outcomes at 90 days. (Funded by Covidien; SWIFT PRIME ClinicalTrials.gov number, NCT01657461.)
••
07 Jun 2015
TL;DR: A model that generates natural language descriptions of images and their regions is presented, based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding.
Abstract: We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state-of-the-art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions significantly outperform retrieval baselines on both full images and on a new dataset of region-level annotations.
••
07 Dec 2015
TL;DR: In this paper, the authors proposed a method to reduce the storage and computation required by neural networks by an order of magnitude, without affecting accuracy, by learning only the important connections using a three-step method.
Abstract: Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the total number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.
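The three-step prune-and-retrain procedure can be illustrated on a toy weight matrix. The threshold choice, learning rate, and stand-in gradient below are assumptions made for the sketch, not the paper's training setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Step 1 (train): a stand-in for a trained layer's weight matrix.
W = rng.standard_normal((8, 8))

# Step 2 (prune): remove connections whose magnitude falls below a threshold,
# here chosen to drop roughly the weakest 90% of weights.
threshold = np.quantile(np.abs(W), 0.9)
mask = np.abs(W) >= threshold
W_pruned = W * mask

# Step 3 (retrain): fine-tune only the surviving weights; gradient updates
# are masked so that pruned connections stay exactly at zero.
grad = rng.standard_normal(W.shape)       # stand-in gradient from fine-tuning
W_pruned -= 0.01 * (grad * mask)

sparsity = 1.0 - mask.mean()              # fraction of connections removed
```

In the paper the mask comes from magnitude after full training and retraining restores accuracy; here the point is only the mechanics of mask, prune, and masked update.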
••
University Hospital Bonn; University of California, Riverside; Harvard University; Case Western Reserve University; University of Illinois at Chicago; European Institute; Stanford University; VA Palo Alto Healthcare System; Spanish National Research Council; Cleveland Clinic Lerner Research Institute; Hong Kong University of Science and Technology; University of California, Los Angeles; University of Southern Denmark; University of Cambridge; University of the Basque Country; Ikerbasque; University of Manchester; RIKEN Brain Science Institute; University of Eastern Finland; University of Bonn; University of Massachusetts Medical School; Center of Advanced European Studies and Research; University of Southern California; University of South Florida; Duke University; Southampton General Hospital; University of Southampton; Moorgreen Hospital; Louisiana State University; Imperial College London; Centre national de la recherche scientifique; Karolinska Institutet; Max Planck Society; University of Tübingen; University of Groningen; University of Colorado Denver; Douglas Mental Health University Institute
TL;DR: Genome-wide analysis suggests that several genes that increase the risk for sporadic Alzheimer's disease encode factors that regulate glial clearance of misfolded proteins and the inflammatory reaction.
Abstract: Increasing evidence suggests that Alzheimer's disease pathogenesis is not restricted to the neuronal compartment, but includes strong interactions with immunological mechanisms in the brain. Misfolded and aggregated proteins bind to pattern recognition receptors on microglia and astroglia, and trigger an innate immune response characterised by release of inflammatory mediators, which contribute to disease progression and severity. Genome-wide analysis suggests that several genes that increase the risk for sporadic Alzheimer's disease encode factors that regulate glial clearance of misfolded proteins and the inflammatory reaction. External factors, including systemic inflammation and obesity, are likely to interfere with immunological processes of the brain and further promote disease progression. Modulation of risk factors and targeting of these immune mechanisms could lead to future therapeutic or preventive strategies for Alzheimer's disease.
••
TL;DR: The process of developing specific advice for the reporting of systematic reviews that incorporate network meta-analyses is described, and the guidance generated from this process is presented.
Abstract: The PRISMA statement is a reporting guideline designed to improve the completeness of reporting of systematic reviews and meta-analyses. Authors have used this guideline worldwide to prepare their reviews for publication. In the past, these reports typically compared 2 treatment alternatives. With the evolution of systematic reviews that compare multiple treatments, some of them only indirectly, authors face novel challenges for conducting and reporting their reviews. This extension of the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) statement was developed specifically to improve the reporting of systematic reviews incorporating network meta-analyses.
••
Technische Universität München; ETH Zurich; University of Bern; Harvard University; National Institutes of Health; University of Debrecen; University Hospital Heidelberg; McGill University; University of Pennsylvania; French Institute for Research in Computer Science and Automation; University at Buffalo; Microsoft; University of Cambridge; Stanford University; University of Virginia; Imperial College London; Massachusetts Institute of Technology; Columbia University; Sabancı University; Old Dominion University; RMIT University; Purdue University; General Electric
TL;DR: The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) was organized in conjunction with the MICCAI 2012 and 2013 conferences; twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients.
Abstract: In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS), organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients—manually annotated by up to four raters—and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%–85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.
••
21 Aug 2015
TL;DR: The Stanford Natural Language Inference (SNLI) corpus is a large-scale collection of labeled sentence pairs, written by humans performing a novel grounded task based on image captioning.
Abstract: Understanding entailment and contradiction is fundamental to understanding natural language, and inference about entailment and contradiction is a valuable testing ground for the development of semantic representations. However, machine learning research in this area has been dramatically limited by the lack of large-scale resources. To address this, we introduce the Stanford Natural Language Inference corpus, a new, freely available collection of labeled sentence pairs, written by humans doing a novel grounded task based on image captioning. At 570K pairs, it is two orders of magnitude larger than all other resources of its type. This increase in scale allows lexicalized classifiers to outperform some sophisticated existing entailment models, and it allows a neural network-based model to perform competitively on natural language inference benchmarks for the first time.
••
TL;DR: In virtually all medical domains, diagnostic and prognostic multivariable prediction models are being developed, validated, updated, and implemented with the aim of assisting doctors and individuals in estimating probabilities and potentially influencing their decision making.
Abstract: The TRIPOD (Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis) Statement includes a 22-item checklist, which aims to improve the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. This explanation and elaboration document describes the rationale; clarifies the meaning of each item; and discusses why transparent reporting is important, with a view to assessing risk of bias and clinical usefulness of the prediction model. Each checklist item of the TRIPOD Statement is explained in detail and accompanied by published examples of good reporting. The document also provides a valuable reference of issues to consider when designing, conducting, and analyzing prediction model studies. To aid the editorial process and help peer reviewers and, ultimately, readers and systematic reviewers of prediction model studies, it is recommended that authors include a completed checklist in their submission. The TRIPOD checklist can also be downloaded from www.tripod-statement.org.
••
Veterans Health Administration; Medical University of Vienna; University of Minnesota; University of Regensburg; National Institutes of Health; Mayo Clinic; Harvard University; Fred Hutchinson Cancer Research Center; University of British Columbia; Stanford University; University of Michigan; Johns Hopkins University
TL;DR: In this article, the authors proposed a new clinical scoring system (0-3) that describes the extent and severity of chronic graft-versus-host disease for each organ or site at any given time, taking functional impact into account.
••
TL;DR: A standard protocol is used as a primary screen for evaluating the activity, short-term (2 h) stability, and electrochemically active surface area (ECSA) of 18 electrocatalysts for the hydrogen evolution reaction (HER) and 26 electrocatalysts for the oxygen evolution reaction (OER) under conditions relevant to an integrated solar water-splitting device in aqueous acidic or alkaline solution.
Abstract: Objective comparisons of electrocatalyst activity and stability using standard methods under identical conditions are necessary to evaluate the viability of existing electrocatalysts for integration into solar-fuel devices as well as to help inform the development of new catalytic systems. Herein, we use a standard protocol as a primary screen for evaluating the activity, short-term (2 h) stability, and electrochemically active surface area (ECSA) of 18 electrocatalysts for the hydrogen evolution reaction (HER) and 26 electrocatalysts for the oxygen evolution reaction (OER) under conditions relevant to an integrated solar water-splitting device in aqueous acidic or alkaline solution. Our primary figure of merit is the overpotential necessary to achieve a magnitude current density of 10 mA cm–2 per geometric area, the approximate current density expected for a 10% efficient solar-to-fuels conversion device under 1 sun illumination. The specific activity per ECSA of each material is also reported. Among HER...
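The benchmark's figure of merit, the overpotential needed to reach a current density of 10 mA cm⁻², is simply the difference between the measured electrode potential at that current density and the reaction's equilibrium potential. The measured value below is an illustrative assumption, not a result from the study:

```python
# Overpotential for the HER at the benchmark current density of 10 mA cm^-2.
# The measured potential is an invented example value, not the paper's data.
E_eq_HER = 0.0       # V vs. RHE: equilibrium potential for 2H+ + 2e- -> H2
E_measured = -0.12   # V vs. RHE: potential at which this electrode reaches 10 mA cm^-2
overpotential = abs(E_measured - E_eq_HER)   # magnitude of the driving force, in volts
```

A smaller overpotential at the target current density means a more active catalyst, which is why the study ranks materials by this single number.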
••
28 Feb 2015
TL;DR: The authors introduced the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies, which outperformed all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
Abstract: A Long Short-Term Memory (LSTM) network is a type of recurrent neural network architecture which has recently obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that naturally combine words into phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
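A child-sum Tree-LSTM node can be sketched as below. The general recipe (gates computed from the sum of child hidden states, plus one forget gate per child) follows the abstract's description, but the dimensions, initialization, and omission of details like dropout are simplifying assumptions, not the paper's code:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

d = 4  # toy hidden size
rng = np.random.default_rng(2)
# One weight matrix pair and bias per gate: input (i), output (o), forget (f), update (u).
W = {g: rng.standard_normal((d, d)) * 0.1 for g in "iofu"}
U = {g: rng.standard_normal((d, d)) * 0.1 for g in "iofu"}
b = {g: np.zeros(d) for g in "iofu"}

def child_sum_cell(x, children):
    """One Tree-LSTM node. children: list of (h, c) pairs (empty at leaves)."""
    h_sum = sum((h for h, _ in children), np.zeros(d))
    i = sigmoid(W["i"] @ x + U["i"] @ h_sum + b["i"])
    o = sigmoid(W["o"] @ x + U["o"] @ h_sum + b["o"])
    u = np.tanh(W["u"] @ x + U["u"] @ h_sum + b["u"])
    c = i * u
    # One forget gate per child, computed from that child's own hidden state.
    for h_k, c_k in children:
        f_k = sigmoid(W["f"] @ x + U["f"] @ h_k + b["f"])
        c = c + f_k * c_k
    h = o * np.tanh(c)
    return h, c

# Compose a two-leaf tree bottom-up, as a parser-produced tree would be processed.
leaf1 = child_sum_cell(rng.standard_normal(d), [])
leaf2 = child_sum_cell(rng.standard_normal(d), [])
root = child_sum_cell(rng.standard_normal(d), [leaf1, leaf2])
```

With exactly one child this reduces to the ordinary chain LSTM, which is why the Tree-LSTM is called a generalization.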
••
Stanford University; Baylor College of Medicine; University of Pittsburgh; University of California, Los Angeles; Sapienza University of Rome; Loyola University Chicago; University of Texas at Austin; University of Texas Southwestern Medical Center; Boston Children's Hospital; University of Chicago; Johns Hopkins University School of Medicine; Georgetown University; University of Toronto; Gannon University; American Academy of Pediatrics; University of Louisville; University of Washington; Eastern Virginia Medical School
TL;DR: This scientifically rigorous update to the National Sleep Foundation's sleep duration recommendations determined expert consensus recommendations for sufficient sleep durations across the lifespan using the RAND/UCLA Appropriateness Method.
••
TL;DR: Estimates of extinction rates reveal an exceptionally rapid loss of biodiversity over the last few centuries, indicating that a sixth mass extinction is already under way and a window of opportunity is rapidly closing.
Abstract: The oft-repeated claim that Earth’s biota is entering a sixth “mass extinction” depends on clearly demonstrating that current extinction rates are far above the “background” rates prevailing between the five previous mass extinctions. Earlier estimates of extinction rates have been criticized for using assumptions that might overestimate the severity of the extinction crisis. We assess, using extremely conservative assumptions, whether human activities are causing a mass extinction. First, we use a recent estimate of a background rate of 2 mammal extinctions per 10,000 species per 100 years (that is, 2 E/MSY), which is twice as high as widely used previous estimates. We then compare this rate with the current rate of mammal and vertebrate extinctions. The latter is conservatively low because listing a species as extinct requires meeting stringent criteria. Even under our assumptions, which would tend to minimize evidence of an incipient mass extinction, the average rate of vertebrate species loss over the last century is up to 100 times higher than the background rate. Under the 2 E/MSY background rate, the number of species that have gone extinct in the last century would have taken, depending on the vertebrate taxon, between 800 and 10,000 years to disappear. These estimates reveal an exceptionally rapid loss of biodiversity over the last few centuries, indicating that a sixth mass extinction is already under way. Averting a dramatic decay of biodiversity and the subsequent loss of ecosystem services is still possible through intensified conservation efforts, but that window of opportunity is rapidly closing.
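The E/MSY arithmetic behind the comparison is straightforward unit conversion. The background rate below is the abstract's 2 extinctions per 10,000 species per 100 years; the taxon size and extinction count are invented purely to illustrate the calculation, not the paper's data:

```python
# Background rate from the abstract: 2 extinctions per 10,000 species per 100 years,
# expressed in E/MSY (extinctions per million species-years).
background_E_MSY = 2 / 10_000 / 100 * 1_000_000   # = 2 E/MSY

# Illustrative comparison (hypothetical numbers): suppose a taxon with
# 40,000 species lost 80 species over the last 100 years.
species, extinctions, years = 40_000, 80, 100
observed_E_MSY = extinctions / (species * years / 1_000_000)
rate_ratio = observed_E_MSY / background_E_MSY     # how far above background
```

Under these toy numbers the observed rate is 20 E/MSY, ten times the background rate; the paper's conservative estimate for vertebrates over the last century reaches up to 100 times background.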
••
TL;DR: This method probes DNA accessibility with hyperactive Tn5 transposase, which inserts sequencing adapters into accessible regions of chromatin, which can be used to infer regions of increased accessibility, as well as to map regions of transcription‐factor binding and nucleosome position.
Abstract: This unit describes Assay for Transposase-Accessible Chromatin with high-throughput sequencing (ATAC-seq), a method for mapping chromatin accessibility genome-wide. This method probes DNA accessibility with hyperactive Tn5 transposase, which inserts sequencing adapters into accessible regions of chromatin. Sequencing reads can then be used to infer regions of increased accessibility, as well as to map regions of transcription-factor binding and nucleosome position. The method is a fast and sensitive alternative to DNase-seq for assaying chromatin accessibility genome-wide, or to MNase-seq for assaying nucleosome positions in accessible regions of the genome.
••
TL;DR: CLUMPAK, available at http://clumpak.tau.ac.il, simplifies model-based analyses of population structure in population genetics and molecular ecology by automating the postprocessing of results from such analyses.
Abstract: The identification of the genetic structure of populations from multilocus genotype data has become a central component of modern population-genetic data analysis. Application of model-based clustering programs often entails a number of steps, in which the user considers different modelling assumptions, compares results across different predetermined values of the number of assumed clusters (a parameter typically denoted K), examines multiple independent runs for each fixed value of K, and distinguishes among runs belonging to substantially distinct clustering solutions. Here, we present CLUMPAK (Cluster Markov Packager Across K), a method that automates the postprocessing of results of model-based population structure analyses. For analysing multiple independent runs at a single K value, CLUMPAK identifies sets of highly similar runs, separating distinct groups of runs that represent distinct modes in the space of possible solutions. This procedure, which generates a consensus solution for each distinct mode, is performed by the use of a Markov clustering algorithm that relies on a similarity matrix between replicate runs, as computed by the software CLUMPP. Next, CLUMPAK identifies an optimal alignment of inferred clusters across different values of K, extending a similar approach implemented for a fixed K in CLUMPP and simplifying the comparison of clustering results across different K values. CLUMPAK incorporates additional features, such as implementations of methods for choosing K and comparing solutions obtained by different programs, models, or data subsets. CLUMPAK, available at http://clumpak.tau.ac.il, simplifies the use of model-based analyses of population structure in population genetics and molecular ecology.
••
Boston Children's Hospital, Harvard University, King's College London, Lund University, Massachusetts Eye and Ear Infirmary, University of São Paulo, University of California, San Diego, Imperial College London, Partners In Health, Brigham and Women's Hospital, Royal North Shore Hospital, Medical College of Wisconsin, Nanyang Technological University, Monash University, University of Sierra Leone, University of Oxford, Mongolian National University, University of Malawi, Flinders University, Beth Israel Deaconess Medical Center, Bhabha Atomic Research Centre, Royal Australasian College of Surgeons, Stanford University, University of California, San Francisco
TL;DR: The need for surgical services in low- and middle-income countries (LMICs) will continue to rise substantially from now until 2030, with a large projected increase in the incidence of cancer, road traffic injuries, and cardiovascular and metabolic diseases in LMICs.
••
TL;DR: A pan-cancer resource and meta-analysis of expression signatures from ∼18,000 human tumors with overall survival outcomes across 39 malignancies is presented, and it is found that expression of favorably prognostic genes, including KLRB1 (encoding CD161), largely reflects tumor-associated leukocytes.
Abstract: Molecular profiles of tumors and tumor-associated cells hold great promise as biomarkers of clinical outcomes. However, existing data sets are fragmented and difficult to analyze systematically. Here we present a pan-cancer resource and meta-analysis of expression signatures from ∼18,000 human tumors with overall survival outcomes across 39 malignancies. By using this resource, we identified a forkhead box M1 (FOXM1) regulatory network as a major predictor of adverse outcomes, and we found that expression of favorably prognostic genes, including KLRB1 (encoding CD161), largely reflects tumor-associated leukocytes. By applying CIBERSORT, a computational approach for inferring leukocyte representation in bulk tumor transcriptomes, we identified complex associations between 22 distinct leukocyte subsets and cancer survival. For example, tumor-associated neutrophil and plasma cell signatures emerged as significant but opposite predictors of survival for diverse solid tumors, including breast and lung adenocarcinomas. This resource and associated analytical tools (http://precog.stanford.edu) may help delineate prognostic genes and leukocyte subsets within and across cancers, shed light on the impact of tumor heterogeneity on cancer outcomes, and facilitate the discovery of biomarkers and therapeutic targets.
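The core idea behind inferring leukocyte representation from bulk transcriptomes is deconvolution: a bulk profile is modeled as a nonnegative mixture of reference cell-type signatures. The sketch below illustrates that idea only; CIBERSORT itself uses nu-support-vector regression against its LM22 signature matrix, whereas this toy stand-in (function name `deconvolve`, example matrices all hypothetical) uses non-negative least squares.

```python
import numpy as np
from scipy.optimize import nnls

def deconvolve(signature, bulk):
    """Estimate cell-subset fractions from one bulk expression profile.

    signature: (n_genes, n_cell_types) reference expression profiles.
    bulk:      (n_genes,) mixture expression vector.
    Solves bulk ≈ signature @ fractions with fractions >= 0, then
    rescales so the fractions sum to 1.
    """
    coefs, _residual = nnls(signature, bulk)
    total = coefs.sum()
    return coefs / total if total > 0 else coefs
```

On a synthetic mixture built from the signature matrix itself, the known fractions are recovered exactly; real bulk tumor data adds noise, unmodeled tumor-cell expression, and collinear signatures, which is why the published method uses a more robust regression.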
••
TL;DR: A review of the current status and future prospects of the field of emotion regulation can be found in this paper, in which the author defines emotion and emotion regulation and distinguishes both from related constructs.
Abstract: One of the fastest growing areas within psychology is the field of emotion regulation. However, enthusiasm for this topic continues to outstrip conceptual clarity, and there remains considerable uncertainty as to what is even meant by “emotion regulation.” The goal of this review is to examine the current status and future prospects of this rapidly growing field. In the first section, I define emotion and emotion regulation and distinguish both from related constructs. In the second section, I use the process model of emotion regulation to selectively review evidence that different regulation strategies have different consequences. In the third section, I introduce the extended process model of emotion regulation; this model considers emotion regulation to be one type of valuation, and distinguishes three emotion regulation stages (identification, selection, implementation). In the final section, I consider five key growth points for the field of emotion regulation.
••
TL;DR: Mismatch repair (MMR) status predicts clinical benefit of immune checkpoint blockade with pembrolizumab, and high total somatic mutation loads were associated with progression-free survival.
Abstract: LBA100 Background: Somatic mutations have the potential to be recognized as “non-self” immunogenic antigens. Tumors with genetic defects in mismatch repair (MMR) harbor many more mutations than tumors of the same type without such repair defects. We hypothesized that tumors with mismatch repair defects would therefore be particularly susceptible to immune checkpoint blockade. Methods: We conducted a phase II study to evaluate the clinical activity of anti-PD-1, pembrolizumab, in 41 patients with previously-treated, progressive metastatic disease with and without MMR-deficiency. Pembrolizumab was administered at 10 mg/kg intravenously every 14 days to three cohorts of patients: those with MMR-deficient colorectal cancers (CRCs) (N = 11); those with MMR-proficient CRCs (N = 21), and those with MMR-deficient cancers of types other than colorectal (N = 9). The co-primary endpoints were immune-related objective response rate (irORR) and immune-related progression-free survival (irPFS) at 20 weeks. Results: The...
••
07 Dec 2015
TL;DR: In this article, a new form of convolutional neural network that combines the strengths of Convolutional Neural Networks (CNNs) and Conditional Random Fields (CRFs)-based probabilistic graphical modelling is introduced.
Abstract: Pixel-level labelling tasks, such as semantic segmentation, play a central role in image understanding. Recent approaches have attempted to harness the capabilities of deep learning techniques for image recognition to tackle pixel-level labelling tasks. One central issue in this methodology is the limited capacity of deep learning techniques to delineate visual objects. To solve this problem, we introduce a new form of convolutional neural network that combines the strengths of Convolutional Neural Networks (CNNs) and Conditional Random Fields (CRFs)-based probabilistic graphical modelling. To this end, we formulate Conditional Random Fields with Gaussian pairwise potentials and mean-field approximate inference as Recurrent Neural Networks. This network, called CRF-RNN, is then plugged in as a part of a CNN to obtain a deep network that has desirable properties of both CNNs and CRFs. Importantly, our system fully integrates CRF modelling with CNNs, making it possible to train the whole deep network end-to-end with the usual back-propagation algorithm, avoiding offline post-processing methods for object delineation. We apply the proposed method to the problem of semantic image segmentation, obtaining top results on the challenging Pascal VOC 2012 segmentation benchmark.
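The mean-field inference that CRF-RNN unrolls can be sketched as a short recurrence: each iteration passes messages through a pairwise kernel, applies a label-compatibility transform, and renormalizes against the unary (CNN) scores. The code below is a toy dense-matrix illustration under stated simplifications, not the paper's implementation: the real system filters with a permutohedral lattice over per-pixel Gaussian features, learns the kernel weights and compatibility matrix by back-propagation, and excludes each pixel's message to itself (all matrices and the names `mean_field_step`/`crf_rnn` here are hypothetical).

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mean_field_step(q, unary, kernel, compat):
    """One mean-field update of a dense CRF (the step CRF-RNN unrolls).

    q:      (n_pixels, n_labels) current marginal estimates
    unary:  (n_pixels, n_labels) unary scores, i.e. the CNN's logits
    kernel: (n_pixels, n_pixels) pairwise affinities (toy dense form;
            includes self-affinity, which the exact update would drop)
    compat: (n_labels, n_labels) label-compatibility penalties
    """
    message = kernel @ q          # message passing (filtering step)
    pairwise = message @ compat   # compatibility transform
    return softmax(unary - pairwise, axis=1)  # local update + normalize

def crf_rnn(unary, kernel, compat, iters=5):
    """Run a fixed number of mean-field iterations, RNN-style."""
    q = softmax(unary, axis=1)
    for _ in range(iters):        # the unrolled "recurrent" iterations
        q = mean_field_step(q, unary, kernel, compat)
    return q
```

Because every operation is differentiable, gradients flow through all unrolled iterations back into the unary-producing CNN, which is what lets the paper train the whole CNN+CRF network end-to-end with ordinary back-propagation.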