
Showing papers by "Stanford University published in 2015"


Journal ArticleDOI
TL;DR: The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) as mentioned in this paper is a benchmark in object category classification and detection on hundreds of object categories and millions of images, which has been run annually from 2010 to present, attracting participation from more than fifty institutions.
Abstract: The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements.

30,811 citations


Journal ArticleDOI
Adam Auton1, Gonçalo R. Abecasis2, David Altshuler3, Richard Durbin4  +514 moreInstitutions (90)
01 Oct 2015-Nature
TL;DR: The 1000 Genomes Project set out to provide a comprehensive description of common human genetic variation by applying whole-genome sequencing to a diverse set of individuals from multiple populations, and has reconstructed the genomes of 2,504 individuals from 26 populations using a combination of low-coverage whole-genome sequencing, deep exome sequencing, and dense microarray genotyping.
Abstract: The 1000 Genomes Project set out to provide a comprehensive description of common human genetic variation by applying whole-genome sequencing to a diverse set of individuals from multiple populations. Here we report completion of the project, having reconstructed the genomes of 2,504 individuals from 26 populations using a combination of low-coverage whole-genome sequencing, deep exome sequencing, and dense microarray genotyping. We characterized a broad spectrum of genetic variation, in total over 88 million variants (84.7 million single nucleotide polymorphisms (SNPs), 3.6 million short insertions/deletions (indels), and 60,000 structural variants), all phased onto high-quality haplotypes. This resource includes >99% of SNP variants with a frequency of >1% for a variety of ancestries. We describe the distribution of genetic variation across the global sample, and discuss the implications for common disease studies.

12,661 citations


Proceedings ArticleDOI
17 Aug 2015
TL;DR: A global approach which always attends to all source words and a local one that only looks at a subset of source words at a time are examined, demonstrating the effectiveness of both approaches on the WMT translation tasks between English and German in both directions.
Abstract: An attentional mechanism has lately been used to improve neural machine translation (NMT) by selectively focusing on parts of the source sentence during translation. However, there has been little work exploring useful architectures for attention-based NMT. This paper examines two simple and effective classes of attentional mechanism: a global approach which always attends to all source words and a local one that only looks at a subset of source words at a time. We demonstrate the effectiveness of both approaches on the WMT translation tasks between English and German in both directions. With local attention, we achieve a significant gain of 5.0 BLEU points over non-attentional systems that already incorporate known techniques such as dropout. Our ensemble model using different attention architectures yields a new state-of-the-art result in the WMT’15 English to German translation task with 25.9 BLEU points, an improvement of 1.0 BLEU points over the existing best system backed by NMT and an n-gram reranker.
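As a concrete illustration of the global approach described in this abstract (attend to all source words at once), here is a minimal NumPy sketch of dot-product attention; the shapes and toy vectors are illustrative, not from the paper:

```python
import numpy as np

def global_attention(h_t, source_states):
    """Global (dot-product) attention over all source hidden states.

    h_t: (d,) current target hidden state
    source_states: (S, d) hidden states, one per source word
    Returns the context vector and the alignment weights.
    """
    scores = source_states @ h_t              # (S,) one score per source word
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    context = weights @ source_states         # (d,) weighted average of states
    return context, weights

# Toy example: 3 source words, hidden size 2.
src = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
ctx, w = global_attention(np.array([1.0, 0.0]), src)
```

The local approach in the paper restricts the same computation to a window of source positions around a predicted alignment point rather than all S states.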

8,055 citations


Journal ArticleDOI
TL;DR: CIBERSORT outperformed other methods with respect to noise, unknown mixture content and closely related cell types when applied to enumeration of hematopoietic subsets in RNA mixtures from fresh, frozen and fixed tissues, including solid tumors.
Abstract: We introduce CIBERSORT, a method for characterizing cell composition of complex tissues from their gene expression profiles. When applied to enumeration of hematopoietic subsets in RNA mixtures from fresh, frozen and fixed tissues, including solid tumors, CIBERSORT outperformed other methods with respect to noise, unknown mixture content and closely related cell types. CIBERSORT should enable large-scale analysis of RNA mixtures for cellular biomarkers and therapeutic targets (http://cibersort.stanford.edu/).
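The deconvolution idea behind this abstract, recovering cell-type fractions from a bulk expression profile and a signature matrix, can be sketched with a simplified least-squares stand-in; CIBERSORT itself uses nu-support-vector regression, and the signature values below are invented for illustration:

```python
import numpy as np

# Hypothetical signature matrix: expression of 4 marker genes (rows)
# across 2 cell types (columns). Not real CIBERSORT data.
signature = np.array([
    [10.0, 1.0],
    [8.0, 2.0],
    [1.0, 9.0],
    [2.0, 10.0],
])

true_fractions = np.array([0.7, 0.3])
mixture = signature @ true_fractions  # bulk expression of the mixture

# Simplified deconvolution: ordinary least squares, then clip to
# non-negative values and renormalize to fractions.
est, *_ = np.linalg.lstsq(signature, mixture, rcond=None)
est = np.clip(est, 0, None)
est /= est.sum()
```

On this noiseless toy mixture the estimate recovers the true fractions exactly; the paper's contribution is robustness to noise, unknown content, and closely related cell types, which plain least squares lacks.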

6,967 citations


Journal ArticleDOI
TL;DR: This study showed that mismatch-repair status predicted clinical benefit of immune checkpoint blockade with pembrolizumab, and high somatic mutation loads were associated with prolonged progression-free survival.
Abstract: Background: Somatic mutations have the potential to encode “non-self” immunogenic antigens. We hypothesized that tumors with a large number of somatic mutations due to mismatch-repair defects may be susceptible to immune checkpoint blockade. Methods: We conducted a phase 2 study to evaluate the clinical activity of pembrolizumab, an anti–programmed death 1 immune checkpoint inhibitor, in 41 patients with progressive metastatic carcinoma with or without mismatch-repair deficiency. Pembrolizumab was administered intravenously at a dose of 10 mg per kilogram of body weight every 14 days in patients with mismatch repair–deficient colorectal cancers, patients with mismatch repair–proficient colorectal cancers, and patients with mismatch repair–deficient cancers that were not colorectal. The coprimary end points were the immune-related objective response rate and the 20-week immune-related progression-free survival rate. Results: The immune-related objective response rate and immune-related progression-free survival ...

6,835 citations


Journal ArticleDOI
Mohsen Naghavi1, Haidong Wang1, Rafael Lozano1, Adrian Davis2  +728 moreInstitutions (294)
TL;DR: In the Global Burden of Disease Study 2013 (GBD 2013), the authors used the GBD 2010 methods, with some refinements to improve accuracy, applied to an updated database of vital registration, survey, and census data.

5,792 citations


Journal ArticleDOI
TL;DR: The Global Burden of Diseases, Injuries, and Risk Factors Study 2013 (GBD 2013) as discussed by the authors provides a timely opportunity to update the comparative risk assessment with new data for exposure, relative risks, and evidence on the appropriate counterfactual risk distribution.

5,668 citations


Journal ArticleDOI
28 Aug 2015-Science
TL;DR: A large-scale assessment suggests that experimental reproducibility in psychology leaves a lot to be desired, and correlational tests suggest that replication success was better predicted by the strength of original evidence than by characteristics of the original and replication teams.
Abstract: Reproducibility is a defining feature of science, but the extent to which it characterizes current research is unknown. We conducted replications of 100 experimental and correlational studies published in three psychology journals using high-powered designs and original materials when available. Replication effects were half the magnitude of original effects, representing a substantial decline. Ninety-seven percent of original studies had statistically significant results. Thirty-six percent of replications had statistically significant results; 47% of original effect sizes were in the 95% confidence interval of the replication effect size; 39% of effects were subjectively rated to have replicated the original result; and if no bias in original results is assumed, combining original and replication results left 68% with statistically significant effects. Correlational tests suggest that replication success was better predicted by the strength of original evidence than by characteristics of the original and replication teams.
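One of the replication criteria reported above, whether the original effect size falls inside the replication's 95% confidence interval, can be made concrete. The sketch below uses a normal approximation and hypothetical effect sizes (with the replication effect half the original, as the paper reports on average); none of these numbers come from the study itself:

```python
def in_replication_ci(original_effect, replication_effect, replication_se):
    """Is the original effect inside the replication's 95% CI (normal approx)?"""
    half_width = 1.96 * replication_se
    lo = replication_effect - half_width
    hi = replication_effect + half_width
    return lo <= original_effect <= hi

# Hypothetical correlation-scale effects, not from the paper:
print(in_replication_ci(0.40, 0.20, 0.08))   # tight CI: original falls outside
print(in_replication_ci(0.40, 0.20, 0.12))   # wider CI covers the original
```

The same halving of effect size can count as a "successful" replication or not depending on the replication's precision, which is why the paper reports several complementary criteria.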

5,532 citations


Journal ArticleDOI
TL;DR: Overall survival was longer and fewer grade 3 or 4 adverse events occurred with nivolumab than with everolimus among patients with previously treated advanced renal-cell carcinoma.
Abstract: Background: Nivolumab, a programmed death 1 (PD-1) checkpoint inhibitor, was associated with encouraging overall survival in uncontrolled studies involving previously treated patients with advanced renal-cell carcinoma. This randomized, open-label, phase 3 study compared nivolumab with everolimus in patients with renal-cell carcinoma who had received previous treatment. Methods: A total of 821 patients with advanced clear-cell renal-cell carcinoma for which they had received previous treatment with one or two regimens of antiangiogenic therapy were randomly assigned (in a 1:1 ratio) to receive 3 mg of nivolumab per kilogram of body weight intravenously every 2 weeks or a 10-mg everolimus tablet orally once daily. The primary end point was overall survival. The secondary end points included the objective response rate and safety. Results: The median overall survival was 25.0 months (95% confidence interval [CI], 21.8 to not estimable) with nivolumab and 19.6 months (95% CI, 17.6 to 23.1) with everolimus. The haz...

4,643 citations


Journal ArticleDOI
Theo Vos1, Ryan M Barber1, Brad Bell1, Amelia Bertozzi-Villa1  +686 moreInstitutions (287)
TL;DR: In the Global Burden of Disease Study 2013 (GBD 2013) as mentioned in this paper, the authors estimated the quantities for acute and chronic diseases and injuries for 188 countries between 1990 and 2013.

4,510 citations


Journal ArticleDOI
TL;DR: In patients receiving intravenous t-PA for acute ischemic stroke, thrombectomy with the use of a stent retriever within 6 hours after onset improved functional outcomes at 90 days.
Abstract: BACKGROUND Among patients with acute ischemic stroke due to occlusions in the proximal anterior intracranial circulation, less than 40% regain functional independence when treated with intravenous tissue plasminogen activator (t-PA) alone. Thrombectomy with the use of a stent retriever, in addition to intravenous t-PA, increases reperfusion rates and may improve long-term functional outcome. METHODS We randomly assigned eligible patients with stroke who were receiving or had received intravenous t-PA to continue with t-PA alone (control group) or to undergo endovascular thrombectomy with the use of a stent retriever within 6 hours after symptom onset (intervention group). Patients had confirmed occlusions in the proximal anterior intracranial circulation and an absence of large ischemic-core lesions. The primary outcome was the severity of global disability at 90 days, as assessed by means of the modified Rankin scale (with scores ranging from 0 [no symptoms] to 6 [death]). RESULTS The study was stopped early because of efficacy. At 39 centers, 196 patients underwent randomization (98 patients in each group). In the intervention group, the median time from qualifying imaging to groin puncture was 57 minutes, and the rate of substantial reperfusion at the end of the procedure was 88%. Thrombectomy with the stent retriever plus intravenous t-PA reduced disability at 90 days over the entire range of scores on the modified Rankin scale (P<0.001). The rate of functional independence (modified Rankin scale score, 0 to 2) was higher in the intervention group than in the control group (60% vs. 35%, P<0.001). There were no significant between-group differences in 90-day mortality (9% vs. 12%, P = 0.50) or symptomatic intracranial hemorrhage (0% vs. 3%, P = 0.12). 
CONCLUSIONS In patients receiving intravenous t-PA for acute ischemic stroke due to occlusions in the proximal anterior intracranial circulation, thrombectomy with a stent retriever within 6 hours after onset improved functional outcomes at 90 days. (Funded by Covidien; SWIFT PRIME ClinicalTrials.gov number, NCT01657461.)

Proceedings ArticleDOI
07 Jun 2015
TL;DR: A model that generates natural language descriptions of images and their regions based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding is presented.
Abstract: We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions significantly outperform retrieval baselines on both full images and on a new dataset of region-level annotations.

Proceedings Article
07 Dec 2015
TL;DR: In this paper, the authors proposed a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy, by learning only the important connections through a three-step procedure.
Abstract: Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the total number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.
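The pruning step of the three-step method above (train, prune unimportant connections, retrain) is typically implemented as magnitude thresholding with a binary mask. A minimal NumPy sketch, with the matrix size and keep fraction chosen for illustration rather than taken from the paper:

```python
import numpy as np

def prune_by_magnitude(weights, keep_fraction):
    """Zero out the smallest-magnitude weights, keeping `keep_fraction`.

    Returns the pruned weights and a binary mask; during retraining the
    mask would be reapplied after every gradient update so that pruned
    connections stay at zero.
    """
    flat = np.abs(weights).ravel()
    k = int(len(flat) * keep_fraction)
    threshold = np.sort(flat)[-k]             # k-th largest magnitude
    mask = (np.abs(weights) >= threshold).astype(weights.dtype)
    return weights * mask, mask

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))
pruned, mask = prune_by_magnitude(w, keep_fraction=1 / 9)  # ~9x reduction
```

A keep fraction of 1/9 mirrors the 9x parameter reduction the abstract reports for AlexNet; the retraining step that restores accuracy is the part this sketch omits.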

Journal ArticleDOI
TL;DR: Genome-wide analysis suggests that several genes that increase the risk for sporadic Alzheimer's disease encode factors that regulate glial clearance of misfolded proteins and the inflammatory reaction.
Abstract: Increasing evidence suggests that Alzheimer's disease pathogenesis is not restricted to the neuronal compartment, but includes strong interactions with immunological mechanisms in the brain. Misfolded and aggregated proteins bind to pattern recognition receptors on microglia and astroglia, and trigger an innate immune response characterised by release of inflammatory mediators, which contribute to disease progression and severity. Genome-wide analysis suggests that several genes that increase the risk for sporadic Alzheimer's disease encode factors that regulate glial clearance of misfolded proteins and the inflammatory reaction. External factors, including systemic inflammation and obesity, are likely to interfere with immunological processes of the brain and further promote disease progression. Modulation of risk factors and targeting of these immune mechanisms could lead to future therapeutic or preventive strategies for Alzheimer's disease.

Journal ArticleDOI
TL;DR: The process of developing specific advice for the reporting of systematic reviews that incorporate network meta-analyses is described, and the guidance generated from this process is presented.
Abstract: The PRISMA statement is a reporting guideline designed to improve the completeness of reporting of systematic reviews and meta-analyses. Authors have used this guideline worldwide to prepare their reviews for publication. In the past, these reports typically compared 2 treatment alternatives. With the evolution of systematic reviews that compare multiple treatments, some of them only indirectly, authors face novel challenges for conducting and reporting their reviews. This extension of the PRISMA (Preferred Reporting Items for Systematic Reviews and Metaanalyses) statement was developed specifically to improve the reporting of systematic reviews incorporating network meta-analyses.

Journal ArticleDOI
TL;DR: The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) as mentioned in this paper was organized in conjunction with the MICCAI 2012 and 2013 conferences, and twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low and high grade glioma patients.
Abstract: In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients (manually annotated by up to four raters) and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%–85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.
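The Dice score used throughout this benchmark to quantify rater and algorithm agreement is twice the overlap of two binary masks divided by the sum of their sizes. A small sketch with toy masks (not BRATS data):

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary segmentation masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(a, b).sum() / denom

# Two toy raters labeling a 2x3 image; they agree on 2 of 4 labeled pixels.
rater1 = np.array([[0, 1, 1], [0, 1, 0]])
rater2 = np.array([[0, 1, 0], [1, 1, 0]])
score = dice(rater1, rater2)
```

On real scans the same computation is run per tumor sub-region, which is how the 74%–85% inter-rater range above is obtained.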

Proceedings ArticleDOI
21 Aug 2015
TL;DR: The Stanford Natural Language Inference (SNLI) corpus as discussed by the authors is a large-scale collection of labeled sentence pairs, written by humans doing a novel grounded task based on image captioning.
Abstract: Understanding entailment and contradiction is fundamental to understanding natural language, and inference about entailment and contradiction is a valuable testing ground for the development of semantic representations. However, machine learning research in this area has been dramatically limited by the lack of large-scale resources. To address this, we introduce the Stanford Natural Language Inference corpus, a new, freely available collection of labeled sentence pairs, written by humans doing a novel grounded task based on image captioning. At 570K pairs, it is two orders of magnitude larger than all other resources of its type. This increase in scale allows lexicalized classifiers to outperform some sophisticated existing entailment models, and it allows a neural network-based model to perform competitively on natural language inference benchmarks for the first time.

Journal ArticleDOI
TL;DR: In virtually all medical domains, diagnostic and prognostic multivariable prediction models are being developed, validated, updated, and implemented with the aim of assisting doctors and individuals in estimating probabilities and potentially influencing their decision making.
Abstract: The TRIPOD (Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis) Statement includes a 22-item checklist, which aims to improve the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. This explanation and elaboration document describes the rationale; clarifies the meaning of each item; and discusses why transparent reporting is important, with a view to assessing risk of bias and clinical usefulness of the prediction model. Each checklist item of the TRIPOD Statement is explained in detail and accompanied by published examples of good reporting. The document also provides a valuable reference of issues to consider when designing, conducting, and analyzing prediction model studies. To aid the editorial process and help peer reviewers and, ultimately, readers and systematic reviewers of prediction model studies, it is recommended that authors include a completed checklist in their submission. The TRIPOD checklist can also be downloaded from www.tripod-statement.org.


Journal ArticleDOI
TL;DR: A standard protocol is used as a primary screen for evaluating the activity, short-term (2 h) stability, and electrochemically active surface area (ECSA) of 18 and 26 electrocatalysts for the hydrogen evolution reaction (HER and OER) under conditions relevant to an integrated solar water-splitting device in aqueous acidic or alkaline solution.
Abstract: Objective comparisons of electrocatalyst activity and stability using standard methods under identical conditions are necessary to evaluate the viability of existing electrocatalysts for integration into solar-fuel devices as well as to help inform the development of new catalytic systems. Herein, we use a standard protocol as a primary screen for evaluating the activity, short-term (2 h) stability, and electrochemically active surface area (ECSA) of 18 electrocatalysts for the hydrogen evolution reaction (HER) and 26 electrocatalysts for the oxygen evolution reaction (OER) under conditions relevant to an integrated solar water-splitting device in aqueous acidic or alkaline solution. Our primary figure of merit is the overpotential necessary to achieve a magnitude current density of 10 mA cm–2 per geometric area, the approximate current density expected for a 10% efficient solar-to-fuels conversion device under 1 sun illumination. The specific activity per ECSA of each material is also reported. Among HER...
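The figure of merit above, the overpotential needed to reach 10 mA cm⁻² per geometric area, is usually read off a measured polarization curve. A sketch of that lookup, with a wholly hypothetical curve (the interpolation on a log-current scale assumes roughly Tafel-like behaviour):

```python
import numpy as np

# Hypothetical polarization curve for an OER catalyst: overpotential (V)
# vs. geometric current density (mA cm^-2). Not measured data.
overpotential_V = np.array([0.20, 0.25, 0.30, 0.35, 0.40])
current_mA_cm2 = np.array([0.5, 2.0, 8.0, 20.0, 60.0])

# Interpolate overpotential against log10(current) to find the
# overpotential at the benchmark current density of 10 mA cm^-2.
eta_at_10 = np.interp(np.log10(10.0), np.log10(current_mA_cm2), overpotential_V)
```

The paper's protocol additionally tracks how this overpotential drifts over a 2 h hold and normalizes activity by the electrochemically active surface area (ECSA).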

Proceedings ArticleDOI
28 Feb 2015
TL;DR: The authors introduced the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies, which outperformed all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
Abstract: A Long Short-Term Memory (LSTM) network is a type of recurrent neural network architecture which has recently obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
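The tree-structured generalization described above replaces the single previous hidden state of a chain LSTM with an aggregate over a node's children. A NumPy sketch of one Child-Sum Tree-LSTM node update; the weights are random stand-ins, not trained values, and the dimensions are illustrative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def child_sum_tree_lstm(x, child_h, child_c, W, U, b):
    """One Child-Sum Tree-LSTM node update.

    x: (d_in,) input at this node
    child_h, child_c: (n_children, d) hidden/cell states of the children
    W: dict of (d, d_in) input weights, U: dict of (d, d) recurrent
    weights, b: dict of (d,) biases, keyed by gate name.
    """
    h_tilde = child_h.sum(axis=0)                        # summed child states
    i = sigmoid(W["i"] @ x + U["i"] @ h_tilde + b["i"])  # input gate
    o = sigmoid(W["o"] @ x + U["o"] @ h_tilde + b["o"])  # output gate
    u = np.tanh(W["u"] @ x + U["u"] @ h_tilde + b["u"])  # candidate update
    # One forget gate per child, each conditioned on that child's own state.
    f = sigmoid(W["f"] @ x + child_h @ U["f"].T + b["f"])  # (n_children, d)
    c = i * u + (f * child_c).sum(axis=0)
    h = o * np.tanh(c)
    return h, c

rng = np.random.default_rng(0)
d_in, d, n_children = 4, 3, 2
W = {g: rng.normal(size=(d, d_in)) for g in "iofu"}
U = {g: rng.normal(size=(d, d)) for g in "iofu"}
b = {g: np.zeros(d) for g in "iofu"}
h, c = child_sum_tree_lstm(rng.normal(size=d_in),
                           rng.normal(size=(n_children, d)),
                           rng.normal(size=(n_children, d)), W, U, b)
```

The per-child forget gates are the key difference from a chain LSTM: each child subtree can be selectively remembered or discarded.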


Journal ArticleDOI
TL;DR: Estimates of extinction rates reveal an exceptionally rapid loss of biodiversity over the last few centuries, indicating that a sixth mass extinction is already under way and a window of opportunity is rapidly closing.
Abstract: The oft-repeated claim that Earth’s biota is entering a sixth “mass extinction” depends on clearly demonstrating that current extinction rates are far above the “background” rates prevailing between the five previous mass extinctions. Earlier estimates of extinction rates have been criticized for using assumptions that might overestimate the severity of the extinction crisis. We assess, using extremely conservative assumptions, whether human activities are causing a mass extinction. First, we use a recent estimate of a background rate of 2 mammal extinctions per 10,000 species per 100 years (that is, 2 E/MSY), which is twice as high as widely used previous estimates. We then compare this rate with the current rate of mammal and vertebrate extinctions. The latter is conservatively low because listing a species as extinct requires meeting stringent criteria. Even under our assumptions, which would tend to minimize evidence of an incipient mass extinction, the average rate of vertebrate species loss over the last century is up to 100 times higher than the background rate. Under the 2 E/MSY background rate, the number of species that have gone extinct in the last century would have taken, depending on the vertebrate taxon, between 800 and 10,000 years to disappear. These estimates reveal an exceptionally rapid loss of biodiversity over the last few centuries, indicating that a sixth mass extinction is already under way. Averting a dramatic decay of biodiversity and the subsequent loss of ecosystem services is still possible through intensified conservation efforts, but that window of opportunity is rapidly closing.
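The background-rate comparison above is simple arithmetic: 2 E/MSY means 2 extinctions per 10,000 species per 100 years. The species count below is a hypothetical round number for illustration, not a figure from the study:

```python
# Background rate from the paper: 2 extinctions per 10,000 species
# per 100 years (2 E/MSY).
background_rate = 2 / 10_000          # extinctions per species per century
n_species = 40_000                    # hypothetical vertebrate pool size
years = 100

expected_background = background_rate * n_species * (years / 100)
observed = 100 * expected_background  # the paper's "up to 100 times higher"
```

With these inputs the background rate predicts 8 extinctions in a century, so a rate 100 times higher would mean hundreds of extinctions over the same interval, which is the gap the paper describes.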

Journal ArticleDOI
TL;DR: This method probes DNA accessibility with hyperactive Tn5 transposase, which inserts sequencing adapters into accessible regions of chromatin, which can be used to infer regions of increased accessibility, as well as to map regions of transcription‐factor binding and nucleosome position.
Abstract: This unit describes Assay for Transposase-Accessible Chromatin with high-throughput sequencing (ATAC-seq), a method for mapping chromatin accessibility genome-wide. This method probes DNA accessibility with hyperactive Tn5 transposase, which inserts sequencing adapters into accessible regions of chromatin. Sequencing reads can then be used to infer regions of increased accessibility, as well as to map regions of transcription-factor binding and nucleosome position. The method is a fast and sensitive alternative to DNase-seq for assaying chromatin accessibility genome-wide, or to MNase-seq for assaying nucleosome positions in accessible regions of the genome.

Journal ArticleDOI
TL;DR: Clumpak, available at http://clumpak.tau.ac.il, simplifies the use of model-based analyses of population structure in population genetics and molecular ecology by automating the postprocessing of results of model‐based population structure analyses.
Abstract: The identification of the genetic structure of populations from multilocus genotype data has become a central component of modern population-genetic data analysis. Application of model-based clustering programs often entails a number of steps, in which the user considers different modelling assumptions, compares results across different predetermined values of the number of assumed clusters (a parameter typically denoted K), examines multiple independent runs for each fixed value of K, and distinguishes among runs belonging to substantially distinct clustering solutions. Here, we present CLUMPAK (Cluster Markov Packager Across K), a method that automates the postprocessing of results of model-based population structure analyses. For analysing multiple independent runs at a single K value, CLUMPAK identifies sets of highly similar runs, separating distinct groups of runs that represent distinct modes in the space of possible solutions. This procedure, which generates a consensus solution for each distinct mode, is performed by the use of a Markov clustering algorithm that relies on a similarity matrix between replicate runs, as computed by the software CLUMPP. Next, CLUMPAK identifies an optimal alignment of inferred clusters across different values of K, extending a similar approach implemented for a fixed K in CLUMPP and simplifying the comparison of clustering results across different K values. CLUMPAK incorporates additional features, such as implementations of methods for choosing K and comparing solutions obtained by different programs, models, or data subsets. CLUMPAK, available at http://clumpak.tau.ac.il, simplifies the use of model-based analyses of population structure in population genetics and molecular ecology.

Journal ArticleDOI
TL;DR: The need for surgical services in low- and middle-income countries (LMICs) will continue to rise substantially from now until 2030, with a large projected increase in the incidence of cancer, road traffic injuries, and cardiovascular and metabolic diseases in LMICs.

Journal ArticleDOI
TL;DR: A pan-cancer resource and meta-analysis of expression signatures from ∼18,000 human tumors with overall survival outcomes across 39 malignancies is presented, and it is found that expression of favorably prognostic genes, including KLRB1 (encoding CD161), largely reflects tumor-associated leukocytes.
Abstract: Molecular profiles of tumors and tumor-associated cells hold great promise as biomarkers of clinical outcomes. However, existing data sets are fragmented and difficult to analyze systematically. Here we present a pan-cancer resource and meta-analysis of expression signatures from ∼18,000 human tumors with overall survival outcomes across 39 malignancies. By using this resource, we identified a forkhead box M1 (FOXM1) regulatory network as a major predictor of adverse outcomes, and we found that expression of favorably prognostic genes, including KLRB1 (encoding CD161), largely reflects tumor-associated leukocytes. By applying CIBERSORT, a computational approach for inferring leukocyte representation in bulk tumor transcriptomes, we identified complex associations between 22 distinct leukocyte subsets and cancer survival. For example, tumor-associated neutrophil and plasma cell signatures emerged as significant but opposite predictors of survival for diverse solid tumors, including breast and lung adenocarcinomas. This resource and associated analytical tools (http://precog.stanford.edu) may help delineate prognostic genes and leukocyte subsets within and across cancers, shed light on the impact of tumor heterogeneity on cancer outcomes, and facilitate the discovery of biomarkers and therapeutic targets.

Journal ArticleDOI
TL;DR: A review of the current status and future prospects of the field of emotion regulation can be found in this paper, where the authors define emotion and emotion regulation and distinguish both from related constructs.
Abstract: One of the fastest growing areas within psychology is the field of emotion regulation. However, enthusiasm for this topic continues to outstrip conceptual clarity, and there remains considerable uncertainty as to what is even meant by “emotion regulation.” The goal of this review is to examine the current status and future prospects of this rapidly growing field. In the first section, I define emotion and emotion regulation and distinguish both from related constructs. In the second section, I use the process model of emotion regulation to selectively review evidence that different regulation strategies have different consequences. In the third section, I introduce the extended process model of emotion regulation; this model considers emotion regulation to be one type of valuation, and distinguishes three emotion regulation stages (identification, selection, implementation). In the final section, I consider five key growth points for the field of emotion regulation.

Journal ArticleDOI
TL;DR: MMR status predicted clinical benefit of immune checkpoint blockade with pembrolizumab, and high total somatic mutation loads were associated with PFS.
Abstract: LBA100 Background: Somatic mutations have the potential to be recognized as “non-self” immunogenic antigens. Tumors with genetic defects in mismatch repair (MMR) harbor many more mutations than tumors of the same type without such repair defects. We hypothesized that tumors with mismatch repair defects would therefore be particularly susceptible to immune checkpoint blockade. Methods: We conducted a phase II study to evaluate the clinical activity of anti-PD-1, pembrolizumab, in 41 patients with previously-treated, progressive metastatic disease with and without MMR-deficiency. Pembrolizumab was administered at 10 mg/kg intravenously every 14 days to three cohorts of patients: those with MMR-deficient colorectal cancers (CRCs) (N = 11); those with MMR-proficient CRCs (N = 21), and those with MMR-deficient cancers of types other than colorectal (N = 9). The co-primary endpoints were immune-related objective response rate (irORR) and immune-related progression-free survival (irPFS) at 20 weeks. Results: The...

Proceedings ArticleDOI
07 Dec 2015
TL;DR: In this article, a new form of convolutional neural network that combines the strengths of Convolutional Neural Networks (CNNs) and Conditional Random Fields (CRFs)-based probabilistic graphical modelling is introduced.
Abstract: Pixel-level labelling tasks, such as semantic segmentation, play a central role in image understanding. Recent approaches have attempted to harness the capabilities of deep learning techniques for image recognition to tackle pixel-level labelling tasks. One central issue in this methodology is the limited capacity of deep learning techniques to delineate visual objects. To solve this problem, we introduce a new form of convolutional neural network that combines the strengths of Convolutional Neural Networks (CNNs) and Conditional Random Fields (CRFs)-based probabilistic graphical modelling. To this end, we formulate Conditional Random Fields with Gaussian pairwise potentials and mean-field approximate inference as Recurrent Neural Networks. This network, called CRF-RNN, is then plugged in as a part of a CNN to obtain a deep network that has desirable properties of both CNNs and CRFs. Importantly, our system fully integrates CRF modelling with CNNs, making it possible to train the whole deep network end-to-end with the usual back-propagation algorithm, avoiding offline post-processing methods for object delineation. We apply the proposed method to the problem of semantic image segmentation, obtaining top results on the challenging Pascal VOC 2012 segmentation benchmark.