Journal ArticleDOI

Intratumoral genome diversity parallels progression and predicts outcome in pediatric cancer

TL;DR: Microdiversity is found to predict poor cancer-specific survival (60%; P=0.009), independent of other risk factors, in a cohort of 44 patients with chemotherapy-treated childhood kidney cancer, whereas survival was 100% for patients lacking microdiversity.
Abstract: Genetic differences among neoplastic cells within the same tumour have been proposed to drive cancer progression and treatment failure. Whether data on intratumoral diversity can be used to predict clinical outcome remains unclear. We here address this issue by quantifying genetic intratumoral diversity in a set of chemotherapy-treated childhood tumours. By analysis of multiple tumour samples from seven patients we demonstrate intratumoral diversity in all patients analysed after chemotherapy, typically presenting as multiple clones within a single millimetre-sized tumour sample (microdiversity). We show that microdiversity often acts as the foundation for further genome evolution in metastases. In addition, we find that microdiversity predicts poor cancer-specific survival (60%; P=0.009), independent of other risk factors, in a cohort of 44 patients with chemotherapy-treated childhood kidney cancer. Survival was 100% for patients lacking microdiversity. Thus, intratumoral genetic diversity is common in childhood cancers after chemotherapy and may be an important factor behind treatment failure.


Citations
01 Jan 2011
TL;DR: The sheer volume and scope of this flood of genome-wide data pose a significant challenge to the development of efficient and intuitive visualization tools able to scale to very large data sets and to flexibly integrate multiple data types, including clinical data.
Abstract: Rapid improvements in sequencing and array-based platforms are resulting in a flood of diverse genome-wide data, including data from exome and whole-genome sequencing, epigenetic surveys, expression profiling of coding and noncoding RNAs, single nucleotide polymorphism (SNP) and copy number profiling, and functional assays. Analysis of these large, diverse data sets holds the promise of a more comprehensive understanding of the genome and its relation to human disease. Experienced and knowledgeable human review is an essential component of this process, complementing computational approaches. This calls for efficient and intuitive visualization tools able to scale to very large data sets and to flexibly integrate multiple data types, including clinical data. However, the sheer volume and scope of data pose a significant challenge to the development of such tools.

2,187 citations

25 May 2011
TL;DR: A quantitative analysis of the timing of the genetic evolution of pancreatic cancer was performed, indicating at least a decade between the occurrence of the initiating mutation and the birth of the parental, non-metastatic founder cell.
Abstract: Metastasis, the dissemination and growth of neoplastic cells in an organ distinct from that in which they originated, is the most common cause of death in cancer patients. This is particularly true for pancreatic cancers, where most patients are diagnosed with metastatic disease and few show a sustained response to chemotherapy or radiation therapy. Whether the dismal prognosis of patients with pancreatic cancer compared to patients with other types of cancer is a result of late diagnosis or early dissemination of disease to distant organs is not known. Here we rely on data generated by sequencing the genomes of seven pancreatic cancer metastases to evaluate the clonal relationships among primary and metastatic cancers. We find that clonal populations that give rise to distant metastases are represented within the primary carcinoma, but these clones are genetically evolved from the original parental, non-metastatic clone. Thus, genetic heterogeneity of metastases reflects that within the primary carcinoma. A quantitative analysis of the timing of the genetic evolution of pancreatic cancer was performed, indicating at least a decade between the occurrence of the initiating mutation and the birth of the parental, non-metastatic founder cell. At least five more years are required for the acquisition of metastatic ability and patients die an average of two years thereafter. These data provide novel insights into the genetic features underlying pancreatic cancer progression and define a broad time window of opportunity for early detection to prevent deaths from metastatic disease.

2,019 citations

Journal ArticleDOI
TL;DR: A future diagnostic strategy that integrates functional testing with next-generation sequencing and immunoprofiling to precisely match combination therapies to individual cancer patients is suggested.
Abstract: Precision medicine is about matching the right drugs to the right patients. Although this approach is technology agnostic, in cancer there is a tendency to make precision medicine synonymous with genomics. However, genome-based cancer therapeutic matching is limited by incomplete biological understanding of the relationship between phenotype and cancer genotype. This limitation can be addressed by functional testing of live patient tumour cells exposed to potential therapies. Recently, several 'next-generation' functional diagnostic technologies have been reported, including novel methods for tumour manipulation, molecularly precise assays of tumour responses and device-based in situ approaches; these address the limitations of the older generation of chemosensitivity tests. The promise of these new technologies suggests a future diagnostic strategy that integrates functional testing with next-generation sequencing and immunoprofiling to precisely match combination therapies to individual cancer patients.

466 citations

Journal ArticleDOI
01 Nov 2016
TL;DR: Different tumour sequencing strategies currently used for precision oncology are highlighted, their individual strengths and weaknesses described, and their feasibility in different clinical settings emphasised; the review also evaluates the possibility of NGS implementation in current and future clinical trials and points to the significance of NGS for translational research.
Abstract: We live in an era of genomic medicine. The past five years brought about many significant achievements in the field of cancer genetics, driven by rapidly evolving technologies and plummeting costs of next-generation sequencing (NGS). The official completion of the Cancer Genome Project in 2014 led many to envision the clinical implementation of cancer genomic data as the next logical step in cancer therapy. Stemming from this vision, the term ‘precision oncology’ was coined to illustrate the novelty of this individualised approach. The basic assumption of precision oncology is that molecular markers detected by NGS will predict response to targeted therapies independently from tumour histology. However, along with a ubiquitous availability of NGS, the complexity and heterogeneity at the individual patient level had to be acknowledged. Not only does the latter present challenges to clinical decision-making based on sequencing data, it is also an obstacle to the rational design of clinical trials. Novel tissue-agnostic trial designs were quickly developed to overcome these challenges. Results from some of these trials have recently demonstrated the feasibility and efficacy of this approach. On the other hand, there is an increasing amount of whole-exome and whole-genome NGS data which allows us to assess ever smaller differences between individual patients with cancer. In this review, we highlight different tumour sequencing strategies currently used for precision oncology, describe their individual strengths and weaknesses, and emphasise their feasibility in different clinical settings. Further, we evaluate the possibility of NGS implementation in current and future clinical trials, and point to the significance of NGS for translational research.

133 citations

Journal ArticleDOI
TL;DR: The genetic landscape of Wilms tumour is reviewed, and it is discussed how precision medicine guided by genomic information might lead to new therapeutic approaches, improve patient survival, and provide promising therapeutic avenues for patients with relapsed or refractory disease.
Abstract: Wilms tumour is the most common renal malignancy of childhood. The disease is curable in the majority of cases, albeit at considerable cost in terms of late treatment-related effects in some children. However, one in ten children with Wilms tumour will die of their disease despite modern treatment approaches. The genetic changes that underpin Wilms tumour have been defined by studies of familial cases and by unbiased DNA sequencing of tumour genomes. Together, these approaches have defined the landscape of cancer genes that are operative in Wilms tumour, many of which are intricately linked to the control of fetal nephrogenesis. Advances in our understanding of the germline and somatic genetic changes that underlie Wilms tumour may translate into better patient outcomes. Improvements in risk stratification have already been seen through the introduction of molecular biomarkers into clinical practice. A host of additional biomarkers are due to undergo clinical validation. Identifying actionable mutations has led to potential new targets, with some novel compounds undergoing testing in early phase trials. Avenues that warrant further exploration include targeting Wilms tumour cancer genes with a non-redundant role in nephrogenesis and targeting the fetal renal transcriptome.

128 citations

References
Journal ArticleDOI
TL;DR: The Burrows-Wheeler Alignment tool (BWA), a new read alignment package based on backward search with the Burrows–Wheeler Transform (BWT), is implemented to efficiently align short sequencing reads against a large reference sequence such as the human genome, allowing mismatches and gaps.
Abstract: Motivation: The enormous amount of short reads generated by the new DNA sequencing technologies calls for the development of fast and accurate read alignment programs. A first generation of hash table-based methods has been developed, including MAQ, which is accurate, feature rich and fast enough to align short reads from a single individual. However, MAQ does not support gapped alignment for single-end reads, which makes it unsuitable for alignment of longer reads where indels may occur frequently. The speed of MAQ is also a concern when the alignment is scaled up to the resequencing of hundreds of individuals. Results: We implemented the Burrows-Wheeler Alignment tool (BWA), a new read alignment package that is based on backward search with the Burrows–Wheeler Transform (BWT), to efficiently align short sequencing reads against a large reference sequence such as the human genome, allowing mismatches and gaps. BWA supports both base space reads, e.g. from Illumina sequencing machines, and color space reads from AB SOLiD machines. Evaluations on both simulated and real data suggest that BWA is ~10–20× faster than MAQ, while achieving similar accuracy. In addition, BWA outputs alignment in the new standard SAM (Sequence Alignment/Map) format. Variant calling and other downstream analyses after the alignment can be achieved with the open source SAMtools software package. Availability: http://maq.sourceforge.net Contact: [email protected]
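The backward search the abstract names can be sketched in a few lines. The toy below, which assumes nothing from the paper beyond the BWT idea itself, builds an uncompressed index and counts exact matches of a pattern; production aligners like BWA instead use a compressed FM-index and additionally handle mismatches and gaps.

```python
# Minimal sketch of exact-match backward search on the Burrows-Wheeler
# Transform. Illustrative only: an uncompressed index over a tiny string.

def bwt_index(text):
    """Build the BWT of text (with '$' sentinel), the C table, and the
    occurrence counts needed for backward search."""
    text += "$"
    # Sorted rotations of the text; the BWT is their last column.
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    bwt = "".join(rot[-1] for rot in rotations)
    # C[c] = number of characters in the text strictly smaller than c.
    chars = sorted(set(bwt))
    C, total = {}, 0
    for c in chars:
        C[c] = total
        total += bwt.count(c)
    # occ[c][i] = number of occurrences of c in bwt[:i].
    occ = {c: [0] * (len(bwt) + 1) for c in chars}
    for i, ch in enumerate(bwt):
        for c in chars:
            occ[c][i + 1] = occ[c][i] + (1 if ch == c else 0)
    return C, occ, len(bwt)

def count_matches(pattern, C, occ, n):
    """Count occurrences of pattern by shrinking a suffix interval
    [lo, hi) one pattern character at a time, right to left."""
    lo, hi = 0, n
    for c in reversed(pattern):
        if c not in C:
            return 0
        lo = C[c] + occ[c][lo]
        hi = C[c] + occ[c][hi]
        if lo >= hi:
            return 0
    return hi - lo

C, occ, n = bwt_index("GATTACAGATTACA")
print(count_matches("ATTA", C, occ, n))  # 2
print(count_matches("GCG", C, occ, n))   # 0
```

Each step of the loop touches the pattern once and the index in O(1), which is why backward search scales to patterns against genome-sized references once the occurrence table is compressed.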

43,862 citations

Journal ArticleDOI
TL;DR: In this article, the authors call for efficient and intuitive visualization tools able to scale to very large data sets and to flexibly integrate multiple data types, including clinical data.
Abstract: Rapid improvements in sequencing and array-based platforms are resulting in a flood of diverse genome-wide data, including data from exome and whole-genome sequencing, epigenetic surveys, expression profiling of coding and noncoding RNAs, single nucleotide polymorphism (SNP) and copy number profiling, and functional assays. Analysis of these large, diverse data sets holds the promise of a more comprehensive understanding of the genome and its relation to human disease. Experienced and knowledgeable human review is an essential component of this process, complementing computational approaches. This calls for efficient and intuitive visualization tools able to scale to very large data sets and to flexibly integrate multiple data types, including clinical data. However, the sheer volume and scope of data pose a significant challenge to the development of such tools.

10,798 citations

Journal ArticleDOI
TL;DR: The ANNOVAR tool is developed to annotate single nucleotide variants and insertions/deletions: examining their functional consequence on genes, inferring cytogenetic bands, reporting functional importance scores, finding variants in conserved regions, or identifying variants reported in the 1000 Genomes Project and dbSNP.
Abstract: High-throughput sequencing platforms are generating massive amounts of genetic variation data for diverse genomes, but it remains a challenge to pinpoint a small subset of functionally important variants. To fill these unmet needs, we developed the ANNOVAR tool to annotate single nucleotide variants (SNVs) and insertions/deletions, such as examining their functional consequence on genes, inferring cytogenetic bands, reporting functional importance scores, finding variants in conserved regions, or identifying variants reported in the 1000 Genomes Project and dbSNP. ANNOVAR can utilize annotation databases from the UCSC Genome Browser or any annotation data set conforming to Generic Feature Format version 3 (GFF3). We also illustrate a 'variants reduction' protocol on 4.7 million SNVs and indels from a human genome, including two causal mutations for Miller syndrome, a rare recessive disease. Through a stepwise procedure, we excluded variants that are unlikely to be causal, and identified 20 candidate genes including the causal gene. Using a desktop computer, ANNOVAR requires ∼4 min to perform gene-based annotation and ∼15 min to perform variants reduction on 4.7 million variants, making it practical to handle hundreds of human genomes in a day. ANNOVAR is freely available at http://www.openbioinformatics.org/annovar/.
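At its core, the gene-based annotation step described above is an interval lookup: does a variant's coordinate fall inside a known gene? A toy sketch of that idea follows; the gene coordinates and names are rough illustrative values, not taken from ANNOVAR's actual databases, and real annotators distinguish exons, introns, UTRs and splice sites rather than a single "genic" bucket.

```python
# Toy sketch of gene-based variant annotation: classify a variant by
# whether its position falls inside a known gene interval.
# Coordinates below are approximate and for illustration only.

GENES = [  # (chrom, start, end, name), half-open intervals
    ("chr11", 32_400_000, 32_460_000, "WT1"),
    ("chr17", 7_668_000, 7_688_000, "TP53"),
]

def annotate(chrom, pos):
    """Return (category, gene_name) for a single-nucleotide variant."""
    for g_chrom, start, end, name in GENES:
        if chrom == g_chrom and start <= pos < end:
            return ("genic", name)
    return ("intergenic", None)

print(annotate("chr11", 32_450_000))  # ('genic', 'WT1')
print(annotate("chr1", 1_000_000))    # ('intergenic', None)
```

A linear scan is fine for a toy; annotating millions of variants against tens of thousands of transcripts requires a sorted or tree-based interval index so each lookup is logarithmic rather than linear.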

10,461 citations

Journal ArticleDOI
TL;DR: A unified analytic framework to discover and genotype variation among multiple samples simultaneously that achieves sensitive and specific results across five sequencing technologies and three distinct, canonical experimental designs is presented.
Abstract: Recent advances in sequencing technology make it possible to comprehensively catalogue genetic variation in population samples, creating a foundation for understanding human disease, ancestry and evolution. The amounts of raw data produced are prodigious and many computational steps are required to translate this output into high-quality variant calls. We present a unified analytic framework to discover and genotype variation among multiple samples simultaneously that achieves sensitive and specific results across five sequencing technologies and three distinct, canonical experimental designs. Our process includes (1) initial read mapping; (2) local realignment around indels; (3) base quality score recalibration; (4) SNP discovery and genotyping to find all potential variants; and (5) machine learning to separate true segregating variation from machine artifacts common to next-generation sequencing technologies. We discuss the application of these tools, instantiated in the Genome Analysis Toolkit (GATK), to deep whole-genome, whole-exome capture, and multi-sample low-pass (~4×) 1000 Genomes Project datasets.

10,056 citations

Journal ArticleDOI
TL;DR: The Integrative Genomics Viewer (IGV) is a high-performance viewer that efficiently handles large heterogeneous data sets, while providing a smooth and intuitive user experience at all levels of genome resolution.
Abstract: Data visualization is an essential component of genomic data analysis. However, the size and diversity of the data sets produced by today’s sequencing and array-based profiling methods present major challenges to visualization tools. The Integrative Genomics Viewer (IGV) is a high-performance viewer that efficiently handles large heterogeneous data sets, while providing a smooth and intuitive user experience at all levels of genome resolution. A key characteristic of IGV is its focus on the integrative nature of genomic studies, with support for both array-based and next-generation sequencing data, and the integration of clinical and phenotypic data. Although IGV is often used to view genomic data from public sources, its primary emphasis is to support researchers who wish to visualize and explore their own data sets or those from colleagues. To that end, IGV supports flexible loading of local and remote data sets, and is optimized to provide high-performance data visualization and exploration on standard desktop systems. IGV is freely available for download from http://www.broadinstitute.org/igv, under a GNU LGPL open-source license.

6,930 citations
