scispace - formally typeset
Author

Changyang Li

Bio: Changyang Li is an academic researcher from the University of Sydney. The author has contributed to research in topics including image segmentation and scale-space segmentation, has an h-index of 16, and has co-authored 37 publications receiving 952 citations. Previous affiliations of Changyang Li include Information Technology University.

Papers
Proceedings ArticleDOI
07 Jun 2015
TL;DR: This paper proposes a novel bottom-up saliency detection approach that takes advantage of both region-based features and image details, and introduces a regularized random walks ranking to formulate pixel-wise saliency maps from superpixel-based background and foreground saliency estimations.
Abstract: In the field of saliency detection, many graph-based algorithms heavily depend on the accuracy of the pre-processed superpixel segmentation, which leads to significant sacrifice of detail information from the input image. In this paper, we propose a novel bottom-up saliency detection approach that takes advantage of both region-based features and image details. To provide more accurate saliency estimations, we first optimize the image boundary selection by the proposed erroneous boundary removal. By taking the image details and region-based estimations into account, we then propose the regularized random walks ranking to formulate pixel-wise saliency maps from the superpixel-based background and foreground saliency estimations. Experimental results on two public datasets indicate the significantly improved accuracy and robustness of the proposed algorithm in comparison with 12 state-of-the-art saliency detection approaches.
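The ranking step that such graph-based saliency methods build on can be illustrated with the closed-form manifold ranking solution f = (I - alpha*S)^(-1) y, which scores every graph node by its affinity to a set of query nodes. A minimal NumPy sketch of that base step, assuming a toy superpixel graph; the paper's regularized variant additionally adds a pixel-level fitting term:

```python
import numpy as np

def manifold_ranking(W, seeds, alpha=0.99):
    """Rank graph nodes by relevance to seed (query) nodes.

    W     : (n, n) symmetric affinity matrix between superpixels
    seeds : indices of query nodes (e.g. boundary superpixels)
    alpha : trade-off between smoothness and fitting terms
    """
    n = W.shape[0]
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    S = D_inv_sqrt @ W @ D_inv_sqrt            # normalized affinity
    y = np.zeros(n)
    y[seeds] = 1.0                             # indicator of queries
    # closed-form ranking: f = (I - alpha * S)^(-1) y
    f = np.linalg.solve(np.eye(n) - alpha * S, y)
    return f

# toy 4-node chain graph; node 0 is the query
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
scores = manifold_ranking(W, seeds=[0])
# nodes closer to the query along the chain rank higher
assert scores[1] > scores[2] > scores[3]
```

Using boundary superpixels as background queries and thresholding the resulting ranks gives the coarse foreground/background estimations that the pixel-level refinement then sharpens.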

202 citations

Journal ArticleDOI
TL;DR: It is suggested that saliency detection using the reconstruction errors derived from a sparse representation model coupled with a novel background detection can more accurately discriminate the lesion from surrounding regions.
Abstract: The segmentation of skin lesions in dermoscopic images is a fundamental step in automated computer-aided diagnosis of melanoma. Conventional segmentation methods, however, have difficulties when the lesion borders are indistinct and when contrast between the lesion and the surrounding skin is low. They also perform poorly when there is a heterogeneous background or a lesion that touches the image boundaries; this then results in under- and oversegmentation of the skin lesion. We suggest that saliency detection using the reconstruction errors derived from a sparse representation model coupled with a novel background detection can more accurately discriminate the lesion from surrounding regions. We further propose a Bayesian framework that better delineates the shape and boundaries of the lesion. We also evaluated our approach on two public datasets comprising 1100 dermoscopic images and compared it to other conventional and state-of-the-art unsupervised (i.e., no training required) lesion segmentation methods, as well as the state-of-the-art unsupervised saliency detection methods. Our results show that our approach is more accurate and robust in segmenting lesions compared to other methods. We also discuss the general extension of our framework as a saliency optimization algorithm for lesion segmentation.
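The core idea above, that regions poorly reconstructed from a background model are likely salient, can be sketched with a dense (PCA-style) reconstruction error; the paper instead fits a sparse representation, and the background indices and features below are illustrative assumptions:

```python
import numpy as np

def reconstruction_error_saliency(features, bg_idx, n_components=2):
    """Saliency as reconstruction error under a background model.

    features : (n, d) feature vectors, one per region/superpixel
    bg_idx   : indices of regions assumed to be background
               (e.g. drawn from the image border)
    """
    bg = features[bg_idx]
    mean = bg.mean(axis=0)
    # principal directions of the background features
    _, _, Vt = np.linalg.svd(bg - mean, full_matrices=False)
    basis = Vt[:n_components]                  # (k, d) background basis
    centered = features - mean
    recon = centered @ basis.T @ basis         # project onto bg subspace
    err = np.linalg.norm(centered - recon, axis=1)
    return err / (err.max() + 1e-12)           # normalized saliency map

# toy example: 5 background-like regions and 1 outlier (the "lesion")
rng = np.random.default_rng(0)
feats = rng.normal(0.0, 0.1, size=(6, 4))
feats[5] += 3.0                                # lesion differs strongly
sal = reconstruction_error_saliency(feats, bg_idx=[0, 1, 2, 3, 4])
assert sal[5] == sal.max()                     # outlier is most salient
```

A lesion touching the image boundary would contaminate `bg_idx`, which is why the paper pairs this step with a dedicated background detection before computing reconstruction errors.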

135 citations

Journal ArticleDOI
TL;DR: A level set model for hepatic tumor segmentation that combines a likelihood energy with an edge energy whose edge detector preserves the ramp associated with weak boundaries; it outperformed the Chan-Vese and geodesic level set models.
Abstract: In computed tomography of liver tumors there is often heterogeneous density, weak boundaries, and the liver tumors are surrounded by other abdominal structures with similar densities. These pose limitations to accurate hepatic tumor segmentation. We propose a level set model incorporating likelihood energy with the edge energy. The minimization of the likelihood energy approximates the density distribution of the target and the multimodal density distribution of the background that can have multiple regions. In the edge energy formulation, our edge detector preserves the ramp associated with the edges for weak boundaries. We compared our approach to the Chan-Vese and the geodesic level set models and the manual segmentation performed by clinical experts. The Chan-Vese model was not successful in segmenting hepatic tumors and our model outperformed the geodesic level set model. Our results on 18 clinical datasets showed that our algorithm had a Jaccard distance error of 14.4 ± 5.3%, relative volume difference of -8.1 ± 2.1%, average surface distance of 2.4 ± 0.8 mm, RMS surface distance of 2.9 ± 0.7 mm, and the maximum surface distance of 7.2 ± 3.1 mm.
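The Chan-Vese baseline mentioned above evolves a level set function to minimize a piecewise-constant fitting energy plus a contour-length term. A minimal NumPy sketch of one gradient-descent update, assuming illustrative parameters; the paper's model replaces the fitting terms with likelihood energies and adds a ramp-preserving edge term:

```python
import numpy as np

def chan_vese_step(phi, img, mu=0.2, dt=0.5, eps=1.0):
    """One gradient-descent update of the Chan-Vese level set energy.

    phi : (H, W) level set function (zero crossing = contour)
    img : (H, W) grayscale image
    """
    # smoothed Heaviside: soft inside/outside membership
    H = 0.5 * (1 + (2 / np.pi) * np.arctan(phi / eps))
    c1 = (img * H).sum() / (H.sum() + 1e-12)              # mean inside
    c2 = (img * (1 - H)).sum() / ((1 - H).sum() + 1e-12)  # mean outside
    delta = (eps / np.pi) / (eps**2 + phi**2)             # smoothed Dirac
    # curvature (length) term: divergence of the normalized gradient
    gy, gx = np.gradient(phi)
    norm = np.sqrt(gx**2 + gy**2) + 1e-12
    curv = np.gradient(gy / norm, axis=0) + np.gradient(gx / norm, axis=1)
    force = delta * (mu * curv - (img - c1)**2 + (img - c2)**2)
    return phi + dt * force

# toy image: bright square on a dark background
img = np.zeros((32, 32))
img[10:22, 10:22] = 1.0
# initialize phi positive inside a centred circle
yy, xx = np.mgrid[:32, :32]
phi = 8.0 - np.sqrt((yy - 16.0)**2 + (xx - 16.0)**2)
for _ in range(50):
    phi = chan_vese_step(phi, img)
assert phi[16, 16] > 0 and phi[2, 2] < 0  # inside vs outside the object
```

Because this energy assumes a single homogeneous foreground mean, it struggles with the heterogeneous tumor densities described above, which motivates the multimodal likelihood energy the paper proposes.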

117 citations

Journal ArticleDOI
Yuchen Yuan, Changyang Li, Jinman Kim, Weidong Cai, David Dagan Feng
TL;DR: A regularized random walk ranking model is proposed, which introduces prior saliency estimation to every pixel in the image by taking both region and pixel image features into account, thus leading to pixel-detailed and superpixel-independent saliency maps.
Abstract: In recent saliency detection research, many graph-based algorithms have applied boundary priors as background queries, which may generate completely "reversed" saliency maps if the salient objects are on the image boundaries. Moreover, these algorithms usually depend heavily on pre-processed superpixel segmentation, which may lead to notable degradation in image detail features. In this paper, a novel saliency detection method is proposed to overcome the above issues. First, we propose a saliency reversion correction process, which locates and removes the boundary-adjacent foreground superpixels, and thereby increases the accuracy and robustness of the boundary prior-based saliency estimations. Second, we propose a regularized random walk ranking model, which introduces prior saliency estimation to every pixel in the image by taking both region and pixel image features into account, thus leading to pixel-detailed and superpixel-independent saliency maps. Experiments are conducted on four well-recognized data sets; the results indicate the superiority of our proposed method against 14 state-of-the-art methods, and demonstrate its general extensibility as a saliency optimization algorithm. We further evaluate our method on a new data set comprised of images that we define as boundary adjacent object saliency, on which our method performs better than the comparison methods.

112 citations

Journal ArticleDOI
TL;DR: DeepGene, an advanced cancer type classifier, is proposed, which addresses the obstacles in existing SMCC studies and outperforms three widely adopted existing classifiers, mainly attributed to its deep learning module that is able to extract the high level features between combinatorial somatic point mutations and cancer types.
Abstract: With the developments of DNA sequencing technology, large amounts of sequencing data have become available in recent years and provide unprecedented opportunities for advanced association studies between somatic point mutations and cancer types/subtypes, which may contribute to more accurate somatic point mutation based cancer classification (SMCC). However, in existing SMCC methods, issues like high data sparsity, small volume of sample size, and the application of simple linear classifiers, are major obstacles in improving the classification performance. To address the obstacles in existing SMCC studies, we propose DeepGene, an advanced deep neural network (DNN) based classifier, which consists of three steps: firstly, the clustered gene filtering (CGF) concentrates the gene data by mutation occurrence frequency, filtering out the majority of irrelevant genes; secondly, the indexed sparsity reduction (ISR) converts the gene data into indexes of its non-zero elements, thereby significantly suppressing the impact of data sparsity; finally, the data after CGF and ISR is fed into a DNN classifier, which extracts high-level features for accurate classification. Experimental results on our curated TCGA-DeepGene dataset, which is a reformulated subset of the TCGA dataset containing 12 selected types of cancer, show that CGF, ISR and DNN all contribute in improving the overall classification performance. We further compare DeepGene with three widely adopted classifiers and demonstrate that DeepGene has at least 24% performance improvement in terms of testing accuracy. Based on deep learning and somatic point mutation data, we devise DeepGene, an advanced cancer type classifier, which addresses the obstacles in existing SMCC studies.
Experiments indicate that DeepGene outperforms three widely adopted existing classifiers, which is mainly attributed to its deep learning module that is able to extract the high level features between combinatorial somatic point mutations and cancer types.
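The indexed sparsity reduction step described above is simple to illustrate: a long, mostly-zero mutation vector becomes a short dense list of the indexes of its non-zero entries. A sketch, where the padding value and fixed output length are illustrative choices rather than the paper's exact settings:

```python
import numpy as np

def indexed_sparsity_reduction(x, max_len):
    """Replace a sparse binary mutation vector with the indexes of
    its non-zero entries, padded/truncated to a fixed length so the
    result can be fed to a fixed-width classifier input.
    """
    idx = np.flatnonzero(x)[:max_len]          # positions of mutations
    out = np.full(max_len, -1, dtype=np.int64) # -1 marks padding
    out[:len(idx)] = idx
    return out

# a 20-gene sample with mutations in genes 3, 7 and 15
sample = np.zeros(20, dtype=np.int64)
sample[[3, 7, 15]] = 1
print(indexed_sparsity_reduction(sample, max_len=5))
# → [ 3  7 15 -1 -1]
```

The payoff is dimensionality: a genome-scale vector with tens of thousands of mostly-zero entries collapses to a handful of integers, which is what suppresses the data-sparsity problem the abstract identifies.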

99 citations


Cited by
Christopher M. Bishop
01 Jan 2006
TL;DR: This textbook covers probability distributions, linear models for regression and classification, neural networks, kernel methods, graphical models, mixture models and EM, approximate inference, sampling methods, and combining models in the context of machine learning.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

Journal Article
TL;DR: In this paper, the coding exons of the family of 518 protein kinases were sequenced in 210 cancers of diverse histological types to explore the nature of the information that will be derived from cancer genome sequencing.
Abstract: AACR Centennial Conference: Translational Cancer Medicine, Nov 4-8, 2007, Singapore; PL02-05. All cancers are due to abnormalities in DNA. The availability of the human genome sequence has led to the proposal that resequencing of cancer genomes will reveal the full complement of somatic mutations and hence all the cancer genes. To explore the nature of the information that will be derived from cancer genome sequencing we have sequenced the coding exons of the family of 518 protein kinases, ~1.3Mb DNA per cancer sample, in 210 cancers of diverse histological types. Despite the screen being directed toward the coding regions of a gene family that has previously been strongly implicated in oncogenesis, the results indicate that the majority of somatic mutations detected are “passengers”. There is considerable variation in the number and pattern of these mutations between individual cancers, indicating substantial diversity of processes of molecular evolution between cancers. The imprints of exogenous mutagenic exposures, mutagenic treatment regimes and DNA repair defects can all be seen in the distinctive mutational signatures of individual cancers. This systematic mutation screen and others have previously yielded a number of cancer genes that are frequently mutated in one or more cancer types and which are now anticancer drug targets (for example BRAF , PIK3CA , and EGFR ). However, detailed analyses of the data from our screen additionally suggest that there exist a large number of additional “driver” mutations which are distributed across a substantial number of genes. It therefore appears that cells may be able to utilise mutations in a large repertoire of potential cancer genes to acquire the neoplastic phenotype. However, many of these genes are employed only infrequently. These findings may have implications for future anticancer drug development.

2,737 citations

Journal ArticleDOI
TL;DR: A general understanding of AI methods, particularly those pertaining to image-based tasks, is established and how these methods could impact multiple facets of radiology is explored, with a general focus on applications in oncology.
Abstract: Artificial intelligence (AI) algorithms, particularly deep learning, have demonstrated remarkable progress in image-recognition tasks. Methods ranging from convolutional neural networks to variational autoencoders have found myriad applications in the medical image analysis field, propelling it forward at a rapid pace. Historically, in radiology practice, trained physicians visually assessed medical images for the detection, characterization and monitoring of diseases. AI methods excel at automatically recognizing complex patterns in imaging data and providing quantitative, rather than qualitative, assessments of radiographic characteristics. In this Opinion article, we establish a general understanding of AI methods, particularly those pertaining to image-based tasks. We explore how these methods could impact multiple facets of radiology, with a general focus on applications in oncology, and demonstrate ways in which these methods are advancing the field. Finally, we discuss the challenges facing clinical implementation and provide our perspective on how the domain could be advanced.

1,599 citations

Journal ArticleDOI
TL;DR: This work proposes a novel hybrid densely connected UNet (H-DenseUNet), which consists of a 2-D Dense UNet for efficiently extracting intra-slice features and a 3-D counterpart for hierarchically aggregating volumetric contexts under the spirit of the auto-context algorithm for liver and tumor segmentation.
Abstract: Liver cancer is one of the leading causes of cancer death. To assist doctors in hepatocellular carcinoma diagnosis and treatment planning, an accurate and automatic liver and tumor segmentation method is in high demand in clinical practice. Recently, fully convolutional neural networks (FCNs), including 2-D and 3-D FCNs, serve as the backbone in many volumetric image segmentation methods. However, 2-D convolutions cannot fully leverage the spatial information along the third dimension while 3-D convolutions suffer from high computational cost and GPU memory consumption. To address these issues, we propose a novel hybrid densely connected UNet (H-DenseUNet), which consists of a 2-D DenseUNet for efficiently extracting intra-slice features and a 3-D counterpart for hierarchically aggregating volumetric contexts under the spirit of the auto-context algorithm for liver and tumor segmentation. We formulate the learning process of the H-DenseUNet in an end-to-end manner, where the intra-slice representations and inter-slice features can be jointly optimized through a hybrid feature fusion layer. We extensively evaluated our method on the data set of the MICCAI 2017 Liver Tumor Segmentation Challenge and the 3DIRCADb data set. Our method outperformed other state-of-the-art methods on tumor segmentation and achieved very competitive performance for liver segmentation even with a single model.

1,561 citations

Journal ArticleDOI
TL;DR: The potential of liquid biopsies is highlighted by studies that show they can track the evolutionary dynamics and heterogeneity of tumours and can detect very early emergence of therapy resistance, residual disease and recurrence, but their analytical validity and clinical utility must be rigorously demonstrated before this potential can be realized.
Abstract: Precision oncology seeks to leverage molecular information about cancer to improve patient outcomes. Tissue biopsy samples are widely used to characterize tumours but are limited by constraints on sampling frequency and their incomplete representation of the entire tumour bulk. Now, attention is turning to minimally invasive liquid biopsies, which enable analysis of tumour components (including circulating tumour cells and circulating tumour DNA) in bodily fluids such as blood. The potential of liquid biopsies is highlighted by studies that show they can track the evolutionary dynamics and heterogeneity of tumours and can detect very early emergence of therapy resistance, residual disease and recurrence. However, the analytical validity and clinical utility of liquid biopsies must be rigorously demonstrated before this potential can be realized.

809 citations