Author

Xiang Zhang

Bio: Xiang Zhang is an academic researcher from Baylor College of Medicine. The author has contributed to research in the topics of Medicine and Computer science, has an h-index of 154, and has co-authored 1,733 publications receiving 117,576 citations. Previous affiliations of Xiang Zhang include the University of California, Berkeley and the University of Texas MD Anderson Cancer Center.


Papers
Journal ArticleDOI
TL;DR: In this paper, the authors presented an experimental and numerical study of the fatigue behavior of a wide bonded lap joint with a process-induced defect of semi-circular shape; the fatigue life was predicted using a normalised strain energy release rate parameter calculated by the finite element method.

24 citations

Journal ArticleDOI
TL;DR: In this article, B, N, F tri-doped lignin-derived carbon nanofibers (BNF-LCFs) were prepared by electrospinning and pyrolysis.

24 citations

Proceedings ArticleDOI
27 Dec 2018
TL;DR: A novel memory-based random walk method that can simultaneously identify multiple target local communities to which the query nodes belong; walkers with similar visiting histories reinforce each other so that they better capture the community structure instead of being biased toward the query nodes.
Abstract: Local community detection, which aims to find a target community containing a set of query nodes, has recently drawn intense research interest. The existing local community detection methods usually assume all query nodes are from the same community and only find a single target community. This is a strict requirement and does not allow much flexibility. In many real-world applications, however, we may not have any prior knowledge about the community memberships of the query nodes, and different query nodes may be from different communities. To address this limitation of the existing methods, we propose a novel memory-based random walk method, MRW, that can simultaneously identify multiple target local communities to which the query nodes belong. In MRW, each query node is associated with a random walker. Unlike commonly used memoryless random walk models, MRW records the entire visiting history of each walker. The visiting histories of walkers can help unravel whether they are from the same community or not. Intuitively, walkers with similar visiting histories are more likely to be in the same community. Moreover, MRW allows walkers with similar visiting histories to reinforce each other so that they can better capture the community structure instead of being biased to the query nodes. We provide a rigorous theoretical foundation for the proposed method and develop efficient algorithms to identify multiple target local communities simultaneously. Comprehensive experimental evaluations on a variety of real-world datasets demonstrate the effectiveness and efficiency of the proposed method.
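The reinforcement mechanism lends itself to a compact illustration. The sketch below is a hypothetical Python/NumPy simplification, not the authors' MRW implementation: each walker keeps a visit-count history, and its next step is biased toward nodes favored by walkers with similar histories. The cosine-similarity weighting, the mixing parameter alpha, and the function name are assumptions made for illustration.

```python
# A minimal memory-based random walk sketch in the spirit of MRW
# (illustrative only; the published method differs in its details).
import numpy as np
import networkx as nx

def mrw_sketch(G, query_nodes, steps=200, alpha=0.5, seed=0):
    """One walker per query node. Each walker keeps its full visiting
    history as a visit-count vector; a step is biased toward nodes often
    visited by walkers whose histories resemble its own."""
    rng = np.random.default_rng(seed)
    nodes = list(G.nodes())
    idx = {v: i for i, v in enumerate(nodes)}
    k = len(query_nodes)
    hist = np.zeros((k, len(nodes)))      # visit-count histories
    pos = list(query_nodes)
    for q, v in enumerate(pos):
        hist[q, idx[v]] = 1.0             # seed each history at its query node
    for _ in range(steps):
        norms = np.linalg.norm(hist, axis=1, keepdims=True)
        sim = (hist / norms) @ (hist / norms).T   # cosine history similarity
        for w in range(k):
            nbrs = list(G.neighbors(pos[w]))
            nbr_idx = [idx[u] for u in nbrs]
            # Mutual reinforcement: weight neighbors by how often similar
            # walkers have visited them, plus a uniform exploration term.
            weights = alpha * (sim[w] @ hist[:, nbr_idx]) + 1.0
            nxt = nbrs[rng.choice(len(nbrs), p=weights / weights.sum())]
            hist[w, idx[nxt]] += 1.0
            pos[w] = nxt
    return hist, sim   # walkers with similar final histories share a community

# Example: hist, sim = mrw_sketch(nx.karate_club_graph(), query_nodes=[0, 33])
```

Grouping walkers by thresholding the final similarity matrix is the crudest possible assignment; it merely illustrates how shared histories separate walkers into multiple target communities.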

24 citations

Journal ArticleDOI
TL;DR: A new method, HTreeQA, that uses a tristate semi-perfect phylogeny tree to approximate the perfect phylogeny used in existing methods, making it suitable for QTL mapping in any mouse population, including the incipient Collaborative Cross lines.
Abstract: With the advances in high-throughput genotyping technology, the study of quantitative trait loci (QTL) has emerged as a promising tool for understanding the genetic basis of complex traits. Methodology development for the study of QTL has recently attracted significant research attention. Local phylogeny-based methods have been demonstrated to be powerful tools for uncovering significant associations between phenotypes and single-nucleotide polymorphism markers. However, most existing methods are designed for homozygous genotypes, and a separate haplotype reconstruction step is often needed to resolve heterozygous genotypes. This approach has limited power to detect nonadditive genetic effects and imposes an extensive computational burden. In this article, we propose a new method, HTreeQA, that uses a tristate semi-perfect phylogeny tree to approximate the perfect phylogeny used in existing methods. The semi-perfect phylogeny trees are used as high-level markers for association study. HTreeQA uses the genotype data as direct input without phasing and can handle complex local population structures. It is suitable for QTL mapping in any mouse population, including the incipient Collaborative Cross lines. Applying HTreeQA, we found significant QTLs for two phenotypes of the PreCC lines, white head spot and running distance at day 5/6. These findings are consistent with known genes and QTLs discovered in independent studies. Simulation studies under three different genetic models show that HTreeQA can detect a wider range of genetic effects and is more efficient than existing phylogeny-based approaches. We also provide a rigorous theoretical analysis showing that HTreeQA has a lower error rate than alternative methods.
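As a rough illustration of how local structure can serve as a high-level marker, the sketch below groups individuals by their local tristate genotype pattern (0/1/2, with heterozygous calls kept as a third state, so no phasing step is needed) and tests the phenotype across groups with a one-way ANOVA. This windowed pattern grouping is a stand-in assumption; constructing an actual tristate semi-perfect phylogeny tree, as HTreeQA does, is considerably more involved.

```python
# Hypothetical sketch of phylogeny-style QTL association testing.
# Grouping by a local tristate pattern is a crude stand-in for the
# clades of HTreeQA's semi-perfect phylogeny trees.
import numpy as np
from scipy.stats import f_oneway

def local_association(genotypes, phenotype, window=3):
    """genotypes: (n_individuals, n_snps) array coded 0/1/2, where 2 marks
    an unphased heterozygous call. Returns one ANOVA p-value per window."""
    n, m = genotypes.shape
    pvals = []
    for start in range(m - window + 1):
        groups = {}
        for i, row in enumerate(genotypes[:, start:start + window]):
            groups.setdefault(tuple(row), []).append(phenotype[i])
        samples = [np.asarray(g) for g in groups.values() if len(g) > 1]
        pvals.append(f_oneway(*samples).pvalue if len(samples) > 1 else 1.0)
    return np.array(pvals)
```

Because heterozygotes form their own groups rather than being phased away, group means can capture nonadditive (e.g., dominance) effects, which is the intuition behind working directly on unphased genotypes.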

24 citations

Journal ArticleDOI
TL;DR: In this paper, the authors performed genomic, transcriptional, and proteomic profiling using whole-exome sequencing, mRNA-seq, and reverse-phase protein array analysis in a cohort of lung, breast, and renal cell carcinomas comprising brain metastases and patient-matched primary or extracranial metastatic tissues.
Abstract: The deadly complication of brain metastasis (BM) is largely confined to a relatively narrow cross-section of systemic malignancies, suggesting a fundamental role for biological mechanisms shared across commonly brain metastatic tumor types. To identify and characterize such mechanisms, we performed genomic, transcriptional, and proteomic profiling using whole-exome sequencing, mRNA-seq, and reverse-phase protein array analysis in a cohort of lung, breast, and renal cell carcinomas consisting of BM and patient-matched primary or extracranial metastatic tissues. While no specific genomic alterations were associated with BM, correlations with impaired cellular immunity, upregulated oxidative phosphorylation (OXPHOS), and canonical oncogenic signaling pathways, including phosphoinositide 3-kinase (PI3K) signaling, were apparent across multiple tumor histologies. Multiplexed immunofluorescence analysis confirmed significant T cell depletion in BM, indicative of a fundamentally altered immune microenvironment. Moreover, functional studies using in vitro and in vivo modeling demonstrated heightened oxidative metabolism in BM along with sensitivity to OXPHOS inhibition in murine BM models and brain metastatic derivatives relative to isogenic parental lines. These findings demonstrate that pathophysiological rewiring of oncogenic signaling, cellular metabolism, and the immune microenvironment broadly characterizes BM. Further clarification of this biology will likely reveal promising targets for therapeutic development against BM arising from a broad variety of systemic cancers.
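One step of the transcriptional analysis, scoring an OXPHOS gene signature per sample and asking whether BM samples score higher than matched non-BM tissues, can be sketched briefly. Everything below (the mean z-score convention, the gene list, the function names) is an illustrative assumption rather than the study's actual pipeline.

```python
# Hypothetical signature-scoring sketch: mean z-score of signature genes
# per sample, then a one-sided Mann-Whitney test of BM vs. non-BM samples.
import numpy as np
from scipy.stats import mannwhitneyu

def signature_score(expr, genes, signature):
    """expr: (n_genes, n_samples) log-expression matrix; genes: row labels."""
    z = (expr - expr.mean(axis=1, keepdims=True)) / (expr.std(axis=1, keepdims=True) + 1e-12)
    rows = [genes.index(g) for g in signature if g in genes]
    return z[rows].mean(axis=0)           # one score per sample

def bm_vs_primary(expr, genes, signature, is_bm):
    """is_bm: boolean array marking brain-metastasis samples."""
    scores = signature_score(expr, genes, signature)
    _, p = mannwhitneyu(scores[is_bm], scores[~is_bm], alternative="greater")
    return scores, p
```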

23 citations


Cited by
Proceedings ArticleDOI
27 Jun 2016
TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won the 1st place on the ILSVRC 2015 classification task.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers, 8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
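The reformulation is easy to see in code. Below is a minimal PyTorch sketch of a basic residual block of the form the paper describes: the stacked layers learn a residual function F(x) and the block outputs F(x) + x through an identity shortcut. Bottleneck variants and the projection shortcuts used when dimensions change are omitted.

```python
# Minimal basic residual block: out = relu(F(x) + x), identity shortcut.
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))   # F(x): first conv-BN-ReLU
        out = self.bn2(self.conv2(out))            # F(x): second conv-BN
        return self.relu(out + x)                  # residual addition: F(x) + x
```

If the optimal mapping for the stacked layers is close to identity, the network only has to push F(x) toward zero, which is the paper's explanation for why very deep residual nets remain easy to optimize.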

123,388 citations

Proceedings Article
04 Sep 2014
TL;DR: This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting, using an architecture with very small (3x3) convolution filters, and shows that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.
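The design rule, gaining depth by stacking very small 3x3 convolutions with 2x2 max-pooling between stages, is simple to sketch in PyTorch. The stage widths below follow the published VGG-16 configuration; the helper function name is ours.

```python
# VGG-style feature extractor: depth from repeated 3x3 convolutions.
import torch.nn as nn

def vgg_stage(in_ch, out_ch, n_convs):
    layers = []
    for i in range(n_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(2))        # halve spatial resolution per stage
    return nn.Sequential(*layers)

# VGG-16-style configuration: 13 convolutional layers across five stages
# (the classifier's three fully connected layers are omitted here).
features = nn.Sequential(
    vgg_stage(3, 64, 2), vgg_stage(64, 128, 2), vgg_stage(128, 256, 3),
    vgg_stage(256, 512, 3), vgg_stage(512, 512, 3),
)
```

Two stacked 3x3 convolutions cover the same receptive field as a single 5x5 layer while using fewer parameters and adding an extra nonlinearity, which is why small filters let depth grow cheaply.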

55,235 citations

Journal ArticleDOI
04 Mar 2011, Cell
TL;DR: Recognition of the widespread applicability of these concepts will increasingly affect the development of new means to treat human cancer.

51,099 citations

Posted Content
TL;DR: This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

44,703 citations

Proceedings ArticleDOI
07 Jun 2015
TL;DR: In this paper, the authors propose Inception, a deep convolutional neural network architecture that achieved a new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22-layer-deep network, the quality of which is assessed in the context of classification and detection.
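A minimal PyTorch sketch of the module the abstract describes follows: parallel 1x1, 3x3, and 5x5 branches plus a pooled branch, concatenated along the channel dimension, with 1x1 "reduce" convolutions keeping the computational budget in check. The branch widths in the usage line are those of GoogLeNet's first Inception module; the rest of the 22-layer network is omitted.

```python
# Minimal Inception module: four parallel branches concatenated on channels.
import torch
import torch.nn as nn

def conv_relu(in_ch, out_ch, k, **kw):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, k, **kw), nn.ReLU(inplace=True))

class InceptionModule(nn.Module):
    def __init__(self, in_ch, c1, c3r, c3, c5r, c5, pp):
        super().__init__()
        self.b1 = conv_relu(in_ch, c1, 1)                    # 1x1 branch
        self.b3 = nn.Sequential(conv_relu(in_ch, c3r, 1),    # 1x1 reduce
                                conv_relu(c3r, c3, 3, padding=1))
        self.b5 = nn.Sequential(conv_relu(in_ch, c5r, 1),    # 1x1 reduce
                                conv_relu(c5r, c5, 5, padding=2))
        self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                conv_relu(in_ch, pp, 1))     # pool projection

    def forward(self, x):
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)

# GoogLeNet's module 3a: 192 input channels -> 64 + 128 + 32 + 32 = 256 output.
block = InceptionModule(192, c1=64, c3r=96, c3=128, c5r=16, c5=32, pp=32)
```

The 1x1 reductions placed before the expensive 3x3 and 5x5 convolutions are what allow depth and width to grow while the computational budget stays roughly constant.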

40,257 citations