Author

Xiang Zhang

Bio: Xiang Zhang is an academic researcher from Baylor College of Medicine. The author has contributed to research in the areas of Medicine and Computer science, has an h-index of 154, and has co-authored 1,733 publications receiving 117,576 citations. Previous affiliations of Xiang Zhang include the University of California, Berkeley and the University of Texas MD Anderson Cancer Center.


Papers
Journal ArticleDOI
21 Jan 2021-Cell
TL;DR: The authors discover that mis-spliced RNA itself is a molecular trigger for tumor killing through viral mimicry, indicating that dsRNA-sensing pathways respond to global aberrations of RNA splicing in cancer and raising the hypothesis that spliceosome-targeted therapies (STTs) may provide unexplored strategies to activate anti-tumor immune pathways.

71 citations

Journal ArticleDOI
TL;DR: A one-step electron-beam lithography process for the fabrication of a high-aspect-ratio nanopin array is presented; the structure can lead to applications such as ultrasensitive surface-enhanced Raman scattering chemical sensor arrays.
Abstract: A one-step electron-beam lithography process for the fabrication of a high-aspect-ratio nanopin array is presented. Each nanopin is a metal-capped dielectric pillar upon a ring-shaped metallic disc. Highly tunable optical properties and the electromagnetic interplay between the metallic components were studied by experiment and simulation. The two metallic pieces play asymmetrical roles in their coupling to each other due to their drastic size difference. The structure can lead to applications such as ultrasensitive surface-enhanced Raman scattering chemical sensor arrays.

71 citations

Journal ArticleDOI
TL;DR: This paper reviews recent developments in magnetic plasmonics arising from the coupling effect in metamaterials, showing that the coupling between identical magnetic resonators produces multiple discrete resonance modes due to hybridization.
Abstract: Magnetic metamaterials consist of magnetic resonators smaller in size than their excitation wavelengths. Their unique electromagnetic properties were characterized by the effective media theory at the early stage. However, the effective media model does not take into account the interactions between magnetic elements; thus, the effective properties of bulk metamaterials are the result of the “averaged effect” of many uncoupled resonators. In recent years, it has been shown that the interaction between magnetic resonators could lead to some novel phenomena and interesting applications that do not exist in conventional uncoupled metamaterials. In this paper, we will give a review of recent developments in magnetic plasmonics arising from the coupling effect in metamaterials. For the system composed of several identical magnetic resonators, the coupling between these units produces multiple discrete resonance modes due to hybridization. In the case of a system comprising an infinite number of magnetic elements, these multiple discrete resonances can be extended to form a continuous frequency band by strong coupling. This kind of broadband and tunable magnetic metamaterial may have interesting applications. Many novel metamaterials and nanophotonic devices could be developed from coupled resonator systems in the future. (© 2009 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
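To illustrate the hybridization picture described above, here is a small numerical sketch (not taken from the paper) using a toy nearest-neighbour coupling model of N identical resonators: N coupled units yield N discrete hybridized modes, and as N grows the modes fill a frequency band whose width is set by the coupling strength. The matrix form and the coupling constant kappa are assumptions chosen purely for illustration.

```python
# Toy illustration (not from the paper): hybridization of N identical coupled
# resonators, modeled with a nearest-neighbour coupling matrix. The eigenvalues
# give normalized mode frequencies; N coupled units yield N discrete modes,
# and as N grows the modes fill a band whose width is set by the coupling kappa.
import numpy as np

def hybridized_modes(n_resonators, omega0=1.0, kappa=0.1):
    """Normalized eigenfrequencies of a chain of identical coupled resonators."""
    h = np.diag(np.full(n_resonators, omega0**2))
    idx = np.arange(n_resonators - 1)
    h[idx, idx + 1] = h[idx + 1, idx] = kappa * omega0**2  # nearest-neighbour coupling
    return np.sqrt(np.linalg.eigvalsh(h))

for n in (2, 3, 10):
    modes = hybridized_modes(n)
    print(f"N={n:2d}: {len(modes)} modes spanning "
          f"{modes.min():.3f} .. {modes.max():.3f} (units of omega0)")
```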

71 citations

Journal ArticleDOI
TL;DR: CT features including lesion size, tumour growth pattern, and EVFDM were predictors of the risk stratifications for GISTs, according to the 2008 NIH criteria.
Abstract: To determine the predictive CT imaging features for risk stratifications in patients with primary gastrointestinal stromal tumours (GISTs). One hundred and twenty-nine patients with histologically confirmed primary GISTs (diameter >2 cm) were enrolled. CT imaging features were reviewed. Tumour risk stratifications were determined according to the 2008 NIH criteria, where GISTs were classified into four categories according to tumour size, location, mitosis count, and tumour rupture. The association between risk stratifications and CT features was analyzed using univariate analysis, followed by multinomial logistic regression and receiver operating characteristic (ROC) curve analysis. CT imaging features including tumour margin, size, shape, tumour growth pattern, direct organ invasion, necrosis, enlarged vessels feeding or draining the mass (EVFDM), lymphadenopathy, and contrast enhancement pattern were associated with the risk stratifications, as determined by univariate analysis (P < 0.05). Only lesion size, growth pattern, and EVFDM remained independent risk factors in multinomial logistic regression analysis (OR = 3.480–100.384). ROC curve analysis showed that the area under the curve of the obtained multinomial logistic regression model was 0.806 (95 % CI: 0.727–0.885). CT features including lesion size, tumour growth pattern, and EVFDM were predictors of the risk stratifications for GISTs.
• CT features were of predictive value for risk stratification of GISTs.
• Tumour size, growth patterns, and EVFDM were risk predictors of GISTs.
• Large size, mixed growth pattern, or EVFDM indicated high-risk GISTs.
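The analysis pipeline described in this abstract (multinomial logistic regression over CT features followed by ROC/AUC evaluation) can be sketched roughly as below. This is an illustrative outline only: the feature encodings and the simulated data are placeholders, not the study's cohort, and scikit-learn is assumed as the modelling library.

```python
# Illustrative sketch of the general workflow in the abstract: multinomial
# logistic regression over CT features, then ROC/AUC analysis. Feature names
# and the synthetic data below are placeholders, not the study's data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 129  # cohort size mentioned in the abstract; the data itself is simulated
X = np.column_stack([
    rng.normal(6, 3, n),    # lesion size (cm), hypothetical distribution
    rng.integers(0, 3, n),  # growth pattern: 0 endoluminal, 1 exophytic, 2 mixed
    rng.integers(0, 2, n),  # EVFDM present (1) or absent (0)
])
y = rng.integers(0, 4, n)   # risk category: very low / low / intermediate / high

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0, stratify=y)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # multinomial by default
auc = roc_auc_score(y_test, model.predict_proba(X_test), multi_class="ovr")
print(f"one-vs-rest AUC on held-out data: {auc:.3f}")
```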

70 citations


Cited by
Proceedings ArticleDOI
27 Jun 2016
TL;DR: The authors propose a residual learning framework that eases the training of networks substantially deeper than those used previously; an ensemble of the resulting residual nets won 1st place on the ILSVRC 2015 classification task.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
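The core idea of the abstract, reformulating layers to learn a residual F(x) that is added back to the input, can be sketched as a minimal residual block. This is an illustrative sketch in PyTorch, not the exact ResNet-152 configuration from the paper; the channel width and layer ordering are assumptions.

```python
# Minimal sketch of the residual-learning idea: a block learns a residual F(x)
# and outputs F(x) + x, so the layers fit a residual function with reference to
# the input rather than an unreferenced mapping. Widths here are illustrative.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        residual = self.bn2(self.conv2(self.relu(self.bn1(self.conv1(x)))))
        return self.relu(residual + x)  # identity shortcut: output = F(x) + x

block = ResidualBlock(64)
print(block(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```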

123,388 citations

Proceedings Article
04 Sep 2014
TL;DR: This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, showing that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.
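The design choice the abstract highlights, pushing depth by stacking very small 3x3 filters, can be illustrated with a simplified VGG-style block. This is a hedged sketch (PyTorch assumed), not the published VGG-16/19 configurations; two stacked 3x3 convolutions cover a 5x5 receptive field with fewer parameters and an extra non-linearity.

```python
# Simplified illustration of stacking very small 3x3 convolutions (not the
# published VGG-16/19 configurations). Two stacked 3x3 layers cover a 5x5
# receptive field with fewer parameters and an additional non-linearity.
import torch
import torch.nn as nn

def vgg_style_block(in_ch, out_ch, n_convs=2):
    """A block of n_convs 3x3 conv+ReLU layers followed by 2x2 max pooling."""
    layers = []
    for i in range(n_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(2))
    return nn.Sequential(*layers)

# Parameter count: two 3x3 convs (C->C) use 2*(9*C*C) weights vs 25*C*C for one 5x5.
stem = nn.Sequential(vgg_style_block(3, 64), vgg_style_block(64, 128))
print(stem(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 128, 56, 56])
```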

55,235 citations

Journal ArticleDOI
04 Mar 2011-Cell
TL;DR: Recognition of the widespread applicability of the hallmark concepts will increasingly affect the development of new means to treat human cancer.

51,099 citations

Posted Content
TL;DR: This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

44,703 citations

Proceedings ArticleDOI
07 Jun 2015
TL;DR: The authors propose Inception, a deep convolutional neural network architecture that achieves a new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.
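The multi-scale idea behind an Inception module can be sketched as parallel 1x1, 3x3, and 5x5 branches plus a pooling path, concatenated along the channel axis, with 1x1 convolutions used as cheap dimension reductions to keep the computational budget in check. The sketch below (PyTorch assumed) uses branch widths chosen for illustration and is not a faithful reproduction of the published GoogLeNet configuration.

```python
# Simplified sketch of an Inception-style module: parallel multi-scale branches
# with 1x1 reductions, concatenated along the channel dimension. Branch widths
# here are illustrative choices, not GoogLeNet's published ones.
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    def __init__(self, in_ch, b1, b3_red, b3, b5_red, b5, pool_proj):
        super().__init__()
        self.branch1 = nn.Conv2d(in_ch, b1, 1)
        self.branch3 = nn.Sequential(nn.Conv2d(in_ch, b3_red, 1), nn.ReLU(inplace=True),
                                     nn.Conv2d(b3_red, b3, 3, padding=1))
        self.branch5 = nn.Sequential(nn.Conv2d(in_ch, b5_red, 1), nn.ReLU(inplace=True),
                                     nn.Conv2d(b5_red, b5, 5, padding=2))
        self.branch_pool = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                         nn.Conv2d(in_ch, pool_proj, 1))

    def forward(self, x):
        # Concatenate the parallel multi-scale branches along the channel axis.
        return torch.cat([self.branch1(x), self.branch3(x),
                          self.branch5(x), self.branch_pool(x)], dim=1)

module = InceptionModule(192, b1=64, b3_red=96, b3=128, b5_red=16, b5=32, pool_proj=32)
print(module(torch.randn(1, 192, 28, 28)).shape)  # torch.Size([1, 256, 28, 28])
```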

40,257 citations