Author

Xiang Zhang

Bio: Xiang Zhang is an academic researcher from Baylor College of Medicine. The author has contributed to research in the topics of Medicine and Computer science, has an h-index of 154, and has co-authored 1,733 publications receiving 117,576 citations. Previous affiliations of Xiang Zhang include the University of California, Berkeley and the University of Texas MD Anderson Cancer Center.


Papers
Journal ArticleDOI
TL;DR: In this article, Wang et al. present a proof-of-principle strategy that reprogrammes the host liver with genetic circuits to direct the synthesis and self-assembly of siRNAs into secretory exosomes and to facilitate in vivo siRNA delivery through circulating exosomes.
Abstract: RNAi therapy has undergone two stages of development, direct injection of synthetic siRNAs and delivery with artificial vehicles or conjugated ligands; neither has solved the problem of efficient in vivo siRNA delivery. Here, we present a proof-of-principle strategy that reprogrammes the host liver with genetic circuits to direct the synthesis and self-assembly of siRNAs into secretory exosomes and facilitate the in vivo delivery of siRNAs through circulating exosomes. By combining different genetic circuit modules, in vivo assembled siRNAs are systemically distributed to multiple tissues or targeted to specific tissues (e.g., brain), inducing potent target gene silencing in these tissues. The therapeutic value of our strategy is demonstrated by programmed silencing of critical targets associated with various diseases, including EGFR/KRAS in lung cancer, EGFR/TNC in glioblastoma and PTP1B in obesity. Overall, our strategy represents a next-generation RNAi therapeutic that makes RNAi therapy feasible.

35 citations

Proceedings ArticleDOI
01 Jul 2018
TL;DR: In this article, a deep reinforcement learning scheme was proposed to deal with complex situations where multi-modality sensor data is collected and a selective attention mechanism was introduced to focus on the crucial dimensions of the data.
Abstract: Multimodal wearable sensor data classification plays an important role in ubiquitous computing and has a wide range of applications in various scenarios from healthcare to entertainment. However, most of the existing work in this field employs domain-specific approaches and is thus ineffective in complex situations where multi-modality sensor data is collected. Moreover, the wearable sensor data is less informative than conventional data such as texts or images. In this paper, to improve the adaptability of such classification methods across different application contexts, we turn this classification task into a game and apply a deep reinforcement learning scheme to dynamically deal with complex situations. We also introduce a selective attention mechanism into the reinforcement learning scheme to focus on the crucial dimensions of the data. This mechanism helps to capture extra information from the signal, and can thus significantly improve the discriminative power of the classifier. We carry out several experiments on three wearable sensor datasets, and demonstrate competitive performance of the proposed approach compared to several state-of-the-art baselines.

35 citations
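The paper combines a deep reinforcement learning agent with selective attention over the sensor dimensions; its exact formulation is not reproduced here. Below is a minimal, hypothetical PyTorch sketch of the selective-attention idea alone: a learned soft mask over sensor channels re-weights a multichannel window before classification. The module and parameter names (SelectiveAttentionClassifier, n_channels, window_len) are illustrative, and the mask here is trained end-to-end rather than chosen by a reinforcement-learning agent as in the paper.

import torch
import torch.nn as nn

class SelectiveAttentionClassifier(nn.Module):
    """Toy sketch: a soft attention mask over sensor channels, then a small classifier."""
    def __init__(self, n_channels: int, window_len: int, n_classes: int):
        super().__init__()
        # One learnable score per sensor channel; softmax turns the scores into a mask.
        self.channel_scores = nn.Parameter(torch.zeros(n_channels))
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(n_channels * window_len, 128),
            nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):  # x: (batch, n_channels, window_len)
        mask = torch.softmax(self.channel_scores, dim=0)  # (n_channels,)
        attended = x * mask.view(1, -1, 1)                # emphasise informative channels
        return self.classifier(attended)

# Example: 9-channel IMU windows of 128 samples, 6 activity classes (hypothetical sizes).
model = SelectiveAttentionClassifier(n_channels=9, window_len=128, n_classes=6)
logits = model(torch.randn(4, 9, 128))
print(logits.shape)  # torch.Size([4, 6])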

Proceedings Article
25 Apr 2008
TL;DR: In this paper, the authors propose CRD, a general framework for fast co-clustering of large datasets, which achieves an execution time linear in m and n, where m and n are the numbers of rows and columns in the data matrix, respectively.
Abstract: The problem of simultaneously clustering columns and rows (co-clustering) arises in important applications, such as text data mining, microarray analysis, and recommendation system analysis. Compared with the classical clustering algorithms, co-clustering algorithms have been shown to be more effective in discovering hidden clustering structures in the data matrix. The complexity of previous co-clustering algorithms is usually O(m × n), where m and n are the numbers of rows and columns in the data matrix respectively. This limits their applicability to data matrices involving a large number of columns and rows. Moreover, some huge datasets cannot be entirely held in main memory during co-clustering, which violates the assumption made by the previous algorithms. In this paper, we propose CRD, a general framework for fast co-clustering of large datasets. By utilizing recently developed sampling-based matrix decomposition methods, CRD achieves an execution time linear in m and n. Also, CRD does not require the whole data matrix to be held in main memory. We conducted extensive experiments on both real and synthetic data. Compared with previous co-clustering algorithms, CRD achieves competitive accuracy but with much less computational cost.

35 citations
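The abstract above describes the sampling idea only at a high level, so the following is a loose illustration rather than the authors' CRD algorithm: row clusters are computed from a random sample of columns, and column clusters from a random sample of rows, which keeps each k-means pass linear in the number of rows or columns. The function name sampled_coclustering and all parameter values are hypothetical.

import numpy as np
from sklearn.cluster import KMeans

def sampled_coclustering(X, k_rows, k_cols, n_samples=50, seed=0):
    """Toy sampling-based co-clustering (not the paper's exact CRD method).

    Rows are clustered using only a random sample of columns as features, and
    columns using only a random sample of rows, so each k-means pass touches
    roughly m * n_samples or n * n_samples entries instead of the full m * n matrix.
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    col_sample = rng.choice(n, size=min(n_samples, n), replace=False)
    row_sample = rng.choice(m, size=min(n_samples, m), replace=False)
    row_labels = KMeans(n_clusters=k_rows, n_init=10, random_state=seed).fit_predict(X[:, col_sample])
    col_labels = KMeans(n_clusters=k_cols, n_init=10, random_state=seed).fit_predict(X[row_sample, :].T)
    return row_labels, col_labels

# Example: a 1,000 x 400 random matrix, 3 row clusters and 4 column clusters.
X = np.random.rand(1000, 400)
rows, cols = sampled_coclustering(X, k_rows=3, k_cols=4)
print(rows.shape, cols.shape)  # (1000,) (400,)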

Journal ArticleDOI
Georges Aad, Brad Abbott, Jalal Abdallah, S. Abdel Khalek, +2,869 more authors (169 institutions)
TL;DR: In this article, a search for a massive W' gauge boson decaying to a top quark and a bottom quark is performed with the ATLAS detector in pp collisions at the LHC.
Abstract: A search for a massive W' gauge boson decaying to a top quark and a bottom quark is performed with the ATLAS detector in pp collisions at the LHC. The dataset was taken at a centre-of-mass energy of [Formula: see text] and corresponds to [Formula: see text] of integrated luminosity. This analysis is done in the hadronic decay mode of the top quark, where novel jet substructure techniques are used to identify jets from high-momentum top quarks. This allows for a search for high-mass W' bosons in the range 1.5-3.0 TeV. b-tagging is used to identify jets originating from b-quarks. The data are consistent with Standard Model background-only expectations, and upper limits at 95% confidence level are set on the W' cross section times branching ratio, ranging from [Formula: see text] to [Formula: see text] for left-handed W' bosons, and from [Formula: see text] to [Formula: see text] for W' bosons with purely right-handed couplings. Upper limits at 95% confidence level are set on the W'-boson coupling to tb as a function of the W' mass using an effective field theory approach, which is independent of the details of particular models predicting a W' boson.

35 citations

Journal ArticleDOI
TL;DR: It is demonstrated that an EMT mechanism involving protein internalization impacts cell migration, while epithelial plasticity is identified as a determinant of metastatic organotropism in pancreatic cancer.

35 citations


Cited by
Proceedings ArticleDOI
27 Jun 2016
TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won the 1st place on the ILSVRC 2015 classification task.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

123,388 citations
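A minimal PyTorch sketch of the residual idea described in the abstract above: a block learns a residual function F(x) and adds it back to its input, so the output is x + F(x) rather than an unreferenced mapping. This is an illustrative basic block under simplifying assumptions (equal input/output channels, identity shortcut only), not the paper's full architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BasicResidualBlock(nn.Module):
    """Two 3x3 conv layers whose output is added to the block input (identity shortcut)."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        residual = self.bn2(self.conv2(F.relu(self.bn1(self.conv1(x)))))
        return F.relu(x + residual)  # identity shortcut: output = x + F(x)

block = BasicResidualBlock(channels=64)
out = block(torch.randn(1, 64, 56, 56))
print(out.shape)  # torch.Size([1, 64, 56, 56])

Stacking many such blocks is what lets the very deep configurations in the abstract remain trainable: when extra depth is unnecessary, a block's residual branch can simply learn weights close to zero.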

Proceedings Article
04 Sep 2014
TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.

55,235 citations
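As a rough illustration of the design principle in the abstract above (stacks of very small 3x3 convolutions separated by 2x2 max pooling, with channel width growing after each stage), here is a hypothetical PyTorch sketch of the first few stages of a VGG-style feature extractor; it is not the authors' published 16- or 19-layer configuration, and the channel counts are illustrative.

import torch
import torch.nn as nn

def vgg_stage(in_ch: int, out_ch: int, n_convs: int) -> nn.Sequential:
    """One VGG-style stage: n_convs 3x3 convolutions followed by 2x2 max pooling."""
    layers = []
    for i in range(n_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, kernel_size=3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
    return nn.Sequential(*layers)

# First three stages of a VGG-like network for 224x224 RGB input (illustrative only).
features = nn.Sequential(
    vgg_stage(3, 64, n_convs=2),
    vgg_stage(64, 128, n_convs=2),
    vgg_stage(128, 256, n_convs=3),
)
out = features(torch.randn(1, 3, 224, 224))
print(out.shape)  # torch.Size([1, 256, 28, 28])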

Journal ArticleDOI
04 Mar 2011 - Cell
TL;DR: Recognition of the widespread applicability of these concepts will increasingly affect the development of new means to treat human cancer.

51,099 citations

Posted Content
TL;DR: This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

44,703 citations

Proceedings ArticleDOI
07 Jun 2015
TL;DR: In this article, the authors propose Inception, a deep convolutional neural network architecture that achieves a new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.

40,257 citations
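A simplified, hypothetical PyTorch sketch of the multi-scale idea behind an Inception module as described in the abstract above: parallel 1x1, 3x3, and 5x5 convolution branches plus a pooling branch, whose outputs are concatenated along the channel dimension. The channel counts are illustrative, and the 1x1 dimensionality-reduction layers used in the published GoogLeNet are omitted for brevity.

import torch
import torch.nn as nn

class NaiveInceptionModule(nn.Module):
    """Parallel multi-scale convolution branches concatenated channel-wise."""
    def __init__(self, in_ch: int, c1: int, c3: int, c5: int, cp: int):
        super().__init__()
        self.branch1 = nn.Conv2d(in_ch, c1, kernel_size=1)
        self.branch3 = nn.Conv2d(in_ch, c3, kernel_size=3, padding=1)
        self.branch5 = nn.Conv2d(in_ch, c5, kernel_size=5, padding=2)
        self.branch_pool = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(in_ch, cp, kernel_size=1),
        )

    def forward(self, x):
        outs = [self.branch1(x), self.branch3(x), self.branch5(x), self.branch_pool(x)]
        return torch.cat(outs, dim=1)  # width grows while spatial size is preserved

# Example: a 192-channel, 28x28 feature map, as might appear early in such a network.
module = NaiveInceptionModule(in_ch=192, c1=64, c3=128, c5=32, cp=32)
out = module(torch.randn(1, 192, 28, 28))
print(out.shape)  # torch.Size([1, 256, 28, 28])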