Author

Xiang Zhang

Bio: Xiang Zhang is an academic researcher from Baylor College of Medicine. The author has contributed to research in topics including Medicine and Computer science, has an h-index of 154, and has co-authored 1,733 publications receiving 117,576 citations. Previous affiliations of Xiang Zhang include the University of California, Berkeley and the University of Texas MD Anderson Cancer Center.
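For reference, the h-index quoted above is the largest number h such that h of an author's papers have each received at least h citations. A minimal Python sketch of the computation (the function name and example data are illustrative, not part of SciSpace's API):

def h_index(citations: list[int]) -> int:
    """Largest h such that at least h papers have >= h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(counts, start=1):  # rank is 1-based
        if count >= rank:
            h = rank  # this paper still clears the bar
        else:
            break  # counts are sorted, so no later paper can
    return h

# Example: papers with 10, 3, and 1 citations give an h-index of 2.
assert h_index([10, 3, 1]) == 2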


Papers
Journal ArticleDOI
TL;DR: In this paper, the authors discuss the use of inverse power scaling laws in fatigue damage assessment, reviewing the relevant engineering standards and pointing out their inherent limitations; a physically consistent general scaling law is then obtained by rigorous mathematical analysis in the framework of random vibration theory.
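For background on the quantities involved: the inverse power law in question is the Basquin-type S-N relation used by the engineering standards the paper reviews, usually combined with linear (Palmgren-Miner) damage accumulation. A sketch in common notation (these are the standard textbook forms, not necessarily the paper's own):

N \, S^{k} = C \qquad\Longleftrightarrow\qquad N = C \, S^{-k}

D = \sum_i \frac{n_i}{N_i} = \frac{1}{C}\sum_i n_i \, S_i^{k}, \qquad \text{failure predicted at } D = 1

where N is the number of cycles to failure at constant stress amplitude S, k and C are material constants, and n_i cycles are applied at each stress level S_i.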

46 citations

Journal ArticleDOI
TL;DR: In this paper, a generalized quantum search algorithm, in which the phase inversions for the marked state and the prepared state are replaced by π/2 phase rotations, is realized in a two-qubit heteronuclear NMR system.
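For orientation, the standard Grover iteration applies a phase of e^{iπ} (a sign flip) to the marked state and to the prepared state; the generalization realized here replaces both inversions with tunable rotations set to π/2. A sketch in common notation (not taken from the paper itself):

G(\phi, \theta) = \bigl(I - (1 - e^{i\phi})\,|\psi\rangle\langle\psi|\bigr)\,\bigl(I - (1 - e^{i\theta})\,|\tau\rangle\langle\tau|\bigr)

with |\tau\rangle the marked state, |\psi\rangle the prepared state, and \theta = \phi = \pi recovering the usual phase inversions; the experiment described uses \theta = \phi = \pi/2.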

45 citations

Journal ArticleDOI
TL;DR: In this article, the influence of build orientation and post-processing treatments (annealing or hot isostatic pressing) on the fatigue and fracture behavior of L-PBF stainless steel 316L in the high-cycle fatigue regime is examined.

45 citations

Journal ArticleDOI
Georges Aad, Brad Abbott, Jalal Abdallah, S. Abdel Khalek, +2,859 more (169 institutions)
TL;DR: In this article, a search for the production of single top quarks in association with missing energy is performed in proton-proton collisions at a centre-of-mass energy of 8 TeV with the ATLAS experiment at the Large Hadron Collider, using data collected in 2012.
Abstract: A search for the production of single top quarks in association with missing energy is performed in proton-proton collisions at a centre-of-mass energy of 8 TeV with the ATLAS experiment at the Large Hadron Collider, using data collected in 2012 and corresponding to an integrated luminosity of 20.3 fb⁻¹. In this search, the W boson from the top quark is required to decay into an electron or a muon and a neutrino. No deviation from the standard model prediction is observed, and upper limits are set on the production cross-section for resonant and non-resonant production of an invisible exotic state in association with a right-handed top quark. In the case of resonant production, for a spin-0 resonance with a mass of [Formula: see text] GeV, an effective coupling strength above [Formula: see text] is excluded at 95% confidence level for the top quark and an invisible spin-1/2 state with mass between [Formula: see text] and [Formula: see text] GeV. In the case of non-resonant production, an effective coupling strength above [Formula: see text] is excluded at 95% confidence level for the top quark and an invisible spin-1 state with mass between [Formula: see text] and [Formula: see text] GeV.

45 citations

Journal ArticleDOI
TL;DR: In this article, the authors propose a new route to developing high-efficiency heterogeneous catalysts, one that enriches the functionalization strategies for nanoporous metal-organic frameworks (MOFs) by introducing the strongly Lewis-basic fluorine group onto selected linkers.
Abstract: The high catalytic activity of metal-organic frameworks (MOFs) can be realized by increasing their effective active sites, which prompts us to functionalize selected linkers by introducing a strongly Lewis-basic fluorine group. Herein, the combination of paddle-wheel [Cu2(CO2)4(H2O)] clusters and the meticulously designed fluorine-functionalized tetratopic 2',3'-difluoro-[p-terphenyl]-3,3″,5,5″-tetracarboxylic acid (F-H4ptta) yields a distinctive nanocaged {Cu2}-organic framework, {[Cu2(F-ptta)(H2O)2]·5DMF·2H2O}n (NUC-54), which features two types of nanocaged voids (9.8 Å × 17.2 Å and 10.1 Å × 12.4 Å) shaped by 12 paddle-wheel [Cu2(COO)4(H2O)2] secondary building units, leaving a calculated solvent-accessible void volume of 60.6%. Because of the plentiful Lewis-base sites introduced by the fluorine groups, activated NUC-54a exhibits excellent catalytic performance in the cycloaddition of CO2 with various epoxides under mild conditions. Moreover, to expand the catalytic scope, deacetalization-Knoevenagel condensation reactions of benzaldehyde dimethyl acetal and malononitrile were performed with NUC-54a as a heterogeneous catalyst. NUC-54a also shows high recyclability and catalytic stability, retaining excellent performance in subsequent catalytic tests. Therefore, this work not only puts forward a new solution for developing high-efficiency heterogeneous catalysts but also enriches the functionalization strategies for nanoporous MOFs.

45 citations


Cited by
Proceedings ArticleDOI
27 Jun 2016
TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won the 1st place on the ILSVRC 2015 classification task.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers, 8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
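The core idea lends itself to a very small code sketch: a block computes a residual F(x) and outputs F(x) + x, so that fitting the identity mapping is the easy default for extra layers. A minimal PyTorch version (layer sizes are illustrative, not the paper's exact configuration):

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.bn2(self.conv2(self.relu(self.bn1(self.conv1(x)))))
        return self.relu(out + x)  # the skip connection: F(x) + x

# The block preserves shape, so dozens of them can be stacked.
y = ResidualBlock(64)(torch.randn(1, 64, 32, 32))
assert y.shape == (1, 64, 32, 32)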

123,388 citations

Proceedings Article
04 Sep 2014
TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.
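The design choice the abstract describes, stacks of very small filters, is easy to see in code: two 3x3 convolutions cover a 5x5 receptive field (three cover 7x7) with fewer parameters and more non-linearities than one large filter. A PyTorch sketch of a VGG-style stage (channel sizes are illustrative):

import torch
import torch.nn as nn

def vgg_block(in_ch: int, out_ch: int, num_convs: int) -> nn.Sequential:
    layers: list[nn.Module] = []
    for i in range(num_convs):
        layers.append(nn.Conv2d(in_ch if i == 0 else out_ch, out_ch,
                                kernel_size=3, padding=1))
        layers.append(nn.ReLU(inplace=True))
    layers.append(nn.MaxPool2d(kernel_size=2, stride=2))  # halve H and W
    return nn.Sequential(*layers)

# Example: the first two stages of a VGG-style network on a 224x224 image.
stem = nn.Sequential(vgg_block(3, 64, 2), vgg_block(64, 128, 2))
assert stem(torch.randn(1, 3, 224, 224)).shape == (1, 128, 56, 56)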

55,235 citations

Journal ArticleDOI
04 Mar 2011 - Cell
TL;DR: Recognition of the widespread applicability of these concepts will increasingly affect the development of new means to treat human cancer.

51,099 citations

Posted Content
TL;DR: This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers, 8× deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

44,703 citations

Proceedings ArticleDOI
07 Jun 2015
TL;DR: Inception as mentioned in this paper is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22-layer-deep network, the quality of which is assessed in the context of classification and detection.
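The "multi-scale processing" intuition is concrete in code: an Inception module runs 1x1, 3x3, and 5x5 convolutions (plus a pooled branch) in parallel and concatenates the results along the channel axis, with 1x1 bottlenecks keeping the computational budget in check. A PyTorch sketch (branch widths below follow common expositions and are illustrative):

import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    def __init__(self, in_ch: int):
        super().__init__()
        self.branch1 = nn.Conv2d(in_ch, 64, kernel_size=1)
        self.branch3 = nn.Sequential(
            nn.Conv2d(in_ch, 96, kernel_size=1),  # 1x1 bottleneck
            nn.Conv2d(96, 128, kernel_size=3, padding=1))
        self.branch5 = nn.Sequential(
            nn.Conv2d(in_ch, 16, kernel_size=1),  # 1x1 bottleneck
            nn.Conv2d(16, 32, kernel_size=5, padding=2))
        self.branch_pool = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(in_ch, 32, kernel_size=1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.cat([self.branch1(x), self.branch3(x),
                          self.branch5(x), self.branch_pool(x)], dim=1)

# Channels add up across branches: 64 + 128 + 32 + 32 = 256.
y = InceptionModule(192)(torch.randn(1, 192, 28, 28))
assert y.shape == (1, 256, 28, 28)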

40,257 citations