scispace - formally typeset
Author

Xiang Zhang

Bio: Xiang Zhang is an academic researcher from Baylor College of Medicine. The author has contributed to research in the topics of Medicine and Computer science. The author has an h-index of 154, has co-authored 1,733 publications receiving 117,576 citations. Previous affiliations of Xiang Zhang include the University of California, Berkeley and the University of Texas MD Anderson Cancer Center.


Papers
Journal ArticleDOI
TL;DR: It is demonstrated that cancer can be non-invasively detected up to four years before the current standard of care, and that patients whose disease is diagnosed in its early stages have better outcomes.
Abstract: Early detection has the potential to reduce cancer mortality, but an effective screening test must demonstrate asymptomatic cancer detection years before conventional diagnosis in a longitudinal study. In the Taizhou Longitudinal Study (TZL), 123,115 healthy subjects provided plasma samples for long-term storage and were then monitored for cancer occurrence. Here we report the preliminary results of PanSeer, a noninvasive blood test based on circulating tumor DNA methylation, on TZL plasma samples from 605 asymptomatic individuals, 191 of whom were later diagnosed with stomach, esophageal, colorectal, lung or liver cancer within four years of blood draw. We also assay plasma samples from an additional 223 cancer patients, plus 200 primary tumor and normal tissues. We show that PanSeer detects five common types of cancer in 88% (95% CI: 80–93%) of post-diagnosis patients with a specificity of 96% (95% CI: 93–98%). We also demonstrate that PanSeer detects cancer in 95% (95% CI: 89–98%) of asymptomatic individuals who were later diagnosed, though future longitudinal studies are required to confirm this result. These results demonstrate that cancer can be non-invasively detected up to four years before the current standard of care. Patients whose disease is diagnosed in its early stages have better outcomes. In this study, the authors develop a non-invasive blood test based on circulating tumor DNA methylation that can potentially detect cancer occurrence even in asymptomatic patients.
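The sensitivities above are reported with 95% confidence intervals. As a rough illustration of how such intervals on a detection rate arise, the sketch below computes a Wilson score interval for a binomial proportion; the counts used (88 detections out of 100 patients) are illustrative assumptions, not the study's actual denominators.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (95% CI by default)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - margin, center + margin

# Illustrative counts only: 88 detections in 100 post-diagnosis patients
lo, hi = wilson_interval(88, 100)
print(f"88% sensitivity, 95% CI: {lo:.0%}-{hi:.0%}")
```

With these toy counts the interval comes out near 80–93%, matching the shape of the intervals quoted in the abstract; the Wilson form is preferred over the naive normal approximation when the proportion is close to 0 or 1.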

295 citations

Journal ArticleDOI
Georges Aad, T. Abajyan, Brad Abbott, J. Abdallah, +2,914 more authors (169 institutions)
TL;DR: In this article, the jet energy scale and its systematic uncertainty are determined for jets measured with the ATLAS detector using proton-proton collision data with a centre-of-mass energy of [Formula: see text] TeV corresponding to an integrated luminosity of [Formula: see text].
Abstract: The jet energy scale (JES) and its systematic uncertainty are determined for jets measured with the ATLAS detector using proton-proton collision data with a centre-of-mass energy of [Formula: see text] TeV corresponding to an integrated luminosity of [Formula: see text][Formula: see text]. Jets are reconstructed from energy deposits forming topological clusters of calorimeter cells using the anti-[Formula: see text] algorithm with distance parameters [Formula: see text] or [Formula: see text], and are calibrated using MC simulations. A residual JES correction is applied to account for differences between data and MC simulations. This correction and its systematic uncertainty are estimated using a combination of in situ techniques exploiting the transverse momentum balance between a jet and a reference object such as a photon or a [Formula: see text] boson, for [Formula: see text] and pseudorapidities [Formula: see text]. The effect of multiple proton-proton interactions is corrected for, and an uncertainty is evaluated using in situ techniques. The smallest JES uncertainty of less than 1 % is found in the central calorimeter region ([Formula: see text]) for jets with [Formula: see text]. For central jets at lower [Formula: see text], the uncertainty is about 3 %. A consistent JES estimate is found using measurements of the calorimeter response of single hadrons in proton-proton collisions and test-beam data, which also provide the estimate for [Formula: see text] TeV. The calibration of forward jets is derived from dijet [Formula: see text] balance measurements. The resulting uncertainty reaches its largest value of 6 % for low-[Formula: see text] jets at [Formula: see text]. Additional JES uncertainties due to specific event topologies, such as close-by jets or selections of event samples with an enhanced content of jets originating from light quarks or gluons, are also discussed. 
The magnitude of these uncertainties depends on the event sample used in a given physics analysis, but typically amounts to 0.5-3 %.
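The in situ technique described above balances a jet against a well-measured reference object such as a photon or Z boson. A minimal sketch of the idea (hypothetical event values; the actual ATLAS calibration chain involves many more steps, binning in pT and pseudorapidity, and a full uncertainty treatment):

```python
# Sketch of an in situ pT-balance correction: the jet response is the mean
# ratio of reconstructed jet pT to reference-object (e.g. photon) pT, and
# the residual correction applied to data is its inverse.
def jet_response(jet_pts, ref_pts):
    ratios = [j / r for j, r in zip(jet_pts, ref_pts)]
    return sum(ratios) / len(ratios)

# Hypothetical photon-jet events in which jets are reconstructed ~4% low
jet_pts = [96.0, 48.5, 192.0, 95.0]   # GeV
ref_pts = [100.0, 50.0, 200.0, 100.0]

response = jet_response(jet_pts, ref_pts)     # ~0.96
correction = 1.0 / response                   # residual JES correction
corrected = [pt * correction for pt in jet_pts]
```

The same balance measurement, repeated in data and in MC simulation, yields the data/MC residual correction and its systematic uncertainty discussed in the abstract.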

294 citations

Journal ArticleDOI
TL;DR: It is elucidated that hAJ activates the mTOR pathway in cancer cells, which drives the progression from single cells to micrometastases and provides potential therapeutic targets to block progression toward osteolytic metastases.

292 citations

Journal ArticleDOI
TL;DR: The cosmic ray spectrum is measured using the two air-fluorescence detectors of the High Resolution Fly's Eye observatory operating in monocular mode, and the spectrum is fit to a model consisting of galactic and extragalactic sources.
Abstract: We have measured the cosmic ray spectrum above 10^17.2 eV using the two air-fluorescence detectors of the High Resolution Fly's Eye observatory operating in monocular mode. We describe the detector, phototube, and atmospheric calibrations, as well as the analysis techniques for the two detectors. We fit the spectrum to a model consisting of galactic and extragalactic sources.

289 citations

Journal ArticleDOI
TL;DR: Transformation optics provides an alternative approach to controlling the propagation of light by spatially varying the optical properties of a material, and grey-scale lithography is used to adiabatically tailor the topology of a dielectric layer adjacent to a metal surface to demonstrate a plasmonic Luneburg lens that can focus surface plasmon polaritons.
Abstract: Plasmonics takes advantage of the properties of surface plasmon polaritons, which are localized or propagating quasiparticles in which photons are coupled to the quasi-free electrons in metals. In particular, plasmonic devices can confine light in regions with dimensions that are smaller than the wavelength of the photons in free space, and this makes it possible to match the different length scales associated with photonics and electronics in a single nanoscale device. Broad applications of plasmonics that have been demonstrated to date include biological sensing, sub-diffraction-limit imaging, focusing and lithography and nano-optical circuitry. Plasmonics-based optical elements such as waveguides, lenses, beamsplitters and reflectors have been implemented by structuring metal surfaces or placing dielectric structures on metals to manipulate the two-dimensional surface plasmon waves. However, the abrupt discontinuities in the material properties or geometries of these elements lead to increased scattering of surface plasmon polaritons, which significantly reduces the efficiency of these components. Transformation optics provides an alternative approach to controlling the propagation of light by spatially varying the optical properties of a material. Here, motivated by this approach, we use grey-scale lithography to adiabatically tailor the topology of a dielectric layer adjacent to a metal surface to demonstrate a plasmonic Luneburg lens that can focus surface plasmon polaritons. We also make a plasmonic Eaton lens that can bend surface plasmon polaritons. Because the optical properties are changed gradually rather than abruptly in these lenses, losses due to scattering can be significantly reduced in comparison with previously reported plasmonic elements.
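The Luneburg lens focuses by a smooth gradient in refractive index rather than an abrupt interface, which is why it avoids the scattering losses of conventional plasmonic elements. The classical textbook profile, n(r) = sqrt(2 − (r/R)²), falls from √2 at the center to 1 at the rim; the sketch below uses that standard formula, not the paper's fabrication data:

```python
import math

def luneburg_index(r, R):
    """Classical Luneburg lens refractive index at radius r (0 <= r <= R)."""
    return math.sqrt(2 - (r / R) ** 2)

# The index varies smoothly from sqrt(2) at the center to 1 at the rim,
# so an incoming wave meets no abrupt boundary that would scatter it.
profile = [luneburg_index(r, 1.0) for r in (0.0, 0.5, 1.0)]
```

In the plasmonic version reported here, this effective index gradient is realized by grading the thickness of a dielectric layer on the metal surface.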

286 citations


Cited by
Proceedings ArticleDOI
27 Jun 2016
TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won the 1st place on the ILSVRC 2015 classification task.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
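The residual reformulation can be stated in one line: a block outputs F(x) + x rather than F(x), so its layers only need to learn a correction to the identity. A framework-free sketch with toy vector arithmetic (standing in for the paper's actual convolutional blocks):

```python
def residual_block(x, F):
    """Return F(x) + x: the block learns a residual on top of the identity."""
    return [fx + xi for fx, xi in zip(F(x), x)]

# If the learned function is zero, the block is exactly the identity --
# which is why very deep stacks of such blocks remain easy to optimize.
zero_fn = lambda x: [0.0] * len(x)
x = [1.0, -2.0, 3.0]
assert residual_block(x, zero_fn) == x

# A non-trivial residual nudges the input rather than replacing it
scale_fn = lambda x: [0.1 * xi for xi in x]
y = residual_block(x, scale_fn)   # each component nudged by 10%
```

The shortcut adds no parameters and no computational cost beyond the addition itself, which is how depth can grow to 152 layers without the optimization difficulties of plain stacks.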

123,388 citations

Proceedings Article
04 Sep 2014
TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.
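The appeal of stacking very small filters is concrete: two 3x3 layers cover the same 5x5 receptive field as a single 5x5 layer but with fewer weights and an extra non-linearity in between. A quick parameter count (ignoring biases, and assuming C channels in and out throughout, which is an illustrative simplification):

```python
def conv_params(kernel, channels):
    """Weights in one conv layer with `channels` in and out, ignoring biases."""
    return kernel * kernel * channels * channels

C = 64
two_3x3 = 2 * conv_params(3, C)   # stacked 3x3 layers, 5x5 receptive field
one_5x5 = conv_params(5, C)       # single layer, same receptive field

# 18*C*C vs 25*C*C: the stacked design is ~28% cheaper in parameters
savings = 1 - two_3x3 / one_5x5
```

The same arithmetic extends to three 3x3 layers versus one 7x7 (27C² vs 49C²), which is the regularizing effect the abstract attributes to pushing depth to 16–19 weight layers.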

55,235 citations

Journal ArticleDOI
04 Mar 2011-Cell
TL;DR: Recognition of the widespread applicability of these concepts will increasingly affect the development of new means to treat human cancer.

51,099 citations

Posted Content
TL;DR: This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

44,703 citations

Proceedings ArticleDOI
07 Jun 2015
TL;DR: Inception as mentioned in this paper is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.
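The defining trick of the Inception module is to run several filter sizes in parallel over the same input and concatenate the results, letting the network mix scales at every stage. A shape-only sketch in plain Python, with hypothetical branch functions standing in for GoogLeNet's actual 1x1, 3x3 and 5x5 convolutions:

```python
def inception_module(x, branches):
    """Apply each branch to the same input and concatenate the outputs."""
    out = []
    for branch in branches:
        out.extend(branch(x))
    return out

# Hypothetical branches of different output widths (illustrative only):
b1 = lambda x: [sum(x)]                   # width 1
b3 = lambda x: [min(x), max(x), sum(x)]   # width 3
b5 = lambda x: [xi * 2 for xi in x]       # width = len(x)

x = [1.0, 2.0, 3.0]
features = inception_module(x, [b1, b3, b5])
assert len(features) == 1 + 3 + len(x)    # branch widths add along channels
```

Because branch widths simply add along the channel axis, depth and width can be tuned per stage while keeping the overall computational budget fixed, which is the resource-utilization point the abstract emphasizes.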

40,257 citations