Author

Xiang Zhang

Bio: Xiang Zhang is an academic researcher from Baylor College of Medicine. The author has contributed to research in the topics of Medicine and Computer Science. The author has an h-index of 154 and has co-authored 1,733 publications receiving 117,576 citations. Previous affiliations of Xiang Zhang include the University of California, Berkeley and the University of Texas MD Anderson Cancer Center.


Papers
Proceedings ArticleDOI
Shuang-Nan Zhang, Marco Feroci, Andrea Santangelo, Yongwei Dong +181 more (41 institutions)
TL;DR: eXTP, as discussed by the authors, is a science mission designed to study the state of matter under extreme conditions of density, gravity and magnetism; it carries a unique and unprecedented suite of state-of-the-art scientific instruments enabling, for the first time, simultaneous spectral-timing-polarimetry studies of cosmic sources in the 0.5–30 keV energy range.
Abstract: eXTP is a science mission designed to study the state of matter under extreme conditions of density, gravity and magnetism. Primary goals are the determination of the equation of state of matter at supra-nuclear density, the measurement of QED effects in highly magnetized stars, and the study of accretion in the strong-field regime of gravity. Primary targets include isolated and binary neutron stars, strong magnetic field systems like magnetars, and stellar-mass and supermassive black holes. The mission carries a unique and unprecedented suite of state-of-the-art scientific instruments enabling, for the first time, simultaneous spectral-timing-polarimetry studies of cosmic sources in the 0.5–30 keV energy range (and beyond). Key elements of the payload are: the Spectroscopic Focusing Array (SFA), a set of 11 X-ray optics with a total effective area of ~0.9 m² at 2 keV and ~0.6 m² at 6 keV, equipped with Silicon Drift Detectors offering <180 eV spectral resolution; the Large Area Detector (LAD), a deployable set of 640 Silicon Drift Detectors with a total effective area of ~3.4 m² between 6 and 10 keV and spectral resolution better than 250 eV; the Polarimetry Focusing Array (PFA), a set of 2 X-ray telescopes with a total effective area of 250 cm² at 2 keV, equipped with imaging gas pixel photoelectric polarimeters; and the Wide Field Monitor (WFM), a set of 3 coded-mask wide-field units, equipped with position-sensitive Silicon Drift Detectors, each covering a 90° × 90° field of view. The eXTP international consortium includes major institutions of the Chinese Academy of Sciences and universities in China, as well as major institutions in several European countries and the United States. The predecessor of eXTP, the XTP mission concept, has been selected and funded as one of the so-called background missions in the Strategic Priority Space Science Program of the Chinese Academy of Sciences since 2011. The strong European participation has significantly enhanced the scientific capabilities of eXTP. The planned launch date of the mission is earlier than 2025.

184 citations

Journal ArticleDOI
Zhengguo Cao, Felix Aharonian, Q. An +261 more (23 institutions)
17 May 2021 - Nature
TL;DR: In this article, the authors reported the detection of more than 530 photons at energies above 100 teraelectronvolts and up to 1.4 PeV from 12 sources in the Galaxy.
Abstract: The extension of the cosmic-ray spectrum beyond 1 petaelectronvolt (PeV; 10^15 electronvolts) indicates the existence of the so-called PeVatrons—cosmic-ray factories that accelerate particles to PeV energies. We need to locate and identify such objects to find the origin of Galactic cosmic rays [1]. The principal signature of both electron and proton PeVatrons is ultrahigh-energy (exceeding 100 TeV) γ radiation. Evidence of the presence of a proton PeVatron has been found in the Galactic Centre, according to the detection of a hard-spectrum radiation extending to 0.04 PeV (ref. 2). Although γ-rays with energies slightly higher than 0.1 PeV have been reported from a few objects in the Galactic plane [3–6], unbiased identification and in-depth exploration of PeVatrons requires detection of γ-rays with energies well above 0.1 PeV. Here we report the detection of more than 530 photons at energies above 100 teraelectronvolts and up to 1.4 PeV from 12 ultrahigh-energy γ-ray sources with a statistical significance greater than seven standard deviations. Despite having several potential counterparts in their proximity, including pulsar wind nebulae, supernova remnants and star-forming regions, the PeVatrons responsible for the ultrahigh-energy γ-rays have not yet been firmly localized and identified (except for the Crab Nebula), leaving open the origin of these extreme accelerators. Observations of γ-rays with energies up to 1.4 PeV find that 12 sources in the Galaxy are PeVatrons, one of which is the Crab Nebula.

184 citations

Journal ArticleDOI
Jaroslav Adam, Dagmar Adamová, Madan M. Aggarwal, G. Aglieri Rinella +1008 more (100 institutions)
TL;DR: In this article, charged-particle production in p-Pb collisions at √s_NN = 5.02 TeV was measured and its correlation with experimental observables sensitive to the centrality of the collision was investigated.
Abstract: We report measurements of the primary charged-particle pseudorapidity density and transverse momentum distributions in p-Pb collisions at √s_NN = 5.02 TeV and investigate their correlation with experimental observables sensitive to the centrality of the collision. Centrality classes are defined by using different event-activity estimators, i.e., charged-particle multiplicities measured in three different pseudorapidity regions as well as the energy measured at beam rapidity (zero degree). The procedures to determine the centrality, quantified by the number of participants (N_part) or the number of nucleon-nucleon binary collisions (N_coll), are described. We show that, in contrast to Pb-Pb collisions, in p-Pb collisions large multiplicity fluctuations together with the small range of participants available generate a dynamical bias in centrality classes based on particle multiplicity. We propose to use the zero-degree energy, which we expect not to introduce a dynamical bias, as an alternative event-centrality estimator. Based on zero-degree-energy centrality classes, the N_part dependence of particle production is studied. Under the assumption that the multiplicity measured in the Pb-going rapidity region scales with the number of Pb participants, an approximate independence of the multiplicity per participating nucleon measured at mid-rapidity of the number of participating nucleons is observed. Furthermore, at high p_T the p-Pb spectra are found to be consistent with the pp spectra scaled by N_coll for all centrality classes. Our results represent valuable input for the study of the event-activity dependence of hard probes in p-Pb collisions and, hence, help to establish baselines for the interpretation of the Pb-Pb data.
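The N_coll scaling statement above can be made concrete as a nuclear modification factor: if the high-p_T p-Pb spectrum equals the pp spectrum scaled by the average number of binary collisions, then the ratio Q_pPb = (dN/dp_T)_pPb / (⟨N_coll⟩ · (dN/dp_T)_pp) is close to one. The short Python sketch below illustrates that check; the yields and the ⟨N_coll⟩ value are toy placeholders, not ALICE measurements.

```python
# Hedged sketch of the N_coll-scaling check described in the abstract:
#   Q_pPb(pT) = (dN/dpT)_pPb / (<N_coll> * (dN/dpT)_pp)
# Consistency with N_coll scaling corresponds to Q_pPb ~ 1 at high pT.
# All numbers below are illustrative placeholders, not measured spectra.
import numpy as np

pt         = np.array([2.0, 5.0, 10.0, 20.0])          # GeV/c bin centres (toy)
dn_dpt_pPb = np.array([4.1, 0.52, 0.031, 0.0016])      # toy p-Pb yield per bin
dn_dpt_pp  = np.array([0.60, 0.076, 0.0045, 0.00023])  # toy pp yield per bin
n_coll_avg = 7.0                                       # placeholder <N_coll>

q_pPb = dn_dpt_pPb / (n_coll_avg * dn_dpt_pp)
for x, q in zip(pt, q_pPb):
    print(f"pT = {x:5.1f} GeV/c   Q_pPb = {q:.2f}")    # values near 1.0
```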

184 citations

Journal ArticleDOI
Jaroslav Adam, Dagmar Adamová, Madan M. Aggarwal, G. Aglieri Rinella +986 more (95 institutions)
TL;DR: The pseudorapidity density of charged particles, dNch/dη, at midrapidity in Pb-Pb collisions has been measured at a center-of-mass energy per nucleon pair of √s_NN = 5.02 TeV, as discussed by the authors.
Abstract: The pseudorapidity density of charged particles, dNch/dη, at midrapidity in Pb-Pb collisions has been measured at a center-of-mass energy per nucleon pair of √s_NN = 5.02 TeV. For the 5% most central collisions, we measure a value of 1943 ± 54. The rise in dNch/dη as a function of √s_NN is steeper than that observed in proton-proton collisions and follows the trend established by measurements at lower energy. The increase of dNch/dη as a function of the average number of participant nucleons, ⟨Npart⟩, calculated in a Glauber model, is compared with the previous measurement at √s_NN = 2.76 TeV. A constant factor of about 1.2 describes the increase in dNch/dη from √s_NN = 2.76 TeV to 5.02 TeV for all centrality classes, within the measured range of 0%–80% centrality. The results are also compared to models based on different mechanisms for particle production in nuclear collisions.
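As a quick sanity check on the quoted numbers, the 0–5% value of 1943 ± 54 at 5.02 TeV combined with the stated constant factor of about 1.2 implies a value of roughly 1620 at 2.76 TeV. The snippet below only carries out that arithmetic as an illustration of the scaling statement; it is not a reproduction of the published 2.76 TeV measurement.

```python
# Arithmetic on the numbers quoted in the abstract: dNch/deta = 1943 +- 54 for
# the 5% most central collisions at sqrt(s_NN) = 5.02 TeV, and a roughly
# constant increase factor of 1.2 relative to 2.76 TeV. The implied 2.76 TeV
# value is shown for illustration only.
dn_deta_502, err_502 = 1943.0, 54.0
factor = 1.2                                  # quoted constant factor
dn_deta_276 = dn_deta_502 / factor
err_276 = err_502 / factor
print(f"implied dNch/deta at 2.76 TeV (0-5%): {dn_deta_276:.0f} +- {err_276:.0f}")
```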

184 citations

Journal ArticleDOI
TL;DR: This article proposes an efficient algorithm, TEAM, which significantly speeds up epistasis detection for human GWAS; it has broader applicability and is more efficient than existing methods for large-sample studies.
Abstract: As a promising tool for identifying genetic markers underlying phenotypic differences, genome-wide association study (GWAS) has been extensively investigated in recent years. In GWAS, detecting epistasis (or gene–gene interaction) is preferable over single-locus study since many diseases are known to be complex traits. A brute-force search is infeasible for epistasis detection at the genome-wide scale because of the intensive computational burden. Existing epistasis detection algorithms are designed for datasets consisting of homozygous markers and small sample sizes. In human studies, however, the genotype may be heterozygous, and the number of individuals can be in the thousands. Thus, existing methods are not readily applicable to human datasets. In this article, we propose an efficient algorithm, TEAM, which significantly speeds up epistasis detection for human GWAS. Our algorithm is exhaustive, i.e. it does not ignore any epistatic interaction. Utilizing the minimum spanning tree structure, the algorithm incrementally updates the contingency tables for epistatic tests without scanning all individuals. Our algorithm has broader applicability and is more efficient than existing methods for large-sample studies. It supports any statistical test that is based on contingency tables, and enables both family-wise error rate and false discovery rate control. Extensive experiments show that our algorithm only needs to examine a small portion of the individuals to update the contingency tables, and it achieves at least an order-of-magnitude speed-up over the brute-force approach.
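To make the contingency-table test concrete, the sketch below implements the brute-force pairwise scan that TEAM is designed to accelerate: for every SNP pair, a 9 × 2 table of joint genotype versus case/control status is built and tested with a chi-square statistic. It is only a baseline illustration under assumed data conventions (0/1/2 genotype coding, binary phenotype); it does not implement TEAM's minimum-spanning-tree incremental updates or its permutation-based error-rate control.

```python
# Brute-force pairwise epistasis scan on a genotype/phenotype matrix.
# This is the contingency-table test that TEAM accelerates, NOT the TEAM
# algorithm itself (no minimum-spanning-tree incremental updates, no
# permutation-based FWER/FDR control). Names and toy data are assumptions.
import itertools
import numpy as np
from scipy.stats import chi2_contingency

def pairwise_epistasis_scan(genotypes, phenotype):
    """genotypes: (n_individuals, n_snps) array with values 0/1/2 (heterozygous
    genotypes allowed); phenotype: (n_individuals,) binary case/control labels.
    Returns a list of (snp_i, snp_j, p_value)."""
    n_ind, n_snps = genotypes.shape
    results = []
    for i, j in itertools.combinations(range(n_snps), 2):
        # 9 x 2 contingency table: joint genotype (3 * 3) vs. case/control.
        table = np.zeros((9, 2), dtype=int)
        joint = 3 * genotypes[:, i] + genotypes[:, j]
        for cell, pheno in zip(joint, phenotype):
            table[cell, pheno] += 1
        table = table[table.sum(axis=1) > 0]   # drop empty genotype cells
        if table.shape[0] < 2:
            continue                           # test undefined for this pair
        chi2, p, dof, _ = chi2_contingency(table)
        results.append((i, j, p))
    return results

# Toy usage with random data (for illustration only).
rng = np.random.default_rng(0)
geno = rng.integers(0, 3, size=(200, 10))
pheno = rng.integers(0, 2, size=200)
hits = pairwise_epistasis_scan(geno, pheno)
print(min(hits, key=lambda r: r[2]))           # most significant SNP pair
```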

182 citations


Cited by
Proceedings ArticleDOI
27 Jun 2016
TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won the 1st place on the ILSVRC 2015 classification task.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers, 8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
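The residual reformulation described in the abstract can be sketched in a few lines: a block learns a residual function F(x) and adds the input back through an identity shortcut, so the block outputs F(x) + x. The PyTorch snippet below is a minimal sketch in that spirit, not the authors' exact architecture; the layer sizes are illustrative assumptions.

```python
# Minimal sketch of the residual idea: instead of asking a stack of layers to
# learn a desired mapping H(x) directly, the block learns F(x) = H(x) - x and
# adds the input back via an identity shortcut. Illustrative, not the paper's
# exact configuration.
import torch
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = self.bn2(self.conv2(self.relu(self.bn1(self.conv1(x)))))
        return self.relu(residual + x)   # identity shortcut: y = F(x) + x

# Toy usage: the block preserves the tensor shape, so many blocks can be stacked.
x = torch.randn(1, 64, 56, 56)
y = BasicResidualBlock(64)(x)
print(y.shape)   # torch.Size([1, 64, 56, 56])
```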

123,388 citations

Proceedings Article
04 Sep 2014
TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.
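The core design choice in the abstract, replacing larger filters with stacks of very small 3×3 convolutions, can be illustrated directly: two stacked 3×3 layers cover the same 5×5 receptive field as a single 5×5 layer while using fewer parameters and adding an extra non-linearity. The PyTorch sketch below compares the two; the channel count is an assumption for illustration, not a configuration from the paper.

```python
# Sketch of the "very small (3x3) filters" idea: two stacked 3x3 convolutions
# match the 5x5 receptive field of a single 5x5 convolution with fewer
# parameters and one more non-linearity. The channel count is an assumption.
import torch
import torch.nn as nn

C = 256  # assumed number of input/output channels

stacked_3x3 = nn.Sequential(
    nn.Conv2d(C, C, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(C, C, kernel_size=3, padding=1), nn.ReLU(inplace=True),
)
single_5x5 = nn.Sequential(
    nn.Conv2d(C, C, kernel_size=5, padding=2), nn.ReLU(inplace=True),
)

def n_params(m: nn.Module) -> int:
    return sum(p.numel() for p in m.parameters())

print(n_params(stacked_3x3))  # 2 * (3*3*C*C + C), roughly 18 C^2
print(n_params(single_5x5))   # 5*5*C*C + C,       roughly 25 C^2

# Both preserve the spatial size, so the deeper stack is a drop-in replacement.
x = torch.randn(1, C, 32, 32)
assert stacked_3x3(x).shape == single_5x5(x).shape
```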

55,235 citations

Journal ArticleDOI
04 Mar 2011 - Cell
TL;DR: Recognition of the widespread applicability of these concepts will increasingly affect the development of new means to treat human cancer.

51,099 citations

Posted Content
TL;DR: This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers, 8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

44,703 citations

Proceedings ArticleDOI
07 Jun 2015
TL;DR: Inception, as mentioned in this paper, is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.
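The "increase depth and width while keeping the computational budget constant" idea can be sketched as an Inception-style module: parallel 1×1, 3×3 and 5×5 convolution branches plus a pooling branch, with cheap 1×1 reductions ahead of the larger filters, concatenated along the channel axis. The PyTorch snippet below is a hedged sketch of that pattern; the branch widths are illustrative assumptions rather than GoogLeNet's published configuration.

```python
# Minimal sketch of an Inception-style module: multi-scale parallel branches
# (1x1, 3x3, 5x5, pooling) with 1x1 "reduction" convolutions that keep the
# computational budget in check. Branch widths are illustrative assumptions.
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    def __init__(self, in_ch, b1, b3_red, b3, b5_red, b5, pool_proj):
        super().__init__()
        self.branch1 = nn.Conv2d(in_ch, b1, kernel_size=1)
        self.branch3 = nn.Sequential(
            nn.Conv2d(in_ch, b3_red, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(b3_red, b3, kernel_size=3, padding=1),
        )
        self.branch5 = nn.Sequential(
            nn.Conv2d(in_ch, b5_red, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(b5_red, b5, kernel_size=5, padding=2),
        )
        self.branch_pool = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(in_ch, pool_proj, kernel_size=1),
        )

    def forward(self, x):
        # Concatenate the multi-scale branches along the channel dimension.
        return torch.cat(
            [self.branch1(x), self.branch3(x), self.branch5(x), self.branch_pool(x)],
            dim=1,
        )

# Toy usage: output width is the sum of the branch widths (64+128+32+32 = 256).
x = torch.randn(1, 192, 28, 28)
y = InceptionModule(192, 64, 96, 128, 16, 32, 32)(x)
print(y.shape)   # torch.Size([1, 256, 28, 28])
```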

40,257 citations