scispace - formally typeset
Author

Xiang Zhang

Bio: Xiang Zhang is an academic researcher from Baylor College of Medicine. The author has contributed to research in topics: Medicine & Computer science. The author has an h-index of 154, has co-authored 1,733 publications, and has received 117,576 citations. Previous affiliations of Xiang Zhang include University of California, Berkeley and University of Texas MD Anderson Cancer Center.


Papers
Journal ArticleDOI
20 Oct 2017 - Science
TL;DR: This review covers how the speed of light can be controlled using designed materials and fabricated structures, and shows how combining slow light with nanotechnology gives rise to a number of effects of interest in signal processing and optoelectronic communication.
Abstract: There has recently been a surge of interest in the physics and applications of broadband ultraslow waves in nanoscale structures operating below the diffraction limit. They range from light waves or surface plasmons in nanoplasmonic devices to sound waves in acoustic-metamaterial waveguides, as well as fermions and phonon polaritons in graphene and van der Waals crystals and heterostructures. We review the underlying physics of these structures, which upend traditional wave-slowing approaches based on resonances or on periodic configurations above the diffraction limit. Light can now be tightly focused on the nanoscale at intensities up to ~1000 times larger than the output of incumbent near-field scanning optical microscopes, while exhibiting greatly boosted density of states and strong wave-matter interactions. We elucidate the general methodology by which broadband and, simultaneously, large wave decelerations, well below the diffraction limit, can be obtained in the above interdisciplinary fields. We also highlight a range of applications for renewable energy, biosensing, quantum optics, high-density magnetic data storage, and nanoscale chemical mapping.

121 citations

Journal ArticleDOI
TL;DR: This study investigates a precursor of superlensing, the regeneration of evanescent waves by excitation of a surface plasmon, and identifies a means to access deep subwavelength features with a metamaterial superlens.
Abstract: We investigated a precursor of superlensing: regenerating evanescent waves by excitation of a surface plasmon. Because the permittivity of a silver slab approaches -1, we experimentally observed a broadening of surface-plasmon bandwidth. Our study identifies a means to access deep subwavelength features by use of a metamaterial superlens.

121 citations

Journal ArticleDOI
Wei Fan, Xiang Zhang, Zhang Yi, Zhang Youfang, Tianxi Liu
TL;DR: In this paper, SiO2-nanoparticle-crosslinked polyimide aerogels synthesized by one-pot freeze-drying are presented; they show excellent mechanical properties and super-insulating behavior over a wide temperature range.

120 citations

Journal ArticleDOI
TL;DR: It is demonstrated that Mic60/Mitofilin homeostasis regulated by Yme1L is central to the MICOS assembly, which is required for maintenance of mitochondrial morphology and organization of mtDNA nucleoids.
Abstract: The MICOS complex (mitochondrial contact site and cristae organizing system) is essential for mitochondrial inner membrane organization and mitochondrial membrane contacts, however, the molecular regulation of MICOS assembly and the physiological functions of MICOS in mammals remain obscure. Here, we report that Mic60/Mitofilin has a critical role in the MICOS assembly, which determines the mitochondrial morphology and mitochondrial DNA (mtDNA) organization. The downregulation of Mic60/Mitofilin or Mic19/CHCHD3 results in instability of other MICOS components, disassembly of MICOS complex and disorganized mitochondrial cristae. We show that there exists direct interaction between Mic60/Mitofilin and Mic19/CHCHD3, which is crucial for their stabilization in mammals. Importantly, we identified that the mitochondrial i-AAA protease Yme1L regulates Mic60/Mitofilin homeostasis. Impaired MICOS assembly causes the formation of 'giant mitochondria' because of dysregulated mitochondrial fusion and fission. Also, mtDNA nucleoids are disorganized and clustered in these giant mitochondria in which mtDNA transcription is attenuated because of remarkable downregulation of some key mtDNA nucleoid-associated proteins. Together, these findings demonstrate that Mic60/Mitofilin homeostasis regulated by Yme1L is central to the MICOS assembly, which is required for maintenance of mitochondrial morphology and organization of mtDNA nucleoids.

119 citations

Journal ArticleDOI
Jaroslav Adam, Dagmar Adamová, Madan M. Aggarwal, G. Aglieri Rinella +1,006 more (97 institutions)
TL;DR: In this paper, the production yields for prompt charmed mesons D0, D+ and D∗+, and their antiparticles, were measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass energy per nucleon pair, √sNN, of 2.76 TeV.
Abstract: The production of prompt charmed mesons D0, D+ and D∗+, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at the centre-of-mass energy per nucleon pair, √sNN, of 2.76 TeV. The production yields for rapidity |y| < 0.5 are presented as a function of transverse momentum, pT, in the interval 1–36 GeV/c for the centrality class 0–10% and in the interval 1–16 GeV/c for the centrality class 30–50%. The nuclear modification factor RAA was computed using a proton-proton reference at (Formula presented.) TeV, based on measurements at (Formula presented.) TeV and on theoretical calculations. A maximum suppression by a factor of 5-6 with respect to binary-scaled pp yields is observed for the most central collisions at pT of about 10 GeV/c. A suppression by a factor of about 2-3 persists at the highest pT covered by the measurements. At low pT (1-3 GeV/c), the RAA has large uncertainties that span the range 0.35 (factor of about 3 suppression) to 1 (no suppression). In all pT intervals, the RAA is larger in the 30-50% centrality class compared to central collisions. The D-meson RAA is also compared with that of charged pions and, at large pT, charged hadrons, and with model calculations.
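The nuclear modification factor RAA used throughout this abstract compares the Pb-Pb yield to a binary-collision-scaled pp reference. A minimal sketch of that arithmetic, with purely illustrative numbers (the function name and all inputs below are assumptions, not the ALICE analysis code or data):

```python
def r_aa(yield_pbpb, n_coll, yield_pp):
    """Nuclear modification factor: the Pb-Pb yield divided by the
    pp yield scaled by the average number of binary nucleon-nucleon
    collisions. R_AA = 1 means no suppression; R_AA well below 1
    means the yield falls short of the binary-scaled expectation."""
    return yield_pbpb / (n_coll * yield_pp)

# Illustrative numbers only: an observed yield of 0.5 against a
# binary-scaled expectation of 1500 * 0.002 = 3.0 gives
# R_AA = 1/6, i.e. a suppression by a factor of 6.
print(r_aa(0.5, 1500, 0.002))
```

A suppression quoted as "a factor of 5-6" in the abstract thus corresponds to RAA values of roughly 0.17-0.2.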

118 citations


Cited by
Proceedings ArticleDOI
27 Jun 2016
TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won the 1st place on the ILSVRC 2015 classification task.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
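The residual reformulation described above can be sketched in a few lines of plain NumPy: the weight layers learn a residual F(x), and the block outputs F(x) + x via an identity shortcut. This is a simplified two-layer sketch under assumed names, not the paper's implementation:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    """Two weight layers learn the residual F(x); the block returns
    relu(F(x) + x), with x carried around them by an identity shortcut."""
    out = relu(x @ w1)    # first weight layer + nonlinearity
    out = out @ w2        # second weight layer
    return relu(out + x)  # add the identity shortcut, then activate

# With zero weights, F(x) = 0 and the block reduces to the identity
# on non-negative inputs, so extra depth cannot hurt the fit: the
# optimizer only has to learn the residual relative to the input.
x = np.array([1.0, 2.0, 3.0])
print(residual_block(x, np.zeros((3, 3)), np.zeros((3, 3))))  # → [1. 2. 3.]
```

This is why such networks "gain accuracy from considerably increased depth": stacked blocks default to identity mappings rather than having to relearn each transformation from scratch.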

123,388 citations

Proceedings Article
04 Sep 2014
TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.
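The case for very small filters rests on a receptive-field argument: a stack of stride-1 3x3 convolutions covers the same input region as one larger filter, with fewer parameters and more nonlinearities in between. A quick sketch of that arithmetic (the helper name is illustrative):

```python
def stacked_receptive_field(num_layers, kernel=3):
    """Receptive field of num_layers stacked stride-1 convolutions:
    the first layer sees `kernel` pixels across, and each further
    layer extends the view by (kernel - 1)."""
    return 1 + num_layers * (kernel - 1)

# Two stacked 3x3 layers see a 5x5 region; three see 7x7, matching
# a single 7x7 filter but with three nonlinearities instead of one.
print(stacked_receptive_field(2))  # → 5
print(stacked_receptive_field(3))  # → 7
```

Pushing depth to 16-19 weight layers while keeping every filter 3x3 is what lets these configurations improve on prior art without an explosion in parameter count.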

55,235 citations

Journal ArticleDOI
04 Mar 2011 - Cell
TL;DR: Recognition of the widespread applicability of these concepts will increasingly affect the development of new means to treat human cancer.

51,099 citations

Posted Content
TL;DR: This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

44,703 citations

Proceedings ArticleDOI
07 Jun 2015
TL;DR: In this paper, the authors propose Inception, a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.

40,257 citations