scispace - formally typeset
Author

Xiang Zhang

Bio: Xiang Zhang is an academic researcher from Baylor College of Medicine. The author has contributed to research in topics: Medicine & Computer science. The author has an h-index of 154 and has co-authored 1,733 publications receiving 117,576 citations. Previous affiliations of Xiang Zhang include the University of California, Berkeley and the University of Texas MD Anderson Cancer Center.


Papers
Journal ArticleDOI
22 Apr 2005-Science
TL;DR: This work demonstrated sub–diffraction-limited imaging with 60-nanometer half-pitch resolution, or one-sixth of the illumination wavelength, using silver as a natural optical superlens and showed that arbitrary nanostructures can be imaged with good fidelity.
Abstract: Recent theory has predicted a superlens that is capable of producing sub–diffraction-limited images. This superlens would allow the recovery of evanescent waves in an image via the excitation of surface plasmons. Using silver as a natural optical superlens, we demonstrated sub–diffraction-limited imaging with 60-nanometer half-pitch resolution, or one-sixth of the illumination wavelength. By proper design of the working wavelength and the thickness of silver that allows access to a broad spectrum of subwavelength features, we also showed that arbitrary nanostructures can be imaged with good fidelity. The optical superlens promises exciting avenues to nanoscale optical imaging and ultrasmall optoelectronic devices.

3,753 citations

Journal ArticleDOI
26 Apr 2017
TL;DR: In this paper, the authors report the experimental discovery of intrinsic ferromagnetism in Cr2Ge2Te6 atomic layers by scanning magneto-optic Kerr microscopy.
Abstract: We report the experimental discovery of intrinsic ferromagnetism in Cr2Ge2Te6 atomic layers by scanning magneto-optic Kerr microscopy. In this 2D van der Waals ferromagnet, unprecedented control of the transition temperature is realized via small magnetic fields.

3,215 citations

Journal ArticleDOI
02 Jun 2011-Nature
TL;DR: Graphene-based optical modulation mechanism, with combined advantages of compact footprint, low operation voltage and ultrafast modulation speed across a broad range of wavelengths, can enable novel architectures for on-chip optical communications.
Abstract: Graphene, the single-atom-thick form of carbon, holds promise for many applications, notably in electronics where it can complement or be integrated with silicon-based devices. Intense efforts have been devoted to developing a key enabling device: a broadband, fast optical modulator with a small device footprint. Liu et al. demonstrate an exciting new possibility for graphene in the area of on-chip optical communication: a graphene-based optical modulator integrated with a silicon chip. The device relies on electrical tuning of the Fermi level of the graphene sheet, and achieves modulation of guided light at frequencies over 1 gigahertz, together with a broad operating spectrum. At just 25 square micrometres in area, it is one of the smallest of its type.

Integrated optical modulators with high modulation speed, small footprint and large optical bandwidth are poised to be the enabling devices for on-chip optical interconnects [1,2]. Semiconductor modulators have therefore been heavily researched over the past few years. However, the device footprint of silicon-based modulators is of the order of millimetres, owing to silicon's weak electro-optical properties [3]. Germanium and compound semiconductors, on the other hand, face the major challenge of integration with existing silicon electronics and photonics platforms [4,5,6]. Integrating silicon modulators with high-quality-factor optical resonators increases the modulation strength, but these devices suffer from intrinsically narrow bandwidth and require sophisticated optical design; they also have stringent fabrication requirements and limited temperature tolerances [7]. Finding a complementary metal-oxide-semiconductor (CMOS)-compatible material with adequate modulation speed and strength has therefore become a task of not only scientific interest but also industrial importance. Here we experimentally demonstrate a broadband, high-speed, waveguide-integrated electroabsorption modulator based on monolayer graphene. By electrically tuning the Fermi level of the graphene sheet, we demonstrate modulation of the guided light at frequencies over 1 GHz, together with a broad operation spectrum that ranges from 1.35 to 1.6 µm under ambient conditions. The high modulation efficiency of graphene results in an active device area of merely 25 µm², which is among the smallest to date. This graphene-based optical modulation mechanism, with combined advantages of compact footprint, low operation voltage and ultrafast modulation speed across a broad range of wavelengths, can enable novel architectures for on-chip optical communications.

3,105 citations

Proceedings Article
07 Dec 2015
TL;DR: In this paper, the authors explore character-level convolutional networks (ConvNets) for text classification and compare them against traditional models such as bag-of-words, n-grams and their TF-IDF variants, as well as deep models such as word-based ConvNets and recurrent neural networks.
Abstract: This article offers an empirical exploration on the use of character-level convolutional networks (ConvNets) for text classification. We constructed several large-scale datasets to show that character-level convolutional networks could achieve state-of-the-art or competitive results. Comparisons are offered against traditional models such as bag of words, n-grams and their TFIDF variants, and deep learning models such as word-based ConvNets and recurrent neural networks.

3,052 citations
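The pipeline the abstract describes (quantize characters into one-hot vectors, apply temporal convolutions, pool over time) can be sketched in plain NumPy. This is a toy illustration: the alphabet, input length, kernel width and feature count below are arbitrary choices, not the paper's configuration (the paper quantizes over a 70-character alphabet with much larger kernels).

```python
import numpy as np

ALPHABET = "abcdefghijklmnopqrstuvwxyz "  # toy alphabet for illustration

def one_hot(text, length=16):
    # encode each character as a one-hot vector; characters outside the
    # alphabet and padding positions are all-zero
    x = np.zeros((length, len(ALPHABET)))
    for i, ch in enumerate(text[:length]):
        j = ALPHABET.find(ch)
        if j >= 0:
            x[i, j] = 1.0
    return x

def char_conv_features(x, kernels, width=3):
    # temporal (1-D) convolution over the character axis,
    # followed by max-over-time pooling: one feature per kernel
    steps = x.shape[0] - width + 1
    responses = np.array([[(x[t:t + width] * k).sum() for t in range(steps)]
                          for k in kernels])
    return responses.max(axis=1)

rng = np.random.default_rng(0)
kernels = rng.standard_normal((4, 3, len(ALPHABET)))  # 4 random kernels of width 3
features = char_conv_features(one_hot("hello world"), kernels)
print(features.shape)  # (4,)
```

In the full model, several such convolution/pooling stages feed fully connected layers; the point of the character-level encoding is that no tokenizer or word embedding is needed.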

Proceedings Article
23 Feb 2014
TL;DR: In this article, an integrated ConvNet framework for classification, localization and detection is presented, using a multiscale, sliding-window approach; predicted bounding boxes are accumulated rather than suppressed to increase detection confidence. The resulting system, OverFeat, won the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013.
Abstract: We present an integrated framework for using Convolutional Networks for classification, localization and detection. We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. We also introduce a novel deep learning approach to localization by learning to predict object boundaries. Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learned simultaneously using a single shared network. This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) and obtained very competitive results for the detection and classifications tasks. In post-competition work, we establish a new state of the art for the detection task. Finally, we release a feature extractor from our best model called OverFeat.

3,043 citations
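The accumulation idea above (merging overlapping predictions so they reinforce one another, in contrast to non-maximum suppression, which discards them) can be illustrated with a toy greedy merge. The IoU threshold and coordinate-averaging rule here are illustrative simplifications, not the paper's exact merging algorithm.

```python
def box_area(b):
    return (b[2] - b[0]) * (b[3] - b[1])

def iou(a, b):
    # intersection-over-union of two (x1, y1, x2, y2) boxes
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    return inter / (box_area(a) + box_area(b) - inter)

def accumulate(boxes, thresh=0.5):
    # greedy merge: overlapping detections from different scales and window
    # positions are averaged together rather than suppressed, so repeated
    # hits on the same object raise confidence in that detection
    merged = []
    for box in boxes:
        for i, m in enumerate(merged):
            if iou(box, m) > thresh:
                merged[i] = tuple((u + v) / 2 for u, v in zip(box, m))
                break
        else:
            merged.append(box)
    return merged

# three raw detections: two overlapping hits on one object, one on another
hits = [(10, 10, 50, 50), (12, 12, 52, 52), (80, 80, 120, 120)]
print(len(accumulate(hits)))  # 2
```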


Cited by
Proceedings ArticleDOI
27 Jun 2016
TL;DR: In this article, the authors propose a residual learning framework to ease the training of networks that are substantially deeper than those used previously; their ensemble of residual nets won first place in the ILSVRC 2015 classification task.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

123,388 citations
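The reformulation the abstract describes, learning a residual function F(x) and adding it back to the input, reduces to one line of arithmetic. A minimal NumPy sketch of a two-layer residual block follows; it is fully connected rather than convolutional, uses an identity shortcut only, and the dimensions are chosen purely for illustration.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    # two weight layers fit a residual F(x) *with reference to the input*,
    # and the block outputs relu(F(x) + x) instead of an unreferenced mapping
    out = relu(x @ w1)    # first weight layer + nonlinearity
    out = out @ w2        # second weight layer
    return relu(out + x)  # identity shortcut joins before the final ReLU

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 8))

# if the residual branch is zero, the block collapses to the identity
# (followed by ReLU); deep stacks of such blocks therefore stay optimizable
zeros = np.zeros((8, 8))
print(np.array_equal(residual_block(x, zeros, zeros), relu(x)))  # True
```

The shortcut adds no parameters and no extra computation beyond the addition, which is why depth can be pushed to 152 layers while keeping complexity below VGG.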

Proceedings Article
04 Sep 2014
TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.

55,235 citations
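The depth-for-width trade the abstract relies on is easy to verify numerically: a stack of 3x3 convolutions covers the same receptive field as one large filter while using fewer weights and inserting extra nonlinearities. The per-layer channel count of 64 below is an illustrative choice, not taken from the paper's configurations.

```python
def receptive_field(num_layers, kernel=3):
    # stride-1 convolutions: each layer widens the receptive field by (kernel - 1)
    rf = 1
    for _ in range(num_layers):
        rf += kernel - 1
    return rf

def conv_params(num_layers, kernel, channels):
    # weight count only (no biases), same channel count in and out
    return num_layers * kernel ** 2 * channels * channels

# three stacked 3x3 layers see a 7x7 patch, like a single 7x7 layer...
print(receptive_field(3, 3), receptive_field(1, 7))  # 7 7
# ...but with roughly 45% fewer weights (and two extra ReLUs in between)
print(conv_params(3, 3, 64), conv_params(1, 7, 64))  # 110592 200704
```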

Journal ArticleDOI
04 Mar 2011-Cell
TL;DR: Recognition of the widespread applicability of these concepts will increasingly affect the development of new means to treat human cancer.

51,099 citations

Posted Content
TL;DR: This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

44,703 citations

Proceedings ArticleDOI
07 Jun 2015
TL;DR: Inception, the deep convolutional neural network architecture proposed in this paper, achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.

40,257 citations
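The multi-scale processing the abstract mentions comes from Inception modules: parallel branches of different filter sizes whose outputs are concatenated along the channel axis. The NumPy sketch below mimics only that wiring; for brevity each branch is a per-pixel (1x1) projection standing in for the padded 1x1/3x3/5x5/pooling branches of a real module, and the branch channel counts are arbitrary.

```python
import numpy as np

def branch(x, out_channels, rng):
    # stand-in for one padded convolution branch: a per-pixel (1x1)
    # projection, which preserves spatial size like the real branches do
    in_channels = x.shape[-1]
    w = rng.standard_normal((in_channels, out_channels)) * 0.1
    return x @ w

def inception_module(x, rng, channels=(16, 32, 8, 8)):
    # run the parallel branches and concatenate their outputs along the
    # channel axis: spatial dimensions are shared, channel counts add up
    outputs = [branch(x, n, rng) for n in channels]
    return np.concatenate(outputs, axis=-1)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 4, 3))  # toy 4x4 feature map with 3 channels
y = inception_module(x, rng)
print(y.shape)  # (4, 4, 64) -- 16 + 32 + 8 + 8 output channels
```

Stacking such modules is how GoogLeNet grows both depth and width while the 1x1 projections keep the computational budget roughly constant.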