Author

Xiang Zhang

Bio: Xiang Zhang is an academic researcher from Baylor College of Medicine. The author has contributed to research in topics: Medicine & Computer science. The author has an h-index of 154, co-authored 1733 publications receiving 117576 citations. Previous affiliations of Xiang Zhang include University of California, Berkeley & University of Texas MD Anderson Cancer Center.


Papers
Journal ArticleDOI
TL;DR: In this article, the HiRes data with threshold energies of 40 and 57 EeV were shown to be incompatible with the matter tracer model at a 95% confidence level unless the typical deflection angle θ_s exceeds 10°, and to be compatible with an isotropic flux.
Abstract: Stereo data collected by the HiRes experiment over a six-year period are examined for large-scale anisotropy related to the inhomogeneous distribution of matter in the nearby universe. We consider the generic case of small cosmic-ray deflections and a large number of sources tracing the matter distribution. In this matter tracer model the expected cosmic-ray flux depends essentially on a single free parameter, the typical deflection angle θ_s. We find that the HiRes data with threshold energies of 40 EeV and 57 EeV are incompatible with the matter tracer model at a 95% confidence level unless θ_s > 10°, and are compatible with an isotropic flux. The data set above 10 EeV is compatible with both the matter tracer model and an isotropic flux.

54 citations

Journal ArticleDOI
TL;DR: In this paper, a planar metallic mask supporting localized surface plasmon modes was used for near-field optical lithography, experimentally demonstrating half-pitch resolution down to 60 nm with I-line (365 nm) illumination.
Abstract: The development of a near-field optical lithography technique is presented in this paper. By accessing short modal wavelengths of localized surface plasmon modes on a planar metallic mask, the resolution can be significantly increased while using a conventional UV light source. Taking into account the real material properties, numerical studies indicate that an ultimate lithographic resolution of 20 nm is achievable through a silver mask using 365 nm wavelength light. The surface quality of the silver mask is improved by adding an adhesion layer of titanium during the mask fabrication. Using a two-dimensional hole-array silver mask, we experimentally demonstrated nanolithography with half-pitch resolution down to 60 nm, far beyond the resolution limit of conventional lithography using the I-line (365 nm) wavelength.

54 citations

Proceedings ArticleDOI
10 Aug 2015
TL;DR: NoNClus, a novel method based on non-negative matrix factorization (NMF), is developed to cluster a network of networks (NoN), a data model that allows multiple underlying clustering structures across different networks.
Abstract: Integrating multiple graphs (or networks) has been shown to be a promising approach to improve the graph clustering accuracy. Various multi-view and multi-domain graph clustering methods have recently been developed to integrate multiple networks. In these methods, a network is treated as a view or domain. The key assumption is that there is a common clustering structure shared across all views (domains), and different views (domains) provide compatible and complementary information on this underlying clustering structure. However, in many emerging real-life applications, different networks have different data distributions, where the assumption that all networks share a single common clustering structure does not hold. In this paper, we propose a flexible and robust framework that allows multiple underlying clustering structures across different networks. Our method models the domain similarity as a network, which can be utilized to regularize the clustering structures in different networks. We refer to such a data model as a network of networks (NoN). We develop NoNClus, a novel method based on non-negative matrix factorization (NMF), to cluster an NoN. We provide rigorous theoretical analysis of NoNClus in terms of its correctness, convergence and complexity. Extensive experimental results on synthetic and real-life datasets show the effectiveness of our method.

54 citations
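The NMF-based clustering idea underlying NoNClus can be illustrated on a single network. The sketch below is plain NMF on one adjacency matrix with multiplicative updates, not NoNClus's NoN-regularized objective; the function name, toy graph, and iteration count are all illustrative assumptions:

```python
import numpy as np

def nmf_cluster(A, k, iters=300, seed=0):
    """Cluster nodes of a non-negative adjacency matrix A into k groups
    via NMF: A ~ W @ H, using standard multiplicative updates."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, n)) + 1e-3
    eps = 1e-9
    for _ in range(iters):
        H *= (W.T @ A) / (W.T @ W @ H + eps)
        W *= (A @ H.T) / (W @ H @ H.T + eps)
    # Assign each node to the latent factor with the largest weight.
    return W.argmax(axis=1)

# Toy graph: two 4-node cliques joined by a single weak edge.
A = np.zeros((8, 8))
A[:4, :4] = 1.0
A[4:, 4:] = 1.0
A[3, 4] = A[4, 3] = 0.1
labels = nmf_cluster(A, k=2)
```

NoNClus extends this by factorizing each domain network while a main network of domain similarities regularizes the per-domain factors toward each other; the single-network sketch only shows the shared NMF core.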

Journal ArticleDOI
TL;DR: In this paper, the authors derived closed-form scaling formulas for strain in embedded lattice-mismatched spherical quantum dots and extended them to cubic anisotropy and arbitrary shape.
Abstract: Both quantitative and qualitative knowledge of strain and strain distributions in quantum dots are essential for the determination and tailoring of their optoelectronic properties. Typically strain is estimated using classical elasticity and then coupled to a suitable band structure calculation approach. However, classical elasticity is intrinsically size independent. This is in contradiction to the physical fact that at the size scale of a few nanometers, the elastic relaxation is size dependent and a departure from classical mechanics is expected. First, in the isotropic case, based on the physical mechanisms of nonlocal interactions, we herein derive (closed-form) scaling formulas for strain in embedded lattice-mismatched spherical quantum dots. In addition to a size dependency, we find marked differences in both spatial distribution of strain as well as in quantitative estimates especially in cases of extremely small quantum dots. Fully recognizing that typical quantum dots are neither of idealized spherical shape nor isotropic, we finally extend our results to cubic anisotropy and arbitrary shape. In particular, an exceptionally simple expression is derived for the dilation in an arbitrary shaped quantum dot. For the more general case (incorporating anisotropy), closed-form results are derived in the Fourier space while numerical results are provided to illustrate the various physical insights. Apart from qualitative and quantitative differences in strain states due to nonlocal effects, an aesthetic by-product for the technologically important polyhedral shaped quantum dots is that strain singularities at corners and vertices (which plague the classical elasticity formulation) are absent. Choosing GaAs as an example material, our results indicate that errors as large as hundreds of meV may be incurred upon neglect of nonlocal effects in sub-10-nm quantum dots.

54 citations

Journal ArticleDOI
TL;DR: A strong optical response in a class of monolayer molecular J-aggregates, arising from coherent Coulomb interactions between localised Frenkel excitons, is reported; the resulting devices are promising for next-generation ultrafast on-chip optical communications.
Abstract: Excitons in two-dimensional (2D) materials are tightly bound and exhibit rich physics. So far, the optical excitations in 2D semiconductors are dominated by Wannier-Mott excitons, but molecular systems can host Frenkel excitons (FE) with unique properties. Here, we report a strong optical response in a class of monolayer molecular J-aggregates. The exciton exhibits giant oscillator strength and absorption (over 30% for monolayer) at resonance, as well as photoluminescence quantum yield in the range of 60-100%. We observe evidence of superradiance (including increased oscillator strength, bathochromic shift, reduced linewidth and lifetime) at room-temperature and more progressively towards low temperature. These unique properties only exist in monolayer owing to the large unscreened dipole interactions and suppression of charge-transfer processes. Finally, we demonstrate light-emitting devices with the monolayer J-aggregate. The intrinsic device speed could be beyond 30 GHz, which is promising for next-generation ultrafast on-chip optical communications.

53 citations


Cited by
Proceedings ArticleDOI
27 Jun 2016
TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won the 1st place on the ILSVRC 2015 classification task.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers, 8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

123,388 citations
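The reformulation in the abstract, learning a residual F(x) and adding it back to the input, can be sketched with a toy block. This is a minimal illustration with a two-layer MLP standing in for F (the paper's F uses convolutions and batch normalization); the key property shown is that with zero weights the block is exactly the identity, which is what makes very deep stacks easy to optimize:

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_block(x, W1, W2):
    """y = x + F(x): the block learns a residual F, not the full mapping.
    F here is a tiny two-layer MLP with a ReLU; the shortcut is the identity."""
    h = np.maximum(0.0, x @ W1)   # first layer + ReLU
    return x + h @ W2             # add the identity shortcut back

d = 4
x = rng.standard_normal(d)
# Zero weights => F(x) = 0, so the block reduces to the identity mapping.
W1 = np.zeros((d, d))
W2 = np.zeros((d, d))
y = residual_block(x, W1, W2)
```

Driving F toward zero is much easier for a solver than fitting an identity mapping through a stack of nonlinear layers, which is the paper's core argument for why residual nets gain accuracy from depth.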

Proceedings Article
04 Sep 2014
TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.

55,235 citations
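The case for very small (3×3) filters rests on simple arithmetic, which the abstract alludes to: stacked 3×3 convolutions cover the same receptive field as one larger filter with fewer parameters and more nonlinearities. A quick check (channel count is an arbitrary example):

```python
def receptive_field(num_layers, k=3):
    """Effective receptive field of num_layers stacked k x k convs, stride 1."""
    return num_layers * (k - 1) + 1

def conv_params(k, channels):
    """Weight count of one k x k conv with `channels` in and out (no bias)."""
    return k * k * channels * channels

C = 64
# Two stacked 3x3 convs see a 5x5 region with fewer weights than one 5x5 conv.
assert receptive_field(2, 3) == 5
assert 2 * conv_params(3, C) < conv_params(5, C)   # 18*C^2 < 25*C^2
# Three stacked 3x3 convs match a 7x7 receptive field.
assert receptive_field(3, 3) == 7
```

This is why depth can be pushed to 16-19 weight layers without the parameter count exploding.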

Journal ArticleDOI
04 Mar 2011-Cell
TL;DR: Recognition of the widespread applicability of these concepts will increasingly affect the development of new means to treat human cancer.

51,099 citations

Posted Content
TL;DR: This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers, 8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

44,703 citations

Proceedings ArticleDOI
07 Jun 2015
TL;DR: In this paper, the authors propose Inception, a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.

40,257 citations
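One widely cited device behind the "constant computational budget" claim is the use of 1×1 convolutions to reduce channel depth before the expensive larger filters in each Inception module. A hedged arithmetic sketch of that saving; the feature-map and channel sizes below are illustrative, not GoogLeNet's actual ones:

```python
def conv_cost(h, w, k, c_in, c_out):
    """Multiply count of a k x k conv on an h x w feature map (stride 1, same pad)."""
    return h * w * k * k * c_in * c_out

H = W = 28
C_IN, C_MID, C_OUT = 256, 64, 128   # illustrative channel sizes

# Direct 3x3 conv on the full input depth.
direct = conv_cost(H, W, 3, C_IN, C_OUT)
# 1x1 "reduction" conv down to C_MID channels, then the 3x3 conv.
reduced = conv_cost(H, W, 1, C_IN, C_MID) + conv_cost(H, W, 3, C_MID, C_OUT)
assert reduced < direct
```

With these numbers the reduced path needs roughly a third of the multiplies, which is how the module's width and depth can grow while total compute stays fixed.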