Author

Xiang Zhang

Bio: Xiang Zhang is an academic researcher at Baylor College of Medicine whose work spans medicine and computer science. He has an h-index of 154, has co-authored 1,733 publications, and has received 117,576 citations. His previous affiliations include the University of California, Berkeley and the University of Texas MD Anderson Cancer Center.


Papers
Journal ArticleDOI
TL;DR: In this article, the optical loss compensation via surface plasmon amplification with the assistance of the gain medium of PbS quantum dots was investigated for a bulk left-handed metamaterial with fishnet structure.
Abstract: A bulk left-handed metamaterial with fishnet structure is investigated to show the optical loss compensation via surface plasmon amplification with the assistance of the gain medium of PbS quantum dots. Simultaneously negative permittivity and permeability are confirmed at the telecommunication wavelength (1.5 μm) by the retrieval of the effective electromagnetic property. The dependence of enhanced transmission on the gain coefficient, as well as on the propagation layers, demonstrates that ultralow loss is feasible in bulk left-handed metamaterials.

50 citations

Journal ArticleDOI
TL;DR: It is observed that synonymous positions in general are conserved relative to intronic sequences, suggesting that messenger RNA molecules are rich in sequence information for functions beyond protein coding and splicing.
Abstract: We have used comparative genomics to characterize the evolutionary behavior of predicted splicing regulatory motifs. Using base substitution rates in intronic regions as a calibrator for neutral change, we found a strong avoidance of synonymous substitutions that disrupt predicted exonic splicing enhancers or create predicted exonic splicing silencers. These results attest to the functionality of the hexameric motif set used and suggest that they are subject to purifying selection. We also found that synonymous substitutions in constitutive exons tend to create exonic splicing enhancers and to disrupt exonic splicing silencers, implying positive selection for these splicing promoting events. We present evidence that this positive selection is the result of splicing-positive events compensating for splicing-negative events as well as for mutations that weaken splice-site sequences. Such compensatory events include nonsynonymous mutations, synonymous mutations, and mutations at splice sites. Compensation was also seen from the fact that orthologous exons tend to maintain the same number of predicted splicing motifs. Our data fit a splicing compensation model of exon evolution, in which selection for splicing-positive mutations takes place to counter the effect of an ongoing splicing-negative mutational process, with the exon as a whole being conserved as a unit of splicing. In the course of this analysis, we observed that synonymous positions in general are conserved relative to intronic sequences, suggesting that messenger RNA molecules are rich in sequence information for functions beyond protein coding and splicing.

50 citations

Journal ArticleDOI
TL;DR: This work presents a theoretical approach to analyze solar cell performance by allowing rigorous electromagnetic calculations of the emission rate using the fluctuation-dissipation theorem and shows the direct quantification of the voltage, current, and efficiency of low-dimensional solar cells.
Abstract: Current methods for evaluating solar cell efficiencies cannot be applied to low-dimensional structures where phenomena from the realm of near-field optics prevail. We present a theoretical approach to analyze solar cell performance by allowing rigorous electromagnetic calculations of the emission rate using the fluctuation-dissipation theorem. Our approach shows the direct quantification of the voltage, current, and efficiency of low-dimensional solar cells. This approach is demonstrated by calculating the voltage and the efficiency of a GaAs slab solar cell for thicknesses from several microns down to a few nanometers. This example highlights the ability of the proposed approach to capture the role of optical near-field effects in solar cell performance.

50 citations

Journal ArticleDOI
TL;DR: In this article, the effect of specific water quality parameters, including pH, dissolved oxygen (DO), conductivity, and buffering capacity, was investigated.
Abstract: The objective of this study is to clearly understand the effect of specific water quality parameters like pH, dissolved oxygen (DO), conductivity, and buffering capacity (attributable to d...

49 citations

Journal ArticleDOI
Jaroslav Adam, Dagmar Adamová, Madan M. Aggarwal, G. Aglieri Rinella, and 1,005 more authors (95 institutions)
TL;DR: Acknowledges funding from the State Committee of Science, the World Federation of Scientists (WFS), and Swiss Fonds Kidagan, Armenia; the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), Financiadora de Estudos e Projetos (FINEP), and Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP), Brazil; and the National Natural Science Foundation of China (NSFC), the Chinese Ministry of Education (CMOE), and the Ministry of Science and Technology of China.

49 citations


Cited by
Proceedings ArticleDOI
27 Jun 2016
TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously; their ensemble won first place on the ILSVRC 2015 classification task.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
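The residual idea summarized above can be sketched in a few lines. The toy block below uses plain NumPy dense layers in place of the paper's convolutions and batch normalization; the function and variable names are illustrative, not taken from the authors' code:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    """Toy residual unit computing y = relu(F(x) + x).

    F is a two-layer transform standing in for the paper's
    conv-BN-ReLU stacks; the identity shortcut is the key idea.
    """
    f = relu(x @ w1) @ w2   # the residual function F(x)
    return relu(f + x)      # identity shortcut added before the final ReLU

rng = np.random.default_rng(0)
x = rng.standard_normal(4)
w1 = rng.standard_normal((4, 4))
w2 = rng.standard_normal((4, 4))
y = residual_block(x, w1, w2)
```

Note the design property the abstract highlights: if the weights of F are driven to zero, the block reduces to (a rectified) identity map, so extra depth cannot easily hurt optimization — the layers only need to learn a residual correction on top of the identity.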

123,388 citations

Proceedings Article
04 Sep 2014
TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.
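The core design choice above, stacking very small 3x3 filters, can be illustrated with a short receptive-field calculation (a standard result, not code from the paper): two stride-1 3x3 layers see the same 5x5 window as a single 5x5 layer, while using fewer parameters (18C^2 vs. 25C^2 for C channels) and inserting an extra nonlinearity between them.

```python
def receptive_field(kernel_sizes):
    """Effective receptive field of a stack of stride-1 convolutions:
    each kxk layer grows the field by k - 1."""
    rf = 1
    for k in kernel_sizes:
        rf += k - 1
    return rf

# Two stacked 3x3 layers match one 5x5 layer; three match one 7x7.
assert receptive_field([3, 3]) == receptive_field([5])
assert receptive_field([3, 3, 3]) == receptive_field([7])
```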

55,235 citations

Journal ArticleDOI
04 Mar 2011-Cell
TL;DR: Recognition of the widespread applicability of these concepts will increasingly affect the development of new means to treat human cancer.

51,099 citations

Posted Content
TL;DR: This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

44,703 citations

Proceedings ArticleDOI
07 Jun 2015
TL;DR: In this paper, the authors propose Inception, a deep convolutional neural network architecture that achieved the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.
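The multi-branch structure described above can be sketched in a few lines of NumPy. The branches here are toy linear maps standing in for the real module's 1x1, 3x3, and 5x5 convolutions and pooling; what the sketch preserves is the pattern of running parallel branches on the same input and concatenating their outputs along the channel axis, so the branch widths set the module's channel budget:

```python
import numpy as np

def branch(out_ch):
    """Stand-in for one conv branch: maps (in_ch, H, W) -> (out_ch, H, W).
    Fixed averaging weights are used purely for illustration."""
    def f(x):
        in_ch = x.shape[0]
        w = np.ones((out_ch, in_ch)) / in_ch
        return np.tensordot(w, x, axes=1)  # contract over input channels
    return f

def inception_module(x, branch_channels):
    """Run parallel branches on the same input and concatenate
    their outputs along the channel axis."""
    return np.concatenate([branch(c)(x) for c in branch_channels], axis=0)

x = np.ones((16, 8, 8))              # (channels, height, width)
y = inception_module(x, [4, 8, 2, 2])
assert y.shape == (16, 8, 8)         # output channels are the sum: 4+8+2+2
```

Keeping the computational budget constant, as the abstract describes, then becomes a matter of choosing the per-branch channel counts (and, in the real GoogLeNet, inserting 1x1 reduction layers before the expensive branches).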

40,257 citations