Author

Xiang Zhang

Bio: Xiang Zhang is an academic researcher from Baylor College of Medicine. The author has contributed to research in topics: Medicine & Computer science. The author has an h-index of 154, has co-authored 1,733 publications receiving 117,576 citations. Previous affiliations of Xiang Zhang include the University of California, Berkeley and the University of Texas MD Anderson Cancer Center.


Papers
Journal ArticleDOI
TL;DR: An in-depth analysis of the Kerr effect in resonator fiber optic gyros (R-FOGs) based on triangular wave phase modulation shows that the measurement error for an R-FOG with a hollow-core photonic bandgap fiber as the fiber loop can be one to two orders of magnitude smaller than a conventional single mode fiber loop.
Abstract: We present an in-depth analysis of the Kerr effect in resonator fiber optic gyros (R-FOGs) based on triangular wave phase modulation. Formulations that relate gyro output to the rotation rate, the Kerr nonlinearity, and other fiber and gyro parameters are derived and used to study the effect of Kerr nonlinearity on the gyro performance. Numerical investigation shows that the Kerr effect results in a nonzero gyro output even when the gyro is stationary, which is interpreted as an error in the measurement of rotation rate. This error was found to increase as the frequencies of the two triangular phase modulations deviate from each other, and it is not zero even if the intensities of the two counterpropagating beams are exactly the same. For fixed frequencies of the triangular phase modulations, there exists an optimal intensity splitting ratio for the two counterpropagating beams, which leads to zero gyro error. Calculation shows that the measurement error due to the Kerr effect for an R-FOG with a hollow-core photonic bandgap fiber as the fiber loop can be one to two orders of magnitude smaller than for an R-FOG with a conventional single-mode fiber loop.
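For context, the quantities involved can be sketched with textbook fiber-gyro relations (standard formulas under common assumptions; the paper's own modulation-resolved derivation is not reproduced here, and the symbols below are generic):

```latex
% Sagnac splitting of the resonance frequencies of the two
% counterpropagating modes in a ring of length L, enclosed area A,
% refractive index n, for rotation rate \Omega:
\Delta f = \frac{4A}{n \lambda L}\,\Omega
% Kerr phase per beam (self- plus cross-phase modulation), with
% intensities I_1, I_2, nonlinear index n_2, effective area A_{eff}:
\phi_{1,2} = \frac{2\pi n_2 L}{\lambda A_{\mathrm{eff}}}\left(I_{1,2} + 2 I_{2,1}\right)
% Nonreciprocal (error-producing) part:
\phi_1 - \phi_2 = \frac{2\pi n_2 L}{\lambda A_{\mathrm{eff}}}\,(I_2 - I_1)
```

This simple continuous-wave picture predicts zero Kerr error at equal intensities; the abstract's point is that the triangular-modulation analysis shifts the optimal splitting ratio away from that naive balance. The scaling with n_2 is also what favors the hollow-core fiber, whose nonlinearity is orders of magnitude below that of solid silica because the light travels mostly in air.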

20 citations

Journal ArticleDOI
TL;DR: In contrast to conventional phase gradient metasurfaces where each meta-atom responds individually, this article proposed a non-Hermitian coupling between the meta-atoms, where new degrees of freedom are introduced and novel functionalities can be achieved.
Abstract: Metasurfaces are optically thin layers of subwavelength resonators that locally tailor the electromagnetic response at the nanoscale. Our metasurface research aims at developing novel designs and applications of metasurfaces that go beyond the classical regimes. In contrast to conventional phase-gradient metasurfaces, where each meta-atom responds individually, we are interested in developing metasurfaces where neighboring meta-atoms are strongly coupled. By engineering a non-Hermitian coupling between the meta-atoms, new degrees of freedom are introduced and novel functionalities can be achieved. We are also interested in combining classical metasurfaces with quantum emitters, which may offer opportunities for on-chip quantum technologies. Additionally, we have been designing metasurfaces to realize exciting phenomena and applications, such as an ultrathin metasurface cloak and a strong photonic spin-Hall effect. In this paper, we review our research efforts in optical metasurfaces over the past few years, ranging from conventional to novel types of metasurfaces and from the classical to the quantum regime.

20 citations

Journal ArticleDOI
TL;DR: Results show that the scheme based on QD-SOA is a promising method for the realization of high-speed all-optical communication system in the future.
Abstract: The scheme to realize high-speed (~250 Gb/s) all-optical Boolean logic gates using semiconductor optical amplifiers with quantum-dot (QD-SOA) is introduced and analyzed in this review. Numerical si...

20 citations

Journal ArticleDOI
TL;DR: This paper develops an efficient and exact local search method, FLoS (Fast Local Search), for top-$k$ proximity query, and extends FLoS to measures having a local optimum by utilizing relationships among different measures.
Abstract: Top-$k$ proximity query in large graphs is a fundamental problem with a wide range of applications. Various random-walk-based measures have been proposed to measure the proximity between different nodes. Although these measures are effective, efficiently computing them on large graphs is a challenging task. In this paper, we develop an efficient and exact local search method, FLoS (Fast Local Search), for top-$k$ proximity query in large graphs. FLoS guarantees the exactness of the solution. Moreover, it can be applied to a variety of commonly used proximity measures. FLoS is based on the no-local-optimum property of proximity measures. We show that many measures have no local optimum. Utilizing this property, we introduce several operations to manipulate transition probabilities and develop tight lower and upper bounds on the proximity values. The lower and upper bounds monotonically converge to the exact proximity value as more nodes are visited. We further extend FLoS to measures having a local optimum by utilizing relationships among different measures. We perform comprehensive experiments on real and synthetic large graphs to evaluate the efficiency and effectiveness of the proposed method.
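The bounding idea can be illustrated on one common random-walk measure, random walk with restart (RWR). The NumPy sketch below is illustrative only (`rwr_bounds`, the restart constant, and the truncation depth are my choices, not the paper's FLoS algorithm, which additionally localizes the computation): the truncated restart series gives a lower bound, and the walk mass not yet distributed bounds the remainder from above, so the two bounds tighten as more steps are expanded.

```python
import numpy as np

def rwr_bounds(P, q, t, c=0.15, T=20):
    """Lower/upper bounds on the RWR proximity from node q to node t.

    P is a row-stochastic transition matrix. After k expansion steps,
    `lower` holds the partial sum of the restart series, and the
    undistributed mass (1-c)^(k+1) bounds everything still to come.
    """
    n = P.shape[0]
    walk = np.zeros(n)
    walk[q] = 1.0            # walk distribution after k steps
    lower = np.zeros(n)      # partial sum of c * (1-c)^k * walk
    bounds = []
    for k in range(T):
        lower += c * (1 - c) ** k * walk
        tail = (1 - c) ** (k + 1)        # total undistributed mass
        bounds.append((lower[t], lower[t] + tail))
        walk = P.T @ walk
    return bounds

# Tiny 3-node path graph: 0 -- 1 -- 2
P = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0]])
for lo, up in rwr_bounds(P, 0, 2)[:3]:
    print(f"{lo:.4f} <= proximity(0, 2) <= {up:.4f}")
```

The exact score always sits between the two bounds, which is what lets a local method stop early once the top-$k$ candidates are separated.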

20 citations

Proceedings ArticleDOI
19 Oct 2020
TL;DR: This work proposes to construct a dual representation space, where transformation is performed explicitly to model the semantic transitions in the form of link topology and node attributes, and adopts an auxiliary mutual information loss to enforce the alignment of unpaired/paired examples.
Abstract: Graph translation is a very promising research direction with a wide range of potential real-world applications. Graphs are a natural structure for representing relationships and interactions, and their translation can encode the intrinsic semantic changes of relationships in different scenarios. However, despite its seemingly wide possibilities, usage of graph translation so far is still quite limited. One important reason is the lack of high-quality paired datasets. For example, we can easily build graphs representing people's shared music tastes and those representing co-purchase behavior, but a well-paired dataset is much more expensive to obtain. Therefore, in this work, we seek to provide a graph translation model for the semi-supervised scenario. This task is non-trivial, because graph translation involves changing the semantics in the form of link topology and node attributes, which is difficult to capture due to their combinatory nature and inter-dependencies. Furthermore, due to the high degree of freedom in a graph's composition, it is difficult to assure the generalization ability of trained models. These difficulties impose a tighter requirement on the exploitation of unpaired samples. Addressing them, we propose to construct a dual representation space, where transformation is performed explicitly to model the semantic transitions. Special encoder/decoder structures are designed, and an auxiliary mutual information loss is also adopted to enforce the alignment of unpaired/paired examples. We evaluate the proposed method on three different datasets.
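One way to make an "auxiliary mutual information loss" concrete is an InfoNCE-style contrastive bound between the two representation spaces. The NumPy sketch below is generic, not the paper's actual loss; `info_nce` and the temperature value are illustrative assumptions. Paired source/target embeddings act as positives, and every other pairing in the batch as negatives:

```python
import numpy as np

def info_nce(z_src, z_tgt, temperature=0.1):
    """InfoNCE-style loss: row i of z_src and row i of z_tgt form a
    positive pair; all other pairings in the batch are negatives.
    Lower loss means the two representation spaces are better aligned."""
    z_src = z_src / np.linalg.norm(z_src, axis=1, keepdims=True)
    z_tgt = z_tgt / np.linalg.norm(z_tgt, axis=1, keepdims=True)
    logits = (z_src @ z_tgt.T) / temperature     # cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # cross-entropy on the diagonal

rng = np.random.default_rng(0)
z = rng.standard_normal((8, 16))
print(info_nce(z, z + 0.01 * rng.standard_normal((8, 16))))  # aligned: small
print(info_nce(z, np.roll(z, 1, axis=0)))                    # misaligned: larger
```

Minimizing such a loss over encoder outputs pulls paired examples together while keeping unpaired ones apart, which is one standard way to exploit scarce paired data.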

20 citations


Cited by
Proceedings ArticleDOI
27 Jun 2016
TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won the 1st place on the ILSVRC 2015 classification task.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
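The reformulation can be seen in a few lines. Below is a minimal NumPy sketch with dense layers standing in for convolutions (the function names and initialization scale are illustrative, not the paper's code): a plain block must learn the whole mapping H(x), while a residual block only learns F(x) = H(x) - x, so near-zero weights already give a near-identity layer.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def plain_block(x, W1, W2):
    # Learns the full mapping H(x) directly.
    return relu(W2 @ relu(W1 @ x))

def residual_block(x, W1, W2):
    # Learns only the residual F(x) = H(x) - x; the identity
    # shortcut carries x forward unchanged.
    return relu(x + W2 @ relu(W1 @ x))

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
W1 = rng.standard_normal((8, 8)) * 0.01   # near-zero init
W2 = rng.standard_normal((8, 8)) * 0.01

# With near-zero weights the residual block is close to the identity
# (up to the ReLU), while the plain block collapses toward zero; this
# is the intuition behind why very deep residual stacks stay trainable.
print(residual_block(x, W1, W2))
print(plain_block(x, W1, W2))
```

Stacking many such blocks therefore degrades gracefully: an unneeded block can simply drive its residual toward zero instead of having to learn an identity mapping from scratch.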

123,388 citations

Proceedings Article
04 Sep 2014
TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.
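The appeal of very small filters can be checked arithmetically: a stack of n 3x3 layers covers the same receptive field as one (2n+1)x(2n+1) filter while using fewer parameters and adding extra non-linearities. A small sketch (the helper name is illustrative; biases are ignored):

```python
def stacked_3x3(n_layers, channels):
    # Receptive field of n stacked 3x3 convs: 2n + 1.
    rf = 2 * n_layers + 1
    # Parameters (no biases): n stacked 3x3 CxC layers
    # versus a single CxC layer with an rf x rf kernel.
    params_stacked = n_layers * channels * channels * 9
    params_single = channels * channels * rf * rf
    return rf, params_stacked, params_single

rf, stacked, single = stacked_3x3(2, 64)
print(rf, stacked, single)  # 5 73728 102400: two 3x3 layers match a 5x5 field
```

At 64 channels, two 3x3 layers spend about 28% fewer parameters than one 5x5 layer for the same receptive field, and the gap widens for three 3x3 layers versus one 7x7.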

55,235 citations

Journal ArticleDOI
04 Mar 2011-Cell
TL;DR: Recognition of the widespread applicability of these concepts will increasingly affect the development of new means to treat human cancer.

51,099 citations

Posted Content
TL;DR: This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

44,703 citations

Proceedings ArticleDOI
07 Jun 2015
TL;DR: Inception as mentioned in this paper is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22-layer-deep network, whose quality is assessed in the context of classification and detection.
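The "width" of an Inception module is just the channel-wise concatenation of its parallel branches (1x1, 3x3, 5x5, and a pooled projection). A toy bookkeeping sketch; the branch widths below match the commonly cited inception (3a) configuration, which is an assumption here rather than something stated in this abstract:

```python
def inception_output_channels(branches):
    # An Inception module runs its branches in parallel on the same
    # input and concatenates their outputs along the channel axis,
    # so the output width is simply the sum of the branch widths.
    return sum(branches.values())

branches = {"1x1": 64, "3x3": 128, "5x5": 32, "pool_proj": 32}
print(inception_output_channels(branches))  # 256 output channels
```

Budgeting these per-branch widths (with 1x1 convolutions to compress channels before the expensive 3x3 and 5x5 paths) is how the design grows depth and width at a fixed computational cost.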

40,257 citations