Author

Xiang Zhang

Bio: Xiang Zhang is an academic researcher from Baylor College of Medicine. The author has contributed to research in the topics of Medicine and Computer science. The author has an h-index of 154, and has co-authored 1,733 publications receiving 117,576 citations. Previous affiliations of Xiang Zhang include the University of California, Berkeley and the University of Texas MD Anderson Cancer Center.
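
The h-index quoted above is the largest number h such that the author has h papers with at least h citations each. A minimal sketch of the computation from a list of per-paper citation counts (purely illustrative, not tied to any SciSpace data source):

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Example: these per-paper citation counts give an h-index of 3.
print(h_index([10, 8, 5, 2, 1]))  # -> 3
```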


Papers
Journal ArticleDOI
TL;DR: In this article, a general scheme of inducing topological corner modes (TCMs) in arbitrary geometry based on Dirac vortices from aperiodic Kekulé modulations is presented.
Abstract: Recently, higher-order topologies have been experimentally realized, featuring topological corner modes (TCMs) between adjacent topologically distinct domains. However, they have to comply with specific spatial symmetries of underlying lattices, hence their TCMs only emerge in very limited geometries, which significantly impedes generic applications. Here, we report a general scheme of inducing TCMs in arbitrary geometry based on Dirac vortices from aperiodic Kekulé modulations. The TCMs can now be constructed and experimentally observed in square and pentagonal domains incompatible with underlying triangular lattices. Such bound modes at arbitrary corners do not require their boundaries to run along particular lattice directions. Our scheme allows an arbitrary specification of numbers and positions of TCMs, which will be important for future on-chip topological circuits. Moreover, the general scheme developed here can be extended to other classical wave systems. Our findings reveal rich physics of aperiodic modulations, and advance applications of TCMs in realistic scenarios.
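
In the Dirac-vortex picture invoked above, a bound mode is expected at each phase vortex of the complex Kekulé modulation field, which is how the number and positions of corner modes can be specified. The sketch below only illustrates that vortex-field idea on a continuum grid; it is an assumption-laden toy (hypothetical function and parameters), not the authors' lattice or experimental construction:

```python
import numpy as np

def vortex_mass_field(points, centers, delta=1.0, xi=2.0):
    """Complex Kekule-like modulation m(r): uniform amplitude delta far away,
    with the amplitude suppressed and the phase winding by 2*pi around each
    requested center (schematic toy, not the paper's actual construction)."""
    m = np.full(len(points), delta, dtype=complex)
    for cx, cy in centers:
        dz = (points[:, 0] - cx) + 1j * (points[:, 1] - cy)
        m *= np.tanh(np.abs(dz) / xi) * np.exp(1j * np.angle(dz))
    return m

# Request two bound modes by placing two vortices inside a square sample.
xs, ys = np.meshgrid(np.linspace(-10, 10, 41), np.linspace(-10, 10, 41))
pts = np.column_stack([xs.ravel(), ys.ravel()])
m = vortex_mass_field(pts, centers=[(-4.0, -4.0), (5.0, 3.0)])
print(m.shape, np.abs(m).min())  # the modulation amplitude vanishes at each vortex core
```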

26 citations

Journal ArticleDOI
TL;DR: The results showed that the metamaterials can steer not only classical light but also non-classical light, and may have potential applications in future quantum information processing.
Abstract: We studied the quantum properties of magnetic plasmon waves in a three-dimensional coupled metamaterial. A Hong-Ou-Mandel dip of two-photon interference with a visibility of 86 ± 6.0% was explicitly observed when the sample was inserted into one of the two arms of the interferometer. This meant that the quantum interference property survived in such a magnetic plasmon wave-mediated transmission process, thus testifying that the magnetic plasmon waves have a quantum nature. A full quantum model was utilized to describe our experimental results. The results showed that the metamaterials can steer not only classical light but also non-classical light, and may have potential applications in future quantum information processing.
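
For reference, the quoted visibility is conventionally computed from the coincidence level far from zero delay and the level at the bottom of the dip. A minimal sketch under that common convention (the toy data and helper function are illustrative, not the authors' analysis code):

```python
import numpy as np

def hom_visibility(coincidences):
    """HOM-dip visibility V = (C_bg - C_min) / C_bg, where C_bg is estimated from
    the wings of the delay scan and C_min is the minimum of the dip."""
    c = np.asarray(coincidences, dtype=float)
    c_bg = 0.5 * (c[:3].mean() + c[-3:].mean())   # average the two wings
    return (c_bg - c.min()) / c_bg

# Toy delay scan whose dip drops to 14% of the background coincidence level.
delays = np.linspace(-300.0, 300.0, 61)                    # arbitrary units
counts = 1000.0 * (1.0 - 0.86 * np.exp(-(delays / 80.0) ** 2))
print(round(hom_visibility(counts), 2))                    # -> 0.86
```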

26 citations

01 Jan 2010
TL;DR: This paper studies existing topographic maps at large to medium scales, and proposes and discusses a comprehensive typology of building patterns, their distinctions, and characteristics, which includes linear alignments (collinear, curvilinear, and align-along-road) and nonlinear clusters.
Abstract: Building patterns are important settlement structures in applications like automated generalization and spatial data mining. Previous investigations have focused on a few types of building patterns (e.g. collinear building alignments), while many other types are less discussed. To gain a better understanding of the building patterns present in geographic data, this paper studies existing topographic maps at large to medium scales, and proposes and discusses a comprehensive typology of building patterns, their distinctions, and characteristics. The proposed typology includes linear alignments (i.e. collinear, curvilinear, and align-along-road alignments) and nonlinear clusters (grid-like and unstructured patterns). In this paper we concentrate on two specific building structures: align-along-road alignments and unstructured clusters. Two graph-theoretic algorithms are presented to detect these two types of building patterns. The approach is based on auxiliary data structures such as the Delaunay triangulation and minimum spanning trees for clustering; several rules are then used to refine the clusters into specific building patterns. Finally, the proposed algorithms are tested against a real topographic dataset of the Netherlands, which shows the potential of the two algorithms.
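
As a rough illustration of the Delaunay-plus-MST clustering step described above, the sketch below groups building centroids by pruning long edges from a minimum spanning tree built over the Delaunay graph (the SciPy-based workflow and the distance threshold are my assumptions; the paper's pattern-refinement rules are not reproduced):

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

def cluster_buildings(centroids, cut_length=30.0):
    """Cluster building centroids: Delaunay graph -> MST -> cut long edges."""
    pts = np.asarray(centroids, dtype=float)
    tri = Delaunay(pts)
    edges = set()
    for simplex in tri.simplices:                      # collect unique Delaunay edges
        for i in range(3):
            a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
            edges.add((a, b))
    rows, cols, weights = zip(*[(a, b, np.linalg.norm(pts[a] - pts[b])) for a, b in edges])
    graph = coo_matrix((weights, (rows, cols)), shape=(len(pts), len(pts)))
    mst = minimum_spanning_tree(graph).tocoo()
    keep = mst.data < cut_length                       # drop bridges longer than the threshold
    pruned = coo_matrix((mst.data[keep], (mst.row[keep], mst.col[keep])), shape=mst.shape)
    return connected_components(pruned, directed=False)

# Two well-separated groups of building centroids (coordinates in metres).
pts = [(0, 0), (10, 2), (20, 1), (200, 200), (210, 205), (222, 198)]
print(cluster_buildings(pts))                          # -> (2, cluster label per building)
```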

26 citations

Journal ArticleDOI
TL;DR: In the US Medicare population, osteoporosis treatment significantly reduced the risk of fragility fractures, while black race, higher CCI scores, dementia, and kidney disease reduced the likelihood of osteoporosis medication use.
Abstract: Our aim was to evaluate the gap in osteoporosis treatment and the impact of osteoporosis treatment on subsequent fragility fractures. We found that osteoporosis medication use lowered the risk of subsequent fractures by 21%, and that black race, higher CCI scores, dementia, and kidney diseases reduced the likelihood of osteoporosis medication use. The goal of this study was to evaluate the predictors of osteoporosis medication use and compare the risk of fragility fractures within 1 year of a fragility fracture between osteoporosis-treated and untreated women. We conducted a retrospective, observational cohort study using the national Medicare database. Elderly women (≥65 years) who were hospitalized or had an outpatient/ER service for fragility fracture between January 1, 2011 and December 31, 2011 were included. The outcomes of interest were the correlates of and time to osteoporosis medication use and the risk of a subsequent fracture within 12 months for treated and untreated women. Cox regression was used to evaluate the predictors of treatment use and the risk of fracture based on treatment status. In total, 28,722 women (27.7%) were treated with osteoporosis medication within 12 months of the index fracture, and 74,979 (72.2%) were untreated. A number of patient characteristics were associated with a reduced likelihood of osteoporosis medication use, including black race, higher Charlson comorbidity index scores, presence of dementia, and kidney diseases at baseline. The predictor most strongly and positively associated with osteoporosis medication use after fracture was osteoporosis medication use before the fragility fracture (HR = 7.87; 95% CI 7.67–8.07). After adjusting for baseline characteristics, osteoporosis medication use lowered the risk of subsequent fractures by 21% (HR = 0.79, 95% CI 0.75–0.83) over 12 months compared to women without treatment. Demographics and clinical characteristics were strong predictors of osteoporosis medication use. In the US Medicare population, osteoporosis treatment significantly reduced the risk of fragility fractures.
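
For readers unfamiliar with the method, the Cox regression used here models the hazard of a subsequent fracture as a function of covariates such as treatment status and age. A minimal sketch with the lifelines library and toy data (the library choice, column names, and numbers are assumptions, not the study's code or Medicare data):

```python
import pandas as pd
from lifelines import CoxPHFitter

# Toy cohort: time to subsequent fracture (months), event indicator, and covariates.
df = pd.DataFrame({
    "months_to_event": [3, 12, 7, 12, 5, 10, 12, 2, 8, 12],
    "refracture":      [1,  0, 1,  0, 1,  1,  0, 1, 0,  0],   # 1 = subsequent fracture
    "treated":         [0,  1, 0,  1, 0,  1,  1, 0, 1,  0],   # osteoporosis medication use
    "age":             [78, 71, 66, 69, 75, 80, 72, 86, 74, 81],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months_to_event", event_col="refracture")
cph.print_summary()   # hazard ratios (e.g. for treated) with confidence intervals
```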

26 citations

Journal ArticleDOI
TL;DR: In this paper, the authors demonstrate a room-temperature perovskite-based polaritonic platform with polariton lattice sizes of up to 10 × 10, establishing perovskites as a promising platform for the realization of robust, mode-disorder-free polaritonic devices.
Abstract: Exciton polaritons, the part-light and part-matter quasiparticles in semiconductor optical cavities, are promising for exploring Bose–Einstein condensation, non-equilibrium many-body physics and analogue simulation at elevated temperatures. However, a room-temperature polaritonic platform on par with the GaAs quantum wells grown by molecular beam epitaxy at low temperatures remains elusive. The operation of such a platform calls for long-lifetime, strongly interacting excitons in a stringent material system with large yet nanoscale-thin geometry and homogeneous properties. Here, we address this challenge by adopting a method based on the solution synthesis of excitonic halide perovskites grown under nanoconfinement. Such nanoconfinement growth facilitates the synthesis of smooth and homogeneous single-crystalline large crystals enabling the demonstration of XY Hamiltonian lattices with sizes up to 10 × 10. With this demonstration, we further establish perovskites as a promising platform for room temperature polaritonic physics and pave the way for the realization of robust mode-disorder-free polaritonic devices at room temperature. The realization of large-scale exciton–polariton platforms operating at room temperature and exhibiting long-lived, strongly interacting excitons has been elusive. Here, the authors demonstrate a room-temperature perovskite-based polaritonic platform with a polariton lattice size of up to 10 × 10.
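
The XY Hamiltonian emulated by such a condensate lattice is H = -J Σ_<ij> cos(θ_i − θ_j), where θ_i is the condensate phase at site i. The toy relaxation below minimizes this energy on an assumed 10 × 10 square lattice by gradient descent, purely to make the target Hamiltonian concrete; the physical platform finds low-energy phase configurations through condensate synchronization, not this algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
N, J, lr, steps = 10, 1.0, 0.1, 2000
theta = rng.uniform(0, 2 * np.pi, size=(N, N))        # random initial phases

def neighbor_sin_sum(th):
    """Sum of sin(theta_j - theta_i) over nearest neighbours (open boundaries)."""
    s = np.zeros_like(th)
    s[:-1, :] += np.sin(th[1:, :] - th[:-1, :])
    s[1:, :] += np.sin(th[:-1, :] - th[1:, :])
    s[:, :-1] += np.sin(th[:, 1:] - th[:, :-1])
    s[:, 1:] += np.sin(th[:, :-1] - th[:, 1:])
    return s

for _ in range(steps):
    theta += lr * J * neighbor_sin_sum(theta)          # gradient descent on H

energy = -J * (np.cos(np.diff(theta, axis=0)).sum() + np.cos(np.diff(theta, axis=1)).sum())
print(round(energy, 1))   # near -180 (the bond count) if the phases fully align
```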

26 citations


Cited by
Proceedings ArticleDOI
27 Jun 2016
TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won the 1st place on the ILSVRC 2015 classification task.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers, 8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
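
A minimal PyTorch sketch of the residual idea described above: two 3×3 convolutions learn a residual F(x) that is added back to an identity shortcut (an illustrative re-implementation, not the authors' released code):

```python
import torch
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    """Residual block: output = ReLU(F(x) + x), so the stacked layers learn a
    residual with respect to the identity mapping."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)          # identity shortcut

x = torch.randn(1, 64, 56, 56)
print(BasicResidualBlock(64)(x).shape)     # torch.Size([1, 64, 56, 56])
```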

123,388 citations

Proceedings Article
04 Sep 2014
TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.
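
A short sketch of the design idea, a stage of stacked 3×3 convolutions followed by 2×2 pooling, written in PyTorch for illustration (the channel counts and number of stages here are indicative, not the exact 16- or 19-layer configurations):

```python
import torch
import torch.nn as nn

def vgg_stage(in_ch, out_ch, n_convs):
    """One VGG-style stage: n_convs 3x3 convolutions with ReLU, then 2x2 max pooling."""
    layers = []
    for i in range(n_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, kernel_size=3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
    return nn.Sequential(*layers)

# Two stacked 3x3 convolutions cover a 5x5 receptive field with fewer parameters
# than one 5x5 convolution, which is the core argument for very small filters.
features = nn.Sequential(vgg_stage(3, 64, 2), vgg_stage(64, 128, 2), vgg_stage(128, 256, 3))
print(features(torch.randn(1, 3, 224, 224)).shape)   # torch.Size([1, 256, 28, 28])
```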

55,235 citations

Journal ArticleDOI
04 Mar 2011-Cell
TL;DR: Recognition of the widespread applicability of these concepts will increasingly affect the development of new means to treat human cancer.

51,099 citations

Posted Content
TL;DR: This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

44,703 citations

Proceedings ArticleDOI
07 Jun 2015
TL;DR: Inception as mentioned in this paper is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.
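
A compact sketch of the multi-scale idea behind an Inception module: parallel 1×1, 3×3, and 5×5 convolutions plus pooling, concatenated along the channel dimension (PyTorch for illustration; the channel counts are arbitrary, and 1×1 reductions before the larger filters are included in the GoogLeNet spirit, but this is not the published configuration):

```python
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    """Parallel 1x1 / 3x3 / 5x5 convolution branches and a pooling branch,
    concatenated along the channel axis (illustrative channel counts)."""
    def __init__(self, in_ch):
        super().__init__()
        self.branch1 = nn.Conv2d(in_ch, 32, kernel_size=1)
        self.branch3 = nn.Sequential(nn.Conv2d(in_ch, 48, kernel_size=1),
                                     nn.Conv2d(48, 64, kernel_size=3, padding=1))
        self.branch5 = nn.Sequential(nn.Conv2d(in_ch, 16, kernel_size=1),
                                     nn.Conv2d(16, 32, kernel_size=5, padding=2))
        self.branch_pool = nn.Sequential(nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
                                         nn.Conv2d(in_ch, 32, kernel_size=1))

    def forward(self, x):
        return torch.cat([self.branch1(x), self.branch3(x),
                          self.branch5(x), self.branch_pool(x)], dim=1)

x = torch.randn(1, 192, 28, 28)
print(InceptionModule(192)(x).shape)   # torch.Size([1, 160, 28, 28])
```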

40,257 citations