Author
Xiang Zhang
Other affiliations: University of California, Berkeley; University of Texas MD Anderson Cancer Center; Penn State College of Information Sciences and Technology; ...
Bio: Xiang Zhang is an academic researcher from Baylor College of Medicine. The author has contributed to research in topics including Medicine and Computer science, has an h-index of 154, and has co-authored 1,733 publications receiving 117,576 citations. Previous affiliations of Xiang Zhang include the University of California, Berkeley and the University of Texas MD Anderson Cancer Center.
Topics: Medicine, Computer science, Materials science, Metamaterial, Chemistry
Papers published on a yearly basis
Papers
TL;DR: In this paper, the multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities (2.3 < η < 3.9) in proton-proton collisions at three center-of-mass energies, √s = 0.9, 2.76 and 7 TeV, using the ALICE detector.
Abstract: The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities (2.3 < η < 3.9) in proton-proton collisions at three center-of-mass energies, √s = 0.9, 2.76 and 7 TeV, using the ALICE detector. It is observed that the increase in the average photon multiplicity as a function of beam energy is compatible with both a logarithmic and a power-law dependence. The relative increases in average photon multiplicity produced in inelastic pp collisions at 2.76 and 7 TeV center-of-mass energies with respect to 0.9 TeV are 37.2 ± 0.3% (stat) ± 8.8% (sys) and 61.2 ± 0.3% (stat) ± 7.6% (sys), respectively. The photon multiplicity distributions for all center-of-mass energies are well described by negative binomial distributions. The multiplicity distributions are also presented in terms of KNO variables. The results are compared to model predictions, which are found in general to underestimate the data at large photon multiplicities, in particular at the highest center-of-mass energy. Limiting fragmentation behavior of photons has been explored with the data, but is not observed in the measured pseudorapidity range.
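As a point of reference only (this is the standard convention used in multiplicity studies, not a formula or fitted values quoted from the paper), the negative binomial distribution mentioned above, with mean multiplicity ⟨n⟩ and shape parameter k, reads:

% Negative binomial multiplicity distribution with mean <n> and shape parameter k
% (standard convention in multiplicity analyses; the fitted values are not quoted here).
P(n) = \frac{\Gamma(n + k)}{\Gamma(k)\, n!}
       \left( \frac{\langle n \rangle / k}{1 + \langle n \rangle / k} \right)^{n}
       \left( \frac{1}{1 + \langle n \rangle / k} \right)^{k}
% KNO variables: the distributions are compared as <n> P(n) versus z = n / <n>.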
13 citations
TL;DR: In this article, the effect of thermal cycling on fatigue crack growth rate was investigated by exposing the specimens to repeated thermal cycles between 70°C and −60°C prior to fatigue testing.
12 citations
TL;DR: In this article, the absorption coefficient of a graphene layer is tuned by shifting its Fermi energy level, yielding an electroabsorption modulator with a modulation depth above 3 dB at telecommunication wavelengths. Silicon modulators, by contrast, are limited by silicon's weak refractive index change, which restricts the footprint of silicon Mach-Zehnder interferometer modulators to millimeters.
Abstract: Data communications have been growing at a speed even faster than Moore's Law, with a 44-fold increase expected within the next 10 years. Data transfer on such a scale would have to recruit optical communication technology and inspire new designs of light sources, modulators, and photodetectors. An ideal optical modulator will require high modulation speed, small device footprint and large operating bandwidth. Silicon modulators based on the free-carrier plasma dispersion effect and compound semiconductors utilizing direct bandgap transitions have seen rapid improvement over the past decade. One of the key limitations of silicon as a modulator material is its weak refractive index change, which limits the footprint of silicon Mach-Zehnder interferometer modulators to millimeters. Other approaches such as silicon microring modulators reduce the operating wavelength range to around 100 pm and are highly sensitive to typical fabrication tolerances and temperature fluctuations. Growing large, high-quality wafers of compound semiconductors, and integrating them on silicon or other substrates, is expensive, which also restricts their commercialization. In this work, we demonstrate that graphene can be used as the active medium for electroabsorption modulators. By tuning the Fermi energy level of the graphene layer, we induce changes in the absorption coefficient of graphene at communication wavelengths and achieve a modulation depth above 3 dB. This integrated device also has the potential of working at high speed.
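For context only (these are standard textbook relations, not formulas or numbers quoted from the paper): the modulation depth in decibels compares the transmitted optical power in the on and off states, and interband absorption in graphene is suppressed once the gate-tuned Fermi level exceeds half the photon energy:

% Modulation depth (extinction ratio) in dB from the on/off transmitted optical power:
\mathrm{MD}\;[\mathrm{dB}] = 10 \log_{10}\!\left( \frac{P_{\mathrm{on}}}{P_{\mathrm{off}}} \right)
% Interband absorption is Pauli-blocked when the electrostatically tuned Fermi level
% exceeds half the photon energy:
|E_F| > \frac{\hbar \omega}{2}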
12 citations
TL;DR: This paper presents a comprehensive survey of deep learning techniques used for brain-computer interfaces, summarizing over 230 contributions, most published in the past five years, and discusses the application areas, open challenges, and future directions for deep learning-based BCI.
Abstract: Brain-Computer Interface (BCI) bridges the human neural world and the outer physical world by decoding individuals' brain signals into commands recognizable by computer devices. Deep learning has lifted the performance of brain-computer interface systems significantly in recent years. In this article, we systematically investigate brain signal types for BCI and related deep learning concepts for brain signal analysis. We then present a comprehensive survey of deep learning techniques used for BCI, summarizing over 230 contributions, most published in the past five years. Finally, we discuss the application areas, open challenges, and future directions for deep learning-based BCI.
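Purely as an illustrative sketch of the kind of model such surveys cover (this is not an architecture from the paper; the channel count, window length, and class count below are placeholders), a minimal convolutional classifier for windowed EEG signals might look like this in PyTorch:

# Minimal sketch of a CNN classifier for windowed EEG signals.
# All sizes (channels, samples, classes) are illustrative placeholders,
# not values taken from the surveyed papers.
import torch
import torch.nn as nn

class EEGConvNet(nn.Module):
    def __init__(self, n_channels=32, n_samples=256, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=7, padding=3),  # temporal filtering
            nn.BatchNorm1d(16),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.BatchNorm1d(32),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                              # collapse the time axis
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                 # x: (batch, n_channels, n_samples)
        h = self.features(x).squeeze(-1)  # (batch, 32)
        return self.classifier(h)         # class logits

# Example forward pass on a random batch of EEG windows.
logits = EEGConvNet()(torch.randn(8, 32, 256))  # -> shape (8, 4)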
12 citations
TL;DR: This work suggests that it is technically feasible for official mapping agencies and companies to make use of timely data sources for continuous or event-driven map updates; further improvements and extensions are discussed.
12 citations
Cited by
27 Jun 2016
TL;DR: In this paper, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and an ensemble of the resulting residual nets won 1st place on the ILSVRC 2015 classification task.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers, 8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
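To make the residual formulation above concrete, here is a minimal sketch of a basic residual block in PyTorch; it illustrates the idea of learning a residual F(x) and adding it back to the input, and is not the authors' released code (the channel count and input size in the example are arbitrary):

# Minimal sketch of a basic residual block: the stacked layers learn a
# residual F(x), and the block outputs F(x) + x via an identity shortcut.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BasicResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        residual = self.bn2(self.conv2(F.relu(self.bn1(self.conv1(x)))))
        return F.relu(residual + x)   # add the identity shortcut, then activate

# Example: a block preserves the input shape, so many blocks can be stacked deeply.
y = BasicResidualBlock(64)(torch.randn(1, 64, 56, 56))  # -> (1, 64, 56, 56)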
123,388 citations
04 Sep 2014
TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.
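As an illustration of the very small (3x3) convolution filters described above, here is a sketch of a VGG-style stage that stacks 3x3 convolutions and then halves the spatial resolution; it is not the published model definition, and the channel counts are placeholders:

# Sketch of a VGG-style stage: repeated 3x3 convolutions followed by 2x2
# max pooling; depth is increased by stacking more such stages.
import torch
import torch.nn as nn

def vgg_stage(in_channels, out_channels, num_convs):
    layers = []
    for i in range(num_convs):
        layers += [
            nn.Conv2d(in_channels if i == 0 else out_channels,
                      out_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        ]
    layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
    return nn.Sequential(*layers)

# Example: two stacked 3x3 convolutions cover a 5x5 receptive field with
# fewer parameters than a single 5x5 convolution of the same width.
stage = vgg_stage(3, 64, num_convs=2)
out = stage(torch.randn(1, 3, 224, 224))  # -> (1, 64, 112, 112)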
55,235 citations
TL;DR: Recognition of the widespread applicability of these concepts will increasingly affect the development of new means to treat human cancer.
51,099 citations
TL;DR: This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers, 8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers.
The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
44,703 citations
07 Jun 2015
TL;DR: Inception, the deep convolutional neural network architecture proposed in this paper, achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.
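To illustrate the multi-scale design described above, here is a simplified sketch of an Inception-style module in PyTorch; it is not the exact GoogLeNet definition, it omits the activation functions, and the branch widths are placeholder values:

# Simplified sketch of an Inception-style module: parallel branches at
# different filter sizes (with 1x1 "bottleneck" convolutions to limit cost)
# are concatenated along the channel axis. Branch widths are placeholders.
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    def __init__(self, in_ch, out_1x1=64, red_3x3=48, out_3x3=64,
                 red_5x5=16, out_5x5=32, out_pool=32):
        super().__init__()
        self.branch1 = nn.Conv2d(in_ch, out_1x1, kernel_size=1)
        self.branch2 = nn.Sequential(
            nn.Conv2d(in_ch, red_3x3, kernel_size=1),
            nn.Conv2d(red_3x3, out_3x3, kernel_size=3, padding=1),
        )
        self.branch3 = nn.Sequential(
            nn.Conv2d(in_ch, red_5x5, kernel_size=1),
            nn.Conv2d(red_5x5, out_5x5, kernel_size=5, padding=2),
        )
        self.branch4 = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(in_ch, out_pool, kernel_size=1),
        )

    def forward(self, x):
        # Each branch preserves spatial size; outputs are concatenated on channels.
        return torch.cat([self.branch1(x), self.branch2(x),
                          self.branch3(x), self.branch4(x)], dim=1)

# Example: the output width is the sum of the branch widths (64+64+32+32 = 192).
y = InceptionModule(192)(torch.randn(1, 192, 28, 28))  # -> (1, 192, 28, 28)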
40,257 citations