
Xie Chen

Researcher at Microsoft

Publications: 214
Citations: 17,208

Xie Chen is an academic researcher at Microsoft. The author has contributed to research on topics including LIGO and gravitational waves. The author has an h-index of 55 and has co-authored 196 publications receiving 13,224 citations. Previous affiliations of Xie Chen include the University of Cambridge and the Massachusetts Institute of Technology.

Papers
Journal Article

Subsystem stabilizer codes cannot have a universal set of transversal gates for even one encoded qudit

TL;DR: It is shown that for subsystem stabilizer codes in $d$-dimensional Hilbert space, such a universal set of transversal gates cannot exist for even one encoded qudit, for any dimension $d$, prime or nonprime.
Journal Article

Twisted foliated fracton phases

TL;DR: In this article, the authors identify three-dimensional fracton models with different kinds of foliated fracton order, present constructions of the twisted models, and demonstrate that the models possess nontrivial order by studying their fractional excitation content.
Journal Article

A First Search for coincident Gravitational Waves and High Energy Neutrinos using LIGO, Virgo and ANTARES data from 2007

S. Adrián-Martínez, +955 more
TL;DR: In this paper, the authors present the results of the first search for gravitational-wave bursts associated with high-energy neutrinos; such joint searches could reveal new, hidden sources that are not observed by conventional photon astronomy, particularly at high energy.
Posted Content

Internal Language Model Training for Domain-Adaptive End-to-End Speech Recognition

TL;DR: An internal LM estimation (ILME) method is proposed to facilitate a more effective integration of the external LM with all pre-existing E2E models, with no additional model training, including the most popular recurrent neural network transducer (RNN-T) and attention-based encoder-decoder (AED) models.
Proceedings Article

Internal Language Model Estimation for Domain-Adaptive End-to-End Speech Recognition

TL;DR: This paper proposes an internal language model estimation (ILME) method to facilitate a more effective integration of the external LM with all pre-existing E2E models, with no additional model training, including the most popular recurrent neural network transducer (RNN-T) and attention-based encoder-decoder (AED) models.
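
The two ILME entries above describe the same inference-time idea: during decoding, subtract an estimate of the internal LM that the E2E model has implicitly learned from its source-domain training text, and add the external target-domain LM instead. Below is a minimal Python sketch of that log-linear score combination; the scoring callables (log_p_e2e, log_p_ilm, log_p_ext) and the interpolation weights are hypothetical placeholders, not the authors' implementation.

from typing import Callable, List

def ilme_fusion_score(
    hyp: List[int],                   # token ids of a partial hypothesis
    acoustics,                        # encoded audio features
    log_p_e2e: Callable[..., float],  # log P(hyp | acoustics) from the E2E model
    log_p_ilm: Callable[..., float],  # estimated internal-LM log-prob of hyp
    log_p_ext: Callable[..., float],  # external (target-domain) LM log-prob of hyp
    lam_ext: float = 0.6,             # external LM weight (illustrative value)
    lam_ilm: float = 0.4,             # internal LM weight (illustrative value)
) -> float:
    # Log-linear ILME fusion: reward the external LM while discounting
    # the source-domain internal LM already baked into the E2E scores.
    return (
        log_p_e2e(hyp, acoustics)
        + lam_ext * log_p_ext(hyp)
        - lam_ilm * log_p_ilm(hyp)
    )

In beam search, each partial hypothesis would be ranked by this fused score rather than the raw E2E log-probability; setting lam_ilm to zero recovers ordinary shallow fusion.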