
Xiyou Zhou

Researcher at University of California, Santa Barbara

Publications -  12
Citations -  1015

Xiyou Zhou is an academic researcher from the University of California, Santa Barbara. The author has contributed to research in topics: Semantics & Language model. The author has an h-index of 6 and has co-authored 10 publications receiving 400 citations. Previous affiliations of Xiyou Zhou include Fudan University.

Papers
Proceedings Article

Enhancing the Locality and Breaking the Memory Bottleneck of Transformer on Time Series Forecasting

TL;DR: First, convolutional self-attention is proposed, producing queries and keys with causal convolution so that local context can be better incorporated into the attention mechanism. Second, the LogSparse Transformer is proposed, improving forecasting accuracy for time series with fine granularity and strong long-term dependencies under a constrained memory budget.
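
As a rough illustration of the convolutional self-attention idea described in this TL;DR, here is a minimal PyTorch sketch (not the authors' code; the kernel size, module names, and left-padding scheme are illustrative assumptions): queries and keys are produced by a causal 1-D convolution over the sequence instead of a pointwise projection, so each attention score reflects local context.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvSelfAttention(nn.Module):
    """Self-attention whose queries/keys come from a causal 1-D convolution."""
    def __init__(self, d_model, kernel_size=3):
        super().__init__()
        # Left-pad by kernel_size - 1 so position t never sees positions > t.
        self.pad = kernel_size - 1
        self.q_conv = nn.Conv1d(d_model, d_model, kernel_size)
        self.k_conv = nn.Conv1d(d_model, d_model, kernel_size)
        self.v_proj = nn.Linear(d_model, d_model)  # values stay pointwise

    def forward(self, x):                   # x: (batch, seq_len, d_model)
        h = x.transpose(1, 2)               # (batch, d_model, seq_len) for Conv1d
        h = F.pad(h, (self.pad, 0))         # pad only on the left: causality
        q = self.q_conv(h).transpose(1, 2)  # (batch, seq_len, d_model)
        k = self.k_conv(h).transpose(1, 2)
        v = self.v_proj(x)
        scores = q @ k.transpose(1, 2) / q.size(-1) ** 0.5
        # Causal mask so each step attends only to itself and the past.
        mask = torch.triu(torch.ones(x.size(1), x.size(1), dtype=torch.bool,
                                     device=x.device), diagonal=1)
        scores = scores.masked_fill(mask, float("-inf"))
        return torch.softmax(scores, dim=-1) @ v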
Posted Content

TabFact: A Large-scale Dataset for Table-based Fact Verification

TL;DR: A large-scale dataset is constructed with 16k Wikipedia tables as the evidence for 118k human-annotated natural language statements, each labeled as either ENTAILED or REFUTED, and two different models are designed: Table-BERT and the Latent Program Algorithm (LPA).
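
To make the Table-BERT side of this setup concrete, here is a hedged sketch (the linearization template below is an assumption, not the paper's exact one): the table is flattened into a text sequence and paired with the statement for a standard BERT-style entailment classifier.

def linearize_table(header, rows):
    """Flatten a table into a text string for a BERT-style encoder.
    This template is illustrative; several linearizations are possible."""
    parts = []
    for i, row in enumerate(rows, start=1):
        cells = [f"{col} is {val}" for col, val in zip(header, row)]
        parts.append(f"row {i}: " + " ; ".join(cells))
    return " . ".join(parts)

header = ["player", "points"]
rows = [["alice", "30"], ["bob", "12"]]
statement = "alice scored more points than bob"
# A classifier would encode the (statement, table) pair, e.g. as
# "[CLS] statement [SEP] table [SEP]", and predict ENTAILED vs. REFUTED.
text_pair = (statement, linearize_table(header, rows))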
Proceedings Article

TabFact: A Large-scale Dataset for Table-based Fact Verification

TL;DR: This paper designs two different models, Table-BERT and the Latent Program Algorithm (LPA), to verify whether a textual hypothesis holds based on the given table evidence, a task known as fact verification.
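
For the LPA side, the statement is parsed into a small symbolic program executed against the table. A toy sketch (the operator names and program shape are illustrative assumptions, not the paper's grammar):

# Toy table and a hand-written program standing in for a parsed statement.
table = [{"player": "alice", "points": 30}, {"player": "bob", "points": 12}]

def hop(rows, key):                 # project a column
    return [r[key] for r in rows]

def filter_eq(rows, key, value):    # keep rows where column == value
    return [r for r in rows if r[key] == value]

def greater(a, b):                  # compare two scalars
    return a > b

# "alice scored more points than bob" as a composed program:
verdict = greater(hop(filter_eq(table, "player", "alice"), "points")[0],
                  hop(filter_eq(table, "player", "bob"), "points")[0])
print("ENTAILED" if verdict else "REFUTED")   # -> ENTAILED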
Posted Content

Enhancing the Locality and Breaking the Memory Bottleneck of Transformer on Time Series Forecasting.

TL;DR: In this paper, convolutional self-attention based on causal convolution was proposed to improve forecasting accuracy for time series with fine granularity and strong long-term dependencies.
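
The LogSparse Transformer proposed in the same paper (see the first entry above) breaks the memory bottleneck by letting each position attend to only O(log L) earlier positions rather than all of them. A minimal sketch of one plausible index pattern (the exact pattern is an assumption):

def logsparse_indices(t):
    """Positions cell t may attend to: itself plus exponentially spaced
    past steps, giving O(log t) keys instead of O(t)."""
    idx, step = {t}, 1
    while t - step >= 0:
        idx.add(t - step)
        step *= 2
    return sorted(idx)

for t in [7, 16]:
    print(t, logsparse_indices(t))
# 7  [3, 5, 6, 7]          memory per row grows ~log(t), not t
# 16 [0, 8, 12, 14, 15, 16]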
Proceedings Article

Logic2Text: High-Fidelity Natural Language Generation from Logical Forms

TL;DR: This work formulates high-fidelity NLG as generation from logical forms in order to obtain controllable and faithful generations, and presents a new large-scale dataset, Logic2Text, with 10,753 descriptions involving common logic types paired with their underlying logical forms.
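
To make the "generation from logical forms" framing concrete, here is a hypothetical example of the kind of (logical form, description) pair such a dataset contains; the function names and syntax below are illustrative assumptions, not quoted from Logic2Text.

# One hypothetical (logical form, description) training pair.
example = {
    # A superlative over a table of game results (illustrative syntax):
    "logic_form": "eq { max { all_rows ; points } ; 30 }",
    "description": "the highest number of points scored in any game was 30",
}
# An NLG model is trained to map example["logic_form"] (plus the table)
# to example["description"], which is what makes the output controllable
# and faithful to the underlying logic.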