
Kai Xu

Researcher at Arizona State University

Publications -  23
Citations -  625

Kai Xu is an academic researcher from Arizona State University. He has contributed to research on topics including convolutional neural networks and compressed sensing. He has an h-index of 11 and has co-authored 22 publications receiving 361 citations. Previous affiliations of Kai Xu include Arizona's Public Universities.

Papers
Proceedings ArticleDOI

Learning in the Frequency Domain

TL;DR: Inspired by digital signal processing theory, this work analyzes spectral bias from the frequency perspective and proposes a learning-based frequency selection method to identify trivial frequency components that can be removed without accuracy loss.
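The core idea of operating in the frequency domain can be illustrated with a toy DCT example. This is only a sketch of frequency-domain pruning, not the paper's learned selection method: here the "trivial" high-frequency components are dropped with a hand-picked mask, whereas the paper learns which components to remove.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Transform an 8x8 block into the DCT frequency domain, zero out the
# highest-frequency coefficients, and reconstruct in the spatial domain.
rng = np.random.default_rng(0)
block = rng.random((8, 8))

coeffs = dctn(block, norm="ortho")           # spatial -> frequency domain

# Keep only the low-frequency triangle (a hand-picked selection mask);
# a learning-based method would instead learn this mask from data.
mask = np.add.outer(np.arange(8), np.arange(8)) < 8
pruned = coeffs * mask

recon = idctn(pruned, norm="ortho")          # frequency -> spatial domain
err = np.abs(recon - block).max()            # reconstruction error after pruning
```

For natural images, most energy concentrates in the low-frequency coefficients, so this kind of pruning loses little information.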
Proceedings ArticleDOI

LCANet: End-to-End Lipreading with Cascaded Attention-CTC

TL;DR: LCANet is an end-to-end deep-neural-network-based lipreading system that incorporates a cascaded attention-CTC decoder to generate output text, achieving notable performance improvements as well as faster convergence.
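The CTC half of such a decoder ultimately maps a per-frame label path to a text sequence by collapsing repeats and removing blanks. A minimal sketch of that collapse step (the standard CTC rule, not LCANet's full cascaded attention-CTC decoder) follows; label `0` is assumed to be the blank symbol:

```python
def ctc_greedy_collapse(path, blank=0):
    """Collapse a per-frame CTC label path: merge consecutive repeats,
    then drop blank symbols."""
    out = []
    prev = None
    for label in path:
        if label != prev and label != blank:
            out.append(label)
        prev = label
    return out

# Repeats separated by a blank survive as two distinct labels.
print(ctc_greedy_collapse([0, 1, 1, 0, 1, 2, 2, 0]))  # -> [1, 1, 2]
```

In a real lipreading pipeline the path would come from a greedy argmax (or beam search) over the network's per-frame output distributions.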
Journal ArticleDOI

A GPU-Outperforming FPGA Accelerator Architecture for Binary Convolutional Neural Networks

TL;DR: This article proposes an optimized, fully mapped FPGA accelerator architecture tailored for bitwise convolution and normalization that features massive spatial parallelism with deep pipeline stages and is on a par with a Titan X GPU in throughput and energy efficiency.
Proceedings ArticleDOI

CSVideoNet: A Real-Time End-to-End Learning Framework for High-Frame-Rate Video Compressive Sensing

TL;DR: A noniterative model, named CSVideoNet, directly learns the inverse mapping of compressive sensing and reconstructs the original input in a single forward propagation; it can exploit GPU acceleration to achieve a three-orders-of-magnitude speed-up over conventional iterative approaches.
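The "learned inverse mapping" idea can be reduced to a toy linear version. This sketch is not CSVideoNet (which uses a deep CNN/RNN reconstruction network); it only illustrates the noniterative principle: fit a decoder once, then reconstruct any new measurement vector in a single forward pass. All dimensions and the least-squares decoder are illustrative choices.

```python
import numpy as np

# Toy compressive-sensing setup: measurements y = Phi @ x with far
# fewer measurements (m) than signal samples (n).
rng = np.random.default_rng(1)
n, m = 64, 16
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

def sparse_signals(k, count):
    """Generate length-n signals with k nonzero entries each."""
    X = np.zeros((n, count))
    for i in range(count):
        idx = rng.choice(n, k, replace=False)
        X[idx, i] = rng.standard_normal(k)
    return X

# "Train" the simplest possible decoder: a single linear map W fitted
# by least squares, standing in for a learned neural reconstruction.
X_train = sparse_signals(4, 2000)
Y_train = Phi @ X_train
W = X_train @ np.linalg.pinv(Y_train)

# Reconstruction is one matrix multiply -- no iterative optimization.
x = sparse_signals(4, 1)[:, 0]
x_hat = W @ (Phi @ x)
```

Iterative CS solvers instead run many optimization steps per signal, which is the speed gap a single-forward-pass model exploits.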
Proceedings ArticleDOI

A 7.663-TOPS 8.2-W Energy-efficient FPGA Accelerator for Binary Convolutional Neural Networks (Abstract Only)

TL;DR: In this paper, an optimized accelerator architecture tailored for bitwise convolution and normalization is proposed, achieving a computing throughput of 7.663 TOPS at a power consumption of 8.2 W regardless of the input batch size.
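The bitwise convolution these binary-CNN accelerators exploit rests on a standard identity: for vectors with entries in {-1, +1}, encoding +1 as bit 1 lets a dot product be computed as an XNOR plus popcount, since a . b = n - 2 * popcount(a XOR b). A minimal software sketch of that identity (hardware would do this per clock cycle across many parallel lanes):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64
a = rng.choice([-1, 1], n)
b = rng.choice([-1, 1], n)

def pack(v):
    """Pack a {-1, +1} vector into an integer bitmask (+1 -> bit 1)."""
    out = 0
    for bit in (v > 0).astype(int):
        out = (out << 1) | int(bit)
    return out

# Matching bits contribute +1, differing bits -1, so the dot product is
# (#matches - #mismatches) = n - 2 * popcount(a XOR b).
xnor_dot = n - 2 * bin(pack(a) ^ pack(b)).count("1")
assert xnor_dot == int(a @ b)
```

Replacing multiply-accumulate with XOR/popcount is what makes the massive spatial parallelism and energy efficiency of these FPGA designs possible.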