Hao Zhang
Researcher at Cornell University
Publications - 41
Citations - 564
Hao Zhang is an academic researcher at Cornell University whose work spans computer science and deep learning. The author has an h-index of 9 and has co-authored 37 publications receiving 256 citations. Previous affiliations of Hao Zhang include Xidian University.
Papers
Book ChapterDOI
BIRNAT: Bidirectional Recurrent Neural Networks with Adversarial Training for Video Snapshot Compressive Imaging
TL;DR: This work considers video snapshot compressive imaging (SCI), where multiple high-speed frames are coded by different masks and then summed into a single measurement, and proposes a recurrent-network solution, the first time recurrent networks have been applied to the SCI problem.
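The SCI forward model described in the summary (frames coded by masks, then summed into one snapshot) can be sketched as follows; the shapes and random data are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Video SCI forward model: T high-speed frames are each coded by a
# binary mask, then summed into a single 2-D snapshot measurement.
rng = np.random.default_rng(0)
T, H, W = 8, 4, 4                       # frames, height, width (illustrative)
frames = rng.random((T, H, W))          # high-speed video frames x_t
masks = rng.integers(0, 2, (T, H, W))   # per-frame binary coding masks M_t

# Single measurement: y = sum over t of M_t * x_t (elementwise product)
measurement = (masks * frames).sum(axis=0)

print(measurement.shape)  # (4, 4)
```

Reconstruction then amounts to inverting this many-to-one mapping, which is what BIRNAT's bidirectional recurrent network is trained to do.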
Journal ArticleDOI
FusionNet: An Unsupervised Convolutional Variational Network for Hyperspectral and Multispectral Image Fusion
TL;DR: A novel variational probabilistic autoencoder framework implemented with convolutional neural networks, called FusionNet, is proposed to fuse the spatial and spectral information contained in the LR-HSI and HR-MSI; it outperforms state-of-the-art fusion methods.
Posted Content
WHAI: Weibull Hybrid Autoencoding Inference for Deep Topic Modeling
TL;DR: This paper proposes Weibull hybrid autoencoding inference (WHAI), which infers posterior samples via a hybrid of stochastic gradient MCMC and autoencoding variational Bayes.
Proceedings ArticleDOI
Friendly Topic Assistant for Transformer Based Abstractive Summarization
TL;DR: A topic assistant (TA) comprising three modules is proposed. TA is compatible with various Transformer-based models and user-friendly: it is a plug-and-play module that does not alter the structure of the original Transformer network, so users can easily fine-tune Transformer+TA starting from a well pre-trained model.
Proceedings ArticleDOI
MetaSCI: Scalable and Adaptive Reconstruction for Video Compressive Sensing
TL;DR: MetaSCI is composed of a shared backbone used across different masks, plus lightweight meta-modulation parameters that evolve into distinct modulation parameters for each mask, giving fast adaptation to new masks or systems and the ability to scale to large data.
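The shared-backbone-plus-per-mask-modulation idea can be sketched minimally as below; the layer structure, names, and shapes are illustrative assumptions in the spirit of the summary, not MetaSCI's actual architecture:

```python
import numpy as np

def backbone(x, w):
    """Shared feature extractor, reused for every mask: one dense layer + ReLU."""
    return np.maximum(x @ w, 0.0)

def modulate(features, scale, shift):
    """Lightweight per-mask modulation of the shared features."""
    return features * scale + shift

rng = np.random.default_rng(1)
w = rng.standard_normal((16, 16))   # shared backbone weights (fixed across masks)
x = rng.standard_normal((2, 16))    # inputs captured under one particular mask

# Only these small vectors would be adapted when a new mask/system arrives,
# leaving the heavy shared backbone untouched.
scale, shift = np.ones(16), np.zeros(16)
out = modulate(backbone(x, w), scale, shift)

print(out.shape)  # (2, 16)
```

Adapting only the small modulation parameters per mask is what makes fast adaptation and scaling plausible: the expensive shared weights are trained once.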