
Wenlin Chen

Researcher at Washington University in St. Louis

Publications: 26
Citations: 2720

Wenlin Chen is an academic researcher from Washington University in St. Louis. The author has contributed to research on topics including deep learning and support vector machines. The author has an h-index of 15 and has co-authored 26 publications receiving 2368 citations. Previous affiliations of Wenlin Chen include the University of Washington.

Papers
Posted Content

Compressing Neural Networks with the Hashing Trick

TL;DR: This work presents a novel network architecture, HashedNets, that exploits inherent redundancy in neural networks to achieve drastic reductions in model sizes, and demonstrates on several benchmark data sets that HashedNets shrink the storage requirements of neural networks substantially while mostly preserving generalization performance.
Proceedings Article

Compressing Neural Networks with the Hashing Trick

TL;DR: HashedNets, as discussed by the authors, uses a hash function to randomly group connection weights into hash buckets; all connections within the same hash bucket share a single parameter value, which is tuned to the weight-sharing architecture with standard backpropagation during training.
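
To make the weight-sharing idea above concrete, here is a minimal NumPy sketch of a single HashedNets-style layer. It is illustrative only: the bucket assignment uses a seeded random generator as a stand-in for the paper's hash function, and the name `hashed_linear` and the dimensions are assumptions, not the authors' code.

```python
import numpy as np

def hashed_linear(x, shared_params, in_dim, out_dim, seed=0):
    """Forward pass of one HashedNets-style layer (sketch).

    Instead of storing an in_dim x out_dim weight matrix, every virtual
    weight (i, j) is mapped to one of K shared parameters; all weights
    that collide in the same bucket use the same value.
    """
    rng = np.random.default_rng(seed)                     # stand-in for a fixed hash function
    k = shared_params.shape[0]
    bucket = rng.integers(0, k, size=(in_dim, out_dim))   # bucket index per virtual weight
    virtual_w = shared_params[bucket]                     # expand shared params to full matrix
    return x @ virtual_w

# Usage: a 64 -> 32 layer (2048 virtual weights) stored with only 128 real parameters.
params = np.random.default_rng(1).standard_normal(128) * 0.1
x = np.random.default_rng(2).standard_normal((4, 64))
print(hashed_linear(x, params, in_dim=64, out_dim=32).shape)  # (4, 32)
```

During training, the gradient of each shared parameter would accumulate contributions from every virtual weight hashed to its bucket, so standard backpropagation applies unchanged.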
Posted Content

Multi-Scale Convolutional Neural Networks for Time Series Classification

TL;DR: A novel end-to-end neural network model, Multi-Scale Convolutional Neural Networks (MCNN), incorporates feature extraction and classification in a single framework, leading to superior feature representations.
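
The summary only hints at the multi-scale aspect; the sketch below is an illustrative reading of it, not the authors' code. It builds several views of a time series (identity, down-sampled, smoothed) and runs each through a small convolutional branch before concatenation; the specific transformations, filter sizes, and function names are assumptions.

```python
import numpy as np

def multi_scale_views(series, down_factor=2, smooth_window=3):
    """Build multi-scale views of a 1-D time series (illustrative MCNN-style input)."""
    identity = series
    downsampled = series[::down_factor]
    kernel = np.ones(smooth_window) / smooth_window
    smoothed = np.convolve(series, kernel, mode="valid")   # moving-average smoothing
    return identity, downsampled, smoothed

def conv_branch(view, filters):
    """One branch: valid 1-D convolutions followed by global max pooling."""
    return np.array([np.convolve(view, f, mode="valid").max() for f in filters])

# Usage: three branches concatenated into one feature vector for a classifier.
rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 8 * np.pi, 128)) + 0.1 * rng.standard_normal(128)
filters = rng.standard_normal((4, 7))   # 4 filters of width 7 (random here, learned in practice)
features = np.concatenate([conv_branch(v, filters) for v in multi_scale_views(series)])
print(features.shape)  # (12,) -> would feed a fully connected classification stage
```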
Proceedings Article

Strategies for Training Large Vocabulary Neural Language Models

TL;DR: The authors presented a systematic comparison of neural strategies to represent and train large vocabularies, including softmax, hierarchical softmax, target sampling, noise contrastive estimation, and self-normalization.
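
As a rough illustration of one of the listed strategies, target sampling, the NumPy sketch below scores only the true target plus a small sampled set of negatives instead of the full vocabulary. It is a simplification under assumed names and a uniform sampling distribution; the paper's actual setups (sampling distributions, normalization details) may differ.

```python
import numpy as np

def sampled_softmax_loss(hidden, out_embed, target, num_samples, rng):
    """Approximate softmax loss over a large vocabulary (target-sampling flavor)."""
    vocab_size = out_embed.shape[0]
    negatives = rng.choice(vocab_size, size=num_samples, replace=False)
    negatives = negatives[negatives != target]            # keep the target out of the negatives
    candidates = np.concatenate(([target], negatives))    # true target first
    logits = out_embed[candidates] @ hidden               # score only the sampled candidates
    logits -= logits.max()                                # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum())
    return -log_probs[0]                                  # NLL of the true target

# Usage: toy 10k-word vocabulary, 256-dim hidden state, 64 sampled negatives.
rng = np.random.default_rng(0)
out_embed = rng.standard_normal((10_000, 256)) * 0.01
hidden = rng.standard_normal(256)
print(sampled_softmax_loss(hidden, out_embed, target=123, num_samples=64, rng=rng))
```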
Proceedings Article

Compressing Convolutional Neural Networks in the Frequency Domain

TL;DR: This paper presents a novel network architecture, Frequency-Sensitive Hashed Nets (FreshNets), which exploits inherent redundancy in both convolutional layers and fully-connected layers of a deep learning model, leading to dramatic savings in memory and storage consumption.
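
As a loose sketch of the frequency-domain sharing idea behind the summary above, the snippet below transforms a convolutional filter with a 2-D DCT, ties its frequency coefficients into a small number of shared buckets, and reconstructs the filter with an inverse DCT. The bucket assignment here is random and the shared values are simple per-bucket averages; in the paper the shared parameters are learned and the bucket allocation is frequency-sensitive, so treat this strictly as an assumption-laden illustration.

```python
import numpy as np
from scipy.fft import dctn, idctn

def freq_shared_filter(filt, num_buckets, rng):
    """Sketch of frequency-domain weight sharing for a conv filter."""
    coeffs = dctn(filt, norm="ortho")                          # filter in the frequency domain
    buckets = rng.integers(0, num_buckets, size=filt.shape)    # stand-in for a hash function
    shared = np.array([coeffs[buckets == b].mean() if np.any(buckets == b) else 0.0
                       for b in range(num_buckets)])           # one shared value per bucket
    approx = shared[buckets]                                   # expand shared values
    return idctn(approx, norm="ortho")                         # back to the spatial domain

# A 5x5 filter (25 weights) represented with only 6 shared frequency values.
rng = np.random.default_rng(0)
filt = rng.standard_normal((5, 5))
print(np.round(freq_shared_filter(filt, num_buckets=6, rng=rng), 2))
```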