Shawn Tan

Researcher at Université de Montréal

Publications: 24
Citations: 596

Shawn Tan is an academic researcher from Université de Montréal. His work spans topics including language models and recurrent neural networks. He has an h-index of 7 and has co-authored 20 publications receiving 491 citations. Previous affiliations include the National University of Singapore and the Agency for Science, Technology and Research.

Papers
Proceedings Article

Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks

TL;DR: The novel recurrent architecture, ordered neurons LSTM (ON-LSTM), achieves good performance on four different tasks: language modeling, unsupervised parsing, targeted syntactic evaluation, and logical inference.
Posted Content

Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks

TL;DR: This preprint proposes ordered neurons LSTM (ON-LSTM), which adds an inductive bias by ordering the neurons: a vector of master input and forget gates ensures that when a given neuron is updated, all the neurons that follow it in the ordering are also updated (a minimal sketch of the gating follows).
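The ordering mechanism rests on a cumulative softmax ("cumax") activation, which yields monotone master gates. Below is a minimal PyTorch-style sketch of those gates, assuming the cumax formulation described in the paper; the function names are illustrative, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def cumax(logits, dim=-1):
    # Cumulative softmax: monotonically increasing values in (0, 1)
    # that act as a soft "split point" over the neuron ordering.
    return torch.cumsum(F.softmax(logits, dim=dim), dim=dim)

def master_gates(forget_logits, input_logits):
    # Master forget gate rises along the ordering, so later (higher-level)
    # neurons tend to preserve their state over long spans.
    f_tilde = cumax(forget_logits)
    # Master input gate falls along the ordering, so updating a neuron
    # forces updates to all the neurons that follow it in the ordering.
    i_tilde = 1.0 - cumax(input_logits)
    # Their overlap marks the neurons where both erasing and writing occur.
    omega = f_tilde * i_tilde
    return f_tilde, i_tilde, omega
```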
Proceedings Article

Improving the interpretability of deep neural networks with stimulated learning

TL;DR: Constraints are applied so that the hidden units of each layer exhibit phone-dependent regional activations when arranged in a 2-dimensional grid, and it is demonstrated that such constraints yield visible activation regions without compromising the network's classification performance (a schematic sketch follows).
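As a rough illustration of what a phone-dependent regional constraint could look like, here is a hedged sketch that penalizes hidden activations for deviating from a Gaussian bump centered at an assumed per-phone grid location. The mask shape, sigmoid squashing, and loss form are assumptions for illustration, not the paper's exact formulation.

```python
import torch

def regional_activation_penalty(acts, phone_ids, centers, grid_hw, sigma=1.5):
    # acts:      (batch, H*W) hidden-layer activations, one unit per grid cell
    # phone_ids: (batch,) integer phone label per frame
    # centers:   (num_phones, 2) assumed target (row, col) per phone
    H, W = grid_hw
    rows = torch.arange(H, dtype=torch.float32).view(H, 1).expand(H, W)
    cols = torch.arange(W, dtype=torch.float32).view(1, W).expand(H, W)
    grid = torch.stack([rows, cols], dim=-1).view(-1, 2)        # (H*W, 2)
    c = centers[phone_ids].float()                              # (batch, 2)
    d2 = ((grid.unsqueeze(0) - c.unsqueeze(1)) ** 2).sum(-1)    # (batch, H*W)
    target = torch.exp(-d2 / (2.0 * sigma ** 2))                # Gaussian bump
    # Penalize deviation from the phone-dependent regional pattern.
    return ((torch.sigmoid(acts) - target) ** 2).mean()
```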
Proceedings Article

Improving Explorability in Variational Inference with Annealed Variational Objectives

TL;DR: This paper proposes Annealed Variational Objectives (AVO) to encourage exploration in the latent space by incorporating energy tempering into the optimization objective, and demonstrates the approach's robustness to deterministic warm-up.
Posted Content

Improving Explorability in Variational Inference with Annealed Variational Objectives

TL;DR: Inspired by Annealed Importance Sampling, this preprint introduces Annealed Variational Objectives (AVO) into the training of hierarchical variational methods, facilitating learning by incorporating energy tempering into the optimization objective (a minimal sketch of the tempered targets follows).
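The tempered targets behind AVO interpolate in log space between the initial variational distribution and the true joint. A minimal sketch under that reading, with illustrative names:

```python
import torch

def log_annealed_target(log_q0, log_joint, beta):
    # Tempered target interpolating between the initial variational
    # distribution q0 and the joint p(x, z):
    #   log f_beta(z) = (1 - beta) * log q0(z) + beta * log p(x, z)
    # beta = 0 recovers q0; beta = 1 recovers the (unnormalized) posterior.
    return (1.0 - beta) * log_q0 + beta * log_joint

# Each stochastic layer of the hierarchical sampler is trained against
# its own intermediate target, with beta rising to 1 at the final layer.
betas = torch.linspace(0.2, 1.0, steps=5)
```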