Quoc Phong Nguyen

Researcher at National University of Singapore

Publications: 26
Citations: 285

Quoc Phong Nguyen is an academic researcher from the National University of Singapore. The author has contributed to research in the topics of Bayesian optimization and Computer science. The author has an h-index of 5 and has co-authored 20 publications receiving 137 citations.

Papers
Proceedings ArticleDOI

GEE: A Gradient-based Explainable Variational Autoencoder for Network Anomaly Detection

TL;DR: GEE comprises two components: a Variational Autoencoder (VAE), an unsupervised deep-learning technique for detecting anomalies, and a gradient-based fingerprinting technique for explaining anomalies.
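
A minimal, illustrative sketch of the general approach described above: a VAE trained on (assumed pre-normalized) traffic feature vectors, an anomaly score based on reconstruction error, and a "fingerprint" taken as the gradient of that score with respect to the input features. Layer sizes and the input dimension below are placeholder assumptions, not the paper's exact architecture.

```python
# Hedged sketch of a VAE-based anomaly detector with gradient fingerprints.
# Architecture, dimensions, and training loop are illustrative assumptions.
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, in_dim=53, latent_dim=10):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.mu = nn.Linear(64, latent_dim)
        self.logvar = nn.Linear(64, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, in_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), mu, logvar

def neg_elbo(x, recon, mu, logvar):
    # Reconstruction error plus KL divergence to the standard-normal prior.
    recon_err = ((x - recon) ** 2).sum(dim=1)
    kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(dim=1)
    return (recon_err + kl).mean()

model = VAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
normal_traffic = torch.randn(256, 53)          # stand-in for normal training flows
for _ in range(100):                           # train on normal data only
    recon, mu, logvar = model(normal_traffic)
    loss = neg_elbo(normal_traffic, recon, mu, logvar)
    opt.zero_grad(); loss.backward(); opt.step()

# Score a new flow: a high reconstruction error flags it as anomalous.
x = torch.randn(1, 53, requires_grad=True)
recon, mu, logvar = model(x)
score = ((x - recon) ** 2).sum()
score.backward()
fingerprint = x.grad.squeeze()   # per-feature gradients: which features drive the anomaly score
```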
Proceedings Article

Inverse reinforcement learning with locally consistent reward functions

TL;DR: This paper presents a novel generalization of the IRL problem that allows each trajectory to be generated by multiple locally consistent reward functions, thereby catering to more realistic and complex expert behaviors.
Posted Content

Variational Bayesian Unlearning.

TL;DR: This paper studies the problem of approximately unlearning a Bayesian model from a small subset of the training data to be erased, using the variational inference (VI) framework, and proposes two novel tricks to tackle this challenge.
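
A tiny numerical illustration (not the paper's algorithm) of the Bayes-rule identity that approximate unlearning builds on: the posterior on the remaining data is proportional to the full-data posterior divided by the likelihood of the erased subset. The 1-D grid and Bernoulli likelihood below are purely for exposition; in the paper the full-data posterior would itself be an approximation produced by VI.

```python
# Hedged sketch: "unlearn" an erased subset by dividing its likelihood out of the
# full-data posterior and renormalizing, then compare against retraining from scratch.
import numpy as np

theta = np.linspace(1e-3, 1 - 1e-3, 1000)       # grid over a coin's bias
prior = np.ones_like(theta)                      # uniform prior

full_data = np.array([1, 1, 0, 1, 0, 1, 1, 1])   # all observations
erased = np.array([1, 1, 1])                     # subset to be forgotten
remaining = np.array([0, 1, 0, 1, 1])            # full_data minus erased

def likelihood(data, theta):
    heads = data.sum()
    tails = len(data) - heads
    return theta ** heads * (1 - theta) ** tails

def normalize(p):
    return p / p.sum()

posterior_full = normalize(prior * likelihood(full_data, theta))

# Unlearning step: divide out the erased subset's likelihood and renormalize.
posterior_unlearned = normalize(posterior_full / likelihood(erased, theta))

# Sanity check: retraining on the remaining data yields the same posterior.
posterior_retrained = normalize(prior * likelihood(remaining, theta))
assert np.allclose(posterior_unlearned, posterior_retrained, atol=1e-8)
```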
Posted Content

GEE: A Gradient-based Explainable Variational Autoencoder for Network Anomaly Detection

TL;DR: GEE, as discussed by the authors, is a framework for detecting and explaining anomalies in network traffic; it comprises two components: (i) a Variational Autoencoder (VAE), an unsupervised deep-learning technique for detecting anomalies, and (ii) a gradient-based fingerprinting technique for explaining anomalies.
Proceedings Article

Efficient Exploration of Reward Functions in Inverse Reinforcement Learning via Bayesian Optimization

TL;DR: An IRL framework called Bayesian Optimization-IRL (BO-IRL) is presented, which identifies multiple solutions consistent with the expert demonstrations by efficiently exploring the reward function space using Bayesian optimization and a newly proposed kernel that projects the parameters of policy-invariant reward functions to a single point in a latent space.
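
A generic Bayesian-optimization loop over reward-function parameters, sketched with a standard RBF kernel and a UCB acquisition. The paper's contribution is a specialized kernel that maps policy-invariant reward parameters to the same latent point; that kernel, and the IRL objective itself, are only stubbed out here. `irl_objective` is a hypothetical placeholder for how well a reward parameter explains the demonstrations.

```python
# Hedged sketch of a BO loop for exploring reward parameters; kernel, objective,
# and search bounds are illustrative assumptions, not the paper's method.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def irl_objective(w):
    # Placeholder for, e.g., the likelihood of the expert demonstrations under the
    # policy induced by reward parameters w; here just a toy multimodal function.
    return float(np.sin(3 * w[0]) + 0.5 * np.cos(5 * w[1]))

# Initial random evaluations of reward parameters in [0, 1]^2.
W = rng.uniform(0, 1, size=(5, 2))
y = np.array([irl_objective(w) for w in W])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), normalize_y=True)

for _ in range(20):
    gp.fit(W, y)
    candidates = rng.uniform(0, 1, size=(500, 2))
    mu, std = gp.predict(candidates, return_std=True)
    ucb = mu + 2.0 * std                       # upper-confidence-bound acquisition
    w_next = candidates[np.argmax(ucb)]
    W = np.vstack([W, w_next])
    y = np.append(y, irl_objective(w_next))

# BO surfaces multiple high-scoring regions rather than a single optimum,
# which is how several consistent reward functions can be recovered.
near_optimal = W[y >= y.max() - 0.05]
```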