
Anshul Shah

Researcher at Johns Hopkins University

Publications: 12
Citations: 160

Anshul Shah is an academic researcher from Johns Hopkins University. The author has contributed to research in the topics of Computer science & Autoencoder, has an h-index of 4, and has co-authored 7 publications receiving 88 citations. Previous affiliations of Anshul Shah include the Indian Institute of Technology Madras and the University of Maryland, College Park.

Papers
Proceedings Article · DOI

Bringing Alive Blurred Moments

TL;DR: This work first learns a motion representation from sharp videos in an unsupervised manner by training a convolutional recurrent video autoencoder on the surrogate task of video reconstruction; the resulting method outperforms competing approaches across all factors: accuracy, speed, and compactness.
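For readers unfamiliar with this architecture class, below is a minimal sketch of a convolutional recurrent video autoencoder trained on frame reconstruction as a surrogate task. All layer sizes, module names, and the simplified recurrence are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch: per-frame conv encoder, a simple convolutional
# recurrence over time, and a per-frame decoder reconstructing the clip.
import torch
import torch.nn as nn

class ConvRecurrentVideoAE(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        # Per-frame convolutional encoder: image -> feature map
        self.encoder = nn.Sequential(
            nn.Conv2d(3, hidden, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Simplified convolutional recurrence over [features, state]
        self.recur = nn.Conv2d(2 * hidden, hidden, 3, padding=1)
        # Per-frame decoder: feature map -> reconstructed image
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(hidden, hidden, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(hidden, 3, 4, stride=2, padding=1),
        )

    def forward(self, video):  # video: (B, T, 3, H, W)
        state, recons = None, []
        for i in range(video.shape[1]):
            feat = self.encoder(video[:, i])
            if state is None:
                state = torch.zeros_like(feat)
            state = torch.tanh(self.recur(torch.cat([feat, state], dim=1)))
            recons.append(self.decoder(state))
        return torch.stack(recons, dim=1)

model = ConvRecurrentVideoAE()
clip = torch.randn(2, 5, 3, 64, 64)               # batch of short sharp clips
loss = nn.functional.mse_loss(model(clip), clip)  # reconstruction objective
```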
Proceedings Article

Attention Driven Vehicle Re-identification and Unsupervised Anomaly Detection for Traffic Understanding

TL;DR: An attention-based model learns to focus on different parts of a vehicle by conditioning the feature maps on visible key-points, and a triplet embedding reduces the dimensionality of the features obtained from an ensemble of networks trained on different datasets.
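As a rough illustration of the triplet-embedding step (not the authors' code), the sketch below projects high-dimensional ensemble features into a compact space and trains the projection with a standard triplet loss; the dimensions and variable names are assumptions.

```python
# Illustrative triplet embedding: a linear projection is the learned
# dimensionality reduction; anchor/positive share a vehicle identity.
import torch
import torch.nn as nn

ensemble_dim, embed_dim = 4096, 256          # assumed sizes
project = nn.Linear(ensemble_dim, embed_dim)  # learned reduction
triplet = nn.TripletMarginLoss(margin=0.3)

anchor   = project(torch.randn(8, ensemble_dim))  # same vehicle as positive
positive = project(torch.randn(8, ensemble_dim))
negative = project(torch.randn(8, ensemble_dim))  # different vehicle
loss = triplet(anchor, positive, negative)
loss.backward()  # pulls same-identity embeddings together, pushes others apart
```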
Posted Content

Bringing Alive Blurred Moments

TL;DR: In this article, a convolutional recurrent video autoencoder network is proposed to extract a video from a single motion-blurred image, sequentially reconstructing the clear views of the scene as seen by the camera during the exposure.
Proceedings Article · DOI

Learning Based Single Image Blur Detection and Segmentation

TL;DR: This paper addresses the problem of obtaining a blur-based segmentation map from a single image affected by motion or defocus blur; deep neural networks are used to learn blur-related features, enabling pixel-level blur classification.
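A minimal sketch of pixel-level blur classification follows, assuming a small fully convolutional network and a three-way label set (sharp / motion / defocus); the architecture and class set are illustrative, not taken from the paper.

```python
# Hypothetical per-pixel blur classifier: a tiny fully convolutional net
# outputs a class logit map at input resolution.
import torch
import torch.nn as nn

classes = 3  # assumed labels: sharp, motion blur, defocus blur
net = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, classes, 1),       # per-pixel class logits
)

image  = torch.randn(1, 3, 128, 128)
target = torch.randint(0, classes, (1, 128, 128))    # per-pixel labels
logits = net(image)                                  # (1, classes, 128, 128)
loss = nn.functional.cross_entropy(logits, target)   # pixel-wise classification
```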
Posted Content

Pose And Joint-Aware Action Recognition

TL;DR: A new model for joint-based action recognition is presented: it first extracts motion features from each joint separately through a shared motion encoder before performing collective reasoning, and it outperforms existing baselines on Mimetics, a dataset of out-of-context actions.
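The per-joint design can be sketched as follows, assuming 2D joint trajectories, a shared GRU as the motion encoder, and a transformer layer standing in for the collective-reasoning step; all module choices, sizes, and the class count are assumptions for illustration.

```python
# Hypothetical sketch: encode each joint's trajectory with one shared
# encoder, then reason jointly across the per-joint features.
import torch
import torch.nn as nn

n_joints, t_steps, feat = 17, 32, 64

shared_encoder = nn.GRU(input_size=2, hidden_size=feat, batch_first=True)
reasoner = nn.TransformerEncoderLayer(d_model=feat, nhead=4, batch_first=True)
classifier = nn.Linear(feat, 60)  # assumed number of action classes

# pose: (batch, joints, time, xy); each joint is encoded independently
pose = torch.randn(4, n_joints, t_steps, 2)
per_joint = pose.reshape(4 * n_joints, t_steps, 2)
_, h = shared_encoder(per_joint)                  # final hidden state per joint
joint_feats = h[-1].reshape(4, n_joints, feat)    # (batch, joints, feat)
logits = classifier(reasoner(joint_feats).mean(dim=1))  # reason across joints
```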