Shanmuganathan Raman
Researcher at Indian Institute of Technology Gandhinagar
Publications - 148
Citations - 1315
Shanmuganathan Raman is an academic researcher at the Indian Institute of Technology Gandhinagar. His research centres on convolutional neural networks and computer science. He has an h-index of 15 and has co-authored 135 publications receiving 952 citations. Previous affiliations of Shanmuganathan Raman include the Indian Institutes of Technology and the Indian Institute of Technology Bombay.
Papers
Proceedings ArticleDOI
Bilateral Filter Based Compositing for Variable Exposure Photography
TL;DR: This article addresses the High Dynamic Range Imaging (HDRI) problem and proposes a computationally efficient scene-compositing method using edge-preserving filters such as the bilateral filter.
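To illustrate the edge-preserving filtering the summary above refers to, here is a minimal 1D bilateral filter sketch (the paper operates on 2D exposure images; the function name and parameters here are illustrative, not from the paper). Each sample is replaced by a weighted average of its neighbours, where the weight falls off with both spatial distance and intensity difference, so smooth regions are averaged while strong edges are left intact.

```python
import math

def bilateral_filter_1d(signal, sigma_s=2.0, sigma_r=0.2, radius=3):
    """Edge-preserving smoothing of a 1D signal.

    sigma_s controls the spatial Gaussian (how far neighbours reach);
    sigma_r controls the range Gaussian (how dissimilar in intensity a
    neighbour may be and still contribute).
    """
    n = len(signal)
    out = []
    for i in range(n):
        num, den = 0.0, 0.0
        for j in range(max(0, i - radius), min(n, i + radius + 1)):
            # Spatial weight: nearby samples count more.
            w_s = math.exp(-((i - j) ** 2) / (2.0 * sigma_s ** 2))
            # Range weight: samples across an edge count almost nothing.
            w_r = math.exp(-((signal[i] - signal[j]) ** 2) / (2.0 * sigma_r ** 2))
            w = w_s * w_r
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out
```

On a step edge, the range weight suppresses contributions from across the discontinuity, which is why compositing with such a filter avoids halo artifacts that a plain Gaussian blur would introduce.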
Proceedings ArticleDOI
Deep Generative Filter for Motion Deblurring
TL;DR: This paper proposes a novel deep filter based on a Generative Adversarial Network architecture, integrated with a global skip connection and dense connections, which outperforms state-of-the-art blind deblurring algorithms both quantitatively and qualitatively.
Journal ArticleDOI
Reconstruction of high contrast images for dynamic scenes
TL;DR: A novel bottom-up segmentation algorithm based on superpixel grouping is developed, which detects scene changes and directly generates a ghost-free LDR image of the dynamic scene.
Proceedings ArticleDOI
Facial Expression Recognition Using Visual Saliency and Deep Learning
TL;DR: An existing convolutional neural network, pre-trained on the ILSVRC2012 visual recognition dataset, is fine-tuned on two widely used facial expression datasets, CFEE and RaFD, which, when trained and tested independently, yielded test accuracies of 74.79% and 95.71%, respectively.
Proceedings ArticleDOI
LP-3DCNN: Unveiling Local Phase in 3D Convolutional Neural Networks
TL;DR: The ReLPV block provides significant parameter savings and improves the state of the art on the UCF-101 split-1 action recognition dataset by 5.68% (when trained from scratch) while using only 15% of the parameters of the previous state of the art.
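The "local phase" in the summary above can be sketched in 1D (the ReLPV block computes it per voxel over 3D neighbourhoods via a short-term Fourier transform; the helper below is a hypothetical simplification, not the paper's implementation): for each sample, take a small window around it and keep the phase of one low-frequency Fourier coefficient of that window.

```python
import cmath
import math

def local_phase(signal, radius=2, freq=1):
    """Per-sample local phase of a 1D signal.

    For each position, a window of length 2*radius + 1 is extracted
    (with circular wraparound) and the phase of the DFT coefficient at
    index `freq` is recorded. Phase is stable under illumination/contrast
    scaling, which is one motivation for phase-based features.
    """
    n = len(signal)
    m = 2 * radius + 1
    phases = []
    for i in range(n):
        window = [signal[(i + k) % n] for k in range(-radius, radius + 1)]
        # One bin of the window's DFT; its argument is the local phase.
        coeff = sum(window[t] * cmath.exp(-2j * math.pi * freq * t / m)
                    for t in range(m))
        phases.append(cmath.phase(coeff))
    return phases
```

By the DFT shift theorem, sliding the window by one sample over a periodic signal rotates the coefficient by a fixed angle, so the local phase advances linearly along a pure sinusoid.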