Subhransu Maji

Researcher at University of Massachusetts Amherst

Publications -  156
Citations -  22076

Subhransu Maji is an academic researcher at the University of Massachusetts Amherst. He has contributed to research on topics including convolutional neural networks and point clouds, has an h-index of 47, and has co-authored 156 publications receiving 16605 citations. His previous affiliations include the Indian Institute of Technology Kanpur and the Toyota Technological Institute.

Papers
Proceedings ArticleDOI

Multi-view Convolutional Neural Networks for 3D Shape Recognition

TL;DR: This article proposes a CNN architecture that combines information from multiple views of a 3D shape into a single, compact shape descriptor, and shows that the same approach can also accurately recognize human hand-drawn sketches of shapes.
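The view-pooling idea behind this approach can be sketched as follows. This is a minimal illustration assuming a PyTorch implementation; the tiny backbone, number of views, descriptor size, and class count are arbitrary choices for the sketch, not the paper's configuration.

```python
import torch
import torch.nn as nn


class MultiViewCNN(nn.Module):
    """Sketch of view pooling: a shared CNN encodes each rendered view,
    and an element-wise max over views yields one compact shape descriptor."""

    def __init__(self, feature_dim=512, num_classes=40):
        super().__init__()
        # Shared per-view encoder (a small stand-in for the paper's backbone).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feature_dim), nn.ReLU(),
        )
        self.classifier = nn.Linear(feature_dim, num_classes)

    def forward(self, views):                        # views: (batch, n_views, 3, H, W)
        b, v = views.shape[:2]
        feats = self.encoder(views.flatten(0, 1))    # encode all views: (b * v, feature_dim)
        feats = feats.view(b, v, -1)
        descriptor, _ = feats.max(dim=1)             # pool across views into one descriptor
        return self.classifier(descriptor)


# Example: a batch of 2 shapes, each rendered from 12 hypothetical viewpoints.
logits = MultiViewCNN()(torch.randn(2, 12, 3, 64, 64))
print(logits.shape)  # torch.Size([2, 40])
```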
Proceedings ArticleDOI

Semantic contours from inverse detectors

TL;DR: A simple yet effective method is presented for combining generic object detectors with bottom-up contours to identify object contours, along with a principled way of combining information from different part detectors and across categories.
Proceedings ArticleDOI

Describing Textures in the Wild

TL;DR: This work identifies a vocabulary of forty-seven texture terms, uses it to describe a large dataset of patterns collected "in the wild", and shows that the proposed representations outperform specialized texture descriptors not only on this problem but also on established material recognition datasets.
Posted Content

Multi-view Convolutional Neural Networks for 3D Shape Recognition

TL;DR: This work presents a standard CNN architecture trained to recognize the shapes' rendered views independently of each other, and shows that a 3D shape can be recognized even from a single view at an accuracy far higher than that of state-of-the-art 3D shape descriptors.
Proceedings ArticleDOI

Bilinear CNN Models for Fine-Grained Visual Recognition

TL;DR: Bilinear models are proposed: a recognition architecture consisting of two feature extractors whose outputs are multiplied using the outer product at each location of the image and pooled to obtain an image descriptor.
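The pooling operation described in this summary can be sketched as follows. This is a minimal illustration assuming PyTorch; the channel counts, averaging over locations, and the signed square-root/L2 normalization step are illustrative choices rather than the paper's exact configuration.

```python
import torch
import torch.nn.functional as F


def bilinear_pool(feat_a, feat_b):
    """Outer product of two feature maps at every spatial location,
    averaged over locations, then flattened into one image descriptor.

    feat_a: (batch, C1, H, W), feat_b: (batch, C2, H, W) -> (batch, C1 * C2)
    """
    b, c1, h, w = feat_a.shape
    c2 = feat_b.shape[1]
    a = feat_a.reshape(b, c1, h * w)                       # (b, C1, HW)
    bb = feat_b.reshape(b, c2, h * w)                      # (b, C2, HW)
    outer = torch.bmm(a, bb.transpose(1, 2)) / (h * w)     # pooled outer products: (b, C1, C2)
    x = outer.reshape(b, c1 * c2)
    # Signed square root and L2 normalization, commonly applied to bilinear features.
    x = torch.sign(x) * torch.sqrt(torch.abs(x) + 1e-10)
    return F.normalize(x, dim=1)


# Example with two hypothetical feature extractors giving 64- and 32-channel maps.
fa, fb = torch.randn(2, 64, 7, 7), torch.randn(2, 32, 7, 7)
print(bilinear_pool(fa, fb).shape)  # torch.Size([2, 2048])
```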