Suyog Dutt Jain
Researcher at University of Texas at Austin
Publications - 28
Citations - 1445
Suyog Dutt Jain is an academic researcher from the University of Texas at Austin. The author has contributed to research in the topics of Segmentation & Image segmentation. The author has an h-index of 14 and has co-authored 25 publications receiving 1,284 citations. Previous affiliations of Suyog Dutt Jain include Xiamen University.
Papers
Proceedings ArticleDOI
FusionSeg: Learning to Combine Motion and Appearance for Fully Automatic Segmentation of Generic Objects in Videos
TL;DR: In this paper, a two-stream fully convolutional neural network is proposed to fuse motion and appearance information in a unified framework for segmenting generic objects in videos, improving the state of the art on unseen objects.
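The TL;DR above describes fusing per-pixel predictions from an appearance stream and a motion stream. A minimal sketch of one simple fusion scheme, element-wise max over the two streams' score maps, is shown below with NumPy; the array values, threshold, and function name are illustrative assumptions, not the paper's exact fusion layer:

```python
import numpy as np

def fuse_streams(appearance_scores, motion_scores):
    """Late-fuse per-pixel object scores from two streams.

    Element-wise max fusion: a pixel is scored as foreground if
    either the appearance or the motion stream is confident.
    (Illustrative choice only; learned fusion is also possible.)
    """
    return np.maximum(appearance_scores, motion_scores)

# Toy 2x2 score maps standing in for per-pixel network outputs.
appearance = np.array([[0.9, 0.2],
                       [0.1, 0.8]])
motion = np.array([[0.3, 0.7],
                   [0.6, 0.2]])

fused = fuse_streams(appearance, motion)
mask = fused > 0.5  # threshold the fused scores into a binary object mask
```

With these toy inputs, every pixel that either stream scores highly ends up in the mask, which is the intuition behind combining complementary motion and appearance cues.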
Book ChapterDOI
Supervoxel-Consistent Foreground Propagation in Video
Suyog Dutt Jain, Kristen Grauman +1 more
TL;DR: This work proposes a higher order supervoxel label consistency potential for semi-supervised foreground segmentation, leveraging bottom-up supervoxels to guide its estimates towards long-range coherent regions.
Posted Content
FusionSeg: Learning to combine motion and appearance for fully automatic segmentation of generic objects in videos
TL;DR: This work designs a two-stream fully convolutional neural network which fuses together motion and appearance in a unified framework for segmenting generic objects in videos and shows how to bootstrap weakly annotated videos together with existing image recognition datasets for training.
Proceedings ArticleDOI
Facial expression recognition with temporal modeling of shapes
TL;DR: This work proposes a framework for automatic facial expression recognition from continuous video sequences by modeling temporal variations within shapes using Latent-Dynamic Conditional Random Fields, and shows that the proposed approach outperforms CRFs for recognizing facial expressions.
Proceedings ArticleDOI
Active Image Segmentation Propagation
Suyog Dutt Jain, Kristen Grauman +1 more
TL;DR: An active selection procedure is introduced that operates on the joint segmentation graph over all images in order to identify images that, once annotated, will propagate well to other examples, and it focuses human attention more effectively than existing propagation strategies.
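The TL;DR above describes selecting which image to annotate based on how well its label would propagate across a joint graph over all images. A greedy sketch of that idea is below, scoring each unannotated image by the total similarity it carries to other still-unannotated images; the similarity matrix, scoring rule, and function name are simplified assumptions, not the paper's actual selection criterion:

```python
import numpy as np

def select_image_to_annotate(similarity, annotated):
    """Greedy active selection on a joint similarity graph.

    Scores each unannotated image i by the summed edge weight to
    other unannotated images, a rough proxy for how far a human
    annotation of i would propagate. Returns the best index.
    """
    n = similarity.shape[0]
    best, best_score = None, -1.0
    for i in range(n):
        if i in annotated:
            continue
        score = sum(similarity[i, j] for j in range(n)
                    if j != i and j not in annotated)
        if score > best_score:
            best, best_score = i, score
    return best

# Toy 3-image graph: images 0 and 1 are strongly connected,
# image 2 is an outlier with weak edges.
sim = np.array([[0.0, 0.9, 0.1],
                [0.9, 0.0, 0.2],
                [0.1, 0.2, 0.0]])

choice = select_image_to_annotate(sim, annotated=set())
```

Here image 1 wins (total unannotated-neighbor similarity 1.1 vs. 1.0 and 0.3), matching the intuition that annotating a well-connected image spreads supervision furthest.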