Longlong Jing
Researcher at City University of New York
Publications - 33
Citations - 2049
Longlong Jing is an academic researcher at the City University of New York. The author has contributed to research on topics including computer science and convolutional neural networks, has an h-index of 9, and has co-authored 26 publications receiving 893 citations. Previous affiliations of Longlong Jing include The Graduate Center, CUNY.
Papers
Journal ArticleDOI
Self-Supervised Visual Feature Learning With Deep Neural Networks: A Survey
Longlong Jing, Yingli Tian +1 more
TL;DR: Provides an extensive review of deep learning-based self-supervised visual feature learning methods for images and videos, a subset of unsupervised learning that learns general image and video features from large-scale unlabeled data without using any human-annotated labels.
Posted Content
Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey
Longlong Jing, Yingli Tian +1 more
TL;DR: Presents self-supervised learning as a subset of unsupervised image and video feature learning that aims to learn general features from large-scale unlabeled data without using any human-annotated labels.
Posted Content
Self-Supervised Spatiotemporal Feature Learning via Video Rotation Prediction
TL;DR: With the self-supervised 3DRotNet pre-trained on large datasets, recognition accuracy is improved by 20.4% on UCF101 and 16.7% on HMDB51, compared to models trained from scratch.
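The rotation-prediction pretext task summarized above can be sketched in a few lines: rotate a video clip by a multiple of 90 degrees and have the model classify which rotation was applied, so the training label comes from the transformation itself rather than from human annotation. A minimal NumPy sketch follows; the function names and the synthetic clip are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Illustrative sketch of a rotation-prediction pretext task (hypothetical
# names; not the 3DRotNet code). Each clip is rotated by k * 90 degrees,
# and k itself serves as the free supervisory label.

ROTATIONS = [0, 1, 2, 3]  # multiples of 90 degrees


def rotate_clip(clip, k):
    """Rotate every frame of a (T, H, W, C) clip by k * 90 degrees."""
    return np.rot90(clip, k=k, axes=(1, 2))


def make_pretext_pair(clip, rng):
    """Return (rotated_clip, rotation_label) for self-supervised training."""
    k = int(rng.choice(ROTATIONS))
    return rotate_clip(clip, k), k


rng = np.random.default_rng(0)
clip = rng.random((16, 64, 64, 3))  # synthetic 16-frame square clip
rotated, label = make_pretext_pair(clip, rng)

# Applying the inverse rotation recovers the original clip exactly,
# confirming the label fully describes the transformation.
restored = rotate_clip(rotated, (4 - label) % 4)
assert np.array_equal(restored, clip)
```

In practice a 3D ConvNet would be trained to predict `label` from `rotated`, and its learned features reused for downstream action recognition.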
Posted Content
Self-supervised Spatiotemporal Feature Learning by Video Geometric Transformations
Longlong Jing, Yingli Tian +1 more
TL;DR: Proposes a novel 3DConvNet-based fully self-supervised framework that learns spatiotemporal video features without using any human-labeled annotations; it outperforms state-of-the-art fully self-supervised methods on the UCF101 and HMDB51 datasets, achieving 62.9% and 33.7% accuracy respectively.
Journal ArticleDOI
Coarse-to-Fine Semantic Segmentation From Image-Level Labels
TL;DR: This paper proposes a novel recursive coarse-to-fine semantic segmentation framework based only on image-level category labels; it can be easily extended to the foreground object segmentation task and achieves performance comparable with state-of-the-art supervised methods on the Internet object dataset.