Longlong Jing

Researcher at City University of New York

Publications - 33
Citations - 2049

Longlong Jing is an academic researcher from the City University of New York. He has contributed to research in topics including computer science and convolutional neural networks. He has an h-index of 9 and has co-authored 26 publications receiving 893 citations. Previous affiliations of Longlong Jing include The Graduate Center, CUNY.

Papers
Posted Content

Cross-modal Center Loss

TL;DR: This paper proposes an approach to jointly train the components of a cross-modal retrieval framework with metadata, enabling the network to find optimal features, together with a cross-modal center loss that minimizes the distances between features of objects belonging to the same class across all modalities.
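
To make the idea concrete, here is a minimal PyTorch sketch of a cross-modal center loss; the shared class-center table, the feature dimension, and the way modalities are batched together are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class CrossModalCenterLoss(nn.Module):
    """Minimal sketch: pull features of the same class toward a
    shared, learnable class center, regardless of modality."""

    def __init__(self, num_classes: int, feat_dim: int):
        super().__init__()
        # One shared center per class, used by every modality (assumed design).
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # features: (batch, feat_dim), concatenated across modalities;
        # labels:   (batch,) class index for each feature row.
        centers = self.centers[labels]                      # (batch, feat_dim)
        return ((features - centers) ** 2).sum(dim=1).mean()

# Usage sketch: image_feats and cloud_feats would come from
# modality-specific encoders (assumed); both share the same labels.
loss_fn = CrossModalCenterLoss(num_classes=40, feat_dim=512)
image_feats = torch.randn(8, 512)
cloud_feats = torch.randn(8, 512)
labels = torch.randint(0, 40, (8,))
loss = loss_fn(torch.cat([image_feats, cloud_feats]), labels.repeat(2))
```
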
Proceedings Article

Depth Estimation Matters Most: Improving Per-Object Depth Estimation for Monocular 3D Detection and Tracking

TL;DR: This work proposes a multi-level fusion method that combines different representations (RGB and pseudo-LiDAR) and temporal information across multiple frames of tracked objects (tracklets) to enhance per-object depth estimation, and demonstrates that simply replacing estimated depth with fusion-enhanced depth yields significant improvements in monocular 3D perception tasks, including detection and tracking.
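
A toy sketch of the fusion idea, assuming attention-weighted per-object depth cues (an RGB feature, a pseudo-LiDAR feature, and features from earlier tracklet frames); the layer choices and shapes below are illustrative only, not the paper's architecture.

```python
import torch
import torch.nn as nn

class DepthFusion(nn.Module):
    """Sketch: fuse per-object depth cues from multiple representations
    and frames via learned attention weights (assumed mechanism)."""

    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)   # one scalar weight per cue
        self.head = nn.Linear(feat_dim, 1)    # depth regressor

    def forward(self, cues: torch.Tensor) -> torch.Tensor:
        # cues: (batch, num_cues, feat_dim), e.g. [RGB, pseudo-LiDAR,
        # previous-frame] features for each tracked object.
        weights = torch.softmax(self.score(cues), dim=1)   # (B, N, 1)
        fused = (weights * cues).sum(dim=1)                # (B, feat_dim)
        return self.head(fused).squeeze(-1)                # per-object depth

# Usage sketch: three cues per object.
model = DepthFusion()
depth = model(torch.randn(4, 3, 128))   # (4,) fused depth estimates
```
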
Posted Content

LGAN: Lung Segmentation in CT Scans Using Generative Adversarial Network

TL;DR: The authors propose a novel deep-learning lung segmentation scheme based on a Generative Adversarial Network (GAN), which they denote LGAN, and evaluate it on a dataset containing 220 individual CT scans using two metrics: segmentation quality and shape similarity.
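
For orientation, here is a bare-bones sketch of GAN-based segmentation training; the tiny networks, the loss mix, and the (image, mask) discriminator input are placeholders for illustration, not the LGAN architecture.

```python
import torch
import torch.nn as nn

# Toy generator: predicts a lung mask from a CT slice (placeholder net).
generator = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
)
# Toy discriminator: judges (slice, mask) pairs for realism.
discriminator = nn.Sequential(
    nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(16 * 32 * 32, 1),
)

ct_slice = torch.randn(2, 1, 64, 64)    # stand-in CT batch
true_mask = torch.rand(2, 1, 64, 64)    # stand-in ground-truth masks
pred_mask = generator(ct_slice)
bce = nn.BCEWithLogitsLoss()

# Discriminator step: real pairs vs. detached fake pairs.
real_logit = discriminator(torch.cat([ct_slice, true_mask], dim=1))
fake_logit = discriminator(torch.cat([ct_slice, pred_mask.detach()], dim=1))
d_loss = bce(real_logit, torch.ones_like(real_logit)) + \
         bce(fake_logit, torch.zeros_like(fake_logit))

# Generator step: fool the discriminator plus a pixel-wise term.
gen_logit = discriminator(torch.cat([ct_slice, pred_mask], dim=1))
g_loss = bce(gen_logit, torch.ones_like(gen_logit)) + \
         nn.functional.binary_cross_entropy(pred_mask, true_mask)
```
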
Proceedings Article

Self-supervised Feature Learning by Cross-modality and Cross-view Correspondences

TL;DR: The authors propose a self-supervised learning approach that jointly learns 2D image features and 3D point-cloud features by exploiting cross-modality and cross-view correspondences, without using any human-annotated labels.
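
One plausible way to train on such correspondences is an InfoNCE-style contrastive objective, sketched below; treating matched 2D/3D feature pairs as positives and the temperature value are assumptions, not necessarily the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def correspondence_loss(img_feats: torch.Tensor,
                        pc_feats: torch.Tensor,
                        temperature: float = 0.07) -> torch.Tensor:
    """Sketch of a cross-modal correspondence objective: row i of
    img_feats and row i of pc_feats come from the same object, so
    they should match each other and repel all other pairs."""
    img = F.normalize(img_feats, dim=1)
    pc = F.normalize(pc_feats, dim=1)
    logits = img @ pc.t() / temperature     # (B, B) similarity matrix
    targets = torch.arange(img.size(0))     # positives on the diagonal
    return F.cross_entropy(logits, targets)

# Usage sketch: features from a 2D encoder and a 3D encoder (assumed).
loss = correspondence_loss(torch.randn(8, 256), torch.randn(8, 256))
```
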
Posted Content

Advancing Self-supervised Monocular Depth Learning with Sparse LiDAR

TL;DR: The authors propose a two-stage network that advances self-supervised monocular dense depth learning by leveraging low-cost sparse (e.g. 4-beam) LiDAR.
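
A schematic two-stage pipeline might look like the following sketch; the toy convolutions and the zeros-elsewhere encoding of the sparse LiDAR map are assumptions for illustration, not the paper's networks.

```python
import torch
import torch.nn as nn

class TwoStageDepth(nn.Module):
    """Sketch: stage one predicts dense depth from the image alone;
    stage two refines it using a sparse (e.g. 4-beam) LiDAR map."""

    def __init__(self):
        super().__init__()
        self.stage1 = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )
        # Stage two sees the image (3), coarse depth (1), sparse map (1).
        self.stage2 = nn.Sequential(
            nn.Conv2d(5, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, image, sparse_lidar):
        coarse = self.stage1(image)
        refined = self.stage2(torch.cat([image, coarse, sparse_lidar], dim=1))
        return coarse, refined

# sparse_lidar: depth at LiDAR hits, zeros elsewhere (assumed format).
model = TwoStageDepth()
coarse, refined = model(torch.randn(1, 3, 64, 64), torch.zeros(1, 1, 64, 64))
```
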