
Jinlong Li

Researcher at Cleveland State University

Publications -  22
Citations -  131

Jinlong Li is an academic researcher from Cleveland State University. The author has contributed to research in topics: Computer Science & Engineering. The author has an h-index of 2 and has co-authored 4 publications receiving 13 citations. Previous affiliations of Jinlong Li include Chang'an University.

Papers
Journal ArticleDOI

Domain Adaptation from Daytime to Nighttime: A Situation-sensitive Vehicle Detection and Traffic Flow Parameter Estimation Framework

TL;DR: A new situation-sensitive method based on Faster R-CNN with domain adaptation to improve vehicle detection at nighttime, along with a situation-sensitive traffic flow parameter estimation method developed based on traffic flow theory.
Proceedings ArticleDOI

Bridging the Domain Gap for Multi-Agent Perception

TL;DR: This paper proposes the first lightweight framework to bridge domain gaps for multi-agent perception, which can serve as a plug-in module for most existing systems while maintaining confidentiality.
Journal ArticleDOI

Let There be Light: Improved Traffic Surveillance via Detail Preserving Night-to-Day Transfer

TL;DR: This paper proposes a framework to alleviate the accuracy decline of object detection under adverse conditions by using an image translation method, and utilizes a Kernel Prediction Network (KPN)-based method to refine the nighttime-to-daytime image translation.
Journal ArticleDOI

A novel image-based convolutional neural network approach for traffic congestion estimation

TL;DR: An image-based traffic congestion estimation framework is constructed in which a traffic parameter layer is integrated into the basic convolutional neural network model. The framework can directly perform traffic congestion calculation and estimation, which shortens processing time and avoids complicated postprocessing.
Journal ArticleDOI

V2V4Real: A Real-world Large-scale Dataset for Vehicle-to-Vehicle Cooperative Perception

TL;DR: The V2V4Real dataset, as mentioned in this paper, is a large-scale real-world multi-modal dataset for V2V perception, consisting of LiDAR frames, 40k RGB frames, 240k annotated 3D bounding boxes for 5 classes, and HDMaps that cover all the driving routes.