
Wei Jing

Researcher at Agency for Science, Technology and Research

Publications -  42
Citations -  618

Wei Jing is an academic researcher from the Agency for Science, Technology and Research. The author has contributed to research in topics including Computer science and Motion planning. The author has an h-index of 9 and has co-authored 36 publications receiving 230 citations. Previous affiliations of Wei Jing include Carnegie Mellon University and the Institute of High Performance Computing, Singapore.

Papers
Posted Content

Span-based Localizing Network for Natural Language Video Localization

TL;DR: This paper proposes a video span localizing network (VSLNet) to address the NLVL task with a span-based QA approach, treating the input video as a text passage.
Proceedings ArticleDOI

Span-based Localizing Network for Natural Language Video Localization

TL;DR: This work proposes a video span localizing network (VSLNet), built on top of the standard span-based QA framework, to address NLVL, and tackles the differences between NLVL and span-based QA through a simple yet effective query-guided highlighting (QGH) strategy.
Journal ArticleDOI

Natural Language Video Localization: A Revisit in Span-based Question Answering Framework.

TL;DR: This paper proposes a video span localizing network (VSLNet) to solve the NLVL problem from a span-based question answering (QA) perspective, treating the input video as a text passage.
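
The span-based QA framing behind the VSLNet papers above can be sketched roughly as follows. This is an illustrative outline only, not the authors' implementation: the module names, dimensions, and the simplified highlighting step are assumptions, and query-video fusion is assumed to have already happened upstream.

```python
# Minimal sketch (assumed names, not the paper's code): treat the video as a
# "text passage" and predict a start/end span, with a per-frame highlighting
# score loosely standing in for query-guided highlighting (QGH).
import torch
import torch.nn as nn

class SpanPredictor(nn.Module):
    def __init__(self, d_model: int = 128):
        super().__init__()
        self.highlight = nn.Linear(d_model, 1)   # per-frame foreground score
        self.start_head = nn.Linear(d_model, 1)  # start-boundary logits
        self.end_head = nn.Linear(d_model, 1)    # end-boundary logits

    def forward(self, video_feats: torch.Tensor):
        # video_feats: (batch, num_frames, d_model), already fused with the query
        h = torch.sigmoid(self.highlight(video_feats))     # (B, T, 1)
        feats = video_feats * h                             # emphasize foreground frames
        start_logits = self.start_head(feats).squeeze(-1)   # (B, T)
        end_logits = self.end_head(feats).squeeze(-1)        # (B, T)
        return start_logits, end_logits, h.squeeze(-1)

# Usage: the predicted moment is the span (s, e) with the highest joint
# start/end probability, subject to s <= e (argmax shown here for brevity).
feats = torch.randn(2, 64, 128)                  # dummy query-fused video features
start_logits, end_logits, highlight = SpanPredictor()(feats)
print(start_logits.argmax(dim=-1), end_logits.argmax(dim=-1))
```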
Proceedings ArticleDOI

Sampling-based view planning for 3D visual coverage task with Unmanned Aerial Vehicle

TL;DR: This paper proposes a novel view planning algorithm for a camera-equipped Unmanned Aerial Vehicle acquiring visual geometric information of target objects in its surrounding environment. The algorithm uses iterative random sampling and a probabilistic potential-field method to generate candidate viewpoints in a non-deterministic manner.
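
A rough sketch of the sampling-based view planning idea is given below. It is not the paper's method: candidate viewpoints here are drawn by plain random sampling on a sphere (standing in for the probabilistic potential-field generation), the visibility test is a toy placeholder, and greedy set-cover selection is only an assumed way to pick viewpoints for coverage.

```python
# Illustrative sketch (assumptions throughout): sample candidate camera
# viewpoints around a target and greedily select a subset that covers the surface.
import numpy as np

rng = np.random.default_rng(0)

def sample_viewpoints(n: int, radius: float = 2.0) -> np.ndarray:
    """Randomly sample camera positions on a sphere around the target object."""
    dirs = rng.normal(size=(n, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    return radius * dirs

def visible(viewpoint: np.ndarray, points: np.ndarray, fov_cos: float = 0.5) -> np.ndarray:
    """Toy visibility test: a surface point counts as visible if it roughly faces the camera."""
    to_cam = viewpoint - points
    to_cam = to_cam / np.linalg.norm(to_cam, axis=1, keepdims=True)
    normals = points / np.linalg.norm(points, axis=1, keepdims=True)  # sphere-like target assumed
    return (to_cam * normals).sum(axis=1) > fov_cos

# Target surface approximated by points on a unit sphere (stand-in for a 3D model).
surface = rng.normal(size=(500, 3))
surface /= np.linalg.norm(surface, axis=1, keepdims=True)

candidates = sample_viewpoints(50)
covered = np.zeros(len(surface), dtype=bool)
plan = []
while covered.mean() < 0.95 and len(plan) < len(candidates):
    # Greedily pick the candidate viewpoint that covers the most uncovered points.
    gains = [np.sum(visible(v, surface) & ~covered) for v in candidates]
    best = int(np.argmax(gains))
    if gains[best] == 0:
        break
    plan.append(candidates[best])
    covered |= visible(candidates[best], surface)

print(f"{len(plan)} viewpoints cover {covered.mean():.0%} of the surface points")
```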
Proceedings ArticleDOI

Video Corpus Moment Retrieval with Contrastive Learning

TL;DR: In this article, the authors propose a Retrieval and Localization Network with Contrastive Learning (ReLoCLNet) for video corpus moment retrieval (VCMR). The model adopts two contrastive learning objectives that refine video and text representations separately but with better alignment for VCMR.
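
For intuition, a generic contrastive objective of the kind used for such retrieval models is sketched below. It is a standard symmetric InfoNCE loss on pooled video and query embeddings, offered as an assumed stand-in rather than ReLoCLNet's actual video- and frame-level objectives.

```python
# Hedged sketch: symmetric InfoNCE contrastive loss between video and query
# embeddings; matched pairs on the diagonal are positives, all other in-batch
# pairs are negatives. Shapes and the temperature value are illustrative.
import torch
import torch.nn.functional as F

def contrastive_loss(video_emb: torch.Tensor, text_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = v @ t.T / temperature                     # (B, B) similarity matrix
    labels = torch.arange(len(v), device=v.device)     # positives on the diagonal
    return (F.cross_entropy(logits, labels) + F.cross_entropy(logits.T, labels)) / 2

video_emb = torch.randn(8, 256)   # dummy pooled video representations
text_emb = torch.randn(8, 256)    # dummy pooled query representations
print(contrastive_loss(video_emb, text_emb).item())
```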