
Showing papers by "Jiaqi Yang published in 2021"


Journal ArticleDOI
TL;DR: This paper proposes a method for short-text similarity calculation based on semantic and syntactic information: knowledge bases and corpora are used to express term meanings and resolve polysemy, while a constituency parse tree captures the syntactic structure of short texts.

23 citations


Journal ArticleDOI
TL;DR: This paper gives a comprehensive evaluation of state-of-the-art 3D correspondence grouping methods. A good correspondence grouping algorithm is expected to retrieve as many inliers as possible from the initial feature matches, yielding high precision and recall as well as facilitating accurate transformation estimation.
Abstract: Seeking consistent point-to-point correspondences between 3D rigid data (point clouds, meshes, or depth maps) is a fundamental problem in 3D computer vision. While a number of correspondence selection methods have been proposed in recent years, their advantages and shortcomings remain unclear regarding different applications and perturbations. To fill this gap, this paper gives a comprehensive evaluation of nine state-of-the-art 3D correspondence grouping methods. A good correspondence grouping algorithm is expected to retrieve as many inliers as possible from the initial feature matches, yielding high precision and recall as well as facilitating accurate transformation estimation. Following this criterion, we deploy experiments on three benchmarks with different application contexts, including shape retrieval, 3D object recognition, and point cloud registration. We also investigate various perturbations such as noise, point density variation, clutter, occlusion, partial overlap, different scales of initial correspondences, and different combinations of keypoint detectors and descriptors. The rich variety of application scenarios and nuisances results in different spatial distributions and inlier ratios of initial feature correspondences, thus enabling a thorough evaluation. Based on the outcomes, we give a summary of the traits, merits, and demerits of the evaluated approaches and indicate some potential future research directions.
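The precision/recall criterion described above can be sketched concretely: given a ground-truth rigid transform, a correspondence is an inlier if it maps close to its match, and a grouping result is scored by how many of the retained correspondences are true inliers. The following is a minimal NumPy illustration of that scoring rule (function and parameter names are illustrative, not the paper's evaluation code):

```python
import numpy as np

def grouping_precision_recall(corrs_src, corrs_tgt, kept_mask, R, t, tau=0.05):
    """Score a correspondence-grouping result against a ground-truth pose.

    A correspondence (p, q) counts as an inlier if ||R @ p + t - q|| < tau.
    `kept_mask` flags the correspondences retained by the grouping method.
    """
    residuals = np.linalg.norm(corrs_src @ R.T + t - corrs_tgt, axis=1)
    inlier = residuals < tau
    kept = np.asarray(kept_mask, dtype=bool)
    tp = np.count_nonzero(inlier & kept)          # retained true inliers
    precision = tp / max(kept.sum(), 1)           # of what was kept
    recall = tp / max(inlier.sum(), 1)            # of what existed
    return precision, recall
```

A method that keeps everything trivially maximizes recall at the cost of precision, which is why the evaluation demands both to be high.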

20 citations


Journal ArticleDOI
TL;DR: A lightweight, efficient, and general cross-modal image fusion network, termed AE-Netv2, is proposed; it explores the commonness and characteristics of different image fusion tasks and provides a basis for further research on the continuous-learning characteristics of the human brain.

15 citations


Journal ArticleDOI
TL;DR: A multi-source image fusion framework that combines illuminance factors and attention mechanisms, effectively integrating traditional image features with modern deep-learning features, is proposed.

10 citations


Journal ArticleDOI
TL;DR: A simple yet effective estimator called SAmple Consensus by sampling COmpatibility Triangles in graphs (SAC-COT) is proposed for robust 6-DOF pose estimation and 3-D registration; it is shown that correct hypotheses can be generated at an early iteration stage.
Abstract: Six-degree-of-freedom (6-DOF) pose estimation from feature correspondences remains a popular and robust approach to 3-D registration. However, heavy outliers present in the initial correspondence set pose a great challenge to this problem. This article presents a simple yet effective estimator called SAmple Consensus by sampling COmpatibility Triangles in graphs (SAC-COT) for robust 6-DOF pose estimation and 3-D registration. The key novelty is a guided three-point sampling approach based on a novel correspondence sample representation, the COmpatibility Triangle (COT). We first model the correspondence set as a graph whose edges connect compatible correspondences. Then, by ranking and sampling COTs formed by ternary loops, we show that correct hypotheses can be generated at an early iteration stage. Finally, the hypothesis generated by the COT yielding the maximum consensus is the output of SAC-COT. Extensive experiments on six data sets and extensive comparisons with state-of-the-art estimators confirm that: 1) SAC-COT can achieve accurate registrations within a few iterations and 2) SAC-COT outperforms all competitors and is ultrarobust when confronted with Gaussian noise, data decimation, holes, clutter, partial overlap, varying scales of input correspondences, and data modality variation.
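The graph construction behind SAC-COT can be sketched in a few lines, assuming the common rigidity-based compatibility test: two correspondences are compatible when they preserve the pairwise distance between their source and target points, and triangles (ternary loops) in the resulting graph become candidate three-point samples for pose hypotheses. This is a simplified illustration, not the paper's ranking-and-sampling pipeline:

```python
import numpy as np
from itertools import combinations

def compatibility_triangles(src_pts, tgt_pts, eps=0.05):
    """Build a compatibility graph over correspondences and list its
    triangles (ternary loops), the candidate 3-point samples.

    src_pts[i] and tgt_pts[i] are the two endpoints of correspondence i.
    """
    n = len(src_pts)
    adj = np.zeros((n, n), dtype=bool)
    for i, j in combinations(range(n), 2):
        ds = np.linalg.norm(src_pts[i] - src_pts[j])   # source-side distance
        dt = np.linalg.norm(tgt_pts[i] - tgt_pts[j])   # target-side distance
        if abs(ds - dt) < eps:                         # rigidity preserved?
            adj[i, j] = adj[j, i] = True
    return [(i, j, k) for i, j, k in combinations(range(n), 3)
            if adj[i, j] and adj[j, k] and adj[i, k]]
```

Because an outlier rarely preserves distances to two inliers simultaneously, triangles are strongly biased toward all-inlier samples, which is why correct hypotheses tend to appear early.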

9 citations


Journal ArticleDOI
TL;DR: The proposed constraint, named photometric constraint (PC), provides a prospective constraint for absolute phase unwrapping from single-frequency fringe patterns without any additional cameras, and achieves performance comparable to the state-of-the-art method given a traditional camera-projector setup and single high-frequency fringe patterns.
Abstract: As a fundamental step in fringe projection profilometry, absolute phase unwrapping via single-frequency fringe patterns is still a challenging ill-posed problem, which has attracted considerable interest in the research area. To solve the problem above, additional constraints have been constructed, such as the spatial smoothness constraint (SSC) in spatial phase unwrapping algorithms and the viewpoint consistency constraint (VCC) in multi-view systems (e.g., stereo and light-field cameras). However, phase ambiguity still exists in unwrapping results based on SSC. Moreover, VCC-based methods rely on additional cameras or light-field cameras, which makes the system complicated and expensive. In this paper, we propose to construct a novel constraint directly from the photometric information in captured image intensity, which has never been fully exploited in phase unwrapping. The proposed constraint, named photometric constraint (PC), provides a prospective constraint for absolute phase unwrapping from single-frequency fringe patterns without any additional cameras. Extensive experiments have been conducted to validate the proposed method, which achieves performance comparable to the state-of-the-art method given a traditional camera-projector setup and single high-frequency fringe patterns.
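The phase ambiguity that motivates the photometric constraint can be illustrated in one dimension: spatial unwrapping (the SSC route) restores smoothness but recovers the phase only up to an unknown multiple of 2π, which is exactly the absolute offset an extra constraint must resolve. A minimal NumPy illustration of that residual ambiguity (not the paper's method):

```python
import numpy as np

# Ground-truth absolute phase: a ramp offset by 2 full fringe periods.
x = np.linspace(0, 4 * np.pi, 200)
absolute = x + 2 * (2 * np.pi)

# The camera effectively observes the phase wrapped into (-pi, pi].
wrapped = np.angle(np.exp(1j * absolute))

# Spatial unwrapping restores smoothness, but a constant 2*pi*k offset
# (here k = 2) remains unknown without an additional constraint.
unwrapped = np.unwrap(wrapped)
offset = absolute - unwrapped
assert np.allclose(offset, offset[0])  # constant, but k is unrecoverable
```

Multi-frequency patterns, extra views (VCC), or, as here, photometric information are different ways of pinning down that integer k.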

Book ChapterDOI
29 Oct 2021
TL;DR: In this paper, a feature representation for 3D correspondences, dubbed compatibility feature (CF), is proposed to describe the consistencies within inliers and inconsistencies within outliers.
Abstract: We present a simple yet effective method for 3D correspondence grouping. The objective is to accurately classify initial correspondences obtained by matching local geometric descriptors into inliers and outliers. Although the spatial distribution of correspondences is irregular, inliers are expected to be geometrically compatible with each other. Based on this observation, we propose a novel feature representation for 3D correspondences, dubbed compatibility feature (CF), to describe the consistencies within inliers and inconsistencies within outliers. CF consists of the top-ranked compatibility scores of a candidate with other correspondences, which relies purely on robust and rotation-invariant geometric constraints. We then formulate the grouping problem as a classification problem over CF features, which is accomplished via a simple multilayer perceptron (MLP) network. Comparisons with nine state-of-the-art methods on four benchmarks demonstrate that: 1) CF is distinctive, robust, and rotation-invariant; 2) our CF-based method achieves the best overall performance and generalizes well.
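The CF representation can be sketched under the assumption that pairwise compatibility is measured by rigid distance preservation, a common rotation-invariant constraint; the paper's exact scoring may differ. Each correspondence is described by its k largest compatibility scores with the others, so inliers (mutually consistent) get uniformly high features while outliers do not:

```python
import numpy as np

def compatibility_feature(src_pts, tgt_pts, k=3):
    """Top-k compatibility scores per correspondence (a CF-style feature).

    src_pts[i] and tgt_pts[i] are the endpoints of correspondence i.
    Scores near 1.0 mean the pair preserves rigid distances exactly.
    """
    ds = np.linalg.norm(src_pts[:, None] - src_pts[None, :], axis=-1)
    dt = np.linalg.norm(tgt_pts[:, None] - tgt_pts[None, :], axis=-1)
    score = np.exp(-np.abs(ds - dt))      # rotation-invariant compatibility
    np.fill_diagonal(score, 0.0)          # ignore self-compatibility
    return np.sort(score, axis=1)[:, ::-1][:, :k]   # top-k, descending
```

Such fixed-length feature vectors are then easy to feed to a small MLP classifier, as the paper does for the inlier/outlier decision.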

DOI
19 Jul 2021
TL;DR: In this paper, a more general technique termed frequency-shifting (FS) is proposed, based on which the behavior of periodicity is eliminated and absolute phase can be retrieved pixel-wisely without any phase unwrapping.
Abstract: A more general technique termed frequency-shifting (FS) is proposed, based on which the behavior of periodicity is eliminated and absolute phase can be retrieved pixel-wisely without any phase unwrapping.

Posted Content
TL;DR: This paper proposes a novel dynamic image restoration and fusion neural network, termed DDRF-Net, which solves two problems: static restoration and fusion, and dynamic degradation.
Abstract: Deep-learning-based image restoration and fusion methods have achieved remarkable results. However, existing restoration and fusion methods have paid little attention to the robustness problem caused by dynamic degradation. In this paper, we propose a novel dynamic image restoration and fusion neural network, termed DDRF-Net, which is capable of solving two problems: static restoration and fusion, and dynamic degradation. To solve the static fusion problem of existing methods, dynamic convolution is introduced to learn dynamic restoration and fusion weights. In addition, a dynamic degradation kernel is proposed to improve the robustness of image restoration and fusion. Our network framework can effectively combine image degradation with image fusion tasks, provide more detailed information for image fusion through the image restoration loss, and optimize image restoration through the image fusion loss. Therefore, the stumbling blocks of deep learning in image fusion, e.g., static fusion weights and specifically designed network architectures, are greatly mitigated. Extensive experiments show that our method outperforms the state-of-the-art methods.
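The dynamic-convolution ingredient that DDRF-Net uses to replace static fusion weights can be sketched in its generic form: a bank of static kernels is mixed with input-dependent attention weights, so the effective kernel changes per input. A minimal 1-D NumPy sketch of that generic formulation (all names illustrative; not the paper's architecture):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def dynamic_conv1d(x, kernels, attn_w):
    """Per-input mixture of K static kernels (generic dynamic convolution).

    kernels: (K, ksize) kernel bank; attn_w: (K, 1) projection mapping a
    global input statistic to mixing logits.
    """
    stat = np.array([[x.mean()]])                 # (1, 1) global descriptor
    alpha = softmax((attn_w @ stat).ravel())      # (K,) input-dependent mix
    kernel = (alpha[:, None] * kernels).sum(0)    # aggregated kernel
    return np.convolve(x, kernel, mode="same"), alpha
```

Because the mixing weights depend on the input, the same layer can apply different effective filters to differently degraded images, which is the property the paper exploits for dynamic restoration and fusion.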