Author

Zhang Yongbing

Bio: Zhang Yongbing is an academic researcher from Harbin Institute of Technology. The author has contributed to research in the topics of Distortion and Image quality. The author has an h-index of 1, and has co-authored 1 publication receiving 2 citations.

Papers
Journal ArticleDOI
TL;DR: The difficulty of no-reference image quality assessment (NR IQA) often lies in the lack of knowledge about the distortion in the image, which makes quality assessment blind and thus inefficient.
Abstract: The difficulty of no-reference image quality assessment (NR IQA) often lies in the lack of knowledge about the distortion in the image, which makes quality assessment blind and thus inefficient. To...

55 citations


Cited by
Proceedings ArticleDOI
06 Apr 2022
TL;DR: This paper proposes SMPLGait, a novel framework that exploits the 3D Skinned Multi-Person Linear (SMPL) model of the human body for gait recognition, and introduces a dataset whose SMPL models, recovered from video frames, supply dense 3D information about body shape, viewpoint, and dynamics.
Abstract: Existing studies for gait recognition are dominated by 2D representations like the silhouette or skeleton of the human body in constrained scenes. However, humans live and walk in the unconstrained 3D space, so projecting the 3D human body onto the 2D plane will discard a lot of crucial information like the viewpoint, shape, and dynamics for gait recognition. Therefore, this paper aims to explore dense 3D representations for gait recognition in the wild, which is a practical yet neglected problem. In particular, we propose a novel framework to explore the 3D Skinned Multi-Person Linear (SMPL) model of the human body for gait recognition, named SMPLGait. Our framework has two elaborately-designed branches of which one extracts appearance features from silhouettes, the other learns knowledge of 3D viewpoints and shapes from the 3D SMPL model. In addition, due to the lack of suitable datasets, we build the first large-scale 3D representation-based gait recognition dataset, named Gait3D. It contains 4,000 subjects and over 25,000 sequences extracted from 39 cameras in an unconstrained indoor scene. More importantly, it provides 3D SMPL models recovered from video frames which can provide dense 3D information of body shape, viewpoint, and dynamics. Based on Gait3D, we comprehensively compare our method with existing gait recognition approaches, which reflects the superior performance of our framework and the potential of 3D representations for gait recognition in the wild. The code and dataset are available at: https://gait3d.github.io.
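The abstract describes a two-branch design: one branch extracts appearance features from silhouettes while the other learns viewpoint and shape from SMPL parameters, and the two are fused into a single gait embedding. Below is a minimal, hypothetical sketch of such a two-branch model in PyTorch; all module choices, layer sizes, and input dimensions are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical two-branch gait model in the spirit of SMPLGait:
# one branch encodes silhouette appearance, the other encodes SMPL
# parameters (pose + shape + camera). Sizes are illustrative only.
import torch
import torch.nn as nn

class TwoBranchGait(nn.Module):
    def __init__(self, smpl_dim=85, embed_dim=256):
        super().__init__()
        # Appearance branch: a small CNN over binary silhouettes (1 x 64 x 44).
        self.silhouette_branch = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )
        # 3D branch: an MLP over flattened SMPL parameters.
        self.smpl_branch = nn.Sequential(
            nn.Linear(smpl_dim, 128), nn.ReLU(),
            nn.Linear(128, embed_dim),
        )
        self.head = nn.Linear(2 * embed_dim, embed_dim)

    def forward(self, silhouettes, smpl_params):
        # silhouettes: (B, 1, 64, 44); smpl_params: (B, smpl_dim)
        app = self.silhouette_branch(silhouettes)
        geo = self.smpl_branch(smpl_params)
        return self.head(torch.cat([app, geo], dim=1))  # fused gait embedding

model = TwoBranchGait()
emb = model(torch.randn(4, 1, 64, 44), torch.randn(4, 85))
print(emb.shape)  # torch.Size([4, 256])
```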

30 citations

Journal ArticleDOI
TL;DR: A systematic survey of the literature on the use of big data analytics and data mining methods in IoT that identifies the lines of research that should receive more attention in future work and provides a summary of the methods used.

15 citations

Journal ArticleDOI
TL;DR: An efficient super-resolution model based on neural architecture search and an attention mechanism that introduces a Bayesian algorithm for hyper-parameter tuning and further improves performance based on the optimal sub-network found by the search.
Abstract: Although current super-resolution models based on deep learning achieve excellent reconstruction results, the increasing depth of these models results in a huge number of parameters, limiting the further application of deep super-resolution models. To solve this problem, we propose an efficient super-resolution model based on neural architecture search and an attention mechanism. First, we use global residual learning to limit the search to the non-linear mapping part of the network and add a down-sampling step to this part to reduce the feature map’s size and computation. Second, we establish a lightweight search space and joint rewards for searching for the optimal network structure. The model divides the search into a macro search and a micro search, which are used to find the optimal down-sampling position and the optimal cell structure, respectively. In addition, we introduce a Bayesian algorithm for hyper-parameter tuning and further improve the model’s performance based on the optimal sub-network found by the search. Detailed experiments show that our model achieves excellent super-resolution performance and high computational efficiency compared with some state-of-the-art models.
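The key structural idea here is global residual learning: the searched non-linear mapping only has to predict a residual, which is added back to an interpolated copy of the low-resolution input. Below is a minimal sketch of that connection in PyTorch; the body layers are illustrative placeholders for the cell structures the paper's architecture search would discover.

```python
# Minimal sketch of global residual learning for super-resolution (PyTorch).
# The "body" stands in for the searched non-linear mapping; in the paper this
# part (and the position of an internal down-sampling step) is found by NAS.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalResidualSR(nn.Module):
    def __init__(self, scale=2, channels=32):
        super().__init__()
        self.scale = scale
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),  # rearrange channels into an HR residual
        )

    def forward(self, lr):
        upsampled = F.interpolate(lr, scale_factor=self.scale,
                                  mode='bicubic', align_corners=False)
        return upsampled + self.body(lr)  # global residual connection

sr = GlobalResidualSR(scale=2)
out = sr(torch.randn(1, 3, 48, 48))
print(out.shape)  # torch.Size([1, 3, 96, 96])
```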

8 citations

Journal ArticleDOI
TL;DR: This work establishes a large-scale underwater image dataset, dubbed UID2021, for evaluating no-reference UIQA metrics; it enables one to evaluate NR UIQA algorithms comprehensively and paves the way for further research on UIQA.
Abstract: Achieving subjective and objective quality assessment of underwater images is of high significance in underwater visual perception and image/video processing. However, the development of underwater image quality assessment (UIQA) is limited by the lack of publicly available underwater image datasets with human subjective scores and reliable objective UIQA metrics. To address this issue, we establish a large-scale underwater image dataset, dubbed UID2021, for evaluating no-reference (NR) UIQA metrics. The constructed dataset contains 60 multiply degraded underwater images collected from various sources, covering six common underwater scenes (i.e., bluish scene, blue-green scene, greenish scene, hazy scene, low-light scene, and turbid scene), together with their 900 quality-improved versions generated by employing 15 state-of-the-art underwater image enhancement and restoration algorithms. Mean opinion scores from 52 observers for each image of UID2021 are obtained by using the pairwise comparison sorting method. Both in-air and underwater-specific NR IQA algorithms are tested on our constructed dataset to fairly compare their performance and analyze their strengths and weaknesses. The proposed UID2021 dataset enables one to evaluate NR UIQA algorithms comprehensively and paves the way for further research on UIQA. The dataset is available at https://github.com/Hou-Guojia/UID2021.
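Benchmarking an NR IQA metric on a dataset like UID2021 typically means correlating its predicted scores with the subjective mean opinion scores (MOS). The sketch below shows the standard Spearman (SROCC) and Pearson (PLCC) evaluation; the arrays are dummy placeholders, not data from the paper, and the exact protocol used by the authors may differ.

```python
# Illustrative evaluation of an NR IQA metric against mean opinion scores
# (MOS) using Spearman and Pearson correlations. Dummy data only.
import numpy as np
from scipy.stats import spearmanr, pearsonr

def evaluate_nr_iqa(predicted_scores, mos):
    """Correlate a metric's predicted quality scores with subjective MOS."""
    predicted_scores = np.asarray(predicted_scores, dtype=float)
    mos = np.asarray(mos, dtype=float)
    srocc, _ = spearmanr(predicted_scores, mos)   # monotonic agreement
    plcc, _ = pearsonr(predicted_scores, mos)     # linear agreement
    return srocc, plcc

rng = np.random.default_rng(0)
mos = rng.uniform(1, 5, size=100)            # placeholder subjective scores
pred = mos + rng.normal(0, 0.5, size=100)    # placeholder metric predictions
print("SROCC=%.3f  PLCC=%.3f" % evaluate_nr_iqa(pred, mos))
```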

8 citations