scispace - formally typeset

Showing papers in "Displays in 2021"


Journal Article•DOI•
01 Sep 2021-Displays
TL;DR: A comprehensive review and classification of the latest developments in deep learning methods for multi-view 3D object recognition, which summarizes the results of these methods on a few mainstream datasets, provides an insightful summary, and puts forward enlightening future research directions.

101 citations


Journal Article•DOI•
01 Sep 2021-Displays
TL;DR: Wang et al. as mentioned in this paper proposed a voxel-based three-view hybrid parallel network for 3D shape classification, which first obtains depth projection views of the three-dimensional model from the front, top, and side, and then outputs a predicted probability value for the category of the 3D model.

64 citations
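The projection step described in the entry above, rendering a voxel model as depth images seen from the front, the top, and the side, can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation; the function name and the "depth = index of first occupied voxel" convention are our assumptions.

```python
import numpy as np

def three_view_depth_projections(voxels):
    """Render depth maps of a binary voxel grid from three orthogonal
    directions by recording, along each axis, the index of the first
    occupied voxel (empty rays get the grid size, i.e. maximum depth)."""
    d = voxels.shape[0]  # assume a cubic (d, d, d) occupancy grid
    views = []
    for axis in (0, 1, 2):  # front, top, side
        occupied = voxels.any(axis=axis)
        # argmax over booleans returns the first True index along the axis
        first = np.argmax(voxels, axis=axis)
        depth = np.where(occupied, first, d)
        views.append(depth)
    return views
```

Each of the three depth maps could then be fed to one branch of a parallel 2D network, which is the general idea behind multi-view hybrid classification.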


Journal Article•DOI•
01 Dec 2021-Displays
TL;DR: Wang et al. as mentioned in this paper proposed a quadratic polynomial guided fuzzy C-means and dual attention mechanism composite network model architecture to address the high complexity and noise of medical images.

51 citations


Journal Article•DOI•
01 Sep 2021-Displays
TL;DR: This review summarizes the different types of deep-learning-based neural network structures for image inpainting, outlines the current problems of the field, and discusses future development trends and research directions.

50 citations


Journal Article•DOI•
Liping Zhang, Weijun Li, Lina Yu, Linjun Sun, Xiaoli Dong, Xin Ning
01 Jul 2021-Displays
TL;DR: In this paper, a multi-Gaussian function called GmFace is proposed for face image representation, which exploits the two-dimensional Gaussian function's ability to provide a symmetric bell surface with a controllable shape.

33 citations
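The building block behind such a representation, a two-dimensional Gaussian bell surface whose centre, widths, and amplitude control its shape, can be sketched as below. This is an illustrative toy, not the paper's GmFace fitting procedure; the function names and parameter layout are our assumptions.

```python
import numpy as np

def gaussian_2d(x, y, amp, x0, y0, sx, sy):
    """Amplitude-scaled 2D Gaussian: a symmetric bell surface centred at
    (x0, y0) with widths (sx, sy) controlling its spread."""
    return amp * np.exp(-(((x - x0) ** 2) / (2 * sx ** 2)
                          + ((y - y0) ** 2) / (2 * sy ** 2)))

def multi_gaussian_surface(x, y, params):
    """Sum of several 2D Gaussians, illustrating how a grey-level image
    surface can be approximated by a small set of bell components.
    `params` holds (amp, x0, y0, sx, sy) tuples."""
    return sum(gaussian_2d(x, y, *p) for p in params)
```

Fitting the parameters of each component to a face image (e.g. by least squares) would then yield a compact parametric description of the image surface.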


Journal Article•DOI•
01 Sep 2021-Displays
TL;DR: A blind image quality assessment (BIQA) method to quantify night-time image quality by investigating fundamental image properties that are highly relevant to quality, such as brightness, saturation, sharpness, noisiness, contrast, and semantics.

22 citations


Journal Article•DOI•
Zhangfan Shen, Linghao Zhang, Rui Li, Jie Hou, Chenguan Liu, Weizhuan Hu
01 Apr 2021-Displays
TL;DR: The results showed that although there was no significant main effect of luminance contrast on icon search accuracy, participants responded more quickly to medium luminance contrast than to low or high luminance contrast, and a medium or low area ratio was more conducive to icon identification.

22 citations


Journal Article•DOI•
Chao Ping Chen, Lantian Mi, Wenbo Zhang, Jiaxun Ye, Gang Li
01 Apr 2021-Displays
TL;DR: A waveguide-based near-eye display featuring a dual-channel exit pupil expander, which is composed of an in-coupler, relay gratings, and an out-coupler, and is able to split the field of view evenly into two halves.

20 citations


Journal Article•DOI•
Xiang Wang, Chen Wang, Bing Liu, Xiaoqing Zhou, Liang Zhang, Jin Zheng, Xiao Bai
01 Dec 2021-Displays
TL;DR: In this paper, a comprehensive review of recent deep learning methods for multi-view stereo is presented, in which methods are mainly categorized into depth-map-based and volumetric-based approaches according to the 3D representation form, and representative methods are reviewed in detail.

20 citations


Journal Article•DOI•
01 Sep 2021-Displays
TL;DR: This study shows the effectiveness of adding simultaneous spatialized auditory cues fixed at the target's location; the results demonstrate the importance of AR cross-modal cueing under conditions of visual uncertainty and show that designers should consider augmenting visual cues with auditory ones.

18 citations


Journal Article•DOI•
Cong Bai, Anqi Zheng, Yuan Huang, Xiang Pan, Nan Chen
01 Dec 2021-Displays
TL;DR: A framework using a CNN-based generation model to generate image captions with the help of conditional generative adversarial training (CGAN); a multi-modal graph convolution network (MGCN) is used to exploit visual relationships between objects so that the generated captions carry semantic meaning.

Journal Article•DOI•
01 Dec 2021-Displays
TL;DR: A comprehensive analysis of recent methods for semantic segmentation of RGB-D images, organized according to the research progress of recent years.

Journal Article•DOI•
01 Sep 2021-Displays
TL;DR: In this article, the authors combined three-dimensional imaging and Internet of Things technology to carry out research on urban land utilization, and applied these techniques to the model reconstruction of complex urban land.

Journal Article•DOI•
01 Sep 2021-Displays
TL;DR: Li et al. as discussed by the authors proposed a PM2.5 concentration estimator based on deep convolutional neural networks, which consists of three modules: first, a hallucinated reference image is generated by using deep CNNs; second, the discrepancy map and the distorted PM2.5 image are used to extract features; and third, a prediction module based on neural networks uses those extracted features to predict PM2.5 concentrations.

Journal Article•DOI•
Zhoufeng Liu, Huo Zhaochen, Chunlei Li, Yan Dong, Bicao Li
01 Jul 2021-Displays
TL;DR: Experimental results demonstrate that the proposed weakly supervised shallow network with Link-SE (L-SE) module and Dilation Up-Weight CAM (DUW-CAM) can localize the defects with high accuracy, and outperforms the state-of-the-art methods on two distinctive fabric datasets with different textures.

Journal Article•DOI•
20 Nov 2021-Displays
TL;DR: Li et al. as discussed by the authors adopt the teacher-student framework to generate pseudo-labels from unlabeled training data, and use a label filtering method to improve the pseudo label quality.
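A confidence threshold is one common way to realize the kind of pseudo-label filtering this entry describes. The sketch below is a generic illustration, not the paper's method; the function name and the (sample_id, label, confidence) record layout are our assumptions.

```python
def filter_pseudo_labels(predictions, threshold=0.9):
    """Keep only pseudo-labels whose teacher confidence is at least
    `threshold`; `predictions` holds (sample_id, label, confidence)
    triples produced by the teacher model on unlabeled data."""
    return [(sid, label)
            for sid, label, conf in predictions
            if conf >= threshold]
```

The retained pairs would then be mixed into the student's training set, with the threshold trading off pseudo-label quantity against quality.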

Journal Article•DOI•
01 Sep 2021-Displays
TL;DR: The Hamming distance of the corresponding pixels in the left and right images after the Census transform is introduced as the similarity measure in the data term of the energy function, thereby reducing the dependence on raw pixel values.
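The Census/Hamming similarity measure mentioned above is a standard building block in stereo matching and can be sketched as follows. This is a minimal NumPy illustration; the 3x3 window size and edge padding are our assumptions, not details from the paper.

```python
import numpy as np

def census_transform(img, win=3):
    """Census transform: each pixel becomes a bit string recording, for
    every neighbour in a win x win window, whether that neighbour is
    darker than the centre pixel."""
    r = win // 2
    h, w = img.shape
    codes = np.zeros((h, w), dtype=np.uint64)
    padded = np.pad(img, r, mode='edge')
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue  # skip the centre pixel itself
            neighbour = padded[r + dy:r + dy + h, r + dx:r + dx + w]
            codes = (codes << np.uint64(1)) | (neighbour < img).astype(np.uint64)
    return codes

def hamming_cost(left_codes, right_codes):
    """Per-pixel Hamming distance between census codes, usable as the
    data-term similarity measure in an energy function."""
    diff = left_codes ^ right_codes
    # popcount each 64-bit code
    return np.array([bin(int(v)).count('1') for v in diff.ravel()]
                    ).reshape(diff.shape)
```

Because the codes encode only local intensity orderings, the resulting cost is insensitive to absolute pixel values, which is exactly the reduced pixel-value dependence the entry refers to.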

Journal Article•DOI•
16 Oct 2021-Displays
TL;DR: Zhang et al. as mentioned in this paper proposed an improved deep neural network framework with attention for low-light image enhancement by fusing multi-exposure image sequences; such enhancement accentuates image features and is a necessary step in image processing.

Journal Article•DOI•
01 Jul 2021-Displays
TL;DR: Wang et al. as mentioned in this paper proposed a novel end-to-end deep-learning-based network model, called temporal graph convolution and attention (T-GAN), for prediction of temporal complex networks.

Journal Article•DOI•
Nan Guo, Kexin Di, Hongyan Liu, Yifei Wang, Junfei Qiao
01 Dec 2021-Displays
TL;DR: A metric-based meta-learning model that combines attention mechanisms with an ensemble learning method is proposed; it strengthens the feature-extraction ability of the backbone network in the meta-learner and reduces over-fitting through ensemble learning and metric learning.

Journal Article•DOI•
Hongyan Liu, Fei Lei, Chen Tong, Chunji Cui, Li Wu
01 Sep 2021-Displays
TL;DR: The proposed model achieves very high consistency, beyond 97% on average, between detection results and human judgements, outperforming other state-of-the-art smoke detection algorithms based on deep learning.

Journal Article•DOI•
10 Nov 2021-Displays
TL;DR: Wang et al. as discussed by the authors proposed a discriminative graph convolutional network (DGCN) for hyperspectral image classification, which introduces the concepts of within-class scatter and between-class scatter, which respectively reflect the global geometric structure and the discriminating information of the input space.
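The within-class and between-class scatter referred to here are the classical discriminant-analysis quantities. A minimal sketch follows (the function name is ours; how the paper injects these matrices into the graph convolution is not shown here):

```python
import numpy as np

def scatter_matrices(X, y):
    """Within-class scatter Sw (spread of samples around their class
    means) and between-class scatter Sb (spread of class means around
    the overall mean), for samples X of shape (n, d) with labels y."""
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mean_c = Xc.mean(axis=0)
        diff = Xc - mean_c
        Sw += diff.T @ diff                      # scatter inside class c
        m = (mean_c - mean_all).reshape(-1, 1)
        Sb += len(Xc) * (m @ m.T)                # class mean vs overall mean
    return Sw, Sb
```

A useful sanity check is that Sw + Sb equals the total scatter of the data, so minimizing within-class while maximizing between-class scatter directly trades geometric structure against discriminability.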

Journal Article•DOI•
01 Dec 2021-Displays
TL;DR: Experimental results demonstrate that this new VAE-GAN model is superior to other state-of-the-art ASL image synthesis methods, and that the accuracy improvement after incorporating synthesized ASL images from the new model can be as high as 42.41% in dementia diagnosis tasks.

Journal Article•DOI•
01 Apr 2021-Displays
TL;DR: The results showed that the total time necessary to select visual objects (object selection time) increased when dwell time increased, but longer dwell times resulted in a higher object-selection success rate and fewer object selection corrections.

Journal Article•DOI•
01 Apr 2021-Displays
TL;DR: In this paper, the authors constructed an ASD painting database containing 478 paintings drawn by ASD individuals and 490 drawn by a Typically Developed (TD) group, and trained a classifier of ASD and TD painters on features extracted from the paintings, which shows encouraging accuracy as a potential screening tool for ASD.

Journal Article•DOI•
01 Jan 2021-Displays
TL;DR: In this article, the effects of using AR compared to paper instructions were evaluated both on binocular vision, with classical optometric measurements, and on visual fatigue, with the Virtual Reality Symptoms Questionnaire.

Journal Article•DOI•
01 Dec 2021-Displays
TL;DR: A detailed analysis of current developments in 3D object detection methods for RGB-D images, provided to motivate future research and stimulate new research directions.

Journal Article•DOI•
Jing Zhang, Qianqian Dou, Jing Liu, Yuting Su, Sun Wanning
01 Sep 2021-Displays
TL;DR: A residual BE algorithm based on advanced conditional generative adversarial network (BE-ACGAN), in which the discriminator adversarially helps assess image quality and train the generator to achieve more photo-realistic recovery performance, which outperforms the state-of-the-art methods on large-scale benchmark datasets.

Journal Article•DOI•
Zhimin Duan, Chen Yingwen, Hujie Yu, Bowen Hu, Chen Chen
01 Dec 2021-Displays
TL;DR: RGB-Fusion as discussed by the authors combines the advantages of deep learning and multi-view geometry to overcome the inherent limitations of traditional monocular reconstruction, and integrates the Perspective-n-Point (PnP) algorithm into the tracking module.

Journal Article•DOI•
01 Dec 2021-Displays
TL;DR: Wang et al. as discussed by the authors proposed a dynamic monitoring plan for the ecological environment of the reserve using 3D sensor image acquisition technology, and realized the monitoring of tourists in the reserve and the collection of natural environment data.