Topic

Human visual system model

About: Human visual system model is a research topic. Over its lifetime, 8,697 publications have been published within this topic, receiving 259,440 citations.


Papers
Journal ArticleDOI
TL;DR: A measure of depth-map reliability is proposed and used to reduce the influence of poor depth maps on saliency detection; two saliency maps are then integrated into a final saliency map through a weighted-sum method according to their importance.
Abstract: Stereoscopic perception is an important part of the human visual system that allows the brain to perceive depth. However, depth information has not been well explored in existing saliency detection models. In this letter, a novel saliency detection method for stereoscopic images is proposed. First, we propose a measure to evaluate the reliability of the depth map and use it to reduce the influence of poor depth maps on saliency detection. Then, the input image is represented as a graph, and the depth information is introduced into graph construction. After that, a new definition of compactness using color and depth cues is put forward to compute the compactness saliency map. In order to compensate for the detection errors of compactness saliency when the salient regions have appearances similar to the background, a foreground saliency map is calculated based on a depth-refined foreground seeds' selection (DRSS) mechanism and multiple-cue contrast. Finally, these two saliency maps are integrated into a final saliency map through a weighted-sum method according to their importance. Experiments on two publicly available stereo data sets demonstrate that the proposed method performs better than ten other state-of-the-art approaches.

240 citations
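To make the final fusion step concrete, here is a minimal Python/NumPy sketch of combining the compactness and foreground saliency maps by a weighted sum. The function and the fixed weight values are hypothetical illustrations; the paper derives the weights from each map's importance rather than fixing them.

```python
import numpy as np

def fuse_saliency(compactness_sal, foreground_sal, w_c=0.5, w_f=0.5):
    """Weighted-sum fusion of two saliency maps, normalized to [0, 1].

    compactness_sal, foreground_sal: 2-D float arrays of the same shape.
    w_c, w_f: importance weights (hypothetical values; the paper
    determines them according to each map's importance).
    """
    def normalize(s):
        s = s.astype(np.float64)
        rng = s.max() - s.min()
        return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)

    fused = w_c * normalize(compactness_sal) + w_f * normalize(foreground_sal)
    return normalize(fused)
```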

Journal ArticleDOI
TL;DR: Attempts to model the spectral sensitivity of the circadian system are discussed; the models vary in complexity and in their consideration of retinal neuroanatomy and neurophysiology.
Abstract: It is now well established that the spectral, spatial, temporal and absolute sensitivities of the human circadian system are very different from those of the human visual system. Although qualitati...

239 citations

Journal ArticleDOI
TL;DR: The experimental results demonstrate that the proposed blind image blur evaluation algorithm can produce blur scores highly consistent with subjective evaluations and outperforms the state-of-the-art image blur metrics and several general-purpose no-reference quality metrics.
Abstract: Blur is a key determinant in the perception of image quality. Generally, blur causes spread of edges, which leads to shape changes in images. Discrete orthogonal moments have been widely studied as effective shape descriptors. Intuitively, blur can be represented using discrete moments, since noticeable blur affects the magnitudes of the moments of an image. With this consideration, this paper presents a blind image blur evaluation algorithm based on discrete Tchebichef moments. The gradient of a blurred image is first computed to account for shape, which is more effective for blur representation. Then the gradient image is divided into equal-size blocks and the Tchebichef moments are calculated to characterize image shape. The energy of a block is computed as the sum of squared non-DC moment values. Finally, the proposed image blur score is defined as the variance-normalized moment energy, which is computed with the guidance of a visual saliency model to adapt to the characteristics of the human visual system. The performance of the proposed method is evaluated on four public image quality databases. The experimental results demonstrate that our method can produce blur scores highly consistent with subjective evaluations. It also outperforms the state-of-the-art image blur metrics and several general-purpose no-reference quality metrics.

239 citations
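The block-wise moment-energy idea lends itself to a compact sketch. The Python/NumPy code below is a simplified, hedged rendering of the pipeline (gradient, equal-size blocks, non-DC Tchebichef moment energy, variance normalization): it obtains the orthonormal discrete Tchebichef (Gram) polynomials via QR decomposition of a Vandermonde matrix rather than the usual recurrence, and it omits the visual-saliency weighting used in the paper.

```python
import numpy as np

def tchebichef_basis(n):
    """Orthonormal discrete Tchebichef (Gram) polynomial basis on n points.

    QR decomposition of the Vandermonde matrix yields, up to sign, the
    discrete orthonormal polynomials under a uniform weight; the signs
    do not matter here because only squared moments are used.
    """
    x = np.arange(n, dtype=np.float64)
    V = np.vander(x, n, increasing=True)        # columns: 1, x, x^2, ...
    Q, _ = np.linalg.qr(V)
    return Q.T                                  # rows: t_0(x), t_1(x), ...

def block_blur_energy(block):
    """Variance-normalized non-DC Tchebichef moment energy of one block."""
    n = block.shape[0]
    T = tchebichef_basis(n)
    M = T @ block @ T.T                         # 2-D moment matrix M[p, q]
    energy = np.sum(M ** 2) - M[0, 0] ** 2      # drop the DC moment
    var = block.var()
    return energy / var if var > 1e-8 else 0.0

def blur_score(image, block_size=8):
    """Mean block moment energy of the gradient magnitude.

    A rough proxy: blur suppresses high-order moment energy, so lower
    scores indicate blurrier images. The paper additionally weights
    blocks by a visual saliency model, which is omitted here.
    """
    gy, gx = np.gradient(image.astype(np.float64))
    grad = np.hypot(gx, gy)
    h, w = grad.shape
    scores = [
        block_blur_energy(grad[i:i + block_size, j:j + block_size])
        for i in range(0, h - block_size + 1, block_size)
        for j in range(0, w - block_size + 1, block_size)
    ]
    return float(np.mean(scores)) if scores else 0.0
```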

Journal ArticleDOI
TL;DR: Experimental results on six publicly available databases demonstrate that the proposed metric is comparable with the state-of-the-art quality metrics.
Abstract: Objective image quality assessment (IQA) aims to evaluate image quality consistently with human perception. Most of the existing perceptual IQA metrics cannot accurately represent the degradations from different types of distortion; e.g., existing structural similarity metrics perform well on content-dependent distortions but not as well as peak signal-to-noise ratio (PSNR) on content-independent distortions. In this paper, we integrate the merits of the existing IQA metrics with the guide of the recently revealed internal generative mechanism (IGM). The IGM indicates that the human visual system actively predicts sensory information and tries to avoid residual uncertainty for image perception and understanding. Inspired by the IGM theory, we adopt an autoregressive prediction algorithm to decompose an input scene into two portions: the predicted portion with the predicted visual content and the disorderly portion with the residual content. Distortions on the predicted portion degrade the primary visual information, and structural similarity procedures are employed to measure its degradation; distortions on the disorderly portion mainly change the uncertain information, and the PSNR is employed for it. Finally, according to the noise-energy deployment on the two portions, we combine the two evaluation results to acquire the overall quality score. Experimental results on six publicly available databases demonstrate that the proposed metric is comparable with the state-of-the-art quality metrics.

238 citations
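A rough sketch of the two-portion evaluation follows. Note that it substitutes a simple Gaussian filter for the paper's autoregressive predictor and weights the two scores by distortion energy in a loosely analogous way, so it illustrates the decomposition idea rather than reproducing the published metric; the SSIM and PSNR calls come from scikit-image.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def igm_style_score(ref, dist, sigma=1.5):
    """Two-portion IQA sketch inspired by the IGM decomposition.

    A Gaussian filter stands in for the paper's autoregressive
    predictor: the filtered image is the 'predicted' portion and the
    residual is the 'disorderly' portion.
    """
    ref = ref.astype(np.float64)
    dist = dist.astype(np.float64)

    ref_pred, dist_pred = gaussian_filter(ref, sigma), gaussian_filter(dist, sigma)
    ref_res, dist_res = ref - ref_pred, dist - dist_pred

    # Structural similarity on the predicted (primary) portion.
    ssim_val = structural_similarity(ref_pred, dist_pred, data_range=255.0)

    # PSNR on the disorderly (residual) portion, mapped to [0, 1]
    # with a hypothetical 50 dB ceiling.
    psnr_val = peak_signal_noise_ratio(ref_res, dist_res, data_range=255.0)
    psnr_norm = min(psnr_val, 50.0) / 50.0

    # Weight each portion by the share of distortion energy it carries.
    e_pred = np.sum((ref_pred - dist_pred) ** 2)
    e_res = np.sum((ref_res - dist_res) ** 2)
    w = e_pred / (e_pred + e_res + 1e-12)
    return w * ssim_val + (1.0 - w) * psnr_norm
```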

Proceedings ArticleDOI
03 Aug 1997
TL;DR: A computational model of visual masking based on psychophysical data is developed; it allows texture patterns to be chosen for computer graphics images that hide the effects of faceting, banding, aliasing, noise, and other visual artifacts produced by sources of error in graphics algorithms.
Abstract: In this paper we develop a computational model of visual masking based on psychophysical data. The model predicts how the presence of one visual pattern affects the detectability of another. The model allows us to choose texture patterns for computer graphics images that hide the effects of faceting, banding, aliasing, noise and other visual artifacts produced by sources of error in graphics algorithms. We demonstrate the utility of the model by choosing a texture pattern to mask faceting artifacts caused by polygonal tessellation of a flat-shaded curved surface. The model predicts how changes in the contrast, spatial frequency, and orientation of the texture pattern, or changes in the tessellation of the surface, will alter the masking effect. The model is general and has uses in geometric modeling, realistic image synthesis, scientific visualization, image compression, and image-based rendering. CR Categories: I.3.0 [Computer Graphics]: General;

236 citations
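To make the masking idea concrete, here is a small Python sketch of the standard power-law threshold-elevation function that contrast-masking models of this kind build on; the exponent and the function name are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

def threshold_elevation(mask_contrast, base_threshold, epsilon=0.7):
    """Contrast-masking threshold elevation (a standard power-law form,
    not the paper's exact fit).

    Below the detection threshold the masker has no effect; above it,
    the detection threshold for a target at a similar spatial frequency
    and orientation rises as a power of the masker's contrast.
    """
    ratio = np.asarray(mask_contrast, dtype=np.float64) / base_threshold
    return np.maximum(1.0, ratio ** epsilon)

# A target is predicted visible when its contrast exceeds the elevated
# threshold for the masking texture (illustrative values).
elevated = threshold_elevation(mask_contrast=0.2, base_threshold=0.01)
```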


Network Information
Related Topics (5)
Feature (computer vision): 128.2K papers, 1.7M citations, 89% related
Feature extraction: 111.8K papers, 2.1M citations, 86% related
Image segmentation: 79.6K papers, 1.8M citations, 86% related
Image processing: 229.9K papers, 3.5M citations, 85% related
Convolutional neural network: 74.7K papers, 2M citations, 84% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    49
2022    94
2021    279
2020    311
2019    351
2018    348