scispace - formally typeset

Human visual system model

About: Human visual system model is a research topic. Over the lifetime, 8,697 publications have been published within this topic, receiving 259,440 citations.


Papers
Journal ArticleDOI
TL;DR: Using a Ternus-Pikler display, it is shown that human observers can perceive features of moving objects at locations where these features are not present, and that these non-retinotopic feature attributions are not errors caused by the limitations of the perceptual system but follow rules of perceptual grouping.

76 citations

Patent
23 Mar 2001
TL;DR: In this patent, a method for adaptive quantization of video frames based on bit rate prediction was proposed, which includes increasing quantization in sectors of a video frame where coding artifacts would be less noticeable to the human visual system.
Abstract: A method for adaptive quantization of video frames based on bit rate prediction that includes increasing quantization in sectors of a video frame where coding artifacts would be less noticeable to the human visual system and decreasing quantization in sectors where coding artifacts would be more noticeable to the human visual system. In one embodiment the method reverts to uniform quantization for video frames in which adaptive quantization would require extra bits.
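The core idea of the patent, varying the quantizer per sector according to how visible artifacts would be, can be sketched in a few lines. The block size, the variance thresholds used as a crude visibility proxy, and the scaling factors below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def adaptive_q_map(frame, base_q=16.0, bs=8):
    """Toy HVS-guided adaptive quantization: raise the quantizer in
    high-variance (textured) sectors where artifacts are masked, and
    lower it in flat sectors where artifacts are more noticeable."""
    h, w = frame.shape
    qmap = np.full((h // bs, w // bs), base_q)
    for i in range(h // bs):
        for j in range(w // bs):
            var = frame[i*bs:(i+1)*bs, j*bs:(j+1)*bs].var()
            if var > 400:
                qmap[i, j] = base_q * 1.5   # busy texture: coarser quantization
            elif var < 25:
                qmap[i, j] = base_q * 0.75  # flat region: finer quantization
    return qmap
```

A rate-control loop would then compare the bits spent by this map against the predicted budget and, as in the patent's fallback embodiment, revert to a uniform map when the adaptive one costs extra bits.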

76 citations

Journal ArticleDOI
TL;DR: Experimental results show that VGS is competitive with state-of-the-art metrics in terms of prediction precision, reliability, simplicity, and low computational cost.
Abstract: A full-reference image quality assessment (IQA) model by multiscale visual gradient similarity (VGS) is presented. The VGS model adopts a three-stage approach: First, global contrast registration for each scale is applied. Then, pointwise comparison is given by multiplying the similarity of gradient direction with the similarity of gradient magnitude. Third, intrascale pooling is applied, followed by interscale pooling. Several properties of the human visual system related to image gradients have been explored and incorporated into the VGS model. It has been found that Stevens' power law is also suitable for gradient magnitude. Other factors such as quality uniformity, visual detection threshold of gradient, and visual frequency sensitivity also affect subjective image quality. The optimal values of two parameters of VGS are trained with existing IQA databases, and good performance of VGS has been verified by cross validation. Experimental results show that VGS is competitive with state-of-the-art metrics in terms of prediction precision, reliability, simplicity, and low computational cost.
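The pointwise second stage described above, the product of a gradient-magnitude similarity and a gradient-direction similarity, can be sketched as follows. The similarity-ratio form and the stabilizing constant `c` are standard choices in gradient-based IQA, not the VGS authors' exact formulation, and the multiscale and pooling stages are omitted:

```python
import numpy as np

def gradient_similarity(ref, dist, c=170.0):
    """Pointwise gradient comparison: multiply magnitude similarity by
    direction similarity at each pixel, then average (single scale)."""
    gy1, gx1 = np.gradient(ref.astype(float))   # gradients of reference
    gy2, gx2 = np.gradient(dist.astype(float))  # gradients of distorted image
    m1 = np.hypot(gx1, gy1)
    m2 = np.hypot(gx2, gy2)
    # magnitude similarity in the usual ratio form, bounded by 1
    sim_mag = (2 * m1 * m2 + c) / (m1**2 + m2**2 + c)
    # direction similarity via the (stabilized) cosine between gradient vectors
    sim_dir = (gx1 * gx2 + gy1 * gy2 + c) / (m1 * m2 + c)
    return float(np.mean(sim_mag * sim_dir))
```

For identical images both factors equal 1 everywhere, so the score is exactly 1.0; any distortion that perturbs the gradient field pulls the score below 1.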

76 citations

Journal ArticleDOI
TL;DR: The proposed backlight scaling technique is capable of efficiently computing the flickering effect online and subsequently using a measure of the temporal distortion to appropriately adjust the slack on the intra-frame spatial distortion, thereby, achieving a good balance between the two sources of distortion while maximizing the backlight dimming-driven energy saving in the display system and meeting an overall video quality figure of merit.
Abstract: Liquid crystal displays (LCDs) have appeared in applications ranging from medical equipment to automobiles, gas pumps, laptops, and handheld portable computers. These display components present a cascaded energy attenuator to the battery of the handheld device which is responsible for about half of the energy drain at maximum display intensity. As such, the display components become the main focus of every effort for maximization of the embedded system's battery lifetime. This paper proposes an approach for pixel transformation of the displayed image to increase the potential energy saving of the backlight scaling method. The proposed approach takes advantage of human visual system (HVS) characteristics and tries to minimize distortion between the perceived brightness values of the individual pixels in the original image and those of the backlight-scaled image. This is in contrast to previous backlight scaling approaches which simply match the luminance values of the individual pixels in the original and backlight-scaled images. Furthermore, this paper proposes a temporally-aware backlight scaling technique for video streams. The goal is to maximize energy saving in the display system by means of dynamic backlight dimming subject to a video distortion tolerance. The video distortion comprises: 1) an intra-frame (spatial) distortion component due to frame-sensitive backlight scaling and transmittance function tuning and 2) an inter-frame (temporal) distortion component due to large-step backlight dimming across frames modulated by the psychophysical characteristics of the human visual system.
The proposed backlight scaling technique is capable of efficiently computing the flickering effect online and subsequently using a measure of the temporal distortion to appropriately adjust the slack on the intra-frame spatial distortion, thereby achieving a good balance between the two sources of distortion while maximizing the backlight dimming-driven energy saving in the display system and meeting an overall video quality figure of merit. The proposed dynamic backlight scaling approach is amenable to highly efficient hardware realization and has been implemented on the Apollo Testbed II. Actual current measurements demonstrate the effectiveness of the proposed technique compared to previous backlight dimming techniques, which have ignored the temporal distortion effect.
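The interplay of the two distortion budgets above can be illustrated with a toy per-frame rule: pick a backlight level that preserves most pixels after transmittance compensation (the spatial budget), then clamp the frame-to-frame change in backlight level to limit flicker (the temporal budget). The percentile, step limit, and clamp range below are illustrative assumptions, not the paper's optimization:

```python
import numpy as np

def backlight_scale(frame, clip_pct=0.9, max_step=0.1, prev_b=None):
    """Toy dynamic backlight scaling for one 8-bit grayscale frame.
    Returns (backlight level in [0,1], transmittance-compensated frame)."""
    # spatial budget: dim only as far as the 90th-percentile pixel allows,
    # so at most ~10% of pixels saturate after compensation
    b = np.percentile(frame, clip_pct * 100) / 255.0
    # temporal budget: clamp the dimming step across frames to avoid flicker
    if prev_b is not None:
        b = np.clip(b, prev_b - max_step, prev_b + max_step)
    b = float(np.clip(b, 0.05, 1.0))
    # transmittance compensation: boost pixel values, saturating at 255
    compensated = np.clip(frame / b, 0, 255)
    return b, compensated
```

Energy saving grows as `b` shrinks; the temporal clamp trades some of that saving for a bounded inter-frame brightness step, which is the balance the paper's technique tunes online.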

76 citations

Journal ArticleDOI
TL;DR: The proposed watermarking method, based on 4 × 4 image blocks and combining the redundant wavelet transform with singular value decomposition while modeling human visual system (HVS) characteristics through entropy values, provides high robustness, especially against image-processing, JPEG2000, and JPEG XR attacks.
Abstract: With the rapid growth of internet technology, image watermarking method has become a popular copyright protection method for digital images. In this paper, we propose a watermarking method based on 4×4 image blocks using redundant wavelet transform with singular value decomposition considering human visual system (HVS) characteristics expressed by entropy values. The blocks which have the lower HVS entropies are selected for embedding the watermark. The watermark is embedded by examining the U2,1 and U3,1 components of the orthogonal matrix obtained from singular value decomposition of the redundant wavelet transformed image block, where an optimal threshold value based on the trade-off between robustness and imperceptibility is used. In order to provide additional security, a binary watermark is scrambled by Arnold transform before the watermark is embedded into the host image. The proposed scheme is tested under various image processing, compression and geometrical attacks. The test results are compared to other watermarking schemes that use SVD techniques. The experimental results demonstrate that our method can achieve higher imperceptibility and robustness under different types of attacks compared to existing schemes. Our method provides high robustness especially under image processing attacks, JPEG2000 and JPEG XR attacks. It has been observed that the proposed method achieves better performance over the recent existing watermarking schemes.
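Two pieces of the scheme above are easy to sketch: the entropy-based HVS measure used to select embedding blocks, and a per-block SVD embedding step. As a simplification, the code below embeds by quantization-index modulation of the largest singular value rather than the paper's U2,1/U3,1 rule, and omits the redundant wavelet transform and Arnold scrambling; the quantization step `q` is an illustrative assumption:

```python
import numpy as np

def block_entropy(block):
    """HVS proxy for block selection: Shannon entropy of the 8-bit
    intensity histogram (low entropy = smooth block, changes more visible)."""
    hist, _ = np.histogram(block, bins=256, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    return float(-np.sum(p * np.log2(p)))

def embed_bit(block, bit, q=8.0):
    """Embed one watermark bit in a 4x4 block via SVD: quantize the
    largest singular value to an odd/even-offset lattice point."""
    U, s, Vt = np.linalg.svd(block.astype(float))
    s[0] = q * (np.floor(s[0] / q) + (0.75 if bit else 0.25))
    return U @ np.diag(s) @ Vt

def extract_bit(block, q=8.0):
    """Blind extraction: recover the bit from the fractional position
    of the largest singular value within its quantization cell."""
    s = np.linalg.svd(block.astype(float), compute_uv=False)
    return int((s[0] % q) / q > 0.5)
```

In a full scheme, blocks whose `block_entropy` falls below a threshold would host the scrambled watermark bits, with `q` (like the paper's threshold) chosen to balance robustness against imperceptibility.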

76 citations


Network Information
Related Topics (5)
- Feature (computer vision): 128.2K papers, 1.7M citations (89% related)
- Feature extraction: 111.8K papers, 2.1M citations (86% related)
- Image segmentation: 79.6K papers, 1.8M citations (86% related)
- Image processing: 229.9K papers, 3.5M citations (85% related)
- Convolutional neural network: 74.7K papers, 2M citations (84% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    49
2022    94
2021    279
2020    311
2019    351
2018    348