Topic

Human visual system model

About: Human visual system model is a research topic. Over its lifetime, 8,697 publications have been published within this topic, receiving 259,440 citations.


Papers
Proceedings ArticleDOI
05 Aug 2011
TL;DR: A novel and practical texture mapping algorithm for hand-drawn cartoons that allows the production of visually rich animations with minimal user effort and significantly reduces the amount of manual labor required to add visually complex detail to an animation, thus enabling efficient cartoon texturing for computer-assisted animation production pipelines.
Abstract: We present a novel and practical texture mapping algorithm for hand-drawn cartoons that allows the production of visually rich animations with minimal user effort. Unlike previous techniques, our approach works entirely in the 2D domain and does not require the knowledge or creation of a 3D proxy model. Inspired by the fact that the human visual system tends to focus on the most salient features of a scene, which we observe for hand-drawn cartoons are the contours rather than the interior of regions, we can create the illusion of temporally coherent animation using only rough 2D image registration. This key observation allows us to design a simple yet effective algorithm that significantly reduces the amount of manual labor required to add visually complex detail to an animation, thus enabling efficient cartoon texturing for computer-assisted animation production pipelines. We demonstrate our technique on a variety of input animations as well as provide examples of postprocessing operations that can be applied to simulate 3D-like effects entirely in the 2D domain.
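The "rough 2D image registration" the method relies on can be illustrated with a minimal sketch: a brute-force search for the integer translation minimizing the mean squared difference between two frames. This is an illustrative stand-in under assumed semantics (the paper's actual registration procedure is not specified here), and the function name and parameters are hypothetical.

```python
def register_translation(ref, img, max_shift=3):
    """Rough 2-D registration sketch: exhaustively search integer
    translations (dy, dx) within +/- max_shift and return the one that
    minimizes mean squared difference, i.e. img[y+dy][x+dx] ~ ref[y][x]."""
    h, w = len(ref), len(ref[0])
    best, best_err = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err, n = 0.0, 0
            for y in range(h):
                for x in range(w):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        d = ref[y][x] - img[yy][xx]
                        err += d * d
                        n += 1
            if n and err / n < best_err:
                best_err, best = err / n, (dy, dx)
    return best
```

Real pipelines would use subpixel or feature-based registration, but for contour-dominated cartoon frames even a coarse translation estimate of this kind can suffice, which is the paper's key observation.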

45 citations

Journal ArticleDOI
TL;DR: A removable visible watermarking scheme, which operates in the discrete cosine transform (DCT) domain, is proposed for combating copyright piracy and test results show that the introduced scheme succeeds in preventing the embedded watermark from illegal removal.
Abstract: A removable visible watermarking scheme, which operates in the discrete cosine transform (DCT) domain, is proposed for combating copyright piracy. First, the original watermark image is divided into 16×16 blocks and the preprocessed watermark to be embedded is generated by performing element-by-element matrix multiplication on the DCT coefficient matrix of each block and a key-based matrix. The intention of generating the preprocessed watermark is to guarantee the infeasibility of the illegal removal of the embedded watermark by the unauthorized users. Then, adaptive scaling and embedding factors are computed for each block of the host image and the preprocessed watermark according to the features of the corresponding blocks to better match the human visual system characteristics. Finally, the significant DCT coefficients of the preprocessed watermark are adaptively added to those of the host image to yield the watermarked image. The watermarking system is robust against compression to some extent. The performance of the proposed method is verified, and the test results show that the introduced scheme succeeds in preventing the embedded watermark from illegal removal. Moreover, experimental results demonstrate that legally recovered images can achieve superior visual effects, and peak signal-to-noise ratio values of these images are >50 dB.
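The preprocessing step described above — a blockwise DCT followed by an element-by-element product with a key-based matrix — can be sketched as follows. The naive DCT-II implementation is standard, but the way the key matrix is derived (uniform values seeded from the key) is an assumption for illustration; the paper's adaptive scaling and embedding factors are not modeled.

```python
import math
import random

def dct2(block):
    """Naive orthonormal 2-D DCT-II of an N x N block (O(N^4), fine for 16x16)."""
    n = len(block)
    def c(k):
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            out[u][v] = c(u) * c(v) * s
    return out

def preprocess_watermark_block(block, key):
    """Element-by-element product of the block's DCT coefficients and a
    key-derived matrix, mirroring the paper's preprocessing step.
    The uniform [0.5, 1.5] key-matrix construction is a hypothetical choice."""
    n = len(block)
    rng = random.Random(key)
    key_matrix = [[rng.uniform(0.5, 1.5) for _ in range(n)] for _ in range(n)]
    coeffs = dct2(block)
    return [[coeffs[u][v] * key_matrix[u][v] for v in range(n)]
            for u in range(n)]
```

Without the key, an attacker cannot invert the element-wise product, which is what makes unauthorized removal of the embedded watermark infeasible.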

45 citations

Journal ArticleDOI
TL;DR: Individual performances on tests of contrast sensitivity, orientation discrimination, wavelength discrimination, and vernier acuity covaried, such that proficiency on one test predicted proficiency on the others, indicating a wide range of visual abilities among normal subjects.
Abstract: The responses of 20 young adult emmetropes with normal color vision were measured on a battery of visual performance tasks. Using previously documented tests of known reliability, we evaluated orientation discrimination, contrast sensitivity, wavelength sensitivity, vernier acuity, direction-of-motion detection, velocity discrimination, and complex form identification. Performance varied markedly between individuals, both on a given test and when the scores from all tests were combined to give an overall indication of visual performance. Moreover, individual performances on tests of contrast sensitivity, orientation discrimination, wavelength discrimination, and vernier acuity covaried, such that proficiency on one test predicted proficiency on the others. These results indicate a wide range of visual abilities among normal subjects and provide the basis for an overall index of visual proficiency that can be used to determine whether the surprisingly large and coordinated size differences of the components of the human visual system (Andrews, Halpern, & Purves, 1997) are reflected in corresponding variations in visual performance.

45 citations

Journal ArticleDOI
TL;DR: A method that makes use of the retinal integration time in the human visual system for increasing the resolution of displays, and draws a formal connection between the display and super-resolution techniques and finds that both methods share the same limitation.
Abstract: We present a method that makes use of the retinal integration time in the human visual system for increasing the resolution of displays. Given an input image with a resolution higher than the display resolution, we compute several images that match the display's native resolution. We then render these low-resolution images in a sequence that repeats itself on a high refresh-rate display. The period of the sequence falls below the retinal integration time and therefore the eye integrates the images temporally and perceives them as one image. In order to achieve resolution enhancement we apply small-amplitude vibrations to the display panel and synchronize them with the screen refresh cycles. We derive the perceived image model and use it to compute the low-resolution images that are optimized to enhance the apparent resolution of the perceived image. This approach achieves resolution enhancement without having to move the displayed content across the screen and hence offers a more practical solution than existing approaches. Moreover, we use our model to establish limitations on the amount of resolution enhancement achievable by such display systems. In this analysis we draw a formal connection between our display and super-resolution techniques and find that both methods share the same limitation, yet this limitation stems from different sources. Finally, we describe in detail a simple physical realization of our display system and demonstrate its ability to match most of the spectrum displayable on a screen with twice the resolution.
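The perceived-image model can be sketched in one dimension: the eye temporally averages a sequence of low-resolution frames, each shown while the panel sits at a different sub-pixel offset. Nearest-neighbour panel sampling and the function's name and signature are assumptions; the paper additionally optimizes the low-resolution frames against this forward model, which is omitted here.

```python
def perceived_signal(frames, shifts, up=2):
    """Forward model of the retinally integrated signal (1-D toy).

    frames: list of low-resolution frames (lists of floats).
    shifts: per-frame panel offset, in high-resolution pixel units.
    up:     ratio of perceived resolution to panel resolution.

    Each frame is rendered at its shifted panel position and the eye
    averages the whole sequence into one higher-resolution percept.
    """
    n_hi = len(frames[0]) * up
    acc = [0.0] * n_hi
    for frame, shift in zip(frames, shifts):
        for i in range(n_hi):
            # which panel pixel covers high-res position i at this offset
            src = (i - shift) // up
            src = min(max(src, 0), len(frame) - 1)  # clamp at panel edges
            acc[i] += frame[src]
    return [a / len(frames) for a in acc]
```

Because each offset samples the scene on a different sub-pixel grid, the averaged percept carries more spatial information than any single panel-resolution frame, provided the sequence period stays below the retinal integration time.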

45 citations

Journal ArticleDOI
TL;DR: Inspired by the motion-related process in the HVS, a novel full-reference assessor along salient trajectories (FAST) for VQA (which combines the spatial, temporal, and joint spatial-temporal quality degradations) is introduced.
Abstract: With the rapid growth of digital video through the Internet, a reliable objective video-quality assessment (VQA) algorithm is in great demand for video management. Motion information plays a dominant role for video perception, and the human visual system (HVS) is able to track moving objects effectively with eye movement. Moreover, the middle temporal area of the brain is selective for moving objects with particular velocities. In other words, visual contents that are along the motion trajectories will automatically attract our attention for dedicated processing. Inspired by the motion-related process in the HVS, we suggest analyzing the degradation along attended motion trajectories for VQA. The characteristic of motion velocity along each trajectory is analyzed for temporal quality measurement. Meanwhile, visual information along each trajectory is extracted for joint spatial-temporal quality measurement. Finally, considering the spatial-quality degradation from each frame, a novel full-reference assessor along salient trajectories (FAST) for VQA (which combines the spatial, temporal, and joint spatial-temporal quality degradations) is introduced. Experimental results on five publicly available VQA databases demonstrate that the proposed FAST VQA model performs consistently with the subjective perception. The source code of the proposed method is available at http://web.xidian.edu.cn/wjj/paper.html .

45 citations


Network Information
Related Topics (5)
- Feature (computer vision) — 128.2K papers, 1.7M citations, 89% related
- Feature extraction — 111.8K papers, 2.1M citations, 86% related
- Image segmentation — 79.6K papers, 1.8M citations, 86% related
- Image processing — 229.9K papers, 3.5M citations, 85% related
- Convolutional neural network — 74.7K papers, 2M citations, 84% related
Performance
Metrics
No. of papers in the topic in previous years:

Year  Papers
2023  49
2022  94
2021  279
2020  311
2019  351
2018  348