
Human visual system model

About: Human visual system model is a research topic. Over the lifetime, 8697 publications have been published within this topic receiving 259440 citations.


Papers
Proceedings ArticleDOI
11 Jun 2002
TL;DR: It is argued that vision relies on a dynamic, just-in-time representation with deep similarities to the way users interact with external displays, and that these similarities can provide a basis for designing intelligent display systems that interact with humans in highly effective and novel ways.
Abstract: One of the more compelling beliefs about vision is that it is based on representations that are coherent and complete, with everything in the visual field described in great detail. However, changes made during a visual disturbance are found to be difficult to see, arguing against the idea that our brains contain a detailed, picture-like representation of the scene. Instead, it is argued here that a more dynamic, just-in-time representation is involved, one with deep similarities to the way that users interact with external displays. It is further argued that these similarities can provide a basis for the design of intelligent display systems that can interact with humans in highly effective and novel ways.

57 citations

Proceedings ArticleDOI
26 Aug 2005
TL;DR: The authors investigate the influence of sound effects on the perceived smoothness of motion in an animation (i.e., on the perceived delivered frame rate) and find that participants who watched audiovisual walkthroughs gave more erroneous answers than participants in the "No Sound" group, regardless of their familiarity with animated CG.
Abstract: The developers and users of interactive computer graphics (CG), such as 3D games and virtual reality, are demanding ever more realistic computer-generated imagery delivered at high frame rates, to enable a greater perceptual experience for the user. As more computational power and/or transmission bandwidth are not always available, special techniques are applied that trade off fidelity in order to reduce computational complexity, while trying to minimise the perceptibility of the resulting visual defects. Research on human visual perception has promoted the development of perception-driven CG techniques, where knowledge of the human visual system and its weaknesses is exploited when rendering/displaying 3D graphics. It is well known in the human perception community that many factors, including audio stimuli, may influence the amount of cognitive resources available to perform a visual task. In this paper we investigate the influence sound effects have on the perceptibility of motion smoothness in an animation (i.e. on the perception of delivered frame rate). Forty participants viewed pairs of computer-generated walkthrough animations (with the same visual content within each pair) displayed at five different frame rates, in all possible combinations. Both walkthroughs in each test pair were either silent or accompanied by sound effects, and the participant had to detect the one with smoother motion (i.e. the one delivered at the higher frame rate). A significant effect of sound on perceived smoothness was revealed. The participants who watched the audiovisual walkthroughs gave more erroneous answers while performing their task compared to the subjects in the "No Sound" group, regardless of their familiarity with animated CG. In particular, the unfamiliar participants failed to notice motion smoothness variations that were apparent to them in the absence of sound.
The effect of the type of camera movement in the scene (translation or rotation) on the viewers' perception of the motion smoothness/jerkiness was also investigated, but no significant association between them was found. Our results should lead to new insights in 3D graphics regarding the requirements for the delivered frame rate in a wide range of applications.

57 citations

30 Oct 1992
TL;DR: Analysis of the results of the experiments showed that rapid and accurate estimation is possible with both hue and orientation, which suggests that these and perhaps other preattentive features can be used to create visualization tools which allow high-speed multivariate data analysis.
Abstract: A new method for designing multivariate data visualization tools is presented. These tools allow users to perform simple tasks such as estimation, target detection, and detection of data boundaries rapidly and accurately. Our design technique is based on principles arising from an area of cognitive psychology called preattentive processing. Preattentive processing involves visual features that can be detected by the human visual system without focusing attention on particular regions in an image. Examples of preattentive features include colour, orientation, intensity, size, shape, curvature, and line length. Detection is performed very rapidly by the visual system, almost certainly using a large degree of parallelism. We studied two known preattentive features, hue and orientation. The particular question investigated is whether rapid and accurate estimation is possible using these preattentive features. Experiments that simulated displays using our preattentive visualization tool were run. Analysis of the results of the experiments showed that rapid and accurate estimation is possible with both hue and orientation. A second question, whether interaction occurs between the two features, was answered negatively. This suggests that these and perhaps other preattentive features can be used to create visualization tools which allow high-speed multivariate data analysis.
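The estimation task described in the abstract above can be illustrated with a small sketch. This is a hypothetical toy, not the authors' experimental code: it simulates a display whose glyphs carry a hue channel (one of the two preattentive features studied) and computes the proportion that a viewer would estimate rapidly from hue alone; the function names are illustrative assumptions.

```python
import random

def make_display(n, frac_red, seed=0):
    # Simulate a display of n glyphs; each glyph's hue is the preattentive
    # feature carrying the quantity to be estimated.
    rng = random.Random(seed)
    return ["red" if rng.random() < frac_red else "blue" for _ in range(n)]

def true_proportion(display):
    # The proportion a viewer estimates preattentively from hue alone,
    # without attending to individual glyphs.
    return sum(g == "red" for g in display) / len(display)

# A simulated trial: 400 glyphs, 30% of them red.
display = make_display(400, 0.3)
print(round(true_proportion(display), 2))
```

In the paper's experiments, participants' estimates of such proportions were compared against the true value to test whether rapid, accurate estimation is possible via hue and orientation.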

57 citations

Journal ArticleDOI
TL;DR: The spatio-temporal frequency response of the human visual system and the influence of eye movements are discussed, and subthreshold summation suggests a local rather than a global distortion measure.

57 citations

Proceedings ArticleDOI
12 Oct 2020
TL;DR: Experimental results show that the proposed model can predict subjective video quality more accurately than the publicly available video quality models representing the state-of-the-art.
Abstract: Due to the wide range of different natural temporal and spatial distortions appearing in user-generated video content, blind assessment of natural video quality is a challenging research problem. In this study, we combine the hand-crafted statistical temporal features used in a state-of-the-art video quality model with spatial features obtained from a convolutional neural network trained for image quality assessment via transfer learning. Experimental results on two recently published natural video quality databases show that the proposed model can predict subjective video quality more accurately than the publicly available video quality models representing the state-of-the-art. The proposed model is also competitive in terms of computational complexity.
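The feature-fusion idea in the abstract above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the temporal statistics, the temporal pooling of CNN features by averaging, and all function names are placeholders, and `cnn` stands in for a pretrained image-quality network.

```python
def temporal_features(frame_diffs):
    # Simple statistics over frame-to-frame differences, standing in for
    # the hand-crafted temporal features.
    n = len(frame_diffs)
    mean = sum(frame_diffs) / n
    var = sum((d - mean) ** 2 for d in frame_diffs) / n
    return [mean, var]

def spatial_features(frames, cnn):
    # Pool per-frame CNN feature vectors over time by averaging.
    # `cnn` is any callable mapping a frame to a feature vector.
    feats = [cnn(f) for f in frames]
    dim = len(feats[0])
    return [sum(f[i] for f in feats) / len(feats) for i in range(dim)]

def fused_features(frames, frame_diffs, cnn):
    # Concatenate temporal and spatial features; a regressor trained on
    # subjective scores would then map this vector to a quality score.
    return temporal_features(frame_diffs) + spatial_features(frames, cnn)

# Toy usage with a dummy "CNN" that returns two summary values per frame.
dummy_cnn = lambda frame: [float(sum(frame)), float(max(frame))]
frames = [[0, 1, 2], [1, 2, 3]]
diffs = [1.0]
vec = fused_features(frames, diffs, dummy_cnn)
print(vec)  # 2 temporal + 2 spatial values
```

The design choice being illustrated is that neither feature family alone covers both distortion types: hand-crafted statistics capture temporal artifacts, while the transferred CNN features capture spatial ones.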

57 citations


Network Information
Related Topics (5)
- Feature (computer vision): 128.2K papers, 1.7M citations (89% related)
- Feature extraction: 111.8K papers, 2.1M citations (86% related)
- Image segmentation: 79.6K papers, 1.8M citations (86% related)
- Image processing: 229.9K papers, 3.5M citations (85% related)
- Convolutional neural network: 74.7K papers, 2M citations (84% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    49
2022    94
2021    279
2020    311
2019    351
2018    348