Author

Joyce E. Farrell

Other affiliations: Ames Research Center, Agilent Technologies, Hewlett-Packard
Bio: Joyce E. Farrell is an academic researcher from Stanford University. The author has contributed to research in topics: Image processing & Pixel. The author has an h-index of 29 and has co-authored 117 publications receiving 2687 citations. Previous affiliations of Joyce E. Farrell include Ames Research Center & Agilent Technologies.


Papers
Journal ArticleDOI
TL;DR: A simple model of the human perceiver is constructed that predicts the critical sample rate required to render sampled and continuous moving images indistinguishable and is offered as an explanation of many of the phenomena known as apparent motion.
Abstract: Many visual displays, such as movies and television, rely on sampling in the time domain. We derive the spatiotemporal-frequency spectra for some simple moving images and illustrate how these spectra are altered by sampling in the time domain. We construct a simple model of the human perceiver that predicts the critical sample rate required to render sampled and continuous moving images indistinguishable. The rate is shown to depend on the spatial and the temporal acuity of the observer and on the velocity and spatial-frequency content of the image. Several predictions of this model are tested and confirmed. The model is offered as an explanation of many of the phenomena known as apparent motion. Finally, the implications of the model for computer-generated imagery are discussed.
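The prediction above can be sketched numerically. The snippet below is a minimal illustration assuming a simple "window of visibility" rule: the spectrum of a translating image lies on a line whose slope is the velocity, time sampling replicates that spectrum at multiples of the sample rate, and motion looks continuous once every replica falls outside the observer's spatial and temporal acuity limits. The function name and the acuity limits are illustrative assumptions, not values from the paper.

```python
def critical_sample_rate(velocity, u_max, w_limit=30.0, u_limit=60.0):
    """Approximate critical temporal sample rate (Hz) for a translating image.

    velocity : image velocity in degrees of visual angle per second
    u_max    : highest spatial frequency in the image (cycles/degree)
    w_limit  : temporal acuity limit of the observer (Hz) -- illustrative
    u_limit  : spatial acuity limit of the observer (cycles/degree) -- illustrative

    A translating image's spectrum lies on the line w = -velocity * u.
    Time sampling replicates this spectrum at multiples of the sample rate;
    the sampled and continuous images become indistinguishable once every
    replica falls outside the window bounded by u_limit and w_limit.
    """
    u = min(u_max, u_limit)          # only frequencies the eye can resolve matter
    return w_limit + velocity * u    # rate needed to push the first replica clear

# Example: content with 10 c/deg detail moving at 5 deg/s
print(critical_sample_rate(velocity=5.0, u_max=10.0))  # -> 80.0 Hz with these limits
```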

243 citations

Proceedings ArticleDOI
TL;DR: In this paper, the authors describe system simulations that predict the output of imaging sensors with the same die size but different pixel sizes and present metrics that quantify the spatial resolution and light sensitivity of these different imaging sensors.
Abstract: When the size of a CMOS imaging sensor array is fixed, the only way to increase sampling density and spatial resolution is to reduce pixel size. But reducing pixel size reduces the light sensitivity. Hence, under these constraints, there is a tradeoff between spatial resolution and light sensitivity. Because this tradeoff involves the interaction of many different system components, we used a full system simulation to characterize performance. This paper describes system simulations that predict the output of imaging sensors with the same die size but different pixel sizes and presents metrics that quantify the spatial resolution and light sensitivity for these different imaging sensors.
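The tradeoff described in the abstract can be illustrated with back-of-the-envelope scaling rules rather than the paper's full system simulation. In the sketch below, the die width, photon flux, and the shot-noise-only SNR model are illustrative assumptions.

```python
import math

def pixel_tradeoff(pixel_pitch_um, die_width_mm=4.0, photon_flux=1000.0):
    """Illustrative scaling of sampling density vs. light sensitivity.

    pixel_pitch_um : pixel pitch in micrometers
    die_width_mm   : fixed sensor die width, the constraint in the paper (illustrative value)
    photon_flux    : photons per square micrometer per exposure (illustrative)

    Smaller pixels raise the Nyquist frequency (more resolution) but collect
    fewer photons, so shot-noise-limited SNR drops with pixel size.
    """
    pixels_across = die_width_mm * 1000.0 / pixel_pitch_um
    nyquist_cycles_per_mm = 1.0 / (2.0 * pixel_pitch_um * 1e-3)
    photons = photon_flux * pixel_pitch_um ** 2                # signal ~ pixel area
    snr_db = 20.0 * math.log10(photons / math.sqrt(photons))   # shot noise only
    return pixels_across, nyquist_cycles_per_mm, snr_db

for pitch in (2.0, 4.0, 8.0):
    print(pitch, pixel_tradeoff(pitch))
```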

154 citations

Proceedings ArticleDOI
TL;DR: The Image Systems Evaluation Toolkit (ISET) as mentioned in this paper is an integrated suite of software routines that simulate the capture and processing of visual scenes and includes a graphical user interface (GUI) for users to control the physical characteristics of the scene and many parameters of the optics, sensor electronics and image processing pipeline.
Abstract: The Image Systems Evaluation Toolkit (ISET) is an integrated suite of software routines that simulate the capture and processing of visual scenes. ISET includes a graphical user interface (GUI) for users to control the physical characteristics of the scene and many parameters of the optics, sensor electronics, and image-processing pipeline. ISET also includes color tools and metrics based on international standards (chromaticity coordinates, CIELAB, and others) that assist the engineer in evaluating the color accuracy and quality of the rendered image.
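ISET itself is a MATLAB toolbox with a GUI; the sketch below only illustrates the kind of standards-based metric it reports, computing a CIE 1976 ΔE*ab between two XYZ tristimulus values. The D65 white point and the sample values are assumptions chosen for the example, not ISET output.

```python
import numpy as np

def xyz_to_lab(xyz, white=(95.047, 100.0, 108.883)):   # D65 reference white
    """CIE 1976 L*a*b* from tristimulus XYZ."""
    t = np.asarray(xyz, float) / np.asarray(white, float)
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[1] - 16
    a = 500 * (f[0] - f[1])
    b = 200 * (f[1] - f[2])
    return np.array([L, a, b])

def delta_e76(xyz1, xyz2):
    """CIE 1976 color difference: Euclidean distance in L*a*b*."""
    return float(np.linalg.norm(xyz_to_lab(xyz1) - xyz_to_lab(xyz2)))

# Example with two nearby colors (values are arbitrary)
print(delta_e76((41.2, 21.3, 1.9), (40.0, 22.0, 2.5)))
```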

135 citations

Proceedings ArticleDOI
23 Feb 1997
TL;DR: The S-CIELAB extension includes a spatial processing step, prior to the CIELAB ΔE calculation, so that the results correspond better to color difference perception by the human eye.
Abstract: We describe experimental tests of a spatial extension to the CIELAB color metric for measuring color reproduction errors of digital images. The standard CIELAB ΔE metric is suitable for use on large uniform color targets, but not on images, because color sensitivity changes as a function of spatial pattern. The S-CIELAB extension includes a spatial processing step, prior to the CIELAB ΔE calculation, so that the results correspond better to color difference perception by the human eye. The S-CIELAB metric was used to predict texture visibility of printed halftone patterns. The results correlate with perceptual data better than standard CIELAB and point the way to various improvements.
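A minimal sketch of the S-CIELAB idea, spatial filtering of opponent-color channels before a color-difference calculation. The opponent transform and blur widths below are placeholders rather than the published filter parameters, and a Euclidean difference in filtered XYZ stands in for the full CIELAB ΔE step.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Placeholder opponent transform (rows: luminance, red-green, blue-yellow).
OPP = np.array([[0.28, 0.72, -0.11],
                [-0.59, 0.41, -0.01],
                [0.01, 0.09, -0.80]])

def s_cielab_like(img1_xyz, img2_xyz, sigmas=(1.0, 2.0, 3.0)):
    """Spatially filter opponent channels, then take a per-pixel color difference.

    img*_xyz : H x W x 3 arrays of XYZ values
    sigmas   : per-channel blur widths in pixels (illustrative; chromatic
               channels are blurred more than luminance, as in S-CIELAB)
    """
    def filtered(img):
        opp = np.asarray(img, float) @ OPP.T       # per-pixel opponent coordinates
        for c, s in enumerate(sigmas):
            opp[..., c] = gaussian_filter(opp[..., c], s)
        return opp @ np.linalg.inv(OPP).T          # back to XYZ
    d = filtered(img1_xyz) - filtered(img2_xyz)
    # The full metric converts the filtered XYZ to CIELAB and takes ΔE;
    # a Euclidean difference in XYZ stands in here for brevity.
    return np.sqrt((d ** 2).sum(axis=-1))
```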

116 citations

Proceedings ArticleDOI
16 Sep 1996
TL;DR: This work investigated the relationship between the perceived image fidelity and image quality of halftone textures by asking subjects to rank-order a set of printed halftones by smoothness and then to reduce the contrast of each pattern until it was at threshold.
Abstract: Image fidelity (inferred by the ability to discriminate between two images) and image quality (inferred by the preference for one image over another) are often assumed to be directly related. We investigated the relationship between the perceived image fidelity and image quality of halftone textures. Subjects were asked to rank order a set of printed halftone swatches on the basis of smoothness. They were then asked to reduce the contrast of each pattern until it was at threshold, thus providing an estimate of the pattern's perceptual strength and its discriminability from a non-textured swatch. We found only a moderate correlation between image fidelity and image quality.
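The analysis in this kind of study reduces to a rank correlation between the quality measure (the smoothness ranking) and the fidelity measure (the contrast threshold). The sketch below uses made-up data and SciPy's Spearman correlation; none of the numbers come from the paper.

```python
from scipy.stats import spearmanr

# Hypothetical data for six halftone swatches:
# quality_rank       : smoothness rank assigned by subjects (1 = smoothest)
# threshold_contrast : contrast at which the texture is just discriminable
quality_rank       = [1, 2, 3, 4, 5, 6]
threshold_contrast = [0.020, 0.018, 0.031, 0.025, 0.040, 0.033]

rho, p = spearmanr(quality_rank, threshold_contrast)
print(f"Spearman rho = {rho:.2f} (p = {p:.2f})")  # moderate, not perfect, correlation
```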

100 citations


Cited by
Journal ArticleDOI
TL;DR: In this article, a structural similarity index is proposed for image quality assessment based on the degradation of structural information, and is validated against both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000.
Abstract: Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/.
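The structural similarity index has a compact closed form. The sketch below computes a single-window SSIM between two grayscale images; the reference implementation at the URL above computes it over local windows and averages, so this is a simplification.

```python
import numpy as np

def ssim_global(x, y, data_range=255.0):
    """Single-window SSIM between two grayscale images of equal shape.

    SSIM = ((2*mu_x*mu_y + C1) * (2*sigma_xy + C2)) /
           ((mu_x^2 + mu_y^2 + C1) * (sigma_x^2 + sigma_y^2 + C2))
    with the standard stabilizing constants C1 = (0.01*L)^2, C2 = (0.03*L)^2.
    """
    x = np.asarray(x, float).ravel()
    y = np.asarray(y, float).ravel()
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```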

40,609 citations

Journal ArticleDOI
TL;DR: In this article, a class of models for human motion mechanisms is described in which the first stage consists of linear filters that are oriented in space-time and tuned in spatial frequency; the outputs of quadrature pairs of such filters are squared and summed to give a measure of motion energy.
Abstract: A motion sequence may be represented as a single pattern in x–y–t space; a velocity of motion corresponds to a three-dimensional orientation in this space. Motion information can be extracted by a system that responds to the oriented spatiotemporal energy. We discuss a class of models for human motion mechanisms in which the first stage consists of linear filters that are oriented in space-time and tuned in spatial frequency. The outputs of quadrature pairs of such filters are squared and summed to give a measure of motion energy. These responses are then fed into an opponent stage. Energy models can be built from elements that are consistent with known physiology and psychophysics, and they permit a qualitative understanding of a variety of motion phenomena.
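A minimal sketch of the energy computation described above, using one quadrature pair of temporal Gabor filters applied to a 1-D signal. A full motion-energy model uses filters oriented in space-time at several scales plus an opponent stage; the filter parameters here are illustrative.

```python
import numpy as np

def motion_energy(signal, freq=0.1, sigma=10.0):
    """Energy from one quadrature pair of Gabor filters applied to a 1-D signal.

    signal : intensity over time at one location (1-D array)
    freq   : filter center frequency in cycles per sample (illustrative)
    sigma  : Gaussian envelope width in samples (illustrative)

    The even (cosine) and odd (sine) filters form a quadrature pair;
    squaring and summing their outputs gives a phase-independent energy,
    the core operation of the motion-energy model.
    """
    t = np.arange(-3 * sigma, 3 * sigma + 1)
    env = np.exp(-t ** 2 / (2 * sigma ** 2))
    even = env * np.cos(2 * np.pi * freq * t)
    odd = env * np.sin(2 * np.pi * freq * t)
    r_even = np.convolve(signal, even, mode="same")
    r_odd = np.convolve(signal, odd, mode="same")
    return r_even ** 2 + r_odd ** 2   # energy over time

drifting = np.cos(2 * np.pi * 0.1 * np.arange(200))
print(motion_energy(drifting).mean())
```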

3,504 citations

Journal ArticleDOI
11 Sep 1987-Science
TL;DR: A psychological space is established for any set of stimuli by determining metric distances between the stimuli such that the probability that a response learned to any stimulus will generalize to any other is an invariant monotonic function of the distance between them.
Abstract: A psychological space is established for any set of stimuli by determining metric distances between the stimuli such that the probability that a response learned to any stimulus will generalize to any other is an invariant monotonic function of the distance between them. To a good approximation, this probability of generalization (i) decays exponentially with this distance, and (ii) does so in accordance with one of two metrics, depending on the relation between the dimensions along which the stimuli vary. These empirical regularities are mathematically derivable from universal principles of natural kinds and probabilistic geometry that may, through evolutionary internalization, tend to govern the behaviors of all sentient organisms.
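The law summarized above amounts to an exponential decay of generalization with distance in psychological space, with the metric depending on how the stimulus dimensions relate. A minimal sketch with illustrative stimulus coordinates:

```python
import numpy as np

def generalization(stimulus_a, stimulus_b, metric="euclidean"):
    """Probability that a response learned to stimulus_a generalizes to stimulus_b.

    Stimuli are points in psychological space; per the universal law,
    generalization decays exponentially with their distance.  The metric
    (Euclidean vs. city-block) depends on whether the stimulus dimensions
    are integral or separable.
    """
    a, b = np.asarray(stimulus_a, float), np.asarray(stimulus_b, float)
    d = np.linalg.norm(a - b) if metric == "euclidean" else np.abs(a - b).sum()
    return np.exp(-d)

print(generalization([0.0, 0.0], [1.0, 0.5]))
```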

2,225 citations

Journal ArticleDOI
TL;DR: A brief overview of some of the most significant deep learning schemes used in computer vision problems, that is, Convolutional Neural Networks, Deep Boltzmann Machines and Deep Belief Networks, and Stacked Denoising Autoencoders are provided.
Abstract: Over the last few years, deep learning methods have been shown to outperform previous state-of-the-art machine learning techniques in several fields, with computer vision being one of the most prominent cases. This review paper provides a brief overview of some of the most significant deep learning schemes used in computer vision problems, namely Convolutional Neural Networks, Deep Boltzmann Machines and Deep Belief Networks, and Stacked Denoising Autoencoders. A brief account of their history, structure, advantages, and limitations is given, followed by a description of their applications in various computer vision tasks, such as object detection, face recognition, action and activity recognition, and human pose estimation. Finally, a brief overview is given of future directions in designing deep learning schemes for computer vision problems and the challenges involved therein.

1,970 citations

Book
23 Nov 2007
TL;DR: This new edition now contains essential information on steganalysis and steganography, and digital watermark embedding is given a complete update with new processes and applications.
Abstract: Digital audio, video, images, and documents are flying through cyberspace to their respective owners. Unfortunately, along the way, individuals may choose to intervene and take this content for themselves. Digital watermarking and steganography technology greatly reduces the instances of this by limiting or eliminating the ability of third parties to decipher the content they have taken. The many techniques of digital watermarking (embedding a code) and steganography (hiding information) continue to evolve as the applications that necessitate them do the same. The authors of this second edition provide an update on the framework for applying these techniques that they provided researchers and professionals in the first, well-received edition. Steganography and steganalysis (the art of detecting hidden information) have been added to a robust treatment of digital watermarking, as many in each field research and deal with the other. New material includes watermarking with side information, QIM, and dirty-paper codes. The revision and inclusion of new material by these influential authors has created a must-own book for anyone in this profession.
* This new edition now contains essential information on steganalysis and steganography.
* New concepts and new applications, including QIM, are introduced.
* Digital watermark embedding is given a complete update with new processes and applications.

1,773 citations