Author

Ricardo Motta

Bio: Ricardo Motta is an academic researcher. The author has contributed to research in topics: Visible spectrum & The Internet. The author has an h-index of 1 and has co-authored 2 publications receiving 508 citations.

Papers
Proceedings Article
01 Jan 1996
TL;DR: The aim of this color space is to complement current color management strategies by enabling a third method of handling color in operating systems, device drivers and the Internet, one that utilizes a simple and robust device-independent color definition.
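The proposal's key idea is a single device-independent definition that software can assume by default. Assuming this is the sRGB proposal (the description matches it), the sketch below shows the standard sRGB transfer function such a definition pins down; the constants are those of the published sRGB specification, and the function name is mine.

```python
def srgb_encode(linear: float) -> float:
    """Encode a linear-light value in [0, 1] as an sRGB value in [0, 1].

    The curve is a linear toe near black followed by a 1/2.4 power
    segment -- the fixed, device-independent nonlinearity that lets
    every OS, driver and browser interpret pixel values the same way.
    """
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1.0 / 2.4) - 0.055
```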

535 citations

Proceedings Article
01 Jan 1993
TL;DR: According to Grum, luminescence is "the phenomenon of the emission by matter of electromagnetic radiation that for certain wavelengths or restricted regions of the spectrum is in excess of that due to the thermal radiation of the material at the same temperature."
Abstract: According to Grum, luminescence is "the phenomenon of the emission by matter of electromagnetic radiation that for certain wavelengths or restricted regions of the spectrum is in excess of that due to the thermal radiation of the material at the same temperature." We encounter several different types of luminescence in everyday life, such as fluorescence, phosphorescence, cathodoluminescence and chemiluminescence. Fluorescence occurs when light absorbed in one wavelength band generates emission in longer wavelength bands; this shift towards longer wavelengths is called the Stokes shift. Fluorescence is especially prevalent in printed materials, where it is commonly associated with the optical brighteners used to whiten paper and with the dyes used in printing inks. Paper has a strong tendency to absorb in the blue region of the visible spectrum (that is why grocery bags are brown). Optical brighteners work by absorbing in the near-UV region, where we do not perceive the decrease in reflectance, and re-emitting the energy as blue light, which offsets the paper's natural yellowishness. The most common fluorescent dyes are red and magenta. They work by absorbing radiation in the blue-green region, which makes them look more saturated, and re-emitting energy in the red end of the spectrum, further increasing the colorimetric purity. Fluorescent yellow dyes are also very common.

Fluorescence has major implications for calibrating color reproduction systems because the spectral reflectance of fluorescent surfaces depends on the illumination. Unlike non-fluorescing surfaces, which can be described by an n-dimensional vector of spectral reflectance factors, fluorescing surfaces require at least three vectors, called here the diffuse reflectance, the stimulation spectrum and the emission spectrum. Alternatively, a fluorescent surface can be described as an (n x n) lower-triangular matrix whose diagonal contains the conventional non-fluorescent reflectance and whose off-diagonal elements contain the contribution due to fluorescence. Figure 1 illustrates such a matrix for a magenta printer ink.

For scanner calibration, fluorescence represents a major obstacle. Colorimetric calibration can only be performed accurately for a fixed illuminant, and depending on the combination of detector, light and filters, one might find that such an illuminant has an undesirable white point and metameric properties. The calibration of printers is no less problematic. While diffuse surfaces can be characterized with simple spectrophotometry, with tristimulus values computed later for any desired illuminant, fluorescent surfaces must be characterized with a source that closely resembles the desired illuminant, or by measuring the entire reflectance matrix.

Figure 1. Reflectance matrix for a magenta printing ink.
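To make the matrix description concrete, here is a minimal sketch of how such a lower-triangular (Donaldson-style) matrix predicts what an instrument sees under a particular source. The band grid, variable names and normalization are assumptions for illustration, not details from the paper.

```python
import numpy as np

def effective_reflectance(D: np.ndarray, illuminant: np.ndarray) -> np.ndarray:
    """Effective spectral reflectance of a fluorescent surface under a source.

    D : (n, n) lower-triangular matrix. The diagonal holds the conventional
        non-fluorescent reflectance; entry D[j, i] with j > i holds the
        emission at band j stimulated by absorption at the shorter-wavelength
        band i (the Stokes shift is what keeps D lower-triangular).
    illuminant : (n,) spectral power of the source, assumed nonzero.
    """
    # Light leaving the surface at each band: reflected light (diagonal)
    # plus fluoresced light pumped by all shorter wavelengths.
    emitted = D @ illuminant
    # Dividing by the illuminant gives the "reflectance" an instrument
    # would report under this source -- which is illuminant-dependent,
    # exactly the calibration obstacle the abstract describes.
    return emitted / illuminant
```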

1 citation


Cited by
Journal Article
TL;DR: A quality assessment method, most apparent distortion (MAD), which attempts to explicitly model two separate strategies: local luminance and contrast masking are used to estimate detection-based perceived distortion in high-quality images, while changes in the local statistics of spatial-frequency components are used to estimate appearance-based perceived distortion in low-quality images.
Abstract: The mainstream approach to image quality assessment has centered around accurately modeling the single most relevant strategy employed by the human visual system (HVS) when judging image quality (e.g., detecting visible differences, and extracting image structure/information). In this work, we suggest that a single strategy may not be sufficient; rather, we advocate that the HVS uses multiple strategies to determine image quality. For images containing near-threshold distortions, the image is most apparent, and thus the HVS attempts to look past the image and look for the distortions (a detection-based strategy). For images containing clearly visible distortions, the distortions are most apparent, and thus the HVS attempts to look past the distortion and look for the image's subject matter (an appearance-based strategy). Here, we present a quality assessment method [most apparent distortion (MAD)], which attempts to explicitly model these two separate strategies. Local luminance and contrast masking are used to estimate detection-based perceived distortion in high-quality images, whereas changes in the local statistics of spatial-frequency components are used to estimate appearance-based perceived distortion in low-quality images. We show that a combination of these two measures can perform well in predicting subjective ratings of image quality.
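The abstract leaves the combination rule implicit. As a hedged sketch: the published MAD score blends the two measures with a weighted geometric mean whose weight shifts with distortion severity; the code below assumes that form, with placeholder constants rather than the fitted values from the paper.

```python
def mad_index(d_detect: float, d_appear: float,
              beta1: float = 0.5, beta2: float = 0.1) -> float:
    """Blend detection-based and appearance-based distortion measures.

    d_detect and d_appear are the two nonnegative measures produced by
    the masking-based and statistics-based stages; beta1 and beta2 are
    placeholder values, not the constants fitted in the paper.
    """
    # Near threshold (small d_detect) alpha -> 1 and the detection-based
    # term dominates; as distortion becomes clearly visible, alpha falls
    # and the appearance-based term takes over.
    alpha = 1.0 / (1.0 + beta1 * d_detect ** beta2)
    return (d_detect ** alpha) * (d_appear ** (1.0 - alpha))
```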

1,651 citations

Journal Article
TL;DR: The proposed VSNR metric is generally competitive with current metrics of visual fidelity; it is efficient both in terms of its low computational complexity and in terms of its low memory requirements; and it operates on physical luminances and visual angle (rather than on digital pixel values and pixel-based dimensions) to accommodate different viewing conditions.
Abstract: This paper presents an efficient metric for quantifying the visual fidelity of natural images based on near-threshold and suprathreshold properties of human vision. The proposed metric, the visual signal-to-noise ratio (VSNR), operates via a two-stage approach. In the first stage, contrast thresholds for detection of distortions in the presence of natural images are computed via wavelet-based models of visual masking and visual summation in order to determine whether the distortions in the distorted image are visible. If the distortions are below the threshold of detection, the distorted image is deemed to be of perfect visual fidelity (VSNR = ∞) and no further analysis is required. If the distortions are suprathreshold, a second stage is applied which operates based on the low-level visual property of perceived contrast, and the mid-level visual property of global precedence. These two properties are modeled as Euclidean distances in distortion-contrast space of a multiscale wavelet decomposition, and VSNR is computed based on a simple linear sum of these distances. The proposed VSNR metric is generally competitive with current metrics of visual fidelity; it is efficient both in terms of its low computational complexity and in terms of its low memory requirements; and it operates based on physical luminances and visual angle (rather than on digital pixel values and pixel-based dimensions) to accommodate different viewing conditions.
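As a concrete reading of the second stage, the sketch below computes VSNR from the perceived contrast of the original image and the two Euclidean distances the abstract mentions. The linear-sum form follows the abstract; the default weight, the 1/sqrt(2) normalization and all names are assumptions for illustration.

```python
import math

def vsnr_db(image_contrast: float, d_pc: float, d_gp: float,
            alpha: float = 0.04) -> float:
    """Second-stage VSNR combination.

    image_contrast : perceived contrast C(I) of the original image.
    d_pc : distance capturing the perceived contrast of the distortion.
    d_gp : distance capturing the disruption of global precedence.
    alpha : relative weight of the two terms (placeholder value).
    If the first stage finds the distortions sub-threshold, the metric
    is defined as +infinity and this function is never reached.
    """
    # "Simple linear sum" of the two distances in distortion-contrast
    # space, expressed as a signal-to-noise ratio in decibels.
    visual_distortion = alpha * d_pc + (1.0 - alpha) * d_gp / math.sqrt(2.0)
    return 20.0 * math.log10(image_contrast / visual_distortion)
```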

1,153 citations

Book
10 Apr 2000
TL;DR: A book covering the principles of color technology: defining, describing and measuring color, measuring color quality, colorants, and the production of colors.
Abstract: Defining Color. Describing Color. Measuring Color. Measuring Color Quality. Colorants. Producing Colors. Back to Principles. Appendix. Bibliography. Index.

752 citations

Journal Article
TL;DR: This paper shows how to recover a 3D, full-color shadow-free image representation by first identifying shadow edges (with the help of the 2D representation), removing them from the edge map, and reintegrating the thresholded edge map to derive the sought-after 3D shadow-free image.
Abstract: This paper is concerned with the derivation of a progression of shadow-free image representations. First, we show that adopting certain assumptions about lights and cameras leads to a 1D, gray-scale image representation which is illuminant invariant at each image pixel. We show that as a consequence, images represented in this form are shadow-free. We then extend this 1D representation to an equivalent 2D, chromaticity representation. We show that in this 2D representation, it is possible to relight all the image pixels in the same way, effectively deriving a 2D image representation which is additionally shadow-free. Finally, we show how to recover a 3D, full color shadow-free image representation by first (with the help of the 2D representation) identifying shadow edges. We then remove shadow edges from the edge-map of the original image by edge in-painting and we propose a method to reintegrate this thresholded edge map, thus deriving the sought-after 3D shadow-free image.
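The 1D invariant at the heart of this progression can be sketched compactly. Under the paper's assumptions (Planckian illumination, narrowband sensors), every surface traces a straight line in log-chromaticity space as the illuminant changes, so projecting perpendicular to that line cancels illumination and hence shadows. The angle below would come from camera calibration or entropy minimization; here it is simply a supplied parameter, and the function name is mine.

```python
import numpy as np

def invariant_grayscale(rgb: np.ndarray, theta: float) -> np.ndarray:
    """1D illuminant-invariant (shadow-free) grayscale image.

    rgb : (H, W, 3) linear RGB image with strictly positive values.
    theta : angle of the invariant direction in log-chromaticity space,
            obtained offline by calibrating the camera.
    """
    # Band-ratio log chromaticities (green as the reference channel).
    log_rg = np.log(rgb[..., 0] / rgb[..., 1])
    log_bg = np.log(rgb[..., 2] / rgb[..., 1])
    # Project onto the invariant direction, orthogonal to the line along
    # which illumination change moves pixels: lit and shadowed pixels of
    # one surface map to the same gray value.
    return np.cos(theta) * log_rg + np.sin(theta) * log_bg
```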

638 citations

Book
28 Nov 2007
TL;DR: Digital Image Processing is the definitive textbook for students, researchers, and professionals in search of critical analysis and modern implementations of the most important algorithms in the field, and is also eminently suitable for self-study.
Abstract: This revised and expanded new edition of an internationally successful classic presents an accessible introduction to the key methods in digital image processing for both practitioners and teachers. Emphasis is placed on practical application, presenting precise algorithmic descriptions in an unusually high level of detail, while highlighting direct connections between the mathematical foundations and concrete implementation. The text is supported by practical examples and carefully constructed chapter-ending exercises drawn from the authors' years of teaching experience, including easily adaptable Java code and completely worked out examples. Source code, test images and additional instructor materials are also provided at an associated website. Digital Image Processing is the definitive textbook for students, researchers, and professionals in search of critical analysis and modern implementations of the most important algorithms in the field, and is also eminently suitable for self-study.

558 citations