Author

Y. Yoo

Bio: Y. Yoo is an academic researcher from Texas Instruments. The author has contributed to research topics including digital image processing and color imaging. The author has an h-index of 1 and has co-authored 1 publication, which has received 323 citations.

Papers
Journal ArticleDOI
TL;DR: An overview of the image processing pipeline is presented, first from a signal processing perspective and later from an implementation perspective, along with the tradeoffs involved.
Abstract: Digital still color cameras (DSCs) have gained significant popularity in recent years, with projected sales in the order of 44 million units by the year 2005. Such an explosive demand calls for an understanding of the processing involved and the implementation issues, bearing in mind the otherwise difficult problems these cameras solve. This article presents an overview of the image processing pipeline, first from a signal processing perspective and later from an implementation perspective, along with the tradeoffs involved.
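As an illustration of the pipeline stages the article surveys, here is a minimal sketch of a simplified DSC pipeline (demosaicking, white balance, gamma correction) in Python. The bilinear demosaic, grey-world gains, RGGB pattern, and stage ordering are illustrative assumptions, not the article's specific design.

```python
# Minimal DSC pipeline sketch: demosaic -> white balance -> gamma.
# All design choices here are illustrative assumptions.
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(raw):
    """Bilinear demosaic of a Bayer mosaic (RGGB pattern assumed)."""
    h, w = raw.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0  # R/B interp
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0  # G interp
    r = convolve(raw * r_mask, k_rb)
    g = convolve(raw * g_mask, k_g)
    b = convolve(raw * b_mask, k_rb)
    return np.stack([r, g, b], axis=-1)

def pipeline(raw):
    rgb = demosaic_bilinear(raw)
    # Grey-world white balance: scale each channel to the common mean.
    gains = rgb.mean() / (rgb.reshape(-1, 3).mean(axis=0) + 1e-8)
    rgb = rgb * gains
    # Display gamma (sRGB-like power law).
    return np.clip(rgb, 0, 1) ** (1 / 2.2)

raw = np.random.rand(64, 64)   # stand-in for raw sensor data
out = pipeline(raw)
print(out.shape)               # (64, 64, 3)
```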

368 citations


Cited by
Journal ArticleDOI
TL;DR: The proposed fully automated vector technique can be easily implemented in either hardware or software and incorporated into any existing microarray image analysis and gene expression tool.

Abstract: Vector processing operations use essential spectral and spatial information to remove noise and localize microarray spots. The proposed fully automated vector technique can be easily implemented in either hardware or software and incorporated into any existing microarray image analysis and gene expression tool.
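As an illustration of the vector order-statistic operations this line of work builds on, below is a minimal sketch of a vector median filter applied to a two-channel microarray image. The 3x3 window, L2 aggregate distance, and the `vector_median_filter` helper are illustrative assumptions, not the authors' exact design.

```python
# Vector median filter sketch: each pixel is replaced by the window
# sample minimizing the sum of distances to all other window samples,
# so spectral channels are filtered jointly rather than independently.
import numpy as np

def vector_median_filter(img, k=3):
    """img: (h, w, c) multichannel image; k: odd window size."""
    h, w, c = img.shape
    r = k // 2
    pad = np.pad(img, ((r, r), (r, r), (0, 0)), mode="edge")
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            win = pad[y:y + k, x:x + k].reshape(-1, c)
            # Pairwise L2 distances between all window samples.
            d = np.linalg.norm(win[:, None] - win[None, :], axis=2)
            out[y, x] = win[d.sum(axis=1).argmin()]
    return out

noisy = np.random.rand(32, 32, 2)   # two-channel (Cy3/Cy5) stand-in
clean = vector_median_filter(noisy)
```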

348 citations

Journal ArticleDOI
19 Nov 2014
TL;DR: This work proposes an end-to-end system that is aware of the camera and image model, enforces natural-image priors, while jointly accounting for common image processing steps like demosaicking, denoising, deconvolution, and so forth, all directly in a given output representation.
Abstract: Conventional pipelines for capturing, displaying, and storing images are usually defined as a series of cascaded modules, each responsible for addressing a particular problem. While this divide-and-conquer approach offers many benefits, it also introduces a cumulative error, as each step in the pipeline only considers the output of the previous step, not the original sensor data. We propose an end-to-end system that is aware of the camera and image model, enforces natural-image priors, while jointly accounting for common image processing steps like demosaicking, denoising, deconvolution, and so forth, all directly in a given output representation (e.g., YUV, DCT). Our system is flexible and we demonstrate it on regular Bayer images as well as images from custom sensors. In all cases, we achieve large improvements in image quality and signal reconstruction compared to state-of-the-art techniques. Finally, we show that our approach is capable of very efficiently handling high-resolution images, making even mobile implementations feasible.
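To make the joint-formulation idea concrete, here is a minimal sketch that recovers a full-color image from mosaicked, noisy samples by minimizing a single objective (data term plus smoothness prior) rather than cascading separate demosaic and denoise modules. Plain gradient descent and a quadratic smoothness prior are illustrative stand-ins for the paper's proximal solver and natural-image priors.

```python
# Joint reconstruction sketch: minimize ||M*x - y||^2 + lam*||grad x||^2
# over the full RGB image x, given Bayer-sampled observations y.
import numpy as np

def bayer_mask(h, w):
    """RGGB Bayer sampling mask (illustrative)."""
    m = np.zeros((h, w, 3))
    m[0::2, 0::2, 0] = 1   # R
    m[0::2, 1::2, 1] = 1   # G
    m[1::2, 0::2, 1] = 1   # G
    m[1::2, 1::2, 2] = 1   # B
    return m

def reconstruct(y, mask, lam=0.1, lr=0.5, iters=300):
    x = y.copy()
    for _ in range(iters):
        g = mask * (mask * x - y)            # data-term gradient
        dx = np.roll(x, -1, axis=1) - x      # forward differences
        dy = np.roll(x, -1, axis=0) - x
        # Gradient of the quadratic smoothness prior (negative Laplacian).
        g += lam * (np.roll(dx, 1, axis=1) - dx
                    + np.roll(dy, 1, axis=0) - dy)
        x -= lr * g
    return np.clip(x, 0, 1)

h, w = 32, 32
mask = bayer_mask(h, w)
truth = np.random.rand(h, w, 3)
y = mask * (truth + 0.01 * np.random.randn(h, w, 3))  # mosaicked + noise
xhat = reconstruct(y, mask)
print(float(np.abs(xhat - truth).mean()))
```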

319 citations

Journal ArticleDOI
TL;DR: This work seeks the projection that yields an intrinsic, reflectance-only image independent of lighting by minimizing entropy, then removes shadows as in prior work, adopting the quadratic entropy rather than Shannon's definition.
Abstract: Recently, a method for removing shadows from colour images was developed (Finlayson et al. in IEEE Trans. Pattern Anal. Mach. Intell. 28:59–68, 2006) that relies upon finding a special direction in a 2D chromaticity feature space. This "invariant direction" is that for which particular colour features, when projected into 1D, produce a greyscale image which is approximately invariant to intensity and colour of scene illumination. Thus shadows, which are in essence a particular type of lighting, are greatly attenuated. The main approach to finding this special angle is a camera calibration: a colour target is imaged under many different lights, and the direction that best makes colour patch images equal across illuminants is the invariant direction. Here, we take a different approach. In this work, instead of a camera calibration we aim at finding the invariant direction from evidence in the colour image itself. Specifically, we recognize that producing a 1D projection in the correct invariant direction will result in a 1D distribution of pixel values that have smaller entropy than projecting in the wrong direction. The reason is that the correct projection results in a probability distribution spike, for pixels all the same except differing by the lighting that produced their observed RGB values and therefore lying along a line with orientation equal to the invariant direction. Hence we seek that projection which produces an intrinsic, reflectance-information-only image independent of lighting by minimizing entropy, and from there go on to remove shadows as previously. To be able to develop an effective description of the entropy-minimization task, we adopt the quadratic entropy, rather than Shannon's definition. Replacing the observed pixels with a kernel density probability distribution, the quadratic entropy can be written as a very simple formulation, and can be evaluated using the efficient Fast Gauss Transform. The entropy, written in this embodiment, has the advantage that it is more insensitive to quantization than is the usual definition. The resulting algorithm is quite reliable, and the shadow removal step produces good shadow-free colour image results whenever strong shadow edges are present in the image. In most cases studied, entropy has a strong minimum for the invariant direction, revealing a new property of image formation.
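A minimal sketch of the entropy-minimization search described above: project 2D log-chromaticity samples onto each candidate direction and keep the angle whose 1D projection has minimum quadratic (Renyi) entropy. The direct O(n^2) kernel sum stands in for the paper's Fast Gauss Transform, and the synthetic data and kernel width are assumptions for illustration.

```python
# Invariant-direction search by quadratic-entropy minimization.
import numpy as np

def quadratic_entropy(p, sigma=0.05):
    """Renyi quadratic entropy -log(integral f^2) of a Gaussian KDE;
    the constant normalization is dropped (it does not move the argmin)."""
    d = p[:, None] - p[None, :]
    return -np.log(np.exp(-d**2 / (4 * sigma**2)).mean())

def invariant_direction(chroma, n_angles=180):
    """chroma: (n, 2) log-chromaticity samples; returns best angle."""
    angles = np.linspace(0.0, np.pi, n_angles, endpoint=False)
    ent = [quadratic_entropy(chroma @ np.array([np.cos(a), np.sin(a)]))
           for a in angles]
    return angles[int(np.argmin(ent))]

# Synthetic check: "surfaces" form parallel lines along the lighting
# direction; projecting perpendicular to it collapses each line to a
# spike, so entropy is minimized near that perpendicular angle.
rng = np.random.default_rng(0)
light = np.array([np.cos(np.pi / 3), np.sin(np.pi / 3)])
offsets = rng.normal(size=(5, 1, 2))
t = rng.uniform(-1, 1, size=(5, 40, 1))
chroma = (offsets + t * light).reshape(-1, 2)
print(invariant_direction(chroma))   # ~ pi/3 + pi/2 (mod pi)
```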

312 citations

Journal ArticleDOI
TL;DR: A new image database, CID2013, is presented that uses real cameras rather than introducing distortions via post-processing, resulting in a complex distortion space with respect to the image acquisition process.
Abstract: This paper presents a new database, CID2013, to address the issue of using no-reference (NR) image quality assessment algorithms on images with multiple distortions. Current NR algorithms struggle to handle images with many concurrent distortion types, such as real photographic images captured by different digital cameras. The database consists of six image sets; on average, 30 subjects have evaluated 12–14 devices depicting eight different scenes for a total of 79 different cameras, 480 images, and 188 subjects (67% female). The subjective evaluation method was a hybrid absolute category rating-pair comparison developed for the study and presented in this paper. This method utilizes a slideshow of all images within a scene to allow the test images to work as references to each other. In addition to mean opinion score value, the images are also rated using sharpness, graininess, lightness, and color saturation scales. The CID2013 database contains images used in the experiments with the full subjective data plus extensive background information from the subjects. The database is made freely available for the research community.
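As a small illustration of the kind of summary statistic such a database supports, the sketch below computes per-image mean opinion scores with 95% confidence intervals over subjects. The ratings array and 0-100 scale are assumptions for illustration, not the database's actual format.

```python
# Per-image MOS with a normal-approximation 95% confidence interval.
import numpy as np

rng = np.random.default_rng(0)
ratings = rng.integers(0, 101, size=(30, 12))  # 30 subjects x 12 images

mos = ratings.mean(axis=0)
sem = ratings.std(axis=0, ddof=1) / np.sqrt(ratings.shape[0])
ci95 = 1.96 * sem
for i, (m, c) in enumerate(zip(mos, ci95)):
    print(f"image {i}: MOS = {m:.1f} +/- {c:.1f}")
```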

203 citations

Journal ArticleDOI
TL;DR: A comparative analysis of algorithm performance is presented that is used as the basis of a discussion of the current state of colour constancy research and of the important issues that future research in this field should address.
Abstract: This article addresses the problem of colour constancy: how a visual system is able to ensure that the colours it perceives remain stable, regardless of the prevailing scene illuminant. My aim is first to summarize and review the most important theoretical advances that have been made in this field. Second, I present a comparative analysis of algorithm performance that we use as the basis of a discussion of the current state of colour constancy research and of the important issues that future research in this field should address. Finally, I highlight some areas of recent research that are important in the context of further improving the performance of colour constancy algorithms. © 2006 Wiley Periodicals, Inc. Col Res Appl, 31, 303–314, 2006; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/col.20226
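To ground the discussion, here is a minimal sketch of two classic illuminant-estimation baselines (grey-world and max-RGB) together with the recovery angular error commonly used to compare such algorithms. Their appearance here is illustrative; the article's actual algorithm set is not reproduced.

```python
# Two baseline illuminant estimators and the standard angular-error metric.
import numpy as np

def grey_world(img):
    """Illuminant estimate: mean RGB of the scene, normalized."""
    e = img.reshape(-1, 3).mean(axis=0)
    return e / np.linalg.norm(e)

def max_rgb(img):
    """Illuminant estimate: per-channel maxima (white-patch assumption)."""
    e = img.reshape(-1, 3).max(axis=0)
    return e / np.linalg.norm(e)

def angular_error(e_est, e_true):
    """Recovery angular error in degrees between illuminant estimates."""
    c = e_est @ e_true / (np.linalg.norm(e_est) * np.linalg.norm(e_true))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

rng = np.random.default_rng(1)
e_true = np.array([1.0, 0.8, 0.6]); e_true /= np.linalg.norm(e_true)
img = rng.random((64, 64, 3)) * e_true   # scene under a warm cast
for est in (grey_world, max_rgb):
    print(est.__name__, angular_error(est(img), e_true))
```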

174 citations