scispace - formally typeset
Author

Cheng Lu

Bio: Cheng Lu is an academic researcher from Simon Fraser University. The author has contributed to research in topics: Pixel & Image restoration. The author has an h-index of 12 and has co-authored 18 publications receiving 1,426 citations. Previous affiliations of Cheng Lu include Aptina and Micron Technology.

Papers
Journal ArticleDOI
TL;DR: This paper shows how to recover a 3D, full color shadow-free image representation by first identifying shadow edges (with the help of the 2D representation) and then proposing a method to reintegrate the resulting thresholded edge map, deriving the sought-after 3D shadow-free image.
Abstract: This paper is concerned with the derivation of a progression of shadow-free image representations. First, we show that adopting certain assumptions about lights and cameras leads to a 1D, gray-scale image representation which is illuminant invariant at each image pixel. We show that as a consequence, images represented in this form are shadow-free. We then extend this 1D representation to an equivalent 2D, chromaticity representation. We show that in this 2D representation, it is possible to relight all the image pixels in the same way, effectively deriving a 2D image representation which is additionally shadow-free. Finally, we show how to recover a 3D, full color shadow-free image representation by first (with the help of the 2D representation) identifying shadow edges. We then remove shadow edges from the edge-map of the original image by edge in-painting and we propose a method to reintegrate this thresholded edge map, thus deriving the sought-after 3D shadow-free image.
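The 1D illuminant-invariant representation described above can be sketched as a projection in log-chromaticity space. This is a minimal illustration, not the paper's implementation; the angle `theta` is camera dependent (obtained by calibration or, in the later work, entropy minimization) and is a free parameter here:

```python
import numpy as np

def invariant_grayscale(rgb, theta):
    """Project log-chromaticities along an invariant direction.

    rgb:   H x W x 3 float array of positive RGB values
    theta: invariant-direction angle in radians (camera dependent)
    """
    # Band-ratio log-chromaticities: dividing by G removes intensity/shading
    log_rg = np.log(rgb[..., 0] / rgb[..., 1])
    log_bg = np.log(rgb[..., 2] / rgb[..., 1])
    # Projecting along the invariant direction cancels the illuminant term,
    # yielding a grayscale image that is (approximately) shadow-free
    return log_rg * np.cos(theta) + log_bg * np.sin(theta)
```

For a correctly chosen `theta`, pixels of the same surface under different lighting map to (nearly) the same grayscale value.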

638 citations

Journal ArticleDOI
TL;DR: This work seeks the projection that produces a type of intrinsic, reflectance-information-only image, independent of lighting, by minimizing entropy, and from there goes on to remove shadows as previously; it adopts the quadratic entropy rather than Shannon's definition.
Abstract: Recently, a method for removing shadows from colour images was developed (Finlayson et al. in IEEE Trans. Pattern Anal. Mach. Intell. 28:59–68, 2006) that relies upon finding a special direction in a 2D chromaticity feature space. This "invariant direction" is that for which particular colour features, when projected into 1D, produce a greyscale image which is approximately invariant to intensity and colour of scene illumination. Thus shadows, which are in essence a particular type of lighting, are greatly attenuated. The main approach to finding this special angle is a camera calibration: a colour target is imaged under many different lights, and the direction that best makes colour patch images equal across illuminants is the invariant direction. Here, we take a different approach. In this work, instead of a camera calibration we aim at finding the invariant direction from evidence in the colour image itself. Specifically, we recognize that producing a 1D projection in the correct invariant direction will result in a 1D distribution of pixel values that has smaller entropy than projecting in the wrong direction. The reason is that the correct projection results in a probability distribution spike, for pixels all the same except differing by the lighting that produced their observed RGB values and therefore lying along a line with orientation equal to the invariant direction. Hence we seek the projection that produces a type of intrinsic, reflectance-information-only image, independent of lighting, by minimizing entropy, and from there go on to remove shadows as previously. To be able to develop an effective description of the entropy-minimization task, we go over to the quadratic entropy, rather than Shannon's definition. Replacing the observed pixels with a kernel density probability distribution, the quadratic entropy can be written as a very simple formulation, and can be evaluated using the efficient Fast Gauss Transform.
The entropy, written in this embodiment, has the advantage that it is more insensitive to quantization than is the usual definition. The resulting algorithm is quite reliable, and the shadow removal step produces good shadow-free colour image results whenever strong shadow edges are present in the image. In most cases studied, entropy has a strong minimum for the invariant direction, revealing a new property of image formation.
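The entropy-minimization search described above can be sketched as follows. This is a simplified illustration using a plain histogram-based Shannon entropy rather than the paper's quadratic entropy with kernel density estimation and the Fast Gauss Transform; the function names, bin count, and angle grid are illustrative:

```python
import numpy as np

def shannon_entropy(values, bins=64):
    """Histogram-based Shannon entropy (bits) of a 1D sample."""
    hist, _ = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def find_invariant_angle(log_rg, log_bg, n_angles=180):
    """Search candidate projection angles over [0, pi).

    The correct invariant projection collapses each surface's pixels
    (which differ only by lighting) to a spike, so it has the lowest
    entropy among all candidate directions.
    """
    best_angle, best_h = 0.0, np.inf
    for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        proj = log_rg * np.cos(theta) + log_bg * np.sin(theta)
        h = shannon_entropy(proj)
        if h < best_h:
            best_angle, best_h = theta, h
    return best_angle
```

On synthetic log-chromaticity points spread along a single lighting direction, the minimum-entropy projection is the one orthogonal to that spread, as the abstract's argument predicts.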

312 citations

Book ChapterDOI
11 May 2004
TL;DR: This paper shows that not only does a correct shadow-free image emerge, but also that the angle found agrees with that recovered from a calibration; the method can thus be applied successfully to remove shadows from unsourced imagery.
Abstract: A method was recently devised for the recovery of an invariant image from a 3-band colour image. The invariant image, originally 1D greyscale but here derived as a 2D chromaticity, is independent of lighting, and also has shading removed: it forms an intrinsic image that may be used as a guide in recovering colour images that are independent of illumination conditions. Invariance to illuminant colour and intensity means that such images are free of shadows, as well, to a good degree. The method devised finds an intrinsic reflectivity image based on assumptions of Lambertian reflectance, approximately Planckian lighting, and fairly narrowband camera sensors. Nevertheless, the method works well when these assumptions do not hold. A crucial piece of information is the angle for an “invariant direction” in a log-chromaticity space. To date, we have gleaned this information via a preliminary calibration routine, using the camera involved to capture images of a colour target under different lights. In this paper, we show that we can in fact dispense with the calibration step, by recognizing a simple but important fact: the correct projection is that which minimizes entropy in the resulting invariant image. To show that this must be the case we first consider synthetic images, and then apply the method to real images. We show that not only does a correct shadow-free image emerge, but also that the angle found agrees with that recovered from a calibration. As a result, we can find shadow-free images for images with unknown camera, and the method is applied successfully to remove shadows from unsourced imagery.

307 citations

Patent
28 May 2008
TL;DR: In this patent, a method and apparatus for restoring an image captured through an extended depth-of-field lens are described, where preprocessed data relating to image degradation is stored and used during an image restoration process.
Abstract: A method and apparatus are disclosed for restoring an image captured through an extended depth-of-field lens. Preprocessed data relating to image degradation is stored and used during an image restoration process.

119 citations

Proceedings ArticleDOI
01 Oct 2001
TL;DR: A method for effectively classifying different types of videos, built on the output of a concise video-summarization technique that forms a list of keyframes efficiently summarizing any video.
Abstract: Tools for efficiently summarizing and classifying video sequences are indispensable to assist in the synthesis and analysis of digital video. In this paper, we present a method for effective classification of different types of videos that uses the output of a concise video summarization technique that forms a list of keyframes. The summarization is produced by a method recently presented, in which we generate a universal basis on which to project a video frame feature that effectively reduces any video to the same lighting conditions. Each frame is represented by a compressed chromaticity signature. A multi-stage hierarchical clustering method efficiently summarizes any video. Here, we classify TV programs using a trained hidden Markov model, using the keyframe plus temporal features generated in the summaries.

41 citations


Cited by
Proceedings Article
01 Jan 1989
TL;DR: A scheme is developed for classifying the types of motion perceived by a humanlike robot and equations, theorems, concepts, clues, etc., relating the objects, their positions, and their motion to their images on the focal plane are presented.
Abstract: A scheme is developed for classifying the types of motion perceived by a humanlike robot. It is assumed that the robot receives visual images of the scene using a perspective system model. Equations, theorems, concepts, clues, etc., relating the objects, their positions, and their motion to their images on the focal plane are presented.

2,000 citations

Book ChapterDOI
11 May 2004
TL;DR: A novel method for human detection in single images which can detect full bodies as well as close-up views in the presence of clutter and occlusion is described.
Abstract: We describe a novel method for human detection in single images which can detect full bodies as well as close-up views in the presence of clutter and occlusion. Humans are modeled as flexible assemblies of parts, and robust part detection is the key to the approach. The parts are represented by co-occurrences of local features which capture the spatial layout of the part's appearance. Feature selection and the part detectors are learnt from training images using AdaBoost. The detection algorithm is very efficient as (i) all part detectors use the same initial features, (ii) a coarse-to-fine cascade approach is used for part detection, and (iii) a part-assembly strategy reduces the number of spurious detections and the search space. The results outperform existing human detectors.

746 citations

Journal ArticleDOI
TL;DR: The proposed CrackTree method is evaluated on a collection of 206 real pavement images, and the experimental results show that it achieves better performance than several existing methods.

657 citations


Patent
20 May 2009
TL;DR: In this patent, systems and methods are disclosed for implementing array cameras configured to perform super-resolution processing, generating higher-resolution super-resolved images from a plurality of captured images, along with lens stack arrays that can be utilized in such array cameras.
Abstract: Systems and methods for implementing array cameras configured to perform super- resolution processing to generate higher resolution super-resolved images using a plurality of captured images and lens stack arrays that can be utilized in array cameras are disclosed. Lens stack arrays in accordance with many embodiments of the invention include lens elements formed on substrates separated by spacers, where the lens elements, substrates and spacers are configured to form a plurality of optical channels, at least one aperture located within each optical channel, at least one spectral filter located within each optical channel, where each spectral filter is configured to pass a specific spectral band of light, and light blocking materials located within the lens stack array to optically isolate the optical channels.

594 citations