Entropy Minimization for Shadow Removal
Frequently Asked Questions (10)
Q2. What have the authors stated for future works in "Entropy minimization for shadow removal" ?
Future work would involve a careful assessment of how onboard nonlinear processing in cameras affects results. For the re-integration step, it may be useful to consider separate shadow-edge maps for x and y, since in principle these are different. For example, if a spectral sharpening transform (Finlayson et al., 1994) is available for a camera (or even a generic such transform (Drew et al., 2007)), then the authors can expect to obtain better shadow removal from the lighting invariant. Under bright lighting, shadows are typically driven down to very small pixel values (say, 2% of the maximum channel value) that may be unusable by the method presented.
Q3. How many points do the authors have for each colour patch?
The authors form chromaticities (specifically, the geometric mean chromaticities defined in eq. (7), rather than simple band ratios, so as not to favour any one colour channel). Taking logarithms and plotting, they obtain 9 points (one for each of their 9 lights) for every colour patch.
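A minimal sketch of the geometric-mean log-chromaticity idea described above; the function name and input layout are illustrative, not the paper's code:

```python
import numpy as np

def log_chromaticity(rgb):
    """Map an Nx3 array of positive RGB values to geometric-mean
    log-chromaticities.

    Dividing each channel by the geometric mean (R*G*B)^(1/3), rather
    than using simple band ratios such as R/G and B/G, avoids favouring
    any single colour channel.
    """
    rgb = np.asarray(rgb, dtype=float)
    geo_mean = np.prod(rgb, axis=1, keepdims=True) ** (1.0 / 3.0)
    # Rows sum to zero by construction, since
    # log(R/m) + log(G/m) + log(B/m) = log(RGB) - 3*log(m) = 0.
    return np.log(rgb / geo_mean)

# One colour patch seen under two lights gives two such rows; under the
# paper's 9 lights it would give 9 rows, which plot as an approximately
# straight line in log-chromaticity space.
chi = log_chromaticity([[0.4, 0.3, 0.2], [0.5, 0.35, 0.25]])
```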
Q4. How can the authors recover a full-colour image?
Using the re-integration method in (Finlayson et al., 2006), the authors can go on from their invariant image to recover a full-colour shadow-free image.
Q5. What is the method for removing shadows from a light source?
When strong interreflections are present, in shadow regions very close to an object with an attached shadow, the method can also fail to remove this effect correctly.
Q6. What is the simplest way to determine which colours are intrinsic to the scene?
The authors can use the greyscale or the pseudo-colour invariant as a guide that allows them to determine which colours in the original RGB colour image are intrinsic to the scene and which are simply artifacts of the shadows due to lighting.
Q7. How do the authors form the projected 2-vector χ_θ?
The authors form the projected 2-vector χ_θ via χ_θ = P_θ χ, and then go back to an estimate (indicated by a tilde) of the 3D ρ and c via ρ̃ = Uᵀ χ_θ, c̃ = exp(ρ̃).
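The projection-and-recovery step above can be sketched as follows. The particular 2×3 orthonormal basis U for the plane orthogonal to (1,1,1)/√3 is an illustrative assumption (the paper fixes some such U); P_θ is the projector onto the direction at angle θ in the 2D chromaticity plane:

```python
import numpy as np

# An orthonormal 2x3 basis U for the plane orthogonal to (1,1,1)/sqrt(3);
# this particular choice is illustrative.
U = np.array([[1/np.sqrt(2), -1/np.sqrt(2),  0.0],
              [1/np.sqrt(6),  1/np.sqrt(6), -2/np.sqrt(6)]])

def project_and_recover(rho, theta):
    """From 3D log-chromaticity rho: form the 2-vector chi = U rho,
    project onto direction theta (chi_theta = P_theta chi), then lift
    back to the 3D estimates rho_tilde = U^T chi_theta and
    c_tilde = exp(rho_tilde)."""
    chi = U @ rho
    e = np.array([np.cos(theta), np.sin(theta)])
    chi_theta = np.outer(e, e) @ chi   # P_theta = e e^T
    rho_tilde = U.T @ chi_theta
    return np.exp(rho_tilde)

c_tilde = project_and_recover(np.array([0.2, -0.1, -0.1]), np.radians(150))
```

Because ρ̃ lies in the plane orthogonal to (1,1,1), its components sum to zero, so the recovered c̃ has unit geometric mean, consistent with the geometric-mean chromaticity construction.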
Q8. What is the colour of the shadow?
Notice that the colour of the shadow is basically a deep blue; since this is an outdoor shot on a clear day, this is not surprising, in that the light for shadowed pixels comes mostly from the sky dome, whereas light for non-shadowed pixels comprises both sky-light and direct sunlight.
Q9. What is the effect of a single fixed calibration direction on the image?
The authors point out that there is considerable variance in the recovered invariant angle direction over the set of images and cameras (roughly 150 degrees plus or minus 20 degrees), so a single fixed calibration direction will not remove the effect of illumination in images.
Q10. What is the minimum-variance direction for lines formed in log-chromaticity space?
If the authors wished to find the minimum-variance direction for lines that are formed in log-chromaticity space as the light changes, the authors would need to know which points fall on which lines.
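Since line membership is unknown, the paper's alternative is to score each candidate direction by the entropy of the 1D projection and keep the minimum. A minimal sketch under stated assumptions (the bin count, angle grid, and synthetic test data below are illustrative choices, not the paper's exact recipe):

```python
import numpy as np

def entropy_bits(values, bins=64):
    """Shannon entropy (bits) of a histogram of the projected values."""
    hist, _ = np.histogram(values, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))

def best_invariant_angle(chi):
    """chi: Nx2 log-chromaticities. Try each candidate angle, project
    onto that direction, and return the angle (degrees) whose projection
    has minimum entropy -- no knowledge of which points share a line
    is needed."""
    angles = np.arange(0.0, 180.0)
    ents = []
    for a in angles:
        t = np.radians(a)
        proj = chi @ np.array([np.cos(t), np.sin(t)])
        ents.append(entropy_bits(proj))
    return angles[int(np.argmin(ents))]

# Synthetic check: 5 "patches" whose points slide along the 30-degree
# direction as the light changes. Projecting perpendicular to that
# direction (120 degrees) collapses each line to a point, so the
# entropy minimum should land near 120.
d = np.array([np.cos(np.radians(30)), np.sin(np.radians(30))])
offsets = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0],
                    [-2.0, 0.0], [0.0, -2.0]])
t = np.linspace(-1.0, 1.0, 9)
chi = np.vstack([o + ti * d for o in offsets for ti in t])
angle = best_invariant_angle(chi)
```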