
Showing papers by "Shai Avidan" published in 2017


Proceedings ArticleDOI
12 May 2017
TL;DR: A new method for calculating the air-light color, the color of an image region with no objects in the line of sight, based on the recently introduced haze-lines prior; the method performs on par with current state-of-the-art techniques and is more computationally efficient.
Abstract: Outdoor images taken in bad weather conditions, such as haze and fog, look faded and have reduced contrast. Recently there has been great success in single image dehazing, i.e., improving the visibility and restoring the colors from a single image. A crucial step in these methods is the calculation of the air-light color, the color of an area of the image with no objects in line-of-sight. We propose a new method for calculating the air-light. The method relies on the haze-lines prior that was recently introduced. This prior is based on the observation that the pixel values of a hazy image can be modeled as lines in RGB space that intersect at the air-light. We use a Hough transform in RGB space to vote for the location of the air-light. We evaluate the proposed method on an existing dataset of real-world images, as well as on some synthetic and other real images. Our method performs on par with current state-of-the-art techniques and is more computationally efficient.
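The Hough-voting idea lends itself to a compact sketch. Below is a toy NumPy version, not the paper's exact scheme (the paper clusters pixel colors and votes efficiently in 2D projections of RGB space); here, random pixel pairs define candidate lines, and cells of a coarse RGB grid along each line, beyond the brighter endpoint, accumulate votes. Since haze-lines intersect at the air-light, the most-voted cell approximates it. The function name and all parameter values are illustrative.

```python
import numpy as np

def estimate_airlight(img, n_pairs=5000, bins=32, seed=0):
    """Toy Hough-style air-light vote; img is an HxWx3 float array in [0, 1]."""
    rng = np.random.default_rng(seed)
    pix = img.reshape(-1, 3)
    acc = np.zeros((bins, bins, bins))                 # vote accumulator over RGB cells
    ii = rng.integers(0, len(pix), n_pairs)
    jj = rng.integers(0, len(pix), n_pairs)
    for a, b in zip(pix[ii], pix[jj]):
        d = b - a
        if d.sum() < 0:                                # orient toward the brighter color
            a, b, d = b, a, -d
        norm = np.linalg.norm(d)
        if norm < 0.05:                                # near-identical colors: no stable line
            continue
        d /= norm
        for t in np.linspace(0.0, 1.0, 20):            # march past the brighter endpoint
            p = b + t * d
            if (p < 0).any() or (p >= 1).any():
                break
            acc[tuple((p * bins).astype(int))] += 1
    cell = np.unravel_index(acc.argmax(), acc.shape)
    return (np.array(cell) + 0.5) / bins               # cell center as the air-light RGB
```

On a hazy input this tends to peak near the bright, line-consistent color the haze-lines converge to; the paper's voting is considerably more structured and robust than this toy version.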

142 citations


Journal ArticleDOI
TL;DR: Fast-Match is a fast algorithm for approximate template matching under 2D affine transformations that minimizes the Sum-of-Absolute-Differences (SAD) error measure; the authors prove that the space of transformations can be sampled with a density that depends on the smoothness of the image.
Abstract: Fast-Match is a fast algorithm for approximate template matching under 2D affine transformations that minimizes the Sum-of-Absolute-Differences (SAD) error measure. There is a huge number of transformations to consider, but we prove that they can be sampled using a density that depends on the smoothness of the image. For each potential transformation, we approximate the SAD error using a sublinear algorithm that randomly examines only a small number of pixels. We further accelerate the algorithm using a branch-and-bound-like scheme. As images are known to be piecewise smooth, the result is a practical affine template matching algorithm with approximation guarantees that takes a few seconds to run on a standard machine. We perform several experiments on three different datasets and report very good results.
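The sublinear SAD approximation is the easiest piece to illustrate. The sketch below uses an assumed interface (the function name, the 2x3 affine convention, and the sample count are ours): it estimates the SAD of one candidate transformation from a small random subset of template pixels, so the cost per transformation is independent of template size.

```python
import numpy as np

def approx_sad(template, image, affine, n_samples=200, seed=0):
    """Estimate the mean absolute difference for one affine candidate.

    template: h x w grayscale array; image: H x W grayscale array.
    affine: 2x3 matrix mapping homogeneous template coords (x, y, 1)
    into image coords.
    """
    rng = np.random.default_rng(seed)
    h, w = template.shape
    xs = rng.integers(0, w, n_samples)                 # random template pixels
    ys = rng.integers(0, h, n_samples)
    pts = np.stack([xs, ys, np.ones(n_samples)])       # 3 x n homogeneous coords
    u, v = np.rint(affine @ pts).astype(int)           # mapped image coords
    ok = (u >= 0) & (u < image.shape[1]) & (v >= 0) & (v < image.shape[0])
    if not ok.any():
        return np.inf                                  # transform falls outside the image
    diff = np.abs(template[ys[ok], xs[ok]].astype(float)
                  - image[v[ok], u[ok]].astype(float))
    return diff.mean()
```

In the full algorithm this estimate is evaluated over the sampled net of transformations, and the branch-and-bound-like scheme refines the net only around promising candidates.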

90 citations


Proceedings ArticleDOI
21 Jul 2017
TL;DR: The CoF extends the BF to deal with boundaries, not just edges; it learns co-occurrences directly from the image, and directing it to learn the co-occurrence matrix from part of the image, or from a different image, yields various filtering results.
Abstract: Co-occurrence Filter (CoF) is a boundary preserving filter. It is based on the Bilateral Filter (BF) but instead of using a Gaussian on the range values to preserve edges it relies on a co-occurrence matrix. Pixel values that co-occur frequently in the image (i.e., inside textured regions) will have a high weight in the co-occurrence matrix. This, in turn, means that such pixel pairs will be averaged and hence smoothed, regardless of their intensity differences. On the other hand, pixel values that rarely co-occur (i.e., across texture boundaries) will have a low weight in the co-occurrence matrix. As a result, they will not be averaged and the boundary between them will be preserved. The CoF therefore extends the BF to deal with boundaries, not just edges. It learns co-occurrences directly from the image. We can achieve various filtering results by directing it to learn the co-occurrence matrix from a part of the image, or a different image. We give the definition of the filter, discuss how to use it with color images and show several use cases.
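For intuition, here is a minimal grayscale sketch of the filter (the parameter values, the immediate-neighbor co-occurrence collection, and the brute-force window loop are our simplifications; the paper also handles color images and collects co-occurrences with Gaussian distance weights):

```python
import numpy as np

def cof_gray(img, sigma_s=3.0, levels=32, radius=7):
    """Toy grayscale Co-occurrence Filter; img is an HxW float array in [0, 1]."""
    q = np.clip((img * levels).astype(int), 0, levels - 1)   # quantize to gray levels
    # Co-occurrence counts from horizontally and vertically adjacent pairs.
    M = np.zeros((levels, levels))
    for dy, dx in [(0, 1), (1, 0)]:
        a = q[dy:, dx:].ravel()
        b = q[:q.shape[0] - dy, :q.shape[1] - dx].ravel()
        np.add.at(M, (a, b), 1)
    M = M + M.T                                       # symmetric counts
    freq = M.sum(1, keepdims=True)
    M = M / (freq * freq.T + 1e-9)                    # normalize by level frequency
    H, W = img.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    pad = np.pad(img, radius, mode='reflect')
    qpad = np.pad(q, radius, mode='reflect')
    out = np.zeros_like(img)
    for y in range(H):                                # brute force: slow, but a sketch
        for x in range(W):
            win = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            qwin = qpad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            w = spatial * M[q[y, x], qwin]            # co-occurrence replaces range kernel
            out[y, x] = (w * win).sum() / (w.sum() + 1e-9)
    return out
```

The only change relative to a bilateral filter is the `M[q[y, x], qwin]` factor: frequently co-occurring levels (textures) receive high weight and are smoothed together, while rarely co-occurring levels (boundaries) receive low weight and stay sharp.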

29 citations


Posted Content
TL;DR: Co-occurrence Filter (CoF) as discussed by the authors is a boundary preserving filter based on the Bilateral Filter (BF) that relies on a co-occurrence matrix.

26 citations


Journal ArticleDOI
TL;DR: This paper presents a method to analyze patches by embedding them into a vector space, in which the Euclidean distance reflects patch similarity, and uses Convolutional Neural Networks to learn Patch2Vec.
Abstract: Many image editing applications rely on the analysis of image patches. In this paper, we present a method to analyze patches by embedding them into a vector space, in which the Euclidean distance reflects patch similarity. Inspired by Word2Vec, we term our approach Patch2Vec. However, there is a significant difference between words and patches. Words have a fairly small and well-defined dictionary. Image patches, on the other hand, have no such dictionary, and the number of different patch types is not well defined. The problem is aggravated by the fact that each patch might contain several objects and textures. Moreover, Patch2Vec should be universal because it must be able to map never-seen-before textures to the vector space. The mapping is learned by analyzing the distribution of all natural patches. We use Convolutional Neural Networks (CNN) to learn Patch2Vec. In particular, we train a CNN on labeled images with a triplet-loss objective function. The trained network encodes a given patch as a 128D vector. Patch2Vec is evaluated visually, qualitatively, and quantitatively. We then use several variants of an interactive single-click image segmentation algorithm to demonstrate the power of our method.
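The training objective is the standard triplet loss; a minimal NumPy version of the forward computation is sketched below (the batch layout and margin value are assumptions, and an actual training loop would backpropagate this through the CNN):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet loss over batches of embedding vectors (e.g. N x 128).

    Pulls same-segment patches (anchor, positive) together and pushes
    different-segment patches (anchor, negative) at least `margin`
    farther apart in squared Euclidean distance.
    """
    d_pos = np.sum((anchor - positive) ** 2, axis=1)
    d_neg = np.sum((anchor - negative) ** 2, axis=1)
    return np.maximum(0.0, d_pos - d_neg + margin).mean()
```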

21 citations


Proceedings ArticleDOI
01 Jan 2017

17 citations


Patent
06 Apr 2017
TL;DR: In this patent, the authors propose methods for dehazing a digital image and for restoring an underwater digital image by clustering the pixels into haze-lines, where each haze-line comprises a sub-group of pixels that are scattered non-locally over the image.
Abstract: Methods for dehazing a digital image and for restoring an underwater digital image. The methods include the following steps: First, clustering pixels of a digital image into haze-lines, wherein each of the haze-lines is comprised of a sub-group of the pixels that are scattered non-locally over the digital image. Second, estimating, based on the haze-lines, a transmission map of the digital image, wherein the transmission map encodes scene depth information for each pixel of the digital image. Then, for a hazy image, calculating a dehazed digital image based on the transmission map. For an underwater image, calculating a restored image based on the transmission map and also based on attenuation coefficient ratios. An optional addition to the underwater image restoration takes into account different attenuation coefficients for different color channels, when the image depicts a scene characterized by wavelength-dependent transmission, such as under water. Further disclosed are methods for airlight estimation and for veiling-light estimation, which may be utilized for the dehazing and restoration, or for other purposes.
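The core recovery step can be sketched compactly under stated assumptions (the prototype-based clustering, the chunk size, and all parameter values are ours, and the transmission-map regularization described in the related papers is skipped):

```python
import numpy as np

def dehaze_hazelines(img, airlight, n_lines=500, t_min=0.1, seed=0):
    """Bare-bones haze-lines dehazing; img is HxWx3 float in [0, 1]."""
    A = np.asarray(airlight, float)
    flat = img.reshape(-1, 3) - A                      # shift so A is the origin
    r = np.linalg.norm(flat, axis=1) + 1e-9
    dirs = flat / r[:, None]
    # Crude haze-line clustering: nearest of n_lines random unit vectors.
    rng = np.random.default_rng(seed)
    proto = rng.normal(size=(n_lines, 3))
    proto /= np.linalg.norm(proto, axis=1, keepdims=True)
    line = np.empty(len(dirs), dtype=int)
    for s in range(0, len(dirs), 65536):               # chunked to bound memory
        line[s:s + 65536] = np.argmax(dirs[s:s + 65536] @ proto.T, axis=1)
    # Farthest pixel from A within each line is assumed haze-free: t = r / r_max.
    r_max = np.zeros(n_lines)
    np.maximum.at(r_max, line, r)
    t = np.clip(r / (r_max[line] + 1e-9), t_min, 1.0)  # per-pixel transmission
    J = flat / t[:, None] + A                          # invert the haze model
    return np.clip(J, 0, 1).reshape(img.shape)
```

The underwater variant described in the abstract would additionally scale the per-channel attenuation by the estimated coefficient ratios before inverting the model.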

16 citations


Journal ArticleDOI
TL;DR: In this paper, the authors propose a method to detect dynamic regions in CrowdCam images based on the observation that matching static points must satisfy epipolar geometry constraints; since computing exact matches is challenging, they instead compute the probability that a pixel has a match along the corresponding epipolar line.
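The underlying test is standard two-view geometry. Below is a small sketch (interface assumed) of the point-to-epipolar-line distance, which is near zero for correct matches of static points and grows for dynamic ones:

```python
import numpy as np

def epipolar_distance(F, x1, x2):
    """Distance of points x2 from the epipolar lines of points x1.

    F: 3x3 fundamental matrix between two CrowdCam views.
    x1, x2: N x 2 arrays of pixel coordinates of candidate matches.
    """
    h1 = np.hstack([x1, np.ones((len(x1), 1))])        # homogeneous coords
    h2 = np.hstack([x2, np.ones((len(x2), 1))])
    lines = h1 @ F.T                                   # epipolar lines in view 2
    resid = np.abs(np.sum(lines * h2, axis=1))         # |x2^T F x1|
    return resid / np.linalg.norm(lines[:, :2], axis=1)
```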

5 citations