Author

Anat Levin

Bio: Anat Levin is an academic researcher at the Technion – Israel Institute of Technology. The author has contributed to research on the topics of speckle patterns and scattering, has an h-index of 42, and has co-authored 91 publications receiving 12,993 citations. Previous affiliations of Anat Levin include Stanford University and the Hebrew University of Jerusalem.


Papers
DOI
19 Jul 2021
TL;DR: In this paper, the authors used speckle intensity correlations to image incoherent illuminators inside scattering samples, achieving an order-of-magnitude expansion in both the range and density of illuminators that can be recovered.
Abstract: We use speckle intensity correlations to image incoherent illuminators inside scattering samples. Our approach uses correlation properties specific to speckle patterns created by near-field illuminators. Compared to previous far-field approaches, our approach achieves an order-of-magnitude expansion in both the range and density of illuminators it can recover.
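
As a rough illustration of the intensity-correlation idea (and not the near-field method this paper actually develops): within the memory-effect range, the autocorrelation of a single speckle image of multiple hidden sources peaks at the pairwise separations of those sources. A minimal numpy sketch, where speckle_img is a hypothetical captured frame:

    import numpy as np

    def intensity_autocorrelation(speckle_img):
        # Wiener-Khinchin: autocorrelation computed via the power spectrum.
        img = speckle_img - speckle_img.mean()
        power = np.abs(np.fft.fft2(img)) ** 2
        corr = np.fft.ifft2(power).real
        return np.fft.fftshift(corr) / corr.flat[0]  # normalize by zero lag

Off-center peaks in the result correspond to separations between illuminator pairs; recovering the sources themselves then requires a further phase-retrieval step, which is omitted here.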
Proceedings ArticleDOI
29 Nov 2022
TL;DR: In this article, a closed-form approach for acquiring material parameters from thick samples, avoiding costly optimization, is proposed. The approach is based on imaging the material of interest under coherent laser light and capturing speckle patterns, allowing measurement of the singly scattered component of the light even when observing thick samples where most light is scattered multiple times.
Abstract: In material acquisition we want to infer the internal properties of materials from the way they scatter light. In particular, we are interested in measuring the phase function of the material, governing the amount of energy scattered towards different directions. This phase function has been shown to carry a lot of information about the type and size of particles dispersed in the medium, and is therefore essential for its characterization. Previous approaches to this task have relied on computationally costly inverse rendering optimization. Alternatively, if the material can be made optically thin enough so that most light paths scatter only once, this optimization can be avoided and the phase function can be directly read from the profile of light scattering at different angles. However, in many realistic applications, it is not easy to slice or dilute the material so that it is thin enough for such a single scattering model to hold. In this work we suggest a simple closed-form approach for acquiring material parameters from thick samples, avoiding costly optimization. Our approach is based on imaging the material of interest under coherent laser light and capturing speckle patterns. We show that memory-effect correlations between speckle patterns produced under nearby illumination directions provide a gating mechanism, allowing us to measure the singly scattered component of the light, even when observing thick samples where most light is scattered multiple times. We have built an experimental prototype capable of measuring phase functions over a narrow angular cone. We test the accuracy of our approach using validation materials whose ground truth phase function is known; and we use it to capture a set of everyday materials.
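
A sketch of the gating idea as described in the abstract: correlate speckle frames captured under two nearby illumination directions; how the correlation decays with angular separation is governed by low-order scattering, which is what makes the singly scattered profile measurable. The sketch below is a generic correlation estimator, not the paper's closed-form procedure; I1 and I2 are hypothetical captured frames:

    import numpy as np

    def speckle_cross_correlation(I1, I2):
        # Normalized cross-correlation via FFT; the peak height, tracked as a
        # function of the angular separation between the two illumination
        # directions, reflects how much memory-effect correlation survives.
        a = (I1 - I1.mean()) / I1.std()
        b = (I2 - I2.mean()) / I2.std()
        corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
        return np.fft.fftshift(corr) / a.size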
Proceedings ArticleDOI
23 Jun 2019
TL;DR: In this paper, a physically accurate and computationally efficient Monte Carlo algorithm is proposed to evaluate the complex statistics of speckle fields in scattering media, such as the memory effect, for a large variety of material and imaging parameters.
Abstract: We derive a physically accurate and computationally efficient Monte Carlo algorithm that can be used to evaluate the complex statistics of speckle fields in scattering media. This allows evaluating and studying second-order speckle statistics, such as the memory effect, for a large variety of material and imaging parameters, including turbid materials. This helps bridge the gap between analytical formulas, derived under restrictive assumptions such as diffusion, and empirical lab measurements. It also opens up the possibility for discovering new types of correlation effects, and using those to improve our ability to see through and focus into random media.
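
To make the object of study concrete: a speckle field at a sensor can be viewed as a coherent sum of phasors exp(i k L) over scattering paths of optical length L, and second-order statistics follow from averaging products of such fields over realizations. The toy 2D sketch below illustrates only this phasor-sum picture; the paper's actual contribution is an unbiased Monte Carlo estimator of speckle covariance, which is far more involved. All parameters here are arbitrary illustration values:

    import numpy as np

    rng = np.random.default_rng(0)

    def path_length(mfp, n_events):
        # Total optical length of one random-walk path: exponential
        # free-flight distances between scattering events.
        return rng.exponential(mfp, size=n_events).sum()

    def toy_speckle_field(n_paths, mfp, wavelength=0.5e-6):
        # One realization of a speckle field: coherent sum of path phasors.
        k = 2 * np.pi / wavelength
        lengths = [path_length(mfp, n) for n in rng.integers(1, 30, size=n_paths)]
        return np.exp(1j * k * np.array(lengths)).sum() / np.sqrt(n_paths)

    # Second-order statistics are estimated by averaging over realizations.
    fields = np.array([toy_speckle_field(2000, mfp=1e-4) for _ in range(200)])
    print(np.mean(np.abs(fields) ** 2))  # mean speckle intensity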
Proceedings ArticleDOI
24 Jun 2019
TL;DR: Using a new MC simulator, the authors study statistics of speckle fields in scattering media, which allows understanding the limits of the memory effect and using speckle correlations to improve our ability to see through random media.
Abstract: Using a new MC simulator, we study statistics of speckle fields in scattering media. This allows understanding the memory effect limits and using speckle correlations to improve our ability to see through random media.

Cited by
Proceedings ArticleDOI
07 Jun 2015
TL;DR: Inception as mentioned in this paper is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.
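
For reference, the module this abstract alludes to concatenates parallel 1x1, 3x3, and 5x5 convolution branches (the larger filters preceded by 1x1 reductions) with a pooled projection. A minimal PyTorch sketch; the channel counts in the usage line follow the paper's "inception (3a)" configuration, everything else is generic:

    import torch
    import torch.nn as nn

    class InceptionModule(nn.Module):
        # Four parallel branches concatenated along the channel axis.
        def __init__(self, c_in, c1, c3r, c3, c5r, c5, cp):
            super().__init__()
            self.b1 = nn.Sequential(nn.Conv2d(c_in, c1, 1), nn.ReLU(inplace=True))
            self.b3 = nn.Sequential(nn.Conv2d(c_in, c3r, 1), nn.ReLU(inplace=True),
                                    nn.Conv2d(c3r, c3, 3, padding=1),
                                    nn.ReLU(inplace=True))
            self.b5 = nn.Sequential(nn.Conv2d(c_in, c5r, 1), nn.ReLU(inplace=True),
                                    nn.Conv2d(c5r, c5, 5, padding=2),
                                    nn.ReLU(inplace=True))
            self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                    nn.Conv2d(c_in, cp, 1), nn.ReLU(inplace=True))

        def forward(self, x):
            return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)

    m = InceptionModule(192, 64, 96, 128, 16, 32, 32)   # "inception (3a)"
    y = m(torch.randn(1, 192, 28, 28))                  # -> (1, 256, 28, 28)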

40,257 citations

Journal ArticleDOI
TL;DR: This survey provides an overview of higher-order tensor decompositions, their applications, and available software.
Abstract: This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or $N$-way array. Decompositions of higher-order tensors (i.e., $N$-way arrays with $N \geq 3$) have applications in psychometrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, and elsewhere. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal component analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2 as well as nonnegative variants of all of the above. The N-way Toolbox, Tensor Toolbox, and Multilinear Engine are examples of software packages for working with tensors.
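
As a concrete instance of the CP decomposition named in the abstract, here is a minimal alternating-least-squares (ALS) fit for a 3-way tensor in numpy; this is the textbook scheme, without the column normalization and convergence checks a real implementation would add:

    import numpy as np

    def khatri_rao(B, C):
        # Column-wise Khatri-Rao product: (J x R), (K x R) -> (J*K x R).
        J, R = B.shape
        K, _ = C.shape
        return (B[:, None, :] * C[None, :, :]).reshape(J * K, R)

    def cp_als(X, R, n_iter=100, seed=0):
        # Rank-R CP decomposition: X[i,j,k] ~ sum_r A[i,r] * B[j,r] * C[k,r].
        rng = np.random.default_rng(seed)
        I, J, K = X.shape
        A = rng.standard_normal((I, R))
        B = rng.standard_normal((J, R))
        C = rng.standard_normal((K, R))
        X1 = X.reshape(I, J * K)                     # mode-1 unfolding
        X2 = np.moveaxis(X, 1, 0).reshape(J, I * K)  # mode-2 unfolding
        X3 = np.moveaxis(X, 2, 0).reshape(K, I * J)  # mode-3 unfolding
        for _ in range(n_iter):
            A = X1 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
            B = X2 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
            C = X3 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
        return A, B, C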

9,227 citations

Journal ArticleDOI
TL;DR: Zhang et al. as mentioned in this paper proposed feed-forward denoising convolutional neural networks (DnCNNs) to handle Gaussian denoising with unknown noise levels.
Abstract: Discriminative model learning for image denoising has recently been attracting considerable attention due to its favorable denoising performance. In this paper, we take one step forward by investigating the construction of feed-forward denoising convolutional neural networks (DnCNNs) to embrace the progress in very deep architecture, learning algorithm, and regularization method into image denoising. Specifically, residual learning and batch normalization are utilized to speed up the training process as well as boost the denoising performance. Different from the existing discriminative denoising models which usually train a specific model for additive white Gaussian noise at a certain noise level, our DnCNN model is able to handle Gaussian denoising with unknown noise level (i.e., blind Gaussian denoising). With the residual learning strategy, DnCNN implicitly removes the latent clean image in the hidden layers. This property motivates us to train a single DnCNN model to tackle several general image denoising tasks, such as Gaussian denoising, single image super-resolution, and JPEG image deblocking. Our extensive experiments demonstrate that our DnCNN model not only exhibits high effectiveness in several general image denoising tasks, but can also be efficiently implemented by benefiting from GPU computing.
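
The architecture the abstract describes is simple enough to sketch directly: a first conv+ReLU layer, a stack of conv+batch-norm+ReLU layers, and a final conv that predicts the noise map, which residual learning then subtracts from the input. A minimal PyTorch sketch of that structure (depth and width follow the commonly cited defaults, but treat them as assumptions):

    import torch
    import torch.nn as nn

    class DnCNN(nn.Module):
        # Residual denoiser: the body predicts the noise, the forward pass
        # subtracts it from the noisy input.
        def __init__(self, depth=17, channels=64, image_channels=1):
            super().__init__()
            layers = [nn.Conv2d(image_channels, channels, 3, padding=1),
                      nn.ReLU(inplace=True)]
            for _ in range(depth - 2):
                layers += [nn.Conv2d(channels, channels, 3, padding=1, bias=False),
                           nn.BatchNorm2d(channels),
                           nn.ReLU(inplace=True)]
            layers += [nn.Conv2d(channels, image_channels, 3, padding=1)]
            self.body = nn.Sequential(*layers)

        def forward(self, x):
            return x - self.body(x)  # subtract the predicted residual (noise)

    denoised = DnCNN()(torch.randn(1, 1, 64, 64))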

5,902 citations

Book ChapterDOI
07 Oct 2012
TL;DR: The goal is to parse typical, often messy, indoor scenes into floor, walls, supporting surfaces, and object regions, and to recover support relationships, to better understand how 3D cues can best inform a structured 3D interpretation.
Abstract: We present an approach to interpret the major surfaces, objects, and support relations of an indoor scene from an RGBD image. Most existing work ignores physical interactions or is applied only to tidy rooms and hallways. Our goal is to parse typical, often messy, indoor scenes into floor, walls, supporting surfaces, and object regions, and to recover support relationships. One of our main interests is to better understand how 3D cues can best inform a structured 3D interpretation. We also contribute a novel integer programming formulation to infer physical support relations. We offer a new dataset of 1449 RGBD images, capturing 464 diverse indoor scenes, with detailed annotations. Our experiments demonstrate our ability to infer support relations in complex scenes and verify that our 3D scene cues and inferred support lead to better object segmentation.
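
The integer-programming piece can be illustrated with a toy version: a binary variable x[i, j] saying region i is supported by candidate j, a unary cost per choice, and a constraint that each region picks exactly one supporter. The paper's actual formulation couples support with surface labels and geometric consistency; this sketch, using scipy's MILP solver, shows only the skeleton:

    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    def assign_supporters(cost):
        # cost[i, j]: penalty for region i being supported by candidate j.
        n, m = cost.shape
        A = np.zeros((n, n * m))
        for i in range(n):
            A[i, i * m:(i + 1) * m] = 1.0        # sum_j x[i, j] == 1
        res = milp(c=cost.ravel(),
                   constraints=LinearConstraint(A, lb=1, ub=1),
                   integrality=np.ones(n * m),   # all variables integer (binary)
                   bounds=Bounds(0, 1))
        return res.x.reshape(n, m).argmax(axis=1)

    # 3 regions, 2 candidate supporters (say, floor and a table top)
    print(assign_supporters(np.array([[0.2, 0.9], [0.8, 0.1], [0.3, 0.4]])))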

4,827 citations