SciSpace
Author

Shmuel Peleg

Bio: Shmuel Peleg is an academic researcher at the Hebrew University of Jerusalem. He has contributed to research in topics including motion estimation and image processing, has an h-index of 60, and has co-authored 221 publications receiving 16,853 citations. Previous affiliations of Shmuel Peleg include Sarnoff Corporation and the University of Maryland, College Park.


Papers
Journal ArticleDOI
TL;DR: In this paper, image resolution is improved by assuming that the relative displacements in an image sequence are known accurately and that some knowledge of the imaging process is available; the proposed approach is similar to back-projection as used in tomography.

2,081 citations
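The back-projection idea can be sketched in a few lines: simulate each low-resolution frame from the current high-resolution estimate, then redistribute the simulation error back into the estimate, much as tomographic back-projection redistributes projection errors. Below is a minimal 1-D sketch assuming known integer shifts, a simple 2-tap box blur, and a crude transpose of the decimation as the back-projection kernel; all names and parameters are illustrative, not the authors' implementation.

```python
import numpy as np

def downsample(x, shift, factor=2):
    """Simulate a low-res observation: shift, blur (2-tap box), then decimate."""
    shifted = np.roll(x, shift)
    blurred = 0.5 * (shifted + np.roll(shifted, -1))
    return blurred[::factor]

def back_project(lr_images, shifts, hr_len, factor=2, iters=50, step=0.5):
    """Iterative back-projection: refine the HR estimate by redistributing the
    simulation error of each LR frame back into the high-resolution grid."""
    hr = np.zeros(hr_len)
    for _ in range(iters):
        for lr, s in zip(lr_images, shifts):
            err = lr - downsample(hr, s, factor)   # error in the LR domain
            up = np.repeat(err, factor) / factor   # crude transpose of decimation
            hr += step * np.roll(up, -s)           # undo the shift, update HR
    return hr
```

The choice of back-projection kernel here (plain error spreading) is the simplest option; the convergence behavior depends on how well it approximates the transpose of the imaging process.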

Journal ArticleDOI
TL;DR: Accurate computation of image motion enables the enhancement of image sequences that include improvement of image resolution, filling-in occluded regions, and reconstruction of transparent objects.

911 citations

Journal ArticleDOI
TL;DR: Textures are classified based on the change in their properties with changing resolution, and the relation of a texture picture to its negative, and directional properties are discussed.
Abstract: Textures are classified based on the change in their properties with changing resolution. The area of the gray-level surface is measured at several resolutions. This area decreases at coarser resolutions, since fine details that contribute to the area disappear. Fractal properties of the picture are computed from the rate of this decrease in area and are used for texture comparison and classification. The relation of a texture picture to its negative, and directional properties, are also discussed.

833 citations
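The multiresolution area measurement described in the abstract can be sketched with an epsilon-blanket construction: grow upper and lower blankets around the gray-level surface and read the area at each scale from the change in enclosed volume. The sketch below is a simplified illustration, assuming 4-neighborhoods and the scaling law A(eps) ~ eps^(2-D); the neighborhood shape and fitting range are choices of this sketch, not necessarily the paper's exact procedure.

```python
import numpy as np

def blanket_areas(img, max_scale=8):
    """Gray-level surface area at several resolutions: dilate an upper blanket
    and erode a lower one, then differentiate the enclosed volume w.r.t. scale."""
    u = img.astype(float)
    l = img.astype(float)
    vols, areas = [], []
    for eps in range(1, max_scale + 1):
        nb_max = np.maximum.reduce([np.roll(u, 1, 0), np.roll(u, -1, 0),
                                    np.roll(u, 1, 1), np.roll(u, -1, 1)])
        nb_min = np.minimum.reduce([np.roll(l, 1, 0), np.roll(l, -1, 0),
                                    np.roll(l, 1, 1), np.roll(l, -1, 1)])
        u = np.maximum(u + 1, nb_max)      # upper blanket grows by >= 1
        l = np.minimum(l - 1, nb_min)      # lower blanket shrinks by >= 1
        vols.append(np.sum(u - l))
        prev = vols[-2] if eps > 1 else 0.0
        areas.append((vols[-1] - prev) / 2.0)
    return np.array(areas)

def fractal_dimension(img, max_scale=8):
    """A(eps) ~ eps^(2 - D): estimate D from the log-log slope of area vs scale."""
    eps = np.arange(1, max_scale + 1)
    areas = blanket_areas(img, max_scale)
    slope = np.polyfit(np.log(eps), np.log(areas), 1)[0]
    return 2.0 - slope
```

A smooth surface keeps a constant area across scales (D near 2), while a rough texture loses area at coarser scales, pushing D above 2.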

Patent
03 Dec 1996
TL;DR: In this article, the authors obtain and analyze images of the irises of a human or an animal with little or no active involvement by the subject. A method is described for obtaining and analyzing images of at least one object in a scene: a wide field of view image is captured to locate the object, and a narrow field of view (NFOV) imager, responsive to the location information provided by the first step, then obtains a higher-resolution image.
Abstract: A recognition system is disclosed that obtains and analyzes images of at least one object in a scene. It comprises a wide field of view (WFOV) imager, used to capture an image of the scene and to locate the object, and a narrow field of view (NFOV) imager, which is responsive to the location information provided by the WFOV imager and is used to capture an image of the object at a higher resolution than the image captured by the WFOV imager. In one embodiment, the system obtains and analyzes images of the irises of the eyes of a human or animal with little or no active involvement by the subject. Also disclosed is a method for obtaining and analyzing images of at least one object in a scene: capturing a wide field of view image to locate the object in the scene, and then using a narrow field of view imager, responsive to the location information provided in the capturing step, to obtain a higher-resolution image of the object.

599 citations
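The claimed two-stage acquisition is: locate coarsely in the wide field, then image finely in the narrow field. A toy sketch, with block averaging standing in for the WFOV optics and a window crop standing in for steering the NFOV imager (both are stand-ins for illustration, not the patented hardware):

```python
import numpy as np

def locate_in_wfov(scene_hi, factor=4):
    """Stage 1: a coarse wide-field image (block-averaged) locates the object;
    its coordinates are mapped back to the full-resolution scene frame."""
    h, w = scene_hi.shape
    wide = scene_hi.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    iy, ix = np.unravel_index(np.argmax(wide), wide.shape)
    return iy * factor, ix * factor

def capture_nfov(scene_hi, center, size=8):
    """Stage 2: the narrow-field imager captures a small high-resolution
    window around the location reported by the WFOV stage."""
    y, x = center
    y0 = np.clip(y - size // 2, 0, scene_hi.shape[0] - size)
    x0 = np.clip(x - size // 2, 0, scene_hi.shape[1] - size)
    return scene_hi[y0:y0 + size, x0:x0 + size]
```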

Journal ArticleDOI
TL;DR: In this article, the authors proposed a continuous symmetry measure to quantify the distance of a given (distorted molecular) shape from any chosen element of symmetry, allowing one to compare the symmetry distance of several objects relative to a single symmetry element.
Abstract: We advance the notion that for many realistic issues involving symmetry in chemistry, it is more natural to analyze symmetry properties in terms of a continuous scale rather than in terms of "yes or no". Justification of that approach is dealt with in some detail using examples such as: symmetry distortions due to vibrations; changes in the "allowedness" of electronic transitions due to deviations from an ideal symmetry; continuous changes in environmental symmetry with reference to crystal and ligand field effects; non-ideal symmetry in concerted reactions; symmetry issues of polymers and large random objects. A versatile, simple tool is developed as a continuous symmetry measure. Its main property is the ability to quantify the distance of a given (distorted molecular) shape from any chosen element of symmetry. The generality of this symmetry measure allows one to compare the symmetry distance of several objects relative to a single symmetry element and to compare the symmetry distance of a single object relative to various symmetry elements. The continuous symmetry approach is presented in detail for the case of cyclic molecules, first in a practical way and then with a rigorous mathematical analysis. The versatility of the approach is then further demonstrated with alkane conformations, with a vibrating ABA water-like molecule, and with a three-dimensional analysis of the symmetry of a [2+2] reaction in which the double bonds are not ideally aligned.

584 citations


Cited by
Book ChapterDOI
08 Oct 2016
TL;DR: In this paper, the authors combine the benefits of both approaches and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks; for image style transfer, a feed-forward network is trained to solve the optimization problem proposed by Gatys et al. in real time.
Abstract: We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a per-pixel loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing perceptual loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by Gatys et al. in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results.

6,639 citations
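The contrast between a per-pixel loss and a perceptual loss can be illustrated with a toy feature extractor: instead of comparing images pixel by pixel, compare them in a feature space. Here fixed random convolution filters with ReLU and average pooling stand in for the hidden layers of a pretrained network such as VGG (the random filters are purely a stand-in; the paper uses pretrained features):

```python
import numpy as np

rng = np.random.default_rng(0)
KERNELS = rng.standard_normal((8, 3, 3))  # stand-in for pretrained filters

def features(img):
    """Toy feature extractor: fixed 3x3 conv filters + ReLU + 2x2 average
    pooling. A real perceptual loss would use a pretrained network's layers."""
    h, w = img.shape
    maps = []
    for k in KERNELS:
        fm = sum(k[i, j] * img[i:h - 2 + i, j:w - 2 + j]
                 for i in range(3) for j in range(3))   # valid 3x3 convolution
        fm = np.maximum(fm, 0.0)                        # ReLU
        fh, fw = fm.shape[0] // 2 * 2, fm.shape[1] // 2 * 2
        fm = fm[:fh, :fw].reshape(fh // 2, 2, fw // 2, 2).mean(axis=(1, 3))
        maps.append(fm)
    return np.stack(maps)

def pixel_loss(a, b):
    """Per-pixel MSE, as in the baseline methods discussed in the abstract."""
    return np.mean((a - b) ** 2)

def perceptual_loss(a, b):
    """MSE between feature maps rather than between raw pixels."""
    return np.mean((features(a) - features(b)) ** 2)
```

Because the feature maps are pooled, the perceptual loss responds to the structure of the image rather than to exact pixel alignment, which is the property the paper exploits for style transfer and super-resolution.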

Journal ArticleDOI
TL;DR: A deep learning method for single image super-resolution (SR) is proposed that directly learns an end-to-end mapping between low- and high-resolution images.
Abstract: We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. We explore different network structures and parameter settings to achieve trade-offs between performance and speed. Moreover, we extend our network to cope with three color channels simultaneously, and show better overall reconstruction quality.

6,122 citations
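The end-to-end mapping described here is a small three-layer network: patch extraction, nonlinear mapping, and reconstruction. A shape-only sketch in plain NumPy, using the 9-1-5 kernel sizes and 64/32 feature maps of the published configuration, with randomly initialized (untrained) filters purely to show the data flow:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv(x, w):
    """'Valid' convolution: x is (C, H, W), w is (F, C, k, k) -> (F, H-k+1, W-k+1)."""
    C, H, W = x.shape
    F, _, k, _ = w.shape
    out = np.zeros((F, H - k + 1, W - k + 1))
    for f in range(F):
        for c in range(C):
            for i in range(k):
                for j in range(k):
                    out[f] += w[f, c, i, j] * x[c, i:H - k + 1 + i, j:W - k + 1 + j]
    return out

def srcnn(y, w1, w2, w3):
    """Three-stage mapping: patch extraction -> nonlinear mapping -> reconstruction.
    The input y is the upscaled low-resolution image (one channel)."""
    f1 = np.maximum(conv(y[None], w1), 0)   # 9x9 patch-extraction filters + ReLU
    f2 = np.maximum(conv(f1, w2), 0)        # 1x1 nonlinear mapping + ReLU
    return conv(f2, w3)[0]                  # 5x5 reconstruction, back to one channel

# Untrained filters, shown only for the shapes (64 and 32 feature maps).
w1 = rng.standard_normal((64, 1, 9, 9)) * 0.01
w2 = rng.standard_normal((32, 64, 1, 1)) * 0.01
w3 = rng.standard_normal((1, 32, 5, 5)) * 0.01
```

Because all three stages are convolutions, the whole pipeline is one network that can be optimized jointly, which is the contrast the abstract draws with sparse-coding pipelines that handle each component separately.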

Posted Content
TL;DR: This work considers image transformation problems and proposes the use of perceptual loss functions for training feed-forward networks for image transformation tasks; results are shown on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by Gatys et al. in real time.
Abstract: We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a per-pixel loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing perceptual loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by Gatys et al. in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results.

5,668 citations

Journal ArticleDOI
08 Sep 1978 - Science

5,182 citations

Journal ArticleDOI
TL;DR: This paper presents a new approach to single-image superresolution, based upon sparse signal representation, which generates high-resolution images that are competitive or even superior in quality to images produced by other similar SR methods.
Abstract: This paper presents a new approach to single-image superresolution, based upon sparse signal representation. Research on image statistics suggests that image patches can be well-represented as a sparse linear combination of elements from an appropriately chosen over-complete dictionary. Inspired by this observation, we seek a sparse representation for each patch of the low-resolution input, and then use the coefficients of this representation to generate the high-resolution output. Theoretical results from compressed sensing suggest that under mild conditions, the sparse representation can be correctly recovered from the downsampled signals. By jointly training two dictionaries for the low- and high-resolution image patches, we can enforce the similarity of sparse representations between the low-resolution and high-resolution image patch pair with respect to their own dictionaries. Therefore, the sparse representation of a low-resolution image patch can be applied with the high-resolution image patch dictionary to generate a high-resolution image patch. The learned dictionary pair is a more compact representation of the patch pairs, compared to previous approaches, which simply sample a large number of image patch pairs, reducing the computational cost substantially. The effectiveness of such a sparsity prior is demonstrated for both general image super-resolution (SR) and the special case of face hallucination. In both cases, our algorithm generates high-resolution images that are competitive or even superior in quality to images produced by other similar SR methods. In addition, the local sparse modeling of our approach is naturally robust to noise, and therefore the proposed algorithm can handle SR with noisy inputs in a more unified framework.

4,958 citations
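The coupled-dictionary idea in this abstract can be sketched directly: sparse-code a low-resolution patch over the LR dictionary, then synthesize the HR patch from the HR dictionary with the same coefficients. A minimal sketch using greedy orthogonal matching pursuit in place of the paper's L1 formulation (the dictionaries and solver here are illustrative, not the paper's trained ones):

```python
import numpy as np

def omp(D, y, k):
    """Greedy sparse coding (orthogonal matching pursuit): select at most k
    atoms of dictionary D that best explain y, refitting coefficients each step."""
    idx, r = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ r)))     # atom most correlated with residual
        if j not in idx:
            idx.append(j)
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        r = y - D[:, idx] @ coef                # update the residual
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

def sr_patch(lr_patch, D_lr, D_hr, k=3):
    """Code the LR patch over the LR dictionary, then synthesize the HR patch
    from the HR dictionary with the *same* sparse coefficients."""
    x = omp(D_lr, lr_patch, k)
    return D_hr @ x
```

The jointly trained pair (D_lr, D_hr) is what makes the shared coefficients meaningful: if an LR patch is a sparse mix of LR atoms, the matching mix of HR atoms yields the high-resolution patch.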