Author
Hagit Hel-Or
Other affiliations: Stanford University, Bar-Ilan University, IEEE Computer Society
Bio: Hagit Hel-Or is an academic researcher at the University of Haifa. Her research focuses on image processing and digital watermarking. She has an h-index of 19 and has co-authored 55 publications receiving 1,676 citations. Previous affiliations of Hagit Hel-Or include Stanford University and Bar-Ilan University.
Papers published on a yearly basis
Papers
TL;DR: Old and new methods of measuring fluctuating asymmetry are reviewed, including measures of dispersion, landmark methods for shape asymmetry, and continuous symmetry measures, and attempts to explain conflicting results.
Abstract: Fluctuating asymmetry consists of random deviations from perfect symmetry in populations of organisms. It is a measure of developmental noise, which reflects a population’s average state of adaptation and coadaptation. Moreover, it increases under both environmental and genetic stress, though responses are often inconsistent. Researchers base studies of fluctuating asymmetry upon deviations from bilateral, radial, rotational, dihedral, translational, helical, and fractal symmetries. Here, we review old and new methods of measuring fluctuating asymmetry, including measures of dispersion, landmark methods for shape asymmetry, and continuous symmetry measures. We also review the theory, developmental origins, and applications of fluctuating asymmetry, and attempt to explain conflicting results. In the process, we present examples from the literature, and from our own research at “Evolution Canyon” and elsewhere.
327 citations
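The dispersion-based measures mentioned in the abstract can be illustrated with a minimal sketch. The trait values below are hypothetical; the index names follow the standard Palmer & Strobeck numbering from the fluctuating-asymmetry literature (FA1: mean absolute left-right difference, FA4: variance of the signed differences):

```python
import numpy as np

# Hypothetical left/right measurements of one trait across six individuals.
left  = np.array([10.2, 9.8, 10.5, 10.1, 9.9, 10.3])
right = np.array([10.0, 10.1, 10.4, 9.8, 10.0, 10.6])

d = right - left                  # signed asymmetry per individual

# Two classic dispersion-based indices (Palmer & Strobeck numbering):
fa1 = np.mean(np.abs(d))          # FA1: mean absolute left-right difference
fa4 = np.var(d, ddof=1)          # FA4: variance of the signed differences

print(f"FA1 = {fa1:.3f}, FA4 = {fa4:.4f}")
```

In practice these indices are computed per trait and corrected for measurement error and directional asymmetry; the sketch shows only the core dispersion calculation.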
Book
17 Jun 2010
TL;DR: Recognizing the fundamental relevance and potential power of computational symmetry, a computational treatment of symmetry and group theory has the potential to play an important role in the computational sciences.
Abstract: In the arts and sciences, as well as in our daily lives, symmetry has made a profound and lasting impact. Likewise, a computational treatment of symmetry and group theory (the ultimate mathematical formalization of symmetry) has the potential to play an important role in computational sciences. Though the term Computational Symmetry was formally defined a decade ago by the first author, referring to algorithmic treatment of symmetries, seeking symmetry from digital data has been attempted for over four decades. Computational symmetry on real world data turns out to be challenging enough that, after decades of effort, a fully automated symmetry-savvy system remains elusive for real world applications. The recent resurgence of interest in computational symmetry for computer vision and computer graphics applications has produced promising results. Recognizing the fundamental relevance and potential power that computational symmetry affords, we offer this survey to the computer vision and computer graphics communities. This survey provides a succinct summary of the relevant mathematical theory, a historic perspective of some important symmetry-related ideas, a partial yet timely report on state-of-the-art symmetry detection algorithms along with their first quantitative benchmark, a diverse set of real world applications, suggestions for future directions and a comprehensive reference list.
235 citations
TL;DR: A novel approach to pattern matching is presented in which time complexity is reduced by two orders of magnitude compared to traditional approaches because of an efficient projection scheme which bounds the distance between a pattern and an image window using very few operations on average.
Abstract: A novel approach to pattern matching is presented in which time complexity is reduced by two orders of magnitude compared to traditional approaches. The suggested approach uses an efficient projection scheme which bounds the distance between a pattern and an image window using very few operations on average. The projection framework is combined with a rejection scheme which allows rapid rejection of image windows that are distant from the pattern. Experiments show that the approach is effective even under very noisy conditions. The approach described here can also be used in classification schemes where the projection values serve as input features that are informative and fast to extract.
150 citations
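The projection-and-rejection idea can be sketched under simplifying assumptions: for any orthonormal set of projection vectors, the sum of squared projection differences lower-bounds the true squared Euclidean distance (Bessel's inequality), so distant windows can be rejected after only a few projections. The paper's efficient scheme uses Walsh-Hadamard kernels; the random orthonormal basis and data below are illustrative stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

k = 16                                      # window/pattern length (1-D for simplicity)
pattern = rng.standard_normal(k)
windows = rng.standard_normal((1000, k))    # candidate image windows, one per row

# Any orthonormal projection vectors yield a valid bound; the paper uses
# Walsh-Hadamard kernels, here we take a few rows of a random orthonormal basis.
basis, _ = np.linalg.qr(rng.standard_normal((k, k)))
U = basis.T[:4]                             # use only 4 of the k possible projections

# Bessel's inequality: the sum of squared projection differences is a lower
# bound on the exact squared Euclidean distance ||pattern - window||^2.
proj_diff = (windows - pattern) @ U.T
lower_bound = np.sum(proj_diff ** 2, axis=1)

# Reject every window whose lower bound already exceeds the threshold;
# only the survivors need the full distance computation.
threshold = 10.0
survivors = np.flatnonzero(lower_bound <= threshold)

exact = np.sum((windows - pattern) ** 2, axis=1)
```

Because the bound can only grow as more projections are added, most windows are rejected after very few operations, which is the source of the reported speedup.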
TL;DR: It is argued that the assumptions introduced in most studies arise from the complexity of the problem of shadow removal from a single image and limit the class of shadow images which can be handled by these methods.
Abstract: Removal of shadows from a single image is a challenging problem. Producing a high-quality shadow-free image which is indistinguishable from a reproduction of a true shadow-free scene is even more difficult. Shadows in images are typically affected by several phenomena in the scene, including physical phenomena such as lighting conditions, type and behavior of shadowed surfaces, occluding objects, etc. Additionally, shadow regions may undergo postacquisition image processing transformations, e.g., contrast enhancement, which may introduce noticeable artifacts in the shadow-free images. We argue that the assumptions introduced in most studies arise from the complexity of the problem of shadow removal from a single image and limit the class of shadow images which can be handled by these methods. The purpose of this paper is twofold: First, it provides a comprehensive survey of the problems and challenges which may occur when removing shadows from a single image. In the second part of the paper, we present our framework for shadow removal, in which we attempt to overcome some of the fundamental problems described in the first part of the paper. Experimental results demonstrating the capabilities of our algorithm are presented.
137 citations
TL;DR: It is shown that, when tone mapping is approximated by a piecewise constant/linear function, a fast computational scheme is possible requiring computational time similar to the fast implementation of normalized cross correlation (NCC).
Abstract: A fast pattern matching scheme termed matching by tone mapping (MTM) is introduced which allows matching under nonlinear tone mappings. We show that, when tone mapping is approximated by a piecewise constant/linear function, a fast computational scheme is possible requiring computational time similar to the fast implementation of normalized cross correlation (NCC). In fact, the MTM measure can be viewed as a generalization of the NCC for nonlinear mappings and actually reduces to NCC when mappings are restricted to be linear. We empirically show that the MTM is highly discriminative and robust to noise, with performance comparable to that of the well-performing mutual information measure, while remaining on par with NCC in computation time.
96 citations
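A minimal sketch of the piecewise-constant case: bin the pattern's gray levels, fit the best constant per bin (the mean of the window pixels whose pattern value falls in that bin), and normalize the residual by the window's variance. This is a simplified illustration of the MTM idea with synthetic data, not the paper's optimized implementation:

```python
import numpy as np

def mtm_distance(pattern, window, n_bins=8):
    """Distance under the best piecewise-constant tone mapping (MTM sketch).

    Pattern gray levels are split into n_bins bins; the optimal constant for
    each bin is the mean of the window pixels whose pattern value falls in
    that bin, so the residual is the window's within-bin variance.  The score
    is normalized by the window's total variance: ~0 means a perfect
    tone-mapped match, ~1 means no better than a single constant.
    """
    p = pattern.ravel().astype(float)
    w = window.ravel().astype(float)
    edges = np.linspace(p.min(), p.max(), n_bins + 1)
    idx = np.clip(np.digitize(p, edges) - 1, 0, n_bins - 1)
    residual = sum(np.sum((w[idx == b] - w[idx == b].mean()) ** 2)
                   for b in range(n_bins) if np.any(idx == b))
    return residual / (w.size * w.var() + 1e-12)

# A nonlinear tone mapping of the pattern scores near 0; an unrelated
# window scores near 1.
rng = np.random.default_rng(1)
p = rng.uniform(0, 255, (16, 16))
print(mtm_distance(p, np.sqrt(p)))                     # small: good match
print(mtm_distance(p, rng.uniform(0, 255, (16, 16))))  # near 1: no match
```

Restricting the mapping to be linear recovers an NCC-like score, which is why the paper presents MTM as a generalization of NCC.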
Cited by
Book
24 Oct 2001
TL;DR: Digital Watermarking covers the crucial research findings in the field and explains the principles underlying digital watermarking technologies, describes the requirements that have given rise to them, and discusses the diverse ends to which these technologies are being applied.
Abstract: Digital watermarking is a key ingredient to copyright protection. It provides a solution to illegal copying of digital material and has many other useful applications such as broadcast monitoring and the recording of electronic transactions. Now, for the first time, there is a book that focuses exclusively on this exciting technology. Digital Watermarking covers the crucial research findings in the field: it explains the principles underlying digital watermarking technologies, describes the requirements that have given rise to them, and discusses the diverse ends to which these technologies are being applied. As a result, additional groundwork is laid for future developments in this field, helping the reader understand and anticipate new approaches and applications.
2,849 citations
TL;DR: This paper describes two digital implementations of a new mathematical transform, namely, the second-generation curvelet transform in two and three dimensions: the first is based on unequally spaced fast Fourier transforms, while the second is based on the wrapping of specially selected Fourier samples.
Abstract: This paper describes two digital implementations of a new mathematical transform, namely, the second generation curvelet transform in two and three dimensions. The first digital transformation is based on unequally spaced fast Fourier transforms, while the second is based on the wrapping of specially selected Fourier samples. The two implementations essentially differ by the choice of spatial grid used to translate curvelets at each scale and angle. Both digital transformations return a table of digital curvelet coefficients indexed by a scale parameter, an orientation parameter, and a spatial location parameter. And both implementations are fast in the sense that they run in O(n^2 log n) flops for n by n Cartesian arrays; in addition, they are also invertible, with rapid inversion algorithms of about the same complexity. Our digital transformations improve upon earlier implementations—based upon the first generation of curvelets—in the sense that they are conceptually simpler, faster, and far less redundant. The software CurveLab, which implements both transforms presented in this paper, is available at http://www.curvelet.org.
2,603 citations
The Perception of the Visual World (book; no abstract available).
2,250 citations
TL;DR: This work identified the borders between several retinotopically organized visual areas in the posterior occipital lobe and estimated the spatial resolution of the fMRI signal and found that signal amplitude falls to 60% at a spatial frequency of 1 cycle per 9 mm of visual cortex.
Abstract: A method of using functional magnetic resonance imaging (fMRI) to measure retinotopic organization within human cortex is described. The method is based on a visual stimulus that creates a traveling wave of neural activity within retinotopically organized visual areas. We measured the fMRI signal caused by this stimulus in visual cortex and represented the results on images of the flattened cortical sheet. We used the method to locate visual areas and to evaluate the spatial precision of fMRI. Specifically, we: (i) identified the borders between several retinotopically organized visual areas in the posterior occipital lobe; (ii) measured the function relating cortical position to visual field eccentricity within area V1; (iii) localized activity to within 1.1 mm of visual cortex; and (iv) estimated the spatial resolution of the fMRI signal and found that signal amplitude falls to 60% at a spatial frequency of 1 cycle per 9 mm of visual cortex. This spatial resolution is consistent with a linespread whose full width at half maximum spreads across 3.5 mm of visual cortex.
1,585 citations