Author

Irad Yavneh

Bio: Irad Yavneh is an academic researcher from Technion – Israel Institute of Technology. The author has contributed to research in topics: Multigrid method & Sparse approximation. The author has an h-index of 31, co-authored 124 publications receiving 3185 citations. Previous affiliations of Irad Yavneh include National Center for Atmospheric Research & Weizmann Institute of Science.


Papers
Journal ArticleDOI
TL;DR: In this article, the authors analyzed the particular example of an unbalanced instability of a balanced, horizontally uniform, vertically sheared current, as it occurs within the Boussinesq equations.
Abstract: Under the influences of stable density stratification and the earth’s rotation, large-scale flows in the ocean and atmosphere have a mainly balanced dynamics—sometimes called the slow manifold—in the sense that there are diagnostic hydrostatic and gradient-wind momentum balances that constrain the fluid acceleration. The nonlinear balance equations are a widely successful, approximate model for this regime, and mathematically explicit limits of their time integrability have been identified. It is hypothesized that these limits are indicative, at least approximately, of the transition from the larger-scale regime of inverse energy cascades by anisotropic flows to the smaller-scale regime of forward energy cascade to dissipation by more nearly isotropic flows and intermittently breaking inertia–gravity waves. This paper analyzes the particular example of an unbalanced instability of a balanced, horizontally uniform, vertically sheared current, as it occurs within the Boussinesq equations. This ageo...

216 citations

Journal ArticleDOI
TL;DR: In this paper, a sparsity-based single-shot subwavelength resolution coherent diffractive imaging (CDI) method was proposed to reconstruct sub-wavelength features from far-field intensity patterns at a resolution several times better than the diffraction limit.
Abstract: Coherent Diffractive Imaging (CDI) is an algorithmic imaging technique where intricate features are reconstructed from measurements of the freely diffracting intensity pattern. An important goal of such lensless imaging methods is to study the structure of molecules that cannot be crystallized. Ideally, one would want to perform CDI at the highest achievable spatial resolution and in a single-shot measurement such that it could be applied to imaging of ultrafast events. However, the resolution of current CDI techniques is limited by the diffraction limit, hence they cannot resolve features smaller than one half the wavelength of the illuminating light. Here, we present sparsity-based single-shot subwavelength resolution CDI: algorithmic reconstruction of subwavelength features from far-field intensity patterns, at a resolution several times better than the diffraction limit. This work paves the way for subwavelength CDI at ultrafast rates, and it can considerably improve the CDI resolution with X-ray free-electron lasers and high harmonics.

189 citations

Journal ArticleDOI
TL;DR: It is shown that while the maximum a posteriori probability (MAP) estimator aims to find and use the sparsest representation, the minimum mean-squared-error (MMSE) estimator leads to a fusion of representations to form its result, yielding a far more accurate estimate in terms of the expected ℓ2-norm error.
Abstract: Cleaning of noise from signals is a classical and long-studied problem in signal processing. Algorithms for this task necessarily rely on an a priori knowledge about the signal characteristics, along with information about the noise properties. For signals that admit sparse representations over a known dictionary, a commonly used denoising technique is to seek the sparsest representation that synthesizes a signal close enough to the corrupted one. As this problem is too complex in general, approximation methods, such as greedy pursuit algorithms, are often employed. In this line of reasoning, we are led to believe that detection of the sparsest representation is key in the success of the denoising goal. Does this mean that other competitive and slightly inferior sparse representations are meaningless? Suppose we are served with a group of competing sparse representations, each claiming to explain the signal differently. Can those be fused somehow to lead to a better result? Surprisingly, the answer to this question is positive; merging these representations can form a more accurate (in the mean-squared-error (MSE) sense), yet dense, estimate of the original signal even when the latter is known to be sparse. In this paper, we demonstrate this behavior, propose a practical way to generate such a collection of representations by randomizing the Orthogonal Matching Pursuit (OMP) algorithm, and produce a clear analytical justification for the superiority of the associated Randomized OMP (RandOMP) algorithm. We show that while the maximum a posteriori probability (MAP) estimator aims to find and use the sparsest representation, the minimum mean-squared-error (MMSE) estimator leads to a fusion of representations to form its result. Thus, working with an appropriate mixture of candidate representations, we are surpassing the MAP and tending towards the MMSE estimate, and thereby getting a far more accurate estimation in terms of the expected ℓ2-norm error.
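The fusion idea above can be sketched in a few lines. The dictionary, problem sizes, and the exponential atom-selection rule below are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem (sizes and noise level are illustrative assumptions):
# unit-norm random dictionary D, k-sparse signal x0, noisy observation y.
n, m, k, sigma = 64, 128, 4, 0.05
D = rng.standard_normal((n, m))
D /= np.linalg.norm(D, axis=0)
support = rng.choice(m, size=k, replace=False)
x0 = np.zeros(m)
x0[support] = rng.standard_normal(k)
y = D @ x0 + sigma * rng.standard_normal(n)

def randomized_omp(D, y, k, rng, temp=1.0):
    """One randomized pursuit pass: pick each atom with probability
    increasing in its squared correlation with the residual, instead of
    always taking the argmax as plain OMP does (a sketch of the RandOMP
    idea, not the paper's exact selection rule)."""
    residual, idx = y.copy(), []
    for _ in range(k):
        w = (D.T @ residual) ** 2
        w = np.exp((w - w.max()) / temp)  # stabilized selection weights
        w[idx] = 0.0                      # never reselect a chosen atom
        w /= w.sum()
        idx.append(int(rng.choice(D.shape[1], p=w)))
        sub = D[:, idx]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

# Fusing many competing sparse representations by simple averaging gives
# a denser estimate, in the spirit of the MMSE fusion described above.
reps = [randomized_omp(D, y, k, rng) for _ in range(50)]
x_fused = np.mean(reps, axis=0)
y_fused = D @ x_fused
```

Each individual run is k-sparse, but the average over runs spreads mass across the union of the selected supports, which is exactly the "dense yet more accurate" behavior the abstract describes.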

188 citations

Journal ArticleDOI
TL;DR: A method is presented for single-shot sub-wavelength-resolution Coherent Diffractive Imaging (CDI), i.e., algorithmic object reconstruction from far-field intensity measurements, applicable to objects that are sparse in a known basis.
Abstract: We present the experimental reconstruction of sub-wavelength features from the far-field intensity of sparse optical objects: sparsity-based sub-wavelength imaging combined with phase-retrieval. As examples, we demonstrate the recovery of random and ordered arrangements of 100 nm features at a resolution of 30 nm, with an illuminating wavelength of 532 nm. Our algorithmic technique relies on minimizing the number of degrees of freedom; it works in real-time, requires no scanning, and can be implemented in all existing microscopes, optical and non-optical.
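A loose, self-contained illustration of why sparsity permits recovery beyond a low-pass cutoff: a sparse 1-D "object" is observed only through its lowest spatial frequencies (mimicking a diffraction limit) and then recovered by greedy pursuit. This is plain OMP over a partial Fourier matrix with made-up sizes, not the paper's algorithm, which also handles the missing Fourier phase:

```python
import numpy as np

rng = np.random.default_rng(2)

# k-sparse object, band-limited measurement of its lowest frequencies only
n, k = 128, 3
x0 = np.zeros(n)
x0[rng.choice(n, size=k, replace=False)] = 1.0
F = np.fft.fft(np.eye(n)) / np.sqrt(n)   # unitary DFT matrix
keep = np.r_[0:9, n - 8:n]               # DC plus the 8 lowest +/- bins
A = F[keep]
y = A @ x0                               # "diffraction-limited" data

# Orthogonal matching pursuit, with the sparsity level k assumed known
residual, idx = y.copy(), []
for _ in range(k):
    corr = np.abs(A.conj().T @ residual)
    corr[idx] = 0.0                      # exclude already-chosen atoms
    idx.append(int(np.argmax(corr)))
    sub = A[:, idx]
    coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
    residual = y - sub @ coef
x_rec = np.zeros(n)
x_rec[idx] = coef.real
```

The pursuit searches over candidate spike locations for the few degrees of freedom consistent with the low-frequency data, which is the "minimizing the number of degrees of freedom" principle the abstract invokes.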

160 citations

Journal ArticleDOI
15 Apr 1994-Science
TL;DR: High-resolution numerical simulations were made of unforced, planetary-scale fluid dynamics based on the quasi-geostrophic equations for a Boussinesq fluid in a uniformly rotating and stably stratified environment, which is an idealization for large regions of either the atmosphere or ocean.
Abstract: High-resolution numerical simulations were made of unforced, planetary-scale fluid dynamics. In particular, the simulation was based on the quasi-geostrophic equations for a Boussinesq fluid in a uniformly rotating and stably stratified environment, which is an idealization for large regions of either the atmosphere or ocean. The solutions show significant discrepancies from the long-standing theoretical prediction of isotropy. The discrepancies are associated with the self-organization of the flow into a large population of coherent vortices. Their chaotic interactions govern the subsequent evolution of the flow toward a final configuration that is nonturbulent.

128 citations


Cited by
Book
24 Aug 2012
TL;DR: This textbook offers a comprehensive and self-contained introduction to the field of machine learning, based on a unified, probabilistic approach, and is suitable for upper-level undergraduates with an introductory-level college math background and beginning graduate students.
Abstract: Today's Web-enabled deluge of electronic data calls for automated methods of data analysis. Machine learning provides these, developing methods that can automatically detect patterns in data and then use the uncovered patterns to predict future data. This textbook offers a comprehensive and self-contained introduction to the field of machine learning, based on a unified, probabilistic approach. The coverage combines breadth and depth, offering necessary background material on such topics as probability, optimization, and linear algebra as well as discussion of recent developments in the field, including conditional random fields, L1 regularization, and deep learning. The book is written in an informal, accessible style, complete with pseudo-code for the most important algorithms. All topics are copiously illustrated with color images and worked examples drawn from such application domains as biology, text processing, computer vision, and robotics. Rather than providing a cookbook of different heuristic methods, the book stresses a principled model-based approach, often using the language of graphical models to specify models in a concise and intuitive way. Almost all the models described have been implemented in a MATLAB software package--PMTK (probabilistic modeling toolkit)--that is freely available online. The book is suitable for upper-level undergraduates with an introductory-level college math background and beginning graduate students.

8,059 citations

Journal ArticleDOI

3,734 citations

Journal ArticleDOI
TL;DR: The so-called nonlocally centralized sparse representation (NCSR) model is as simple as the standard sparse representation model, and extensive experiments validate the generality and state-of-the-art performance of the proposed NCSR algorithm.
Abstract: Sparse representation models code an image patch as a linear combination of a few atoms chosen out from an over-complete dictionary, and they have shown promising results in various image restoration applications. However, due to the degradation of the observed image (e.g., noisy, blurred, and/or down-sampled), the sparse representations by conventional models may not be accurate enough for a faithful reconstruction of the original image. To improve the performance of sparse representation-based image restoration, in this paper the concept of sparse coding noise is introduced, and the goal of image restoration turns to how to suppress the sparse coding noise. To this end, we exploit the image nonlocal self-similarity to obtain good estimates of the sparse coding coefficients of the original image, and then centralize the sparse coding coefficients of the observed image to those estimates. The so-called nonlocally centralized sparse representation (NCSR) model is as simple as the standard sparse representation model, while our extensive experiments on various types of image restoration problems, including denoising, deblurring and super-resolution, validate the generality and state-of-the-art performance of the proposed NCSR algorithm.
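The centralization step described above can be sketched elementwise: instead of shrinking sparse codes toward zero, shrink them toward a nonlocal estimate. The function name and toy values are illustrative, and this is only the scalar shrinkage core, not the full NCSR iteration:

```python
import numpy as np

def centralized_shrink(alpha, beta, lam):
    """Elementwise solution of min_a 0.5*(a - alpha)^2 + lam*|a - beta|:
    soft-threshold the deviation (alpha - beta), then add beta back,
    so codes are pulled toward the nonlocal estimate beta rather than 0."""
    d = np.asarray(alpha) - beta
    return beta + np.sign(d) * np.maximum(np.abs(d) - lam, 0.0)

# Codes far from the nonlocal estimate move toward it; codes within
# lam of the estimate are mapped onto it exactly.
alpha = np.array([1.0, 0.1, -0.5])   # codes of the degraded observation
beta = np.array([0.5, 0.0, 0.0])     # nonlocal estimates of clean codes
shrunk = centralized_shrink(alpha, beta, lam=0.3)
```

Setting beta to zero recovers ordinary soft-thresholding, which makes concrete the abstract's claim that NCSR is "as simple as the standard sparse representation model."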

1,441 citations

Journal ArticleDOI
TL;DR: Extensive experiments on image deblurring and super-resolution validate that by using adaptive sparse domain selection and adaptive regularization, the proposed method achieves much better results than many state-of-the-art algorithms in terms of both PSNR and visual perception.
Abstract: As a powerful statistical image modeling technique, sparse representation has been successfully used in various image restoration applications. The success of sparse representation owes to the development of the l1-norm optimization techniques and the fact that natural images are intrinsically sparse in some domains. The image restoration quality largely depends on whether the employed sparse domain can represent the underlying image well. Considering that the contents can vary significantly across different images or different patches in a single image, we propose to learn various sets of bases from a precollected dataset of example image patches, and then, for a given patch to be processed, one set of bases is adaptively selected to characterize the local sparse domain. We further introduce two adaptive regularization terms into the sparse representation framework. First, a set of autoregressive (AR) models are learned from the dataset of example image patches. The best-fitted AR models for a given patch are adaptively selected to regularize the image local structures. Second, the image nonlocal self-similarity is introduced as another regularization term. In addition, the sparsity regularization parameter is adaptively estimated for better image restoration performance. Extensive experiments on image deblurring and super-resolution validate that by using adaptive sparse domain selection and adaptive regularization, the proposed method achieves much better results than many state-of-the-art algorithms in terms of both PSNR and visual perception.
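The per-patch basis selection described above can be sketched as a nearest-centroid lookup. The centroids and bases below are random stand-ins for quantities learned offline from example patches, and all names and sizes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for offline-learned quantities: K cluster centroids and
# K orthonormal bases, one per cluster of example patches.
K, d = 5, 16
centroids = rng.standard_normal((K, d))
bases = [np.linalg.qr(rng.standard_normal((d, d)))[0] for _ in range(K)]

def select_basis(patch, centroids, bases):
    """Adaptive sparse-domain selection, sketched as: assign the patch
    to its nearest centroid and code it in that cluster's basis."""
    j = int(np.argmin(np.linalg.norm(centroids - patch, axis=1)))
    return j, bases[j].T @ patch      # coefficients in the selected basis

patch = rng.standard_normal(d)
j, coef = select_basis(patch, centroids, bases)
```

Because each basis is orthonormal here, `bases[j] @ coef` reconstructs the patch exactly; the point of the selection is that coefficients are sparser in a basis matched to the patch's local structure than in one fixed global dictionary.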

1,253 citations