Author

Dror Irony

Bio: Dror Irony is an academic researcher from Tel Aviv University. The author has contributed to research on topics including matrix multiplication and Cholesky decomposition, has an h-index of 9, and has co-authored 9 publications receiving 469 citations.

Papers
Journal ArticleDOI
TL;DR: Lower bounds are presented on the amount of communication that matrix multiplication algorithms must perform on a distributed-memory parallel computer; in particular, in any algorithm that uses O(n^2/P) words of memory per processor, at least one processor must send or receive Ω(n^2/P^(1/2)) words.
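For reference, the kind of tradeoff underlying such bounds can be sketched as follows; the asymptotic form is standard for conventional (non-Strassen) matrix multiplication, but the symbol M (per-processor memory in words) and the omitted constants are assumptions here, not quotations from the paper.

```latex
% Memory-communication tradeoff sketch for conventional n x n matrix
% multiplication on P processors, each holding M words of memory:
% some processor must send or receive at least
\[
  \Omega\!\left(\frac{n^{3}}{P\sqrt{M}}\right) \text{ words.}
\]
% Substituting M = O(n^2/P) gives the "2D" bound Omega(n^2/P^{1/2}) quoted
% above; substituting M = O(n^2/P^{2/3}) gives the "3D" bound Omega(n^2/P^{2/3}).
```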

271 citations

Journal ArticleDOI
TL;DR: A new class of shape approximation techniques for irregular triangular meshes is introduced; the geometry of the mesh is approximated using a linear combination of a small number of basis vectors, and an incremental update of the factorization of the least-squares system is developed.
Abstract: We introduce a new class of shape approximation techniques for irregular triangular meshes. Our method approximates the geometry of the mesh using a linear combination of a small number of basis vectors. The basis vectors are functions of the mesh connectivity and of the mesh indices of a number of anchor vertices. There is a fundamental difference between the bases generated by our method and those generated by geometry-oblivious methods, such as Laplacian-based spectral methods. In the latter methods, the basis vectors are functions of the connectivity alone. The basis vectors of our method, in contrast, are geometry-aware since they depend both on the connectivity and on a binary tagging of vertices that are "geometrically important" in the given mesh (e.g., extrema). We show that, by defining the basis vectors to be the solutions of certain least-squares problems, the reconstruction problem reduces to solving a single sparse linear least-squares problem. We also show that this problem can be solved quickly using a state-of-the-art sparse-matrix factorization algorithm. We show how to select the anchor vertices to define a compact effective basis from which an approximated shape can be reconstructed. Furthermore, we develop an incremental update of the factorization of the least-squares system. This allows a progressive scheme where an initial approximation is incrementally refined by a stream of anchor points. We show that the incremental update and solving the factored system are fast enough to allow an online refinement of the mesh geometry.
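A minimal sketch of the reconstruction step described above, under simplifying assumptions: a uniform (connectivity-only, row-normalized) Laplacian stands in for the paper's operator, soft positional constraints are placed on the anchor vertices, and the single sparse least-squares problem is solved with LSQR for brevity, whereas the paper factors the system with a sparse direct solver. The names (reconstruct, faces, anchors, anchor_pos) are hypothetical.

```python
# Sketch (not the paper's code): reconstruct approximate vertex positions from
# mesh connectivity plus a few anchor vertices by solving one sparse linear
# least-squares problem, as the abstract describes.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def reconstruct(n_vertices, faces, anchors, anchor_pos, w=1.0):
    # Uniform (combinatorial) Laplacian built from connectivity alone.
    rows, cols, vals = [], [], []
    neighbors = [set() for _ in range(n_vertices)]
    for f in faces:
        for a, b in ((f[0], f[1]), (f[1], f[2]), (f[2], f[0])):
            neighbors[a].add(b); neighbors[b].add(a)
    for i, nbrs in enumerate(neighbors):
        rows.append(i); cols.append(i); vals.append(1.0)
        for j in nbrs:
            rows.append(i); cols.append(j); vals.append(-1.0 / len(nbrs))
    L = sp.coo_matrix((vals, (rows, cols)), shape=(n_vertices, n_vertices))

    # Soft positional constraints on the anchor vertices.
    A_anchor = sp.coo_matrix(
        (w * np.ones(len(anchors)), (range(len(anchors)), anchors)),
        shape=(len(anchors), n_vertices))
    A = sp.vstack([L, A_anchor]).tocsr()

    # Solve the least-squares system once per coordinate (x, y, z).
    X = np.zeros((n_vertices, 3))
    for c in range(3):
        rhs = np.concatenate([np.zeros(n_vertices), w * anchor_pos[:, c]])
        X[:, c] = lsqr(A, rhs)[0]
    return X
```

Streaming in additional anchors would, as the abstract notes, be handled by updating the factorization incrementally rather than re-solving from scratch.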

74 citations

Journal ArticleDOI
TL;DR: The design, implementation, and performance of a new parallel sparse Cholesky factorization code are presented; the code outperforms two state-of-the-art message-passing codes, and the results imply that recursive schedules, blocked data layouts, and dynamic scheduling are effective in the implementation of sparse factorization codes.
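The recursive-schedule and blocked-layout ideas mentioned in the TL;DR can be illustrated with a toy dense recursive Cholesky; this is only an illustration of the recursive blocking pattern, not the paper's parallel sparse code.

```python
# Illustrative sketch only: recursive, blocked Cholesky of a dense symmetric
# positive-definite matrix A, returning lower-triangular L with A = L @ L.T.
import numpy as np

def rchol(A, block=64):
    n = A.shape[0]
    if n <= block:
        return np.linalg.cholesky(A)
    m = n // 2
    L11 = rchol(A[:m, :m], block)            # factor the top-left block
    L21 = np.linalg.solve(L11, A[:m, m:]).T  # triangular solve for the panel
    S = A[m:, m:] - L21 @ L21.T              # Schur complement update
    L22 = rchol(S, block)                    # factor the complement recursively
    L = np.zeros_like(A)
    L[:m, :m], L[m:, :m], L[m:, m:] = L11, L21, L22
    return L
```

In the paper's setting, an analogous recursion is applied to sparse blocked factors and the resulting tasks are scheduled dynamically.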

33 citations

Proceedings Article
01 Jan 2005
TL;DR: This paper presents a speaker recognition algorithm that explicitly models intra-speaker inter-session variability; the technique is evaluated on the NIST-2004 speaker recognition evaluation corpus and compared to a GMM baseline system.
Abstract: In this paper we present a speaker recognition algorithm that models explicitly intra-speaker inter-session variability. Such variability may be caused by changing speaker characteristics (mood, fatigue, etc.), channel variability or noise variability. We define a session-space in which each session (either train or test session) is a vector. We then calculate a rotation of the session-space for which the estimated intra-speaker subspace is isolated and can be modeled explicitly. We evaluated our technique on the NIST-2004 speaker recognition evaluation corpus, and compared it to a GMM baseline system. Results indicate significant reduction in error rate.
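A conceptual sketch of the session-space idea, with loudly assumed details: sessions are taken to be fixed-length vectors (e.g., GMM supervectors), the intra-speaker inter-session subspace is estimated from within-speaker differences via an SVD, and scoring simply projects that subspace away, whereas the paper models it explicitly. The function names and the cosine scoring are illustrative, not the paper's.

```python
import numpy as np

def intra_speaker_subspace(sessions_by_speaker, k):
    """sessions_by_speaker: list of (n_i, d) arrays, one array per speaker."""
    diffs = [s - s.mean(axis=0) for s in sessions_by_speaker]  # remove speaker means
    D = np.vstack(diffs)                                       # within-speaker variation
    # Leading right singular vectors span the estimated intra-speaker subspace.
    _, _, Vt = np.linalg.svd(D, full_matrices=False)
    return Vt[:k].T                                            # (d, k) basis

def score(train_vec, test_vec, U):
    # Project both sessions onto the complement of the intra-speaker subspace,
    # then use a simple cosine similarity as the verification score.
    P = np.eye(U.shape[0]) - U @ U.T
    a, b = P @ train_vec, P @ test_vec
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```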

29 citations

Journal ArticleDOI
TL;DR: The most significant innovation of the new algorithm is a dynamic partitioning method for the sparse factor that results in very low I/O traffic and allows the algorithm to run at high computational rates, even though the factor is stored on a slow disk.
Abstract: We present a new out-of-core sparse symmetric-indefinite factorization algorithm. The most significant innovation of the new algorithm is a dynamic partitioning method for the sparse factor. This partitioning method results in very low I/O traffic and allows the algorithm to run at high computational rates, even though the factor is stored on a slow disk. Our implementation of the new code compares well with both high-performance in-core sparse symmetric-indefinite codes and a high-performance out-of-core sparse Cholesky code.
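A toy illustration of the partitioning idea only; the paper's dynamic partitioning works on the structure of the sparse factor itself, and nothing below is quoted from it. The sketch greedily groups factor columns into panels that fit an in-core memory budget, so each panel moves between disk and memory as few times as possible.

```python
# Toy sketch: greedily group consecutive factor columns into panels that fit an
# in-core budget, so each panel is read/written once during the factorization sweep.
def partition_columns(col_nnz, budget_words):
    """col_nnz[j] = number of factor entries in column j; returns panels as
    (start, end) half-open ranges whose total size fits in budget_words."""
    panels, start, used = [], 0, 0
    for j, nnz in enumerate(col_nnz):
        if used + nnz > budget_words and j > start:
            panels.append((start, j))
            start, used = j, 0
        used += nnz
    panels.append((start, len(col_nnz)))
    return panels

# Example with hypothetical column sizes and a 1000-word in-core budget.
print(partition_columns([100, 200, 400, 500, 300, 700], 1000))
# -> [(0, 3), (3, 5), (5, 6)]
```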

23 citations


Cited by
Journal ArticleDOI
TL;DR: A large selection of solution methods for linear systems in saddle point form is presented, with an emphasis on iterative methods for large and sparse problems.
Abstract: Large linear systems of saddle point type arise in a wide variety of applications throughout computational science and engineering. Due to their indefiniteness and often poor spectral properties, such linear systems represent a significant challenge for solver developers. In recent years there has been a surge of interest in saddle point problems, and numerous solution techniques have been proposed for this type of system. The aim of this paper is to present and discuss a large selection of solution methods for linear systems in saddle point form, with an emphasis on iterative methods for large and sparse problems.
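For concreteness, the block structure usually meant by "linear systems in saddle point form" can be written as below; the symbol names are generic conventions, not quotations from the survey.

```latex
% Canonical saddle point system: A symmetric positive (semi)definite, B with
% full row rank in the simplest setting. The coefficient matrix is symmetric
% but indefinite, which is what makes these systems hard for iterative solvers.
\[
\begin{pmatrix} A & B^{T} \\ B & 0 \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix}
=
\begin{pmatrix} f \\ g \end{pmatrix}
\]
```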

2,253 citations

01 Jan 2006
TL;DR: Three new variations of a direct factorization scheme to tackle the issue of indefiniteness in sparse symmetric linear systems, including a reordering that is based on a symmetric weighted matching of the matrix, which is effective for highly indefinite symmetric systems.
Abstract: This paper discusses new pivoting factorization methods for solving sparse symmetric indefinite systems. As opposed to many existing pivoting methods, our Supernode-Bunch-Kaufman (SBK) pivoting method dynamically selects 1x1 and 2x2 pivots and may be supplemented by pivot perturbation techniques. We demonstrate the effectiveness and the numerical accuracy of this algorithm and also show that a high performance implementation is feasible. We will also show that symmetric maximum-weighted matching strategies add an additional level of reliability to SBK. These techniques can be seen as a complement to the alternative idea of using more complete pivoting techniques during the numerical factorization. Numerical experiments validate these conclusions. The factorization has the form L D L^T, where D is a block diagonal matrix with 1x1 and 2x2 pivot blocks and L is a sparse lower triangular matrix; a symmetric indefinite diagonal matrix reflects small half-machine precision perturbations, which might be necessary to tackle the problem of tiny pivots. One reordering is based on a symmetric weighted matching of the matrix and tries to move the largest off-diagonal elements directly alongside the diagonal in order to form good initial 1x1 or 2x2 diagonal block pivots; a second is a fill-reducing reordering which honors the structure of the matching. We will present three new variations of a direct factorization scheme to tackle the issue of indefiniteness in sparse symmetric linear systems. These methods restrict the pivoting search to stay as long as possible within predefined data structures for efficient Level-3 BLAS factorization and parallelization. On the other hand, the imposed pivoting restrictions can be reduced in several steps by taking the matching permutation into account. The first algorithm uses Supernode-Bunch-Kaufman (SBK) pivoting and dynamically selects 1x1 and 2x2 pivots. It is supplemented by pivot perturbation techniques. It uses no more storage than a sparse Cholesky factorization of a positive definite matrix with the same sparsity structure due to restricting the pivoting to interchanges within the diagonal block associated with a single supernode. The coefficient matrix is perturbed whenever numerically acceptable 1x1 and 2x2 pivots cannot be found within the diagonal block. One or two steps of iterative refinement may be required to correct the effect of the perturbations. We will demonstrate that this restricted notion of pivoting with iterative refinement is effective for highly indefinite symmetric systems. Furthermore, for a large set of matrices from different application areas, this method is as accurate as a direct factorization method that uses complete sparse pivoting techniques. In addition, we will discuss two preprocessing algorithms to identify large entries in the coefficient matrix that, if permuted close to the diagonal, permit
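A schematic of the factorization described in the abstract; the permutation names below are assumed for illustration (one for the weighted matching, one for fill reduction) and are not quoted from the paper.

```latex
% Schematic: P_M reorders by symmetric weighted matching, P_fill reduces fill
% while honoring the matching; D is block diagonal with 1x1 and 2x2 pivot
% blocks, L is unit lower triangular, and E is the small diagonal perturbation
% (on the order of half machine precision) applied when no acceptable pivot
% exists inside a supernode's diagonal block.
\[
  P_{\mathrm{fill}}\, P_{M}\, A\, P_{M}^{T}\, P_{\mathrm{fill}}^{T} + E
  \;=\; L\, D\, L^{T}
\]
```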

474 citations

Journal ArticleDOI
TL;DR: A band-by-band spectrum computation algorithm and an out-of-core implementation that can compute thousands of eigenvectors for meshes with up to a million vertices are proposed, along with a limited-memory filtering algorithm that does not need to store the eigenvectors.
Abstract: We present a new method to convert the geometry of a mesh into frequency space. The eigenfunctions of the Laplace-Beltrami operator are used to define Fourier-like function basis and transform. Since this generalizes the classical Spherical Harmonics to arbitrary manifolds, the basis functions will be called Manifold Harmonics. It is well known that the eigenvectors of the discrete Laplacian define such a function basis. However, important theoretical and practical problems hinder us from using this idea directly. From the theoretical point of view, the combinatorial graph Laplacian does not take the geometry into account. The discrete Laplacian (cotan weights) does not have this limitation, but its eigenvectors are not orthogonal. From the practical point of view, computing even just a few eigenvectors is currently impossible for meshes with more than a few thousand vertices. In this paper, we address both issues. On the theoretical side, we show how the FEM (Finite Element Modeling) formulation defines a function basis which is both geometry-aware and orthogonal. On the practical side, we propose a band-by-band spectrum computation algorithm and an out-of-core implementation that can compute thousands of eigenvectors for meshes with up to a million vertices. Finally, we demonstrate some applications of our method to interactive convolution geometry filtering and interactive shading design.
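A minimal sketch under simplifying assumptions (not the paper's implementation): a symmetric sparse mesh Laplacian L is taken as given (the paper uses an FEM cotan formulation with a mass matrix; a plain combinatorial Laplacian would do for the sketch), a band of eigenvectors is computed with shift-invert around a target frequency, and the vertex coordinates are projected onto them to obtain a low-pass filtered geometry. Function names are hypothetical.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def harmonic_band(L, n_modes, sigma=-0.01):
    # Shift-invert around sigma lets successive bands of the spectrum be
    # computed one at a time instead of all eigenvectors at once. A small
    # negative default shift keeps L - sigma*I nonsingular even though the
    # Laplacian itself has a zero eigenvalue.
    vals, vecs = eigsh(L.tocsc(), k=n_modes, sigma=sigma, which='LM')
    return vals, vecs                      # vecs: (n_vertices, n_modes)

def lowpass_filter(V, vecs):
    # Transform vertex positions V (n, 3) into the harmonic basis and back,
    # keeping only the supplied modes: a geometric low-pass filter.
    coeffs = vecs.T @ V
    return vecs @ coeffs
```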

394 citations

Proceedings ArticleDOI
10 Dec 2008
TL;DR: This course outlines the mathematical foundations of mesh parameterization, describes recent methods for parameterizing meshes over various domains, discusses emerging tools like global parameterization and inter-surface mapping, and demonstrates a variety of parameterization applications.
Abstract: Mesh parameterization is a powerful geometry processing tool with numerous computer graphics applications, from texture mapping to animation transfer. This course outlines its mathematical foundations, describes recent methods for parameterizing meshes over various domains, discusses emerging tools like global parameterization and inter-surface mapping, and demonstrates a variety of parameterization applications.

356 citations

Proceedings ArticleDOI
29 Nov 2006
TL;DR: This work introduces a framework for triangle shape optimization and feature preserving smoothing of triangular meshes that is guided by the vertex Laplacian and the discrete mean curvature normal, and it is capable of smoothing the surface while preserving geometric features.
Abstract: We introduce a framework for triangle shape optimization and feature preserving smoothing of triangular meshes that is guided by the vertex Laplacians, specifically, the uniformly weighted Laplacian and the discrete mean curvature normal. Vertices are relocated so that they approximate prescribed Laplacians and positions in a weighted least-squares sense; the resulting linear system leads to an efficient, non-iterative solution. We provide different weighting schemes and demonstrate the effectiveness of the framework on a number of detailed and highly irregular meshes; our technique successfully improves the quality of the triangulation while remaining faithful to the original surface geometry, and it is also capable of smoothing the surface while preserving geometric features.
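A compact sketch of the weighted least-squares relocation described above, under assumptions: a generic sparse Laplacian L and target Laplacian coordinates delta are taken as given (zero targets for smoothing, mean-curvature-weighted targets for feature preservation), positional targets are the current vertex positions, and the stacked system is solved through its normal equations with one sparse factorization, matching the non-iterative solution the abstract refers to. Names and weights are illustrative.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def relocate(L, V, delta, w_lap=1.0, w_pos=0.1):
    """L: (n, n) sparse Laplacian; V, delta: (n, 3) positions / target Laplacians."""
    n = L.shape[0]
    # Stack the Laplacian rows and the (soft) positional rows, each weighted.
    A = sp.vstack([w_lap * L, w_pos * sp.identity(n, format='csr')]).tocsr()
    b = np.vstack([w_lap * delta, w_pos * V])
    # Normal equations of the stacked system; a single sparse factorization
    # yields the relocated vertex positions for all three coordinates at once.
    lu = splu((A.T @ A).tocsc())
    return lu.solve(np.asarray(A.T @ b))
```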

329 citations