Ali Charara

Researcher at King Abdullah University of Science and Technology

Publications: 20
Citations: 240

Ali Charara is an academic researcher from King Abdullah University of Science and Technology. The author has contributed to research on topics including Linear algebra and Matrix (mathematics), has an h-index of 9, and has co-authored 20 publications receiving 174 citations. Previous affiliations of Ali Charara include the University of Tennessee.

Papers
Proceedings Article

SLATE: design of a modern distributed and accelerated linear algebra library

TL;DR: The SLATE (Software for Linear Algebra Targeting Exascale) library is being developed to provide fundamental dense linear algebra capabilities for current and upcoming distributed high-performance systems, both CPU-based and CPU-GPU accelerated.
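
To make the design concrete: below is a minimal, shared-memory sketch of the tiled right-looking Cholesky factorization that tile-based libraries in this space build on, with naive loops standing in for optimized BLAS kernels. It illustrates the task decomposition (per-tile POTRF/TRSM/GEMM calls that a runtime can schedule across nodes and accelerators); it is not SLATE's actual API, and all names are hypothetical.

```cpp
// tiled_cholesky.cpp -- illustrative sketch only, not SLATE's API.
// Factor an SPD matrix, stored as an N x N grid of nb x nb tiles,
// into L * L^T using the classic right-looking tiled algorithm.
#include <cmath>
#include <cstdio>
#include <vector>

using Tile = std::vector<double>;  // nb x nb, row-major

// POTRF task: unblocked Cholesky of a diagonal tile (lower part).
static void potrf(Tile& a, int nb) {
    for (int j = 0; j < nb; ++j) {
        double d = a[j * nb + j];
        for (int k = 0; k < j; ++k) d -= a[j * nb + k] * a[j * nb + k];
        a[j * nb + j] = std::sqrt(d);
        for (int i = j + 1; i < nb; ++i) {
            double s = a[i * nb + j];
            for (int k = 0; k < j; ++k) s -= a[i * nb + k] * a[j * nb + k];
            a[i * nb + j] = s / a[j * nb + j];
        }
    }
}

// TRSM task: B := B * L^{-T} for a tile below the diagonal tile L.
static void trsm(const Tile& l, Tile& b, int nb) {
    for (int r = 0; r < nb; ++r)
        for (int c = 0; c < nb; ++c) {
            double s = b[r * nb + c];
            for (int k = 0; k < c; ++k) s -= b[r * nb + k] * l[c * nb + k];
            b[r * nb + c] = s / l[c * nb + c];
        }
}

// GEMM/SYRK task: C := C - A * B^T (trailing-submatrix update).
static void update(Tile& c, const Tile& a, const Tile& b, int nb) {
    for (int i = 0; i < nb; ++i)
        for (int j = 0; j < nb; ++j) {
            double s = 0.0;
            for (int k = 0; k < nb; ++k) s += a[i * nb + k] * b[j * nb + k];
            c[i * nb + j] -= s;
        }
}

int main() {
    const int N = 4, nb = 8, n = N * nb;  // 4 x 4 grid of 8 x 8 tiles
    // Build a diagonally dominant SPD matrix, stored by tiles.
    std::vector<Tile> t(N * N, Tile(nb * nb, 0.01));
    for (int d = 0; d < n; ++d)
        t[(d / nb) * N + (d / nb)][(d % nb) * nb + (d % nb)] = double(n);

    // Right-looking tiled Cholesky: each call is one schedulable task.
    for (int k = 0; k < N; ++k) {
        potrf(t[k * N + k], nb);
        for (int i = k + 1; i < N; ++i) trsm(t[k * N + k], t[i * N + k], nb);
        for (int j = k + 1; j < N; ++j)
            for (int i = j; i < N; ++i)
                update(t[i * N + j], t[i * N + k], t[j * N + k], nb);
    }
    std::printf("L[0,0] tile corner = %.6f (expect sqrt(%d))\n", t[0][0], n);
    return 0;
}
```
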
Book Chapter

Exploiting Data Sparsity for Large-Scale Matrix Computations

TL;DR: The Hierarchical matrix Computations on Manycore Architectures (HiCMA) library is extended to provide a high-performance, distributed-memory implementation of one of the most widely used matrix factorizations in large-scale scientific applications: the Cholesky factorization.
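
"Data sparsity" here means that off-diagonal tiles are numerically low-rank and can be stored in compressed form. A minimal sketch of the representation, assuming a fixed rank k and hypothetical names (HiCMA itself compresses tiles adaptively to a requested accuracy): a tile is kept as factors U and V with tile ~ U * V^T, which shrinks both storage and the cost of applying the tile from O(nb^2) to O(nb*k).

```cpp
// lowrank_tile.cpp -- illustrative sketch of a data-sparse (low-rank) tile.
// Names and the fixed rank are hypothetical; HiCMA compresses tiles
// adaptively to a requested accuracy rather than to a fixed k.
#include <cstdio>
#include <vector>

struct LowRankTile {
    int nb, k;                 // tile size and numerical rank, k << nb
    std::vector<double> U, V;  // nb x k factors, row-major: tile ~= U * V^T
};

// y += (U V^T) x in O(nb*k) instead of O(nb^2): t = V^T x, then y += U t.
void apply(const LowRankTile& a, const std::vector<double>& x,
           std::vector<double>& y) {
    std::vector<double> t(a.k, 0.0);
    for (int j = 0; j < a.k; ++j)
        for (int i = 0; i < a.nb; ++i) t[j] += a.V[i * a.k + j] * x[i];
    for (int i = 0; i < a.nb; ++i)
        for (int j = 0; j < a.k; ++j) y[i] += a.U[i * a.k + j] * t[j];
}

int main() {
    const int nb = 1024, k = 16;
    LowRankTile a{nb, k, std::vector<double>(nb * k, 0.001),
                         std::vector<double>(nb * k, 0.001)};
    std::vector<double> x(nb, 1.0), y(nb, 0.0);
    apply(a, x, y);
    // Dense storage: nb*nb doubles; compressed: 2*nb*k doubles.
    std::printf("storage: dense %d vs low-rank %d doubles, y[0] = %g\n",
                nb * nb, 2 * nb * k, y[0]);
    return 0;
}
```
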
Proceedings Article

A novel fast and accurate pseudo-analytical simulation approach for MOAO

TL;DR: A novel hybrid, pseudo-analytical simulation scheme makes it possible to simulate the tomographic problem, together with noise and aliasing, in detail and with high fidelity. It opens the way to a future on-sky implementation of the tomographic control, along with joint PSF and performance estimation.
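
For context, the tomographic reconstructor at the heart of these MOAO papers is, in its simplest form, a dense matrix R applied to the wavefront-sensor slope measurements. A generic, heavily simplified sketch of that step follows; all names are hypothetical, and the actual pipeline also models noise and aliasing.

```cpp
// reconstructor_apply.cpp -- generic sketch: applying a tomographic
// reconstructor R (n_est x n_meas) to a vector of WFS slopes.
// Names and sizes are hypothetical toy values.
#include <cstdio>
#include <vector>

int main() {
    const int n_meas = 6, n_est = 4;
    std::vector<double> R(n_est * n_meas, 0.1);  // reconstructor matrix
    std::vector<double> slopes(n_meas, 1.0);     // WFS measurements
    std::vector<double> est(n_est, 0.0);         // wavefront estimate

    // est = R * slopes: one dense GEMV, the hot loop of the TR pipeline.
    for (int i = 0; i < n_est; ++i)
        for (int j = 0; j < n_meas; ++j)
            est[i] += R[i * n_meas + j] * slopes[j];

    for (double e : est) std::printf("%.2f ", e);  // each = 0.60
    std::printf("\n");
    return 0;
}
```
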
Journal Article

Batched Triangular Dense Linear Algebra Kernels for Very Small Matrix Sizes on GPUs

TL;DR: This work describes the design and performance of a new class of batched triangular dense linear algebra kernels for very small matrix sizes (up to 256) on single and multiple GPUs; the kernels outperform existing state-of-the-art implementations.
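
The batching idea is to amortize per-kernel launch overhead by grouping thousands of independent tiny problems into a single call. Below is a minimal CPU-side sketch with hypothetical names, using forward substitution for small triangular solves; on a GPU, each problem would typically map to one thread block, or be routed through a vendor batched routine such as cuBLAS's cublasDtrsmBatched.

```cpp
// batched_trsv.cpp -- illustrative sketch of batched small triangular solves.
// Names are hypothetical. Each problem solves L x = b for one small
// lower-triangular L (n <= 256); the batch loop is embarrassingly parallel.
#include <cstdio>
#include <vector>

struct Problem {
    int n;
    std::vector<double> L;  // n x n lower triangular, row-major
    std::vector<double> b;  // right-hand side, overwritten with x
};

// Solve the whole batch in one call; each iteration is independent,
// so it maps naturally onto cores or GPU thread blocks.
void batched_trsv(std::vector<Problem>& batch) {
    for (auto& p : batch)
        for (int i = 0; i < p.n; ++i) {  // forward substitution
            double s = p.b[i];
            for (int j = 0; j < i; ++j) s -= p.L[i * p.n + j] * p.b[j];
            p.b[i] = s / p.L[i * p.n + i];
        }
}

int main() {
    // Batch of 1000 identical toy problems: L = 2*I (n = 4), b = ones.
    std::vector<Problem> batch(1000);
    for (auto& p : batch) {
        p.n = 4;
        p.L.assign(16, 0.0);
        for (int i = 0; i < 4; ++i) p.L[i * 4 + i] = 2.0;
        p.b.assign(4, 1.0);
    }
    batched_trsv(batch);
    std::printf("x[0] of first problem = %.2f (expect 0.50)\n", batch[0].b[0]);
    return 0;
}
```
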
Proceedings Article

Pipelining computational stages of the tomographic reconstructor for multi-object adaptive optics on a multi-GPU system

TL;DR: The proposed tomographic reconstructor (TR) simulation asymptotically outperforms previous state-of-the-art implementations, with up to a 13-fold speedup. It appears to be the largest-scale AO problem submitted to computation to date, and it opens new research directions for extreme-scale AO simulations.
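
Pipelining here follows the generic pattern of overlapping successive computational stages across chunks of work: stage 1 of chunk i+1 runs while stage 2 of chunk i is still executing, so the devices never sit idle between stages. A minimal two-stage sketch using std::async follows; the stage bodies are hypothetical stand-ins for the paper's multi-GPU stages and CUDA streams.

```cpp
// pipeline_sketch.cpp -- generic two-stage software pipeline.
// Stage names are hypothetical stand-ins for the paper's computational
// stages; real code would overlap CUDA streams across multiple GPUs.
#include <cstdio>
#include <future>

// Stage 1: e.g., compute one chunk of the reconstructor product.
double stage1(int chunk) { return chunk * 10.0; }
// Stage 2: e.g., accumulate / post-process that chunk's result.
void stage2(int chunk, double v) { std::printf("chunk %d -> %.1f\n", chunk, v); }

int main() {
    const int nchunks = 4;
    // Prime the pipeline with chunk 0's first stage.
    auto next = std::async(std::launch::async, stage1, 0);
    for (int i = 0; i < nchunks; ++i) {
        double v = next.get();               // wait for stage 1 of chunk i
        if (i + 1 < nchunks)                 // start stage 1 of chunk i+1 ...
            next = std::async(std::launch::async, stage1, i + 1);
        stage2(i, v);                        // ... while stage 2 of chunk i runs
    }
    return 0;
}
```
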