
Michael K. Ng

Researcher at University of Hong Kong

Publications - 658
Citations - 24376

Michael K. Ng is an academic researcher from the University of Hong Kong. The author has contributed to research in topics including Cluster analysis and Computer science. The author has an h-index of 72 and has co-authored 608 publications receiving 20492 citations. Previous affiliations of Michael K. Ng include The Chinese University of Hong Kong and Vanderbilt University.

Papers
Journal Article

A hybrid preconditioner of banded matrix approximation and alternating direction implicit iteration for symmetric Sinc-Galerkin linear systems

TL;DR: A two-step preconditioning strategy based on the banded matrix approximation (BMA) and the alternating direction implicit (ADI) iteration is presented for these Sinc–Galerkin systems. It is shown that the two-step preconditioner is symmetric positive definite and that the condition number of the preconditioned matrix is bounded by the convergence factor of the involved ADI iteration.
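
A minimal sketch of the banded-approximation half of this idea, assuming NumPy/SciPy: a dense symmetric positive definite test matrix (a stand-in for a Sinc–Galerkin matrix, not taken from the paper) is approximated by its central band, and the banded Cholesky factorization of that approximation serves as a preconditioner for a conjugate gradient solve. The ADI second step of the two-step preconditioner is not shown, and the matrix and bandwidth are illustrative.

```python
# Sketch: banded matrix approximation used as a preconditioner for CG.
import numpy as np
from scipy.linalg import cholesky_banded, cho_solve_banded
from scipy.sparse.linalg import cg, LinearOperator

n, bw = 400, 5                          # problem size and half-bandwidth (illustrative)

# Dense SPD test matrix with exponentially decaying off-diagonals,
# a synthetic stand-in for a symmetric Sinc-Galerkin matrix.
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
A = np.exp(-0.7 * np.abs(i - j)) + np.eye(n)
b = np.ones(n)

# Banded matrix approximation: keep only diagonals within the bandwidth,
# stored in the upper-banded form expected by cholesky_banded.
ab = np.zeros((bw + 1, n))
for d in range(bw + 1):
    ab[bw - d, d:] = np.diagonal(A, d)
cb = cholesky_banded(ab)                # Cholesky factor of the banded approximation

# Preconditioner: M^{-1} v solves the banded approximation.
M = LinearOperator((n, n), matvec=lambda v: cho_solve_banded((cb, False), v))

x, info = cg(A, b, M=M)
print("CG converged:", info == 0, "residual:", np.linalg.norm(A @ x - b))
```
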
Journal Article

Fast algorithm with theoretical guarantees for constrained low-tubal-rank tensor recovery in hyperspectral images denoising

TL;DR: Extensive experiments on HSI denoising demonstrate that both the exact and inexact methods outperform competing methods in quantitative evaluation metrics and visual effects, and that the inexact PAM offers a compromise between accuracy and efficiency for large-scale HSIs.
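
As a hedged illustration of the low-tubal-rank model underlying this kind of recovery (not the paper's exact or inexact PAM algorithm), the sketch below truncates a third-order tensor to a prescribed tubal rank via the t-SVD: FFT along the third mode, truncated SVDs of the frontal slices, and an inverse FFT. The tensor sizes and noise level are made up for the demo.

```python
# Sketch: low-tubal-rank truncation of a third-order tensor via the t-SVD.
import numpy as np

def tubal_rank_truncate(X, r):
    """Truncate a real third-order tensor X to tubal rank r via the t-SVD."""
    Xf = np.fft.fft(X, axis=2)                 # transform tubes to the Fourier domain
    Yf = np.empty_like(Xf)
    for k in range(X.shape[2]):                # truncate each frontal slice
        U, s, Vh = np.linalg.svd(Xf[:, :, k], full_matrices=False)
        Yf[:, :, k] = (U[:, :r] * s[:r]) @ Vh[:r, :]
    return np.real(np.fft.ifft(Yf, axis=2))    # back to the original domain

# Tiny demo: a noisy tensor that is tubal-rank 3 up to noise.
rng = np.random.default_rng(0)
core = rng.standard_normal((30, 3, 10))
fac = rng.standard_normal((3, 40, 10))
# t-product of two low-tubal-rank factors, formed slice by slice in the Fourier domain
Af, Bf = np.fft.fft(core, axis=2), np.fft.fft(fac, axis=2)
L = np.real(np.fft.ifft(np.einsum('irk,rjk->ijk', Af, Bf), axis=2))
noisy = L + 0.01 * rng.standard_normal(L.shape)

approx = tubal_rank_truncate(noisy, 3)
print("relative error:", np.linalg.norm(approx - L) / np.linalg.norm(L))
```
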
Journal Article

Band-Toeplitz preconditioned GMRES iterations for time-dependent PDEs

TL;DR: This work examines the convergence characteristics of the GMRES method with circulant-like block preconditioning for solving nonsymmetric linear systems of algebraic equations that are small-rank perturbations of block band-Toeplitz matrices arising from the discretization of time-dependent PDEs.
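
A minimal sketch of circulant preconditioning in a simplified scalar (non-block) setting, assuming NumPy/SciPy: a nonsymmetric tridiagonal Toeplitz matrix with illustrative entries, Strang's circulant preconditioner applied through the FFT, and GMRES from scipy.sparse.linalg. This is not the block band-Toeplitz setting analysed in the paper.

```python
# Sketch: Strang circulant preconditioner for a nonsymmetric Toeplitz system.
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import gmres, LinearOperator

n = 512
# Nonsymmetric tridiagonal Toeplitz matrix (entries are illustrative only).
col = np.zeros(n); col[0], col[1] = 3.0, -1.2   # diagonal and subdiagonal
row = np.zeros(n); row[0], row[1] = 3.0, -0.8   # diagonal and superdiagonal
T = toeplitz(col, row)
b = np.ones(n)

# Strang's circulant preconditioner: copy the central diagonals of T into the
# first column of a circulant matrix, whose eigenvalues are given by the FFT.
c = np.zeros(n)
c[:n // 2] = col[:n // 2]
c[n // 2 + 1:] = row[1:n // 2][::-1]
c_eig = np.fft.fft(c)                           # eigenvalues of the circulant

def apply_Cinv(v):
    # C^{-1} v via diagonalization of the circulant by the FFT
    return np.real(np.fft.ifft(np.fft.fft(v) / c_eig))

M = LinearOperator((n, n), matvec=apply_Cinv)

x, info = gmres(T, b, M=M)
print("GMRES converged:", info == 0, "residual:", np.linalg.norm(T @ x - b))
```
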
Journal Article

Improved residue function and reduced flow dependence in MR perfusion using least‐absolute‐deviation regularization

TL;DR: A model-independent, delay-invariant deconvolution technique using least-absolute-deviation (LAD) regularization is developed to improve CBF estimation accuracy, and an initial clinical implementation of the method on six representative clinical cases confirms the advantages of the LAD method over the rSVD and sSVD methods.
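
To make the least-absolute-deviation idea concrete, the sketch below solves a generic LAD fit, min_x ||Ax - b||_1, via the standard reformulation as a linear program. The matrix, signal, and outliers are synthetic assumptions; this is not the paper's delay-invariant deconvolution pipeline for MR perfusion.

```python
# Sketch: least-absolute-deviation (l1) fit posed as a linear program:
#   min_x ||A x - b||_1  <=>  min_{x,t} sum(t)  s.t.  -t <= A x - b <= t.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n = 80, 20
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true + 0.05 * rng.standard_normal(m)
b[::10] += 3.0                           # a few gross outliers; LAD is robust to these

# Stack variables z = [x, t]; objective = sum(t)
c = np.concatenate([np.zeros(n), np.ones(m)])
#  A x - b <= t   ->  [ A, -I] z <=  b
# -(A x - b) <= t ->  [-A, -I] z <= -b
A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
b_ub = np.concatenate([b, -b])
bounds = [(None, None)] * n + [(0, None)] * m   # x free, t >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
x_lad = res.x[:n]
print("LAD recovery error:", np.linalg.norm(x_lad - x_true))
```
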
Journal Article

Robust low-rank tensor completion via transformed tensor nuclear norm with total variation regularization

TL;DR: In this article, a transformed tensor nuclear norm method combined with total variation (TV) regularization is proposed for low-rank completion of third-order tensors with different degradations, and its global convergence is established under very mild conditions.
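
A minimal sketch of the core operation behind transformed tensor nuclear norm methods: singular value soft-thresholding of the frontal slices after a unitary transform along the third mode. An orthonormal DCT is used here as a stand-in for the paper's transform, and the total variation term and completion constraints are omitted.

```python
# Sketch: proximal operator of a transformed tensor nuclear norm
# (slicewise singular value thresholding in a DCT-transformed third mode).
import numpy as np
from scipy.fft import dct, idct

def ttnn_prox(X, tau):
    """Soft-threshold the singular values of each frontal slice of X in a DCT domain."""
    Xt = dct(X, axis=2, norm="ortho")          # unitary transform along mode 3
    Yt = np.empty_like(Xt)
    for k in range(X.shape[2]):
        U, s, Vh = np.linalg.svd(Xt[:, :, k], full_matrices=False)
        s = np.maximum(s - tau, 0.0)           # soft-threshold singular values
        Yt[:, :, k] = (U * s) @ Vh
    return idct(Yt, axis=2, norm="ortho")      # back to the original domain

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 30, 8))
Y = ttnn_prox(X, tau=1.0)
print("shape:", Y.shape, "norm reduced:", np.linalg.norm(Y) < np.linalg.norm(X))
```
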