Open Access · Journal ArticleDOI

Low-Rank Modeling and Its Applications in Image Analysis

TLDR
This article reviews recent advances in low-rank modeling, the state-of-the-art algorithms, and the related applications in image analysis, and summarizes the models and algorithms for low-rank matrix recovery, illustrating their advantages and limitations with numerical experiments.
Abstract
Low-rank modeling generally refers to a class of methods that solve problems by representing variables of interest as low-rank matrices. It has achieved great success in various fields including computer vision, data mining, signal processing, and bioinformatics. Recently, much progress has been made in the theory, algorithms, and applications of low-rank modeling, such as exact low-rank matrix recovery via convex programming and matrix completion applied to collaborative filtering. These advances have drawn increasing attention to this topic. In this article, we review recent advances in low-rank modeling, the state-of-the-art algorithms, and the related applications in image analysis. We first give an overview of the concept of low-rank modeling and the challenging problems in this area. Then, we summarize the models and algorithms for low-rank matrix recovery and illustrate their advantages and limitations with numerical experiments. Next, we introduce a few applications of low-rank modeling in the context of image analysis. Finally, we conclude the article with a discussion.
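To make the matrix-completion setting mentioned in the abstract concrete, here is a minimal NumPy sketch of nuclear-norm completion via iterative singular-value soft-thresholding (a SoftImpute-style scheme). The function names, the threshold tau, and the toy rank-3 "ratings" example are illustrative assumptions, not taken from the article.

import numpy as np

def svt(Z, tau):
    # Singular-value soft-thresholding: proximal operator of tau * nuclear norm.
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def complete(M, mask, tau=1.0, n_iters=200):
    # Recover a low-rank matrix from the entries where mask is True.
    # tau and n_iters are illustrative choices.
    X = np.where(mask, M, 0.0)          # unobserved entries start at zero
    L = X
    for _ in range(n_iters):
        L = svt(X, tau)                 # low-rank estimate
        X = np.where(mask, M, L)        # keep the observed entries fixed
    return L

# Toy collaborative-filtering example: a rank-3 matrix, roughly half observed.
rng = np.random.default_rng(0)
M = rng.standard_normal((40, 3)) @ rng.standard_normal((3, 30))
mask = rng.random(M.shape) < 0.5
print(np.linalg.norm(complete(M, mask) - M) / np.linalg.norm(M))

The fixed threshold tau trades reconstruction accuracy against the rank of the estimate; in practice it is tuned or gradually decreased.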


Citations
Journal ArticleDOI

On the Applications of Robust PCA in Image and Video Processing

TL;DR: This paper presents applications of RPCA in video processing, which utilize additional spatial and temporal information compared to image processing, and provides perspectives on possible future research directions and algorithmic frameworks suitable for these applications.
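As a concrete illustration of the background/foreground use case, the following is a minimal sketch of robust PCA (principal component pursuit) solved by ADMM on a matrix whose columns are vectorized frames. The lambda and mu heuristics and the synthetic frames are illustrative assumptions, not settings from the cited paper.

import numpy as np

def shrink(X, tau):                      # soft-thresholding (prox of the l1 norm)
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):                         # singular value thresholding (prox of nuclear norm)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def rpca(D, n_iters=100):
    # Decompose D into a low-rank part L and a sparse part S via ADMM.
    m, n = D.shape
    lam = 1.0 / np.sqrt(max(m, n))                   # standard PCP weight
    mu = 0.25 * m * n / (np.abs(D).sum() + 1e-12)    # common step-size heuristic
    L = np.zeros_like(D); S = np.zeros_like(D); Y = np.zeros_like(D)
    for _ in range(n_iters):
        L = svt(D - S + Y / mu, 1.0 / mu)
        S = shrink(D - L + Y / mu, lam / mu)
        Y = Y + mu * (D - L - S)
    return L, S

# Toy "video": each column is a vectorized frame, a static background
# plus a few moving foreground pixels per frame.
rng = np.random.default_rng(1)
frames = rng.random((400, 1)) @ np.ones((1, 60))
for t in range(60):
    frames[rng.integers(0, 400, size=8), t] += 2.0   # sparse outliers
L, S = rpca(frames)

Here L approximates the static background and S collects the sparse foreground activity, which is the decomposition exploited by the video-processing applications surveyed above.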
Posted Content

Low-Rank Modeling and Its Applications in Image Analysis

TL;DR: Low-rank modeling, as discussed by the authors, is a class of methods that solve problems by representing variables of interest as low-rank matrices; it has achieved great success in various fields including computer vision, data mining, signal processing, and bioinformatics.
Journal ArticleDOI

Low-Rank Quaternion Approximation for Color Image Processing

TL;DR: Extensive evaluations on color image denoising and inpainting tasks verify that LRQA (low-rank quaternion approximation) outperforms several state-of-the-art sparse-representation and low-rank matrix approximation (LRMA) based methods in terms of both quantitative metrics and visual quality.
Journal ArticleDOI

Low CP Rank and Tucker Rank Tensor Completion for Estimating Missing Components in Image Data

TL;DR: This paper uses the alternating direction method of multipliers (ADMM) to split the optimization model with two tensor ranks into two sub-problems, each involving only one tensor-rank term.
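For reference, below is a simplified sketch of the Tucker-rank side of such models: ADMM applied to the sum-of-nuclear-norms surrogate over mode unfoldings (a HaLRTC-style scheme), with observed entries re-imposed at every step. It illustrates the general approach only, not the cited paper's exact two-sub-problem formulation, and all parameter choices and the toy tensor are assumptions.

import numpy as np

def svt(Z, tau):
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def unfold(T, k):
    # Mode-k matricization.
    return np.moveaxis(T, k, 0).reshape(T.shape[k], -1)

def fold(M, k, shape):
    # Inverse of unfold.
    rest = [s for i, s in enumerate(shape) if i != k]
    return np.moveaxis(M.reshape([shape[k]] + rest), 0, k)

def tensor_complete(T, mask, rho=1e-1, n_iters=500):
    # Minimize sum_k alpha_k * ||X_(k)||_* subject to the observed entries of T.
    N = T.ndim
    alpha = [1.0 / N] * N
    X = np.where(mask, T, 0.0)
    Y = [np.zeros_like(T) for _ in range(N)]
    for _ in range(n_iters):
        M = [fold(svt(unfold(X + Y[k] / rho, k), alpha[k] / rho), k, T.shape)
             for k in range(N)]
        X = sum(M[k] - Y[k] / rho for k in range(N)) / N
        X = np.where(mask, T, X)        # re-impose the observed entries
        for k in range(N):
            Y[k] = Y[k] + rho * (X - M[k])
    return X

# Toy example: a low-Tucker-rank 20x20x3 "image" tensor, 60% observed.
rng = np.random.default_rng(5)
core = rng.standard_normal((3, 3, 2))
T = np.einsum('abc,ia,jb,kc->ijk', core, rng.standard_normal((20, 3)),
              rng.standard_normal((20, 3)), rng.standard_normal((3, 2)))
mask = rng.random(T.shape) < 0.6
print(np.linalg.norm(tensor_complete(T, mask) - T) / np.linalg.norm(T))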
References
Book

Pattern Recognition and Machine Learning

TL;DR: Probability Distributions, Linear Models for Regression, Linear Models for Classification, Neural Networks, Graphical Models, Mixture Models and EM, Sampling Methods, Continuous Latent Variables, and Sequential Data are studied.
Book

Distributed Optimization and Statistical Learning Via the Alternating Direction Method of Multipliers

TL;DR: It is argued that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas.
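As a pointer to how the method looks in practice, below is a minimal sketch of ADMM applied to the lasso, using the generic variable splitting popularized by this monograph. The choices of rho, the iteration count, and the synthetic data are illustrative assumptions.

import numpy as np

def lasso_admm(A, b, lam, rho=1.0, n_iters=200):
    # minimize (1/2)*||A x - b||^2 + lam*||x||_1 via ADMM (scaled dual form).
    n = A.shape[1]
    AtA, Atb = A.T @ A, A.T @ b
    chol = np.linalg.cholesky(AtA + rho * np.eye(n))   # factor once, reuse
    x = z = u = np.zeros(n)
    for _ in range(n_iters):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(chol.T, np.linalg.solve(chol, rhs))          # x-update
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)  # z-update
        u = u + x - z                                                    # dual update
    return z

rng = np.random.default_rng(2)
A = rng.standard_normal((100, 30))
x_true = np.zeros(30); x_true[:5] = rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(100)
print(lasso_admm(A, b, lam=0.1).round(2))

The same splitting pattern (a smooth sub-problem, a proximal sub-problem, and a dual update) underlies the ADMM solvers used for nuclear-norm and robust PCA problems throughout the surveyed literature.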
Reference EntryDOI

Principal Component Analysis

TL;DR: Principal component analysis (PCA) as discussed by the authors replaces the p original variables by a smaller number, q, of derived variables, the principal components, which are linear combinations of the original variables.
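A minimal sketch of this construction via the SVD of the centered data matrix, with rows as observations and columns as the p original variables; the function name, the toy data, and q are illustrative assumptions.

import numpy as np

def pca(X, q):
    # Replace the p original variables with q derived variables (principal
    # components), each a linear combination of the originals.
    Xc = X - X.mean(axis=0)                       # center each variable
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:q]                           # q loading vectors (q x p)
    scores = Xc @ components.T                    # n x q derived variables
    explained = s[:q] ** 2 / (s ** 2).sum()       # variance-explained ratios
    return scores, components, explained

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 2)) @ rng.standard_normal((2, 10))  # ~rank-2 data
scores, comps, ratio = pca(X, q=2)
print(ratio)   # nearly all variance captured by the two components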
Book

Nonlinear Programming

Journal ArticleDOI

Learning the parts of objects by non-negative matrix factorization

TL;DR: An algorithm for non-negative matrix factorization is demonstrated that is able to learn parts of faces and semantic features of text, in contrast to other methods that learn holistic rather than parts-based representations.
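A minimal sketch of the multiplicative updates of Lee and Seung for the squared-error NMF objective ||V - W H||_F^2 with W, H >= 0; the rank r, iteration count, and toy data below are illustrative assumptions.

import numpy as np

def nmf(V, r, n_iters=500, eps=1e-9):
    # Alternating multiplicative updates; eps guards against division by zero.
    rng = np.random.default_rng(0)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)       # update coefficients
        W *= (V @ H.T) / (W @ H @ H.T + eps)       # update basis ("parts")
    return W, H

# Toy example: a nonnegative matrix built from 4 nonnegative "parts".
rng = np.random.default_rng(4)
V = rng.random((50, 4)) @ rng.random((4, 40))
W, H = nmf(V, r=4)
print("relative error:", np.linalg.norm(V - W @ H) / np.linalg.norm(V))

Because both factors are constrained to be nonnegative, the columns of W tend to act as additive parts, which is the behavior contrasted with holistic representations in the TL;DR above.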