
Tensor Completion for Estimating
Missing Values in Visual Data
Item Type Article
Authors Liu, Ji; Musialski, Przemyslaw; Wonka, Peter; Ye, Jieping
Citation Liu J, Musialski P, Wonka P, Ye J (2013) Tensor Completion for
Estimating Missing Values in Visual Data. IEEE Transactions on
Pattern Analysis and Machine Intelligence 35: 208–220. Available:
http://dx.doi.org/10.1109/TPAMI.2012.39.
Eprint version Post-print
DOI 10.1109/TPAMI.2012.39
Publisher Institute of Electrical and Electronics Engineers (IEEE)
Journal IEEE Transactions on Pattern Analysis and Machine Intelligence
Link to Item http://hdl.handle.net/10754/562566

Tensor Completion for Estimating Missing
Values in Visual Data
Ji Liu, Przemyslaw Musialski, Peter Wonka, and Jieping Ye
Abstract—In this paper, we propose an algorithm to estimate missing values in tensors of visual data. The values can be missing due to problems in the acquisition process, or because the user manually identified unwanted outliers. Our algorithm works even with a small amount of samples and it can propagate structure to fill larger missing regions. Our methodology is built on recent studies about matrix completion using the matrix trace norm. The contribution of our paper is to extend the matrix case to the tensor case by proposing the first definition of the trace norm for tensors and then by building a working algorithm. First, we propose a definition for the tensor trace norm that generalizes the established definition of the matrix trace norm. Second, similar to matrix completion, tensor completion is formulated as a convex optimization problem. Unfortunately, the straightforward problem extension is significantly harder to solve than the matrix case because of the dependency among multiple constraints. To tackle this problem, we developed three algorithms: SiLRTC, FaLRTC, and HaLRTC. The SiLRTC algorithm is simple to implement; it employs a relaxation technique to separate the dependent relationships and uses the block coordinate descent (BCD) method to achieve a globally optimal solution. The FaLRTC algorithm utilizes a smoothing scheme to transform the original nonsmooth problem into a smooth one and can be used to solve a general tensor trace norm minimization problem. The HaLRTC algorithm applies the alternating direction method of multipliers (ADMM) to our problem. Our experiments show potential applications of our algorithms, and the quantitative evaluation indicates that our methods are more accurate and robust than heuristic approaches. The efficiency comparison indicates that FaLRTC and HaLRTC are more efficient than SiLRTC; between FaLRTC and HaLRTC, the former is more efficient for obtaining a low-accuracy solution and the latter is preferred if a high-accuracy solution is desired.
Index Terms—Tensor completion, trace norm, sparse learning.
1 INTRODUCTION
In computer vision and graphics, many problems can be formulated as a missing value estimation problem, e.g. image in-painting [4], [22], video decoding, video in-painting [23], scan completion, and appearance acquisition completion. The core problem of missing value estimation lies in how to build up the relationship between the known elements and the unknown ones. Some energy methods broadly used in image in-painting, e.g. PDEs [4] and belief propagation [22], mainly focus on the local relationship. The basic (implicit) assumption is that the missing entries mainly depend on their neighbors: the further apart two points are, the smaller their dependence is. However, sometimes the value of a missing entry depends on entries which are far away. Thus, it is necessary to develop a tool to directly capture the global information in the data.
In the two-dimensional case, i.e. the matrix case, the "rank" is a powerful tool to capture some type of global information. In Fig. 1, we show a texture with 80% of its elements removed randomly on the left and its reconstruction using a low rank constraint on the right. This example illustrates the power of low rank approximation for missing data estimation. However, "rank(·)" is unfortunately not a convex function. Some heuristic algorithms were proposed
to estimate the missing values iteratively [13], [24]. However, they are not guaranteed to find a globally optimal solution due to the non-convexity of the rank constraint.
(Ji Liu, Przemyslaw Musialski, Peter Wonka, and Jieping Ye are with Arizona State University, Tempe, AZ, 85287. E-mail: {Ji.Liu, pmusials, Peter.Wonka, Jieping.Ye}@asu.edu)
Fig. 1: The left figure contains 80% missing entries shown
as white pixels and the right figure shows its reconstruction
using the low rank approximation.
Recently, the trace norm of matrices was used to approximate the rank of matrices [30], [7], [37], which leads to
a convex optimization problem. The trace norm has been
shown to be the tightest convex approximation for the rank
of matrices [37], and efficient algorithms for the matrix
completion problem using the trace norm constraint were
proposed in [30], [7]. Recently, Candès and Recht [9], Recht et al. [37], and Candès and Tao [10] showed that
under certain conditions, the minimum rank solution can
be recovered by solving a convex optimization problem,
namely the minimization of the trace norm over the given
affine space. Their work theoretically justified the validity

of the trace norm to approximate the rank.
Although the low rank approximation problem has been well studied for matrices, there is not much work on tensors, which are a higher-dimensional extension of matrices. One major challenge lies in an appropriate definition of the trace norm for tensors. To the best of our knowledge, this has not been addressed in the literature. In this paper, we make two main contributions: 1) We lay the theoretical foundation of low rank tensor completion and propose the first definition of the trace norm for tensors. 2) We are the first to propose a solution for the low rank completion of tensors.
The challenge of the second part is to build a high
quality algorithm. Similar to matrix completion, the tensor
completion can be formulated as a convex optimization
problem. Unfortunately, the straightforward problem extension is significantly harder to solve than the matrix case
because of the dependency among multiple constraints.
To tackle this problem, we developed three algorithms:
SiLRTC, FaLRTC, and HaLRTC. The SiLRTC algorithm, a simple and intuitive method, employs a relaxation technique to separate the dependent relationships and uses the block coordinate descent (BCD) method to achieve a globally optimal solution. It simplifies the LRTC algorithm proposed in our conference paper [29]. The FaLRTC algorithm utilizes a smoothing scheme to transform the original nonsmooth problem into a smooth problem. We also present a theoretical analysis of the convergence rate for the FaLRTC algorithm. The third method applies the alternating direction method of multipliers (ADMM) algorithm [5] to our problem. In addition, we present several
heuristic models, which involve non-convex optimization
problems. Our experiments show that our method is more
accurate and robust than these heuristic approaches. We also
give some potential applications in image in-painting, video
compression, and BRDF data estimation, using our tensor
completion technique. The efficiency comparison indicates
that FaLRTC and HaLRTC are more efficient than SiLRTC
and between FaLRTC and HaLRTC the former is more
efficient to obtain a low accuracy solution and the latter is
preferred if a high accuracy solution is desired.
1.1 Notation
We use upper case letters for matrices, e.g. $X$, and lower case letters for the entries, e.g. $x_{ij}$. $\Sigma(X)$ is a vector consisting of the singular values of $X$ in descending order, and $\sigma_i(X)$ denotes the $i$-th largest singular value. The Frobenius norm of the matrix $X$ is defined as $\|X\|_F := (\sum_{i,j} |x_{ij}|^2)^{\frac{1}{2}}$. The spectral norm is denoted as $\|X\| := \sigma_1(X)$ and the trace norm as $\|X\|_{tr} := \sum_i \sigma_i(X)$. Let $X = U \Sigma V^\top$ be the singular value decomposition of $X$. The "shrinkage" operator $D_\tau(X)$ is defined as [7]:
$$D_\tau(X) = U \Sigma_\tau V^\top, \qquad (1)$$
where $\Sigma_\tau = \mathrm{diag}(\max(\sigma_i - \tau, 0))$. The "truncate" operation $T_\tau(X)$ is defined as:
$$T_\tau(X) = U \Sigma_{\bar{\tau}} V^\top, \qquad (2)$$
where $\Sigma_{\bar{\tau}} = \mathrm{diag}(\min(\sigma_i, \tau))$. It is easy to verify that $X = T_\tau(X) + D_\tau(X)$. Let $\Omega$ be an index set; then $X_\Omega$ denotes the matrix that copies the entries of $X$ in the set $\Omega$ and sets the remaining entries to 0. A similar definition can be extended to the tensor case. The inner product of the matrix space is defined by $\langle X, Y \rangle = \sum_{i,j} X_{ij} Y_{ij}$.
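The shrinkage and truncate operators are straightforward to build on top of an SVD routine. Below is a minimal numpy sketch (our own illustration, not the authors' code; the function names are ours), which also checks the identity $X = T_\tau(X) + D_\tau(X)$ stated above:

```python
import numpy as np

def shrink(X, tau):
    """Shrinkage operator D_tau(X): soft-threshold the singular values (Eq. 1)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def truncate(X, tau):
    """Truncate operator T_tau(X): cap the singular values at tau (Eq. 2)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.minimum(s, tau)) @ Vt

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 4))
tau = 0.5
# X = T_tau(X) + D_tau(X): the capped and thresholded spectra sum to the original.
assert np.allclose(X, truncate(X, tau) + shrink(X, tau))
```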
We follow [11] to define the terminology of tensors used in the paper. An $n$-mode tensor (or $n$-order tensor) is defined as $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_n}$. Its elements are denoted as $x_{i_1,\cdots,i_n}$, where $1 \le i_k \le I_k$, $1 \le k \le n$. For example, a vector is a 1-mode tensor and a matrix is a 2-mode tensor. It is sometimes convenient to unfold a tensor into a matrix. The "unfold" operation along the $k$-th mode on a tensor $\mathcal{X}$ is defined as $\mathrm{unfold}_k(\mathcal{X}) := X_{(k)} \in \mathbb{R}^{I_k \times (I_1 \cdots I_{k-1} I_{k+1} \cdots I_n)}$. The opposite operation "fold" is defined as $\mathrm{fold}_k(X_{(k)}) := \mathcal{X}$. Denote $\|\mathcal{X}\|_F := (\sum_{i_1,i_2,\cdots,i_n} |x_{i_1,i_2,\cdots,i_n}|^2)^{\frac{1}{2}}$ as the Frobenius norm of a tensor. It is clear that $\|\mathcal{X}\|_F = \|X_{(k)}\|_F$ for any $1 \le k \le n$. Please refer to [11] for a more extensive overview of tensors. In addition, we use a nonnegative superscript number to denote the iteration index, e.g., $\mathcal{X}^k$ denotes the value of $\mathcal{X}$ at the $k$-th iteration; the superscript "$-2$" in $K^{-2}$ denotes the power.
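The unfold/fold pair can be sketched in a few lines of numpy (our own illustration; the exact column ordering of the unfolding is a convention and does not affect the Frobenius-norm identity above):

```python
import numpy as np

def unfold(X, k):
    """Mode-k unfolding X_(k): rows indexed by mode k, columns by the remaining
    modes (one possible column-ordering convention)."""
    return np.moveaxis(X, k, 0).reshape(X.shape[k], -1)

def fold(M, k, shape):
    """Inverse of unfold: fold_k(X_(k)) = X for a tensor of the given shape."""
    full = [shape[k]] + [s for i, s in enumerate(shape) if i != k]
    return np.moveaxis(M.reshape(full), 0, k)

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 4, 5))          # a 3-mode tensor
for k in range(X.ndim):
    # fold undoes unfold exactly
    assert np.allclose(fold(unfold(X, k), k, X.shape), X)
    # ||X||_F == ||X_(k)||_F for every mode k
    assert np.isclose(np.linalg.norm(X), np.linalg.norm(unfold(X, k)))
```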
1.2 Organization
We review related work in Section 2, introduce a convex
model and three heuristic models for the low rank tensor
completion problem in Section 3, present the SiLRTC,
FaLRTC, and HaLRTC algorithms to solve the convex
model in Section 4, Section 5, and Section 6 respectively,
report empirical results in Section 7, and conclude this
paper in Section 8. To increase the readability of this paper for the casual reader, most technical details can be found in the appendix.
2 RELATED WORK
The low rank or approximately low rank problem broadly
occurs in science and engineering, e.g. computer vision
[42], machine learning [1], [2], signal processing [26], and
bioinformatics [44]. Fazel et al. [13], [12] introduced a
low rank minimization problem in control system analysis
and design. They heuristically used the trace norm to
approximate the rank of the matrix. They showed that the
trace norm minimization problem can be reformulated as
a semidefinite programming (SDP) problem via its dual
norm (spectral norm). Srebro et al. [39] employed second-order cone programming (SOCP) to formulate a trace norm related problem in matrix factorization. However, many existing optimization methods such as SDPT3 [41] and SeDuMi [40] cannot solve an SDP or SOCP problem when the size of the matrix is much larger than 100 × 100 [30], [37]. This limitation prevented the usage of the matrix completion technique in computer vision and image processing.
Recently, to solve the rank minimization problem for large
scale matrices, Ma et al. [30] applied the fixed point and
Bregman iterative method and Cai et al. [7] proposed a
singular value thresholding algorithm. In both algorithms,

one key building block is the existence of a closed form solution for the following optimization problem:
$$\min_{X \in \mathbb{R}^{p \times q}} : \frac{1}{2}\|X - M\|_F^2 + \tau \|X\|_{tr}, \qquad (3)$$
where $M \in \mathbb{R}^{p \times q}$, and $\tau$ is a constant. Candès and Recht [9], Recht et al. [37], and Candès and Tao [10] theoretically
justified the validity of the trace norm to approximate the
rank of matrices. Recht [36] recently improved their result
and also largely simplified the proof by using the golfing
scheme from quantum information theory [15]. An alternative singular value based method for matrix completion was
recently proposed and justified by Keshavan et al. [21].
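The closed-form solution of Eq. (3) is exactly the shrinkage operator applied to $M$, i.e. $X^\star = D_\tau(M)$ [7]. As a small numerical sanity check (our own sketch, not from the paper), no perturbation of $D_\tau(M)$ should attain a lower objective value:

```python
import numpy as np

def shrink(X, tau):
    # D_tau(X): soft-threshold the singular values (Eq. 1)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def objective(X, M, tau):
    # (1/2)||X - M||_F^2 + tau * ||X||_tr, the objective of Eq. (3)
    return 0.5 * np.linalg.norm(X - M) ** 2 + tau * np.linalg.norm(X, 'nuc')

rng = np.random.default_rng(0)
M = rng.standard_normal((8, 6))
tau = 1.0
X_star = shrink(M, tau)                  # closed-form minimizer [7]
f_star = objective(X_star, M, tau)
for _ in range(100):                     # random perturbations never do better
    X = X_star + 0.1 * rng.standard_normal(M.shape)
    assert objective(X, M, tau) >= f_star - 1e-9
```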
This journal paper builds on our own previous work
[29] where we extended the matrix trace norm to the
tensor case and proposed to recover the missing entries
in a low rank tensor by solving a tensor trace norm
minimization problem. We used a relaxation trick on the
objective function such that the block coordinate descent
algorithm can be employed to solve this problem [29].
Since this approach is not efficient enough, some recent
papers tried to use the alternating direction method of
multipliers (ADMM) to efficiently solve the tensor trace
norm minimization problem. The ADMM algorithm was developed in the 1970s, but has recently proven successful in solving large scale problems and optimization problems with multiple nonsmooth terms in the objective function [28].
Signoretto et al. [38] and Gandy et al. [14] applied the
ADMM algorithm to solve the tensor completion problem
with Gaussian observation noise, i.e.,
$$\min_{\mathcal{X}} : \frac{\lambda}{2}\|\mathcal{X}_\Omega - \mathcal{T}_\Omega\|_F^2 + \|\mathcal{X}\|_*, \qquad (4)$$
where $\|\mathcal{X}\|_*$ is the tensor trace norm defined in Eq. (8).
The tensor completion problem without observation noise
can be solved by optimizing Eq. (4) iteratively with an
increasing value of λ [38], [14]. Tomioka et al. [43]
proposed several slightly different models for the problem
Eq. (4) by introducing dummy variables and also applied
ADMM to solve them. Out of these three algorithms for
tensor completion based on ADMM, we choose to compare
to the algorithm by Gandy et al., because the problem
statement is identical to ours. Our results will show that our
adaption of ADMM and our proposed FaLRTC algorithm
are more efficient.
Besides tensor completion, the tensor trace norm proposed in [26] can be applied in various other computer vision problems, such as visual saliency detection [47], medical imaging [16], corrupted data correction [26], [27], and data compression [25].
3 THE FORMULATION OF TENSOR COMPLETION
This section presents a convex model and three heuristic
models for tensor completion.
3.1 Convex Formulation for Tensor Completion
Before introducing the low rank tensor completion problem,
let us start from the well-known optimization problem [24]
for low rank matrix completion:
$$\min_{X} : \mathrm{rank}(X) \quad \mathrm{s.t.} : X_\Omega = M_\Omega, \qquad (5)$$
where $X, M \in \mathbb{R}^{p \times q}$, and the elements of $M$ in the set $\Omega$ are given while the remaining elements are missing. The missing elements of $X$ are determined such that the rank of the matrix $X$ is as small as possible. The optimization
problem in Eq. (5) is a nonconvex optimization problem since the function $\mathrm{rank}(X)$ is nonconvex. One common approach is to use the trace norm $\|\cdot\|_{tr}$ to approximate the rank of matrices. The advantage of the trace norm is that it is the tightest convex envelope for the rank of matrices. This leads to the following convex optimization problem for matrix completion [3], [7], [30]:
$$\min_{X} : \|X\|_{tr} \quad \mathrm{s.t.} : X_\Omega = M_\Omega. \qquad (6)$$
The tensor is the generalization of the matrix concept. We
generalize the completion algorithm for the matrix (i.e., 2-
mode or 2-order tensor) case to higher-order tensors by
solving the following optimization problem:
$$\min_{\mathcal{X}} : \|\mathcal{X}\|_* \quad \mathrm{s.t.} : \mathcal{X}_\Omega = \mathcal{T}_\Omega, \qquad (7)$$
where $\mathcal{X}$, $\mathcal{T}$ are $n$-mode tensors with identical size in each mode. The first issue is the definition of the trace norm for the general tensor case. We propose the following definition for the tensor trace norm:
$$\|\mathcal{X}\|_* := \sum_{i=1}^{n} \alpha_i \|X_{(i)}\|_{tr}, \qquad (8)$$
where the $\alpha_i$'s are constants satisfying $\alpha_i \ge 0$ and $\sum_{i=1}^{n} \alpha_i = 1$. In essence, the trace norm of a tensor is a convex
combination of the trace norms of all matrices unfolded
along each mode. Note that when the mode number n is
equal to 2 (i.e. the matrix case), the definition of the trace
norm of a tensor is consistent with the matrix case, because
the trace norm of a matrix is equal to the trace norm of its
transpose. Under this definition, the optimization in Eq. (7)
can be written as:
$$\min_{\mathcal{X}} : \sum_{i=1}^{n} \alpha_i \|X_{(i)}\|_{tr} \quad \mathrm{s.t.} : \mathcal{X}_\Omega = \mathcal{T}_\Omega. \qquad (9)$$
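The definition in Eq. (8) is easy to prototype (a sketch of ours, not the authors' code); the final assertion checks the matrix-case consistency noted above, since a matrix and its transpose have the same singular values:

```python
import numpy as np

def unfold(X, k):
    # mode-k unfolding X_(k) (one column-ordering convention)
    return np.moveaxis(X, k, 0).reshape(X.shape[k], -1)

def tensor_trace_norm(X, alpha=None):
    """Eq. (8): convex combination of the trace (nuclear) norms of the
    mode-i unfoldings, with alpha_i >= 0 and sum(alpha) == 1."""
    n = X.ndim
    alpha = np.full(n, 1.0 / n) if alpha is None else np.asarray(alpha)
    return sum(alpha[i] * np.linalg.norm(unfold(X, i), 'nuc') for i in range(n))

# For a 2-mode tensor (a matrix) the definition reduces to the matrix trace norm.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 4))
assert np.isclose(tensor_trace_norm(M), np.linalg.norm(M, 'nuc'))
```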
Here one might ask why we do not define the tensor trace norm as the convex envelope of the tensor rank, as in the matrix case. Unlike for matrices, computing the rank of a general tensor (mode number > 2) is an NP-hard problem [18]. Therefore, to the best of our knowledge, there is no explicit expression for the convex envelope of the tensor rank.

3.2 Three Heuristic Algorithms
We introduce several heuristic models, which, unlike the
one in the last section, involve non-convex optimization
problems. A goal of introducing the heuristic algorithms
is to establish some basic methods that can be used for
comparison.
Tucker: One natural approach is to apply the Tucker model [46] for tensor factorization to the tensor completion problem as follows:
$$\min_{\mathcal{X}, \mathcal{C}, U_1, \cdots, U_n} : \frac{1}{2}\|\mathcal{X} - \mathcal{C} \times_1 U_1 \times_2 U_2 \times_3 \cdots \times_n U_n\|_F^2 \quad \mathrm{s.t.} : \mathcal{X}_\Omega = \mathcal{T}_\Omega, \qquad (10)$$
where $\mathcal{C} \times_1 U_1 \times_2 U_2 \times_3 \cdots \times_n U_n$ is the Tucker model based tensor factorization, $U_i \in \mathbb{R}^{I_i \times r_i}$, $\mathcal{C} \in \mathbb{R}^{r_1 \times \cdots \times r_n}$, and $\mathcal{T}, \mathcal{X} \in \mathbb{R}^{I_1 \times \cdots \times I_n}$. One can simply use the block coordinate descent method to solve this problem by iteratively optimizing the two blocks $\mathcal{X}$ and $\{\mathcal{C}, U_1, \cdots, U_n\}$ while fixing the other. $\mathcal{X}$ can be computed by letting $\mathcal{X}_\Omega = \mathcal{T}_\Omega$ and $\mathcal{X}_{\bar{\Omega}} = (\mathcal{C} \times_1 U_1 \times_2 U_2 \times_3 \cdots \times_n U_n)_{\bar{\Omega}}$. $\mathcal{C}, U_1, \cdots, U_n$ can be computed by any existing tensor factorization algorithm based on the Tucker model. The same procedure can also be employed to solve the following two heuristic models.
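As a rough illustration of this BCD procedure (our own sketch, not the authors' implementation; we use a truncated HOSVD as the inner Tucker factorization step, whereas the text leaves the choice of factorization algorithm open):

```python
import numpy as np

def unfold(X, k):
    return np.moveaxis(X, k, 0).reshape(X.shape[k], -1)

def mode_product(X, U, k):
    # mode-k product X x_k U: multiply U into the k-th mode of X
    return np.moveaxis(np.tensordot(U, X, axes=(1, k)), 0, k)

def tucker_approx(X, ranks):
    # truncated HOSVD: a cheap, non-iterative Tucker factorization
    Us = [np.linalg.svd(unfold(X, k))[0][:, :r] for k, r in enumerate(ranks)]
    C = X
    for k, U in enumerate(Us):
        C = mode_product(C, U.T, k)       # project onto the mode-k subspaces
    Xhat = C
    for k, U in enumerate(Us):
        Xhat = mode_product(Xhat, U, k)   # reconstruct C x_1 U_1 ... x_n U_n
    return Xhat

def tucker_complete(T, mask, ranks, iters=50):
    """BCD for Eq. (10): alternate the factorization block and the X block."""
    X = np.where(mask, T, T[mask].mean())  # initialize the missing entries
    for _ in range(iters):
        Xhat = tucker_approx(X, ranks)     # {C, U_1, ..., U_n} block
        X = np.where(mask, T, Xhat)        # X block: keep observed entries
    return X
```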
Parafac: Another natural approach is to use the parallel factor analysis (Parafac) model [17], resulting in the following optimization problem:
$$\min_{\mathcal{X}, U_1, U_2, \cdots, U_n} : \frac{1}{2}\|\mathcal{X} - U_1 \circ U_2 \circ \cdots \circ U_n\|_F^2 \quad \mathrm{s.t.} : \mathcal{X}_\Omega = \mathcal{T}_\Omega, \qquad (11)$$
where $\circ$ denotes the outer product, $U_1 \circ U_2 \circ \cdots \circ U_n$ is the Parafac model based decomposition, $U_i \in \mathbb{R}^{I_i \times r}$, and $\mathcal{T}, \mathcal{X} \in \mathbb{R}^{I_1 \times \cdots \times I_n}$.
SVD: The third alternative is to consider the tensor as multiple matrices and force the unfolding matrix along each mode of the tensor to be low rank as follows:
$$\min_{\mathcal{X}, M_1, M_2, \cdots, M_n} : \frac{1}{2}\sum_{i=1}^{n} \|X_{(i)} - M_i\|_F^2 \quad \mathrm{s.t.} : \mathcal{X}_\Omega = \mathcal{T}_\Omega, \ \mathrm{rank}(M_i) \le r_i, \ i = 1, \cdots, n, \qquad (12)$$
where $M_i \in \mathbb{R}^{I_i \times (\prod_{k \ne i} I_k)}$, and $\mathcal{T}, \mathcal{X} \in \mathbb{R}^{I_1 \times \cdots \times I_n}$.
4 A SIMPLE LOW RANK TENSOR COMPLETION (SiLRTC) ALGORITHM
In this section we present the SiLRTC algorithm to solve
the convex model in Eq. (9), which is simple to understand
and to implement. In Section 4.1, we relax the original
problem into a simple convex structure which can be solved
by block coordinate descent. Section 4.2 presents the details
of the proposed algorithm.
4.1 Simplified Formulation
The problem in Eq. (9) is difficult to solve due to the
interdependent matrix trace norm terms, i.e., while we
optimize the sum of multiple matrix trace norms, the
matrices share the same entries and cannot be optimized
independently. Hence, the existing result in Eq. (3) cannot
be used directly. Our key motivation of simplifying this
original problem is how to split these interdependent terms
such that they can be solved independently. We introduce
additional matrices M
1
, ··· ,M
n
and obtain the following
equivalent formulation:
min
X ,M
i
:
n
i=1
α
i
M
i
s.t. : X
(i)
= M
i
for i =1, ··· ,n
X
= T
(13)
In this formulation, the trace norm terms are still not independent because of the equality constraints $M_i = X_{(i)}$, which tie all the $M_i$'s to the same underlying tensor $\mathcal{X}$. Thus, we relax the equality constraints $M_i = X_{(i)}$ to $\|M_i - X_{(i)}\|_F^2 \le d_i$, as in Eq. (14), so that we can independently solve each subproblem later on:
$$\min_{\mathcal{X}, M_i} : \sum_{i=1}^{n} \alpha_i \|M_i\|_{tr} \quad \mathrm{s.t.} : \|X_{(i)} - M_i\|_F^2 \le d_i \ \text{for} \ i = 1, \cdots, n, \ \mathcal{X}_\Omega = \mathcal{T}_\Omega. \qquad (14)$$
Here $d_i > 0$ is a threshold that could be defined by the user, but we do not use $d_i$ explicitly in our algorithm. This optimization problem can be converted to an equivalent formulation for certain positive values of the $\beta_i$'s:
$$\min_{\mathcal{X}, M_i} : \sum_{i=1}^{n} \alpha_i \|M_i\|_{tr} + \frac{\beta_i}{2}\|X_{(i)} - M_i\|_F^2 \quad \mathrm{s.t.} : \mathcal{X}_\Omega = \mathcal{T}_\Omega. \qquad (15)$$
This is a convex but nondifferentiable optimization problem. Next, we show how to solve the optimization problem in Eq. (15).
4.2 The Main Algorithm
We propose to employ block coordinate descent (BCD) for
the optimization. The basic idea of block coordinate descent
is to optimize a group (block) of variables while fixing the
other groups. We divide the variables into $n+1$ blocks: $\mathcal{X}, M_1, M_2, \cdots, M_n$.
Computing $\mathcal{X}$: The optimal $\mathcal{X}$ with all other variables fixed is given by solving the following subproblem:
$$\min_{\mathcal{X}} : \sum_{i=1}^{n} \frac{\beta_i}{2}\|M_i - X_{(i)}\|_F^2 \quad \mathrm{s.t.} : \mathcal{X}_\Omega = \mathcal{T}_\Omega. \qquad (16)$$
It is easy to check that the solution to Eq. (16) is given by
$$\mathcal{X}_{i_1,\cdots,i_n} = \begin{cases} \left(\dfrac{\sum_i \beta_i\, \mathrm{fold}_i(M_i)}{\sum_i \beta_i}\right)_{i_1,\cdots,i_n}, & (i_1,\cdots,i_n) \notin \Omega; \\[2mm] \mathcal{T}_{i_1,\cdots,i_n}, & (i_1,\cdots,i_n) \in \Omega. \end{cases} \qquad (17)$$
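Putting the blocks together, one possible reading of the SiLRTC loop can be sketched as follows (our own illustration, not the authors' code): each $M_i$ block of Eq. (15) has the closed-form solution $M_i = D_{\alpha_i/\beta_i}(X_{(i)})$ via the shrinkage result of Eq. (3), and the $\mathcal{X}$ block is the $\beta$-weighted average of Eq. (17):

```python
import numpy as np

def unfold(X, k):
    return np.moveaxis(X, k, 0).reshape(X.shape[k], -1)

def fold(M, k, shape):
    full = [shape[k]] + [s for i, s in enumerate(shape) if i != k]
    return np.moveaxis(M.reshape(full), 0, k)

def shrink(X, tau):
    # D_tau(X): soft-threshold the singular values (Eq. 1)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def silrtc(T, mask, alpha, beta, iters=100):
    """BCD for Eq. (15): alternate the M_i blocks (shrinkage on each
    unfolding) with the X block (Eq. 17)."""
    X = np.where(mask, T, 0.0)
    for _ in range(iters):
        # M_i block: minimizer is singular value shrinkage with tau = alpha_i/beta_i
        Ms = [shrink(unfold(X, i), a / b)
              for i, (a, b) in enumerate(zip(alpha, beta))]
        # X block, Eq. (17): beta-weighted average of the folded M_i's on the
        # missing entries; observed entries are copied from T
        avg = sum(b * fold(M, i, X.shape)
                  for i, (M, b) in enumerate(zip(Ms, beta)))
        X = np.where(mask, T, avg / sum(beta))
    return X
```

With a fixed $\beta$ this solves the relaxed problem of Eq. (15), so the recovered entries approximate (rather than exactly reproduce) the low rank tensor; increasing $\beta_i$ tightens the relaxation.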
