Interval-valued Matrix Factorization with Applications

13 Dec 2010, pp. 1037-1042
TL;DR: The Interval-valued Matrix Factorization (IMF) framework is proposed, and it is shown that the proposed I-NMF and I-PMF significantly outperform their single-valued counterparts in FA and CF applications.


Interval-valued Matrix Factorization with Applications
Zhiyong Shen (1,3), Liang Du (2,1), Xukun Shen (3), Yidong Shen (2)
(1) Hewlett Packard Labs China, zhiyongs@hp.com
(2) State Key Laboratory of Computer Science, China, {duliang,ydshen}@ios.ac.cn
(3) State Key Laboratory of Virtual Reality Technology and Systems, China, xkshen@vrlab.buaa.edu.cn
Abstract—In this paper, we propose the Interval-valued Matrix Factorization (IMF) framework. Matrix Factorization (MF) is a fundamental building block of data mining. MF techniques, such as Nonnegative Matrix Factorization (NMF) and Probabilistic Matrix Factorization (PMF), are widely used in data mining applications. For example, NMF has shown its advantage in Face Analysis (FA), while PMF has been successfully applied to Collaborative Filtering (CF). In this paper, we analyze the data approximation in FA as well as CF applications and construct interval-valued matrices to capture these approximation phenomena. We adapt basic NMF and PMF models to the interval-valued matrices and propose Interval-valued NMF (I-NMF) as well as Interval-valued PMF (I-PMF). We conduct extensive experiments to show that the proposed I-NMF and I-PMF significantly outperform their single-valued counterparts in FA and CF applications.
Keywords—matrix factorization, uncertainty
I. INTRODUCTION
Exploring data approximation has attracted much attention in uncertain data mining [1] and privacy preserving data mining [2]. Data approximation might be caused by limitations of measurement, delayed data updates or intentional data perturbation. When traditional data mining techniques are employed, taking data approximation into account may improve the quality of the results. Thus, various data mining techniques, such as clustering, classification and association mining, have been adapted to handle data approximation. In this paper, we aim to inject data approximation into Matrix Factorization (MF) techniques. MF, also known as matrix decomposition, underlies many data mining techniques including clustering, dimensionality reduction and missing data prediction. It decomposes an input data matrix into a number of low-rank factor matrices, which leads to a more compact linear approximation of the original data matrix. Variations of MF have been extensively studied in the literature.
In this paper, we pay special attention to Nonnegative Matrix Factorization (NMF) [3], [4] and Probabilistic Matrix Factorization (PMF) [5]. Each of these MF techniques is suited to a particular class of applications. For example, NMF has shown its advantage in Face Analysis (FA) [4]. In FA applications, each face is represented by a feature vector. NMF factorizes the matrix of multiple face feature vectors into factor matrices and thereby achieves a more compact representation of the original face data. On the other hand, PMF has been successfully applied to Collaborative Filtering (CF) [6]. CF is one of the most successful techniques for automatic recommendation systems, which need only an observed rating matrix as input. PMF decomposes the sparse rating matrix into a user profile matrix and an item profile matrix, and then makes predictions for the unknown entries. However, traditional NMF and PMF ignore the following data approximation phenomena in FA and CF.
Alignment approximation in FA: The faces need to be rotated and aligned to ensure that the same columns in the data matrix correspond to the same positions in the faces. Such alignment is hardly perfect in practice, i.e., there is approximation in the alignment in FA applications (see Section II-A for details).

Rating approximation in CF: When a user rates an item in a real-life rating system, she/he usually selects a discretized rating value which is close to the ideal numerical preference value (the exact preference degree). Thus, the rating matrix does contain approximations to some degree (see Section II-B for details).
Interval bounds are better suited than single-valued variables to describe the above phenomena of approximation. Many application areas have taken advantage of interval-valued data analysis (see for instance [7]), such as object tracking, market analysis, quantitative economics and so on. In traditional MF techniques, input data matrices might contain real values, non-negative values or binary values, all of which are single-valued. In this paper, we introduce a new type of data matrix, the interval-valued matrix, to MF, which captures approximation in the observed data matrix. Then, we propose a novel MF framework, Interval-valued Matrix Factorization (IMF), to decompose such matrices. Under the IMF framework, we inject data approximation into NMF and PMF and extend them to interval-valued NMF (I-NMF for short) and interval-valued PMF (I-PMF for short). Therefore, our work is a marriage between interval-valued data analysis [7] and MF, and our contributions to both research areas are summarized as follows:
• We analyze the alignment approximation in FA as well as the rating approximation in CF, and formalize them with interval-valued matrices (Section II).
• We propose the IMF framework, under which we extend two representative basic MF techniques, NMF and PMF, to I-NMF and I-PMF, which are capable of handling interval-valued matrices (Section IV).
• We conduct extensive experiments to show that the proposed I-NMF and I-PMF significantly outperform their traditional single-valued counterparts in FA and CF applications (Section V).
II. INTERVAL-VALUED MATRIX AND DATA APPROXIMATION
In this section we formalize the approximation in the CF and FA problems with interval-valued matrices. First of all, we give formal definitions of the interval-valued matrix.

Let $\boldsymbol{X} \in \mathbb{R}^{n \times d}$ denote the input data matrix, with entries denoted as $X_{ij}$. Let $I(\boldsymbol{X})$ denote the interval-valued matrix corresponding to $\boldsymbol{X}$; we have the following two equivalent representations for $I(\boldsymbol{X})$.
Definition 1 (Center-radius representation). We denote the interval with center $X_{ij}$ and radius $\delta_{ij}$ as
$$I(X_{ij}) = \langle X_{ij}, \delta_{ij} \rangle \qquad (1)$$
For entire matrices, we have $I(\boldsymbol{X}) = \langle \boldsymbol{X}, \boldsymbol{\delta} \rangle$.
Definition 2 (Min-max representation). We denote the interval bounds as $X^{\mathrm{low}}_{ij} = X_{ij} - \delta_{ij}$ and $X^{\mathrm{up}}_{ij} = X_{ij} + \delta_{ij}$:
$$I(X_{ij}) = [X^{\mathrm{low}}_{ij}, X^{\mathrm{up}}_{ij}] \qquad (2)$$
For entire matrices, we have $I(\boldsymbol{X}) = [\boldsymbol{X}^{\mathrm{low}}, \boldsymbol{X}^{\mathrm{up}}]$.
In practice, we might only observe single-valued data matrices rather than interval-valued ones. In the following subsections we give empirical methods to construct $I(\boldsymbol{X})$ based on $\boldsymbol{X}$. The above definitions have already been adopted in interval-valued data analysis [8]. In our work, we use the center-radius representation (Definition 1) to formalize the rating approximation in CF and the alignment approximation in FA and then construct the interval-valued matrices. The min-max representation (Definition 2) will be used as input for the proposed IMF models introduced in Section IV.
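To make the two representations concrete, the following minimal NumPy sketch (our illustration, not part of the paper; the function names are our own) converts a center-radius pair into min-max bounds and back:

    import numpy as np

    def center_radius_to_minmax(X, delta):
        # Definition 1 -> Definition 2: <X, delta> to [X_low, X_up]
        return X - delta, X + delta

    def minmax_to_center_radius(X_low, X_up):
        # Definition 2 -> Definition 1: [X_low, X_up] to <X, delta>
        return (X_low + X_up) / 2.0, (X_up - X_low) / 2.0

    # toy example: a 2x2 center matrix with per-entry radii
    X = np.array([[3.0, 1.0], [4.0, 2.0]])
    delta = np.array([[0.5, 0.2], [0.3, 0.4]])
    X_low, X_up = center_radius_to_minmax(X, delta)  # [[2.5, 0.8], [3.7, 1.6]], ...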
A. Alignment Approximation in FA
In many FA techniques, we need to align the face images such that, ideally, pixels with the same coordinates correspond to identical positions on a face. In Figure 1, we take the position of the nose tip as an example to show that the alignment is not perfect. Although the same position of a face is not exactly aligned across images, the corresponding pixels should be near each other. Taking the first row as an example, the pixel with coordinates (33,35) may correspond to the face position at (41,34) in the second image, or at (33,40) in the third, and so on. Formally, the value of a pixel with coordinates $(x, y)$, $x \in \{1, \ldots, d_x\}$, $y \in \{1, \ldots, d_y\}$, might correspond to a pixel with coordinates $(x \pm \Delta x, y \pm \Delta y)$, $0 \le \Delta x, \Delta y \le r$.

Figure 1. Illustration of alignment approximation. (Ten face images; the panel titles give the nose-tip pixel coordinates: (33,35), (41,34), (33,40), (25,32), (44,33), (34,32), (29,38), (32,36), (35,38), (37,37).)

Figure 2. An example of the $\boldsymbol{\delta}$ matrix corresponding to the faces in Figure 1.
In MF, the $i$'th face is represented by a vector $\boldsymbol{X}_i$ with dimensionality $d = d_x \times d_y$. We use $(x^{(i,j)}, y^{(i,j)})$ to denote the coordinates of the pixel in the $i$'th image which corresponds to the $j$'th element of vector $\boldsymbol{X}_i$, namely $X_{ij}$. Then, we define the following set of entries in $\boldsymbol{X}$ for each $X_{ij}$:
$$\mathcal{S}^{\mathrm{FA}(r)}_{ij} = \{ X_{ij'} \mid |x^{(i,j')} - x^{(i,j)}| \le r \,\wedge\, |y^{(i,j')} - y^{(i,j)}| \le r \} \qquad (3)$$
The elements of $\mathcal{S}^{\mathrm{FA}(r)}_{ij}$ correspond to pixels around $(x^{(i,j)}, y^{(i,j)})$ within a range $r$. Intuitively, $X_{ij}$ may correspond to a value in the interval $[\min(\mathcal{S}^{\mathrm{FA}(r)}_{ij}), \max(\mathcal{S}^{\mathrm{FA}(r)}_{ij})]$, which coincides with the min-max definition (Definition 2).
However, min-max statistics are not robust in practice; alternatively, we construct $I(X_{ij})$ based on the standard deviation to capture the variation in $\mathcal{S}^{\mathrm{FA}(r)}_{ij}$. According to Definition 1, we set $X_{ij}$ as the center of $I(X_{ij})$ and calculate the radius via
$$\delta^{\mathrm{FA}(r)}_{ij} := \alpha \cdot \mathrm{std}(\mathcal{S}^{\mathrm{FA}(r)}_{ij}) \qquad (4)$$
where $\alpha \in \mathbb{R}_+$ is a multiplicative scale coefficient. Based on Definition 2, it is then easy to calculate the bounds of the interval-valued input for I-NMF according to the min-max representation. Examples of the $\boldsymbol{\delta}^{\mathrm{FA}(r)}_i$ corresponding to the faces in Figure 1 are shown in Figure 2, where lighter gray levels represent larger radii. In Figure 2, we can see that positions such as the eyes or nose have larger radii. These positions are more sensitive to alignment errors, which may hurt the performance of single-valued techniques. With a relatively large radius, the interval-valued techniques may be more tolerant of such alignment errors.
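As an illustration of (3) and (4), the sketch below computes one radius per pixel of a single face from the standard deviation of its $r$-neighborhood. This is our own reading of the construction; the function name, looping strategy and border handling are assumptions, not the authors' implementation:

    import numpy as np

    def fa_radius(face, r=5, alpha=2.5):
        # face: a d_x x d_y array of pixel values (one row of X reshaped
        # back to image coordinates). For each pixel (x, y), the set
        # S^{FA(r)} of (3) collects the pixels whose coordinates differ
        # by at most r in each direction; (4) sets the radius to
        # alpha * std of that set.
        dx, dy = face.shape
        delta = np.empty_like(face, dtype=float)
        for x in range(dx):
            for y in range(dy):
                x0, x1 = max(0, x - r), min(dx, x + r + 1)
                y0, y1 = max(0, y - r), min(dy, y + r + 1)
                delta[x, y] = alpha * face[x0:x1, y0:y1].std()
        return delta  # bounds: face - delta, face + delta (Definition 2)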

Table I
EXAMPLES OF SINGLE-VALUED AND INTERVAL-VALUED RATING MATRICES FOR CF

(a) A single-valued rating matrix $\boldsymbol{X}$ (observed ratings of users $u_1, \ldots, u_6$ over movies $m_1, \ldots, m_5$):
$u_1$: 1, 4, 5
$u_2$: 3, 1, 2
$u_3$: 1, 4
$u_4$: 5
$u_5$: 1, 4, 2
$u_6$: 3, 2, 5

(b) The corresponding interval-valued rating matrix $I(\boldsymbol{X})$:
$u_1$: [0.6, 1.4], [3.5, 4.5], [4.8, 5.2]
$u_2$: [2.8, 3.2], [0.5, 1.5], [1.5, 2.5]
$u_3$: [0.7, 1.3], [3.5, 4.5]
$u_4$: [4.5, 5.5]
$u_5$: [0.4, 1.6], [3.7, 4.3], [1.8, 2.2]
$u_6$: [2.7, 3.3], [1.4, 2.6], [4.2, 5.8]
B. Rating Approximation in CF

In CF, the rating degree is actually an approximation to the actual preference degree of a user $u$ for an item. For example, a web site allows users to rate items from one star to five stars. User $u$ may think that two items $a$ and $b$ are worth more than two stars but not four stars, and he may prefer $a$ to $b$. Suppose the continuous preference degrees of user $u$ for $a$ and $b$ are 3.4 and 2.8, respectively. However, due to the constraint of the rating system, $u$ can only rate both $a$ and $b$ as three stars, and the difference between $a$ and $b$ disappears. This also indicates that the rating degree actually represents a continuous interval, which may include the ideal preference degree. Intuitively, the rating degree $X_{ij}$ is affected by both the $i$'th user and the $j$'th item. Therefore, we define the observations relevant to $X_{ij}$ as the following set:
$$\mathcal{S}^{\mathrm{CF}}_{ij} = \{ X_{i'j'} \mid (i' = i \vee j' = j) \wedge (i', j') \neq (i, j) \} \qquad (5)$$
$\mathcal{S}^{\mathrm{CF}}_{ij}$ is constructed from the observed rating degrees in the $i$'th row and the $j$'th column of the rating matrix in CF. Again, we calculate the radius $\delta^{\mathrm{CF}}_{ij}$ for each observed rating degree $X_{ij}$ according to Definition 1, based on the standard deviation of the ratings in $\mathcal{S}^{\mathrm{CF}}_{ij}$:
$$\delta^{\mathrm{CF}}_{ij} := \alpha \cdot \mathrm{std}(\mathcal{S}^{\mathrm{CF}}_{ij}) \qquad (6)$$
where $\alpha \in \mathbb{R}_+$ is again a multiplicative scale coefficient. Intuitively, if a user's ratings on different items (or the ratings of an item from different users) vary greatly, we should assign a large interval radius to this entry. Then, it is easy to calculate the bounds of the interval-valued input for I-PMF according to the min-max representation (Definition 2). An example of an interval-valued rating matrix together with its corresponding single-valued matrix is shown in Table I.
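A minimal sketch of the construction (5)-(6) for a partially observed rating matrix; it is our own illustration, and the variable names (such as the `observed` mask) are assumptions:

    import numpy as np

    def cf_radius(X, observed, alpha=2.5):
        # X: n x d rating matrix; observed: boolean mask of rated entries.
        n, d = X.shape
        delta = np.zeros((n, d))
        for i, j in zip(*np.nonzero(observed)):
            row_mask = observed[i, :].copy()
            col_mask = observed[:, j].copy()
            row_mask[j] = False          # exclude X_ij itself, as in (5)
            col_mask[i] = False
            s = np.concatenate([X[i, row_mask], X[col_mask, j]])
            if s.size > 1:
                delta[i, j] = alpha * s.std()   # equation (6)
        return delta  # X_low = X - delta, X_up = X + delta on observed entries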
III. MATRIX FACTORIZATION WITH APPLICATIONS

In this section we briefly discuss MF techniques and their applications. We devote special attention to NMF and PMF, since they serve as the single-valued counterparts of the proposed IMF models.
MF provides a linear approximate data representation for the original data matrix $\boldsymbol{X} \in \mathbb{R}^{n \times d}$. Generally, we have
$$\boldsymbol{X} \approx \boldsymbol{U}\boldsymbol{V} \qquad (7)$$
where $\boldsymbol{U} \in \mathbb{R}^{n \times k}$ and $\boldsymbol{V} \in \mathbb{R}^{k \times d}$. Each data instance $\boldsymbol{X}_i$ is approximated by a linear combination of the rows of $\boldsymbol{V}$ with weight vector $\boldsymbol{U}_i$, the $i$'th row of $\boldsymbol{U}$. Thus, we call $\boldsymbol{U}$ the weight matrix and $\boldsymbol{V}$ the basis matrix. The ranks of $\boldsymbol{U}$ and $\boldsymbol{V}$ are always much lower than the rank of $\boldsymbol{X}$, i.e., $k \ll \min(n, d)$. After learning $\boldsymbol{U}$ and $\boldsymbol{V}$, we can reconstruct $\boldsymbol{X}$ as follows:
$$\hat{\boldsymbol{X}} = \boldsymbol{U}\boldsymbol{V} \qquad (8)$$
Various assumptions on $\boldsymbol{U}$ and $\boldsymbol{V}$ lead to different MF models, which have been widely used in data mining applications. The following two families of applications are relevant to this paper:

Parts-based representation: MF naturally represents the original data matrix $\boldsymbol{X}$ by parts. The rows of $\boldsymbol{V}$, the so-called basis vectors, are optimized for the linear approximation of $\boldsymbol{X}$, and $\boldsymbol{U}_i$ can be regarded as a lower-dimensional representation of $\boldsymbol{X}_i$. NMF has been successfully applied to find additive parts-based representations for face images (see Section III-A for details).

Missing data prediction: The reconstructed matrix $\hat{\boldsymbol{X}}$ is a full matrix. Therefore, when $\boldsymbol{X}$ is sparse, we can make predictions for its missing entries based on $\hat{\boldsymbol{X}}$. For example, PMF has been successfully applied to predict the missing entries of the rating matrices in CF (see Section III-B for details).
A. Nonnegative Matrix Factorization

NMF aims to factorize a nonnegative matrix $\boldsymbol{X} \in \mathbb{R}^{n \times d}_+$ into two nonnegative matrices $\boldsymbol{U} \in \mathbb{R}^{n \times k}_+$ and $\boldsymbol{V} \in \mathbb{R}^{k \times d}_+$ which minimize the following $L_2$ loss function:
$$\ell_{\mathrm{NMF}} = \|\boldsymbol{X} - \boldsymbol{U}\boldsymbol{V}\|^2_{\mathrm{F}} \quad \text{s.t.} \quad \boldsymbol{U} \ge 0, \; \boldsymbol{V} \ge 0 \qquad (9)$$
where $\|\cdot\|_{\mathrm{F}}$ denotes the Frobenius norm. The estimates of $\boldsymbol{U}$ and $\boldsymbol{V}$ can be found via the multiplicative update rules proposed in [3], which iteratively update $\boldsymbol{U}$ and $\boldsymbol{V}$ as follows:
$$U_{ij} \leftarrow U_{ij} \frac{(\boldsymbol{X}\boldsymbol{V}^T)_{ij}}{(\boldsymbol{U}\boldsymbol{V}\boldsymbol{V}^T)_{ij}}, \qquad V_{ij} \leftarrow V_{ij} \frac{(\boldsymbol{U}^T\boldsymbol{X})_{ij}}{(\boldsymbol{U}^T\boldsymbol{U}\boldsymbol{V})_{ij}} \qquad (10)$$
The update rules in (10) can be derived from the Karush-Kuhn-Tucker optimality conditions [9] for the inequality constraints (see [10] for details). In [3], it is proved that the updates in (10) lead to a local minimum of (9). The nonnegativity constraints on $\boldsymbol{U}$ and $\boldsymbol{V}$ only allow additive linear combinations of the basis vectors in $\boldsymbol{V}$, the so-called parts-based representation [4]. NMF is suited to many real-world applications such as human face analysis [4]. In human face analysis, the resultant matrix $\boldsymbol{U}$ constitutes an optimized representation of the original data instances. Many FA algorithms, such as face recognition and face clustering, may be applied directly on $\boldsymbol{U}$ instead of the original data matrix $\boldsymbol{X}$.
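For concreteness, a minimal NumPy sketch of NMF under the update rules (10); it is our own illustration, and the small constant `eps`, which guards against division by zero, is an implementation detail the paper does not discuss:

    import numpy as np

    def nmf(X, k, iters=200, eps=1e-9, seed=0):
        # Multiplicative updates (10); X is n x d and nonnegative.
        rng = np.random.default_rng(seed)
        n, d = X.shape
        U = rng.random((n, k))
        V = rng.random((k, d))
        for _ in range(iters):
            U *= (X @ V.T) / (U @ V @ V.T + eps)
            V *= (U.T @ X) / (U.T @ U @ V + eps)
        return U, V  # reconstruction: U @ V as in (8)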
B. Probabilistic Matrix Factorization

In CF, the PMF model [5] assumes that the ratings are drawn from a Gaussian distribution:
$$p(X_{ij} \mid i, j, \boldsymbol{U}, \boldsymbol{V}, \sigma^2) = \mathcal{G}(X_{ij} \mid \boldsymbol{U}_i \boldsymbol{V}_j, \sigma^2) \qquad (11)$$
Zero-mean spherical Gaussian priors are placed on $\boldsymbol{U}$ and $\boldsymbol{V}$:
$$p(\boldsymbol{U} \mid \sigma^2_1) = \prod_i \mathcal{G}(\boldsymbol{U}_i \mid \boldsymbol{0}, \sigma^2_1 \boldsymbol{I}), \qquad p(\boldsymbol{V} \mid \sigma^2_1) = \prod_j \mathcal{G}(\boldsymbol{V}_j \mid \boldsymbol{0}, \sigma^2_1 \boldsymbol{I}) \qquad (12)$$
$\boldsymbol{U}$ and $\boldsymbol{V}$ are computed by minimizing the following regularized $L_2$ loss over the observed ratings:
$$\ell_{\mathrm{PMF}} = \|\boldsymbol{X} - \boldsymbol{U}\boldsymbol{V}\|^2_{\mathrm{F}} + \lambda \left[ \|\boldsymbol{U}\|^2_{\mathrm{F}} + \|\boldsymbol{V}\|^2_{\mathrm{F}} \right] \qquad (13)$$
where $\lambda = \sigma^2 / \sigma^2_1$.
A local minimum of (13) can be found via gradient descent in $\boldsymbol{U}_i$ and $\boldsymbol{V}_j$:
$$\frac{\partial \ell_{\mathrm{PMF}}}{\partial \boldsymbol{U}_i} = \sum_{j \in \mathcal{I}_i} (\boldsymbol{U}_i \boldsymbol{V}_j - X_{ij}) \boldsymbol{V}^T_j + \lambda \boldsymbol{U}_i, \qquad \frac{\partial \ell_{\mathrm{PMF}}}{\partial \boldsymbol{V}_j} = \sum_{i \in \mathcal{I}_j} (\boldsymbol{U}_i \boldsymbol{V}_j - X_{ij}) \boldsymbol{U}^T_i + \lambda \boldsymbol{V}_j \qquad (14)$$
where $\mathcal{I}_i$ (resp. $\mathcal{I}_j$) indexes the observed ratings in the $i$'th row (resp. the $j$'th column).
Based on the learned $\boldsymbol{U}$ and $\boldsymbol{V}$, we can estimate the unknown ratings in $\boldsymbol{X}$ via
$$\hat{X}_{ij} = \boldsymbol{U}_i \boldsymbol{V}_j \qquad (15)$$
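A minimal batch-gradient sketch of PMF following (13)-(15); it is our own illustration, and the learning rate, iteration count and initialization scale are our assumptions rather than values from [5]:

    import numpy as np

    def pmf(X, observed, k, lam=0.1, lr=0.005, iters=500, seed=0):
        # Gradient descent on the regularized loss (13); `observed` masks
        # the known ratings. Unobserved entries of X may hold any finite
        # placeholder (e.g. 0), since they are masked out of the residual.
        rng = np.random.default_rng(seed)
        n, d = X.shape
        U = 0.1 * rng.standard_normal((n, k))
        V = 0.1 * rng.standard_normal((k, d))
        for _ in range(iters):
            E = observed * (U @ V - X)        # residuals on observed entries
            U -= lr * (E @ V.T + lam * U)     # gradient (14) w.r.t. U
            V -= lr * (U.T @ E + lam * V)     # gradient (14) w.r.t. V
        return U, V                           # predict via (15): U @ V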
IV. INTERVAL-VALUED MATRIX FACTORIZATION
In this section, we introduce the IMF framework. The proposed framework is based on the min-max representation of the interval-valued matrix: $I(\boldsymbol{X}) = [\boldsymbol{X}^{\mathrm{low}}, \boldsymbol{X}^{\mathrm{up}}]$. We extend the original MF over $\boldsymbol{X}$ to a joint MF over $\boldsymbol{X}^{\mathrm{low}}$ and $\boldsymbol{X}^{\mathrm{up}}$. Firstly, we assume each $X_{ij}$ is drawn from a uniform distribution with parameters $X^{\mathrm{low}}_{ij}$ and $X^{\mathrm{up}}_{ij}$:
$$X_{ij} \sim \mathrm{uniform}(X^{\mathrm{low}}_{ij}, X^{\mathrm{up}}_{ij}) \qquad (16)$$
Based on this assumption, we have
$$\mathbb{E}(X_{ij}) = \frac{1}{2}(X^{\mathrm{low}}_{ij} + X^{\mathrm{up}}_{ij}) \qquad (17)$$
Therefore, we propose to first estimate the bounds of $I(\boldsymbol{X})$ via the following joint MF:
$$\boldsymbol{X}^{\mathrm{low}} \approx \boldsymbol{U}\boldsymbol{V}^{\mathrm{low}}, \qquad \boldsymbol{X}^{\mathrm{up}} \approx \boldsymbol{U}\boldsymbol{V}^{\mathrm{up}} \qquad (18)$$
We share the weight matrix $\boldsymbol{U}$ to maintain a unique profile for each data instance, and use $\boldsymbol{V}^{\mathrm{low}}$ and $\boldsymbol{V}^{\mathrm{up}}$ to carry the data approximation. The reconstructions of $\boldsymbol{X}^{\mathrm{low}}$ and $\boldsymbol{X}^{\mathrm{up}}$ are calculated as follows:
$$\hat{\boldsymbol{X}}^{\mathrm{low}} = \boldsymbol{U}\boldsymbol{V}^{\mathrm{low}}, \qquad \hat{\boldsymbol{X}}^{\mathrm{up}} = \boldsymbol{U}\boldsymbol{V}^{\mathrm{up}} \qquad (19)$$
According to (17) and (19), we can reconstruct $\boldsymbol{X}$ via
$$\hat{\boldsymbol{X}} = \frac{1}{2}(\hat{\boldsymbol{X}}^{\mathrm{low}} + \hat{\boldsymbol{X}}^{\mathrm{up}}) \qquad (20)$$
A. Interval-valued NMF

According to (9) and (18), the $L_2$ loss function of interval-valued NMF (I-NMF for short) is
$$\ell_{\text{I-NMF}} = \|\boldsymbol{X}^{\mathrm{low}} - \boldsymbol{U}\boldsymbol{V}^{\mathrm{low}}\|^2_{\mathrm{F}} + \|\boldsymbol{X}^{\mathrm{up}} - \boldsymbol{U}\boldsymbol{V}^{\mathrm{up}}\|^2_{\mathrm{F}} \quad \text{s.t.} \quad \boldsymbol{U} \ge 0, \; \boldsymbol{V}^{\mathrm{low}} \ge 0, \; \boldsymbol{V}^{\mathrm{up}} \ge 0 \qquad (21)$$
Similar to traditional NMF, we have the following multiplicative update rules for $\boldsymbol{U}$, $\boldsymbol{V}^{\mathrm{low}}$ and $\boldsymbol{V}^{\mathrm{up}}$:
$$U^{t+1}_{ij} \leftarrow U^{t}_{ij} \frac{[\boldsymbol{X}^{\mathrm{low}}(\boldsymbol{V}^{\mathrm{low}})^T + \boldsymbol{X}^{\mathrm{up}}(\boldsymbol{V}^{\mathrm{up}})^T]_{ij}}{[\boldsymbol{U}\boldsymbol{V}^{\mathrm{low}}(\boldsymbol{V}^{\mathrm{low}})^T + \boldsymbol{U}\boldsymbol{V}^{\mathrm{up}}(\boldsymbol{V}^{\mathrm{up}})^T]_{ij}}$$
$$V^{\mathrm{low},t+1}_{ij} \leftarrow V^{\mathrm{low},t}_{ij} \frac{(\boldsymbol{U}^T\boldsymbol{X}^{\mathrm{low}})_{ij}}{(\boldsymbol{U}^T\boldsymbol{U}\boldsymbol{V}^{\mathrm{low}})_{ij}}, \qquad V^{\mathrm{up},t+1}_{ij} \leftarrow V^{\mathrm{up},t}_{ij} \frac{(\boldsymbol{U}^T\boldsymbol{X}^{\mathrm{up}})_{ij}}{(\boldsymbol{U}^T\boldsymbol{U}\boldsymbol{V}^{\mathrm{up}})_{ij}} \qquad (22)$$
Similar to traditional NMF, the $L_2$ loss function $\ell_{\text{I-NMF}}$ in (21) is nonincreasing under the multiplicative update rules in (22).

Traditional NMF decomposes the original data matrix into two low-rank factor matrices: one profiles the data instances while the other profiles the features. In I-NMF, the proposed joint matrix factorization framework makes the feature-profile factor matrices $\boldsymbol{V}^{\mathrm{low}}$ and $\boldsymbol{V}^{\mathrm{up}}$ carry the data approximation while preserving a unique profile $\boldsymbol{U}_i$ for each data instance. We can directly apply the face analysis techniques over $\boldsymbol{U}$.
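The updates (22) amount to two coupled NMF problems sharing the weight matrix $\boldsymbol{U}$; a minimal sketch in the same style as the NMF code above (again our own illustration, not the authors' code):

    import numpy as np

    def inmf(X_low, X_up, k, iters=200, eps=1e-9, seed=0):
        # Joint multiplicative updates (22): U is shared, while V_low and
        # V_up absorb the interval bounds.
        rng = np.random.default_rng(seed)
        n, d = X_low.shape
        U = rng.random((n, k))
        V_low = rng.random((k, d))
        V_up = rng.random((k, d))
        for _ in range(iters):
            num = X_low @ V_low.T + X_up @ V_up.T
            den = U @ V_low @ V_low.T + U @ V_up @ V_up.T + eps
            U *= num / den
            V_low *= (U.T @ X_low) / (U.T @ U @ V_low + eps)
            V_up *= (U.T @ X_up) / (U.T @ U @ V_up + eps)
        X_hat = 0.5 * (U @ V_low + U @ V_up)   # reconstruction (19)-(20)
        return U, V_low, V_up, X_hat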
B. Interval-valued PMF

In this section we introduce the interval-valued PMF (I-PMF for short). Analogously to (13) and according to (18), we have the following regularized $L_2$ loss:
$$\ell_{\text{I-PMF}} = \|\boldsymbol{X}^{\mathrm{low}} - \boldsymbol{U}\boldsymbol{V}^{\mathrm{low}}\|^2_{\mathrm{F}} + \|\boldsymbol{X}^{\mathrm{up}} - \boldsymbol{U}\boldsymbol{V}^{\mathrm{up}}\|^2_{\mathrm{F}} + \lambda \left( \|\boldsymbol{U}\|^2_{\mathrm{F}} + \|\boldsymbol{V}^{\mathrm{low}}\|^2_{\mathrm{F}} + \|\boldsymbol{V}^{\mathrm{up}}\|^2_{\mathrm{F}} \right) \qquad (23)$$

Figure 3. Performance comparison in face analysis. (Four panels plot, against the number of factors $k$ from 20 to 40: the F1 measure for face recognition, RE for face reconstruction, and ACC and NMI for face clustering; the lines compare Raw, NMF and I-NMF on the ORL32 and ORL64 data sets.)
It is easy to derive a gradient descent in $\boldsymbol{U}_i$, $\boldsymbol{V}^{\mathrm{low}}_j$ and $\boldsymbol{V}^{\mathrm{up}}_j$ to find a local minimum of (23):
$$\frac{\partial \ell_{\text{I-PMF}}}{\partial \boldsymbol{U}_i} = \sum_{j \in \mathcal{I}_i} \left[ (\boldsymbol{U}_i \boldsymbol{V}^{\mathrm{low}}_j - X^{\mathrm{low}}_{ij})(\boldsymbol{V}^{\mathrm{low}}_j)^T + (\boldsymbol{U}_i \boldsymbol{V}^{\mathrm{up}}_j - X^{\mathrm{up}}_{ij})(\boldsymbol{V}^{\mathrm{up}}_j)^T \right] + \lambda \boldsymbol{U}_i$$
$$\frac{\partial \ell_{\text{I-PMF}}}{\partial \boldsymbol{V}^{\mathrm{low}}_j} = \sum_{i \in \mathcal{I}_j} (\boldsymbol{U}_i \boldsymbol{V}^{\mathrm{low}}_j - X^{\mathrm{low}}_{ij}) \boldsymbol{U}^T_i + \lambda \boldsymbol{V}^{\mathrm{low}}_j$$
$$\frac{\partial \ell_{\text{I-PMF}}}{\partial \boldsymbol{V}^{\mathrm{up}}_j} = \sum_{i \in \mathcal{I}_j} (\boldsymbol{U}_i \boldsymbol{V}^{\mathrm{up}}_j - X^{\mathrm{up}}_{ij}) \boldsymbol{U}^T_i + \lambda \boldsymbol{V}^{\mathrm{up}}_j \qquad (24)$$
For the CF application, we can use the learned $\boldsymbol{U}$, $\boldsymbol{V}^{\mathrm{low}}$ and $\boldsymbol{V}^{\mathrm{up}}$ to compute the unknown ratings via (19) and (20).
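Analogously, a minimal batch-gradient sketch of I-PMF following (23), (24) and the prediction rule (19)-(20); the hyperparameters are again our own assumptions, not values from the paper:

    import numpy as np

    def ipmf(X_low, X_up, observed, k, lam=0.1, lr=0.005, iters=500, seed=0):
        # Gradient descent on (23); `observed` masks the known ratings, and
        # unobserved entries of X_low/X_up may hold any finite placeholder.
        rng = np.random.default_rng(seed)
        n, d = X_low.shape
        U = 0.1 * rng.standard_normal((n, k))
        V_low = 0.1 * rng.standard_normal((k, d))
        V_up = 0.1 * rng.standard_normal((k, d))
        for _ in range(iters):
            E_low = observed * (U @ V_low - X_low)
            E_up = observed * (U @ V_up - X_up)
            U -= lr * (E_low @ V_low.T + E_up @ V_up.T + lam * U)  # (24)
            V_low -= lr * (U.T @ E_low + lam * V_low)
            V_up -= lr * (U.T @ E_up + lam * V_up)
        return 0.5 * (U @ V_low + U @ V_up)    # predicted ratings, (20)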
V. EXPERIMENTAL RESULTS
We divide the experiments into two parts: in Section V-A we compare I-NMF against basic NMF on FA applications, and in Section V-B we compare the performance of I-PMF and PMF on CF applications.
A. Comparison of I-NMF against NMF
We compare the performance of NMF and I-NMF on
various FA applications including face recognition, face
reconstruction and face clustering.
1) Data Description and Evaluation Setting: We use the Olivetti Research Laboratory (ORL) face data sets to evaluate the NMF and I-NMF models. They contain ten different images of each of 40 distinct persons ($n = 10 \times 40 = 400$ images in total). Two versions of the processed data sets¹, one with resolution 32x32 (ORL32) and the other with 64x64 (ORL64), are used for our experimental evaluation. In ORL32 each face image is represented by a vector with dimensionality $d = 32 \times 32 = 1024$, while in ORL64 $d = 64 \times 64 = 4096$.

We implement I-NMF based on the multiplicative update rules introduced in Section IV-A; the experiments for NMF are based on the DTU NMF toolbox². Various classifiers have been adopted for face recognition; in this paper we apply the nearest neighbor method for its simplicity. For face clustering, we choose the popular K-means algorithm. All classification and clustering algorithms are applied to the output weight matrices $\boldsymbol{U}$ from NMF and I-NMF, and we also report the performance of these algorithms on the raw data matrix $\boldsymbol{X}$ as a baseline. In the construction of the interval-valued matrices (4), we set $r = 5$ and $\alpha = 2.5$.

¹ http://www.cs.uiuc.edu/homes/dengcai2/Data/FaceData.html
We evaluate the proposed models in terms of face recognition and clustering effectiveness. Note that face recognition is actually a classification problem. To evaluate the effectiveness of classification (face recognition), we use the standard F1 measure. We adopt two popular metrics, Normalized Mutual Information (NMI) [11] and Clustering Accuracy (ACC), for cluster evaluation. Based on NMF, the faces are reconstructed as weighted summations of basis vectors. We use the following Reconstruction Error (RE) to evaluate the goodness of the reconstructed matrix $\hat{\boldsymbol{X}}$ with respect to the original data matrix $\boldsymbol{X}$:
$$\mathrm{RE}(\hat{\boldsymbol{X}}, \boldsymbol{X}) = \frac{\sum_{i=1}^{n} \sum_{j=1}^{d} (\hat{X}_{ij} - X_{ij})^2}{n \times d}$$

Note that larger values of F1, NMI and ACC indicate better face recognition or clustering results, while smaller values of RE indicate better face reconstruction.
2) Evaluation Results: We compare the models with varying rank $k$ of the factor matrices and varying interval sizes.

Evaluation with varying $k$: The face clustering and face reconstruction tasks are evaluated over the entire data sets. For the face recognition task, we make ten rounds of random sampling of 50% of the data for training. In general, the performance of NMF and I-NMF on all the face analysis tasks varies with the number of latent factors ($k$). For each value of $k$, we run 100 rounds of NMF and I-NMF. The average values of the performance metrics are plotted for each model in Figure 3, where each sub-figure corresponds to a face analysis task with a specific evaluation metric and each line corresponds to a model on a specific data set. From Figure 3, we see that I-NMF outperforms NMF with statistical significance over all evaluation metrics on both data sets.
B. Comparison of I-PMF against PMF

1) Data Description and Evaluation Setting: In this part of the experiments, we also use two data sets for evaluation. The Movielens data set³ is downloaded from the web site of the GroupLens research group; we use the subset which contains 100,000 ratings for $d = 1682$ movies by $n = 943$ users of the online movie recommender service. We name this data set Movielens-100K. The Netflix data set⁴ is the official data set used in the Netflix Prize competition. Again,
² http://isp.imm.dtu.dk/toolbox/nmf/nmf_toolbox_ver1.4.zip
³ http://www.grouplens.org/system/files/ml-data_0.zip
⁴ http://archive.ics.uci.edu/ml/datasets/Netflix+Prize

Citations
Journal ArticleDOI
TL;DR: This paper proposes matrix decomposition techniques that consider the existence of interval-valued data and shows that naive ways to deal with such imperfect data may introduce errors in analysis and present factorization techniques that are especially effective when the amount of imprecise information is large.
Abstract: With many applications relying on multi-dimensional datasets for decision making, matrix factorization (or decomposition) is becoming the basis for many knowledge discoveries and machine learning tasks, from clustering, trend detection, anomaly detection, to correlation analysis. Unfortunately, a major shortcoming of matrix analysis operations is that, despite their effectiveness when the data is scalar, these operations become difficult to apply in the presence of non-scalar data, as they are not designed for data that include non-scalar observations, such as intervals. Yet, in many applications, the available data are inherently non-scalar for various reasons, including imprecision in data collection, conflicts in aggregated data, data summarization, or privacy issues, where one is provided with a reduced, clustered, or intentionally noisy and obfuscated version of the data to hide information. In this paper, we propose matrix decomposition techniques that consider the existence of interval-valued data. We show that naive ways to deal with such imperfect data may introduce errors in analysis and present factorization techniques that are especially effective when the amount of imprecise information is large.

4 citations


Cites background or methods from "Interval-valued Matrix Factorizatio..."

  • ...As discussed above, interval NMF and PMF [9] also have been studied to resolve alignment approximation in face analysis and rating approximation in collaborative filtering....


  • ...As the chart shows, the prediction accuracy of all algorithms improves as we consider higher decomposition ranks and the proposed latent semantic alignment based approach, AIPMF, leads to better prediction performance than both PMF and I-PMF, for decomposition ranks > 60....


  • ...[9] extended these to interval-valued matrices as follows:...


  • ...As described in Section 6.1.2, we also compare proposed ISVD approaches with NMF and I-NMF [9] for the face analysis tasks: data reconstruction and classification....


  • ...For collaborative filtering with social media data, discussed in Section 6.1.3, we used PMF and I-PMF [9] as competitors....


Journal ArticleDOI
TL;DR: In this article, the known Tensor-Train technique for tensor factorization is extended to deal with uncertain data, here modeled as intervals.
Abstract: In many fields of computer science, tensor decomposition techniques are increasingly being adopted as the core of many applications that rely on multi-dimensional datasets for implementing knowledge discovery tasks. Unfortunately, a major shortcoming of state-of-the-art tensor analyses is that, despite their effectiveness when the data is certain, these operations become difficult to apply, or altogether inapplicable, in the presence of uncertainty in the data, a circumstance common to many real-world scenarios. In this paper we propose a way to address this issue by extending the known Tensor-Train technique for tensor factorization in order to deal with uncertain data, here modeled as intervals. Working with interval-valued data, however, presents numerous challenges, since many algebraic operations that form the building blocks of the factorization process, as well as the properties that make these procedures useful for knowledge discovery, cannot be easily extended from their scalar counterparts, and often require some approximation (including, though it is not only the case, for keeping computational costs manageable). These challenges notwithstanding, our proposed techniques proved to be reasonably effective, and are supported by a thorough experimental validation.

1 citation

Proceedings ArticleDOI
01 May 2019
TL;DR: A probabilistic model for analyzing the generalized interval valued matrix, a matrix that has scalar valued elements and bounded/unbounded interval valued elements, is proposed and it is proved that the objective function is monotonically decreasing by the parameter update.
Abstract: In this paper, we propose a probabilistic model for analyzing the generalized interval valued matrix, a matrix that has scalar valued elements and bounded/unbounded interval valued elements. We derive a majorization minimization algorithm for parameter estimation and prove that the objective function is monotonically decreasing by the parameter update. An experiment shows that the proposed model well handles interval-valued elements and offers improved performance.
References
Journal ArticleDOI
TL;DR: This article proposes several novel measures that compute the cumulative gain the user obtains by examining the retrieval result up to a given ranked position, and test results indicate that the proposed measures credit IR methods for their ability to retrieve highly relevant documents and allow testing of statistical significance of effectiveness differences.
Abstract: Modern large retrieval environments tend to overwhelm their users by their large output. Since all documents are not of equal relevance to their users, highly relevant documents should be identified and ranked first for presentation. In order to develop IR techniques in this direction, it is necessary to develop evaluation approaches and methods that credit IR methods for their ability to retrieve highly relevant documents. This can be done by extending traditional evaluation methods, that is, recall and precision based on binary relevance judgments, to graded relevance judgments. Alternatively, novel measures based on graded relevance judgments may be developed. This article proposes several novel measures that compute the cumulative gain the user obtains by examining the retrieval result up to a given ranked position. The first one accumulates the relevance scores of retrieved documents along the ranked result list. The second one is similar but applies a discount factor to the relevance scores in order to devaluate late-retrieved documents. The third one computes the relative-to-the-ideal performance of IR techniques, based on the cumulative gain they are able to yield. These novel measures are defined and discussed and their use is demonstrated in a case study using TREC data: sample system run results for 20 queries in TREC-7. As a relevance base we used novel graded relevance judgments on a four-point scale. The test results indicate that the proposed measures credit IR methods for their ability to retrieve highly relevant documents and allow testing of statistical significance of effectiveness differences. The graphs based on the measures also provide insight into the performance IR techniques and allow interpretation, for example, from the user point of view.

4,337 citations

Proceedings Article
03 Dec 2007
TL;DR: The Probabilistic Matrix Factorization (PMF) model is presented, which scales linearly with the number of observations and performs well on the large, sparse, and very imbalanced Netflix dataset and is extended to include an adaptive prior on the model parameters.
Abstract: Many existing approaches to collaborative filtering can neither handle very large datasets nor easily deal with users who have very few ratings. In this paper we present the Probabilistic Matrix Factorization (PMF) model which scales linearly with the number of observations and, more importantly, performs well on the large, sparse, and very imbalanced Netflix dataset. We further extend the PMF model to include an adaptive prior on the model parameters and show how the model capacity can be controlled automatically. Finally, we introduce a constrained version of the PMF model that is based on the assumption that users who have rated similar sets of movies are likely to have similar preferences. The resulting model is able to generalize considerably better for users with very few ratings. When the predictions of multiple PMF models are linearly combined with the predictions of Restricted Boltzmann Machines models, we achieve an error rate of 0.8861, that is nearly 7% better than the score of Netflix's own system.

4,022 citations


"Interval-valued Matrix Factorizatio..." refers methods in this paper

  • ...In this paper, we pay special attention to Nonnegative Matrix Factorization (NMF) [3], [4] and Probabilistic Matrix Factorization (PMF) [5]....


Journal ArticleDOI
TL;DR: From basic techniques to the state-of-the-art, this paper attempts to present a comprehensive survey for CF techniques, which can be served as a roadmap for research and practice in this area.
Abstract: As one of the most successful approaches to building recommender systems, collaborative filtering (CF) uses the known preferences of a group of users to make recommendations or predictions of the unknown preferences for other users. In this paper, we first introduce CF tasks and their main challenges, such as data sparsity, scalability, synonymy, gray sheep, shilling attacks, privacy protection, etc., and their possible solutions. We then present three main categories of CF techniques: memory-based, modelbased, and hybrid CF algorithms (that combine CF with other recommendation techniques), with examples for representative algorithms of each category, and analysis of their predictive performance and their ability to address the challenges. From basic techniques to the state-of-the-art, we attempt to present a comprehensive survey for CF techniques, which can be served as a roadmap for research and practice in this area.

3,406 citations


"Interval-valued Matrix Factorizatio..." refers methods in this paper

  • ...On the other hand, PMF has been successfully applied to Collaborative Filtering (CF) [6]....


Journal ArticleDOI
Thomas Hofmann1
TL;DR: This paper proposes to make use of a temperature controlled version of the Expectation Maximization algorithm for model fitting, which has shown excellent performance in practice, and results in a more principled approach with a solid foundation in statistical inference.
Abstract: This paper presents a novel statistical method for factor analysis of binary and count data which is closely related to a technique known as Latent Semantic Analysis. In contrast to the latter method which stems from linear algebra and performs a Singular Value Decomposition of co-occurrence tables, the proposed technique uses a generative latent class model to perform a probabilistic mixture decomposition. This results in a more principled approach with a solid foundation in statistical inference. More precisely, we propose to make use of a temperature controlled version of the Expectation Maximization algorithm for model fitting, which has shown excellent performance in practice. Probabilistic Latent Semantic Analysis has many applications, most prominently in information retrieval, natural language processing, machine learning from text, and in related areas. The paper presents perplexity results for different types of text and linguistic data collections and discusses an application in automated document indexing. The experiments indicate substantial and consistent improvements of the probabilistic method over standard Latent Semantic Analysis.

2,574 citations

Book
03 Feb 2000
TL;DR: This work focuses on Symbolic Data Analysis and the SODAS Project: Purpose, History, Perspective, and Symbolic Objects, where H.H. Bock and E. Diday focused on the former and the latter dealt with the latter.
Abstract: E. Diday: Symbolic Data Analysis and the SODAS Project: Purpose, History, Perspective.- H.H. Bock: The Classical Data Situation.- H.H. Bock: Symbolic Data.- H.H. Bock, E. Diday: Symbolic Objects.- V. Stephan, G. Hebrail, Y. Lechevallier: Generation of Symbolic Objects from Relational Databases.- P. Bertrand, F. Goupil: Descriptive Statistics for Symbolic Data.- M. Noirhomme-Fraiture, M. Rouard: Visualizing and Editing Symbolic Objects.- Similarity and Dissimilarity: F. Esposito, D. Malerba, V. Tamma, H.H. Bock: Classical Resemblance Measures.- H.H. Bock: Dissimilarity Measures for Probability Distributions.- F. Esposito, D. Malerba, V. Tamma: Dissimilarity Measures for Symbolic Objects.- F. Esposito, D. Malerba, F. Lisi: Matching Symbolic Objects.- Symbolic Factor Analysis: H.H.Bock: Classical Principal Component Analysis.- A. Chouakria, P. Cazes, E. Diday: Symbolic Principal Component Analysis.- N.C. Lauro, F. Palumbo, R. Verde: Factorial Discriminant Analysis on Symbolic Objects.- Discrimination: Assigning Symbolic Objects to Classes: J. Rasson, S. Lissoir: Classical Methods of Discrimination.- J. Rasson, S. Lissoir: Symbolic Kernel Discriminant Analysis.- E. Perinel, Y. Lechevalier: Symbolic Discrimination Rules.- M. Bravo Llatas, J. Garcia-Santesmases: Segmentation Trees for Stratified Data.- Clustering Methods for Symbolic Objects: M. Chavent, H.H. Bock: Clustering Problem, Clustering Methods for Classical Data.- M. Chavent: Criterion-Based Divisive Clustering for Symbolic Data.- P. Brito: Hierarchical and Pyramidal Clustering with Complete Symbolic Objects.- G. Polaillon: Pyramidal Classification for Interval Data Using Galois Lattice Reduction.- M. Gettler-Summa, C. Pardoux: Symbolic Approaches for Three-way Data.-Illustrative Benchmark Analysis: R. Bisdorff: Introduction.- R. Bisdorff: Professional Careers of Retired Working Persons.- A. Iztueta, P. Calvo: Labour Force Survey.- F. Goupil, M. Touati, E. Diday, R. Moult: Census Data from the Office for National Statistics.- A. Morineau: The SODAS Software Package.

605 citations


"Interval-valued Matrix Factorizatio..." refers background in this paper

  • ...Again, we calculate the radius $\delta^{\mathrm{CF}}_{ij}$ for each observed rating degree $X_{ij}$ according to Definition 1 based on the standard deviation of the ratings in $\mathcal{S}^{\mathrm{CF}}_{ij}$: $\delta^{\mathrm{CF}}_{ij} := \alpha \cdot \mathrm{std}(\mathcal{S}^{\mathrm{CF}}_{ij})$ (6) where $\alpha \in \mathbb{R}_+$ is again a multiplicative scale coefficient....


  • ...However, due to the constraint of the rating system, 𝑢 can only rate both 𝑎 and 𝑏 as three stars, and the difference between 𝑎 and 𝑏 disappears....

