Matrix Factorization with Interval-Valued Data
References
Elements of information theory
Indexing by Latent Semantic Analysis
Learning the parts of objects by non-negative matrix factorization
Probabilistic Matrix Factorization
Frequently Asked Questions (12)
Q2. What are the main reasons for the development of interval-valued PCA algorithms?
Several data analysis tools, including regression [23], [24], canonical analysis [25], and multi-dimensional scaling [26], have been developed for symbolic and interval-valued data.
Q3. What is the eigenvalue of the matrix?
Let matrices V∗ and V ∗ capture the eigenvectors, and let Σ∗ and Σ ∗ be diagonal matrices encoding the square roots of the eigenvalues, of the corresponding matrices.
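As a minimal NumPy sketch of this construction (the toy matrix M and all variable names are our own illustration, not the paper's): for a scalar-valued matrix M, the eigenvectors of MᵀM give a factor V, and the square roots of its eigenvalues give the diagonal core Σ, matching the singular values of M.

```python
import numpy as np

# Toy scalar-valued matrix (illustrative choice, not from the paper).
M = np.array([[3.0, 1.0],
              [1.0, 2.0],
              [0.0, 1.0]])

# Eigendecomposition of M^T M: eigenvectors -> V, sqrt(eigenvalues) -> Sigma.
eigvals, V = np.linalg.eigh(M.T @ M)
order = np.argsort(eigvals)[::-1]          # sort by decreasing eigenvalue
eigvals, V = eigvals[order], V[:, order]
Sigma = np.diag(np.sqrt(eigvals))

# The diagonal of Sigma coincides with the singular values of M.
s = np.linalg.svd(M, compute_uv=False)
print(np.allclose(np.diag(Sigma), s))      # True
```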
Q4. What are the main reasons for the non-scalar data?
In many applications, the available data are inherently non-scalar for various reasons, including imprecision in data collection, conflicts in aggregated data, data summarization, or privacy concerns, where one is provided with a reduced, clustered, or intentionally noisy and obfuscated version of the data to hide information.
Q5. What is the key challenge in performing decompositions over intervals?
The key challenge in performing decompositions over interval-valued matrices is that definitions of basic algebraic operations, such as multiplication and inversion (needed to implement factorization operations), are not as straightforward for intervals as they are for scalars (see Section 2.1).
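A small Python sketch illustrates why interval operations are not straightforward (the helper `interval_mul` is our own illustration, not from the paper): interval multiplication must bound all products of points from each operand, and multiplying an interval by its elementwise reciprocal yields an interval *containing* 1 rather than the scalar 1, so exact inversion is lost.

```python
from itertools import product

def interval_mul(a, b):
    """Multiply intervals a=(lo,hi), b=(lo,hi): the result must bound
    every product x*y with x in a and y in b."""
    candidates = [x * y for x, y in product(a, b)]
    return (min(candidates), max(candidates))

# Unlike scalars, intervals lack exact multiplicative inverses:
x = (2.0, 4.0)
inv_x = (1.0 / x[1], 1.0 / x[0])       # elementwise reciprocal interval
print(interval_mul(x, inv_x))          # (0.5, 2.0) -- contains 1, but is not 1
```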
Q6. What is the corollary of the above theorem?
Corollary 2. Corollary 1 further implies that U†(U†)T and V†(V†)T cannot be equal to the scalar-valued identity matrix, I, which means that an exact decomposition of interval-valued matrices is not possible.
Q7. What is the simplest way to find the local minimum of loss function LPMF?
A local minimum of the loss function LPMF can be found via gradient descent in U[i,:] and V[j,:]:

∂LPMF/∂U[i,:] = ∑_{j=1..m} (U[i,:]V[j,:]T − M[i,j]) V[j,:] + λ U[i,:]
∂LPMF/∂V[j,:] = ∑_{i=1..n} (U[i,:]V[j,:]T − M[i,j]) U[i,:] + λ V[j,:]
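These gradient steps can be sketched in a few lines of NumPy (the toy matrix, rank, learning rate, and iteration count are our own illustrative choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.random((6, 5))                 # toy scalar-valued data matrix
k, lam, lr = 2, 0.01, 0.05            # rank, regularizer, learning rate
U = rng.normal(scale=0.1, size=(6, k))
V = rng.normal(scale=0.1, size=(5, k))

init_err = np.linalg.norm(U @ V.T - M)
for _ in range(2000):
    E = U @ V.T - M                    # residuals U[i,:]V[j,:]^T - M[i,j]
    U, V = U - lr * (E @ V + lam * U), V - lr * (E.T @ U + lam * V)

# Gradient descent drives the reconstruction error well below its start.
print(np.linalg.norm(U @ V.T - M) < init_err)   # True
```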
Q8. What is the result of the recomputation step?
The result is visualized in Figure 5(b): as seen there, after the recomputation step, the V∗ and V ∗ matrices become much more similar, indicating more precise factor matrices, an improvement which (as the authors detail in Section 6) contributes to more accurate decompositions.
Q9. What is the difference between the two columns in the factor matrices?
The columns of the factor matrices are referred to as latent semantics (LS) and, preferably, they are mutually orthogonal so that they can serve as a basis of the transformed space.
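This orthogonality can be checked numerically; in a short NumPy sketch (our own illustration, not from the paper), the columns of the SVD factor V are orthonormal, so VᵀV is the identity:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.random((8, 4))                      # toy scalar-valued matrix
U, s, Vt = np.linalg.svd(M, full_matrices=False)
V = Vt.T                                    # columns = latent semantics

# Mutually orthogonal unit-length columns: V^T V equals the identity.
print(np.allclose(V.T @ V, np.eye(4)))      # True
```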
Q10. What is the first approach to solve the decomposition problem?
This approach results in U and V matrices that are scalar-valued and orthonormal, and a Σ core matrix that is also scalar-valued (i.e., it is compatible only with the decomposition target-c, discussed in Section 3.4).
Q11. What is the reason for the poor performance of the ISVD0 technique?
Figure 6 provides an overview of the accuracy and execution time results for the default configuration:
• The authors obtain the highest accuracies using the ISVD#-b class of techniques (returning both scalar-valued factors and an interval-valued core); the highest overall accuracy is provided by ISVD4-b, which leverages both semantic alignment and latent-space recomputation techniques.
• The ISVD#-c class of techniques (returning scalar-valued factor and core matrices) approximate the accuracy of the ISVD0 technique; however, these include redundant work.
• Linear-programming-based competitors [33], [35] have poor accuracies and massive execution times; the reason for this poor performance is that, as also acknowledged by their authors, these approaches are effective only when the interval ranges are very small, while the proposed approaches are able to handle intervals of varying sizes effectively.
Q12. What is the simplest example of a latent space?
Given this observation, the authors can now present an interval latent semantic alignment problem, which would optimally combine minimum and maximum vectors to form an interval-valued latent space.