Book Chapter

Low rank approximation of multidimensional data

TL;DR: This chapter proposes to shift the paradigm in order to break the curse of dimensionality by introducing decompositions to reduce data, and intends to bridge between the applied mathematics community and the computational mechanics one.
Abstract
In the last decades, numerical simulation has experienced tremendous improvement driven by massive growth of computing power. Exascale computing has been achieved this year and will allow solving ever more complex problems. But such large systems produce colossal amounts of data, which leads to difficulties of its own. Moreover, many engineering problems, such as multiphysics or optimisation and control, require far more power than any computer architecture could achieve within the current scientific computing paradigm. In this chapter, we propose to shift the paradigm in order to break the curse of dimensionality by introducing decompositions to reduce data. We present an extended review of data reduction techniques and intend to bridge between the applied mathematics community and the computational mechanics one. The chapter is organized into two parts. In the first one, bivariate separation is studied, including discussions on the equivalence of proper orthogonal decomposition (POD, continuous framework) and singular value decomposition (SVD, discrete matrices). Then, in the second part, a wide review of tensor formats and their approximation is proposed. Such work has already been provided in the literature, but either in separate papers or within a purely applied mathematics framework. Here, we offer the data-enthusiast scientist a description of the Canonical, Tucker, Hierarchical and Tensor-train formats, including their approximation algorithms. Where possible, a careful analysis of the link between continuous and discrete methods is performed.
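
To make the two building blocks mentioned in the abstract concrete (bivariate separation through a truncated SVD, the discrete counterpart of POD, and the tensor-train format computed by the TT-SVD algorithm), the following NumPy sketch may help. It is an illustration, not code from the chapter; the function names, the tolerance-based rank selection and the synthetic test shapes are assumptions made for this example.

```python
# Illustrative sketch (assumed names, eps rule and test shapes; not from the chapter).
import numpy as np

def truncated_svd(A, eps=1e-8):
    """Rank-r truncation A ~ U @ diag(s) @ Vt, keeping the smallest rank r whose
    discarded singular values contribute less than eps in relative Frobenius norm.
    This is the discrete counterpart of a truncated POD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    tail = np.cumsum(s[::-1] ** 2)[::-1]        # tail[k] = sum_{i>=k} s_i^2
    thresh = (eps * np.linalg.norm(s)) ** 2     # ||A||_F equals ||s||_2
    r = max(1, int(np.sum(tail > thresh)))
    return U[:, :r], s[:r], Vt[:r, :]

def tt_svd(T, eps=1e-8):
    """Minimal TT-SVD sketch: returns 3-way cores G_k of shape (r_{k-1}, n_k, r_k)
    whose contraction approximates the d-way tensor T."""
    dims = T.shape
    cores, r_prev = [], 1
    C = T.reshape(r_prev * dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = truncated_svd(C, eps)        # same eps-driven truncation rule
        r = s.size
        cores.append(U.reshape(r_prev, dims[k], r))
        C = (s[:, None] * Vt).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(C.reshape(r_prev, dims[-1], 1))
    return cores

# Example on synthetic data (shapes and ranks chosen arbitrarily for the illustration).
A = np.random.rand(200, 40) @ np.random.rand(40, 300)   # matrix of rank at most 40
U, s, Vt = truncated_svd(A, eps=1e-10)
print("kept matrix rank:", s.size)

# A CP-rank-3 tensor, whose TT ranks are therefore at most 3.
T = np.einsum('ir,jr,kr->ijk',
              np.random.rand(20, 3), np.random.rand(30, 3), np.random.rand(40, 3))
cores = tt_svd(T, eps=1e-10)
print("TT ranks:", [c.shape[2] for c in cores[:-1]])
```

The same tolerance-driven truncation is reused inside the TT sweep, so a single eps controls all TT ranks; in this sketch that choice is an assumption, not a prescription from the chapter.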


Citations
Journal Article

Pass-efficient methods for compression of high-dimensional turbulent flow data

TL;DR: This work focuses on the utility of pass-efficient, parallelizable, low-rank matrix decomposition methods for compressing high-dimensional simulation data from turbulent flows, and presents a novel single-pass matrix decomposition algorithm for computing the so-called interpolative decomposition.
Posted Content

Pass-efficient methods for compression of high-dimensional turbulent flow data

TL;DR: In this article, the authors focus on the utility of pass-efficient, parallelizable, low-rank, matrix decomposition methods in compressing high-dimensional simulation data from turbulent flows.
Posted Content

Task-parallel in-situ temporal compression of large-scale computational fluid dynamics data.

TL;DR: In this paper, a single-pass matrix column interpolative decomposition (SPID) algorithm is proposed for CFD applications, based on the task-based Legion programming model.
Journal Article

Task-parallel in situ temporal compression of large-scale computational fluid dynamics data

TL;DR: In this article, a single-pass matrix column interpolative decomposition (SPID) algorithm is proposed for CFD applications, though the performance of the algorithm is not evaluated.
Journal Article

Numerical Study of Low Rank Approximation Methods for Mechanics Data and Its Analysis

TL;DR: A comparison of the numerical aspects and efficiency of several low-rank approximation techniques for multidimensional data, namely CPD, HOSVD, TT-SVD, RPOD, QTT, SVD and HT, is presented in this article.
References
Journal Article

LIII. On lines and planes of closest fit to systems of points in space

TL;DR: This paper is concerned with the construction of the lines and planes of closest fit to systems of points in space, obtained by minimizing the sum of squared perpendicular distances from the points to the fitted line or plane.
Journal Article

Tensor Decompositions and Applications

TL;DR: This survey provides an overview of higher-order tensor decompositions, their applications, and available software.
Journal Article

Analysis of individual differences in multidimensional scaling via an N-way generalization of 'Eckart-Young' decomposition

TL;DR: In this paper, an individual differences model for multidimensional scaling is outlined in which individuals are assumed differentially to weight the several dimensions of a common "psychological space" and a corresponding method of analyzing similarities data is proposed, involving a generalization of Eckart-Young analysis to decomposition of three-way (or higher-way) tables.