Open Access · Posted Content

Tensor Ring Decomposition

TLDR
A fundamental tensor decomposition model is introduced that represents a large-dimensional tensor by circular multilinear products over a sequence of low-dimensional cores; it can be graphically interpreted as a cyclic interconnection of 3rd-order tensors and is thus termed tensor ring (TR) decomposition.
Abstract
Tensor networks have in recent years emerged as powerful tools for solving large-scale optimization problems. One of the most popular tensor networks is the tensor train (TT) decomposition, which acts as a building block for more complicated tensor networks. However, the TT decomposition depends strongly on the permutation of tensor dimensions, owing to its strictly sequential multilinear products over latent cores, which makes it difficult to find the optimal TT representation. In this paper, we introduce a fundamental tensor decomposition model that represents a large-dimensional tensor by circular multilinear products over a sequence of low-dimensional cores; this model can be graphically interpreted as a cyclic interconnection of 3rd-order tensors and is thus termed tensor ring (TR) decomposition. The key advantage of the TR model is its invariance under circular dimensional permutations, which is gained by employing the trace operation and treating all latent cores equivalently. The TR model can be viewed as a linear combination of TT decompositions and therefore has powerful and more general representation ability. For optimization of the latent cores, we present four different algorithms based on sequential SVDs, the ALS scheme, and block-wise ALS techniques. Furthermore, we investigate the mathematical properties of the TR model, showing that basic multilinear algebra can be performed efficiently using TR representations and that the classical tensor decompositions can be conveniently transformed into TR representations. Finally, experiments on both synthetic signals and real-world datasets were conducted to evaluate the performance of the different algorithms.
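To make the circular structure concrete: element-wise, the TR model represents T(i_1, ..., i_d) = Tr( G_1(i_1) G_2(i_2) ... G_d(i_d) ), where G_k(i_k) is the r_k × r_{k+1} lateral matrix slice of the k-th core and r_{d+1} = r_1 closes the ring. Below is a minimal NumPy sketch of this evaluation; it is an illustration under assumed core shapes and function names, not the authors' reference implementation.

```python
import numpy as np

def tr_element(cores, index):
    """Evaluate one element of a tensor from its TR cores.

    cores : list of d arrays; cores[k] has shape (r_k, n_k, r_{k+1}),
            with r_{d+1} == r_1 so that the ring closes.
    index : tuple (i_1, ..., i_d) of mode indices.
    """
    # Multiply the lateral slices G_k(i_k) around the ring...
    prod = cores[0][:, index[0], :]
    for core, i in zip(cores[1:], index[1:]):
        prod = prod @ core[:, i, :]
    # ...then close the ring with the trace operation.
    return np.trace(prod)

# Toy usage: a 4th-order tensor with mode size 5 and all TR-ranks equal to 3.
d, n, r = 4, 5, 3
cores = [np.random.randn(r, n, r) for _ in range(d)]
print(tr_element(cores, (0, 1, 2, 3)))
```

Note that fixing r_1 = r_{d+1} = 1 collapses the trace to a scalar product and recovers the TT format, which is one way to see that TR generalizes TT.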


Citations
Journal Article

Model Compression and Hardware Acceleration for Neural Networks: A Comprehensive Survey

TL;DR: This article reviews mainstream compression approaches such as compact models, tensor decomposition, data quantization, and network sparsification; it answers the question of how to leverage these methods in the design of neural network accelerators and presents state-of-the-art hardware architectures.
Journal Article

Low-Rank Tensor Networks for Dimensionality Reduction and Large-Scale Optimization Problems: Perspectives and Challenges PART 1.

TL;DR: In this paper, the authors provide mathematical and graphical representations and interpretation of tensor networks, with the main focus on the Tucker and Tensor Train (TT) decompositions and their extensions or generalizations.
Journal Article

Deep neural network concepts for background subtraction: A systematic review and comparative evaluation

TL;DR: In this article, the authors provide a review of deep neural network concepts in background subtraction for novices and experts, in order to analyze the success of these methods and to provide further directions.
Book

Tensor Networks for Dimensionality Reduction and Large-scale Optimization: Part 1 Low-Rank Tensor Decompositions

TL;DR: In this book, the authors provide innovative solutions for low-rank tensor network decompositions and easy-to-interpret graphical representations of the mathematical operations on tensor networks, and demonstrate the ability of tensor networks to provide linearly or even super-linearly (e.g., logarithmically) scalable solutions, as illustrated in detail in Part 2.
Proceedings Article

Efficient Low Rank Tensor Ring Completion

TL;DR: A numerical comparison between the proposed TR completion algorithm and existing algorithms that employ a low-rank tensor train (TT) approximation for data completion shows that the method outperforms the existing ones in a variety of real computer-vision settings, thus demonstrating the improved expressive power of the tensor ring format compared to the tensor train.
References
Journal Article

Tensor Decompositions and Applications

TL;DR: This survey provides an overview of higher-order tensor decompositions, their applications, and available software.
Journal Article

Some mathematical notes on three-mode factor analysis

TL;DR: The model for three-mode factor analysis is discussed in terms of newer applications of mathematical processes including a type of matrix process termed the Kronecker product and the definition of combination variables.
Journal Article

PARAFAC. Tutorial and applications

TL;DR: The multi-way decomposition method PARAFAC is a generalization of PCA to higher order arrays, but some of the characteristics of the method are quite different from the ordinary two-way case.
Journal Article

Tensor-Train Decomposition

TL;DR: The new form gives a clear and convenient way to implement all basic operations efficiently, and the efficiency is demonstrated by the computation of the smallest eigenvalue of a 19-dimensional operator.

Columbia Object Image Library (COIL100)

S. Nayar
TL;DR: Columbia Object Image Library (COIL) is a database of color images of objects that were placed on a motorized turntable against a black background and rotated through 360 degrees to vary object pose with respect to a fixed color camera.