
Showing papers on "Tucker decomposition published in 2019"


Proceedings ArticleDOI
01 Nov 2019
TL;DR: This paper proposed TuckER, a linear model based on Tucker decomposition of the binary tensor representation of knowledge graph triples, which outperforms previous state-of-the-art models across standard link prediction datasets.
Abstract: Knowledge graphs are structured representations of real world facts. However, they typically contain only a small subset of all possible facts. Link prediction is a task of inferring missing facts based on existing ones. We propose TuckER, a relatively straightforward but powerful linear model based on Tucker decomposition of the binary tensor representation of knowledge graph triples. TuckER outperforms previous state-of-the-art models across standard link prediction datasets, acting as a strong baseline for more elaborate models. We show that TuckER is a fully expressive model, derive sufficient bounds on its embedding dimensionalities and demonstrate that several previously introduced linear models can be viewed as special cases of TuckER.

506 citations
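A minimal numpy sketch of the TuckER scoring function described above; the embedding sizes and random values are purely illustrative, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
d_e, d_r = 4, 3                            # toy entity / relation embedding sizes
W = rng.standard_normal((d_e, d_r, d_e))   # core tensor shared across all triples
e_s = rng.standard_normal(d_e)             # subject entity embedding
w_r = rng.standard_normal(d_r)             # relation embedding
e_o = rng.standard_normal(d_e)             # object entity embedding

# TuckER scores a triple (s, r, o) by contracting the core tensor with the
# three embeddings: phi(s, r, o) = W x_1 e_s x_2 w_r x_3 e_o
score = np.einsum('ijk,i,j,k->', W, e_s, w_r, e_o)
```

In the actual model a logistic sigmoid of this score gives the predicted probability that the triple holds, and W and all embeddings are learned jointly.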


Proceedings ArticleDOI
TL;DR: The authors proposed TuckER, a linear model based on Tucker decomposition of the binary tensor representation of knowledge graph triples, which outperforms previous state-of-the-art models across standard link prediction datasets.
Abstract: Knowledge graphs are structured representations of real world facts. However, they typically contain only a small subset of all possible facts. Link prediction is a task of inferring missing facts based on existing ones. We propose TuckER, a relatively straightforward but powerful linear model based on Tucker decomposition of the binary tensor representation of knowledge graph triples. TuckER outperforms previous state-of-the-art models across standard link prediction datasets, acting as a strong baseline for more elaborate models. We show that TuckER is a fully expressive model, derive sufficient bounds on its embedding dimensionalities and demonstrate that several previously introduced linear models can be viewed as special cases of TuckER.

160 citations


Journal ArticleDOI
TL;DR: A stable third-order tensor is first constructed from the normalized image to enhance the robustness of the TD hashing, in which image hash generation is viewed as deriving a compact representation from a tensor.
Abstract: This paper presents a new image hashing designed with tensor decomposition (TD), referred to as TD hashing, in which image hash generation is viewed as deriving a compact representation from a tensor. Specifically, a stable third-order tensor is first constructed from the normalized image to enhance the robustness of the TD hashing. A popular TD algorithm, Tucker decomposition, is then exploited to decompose the third-order tensor into a core tensor and three orthogonal factor matrices. As the factor matrices reflect the intrinsic structure of the original tensor, constructing the hash from them gives the TD hashing desirable discrimination. To examine these claims, 14,551 images are used in our experiments. Receiver operating characteristic (ROC) graphs are used for theoretical analysis, and the ROC comparisons illustrate that the TD hashing outperforms some state-of-the-art algorithms in the classification trade-off between robustness and discrimination.

98 citations
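The Tucker decomposition used by the TD hashing above can be computed with the classic truncated higher-order SVD. A self-contained numpy sketch, not the authors' implementation (the hash itself would then be derived from the factor matrices):

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_dot(T, M, mode):
    """Mode-n product T x_n M, where M has shape (new_dim, T.shape[mode])."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd(T, ranks):
    """Truncated higher-order SVD: leading left singular vectors per unfolding."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for m, U in enumerate(factors):
        core = mode_dot(core, U.T, m)   # project onto each factor subspace
    return core, factors

rng = np.random.default_rng(1)
T = rng.standard_normal((5, 6, 7))
core, factors = hosvd(T, (5, 6, 7))     # full ranks -> exact reconstruction
recon = core
for m, U in enumerate(factors):
    recon = mode_dot(recon, U, m)
```

With ranks smaller than the mode sizes, `factors` gives the truncated orthogonal factor matrices that a scheme like the TD hashing would compress into a hash.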


Journal ArticleDOI
TL;DR: The main purpose of this paper is to design adaptive randomized algorithms for computing approximate tensor decompositions; the authors give an adaptive randomized algorithm for computing a low multilinear rank approximation of a tensor with unknown multilinear rank, and another for computing tensor train approximations.
Abstract: Randomized algorithms provide a powerful tool for scientific computing. Compared with standard deterministic algorithms, randomized algorithms are often faster and more robust. The main purpose of this paper is to design adaptive randomized algorithms for computing approximate tensor decompositions. We give an adaptive randomized algorithm for computing a low multilinear rank approximation of a tensor with unknown multilinear rank and analyze its probabilistic error bound under certain assumptions. Finally, we design an adaptive randomized algorithm for computing tensor train approximations. Based on bounds on the singular values of sub-Gaussian matrices with independent columns or independent rows, we analyze these randomized algorithms. We illustrate our adaptive randomized algorithms via several numerical examples.

95 citations
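A non-adaptive sketch of the kind of randomized low multilinear rank approximation this paper builds on: one basic randomized range finder per mode, with fixed target ranks plus oversampling (the paper's adaptive rank estimation and error analysis are omitted).

```python
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_dot(T, M, mode):
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def randomized_mlr_approx(T, ranks, oversample=5, seed=0):
    """Fixed-rank randomized low multilinear rank approximation:
    a randomized range finder applied to each mode-n unfolding."""
    rng = np.random.default_rng(seed)
    Qs = []
    for m, r in enumerate(ranks):
        A = unfold(T, m)                                      # mode-m unfolding
        Y = A @ rng.standard_normal((A.shape[1], r + oversample))
        Q, _ = np.linalg.qr(Y)                                # orthonormal range basis
        Qs.append(Q)
    core = T
    for m, Q in enumerate(Qs):
        core = mode_dot(core, Q.T, m)                         # compress each mode
    return core, Qs

# Test tensor with exact multilinear rank (2, 3, 2): random core x random factors
rng = np.random.default_rng(2)
G = rng.standard_normal((2, 3, 2))
T = G
for m, n in enumerate((8, 9, 10)):
    T = mode_dot(T, rng.standard_normal((n, G.shape[m])), m)

core, Qs = randomized_mlr_approx(T, (2, 3, 2))
approx = core
for m, Q in enumerate(Qs):
    approx = mode_dot(approx, Q, m)                           # expand back
```

Because the test tensor has exact multilinear rank (2, 3, 2), the randomized subspaces capture its ranges and the approximation is exact up to floating-point error.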


Journal ArticleDOI
TL;DR: A tensor-based representation learning method for multi-view clustering (tRLMvC) is introduced that can unify heterogeneous, high-dimensional multi-view feature spaces into a low-dimensional shared latent feature space and improve multi-view clustering performance.
Abstract: With the development of data collection techniques, multi-view clustering has become an emerging research direction for improving clustering performance. Leveraging multi-view information provides a rich and comprehensive description of the data, and one of the core problems is how to sufficiently represent multi-view data in the analysis. In this paper, we introduce a tensor-based representation learning method for multi-view clustering (tRLMvC) that unifies heterogeneous, high-dimensional multi-view feature spaces into a low-dimensional shared latent feature space and improves multi-view clustering performance. To fully capture the rich multi-view information, tRLMvC represents multi-view data as a third-order tensor, expresses each tensorial data point as a sparse t-linear combination of all data points under the t-product, and constructs a self-expressive tensor from the reconstruction coefficients. The low-dimensional multi-view data representation in the shared latent feature space is obtained via Tucker decomposition of the self-expressive tensor. These two parts are performed iteratively, so that the interaction between self-expressive tensor learning and its factorization is enhanced and an effective new representation is generated for clustering. We conduct extensive experiments on eight multi-view data sets and compare the proposed model with state-of-the-art methods. Experimental results show that tRLMvC outperforms the baselines in terms of various evaluation metrics.

57 citations
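The t-product underlying the tRLMvC representation can be computed via FFTs along the third (tube) axis, which turns the block-circulant product into independent frontal-slice matrix multiplications. A minimal numpy sketch, independent of the paper's code:

```python
import numpy as np

def t_product(A, B):
    """t-product of third-order tensors: FFT along the tube (3rd) axis turns
    the block-circulant product into per-slice matrix products."""
    n3 = A.shape[2]
    Af = np.fft.fft(A, axis=2)
    Bf = np.fft.fft(B, axis=2)
    Cf = np.stack([Af[:, :, k] @ Bf[:, :, k] for k in range(n3)], axis=2)
    return np.real(np.fft.ifft(Cf, axis=2))

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 3, 5))   # n1 x n2 x n3
B = rng.standard_normal((3, 2, 5))   # n2 x l  x n3
C = t_product(A, B)                  # n1 x l  x n3
```

Equivalently, each frontal slice of C is the circular convolution (over the third axis) of the slice-wise matrix products of A and B.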


Proceedings ArticleDOI
01 Sep 2019
TL;DR: Tensor theory is adopted to explore the 4D structural characteristics of the light field, and the first Blind quality Evaluator of LIght Field image (BELIF) is proposed; experiments demonstrate that BELIF outperforms existing image quality assessment algorithms.
Abstract: With the development of immersive media, Light Field Image (LFI) quality assessment is becoming more and more important; it helps to better guide light field acquisition, processing, and application. However, almost all existing LFI quality assessment schemes utilize 2D or 3D quality assessment methods while ignoring the intrinsic high-dimensional characteristics of LFIs. Therefore, we adopt tensor theory to explore the 4D structure of the light field and propose the first Blind quality Evaluator of LIght Field image (BELIF). We generate a cyclopean image tensor from the original LFI, and features are then extracted via Tucker decomposition. Specifically, Tensor Spatial Characteristic Features (TSCF) for spatial quality and a Tensor Structure Variation Index (TSVI) for angular consistency are designed to fully assess LFI quality. Extensive experimental results on public LFI databases demonstrate that BELIF significantly outperforms existing image quality assessment algorithms.

33 citations


Journal ArticleDOI
TL;DR: This work proposes a low-complexity compression approach for multispectral images based on convolutional neural networks (CNNs) with NTD, obtaining the optimized small-scale spectral tensor by minimizing the difference between the original and reconstructed three-dimensional spectral tensors in self-learning CNNs.
Abstract: A multispectral image is a third-order tensor, i.e., a three-dimensional array with one spectral dimension and two spatial dimensions. Multispectral image compression can exploit the advantages of tensor decomposition (TD), such as Nonnegative Tucker Decomposition (NTD). Unfortunately, TD suffers from high computational complexity and cannot be used in on-board, low-complexity settings (e.g., multispectral cameras) where hardware resources and power are limited. Here, we propose a low-complexity compression approach for multispectral images based on convolutional neural networks (CNNs) with NTD. We construct a new spectral transform using CNNs, which transforms the three-dimensional spectral tensor from a large-scale to a small-scale version. The NTD resources are then allocated only to the small-scale three-dimensional tensor to improve computational efficiency. We obtain the optimized small-scale spectral tensor by minimizing the difference between the original and reconstructed three-dimensional spectral tensors in self-learning CNNs. Then, NTD is applied to the optimized three-dimensional spectral tensor in the DCT domain to obtain high compression performance. We experimentally confirmed the proposed method on multispectral images. Compared to applying NTD to the original three-dimensional spectral tensor at the same compression bit rates, the reconstructed image quality is improved. Compared with the full NTD-based method, computational efficiency is markedly improved with only a small sacrifice in PSNR that does not noticeably affect image quality.

32 citations


Journal ArticleDOI
TL;DR: This work optimizes tensor-times-dense-matrix multiply (TTM) for general sparse and semi-sparse tensors on CPU and NVIDIA GPU platforms, and designs an in-place sequential SpTTM that avoids the explicit data reorganization between a tensor and a matrix required by the conventional approach.

31 citations
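For reference, a naive COO-based sparse TTM (tensor-times-dense-matrix along one mode); this illustrates the operation being optimized, not the paper's in-place high-performance scheme.

```python
import numpy as np

def sparse_ttm(indices, values, shape, U, mode):
    """Tensor-times-dense-matrix along `mode` for a COO sparse tensor.
    The result is dense along `mode` (semi-sparse); stored densely here
    for simplicity."""
    out_shape = list(shape)
    out_shape[mode] = U.shape[0]
    Y = np.zeros(out_shape)
    for idx, v in zip(indices, values):
        out_idx = list(idx)
        for r in range(U.shape[0]):
            out_idx[mode] = r
            Y[tuple(out_idx)] += U[r, idx[mode]] * v
    return Y

shape = (3, 4, 2)
indices = [(0, 1, 0), (2, 3, 1), (1, 0, 1)]   # nonzero coordinates
values = [1.0, -2.0, 0.5]
U = np.arange(8.0).reshape(2, 4)              # maps mode-1 (size 4) to size 2
Y = sparse_ttm(indices, values, shape, U, mode=1)
```

Each nonzero contributes to a whole fiber of the output along `mode`, which is why the result is semi-sparse: dense in that mode, sparse elsewhere.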


Journal ArticleDOI
TL;DR: Experimental results show that the proposed method outperforms other existing color image watermarking methods and can resist attacks such as JPEG compression, salt & pepper noise, median filtering, scaling, blurring, and low-pass filtering.
Abstract: To protect the copyright of color images, a novel robust color image watermarking method using the correlations of RGB channels is presented. The three RGB channels of a color image have strong correlations that are stable under various image attacks, and these correlations can be mined to embed the watermark robustly. To preserve the RGB correlations and chrominance perception, the color image is treated as a third-order tensor, and Tucker decomposition is employed to operate on it. First, Tucker decomposition is used to generate the first feature image, which contains most of the image energy and the correlations between the three channels. Then, the first feature image is divided into non-overlapping blocks, and the singular value decomposition (SVD) is used to decompose each block and compute its left-singular matrix. Finally, the stable coefficient relationships of the left-singular matrix are modified to embed the watermark for robustness. Experimental results show that the proposed method outperforms other existing color image watermarking methods and can resist attacks such as JPEG compression, salt & pepper noise, median filtering, scaling, blurring, and low-pass filtering.

28 citations


Journal ArticleDOI
TL;DR: This paper uses teacher-student learning to transfer the knowledge of a large teacher model to a compact student model, followed by Tucker decomposition to further compress the student model, reducing model size and runtime latency for a CNN-DBLSTM-based character model for OCR.

28 citations


Journal ArticleDOI
TL;DR: Experimental results on four real HSIs demonstrate that the proposed method achieves better performance than state-of-the-art subspace learning methods when only a limited number of labeled source samples is available.
Abstract: This paper presents a tensor alignment (TA) based domain adaptation (DA) method for hyperspectral image (HSI) classification. To be specific, HSIs in both domains are first segmented into superpixels, and tensors of both domains are constructed to include neighboring samples from a single superpixel. Then subspace alignment (SA) between the two domains is achieved through alignment matrices, and the original tensors are projected as core tensors with lower dimensions into an invariant tensor subspace by applying projection matrices. To preserve the geometric information of the original tensors, we impose a manifold regularization term on the core tensors in the optimization process. The alignment matrices, projection matrices, and core tensors are solved in the framework of Tucker decomposition with an alternating optimization strategy. In addition, a postprocessing strategy based on pure-sample extraction for each superpixel is defined to further improve classification performance. Experimental results on four real HSIs demonstrate that the proposed method achieves better performance than state-of-the-art subspace learning methods when only a limited number of labeled source samples is available.

Posted Content
TL;DR: A new algorithm for computing a low-Tucker-rank approximation of a tensor that applies a randomized linear map to the tensor to obtain a sketch that captures the important directions within each mode, as well as the interactions among the modes.
Abstract: This paper describes a new algorithm for computing a low-Tucker-rank approximation of a tensor. The method applies a randomized linear map to the tensor to obtain a sketch that captures the important directions within each mode, as well as the interactions among the modes. The sketch can be extracted from streaming or distributed data or with a single pass over the tensor, and it uses storage proportional to the degrees of freedom in the output Tucker approximation. The algorithm does not require a second pass over the tensor, although it can exploit another view to compute a superior approximation. The paper provides a rigorous theoretical guarantee on the approximation error. Extensive numerical experiments show that the algorithm produces useful results that improve on the state of the art for streaming Tucker decomposition.

Journal ArticleDOI
TL;DR: A novel tensor dimensionality reduction approach for 2D+3D facial expression recognition via low-rank tensor completion (FERLrTC) is proposed, which solves for the factor matrices in a majorization–minimization manner using a rank reduction strategy.

Journal ArticleDOI
TL;DR: A novel network compression method called Adaptive Dimension Adjustment Tucker decomposition (ADA-Tucker), with learnable core tensors and transformation matrices, based on the observation that weight tensors with proper order and balanced dimensions are easier to compress.

Posted Content
14 Mar 2019
TL;DR: The Tucker Tensor Layer (TTL) is introduced as an alternative to the dense weight-matrices of the fully connected layers of feed-forward neural networks (NNs), answering the long-standing quest to compress NNs and improve their interpretability.
Abstract: We introduce the Tucker Tensor Layer (TTL), an alternative to the dense weight-matrices of the fully connected layers of feed-forward neural networks (NNs), to answer the long-standing quest to compress NNs and improve their interpretability. This is achieved by treating these weight-matrices as the unfolding of a higher order weight-tensor. This enables us to introduce a framework for exploiting the multi-way nature of the weight-tensor in order to efficiently reduce the number of parameters, by virtue of the compression properties of tensor decompositions. The Tucker Decomposition (TKD) is employed to decompose the weight-tensor into a core tensor and factor matrices. We re-derive back-propagation within this framework, by extending the notion of matrix derivatives to tensors. In this way, the physical interpretability of the TKD is exploited to gain insights into training, through the process of computing gradients with respect to each factor matrix. The proposed framework is validated on synthetic data and on the Fashion-MNIST dataset, emphasizing the relative importance of various data features in training, hence mitigating the "black-box" issue inherent to NNs. Experiments on both MNIST and Fashion-MNIST illustrate the compression properties of the TTL, achieving a 66.63-fold compression whilst maintaining comparable performance to the uncompressed NN.
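The core idea of the TTL, replacing a dense weight matrix with the unfolding of a Tucker-factored weight tensor, can be sketched in a few lines of numpy; the layer shapes and ranks below are toy values, not those of the paper.

```python
import numpy as np

def mode_dot(T, M, mode):
    """Mode-n product T x_n M, where M has shape (new_dim, T.shape[mode])."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

rng = np.random.default_rng(4)
(m1, m2), (n1, n2) = (4, 5), (6, 7)   # a 20 x 42 dense layer, viewed as a 4D tensor
ranks = (2, 2, 3, 3)
G = rng.standard_normal(ranks)        # core tensor (trainable)
Us = [rng.standard_normal((d, r))     # factor matrices (trainable)
      for d, r in zip((m1, m2, n1, n2), ranks)]

# Reconstruct the 4th-order weight tensor, then unfold it into the dense W
W4 = G
for m, U in enumerate(Us):
    W4 = mode_dot(W4, U, m)
W = W4.reshape(m1 * m2, n1 * n2)

x = rng.standard_normal(n1 * n2)
y = W @ x                             # forward pass is unchanged

dense_params = (m1 * m2) * (n1 * n2)              # 840 for the dense layer
ttl_params = G.size + sum(U.size for U in Us)     # 93 in Tucker format
```

Only the core tensor and factor matrices are stored and trained, which is where the parameter reduction comes from; the forward pass is mathematically the same matrix-vector product.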

Posted Content
TL;DR: This work introduces a novel and efficient framework for exploiting the multi-way nature of the weight-tensor in order to dramatically reduce the number of DNN parameters, and derives the tensor valued back-propagation algorithm within the TTL framework, by extending the notion of matrix derivatives to tensors.
Abstract: This work aims to help resolve the two main stumbling blocks in the application of Deep Neural Networks (DNNs), that is, the exceedingly large number of trainable parameters and their physical interpretability. This is achieved through a tensor valued approach, based on the proposed Tucker Tensor Layer (TTL), as an alternative to the dense weight-matrices of DNNs. This allows us to treat the weight-matrices of general DNNs as a matrix unfolding of a higher order weight-tensor. By virtue of the compression properties of tensor decompositions, this enables us to introduce a novel and efficient framework for exploiting the multi-way nature of the weight-tensor in order to dramatically reduce the number of DNN parameters. We also derive the tensor valued back-propagation algorithm within the TTL framework, by extending the notion of matrix derivatives to tensors. In this way, the physical interpretability of the Tucker decomposition is exploited to gain physical insights into NN training, through the process of computing gradients with respect to each factor matrix. The proposed framework is validated on both synthetic data and the benchmark datasets MNIST, Fashion-MNIST, and CIFAR-10. Overall, through its ability to provide the relative importance of each data feature in training, the TTL back-propagation is shown to help mitigate the "black-box" nature inherent to NNs. Experiments also illustrate that the TTL achieves a 66.63-fold compression on MNIST and Fashion-MNIST, while, by simplifying the VGG-16 network, it achieves a 10% speed-up in training time at comparable performance.

Journal ArticleDOI
TL;DR: In this article, a method of memory footprint reduction for FFT-based electromagnetic (EM) volume integral equation (VIE) formulations is presented; the arising Green's function tensors have low multilinear rank, which allows Tucker decomposition to be employed for their compression, greatly reducing the memory required for numerical simulations.
Abstract: We present a method of memory footprint reduction for FFT-based, electromagnetic (EM) volume integral equation (VIE) formulations. The arising Green’s function tensors have low multilinear rank, which allows Tucker decomposition to be employed for their compression, thereby greatly reducing the required memory storage for numerical simulations. Consequently, the compressed components are able to fit inside a graphics processing unit (GPU), on which highly parallelized computations can vastly accelerate the iterative solution of the arising linear system. In addition, the elementwise products throughout the iterative solver's process require additional flops; thus, we provide a variety of novel and efficient methods that maintain the linear complexity of the classic elementwise product with an additional small multiplicative constant. We demonstrate the utility of our approach via its application to VIE simulations for the magnetic resonance imaging (MRI) of a human head. For these simulations, we report an order of magnitude acceleration over standard techniques.

Journal ArticleDOI
TL;DR: In this paper, an L1-norm based reformulation of Tucker decomposition is proposed, and two algorithms for its solution, namely L1-norm Higher-Order Singular Value Decomposition (L1-HOSVD) and L1-norm Higher-Order Orthogonal Iterations (L1-HOOI), are presented.
Abstract: Tucker decomposition is a standard multi-way generalization of Principal-Component Analysis (PCA), appropriate for processing tensor data. Similar to PCA, Tucker decomposition has been shown to be sensitive to faulty data, due to its L2-norm-based formulation, which places squared emphasis on peripheral/outlying entries. In this work, we explore L1-Tucker, an L1-norm based reformulation of Tucker decomposition, and present two algorithms for its solution, namely L1-norm Higher-Order Singular Value Decomposition (L1-HOSVD) and L1-norm Higher-Order Orthogonal Iterations (L1-HOOI). The proposed algorithms are accompanied by complexity and convergence analysis. Our numerical studies on tensor reconstruction and classification corroborate that L1-Tucker decomposition, implemented by means of the proposed algorithms, attains similar performance to standard Tucker when the processed data are corruption-free, while it exhibits sturdy resistance against heavily corrupted entries.
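L1-HOSVD replaces the L2 principal components of each mode unfolding with L1 principal components. For a handful of samples, the L1 principal component can be computed exactly by an exhaustive search over sign vectors; a toy numpy sketch of that subproblem (not the authors' implementation, and tractable only for very small sample counts):

```python
import numpy as np
from itertools import product

def l1_pc(X):
    """Exact L1 principal component of X (columns are data points), via the
    equivalence max_{||q||=1} sum_i |q^T x_i| = max_{b in {-1,1}^N} ||X b||.
    Exhaustive over sign vectors, so only tractable for tiny N."""
    best, best_q = -np.inf, None
    for b in product([-1.0, 1.0], repeat=X.shape[1]):
        v = X @ np.array(b)
        nv = np.linalg.norm(v)
        if nv > best:
            best, best_q = nv, v / nv
    return best_q

rng = np.random.default_rng(5)
X = rng.standard_normal((3, 6))   # e.g. one small mode unfolding, 6 "samples"
q = l1_pc(X)
obj = np.abs(q @ X).sum()         # the maximal L1 objective value
```

The absolute-value objective is what gives the L1 variants their resistance to outlying entries: a single corrupted sample contributes linearly, not quadratically.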

Journal ArticleDOI
Bilian Chen, Ting Sun, Zhehao Zhou, Yifeng Zeng, Langcai Cao
TL;DR: The specialty of the model is that the ranks of the nonnegative Tucker decomposition are no longer constants; instead, they become part of the decision variables to be optimized in a regularized multiconvex optimization.
Abstract: We consider the problem of low-rank tensor decomposition of incomplete tensors, which has applications in many data analysis problems such as recommender systems, signal processing, machine learning, and image inpainting. In this paper, we focus on nonnegative tensor completion via low-rank Tucker decomposition. The specialty of our model is that the ranks of the nonnegative Tucker decomposition are no longer constants; instead, they all become part of the decision variables to be optimized. Our solution approach is based on the penalty method and a block coordinate descent method with prox-linear updates for regularized multiconvex optimization. We demonstrate the convergence of our algorithm. Numerical results on three image datasets show that the proposed algorithm offers competitive performance compared with other existing algorithms even though the data are highly sparse.

Posted Content
TL;DR: Numerical studies on tensor reconstruction and classification corroborate that L1-Tucker decomposition, implemented by means of the proposed algorithms, attains similar performance to standard Tucker when the processed data are corruption-free, while it exhibits sturdy resistance against heavily corrupted entries.
Abstract: Tucker decomposition is a common method for the analysis of multi-way/tensor data. Standard Tucker has been shown to be sensitive to heavy corruptions, due to its L2-norm-based formulation, which places squared emphasis on peripheral entries. In this work, we explore L1-Tucker, an L1-norm based reformulation of standard Tucker decomposition. After formulating the problem, we present two algorithms for its solution, namely L1-norm Higher-Order Singular Value Decomposition (L1-HOSVD) and L1-norm Higher-Order Orthogonal Iterations (L1-HOOI). The presented algorithms are accompanied by complexity and convergence analysis. Our numerical studies on tensor reconstruction and classification corroborate that L1-Tucker, implemented by means of the proposed methods, attains similar performance to standard Tucker when the processed data are corruption-free, while it exhibits sturdy resistance against heavily corrupted entries.

Journal ArticleDOI
TL;DR: A novel hyperspectral AD weighting strategy based on tensor decomposition and cluster weighting is proposed in this letter; equipped with this simple but effective strategy as a postprocessing step, the detection performance of generic AD methods can be significantly boosted.
Abstract: Numerous hyperspectral anomaly detection (AD) methods suffer from complex background compositions and subpixel objects, due to their inadequate Gaussian-distributed representations of nonhomogeneous backgrounds or their low discrimination between subpixel anomalies and the background. To alleviate these issues, a novel hyperspectral AD weighting strategy based on tensor decomposition and cluster weighting is proposed in this letter. Equipped with this simple but effective strategy as a postprocessing step, the detection performance of generic AD methods can be significantly boosted. In this strategy, Tucker decomposition is adopted to remove the major background information. A parameter-adaptive k-means clustering method is then applied to the decomposed anomaly/noise data cube to assemble homogeneous regions. After segmenting the clustering result into a number of nonoverlapping eight-connected domains, corresponding weights are assigned to large domains according to an improved Gaussian weight function. Finally, the resulting weight matrix is multiplied by the results of the detectors to achieve a performance boost. Experiments on two hyperspectral data sets validate the effectiveness of the proposed strategy.

Journal ArticleDOI
TL;DR: A global and nonlocal weighted tensor norm minimization denoising method is proposed that jointly utilizes GC and NSS, preserves structural information, and outperforms several state-of-the-art denoising methods.
Abstract: A hyperspectral image (HSI) contains abundant spatial and spectral information, but it is always corrupted by various noises, especially Gaussian noise. Global correlation (GC) across the spectral domain and nonlocal self-similarity (NSS) across the spatial domain are two important characteristics of an HSI. To keep the integrity of the global structure and improve the details of the restored HSI, we propose a global and nonlocal weighted tensor norm minimization denoising method that jointly utilizes GC and NSS. The weighted multilinear rank is utilized to depict the GC information. To preserve structural information with NSS, a patch-group-based low-rank-tensor-approximation (LRTA) model is designed. The LRTA makes use of Tucker decompositions of 4D patches, each composed of a group of similar 3D patches of the HSI. The alternating direction method of multipliers (ADMM) is adopted to solve the proposed models. Experimental results show that the proposed algorithm preserves structural information and outperforms several state-of-the-art denoising methods.

Journal ArticleDOI
30 Jun 2019 - Sensors
TL;DR: The proposed tensor-based method for speckle noise reduction in side-scan sonar images is based on the Tucker decomposition with automatically determined ranks of the factoring tensors, showing very good results.
Abstract: Real signals are usually contaminated with various types of noise. This phenomenon has a negative impact on the operation of systems that rely on signal processing. In this paper, we propose a tensor-based method for speckle noise reduction in side-scan sonar images. The method is based on the Tucker decomposition with automatically determined ranks of the factoring tensors. As verified experimentally, the proposed method shows very good results, outperforming other types of speckle-noise filters.

Proceedings ArticleDOI
12 May 2019
TL;DR: A novel Graph regularized Nonnegative Tucker Decomposition (GNTD) method is proposed that is able to extract a low-dimensional parts-based representation and preserve the geometrical information simultaneously from high-dimensional tensor data.
Abstract: Nonnegative Tucker Decomposition (NTD) is one of the most popular techniques for feature extraction and representation from nonnegative tensor data while preserving internal structure information. From the perspective of geometry, high-dimensional data are usually drawn from a low-dimensional submanifold of the ambient space. In this paper, we propose a novel Graph regularized Nonnegative Tucker Decomposition (GNTD) method that is able to extract a low-dimensional parts-based representation and preserve the geometrical information simultaneously from high-dimensional tensor data. We also present an effective algorithm to solve the proposed GNTD model. Experimental results demonstrate the effectiveness and high efficiency of the proposed GNTD method.

Proceedings ArticleDOI
28 Jun 2019
TL;DR: This paper presents a hardware accelerator for a classical tensor computation framework, Tucker decomposition, studying three modules: tensor-times-matrix (TTM), matrix singular value decomposition (SVD), and tensor permutation, which are implemented on a Xilinx FPGA for prototyping.
Abstract: Tensor computation has emerged as a powerful mathematical tool for solving high-dimensional and/or extreme-scale problems in science and engineering. The last decade has witnessed tremendous advancement of tensor computation and its applications in machine learning and big data. However, its hardware optimization on resource-constrained devices remains an (almost) unexplored field. This paper presents a hardware accelerator for a classical tensor computation framework, Tucker decomposition. We study three modules of this architecture: tensor-times-matrix (TTM), matrix singular value decomposition (SVD), and tensor permutation, and implement them on a Xilinx FPGA for prototyping. To further reduce the computing time, a warm-start algorithm for the Jacobi iterations in the SVD is proposed. A fixed-point simulator is used to evaluate the performance of our design. Synthetic data sets and a real MRI data set are used to validate the design and evaluate its performance. We compare our work with state-of-the-art software toolboxes running on both CPU and GPU; our design shows a 2.16–30.2× speedup on the cardiac MRI data set.

Journal ArticleDOI
TL;DR: It is shown that finding an optimal dimension tree for an N-dimensional tensor is NP-hard for both CP and Tucker decompositions, and faster exact algorithms are provided for finding such a tree.
Abstract: Dense tensor decompositions have been widely used in many signal processing problems, including analyzing speech signals, identifying the localization of signal sources, and many other communication applications. Computing these decompositions poses major computational challenges for the big datasets emerging in these domains. CANDECOMP/PARAFAC (CP) and Tucker formulations are the prominent tensor decomposition schemes heavily used in these fields, and the algorithms for computing them involve applying two core operations, namely tensor-times-matrix (TTM) and tensor-times-vector (TTV) multiplication, which are executed repetitively within an iterative framework. In the recent past, efficient computational schemes using a data structure called a dimension tree have been employed to significantly reduce the cost of these two operations, by storing and reusing partial results that are commonly used across different iterations of these algorithms. This framework has been introduced for sparse CP and Tucker decompositions in the literature, and a recent work investigates using an optimal binary dimension tree structure in computing dense Tucker decompositions. In this paper, we investigate finding an optimal dimension tree for both CP and Tucker decompositions. We show that finding an optimal dimension tree is NP-hard for both decompositions, provide faster exact algorithms for finding an optimal dimension tree in O(3^N) time using O(2^N) space for the Tucker case, and extend the algorithm to the case of CP decomposition with the same time and space complexities.

Journal ArticleDOI
TL;DR: It is demonstrated how tensors can characterize muscle activity, and a new consTD method is developed for muscle synergy extraction that can identify shared and task-specific synergies.
Abstract: Higher-order tensor decompositions have hardly been used in muscle activity analysis, despite multichannel electromyography (EMG) datasets naturally occurring as multi-way structures. Here, we demonstrate and discuss the potential of tensor decompositions as a framework to estimate muscle synergies from third-order EMG tensors built by stacking repetitions of multi-channel EMG for several tasks. We compare the two most widespread tensor decomposition models, parallel factor analysis (PARAFAC) and Tucker, in muscle synergy analysis of the wrist's three main degrees of freedom (DoF) using the public first Ninapro database. Furthermore, we propose a constrained Tucker decomposition (consTD) method for efficient synergy extraction that builds on the power of tensor decompositions and offers a direct approach to estimating shared and task-specific synergies from two biomechanically related tasks. Our approach is compared with the current standard of repetitively applying non-negative matrix factorization (NMF) to a series of movements. The results show that consTD is suitable for synergy extraction compared with PARAFAC and Tucker. Moreover, by exploiting the multi-way structure of muscle activity, the proposed method successfully identified shared and task-specific synergies for all three DoF tensors; these were robust to shuffling of task-repetition information, unlike the commonly used NMF. In summary, we demonstrate how tensors can characterize muscle activity and develop a new consTD method for muscle synergy extraction that can identify shared and task-specific synergies. We expect this paper to pave the way for novel muscle activity analysis methods based on higher-order techniques.
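The NMF baseline the paper compares against is typically computed with Lee-Seung multiplicative updates. A minimal numpy sketch on a synthetic channels-by-samples matrix (the function name `nmf`, the dimensions, and the synthetic data are illustrative assumptions, not the study's pipeline):

```python
import numpy as np

def nmf(X, rank, iters=500, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates for X ~= W @ H with W, H >= 0.
    Updates keep all entries nonnegative and never increase the
    Frobenius reconstruction error."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, rank))
    H = rng.random((rank, m))
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Synthetic "EMG envelope" matrix: 8 channels x 100 samples,
# generated from 3 nonnegative synergies so an exact rank-3 fit exists.
rng = np.random.default_rng(1)
X = rng.random((8, 3)) @ rng.random((3, 100))
W, H = nmf(X, rank=3)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
print(f"relative error: {err:.3f}")
```

Applying this per movement and matching columns of `W` across movements is the repetitive baseline the tensor methods are designed to replace.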

Proceedings ArticleDOI
13 May 2019
TL;DR: In this article, the authors present a selection of classification methods that employ L1-Tucker, an L1-norm-based, corruption-resistant reformulation of Tucker decomposition, whose standard L2-norm formulation has demonstrated severe sensitivity to corrupted measurements.
Abstract: Most commonly used classification algorithms process data in the form of vectors. At the same time, modern datasets often comprise multimodal measurements that are naturally modeled as multi-way arrays, also known as tensors. Processing multi-way data in their tensor form can enable enhanced inference and classification accuracy. Tucker decomposition is a standard method for tensor data processing, which, however, has demonstrated severe sensitivity to corrupted measurements due to its L2-norm formulation. In this work, we present a selection of classification methods that employ an L1-norm-based, corruption-resistant reformulation of Tucker (L1-Tucker). Our experimental studies on multiple real datasets corroborate the corruption-resistance and classification accuracy afforded by L1-Tucker.
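To make the L2-vs-L1 sensitivity concrete, here is a toy numpy sketch of exact rank-1 L1-PCA by exhaustive sign search, following the known rank-1 result of Markopoulos et al. (2014); the data construction and names are illustrative, and practical L1-Tucker solvers use much faster routines than this exponential search:

```python
import numpy as np
from itertools import product

def l1_pc(X):
    """Exact rank-1 L1 principal component of X (d x N): the unit vector q
    maximising sum_n |x_n^T q| is X b / ||X b||_2 for the sign vector
    b in {-1, +1}^N maximising ||X b||_2. Exponential in N, so toy-sized
    problems only."""
    best, q = -np.inf, None
    for signs in product((-1.0, 1.0), repeat=X.shape[1]):
        v = X @ np.array(signs)
        norm = np.linalg.norm(v)
        if norm > best:
            best, q = norm, v / norm
    return q

rng = np.random.default_rng(0)
X = np.zeros((4, 8))
X[0, :7] = 1.0                          # seven clean samples along axis 0
X += 0.01 * rng.standard_normal(X.shape)
X[:, 7] = 0.0
X[1, 7] = 3.0                           # one corrupted sample off-axis
q_l1 = l1_pc(X)
q_l2 = np.linalg.svd(X)[0][:, 0]        # ordinary (L2) principal component
print(abs(q_l1[0]), abs(q_l2[0]))       # L1 stays near the clean direction
```

Here the single corrupted sample has more energy than any clean one, so the L2 component locks onto it, while the L1 objective still favors the direction supported by the majority of samples.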

Proceedings ArticleDOI
08 Apr 2019
TL;DR: This study focuses on darknet traffic analysis and applies tensor factorization to detect coordinated group activities, such as botnets, using nonnegative Tucker decomposition, a tensor factorization method chosen for its non-negativity constraints.
Abstract: This study focuses on darknet traffic analysis and applies tensor factorization to detect coordinated group activities, such as botnets. Tensor factorization is a powerful tool for extracting co-occurrence patterns; it is highly interpretable and can handle more variables than matrix factorization. We propose a simple method for detecting group activities from the extracted features. However, tensor factorization is too computationally expensive to run in real time. To address this problem, we implemented a two-step algorithm that achieves fast, memory-efficient factorization. We use nonnegative Tucker decomposition, a tensor factorization method whose non-negativity constraints avoid physically unreasonable results. Finally, we introduce our prototype implementation of the proposed scheme and demonstrate its effectiveness by reviewing several past security incidents.
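A nonnegative Tucker decomposition can be sketched with generic multiplicative updates, which keep the core and all factors nonnegative so extracted co-occurrence patterns remain interpretable as intensities. This numpy sketch is an illustration of the decomposition itself, not the paper's two-step real-time algorithm:

```python
import numpy as np

def reconstruct(G, A, B, C):
    """Tucker reconstruction of a 3-way tensor from core G and factors."""
    return np.einsum('abc,ia,jb,kc->ijk', G, A, B, C)

def nonneg_tucker(X, ranks, iters=500, eps=1e-9, seed=0):
    """Nonnegative Tucker decomposition via Lee-Seung-style multiplicative
    updates: each factor (and the core) is scaled by the ratio of the
    positive and negative parts of its gradient, preserving nonnegativity."""
    rng = np.random.default_rng(seed)
    (I, J, K), (r1, r2, r3) = X.shape, ranks
    G = rng.random((r1, r2, r3))
    A, B, C = rng.random((I, r1)), rng.random((J, r2)), rng.random((K, r3))
    for _ in range(iters):
        T = np.einsum('abc,jb,kc->ajk', G, B, C)          # core x2 B x3 C
        A *= np.einsum('ijk,ajk->ia', X, T) / (
             A @ np.einsum('ajk,bjk->ab', T, T) + eps)
        T = np.einsum('abc,ia,kc->bik', G, A, C)          # core x1 A x3 C
        B *= np.einsum('ijk,bik->jb', X, T) / (
             B @ np.einsum('bik,cik->bc', T, T) + eps)
        T = np.einsum('abc,ia,jb->cij', G, A, B)          # core x1 A x2 B
        C *= np.einsum('ijk,cij->kc', X, T) / (
             C @ np.einsum('cij,dij->cd', T, T) + eps)
        Xhat = reconstruct(G, A, B, C)
        G *= np.einsum('ijk,ia,jb,kc->abc', X, A, B, C) / (
             np.einsum('ijk,ia,jb,kc->abc', Xhat, A, B, C) + eps)
    return G, A, B, C

# Fit a synthetic nonnegative tensor with an exact rank-(2,2,2) structure,
# a stand-in for a (source, port, time) traffic-count tensor.
rng = np.random.default_rng(1)
Xt = reconstruct(rng.random((2, 2, 2)),
                 rng.random((6, 2)), rng.random((7, 2)), rng.random((8, 2)))
G, A, B, C = nonneg_tucker(Xt, (2, 2, 2))
err = np.linalg.norm(Xt - reconstruct(G, A, B, C)) / np.linalg.norm(Xt)
print(f"relative error: {err:.3f}")
```

In a traffic setting, each column of a factor can then be read as a group of hosts, a set of ports, or a time window that co-occur, which is the feature the detection step inspects.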

Posted Content
TL;DR: Nonnegative tensor data are studied, an orthogonal nonnegative Tucker decomposition (ONTD) is proposed, and a convex relaxation algorithm based on the augmented Lagrangian function is developed to solve the optimization problem.
Abstract: In this paper, we study nonnegative tensor data and propose an orthogonal nonnegative Tucker decomposition (ONTD). We discuss several properties of ONTD and develop a convex relaxation algorithm based on the augmented Lagrangian function to solve the optimization problem; convergence of the algorithm is established. We apply ONTD to image data sets from real-world applications, including face recognition, image representation, and hyperspectral unmixing. Numerical results illustrate the effectiveness of the proposed algorithm.
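For contrast with ONTD's combined constraints, the classical truncated HOSVD produces factors that are orthonormal but not sign-constrained. A minimal numpy sketch of that unconstrained baseline (not the paper's ONTD algorithm):

```python
import numpy as np

def hosvd(X, ranks):
    """Truncated higher-order SVD for a 3-way tensor: one orthonormal factor
    per mode (leading left singular vectors of each unfolding), core obtained
    by projecting X onto those factors. Factors may contain negative entries;
    ONTD additionally imposes nonnegativity."""
    factors = []
    for n, r in enumerate(ranks):
        Xn = np.moveaxis(X, n, 0).reshape(X.shape[n], -1)  # mode-n unfolding
        factors.append(np.linalg.svd(Xn, full_matrices=False)[0][:, :r])
    G = X
    for n, U in enumerate(factors):                        # G = X x_n U^T
        G = np.moveaxis(np.tensordot(U.T, np.moveaxis(G, n, 0), axes=1), 0, n)
    return G, factors

X = np.random.default_rng(0).random((5, 6, 7))
G, (A, B, C) = hosvd(X, (5, 6, 7))   # full ranks: reconstruction is exact
print(np.allclose(np.einsum('abc,ia,jb,kc->ijk', G, A, B, C), X))
```

Requiring a factor to be both orthogonal and nonnegative forces its columns to have disjoint supports, which is what makes ONTD factors behave like hard cluster indicators on image data.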