
Showing papers on "Tucker decomposition published in 2021"


Journal ArticleDOI
TL;DR: A multilayer sparsity-based tensor decomposition (MLSTD) is proposed for low-rank tensor completion, which encodes the structured sparsity of a tensor by a multiple-layer representation.
Abstract: Existing methods for tensor completion (TC) have limited ability to characterize low-rank (LR) structures. To depict the complex hierarchical knowledge with implicit sparsity attributes hidden in a tensor, we propose a new multilayer sparsity-based tensor decomposition (MLSTD) for low-rank tensor completion (LRTC). The method encodes the structured sparsity of a tensor by a multiple-layer representation. Specifically, we use the CANDECOMP/PARAFAC (CP) model to decompose a tensor into a sum of rank-1 tensors, and the number of rank-1 components is easily interpreted as the first-layer sparsity measure. The factor matrices are presumed smooth, since a local piecewise property exists in the within-mode correlations. In the subspace, this local smoothness can be regarded as the second-layer sparsity. To describe the refined structures of factor/subspace sparsity, we introduce a new sparsity insight of subspace smoothness: a self-adaptive low-rank matrix factorization (LRMF) scheme, called the third-layer sparsity. Through this progressive description of the sparsity structure, we formulate an MLSTD model and embed it into the LRTC problem. An effective alternating direction method of multipliers (ADMM) algorithm is then designed for the MLSTD minimization problem. Various experiments on RGB images, hyperspectral images (HSIs), and videos substantiate that the proposed LRTC methods are superior to state-of-the-art methods.
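
The first-layer idea, counting rank-1 CP components as a sparsity measure, is easy to reproduce with an off-the-shelf tensor library. Below is a minimal sketch using TensorLy (not the authors' code); the tensor, the rank R, and the error check are all illustrative.

```python
# Minimal illustration of the first-layer sparsity idea: the number of
# rank-1 CP components, R, acts as the sparsity measure. Not the MLSTD code.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

tensor = tl.tensor(np.random.rand(20, 20, 20))

R = 5  # first-layer sparsity measure: number of rank-1 terms
weights, factors = parafac(tensor, rank=R, normalize_factors=True)

# Reconstruct the rank-R approximation and check its fit.
approx = tl.cp_to_tensor((weights, factors))
print(float(tl.norm(tensor - approx) / tl.norm(tensor)))
```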

72 citations


Journal ArticleDOI
TL;DR: This work proposes a novel multi-view clustering method via learning an LRTG model, which simultaneously learns the representation and affinity matrix in a single step to preserve their correlation.
Abstract: Graph and subspace clustering methods have become the mainstream of multi-view clustering due to their promising performance. However, (1) since graph clustering methods learn graphs directly from the raw data, when the raw data is distorted by noise and outliers, their performance may seriously decrease; (2) subspace clustering methods use a “two-step” strategy to learn the representation and affinity matrix independently, and thus may fail to explore their high correlation. To address these issues, we propose a novel multi-view clustering method via learning a Low-Rank Tensor Graph (LRTG). Different from subspace clustering methods, LRTG simultaneously learns the representation and affinity matrix in a single step to preserve their correlation. We apply Tucker decomposition and the $\ell_{2,1}$-norm to the LRTG model to alleviate noise and outliers for learning a “clean” representation. LRTG then learns the affinity matrix from this “clean” representation. Additionally, an adaptive neighbor scheme is proposed to find the K largest entries of the affinity matrix to form a flexible graph for clustering. An effective optimization algorithm is designed to solve the LRTG model based on the alternating direction method of multipliers. Extensive experiments on different clustering tasks demonstrate the effectiveness and superiority of LRTG over seventeen state-of-the-art clustering methods.
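
The adaptive-neighbor step described in the abstract, keeping only the K largest entries of the affinity matrix, can be sketched in a few lines. This is our hedged reading of that step with hypothetical names; the LRTG optimization itself is not reproduced.

```python
# Sketch of the adaptive-neighbor step: keep the K largest off-diagonal
# entries per row of the learned affinity matrix, then symmetrize to get
# a graph for clustering. Names and the toy matrix are illustrative.
import numpy as np

def k_largest_graph(affinity, k):
    A = affinity.copy()
    np.fill_diagonal(A, -np.inf)                      # exclude self-loops
    top_k = np.argpartition(-A, k, axis=1)[:, :k]     # k largest per row
    mask = np.zeros(A.shape, dtype=bool)
    mask[np.arange(A.shape[0])[:, None], top_k] = True
    G = np.where(mask, affinity, 0.0)
    return (G + G.T) / 2                              # symmetric graph

affinity = np.abs(np.random.rand(8, 8))
print(k_largest_graph(affinity, k=3).round(2))
```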

71 citations


Journal ArticleDOI
TL;DR: A graph Laplacian-guided coupled tensor decomposition (gLGCTD) model is proposed for the fusion of a hyperspectral image (HSI) and a multispectral image (MSI) for spatial and spectral resolution enhancement.
Abstract: We propose a novel graph Laplacian-guided coupled tensor decomposition (gLGCTD) model for fusion of hyperspectral image (HSI) and multispectral image (MSI) for spatial and spectral resolution enhancements. The coupled Tucker decomposition is employed to capture the global interdependencies across the different modes to fully exploit the intrinsic global spatial–spectral information. To preserve local characteristics, the complementary submanifold structures embedded in high-resolution (HR)-HSI are encoded by the graph Laplacian regularizations. The global spatial–spectral information captured by the coupled Tucker decomposition and the local submanifold structures are incorporated into a unified framework. The gLGCTD fusion framework is solved by a hybrid framework between the proximal alternating optimization (PAO) and the alternating direction method of multipliers (ADMM). Experimental results on both synthetic and real data sets demonstrate that the gLGCTD fusion method is superior to state-of-the-art fusion methods with a more accurate reconstruction of the HR-HSI.

51 citations


Journal ArticleDOI
TL;DR: This work revisits coupled tensor decomposition (CTD)-based hyperspectral super-resolution (HSR) with a flexible algorithmic framework that can work with a series of structural information to take advantage of the model interpretability.
Abstract: Hyperspectral super-resolution (HSR) aims at fusing a pair of hyperspectral and multispectral images to recover a super-resolution image (SRI). This work revisits coupled tensor decomposition (CTD)-based HSR. The vast majority of HSR approaches take a low-rank matrix recovery perspective. The challenge is that theoretical guarantees for recovering the SRI using low-rank matrix models are either elusive or derived under stringent conditions. A couple of recent CTD-based methods ensure recoverability for the SRI under relatively mild conditions, leveraging algebraic properties of the canonical polyadic decomposition (CPD) and the Tucker decomposition models, respectively. However, the latent factors of both the CPD and Tucker models have no physical interpretation in the context of spectral image analysis, which makes incorporating prior information challenging, yet using priors is often essential for enhancing performance in noisy environments. This work employs an idea that models spectral images as tensors following the block-term decomposition model with multilinear rank-$(L_r,L_r,1)$ terms (i.e., the ${\mathsf {LL1}}$ model) and formulates the HSR problem as a coupled ${\mathsf {LL1}}$ tensor decomposition problem. Similar to the existing CTD approaches, recoverability of the SRI is shown under mild conditions. More importantly, the latent factors of the ${\mathsf {LL1}}$ model can be interpreted as the key constituents of spectral images, i.e., the endmembers’ spectral signatures and abundance maps. This connection allows us to incorporate prior information for performance enhancement. A flexible algorithmic framework that can work with a series of structural information is proposed to take advantage of the model interpretability. The effectiveness is showcased using simulated and real data.
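
For readers unfamiliar with the LL1 model the abstract leans on, the sketch below builds a toy spectral cube as a sum of multilinear rank-$(L_r,L_r,1)$ terms: each term pairs a rank-L abundance map A_r B_r^T with a spectral signature c_r. All sizes and names are illustrative assumptions.

```python
# Toy construction of the LL1 (block-term) model: X = sum_r (A_r B_r^T) o c_r,
# where A_r B_r^T is a rank-L abundance map and c_r an endmember spectrum.
import numpy as np

I, J, K = 30, 30, 50          # spatial height, width, spectral bands
R, L = 3, 4                   # number of materials, spatial rank per term
rng = np.random.default_rng(0)

A = [rng.random((I, L)) for _ in range(R)]
B = [rng.random((J, L)) for _ in range(R)]
C = [rng.random(K) for _ in range(R)]

X = sum(np.einsum('ij,k->ijk', A[r] @ B[r].T, C[r]) for r in range(R))
print(X.shape)                # (30, 30, 50)
```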

37 citations


Journal ArticleDOI
TL;DR: An effective method based on three-order tensor creation and Tucker decomposition (TCTD) is proposed, which detects targets of varying brightness, spatial size, and intensity, and ensures that targets are preserved in the remaining minor principal components.
Abstract: Existing infrared small-target detection methods tend to perform unsatisfactorily when encountering complex scenes, mainly because: 1) the infrared image itself has a low signal-to-noise ratio (SNR) and insufficient detailed/texture knowledge; and 2) spatial and structural information is not fully excavated. To overcome these difficulties, an effective method based on three-order tensor creation and Tucker decomposition (TCTD) is proposed, which detects targets of varying brightness, spatial size, and intensity. In the proposed TCTD, multiple morphological profiles, i.e., diverse attributes and different shapes of trees, are designed to create three-order tensors, which can exploit more spatial and structural information to make up for the lack of detailed/texture knowledge. Then, Tucker decomposition is employed, which is capable of estimating and eliminating the major principal components (i.e., most of the background) from three dimensions. Thus, targets can be preserved in the remaining minor principal components. Image contrast is further enhanced by fusing the detection maps of multiple morphological profiles and several groups with discontinuous pruning values. Extensive experiments on two synthetic and six real data sets demonstrate the effectiveness and robustness of the proposed TCTD.
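
The core background-suppression step, using Tucker decomposition to estimate and remove the major principal components, can be mimicked as below. A rough sketch with TensorLy under assumed ranks; the paper's morphological-profile tensor creation and map fusion are omitted.

```python
# Rough sketch: a low multilinear-rank Tucker model of the input tensor
# captures the background; the residual retains small targets.
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

cube = tl.tensor(np.random.rand(64, 64, 8))    # stand-in three-order tensor

core, factors = tucker(cube, rank=[5, 5, 2])   # major principal components
background = tl.tucker_to_tensor((core, factors))
residual = cube - background                   # candidate small targets

print(float(tl.norm(residual) / tl.norm(cube)))
```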

37 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider the image fusion problem while accounting for both spatially and spectrally localized changes in an additive model, and propose two new algorithms, one purely algebraic and another based on an optimization procedure.
Abstract: Coupled tensor approximation has recently emerged as a promising approach for the fusion of hyperspectral and multispectral images, reconciling state of the art performance with strong theoretical guarantees. However, tensor-based approaches previously proposed assume that the different observed images are acquired under exactly the same conditions. A recent work proposed to accommodate inter-image spectral variability in the image fusion problem using a matrix factorization-based formulation, but did not account for spatially-localized variations. Moreover, it lacks theoretical guarantees and has a high associated computational complexity. In this paper, we consider the image fusion problem while accounting for both spatially and spectrally localized changes in an additive model. We first study how the general identifiability of the model is impacted by the presence of such changes. Then, assuming that the high-resolution image and the variation factors admit a Tucker decomposition, two new algorithms are proposed – one purely algebraic, and another based on an optimization procedure. Theoretical guarantees for the exact recovery of the high-resolution image are provided for both algorithms. Experimental results show that the proposed method outperforms state-of-the-art methods in the presence of spectral and spatial variations between the images, at a smaller computational cost.

31 citations


Journal ArticleDOI
TL;DR: A three-order Tucker decomposition and reconstruction detector is proposed to better exploit both spectral and spatial information for change detection in multitemporal hyperspectral images.
Abstract: Change detection from multitemporal hyperspectral images has attracted great attention. Most traditional methods using spectral information for change detection treat a hyperspectral image as a two-dimensional matrix and do not take into account the inherent structural information of the spectrum, which leads to limited detection accuracy. To better exploit both spectral and spatial information, a novel three-order Tucker decomposition and reconstruction detector is proposed for hyperspectral change detection. Initially, Tucker decomposition and reconstruction strategies are used to eliminate the influence of various factors in a multitemporal dataset. Specifically, a singular value accumulation strategy is used to determine the principal components in the factor matrices. Meanwhile, a spectral angle is used to analyze spectral change after tensor processing in different domains. Finally, a new detector is designed to further improve the detection accuracy. Experiments conducted on five real hyperspectral datasets demonstrate that the proposed detector achieves a better detection performance.
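
The spectral-angle analysis mentioned above reduces to a per-pixel angle between the two (reconstructed) acquisitions. A minimal sketch, with our own names and an arbitrary threshold:

```python
# Per-pixel spectral angle between two H x W x B cubes; larger angles
# indicate change. Threshold and data are illustrative.
import numpy as np

def spectral_angle_map(t1, t2, eps=1e-12):
    cos = np.sum(t1 * t2, axis=-1) / (
        np.linalg.norm(t1, axis=-1) * np.linalg.norm(t2, axis=-1) + eps)
    return np.arccos(np.clip(cos, -1.0, 1.0))

before = np.random.rand(10, 10, 100)
after_ = before.copy()
after_[3:5, 3:5, :] = np.random.rand(2, 2, 100)   # simulate a local change
print((spectral_angle_map(before, after_) > 0.2).sum(), "pixels flagged")
```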

31 citations


Journal ArticleDOI
TL;DR: In recent years, measurement or collection of heterogeneous sets of data such as those containing scalars, waveform signals, images, and even structured point clouds has become more common as mentioned in this paper.
Abstract: In recent years, measurement or collection of heterogeneous sets of data such as those containing scalars, waveform signals, images, and even structured point clouds, has become more common. Statis...

28 citations


Journal ArticleDOI
TL;DR: In this article, the authors review recent advances in randomization for computation of the Tucker decomposition and Higher Order SVD (HOSVD), and discuss random projection and sampling approaches, single-pass and multi-pass randomized algorithms, and how to utilize them in the computation of the Tucker decomposition and the HOSVD.
Abstract: Big data analysis has become a crucial part of new emerging technologies such as the internet of things, cyber-physical analysis, deep learning, anomaly detection, etc. Among many other techniques, dimensionality reduction plays a key role in such analyses and facilitates feature selection and feature extraction. Randomized algorithms are efficient tools for handling big data tensors. They accelerate decomposing large-scale data tensors by reducing the computational complexity of deterministic algorithms and the communication among different levels of memory hierarchy, which is the main bottleneck in modern computing environments and architectures. In this article, we review recent advances in randomization for computation of the Tucker decomposition and the Higher Order SVD (HOSVD). We discuss random projection and sampling approaches, single-pass and multi-pass randomized algorithms, and how to utilize them in the computation of the Tucker decomposition and the HOSVD. Simulations on synthetic and real datasets are provided to compare the performance of some of the best and most promising algorithms.
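
As a concrete instance of the surveyed family, here is a minimal randomized HOSVD: each mode's factor comes from a Gaussian-sketched range finder on the unfolding, after which the core is formed by mode products. This is a generic textbook variant under assumed sizes and ranks, not a specific algorithm from the article.

```python
# Minimal randomized HOSVD: sketch each unfolding, orthonormalize, project.
import numpy as np

def unfold(X, mode):
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def randomized_hosvd(X, ranks, oversample=5, seed=0):
    rng = np.random.default_rng(seed)
    factors = []
    for mode, r in enumerate(ranks):
        Xn = unfold(X, mode)
        Omega = rng.standard_normal((Xn.shape[1], r + oversample))
        Q, _ = np.linalg.qr(Xn @ Omega)      # randomized range finder
        factors.append(Q[:, :r])
    core = X
    for mode, U in enumerate(factors):       # core = X x_1 U1^T x_2 U2^T ...
        core = np.moveaxis(np.tensordot(U.T, core, axes=(1, mode)), 0, mode)
    return core, factors

X = np.random.rand(40, 40, 40)
core, factors = randomized_hosvd(X, ranks=(10, 10, 10))
print(core.shape)   # (10, 10, 10)
```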

28 citations


Journal ArticleDOI
TL;DR: Under the DCCHI framework, a new reconstruction algorithm is proposed based on the collaborative Tucker3 tensor decomposition, which forces the spatial factor matrices and core tensor of the panchromatic image’s Tucker3 decomposition to be identical to those of the reconstructed HSI.
Abstract: Computational imaging for hyperspectral images (HSIs) is a hot topic in remote sensing and imaging systems. The dual-camera compressive hyperspectral imaging (DCCHI) system has been successfully designed and applied in hyperspectral imaging. However, the corresponding reconstruction algorithms are not well developed. In this paper, under the DCCHI framework, a new reconstruction algorithm is proposed based on the collaborative Tucker3 Tensor decomposition. In actual HSI, similar nonlocal patches always have similar spatial-spectral structures, and thus, these nonlocal patches can share the same spatial and spectral factors in Tucker decomposition. To characterize the similarities simultaneously, the Tucker3 decomposition is used to model the 4-order tensor formed by the similar cubic patches. To keep the spatial structures in the reconstructed HSI consistent with the panchromatic image’s spatial structures, we force the spatial factor matrices and the core tensor in the Tucker3 decomposition of the HSI to be identical to the spatial factor matrices and core tensor of the panchromatic image’s Tucker3 decomposition. In addition, a spectral quadratic variation constraint is introduced into the spectral factor to characterize the band smoothness. To solve the optimization problem, an alternating direction method of multipliers (ADMM)-based algorithm is designed and each variable is separately solved. Experimental results on a public data set and the remote sensing image demonstrate the advantage of the proposed method.

27 citations


Journal ArticleDOI
TL;DR: An efficient separable multimodal learning method is proposed to deal with tasks that have missing modalities, and Tucker decomposition is introduced, which leads to a general extension of the low-rank tensor fusion method with more modality interactions.
Abstract: Multimodal sentiment analysis has increasingly attracted attention since, with the arrival of complementary data streams, it has great potential to improve and go beyond unimodal sentiment analysis. In this article, we present an efficient separable multimodal learning method to deal with tasks that have missing modalities. In this method, the multimodal tensor is utilized to guide the evolution of each separated modality representation. To save computational expense, Tucker decomposition is introduced, which leads to a general extension of the low-rank tensor fusion method with more modality interactions. The method, in turn, enhances our modality distillation process. Comprehensive experiments on three popular multimodal sentiment analysis datasets, CMU-MOSI, POM, and IEMOCAP, show superior performance, especially when only partial modalities are available.

Journal ArticleDOI
TL;DR: This article uses blockchain techniques to enable IIoT data providers to reliably and securely share their data by encrypting them locally before recording them in the blockchain, and uses tensor train (TT) theory to build an efficient TT-based Tucker decomposition based on gradient descent that tremendously reduces the number of elements to be updated during the Tucker decomposition.
Abstract: Tucker decomposition has been widely used to extract meaningful and underlying data from heterogeneous data generated by different kinds of devices in a wide range of industrial Internet of Things (IIoT) applications. IIoT data uploaded to the cloud contain personal and sensitive information; thus, there is a growing concern about data privacy. Current existing data analysis solutions, however, assume that the data are reliably and securely collected from different IIoT data providers, an assumption that is not always true in the real world. To address these issues, in this article we propose a privacy-preserving Tucker train decomposition based on gradient descent over blockchain-based encrypted IIoT data. Specifically, we use blockchain techniques to enable IIoT data providers to reliably and securely share their data by encrypting them locally before recording them in the blockchain. We use tensor train (TT) theory to build an efficient TT-based Tucker decomposition based on gradient descent that tremendously reduces the number of elements to be updated during the Tucker decomposition. We utilize the massive resources of fogs and clouds to implement an efficient privacy-preserving Tucker train decomposition scheme. We use homomorphic encryption to build our scheme so that it performs the complete Tucker train decomposition without the involvement of users. Results from a series of extensive experiments on synthetic and real-world datasets demonstrate that our proposed scheme is efficient.

Journal ArticleDOI
TL;DR: The results show that TD has a good detection effect for seizure classification and that this method has high computational speed and great potential for real-time diagnosis.
Abstract: Electroencephalogram (EEG) plays an important role in recording brain activity to diagnose epilepsy. However, it is not only laborious, but also not very cost effective for medical experts to manually identify the features on EEG. Therefore, automatic seizure detection in accordance with the EEG recordings is significant for the diagnosis and treatment of epilepsy. Here, a new method for detecting seizures using tensor distance (TD) is proposed. First, the time-frequency characteristics of EEG signals are obtained by wavelet transformation, and the tensor representation of EEG signals is then obtained. Tucker decomposition is used to obtain the principal components of the EEG tensor. Afterwards, the distances between different categories of EEG tensors are calculated as the EEG features. Finally, the TD features are classified through the Bayesian Linear Discriminant Analysis (Bayesian LDA) classifier. The performance of this method is measured by the sensitivity, specificity, and recognition accuracy. Results indicate 95.12% sensitivity, 97.60% specificity, 97.60% recognition accuracy, and a false detection rate of 0.76 per hour on the invasive EEG dataset, which included 566.57 h of EEG recording data from 21 patients. Taken together, the results show that TD has a good detection effect for seizure classification and that this method has high computational speed and great potential for real-time diagnosis.

Journal ArticleDOI
TL;DR: This work presents Dynamic L1-Tucker: an algorithm for dynamic and outlier-resistant Tucker analysis of tensor data that attains high basis estimation performance, identifies/rejects outliers, and adapts to nominal subspace changes.
Abstract: Tucker decomposition is a standard method for processing multi-way (tensor) measurements and finds many applications in machine learning and data mining, among other fields. When tensor measurements arrive in a streaming fashion or are too many to jointly decompose, incremental Tucker analysis is preferred. In addition, dynamic adaptation of bases is desired when the nominal data subspaces change. At the same time, it has been documented that outliers in the data can significantly compromise the performance of existing methods for dynamic Tucker analysis. In this work, we present Dynamic L1-Tucker: an algorithm for dynamic and outlier-resistant Tucker analysis of tensor data. Our experimental studies on both real and synthetic datasets corroborate that the proposed method (i) attains high bases estimation performance, (ii) identifies/rejects outliers, and (iii) adapts to changes of the nominal subspaces.

Journal ArticleDOI
TL;DR: The proposed tensor decomposition deciphers the higher-order interrelations among the considered clinical covariates for early prediction of sepsis, and the results obtained are on par with existing state-of-the-art performance.

Journal ArticleDOI
TL;DR: By combining the thin QR decomposition and the subsampled randomized Fourier transform (SRFT), an efficient randomized algorithm for computing the approximate Tucker decomposition with a given target multilinear rank is obtained.
Abstract: By combining the thin QR decomposition and the subsampled randomized Fourier transform (SRFT), we obtain an efficient randomized algorithm for computing the approximate Tucker decomposition with a given target multilinear rank. We also combine this randomized algorithm with the power iteration technique to improve the efficiency of the algorithm. By using the results about the singular values of the product of orthonormal matrices with the Kronecker product of SRFT matrices, we obtain the error bounds of these two algorithms. Finally, the efficiency of these algorithms is illustrated by several numerical examples.
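
The SRFT building block is simple to state in code: scale by random signs, apply a unitary FFT, and subsample. The sketch below applies it to the (short, fat) mode-n unfolding of a tensor and takes the thin QR of the result, roughly mirroring the abstract's pipeline; all names and sizes are our assumptions, and a practical version would handle the complex output (e.g., via a real FFT variant).

```python
# SRFT sketching of a mode-n unfolding, followed by thin QR. Illustrative.
import numpy as np

def srft_right_sketch(A, k, seed=0):
    """Return A @ Omega with Omega an n x k SRFT: random signs, unitary
    DFT, and uniform column subsampling (with standard rescaling)."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    signs = rng.choice([-1.0, 1.0], size=n)
    AF = np.fft.fft(A * signs[None, :], axis=1) / np.sqrt(n)
    cols = rng.choice(n, size=k, replace=False)
    return np.sqrt(n / k) * AF[:, cols]

Xn = np.random.rand(50, 4096)        # mode-n unfolding: small x huge
Y = srft_right_sketch(Xn, k=20)
Q, _ = np.linalg.qr(Y)               # thin QR; Q estimates the mode-n basis
print(Q.shape)                       # (50, 20)
```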

Journal ArticleDOI
TL;DR: In this article, the low-dimensional representation extracted by NTD can be treated as the predicted soft-clustering coefficient matrix and can therefore be learned jointly with label propagation in a unified framework.
Abstract: Non-negative Tucker decomposition (NTD) has been developed as a crucial method for non-negative tensor data representation. However, NTD is essentially an unsupervised method and cannot take advantage of label information. In this paper, we claim that the low-dimensional representation extracted by NTD can be treated as the predicted soft-clustering coefficient matrix and can therefore be learned jointly with label propagation in a unified framework. The proposed method can extract the physically-meaningful and parts-based representation of tensor data in their natural form while fully exploring the potential ability of the given labels with a nearest neighbors graph. In addition, an efficient accelerated proximal gradient (APG) algorithm is developed to solve the optimization problem. Finally, the experimental results on five benchmark image data sets for semi-supervised clustering and classification tasks demonstrate the superiority of this method over state-of-the-art methods.
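
The building block the paper extends is plain non-negative Tucker decomposition (NTD), available off the shelf. A minimal sketch with TensorLy, reading the mode-1 factor as a soft-clustering coefficient matrix; the paper's joint label-propagation learning and APG solver are not reproduced.

```python
# NTD via TensorLy; the mode-1 factor is read as soft-clustering
# coefficients. The paper's joint label-propagation step is not shown.
import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_tucker

data = tl.tensor(np.random.rand(100, 16, 16))   # e.g., 100 images, 16x16

core, factors = non_negative_tucker(data, rank=[10, 5, 5], n_iter_max=200)
soft = tl.to_numpy(factors[0])                  # 100 x 10, non-negative
print(soft.argmax(axis=1)[:10])                 # hard cluster labels
```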

Proceedings Article
18 May 2021
TL;DR: In this article, a deep transfer tensor decomposition (DTTD) method is proposed by integrating deep structure and Tucker decomposition, where an orthogonal constrained stacked denoising autoencoder (OC-SDAE) is proposed for alleviating the scale variation in learning effective latent representation, and the side information is incorporated as a compensation for tensor sparsity.
Abstract: Tensor decomposition is one of the most effective techniques for multi-criteria recommendations. However, it suffers from data sparsity when dealing with three-dimensional (3D) user-item-criterion ratings. To mitigate this issue, we consider effectively incorporating side information and cross-domain knowledge in tensor decomposition. A deep transfer tensor decomposition (DTTD) method is proposed by integrating a deep structure and Tucker decomposition, where an orthogonal constrained stacked denoising autoencoder (OC-SDAE) is proposed to alleviate scale variation in learning effective latent representations, and the side information is incorporated as a compensation for tensor sparsity. Tucker decomposition generates private user and item latent factors to connect with the OC-SDAEs and creates a common core tensor to bridge different domains. A cross-domain alignment algorithm (CDAA) is proposed to solve the rotation issue between the two core tensors in the source and target domains. To the best of our knowledge, this is the first work in Tucker decomposition-based recommendations to use a deep structure to incorporate side information and cross-domain knowledge. Experiments show that DTTD outperforms state-of-the-art related works.

Journal ArticleDOI
TL;DR: The classical vector autoregressive model is a fundamental tool for multivariate time series analysis, but it involves too many parameters when the number of time series and lag order are even...
Abstract: The classical vector autoregressive model is a fundamental tool for multivariate time series analysis. However, it involves too many parameters when the number of time series and lag order are even...

Journal ArticleDOI
TL;DR: Machine learning-based automated diagnosis of sepsis using easily recordable physiological data can be more promising than the gold-standard rule-based clinical criteria in current practice.

Journal ArticleDOI
TL;DR: This letter tries to effectively reduce the dimension of hyperspectral images (HSIs) by jointly considering both the spectral redundancy and spatial continuity through a multilinear transformation with graph embedding in core tensor space in the framework of Tucker decomposition.
Abstract: This letter tries to effectively reduce the dimension of hyperspectral images (HSIs) by jointly considering both the spectral redundancy and spatial continuity through a multilinear transformation with graph embedding in core tensor space. The whole process is constructed in the framework of Tucker decomposition (TD). Since the distance between intraclass samples should be relatively smaller than that between interclass samples, the reduced tensor cores should maintain this property. To achieve this goal, a graph is embedded into the core tensor space during TD. Moreover, considering the instability of the solutions in previous works, we constrain the projection matrices to be orthogonal so that the results are more stable and the extracted features more discriminative. We further analyze the effect of different constraints on TD methods for HSI dimensionality reduction. Finally, the experimental results show the superiority of this method over many other tensor methods.

Proceedings ArticleDOI
14 Aug 2021
TL;DR: Zoom-Tucker as discussed by the authors is a fast and memory-efficient Tucker decomposition method for finding hidden factors of temporal tensor data in an arbitrary time range, which fully exploits block structure to compress a given tensor.
Abstract: Given a temporal dense tensor and an arbitrary time range, how can we efficiently obtain latent factors in the range? Tucker decomposition is a fundamental tool for analyzing dense tensors to discover hidden factors, and has been exploited in many data mining applications. However, existing decomposition methods do not provide the functionality to analyze a specific range of a temporal tensor. The existing methods are one-off, with the main focus on performing Tucker decomposition once for a whole input tensor. Although a few existing methods with a preprocessing phase can deal with a time range query, they are still time-consuming and suffer from low accuracy. In this paper, we propose Zoom-Tucker, a fast and memory-efficient Tucker decomposition method for finding hidden factors of temporal tensor data in an arbitrary time range. Zoom-Tucker fully exploits block structure to compress a given tensor, supporting an efficient query and capturing local information. Zoom-Tucker answers diverse time range queries quickly and memory-efficiently, by elaborately decoupling the preprocessed results included in the range and carefully determining the order of computations. We demonstrate that Zoom-Tucker is up to 171.9x faster and requires up to 230x less space than existing methods while providing comparable accuracy.
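
A naive rendering of the block idea (our reading of the abstract, not the authors' method): Tucker-compress fixed-size time blocks once during preprocessing, then answer a range query from only the blocks overlapping the range. The real Zoom-Tucker works directly on the compressed results; the dense per-block reconstruction in this toy is exactly what it avoids.

```python
# Toy block-wise preprocessing for time-range Tucker queries. Illustrative
# only: sizes, ranks, and the dense reconstruction are our simplifications.
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

T, I, J, block = 120, 20, 20, 30
stream = tl.tensor(np.random.rand(T, I, J))

# Preprocessing: one Tucker decomposition per fixed-size time block.
blocks = [tucker(stream[t:t + block], rank=[5, 5, 5])
          for t in range(0, T, block)]

def range_query(start, end):
    """Answer a query on [start, end) from the overlapping blocks only."""
    first, last = start // block, (end - 1) // block
    parts = [tl.to_numpy(tl.tucker_to_tensor(blocks[b]))
             for b in range(first, last + 1)]
    window = np.concatenate(parts, axis=0)
    window = window[start - first * block:end - first * block]
    return tucker(tl.tensor(window), rank=[5, 5, 5])

core, factors = range_query(35, 95)
print(core.shape)   # (5, 5, 5)
```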

Journal ArticleDOI
TL;DR: A quality-controlled compression of multilead electrocardiogram (MECG) signals based on tensor analysis is proposed and implemented on a 3D beat tensor of the MECG, and it provides superior results compared with recently published works on MECG data compression.

Journal ArticleDOI
TL;DR: In this paper, a second-order Levenberg-Marquardt optimization method was proposed for the structured Tucker decomposition problem using an approximation of the Hessian matrix by the Krylov subspace method, which is shown to perform well in comparison to existing tensor decomposition methods.
Abstract: Structured Tucker tensor decomposition models complete or incomplete multiway data sets (tensors), where the core tensor and the factor matrices can obey different constraints. The model includes block-term decomposition or canonical polyadic decomposition as special cases. We propose a very flexible optimization method for the structured Tucker decomposition problem, based on the second-order Levenberg-Marquardt optimization, using an approximation of the Hessian matrix by the Krylov subspace method. An algorithm with limited sensitivity of the decomposition is included. The proposed algorithm is shown to perform well in comparison to existing tensor decomposition methods.


Journal ArticleDOI
TL;DR: The experimental results indicate that the proposed scheme has better fidelity and stronger robustness against common image-processing and geometric attacks, can effectively resist color-channel exchange attacks, and achieves better performance.
Abstract: To protect the copyright of the color image, a color image watermarking scheme based on quaternion discrete Fourier transform (QDFT) and tensor decomposition (TD) is presented. Specifically, the cover image is partitioned into non-overlapping blocks, and then QDFT is performed on each image block. Then, the three imaginary frequency components of the QDFT are used to construct a third-order tensor. The third-order tensor is decomposed by Tucker decomposition and generates a core tensor. Finally, an improved odd–even quantization technique is employed to embed a watermark in the core tensor. Moreover, pseudo-Zernike moments and a multiple output least squares support vector regression (MLS–SVR) network model are used for geometric distortion correction in the watermark extraction stage. The scheme utilizes the inherent correlations among the three RGB channels of a color image and spreads the watermark into the three channels. The experimental results indicate that the proposed scheme has better fidelity and stronger robustness against common image-processing and geometric attacks, and can effectively resist color-channel exchange attacks. Compared with existing schemes, the presented scheme achieves better performance.

Journal ArticleDOI
TL;DR: In this paper, a tensorized Chebyshev interpolation with a Tucker decomposition is used to approximate a trivariate function defined on a tensor-product domain via function evaluations.
Abstract: This work is concerned with approximating a trivariate function defined on a tensor-product domain via function evaluations. Combining tensorized Chebyshev interpolation with a Tucker decomposition...

Journal ArticleDOI
Abstract: This article presents a novel global gradient sparse and nonlocal low-rank tensor decomposition model with a hyper-Laplacian prior for hyperspectral image (HSI) superresolution to produce a high-resolution HSI (HR-HSI) by fusing a low-resolution HSI (LR-HSI) with an HR multispectral image (HR-MSI). Inspired by the investigated hyper-Laplacian distribution of the gradients of the difference images between the upsampled LR-HSI and latent HR-HSI, we formulate the relationship between these two datasets as an $\ell_{p}$ $(0<p<1)$-norm term to enforce spectral preservation. Then, the relationship between the HR-MSI and latent HR-HSI is built using a tensor-based fidelity term to recover the spatial details. To effectively capture the high spatio-spectral-nonlocal similarities of the latent HR-HSI, we design a novel nonlocal low-rank Tucker decomposition to model the 3-D regular tensors constructed from the grouped nonlocal similar HR-HSI cubes. The global spatial-spectral total variation regularization is then adopted to ensure the global spatial piecewise smoothness and spectral consistency of the reconstructed HR-HSI from nonlocal low-rank cubes. Finally, an alternating direction method of multipliers-based algorithm is designed to efficiently solve the optimization problem. Experiments on both synthetic and real datasets collected by different sensors show the effectiveness of the proposed method, from visual and quantitative assessments.

Journal ArticleDOI
TL;DR: Tucker decomposition is proposed to reduce the memory requirement of the far-fields in the fast multipole method (FMM)-accelerated surface integral equation simulators in this article.
Abstract: Tucker decomposition is proposed to reduce the memory requirement of the far-fields in fast multipole method (FMM)-accelerated surface integral equation simulators. It is particularly used to compress the far-fields of FMM groups, which are stored in three-dimensional (3-D) arrays (or tensors). The compressed tensors are then used to perform fast tensor-vector multiplications during the aggregation and disaggregation stages of the FMM. For many practical scenarios, the proposed Tucker decomposition yields a significant reduction in the far-fields’ memory requirement while dramatically accelerating the aggregation and disaggregation stages. For the electromagnetic scattering analysis of a $30\lambda$-diameter sphere, it reduces the memory requirement of the far-fields by more than 87% while expediting the aggregation and disaggregation stages by factors of 158 and 152, respectively, where $\lambda$ is the wavelength in free space.
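
A back-of-the-envelope sketch of the storage accounting (our illustration, with arbitrary sizes and ranks): store the Tucker core and factors of a far-field tensor instead of the dense 3-D array, and compare entry counts.

```python
# Storage comparison: dense far-field tensor vs. its Tucker core + factors.
# Sizes, ranks, and the random stand-in data are illustrative only.
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

farfield = tl.tensor(np.random.rand(60, 60, 40))
core, factors = tucker(farfield, rank=[12, 12, 8])

dense = farfield.size
compressed = core.size + sum(U.size for U in factors)
print(f"{dense} -> {compressed} entries "
      f"({100 * (1 - compressed / dense):.1f}% reduction)")
```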

Journal ArticleDOI
TL;DR: A correlation-based Tucker decomposition (CBTD) method is proposed that can be employed in any TD-based method for Nth-order tensors and improves the performance of HSI compression over other state-of-the-art methods.