
Showing papers on "Singular value decomposition published in 2020"


Journal ArticleDOI
TL;DR: A fibered rank minimization model for HSI mixed noise removal is proposed, in which the underlying HSI is modeled as a low-fibered-rank component; each subproblem within the ADMM-based solver is proven to have a closed-form solution, even though 3DLogTNN is nonconvex.
Abstract: The tensor tubal rank, defined based on the tensor singular value decomposition (t-SVD), has obtained promising results in hyperspectral image (HSI) denoising. However, the framework of the t-SVD lacks flexibility for handling different correlations along different modes of HSIs, leading to suboptimal denoising performance. This article makes three main contributions. First, we introduce a new tensor rank, the tensor fibered rank, by generalizing the t-SVD to the mode-$k$ t-SVD, to achieve a more flexible and accurate HSI characterization. Since directly minimizing the fibered rank is NP-hard, we suggest a three-directional tensor nuclear norm (3DTNN) and a three-directional log-based tensor nuclear norm (3DLogTNN) as its convex and nonconvex relaxations, respectively, to provide an efficient numerical solution. Second, we propose a fibered rank minimization model for HSI mixed noise removal, in which the underlying HSI is modeled as a low-fibered-rank component. Third, we develop an efficient alternating direction method of multipliers (ADMM)-based algorithm to solve the proposed model; in particular, each subproblem within ADMM is proven to have a closed-form solution, even though 3DLogTNN is nonconvex. Extensive experimental results demonstrate that the proposed method has superior denoising performance compared with state-of-the-art competing methods based on low-rank matrix/tensor approximation and noise modeling.

150 citations
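The tensor tubal rank underlying this line of work can be computed directly: apply an FFT along the tubes, take a matrix SVD of each Fourier-domain frontal slice, and record the largest slice rank. A minimal NumPy sketch of that computation follows; the mode-$k$ generalization used for the fibered rank would apply the same steps along the other modes, and the function names here are illustrative, not from the paper.

```python
import numpy as np

def t_product(A, B):
    """t-product of two third-order tensors: slice-wise matrix products
    in the Fourier domain along the third (tube) axis."""
    Af = np.fft.fft(A, axis=2)
    Bf = np.fft.fft(B, axis=2)
    Cf = np.einsum('ijk,jlk->ilk', Af, Bf)  # per-slice matrix multiply
    return np.real(np.fft.ifft(Cf, axis=2))

def tubal_rank(X, tol=1e-8):
    """Tubal rank = largest matrix rank among the Fourier-domain
    frontal slices of X (the number of nonzero singular tubes)."""
    Xf = np.fft.fft(X, axis=2)
    return max(np.linalg.matrix_rank(Xf[:, :, k], tol=tol)
               for k in range(X.shape[2]))

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3, 8))
B = rng.standard_normal((3, 20, 8))
X = t_product(A, B)   # a tensor of tubal rank at most 3
print(tubal_rank(X))  # 3
```

The t-product above is exactly the construction that makes the t-SVD a proper tensor factorization: in the Fourier domain it reduces to ordinary per-slice matrix algebra.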


Journal ArticleDOI
TL;DR: Experimental results for hyperspectral, video, and face datasets have shown that the recovery performance for the robust tensor completion problem by using transformed tensor SVD is better in peak signal-to-noise ratio than that by using the Fourier transform and other robust tensor completion methods.

107 citations


Journal ArticleDOI
TL;DR: In this article, a framelet representation of the tensor nuclear norm was developed for third-order tensor recovery, and the proposed minimization model is convex and global minimizers can be obtained.
Abstract: The main aim of this paper is to develop a framelet representation of the tensor nuclear norm for third-order tensor recovery. In the literature, the tensor nuclear norm can be computed by using tensor singular value decomposition based on the discrete Fourier transform matrix, and tensor completion can be performed by minimizing the tensor nuclear norm, which is the relaxation of the sum of matrix ranks from all Fourier-transformed matrix frontal slices. These Fourier-transformed matrix frontal slices are obtained by applying the discrete Fourier transform on the tubes of the original tensor. In this paper, we propose to employ the framelet representation of each tube so that a framelet-transformed tensor can be constructed. Because of framelet basis redundancy, each tube has a sparse representation. When the matrix slices of the original tensor are highly correlated, we expect the corresponding sum of matrix ranks from all framelet-transformed matrix frontal slices to be small, so that the resulting tensor completion performs much better. The proposed minimization model is convex and global minimizers can be obtained. Numerical results on several types of multi-dimensional data (videos, multispectral images, and magnetic resonance imaging data) show that the proposed method outperforms the other tested methods.

102 citations
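Minimizing a (transformed) tensor nuclear norm of this kind is typically done slice by slice with the proximal operator of the matrix nuclear norm, i.e. singular value thresholding. A hedged NumPy sketch of that core step on a synthetic low-rank-plus-noise matrix (the framelet transform itself is omitted):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of the nuclear
    norm. Soft-thresholds the singular values and resynthesizes."""
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vh

# rank-4 signal plus small dense noise; thresholding at tau = 1 removes
# the noise singular values and keeps the four dominant ones
rng = np.random.default_rng(1)
M = rng.standard_normal((30, 4)) @ rng.standard_normal((4, 30)) \
    + 0.01 * rng.standard_normal((30, 30))
L = svt(M, tau=1.0)
print(np.linalg.matrix_rank(L, tol=1e-6))  # 4
```

In the framelet setting the same operator is applied to every framelet-transformed frontal slice inside each iteration of the completion solver.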


Journal ArticleDOI
TL;DR: This work proposes a new formulation of logistic PCA which extends Pearson’s formulation of a low dimensional data representation with minimum error to binary data and derives explicit solutions for data matrices of special structure and provides a computationally efficient algorithm for solving for the principal component loadings.

99 citations


Journal ArticleDOI
TL;DR: In this article, a novel lossy compression algorithm for multidimensional data over regular grids is proposed, which leverages the higher-order singular value decomposition (HOSVD), a generalization of the SVD to three dimensions and higher, together with bit-plane, run-length and arithmetic coding to compress the HOSVD transform coefficients.
Abstract: Memory and network bandwidth are decisive bottlenecks when handling high-resolution multidimensional data sets in visualization applications, and they increasingly demand suitable data compression strategies. We introduce a novel lossy compression algorithm for multidimensional data over regular grids. It leverages the higher-order singular value decomposition (HOSVD), a generalization of the SVD to three dimensions and higher, together with bit-plane, run-length and arithmetic coding to compress the HOSVD transform coefficients. Our scheme degrades the data particularly smoothly and achieves lower mean squared error than other state-of-the-art algorithms at low-to-medium bit rates, as it is required in data archiving and management for visualization purposes. Further advantages of the proposed algorithm include very fine bit rate selection granularity and the ability to manipulate data at very small cost in the compression domain, for example to reconstruct filtered and/or subsampled versions of all (or selected parts) of the data set.

91 citations
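The HOSVD at the heart of this compressor can be sketched in a few lines: one matrix SVD per mode unfolding gives the factor matrices, and projecting onto the leading singular vectors gives a truncated core. The bit-plane, run-length, and arithmetic coding stages are omitted here; this shows only the transform step, with illustrative function names.

```python
import numpy as np

def unfold(X, mode):
    """Mode-m unfolding: move mode m to the front and flatten the rest."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def hosvd(X, ranks):
    """Truncated higher-order SVD: factor matrices from the SVD of each
    mode unfolding, core tensor from projecting X onto those bases."""
    U = [np.linalg.svd(unfold(X, m), full_matrices=False)[0][:, :r]
         for m, r in enumerate(ranks)]
    core = X
    for m, Um in enumerate(U):
        core = np.moveaxis(
            np.tensordot(Um.T, np.moveaxis(core, m, 0), axes=1), 0, m)
    return core, U

def reconstruct(core, U):
    X = core
    for m, Um in enumerate(U):
        X = np.moveaxis(np.tensordot(Um, np.moveaxis(X, m, 0), axes=1), 0, m)
    return X

rng = np.random.default_rng(2)
X = rng.standard_normal((12, 10, 8))
core, U = hosvd(X, (6, 5, 4))
Xhat = reconstruct(core, U)
stored = core.size + sum(u.size for u in U)
print(Xhat.shape, stored)  # (12, 10, 8) and 274 stored values vs 960 in X
```

The compressor then quantizes and entropy-codes the core coefficients rather than storing them in floating point.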


Journal ArticleDOI
TL;DR: The proposed autoencoder-LSTM method is compared with non-intrusive reduced order models based on dynamic mode decomposition (DMD) and proper orthogonal decomposition and shown to be considerably capable of predicting fluid flow evolution.
Abstract: Unsteady fluid systems are nonlinear high-dimensional dynamical systems that may exhibit multiple complex phenomena in both time and space. Reduced Order Modeling (ROM) of fluid flows has been an active research topic in the recent decade with the primary goal to decompose complex flows into a set of features most important for future state prediction and control, typically using a dimensionality reduction technique. In this work, a novel data-driven technique based on the power of deep neural networks for ROM of unsteady fluid flows is introduced. An autoencoder network is used for nonlinear dimension reduction and feature extraction as an alternative for singular value decomposition (SVD). Then, the extracted features are used as an input for a long short-term memory (LSTM) network to predict the velocity field at future time instances. The proposed autoencoder-LSTM method is compared with non-intrusive reduced order models based on dynamic mode decomposition (DMD) and proper orthogonal decomposition. Moreover, an autoencoder-DMD algorithm is introduced for ROM, which uses the autoencoder network for dimensionality reduction rather than SVD rank truncation. The results show that the autoencoder-LSTM method is considerably capable of predicting fluid flow evolution, where higher values for the coefficient of determination $R^2$ are obtained using autoencoder-LSTM compared to other models.

91 citations
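The SVD baseline that the autoencoder replaces here is proper orthogonal decomposition: rank-truncate the SVD of a snapshot matrix whose columns are the flow field at successive times. A small self-contained illustration on a synthetic two-wave "flow" (not the paper's data):

```python
import numpy as np

def pod_truncate(snapshots, r):
    """POD via truncated SVD of the snapshot matrix (columns = time steps).
    Returns the r leading spatial modes and the reduced coordinates."""
    U, s, Vh = np.linalg.svd(snapshots, full_matrices=False)
    modes = U[:, :r]                 # spatial basis
    coords = s[:r, None] * Vh[:r]    # low-dimensional trajectory in time
    return modes, coords

# synthetic "flow": two traveling waves sampled on a 1-D grid;
# each wave contributes two rank-1 terms, so the field is exactly rank 4
x = np.linspace(0, 2 * np.pi, 200)
t = np.linspace(0, 10, 120)
snaps = (np.sin(x[:, None] - t[None, :])
         + 0.5 * np.cos(2 * x[:, None] + 3 * t[None, :]))
modes, coords = pod_truncate(snaps, r=4)
recon = modes @ coords
err = np.linalg.norm(recon - snaps) / np.linalg.norm(snaps)
print(err)  # near machine precision: the field spans a 4-D subspace
```

In the paper's pipeline, `coords` is what the LSTM would learn to advance in time; the autoencoder variant replaces the linear map `modes` with a nonlinear encoder/decoder pair.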


Journal ArticleDOI
TL;DR: The improved VMD method after parameter optimization can extract the early failure characteristics of rolling bearings more distinctly, and the fault diagnosis model based on this method has higher accuracy and application value.

89 citations


Journal ArticleDOI
TL;DR: In the proposed GLTA framework, the tensor singular value decomposition-based tensor nuclear norm is adopted to explore the high-order cross-view correlations and the manifold regularization is exploited to preserve the local structures embedded in high-dimensional space.

88 citations


Journal ArticleDOI
TL;DR: Surrogate-based optimization has been used in aerodynamic shape optimization, but it has been limited due to the curse of dimensionality.
Abstract: Surrogate-based optimization has been used in aerodynamic shape optimization, but it has been limited due to the curse of dimensionality. Although a large number of variables are required for the s...

83 citations


Journal ArticleDOI
Quanxue Gao, Wei Xia, Zhizhen Wan, Deyan Xie, Pu Zhang
03 Apr 2020
TL;DR: By exploiting the high-order correlations embedded in different views, the WTNNM method is shown to be superior to several state-of-the-art multi-view subspace clustering methods in terms of performance.
Abstract: Low-rank representation based on tensor-Singular Value Decomposition (t-SVD) has achieved impressive results for multi-view subspace clustering, but it does not deal well with noise and illumination changes embedded in multi-view data. The major reason is that all the singular values have the same contribution in the tensor nuclear norm based on the t-SVD, which does not make sense in the presence of noise and illumination changes. To improve the robustness and clustering performance, we study the weighted tensor nuclear norm based on the t-SVD and develop an efficient algorithm to optimize the weighted tensor nuclear norm minimization (WTNNM) problem. We further apply the WTNNM algorithm to multi-view subspace clustering by exploiting the high-order correlations embedded in different views. Extensive experimental results reveal that our WTNNM method is superior to several state-of-the-art multi-view subspace clustering methods in terms of performance.

82 citations


Journal ArticleDOI
TL;DR: Through comparison with the traditional AUKF, it can be concluded that the proposed method achieves precise and stable SOC estimation even when the error covariance matrix is non-positive definite.
Abstract: Precise state of charge (SOC) estimation is crucial to assure the safe and reliable operation of lithium-ion batteries in electric vehicles. The adaptive unscented Kalman filter (AUKF) has been intensively applied to estimate SOC due to its features of self-correction and high accuracy. Nevertheless, estimation by the traditional AUKF cannot proceed when the error covariance matrix is non-positive definite, greatly influencing the stability of SOC estimation. To address this issue, an improved AUKF is proposed in this paper. Firstly, forgetting-factor recursive least squares is employed to identify the parameters of an electrical equivalent circuit model online. With these identified parameters, an improved AUKF, in which the Cholesky decomposition of the error covariance matrix in the traditional AUKF is replaced by singular value decomposition, is applied to provide accurate online SOC estimation. The feasibility of the proposed method is verified by experimental data under the Federal Urban Driving Schedule test. The robustness validation results show that the algorithm has satisfactory robustness against an inaccurate initial SOC. Moreover, through comparison with the traditional AUKF, it can be concluded that the proposed method achieves precise and stable SOC estimation even when the error covariance matrix is non-positive definite.
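The central substitution (an SVD-based square root in place of the Cholesky factor when generating sigma points) can be sketched as follows. This is a generic illustration of the idea, not the authors' full AUKF: for a symmetric positive semidefinite P, S = U sqrt(Sigma) satisfies S Sᵀ = P, and the SVD still returns a usable factor in cases where Cholesky fails outright.

```python
import numpy as np

def sigma_points(mean, P, kappa=0.0):
    """Unscented-transform sigma points using an SVD-based square root of P.
    For symmetric PSD P, S = U @ sqrt(diag(s)) satisfies S @ S.T == P; for a
    numerically non-positive-definite P it still yields usable points where
    Cholesky would raise an error."""
    n = len(mean)
    U, s, _ = np.linalg.svd(P)
    S = U @ np.diag(np.sqrt(s))
    pts = [mean]
    for i in range(n):
        col = np.sqrt(n + kappa) * S[:, i]
        pts.append(mean + col)
        pts.append(mean - col)
    return np.array(pts)

# Cholesky rejects this (numerically) non-positive-definite covariance,
# while the SVD route still produces a full set of sigma points
P = np.array([[1.0, 0.0], [0.0, -1e-12]])
try:
    np.linalg.cholesky(P)
    print("cholesky ok")
except np.linalg.LinAlgError:
    print("cholesky failed, SVD fallback used")
pts = sigma_points(np.zeros(2), P)
print(pts.shape)  # (5, 2)
```

Note the hedge: for an indefinite P the SVD factor effectively projects onto the nearest PSD structure rather than reproducing P exactly, which is precisely the regularizing behavior exploited for stability.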

Journal ArticleDOI
TL;DR: In this paper, a generalized tensor function according to the tensor singular value decomposition (T-SVD) is defined, from which the projection operators and Moore-Penrose inverse of tensors are obtained.

Journal ArticleDOI
TL;DR: The proposed algorithm not only improves the clarity and continuity of ridge structures but also removes the background and blurred regions of a fingerprint image to achieve higher fingerprint classification accuracy than related methods can.
Abstract: Fingerprint image enhancement is a key aspect of an automated fingerprint identification system. This paper describes an effective algorithm based on a novel lighting compensation scheme. The scheme involves the use of adaptive higher-order singular value decomposition on a tensor of wavelet subbands of a fingerprint (AHTWF) image to enhance the quality of the image. The algorithm consists of three stages. The first stage is the decomposition of an input fingerprint image of size 2M × 2N into four subbands at the first level by applying a two-dimensional discrete wavelet transform. In the second stage, we construct a tensor in ℝ^{M×N×4}. The tensor contains four wavelet subbands that serve as four frontal planes. Furthermore, the tensor is decomposed through higher-order singular value decomposition to separate the fingerprint's wavelet subbands into detailed individual components. In the third stage, a compensated image is produced by adaptively obtaining the compensation coefficient for each frontal plane of the tensor based on the reference Gaussian template. The experimental results indicated that the quality of the AHTWF image was higher than that of the original image. The proposed algorithm not only improves the clarity and continuity of ridge structures but also removes the background and blurred regions of a fingerprint image. Therefore, this algorithm can achieve higher fingerprint classification accuracy than related methods can.
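The first stage, splitting a 2M × 2N image into four wavelet subbands that become the frontal planes of an M × N × 4 tensor, can be sketched with a plain Haar filter. The paper does not commit to a specific wavelet in this excerpt, so treat the Haar choice (and the function name) as an assumption:

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar wavelet transform, producing the four
    subbands (LL, LH, HL, HH) stacked as the frontal planes of an
    M x N x 4 tensor, as in the paper's second stage."""
    a = (img[0::2] + img[1::2]) / 2.0   # row-pair averages
    d = (img[0::2] - img[1::2]) / 2.0   # row-pair details
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return np.stack([LL, LH, HL, HH], axis=2)

img = np.arange(64, dtype=float).reshape(8, 8)  # stand-in "fingerprint"
T = haar_dwt2(img)
print(T.shape)  # (4, 4, 4): an M x N x 4 tensor ready for the HOSVD stage
```

The resulting tensor `T` is what the adaptive HOSVD of the second stage would then decompose, one factor per frontal plane.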

Journal ArticleDOI
TL;DR: The proposed sparse low-rank (SLR) method, which sparsifies SVD matrices to achieve a better compression rate by keeping a lower rank for unimportant neurons, is demonstrated by compressing well-known convolutional neural network based image recognition frameworks trained on popular datasets.

Journal ArticleDOI
TL;DR: This paper proposes the partial sum of the tubal nuclear norm (PSTNN) of a tensor, a surrogate of the tensor tubal multi-rank, and builds two PSTNN-based minimization models for two typical tensor recovery problems, i.e., tensor completion and tensor principal component analysis.

Journal ArticleDOI
TL;DR: The experimental results show the capacity of the FrCMs proposed for image reconstruction and image watermarking against different attacks such as noise and geometric distortions.

Journal ArticleDOI
TL;DR: Multivariate analysis of the EEG signal is performed for the detection of the schizophrenia condition, and five entropy measures computed from the IMF signals showed a significant difference.

Journal ArticleDOI
TL;DR: An adaptive dictionary is designed to bridge the gap between group-based sparse coding (GSC) and rank minimization; weighted Schatten $p$-norm minimization (WSNM) is found to be the closest surrogate to the real singular values of each patch group and is translated into a non-convex weighted $\ell_p$-norm minimization problem in GSC.
Abstract: Sparse coding has achieved great success in various image processing tasks. However, a benchmark to measure the sparsity of an image patch/group is missing, since sparse coding is essentially an NP-hard problem. This work attempts to fill the gap from the perspective of rank minimization. We first design an adaptive dictionary to bridge the gap between group-based sparse coding (GSC) and rank minimization. Then, we show that under the designed dictionary, GSC and the rank minimization problems are equivalent, and therefore the sparse coefficients of each patch group can be measured by estimating the singular values of each patch group. We thus obtain a benchmark to measure the sparsity of each patch group, because the singular values of the original image patch groups can be easily computed by the singular value decomposition (SVD). This benchmark can be used to evaluate the performance of any norm minimization method in sparse coding by analyzing its corresponding rank minimization counterpart. To this end, we exploit four well-known rank minimization methods to study the sparsity of each patch group, and weighted Schatten $p$-norm minimization (WSNM) is found to be the closest to the real singular values of each patch group. Inspired by the aforementioned equivalence of rank minimization and GSC, WSNM can be translated into a non-convex weighted $\ell_p$-norm minimization problem in GSC. By using the obtained benchmark, the weighted $\ell_p$-norm minimization is expected to obtain better performance than the three other norm minimization methods, i.e., $\ell_1$-norm, $\ell_p$-norm, and weighted $\ell_1$-norm. To verify the feasibility of the proposed benchmark, we compare weighted $\ell_p$-norm minimization against the three aforementioned norm minimization methods in sparse coding.
Experimental results on image restoration applications, namely image inpainting and image compressive sensing recovery, demonstrate that the proposed scheme is feasible and outperforms many state-of-the-art methods.
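The WSNM step singled out above reduces, per patch group, to weighted soft-thresholding of singular values, with smaller thresholds on dominant components. A sketch under an illustrative inverse-magnitude weighting rule (the weights here are not the paper's exact schedule):

```python
import numpy as np

def weighted_svt(M, w):
    """Weighted singular value thresholding: each singular value gets its
    own shrinkage amount, so dominant structure is penalized less. This is
    the per-patch-group proximal step behind WSNM-style minimization."""
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - w, 0.0)) @ Vh

rng = np.random.default_rng(3)
M = rng.standard_normal((40, 5)) @ rng.standard_normal((5, 40)) \
    + 0.05 * rng.standard_normal((40, 40))
s = np.linalg.svd(M, compute_uv=False)
# illustrative reweighting: threshold inversely proportional to magnitude,
# so a singular value survives exactly when it exceeds 1
w = 1.0 / (s + 1e-8)
L = weighted_svt(M, w)
print(np.linalg.matrix_rank(L, tol=1e-6))  # 5: only the signal part survives
```

With uniform weights this collapses to plain nuclear-norm shrinkage ($\ell_1$ on singular values); the nonuniform weights are what make the surrogate track the true singular values more closely.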

Journal ArticleDOI
TL;DR: A novel tensor rank is defined, the tensor $N$-tubal rank, as a vector whose elements contain the tubal rank of all mode-$k_1k_2$ unfolding tensors, to depict the correlations along different modes.

Journal ArticleDOI
TL;DR: This paper proposes a novel RPCA model based on matrix tri-factorization, which only requires computing SVDs of very small matrices; this reduces the complexity of RPCA to linear and makes it fully scalable.

Journal ArticleDOI
TL;DR: A fast learning method is proposed, for which the time complexity is determined by the number of observed entries in the data matrix rather than the matrix size; the key idea is to apply truncated singular value decomposition to the weight matrix to obtain a more compact representation of the weights.
Abstract: Matrix factorization (MF) has been widely used to discover the low-rank structure and to predict the missing entries of data matrix. In many real-world learning systems, the data matrix can be very high dimensional but sparse. This poses an imbalanced learning problem since the scale of missing entries is usually much larger than that of the observed entries, but they cannot be ignored due to the valuable negative signal. For efficiency concern, existing work typically applies a uniform weight on missing entries to allow a fast learning algorithm. However, this simplification will decrease modeling fidelity, resulting in suboptimal performance for downstream applications. In this paper, we weight the missing data nonuniformly, and more generically, we allow any weighting strategy on the missing data. To address the efficiency challenge, we propose a fast learning method, for which the time complexity is determined by the number of observed entries in the data matrix rather than the matrix size. The key idea is twofold: 1) we apply truncated singular value decomposition on the weight matrix to get a more compact representation of the weights and 2) we learn MF parameters with elementwise alternating least squares (eALS) and memorize the key intermediate variables to avoid repeating computations that are unnecessary. We conduct extensive experiments on two recommendation benchmarks, demonstrating the correctness, efficiency, and effectiveness of our fast eALS method.
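The first key idea can be illustrated directly: if the missing-entry weight matrix W is (near) low-rank, a truncated SVD W ≈ PQᵀ lets the sums over all missing entries in eALS be reorganized around small r × r Gram matrices instead of touching every cell. A sketch with a synthetic rank-2 popularity-style weighting (illustrative, not the paper's datasets):

```python
import numpy as np

def truncate_weights(W, r):
    """Rank-r factorization of the missing-data weight matrix via truncated
    SVD, W ~= P @ Q.T. In this form, weighted sums over all m*n missing
    entries can be cached in r x r Gram matrices, so the per-iteration cost
    is driven by the observed entries rather than the matrix size."""
    U, s, Vh = np.linalg.svd(W, full_matrices=False)
    P = U[:, :r] * np.sqrt(s[:r])
    Q = Vh[:r].T * np.sqrt(s[:r])
    return P, Q

# a base weight plus a user-activity-times-item-popularity term:
# an exactly rank-2 weighting, standing in for smoother choices
m, n = 50, 80
user_act = np.linspace(0.2, 1.0, m)
item_pop = np.linspace(0.1, 1.0, n)
W = 0.2 + np.outer(user_act, item_pop)
P, Q = truncate_weights(W, r=2)
err = np.linalg.norm(P @ Q.T - W) / np.linalg.norm(W)
print(err)  # near machine precision: W is exactly rank 2
```

The second key idea of the paper, memoizing intermediate eALS variables, then operates on these small factors rather than on W itself.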


Journal ArticleDOI
TL;DR: The proposed watermarking algorithm is highly resistant to a variety of image processing attacks and error-free in the absence of attack, and outperforms existing SVD-based schemes in terms of imperceptibility and robustness at a payload capacity of 1/16 bit per pixel.

Proceedings ArticleDOI
12 Oct 2020
TL;DR: NumPyWren, a system for linear algebra built on a disaggregated serverless programming model, is presented together with LAmbdaPACK, a companion domain-specific language designed for serverless execution of highly parallel linear algebra algorithms.
Abstract: Datacenter disaggregation provides numerous benefits to both the datacenter operator and the application designer. However, switching from the server-centric model to a disaggregated model requires developing new programming abstractions that can achieve high performance while benefiting from the greater elasticity. To explore the limits of datacenter disaggregation, we study an application area that near-maximally benefits from current server-centric datacenters: dense linear algebra. We build NumPyWren, a system for linear algebra built on a disaggregated serverless programming model, and LAmbdaPACK, a companion domain-specific language designed for serverless execution of highly parallel linear algebra algorithms. We show that, for a number of linear algebra algorithms such as matrix multiply, singular value decomposition, Cholesky decomposition, and QR decomposition, NumPyWren's performance (completion time) is within a factor of 2 of optimized server-centric MPI implementations, and has up to 15% greater compute efficiency (total CPU-hours), while providing fault tolerance.

Journal ArticleDOI
TL;DR: Dynamic mode decomposition has emerged as a leading technique to identify spatiotemporal coherent structures from high-dimensional data, benefiting from a strong connection to nonlinear dynamical s....
Abstract: Dynamic mode decomposition has emerged as a leading technique to identify spatiotemporal coherent structures from high-dimensional data, benefiting from a strong connection to nonlinear dynamical s...

Journal ArticleDOI
TL;DR: In this paper, a variational quantum circuit that produces the singular value decomposition of a bipartite pure state is presented, which preserves entanglement between the parties and acts as a diagonalizer that delivers the eigenvalues of the Schmidt decomposition.
Abstract: We present a variational quantum circuit that produces the singular value decomposition of a bipartite pure state. The proposed circuit, which we name quantum singular value decomposer or QSVD, is made of two unitaries respectively acting on each part of the system. The key idea of the algorithm is to train this circuit so that the final state displays exact output coincidence from both subsystems for every measurement in the computational basis. Such circuit preserves entanglement between the parties and acts as a diagonalizer that delivers the eigenvalues of the Schmidt decomposition. Our algorithm only requires measurements in one single setting, in striking contrast to the $3^n$ settings required by state tomography. Furthermore, the adjoints of the unitaries making the circuit are used to create the eigenvectors of the decomposition up to a global phase. Some further applications of QSVD are readily obtained. The proposed QSVD circuit allows us to construct a SWAP between the two parties of the system without the need of any quantum gate communicating them. We also show that a circuit made with QSVD and CNOTs acts as an encoder of information of the original state onto one of its parties. This idea can be reversed and used to create random states with a precise entanglement structure.
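Classically, the quantity the QSVD circuit targets is obtained by reshaping the amplitude vector of the bipartite state into a matrix and taking its SVD: the singular values are the Schmidt coefficients. A NumPy check on two textbook states (function name is illustrative):

```python
import numpy as np

def schmidt(state, dA, dB):
    """Schmidt decomposition of a bipartite pure state: reshape the
    amplitude vector into a dA x dB matrix and take its SVD. The singular
    values are the Schmidt coefficients (their squares sum to 1)."""
    M = state.reshape(dA, dB)
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    return s, U, Vh.conj().T

# Bell state (|00> + |11>)/sqrt(2): two equal Schmidt coefficients,
# i.e. maximal entanglement
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
s, _, _ = schmidt(bell, 2, 2)
print(s)  # two values of 1/sqrt(2)

# product state |0>|+>: a single nonzero Schmidt coefficient
prod = np.kron([1, 0], [1, 1]) / np.sqrt(2)
s, _, _ = schmidt(prod, 2, 2)
print(np.round(s, 12))  # [1. 0.]
```

The columns of `U` and of `Vh.conj().T` are the local Schmidt bases, which is exactly what the two trained unitaries of the QSVD circuit (up to phases) are meant to rotate into the computational basis.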

Journal ArticleDOI
TL;DR: Experimental results and security analyses show that the proposed color image encryption scheme offers high security and fast speed, and can resist various common attacks.
Abstract: To realize real-time image encryption, a fast color image encryption scheme by combining 3D orthogonal Latin squares (3D-OLSs) with matching matrix is proposed. The 3D-OLSs represent that each plane of two matrices must be Latin square and the corresponding planes of the two matrices must satisfy orthogonality. The matching matrix is to produce a matrix orthogonal with the 3D Latin square. In the permutation process, a new 3D permutation method with 3D-OLSs and matching matrix is devised. The proposed scheme could save encryption time to a certain degree, since the orthogonal Latin squares are defined over integers directly. In the diffusion process, to solve the diffuse problem between two planes in the 3D matrix, some matrices of the diffusion process are changed with three variables. Experimental results and security analyses show that the proposed color image encryption scheme has high security, fast speed and could resist various common attacks.

Journal ArticleDOI
TL;DR: Simulation results show that the proposed SVD-ESN model obtains better performance in terms of prediction accuracy, which confirms that the suggested ESN can be used as an effective dynamic model for developing accurate soft sensors.

Journal ArticleDOI
TL;DR: In this paper, a robust and fast matrix completion method based on the maximum correntropy criterion (MCC) is proposed, in which a correntropy-based error measure is utilized instead of the $l_2$-based error norm to improve robustness against noise.
Abstract: Robust matrix completion aims to recover a low-rank matrix from a subset of noisy entries perturbed by complex noises. Traditional matrix completion algorithms are always based on $l_2$-norm minimization and are sensitive to non-Gaussian noise with outliers. In this paper, we propose a novel robust and fast matrix completion method based on the maximum correntropy criterion (MCC). The correntropy-based error measure is utilized instead of the $l_2$-based error norm to improve robustness against noise. By using the half-quadratic optimization technique, the correntropy-based optimization can be transformed into a weighted matrix factorization problem. Two efficient algorithms are then derived: an alternating minimization-based algorithm and an alternating gradient descent-based algorithm. These algorithms do not require the singular value decomposition (SVD) to be calculated for each iteration. Furthermore, an adaptive kernel width selection strategy is proposed to accelerate the convergence speed as well as improve the performance. A comparison with existing robust matrix completion algorithms is provided by simulations and shows that the new methods can achieve better performance than the existing state-of-the-art algorithms.
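The half-quadratic reformulation mentioned above turns the correntropy objective into iteratively reweighted least squares: each residual receives a Gaussian-kernel weight, so gross outliers are effectively ignored. A sketch of that weighting step (the kernel width is fixed by hand here, whereas the paper selects it adaptively):

```python
import numpy as np

def correntropy_weights(residuals, sigma):
    """Half-quadratic weights for the maximum correntropy criterion:
    a Gaussian kernel of each residual. Inliers get weight ~1, large
    outliers get weight ~0, so each iteration reduces to an ordinary
    weighted least-squares / weighted factorization subproblem."""
    return np.exp(-residuals ** 2 / (2 * sigma ** 2))

# inlier residuals keep nearly full weight; the gross outlier is ignored
res = np.array([0.1, -0.2, 0.05, 8.0])
w = correntropy_weights(res, sigma=1.0)
print(w)  # first three close to 1, last one vanishingly small
```

Inside the paper's alternating minimization, these weights multiply the observed-entry residuals of the factorization, which is why no per-iteration SVD is needed.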

Journal ArticleDOI
TL;DR: A new SVD-based image watermarking scheme that uses a chaotic map is proposed that improves security and overcomes FPP issues, achieves high robustness with different scaling factors, and outperforms several existing schemes.
Abstract: Image watermarking schemes based on singular value decomposition (SVD) have become popular due to a good trade-off between robustness and imperceptibility. However, the false positive problem (FPP) is the main drawback of SVD-based watermarking schemes. The singular value is the main cause of FPP issues because it is a fixed value that does not hold structural information of an image. In this paper, a new SVD-based image watermarking scheme that uses a chaotic map is proposed to overcome this issue. The secret key is first extracted from both the host and watermark image. This key is used to generate a new chaotic matrix and chaotic multiple scaling factors (CMSF) to increase the sensitivity of the proposed scheme. The watermark image is then transformed based on the chaotic matrix before being directly embedded into the singular value of the host image by using the CMSF. The extracted secret key is unique to the host and the watermark images, which improves security and overcomes FPP issues. Experimental results show that the proposed scheme fulfils all watermarking requirements in terms of robustness, imperceptibility, security, and payload. Furthermore, it achieves high robustness with different scaling factors, and outperforms several existing schemes.
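The SVD-domain embedding that schemes of this family share can be sketched generically: add a scaled watermark to the host's singular-value matrix and resynthesize. This is a bare-bones illustration, not the paper's scheme (the chaotic matrix and CMSF steps are omitted, and `alpha` is an illustrative scaling factor):

```python
import numpy as np

def embed(host, mark, alpha=0.05):
    """Generic SVD-domain embedding: add the scaled watermark to the
    host's singular-value matrix and resynthesize the image."""
    U, s, Vh = np.linalg.svd(host, full_matrices=False)
    S = np.diag(s) + alpha * mark
    return U @ S @ Vh, (U, s, Vh)

def extract(watermarked, keys, alpha=0.05):
    """Project back into the host's SVD basis and remove the original
    singular values to recover the watermark."""
    U, s, Vh = keys
    S_w = U.T @ watermarked @ Vh.T
    return (S_w - np.diag(s)) / alpha

rng = np.random.default_rng(4)
host = rng.standard_normal((32, 32))
mark = rng.standard_normal((32, 32))
wm, keys = embed(host, mark)
rec = extract(wm, keys)
print(np.allclose(rec, mark))  # True
```

Note that keeping U and V as extraction keys is precisely what opens the door to the false positive problem this paper addresses: an attacker who substitutes their own keys can "extract" an arbitrary watermark, which is why the scheme above binds the key to both images via the chaotic map.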