
Showing papers on "Singular value decomposition published in 2021"


Journal ArticleDOI
TL;DR: A robust three-branch model with triplet module and matrix Fisher distribution module is proposed to address head pose estimation problems and achieves state-of-the-art performance in comparison with traditional methods.
Abstract: Head pose estimation suffers from several problems, including low pose tolerance under different disturbances and ambiguity arising from common head pose representations. In this study, a robust three-branch model with a triplet module and a matrix Fisher distribution module is proposed to address these problems. Based on metric learning, the triplet module employs a triplet architecture and triplet loss. It is implemented to maximize the distance between embeddings of different pose pairs and minimize the distance between embeddings of the same pose pairs, and it learns a highly discriminative and robust embedding related to head pose. Moreover, the rotation matrix, instead of the Euler angle or unit quaternion, is utilized to represent head pose. An exponential probability density model based on the rotation matrix (referred to as the matrix Fisher distribution) is developed to model head rotation uncertainty. The matrix Fisher distribution can further analyze the head pose, and its maximum likelihood obtained using singular value decomposition provides enhanced accuracy. Extensive experiments conducted on the AFLW2000 and BIWI datasets demonstrate that the proposed model achieves state-of-the-art performance in comparison with traditional methods.

120 citations
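The SVD step in the abstract above has a standard closed form: the maximum-likelihood rotation of a matrix Fisher distribution is the projection of its parameter matrix onto SO(3). A minimal NumPy sketch (the parameter matrix `F` and noise level are illustrative; the paper's network and triplet module are out of scope):

```python
import numpy as np

def fisher_mode(F):
    """Maximum-likelihood rotation of a matrix Fisher distribution with
    parameter matrix F: project F onto SO(3) via its SVD,
    R = U diag(1, 1, det(U V^T)) V^T."""
    U, _, Vt = np.linalg.svd(F)
    d = np.sign(np.linalg.det(U @ Vt))   # flip one axis if needed so det(R) = +1
    return U @ np.diag([1.0, 1.0, d]) @ Vt

rng = np.random.default_rng(0)
F = np.eye(3) + 0.1 * rng.standard_normal((3, 3))  # a noisy rotation estimate
R = fisher_mode(F)
```

The `diag(1, 1, det(UV^T))` correction guarantees a proper rotation (determinant +1) rather than a reflection.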


Journal ArticleDOI
TL;DR: A new improved KPLS method, which considers the KPI-related information in the residual subspace, has been proposed for KPI-related process monitoring; it performs generalized singular value decomposition on the calculable loadings based on the kernel matrix.
Abstract: Although the partial least squares approach is an effective fault detection method, some issues of nonlinear process monitoring related to key performance indicators (KPIs) still exist. To address the nonlinear characteristics in industrial processes, the kernel partial least squares (KPLS) method was proposed in the literature. However, the KPLS method also faces some difficulties in fault detection. None of the existing KPLS methods can accurately decompose measurements into KPI-related and KPI-unrelated parts, and these methods usually ignore the fact that the residual subspace still contains some KPI-related information. In this article, a new improved KPLS method, which considers the KPI-related information in the residual subspace, has been proposed for KPI-related process monitoring. First, the proposed method performs generalized singular value decomposition (GSVD) on the calculable loadings based on the kernel matrix. Next, the kernel matrix can be suitably divided into KPI-related and KPI-unrelated subspaces. In addition, we present the design of two statistics for process monitoring as well as a detailed algorithm performance analysis for kernel methods. Finally, a numerical case and the Tennessee Eastman benchmark process demonstrate the efficacy and merits of the improved KPLS-based method.

114 citations


Journal ArticleDOI
TL;DR: In this paper, a new bearing fault diagnosis method combining singular value decomposition (SVD) and the squared envelope spectrum (SES) is proposed for detecting early-stage defects of rolling element bearings (REBs) from vibration signals.

83 citations


Journal ArticleDOI
TL;DR: The proposed factorization hinges on the optimal shrinkage/thresholding of the singular values of low-rank tensor unfoldings of nonlocal similar 3-D patches; combined with a global subspace representation, this greatly improves the denoising performance and reduces the computational complexity during processing.
Abstract: The ever-increasing spectral resolution of hyperspectral images (HSIs) is often obtained at the cost of a decrease in the signal-to-noise ratio of the measurements, thus calling for effective denoising techniques. HSIs from the real world lie in low-dimensional subspaces and are self-similar. The low dimensionality stems from the high correlation existing among the reflectance vectors, and self-similarity is common in real-world images. In this article, we exploit the above two properties. The low dimensionality is a global property that enables the denoising to be formulated just with respect to the subspace representation coefficients, thus greatly improving the denoising performance and reducing the computational complexity during processing. The self-similarity is exploited via a low-rank tensor factorization of nonlocal similar 3-D patches. The proposed factorization hinges on the optimal shrinkage/thresholding of the singular values, computed via the singular value decomposition (SVD), of low-rank tensor unfoldings. As a result, the proposed method is user friendly and insensitive to its parameters. Its effectiveness is illustrated in a comparison with state-of-the-art competitors. A MATLAB demo of this work is available at https://github.com/LinaZhuang for the sake of reproducibility.

69 citations
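The singular-value shrinkage that the factorization hinges on can be illustrated in isolation. A sketch with a fixed soft threshold (the paper uses an optimal, data-driven shrinkage; the rank-5 signal `L` and threshold `tau` here are toy choices):

```python
import numpy as np

def svt(X, tau):
    """Singular-value soft-thresholding: shrink every singular value by tau
    and drop those that fall to zero, yielding a low-rank denoised estimate."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(1)
L = rng.standard_normal((40, 5)) @ rng.standard_normal((5, 40))  # rank-5 "clean patch"
Y = L + 0.01 * rng.standard_normal((40, 40))                     # noisy observation
X_hat = svt(Y, tau=0.2)
```

Singular values of the noise stay below the threshold and are zeroed out, while the (much larger) signal singular values survive with a small bias.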


Journal ArticleDOI
TL;DR: In this paper, Biorthogonal Wavelet Transform with Singular Value Decomposition (BWT-SVD)-based feature extraction is applied to find the image forgery.
Abstract: To improve the trustworthiness of assessing digital images by identifying authentic images and tampered images, this work focuses on Copy-Move based image Forgery Detection (CMFD) and classification using an Improved Relevance Vector Machine (IRVM). In this paper, Biorthogonal Wavelet Transform with Singular Value Decomposition (BWT-SVD)-based feature extraction is applied to find the image forgery. The proposed method begins by dividing the test images into overlapping blocks, and then the Biorthogonal Wavelet Transform (BWT) with Singular Value Decomposition (SVD) is applied to extract the feature vector from each block. After that, the feature vectors are sorted and the duplicate vectors are identified by the similarity between two successive vectors. The occurrences of clone vectors are identified on the basis of the Minkowski distance and a threshold value, and the similarity criteria then reveal the existence of forgery in images. To classify images as authentic or forged, an improved version of the Relevance Vector Machine (RVM) is used, which improves the efficiency and accuracy of the forged-image identification process. The performance of the proposed scheme is tested through experiments on the CoMoFoD database. The simulation results show that the proposed IRVM scheme attains higher performance than existing Copy-Move based image Forgery Detection schemes in a MATLAB environment.

65 citations


Journal ArticleDOI
TL;DR: The tensor nature inherited from the array measurement is fully explored, and the coprime geometry enables EMVS–MIMO radar to achieve larger array aperture than the existing uniform linear configuration; thus, the proposed method offers better estimation performance than current state-of-the-art algorithms.
Abstract: The issue of two-dimensional (2D) direction-of-departure and direction-of-arrival estimation for bistatic multiple-input multiple-output (MIMO) radar with a coprime electromagnetic vector sensor (EMVS) is addressed in this paper, and a tensor-based subspace algorithm is proposed. Firstly, the covariance measurement of the received data is arranged into a fourth-order tensor, which can maintain the multi-dimensional characteristic of the received data. Then, the higher-order singular value decomposition is applied to obtain an accurate signal subspace. By utilizing the uniformity of the subarrays in coprime EMVS–MIMO radar, the rotation invariance technique is adopted to achieve ambiguous elevation angle estimation. Thereafter, the unambiguous elevation angles are recovered by exploiting the coprime characteristic of the subarrays. Finally, all azimuth angles are obtained by using the vector cross-product strategy. The tensor nature inherited from the array measurement is fully explored, and the coprime geometry enables EMVS–MIMO radar to achieve a larger array aperture than the existing uniform linear configuration; thus, the proposed method offers better estimation performance than current state-of-the-art algorithms. Several computer simulations validate the effectiveness of the proposed algorithm.

62 citations
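The higher-order SVD used above to extract the signal subspace generalizes the matrix SVD to tensors: one ordinary SVD per mode unfolding, plus a core tensor. A compact sketch for a fourth-order tensor (the radar covariance arrangement itself is not reproduced here; the tensor is random):

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T):
    """Higher-order SVD: one factor matrix per mode (left singular vectors of
    each unfolding) and the core tensor obtained by projecting T onto them."""
    Us = [np.linalg.svd(unfold(T, n), full_matrices=False)[0] for n in range(T.ndim)]
    core = T
    for n, U in enumerate(Us):
        # multiply mode n of the core by U^T
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, n, 0), axes=1), 0, n)
    return core, Us

rng = np.random.default_rng(2)
T = rng.standard_normal((4, 5, 6, 3))   # a fourth-order tensor, as in the paper
core, Us = hosvd(T)
```

Without truncation the factor matrices are square and orthogonal, so multiplying the core back by each `U` recovers `T` exactly; truncating the columns of each `U` gives the subspace approximation.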


Journal ArticleDOI
TL;DR: Experimental and comparative results demonstrated the stability and improved performance of the proposed scheme compared to its parent watermarking schemes, and it is free of false-positive detection error.
Abstract: This paper presents a new intelligent image watermarking scheme based on the discrete wavelet transform (DWT) and singular value decomposition (SVD) using the human visual system (HVS) and particle swarm optimization (PSO). The cover image is transformed by one-level DWT, and the LL sub-band of the transformed image is chosen for embedding. To achieve the highest possible visual quality, the embedding regions are selected based on the HVS. After applying SVD on the selected regions, every two watermark bits are embedded indirectly into the U and $$V^{t}$$ components of the SVD decomposition of the selected regions, instead of embedding one watermark bit into the U component and compensating on the $$V^{t}$$ component, which yields twice the capacity with reasonable imperceptibility. In addition, to increase the robustness without losing transparency, the scaling factors are chosen automatically by PSO based on the attack test results and predefined conditions, instead of using fixed or manually set scaling factors for all cover images. Experimental and comparative results demonstrated the stability and improved performance of the proposed scheme compared to its parent watermarking schemes. Moreover, the proposed scheme is free of false-positive detection error.

55 citations
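Schemes in this family ultimately perturb the SVD factors of image blocks. As a simplified illustration of the idea (not the paper's U/$$V^{t}$$ method): embed one bit per 8×8 block by quantizing the largest singular value, which survives reconstruction and allows blind extraction. The quantization step `q` is a toy choice:

```python
import numpy as np

def embed_bit(block, bit, q=24.0):
    """Quantize the largest singular value onto an even (bit 0) or odd
    (bit 1) multiple of q, then rebuild the block."""
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    s = s.copy()
    s[0] = (2.0 * np.floor(s[0] / (2.0 * q)) + bit + 0.5) * q  # mid-cell value
    return (U * s) @ Vt

def extract_bit(block, q=24.0):
    """Blind extraction: recover the bit from the parity of the quantized cell."""
    s0 = np.linalg.svd(block, compute_uv=False)[0]
    return int(np.floor(s0 / q)) % 2

rng = np.random.default_rng(11)
blocks = rng.random((16, 8, 8)) * 255          # cover-image blocks
bits = rng.integers(0, 2, 16)                  # watermark bits
marked = [embed_bit(b, int(w)) for b, w in zip(blocks, bits)]
```

Because the largest singular value of a natural-image block dominates the rest, shifting it by at most 1.5·q keeps it the largest, so the parity survives the SVD round trip.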


Journal ArticleDOI
TL;DR: The flexible filter design and superior noise reduction abilities of the IWPT and the passband denoising ability of the ISVD are organically combined to form the enhanced singular value decomposition (E-SVD) method, which is verified by the analysis of simulated data and actual cases of rolling bearings.
Abstract: To address two shortcomings of singular value decomposition (SVD), the determination of the reconstruction order and poor noise reduction ability, an enhanced SVD is introduced in this article. The core ideas are as follows. First, an efficient method that determines the reconstruction order of SVD from the relative-change rate of the singular envelope kurtosis is presented, forming the improved SVD (ISVD). Then, a method that selects the optimal node of the wavelet packet transform (WPT) by the criterion of maximum envelope kurtosis is presented, forming the improved WPT (IWPT). The flexible filter design and superior noise reduction abilities of the IWPT and the passband denoising ability of the ISVD are organically combined to form the enhanced singular value decomposition (E-SVD) method. In addition, an indicator is introduced to evaluate the performance of the results. First, the reconstructed signal is obtained by performing ISVD on the original signal. Second, IWPT is executed on the reconstructed signal to obtain the optimal node. Finally, the filtered signal is combined with the envelope power spectrum to extract the bearing fault characteristic frequency. The method's validity and superiority are verified by the analysis of simulated data and actual cases of rolling bearings.

51 citations
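The SVD half of such a pipeline can be sketched on a toy signal: embed the signal in a Hankel matrix, truncate the SVD at an automatically chosen order, and average anti-diagonals back to a signal. The order criterion below (largest relative drop in the singular-value sequence) is a simple stand-in for the paper's relative-change rate of the singular envelope kurtosis:

```python
import numpy as np

def hankel(x, rows):
    """Embed a 1-D signal into a Hankel (trajectory) matrix."""
    cols = len(x) - rows + 1
    return np.array([x[i:i + cols] for i in range(rows)])

def svd_denoise(x, rows=64):
    """Truncated-SVD denoising of a 1-D signal via its Hankel matrix,
    with the reconstruction order k picked at the largest relative drop
    in the singular values."""
    H = hankel(x, rows)
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    k = int(np.argmax((s[:-1] - s[1:]) / s[:-1])) + 1   # keep components before the drop
    Hk = (U[:, :k] * s[:k]) @ Vt[:k]
    out, counts = np.zeros(len(x)), np.zeros(len(x))
    for i in range(rows):                 # anti-diagonal averaging back to a signal
        for j in range(H.shape[1]):
            out[i + j] += Hk[i, j]
            counts[i + j] += 1
    return out / counts, k

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 400, endpoint=False)
clean = np.sin(2 * np.pi * 10 * t)               # a clean periodic component
noisy = clean + 0.3 * rng.standard_normal(400)
denoised, k = svd_denoise(noisy)
```

A single sinusoid occupies a rank-2 Hankel subspace, so the drop criterion lands at a small order and strips most of the noise.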


Proceedings ArticleDOI
01 Jun 2021
TL;DR: In this article, the authors proposed a network training algorithm Adam-NSCL which sequentially optimizes network parameters in the null space of all previous tasks, where the candidate parameter update can be generated by Adam.
Abstract: In the setting of continual learning, a network is trained on a sequence of tasks, and suffers from catastrophic forgetting. To balance plasticity and stability of the network in continual learning, in this paper, we propose a novel network training algorithm Adam-NSCL which sequentially optimizes network parameters in the null space of all previous tasks. We first propose two mathematical conditions respectively for achieving network stability and plasticity in continual learning. Based on them, the network training for sequential tasks without forgetting can be simply achieved by projecting the candidate parameter update into the approximate null space of all previous tasks in the network training process, where the candidate parameter update can be generated by Adam. The approximate null space can be derived by applying singular value decomposition to the uncentered covariance matrix of all input features of previous tasks for each linear layer. For efficiency, the uncentered covariance matrix can be incrementally computed after learning each task. We also empirically verify the rationality of the approximate null space at each linear layer. We apply our approach to training networks for continual learning on benchmark datasets of CIFAR-100 and TinyImageNet, and the results suggest that the proposed approach outperforms or matches the state-of-the-art continual learning approaches.

49 citations
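The core projection step is easy to sketch: SVD the uncentered feature covariance of previous tasks, keep the near-null directions, and project the candidate (e.g., Adam) update onto them. A minimal NumPy illustration (the threshold `eps` and the toy dimensions are assumptions):

```python
import numpy as np

def null_space_projector(features, eps=1e-3):
    """Projector onto the approximate null space of previous tasks' inputs:
    SVD the uncentered covariance and keep directions with (near-)zero
    singular values."""
    cov = features.T @ features / len(features)   # uncentered covariance
    U, s, _ = np.linalg.svd(cov)
    B = U[:, s <= eps * s[0]]                     # approximate null directions
    return B @ B.T                                # projection matrix

rng = np.random.default_rng(4)
# old-task features live in a 3-dimensional subspace of R^8
basis = rng.standard_normal((8, 3))
feats = rng.standard_normal((200, 3)) @ basis.T
P = null_space_projector(feats)
update = rng.standard_normal(8)                   # candidate (Adam) parameter step
projected = P @ update                            # step that leaves old outputs intact
```

Applying the projected step to a linear layer leaves its responses on old-task inputs (numerically) unchanged, which is the stability condition; the remaining 5 free directions provide plasticity.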


Journal ArticleDOI
TL;DR: Two novel factor group sparsity-regularized nonconvex low-rank approximation (FGSLR) methods are introduced for HSI denoising, which can simultaneously overcome the mentioned issues of previous works.
Abstract: Hyperspectral image (HSI) mixed noise removal is a fundamental problem and an important preprocessing step in remote sensing fields. The low-rank approximation-based methods have been verified effective to encode the global spectral correlation for HSI denoising. However, due to the large scale and complexity of real HSI, previous low-rank HSI denoising techniques encounter several problems, including coarse rank approximation (such as nuclear norm), the high computational cost of singular value decomposition (SVD) (such as Schatten p-norm), and adaptive rank selection (such as low-rank factorization). In this article, two novel factor group sparsity-regularized nonconvex low-rank approximation (FGSLR) methods are introduced for HSI denoising, which can simultaneously overcome the mentioned issues of previous works. The FGSLR methods capture the spectral correlation via low-rank factorization, meanwhile utilizing factor group sparsity regularization to further enhance the low-rank property. It is SVD-free and robust to rank selection. Moreover, FGSLR is equivalent to Schatten p-norm approximation (Theorem 1), and thus FGSLR is tighter than the nuclear norm in terms of rank approximation. To preserve the spatial information of HSI in the denoising process, the total variation regularization is also incorporated into the proposed FGSLR models. Specifically, the proximal alternating minimization is designed to solve the proposed FGSLR models. Experimental results have demonstrated that the proposed FGSLR methods significantly outperform existing low-rank approximation-based HSI denoising methods.

48 citations


Journal ArticleDOI
TL;DR: Using an autoencoder for dimensionality reduction, this article presents a novel projection‐based reduced‐order model for eigenvalue problems, compared with the standard POD‐Galerkin approach and applied to two test cases taken from the field of nuclear reactor physics.
Abstract: Using an autoencoder for dimensionality reduction, this paper presents a novel projection-based reduced-order model for eigenvalue problems. Reduced-order modelling relies on finding suitable basis functions which define a low-dimensional space in which a high-dimensional system is approximated. Proper orthogonal decomposition (POD) and singular value decomposition (SVD) are often used for this purpose and yield an optimal linear subspace. Autoencoders provide a nonlinear alternative to POD/SVD that may capture, more efficiently, features or patterns in the high-fidelity model results. Reduced-order models based on an autoencoder and a novel hybrid SVD-autoencoder are developed. These methods are compared with the standard POD-Galerkin approach and are applied to two test cases taken from the field of nuclear reactor physics.
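For contrast with the autoencoder, the POD/SVD baseline mentioned above is a few lines: the leading left singular vectors of a snapshot matrix form the optimal linear reduced basis. A sketch (the rank-4 snapshot data is synthetic):

```python
import numpy as np

def pod_basis(snapshots, r):
    """POD/SVD reduced basis: the leading r left singular vectors of the
    snapshot matrix span the optimal r-dimensional linear subspace."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r], s

rng = np.random.default_rng(5)
modes = np.linalg.qr(rng.standard_normal((100, 4)))[0]   # 4 "true" spatial modes
S = modes @ rng.standard_normal((4, 30))                 # 30 high-fidelity snapshots
basis, s = pod_basis(S, r=4)
recon = basis @ (basis.T @ S)                            # project onto the reduced space
```

When the snapshots truly span an r-dimensional subspace, the tail singular values vanish and the projection reproduces the snapshots exactly; an autoencoder replaces this linear projection with a nonlinear encode/decode pair.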

Journal ArticleDOI
TL;DR: A novel fault diagnosis procedure based on improved symplectic geometry mode decomposition (SGMD) and an SVM optimized by the Harris hawks optimization (HHO) algorithm is presented, demonstrating its effectiveness and robustness for rotating machinery fault diagnosis.

Journal ArticleDOI
TL;DR: A novel HSI restoration model is suggested by introducing a fibered rank constrained tensor restoration framework with an embedded plug-and-play (PnP)-based regularization (FRCTR-PnP), which achieves superior performance over compared methods in terms of quantitative evaluation and visual inspection.
Abstract: Hyperspectral images (HSIs) are often contaminated by several types of noise, which significantly limits the accuracy of subsequent applications. Recently, low-rank modeling based on tensor singular value decomposition (T-SVD) has achieved great success in HSI restoration. Most such methods use convex and nonconvex surrogates of the tensor rank, which cannot approximate the tensor singular values well and thus obtain suboptimal restored results. We suggest a novel HSI restoration model by introducing a fibered rank constrained tensor restoration framework with an embedded plug-and-play (PnP)-based regularization (FRCTR-PnP). More precisely, instead of using convex and nonconvex surrogates to approximate the fibered rank, the proposed model directly constrains the tensor fibered rank of the solution, leading to a better approximation to the original image. Since exploiting the low-fibered-rankness of HSI is mainly to capture the global structure, we further employ an implicit PnP-based regularization to preserve the image details. Particularly, the above two building blocks are complementary to each other, rather than isolated and uncorrelated. Based on the alternating direction method of multipliers (ADMM), we propose an efficient algorithm to tackle the proposed model. For robustness, we develop a three-directional randomized T-SVD (3DRT-SVD), which preserves the intrinsic structure of the clean HSI and removes partial noise by projecting the HSI onto a low-dimensional essential subspace. Extensive experimental results including simulated and real data demonstrate that the proposed method achieves superior performance over compared methods in terms of quantitative evaluation and visual inspection.

Journal ArticleDOI
TL;DR: A robust and imperceptible watermarking scheme is presented by combining Canny edge detection, contourlet transform with singular value decomposition and the embedded watermark could be recovered blindly in the watermark extraction stage.
Abstract: To enhance the invisibility and the robustness of the watermarking algorithm, a robust and imperceptible watermarking scheme is presented by combining Canny edge detection and the contourlet transform with singular value decomposition. The host image is firstly decomposed by the contourlet transform. Then the low frequency sub-band obtained by the contourlet transform is partitioned into 4×4 non-overlapping blocks and the singular value decomposition is carried out for the specific blocks selected by the Canny edge detection. Finally, the watermark is embedded into the coefficients of the matrix U. The embedded watermark can be recovered blindly in the watermark extraction stage, and the robustness and the imperceptibility are efficiently guaranteed with an optimal threshold k selected by least-squares curve fitting. Experimental results demonstrate that the proposed watermarking scheme is superior in terms of imperceptibility and robustness against common attacks.

Journal ArticleDOI
TL;DR: Zhang et al. as discussed by the authors proposed a low-rank tensor representation based on coupled transform, which fully exploits the spatial multi-scale nature and redundancy in spatial and spectral/temporal dimensions, leading to a better low tensor multi-rank approximation.
Abstract: This paper addresses the tensor completion problem, which aims to recover missing information of multi-dimensional images. How to represent a low-rank structure embedded in the underlying data is the key issue in tensor completion. In this work, we suggest a novel low-rank tensor representation based on coupled transform, which fully exploits the spatial multi-scale nature and redundancy in spatial and spectral/temporal dimensions, leading to a better low tensor multi-rank approximation. More precisely, this representation is achieved by using two-dimensional framelet transform for the two spatial dimensions, one/two-dimensional Fourier transform for the temporal/spectral dimension, and then Karhunen–Loeve transform (via singular value decomposition) for the transformed tensor. Based on this low-rank tensor representation, we formulate a novel low-rank tensor completion model for recovering missing information in multi-dimensional visual data, which leads to a convex optimization problem. To tackle the proposed model, we develop the alternating directional method of multipliers (ADMM) algorithm tailored for the structured optimization problem. Numerical examples on color images, multispectral images, and videos illustrate that the proposed method outperforms many state-of-the-art methods in qualitative and quantitative aspects.

Journal ArticleDOI
TL;DR: In this paper, two improved kernel canonical correlation analysis (KCCA) methods are proposed to deal with key performance indicator (KPI)-related issues, and fault detectability analysis and computational complexity analysis of these two methods are performed.

Journal ArticleDOI
TL;DR: The CSP is reformulated as a constrained minimization problem and the equivalence of the reformulated and original CSPs is established; extensive results validate the efficiency and effectiveness of the proposed CSP formulation in different learning contexts.
Abstract: Common spatial pattern (CSP) is one of the most successful feature extraction algorithms for brain–computer interfaces (BCIs). It aims to find spatial filters that maximize the projected variance ratio between the covariance matrices of the multichannel electroencephalography (EEG) signals corresponding to two mental tasks, which can be formulated as a generalized eigenvalue problem (GEP). However, it is challenging in principle to impose additional regularization onto the CSP to obtain structural solutions (e.g., sparse CSP) due to the intrinsic nonconvexity and invariance property of GEPs. This article reformulates the CSP as a constrained minimization problem and establishes the equivalence of the reformulated and the original CSPs. An efficient algorithm is proposed to solve this optimization problem by alternately performing singular value decomposition (SVD) and least squares. Under this new formulation, various regularization techniques for linear regression can then be easily implemented to regularize the CSPs for different learning paradigms, such as the sparse CSP, the transfer CSP, and the multisubject CSP. Evaluations on three BCI competition datasets show that the regularized CSP algorithms outperform other baselines, especially for the high-dimensional small training set. The extensive results validate the efficiency and effectiveness of the proposed CSP formulation in different learning contexts.
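The classical CSP computation that the paper reformulates can be sketched directly: whiten the composite covariance, then diagonalize the whitened class covariance; the extreme eigenvectors give the spatial filters. A toy NumPy version (8 channels, with high variance planted in a different channel per class; the paper's SVD/least-squares reformulation is not reproduced here):

```python
import numpy as np

def csp_filters(X1, X2, n_filters=2):
    """Classical CSP: solve the GEP C1 w = lambda (C1 + C2) w by whitening
    the composite covariance and diagonalizing the whitened C1."""
    C1 = X1 @ X1.T / X1.shape[1]
    C2 = X2 @ X2.T / X2.shape[1]
    d, E = np.linalg.eigh(C1 + C2)
    W_white = E @ np.diag(d ** -0.5) @ E.T          # whitening transform
    lam, B = np.linalg.eigh(W_white @ C1 @ W_white.T)
    W = (B.T @ W_white)[::-1]                        # filters, descending lambda
    return np.vstack([W[:n_filters], W[-n_filters:]])  # most discriminative pairs

rng = np.random.default_rng(6)
X1 = rng.standard_normal((8, 500)); X1[0] *= 3.0   # class 1: channel 0 strong
X2 = rng.standard_normal((8, 500)); X2[1] *= 3.0   # class 2: channel 1 strong
W = csp_filters(X1, X2)
```

The first filter maximizes the projected variance ratio in favor of class 1, the last in favor of class 2, which is exactly the discriminative property the abstract describes.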

Journal ArticleDOI
TL;DR: This article employs the iterative reweighted least squares (IRLS) method to solve the objective function; the dimension of the inverted matrix is reduced, improving the azimuth resolution in low-SNR conditions and increasing computational efficiency compared with the sparse-TSVD method.
Abstract: Most existing super-resolution imaging methods fail to work in low signal-to-noise ratio (SNR) conditions due to the ill-posed antenna measurement matrix, but the sparse-truncated singular value decomposition (TSVD) method can effectively suppress noise and improve azimuth resolution in low-SNR conditions. However, the current sparse-TSVD method incurs a large computational cost, resulting in a slow algorithm. In this work, a fast sparse-TSVD super-resolution imaging method for real aperture radar is proposed. First, the proposed method builds on the results of TSVD, using the truncated unitary matrix and diagonal matrix to reconstruct the signal convolution model. The dimension of the reconstructed antenna measurement matrix is reduced from $N \times N$ to $k \times N$, and the dimension of the reconstructed echo matrix is reduced from $N \times 1$ to $k \times 1$, where $N$ is the number of azimuth sampling points and $k$ is the truncation parameter, $N \gg k$. Much of the expensive matrix-multiplication computation can then be performed on the smaller matrices, thereby accelerating the algorithm. Second, an objective function is established with an ${l_{1}}$ constraint based on the regularization strategy. Lastly, this article employs the iterative reweighted least squares (IRLS) method to solve the objective function, and the dimension of the inverted matrix is reduced from $N \times N$ to $k \times k$, speeding up the algorithm further. Simulations and real data verify that the proposed algorithm not only improves the azimuth resolution in low-SNR conditions but also increases computational efficiency compared with the sparse-TSVD method.
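The TSVD regularization at the heart of the method is simple to demonstrate: invert only the $k$ largest singular values of an ill-conditioned system. A sketch with a Gaussian smoothing matrix standing in for the antenna measurement matrix (sizes, noise level, and `k` are toy choices; the $l_1$/IRLS stage is omitted):

```python
import numpy as np

def tsvd_solve(A, y, k):
    """Truncated-SVD regularization: invert only the k largest singular
    values of the ill-posed system A x = y."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ y) / s[:k])

# A severely ill-conditioned smoothing matrix standing in for the
# antenna measurement matrix.
n = 60
i = np.arange(n)
A = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 3.0) ** 2)
rng = np.random.default_rng(7)
x_true = np.zeros(n)
x_true[[20, 40]] = 1.0                        # two point targets in azimuth
y = A @ x_true + 0.01 * rng.standard_normal(n)
x_tsvd = tsvd_solve(A, y, k=15)
x_naive = np.linalg.solve(A, y)               # unregularized inverse: noise explodes
```

The naive inverse divides the noise by near-zero singular values, while truncation trades a small bias for a massive reduction in noise amplification.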

Journal ArticleDOI
TL;DR: This work presents a simple yet low-complexity and efficient model, comprising a single shot detector, a 2D convolutional neural network, singular value decomposition (SVD), and long short-term memory, for real-time isolated hand sign language recognition (IHSLR) from RGB video.
Abstract: One of the challenges in computer vision models, especially sign language, is real-time recognition. In this work, we present a simple yet low-complexity and efficient model, comprising a single shot detector, a 2D convolutional neural network, singular value decomposition (SVD), and long short-term memory, for real-time isolated hand sign language recognition (IHSLR) from RGB video. We employ the SVD method as an efficient, compact, and discriminative feature extractor applied to the estimated 3D hand keypoint coordinates. Whereas previous works employ the estimated 3D hand keypoint coordinates as raw features, we propose a novel way to apply the SVD to these coordinates to obtain more discriminative features. The SVD is also applied to the geometric relations between the consecutive segments of each finger and to the angles between these segments. We perform a detailed analysis of recognition time and accuracy. One of our contributions is that this is the first time the SVD method has been applied to hand pose parameters. Results on four datasets, RKS-PERSIANSIGN ( $$99.5 \pm 0.04$$ ), First-Person ( $$91 \pm 0.06$$ ), ASVID ( $$93 \pm 0.05$$ ), and isoGD ( $$86.1 \pm 0.04$$ ), confirm the efficiency of our method in both accuracy ($$mean \pm std$$) and recognition time. Furthermore, our model outperforms or achieves competitive results with the state-of-the-art alternatives in IHSLR and hand action recognition.
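The idea of taking singular values of the keypoint coordinate matrix as a compact descriptor can be sketched in a few lines; a useful side effect is invariance to rotation and translation of the hand. The centering and normalization below are illustrative additions, not necessarily the paper's exact recipe:

```python
import numpy as np

def svd_features(keypoints):
    """Compact pose descriptor: normalized singular values of the centered
    3-D keypoint matrix (rotation- and translation-invariant)."""
    s = np.linalg.svd(keypoints - keypoints.mean(axis=0), compute_uv=False)
    return s / s.sum()                     # scale-normalized singular spectrum

rng = np.random.default_rng(12)
hand = rng.standard_normal((21, 3))        # 21 hand keypoints, xyz coordinates
feat = svd_features(hand)
Q = np.linalg.qr(rng.standard_normal((3, 3)))[0]   # a random orthogonal transform
feat_rot = svd_features(hand @ Q)          # same hand, rotated
```

Because singular values are unchanged by orthogonal transforms of the coordinates, `feat` and `feat_rot` coincide: a 3-number summary of the hand's spatial spread that is cheap enough for real-time use.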

Journal ArticleDOI
04 Apr 2021-Sensors
TL;DR: Wang et al. as discussed by the authors proposed a bearing fault feature extraction method based on feature fusion, which can contribute to guaranteeing running stability and maintenance, thus promoting production efficiency and economic benefits; the results show that the algorithm can effectively diagnose bearings under both steady and unsteady states.
Abstract: The bearing is one of the most important parts of rotating machinery, with a high failure rate, and its working state directly affects the performance of the entire equipment. Hence, it is of great significance to diagnose bearing faults, which can contribute to guaranteeing running stability and maintenance, thus promoting production efficiency and economic benefits. Usually, bearing fault features are difficult to extract effectively, which results in low diagnosis performance. To solve this problem, this paper proposes a bearing fault feature extraction method and establishes a bearing fault diagnosis method based on feature fusion. The basic idea of the method is as follows: firstly, the time-frequency feature of the bearing signal is extracted through the Wavelet Packet Transform (WPT) to form the time-frequency characteristic matrix of the signal; secondly, Multi-Weight Singular Value Decomposition (MWSVD) is constructed using the singular value contribution rate and entropy weight. The features of the time-frequency feature matrix obtained by WPT are further extracted, and the features that are sensitive to faults are retained while the insensitive features are removed; finally, the extracted feature matrix is used as the input of a Support Vector Machine (SVM) classifier for bearing fault diagnosis. The proposed method is validated by time-varying bearing data sets from the University of Ottawa and the Case Western Reserve University Bearing Data Center. The results show that the algorithm can effectively diagnose bearings under both steady and unsteady states, and that it has better fault diagnosis and feature extraction capabilities than methods based on traditional feature technology.

Journal ArticleDOI
TL;DR: A novel optimized authentication mechanism is designed to resolve the false positive problem, which exists in the SVD-based watermarking algorithms, and three-dimensional optimal mapping algorithm is proposed to search the optimal scaling factors through a novel objective evaluation function, and it can significantly improve the imperceptibility and robustness.
Abstract: Image watermarking technique is one of the effective solutions to protect copyright, and it is applied to a variety of information security application domains. It needs to meet four requirements of imperceptibility, robustness, capacity and security. A multi-scale and secure image watermarking method is proposed in this work, which is based on the Integer Wavelet Transform (IWT) and Singular Value Decomposition (SVD). Four IWT sub-bands are firstly obtained after 1-level IWT on the host image, and the corresponding singular diagonal matrices of the four sub-bands can be obtained using SVD. Then, each singular diagonal matrix is divided into four non-overlapping sections in terms of the size of the embedding watermark. Particularly, the size of the upper left part is the same as the size of the watermark. The watermark can then be directly embedded into the four upper left parts by multiplying with different scaling factors to complete the final watermarking operation. Especially, a novel optimized authentication mechanism is designed to resolve the false positive problem, which exists in SVD-based watermarking algorithms. In addition, a three-dimensional optimal mapping algorithm is proposed to search the optimal scaling factors through a novel objective evaluation function, and it can significantly improve the imperceptibility and robustness. The experimental test and comparison analysis illustrate that the proposed watermark scheme demonstrates high imperceptibility with peak signal-to-noise ratio values of 45 dB and strong robustness with average normalized correlation values of 0.92.

Journal ArticleDOI
TL;DR: An efficient high-dimensional dictionary learning (DL) method is proposed by avoiding the singular value decomposition (SVD) calculation in each dictionary update step that is required by the classic KSVD algorithm.
Abstract: A sparse dictionary is more adaptive than a sparse fixed-basis transform since it can learn the features directly from the input data in a data-driven way. However, learning a sparse dictionary is time-consuming because a large number of iterations are required in order to obtain the dictionary atoms that best represent the features of input data. The computational cost becomes unaffordable when it comes to high-dimensional problems, e.g., 3-D or even 5-D applications. We propose an efficient high-dimensional dictionary learning (DL) method by avoiding the singular value decomposition (SVD) calculation in each dictionary update step that is required by the classic $K$ -singular value decomposition (KSVD) algorithm. Besides, due to the special structure of the sparse coefficient matrix, it requires a much less expensive sparse coding process. The overall computational efficiency of the new DL method is much higher, while the results are still comparable or even better than those from the traditional KSVD method. We apply the proposed method to both 3-D and 5-D seismic data reconstructions and demonstrate successful and efficient performance.
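For reference, the per-atom SVD that makes classic KSVD expensive, and that the proposed method avoids, looks like this: each dictionary update is a best rank-1 fit of a restricted residual. A sketch (dictionary sizes and the sparsity pattern are toy choices):

```python
import numpy as np

def ksvd_atom_update(D, X, Y, k):
    """Classic KSVD update of atom k: the leading SVD pair of the residual
    restricted to the signals that currently use this atom. This is the
    per-atom SVD the paper's SVD-free method replaces."""
    users = np.nonzero(X[k])[0]
    if users.size == 0:
        return D, X
    # residual with atom k's contribution added back in
    E = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], X[k, users])
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    D[:, k] = U[:, 0]                    # new unit-norm atom
    X[k, users] = s[0] * Vt[0]           # new coefficients for its users
    return D, X

rng = np.random.default_rng(8)
Y = rng.standard_normal((20, 50))                      # training signals
D = rng.standard_normal((20, 8))
D /= np.linalg.norm(D, axis=0)                         # unit-norm atoms
X = np.where(rng.random((8, 50)) < 0.3, rng.standard_normal((8, 50)), 0.0)
err_before = np.linalg.norm(Y - D @ X)
D, X = ksvd_atom_update(D, X, Y, k=0)
err_after = np.linalg.norm(Y - D @ X)                  # never larger
```

Since the best rank-1 approximation can only improve on the atom's previous contribution, the representation error is monotonically non-increasing; repeating this SVD for every atom in every iteration is what dominates the cost in high dimensions.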

Journal ArticleDOI
TL;DR: In this article, the authors review recent advances in randomization for computation of Tucker decomposition and Higher Order SVD (HOSVD) and discuss random projection and sampling approaches, single-pass and multi-pass randomized algorithms and how to utilize them in the computation of the Tucker decompposition and the HOSVD.
Abstract: Big data analysis has become a crucial part of new emerging technologies such as the internet of things, cyber-physical analysis, deep learning, and anomaly detection. Among many other techniques, dimensionality reduction plays a key role in such analyses and facilitates feature selection and feature extraction. Randomized algorithms are efficient tools for handling big data tensors: they accelerate the decomposition of large-scale data tensors by reducing both the computational complexity of deterministic algorithms and the communication among different levels of the memory hierarchy, which is the main bottleneck in modern computing environments and architectures. In this article, we review recent advances in randomization for the computation of the Tucker decomposition and the Higher Order SVD (HOSVD). We discuss random projection and sampling approaches, as well as single-pass and multi-pass randomized algorithms, and how to utilize them in computing the Tucker decomposition and the HOSVD. Simulations on synthetic and real datasets are provided to compare the performance of some of the best and most promising algorithms.
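
The random-projection flavour of a randomized HOSVD can be sketched in a few lines of numpy (a hypothetical simplification: Gaussian test matrices, a single pass per mode, no power iterations; practical implementations add these refinements).

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` to the front, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_dot(T, M, mode):
    """Mode-n product: multiply matrix M into tensor T along axis `mode`."""
    Tm = np.tensordot(M, np.moveaxis(T, mode, 0), axes=([1], [0]))
    return np.moveaxis(Tm, 0, mode)

def randomized_hosvd(T, ranks, oversample=5, seed=0):
    """Randomized HOSVD: each factor comes from a randomized range finder
    (Y = A @ Omega, Q = orth(Y)) applied to the mode-n unfolding, avoiding
    a full SVD of the (potentially huge) unfolded matrices."""
    rng = np.random.default_rng(seed)
    factors = []
    for mode, r in enumerate(ranks):
        A = unfold(T, mode)
        Omega = rng.standard_normal((A.shape[1], r + oversample))
        Q, _ = np.linalg.qr(A @ Omega)         # orthonormal range estimate
        factors.append(Q[:, :r])
    core = T
    for mode, U in enumerate(factors):
        core = mode_dot(core, U.T, mode)
    return core, factors

# Demo: a tensor of exact multilinear rank (2, 2, 2) is recovered exactly.
rng = np.random.default_rng(42)
T = rng.standard_normal((2, 2, 2))
for mode, size in enumerate((6, 5, 4)):
    T = mode_dot(T, rng.standard_normal((size, 2)), mode)
core, factors = randomized_hosvd(T, (2, 2, 2))
T_hat = core
for mode, U in enumerate(factors):
    T_hat = mode_dot(T_hat, U, mode)
```

For data of exact low multilinear rank the sketched subspaces capture the mode-n ranges exactly (with probability one), so the reconstruction error vanishes; for noisy data the oversampling and optional power iterations control the approximation quality.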

Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors proposed a tensor-singular value decomposition (t-SVD)-based multiview subspace clustering method by integrating coefficient matrix learning and spectral clustering into a unified framework.
Abstract: Despite promising preliminary results, tensor-singular value decomposition (t-SVD)-based multiview subspace clustering struggles with real-world problems such as noise and illumination changes. The major reason is that the tensor-nuclear-norm minimization (TNNM) used in t-SVD regularizes every singular value equally, which is not sensible for matrix completion or coefficient matrix learning: the singular values carry different amounts of information and should be treated differently. To exploit the significant differences between singular values, we study the weighted tensor Schatten p-norm based on t-SVD and develop an efficient algorithm to solve the weighted tensor Schatten p-norm minimization (WTSNM) problem. Applying WTSNM to learn the coefficient matrix in multiview subspace clustering, we then present a novel multiview clustering method that integrates coefficient matrix learning and spectral clustering into a unified framework. The learned coefficient matrix exploits both the cluster structure and the high-order information embedded in the multiple views. Extensive experiments demonstrate the effectiveness of our method on six metrics.
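
The quantity being minimized can be illustrated with a short numpy sketch: under the t-SVD, the (weighted) tensor Schatten p-norm is computed by taking the FFT along the third mode and summing weighted powers of the singular values of each frontal slice. Normalization conventions vary across papers; the 1/n3 factor below follows the common TNN definition.

```python
import numpy as np

def tensor_schatten_p(T, p=1.0, weights=None):
    """(Weighted) tensor Schatten p-norm, raised to the p-th power, under
    the t-SVD: FFT along the third mode, then an ordinary matrix SVD of
    every frontal slice in the Fourier domain.  p = 1 with unit weights
    reduces to the tensor nuclear norm (TNN) that WTSNM generalises."""
    Tf = np.fft.fft(T, axis=2)
    n3 = T.shape[2]
    total = 0.0
    for k in range(n3):
        s = np.linalg.svd(Tf[:, :, k], compute_uv=False)
        w = np.ones_like(s) if weights is None else np.asarray(weights, float)
        total += np.sum(w * s**p)
    return total / n3

# Demo: if every frontal slice equals the same matrix A, the TNN collapses
# to the matrix nuclear norm of A.
rng = np.random.default_rng(5)
A = rng.standard_normal((4, 3))
T = np.repeat(A[:, :, None], 5, axis=2)
tnn = tensor_schatten_p(T)
```

Choosing non-uniform `weights` (or p < 1) penalizes small singular values more than large ones, which is exactly the "treat singular values differently" idea behind WTSNM.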

Journal ArticleDOI
TL;DR: It is shown that the proposed mechanism results in significantly better prediction and estimation performance with fewer tunable parameters in just one learning epoch.

Journal ArticleDOI
Xiao Cheng1, Jiandong Mao1, Juan Li1, Hu Zhao1, Chunyan Zhou1, Xin Gong1, Zhimin Rao1 
TL;DR: The EEMD-SVD-LWT algorithm was also used to denoise practical lidar signals, and its denoising performance was better than that achieved with the other methods.
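
One common way the SVD stage of such a denoising pipeline is realised is Hankel-matrix (subspace) filtering; the sketch below is a generic numpy illustration under that assumption, not the paper's exact EEMD-SVD-LWT chain.

```python
import numpy as np

def hankel_svd_denoise(x, window, rank):
    """Subspace (Hankel-SVD) denoising of a 1-D signal: embed the signal
    in a Hankel matrix, keep the leading `rank` singular components, and
    average anti-diagonals back into a signal."""
    n = len(x)
    H = np.array([x[i:i + window] for i in range(n - window + 1)])
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hr = (U[:, :rank] * s[:rank]) @ Vt[:rank]     # low-rank approximation
    y = np.zeros(n)
    counts = np.zeros(n)
    for i in range(Hr.shape[0]):                  # anti-diagonal averaging
        y[i:i + window] += Hr[i]
        counts[i:i + window] += 1
    return y / counts

# Demo: a noisy sinusoid (a sinusoid is rank 2 in the Hankel domain).
t = np.linspace(0.0, 4.0 * np.pi, 200)
clean = np.sin(t)
rng = np.random.default_rng(3)
noisy = clean + 0.3 * rng.standard_normal(t.size)
denoised = hankel_svd_denoise(noisy, window=40, rank=2)
```

The rank acts as the denoising knob: components beyond the signal subspace are dominated by noise and are discarded before the signal is re-assembled.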

Journal ArticleDOI
TL;DR: In this paper, a nonlinear trajectory SAR imaging algorithm based on controlled singular value decomposition (CSVD) is proposed to improve the image quality compared with SVD-Stolt.
Abstract: The nonlinear trajectory and bistatic characteristics of general bistatic synthetic aperture radar (SAR) can cause severe two-dimensional space-variance in the echo signal, making it difficult to focus the echo signal directly with traditional frequency-domain imaging algorithms, which assume azimuth translational invariance. The current state-of-the-art nonlinear trajectory imaging algorithm is based on singular value decomposition (SVD), but the SVD may not be controlled, which can lead to high imaging complexity or low imaging accuracy. Therefore, this article proposes a nonlinear trajectory SAR imaging algorithm based on controlled SVD (CSVD). First, the chirp scaling algorithm is used to correct the range space-variance; then SVD is used to decompose the remaining azimuth space-variant phase, and the first two feature components obtained by the SVD are integrated into a single new feature component. Finally, the new feature component is used for interpolation to correct the azimuth space-variance. Simulation results show that the proposed CSVD further improves image quality compared with SVD-Stolt.
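
The role of the SVD here, namely representing a smooth 2-D space-variant phase by a few separable components, can be illustrated with a toy phase surface (a hypothetical model for illustration, not the paper's actual bistatic phase history).

```python
import numpy as np

# Toy rank-2 phase surface phi(f, x): a sum of two separable terms,
# mimicking a smooth azimuth space-variant phase.
f = np.linspace(-1.0, 1.0, 64)        # range-frequency axis (normalised)
x = np.linspace(-1.0, 1.0, 64)        # azimuth-position axis (normalised)
phi = 2.0 * np.pi * (3.0 * np.outer(f, x) + 0.5 * np.outer(f**2, x**2))

# The SVD separates phi into components sigma_i * u_i(f) * v_i(x); here
# the first two components carry all of the energy, so combining them
# loses nothing and a single interpolation can correct the variance.
U, s, Vt = np.linalg.svd(phi)
phi_rank2 = (U[:, :2] * s[:2]) @ Vt[:2]
```

In this idealized case the third and later singular values are numerically zero, which is the situation the "first two feature components" step exploits.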

Journal ArticleDOI
TL;DR: The proposed three-way optimization method is designed to address the static and dynamic background cases of MOD separately, with the intention of reducing misclassifications caused by moving/cluttered backgrounds.
Abstract: The rising demand for surveillance systems naturally calls for more efficient and noise-robust moving object detection (MOD) from captured video streams. Motivated by challenges in MOD that have yet to be addressed properly, this paper proposes a new MOD scheme using $l_{1/2}$ regularization in the tensor framework. It takes advantage of the special features of the tensor singular value decomposition ( ${t}$ -SVD) together with $l_{1/2}$ -norm regularization via the half-thresholding operation and tensor total variation (TTV) to develop a noise-robust MOD system with improved detection accuracy. While ${t}$ -SVD exploits the spatio-temporal correlation of the video background, $l_{1/2}$ regularization provides noise robustness and removes the sparser but discontinuous dynamic elements in the spatio-temporal direction. Moreover, TTV enhances spatio-temporal continuity and fills the gaps due to lingering objects, thereby extracting the foreground precisely. The proposed three-way optimization method addresses the static and dynamic background cases of MOD separately, with the intention of reducing misclassifications caused by moving/cluttered backgrounds. The effectiveness of this method is confirmed by the visual quality of the background/foreground separation, its noise robustness, reduced computational complexity, and rapid response. Quantitative evaluation shows that the proposed method outperforms state-of-the-art techniques.
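
The half-thresholding operation referred to above is the proximal operator of the $l_{1/2}$ penalty, which, unlike soft thresholding, has a trigonometric closed form. Below is a scalar numpy sketch (applied elementwise in practice); the explicit comparison against zero replaces the usual closed-form threshold constant.

```python
import numpy as np

def half_threshold(x, lam):
    """Proximal operator of lam * |y|**0.5: the 'half-thresholding'
    operator for l_{1/2} regularisation, i.e. the minimiser of
    0.5*(y - x)**2 + lam*sqrt(|y|).  The stationary point comes from the
    trigonometric solution of the cubic y**1.5 - |x|*y**0.5 + lam/2 = 0;
    it is compared against y = 0 explicitly, so no closed-form threshold
    constant is needed."""
    sgn, a = np.sign(x), abs(x)
    if a == 0.0:
        return 0.0
    c = (lam / 4.0) * (a / 3.0) ** -1.5
    if c >= 1.0:                 # no real stationary point: shrink to zero
        return 0.0
    phi = np.arccos(c)
    y = (2.0 * a / 3.0) * (1.0 + np.cos(2.0 * np.pi / 3.0 - 2.0 * phi / 3.0))
    obj = lambda t: 0.5 * (t - a) ** 2 + lam * np.sqrt(abs(t))
    return sgn * y if obj(y) < obj(0.0) else 0.0
```

Compared with soft thresholding, the $l_{1/2}$ proximal map sets small inputs to zero over a wider band while shrinking large inputs less, which is what makes it attractive for sparse foreground extraction.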

Journal ArticleDOI
TL;DR: It is established that this robust and secure data hiding scheme, which transmits grayscale images in the encryption-then-compression domain, recovers the concealed mark better than conventional schemes at low cost.
Abstract: This paper introduces a robust and secure data hiding scheme for transmitting grayscale images in the encryption-then-compression domain. First, the host image is transformed using the lifting wavelet transform, Hessenberg decomposition, and redundant singular value decomposition. Then, an appropriate scaling factor is used to invisibly embed the singular values of the watermark data into the lower-frequency sub-band of the host image. A suitable encryption-then-compression scheme is also employed to improve the security of the image. Additionally, a denoising convolutional neural network is applied to the extracted mark data to enhance the robustness of the scheme. Experimental results verify the effectiveness of our scheme in terms of embedding capacity, robustness, invisibility, and security. Further, it is established that our scheme recovers the concealed mark better than conventional ones at low cost.
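
The Hessenberg decomposition step factors the host matrix as $A = QHQ^T$ with $Q$ orthogonal and $H$ zero below the first subdiagonal. A generic numpy sketch via Householder reflections is shown below (scipy.linalg.hessenberg provides the same factorisation; this is an illustration, not the paper's implementation).

```python
import numpy as np

def hessenberg_reduce(A):
    """Householder reduction of A to upper Hessenberg form:
    A = Q @ H @ Q.T with Q orthogonal and H zero below the first
    subdiagonal."""
    n = A.shape[0]
    H = np.array(A, dtype=float)
    Q = np.eye(n)
    for k in range(n - 2):
        v = H[k + 1:, k].copy()
        norm = np.linalg.norm(v)
        if norm == 0.0:
            continue
        v[0] += norm if v[0] >= 0 else -norm   # sign choice avoids cancellation
        v /= np.linalg.norm(v)
        P = np.eye(n)
        P[k + 1:, k + 1:] -= 2.0 * np.outer(v, v)
        H = P @ H @ P                          # orthogonal similarity transform
        Q = Q @ P
    return H, Q

# Demo on a random matrix.
rng = np.random.default_rng(7)
A = rng.standard_normal((6, 6))
H, Q = hessenberg_reduce(A)
```

Because the transform is an orthogonal similarity, it preserves norms and spectra, which is one reason Hessenberg-based embeddings behave stably under the subsequent SVD stage.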

Journal ArticleDOI
TL;DR: In this paper, a hybrid blind digital image watermarking scheme combining the discrete cosine transform (DCT), the discrete wavelet transform (DWT), and singular value decomposition (SVD) is proposed.