
Showing papers on "Matrix (mathematics) published in 2019"


Posted Content
TL;DR: In this paper, the authors present exact results for partition functions of JT gravity on two-dimensional surfaces of arbitrary genus with an arbitrary number of boundaries, and show that the partition functions correspond to the genus expansion of a certain matrix integral.
Abstract: We present exact results for partition functions of Jackiw-Teitelboim (JT) gravity on two-dimensional surfaces of arbitrary genus with an arbitrary number of boundaries. The boundaries are of the type relevant in the NAdS${}_2$/NCFT${}_1$ correspondence. We show that the partition functions correspond to the genus expansion of a certain matrix integral. A key fact is that Mirzakhani's recursion relation for Weil-Petersson volumes maps directly onto the Eynard-Orantin "topological recursion" formulation of the loop equations for this matrix integral. The matrix integral provides a (non-unique) nonperturbative completion of the genus expansion, sensitive to the underlying discreteness of the matrix eigenvalues. In matrix integral descriptions of noncritical strings, such effects are due to an infinite number of disconnected worldsheets connected to D-branes. In JT gravity, these effects can be reproduced by a sum over an infinite number of disconnected geometries -- a type of D-brane logic applied to spacetime.
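
As a rough orientation (our paraphrase, not text from the paper), the genus expansion described above is often written schematically as Weil-Petersson volumes glued to "trumpet" factors on each boundary, with normalizations that vary by convention:

```latex
Z(\beta_1,\dots,\beta_n) \;\simeq\; \sum_{g\ge 0} \left(e^{S_0}\right)^{2-2g-n} Z_{g,n}(\beta_1,\dots,\beta_n),
\qquad
Z_{g,n} \;=\; \int_0^\infty \prod_{i=1}^{n} b_i\, db_i \; V_{g,n}(b_1,\dots,b_n) \prod_{i=1}^{n} Z^{\mathrm{tr}}(\beta_i,b_i)
\quad (2g-2+n>0),
```

where $Z^{\mathrm{tr}}(\beta,b)\propto e^{-b^2/(4\beta)}/\sqrt{\beta}$ is the trumpet partition function and, on the matrix side, the boundary partition functions are identified with correlators of $\mathrm{Tr}\, e^{-\beta H}$ in the matrix integral.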

474 citations


Journal ArticleDOI
TL;DR: This tutorial-style overview highlights the important role of statistical models in enabling efficient nonconvex optimization with performance guarantees and reviews two contrasting approaches: two-stage algorithms, which consist of a tailored initialization step followed by successive refinement; and global landscape analysis and initialization-free algorithms.
Abstract: Substantial progress has been made recently on developing provably accurate and efficient algorithms for low-rank matrix factorization via nonconvex optimization. While conventional wisdom often takes a dim view of nonconvex optimization algorithms due to their susceptibility to spurious local minima, simple iterative methods such as gradient descent have been remarkably successful in practice. The theoretical footings, however, had been largely lacking until recently. In this tutorial-style overview, we highlight the important role of statistical models in enabling efficient nonconvex optimization with performance guarantees. We review two contrasting approaches: (1) two-stage algorithms, which consist of a tailored initialization step followed by successive refinement; and (2) global landscape analysis and initialization-free algorithms. Several canonical matrix factorization problems are discussed, including but not limited to matrix sensing, phase retrieval, matrix completion, blind deconvolution, and robust principal component analysis. Special care is taken to illustrate the key technical insights underlying their analyses. This article serves as a testament that the integrated consideration of optimization and statistics leads to fruitful research findings.
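
To make the two-stage recipe concrete, here is a minimal sketch (all problem sizes, step sizes, and variable names are ours, not from the article) of matrix sensing solved by spectral initialization followed by factored gradient descent:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 30, 2, 1500                         # dimension, rank, number of measurements

# Ground-truth low-rank PSD matrix M* = U* U*^T (toy instance)
U_star = rng.standard_normal((n, k))
M_star = U_star @ U_star.T

# Symmetrized Gaussian sensing matrices and linear measurements y_i = <A_i, M*>
A = rng.standard_normal((m, n, n))
A = (A + A.transpose(0, 2, 1)) / 2
y = np.einsum('mij,ij->m', A, M_star)

# Stage 1: spectral initialization from the back-projection (1/m) sum_i y_i A_i
Y = np.einsum('m,mij->ij', y, A) / m
w, V = np.linalg.eigh(Y)
U = V[:, -k:] * np.sqrt(np.maximum(w[-k:], 0))

# Stage 2: gradient descent on f(U) = (1/4m) sum_i (<A_i, U U^T> - y_i)^2
eta = 0.5 / np.linalg.norm(M_star, 2)         # conservative step size for the toy example
for _ in range(300):
    r = np.einsum('mij,ij->m', A, U @ U.T) - y
    U -= eta * (np.einsum('m,mij->ij', r, A) @ U) / m

print("relative error:", np.linalg.norm(U @ U.T - M_star) / np.linalg.norm(M_star))
```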

369 citations


Book ChapterDOI
01 Dec 2019
TL;DR: An invertible n×n matrix with real entries is said to be totally ≥ 0 (resp. totally > 0) if all its minors are ≥ 0 (resp. > 0), a definition going back to Schoenberg (1930) and Gantmacher and Krein (1935), as mentioned in this paper.
Abstract: An invertible n×n matrix with real entries is said to be totally ≥0 (resp. totally >0) if all its minors are ≥0 (resp. >0). This definition appears in Schoenberg’s 1930 paper [S] and in the 1935 note [GK] of Gantmacher and Krein. (For a recent survey of totally positive matrices, see [A].)
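
The definition lends itself to a direct, if inefficient, check by enumerating minors; the following is a small illustrative sketch (function names and the example matrix are ours, not from the chapter):

```python
import numpy as np
from itertools import combinations

def is_totally_nonnegative(A, tol=1e-10):
    """Brute-force check that every minor of A is >= 0. Illustrative only:
    the number of minors grows combinatorially, so use on small matrices."""
    A = np.asarray(A, dtype=float)
    n, m = A.shape
    for k in range(1, min(n, m) + 1):
        for rows in combinations(range(n), k):
            for cols in combinations(range(m), k):
                if np.linalg.det(A[np.ix_(rows, cols)]) < -tol:
                    return False
    return True

# Example: a classic totally nonnegative (in fact totally positive) Pascal-type matrix
P = np.array([[1, 1, 1],
              [1, 2, 3],
              [1, 3, 6]])
print(is_totally_nonnegative(P))   # expected: True
```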

354 citations


Posted Content
TL;DR: A parametrization of linear feedback systems is derived that paves the way to solving important control problems using data-dependent linear matrix inequalities only; the result is remarkable in that no explicit identification of the system's matrices is required.
Abstract: In a paper by Willems and coauthors it was shown that persistently exciting data can be used to represent the input-output behavior of a linear system. Based on this fundamental result, we derive a parametrization of linear feedback systems that paves the way to solve important control problems using data-dependent Linear Matrix Inequalities only. The result is remarkable in that no explicit system's matrices identification is required. The examples of control problems we solve include the state and output feedback stabilization, and the linear quadratic regulation problem. We also discuss robustness to noise-corrupted measurements and show how the approach can be used to stabilize unstable equilibria of nonlinear systems.
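
As a rough sketch of what such a data-dependent LMI looks like (our paraphrase of the commonly cited state-feedback result; notation and details may differ from the paper): collect one persistently exciting input/state trajectory of length $T$ and stack it as $U_0 = [u(0)\,\cdots\,u(T-1)]$, $X_0 = [x(0)\,\cdots\,x(T-1)]$, $X_1 = [x(1)\,\cdots\,x(T)]$. Then any $Q \in \mathbb{R}^{T \times n}$ with $X_0 Q$ symmetric and

```latex
\begin{bmatrix} X_0 Q & X_1 Q \\ (X_1 Q)^{\top} & X_0 Q \end{bmatrix} \succ 0
```

yields a stabilizing state feedback $K = U_0 Q\,(X_0 Q)^{-1}$: since $X_1 Q = (A+BK)\,X_0 Q$, the LMI is a Schur-complement form of a closed-loop Lyapunov condition, written entirely in terms of data.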

259 citations


Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed the first block diagonal matrix induced regularizer, which directly pursues the block diagonal structure prior, and used it to solve the subspace clustering problem via Block Diagonal Representation (BDR).
Abstract: This paper studies the subspace clustering problem. Given some data points approximately drawn from a union of subspaces, the goal is to group these data points into their underlying subspaces. Many subspace clustering methods have been proposed and among which sparse subspace clustering and low-rank representation are two representative ones. Despite the different motivations, we observe that many existing methods own the common block diagonal property, which possibly leads to correct clustering, yet with their proofs given case by case. In this work, we consider a general formulation and provide a unified theoretical guarantee of the block diagonal property. The block diagonal property of many existing methods falls into our special case. Second, we observe that many existing methods approximate the block diagonal representation matrix by using different structure priors, e.g., sparsity and low-rankness, which are indirect. We propose the first block diagonal matrix induced regularizer for directly pursuing the block diagonal matrix. With this regularizer, we solve the subspace clustering problem by Block Diagonal Representation (BDR), which uses the block diagonal structure prior. The BDR model is nonconvex and we propose an alternating minimization solver and prove its convergence. Experiments on real datasets demonstrate the effectiveness of BDR.

258 citations


Proceedings ArticleDOI
23 Jun 2019
TL;DR: A classical analogue to Kerenidis and Prakash’s quantum recommendation system, previously believed to be one of the strongest candidates for provably exponential speedups in quantum machine learning, is given; under strong input assumptions, it produces recommendations exponentially faster than previous classical systems, which run in time linear in m and n.
Abstract: We give a classical analogue to Kerenidis and Prakash’s quantum recommendation system, previously believed to be one of the strongest candidates for provably exponential speedups in quantum machine learning. Our main result is an algorithm that, given an m × n matrix in a data structure supporting certain l2-norm sampling operations, outputs an l2-norm sample from a rank-k approximation of that matrix in time O(poly(k)log(mn)), only polynomially slower than the quantum algorithm. As a consequence, Kerenidis and Prakash’s algorithm does not in fact give an exponential speedup over classical algorithms. Further, under strong input assumptions, the classical recommendation system resulting from our algorithm produces recommendations exponentially faster than previous classical systems, which run in time linear in m and n. The main insight of this work is the use of simple routines to manipulate l2-norm sampling distributions, which play the role of quantum superpositions in the classical setting. This correspondence indicates a potentially fruitful framework for formally comparing quantum machine learning algorithms to classical machine learning algorithms.
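
The l2-norm (length-squared) sampling operations at the heart of the algorithm are easy to state; below is a naive dense sketch (function names are ours, and a real implementation would use the paper's sampling data structure rather than scanning the matrix):

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_sample_row(A):
    """Sample a row index i with probability ||A_i||^2 / ||A||_F^2
    (length-squared sampling; here done densely, not in polylog time)."""
    row_norms2 = np.sum(A**2, axis=1)
    return rng.choice(A.shape[0], p=row_norms2 / row_norms2.sum())

def l2_sample_entry(A, i):
    """Given a row i, sample a column j with probability A_ij^2 / ||A_i||^2."""
    p = A[i]**2 / np.sum(A[i]**2)
    return rng.choice(A.shape[1], p=p)

A = rng.standard_normal((100, 50))
i = l2_sample_row(A)
j = l2_sample_entry(A, i)
print(i, j)
```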

257 citations


Journal ArticleDOI
TL;DR: A novel multi-view clustering method that works in the GBS framework is also proposed, which can construct data graph matrices effectively, weight each graph matrix automatically, and produce clustering results directly.
Abstract: This paper studies clustering of multi-view data, known as multi-view clustering. Among existing multi-view clustering methods, one representative category of methods is the graph-based approach. Despite its elegant and simple formulation, the graph-based approach has not been studied in terms of (a) the generalization of the approach or (b) the impact of different graph metrics on the clustering results. This paper extends this important approach by first proposing a general Graph-Based System (GBS) for multi-view clustering, and then discussing and evaluating the impact of different graph metrics on the multi-view clustering performance within the proposed framework. GBS works by extracting data feature matrix of each view, constructing graph matrices of all views, and fusing the constructed graph matrices to generate a unified graph matrix, which gives the final clusters. A novel multi-view clustering method that works in the GBS framework is also proposed, which can (1) construct data graph matrices effectively, (2) weight each graph matrix automatically, and (3) produce clustering results directly. Experimental results on benchmark datasets show that the proposed method outperforms state-of-the-art baselines significantly.
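
A generic sketch of the graph-based pipeline described above (per-view graph construction, fusion into a unified graph, clustering). The graph construction, uniform view weighting, and helper names here are our simplifications; the paper's method learns the view weights automatically and obtains clusters directly from the unified graph:

```python
import numpy as np
from sklearn.cluster import KMeans

def knn_graph(X, k=10, sigma=1.0):
    """Gaussian-weighted k-NN similarity graph for one view (a generic choice)."""
    d2 = np.sum((X[:, None] - X[None]) ** 2, axis=-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0)
    keep = np.argsort(W, axis=1)[:, -k:]
    M = np.zeros_like(W)
    np.put_along_axis(M, keep, 1.0, axis=1)
    return W * np.maximum(M, M.T)              # symmetrized k-NN mask

def fuse_and_cluster(views, n_clusters, weights=None):
    """Fuse per-view graph matrices into a unified graph, then spectrally cluster it."""
    graphs = [knn_graph(X) for X in views]
    weights = weights or [1.0 / len(graphs)] * len(graphs)
    U = sum(w * G for w, G in zip(weights, graphs))
    L = np.diag(U.sum(axis=1)) - U              # unnormalized graph Laplacian
    _, vecs = np.linalg.eigh(L)
    emb = vecs[:, :n_clusters]                  # eigenvectors of the smallest eigenvalues
    return KMeans(n_clusters, n_init=10).fit_predict(emb)

# Toy two-view data: two Gaussian blobs seen through different random projections
rng = np.random.default_rng(0)
Z = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
views = [Z @ rng.standard_normal((2, 4)), Z @ rng.standard_normal((2, 6))]
print(fuse_and_cluster(views, n_clusters=2))
```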

217 citations


Journal ArticleDOI
Xiuli Chai1, Xiuli Chai2, Zhihua Gan2, Ke Yuan2, Yi Chen1, Xianxing Liu2 
TL;DR: Experimental results and security analyses demonstrate that the proposed scheme not only has good encryption effect, but also is secure enough to resist against the known attacks.
Abstract: In the paper, a novel image encryption algorithm based on DNA sequence operations and chaotic systems is proposed. The encryption architecture of permutation and diffusion is adopted. Firstly, 256-bit hash value of the plain image is gotten to calculate the initial values and system parameters of the 2D Logistic-adjusted-Sine map (2D-LASM) and a new 1D chaotic system; thus, the encryption scheme highly depends on the original image. Next, the chaotic sequences from 2D-LASM are used to produce the DNA encoding/decoding rule matrix, and the plain image is encoded into a DNA matrix according to it. Thirdly, DNA level row permutation and column permutation are performed on the DNA matrix of the original image, inter-DNA-plane permutation and intra-DNA-plane permutation can be attained simultaneously, and then, DNA XOR operation is performed on the permutated DNA matrix using a DNA key matrix, and the key matrix is produced by the combination of two 1D chaotic systems. Finally, after decoding the confused DNA matrix, the cipher image is obtained. Experimental results and security analyses demonstrate that the proposed scheme not only has good encryption effect, but also is secure enough to resist against the known attacks.
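
For intuition about the DNA-level operations, here is a toy sketch of encoding an 8-bit pixel under one of the eight admissible DNA rules and applying DNA XOR. The full scheme additionally derives rule matrices, permutations, and key streams from the 2D-LASM and the 1D chaotic map, which we do not reproduce here:

```python
# One admissible DNA encoding rule ("rule 1"): 00->A, 01->C, 10->G, 11->T.
B2D = {'00': 'A', '01': 'C', '10': 'G', '11': 'T'}
D2B = {v: k for k, v in B2D.items()}

def dna_encode(pixel):                       # 8-bit pixel -> 4 DNA bases
    bits = format(int(pixel), '08b')
    return [B2D[bits[i:i + 2]] for i in range(0, 8, 2)]

def dna_xor(a, b):                           # XOR defined on the underlying 2-bit codes
    return B2D[format(int(D2B[a], 2) ^ int(D2B[b], 2), '02b')]

def dna_decode(bases):                       # 4 DNA bases -> 8-bit pixel
    return int(''.join(D2B[b] for b in bases), 2)

plain, key = 173, 92
cipher = [dna_xor(p, k) for p, k in zip(dna_encode(plain), dna_encode(key))]
print(dna_decode(cipher) == (plain ^ key))   # True: DNA XOR under rule 1 matches bitwise XOR
```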

212 citations


Proceedings Article
01 Jan 2019
TL;DR: This paper establishes risk convergence and asymptotic weight matrix alignment --- a form of implicit regularization --- of gradient flow and gradient descent when applied to deep linear networks on linearly separable data.
Abstract: This paper establishes risk convergence and asymptotic weight matrix alignment --- a form of implicit regularization --- of gradient flow and gradient descent when applied to deep linear networks on linearly separable data. In more detail, for gradient flow applied to strictly decreasing loss functions (with similar results for gradient descent with particular decreasing step sizes): (i) the risk converges to 0; (ii) the normalized i-th weight matrix asymptotically equals its rank-1 approximation $u_iv_i^{\top}$; (iii) these rank-1 matrices are aligned across layers, meaning $|v_{i+1}^{\top}u_i|\to1$. In the case of the logistic loss (binary cross entropy), more can be said: the linear function induced by the network --- the product of its weight matrices --- converges to the same direction as the maximum margin solution. This last property was identified in prior work, but only under assumptions on gradient descent which here are implied by the alignment phenomenon.
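
A minimal numerical sketch of the alignment statement (the data, widths, step size, and iteration count are ours): train a depth-3 linear network with gradient descent on the logistic loss over separable data, then report |v_{i+1}^T u_i| for consecutive layers, which the result predicts should approach 1:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, L, lr = 2, 50, 3, 0.05

# Toy linearly separable data
X = rng.standard_normal((n, d))
y = np.sign(X @ np.array([1.0, 2.0]))

# Deep linear network f(x) = W_L ... W_1 x with scalar output
Ws = [0.5 * rng.standard_normal((d, d)) for _ in range(L - 1)]
Ws.append(0.5 * rng.standard_normal((1, d)))

def prod(mats, dim):
    P = np.eye(dim)
    for W in mats:
        P = W @ P
    return P

for _ in range(40000):
    f = X @ prod(Ws, d).ravel()
    g = -y / (1.0 + np.exp(np.clip(y * f, -30, 30)))     # logistic-loss derivative w.r.t. f
    for i in range(L):
        after = prod(Ws[i + 1:], d) if i < L - 1 else np.eye(1)   # layers above i
        before = prod(Ws[:i], d)                                  # layers below i
        Ws[i] -= lr * after.T @ np.atleast_2d(g @ (X @ before.T)) / n

# Alignment: top left singular vector of layer i vs. top right singular vector of layer i+1
svds = [np.linalg.svd(W) for W in Ws]
for i in range(L - 1):
    u_i = svds[i][0][:, 0]
    v_next = svds[i + 1][2][0, :]
    print(f"layer {i+1}->{i+2} alignment |v^T u| =", abs(v_next @ u_i))
```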

167 citations


Journal ArticleDOI
TL;DR: A standardized matrix modeling method based on the concept of EH to build the coupling matrix automatically for multiple energy systems to improve the overall efficiency of the energy system is proposed.
Abstract: Multiple energy systems (MESs) bring together the electric power, heat, natural gas, and other systems to improve the overall efficiency of the energy system. An energy hub (EH) models an MES as a device with multiple ports using a matrix coupling the inputs and outputs. This paper proposes a standardized matrix modeling method based on the concept of EH to build the coupling matrix automatically. The components and the structure of MES are first defined using graph theory. Then, the matrices describing the topology of the MES and the characteristics of the energy converters are developed. On this basis, the energy flow equations are formulated. Gaussian elimination can then be applied to obtain the coupling matrix and analyze the degree of freedom of the EH. A standard data structure for basic information on the MES is proposed to facilitate computerized modeling. Further, extension modeling of energy storage and demand response is also discussed. Finally, a case study of a modified tri-generation system is conducted to illustrate the proposed method.
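
For readers unfamiliar with the energy-hub formalism, a classic textbook-style example (transformer plus CHP plus furnace; the efficiencies and the dispatch factor nu below are ours, not the paper's) shows what the coupling matrix does, mapping input carriers to output loads as L = C P:

```python
import numpy as np

eta_T  = 0.98   # transformer efficiency (electricity in -> electricity out)
eta_ge = 0.35   # CHP gas -> electricity efficiency
eta_gh = 0.45   # CHP gas -> heat efficiency
eta_F  = 0.90   # furnace gas -> heat efficiency
nu     = 0.6    # dispatch factor: share of the gas input routed to the CHP

# Coupling matrix C: inputs P = [electricity, gas], outputs L = [electric load, heat load]
C = np.array([[eta_T, nu * eta_ge],
              [0.0,   nu * eta_gh + (1.0 - nu) * eta_F]])

P = np.array([10.0, 20.0])        # electricity and gas inputs (e.g., kW)
print("electric load, heat load =", C @ P)
```

The paper automates the construction of C for arbitrary hub topologies from the topology and converter matrices via Gaussian elimination, rather than writing it out by hand as above.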

Journal ArticleDOI
TL;DR: A feedback circuit with cross-point resistive memories can solve algebraic problems such as systems of linear equations, matrix eigenvectors, and differential equations in just one step, thanks to the physical computing with Ohm’s and Kirchhoff's laws.
Abstract: Conventional digital computers can execute advanced operations by a sequence of elementary Boolean functions of 2 or more bits. As a result, complicated tasks such as solving a linear system or solving a differential equation require a large number of computing steps and an extensive use of memory units to store individual bits. To accelerate the execution of such advanced tasks, in-memory computing with resistive memories provides a promising avenue, thanks to analog data storage and physical computation in the memory. Here, we show that a cross-point array of resistive memory devices can directly solve a system of linear equations, or find the matrix eigenvectors. These operations are completed in just one single step, thanks to the physical computing with Ohm’s and Kirchhoff’s laws, and thanks to the negative feedback connection in the cross-point circuit. Algebraic problems are demonstrated in hardware and applied to classical computing tasks, such as ranking webpages and solving the Schrodinger equation in one step.

Journal ArticleDOI
TL;DR: In this paper, a comprehensive review is given of the four key parameters, namely material-, geometry-, event-, and environment-related conditions, that affect the structural behavior of fiber-reinforced polymer matrix composites under impact loading.

Journal ArticleDOI
TL;DR: The effect of the food matrix (FM-effect) is discussed in reference to food processing, oral processing and flavor perception, satiation and satiety, and digestion in the gastrointestinal tract, and a classification for the major types of matrices found in foods is proposed.
Abstract: The concept of food matrix has received much attention lately in reference to its effects on food processing, nutrition and health. However, the term matrix is used vaguely by food and nutrition scientists, often as synonymous of the food itself or its microstructure. This review analyses the concept of food matrix and proposes a classification for the major types of matrices found in foods. The food matrix may be viewed as a physical domain that contains and/or interacts with specific constituents of a food (e.g., a nutrient) providing functionalities and behaviors which are different from those exhibited by the components in isolation or a free state. The effect of the food matrix (FM-effect) is discussed in reference to food processing, oral processing and flavor perception, satiation and satiety, and digestion in the gastrointestinal tract. The FM-effect has also implications in nutrition, food allergies and food intolerances, and in the quality and relevance of results of analytical techniques. The role of the food matrix in the design of healthy foods is also discussed.

Proceedings ArticleDOI
Jiezhong Qiu1, Yuxiao Dong2, Hao Ma3, Jian Li1, Chi Wang2, Kuansan Wang2, Jie Tang1 
13 May 2019
TL;DR: NetSMF, as mentioned in this paper, leverages theories from spectral sparsification to efficiently sparsify the dense matrix that network embedding methods such as DeepWalk implicitly factorize, enabling significantly improved efficiency in embedding learning while maintaining the representation power of the learned embeddings.
Abstract: We study the problem of large-scale network embedding, which aims to learn latent representations for network mining applications. Previous research shows that 1) popular network embedding benchmarks, such as DeepWalk, are in essence implicitly factorizing a matrix with a closed form, and 2) the explicit factorization of such matrix generates more powerful embeddings than existing methods. However, directly constructing and factorizing this matrix-which is dense-is prohibitively expensive in terms of both time and space, making it not scalable for large networks. In this work, we present the algorithm of large-scale network embedding as sparse matrix factorization (NetSMF). NetSMF leverages theories from spectral sparsification to efficiently sparsify the aforementioned dense matrix, enabling significantly improved efficiency in embedding learning. The sparsified matrix is spectrally close to the original dense one with a theoretically bounded approximation error, which helps maintain the representation power of the learned embeddings. We conduct experiments on networks of various scales and types. Results show that among both popular benchmarks and factorization based methods, NetSMF is the only method that achieves both high efficiency and effectiveness. We show that NetSMF requires only 24 hours to generate effective embeddings for a large-scale academic collaboration network with tens of millions of nodes, while it would cost DeepWalk months and is computationally infeasible for the dense matrix factorization solution. The source code of NetSMF is publicly available1.

Journal ArticleDOI
Jeffrey Pennington1, Pratik Worah1
TL;DR: This work demonstrates that the pointwise nonlinearities typically applied in neural networks can be incorporated into a standard method of proof in random matrix theory known as the moments method, and identifies an intriguing new class of activation functions with favorable properties.
Abstract: Neural network configurations with random weights play an important role in the analysis of deep learning. They define the initial loss landscape and are closely related to kernel and random feature methods. Despite the fact that these networks are built out of random matrices, the vast and powerful machinery of random matrix theory has so far found limited success in studying them. A main obstacle in this direction is that neural networks are nonlinear, which prevents the straightforward utilization of many of the existing mathematical results. In this work, we open the door for direct applications of random matrix theory to deep learning by demonstrating that the pointwise nonlinearities typically applied in neural networks can be incorporated into a standard method of proof in random matrix theory known as the moments method. The test case for our study is the Gram matrix $Y^TY$, $Y=f(WX)$, where $W$ is a random weight matrix, $X$ is a random data matrix, and $f$ is a pointwise nonlinear activation function. We derive an explicit representation for the trace of the resolvent of this matrix, which defines its limiting spectral distribution. We apply these results to the computation of the asymptotic performance of single-layer random feature methods on a memorization task and to the analysis of the eigenvalues of the data covariance matrix as it propagates through a neural network. As a byproduct of our analysis, we identify an intriguing new class of activation functions with favorable properties.
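
A small numerical sketch of the object studied above (sizes, the choice of tanh, and the normalization are ours; conventions vary): form Y = f(WX) for random W and X and look at the empirical spectral moments of the Gram matrix, which are exactly the quantities the moments method tracks:

```python
import numpy as np

rng = np.random.default_rng(0)
n0, n1, m = 800, 800, 800                          # data dim, width, number of samples

W = rng.standard_normal((n1, n0)) / np.sqrt(n0)    # random weight matrix
X = rng.standard_normal((n0, m))                   # random data matrix
Y = np.tanh(W @ X)                                 # pointwise nonlinearity f

M = Y.T @ Y / n1            # Gram matrix (one common normalization; papers differ)
eigs = np.linalg.eigvalsh(M)

# Empirical spectral moments (1/m) tr(M^k)
for k in range(1, 5):
    print(f"moment {k}:", np.mean(eigs ** k))
```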

Journal ArticleDOI
TL;DR: This paper focuses on the Markov chain-based spectral clustering method and proposes a novel essential tensor learning method to explore the high-order correlations for multi-view representation and achieves superior performance over other state-of-the-art methods.
Abstract: Recently, multi-view clustering attracts much attention, which aims to take advantage of multi-view information to improve the performance of clustering. However, most recent work mainly focuses on the self-representation-based subspace clustering, which is of high computation complexity. In this paper, we focus on the Markov chain-based spectral clustering method and propose a novel essential tensor learning method to explore the high-order correlations for multi-view representation. We first construct a tensor based on multi-view transition probability matrices of the Markov chain. By incorporating the idea from the robust principle component analysis, tensor singular value decomposition (t-SVD)-based tensor nuclear norm is imposed to preserve the low-rank property of the essential tensor, which can well capture the principle information from multiple views. We also employ the tensor rotation operator for this task to better investigate the relationship among views as well as reduce the computation complexity. The proposed method can be efficiently optimized by the alternating direction method of multipliers (ADMM). Extensive experiments on seven real-world datasets corresponding to five different applications show that our method achieves superior performance over other state-of-the-art methods.
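
The t-SVD-based tensor nuclear norm used above can be computed slice-wise in the Fourier domain; here is a hedged sketch (the 1/n3 scaling follows one common convention, and the toy low-tubal-rank tensor is ours):

```python
import numpy as np

def tsvd_tensor_nuclear_norm(T):
    """Tensor nuclear norm induced by the t-SVD: FFT along the 3rd mode, then
    sum the matrix nuclear norms of the frontal slices in the Fourier domain.
    One common convention divides by n3; scalings differ across papers."""
    n1, n2, n3 = T.shape
    T_hat = np.fft.fft(T, axis=2)
    tnn = sum(np.linalg.norm(T_hat[:, :, k], ord='nuc') for k in range(n3))
    return float(np.real(tnn)) / n3

rng = np.random.default_rng(0)
# A low-tubal-rank tensor built as the t-product of two thin random tensors
A = rng.standard_normal((20, 3, 8))
B = rng.standard_normal((3, 15, 8))
A_hat, B_hat = np.fft.fft(A, axis=2), np.fft.fft(B, axis=2)
C = np.fft.ifft(np.einsum('ijk,jlk->ilk', A_hat, B_hat), axis=2).real
print("TNN of a tubal-rank-3 tensor:", tsvd_tensor_nuclear_norm(C))
```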

Journal ArticleDOI
TL;DR: In this article, the authors proposed a block Lanczos method for large-scale shell-model calculations, which is an extension of the traditional thick-restart block-Lanczos method with the block algorithm.

Journal ArticleDOI
TL;DR: This critical review provides an overview of the state-of-the-art procedures in microplastic analysis, gives examples of potential ways ahead and remaining challenges and classifies available methods according to the underlying research question.
Abstract: Microplastics are observed ubiquitously and in different environmental compartments ranging from marine waters and sediments to freshwater and terrestrial ecosystems including biota. Over the last decade, several methods have been applied and advanced to monitor and quantify microplastics, to identify the polymer material and to describe the particle properties, such as size, shape or colour. In most cases, the overarching aim is to elucidate patterns of occurrence that might result from (micro)plastic emissions and environmental fate. But the applied methods are subject to uncertainties and boundary conditions, be it spatial resolution that excludes the smallest microplastics or limitations in distinguishing microplastic particles from natural particles. This critical review provides an overview of the state-of-the-art procedures in microplastic analysis, gives examples of potential ways ahead and remaining challenges and classifies available methods according to the underlying research question. The resulting decision tree for the selection of analytical methods starts with a common research question and takes specificities of the environmental matrix into account. The procedural range consequently ranges from fast screening methods based on visual identification to a highly sophisticated combination of analytical methods that provide information on polymer type, particle number or mass and eventually particle size but are very time-consuming and expensive. Standardization of microplastic analytical methods on the basis of the research aim will help to make study results comparable and obtain a more comprehensive picture of microplastic abundance and fate in the environment. Graphical abstract.

Journal ArticleDOI
TL;DR: A comprehensive performance analysis of the most popular algorithms proposed in this domain, in terms of the accuracy of weights calculated from fuzzy comparison matrices, found that the modified Logarithmic Least Squares Method and the Fuzzy Inverse of Column Sum Method generally outperformed the other algorithms.

Posted Content
TL;DR: This work presents a novel gossip-based stochastic gradient descent algorithm, CHOCO-SGD, that converges at rate $\mathcal{O}\left(1/(nT) + 1/(T \delta^2 \omega)^2\right)$ for strongly convex objectives, where $T$ denotes the number of iterations and $\delta$ the eigengap of the connectivity matrix.
Abstract: We consider decentralized stochastic optimization with the objective function (e.g. data samples for machine learning task) being distributed over $n$ machines that can only communicate to their neighbors on a fixed communication graph. To reduce the communication bottleneck, the nodes compress (e.g. quantize or sparsify) their model updates. We cover both unbiased and biased compression operators with quality denoted by $\omega \leq 1$ ($\omega=1$ meaning no compression). We (i) propose a novel gossip-based stochastic gradient descent algorithm, CHOCO-SGD, that converges at rate $\mathcal{O}\left(1/(nT) + 1/(T \delta^2 \omega)^2\right)$ for strongly convex objectives, where $T$ denotes the number of iterations and $\delta$ the eigengap of the connectivity matrix. Despite compression quality and network connectivity affecting the higher order terms, the first term in the rate, $\mathcal{O}(1/(nT))$, is the same as for the centralized baseline with exact communication. We (ii) present a novel gossip algorithm, CHOCO-GOSSIP, for the average consensus problem that converges in time $\mathcal{O}(1/(\delta^2\omega) \log (1/\epsilon))$ for accuracy $\epsilon > 0$. This is (up to our knowledge) the first gossip algorithm that supports arbitrary compressed messages for $\omega > 0$ and still exhibits linear convergence. We (iii) show in experiments that both of our algorithms do outperform the respective state-of-the-art baselines and CHOCO-SGD can reduce communication by at least two orders of magnitudes.
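
A toy sketch of the CHOCO-GOSSIP consensus step as we read the abstract (topology, compressor, and the step size gamma are chosen ad hoc here; the paper prescribes how gamma should scale with the eigengap delta and the compression quality omega):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, gamma, steps = 8, 5, 0.1, 5000          # nodes, dimension, step size, iterations

# Ring topology with a symmetric, doubly stochastic mixing matrix W
W = np.zeros((n, n))
for i in range(n):
    W[i, i], W[i, (i - 1) % n], W[i, (i + 1) % n] = 0.5, 0.25, 0.25

def compress(v, k=2):
    """Toy top-k sparsification (a biased compression operator Q)."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

x = rng.standard_normal((n, d))               # local vectors; the goal is their average
x_hat = np.zeros((n, d))                      # publicly shared (compressed) copies
target = x.mean(axis=0)

for _ in range(steps):
    # each node transmits only the compressed difference q_i = Q(x_i - x_hat_i) ...
    x_hat = x_hat + np.array([compress(x[i] - x_hat[i]) for i in range(n)])
    # ... then takes a gossip step on the shared copies: x_i += gamma * sum_j w_ij (x_hat_j - x_hat_i)
    x = x + gamma * (W - np.eye(n)) @ x_hat

print("distance to the true average:", np.linalg.norm(x - target))
```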

Journal ArticleDOI
TL;DR: In this paper, the authors present a review of multifunctional structural materials, particularly continuous fiber polymer-matrix composites, including sensing, EMI shielding, heating, energy generation, energy storage, actuation and self-healing.
Abstract: This paper reviews multifunctional structural materials, particularly continuous fiber polymer-matrix composites. The non-structural functions include sensing, EMI shielding, heating, energy generation, energy storage, actuation and self-healing. For each function, the materials, composite design, functional performance and relevant technological needs are covered. Both unmodified and modified forms of the composite are addressed. Sensing, EMI shielding, heating and actuation are functions that do not require modification, though modifications can be performed. In contrast, energy generation, energy storage and self-healing require modification. For a continuous fiber polymer-matrix composite, the modification commonly involves the incorporation of constituents at the interlaminar interface.

Posted Content
TL;DR: This work introduces a systematic technique for minimizing requisite state preparations by exploiting the simultaneous measurability of partitions of commuting Pauli strings, and encompasses algorithms for efficiently approximating a MIN-COMMUTING-PARTITION, as well as a synthesis tool for compiling simultaneous measurement circuits.
Abstract: Variational quantum eigensolver (VQE) is a promising algorithm suitable for near-term quantum machines. VQE aims to approximate the lowest eigenvalue of an exponentially sized matrix in polynomial time. It minimizes quantum resource requirements both by co-processing with a classical processor and by structuring computation into many subproblems. Each quantum subproblem involves a separate state preparation terminated by the measurement of one Pauli string. However, the number of such Pauli strings scales as $N^4$ for typical problems of interest--a daunting growth rate that poses a serious limitation for emerging applications such as quantum computational chemistry. We introduce a systematic technique for minimizing requisite state preparations by exploiting the simultaneous measurability of partitions of commuting Pauli strings. Our work encompasses algorithms for efficiently approximating a MIN-COMMUTING-PARTITION, as well as a synthesis tool for compiling simultaneous measurement circuits. For representative problems, we achieve 8-30x reductions in state preparations, with minimal overhead in measurement circuit cost. We demonstrate experimental validation of our techniques by estimating the ground state energy of deuteron on an IBM Q 20-qubit machine. We also investigate the underlying statistics of simultaneous measurement and devise an adaptive strategy for mitigating harmful covariance terms.
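
The core combinatorial step can be illustrated with a simple first-fit heuristic (the partitioning heuristics and the compilation of simultaneous-measurement circuits in the paper are more sophisticated; the toy Hamiltonian terms below are ours):

```python
def commute(p, q):
    """Two n-qubit Pauli strings commute iff they anticommute on an even number of
    positions, i.e., positions where both letters are non-identity and different."""
    anti = sum(1 for a, b in zip(p, q) if a != 'I' and b != 'I' and a != b)
    return anti % 2 == 0

def greedy_commuting_partition(paulis):
    """Greedy (first-fit) approximation to MIN-COMMUTING-PARTITION:
    each string joins the first group it commutes with entirely."""
    groups = []
    for p in paulis:
        for g in groups:
            if all(commute(p, q) for q in g):
                g.append(p)
                break
        else:
            groups.append([p])
    return groups

# Toy 4-qubit Hamiltonian terms (illustrative only)
terms = ["ZZII", "IZZI", "IIZZ", "XXII", "IXXI", "IIXX", "ZIIZ", "XIIX"]
for g in greedy_commuting_partition(terms):
    print(g)
```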

Proceedings ArticleDOI
01 Jul 2019
TL;DR: Two quantum algorithms for solving semidefinite programs (SDPs) providing quantum speed-ups and a quantum Gibbs state sampler for low-rank Hamiltonians with a poly-logarithmic dependence on its dimension are given.
Abstract: We give two new quantum algorithms for solving semidefinite programs (SDPs) providing quantum speed-ups. We consider SDP instances with m constraint matrices, each of dimension n, rank at most r, and sparsity s. The first algorithm assumes an input model where one is given access to an oracle to the entries of the matrices at unit cost. We show that it has run time O~(s^2 (sqrt{m} epsilon^{-10} + sqrt{n} epsilon^{-12})), with epsilon the error of the solution. This gives an optimal dependence in terms of m, n and quadratic improvement over previous quantum algorithms (when m ~~ n). The second algorithm assumes a fully quantum input model in which the input matrices are given as quantum states. We show that its run time is O~(sqrt{m}+poly(r))*poly(log m,log n,B,epsilon^{-1}), with B an upper bound on the trace-norm of all input matrices. In particular the complexity depends only polylogarithmically in n and polynomially in r. We apply the second SDP solver to learn a good description of a quantum state with respect to a set of measurements: Given m measurements and a supply of copies of an unknown state rho with rank at most r, we show we can find in time sqrt{m}*poly(log m,log n,r,epsilon^{-1}) a description of the state as a quantum circuit preparing a density matrix which has the same expectation values as rho on the m measurements, up to error epsilon. The density matrix obtained is an approximation to the maximum entropy state consistent with the measurement data considered in Jaynes' principle from statistical mechanics. As in previous work, we obtain our algorithm by "quantizing" classical SDP solvers based on the matrix multiplicative weight update method. One of our main technical contributions is a quantum Gibbs state sampler for low-rank Hamiltonians, given quantum states encoding these Hamiltonians, with a poly-logarithmic dependence on its dimension, which is based on ideas developed in quantum principal component analysis. We also develop a "fast" quantum OR lemma with a quadratic improvement in gate complexity over the construction of Harrow et al. [Harrow et al., 2017]. We believe both techniques might be of independent interest.

Journal ArticleDOI
TL;DR: The dynamic mode decomposition is a regression technique that integrates two of the leading data analysis methods in use today: Fourier transforms and singular value decomposition and the quality of the resulting background model is competitive, quantified by the F-measure, recall and precision.
Abstract: We introduce the method of compressed dynamic mode decomposition (cDMD) for background modeling. The dynamic mode decomposition is a regression technique that integrates two of the leading data analysis methods in use today: Fourier transforms and singular value decomposition. Borrowing ideas from compressed sensing and matrix sketching, cDMD eases the computational workload of high-resolution video processing. The key principal of cDMD is to obtain the decomposition on a (small) compressed matrix representation of the video feed. Hence, the cDMD algorithm scales with the intrinsic rank of the matrix, rather than the size of the actual video (data) matrix. Selection of the optimal modes characterizing the background is formulated as a sparsity-constrained sparse coding problem. Our results show that the quality of the resulting background model is competitive, quantified by the F-measure, recall and precision. A graphics processing unit accelerated implementation is also presented which further boosts the computational performance of the algorithm.
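
For readers unfamiliar with DMD-based background modeling, here is a minimal sketch of the uncompressed variant on synthetic data; the compression/sketching step that gives cDMD its speed and the sparsity-constrained mode selection are omitted, and all names and sizes are ours:

```python
import numpy as np

def dmd_background(V, rank=10):
    """Plain (uncompressed) DMD background model for a video matrix V whose columns
    are vectorized frames. The background is taken from the DMD mode whose
    eigenvalue is closest to 1 (temporal frequency closest to zero)."""
    X, Y = V[:, :-1], V[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    A_tilde = U.conj().T @ Y @ Vh.conj().T / s        # projected linear propagator
    evals, W = np.linalg.eig(A_tilde)
    Phi = (Y @ Vh.conj().T / s) @ W                   # exact DMD modes
    k = np.argmin(np.abs(evals - 1.0))                # near-stationary mode
    b = np.linalg.lstsq(Phi, V[:, 0].astype(complex), rcond=None)[0]
    return np.abs(np.outer(Phi[:, k], b[k] * evals[k] ** np.arange(V.shape[1])))

# Synthetic "video": a static background plus one moving bright pixel (toy data)
rng = np.random.default_rng(0)
h, w, T = 20, 20, 60
bg = rng.uniform(0.3, 0.7, size=(h, w))
frames = np.repeat(bg[None], T, axis=0)
for t in range(T):
    frames[t, 5 + t % 10, 5 + t % 10] += 1.0
V = frames.reshape(T, -1).T                           # pixels x time
B = dmd_background(V, rank=10)
print("background error at t=0:", np.linalg.norm(B[:, 0] - bg.ravel()))
```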

Journal ArticleDOI
TL;DR: In this article, a class of Riemann-Hilbert problems on the real axis is formulated for solving the multicomponent AKNS integrable hierarchies associated with a kind of bock matrix spectral problems.
Abstract: A class of Riemann–Hilbert problems on the real axis is formulated for solving the multicomponent AKNS integrable hierarchies associated with a kind of block matrix spectral problems. Through special Riemann–Hilbert problems where a jump matrix is taken to be the identity matrix, soliton solutions to all integrable equations in each hierarchy are explicitly computed. A class of specific reductions of the presented integrable hierarchies is also made, together with its reduced Lax pairs and soliton solutions.

Journal ArticleDOI
TL;DR: In this paper, the structure and energy properties of a peak-aged Al-Zn-Mg-Cu alloy were investigated using a combination of aberration-corrected HAADF-STEM imaging and first-principles calculations.

Proceedings ArticleDOI
01 Jun 2019
TL;DR: This model is inspired by the recently proposed tensor-tensor product (t-product) based on any invertible linear transforms and deduce the new tensor tubal rank, tensor spectral norm, and tensor nuclear norm.
Abstract: This work studies the low-rank tensor completion problem, which aims to exactly recover a low-rank tensor from partially observed entries. Our model is inspired by the recently proposed tensor-tensor product (t-product) based on any invertible linear transforms. When the linear transforms satisfy certain conditions, we deduce the new tensor tubal rank, tensor spectral norm, and tensor nuclear norm. Equipped with the tensor nuclear norm, we then solve the tensor completion problem by solving a convex program and provide the theoretical bound for the exact recovery under certain tensor incoherence conditions. The achieved sampling complexity is order-wise optimal. Our model and result greatly extend existing results in the low-rank matrix and tensor completion. Numerical experiments verify our results and the application on image recovery demonstrates the superiority of our method.

Journal ArticleDOI
TL;DR: In this paper, a general overview of what has been achieved recently in the scientific community on the manufacturing monitoring and structural health monitoring of polymer-matrix composites (PMC) using in-situ piezoelectric sensors is provided.

Journal ArticleDOI
TL;DR: In this paper, a two-dimensional matrix-based relationship between leaf chlorophyll content (ChlLeaf) and two vegetation indices (VIs) was established using simulated datasets from the PROSAIL model, and three matrices of different VI pairs were tested using the simulated data.