
Showing papers on "Basis (linear algebra) published in 2017"


Posted Content
TL;DR: In this paper, the authors show that if the vectors lie near the range of a generative model, such as a variational autoencoder or generative adversarial network, then roughly O(k log L) random Gaussian measurements suffice for recovery.
Abstract: The goal of compressed sensing is to estimate a vector from an underdetermined system of noisy linear measurements, by making use of prior knowledge on the structure of vectors in the relevant domain. For almost all results in this literature, the structure is represented by sparsity in a well-chosen basis. We show how to achieve guarantees similar to standard compressed sensing but without employing sparsity at all. Instead, we suppose that vectors lie near the range of a generative model $G: \mathbb{R}^k \to \mathbb{R}^n$. Our main theorem is that, if $G$ is $L$-Lipschitz, then roughly $O(k \log L)$ random Gaussian measurements suffice for an $\ell_2/\ell_2$ recovery guarantee. We demonstrate our results using generative models from published variational autoencoder and generative adversarial networks. Our method can use $5$-$10$x fewer measurements than Lasso for the same accuracy.

503 citations
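The recovery procedure this abstract describes can be sketched in a few lines: given measurements y = Ax of a signal x near the range of a generative model G, minimize ||AG(z) − y||² over the latent code z. The one-layer ReLU "generator", the dimensions, and the plain gradient-descent solver below are illustrative assumptions, not the paper's experimental setup (which uses trained VAE/GAN generators).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy L-Lipschitz "generative model" G(z) = relu(W z) from R^k to R^n.
# A trained VAE/GAN decoder would play this role in the paper's experiments.
k, n, m = 5, 100, 40                         # latent dim, signal dim, measurements
W = rng.normal(size=(n, k)) / np.sqrt(n)
relu = lambda v: np.maximum(v, 0.0)
G = lambda z: relu(W @ z)

z_true = rng.normal(size=k)
x_true = G(z_true)                           # signal assumed to lie in range(G)
A = rng.normal(size=(m, n)) / np.sqrt(m)     # random Gaussian measurement matrix
y = A @ x_true                               # noiseless linear measurements

# Recover by gradient descent on f(z) = ||A G(z) - y||^2; the paper analyzes
# minimizers of this loss, and gradient descent is one heuristic way to find one.
z = 0.1 * rng.normal(size=k)
lr = 0.02
for _ in range(3000):
    h = W @ z
    r = A @ relu(h) - y
    z -= lr * (W.T @ ((h > 0) * (2.0 * A.T @ r)))   # chain rule through the ReLU

res = np.linalg.norm(A @ G(z) - y)           # measurement-space residual
rel_err = np.linalg.norm(G(z) - x_true) / np.linalg.norm(x_true)
```

The loss is nonconvex, so descent from a random start is a heuristic; in practice (as in the paper's experiments) it tends to find good latent codes when m is comfortably larger than k.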


Journal ArticleDOI
TL;DR: In this paper, it was shown that the ring of regular functions on a natural class of affine log Calabi-Yau varieties (those with maximal boundary) has a canonical vector space basis parameterized by the integral tropical points of the mirror.
Abstract: In [GHK11], Conjecture 0.6, the first three authors conjectured that the ring of regular functions on a natural class of affine log Calabi-Yau varieties (those with maximal boundary) has a canonical vector space basis parameterized by the integral tropical points of the mirror. Further, the structure constants for the multiplication rule in this basis should be given by counting broken lines (certain combinatorial objects, morally the tropicalisations of holomorphic discs). Here we prove the conjecture in the case of cluster varieties, where the statement is a more precise form of the Fock-Goncharov dual basis conjecture, [FG06], Conjecture 4.3. In particular, under suitable hypotheses, for each partial compactification Y of an affine cluster variety U given by allowing some frozen variables to vanish, we obtain canonical bases for H^0(Y, O_Y) extending to a basis of H^0(U, O_U). Each choice of seed canonically identifies the parameterizing sets of these bases with integral points in a polyhedral cone. These results specialize to basis results of combinatorial representation theory. For example, by considering the open double Bruhat cell U in the basic affine space Y we obtain a canonical basis of each irreducible representation of SL_r, parameterized by a set which each choice of seed identifies with integral points of a lattice polytope. These bases and polytopes are all constructed essentially without representation theoretic considerations. Along the way, our methods prove a number of conjectures in cluster theory, including positivity of the Laurent phenomenon for cluster algebras of geometric type.

332 citations


Proceedings Article
06 Aug 2017
TL;DR: This work shows how to achieve guarantees similar to standard compressed sensing but without employing sparsity at all, and proves that, if G is L-Lipschitz, then roughly O(k log L) random Gaussian measurements suffice for an l2/l2 recovery guarantee.
Abstract: The goal of compressed sensing is to estimate a vector from an underdetermined system of noisy linear measurements, by making use of prior knowledge on the structure of vectors in the relevant domain. For almost all results in this literature, the structure is represented by sparsity in a well-chosen basis. We show how to achieve guarantees similar to standard compressed sensing but without employing sparsity at all. Instead, we suppose that vectors lie near the range of a generative model G : ℝk → ℝn. Our main theorem is that, if G is L-Lipschitz, then roughly O(k log L) random Gaussian measurements suffice for an l2/l2 recovery guarantee. We demonstrate our results using generative models from published variational autoencoder and generative adversarial networks. Our method can use 5-10x fewer measurements than Lasso for the same accuracy.

294 citations


Journal ArticleDOI
TL;DR: In this article, the authors introduce a perturbed iterate framework to analyze stochastic optimization methods where the input to each update is perturbed by bounded noise, and they show that this framework forms the basis of a unified approach to analyzing asynchronous implementations of SVRG, by viewing them as serial methods operating on noisy inputs.
Abstract: We introduce and analyze stochastic optimization methods where the input to each update is perturbed by bounded noise. We show that this framework forms the basis of a unified approach to analyzing asynchronous implementations of stochastic optimization algorithms, by viewing them as serial methods operating on noisy inputs. Using our perturbed iterate framework, we provide new analyses of the Hogwild! algorithm and asynchronous stochastic coordinate descent that are simpler than earlier analyses, remove many assumptions of previous models, and in some cases yield improved upper bounds on the convergence rates. We proceed to apply our framework to develop and analyze KroMagnon: a novel, parallel, sparse stochastic variance-reduced gradient (SVRG) algorithm. We demonstrate experimentally on a 16-core machine that the sparse and parallel version of SVRG is in some cases more than four orders of magnitude faster than the standard SVRG algorithm.

145 citations
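The perturbed-iterate viewpoint is easy to simulate: run gradient descent on a simple quadratic, but evaluate each gradient at a noisily-read copy of the iterate. The quadratic objective, step size, and noise bound below are arbitrary illustrative choices; the point is only that the iterates converge to a neighborhood of the optimum whose size is controlled by the noise bound.

```python
import numpy as np

rng = np.random.default_rng(1)

# Gradient descent where every update reads a perturbed iterate x_t + xi_t,
# modelling the inconsistent reads of asynchronous (Hogwild!-style) updates
# as a serial method operating on noisy inputs.
d = 10
b = rng.normal(size=d)
grad = lambda x: x - b                  # gradient of f(x) = 0.5 * ||x - b||^2

x = np.zeros(d)
gamma, noise_bound = 0.1, 1e-3
for _ in range(500):
    xi = rng.uniform(-noise_bound, noise_bound, size=d)  # bounded perturbation
    x = x - gamma * grad(x + xi)        # update computed from the noisy read

err = np.linalg.norm(x - b)             # ends up O(noise_bound) from the optimum
```

Because the map is a contraction, the accumulated effect of the bounded perturbations stays O(noise_bound), which is the essence of the framework's analyses.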


Book ChapterDOI
05 Jul 2017
TL;DR: A class of model reduction techniques for parametric partial differential equations, the so-called Reduced Basis (RB) methods, allows one to obtain low-dimensional parametric models for various complex applications, enabling accurate and rapid numerical simulations.
Abstract: In this part we are concerned with a class of model reduction techniques for parametric partial differential equations, the so-called Reduced Basis (RB) methods. These allow one to obtain low-dimensional parametric models for various complex applications, enabling accurate and rapid numerical simulations. Important aspects are basis generation and certification of the simulation results by suitable a posteriori error control. The main terminology, ideas and assumptions will be explained for the case of linear stationary elliptic, as well as parabolic or hyperbolic instationary problems. Reproducible experiments will illustrate the theoretical findings. We close with a discussion of further recent developments.

139 citations
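The offline/online split at the heart of RB methods can be illustrated with a proper-orthogonal-decomposition basis for a small parametric linear system. The diagonal operator, parameter range, and basis size below are invented for illustration; RB methods proper add greedy snapshot selection and certified a posteriori error bounds, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal POD-flavoured reduced basis: snapshots of a parametric linear
# system, an SVD-compressed basis, and a Galerkin-projected online solve.
N = 200
A0 = np.diag(np.arange(1.0, N + 1))          # SPD "stiffness" part
A1 = np.eye(N)                               # parametric part: A(mu) = A0 + mu*A1
f = rng.normal(size=N)
solve = lambda mu: np.linalg.solve(A0 + mu * A1, f)

# Offline: snapshots on a training set of parameters, compressed by SVD.
S = np.column_stack([solve(mu) for mu in np.linspace(0.1, 10.0, 20)])
U, _, _ = np.linalg.svd(S, full_matrices=False)
V = U[:, :5]                                 # 5-dimensional reduced basis

# Online: a cheap 5x5 Galerkin-projected solve at a new parameter value.
mu_test = 3.3
u_r = V @ np.linalg.solve(V.T @ (A0 + mu_test * A1) @ V, V.T @ f)
u_full = solve(mu_test)
rel_err = np.linalg.norm(u_r - u_full) / np.linalg.norm(u_full)
```

The online stage never touches the full N-dimensional operator assembled with the test parameter beyond one projection, which is where the speedup of RB methods comes from.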


Journal ArticleDOI
TL;DR: The tool epsilon is presented: an efficient implementation of Lee's algorithm, with the Fermat computer algebra system as its computational back end; it finds a canonical basis in which the master integrals fulfill a differential equation whose right-hand side is proportional to ϵ.

126 citations


Journal ArticleDOI
TL;DR: The package Azurite (A ZURich-bred method for finding master InTEgrals) is presented, which efficiently finds a basis of the vector space of loop integrals spanned by a given Feynman diagram and all of its subdiagrams.

118 citations


Journal ArticleDOI
TL;DR: Geodesic Principal Component Analysis (GPCA) as discussed by the authors is a generalization of functional PCA for the space of probability measures on the line, with finite second moment, endowed with the Wasserstein metric.
Abstract: We introduce the method of Geodesic Principal Component Analysis (GPCA) on the space of probability measures on the line, with finite second moment, endowed with the Wasserstein metric. We discuss the advantages of this approach, over a standard functional PCA of probability densities in the Hilbert space of square-integrable functions. We establish the consistency of the method by showing that the empirical GPCA converges to its population counterpart, as the sample size tends to infinity. A key property in the study of GPCA is the isometry between the Wasserstein space and a closed convex subset of the space of square-integrable functions, with respect to an appropriate measure. Therefore, we consider the general problem of PCA in a closed convex subset of a separable Hilbert space, which serves as basis for the analysis of GPCA and also has interest in its own right. We provide illustrative examples on simple statistical models, to show the benefits of this approach for data analysis. The method is also applied to a real dataset of population pyramids.

105 citations
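A key ingredient noted in this abstract, the isometry between the 1-D Wasserstein space and a subset of L², maps each measure to its quantile function. That makes a linearized stand-in for GPCA easy to sketch: PCA on empirical quantile functions. The Gaussian toy family below is an invented example, and this unconstrained PCA ignores the convex-constraint aspect of true geodesic PCA.

```python
import numpy as np

rng = np.random.default_rng(3)

# The Wasserstein space on the line embeds isometrically into L^2 via quantile
# functions, so PCA on empirical quantile functions linearizes the geometry.
grid = np.linspace(0.01, 0.99, 99)           # common grid of probability levels

# A small family of Gaussian samples with varying means and spreads.
samples = [rng.normal(loc=m, scale=s, size=500)
           for m, s in [(0, 1), (1, 1), (0, 2), (2, 0.5), (1, 1.5)]]
Q = np.array([np.quantile(x, grid) for x in samples])  # rows: quantile functions

Qc = Q - Q.mean(axis=0)                      # center at the mean quantile function
U, s, Vt = np.linalg.svd(Qc, full_matrices=False)
explained = s**2 / np.sum(s**2)              # variance explained per component
```

For this location-scale family the quantile functions form a two-parameter set, so the first two components should capture essentially all of the variance.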


Journal ArticleDOI
TL;DR: The results demonstrate that age can be predicted well on the basis of anatomical measures and it was evident that good prediction accuracies can be achieved using a small but nevertheless age‐representative dataset of brain features.
Abstract: In this study, we examined whether age can be predicted on the basis of different anatomical features obtained from a large sample of healthy subjects (n = 3,144). From this sample we obtained different anatomical feature sets: (1) 11 larger brain regions (including cortical volume, thickness, area, subcortical volume, cerebellar volume, etc.), (2) 148 cortical compartmental thickness measures, (3) 148 cortical compartmental area measures, (4) 148 cortical compartmental volume measures, and (5) a combination of the above-mentioned measures. With these anatomical feature sets, we predicted age using 6 statistical techniques (multiple linear regression, ridge regression, neural network, k-nearest neighbourhood, support vector machine, and random forest). We obtained very good age prediction accuracies, with the highest accuracy being R(2) = 0.84 (prediction on the basis of a neural network and support vector machine approaches for the entire data set) and the lowest being R(2) = 0.40 (prediction on the basis of a k-nearest neighborhood for cortical surface measures). Interestingly, the easy-to-calculate multiple linear regression approach with the 11 large brain compartments resulted in a very good prediction accuracy (R(2) = 0.73), whereas the application of the neural network approach for this data set revealed very good age prediction accuracy (R(2) = 0.83). Taken together, these results demonstrate that age can be predicted well on the basis of anatomical measures. The neural network approach turned out to be the approach with the best results. In addition, it was evident that good prediction accuracies can be achieved using a small but nevertheless age-representative dataset of brain features. Hum Brain Mapp, 2016. © 2016 Wiley Periodicals, Inc.

90 citations
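One of the six regressors compared in this study, ridge regression, fits in a few lines and already shows the shape of the pipeline: features in, penalized linear fit, R² out. The synthetic features and coefficients below are stand-ins; they are not brain data, and the resulting R² says nothing about the paper's accuracies.

```python
import numpy as np

rng = np.random.default_rng(4)

# Ridge regression of "age" on p anatomical-style features, on simulated data.
n, p = 1000, 11
X = rng.normal(size=(n, p))                  # stand-in for 11 brain compartments
beta = rng.normal(size=p)
age = X @ beta + rng.normal(scale=0.5, size=n)

lam = 1.0                                    # ridge penalty strength
beta_hat = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ age)
pred = X @ beta_hat
r2 = 1 - np.sum((age - pred) ** 2) / np.sum((age - age.mean()) ** 2)
```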


Journal ArticleDOI
TL;DR: In this article, a greedy approach for a reduced-order model generation of parametric Hamiltonian systems is presented, where two new basis vectors are added at each iteration to the linear vector space to increase the accuracy of the reduced basis.
Abstract: While reduced-order models (ROMs) have been popular for efficiently solving large systems of differential equations, the stability of reduced models over long-time integration presents challenges. We present a greedy approach for a ROM generation of parametric Hamiltonian systems that captures the symplectic structure of Hamiltonian systems to ensure stability of the reduced model. Through the greedy selection of basis vectors, two new vectors are added at each iteration to the linear vector space to increase the accuracy of the reduced basis. We use the error in the Hamiltonian due to model reduction as an error indicator to search the parameter space and identify the next best basis vectors. Under natural assumptions on the set of all solutions of the Hamiltonian system under variation of the parameters, we show that the greedy algorithm converges at an exponential rate. Moreover, we demonstrate that combining the greedy basis with the discrete empirical interpolation method also preserves the symplectic structure.

89 citations
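The greedy loop itself is generic and short: repeatedly add (an orthonormalized copy of) the snapshot the current basis approximates worst. The toy solution map and the plain projection-error indicator below are illustrative simplifications; the paper's method adds symplectic vector pairs and uses the error in the Hamiltonian as the indicator.

```python
import numpy as np

# Greedy reduced-basis enrichment: sweep a parameter training set, add the
# worst-approximated snapshot, repeat. (Simplified: the symplectic greedy of
# the paper adds vector pairs and uses a Hamiltonian-based error indicator.)
N = 100
mus = np.linspace(0.0, 1.0, 30)
snapshot = lambda mu: 1.0 / (np.arange(1, N + 1) + mu)   # toy solution map

S = np.column_stack([snapshot(mu) for mu in mus])
V = np.zeros((N, 0))                          # start from an empty basis
for _ in range(5):
    R = S - V @ (V.T @ S)                     # projection error of every snapshot
    worst = np.argmax(np.linalg.norm(R, axis=0))
    v = R[:, worst] / np.linalg.norm(R[:, worst])
    V = np.column_stack([V, v])               # residual is already orthogonal to V

max_err = np.linalg.norm(S - V @ (V.T @ S), axis=0).max()
```

For smooth solution manifolds like this one the worst-case projection error decays very quickly with the basis size, which is the behaviour the paper's exponential convergence result formalizes.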


Journal ArticleDOI
TL;DR: A mixed density fitting scheme is introduced that uses both a Gaussian and a plane-wave fitting basis to accurately evaluate electron repulsion integrals in crystalline systems to enable efficient all-electron Gaussian based periodic density functional and Hartree-Fock calculations.
Abstract: We introduce a mixed density fitting scheme that uses both a Gaussian and a plane-wave fitting basis to accurately evaluate electron repulsion integrals in crystalline systems. We use this scheme to enable efficient all-electron Gaussian based periodic density functional and Hartree-Fock calculations.

Journal Article
TL;DR: This work proposes new approximate global multiplicative scaling factors for the DFT calculation of ground state harmonic vibrational frequencies using functionals from the TPSS, M06, and M11 functional families with standard correlation consistent cc-pVxZ and aug-cc-pVxZ basis sets.
Abstract: We propose new approximate global multiplicative scaling factors for the DFT calculation of ground state harmonic vibrational frequencies using functionals from the TPSS, M06, and M11 functional families with standard correlation consistent cc-pVxZ and aug-cc-pVxZ (x = D, T, and Q), 6-311G split valence family, Sadlej and Sapporo polarized triple-ζ basis sets. Results for B3LYP, CAM-B3LYP, B3PW91, PBE, and PBE0 functionals with these basis sets are also reported. A total of 99 harmonic frequencies were calculated for 26 gas-phase organic and nonorganic molecules typically found in detonated solid propellant residue. Our proposed approximate multiplicative scaling factors are determined using a least-squares approach comparing the computed harmonic frequencies to experimental counterparts well established in the scientific literature. A comparison of our work to previously published global scaling factors is made to verify method reliability and the applicability of our molecular test set.
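The least-squares scaling factor described in this abstract has a simple closed form: minimizing the sum of squared deviations (λ·ν_calc − ν_exp)² over λ gives λ = Σ ν_calc·ν_exp / Σ ν_calc². The four frequencies below are invented for illustration and are not from the paper's 99-frequency test set.

```python
import numpy as np

# Least-squares global frequency scaling factor: minimizing
# sum_i (lam * nu_calc_i - nu_exp_i)^2 over lam gives the closed form below.
nu_calc = np.array([3100.0, 1650.0, 1200.0, 750.0])   # harmonic, cm^-1 (made up)
nu_exp = np.array([2960.0, 1600.0, 1180.0, 735.0])    # observed counterparts

lam = np.sum(nu_calc * nu_exp) / np.sum(nu_calc ** 2)
rmse = np.sqrt(np.mean((lam * nu_calc - nu_exp) ** 2))
```

Harmonic frequencies systematically overshoot observed fundamentals, so global factors of this kind typically come out slightly below 1.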

Journal ArticleDOI
TL;DR: In this paper, a second order complete active space self-consistent field implementation to converge wavefunctions for both large active spaces and large AO bases is presented, which decouples the active space wavefunction solver from the orbital optimization in the microiterations.

Journal ArticleDOI
TL;DR: The results show that this P-NIROM has captured nearly all of the details of the flow, with a CPU speedup of three orders of magnitude.

Journal ArticleDOI
TL;DR: In this paper, an algorithm is presented that computes the transformation to a canonical basis, starting from some basis that is, for instance, obtained by the usual integration-by-parts reduction techniques.
Abstract: The method of differential equations has been proven to be a powerful tool for the computation of multi-loop Feynman integrals appearing in quantum field theory. It has been observed that in many instances a canonical basis can be chosen, which drastically simplifies the solution of the differential equation. In this paper, an algorithm is presented that computes the transformation to a canonical basis, starting from some basis that is, for instance, obtained by the usual integration-by-parts reduction techniques. The algorithm requires the existence of a rational transformation to a canonical basis, but is otherwise completely agnostic about the differential equation. In particular, it is applicable to problems involving multiple scales and allows for a rational dependence on the dimensional regulator. It is demonstrated that the algorithm is suitable for current multi-loop calculations by presenting its successful application to a number of non-trivial examples.
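The canonical ("ε-form") basis that the algorithm targets is characterized by the dimensional regulator factorizing out of the differential equation; once it does, the solution is a path-ordered exponential, i.e. order by order in ε a combination of iterated integrals:

```latex
\partial_x \vec{f}(x,\epsilon) = \epsilon\, A(x)\, \vec{f}(x,\epsilon)
\quad\Longrightarrow\quad
\vec{f}(x,\epsilon) = \mathbb{P}\exp\!\Big(\epsilon \int_{x_0}^{x} A(t)\,\mathrm{d}t\Big)\, \vec{f}(x_0,\epsilon).
```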

Journal ArticleDOI
TL;DR: In this paper, the authors investigate the relations between A, G and L that make the system of iterations complete, Bessel, a basis, or a frame for H. The problem is motivated by the dynamical sampling problem and is connected to several topics in functional analysis, including, frame theory and spectral theory.

Journal ArticleDOI
TL;DR: In this article, the authors propose a reduced order modeling (ROM) approach to solve multiscale fracture problems through a FE2 approach, where a domain separation strategy is proposed as a first technique for model order reduction: unconventionally, the low-dimensional space is spanned by a basis in terms of fluctuating strains, as primitive kinematic variables, instead of the conventional formulation in terms of displacement fluctuations.

Journal ArticleDOI
TL;DR: In this paper, the same set of molecules was inspected using the projector augmented wave method and the Vienna ab initio simulation package (VASP) for the ionization potential, and the basis set extrapolated plane wave results agree very well with the Gaussian basis sets, often reaching better than 50 meV agreement.
Abstract: In a recent work, van Setten and co-workers have presented a carefully converged G0W0 study of 100 closed shell molecules [J. Chem. Theory Comput. 2015, 11, 5665−5687]. For two different codes they found excellent agreement to within a few 10 meV if identical Gaussian basis sets were used. We inspect the same set of molecules using the projector augmented wave method and the Vienna ab initio simulation package (VASP). For the ionization potential, the basis set extrapolated plane wave results agree very well with the Gaussian basis sets, often reaching better than 50 meV agreement. In order to achieve this agreement, we correct for finite basis set errors as well as errors introduced by periodically repeated images. For positive electron affinities differences between Gaussian basis sets and VASP are slightly larger. We attribute this to larger basis set extrapolation errors for the Gaussian basis sets. For quasi particle (QP) resonances above the vacuum level, differences between VASP and Gaussian basis ...
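The "basis set extrapolated" results mentioned here typically rest on the standard two-point E(x) = E_CBS + A·x⁻³ form for correlation-consistent Gaussian basis sets, and solving the two-point system is one line. The triple/quadruple-zeta energies below are made up for illustration (the plane-wave side of the paper uses its own cutoff extrapolation, not this formula).

```python
# Two-point complete-basis-set (CBS) extrapolation assuming E(x) = E_cbs + A/x^3,
# the standard form for correlation-consistent Gaussian basis sets.
def cbs_extrapolate(e_x, e_y, x, y):
    """Eliminate A from E(x) = E_cbs + A/x^3 and E(y) = E_cbs + A/y^3."""
    return (x**3 * e_x - y**3 * e_y) / (x**3 - y**3)

e_tz, e_qz = -76.332, -76.360    # illustrative triple-/quadruple-zeta energies (Ha)
e_cbs = cbs_extrapolate(e_tz, e_qz, 3, 4)
```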

Journal ArticleDOI
TL;DR: A short overview of RNA folding algorithms, recent additions and variations is provided, addressing methods to align, compare, and cluster RNA structures, followed by a tabular summary of the most important software suites in the field.

Posted Content
TL;DR: Estimation and inference methods for the best linear predictor (approximation) of a structural function, such as conditional average structural and treatment effects, and structural derivatives, based on modern machine learning tools are provided.
Abstract: This paper provides estimation and inference methods for the best linear predictor (approximation) of a structural function, such as conditional average structural and treatment effects, and structural derivatives, based on modern machine learning (ML) tools. We represent this structural function as a conditional expectation of an unbiased signal that depends on a nuisance parameter, which we estimate by modern machine learning techniques. We first adjust the signal to make it insensitive (Neyman-orthogonal) with respect to the first-stage regularization bias. We then project the signal onto a set of basis functions, growing with sample size, which gives us the best linear predictor of the structural function. We derive a complete set of results for estimation and simultaneous inference on all parameters of the best linear predictor, conducting inference by Gaussian bootstrap. When the structural function is smooth and the basis is sufficiently rich, our estimation and inference result automatically targets this function. When basis functions are group indicators, the best linear predictor reduces to group average treatment/structural effect, and our inference automatically targets these parameters. We demonstrate our method by estimating uniform confidence bands for the average price elasticity of gasoline demand conditional on income.
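The final projection step of this procedure is ordinary least squares of the (orthogonalized) signal on a dictionary of basis functions of the covariate. The simulated "signal", the sinusoidal target, and the cubic polynomial basis below are invented stand-ins; in the paper the signal comes from a Neyman-orthogonalized first-stage ML fit, and inference uses a Gaussian bootstrap on top.

```python
import numpy as np

rng = np.random.default_rng(6)

# Project a noisy unbiased signal onto basis functions of a covariate to get
# the best linear predictor of the underlying structural function.
n = 2000
income = rng.uniform(0.0, 1.0, size=n)
signal = np.sin(3.0 * income) + rng.normal(scale=0.3, size=n)  # simulated signal

# A basis growing with sample size; here, a fixed cubic polynomial basis.
B = np.column_stack([income ** j for j in range(4)])
coef, *_ = np.linalg.lstsq(B, signal, rcond=None)
blp = B @ coef                    # best linear predictor of E[signal | income]
```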

Journal ArticleDOI
TL;DR: In this article, a space-time discontinuous Galerkin (dG) finite element method for numerical approximation of parabolic evolution equations on general spatial meshes consisting of polygonal/polyhedral (polytopic) elements was proposed.
Abstract: We present a new $hp$-version space-time discontinuous Galerkin (dG) finite element method for the numerical approximation of parabolic evolution equations on general spatial meshes consisting of polygonal/polyhedral (polytopic) elements, giving rise to prismatic space-time elements. A key feature of the proposed method is the use of space-time elemental polynomial bases of total degree, say $p$, defined in the physical coordinate system, as opposed to standard dG time-stepping methods whereby spatial elemental bases are tensorized with temporal basis functions. This approach leads to a fully discrete $hp$-dG scheme using fewer degrees of freedom for each time step, compared to dG time-stepping schemes employing a tensorized space-time basis, with acceptable deterioration of the approximation properties. A second key feature of the new space-time dG method is the incorporation of very general spatial meshes consisting of possibly polygonal/polyhedral elements with an arbitrary number of faces. A priori er...

Journal ArticleDOI
TL;DR: In this paper, a second order finite difference scheme for fractional diffusion equations (FDEs) is proposed, which combines the Crank-Nicolson scheme and the so-called weighted and shifted Grunwald formula.

Journal ArticleDOI
TL;DR: In this article, a new class of integrands in Cachazo-He-Yuan (CHY) formula, dubbed Cayley functions, are introduced and studied, which naturally generalize the so-called Parke-Taylor factors.
Abstract: In this note, we introduce and study a new class of “half integrands” in Cachazo-He-Yuan (CHY) formula, which naturally generalize the so-called Parke-Taylor factors; these are dubbed Cayley functions as each of them corresponds to a labelled tree graph. The CHY formula with a Cayley function squared gives a sum of Feynman diagrams, and we represent it by a combinatoric polytope whose vertices correspond to Feynman diagrams. We provide a simple graphic rule to derive the polytope from a labelled tree graph, and classify such polytopes ranging from the associahedron to the permutohedron. Furthermore, we study the linear space of such half integrands and find (1) a closed-form formula reducing any Cayley function to a sum of Parke-Taylor factors in the Kleiss-Kuijf basis (2) a set of Cayley functions as a new basis of the space; each element has the remarkable property that its CHY formula with a given Parke-Taylor factor gives either a single Feynman diagram or zero. We also briefly discuss applications of Cayley functions and the new basis in certain disk integrals of superstring theory.

Journal ArticleDOI
TL;DR: In this paper, the authors introduce the energy flow polynomials, a complete set of jet substructure observables which form a discrete linear basis for all infrared-and collinear-safe observables.
Abstract: We introduce the energy flow polynomials: a complete set of jet substructure observables which form a discrete linear basis for all infrared- and collinear-safe observables. Energy flow polynomials are multiparticle energy correlators with specific angular structures that are a direct consequence of infrared and collinear safety. We establish a powerful graph-theoretic representation of the energy flow polynomials which allows us to design efficient algorithms for their computation. Many common jet observables are exact linear combinations of energy flow polynomials, and we demonstrate the linear spanning nature of the energy flow basis by performing regression for several common jet observables. Using linear classification with energy flow polynomials, we achieve excellent performance on three representative jet tagging problems: quark/gluon discrimination, boosted W tagging, and boosted top tagging. The energy flow basis provides a systematic framework for complete investigations of jet substructure using linear methods.
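The simplest nontrivial energy flow polynomial, the single-edge graph, is just a double sum of z_i z_j θ_ij over a jet's constituents, with z the energy fractions and θ a pairwise angular distance. The rapidity-azimuth distance and the three made-up constituents below are illustrative; the paper's framework covers arbitrary multigraphs and angular exponents.

```python
import numpy as np

# Single-edge energy flow polynomial: EFP = sum_{i,j} z_i z_j theta_ij,
# with theta_ij a rapidity-azimuth distance between constituents i and j.
def efp_single_edge(z, y, phi):
    dy = y[:, None] - y[None, :]
    dphi = np.arctan2(np.sin(phi[:, None] - phi[None, :]),
                      np.cos(phi[:, None] - phi[None, :]))  # wrap to (-pi, pi]
    theta = np.sqrt(dy**2 + dphi**2)
    return np.einsum("i,j,ij->", z, z, theta)

z = np.array([0.5, 0.3, 0.2])                 # energy fractions (sum to 1)
y = np.array([0.0, 0.1, -0.2])                # rapidities (invented jet)
phi = np.array([0.0, 0.05, 0.3])              # azimuthal angles
val = efp_single_edge(z, y, phi)
```

Observables of this correlator form are manifestly infrared and collinear safe: a soft particle has z ≈ 0, and a collinear splitting leaves the weighted sum unchanged.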

Journal ArticleDOI
TL;DR: In this paper, the authors define a measure of the coherence-generating power of a unitary operation with respect to a preferred orthonormal basis, defined as the average coherence generated by the operation acting on a uniform ensemble of incoherent states.
Abstract: Given a preferred orthonormal basis $B$ in the Hilbert space of a quantum system, we define a measure of the coherence-generating power of a unitary operation with respect to $B$. This measure is the average coherence generated by the operation acting on a uniform ensemble of incoherent states. We give its explicit analytical form in any dimension and provide an operational protocol to directly detect it. We characterize the set of unitaries with maximal coherence-generating power and study the properties of our measure when the unitary is drawn at random from the Haar distribution. For a large state-space dimension a random unitary has, with overwhelming probability, nearly maximal coherence-generating power with respect to any basis. Finally, extensions to general unital quantum operations and the relation to the concept of asymmetry are discussed.
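The defining average over incoherent states is straightforward to estimate numerically. The Hilbert-Schmidt-style coherence measure and the uniform-simplex sampling below are one concrete choice made for this sketch; the paper derives exact closed forms for measures of this type rather than Monte Carlo estimates.

```python
import numpy as np

rng = np.random.default_rng(7)

# Monte Carlo estimate of a coherence-generating power: average, over diagonal
# (incoherent) states drawn uniformly from the probability simplex, the
# coherence U creates, quantified by the squared Hilbert-Schmidt norm of the
# off-diagonal part of U rho U^dagger.
def cgp_estimate(U, n_samples=2000):
    d = U.shape[0]
    total = 0.0
    for _ in range(n_samples):
        p = rng.dirichlet(np.ones(d))            # uniform diagonal (incoherent) state
        rho_out = U @ np.diag(p) @ U.conj().T
        off = rho_out - np.diag(np.diag(rho_out))
        total += np.linalg.norm(off) ** 2
    return total / n_samples

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)  # Hadamard gate
cgp_H = cgp_estimate(H)                  # analytically 1/6 for this measure
cgp_id = cgp_estimate(np.eye(2))         # diagonal unitaries create no coherence
```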

Journal ArticleDOI
TL;DR: The sharp error estimates presented in this paper indicate that the proposed Gegenbauer-based Nyström numerical method for the fractional-Laplacian Dirichlet problem is spectrally accurate, with convergence rates that only depend on the smoothness of the right-hand side.
Abstract: This paper presents regularity results and associated high order numerical methods for one-dimensional fractional-Laplacian boundary-value problems. On the basis of a factorization of solutions as a product of a certain edge-singular weight ω times a "regular" unknown, a characterization of the regularity of solutions is obtained in terms of the smoothness of the corresponding right-hand sides. In particular, for right-hand sides which are analytic in a Bernstein ellipse, analyticity in the same Bernstein ellipse is obtained for the "regular" unknown. Moreover, a sharp Sobolev regularity result is presented which completely characterizes the co-domain of the fractional-Laplacian operator in terms of certain weighted Sobolev spaces introduced in (Babuska and Guo, SIAM J. Numer. Anal. 2002). The present theoretical treatment relies on a full eigendecomposition for a certain weighted integral operator in terms of the Gegenbauer polynomial basis. The proposed Gegenbauer-based Nyström numerical method for the fractional-Laplacian Dirichlet problem, further, is significantly more accurate and efficient than other algorithms considered previously. The sharp error estimates presented in this paper indicate that the proposed algorithm is spectrally accurate, with convergence rates that only depend on the smoothness of the right-hand side. In particular, convergence is exponentially fast (resp. faster than any power of the mesh-size) for analytic (resp. infinitely smooth) right-hand sides. The properties of the algorithm are illustrated with a variety of numerical results.

Journal ArticleDOI
TL;DR: Excellent results are obtained with the DZVP-DFT basis and a newly parametrized D3 dispersion correction; the accuracy of interaction energies and geometries is close to that of significantly more expensive calculations.
Abstract: Calculations of interaction energies of noncovalent interactions in small basis sets are affected by the basis set superposition error, and dispersion-corrected DFT-D methods are thus usually parametrized only for triple-ζ and larger basis sets. Nevertheless, some smaller basis sets could also perform well. Among many combinations tested, we obtained excellent results with the DZVP-DFT basis and a newly parametrized D3 dispersion correction. The accuracy of interaction energies and geometries is close to that of significantly more expensive calculations.

Journal ArticleDOI
TL;DR: In this article, a new approach to construct T-equivariant flat toric degenerations of flag varieties and spherical varieties is presented, combining ideas coming from the theory of Newton-Okounkov bodies with ideas originally stemming from PBW-filtrations.

Journal ArticleDOI
TL;DR: An EMD-AIC picker is proposed to identify micro-seismic P phase arrivals; it works efficiently for the majority of identifications and has better picking accuracy than DWT-AIC picking.

Journal ArticleDOI
TL;DR: This work opens the possibility to increase the dimensionality of the state-space used for encoding information while maintaining deterministic detection and will be invaluable for long distance classical and quantum communication.
Abstract: Encoding information in high-dimensional degrees of freedom of photons has led to new avenues in various quantum protocols such as communication and information processing. Yet to fully benefit from the increase in dimension requires a deterministic detection system, e.g., to reduce dimension dependent photon loss in quantum key distribution. Recently, there has been a growing interest in using vector vortex modes, spatial modes of light with entangled degrees of freedom, as a basis for encoding information. However, there is at present no method to detect these non-separable states in a deterministic manner, negating the benefit of the larger state space. Here we present a method to deterministically detect single photon states in a four dimensional space spanned by vector vortex modes with entangled polarisation and orbital angular momentum degrees of freedom. We demonstrate our detection system with vector vortex modes from the |ℓ| = 1 and |ℓ| = 10 subspaces using classical and weak coherent states and find excellent detection fidelities for both pure and superposition vector states. This work opens the possibility to increase the dimensionality of the state-space used for encoding information while maintaining deterministic detection and will be invaluable for long distance classical and quantum communication.