Author

Michael Götte

Bio: Michael Götte is an academic researcher from Technical University of Berlin. The author has contributed to research on the topics of polynomial regression and rank (linear algebra). The author has an h-index of 2 and has co-authored 3 publications receiving 5 citations.

Papers
Journal ArticleDOI
TL;DR: This work proposes to extend the low-rank tensor framework by including the concept of block-sparsity, in the context of polynomial regression, to adapt the ansatz space to align better with known sample complexity results.
Abstract: Low-rank tensors are an established framework for the parametrization of multivariate polynomials. We propose to extend this framework by including the concept of block-sparsity to efficiently parametrize homogeneous, multivariate polynomials with low-rank tensors. This provides a representation of general multivariate polynomials as a sum of homogeneous, multivariate polynomials, each represented by a block-sparse, low-rank tensor. We show that this sum can be concisely represented by a single block-sparse, low-rank tensor. We further identify cases where low-rank tensors are particularly well suited by showing that, for banded symmetric tensors of homogeneous polynomials, the block sizes in the block-sparse multivariate polynomial space can be bounded independently of the number of variables. We showcase this format by applying it to high-dimensional least-squares regression problems, where it demonstrates improved computational resource utilization and sample efficiency.
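As an illustration of the block-sparsity idea in the abstract above (in dense form, not the paper's tensor format): in a coefficient tensor, the multi-indices of a fixed total degree form a block, and masking all other blocks extracts the homogeneous part of that degree. A minimal NumPy sketch, with sizes and names chosen purely for illustration:

```python
import numpy as np
from itertools import product

# Toy setting: 3 variables, per-variable degrees 0..2, dense coefficient
# tensor C. The block of multi-indices with a fixed total degree d holds
# exactly the coefficients of the homogeneous degree-d part.
d_max, n_vars = 2, 3
rng = np.random.default_rng(0)
C = rng.standard_normal((d_max + 1,) * n_vars)

def eval_poly(C, x):
    """Evaluate p(x) = sum_a C[a] * x[0]**a0 * x[1]**a1 * x[2]**a2."""
    basis = [np.array([xi ** k for k in range(C.shape[j])])
             for j, xi in enumerate(x)]
    return np.einsum('abc,a,b,c->', C, *basis)

def degree_block(C, d):
    """Zero out every coefficient whose multi-index has total degree != d."""
    B = np.zeros_like(C)
    for a in product(*(range(s) for s in C.shape)):
        if sum(a) == d:
            B[a] = C[a]
    return B

# The degree blocks partition the coefficient tensor, so the homogeneous
# parts sum back to the full polynomial.
x = rng.standard_normal(n_vars)
parts = [eval_poly(degree_block(C, d), x)
         for d in range(n_vars * d_max + 1)]
assert np.isclose(sum(parts), eval_poly(C, x))
```

In the paper this block structure is imposed on the low-rank tensor factors rather than on a dense tensor, which is what makes the format scale to many variables.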

4 citations

Posted Content
TL;DR: In this article, the eigenvectors of the particle number operator in second quantization are characterized by the block sparsity of their matrix product state representations, which is shown to generalize to other classes of operators.
Abstract: The eigenvectors of the particle number operator in second quantization are characterized by the block sparsity of their matrix product state representations. This is shown to generalize to other classes of operators. Imposing block sparsity yields a scheme for conserving the particle number that is commonly used in applications in physics. Operations on such block structures, their rank truncation, and implications for numerical algorithms are discussed. Explicit and rank-reduced matrix product operator representations of one- and two-particle operators are constructed that operate only on the non-zero blocks of matrix product states.
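A minimal illustration of the first claim, in dense form rather than as a matrix product state: the particle number operator is diagonal in the occupation basis, so states of fixed particle number are supported on disjoint fixed-weight sectors, and these sectors are the blocks behind the MPS block sparsity. A sketch assuming three spinless modes:

```python
import numpy as np

# Toy setting: n = 3 modes with occupations 0/1. The particle number
# operator N = sum_i n_i is diagonal in the occupation basis; basis state
# |b> has eigenvalue popcount(b). States of fixed particle number therefore
# live in disjoint sectors (the "blocks").
n = 3
dim = 2 ** n
N_diag = np.array([bin(b).count("1") for b in range(dim)])

# Uniform superposition over all single-particle basis states.
psi = np.where(N_diag == 1, 1.0, 0.0)
psi /= np.linalg.norm(psi)

# psi is an eigenvector of N with eigenvalue 1.
assert np.allclose(N_diag * psi, 1.0 * psi)
```

Reshaping such a fixed-particle-number state into MPS cores inherits this sector structure, which is what the paper exploits for rank truncation and operator application.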

2 citations

Posted Content
TL;DR: In this paper, each block sparsity pattern corresponds to a subspace of homogeneous multivariate polynomials, which is used to adapt the ansatz space to align better with known sample complexity results.
Abstract: Low-rank tensors are an established framework for high-dimensional least-squares problems. We propose to extend this framework by including the concept of block-sparsity. In the context of polynomial regression each sparsity pattern corresponds to some subspace of homogeneous multivariate polynomials. This allows us to adapt the ansatz space to align better with known sample complexity results. The resulting method is tested in numerical experiments and demonstrates improved computational resource utilization and sample efficiency.

Cited by
Posted Content
TL;DR: In this paper, a general bilinear optimal control problem subject to an infinite-dimensional state equation is considered, and polynomial approximations of the associated value function are derived around the steady state by repeated formal differentiation of the Hamilton-Jacobi-Bellman equation.
Abstract: A general bilinear optimal control problem subject to an infinite-dimensional state equation is considered. Polynomial approximations of the associated value function are derived around the steady state by repeated formal differentiation of the Hamilton-Jacobi-Bellman equation. The terms of the approximations are described by multilinear forms, which can be obtained as solutions to generalized Lyapunov equations with recursively defined right-hand sides. They form the basis for defining a suboptimal feedback law. The approximation properties of this feedback law are investigated. An application to the optimal control of a Fokker-Planck equation is also provided.
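The lowest-order term of such a value-function expansion is quadratic and, for linear dynamics, solves a Lyapunov equation. A small self-contained sketch of this one ingredient (illustrative only, with made-up finite-dimensional matrices, solved via the Kronecker-product linearization rather than any method from the paper):

```python
import numpy as np

# Illustrative finite-dimensional setting: around a stable steady state of
# x' = A x with running cost x^T Q x, the quadratic value-function term
# V2(x) = x^T P x solves the Lyapunov equation  A^T P + P A + Q = 0.
n = 3
A = -np.eye(n) + 0.1 * np.arange(n * n).reshape(n, n) / n ** 2  # stable drift
Q = np.eye(n)

# Row-major vectorization: vec(A^T P) = (A^T kron I) vec(P),
#                          vec(P A)   = (I kron A^T) vec(P).
M = np.kron(A.T, np.eye(n)) + np.kron(np.eye(n), A.T)
P = np.linalg.solve(M, -Q.flatten()).reshape(n, n)

# Residual of the Lyapunov equation vanishes.
assert np.allclose(A.T @ P + P @ A + Q, 0)
```

The paper's higher-order terms are multilinear forms obtained from generalized Lyapunov equations of the same flavor, with recursively defined right-hand sides.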

12 citations

Posted Content
TL;DR: In this paper, a low-rank tensor train (TT) decomposition based on the Dirac-Frenkel variational principle is proposed for nonlinear optimal control.
Abstract: We present a novel method to approximate optimal feedback laws for nonlinear optimal control based on low-rank tensor train (TT) decompositions. The approach is based on the Dirac-Frenkel variational principle with the modification that the optimisation uses an empirical risk. Compared to current state-of-the-art TT methods, our approach exhibits a greatly reduced computational burden while achieving comparable results. A rigorous description of the numerical scheme and demonstrations of its performance are provided.
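The TT format itself can be illustrated with the standard TT-SVD construction (the generic decomposition by sequential SVDs, shown only to fix ideas; the paper's scheme additionally uses the Dirac-Frenkel principle and an empirical risk):

```python
import numpy as np

def tt_svd(T, eps=1e-12):
    """Decompose a dense tensor into TT cores by sequential SVDs."""
    dims, cores, r = T.shape, [], 1
    M = T.reshape(dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        rk = int(np.sum(s > eps))                      # truncated TT rank
        cores.append(U[:, :rk].reshape(r, dims[k], rk))
        M = (s[:rk, None] * Vt[:rk]).reshape(rk * dims[k + 1], -1)
        r = rk
    cores.append(M.reshape(r, dims[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract TT cores back into the dense tensor."""
    T = cores[0]
    for G in cores[1:]:
        T = np.einsum('...a,aib->...ib', T, G)
    return np.squeeze(T, axis=(0, -1))

rng = np.random.default_rng(0)
T = rng.standard_normal((2, 3, 4))
assert np.allclose(tt_to_full(tt_svd(T)), T)
```

For genuinely low-rank tensors the cores store far fewer entries than the dense tensor, which is the source of the reduced computational burden the abstract refers to.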

1 citation

Posted Content
TL;DR: In this paper, the authors consider the problem of approximating a function in general nonlinear subsets of $L^2$ when only a weighted Monte Carlo estimate of the $L^2$-norm can be computed.
Abstract: We consider the problem of approximating a function in general nonlinear subsets of $L^2$ when only a weighted Monte Carlo estimate of the $L^2$-norm can be computed. Of particular interest in this setting is the sample complexity, the number of samples necessary to recover the best approximation. Bounds for this quantity were derived in a previous work; they depend primarily on the model class and are not improved by the regularity of the sought function. However, this is only a worst-case bound and cannot explain the remarkable performance of iterative hard thresholding algorithms observed in practice. We reexamine the results of the previous paper and derive a new bound that is able to utilize the regularity of the sought function. A critical analysis of our results then yields a sample-efficient algorithm for the model set of low-rank tensors. The viability of this algorithm is demonstrated by recovering quantities of interest for a classical high-dimensional random partial differential equation.
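The weighted Monte Carlo norm estimate at the heart of this setting can be sketched in a few lines (an illustrative toy, with the sampling density and test function chosen for simplicity, not taken from the paper):

```python
import numpy as np

# Weighted Monte Carlo estimate of ||f||^2 on [-1, 1]: draw x_i from a
# density rho and reweight by w = 1/rho(x_i); the empirical mean
# (1/n) sum_i w * f(x_i)^2 is an unbiased estimate of the L^2-norm squared.
rng = np.random.default_rng(0)
n = 100_000
x = rng.uniform(-1.0, 1.0, n)   # rho(x) = 1/2 on [-1, 1]
w = 2.0                          # 1 / rho
f = lambda t: t                  # test function with ||f||^2 = 2/3
est = np.mean(w * f(x) ** 2)

# The estimate is close to the exact value 2/3.
assert abs(est - 2.0 / 3.0) < 0.02
```

Least-squares methods in this setting minimize exactly such an empirical weighted norm of the residual, and the sample complexity question is how large n must be for the empirical minimizer to be close to the best approximation.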

1 citation

Posted Content
02 Aug 2022
TL;DR: This work proposes an approach of gauge-mediated weight sharing, inspired by notions from machine learning, which significantly improves performance over previous approaches for scalably learning the dynamical laws of classical dynamical systems from data.
Abstract: Recent years have witnessed increased interest in recovering the dynamical laws of complex systems in a largely data-driven fashion under meaningful hypotheses. In this work, we propose a method for scalably learning the dynamical laws of classical dynamical systems from data. As a novel ingredient, to achieve efficient scaling with the system size, block-sparse tensor trains – instances of tensor networks applied to function dictionaries – are used, and the self-similarity of the problem is exploited. For the latter, we propose an approach of gauge-mediated weight sharing, inspired by notions from machine learning, which significantly improves performance over previous approaches. The practical performance of the method is demonstrated numerically on three one-dimensional systems – the Fermi-Pasta-Ulam-Tsingou system, rotating magnetic dipoles, and classical particles interacting via modified Lennard-Jones potentials. We highlight the ability of the method to recover these systems, requiring 1400 samples to recover the 50-particle Fermi-Pasta-Ulam-Tsingou system to a residuum of 5 × 10⁻⁷, 900 samples to recover the 50-particle magnetic dipole chain to a residuum of 1.5 × 10⁻⁴, and 7000 samples to recover the Lennard-Jones system of 10 particles to a residuum of 1.5 × 10⁻². Robustness against additive Gaussian noise is demonstrated for the magnetic dipole system.
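Stripped of the tensor-train machinery, data-driven recovery of a dynamical law reduces to regression of observed derivatives on a function dictionary. A deliberately simplified one-variable sketch (a generic stand-in, not the paper's block-sparse TT method):

```python
import numpy as np

# Recover the law dx/dt = -x^3 from samples (x_i, dx_i) by least-squares
# regression on the monomial dictionary {1, x, x^2, x^3}. The paper's
# method plays this game in many variables, with the dictionary-coefficient
# tensor stored as a block-sparse tensor train.
rng = np.random.default_rng(2)
x = rng.uniform(-1.0, 1.0, 200)
dx = -x ** 3                                      # exact, noise-free derivatives
D = np.stack([x ** k for k in range(4)], axis=1)  # dictionary matrix (200 x 4)
coeffs, *_ = np.linalg.lstsq(D, dx, rcond=None)

# The true law is recovered: only the x^3 coefficient is (minus) one.
assert np.allclose(coeffs, [0, 0, 0, -1], atol=1e-8)
```

The scaling challenge the abstract addresses is that for N interacting particles the dictionary is a product over all particles, so the coefficient object is an exponentially large tensor; the block-sparse TT format plus weight sharing keeps it tractable.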