Author

Joseph M. Landsberg

Bio: Joseph M. Landsberg is an academic researcher from Texas A&M University. The author has contributed to research in topics including Rank (linear algebra) and Matrix multiplication. The author has an h-index of 36 and has co-authored 162 publications receiving 4,754 citations. Previous affiliations of Joseph M. Landsberg include the Centre national de la recherche scientifique and the University of Pennsylvania.


Papers
Book
14 Dec 2011
TL;DR: This book has three intended uses: a classroom textbook, a reference work for researchers in the sciences, and an account of classical and modern results in (aspects of) the theory that will be of interest to researchers in geometry.
Abstract: Tensors are ubiquitous in the sciences. The geometry of tensors is both a powerful tool for extracting information from data sets, and a beautiful subject in its own right. This book has three intended uses: a classroom textbook, a reference work for researchers in the sciences, and an account of classical and modern results in (aspects of) the theory that will be of interest to researchers in geometry. For classroom use, there is a modern introduction to multilinear algebra and to the geometry and representation theory needed to study tensors, including a large number of exercises. For researchers in the sciences, there is information on tensors in table format for easy reference and a summary of the state of the art in elementary language. This is the first book containing many classical results regarding tensors. Particular applications treated in the book include the complexity of matrix multiplication, $\mathbf{P}$ versus $\mathbf{NP}$, signal processing, phylogenetics, and algebraic statistics. For geometers, there is material on secant varieties, $G$-varieties, spaces with finitely many orbits and how these objects arise in applications, discussions of numerous open questions in geometry arising in applications, and expositions of advanced topics such as the proof of the Alexander-Hirschowitz theorem and of the Weyman-Kempf method for computing syzygies.

742 citations
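The book's headline application, the complexity of matrix multiplication, is usually phrased in terms of tensor rank: Strassen's algorithm shows that the 2x2 matrix multiplication tensor has rank at most 7. The following minimal numpy sketch (an illustration, not taken from the book) verifies Strassen's seven-product scheme numerically.

```python
import numpy as np

# Strassen's algorithm: 2x2 matrix multiplication with 7 multiplications
# instead of 8, i.e., the 2x2 matrix multiplication tensor has rank <= 7.
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))
B = rng.standard_normal((2, 2))

m1 = (A[0, 0] + A[1, 1]) * (B[0, 0] + B[1, 1])
m2 = (A[1, 0] + A[1, 1]) * B[0, 0]
m3 = A[0, 0] * (B[0, 1] - B[1, 1])
m4 = A[1, 1] * (B[1, 0] - B[0, 0])
m5 = (A[0, 0] + A[0, 1]) * B[1, 1]
m6 = (A[1, 0] - A[0, 0]) * (B[0, 0] + B[0, 1])
m7 = (A[0, 1] - A[1, 1]) * (B[1, 0] + B[1, 1])

C = np.array([[m1 + m4 - m5 + m7, m3 + m5],
              [m2 + m4,           m1 - m3 + m4 + m6]])
assert np.allclose(C, A @ B)  # the 7 products reproduce ordinary matmul
```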

Book
01 Jan 2003
TL;DR: This book develops moving frames and exterior differential systems, presenting the Cartan-Kähler theorems and the Cartan algorithm for linear Pfaffian systems, with applications to PDE, projective geometry, and Riemannian geometry.
Abstract: Contents: Moving frames and exterior differential systems; Euclidean geometry and Riemannian geometry; Projective geometry; Cartan-Kähler I: Linear algebra and constant-coefficient homogeneous systems; Cartan-Kähler II: The Cartan algorithm for linear Pfaffian systems; Applications to PDE; Cartan-Kähler III: The general case; Geometric structures and connections; Linear algebra and representation theory; Differential forms; Complex structures and complex manifolds; Initial value problems; Hints and answers to selected exercises; Bibliography; Index.

262 citations

Journal ArticleDOI
TL;DR: The ranks and border ranks of symmetric tensors are studied using geometric methods; improved lower bounds for the rank of a symmetric tensor (i.e., a homogeneous polynomial) are obtained by considering the singularities of the hypersurface the polynomial defines.
Abstract: Motivated by questions arising in signal processing, computational complexity, and other areas, we study the ranks and border ranks of symmetric tensors using geometric methods. We provide improved lower bounds for the rank of a symmetric tensor (i.e., a homogeneous polynomial) obtained by considering the singularities of the hypersurface defined by the polynomial. We obtain normal forms for polynomials of border rank up to five, and compute or bound the ranks of several classes of polynomials, including monomials, the determinant, and the permanent.

244 citations
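The gap between rank and border rank that the abstract refers to is already visible for the monomial x^2 y: its rank as a sum of cubes is 3, but it is a limit of polynomials of rank 2, so its border rank is 2. A small sympy check of this standard limit (an illustration, not taken from the paper):

```python
import sympy as sp

x, y, eps = sp.symbols('x y epsilon')

# A curve of rank-2 polynomials (a difference of two cubes for eps != 0) ...
curve = ((x + eps * y)**3 - x**3) / (3 * eps)
print(sp.expand(curve))         # x**2*y + epsilon*x*y**2 + epsilon**2*y**3/3

# ... whose limit is x**2*y, a polynomial of rank 3:
# border rank 2 is strictly less than rank 3.
print(sp.limit(curve, eps, 0))  # x**2*y
```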

Journal ArticleDOI
TL;DR: The varieties of linear spaces on rational homogeneous varieties are determined, explicit geometric models for these spaces are provided, and basic facts about the local differential geometry of rational homogeneous varieties are established.
Abstract: We determine the varieties of linear spaces on rational homogeneous varieties, provide explicit geometric models for these spaces, and establish basic facts about the local differential geometry of rational homogeneous varieties.

209 citations

Journal ArticleDOI
TL;DR: In this paper, the authors established basic techniques for determining the ideals of secant varieties of Segre varieties and solved a conjecture of Garcia, Stillman, and Sturmfels on the generators of the ideal of the first secant variety in the case of three factors.
Abstract: We establish basic techniques for determining the ideals of secant varieties of Segre varieties. We solve a conjecture of Garcia, Stillman, and Sturmfels on the generators of the ideal of the first secant variety in the case of three factors, and solve the conjecture set-theoretically for an arbitrary number of factors. We determine the low-degree components of the ideals of secant varieties of small dimension in a few cases.

163 citations
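One direction of the Garcia-Stillman-Sturmfels conjecture discussed above is elementary: a tensor of (border) rank at most 2 has flattenings of matrix rank at most 2, so all 3x3 minors of its flattenings vanish on the first secant variety. The deep content of the paper is that these minors generate the ideal; the numpy sketch below (illustrative, not from the paper) only checks the easy vanishing direction numerically.

```python
import numpy as np

rng = np.random.default_rng(1)
I, J, K, r = 4, 4, 4, 2

# A random point of the first secant variety of the Segre variety:
# a sum of r = 2 rank-one tensors a_s (x) b_s (x) c_s.
A = rng.standard_normal((I, r))
B = rng.standard_normal((J, r))
C = rng.standard_normal((K, r))
T = np.einsum('ir,jr,kr->ijk', A, B, C)

# Each of the three flattenings of T then has matrix rank <= 2,
# i.e., all of its 3x3 minors vanish.
flattenings = [T.reshape(I, J * K),
               T.transpose(1, 0, 2).reshape(J, I * K),
               T.transpose(2, 0, 1).reshape(K, I * J)]
for F in flattenings:
    assert np.linalg.matrix_rank(F) <= 2
```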


Cited by
Journal ArticleDOI
TL;DR: This survey provides an overview of higher-order tensor decompositions, their applications, and available software.
Abstract: This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or $N$-way array. Decompositions of higher-order tensors (i.e., $N$-way arrays with $N \geq 3$) have applications in psychometrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, and elsewhere. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal component analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2, as well as nonnegative variants of all of the above. The N-way Toolbox, Tensor Toolbox, and Multilinear Engine are examples of software packages for working with tensors.

9,227 citations
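The CP decomposition described in the survey is typically fit by alternating least squares (ALS). The sketch below is a minimal, unoptimized CP-ALS for a 3-way tensor in plain numpy; the survey's software packages (e.g., the Tensor Toolbox) implement far more robust versions, and the function names here are illustrative.

```python
import numpy as np

def khatri_rao(U, V):
    """Columnwise Kronecker product: rows indexed by pairs (row of U, row of V)."""
    r = U.shape[1]
    return (U[:, None, :] * V[None, :, :]).reshape(-1, r)

def cp_als(X, rank, n_iter=100, seed=0):
    """Minimal CP-ALS for a 3-way tensor X; returns factor matrices A, B, C."""
    I, J, K = X.shape
    rng = np.random.default_rng(seed)
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    for _ in range(n_iter):
        # Update each factor by solving a linear least-squares problem
        # against the matching unfolding of X.
        A = np.linalg.lstsq(khatri_rao(B, C),
                            X.reshape(I, -1).T, rcond=None)[0].T
        B = np.linalg.lstsq(khatri_rao(A, C),
                            X.transpose(1, 0, 2).reshape(J, -1).T, rcond=None)[0].T
        C = np.linalg.lstsq(khatri_rao(A, B),
                            X.transpose(2, 0, 1).reshape(K, -1).T, rcond=None)[0].T
    return A, B, C

# Recover a random rank-3 tensor (up to fitting error).
rng = np.random.default_rng(42)
A0, B0, C0 = [rng.standard_normal((5, 3)) for _ in range(3)]
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(X, rank=3)
X_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
print(np.linalg.norm(X - X_hat) / np.linalg.norm(X))  # small relative error
```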

Journal ArticleDOI
TL;DR: The material covered includes tensor rank and rank decomposition; basic tensor factorization models and their relationships and properties; broad coverage of algorithms ranging from alternating optimization to stochastic gradient; statistical performance analysis; and applications ranging from source separation to collaborative filtering, mixture and topic modeling, classification, and multilinear subspace learning.
Abstract: Tensors or multiway arrays are functions of three or more indices $(i,j,k,\ldots)$ —similar to matrices (two-way arrays), which are functions of two indices $(r,c)$ for (row, column). Tensors have a rich history, stretching over almost a century, and touching upon numerous disciplines; but they have only recently become ubiquitous in signal and data analytics at the confluence of signal processing, statistics, data mining, and machine learning. This overview article aims to provide a good starting point for researchers and practitioners interested in learning about and working with tensors. As such, it focuses on fundamentals and motivation (using various application examples), aiming to strike an appropriate balance of breadth and depth that will enable someone having taken first graduate courses in matrix algebra and probability to get started doing research and/or developing tensor algorithms and software. Some background in applied optimization is useful but not strictly required. The material covered includes tensor rank and rank decomposition; basic tensor factorization models and their relationships and properties (including fairly good coverage of identifiability); broad coverage of algorithms ranging from alternating optimization to stochastic gradient; statistical performance analysis; and applications ranging from source separation to collaborative filtering, mixture and topic modeling, classification, and multilinear subspace learning.

1,284 citations

Journal ArticleDOI
TL;DR: Benefiting from the power of multilinear algebra as their mathematical backbone, data analysis techniques using tensor decompositions are shown to have great flexibility in the choice of constraints which match data properties and extract more general latent components in the data than matrix-based methods.
Abstract: The widespread use of multisensor technology and the emergence of big data sets have highlighted the limitations of standard flat-view matrix models and the necessity to move toward more versatile data analysis tools. We show that higher-order tensors (i.e., multiway arrays) enable such a fundamental paradigm shift toward models that are essentially polynomial, the uniqueness of which, unlike the matrix methods, is guaranteed under very mild and natural conditions. Benefiting from the power of multilinear algebra as their mathematical backbone, data analysis techniques using tensor decompositions are shown to have great flexibility in the choice of constraints which match data properties and extract more general latent components in the data than matrix-based methods.

1,250 citations
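The uniqueness claim above is the key contrast with matrix methods: a low-rank matrix factorization is never unique, since any invertible mixing of the factors yields the same matrix, whereas CP decompositions are essentially unique under mild conditions (e.g., Kruskal's condition). A small numpy illustration of the matrix-side ambiguity (illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 2))
B = rng.standard_normal((2, 6))
M = A @ B

# Any invertible 2x2 mixing matrix R gives a genuinely different factor
# pair (A R, R^{-1} B) with exactly the same product ...
R = rng.standard_normal((2, 2))   # invertible with probability 1
A2, B2 = A @ R, np.linalg.inv(R) @ B
assert np.allclose(M, A2 @ B2)

# ... so the factors of a matrix are not identifiable without extra
# constraints; for tensors of order >= 3, CP factors generically are.
```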

Journal ArticleDOI
TL;DR: Determining the feasibility of a system of bilinear equations, deciding whether a 3-tensor possesses a given eigenvalue, singular value, or spectral norm, and approximating an eigenvalue, eigenvector, singular vector, or the spectral norm of a 3-tensor are all shown to be NP-hard; computing the combinatorial hyperdeterminant is NP-, #P-, and VNP-hard.
Abstract: We prove that multilinear (tensor) analogues of many efficiently computable problems in numerical linear algebra are NP-hard. Our list includes: determining the feasibility of a system of bilinear equations, deciding whether a 3-tensor possesses a given eigenvalue, singular value, or spectral norm; approximating an eigenvalue, eigenvector, singular vector, or the spectral norm; and determining the rank or best rank-1 approximation of a 3-tensor. Furthermore, we show that restricting these problems to symmetric tensors does not alleviate their NP-hardness. We also explain how deciding nonnegative definiteness of a symmetric 4-tensor is NP-hard and how computing the combinatorial hyperdeterminant is NP-, #P-, and VNP-hard.

1,008 citations
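Because even the best rank-1 approximation of a 3-tensor is NP-hard by the results above, practical codes fall back on heuristics. A common one is higher-order power iteration (rank-1 ALS), sketched below in numpy as an illustration; it finds a locally optimal rank-1 approximation with no guarantee of global optimality, and the names and defaults are assumptions of this sketch.

```python
import numpy as np

def rank1_power_iteration(T, n_iter=200, seed=0):
    """Higher-order power iteration: a heuristic for the best rank-1
    approximation sigma * u (x) v (x) w of a 3-tensor T."""
    rng = np.random.default_rng(seed)
    u, v, w = [rng.standard_normal(n) for n in T.shape]
    u, v, w = u / np.linalg.norm(u), v / np.linalg.norm(v), w / np.linalg.norm(w)
    for _ in range(n_iter):
        # Alternately contract T against two factors and renormalize the third.
        u = np.einsum('ijk,j,k->i', T, v, w); u /= np.linalg.norm(u)
        v = np.einsum('ijk,i,k->j', T, u, w); v /= np.linalg.norm(v)
        w = np.einsum('ijk,i,j->k', T, u, v); w /= np.linalg.norm(w)
    sigma = np.einsum('ijk,i,j,k->', T, u, v, w)
    return sigma, u, v, w

T = np.random.default_rng(3).standard_normal((4, 4, 4))
sigma, u, v, w = rank1_power_iteration(T)
residual = T - sigma * np.einsum('i,j,k->ijk', u, v, w)
print(np.linalg.norm(residual))  # locally optimal fit, not certified global
```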