
Berkant Savas

Researcher at Linköping University

Publications: 23
Citations: 1,071

Berkant Savas is an academic researcher at Linköping University. He has contributed to research on rank (linear algebra) and multilinear maps, has an h-index of 14, and has co-authored 23 publications receiving over 1,000 citations. His previous affiliations include the University of Texas at Austin.

Papers
Journal ArticleDOI

Handwritten digit classification using higher order singular value decomposition

TL;DR: Two algorithms for handwritten digit classification based on the higher order singular value decomposition (HOSVD) are presented and it is shown that the second algorithm is twice as efficient as the first one.
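The core idea behind subspace-based digit classification can be illustrated with a minimal sketch: build a low-dimensional basis for each digit class from training images and classify a test image by its smallest projection residual. This is a simplified stand-in for the paper's HOSVD-based algorithms (it uses a plain per-class SVD rather than a higher order decomposition), and all function names here are illustrative, not from the paper.

```python
import numpy as np

def train_class_bases(images_by_class, rank=10):
    """For each class, compute an orthonormal basis for the dominant
    subspace of its vectorized training images via SVD.
    (Simplified stand-in for the HOSVD-based bases in the paper.)"""
    bases = {}
    for label, imgs in images_by_class.items():
        # Each column of A is one vectorized training image.
        A = np.column_stack([im.ravel() for im in imgs])
        U, _, _ = np.linalg.svd(A, full_matrices=False)
        bases[label] = U[:, :rank]
    return bases

def classify(image, bases):
    """Assign the class whose subspace gives the smallest
    projection residual ||x - U U^T x||."""
    x = image.ravel()
    residuals = {label: np.linalg.norm(x - U @ (U.T @ x))
                 for label, U in bases.items()}
    return min(residuals, key=residuals.get)
```

The HOSVD refinement in the paper compresses the whole training tensor at once, which is what makes the second algorithm cheaper at classification time.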
Journal ArticleDOI

A Newton-Grassmann Method for Computing the Best Multilinear Rank-$(r_1, r_2, r_3)$ Approximation of a Tensor

TL;DR: In this paper, a Newton-Grassmann algorithm for computing the best rank-$(r_1, r_2, r_3)$ approximation of a given tensor is presented; the problem is formulated as an approximation problem on a product of Grassmann manifolds.
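A common starting point for iterative best rank-$(r_1, r_2, r_3)$ methods such as the Newton-Grassmann algorithm is the truncated HOSVD, sketched below. This is only the initialization step, not the Newton iteration itself, and the helper names are illustrative.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n matricization of a 3rd-order tensor."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def truncated_hosvd(T, ranks):
    """Truncated HOSVD of a 3rd-order tensor: keep the leading
    singular vectors of each mode unfolding. Returns the factor
    matrices U and the core tensor G."""
    U = []
    for mode, r in enumerate(ranks):
        u, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        U.append(u[:, :r])
    # Core tensor: G = T x_1 U1^T x_2 U2^T x_3 U3^T
    G = np.einsum('ijk,ia,jb,kc->abc', T, U[0], U[1], U[2])
    return U, G

def reconstruct(U, G):
    """Map the core back: G x_1 U1 x_2 U2 x_3 U3."""
    return np.einsum('abc,ia,jb,kc->ijk', G, U[0], U[1], U[2])
```

For a tensor whose multilinear rank is exactly $(r_1, r_2, r_3)$ this truncation is already optimal; in general it is only a good initial guess, which the Newton-Grassmann iteration then refines.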
Proceedings ArticleDOI

Supervised Link Prediction Using Multiple Sources

TL;DR: A supervised learning framework that can effectively and efficiently learn the dynamics of social networks in the presence of auxiliary networks, a feature design scheme for constructing a rich variety of path-based features using multiple sources, and an effective feature selection strategy based on structured sparsity are presented.
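The path-based feature idea can be made concrete with a toy sketch: for a candidate link $(i, j)$, count walks of each length in each auxiliary network and stack the counts into a feature vector. This is only an illustration of the multi-source feature construction, not the paper's feature design scheme, and the function name is hypothetical.

```python
import numpy as np

def path_features(adjs, i, j, max_len=2):
    """Features for a candidate link (i, j): the number of walks of
    each length 1..max_len between i and j in each auxiliary network,
    computed via powers of the adjacency matrices."""
    feats = []
    for A in adjs:
        P = np.eye(A.shape[0])
        for _ in range(max_len):
            P = P @ A            # P now holds walk counts of the next length
            feats.append(P[i, j])
    return np.array(feats)
```

A supervised learner is then trained on such vectors for observed and unobserved links; the structured-sparsity selection in the paper decides which sources and path lengths actually matter.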
Journal ArticleDOI

Quasi-Newton Methods on Grassmannians and Multilinear Approximations of Tensors

TL;DR: BFGS and limited memory BFGS updates are defined in local and global coordinates on Grassmannians or a product of these, and it is proved that, when local coordinates are used, the BFGS updates on Grassmannians share the same optimality property as the usual BFGS updates on Euclidean spaces.
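The basic geometric ingredients these methods build on can be sketched with a single Riemannian gradient step on the Grassmannian: project the Euclidean gradient onto the tangent space at the current point and retract back to the manifold. The BFGS machinery of the paper sits on top of exactly this geometry; the sketch below uses plain gradient ascent (not BFGS) on the trace objective $f(X) = \mathrm{tr}(X^T A X)$ for brevity.

```python
import numpy as np

def grassmann_gradient_step(X, A, step=0.1):
    """One Riemannian gradient ascent step for f(X) = tr(X^T A X),
    where X has orthonormal columns (a point on the Grassmannian).
    Illustrative only; the paper's methods use BFGS updates."""
    G = 2 * A @ X                        # Euclidean gradient of f
    G_t = G - X @ (X.T @ G)              # project onto the tangent space at X
    Q, _ = np.linalg.qr(X + step * G_t)  # QR retraction back to the manifold
    return Q
```

Iterating this step drives $X$ toward the dominant invariant subspace of a symmetric $A$; quasi-Newton updates replace the raw gradient direction with a curvature-corrected one to converge in far fewer iterations.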
Posted Content

Quasi-Newton methods on Grassmannians and multilinear approximations of tensors

TL;DR: In this article, the authors proposed quasi-Newton (BFGS) and limited memory (L-BFGS) methods for objective functions defined on a Grassmannian or a product of Grassmannians, and proved that these methods share the same optimality property as the usual BFGS updates on Euclidean spaces.