
Norm (mathematics)

About: Norm (mathematics) is a research topic. Over its lifetime, 16,840 publications have been published on this topic, receiving 323,489 citations.


Papers
Journal Article (DOI)
TL;DR: In this paper, the authors prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the ℓ1 norm.
Abstract: This article is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Can we recover each component individually? We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the ℓ1 norm. This suggests the possibility of a principled approach to robust principal component analysis, since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well. We discuss an algorithm for solving this optimization problem, and present applications in the area of video surveillance, where our methodology allows for the detection of objects in a cluttered background, and in the area of face recognition, where it offers a principled way of removing shadows and specularities in images of faces.

6,783 citations
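The convex program at the heart of Principal Component Pursuit can be sketched in a few lines of numpy. The following is a minimal ADMM-style sketch, not the authors' reference implementation: it alternates singular-value thresholding (the proximal map of the nuclear norm) with entrywise soft-thresholding (the proximal map of the ℓ1 norm), using the weight λ = 1/√max(m, n) suggested in the paper. The penalty heuristic for μ, the function names, and the toy data are assumptions for illustration.

```python
import numpy as np

def shrink(X, tau):
    """Elementwise soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding: the proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def pcp(M, lam=None, mu=None, n_iter=500, tol=1e-7):
    """Decompose M into low-rank L plus sparse S by ADMM on
    min ||L||_* + lam * ||S||_1  subject to  L + S = M."""
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))         # weight suggested in the paper
    mu = mu or (m * n) / (4.0 * np.abs(M).sum())  # common heuristic, an assumption here
    L, S, Y = (np.zeros_like(M) for _ in range(3))
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)     # nuclear-norm proximal step
        S = shrink(M - L + Y / mu, lam / mu)  # l1 proximal step
        R = M - L - S                         # primal residual
        Y += mu * R                           # dual update
        if np.linalg.norm(R) <= tol * np.linalg.norm(M):
            break
    return L, S

# Toy check: a rank-2 matrix with a few grossly corrupted entries.
rng = np.random.default_rng(0)
L0 = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 60))
S0 = np.zeros((60, 60))
S0.flat[rng.choice(3600, 180, replace=False)] = 10 * rng.standard_normal(180)
L_hat, S_hat = pcp(L0 + S0)
print("relative error on L:", np.linalg.norm(L_hat - L0) / np.linalg.norm(L0))
```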

Journal Article (DOI)
TL;DR: If the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct f to within very high accuracy from a small number of random measurements by solving a simple linear program.
Abstract: Suppose we are given a vector f in a class F ⊆ ℝ^N, e.g., a class of digital signals or digital images. How many linear measurements do we need to make about f to be able to recover f to within precision ε in the Euclidean (ℓ2) metric? This paper shows that if the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct f to within very high accuracy from a small number of random measurements by solving a simple linear program. More precisely, suppose that the nth largest entry of the vector |f| (or of its coefficients in a fixed basis) obeys |f|_(n) ≤ R · n^(-1/p), where R > 0 and p > 0. Suppose that we take measurements y_k = ⟨f, X_k⟩, k = 1, ..., K, where the X_k are N-dimensional Gaussian vectors with independent standard normal entries. Then, for each f obeying the decay estimate above for some 0 < p < 1, …

6,342 citations
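The recovery step described above is an ordinary linear program: minimize ‖g‖1 subject to Xg = y. Below is a hedged sketch of one standard way to pose it with scipy's linprog, via the split g = u − v with u, v ≥ 0; the problem sizes, sparsity level, and random seed are arbitrary choices for illustration, not values from the paper.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
N, K, s = 200, 80, 8                      # signal length, measurements, sparsity

f = np.zeros(N)
f[rng.choice(N, s, replace=False)] = rng.standard_normal(s)

X = rng.standard_normal((K, N))           # Gaussian measurement vectors
y = X @ f                                 # y_k = <f, X_k>

# min ||g||_1  s.t.  X g = y, as an LP over g = u - v with u, v >= 0:
# minimize 1'u + 1'v subject to X(u - v) = y.
c = np.ones(2 * N)
A_eq = np.hstack([X, -X])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * N), method="highs")
g = res.x[:N] - res.x[N:]

print("relative recovery error:", np.linalg.norm(g - f) / np.linalg.norm(f))
```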

Journal Article (DOI)
TL;DR: In this article, simple state-space formulas are derived for all controllers solving the following standard H∞ problem: for a given number γ > 0, find all controllers such that the H∞ norm of the closed-loop transfer function is (strictly) less than γ.
Abstract: Simple state-space formulas are derived for all controllers solving the following standard H∞ problem: For a given number γ > 0, find all controllers such that the H∞ norm of the closed-loop transfer function is (strictly) less than γ. It is known that a controller exists if and only if the unique stabilizing solutions to two algebraic Riccati equations are positive definite and the spectral radius of their product is less than γ². Under these conditions, a parameterization of all controllers solving the problem is given as a linear fractional transformation (LFT) on a contractive, stable, free parameter. The state dimension of the coefficient matrix for the LFT, constructed using the two Riccati solutions, equals that of the plant and has a separation structure reminiscent of classical LQG (i.e., H₂) theory. This paper is intended to be of tutorial value, so a standard H₂ solution is developed in parallel.

5,272 citations
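The existence test quoted above (two stabilizing Riccati solutions that are positive (semi)definite, with the spectral radius of their product below γ²) can be checked numerically. The sketch below solves each algebraic Riccati equation from the stable invariant subspace of its Hamiltonian matrix, via scipy's real Schur decomposition. It assumes the paper's simplifying normalizations on D11, D12 and D21; the function names and the tolerance are placeholders, not an established API.

```python
import numpy as np
from scipy.linalg import schur

def stab_ric(A, R, Q):
    """Stabilizing solution X of A'X + XA + XRX + Q = 0, read off from the
    stable invariant subspace of the associated Hamiltonian matrix."""
    n = A.shape[0]
    H = np.block([[A, R], [-Q, -A.T]])
    T, Z, sdim = schur(H, output="real", sort="lhp")
    # For an admissible gamma the Hamiltonian has no imaginary-axis
    # eigenvalues, so the stable subspace has dimension n (sdim == n).
    U1, U2 = Z[:n, :n], Z[n:, :n]
    return U2 @ np.linalg.inv(U1)

def hinf_feasible(A, B1, B2, C1, C2, gamma):
    """Existence test: a controller achieving closed-loop H-infinity norm
    below gamma exists iff the stabilizing solutions X, Y of the two Riccati
    equations are positive (semi)definite and rho(XY) < gamma^2."""
    g2 = gamma ** 2
    X = stab_ric(A, B1 @ B1.T / g2 - B2 @ B2.T, C1.T @ C1)    # control ARE
    Y = stab_ric(A.T, C1.T @ C1 / g2 - C2.T @ C2, B1 @ B1.T)  # filtering ARE
    psd = lambda M: np.all(np.linalg.eigvalsh((M + M.T) / 2) >= -1e-9)
    rho = np.abs(np.linalg.eigvals(X @ Y)).max()              # spectral radius
    return psd(X) and psd(Y) and rho < g2
```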

Journal Article (DOI)
TL;DR: It is proved that replacing the usual quadratic regularizing penalties by weighted ℓp penalties on the coefficients of such expansions, with 1 ≤ p ≤ 2, still regularizes the problem.
Abstract: We consider linear inverse problems where the solution is assumed to have a sparse expansion on an arbitrary preassigned orthonormal basis. We prove that replacing the usual quadratic regularizing penalties by weighted ℓp-penalties on the coefficients of such expansions, with 1 ≤ p ≤ 2, still regularizes the problem. Use of such ℓp-penalized problems with p < 2 is often advocated when one expects the underlying ideal noiseless solution to have a sparse expansion with respect to the basis under consideration. To compute the corresponding regularized solutions, we analyze an iterative algorithm that amounts to a Landweber iteration with thresholding (or nonlinear shrinkage) applied at each iteration step. We prove that this algorithm converges in norm.

4,339 citations
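The iteration analyzed in the paper, specialized to p = 1, is what is now usually called ISTA: a Landweber (gradient) step followed by soft-thresholding. A minimal numpy sketch on synthetic data follows; the step size 1/‖A‖², the penalty weight, and the toy problem are illustrative choices, not values from the paper.

```python
import numpy as np

def soft(x, tau):
    """Soft-thresholding: the proximal map of the l1 penalty."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, y, lam, n_iter=1000):
    """Landweber iteration with shrinkage for the p = 1 case: a gradient step
    on 0.5 * ||A x - y||^2 followed by soft-thresholding. Converges in norm
    to a minimizer of 0.5 * ||A x - y||^2 + lam * ||x||_1."""
    t = 1.0 / np.linalg.norm(A, 2) ** 2   # step size 1 / ||A||^2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft(x - t * (A.T @ (A @ x - y)), t * lam)
    return x

# Toy example: recover a sparse vector from noisy underdetermined data.
rng = np.random.default_rng(2)
A = rng.standard_normal((100, 300)) / 10
x0 = np.zeros(300)
x0[rng.choice(300, 10, replace=False)] = 1.0
y = A @ x0 + 0.01 * rng.standard_normal(100)
x_hat = ista(A, y, lam=0.01)
print("recovered support:", np.flatnonzero(np.abs(x_hat) > 0.1))
```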

Proceedings Article (DOI)
20 Jun 2005
TL;DR: The idea is to learn a function that maps input patterns into a target space such that the L1 norm in the target space approximates the "semantic" distance in the input space.
Abstract: We present a method for training a similarity metric from data. The method can be used for recognition or verification applications where the number of categories is very large and not known during training, and where the number of training samples for a single category is very small. The idea is to learn a function that maps input patterns into a target space such that the L1 norm in the target space approximates the "semantic" distance in the input space. The method is applied to a face verification task. The learning process minimizes a discriminative loss function that drives the similarity metric to be small for pairs of faces from the same person, and large for pairs from different persons. The mapping from the raw inputs to the target space is a convolutional network whose architecture is designed for robustness to geometric distortions. The system is tested on the Purdue/AR face database, which has a very high degree of variability in pose, lighting, expression, position, and artificial occlusions such as dark glasses and obscuring scarves.

3,870 citations
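A compact way to see the training setup is a pair of weight-sharing encoders compared by an L1 distance in the target space. The PyTorch sketch below uses a small fully connected stand-in for the paper's convolutional network and a hinge-style contrastive loss in the spirit of the paper's discriminative loss, not its exact form; the layer sizes, margin, and random data are all assumptions for illustration.

```python
import torch
import torch.nn as nn

class SiameseNet(nn.Module):
    """Maps inputs into a target space where the L1 norm approximates
    semantic distance; a tiny stand-in for the paper's convolutional net."""
    def __init__(self, dim_in=784, dim_out=32):
        super().__init__()
        self.embed = nn.Sequential(
            nn.Linear(dim_in, 128), nn.ReLU(),
            nn.Linear(128, dim_out),
        )

    def forward(self, x1, x2):
        # Shared weights: the same embedding is applied to both inputs.
        return self.embed(x1), self.embed(x2)

def contrastive_loss(e1, e2, same, margin=5.0):
    """Drive the L1 distance to be small for genuine pairs (same = 1) and
    at least `margin` apart for impostor pairs (same = 0)."""
    d = (e1 - e2).abs().sum(dim=1)                  # L1 distance in target space
    pos = same * d ** 2                             # pull genuine pairs together
    neg = (1 - same) * torch.clamp(margin - d, min=0) ** 2  # push impostors apart
    return (pos + neg).mean()

# One training step on random stand-in data.
net = SiameseNet()
opt = torch.optim.SGD(net.parameters(), lr=1e-2)
x1, x2 = torch.randn(16, 784), torch.randn(16, 784)
same = torch.randint(0, 2, (16,)).float()           # 1 = same person, 0 = different
loss = contrastive_loss(*net(x1, x2), same)
opt.zero_grad()
loss.backward()
opt.step()
```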


Network Information

Related Topics (5)

Bounded function: 77.2K papers, 1.3M citations, 85% related
Nonlinear system: 208.1K papers, 4M citations, 84% related
Robustness (computer science): 94.7K papers, 1.6M citations, 84% related
Partial differential equation: 70.8K papers, 1.6M citations, 84% related
Optimization problem: 96.4K papers, 2.1M citations, 84% related
Performance Metrics

No. of papers in the topic in previous years:

Year    Papers
2022    43
2021    1,013
2020    1,083
2019    1,068
2018    1,013
2017    869