Topic

Affine transformation

About: Affine transformation is a research topic. Over the lifetime, 23,531 publications have been published within this topic, receiving 434,668 citations. The topic is also known as: Affine map.


Papers
Journal Article
TL;DR: In this paper, it was shown that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum-rank solution can be recovered by solving a convex optimization problem, namely, the minimization of the nuclear norm over the given affine space.
Abstract: The affine rank minimization problem consists of finding a matrix of minimum rank that satisfies a given system of linear equality constraints. Such problems have appeared in the literature of a diverse set of fields including system identification and control, Euclidean embedding, and collaborative filtering. Although specific instances can often be solved with specialized algorithms, the general affine rank minimization problem is NP-hard because it contains vector cardinality minimization as a special case. In this paper, we show that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum-rank solution can be recovered by solving a convex optimization problem, namely, the minimization of the nuclear norm over the given affine space. We present several random ensembles of equations where the restricted isometry property holds with overwhelming probability, provided the codimension of the subspace is sufficiently large. The techniques used in our analysis have strong parallels in the compressed sensing framework. We discuss how affine rank minimization generalizes this preexisting concept and outline a dictionary relating concepts from cardinality minimization to those of rank minimization. We also discuss several algorithmic approaches to minimizing the nuclear norm and illustrate our results with numerical examples.
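The nuclear-norm relaxation described above can be prototyped with a generic convex solver. The sketch below is only an illustration of that idea, not the authors' code: the problem sizes, the Gaussian sensing ensemble, and the use of cvxpy are assumptions made here for demonstration.

```python
# Minimal sketch: recover a low-rank matrix from random affine measurements
# by minimizing the nuclear norm, the convex surrogate discussed above.
# Sizes and the Gaussian ensemble are illustrative assumptions.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, r, m = 15, 2, 150                       # matrix size, true rank, number of measurements

X0 = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))   # rank-r ground truth
A = rng.standard_normal((m, n, n))                               # random sensing matrices A_k
b = np.einsum("kij,ij->k", A, X0)                                # b_k = <A_k, X0>

# Minimize the nuclear norm subject to the affine constraints <A_k, X> = b_k.
X = cp.Variable((n, n))
constraints = [cp.sum(cp.multiply(A[k], X)) == b[k] for k in range(m)]
prob = cp.Problem(cp.Minimize(cp.normNuc(X)), constraints)
prob.solve()

print("relative recovery error:", np.linalg.norm(X.value - X0) / np.linalg.norm(X0))
```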

2,742 citations

Journal ArticleDOI
31 Jan 2010
TL;DR: A family of Markov chain Monte Carlo methods whose performance is unaffected by affine transformations of space is proposed, and computational tests show that the affine invariant methods can be significantly faster than standard MCMC methods on highly skewed distributions.
Abstract: We propose a family of Markov chain Monte Carlo methods whose performance is unaffected by affine transformations of space. These algorithms are easy to construct and require little or no additional computational overhead. They should be particularly useful for sampling badly scaled distributions. Computational tests show that the affine invariant methods can be significantly faster than standard MCMC methods on highly skewed distributions.
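For concreteness, below is a minimal sketch of the "stretch move", one member of this family of affine-invariant ensemble samplers (the construction popularized by the emcee package). The target density, walker count, and step-scale parameter a = 2 are illustrative choices made here, not prescriptions from the paper.

```python
# Minimal sketch of an affine-invariant "stretch move" ensemble sampler.
# Each walker is updated by stretching toward another randomly chosen walker.
import numpy as np

def log_p(x):
    # Example target: a badly scaled, correlated Gaussian, the kind of
    # distribution affine-invariant samplers handle well.
    cov = np.array([[100.0, 9.9], [9.9, 1.0]])
    return -0.5 * x @ np.linalg.solve(cov, x)

def stretch_move_sampler(log_p, n_walkers=20, n_dim=2, n_steps=2000, a=2.0, seed=0):
    rng = np.random.default_rng(seed)
    walkers = rng.standard_normal((n_walkers, n_dim))          # initial ensemble
    logp = np.array([log_p(w) for w in walkers])
    chain = np.empty((n_steps, n_walkers, n_dim))
    for t in range(n_steps):
        for k in range(n_walkers):
            j = rng.choice([i for i in range(n_walkers) if i != k])
            z = (1.0 + (a - 1.0) * rng.random()) ** 2 / a      # z ~ g(z) ∝ 1/sqrt(z) on [1/a, a]
            proposal = walkers[j] + z * (walkers[k] - walkers[j])
            lp = log_p(proposal)
            log_accept = (n_dim - 1) * np.log(z) + lp - logp[k]
            if np.log(rng.random()) < log_accept:
                walkers[k], logp[k] = proposal, lp
        chain[t] = walkers
    return chain

samples = stretch_move_sampler(log_p).reshape(-1, 2)
print("sample mean:", samples.mean(axis=0))
```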

2,569 citations

Journal ArticleDOI
TL;DR: A new method for unsupervised endmember extraction from hyperspectral data, termed vertex component analysis (VCA), is presented; it competes with state-of-the-art methods at a computational complexity between one and two orders of magnitude lower than the best available method.
Abstract: Given a set of mixed spectral (multispectral or hyperspectral) vectors, linear spectral mixture analysis, or linear unmixing, aims at estimating the number of reference substances, also called endmembers, their spectral signatures, and their abundance fractions. This paper presents a new method for unsupervised endmember extraction from hyperspectral data, termed vertex component analysis (VCA). The algorithm exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. In a series of experiments using simulated and real data, the VCA algorithm competes with state-of-the-art methods, with a computational complexity between one and two orders of magnitude lower than the best available method.
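The sketch below illustrates the geometric idea exploited above: mixed pixels are convex combinations of endmembers, so they lie in a simplex whose vertices are the endmembers, and repeatedly projecting the data onto a direction orthogonal to the endmembers found so far picks out a new vertex. This is a simplified illustration under a pure-pixel, noiseless assumption, not the published VCA implementation (it omits, e.g., the SNR-dependent projection step); the synthetic endmembers and mixing model are made up here.

```python
# Simplified illustration of the vertex-picking idea behind VCA.
import numpy as np

rng = np.random.default_rng(0)
bands, p, n_pixels = 50, 3, 5000

endmembers = rng.random((bands, p))                      # true spectral signatures (columns)
abund = rng.dirichlet(np.ones(p), size=n_pixels).T       # abundances: nonnegative, sum to one
pixels = np.hstack([endmembers, endmembers @ abund])     # linear mixtures plus pure pixels

estimated = np.zeros((bands, p))
for i in range(p):
    # Random direction made orthogonal to the endmembers already found.
    f = rng.standard_normal(bands)
    if i > 0:
        E = estimated[:, :i]
        f -= E @ np.linalg.lstsq(E, f, rcond=None)[0]    # remove component in span(E)
    idx = np.argmax(np.abs(f @ pixels))                  # extreme pixel along that direction
    estimated[:, i] = pixels[:, idx]

# Worst-case distance from each estimate to its nearest true endmember (~0 here).
dists = np.linalg.norm(estimated[:, :, None] - endmembers[:, None, :], axis=0)
print("max endmember error:", dists.min(axis=1).max())
```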

2,422 citations

Proceedings ArticleDOI
07 Jun 2015
TL;DR: This paper expands the internal patch search space of self-similarity based super-resolution by allowing geometric variations: detected perspective geometry guides the patch search, additional affine transformations accommodate local shape variations, and a compositional model handles both types of transformations simultaneously.
Abstract: Self-similarity based super-resolution (SR) algorithms are able to produce visually pleasing results without extensive training on external databases. Such algorithms exploit the statistical prior that patches in a natural image tend to recur within and across scales of the same image. However, the internal dictionary obtained from the given image may not always be sufficiently expressive to cover the textural appearance variations in the scene. In this paper, we extend self-similarity based SR to overcome this drawback. We expand the internal patch search space by allowing geometric variations. We do so by explicitly localizing planes in the scene and using the detected perspective geometry to guide the patch search process. We also incorporate additional affine transformations to accommodate local shape variations. We propose a compositional model to simultaneously handle both types of transformations. We extensively evaluate the performance in both urban and natural scenes. Even without using any external training databases, we achieve significantly superior results on urban scenes, while maintaining comparable performance on natural scenes as other state-of-the-art SR algorithms.
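As a small illustration of one ingredient of such an approach, the sketch below enlarges an internal patch dictionary with affine-warped variants of a patch. The warp parameters, patch size, and the use of scipy.ndimage are assumptions made here for demonstration; the paper's plane detection and compositional search are not reproduced.

```python
# Sketch: expand a patch search space with a few affine-warped variants.
import numpy as np
from scipy.ndimage import affine_transform

def affine_variants(patch, shears=(-0.2, 0.0, 0.2), scales=(0.9, 1.0, 1.1)):
    """Return affine-warped copies of a square patch, warped about its centre."""
    h, w = patch.shape
    centre = np.array([(h - 1) / 2.0, (w - 1) / 2.0])
    variants = []
    for s in shears:
        for c in scales:
            # 2x2 matrix mapping output coordinates to input coordinates.
            M = np.array([[1.0 / c, s],
                          [0.0,     1.0 / c]])
            offset = centre - M @ centre          # keep the patch centre fixed
            variants.append(affine_transform(patch, M, offset=offset, order=1, mode="nearest"))
    return variants

# Usage: build warped variants of a single 8x8 patch.
patch = np.random.default_rng(0).random((8, 8))
dictionary = affine_variants(patch)
print(len(dictionary), "warped variants, each of shape", dictionary[0].shape)
```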

2,389 citations

Proceedings ArticleDOI
18 Mar 2019
TL;DR: Spatially-adaptive normalization is proposed, a simple but effective layer for synthesizing photorealistic images given an input semantic layout; it also allows users to easily control the style and content of image synthesis results and to create multi-modal results.
Abstract: We propose spatially-adaptive normalization, a simple but effective layer for synthesizing photorealistic images given an input semantic layout. Previous methods directly feed the semantic layout as input to the network, forcing the network to memorize the information throughout all the layers. Instead, we propose using the input layout for modulating the activations in normalization layers through a spatially-adaptive, learned affine transformation. Experiments on several challenging datasets demonstrate the superiority of our method compared to existing approaches, regarding both visual fidelity and alignment with input layouts. Finally, our model allows users to easily control the style and content of image synthesis results as well as create multi-modal results. Code is available upon publication.
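Below is a minimal PyTorch-style sketch of the spatially-adaptive normalization idea described above: normalize the activations with a parameter-free norm, then modulate them with per-pixel scale and bias maps predicted from the resized semantic layout. The layer sizes and the small convolutional head are assumptions for illustration, not the authors' released architecture.

```python
# Sketch of a spatially-adaptive normalization layer: the affine parameters
# of the normalization vary per pixel and are predicted from the layout.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatiallyAdaptiveNorm(nn.Module):
    def __init__(self, feature_channels, label_channels, hidden=128):
        super().__init__()
        # Parameter-free normalization; the affine part comes from the layout.
        self.norm = nn.BatchNorm2d(feature_channels, affine=False)
        self.shared = nn.Sequential(
            nn.Conv2d(label_channels, hidden, kernel_size=3, padding=1), nn.ReLU())
        self.to_gamma = nn.Conv2d(hidden, feature_channels, kernel_size=3, padding=1)
        self.to_beta = nn.Conv2d(hidden, feature_channels, kernel_size=3, padding=1)

    def forward(self, x, segmap):
        # Resize the layout to the feature resolution, then predict the
        # spatially varying affine parameters gamma and beta.
        segmap = F.interpolate(segmap, size=x.shape[2:], mode="nearest")
        h = self.shared(segmap)
        return self.norm(x) * (1 + self.to_gamma(h)) + self.to_beta(h)

# Usage on dummy data: 64-channel features, a 10-class layout (made-up sizes).
x = torch.randn(4, 64, 32, 32)
segmap = torch.randn(4, 10, 256, 256).softmax(dim=1)   # stand-in for a one-hot layout
print(SpatiallyAdaptiveNorm(64, 10)(x, segmap).shape)
```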

2,159 citations


Network Information
Related Topics (5)
Invariant (mathematics): 48.4K papers, 861.9K citations, 93% related
Polynomial: 52.6K papers, 853.1K citations, 91% related
Bounded function: 77.2K papers, 1.3M citations, 89% related
Upper and lower bounds: 56.9K papers, 1.1M citations, 86% related
Eigenvalues and eigenvectors: 51.7K papers, 1.1M citations, 86% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    944
2022    2,140
2021    1,127
2020    1,229
2019    1,154
2018    1,100