
Showing papers on "Invariant (mathematics) published in 2007"


Journal ArticleDOI
TL;DR: The topological invariants of a time-reversal-invariant band structure in two dimensions are multiple copies of the ${\mathbb{Z}}_{2}$ invariant found by Kane and Mele as mentioned in this paper.
Abstract: The topological invariants of a time-reversal-invariant band structure in two dimensions are multiple copies of the ${\mathbb{Z}}_{2}$ invariant found by Kane and Mele. Such invariants protect the ``topological insulator'' phase and give rise to a spin Hall effect carried by edge states. Each pair of bands related by time reversal is described by one ${\mathbb{Z}}_{2}$ invariant, up to one less than half the dimension of the Bloch Hamiltonians. In three dimensions, there are four such invariants per band pair. The ${\mathbb{Z}}_{2}$ invariants of a crystal determine the transitions between ordinary and topological insulators as its bands are occupied by electrons. We derive these invariants using maps from the Brillouin zone to the space of Bloch Hamiltonians and clarify the connections between ${\mathbb{Z}}_{2}$ invariants, the integer invariants that underlie the integer quantum Hall effect, and previous invariants of $\mathcal{T}$-invariant Fermi systems.
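For orientation, one common way to express the two-dimensional ${\mathbb{Z}}_{2}$ invariant discussed above is the Fu-Kane form, stated here from the general literature rather than from this paper's own derivation; $\mathbf{A}$ and $F$ denote the Berry connection and Berry curvature of the occupied bands, and EBZ is half of the Brillouin zone.

```latex
% Hedged sketch: a standard expression of the 2D Z2 invariant (Fu-Kane form),
% not necessarily the notation used in the paper above.
\nu \;=\; \frac{1}{2\pi}\left[\;\oint_{\partial\,\mathrm{EBZ}} d\mathbf{k}\cdot\mathbf{A}
\;-\;\int_{\mathrm{EBZ}} d^{2}k\, F \;\right] \bmod 2
```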

1,749 citations


Proceedings ArticleDOI
17 Jun 2007
TL;DR: An unsupervised method is presented for learning a hierarchy of sparse feature detectors that are invariant to small shifts and distortions; the layer-wise training alleviates the over-parameterization problems that plague purely supervised learning procedures and yields good performance with very few labeled training samples.
Abstract: We present an unsupervised method for learning a hierarchy of sparse feature detectors that are invariant to small shifts and distortions. The resulting feature extractor consists of multiple convolution filters, followed by a feature-pooling layer that computes the max of each filter output within adjacent windows, and a point-wise sigmoid non-linearity. A second level of larger and more invariant features is obtained by training the same algorithm on patches of features from the first level. Training a supervised classifier on these features yields 0.64% error on MNIST, and 54% average recognition rate on Caltech 101 with 30 training samples per category. While the resulting architecture is similar to convolutional networks, the layer-wise unsupervised training procedure alleviates the over-parameterization problems that plague purely supervised learning procedures, and yields good performance with very few labeled training samples.
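A minimal numpy sketch of the kind of feed-forward feature extractor the abstract describes: a convolution filter bank, max-pooling over adjacent windows, and a point-wise sigmoid. The filter values, sizes, and pooling windows here are illustrative placeholders, not the learned parameters from the paper.

```python
import numpy as np
from scipy.signal import convolve2d

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def extract_features(image, filters, pool=2):
    """Convolve with a filter bank, max-pool over adjacent windows, squash point-wise."""
    maps = []
    for f in filters:
        m = convolve2d(image, f, mode="valid")           # convolution filter output
        h, w = m.shape
        h, w = h - h % pool, w - w % pool                # crop to a multiple of the pool size
        m = m[:h, :w].reshape(h // pool, pool, w // pool, pool).max(axis=(1, 3))  # max pooling
        maps.append(sigmoid(m))                          # point-wise non-linearity
    return np.stack(maps)

# Illustrative use with random filters standing in for the learned sparse features.
rng = np.random.default_rng(0)
image = rng.standard_normal((28, 28))                    # e.g. an MNIST-sized patch
filters = rng.standard_normal((8, 5, 5))
features = extract_features(image, filters)
print(features.shape)                                    # (8, 12, 12)
```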

1,232 citations


Journal ArticleDOI
TL;DR: Daikon is an implementation of dynamic detection of likely invariants; that is, the Daikon invariant detector reports likely program invariants, where an invariant is a property that holds at a certain point or points in a program.
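As a concrete illustration (not taken from the paper), the comments below show the sorts of likely invariants a dynamic detector in the spirit of Daikon might report for a simple routine; the function and the specific properties are hypothetical.

```python
def average(values):
    """Return the mean of a non-empty list of non-negative numbers."""
    # Likely invariants a dynamic detector might report at entry, inferred from observed runs:
    #   len(values) >= 1
    #   all elements of values >= 0
    total = sum(values)
    # Likely invariant at this program point:  total >= max(values)
    result = total / len(values)
    # Likely invariants reported at exit:
    #   min(values) <= result <= max(values)
    #   result >= 0
    return result
```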

1,040 citations


Journal ArticleDOI
TL;DR: In this paper, the authors introduce the notion of a stability condition on a triangulated category and prove a deformation result which shows that the space with its natural topology is a manifold, possibly infinite-dimensional.
Abstract: This paper introduces the notion of a stability condition on a triangulated category. The motivation comes from the study of Dirichlet branes in string theory, and especially from M.R. Douglas's notion of $\Pi$-stability. From a mathematical point of view, the most interesting feature of the definition is that the set of stability conditions $\mathrm{Stab}(\mathcal{T})$ on a fixed category $\mathcal{T}$ has a natural topology, thus defining a new invariant of triangulated categories. After setting up the necessary definitions I prove a deformation result which shows that the space $\mathrm{Stab}(\mathcal{T})$ with its natural topology is a manifold, possibly infinite-dimensional.
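For readers new to the notion, the standard definition (stated in the usual form from the literature, which may differ slightly in wording from the paper) is roughly the following.

```latex
% Hedged sketch of the standard definition. A stability condition on a triangulated
% category \mathcal{T} is a pair (Z, \mathcal{P}): a group homomorphism
% Z : K(\mathcal{T}) \to \mathbb{C} (the central charge) together with full additive
% subcategories \mathcal{P}(\phi) \subset \mathcal{T}, \phi \in \mathbb{R}, such that
\begin{align*}
&\text{(a)}\quad 0 \neq E \in \mathcal{P}(\phi) \;\Longrightarrow\; Z(E) = m(E)\, e^{i\pi\phi}
  \ \text{with}\ m(E) > 0,\\
&\text{(b)}\quad \mathcal{P}(\phi + 1) = \mathcal{P}(\phi)[1],\\
&\text{(c)}\quad \phi_1 > \phi_2 \;\Longrightarrow\; \operatorname{Hom}\bigl(\mathcal{P}(\phi_1), \mathcal{P}(\phi_2)\bigr) = 0,\\
&\text{(d)}\quad \text{every nonzero object admits a finite Harder--Narasimhan filtration}\\
&\phantom{\text{(d)}\quad}\text{with semistable factors of decreasing phase.}
\end{align*}
```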

825 citations


Journal ArticleDOI
TL;DR: A general form of orbital invariant explicitly correlated second-order closed-shell Møller-Plesset perturbation theory (MP2-F12) is derived, and compact working equations are presented.
Abstract: A general form of orbital invariant explicitly correlated second-order closed-shell Møller-Plesset perturbation theory (MP2-F12) is derived, and compact working equations are presented. Many-electron integrals are avoided by resolution of the identity (RI) approximations using the complementary auxiliary basis set approach. A hierarchy of well defined levels of approximation is introduced, differing from the exact theory by the neglect of terms involving matrix elements over the Fock operator. The most accurate method is denoted as MP2-F12/3B. This assumes only that Fock matrix elements between occupied orbitals and orbitals outside the auxiliary basis set are negligible. For the chosen ansatz for the first-order wave function this is exact if the auxiliary basis is complete. In the next lower approximation it is assumed that the occupied orbital space is closed under action of the Fock operator [generalized Brillouin condition (GBC)]; this is equivalent to approximation 2B of Klopper and Samson [J. Chem....
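For context only, the conventional closed-shell MP2 correlation energy that F12 theory corrects is reproduced below; this is not one of the paper's F12 working equations. Here $i, j$ label occupied and $a, b$ virtual spatial orbitals, $(ia|jb)$ are two-electron integrals in Mulliken notation, and $\varepsilon$ are orbital energies.

```latex
% Hedged reference formula: conventional closed-shell MP2 correlation energy.
% The MP2-F12 methods of the paper add explicitly correlated geminal terms (not shown).
E^{(2)} \;=\; \sum_{ij}^{\mathrm{occ}} \sum_{ab}^{\mathrm{virt}}
\frac{(ia|jb)\,\bigl[\,2\,(ia|jb) \;-\; (ib|ja)\,\bigr]}
     {\varepsilon_i + \varepsilon_j - \varepsilon_a - \varepsilon_b}
```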

492 citations


Proceedings ArticleDOI
06 Jun 2007
TL;DR: This paper proposes the rank invariant, a discrete invariant for the robust estimation of Betti numbers in a multifiltration, and proves its completeness in one dimension.
Abstract: Persistent homology captures the topology of a filtration - a one-parameter family of increasing spaces - in terms of a complete discrete invariant. This invariant is a multiset of intervals that denote the lifetimes of the topological entities within the filtration. In many applications of topology, we need to study a multifiltration: a family of spaces parameterized along multiple geometric dimensions. In this paper, we show that no similar complete discrete invariant exists for multidimensional persistence. Instead, we propose the rank invariant, a discrete invariant for the robust estimation of Betti numbers in a multifiltration, and prove its completeness in one dimension.
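Sketched in standard notation (which may differ from the paper's), the proposed invariant records, for every comparable pair of parameter values, the rank of the map that inclusion induces on homology.

```latex
% Hedged sketch of the rank invariant of a multifiltration {X_u}, u \in \mathbb{R}^n.
\rho_{X,i}(u, v) \;=\; \operatorname{rank}\Bigl( H_i(X_u) \;\longrightarrow\; H_i(X_v) \Bigr),
\qquad u \le v .
```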

455 citations


Journal ArticleDOI
TL;DR: The PHENIX experiment at the BNL Relativistic Heavy Ion Collider (RHIC) has measured $J/\psi$ production for rapidities $-2.2 < y < 2.2$ as mentioned in this paper.
Abstract: The PHENIX experiment at the BNL Relativistic Heavy Ion Collider (RHIC) has measured $J/\psi$ production for rapidities $-2.2 < y < 2.2$ in $\mathrm{Au}+\mathrm{Au}$ collisions at $\sqrt{s_{NN}} = 200\ \mathrm{GeV}$. The $J/\psi$ invariant yield and nuclear modification factor $R_{AA}$ as a function of centrality, transverse momentum, and rapidity are reported. A suppression of $J/\psi$ relative to binary collision scaling of proton-proton reaction yields is observed. Models which describe the lower energy $J/\psi$ data at the CERN Super Proton Synchrotron invoking only $J/\psi$ destruction based on the local medium density predict a significantly larger suppression at RHIC and more suppression at midrapidity than at forward rapidity. Both trends are contradicted by our data.
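For reference, the nuclear modification factor quoted above is conventionally defined as the Au+Au yield divided by the proton-proton yield scaled by the mean number of binary nucleon-nucleon collisions; this is the standard definition rather than a quotation from the paper.

```latex
% Hedged reference definition of the nuclear modification factor.
R_{AA}(p_T, y) \;=\;
\frac{d^{2}N_{AA}/dp_T\,dy}
     {\langle N_{\mathrm{coll}}\rangle \; d^{2}N_{pp}/dp_T\,dy}
```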

446 citations


Journal ArticleDOI
TL;DR: A series of innovative methods are proposed to construct a high-performance rotation invariant multiview face detector, including the width-first-search (WFS) tree detector structure, the vector boosting algorithm for learning vector-output strong classifiers, the domain-partition-based weak learning method, the sparse feature in granular space, and the heuristic search for sparse feature selection.
Abstract: Rotation invariant multiview face detection (MVFD) aims to detect faces with arbitrary rotation-in-plane (RIP) and rotation-off-plane (ROP) angles in still images or video sequences. MVFD is crucial as the first step in automatic face processing for general applications since face images are seldom upright and frontal unless they are taken cooperatively. In this paper, we propose a series of innovative methods to construct a high-performance rotation invariant multiview face detector, including the width-first-search (WFS) tree detector structure, the vector boosting algorithm for learning vector-output strong classifiers, the domain-partition-based weak learning method, the sparse feature in granular space, and the heuristic search for sparse feature selection. As a result, our multiview face detector achieves low computational complexity, broad detection scope, and high detection accuracy on both standard testing sets and real-life images.

411 citations


Posted Content
TL;DR: This article presents a novel numerical abstract domain for static analysis by abstract interpretation and gives an efficient representation based on Difference-Bound Matrices with O(n^2) memory cost, where n is the number of variables, and graph-based algorithms for all common abstract operators, with O(n^3) time cost.
Abstract: This article presents a new numerical abstract domain for static analysis by abstract interpretation. It extends a former numerical abstract domain based on Difference-Bound Matrices and allows us to represent invariants of the form (±x ± y ≤ c), where x and y are program variables and c is a real constant. We focus on giving an efficient representation based on Difference-Bound Matrices - O(n^2) memory cost, where n is the number of variables - and graph-based algorithms for all common abstract operators - O(n^3) time cost. This includes a normal form algorithm to test equivalence of representation and a widening operator to compute least fixpoint approximations.
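A minimal sketch of the difference-bound-matrix machinery the abstract builds on: potential constraints x − y ≤ c stored in a matrix and closed with a Floyd-Warshall pass so that entailed constraints can be read off. This illustrates the O(n^2) storage / O(n^3) closure costs; it is not the paper's full domain, which also handles sums ±x ± y ≤ c by doubling variables.

```python
import numpy as np

INF = np.inf

def dbm_close(m):
    """Floyd-Warshall closure of a difference-bound matrix.

    m[i][j] = c encodes the constraint  v_i - v_j <= c  (INF means unconstrained).
    Closure makes every entailed bound explicit; cost is O(n^3) time, O(n^2) memory.
    """
    m = m.copy()
    n = m.shape[0]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                m[i, j] = min(m[i, j], m[i, k] + m[k, j])
    if any(m[i, i] < 0 for i in range(n)):
        return None                     # negative cycle: the constraint set is infeasible
    return m

# Illustrative invariant over variables (x, y, z):  x - y <= 1,  y - z <= 2.
m = np.full((3, 3), INF)
np.fill_diagonal(m, 0.0)
m[0, 1] = 1.0                           # x - y <= 1
m[1, 2] = 2.0                           # y - z <= 2
closed = dbm_close(m)
print(closed[0, 2])                     # entailed bound: x - z <= 3.0
```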

338 citations


Journal ArticleDOI
TL;DR: In this paper, the Zamolodchikov-Faddeev algebra for the superstring sigma model on AdS5 × S5 was discussed, and the canonical su(2|2)^2 invariant S-matrix satisfying the standard Yang-Baxter and crossing symmetry equations was found.
Abstract: We discuss the Zamolodchikov-Faddeev algebra for the superstring sigma-model on AdS5 × S5. We find the canonical su(2|2)^2 invariant S-matrix satisfying the standard Yang-Baxter and crossing symmetry equations. Its near-plane-wave expansion matches exactly the leading order term recently obtained by the direct perturbative computation. We also show that the S-matrix obtained by Beisert in the gauge theory framework does not satisfy the standard Yang-Baxter equation, and, as a consequence, the corresponding ZF algebra is twisted. The S-matrices in gauge and string theories however are physically equivalent and related by a non-local transformation of the basis states which is explicitly constructed.

Journal Article
TL;DR: In this paper, the authors discuss how to generate random unitary matrices from the classical compact groups U(N), O(N) and USp(N) with probability distributions given by the respective invariant measures.
Abstract: We discuss how to generate random unitary matrices from the classical compact groups U(N), O(N) and USp(N) with probability distributions given by the respective invariant measures. The algorithm is straightforward to implement using standard linear algebra packages. This approach extends to the Dyson circular ensembles too. This article is based on a lecture given by the author at the summer school on Number Theory and Random Matrix Theory held at the University of Rochester in June 2006. The exposition is addressed to a general mathematical audience.
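A short numpy sketch of the widely used QR-based recipe for U(N), consistent with the "standard linear algebra packages" approach the abstract describes (the details of the code are ours): draw a complex Ginibre matrix, take a QR decomposition, and fix the phases of R's diagonal so that the result is Haar distributed. The orthogonal and symplectic cases follow the same pattern and are omitted.

```python
import numpy as np

def haar_unitary(n, rng=None):
    """Return an n x n unitary matrix drawn from the Haar (invariant) measure on U(n)."""
    rng = np.random.default_rng(rng)
    # Complex Ginibre matrix: i.i.d. standard complex Gaussian entries.
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2.0)
    q, r = np.linalg.qr(z)
    # Normalise the phases of R's diagonal; without this step the distribution of Q
    # depends on the QR implementation and is not Haar.
    d = np.diagonal(r)
    q *= d / np.abs(d)
    return q

u = haar_unitary(4)
print(np.allclose(u.conj().T @ u, np.eye(4)))   # True: u is unitary
```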

Journal ArticleDOI
TL;DR: Simulation results clearly show that the proposed invariant Gabor representations and their extracted invariant features significantly outperform the conventional Gabor representation approach for rotation-invariant and scale-invariant texture image retrieval.

Journal ArticleDOI
TL;DR: A shape retrieval method using triangle-area representation for nonrigid shapes with closed contours that is effective in capturing both local and global characteristics of a shape, invariant to translation, rotation, and scaling, and robust against noise and moderate amounts of occlusion.
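The underlying quantity is the (signed) area of the triangle formed by a contour point and its two neighbours at separation t; a hedged sketch of the usual formula (our notation, not necessarily the paper's) is:

```latex
% Triangle-area signature of a closed contour sampled at points p_n = (x_n, y_n):
% the signed area of the triangle (p_{n-t}, p_n, p_{n+t}) for each point n and
% each separation t; suitable normalisation makes the representation invariant
% to translation, rotation, and scaling.
\mathrm{TAR}(n, t) \;=\; \tfrac{1}{2}\,
\bigl[\, x_{n-t}\,(y_{n} - y_{n+t}) + x_{n}\,(y_{n+t} - y_{n-t}) + x_{n+t}\,(y_{n-t} - y_{n}) \,\bigr]
```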

Journal ArticleDOI
TL;DR: In this paper, 2-dimensional conformally invariant non-linear elliptic PDEs (the harmonic map equation, prescribed mean curvature equations, etc.) are written in divergence form.
Abstract: We succeed in writing 2-dimensional conformally invariant non-linear elliptic PDE (harmonic map equation, prescribed mean curvature equations,..., etc.) in divergence form. These divergence-free quantities generalize to target manifolds without symmetries the well known conservation laws for weakly harmonic maps into homogeneous spaces. From this form we can recover, without the use of moving frame, all the classical regularity results known for 2-dimensional conformally invariant non-linear elliptic PDE (see [Hel]). It enables us also to establish new results. In particular we solve a conjecture by E. Heinz asserting that the solutions to the prescribed bounded mean curvature equation in arbitrary manifolds are continuous and we solve a conjecture by S. Hildebrandt [Hil1] claiming that critical points of continuously differentiable elliptic conformally invariant Lagrangians in two dimensions are continuous.
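As a concrete special case of the conservation laws the abstract refers to, the classical one for sphere-valued weakly harmonic maps (a well-known example from the literature, not the paper's general form) reads:

```latex
% For a weakly harmonic map u : B^2 -> S^{m-1}, the harmonic map equation
% -\Delta u = u\,|\nabla u|^2 can be rewritten component-wise in divergence form:
\operatorname{div}\bigl( u^{i}\,\nabla u^{j} - u^{j}\,\nabla u^{i} \bigr) = 0,
\qquad 1 \le i, j \le m .
```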

Journal ArticleDOI
TL;DR: In this paper, the classical superpotential of warped type II flux compactifications with SU(3) × SU (3)-structure to AdS4 or flat Minkowski space-time was derived from a standard argument involving domain walls and generalized calibrations.
Abstract: We discuss the four-dimensional N = 1 effective approach in the study of warped type II flux compactifications with SU(3) × SU(3)-structure to AdS4 or flat Minkowski space-time. The non-trivial warping makes it natural to use a supergravity formulation invariant under local complexified Weyl transformations. We obtain the classical superpotential from a standard argument involving domain walls and generalized calibrations and show how the resulting F-flatness and D-flatness equations exactly reproduce the full ten-dimensional supersymmetry equations. Furthermore, we consider the effect of non-perturbative corrections to this superpotential arising from gaugino condensation or Euclidean D-brane instantons. For the latter we derive the supersymmetry conditions in N = 1 flux vacua in full generality. We find that the non-perturbative corrections induce a quantum deformation of the internal generalized geometry. Smeared instantons allow one to understand KKLT-like AdS vacua from a ten-dimensional point of view. On the other hand, non-smeared instantons in IIB warped Calabi-Yau compactifications `destabilize' the Calabi-Yau complex structure into a genuine generalized complex one. This deformation gives a geometrical explanation of the non-trivial superpotential for mobile D3-branes induced by the non-perturbative corrections.

Proceedings ArticleDOI
12 Dec 2007
TL;DR: An extensive and up-to-date survey of the existing techniques to address the illumination variation problem is presented and covers the passive techniques that attempt to solve the illumination problem by studying the visible light images in which face appearance has been altered by varying illumination.
Abstract: The illumination variation problem is one of the well-known problems in face recognition in uncontrolled environment. In this paper an extensive and up-to-date survey of the existing techniques to address this problem is presented. This survey covers the passive techniques that attempt to solve the illumination problem by studying the visible light images in which face appearance has been altered by varying illumination, as well as the active techniques that aim to obtain images of face modalities invariant to environmental illumination.

Journal ArticleDOI
TL;DR: In this paper, a dynamical, 10^5-dimensional state-space representation of plane Couette flow at Re = 400 in a small, periodic cell is presented, together with a new method of visualizing invariant manifolds embedded in such high dimensions.
Abstract: Motivated by recent experimental and numerical studies of coherent structures in wall-bounded shear flows, we initiate a systematic exploration of the hierarchy of unstable invariant solutions of the Navier-Stokes equations. We construct a dynamical, 10^5-dimensional state-space representation of plane Couette flow at Re = 400 in a small, periodic cell and offer a new method of visualizing invariant manifolds embedded in such high dimensions. We compute a new equilibrium solution of plane Couette flow and the leading eigenvalues and eigenfunctions of known equilibria at this Reynolds number and cell size. What emerges from global continuations of their unstable manifolds is a surprisingly elegant dynamical-systems visualization of moderate-Reynolds turbulence. The invariant manifolds tessellate the region of state space explored by transiently turbulent dynamics with a rigid web of continuous and discrete symmetry-induced heteroclinic connections.

Journal ArticleDOI
TL;DR: In this paper, the authors complete the proof of the third author's conjectures relating definably compact groups G in saturated o-minimal structures to compact Lie groups, and also prove some structural results about such groups, for example, the existence of a left invariant finitely additive probability measure on definable subsets of G.
Abstract: We discuss measures, invariant measures on definable groups, and genericity, often in an NIP (failure of the independence property) environment. We complete the proof of the third author’s conjectures relating definably compact groups G in saturated o-minimal structures to compact Lie groups. We also prove some other structural results about such G, for example the existence of a left invariant finitely additive probability measure on definable subsets of G. We finally introduce a new notion “compact domination” (domination of a definable set by a compact space) and raise some new conjectures in the o-minimal case.

Proceedings ArticleDOI
26 Dec 2007
TL;DR: An affine invariant shape descriptor for maximally stable extremal regions (MSER) is introduced that uses only the shape of the detected MSER itself and can achieve the best performance under a range of imaging conditions by matching both the texture and shape descriptors.
Abstract: This paper introduces an affine invariant shape descriptor for maximally stable extremal regions (MSER). Affine invariant feature descriptors are normally computed by sampling the original grey-scale image in an invariant frame defined from each detected feature, but we instead use only the shape of the detected MSER itself. This has the advantage that features can be reliably matched regardless of the appearance of the surroundings of the actual region. The descriptor is computed using the scale invariant feature transform (SIFT), with the resampled MSER binary mask as input. We also show that the original MSER detector can be modified to achieve better scale invariance by detecting MSERs in a scale pyramid. We make extensive comparisons of the proposed feature against a SIFT descriptor computed on grey-scale patches, and also explore the possibility of grouping the shape descriptors into pairs to incorporate more context. While the descriptor does not perform as well on planar scenes, we demonstrate various categories of full 3D scenes where it outperforms the SIFT descriptor computed on grey-scale patches. The shape descriptor is also shown to be more robust to changes in illumination. We show that a system can achieve the best performance under a range of imaging conditions by matching both the texture and shape descriptors.
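A rough OpenCV sketch of the pipeline the abstract outlines: detect MSERs, rasterise each region as a binary mask, and compute a SIFT descriptor on the mask rather than on the grey-scale patch. The frame construction here (region centroid plus a size derived from the region area) and the file name are simplifications and assumptions of ours, not the affine-invariant frame used in the paper.

```python
import cv2
import numpy as np

def mser_shape_descriptors(gray):
    """Detect MSERs and describe each one by SIFT computed on its binary mask."""
    mser = cv2.MSER_create()
    regions, _ = mser.detectRegions(gray)
    sift = cv2.SIFT_create()
    descriptors = []
    for pts in regions:                                   # pts: N x 2 array of (x, y) pixels
        mask = np.zeros_like(gray)
        mask[pts[:, 1], pts[:, 0]] = 255                  # binary mask of the region itself
        cx, cy = pts.mean(axis=0)
        size = 2.0 * np.sqrt(len(pts) / np.pi)            # simplistic scale from region area
        kp = [cv2.KeyPoint(float(cx), float(cy), float(size))]
        _, desc = sift.compute(mask, kp)
        if desc is not None:
            descriptors.append(desc[0])
    return np.array(descriptors)

# Illustrative use on any 8-bit grey-scale image ("scene.png" is a hypothetical file name).
img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
if img is not None:
    print(mser_shape_descriptors(img).shape)
```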

Proceedings ArticleDOI
17 Jun 2007
TL;DR: A probabilistic model for learning rich, distributed representations of image transformations that develops domain specific motion features, in the form of fields of locally transformed edge filters, and can fantasize new transformations on previously unseen images.
Abstract: We describe a probabilistic model for learning rich, distributed representations of image transformations. The basic model is defined as a gated conditional random field that is trained to predict transformations of its inputs using a factorial set of latent variables. Inference in the model consists in extracting the transformation, given a pair of images, and can be performed exactly and efficiently. We show that, when trained on natural videos, the model develops domain specific motion features, in the form of fields of locally transformed edge filters. When trained on affine, or more general, transformations of still images, the model develops codes for these transformations, and can subsequently perform recognition tasks that are invariant under these transformations. It can also fantasize new transformations on previously unseen images. We describe several variations of the basic model and provide experimental results that demonstrate its applicability to a variety of tasks.

Patent
29 Mar 2007
TL;DR: In this paper, a method of presenting a digital work includes displaying a portion of the digital work on a display screen under a set of display conditions, and providing one or more invariant location reference identifiers corresponding to the portion of a digital file on the display screen.
Abstract: A method of presenting a digital work includes displaying a portion of the digital work on a display screen under a set of display conditions, and providing one or more invariant location reference identifiers corresponding to the portion of the digital work on the display screen. The invariant location reference identifiers are separate from the digital work, and each invariant location reference identifier is provided along with the corresponding portion of the digital work, regardless of the display conditions under which the portion of the digital work is displayed.

Journal ArticleDOI
TL;DR: In this article, it was shown that if X is an algebraic scheme over a perfect field and if D is the exceptional normal crossing divisor of a resolution of the singularities of X, the homotopy type of the incidence complex of D is an invariant of X.
Abstract: V.G. Berkovich’s non-Archimedean analytic geometry provides a natural framework to understand the combinatorial aspects in the theory of toric varieties and toroidal embeddings. This point of view leads to a conceptual and elementary proof of the following results: if X is an algebraic scheme over a perfect field and if D is the exceptional normal crossing divisor of a resolution of the singularities of X, the homotopy type of the incidence complex of D is an invariant of X. This is a generalization of a theorem due to D. Stepanov.

Journal ArticleDOI
TL;DR: In this paper, the authors classify orbit closures and invariant measures for the natural action of SL2(R) on UM2, the bundle of holomorphic 1-forms over the moduli space of Riemann surfaces of genus two.
Abstract: This paper classifies orbit closures and invariant measures for the natural action of SL2(R) on UM2, the bundle of holomorphic 1-forms over the moduli space of Riemann surfaces of genus two.

Journal ArticleDOI
John Geweke
TL;DR: The mixture model likelihood function is invariant with respect to permutation of the components of the mixture, and simple and widely used Markov chain Monte Carlo algorithms with data augmentation reliably recover the entire posterior distribution.
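The invariance in question is label switching: the likelihood of a K-component mixture is unchanged under any permutation of the component labels. Stated for a generic mixture (our notation, not the paper's specific models):

```latex
% For a K-component mixture with weights pi_k and component parameters lambda_k,
% the likelihood is the same for every permutation sigma of {1, ..., K}:
p(y \mid \pi, \lambda) \;=\; \prod_{i=1}^{n} \sum_{k=1}^{K} \pi_k \, f(y_i \mid \lambda_k)
\;=\; \prod_{i=1}^{n} \sum_{k=1}^{K} \pi_{\sigma(k)} \, f(y_i \mid \lambda_{\sigma(k)})
```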

Journal ArticleDOI
TL;DR: This paper discusses in detail the work principles of the typical RST invariant image watermarking algorithms, analyzes the performance of these typical algorithms through implementation, and points out their advantages and disadvantages.
Abstract: In this article, we review the algorithms for rotation, scaling and translation (RST) invariant image watermarking. There are mainly two categories of RST invariant image watermarking algorithms. One is to rectify the RST transformed image before conducting watermark detection. Another is to embed and detect watermark in an RST invariant or semi-invariant domain. In order to help readers understand, we first introduce the fundamental theories and techniques used in the existing RST invariant image watermarking algorithms. Then, we discuss in detail the work principles, embedding process, and detection process of the typical RST invariant image watermarking algorithms. Finally, we analyze and evaluate these typical algorithms through implementation, and point out their advantages and disadvantages.

Journal ArticleDOI
TL;DR: In this article, the relation between random matrices and free probability theory was extended from the level of expectations to the level of fluctuations, and the concept of second order freeness was introduced to understand global fluctuations of Haar distributed unitary random matrices.

Journal ArticleDOI
TL;DR: A simple geometric model is demonstrated that allows facial expressions to be described as isometric deformations of the facial surface, and experiments support the claim that a smaller embedding error leads to better recognition.
Abstract: Addressed here is the problem of constructing and analyzing expression-invariant representations of human faces. We demonstrate and justify experimentally a simple geometric model that allows facial expressions to be described as isometric deformations of the facial surface. The main step in the construction of an expression-invariant representation of a face involves embedding of the facial intrinsic geometric structure into some low-dimensional space. We study the influence of the embedding space geometry and dimensionality choice on the representation accuracy and argue that, compared to its Euclidean counterpart, spherical embedding leads to notably smaller metric distortions. We experimentally support our claim by showing that a smaller embedding error leads to better recognition.

Proceedings ArticleDOI
26 Dec 2007
TL;DR: Local fractal features that are evaluated densely are developed, and it is shown that the local fractal dimension is invariant to local bi-Lipschitz transformations, whereas its extension is able to correctly distinguish between fundamental texture primitives.
Abstract: We address the problem of developing discriminative, yet invariant, features for texture classification. Texture variations due to changes in scale are amongst the hardest to handle. One of the most successful methods of dealing with such variations is based on choosing interest points and selecting their characteristic scales [Lazebnik et al. PAMI 2005]. However, selecting a characteristic scale can be unstable for many textures. Furthermore, the reliance on an interest point detector and the inability to evaluate features densely can be serious limitations. Fractals present a mathematically well founded alternative to dealing with the problem of scale. However, they have not become popular as texture features due to their lack of discriminative power. This is primarily because: (a) fractal based classification methods have avoided statistical characterisations of textures (which is essential for accurate analysis) by using global features; and (b) fractal dimension features are unable to distinguish between key texture primitives such as edges, corners and uniform regions. In this paper, we overcome these drawbacks and develop local fractal features that are evaluated densely. The features are robust as they do not depend on choosing interest points or characteristic scales. Furthermore, it is shown that the local fractal dimension is invariant to local bi-Lipschitz transformations whereas its extension is able to correctly distinguish between fundamental texture primitives. Textures are characterised statistically by modelling the full joint PDF of these features. This allows us to develop a texture classification framework which is discriminative, robust and achieves state-of-the-art performance as compared to affine invariant and fractal based methods.
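A crude numpy sketch of one way to compute a dense local fractal dimension of the kind referred to above: measure how much image "mass" falls inside windows of growing radius around each pixel and take the slope of log-measure against log-radius. The measure used here (summed intensity in a square window) and the radii are illustrative choices of ours, only loosely related to the measures proposed in the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_fractal_dimension(image, radii=(1, 2, 4, 8)):
    """Dense local fractal dimension estimated from per-pixel log-log slopes.

    For each pixel x we approximate mu(B(x, r)) by the summed intensity in a
    (2r+1) x (2r+1) window and fit the slope of log mu against log r.
    """
    image = image.astype(np.float64) + 1e-12            # avoid log(0)
    logs = []
    for r in radii:
        size = 2 * r + 1
        mass = uniform_filter(image, size=size) * size * size   # windowed sum
        logs.append(np.log(mass))
    logs = np.stack(logs)                                # (len(radii), H, W)
    x = np.log(np.asarray(radii, dtype=np.float64))
    x = x - x.mean()
    # Least-squares slope of log mu versus log r, computed per pixel.
    slope = np.tensordot(x, logs - logs.mean(axis=0), axes=(0, 0)) / np.dot(x, x)
    return slope

rng = np.random.default_rng(0)
texture = rng.random((64, 64))
print(local_fractal_dimension(texture).shape)            # (64, 64)
```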

Proceedings Article
19 Jul 2007
TL;DR: Shift-invariant sparse coding (SISC) as mentioned in this paper is an extension of sparse coding which reconstructs a (usually time-series) input using all of the basis functions in all possible shifts.
Abstract: Sparse coding is an unsupervised learning algorithm that learns a succinct high-level representation of the inputs given only unlabeled data; it represents each input as a sparse linear combination of a set of basis functions. Originally applied to modeling the human visual cortex, sparse coding has also been shown to be useful for self-taught learning, in which the goal is to solve a supervised classification task given access to additional unlabeled data drawn from different classes than that in the supervised learning problem. Shift-invariant sparse coding (SISC) is an extension of sparse coding which reconstructs a (usually time-series) input using all of the basis functions in all possible shifts. In this paper, we present an efficient algorithm for learning SISC bases. Our method is based on iteratively solving two large convex optimization problems: The first, which computes the linear coefficients, is an L1-regularized linear least squares problem with potentially hundreds of thousands of variables. Existing methods typically use a heuristic to select a small subset of the variables to optimize, but we present a way to efficiently compute the exact solution. The second, which solves for bases, is a constrained linear least squares problem. By optimizing over complex-valued variables in the Fourier domain, we reduce the coupling between the different variables, allowing the problem to be solved efficiently. We show that SISC's learned high-level representations of speech and music provide useful features for classification tasks within those domains. When applied to classification, under certain conditions the learned features outperform state of the art spectral and cepstral features.
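To make the model concrete, the sketch below reconstructs a 1-D signal as a sum of basis functions convolved with sparse coefficient signals and updates the coefficients with plain iterative shrinkage-thresholding (ISTA). This only illustrates the shift-invariant sparse-coding objective; it is a simple stand-in for, not an implementation of, the exact L1 solver and Fourier-domain basis update described in the paper, and the step-size bound and test signal are our own choices.

```python
import numpy as np

def sisc_coefficients(x, bases, lam=0.1, n_iter=200):
    """Sparse coefficients for a shift-invariant sparse-coding model of a 1-D signal.

    Model: x ~ sum_k conv(s_k, b_k) with each coefficient signal s_k sparse (L1 penalty).
    """
    n, m = len(x), bases.shape[1]
    s = np.zeros((bases.shape[0], n - m + 1))
    # Conservative step size: 1 / an upper bound on the gradient's Lipschitz constant.
    step = 1.0 / (np.sum(np.abs(bases), axis=1) ** 2).sum()
    for _ in range(n_iter):
        recon = sum(np.convolve(s[k], bases[k]) for k in range(len(bases)))
        resid = recon - x
        for k in range(len(bases)):
            grad = np.correlate(resid, bases[k], mode="valid")   # adjoint of convolution
            z = s[k] - step * grad                               # gradient step
            s[k] = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return s

# Illustrative use: a synthetic signal built from two shifted copies of short random bases.
rng = np.random.default_rng(0)
bases = rng.standard_normal((2, 8))
x = np.convolve(np.eye(1, 93, 10)[0], bases[0]) + np.convolve(np.eye(1, 93, 40)[0], bases[1])
s = sisc_coefficients(x, bases)
print(s.shape)                                                   # (2, 93)
```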