scispace - formally typeset

Showing papers on "Equivariant map published in 2019"


Proceedings Article
01 Jan 2019
TL;DR: In this article, the authors give a general description of E(2)-equivariant convolutions in the framework of Steerable CNNs and show that these constraints for arbitrary group representations can be reduced to constraints under irreducible representations.
Abstract: The big empirical success of group equivariant networks has led in recent years to the sprouting of a great variety of equivariant network architectures. A particular focus has thereby been on rotation and reflection equivariant CNNs for planar images. Here we give a general description of E(2)-equivariant convolutions in the framework of Steerable CNNs. The theory of Steerable CNNs thereby yields constraints on the convolution kernels which depend on group representations describing the transformation laws of feature spaces. We show that these constraints for arbitrary group representations can be reduced to constraints under irreducible representations. A general solution of the kernel space constraint is given for arbitrary representations of the Euclidean group E(2) and its subgroups. We implement a wide range of previously proposed and entirely new equivariant network architectures and extensively compare their performances. E(2)-steerable convolutions are further shown to yield remarkable gains on CIFAR-10, CIFAR-100 and STL-10 when used as a drop-in replacement for non-equivariant convolutions.

283 citations
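The constraint underlying all of these architectures is the equivariance condition f(g·x) = g·f(x). A minimal illustration of it — a generic sketch, not the steerable-kernel implementation of the paper above — is a lifting correlation over the rotation group C4: correlating an image with all four 90° rotations of a single filter produces a stack of feature maps on which a rotation of the input acts by rotating every map and cyclically permuting the channels (the regular representation of C4).

```python
import numpy as np

def corr2d(x, k):
    """Plain 'valid'-mode 2D cross-correlation of image x with kernel k."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def c4_lifting(x, psi):
    """Correlate x with the four 90-degree rotations of the filter psi.

    Returns one feature map per element of the rotation group C4."""
    return [corr2d(x, np.rot90(psi, r)) for r in range(4)]

rng = np.random.default_rng(0)
x = rng.standard_normal((7, 7))      # odd size so rotation centres align
psi = rng.standard_normal((3, 3))

out = c4_lifting(x, psi)
out_rot = c4_lifting(np.rot90(x), psi)

# Equivariance: rotating the input rotates each feature map and
# cyclically shifts the group channel index.
for r in range(4):
    assert np.allclose(out_rot[r], np.rot90(out[(r - 1) % 4]))
```

The check is exact (up to floating point), not approximate: rotation by 90° maps the sampling grid onto itself, which is why finite rotation groups are the easiest setting for equivariant CNNs.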


Posted Content
TL;DR: The gauge equivariant convolution is implemented using a single conv2d call, making it a highly scalable and practical alternative to Spherical CNNs, and substantial improvements over previous methods are demonstrated on segmenting omnidirectional images and global climate patterns.
Abstract: The principle of equivariance to symmetry transformations enables a theoretically grounded approach to neural network architecture design. Equivariant networks have shown excellent performance and data efficiency on vision and medical imaging problems that exhibit symmetries. Here we show how this principle can be extended beyond global symmetries to local gauge transformations. This enables the development of a very general class of convolutional neural networks on manifolds that depend only on the intrinsic geometry, and which includes many popular methods from equivariant and geometric deep learning. We implement gauge equivariant CNNs for signals defined on the surface of the icosahedron, which provides a reasonable approximation of the sphere. By choosing to work with this very regular manifold, we are able to implement the gauge equivariant convolution using a single conv2d call, making it a highly scalable and practical alternative to Spherical CNNs. Using this method, we demonstrate substantial improvements over previous methods on the task of segmenting omnidirectional images and global climate patterns.

213 citations


Proceedings Article
01 Jan 2019
TL;DR: In this article, the authors present a general theory of group equivariant convolutional neural networks (G-CNNs) on homogeneous spaces such as Euclidean space and the sphere.
Abstract: We present a general theory of Group equivariant Convolutional Neural Networks (G-CNNs) on homogeneous spaces such as Euclidean space and the sphere. Feature maps in these networks represent fields on a homogeneous base space, and layers are equivariant maps between spaces of fields. The theory enables a systematic classification of all existing G-CNNs in terms of their symmetry group, base space, and field type. We also answer a fundamental question: what is the most general kind of equivariant linear map between feature spaces (fields) of given types? We show that such maps correspond one-to-one with generalized convolutions with an equivariant kernel, and characterize the space of such kernels.

206 citations


Proceedings Article
13 May 2019
TL;DR: In this paper, the authors consider networks with a single hidden layer, obtained by summing channels formed by applying an equivariant linear operator, a pointwise non-linearity, and either an invariant or equivariant linear output layer.
Abstract: Graph Neural Networks (GNN) come in many flavors, but should always be either invariant (permutation of the nodes of the input graph does not affect the output) or \emph{equivariant} (permutation of the input permutes the output). In this paper, we consider a specific class of invariant and equivariant networks, for which we prove new universality theorems. More precisely, we consider networks with a single hidden layer, obtained by summing channels formed by applying an equivariant linear operator, a pointwise non-linearity, and either an invariant or equivariant linear output layer. Recently, Maron et al. (2019) showed that by allowing higher-order tensorization inside the network, universal invariant GNNs can be obtained. As a first contribution, we propose an alternative proof of this result, which relies on the Stone-Weierstrass theorem for algebra of real-valued functions. Our main contribution is then an extension of this result to the \emph{equivariant} case, which appears in many practical applications but has been less studied from a theoretical point of view. The proof relies on a new generalized Stone-Weierstrass theorem for algebra of equivariant functions, which is of independent interest. Additionally, unlike many previous works that consider a fixed number of nodes, our results show that a GNN defined by a single set of parameters can approximate uniformly well a function defined on graphs of varying size.

169 citations
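The invariant/equivariant dichotomy in the abstract above can be made concrete with the simplest single-hidden-layer network of the kind the paper analyses — the sketch below is in the spirit of, not taken from, the paper: an equivariant linear operator (scaled identity plus scaled mean over nodes), a pointwise non-linearity, and an invariant sum-pooling output.

```python
import numpy as np

def equivariant_layer(X, a=0.7, b=0.3):
    """A permutation-equivariant linear map on node features X (n x d):
    a scaled identity plus a scaled mean over nodes. Both terms commute
    with any reordering of the rows."""
    return a * X + b * X.mean(axis=0, keepdims=True)

def invariant_readout(X):
    """Pointwise ReLU followed by sum-pooling over nodes: the result
    ignores node order entirely."""
    return np.maximum(equivariant_layer(X), 0.0).sum(axis=0)

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 3))          # 5 nodes, 3 features
perm = rng.permutation(5)

# Equivariance: permuting the input permutes the output the same way.
assert np.allclose(equivariant_layer(X[perm]), equivariant_layer(X)[perm])
# Invariance: the pooled readout is unchanged by the permutation.
assert np.allclose(invariant_readout(X[perm]), invariant_readout(X))
```

Note that neither function hard-codes the number of nodes — the same two scalars (a, b) define the layer for any graph size, which is the property behind the varying-size universality result.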


Proceedings Article
11 Feb 2019
TL;DR: In this paper, the authors extend the principle of equivariance to symmetry transformations to local gauge transformations, which enables the development of a very general class of convolutional neural networks on manifolds that depend only on the intrinsic geometry, including many popular methods from equivariant and geometric deep learning.
Abstract: The principle of equivariance to symmetry transformations enables a theoretically grounded approach to neural network architecture design. Equivariant networks have shown excellent performance and data efficiency on vision and medical imaging problems that exhibit symmetries. Here we show how this principle can be extended beyond global symmetries to local gauge transformations. This enables the development of a very general class of convolutional neural networks on manifolds that depend only on the intrinsic geometry, and which includes many popular methods from equivariant and geometric deep learning. We implement gauge equivariant CNNs for signals defined on the surface of the icosahedron, which provides a reasonable approximation of the sphere. By choosing to work with this very regular manifold, we are able to implement the gauge equivariant convolution using a single conv2d call, making it a highly scalable and practical alternative to Spherical CNNs. Using this method, we demonstrate substantial improvements over previous methods on the task of segmenting omnidirectional images and global climate patterns.

160 citations


Posted Content
TL;DR: The results show that a GNN defined by a single set of parameters can approximate uniformly well a function defined on graphs of varying size.
Abstract: Graph Neural Networks (GNN) come in many flavors, but should always be either invariant (permutation of the nodes of the input graph does not affect the output) or equivariant (permutation of the input permutes the output). In this paper, we consider a specific class of invariant and equivariant networks, for which we prove new universality theorems. More precisely, we consider networks with a single hidden layer, obtained by summing channels formed by applying an equivariant linear operator, a pointwise non-linearity and either an invariant or equivariant linear operator. Recently, Maron et al. (2019) showed that by allowing higher-order tensorization inside the network, universal invariant GNNs can be obtained. As a first contribution, we propose an alternative proof of this result, which relies on the Stone-Weierstrass theorem for algebra of real-valued functions. Our main contribution is then an extension of this result to the equivariant case, which appears in many practical applications but has been less studied from a theoretical point of view. The proof relies on a new generalized Stone-Weierstrass theorem for algebra of equivariant functions, which is of independent interest. Finally, unlike many previous settings that consider a fixed number of nodes, our results show that a GNN defined by a single set of parameters can approximate uniformly well a function defined on graphs of varying size.

67 citations


Proceedings ArticleDOI
01 Oct 2019
TL;DR: A group convolutional approach to multiple view aggregation where convolutions are performed over a discrete subgroup of the rotation group, enabling joint reasoning over all views in an equivariant (instead of invariant) fashion, up to the very last layer.
Abstract: Several popular approaches to 3D vision tasks process multiple views of the input independently with deep neural networks pre-trained on natural images, where view permutation invariance is achieved through a single round of pooling over all views. We argue that this operation discards important information and leads to subpar global descriptors. In this paper, we propose a group convolutional approach to multiple view aggregation where convolutions are performed over a discrete subgroup of the rotation group, thus enabling joint reasoning over all views in an equivariant (instead of invariant) fashion, up to the very last layer. We further develop this idea to operate on smaller discrete homogeneous spaces of the rotation group, where a polar view representation is used to maintain equivariance with only a fraction of the number of input views. We set the new state of the art in several large scale 3D shape retrieval tasks, and show additional applications to panoramic scene classification.

63 citations


Posted Content
TL;DR: The theory of Steerable CNNs yields constraints on the convolution kernels which depend on group representations describing the transformation laws of feature spaces, and it is shown that these constraints for arbitrary group representations can be reduced to constraints under irreducible representations.
Abstract: The big empirical success of group equivariant networks has led in recent years to the sprouting of a great variety of equivariant network architectures. A particular focus has thereby been on rotation and reflection equivariant CNNs for planar images. Here we give a general description of $E(2)$-equivariant convolutions in the framework of Steerable CNNs. The theory of Steerable CNNs thereby yields constraints on the convolution kernels which depend on group representations describing the transformation laws of feature spaces. We show that these constraints for arbitrary group representations can be reduced to constraints under irreducible representations. A general solution of the kernel space constraint is given for arbitrary representations of the Euclidean group $E(2)$ and its subgroups. We implement a wide range of previously proposed and entirely new equivariant network architectures and extensively compare their performances. $E(2)$-steerable convolutions are further shown to yield remarkable gains on CIFAR-10, CIFAR-100 and STL-10 when used as a drop-in replacement for non-equivariant convolutions.

55 citations


Journal ArticleDOI
TL;DR: In this paper, pure two-bubbles are constructed for energy-critical wave equations, that is, solutions which in one time direction approach a superposition of two stationary states, both centered at the origin but asymptotically decoupled in scale.
Abstract: We construct pure two-bubbles for some energy-critical wave equations, that is solutions which in one time direction approach a superposition of two stationary states both centered at the origin, but asymptotically decoupled in scale. Our solution exists globally, with one bubble at a fixed scale and the other concentrating in infinite time, with an error tending to 0 in the energy space. We treat the cases of the power nonlinearity in space dimension 6, the radial Yang-Mills equation and the equivariant wave map equation with equivariance class k > 2. The concentrating speed of the second bubble is exponential for the first two models and a power function in the last case.

54 citations


Posted Content
TL;DR: Equivariant Hamiltonian flows are introduced, a method for learning expressive densities that are invariant with respect to a known Lie algebra of local symmetry transformations while providing an equivariant representation of the data.
Abstract: This paper introduces equivariant hamiltonian flows, a method for learning expressive densities that are invariant with respect to a known Lie-algebra of local symmetry transformations while providing an equivariant representation of the data. We provide proof of principle demonstrations of how such flows can be learnt, as well as how the addition of symmetry invariance constraints can improve data efficiency and generalisation. Finally, we make connections to disentangled representation learning and show how this work relates to a recently proposed definition.

53 citations


Posted Content
TL;DR: It is concluded that although the invariant/equivariant models have exponentially fewer free parameters than the usual models, they can still approximate invariant/equivariant functions to arbitrary accuracy.
Abstract: In this paper, we develop a theory about the relationship between $G$-invariant/equivariant functions and deep neural networks for a finite group $G$. In particular, for a given $G$-invariant/equivariant function, we construct a universal approximator by a deep neural network whose layers are equipped with $G$-actions and whose affine transformations are $G$-equivariant/invariant. Using representation theory, we can show that this approximator has exponentially fewer free parameters than usual models.

Posted Content
TL;DR: It is demonstrated that a BG that is equivariant with respect to rotations and particle permutations can generalize to sampling nontrivially new configurations where a nonequivariant BG cannot.
Abstract: Flows are exact-likelihood generative neural networks that transform samples from a simple prior distribution to the samples of the probability distribution of interest. Boltzmann Generators (BG) combine flows and statistical mechanics to sample equilibrium states of strongly interacting many-body systems such as proteins with 1000 atoms. In order to scale and generalize these results, it is essential that the natural symmetries of the probability density - in physics defined by the invariances of the energy function - are built into the flow. Here we develop theoretical tools for constructing such equivariant flows and demonstrate that a BG that is equivariant with respect to rotations and particle permutations can generalize to sampling nontrivially new configurations where a nonequivariant BG cannot.
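A hypothetical one-line example of the kind of equivariant flow layer meant above (a sketch, not the paper's construction): a radial map x ↦ s(‖x‖)·x commutes with every rotation, because the scaling factor depends only on the rotation-invariant norm.

```python
import numpy as np

def radial_flow(x):
    """A simple invertible radial map: scale x by a positive function of
    its norm. Since ||R x|| = ||x|| for any rotation R, the layer
    satisfies f(R x) = R f(x)."""
    r = np.linalg.norm(x)
    return (1.0 + np.tanh(r)) * x

theta = 0.9
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # a 2D rotation

x = np.array([1.3, -0.4])
# Rotation equivariance: rotating then flowing equals flowing then rotating.
assert np.allclose(radial_flow(R @ x), R @ radial_flow(x))
```

Building a full Boltzmann Generator additionally requires tractable Jacobian determinants and permutation equivariance over particles; the point here is only the commutation property that such layers must satisfy.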

Posted Content
TL;DR: In this article, the authors introduce a moduli space P(G,S) parametrizing G-local systems on S with some boundary data, and prove that it carries a cluster Poisson structure, equivariant under the action of the cluster modular group M(G,S), which contains the mapping class group of S, the group of outer automorphisms of G, and the product of Weyl / braid groups over punctures / boundary components.
Abstract: Let G be a split semi-simple adjoint group, and S an oriented surface with punctures and special boundary points. We introduce a moduli space P(G,S) parametrizing G-local systems on S with some boundary data, and prove that it carries a cluster Poisson structure, equivariant under the action of the cluster modular group M(G,S), containing the mapping class group of S, the group of outer automorphisms of G, and the product of Weyl / braid groups over punctures / boundary components. We prove that the dual moduli space A(G,S) carries a M(G,S)-equivariant cluster structure, and the pair (A(G,S), P(G,S)) is a cluster ensemble. These results generalize the works of V. Fock & the first author, and of I. Le. We quantize cluster Poisson varieties X for any Planck constant h s.t. h>0 or |h|=1. First, we define a *-algebra structure on the Langlands modular double A(h; X) of the algebra of functions on X. We construct a principal series of representations of the *-algebra A(h; X), equivariant under a unitary projective representation of the cluster modular group M(X). This extends works of V. Fock and the first author when h>0. Combining this, we get a M(G,S)-equivariant quantization of the moduli space P(G,S), given by the *-algebra A(h; P(G,S)) and its principal series representations. We construct realizations of the principal series *-representations. In particular, when S is a punctured disc with two special points, we get a principal series *-representation of the Langlands modular double of the quantum group Uq(g). We conjecture that there is a nondegenerate pairing between the local system of coinvariants of oscillatory representations of the W-algebra and the one provided by the projective representation of the mapping class group of S.

Journal ArticleDOI
TL;DR: In this article, a novel approach towards the Weinstein splitting theorem was developed, which leads to various generalizations, including their equivariant versions as well as their formulations in new contexts.
Abstract: According to the Weinstein splitting theorem, any Poisson manifold is locally, near any given point, a product of a symplectic manifold with another Poisson manifold whose Poisson structure vanishes at the point. Similar splitting results are known e.g. for Lie algebroids, Dirac structures and generalized complex structures. In this paper, we develop a novel approach towards these results that leads to various generalizations, including their equivariant versions as well as their formulations in new contexts.

Posted Content
TL;DR: Motivic Chern classes are elements in the K-theory of an algebraic variety $X$ depending on an extra parameter $y$; they are determined by functoriality and a normalization property for smooth $X$.
Abstract: Motivic Chern classes are elements in the K-theory of an algebraic variety $X$ depending on an extra parameter $y$. They are determined by functoriality and a normalization property for smooth $X$. In this paper we calculate the motivic Chern classes of Schubert cells in the (equivariant) K-theory of flag manifolds $G/B$. The calculation is recursive starting from the class of a point, and using the Demazure-Lusztig operators in the Hecke algebra of the Weyl group of $G$. The resulting classes are conjectured to satisfy a positivity property. We use the recursions to give a new proof that they are equivalent to certain K-theoretic stable envelopes recently defined by Okounkov and collaborators, thus recovering results of Fehér, Rimányi and Weber. The Hecke algebra action on the K-theory of the dual flag manifold matches the Hecke action on the Iwahori invariants of the principal series representation associated to an unramified character for a group over a nonarchimedean local field. This gives a correspondence identifying the Poincaré dual version of the motivic Chern class to the standard basis in the Iwahori invariants, and the fixed point basis to Casselman's basis. We apply this to prove two conjectures of Bump, Nakasuji and Naruse concerning factorizations, and holomorphy properties, of the coefficients in the transition matrix between the standard and the Casselman's basis.

Posted Content
TL;DR: In this paper, the authors consider group invariance from the perspective of probabilistic symmetry, and obtain generative functional representations of probability distributions that are invariant or equivariant under the action of a compact group.
Abstract: Treating neural network inputs and outputs as random variables, we characterize the structure of neural networks that can be used to model data that are invariant or equivariant under the action of a compact group. Much recent research has been devoted to encoding invariance under symmetry transformations into neural network architectures, in an effort to improve the performance of deep neural networks in data-scarce, non-i.i.d., or unsupervised settings. By considering group invariance from the perspective of probabilistic symmetry, we establish a link between functional and probabilistic symmetry, and obtain generative functional representations of probability distributions that are invariant or equivariant under the action of a compact group. Our representations completely characterize the structure of neural networks that can be used to model such distributions and yield a general program for constructing invariant stochastic or deterministic neural networks. We demonstrate that examples from the recent literature are special cases, and develop the details of the general program for exchangeable sequences and arrays.

Book ChapterDOI
TL;DR: In this paper, the shifted quantum affine algebras are introduced, which map homomorphically into the quantized K-theoretic Coulomb branches of the SUSY quiver gauge theories.
Abstract: We introduce the shifted quantum affine algebras. They map homomorphically into the quantized K-theoretic Coulomb branches of \(3d\ {\mathcal N}=4\) SUSY quiver gauge theories. In type A, they are endowed with a coproduct, and they act on the equivariant K-theory of parabolic Laumon spaces. In type A1, they are closely related to the type A open relativistic quantum Toda system.

Journal ArticleDOI
TL;DR: In this article, the authors prove that the cotangent bundle of the full flag variety is 3d mirror self-symmetric, which in particular leads to nontrivial theta-function identities.
Abstract: Let $X$ be a holomorphic symplectic variety with a torus $\mathsf{T}$ action and a finite fixed point set of cardinality $k$. We assume that elliptic stable envelope exists for $X$. Let $A_{I,J}= \operatorname{Stab}(J)|_{I}$ be the $k\times k$ matrix of restrictions of the elliptic stable envelopes of $X$ to the fixed points. The entries of this matrix are theta-functions of two groups of variables: the Kahler parameters and equivariant parameters of $X$. We say that two such varieties $X$ and $X'$ are related by the 3d mirror symmetry if the fixed point sets of $X$ and $X'$ have the same cardinality and can be identified so that the restriction matrix of $X$ becomes equal to the restriction matrix of $X'$ after transposition and interchanging the equivariant and Kahler parameters of $X$, respectively, with the Kahler and equivariant parameters of $X'$. The first examples of pairs of 3d symmetric varieties were constructed in [Rimanyi R., Smirnov A., Varchenko A., Zhou Z., arXiv:1902.03677], where the cotangent bundle $T^*\operatorname{Gr}(k,n)$ to a Grassmannian is proved to be a 3d mirror to a Nakajima quiver variety of $A_{n-1}$-type. In this paper we prove that the cotangent bundle of the full flag variety is 3d mirror self-symmetric. That statement in particular leads to nontrivial theta-function identities.

Journal ArticleDOI
TL;DR: The proposed regularization is intended as a conceptual, theoretical, and computational proof of concept for symmetry-adapted representation learning, where the learned data representations are equivariant or invariant to transformations, without explicit knowledge of the underlying symmetries in the data.

Posted Content
TL;DR: In this paper, the authors conjecture a DT/PT correspondence for Nekrasov genera for toric Calabi-Yau 4-folds and verify it in several cases using a vertex formalism; suitable limits of the equivariant parameters recover the cohomological DT/PT correspondence for toric Calabi-Yau 4-folds and the K-theoretic DT/PT correspondence for toric 3-folds conjectured by Nekrasov-Okounkov.
Abstract: Recently, Nekrasov discovered a new "genus" for Hilbert schemes of points on $\mathbb{C}^4$. We conjecture a DT/PT correspondence for Nekrasov genera for toric Calabi-Yau 4-folds. We verify our conjecture in several cases using a vertex formalism. Taking a certain limit of the equivariant parameters, we recover the cohomological DT/PT correspondence for toric Calabi-Yau 4-folds recently conjectured by the first two authors. Another limit gives a dimensional reduction to the $K$-theoretic DT/PT correspondence for toric 3-folds conjectured by Nekrasov-Okounkov. As an application of our techniques, we find a conjectural formula for the generating series of $K$-theoretic stable pair invariants of the local resolved conifold. Upon dimensional reduction to the resolved conifold, we recover a formula which was recently proved by Kononov-Okounkov-Osinenko.

Journal ArticleDOI
TL;DR: In this paper, the authors studied the Littlewood-Richardson coefficients of double Grothendieck polynomials indexed by Grassmannian permutations, which are the structure constants of the equivariant K-theory ring of Grassmannians.
Abstract: We study the Littlewood-Richardson coefficients of double Grothendieck polynomials indexed by Grassmannian permutations. Geometrically, these are the structure constants of the equivariant K-theory ring of Grassmannians. Representing the double Grothendieck polynomials as partition functions of an integrable vertex model, we use its Yang-Baxter equation to derive a series of product rules for the former polynomials and their duals. The Littlewood-Richardson coefficients that arise can all be expressed in terms of puzzles without gashes, which generalize previous puzzles obtained by Knutson-Tao and Vakil.

Posted Content
TL;DR: In this paper, a pair of quiver varieties (X; X') related by 3D mirror symmetry are considered, where X =T*Gr(k,n) is the cotangent bundle of the Grassmannian of k-planes of n-dimensional space.
Abstract: We consider a pair of quiver varieties (X;X') related by 3d mirror symmetry, where X =T*Gr(k,n) is the cotangent bundle of the Grassmannian of k-planes of n-dimensional space. We give formulas for the elliptic stable envelopes on both sides. We show an existence of an equivariant elliptic cohomology class on X $\times$ X' (the Mother function) whose restrictions to X and X' are the elliptic stable envelopes of those varieties. This implies, that the restriction matrices of the elliptic stable envelopes for X and X' are equal after transposition and identification of the equivariant parameters on one side with the Kahler parameters on the dual side.

Journal ArticleDOI
TL;DR: In this second companion paper to arXiv:1601.03586, the authors study ring objects in the equivariant derived Satake category and the Coulomb branches associated with star-shaped quivers, which are expected to be the conjectural Higgs branches of $3d$ Sicilian theories in type $A$.
Abstract: This is the second companion paper of arXiv:1601.03586. We consider the morphism from the variety of triples introduced in arXiv:1601.03586 to the affine Grassmannian. The direct image of the dualizing complex is a ring object in the equivariant derived category on the affine Grassmannian (equivariant derived Satake category). We show that various constructions in arXiv:1601.03586 work for an arbitrary commutative ring object. The second purpose of this paper is to study Coulomb branches associated with star shaped quivers, which are expected to be conjectural Higgs branches of $3d$ Sicilian theories in type $A$ by arXiv:1007.0992.

Proceedings Article
24 May 2019
TL;DR: Equivariant Transformers (ETs) are differentiable image-to-image mappings that incorporate functions equivariant by construction with respect to pre-defined continuous transformation groups, and can be flexibly composed to improve model robustness towards more complicated transformation groups in several parameters.
Abstract: How can prior knowledge on the transformation invariances of a domain be incorporated into the architecture of a neural network? We propose Equivariant Transformers (ETs), a family of differentiable image-to-image mappings that improve the robustness of models towards pre-defined continuous transformation groups. Through the use of specially-derived canonical coordinate systems, ETs incorporate functions that are equivariant by construction with respect to these transformations. We show empirically that ETs can be flexibly composed to improve model robustness towards more complicated transformation groups in several parameters. On a real-world image classification task, ETs improve the sample efficiency of ResNet classifiers, achieving relative improvements in error rate of up to 15% in the limited data regime while increasing model parameter count by less than 1%.
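The canonical-coordinate idea behind ETs can be illustrated with a generic sketch (assumptions: a toy planar function and a hand-built polar grid, not the paper's implementation): sampling a function on a polar grid turns rotation into a cyclic shift of the angular axis, where ordinary translation-equivariant operations can handle it.

```python
import numpy as np

N_THETA, N_R = 16, 8
thetas = 2 * np.pi * np.arange(N_THETA) / N_THETA
radii = np.linspace(0.5, 2.0, N_R)

def f(x, y):
    """An arbitrary smooth test function on the plane."""
    return np.cos(3 * np.arctan2(y, x)) * np.exp(-np.hypot(x, y))

def polar_sample(g):
    """Sample a planar function on the polar grid, giving F[theta, r]."""
    T, Rr = np.meshgrid(thetas, radii, indexing="ij")
    return g(Rr * np.cos(T), Rr * np.sin(T))

k = 3                                     # rotate by k angular grid steps
alpha = 2 * np.pi * k / N_THETA
c, s = np.cos(alpha), np.sin(alpha)
f_rot = lambda x, y: f(c * x + s * y, -s * x + c * y)  # f composed with R^{-1}

# In canonical (polar) coordinates, rotation is a cyclic shift in theta,
# so it commutes with any translation-equivariant map along that axis.
assert np.allclose(polar_sample(f_rot), np.roll(polar_sample(f), k, axis=0))
```

Log-polar coordinates extend the same trick to the rotation-scale group, with scaling becoming a shift along the (log-)radial axis.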

Posted Content
TL;DR: A 3D capsule module for processing point clouds that is equivariant to 3D rotations and translations, as well as invariant to permutations of the input points, and builds a capsule network that disentangles geometry from pose, paving the way for more informative descriptors and a structured latent space.
Abstract: We present a 3D capsule module for processing point clouds that is equivariant to 3D rotations and translations, as well as invariant to permutations of the input points. The operator receives a sparse set of local reference frames, computed from an input point cloud and establishes end-to-end transformation equivariance through a novel dynamic routing procedure on quaternions. Further, we theoretically connect dynamic routing between capsules to the well-known Weiszfeld algorithm, a scheme for solving \emph{iterative re-weighted least squares} (IRLS) problems with provable convergence properties. It is shown that such group dynamic routing can be interpreted as robust IRLS rotation averaging on capsule votes, where information is routed based on the final inlier scores. Based on our operator, we build a capsule network that disentangles geometry from pose, paving the way for more informative descriptors and a structured latent space. Our architecture allows joint object classification and orientation estimation without explicit supervision of rotations. We validate our algorithm empirically on common benchmark datasets.
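The Weiszfeld algorithm referenced above is easy to state: repeatedly replace the current estimate by the mean of the points weighted by inverse distance to it, which converges to the geometric median. A generic sketch follows, independent of the capsule-routing use in the paper:

```python
import numpy as np

def weiszfeld(points, iters=100, eps=1e-9):
    """Geometric median by Weiszfeld's IRLS iteration: each step is a
    weighted mean with weights 1/distance to the current estimate."""
    y = points.mean(axis=0)              # start from the centroid
    for _ in range(iters):
        d = np.linalg.norm(points - y, axis=1)
        w = 1.0 / np.maximum(d, eps)     # guard against a zero distance
        y = (w[:, None] * points).sum(axis=0) / w.sum()
    return y

pts = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
median = weiszfeld(pts)

def cost(y):
    """Sum of distances from y to the data points (the IRLS objective)."""
    return np.linalg.norm(pts - y, axis=1).sum()

# The geometric median minimises the sum of distances, so it should do
# at least as well as the centroid it started from; the single outlier
# at (10, 0) pulls the centroid far more than it pulls the median.
assert cost(median) <= cost(pts.mean(axis=0)) + 1e-6
```

The robustness to the outlier is exactly why the paper interprets routing-by-agreement as IRLS averaging: votes far from the consensus receive small weights.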

Posted Content
TL;DR: Equivariant Transformers (ETs), a family of differentiable image-to-image mappings that improve the robustness of models towards pre-defined continuous transformation groups in several parameters are proposed.
Abstract: How can prior knowledge on the transformation invariances of a domain be incorporated into the architecture of a neural network? We propose Equivariant Transformers (ETs), a family of differentiable image-to-image mappings that improve the robustness of models towards pre-defined continuous transformation groups. Through the use of specially-derived canonical coordinate systems, ETs incorporate functions that are equivariant by construction with respect to these transformations. We show empirically that ETs can be flexibly composed to improve model robustness towards more complicated transformation groups in several parameters. On a real-world image classification task, ETs improve the sample efficiency of ResNet classifiers, achieving relative improvements in error rate of up to 15% in the limited data regime while increasing model parameter count by less than 1%.

Posted Content
TL;DR: The authors proposed a group convolutional approach to multiple view aggregation where convolutions are performed over a discrete subgroup of the rotation group, enabling joint reasoning over all views in an equivariant (instead of invariant) fashion up to the very last layer.
Abstract: Several popular approaches to 3D vision tasks process multiple views of the input independently with deep neural networks pre-trained on natural images, achieving view permutation invariance through a single round of pooling over all views. We argue that this operation discards important information and leads to subpar global descriptors. In this paper, we propose a group convolutional approach to multiple view aggregation where convolutions are performed over a discrete subgroup of the rotation group, thus enabling joint reasoning over all views in an equivariant (instead of invariant) fashion, up to the very last layer. We further develop this idea to operate on smaller discrete homogeneous spaces of the rotation group, where a polar view representation is used to maintain equivariance with only a fraction of the number of input views. We set the new state of the art in several large scale 3D shape retrieval tasks, and show additional applications to panoramic scene classification.

Journal ArticleDOI
Abstract: We study the category of G(O)-equivariant perverse coherent sheaves on the affine Grassmannian of G. This coherent Satake category is not semisimple and its convolution product is not symmetric, in contrast with the usual constructible Satake category. Instead, we use the Beilinson-Drinfeld Grassmannian to construct renormalized r-matrices. These are canonical nonzero maps between convolution products which satisfy axioms weaker than those of a braiding. We also show that the coherent Satake category is rigid, and that together these results strongly constrain its convolution structure. In particular, they can be used to deduce the existence of (categorified) cluster structures. We study the case G = GL_n in detail and prove that the loop rotation equivariant coherent Satake category of GL_n is a monoidal categorification of an explicit quantum cluster algebra. More generally, we construct renormalized r-matrices in any monoidal category whose product is compatible with an auxiliary chiral category, and explain how the appearance of cluster algebras in 4d N=2 field theory may be understood from this point of view.

Book ChapterDOI
10 Apr 2019
TL;DR: This work incorporates rotational symmetry into an encoder-decoder based network by utilising group equivariant convolutions, specifically using the symmetry group of rotations by multiples of 90°, to achieve state-of-the-art performance on the GlaS challenge dataset.
Abstract: Analysis of the shape of glands and their lumen in digitised images of Haematoxylin & Eosin stained colon histology slides can provide insight into the degree of malignancy. Segmenting each glandular component is an essential prerequisite step for subsequent automatic morphological analysis. Current automated segmentation approaches typically do not take into account the inherent rotational symmetry within histology images. We incorporate this rotational symmetry into an encoder-decoder based network by utilising group equivariant convolutions, specifically using the symmetry group of rotations by multiples of 90°. Our rotation equivariant network splits into two separate branches after the final up-sampling operation, where the output of a given branch achieves either gland or lumen segmentation. In addition, at the output of the gland branch, we use a multi-class strategy to assist with the separation of touching instances. We show that our proposed approach achieves the state-of-the-art performance on the GlaS challenge dataset.

Journal ArticleDOI
TL;DR: In this article, a Tukey κ-trimmed region is defined as the set of all points that have at least Tukey depth κ w.r.t. the data; such regions are visual, affine equivariant, and robust.
Abstract: Given data in Rp , a Tukey κ-trimmed region is the set of all points that have at least Tukey depth κ w.r.t. the data. As they are visual, affine equivariant and robust, Tukey regions are useful to...