
Equivariant map

About: Equivariant map is a research topic. Over the lifetime, 9205 publications have been published within this topic receiving 137115 citations.
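
The page takes the term for granted; for reference, the standard definition (supplied here, not taken from the page): a map f : X → Y between two sets acted on by a group G is equivariant when it commutes with the group action.

```latex
% Equivariance: applying f and then acting by g equals acting by g and then applying f.
f(g \cdot x) = g \cdot f(x) \qquad \text{for all } g \in G,\ x \in X.
% Invariance is the special case where G acts trivially on Y: f(g \cdot x) = f(x).
```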


Papers
Proceedings Article
01 Jan 2020
TL;DR: The SE(3)-Transformer, introduced in this paper, is a variant of the self-attention module for 3D point clouds and graphs that is equivariant under continuous 3D roto-translations.
Abstract: We introduce the SE(3)-Transformer, a variant of the self-attention module for 3D point clouds and graphs, which is equivariant under continuous 3D roto-translations. Equivariance is important to ensure stable and predictable performance in the presence of nuisance transformations of the input data. A positive corollary of equivariance is increased weight-tying within the model. The SE(3)-Transformer leverages the benefits of self-attention to operate on large point clouds and graphs with a varying number of points, while guaranteeing SE(3)-equivariance for robustness. We evaluate our model on a toy N-body particle simulation dataset, showcasing the robustness of the predictions under rotations of the input. We further achieve competitive performance on two real-world datasets, ScanObjectNN and QM9. In all cases, our model outperforms a strong, non-equivariant attention baseline and an equivariant model without attention.

295 citations
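
The paper's architecture is not reproduced here, but the property it guarantees is easy to test numerically. Below is a minimal sketch of such an equivariance check, not the SE(3)-Transformer itself: a toy layer (the function `toy_equivariant_layer` is a hypothetical stand-in) that weights relative positions by a radial function of distance, which makes it exactly SE(3)-equivariant.

```python
# Numerical check of SE(3) equivariance, in the spirit of the property the
# SE(3)-Transformer guarantees. This is NOT the paper's architecture: each
# point simply emits the sum of its relative positions to all other points,
# weighted by a radial (hence rotation-invariant) function of the distance.
import numpy as np

def toy_equivariant_layer(points):
    """points: (N, 3) array -> (N, 3) array of output vectors."""
    rel = points[None, :, :] - points[:, None, :]        # (N, N, 3): x_j - x_i
    dist = np.linalg.norm(rel, axis=-1, keepdims=True)   # (N, N, 1)
    weights = np.exp(-dist**2)                           # radial, so invariant
    return (weights * rel).sum(axis=1)                   # (N, 3)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))

# Random rotation via QR decomposition (sign fixed so det(R) = +1).
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R = Q * np.sign(np.linalg.det(Q))
t = rng.normal(size=3)

out = toy_equivariant_layer(X)
out_transformed = toy_equivariant_layer(X @ R.T + t)

# Vector outputs rotate with the input; the translation drops out entirely.
assert np.allclose(out_transformed, out @ R.T, atol=1e-10)
print("SE(3) equivariance check passed")
```

This rotate-the-input, compare-the-output test is the same probe the paper's N-body experiment applies to learned predictions.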

Posted Content
TL;DR: Twisted complex K-theory, as developed by the authors, can be defined for a space X equipped with a bundle of complex projective spaces or, equivalently, with a bundle of C*-algebras.
Abstract: Twisted complex K-theory can be defined for a space X equipped with a bundle of complex projective spaces, or, equivalently, with a bundle of C*-algebras. Up to equivalence, the twisting corresponds to an element of H^3(X; Z). We give a systematic account of the definition and basic properties of the twisted theory, emphasizing some points where it behaves differently from ordinary K-theory. (We omit, however, its relations to classical cohomology, which we shall treat in a sequel.) We develop an equivariant version of the theory for the action of a compact Lie group, proving that the twistings are then classified by the equivariant cohomology group H^3_G(X; Z). We also consider some basic examples of twisted K-theory classes, related to those appearing in the recent work of Freed-Hopkins-Teleman.

295 citations
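
Rendered in standard notation, the two classification statements from the abstract are:

```latex
% Twistings of complex K-theory on X, up to equivalence:
\{\text{twistings of } K(X)\}/\simeq \;\cong\; H^3(X; \mathbb{Z})
% and, for the action of a compact Lie group G, the equivariant analogue:
\{\text{twistings of } K_G(X)\}/\simeq \;\cong\; H^3_G(X; \mathbb{Z})
```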

Proceedings Article
27 Sep 2018
TL;DR: This paper provides a characterization of all permutation invariant and equivariant linear layers for (hyper-)graph data, and shows that their dimensions, in the case of edge-value graph data, are 2 and 15, respectively.
Abstract: Invariant and equivariant networks have been successfully used for learning images, sets, point clouds, and graphs. A basic challenge in developing such networks is finding the maximal collection of invariant and equivariant linear layers. Although this question is answered for the first three examples (for popular transformations, at least), a full characterization of invariant and equivariant linear layers for graphs is not known. In this paper we provide a characterization of all permutation invariant and equivariant linear layers for (hyper-)graph data, and show that their dimensions, in the case of edge-value graph data, are 2 and 15, respectively. More generally, for graph data defined on k-tuples of nodes, the dimensions are the k-th and 2k-th Bell numbers. Orthogonal bases for the layers are computed, including a generalization to multi-graph data. The constant number of basis elements and their characteristics allow applying the networks successfully to graphs of different sizes. From the theoretical point of view, our results generalize and unify recent advances in equivariant deep learning. In particular, we show that our model is capable of approximating any message passing neural network. Applying these new linear layers in a simple deep neural network framework is shown to achieve results comparable to the state of the art and better expressivity than previous invariant and equivariant bases.

294 citations
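
The dimension counts above are quick to sanity-check: the abstract states that the invariant and equivariant layer spaces for k-tuple data have dimensions given by the k-th and 2k-th Bell numbers, and edge-value data corresponds to k = 2. A short sketch (the helper `bell` is my own, not from the paper):

```python
# Verify the dimensions quoted in the abstract: Bell(2) = 2 invariant and
# Bell(4) = 15 equivariant basis elements for edge-value graph data (k = 2).
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def bell(n):
    """n-th Bell number via the recurrence B(n) = sum_k C(n-1, k) * B(k)."""
    if n == 0:
        return 1
    return sum(comb(n - 1, k) * bell(k) for k in range(n))

k = 2  # edge-value (adjacency-matrix-style) graph data
print(bell(k), bell(2 * k))  # prints: 2 15
```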

Proceedings Article
03 Dec 2018
TL;DR: The experimental results confirm the effectiveness of 3D Steerable CNNs for the problems of amino acid propensity prediction and protein structure classification, both of which have inherent SE(3) symmetry.
Abstract: We present a convolutional network that is equivariant to rigid body motions. The model uses scalar-, vector-, and tensor fields over 3D Euclidean space to represent data, and equivariant convolutions to map between such representations. These SE(3)-equivariant convolutions utilize kernels which are parameterized as a linear combination of a complete steerable kernel basis, which is derived analytically in this paper. We prove that equivariant convolutions are the most general equivariant linear maps between fields over ℝ³. Our experimental results confirm the effectiveness of 3D Steerable CNNs for the problems of amino acid propensity prediction and protein structure classification, both of which have inherent SE(3) symmetry.

292 citations
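
The analytically derived steerable kernel basis is the paper's core contribution and is not reproduced here, but the simplest member of the family is easy to check numerically: convolving a scalar field with an isotropic (purely radial) kernel commutes with rotations. The sketch below (my own construction, using SciPy) tests this for 90-degree grid rotations, where no interpolation error enters.

```python
# Sanity check of the simplest rotation-equivariant convolution: an
# isotropic 3x3x3 kernel on a periodic 3D scalar field. Rotating the field
# and then convolving matches convolving and then rotating.
import numpy as np
from scipy.ndimage import convolve

# Isotropic kernel: the value depends only on the distance from the center.
ax = np.arange(-1, 2)
xx, yy, zz = np.meshgrid(ax, ax, ax, indexing="ij")
kernel = np.exp(-(xx**2 + yy**2 + zz**2))

field = np.random.default_rng(0).normal(size=(8, 8, 8))

def rot90_z(f):
    """Rotate a 3D scalar field by 90 degrees about the z-axis."""
    return np.rot90(f, k=1, axes=(0, 1))

lhs = convolve(rot90_z(field), kernel, mode="wrap")   # rotate, then convolve
rhs = rot90_z(convolve(field, kernel, mode="wrap"))   # convolve, then rotate
assert np.allclose(lhs, rhs, atol=1e-12)
print("isotropic convolution commutes with 90-degree rotations")
```

The paper's steerable basis generalizes this idea from scalar fields and isotropic kernels to vector and tensor fields, where the kernel must transform in a prescribed way rather than stay fixed.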


Network Information
Related Topics (5)
- Cohomology: 21.5K papers, 389.8K citations, 93% related
- Manifold: 18.7K papers, 362.8K citations, 93% related
- Conjecture: 24.3K papers, 366K citations, 91% related
- Lie group: 18.3K papers, 381K citations, 91% related
- Lie algebra: 20.7K papers, 347.3K citations, 91% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2024    1
2023    463
2022    888
2021    630
2020    658
2019    526