
Showing papers on "Equivariant map published in 2020"


Proceedings Article
01 Jan 2020
TL;DR: The SE(3)-Transformer as mentioned in this paper is a variant of the self-attention module for 3D point clouds and graphs, which is equivariant under continuous 3D roto-translations.
Abstract: We introduce the SE(3)-Transformer, a variant of the self-attention module for 3D point clouds and graphs, which is equivariant under continuous 3D roto-translations. Equivariance is important to ensure stable and predictable performance in the presence of nuisance transformations of the data input. A positive corollary of equivariance is increased weight-tying within the model. The SE(3)-Transformer leverages the benefits of self-attention to operate on large point clouds and graphs with varying number of points, while guaranteeing SE(3)-equivariance for robustness. We evaluate our model on a toy N-body particle simulation dataset, showcasing the robustness of the predictions under rotations of the input. We further achieve competitive performance on two real-world datasets, ScanObjectNN and QM9. In all cases, our model outperforms a strong, non-equivariant attention baseline and an equivariant model without attention.

295 citations
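The roto-translation equivariance described above can be checked numerically for any candidate layer. Below is a minimal NumPy sketch (not the paper's code) of such a check on a toy per-point vector feature built only from relative displacements, which makes it translation-invariant and rotation-equivariant by construction; all names and values are illustrative.

import numpy as np

def toy_equivariant_features(points):
    """Per-point vector features built only from relative displacements,
    hence translation-invariant and rotation-equivariant."""
    diffs = points[None, :, :] - points[:, None, :]        # diffs[i, j] = x_j - x_i
    dists = np.linalg.norm(diffs, axis=-1, keepdims=True)
    weights = np.exp(-dists ** 2)                          # isotropic radial weights
    return (weights * diffs).sum(axis=1)                   # one 3-vector per point

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))                                # toy point cloud

Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))               # random rotation ...
R = Q * np.sign(np.linalg.det(Q))                          # ... with det(R) = +1
t = rng.normal(size=3)                                     # random translation

# Equivariance check: transforming the input co-rotates the output features.
print(np.allclose(toy_equivariant_features(X @ R.T + t),
                  toy_equivariant_features(X) @ R.T))      # True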


Posted Content
TL;DR: A general method to construct a convolutional layer that is equivariant to transformations from any specified Lie group with a surjective exponential map is proposed, enabling rapid prototyping and exact conservation of linear and angular momentum.
Abstract: The translation equivariance of convolutional layers enables convolutional neural networks to generalize well on image problems. While translation equivariance provides a powerful inductive bias for images, we often additionally desire equivariance to other transformations, such as rotations, especially for non-image data. We propose a general method to construct a convolutional layer that is equivariant to transformations from any specified Lie group with a surjective exponential map. Incorporating equivariance to a new group requires implementing only the group exponential and logarithm maps, enabling rapid prototyping. Showcasing the simplicity and generality of our method, we apply the same model architecture to images, ball-and-stick molecular data, and Hamiltonian dynamical systems. For Hamiltonian systems, the equivariance of our models is especially impactful, leading to exact conservation of linear and angular momentum.

162 citations
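The abstract above reduces support for a new group to implementing its exponential and logarithm maps. A hedged toy sketch of what those two ingredients look like for the rotation group SO(2), independent of the paper's implementation:

import numpy as np

def so2_exp(theta):
    """Lie algebra element (angle) -> group element (2x2 rotation matrix)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def so2_log(R):
    """Group element -> Lie algebra element (inverse of so2_exp)."""
    return np.arctan2(R[1, 0], R[0, 0])

theta = 0.7
R = so2_exp(theta)
print(np.isclose(so2_log(R), theta))          # True
print(np.allclose(so2_exp(so2_log(R)), R))    # True: exp and log are mutually inverse here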


Journal ArticleDOI
TL;DR: This work defines a class of machine-learned flow-based sampling algorithms for lattice gauge theories that are gauge invariant by construction and demonstrates the application of this framework to U(1) gauge theory in two spacetime dimensions.
Abstract: We define a class of machine-learned flow-based sampling algorithms for lattice gauge theories that are gauge invariant by construction. We demonstrate the application of this framework to U(1) gauge theory in two spacetime dimensions, and find that, at small bare coupling, the approach is orders of magnitude more efficient at sampling topological quantities than more traditional sampling procedures such as hybrid Monte Carlo and heat bath.

150 citations
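The gauge invariance exploited above can be made concrete on a toy 2D U(1) lattice: the plaquette built from link variables is unchanged by an arbitrary local gauge transformation. A small NumPy sketch, independent of the paper's code; the lattice size and sign conventions are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)
L = 4  # toy lattice size

# Link variables U_mu(x) = exp(i * angle), one per site and direction.
links = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(L, L, 2)))
# Local gauge transformation g(x) = exp(i * phi(x)), one per site.
g = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(L, L)))

def plaquette(U, x, y):
    """P(x) = U_0(x) U_1(x+e0) U_0(x+e1)^* U_1(x)^*, periodic boundary."""
    xp, yp = (x + 1) % L, (y + 1) % L
    return U[x, y, 0] * U[xp, y, 1] * np.conj(U[x, yp, 0]) * np.conj(U[x, y, 1])

def gauge_transform(U, g):
    """U_mu(x) -> g(x) U_mu(x) g(x + e_mu)^*"""
    V = U.copy()
    V[:, :, 0] = g * U[:, :, 0] * np.conj(np.roll(g, -1, axis=0))
    V[:, :, 1] = g * U[:, :, 1] * np.conj(np.roll(g, -1, axis=1))
    return V

before = plaquette(links, 2, 1)
after = plaquette(gauge_transform(links, g), 2, 1)
print(np.isclose(before, after))  # True: the plaquette is gauge invariant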


Proceedings Article
30 Apr 2020
TL;DR: This paper hypothesizes that language compositionality is a form of group-equivariance, and proposes a set of tools for constructing equivariant sequence-to-sequence models that are able to achieve the type of compositional generalization required in human language understanding.
Abstract: Humans understand novel sentences by composing meanings and roles of core language components. In contrast, neural network models for natural language modeling fail when such compositional generalization is required. The main contribution of this paper is to hypothesize that language compositionality is a form of group-equivariance. Based on this hypothesis, we propose a set of tools for constructing equivariant sequence-to-sequence models. Throughout a variety of experiments on the SCAN tasks, we analyze the behavior of existing models under the lens of equivariance, and demonstrate that our equivariant architecture is able to achieve the type of compositional generalization required in human language understanding.

94 citations


Posted Content
TL;DR: This work provides a theoretical sufficient criterion showing that the distribution generated by equivariant normalizing flows is invariant with respect to these symmetries by design, and proposes building blocks for flows that preserve the symmetries usually found in physical and chemical many-body particle systems.
Abstract: Normalizing flows are exact-likelihood generative neural networks which approximately transform samples from a simple prior distribution to samples of the probability distribution of interest. Recent work showed that such generative models can be utilized in statistical mechanics to sample equilibrium states of many-body systems in physics and chemistry. To scale and generalize these results, it is essential that the natural symmetries in the probability density -- in physics defined by the invariances of the target potential -- are built into the flow. We provide a theoretical sufficient criterion showing that the distribution generated by equivariant normalizing flows is invariant with respect to these symmetries by design. Furthermore, we propose building blocks for flows that preserve the symmetries usually found in physical and chemical many-body particle systems. Using benchmark systems motivated by molecular physics, we demonstrate that these symmetry-preserving flows can provide better generalization capabilities and sampling efficiency.

90 citations
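A sketch of the standard change-of-variables argument behind such a sufficient criterion, written here under the simplifying assumption that the group G acts by volume-preserving linear maps (as rotations and permutations do); the notation is illustrative and not necessarily the paper's.

\begin{align*}
p_X(g \cdot x)
  &= p_Z\!\left(f^{-1}(g \cdot x)\right) \left|\det J_{f^{-1}}(g \cdot x)\right| \\
  &= p_Z\!\left(g \cdot f^{-1}(x)\right) \left|\det J_{f^{-1}}(x)\right|
     && \text{($f^{-1}$ equivariant, $|\det g| = 1$)} \\
  &= p_Z\!\left(f^{-1}(x)\right) \left|\det J_{f^{-1}}(x)\right|
     && \text{(prior $p_Z$ is $G$-invariant)} \\
  &= p_X(x).
\end{align*}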


Journal Article
TL;DR: Drawing on tools from probability and statistics, a link between functional and probabilistic symmetry is established, and generative functional representations of joint and conditional probability distributions are obtained that are invariant or equivariant under the action of a compact group.
Abstract: Treating neural network inputs and outputs as random variables, we characterize the structure of neural networks that can be used to model data that are invariant or equivariant under the action of a compact group. Much recent research has been devoted to encoding invariance under symmetry transformations into neural network architectures, in an effort to improve the performance of deep neural networks in data-scarce, non-i.i.d., or unsupervised settings. By considering group invariance from the perspective of probabilistic symmetry, we establish a link between functional and probabilistic symmetry, and obtain generative functional representations of probability distributions that are invariant or equivariant under the action of a compact group. Our representations completely characterize the structure of neural networks that can be used to model such distributions and yield a general program for constructing invariant stochastic or deterministic neural networks. We demonstrate that examples from the recent literature are special cases, and develop the details of the general program for exchangeable sequences and arrays.

87 citations


Proceedings Article
30 Jun 2020
TL;DR: This paper introduces MDP homomorphic networks for deep reinforcement learning and introduces an easy method for constructing equivariant network layers numerically, so the system designer need not solve the constraints by hand, as is typically done.
Abstract: This paper introduces MDP homomorphic networks for deep reinforcement learning. MDP homomorphic networks are neural networks that are equivariant under symmetries in the joint state-action space of an MDP. Current approaches to deep reinforcement learning do not usually exploit knowledge about such structure. By building this prior knowledge into policy and value networks using an equivariance constraint, we can reduce the size of the solution space. We specifically focus on group-structured symmetries (invertible transformations). Additionally, we introduce an easy method for constructing equivariant network layers numerically, so the system designer need not solve the constraints by hand, as is typically done. We construct MDP homomorphic MLPs and CNNs that are equivariant under either a group of reflections or rotations. We show that such networks converge faster than unstructured baselines on CartPole, a grid world and Pong.

73 citations
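One simple numerical route to an equivariant linear layer, related to (but much simpler than) the construction referenced above, is to project an arbitrary weight matrix onto the equivariant subspace by averaging over a finite group. A hedged NumPy sketch for the two-element reflection group; the representation and sizes are toy assumptions, not the paper's procedure.

import numpy as np

# Representation of the reflection group {identity, flip} acting on R^4
# by reversing coordinate order, used for both input and output spaces.
I4 = np.eye(4)
flip = np.fliplr(np.eye(4))
group = [I4, flip]

def symmetrize(W, group):
    """Project W onto the commutant {W : rho(g) W = W rho(g)} by group averaging."""
    return sum(np.linalg.inv(g) @ W @ g for g in group) / len(group)

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
W_eq = symmetrize(W, group)

x = rng.normal(size=4)
# Equivariance check: transform-then-apply equals apply-then-transform.
print(np.allclose(W_eq @ (flip @ x), flip @ (W_eq @ x)))  # True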


Posted Content
TL;DR: Gauge Equivariant Mesh CNNs are proposed which generalize GCNs to apply anisotropic gauge equivariant kernels and introduce a geometric message passing scheme defined by parallel transporting features over mesh edges.
Abstract: A common approach to define convolutions on meshes is to interpret them as a graph and apply graph convolutional networks (GCNs). Such GCNs utilize isotropic kernels and are therefore insensitive to the relative orientation of vertices and thus to the geometry of the mesh as a whole. We propose Gauge Equivariant Mesh CNNs which generalize GCNs to apply anisotropic gauge equivariant kernels. Since the resulting features carry orientation information, we introduce a geometric message passing scheme defined by parallel transporting features over mesh edges. Our experiments validate the significantly improved expressivity of the proposed model over conventional GCNs and other methods.

72 citations


Posted Content
TL;DR: In this article, a permutation equivariant, multi-channel graph neural network is proposed to model the gradient of the data distribution at the input graph, which implicitly defines a permutation invariant distribution for graphs.
Abstract: Learning generative models for graph-structured data is challenging because graphs are discrete, combinatorial, and the underlying data distribution is invariant to the ordering of nodes. However, most of the existing generative models for graphs are not invariant to the chosen ordering, which might lead to an undesirable bias in the learned distribution. To address this difficulty, we propose a permutation invariant approach to modeling graphs, using the recent framework of score-based generative modeling. In particular, we design a permutation equivariant, multi-channel graph neural network to model the gradient of the data distribution at the input graph (a.k.a., the score function). This permutation equivariant model of gradients implicitly defines a permutation invariant distribution for graphs. We train this graph neural network with score matching and sample from it with annealed Langevin dynamics. In our experiments, we first demonstrate the capacity of this new architecture in learning discrete graph algorithms. For graph generation, we find that our learning approach achieves better or comparable results to existing models on benchmark datasets.

71 citations
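The link between the equivariant score model and the invariant distribution mentioned above follows from the chain rule: a permutation-invariant log-density on adjacency matrices has a permutation-equivariant gradient. Writing s(A) for the score and P for a permutation matrix (notation mine, as a sketch of the standard argument):

\[
  \log p\!\left(P A P^{\top}\right) = \log p(A)
  \quad\Longrightarrow\quad
  s\!\left(P A P^{\top}\right) = P\, s(A)\, P^{\top},
  \qquad s(A) := \nabla_{A} \log p(A).
\]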


Proceedings Article
12 Jul 2020
TL;DR: A neural network architecture that is fully equivariant with respect to transformations under the Lorentz group, a fundamental symmetry of space and time in physics, leads to drastically simpler models that have relatively few learnable parameters and are much more physically interpretable than leading CNN and point-cloud approaches.
Abstract: We present a neural network architecture that is fully equivariant with respect to transformations under the Lorentz group, a fundamental symmetry of space and time in physics. The architecture is based on the theory of the finite-dimensional representations of the Lorentz group and the equivariant nonlinearity involves the tensor product. For classification tasks in particle physics, we demonstrate that such an equivariant architecture leads to drastically simpler models that have relatively few learnable parameters and are much more physically interpretable than leading CNN and point-cloud approaches. The competitive performance of the network is demonstrated on a public classification dataset [27] for tagging top quark decays given the energy-momenta of jet constituents produced in proton-proton collisions.

68 citations
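The basic Lorentz invariant that such architectures build on is the Minkowski inner product of four-momenta. A small NumPy sketch (not the paper's code) verifying its invariance under a boost along the z-axis; the four-momenta and boost velocity are toy values.

import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])          # Minkowski metric (+,-,-,-)

def boost_z(beta):
    """Lorentz boost along z with velocity beta (in units of c)."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    B = np.eye(4)
    B[0, 0] = B[3, 3] = gamma
    B[0, 3] = B[3, 0] = -gamma * beta
    return B

def minkowski_dot(p, q):
    return p @ eta @ q

p = np.array([10.0, 1.0, 2.0, 3.0])             # toy four-momentum (E, px, py, pz)
q = np.array([8.0, 0.5, -1.0, 2.0])

Lam = boost_z(0.6)
print(np.isclose(minkowski_dot(p, q),
                 minkowski_dot(Lam @ p, Lam @ q)))  # True: the inner product is invariant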


Posted Content
TL;DR: The authors propagate a one-hot encoding of the nodes, in addition to the features, in order to learn a local context matrix around each node, which can eventually be pooled to build node representations.
Abstract: Message-passing has proved to be an effective way to design graph neural networks, as it is able to leverage both permutation equivariance and an inductive bias towards learning local structures in order to achieve good generalization. However, current message-passing architectures have a limited representation power and fail to learn basic topological properties of graphs. We address this problem and propose a powerful and equivariant message-passing framework based on two ideas: first, we propagate a one-hot encoding of the nodes, in addition to the features, in order to learn a local context matrix around each node. This matrix contains rich local information about both features and topology and can eventually be pooled to build node representations. Second, we propose methods for the parametrization of the message and update functions that ensure permutation equivariance. Having a representation that is independent of the specific choice of the one-hot encoding permits inductive reasoning and leads to better generalization properties. Experimentally, our model can predict various graph topological properties on synthetic data more accurately than previous methods and achieves state-of-the-art results on molecular graph regression on the ZINC dataset.

Journal ArticleDOI
TL;DR: In this paper, the authors prove a combinatorial criterion for K-stability of a Q-Fano spherical variety with respect to equivariant special test configurations, in terms of its moment polytope and some combinatorial data associated to the open orbit.
Abstract: We prove a combinatorial criterion for K-stability of a Q-Fano spherical variety with respect to equivariant special test configurations, in terms of its moment polytope and some combinatorial data associated to the open orbit. Combined with the equivariant version of the Yau-Tian-Donaldson conjecture for Fano manifolds proved by Datar and Szekelyhidi, it yields a criterion for the existence of a Kahler-Einstein metric on a spherical Fano manifold. The results hold also for modified K-stability and existence of Kahler-Ricci solitons.

Proceedings Article
12 Jul 2020
TL;DR: This paper characterize the space of linear layers that are equivariant both to element reordering and to the inherent symmetries of elements, like translation in the case of images, and shows that networks that are composed of these layers are universal approximators of both invariant and Equivariant functions.
Abstract: Learning from unordered sets is a fundamental learning setup, recently attracting increasing attention. Research in this area has focused on the case where elements of the set are represented by feature vectors, and far less emphasis has been given to the common case where set elements themselves adhere to their own symmetries. That case is relevant to numerous applications, from deblurring image bursts to multi-view 3D shape recognition and reconstruction. In this paper, we present a principled approach to learning sets of general symmetric elements. We first characterize the space of linear layers that are equivariant both to element reordering and to the inherent symmetries of elements, like translation in the case of images. We further show that networks that are composed of these layers, called Deep Sets for Symmetric Elements (DSS) layers, are universal approximators of both invariant and equivariant functions, and that these networks are strictly more expressive than Siamese networks. DSS layers are also straightforward to implement. Finally, we show that they improve over existing set-learning architectures in a series of experiments with images, graphs, and point-clouds.
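A hedged NumPy sketch of the layer structure described above: a per-element ("Siamese") linear part plus an aggregation part, with both parts respecting the elements' own symmetry, taken here to be translation of 1D periodic signals. Kernel sizes, shapes, and names are illustrative assumptions, not the paper's implementation.

import numpy as np

def circular_conv(x, kernel):
    """Translation-equivariant linear map on a 1D signal (periodic boundary)."""
    n = len(x)
    return np.array([sum(kernel[j] * x[(i + j) % n] for j in range(len(kernel)))
                     for i in range(n)])

def dss_layer(X, k_siamese, k_aggregate):
    """X: (set_size, signal_length). Equivariant to set permutations and to
    simultaneous circular shifts of all signals."""
    pooled = X.sum(axis=0)
    return np.stack([circular_conv(x, k_siamese) + circular_conv(pooled, k_aggregate)
                     for x in X])

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
k1, k2 = rng.normal(size=3), rng.normal(size=3)

# Permutation equivariance: reordering the set reorders the output rows.
perm = rng.permutation(4)
print(np.allclose(dss_layer(X[perm], k1, k2), dss_layer(X, k1, k2)[perm]))            # True
# Shift equivariance: shifting every signal shifts every output signal.
print(np.allclose(dss_layer(np.roll(X, 2, axis=1), k1, k2),
                  np.roll(dss_layer(X, k1, k2), 2, axis=1)))                          # True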

Book ChapterDOI
23 Aug 2020
TL;DR: In this paper, a 3D capsule module is proposed for processing point clouds that is equivariant to 3D rotations and translations, as well as invariant to permutations of the input points.
Abstract: We present a 3D capsule module for processing point clouds that is equivariant to 3D rotations and translations, as well as invariant to permutations of the input points. The operator receives a sparse set of local reference frames, computed from an input point cloud and establishes end-to-end transformation equivariance through a novel dynamic routing procedure on quaternions. Further, we theoretically connect dynamic routing between capsules to the well-known Weiszfeld algorithm, a scheme for solving iterative re-weighted least squares (IRLS) problems with provable convergence properties. It is shown that such group dynamic routing can be interpreted as robust IRLS rotation averaging on capsule votes, where information is routed based on the final inlier scores. Based on our operator, we build a capsule network that disentangles geometry from pose, paving the way for more informative descriptors and a structured latent space. Our architecture allows joint object classification and orientation estimation without explicit supervision of rotations. We validate our algorithm empirically on common benchmark datasets.
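The Weiszfeld algorithm that the routing above is connected to is a short iteratively re-weighted least squares scheme for the geometric median. A brief NumPy sketch on Euclidean points (the paper applies the analogous robust averaging to rotations; points and iteration count here are toy choices).

import numpy as np

def weiszfeld(points, n_iter=100, eps=1e-9):
    """Geometric median: argmin_y sum_i ||points[i] - y||, via IRLS."""
    y = points.mean(axis=0)                       # initialize at the centroid
    for _ in range(n_iter):
        d = np.linalg.norm(points - y, axis=1)
        w = 1.0 / np.maximum(d, eps)              # IRLS weights: inverse distances
        y = (w[:, None] * points).sum(axis=0) / w.sum()
    return y

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [10.0, 10.0]])
print(weiszfeld(pts))   # stays near the cluster of three points, unlike the mean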

Proceedings Article
05 May 2020
TL;DR: It is proved that when the loss is zero, the optimal policy in the abstract MDP can be successfully lifted to the original MDP, and a contrastive loss function is introduced that enforces action equivariance on the learned representations.
Abstract: This work exploits action equivariance for representation learning in reinforcement learning. Equivariance under actions states that transitions in the input space are mirrored by equivalent transitions in latent space, while the map and transition functions should also commute. We introduce a contrastive loss function that enforces action equivariance on the learned representations. We prove that when our loss is zero, we have a homomorphism of a deterministic Markov Decision Process (MDP). Learning equivariant maps leads to structured latent spaces, allowing us to build a model on which we plan through value iteration. We show experimentally that for deterministic MDPs, the optimal policy in the abstract MDP can be successfully lifted to the original MDP. Moreover, the approach easily adapts to changes in the goal states. Empirically, we show that in such MDPs, we obtain better representations in fewer epochs compared to representation learning approaches using reconstructions, while generalizing better to new goals than model-free approaches.
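The commutation property described above can be written out explicitly; here Z denotes the learned encoder, T the true transition function, and the barred T the latent transition model (notation mine, not necessarily the paper's):

\[
  Z\bigl(T(s, a)\bigr) \;=\; \overline{T}\bigl(Z(s),\, a\bigr)
  \qquad \text{for all states } s \text{ and actions } a.
\]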

Posted Content
TL;DR: This paper finds that for fixed network depth, adding angular features improves the accuracy on most targets, and beats previous state-of-the-art results on the global electronic properties: dipole moment, isotropic polarizability, and electronic spatial extent.
Abstract: Equivariant neural networks (ENNs) are graph neural networks embedded in $\mathbb{R}^3$ and are well suited for predicting molecular properties. The ENN library e3nn has customizable convolutions, which can be designed to depend only on distances between points, or also on angular features, making them rotationally invariant or equivariant, respectively. This paper studies the practical value of including angular dependencies for molecular property prediction directly via an ablation study with e3nn and the QM9 data set. We find that, for fixed network depth and parameter count, adding angular features decreased test error by an average of 23%. Meanwhile, increasing network depth decreased test error by only 4% on average, implying that rotationally equivariant layers are comparatively parameter efficient. We present an explanation of the accuracy improvement on the dipole moment, the target which benefited most from the introduction of angular features.

Proceedings Article
12 Jul 2020
TL;DR: Attentive group equivariant convolutions are presented, a generalization of the group convolution, in which attention is applied during the course of convolution to accentuate meaningful symmetry combinations and suppress non-plausible, misleading ones.
Abstract: Although group convolutional networks are able to learn powerful representations based on symmetry patterns, they lack explicit means to learn meaningful relationships among them (e.g., relative positions and poses). In this paper, we present attentive group equivariant convolutions, a generalization of the group convolution, in which attention is applied during the course of convolution to accentuate meaningful symmetry combinations and suppress non-plausible, misleading ones. We indicate that prior work on visual attention can be described as special cases of our proposed framework and show empirically that our attentive group equivariant convolutional networks consistently outperform conventional group convolutional networks on benchmark image datasets. Simultaneously, we provide interpretability to the learned concepts through the visualization of equivariant attention maps.
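For reference, one common way to write a discrete group convolution, the operation that the attentive variant above generalizes; here α is an input-dependent attention weight, and its exact placement in the paper may differ from this sketch:

\[
  [f \star_{G} \psi](g) \;=\; \sum_{h \in G} \sum_{c} \alpha(g, h)\, f_{c}(h)\, \psi_{c}\!\left(g^{-1} h\right).
\]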

Journal ArticleDOI
TL;DR: Real-analytic Eisenstein series (RES) as discussed by the authors are a family of real-analytic modular forms on the upper-half plane that admit expansions involving only rational numbers and single-valued multiple zeta values.
Abstract: We introduce a new family of real-analytic modular forms on the upper-half plane. They are arguably the simplest class of ‘mixed’ versions of modular forms of level one and are constructed out of real and imaginary parts of iterated integrals of holomorphic Eisenstein series. They form an algebra of functions satisfying many properties analogous to classical holomorphic modular forms. In particular, they admit expansions involving only rational numbers and single-valued multiple zeta values. The first nontrivial functions in this class are real-analytic Eisenstein series.

Journal ArticleDOI
01 Jan 2020
TL;DR: In this article, the authors studied the higher algebra of spectral Mackey functors, which the first named author introduced in Part I of this paper, and showed that the algebraic K-theory of group actions is lax symmetric monoidal.
Abstract: We study the “higher algebra” of spectral Mackey functors, which the first named author introduced in Part I of this paper. In particular, armed with our new theory of symmetric promonoidal ∞-categories and a suitable generalization of the second named author’s Day convolution, we endow the ∞-category of Mackey functors with a well-behaved symmetric monoidal structure. This makes it possible to speak of spectral Green functors for any operad O. We also answer a question of Mathew, proving that the algebraic K-theory of group actions is lax symmetric monoidal. We further show that the algebraic K-theory of derived stacks provides another example. Finally, we give a very short, new proof of the equivariant Barratt–Priddy–Quillen theorem, which states that the algebraic K-theory of the category of finite G-sets is simply the G-equivariant sphere spectrum.

Proceedings ArticleDOI
14 Jun 2020
TL;DR: This work proposes a novel unsupervised learning of Graph Transformation Equivariant Representations (GraphTER), aiming to capture intrinsic patterns of graph structure under both global and local transformations.
Abstract: Recent advances in Graph Convolutional Neural Networks (GCNNs) have shown their efficiency for non-Euclidean data on graphs, which often require a large amount of labeled data with high cost. It is thus critical to learn graph feature representations in an unsupervised manner in practice. To this end, we propose a novel unsupervised learning of Graph Transformation Equivariant Representations (GraphTER), aiming to capture intrinsic patterns of graph structure under both global and local transformations. Specifically, we sample different groups of nodes from a graph and then transform them node-wise, either isotropically or anisotropically. Then, we self-train a representation encoder to capture the graph structures by reconstructing these node-wise transformations from the feature representations of the original and transformed graphs. In experiments, we apply the learned GraphTER to graphs of 3D point cloud data, and results on point cloud segmentation/classification show that GraphTER significantly outperforms state-of-the-art unsupervised approaches and moves much closer to the upper bound set by the fully supervised counterparts. The code is available at: https://github.com/gyshgx868/graph-ter.

Posted Content
TL;DR: This work presents a method for learning and encoding equivariances into networks by learning the corresponding parameter-sharing patterns from data; the method can provably encode equivariance-inducing parameter sharing for any finite group of symmetry transformations.
Abstract: Many successful deep learning architectures are equivariant to certain transformations in order to conserve parameters and improve generalization: most famously, convolution layers are equivariant to shifts of the input. This approach only works when practitioners know the symmetries of the task and can manually construct an architecture with the corresponding equivariances. Our goal is an approach for learning equivariances from data, without needing to design custom task-specific architectures. We present a method for learning and encoding equivariances into networks by learning corresponding parameter sharing patterns from data. Our method can provably represent equivariance-inducing parameter sharing for any finite group of symmetry transformations. Our experiments suggest that it can automatically learn to encode equivariances to common transformations used in image processing tasks. We provide our experiment code at this https URL.

Journal ArticleDOI
TL;DR: In this article, a sharp area estimate for catenoids that allows us to rule out the phenomenon of multiplicity in min-max theory in several settings is presented, and it is shown that the width of a three-manifold with positive Ricci curvature is realized by an orientable minimal surface.
Abstract: We prove a sharp area estimate for catenoids that allows us to rule out the phenomenon of multiplicity in min-max theory in several settings. We apply it to prove that i) the width of a three-manifold with positive Ricci curvature is realized by an orientable minimal surface ii) minimal genus Heegaard surfaces in such manifolds can be isotoped to be minimal and iii) the “doublings” of the Clifford torus by Kapouleas–Yang can be constructed variationally by an equivariant min-max procedure. In higher dimensions we also prove that the width of manifolds with positive Ricci curvature is achieved by an index $1$ orientable minimal hypersurface.

Journal ArticleDOI
TL;DR: This work discusses how to exploit recursion relations between equivariant features of different order (generalizations of N-body invariants that provide a complete representation of the symmetries of improper rotations) to compute high-order terms efficiently.
Abstract: Mapping an atomistic configuration to a symmetrized N-point correlation of a field associated with the atomic positions (e.g., an atomic density) has emerged as an elegant and effective solution to represent structures as the input of machine-learning algorithms. While it has become clear that low-order density correlations do not provide a complete representation of an atomic environment, the exponential increase in the number of possible N-body invariants makes it difficult to design a concise and effective representation. We discuss how to exploit recursion relations between equivariant features of different order (generalizations of N-body invariants that provide a complete representation of the symmetries of improper rotations) to compute high-order terms efficiently. In combination with the automatic selection of the most expressive combination of features at each order, this approach provides a conceptual and practical framework to generate systematically improvable, symmetry adapted representations for atomistic machine learning.

Proceedings Article
12 Jul 2020
TL;DR: The authors propose a general method to construct a convolutional layer that is equivariant to transformations from any specified Lie group with a surjective exponential map, and apply the same model architecture to images, ball-and-stick molecular data, and Hamiltonian dynamical systems.
Abstract: The translation equivariance of convolutional layers enables convolutional neural networks to generalize well on image problems. While translation equivariance provides a powerful inductive bias for images, we often additionally desire equivariance to other transformations, such as rotations, especially for non-image data. We propose a general method to construct a convolutional layer that is equivariant to transformations from any specified Lie group with a surjective exponential map. Incorporating equivariance to a new group requires implementing only the group exponential and logarithm maps, enabling rapid prototyping. Showcasing the simplicity and generality of our method, we apply the same model architecture to images, ball-and-stick molecular data, and Hamiltonian dynamical systems. For Hamiltonian systems, the equivariance of our models is especially impactful, leading to exact conservation of linear and angular momentum.

Journal ArticleDOI
TL;DR: In this paper, a generalization of the polarized endomorphism is proposed, which keeps all nice properties of the polarized case in terms of the singularity, canonical divisor, and equivariant minimal model program.
Abstract: Let X be a normal projective variety. A surjective endomorphism $f:X\rightarrow X$ is int-amplified if $f^*L - L = H$ for some ample Cartier divisors L and H. This is a generalization of the so-called polarized endomorphism, which requires that $f^*H\sim qH$ for some ample Cartier divisor H and $q>1$. We show that this generalization keeps all nice properties of the polarized case in terms of the singularity, canonical divisor, and equivariant minimal model program.

Journal ArticleDOI
TL;DR: In this article, it is shown that brane charge quantization in unstable equivariant cohomotopy implies the anomaly cancellation conditions for M-branes and D-branes on flat orbi-orientifolds.

Journal ArticleDOI
TL;DR: In this paper, stable envelopes in equivariant elliptic cohomology of Nakajima quiver varieties were constructed, which gave an elliptic generalization of the results of arXiv:1211.1287.
Abstract: We construct stable envelopes in equivariant elliptic cohomology of Nakajima quiver varieties. In particular, this gives an elliptic generalization of the results of arXiv:1211.1287. We apply them to the computation of the monodromy of $q$-difference equations arising in the enumerative K-theory of rational curves in Nakajima varieties, including the quantum Knizhnik-Zamolodchikov equations.

Posted Content
TL;DR: A first study of the approximation power of neural architectures that are invariant or equivariant to all three shape-preserving transformations of point clouds: translation, rotation, and permutation is presented.
Abstract: Learning functions on point clouds has applications in many fields, including computer vision, computer graphics, physics, and chemistry. Recently, there has been a growing interest in neural architectures that are invariant or equivariant to all three shape-preserving transformations of point clouds: translation, rotation, and permutation. In this paper, we present a first study of the approximation power of these architectures. We first derive two sufficient conditions for an equivariant architecture to have the universal approximation property, based on a novel characterization of the space of equivariant polynomials. We then use these conditions to show that two recently suggested models are universal, and for devising two other novel universal architectures.

Posted Content
TL;DR: Deep Sets for Symmetric Elements (DSS) as discussed by the authors is a principled approach to learning from sets of general symmetric elements, using linear layers that are equivariant both to element reordering and to the inherent symmetries of the elements.
Abstract: Learning from unordered sets is a fundamental learning setup, recently attracting increasing attention. Research in this area has focused on the case where elements of the set are represented by feature vectors, and far less emphasis has been given to the common case where set elements themselves adhere to their own symmetries. That case is relevant to numerous applications, from deblurring image bursts to multi-view 3D shape recognition and reconstruction. In this paper, we present a principled approach to learning sets of general symmetric elements. We first characterize the space of linear layers that are equivariant both to element reordering and to the inherent symmetries of elements, like translation in the case of images. We further show that networks that are composed of these layers, called Deep Sets for Symmetric Elements (DSS) layers, are universal approximators of both invariant and equivariant functions, and that these networks are strictly more expressive than Siamese networks. DSS layers are also straightforward to implement. Finally, we show that they improve over existing set-learning architectures in a series of experiments with images, graphs, and point-clouds.

Posted Content
TL;DR: In this article, the equivalence of equivariant K-semistability (resp. K-polystability) with geometric K-semistability (resp. K-polystability) is proved, along with the existence and uniqueness of minimal optimal destabilizing centers on K-unstable log Fano pairs.
Abstract: We give an algebraic proof of the equivalence of equivariant K-semistability (resp. equivariant K-polystability) with geometric K-semistability (resp. geometric K-polystability). Along the way we also prove the existence and uniqueness of minimal optimal destabilizing centers on K-unstable log Fano pairs.