
Showing papers on "Euclidean geometry" published in 2018


Journal ArticleDOI
TL;DR: In this article, the Euclidean gravitational path integral computing the Rényi entropy was studied and analyzed under small variations, and the extremality condition can be understood from the variational principle at the level of the action, without having to solve explicitly the equations of motion.
Abstract: We study the Euclidean gravitational path integral computing the Rényi entropy and analyze its behavior under small variations. We argue that, in Einstein gravity, the extremality condition can be understood from the variational principle at the level of the action, without having to solve explicitly the equations of motion. This set-up is then generalized to arbitrary theories of gravity, where we show that the respective entanglement entropy functional needs to be extremized. We also extend this result to all orders in Newton’s constant $G_N$, providing a derivation of quantum extremality. Understanding quantum extremality for mixtures of states provides a generalization of the dual of the boundary modular Hamiltonian which is given by the bulk modular Hamiltonian plus the area operator, evaluated on the so-called modular extremal surface. This gives a bulk prescription for computing the relative entropies to all orders in $G_N$. We also comment on how these ideas can be used to derive an integrated version of the equations of motion, linearized around arbitrary states.
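
For reference, the Rényi entropies computed by such replica path integrals are (standard definitions, not specific to this paper):

$$ S_n = \frac{1}{1-n}\log \mathrm{Tr}\,\rho^n, \qquad \mathrm{Tr}\,\rho^n = \frac{Z_n}{Z_1^{\,n}}, \qquad S_{EE} = \lim_{n\to 1} S_n, $$

where $Z_n$ is the Euclidean partition function on the $n$-sheeted branched cover.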

186 citations


Journal ArticleDOI
TL;DR: In this paper, the Palatini formalism is developed for gravitational theories in flat geometries, where the affine connection is fixed to be metric compatible, as done in the usual teleparallel theories, but the constraints are imposed covariantly with suitable Lagrange multipliers.
Abstract: The Palatini formalism, which assumes the metric and the affine connection as independent variables, is developed for gravitational theories in flat geometries. We focus on two particularly interesting scenarios. First, we fix the connection to be metric compatible, as done in the usual teleparallel theories, but we follow a completely covariant approach by imposing the constraints with suitable Lagrange multipliers. For a general quadratic theory we show how torsion naturally propagates and we reproduce the Teleparallel Equivalent of General Relativity as a particular quadratic action that features an additional Lorentz symmetry. We then study the much less explored theories formulated in a geometry with neither curvature nor torsion, so that all the geometrical information is encoded in the non-metricity. We discuss how this geometrical framework leads to a purely inertial connection that can thus be completely removed by a coordinate gauge choice, the coincident gauge. From the quadratic theory we recover a simpler formulation of General Relativity in the form of the Einstein action, which enjoys an enhanced symmetry that reduces to a second linearised diffeomorphism at linear order. More general theories in both geometries can be formulated consistently by taking into account the inertial connection and the associated additional degrees of freedom. As immediate applications, the new cosmological equations and their Newtonian limit are considered, where the role of the lapse in the consistency of the equations is clarified, and the Schwarzschild black hole entropy is computed by evaluating the corresponding Euclidean action. We discuss how the boundary terms in the usual formulation of General Relativity are related to different choices of coordinates in its coincident version and show that in isotropic coordinates the Euclidean action is finite without the need to introduce boundary or normalisation terms. Finally, we discuss the double-copy structure of the gravity amplitudes and the bootstrapping of gravity within the framework of coincident General Relativity.
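
Schematically (standard definitions in this literature, our restatement): the flat, torsion-free geometry carries all of its information in the non-metricity,

$$ Q_{\alpha\mu\nu} \equiv \nabla_\alpha g_{\mu\nu}, \qquad R^\alpha{}_{\beta\mu\nu} = 0, \qquad T^\alpha{}_{\mu\nu} = 0, $$

and in the coincident gauge the inertial connection trivializes, $\Gamma^\alpha{}_{\mu\nu} = 0$, so that $Q_{\alpha\mu\nu} = \partial_\alpha g_{\mu\nu}$.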

163 citations


Proceedings Article
23 May 2018
TL;DR: In this article, the authors propose hyperbolic versions of important deep learning tools, such as multinomial logistic regression, feed-forward and recurrent neural networks, to perform classification in hyperbolic space.
Abstract: Hyperbolic spaces have recently gained momentum in the context of machine learning due to their high capacity and tree-likeness properties. However, the representational power of hyperbolic geometry is not yet on par with Euclidean geometry, mostly because of the absence of corresponding hyperbolic neural network layers. Here, we bridge this gap in a principled manner by combining the formalism of Möbius gyrovector spaces with the Riemannian geometry of the Poincaré model of hyperbolic spaces. As a result, we derive hyperbolic versions of important deep learning tools: multinomial logistic regression, feed-forward and recurrent neural networks. This allows us to embed sequential data and perform classification in hyperbolic space. Empirically, we show that, even if hyperbolic optimization tools are limited, hyperbolic sentence embeddings either outperform or are on par with their Euclidean variants on textual entailment and noisy-prefix recognition tasks.
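
As a concrete illustration of the gyrovector formalism, a minimal numpy sketch of the standard Möbius addition and the induced Poincaré-ball distance (curvature c = 1; function names are ours, not the paper's):

    import numpy as np

    def mobius_add(x, y):
        # Mobius addition in the unit Poincare ball (curvature c = 1).
        xy = np.dot(x, y)
        x2, y2 = np.dot(x, x), np.dot(y, y)
        num = (1 + 2 * xy + y2) * x + (1 - x2) * y
        return num / (1 + 2 * xy + x2 * y2)

    def poincare_dist(x, y):
        # Hyperbolic distance: d(x, y) = 2 artanh || (-x) (+) y ||.
        return 2 * np.arctanh(np.linalg.norm(mobius_add(-x, y)))

The hyperbolic layers of the paper replace the vector additions and matrix-vector products of Euclidean layers with gyrovector counterparts built from this operation.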

145 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider the problem of identifying the bulk space-time of the SYK model and propose a dual theory with Euclidean AdS signature with additional leg factors, which incorporate the coupling of additional bulk states similar to the discrete states of 2d string theory.
Abstract: We consider the question of identifying the bulk space-time of the SYK model. Focusing on the signature of emergent space-time of the (Euclidean) model, we explain the need for non-local (Radon-type) transformations on external legs of n-point Green’s functions. This results in a dual theory with Euclidean AdS signature with additional leg factors. We speculate that these factors incorporate the coupling of additional bulk states similar to the discrete states of 2d string theory.

83 citations


Proceedings Article
03 Jul 2018
TL;DR: In this paper, a family of nested geodesically convex cones is used to embed directed acyclic graphs in hyperbolic spaces, which can model tree-like structures better than Euclidean geometry.
Abstract: Learning graph representations via low-dimensional embeddings that preserve relevant network properties is an important class of problems in machine learning. We here present a novel method to embed directed acyclic graphs. Following prior work, we first advocate for using hyperbolic spaces which provably model tree-like structures better than Euclidean geometry. Second, we view hierarchical relations as partial orders defined using a family of nested geodesically convex cones. We prove that these entailment cones admit an optimal shape with a closed form expression both in the Euclidean and hyperbolic spaces, and they canonically define the embedding learning process. Experiments show significant improvements of our method over strong recent baselines both in terms of representational capacity and generalization.
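
A minimal sketch of the cone idea in the Euclidean case, reconstructed under the assumption that the half-aperture at apex x is arcsin(K/||x||) and that y is entailed by x when the angle between y - x and the axis direction x stays below it (constant and names are illustrative):

    import numpy as np

    K = 0.1  # aperture constant; embeddings are kept outside the radius-K ball

    def cone_energy(x, y):
        # Zero iff y lies inside the entailment cone at apex x.
        half_aperture = np.arcsin(K / np.linalg.norm(x))
        u = y - x
        cos_angle = np.dot(x, u) / (np.linalg.norm(x) * np.linalg.norm(u))
        angle = np.arccos(np.clip(cos_angle, -1.0, 1.0))
        return max(0.0, angle - half_aperture)

Training then minimizes this energy on positive (entailed) pairs and pushes it above a margin on negative pairs; the hyperbolic version replaces angles and norms with their Poincaré-ball analogues.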

82 citations


Journal ArticleDOI
TL;DR: In this paper, the authors consider states of holographic conformal field theories constructed by adding sources for local operators in the Euclidean path integral, with the aim of investigating the extent to which arbitrary bulk coherent states can be represented by such Euclidean path integrals in the CFT.
Abstract: We consider states of holographic conformal field theories constructed by adding sources for local operators in the Euclidean path integral, with the aim of investigating the extent to which arbitrary bulk coherent states can be represented by such Euclidean path-integrals in the CFT. We construct the associated dual Lorentzian spacetimes perturbatively in the sources. Extending earlier work, we provide explicit formulae for the Lorentzian fields to first order in the sources for general scalar field and metric perturbations in arbitrary dimensions. We check the results by holographically computing the Lorentzian one-point functions for the sourced operators and comparing with a direct CFT calculation. We present evidence that at the linearized level, arbitrary bulk initial data profiles can be generated by an appropriate choice of Euclidean sources. However, in order to produce initial data that is very localized, the amplitude must at the same time be taken small; otherwise the required sources diverge, invalidating the perturbative approach.
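
Schematically, the states in question are prepared by a sourced Euclidean path integral of the standard form (our notation, not a quotation from the paper):

$$ \Psi_\lambda[\tilde\phi] \;=\; \int_{\phi(\tau=0^-)=\tilde\phi} \mathcal{D}\phi\;\exp\Big(-S_E[\phi] - \int_{\tau<0} d\tau\, d^{d-1}x\;\lambda(\tau,x)\,\mathcal{O}(\tau,x)\Big), $$

with the dual Lorentzian solution then constructed order by order in the sources $\lambda$.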

75 citations


Posted Content
TL;DR: In this paper, the universal approximation theorem for neural networks is generalized to maps invariant or equivariant with respect to linear representations of groups; for transformations equivariant under rigid Euclidean motions, the authors introduce the "charge-conserving convnet".
Abstract: We describe generalizations of the universal approximation theorem for neural networks to maps invariant or equivariant with respect to linear representations of groups. Our goal is to establish network-like computational models that are both invariant/equivariant and provably complete in the sense of their ability to approximate any continuous invariant/equivariant map. Our contribution is three-fold. First, in the general case of compact groups we propose a construction of a complete invariant/equivariant network using an intermediate polynomial layer. We invoke classical theorems of Hilbert and Weyl to justify and simplify this construction; in particular, we describe an explicit complete ansatz for approximation of permutation-invariant maps. Second, we consider groups of translations and prove several versions of the universal approximation theorem for convolutional networks in the limit of continuous signals on Euclidean spaces. Finally, we consider 2D signal transformations equivariant with respect to the group SE(2) of rigid Euclidean motions. In this case we introduce the "charge-conserving convnet" -- a convnet-like computational model based on the decomposition of the feature space into isotypic representations of SO(2). We prove this model to be a universal approximator for continuous SE(2)-equivariant signal transformations.
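
For the permutation-invariant case, one standard complete ansatz of this type is sum pooling of pointwise features followed by a readout, f(X) = g(sum_i h(x_i)); a minimal numpy sketch in that spirit (layer sizes are illustrative, not from the paper):

    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(8, 3)), np.zeros(8)   # pointwise feature map h
    W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)   # readout g on pooled features

    def invariant_net(X):
        # X: (n_points, 3). Sum pooling makes the output invariant
        # under any permutation of the rows of X.
        H = np.tanh(X @ W1.T + b1)
        return np.tanh(H.sum(axis=0) @ W2.T + b2)

    X = rng.normal(size=(5, 3))
    assert np.allclose(invariant_net(X), invariant_net(X[::-1]))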

66 citations


Posted Content
TL;DR: This work shows that scattering transforms can be generalized to non-Euclidean domains using diffusion wavelets, while preserving a notion of stability with respect to metric changes in the domain, measured with diffusion maps.
Abstract: Stability is a key aspect of data analysis. In many applications, the natural notion of stability is geometric, as illustrated for example in computer vision. Scattering transforms construct deep convolutional representations which are certified stable to input deformations. This stability to deformations can be interpreted as stability with respect to changes in the metric structure of the domain. In this work, we show that scattering transforms can be generalized to non-Euclidean domains using diffusion wavelets, while preserving a notion of stability with respect to metric changes in the domain, measured with diffusion maps. The resulting representation is stable to metric perturbations of the domain while being able to capture "high-frequency" information, akin to the Euclidean scattering transform.
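
A minimal sketch of such a representation (our simplified reconstruction: a lazy diffusion operator T and dyadic diffusion wavelets psi_j = T^(2^(j-1)) - T^(2^j), cascaded through a modulus; not the authors' exact construction):

    import numpy as np

    def diffusion_scattering(W, x, J=3):
        # W: symmetric adjacency matrix of the domain, x: signal on the nodes.
        d = W.sum(axis=1)
        A = W / np.sqrt(np.outer(d, d))            # normalized adjacency
        T = 0.5 * (np.eye(len(d)) + A)             # lazy diffusion operator
        powers = {1: T}
        for j in range(1, J + 1):
            powers[2 ** j] = powers[2 ** (j - 1)] @ powers[2 ** (j - 1)]
        coeffs = [x.mean()]                        # low-pass (zeroth order)
        for j in range(1, J + 1):
            psi = powers[2 ** (j - 1)] - powers[2 ** j]   # wavelet at scale j
            coeffs.append(np.abs(psi @ x).mean())         # first-order layer
        return np.array(coeffs)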

57 citations


Journal ArticleDOI
TL;DR: A deep neural network-based framework that utilizes the view information in the feature extraction stage, together with an iterative algorithm that optimizes the parameters of the view-specific networks from coarse to fine to better fit the re-id problem.
Abstract: In recent years, a growing body of research has focused on the problem of person re-identification (re-id). The re-id techniques attempt to match the images of pedestrians from disjoint non-overlapping camera views. A major challenge of the re-id is the serious intra-class variations caused by changing viewpoints. To overcome this challenge, we propose a deep neural network-based framework which utilizes the view information in the feature extraction stage. The proposed framework learns a view-specific network for each camera view with a cross-view Euclidean constraint (CV-EC) and a cross-view center loss. We utilize the CV-EC to decrease the margin of the features between diverse views and extend the center loss metric to a view-specific version to better adapt to the re-id problem. Moreover, we propose an iterative algorithm to optimize the parameters of the view-specific networks from coarse to fine. The experiments demonstrate that our approach significantly improves the performance of the existing deep networks and outperforms the state-of-the-art methods on the VIPeR, CUHK01, CUHK03, SYSU-mReId, and Market-1501 benchmarks.
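
A minimal numpy sketch of the two view-specific penalties described above (our paraphrase of the loss forms; in the actual framework they are attached to deep view-specific networks):

    import numpy as np

    def cv_ec_loss(feats_a, feats_b):
        # Cross-view Euclidean constraint: shrink the gap between features
        # of the SAME identities extracted by two view-specific networks.
        return np.mean(np.sum((feats_a - feats_b) ** 2, axis=1))

    def view_center_loss(feats, labels, centers):
        # View-specific center loss: pull each feature to its class center.
        return np.mean(np.sum((feats - centers[labels]) ** 2, axis=1))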

56 citations


Proceedings Article
01 Jan 2018
TL;DR: In this article, the authors propose a deep network architecture that generalizes the Euclidean network paradigm to Grassmann manifolds, with full rank mapping layers to transform input Grassmannian data to more desirable ones, re-orthonormalization layers to normalize the resulting matrices, projection pooling layers to reduce the model complexity in the Grassmannian context, and projection mapping layers for regular output layers.
Abstract: Learning representations on Grassmann manifolds is popular in quite a few visual recognition tasks. In order to enable deep learning on Grassmann manifolds, this paper proposes a deep network architecture by generalizing the Euclidean network paradigm to Grassmann manifolds. In particular, we design full rank mapping layers to transform input Grassmannian data to more desirable ones, exploit re-orthonormalization layers to normalize the resulting matrices, study projection pooling layers to reduce the model complexity in the Grassmannian context, and devise projection mapping layers to respect Grassmannian geometry and meanwhile achieve Euclidean forms for regular output layers. To train the Grassmann networks, we exploit a stochastic gradient descent setting on manifolds of the connection weights, and study a matrix generalization of backpropagation to update the structured data. The evaluations on three visual recognition tasks show that our Grassmann networks have clear advantages over existing Grassmann learning methods, and achieve results comparable with state-of-the-art approaches.
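
A minimal numpy sketch of how three of these layers compose (our reconstruction; sizes are illustrative and the mapping W is learned in practice):

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(10, 16))        # full rank mapping layer (learned)

    def frmap_reorth(X):
        # X: (16, q) orthonormal basis of a q-plane (a Grassmann point).
        Q, _ = np.linalg.qr(W @ X)       # re-orthonormalization layer
        return Q                         # orthonormal again: Q.T @ Q = I_q

    def proj_map(Q):
        # Projection mapping layer: Q Q^T depends only on the subspace,
        # giving a Euclidean form for regular output layers.
        return Q @ Q.T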

51 citations


Journal ArticleDOI
TL;DR: In this article, the authors revisited the large spin asymptotics of 15j symbols in terms of cosines of the 4d Euclidean Regge action, as derived by Barrett and collaborators using a saddle point approximation.
Abstract: We revisit the large spin asymptotics of 15j symbols in terms of cosines of the 4d Euclidean Regge action, as derived by Barrett and collaborators using a saddle point approximation. We bring it closer to the perspective of area-angle Regge calculus and twisted geometries, and compute explicitly the Hessian and phase offsets. We then extend it to more general SU(2) graph invariants associated to nj-symbols. We find that saddle points still exist for special boundary configurations, and that these have a clear geometric interpretation, but there is a novelty: configurations with two distinct saddle points admit a conformal shape-mismatch of the geometry, and the cosine asymptotic behaviour oscillates with a generalisation of the Regge action. The allowed mismatches correspond to angle-matched twisted geometries, 3d polyhedral tessellations with adjacent faces matching areas and 2d angles, but not their diagonals. We study these geometries, identify the relevant subsets corresponding to 3d Regge data and 4d flat polytope data, and discuss the corresponding Regge actions emerging in the asymptotics. Finally, we also provide the first numerical confirmation of the large spin asymptotics of the 15j symbol. We show that the agreement is accurate to the per cent level already at spins of order 10, and the next-to-leading order oscillates with the same frequency and same global phase.
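
Schematically, the saddle-point asymptotics referred to here take the cosine form (schematic only; the precise Hessian normalization and phase offset are what the paper computes):

$$ \{15j\}(\lambda j) \;\sim\; \frac{N(j)}{\sqrt{|\det H|}}\,\cos\!\big(\lambda\, S_{\mathrm{Regge}}(j) + \phi\big), \qquad \lambda \to \infty. $$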

Posted Content
TL;DR: Wasserstein elliptical embeddings are presented, which consist in embedding objects as elliptical probability distributions, namely distributions whose densities have elliptical level sets, and shown to be more intuitive and better behaved numerically than the alternative choice of Gaussian embeddings with the Kullback-Leibler divergence.
Abstract: Embedding complex objects as vectors in low dimensional spaces is a longstanding problem in machine learning. We propose in this work an extension of that approach, which consists in embedding objects as elliptical probability distributions, namely distributions whose densities have elliptical level sets. We endow these measures with the 2-Wasserstein metric, with two important benefits: (i) For such measures, the squared 2-Wasserstein metric has a closed form, equal to a weighted sum of the squared Euclidean distance between means and the squared Bures metric between covariance matrices. The latter is a Riemannian metric between positive semi-definite matrices, which turns out to be Euclidean on a suitable factor representation of such matrices, which is valid on the entire geodesic between these matrices. (ii) The 2-Wasserstein distance boils down to the usual Euclidean metric when comparing Diracs, and therefore provides a natural framework to extend point embeddings. We show that for these reasons Wasserstein elliptical embeddings are more intuitive and yield tools that are better behaved numerically than the alternative choice of Gaussian embeddings with the Kullback-Leibler divergence. In particular, and unlike previous work based on the KL geometry, we learn elliptical distributions that are not necessarily diagonal. We demonstrate the advantages of elliptical embeddings by using them for visualization, to compute embeddings of words, and to reflect entailment or hypernymy.
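
The closed form in (i) is easy to state concretely; a minimal sketch of the squared 2-Wasserstein distance between two elliptical measures with means m and covariance matrices S (the standard Bures formula):

    import numpy as np
    from scipy.linalg import sqrtm

    def bures2(S1, S2):
        # Squared Bures metric between positive semi-definite matrices.
        r = sqrtm(S1)
        return np.trace(S1 + S2 - 2 * sqrtm(r @ S2 @ r)).real

    def w2_squared(m1, S1, m2, S2):
        # Squared 2-Wasserstein distance between elliptical measures:
        # squared Euclidean gap of means + squared Bures gap of covariances.
        return np.sum((m1 - m2) ** 2) + bures2(S1, S2)

For Diracs (S1 = S2 = 0) this reduces to the squared Euclidean distance, which is property (ii).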

Posted Content
TL;DR: Using the path integral geometry framework of Milsted and Vidal, it is shown that MERA on the real line (respectively, a finite circle) can be rigorously interpreted as a light sheet (respectively, a light cone).
Abstract: The multi-scale entanglement renormalization ansatz (MERA) is a tensor network representation for ground states of critical quantum spin chains, with a network that extends in an additional dimension corresponding to scale. Over the years several authors have conjectured, both in the context of holography and cosmology, that MERA realizes a discrete version of some geometry. However, while one proposal argued that the tensor network should be interpreted as representing the hyperbolic plane, another proposal instead equated MERA to de Sitter spacetime. In this letter we show, using the framework of path integral geometry [A. Milsted, G. Vidal, arXiv:1807.02501], that MERA on the real line (and finite circle) can be given a rigorous interpretation as a two-dimensional geometry, namely a light sheet (respectively, a light cone). Accordingly, MERA describes neither the hyperbolic plane nor de Sitter spacetime. However, we also propose Euclidean and Lorentzian generalizations of MERA that correspond to a path integral on these two geometries.

Posted Content
TL;DR: In this article, hyperbolic versions of core deep learning tools are derived in the Poincaré model of hyperbolic spaces and shown to outperform or match their Euclidean counterparts on textual entailment and noisy-prefix recognition tasks.
Abstract: Hyperbolic spaces have recently gained momentum in the context of machine learning due to their high capacity and tree-likeness properties. However, the representational power of hyperbolic geometry is not yet on par with Euclidean geometry, mostly because of the absence of corresponding hyperbolic neural network layers. This makes it hard to use hyperbolic embeddings in downstream tasks. Here, we bridge this gap in a principled manner by combining the formalism of Möbius gyrovector spaces with the Riemannian geometry of the Poincaré model of hyperbolic spaces. As a result, we derive hyperbolic versions of important deep learning tools: multinomial logistic regression, feed-forward and recurrent neural networks such as gated recurrent units. This allows us to embed sequential data and perform classification in hyperbolic space. Empirically, we show that, even if hyperbolic optimization tools are limited, hyperbolic sentence embeddings either outperform or are on par with their Euclidean variants on textual entailment and noisy-prefix recognition tasks.

Journal ArticleDOI
TL;DR: In this paper, the authors introduce a general method to construct, directly in configuration space, classes of dynamical systems invariant under generalizations of the Carroll and of the Galilei groups.
Abstract: We introduce a general method to construct, directly in configuration space, classes of dynamical systems invariant under generalizations of the Carroll and of the Galilei groups. The method does not make use of any nonrelativistic limiting procedure, although the starting point is a Lagrangian Poincaré invariant in the full space. It consists in considering a spacetime in $D+1$ dimensions and partitioning it in two parts, the first Minkowskian and the second Euclidean. The action consists of two terms that are separately invariant under the Minkowskian and Euclidean partitioning. One of those contains a system of Lagrange multipliers that confine the system to a subspace. The other term defines the dynamics of the system. The total Lagrangian is invariant under the Carroll or the Galilei groups with zero central charge.

Proceedings Article
03 Dec 2018
TL;DR: This work designs new differentially private algorithms for the Euclidean k-means problem, both in the centralized model and in the local model of differential privacy, achieving significantly better error guarantees than the previous state-of-the-art.
Abstract: We design new differentially private algorithms for the Euclidean k-means problem, both in the centralized model and in the local model of differential privacy. In both models, our algorithms achieve significantly better error guarantees than the previous state-of-the-art. In addition, in the local model, our algorithm significantly reduces the number of interaction rounds. Although the problem has been widely studied in the context of differential privacy, all of the existing constructions achieve only super constant approximation factors. We present, for the first time, efficient private algorithms for the problem with constant multiplicative error. Furthermore, we show how to modify our algorithms so they compute private coresets for k-means clustering in both models.
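
For orientation, a minimal sketch of the generic noisy-Lloyd baseline that such work improves on: one differentially private centroid update with Laplace noise on cluster counts and sums (the textbook approach with illustrative sensitivity bounds, not the authors' constant-factor algorithm):

    import numpy as np

    def dp_kmeans_step(X, centers, eps, rng):
        # X assumed to lie in [-1, 1]^d so per-point sensitivities are bounded.
        n, d = X.shape
        assign = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        new_centers = centers.copy()
        for j in range(len(centers)):
            pts = X[assign == j]
            noisy_count = len(pts) + rng.laplace(scale=2 / eps)      # eps/2 budget
            noisy_sum = pts.sum(axis=0) + rng.laplace(scale=2 * d / eps, size=d)
            if noisy_count > 1:
                new_centers[j] = np.clip(noisy_sum / noisy_count, -1, 1)
        return new_centers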

Journal ArticleDOI
TL;DR: In this article, the authors study rectifying curves via the dilation of unit speed curves on the unit sphere $S^2$ in the Euclidean space $E^3$, and obtain a necessary and sufficient condition for the centrode of a unit speed curve α(s) in $E^3$ to be a rectifying curve, improving a main result of [4].
Abstract: First, we study rectifying curves via the dilation of unit speed curves on the unit sphere $S^2$ in the Euclidean space $E^3$. Then we obtain a necessary and sufficient condition for which the centrode d(s) of a unit speed curve α(s) in $E^3$ is a rectifying curve to improve a main result of [4]. Finally, we prove that if a unit speed curve α(s) in $E^3$ is neither a planar curve nor a helix, then its dilated centrode β(s) = ρ(s)d(s), with dilation factor ρ, is always a rectifying curve, where ρ is the radius of curvature of α.
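
For context, the standard Frenet-frame facts used here (our restatement): the centrode is the Darboux vector

$$ d(s) = \tau(s)\,T(s) + \kappa(s)\,B(s), \qquad \beta(s) = \rho(s)\,d(s), \quad \rho = 1/\kappa, $$

and a curve is rectifying precisely when its position vector always lies in the rectifying plane spanned by $T$ and $B$.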

Posted Content
TL;DR: This work aims to bridge the gap between Euclidean and hyperbolic geometry in recommender systems through a metric learning approach, proposing HyperML (Hyperbolic Metric Learning), a conceptually simple but highly effective model for boosting performance.
Abstract: This paper investigates the notion of learning user and item representations in non-Euclidean space. Specifically, we study the connection between metric learning in hyperbolic space and collaborative filtering by exploring Möbius gyrovector spaces where the formalism of the spaces could be utilized to generalize the most common Euclidean vector operations. Overall, this work aims to bridge the gap between Euclidean and hyperbolic geometry in recommender systems through a metric learning approach. We propose HyperML (Hyperbolic Metric Learning), a conceptually simple but highly effective model for boosting the performance. Via a series of extensive experiments, we show that our proposed HyperML not only outperforms its Euclidean counterparts, but also achieves state-of-the-art performance on multiple benchmark datasets, demonstrating the effectiveness of personalized recommendation in hyperbolic geometry.
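
A minimal sketch of the pull-push idea on top of a hyperbolic distance such as poincare_dist from the hyperbolic-networks entry above (our paraphrase of generic margin-based metric learning, not the exact HyperML objective):

    def margin_loss(d_pos, d_neg, margin=0.5):
        # d_pos: hyperbolic distance between a user and an interacted item;
        # d_neg: distance to a sampled negative item. Pull positives in,
        # push negatives at least `margin` further away.
        return max(0.0, margin + d_pos - d_neg)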

Posted Content
TL;DR: In this paper, a new construction of the Euclidean $\Phi^4$ quantum field theory on $\mathbb{R}^3$ based on PDE arguments is presented, starting from an approximation of the stochastic quantization equation on a periodic lattice of mesh size $\varepsilon$ and side length $M$.
Abstract: We present a new construction of the Euclidean $\Phi^4$ quantum field theory on $\mathbb{R}^3$ based on PDE arguments. More precisely, we consider an approximation of the stochastic quantization equation on $\mathbb{R}^3$ defined on a periodic lattice of mesh size $\varepsilon$ and side length $M$. We introduce a new renormalized energy method in weighted spaces and prove tightness of the corresponding Gibbs measures as $\varepsilon \rightarrow 0$, $M \rightarrow \infty$. Every limit point is non-Gaussian and satisfies reflection positivity, translation invariance and stretched exponential integrability. These properties allow to verify the Osterwalder--Schrader axioms for a Euclidean QFT apart from rotation invariance and clustering. Our argument applies to arbitrary positive coupling constant, to multicomponent models with $O(N)$ symmetry and to some long-range variants. Moreover, we establish an integration by parts formula leading to the hierarchy of Dyson--Schwinger equations for the Euclidean correlation functions. To this end, we identify the renormalized cubic term as a \emph{distribution} on the space of Euclidean fields.
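
The stochastic quantization equation in question has the schematic renormalized form (schematic; the constant $c_\varepsilon$ diverges as the mesh size $\varepsilon \to 0$):

$$ (\partial_t - \Delta)\,\phi \;=\; -\lambda\,\phi^3 + c_\varepsilon\,\phi + \xi, $$

with $\xi$ space-time white noise; the $\Phi^4_3$ measure arises as the invariant measure of this dynamics.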

Journal ArticleDOI
TL;DR: A point-to-set principle is proved that enables one to use the (relativized, constructive) dimension of a single point in a set E in a Euclidean space to establish a lower bound on the (classical) Hausdorff dimension of E.
Abstract: We formulate the conditional Kolmogorov complexity of x given y at precision r, where x and y are points in Euclidean spaces and r is a natural number. We demonstrate the utility of this notion in two ways. (1) We prove a point-to-set principle that enables one to use the (relativized, constructive) dimension of a single point in a set E in a Euclidean space to establish a lower bound on the (classical) Hausdorff dimension of E. We then use this principle, together with conditional Kolmogorov complexity in Euclidean spaces, to give a new proof of the known, two-dimensional case of the Kakeya conjecture. This theorem of geometric measure theory, proved by Davies in 1971, says that every plane set containing a unit line segment in every direction has Hausdorff dimension 2. (2) We use conditional Kolmogorov complexity in Euclidean spaces to develop the lower and upper conditional dimensions dim(x|y) and Dim(x|y) of x given y, where x and y are points in Euclidean spaces. Intuitively, these are the lower and upper asymptotic algorithmic information densities of x conditioned on the information in y. We prove that these conditional dimensions are robust and that they have the correct information-theoretic relationships with the well-studied dimensions dim(x) and Dim(x) and the mutual dimensions mdim(x : y) and Mdim(x : y).
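
The conditional dimensions are obtained from conditional Kolmogorov complexity at precision r in the expected way (standard form of such definitions):

$$ \dim(x \mid y) = \liminf_{r\to\infty} \frac{K_r(x \mid y)}{r}, \qquad \mathrm{Dim}(x \mid y) = \limsup_{r\to\infty} \frac{K_r(x \mid y)}{r}. $$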

Journal ArticleDOI
TL;DR: The Stolarsky invariance principle connects the spherical cap discrepancy of a finite point set on the sphere to the pairwise sum of Euclidean distances between the points as discussed by the authors, and it is shown that the hemisphere discrepancy is related to the sum of geodesic distances.
Abstract: The classical Stolarsky invariance principle connects the spherical cap $L^2$ discrepancy of a finite point set on the sphere to the pairwise sum of Euclidean distances between the points. In this paper, we further explore and extend this phenomenon. In addition to a new elementary proof of this fact, we establish several new analogs, which relate various notions of discrepancy to different discrete energies. In particular, we find that the hemisphere discrepancy is related to the sum of geodesic distances. We also extend these results to arbitrary measures on the sphere and arbitrary notions of discrepancy and apply them to problems of energy optimization and combinatorial geometry and find that, surprisingly, the geodesic distance energy behaves differently than its Euclidean counterpart.
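
For reference, the classical principle reads (for points $x_1,\dots,x_N$ on $S^d$ with normalized surface measure $\sigma$ and a dimensional constant $c_d$):

$$ c_d\, D_{L^2,\mathrm{cap}}(x_1,\dots,x_N)^2 \;=\; \iint_{S^d\times S^d} |x-y|\; d\sigma(x)\, d\sigma(y) \;-\; \frac{1}{N^2}\sum_{i,j=1}^{N} |x_i-x_j|, $$

so maximizing the pairwise sum of Euclidean distances is equivalent to minimizing the cap discrepancy.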

Posted Content
TL;DR: In this article, a family of nested geodesically convex cones is used to embed directed acyclic graphs in hyperbolic spaces, which can model tree-like structures better than Euclidean geometry.
Abstract: Learning graph representations via low-dimensional embeddings that preserve relevant network properties is an important class of problems in machine learning. We here present a novel method to embed directed acyclic graphs. Following prior work, we first advocate for using hyperbolic spaces which provably model tree-like structures better than Euclidean geometry. Second, we view hierarchical relations as partial orders defined using a family of nested geodesically convex cones. We prove that these entailment cones admit an optimal shape with a closed form expression both in the Euclidean and hyperbolic spaces, and they canonically define the embedding learning process. Experiments show significant improvements of our method over strong recent baselines both in terms of representational capacity and generalization.

Proceedings Article
01 May 2018
TL;DR: In this paper, the squared 2-Wasserstein distance is defined as the weighted sum of the squared Euclidean distance between means and the squared Bures distance between covariance matrices.
Abstract: Embedding complex objects as vectors in low dimensional spaces is a longstanding problem in machine learning. We propose in this work an extension of that approach, which consists in embedding objects as elliptical probability distributions, namely distributions whose densities have elliptical level sets. We endow these measures with the 2-Wasserstein metric, with two important benefits: (i) For such measures, the squared 2-Wasserstein metric has a closed form, equal to a weighted sum of the squared Euclidean distance between means and the squared Bures metric between covariance matrices. The latter is a Riemannian metric between positive semi-definite matrices, which turns out to be Euclidean on a suitable factor representation of such matrices, which is valid on the entire geodesic between these matrices. (ii) The 2-Wasserstein distance boils down to the usual Euclidean metric when comparing Diracs, and therefore provides a natural framework to extend point embeddings. We show that for these reasons Wasserstein elliptical embeddings are more intuitive and yield tools that are better behaved numerically than the alternative choice of Gaussian embeddings with the Kullback-Leibler divergence. In particular, and unlike previous work based on the KL geometry, we learn elliptical distributions that are not necessarily diagonal. We demonstrate the advantages of elliptical embeddings by using them for visualization, to compute embeddings of words, and to reflect entailment or hypernymy.

Journal ArticleDOI
TL;DR: In this paper, the Palatini formalism is developed for gravitational theories in flat geometries, leading in particular to a geometrical framework in which a purely inertial connection can be completely removed by a coordinate gauge choice, the coincident gauge.
Abstract: The Palatini formalism is developed for gravitational theories in flat geometries. We focus on two particularly interesting scenarios. First, we fix the connection to be metric compatible, but we follow a completely covariant approach by imposing the constraints with suitable Lagrange multipliers. For a general quadratic theory we show how torsion naturally propagates and we reproduce the Teleparallel Equivalent of General Relativity as a particular quadratic action that features an additional Lorentz symmetry. We then study the much less explored theories formulated in a geometry with neither curvature nor torsion, so that all the geometrical information is encoded in the non-metricity. We discuss how this geometrical framework leads to a purely inertial connection that can thus be completely removed by a coordinate gauge choice, the coincident gauge. From the quadratic theory we recover a simpler formulation of General Relativity in the form of the Einstein action, which enjoys an enhanced symmetry that reduces to a second linearised diffeomorphism at linear order. More general theories in both geometries can be formulated consistently by taking into account the inertial connection and the associated additional degrees of freedom. As immediate applications, the new cosmological equations and their Newtonian limit are considered, where the role of the lapse in the consistency of the equations is clarified, and the Schwarzschild black hole entropy is computed by evaluating the corresponding Euclidean action. We discuss how the boundary terms in the usual formulation of General Relativity are related to different choices of coordinates in its coincident version and show that in isotropic coordinates the Euclidean action is finite without the need to introduce boundary or normalisation terms.

Posted Content
TL;DR: This paper develops new techniques for maximum likelihood inference in latent space, and addresses the computational complexity of using geometric algorithms with high-dimensional data by training a separate neural network to approximate the Riemannian metric and cometric tensor capturing the shape of the learned data manifold.
Abstract: Given data, deep generative models, such as variational autoencoders (VAE) and generative adversarial networks (GAN), train a lower dimensional latent representation of the data space. The linear Euclidean geometry of data space pulls back to a nonlinear Riemannian geometry on the latent space. The latent space thus provides a low-dimensional nonlinear representation of data and classical linear statistical techniques are no longer applicable. In this paper we show how statistics of data in their latent space representation can be performed using techniques from the field of nonlinear manifold statistics. Nonlinear manifold statistics provide generalizations of Euclidean statistical notions including means, principal component analysis, and maximum likelihood fits of parametric probability distributions. We develop new techniques for maximum likelihood inference in latent space, and address the computational complexity of using geometric algorithms with high-dimensional data by training a separate neural network to approximate the Riemannian metric and cometric tensor capturing the shape of the learned data manifold.
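
The pulled-back metric itself is the Gram matrix of the decoder Jacobian; a minimal PyTorch sketch of this generic construction (the toy decoder stands in for a trained generator):

    import torch

    def pullback_metric(decoder, z):
        # Latent metric pulled back from Euclidean data space:
        # G(z) = J_g(z)^T J_g(z), with J_g the Jacobian of the decoder g.
        J = torch.autograd.functional.jacobian(decoder, z)
        return J.T @ J

    decoder = torch.nn.Sequential(torch.nn.Linear(2, 16), torch.nn.Tanh(),
                                  torch.nn.Linear(16, 10))
    G = pullback_metric(decoder, torch.zeros(2))   # 2 x 2 metric tensor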

Posted Content
16 Mar 2018
TL;DR: In this paper, the Riemannian structures of the probability simplex on a weighted graph introduced by the $L^2$-Wasserstein metric are studied, their geometry formulas are established in Euclidean coordinates, and several examples of differential equations are demonstrated.
Abstract: We study the Riemannian structures of the probability simplex on a weighted graph introduced by the $L^2$-Wasserstein metric. The main idea is to embed the probability simplex as a submanifold of the positive orthant. From this embedding, we establish the geometry formulas of the probability simplex in Euclidean coordinates. The geometry computations on the discrete simplex guide us to introduce the ones in the Fréchet manifold of densities supported on a finite dimensional base manifold. Following the steps of Nelson, Bakry-Émery, Lott-Villani-Sturm and the geometry of the density manifold, we demonstrate an identity that connects the Bakry-Émery $\Gamma_2$ operator (carré du champ itéré) and Yano's formula on the base manifold. Several examples of differential equations in the probability simplex are demonstrated.

Journal ArticleDOI
TL;DR: In this article, the authors studied the magnitudes of compact sets in Euclidean spaces and showed that the magnitude of an odd-dimensional ball is a rational function of its radius, thus disproving the general form of the Leinster-Willerton conjecture.
Abstract: The notion of the magnitude of a metric space was introduced by Leinster and developed in works by Leinster, Meckes and Willerton, but the magnitudes of familiar sets in Euclidean space are only understood in relatively few cases. In this paper we study the magnitudes of compact sets in Euclidean spaces. We first describe the asymptotics of the magnitude of such sets in both the small- and large-scale regimes. We then consider the magnitudes of compact convex sets with nonempty interior in Euclidean spaces of odd dimension, and relate them to the boundary behaviour of solutions to certain naturally associated higher order elliptic boundary value problems in exterior domains. We carry out calculations leading to an algorithm for explicit evaluation of the magnitudes of balls, and this establishes the convex magnitude conjecture of Leinster and Willerton in the special case of balls in dimension three. In general the magnitude of an odd-dimensional ball is a rational function of its radius, thus disproving the general form of the Leinster-Willerton conjecture. In addition to Fourier-analytic and PDE techniques, the arguments also involve some combinatorial considerations.
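
For a finite metric space the magnitude is elementary to state (the standard definition; compact sets are then handled by approximation):

$$ Z_{ij} = e^{-d(x_i,x_j)}, \qquad |X| = \sum_i w_i \;\;\text{where}\;\; Zw = \mathbf{1}, $$

i.e. the sum of the entries of a weighting $w$ solving $Zw = \mathbf{1}$.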

Posted Content
TL;DR: In this paper, the authors provide a systematic study of the circumcenter of sets containing finitely many points in Hilbert space, motivated by recent works of Behling, Bello Cruz, and Santos on accelerated versions of the Douglas-Rachford method.
Abstract: A well-known object in classical Euclidean geometry is the circumcenter of a triangle, i.e., the point that is equidistant from all vertices. The purpose of this paper is to provide a systematic study of the circumcenter of sets containing finitely many points in Hilbert space. This is motivated by recent works of Behling, Bello Cruz, and Santos on accelerated versions of the Douglas-Rachford method. We present basic results and properties of the circumcenter. Several examples are provided to illustrate the tightness of various assumptions.
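
A minimal numpy sketch of computing the circumcenter of finitely many affinely independent points, writing c = p0 + A·alpha in their affine hull and solving the equidistance conditions (a standard linear-algebra route; the paper works abstractly in Hilbert space):

    import numpy as np

    def circumcenter(P):
        # P: (m+1, d) array of affinely independent points.
        p0, rest = P[0], P[1:]
        A = (rest - p0).T                    # d x m matrix of edge vectors
        G = A.T @ A                          # Gram matrix
        g = 0.5 * np.sum((rest - p0) ** 2, axis=1)
        return p0 + A @ np.linalg.solve(G, g)

    P = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
    print(circumcenter(P))                   # [0.5 0.5]: equidistant from all three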

Journal ArticleDOI
TL;DR: In this article, the half-space depth for multivariate data with notions from convex and affine geometry is discussed, which is a generalization of a measure of symmetry for convex sets, well studied in geometry.
Abstract: Little known relations of the renowned concept of the halfspace depth for multivariate data with notions from convex and affine geometry are discussed. Halfspace depth may be regarded as a measure of symmetry for random vectors. As such, the depth stands as a generalization of a measure of symmetry for convex sets, well studied in geometry. Under a mild assumption, the upper level sets of the halfspace depth coincide with the convex floating bodies used in the definition of the affine surface area for convex bodies in Euclidean spaces. These connections enable us to partially resolve some persistent open problems regarding theoretical properties of the depth.
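
Recall the standard definition: the halfspace depth of a point $x$ with respect to a probability measure $P$ on $\mathbb{R}^d$ is

$$ \mathrm{HD}(x;P) \;=\; \inf\big\{\, P(H) : H \text{ a closed halfspace with } x \in H \,\big\}, $$

and its upper level sets $\{x : \mathrm{HD}(x;P) \geq \delta\}$ are the convex bodies matched above with convex floating bodies.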

Journal ArticleDOI
TL;DR: In this paper, a relation between the derivative of the volume of the floating body and a certain surface area measure, which is called the floating area, is established, which coincides with the well known affine surface area, a powerful tool in affine geometry of convex bodies.
Abstract: We carry out a systematic investigation on floating bodies in real space forms. A new unifying approach not only allows us to treat the important classical case of Euclidean space as well as the recent extension to the Euclidean unit sphere, but also the new extension of floating bodies to hyperbolic space. Our main result establishes a relation between the derivative of the volume of the floating body and a certain surface area measure, which we called the floating area. In the Euclidean setting the floating area coincides with the well known affine surface area, a powerful tool in the affine geometry of convex bodies.
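
In the Euclidean case the relation is of the following classical form (schematic; the constants and the space-form generalization are the content of the paper): with $K_\delta$ the convex floating body obtained by cutting off caps of volume $\delta$ from a convex body $K \subset \mathbb{R}^d$,

$$ \lim_{\delta\to 0}\, \frac{\mathrm{vol}(K) - \mathrm{vol}(K_\delta)}{\delta^{2/(d+1)}} \;=\; c_d\, \mathrm{as}(K), $$

with $\mathrm{as}(K)$ the affine surface area of $K$.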