Author

Dhruv Kohli

Bio: Dhruv Kohli is an academic researcher at the University of California, San Diego. His research topics include the Poincaré disk model and convex sets. He has an h-index of 2 and has co-authored 5 publications receiving 8 citations.

Papers
Posted Content
TL;DR: Low Distortion Local Eigenmaps (LDLE) is a manifold learning technique that constructs a set of low-distortion local views of a dataset in a lower dimension and registers them to obtain a global embedding.
Abstract: We present Low Distortion Local Eigenmaps (LDLE), a manifold learning technique which constructs a set of low distortion local views of a dataset in lower dimension and registers them to obtain a global embedding. The local views are constructed using the global eigenvectors of the graph Laplacian and are registered using Procrustes analysis. The choice of these eigenvectors may vary across the regions. In contrast to existing techniques, LDLE is more geometric and can embed manifolds without boundary as well as non-orientable manifolds into their intrinsic dimension.
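
The registration step can be illustrated compactly. Below is a minimal sketch of aligning one local view to another with orthogonal Procrustes analysis, assuming both arrays list the same overlap points in the same order; the function name and the use of scipy's orthogonal_procrustes are our illustrative choices, not the authors' LDLE implementation.

import numpy as np
from scipy.linalg import orthogonal_procrustes

def register_views(view_a, view_b):
    # view_a, view_b: (n_points, d) arrays giving two local embeddings
    # of the same points (hypothetical inputs for this sketch).
    mu_a, mu_b = view_a.mean(axis=0), view_b.mean(axis=0)
    # Center both views so only a rotation remains to be solved for.
    a, b = view_a - mu_a, view_b - mu_b
    # Orthogonal Procrustes: the orthogonal R minimizing ||b @ R - a||_F.
    R, _ = orthogonal_procrustes(b, a)
    # Apply the rotation and move view_b into view_a's frame.
    return (view_b - mu_b) @ R + mu_a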

4 citations

Journal Article (DOI)
TL;DR: Using the Poincaré disk model of hyperbolic geometry, the authors prove that radial expansion of a hyperbolic convex set about a point inside it preserves hyperbolic convexity, and, using stereographic projection, that radial contraction of a spherical convex set about a point inside it preserves spherical convexity whenever the initial set is contained in the closed hemisphere centred at that point.
Abstract: On a flat plane, convexity of a set is preserved by both radial expansion and contraction of the set about any point inside it. Using the Poincaré disk model of hyperbolic geometry, we prove that radial expansion of a hyperbolic convex set about a point inside it always preserves hyperbolic convexity. Using stereographic projection of a sphere, we prove that radial contraction of a spherical convex set about a point inside it, such that the initial set is contained in the closed hemisphere centred at that point, always preserves spherical convexity.
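
For concreteness, the radial maps in question can be written out explicitly. The following formalization follows the standard definition and is our assumption, since the abstract does not spell it out: let $(M, d)$ be the hyperbolic plane or the sphere with its geodesic distance, fix $p \in M$, and let the radial map $E_{p,\lambda}$ send each point $x \neq p$ to the point on the geodesic ray from $p$ through $x$ satisfying
\[
  d\bigl(p,\, E_{p,\lambda}(x)\bigr) = \lambda\, d(p, x),
\]
where $\lambda > 1$ gives an expansion and $0 < \lambda < 1$ a contraction. The results above identify conditions under which $E_{p,\lambda}$ maps convex sets to convex sets.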

2 citations

Journal Article (DOI)
TL;DR: Using the Poincaré disk model of hyperbolic geometry, the authors prove that radial expansion of a hyperbolic convex set about a point inside it preserves hyperbolic convexity, and, using stereographic projection, that radial contraction of a spherical convex set about a point inside it preserves spherical convexity whenever the initial set is contained in the closed hemisphere centred at that point.
Abstract: On a flat plane, convexity of a set is preserved by both radial expansion and contraction of the set about any point inside it. Using the Poincaré disk model of hyperbolic geometry, we prove that radial expansion of a hyperbolic convex set about a point inside it always preserves hyperbolic convexity. Using stereographic projection of a sphere, we prove that radial contraction of a spherical convex set about a point inside it, such that the initial set is contained in the closed hemisphere centred at that point, always preserves spherical convexity.

1 citation

Journal Article (DOI)
TL;DR: The authors generalize their earlier result by showing that the asymmetric expansion of a hyperbolic convex set about any point inside it also results in a hyperbolic convex set.
Abstract: In an earlier paper we showed that the radial expansion of a hyperbolic convex set in the Poincaré disk about any point inside it results in a hyperbolic convex set. In this work, we generalize this result by showing that the asymmetric expansion of a hyperbolic convex set about any point inside it also results in a hyperbolic convex set.

1 citation

Journal Article (DOI)
TL;DR: The authors characterize the non-degeneracy of an alignment in the noisy setting based on the kernel and positivity of a certain matrix, and obtain necessary and sufficient conditions on the overlapping structure of the views for a locally rigid realization.
Abstract: Given a set of overlapping local views (patches) of a dataset, we consider the problem of finding a rigid alignment of the views that minimizes a $2$-norm based alignment error. In general, the views are noisy and a perfect alignment may not exist. In this work, we characterize the non-degeneracy of an alignment in the noisy setting based on the kernel and positivity of a certain matrix. This leads to a polynomial time algorithm for testing the non-degeneracy of a given alignment. Consequently, we focus on Riemannian gradient descent for minimization of the error and obtain a sufficient condition on an alignment for the algorithm to converge (locally) linearly to it. In the case of noiseless views, a perfect alignment exists, resulting in a realization of the points that respects the geometry of the views. Under a mild condition on the views, we show that the non-degeneracy of a perfect alignment is equivalent to the local rigidity of the resulting realization. By specializing the characterization of a non-degenerate alignment to the noiseless setting, we obtain necessary and sufficient conditions on the overlapping structure of the views for a locally rigid realization. Similar results are also obtained in the context of global rigidity.
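
To fix notation, the alignment problem can plausibly be stated as follows; the symbols below are ours, not the paper's. Given patches $P_1, \dots, P_m$ with local coordinates $x_k^{(i)} \in \mathbb{R}^d$ for each point $k \in P_i$, a rigid alignment chooses transforms $(O_i, t_i) \in O(d) \times \mathbb{R}^d$ and global positions $y_k$ minimizing the $2$-norm based error
\[
  \mathcal{E} \;=\; \sum_{i=1}^{m} \sum_{k \in P_i} \bigl\lVert O_i x_k^{(i)} + t_i - y_k \bigr\rVert_2^2,
\]
and a perfect alignment is one achieving $\mathcal{E} = 0$, which the abstract guarantees in the noiseless setting.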

Cited by
Book
28 Oct 1996
TL;DR: A textbook on the differential geometry of curves and surfaces, covering curvature, constant mean curvature and minimal surfaces, geodesics, holonomy and the Gauss-Bonnet theorem, and the calculus of variations, with computations carried out in MAPLE.
Abstract (table of contents): Preface.
1. The Geometry of Curves: Introduction; Arclength Parametrization; Frenet Formulas; Nonunit Speed Curves; Some Implications of Curvature and Torsion; The Geometry of Curves and MAPLE.
2. Surfaces: Introduction; The Geometry of Surfaces; The Linear Algebra of Surfaces; Normal Curvature; Plotting Surfaces in MAPLE.
3. Curvature(s): Introduction; Calculating Curvature; Surfaces of Revolution; A Formula for Gaussian Curvature; Some Effects of Curvature(s); Surfaces of Delaunay; Calculating Curvature with MAPLE.
4. Constant Mean Curvature Surfaces: Introduction; First Notions in Minimal Surfaces; Area Minimization; Constant Mean Curvature; Harmonic Functions.
5. Geodesics, Metrics and Isometries: Introduction; The Geodesic Equations and the Clairaut Relation; A Brief Digression on Completeness; Surfaces not in R^3; Isometries and Conformal Maps; Geodesics and MAPLE.
6. Holonomy and the Gauss-Bonnet Theorem: Introduction; The Covariant Derivative Revisited; Parallel Vector Fields and Holonomy; Foucault's Pendulum; The Angle Excess Theorem; The Gauss-Bonnet Theorem; Geodesic Polar Coordinates.
7. Minimal Surfaces and Complex Variables: Complex Variables; Isothermal Coordinates; The Weierstrass-Enneper Representations; Björling's Problem; Minimal Surfaces which are not Area Minimizing; Minimal Surfaces and MAPLE.
8. The Calculus of Variations and Geometry: The Euler-Lagrange Equations; The Basic Examples; The Weierstrass E-Function; Problems with Constraints; Further Applications to Geometry and Mechanics; The Pontryagin Maximum Principle; The Calculus of Variations and MAPLE.
9. A Glimpse at Higher Dimensions: Introduction; Manifolds; The Covariant Derivative; Christoffel Symbols; Curvatures; The Charming Doubleness.
List of Examples, Definitions and Remarks. Answers and Hints to Selected Exercises. References. Index.

82 citations

Journal Article (DOI)
TL;DR: A set of novel multiscale basis transforms for signals on graphs that utilize their "dual" domains by incorporating the "natural" distances between graph Laplacian eigenvectors, rather than simply using the eigenvalue ordering.
Abstract: We introduce a set of novel multiscale basis transforms for signals on graphs that utilize their “dual” domains by incorporating the “natural” distances between graph Laplacian eigenvectors, rather than simply using the eigenvalue ordering. These basis dictionaries can be seen as generalizations of the classical Shannon wavelet packet dictionary to arbitrary graphs, and do not rely on the frequency interpretation of Laplacian eigenvalues. We describe the algorithms (involving either vector rotations or orthogonalizations) to construct these basis dictionaries, use them to efficiently approximate graph signals through the best basis search, and demonstrate the strengths of these basis dictionaries for graph signals measured on sunflower graphs and street networks.
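
As a toy illustration of the ingredients involved, the sketch below builds a small graph, computes its Laplacian eigenvectors, and measures sign-invariant pairwise distances between them. The path graph and the squared-entry distance are crude stand-ins of our own; the paper's "natural" distances are more sophisticated.

import numpy as np
from scipy.spatial.distance import pdist, squareform

n = 8                                # toy path graph on n vertices
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0  # adjacency matrix of the path
L = np.diag(A.sum(axis=1)) - A       # unnormalized graph Laplacian
_, eigvecs = np.linalg.eigh(L)       # columns are eigenvectors
# Squaring the entries gives a sign-invariant energy profile per
# eigenvector; Euclidean distance between profiles is a crude proxy
# for the paper's natural distances between eigenvectors.
D = squareform(pdist(eigvecs.T ** 2))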

11 citations

Journal Article (DOI)
TL;DR: The authors generalize their earlier result by showing that the asymmetric expansion of a hyperbolic convex set about any point inside it also results in a hyperbolic convex set.
Abstract: In an earlier paper we showed that the radial expansion of a hyperbolic convex set in the Poincaré disk about any point inside it results in a hyperbolic convex set. In this work, we generalize this result by showing that the asymmetric expansion of a hyperbolic convex set about any point inside it also results in a hyperbolic convex set.

1 citation

Journal Article (DOI)
TL;DR: A set of methods forming a three-step framework that overcomes the readability and scalability issues of existing methods, makes class boundaries compact, readable, and controllable, and finds an ideal position for each label that matches human visual preference.
Abstract: Drawing boundaries and appending text labels for each class of a multi-class scatterplot are two common steps to help people perceive and understand class-level spatial and semantic information hidden in the scatterplot. However, massive numbers of data points, highly overlapped classes, widespread outliers, and extremely non-uniform point density lead to readability and scalability issues with existing methods. In this paper, we propose a set of methods that form a three-step framework to overcome these issues. We make the boundaries compact, readable, and controllable, and find an ideal position for each label that matches human visual preference. In the first step, we use an MST-based clustering algorithm to further divide classes into clusters and remove class-level outliers to avoid the distortion of boundaries. A stroke-based interaction is integrated into the algorithm, allowing the user to quickly correct the identified clusters or to materialize the clusters they have in mind. In the second step, we design a grid-based boundary construction pipeline that enables the user to tighten the boundary into the main distribution region of its corresponding class in a controlled manner by gradually filtering out cluster-level outliers. Gridding improves scalability at the scale of data points and helps users gain insights by generating different distributions of classes based on a relative or absolute density threshold. In the third step, by combining three factors (the boundary of the target cluster, the boundary of the label, and the density distribution of the target cluster), we place each label close to its visually ideal position. Rich illustrations and two case studies demonstrate the effectiveness of our methods.
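
The first step lends itself to a short sketch: build a minimum spanning tree over one class's points and cut unusually long edges to obtain clusters. The cut rule (a multiple of the mean MST edge length) and all names here are our illustrative assumptions, not the paper's exact criterion.

import numpy as np
from scipy.sparse.csgraph import connected_components, minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def mst_clusters(points, cut_factor=2.0):
    # points: (n, 2) array of one class's scatterplot positions.
    dists = squareform(pdist(points))
    mst = minimum_spanning_tree(dists).toarray()
    # Cut edges much longer than the average MST edge (assumed rule).
    edges = mst[mst > 0]
    mst[mst > cut_factor * edges.mean()] = 0.0
    n_clusters, labels = connected_components(mst, directed=False)
    return n_clusters, labels

Small clusters found this way could then be flagged as the class-level outliers that the abstract removes before boundary construction.
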
Posted Content
TL;DR: In this paper, the authors explore the use of a topological manifold, represented as a collection of charts, as the target space of neural network based representation learning tasks and demonstrate its effectiveness by adjusting SimCLR (for representation learning) and standard triplet loss training (for metric learning).
Abstract: We explore the use of a topological manifold, represented as a collection of charts, as the target space of neural network based representation learning tasks. This is achieved by a simple adjustment to the output of an encoder's network architecture plus the addition of a maximum mean discrepancy (MMD) based loss function for regularization. Most algorithms in representation and metric learning are easily adaptable to our framework and we demonstrate its effectiveness by adjusting SimCLR (for representation learning) and standard triplet loss training (for metric learning) to have manifold encoding spaces. Our experiments show that we obtain a substantial performance boost over the baseline for low dimensional encodings. In the case of triplet training, we also find, independent of the manifold setup, that the MMD loss alone (i.e. keeping a flat, Euclidean target space but using an MMD loss to regularize it) increases performance over the baseline in the typical, high-dimensional Euclidean target spaces. Code for reproducing experiments is provided at this https URL.
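
The MMD regularizer itself is standard and easy to sketch. Below is a biased Gaussian-kernel estimate of squared MMD between encoder outputs and samples from a target distribution on the charts; the bandwidth, the names, and the use of PyTorch are our assumptions, not details from the paper.

import torch

def mmd_loss(x, y, bandwidth=1.0):
    # x: encoder outputs, y: samples from the target distribution,
    # both of shape (batch, dim). Returns a biased estimate of
    # squared MMD under a Gaussian kernel with the given bandwidth.
    def gram(a, b):
        d2 = torch.cdist(a, b) ** 2
        return torch.exp(-d2 / (2.0 * bandwidth ** 2))
    return gram(x, x).mean() + gram(y, y).mean() - 2.0 * gram(x, y).mean()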