Author

Megha Khosla

Other affiliations: Max Planck Society
Bio: Megha Khosla is an academic researcher from Leibniz University of Hanover. She has contributed to research in the topics of computer science and graphs (as abstract data types), has an h-index of 10, and has co-authored 47 publications receiving 297 citations. Previous affiliations of Megha Khosla include the Max Planck Society.

Papers published on a yearly basis

Papers
Journal ArticleDOI
TL;DR: In this article, a new scheme of quantum information processing based on spin coherent states of two-component Bose-Einstein condensates was proposed, which goes beyond the continuous-variable regime such that the full space of the Bloch sphere is used.
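For context, a standard Bloch-sphere parametrization of the spin coherent state of a two-component condensate of $N$ atoms (a textbook definition, not a formula taken from the paper; $a^{\dagger}$ and $b^{\dagger}$ denote creation operators for the two components):

```latex
% Spin coherent state of N two-level bosons: every point (theta, phi)
% on the Bloch sphere corresponds to a macroscopic state (standard
% definition, not reproduced from the paper itself).
\[
  |\theta,\phi\rangle
    = \frac{1}{\sqrt{N!}}
      \left( \cos\tfrac{\theta}{2}\, a^{\dagger}
             + e^{i\phi} \sin\tfrac{\theta}{2}\, b^{\dagger} \right)^{N}
      |\mathrm{vac}\rangle .
\]
```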

50 citations

Book ChapterDOI
16 Sep 2019
TL;DR: In this paper, the authors propose an alternating random walk strategy that generates role-specific vertex neighborhoods and trains node embeddings in their corresponding source/target roles while fully exploiting the semantics of directed graphs; the method maintains separate views, or embedding spaces, for the two distinct node roles induced by the directionality of the edges.
Abstract: We propose a novel approach for learning node representations in directed graphs, which maintains separate views or embedding spaces for the two distinct node roles induced by the directionality of the edges. We argue that previous approaches either fail to encode the edge directionality or produce encodings that cannot be generalized across tasks. With our simple alternating random walk strategy, we generate role-specific vertex neighborhoods and train node embeddings in their corresponding source/target roles while fully exploiting the semantics of directed graphs. We also unearth the limitations of evaluations on directed graphs in previous works and propose a clear strategy for evaluating link prediction and graph reconstruction in directed graphs. We conduct extensive experiments on several real-world datasets to showcase our effectiveness on link prediction, node classification and graph reconstruction tasks. We show that the embeddings from our approach are robust, generalizable and well-performing across multiple kinds of tasks and graphs, and that we consistently outperform all baselines on the node classification task. In addition to providing a theoretical interpretation of our method, we show that it is considerably more robust than other directed graph approaches.
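A minimal sketch of what an alternating random walk could look like (illustrative Python on a toy graph; the function and variable names are assumptions, not the authors' code): from a node in its source role, follow a random out-edge to reach a node in its target role, then a random in-edge backwards to reach a node in its source role, and so on.

```python
import random
from collections import defaultdict

def alternating_walk(out_adj, in_adj, start, length):
    """One alternating random walk on a directed graph.

    Starting from `start` in its source role, alternately follow a
    random out-edge (reaching a node in its target role) and then a
    random in-edge backwards (reaching a node in its source role).
    Returns the sequence of (node, role) pairs visited.
    """
    walk = [(start, "source")]
    node, role = start, "source"
    for _ in range(length):
        if role == "source":
            neighbors = out_adj[node]   # successors: next node acts as a target
            next_role = "target"
        else:
            neighbors = in_adj[node]    # predecessors: next node acts as a source
            next_role = "source"
        if not neighbors:               # dead end: stop the walk early
            break
        node = random.choice(neighbors)
        role = next_role
        walk.append((node, role))
    return walk

# Toy directed graph: 0 -> 1, 2 -> 1, 2 -> 3, 0 -> 3
edges = [(0, 1), (2, 1), (2, 3), (0, 3)]
out_adj, in_adj = defaultdict(list), defaultdict(list)
for u, v in edges:
    out_adj[u].append(v)
    in_adj[v].append(u)

print(alternating_walk(out_adj, in_adj, start=0, length=4))
```

The walks visited in each role would then serve as role-specific contexts for training the source and target embedding spaces separately.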

46 citations

Journal ArticleDOI
TL;DR: In this paper, the authors establish the threshold for the $\ell$-orientability of the random $k$-uniform hypergraph $H_{n,m,k}$ with $n$ vertices and $m$ edges, where an $\ell$-orientation assigns each edge to one of its vertices such that no vertex is assigned more than $\ell$ edges.
Abstract: A $k$-uniform hypergraph $H = (V, E)$ is called $\ell$-orientable if there is an assignment of each edge $e \in E$ to one of its vertices $v \in e$ such that no vertex is assigned more than $\ell$ edges. Let $H_{n,m,k}$ be a hypergraph drawn uniformly at random from the set of all $k$-uniform hypergraphs with $n$ vertices and $m$ edges. In this paper we establish the threshold for the $\ell$-orientability of $H_{n,m,k}$ for all $k \ge 3$ and $\ell \ge 2$, i.e., we determine a critical quantity $c_{k, \ell}^*$ such that with probability $1-o(1)$ the graph $H_{n,cn,k}$ has an $\ell$-orientation if $c < c_{k, \ell}^*$, but fails to have one if $c > c_{k, \ell}^*$. Our result has various applications, including sharp load thresholds for cuckoo hashing, load balancing with guaranteed maximum load, and massive parallel access to hard disk arrays.
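To make the definition concrete, here is a brute-force $\ell$-orientability check for a toy hypergraph (exponential in the number of edges and purely illustrative; the paper's contribution is the asymptotic threshold, not an algorithm like this):

```python
from itertools import product
from collections import Counter

def is_l_orientable(edges, ell):
    """Brute-force test of l-orientability: try every assignment of
    each hyperedge to one of its vertices and check whether some
    choice loads no vertex with more than `ell` edges. Exponential in
    the number of edges; only sensible for toy instances."""
    for choice in product(*edges):          # pick one vertex per edge
        load = Counter(choice)
        if max(load.values()) <= ell:
            return True
    return False

# A 3-uniform hypergraph on vertices {0, 1, 2, 3} with four edges.
edges = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
print(is_l_orientable(edges, ell=1))  # True: each edge gets a distinct vertex
```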

39 citations

Journal ArticleDOI
TL;DR: A large-scale empirical study of nine popular and recent UNRL techniques on 11 real-world datasets with varying structural properties and two common tasks finds that, for non-attributed graphs, there is no single method that is a clear winner; the choice of a suitable method is dictated by properties of the embedding method, the task, and the structural properties of the graph.
Abstract: There has been appreciable progress in unsupervised network representation learning (UNRL) approaches over graphs recently, with flexible random-walk approaches, new optimization objectives and deep architectures. However, there is no common ground for systematic comparison of embeddings to understand their behavior for different graphs and tasks. In this paper we theoretically group different approaches under a unifying framework and empirically investigate the effectiveness of different network representation methods. In particular, we argue that most UNRL approaches either explicitly or implicitly model and exploit the context information of a node. Consequently, we propose a framework that casts a variety of approaches -- random-walk based, matrix factorization and deep learning based -- into a unified context-based optimization function. We systematically group the methods based on their similarities and differences, and study those differences in detail, which we later use to explain their performance differences on downstream tasks. We conduct a large-scale empirical study considering nine popular and recent UNRL techniques and 11 real-world datasets with varying structural properties and two common tasks -- node classification and link prediction. We find that there is no single method that is a clear winner and that the choice of a suitable method is dictated by certain properties of the embedding methods, the task and the structural properties of the underlying graph. In addition, we report common pitfalls in the evaluation of UNRL methods and offer suggestions for experimental design and interpretation of results.
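As a schematic of the kind of context-based objective such a framework unifies (a hedged sketch, not code from the study; depending on the method, the node-context pairs would come from random walks, adjacency, or factorization targets), consider one stochastic update of a skip-gram-with-negative-sampling style loss:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgns_step(emb, ctx, u, v, neg_nodes, lr=0.025):
    """One SGD step on a (node u, context v) pair: pull emb[u] toward
    ctx[v] (an observed context) and push it away from the context
    vectors of sampled negative nodes."""
    g_pos = sigmoid(emb[u] @ ctx[v]) - 1.0       # gradient factor, positive pair
    grad_u = g_pos * ctx[v]
    grad_cv = g_pos * emb[u]
    neg_grads = []
    for n in neg_nodes:
        g_neg = sigmoid(emb[u] @ ctx[n])         # gradient factor, negative pair
        grad_u += g_neg * ctx[n]
        neg_grads.append((n, g_neg * emb[u]))
    emb[u] -= lr * grad_u
    ctx[v] -= lr * grad_cv
    for n, g in neg_grads:
        ctx[n] -= lr * g

# Toy setup: 5 nodes with 8-dimensional node and context embeddings.
rng = np.random.default_rng(0)
emb = rng.normal(scale=0.1, size=(5, 8))
ctx = rng.normal(scale=0.1, size=(5, 8))
sgns_step(emb, ctx, u=0, v=1, neg_nodes=[2, 3])
```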

36 citations

Proceedings ArticleDOI
TL;DR: This work argues that post-processing algorithms aimed only at improving diversity among recommendations lead to discrimination among users; it introduces the notion of user fairness, which has been overlooked in the literature so far, and proposes measures to quantify it.
Abstract: Recent works in recommendation systems have focused on diversity in recommendations as an important aspect of recommendation quality. In this work we argue that post-processing algorithms aimed only at improving diversity among recommendations lead to discrimination among the users. We introduce the notion of user fairness, which has been overlooked in the literature so far, and propose measures to quantify it. Our experiments on two diversification algorithms show that an increase in aggregate diversity results in increased disparity among the users.
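The underlying idea can be illustrated with a simple per-user measure (an illustrative sketch, not the paper's actual fairness metrics): compare each user's recommendation quality before and after the diversity post-processing and summarize how unevenly the quality loss is spread.

```python
import numpy as np

def user_disparity(base_scores, diversified_scores):
    """Per-user quality change after a diversification post-process and
    a simple disparity summary (standard deviation across users). The
    scores could be, e.g., per-user NDCG of the original vs. re-ranked
    lists; both the inputs and the measure are illustrative."""
    loss = np.asarray(base_scores) - np.asarray(diversified_scores)
    return loss, float(np.std(loss))

base = [0.82, 0.75, 0.90, 0.66]      # per-user quality before re-ranking
diverse = [0.80, 0.55, 0.89, 0.40]   # after the diversity post-processing
loss, spread = user_disparity(base, diverse)
print(loss, spread)  # unevenly distributed loss suggests user unfairness
```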

35 citations


Cited by
Journal ArticleDOI

08 Dec 2001 - BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: an odd beast at first encounter, an intruder hovering on the edge of reality, whose surreal nature only intensified with familiarity.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Proceedings ArticleDOI
22 Jan 2006
TL;DR: Some of the major results in random graphs and some of the more challenging open problems are reviewed, including those related to the WWW.
Abstract: We will review some of the major results in random graphs and some of the more challenging open problems. We will cover algorithmic and structural questions. We will touch on newer models, including those related to the WWW.

7,116 citations

Journal Article
TL;DR: An independence criterion based on the eigen-spectrum of covariance operators in reproducing kernel Hilbert spaces (RKHSs), consisting of an empirical estimate of the Hilbert-Schmidt norm of the cross-covariance operator, or HSIC, is proposed.
Abstract: We propose an independence criterion based on the eigen-spectrum of covariance operators in reproducing kernel Hilbert spaces (RKHSs), consisting of an empirical estimate of the Hilbert-Schmidt norm of the cross-covariance operator (we term this a Hilbert-Schmidt Independence Criterion, or HSIC). This approach has several advantages, compared with previous kernel-based independence criteria. First, the empirical estimate is simpler than any other kernel dependence test, and requires no user-defined regularisation. Second, there is a clearly defined population quantity which the empirical estimate approaches in the large sample limit, with exponential convergence guaranteed between the two: this ensures that independence tests based on HSIC do not suffer from slow learning rates. Finally, we show in the context of independent component analysis (ICA) that the performance of HSIC is competitive with that of previously published kernel-based criteria, and of other recently published ICA methods.
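The biased empirical estimator described here has a compact closed form, $\mathrm{HSIC}_b = \operatorname{tr}(KHLH)/(n-1)^2$, with $K$, $L$ the kernel Gram matrices of the two samples and $H$ the centering matrix. A short sketch (the RBF kernel and bandwidth choice are illustrative assumptions):

```python
import numpy as np

def rbf_gram(x, sigma=1.0):
    """RBF (Gaussian) Gram matrix for a 1-D sample."""
    d = x[:, None] - x[None, :]
    return np.exp(-d**2 / (2 * sigma**2))

def hsic_biased(x, y, sigma=1.0):
    """Biased empirical HSIC estimate: tr(K H L H) / (n-1)^2, where H
    is the centering matrix I - (1/n) 11^T."""
    n = len(x)
    K = rbf_gram(np.asarray(x, float), sigma)
    L = rbf_gram(np.asarray(y, float), sigma)
    H = np.eye(n) - np.ones((n, n)) / n
    return float(np.trace(K @ H @ L @ H)) / (n - 1) ** 2

rng = np.random.default_rng(0)
x = rng.normal(size=200)
print(hsic_biased(x, 2 * x + 0.1 * rng.normal(size=200)))  # dependent: larger
print(hsic_biased(x, rng.normal(size=200)))                # independent: near 0
```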

1,134 citations

Proceedings ArticleDOI
02 Dec 2014
TL;DR: Cuckoo filters support adding and removing items dynamically while achieving even higher performance than Bloom filters, and have lower space overhead than space-optimized Bloom filters.
Abstract: In many networking systems, Bloom filters are used for high-speed set membership tests. They permit a small fraction of false positive answers with very good space efficiency. However, they do not permit deletion of items from the set, and previous attempts to extend "standard" Bloom filters to support deletion all degrade either space or performance. We propose a new data structure called the cuckoo filter that can replace Bloom filters for approximate set membership tests. Cuckoo filters support adding and removing items dynamically while achieving even higher performance than Bloom filters. For applications that store many items and target moderately low false positive rates, cuckoo filters have lower space overhead than space-optimized Bloom filters. Our experimental results also show that cuckoo filters substantially outperform previous data structures that extend Bloom filters to support deletions, in both time and space.
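A minimal sketch of the idea (illustrative Python, not the paper's reference implementation; bucket count, fingerprint width and hash choices are assumptions): partial-key cuckoo hashing stores a short fingerprint per item in one of two candidate buckets, where each bucket can be derived from the other using only the fingerprint, which is what makes eviction and deletion possible.

```python
import random

class CuckooFilter:
    """Minimal cuckoo filter sketch. num_buckets must be a power of
    two so the XOR bucket relation below is an involution."""

    def __init__(self, num_buckets=128, bucket_size=4, max_kicks=500):
        self.buckets = [[] for _ in range(num_buckets)]
        self.bucket_size = bucket_size
        self.max_kicks = max_kicks
        self.n = num_buckets

    def _fingerprint(self, item):
        return (hash(item) & 0xFF) or 1          # 8-bit, nonzero fingerprint

    def _indices(self, item, fp):
        i1 = hash(("bucket", item)) % self.n
        i2 = (i1 ^ hash(("fp", fp))) % self.n    # derivable from i1 and fp alone
        return i1, i2

    def insert(self, item):
        fp = self._fingerprint(item)
        i1, i2 = self._indices(item, fp)
        for i in (i1, i2):
            if len(self.buckets[i]) < self.bucket_size:
                self.buckets[i].append(fp)
                return True
        i = random.choice((i1, i2))              # both full: start evicting
        for _ in range(self.max_kicks):
            j = random.randrange(len(self.buckets[i]))
            fp, self.buckets[i][j] = self.buckets[i][j], fp
            i = (i ^ hash(("fp", fp))) % self.n  # evicted fp's alternate bucket
            if len(self.buckets[i]) < self.bucket_size:
                self.buckets[i].append(fp)
                return True
        return False                             # table considered full

    def contains(self, item):
        fp = self._fingerprint(item)
        i1, i2 = self._indices(item, fp)
        return fp in self.buckets[i1] or fp in self.buckets[i2]

    def delete(self, item):
        fp = self._fingerprint(item)
        for i in self._indices(item, fp):
            if fp in self.buckets[i]:
                self.buckets[i].remove(fp)
                return True
        return False

cf = CuckooFilter()
cf.insert("alice"); cf.insert("bob")
print(cf.contains("alice"), cf.contains("carol"))  # True False (w.h.p.)
cf.delete("alice")
print(cf.contains("alice"))                        # False (w.h.p.)
```

Like a Bloom filter, lookups can yield false positives (here, when two items collide on both fingerprint and buckets), but fingerprints can be removed exactly, which is what enables deletion.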

593 citations

MonographDOI
01 Jan 2016
TL;DR: All those interested in discrete mathematics, computer science or applied probability and their applications will find this an ideal introduction to the subject.
Abstract: From social networks such as Facebook, the World Wide Web and the Internet, to the complex interactions between proteins in the cells of our bodies, we constantly face the challenge of understanding the structure and development of networks. The theory of random graphs provides a framework for this understanding, and in this book the authors give a gentle introduction to the basic tools for understanding and applying the theory. Part I includes sufficient material, including exercises, for a one semester course at the advanced undergraduate or beginning graduate level. The reader is then well prepared for the more advanced topics in Parts II and III. A final part provides a quick introduction to the background material needed. All those interested in discrete mathematics, computer science or applied probability and their applications will find this an ideal introduction to the subject.

565 citations