Danai Koutra

Researcher at University of Michigan

Publications - 167
Citations - 6649

Danai Koutra is an academic researcher from the University of Michigan. The author has contributed to research in topics: Computer science & Automatic summarization. The author has an h-index of 29 and has co-authored 147 publications receiving 4896 citations. Previous affiliations of Danai Koutra include the University of California, Riverside and Carnegie Mellon University.

Papers
Journal ArticleDOI

A Provable Framework of Learning Graph Embeddings via Summarization

TL;DR: Li et al. proposed a graph embedding learning framework based on graph summarization, establishing the theoretical grounds for learning from summary graphs and deriving, in closed form, the restoration of node embeddings for three well-known graph embedding approaches.
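
The recipe this TL;DR describes can be sketched compactly: group nodes into supernodes, embed the much smaller summary graph, and restore per-node embeddings from the summary embeddings. A minimal sketch in Python, where the grouping, summary weights, spectral embedding, and copy-based restoration are all illustrative assumptions rather than the paper's exact formulation:

```python
import numpy as np

def summarize(adj, groups):
    """Collapse an n x n adjacency into a k x k summary by summing
    edge weights between supernodes (k = number of groups)."""
    k = groups.max() + 1
    member = np.zeros((adj.shape[0], k))            # n x k membership matrix
    member[np.arange(adj.shape[0]), groups] = 1.0
    return member.T @ adj @ member                  # k x k summary adjacency

def spectral_embed(adj, dim):
    """Toy embedding: top-`dim` eigenvectors of the (small) summary."""
    vals, vecs = np.linalg.eigh(adj)
    return vecs[:, np.argsort(-np.abs(vals))[:dim]]

def restore(summary_emb, groups):
    """Closed-form restoration used in this sketch: every node simply
    inherits its supernode's embedding."""
    return summary_emb[groups]

# Tiny example: 6 nodes in 3 supernodes (hypothetical data).
rng = np.random.default_rng(0)
A = (rng.random((6, 6)) < 0.4).astype(float)
A = np.triu(A, 1); A = A + A.T                      # symmetric, no self-loops
g = np.array([0, 0, 1, 1, 2, 2])
Z = restore(spectral_embed(summarize(A, g), dim=2), g)
print(Z.shape)                                      # (6, 2): one vector per node
```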
Posted Content

Bridging Network Embedding and Graph Summarization

TL;DR: Extensive experiments on both synthetic and real-world graphs show that Multi-LENS achieves a 2-89% improvement in AUC for link prediction while requiring up to 79x less space than existing representation learning approaches.
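
For context on the reported metric: link-prediction AUC is typically computed by scoring candidate edges from node embeddings and ranking held-out positive edges against sampled negatives. A minimal sketch with dot-product scoring and random placeholder embeddings (Multi-LENS itself is not reimplemented here):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def link_auc(emb, pos_edges, neg_edges):
    """AUC for link prediction with dot-product edge scores."""
    def scores(edges):
        return np.array([emb[u] @ emb[v] for u, v in edges])
    y = np.concatenate([np.ones(len(pos_edges)), np.zeros(len(neg_edges))])
    s = np.concatenate([scores(pos_edges), scores(neg_edges)])
    return roc_auc_score(y, s)

# Toy usage with random embeddings (hypothetical data).
rng = np.random.default_rng(1)
emb = rng.normal(size=(100, 16))
pos = [(i, (i + 1) % 100) for i in range(50)]       # held-out "true" edges
neg = [(i, (i + 37) % 100) for i in range(50)]      # sampled non-edges
print(round(link_auc(emb, pos, neg), 3))
```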
Journal ArticleDOI

On Performance Discrepancies Across Local Homophily Levels in Graph Neural Networks

TL;DR: In this article, a new parameter is introduced into the preferential attachment model to enable control of local homophily levels in generated graphs, enabling a systematic empirical study of how local homophily can impact GNN performance.
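
One way such a parameter can work (a hedged sketch, not the paper's exact generator) is to mix degree-proportional attachment with a same-label bonus h in [0, 1]: h near 1 yields locally homophilous neighborhoods, h near 0 heterophilous ones:

```python
import random

def homophilous_ba(n, m, h, seed=0):
    """Preferential attachment with a homophily knob h (illustrative)."""
    rng = random.Random(seed)
    labels = [rng.randint(0, 1) for _ in range(n)]  # binary node classes
    deg = [0] * n
    edges = []
    # Seed with a small clique so early nodes have nonzero degree.
    for u in range(m + 1):
        for v in range(u + 1, m + 1):
            edges.append((u, v)); deg[u] += 1; deg[v] += 1
    for u in range(m + 1, n):
        targets = set()
        while len(targets) < m:
            # Weight = degree * homophily bonus; already-picked targets
            # are zeroed so m distinct neighbors are always found.
            w = [0.0 if v in targets else
                 deg[v] * (h if labels[u] == labels[v] else 1 - h)
                 for v in range(u)]
            total = sum(w)
            if total == 0:
                v = rng.choice([x for x in range(u) if x not in targets])
            else:
                r, acc = rng.random() * total, 0.0
                for v, wv in enumerate(w):
                    acc += wv
                    if acc >= r:
                        break
            targets.add(v)
        for v in targets:
            edges.append((u, v)); deg[u] += 1; deg[v] += 1
    return edges, labels

edges, labels = homophilous_ba(n=200, m=3, h=0.9)
same = sum(labels[u] == labels[v] for u, v in edges) / len(edges)
print(f"edge homophily ~ {same:.2f}")               # high h -> mostly same-label edges
```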
Journal ArticleDOI

Learning node embeddings via summary graphs: a brief theoretical analysis

TL;DR: It is revealed that learning embeddings via graph summarization is in fact learning embeddings on an approximate graph constructed by the configuration model, and an in-depth theoretical analysis of three embedding learning methods based on the introduced kernel matrix is contributed.
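
The configuration-model approximation referred to here has a compact closed form under one common convention: each superedge weight in the summary is spread over its member node pairs in proportion to node degrees. A hedged sketch of that reconstruction (the exact normalization in the paper may differ):

```python
import numpy as np

def config_model_reconstruction(S, groups, deg):
    """A_hat[i, j] = S[g_i, g_j] * d_i * d_j / (D_{g_i} * D_{g_j}),
    where D_g is the total degree mass inside supernode g."""
    groups = np.asarray(groups)
    deg = np.asarray(deg, dtype=float)
    D = np.bincount(groups, weights=deg)            # per-supernode degree mass
    share = deg / D[groups]                         # each node's share of its group
    return S[np.ix_(groups, groups)] * np.outer(share, share)

# Tiny example: 4 nodes in 2 supernodes (hypothetical data).
S = np.array([[2.0, 1.0], [1.0, 2.0]])
A_hat = config_model_reconstruction(S, groups=[0, 0, 1, 1], deg=[1, 3, 2, 2])
print(A_hat.round(2))                               # each block sums back to its S entry
```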
Journal ArticleDOI

Exploring the Design of Adaptation Protocols for Improved Generalization and Machine Learning Safety

TL;DR: It is hypothesized and empirically shown that appropriately pairing data augmentation with the adaptation protocol can substantially mitigate the trade-off between generalization and safety, and that using hardness-promoting augmentations during linear probing (LP) and then fine-tuning (FT) with augmentations is particularly effective for trade-off mitigation.
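
A minimal sketch of the LP-then-FT protocol referenced in this TL;DR, in PyTorch: linear probing (LP) trains only a new head on a frozen backbone, then fine-tuning (FT) unfreezes everything at a smaller learning rate; the `augment` hook stands in for the hardness-promoting augmentations, and every component name here is a placeholder rather than the paper's implementation:

```python
import torch
import torch.nn as nn

def lp_then_ft(backbone, head, loader, augment, lp_epochs=5, ft_epochs=5):
    loss_fn = nn.CrossEntropyLoss()

    def run_epochs(opt, epochs):
        for _ in range(epochs):
            for x, y in loader:
                opt.zero_grad()
                loss = loss_fn(head(backbone(augment(x))), y)
                loss.backward()
                opt.step()

    # Phase 1: linear probing -- freeze the backbone, train the head only.
    for p in backbone.parameters():
        p.requires_grad_(False)
    run_epochs(torch.optim.SGD(head.parameters(), lr=1e-2), lp_epochs)

    # Phase 2: fine-tuning -- unfreeze everything, smaller learning rate.
    for p in backbone.parameters():
        p.requires_grad_(True)
    params = list(backbone.parameters()) + list(head.parameters())
    run_epochs(torch.optim.SGD(params, lr=1e-3), ft_epochs)
    return backbone, head

# Toy usage with synthetic data (all components hypothetical).
backbone = nn.Sequential(nn.Linear(8, 8), nn.ReLU())
head = nn.Linear(8, 2)
data = [(torch.randn(4, 8), torch.randint(0, 2, (4,))) for _ in range(3)]
lp_then_ft(backbone, head, data, augment=lambda x: x + 0.1 * torch.randn_like(x))
```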