Open Access · Journal Article · DOI

Mixed Membership Stochastic Blockmodels

TLDR
In this article, the authors introduce a class of variance allocation models for pairwise measurements, called mixed membership stochastic blockmodels, which combine global parameters that instantiate dense patches of connectivity (blockmodel) with local parameters (mixed membership), and develop a general variational inference algorithm for fast approximate posterior inference.
Abstract
Consider data consisting of pairwise measurements, such as presence or absence of links between pairs of objects. These data arise, for instance, in the analysis of protein interactions and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing pairwise measurements with probabilistic models requires special assumptions, since the usual independence or exchangeability assumptions no longer hold. Here we introduce a class of variance allocation models for pairwise measurements: mixed membership stochastic blockmodels. These models combine global parameters that instantiate dense patches of connectivity (blockmodel) with local parameters that instantiate node-specific variability in the connections (mixed membership). We develop a general variational inference algorithm for fast approximate posterior inference. We demonstrate the advantages of mixed membership stochastic blockmodels with applications to social networks and protein interaction networks.
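To make the generative process concrete, the following is a minimal sketch in Python/NumPy of the sampling steps the abstract describes: each node draws a mixed-membership vector over latent blocks, and each directed pair draws sender and receiver roles whose block affinity gives the link probability. The parameter values (number of nodes, number of blocks, the Dirichlet hyperparameter, and the affinity matrix B) are illustrative assumptions rather than values from the paper, and the sketch covers only the generative side, not the variational inference algorithm.

# Illustrative sketch of the mixed membership stochastic blockmodel generative process.
# Hypothetical parameter values; not the authors' code or their inference algorithm.
import numpy as np

rng = np.random.default_rng(0)

N, K = 20, 3                        # nodes and latent blocks (assumed values)
alpha = np.full(K, 0.1)             # Dirichlet hyperparameter (assumed)
B = 0.05 + 0.9 * np.eye(K)          # blockmodel: dense within-block connectivity (assumed)

pi = rng.dirichlet(alpha, size=N)   # mixed-membership vector for each node

Y = np.zeros((N, N), dtype=int)     # directed adjacency matrix
for p in range(N):
    for q in range(N):
        if p == q:
            continue
        z_pq = rng.choice(K, p=pi[p])             # sender's role for this interaction
        z_qp = rng.choice(K, p=pi[q])             # receiver's role for this interaction
        Y[p, q] = rng.binomial(1, B[z_pq, z_qp])  # link drawn from the block affinity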



Citations
Proceedings Article

Modeling social networks with node attributes using the multiplicative attribute graph model

TL;DR: In this article, the authors present a Multiplicative Attribute Graph (MAG) model that considers nodes with categorical attributes and models the probability of an edge as the product of individual attribute link-formation affinities.
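As a rough sketch of that edge probability (notation assumed here, not taken from the cited paper): if each node u carries L categorical attributes a_1(u), ..., a_L(u), and each attribute i has an affinity matrix \Theta_i over its categories, the MAG model sets

P\big[(u,v) \in E\big] \;=\; \prod_{i=1}^{L} \Theta_i\big[a_i(u),\, a_i(v)\big].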
Journal Article · DOI

Model selection and clustering in stochastic block models based on the exact integrated complete data likelihood

TL;DR: A greedy inference algorithm is proposed that can be employed to analyze large networks (several tens of thousands of nodes and millions of edges) with no convergence problems and exhibits improvements over existing strategies, both in terms of clustering and model selection.
Proceedings Article

Reducing the Rank in Relational Factorization Models by Including Observable Patterns

TL;DR: This work proposes a novel additive tensor factorization model to learn from latent and observable patterns on multi-relational data and presents a scalable algorithm for computing the factorization.
Proceedings Article · DOI

User reputation in a comment rating environment

TL;DR: A novel bias-smoothed tensor model is proposed and empirically shown to significantly outperform a number of alternatives on the Yahoo! News, Yahoo! Buzz, and Epinions datasets.
Other · DOI

Bayesian stochastic blockmodeling

TL;DR: In this article, a self-contained introduction to the use of Bayesian inference to extract large-scale modular structures from network data, based on the stochastic blockmodel (SBM), as well as its degree-corrected and overlapping generalizations, is provided.
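For context, a standard form of the stochastic blockmodel likelihood that such Bayesian treatments build on is (notation assumed here): with adjacency matrix A, block assignments b, and block connection probabilities \theta,

P(A \mid b, \theta) \;=\; \prod_{i<j} \theta_{b_i b_j}^{A_{ij}} \big(1 - \theta_{b_i b_j}\big)^{1 - A_{ij}},

and the degree-corrected and overlapping generalizations mentioned in the summary modify this edge probability.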
References
Journal Article · DOI

Gene Ontology: tool for the unification of biology

TL;DR: The goal of the Gene Ontology Consortium is to produce a dynamic, controlled vocabulary that can be applied to all eukaryotes even as knowledge of gene and protein roles in cells is accumulating and changing.
Journal Article · DOI

Latent Dirichlet allocation

TL;DR: This work proposes a generative model for text and other collections of discrete data that generalizes or improves on several previous models including naive Bayes/unigram, mixture of unigrams, and Hofmann's aspect model.
Journal Article · DOI

Finding scientific topics

TL;DR: A generative model for documents, introduced by Blei, Ng, and Jordan, is described, and a Markov chain Monte Carlo algorithm for inference in this model is presented; the method is applied to abstracts from PNAS, using Bayesian model selection to establish the number of topics.