Learning from labeled and unlabeled data on a directed graph
References
Statistical learning theory
The PageRank Citation Ranking : Bringing Order to the Web
Normalized cuts and image segmentation
Authoritative sources in a hyperlinked environment
Frequently Asked Questions (15)
Q2. What is the function w([u, v]) for a graph?
For a strongly connected graph, there is an integer k ≥ 1 and a unique partition V = V0 ∪ V1 ∪ · · · ∪ Vk−1 such that, for all 0 ≤ r ≤ k − 1, each edge [u, v] ∈ E with u ∈ Vr has v ∈ V(r+1) mod k.
Q3. What is the inverse of the random walk?
Given an n × n invertible matrix A, the time required to compute the inverse A⁻¹ is generally O(n³), and the representation of the inverse requires Ω(n²) space.
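Because of this cost, numerical practice avoids forming the inverse explicitly when only a product A⁻¹b is needed. A minimal sketch (the matrix and right-hand side are assumed toy data, not from the paper):

```python
import numpy as np

# Forming A^-1 costs O(n^3) time and Omega(n^2) memory; for a single
# right-hand side it is cheaper and more stable to solve A x = b directly.
rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned by construction
b = rng.standard_normal(n)

x_solve = np.linalg.solve(A, b)   # LU factorization, no explicit inverse
x_inv = np.linalg.inv(A) @ b      # forms the full n x n inverse first

assert np.allclose(x_solve, x_inv)
```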
Q4. What is the function of the undirected graph?
For an undirected graph G = (V, E), it is well-known that the stationary distribution of the natural random walk has a closed-form expression π(v) = d(v)/vol V, where d(v) denotes the degree of the vertex v, and vol V = ∑_{u∈V} d(u).
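This closed form is easy to check numerically. A minimal sketch on an assumed toy graph (a path on 4 vertices):

```python
import numpy as np

# Verify pi(v) = d(v) / vol(V) for the natural random walk on a small
# undirected graph: pi should satisfy pi P = pi.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)  # symmetric adjacency matrix
d = A.sum(axis=1)        # vertex degrees d(v)
P = A / d[:, None]       # transition matrix of the natural walk
pi = d / d.sum()         # claimed stationary distribution d(v)/vol V

assert np.allclose(pi @ P, pi)   # pi is indeed stationary
```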
Q5. What is the simplest way to partition a graph?
Given a directed graph G = (V, E), it may be partitioned into two parts as follows: 1. Define a random walk over G with a transition probability matrix P such that it has a unique stationary distribution.
Q6. What is the eigenvector of the graph?
Compute an eigenvector Φ of Θ corresponding to the second largest eigenvalue, and then partition the vertex set V of G into the two parts S = {v ∈ V |Φ(v) ≥ 0} and Sc = {v ∈ V |Φ(v) < 0}.
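The sign-based split can be sketched in a few lines. For an undirected graph, Θ reduces to the symmetrically normalized adjacency D^{-1/2} A D^{-1/2}; the toy graph below (two triangles joined by one edge) is an assumed example, not from the paper:

```python
import numpy as np

# Spectral bipartition: take the eigenvector of the second largest
# eigenvalue of Theta and split vertices by its sign.
A = np.zeros((6, 6))
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
for u, v in edges:
    A[u, v] = A[v, u] = 1.0
d = A.sum(axis=1)
Theta = A / np.sqrt(np.outer(d, d))   # D^{-1/2} A D^{-1/2}

w, V = np.linalg.eigh(Theta)          # eigenvalues in ascending order
phi = V[:, -2]                        # eigenvector of the 2nd largest eigenvalue
S = {v for v in range(6) if phi[v] >= 0}

# The split recovers the two triangles (up to the eigenvector's sign).
assert S in ({0, 1, 2}, {3, 4, 5})
```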
Q7. What is the popular method for clustering directed graphs?
In the absence of labeled instances, their approach reduces to a spectral clustering method for directed graphs, which generalizes the work of Shi and Malik (2000) that may be the most popular spectral clustering scheme for undirected graphs.
Q8. What is the probability of a random walk?
Given that the authors are currently at vertex u with d+(u) > 0, the next step of this random walk proceeds as follows: (1) with probability 1 − η jump to a vertex chosen uniformly at random over the whole vertex set except u; and (2) with probability ηw([u, v])/d+(u) jump to a vertex v adjacent from u.
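This teleporting walk can be sketched as a transition matrix. The directed toy graph and the value η = 0.9 below are assumptions for illustration:

```python
import numpy as np

# Teleporting random walk: with probability 1 - eta jump uniformly to any
# other vertex; with probability eta * w([u,v]) / d+(u) follow an out-edge.
eta = 0.9
n = 4
W = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)   # weighted adjacency w([u, v])
d_out = W.sum(axis=1)                        # out-degrees d+(u), all > 0 here

P = np.zeros((n, n))
for u in range(n):
    for v in range(n):
        if v != u:
            P[u, v] = (1 - eta) / (n - 1)    # uniform teleport, excluding u
        P[u, v] += eta * W[u, v] / d_out[u]  # follow an out-edge of u

assert np.allclose(P.sum(axis=1), 1.0)       # each row is a distribution
```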
Q9. What is the simplest way to classify a random walk?
At the beginning of this section, the authors assume the graph to be strongly connected and aperiodic such that the natural random walk over the graph converges to a unique and positive stationary distribution.
Q10. What is the next step of the random walk?
Given that the authors are currently at vertex u, the next step of this random walk proceeds as follows: first jump backward to a vertex h adjacent to u with probability p−(u, h) = w([h, u])/d−(u); then jump forward to a vertex v adjacent from u with probability p+(h, v) = w([h, v])/d+(h).
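The backward-then-forward step composes two stochastic matrices. A minimal sketch on an assumed directed toy graph in which every vertex has both an in-neighbour and an out-neighbour:

```python
import numpy as np

# Two-step walk: from u step back to an in-neighbour h with probability
# w([h,u])/d-(u), then forward to an out-neighbour v of h with
# probability w([h,v])/d+(h).
W = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)   # w([u, v])
d_out = W.sum(axis=1)                        # d+(u)
d_in = W.sum(axis=0)                         # d-(u)

P_back = W.T / d_in[:, None]                 # p-(u, h) = w([h, u]) / d-(u)
P_fwd = W / d_out[:, None]                   # p+(h, v) = w([h, v]) / d+(h)
P = P_back @ P_fwd                           # one step of the combined walk

assert np.allclose(P.sum(axis=1), 1.0)       # stochastic, as expected
```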
Q11. What is the function used for directed graphs?
In the absence of labeled instances, this framework can be utilized in an unsupervised setting as a spectral clustering method for directed graphs.
Q12. What is the function of the function h H(V)?
Define an indicator function h ∈ H(V) by h(v) = 1 if v ∈ S, and −1 if v ∈ Sc. Denote by ν the volume of S. Clearly, the authors have 0 < ν < 1 since S is a nonempty proper subset of V.
Q13. What is the function for undirected graphs?
The authors first define a combinatorial partition criterion, which generalizes the normalized cut criterion for undirected graphs (Shi & Malik, 2000).
Q14. What is the framework for learning from directed graphs?
In the absence of labeled instances, as shown in section 4, this framework can be utilized as a spectral clustering approach for directed graphs.
Q15. What is the function that is used to classify vertices in a directed graph?
A pair of vertices linked by an edge are likely to have the same label; moreover, vertices lying on a densely linked subgraph are likely to have the same label.