Q2. What is the transitive compaction problem for strongly connected digraphs?
The transitive compaction problem for strongly connected digraphs is: given a strongly connected digraph G, find a minimal strongly connected spanning subgraph of it, i.e., a strongly connected spanning subgraph for which the removal of any arc destroys strong connectivity.
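To make the definition concrete, here is a small Python sketch (our own illustration, not code from the paper) that tests whether an arc set forms a minimal strongly connected spanning subgraph on vertices 0..n-1: the graph must be strongly connected, and deleting any single arc must destroy that property. The function names and arc representation are hypothetical.

```python
def is_strongly_connected(n, arcs):
    """Strong connectivity via one forward and one backward search from vertex 0."""
    def reaches_all(adj):
        seen, stack = {0}, [0]
        while stack:
            for w in adj.get(stack.pop(), []):
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return len(seen) == n

    fwd, rev = {}, {}
    for u, v in arcs:
        fwd.setdefault(u, []).append(v)
        rev.setdefault(v, []).append(u)
    return reaches_all(fwd) and reaches_all(rev)

def is_minimal_scss(n, arcs):
    """Minimal: strongly connected, and removing any one arc breaks that."""
    arcs = list(arcs)
    return is_strongly_connected(n, arcs) and all(
        not is_strongly_connected(n, arcs[:i] + arcs[i + 1:])
        for i in range(len(arcs))
    )
```

For example, a directed 3-cycle is minimal, while the same cycle plus a chord is not, since the chord can be removed without losing strong connectivity.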
Q3. What are the common features of the problem?
Two common features are that there is a simple sequential algorithm for it that seems hard to parallelize and that the related optimization problem (minimum vs. minimal) is NP-hard.
Q4. What is the simplest way to solve the transitive compaction problem?
As before, the redundant arcs can be found in linear time and hence each execution of the repeat loop takes linear time, leading to an O(m + n log n) time sequential algorithm for transitive compaction.
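The structure of that repeat loop can be sketched in Python. This is a naive version of our own: the paper identifies the redundant arcs of each round in linear time, whereas the sketch below simply retests strong connectivity after each candidate deletion, so it is much slower but ends with the same kind of minimal subgraph. It assumes a simple digraph (no parallel arcs).

```python
def strongly_connected(n, arcs):
    """True iff every vertex reaches 0 and is reachable from 0."""
    def reaches_all(adj):
        seen, stack = {0}, [0]
        while stack:
            for w in adj.get(stack.pop(), []):
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return len(seen) == n

    fwd, rev = {}, {}
    for u, v in arcs:
        fwd.setdefault(u, []).append(v)
        rev.setdefault(v, []).append(u)
    return reaches_all(fwd) and reaches_all(rev)

def transitive_compaction(n, arcs):
    """Naive repeat loop: delete any arc whose removal preserves strong
    connectivity, until no redundant arc remains."""
    kept = list(arcs)
    changed = True
    while changed:
        changed = False
        for a in list(kept):
            rest = [e for e in kept if e != a]
            if strongly_connected(n, rest):
                kept = rest
                changed = True
    return kept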
Q5. What is the arc-disjoint forward branching in lemma 2?
As in lemma 2, there exist two arc-disjoint forward branchings in G′ (corresponding to branchings in G), one of which contains at most half the arcs of H − E f . []
Q6. How many times does the repeat loop take?
Since by Lemma 11 the repeat loop is executed O(log n) times, the entire transitive compaction algorithm runs in O(m + n log n) time.
Q7. What is the problem that extends naturally to general digraphs?
Their problem extends naturally to general digraphs: given a digraph G, find a minimal span-ning subgraph of it whose transitive closure is the same as that of G.
Q8. How many times can a repeat loop be executed?
The authors conclude by noting that it is conceivable that one (or both) of their sequential algorithmsruns in linear time, since it is possible that the repeat loop needs to be executed only a constant number of times.
Q9. What is the independence relation of a digraph?
The authors can define the following independence relation on the arcs of a strongly connecteddigraph, G: a set of arcs is independent if it can be removed without destroying strong connectivity of G.
Q10. How can the authors speed up the parallel and sequential algorithms?
Both of these algorithms can be speeded up by a log n factor if the authors use a CRCW PRAM; the authors assume here the COMMONconcurrent-write model in which all processors participating in a concurrent write must write the same value [KR].
Q11. How many processors can be used in step 2?
The following method implements step 2 in O(log t) time with a number of processors linear in the size of P: Initially the authors determine, for each vertex u, the forward arc (v, u) with minimum v (if such an arc exists).
Q12. what is the path q from l to u that avoids arc?
Then the path consisting of arcs in p from the root to k, followed by arc f and then the path q is a path from 1 to u that avoids arc (u − 1, u).
Q13. Why can't the algorithm be implemented to run in linear time?
This isbecause the minimum-weight branching algorithm of Edmonds [Ed2] can be implemented to run in linear time for 0-1 edge weights by using the algorithm in [GGST], with the heaps replaced by two buckets.
Q14. What is the basic lemma for a philic branching?
The fol-lowing lemma explains how these branchings are computed:Lemma 0: An H-philic (H-phobic) branching can be computed by a minimum-weight branching computation with zero-one weights.
Q15. What is the simplest way to reduce the number of arcs in a graph?
A simple modification is to initially reduce the number of arcs to at most 2n − 2 by taking the union of a forward and an inverse branching (defined below).