
Showing papers by "Ming-Yang Kao published in 1995"


Journal ArticleDOI
TL;DR: An optimal algorithm is given that broadcasts on an $n$-dimensional hypercube in $O(n/\log_2(n+1))$ routing steps with wormhole, e-cube routing and all-port communication.
Abstract: We give an optimal algorithm that broadcasts on an $n$-dimensional hypercube in $O(n/\log_2(n+1))$ routing steps with wormhole, e-cube routing and all-port communication. Previously, the best algorithm, due to P.K. McKinley and C. Trefftz (1993), requires $\lceil n/2 \rceil$ routing steps. We also give routing algorithms that achieve tight time bounds for $n \le 7$.
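
For intuition about the gap between the two bounds, a quick numeric sketch (assuming, as is standard for step counts, that both bounds are read as ceilings): for example, $n = 7$ gives $\lceil 7/\log_2 8 \rceil = 3$ steps versus $\lceil 7/2 \rceil = 4$.

```python
import math

def optimal_steps(n):
    """Routing steps used by the optimal broadcast: ceil(n / log2(n + 1))."""
    return math.ceil(n / math.log2(n + 1))

def mckinley_trefftz_steps(n):
    """Routing steps used by the earlier algorithm: ceil(n / 2)."""
    return math.ceil(n / 2)

for n in (3, 7, 10, 16):
    print(n, optimal_steps(n), mckinley_trefftz_steps(n))
```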

57 citations


Proceedings ArticleDOI
23 Oct 1995
TL;DR: It is shown that the greedy algorithm performs within a constant factor of the offline optimal with respect to the $L_p$ norm; the constant grows linearly with $p$, which is best possible, but does not depend on the number of servers and jobs.
Abstract: In the load balancing problem, there is a set of servers, and jobs arrive sequentially. Each job can be run on some subset of the servers and must be assigned to one of them in an online fashion. Traditionally, the assignment of jobs to servers is measured by the $L_\infty$ norm; in other words, an assignment of jobs to servers is quantified by the maximum load assigned to any server. In this measure, the performance of the greedy load balancing algorithm may be a logarithmic factor higher than the offline optimal. In many applications, the $L_\infty$ norm is not a suitable way to measure how well the jobs are balanced. If each job sees a delay that is proportional to the number of jobs on its server, then the average delay among all jobs is proportional to the sum of the squares of the numbers of jobs assigned to the servers. Minimizing the average delay is equivalent to minimizing the Euclidean (or $L_2$) norm. For any fixed $p$, $1 \le p < \infty$, we show that the greedy algorithm performs within a constant factor of the offline optimal with respect to the $L_p$ norm; the constant factor grows linearly with $p$, which is best possible, but it does not depend on the number of servers and jobs.
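
A minimal sketch of the greedy rule described above (names and data here are hypothetical; this illustrates the assignment rule and the $L_p$ objective, not the paper's analysis). With $p = 2$ the norm tracks the sum of squared server loads, and hence the average delay just described.

```python
def greedy_assign(jobs, num_servers):
    """Greedy online load balancing: place each arriving job on the
    least-loaded server among the servers that job is allowed to use.

    jobs: list of sets, where jobs[i] is the set of servers job i may use.
    Returns the resulting load vector.
    """
    load = [0] * num_servers
    for allowed in jobs:
        target = min(allowed, key=lambda s: load[s])  # least-loaded allowed server
        load[target] += 1
    return load

def lp_norm(load, p):
    """L_p norm of a load vector; for p = 2 this tracks average job delay."""
    return sum(x ** p for x in load) ** (1.0 / p)

# Example: 3 servers, jobs restricted to subsets of them.
jobs = [{0, 1}, {0, 1}, {1, 2}, {2}, {0, 2}]
load = greedy_assign(jobs, 3)
print(load, lp_norm(load, 2))
```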

55 citations


01 Jan 1995
TL;DR: In this paper, the authors present a natural online perfect matching problem motivated by problems in mobile computing, where a total of n customers connect and disconnect sequentially, and each customer has an associated set of stations to which it may connect.
Abstract: We present a natural online perfect matching problem motivated by problems in mobile computing. A total of n customers connect and disconnect sequentially, and each customer has an associated set of stations to which it may connect. Each station has a capacity limit. We allow the network to preemptively switch a customer between allowed stations to make room for a new arrival. We wish to minimize the total number of switches required to provide service to every customer. Equivalently, we wish to maintain a perfect matching between customers and stations and minimize the lengths of the augmenting paths. We measure performance by the worst-case ratio of the number of switches made to the minimum number required.
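
The switching mechanism can be made concrete with a small sketch (hypothetical names, stations of unit capacity for brevity, and breadth-first search for a shortest augmenting path; the paper's model allows general station capacities): admitting a new customer either uses a free station directly or displaces a chain of existing customers, and each displacement counts as one switch.

```python
from collections import deque

def admit(new_customer, allowed, match_station, match_customer):
    """Admit new_customer, switching existing customers if necessary.

    allowed:        dict customer -> set of stations it may connect to
    match_station:  dict customer -> station currently serving it
    match_customer: dict station -> customer currently assigned there
    Stations have capacity 1 in this sketch. Returns the number of
    existing customers switched, or None if the customer cannot be served.
    """
    parent = {}                           # station -> customer that reached it
    reached_from = {new_customer: None}   # customer -> its current station
    queue = deque([new_customer])
    while queue:
        cust = queue.popleft()
        for st in allowed[cust]:
            if st in parent:
                continue
            parent[st] = cust
            if st not in match_customer:
                # Free station found: walk the path back, reassigning.
                switches = 0
                c = parent[st]
                while True:
                    prev_st = reached_from[c]   # None iff c is the new arrival
                    match_station[c] = st
                    match_customer[st] = c
                    if prev_st is None:
                        return switches
                    switches += 1               # an existing customer moved
                    st = prev_st
                    c = parent[st]
            occupant = match_customer[st]
            if occupant not in reached_from:
                reached_from[occupant] = st
                queue.append(occupant)
    return None

# Example: two stations, both customers want station 0.
match_station, match_customer = {}, {}
allowed = {"a": {0}, "b": {0, 1}}
print(admit("b", allowed, match_station, match_customer))  # 0 switches
print(admit("a", allowed, match_station, match_customer))  # 1 switch: b moves to 1
```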

44 citations


Book ChapterDOI
16 Aug 1995
TL;DR: This work presents a natural online perfect matching problem motivated by problems in mobile computing; the goal is to maintain a perfect matching between customers and stations while minimizing the lengths of the augmenting paths.
Abstract: We present a natural online perfect matching problem motivated by problems in mobile computing. A total of n customers connect and disconnect sequentially, and each customer has an associated set of stations to which it may connect. Each station has a capacity limit. We allow the network to preemptively switch a customer between allowed stations to make room for a new arrival. We wish to minimize the total number of switches required to provide service to every customer. Equivalently, we wish to maintain a perfect matching between customers and stations and minimize the lengths of the augmenting paths. We measure performance by the worst-case ratio of the number of switches made to the minimum number required.

16 citations



Journal ArticleDOI
Ming-Yang Kao
TL;DR: This paper proves that for a strongly connected planar directed graph of size $n$, a depth-first search tree rooted at a specified vertex can be computed in $O(\log^{5}n)$ time with $n/\log{n}$ processors.
Abstract: This paper proves that for a strongly connected planar directed graph of size $n$, a depth-first search tree rooted at a specified vertex can be computed in $O(\log^{5}n)$ time with $n/\log{n}$ processors. Previously, for planar directed graphs that may not be strongly connected, the best depth-first search algorithm runs in $O(\log^{10}n)$ time with $n$ processors. Both algorithms run on a parallel random access machine that allows concurrent reads and concurrent writes in its shared memory, and in case of a write conflict, permits an arbitrary processor to succeed.
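
For context, the object computed here is a depth-first search tree rooted at a specified vertex. The sequential sketch below (with hypothetical adjacency data) only illustrates that object; it is not Kao's parallel CRCW PRAM algorithm.

```python
def dfs_tree(adj, root):
    """Return parent pointers of a DFS tree of a directed graph rooted at root.

    adj: dict vertex -> list of out-neighbors. Iterative to avoid recursion limits.
    """
    parent = {root: None}
    stack = [(root, iter(adj.get(root, [])))]
    while stack:
        v, it = stack[-1]
        advanced = False
        for w in it:
            if w not in parent:
                parent[w] = v
                stack.append((w, iter(adj.get(w, []))))
                advanced = True
                break
        if not advanced:
            stack.pop()
    return parent

# Example: a strongly connected planar digraph (a directed cycle with a chord).
adj = {0: [1], 1: [2, 3], 2: [0], 3: [0]}
print(dfs_tree(adj, 0))
```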

6 citations


Book ChapterDOI
Ming-Yang Kao
11 Dec 1995
TL;DR: A linear-time algorithm is obtained to find a set of minimal linear invariants that completely characterize the linear invariant information contained in individual rows and columns of a cross-tabulated table.
Abstract: To protect sensitive information in a cross-tabulated table, it is a common practice to suppress some of the cells. A linear combination of the suppressed cells is called a linear invariant if it has a unique feasible value. Because of this uniqueness, the information contained in a linear invariant is not protected. The minimal linear invariants are the most basic units of unprotected information. This paper establishes a fundamental correspondence between minimal linear invariants of a table and minimal edge cuts of a graph constructed from the table. As one of several consequences of this correspondence, a linear-time algorithm is obtained to find a set of minimal linear invariants that completely characterize the linear invariant information contained in individual rows and columns.
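
The definition of a linear invariant can be checked directly (a sketch with hypothetical data, using the standard linear-algebra fact that a combination $c \cdot x$ of the suppressed cells has a unique feasible value exactly when $c$ is orthogonal to the null space of the constraint matrix; this illustrates the definition, not the paper's linear-time graph algorithm).

```python
import numpy as np

def is_linear_invariant(A, c, tol=1e-9):
    """A x = b encodes the published row/column sums over suppressed cells x.
    c . x has a unique feasible value iff c is orthogonal to null(A),
    i.e. iff c lies in the row space of A.
    """
    _, s, vt = np.linalg.svd(A)         # null space of A via SVD
    rank = int(np.sum(s > tol))
    null_basis = vt[rank:]              # rows span null(A)
    return bool(np.all(np.abs(null_basis @ c) < tol))

# Hypothetical 2x2 table with all four cells suppressed; the published row
# and column sums give constraints x00+x01, x10+x11, x00+x10, x01+x11.
A = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1]], dtype=float)
print(is_linear_invariant(A, np.array([1, 1, 1, 1.0])))  # True: the total is exposed
print(is_linear_invariant(A, np.array([1, 0, 0, 0.0])))  # False: a single cell is protected
```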

4 citations