Showing papers by "Koji Nakano" published in 1999


Proceedings Article
12 Apr 1999
TL;DR: The main contribution of this work is to propose efficient randomized leader election and initialization protocols for Packet Radio Networks (PRN, for short).
Abstract: The main contribution of this work is to propose efficient randomized leader election and initialization protocols for Packet Radio Networks (PRN, for short). As a result of the initialization protocol, the n stations of a PRN are assigned distinct integer IDs from 1 to n. The results include protocols to: (1) initialize the single-channel PRN with the collision detection (CD) capability in O(n) rounds with probability at least 1 - 1/2^n; (2) initialize the k-channel PRN with CD capability in O(n/k) rounds with probability at least 1 - 1/n, whenever k ≤ n/(3 log n); (3) elect a leader on the single-channel PRN with no CD in O((log n)^2) broadcast rounds with probability at least 1 - 1/n; (4) initialize the single-channel PRN with no CD in O(n) rounds with probability at least 1 - 1/(2√n); (5) initialize the k-channel PRN with no CD in O(n/k) broadcast rounds with probability at least 1 - 1/n, whenever k ≤ n/(4(log n)^2).
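To give a concrete feel for protocols of this kind, here is a minimal Python simulation of the standard coin-flipping approach to leader election on a single-channel network with collision detection. It is an illustrative sketch, not the paper's protocol; the function name `elect_leader` and the transmission probability are assumptions made for the example.

```python
import random

def elect_leader(n, p=0.5, rng=random.Random(0)):
    """Simulate randomized leader election on a single-channel radio
    network with collision detection (CD).

    Each round, every remaining candidate transmits with probability p.
    - exactly one transmitter  -> it becomes the leader
    - collision (two or more)  -> only the transmitters stay candidates
    - silence (none)           -> all candidates stay and retry
    Returns (leader_id, number_of_rounds)."""
    candidates = list(range(n))
    rounds = 0
    while True:
        rounds += 1
        transmitters = [s for s in candidates if rng.random() < p]
        if len(transmitters) == 1:       # channel status: SINGLE
            return transmitters[0], rounds
        if len(transmitters) >= 2:       # channel status: COLLISION
            candidates = transmitters    # stations that stayed silent drop out
        # channel status: NULL (silence) -> keep the same candidate set

if __name__ == "__main__":
    leader, r = elect_leader(1000)
    print(f"leader {leader} elected after {r} rounds")  # typically O(log n) rounds
```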

63 citations


Journal Article
TL;DR: This work presents elegant broadcast-efficient protocols for permutation routing, ranking, and sorting on single-hop Mobile Radio Networks with p stations and k radio channels, denoted by MRN(p,k).
Abstract: The main contribution of this work is to present elegant broadcast-efficient protocols for permutation routing, ranking, and sorting on single-hop Mobile Radio Networks with p stations and k radio channels, denoted by MRN(p,k). Clearly, any protocol performing these tasks on n items must perform at least n/k broadcast rounds because each item must be broadcast at least once. We begin by presenting an optimal off-line permutation routing protocol using n/k broadcast rounds for arbitrary k, p, and n. Further, we show that optimal on-line routing can be performed in n/k broadcast rounds, provided that either k = 1 or p = n. We then go on to develop an on-line routing protocol that takes 2n/k + k - 1 broadcast rounds on the MRN(p,k), whenever k ≤ √(p/2). Using these routing protocols as basic building blocks, we develop a ranking protocol that takes 2n/k + o(n/k) broadcast rounds as well as a sorting protocol that takes 3n/k + o(n/k) broadcast rounds, provided that k ∈ o(√n) and p = n. Finally, we develop a ranking protocol that takes 3n/k + o(n/k) broadcast rounds, as well as a sorting protocol that takes 4n/k + o(n/k) broadcast rounds on the MRN(p,k), provided that k ≤ √(p/2) and p ∈ o(n). Featuring very low proportionality constants, our protocols offer a vast improvement over the state of the art.
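The n/k lower bound, and the off-line packing it suggests, can be illustrated with a short sketch. The schedule below is a hypothetical illustration of why roughly ⌈n/k⌉ rounds suffice to broadcast every item once when the permutation is known in advance; it deliberately ignores the per-station transmit/listen constraints that the paper's MRN(p,k) protocols must also respect.

```python
def offline_schedule(dest, k):
    """Illustrative round/channel schedule for off-line permutation routing:
    item i (held by station i, destined for station dest[i]) is broadcast in
    round i // k on channel i % k, packing k broadcasts into every round.
    This shows why ceil(n/k) rounds suffice to broadcast each item once."""
    schedule = {}   # (round, channel) -> (source station, destination station)
    for i in range(len(dest)):
        schedule[(i // k, i % k)] = (i, dest[i])
    return schedule

# Example: 8 items routed over 2 channels -> 4 broadcast rounds
perm = [3, 0, 7, 5, 1, 6, 2, 4]
for (rnd, ch), (src, dst) in sorted(offline_schedule(perm, 2).items()):
    print(f"round {rnd}, channel {ch}: station {src} -> station {dst}")
```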

50 citations


Book Chapter
16 Dec 1999
TL;DR: In this paper, the authors proposed energy-efficient randomized initialization protocols for ad-hoc radio networks (ARN), where the number n of stations in the ARN is not known beforehand.
Abstract: The main contribution of this work is to propose energy-efficient randomized initialization protocols for ad-hoc radio networks (ARN, for short). First, we show that if the number n of stations is known beforehand, the single-channel ARN can be initialized by a protocol that terminates, with high probability, in O(n) time slots with no station being awake for more than O(log n) time slots. We then go on to address the case where the number n of stations in the ARN is not known beforehand. We begin by discussing an elegant protocol that provides a tight approximation of n. Interestingly, this protocol terminates, with high probability, in O((log n)^2) time slots, and no station has to be awake for more than O(log n) time slots. We use this protocol to design an energy-efficient initialization protocol that terminates, with high probability, in O(n) time slots with no station being awake for more than O(log n) time slots. Finally, we design an energy-efficient initialization protocol for the k-channel ARN that terminates, with high probability, in O(n/k + log n) time slots, with no station being awake for more than O(log n) time slots.
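A rough, single-pass version of the size-approximation idea can be sketched as follows. This is an illustration under assumed names and parameters, not the paper's protocol, which repeats such trials to obtain a tight estimate of n within the stated time and awake-time bounds.

```python
import random

def approximate_n(n, rng=random.Random(1)):
    """Rough size approximation on a single-channel network (illustrative only).
    In phase i every station transmits with probability 2**-i; the first phase
    in which at most one station transmits yields the estimate 2**i. A station
    needs to participate only in a bounded number of these slots, which hints
    at where the energy savings come from."""
    i = 0
    while True:
        i += 1
        transmitters = sum(1 for _ in range(n) if rng.random() < 2.0 ** -i)
        if transmitters <= 1:        # silence or a single clean broadcast
            return 2 ** i            # crude estimate of n

if __name__ == "__main__":
    print(approximate_n(10_000))     # typically within a small factor of n
```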

33 citations


Book Chapter
12 Apr 1999
TL;DR: The main contribution of this work is to propose a number of broadcast-efficient VLSI architectures for computing the sum and the prefix sums of an ω^κ-bit, κ > 2, binary sequence using, as basic building blocks, linear arrays of at most ω^2 shift switches.
Abstract: The main contribution of this work is to propose a number of broadcast-efficient VLSI architectures for computing the sum and the prefix sums of an ω^κ-bit, κ > 2, binary sequence using, as basic building blocks, linear arrays of at most ω^2 shift switches. An immediate consequence of this feature is that in our designs broadcasts are limited to buses of length at most ω^2, making them eminently practical. Using our design, the sum of an ω^κ-bit binary sequence can be obtained in the time of 2κ - 2 broadcasts, using 2ω^(κ-2) + O(ω^(κ-3)) blocks, while the corresponding prefix sums can be computed in 3κ - 4 broadcasts using (κ + 2)ω^(κ-2) + O(κω^(κ-3)) blocks.
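As a software analogue of the blocking idea (local counting over short pieces, then combining block totals), the hypothetical sketch below computes prefix sums block by block; it illustrates only the decomposition, not the shift-switch hardware or its broadcast structure.

```python
def blocked_prefix_sums(bits, block):
    """Software analogue of a blocked prefix-sums computation: the input is
    split into blocks (mirroring linear arrays that each handle a bounded
    number of bits), local prefix sums are computed inside every block, and
    block totals are carried forward as global offsets."""
    prefix, offset = [], 0
    for start in range(0, len(bits), block):
        local = 0
        for b in bits[start:start + block]:   # local counting within one block
            local += b
            prefix.append(offset + local)     # global prefix sum so far
        offset += local                       # carry the block total onward
    return prefix

# Example: 16 bits handled in blocks of 4
print(blocked_prefix_sums([1,0,1,1, 0,0,1,0, 1,1,1,0, 0,1,0,1], 4))
```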

9 citations


Proceedings Article
12 Apr 1999
TL;DR: It is shown that the notoriously difficult problem of finding and reporting the smallest number of vertex-disjoint paths that cover the vertices of a graph can be solved time- and work-optimally for cographs.
Abstract: We show that the notoriously difficult problem of finding and reporting the smallest number of vertex-disjoint paths that cover the vertices of a graph can be solved time- and work-optimally for cographs. Our algorithm solves this problem in O(log n) time using n/log n processors on the EREW-PRAM for an n-vertex cograph G represented by its cotree.
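For readers unfamiliar with the input representation, the hypothetical sketch below shows a cotree (leaves are vertices; internal nodes are labelled union or join) and how the cograph's edges are recovered from it. It illustrates only the representation the algorithm takes as input, not the parallel path-cover algorithm itself.

```python
from itertools import product

class CoNode:
    """Node of a cotree: a leaf stores a vertex; an internal node is labelled
    'union' (disjoint union of its children) or 'join' (union plus every edge
    between vertices lying in different children)."""
    def __init__(self, op=None, children=(), vertex=None):
        self.op, self.children, self.vertex = op, list(children), vertex

def vertices(t):
    return [t.vertex] if t.op is None else [v for c in t.children for v in vertices(c)]

def edges(t):
    """Recover the cograph's edge set from its cotree."""
    if t.op is None:
        return set()
    e = set()
    for c in t.children:
        e |= edges(c)
    if t.op == "join":
        for i, ci in enumerate(t.children):
            for cj in t.children[i + 1:]:
                e |= {frozenset(p) for p in product(vertices(ci), vertices(cj))}
    return e

# Example: the join of two independent pairs is the 4-cycle C4 (= K_{2,2})
leaf = lambda v: CoNode(vertex=v)
g = CoNode("join", [CoNode("union", [leaf(0), leaf(1)]),
                    CoNode("union", [leaf(2), leaf(3)])])
print(sorted(tuple(sorted(e)) for e in edges(g)))   # [(0,2), (0,3), (1,2), (1,3)]
```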

5 citations


Proceedings Article
23 Jun 1999
TL;DR: VMesh provides a comprehensive environment for the development, visualization, and debugging of algorithms, and has proven useful for studying and understanding the behavior of parallel algorithms on the reconfigurable mesh.
Abstract: Many parallel algorithms on the reconfigurable mesh have been developed so far. However, it is hard to understand the behavior of these algorithms because the bus topology changes dynamically during execution. This paper describes VMesh, a tool for visualizing parallel algorithms on the reconfigurable mesh. The purpose of VMesh is to provide a comprehensive environment for the development, visualization, and debugging of algorithms. The system has proven to be useful for studying and understanding the behavior of parallel algorithms on the reconfigurable mesh.
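The core computation any such visualizer has to perform, recovering the current buses from each processor's local switch setting, can be sketched with a union-find pass. The data layout and names below are assumptions for illustration and do not describe VMesh's actual implementation.

```python
class DSU:
    """Union-find over port identifiers, used to group ports into buses."""
    def __init__(self):
        self.parent = {}
    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]   # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def buses(rows, cols, config):
    """Compute the dynamic buses of a reconfigurable mesh.

    config[(r, c)] is the local switch setting of processor (r, c): a list of
    port groups, e.g. [('E', 'W')] fuses the two horizontal ports. Ports of
    adjacent processors are always wired together (E of (r,c) to W of (r,c+1),
    S of (r,c) to N of (r+1,c)); the union of both kinds of connection
    partitions all ports into buses."""
    dsu = DSU()
    for (r, c), port_groups in config.items():
        for group in port_groups:                  # processor-internal fusions
            for p in group[1:]:
                dsu.union((r, c, group[0]), (r, c, p))
    for r in range(rows):
        for c in range(cols):                      # fixed links between neighbors
            if c + 1 < cols:
                dsu.union((r, c, 'E'), (r, c + 1, 'W'))
            if r + 1 < rows:
                dsu.union((r, c, 'S'), (r + 1, c, 'N'))
    classes = {}
    for (r, c) in config:
        for p in 'NESW':
            classes.setdefault(dsu.find((r, c, p)), []).append((r, c, p))
    return list(classes.values())

# 1x2 mesh where both processors fuse E and W: one horizontal bus emerges
# (the unused N/S ports remain as isolated single-port buses)
cfg = {(0, 0): [('E', 'W')], (0, 1): [('E', 'W')]}
print(buses(1, 2, cfg))
```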

2 citations


Proceedings Article
12 Apr 1999
TL;DR: An efficient reconfigurable parallel prefix counting network based on the recently proposed technique of shift switching with domino logic, in which the charge/discharge signals propagate along the switch chain producing semaphores, resulting in a network that is fast and highly hardware-compact.
Abstract: We propose an efficient reconfigurable parallel prefix counting network based on the recently proposed technique of shift switching with domino logic, in which the charge/discharge signals propagate along the switch chain producing semaphores; this results in a network that is fast and highly hardware-compact. The proposed architecture for prefix counting N-1 bits features a total delay of (4 log N + √N - 2)·T_d, where T_d is the delay for charging or discharging a row of two prefix sum units of eight shift switches. Simulation results reveal that T_d does not exceed 1 ns under 0.8-micron CMOS technology. Our design is faster than any design known to us for N ≤ 2^10. Yet another important and novel feature of the proposed architecture is that it requires very simple controls, partially driven by semaphores, significantly reducing the hardware complexity and fully utilizing the inherent speed of the process.
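Plugging numbers into the stated delay formula gives a feel for the claim. The helper below assumes base-2 logarithms (an assumption; the abstract only says "log N") and uses the reported 1 ns bound on T_d.

```python
from math import log2, sqrt

def prefix_count_delay(N, Td_ns=1.0):
    """Total delay of the proposed prefix counting network for N-1 input bits,
    per the abstract's formula (4*log2(N) + sqrt(N) - 2) * Td.
    Td = 1 ns reflects the reported 0.8-micron CMOS simulation bound."""
    return (4 * log2(N) + sqrt(N) - 2) * Td_ns

for N in (2**6, 2**8, 2**10):
    print(f"N = {N:5d}: {prefix_count_delay(N):6.1f} ns")   # 30.0, 46.0, 70.0 ns
```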