
Showing papers by "Moni Naor" published in 1988


Proceedings ArticleDOI
01 Jan 1988
TL;DR: The labeling theorems for finite graphs extend to a theorem about the constrained labeling of infinite graphs; these notions are intimately related to vertex induced universal graphs of polynomial size.
Abstract: How to represent a graph in memory is a fundamental data structuring question. In the usual representations of an n-node graph, the names of the nodes (i.e. integers from 1 to n) betray nothing about the graph itself. Indeed, the names (or labels) on the n nodes are just log n bit place holders to allow data on the edges to code for the structure of the graph. In our scenario, there is no such waste. By assigning O(log n) bit labels to the nodes, we completely code for the structure of the graph, so that given the labels of two nodes we can test if they are adjacent in time linear in the size of the labels. Furthermore, given an arbitrary original labeling of the nodes, we can find structure coding labels (as above) that are no more than a small constant factor larger than the original labels. These notions are intimately related to vertex induced universal graphs of polynomial size. For example, we can label planar graphs with structure coding labels of size O(log n).
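The adjacency-labeling idea is easiest to see on a rooted forest, a standard building block for graphs that decompose into few forests (planar graphs among them): label each node with the pair (its own id, its parent's id), and two nodes are adjacent exactly when one is the parent of the other. The sketch below is only an illustration of this special case, with an invented toy forest and invented function names, not the paper's construction.

```python
# Adjacency labeling for a rooted forest: each node's label is the pair
# (node id, parent id), with roots pointing to themselves.  Two nodes are
# adjacent iff one is the parent of the other, so adjacency is decidable
# from the two labels alone, using O(log n) bits per label.

def make_labels(parent):
    """parent: dict mapping node id -> parent id (roots map to themselves)."""
    return {v: (v, p) for v, p in parent.items()}

def adjacent(label_u, label_v):
    u, pu = label_u
    v, pv = label_v
    return (pu == v or pv == u) and u != v

# A small forest: 1 is a root with children 2 and 3; 4 is a child of 3.
labels = make_labels({1: 1, 2: 1, 3: 1, 4: 3})
assert adjacent(labels[2], labels[1])       # parent/child pair
assert adjacent(labels[4], labels[3])
assert not adjacent(labels[2], labels[3])   # siblings are not adjacent
assert not adjacent(labels[1], labels[1])   # no self-loops
```

Concatenating a constant number of such forest labels covers any graph that splits into a constant number of forests, which is the standard route to O(log n)-bit structure coding labels for planar graphs.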

131 citations


Proceedings ArticleDOI
01 Jan 1988
TL;DR: Non-oblivious hashing, where the information gathered by performing “unsuccessful” probes determines the probe strategy, is introduced and used to obtain the following results for static lookup on full tables.
Abstract: Non-oblivious hashing, where the information gathered by performing “unsuccessful” probes determines the probe strategy, is introduced and used to obtain the following results for static lookup on full tables:
(a) an O(1) worst-case scheme that requires only logarithmic additional memory (improving on the [FKS84] linear space upper bound);
(b) an almost sure O(1) probabilistic worst-case scheme, without any additional memory (improving on previous logarithmic time upper bounds);
(c) enhancements to hashing: solving (a) and (b) in the multikey record environment, where search can be performed under any key in time O(1), and finding the nearest neighbor, the rank, etc. in logarithmic time.
Our non-oblivious upper bounds are much better than the corresponding oblivious lower bounds.
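The defining feature of non-oblivious hashing is that a probe which fails to find the key still yields information that directs the next probe. The toy scheme below (all names and the layout are invented here, and it spends far more advice memory per slot than the paper's logarithmic bound allows) is meant only to make that idea concrete: the home slot of a displaced key records where that key actually went, so every lookup finishes in at most two probes.

```python
# Toy non-oblivious lookup on a full table (n keys in n slots).  The first
# probe goes to a key's home slot; if the key is not there, the advice kept
# at that slot says exactly where to probe next.

def build(keys):
    n = len(keys)
    table = [None] * n
    advice = [dict() for _ in range(n)]      # per-slot redirection advice
    displaced = []
    for k in keys:
        if table[k % n] is None:
            table[k % n] = k                 # key rests in its home slot
        else:
            displaced.append(k)              # home slot already occupied
    free_slots = [i for i, v in enumerate(table) if v is None]
    for k, slot in zip(displaced, free_slots):
        table[slot] = k
        advice[k % n][k] = slot              # home slot remembers where k went
    return table, advice

def lookup(table, advice, k):
    h = k % len(table)
    if table[h] == k:                        # probe 1: found at home
        return h
    slot = advice[h].get(k)                  # the failed probe still informs us
    if slot is not None and table[slot] == k:
        return slot                          # probe 2, directed by the advice
    return None                              # key is not in the table

table, advice = build([10, 11, 25, 3, 17])
assert lookup(table, advice, 25) == 4        # 25's home slot 0 is held by 10
assert lookup(table, advice, 7) is None
```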

20 citations


Proceedings ArticleDOI
01 Jan 1988
TL;DR: An implicit data structure for n multikey records that supports searching for a record, under any key, in the asymptotically optimal search time O(log n) is described (this improves on [Mun87]).
Abstract: We describe an implicit data structure for n multikey records that supports searching for a record, under any key, in the asymptotically optimal search time O(log n). This improves on [Mun87], in which Munro describes an implicit data structure for the problem of storing n k-key records so that search on any key can be performed in O(log^k n (log log n)^(k-1)) comparisons. The theoretical tools we develop also yield practical schemes that either halve the number of memory references over obvious solutions to the non-implicit version of the problem, or alternatively reduce the number of pointers involved significantly.
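One reusable tool in implicit data structures of this kind is that the relative order of two distinct records encodes a bit without any storage beyond the records themselves: write the pair in increasing order for a 0 and decreasing order for a 1. The sketch below (function names and sample keys are invented for the illustration) shows only this generic trick, not the paper's scheme.

```python
# Encode len(bits) bits into an array of 2*len(bits) distinct keys by storing
# each consecutive pair in increasing order (bit 0) or decreasing order
# (bit 1).  The information lives purely in the permutation of the keys.

def encode_bits(keys, bits):
    out = []
    for (a, b), bit in zip(zip(keys[0::2], keys[1::2]), bits):
        lo, hi = min(a, b), max(a, b)
        out.extend((hi, lo) if bit else (lo, hi))
    return out

def decode_bits(arr):
    return [1 if a > b else 0 for a, b in zip(arr[0::2], arr[1::2])]

data = encode_bits([4, 9, 1, 7, 12, 3], [1, 0, 1])
assert decode_bits(data) == [1, 0, 1]
```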

8 citations




01 Jan 1988
TL;DR: It is shown that any data structure or computation implemented on this write-once memory model can be made persistent without sacrificing much in the way of running time or space.
Abstract: We introduce a model of computation based on the use of write-once memory. Write-once memory has the property that bits may be set but not reset. Our model consists of a RAM with a small amount (such as logarithmic or n^α for α < 1, where n is the size of the problem) of regular memory, and a polynomial amount of write-once memory. Bounds are given on the time required to simulate on write-once memory algorithms which originally run on a RAM with a polynomial amount of regular memory. We attempt to characterize algorithms that can be simulated on our write-once memory model with very little slow-down. A persistent computation is one in which, at all times, the memory state of the computation at any previous point in time can be reconstructed. We show that any data structure or computation implemented on this write-once memory model can be made persistent without sacrificing much in the way of running time or space. The space requirements of algorithms running on the write-once model are studied. We show that general simulations of algorithms originally running on a RAM with regular memory by algorithms running on our write-once memory model require space proportional to the number of steps simulated. In order to study the space complexity further, we define an analogue of the pebbling game, called the pebble-sticker game. A sticker is different from a pebble in that it cannot be removed once placed on a node of the computation graph. As placing pebbles corresponds to writes to regular memory, placing stickers corresponds to writes to the write-once memory. Bounds are shown on pebble-sticker tradeoffs required to evaluate trees and planar graphs. Finally, we define the complexity class WO-PSPACE as the class of problems which can be solved with a polynomial amount of write-once memory, and show that it is equal to P.
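Below is a minimal sketch of the write-once discipline itself, assuming only what the abstract states (bits may be set but never reset); the class and the versioned-value example are invented for the illustration and are not the paper's simulation. Keeping an updatable value as an append-only chain of versions shows in miniature why persistence comes almost for free: nothing is ever erased, so every earlier state stays readable.

```python
# Write-once bit memory: bits start at 0 and may be set to 1, never reset.
# An "updatable" value is kept as an append-only chain of fixed-width
# versions, so the full history of the value remains readable.

class WriteOnceMemory:
    def __init__(self, size):
        self.bits = [0] * size

    def write_word(self, start, value, width):
        """Write value into the fresh region [start, start + width); the
        region must still be all zeros, since set bits cannot be cleared."""
        if any(self.bits[start:start + width]):
            raise ValueError("region already written")
        for j in range(width):
            if (value >> j) & 1:
                self.bits[start + j] = 1

    def read_word(self, start, width):
        return sum(self.bits[start + j] << j for j in range(width))

mem = WriteOnceMemory(8 * 4)                 # room for four 8-bit versions
starts = []
for t, v in enumerate([5, 9, 42]):           # three successive "updates"
    mem.write_word(8 * t, v, 8)
    starts.append(8 * t)

assert [mem.read_word(s, 8) for s in starts] == [5, 9, 42]   # full history
```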

1 citation


Proceedings Article
01 Jan 1988

1 citation


Proceedings ArticleDOI
01 Jan 1988
TL;DR: This paper explores how the one-bit translation of unbounded message algorithms can be sped up by pipelining, and considers two problems: routing between two processors in an arbitrary network (and in some special networks), and coloring a synchronous ring with three colors.
Abstract: Many algorithms in distributed systems assume that the size of a single message depends on the number of processors. In this paper, we assume in contrast that messages consist of a single bit. Our main goal is to explore how the one-bit translation of unbounded message algorithms can be sped up by pipelining. We consider two problems. The first is routing between two processors in an arbitrary network and in some special networks (ring, grid, hypercube). The second problem is coloring a synchronous ring with three colors. The routing problem is a very basic subroutine in many distributed algorithms; the three-coloring problem demonstrates that pipelining is not always useful.
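The gain pipelining buys on the routing problem is easiest to see on a single path: sending an L-bit message over d unit-capacity links costs L·d rounds if each bit must cross the whole path before the next one is injected, but only L + d - 1 rounds if every intermediate processor forwards bit i while receiving bit i+1. The simulation below (all names are invented for the illustration) just checks this generic count; it is not code from the paper.

```python
# Count the rounds needed to move L one-bit messages across a path of d
# links when every intermediate processor forwards a bit one round after
# receiving it (full pipelining).

def simulate_pipeline(L, d):
    links = [None] * d            # links[j]: bit that just crossed link j
    pending = list(range(L))      # bits still waiting at the source
    delivered = 0
    rounds = 0
    while delivered < L:
        rounds += 1
        nxt = pending.pop(0) if pending else None
        links = [nxt] + links[:-1]          # every bit advances one link
        if links[-1] is not None:
            delivered += 1                  # this bit reached the destination
    return rounds

L, d = 32, 10
assert simulate_pipeline(L, d) == L + d - 1 == 41
# Without pipelining the same transfer costs L * d = 320 rounds.
```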