
Showing papers on "Sequential algorithm published in 1984"


Journal ArticleDOI
TL;DR: A sequential algorithm is described which extracts a polygonal approximation to contours in a raster-scanned binary image; the design features a dynamic data structure which keeps track of the boundaries of objects and holes at any line of the frame.
Abstract: A sequential algorithm is described which extracts a polygonal approximation to contours in a raster-scanned binary image. The design features a dynamic data structure which keeps track of the boundaries of objects and holes at any line of the frame. By manipulating the pointers in the structure, high speed execution is attained and is independent of the complexity of the image. A frame may contain any number of objects and holes of any shape and in any configuration. Contours are generated as an ordered list of (x, y) coordinate pairs, with collinear points removed and a standardized starting point. Object contours are linked counterclockwise and hole contours clockwise, and the data structure maintains object/hole relationships. Only one line of the image need ever be stored at any one time. The algorithm has been implemented on an 8086-based microcomputer in less than 2 kbytes of memory.
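The abstract notes that contours are emitted as ordered (x, y) lists with collinear points removed. The paper's scan-line tracker itself is involved, but that final post-processing step can be sketched on its own; the function name below is illustrative, not from the paper:

```python
def remove_collinear(points):
    """Drop points that lie on the straight line between their neighbours,
    keeping only the true vertices of a closed polygonal contour.

    points: ordered list of (x, y) integer pairs along a closed contour.
    """
    if len(points) < 3:
        return list(points)
    result = []
    n = len(points)
    for i in range(n):
        px, py = points[i - 1]          # previous vertex (wraps around)
        cx, cy = points[i]              # candidate vertex
        nx, ny = points[(i + 1) % n]    # next vertex
        # Cross product of (prev->curr) and (curr->next); zero means collinear.
        cross = (cx - px) * (ny - cy) - (cy - py) * (nx - cx)
        if cross != 0:
            result.append((cx, cy))
    return result
```

On an axis-aligned square traced pixel by pixel, only the four corners survive.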

69 citations


Journal ArticleDOI
TL;DR: The speed-up of this algorithm is optimal in the sense that the depth of the algorithm is of the order of the running time of the fastest known sequential algorithm divided by the number of processors used.

47 citations


Journal ArticleDOI
C.H. Chen1
TL;DR: An effective sequential algorithm is presented that employs autoregressive modeling of the data and a generalized likelihood ratio test to detect significant statistical changes in the waveform.
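A much-simplified sketch of the idea in this TL;DR, not the paper's algorithm: fit an AR(1) model (the paper would use a general AR order), take the prediction residuals, and maximize a generalized likelihood ratio statistic for a single variance change over candidate change points. All names and parameters here are assumptions for illustration:

```python
import math
import random

def ar1_residuals(x):
    """Fit x[t] ~ phi * x[t-1] by least squares; return prediction residuals."""
    num = sum(x[t - 1] * x[t] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    phi = num / den
    return [x[t] - phi * x[t - 1] for t in range(1, len(x))]

def glr_change_point(e, margin=20):
    """GLR test for one variance change in residuals e: for each split k,
    compare the one-segment Gaussian likelihood against the two-segment one.
    Returns (best split index, statistic)."""
    n = len(e)
    s_all = sum(v * v for v in e) / n
    best_k, best_stat = None, float("-inf")
    for k in range(margin, n - margin):
        s1 = sum(v * v for v in e[:k]) / k
        s2 = sum(v * v for v in e[k:]) / (n - k)
        stat = n * math.log(s_all) - k * math.log(s1) - (n - k) * math.log(s2)
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k, best_stat

# Synthetic AR(1) waveform whose innovation variance jumps at t = 200.
rng = random.Random(0)
x, prev = [], 0.0
for t in range(400):
    sigma = 1.0 if t < 200 else 3.0
    prev = 0.6 * prev + rng.gauss(0.0, sigma)
    x.append(prev)
k, stat = glr_change_point(ar1_residuals(x))
```

The quadratic scan over k is fine at this size; a sequential (online) version, as in the paper, would instead update the statistic sample by sample and flag a change when it crosses a threshold.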

22 citations


Book ChapterDOI
01 Jan 1984
TL;DR: In this paper, Hockney's FACR(l) algorithm was shown to have an asymptotic operation count of O(n² log₂log₂ n) on an n × n grid.
Abstract: This chapter discusses efficient direct methods for solving the finite-difference approximation of the Poisson equation on the unit square. The most effective sequential algorithm combines the block cyclic reduction and Fourier analysis schemes; this is Hockney's FACR(l) algorithm. The asymptotic operation count for FACR(l) on an n × n grid is O(n² log₂log₂ n), achieved when the number l of block cyclic reduction steps preceding Fourier analysis is taken as log₂log₂ n. Using only cyclic reduction, or only Fourier analysis, to solve the problem on a sequential machine requires O(n² log₂ n) arithmetic operations. Fourier analysis, or the matrix decomposition Poisson solver (MD-Poisson solver), is ideally suited for parallel computation. It consists of performing a set of independent sine transforms and solving a set of independent tridiagonal systems. On a parallel computer consisting of n² processors, with an arbitrarily powerful interconnection network, the MD-Poisson solver for the two-dimensional case requires O(log₂ n) parallel arithmetic steps.
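The MD-Poisson solver the abstract describes (sine transforms in one direction, independent tridiagonal solves in the other) can be sketched directly. This is a minimal pure-Python illustration of that decomposition, assuming the standard 5-point stencil with zero Dirichlet boundaries; it uses naive O(n) transforms per line rather than FFT-based ones, and it is not the FACR(l) combination itself:

```python
import math

def solve_poisson_md(f, n):
    """Matrix-decomposition Poisson solver on the unit square.

    Solves the 5-point discretisation of laplacian(u) = f with zero
    Dirichlet boundary values.  f is an (n-1) x (n-1) list of interior
    right-hand-side values on a grid with spacing h = 1/n.
    """
    h, m = 1.0 / n, n - 1
    # Sine-transform matrix; S is (2/n) times its own inverse.
    S = [[math.sin(math.pi * (j + 1) * (k + 1) / n) for k in range(m)]
         for j in range(m)]
    # Forward sine transform of f along the x direction (one per grid line).
    fhat = [[sum(S[k][j] * f[j][y] for j in range(m)) for y in range(m)]
            for k in range(m)]
    uhat = []
    for k in range(m):
        # Eigenvalue of the x second-difference operator for mode k+1.
        lam = -4.0 / h**2 * math.sin(math.pi * (k + 1) / (2 * n)) ** 2
        d, c = lam - 2.0 / h**2, 1.0 / h**2   # tridiagonal diag / off-diag
        # Thomas algorithm for the independent system in the y direction.
        cp, dp = [0.0] * m, [0.0] * m
        cp[0], dp[0] = c / d, fhat[k][0] / d
        for i in range(1, m):
            denom = d - c * cp[i - 1]
            cp[i] = c / denom
            dp[i] = (fhat[k][i] - c * dp[i - 1]) / denom
        row = [0.0] * m
        row[m - 1] = dp[m - 1]
        for i in range(m - 2, -1, -1):
            row[i] = dp[i] - cp[i] * row[i + 1]
        uhat.append(row)
    # Inverse sine transform back to grid values.
    return [[(2.0 / n) * sum(S[j][k] * uhat[k][y] for k in range(m))
             for y in range(m)] for j in range(m)]
```

The tridiagonal solves are mutually independent, which is exactly why the abstract calls the method ideally suited for parallel computation: with n² processors each transform and each solve can proceed concurrently.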

10 citations


01 Jan 1984
TL;DR: A framework for the design and analysis of concurrent algorithms for search structures is proposed; several new algorithms using the framework are presented, and a simulation kit for evaluating such algorithms is described.
Abstract: A search structure is a data structure that supports operations such as search, insert, and delete on a set of items. Common examples of search structures include lists, B-trees, and hash structures. Designing a sequential algorithm for a search structure is a well-understood problem. Designing a concurrent algorithm for such a structure is not. This thesis proposes a framework for the design and analysis of concurrent algorithms for search structures. It presents several new algorithms using this framework and describes a simulation kit to evaluate such algorithms. The framework has two salient features: a general model for search structures; and a correctness criterion based on specifications. We express algorithmic techniques as program fragments on the model. Because the model is general, these techniques apply to all search structures we know of. We consider computations to consist of actions at several levels of virtual machines. The highest level is the one that users interact with. Each action has a specification. For example, the specification of insert(x) is to ensure that x is in some set. We verify a concurrent computation by showing that it produces the same result as a sequence of user level actions performing according to their specifications. We present new algorithms for B-trees, multi-directory hash structures, and other search structures. We introduce concurrent algorithms for range queries, e.g. search for all items with keys between 100 and 110. These algorithms are only examples; one can design many more by following the guidelines of our framework. The purpose of the simulation kit is to evaluate the performance of concurrent search structure algorithms under a variety of conditions. As an example, we describe an experiment on eight B-tree algorithms.
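The kind of concurrent search-structure algorithm the thesis studies can be illustrated with the classic hand-over-hand (lock-coupling) technique on a sorted linked list. This is a generic textbook scheme sketched here for illustration, not one of the thesis's own algorithms; each traversal holds at most two node locks at a time, so threads working on disjoint parts of the list do not block each other:

```python
import threading

class _Node:
    __slots__ = ("key", "next", "lock")
    def __init__(self, key):
        self.key = key
        self.next = None
        self.lock = threading.Lock()

class LockCouplingList:
    """Sorted linked list with hand-over-hand locking on insert."""
    def __init__(self):
        self._head = _Node(float("-inf"))        # head sentinel
        self._head.next = _Node(float("inf"))    # tail sentinel

    def insert(self, key):
        pred = self._head
        pred.lock.acquire()
        curr = pred.next
        curr.lock.acquire()
        while curr.key < key:
            pred.lock.release()                  # drop the trailing lock...
            pred, curr = curr, curr.next
            curr.lock.acquire()                  # ...after taking the next one
        if curr.key != key:                      # ignore duplicate keys
            node = _Node(key)
            node.next = curr
            pred.next = node                     # splice in under both locks
        curr.lock.release()
        pred.lock.release()

    def items(self):
        """Unsynchronised traversal; call only when no writers are active."""
        out, node = [], self._head.next
        while node.key != float("inf"):
            out.append(node.key)
            node = node.next
        return out
```

In the thesis's terms, each completed insert(key) meets its specification (key ends up in the set), and any interleaving of concurrent inserts produces the same final list as some sequence of the corresponding sequential inserts.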

5 citations