
Showing papers on "Sequential algorithm published in 1993"


Journal ArticleDOI
TL;DR: This paper describes an insertion algorithm for the Vehicle Routing and Scheduling Problem with Time Windows that builds routes in parallel and uses a generalized regret measure over all unrouted customers to select the next candidate for insertion.

436 citations


Journal ArticleDOI
TL;DR: The temperature parallel algorithm of simulated annealing is considered to be the most suitable for finding the optimal multiple sequence alignment because the algorithm does not require any scheduling for optimization.
Abstract: We have developed simulated annealing algorithms to solve the problem of multiple sequence alignment. The algorithm was shown to give the optimal solution as confirmed by the rigorous dynamic programming algorithm for three-sequence alignment. To overcome long execution times for simulated annealing, we utilized a parallel computer. A sequential algorithm, a simple parallel algorithm and the temperature parallel algorithm were tested on a problem. The results were compared with the result obtained by a conventional tree-based algorithm where alignments were merged by two-way dynamic programming. Every annealing algorithm produced a better energy value than the conventional algorithm. The best energy value, which probably represents the optimal solution, was reached within a reasonable time by both of the parallel annealing algorithms. We consider the temperature parallel algorithm of simulated annealing to be the most suitable for finding the optimal multiple sequence alignment because the algorithm does not require any scheduling for optimization. The algorithm is also useful for refining multiple alignments obtained by other heuristic methods.

64 citations
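The sequential annealing baseline that this paper and several others in this listing build on can be sketched in a few lines. This is a generic, hedged illustration on a toy integer energy, not the authors' alignment algorithm; the function name and parameters are hypothetical.

```python
import math
import random

def simulated_annealing(cost, neighbor, state, t0=10.0, alpha=0.95, steps=2000, seed=0):
    """Generic sequential simulated annealing sketch (illustrative, not the paper's code)."""
    rng = random.Random(seed)
    t = t0
    cur, cur_cost = state, cost(state)
    best, best_cost = cur, cur_cost
    for _ in range(steps):
        cand = neighbor(cur, rng)
        delta = cost(cand) - cur_cost
        # Always accept improvements; accept uphill moves with Boltzmann probability.
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            cur, cur_cost = cand, cur_cost + delta
            if cur_cost < best_cost:
                best, best_cost = cur, cur_cost
        t *= alpha  # geometric cooling: the "scheduling" a temperature parallel scheme avoids
    return best, best_cost

# Toy energy: minimize x^2 over the integers with +/-1 moves.
best, e = simulated_annealing(lambda x: x * x,
                              lambda x, rng: x + rng.choice((-1, 1)),
                              state=25)
```

The temperature parallel algorithm the abstract favours replaces the explicit cooling schedule (`alpha` here) by running chains at fixed temperatures and exchanging states between them.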


Proceedings ArticleDOI
01 Jul 1993
TL;DR: A simple approach for constructing geometric partitions in a way that is easy to apply to new problems, which leads to asymptotically faster and more-efficient EREW PRAM parallel algorithms for a number of computational geometry problems, including the development of the first optimal-work NC algorithm for the well-known 3-dimensional convex hull problem.
Abstract: We present a simple approach for constructing geometric partitions in a way that is easy to apply to new problems. We avoid the use of VC-dimension arguments, and, instead, base our arguments on a notion we call the scaffold dimension, which subsumes the VC-dimension and is simpler to apply. We show how to easily construct (1/r)-nets and (1/r)-approximations for range spaces with bounded scaffold dimension, which immediately implies simple algorithms for constructing (1/r)-cuttings (by straight-forward recursive subdivision methods). More significant than simply being a conceptual simplification of previous approaches, however, is that our methods lead to asymptotically faster and more-efficient EREW PRAM parallel algorithms for a number of computational geometry problems, including the development of the first optimal-work NC algorithm for the well-known 3-dimensional convex hull problem, which solves an open problem of Amato and Preparata. Interestingly, our approach also yields a faster sequential algorithm for the distance selection problem, by the parametric searching paradigm, which solves an open problem posed by Agarwal, Aronov, Sharir, and Suri, and reiterated by Dickerson and Drysdale.

62 citations


Proceedings ArticleDOI
01 Jul 1993
TL;DR: An efficient parallel implementation of the Gröbner basis problem, a symbolic algebra application, is developed using the following techniques: a sequential algorithm was rewritten in a transition axiom style, and an application-specific scheduler was designed and tuned to get good performance.
Abstract: Parallelism with irregular patterns of data, communication and computation is hard to manage efficiently. In this paper we present a case study of the Gröbner basis problem, a symbolic algebra application. We developed an efficient parallel implementation using the following techniques. First, a sequential algorithm was rewritten in a transition axiom style, in which computation proceeds by non-deterministic invocations of guarded statements at multiple processors. Next, the algebraic properties of the problem were studied to modify the algorithm to ensure correctness in spite of locally inconsistent views of the shared data structures. This was used to design data structures with very little overhead for maintaining consistency. Finally, an application-specific scheduler was designed and tuned to get good performance. Our distributed memory implementation achieves impressive speedups.

55 citations


Journal ArticleDOI
A. Imai, E. Tick
TL;DR: The authors show how much the algorithm reduces the contention for critical sections during garbage collection, how well the load-balancing strategy works and its expected overheads, and the expected speedup achieved by the algorithm.
Abstract: A parallel copying garbage collection algorithm for symbolic languages executing on shared-memory multiprocessors is proposed. The algorithm is an extension of Baker's sequential algorithm with a novel method of heap allocation to prevent fragmentation and facilitate load distribution during garbage collection. An implementation of the algorithm within a concurrent logic programming system, VPIM, has been evaluated and the results, for a wide selection of benchmarks, are analyzed here. The authors show 1) how much the algorithm reduces the contention for critical sections during garbage collection, 2) how well the load-balancing strategy works and its expected overheads, and 3) the expected speedup achieved by the algorithm.

52 citations


Book ChapterDOI
07 Apr 1993
TL;DR: This work offers a symmetric account of sequentiality, by means of symmetric algorithms, which are pairs of sequential functions, mapping input data to output data, and output exploration trees to input exploration trees, respectively.
Abstract: We offer a symmetric account of sequentiality, by means of symmetric algorithms, which are pairs of sequential functions, mapping input data to output data, and output exploration trees to input exploration trees, respectively. We use the framework of sequential data structures, a reformulation of a class of Kahn-Plotkin's concrete data structures. In sequential data structures, data are constructed by alternating questions and answers. Sequential data structures and symmetric algorithms are the objects and morphisms of a symmetric monoidal closed category, which is also cartesian, and is such that the unit is terminal. Our category is a full subcategory of categories of games considered by Lamarche, and by Abramsky-Jagadeesan, respectively.

47 citations


Journal ArticleDOI
01 Aug 1993
TL;DR: A generalization of the sequential simulated annealing algorithm for combinatorial optimization problems by performing a parallel study of the current solution neighbourhood is obtained and is tested by comparing it to the sequential algorithm for two classical problems.
Abstract: In this paper we present a generalization of the sequential simulated annealing algorithm for combinatorial optimization problems. By performing a parallel study of the current solution neighbourhood we obtain an algorithm that can be very efficiently implemented on a massively parallel computer. We test the convergence and the quality of our algorithm by comparing it to the sequential algorithm for two classical problems: the minimization of an unconstrained 0–1 quadratic function and the quadratic sum assignment problem.

45 citations
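A minimal sketch of the neighbourhood-parallel idea described above: every neighbour of the current solution is costed independently (the step a massively parallel machine would distribute, one candidate per processor), then one move is selected by the usual Metropolis rule. The function names and the toy 0-1 quadratic objective are illustrative assumptions, not the authors' code.

```python
import math
import random

def anneal_neighborhood(cost, neighbors, state, t0=5.0, alpha=0.9, sweeps=300, seed=1):
    """Sketch of annealing with a parallel study of the neighbourhood."""
    rng = random.Random(seed)
    t = t0
    cur, cur_cost = state, cost(state)
    for _ in range(sweeps):
        cands = neighbors(cur)
        # On a massively parallel computer this map is evaluated concurrently.
        costs = [cost(c) for c in cands]
        i = rng.randrange(len(cands))        # propose one of the examined moves
        delta = costs[i] - cur_cost
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            cur, cur_cost = cands[i], costs[i]
        t *= alpha
    return cur, cur_cost

# Toy unconstrained 0-1 quadratic: minimize (sum(x) - 2)^2, neighbours flip one bit.
f = lambda x: (sum(x) - 2) ** 2
flips = lambda x: [x[:i] + (1 - x[i],) + x[i + 1:] for i in range(len(x))]
sol, val = anneal_neighborhood(f, flips, (0,) * 6)
```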


Proceedings ArticleDOI
19 Apr 1993
TL;DR: Techniques for obtaining random point samples from spatial databases are described, detailing the sample-first, A/R-tree, and partial area tree algorithms.
Abstract: Techniques for obtaining random point samples from spatial databases are described. Random points are sought from a continuous domain that satisfy a spatial predicate which is represented in the database as a collection of polygons. Several applications of spatial sampling are described. Sampling problems are characterized in terms of two key parameters: coverage (selectivity), and expected stabbing number (overlap). Two fundamental approaches to sampling with spatial predicates, depending on whether one samples first or evaluates the predicate first, are discussed. The approaches are described in the context of both quadtrees and R-trees, detailing the sample-first, A/R-tree, and partial area tree algorithms. A sequential algorithm, the one-pass spatial reservoir algorithm, is also described.

40 citations
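The one-pass spatial reservoir algorithm named above builds on classic reservoir sampling, which maintains a uniform k-subset of a stream in a single pass. A minimal sketch of that sampling core (the spatial predicate evaluation of the paper is omitted; names are illustrative):

```python
import random

def reservoir_sample(stream, k, seed=42):
    """One-pass reservoir sampling: after n items, each item is
    retained with probability k/n."""
    rng = random.Random(seed)
    reservoir = []
    for n, item in enumerate(stream, start=1):
        if n <= k:
            reservoir.append(item)        # fill the reservoir first
        else:
            j = rng.randrange(n)          # uniform index in [0, n)
            if j < k:
                reservoir[j] = item       # replace a random slot with probability k/n
    return reservoir

sample = reservoir_sample(range(1000), k=10)
```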


Journal ArticleDOI
TL;DR: In this article, a comparison between two assimilation algorithms, sequential and four-dimensional variational, on a 24-hour period extracted from a baroclinic instability situation representative of mid-latitude dynamics is made.
Abstract: The aim of this study is to make a strict comparison between two assimilation algorithms, sequential and four-dimensional variational, on a 24-hour period extracted from a baroclinic instability situation representative of mid-latitude dynamics. In the case of linear dynamics, and under the hypothesis of a perfect model, these two four-dimensional algorithms are known to lead to the same optimal estimate of the atmosphere at the end of the assimilation period, and both methods can be generalized in the nonlinear case. Because the full sequential algorithm is too resource-demanding to be implemented as such, we shall test the four-dimensional variational method (4D-VAR), and a simplified sequential method based on three-dimensional variational analysis (3D-VAR), deliberately not exceeding the range of validity of the tangent-linear model in the experiments. 4D-VAR is then expected to be almost equivalent to the generalization of the sequential Kalman filter in the nonlinear case, i.e. the extended Kalman filter. As for the simplified sequential algorithm, it can be seen as an approximation of this full extended Kalman filter, for which the forecast error matrices are evaluated only approximately before each analysis, instead of being explicitly computed from the complete dynamical equations. In the four-dimensional variational scheme, the consistency of the propagation of information with the dynamics is illustrated in an experiment assimilating some localized AIREP data. The large impact which these additional observations have over a large geographical area appears to be very beneficial for the quality of the analysis. Comparing the results of both methods in various configurations, we found that 4D-VAR systematically behaved substantially better than the simplified sequential algorithm, and had a more accurate analysis at the end of the assimilation period and a much smaller error growth rate in subsequent forecasts. On the one hand, extremely bad specifications of initial forecast errors were found to be detrimental to both algorithms. On the other hand, the four-dimensional variational algorithm proves to be more robust to the way by which gravity-wave control is implemented.

32 citations


Book ChapterDOI
01 Jan 1993
TL;DR: The screening sequential algorithm for generating realizations from Gaussian and Gaussian intrinsic random functions is defined, and it is shown to be exact for the one-dimensional case, while empirical evaluations show that it is highly reliable also in two-dimensional cases.
Abstract: Random functions are in frequent use in applications of spatial statistics. Gaussian and Gaussian intrinsic random functions are differentiated, and the screening sequential algorithm for generating realizations from them is defined. The algorithm is based on the general sequential algorithm and Markov properties for random functions. For exponential and linear variogram functions the algorithm is shown to be exact for the one-dimensional case, while empirical evaluations show that it is highly reliable also in two-dimensional cases. For fractal random functions, the screening sequential algorithm is significantly more reliable than the frequently used random midpoint displacement and successive random addition algorithms. The processing requirements for the algorithm are independent of the actual variogram function and linear in the number of lattice nodes — both favorable characteristics.

32 citations
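For intuition on the screening property above: in one dimension with an exponential variogram the underlying process is Markov, so conditioning each new node on its nearest already-simulated neighbour is exact, and the sequential path reduces to an AR(1) recursion. A sketch under illustrative assumptions (unit sill, unit lattice spacing, hypothetical function name):

```python
import math
import random

def sequential_gaussian_1d(n, a=1.0, seed=3):
    """1-D sequential simulation for an exponential variogram with unit sill.
    The Markov (screening) property makes one-neighbour conditioning exact."""
    rng = random.Random(seed)
    rho = math.exp(-1.0 / a)             # correlation between adjacent lattice nodes
    z = [rng.gauss(0.0, 1.0)]
    for _ in range(n - 1):
        mean = rho * z[-1]               # simple kriging mean from the nearest neighbour
        var = 1.0 - rho * rho            # corresponding kriging variance
        z.append(mean + math.sqrt(var) * rng.gauss(0.0, 1.0))
    return z

z = sequential_gaussian_1d(500)
```

In two dimensions the screening is only approximate, which is what the paper's empirical evaluations address.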


Journal ArticleDOI
TL;DR: A sequential algorithm for the maximum matching problem on cographs is presented; the input is a parse tree of a cograph and the time complexity is O(n).

Journal ArticleDOI
01 Sep 1993
TL;DR: A new parallel implementation of a long-range interaction problem on the ring topology for a MIMD computer system is presented and it is shown that for each number of particles N there exists an optimal number of processors p.
Abstract: A new parallel implementation of a long-range interaction problem on the ring topology for a MIMD computer system is presented. The algorithm was applied for the implementation of a forces integrator in molecular dynamics. The complexity estimation is made and measured time results are also given. It is shown that for each number of particles N there exists an optimal number of processors p. The time complexity O(N²) of a sequential algorithm is reduced to O(N²/p) with the proposed parallel implementation. The time requirement for the optimal sequential algorithm is proportional to N²/2 and the time requirement for the proposed parallel algorithm is proportional to N²/(2p).
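The O(N²) to O(N²/p) reduction described above amounts to splitting the pairwise sum into p independent blocks of rows. A sketch with a toy 1/r potential; the ring communication pattern of the paper is not modelled, and the function names are hypothetical:

```python
def pairwise_energy(pos):
    """Sequential O(N^2) long-range sum (toy 1/r potential, arbitrary units)."""
    e, n = 0.0, len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            e += 1.0 / abs(pos[i] - pos[j])
    return e

def pairwise_energy_chunked(pos, p):
    """Same sum split into p row blocks; each block is independent work that
    one of p processors could evaluate concurrently."""
    n = len(pos)
    def block(rows):
        return sum(1.0 / abs(pos[i] - pos[j])
                   for i in rows for j in range(i + 1, n))
    chunks = [range(r, n, p) for r in range(p)]  # round-robin rows to balance load
    return sum(block(c) for c in chunks)

pos = [float(x) for x in range(1, 9)]
```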

Proceedings ArticleDOI
Ahmed Ouenes, Naji Saad
TL;DR: A new parallel simulated annealing algorithm runs many sequential annealing algorithms concurrently, adjusting their number throughout the optimization to increase the acceptance rate with optimal use of the CPU; the overall run time was reduced by approximately the number of concurrent calls of the sequential algorithm.
Abstract: This paper presents a new parallel simulated annealing algorithm for computationally intensive problems. The new algorithm enables us to reduce the overall time required to solve reservoir engineering problems by using the simulated annealing method (SAM). A simple geostatistical optimization problem (variogram matching) applied to two fields is used for illustration purposes. The reduction of computation time starts by optimizing the sequential simulated annealing algorithm. This task is achieved by efficient coding and an appropriate choice of topology. Three different topologies are used and their effects on the overall run time and the quality of the generated image are discussed. After optimizing the sequential algorithm, the problem of a high rejection rate at low annealing temperature is solved by using parallelization. The new algorithm runs many sequential algorithms concurrently in an optimal manner. The number of concurrent algorithms is adjusted throughout the optimization to increase the acceptance rate with optimal use of the CPU. The new algorithm was implemented on a CRAY Y-MP with 4 processors. A 50,400 (280x180) gridblock field was used to test the parallel optimization method. The overall run (clock) time was reduced by approximately the number of concurrent calls of the sequential algorithm.

Journal ArticleDOI
TL;DR: This work formalizes the notion of a ‘good approximation’ in terms of the Hausdorff metric and shows through experimentation that the application of this metric leads to visually satisfying approximations.

Journal ArticleDOI
TL;DR: A fast parallel algorithm for the recognition of ultrametrics is presented and its time-processor product is of the same order as the time bound of the known sequential algorithm of Culberson and Rudnicki.
Abstract: A fast parallel algorithm for the recognition of ultrametrics is presented. Its time-processor product is of the same order as the time bound of the known sequential algorithm of Culberson and Rudnicki [Inform. Process. Lett., 30 (1990), pp. 215–220] (compare also [SIAM J. Disc. Math., 3 (1990), pp. 1–6] and [Quart. Appl. Math., 26 (1968), pp. 607–609]). In the same way, tree metrics can also be recognized.
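For reference, a metric d is an ultrametric iff d(i,j) ≤ max(d(i,k), d(k,j)) for every triple. The brute-force O(n³) check below only illustrates that definition; it is neither the O(n²) Culberson-Rudnicki algorithm nor the paper's parallel algorithm.

```python
def is_ultrametric(d):
    """Check the ultrametric three-point condition on a distance matrix d."""
    n = len(d)
    for i in range(n):
        if d[i][i] != 0:
            return False
        for j in range(i + 1, n):
            if d[i][j] != d[j][i] or d[i][j] <= 0:
                return False              # must be a symmetric positive dissimilarity
            for k in range(n):
                if d[i][j] > max(d[i][k], d[k][j]):
                    return False          # strong triangle inequality violated
    return True

ultra = [[0, 2, 4], [2, 0, 4], [4, 4, 0]]       # two close leaves under a common ancestor
not_ultra = [[0, 1, 5], [1, 0, 3], [5, 3, 0]]   # 5 > max(1, 3) breaks the condition
```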

Journal ArticleDOI
TL;DR: An accurate proof of the characterization of proper circular arc graphs is presented, and the first efficient parallel algorithm which not only recognizes proper circular arc graphs but also constructs proper circular arc representations is obtained.
Abstract: Based on Tucker's work, we present an accurate proof of the characterization of proper circular arc graphs and obtain the first efficient parallel algorithm which not only recognizes proper circular arc graphs but also constructs proper circular arc representations. The algorithm runs in O(log² n) time with O(n³) processors on a Common CRCW PRAM. The sequential algorithm can be implemented to run in O(n²) time and is optimal if the input graph is given as an adjacency matrix.

Patent
08 Jul 1993
TL;DR: In this article, a history buffer comprises an array of i identical horizontal slice units, and a control unit controls execution of the sequential algorithm to condition the comparators to scan symbols in parallel but in each of the blocks sequentially and cause matching sequences and nonmatching sequences of symbols to be stored in the array.
Abstract: An apparatus and method for executing a sequential data compression algorithm that is especially suitable for use where data compression is required in a device (as distinguished from host) controller. A history buffer comprises an array of i identical horizontal slice units. Each slice unit stores j symbols to define j separate blocks in which the symbols in each slice unit are separated by exactly i symbols. Symbols in a string of i incoming symbols are compared by i comparators in parallel with symbols previously stored in the slice units to identify matching sequences of symbols. A control unit controls execution of the sequential algorithm to condition the comparators to scan symbols in parallel but in each of the blocks sequentially and cause matching sequences and nonmatching sequences of symbols to be stored in the array. The parameters i and j are selected to limit the number of comparators required to achieve a desired degree of efficiency in executing the algorithm based upon a trade-off of algorithm execution speed versus hardware cost. A priority encoder calculates from signals output by the slice units each j,i address in which a matching sequence is identified, but outputs only one (such as the smallest) of these addresses.
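The hardware comparators in the patent search a history buffer for matching sequences, much as LZ77-style software compressors do. A naive software sketch of that matching step (hypothetical function name; the slice-unit blocking and the priority encoder are omitted):

```python
def longest_match(history, lookahead):
    """Find the longest prefix of `lookahead` occurring in `history`,
    returning (offset, length). Naive O(len(history) * len(lookahead)) scan;
    the patent's slice units perform the comparisons in parallel."""
    best_off, best_len = 0, 0
    for off in range(len(history)):
        length = 0
        while (length < len(lookahead)
               and off + length < len(history)
               and history[off + length] == lookahead[length]):
            length += 1
        if length > best_len:
            best_off, best_len = off, length
    return best_off, best_len

off, ln = longest_match(b"abracadabra", b"abrax")
```

The priority encoder in the patent corresponds to keeping only one qualifying offset (here the first longest one found).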

Journal ArticleDOI
TL;DR: In this article, the authors present a sequential and a parallel algorithm to solve the maximum weight independent set problem on a permutation graph in O(n log log n) and O(log² n) time, respectively, under the CREW PRAM model.

Journal ArticleDOI
TL;DR: The Logarithmic Pipelined Model is introduced, in which a RAM processor of fixed size has pipelined access to a memory of m cells in time log m; the results lead to a new organization of parallel algorithms for linked-list structures.
Abstract: We introduce a new sequential model of computation, called the Logarithmic Pipelined Model (LPM), in which a RAM processor of fixed size has pipelined access to a memory of m cells in time log m. Our motivation is that the usual assumption that a memory can be accessed in constant time becomes theoretically unacceptable as m increases, while an access time of log m is consistent with VLSI technologies. For a problem Π of size n, Π ∈ P, we denote by S(n) the time required by the fastest known sequential algorithm, and by T(n) the time required by the fastest algorithm solving Π in the LPM. Letting O(log n) = O(log m), we define several complexity classes; in particular, LP0 = {Π ∈ P : T(n) = O(S(n))}, the class of problems for which the LPM is as efficient as the standard model, and LP∞ = {Π ∈ P : T(n) = O(S(n) log n)}, where the problems are less adequately solved in the new model. We first study the relations between the LPM and other models of computation. Of particular relevance is the comparison with the PRAM model. Then we discuss several problems and derive the relative upper and lower bounds in the LPM. Our results lead to a new organization of parallel algorithms for linked-list structures.

Journal ArticleDOI
TL;DR: The problem of optimizing the sequential algorithm for the Boltzmann machine (BM) is addressed and a solution that is based on the locality properties of the algorithm and makes possible the efficient computation of the cost difference between two configurations is presented.
Abstract: The problem of optimizing the sequential algorithm for the Boltzmann machine (BM) is addressed. A solution that is based on the locality properties of the algorithm and makes possible the efficient computation of the cost difference between two configurations is presented. Since the algorithm performance depends on the number of accepted state transitions in the annealing process, a theoretical procedure is formulated to estimate the acceptance probability of a state transition. In addition, experimental data are provided on a well-known optimization problem, the travelling salesman problem, to give a numerical verification of the theory, and to show that the proposed solution obtains a speedup between 3 and 4 in comparison with the traditional algorithm.
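The locality property exploited above, namely that the cost difference of flipping a single unit depends only on that unit's connections, can be sketched for a toy consensus-style energy. The weights, biases, and function names are illustrative assumptions, not the paper's model:

```python
def energy(x, w, b):
    """Full energy of a Boltzmann-machine-like state x in {0,1}^n:
    E(x) = -sum_{i<j} w[i][j] x_i x_j - sum_i b[i] x_i."""
    n = len(x)
    e = -sum(b[i] * x[i] for i in range(n))
    for i in range(n):
        for j in range(i + 1, n):
            e -= w[i][j] * x[i] * x[j]
    return e

def delta_flip(x, w, b, k):
    """Energy change of flipping unit k: touches only k's connections,
    so it costs O(n) instead of the O(n^2) full recomputation."""
    s = 1 - 2 * x[k]                     # +1 for a 0->1 flip, -1 for 1->0
    field = b[k] + sum(w[min(k, j)][max(k, j)] * x[j]
                       for j in range(len(x)) if j != k)
    return -s * field

w = [[0, 2, -1], [0, 0, 3], [0, 0, 0]]   # upper-triangular toy weights
b = [1, -2, 0]
x = [1, 0, 1]
```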

Journal Article
TL;DR: A theorem is presented that states that the predicted average probability of committing a decision error, associated with a Bayesian sequential procedure that accepts the hypothesis of a gene-order configuration with posterior probability equal to or greater than π*, is smaller than 1 − π*.
Abstract: Determination of the relative gene order on chromosomes is of critical importance in the construction of human gene maps. In this paper the authors develop a sequential algorithm for gene ordering. They start by comparing three sequential procedures to order three genes on the basis of Bayesian posterior probabilities, maximum-likelihood ratio, and minimal recombinant class. In the second part of the paper they extend the sequential procedure based on the posterior probabilities to the general case of g genes. They present a theorem that states that the predicted average probability of committing a decision error, associated with a Bayesian sequential procedure that accepts the hypothesis of a gene-order configuration with posterior probability equal to or greater than π*, is smaller than 1 − π*. This theorem holds irrespective of the number of genes, the genetic model, and the source of genetic information. The theorem is an extension of a classical result of Wald, concerning the sum of the actual and the nominal error probabilities in the sequential probability ratio test of two hypotheses. A stepwise strategy for ordering a large number of genes, with control over the decision-error probabilities, is discussed. An asymptotic approximation is provided, which facilitates the calculation, with existing computer software for gene mapping, of the posterior probabilities of an order and the error probabilities. They illustrate with some simulations that the stepwise ordering is an efficient procedure.

Book ChapterDOI
TL;DR: This paper reports an effort to parallelize on a network of workstations the partial cylindrical algebraic decomposition based quantifier elimination algorithm over the reals, which was devised by Collins and improved by the author, so that cylinders are constructed in parallel.
Abstract: This paper reports our effort to parallelize on a network of workstations the partial cylindrical algebraic decomposition based quantifier elimination algorithm over the reals, which was devised by Collins and improved by the author. We have parallelized the lifting phase of the algorithm, so that cylinders are constructed in parallel. An interesting feature is that the algorithm sometimes appears to produce super-linear speedups, due to speculative parallelism. Thus it suggests a possible further improvement of the sequential algorithm via simulating parallelism.

02 Jan 1993
TL;DR: A methodology to control the complexity in designing parallel algorithms is introduced, and the result is parallel algorithms with all of the features of sequential ones that deliver the promise of parallelism.
Abstract: Volume rendering is a method for visualizing volumes of sampled data such as CT, MRI, and finite element simulations. Visualization of medical and simulation data improves understanding and interpretation, but volume rendering is expensive and each frame takes from minutes to hours to calculate. Parallel computers provide the potential for interactive volume rendering, but parallel algorithms have not matched sequential algorithms' features, nor have they provided the speedup possible. I introduce a methodology to control the complexity in designing parallel algorithms, and apply this methodology to volume rendering. The result is parallel algorithms with all of the features of sequential ones that deliver the promise of parallelism. My algorithms are sufficiently general to run on single instruction multiple data (SIMD) computers and multiple instruction multiple data (MIMD) computers. Through complexity analysis and performance measurements I show that volume rendering is ideally parallelizable with linear speedup and low memory overhead.

Journal ArticleDOI
Jun Ma, Shaohan Ma
TL;DR: It is shown that the problems to find all connected components, to compute the diameter of an undirected graph, to determine the center of a directed graph and to search for a directed cycle with the minimum (maximum) length in a directed graph can all be solved in O(n²/p + log p) time.
Abstract: In this paper, a sequential algorithm computing the all vertex pair distance matrix D and the path matrix P is given. On a PRAM EREW model with p, 1 ≤ p ≤ n², processors, a parallel version of the sequential algorithm is shown. This method can also be used to get a parallel algorithm to compute the transitive closure array A* of an undirected graph. The time complexity of the parallel algorithm is O(n³/p). If D, P and A* are known, it is shown that the problems to find all connected components, to compute the diameter of an undirected graph, to determine the center of a directed graph and to search for a directed cycle with the minimum (maximum) length in a directed graph can all be solved in O(n²/p + log p) time.
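A standard sequential way to compute an all-pairs distance matrix D together with a path matrix P is Floyd-Warshall; the sketch below is illustrative and not necessarily the paper's algorithm. The inner i,j loops for each k are the part a PRAM version can share among p processors:

```python
INF = float("inf")

def all_pairs(adj):
    """Floyd-Warshall sketch: returns the distance matrix D and a successor
    matrix P (P[i][j] is the first hop on a shortest i->j path)."""
    n = len(adj)
    d = [row[:] for row in adj]
    p = [[j if adj[i][j] < INF else None for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):           # these two loops parallelize over processors
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
                    p[i][j] = p[i][k]   # route the first hop toward k
    return d, p

adj = [[0, 3, INF],
       [INF, 0, 1],
       [7, INF, 0]]
D, P = all_pairs(adj)
```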

Proceedings ArticleDOI
16 Jun 1993
TL;DR: It is demonstrated that by describing the algorithm as a nondeterministic sequential algorithm, and presenting the optimized parallel algorithm through a series of refinements to that algorithm, the algorithm is easier to understand and the correctness proof becomes manageable.
Abstract: We present an asynchronous MIMD algorithm for Gröbner basis computation. The algorithm is based on the well-known sequential algorithm of Buchberger. Two factors make the correctness of our algorithm nontrivial: the nondeterminism that is inherent with asynchronous parallelism, and the distribution of data structures which leads to inconsistent views of the global state of the system. We demonstrate that by describing the algorithm as a nondeterministic sequential algorithm, and presenting the optimized parallel algorithm through a series of refinements to that algorithm, the algorithm is easier to understand and the correctness proof becomes manageable. The proof does, however, rely on algebraic properties of the polynomials in the computation, and does not follow directly from the proof of Buchberger's algorithm.

Proceedings ArticleDOI
13 Apr 1993
TL;DR: The authors provide optimal parallel solutions to several fundamental link distance problems set in trapezoided rectilinear polygons and imply an optimal linear-time sequential algorithm for constructing a data structure to support rectil inear link distance queries between points.
Abstract: The authors provide optimal parallel solutions to several fundamental link distance problems set in trapezoided rectilinear polygons. All parallel algorithms are deterministic, run in logarithmic time, have an optimal time-processor product and are designed to run on EREW PRAM. The authors develop techniques (e.g. rectilinear window partition) for solving link distance problems in parallel which are expected to find applications in the design of other parallel computational geometry algorithms. They employ these parallel techniques for example to compute the link diameter, link center, and central diagonal of a rectilinear polygon. Their results also imply an optimal linear-time sequential algorithm for constructing a data structure to support rectilinear link distance queries between points.

Book ChapterDOI
22 Sep 1993
TL;DR: This paper describes an approach which it is hoped will make evaluation of λ-expressions over finite lattices tractable in practice, and aims to save time by computing only part of the value of each λ-expression.
Abstract: Abstract interpretation in the framework introduced by Cousot and Cousot requires finding fixpoints of continuous functions between abstract lattices [CC77]. Very often these abstract functions are expressed as typed λ-expressions, even when the language being analysed isn't functional--for example an abstract interpretation derived from a denotational semantics in the style of Nielson [Nie82] will naturally be in this form. So implementing an abstract interpreter often requires an evaluator for λ-expressions over lattices. Of course, evaluation must always terminate--even when the result is the bottom element of the lattice concerned. In practice this evaluation is intractable. The problem is caused by function types: lattices of functions grow more than exponentially in the size of their type. λ-expressions of quite simple types therefore denote functions that are too enormous to manipulate efficiently. Attempts to find clever representations of functions (as frontiers) have really failed to solve the problem [HH92], and in practice the intractability is avoided by avoiding functions--function types may be abstracted as the one-point type, discarding all information about function values, or analyses may be constructed from a more intensional semantics in which 'function' values are closures (code-environment pairs), so that true functions do not appear. Such approaches are forced to make more or less ad hoc approximations, at a cost in accuracy, and moreover cannot yield faithful implementations of the many analyses in the literature in which functions are abstracted as functions. In this paper we describe an approach which we hope will make evaluation of λ-expressions over finite lattices tractable in practice. Our basic idea is to save time by computing only part of the value of each λ-expression. For example, to compute the value of the application e1 e2 it's not necessary to compute the value of e1 precisely.
We only need to know what value e1 returns when applied to e2 --one point in the graph of the function, rather than the whole graph. Moreover, if e1 is a constant function, we may not need to compute the value of e2 at all. Or if e2 is itself a function, e1 may only apply it to a couple of different arguments, and so we may only need to know its value at a few points rather than its entire graph. Our intention is to lazily compute only as much as is needed of each expression, in the hope that in normal cases this will be only a tiny proportion of the whole. This idea is also at the heart of Young's pending analysis [YH86] and Launchbury's use of minimal function graphs [Lau91] to represent just the part of the graph that is needed, but these approaches are really confined to first-order functions. Our contribution is to extend the same idea smoothly to the higher-order case.

Book ChapterDOI
15 Dec 1993
TL;DR: This paper presents new geometric observations that lead to extremely simple and optimal algorithms for solving, both sequentially and in parallel, the case of this problem where the polygons are rectilinear.
Abstract: Given an n-vertex simple polygon P, the problem of computing the shortest weakly visible subedge of P is that of finding a shortest line segment s on the boundary of P such that P is weakly visible from s (if s exists). In this paper, we present new geometric observations that are useful for solving this problem. Based on these geometric observations, we obtain optimal sequential and parallel algorithms for solving this problem. Our sequential algorithm runs in O(n) time, and our parallel algorithm runs in O(log n) time using O(n/log n) processors in the CREW PRAM computational model. Using the previously best known sequential algorithms to solve this problem would take O(n²) time. We also give geometric observations that lead to extremely simple and optimal algorithms for solving, both sequentially and in parallel, the case of this problem where the polygons are rectilinear.

Journal ArticleDOI
TL;DR: Analysis and simulation show that the symbol error probability of the vector sequential algorithm is essentially the same as for maximum-likelihood sequence estimation using the vector Viterbi algorithm, while its average computational complexity is much less, although computation per symbol is a random variable with the Pareto distribution.
Abstract: A vector sequential sequence estimator is proposed for multiple-channel systems with both intersymbol interference (ISI) and interchannel interference (ICI). Both finite ISI-ICI and infinite ISI-ICI are considered. The estimator consists of a multiple-dimensional whitened matched filter and a vector sequential decoder. The metric of the sequential algorithm is derived, and the algorithm's performance is analyzed. Computer simulation results for a two-dimensional finite ISI-ICI channel and a two-dimensional infinite ISI-ICI channel are presented. Analysis and simulation show that the symbol error probability of the vector sequential algorithm is essentially the same as for maximum-likelihood sequence estimation using the vector Viterbi algorithm, while its average computational complexity is much less, although computation per symbol is a random variable with the Pareto distribution. There exists a signal-to-noise ratio above which the ensemble average computation is bounded. An upper bound on this ratio is found.

Proceedings Article
07 Dec 1993
TL;DR: In this paper, a short-term hydrothermal scheduling algorithm based on the simulated annealing technique is proposed, with a relaxation method for checking operating limits; its performance is demonstrated on a test example and compared to a conventional method.
Abstract: This paper develops a short-term hydrothermal scheduling algorithm based on the simulated annealing technique. In the algorithm, the load balance constraint, total water discharge constraint, reservoir volume limits and the constraint on the operation limits of the hydrothermal generator and the equivalent thermal generator are fully accounted for. A relaxation method for checking the limits is proposed and included in the algorithm. The performance of the algorithm is demonstrated through an application to a test example. The results are presented and are compared to a conventional method.