
Showing papers on "Greedy algorithm published in 1992"


Journal ArticleDOI
TL;DR: In this article, the authors present a greedy algorithm for the active contour model with performance comparable to the dynamic programming and variational calculus approaches, but more than an order of magnitude faster than dynamic programming, being O(nm).
Abstract: A model for representing image contours in a form that allows interaction with higher level processes has been proposed by Kass et al. (in Proceedings of First International Conference on Computer Vision, London, 1987, pp. 259–269). This active contour model is defined by an energy functional, and a solution is found using techniques of variational calculus. Amini et al. (in Proceedings, Second International Conference on Computer Vision, 1988, pp. 95–99) have pointed out some of the problems with this approach, including numerical instability and a tendency for points to bunch up on strong portions of an edge contour. They proposed an algorithm for the active contour model using dynamic programming. This approach is more stable and allows the inclusion of hard constraints in addition to the soft constraints inherent in the formulation of the functional; however, it is slow, having complexity O(nm3), where n is the number of points in the contour and m is the size of the neighborhood in which a point can move during a single iteration. In this paper we summarize the strengths and weaknesses of the previous approaches and present a greedy algorithm which has performance comparable to the dynamic programming and variational calculus approaches. It retains the improvements of stability, flexibility, and inclusion of hard constraints introduced by dynamic programming but is more than an order of magnitude faster than that approach, being O(nm). A different formulation is used for the continuity term than that of the previous authors so that points in the contour are more evenly spaced. The even spacing also makes the estimation of curvature more accurate. Because the concept of curvature is basic to the formulation of the contour functional, several curvature approximation methods for discrete curves are presented and evaluated as to efficiency of computation, accuracy of the estimation, and presence of anomalies.
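
To make the greedy scheme concrete, below is a minimal sketch of one iteration in Python, assuming a grayscale image whose gradient magnitude has been precomputed; the function name, the coefficient values, and the omission of the paper's per-neighborhood normalization of each energy term are illustrative simplifications, not the authors' exact formulation.

import numpy as np

def greedy_snake_step(points, grad_mag, alpha=1.0, beta=1.0, gamma=1.2, r=1):
    """One greedy pass: move each contour point to the lowest-energy
    pixel in its (2r+1)x(2r+1) neighborhood; returns how many moved."""
    n, moved = len(points), 0
    d_avg = np.mean([np.linalg.norm(points[i] - points[(i + 1) % n])
                     for i in range(n)])          # mean inter-point spacing
    h, w = grad_mag.shape
    for i in range(n):
        prev_pt, next_pt = points[(i - 1) % n], points[(i + 1) % n]
        best, best_e = points[i].copy(), np.inf
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                cand = points[i] + np.array([dy, dx])
                if not (0 <= cand[0] < h and 0 <= cand[1] < w):
                    continue                       # stay inside the image
                # continuity: prefer spacing close to the average, which
                # keeps points evenly distributed along the contour
                e_cont = abs(d_avg - np.linalg.norm(cand - prev_pt))
                # curvature: finite-difference estimate |p(i-1) - 2p(i) + p(i+1)|^2
                e_curv = float(np.sum((prev_pt - 2 * cand + next_pt) ** 2))
                # image term: large gradient magnitude (an edge) lowers energy
                e_img = -float(grad_mag[cand[0], cand[1]])
                e = alpha * e_cont + beta * e_curv + gamma * e_img
                if e < best_e:
                    best_e, best = e, cand
        if not np.array_equal(best, points[i]):
            moved += 1
        points[i] = best
    return moved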

1,111 citations


Journal ArticleDOI
TL;DR: This paper proves that some reliability redundancy optimization problems are NP-hard, and derives alternative proofs for the NP-hardness of some special cases of the knapsack problem.

536 citations


Journal ArticleDOI
TL;DR: It is shown that the Metropolis process, which is one step above the greedy heuristic in its level of sophistication, takes super-polynomial time to locate a clique that is only slightly bigger than that produced by the greedy heuristic.
Abstract: In a random graph on n vertices, the maximum clique is likely to be of size very close to 2 lg n. However, the clique produced by applying the naive “greedy” heuristic to a random graph is unlikely to have size much exceeding lg n. The factor of two separating these estimates motivates the search for more effective heuristics. This article analyzes a heuristic search strategy, the Metropolis process, which is just one step above the greedy one in its level of sophistication. It is shown that the Metropolis process takes super-polynomial time to locate a clique that is only slightly bigger than that produced by the greedy heuristic.
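
For intuition, here is a minimal sketch of the Metropolis process on cliques that the analysis concerns, written against an adjacency-set representation of the graph; the bias parameter lambda_ (which pushes the walk toward larger cliques) and all names are illustrative assumptions.

import random

def metropolis_clique(adj, n, steps, lambda_=2.0):
    """adj: list of neighbor sets; the chain's stationary distribution
    weights a clique K proportionally to lambda_ ** len(K)."""
    clique, best = set(), set()
    for _ in range(steps):
        v = random.randrange(n)
        if v in clique:
            # proposal shrinks the clique: a downhill move,
            # accepted with probability 1/lambda_
            if random.random() < 1.0 / lambda_:
                clique.remove(v)
        elif all(v in adj[u] for u in clique):
            # v is adjacent to every current member:
            # growing the clique is uphill, so always accept
            clique.add(v)
        if len(clique) > len(best):
            best = set(clique)
    return best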

351 citations


Journal ArticleDOI
TL;DR: A modified Hopfield neural network model for regularized image restoration is presented, which allows negative autoconnections for each neuron and allows a neuron to have a bounded time delay to communicate with other neurons.
Abstract: A modified Hopfield neural network model for regularized image restoration is presented. The proposed network allows negative autoconnections for each neuron. A set of algorithms using the proposed neural network model is presented, with various updating modes: sequential updates; n-simultaneous updates; and partially asynchronous updates. The sequential algorithm is shown to converge to a local minimum of the energy function after a finite number of iterations. Since an algorithm which updates all n neurons simultaneously is not guaranteed to converge, a modified algorithm is presented, which is called a greedy algorithm. Although the greedy algorithm is not guaranteed to converge to a local minimum, the l1 norm of the residual at a fixed point is bounded. A partially asynchronous algorithm is presented, which allows a neuron to have a bounded time delay to communicate with other neurons. Such an algorithm can eliminate the synchronization overhead of synchronous algorithms.
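
As a rough illustration of the sequential mode, the sketch below performs one-neuron-at-a-time descent on a quadratic energy with +/-1 states; the energy form, the state encoding, and the stopping rule are simplifying assumptions of this sketch, not the paper's exact restoration model.

import numpy as np

def sequential_update(W, b, v, sweeps=100):
    """Greedy one-neuron-at-a-time descent on E(v) = -1/2 v^T W v - b^T v,
    where W may have negative diagonal (self-connection) entries."""
    n = len(v)
    for _ in range(sweeps):
        changed = False
        for i in range(n):
            dv = -2 * v[i]          # energy change if neuron i flips sign
            dE = -dv * (W[i] @ v + b[i]) - 0.5 * W[i, i] * dv * dv
            if dE < 0:              # accept only strictly downhill moves
                v[i] += dv
                changed = True
        if not changed:             # local minimum: no single flip helps
            break
    return v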

233 citations


Journal ArticleDOI
01 Mar 1992
TL;DR: Two problems for path planning of a mobile robot are considered and a collision-free path is found by a variation of the A* algorithm in an environment of moving obstacles.
Abstract: Two problems for path planning of a mobile robot are considered. The first problem is to find a shortest-time, collision-free path for the robot in the presence of stationary obstacles in two dimensions. The second problem is to determine a collision-free path (greedy in time) for a mobile robot in an environment of moving obstacles. The environment is modeled in space-time and the collision-free path is found by a variation of the A* algorithm.
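
A minimal sketch of the space-time search idea, assuming a 4-connected grid where the robot may also wait in place and blocked(x, y, t) is a user-supplied predicate for the moving obstacles; the heuristic and unit-cost model are illustrative, not the paper's exact variation of A*.

import heapq, itertools

def astar_space_time(start, goal, width, height, t_max, blocked):
    def h(x, y):                       # Manhattan distance: admissible for 4-way moves
        return abs(x - goal[0]) + abs(y - goal[1])
    moves = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]   # wait or move 4-way
    counter = itertools.count()        # tie-breaker so heap never compares states
    open_heap = [(h(*start), next(counter), 0, (*start, 0), None)]
    parents, closed = {}, set()
    while open_heap:
        _, _, g, state, parent = heapq.heappop(open_heap)
        if state in closed:
            continue
        closed.add(state)
        parents[state] = parent
        x, y, t = state
        if (x, y) == goal:             # reconstruct the (x, y, t) path
            path = []
            while state is not None:
                path.append(state)
                state = parents[state]
            return path[::-1]
        if t == t_max:
            continue
        for dx, dy in moves:
            nx, ny = x + dx, y + dy
            if 0 <= nx < width and 0 <= ny < height and not blocked(nx, ny, t + 1):
                heapq.heappush(open_heap,
                               (g + 1 + h(nx, ny), next(counter), g + 1,
                                (nx, ny, t + 1), state))
    return None                        # no collision-free path within t_max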

152 citations


Journal ArticleDOI
Lov K. Grover
TL;DR: It is shown that certain NP-complete problems (traveling salesman, min-cut graph partitioning, graph coloring, partition and a version of the satisfiability problem) satisfy a difference equation with respect to a certain neighborhood that is similar to the wave equation of mathematical physics.

129 citations


Proceedings ArticleDOI
24 Oct 1992
TL;DR: The authors derive matching upper and lower bounds for the competitive ratio of the on-line greedy algorithm for this problem, namely (3n)^(2/3)/2 · (1 + o(1)), and derive a lower bound, Ω(√n), for any other deterministic or randomized on-line algorithm.
Abstract: The setup for the authors' problem consists of n servers that must complete a set of tasks. Each task can be handled only by a subset of the servers, requires a different level of service, and once assigned can not be re-assigned. They make the natural assumption that the level of service is known at arrival time, but that the duration of service is not. The on-line load balancing problem is to assign each task to an appropriate server in such a way that the maximum load on the servers is minimized. The authors derive matching upper and lower bounds for the competitive ratio of the on-line greedy algorithm for this problem, namely (3n)^(2/3)/2 · (1 + o(1)), and derive a lower bound, Ω(√n), for any other deterministic or randomized on-line algorithm.
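
The greedy rule being analyzed is simply "place each arriving task on the currently least-loaded feasible server"; a minimal sketch follows (names are illustrative, and departures are ignored since durations are unknown at assignment time).

def greedy_assign(loads, feasible_servers, weight):
    """loads: mutable list of current server loads;
    feasible_servers: indices of servers able to run this task;
    weight: the task's known level of service."""
    s = min(feasible_servers, key=lambda i: loads[i])
    loads[s] += weight      # the assignment is permanent: no re-assignment
    return s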

123 citations


Journal ArticleDOI
TL;DR: It is shown that for the vast majority of satisfiable 3CNF formulae, the local search heuristic that starts at a random truth assignment and repeatedly flips the variable that improves the number of satisfied clauses the most, almost always succeeds in discovering a satisfying truth assignment.
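
A minimal sketch of this heuristic, assuming clauses in DIMACS-style signed-integer form; the step bound and the "give up at a local maximum" policy are illustrative choices.

import random

def greedy_sat(clauses, n_vars, max_steps=10000):
    """clauses: list of tuples of nonzero ints, where literal k means
    variable |k| with sign k; returns a satisfying assignment or None."""
    assign = {v: random.choice([True, False]) for v in range(1, n_vars + 1)}
    sat = lambda cl: any((lit > 0) == assign[abs(lit)] for lit in cl)
    for _ in range(max_steps):
        if all(sat(cl) for cl in clauses):
            return assign
        def gain(v):
            # number of satisfied clauses if variable v were flipped
            assign[v] = not assign[v]
            g = sum(sat(cl) for cl in clauses)
            assign[v] = not assign[v]
            return g
        best = max(range(1, n_vars + 1), key=gain)
        if gain(best) <= sum(sat(cl) for cl in clauses):
            return None             # local maximum: no single flip improves
        assign[best] = not assign[best]
    return None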

95 citations


Journal ArticleDOI
TL;DR: It is shown that finding a local optimum solution with respect to the Lin–Kernighan heuristic for the traveling salesman problem is PLS-complete, and thus as hard as any local search problem.
Abstract: It is shown that finding a local optimum solution with respect to the Lin–Kernighan heuristic for the traveling salesman problem is PLS-complete, and thus as hard as any local search problem.

81 citations



Journal ArticleDOI
TL;DR: The Nash equilibrium point for optimum flow control in a noncooperative multiclass environment is studied, and a not very widely known theorem on the convergence of Gauss-Seidel algorithms from linear systems theory is introduced and extended.
Abstract: The Nash equilibrium point for optimum flow control in a noncooperative multiclass environment is studied. Convergence properties of synchronous and asynchronous greedy algorithms are investigated in the case where several users compete for the resources of a single queue using power as their performance criterion. A proof of the convergence of a synchronous greedy algorithm for the n users case is given, and the necessary and sufficient conditions for the convergence of asynchronous greedy algorithms are obtained. Another important contribution is the introduction to the literature, and the extension, of a not very widely known theorem on the convergence of Gauss-Seidel algorithms in linear systems theory.
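
As a toy illustration of Gauss-Seidel best-response dynamics (an assumed model, not the paper's exact formulation), suppose n users share an M/M/1 queue of capacity C and user i's power is x_i(C - sum of all rates); each greedy update is then user i's best response to the others' current rates, and the iteration settles at the symmetric Nash point C/(n+1).

def gauss_seidel_nash(n, C, iters=100):
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):          # Gauss-Seidel: use the latest values
            others = sum(x) - x[i]
            # argmax over x_i of the power x_i * (C - others - x_i)
            x[i] = max(0.0, (C - others) / 2.0)
    return x

print(gauss_seidel_nash(3, 1.0))    # -> approximately [0.25, 0.25, 0.25]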

Journal ArticleDOI
TL;DR: A game theoretic perspective is presented and analysed as the appropriate framework for the study of the flow control problem, and a network-optimal solution (the Pareto optimum) is compared with two user-optimal solutions (the Nash and Stackelberg equilibria).
Abstract: Multiple classes of traffic with differing and often conflicting requirements arise in an integrated telecommunications environment as users share the limited existing resources. In this paper, a game theoretic perspective is presented and analysed as the appropriate framework for the study of the flow control problem. Using the notion of power as the performance criterion, we compare a network—Pareto optimal solution—with two user optimal solutions—Nash and Stackelberg equilibria. The appropriateness of each solution is discussed given the operating characteristics of the system. A proposed greedy algorithm is shown to converge to the Nash equilibrium.

Journal ArticleDOI
TL;DR: This work presents Vizing's theorem, which states that the edges of any simple graph with maximum degree Δ can be properly coloured with at most Δ + 1 colours.

Journal ArticleDOI
TL;DR: An approximate cost function and service level constraint is formulated, a greedy heuristic algorithm is presented for solving the resulting approximate constrained optimization problem, and experimental results showing that the heuristics developed have good cost performance relative to optimal are presented.
Abstract: Many organizations providing service support for products or families of products must allocate inventory investment among the parts (or, identically, items) that make up those products or families. The allocation decision is crucial in today's competitive environment in which rapid response and low levels of inventory are both required for providing competitive levels of customer service in marketing a firm's products. This is particularly important in high-tech industries, such as computers, military equipment, and consumer appliances. Such rapid response typically implies regional and local distribution points for final products and for spare parts for repairs. In this article we fix attention on a given product or product family at a single location. This single-location problem is the basic building block of multi-echelon inventory systems based on level-by-level decomposition, and our modeling approach is developed with this application in mind. The product consists of field-replaceable units (i.e., parts), which are to be stocked as spares for field service repair. We assume that each part will be stocked at each location according to an (s, S) stocking policy. Moreover, we distinguish two classes of demand at each location: customer (or emergency) demand and normal replenishment demand from lower levels in the multiechelon system. The basic problem of interest is to determine the appropriate policies (s_i, S_i) for each part i in the product under consideration. We formulate an approximate cost function and service level constraint, and we present a greedy heuristic algorithm for solving the resulting approximate constrained optimization problem. We present experimental results showing that the heuristics developed have good cost performance relative to optimal. We also discuss extensions to the multiproduct component commonality problem.
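
The flavor of such a heuristic can be shown with a generic marginal-allocation sketch: repeatedly buy one more unit of the part with the largest service gain per dollar until the budget is exhausted. The cost/gain interface below is an illustrative stand-in for the paper's approximate cost function and (s_i, S_i) policy parameters.

def greedy_spares(parts, budget):
    """parts: list of dicts with 'cost' (unit price) and 'gain', where
    gain(level) returns the marginal service improvement of stocking
    one more unit of that part; returns the stock level per part."""
    levels = [0] * len(parts)
    spent = 0.0
    while True:
        best, best_ratio = None, 0.0
        for i, p in enumerate(parts):
            if spent + p['cost'] <= budget:
                ratio = p['gain'](levels[i]) / p['cost']   # bang per buck
                if ratio > best_ratio:
                    best, best_ratio = i, ratio
        if best is None:            # budget exhausted or no positive gain
            return levels
        levels[best] += 1
        spent += parts[best]['cost']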

Journal ArticleDOI
TL;DR: An interactive optimization system for multiperiod exhaust relief planning in the local loop of a public telephone network is described, which decomposes the optimization problem into a single-period dynamic programming problem, and a multiperiod greedy heuristic.
Abstract: We describe an interactive optimization system for multiperiod exhaust relief planning in the local loop of a public telephone network. In exhaust relief planning in the local loop one seeks the minimum cost capacity expansion plan that meets projected demand over a given planning horizon. The problem can be modeled as an integer programming problem. However, due to cost structures and varying transmission technologies, the single-period exhaust relief planning problem is NP-complete. The size of the problem precludes the use of general purpose integer programming. Based on the mathematical structure and complexity of the problem, we decompose the optimization problem into a single-period dynamic programming problem, and a multiperiod greedy heuristic. A software system surrounds the optimization algorithm and provides interactive planning capabilities, before and after creation of the optimized plan. Important aspects of the system are the model assumptions made to keep the problem tractable, and their effect on the standardization of input data and methodology. The system is in use by several hundred outside plant planners in a major U.S. telephone company. An overview of major elements of the package is given as well as a summary of important implementation issues that arose during the first three years of the on-going project.

Journal ArticleDOI
TL;DR: In this article, a new heuristic that generalizes previous work of Foulds and Robinson is presented, based on a measure of the desirability that two machines be adjacent, derived from the anticipated flows and technological constraints.
Abstract: The facility layout problem is important in the modern manufacturing environment because increased machine flexibility and product diversification create additional complexities in scheduling and material handling. An important first step in facility layout is the determination of which machines should be adjacent. This problem can be modelled as that of finding a maximum weight planar subgraph of a graph, given a measure of the desirability that two machines be adjacent based on the anticipated flows and technological constraints. We present a new heuristic that is a generalization of previous work of Foulds and Robinson. Preliminary computational results are presented which suggest that this heuristic performs well.
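
For context, the textbook greedy baseline for maximum-weight planar subgraphs (not the Foulds-Robinson-style heuristic the article generalizes) inserts edges in decreasing weight order and keeps an edge only if planarity is preserved; a sketch using networkx's planarity test:

import networkx as nx

def greedy_planar_subgraph(nodes, weighted_edges):
    """weighted_edges: iterable of (u, v, w); returns a planar nx.Graph
    built by greedy insertion in decreasing weight order."""
    H = nx.Graph()
    H.add_nodes_from(nodes)
    for u, v, w in sorted(weighted_edges, key=lambda e: -e[2]):
        H.add_edge(u, v, weight=w)
        is_planar, _ = nx.check_planarity(H)
        if not is_planar:
            H.remove_edge(u, v)   # this adjacency would break planarity
    return H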

Journal ArticleDOI
TL;DR: An optimization method, called fast simulated diffusion (FSD), is proposed to solve optimization problems with multiple local minima on a multidimensional continuous space, and can find the global minimum with a practical success rate.
Abstract: An optimization method, called fast simulated diffusion (FSD), is proposed to solve optimization problems with multiple local minima on a multidimensional continuous space. The algorithm performs a greedy search and a random search alternately and can find the global minimum with a practical success rate. An efficient hill-descending method employed as the greedy search in the FSD is proposed. When the FSD is applied to a set of standard test functions, it is an order of magnitude faster than conventional simulated diffusion. Some of the optimization problems encountered in system and VLSI designs are classified as problems with multiple local minima. The proposed FSD is successfully applied to a MOSFET parameter extraction problem with a deep submicron MOSFET.
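
A minimal sketch of the alternating structure described above, with simple coordinate-wise hill descent standing in for the paper's efficient hill-descending method and uniform restarts standing in for its random search; all names and parameters are illustrative.

import random

def fast_simulated_diffusion(f, dim, bounds, rounds=50, step=0.1):
    """f: objective taking a list of floats; bounds: (lo, hi) box."""
    lo, hi = bounds
    x = [random.uniform(lo, hi) for _ in range(dim)]
    best_x, best_f = list(x), f(x)
    for _ in range(rounds):
        # greedy phase: coordinate-wise hill descent until no move helps
        improved = True
        while improved:
            improved = False
            for i in range(dim):
                for d in (-step, step):
                    y = list(x)
                    y[i] = min(hi, max(lo, y[i] + d))
                    if f(y) < f(x):
                        x, improved = y, True
        if f(x) < best_f:
            best_x, best_f = list(x), f(x)
        # random phase: jump to a fresh start to escape the local minimum
        x = [random.uniform(lo, hi) for _ in range(dim)]
    return best_x, best_f

# e.g. a standard multiple-local-minima test function (Rastrigin):
# f(x) = sum(xi**2 - 10*math.cos(2*math.pi*xi) + 10 for xi in x)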

Proceedings Article
01 Jan 1992
TL;DR: It is shown that the greedy algorithm optimizes all linear objective functions if and only if the problem structure (phrased in terms of either accessible set systems or hereditary languages) is a matroid embedding.
Abstract: The authors present exact characterizations of structures on which the greedy algorithm produces optimal solutions. These characterizations, called matroid embeddings, complete the partial characterizations of Rado [A note on independent functions, Proc. London Math. Soc., 7 (1957), pp. 300–320], Gale [Optimal assignments in an ordered set, J. Combin. Theory, 4 (1968), pp. 176–180], and Edmonds [Matroids and the greedy algorithm, Math. Programming, 1 (1971), pp. 127–136] (matroids), and of Korte and Lovasz [Greedoids and linear objective functions, SIAM J. Alg. Discrete Meth., 5 (1984), pp. 239–248] and [Mathematical structures underlying greedy algorithms, in Fundamentals of Computation Theory, LNCS 177, Springer-Verlag, 1981, pp. 205–209] (greedoids). It is shown that the greedy algorithm optimizes all linear objective functions if and only if the problem structure (phrased in terms of either accessible set systems or hereditary languages) is a matroid embedding. An exact characterization of the ...
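
The algorithm being characterized is the generic greedy procedure over an accessible set system: scan elements in order of weight and keep any element that preserves membership in the system. A minimal sketch, with `independent` as an assumed membership oracle:

def greedy(elements, weight, independent):
    """Builds a solution greedily by decreasing weight; per this paper,
    it is optimal for every linear objective iff the underlying set
    system is a matroid embedding."""
    solution = set()
    for e in sorted(elements, key=weight, reverse=True):
        if independent(solution | {e}):   # keep e only if still feasible
            solution.add(e)
    return solution

# Kruskal's spanning-tree algorithm is the textbook instance: elements
# are edges, weight is negated length, and "independent" means acyclic.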

Journal Article
TL;DR: This paper provides substantial proof that GAs work for the traveling salesman problem (TSP) and suggests that the performance of probabilistic search algorithms (such as GAs) is highly dependent on representation and the choice of neighborhood operators.
Abstract: This paper provides substantial proof that genetic algorithms (GAs) work for the traveling salesman problem (TSP). The method introduced is based on an adjacency matrix representation of the TSP that allows the GA to manipulate edges while using conventional crossover. This combination, interleaved with inversion (2-opt), allows the GA to rapidly discover the best known solutions to seven of the eight TSP test problems frequently studied in the literature. (The GA solution is within 2% of the best known solution for the eighth problem.) These results stand in contrast to earlier tentative conclusions that GAs are ill-suited to solving TSP problems, and suggest that the performance of probabilistic search algorithms (such as GAs) is highly dependent on representation and the choice of neighborhood operators.
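
The 2-opt inversion interleaved with crossover works on edges: it replaces two tour edges by two shorter ones, reversing the segment between them. A minimal sketch (the distance-matrix interface is assumed):

def two_opt(tour, dist):
    """tour: list of city indices; dist[i][j]: inter-city distance."""
    n, improved = len(tour), True
    while improved:
        improved = False
        for i in range(n - 1):
            # skip j = n-1 when i = 0, so the two removed edges are disjoint
            for j in range(i + 2, n - (1 if i == 0 else 0)):
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                # replace edges (a,b) and (c,d) by (a,c) and (b,d)
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour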

01 Jan 1992
TL;DR: The main conclusions reached are (1) parallel logic simulation of large circuits on general purpose workstations is practical, (2) the EFDP, FGA, and PGA algorithms work very well, and (3) synchronous time algorithms appear to be much faster and more space conscious than asynchronous time algorithms.
Abstract: This dissertation records research on the problem of performing logic simulation of a circuit on a parallel architecture. Research begins with the development of a new circuit partitioning algorithm called the Event Flow and Distribution Partitioning (EFDP) algorithm. This algorithm is discussed in detail and is shown to produce a good partitioning of the circuit. Partitioning is followed by simulation. Test circuits are chosen from VLSI chips designed at the NASA Space Engineering Research Center for VLSI Design at the University of Idaho. These circuits range in size from 292 transistors to 249,897 transistors. Five test vectors are used to perform the simulation of these circuits. Simulation is first run on a uni-processor (a DEC 3000) using NOVA, a logic simulator developed at the NASA SERC. Simulation times varying from a few minutes to a few hours are reported on the suites of five test vectors. These same tests are then run on the same simulator modified to simulate execution on a shared memory multiprocessor using two new synchronous time parallel simulation algorithms and an asynchronous time parallel simulation algorithm. The two new synchronous time parallel simulation algorithms are types of greedy algorithms and are called the Focused Greedy Algorithm (FGA) and the Planned Greedy Algorithm (PGA). It is found that both algorithms performed nearly the same with the FGA showing a slightly higher speedup but also requiring slightly more storage than the PGA. An average speedup of up to 6.6 on an 8 processor architecture and 13.9 on a 16 processor architecture is reported. An asynchronous time parallel simulation algorithm based on the Time Warp algorithm is tested using the same seven circuits and the same suite of five test vectors. An average speedup of up to 4.2 on an 8 processor architecture and 7.7 on a 16 processor architecture is reported. The main conclusions reached are (1) parallel logic simulation of large circuits on general purpose workstations is practical, (2) the EFDP, FGA, and PGA algorithms work very well, and (3) synchronous time algorithms appear to be much faster and more space conscious than asynchronous time algorithms.

Proceedings ArticleDOI
P.-L. Tu, J.-Y. Chung
10 Nov 1992
TL;DR: A time analysis shows that the intelligent decision-tree algorithm (IDA) is more computationally efficient than ID3, and a simulation study indicates that IDA outperforms ID3.
Abstract: Although decision-tree classification algorithms have been widely used for machine learning in artificial intelligence, there has been little research toward evaluating the performance or quality of the current classification algorithms and investigating the time and computational complexity of constructing the smallest size decision tree which best distinguishes characteristics of multiple distinct groups. A known NP-complete problem, 3-exact cover, is used to prove that this problem is NP-complete. One prevalent classification algorithm in machine learning, ID3, is evaluated. The greedy search procedure used by ID3 is found frequently to produce anomalous behavior and inferior decision trees. A decision-tree classification algorithm, the intelligent decision-tree algorithm (IDA), that overcomes these anomalies with better classification performance is presented. A time analysis shows that IDA is more computationally efficient than ID3, and a simulation study indicates that IDA outperforms ID3.
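
The greedy step in question is ID3's split selection: at each node, pick the attribute with the largest information gain, with no lookahead. A minimal sketch of that step (the data layout is an assumption of this sketch):

import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_attribute(rows, labels, attributes):
    """rows: list of dicts mapping attribute -> value; returns the
    attribute whose split maximizes information gain, a purely local
    (greedy) choice with no lookahead."""
    base = entropy(labels)
    def gain(a):
        split = {}
        for row, y in zip(rows, labels):
            split.setdefault(row[a], []).append(y)
        # expected entropy remaining after splitting on attribute a
        rem = sum(len(ys) / len(labels) * entropy(ys) for ys in split.values())
        return base - rem
    return max(attributes, key=gain)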

Proceedings ArticleDOI
01 Jul 1992
TL;DR: This paper shows that the formulation of greedy algorithms using constructs whose semantics are reducible to that of negative programs under stable model semantics leads to a syntactic class of programs, called stage-stratified programs, that are easily recognized at compile time.
Abstract: The greedy paradigm of algorithm design is a well known tool used for efficiently solving many classical computational problems within the framework of procedural languages. However, it is very difficult to express these algorithms within the declarative framework of logic-based languages. In this paper, we extend the framework of Datalog-like languages to provide simple and declarative formulations of such problems, with computational complexities comparable to those of procedural formulations. This is achieved through the use of constructs, such as least and choice, that have semantics reducible to that of negative programs under stable model semantics. Therefore, we show that the formulation of greedy algorithms using these constructs leads to a syntactic class of programs, called stage-stratified programs, that are easily recognized at compile time. The fixpoint-based implementation of these recursive programs is very efficient and, combined with suitable storage structures, yields asymptotic complexities comparable to those obtained using procedural languages.

Journal ArticleDOI
TL;DR: TESSA as discussed by the authors is a heuristic for determining which facilities should be adjacent in a planar layout, which is polynomial in time and produces good quality solutions, almost all of which are above 90% of the upper bound.
Abstract: TESSA is a heuristic for determining which facilities should be adjacent in a planar layout. Once the adjacencies are known the block plan can be constructed by existing techniques. TESSA overcomes problems with earlier heuristics for determining adjacencies as it does not require planarity testing nor does it restrict the type of layout produced. The algorithm is polynomial in time and produces good quality solutions, almost all of which are above 90% of the (often unattainable) upper bound.

Journal ArticleDOI
TL;DR: The question was posed as to whether a polynomial time greedy algorithm always finds a minimum base for G, and it is shown that the minimum base problem is NP-hard.

Book ChapterDOI
01 Jan 1992
TL;DR: The random network (RN) (Gelenbe 1989, 1990) is used to obtain its approximate solution to minimum graph (vertex) covering, and the overall optimisation obtained is better with the RN approach, than with other published solution methods.
Abstract: Minimum graph (vertex) covering is an NP-hard problem arising in various areas (image processing, transportation models, plant layout, crew scheduling, etc.). We use the random network (RN) (Gelenbe 1989, 1990) to obtain its approximate solution; this model is very close to the “queueing networks with positive and negative customers” model we have also introduced (Gelenbe 1992). We compare the results obtained by our approach to the conventional greedy algorithm and to simulated annealing. The evaluation shows that the random network provides good results, at a computational cost less than that of simulated annealing but greater than that of the conventional greedy algorithm. The overall optimisation obtained is better with the RN approach, than with other published solution methods.
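
The "conventional greedy algorithm" used as the cheap baseline in such comparisons is usually taken to be the max-degree rule; a minimal sketch under that assumption:

def greedy_vertex_cover(edges):
    """edges: iterable of vertex pairs; returns a vertex cover built by
    repeatedly adding the vertex that covers the most uncovered edges."""
    cover, uncovered = set(), set(map(frozenset, edges))
    while uncovered:
        # degree of each vertex, counted over still-uncovered edges only
        deg = {}
        for e in uncovered:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        v = max(deg, key=deg.get)
        cover.add(v)
        uncovered = {e for e in uncovered if v not in e}
    return cover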

01 Jan 1992
TL;DR: In this article, the authors describe a simple probabilistic and decision-theoretic planning problem and present a two-step planning method: during the first planning phase, a working (i.e., ergodic) but not necessarily optimal plan is constructed.
Abstract: In this report, we describe a simple probabilistic and decision-theoretic planning problem. We show how algorithms developed in the field of Markovian decision theory, a subfield of stochastic dynamic programming (operations research), can be used to construct optimal plans for this planning problem, and we present some of the complexity results known. Although their computational complexity allows only small problems to be solved optimally, the methods presented here are helpful as a theoretical framework. They allow one to make statements about the structure of an optimal plan, to guide the development of heuristic planning methods, and to evaluate their performance. We show the connection between this normative theory and universal planning, reinforcement learning, and anytime algorithms. One can easily construct a one-step planner by using a Markovian decision algorithm and a random assignment of actions to states as the initial plan. In many planning domains, it is easy for human problem solvers to construct a working plan, although it is difficult for them to find the optimal (or a close-to-optimal) plan. Therefore, we propose a two-step planning method: During the first planning phase, a working (i.e. ergodic), but not necessarily optimal plan is constructed. Sometimes, a domain specific planning method might be available for this task. We show that such a planning method can be obtained even in probabilistic domains by taking advantage of deterministic or quasi-deterministic actions. Thus, traditional (deterministic) planners can be useful in probabilistic domains. We also state a general greedy algorithm that accomplishes this task if no domain specific method is available. During the second planning phase, a Markovian decision algorithm is used to improve the initial plan. Finally, we briefly present a software package that implements our ideas. The probabilistic domain can be modeled using an augmented STRIPS notation. It is automatically translated into a Markovian decision problem, which is then solved.
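
The basic Markovian decision algorithm underlying such planning can be illustrated with value iteration on a finite model; the P[s][a] = [(probability, next_state, reward), ...] encoding below is an assumed interface, not the report's software package.

def value_iteration(P, gamma=0.95, eps=1e-6):
    """P: dict mapping state -> {action: [(prob, next_state, reward), ...]};
    returns the value function and the greedy plan (state -> action)."""
    V = {s: 0.0 for s in P}
    while True:
        delta = 0.0
        for s in P:
            # Bellman backup: best expected one-step return plus future value
            best = max(sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                       for a in P[s])
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            break
    # the plan acts greedily with respect to the converged values
    plan = {s: max(P[s], key=lambda a: sum(p * (r + gamma * V[s2])
                                           for p, s2, r in P[s][a]))
            for s in P}
    return V, plan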

Journal ArticleDOI
TL;DR: The 2-Peripatetic Salesman Problem (2-PSP), which minimizes the total length of 2 edge-disjoint Hamiltonian cycles, arises in designing communication or computer networks, or whenever one aims to increase network reliability using disjoint tours.
Abstract: The 2-Peripatetic Salesman Problem (2-PSP) minimizes the total length of 2 edge-disjoint Hamiltonian cycles. This type of problem arises in designing communication or computer networks, or whenever one aims to increase network reliability using disjoint tours. The NP-hardness of the 2-PSP is shown. Lower bound values are obtained by generalizing the 1-tree approach for the TSP to a 2 edge-disjoint 1-trees approach for the 2-PSP. One can construct 2 edge-disjoint 1-trees using a greedy algorithm, into which a partitioning procedure is incorporated, that runs in O(n^2 log n) time. Upper bound solutions are obtained by two heuristics based on a lower bound solution and by a modified Savings heuristic for problems up to 140 cities.

01 Jan 1992
TL;DR: Several different kinds of plan justifications are defined, algorithms for finding a justified version of a plan are presented, and it is shown that the task of finding the best possible justified version of a plan is NP-complete.
Abstract: This paper formalizes the notion of justified plans , which captures the intuition behind “good” plans. A justified plan is one that does not contain operators which are not necessary for achieving a goal. The importance of formalizing this notion is due to two reasons. First, it gives rise to methods for optimizing a given plan by removing “useless” operators. Second, several important concepts describing abstraction hierarchies are defined via justified plans. In the past, relatively few attempts have been made to formalize such a notion. This paper defines several different kinds of plan justifications, presents algorithms for finding a justified version of a plan, and shows that the task of finding the best possible justified version of a plan is NP-complete. Finally, it presents a greedy algorithm for finding a near-optimal justified plan in polynomial time.
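
The greedy idea can be sketched as a single scan that drops any operator whose removal leaves the plan valid; here `valid` is an assumed user-supplied checker that simulates the plan against the goal, and the paper's justification notions and algorithms are richer than this.

def greedy_justify(plan, valid):
    """plan: list of operators; returns a (possibly) shorter plan in
    which every remaining operator is needed to keep the plan valid."""
    assert valid(plan)
    i = 0
    while i < len(plan):
        candidate = plan[:i] + plan[i + 1:]
        if valid(candidate):
            plan = candidate        # operator i was not necessary: drop it
        else:
            i += 1                  # operator i is justified: keep it
    return plan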

Journal ArticleDOI
TL;DR: A special-purpose algorithm is developed that is capable of solving large problems of this kind, frequently at or beyond the limits of current computational tractability.
