
Showing papers on "Approximation algorithm published in 1974"


Journal ArticleDOI
TL;DR: For the problem of finding the maximum clique in a graph, no algorithm has been found for which the ratio does not grow at least as fast as n^ε, where n is the problem size and ε > 0 depends on the algorithm.

2,472 citations


Journal ArticleDOI
I. Tomek
TL;DR: Two simple heuristic algorithms for piecewise-linear approximation of functions of one variable are described, which use a limit on the absolute value of error and strive to minimize the number of approximating segments subject to the error limit.

Abstract: Two simple heuristic algorithms for piecewise-linear approximation of functions of one variable are described. Both use a limit on the absolute value of error and strive to minimize the number of approximating segments subject to the error limit. The first algorithm is faster and gives satisfactory results for sufficiently smooth functions. The second algorithm is not as fast but gives better approximations for less well-behaved functions. The two algorithms are illustrated by several examples.

108 citations
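The greedy core of such schemes is easy to sketch: extend each segment until the chord between its endpoints violates the error limit, then start a new segment. The Python below is a minimal sketch of that idea for assumed sampled data; it is an illustration of the general technique, not a reconstruction of either of Tomek's two heuristics.

```python
# Greedy piecewise-linear approximation under an absolute error limit.
# Extend each segment as far as the interpolating chord stays within `tol`.

def piecewise_linear(xs, ys, tol):
    """Return breakpoint indices so that the chord between consecutive
    breakpoints deviates from the data by at most `tol` at every sample."""
    breaks = [0]
    start = 0
    for end in range(2, len(xs)):
        x0, y0 = xs[start], ys[start]
        x1, y1 = xs[end], ys[end]
        # maximum deviation of interior samples from the chord (x0,y0)-(x1,y1)
        err = max(
            abs(ys[k] - (y0 + (y1 - y0) * (xs[k] - x0) / (x1 - x0)))
            for k in range(start + 1, end)
        )
        if err > tol:               # chord broke the limit: close the segment
            breaks.append(end - 1)
            start = end - 1
    breaks.append(len(xs) - 1)
    return breaks

xs = [i / 50 for i in range(51)]
ys = [x * x for x in xs]
print(piecewise_linear(xs, ys, tol=0.01))   # a few segments suffice for x^2
```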


Proceedings ArticleDOI
14 Oct 1974
TL;DR: The class of P-complete problems is studied and it is shown that for any constant ε > 0 there is a P-complete problem for which an ε-approximate solution can be found in linear time.

Abstract: We study the class of P-complete problems and show the following: (i) for any constant ε > 0 there is a P-complete problem for which an ε-approximate solution can be found in linear time; (ii) there exist P-complete problems for which linear-time approximate solutions that get closer and closer to the optimal (with increasing problem size) can be found; (iii) there exist P-complete problems for which the approximation problems are also P-complete.

43 citations


Journal ArticleDOI
TL;DR: Convergence theorems are given for several sequential Monte Carlo (stochastic approximation) algorithms for finding a local minimum of an unknown function f(·) on a constraint set C; the algorithms generate a sequence of random variables whose convergent subsequences converge, almost surely, to points where a necessary condition for constrained optimality holds.

Abstract: The paper gives convergence theorems for several sequential Monte-Carlo or stochastic approximation algorithms for finding a local minimum of a function $f(\cdot)$ on a set $C$ defined by $C = \{x : q^i(x) \leqq 0,\ i = 1, 2, \cdots, s\}$. $f(\cdot)$ is unknown, but "noise perturbed" values can be observed at any desired parameter $x \in C$. The algorithms generate a sequence of random variables $\{X_n\}$ such that (for a.a. $\omega$) any convergent subsequence of $\{X_n(\omega)\}$ converges to a point where a certain necessary condition for constrained optimality holds. The techniques are drawn from both stochastic approximation and non-linear programming.

28 citations
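As a rough illustration of this setting, a projected Kiefer-Wolfowitz iteration has the main ingredients: only noisy values of f are observed, a projection keeps the iterates feasible, and the step-size sequences shrink appropriately. Everything concrete below (the test function, the box standing in for C, the schedules) is an assumption for illustration, not the paper's algorithms.

```python
import random

def noisy_f(x):
    return (x - 2.0) ** 2 + random.gauss(0.0, 0.1)   # true minimum at x = 2

def project(x, lo=0.0, hi=1.5):                      # projection onto C = [lo, hi]
    return min(max(x, lo), hi)

x = 0.5
for n in range(1, 5001):
    a_n = 1.0 / n          # step sizes: sum a_n = inf, sum a_n^2 < inf
    c_n = 1.0 / n ** 0.25  # finite-difference width, c_n -> 0
    # two-sided finite-difference gradient estimate from noisy observations
    grad = (noisy_f(x + c_n) - noisy_f(x - c_n)) / (2 * c_n)
    x = project(x - a_n * grad)

print(x)  # drifts toward x = 1.5, where the constrained necessary condition holds
```

Note how the limit point 1.5 is not an unconstrained minimum of f but the boundary point where the constrained optimality condition is satisfied, matching the flavor of the convergence statement in the abstract.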


01 May 1974
TL;DR: In this paper, a simplicial approximation algorithm with a variable initial point and a restart procedure is presented for solving systems of nonlinear equations, which can employ any labeling function in a broadly defined class of admissible labelings.
Abstract: A simplicial approximation algorithm with a variable initial point and a restart procedure is presented for solving systems of nonlinear equations. The algorithm can employ any labeling function in a broadly defined class of admissible labelings. The generality so obtained furnishes a constructive proof of existence theorems not previously known. For the case of continuously differentiable functions, a labeling is presented which is preferred in the sense that if the system has a nonsingular Jacobian at a zero, then the algorithm will converge to the zero if it is started sufficiently close.

10 citations
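In one dimension the labeling idea collapses to something very simple: label each grid point by the sign of f and refine around a "completely labeled" edge, one whose endpoints carry both labels. The sketch below is only that toy one-dimensional case (essentially bisection), not the paper's variable-initial-point, restartable algorithm.

```python
def label(f, x):
    # integer labeling by sign: 0 where f <= 0, 1 where f > 0
    return 0 if f(x) <= 0 else 1

def completely_labeled_zero(f, a, b, refinements=40):
    assert label(f, a) != label(f, b), "endpoints must carry distinct labels"
    for _ in range(refinements):
        m = 0.5 * (a + b)
        # keep the sub-edge that is still completely labeled
        if label(f, a) != label(f, m):
            b = m
        else:
            a = m
    return 0.5 * (a + b)

print(completely_labeled_zero(lambda x: x ** 3 - 2.0, 0.0, 2.0))  # ~ 2**(1/3)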


Journal ArticleDOI
TL;DR: In this article, three different approximation algorithms are briefly discussed, namely continued fractions, Cantor products, and Engel series; their common disadvantage, uniqueness, may be overcome by application of a new general approximation theorem, though the solution can be provided only by computer.

Abstract: The process of frequency synthesis is the step-by-step approximation to the normalized output frequency. In this paper, three different approximation algorithms are briefly discussed, namely continued fractions, Cantor products, and Engel series. Their common disadvantage is their uniqueness; it is shown that this difficulty may be overcome by application of the new general approximation theorem, but the solution can be provided only by computer. Approximation errors are generally small, sometimes smaller than 1 × 10^-12. The advantage of modified Cantor products or Engel series algorithms is the possibility of standardizing the hardware of frequency synthesizers, particularly if IC technology and the use of off-the-shelf circuits are emphasized.

8 citations
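Of the three algorithms named, continued fractions are the easiest to illustrate: successive convergents p/q give rational approximations to a normalized frequency that a synthesizer could realize with divider ratios. The target value and convergent count below are arbitrary examples.

```python
from fractions import Fraction

def convergents(x, n):
    """First n continued-fraction convergents of x > 0."""
    result, (p0, q0, p1, q1) = [], (0, 1, 1, 0)
    for _ in range(n):
        a = int(x)
        # standard convergent recurrence p_k = a*p_{k-1} + p_{k-2}, same for q
        p0, q0, p1, q1 = p1, q1, a * p1 + p0, a * q1 + q0
        result.append(Fraction(p1, q1))
        frac = x - a
        if frac == 0:
            break
        x = 1.0 / frac
    return result

target = 0.7853981633974483  # example: pi/4 as a normalized frequency
for c in convergents(target, 8):
    print(c, float(c) - target)  # error shrinks rapidly with each convergent
```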


Journal ArticleDOI
TL;DR: Sequential decision algorithms with on-line feature ordering and a limited look-ahead approximation are considered for multicategory pattern recognition problems, and computer-simulated results are obtained.

Abstract: Sequential decision algorithms with on-line feature ordering and a limited look-ahead approximation are considered. The algorithms can be used with or without contextual constraints for multicategory pattern recognition problems. Computational complexity due to on-line ordering of features is analyzed and related to system performance. Computer-simulated results are obtained using a standard data set (Munson's multiauthor handprinted character files) and a careful test procedure.

8 citations
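A hedged sketch of the sequential-decision idea (not the paper's exact algorithm): features are measured one at a time, most discriminative first, and classification stops as soon as the posterior log-odds clear a threshold. The two-class likelihood tables below are invented for illustration.

```python
import math

# P(feature_i = 1 | class) for hypothetical classes A and B
p_a = [0.9, 0.8, 0.6, 0.55]
p_b = [0.2, 0.3, 0.45, 0.5]

# order features by the log-likelihood ratio of a positive observation
order = sorted(range(len(p_a)),
               key=lambda i: -abs(math.log(p_a[i] / p_b[i])))

def classify(observe, threshold=2.0):
    """observe(i) -> 0/1 measurement of feature i; returns ('A'|'B', #used)."""
    log_odds, used = 0.0, 0
    for i in order:
        x = observe(i)
        num = p_a[i] if x else 1 - p_a[i]
        den = p_b[i] if x else 1 - p_b[i]
        log_odds += math.log(num / den)
        used += 1
        if abs(log_odds) >= threshold:   # confident enough: stop early
            break
    return ('A' if log_odds >= 0 else 'B'), used

print(classify(lambda i: 1))   # all features read 1 -> 'A' after only 2 features
```

The point of the ordering is visible in the output: the most informative features are consumed first, so the expected number of features measured per pattern drops.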


01 Jun 1974
TL;DR: A simplicial approximation algorithm is given for a problem in which some components of f(x) are required to satisfy a complementarity condition and the other components are required to be zero; the finite termination conditions established provide previously unknown existence results.

Abstract: Given a continuous mapping f(x) from R^N to R^n, the authors consider a problem in which some components of f(x) are required to satisfy a complementarity condition and the other components are required to be zero. This problem includes the nonlinear complementarity problem, the problem of finding a zero of a system of nonlinear equations, and the problem of finding a Kuhn-Tucker point of a nonlinear program with both equality and inequality constraints. A simplicial approximation algorithm for this problem is given and finite termination conditions are established. These conditions provide previously unknown existence results. Application of the algorithm to convex programming is described and computational experience is presented.

6 citations
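To make the complementarity condition concrete: in the nonlinear complementarity special case one seeks x >= 0 with f(x) >= 0 and x_i f_i(x) = 0 for every component. The sketch below verifies that condition with a basic projection iteration, which is a simple stand-in used here for illustration, not the paper's simplicial algorithm; the example mapping is invented.

```python
def f(x):
    # invented example with NCP solution x = (1, 0, 0)
    return [x[0] - 1.0, x[1] + 1.0, x[2] + 2.0]

x = [0.5, 0.5, 0.5]
for _ in range(200):
    fx = f(x)
    t = 0.5
    # projection step x <- max(0, x - t f(x)); converges for well-behaved f
    x = [max(0.0, xi - t * fi) for xi, fi in zip(x, fx)]

print(x)     # ~ [1.0, 0.0, 0.0]
print(f(x))  # f_1 ~ 0 where x_1 > 0; f_2, f_3 > 0 where the x component is 0
```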


Journal ArticleDOI
Ivan Tomek
TL;DR: Several simple and efficient algorithms for piecewise-linear continuous approximation of waveforms are described; they can be used to reduce the volume of recorded data and can act as nonlinear filters.

5 citations


Proceedings ArticleDOI
01 Nov 1974
TL;DR: In this paper, an optimal procedure for incrementing the tap gains of an adaptive tapped-delay-line data channel equalizer is presented, which converges to tap gain values bounded by those which minimize mean-square error (MSE) and those which minimize median-square error (MDSE).
Abstract: An optimal procedure for incrementing the tap gains of an adaptive tapped-delay-line data channel equalizer is presented. The equalizer algorithm is a normalized Robbins-Monro stochastic approximation procedure which converges to tap gain values bounded by those which minimize mean-square error (MSE) and those which minimize median-square error (MDSE). A truncated version of the algorithm with minimum and maximum allowable values of tap gains will also converge. The problem addressed here is selection of an optimal scalar stepping sequence for the multi-dimensional stochastic search scheme; the objective is accelerated convergence. The optimal sequence derived is minimax in that maximum MSE in tap gain settings is minimized at each iteration. Generally speaking, the optimal approach is to hold step size constant initially, and to then reduce step size at each iteration.

3 citations
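A hedged sketch of the kind of tap-gain update the abstract describes: a stochastic-approximation (LMS-style) equalizer whose scalar step size is held constant for the first iterations and then reduced, echoing the derived schedule. The channel, noise level, training delay, and schedule constants below are all assumptions.

```python
import random

random.seed(0)
channel = [1.0, 0.4, 0.2]                 # mild intersymbol interference
taps = [0.0] * 5

def convolve_at(sig, h, n):
    return sum(h[k] * sig[n - k] for k in range(len(h)) if 0 <= n - k)

symbols = [random.choice([-1.0, 1.0]) for _ in range(4000)]
received = [convolve_at(symbols, channel, n) + random.gauss(0, 0.05)
            for n in range(len(symbols))]

hold = 500                                # constant step size at first...
for n in range(len(taps) - 1, len(symbols)):
    window = received[n - len(taps) + 1 : n + 1][::-1]   # most recent first
    y = sum(t * r for t, r in zip(taps, window))
    err = y - symbols[n - 2]              # train against a delayed symbol
    a_n = 0.01 if n < hold else 0.01 * hold / n          # ...then decreasing
    taps = [t - a_n * err * r for t, r in zip(taps, window)]

print([round(t, 3) for t in taps])   # approximates a delayed inverse of the channel
```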


Journal ArticleDOI
TL;DR: An objective function of k variables, defined as the generalized discrete least pth objective, is minimized using gradient methods.

Abstract: An objective function of k variables, defined as the generalized discrete least pth objective, is minimized using gradient methods.
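A minimal sketch of least pth minimization by gradient descent, the general technique the abstract names rather than the paper's specific formulation: with error functions e_i(x), minimize the sum of |e_i(x)|^p, where a large even p pushes the solution toward the minimax (Chebyshev) one. The problem data below are invented.

```python
p = 6
ts = [i / 10 for i in range(11)]
target = [t * t for t in ts]        # approximate t^2 by a + b*t on [0, 1]

def errors(a, b):
    return [a + b * t - y for t, y in zip(ts, target)]

a, b, step = 0.0, 0.0, 0.05
for _ in range(20000):
    es = errors(a, b)
    # gradient of sum |e_i|^p is p * |e_i|^(p-1) * sign(e_i) * de_i/dx
    ga = sum(p * abs(e) ** (p - 1) * (1 if e > 0 else -1) for e in es)
    gb = sum(p * abs(e) ** (p - 1) * (1 if e > 0 else -1) * t
             for e, t in zip(es, ts))
    a, b = a - step * ga, b - step * gb

print(round(a, 3), round(b, 3))  # roughly a ~ -0.12, b ~ 1.0, near the minimax line t - 1/8
```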

Journal ArticleDOI
TL;DR: In this article, the problem of finding the best vector-valued approximation for a set of experimentally determined exponential decay curves is studied, and a characterization of best approximations to vector-valued functions is given.

Abstract: This paper deals with characterization of best approximations to vector-valued functions. The approximations are themselves vector-valued functions with components depending nonlinearly on the approximation parameters. The constraint is imposed that certain of the parameters should be identical for all components. An application to exponential approximation is discussed in some detail. 1. Introduction. The work reported in this paper was motivated by the following problem: Suppose a set of experimentally determined exponential decay curves is given. It is desired to approximate the curves by functions of the form a exp(λx), where λ should be the same for the entire set of curves and a may vary from curve to curve. The problem is to determine how such approximation might best be made. This problem arises in a number of physical situations. In chemical kinetics, for example, monitoring of a chemical reaction which obeys a first-order rate law leads to just such exponential data, from which one wishes to extract a best λ although the initial amount of material (a) varies from experiment to experiment. In a previous paper (1), this type of constrained vector-valued approximation was studied for the simpler situation where the approximating functions depend linearly on the parameters. In this paper, results for nonlinear approximation are presented. Section 2 contains a precise formulation of the problem and a characterization theorem applicable to the construction of best approximations from general classes of nonlinear families. In Section 3, the particular problem discussed in the preceding paragraph is taken up. A very simple alternation theorem is obtained, as well as an interesting theorem on uniqueness.
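The motivating problem is easy to demonstrate in a least-squares variant, used here only for brevity since the paper itself treats uniform approximation: several decay curves a_j exp(λx) share one λ while each a_j is free. For fixed λ the best a_j has a closed form, so a scalar search over λ solves the whole problem. The data below are synthetic.

```python
import math

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
true_lambda, amps = -1.3, [2.0, 0.7, 1.5]
curves = [[a * math.exp(true_lambda * x) for x in xs] for a in amps]

def residual(lam):
    total = 0.0
    for ys in curves:
        basis = [math.exp(lam * x) for x in xs]
        # closed-form least-squares amplitude for this curve at fixed lambda
        a = sum(b * y for b, y in zip(basis, ys)) / sum(b * b for b in basis)
        total += sum((a * b - y) ** 2 for b, y in zip(basis, ys))
    return total

# crude scalar minimization over lambda (grid scan; any 1-D method would do)
best = min((residual(l / 100), l / 100) for l in range(-300, 0))
print(best)   # residual ~ 0 at lambda = -1.3
```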

Journal ArticleDOI
TL;DR: This short note is a response from CAMAL to a paper on REDUCE and MACSYMA, in which it was suggested that one should compare different algorithms on each system.
Abstract: This short note is a response from CAMAL to a paper on REDUCE and MACSYMA [2], in which it was suggested that one should compare different algorithms on each system. Here CAMAL is used on a PDP-10 to calculate the functions U_rs (r+s ≤ 4) by Fitch's repeated approximation algorithm [1] and C_rs (r+s ≤ 4) by Hall's method 2 [4], both solutions of SIGSAM problem 3 [3]. These figures can be compared with a similar comparison for REDUCE [2].

Journal ArticleDOI
TL;DR: Interpolation brings the discrete problem closer to the continuous minimax approximation problem and reduces the uncertainty in the objective function.
Abstract: A least pth objective function of k variables is minimized using gradient methods; interpolation brings the discrete problem closer to the continuous minimax approximation problem.

Journal ArticleDOI
TL;DR: A class of algorithms for recursive estimation of the mean of a random variable is described and its properties are studied; new algorithms are presented that are shown to be statistically more efficient than previous algorithms and only slightly less efficient than the optimal algorithm of the same type.

Abstract: In this correspondence a class of algorithms for recursive estimation of the mean of a random variable is described and its properties are studied. New algorithms are presented that are shown to be statistically more efficient than the previous algorithms, and only slightly less efficient than the optimal algorithm of the same type. The computation time remains the same as in the best previous algorithm, whereas the optimal algorithm implies a considerably longer computation time.
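A minimal sketch of recursive mean estimation of the kind such papers study: the running estimate m_n = m_{n-1} + g_n (x_n - m_{n-1}), where the gain sequence g_n controls the efficiency trade-off. The gain g_n = 1/n reproduces the sample mean exactly; other gain sequences trade statistical efficiency for simpler arithmetic or faster tracking. The data stream here is invented.

```python
import random

random.seed(1)
stream = [random.gauss(5.0, 2.0) for _ in range(10000)]

m = 0.0
for n, x in enumerate(stream, start=1):
    g = 1.0 / n                 # optimal gain for a stationary mean
    m += g * (x - m)            # recursive update: no need to store the stream

print(m)                        # close to the true mean 5.0
```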