
Showing papers on "Function (mathematics) published in 1996"


Journal ArticleDOI
TL;DR: A new discussion of the complex branches of W, an asymptotic expansion valid for all branches, an efficient numerical procedure for evaluating the function to arbitrary precision, and a method for the symbolic integration of expressions containing W are presented.
Abstract: The Lambert W function is defined to be the multivalued inverse of the function w → we^w. It has many applications in pure and applied mathematics, some of which are briefly described here. We present a new discussion of the complex branches of W, an asymptotic expansion valid for all branches, an efficient numerical procedure for evaluating the function to arbitrary precision, and a method for the symbolic integration of expressions containing W.
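
To make the numerical evaluation concrete, here is a minimal sketch (not the paper's arbitrary-precision procedure) that evaluates the principal real branch W0 with Halley's iteration; the starting guess, tolerance, and restriction to real x ≥ −1/e are choices made for this illustration.

```python
import math

def lambertw0(x, tol=1e-12, max_iter=50):
    """Principal branch W0(x) for real x >= -1/e, via Halley's iteration on
    f(w) = w*exp(w) - x.  Convergence slows near the branch point x = -1/e."""
    if x < -1.0 / math.e:
        raise ValueError("W0 is real only for x >= -1/e")
    w = math.log1p(x)                     # crude starting guess on the principal branch
    for _ in range(max_iter):
        ew = math.exp(w)
        f = w * ew - x
        # Halley's method: w <- w - f / (f' - f*f''/(2 f')), with
        # f'(w) = e^w (w+1) and f''(w) = e^w (w+2).
        w_next = w - f / (ew * (w + 1.0) - (w + 2.0) * f / (2.0 * w + 2.0))
        if abs(w_next - w) <= tol * (1.0 + abs(w_next)):
            return w_next
        w = w_next
    return w

print(lambertw0(1.0))   # omega constant, approx. 0.567143
```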

5,591 citations


Journal ArticleDOI
TL;DR: The pseudopotential is of an analytic form that gives optimal efficiency in numerical calculations using plane waves as a basis set and is separable and has optimal decay properties in both real and Fourier space.
Abstract: We present pseudopotential coefficients for the first two rows of the Periodic Table. The pseudopotential is of an analytic form that gives optimal efficiency in numerical calculations using plane waves as a basis set. At most, seven coefficients are necessary to specify its analytic form. It is separable and has optimal decay properties in both real and Fourier space. Because of this property, the application of the nonlocal part of the pseudopotential to a wave function can be done efficiently on a grid in real space. Real space integration is much faster for large systems than ordinary multiplication in Fourier space, since it shows only quadratic scaling with respect to the size of the system. We systematically verify the high accuracy of these pseudopotentials by extensive atomic and molecular test calculations. © 1996 The American Physical Society.

5,009 citations


Journal Article
TL;DR: In this paper, the authors consider the question of determining whether a function f has property P or is ε-far from any function with property P. In some cases, the algorithm is also allowed to query f on instances of its choice.
Abstract: In this paper, we consider the question of determining whether a function f has property P or is ε-far from any function with property P. A property testing algorithm is given a sample of the value of f on instances drawn according to some distribution. In some cases, it is also allowed to query f on instances of its choice. We study this question for different properties and establish some connections to problems in learning theory and approximation. In particular, we focus our attention on testing graph properties. Given access to a graph G in the form of being able to query whether an edge exists or not between a pair of vertices, we devise algorithms to test whether the underlying graph has properties such as being bipartite, k-Colorable, or having a p-Clique (clique of density p with respect to the vertex set). Our graph property testing algorithms are probabilistic and make assertions that are correct with high probability, while making a number of queries that is independent of the size of the graph. Moreover, the property testing algorithms can be used to efficiently (i.e., in time linear in the number of vertices) construct partitions of the graph that correspond to the property being tested, if it holds for the input graph.
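
As a hedged illustration of the query model, the sketch below tests bipartiteness by sampling a set of vertices through an adjacency oracle and 2-coloring the induced subgraph; the sample size is a rough stand-in for the paper's poly(1/ε) bounds, and the oracles in the usage example are invented for this demonstration.

```python
import random

def is_bipartite(vertices, has_edge):
    """Greedy 2-coloring of the induced subgraph (DFS order); returns False
    if some edge joins two vertices of the same color (odd cycle)."""
    color = {}
    for s in vertices:
        if s in color:
            continue
        color[s] = 0
        stack = [s]
        while stack:
            u = stack.pop()
            for v in vertices:
                if v != u and has_edge(u, v):
                    if v not in color:
                        color[v] = 1 - color[u]
                        stack.append(v)
                    elif color[v] == color[u]:
                        return False
    return True

def test_bipartite(n, has_edge, eps, trials=None):
    """Accept if the graph looks bipartite on a random vertex sample.
    The sample size ~ 1/eps^2 is illustrative, not the paper's exact bound."""
    if trials is None:
        trials = max(4, int(4 / eps ** 2))
    sample = random.sample(range(n), min(n, trials))
    return is_bipartite(sample, has_edge)

# Usage with invented oracles: a complete graph is far from bipartite,
# a "parity" graph (edges only between odd and even vertices) is bipartite.
complete = lambda u, v: True
parity_bipartite = lambda u, v: (u % 2) != (v % 2)
print(test_bipartite(1000, complete, eps=0.1))          # expect False
print(test_bipartite(1000, parity_bipartite, eps=0.1))  # expect True
```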

870 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a functional formulation of the groundwater flow inverse problem that is sufficiently general to accommodate most commonly used inverse algorithms, including the Gaussian maximum a posteriori (GAP) algorithm.
Abstract: This paper presents a functional formulation of the groundwater flow inverse problem that is sufficiently general to accommodate most commonly used inverse algorithms. Unknown hydrogeological properties are assumed to be spatial functions that can be represented in terms of a (possibly infinite) basis function expansion with random coefficients. The unknown parameter function is related to the measurements used for estimation by a "forward operator" which describes the measurement process. In the particular case considered here, the parameter of interest is the large-scale log hydraulic conductivity, the measurements are point values of log conductivity and piezometric head, and the forward operator is derived from an upscaled groundwater flow equation. The inverse algorithm seeks the "most probable" or maximum a posteriori estimate of the unknown parameter function. When the measurement errors and parameter function are Gaussian and independent, the maximum a posteriori estimate may be obtained by minimizing a least squares performance index which can be partitioned into goodness-of-fit and prior terms. When the parameter is a stationary random function the prior portion of the performance index is equivalent to a regularization term which imposes a smoothness constraint on the estimate. This constraint tends to make the problem well-posed by limiting the range of admissible solutions. The Gaussian maximum a posteriori problem may be solved with variational methods, using functional generalizations of Gauss-Newton or gradient-based search techniques. Several popular groundwater inverse algorithms are either special cases of, or variants on, the functional maximum a posteriori algorithm. These algorithms differ primarily with respect to the way they describe spatial variability and the type of search technique they use (linear versus nonlinear). The accuracy of estimates produced by both linear and nonlinear inverse algorithms may be measured in terms of a Bayesian extension of the Cramer-Rao lower bound on the estimation error covariance. This bound suggests how parameter identifiability can be improved by modifying the problem structure and adding new measurements.
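
A schematic form of the least-squares performance index described above, in generic notation chosen for this illustration (the paper's functional setting and symbols differ):

```latex
% Illustrative maximum a posteriori performance index (notation is generic):
%   y            - vector of measurements
%   h(\alpha)    - forward operator applied to the parameter function \alpha
%   \bar{\alpha} - prior mean;  C_v, C_\alpha - error and prior covariances
J(\alpha) \;=\;
  \underbrace{\bigl[y - h(\alpha)\bigr]^{\mathsf T} C_v^{-1} \bigl[y - h(\alpha)\bigr]}_{\text{goodness-of-fit term}}
  \;+\;
  \underbrace{\bigl[\alpha - \bar{\alpha}\bigr]^{\mathsf T} C_\alpha^{-1} \bigl[\alpha - \bar{\alpha}\bigr]}_{\text{prior / regularization term}}
```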

564 citations


Journal ArticleDOI
TL;DR: In this paper, local high-order polynomial fitting is employed for the estimation of the multivariate regression function m(x_1, …, x_d) = E{φ(Y_d) | X_1 = x_1, …, X_d = x_d}, and of its partial derivatives, for stationary random processes {Y_i, X_i}.
Abstract: Local high-order polynomial fitting is employed for the estimation of the multivariate regression function m(x_1, …, x_d) = E{φ(Y_d) | X_1 = x_1, …, X_d = x_d}, and of its partial derivatives, for stationary random processes {Y_i, X_i}. The function φ may be selected to yield estimates of the conditional mean, conditional moments and conditional distributions. Uniform strong consistency over compact subsets of R^d, along with rates, are established for the regression function and its partial derivatives for strongly mixing processes.
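
A minimal one-dimensional sketch of local (here, first-order) polynomial fitting under an assumed Gaussian kernel and bandwidth; the paper covers general order, dimension d, and mixing conditions.

```python
import numpy as np

def local_linear(x0, X, Y, h, phi=lambda y: y):
    """Local linear (order-1) estimate of m(x0) = E[phi(Y) | X = x0] and of its
    derivative, using a Gaussian kernel of bandwidth h (d = 1 for simplicity)."""
    w = np.exp(-0.5 * ((X - x0) / h) ** 2)            # kernel weights
    A = np.column_stack([np.ones_like(X), X - x0])    # local design matrix
    sw = np.sqrt(w)                                   # weighted least squares
    beta, *_ = np.linalg.lstsq(sw[:, None] * A, sw * phi(Y), rcond=None)
    return beta[0], beta[1]                           # m(x0) and dm/dx at x0

# Toy usage on a dependent (AR(1)-driven) sequence, in the spirit of mixing processes
rng = np.random.default_rng(0)
X = np.zeros(2000)
for t in range(1, 2000):
    X[t] = 0.6 * X[t - 1] + rng.normal()
Y = np.sin(X) + 0.3 * rng.normal(size=2000)
m_hat, _ = local_linear(0.5, X, Y, h=0.3)
print(m_hat, np.sin(0.5))    # estimate vs. the true regression function at 0.5
```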

501 citations


Journal ArticleDOI
TL;DR: In this article, the spectral representation method is used to simulate multi-dimensional, homogeneous, Gaussian stochastic fields whose spatially averaged mean value, autocorrelation function, and power spectral density function match the corresponding target functions.
Abstract: The subject of this paper is the simulation of multi-dimensional, homogeneous, Gaussian stochastic fields using the spectral representation method. Following this methodology, sample functions of the stochastic field can be generated using a cosine series formula. These sample functions accurately reflect the prescribed probabilistic characteristics of the stochastic field when the number of terms in the cosine series is large. The ensemble-averaged power spectral density or autocorrelation function approaches the corresponding target function as the sample size increases. In addition, the generated sample functions possess ergodic characteristics in the sense that the spatially-averaged mean value, autocorrelation function and power spectral density function are identical with the corresponding targets, when the averaging takes place over the multi-dimensional domain associated with the fundamental period of the cosine series. Another property of the simulated stochastic field is that it is asymptotically Gaussian as the number of terms in the cosine series approaches infinity. The most important feature of the method is that the cosine series formula can be numerically computed very efficiently using the Fast Fourier Transform technique. The main area of application of this method is the Monte Carlo solution of stochastic problems in structural engineering, engineering mechanics and physics. Specifically, the method has been applied to problems involving random loading (random vibration theory) and random material and geometric properties (response variability due to system stochasticity).
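
A one-dimensional sketch of the cosine-series (spectral representation) formula, with an illustrative target spectrum that is not taken from the paper; the multi-dimensional and FFT-accelerated versions follow the same pattern.

```python
import numpy as np

def simulate_1d_field(S, kappa_max, N, x, rng):
    """One sample function of a zero-mean homogeneous Gaussian field via the
    spectral representation (cosine series) method, 1-D version:
        f(x) = sum_n sqrt(2 * S(kappa_n) * dkappa) * cos(kappa_n * x + phi_n)
    with phi_n i.i.d. uniform on [0, 2*pi)."""
    dkappa = kappa_max / N
    kappa = (np.arange(N) + 0.5) * dkappa           # midpoint frequencies
    phi = rng.uniform(0.0, 2.0 * np.pi, size=N)     # random phase angles
    amps = np.sqrt(2.0 * S(kappa) * dkappa)
    return (amps[None, :] * np.cos(np.outer(x, kappa) + phi)).sum(axis=1)

# Illustrative one-sided target spectrum (unit variance, correlation scale 2)
S = lambda k: (2.0 / np.pi) * 2.0 / (1.0 + (2.0 * k) ** 2)
rng = np.random.default_rng(1)
x = np.linspace(0.0, 100.0, 2001)
sample = simulate_1d_field(S, kappa_max=8.0, N=512, x=x, rng=rng)
# Spatial variance of one long sample; the target is the integral of S up to kappa_max (~0.96)
print(sample.var())
```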

421 citations


Journal ArticleDOI
TL;DR: In this article, a simple model based on adding two Weibull survival functions is proposed for modeling the lifetime distributions of components that exhibit a bathtub-shaped failure rate in practice.
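
A small sketch of the additive two-Weibull idea under illustrative parameter values: one component with shape parameter below 1 contributes a decreasing hazard, one with shape above 1 an increasing hazard, and their sum is bathtub-shaped.

```python
import numpy as np

def additive_weibull_survival(t, a1, b1, a2, b2):
    """Survival function of the additive two-Weibull model:
    S(t) = exp(-(t/a1)**b1 - (t/a2)**b2)."""
    return np.exp(-(t / a1) ** b1 - (t / a2) ** b2)

def hazard(t, a1, b1, a2, b2):
    """h(t) = (b1/a1)(t/a1)**(b1-1) + (b2/a2)(t/a2)**(b2-1).
    b1 < 1 gives a decreasing (infant-mortality) term, b2 > 1 an increasing
    (wear-out) term; their sum is bathtub-shaped."""
    return (b1 / a1) * (t / a1) ** (b1 - 1) + (b2 / a2) * (t / a2) ** (b2 - 1)

t = np.linspace(0.01, 10.0, 5)
print(hazard(t, a1=2.0, b1=0.5, a2=8.0, b2=3.0))   # high, then low, then rising: bathtub
```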

418 citations


Journal ArticleDOI
TL;DR: It is proved that neural networks with a single hidden layer are capable of providing an optimal order of approximation for functions assumed to possess a given number of derivatives, if the activation function evaluated by each principal element satisfies certain technical conditions.
Abstract: We prove that neural networks with a single hidden layer are capable of providing an optimal order of approximation for functions assumed to possess a given number of derivatives, if the activation function evaluated by each principal element satisfies certain technical conditions. Under these conditions, it is also possible to construct networks that provide a geometric order of approximation for analytic target functions. The permissible activation functions include the squashing function (1 + e^(-x))^(-1) as well as a variety of radial basis functions. Our proofs are constructive. The weights and thresholds of our networks are chosen independently of the target function; we give explicit formulas for the coefficients as simple, continuous, linear functionals of the target function.

392 citations


Journal ArticleDOI
TL;DR: An example of the common deficiencies seen in typical basis sets, including standard basis sets in GAUSSIAN94, is shown, and a set of properly optimized (n + 1)p functions is presented that offers a more compact and satisfactory solution to the proper placement of the node.
Abstract: Although the (n + 1)p orbital is unoccupied in transition-metal ground-state configurations, which are all nd^x(n + 1)s^y, these (n + 1)p functions play a crucial role in the structure of transition metal complexes. As we show here, the usual solution, adding one or more diffuse functions, can be insufficient to create an orbital of the correct energy. The major problem appears to be due to the incorrect placement of the (n + 1)p orbital's node. Even splitting the most diffuse component of the np orbital and adding a second diffuse function does not completely solve this problem. Although one can usually solve this deficiency by further uncontracting of the np function, here we offer a set of properly optimized (n + 1)p functions that offer a more compact and satisfactory solution to the proper placement of the node. We show an example of the common deficiencies seen in typical basis sets, including standard basis sets in GAUSSIAN94, and show that the new optimized (n + 1)p function performs well compared to a fully uncontracted basis set. © 1996 by John Wiley & Sons, Inc.

390 citations


Journal ArticleDOI
TL;DR: In this paper, a general scheme for constructing symmetric and/or antisymmetric compactly supported orthonormal multi-scaling functions and multi-wavelets is introduced, where the main emphasis is on maximum order of polynomial-reproduction by the scaling functions, or equivalently maximum number of vanishing moments for the corresponding wavelets.

377 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that the expansion exists also when f is only supposed to be measurable and bounded, under an additional nondegeneracy condition of Hormander type for the infinitesimal generator of (X====== t>>\s ): to obtain this result, we use the stochastic variations calculus.
Abstract: We study the approximation problem of E f(X_T) by E f(X_T^n), where (X_t) is the solution of a stochastic differential equation, (X_t^n) is defined by the Euler discretization scheme with step T/n, and f is a given function. For smooth f's, Talay and Tubaro have shown that the error E f(X_T) − E f(X_T^n) can be expanded in powers of 1/n, which permits the construction of Romberg extrapolation procedures to accelerate the convergence rate. Here, we prove that the expansion exists also when f is only supposed measurable and bounded, under an additional nondegeneracy condition of Hörmander type for the infinitesimal generator of (X_t); to obtain this result, we use the stochastic calculus of variations. In the second part of this work, we consider the density of the law of X_T^n and compare it to the density of the law of X_T.
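
A Monte Carlo sketch of the Euler scheme together with the Romberg (Richardson) extrapolation 2·E f(X_T^{2n}) − E f(X_T^n) that the error expansion justifies; the SDE, payoff, and sample sizes below are illustrative choices, not the paper's examples.

```python
import numpy as np

def euler_mc(f, x0, b, sigma, T, n, n_paths, rng):
    """Monte Carlo estimate of E f(X_T^n) for the Euler scheme with step T/n
    applied to dX = b(X) dt + sigma(X) dW."""
    dt = T / n
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(n):
        dw = rng.normal(scale=np.sqrt(dt), size=n_paths)
        x = x + b(x) * dt + sigma(x) * dw
    return f(x).mean()

# Illustrative example: geometric Brownian motion with a bounded, non-smooth
# payoff (an indicator), for which E f(X_T) is known in closed form.
b = lambda x: 0.05 * x
sigma = lambda x: 0.2 * x
f = lambda x: (x > 1.0).astype(float)
rng = np.random.default_rng(2)
e_n  = euler_mc(f, 1.0, b, sigma, 1.0, n=10, n_paths=400_000, rng=rng)
e_2n = euler_mc(f, 1.0, b, sigma, 1.0, n=20, n_paths=400_000, rng=rng)
romberg = 2.0 * e_2n - e_n        # cancels the leading 1/n term of the error expansion
print(e_n, e_2n, romberg)
```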

Journal ArticleDOI
TL;DR: In this paper, a genetic algorithm technique was used to determine a set of unknown parameters that best matched the Blaze II chemical laser model predictions with experimental data, which can be used as a method to guide experiments to improve chemical laser performance.
Abstract: A genetic algorithm technique was implemented to determine a set of unknown parameters that best matched the Blaze II chemical laser model predictions with experimental data. This is the first known application of the genetic algorithm technique for modeling lasers, chemically reacting flows, and chemical lasers. Overall, the genetic algorithm technique worked exceptionally well for this chemical laser modeling problem in a cost-effective and time-efficient manner. Blaze II was baselined to existing chemical oxygen-iodine laser data taken with the research assessment and device improvement chemical laser device with very good agreement. Mixing calculations for the research assessment and device improvement chemical laser nozzle indicate that higher iodine flow rates are necessary to maintain a significant fraction of the nominal performance as the total pressure is increased by the addition of helium; this agrees with research assessment and device improvement chemical laser experimental data. It may be possible to implement this genetic algorithm technique to optimize the performance of any chemical laser as a function of any of the flow rates, mirror location, mirror size, nozzle configuration, injector sizes, and other factors. This modeling procedure can be used as a method to guide experiments to improve chemical laser performance.

Journal ArticleDOI
TL;DR: Genetic algorithms have been used in an attempt to optimize a specified objective function related to a clustering problem, and it is shown that the proposed method may improve the final output of K-means where an improvement is possible.
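
A toy sketch of the idea, assuming the clustering objective is the within-cluster sum of squares and each chromosome encodes K cluster centres; the operators and parameters are illustrative, not the paper's.

```python
import numpy as np

def sse(centres, X):
    """Within-cluster sum of squared distances (the clustering objective)."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).sum()

def ga_kmeans(X, k, pop=30, gens=100, sigma=0.1, rng=None):
    """Toy genetic algorithm over centre configurations: tournament selection,
    arithmetic crossover, Gaussian mutation, and elitism.  Purely illustrative."""
    rng = rng or np.random.default_rng()
    n, _ = X.shape
    popn = X[rng.integers(0, n, size=(pop, k))]          # initialise centres from data points
    for _ in range(gens):
        fit = np.array([sse(c, X) for c in popn])
        new = [popn[fit.argmin()].copy()]                # elitism: keep the best individual
        while len(new) < pop:
            i, j = rng.integers(0, pop, 2), rng.integers(0, pop, 2)
            pa = popn[i[fit[i].argmin()]]                # tournament selection
            pb = popn[j[fit[j].argmin()]]
            lam = rng.random()
            child = lam * pa + (1 - lam) * pb            # arithmetic crossover
            child += sigma * rng.normal(size=child.shape)  # Gaussian mutation
            new.append(child)
        popn = np.array(new)
    fit = np.array([sse(c, X) for c in popn])
    return popn[fit.argmin()]

# Usage: two well-separated blobs, k = 2
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(3, 0.3, (100, 2))])
print(ga_kmeans(X, 2, rng=rng))
```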

Journal ArticleDOI
TL;DR: In this paper, it is known that the total mass of the measure is a polynomial in m for sufficiently large m (denote by k its degree) and that, after rescaling by m^−k, the measure converges weakly to a limit whose density with respect to the Lebesgue measure supported on the polytope is a piecewise-polynomial function.
Abstract: This is a convex subset of the real vector space P ⊗_Z R. In fact, it is a convex polytope; see [Br], where this polytope is discussed from an algebraic point of view. It is known that the total mass of the measure is a polynomial in m for sufficiently large m (denote by k its degree) and that, after rescaling by m^−k, the measure converges weakly to a limit whose density with respect to the Lebesgue measure supported on the polytope is a piecewise-polynomial function; we will not use this piecewise polynomiality in this paper. Recall that a real function f defined on a convex subset U of a vector space V is called concave, if

Journal ArticleDOI
TL;DR: In this paper, the strength and range of interpoint interactions in a spatial point process can be quantified by the function J = (1 - G)/(1 - F), where G is the nearest-neighbour distance distribution function and F the empty space function of the process.
Abstract: The strength and range of interpoint interactions in a spatial point process can be quantified by the function J = (1 - G)/(1 - F), where G is the nearest-neighbour distance distribution function and F the empty space function of the process. J(r) is identically equal to 1 for a Poisson process; values of J(r) smaller or larger than 1 indicate clustering or regularity, respectively. We show that, for a large class of point processes, J(r) is constant for distances r greater than the range of spatial interaction. Hence both the range and type of interaction can be inferred from J without parametric model assumptions. It is also possible to evaluate J(r) explicitly for many point process models, so that J is also useful for parameter estimation. Various properties are derived, including the fact that the J function of the superposition of independent point processes is a weighted mean of the J functions of the individual processes. Estimators of J can be constructed from standard estimators of F and G. We compute estimates of J for several standard point pattern datasets and implement a Monte Carlo test for complete spatial randomness.
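
A naive plug-in estimate of J(r) on the unit square, ignoring edge correction (which the standard estimators of F and G handle); for a Poisson pattern the values should be close to 1.

```python
import numpy as np

def nearest_dist(from_pts, to_pts, exclude_self=False):
    """Distance from each point in from_pts to its nearest point in to_pts."""
    d = np.linalg.norm(from_pts[:, None, :] - to_pts[None, :, :], axis=2)
    if exclude_self:
        np.fill_diagonal(d, np.inf)
    return d.min(axis=1)

def J_estimate(points, r, n_grid=50):
    """Naive estimate of J(r) = (1 - G(r)) / (1 - F(r)) on the unit square.
    F uses empty-space distances from a regular grid, G uses nearest-neighbour
    distances between points; no edge correction, for illustration only."""
    g = np.linspace(0.0, 1.0, n_grid)
    grid = np.array([(a, b) for a in g for b in g])
    F = (nearest_dist(grid, points) <= r).mean()
    G = (nearest_dist(points, points, exclude_self=True) <= r).mean()
    return (1.0 - G) / (1.0 - F)

# A binomial (Poisson-like) pattern should give J(r) close to 1 for small r
rng = np.random.default_rng(4)
pts = rng.random((200, 2))
print([round(J_estimate(pts, r), 2) for r in (0.02, 0.04, 0.06)])
```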

Journal ArticleDOI
TL;DR: Two experiments on performance on the traveling salesman problem (TSP) are reported, testing the hypothesis that complexity of TSPs is a function of number of nonboundary points, not total number of points.
Abstract: Two experiments on performance on the traveling salesman problem (TSP) are reported. The TSP consists of finding the shortest path through a set of points, returning to the origin. It appears to be an intransigent mathematical problem, and heuristics have been developed to find approximate solutions. The first experiment used 10-point, the second, 20-point problems. The experiments tested the hypothesis that complexity of TSPs is a function of number of nonboundary points, not total number of points. Both experiments supported the hypothesis. The experiments provided information on the quality of subjects’ solutions. Their solutions clustered close to the best known solutions, were an order of magnitude better than solutions produced by three well-known heuristics, and on average fell beyond the 99.9th percentile in the distribution of random solutions. The solution process appeared to be perceptually based.
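
The "nonboundary points" of the hypothesis are the points interior to the convex hull; a small sketch of counting them (assuming SciPy is available) is given below, since an optimal Euclidean tour visits the hull points in their hull order.

```python
import numpy as np
from scipy.spatial import ConvexHull

def interior_point_count(points):
    """Number of nonboundary points: points that are not vertices of the
    convex hull.  The experiments' hypothesis is that TSP difficulty grows
    with this count rather than with the total number of points."""
    hull = ConvexHull(points)
    return len(points) - len(hull.vertices)

rng = np.random.default_rng(7)
pts = rng.random((20, 2))        # a 20-point problem, as in the second experiment
print(interior_point_count(pts))
```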

Journal ArticleDOI
TL;DR: This article shows that the generalization error can be decomposed into two terms: the approximation error, due to the insufficient representational capacity of a finite sized network, and the estimation error,due to insufficient information about the target function because of the finite number of samples.
Abstract: Feedforward networks together with their training algorithms are a class of regression techniques that can be used to learn to perform some task from a set of examples. The question of generalization of network performance from a finite training set to unseen data is clearly of crucial importance. In this article we first show that the generalization error can be decomposed into two terms: the approximation error, due to the insufficient representational capacity of a finite sized network, and the estimation error, due to insufficient information about the target function because of the finite number of samples. We then consider the problem of learning functions belonging to certain Sobolev spaces with gaussian radial basis functions. Using the above-mentioned decomposition we bound the generalization error in terms of the number of basis functions and number of examples. While the bound that we derive is specific for radial basis functions, a number of observations deriving from it apply to any approximation technique. Our result also sheds light on ways to choose an appropriate network architecture for a particular problem and the kinds of problems that can be effectively solved with finite resources, i.e., with a finite number of parameters and finite amounts of data.
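
The decomposition can be stated schematically as follows, in generic notation chosen for this note (the paper gives the precise bounds in terms of the number of basis functions and the number of examples):

```latex
% Schematic decomposition via the triangle inequality (illustrative notation):
%   f_0          - target function
%   f_n          - best approximation to f_0 within the n-parameter network class
%   \hat f_{n,N} - network estimated from N samples
\| f_0 - \hat f_{n,N} \|
  \;\le\;
  \underbrace{\| f_0 - f_n \|}_{\text{approximation error (finite network)}}
  \;+\;
  \underbrace{\| f_n - \hat f_{n,N} \|}_{\text{estimation error (finite sample)}}
```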

Journal ArticleDOI
TL;DR: Two protocols, both based on a Boolean formula Phi containing AND-, OR-, and NOT-operators that verifies an NP-witness of membership in L, have the smallest known asymptotic communication complexity among general proofs or arguments for NP.
Abstract: We present a zero-knowledge proof system [19] for any NP language L, which allows showing that x in L with error probability less than 2^−k using communication corresponding to O(|x|^c) + k bit commitments, where c is a constant depending only on L. The proof can be based on any bit commitment scheme with a particular set of properties. We suggest an efficient implementation based on factoring. We also present a 4-move perfect zero-knowledge interactive argument for any NP-language L. On input x in L, the communication complexity is O(|x|^c) · max(k, l) bits, where l is the security parameter for the prover. Again, the protocol can be based on any bit commitment scheme with a particular set of properties. We suggest efficient implementations based on discrete logarithms or factoring. We present an application of our techniques to multiparty computations, allowing for example t committed oblivious transfers with error probability 2^−k to be done simultaneously using O(t + k) commitments. Results for general computations follow from this. As a function of the security parameters, our protocols have the smallest known asymptotic communication complexity among general proofs or arguments for NP. Moreover, the constants involved are small enough for the protocols to be practical in a realistic situation: both protocols are based on a Boolean formula Phi containing AND-, OR-, and NOT-operators which verifies an NP-witness of membership in L. Let n be the number of times this formula reads an input variable. Then the communication complexity of the protocols, when using our concrete commitment schemes, can be stated more precisely as at most 4n + k + 1 commitments for the interactive proof and at most 5nl + 5l bits for the argument (assuming k ≤ l), so the number of commitments required for the proof is linear in n. Both protocols are also proofs of knowledge of an NP-witness of membership in the language involved.

Patent
Jim Gray, Donald C. Reichart
16 Dec 1996
TL;DR: In this article, an efficient implementation of a multidimensional data aggregation operator is described that generates all aggregates and super-aggregates for all available values in a results set by first generating a minimal number of aggregates at the lowest possible system level, and second, categorizing the aggregate function being applied and applying it with the fewest possible function calls.
Abstract: An efficient implementation of a multidimensional data aggregation operator that generates all aggregates and super-aggregates for all available values in a results set by first generating a minimal number of aggregates at the lowest possible system level using a minimal number of function calls, and second categorizing the aggregate function being applied and applying the aggregate function with the fewest possible function calls. The aggregates are generated from a union of roll-ups of the n attributes to the GROUP BY clause of the SELECT statement. The number of roll-ups is minimized by including a barrel shift of the attributes being rolled up. A scoreboard array of 2n bits is updated during the roll-up and barrel shifting process to keep track of which roll-ups are complete and which are not yet complete. Generating super-aggregates is further optimized by identifying the type of aggregate function being applied and facilitating the most efficient application of the aggregate function. A filter_super() function is implemented to facilitate the most efficient application of algebraic aggregate functions that require access to intermediate aggregate data that heretofore was not available to any algebraic aggregation operator.
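
To illustrate the roll-up idea outside the patent's SQL setting, here is a small Python sketch that computes all 2^n groupings of a SUM by rolling each coarser grouping up from an already computed finer one; this works for distributive aggregates such as SUM or COUNT, whereas algebraic aggregates such as AVG need the intermediate data the patent's mechanism preserves.

```python
from itertools import combinations
from collections import defaultdict

def cube_sum(rows, dims, measure):
    """All 2^n groupings of SUM(measure) over the dimension columns.
    The finest grouping is computed from the raw rows once; every coarser
    (super-)aggregate is then rolled up from an already computed aggregate."""
    finest = defaultdict(float)
    for r in rows:
        finest[tuple(r[d] for d in dims)] += r[measure]
    results = {tuple(dims): dict(finest)}
    # Roll up: drop one dimension at a time, reusing the next-finer aggregate.
    for size in range(len(dims) - 1, -1, -1):
        for subset in combinations(dims, size):
            parent = next(p for p in results
                          if set(subset) <= set(p) and len(p) == size + 1)
            idx = [parent.index(d) for d in subset]
            agg = defaultdict(float)
            for key, val in results[parent].items():
                agg[tuple(key[i] for i in idx)] += val
            results[subset] = dict(agg)
    return results

rows = [{"year": 1996, "region": "E", "sales": 10.0},
        {"year": 1996, "region": "W", "sales": 7.0},
        {"year": 1997, "region": "E", "sales": 5.0}]
out = cube_sum(rows, ["year", "region"], "sales")
print(out[("year",)], out[()])   # per-year totals and the grand total
```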

Journal ArticleDOI
TL;DR: In this article, the authors constructed fully self-consistent models of triaxial galaxies with central density cusps, which have densities that vary as r^−γ near the center and r^−4 at large radii.
Abstract: We have constructed fully self-consistent models of triaxial galaxies with central density cusps. The triaxial generalizations of Dehnen’s (1993) spherical mass models are presented, which have densities that vary as r^−γ near the center and r^−4 at large radii. We computed libraries of ∼ 7000 orbits in each of two triaxial models with γ = 1 (“weak cusp”) and γ = 2 (“strong cusp”); these two models have density profiles similar to those of the “core” and “power-law” galaxies observed by HST. Both mass models have short-to-long axis ratios of 1:2 and are maximally triaxial. The major orbit families and their associated periodic orbits were mapped as a function of energy. A large fraction of the orbits in both model potentials are stochastic, as evidenced by their non-zero Liapunov exponents. We show that most of the stochastic orbits in the strong-cusp potential diffuse relatively quickly through their allowed phase-space volumes, on time scales of 10 − 10 dynamical times. Stochastic orbits in the weak-cusp potential diffuse more slowly, often retaining their box-like shapes for 10 dynamical times or longer. Attempts to construct self-consistent solutions using just the regular orbits failed for both mass models. Quasi-equilibrium solutions that include the stochastic orbits exist for both models; however, real galaxies constructed in this way would evolve near the center due to the continued mixing of the stochastic orbits. We attempted to construct more nearly stationary models in which stochastic phase space was uniformly populated at low energies. These “fully mixed” solutions were found to exist only for the weak-cusp potential; as much as ∼ 1/3 of the mass near the center of these models could be placed on stochastic orbits without destroying the self-consistency. No significant fraction of the mass could be placed on fully-mixed stochastic orbits in the strong-cusp model, demonstrating that strong triaxiality can be inconsistent with a high central density. Our results suggest that chaos is a generic feature of the motion in realistic triaxial potentials, but that the presence of chaos is not necessarily inconsistent with the existence of stationary triaxial configurations.

Journal ArticleDOI
TL;DR: In this paper, two sets of criteria are proposed: the first defines parameter ranges representing each wave type; the second, for the particular case of the diffusive wave model, defines criteria for the choice of numerical algorithm and appropriate space and time steps.

Journal ArticleDOI
TL;DR: In this paper, the authors measured the current noise of thin-film resistors as a function of current and temperature and for resistor lengths of 7000, 100, 30, and 1 μm.
Abstract: We have measured the current noise of silver thin-film resistors as a function of current and temperature and for resistor lengths of 7000, 100, 30, and 1 μm. As the resistor becomes shorter than the electron-phonon interaction length, the current noise for large current increases from a nearly current-independent value to the interacting hot-electron value (√3/4)2eI. However, further reduction in length below the electron-electron interaction length decreases the noise to a value approaching the independent hot-electron value (1/3)2eI first predicted for mesoscopic resistors.

Journal ArticleDOI
TL;DR: It is shown that the class of two-layer neural networks with bounded fan-in is efficiently learnable in a realistic extension to the probably approximately correct (PAC) learning model.
Abstract: We show that the class of two-layer neural networks with bounded fan-in is efficiently learnable in a realistic extension to the probably approximately correct (PAC) learning model. In this model, a joint probability distribution is assumed to exist on the observations and the learner is required to approximate the neural network which minimizes the expected quadratic error. As special cases, the model allows learning real-valued functions with bounded noise, learning probabilistic concepts, and learning the best approximation to a target function that cannot be well approximated by the neural network. The networks we consider have real-valued inputs and outputs, an unlimited number of threshold hidden units with bounded fan-in, and a bound on the sum of the absolute values of the output weights. The number of computation steps of the learning algorithm is bounded by a polynomial in 1/ε, 1/δ, n and B, where ε is the desired accuracy, δ is the probability that the algorithm fails, n is the input dimension, and B is the bound on both the absolute value of the target (which may be a random variable) and the sum of the absolute values of the output weights. In obtaining the result, we also extended some results on iterative approximation of functions in the closure of the convex hull of a function class and on the sample complexity of agnostic learning with the quadratic loss function.

Journal ArticleDOI
TL;DR: In this paper, a bifurcation theory of the order function is developed on the basis of its self-consistent functional equation to elucidate, in particular, generic scaling behavior of such systems at the onset of cooperative entrainment.

Journal ArticleDOI
TL;DR: A new technique for solving prediction problems under asymmetric loss using piecewise-linear approximations to the loss function is proposed, and the existence and uniqueness of the optimal predictor are established.
Abstract: We make three related contributions. First, we propose a new technique for solving prediction problems under asymmetric loss using piecewise-linear approximations to the loss function, and we establish existence and uniqueness of the optimal predictor. Second, we provide a detailed application to optimal prediction of a conditionally heteroscedastic process under asymmetric loss, the insights gained from which are broadly applicable. Finally, we incorporate our results into a general framework for recursive prediction-based model selection under the relevant loss function.
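
A minimal instance of prediction under piecewise-linear ("lin-lin") asymmetric loss, for which the optimal predictor is a quantile of the predictive distribution; the distribution and loss weights below are illustrative, and the conditionally heteroscedastic case treated in the paper is not modeled here.

```python
import numpy as np

def linlin_loss(e, a, b):
    """Piecewise-linear ('lin-lin') loss: a*|e| for under-prediction (e > 0),
    b*|e| for over-prediction (e < 0)."""
    return np.where(e > 0, a * e, -b * e)

def optimal_predictor(samples, a, b):
    """Minimiser of E[linlin_loss(Y - c)] over c is the a/(a+b) quantile of Y."""
    return np.quantile(samples, a / (a + b))

# Check numerically against a brute-force search over candidate predictors
rng = np.random.default_rng(5)
y = rng.normal(0.0, 2.0, 100_000)       # stand-in for a conditional predictive density
a, b = 3.0, 1.0                         # under-prediction three times costlier
grid = np.linspace(-3, 3, 601)
risks = [linlin_loss(y - c, a, b).mean() for c in grid]
print(optimal_predictor(y, a, b), grid[int(np.argmin(risks))])   # both near the 0.75 quantile
```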

Journal ArticleDOI
TL;DR: If the LIP algorithm is applied to integer data, one obtains as another corollary a new proof of a well-known theorem of Tardos that linear programming can be solved in strongly polynomial time provided that A contains small-integer entries.
Abstract: We propose a primal-dual "layered-step" interior point (LIP) algorithm for linear programming with data given by real numbers. This algorithm follows the central path, either with short steps or with a new type of step called a "layered least squares" (LLS) step. The algorithm returns an exact optimum after a finite number of steps--in particular, after O(n^3.5 c(A)) iterations, where c(A) is a function of the coefficient matrix. The LLS steps can be thought of as accelerating a classical path-following interior point method. One consequence of the new method is a new characterization of the central path: we show that it is composed of at most n^2 alternating straight and curved segments. If the LIP algorithm is applied to integer data, we get as another corollary a new proof of a well-known theorem by Tardos that linear programming can be solved in strongly polynomial time provided that A contains small-integer entries.

Journal ArticleDOI
TL;DR: In this paper, a simple and general algebraic technique for obtaining results in additive number theory was presented, and applied to derive various new extensions of the Cauchy-Davenport Theorem.

Patent
21 Jun 1996
TL;DR: In this article, a generalized tuning scheme that uses the correlation between changes in the input and corresponding changes in output to tune the operating point is presented, when the system reaches the desired operating point, the correlation goes to zero and the system converges.
Abstract: In any steady-state optimization problem, the output being optimized could be a nonmonotonic function of the controlled variable. Often the output is dependent on the temperature, the load impedance, and other unknown and variable quantities. Thus, it is very useful to have an automatic tuning scheme that will take the system to the desired operating point using only input and output information. The present invention is a generalized tuning scheme that uses the correlation between changes in the input and corresponding changes in the output to tune the operating point. In general terms, the present invention utilizes a correlation function between the controlled variable and a perturbed waveform. When the system reaches the desired operating point, the correlation goes to zero and the system converges. This corresponds to the point at which the derivative of the output with respect to the input is zero. This tuning scheme is appropriate for any tuning problem which has a single maximum or minimum. A variety of tuning problems in power electronics and other areas fall into this category. A tuning scheme based on correlation usually requires an excitation input. The switching action in the controlled circuit perturbs all the states and provides this excitation. Thus, this tuning scheme is appropriate for switching power applications. A preferred embodiment of the present application is used to control a power converter in a solar array application.
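
A schematic of the correlation-based tuning loop under an invented static plant with a single maximum: dither the input, correlate the output with the dither to estimate the local slope, and integrate until the correlation vanishes. Gains, dither amplitude, and the plant are illustrative only.

```python
import numpy as np

def correlation_tune(plant, u0, amp=0.05, gain=0.5, steps=200, rng=None):
    """Perturbation/correlation tuning sketch: dither the input, correlate the
    output with the dither, and integrate the correlation to move the operating
    point toward the extremum (correlation -> 0 when d(output)/d(input) = 0)."""
    rng = rng or np.random.default_rng()
    u = u0
    for _ in range(steps):
        d = amp if rng.random() < 0.5 else -amp               # square-wave style dither
        grad_est = (plant(u + d) - plant(u - d)) * d / (2 * amp ** 2)  # correlation ~ slope
        u += gain * grad_est                                   # climb toward the maximum
    return u

# Invented nonmonotonic plant with a single maximum at u = 1.7
plant = lambda u: -(u - 1.7) ** 2 + 4.0
print(correlation_tune(plant, u0=0.0, rng=np.random.default_rng(6)))   # approx. 1.7
```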

Journal ArticleDOI
TL;DR: In this paper, the cow-path problem is studied and the first randomized algorithm for the cow path problem is presented. But the algorithm is optimal for two paths (w = 2) and is not optimal for larger values of w.
Abstract: Searching for a goal is a central and extensively studied problem in computer science. In classical searching problems, the cost of a search function is simply the number of queries made to an oracle that knows the position of the goal. In many robotics problems, as well as in problems from other areas, we want to charge a cost proportional to the distance between queries (e.g., the time required to travel between two query points). With this cost function in mind, the abstract problem known as the w-lane cow-path problem was designed. There are known optimal deterministic algorithms for the cow-path problem; we give the first randomized algorithm in this paper. We show that our algorithm is optimal for two paths (w = 2) and give evidence that it is optimal for larger values of w. Subsequent to the preliminary version of this paper, Kao et al. (in "Proceedings, 5th ACM-SIAM Symposium on Discrete Algorithms," pp. 372-381, 1994) have shown that our algorithm is indeed optimal for all w ≥ 2. Our randomized algorithm gives expected performance that is almost twice as good as is possible with a deterministic algorithm. For the performance of our algorithm, we also derive the asymptotic growth with respect to w; despite similar complexity results for related problems, it appears that this growth has never been analyzed.
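
For context, the classical deterministic strategy for w = 2 doubles the excursion length and alternates sides, giving a worst-case cost of about 9 times the distance to the goal; the paper's randomized strategy improves on this in expectation. A small sketch of that deterministic baseline:

```python
def doubling_search_cost(goal_distance, goal_side):
    """Total distance walked by the classical deterministic strategy for the
    2-lane cow-path problem: explore 1, 2, 4, ... on alternating sides,
    returning to the origin after each failed excursion."""
    total, step, side = 0.0, 1.0, 0
    while True:
        if side == goal_side and step >= goal_distance:
            return total + goal_distance          # found on the way out
        total += 2 * step                         # walk out and back
        step *= 2.0
        side = 1 - side

# Worst case for doubling: the goal sits just beyond a turn-around point
d = 1024.000001
print(doubling_search_cost(d, goal_side=0) / d)   # close to the worst-case ratio 9
```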

Journal ArticleDOI
TL;DR: This paper proposes two algorithms that solve the real time trajectory generation problem for differentially flat systems with (possibly non-minimum phase) zero dynamics and proves convergence of the algorithms for a reasonable class of output trajectories.