
Showing papers on "Linear approximation published in 1996"


Proceedings ArticleDOI
14 Oct 1996
TL;DR: The authors present a linear-programming based method for finding "gadgets", i.e., combinatorial structures reducing constraints of one optimization problem to constraints of another.
Abstract: The authors present a linear-programming based method for finding "gadgets", i.e., combinatorial structures reducing constraints of one optimization problem to constraints of another. A key step in this method is a simple observation which limits the search space to a finite one. Using this new method they present a number of new, computer-constructed gadgets for several different reductions. This method also answers the question of how to prove the optimality of gadgets-they show how LP duality gives such proofs. The new gadgets improve hardness results for MAX CUT and MAX DICUT, showing that approximating these problems to within factors of 60/61 and 44/45 respectively is NP-hard (improving upon the previous hardness of 71/72 for both problems). They also use the gadgets to obtain an improved approximation algorithm for MAX 3SAT which guarantees an approximation ratio of 0.801. This improves upon the previous best bound of 0.7704.
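The LP-based gadget search can be illustrated on a toy instance. The sketch below is an illustrative reconstruction, not the authors' code: it looks for nonnegative weights on 2SAT clauses over two variables that form a gadget for a single cut (XOR) constraint, requiring satisfying assignments to hit weight exactly alpha and non-satisfying ones at most alpha - 1, and minimizing alpha.

```python
# Toy LP gadget search: reduce one XOR (cut) constraint on (x1, x2)
# to weighted 2SAT clauses over the same variables.  Illustrative only;
# the paper's gadgets also use auxiliary variables and other target problems.
from scipy.optimize import linprog

clauses = ["x1|x2", "~x1|~x2", "~x1|x2", "x1|~x2"]

def satisfies(clause, x1, x2):
    lit = {"x1": x1, "~x1": 1 - x1, "x2": x2, "~x2": 1 - x2}
    return max(lit[l] for l in clause.split("|"))

assignments = [(0, 0), (0, 1), (1, 0), (1, 1)]
sat_xor = [a for a in assignments if a[0] ^ a[1]]        # satisfy the XOR constraint
unsat_xor = [a for a in assignments if not a[0] ^ a[1]]

# Variables: clause weights w_1..w_4 >= 0 and alpha; minimize alpha.
n = len(clauses)
c = [0.0] * n + [1.0]
# Satisfying assignments must achieve weight exactly alpha.
A_eq = [[satisfies(cl, *a) for cl in clauses] + [-1.0] for a in sat_xor]
b_eq = [0.0] * len(sat_xor)
# Non-satisfying assignments must score at most alpha - 1.
A_ub = [[satisfies(cl, *a) for cl in clauses] + [-1.0] for a in unsat_xor]
b_ub = [-1.0] * len(unsat_xor)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (n + 1), method="highs")
print("optimal alpha:", res.fun)
print("clause weights:", dict(zip(clauses, res.x[:n].round(3))))
```

The LP returns the classic gadget (unit weight on the two clauses (x1 or x2) and (not x1 or not x2)), and the dual solution of the same LP certifies that no cheaper gadget of this restricted form exists.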

125 citations


Journal ArticleDOI
TL;DR: A control synthesis scheme is presented for nonlinear single-input-single-output systems which have completely unstable (antistable) zero dynamics and achieves an approximately linear input-output response and internal stability.
Abstract: A control synthesis scheme is presented for nonlinear single-input-single-output systems which have completely unstable (antistable) zero dynamics. The approach is similar in spirit to linear approaches for nonminimum phase systems and involves the derivation of an input-output linearizing controller for a suitably-defined nonlinear minimum phase approximation to the original system. The linearizing controller achieves an approximately linear input-output response and internal stability.

93 citations


Journal ArticleDOI
TL;DR: In this article, a modified maximum likelihood estimate of the scale parameter of the Rayleigh distribution is proposed, in which a hyperbolic approximation is used instead of a linear approximation for a function that appears in the maximum likelihood equation.
Abstract: For defining a Modified Maximum Likelihood Estimate of the scale parameter of the Rayleigh distribution, a hyperbolic approximation is used instead of a linear approximation for a function which appears in the Maximum Likelihood equation. This estimate is shown to perform better, in the sense of accuracy and simplicity of calculation, than the one based on the linear approximation for the same function. The estimate of the scale parameter obtained is also shown to be asymptotically unbiased. Numerical computation for random samples of different sizes from the Rayleigh distribution, using Type II censoring, is carried out and the estimate is shown to be better than that obtained by Lee et al. (1980).
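For the uncensored case the maximum likelihood estimate of the Rayleigh scale parameter has a closed form, which is a useful baseline for checking any approximation scheme. The sketch below is only an illustration of that baseline, not the authors' Type II censored estimator.

```python
# Closed-form ML estimate of the Rayleigh scale parameter from a complete
# (uncensored) sample: sigma_hat = sqrt(sum(x_i^2) / (2 n)).
# The paper's modified estimator targets Type II censored samples, which
# this simple sketch does not handle.
import numpy as np

rng = np.random.default_rng(0)
sigma_true = 2.0
x = rng.rayleigh(scale=sigma_true, size=500)

sigma_hat = np.sqrt(np.sum(x**2) / (2 * len(x)))
print(f"true sigma = {sigma_true}, ML estimate = {sigma_hat:.3f}")
```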

55 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present an interpretation method for the gravity anomaly of an arbitrary interface separating two homogeneous media, which consists essentially of a downward continuation of the observed anomaly and the division of the continued anomaly by a scale factor involving the density contrast between the media.
Abstract: We present an interpretation method for the gravity anomaly of an arbitrary interface separating two homogeneous media. It consists essentially of a downward continuation of the observed anomaly and the division of the continued anomaly by a scale factor involving the density contrast between the media. The knowledge of the interface depth at isolated points is used to estimate the depth d1 of the shallowest point of the interface, the density contrast Δρ between the two media, and the coefficients c1 and c2 of a first‐order polynomial representing a linear trend to be removed from data. The solutions are stabilized by introducing a damping parameter in the computation of the downward‐continued anomaly by the equivalent layer method. Different from other interface mapping methods using gravity data, the proposed method: (1) takes into account the presence of an undesirable linear trend in data; (2) requires just intervals for both Δρ (rather than the knowledge of its true value) and coefficients c1 and c2...
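A minimal numerical sketch of the two core steps, downward continuation of the anomaly in the wavenumber domain with damping, followed by division by the scale factor 2*pi*G*delta_rho, is given below. It uses a simple Tikhonov-damped continuation filter rather than the authors' equivalent-layer formulation, ignores the linear trend, and all parameter values are illustrative assumptions.

```python
# Sketch: damped downward continuation of a gravity anomaly profile and
# conversion to interface relief by dividing by 2*pi*G*drho.
# Tikhonov-damped wavenumber filter used here instead of the authors'
# equivalent-layer construction; values are illustrative only.
import numpy as np

G = 6.674e-11            # gravitational constant, SI
drho = 400.0             # assumed density contrast, kg/m^3
d1 = 1000.0              # assumed depth of shallowest interface point, m
mu = 1e-4                # damping parameter

# Synthetic observed anomaly on a 1-D profile (mGal -> m/s^2).
x = np.linspace(0.0, 50e3, 512)
dx = x[1] - x[0]
g_obs = 2.0 * np.exp(-((x - 25e3) / 8e3) ** 2) * 1e-5

# Wavenumber-domain downward continuation from the surface to depth d1.
k = np.abs(2 * np.pi * np.fft.fftfreq(x.size, d=dx))
up = np.exp(-k * d1)                      # upward-continuation operator
filt = up / (up**2 + mu)                  # damped inverse (downward continuation)
g_down = np.fft.ifft(np.fft.fft(g_obs) * filt).real

# Approximate interface relief via the Bouguer-type scale factor.
relief = g_down / (2 * np.pi * G * drho)
print("max estimated relief (m):", relief.max().round(1))
```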

47 citations


Journal ArticleDOI
TL;DR: In this paper, it is shown that the effective-index method does not solve the full vector-wave equation that governs the modes; instead, it solves the reduced vector-wave equation, which is accurate only for approximately linearly polarized waves, and its use of separation of variables introduces additional errors because the waveguide being analyzed is a mathematically nonseparable structure.
Abstract: The approximations involved in the effective-index method for analyzing the vector modes of rectangular-core dielectric waveguides are examined in detail. It is shown that the effective-index method does not solve the full vector-wave equation that governs the modes. Instead, it solves the reduced vector-wave equation, which is accurate only for approximately linearly polarized waves. Furthermore, in solving the reduced vector-wave equation, the method of separation of variables is used, which leads to additional errors as the waveguide being analyzed is a mathematically nonseparable structure. To characterize the performance of the effective-index method, asymptotic expressions are derived for the errors in the calculation of propagation constants. Apart from separating the effects of different approximations involved, these expressions show explicitly how the accuracy of the method depends on the mode type, the normalized frequency, the mode orders, the dimensions of the waveguide, and the relative refractive-index differences between the core and the surrounding media. With the help of these expressions, it is demonstrated that more accurate results can be obtained by combining various effective-index solutions.

47 citations


Journal ArticleDOI
TL;DR: In this paper, an error bound is proved for a piecewise linear finite element approximation, using a backward-Euler time discretization, of a model for phase separation of a multi-component alloy.
Abstract: An error bound is proved for a fully practical piecewise linear finite element approximation, using a backward-Euler time discretization, of a model for phase separation of a multi-component alloy. Numerical experiments with three components in one and two space dimensions are also presented.
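The time discretization used in such schemes is easiest to see on a much simpler problem. The sketch below applies a backward-Euler, piecewise-linear finite element discretization to 1-D linear diffusion; it only illustrates the ingredients (mass and stiffness matrices, implicit time stepping), not the multi-component phase-separation model analysed in the paper.

```python
# Backward-Euler + piecewise-linear FEM for u_t = u_xx on (0,1) with u = 0 at
# the boundary.  Illustrates the discretization ingredients only; the paper
# treats a multi-component Cahn-Hilliard (phase separation) system.
import numpy as np

n_el, dt, n_steps = 50, 1e-3, 100
h = 1.0 / n_el
n_in = n_el - 1                       # interior nodes

# 1-D P1 mass and stiffness matrices (interior nodes, homogeneous Dirichlet BCs).
M = (h / 6.0) * (4 * np.eye(n_in) + np.eye(n_in, k=1) + np.eye(n_in, k=-1))
K = (1.0 / h) * (2 * np.eye(n_in) - np.eye(n_in, k=1) - np.eye(n_in, k=-1))

x = np.linspace(h, 1 - h, n_in)
u = np.sin(np.pi * x)                 # initial condition

A = M + dt * K                        # backward-Euler system matrix
for _ in range(n_steps):
    u = np.linalg.solve(A, M @ u)     # (M + dt K) u^{n+1} = M u^n

exact = np.exp(-np.pi**2 * dt * n_steps) * np.sin(np.pi * x)
print("max error vs exact decay:", np.abs(u - exact).max())
```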

39 citations


Journal ArticleDOI
TL;DR: In this article, the calculation of the second harmonic sound field of an axial-symmetric source is reduced to a simple linear combination of a set of exponential integral functions, instead of a complicated triple integral.
Abstract: In a linear approximation, an arbitrary axial-symmetric source can be expressed as the linear superposition of a set of Gaussian sources, and the corresponding radiated sound field can be represented as the superposition of this set of Gaussian beams. The analysis is extended to the second harmonic generation due to nonlinear effects under a quasilinear approximation; the second harmonic sound field can then be considered as a sum of self- and cross-interaction terms produced by a series of Gaussian beams. Therefore, the calculation of the second harmonic sound field of an axial-symmetric source can be reduced to a simple linear combination of a set of exponential integral functions, instead of a complicated triple integral. In order to verify this calculation approach, the second harmonic generation of a focused and a simple piston source is examined, and the result is in good agreement with that obtained in an earlier paper by more complicated computation.
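The first step, expressing an axisymmetric source as a linear superposition of Gaussian functions, is a plain least-squares problem. The sketch below fits such an expansion to a uniform circular piston aperture; the expansion size and width parameters are illustrative assumptions, not the coefficient set used in the paper.

```python
# Fit a uniform circular piston aperture u(r) = 1 for r <= a as a linear
# superposition of Gaussians exp(-B_n r^2 / a^2) by least squares.
# Illustrative only; the paper's Gaussian-beam expansion coefficients differ.
import numpy as np

a = 1.0                                    # piston radius (normalized)
r = np.linspace(0.0, 2.0 * a, 400)
target = (r <= a).astype(float)            # top-hat aperture function

# Assumed set of Gaussian "widths" B_n (purely illustrative choice).
B = np.array([1.0, 3.0, 9.0, 27.0, 81.0])
basis = np.exp(-np.outer(r**2 / a**2, B))  # shape (len(r), len(B))

coeffs, *_ = np.linalg.lstsq(basis, target, rcond=None)
approx = basis @ coeffs

print("fitted coefficients A_n:", coeffs.round(3))
print("max absolute fit error:", np.abs(approx - target).max().round(3))
```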

32 citations


Journal ArticleDOI
TL;DR: In this paper, a method for computing the microscopic internal displacement fields associated with permanent deformations of 3D asphalt-aggregate cores with complex internal structure and satisfying the small gradient approximation of continuum mechanics is presented.
Abstract: Although existing nondestructive evaluation methods and transducers provide useful quantitative information on composite materials, they measure only macroscopic deformations and, often, only after the strain has exceeded a certain predetermined threshold. It is well established, however, that maintenance intervention is more effective if applied early in the deformation process. A new method for computing the microscopic internal displacement fields associated with permanent deformations of three-dimensional asphalt-aggregate cores with complex internal structure and satisfying the small gradient approximation of continuum mechanics is presented. The displacement fields are computed from a sequence of three-dimensional X-ray computed tomography images, obtained using a new imaging protocol developed specifically for mass-fraction and mix-density estimates of composite cores. By assuming that the image intensity of the tomographic images represents a certain conserved property that is incompressible, a constrained nonlinear regression model for motion estimation is developed. Successive linear approximation is then employed and each linear subsidiary problem is solved using variational calculus. The resulting Euler-Lagrange equations are approximated and solved using finite differencing methods and a conjugate gradient algorithm in a multiresolution framework. The method is validated using pairs of synthetic images of plane shear flow. The three-dimensional displacement field in the interior of a cylindrical asphalt/aggregate core loaded to a state of permanent deformation is calculated using this method.

31 citations


Journal ArticleDOI
TL;DR: In this article, a modification of the two-step optimal filter is presented that improves the estimate error by splitting the cost function minimization into two steps (a linear first step and a nonlinear second step), defining a set of first-step states that are nonlinear combinations of the desired states.
Abstract: A modification of the two-step optimal filter is presented. The two-step filter is an alternative to the standard recursive estimators that are applied to nonlinear measurement problems, such as the extended and iterated extended Kalman filters. It improves the estimate error by splitting the cost function minimization into two steps (a linear first step and a nonlinear second step) by defining a set of first-step states that are nonlinear combinations of the desired states. A linear approximation is made in the time update of the first-step states rather than in the measurement update as in conventional methods. An extension of that approximation of the time update by including higher-order terms in the state estimate error is presented. Previous work used a first-order expansion of the nonlinear function relating first- and second-step states to find the time update of the first-step states. Terms to third order in the estimate error are retained here, resulting in a time update keeping second-order corrections in both the state estimate error and the covariance. The result is an estimate with lower bias and lower mean square error. A square root implementation of the two-step filter algorithm is also presented that improves the robustness and accuracy of the filter. Performance is verified using a radar ranging example.
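For comparison, the conventional approach that the two-step filter improves on linearizes the nonlinear measurement in the measurement update. The sketch below shows a standard extended-Kalman-filter update for a radar range measurement; it is a generic illustration under assumed values, not the two-step algorithm of the paper.

```python
# Standard EKF measurement update for a radar range measurement
# h(x) = sqrt(px^2 + py^2), linearized about the prior estimate.
# Generic illustration only; the paper's two-step filter instead moves the
# linear approximation into the time update of transformed first-step states.
import numpy as np

# Prior state [px, py] and covariance (assumed values).
x = np.array([1000.0, 2000.0])
P = np.diag([50.0**2, 50.0**2])
R = np.array([[10.0**2]])               # range measurement noise variance

def h(x):
    return np.array([np.hypot(x[0], x[1])])

def H_jac(x):
    r = np.hypot(x[0], x[1])
    return np.array([[x[0] / r, x[1] / r]])   # Jacobian of h at x

z = np.array([2260.0])                  # measured range (assumed)

H = H_jac(x)
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
x_post = x + K @ (z - h(x))
P_post = (np.eye(2) - K @ H) @ P

print("posterior state:", x_post.round(1))
print("posterior std devs:", np.sqrt(np.diag(P_post)).round(1))
```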

25 citations


Patent
Michael R. Praiswater1
30 Dec 1996
TL;DR: In this paper, an amplifier which outputs a nonlinear function in response to a linear input is presented, where the nonlinear response is a piece-wise linear approximation, and the circuit includes an op amp that outputs a ramping voltage and a series of stages which change the slope of the voltage.
Abstract: An amplifier which outputs a nonlinear function in response to a linear input. The nonlinear response is a piece-wise linear approximation. The circuit includes an op amp which outputs a ramping voltage and a series of stages which change the slope of the ramping voltage. As the output of the op amp reaches a particular breakpoint, an additional stage of the circuit is activated so as to change the slope of the output. The new line segment has a new slope such that the combination of all these stages approximates a nonlinear response.
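The breakpoint idea can be illustrated in software: pick breakpoints along the input range, and between breakpoints the output follows a straight segment whose slope changes at each breakpoint, just as each additional amplifier stage changes the output slope. The sketch below approximates an assumed square-law target this way; the target function and breakpoints are illustrative, not taken from the patent.

```python
# Piece-wise linear approximation of a nonlinear response by breakpoints:
# each segment has its own slope, mimicking the patent's stages that switch
# in at successive breakpoints.  Target function and breakpoints are
# illustrative assumptions, not values from the patent.
import numpy as np

def target(v):
    return v**2                             # assumed nonlinear response to approximate

breakpoints = np.linspace(0.0, 1.0, 6)      # input voltages where the slope changes
levels = target(breakpoints)                # exact values at the breakpoints

v = np.linspace(0.0, 1.0, 201)
approx = np.interp(v, breakpoints, levels)  # linear interpolation = PWL output

print("max approximation error:", np.abs(approx - target(v)).max().round(4))
```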

23 citations


Journal ArticleDOI
TL;DR: In this article, the authors present an iterative method to obtain localized Wannier functions, needed in the framework of correlation energy calculations on polymers with different size-consistent methods using a localized orbital basis.
Abstract: We present an iterative method to obtain localized Wannier functions, needed in the framework of correlation energy calculations on polymers with different size-consistent methods using a localized orbital basis. Test calculations using different possible localization schemes are performed on alternating all-trans polyacetylene (t-PA), which is an example for polymers with covalently bound unit cells. The improvement of the localization is compared, with respect to the total correlation energy per unit cell, at the level of second order orbital invariant Moller-Plesset perturbation theory (LMP2) against the canonical MP2 (CMP2) method, and results of the calculation of the correlation energy with the coupled cluster doubles theory (CCD) and its linear approximation (LCCD) are also shown. We found that the coupled cluster expansions failed to converge for systems containing the Wannier functions belonging to two interacting unit cells if their interactions are too large (in the case of a double zeta basis set and optimally localized Wannier functions). This is probably due to linear dependences in the systems of equations for such a highly symmetric system. Such behavior can be made plausible with the help of a very simple model. Possibilities to overcome this problem are discussed. However, since in this work we are mainly concerned with the localization properties of Wannier functions in correlation calculations, we concentrate on comparisons of the correlation energy obtained with our localized orbital approximation to the energies as computed in the corresponding canonical orbital basis. Since the latter are available only for MP2, we concentrate in the present paper on this method, which can be viewed as a second order approximation to the coupled cluster expansion for double excitations. A comparison of the influence of the localization approximation on the correlation energy obtained with the corresponding canonical procedure is made for Clementi's minimal and double zeta basis sets at the MP2 level and, in addition, the localized Wannier functions of larger systems and the effects of the localized orbital approximation on a potential curve for t-PA are discussed.

Journal ArticleDOI
TL;DR: In this paper, the massive nonsymmetric gravitational theory is shown to possess a linearisation instability at purely GR field configurations, disallowing the use of the linear approximation in these situations, and arbitrarily small antisymmetric sector Cauchy data leads to singular evolution unless an ad hoc condition is imposed on the initial data hypersurface.
Abstract: The massive nonsymmetric gravitational theory is shown to possess a linearisation instability at purely GR field configurations, disallowing the use of the linear approximation in these situations. It is also shown that arbitrarily small antisymmetric sector Cauchy data leads to singular evolution unless an ad hoc condition is imposed on the initial data hypersurface.

Journal ArticleDOI
TL;DR: Two methods for piecewise linear approximation of freeform surfaces are presented, one scheme exploits an intermediate bilinear approximation and the other employs global curvature bounds.
Abstract: Two methods for piecewise linear approximation of freeform surfaces are presented. One scheme exploits an intermediate bilinear approximation and the other employs global curvature bounds. Both methods attempt to adaptively create piecewise linear approximations of the surfaces, employing the maximum norm.

Journal ArticleDOI
TL;DR: In an attempt to gain insight into the design of linear detectors for additive white noise channels (discrete-time case), several procedures are described, both optimal and suboptimal.
Abstract: In an attempt to gain insight into the design of linear detectors for additive white noise channels (discrete-time case), we describe several procedures, both optimal and suboptimal. Using the Wiener representation for nonlinear systems, we derive an ad hoc suboptimal design procedure. Exact designs are found when the noise amplitude's probability distribution is stable and when the noise is Laplacian. Considering all the linear detectors thus derived, no general form for the optimal linear detector's unit-sample response becomes apparent. Performance analyses and simulations indicate substantial performance losses occur when linear detectors are used instead of optimal (likelihood ratio) ones.

Proceedings ArticleDOI
22 Apr 1996
TL;DR: In this paper, four commonly used gravity models are compared and their fundamental differences are summarized, and an introduction to the concepts of gravity potential and normal gravity potential is given.
Abstract: Inertial navigation systems must utilize a valid gravity model in order to accurately mechanize the navigation equations. Four commonly used gravity models are compared and their fundamental differences are summarized in this paper. The four gravity models (listed with increasing accuracy and/or validity) are as follows. 1. The low altitude gravity model is an approximation to the normal gravity vector. 2. The J2 gravity model uses an approximation to the normal gravitational potential (an infinite series of spherical harmonics) and generates the gravity vector in the Earth-Centered-Earth-Fixed (ECEF) frame; these components are transformed to the local-level navigation frame. 3. The normal gravity model generates a gravity vector normal to the reference ellipsoid on the ellipsoidal surface by definition; above the ellipsoid, the gravity vector has a nonzero north component. 4. The general gravity approximation model uses a multiple-term approximation to the true gravitational potential (a double sum of infinite terms); all three components are nonzero at or above the ellipsoidal surface, and the components depend upon longitude. This paper gives an introduction to the concept of gravity potential and normal gravity potential. Gravity vector direction comparisons for these four gravity models are made at all latitudes, longitudes, and altitudes. Coordinate dependency of the gravity components is also examined.
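As an illustration of the second model in the list, the sketch below evaluates a J2-truncated gravitational acceleration in the ECEF frame from the standard spherical-harmonic expansion (centrifugal and higher-order terms omitted). The constants are rounded WGS-84-like values used only for illustration.

```python
# J2-truncated gravitational acceleration in the ECEF frame (no centrifugal
# term, no higher harmonics).  Constants are rounded, illustrative values.
import numpy as np

MU = 3.986004418e14        # Earth's gravitational parameter, m^3/s^2
RE = 6378137.0             # equatorial radius, m
J2 = 1.0826e-3             # second zonal harmonic

def gravity_j2(pos_ecef):
    x, y, z = pos_ecef
    r = np.linalg.norm(pos_ecef)
    zr2 = (z / r) ** 2
    k = 1.5 * J2 * (RE / r) ** 2
    ax = -MU * x / r**3 * (1.0 + k * (1.0 - 5.0 * zr2))
    ay = -MU * y / r**3 * (1.0 + k * (1.0 - 5.0 * zr2))
    az = -MU * z / r**3 * (1.0 + k * (3.0 - 5.0 * zr2))
    return np.array([ax, ay, az])

# Example: a point on the equator and a point near 45 deg geocentric latitude.
print(gravity_j2(np.array([RE, 0.0, 0.0])))
print(gravity_j2(np.array([RE * 0.7071, 0.0, RE * 0.7071])))
```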

Journal ArticleDOI
TL;DR: In this article, a simple visual servoing method based on linear approximation of the inverse kinematics is proposed, which is robust to calibration error because it uses neither camera angles nor joint angles.
Abstract: We propose a simple visual servoing method based on a linear approximation of the inverse kinematics. When we use a hand-eye system which has a similar structure to a human being, we can approximate the transformation from the binocular visual space to the joint space of the manipulator as a linear function. This relationship makes it possible to produce the desired joint angles from the image data using a constant linear function instead of the variable nonlinear image Jacobian and robot Jacobian. This method is robust to calibration error, because it uses neither camera angles nor joint angles. We show some experimental results which demonstrate the effectiveness of this method.
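The core of the method, replacing the image and robot Jacobians with one constant linear map from binocular image features to joint angles, can be sketched as a least-squares fit followed by a proportional servo step. The feature and joint data below are random stand-ins, purely to show the structure, not data from the paper's hand-eye experiments.

```python
# Sketch of visual servoing with a constant linear approximation of the
# inverse kinematics: fit q ~ A f + b from recorded (feature, joint) pairs,
# then command joints from desired image features.
import numpy as np

rng = np.random.default_rng(1)

# Recorded training pairs: binocular image features f (4-D) and joint angles q (3-D).
F = rng.uniform(-1, 1, size=(50, 4))
true_A = rng.normal(size=(3, 4))
Q = F @ true_A.T + 0.05 + 0.01 * rng.normal(size=(50, 3))   # noisy "robot" data

# Least-squares fit of the affine map q = A f + b.
F_aug = np.hstack([F, np.ones((50, 1))])
coef, *_ = np.linalg.lstsq(F_aug, Q, rcond=None)
A_hat, b_hat = coef[:4].T, coef[4]

# Servo step: move a fraction of the way toward the joints predicted for the
# desired feature vector (no camera angles or joint-angle feedback needed).
f_desired = np.array([0.2, -0.3, 0.1, 0.4])
q_current = np.zeros(3)
gain = 0.5
q_command = q_current + gain * (A_hat @ f_desired + b_hat - q_current)
print("commanded joint angles:", q_command.round(3))
```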

Journal ArticleDOI
Eric M. Schwarz1, Michael J. Flynn
TL;DR: This paper derives a PPA that produces, in the worst case, a 16-bit approximation to the square root operation; its implementation utilizes an existing 53-bit multiplier design, requires approximately 1,000 dedicated logic gates of function plus additional repowering circuits, and has a latency of one multiplication.
Abstract: Quadratically converging algorithms for high-order arithmetic operations typically are accelerated by a starting approximation. The higher the precision of the starting approximation, the fewer iterations are required for convergence. Traditional methods have used look-up tables or polynomial approximations, or a combination of the two called piecewise linear approximations. This paper provides a revision and major extension to our study (1993) proposing a nontraditional method for reusing the hardware of a multiplier. An approximation is described in the form of a partial product array (PPA) composed of Boolean elements. The Boolean elements are chosen such that their sum is a high-precision approximation to a high-order arithmetic operation such as square root, reciprocal, division, logarithm, exponential, and trigonometric functions. This paper derives a PPA that produces in the worst case a 16-bit approximation to the square root operation. The implementation of the PPA utilizes an existing 53-bit multiplier design, requires approximately 1,000 dedicated logic gates of function plus additional repowering circuits, and has a latency of one multiplication.
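The role of the starting approximation is easy to demonstrate: with a quadratically converging Newton-Raphson iteration for the reciprocal square root, the accuracy roughly doubles in bits per iteration, so a ~16-bit seed reaches double precision in about two iterations. The sketch below uses a crude software seed rather than the paper's partial-product-array hardware approximation.

```python
# Quadratically converging Newton-Raphson iteration for 1/sqrt(a), seeded by
# a low-precision starting approximation.  The seed here is a crude software
# guess; the paper's contribution is generating a ~16-bit seed with a partial
# product array on an existing multiplier.
import math

def rsqrt(a, seed_bits=16, iters=2):
    # Crude seed: truncate 1/sqrt(a) to about `seed_bits` significant bits.
    y = 1.0 / math.sqrt(a)
    scale = 2.0 ** (seed_bits - math.floor(math.log2(y)) - 1)
    y = math.floor(y * scale) / scale          # low-precision starting value
    for _ in range(iters):
        y = y * (1.5 - 0.5 * a * y * y)        # Newton step: error roughly squares
    return y

a = 7.25
approx = rsqrt(a)
print("error after 2 iterations:", abs(approx - 1.0 / math.sqrt(a)))
```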


Journal ArticleDOI
TL;DR: In this article, a 2D full-waveform inversion of double-couple earthquake sources is implemented, where the source parameters solved for are the spatial location, origin time, amplitude and orientation of each doublecouple and the velocity and density distribution and source time function are assumed to be known a priori but may be arbitrarily complicated.
Abstract: 2-D full-waveform inversion of double-couple earthquake sources is implemented. Temporally and spatially extended sources are represented by superposition of double-couples. The source parameters solved for are the spatial location, origin time, amplitude and orientation of each double-couple. The velocity and density distribution and source time function are assumed to be known a priori but may be arbitrarily complicated. The non-linear inverse problem is solved by iterative linear approximation. The Jacobian matrix elements for source depth and rupture angle are computed by wavefield extrapolation forward in time, while those for origin time and amplitude are computed analytically. A smoothing technique that results in faster convergence and avoids local minima associated with cycle skipping is applied at each iteration. A spatial sampling interval, between discrete sources, of one-quarter wavelength of the dominant shear wave is optimal for inversion if high uniqueness of the result is desired. The presence of a fault is inferred from the spatial continuity of the rupture solution, rather than being imposed a priori. The method is illustrated by successful application to three synthetic source models: a single double-couple, a single extended rupture and a double extended rupture. The resolutions of the source depth and origin time are higher, and their posterior covariances are lower, than those of the amplitude and rupture angle at each source point. Source depth, origin time and amplitude are primarily determined by the data; the rupture angle is more strongly influenced by the a priori information.
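The phrase "iterative linear approximation" here refers to the usual Gauss-Newton treatment of a nonlinear least-squares inverse problem: linearize the forward model about the current parameters via the Jacobian, solve the linearized system, update, and repeat. The generic sketch below uses a toy two-parameter forward model; it stands in for, and is far simpler than, the waveform modelling and wavefield-extrapolation Jacobians of the paper.

```python
# Generic Gauss-Newton iteration ("iterative linear approximation") for a
# nonlinear least-squares inverse problem.  Toy exponential forward model in
# place of the paper's waveform modelling.
import numpy as np

t = np.linspace(0.0, 1.0, 30)
m_true = np.array([2.0, 3.0])                  # true (amplitude, decay)

def forward(m):
    return m[0] * np.exp(-m[1] * t)

def jacobian(m):
    return np.column_stack([np.exp(-m[1] * t),
                            -m[0] * t * np.exp(-m[1] * t)])

rng = np.random.default_rng(2)
d_obs = forward(m_true) + 0.01 * rng.normal(size=t.size)

m = np.array([1.0, 1.0])                       # starting model
for it in range(8):
    r = d_obs - forward(m)                     # data residual
    J = jacobian(m)                            # linearization about the current model
    dm, *_ = np.linalg.lstsq(J, r, rcond=None) # solve the linearized problem
    m = m + dm
print("recovered parameters:", m.round(3))
```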

Journal ArticleDOI
TL;DR: A DCOC iterative algorithm using reciprocal linear approximation of the displacement constraints is presented and it is shown by numerical examples that the proposed algorithm considerably improves the iterative performance.

01 Jan 1996
TL;DR: The encouraging results presented in this paper show that it is possible to optimize simulation models both successfully and with tolerable computational effort.
Abstract: In recent years optimization of simulation models has become a very important application field of direct optimization strategies. The search process of these iterative strategies is only based on cost function values and does not require any additional analytical information like gradients etc. Today the most common direct methods for global optimization are Genetic Algorithms, Evolution Strategies, and Simulated Annealing. All these methods apply sophisticated probabilistic search operators which imitate principles of nature. Although these operators have been proven to be well-suited for global search, the required computational effort (number of required cost function evaluations) still remains a big problem. In this paper we focus on acceleration methods for direct global optimization strategies. Our approach is based on cost function approximation. As approximation techniques we use a simple grid-based method and RecBFNs (Rectangular Basis Function Networks), a special kind of neural network. The methods we have developed have been applied successfully to model optimization as well as to a selection of mathematical test problems. The encouraging results presented in this paper show that it is possible to optimize simulation models both successfully and with tolerable computational effort.

Journal ArticleDOI
TL;DR: A unified approach to local optimality, robustness, and Bayesian estimation theory concepts in deriving Kalman filtering equations in the case of non-Gaussian observation noise is presented.

Journal ArticleDOI
TL;DR: In this paper, a new method for the approximation of multi-real rational functions via linear programming is presented; it is based on the minimization of a suitable criterion derived from the well-known minimax criterion.

Proceedings ArticleDOI
11 Dec 1996
TL;DR: In this paper, a Lyapunov function approach is used to estimate a positively invariant subset of the region of attraction within which a saturated linear feedback system does not encounter control constraints; the problem consists in choosing the class of Lyapunov functions that provides the complete solution to approximating this subset.
Abstract: Linear control systems closed by saturated linear feedback are widely known in control theory. When studying the region of attraction of such a system, the problem arises of estimating some positively invariant subset of it within which the closed-loop system does not encounter control constraints. The problem under consideration is characteristic for a Lyapunov function approach; it consists in choosing the class of Lyapunov functions providing the complete solution for the problem of approximating this subset.
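For a quadratic Lyapunov function V(x) = x'Px the construction is explicit: the largest level set {V <= c} contained in the unsaturated region |Kx| <= u_max has c = u_max^2 / (K P^{-1} K'). The sketch below computes such an estimate for an assumed second-order example; the system, gain, and saturation level are illustrative values, not taken from the paper.

```python
# Estimate an invariant ellipsoid {x : x'Px <= c} inside the unsaturated
# region |Kx| <= u_max for a linear system closed by saturated feedback.
# System, gain, and saturation level are assumed illustrative values.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0],
              [-1.0, -0.5]])
K = np.array([[2.0, 1.0]])           # state feedback u = -Kx
u_max = 1.0                          # saturation level

Acl = A - np.array([[0.0], [1.0]]) @ K           # closed loop with B = [0, 1]'
P = solve_continuous_lyapunov(Acl.T, -np.eye(2)) # Acl' P + P Acl = -I

# Largest c with max_{x'Px <= c} |Kx| <= u_max:  c = u_max^2 / (K P^{-1} K').
c = float(u_max**2 / (K @ np.linalg.inv(P) @ K.T))
print("Lyapunov matrix P:\n", P.round(3))
print("invariant level set: x' P x <=", round(c, 4))
```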

01 Oct 1996
TL;DR: In this paper, the ADIFOR automatic differentiation tool is applied to a 3D storm-scale meteorological model to generate a sensitivity-enhanced code capable of providing derivatives of all model output variables and related diagnostic (derived) parameters as a function of specified control parameters.
Abstract: The ADIFOR automatic differentiation tool is applied to a 3-D storm-scale meteorological model to generate a sensitivity-enhanced code capable of providing derivatives of all model output variables and related diagnostic (derived) parameters as a function of specified control parameters. The tangent linear approximation, applied here to a deep convective storm for the first time using a full-physics compressible model, is valid up to 50 min for a 1% water vapor perturbation. The result is very encouraging considering the highly nonlinear and discontinuous properties of the solutions. The ADIFOR-generated code has provided valuable sensitivity information on storm dynamics. In particular, it is very efficient and useful for investigating how a perturbation inserted at an earlier time propagates through the model variables at later times. However, it is computationally too expensive to apply to variational data assimilation, especially for 3-D meteorological models, which potentially have a large number of input variables.
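The validity check described in the abstract amounts to comparing the tangent-linear (derivative-propagated) perturbation with the difference of two nonlinear model runs for a small input perturbation. The sketch below does this for a toy nonlinear model, using a finite-difference directional derivative in place of an ADIFOR-generated derivative code; everything about the model is an illustrative assumption.

```python
# Check a tangent linear approximation: compare M(x0 + dx) - M(x0) with the
# derivative-propagated (tangent linear) perturbation for a small dx.
# A toy Lorenz-63 model and a finite-difference directional derivative stand
# in for the 3-D storm model and its ADIFOR-generated derivative code.
import numpy as np

def step(x, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    dx = np.array([sigma * (x[1] - x[0]),
                   x[0] * (rho - x[2]) - x[1],
                   x[0] * x[1] - beta * x[2]])
    return x + dt * dx                      # forward Euler step

def model(x0, n_steps=200):
    x = x0.copy()
    for _ in range(n_steps):
        x = step(x)
    return x

x0 = np.array([1.0, 1.0, 1.0])
dx0 = np.array([0.01, 0.0, 0.0])            # ~1% perturbation of one variable

nonlinear_diff = model(x0 + dx0) - model(x0)
eps = 1e-6                                   # finite-difference tangent linear proxy
tangent_linear = (model(x0 + eps * dx0) - model(x0)) / eps

print("nonlinear perturbation: ", nonlinear_diff)
print("tangent linear estimate:", tangent_linear)
```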

Proceedings ArticleDOI
Ali Abur1
12 May 1996
TL;DR: In this article, a linear programming-based method for the optimization of the radial electric power distribution networks is proposed, where the radiality constraint is enforced implicitly by modifying the well known simplex method in order to avoid the creation of topologies that contain any loops.
Abstract: This paper describes a linear programming based method by which the radial electric power distribution networks can be optimally reconfigured. The optimization problem is formulated as a linear network flow problem where the resulting network is required to remain radial. The radiality constraint is enforced implicitly by modifying the well known simplex method in order to avoid the creation of topologies that contain any loops during the simplex iterations. Numerical examples are given to illustrate the proposed method.

Proceedings ArticleDOI
22 Apr 1996
TL;DR: Algorithms that allow an intelligent system to dynamically convert between two representations of spatial occupancy, namely, certainty grids and object boundary curves, are presented and their excellent performance is demonstrated against similar algorithms.
Abstract: We present algorithms that allow an intelligent system to dynamically convert between two representations of spatial occupancy, namely, certainty grids and object boundary curves. These algorithms can be used to accomplish many real-time tasks of a mobile robot such as mapping, navigation, object recognition, and robot localization. For conversion from certainty grid to object boundaries, an edge linking algorithm is appropriately modified. Certainty grid 'images' are transformed into a set of object boundary curves. The latter are expressed as oriented piecewise linear segments. Image processing techniques, such as edge detection, thinning, curve tracing, and linear approximation are employed with various modifications. Modifications include a new method for linear curve approximation that is simple, accurate, and efficient. This method monitors chord and arc length and its excellent performance is demonstrated against similar algorithms. The certainty grid to object boundary algorithm is tested against simulated noisy certainty grid maps. An algorithm to do the inverse operation, namely, to convert object boundary curves to an occupancy grid, is also presented.
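The chord-and-arc-length criterion mentioned above can be sketched compactly: walk along a traced boundary curve, accumulate arc length, and close the current linear segment whenever the arc length exceeds the chord to the segment start by a tolerance. The curve and tolerance below are illustrative, and the paper's further modifications are not reproduced.

```python
# Linear approximation of a traced curve by monitoring chord vs. arc length:
# start a new segment whenever accumulated arc length exceeds the chord to the
# segment start by more than a tolerance.  Curve and tolerance are illustrative.
import numpy as np

t = np.linspace(0, np.pi, 200)
curve = np.column_stack([np.cos(t), np.sin(t)])   # synthetic traced boundary

def pwl_approx(points, tol=0.02):
    vertices = [points[0]]
    start, arc = 0, 0.0
    for i in range(1, len(points)):
        arc += np.linalg.norm(points[i] - points[i - 1])   # arc length so far
        chord = np.linalg.norm(points[i] - points[start])  # straight-line chord
        if arc - chord > tol:                              # curve bends too much
            vertices.append(points[i - 1])
            start, arc = i - 1, np.linalg.norm(points[i] - points[i - 1])
    vertices.append(points[-1])
    return np.array(vertices)

segments = pwl_approx(curve)
print("number of linear segments:", len(segments) - 1)
```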

Journal ArticleDOI
Thoddi C.T. Kotiah1
TL;DR: The best linear approximation of a function f(x) near a point c is the tangent line at (c, f(c)); a fairly strong competitor is the least squares linear approximation of the function over an interval containing c.
Abstract: The best linear approximation of a function f(x) near a point c is the tangent line at (c, f(c)). In spite of the well-known measure of the error in the approximation in elementary calculus, it is not clear for how wide an interval about c the tangent line is the best. A fairly strong competitor is the least squares linear approximation of the function that can be defined over any interval (β, α) containing c. General formulae are derived for the slope and y-intercept of the least squares line. A comparison is made with the tangent line approximations in the special cases of the sine function and the cube root function for selected values of c, β and α. Numerical results indicate that the least squares approximation is generally better if x is relatively far from c. Values of c are subject to the common restriction that f(c) must be readily computed. It is shown that, for given x, it is not always possible to find an appropriate c such that the resulting tangent line approximation is guaranteed to be superior to t...
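The comparison is easy to reproduce numerically: fit the least-squares line of f over (β, α), approximated here by a dense sample, and compare its error with the tangent line at c for x away from c. The sketch below does this for f(x) = sin x; the interval and c are illustrative choices of the kind of selected values the paper considers.

```python
# Compare the tangent line at c with the least-squares linear approximation of
# f over an interval (beta, alpha) containing c, for f(x) = sin(x).
# Interval and c are illustrative choices.
import numpy as np

f, fprime = np.sin, np.cos
c, beta, alpha = 0.5, 0.0, 1.5

x = np.linspace(beta, alpha, 2001)           # dense sample approximates the
w = f(x)                                     # continuous least-squares fit

# Tangent line at (c, f(c)).
tangent = f(c) + fprime(c) * (x - c)

# Least-squares line over (beta, alpha).
slope, intercept = np.polyfit(x, w, 1)
least_sq = slope * x + intercept

for xv in (0.6, 1.0, 1.4):                   # near c, then progressively farther away
    i = np.argmin(np.abs(x - xv))
    print(f"x = {xv}: |tangent err| = {abs(tangent[i] - w[i]):.4f}, "
          f"|LS err| = {abs(least_sq[i] - w[i]):.4f}")
```

Running it shows the tangent line winning near c and the least-squares line winning toward the ends of the interval, which is the paper's qualitative conclusion.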

Journal ArticleDOI
TL;DR: In this paper, extinction efficiency expressions for a homogeneous sphere in the S approximation and in the Hart-Montroll approximation are related through a refractive-index-dependent multiplicative factor.
Abstract: Extinction efficiency expressions for a homogeneous sphere in the S approximation and in the Hart–Montroll approximation are shown to be related through a refractive-index-dependent multiplicative factor.

Proceedings ArticleDOI
11 Dec 1996
TL;DR: In this article, a bilinear approximation technique for identification of a class of nonlinear dynamical models is presented, and it is shown that the natural space of input-output models is that of causal Fock functionals.
Abstract: A bilinear approximation technique for identification of a class of nonlinear dynamical models is presented, It is shown that for bilinear systems with a stable drift term the natural space of input-output models is that of causal Fock functionals. The functional spline interpolators hence obtained are found to have time-varying but asymptotically stable and decoupled bilinear realizations. Motivation is given by the need for systems order reduction and input-output analysis of Carleman bilinearization, with application to the attitude dynamics of a rigid spacecraft.