
Showing papers on "Linear approximation published in 1995"


Book
01 Jan 1995
TL;DR: In this paper, the authors consider the regularity of scaling functions and wavelets, and propose a sub-division algorithm to estimate the Lp-Sobolev exponent.
Abstract: Multi-resolution analysis: The continuous point of view The discrete point of view The multivariate case. Wavelets and conjugate quadrature filters: The general case The finite case Wavelets with compact support Action of the FWT on oscillating signals. The regularity of scaling functions and wavelets: Regularity and oscillation The sub-division algorithms Spectral estimates of the regularity Estimates of the Lp-Sobolev exponent Applications. Biorthogonal wavelet bases: General principles of sub-band coding Unconditional biorthogonal wavelet bases Dual filters and biorthogonal Riesz bases Examples and applications. Stochastic processes: Linear approximation Linear approximation of images Approximation and compression of real images Piecewise stationary processes Non-linear approximation. Appendices: Quasi-analytic wavelet bases Multivariate constructions Multiscale unconditioned bases Notation.

223 citations


BookDOI
31 Aug 1995
TL;DR: This book covers classical approximation, splines, sinc approximation and explicit sinc-like methods, moment problems, n-widths and s-numbers, and optimal approximation methods, together with their applications.
Abstract: 1. Classical Approximation 2. Splines 3. Sinc Approximation 4. Explicit Sinc-Like Methods 5. Moment Problems 6. n-widths and s-numbers 7. Optimal Approximation Methods 8. Applications Index

62 citations


Journal ArticleDOI
TL;DR: In this paper, a Fourier analysis approach is taken to investigate the approximation order of scaled versions of certain linear operators into shift-invariant subspaces of L2(Rd).
Abstract: A Fourier analysis approach is taken to investigate the approximation order of scaled versions of certain linear operators into shift-invariant subspaces of L2(Rd). Quasi-interpolants and cardinal interpolants are special operators of this type, and we give a complete characterization of the order in terms of some type of ellipticity condition for a related function. We apply these results by showing that the L2-approximation order of a closed shift-invariant subspace can often be realized by such an operator.

57 citations


Journal ArticleDOI
TL;DR: In this paper, the sensitivity of ensemble predictions to the amplitude of the optimal perturbations is analysed; increasing the root-mean-square amplitude of the initial perturbations is found to give a more realistic ensemble spread.
Abstract: Certain characteristics of the perturbations which grow most rapidly over a finite time interval in a primitive-equation atmospheric model are discussed. They are the singular vectors of a linear approximation of the European Centre for Medium-Range Weather Forecasts primitive-equation model. They are computed using the adjoint technique at horizontal spectral truncation T21 with 19 vertical levels. Linear combinations of singular vectors, named optimal perturbations, can be used in ensemble prediction to generate the initial conditions of perturbed integrations. Firstly, having specified the initial amplitude to be comparable with the amplitude of analysis-error estimates, the nonlinear time evolution of optimal perturbations when added to the control initial conditions is studied. In particular, estimates are made of the time limit, TNL, after which nonlinear processes cannot be neglected. Considering optimal perturbations generated using singular vectors with maximum growth over a 36-hour time interval, and characterized by amplitudes comparable with analysis-error estimates, two different methods estimate TNL ≈ 2-2.5 days. Secondly, the sensitivity of ensemble predictions to the optimal perturbation amplitude is analysed. This sensitivity study suggests that an increase of the root-mean-square amplitude of the initial perturbation can give a more realistic ensemble spread. Lastly, an estimate of the possible impact of the reduction of the amplitude of analysis errors on the skill of numerical weather prediction is deduced from the comparison of ensemble experiments run with T21 initial perturbations characterized by different amplitudes. Results indicate that a reduction of the root-mean-square amplitude of the analysis error by a factor √2 may lead to an improvement of medium-range predictability up to 1 day, and that a reduction by a factor 2√2 may reduce the errors of the 7-day forecast to values shown, at present, at forecast-day 5.
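The core computation can be illustrated in miniature: over a finite interval, the fastest-growing perturbations of a linear model are the leading right singular vectors of its propagator. The sketch below uses a small random system as a stand-in for the tangent-linear model; the operator, interval and amplitudes are illustrative assumptions, not the ECMWF T21/adjoint setup.

```python
# Toy illustration of singular-vector growth (not the ECMWF T21/adjoint computation).
import numpy as np
from scipy.linalg import expm, svd

rng = np.random.default_rng(0)
n = 6
A = 0.5 * rng.standard_normal((n, n))   # hypothetical tangent-linear operator, dx/dt = A x
t_opt = 1.5                             # optimization interval
M = expm(A * t_opt)                     # linear propagator over the interval

U, s, Vt = svd(M)
print("growth factors (singular values):", np.round(s, 3))

# The leading right singular vector maximizes ||M x|| / ||x||.
x1 = Vt[0]
print("growth achieved by leading singular vector:",
      round(np.linalg.norm(M @ x1) / np.linalg.norm(x1), 3))

# An "optimal perturbation" can be built as a linear combination of leading
# singular vectors, rescaled to a prescribed initial amplitude.
amp = 0.1
combo = 0.8 * Vt[0] + 0.6 * Vt[1]
perturbation = amp * combo / np.linalg.norm(combo)
print("initial perturbation amplitude:", round(np.linalg.norm(perturbation), 3))
```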

54 citations


Journal ArticleDOI
TL;DR: In this paper, an approximation formula for constructing two linear objective functions based on the nonlinear objective function of the equivalent deterministic form (EDF) of the stochastic programming model is presented.

49 citations


Journal ArticleDOI
TL;DR: In this paper, the second-moment closure and non-linear eddy-viscosity models were used to predict attached and separated flows over a high lift aerofoil for a range of incidence angles.
Abstract: A computational study is presented, which examines the performance of variants of second-moment closure and non-linear eddy-viscosity models when used to predict attached and separated flows over a high-lift aerofoil for a range of incidence angles. The capabilities of both model types, especially in respect of resolving the onset of suction-side separation at high incidence, are contrasted with those of a low-Re model based on the linear Boussinesq stress-strain relationship. The second-moment model contains a conventional linear approximation of the pressure-straining process; a cubic (realisable) variant has been investigated in an earlier study and found to offer no advantages. The quadratic eddy-viscosity model features coefficients which are sensitised to the strain and vorticity invariants. While both models, in the form originally proposed, are superior to the linear eddy-viscosity variant, neither performs well in respect of resolving separation, unless modified so as to return the requisite low level of shear stress in the boundary layer approaching separation. Once separation is resolved with sufficient realism, the near wake aft of the trailing edge is also well represented. All models return poor representations of the far wake, which is characterised by low levels of the turbulence production-to-dissipation ratio.

47 citations


Journal ArticleDOI
TL;DR: In this article, the adjoint method for finding optimal or singular modes is employed for studying the finite time stability of steady, tw0-dimensional atmospheric fronts as represented by the uniform potential vorticity semigeostrophic model.
Abstract: The adjoint method for finding optimal or singular modes is employed for studying the finite-time stability of steady, two-dimensional atmospheric fronts as represented by the uniform potential vorticity semigeostrophic model. The most unstable singular modes over a given period of time are computed for a wide range of scalar products. The reference scalar products are relevant to physical space and include total, kinetic, or potential energy; geopotential variance; and enstrophy. A front inspired by observations from FRONTS 87 and including a surface potential temperature anomaly is examined first through the usual linear results. The validity of the linear approximation is considered as a function of amplitude. The modes are also integrated in nonlinear simulations and their life cycles are shown. Results indicate that each norm and wave has its own preferred spatial scale. This severely restricts the concept of scale selection. Energy and geopotential variance modes increase mostly by improvin...

40 citations


Journal ArticleDOI
TL;DR: In this paper, the generation and transmission of planar thermoacoustic (TAC) waves produced by heating the boundary of a quiescent, isothermal, semi-infinite gaseous medium are investigated theoretically.

37 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that a shift-invariant space can be described by a system of linear partial difference equations with constant coefficients, whose solvability is characterized by an old theorem of Toeplitz.
Abstract: We take an algebraic approach to the problem of approximation by dilated shifts of basis functions. Given a finite collection D of compactly supported functions in Lp(Rs) (1 ≤ p ≤ ∞), let S denote the shift-invariant space generated by D and Sh its h-dilate (h > 0). We prove that (Sh: h > 0) provides Lp-approximation order r only if S contains all the polynomials of total degree less than r. In particular, in the case where D consists of a single function φ with nonzero mean (∫φ ≠ 0), we characterize the approximation order of (Sh: h > 0) by showing that the above condition on polynomial containment is also sufficient. The above results on approximation order are obtained through a careful analysis of the structure of shift-invariant spaces. It is demonstrated that a shift-invariant space can be described by a certain system of linear partial difference equations with constant coefficients. Such a system can then be reduced to an infinite system of linear equations, whose solvability is characterized by an old theorem of Toeplitz. Thus, the Toeplitz theorem sheds light on approximation theory. It is also used to give a very simple proof of the well-known Ehrenpreis principle on the solvability of a system of linear partial differential equations with constant coefficients.
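For reference, "provides Lp-approximation order r" is usually formalized as below; this is the standard definition the abstract relies on, not a formula quoted from the paper.

```latex
% One common formalization of "(S^h : h > 0) provides L_p-approximation order r":
\[
  \operatorname{dist}_{L_p(\mathbb{R}^s)}\bigl(f,\, S^h\bigr)
  \;\le\; C\, h^{r}\, \lvert f\rvert_{W^{r}_{p}(\mathbb{R}^s)}
  \qquad \text{for all } f \in W^{r}_{p}(\mathbb{R}^s) \text{ and all } h > 0,
\]
% with C independent of f and h. The paper's necessary condition is that the
% shift-invariant space S must then contain all polynomials of total degree
% less than r.
```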

32 citations


Journal ArticleDOI
TL;DR: A logarithmic barrier cutting plane algorithm for convex (possibly non-smooth, semi-infinite) programming which does not solve the linear relaxations to optimality, but rather stays in the interior of the feasible set.
Abstract: The paper presents a logarithmic barrier cutting plane algorithm for convex (possibly non-smooth, semi-infinite) programming. Most cutting plane methods, like that of Kelley, and Cheney and Goldstein, solve a linear approximation (localization) of the problem and then generate an additional cut to remove the linear program's optimal point. Other methods, like the “central cutting” plane methods of Elzinga-Moore and Goffin-Vial, calculate a center of the linear approximation and then adjust the level of the objective, or separate the current center from the feasible set. In contrast to these existing techniques, we develop a method which does not solve the linear relaxations to optimality, but rather stays in the interior of the feasible set. The iterates follow the central path of a linear relaxation, until the current iterate either leaves the feasible set or is too close to the boundary. When this occurs, a new cut is generated and the algorithm iterates. We use the tools developed by den Hertog, Roos and Terlaky to analyze the effect of adding and deleting constraints in long-step logarithmic barrier methods for linear programming. Finally, implementation issues and computational results are presented. The test problems come from the class of numerically difficult convex geometric and semi-infinite programming problems.
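For contrast with the interior-point approach described above, here is a minimal sketch of the classical Kelley cutting-plane loop, in which each linear localization is solved to optimality. The one-dimensional objective, box bounds and tolerance are illustrative assumptions.

```python
# Minimal Kelley cutting-plane sketch: each iteration minimizes the current
# piecewise-linear model (an LP) exactly, then adds a new cut at its minimizer.
# The paper's method differs: it follows the central path of the relaxation
# instead of solving it to optimality.
import numpy as np
from scipy.optimize import linprog

def f(x):                                   # convex, non-smooth test objective
    return max(x * x - 1.0, 2.0 * abs(x) - 1.0)

def subgrad(x):                             # one subgradient of f at x
    return 2.0 * x if x * x - 1.0 >= 2.0 * abs(x) - 1.0 else 2.0 * np.sign(x)

lo, hi = -3.0, 3.0                          # box constraint on x
cuts = []                                   # each cut: t >= g*x + (f(xk) - g*xk)
x = 2.5
for _ in range(30):
    fx, g = f(x), subgrad(x)
    cuts.append((g, fx - g * x))
    # LP in z = (x, t): minimize t subject to g*x - t <= -(f(xk) - g*xk)
    A_ub = [[gi, -1.0] for gi, _ in cuts]
    b_ub = [-ci for _, ci in cuts]
    res = linprog(c=[0.0, 1.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(lo, hi), (None, None)])
    x, t = res.x
    if f(x) - t < 1e-6:                     # model minimum t lower-bounds min f
        break
print("approximate minimizer:", round(float(x), 4), "value:", round(float(f(x)), 4))
```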

29 citations


Journal ArticleDOI
TL;DR: Discusses the allocation of the available capacity of a statistical multiplexer to serve a number of heterogeneous on-off sources, with the cell loss rate as the performance criterion, and derives computationally efficient bounds and asymptotic approximations for the cell loss rate.
Abstract: Discusses the allocation of the available capacity of a statistical multiplexer to serve a number of heterogeneous on-off sources, with the cell loss rate as the performance criterion. In order to avoid using potentially lengthy simulations, the authors have derived computationally efficient bounds and asymptotic approximations for the cell loss rate. The union of all partitions of the available capacity which satisfies the capacity bound and the performance criterion is defined as the capacity region. Both linear approximation and nonlinear approximation of the capacity region are investigated. It is shown that the linear approximation is reasonably accurate when the activity factors of the sources are not too high (less than 0.8). For the case where the linear approximation appears too optimistic, a simple nonlinear approximation for determining the capacity region is suggested. The accuracy of the method is demonstrated using numerical examples.
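A minimal sketch of what a linear approximation of the capacity (admission) region looks like: the standard bufferless Chernoff/effective-bandwidth construction assigns each on-off source class a fixed bandwidth, giving a linear admission constraint. The traffic parameters below are hypothetical, and the paper's specific bounds and its nonlinear refinement are not reproduced.

```python
# Linear (effective-bandwidth) approximation of the admissible region for
# heterogeneous on-off sources, via the standard bufferless Chernoff bound.
# Parameters are hypothetical; not the paper's bounds.
import numpy as np
from scipy.optimize import minimize_scalar

peak = np.array([2.0, 10.0])     # peak rates of two source classes (arbitrary units)
act  = np.array([0.4, 0.1])      # activity factors (fraction of time "on")
num  = np.array([30, 8])         # number of sources admitted per class
C    = 60.0                      # multiplexer capacity

def chernoff_exponent(s):
    # log P(aggregate rate > C) <= sum_j N_j log(1 - p_j + p_j e^{s h_j}) - s C
    return np.sum(num * np.log(1 - act + act * np.exp(s * peak))) - s * C

res = minimize_scalar(chernoff_exponent, bounds=(1e-6, 10.0), method="bounded")
s_star = res.x
eff_bw = np.log(1 - act + act * np.exp(s_star * peak)) / s_star   # per-source effective bandwidth

print("per-class effective bandwidths:", np.round(eff_bw, 3))
print("linear admission check  sum(N * eb) <= C :", round(float(num @ eff_bw), 2), "<=", C)
print("Chernoff bound on overload probability   :", float(np.exp(res.fun)))
```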

Journal ArticleDOI
TL;DR: In this article, it is shown that near the critical temperature of the Gibbs distribution, the time dependent process admits a scaling limit given by a nonlinear stochastic PDE, and the initial conditions of this approximation theorem are verified for equilibrium states when the temperature goes to its critical value in a suitable way.
Abstract: One-dimensional stochastic Ising systems with a local mean field interaction (Kac potential) are investigated. It is shown that near the critical temperature of the equilibrium (Gibbs) distribution the time dependent process admits a scaling limit given by a nonlinear stochastic PDE. The initial conditions of this approximation theorem are then verified for equilibrium states when the temperature goes to its critical value in a suitable way. Earlier results of Bertini-Presutti-Rudiger-Saada are improved; the proof is based on an energy inequality obtained by coupling the Glauber dynamics to its voter-type linear approximation.

Journal ArticleDOI
TL;DR: In this article, a nonlinear regression model was developed to estimate the displacement field associated with permanent deformations of 3D composite objects with complex internal structure for fields satisfying the small displacement gradient approximation of continuum mechanics.
Abstract: We present a new method for computing the internal displacement fields associated with permanent deformations of 3D composite objects with complex internal structure for fields satisfying the small displacement gradient approximation of continuum mechanics. We compute the displacement fields from a sequence of 3D X-ray computed tomography (CT) images. By assuming that the intensity of the tomographic images represents a conserved property which is incompressible, we develop a constrained nonlinear regression model for estimation of the displacement field. Successive linear approximation is then employed and each linear subsidiary problem is solved using variational calculus. We approximate the resulting Euler-Lagrange equations using a finite set of linear equations using finite differencing methods. We solve these equations using a conjugate gradient algorithm in a multiresolution framework. We validate our method using pairs of synthetic images of plane shear flow. Finally, we determine the 3D displacement field in the interior of a cylindrical asphalt/aggregate core loaded to a state of permanent deformation.
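A much-reduced 2D analogue of the successive-linear-approximation step is sketched below: linearize the conservation-of-intensity constraint, I(x + u) ≈ I(x) + ∇I·u, and solve the resulting linear least-squares problem, here for a single global translation rather than the authors' regularized, incompressible 3D field. The images and displacement are synthetic.

```python
# Simplified 2D analogue of one successive-linear-approximation step:
# linearized intensity conservation solved as a linear least-squares problem
# for a single global displacement (the paper estimates a full 3D field with
# incompressibility and regularization).
import numpy as np

def one_linear_step(I0, I1):
    """One Gauss-Newton step for a global translation u mapping I0 toward I1."""
    gy, gx = np.gradient(I0)                    # spatial gradients
    it = I1 - I0                                # temporal difference
    A = np.stack([gx.ravel(), gy.ravel()], axis=1)
    b = -it.ravel()
    u, *_ = np.linalg.lstsq(A, b, rcond=None)   # linear least-squares solve
    return u                                    # (ux, uy)

# Synthetic test: a smooth blob shifted by a known sub-pixel displacement.
y, x = np.mgrid[0:64, 0:64]
I0 = np.exp(-((x - 32.0) ** 2 + (y - 32.0) ** 2) / 100.0)
true_u = np.array([0.6, -0.4])
I1 = np.exp(-((x - 32.0 - true_u[0]) ** 2 + (y - 32.0 - true_u[1]) ** 2) / 100.0)

print("estimated displacement:", np.round(one_linear_step(I0, I1), 3), " true:", true_u)
```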

Proceedings ArticleDOI
28 Apr 1995
TL;DR: In the first part a full Pade approximation method for interval systems is presented, whereas in the second part a stable Pade approximation is discussed.
Abstract: This paper presents model reduction of linear interval systems using the Pade approximation method. In the first part a full Pade approximation method for interval systems is presented, whereas in the second part a stable Pade approximation is discussed. A numerical example illustrates the procedure.
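As background, the ordinary (non-interval) Padé reduction step can be sketched with SciPy: expand a transfer function about s = 0 and match its leading Taylor coefficients with a low-order rational function. The third-order example system is arbitrary, and the interval and stability-preserving variants of the paper are not reproduced.

```python
# Ordinary (point, not interval) Pade-based model reduction sketch via moment
# matching about s = 0; example system and orders are illustrative.
import numpy as np
from scipy.interpolate import pade

# Example 3rd-order system G(s) = (s + 4) / (s^3 + 6 s^2 + 11 s + 6)
num = np.poly1d([1.0, 4.0])
den = np.poly1d([1.0, 6.0, 11.0, 6.0])

def taylor_coeffs(num, den, k):
    """First k Taylor coefficients of num(s)/den(s) about s = 0."""
    c, r = [], np.poly1d(num.coeffs)
    for _ in range(k):
        c.append(r(0) / den(0))
        r = r - c[-1] * den               # remove the constant term
        r = np.poly1d(r.coeffs[:-1])      # then divide by s
    return np.array(c)

c = taylor_coeffs(num, den, 5)
p, q = pade(c, 2)                         # [2/2] Pade approximant = reduced 2nd-order model
print("reduced numerator coefficients:  ", p.coeffs)
print("reduced denominator coefficients:", q.coeffs)
print("DC gains match:", np.isclose(num(0) / den(0), p(0) / q(0)))
```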

Journal ArticleDOI
TL;DR: In this paper, the authors derive an estimate for the expected nonlinearity of a randomly selected injective substitution box; nonlinearity is a crucial requirement for the substitution boxes in secure block ciphers.
Abstract: Nonlinearity is a crucial requirement for the substitution boxes in secure block ciphers. The authors derive an estimate for the expected nonlinearity of a randomly selected injective substitution box.
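The quantity being estimated can be computed exactly for any concrete S-box via the Walsh transform, as in the sketch below; the dimensions and the random injective S-box are illustrative, and the paper's analytic estimate is not reproduced.

```python
# Compute the nonlinearity of a randomly selected injective n-to-m bit S-box
# via the Walsh transform: nonlinearity = 2^(n-1) - max|W| / 2 over all
# non-zero output masks b and all input masks a. Sizes are illustrative.
import random

n, m = 5, 8                            # injective: 2^n distinct outputs from 2^m values
random.seed(1)
sbox = random.sample(range(2 ** m), 2 ** n)

def parity(v):
    return bin(v).count("1") & 1

def nonlinearity(sbox, n, m):
    nl = 2 ** (n - 1)
    for b in range(1, 2 ** m):                 # non-zero output masks
        for a in range(2 ** n):                # all input masks (including 0)
            w = sum((-1) ** (parity(a & x) ^ parity(b & sbox[x]))
                    for x in range(2 ** n))
            nl = min(nl, 2 ** (n - 1) - abs(w) // 2)
    return nl

print("nonlinearity of the random injective S-box:", nonlinearity(sbox, n, m))
```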

Journal ArticleDOI
TL;DR: The high efficiency of the proposed DCOC algorithm lies in the fact that the Lagrange multipliers associated with stress constraints are evaluated explicitly, so that the dual-type problem involves only displacement and eigenvalue constraints, whose number is greatly reduced compared with traditional OC or dual methods.

Proceedings ArticleDOI
07 Nov 1995
Abstract: In this paper, circular transducers with axially symmetric vibrational profiles are considered. Analytical formulas were established for the impulse response function h(X,t) of circular transducers with vibration velocity profiles approximated by linear polynomials on the finite element (annulus). Moreover, similar formulas were obtained for rectangular transducers with a linear approximation of the vibrational velocity on the finite element (triangle). These formulas enable accurate calculation of acoustic field distributions in the near field and far field, respectively. Calculated profiles of the acoustic field were compared with the experimental data. An efficient coupling between the calculation of vibrational velocity profiles and the corresponding acoustic field distributions was established.

Journal ArticleDOI
TL;DR: In this paper, an explicit approximation map for shift-invariant subspaces of Lp(Rd), 2 ≤ p ≤ ∞, generated by the shifts of one compactly supported function is constructed.

Journal ArticleDOI
TL;DR: The linear precision property of rational Bezier curves is discussed and an algorithm for finding appropriate weights is given.

Proceedings ArticleDOI
24 Jul 1995
TL;DR: In this article, a new universal technique of microwave network complex reflection coefficient measurement based on the solution of the (N+2)-port equations by the maximum likelihood method is proposed.
Abstract: A new universal technique of microwave network complex reflection coefficient measurement based on the solution of the (N+2)-port equations by the maximum likelihood method is proposed. The expressions for the measurement errors are obtained in a linear approximation. It is shown that the measurement accuracy grows with the increase in the number of measurement ports.

Proceedings ArticleDOI
23 Oct 1995
TL;DR: An optimal piecewise linear approximation of the Euclidean norm is presented which is applied to vector median filtering.
Abstract: For reducing impulsive noise without degrading image contours, median filtering is a powerful tool. In multiband images, such as color images or vector fields obtained by optic flow computation, a vector median filter can be used. Vector median filters are defined on the basis of a suitable distance, the best-performing distance being the Euclidean one. The Euclidean distance is computed using the Euclidean norm, which is computationally demanding because a square root is required. An optimal piecewise linear approximation of the Euclidean norm is presented and applied to vector median filtering.
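A minimal sketch of the combination described above: a vector median filter over a window of color pixels, with the Euclidean norm replaced by a square-root-free piecewise-linear approximation. The "max plus scaled min" coefficients used here are a simple common choice, not the optimal coefficients derived in the paper.

```python
# Vector median filtering of a pixel window using a piecewise-linear,
# square-root-free approximation of the Euclidean norm (coefficients are a
# simple generic choice, not the paper's optimal ones).
import numpy as np

SQRT2_M1 = 0.41421356          # sqrt(2) - 1

def approx_norm(v):
    """Piecewise-linear approximation of the Euclidean norm of a 3-vector."""
    a, b, c = sorted(abs(float(x)) for x in v)     # a <= b <= c
    ab = b + SQRT2_M1 * a                          # ~ sqrt(a^2 + b^2)
    return max(ab, c) + SQRT2_M1 * min(ab, c)      # ~ sqrt(ab^2 + c^2)

def vector_median(window):
    """Pixel of the window minimizing the sum of (approximate) distances
    to all other pixels in the window."""
    costs = [sum(approx_norm(p - q) for q in window) for p in window]
    return window[int(np.argmin(costs))]

# Toy 3x3 window of RGB pixels with one impulsive outlier.
window = [np.array(p, dtype=float) for p in
          [(100, 20, 20), (102, 22, 18), (99, 19, 21),
           (101, 21, 20), (255, 255, 0), (98, 20, 19),   # (255, 255, 0) is the outlier
           (100, 18, 22), (103, 21, 19), (101, 20, 20)]]
print("vector median of the window:", vector_median(window))
```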

Journal ArticleDOI
TL;DR: In this paper, two different algorithms, non-linear regression and iterative linear approximation, are compared for peak-shape analysis of chromatographic data, and the accuracy of fitting of severely overlapping peak-groupings is examined against the theoretical input data.
Abstract: Peak-shape analysis of chromatographic data has to deal with asymmetric signals. Two different algorithms, non-linear regression and iterative linear approximation, are compared. The accuracy of fitting of some severely overlapping peak-groupings is examined against the theoretical input data. Suggestions are made for using these tools in practical applications.
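A tiny non-linear regression example in the spirit of the comparison above: fitting an asymmetric chromatographic peak with SciPy's curve_fit. The exponentially modified Gaussian is assumed here as a common asymmetric peak model (not necessarily the one used in the paper), the data are synthetic, and the iterative linear-approximation alternative is not shown.

```python
# Non-linear regression fit of a synthetic asymmetric chromatographic peak,
# modelled as an exponentially modified Gaussian (EMG). Illustrative only.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

def emg(t, area, mu, sigma, lam):
    """Exponentially modified Gaussian: Gaussian convolved with an exponential decay."""
    z = (mu + lam * sigma ** 2 - t) / (np.sqrt(2) * sigma)
    return area * 0.5 * lam * np.exp(0.5 * lam * (2 * mu + lam * sigma ** 2 - 2 * t)) * erfc(z)

rng = np.random.default_rng(7)
t = np.linspace(0, 20, 400)
y = emg(t, 5.0, 6.0, 0.5, 0.8) + rng.normal(0, 0.01, t.size)   # noisy synthetic peak

p0 = [4.0, 5.5, 0.4, 1.0]                                      # rough starting guess
popt, pcov = curve_fit(emg, t, y, p0=p0)
print("fitted (area, mu, sigma, lambda):", np.round(popt, 3))
```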

Book ChapterDOI
01 Jan 1995
TL;DR: In this paper, the authors presented an n-dimensional linear approximation that can only overestimate distance, preserving the valuable containment property of the previous 2D method, and provided geometric insight and illustration of the symmetries of the hypercube measure solid.
Abstract: Publisher Summary This chapter presents an n-dimensional linear approximation that can only overestimate distance, preserving the valuable containment property of the previous 2D method. Whereas the latter is solved using trigonometry, this chapter employs geometric methods to derive a family of semi-regular polytopes having cubic symmetry. These solids provide a nested sequence of bounding volumes that encase the n-sphere, the locus of points in n-space lying at unit distance from the origin. The chapter provides geometric insight and illustration of the symmetries of the n-dimensional "hypercube" measure solid. It is also observed that in higher dimensions the weight equation shows that the weights diminish slowly, and the added complexity of both magnitude computation and element sorting strongly favors the use of the Euclidean norm in floating point. The values for the 3D linear approximation provided by Ritter were obtained by empirical testing; the method presented here provides a means of exact computation.
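The containment ("overestimate only") property is easy to check numerically in the simplest two-dimensional case, where the piecewise-linear form max + (√2 − 1)·min is a chord over the convex Euclidean norm and therefore never falls below it. The sketch below verifies this empirically; it uses the simple 2D coefficients, not the chapter's n-dimensional polytope family.

```python
# Numerical check of the "overestimate only" (containment) property in 2D:
# max + (sqrt(2)-1)*min never falls below the Euclidean norm and is exact on
# the axes and the diagonal. Simple 2D instance only, not the chapter's
# n-dimensional construction.
import numpy as np

def overestimate_2d(x, y):
    a, b = abs(x), abs(y)
    return max(a, b) + (np.sqrt(2.0) - 1.0) * min(a, b)

rng = np.random.default_rng(3)
pts = rng.normal(size=(100_000, 2))
approx = np.array([overestimate_2d(px, py) for px, py in pts])
exact = np.linalg.norm(pts, axis=1)

ratio = approx / exact
print("never underestimates:", bool(np.all(ratio >= 1.0 - 1e-12)))
print("worst overestimation factor:", round(float(ratio.max()), 4))   # about 1.082
```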

Journal ArticleDOI
TL;DR: In this paper, the authors study the continuous piecewise linear finite element approximation of a possibly degenerate quasilinear parabolic problem posed over a closed convex subset K of a Sobolev space, and establish error bounds in energy-type norms for a fully discrete scheme based on the backward Euler time discretisation.
Abstract: In this paper we study the continuous piecewise linear finite element approximation of the following problem: let Ω be an open set in Rd with d = 1 or 2. Given T > 0, f and u0, find u ∈ K, where K is a closed convex subset of a Sobolev space, such that a variational inequality holds for x ∈ Ω, for any v ∈ K and for a.e. t ∈ (0,T], where k ∈ C(0,∞) is a given nonnegative function with k(s)s strictly increasing for s ≥ 0, but possibly degenerate, and p ∈ (1,∞) depends on k. For such a general problem we establish error bounds in energy type norms for a fully discrete approximation based on the backward Euler time discretisation. We show that these error bounds converge at the optimal rate with respect to the space discretisation, provided p ≤ 2 and the solution u is sufficiently regular.

Journal ArticleDOI
TL;DR: The normal equation for the optimum value of the time scale parameter is derived and decoupled from that of the basis weights, and a numerical method is developed for estimating the optimum time scale.
Abstract: We study the problem of linear approximation of a signal using the parametric gamma bases in L2 space. These bases have a time scale parameter, which has the effect of modifying the relative angle between the signal and the projection space, thereby yielding an extra degree of freedom in the approximation. Gamma bases have a simple analog implementation that is a cascade of identical lowpass filters. We derive the normal equation for the optimum value of the time scale parameter and decouple it from that of the basis weights. Using statistical signal processing tools, we further develop a numerical method for estimating the optimum time scale.
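A discrete-time sketch of the gamma-basis idea: the basis signals come from a cascade of identical one-pole lowpass stages, and for a fixed time-scale parameter mu the optimal weights follow from ordinary least squares; sweeping mu shows the extra degree of freedom. The target signal below is hypothetical, and the paper's decoupled normal equation for the optimal mu itself is not reproduced.

```python
# Discrete-time gamma filter sketch: cascade of identical one-pole lowpass
# stages; for each fixed time-scale mu the weights come from least squares.
import numpy as np

def gamma_taps(x, order, mu):
    """Tap signals g_0..g_order: g_0 = x, g_k[n] = (1-mu)*g_k[n-1] + mu*g_{k-1}[n-1]."""
    taps = np.zeros((order + 1, len(x)))
    taps[0] = x
    for k in range(1, order + 1):
        for t in range(1, len(x)):
            taps[k, t] = (1 - mu) * taps[k, t - 1] + mu * taps[k - 1, t - 1]
    return taps

rng = np.random.default_rng(0)
x = rng.standard_normal(2000)                                 # input signal
d = np.convolve(x, np.exp(-0.3 * np.arange(20)))[: len(x)]    # hypothetical target signal

for mu in (0.2, 0.4, 0.6):                                    # sweep the time-scale parameter
    G = gamma_taps(x, order=4, mu=mu)
    w, *_ = np.linalg.lstsq(G.T, d, rcond=None)               # optimal weights for this mu
    rel_err = np.linalg.norm(d - G.T @ w) / np.linalg.norm(d)
    print(f"mu = {mu:.1f}  relative approximation error = {rel_err:.4f}")
```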

Journal ArticleDOI
TL;DR: In this article, the use of iterative dynamic programming employing exact penalty functions for minimum energy control problems is presented, and it is shown that the choice of an appropriate penalty function factor depends on the relative size of the time delay with respect to the final time and the expected value of the energy consumption.
Abstract: This paper presents the use of iterative dynamic programming employing exact penalty functions for minimum energy control problems. We show that exact continuously non-differentiable penalty functions are superior to continuously differentiable penalty functions in terms of satisfying final state constraints. We also demonstrate that the choice of an appropriate penalty function factor depends on the relative size of the time delay with respect to the final time and on the expected value of the energy consumption. A quadratic approximation (QA) of the delayed variables is much better than a linear approximation (LA) for relatively large time delays. The QA improves the rate of convergence and avoids the formation of 'kinks'. A more general way of selecting appropriate penalty function factors is given, and the results obtained using four illustrative examples of varying complexity corroborate the efficacy of the method.
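The LA-versus-QA point can be illustrated with a generic interpolation experiment: reconstructing a delayed state value that falls between stored time-grid points by linear versus three-point quadratic interpolation. The trajectory and grid spacing below are hypothetical, not the paper's control examples.

```python
# Generic comparison of linear (LA) vs quadratic (QA) approximation of a
# delayed variable that falls between stored time-grid values. Trajectory and
# grid spacing are hypothetical.
import numpy as np

def state(t):                                  # hypothetical smooth state trajectory
    return np.sin(1.3 * t) + 0.2 * t

h = 0.25                                       # storage grid spacing
grid = np.arange(0.0, 10.0 + h, h)
xg = state(grid)

def delayed_LA(t):
    i = int(np.floor(t / h)); s = (t - grid[i]) / h
    return (1 - s) * xg[i] + s * xg[i + 1]     # linear between two stored points

def delayed_QA(t):
    i = max(int(np.floor(t / h)), 1); s = (t - grid[i]) / h
    # quadratic through three stored points (Lagrange form, nodes at s = -1, 0, 1)
    return (0.5 * s * (s - 1) * xg[i - 1] + (1 - s * s) * xg[i]
            + 0.5 * s * (s + 1) * xg[i + 1])

test = np.linspace(1.0, 9.0, 500)
err_LA = max(abs(delayed_LA(t) - state(t)) for t in test)
err_QA = max(abs(delayed_QA(t) - state(t)) for t in test)
print(f"max LA error = {err_LA:.2e},  max QA error = {err_QA:.2e}")
```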

Journal ArticleDOI
TL;DR: This part discusses a simulation algorithm for the carding process based on state variables, expressed in terms of mathematical equations, that account for the transfer of fibers in the longitudinal and transverse directions.
Abstract: The first part of this series discussed a two-dimensional mathematical model of the carding process. This part discusses the algorithm of simulation for a carding process using the state variables, which take into consideration a transference of fibers in the longitudinal and transverse directions, obtained in terms of mathematical equations. An experimental investigation is reported for estimating model parameters that characterize diffusion of fibers in the transverse direction. The least-squares method on a log scale is used to process experimental data for a linear approximation of the weight function in the transverse direction. The weight and unit step functions in the transverse direction are obtained using simulation. Realization of random processes is achieved in ten parallel sections in the output of a carding machine. The graphs of profile change for coefficients of variation show a smoothing effect in the carding process. A new system of statistical characteristics for two-dimensional texti...
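A generic sketch of the "least squares on a log scale" step: fit log w as a linear function of the transverse coordinate, i.e. an exponential weight profile. The profile shape, decay rate, noise level and sample positions are hypothetical, since the abstract does not give the actual data.

```python
# Least squares on a log scale: fit log(w) = a + b*y, i.e. w(y) = exp(a)*exp(b*y).
# Profile, noise and positions are hypothetical.
import numpy as np

rng = np.random.default_rng(4)
y = np.linspace(0.0, 4.0, 12)                            # transverse position (arbitrary units)
w_true = 0.9 * np.exp(-0.7 * y)                          # hypothetical transverse weight profile
w_meas = w_true * np.exp(rng.normal(0, 0.05, y.size))    # multiplicative measurement noise

b, a = np.polyfit(y, np.log(w_meas), 1)                  # linear fit in log scale
print(f"fitted w(y) ~ {np.exp(a):.3f} * exp({b:.3f} * y)   (true: 0.9 * exp(-0.7 * y))")
```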

Journal ArticleDOI
TL;DR: In this article, it was shown that the approximation power provided by quasi-interpolants and other functional-based operators is equivalent to the polynomial-reproducing property they possess.

Proceedings ArticleDOI
09 May 1995
TL;DR: To reduce the number of parameters in the Volterra filter, a tensor product basis approximation is considered, which can be implemented much more efficiently than the original Volterra filter.
Abstract: To reduce the number of parameters in the Volterra filter a tensor product basis approximation is considered. The approximation can be implemented much more efficiently than the original Volterra filter. In addition, because the design methods are based on partial characterization of the Volterra filter, the approximations are also useful in reducing the complexity of identification and modelling problems. Useful bounds are obtained on the approximation error.
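One concrete way to see the parameter reduction is to factor the quadratic Volterra kernel into a short sum of rank-1 (tensor-product) terms, so the quadratic output becomes a few squared linear-filter outputs. The eigendecomposition-based sketch below illustrates this general idea on a synthetic, nearly low-rank kernel; it is not necessarily the specific basis construction of the paper.

```python
# Tensor-product (low-rank) approximation of the quadratic kernel of a
# second-order Volterra filter: x^T H2 x ~ sum_k s_k (u_k . x)^2.
# Synthetic nearly-low-rank kernel; illustrative only.
import numpy as np

rng = np.random.default_rng(2)
M = 16                                              # memory length
V = rng.standard_normal((M, 3))
H2 = V @ np.diag([1.0, 0.5, 0.2]) @ V.T             # nearly rank-3 symmetric kernel
H2 += 0.01 * rng.standard_normal((M, M))
H2 = 0.5 * (H2 + H2.T)                              # keep it symmetric

s, U = np.linalg.eigh(H2)                           # H2 = U diag(s) U^T
order = np.argsort(-np.abs(s))                      # strongest rank-1 terms first
s, U = s[order], U[:, order]

def quad_full(x):
    return x @ H2 @ x

def quad_lowrank(x, r):
    proj = U[:, :r].T @ x
    return np.sum(s[:r] * proj * proj)              # sum of r tensor-product terms

x = rng.standard_normal(M)                          # one input window
for r in (2, 4, 8, M):
    rel = abs(quad_lowrank(x, r) - quad_full(x)) / abs(quad_full(x))
    print(f"rank {r:2d}:  relative error = {rel:.3e}")
```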