
Showing papers on "Linear approximation published in 2011"


Book
02 Nov 2011
TL;DR: The first investigations of nonlinear approximation problems were made by P.L. Chebyshev in the last century, and the entire theory of uniform approximation is strongly connected with his name as discussed by the authors.
Abstract: The first investigations of nonlinear approximation problems were made by P.L. Chebyshev in the last century, and the entire theory of uniform approximation is strongly connected with his name. By making use of his ideas, the theories of best uniform approximation by rational functions and by polynomials were developed over the years in an almost unified framework. The difference between linear and rational approximation and its implications first became apparent in the 1960's. At roughly the same time other approaches to nonlinear approximation were also developed. The use of new tools, such as nonlinear functional analysis and topological methods, showed that linearization is not sufficient for a complete treatment of nonlinear families. In particular, the application of global analysis and the consideration of flows on the family of approximating functions introduced ideas which were previously unknown in approximation theory. These were and still are important in many branches of analysis. On the other hand, methods developed for nonlinear approximation problems can often be successfully applied to problems which belong to or arise from linear approximation. An important example is the solution of moment problems via rational approximation. Best quadrature formulae or the search for best linear spaces often leads to the consideration of spline functions with free nodes. The most famous problem of this kind, namely best interpolation by polynomials, is treated in the appendix of this book.

331 citations


Book
28 Feb 2011
TL;DR: In this paper, the authors present the basic achievements of the approximation of real functions by real rational functions and, for completeness, also discuss some topics from complex rational approximation; classical and modern results from linear approximation theory and spline approximation are included for comparison.
Abstract: Originally published in 1987, this book is devoted to the approximation of real functions by real rational functions. These are, in many ways, a more convenient tool than polynomials, and interest in them was growing, especially since D. Newman's work in the mid-sixties. The authors aim at presenting the basic achievements of the subject and, for completeness, also discuss some topics from complex rational approximation. Certain classical and modern results from linear approximation theory and spline approximation are also included for comparative purposes. This book will be of value to anyone with an interest in approximation theory and numerical analysis.

273 citations


Journal ArticleDOI
TL;DR: This work develops two implementations of CBP for a one-dimensional translation-invariant source, one using a first-order Taylor approximation, and another using a form of trigonometric spline, and examines the tradeoff between sparsity and signal reconstruction accuracy in these methods.
Abstract: We consider the problem of decomposing a signal into a linear combination of features, each a continuously translated version of one of a small set of elementary features. Although these constituents are drawn from a continuous family, most current signal decomposition methods rely on a finite dictionary of discrete examples selected from this family (e.g., shifted copies of a set of basic waveforms), and apply sparse optimization methods to select and solve for the relevant coefficients. Here, we generate a dictionary that includes auxiliary interpolation functions that approximate translates of features via adjustment of their coefficients. We formulate a constrained convex optimization problem, in which the full set of dictionary coefficients represents a linear approximation of the signal, the auxiliary coefficients are constrained so as to only represent translated features, and sparsity is imposed on the primary coefficients using an L1 penalty. The basis pursuit denoising (BP) method may be seen as a special case, in which the auxiliary interpolation functions are omitted, and we thus refer to our methodology as continuous basis pursuit (CBP). We develop two implementations of CBP for a one-dimensional translation-invariant source, one using a first-order Taylor approximation, and another using a form of trigonometric spline. We examine the tradeoff between sparsity and signal reconstruction accuracy in these methods, demonstrating empirically that trigonometric CBP substantially outperforms Taylor CBP, which, in turn, offers substantial gains over ordinary BP. In addition, the CBP bases can generally achieve equally good or better approximations with much coarser sampling than BP, leading to a reduction in dictionary dimensionality.
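
The first-order Taylor variant can be written as a small convex program: the dictionary holds shifted waveforms and their derivatives, the derivative coefficients are tied to the amplitudes so that they only encode sub-grid shifts, and an L1 penalty is placed on the amplitudes. The sketch below illustrates this structure under simplifying assumptions (a single waveform, a uniform shift grid, and an arbitrary sparsity weight lam); it is not the authors' implementation.

```python
# Minimal sketch of first-order Taylor CBP for one waveform f with derivative fprime,
# placed on a grid with spacing `delta` samples; lam is an illustrative sparsity weight.
import numpy as np
import cvxpy as cp

def taylor_cbp(y, f, fprime, n_shifts, delta, lam=0.1):
    n = len(y)
    F = np.zeros((n, n_shifts))   # shifted copies of the waveform
    G = np.zeros((n, n_shifts))   # shifted copies of its derivative
    for i in range(n_shifts):
        start = i * delta
        stop = min(start + len(f), n)
        F[start:stop, i] = f[:stop - start]
        G[start:stop, i] = fprime[:stop - start]

    a = cp.Variable(n_shifts)     # amplitudes (primary coefficients)
    b = cp.Variable(n_shifts)     # amplitude times sub-grid shift (auxiliary coefficients)
    # f(t - d) ~ f(t) - d f'(t), so the signal model is F a - G b with b_i = a_i d_i.
    objective = cp.sum_squares(y - F @ a + G @ b) + lam * cp.sum(a)
    constraints = [a >= 0, cp.abs(b) <= (delta / 2.0) * a]   # shifts restricted to +-delta/2
    cp.Problem(cp.Minimize(objective), constraints).solve()
    shifts = np.where(a.value > 1e-8, b.value / np.maximum(a.value, 1e-12), 0.0)
    return a.value, shifts
```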

241 citations


Journal ArticleDOI
TL;DR: A novel approach to contour error calculation of an arbitrary smooth path is proposed in this paper, based on coordinate transformation and circular approximation and incorporated in a position loop-based cross-coupled control structure.
Abstract: Reduction of contour error is the main control objective in contour-following applications. A common approach to this objective is to design a controller based on the contour error directly. In this case, the contour error estimation is a key factor in the contour-following operation. Contour error can be approximated by the linear distance from the actual position to the tangent line or plane at the desired position. This approach suffers from a significant error due to linear approximation. A novel approach to contour error calculation of an arbitrary smooth path is proposed in this paper. The proposed method is based on coordinate transformation and circular approximation. In this method, the contour error is represented by the coordinate of the actual position with respect to a specific virtual coordinate frame. The method is incorporated in a position loop-based cross-coupled control structure. An equivalent robust control system is used to establish stability of the closed-loop system. Experimental results demonstrate the efficiency and performance of the proposed contour error estimation algorithm and the motion control strategy.

149 citations


Journal ArticleDOI
Lei Wu
TL;DR: In this article, a rigorous segment partition method was proposed to obtain a set of optimal segment points by minimizing the difference between chord and arc lengths, in order to derive a tighter piecewise linear approximation of QCCs and in turn a better UC solution as compared to the equipartition method.
Abstract: This letter provides a tighter piecewise linear approximation of generating units' quadratic cost curves (QCCs) for unit commitment (UC) problems. In order to facilitate the UC optimization process with efficient mixed-integer linear programming (MILP) solvers, QCCs are piecewise linearized for converting the original mixed-integer quadratic programming (MIQP) problem into an MILP problem. Traditionally, QCCs are piecewise linearized by evenly dividing the entire real power region into segments. This letter discusses a rigorous segment partition method for obtaining a set of optimal segment points by minimizing the difference between chord and arc lengths, in order to derive a tighter piecewise linear approximation of QCCs and, in turn, a better UC solution as compared to the equipartition method. Numerical test results show the effectiveness of the proposed method on a tighter piecewise linear approximation for better UC solutions.
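
To make the chord-versus-arc-length criterion concrete, the sketch below picks interior breakpoints for a piecewise linear approximation of a quadratic cost curve by numerically minimizing the worst per-segment gap between arc length and chord length. The optimization routine (Nelder-Mead) and the example cost coefficients are illustrative choices, not the letter's algorithm.

```python
# Rough sketch: choose breakpoints of a piecewise linear approximation of a quadratic
# cost curve by minimizing the largest per-segment arc-minus-chord length difference.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

def chord_arc_gap(f, df, p1, p2):
    """Arc length minus chord length of f on [p1, p2] (zero iff f is linear there)."""
    arc, _ = quad(lambda p: np.sqrt(1.0 + df(p) ** 2), p1, p2)
    chord = np.hypot(p2 - p1, f(p2) - f(p1))
    return arc - chord

def optimal_breakpoints(f, df, pmin, pmax, n_seg):
    def worst_gap(interior):
        pts = np.concatenate(([pmin], np.sort(interior), [pmax]))
        return max(chord_arc_gap(f, df, a, b) for a, b in zip(pts[:-1], pts[1:]))
    x0 = np.linspace(pmin, pmax, n_seg + 1)[1:-1]          # start from equipartition
    res = minimize(worst_gap, x0, method="Nelder-Mead")
    return np.concatenate(([pmin], np.sort(res.x), [pmax]))

# Example: C(P) = 0.002 P^2 + 10 P + 100 on [100, 500] MW, four segments (made-up numbers).
f = lambda P: 0.002 * P ** 2 + 10 * P + 100
df = lambda P: 0.004 * P + 10
print(optimal_breakpoints(f, df, 100.0, 500.0, 4))
```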

110 citations


Journal ArticleDOI
TL;DR: In this paper, a dynamic piecewise linear model is proposed to represent dc transmission losses in optimal scheduling problems, where the linear cuts to approximate quadratic losses in each transmission line are adjusted iteratively as the optimization problem is solved.
Abstract: This paper proposes a dynamic piecewise linear model to represent dc transmission losses in optimal scheduling problems. An iterative procedure is proposed, where the linear cuts to approximate quadratic losses in each transmission line are adjusted iteratively as the optimization problem is solved. Applications of this approach to the network constrained short-term hydrothermal scheduling problem and to static dc optimal power flow problems yield a higher accuracy in representing line transmission losses as compared to other approaches, such as a priori iterative estimation, static piecewise linear models and successive linear programming. Study cases for a large-scale system also show reasonable results regarding CPU times.
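
One common way to write such iteratively adjusted cuts is as tangents to the quadratic loss taken at the current flow iterate; the display below is a generic version of this idea, and the paper's dynamic piecewise model may differ in its details.

```latex
% Quadratic loss on line l (resistance r_l, flow f_l) under-approximated by a tangent
% cut generated at the k-th flow iterate f_l^{(k)}; cuts are refreshed after each solve.
\[
  \mathrm{loss}_l \;\geq\; r_l f_l^{2}
  \quad\Longrightarrow\quad
  \mathrm{loss}_l \;\geq\; 2\, r_l\, f_l^{(k)}\, f_l \;-\; r_l \bigl(f_l^{(k)}\bigr)^{2}.
\]
```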

91 citations


Proceedings ArticleDOI
27 Jun 2011
TL;DR: A closed-form approximation to the full SLAM problem is proposed, under the assumption that the relative position and the relative orientation measurements are independent, and it is demonstrated that such refinement is often unnecessary, since the linear estimate is already accurate.
Abstract: This article investigates the problem of Simultaneous Localization and Mapping (SLAM) from the perspective of linear estimation theory. The problem is first formulated in terms of graph embedding: a graph describing robot poses at subsequent instants of time needs to be embedded in a three-dimensional space, ensuring that the estimated configuration maximizes measurement likelihood. Combining tools belonging to linear estimation and graph theory, a closed-form approximation to the full SLAM problem is proposed, under the assumption that the relative position and the relative orientation measurements are independent. The approach needs no initial guess for optimization and is formally proven to admit solution under the SLAM setup. The resulting estimate can be used as an approximation of the actual nonlinear solution or can be further refined by using it as an initial guess for nonlinear optimization techniques. Finally, the experimental analysis demonstrates that such refinement is often unnecessary, since the linear estimate is already accurate.
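
As a toy illustration of treating graph embedding as linear estimation, the sketch below recovers the node positions of a small pose graph from noisy relative-position measurements with a single linear least-squares solve; the orientation part of the actual method and its statistical weighting are omitted, and all numbers are made up.

```python
# Toy sketch of graph embedding as linear estimation: relative-position measurements only.
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]       # pose graph (a loop plus one chord)
true = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
rng = np.random.default_rng(3)
meas = {e: true[e[1]] - true[e[0]] + 0.02 * rng.standard_normal(2) for e in edges}

n = len(true)
A = np.zeros((2 * len(edges) + 2, 2 * n))
b = np.zeros(2 * len(edges) + 2)
for k, (i, j) in enumerate(edges):                     # each edge: p_j - p_i = measurement
    A[2 * k:2 * k + 2, 2 * j:2 * j + 2] = np.eye(2)
    A[2 * k:2 * k + 2, 2 * i:2 * i + 2] = -np.eye(2)
    b[2 * k:2 * k + 2] = meas[(i, j)]
A[-2:, 0:2] = np.eye(2)                                # anchor the first pose at the origin
p, *_ = np.linalg.lstsq(A, b, rcond=None)
print(p.reshape(n, 2).round(3))                        # closed-form estimate of all poses
```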

79 citations


Journal ArticleDOI
TL;DR: In this paper, a single-level mixed integer linear programming (SL-MILP) formulation for bi-level discrete network design problem is presented, where travel time function is appropriately modified to cope with the dependency of node-link adjacency matrix on new links.
Abstract: Discrete network design problem (DNDP) is generally formulated as a bi-level programming. In this paper, a single-level mixed integer linear programming (SL-MILP) formulation for bi-level DNDP is presented. To cope with the dependency of node-link adjacency matrix on new links, travel time function is appropriately modified. The nonlinearity of the travel time function is also removed by means of a convex-combination based linear approximation which takes advantage of a unimodular structure. Two valid inequalities is developed which shorten computation time significantly. The validity of the proposed formulation is examined by two test problems. SL-MILP is able to provide optimal solution.
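
For reference, a standard convex-combination (lambda) linearization of a nonlinear travel time function t(x) over fixed breakpoints looks as follows; the paper's formulation additionally exploits a unimodular structure, which is not captured here.

```latex
% Convex-combination (lambda-method) piecewise linear approximation of t(x)
% over breakpoints \bar{x}_0 < \dots < \bar{x}_m:
\[
  x=\sum_{j=0}^{m}\lambda_j\,\bar{x}_j,\qquad
  t(x)\;\approx\;\sum_{j=0}^{m}\lambda_j\,t(\bar{x}_j),\qquad
  \sum_{j=0}^{m}\lambda_j=1,\qquad \lambda_j\ge 0,
\]
% with at most two adjacent lambda_j nonzero (an SOS2 condition, enforced with binaries
% or solver SOS2 branching), which keeps the resulting model mixed-integer linear.
```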

74 citations


Journal ArticleDOI
TL;DR: In this article, the authors derived an exact solution (in the form of a series expansion) to compute gravitational lensing magnification maps, which is based on the backward gravitational lens mapping of a partition of the image plane in polygonal cells.
Abstract: We derive an exact solution (in the form of a series expansion) to compute gravitational lensing magnification maps. It is based on the backward gravitational lens mapping of a partition of the image plane in polygonal cells (inverse polygon mapping, IPM), not including critical points (except perhaps at the cell boundaries). The zeroth-order term of the series expansion leads to the method described by Mediavilla et al. The first-order term is used to study the error induced by the truncation of the series at zeroth order, explaining the high accuracy of the IPM even at this low order of approximation. Interpreting the Inverse Ray Shooting (IRS) method in terms of IPM, we explain the previously reported N^{-3/4} dependence of the IRS error with the number of collected rays per pixel. Cells intersected by critical curves (critical cells) transform to non-simply connected regions with topological pathologies like auto-overlapping or non-preservation of the boundary under the transformation. To define a non-critical partition, we use a linear approximation of the critical curve to divide each critical cell into two non-critical subcells. The optimal choice of the cell size depends basically on the curvature of the critical curves. For typical applications in which the pixel of the magnification map is a small fraction of the Einstein radius, a one-to-one relationship between the cell and pixel sizes in the absence of lensing guarantees both the consistence of the method and a very high accuracy. This prescription is simple but very conservative. We show that substantially larger cells can be used to obtain magnification maps with huge savings in computation time.
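
To fix ideas about the inverse ray shooting baseline that the series expansion improves upon, the sketch below shoots a grid of rays backwards through a random field of point-mass microlenses and bins them on the source plane; counts per pixel, normalized by the unlensed count, approximate the magnification. Lens positions, masses, and grid sizes are arbitrary illustrative choices, and no inverse-polygon treatment of critical cells is attempted.

```python
# Minimal inverse ray shooting (IRS) sketch; all numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_lenses, n_side, n_pix = 50, 1000, 100
lens_pos = rng.uniform(-10.0, 10.0, size=(n_lenses, 2))   # lens positions (Einstein radii)
lens_mass = np.ones(n_lenses)

# Regular grid of rays on the image plane.
g = np.linspace(-5.0, 5.0, n_side)
x1, x2 = np.meshgrid(g, g)
rays = np.stack([x1.ravel(), x2.ravel()], axis=1)

# Point-mass deflection alpha(x) = sum_i m_i (x - x_i) / |x - x_i|^2, accumulated per lens.
alpha = np.zeros_like(rays)
for pos, m in zip(lens_pos, lens_mass):
    d = rays - pos
    alpha += m * d / np.sum(d ** 2, axis=1, keepdims=True)
src = rays - alpha                                         # lens equation: y = x - alpha(x)

# Bin the deflected rays on the source plane; normalize by the unlensed count per pixel.
H, _, _ = np.histogram2d(src[:, 0], src[:, 1], bins=n_pix, range=[[-5, 5], [-5, 5]])
magnification = H / (n_side / n_pix) ** 2
print("peak magnification in the map:", magnification.max())
```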

74 citations


Journal ArticleDOI
TL;DR: The linear dynamic parts of the system are modeled by a parametric rational function in the z- or s-domain, while the static nonlinearities are represented by a linear combination of nonlinear basis functions.
Abstract: This paper proposes a parametric identification method for parallel Hammerstein systems. The linear dynamic parts of the system are modeled by a parametric rational function in the z- or s-domain, while the static nonlinearities are represented by a linear combination of nonlinear basis functions. The identification method uses a three-step procedure to obtain initial estimates. In the first step, the frequency response function of the best linear approximation is estimated for different input excitation levels. In the second step, the power-dependent dynamics are decomposed over a number of parallel orthogonal branches. In the last step, the static nonlinearities are estimated using a linear least squares estimation. Furthermore, an iterative identification scheme is introduced to refine the estimates. This iterative scheme alternately estimates updated parameters for the linear dynamic systems and for the static nonlinearities. The method is illustrated on a simulation and a validation measurement example.
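
The last step, estimating the static nonlinearities by linear least squares, reduces to ordinary regression once the branch signals are available. The sketch below illustrates this with simulated branch outputs and a polynomial basis; in the actual method the branch signals come from the first two steps, and the basis choice is up to the user.

```python
# Minimal sketch of the final step: static nonlinearities fitted by linear least squares.
# The simulated branch signals and the polynomial basis are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
N, degree = 1000, 3
w = rng.standard_normal((N, 2))                 # outputs of two parallel linear branches
y = 0.5 * w[:, 0] + 2.0 * w[:, 0] ** 3 - 1.0 * w[:, 1] ** 2 + 0.01 * rng.standard_normal(N)

# Regression matrix: basis functions w_b^k for each branch b and degree k = 1..degree.
Phi = np.column_stack([w[:, b] ** k for b in range(w.shape[1]) for k in range(1, degree + 1)])
coeffs, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print(coeffs.round(2))                          # recovers approximately [0.5, 0, 2, 0, -1, 0]
```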

65 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed the use of marginal computation (MaC) for dynamic O-D estimation, which is a computationally efficient method that performs a perturbation analysis with kinematic wave theory principles, to derive this relationship.
Abstract: In origin-destination (O-D) estimation methods, the relationship between the link flows and the O-D flows is typically approximated by a linear function described by the assignment matrix that corresponds with the current estimate of the O-D flows. However, this relationship implicitly assumes the link flows to be separable; this assumption leads to biased results in congested networks. The use of a different linear approximation of the relationship between O-D flows and link flows has been suggested to take into account link flows being nonseparable. However, deriving this relationship is cumbersome in terms of computation time. In this paper, the use of marginal computation (MaC) is proposed. MaC is a computationally efficient method that performs a perturbation analysis, with the use of kinematic wave theory principles, to derive this relationship. The use of MaC for dynamic O-D estimation was tested on a study network and on a real network. In both cases the proposed methodology performed better than ...

Journal ArticleDOI
TL;DR: Two closed-form relations are shown that express the frequency and amplitude of the generated oscillation as functions of the parameters of the Matsuoka neural oscillator model.
Abstract: Although the Matsuoka neural oscillator, which was originally proposed as a model of central pattern generators, has been widely used for various robots performing rhythmic movements, its characteristics are still not clearly explained. This article presents two closed-form relations that express the frequency and amplitude of the generated oscillation as functions of the parameters of the model. Although they are derived from a rough linear approximation, they agree well with results obtained by simulation. The obtained relations also yield some nontrivial predictions about the properties of the oscillator.
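
For readers who want to check such relations numerically, the sketch below simulates a standard two-neuron Matsuoka oscillator and estimates the frequency and amplitude of its output from the trajectory; the parameter names, values, and the specific model variant are illustrative assumptions rather than those of the article.

```python
# Simulation sketch of a two-neuron Matsuoka oscillator; parameters are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

def matsuoka(t, s, tau_r=0.25, tau_a=0.5, beta=2.5, gamma=2.5, c=1.0):
    x1, x2, v1, v2 = s
    y1, y2 = max(x1, 0.0), max(x2, 0.0)        # firing rates (half-wave rectified)
    dx1 = (c - x1 - beta * v1 - gamma * y2) / tau_r
    dx2 = (c - x2 - beta * v2 - gamma * y1) / tau_r
    dv1 = (y1 - v1) / tau_a                    # adaptation (fatigue) states
    dv2 = (y2 - v2) / tau_a
    return [dx1, dx2, dv1, dv2]

sol = solve_ivp(matsuoka, (0, 20), [0.1, 0.0, 0.0, 0.0], max_step=0.01, dense_output=True)
t = np.linspace(10, 20, 20000)                 # discard the transient
y = np.maximum(sol.sol(t)[0], 0) - np.maximum(sol.sol(t)[1], 0)   # oscillator output

# Estimate frequency from upward zero crossings and amplitude from the peak-to-peak swing.
crossings = t[:-1][(y[:-1] < 0) & (y[1:] >= 0)]
print("frequency [Hz]:", 1.0 / np.mean(np.diff(crossings)))
print("amplitude:", 0.5 * (y.max() - y.min()))
```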

Journal ArticleDOI
TL;DR: A low-cost, high-speed architecture for the computation of the binary logarithm is proposed, based on the Mitchell approximation with two correction stages: a piecewise linear interpolation with power-of-two slopes and truncated mantissa, and a LUT-based correction stage that corrects the piecewise interpolation error.
Abstract: A low-cost, high-speed architecture for the computation of the binary logarithm is proposed. It is based on the Mitchell approximation with two correction stages: a piecewise linear interpolation with power-of-two slopes and truncated mantissa, and a LUT-based correction stage that corrects the piecewise interpolation error. The architecture has been implemented in an FPGA device, and comparisons with other low-cost architectures show that it requires less area and achieves higher speed.
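
As a numerical illustration of the structure described above, the sketch below computes Mitchell's approximation and adds a piecewise linear correction whose per-segment slopes are rounded to powers of two; the segment count and coefficient choices are illustrative, not the values used in the proposed architecture.

```python
# Mitchell's binary-logarithm approximation plus an illustrative piecewise linear
# correction with power-of-two slopes (cheap shifts in hardware).
import numpy as np

def mitchell_log2(x):
    """log2(x) = k + log2(1 + m) ~ k + m, where x = 2^k (1 + m), 0 <= m < 1."""
    k = np.floor(np.log2(x))          # in hardware: position of the leading one
    m = x / 2.0 ** k - 1.0            # mantissa
    return k + m, m

def corrected_log2(x, n_seg=8):
    approx, m = mitchell_log2(x)
    # Piecewise linear correction of the error e(m) = log2(1+m) - m, one slope per segment.
    seg = np.minimum((m * n_seg).astype(int), n_seg - 1)
    edges = np.arange(n_seg + 1) / n_seg
    err = np.log2(1.0 + edges) - edges                   # exact error at the segment edges
    slope = (err[seg + 1] - err[seg]) * n_seg            # ideal per-segment slope
    slope = np.sign(slope) * 2.0 ** np.round(np.log2(np.abs(slope) + 1e-30))
    return approx + err[seg] + slope * (m - edges[seg])

x = np.linspace(1.0, 255.0, 1000)
print("max |error| (Mitchell): ", np.abs(mitchell_log2(x)[0] - np.log2(x)).max())
print("max |error| (corrected):", np.abs(corrected_log2(x) - np.log2(x)).max())
```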

Journal ArticleDOI
TL;DR: This work proposes a new approach to gradient-based history matching which is based on model reduction, where the original (nonlinear and high-order) forward model is replaced by a linear reduced-order forward model and, consequently, the adjoint of the tangent linear approximation of the original forward model is replaced by the adjoint of a linear reduced-order forward model.
Abstract: Gradient-based history matching algorithms can be used to adapt the uncertain parameters in a reservoir model using production data. They require, however, the implementation of an adjoint model to compute the gradients, which is usually an enormous programming effort. We propose a new approach to gradient-based history matching which is based on model reduction, where the original (nonlinear and high-order) forward model is replaced by a linear reduced-order forward model and, consequently, the adjoint of the tangent linear approximation of the original forward model is replaced by the adjoint of a linear reduced-order forward model. The reduced-order model is constructed with the aid of the proper orthogonal decomposition method. Due to the linear character of the reduced model, the corresponding adjoint model is easily obtained. The gradient of the objective function is approximated, and the minimization problem is solved in the reduced space; the procedure is iterated with the updated estimate of the parameters if necessary. The proposed approach is adjoint-free and can be used with any reservoir simulator. The method was evaluated for a waterflood reservoir with a channelized permeability field. A comparison with an adjoint-based history matching procedure shows that the model-reduced approach gives a comparable quality of history matches and predictions. The computational efficiency of the model-reduced approach is lower than that of an adjoint-based approach, but higher than that of an approach where the gradients are obtained with simple finite differences.
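
The model-reduction step rests on the proper orthogonal decomposition: state snapshots are collected, an orthonormal basis is taken from their SVD, and the dynamics are projected onto that basis. A generic sketch is given below; the random linear system stands in for the reservoir simulator and is purely illustrative.

```python
# Minimal proper orthogonal decomposition (POD) sketch on a generic linear system.
import numpy as np

rng = np.random.default_rng(2)
n, r, n_snap = 500, 10, 60

A = np.eye(n) + 0.01 * rng.standard_normal((n, n)) / np.sqrt(n)   # full-order dynamics x_{k+1} = A x_k
x = rng.standard_normal(n)
snapshots = []
for _ in range(n_snap):
    x = A @ x
    snapshots.append(x)
X = np.array(snapshots).T                     # n x n_snap snapshot matrix

U, s, _ = np.linalg.svd(X, full_matrices=False)
Phi = U[:, :r]                                # POD basis: first r left singular vectors
A_r = Phi.T @ A @ Phi                         # reduced-order dynamics z_{k+1} = A_r z_k

# Compare one full-order step with its reduced-order reconstruction.
x0 = X[:, -1]
x_full = A @ x0
x_red = Phi @ (A_r @ (Phi.T @ x0))
print("relative error of one reduced step:",
      np.linalg.norm(x_full - x_red) / np.linalg.norm(x_full))
```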

Journal ArticleDOI
TL;DR: The technique provides a new alternative for using focusing information in wavefield-based velocity model building and is appropriate in complex subsurface regions characterized by strong velocity variation.
Abstract: Wave-equation migration velocity analysis is a technique designed to extract and update velocity information from migrated images. The velocity model is updated through the process of optimizing the coherence of images migrated with the known background velocity model. The capacity for handling multi-pathing of the technique makes it appropriate in complex subsurface regions characterized by strong velocity variation. Wave-equation migration velocity analysis operates by establishing a linear relation between a slowness perturbation and a corresponding image perturbation. The linear relationship and the corresponding linearized operator are derived from conventional extrapolation operators and the linearized operator inherits the main properties of frequency-domain wavefield extrapolation. A key step in the implementation is to design an appropriate procedure for constructing an image perturbation relative to a reference image that represents the difference between the current image and a true, or more correct image of the subsurface geology. The target of the inversion is to minimize such an image perturbation by optimizing the velocity model. Using time-shift common-image gathers, one can characterize the imperfections of migrated images by defining the focusing error as the shift of the focus of reflections along the time-shift axis. The focusing error is then transformed into an image perturbation by focusing analysis under the linear approximation. As the focusing error is caused by the incorrect velocity model, the resulting image perturbation can be considered as a mapping of the velocity model error in the image space. Such an approach for constructing the image perturbation is computationally efficient and simple to implement. The technique also provides a new alternative for using focusing information in wavefield-based velocity model building. Synthetic examples demonstrate the successful application of our method to a layered model and a subsalt velocity update problem.

Journal ArticleDOI
TL;DR: This brief presents a rigorous technique, based on mixed-integer linear programming, to obtain optimal coefficient values, which minimize the maximum relative approximation error while using a reduced number of nonzero bits for the coefficients.
Abstract: The hardware computation of the logarithm function is required in a multitude of applications. This brief investigates logarithmic converters based on piecewise linear approximations. It presents a rigorous technique, based on mixed-integer linear programming, to obtain optimal coefficient values, which minimize the maximum relative approximation error while using a reduced number of nonzero bits for the coefficients. The proposed method results in a sensible reduction of the relative approximation error, as compared with previously published results. The hardware implementation realizes the multiplication by a few shifts and additions, avoiding the use of full multipliers. Implementation details and synthesis results in a 90-nm CMOS technology are also described.

Journal ArticleDOI
TL;DR: A novel technique for designing piecewise-polynomial interpolators for the hardware implementation of elementary functions is investigated; it is found that the increase in the approximation error due to constraints between polynomial coefficients can easily be overcome by increasing the fractional bits of the coefficients.
Abstract: A novel technique for designing piecewise-polynomial interpolators for the hardware implementation of elementary functions is investigated in this paper. In the proposed approach, the interval where the function is approximated is subdivided into equal-length segments, and two adjacent segments are grouped into a segment pair. Suitable constraints are then imposed between the coefficients of the two interpolating polynomials in each segment pair. This allows the total number of stored coefficients to be reduced. It is found that the increase in the approximation error due to constraints between polynomial coefficients can easily be overcome by increasing the fractional bits of the coefficients. Overall, compared with standard unconstrained piecewise-polynomial approximation of the same accuracy, the proposed method results in a considerable advantage in terms of the size of the lookup table needed to store the polynomial coefficients. The calculation of the coefficients of the constrained polynomials and the optimization of the coefficient bit widths are also investigated. Results for several elementary functions and target precisions ranging from 12 to 42 bits are presented. The paper also presents VLSI implementation results, targeting a 90 nm CMOS technology and using both direct and Horner architectures, for constrained degree-1, degree-2, and degree-3 approximations.

Journal ArticleDOI
TL;DR: This paper exploits a novel modal decomposition of the state-space model and uses linear matrix inequalities (LMIs) for suboptimal control design of distributed controllers with guaranteed H∞ performance for formations of any size.
Abstract: In this paper, we consider the problem of designing a distributed controller for a formation of spacecraft following a periodic orbit. Each satellite is controlled locally on the basis of information from only a subset of the others (the nearest ones). We describe the dynamics of each spacecraft by means of a linear time-periodic (LTP) approximation, and we cast the satellite formation into a state-space formulation that facilitates control synthesis. Our technique exploits a novel modal decomposition of the state-space model and uses linear matrix inequalities (LMIs) for suboptimal control design of distributed controllers with guaranteed H∞ performance for formations of any size. The application of the method is shown in two case studies. The first example is inspired by a mission in a low, sun-synchronous Earth orbit, namely the new Dutch-Chinese Formation for Atmospheric Science and Technology demonstration mission (FAST), which is now in the preliminary design phase. The second example deals with a formation of spacecraft in a halo orbit.

Journal ArticleDOI
TL;DR: A new method to calculate the similarity of time series, based on piecewise linear approximation (PLA) and derivative dynamic time warping (DDTW), is proposed; it can create line segments to approximate a time series faster than conventional linear approximation.
Abstract: We propose a new method to calculate the similarity of time series based on piecewise linear approximation (PLA) and derivative dynamic time warping (DDTW). The proposed method includes two phases. The first is a divisive approach to piecewise linear approximation based on the middle curve of the original time series. Besides giving attractive results, it can create line segments to approximate a time series faster than conventional linear approximation. At the same time, the high-dimensional space is reduced to a lower-dimensional one, and the line segments approximating the time series are used to calculate the similarity. In the second phase, we use the main idea of DDTW to provide another similarity measure based on the line segments obtained in the first phase. We empirically compare our new approach to other techniques and demonstrate its superiority.
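
The two phases can be prototyped in a few lines: a top-down piecewise linear approximation that records one slope per segment, followed by dynamic time warping on the slope sequences (the derivative-based comparison that DDTW advocates). The splitting rule, error threshold, and segment representation below are simplifications for illustration, not the authors' exact procedure.

```python
# Sketch of (1) top-down piecewise linear approximation and (2) DTW on segment slopes.
import numpy as np

def pla(series, max_err=0.05):
    """Recursive top-down split: fit a line, split at the worst point if needed."""
    def split(lo, hi):
        x = np.arange(lo, hi + 1)
        a, b = np.polyfit(x, series[lo:hi + 1], 1)
        resid = np.abs(series[lo:hi + 1] - (a * x + b))
        if resid.max() <= max_err or hi - lo < 2:
            return [a]                                   # keep the segment slope
        k = lo + int(resid.argmax())
        k = min(max(k, lo + 1), hi - 1)                  # never split at an endpoint
        return split(lo, k) + split(k, hi)
    return np.array(split(0, len(series) - 1))

def dtw(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance."""
    D = np.full((len(a) + 1, len(b) + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            D[i, j] = abs(a[i - 1] - b[j - 1]) + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[-1, -1]

t = np.linspace(0, 4 * np.pi, 400)
q, c = np.sin(t), np.sin(t + 0.4)
# Warp the slope (derivative) sequences of the PLA segments instead of the raw values.
print("PLA+DDTW distance:", dtw(pla(q), pla(c)))
```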

Journal ArticleDOI
TL;DR: A multi-wave approximation on the rarefaction fan is proposed to avoid the occurrence of rarefaction shocks in computations. Computational efficiency comparisons show that the developed scheme can reduce the computational time effectively as the time step is increased.

Journal ArticleDOI
TL;DR: In this article, a modified version of the Dyer-Roeder relation is presented and it is shown that this modified relation is consistent with the correction obtained within the weak-lensing approximation.
Abstract: The distance-redshift relation plays an important role in cosmology. In the standard approach to cosmology, it is assumed that this relation is the same as in a homogeneous universe. As the real Universe is not homogeneous, there are several methods used to calculate the correction. The weak-lensing approximation and the Dyer-Roeder relation are among them. This paper establishes a link between these two approximations. It is shown that if the Universe is homogeneous with only small density fluctuations along the line of sight that vanish after averaging, then the distance correction is negligible. It is also shown that a vanishing three-dimensional average of density fluctuations does not imply that the mean of density fluctuations along the line of sight is zero. In this case, even within the linear approximation, the distance correction is not negligible. A modified version of the Dyer-Roeder relation is presented and it is shown that this modified relation is consistent with the correction obtained within the weak-lensing approximation. The correction to the distance for a source at z ∼ 2 is of the order of a few per cent. Thus, with the increasing precision of cosmological observations, an accurate estimation of the distance is essential. Otherwise errors due to miscalculation of the distance can become a major source of systematics.
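
For reference, the Dyer-Roeder relation discussed here is usually written as an ODE for the angular diameter distance along a beam that intercepts only a fraction alpha of the mean matter density; one common convention is shown below (sign and factor conventions vary between authors).

```latex
% Dyer-Roeder equation for the angular diameter distance with smoothness parameter alpha:
\[
  \frac{d^{2}D_{A}}{dz^{2}}
  +\left(\frac{2}{1+z}+\frac{1}{H}\frac{dH}{dz}\right)\frac{dD_{A}}{dz}
  +\frac{3}{2}\,\alpha\,\Omega_{m}\,\frac{H_{0}^{2}}{H^{2}}\,(1+z)\,D_{A}=0,
  \qquad
  D_{A}(0)=0,\quad \left.\frac{dD_{A}}{dz}\right|_{z=0}=\frac{c}{H_{0}},
\]
% where alpha = 1 recovers the distance of the homogeneous FLRW model.
```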

Journal ArticleDOI
TL;DR: A variational norm associated with sets of computational units and used in function approximation, learning from data, and infinite-dimensional optimization is investigated, and upper and lower bounds on the G_K-variation norms of functions having certain integral representations are given.
Abstract: A variational norm associated with sets of computational units and used in function approximation, learning from data, and infinite-dimensional optimization is investigated. For sets G_K obtained by varying a vector y of parameters in a fixed-structure computational unit K(·, y) (e.g., the set of Gaussians with free centers and widths), upper and lower bounds on the G_K-variation norms of functions having certain integral representations are given, in terms of the ℓ1-norms of the weighting functions in such representations. Families of functions for which the two norms are equal are described.
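
The variational norm in question is the standard G-variation: the Minkowski functional of the closed, convex, symmetric hull of the set G of computational units, with the closure taken in the ambient normed space X.

```latex
\[
  \|f\|_{G}
  \;=\;
  \inf\Bigl\{\,c>0 \;:\; f/c \in \operatorname{cl}_{X}\,\operatorname{conv}\bigl(G\cup(-G)\bigr)\Bigr\}.
\]
```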

Journal ArticleDOI
TL;DR: The accuracy of dynamic parameter compensation is improved by representing the dynamic features as a linear transformation of a window of static features and, importantly, their correlations.
Abstract: Model compensation is a standard way of improving the robustness of speech recognition systems to noise. A number of popular schemes are based on vector Taylor series (VTS) compensation, which uses a linear approximation to represent the influence of noise on the clean speech. To compensate the dynamic parameters, the continuous time approximation is often used. This approximation uses a point estimate of the gradient, which fails to take into account that dynamic coefficients are a function of a number of consecutive static coefficients. In this paper, the accuracy of dynamic parameter compensation is improved by representing the dynamic features as a linear transformation of a window of static features. A modified version of VTS compensation is applied to the distribution of the window of static features and, importantly, their correlations. These compensated distributions are then transformed to distributions over standard static and dynamic features. With this improved approximation, it is also possible to obtain full-covariance corrupted speech distributions. This addresses the correlation changes that occur in noise. The proposed scheme outperformed the standard VTS scheme by 10% to 20% relative on a range of tasks.
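
In a common formulation of VTS compensation for cepstral features, the corrupted speech y is related to clean speech x, additive noise n, and convolutional channel h through a nonlinear mismatch function, which is then linearized about the current means; the relations below follow that standard form (C is the DCT matrix), and details may differ from the paper's notation.

```latex
% Mismatch function and its first-order (VTS) expansion about (mu_x, mu_n, mu_h):
\[
  y \;=\; x + h + C\,\log\!\bigl(1+\exp\bigl(C^{-1}(n-x-h)\bigr)\bigr),
\]
\[
  y \;\approx\; y_{0} + J_{x}\,(x-\mu_{x}) + J_{h}\,(h-\mu_{h}) + J_{n}\,(n-\mu_{n}),
  \qquad J_{h}=J_{x},\quad J_{n}=I-J_{x}.
\]
```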

Journal ArticleDOI
TL;DR: In order to limit the number of (usually burdensome) physical model runs inside the inversion algorithm to a reasonable level, a nonlinear approximation methodology making use of Kriging and a stochastic EM algorithm is presented.

01 Jan 2011
TL;DR: In this paper, the authors assemble a new substitution box (S-box) using a fractional linear transformation of a particular type and analyze the proposed box with respect to the Strict Avalanche Criterion (SAC), BIC, differential approximation probability (DP), linear approximation probability (LP), and nonlinearity.
Abstract: In this letter, we assemble a new substitution box (S-box) using a fractional linear transformation of a particular type and analyze the proposed box with respect to several criteria, namely the Strict Avalanche Criterion (SAC), the Bit Independence Criterion (BIC), differential approximation probability (DP), linear approximation probability (LP), and nonlinearity. Further, we compare the results of these analyses with those of the AES, APA, Gray, Xyi, Skipjack, S8 AES, and Prime S-boxes to determine how our proposed box ranks relative to the others.
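
To make the construction concrete, the sketch below builds a 256-entry S-box from a fractional linear (Mobius) map z -> (a*z + b)/(c*z + d) over GF(2^8). The field polynomial, the constants a, b, c, d, and the convention for handling the map's pole are illustrative assumptions, not the parameters chosen in the paper.

```python
# Rough sketch of an S-box from a fractional linear transformation over GF(2^8).
POLY = 0x11B   # AES field polynomial x^8 + x^4 + x^3 + x + 1 (an illustrative assumption)

def gf_mul(x, y):
    """Carry-less multiplication modulo POLY in GF(2^8)."""
    r = 0
    while y:
        if y & 1:
            r ^= x
        x <<= 1
        if x & 0x100:
            x ^= POLY
        y >>= 1
    return r

def gf_inv(x):
    """Multiplicative inverse via x^(2^8 - 2); gf_inv(0) evaluates to 0 by construction."""
    r, e = 1, 254
    while e:
        if e & 1:
            r = gf_mul(r, x)
        x = gf_mul(x, x)
        e >>= 1
    return r

def mobius_sbox(a, b, c, d):
    assert gf_mul(a, d) != gf_mul(b, c), "a*d must differ from b*c (map must be invertible)"
    box = []
    for z in range(256):
        den = gf_mul(c, z) ^ d
        if den == 0:
            box.append(gf_mul(a, gf_inv(c)))          # image of the pole (point at infinity)
        else:
            box.append(gf_mul(gf_mul(a, z) ^ b, gf_inv(den)))
    assert len(set(box)) == 256                        # sanity check: the S-box is a permutation
    return box

sbox = mobius_sbox(a=29, b=15, c=8, d=9)               # illustrative constants
print(sbox[:8])
```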

01 Jan 2011
TL;DR: In this paper, the authors describe the evaluation of the computational complexity of a software implementation of the finite element method and illustrate the increase in complexity in the transition from two-dimensional to three-dimensional problems.
Abstract: This paper describes the evaluation of the computational complexity of a software implementation of the finite element method. It has been used to predict the approximate time in which given tasks will be solved. The paper also illustrates the increase in computational complexity in the transition from two-dimensional to three-dimensional problems.

Journal ArticleDOI
TL;DR: In this paper, the cosmological "constant" and the Hubble parameter are considered in the Weyl theory of gravity, by taking them as functions of r and t, respectively.
Abstract: In this paper, the cosmological "constant" and the Hubble parameter are considered in the Weyl theory of gravity, by taking them as functions of r and t, respectively. Based on this theory and in the linear approximation, we obtain the values of H0 and Λ0 which are in good agreement with the known values of the parameters for the current state of the universe.

Journal ArticleDOI
TL;DR: It is shown that deterministic back action can be compensated by using active elements, whereas stochastic back action is unavoidable and depends on the temperature of the measurement device.
Abstract: In this paper, we take a control-theoretic approach to answering some standard questions in statistical mechanics, and use the results to derive limitations of classical measurements. A central problem is the relation between systems which appear macroscopically dissipative but are microscopically lossless. We show that a linear system is dissipative if, and only if, it can be approximated by a linear lossless system over arbitrarily long time intervals. Hence lossless systems are in this sense dense in dissipative systems. A linear active system can be approximated by a nonlinear lossless system that is charged with initial energy. As a by-product, we obtain mechanisms explaining the Onsager relations from time-reversible lossless approximations, and the fluctuation-dissipation theorem from uncertainty in the initial state of the lossless system. The results are applied to measurement devices and are used to quantify limits on the so-called observer effect, also called back action, which is the impact the measurement device has on the observed system. In particular, it is shown that deterministic back action can be compensated by using active elements, whereas stochastic back action is unavoidable and depends on the temperature of the measurement device.

Journal ArticleDOI
TL;DR: A new algorithm for initializing and estimating Wiener-Hammerstein models is presented; it makes use of the best linear model of the system, which is split in all possible ways into two linear sub-models in order to avoid many local minima.

Journal ArticleDOI
TL;DR: The main results define new Sobolev spaces on these domains and study polynomial approximations for functions in these spaces, including simultaneous approximation by polynomials and the relation between the best approximation of a function and its derivatives.