
Showing papers on "Linear approximation" published in 2010


Proceedings Article
21 Jun 2010
TL;DR: The Greedy-GQ algorithm is an extension of recent work on gradient temporal-difference learning to a control setting in which the target policy is greedy with respect to a linear approximation to the optimal action-value function.
Abstract: We present the first temporal-difference learning algorithm for off-policy control with unrestricted linear function approximation whose per-time-step complexity is linear in the number of features. Our algorithm, Greedy-GQ, is an extension of recent work on gradient temporal-difference learning, which has hitherto been restricted to a prediction (policy evaluation) setting, to a control setting in which the target policy is greedy with respect to a linear approximation to the optimal action-value function. A limitation of our control setting is that we require the behavior policy to be stationary. We call this setting latent learning because the optimal policy, though learned, is not manifest in behavior. Popular off-policy algorithms such as Q-learning are known to be unstable in this setting when used with linear function approximation.

233 citations
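The abstract above describes the Greedy-GQ per-time-step update only in words. The sketch below, in Python with NumPy, shows one plausible form of that update for linear action-value features; the function name, the two step sizes, and the feature layout are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def greedy_gq_step(theta, w, phi_sa, reward, phis_next, gamma, alpha, beta):
    """One Greedy-GQ update (sketch under assumed notation, not the authors' code).

    theta     : main weights of the linear action-value approximation
    w         : secondary weights of the gradient-TD correction term
    phi_sa    : feature vector of the current state-action pair
    phis_next : one feature vector per available action at the next state
    """
    # Target policy is greedy with respect to the current linear approximation.
    q_next = phis_next @ theta
    phi_greedy = phis_next[np.argmax(q_next)]

    # TD error toward the greedy action value at the next state.
    delta = reward + gamma * np.max(q_next) - phi_sa @ theta

    # Gradient-TD updates: theta gets the correction term, w tracks the TD error.
    theta = theta + alpha * (delta * phi_sa - gamma * (w @ phi_sa) * phi_greedy)
    w = w + beta * (delta - w @ phi_sa) * phi_sa
    return theta, w
```

Each update touches every feature a constant number of times, consistent with the per-time-step complexity being linear in the number of features.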


Journal ArticleDOI
TL;DR: A new multiscale finite element method is introduced which accurately captures solutions of elliptic interface problems with high-contrast coefficients using only coarse quasiuniform meshes, without resolving the interfaces.
Abstract: We introduce a new multiscale finite element method which is able to accurately capture solutions of elliptic interface problems with high-contrast coefficients by using only coarse quasiuniform meshes, and without resolving the interfaces. A typical application would be the modelling of flow in a porous medium containing a number of inclusions of low (or high) permeability embedded in a matrix of high (respectively low) permeability. Our method is H^1-conforming, with degrees of freedom at the nodes of a triangular mesh and requiring the solution of subgrid problems for the basis functions on elements which straddle the coefficient interface but which use standard linear approximation otherwise. A key point is the introduction of novel coefficient-dependent boundary conditions for the subgrid problems. Under moderate assumptions, we prove that our methods have (optimal) convergence rate of O(h) in the energy norm and O(h^2) in the L_2 norm, where h is the (coarse) mesh diameter and the hidden constants in these estimates are independent of the “contrast” (i.e. ratio of largest to smallest value) of the PDE coefficient. For standard elements the best estimate in the energy norm would be O(h^(1/2−ε)) with a hidden constant which in general depends on the contrast. The new interior boundary conditions depend not only on the contrast of the coefficients, but also on the angles of intersection of the interface with the element edges.

207 citations


Journal ArticleDOI
TL;DR: Two easy-to-implement methods for the piecewise linear approximation of functions of two variables and a detailed description of how the methods can be embedded in a MILP model are considered.

163 citations


Journal ArticleDOI
TL;DR: This work presents a general approach for designing approximation algorithms for a fundamental class of geometric clustering problems in arbitrary dimensions, which leads to simple randomized algorithms for the k-means, k-median, and discrete k-means problems.
Abstract: We present a general approach for designing approximation algorithms for a fundamental class of geometric clustering problems in arbitrary dimensions. More specifically, our approach leads to simple randomized algorithms for the k-means, k-median and discrete k-means problems that yield (1+ε)-approximations with probability ≥ 1/2 and running times of O(2^((k/ε)^O(1)) dn). These are the first algorithms for these problems whose running times are linear in the size of the input (nd for n points in d dimensions) assuming k and ε are fixed. Our method is general enough to be applicable to clustering problems satisfying certain simple properties and is likely to have further applications.

153 citations


Journal ArticleDOI
TL;DR: This paper focuses on the investigation of the spatial truncation and discretization of the secondary source distribution occurring in real-world implementations and presents a rigorous analysis of evanescent and propagating components in the reproduced sound field.
Abstract: In this paper, we consider physical reproduction of sound fields via planar and linear distributions of secondary sources (i.e., loudspeakers). The presented approach employs a formulation of the reproduction equation in spatial frequency domain which is explicitly solved for the secondary source driving signals. Wave field synthesis (WFS), the alternative formulation, can be shown to be equivalent under equal assumptions. Unlike the WFS formulation, the presented approach does not employ a far-field approximation when linear secondary source distributions are considered but provides exact results. We focus on the investigation of the spatial truncation and discretization of the secondary source distribution occurring in real-world implementations and present a rigorous analysis of evanescent and propagating components in the reproduced sound field.

117 citations


Journal ArticleDOI
TL;DR: It is proved that linearizing the inverse problem of EIT does not lead to shape errors for piecewise-analytic conductivities, and bounds are obtained on how well the linear reconstructions and the true conductivity difference agree.
Abstract: For electrical impedance tomography (EIT), the linearized reconstruction method using the Frechet derivative of the Neumann-to-Dirichlet map with respect to the conductivity has been widely used in the last three decades. However, few rigorous mathematical results are known regarding the errors caused by the linear approximation. In this work we prove that linearizing the inverse problem of EIT does not lead to shape errors for piecewise-analytic conductivities. If a solution of the linearized equations exists, then it has the same outer support as the true conductivity change, no matter how large the latter is. Under an additional definiteness condition we also show how to approximately solve the linearized equation so that the outer support converges toward the right one. Our convergence result is global and also applies for approximations by noisy finite-dimensional data. Furthermore, we obtain bounds on how well the linear reconstructions and the true conductivity difference agree on the boundary of t...

110 citations


Journal ArticleDOI
TL;DR: Semidiscrete finite element approximation of the linear stochastic wave equation with additive noise is studied in a semigroup framework under minimal regularity assumptions, and optimal error estimates for the deterministic problem are used to prove strong convergence estimates for the stochastic problem.
Abstract: Semidiscrete finite element approximation of the linear stochastic wave equation (LSWE) with additive noise is studied in a semigroup framework. Optimal error estimates for the deterministic problem are obtained under minimal regularity assumptions. These are used to prove strong convergence estimates for the stochastic problem. The theory presented here applies to multidimensional domains and spatially correlated noise. Numerical examples illustrate the theory.

91 citations


Journal ArticleDOI
TL;DR: The main features of the method are the following: rapid convergence on the entire representative set of parameters, rigorous a posteriori error estimators for the output, and a parameter-independent off-line phase together with a computationally very efficient on-line phase to enable the rapid solution of many-query problems arising in control, optimization, and design.
Abstract: We propose certified reduced basis methods for the efficient and reliable evaluation of a general output that is implicitly connected to a given parameterized input through the harmonic Maxwell's equations. The truth approximation and the development of the reduced basis through a greedy approach is based on a discontinuous Galerkin approximation of the linear partial differential equation. The formulation allows the use of different approximation spaces for solving the primal and the dual truth approximation problems to respect the characteristics of both problem types, leading to an overall reduction in the off-line computational effort. The main features of the method are the following: (i) rapid convergence on the entire representative set of parameters, (ii) rigorous a posteriori error estimators for the output, and (iii) a parameter-independent off-line phase and a computationally very efficient on-line phase to enable the rapid solution of many-query problems arising in control, optimization, and design. The versatility and performance of this approach are shown through a numerical experiment, illustrating the modeling of material variations and problems with resonant behavior.

85 citations


Journal ArticleDOI
TL;DR: In this article, the authors examine the argument that the effect of cosmological structure formation on the average expansion rate is negligible because the linear approximation to the metric remains applicable in the regime of nonlinear density perturbations, and discuss why arguments based on the linear theory are not valid.
Abstract: It has been argued that the effect of cosmological structure formation on the average expansion rate is negligible, because the linear approximation to the metric remains applicable in the regime of nonlinear density perturbations. We discuss why the arguments based on the linear theory are not valid. We emphasize the difference between Newtonian gravity and the weak field, small velocity limit of general relativity in the cosmological setting.

83 citations


Journal ArticleDOI
TL;DR: In this article, the authors develop explicit, piecewise-linear formulations of functions f(x): ℝ^n ↦ ℝ, n ≤ 3, that are defined on an orthogonal grid of vertex points.
Abstract: We develop explicit, piecewise-linear formulations of functions f(x): ℝ^n ↦ ℝ, n ≤ 3, that are defined on an orthogonal grid of vertex points. If mixed-integer linear optimization problems (MILPs) involving multidimensional piecewise-linear functions can be easily and efficiently solved to global optimality, then non-analytic functions can be used as an objective or constraint function for large optimization problems. Linear interpolation between fixed gridpoints can also be used to approximate generic, nonlinear functions, allowing us to approximately solve problems using mixed-integer linear optimization methods. Toward this end, we develop two different explicit formulations of piecewise-linear functions and discuss the consequences of integrating the formulations into an optimization problem.

80 citations
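As a small aside to the abstract above, the sketch below illustrates only the simplest ingredient mentioned there, linear interpolation between fixed gridpoints in one dimension; it is not one of the paper's two MILP formulations, and the target function and grid are hypothetical.

```python
import numpy as np

def piecewise_linear(x_grid, f_grid, x):
    """Evaluate the piecewise-linear interpolant defined by values f_grid
    at the fixed gridpoints x_grid (assumed sorted) at the points x."""
    i = np.clip(np.searchsorted(x_grid, x) - 1, 0, len(x_grid) - 2)
    t = (x - x_grid[i]) / (x_grid[i + 1] - x_grid[i])
    return (1.0 - t) * f_grid[i] + t * f_grid[i + 1]

# Hypothetical example: a nonlinear target sampled on a coarse grid.
f = lambda x: np.exp(x) * np.sin(3.0 * x)
x_grid = np.linspace(0.0, 2.0, 9)
x_test = np.linspace(0.0, 2.0, 1001)
err = np.max(np.abs(piecewise_linear(x_grid, f(x_grid), x_test) - f(x_test)))
print(f"max error of the 9-point piecewise-linear approximation: {err:.3e}")
```

Embedding such a function in a MILP, as the paper does, additionally requires binary or SOS-type variables to select the active segment.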


Journal ArticleDOI
TL;DR: The Fréchet derivative of a matrix function f at A in the direction E, where A and E are real matrices, can be approximated by Im f(A + ihE)/h for some suitably small h; this approximation is proved to be of second order in h for analytic functions f and also for the matrix sign function.
Abstract: We show that the Fréchet derivative of a matrix function f at A in the direction E, where A and E are real matrices, can be approximated by Im f(A + ihE)/h for some suitably small h. This approximation, requiring a single function evaluation at a complex argument, generalizes the complex step approximation known in the scalar case. The approximation is proved to be of second order in h for analytic functions f and also for the matrix sign function. It is shown that it does not suffer the inherent cancellation that limits the accuracy of finite difference approximations in floating point arithmetic. However, cancellation does nevertheless vitiate the approximation when the underlying method for evaluating f employs complex arithmetic. The ease of implementation of the approximation, and its superiority over finite differences, make it attractive when specialized methods for evaluating the Fréchet derivative are not available, and in particular for condition number estimation when used in conjunction with a block 1-norm estimation algorithm.
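A quick numerical sketch of the complex-step formula stated above, taking the matrix exponential as the example f via scipy.linalg.expm; the matrices, the step sizes, and the use of a forward finite difference as the comparison point are illustrative choices, not the paper's experiments.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))   # real test matrices (hypothetical data)
E = rng.standard_normal((5, 5))

# Complex-step approximation of the Frechet derivative L_exp(A, E):
# one function evaluation at a complex argument, no subtractive cancellation.
h_cs = 1e-20
L_cs = np.imag(expm(A + 1j * h_cs * E)) / h_cs

# Forward finite difference for comparison; its accuracy is limited by cancellation.
h_fd = 1e-8
L_fd = (expm(A + h_fd * E) - expm(A)) / h_fd

rel_diff = np.linalg.norm(L_cs - L_fd) / np.linalg.norm(L_cs)
print(f"relative difference between the two approximations: {rel_diff:.2e}")
```

The two estimates typically agree to roughly the accuracy of the finite difference, which is what the abstract's cancellation argument predicts.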

Journal ArticleDOI
TL;DR: This paper investigates hybrid schemes that combine WENO schemes with high-order upwind linear schemes using different discontinuity indicators, and explores the possibility of avoiding the local characteristic decompositions and the nonlinear weights for part of the procedure, hence reducing the cost while still maintaining non-oscillatory properties for problems with strong shocks.

Journal ArticleDOI
TL;DR: In this article, an impulsive Hopfield-type neural network system with piecewise constant argument of generalized type is introduced, and sufficient conditions for the existence of the unique equilibrium are obtained.
Abstract: In this paper we introduce an impulsive Hopfield-type neural network system with piecewise constant argument of generalized type. Sufficient conditions for the existence of the unique equilibrium are obtained. Existence and uniqueness of solutions of such systems are established. A stability criterion based on linear approximation is proposed. Some sufficient conditions for the existence and stability of periodic solutions are derived. An example with numerical simulations is given to illustrate our results.

Journal ArticleDOI
TL;DR: In this paper, a general GFEM/XFEM formulation is presented to solve two-dimensional problems characterized by C0 continuity with gradient jumps along discrete lines, such as those found in the thermal and structural analysis of heterogeneous materials or in line load problems in homogeneous media.
Abstract: A general GFEM/XFEM formulation is presented to solve two-dimensional problems characterized by C0 continuity with gradient jumps along discrete lines, such as those found in the thermal and structural analysis of heterogeneous materials or in line load problems in homogeneous media. The new enrichment functions presented in this paper allow solving problems with multiple intersecting discontinuity lines, such as those found at triple junctions in polycrystalline materials and in actively cooled microvascular materials with complex embedded networks. We show how the introduction of enrichment functions yields accurate finite element solutions with meshes that do not conform to the geometry of the discontinuity lines. The use of the proposed enrichments in both linear and quadratic approximations is investigated, as well as their combination with interface enrichment functions available in the literature. Through a detailed convergence study, we demonstrate that quadratic approximations do not require any correction to the method to recover optimal convergence rates and that they perform better than linear approximations for the same number of degrees of freedom in the solution of these types of problems. In the linear case, the effectiveness of correction functions proposed in the literature is also investigated. Copyright © 2009 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: In this paper, an adaptive generalized predictive control (GPC) system is presented for the management of output power of solid oxide fuel cells (SOFCs) using a fractional-order model, which is more accurate than an integer-order model in depicting the dynamics.

Journal ArticleDOI
TL;DR: In this article, a generalized gradient approximation based on the Perdew, Burke, Ernzerhof (PBE) functional form is proposed that interpolates between the rapidly varying (PBE) and slowly varying (PBEsol) density regimes.
Abstract: We propose a generalized gradient approximation constructed for hybrid interfaces, which is based on the Perdew, Burke, Ernzerhof (PBE) functional form and interpolates between the rapidly varying (PBE) and slowly varying (PBEsol, the revised PBE for solid-state systems) density regimes. This functional approximation (named PBEint) recovers the right second-order gradient expansion of the exchange energy and is accurate for jellium surfaces, interacting jellium slabs, molecules, solids, and metal-molecule interfaces.

Journal ArticleDOI
TL;DR: This work introduces a genetically optimized synergy based on intervals' numbers (INs), which invariably demonstrates a better capacity for generalization and learns orders of magnitude faster than alternative methods while inducing clearly fewer rules.

01 Jan 2010
TL;DR: An iterative version of the IMplicit Pressure Explicit Saturation method for two-phase immiscible fluid flow in porous media with different capillarity pressures is presented and the convergence theorem of the method is established under the natural conditions.
Abstract: This work is a continuation of Kou and Sun (36), where we present an efficient improvement on the IMplicit Pressure Explicit Saturation (IMPES) method for two-phase immiscible fluid flow in porous media with different capillarity pressures. In the previous work, we present an implicit treatment of capillary pressure appearing in the pressure equation. A linear approximation of the capillary function is used to couple the implicit saturation equation into the pressure equation that is solved implicitly. In this paper, we present an iterative version of this method. It is well known that the fully implicit scheme has unconditional stability. The new method can be used for solving the coupled system of nonlinear equations arising from the fully implicit scheme. We follow the idea of the previous work, and use the linear approximation of the capillary function at the current iteration. This is different from iterative IMPES, which computes capillary pressure from the saturations at the previous iteration. From this approximation, we couple the saturation equation into the pressure equation, and establish the coupling relation between the pressure and saturation. We employ the relaxation technique to control the convergence of the new method, and we give a choice of relaxation factor. The convergence theorem of our method is established under natural conditions. Numerical examples are provided to demonstrate the performance of our approach, and the results show that our method is efficient and stable.
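To make the coupling idea in the abstract above concrete, the following LaTeX fragment writes out one plausible form of the linear approximation of the capillary pressure function about the previous iterate; the notation (S_w for wetting-phase saturation, superscript ℓ for the iteration index) is assumed here and not taken from the paper.

```latex
% Linearization of the capillary pressure p_c about the previous iterate S_w^\ell
% (illustrative notation):
p_c(S_w) \;\approx\; p_c\!\left(S_w^{\ell}\right)
  + p_c'\!\left(S_w^{\ell}\right)\,\left(S_w - S_w^{\ell}\right).
```

Substituting such a linearized p_c into the pressure equation leaves that equation linear in the unknown saturation, which is how the saturation equation can be coupled into the implicitly solved pressure equation at each iteration.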


Journal ArticleDOI
TL;DR: In this article, a variational multiscale analysis for Lagrangian shock hydrodynamics is presented, in which numerical instabilities are controlled by a stabilizing operator derived using the variational multiscale paradigm.

Journal ArticleDOI
16 Nov 2010-Langmuir
TL;DR: These analytical expressions are significant improvements over the existing equations in the literature that are valid only for large κa because the new equations facilitate the modeling of EDL interactions between nanoscale particles and surfaces over a wide range of ionic strength.
Abstract: Exact, closed-form analytical expressions are presented for evaluating the potential energy of electrical double layer (EDL) interactions between a sphere and an infinite flat plate for three different types of interactions: constant potential, constant charge, and an intermediate case as given by the linear superposition approximation (LSA). By taking advantage of the simpler sphere-plate geometry, simplifying assumptions used in the original Derjaguin approximation (DA) for sphere-sphere interaction are avoided, yielding expressions that are more accurate and applicable over the full range of κa. These analytical expressions are significant improvements over the existing equations in the literature that are valid only for large κa because the new equations facilitate the modeling of EDL interactions between nanoscale particles and surfaces over a wide range of ionic strength.

Journal ArticleDOI
TL;DR: A class of hp-DG methods is presented that is closely related to other DG schemes but combines both a p-optimal jump penalty and lifting stabilization, and the resulting error estimates are proved to be optimal with respect to both the local element sizes and polynomial degrees.
Abstract: The aim of this paper is to overcome the well-known lack of p-optimality in hp-version discontinuous Galerkin (DG) discretizations for the numerical approximation of linear elliptic problems. For this purpose, we shall present and analyze a class of hp-DG methods that is closely related to other DG schemes but combines both a p-optimal jump penalty and lifting stabilization. We will prove that the resulting error estimates are optimal with respect to both the local element sizes and polynomial degrees.

Journal ArticleDOI
TL;DR: This work shows that, if the segments have the same width (to reduce circuit complexity), then the number of segments is given by s(ε) ~ c/√ε as ε → 0⁺, where c = (b − a)√(|f''|_max)/4.
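The asymptotic form above predicts that the segment count grows like 1/√ε. The sketch below checks only that scaling numerically, using an illustrative target function and a chord (endpoint-interpolation) error criterion rather than the paper's best-fit criterion, so the limiting constant differs from c; the quantity s(ε)·√ε should simply level off as ε shrinks.

```python
import numpy as np

def segments_needed(f, a, b, eps, s_max=100000):
    """Smallest number of equal-width segments whose endpoint (chord)
    interpolant stays within eps of f on [a, b]; a dense sample stands
    in for the true maximum error (sketch only)."""
    x = np.linspace(a, b, 20001)
    fx = f(x)
    for s in range(1, s_max):
        knots = np.linspace(a, b, s + 1)
        if np.max(np.abs(fx - np.interp(x, knots, f(knots)))) <= eps:
            return s
    raise RuntimeError("segment budget exceeded")

f, a, b = np.sin, 0.0, np.pi        # hypothetical target function and interval
for eps in (1e-2, 1e-3, 1e-4, 1e-5):
    s = segments_needed(f, a, b, eps)
    print(f"eps = {eps:.0e}   segments = {s:4d}   s * sqrt(eps) = {s * np.sqrt(eps):.3f}")
```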

Journal ArticleDOI
TL;DR: A generalization of local maximum entropy (max‐ent) approximation for high orders of consistency (i.e. quadratic, cubic, …) based upon the application of the de Boor's algorithm to the standard, linear local max‐ent approximation.
Abstract: We present here a generalization of local maximum entropy (max-ent) approximation for high orders of consistency (i.e. quadratic, cubic, …). The method is based upon the application of the de Boor's algorithm to the standard, linear local max-ent approximation. The resulting approximation possesses some interesting properties, such as non-negativity, C∞ smoothness, exact interpolation on the boundary and variation diminishing (no Gibbs effect). The resulting structure has many similarities with B-spline surfaces, but without the tensor-product structure typical of that approximation. Examples are provided of its use in the framework of a Galerkin method showing the potential of the proposed method in solving boundary value problems. Copyright © 2010 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: In this paper, the authors construct approximate solutions for a general stochastic integrodifferential equation which is not explicitly solvable and whose coefficients functionally depend on Lebesgue integrals and stochastic integrals with respect to martingales.

Journal ArticleDOI
TL;DR: In this paper, the existence and uniqueness of eigenvalues were studied and three numerical algorithms, namely Picard iteration, nonlinear Rayleigh quotient iteration and successive linear approximation method (SLAM), were investigated.
Abstract: Nonlinear rank-one modification of the symmetric eigenvalue problem arises from eigenvibrations of mechanical structures with elastically attached loads and calculation of the propagation modes in optical fiber. In this paper, we first study the existence and uniqueness of eigenvalues, and then investigate three numerical algorithms, namely Picard iteration, nonlinear Rayleigh quotient iteration and successive linear approximation method (SLAM). The global convergence of the SLAM is proven under some mild assumptions. Numerical examples illustrate that the SLAM is the most robust method. Mathematics subject classification: 65F15, 65H17, 15A18, 35P30, 65Y20.
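Of the three algorithms named in the abstract, the Picard fixed-point iteration is the simplest to sketch. The code below is a rough illustration for a nonlinear rank-one modification of the form (A + s(λ) b bᵀ) x = λ x, with a hypothetical smooth coefficient s(λ); the eigenvalue-tracking rule and stopping test are assumed choices, and the SLAM variant analyzed in the paper (which linearizes the nonlinear dependence at each step) is not reproduced here.

```python
import numpy as np

def picard_rank_one(A, b, s, lam0, tol=1e-12, max_iter=200):
    """Picard iteration for (A + s(lam) * b b^T) x = lam x  (sketch).
    At each step the nonlinear coefficient is frozen at the current iterate
    and a standard symmetric eigenproblem is solved."""
    lam = lam0
    for _ in range(max_iter):
        evals, evecs = np.linalg.eigh(A + s(lam) * np.outer(b, b))
        k = int(np.argmin(np.abs(evals - lam)))   # follow the eigenvalue nearest the iterate
        lam_new, x = evals[k], evecs[:, k]
        if abs(lam_new - lam) <= tol * max(1.0, abs(lam_new)):
            return lam_new, x
        lam = lam_new
    raise RuntimeError("Picard iteration did not converge")

# Hypothetical data: small symmetric matrix, rank-one direction, nonlinear coefficient.
A = np.diag([1.0, 2.0, 4.0])
b = np.array([1.0, 0.5, 0.25])
s = lambda lam: 1.0 / (1.0 + lam ** 2)
lam, x = picard_rank_one(A, b, s, lam0=1.0)
print("converged eigenvalue:", lam)
```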

Journal ArticleDOI
TL;DR: In this paper, the authors study the VILT solution space and derive a linear approximation which greatly simplifies the computation of the transfers and is amenable to broad global searches; combined with Tisserand graphs and a heuristic optimization procedure, it yields a fast design method for multiple-VILT tours.
Abstract: The announced missions to the Saturn and Jupiter systems renewed the space community's interest in simple design methods for gravity-assist tours at planetary moons. A key element in such trajectories are the V-Infinity Leveraging Transfers (VILTs), which link simple impulsive maneuvers with two consecutive gravity assists at the same moon. VILTs typically include a tangent impulsive maneuver close to an apse location, yielding a desired change in the excess velocity relative to the moon. In this paper we study the VILT solution space and derive a linear approximation which greatly simplifies the computation of the transfers and is amenable to broad global searches. Using this approximation, Tisserand graphs, and a heuristic optimization procedure, we introduce a fast design method for multiple-VILT tours. We use this method to design a trajectory from a highly eccentric orbit around Saturn to a 200-km science orbit at Enceladus. The trajectory is then recomputed removing the linear approximation, showing a Δv change of <4%. The trajectory is 2.7 years long and comprises 52 gravity assists at Titan, Rhea, Dione, Tethys, and Enceladus, and several deterministic maneuvers. Total Δv is only 445 m/s, including the Enceladus orbit insertion, almost 10 times better than the 3.9 km/s of the Enceladus orbit insertion from the Titan–Enceladus Hohmann transfer. The new method and demonstrated results enable a new class of missions that tour and ultimately orbit small-mass moons. Such missions were previously considered infeasible due to flight time and Δv constraints.


Journal ArticleDOI
TL;DR: Numerical experiments demonstrate that although this method is more computationally intensive than traditional methods, it produces more accurate approximation functions.

Journal ArticleDOI
TL;DR: In this article, it is shown that if the universe is homogeneous, with only small density fluctuations along the line of sight that vanish after averaging, then the distance correction is negligible.
Abstract: The distance-redshift relation plays an important role in cosmology. In the standard approach to cosmology it is assumed that this relation is the same as in the homogeneous universe. As the real universe is not homogeneous, there are several methods to calculate the correction. The weak lensing approximation and the Dyer-Roeder relation are two of them. This paper establishes a link between these two approximations. It is shown that if the universe is homogeneous with only small density fluctuations along the line of sight that vanish after averaging, then the distance correction is negligible. It is also shown that a vanishing 3D average of density fluctuations does not imply that the mean of density fluctuations along the line of sight is zero. In this case, even within the linear approximation, the distance correction is not negligible. A modified version of the Dyer-Roeder relation is presented, and it is shown that this modified relation is consistent with the correction obtained within the weak lensing approximation. The correction to the distance for a source at z ~ 2 is of the order of a few percent. Thus, with the increasing precision of cosmological observations, an accurate estimation of the distance is essential. Otherwise, errors due to miscalculating the distance can become a major source of systematics.