Topic

Linear approximation

About: Linear approximation is a research topic. Over the lifetime, 3901 publications have been published within this topic receiving 74764 citations.


Papers
Journal ArticleDOI
TL;DR: Chebyshev approximation by nonlinear families on a general compact space is studied and the existence of a minimal set on which a locally best approximation is locally best is shown.
Abstract: Chebyshev approximation by nonlinear families on a general compact space is studied. Attention is restricted to approximants satisfying a local Haar condition. A necessary and sufficient condition for the approximant to be locally best is given. A linear approximation problem is given which is equivalent to the nonlinear problem of locally best approximation. The existence of a minimal set on which a locally best approximation is locally best is shown. An alternation result is given for approximation on an interval.

17 citations
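The alternation result mentioned in the abstract can be illustrated in the simplest setting: the best uniform (Chebyshev) approximation of a smooth convex function by a straight line, where the error equioscillates at three points. A minimal sketch; the choice of exp(x) on [0, 1] and all numerical details are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Best Chebyshev (minimax) *linear* approximation to a smooth convex
# function f on [a, b]. For convex f this is classical: the optimal
# slope is the secant slope, and the intercept is chosen so that the
# error equioscillates at a, an interior point x0, and b.
# Illustrative example only: f(x) = exp(x) on [0, 1].

f = np.exp
a, b = 0.0, 1.0

m = (f(b) - f(a)) / (b - a)              # optimal slope = secant slope
x0 = np.log(m)                           # interior point with f'(x0) = m
c = (f(a) + f(x0) - m * (a + x0)) / 2.0  # intercept forcing e(a) = -e(x0)

x = np.linspace(a, b, 10001)
err = f(x) - (m * x + c)                 # approximation error

E = np.max(np.abs(err))                  # minimax error
print(E, err[0], err[-1])                # error attains +E at both endpoints
```

The three alternation points (a, x0, b) with error +E, -E, +E are exactly the behaviour the abstract's alternation theorem generalizes to wider approximating families.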

Proceedings ArticleDOI
20 Jun 2007
TL;DR: This paper proposes to improve the convergence speed of piecewise linear function approximation by tracking the dynamics of the value function with the Kalman filter using a random-walk model and casts this as a general framework in which the TD, Q-Learning and MAXQ algorithms are implemented.
Abstract: Reinforcement learning algorithms can become unstable when combined with linear function approximation. Algorithms that minimize the mean-square Bellman error are guaranteed to converge, but often do so slowly or are computationally expensive. In this paper, we propose to improve the convergence speed of piecewise linear function approximation by tracking the dynamics of the value function with the Kalman filter using a random-walk model. We cast this as a general framework in which we implement the TD, Q-Learning and MAXQ algorithms for different domains, and report empirical results demonstrating improved learning speed over previous methods.

17 citations
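The Kalman-filter idea in the abstract can be sketched for the plain linear (rather than piecewise linear) case: the weights of a linear value function follow a random-walk state model, and each TD transition supplies one scalar observation. Everything below (feature dimension, noise covariances, the synthetic transitions) is an assumption for illustration, not the paper's algorithm:

```python
import numpy as np

# Tracking the weights of a linear value function V(s) = phi(s) . w
# with a Kalman filter.
#   State model:  w_t = w_{t-1} + process noise   (random walk)
#   Observation:  r_t = (phi(s) - gamma*phi(s'))^T w + noise  (TD relation)

rng = np.random.default_rng(0)

d = 4                    # number of features (assumed)
gamma = 0.9
Q = 1e-4 * np.eye(d)     # random-walk process-noise covariance (assumed)
R = 0.1                  # observation-noise variance (assumed)

w = np.zeros(d)          # weight estimate
P = np.eye(d)            # weight covariance

def kalman_td_update(w, P, phi, phi_next, r):
    """One Kalman-filter update for a single (s, r, s') transition."""
    P_pred = P + Q                        # random-walk prediction step
    h = phi - gamma * phi_next            # observation vector
    s = h @ P_pred @ h + R                # innovation variance
    k = P_pred @ h / s                    # Kalman gain
    w = w + k * (r - h @ w)               # correct with the TD innovation
    P = P_pred - np.outer(k, h @ P_pred)  # covariance update
    return w, P

# Tiny synthetic demo: transitions generated from a fixed true weight.
w_true = np.array([1.0, -0.5, 0.3, 0.0])
for _ in range(2000):
    phi = rng.normal(size=d)
    phi_next = rng.normal(size=d)
    r = (phi - gamma * phi_next) @ w_true + np.sqrt(R) * rng.normal()
    w, P = kalman_td_update(w, P, phi, phi_next, r)

print(np.round(w, 2))    # estimate should end up close to w_true
```

Because the random-walk model keeps the covariance from collapsing, the filter keeps adapting if the value function drifts, which is the mechanism the paper exploits for faster convergence.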

Journal ArticleDOI
TL;DR: In this article, a bi-level optimization problem to estimate offline time-dependent origin-destination demand on the basis of link flows and historical O-D matrices is presented.
Abstract: This paper presents a bi-level optimization problem to estimate offline time-dependent origin–destination (time-dependent O-D) demand on the basis of link flows and historical O-D matrices. The upper-level problem aims to minimize the summation of errors in both traffic counts and O-D demand. Conventionally, O-D flows are linearly mapped to link flows with the assignment matrix proportions obtained from the dynamic traffic assignment, which is typically formulated as the lower-level problem. However, the linear relationship may be invalid when congestion builds up in the network, and a nonlinear relation between O-D flows and link flows may result. The nonlinearity may lead to a converged solution that is far from the global optimum. An accurate solution should be able to relax the linear assumption and to consider the effect of other O-D flows on the links’ traffic volumes. In this study, a solution method that relied on the sensitivity of assignment proportions to O-D flows was proposed and applied; it...

17 citations
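The conventional linear mapping the abstract refers to can be shown in miniature: link flows are modeled as an assignment matrix applied to O-D demand, and a regularized least-squares step balances link-count errors against deviation from the historical matrix. All matrices and numbers below are invented for illustration; the paper's actual sensitivity-based solution method is not reproduced here:

```python
import numpy as np

# Conventional linear O-D estimation sketch:
#   link_flows ≈ A @ od_demand
# where A holds assignment proportions (row = link, column = O-D pair).
# Here A is treated as fixed; the paper relaxes exactly this assumption
# under congestion, where A itself depends on the O-D flows.

A = np.array([[0.6, 0.0, 0.4],
              [0.4, 1.0, 0.0],
              [0.0, 0.0, 0.6]])

counts = np.array([520.0, 830.0, 240.0])   # observed link counts (made up)
d_hist = np.array([500.0, 600.0, 400.0])   # historical O-D demand (made up)
w = 0.5                                     # weight on the historical term

# One-shot solve of:  minimize ||A d - counts||^2 + w ||d - d_hist||^2
# (a linearized stand-in for the paper's upper-level objective)
lhs = A.T @ A + w * np.eye(3)
rhs = A.T @ counts + w * d_hist
d_est = np.linalg.solve(lhs, rhs)
print(np.round(d_est, 1))
```

By construction the adjusted demand fits the counts at least as well as the historical matrix does; the bi-level formulation in the paper iterates between such an update and a dynamic traffic assignment that refreshes A.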

Journal ArticleDOI
TL;DR: It is proved that the classical Bernstein Voronovskaja-type theorem remains valid in general for all sequences of positive linear approximation operators.

17 citations
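For context, the classical Voronovskaja theorem that this result generalizes can be stated for the Bernstein operators B_n on [0, 1] (quoted here from standard approximation-theory references, not from the paper itself):

```latex
% Classical Voronovskaja theorem: for f bounded on [0,1] and
% twice differentiable at a point x in [0,1],
\lim_{n \to \infty} n \bigl( B_n(f; x) - f(x) \bigr)
  = \frac{x(1-x)}{2}\, f''(x)
```

The paper's contribution is that an asymptotic formula of this type holds not just for Bernstein operators but for all sequences of positive linear approximation operators.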

Journal ArticleDOI
TL;DR: Synthetic and field inversion experiments demonstrate that the proposed approach dramatically reduces the cost of the Hamiltonian Monte Carlo inversion while preserving an accurate and efficient sampling of the posterior probability.
Abstract: Markov chain Monte Carlo algorithms are commonly employed for accurate uncertainty appraisals in non‐linear inverse problems. The downside of these algorithms is the considerable number of samples needed to achieve reliable posterior estimations, especially in high‐dimensional model spaces. To overcome this issue, the Hamiltonian Monte Carlo algorithm has recently been introduced to solve geophysical inversions. Different from classical Markov chain Monte Carlo algorithms, this approach exploits the derivative information of the target posterior probability density to guide the sampling of the model space. However, its main downside is the computational cost of the derivative computation (i.e. the computation of the Jacobian matrix around each sampled model). Possible strategies to mitigate this issue are the reduction of the dimensionality of the model space and/or the use of efficient methods to compute the gradient of the target density. Here we focus on the estimation of elastic properties (P‐, S‐wave velocities and density) from pre‐stack data through a non‐linear amplitude versus angle inversion in which the Hamiltonian Monte Carlo algorithm is used to sample the posterior probability. To decrease the computational cost of the inversion procedure, we employ the discrete cosine transform to reparametrize the model space, and we train a convolutional neural network to predict the Jacobian matrix around each sampled model. The training data set for the network is also parametrized in the discrete cosine transform space, thus allowing for a reduction of the number of parameters to be optimized during the learning phase. Once trained, the network can be used to compute the Jacobian matrix associated with each sampled model in real time.
The outcomes of the proposed approach are compared and validated with the predictions of Hamiltonian Monte Carlo inversions in which a computationally expensive but accurate finite‐difference scheme is used to compute the Jacobian matrix, and with those obtained by replacing the Jacobian with a matrix operator derived from a linear approximation of the Zoeppritz equations. Synthetic and field inversion experiments demonstrate that the proposed approach dramatically reduces the cost of the Hamiltonian Monte Carlo inversion while preserving an accurate and efficient sampling of the posterior probability.

17 citations
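The discrete cosine transform reparametrization step can be sketched in isolation: a smooth 1-D property profile is represented by its first k DCT coefficients, reducing the number of free parameters from n to k. The synthetic velocity profile and all numbers below are assumptions for illustration, not taken from the paper:

```python
import numpy as np
from scipy.fft import dct, idct

# Dimensionality reduction via truncated DCT: keep only the first k of
# n DCT coefficients of a smooth profile (e.g. P-wave velocity vs depth),
# then reconstruct. Smooth profiles compress well because their DCT
# energy concentrates in the low-order coefficients.

n, k = 200, 20
z = np.linspace(0.0, 1.0, n)
vp = 2000 + 800 * z + 50 * np.sin(12 * np.pi * z)  # synthetic profile

c = dct(vp, norm='ortho')       # full DCT spectrum (n coefficients)
c_k = np.zeros(n)
c_k[:k] = c[:k]                 # truncate: k parameters instead of n
vp_rec = idct(c_k, norm='ortho')

rel_err = np.linalg.norm(vp - vp_rec) / np.linalg.norm(vp)
print(rel_err)                  # small relative reconstruction error
```

In the paper's setting the HMC sampler (and the network predicting the Jacobian) then operates on the k truncated coefficients rather than on the full n-sample model, which is where the cost saving comes from.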


Network Information
Related Topics (5)
Nonlinear system: 208.1K papers, 4M citations (92% related)
Robustness (computer science): 94.7K papers, 1.6M citations (88% related)
Matrix (mathematics): 105.5K papers, 1.9M citations (88% related)
Differential equation: 88K papers, 2M citations (87% related)
Optimization problem: 96.4K papers, 2.1M citations (87% related)
Performance
Metrics
No. of papers in the topic in previous years
Year	Papers
2023	7
2022	29
2021	97
2020	134
2019	124
2018	147