
Showing papers on "Linear approximation" published in 2015


Journal ArticleDOI
TL;DR: A visualization method that uses prosection (projection of a section) to visualize 4-D approximation sets is proposed; it reproduces the shape, range, and distribution of vectors in the observed approximation sets well and can handle multiple large approximation sets while being robust and computationally inexpensive.
Abstract: In evolutionary multiobjective optimization, it is very important to be able to visualize approximations of the Pareto front (called approximation sets) that are found by multiobjective evolutionary algorithms. While scatter plots can be used for visualizing 2-D and 3-D approximation sets, more advanced approaches are needed to handle four or more objectives. This paper presents a comprehensive review of the existing visualization methods used in evolutionary multiobjective optimization, showing their outcomes on two novel 4-D benchmark approximation sets. In addition, a visualization method that uses prosection (projection of a section) to visualize 4-D approximation sets is proposed. The method reproduces the shape, range, and distribution of vectors in the observed approximation sets well and can handle multiple large approximation sets while being robust and computationally inexpensive. Even more importantly, for some vectors, the visualization with prosections preserves the Pareto dominance relation and relative closeness to reference points. The method is analyzed theoretically and demonstrated on several approximation sets.
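
A minimal sketch of the prosection idea, under our own conventions (section anchored at the origin in the (f1, f2) plane; all names illustrative): vectors within distance d of the section plane at angle phi are kept and projected onto it, collapsing four objectives to three coordinates that can go into an ordinary 3-D scatter plot.

```python
import numpy as np

def prosection_4d(points, phi_deg=45.0, d=0.05):
    """Project 4-D objective vectors to 3-D via prosection.

    Keeps points within distance d of the section plane at angle phi
    in the (f1, f2) plane and projects them onto it, so (f1, f2)
    collapses into one rotated coordinate. Illustrative simplification,
    not the authors' exact construction.
    """
    phi = np.radians(phi_deg)
    f1, f2, f3, f4 = points.T
    dist = -f1 * np.sin(phi) + f2 * np.cos(phi)   # signed distance to plane
    keep = np.abs(dist) <= d
    r = f1[keep] * np.cos(phi) + f2[keep] * np.sin(phi)
    return np.column_stack([r, f3[keep], f4[keep]])

# Toy usage: 4-D vectors scattered around a spherical front.
rng = np.random.default_rng(0)
pts = np.abs(rng.normal(size=(2000, 4)))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
print(prosection_4d(pts).shape)   # (k, 3): ready for a 3-D scatter plot
```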

196 citations


Journal ArticleDOI
TL;DR: A local approximation of the gridded cost-to-go is used to derive an analytic solution for the optimal torque split decision at each point in the time and state grid; the results indicate that computation time can be reduced by orders of magnitude with only a slight degradation in simulated fuel economy.
Abstract: The computationally demanding dynamic programming (DP) algorithm is frequently used in academic research to solve the energy management problem of a hybrid electric vehicle (HEV). This paper is exclusively focused on how the computational demand of this algorithm can be reduced. The main idea is to use a local approximation of the gridded cost-to-go and derive an analytic solution for the optimal torque split decision at each point in the time and state grid. Thereby, it is not necessary to quantize the torque split and identify the optimal decision by interpolating in the cost-to-go. Two different approximations of the cost-to-go are considered in this paper: 1) a local linear approximation and 2) a quadratic spline approximation. The results indicate that computation time can be reduced by orders of magnitude with only a slight degradation in simulated fuel economy. Furthermore, with a spline-approximated cost-to-go, it is also possible to significantly reduce the memory storage requirements. A parallel plug-in HEV is considered in this paper, but the method is also applicable to an HEV.
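
To see why a local model of the cost-to-go removes the need to quantize the torque split, consider one grid point where the stage cost is quadratic in the control, the next state is affine in it, and the cost-to-go is locally linear; the minimizer then has a closed form. The sketch below is a deliberately simplified stand-in for the paper's HEV model, with every symbol illustrative.

```python
import numpy as np

def analytic_torque_split(alpha, beta, a, b, x, gamma, u_min, u_max):
    """argmin_u alpha*u**2 + beta*u + J(x - gamma*u) with J(x) ~= a + b*x.

    Substituting the local linear cost-to-go gives a quadratic in u,
    so the optimum is analytic and no quantized search is needed.
    """
    u_star = (b * gamma - beta) / (2.0 * alpha)   # stationary point
    return float(np.clip(u_star, u_min, u_max))   # respect actuator limits

# Cross-check against brute-force quantization of u on the same local model.
alpha, beta, a, b, x, gamma = 2.0, -1.0, 5.0, 3.0, 0.4, 0.5
u_grid = np.linspace(-1, 1, 10001)
cost = alpha * u_grid**2 + beta * u_grid + a + b * (x - gamma * u_grid)
print(analytic_torque_split(alpha, beta, a, b, x, gamma, -1, 1))  # 0.625
print(u_grid[np.argmin(cost)])                                    # ~0.625
```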

99 citations


Journal ArticleDOI
TL;DR: This paper presents a method to identify parallel Wiener-Hammerstein systems starting from input-output data only, and the consistency of the proposed initialization procedure is proven.

46 citations


Posted Content
TL;DR: In this paper, an iterative local adaptive majorize-minimization (I-LAMM) is proposed to simultaneously control algorithmic complexity and statistical error when fitting high dimensional models.
Abstract: We propose a computational framework named iterative local adaptive majorize-minimization (I-LAMM) to simultaneously control algorithmic complexity and statistical error when fitting high dimensional models. I-LAMM is a two-stage algorithmic implementation of the local linear approximation to a family of folded concave penalized quasi-likelihood. The first stage solves a convex program with a crude precision tolerance to obtain a coarse initial estimator, which is further refined in the second stage by iteratively solving a sequence of convex programs with smaller precision tolerances. Theoretically, we establish a phase transition: the first stage has a sublinear iteration complexity, while the second stage achieves an improved linear rate of convergence. Though this framework is completely algorithmic, it provides solutions with optimal statistical performances and controlled algorithmic complexity for a large family of nonconvex optimization problems. The iteration effects on statistical errors are clearly demonstrated via a contraction property. Our theory relies on a localized version of the sparse/restricted eigenvalue condition, which allows us to analyze a large family of loss and penalty functions and provide optimality guarantees under very weak assumptions (for example, I-LAMM requires much weaker minimal signal strength than other procedures). Thorough numerical results are provided to support the obtained theory.
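
A minimal sketch of the two-stage scheme, assuming a least-squares loss and the SCAD penalty (the paper covers general folded concave penalized quasi-likelihood): stage one solves a weighted Lasso with a crude tolerance, and stage two repeatedly re-solves it with weights given by the local linear approximation at the current iterate and tighter tolerances. The solver and tolerances below are our own simplifications.

```python
import numpy as np

def scad_deriv(beta, lam, a=3.7):
    """SCAD penalty derivative: the LLA weights at the current iterate."""
    ab = np.abs(beta)
    return np.where(ab <= lam, lam, np.maximum(a * lam - ab, 0.0) / (a - 1.0))

def weighted_lasso_pgd(X, y, w, beta0, step, tol, max_iter=10000):
    """Proximal gradient for 0.5/n * ||y - X b||^2 + sum_j w_j |b_j|."""
    n = len(y)
    beta = beta0.copy()
    for _ in range(max_iter):
        grad = X.T @ (X @ beta - y) / n
        z = beta - step * grad
        new = np.sign(z) * np.maximum(np.abs(z) - step * w, 0.0)  # soft threshold
        if np.max(np.abs(new - beta)) < tol:
            return new
        beta = new
    return beta

def ilamm(X, y, lam, eps_coarse=1e-2, eps_fine=1e-5, n_lla=3):
    n, p = X.shape
    step = n / np.linalg.norm(X, 2) ** 2          # 1/L for the quadratic loss
    # Stage 1: crude tolerance, plain Lasso weights (LLA at beta = 0).
    beta = weighted_lasso_pgd(X, y, np.full(p, lam), np.zeros(p), step, eps_coarse)
    # Stage 2: refine with SCAD-based LLA weights and tighter tolerance.
    for _ in range(n_lla):
        beta = weighted_lasso_pgd(X, y, scad_deriv(beta, lam), beta, step, eps_fine)
    return beta

# Toy usage: sparse signal with three nonzero coefficients.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 50))
beta_true = np.zeros(50); beta_true[:3] = [3.0, -2.0, 1.5]
y = X @ beta_true + 0.5 * rng.normal(size=200)
print(np.round(ilamm(X, y, lam=0.2)[:5], 2))
```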

45 citations


Journal ArticleDOI
TL;DR: This work designs sparse orthonormal transforms (SOTs) that automatically exploit regularity over different signal structures, provides an adaptation method that determines the best representation over localized regions, and achieves a consistent increase in compression and approximation performance compared with conventional methods.
Abstract: We propose a new transform design method that targets the generation of compression-optimized transforms for next-generation multimedia applications. The fundamental idea behind transform compression is to exploit regularity within signals such that redundancy is minimized subject to a fidelity cost. Multimedia signals, in particular images and video, are well known to contain a diverse set of localized structures, leading to many different types of regularity and to nonstationary signal statistics. The proposed method designs sparse orthonormal transforms (SOTs) that automatically exploit regularity over different signal structures and provides an adaptation method that determines the best representation over localized regions. Unlike earlier work that is motivated by linear approximation constructs and model-based designs that are limited to specific types of signal regularity, our work uses general nonlinear approximation ideas and a data-driven setup to significantly broaden its reach. We show that our SOT designs provide a safe and principled extension of the Karhunen–Loève transform (KLT) by reducing to the KLT on Gaussian processes and by automatically exploiting non-Gaussian statistics to significantly improve over the KLT on more general processes. We provide an algebraic optimization framework that generates optimized designs for any desired transform structure (multiresolution, block, lapped, and so on) with significantly better $n$-term approximation performance. For each structure, we propose a new prototype codec and test over a database of images. Simulation results show consistent increase in compression and approximation performance compared with conventional methods.
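
One common way to realize such a data-driven orthonormal design is to alternate hard-thresholding of the transform coefficients with an orthogonal Procrustes update of the transform. The sketch below shows only that alternation on toy data; the paper's block structures, region adaptation, and codecs are omitted, and the function names are ours.

```python
import numpy as np

def learn_sot(X, lam, n_iter=50, seed=0):
    """Alternating sketch of sparse orthonormal transform learning.

    (1) Hard-threshold the coefficients G @ X (the optimal sparse code
        for an orthonormal G under an L0-type cost).
    (2) Update G by the orthogonal Procrustes solution of min ||C - G X||_F.
    X holds one signal block per column.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    G = np.linalg.qr(rng.normal(size=(n, n)))[0]   # random orthonormal init
    for _ in range(n_iter):
        C = G @ X
        C[np.abs(C) < lam] = 0.0                   # hard threshold
        U, _, Vt = np.linalg.svd(C @ X.T)          # Procrustes update
        G = U @ Vt
    return G

# Toy usage on 8-sample blocks of a random-walk signal.
rng = np.random.default_rng(2)
X = np.cumsum(rng.normal(size=4096)).reshape(-1, 8).T
G = learn_sot(X, lam=1.0)
print(np.allclose(G @ G.T, np.eye(8)))   # orthonormality is preserved -> True
```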

41 citations


Journal ArticleDOI
TL;DR: A new tensor completion model via folded-concave penalty for estimating missing values in tensor data is proposed and an efficient LLA-ALM algorithm is derived for finding a good local solution of the resulting nonconvex optimization problem.

40 citations


Journal ArticleDOI
TL;DR: This paper discusses a nonlinear Model Predictive Control (MPC) algorithm for multiple-input multiple-output dynamic systems represented by cascade Hammerstein-Wiener models and demonstrates that the algorithm gives control accuracy very similar to that obtained in the MPC approach with nonlinear optimisation.
Abstract: This paper discusses a nonlinear Model Predictive Control (MPC) algorithm for multiple-input multiple-output dynamic systems represented by cascade Hammerstein–Wiener models. The block-oriented Hammerstein–Wiener model, which consists of a linear dynamic block embedded between two nonlinear steady-state blocks, may be successfully used to describe numerous processes. A direct application of such a model for prediction in MPC results in a nonlinear optimisation problem which must be solved on-line at each sampling instant. To reduce the computational burden, a linear approximation of the predicted system trajectory, linearised along the future control scenario, is successively found on-line and used for prediction. Thanks to linearisation, the presented algorithm needs only quadratic optimisation; time-consuming and difficult on-line nonlinear optimisation is not necessary. In contrast to some control approaches for cascade models, the presented algorithm does not need the inverse of the steady-state blocks of the model. For two benchmark systems, it is demonstrated that the algorithm gives control accuracy very similar to that obtained in the MPC approach with nonlinear optimisation, while the performance of linear MPC and MPC with simplified linearisation is much worse.
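
The central computational step, linearising the predicted trajectory along the previous control scenario so that each sampling instant reduces to a quadratic problem, can be sketched generically. Everything below is a hypothetical, unconstrained simplification (a black-box predict function, numerical sensitivities, a closed-form least-squares solve instead of a constrained QP), not the paper's algorithm.

```python
import numpy as np

def mpc_step(predict, u_prev, y_ref, N, nu, rho=0.1, eps=1e-6):
    """One MPC iteration with on-line trajectory linearisation.

    predict(u_seq) returns the nonlinear model's output trajectory for
    a candidate control sequence. The trajectory is linearised around
    the previous scenario; the optimal move then solves
      min ||y_ref - y0 - S du||^2 + rho ||du||^2
    in closed form (a constrained version would call a QP solver).
    """
    u0 = np.tile(u_prev, N)
    y0 = predict(u0)
    S = np.empty((len(y0), N * nu))               # trajectory sensitivities
    for j in range(N * nu):
        du = np.zeros(N * nu); du[j] = eps
        S[:, j] = (predict(u0 + du) - y0) / eps
    du = np.linalg.solve(S.T @ S + rho * np.eye(N * nu), S.T @ (y_ref - y0))
    return (u0 + du)[:nu]                         # apply the first move only

# Toy cascade model: nonlinear input block, linear dynamics, nonlinear output.
def predict(u_seq, x=0.0):
    y = []
    for u in u_seq:
        x = 0.8 * x + 0.2 * np.tanh(u)
        y.append(2.0 * x + 0.1 * x**3)
    return np.array(y)

print(mpc_step(predict, u_prev=0.0, y_ref=np.full(10, 1.0), N=10, nu=1))
```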

35 citations


Journal ArticleDOI
TL;DR: In this article, the authors compared linear and nonlinear simulations of pig contact forces at different differential pressures and showed that the linear simulation model cannot predict the contact force or the frictional force between the pig and the pipeline, because the elastic behaviour of the sealing disc is described only in the linear approximation.

34 citations


Book ChapterDOI
06 Dec 2015
TL;DR: This paper analyses two variants of the SIMON family of light-weight block ciphers against variants of linear cryptanalysis and presents the best linear cryptanalytic results on these variants of reduced-round SIMON to date.
Abstract: In this paper we analyse two variants of the SIMON family of light-weight block ciphers against variants of linear cryptanalysis and present the best linear cryptanalytic results on these variants of reduced-round SIMON to date. We propose a time-memory trade-off method that finds differential/linear trails for any permutation allowing low Hamming weight differential/linear trails. Our method combines low Hamming weight trails found by the correlation matrix representing the target permutation with heavy Hamming weight trails found using a Mixed Integer Programming model representing the target differential/linear trail. Our method enables us to find a 17-round linear approximation for SIMON-48, which is the best current linear approximation for SIMON-48. Using only the correlation matrix method, we are able to find a 14-round linear approximation for SIMON-32, which is also the current best linear approximation for SIMON-32. The presented linear approximations allow us to mount a 23-round key recovery attack on SIMON-32 and a 24-round key recovery attack on SIMON-48/96, which are the current best results on SIMON-32 and SIMON-48. In addition, we present an attack on 24 rounds of SIMON-32 with marginal complexity.
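
The correlation-matrix machinery is generic: for an n-bit map S, the entry C[u, v] = 2^-n * sum_x (-1)^(u·x XOR v·S(x)) is the correlation of the linear approximation with input mask u and output mask v, and chaining rounds amounts to matrix multiplication. As a toy illustration (SIMON itself is an AND-RX design with no S-box, and real attacks also handle key masks and dominant trails), the sketch below uses PRESENT's 4-bit S-box as a stand-in.

```python
import numpy as np

def correlation_matrix(sbox, n):
    """C[u, v] = 2^-n * sum_x (-1)^(parity(u & x) XOR parity(v & S(x)))."""
    size = 1 << n
    C = np.zeros((size, size))
    for u in range(size):
        for v in range(size):
            s = 0
            for x in range(size):
                par = bin(u & x).count("1") + bin(v & sbox[x]).count("1")
                s += 1 if par % 2 == 0 else -1
            C[u, v] = s / size
    return C

# PRESENT's S-box as a stand-in permutation.
SBOX = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]
C = correlation_matrix(SBOX, 4)
C2 = C @ C             # two "rounds": correlations compose by matrix product
u, v = np.unravel_index(np.abs(C2[1:, 1:]).argmax(), C2[1:, 1:].shape)
print(f"best 2-round nontrivial masks: {u+1:#x} -> {v+1:#x}, "
      f"corr = {C2[u+1, v+1]:+.4f}")
```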

32 citations


Journal ArticleDOI
TL;DR: A novel penalised weighted least squares procedure is introduced to select the significant covariates and identify the constant coefficients among the coefficients of the selected covariates, which could thus specify the semiparametric modelling structure.
Abstract: In this paper, we study the model selection and structure specification for the generalised semi-varying coefficient models (GSVCMs), where the number of potential covariates is allowed to be larger than the sample size. We first propose a penalised likelihood method with the LASSO penalty function to obtain the preliminary estimates of the functional coefficients. Then, using the quadratic approximation for the local log-likelihood function and the adaptive group LASSO penalty (or the local linear approximation of the group SCAD penalty) with the help of the preliminary estimation of the functional coefficients, we introduce a novel penalised weighted least squares procedure to select the significant covariates and identify the constant coefficients among the coefficients of the selected covariates, which could thus specify the semiparametric modelling structure. The developed model selection and structure specification approach not only inherits many nice statistical properties from the local maximum likelihood estimation and nonconcave penalised likelihood method, but is also computationally attractive thanks to the computational algorithm that is proposed to implement our method. Under some mild conditions, we establish the asymptotic properties for the proposed model selection and estimation procedure such as the sparsity and oracle property. We also conduct simulation studies to examine the finite sample performance of the proposed method, and finally apply the method to analyse a real data set, which leads to some interesting findings.

32 citations


Journal ArticleDOI
TL;DR: In this article, the authors describe computationally efficient model predictive control MPC algorithms for nonlinear dynamic systems represented by discrete-time state-space models, where the model is successively linearized on-line and used for prediction, while in the second one a linear approximation of the future process trajectory is directly found online.
Abstract: This paper describes computationally efficient model predictive control (MPC) algorithms for nonlinear dynamic systems represented by discrete-time state-space models. Two approaches are detailed: in the first one the model is successively linearised on-line and used for prediction, while in the second one a linear approximation of the future process trajectory is directly found on-line. In both cases, as a result of linearisation, the future control policy is calculated by means of quadratic optimisation. For state estimation, the extended Kalman filter is used. The discussed MPC algorithms, although disturbance state observers are not used, are able to compensate for deterministic constant-type external and internal disturbances. In order to illustrate implementation steps and compare the efficiency of the algorithms, a polymerisation reactor benchmark system is considered. In particular, the described MPC algorithms with on-line linearisation are compared with a truly nonlinear MPC approach with nonlinear optimisation repeated at each sampling instant.

Journal ArticleDOI
TL;DR: Numerical experiments and comparisons are presented to show the efficiency and accuracy of the proposed scheme for the two-dimensional unsteady Burgers' equation.

Journal ArticleDOI
TL;DR: In this paper, the authors studied the initial value problem for the generalized cubic double dispersion equation in n-dimensional space and proved the global existence and asymptotic decay of solutions for all space dimensions n ≥ 1.
Abstract: In this paper, we study the initial value problem for the generalized cubic double dispersion equation in n-dimensional space. Under a smallness condition on the initial data, we prove the global existence and asymptotic decay of solutions for all space dimensions n ≥ 1. Moreover, when n ≥ 2, we show that the solution can be approximated by the linear solution as time tends to infinity.

Journal ArticleDOI
TL;DR: Theoretically, the proposed Delaunay-based surface reconstruction algorithm is justified by establishing a topological guarantee on the 3D shape-hull with the help of topological rules and the effectiveness of the approach is demonstrated with experimental results on models with sharp features and sparsely distributed point clouds.
Abstract: Given a finite set of points S ⊂ R², we define a proximity graph called the shape-hull graph (SHG(S)) that contains all Gabriel edges and a few non-Gabriel edges of the Delaunay triangulation of S. For any S, SHG(S) is topologically regular, with its boundary (referred to as the shape-hull (SH)) homeomorphic to a simple closed curve. We introduce the concept of divergent concavity for simple, closed, planar curves based on the alignment of curves in concave portions and discuss various measures to characterize curves having divergent concavity. Under sufficiently dense sampling, we prove that SH(S), where S is sampled from a divergent concave curve Σ_D, represents a piecewise linear approximation of Σ_D. We extend this result to provide a sculpting algorithm for closed surface reconstruction from a set of raw samples. The surface is constructed through a repeated elimination of Delaunay tetrahedra subject to circumcenter and topological constraints. Theoretically, we justify our algorithm by establishing a topological guarantee on the 3D shape-hull with the help of topological rules. We demonstrate the effectiveness of our approach with experimental results on models with sharp features and sparsely distributed point clouds. Compared to existing sculpting approaches for surface reconstruction that require either parameter tuning or several stages, our approach is simple, non-parametric, single-stage, and reconstructs a topologically correct piecewise linear approximation for divergent concave surfaces.
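
The Gabriel test underlying the shape-hull graph is simple to state: a Delaunay edge is a Gabriel edge when its diametral circle contains no other sample point. The SciPy sketch below performs only this test; the shape-hull graph additionally keeps a few non-Gabriel Delaunay edges according to the paper's own rules, which are not reproduced here.

```python
import numpy as np
from scipy.spatial import Delaunay, cKDTree

def gabriel_edges(points):
    """Return the Delaunay edges whose diametral circle is empty."""
    tri = Delaunay(points)
    tree = cKDTree(points)
    edges = {tuple(sorted((s[i], s[j])))
             for s in tri.simplices for i in range(3) for j in range(i + 1, 3)}
    gabriel = []
    for a, b in edges:
        mid = (points[a] + points[b]) / 2.0
        r = np.linalg.norm(points[a] - points[b]) / 2.0
        # Gabriel iff no third sample lies inside the diametral circle.
        inside = tree.query_ball_point(mid, r)
        if all(k in (a, b) for k in inside):
            gabriel.append((a, b))
    return gabriel

pts = np.random.default_rng(3).random((200, 2))
print(len(gabriel_edges(pts)), "Gabriel edges in the Delaunay graph")
```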

Posted Content
TL;DR: In this article, the authors give tight upper and lower bounds of the cardinality of the index sets of certain hyperbolic crosses which reflect mixed Sobolev-Korobov-type smoothness and mixed SSA-analytic type smoothness in the infinite-dimensional case.
Abstract: We give tight upper and lower bounds of the cardinality of the index sets of certain hyperbolic crosses which reflect mixed Sobolev-Korobov-type smoothness and mixed Sobolev-analytic-type smoothness in the infinite-dimensional case where specific summability properties of the smoothness indices are fulfilled. These estimates are then applied to the linear approximation of functions from the associated spaces in terms of the $\varepsilon$-dimension of their unit balls. Here, the approximation is based on linear information. Such function spaces appear for example for the solution of parametric and stochastic PDEs. The obtained upper and lower bounds of the approximation error as well as of the associated $\varepsilon$-complexities are completely independent of any dimension. Moreover, the rates are independent of the parameters which define the smoothness properties of the infinite-variate parametric or stochastic part of the solution. These parameters are only contained in the order constants. This way, linear approximation theory becomes possible in the infinite-dimensional case and corresponding infinite-dimensional problems get tractable.
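
For orientation, the finite-dimensional prototypes of these objects are standard; the paper's contribution lies in the infinite-dimensional index sets with summability conditions on the smoothness indices. A common Korobov-type form of the two definitions (our notation, not the paper's exact one) reads:

```latex
% Hyperbolic cross of radius T for mixed (Korobov-type) smoothness r > 0:
\[
  \Gamma(T) = \Bigl\{\, k \in \mathbb{Z}^d : \prod_{j=1}^{d} \max(1, |k_j|)^{r} \le T \,\Bigr\}.
\]
% epsilon-dimension of the unit ball B_F of F measured in G: the smallest
% dimension of a linear subspace achieving worst-case accuracy epsilon,
\[
  n_\varepsilon(F, G) = \min\Bigl\{\, n :\ \exists\, L_n \subset G,\ \dim L_n = n,\
    \sup_{f \in B_F} \inf_{g \in L_n} \|f - g\|_G \le \varepsilon \,\Bigr\}.
\]
```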

Journal ArticleDOI
TL;DR: A new algorithm for real-time damage assessment is proposed that uses a linear approximation method in conjunction with antiresonant frequencies identified from transmissibility functions; results demonstrate the potential of the proposed algorithm over existing ones.


Proceedings ArticleDOI
10 May 2015
TL;DR: In this article, a high frequency (HF) model and the parameter identification of electrical machines in hybrid or electric vehicles is presented, where a linear approximation of the BH characteristic of the iron sheets is carried out for determination of the frequency dependent self inductance and resistance of one single coil.
Abstract: This paper presents a high-frequency (HF) model and the parameter identification of electrical machines in hybrid or electric vehicles. For improved results, the winding design in the finite element analysis (FEA) considers individual wires and parallel wire strands. A linear approximation of the BH characteristic of the iron sheets is carried out for the determination of the frequency-dependent self-inductance and resistance of one single coil. The equivalent circuit model is realized with parallel RL-branches for simulation in the time and frequency domain. Furthermore, an electrostatic solver is used to calculate the parasitic capacitances. An analytical approximation is presented for consideration of the winding overhang in the capacitance determination. The proposed HF model and its parameters are validated by measurements. There is good agreement between simulation and measurements, which shows not only that the values of the parasitic capacitances are correct, but also that the full HF model can be parametrized by FEA (with the exception of in-feed conductors and connector inductance).

Posted Content
TL;DR: In this paper, the authors show that if limited, potentially higher-order interpolation is used for the mesh transfer, convergence is guaranteed, and they provide numerical tests for the mean-variance optimal investment problem and the uncertain volatility option pricing model.
Abstract: An advantageous feature of piecewise constant policy timestepping for Hamilton-Jacobi-Bellman (HJB) equations is that different linear approximation schemes, and indeed different meshes, can be used for the resulting linear equations for different control parameters. Standard convergence analysis suggests that monotone (i.e., linear) interpolation must be used to transfer data between meshes. Using the equivalence to a switching system and an adaptation of the usual arguments based on consistency, stability and monotonicity, we show that if limited, potentially higher order interpolation is used for the mesh transfer, convergence is guaranteed. We provide numerical tests for the mean-variance optimal investment problem and the uncertain volatility option pricing model, and compare the results to published test cases.
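
A stripped-down version of the scheme for a one-dimensional uncertain-volatility problem: for each piecewise-constant control value a linear PDE step is taken, then the pointwise optimum over controls is selected. In this sketch all controls share one mesh and an explicit scheme, so the mesh-transfer interpolation that the paper analyses is trivial; all parameters are illustrative.

```python
import numpy as np

# Uncertain volatility: V_t + 0.5 * max_sigma sigma^2 s^2 V_ss = 0, backward in time.
smax, ns, T, nt = 200.0, 201, 0.25, 4000
sigmas = [0.15, 0.25]                       # the piecewise-constant policies
s = np.linspace(0.0, smax, ns)
ds, dt = s[1] - s[0], T / nt                # dt satisfies the explicit CFL bound
V = np.maximum(s - 100.0, 0.0)              # terminal call payoff, strike 100

for _ in range(nt):
    candidates = []
    for sig in sigmas:                      # one *linear* PDE step per control
        Vss = (V[2:] - 2 * V[1:-1] + V[:-2]) / ds**2
        Vnew = V.copy()
        Vnew[1:-1] = V[1:-1] + dt * 0.5 * sig**2 * s[1:-1]**2 * Vss
        candidates.append(np.interp(s, s, Vnew))  # mesh transfer (trivial here)
    V = np.maximum.reduce(candidates)       # pointwise sup over controls

print(round(float(np.interp(100.0, s, V)), 4))  # ~ Black-Scholes price at sigma_max
```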

Posted Content
TL;DR: In this article, the authors consider the task of generating discrete-time realisations of a nonlinear multivariate diffusion process satisfying an Ito stochastic differential equation conditional on an observation taken at a fixed future time-point.
Abstract: We consider the task of generating discrete-time realisations of a nonlinear multivariate diffusion process satisfying an Ito stochastic differential equation conditional on an observation taken at a fixed future time-point. Such realisations are typically termed diffusion bridges. Since, in general, no closed form expression exists for the transition densities of the process of interest, a widely adopted solution works with the Euler-Maruyama approximation, by replacing the intractable transition densities with Gaussian approximations. However, the density of the conditioned discrete-time process remains intractable, necessitating the use of computationally intensive methods such as Markov chain Monte Carlo. Designing an efficient proposal mechanism which can be applied to a noisy and partially observed system that exhibits nonlinear dynamics is a challenging problem, and is the focus of this paper. By partitioning the process into two parts, one that accounts for nonlinear dynamics in a deterministic way, and another as a residual stochastic process, we develop a class of novel constructs that bridge the residual process via a linear approximation. In addition, we adapt a recently proposed construct to a partial and noisy observation regime. We compare the performance of each new construct with a number of existing approaches, using three applications.
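
A minimal one-dimensional version of the residual-bridge idea, under our own simplifying assumptions (Euler discretisation, Brownian-bridge-style drift and variance on the residual): the path splits into a deterministic ODE skeleton that absorbs the nonlinear dynamics and a residual steered linearly to the required endpoint.

```python
import numpy as np

def residual_bridge(x0, xT, T, n, alpha, beta, rng):
    """Sketch of a residual-bridge proposal for a conditioned diffusion.

    x_t = eta_t + r_t, where eta solves deta = alpha(eta) dt (the
    deterministic nonlinear part) and the residual r is bridged toward
    r_T = xT - eta_T with a linear drift. Illustrative construct only;
    the MCMC accept/reject weighting is omitted.
    """
    dt = T / n
    eta = np.empty(n + 1); eta[0] = x0
    for i in range(n):                       # Euler skeleton of the ODE part
        eta[i + 1] = eta[i] + alpha(eta[i]) * dt
    r = np.empty(n + 1); r[0] = 0.0
    rT = xT - eta[-1]
    for i in range(n):
        t = i * dt
        drift = (rT - r[i]) / (T - t)
        var = beta(eta[i] + r[i]) * dt * (T - t - dt) / (T - t)
        r[i + 1] = r[i] + drift * dt + np.sqrt(max(var, 0.0)) * rng.normal()
    return eta + r                           # hits xT exactly at the final step

rng = np.random.default_rng(4)
# Toy SDE: dx = 2x(1 - x/10) dt + 0.5 dW, conditioned on x_T = 8.
path = residual_bridge(1.0, 8.0, T=1.0, n=100,
                       alpha=lambda x: 2 * x * (1 - x / 10),
                       beta=lambda x: 0.25, rng=rng)
print(path[0], round(path[-1], 6))           # 1.0 8.0
```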

Journal ArticleDOI
TL;DR: An analytical standard uncertainty evaluation (ASUE) toolbox that automatically performs the analytical method for multivariate polynomial systems and goes on to show how this expression can be used to prevent overdesign and/or suboptimal design solutions.
Abstract: Uncertainty evaluation plays an important role in ensuring that a designed system can indeed achieve its desired performance. There are three standard methods to evaluate the propagation of uncertainty: 1) analytic linear approximation; 2) Monte Carlo (MC) simulation; and 3) analytical methods using mathematical representation of the probability density function (pdf). The analytic linear approximation method is inaccurate for highly nonlinear systems, which limits its application. The MC simulation approach is the most widely used technique, as it is accurate, versatile, and applicable to highly nonlinear systems. However, it does not define the uncertainty of the output in terms of those of its inputs. Therefore, designers who use this method need to resimulate their systems repeatedly for different combinations of input parameters. The most accurate solution can be attained using the analytical method based on pdf. However, it is unfortunately too complex to employ. This paper introduces the use of an analytical standard uncertainty evaluation (ASUE) toolbox that automatically performs the analytical method for multivariate polynomial systems. The backbone of the toolbox is a proposed ASUE framework. This framework enables the analytical process to be automated by replacing the complex mathematical steps in the analytical method with a Mellin transform lookup table and a set of algebraic operations. The ASUE toolbox was specifically designed for engineers and designers and is, therefore, simple to use. It provides the exact solution obtainable using the MC simulation, but with an additional output uncertainty expression as a function of its input parameters. This paper goes on to show how this expression can be used to prevent overdesign and/or suboptimal design solutions. The ASUE framework and toolbox substantially extend current analytical techniques to a much wider range of applications.
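
For a concrete contrast between routes 1 and 2, take the polynomial y = x1² · x2: the analytic linear approximation is the first-order formula σ_y² ≈ Σ_i (∂f/∂x_i)² σ_i², while MC simulation provides the reference. (This is generic uncertainty propagation for illustration, not the ASUE toolbox itself.)

```python
import numpy as np

# y = x1**2 * x2 with independent normal inputs.
mu1, s1 = 10.0, 0.5
mu2, s2 = 2.0, 0.1

# 1) Analytic linear approximation: first-order Taylor at the means.
dfdx1, dfdx2 = 2 * mu1 * mu2, mu1**2
sigma_lin = np.hypot(dfdx1 * s1, dfdx2 * s2)

# 2) Monte Carlo simulation as the reference.
rng = np.random.default_rng(5)
y = rng.normal(mu1, s1, 1_000_000) ** 2 * rng.normal(mu2, s2, 1_000_000)
print(f"linear: {sigma_lin:.2f}   MC: {y.std():.2f}")  # gap grows with nonlinearity
```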

Journal ArticleDOI
TL;DR: In this paper, a subspace-based method is proposed for identifying the Wiener-Hammerstein system, in which a nonlinearity is sandwiched between two linear subsystems.

Journal ArticleDOI
TL;DR: In this article, a gravity inversion method was developed to estimate a discontinuous basement relief based on an extended version of Bott's method that allows variable density contrasts between sediments and basement, optimizes the modulus of the solution correction at each iteration, and provides for solution stabilization.
Abstract: We have developed a gravity inversion method to estimate a discontinuous basement relief based on an extended version of Bott’s method that allows variable density contrasts between sediments and basement, optimizes the modulus of the solution correction at each iteration, and provides for solution stabilization. Initially, we obtain a linear approximation stabilized by the total variation functional that correctly maps the horizontal positions of the existing high-angle faults but produced poor estimates of the basin depths. Subsequent iterations update the depth estimates toward the correct values, at the same time preserving the correct fault horizontal positions. Additionally, we stabilize each solution correction by the smoothness constraint without inverting any matrix. The method was substantially more efficient than the nonlinear method, which solves a system of linear equations by the conjugate gradient method at each iteration. For 3000 parameters, it is almost four times faster than the...
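
The classic Bott iteration on which the method builds fits in a few lines when the forward model is the infinite-slab approximation, where each station's residual anomaly converts directly into a depth correction via Δg = 2πGΔρ·h. The extensions described above (variable contrast, optimised correction modulus, total-variation and smoothness stabilisation) are omitted from this sketch.

```python
import numpy as np

G_CONST = 6.674e-11                 # gravitational constant (SI units)
TWO_PI_G = 2.0 * np.pi * G_CONST

def bott_inversion(g_obs, drho, n_iter=50):
    """Classic Bott iteration with a slab forward model (sketch).

    With a true prism forward model several iterations are needed; with
    the slab model used here the first update is already exact.
    """
    h = np.zeros_like(g_obs)        # basement depth below each station (m)
    for _ in range(n_iter):
        g_pred = TWO_PI_G * drho * h
        h = np.maximum(h + (g_obs - g_pred) / (TWO_PI_G * drho), 0.0)
    return h

# Toy basin: 2 km deep graben, sediments 400 kg/m^3 lighter than basement.
drho = -400.0
x = np.linspace(-10.0, 10.0, 101)
h_true = np.where(np.abs(x) < 4.0, 2000.0, 200.0)
g_obs = TWO_PI_G * drho * h_true    # synthetic anomaly from the same slab model
print(np.allclose(bott_inversion(g_obs, drho), h_true))   # True
```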

Journal ArticleDOI
TL;DR: This paper addresses the generation of initial estimates for a Wiener-Hammerstein model (LNL cascade) using a well-designed multisine excitation with pairwise coupled random phases.

Journal ArticleDOI
TL;DR: In this paper, the Lagrange polynomial approximation is implemented to predict an initial guess of both voltage magnitude and phase angle at time instants in vicinity of the given power-flow solutions.
Abstract: This study presents a developed formulation for solving the quasi-static time-series simulation in unbalanced power distribution systems. This simulation is very important for analysing a set of given daily load curves under various operating conditions. The Lagrange polynomial approximation is implemented to predict an initial guess of both voltage magnitude and phase angle at time instants in the vicinity of the given power-flow solutions. The developed methods are categorised based on the required number of power-flow solutions to predict the initial guess. The linear approximation of the Lagrange polynomial requires the knowledge of two power-flow solutions, whereas the non-linear approximation requires three power-flow solutions. The predicted values of both voltage magnitudes and angles are corrected using a power-flow engine. The adopted power-flow solver uses the forward/backward sweep. The developed methods were tested using the unbalanced IEEE 123-node and 33-node test feeders with a set of daily load curves and intermittent distributed energy resources. The developed methods are compared with the method that utilises the previous power-flow solution as an initial guess. The results show that the number of iterations and computation time of quasi-static time-series simulations are greatly reduced.
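
For equally spaced time steps the Lagrange predictors reduce to simple closed forms: the linear variant extrapolates from the two most recent power-flow solutions, the quadratic variant from three. A sketch under assumed conventions (complex bus voltages, unit step spacing); the predicted vector merely seeds the forward/backward-sweep solver, which then corrects it.

```python
import numpy as np

def lagrange_initial_guess(v1, v2, v3=None):
    """Extrapolate the next time step's bus voltages from earlier solutions.

    v1 is the most recent solution, v2 the one before, v3 (optional) the
    one before that. Equal step spacing is assumed.
    """
    if v3 is None:
        return 2.0 * v1 - v2            # linear Lagrange through t, t-1
    return 3.0 * v1 - 3.0 * v2 + v3     # quadratic Lagrange through t, t-1, t-2

# Toy check on a smoothly varying (hypothetical) voltage trajectory.
t = np.arange(5.0)
v = 1.02 * np.exp(1j * 0.01 * t**2)
pred = lagrange_initial_guess(v[3], v[2], v[1])
print(abs(pred - v[4]))                 # small error -> fewer solver iterations
```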

Journal ArticleDOI
30 Jul 2015
TL;DR: In this article, an extended version of Whitehead's theory of gravity in connection with the flyby anomaly is considered, and a circulating vector field of force in the low velocities' approximation for a rotating planet is deduced, in addition to Newtonian gravity.
Abstract: In this paper, we consider an extended version of Whitehead’s theory of gravity in connection with the flyby anomaly. Whitehead’s theory is a linear approximation defined in a background Minkowski spacetime, which gives the same solutions as standard general relativity for the Schwarzschild and Kerr metrics cast in Kerr–Schild coordinates. For a long time and because it gives the same results for the three classical tests—perihelion advance, light bending and gravitational redshift—it was considered a viable alternative to general relativity, but as it is really a linear approximation, it fails in more stringent tests. The model considered in this paper is a formal generalization of Whitehead’s theory, including all possible bilinear forms. In the resulting theory, a circulating vector field of force in the low velocities’ approximation for a rotating planet is deduced, in addition to Newtonian gravity. This extra force gives rise to small variations in the asymptotic velocities of flybys around the Earth to be compared to the recently reported flyby anomaly.

Journal ArticleDOI
TL;DR: It is proved that the classical Bernstein Voronovskaja-type theorem remains valid in general for all sequences of positive linear approximation operators.

Journal ArticleDOI
TL;DR: This study deals with the L1 analysis of stable finite-dimensional linear time-invariant (LTI) systems, by which the authors mean the computation of the L∞-induced norm of these systems.
Abstract: This study deals with the L1 analysis of stable finite-dimensional linear time-invariant (LTI) systems, by which the authors mean the computation of the L∞-induced norm of these systems. To compute this norm, they need to integrate the absolute value of the impulse response of the given system, which corresponds to the kernel function in the convolution formula for the input/output relation. However, it is very difficult to compute this integral exactly, or even approximately with an explicit upper bound and lower bound. They first review an approach named input approximation, in which the input of the LTI system is approximated by a staircase or piecewise linear function, and computation methods for an upper bound and lower bound of the L∞-induced norm are given. They further develop another approach using an idea of kernel approximation, in which the kernel function in the convolution is approximated by a staircase or piecewise linear function. These approaches are introduced through fast-lifting, by which the interval [0, h) with a sufficiently large h is divided into M subintervals of equal width. It is then shown that the approximation errors in staircase or piecewise linear approximation are ensured to be reciprocally proportional to M or M², respectively. The effectiveness of the proposed methods is demonstrated through numerical examples.
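
The quantity being bounded is the L1 norm of the impulse response: for a stable, strictly proper SISO system, the L∞-induced norm equals ∫₀^∞ |C e^{At} B| dt. The sketch below only forms a midpoint (staircase-style) approximation over M subintervals after truncating at a horizon H; the paper's methods instead produce guaranteed upper and lower bounds whose gap shrinks like 1/M (staircase) or 1/M² (piecewise linear).

```python
import numpy as np
from scipy.linalg import expm

def linf_induced_norm(A, B, C, H=20.0, M=20000):
    """Midpoint approximation of int_0^H |C exp(At) B| dt (sketch only)."""
    dt = H / M
    E = expm(A * dt)                 # propagator over one subinterval
    x = expm(A * dt / 2.0) @ B       # state at the first midpoint
    total = 0.0
    for _ in range(M):
        total += abs((C @ x).item()) * dt
        x = E @ x
    return total

# Example: g(t) = e^{-t}, whose L1 norm is exactly 1.
A = np.array([[-1.0]]); B = np.array([[1.0]]); C = np.array([[1.0]])
print(linf_induced_norm(A, B, C))    # ~= 1.0
```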

Journal ArticleDOI
TL;DR: In this article, the authors analyzed the problem of the transformation of surface gravity waves over a bottom step in a basin of arbitrary depth in the linear approximation and found that strict analytical results can be obtained only when a denumerable set of modes condensed near the step is taken into account.
Abstract: We analyze in detail the problem of the transformation of surface gravity waves over a bottom step in a basin of arbitrary depth in the linear approximation. We found that strict analytical results can be obtained only when a denumerable set of modes condensed near the step is taken into account. At the same time, one can use the formulas suggested in this work for practical calculations; they provide an accuracy of 5% for the wave transmission coefficient. The specific peculiarities of the transformation coefficients are discussed, including their nonmonotonic dependence on the parameters, asymptotic behavior at strong depth variations, etc. The data of a direct numerical simulation of wave transformation over a step are presented and compared with the exact and approximate formulas. The coefficients of excitation of the modes condensed near the step by an incident quasi-monochromatic wave are found. A relationship between the transformation coefficients that follows from the conservation law of the wave energy flux is found.
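
A useful reference point for the arbitrary-depth coefficients is the classical shallow-water (long-wave) limit, obtained by matching surface elevation and volume flux at the step; we state it here as a sanity check, not as the paper's formulas:

```latex
% Long-wave limit for a depth step h_1 -> h_2, with c_i = \sqrt{g h_i}:
\[
  R = \frac{c_1 - c_2}{c_1 + c_2}, \qquad
  T = \frac{2 c_1}{c_1 + c_2},
\]
% and the wave-energy-flux balance these coefficients satisfy,
\[
  1 = R^2 + \frac{c_2}{c_1}\, T^2 .
\]
```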

Proceedings Article
06 Jul 2015
TL;DR: In this article, the authors considered LSTD(λ), the least-squares temporal-difference algorithm with eligibility traces, and derived a high-probability bound on its rate of convergence to its limit.
Abstract: We consider LSTD(λ), the least-squares temporal-difference algorithm with eligibility traces proposed by Boyan (2002). It computes a linear approximation of the value function of a fixed policy in a large Markov Decision Process. Under a β-mixing assumption, we derive, for any value of λ ∈ (0, 1), a high-probability bound on the rate of convergence of this algorithm to its limit. We deduce a high-probability bound on the error of this algorithm that extends (and slightly improves) the one derived by Lazaric et al. (2012) in the specific case where λ = 0. In the context of temporal-difference algorithms with value function approximation, this analysis is, to our knowledge, the first to provide insight on the choice of the eligibility-trace parameter λ with respect to the approximation quality of the space and the number of samples.
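
For reference, the algorithm under analysis fits in a dozen lines; the sketch below is a standard implementation of LSTD(λ) with a small ridge term for invertibility, not the authors' code.

```python
import numpy as np

def lstd_lambda(episodes, phi, gamma, lam, n_features, reg=1e-6):
    """LSTD(lambda): least-squares temporal difference with eligibility traces.

    episodes: trajectories [(s, r, s_next, done), ...] collected under the
    fixed policy being evaluated; phi maps a state to a feature vector.
    Returns theta with V(s) ~= phi(s) . theta.
    """
    A = reg * np.eye(n_features)
    b = np.zeros(n_features)
    for traj in episodes:
        z = np.zeros(n_features)                  # eligibility trace
        for s, r, s_next, done in traj:
            f, f_next = phi(s), phi(s_next)
            z = gamma * lam * z + f
            A += np.outer(z, f - (0.0 if done else gamma) * f_next)
            b += z * r
    return np.linalg.solve(A, b)

# Toy chain MDP: states 0..3 step right; reward 1 on entering the terminal state.
phi = lambda s: np.eye(5)[s]
traj = [(s, 1.0 if s == 3 else 0.0, s + 1, s == 3) for s in range(4)]
theta = lstd_lambda([traj], phi, gamma=0.9, lam=0.5, n_features=5)
print(np.round(theta, 3))   # ~ [0.729, 0.81, 0.9, 1.0, 0.0]
```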