Topic
Linear approximation
About: Linear approximation is a research topic. Over the lifetime, 3901 publications have been published within this topic receiving 74764 citations.
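As background for the topic, a linear approximation replaces a differentiable function near a point a by its tangent line, L(x) = f(a) + f'(a)(x - a). A minimal sketch (the function and expansion point are illustrative choices, not from any paper above):

```python
import math

def linearize(f, df, a):
    """Return the linear approximation of f around the point a:
    L(x) = f(a) + f'(a) * (x - a)."""
    fa, dfa = f(a), df(a)
    return lambda x: fa + dfa * (x - a)

# Approximate sqrt near a = 4, where sqrt(4) = 2 and (sqrt)'(x) = 1/(2*sqrt(x)).
L = linearize(math.sqrt, lambda x: 1.0 / (2.0 * math.sqrt(x)), a=4.0)

approx = L(4.1)          # 2 + 0.25 * 0.1 = 2.025
exact = math.sqrt(4.1)   # close to the linear estimate near a = 4
```

The approximation error shrinks quadratically as x approaches a, which is why linear approximations are a workhorse across the papers listed below.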
Papers
TL;DR: In this paper, the authors propose a new inverse-optimization methodology for multi-objective convex optimization: it determines a weight vector that produces a weakly Pareto-optimal solution preserving the decision maker's trade-off intention encoded in the input solution.
31 citations
TL;DR: In this paper, the authors present an approach for verifying feed-forward neural networks in which all nodes have a piecewise-linear activation function; the starting point of the approach is adding a global linear approximation of the overall network behavior to the verification problem.
Abstract: We present an approach for the verification of feed-forward neural networks in which all nodes have a piecewise-linear activation function. Such networks are often used in deep learning and have been shown to be hard to verify for modern satisfiability modulo theories (SMT) and integer linear programming (ILP) solvers.
The starting point of our approach is the addition of a global linear approximation of the overall network behavior to the verification problem that helps with SMT-like reasoning over the network behavior. We present a specialized verification algorithm that employs this approximation in a search process in which it infers additional node phases for the non-linear nodes in the network from partial node phase assignments, similar to unit propagation in classical SAT solving. We also show how to infer additional conflict clauses and safe node fixtures from the results of the analysis steps performed during the search. The resulting approach is evaluated on collision avoidance and handwritten digit recognition case studies.
31 citations
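The abstract above relies on a linear over-approximation of piecewise-linear activations. A common concrete instance of this idea (a sketch, not necessarily the exact relaxation used in the paper) is the triangle relaxation of a ReLU node whose pre-activation is known to lie in an interval [l, u] with l < 0 < u:

```python
def relu_linear_bounds(l, u):
    """Linear over-approximation of y = relu(x) = max(0, x) on [l, u].

    Returns (lower_lines, upper_line), each line given as (slope, intercept),
    such that for all x in [l, u]:
        max over lower_lines of (a*x + b)  <=  relu(x)  <=  a_u*x + b_u.
    Assumes l < 0 < u, i.e. the node's phase is undetermined."""
    assert l < 0 < u
    slope = u / (u - l)                # chord from (l, 0) to (u, u)
    upper = (slope, -slope * l)        # y <= slope * (x - l)
    lowers = [(0.0, 0.0), (1.0, 0.0)]  # y >= 0 and y >= x
    return lowers, upper

lowers, upper = relu_linear_bounds(-1.0, 2.0)
```

Replacing every undetermined ReLU by such linear constraints yields a global linear approximation of the network that SMT- or LP-style reasoning can exploit, as described in the abstract.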
TL;DR: A variational norm associated with sets of computational units and used in function approximation, learning from data, and infinite-dimensional optimization is investigated, and upper and lower bounds on the G_K-variation norms of functions having certain integral representations are given.
Abstract: A variational norm associated with sets of computational units and used in function approximation, learning from data, and infinite-dimensional optimization is investigated. For sets G_K obtained by varying a vector y of parameters in a fixed-structure computational unit K(·, y) (e.g., the set of Gaussians with free centers and widths), upper and lower bounds on the G_K-variation norms of functions having certain integral representations are given, in terms of the ℓ1-norms of the weighting functions in such representations. Families of functions for which the two norms are equal are described.
31 citations
01 Jan 1998
TL;DR: In this paper, a summarized presentation of solution methods for rational expectations models, based on eigenvalue/eigenvector decompositions, is provided; these methods solve systems of stochastic linear difference equations using stability conditions derived from the eigenvectors associated with the unstable eigenvalues of the system's coefficient matrices.
Abstract: We provide a summarized presentation of solution methods for rational expectations models, based on eigenvalue/eigenvector decompositions. These methods solve systems of stochastic linear difference equations by relying on stability conditions derived from the eigenvectors associated with the unstable eigenvalues of the coefficient matrices in the system. For nonlinear models, a linear approximation must be obtained, and the stability conditions are approximate. This is, however, the only source of approximation error, since the nonlinear structure of the original model is used to produce the numerical solution. After applying the method to a baseline stochastic growth model, we explain how it can be used: i) to solve some identification problems that may arise in standard growth models, and ii) to solve endogenous growth models.
31 citations
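The eigenvalue/eigenvector step described in the abstract above can be sketched as follows: for a linearized system x_{t+1} = A x_t, partition the eigenvalues of A into stable (|lambda| <= 1) and unstable (|lambda| > 1) sets and keep the eigenvectors of the unstable ones to form the stability conditions. This is a minimal Blanchard-Kahn-style sketch, not the paper's full algorithm; the matrix A below is an illustrative assumption:

```python
import numpy as np

def stability_split(A, tol=1.0):
    """Split the eigenvalues of the transition matrix A of
    x_{t+1} = A x_t into stable (|lambda| <= tol) and unstable
    (|lambda| > tol) sets, returning the unstable eigenvectors
    that generate the stability conditions."""
    eigvals, eigvecs = np.linalg.eig(A)
    mask = np.abs(eigvals) > tol
    return eigvals[~mask], eigvals[mask], eigvecs[:, mask]

# Illustrative 2x2 transition matrix with one stable, one unstable root.
A = np.array([[0.9, 0.0],
              [0.2, 1.5]])
stable, unstable, unstable_vecs = stability_split(A)
```

A unique bounded solution exists when the number of unstable eigenvalues matches the number of forward-looking ("jump") variables; the unstable eigenvectors then pin down those variables as functions of the predetermined ones.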