
Piecewise linear function

About: Piecewise linear function is a research topic. Over the lifetime, 8133 publications have been published within this topic receiving 161444 citations.


Papers
Journal ArticleDOI
Abstract: A general framework for solving image inverse problems with piecewise linear estimations is introduced in this paper. The approach is based on Gaussian mixture models, which are estimated via a maximum a posteriori expectation-maximization algorithm. A dual mathematical interpretation of the proposed framework with a structured sparse estimation is described, which shows that the resulting piecewise linear estimate stabilizes the estimation when compared with traditional sparse inverse problem techniques. We demonstrate that, in a number of image inverse problems, including interpolation, zooming, and deblurring of narrow kernels, the same simple and computationally efficient algorithm yields results in the same ballpark as that of the state of the art.

505 citations
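The piecewise linear character of such mixture-based estimators can be seen in a toy scalar sketch: with a Gaussian mixture prior and additive noise, the MAP estimate applies a different linear (Wiener) filter depending on which component wins. The numbers below are made up for illustration; this is not the paper's MAP-EM framework.

```python
import numpy as np

# Toy scalar model: y = x + noise, with a two-component Gaussian mixture
# prior on x (means mus, shared variance sig2). All numbers are made up.
mus = np.array([-3.0, 3.0])
sig2 = 1.0     # prior variance of each component
noise2 = 0.5   # noise variance

def map_estimate(y):
    # Per-component Wiener (linear) estimate: shrink y toward the mean.
    x_k = mus + sig2 / (sig2 + noise2) * (y - mus)
    # Pick the component with the highest posterior (equal weights and
    # variances, so only the squared distance to the mean matters).
    k = int(np.argmin((y - mus) ** 2 / (sig2 + noise2)))
    return float(x_k[k])

# The estimator is linear on each side of y = 0 and switches formulas at
# the component boundary, i.e. it is piecewise linear in y.
print(map_estimate(-4.0), map_estimate(4.0))
```

Selecting a component and then applying that component's linear filter is what makes the overall map from observation to estimate piecewise linear rather than globally linear.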

Proceedings Article
08 Dec 2014
Abstract: We study the complexity of functions computable by deep feedforward neural networks with piecewise linear activations in terms of the symmetries and the number of linear regions that they have. Deep networks are able to sequentially map portions of each layer's input-space to the same output. In this way, deep models compute functions that react equally to complicated patterns of different inputs. The compositional structure of these functions enables them to re-use pieces of computation exponentially often in terms of the network's depth. This paper investigates the complexity of such compositional maps and contributes new theoretical results regarding the advantage of depth for neural networks with piecewise linear activation functions. In particular, our analysis is not specific to a single family of models, and as an example, we employ it for rectifier and maxout networks. We improve complexity bounds from pre-existing work and investigate the behavior of units in higher layers.

494 citations
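For intuition on linear-region counting, here is a minimal sketch of a one-hidden-layer ReLU network on a scalar input. The weights are arbitrary random draws, and the sketch only illustrates the elementary single-layer bound, not the paper's depth results.

```python
import numpy as np

rng = np.random.default_rng(0)

# f(x) = sum_i w2[i] * max(w1[i] * x + b1[i], 0): a one-hidden-layer
# ReLU network on a scalar input with randomly drawn weights.
n_units = 5
w1 = rng.normal(size=n_units)
b1 = rng.normal(size=n_units)
w2 = rng.normal(size=n_units)

# Unit i switches on/off at x = -b1[i] / w1[i], so each unit contributes
# at most one kink: a single hidden layer yields at most n_units + 1
# linear regions, in contrast to the much larger counts depth enables.
kinks = np.unique(np.round(-b1 / w1, 9))
n_regions = len(kinks) + 1
assert n_regions <= n_units + 1
print(n_regions)
```

Composing such layers lets later layers fold several input regions onto the same activations, which is how the number of linear regions can grow exponentially with depth.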

01 Jan 2002
Abstract: We propose an algorithm for regression tree construction called GUIDE. It is specifically designed to eliminate variable selection bias, a problem that can undermine the reliability of inferences from a tree structure. GUIDE controls bias by employing chi-square analysis of residuals and bootstrap calibration of significance probabilities. This approach allows fast computation speed, natural extension to data sets with categorical variables, and direct detection of local two-variable interactions. Previous algorithms are not unbiased and are insensitive to local interactions during split selection. The speed of GUIDE enables two further enhancements: complex modeling at the terminal nodes, such as polynomial or best simple linear models, and bagging. In an experiment with real data sets, the prediction mean square error of the piecewise constant GUIDE model is within ±20% of that of CART®. Piecewise linear GUIDE models are more accurate; with bagging they can outperform the spline-based MARS® method.

484 citations
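The gain from fitting linear rather than constant models at the terminal nodes can be seen in a small numpy sketch. This uses a single CART-style exhaustive split on synthetic data, not GUIDE's bias-corrected chi-square selection.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 10, 200))
y = np.where(x < 5, 2 * x, 20 - x) + rng.normal(0, 0.3, 200)

def sse(v):
    # Sum of squared errors around the leaf mean (piecewise constant fit).
    return float(np.sum((v - v.mean()) ** 2))

# Exhaustive search for the best single split point, keeping at least
# 10 points per leaf so the per-leaf fits are well posed.
best = min(range(10, len(x) - 10), key=lambda i: sse(y[:i]) + sse(y[i:]))
split = x[best]

# Compare piecewise constant leaves with a simple linear model per leaf.
pc_err = sse(y[:best]) + sse(y[best:])
pl_err = 0.0
for xs, ys in ((x[:best], y[:best]), (x[best:], y[best:])):
    coef = np.polyfit(xs, ys, 1)   # best simple linear model in the leaf
    pl_err += float(np.sum((ys - np.polyval(coef, xs)) ** 2))

print(f"split at x={split:.2f}; constant SSE={pc_err:.1f}, linear SSE={pl_err:.1f}")
```

Because the underlying signal is piecewise linear, the linear-leaf tree captures it with far fewer splits than a piecewise constant tree would need.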

Book
01 Jan 1966

483 citations

Book ChapterDOI
01 Jan 1989
Abstract: The nonparametric approach to the measurement of productive efficiency can be specified either through a flexible form of the production function which satisfies the efficiency hypothesis, or a set-theoretic characterization of an efficient isoquant. In the first case the production frontier can be of any general shape satisfying some very weak conditions like quasi-concavity or monotonicity, although in most empirical and applied work piecewise linear or log-linear functions have been frequently used. Thus both Farrell and Johansen applied linear programming models in the specification of the production frontier. Farrell’s efficiency measure is based on estimating, by a sequence of linear programs (LPs), a convex hull of the observed input coefficients in the input space. Two features of Farrell efficiency make it very useful in applied research. One is that it is completely data-based, i.e., it uses only the observed inputs and outputs of the sample units while assuming production functions to be homogeneous of degree one. Hence it has many potential applications for public sector units, where for most of the inputs and outputs price data are not available. For example, consider educational production functions for public schools, where outputs such as test scores in achievement tests are only proxy variables for learning, and inputs such as average class size, experience of teachers, or ethnic background of students do not have observed market prices. Secondly, Farrell’s method uses a set of LP models to estimate the efficiency parameters, so that the production frontier appears as a piecewise linear function. Nonnegativity conditions on the parameter estimates can therefore be easily incorporated.

477 citations
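Farrell's LP-based efficiency measure can be sketched with scipy's `linprog` on toy data. The numbers below are made up, and this is the standard input-oriented, constant-returns formulation (one LP per unit), not a reproduction of the chapter's models.

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: 4 units, 1 input, 1 output (hypothetical, for illustration).
X = np.array([[2.0], [4.0], [3.0], [5.0]])   # observed inputs
Y = np.array([[1.0], [3.0], [1.5], [2.0]])   # observed outputs

def farrell_efficiency(o, X, Y):
    """Input-oriented Farrell efficiency of unit o under constant returns:
    min theta s.t. a nonnegative combination of observed units produces at
    least Y[o] using at most theta * X[o]."""
    n, m = X.shape
    _, s = Y.shape
    # Decision variables: [theta, lambda_1, ..., lambda_n].
    c = np.concatenate(([1.0], np.zeros(n)))
    A_ub, b_ub = [], []
    for i in range(m):        # sum_j lam_j * x_ji <= theta * x_oi
        A_ub.append(np.concatenate(([-X[o, i]], X[:, i])))
        b_ub.append(0.0)
    for r in range(s):        # sum_j lam_j * y_jr >= y_or
        A_ub.append(np.concatenate(([0.0], -Y[:, r])))
        b_ub.append(-Y[o, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
                  bounds=[(0, None)] * (n + 1))
    return res.fun

for o in range(len(X)):
    print(o, round(farrell_efficiency(o, X, Y), 3))
```

An efficiency of 1 means the unit lies on the piecewise linear frontier formed by the convex cone of observed input-output pairs; smaller values give the proportional input reduction needed to reach it.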


Network Information
Related Topics (5)
Nonlinear system: 208.1K papers, 4M citations, 89% related
Linear system: 59.5K papers, 1.4M citations, 88% related
Optimization problem: 96.4K papers, 2.1M citations, 87% related
Robustness (computer science): 94.7K papers, 1.6M citations, 86% related
Differential equation: 88K papers, 2M citations, 86% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    179
2022    377
2021    312
2020    353
2019    329
2018    297