Topic
Piecewise linear function
About: Piecewise linear function is a research topic. Over the lifetime, 8133 publications have been published within this topic receiving 161444 citations.
Papers published on a yearly basis
Papers
TL;DR: In this article, a family of piecewise-linear systems that can be written as a feedback structure is considered, and a simplifying procedure is given to obtain equivalent state equations with a minimum number of nonzero coefficients and a minimum number of nonlinear dynamical equations (canonical forms).
Abstract: A basic methodology for understanding the dynamical behavior of a system relies on its decomposition into sufficiently simple functional blocks. Following that idea, we consider a family of piecewise-linear systems that can be written as a feedback structure. Using results from control systems theory, a simplifying procedure is given. In particular, we focus on obtaining equivalent state equations containing both a minimum number of nonzero coefficients and a minimum number of nonlinear dynamical equations (canonical forms). Two new canonical forms are obtained, which allow the members of the family to be classified into distinct classes. Some consequences derived from the simplified equations are given. The state equations of different electronic oscillators with two or three state variables and two or three linear regions are studied, illustrating the proposed methodology.
97 citations
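The three-linear-region feedback nonlinearities such oscillators rely on can be written as a single closed-form expression. As an illustration only (the slope and breakpoint values below are the textbook Chua's-circuit choice, an assumption on my part, not parameters from this paper), a minimal sketch:

```python
def chua_pwl(x, m0=-1.143, m1=-0.714, bp=1.0):
    """Odd-symmetric three-segment piecewise-linear characteristic:
    inner slope m0 for |x| < bp, outer slope m1 for |x| > bp.
    Parameter values are the classic Chua's-circuit choice, shown
    purely for illustration."""
    return m1 * x + 0.5 * (m0 - m1) * (abs(x + bp) - abs(x - bp))
```

For |x| < bp the absolute-value terms reduce to 2x, giving slope m0; outside the breakpoints they saturate, leaving slope m1, so two breakpoints yield the three linear regions mentioned in the abstract.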
01 Sep 2017
TL;DR: Deep lattice networks are made monotonic with respect to a user-specified set of inputs by alternating layers of linear embeddings, ensembles of lattices, and calibrators, with appropriate constraints for monotonicity.
Abstract: We propose learning deep models that are monotonic with respect to a user-specified set of inputs by alternating layers of linear embeddings, ensembles of lattices, and calibrators (piecewise linear functions), with appropriate constraints for monotonicity, and jointly training the resulting network. We implement the layers and projections with new computational graph nodes in TensorFlow and use the Adam optimizer and batched stochastic gradients. Experiments on benchmark and real-world datasets show that six-layer monotonic deep lattice networks achieve state-of-the art performance for classification and regression with monotonicity guarantees.
96 citations
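The calibrator layers described above are one-dimensional piecewise-linear functions whose monotonicity can be guaranteed by construction. The sketch below is a minimal NumPy illustration of that idea; the keypoint parameterization and the squaring trick are assumptions chosen for clarity, not the paper's TensorFlow implementation:

```python
import numpy as np

def monotonic_pwl_calibrator(keypoints_x, raw_deltas):
    """Build a monotone piecewise-linear calibrator.

    Monotonicity is enforced structurally: each keypoint output is a
    nonnegative increment over the previous one (squaring makes the
    increments nonnegative regardless of the raw parameter values).
    """
    deltas = np.square(raw_deltas)       # nonnegative increments
    keypoints_y = np.cumsum(deltas)      # nondecreasing keypoint outputs

    def calibrate(x):
        # Linear interpolation between keypoints; np.interp clamps
        # inputs outside the keypoint range to the endpoint values.
        return np.interp(x, keypoints_x, keypoints_y)

    return calibrate

# Usage: any raw parameter vector yields a monotone calibrator.
cal = monotonic_pwl_calibrator(np.array([0.0, 1.0, 2.0]),
                               np.array([0.5, 1.0, 0.5]))
ys = cal(np.array([0.0, 0.5, 1.5, 2.0]))
```

In a trainable setting the raw parameters would be the learned weights, with the squaring (or a projection step, as in the paper's constrained training) keeping the function monotone throughout optimization.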
TL;DR: A new segmentation criterion that improves computing efficiency is proposed, and two novel online piecewise linear segmentation methods are developed: the feasible space window method and the stepwise feasible space window method.
Abstract: To efficiently and effectively mine massive amounts of time-series data, approximate representation of the data is one of the most commonly used strategies. Piecewise linear approximation is such an approach: it represents a time series by dividing it into segments and approximating each segment with a straight line. In this paper, we first propose a new segmentation criterion that improves computing efficiency. Based on this criterion, two novel online piecewise linear segmentation methods are developed: the feasible space window method and the stepwise feasible space window method. The former usually produces far fewer segments and is faster, with more consistent running time, than other methods. The latter reduces the representation error with fewer segments and achieves the best overall segmentation performance compared with other methods. Extensive experiments on a variety of real-world time series demonstrate the advantages of our methods.
96 citations
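For context, the classic sliding-window baseline that segmentation methods like these are measured against can be sketched in a few lines. This is the generic baseline, not the paper's feasible space window algorithm:

```python
import numpy as np

def sliding_window_pla(series, max_error):
    """Generic sliding-window piecewise linear approximation.

    Grows each segment while the maximum deviation of the data from
    the straight line joining the segment endpoints stays within
    max_error; returns (start, end) index pairs.
    """
    segments = []
    start, n = 0, len(series)
    while start < n - 1:
        end = start + 1
        while end + 1 < n:
            xs = np.arange(start, end + 2)
            # Line through the candidate segment's endpoints.
            line = np.interp(xs, [start, end + 1],
                             [series[start], series[end + 1]])
            if np.max(np.abs(series[start:end + 2] - line)) > max_error:
                break
            end += 1
        segments.append((start, end))
        start = end
    return segments
```

The feasible space window methods avoid re-fitting the whole candidate segment at each step, which is where their running-time advantage over this baseline comes from.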
TL;DR: A branch-and-cut algorithm is given for solving linear programs (LPs) with continuous separable piecewise-linear cost functions (PLFs), along with two families of valid inequalities; computational results demonstrate the effectiveness of the cuts.
Abstract: We give a branch-and-cut algorithm for solving linear programs (LPs) with continuous separable piecewise-linear cost functions (PLFs). Models for PLFs use continuous variables in special-ordered sets of type 2 (SOS2). Traditionally, SOS2 constraints are enforced by introducing auxiliary binary variables and other linear constraints on them. Alternatively, we can enforce SOS2 constraints by branching on them, thus dispensing with auxiliary binary variables. We explore this approach further by studying the inequality description of the convex hull of the feasible set of LPs with PLFs in the space of the continuous variables, and using the new cuts in a branch-and-cut scheme without auxiliary binary variables. We give two families of valid inequalities. The first family is obtained by lifting the convexity constraints. The second family consists of lifted cover inequalities. Finally, we report computational results that demonstrate the effectiveness of our cuts, and that branch-and-cut without auxiliary binary variables is significantly more practical than the traditional mixed-integer programming approach.
95 citations
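The SOS2 pattern at the heart of this formulation expresses each point of a PLF as a convex combination of at most two adjacent breakpoints. A minimal sketch of that representation (evaluation only, outside any LP solver; function and variable names are my own):

```python
import numpy as np

def sos2_weights(x, breakpoints):
    """Express x as a convex combination of at most two *adjacent*
    breakpoints -- the SOS2 condition used to model PLFs in LPs."""
    lam = np.zeros(len(breakpoints))
    # Index of the segment containing x, clamped to the valid range.
    i = int(np.searchsorted(breakpoints, x, side="right")) - 1
    i = min(max(i, 0), len(breakpoints) - 2)
    t = (x - breakpoints[i]) / (breakpoints[i + 1] - breakpoints[i])
    lam[i], lam[i + 1] = 1 - t, t
    return lam

def plf_value(x, breakpoints, values):
    """Piecewise-linear function value: the same convex combination
    applied to the function values at the breakpoints."""
    return float(sos2_weights(x, breakpoints) @ values)
```

In the mixed-integer model, the lam variables are free and SOS2 adjacency is what must be enforced, either with auxiliary binaries or, as in this paper, by branching directly on the SOS2 constraints.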
26 Feb 2004
TL;DR: A piecewise linear recursive approximation scheme is applied to the computation of the sigmoid function and its derivative in artificial neurons with learning capability; the scheme provides high approximation accuracy with very low memory requirements.
Abstract: A piecewise linear recursive approximation scheme is applied to the computation of the sigmoid function and its derivative in artificial neurons with learning capability. The scheme provides high approximation accuracy with very low memory requirements. The recursive nature of the method allows the accuracy/computation-delay trade-off to be controlled by modifying a single parameter, with no impact on the occupied area. The error analysis shows an accuracy comparable to or better than other reported piecewise linear approximation schemes. No multiplier is needed for a digital implementation of the sigmoid generator, and only one memory word is required to store the parameter that optimises the approximation.
94 citations
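For comparison, one widely cited multiplier-free piecewise-linear sigmoid approximation is the PLAN scheme; the recursive method above differs, but PLAN illustrates the general shape of such approximations (the segment boundaries and slopes below are PLAN's published constants, not this paper's):

```python
def plan_sigmoid(x):
    """PLAN piecewise-linear sigmoid approximation.

    All slopes are negative powers of two, so a digital implementation
    needs only shifts and adds -- the same multiplier-free property the
    recursive scheme above targets.
    """
    ax = abs(x)
    if ax >= 5.0:
        y = 1.0
    elif ax >= 2.375:
        y = 0.03125 * ax + 0.84375   # slope 2^-5
    elif ax >= 1.0:
        y = 0.125 * ax + 0.625       # slope 2^-3
    else:
        y = 0.25 * ax + 0.5          # slope 2^-2
    # Odd symmetry about (0, 0.5) handles negative inputs.
    return y if x >= 0 else 1.0 - y
```

The maximum absolute error of this fixed four-segment scheme is on the order of 0.02; the recursive approach trades more computation steps for tighter accuracy without storing more segment parameters.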