
Convex piecewise-linear fitting

01 Mar 2009-Optimization and Engineering (Springer US)-Vol. 10, Iss: 1, pp 1-17
TL;DR: The method described, which is a variation on the K-means algorithm for clustering, seems to work well in practice, at least on data that can be fit well by a convex function.
Abstract: We consider the problem of fitting a convex piecewise-linear function, with some specified form, to given multi-dimensional data. Except for a few special cases, this problem is hard to solve exactly, so we focus on heuristic methods that find locally optimal fits. The method we describe, which is a variation on the K-means algorithm for clustering, seems to work well in practice, at least on data that can be fit well by a convex function. We focus on the simplest function form, a maximum of a fixed number of affine functions, and then show how the methods extend to a more general form.

Summary

1 Convex piecewise-linear fitting problem

  • The convex piecewise-linear fitting problem (1) is to find the function f, from the given family F of convex piecewise-linear functions, that gives the best RMS fit to the given data.
  • Of course the authors can expand any function with the more general form (4) into its max-affine representation.
  • This allows us to normalize the dependent variable data in various ways.
  • In Sect. 2 the authors describe several applications of convex piecewise-linear fitting.
  • In Sect. 3, the authors describe a basic heuristic algorithm for solving the max-affine fitting problem (1).

2 Applications

  • In this section the authors briefly describe some applications of convex piecewise-linear fitting.
  • This convex piecewise-linear approximate value function can be used to construct a simple feedback controller that approximately minimizes fuel use; see, e.g., Bemporad et al. (2002).
  • The authors present the method as the least-squares partition algorithm; a minimal sketch of one iteration is given after this list.
  • The algorithm can be interpreted as a Gauss-Newton method for the problem (3).
  • In any case, convergence failure has no practical consequences: the algorithm is terminated after some fixed maximum number of steps, and the authors recommend running it from a number of starting points, with the best fit obtained used as the final fit.
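
Section 3 of the paper is not reproduced in this extract, but the bullets above pin down the core iteration: partition the data points according to which affine term achieves the maximum, then refit each term by ordinary least squares on its own partition. A minimal numpy sketch of one such step, under our own naming (the function name and the degenerate-partition handling are ours, not the paper's):

    import numpy as np

    def lsq_partition_step(U, y, A, b):
        # U: (m, n) data points; y: (m,) targets
        # A: (k, n), b: (k,): current max-affine coefficients
        vals = U @ A.T + b                   # value of every affine term at every point
        assign = np.argmax(vals, axis=1)     # index of the active term at each point
        A_new, b_new = A.copy(), b.copy()
        for j in range(A.shape[0]):
            idx = assign == j
            if idx.sum() >= U.shape[1] + 1:  # need at least n+1 points for a well-posed fit
                X = np.hstack([U[idx], np.ones((idx.sum(), 1))])
                coef, *_ = np.linalg.lstsq(X, y[idx], rcond=None)
                A_new[j], b_new[j] = coef[:-1], coef[-1]
        return A_new, b_new

Partitions with fewer than n+1 points are simply left unchanged here; the paper discusses more careful treatments of empty or degenerate partitions, which this sketch does not implement.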

4 Numerical examples

  • Figure 1 shows the RMS fits obtained after Ntrials = 10 trials (top curve), and after Ntrials = 100 trials (bottom curve).
  • Evidently the best of even a modest number of trials will be quite good.
  • The authors set the iteration limit for both forms as lmax = 100, and take the best fit obtained in Ntrials = 10 trials; a restart loop in this style is sketched after this list.
  • Figure 4 shows the RMS fit obtained for the two forms, versus k.
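
The best-of-Ntrials strategy above is easy to reproduce around the partition step sketched earlier: random restarts, a fixed iteration cap, and keeping the parameters with the smallest RMS fit. The Gaussian initialization below is our simplification, not the paper's seeding scheme:

    def fit_max_affine(U, y, k, n_trials=10, l_max=100, seed=0):
        rng = np.random.default_rng(seed)
        best_rms, best_ab = np.inf, None
        for _ in range(n_trials):
            A = rng.standard_normal((k, U.shape[1]))   # crude random start
            b = rng.standard_normal(k)
            for _ in range(l_max):                     # fixed iteration cap, as in the text
                A, b = lsq_partition_step(U, y, A, b)
            rms = np.sqrt(np.mean((np.max(U @ A.T + b, axis=1) - y) ** 2))
            if rms < best_rms:
                best_rms, best_ab = rms, (A, b)
        return best_rms, best_ab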

5 Conclusions

  • The authors have described a new method for fitting a convex piecewise-linear function to a given (possibly large) set of data (with a modest number of independent variables).
  • Numerical examples suggest, however, that the method works very well in practice, on data that can be fit well by a convex function.
  • Data samples can be used to generate piecewise-linear convex functions, which in turn can be used to construct linear programming models.
  • This work was carried out with support from C2S2, the MARCO Focus Center for Circuit and System Solutions, under MARCO contract 2003-CT-888.
  • The authors are grateful to Jim Koford for suggesting the problem.



Optim Eng (2009) 10: 1–17
DOI 10.1007/s11081-008-9045-3
Convex piecewise-linear fitting
Alessandro Magnani · Stephen P. Boyd
Received: 14 April 2006 / Accepted: 4 March 2008 / Published online: 25 March 2008
© Springer Science+Business Media, LLC 2008
Abstract We consider the problem of fitting a convex piecewise-linear function, with some specified form, to given multi-dimensional data. Except for a few special cases, this problem is hard to solve exactly, so we focus on heuristic methods that find locally optimal fits. The method we describe, which is a variation on the K-means algorithm for clustering, seems to work well in practice, at least on data that can be fit well by a convex function. We focus on the simplest function form, a maximum of a fixed number of affine functions, and then show how the methods extend to a more general form.
Keywords Convex optimization · Piecewise-linear approximation · Data fitting
A. Magnani · S.P. Boyd (corresponding author)
Electrical Engineering Department, Stanford University, Stanford, CA 94305, USA
e-mail: boyd@stanford.edu (S.P. Boyd), alem@stanford.edu (A. Magnani)

1 Convex piecewise-linear fitting problem

We consider the problem of fitting some given data

    (u_1, y_1), ..., (u_m, y_m) ∈ R^n × R

with a convex piecewise-linear function f : R^n → R from some set F of candidate functions. With a least-squares fitting criterion, we obtain the problem

    minimize    J(f) = Σ_{i=1}^m (f(u_i) − y_i)^2
    subject to  f ∈ F,                                        (1)

with variable f. We refer to (J(f)/m)^{1/2} as the RMS (root-mean-square) fit of the function f to the data. The convex piecewise-linear fitting problem (1) is to find the function f, from the given family F of convex piecewise-linear functions, that gives the best (smallest) RMS fit to the given data.

Our main interest is in the case when n (the dimension of the data) is relatively small, say not more than 5 or so, while m (the number of data points) can be relatively large, e.g., 10^4 or more. The methods we describe, however, work for any values of n and m.
Several special cases of the convex piecewise-linear fitting problem (1) can be solved exactly. When F consists of the affine functions, i.e., f has the form f(x) = a^T x + b, the problem (1) reduces to an ordinary linear least-squares problem in the function parameters a ∈ R^n and b ∈ R and so is readily solved. As a less trivial example, consider the case when F consists of all convex piecewise-linear functions from R^n into R, with no other constraint on the form of f. This is the nonparametric convex piecewise-linear fitting problem. Then the problem (1) can be solved, exactly, via a quadratic program (QP); see (Boyd and Vandenberghe 2004, Sect. 6.5.5). This nonparametric approach, however, has two potential practical disadvantages. First, the QP that must be solved is very large (containing more than mn variables), limiting the method to modest values of m (say, a thousand). The second potential disadvantage is that the piecewise-linear function fit obtained can be very complex, with many terms (up to m).
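
For concreteness, here is a small cvxpy sketch of that nonparametric QP (the formulation is the standard convex-regression one from Boyd and Vandenberghe, Sect. 6.5.5; the function name is ours). The O(m^2) convexity constraints make plain why the text limits this approach to modest m:

    import cvxpy as cp
    import numpy as np

    def nonparametric_convex_fit(U, y):
        m, n = U.shape
        yhat = cp.Variable(m)        # fitted values f(u_i)
        g = cp.Variable((m, n))      # a subgradient of f at each u_i
        # convexity: f(u_j) >= f(u_i) + g_i^T (u_j - u_i) for all i, j
        cons = [yhat[j] >= yhat[i] + g[i] @ (U[j] - U[i])
                for i in range(m) for j in range(m) if i != j]
        cp.Problem(cp.Minimize(cp.sum_squares(yhat - y)), cons).solve()
        return yhat.value, g.value

The fitted function can then be evaluated anywhere as f(x) = max_i (yhat_i + g_i^T (x − u_i)), which is exactly the many-term max-affine fit the text warns about.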
Of course, not all data can be fit well (i.e., with small RMS fit) with a convex piecewise-linear function. For example, if the data are samples from a function that has strong negative (concave) curvature, then no convex function can fit them well; moreover, the best fit (which will be poor) will be obtained with an affine function. We can also have the opposite situation: it can occur that the data can be fit perfectly, i.e., we can have J = 0. In this case we say that the data is interpolated by the convex piecewise-linear function f.
1.1 Max-affine functions

In this paper we consider the parametric fitting problem, in which the candidate functions are parametrized by a finite-dimensional vector of coefficients α ∈ R^p, where p is the number of parameters needed to describe the candidate functions. One very simple form is given by F_k^ma, the set of functions on R^n with the form

    f(x) = max{a_1^T x + b_1, ..., a_k^T x + b_k},        (2)

i.e., a maximum of k affine functions. We refer to a function of this form as 'max-affine', with k terms. The set F_k^ma is parametrized by the coefficient vector

    α = (a_1, ..., a_k, b_1, ..., b_k) ∈ R^{k(n+1)}.

In fact, any convex piecewise-linear function on R^n can be expressed as a max-affine function, for some k, so this form is in a sense universal. Our interest, however, is in the case when the number of terms k is relatively small, say no more than 10, or a few 10s. In this case the max-affine representation (2) is compact, in the sense that the number of parameters needed to describe f (i.e., p) is much smaller than the number of parameters in the original data set (i.e., m(n + 1)). The methods we describe, however, do not require k to be small.

When F = F_k^ma, the fitting problem (1) reduces to the nonlinear least-squares problem

    minimize J(α) = Σ_{i=1}^m ( max_{j=1,...,k} (a_j^T u_i + b_j) − y_i )^2,        (3)

with variables a_1, ..., a_k ∈ R^n, b_1, ..., b_k ∈ R. The function J is a piecewise-quadratic function of α. Indeed, for each i, f(u_i) − y_i is piecewise-linear, and J is the sum of squares of these functions, so J is convex quadratic on the (polyhedral) regions on which f(u_i) is affine. But J is not globally convex, so the fitting problem (3) is not convex.
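
In numpy terms (helper names ours), the max-affine model (2) and the objective (3) are one line each:

    import numpy as np

    def max_affine(U, A, b):
        # f(u_i) = max_j (a_j^T u_i + b_j), evaluated at all rows of U
        return np.max(U @ A.T + b, axis=1)

    def J(U, y, A, b):
        # nonlinear least-squares objective (3); the RMS fit is sqrt(J/m)
        return np.sum((max_affine(U, A, b) - y) ** 2)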
1.2 A more general parametrization

We will also consider a more general parametrized form for convex piecewise-linear functions,

    f(x) = ψ(φ(x, α)),        (4)

where ψ : R^q → R is a (fixed) convex piecewise-linear function, and φ : R^n × R^p → R^q is a (fixed) bi-affine function. (This means that for each x, φ(x, α) is an affine function of α, and for each α, φ(x, α) is an affine function of x.) The simple max-affine parametrization (2) has this form, with q = k, ψ(z_1, ..., z_k) = max{z_1, ..., z_k}, and φ_i(x, α) = a_i^T x + b_i.
As an example, consider the set of functions F that are sums of k terms, each of which is the maximum of two affine functions,

    f(x) = Σ_{i=1}^k max{a_i^T x + b_i, c_i^T x + d_i},        (5)

parametrized by a_1, ..., a_k, c_1, ..., c_k ∈ R^n and b_1, ..., b_k, d_1, ..., d_k ∈ R. This family corresponds to the general form (4) with

    ψ(z_1, ..., z_k, w_1, ..., w_k) = Σ_{i=1}^k max{z_i, w_i},

and

    φ(x, α) = (a_1^T x + b_1, ..., a_k^T x + b_k, c_1^T x + d_1, ..., c_k^T x + d_k).

Of course we can expand any function with the more general form (4) into its max-affine representation. But the resulting max-affine representation can be very much larger than the original general form representation. For example, the function form (5) requires p = 2k(n + 1) parameters. If the same function is written out as a max-affine function, it requires 2^k terms, and therefore 2^k(n + 1) parameters. The hope is that a well chosen general form can give us a more compact fit to the given data than a max-affine form with the same number of parameters.
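
A sketch of evaluating the sum-of-maxes form (5) directly, with its p = 2k(n+1) parameters, without ever materializing the 2^k-term max-affine expansion (function name ours):

    def sum_of_max(U, A, b, C, d):
        # f(u) = sum_i max(a_i^T u + b_i, c_i^T u + d_i); A, C: (k, n), b, d: (k,)
        return np.sum(np.maximum(U @ A.T + b, U @ C.T + d), axis=1)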
As another interesting example of the general form (4), consider the case in which f is given as the optimal value of a linear program (LP) with the right-hand side of the constraints depending bi-affinely on x and the parameters:

    f(x) = min{ c^T v | Av ⪰ b + Bx }.

Here c and A are fixed; b and B are considered the parameters that define f. This function can be put in the general form (4) using

    ψ(z) = min{ c^T v | Av ⪰ z },        φ(x, b, B) = b + Bx.

The function ψ is convex and piecewise-linear (see, e.g., Boyd and Vandenberghe 2004); the function φ is evidently bi-affine in x and (b, B).
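
A sketch of this ψ using scipy.optimize.linprog (the relation Av ⪰ b + Bx is reconstructed from context, since the LP value function is convex in the right-hand side of ⪰ constraints; names are ours):

    import numpy as np
    from scipy.optimize import linprog

    def psi(z, c, A):
        # psi(z) = min{ c^T v : A v >= z }; rewrite A v >= z as -A v <= -z
        res = linprog(c, A_ub=-A, b_ub=-np.asarray(z),
                      bounds=[(None, None)] * len(c))  # v is a free variable
        return res.fun if res.success else np.inf      # infeasible/unbounded not distinguished

    def f_lp(x, c, A, b, B):
        # general form (4): f(x) = psi(phi(x, (b, B))) with phi = b + B x
        return psi(b + B @ x, c, A)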
1.3 Dependent variable transformation and normalization

We can apply a nonsingular affine transformation to the dependent variable u, by forming

    ũ_i = T u_i + s,    i = 1, ..., m,

where T ∈ R^{n×n} is nonsingular and s ∈ R^n. Defining f̃(x̃) = f(T^{−1}(x̃ − s)), we have f̃(ũ_i) = f(u_i). If f is piecewise-linear and convex, then so is f̃ (and of course, vice versa). Provided F is invariant under composition with affine functions, the problem of fitting the data (u_i, y_i) with a function f ∈ F is the same as the problem of fitting the data (ũ_i, y_i) with a function f̃ ∈ F.

This allows us to normalize the dependent variable data in various ways. For example, we can assume that it has zero (sample) mean and unit (sample) covariance,

    ū = (1/m) Σ_{i=1}^m u_i = 0,        Σ_u = (1/m) Σ_{i=1}^m u_i u_i^T = I,        (6)

provided the data u_i are affinely independent. (If they are not, we can reduce the problem to an equivalent one with smaller dimension.)
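
A sketch of this normalization (function name ours; the inverse square root is taken via an eigendecomposition, and the u_i are assumed affinely independent so the sample covariance is nonsingular):

    def normalize(U):
        # return U_tilde with zero sample mean and identity sample covariance,
        # plus the affine map (T, s) with u_tilde_i = T u_i + s, as in (6)
        mu = U.mean(axis=0)
        Sigma = (U - mu).T @ (U - mu) / len(U)
        w, V = np.linalg.eigh(Sigma)         # Sigma = V diag(w) V^T
        T = V @ np.diag(w ** -0.5) @ V.T     # T = Sigma^{-1/2}
        s = -T @ mu
        return U @ T.T + s, (T, s)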
1.4 Outline

In Sect. 2 we describe several applications of convex piecewise-linear fitting. In Sect. 3, we describe a basic heuristic algorithm for (approximately) solving the max-affine fitting problem (1). This basic algorithm has several shortcomings, such as convergence to a poor local minimum, or failure to converge at all. By running this algorithm a modest number of times, from different initial points, however, we obtain a fairly reliable algorithm for least-squares fitting of a max-affine function to given data. Finally, we show how the algorithm can be extended to handle the more general function parametrization (4). In Sect. 4 we present some numerical examples.

1.5 Previous work

Piecewise-linear functions arise in many areas and contexts. Some general forms for representing piecewise-linear functions can be found in, e.g., Kang and Chua, Kahlert and Chua (1978, 1990). Several methods have been proposed for fitting general piecewise-linear functions to (multidimensional) data. A neural network algorithm is used in Gothoskar et al. (2002); a Gauss-Newton method is used in Julian et al., Horst and Beichel (1998, 1997) to find piecewise-linear approximations of smooth functions. A recent reference on methods for least-squares with semismooth functions is Kanzow and Petra (2004). An iterative procedure, similar in spirit to our method, is described in Ferrari-Trecate and Muselli (2002). Software for fitting general piecewise-linear functions to data include, e.g., Torrisi and Bemporad (2004), Storace and De Feo (2002).

The special case n = 1, i.e., fitting a function on R by a piecewise-linear function, has been extensively studied. For example, a method for finding the minimum number of segments to achieve a given maximum error is described in Dunham (1986); the same problem can be approached using dynamic programming (Goodrich 1994; Bellman and Roth 1969; Hakimi and Schmeichel 1991; Wang et al. 1993), or a genetic algorithm (Pittman and Murthy 2000). The problem of simplifying a given piecewise-linear function on R, to one with fewer segments, is considered in Imai and Iri (1986).

Another related problem that has received much attention is the problem of fitting a piecewise-linear curve, or polygon, in R^2 to given data; see, e.g., Aggarwal et al. (1985), Mitchell and Suri (1992). An iterative procedure, closely related to the k-means algorithm and therefore similar in spirit to our method, is described in Phillips and Rosenfeld (1988), Yin (1998).

Piecewise-linear functions and approximations have been used in many applications, such as detection of patterns in images (Rives et al. 1985), contour tracing (Dobkin et al. 1990), extraction of straight lines in aerial images (Venkateswar and Chellappa 1992), global optimization (Mangasarian et al. 2005), compression of chemical process data (Bakshi and Stephanopoulos 1996), and circuit modeling (Julian et al. 1998; Chua and Deng 1986; Vandenberghe et al. 1989).

We are aware of only two papers which consider the problem of fitting a piecewise-linear convex function to given data. Mangasarian et al. (2005) describe a heuristic method for fitting a piecewise-linear convex function of the form a + b^T x + ‖Ax + c‖_1 to given data (along with the constraint that the function underestimate the data). The focus of their paper is on finding piecewise-linear convex underestimators for known (nonconvex) functions, for use in global optimization; our focus, in contrast, is on simply fitting some given data. The closest related work that we know of is Kim et al. (2004). In this paper, Kim et al. describe a method for fitting a (convex) max-affine function to given data, increasing the number of terms to get a better fit. (In fact they describe a method for fitting a max-monomial function to circuit models; see Sect. 2.3.)

Citations
Journal ArticleDOI
TL;DR: A collection of methods for improving the speed of MPC, using online optimization, which can compute the control action on the order of 100 times faster than a method that uses a generic optimizer.
Abstract: A widely recognized shortcoming of model predictive control (MPC) is that it can usually only be used in applications with slow dynamics, where the sample time is measured in seconds or minutes. A well-known technique for implementing fast MPC is to compute the entire control law offline, in which case the online controller can be implemented as a lookup table. This method works well for systems with small state and input dimensions (say, no more than five), few constraints, and short time horizons. In this paper, we describe a collection of methods for improving the speed of MPC, using online optimization. These custom methods, which exploit the particular structure of the MPC problem, can compute the control action on the order of 100 times faster than a method that uses a generic optimizer. As an example, our method computes the control actions for a problem with 12 states, 3 controls, and horizon of 30 time steps (which entails solving a quadratic program with 450 variables and 1284 constraints) in around 5 ms, allowing MPC to be carried out at 200 Hz.

1,369 citations


Cites methods from "Convex piecewise-linear fitting"

  • ..., using the methods described in [50]–[52], replacing it with a piecewise-affine function with a manageable number of regions....


Journal ArticleDOI
TL;DR: This tutorial paper collects together in one place the basic background material needed to do GP modeling, and shows how to recognize functions and problems compatible with GP, and how to approximate functions or data in a formcompatible with GP.
Abstract: A geometric program (GP) is a type of mathematical optimization problem characterized by objective and constraint functions that have a special form. Recently developed solution methods can solve even large-scale GPs extremely efficiently and reliably; at the same time a number of practical problems, particularly in circuit design, have been found to be equivalent to (or well approximated by) GPs. Putting these two together, we get effective solutions for the practical problems. The basic approach in GP modeling is to attempt to express a practical problem, such as an engineering analysis or design problem, in GP format. In the best case, this formulation is exact; when this is not possible, we settle for an approximate formulation. This tutorial paper collects together in one place the basic background material needed to do GP modeling. We start with the basic definitions and facts, and some methods used to transform problems into GP format. We show how to recognize functions and problems compatible with GP, and how to approximate functions or data in a form compatible with GP (when this is possible). We give some simple and representative examples, and also describe some common extensions of GP, along with methods for solving (or approximately solving) them.

1,215 citations


Cites methods from "Convex piecewise-linear fitting"

  • ...But there is a relatively simple method, based on monomial fitting and data point clustering, that works well in practice [103]....


Journal ArticleDOI
TL;DR: In this paper, a modified system frequency response model is derived and used to find analytical representation of system minimum frequency in thermal-dominant multi-machine systems, and an effective piecewise linearization (PWL) technique is employed to linearize the nonlinear function representing the minimum system frequency, facilitating its integration in the SCUC problem.
Abstract: Rapidly increasing the penetration level of renewable energies has imposed new challenges to the operation of power systems. Inability or inadequacy of these resources in providing inertial and primary frequency responses is one of the important challenges. In this paper, this issue is addressed within the framework of security-constrained unit commitment (SCUC) by adding new constraints representing the system frequency response. A modified system frequency response model is first derived and used to find analytical representation of system minimum frequency in thermal-dominant multi-machine systems. Then, an effective piecewise linearization (PWL) technique is employed to linearize the nonlinear function representing the minimum system frequency, facilitating its integration in the SCUC problem. The problem is formulated as a mixed-integer linear programming (MILP) problem which is solved efficiently by available commercial solvers. The results indicate that the proposed method can be utilized to integrate renewable resources into power systems without violating system frequency limits.

271 citations


Cites methods from "Convex piecewise-linear fitting"

  • ...The advantage of the PWL technique of [27] is that it optimally determines the intervals over which the linear segments are defined....


  • ...Beside the method introduced in [27], there are commercial solvers which are able to solve this type of problems, e....


  • ...A heuristic least-squares method is proposed in [27] to solve this problem....


Proceedings Article
17 Jul 2017
TL;DR: Input convex neural networks as discussed by the authors are a generalization of neural networks with constraints on the network parameters such that the output of the network is a convex function of some of the inputs.
Abstract: This paper presents the input convex neural network architecture. These are scalar-valued (potentially deep) neural networks with constraints on the network parameters such that the output of the network is a convex function of (some of) the inputs. The networks allow for efficient inference via optimization over some inputs to the network given others, and can be applied to settings including structured prediction, data imputation, reinforcement learning, and others. In this paper we lay the basic groundwork for these models, proposing methods for inference, optimization and learning, and analyze their representational power. We show that many existing neural network architectures can be made input-convex with a minor modification, and develop specialized optimization algorithms tailored to this setting. Finally, we highlight the performance of the methods on multi-label prediction, image completion, and reinforcement learning problems, where we show improvement over the existing state of the art in many cases.

183 citations

Journal ArticleDOI
TL;DR: This work introduces convex adaptive partitioning (CAP), which creates a globally convex regression model from locally linear estimates fit on adaptively selected covariate partitions and demonstrates empirical performance by comparing the performance of CAP to other shape-constrained and unconstrained regression methods for predicting weekly wages and value function approximation for pricing American basket options.
Abstract: We propose a new, nonparametric method for multivariate regression subject to convexity or concavity constraints on the response function. Convexity constraints are common in economics, statistics, operations research, financial engineering and optimization, but there is currently no multivariate method that is stable and computationally feasible for more than a few thousand observations. We introduce convex adaptive partitioning (CAP), which creates a globally convex regression model from locally linear estimates fit on adaptively selected covariate partitions. CAP is a computationally efficient, consistent method for convex regression. We demonstrate empirical performance by comparing the performance of CAP to other shape-constrained and unconstrained regression methods for predicting weekly wages and value function approximation for pricing American basket options.

130 citations


Cites background or methods from "Convex piecewise-linear fitting"

  • ...Refitting a series of hyperplanes can be done in a frequentist (Magnani and Boyd, 2009) or Bayesian (Hannah and Dunson, 2011) manner....


  • ...Similar methods for refitting hyperplanes have been proposed in Breiman (1993) and Magnani & Boyd (2009)....



  • ...Refitting hyperplanes in this manner can be viewed as a Gauss-Newton method for the non-linear least squares problem (Magnani & Boyd 2009): minimize Σ_{i=1}^n ( y_i − max_{k∈{1,...,K}} ( α_k + β_k^T x_i ) )^2....


  • ...Low noise or noise free problems often occur when a highly complicated convex function needs to be approximated by a simpler one (Magnani and Boyd, 2009)....


References
Book
01 Mar 2004
TL;DR: The focus is on recognizing convex optimization problems and then finding the most appropriate technique for solving them; the book gives a comprehensive introduction to the subject.
Abstract: Convex optimization problems arise frequently in many different fields. A comprehensive introduction to the subject, this book shows in detail how such problems can be solved numerically with great efficiency. The focus is on recognizing convex optimization problems and then finding the most appropriate technique for solving them. The text contains many worked examples and homework exercises and will appeal to students, researchers and practitioners in fields such as engineering, computer science, mathematics, statistics, finance, and economics.

33,341 citations

Book
01 Jan 1991
TL;DR: The author explains the design and implementation of the Levinson-Durbin Algorithm, which automates the very labor-intensive and therefore time-heavy and expensive process of designing and implementing a Quantizer.

7,015 citations


"Convex piecewise-linear fitting" refers methods in this paper

  • ...The algorithm is closely related to the k-means algorithm used in least-squares clustering (Gersho and Gray 1991)....


Journal ArticleDOI
TL;DR: This tutorial paper collects together in one place the basic background material needed to do GP modeling, and shows how to recognize functions and problems compatible with GP, and how to approximate functions or data in a formcompatible with GP.
Abstract: A geometric program (GP) is a type of mathematical optimization problem characterized by objective and constraint functions that have a special form. Recently developed solution methods can solve even large-scale GPs extremely efficiently and reliably; at the same time a number of practical problems, particularly in circuit design, have been found to be equivalent to (or well approximated by) GPs. Putting these two together, we get effective solutions for the practical problems. The basic approach in GP modeling is to attempt to express a practical problem, such as an engineering analysis or design problem, in GP format. In the best case, this formulation is exact; when this is not possible, we settle for an approximate formulation. This tutorial paper collects together in one place the basic background material needed to do GP modeling. We start with the basic definitions and facts, and some methods used to transform problems into GP format. We show how to recognize functions and problems compatible with GP, and how to approximate functions or data in a form compatible with GP (when this is possible). We give some simple and representative examples, and also describe some common extensions of GP, along with methods for solving (or approximately solving) them.

1,215 citations


"Convex piecewise-linear fitting" refers background in this paper

  • ...The function ψ is convex and piecewise-linear (see, e.g., Boyd and Vandenberghe 2004); the function φ is evidently bi-affine in x and (b,B)....


  • ...Then the problem (1) can be solved, exactly, via a quadratic program (QP); see (Boyd and Vandenberghe 2004, Sect....


Journal ArticleDOI
TL;DR: The availability of the explicit structure of the MPC controller provides an insight into the type of control action in different regions of the state space, and highlights possible conditions of degeneracies of the LP, such as multiple optima.
Abstract: We study model predictive control (MPC) schemes for discrete-time linear time-invariant systems with constraints on inputs and states, that can be formulated using a linear program (LP). In particular, we focus our attention on performance criteria based on a mixed 1/∞-norm, namely, 1-norm with respect to time and ∞-norm with respect to space. First we provide a method to compute the terminal weight so that closed-loop stability is achieved. We then show that the optimal control profile is a piecewise affine and continuous function of the initial state and briefly describe the algorithm to compute it. The piecewise affine form allows to eliminate online LP, as the computation associated with MPC becomes a simple function evaluation. Besides practical advantages, the availability of the explicit structure of the MPC controller provides an insight into the type of control action in different regions of the state space, and highlights possible conditions of degeneracies of the LP, such as multiple optima.

765 citations


"Convex piecewise-linear fitting" refers methods in this paper

  • ...This convex piecewise-linear approximate value function can be used to construct a simple feedback controller that approximately minimizes fuel use; see, e.g., Bemporad et al. (2002)....


Journal ArticleDOI
TL;DR: Hybrid systems description language (HYSDEL) as discussed by the authors is a high-level modeling language for discrete hybrid automata (DHA) and a set of tools for translating DHA into hybrid models.
Abstract: This paper presents a computational framework for modeling hybrid systems in discrete-time. We introduce the class of discrete hybrid automata (DHA) and show its relation with several other existing model paradigms: piecewise affine systems, mixed logical dynamical systems, (extended) linear complementarity systems, min-max-plus-scaling systems. We present HYSDEL (hybrid systems description language), a high-level modeling language for DHA, and a set of tools for translating DHA into any of the former hybrid models. Such a multimodeling capability of HYSDEL is particularly appealing for exploiting a large number of available analysis and synthesis techniques, each one developed for a particular class of hybrid models. An automotive example shows the modeling capabilities of HYSDEL and how the different models allow to use several computational tools.

448 citations

Frequently Asked Questions (1)
Q1. What have the authors contributed in "Convex piecewise-linear fitting"?

The authors consider the problem of fitting a convex piecewise-linear function, with some specified form, to given multi-dimensional data. The method the authors describe, which is a variation on the K-means algorithm for clustering, seems to work well in practice, at least on data that can be fit well by a convex function. The authors focus on the simplest function form, a maximum of a fixed number of affine functions, and then show how the methods extend to a more general form.