# Parameter Range Reduction in Ordinary Differential Equation Models

TL;DR: An interval-arithmetic algorithm prunes regions of parameter space that are inconsistent with given time series data; on two ODE test models, the reduced parameter ranges significantly improve the performance of traditional parameter estimation methods.

Abstract: This paper presents an algorithm for parameter range reduction in systems of ordinary differential equations. Parameter values are assumed only to be known to lie in potentially large regions of parameter space. Interval arithmetic and a family of monotonic discretizations are used to prune regions of parameter space that are inconsistent with given time series data. The algorithm is tested on two ordinary differential equation models and the reduced ranges are shown to significantly improve the performance of traditional parameter estimation methods.

## Summary (2 min read)


### 1 Introduction

- Given a system of ordinary differential equations (ODEs) that models some physical process, the authors are interested in the inverse problem of identifying an appropriate set of parameter values from time series data.
- Using estimates of how the cost function varies with the parameters, new parameter values are chosen until the cost function is hopefully minimized.
- The algorithm discretizes the model equations and uses these discretizations to quickly prune regions of parameter space that are deemed to be inconsistent with the data.
- Regions that cannot be shown to be inconsistent with the data are, for simplicity, referred to as consistent regions.

### 2 Interval Arithmetic and Notation

- The authors now briefly introduce some concepts in interval arithmetic.
- There are many sophisticated interval range enclosure methods [9], but for their purposes the authors require only natural interval extensions, which are obtained by substituting all occurrences of each argument of f with its interval-valued equivalent.
- In addition to providing sharpness, establishing monotonicity can significantly decrease the computational time of the algorithm.
- If each variable on which F depends appears only once in the expression, then the enclosure F, obtained from a natural interval extension, is sharp.
- In some cases, such a rearrangement will not be possible, but factoring is generally beneficial.

### 3 Parameter Range Reduction

- The authors outline the parameter range reduction scheme.
- The time interval [t0, ts] will be referred to as the discretization window.
- To obtain a sharper result, the authors may consider alternate rearrangements of F before deriving the natural interval extension.
- Since ∂f1/∂λ1 = x2, monotonicity with respect to λ1 cannot be established if their continuous representation of x2 spans zero at some time in the discretization window.
- The core of the parameter range reduction algorithm is the following test.

### 4 Results

- To demonstrate the effectiveness of the algorithm, the authors first consider a two-dimensional nonlinear pendulum system with three parameters.
- The data were simulated by numerically integrating with a true parameter set, obtaining a dense set of sample data points and adding a low level of noise.
- In each case, when given the initial a priori ranges as input, the optimization routine was unable to locate a minimum close to the true parameter values.
- Results from the parameter range reduction algorithm are presented in Table 3.

### 5 Conclusion

- This paper considered the problem of identifying parameters in an ODE model given time series data.
- The algorithm presented reduces a priori ranges of parameter values and outputs tighter bounds and a starting value for traditional parameter estimation schemes.
- The algorithm requires all model variables to be measured.
- It may not be reasonable to assume that time series data will be available for the velocity of the nonlinear pendulum.
- Strong results have been obtained in this line of research; their details will be reported elsewhere.


Journal of Scientific Computing manuscript No. (will be inserted by the editor)

Parameter range reduction in ordinary differential equation models

Andrew Skelton · Allan R. Willms

This is a post-print of the article by the same title published in: The Journal of Scientific Computing, Vol. 62 (2), pp. 517–531 (2015). doi: 10.1007/s10915-014-9865-6

Abstract This paper presents an algorithm for parameter range reduction in systems of ordinary differential equations. Parameter values are assumed only to be known to lie in potentially large regions of parameter space. Interval arithmetic and a family of monotonic discretizations are used to prune regions of parameter space that are inconsistent with given time series data. The algorithm is tested on two ordinary differential equation models and the reduced ranges are shown to significantly improve the performance of traditional parameter estimation methods.

Keywords parameter estimation · ordinary differential equations · numerical methods · interval arithmetic

PACS 34A55 · 65L06

1 Introduction

Given a system of ordinary differential equations (ODEs) that models some physical process, we are interested in the inverse problem of identifying an appropriate set of parameter values from time series data. We will, in particular, be concerned with models of the form

$$x' = f(t, x, \lambda), \qquad x \in \mathbb{R}^m, \quad \lambda \in \mathbb{R}^q, \qquad (1)$$

where the model parameters, $\lambda$, are to be determined from time series data of the form $S = \{(t^n_{\mathrm{obs}}, x^n_{\mathrm{obs}})\}_{n=1}^{N}$. We denote the observation time window $I_{\mathrm{obs}} = [t^1_{\mathrm{obs}}, t^N_{\mathrm{obs}}]$ and, for a fixed $\lambda$, denote the solution to (1) passing through the initial point $(\tau, \xi)$ by $x(t, \tau, \xi, \lambda)$.

A. Skelton, University of Guelph, Guelph, Ontario, Canada, N1G 2W1. E-mail: skeltona@uoguelph.ca
A.R. Willms, E-mail: awillms@uoguelph.ca

Parameter identification in this context is typically accomplished by selecting a set of initial parameter values, numerically integrating the model equations and comparing the result to the time series data. A cost function, such as weighted least squares, is used to measure the suitability of these parameters. Using estimates of how the cost function varies with the parameters, new parameter values are chosen until the cost function is hopefully minimized.

It is often the case that little is known a priori about the parameter values, thus making an initial selection difficult. If the initial selection must be made from a very large region of parameter space, it is possible that this selection will result in a system that cannot be numerically integrated over the desired time window. If the cost function has multiple local minima, or large flat regions in parameter space, it may be difficult for the procedure to converge to a reasonable minimum. To combat this problem, one could employ a multistart method [12] in which a large number of initial parameter selections are made and the results are analyzed to identify a global minimizer. An alternative is a simulated annealing method [2] in which the algorithm attempts to efficiently explore as much of parameter space as possible. A variety of global methods are compared in [1,3,6]. In all cases, significant computational time is spent numerically integrating the model equations. It is useful to be able to reduce the size of the parameter space each method is required to search.
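The standard estimation loop described above can be sketched in a few lines. This is a minimal illustration with a hypothetical one-parameter model and a naive local search, not any of the methods cited above:

```python
import math

# Toy model: x' = -lam * x, a single-parameter linear ODE (illustrative only;
# the paper's test models are different).
def integrate(lam, x0, t_grid):
    """Forward-Euler integration of x' = -lam*x on the given time grid."""
    xs = [x0]
    for i in range(1, len(t_grid)):
        h = t_grid[i] - t_grid[i - 1]
        xs.append(xs[-1] + h * (-lam * xs[-1]))
    return xs

def cost(lam, data, x0, t_grid):
    """Least-squares mismatch between model trajectory and observations."""
    xs = integrate(lam, x0, t_grid)
    return sum((x - d) ** 2 for x, d in zip(xs, data))

# Synthetic "observations" from the true parameter lam = 0.5.
t_grid = [0.01 * i for i in range(101)]
data = [math.exp(-0.5 * t) for t in t_grid]

# Naive local search: starting far from the truth inside a large a priori
# range, halve the step whenever no neighbour improves the cost.
lam, step = 5.0, 1.0
for _ in range(60):
    candidates = [lam - step, lam, lam + step]
    lam = min(candidates, key=lambda c: cost(c, data, 1.0, t_grid))
    if lam == candidates[1]:
        step /= 2.0

print(round(lam, 2))  # converges near the true value 0.5
```

Each candidate evaluation requires a full numerical integration, which is why shrinking the search region before running such a loop pays off.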

This paper presents an improvement on a parameter range reduction method first introduced in [13,14]. The algorithm discretizes the model equations and uses these discretizations to quickly prune regions of parameter space that are deemed to be inconsistent with the data. The final result is a collection of boxes from which a better initial parameter selection can be made. The method in [13,14] required that the discretization over a given time window be monotonic with respect to each parameter. We relax this condition here and show that doing so results in a smaller region of parameter space that fails to be inconsistent. For simplicity, we will refer to such regions as consistent regions.

The remainder of this paper is organized as follows. In Section 2, we provide a brief summary of interval arithmetic and present some theorems that will allow for better and more efficient parameter range reduction. In Section 3, we discuss the theory and structure of the algorithm itself. In Section 4, we present experimental results obtained from two ODE models and show the effect of reduced parameter ranges on traditional parameter estimation schemes.

2 Interval Arithmetic and Notation

We now briefly introduce some concepts in interval arithmetic. More detailed and complete reviews of interval analysis can be found in [4,7–9]. In this paper, interval-valued quantities will be denoted with a bold typeface, while under and over bars will denote respectively the lower and upper endpoints of an interval. If $\mathbf{a} = [\underline{a}, \overline{a}]$ and $\mathbf{b} = [\underline{b}, \overline{b}]$ and $\bullet$ denotes any of the operators $+, -, \times, \div$, then we can define elementary interval arithmetic as

$$\mathbf{a} \bullet \mathbf{b} = \{a \bullet b : a \in \mathbf{a},\ b \in \mathbf{b}\}.$$

This definition is equivalent to the following:

$$\mathbf{a} + \mathbf{b} = [\,\underline{a} + \underline{b},\ \overline{a} + \overline{b}\,],$$
$$\mathbf{a} - \mathbf{b} = [\,\underline{a} - \overline{b},\ \overline{a} - \underline{b}\,],$$
$$\mathbf{a} \times \mathbf{b} = [\,\min\{\underline{a}\underline{b}, \underline{a}\overline{b}, \overline{a}\underline{b}, \overline{a}\overline{b}\},\ \max\{\underline{a}\underline{b}, \underline{a}\overline{b}, \overline{a}\underline{b}, \overline{a}\overline{b}\}\,],$$
$$\mathbf{a} \div \mathbf{b} = [\,\underline{a}, \overline{a}\,] \times \left[\,\frac{1}{\overline{b}},\ \frac{1}{\underline{b}}\,\right], \quad \text{if } 0 \notin \mathbf{b}.$$

Elementary functions can likewise be defined over intervals [4]. If $n$ is a nonnegative integer, for example, we can define

$$\mathbf{a}^n = \begin{cases} [1, 1] & \text{if } n = 0,\\ [\,\underline{a}^n, \overline{a}^n\,] & \text{if } \underline{a} \ge 0, \text{ or } n \text{ is odd},\\ [\,\overline{a}^n, \underline{a}^n\,] & \text{if } \overline{a} \le 0, \text{ and } n \text{ is even},\\ [\,0, \max(\underline{a}^n, \overline{a}^n)\,] & \text{if } \underline{a} \le 0 \le \overline{a}, \text{ and } n > 0 \text{ is even}. \end{cases}$$
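The elementary operations and the integer power rule above can be collected into a small class. This is an illustrative sketch only; a rigorous implementation would also round endpoints outward to account for floating-point error:

```python
class Interval:
    """Minimal interval arithmetic mirroring the elementary definitions above
    (a sketch, not a validated library: outward rounding is ignored)."""

    def __init__(self, lo: float, hi: float):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __add__(self, b):
        return Interval(self.lo + b.lo, self.hi + b.hi)

    def __sub__(self, b):
        return Interval(self.lo - b.hi, self.hi - b.lo)

    def __mul__(self, b):
        p = [self.lo * b.lo, self.lo * b.hi, self.hi * b.lo, self.hi * b.hi]
        return Interval(min(p), max(p))

    def __truediv__(self, b):
        if b.lo <= 0 <= b.hi:
            raise ZeroDivisionError("0 is in the divisor interval")
        return self * Interval(1.0 / b.hi, 1.0 / b.lo)

    def __pow__(self, n: int):
        # Case analysis for nonnegative integer powers, as defined above.
        if n == 0:
            return Interval(1.0, 1.0)
        if self.lo >= 0 or n % 2 == 1:
            return Interval(self.lo ** n, self.hi ** n)
        if self.hi <= 0:
            return Interval(self.hi ** n, self.lo ** n)
        return Interval(0.0, max(self.lo ** n, self.hi ** n))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

a = Interval(-1.0, 2.0)
print(a * a)   # endpoint products give [-2.0, 4.0]
print(a ** 2)  # the power rule is sharper: [0.0, 4.0]
```

Note that `a * a` and `a ** 2` already differ: the multiplication rule treats the two occurrences of `a` as independent, which is exactly the dependency problem discussed next.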

Given a function $f : D \subset \mathbf{x} \to \mathbb{R}$, we wish to determine an interval that encloses as tightly as possible the set $\mathrm{Range}(f; D) = \{f(x) : x \in D\}$. There are many sophisticated interval range enclosure methods [9], but for our purposes we will require only the use of natural interval extensions, which are obtained by substituting all occurrences of each argument of $f$ with its interval-valued equivalent. An interval-valued enclosure of the range of a function will be denoted $\mathbf{F} : \mathbf{x} \to \mathbb{I}$. We will denote the upper and lower endpoints of $\mathbb{I}$ respectively by $\overline{F}(\mathbf{x})$ and $\underline{F}(\mathbf{x})$. If $f$ is a sufficiently nice function, it has been shown that interval extensions satisfy the property of inclusion isotonicity [7], that is,

$$\mathrm{Range}(f; \mathbf{x}) \subseteq \mathbf{F}(\mathbf{x}).$$

If $\mathrm{Range}(f; \mathbf{x}) = \mathbf{F}(\mathbf{x})$, then the interval enclosure $\mathbf{F}$ is said to be sharp. Consider, for example, the function $f : [0, 2] \to \mathbb{R}$ given by $f(x) = x - x^2$. It is easy to show that the range of this function is $\mathrm{Range}(f; [0,2]) = [-2, \tfrac14]$. Using the natural interval extension $\mathbf{F}(\mathbf{x}) = \mathbf{x} - \mathbf{x}^2$, it can be seen that

$$\mathrm{Range}(f; [0,2]) \subset \mathbf{F}([0,2]) = [0,2] - [0,2]^2 = [0,2] - [0,4] = [-4, 2].$$

This interval enclosure is not sharp because of the dependency problem; the two instances of $x$ are incorrectly treated as independent. We can, however, obtain sharpness by considering an alternate form of $f$. Using the natural interval extension of the alternate form $g(x) = -\left(x - \tfrac12\right)^2 + \tfrac14$, it can be seen that

$$\mathrm{Range}(g; [0,2]) = \mathbf{G}([0,2]) = -\left([0,2] - \tfrac12\right)^2 + \tfrac14 = -\left[0, \tfrac94\right] + \tfrac14 = \left[-\tfrac94, 0\right] + \tfrac14 = \left[-2, \tfrac14\right].$$
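The dependency example can be verified numerically. The helper functions below are a minimal ad hoc subset of interval arithmetic written for this one computation, not the authors' code:

```python
# The dependency problem, computed numerically: evaluate the two algebraically
# equivalent forms of f(x) = x - x^2 over [0, 2] with interval arithmetic.
# Intervals are plain (lo, hi) tuples.

def sub(a, b):          # [a] - [b]
    return (a[0] - b[1], a[1] - b[0])

def add_scalar(a, c):   # [a] + c
    return (a[0] + c, a[1] + c)

def neg(a):             # -[a]
    return (-a[1], -a[0])

def sqr(a):             # [a]^2, using the even-power case analysis
    if a[0] >= 0:
        return (a[0] ** 2, a[1] ** 2)
    if a[1] <= 0:
        return (a[1] ** 2, a[0] ** 2)
    return (0.0, max(a[0] ** 2, a[1] ** 2))

x = (0.0, 2.0)

# Natural extension of f(x) = x - x^2: the two occurrences of x are treated
# as independent, so the enclosure is wide.
print(sub(x, sqr(x)))                                  # (-4.0, 2.0)

# Natural extension of g(x) = -(x - 1/2)^2 + 1/4: x appears only once, so
# the enclosure is sharp.
print(add_scalar(neg(sqr(sub(x, (0.5, 0.5)))), 0.25))  # (-2.0, 0.25)
```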

Since alternate forms of the same expression can give different results when extended to intervals, it is very important to consider the form of the original equation before applying an interval extension. The following theorems give useful conditions for determining if an interval enclosure will be sharp.

Theorem 1 [7] An interval enclosure $\mathbf{F}(\mathbf{x})$, obtained from a natural interval extension, is sharp if and only if both $\underline{F}$ and $\overline{F}$ are computed in terms of a single endpoint of each of the variables on which $\mathbf{F}$ depends.

Theorem 2 [4] If $F$ is monotonically nondecreasing or nonincreasing with respect to each argument, then a sharp enclosure $\mathbf{F}$ can be calculated by evaluating the function $F$ at the appropriate endpoints of each interval argument.

We will make frequent use of this monotonicity test. In addition to providing sharpness, establishing monotonicity can significantly decrease the computational time of the algorithm. In this case, the values of $\underline{F}$ and $\overline{F}$ must be calculated independently, but do not require the use of interval arithmetic. We note that any checks of monotonicity must be made over the entire range of values in each interval-valued argument. Any partial derivatives used in this analysis are therefore also interval extensions and may themselves fail to be sharp, which can lead to pessimistic results. It is, however, often the case that establishing monotonicity is not a necessary condition for sharpness.

Theorem 3 [7] If each variable on which $\mathbf{F}$ depends appears only once in the expression, then the enclosure $\mathbf{F}$, obtained from a natural interval extension, is sharp.

When constructing interval extensions, it is therefore useful to consider alternate rearrangements in which each interval-valued quantity, especially those for which monotonic properties cannot be established, appears only once in the given expression. In some cases, such a rearrangement will not be possible, but factoring is generally beneficial.

Definition 1 [7] The interval arithmetic addition and multiplication operators are sub-distributive. That is, $\mathbf{x}(\mathbf{y} + \mathbf{z}) \subseteq \mathbf{x}\mathbf{y} + \mathbf{x}\mathbf{z}$.

The lesson to be learned is that the algebraic structure of any expression must be carefully considered before extending it to interval arithmetic if tight enclosures are desired.
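Sub-distributivity, and why the factored form $\mathbf{x}(\mathbf{y}+\mathbf{z})$ is preferable, can be seen in a small numerical sketch (illustrative tuple-based helpers, with exact binary endpoint values so rounding is not an issue):

```python
# Sub-distributivity in action: x(y + z) is contained in, but here strictly
# smaller than, xy + xz, because x appears twice on the right-hand side.

def add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def mul(a, b):
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))

x, y, z = (-1.0, 1.0), (1.0, 2.0), (-2.0, -1.0)

factored = mul(x, add(y, z))          # x(y + z)
expanded = add(mul(x, y), mul(x, z))  # xy + xz

print(factored)  # (-1.0, 1.0)
print(expanded)  # (-4.0, 4.0)
```

The factored form uses each interval once and produces the enclosure $[-1, 1]$; the expanded form pays for the repeated $\mathbf{x}$ with the far wider $[-4, 4]$.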

3 Parameter Range Reduction

In this section, we outline the parameter range reduction scheme. We first discuss the family of discretizations used by the scheme and their interval extensions. We then discuss monotonicity tests that enable our interval extensions to be as sharp as possible. We then discuss the main test used by the scheme and, finally, we outline the structure of the algorithm.

The algorithm requires as input an ordinary differential equation model, initial parameter ranges and a range for each model variable at every time $t \in I_{\mathrm{obs}}$. It is assumed that the true value of each parameter lies within its initial range, which may be as wide as $(-\infty, +\infty)$ if no a priori information is known. Converting the discrete time series data to a continuous representation may be accomplished using a variety of methods. In this paper, we use an algorithm presented in [10] that replaces time series data with a continuous piecewise linear band that encloses all data points. The results of the range reduction algorithm depend on the representation and thus on what the user thinks is a valid continuous representation that encloses the true solution.

3.1 Discretizations

The algorithm will make use of a specific family of linear multistep discretizations. In [14], it was shown that the best discretization formulae for parameter range reduction were the so-called A1OUT discretizations, whose form is

$$F(t^0, h, s;\, x^0, x^1, \ldots, x^s, \lambda) := x^0 - x^s + h \sum_{i=0}^{s} \beta_i f^i, \qquad (2)$$

where $h > 0$ is the constant step size, $s$ is the number of steps in the discretization, each $x^i = [x^i_1, \ldots, x^i_m]^T$ is an independent variable in $\mathbb{R}^m$ and represents an approximation to the solution $x(t^i, \tau, \xi, \lambda)$ of the ODE at time $t^i = t^0 + ih$, and $f^i = f(t^i, x^i, \lambda)$. The time interval $[t^0, t^s]$ will be referred to as the discretization window. Superscripts are used to denote time indices and subscripts to denote spatial indices. For readability, we will often drop the first three arguments of $F$ when it is not necessary to emphasize this dependence.

The $\beta_i$ coefficients in equation (2) are chosen using the following criteria:

1. $\beta_i \ge 0$, $0 \le i \le s$ (monotonicity),
2. $\beta_i = \beta_{s-i}$, $0 \le i \le \lfloor s/2 \rfloor$ (symmetry),
3. the $\beta_i$ are chosen to maximize the order of the discretization and, if this does not uniquely identify them, they are chosen to minimize the error constant as defined in [14].

A table of $\beta_i$ values for $1 \le s \le 17$ can be found in [14] with details of their derivation.
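As a sketch of how the discretization (2) behaves, consider the one-step case $s = 1$, for which the symmetry and order criteria give the trapezoidal weights $\beta_0 = \beta_1 = \tfrac12$. The scalar model and step size below are illustrative, not those of [14]:

```python
import math

# Illustrative A1OUT residual F := x^0 - x^s + h * sum_i beta_i * f(t^i, x^i, lam),
# evaluated for s = 1 with trapezoidal weights beta = (1/2, 1/2).

def a1out_residual(f, t0, h, xs, lam, beta):
    acc = sum(b * f(t0 + i * h, x, lam) for i, (b, x) in enumerate(zip(beta, xs)))
    return xs[0] - xs[-1] + h * acc

# Toy model x' = -lam*x, whose exact solution is exp(-lam*t).
f = lambda t, x, lam: -lam * x
h, lam_true = 0.05, 0.7
xs_exact = [math.exp(-lam_true * i * h) for i in range(2)]

r_true = a1out_residual(f, 0.0, h, xs_exact, lam_true, (0.5, 0.5))
r_wrong = a1out_residual(f, 0.0, h, xs_exact, 3.0, (0.5, 0.5))

# The residual nearly vanishes at the true parameter (the trapezoidal rule's
# O(h^3) local error) but not at a wrong one; this gap is what a range
# reduction test can exploit.
print(abs(r_true) < 1e-5, abs(r_wrong) > 1e-2)  # True True
```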

The natural interval extension of (2) is given by

$$\mathbf{F}(t^0, h, s;\, \mathbf{x}^0, \mathbf{x}^1, \ldots, \mathbf{x}^s, \boldsymbol{\lambda}) := \mathbf{x}^0 - \mathbf{x}^s + h \sum_{i=0}^{s} \beta_i \mathbf{f}^i,$$

where $\boldsymbol{\lambda} := \boldsymbol{\lambda}_1 \times \boldsymbol{\lambda}_2 \times \cdots \times \boldsymbol{\lambda}_q$ is the Cartesian product of the parameter intervals (the parameter box), and $\mathbf{f}^i = \mathbf{f}(t^i, \mathbf{x}^i, \boldsymbol{\lambda})$ is the natural interval extension of $f^i$. Each $\mathbf{x}^i$ is determined from the continuous representations of the time series data and, for simplicity, we will often write $\mathbf{x} := \mathbf{x}^0, \ldots, \mathbf{x}^s$. To obtain a sharper result, we may consider alternate rearrangements of $F$ before deriving the natural interval extension.
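Although the core consistency test itself is stated after this excerpt, the pruning idea can be sketched as follows: since (2) is approximately zero along true solutions, a parameter box whose interval residual excludes zero cannot be consistent with the data. Everything below (the scalar model, the tolerance, and the bisection loop) is an illustrative assumption, not the authors' algorithm, which also accounts for discretization error:

```python
import math

def imul(a, b):
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))

def iadd(a, b): return (a[0] + b[0], a[1] + b[1])
def isub(a, b): return (a[0] - b[1], a[1] - b[0])
def iscale(c, a): return (c * a[0], c * a[1]) if c >= 0 else (c * a[1], c * a[0])

def residual_enclosure(x0, x1, lam, h):
    """Interval A1OUT residual for x' = -lam*x with s = 1, beta = (1/2, 1/2)."""
    neg_lam = (-lam[1], -lam[0])
    fsum = iadd(imul(neg_lam, x0), imul(neg_lam, x1))
    return iadd(isub(x0, x1), iscale(0.5 * h, fsum))

def consistent(box, x0, x1, h, tol=1e-3):
    # A box survives only if the residual enclosure contains zero (within a
    # crude tolerance standing in for a proper discretization-error bound).
    lo, hi = residual_enclosure(x0, x1, box, h)
    return lo <= tol and hi >= -tol

# Data bands of half-width 1e-3 around the true solution of x' = -0.7*x.
h = 0.05
x0 = (1.0 - 1e-3, 1.0 + 1e-3)
mid = math.exp(-0.7 * h)
x1 = (mid - 1e-3, mid + 1e-3)

# Repeatedly bisect the a priori range [0, 4], discarding inconsistent boxes.
boxes = [(0.0, 4.0)]
for _ in range(8):
    nxt = []
    for box in boxes:
        if consistent(box, x0, x1, h):
            m = 0.5 * (box[0] + box[1])
            nxt += [(box[0], m), (m, box[1])]
    boxes = nxt

kept = boxes
print(min(b[0] for b in kept), max(b[1] for b in kept))  # a tight bracket around 0.7
```

The surviving union of boxes is a far smaller search region than the a priori range, which is the hand-off point to a traditional estimation scheme.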
