Quarterly of Applied Mathematics, Vol. 30, No. 1 (1972), pp. 41-50. Open access.

QUARTERLY OF APPLIED MATHEMATICS
APRIL, 1972
SPECIAL ISSUE: SYMPOSIUM ON
"THE FUTURE OF APPLIED MATHEMATICS"
THE SYNTHESIS OF DYNAMICAL SYSTEMS*
By
R. W. BROCKETT
Harvard University
Abstract. A significant part of contemporary applied mathematics is concerned
directly with communication, control and computation. In these fields many of the
central problems involve the synthesis of algorithms, or dynamical systems, as opposed
to the analysis of dynamical systems which predominates in mathematical physics.
Arithmetic and numerical algorithms, finite-state machines and electrical filters are
examples of the types of dynamical systems which are frequently needed to operate
on data, in continuous or discrete form, and to produce data on a compatible time scale.
In this paper we discuss the scope and success of some of the synthesis procedures
currently available to treat these problems.
1. Computation and control. Dynamical phenomena have attracted the attention
of mathematicians for centuries; indeed, several important branches of analysis have
developed out of studies in this direction. In mathematics the usual idea of a dynamical
system is that of a deterministic, autonomous physical system; the goal of a mathe-
matical investigation is to make qualitative and quantitative predictions about the
nature of future behavior [1, pp. 1-4]. In this tradition, tremendous emphasis has been
placed on problems of an astronomical origin. In particular, from 1773 (Laplace) to
1889 (Poincaré) the question of determining whether or not the solar system is stable
was intensively studied, whereas the question of what to do about it should it prove to be
unstable was, apparently, left to the theologians. In any case, the descriptive branch
of dynamics has a much longer history than the prescriptive one; though there is evidence
that the idea of designing and controlling a dynamical system has roots in antiquity
[2, pp. 5-10], the mathematical development of this field is mostly a product of the
20th century.
As contrasted with the Newtonian concept of dynamics, the view is held in numerous
areas of applied mathematics that a dynamical system is a kind of operator or algorithm
which accepts inputs and produces outputs; the goal of a mathematical theory is either
to define a dynamical system which corresponds to a given set of input-output pairs or,
in the event that the dynamical system is given, to construct an input which will steer
the system to a desired goal. This is typical in control theory and in abstract computa-
tional theory. On the control side one can trace this circle of ideas back to the English
scientists Heaviside and Rayleigh who, around 1885, discussed the response of physical
systems (electrical networks and acoustical resonators) to sinusoidal forcing functions
of an arbitrary frequency. Heaviside's goal was to extend Kirchhoff's penetrating analysis
* This work was supported in part by the U. S. Office of Naval Research under the Joint Services
Electronics Program by Contract N00014-67-A-0298-0006.

of linear, resistive electrical networks to all linear time-invariant electrical networks
through the introduction of "capacitative operators" and "inductive operators." This
leads to the extension of Ohm's law itself and to the characterization of an electrical
network through its "impedance" (Heaviside's term [3]) or its "driving point reaction"
(Rayleigh's term [4]). As far as computation is concerned, according to Knuth [5, p. 607],
algorithms are "precise rules for transforming specified inputs into specified outputs in
a finite number of steps." This makes algorithms very much akin to dynamical systems
in the above sense of the words. In 1936 Turing [6] set out to formalize computation in
terms of a mechanical device which accepts input data and produces outputs. Subse-
quent developments led to the formalization of the idea of a finite automaton—perhaps
the most elementary dynamical input-output model in existence. If further justification
for our language is required recall that even elementary arithmetic with its system of
carrying and borrowing corresponds to our intuitive idea of a dynamic process.
Thus we regard both linear dynamical systems (filters) and automata as models for
algorithms. Both are dynamic and both accept inputs and produce outputs. The tech-
nological interest in each case stems from the fact that specific examples of these models
can be synthesized readily and the computations which these systems perform can be
very useful. Despite the apparent dissimilarities, even the synthesis procedures for these
systems proceed along similar lines. One first of all determines a state space, codes the
states and then builds the memoryless inter-connection. In both cases the design proceeds
smoothly in simple cases, but in the more complex cases combinatorial problems can be
severe and may even prevent the successful completion of phase two. We feel that certain
problems in numerical analysis are enough like problems arising in linear system theory
to justify a deeper investigation. In both fields the synthesis of algorithms is a major
problem. Convergence is the prime objective in many naive applications. Optimality
is an issue but of limited practical significance.
In this paper we want to discuss, in a general way, some synthesis questions related
to dynamical systems of the input-output, or if you like, algorithmic, type. At times it
will be necessary to assume that the reader has some familiarity with linear system
theory as it is discussed in the recent literature [7, 8].
2. Discovering iterative schemes. Assume that one has a certain function or set
of functions to be evaluated. A number of procedures are known for finding recursive
schemes to carry out the computation. For example, if the function is a polynomial
and if it is to be evaluated at a single point X then one has Horner's rule,

    x(k + 1) = Xx(k) + p_{k+1};  x(0) = 0,

which evaluates p_1 s^n + p_2 s^{n-1} + ··· + p_{n+1} at X with just n additions and n
multiplications. This is a special case of a much more general idea which we will go into below.
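As a concrete illustration (ours, not from the original text), Horner's rule as stated above is a few lines of code; the coefficient list runs from the highest-degree coefficient p_1 down to the constant term p_{n+1}.

```python
def horner(p, X):
    """Evaluate p[0]*X**n + p[1]*X**(n-1) + ... + p[-1] via the recursion
    x(k+1) = X*x(k) + p_{k+1}, x(0) = 0: n additions and n multiplications."""
    x = 0
    for coeff in p:
        x = X * x + coeff
    return x

# Example: 2*s**2 - 3*s + 1 evaluated at s = 5.
value = horner([2, -3, 1], 5)
```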
If U is a set then by U* we mean the monoid (semigroup with identity) whose
elements consist of strings of the elements of U and whose multiplication is concatenation.
We include the empty string which is the identity. We consider the problem of evaluating
functions defined on U* mapping into a set Y, and in particular we want to discuss what
happens if we give U and Y additional structure corresponding to various types of
practical computation.
2.1 Functions defined on finite sets. Suppose U is a finite set. If r maps U* into Y
then r defines an equivalence relation on U* according to which u_1 ∈ U* is equivalent
to u_2 ∈ U* if for each u_3 ∈ U* we have r(u_1 u_3) = r(u_2 u_3). We denote this equivalence

by u_1 ≡_r u_2. Nerode [9] has shown that there exists an iterative scheme to evaluate r
which has a finite number of states if and only if U*/≡_r is a finite set. That is to say,
if there are only a finite number of equivalence classes in U* then there exists a finite
set X and functions a and c, with a: X × U → X and c: X × U → Y, such that r is evaluated by

    x(k + 1) = a[x(k), u(k)];  y(k) = c[x(k), u(k)];  x(0) = x_0.
The set X is called the set of states. Any two such iterative schemes for evaluating r which
are minimal in the sense that every state can be reached from the starting state for some
input string, and every two states can be distinguished knowing the input and output
data, have the same number of states and this number equals the number of elements
in U*/≡_r.
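A toy instance of Nerode's criterion (our example, not the paper's): the map r sending a binary string to the parity of its 1s has exactly two equivalence classes in U*/≡_r, so a two-state iterative scheme with X = {0, 1} evaluates it.

```python
def a(x, u):            # next-state map a: X × U -> X (parity update)
    return x ^ u

def c(x, u):            # output map c: X × U -> Y (parity after reading u)
    return x ^ u

def evaluate(word, x0=0):
    """Run x(k+1) = a[x(k), u(k)], y(k) = c[x(k), u(k)]; return the final output."""
    x, y = x0, x0
    for u in word:
        y = c(x, u)
        x = a(x, u)
    return y
```

Any scheme computing this r with fewer than two states would merge the two Nerode classes, so this realization is minimal.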
2.2 Functions defined on groups. Now suppose that U and Y admit a group structure.
Say U = (U, ·) and Y = (Y, ∗). Let r: U* → Y be a given function. When can
we evaluate r by an iterative method of the type

    x(k + 1) = b[u(k)] · a[x(k)];  y(k) = c[x(k)];  x(0) = x_0,

where x takes on values in a group X = (X, ·), and a: X → X, b: U → X and c: X → Y are
all group homomorphisms? Since the solution of the above iteration is
    y(0) = c[x_0]
    y(1) = c[b[u(0)] · a[x_0]]
    y(2) = c[b[u(1)] · a[b[u(0)]] · a²[x_0]]
    ⋮
    y(p) = c[b[u(p − 1)] · a[b[u(p − 2)]] · ··· · a^p[x_0]]
it is clear that in order to be able to evaluate r using a method of this form it is necessary
that the value of r corresponding to the sequence u(0), u(1), ···, u(p) be expressible
as T_0[u(p − 1)] ∗ T_1[u(p − 2)] ∗ ··· ∗ T_{p−1}[u(0)] ∗ v_p. If U is a finite group then in order
for X to be finite we require that the Nerode condition be satisfied. Assuming this to
be the case, then it is known [10] that there exists a realization of the given form if and
only if the sequence T_0, T_1, T_2, ··· is ultimately periodic in that there exist p and q such
that T_{k+p} = T_k for k = q, q + 1, q + 2, ···. Moreover, if there are two minimal iterative
schemes which compute the same input-output map, say
    x(k + 1) = b[u(k)] · a[x(k)];  y(k) = c[x(k)],
    z(k + 1) = g[u(k)] · f[z(k)];  y(k) = h[z(k)],

then X and Z are homomorphic as groups [10] and there exists a one-to-one and onto
homomorphism p: X → Z such that

    p(a(p^{−1}(·))) = f(·),  p(b(·)) = g(·),  c(p^{−1}(·)) = h(·).
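A minimal sketch of such a group iteration (a hypothetical example of our own choosing): take X = Y = Z_7 under addition, with a(x) = 2x, b(u) = u and c(x) = x, all of which are homomorphisms of the additive group.

```python
M = 7  # cyclic group Z_7 under addition mod 7 (our illustrative choice)

def a(x): return (2 * x) % M      # homomorphism a: X -> X
def b(u): return u % M            # homomorphism b: U -> X
def c(x): return x                # homomorphism c: X -> Y

def run(inputs, x0=0):
    """Iterate x(k+1) = b[u(k)] + a[x(k)] (the group operation is +),
    recording the outputs y(k) = c[x(k)]."""
    x, outs = x0, []
    for u in inputs:
        outs.append(c(x))
        x = (b(u) + a(x)) % M
    outs.append(c(x))
    return outs
```

Unwinding the recursion shows y(p) is a weighted sum of past inputs with weights 2^k mod 7, exactly the T_k-decomposition described above.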
2.3 Functions defined on vector spaces. Perhaps the most intensively studied methods
are the linear ones including, for example, Horner's rule as a special case. In this context
we give U and Y a vector space structure and consider

    x(k + 1) = Ax(k) + Bu(k);  y(k) = Cx(k)

where x(k) ∈ R^n, u(k) ∈ R^m, y(k) ∈ R^p, and A, B, and C are linear maps defined on the
appropriate spaces.

This type of first-order iteration is appropriate for evaluating the product

    [ y(1) ]   [ CB     0     0    ··· ] [ u(0) ]
    [ y(2) ] = [ CAB    CB    0    ··· ] [ u(1) ]        (*)
    [ y(3) ]   [ CA²B   CAB   CB   ··· ] [ u(2) ]
    [  ⋮   ]   [  ⋮      ⋮     ⋮       ] [  ⋮   ]
It is well known [7] that given a block Toeplitz matrix it is possible to evaluate its
product with a vector, using a first-order linear recurrence with x n-dimensional, if
and only if the family of Hankel matrices
          [ CB          CAB      ···  CA^{r−1}B  ]
    H_r = [ CAB         CA²B     ···  CA^r B     ]        r = 1, 2, ···
          [  ⋮           ⋮             ⋮         ]
          [ CA^{r−1}B   CA^r B   ···  CA^{2r−2}B ]
are each of rank n or less. Moreover, if there is one H_r which is of rank n then no lower-
dimensional recursive scheme exists and any two recursive schemes of this form, and
this minimal order, are related in a simple way. That is, if

    x(k + 1) = Ax(k) + Bu(k);  y(k) = Cx(k)
    z(k + 1) = Fz(k) + Gu(k);  y(k) = Hz(k)

both compute the same function and if both are of minimal dimension, then there exists
a nonsingular constant matrix P such that

    PAP^{−1} = F,  PB = G,  CP^{−1} = H.
The minimum dimension of a realization of the map (*) is called its McMillan degree.
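A sketch of the recursion in code (the matrices below are an arbitrary illustrative choice, not from the text): simulating the impulse response of x(k+1) = Ax(k) + Bu(k), y(k) = Cx(k) recovers the Markov parameters CB, CAB, CA²B, ··· that fill the block Toeplitz matrix (*).

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def simulate(A, B, C, inputs):
    """Run x(k+1) = A x(k) + B u(k), y(k) = C x(k) from x(0) = 0."""
    x = [0.0] * len(A)
    ys = []
    for u in inputs:
        ys.append(sum(ci * xi for ci, xi in zip(C, x)))
        x = [ax + bi * u for ax, bi in zip(matvec(A, x), B)]
    return ys

# Illustrative two-dimensional realization (n = 2, scalar input and output).
A = [[0.0, 1.0], [-1.0, 0.5]]
B = [0.0, 1.0]
C = [1.0, 0.0]
# Impulse response: [0, CB, CAB, CA^2 B] = [0, 0, 1, 0.5] for this choice.
impulse = simulate(A, B, C, [1.0, 0.0, 0.0, 0.0])
```

The rank condition on the Hankel matrices H_r then says exactly when such a Markov-parameter sequence admits an n-dimensional recursion.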
Considering, for a moment, the case where u and y are scalars, there are two canonical
realizations available over any field (not necessarily algebraically closed). The first
corresponds to the choice

        [ 0     1     0    ···   0        ]        [ 0 ]
    A = [ 0     0     1    ···   0        ],   b = [ ⋮ ],   c = [q_{n−1}, q_{n−2}, ···, q_0],
        [ ⋮                      ⋮        ]        [ 0 ]
        [ −p_0  −p_1  −p_2 ···  −p_{n−1}  ]        [ 1 ]

and the second corresponds to the choice

        [ 0     1     0    ···   0        ]        [ q_{n−1} ]
    A = [ 0     0     1    ···   0        ],   b = [ q_{n−2} ],   c = [1, 0, ···, 0].
        [ ⋮                      ⋮        ]        [    ⋮    ]
        [ −p_0  −p_1  −p_2 ···  −p_{n−1}  ]        [   q_0   ]

The relationship between the two is most conveniently expressed in terms of a generating
function relationship

    (q_{n−1}z^{n−1} + q_{n−2}z^{n−2} + ··· + q_0)/(z^n + p_{n−1}z^{n−1} + ··· + p_0) = t_1 z^{−1} + t_2 z^{−2} + ···.
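To make the first canonical realization concrete (a hypothetical numerical example of ours): for q(z)/p(z) = (z + 1)/(z² + 3z + 2) = 1/(z + 2), the companion-form triple below reproduces the expansion coefficients t_k = c A^{k−1} b, here 1, −2, 4, ···.

```python
p = [2.0, 3.0]                      # denominator z^2 + p_1 z + p_0 = z^2 + 3z + 2
A = [[0.0, 1.0], [-p[0], -p[1]]]    # companion matrix of the first canonical form
b = [0.0, 1.0]
c = [1.0, 1.0]                      # [q_{n-1}, ..., q_0] for numerator z + 1

def markov(A, b, c, r):
    """First r coefficients t_1, ..., t_r of the z^{-1} expansion: t_k = c A^{k-1} b."""
    out, v = [], b
    for _ in range(r):
        out.append(sum(ci * vi for ci, vi in zip(c, v)))
        v = [sum(a * x for a, x in zip(row, v)) for row in A]
    return out
```

Since 1/(z + 2) = z^{−1} − 2z^{−2} + 4z^{−3} − ···, the first three Markov coefficients should be 1, −2, 4.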
2.4 Functions defined on modules. To indicate further the scope of these methods
we consider an example where the iteration takes place in a module. Starting with a
one-dimensional diffusion equation with constant coefficients and periodic boundary
conditions, say
    ∂x(t, z)/∂t − a ∂²x(t, z)/∂z² = u(t, z);  x(t, 0) = x(t, 1),  ∂x(t, 0)/∂z = ∂x(t, 1)/∂z,
a standard discretization with respect to time and space gives
    x(k + 1) = Ax(k) + u(k)
where A is a circulant matrix. Let R_p[z] denote the ring of polynomials of degree less
than p with real coefficients, where all computations are done modulo z^p − 1. As is
well known, the set of p by p circulant matrices forms a ring which is isomorphic to R_p[z].
If R_p^n[z] is the n-dimensional module over R_p[z] then we can study

    x(k + 1) = Ax(k) + Bu(k);  y(k) = Cx(k)

where u, x and y take on values in R_p^m[z], R_p^n[z] and R_p^q[z] and A, B, and C are all module
homomorphisms. See Kalman, Falb, and Arbib [7] for a general discussion. This example
comes from [11] where the above isomorphism is used to good advantage.
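The circulant/polynomial isomorphism can be checked directly (a small sketch with our own example data): multiplying a vector by the circulant matrix whose first column is a agrees with multiplying by the polynomial a(z) in R_p[z], i.e. with cyclic convolution modulo z^p − 1.

```python
def circ_mul(a, x):
    """Multiply the p-by-p circulant matrix with first column a by the vector x."""
    p = len(a)
    return [sum(a[(i - j) % p] * x[j] for j in range(p)) for i in range(p)]

def poly_mul_mod(a, x):
    """Multiply a(z) * x(z) modulo z^p - 1 (coefficient lists, low degree first)."""
    p = len(a)
    out = [0] * p
    for i in range(p):
        for j in range(p):
            out[(i + j) % p] += a[i] * x[j]
    return out

a = [1, 2, 3]
x = [4, 5, 6]
# Both routes compute the same cyclic convolution.
same = circ_mul(a, x) == poly_mul_mod(a, x)
```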
3. The optimization of algorithms. The procedures of the previous section can be
used to minimize the memory associated with a given computation. Recently, however,
there has been a great deal of work devoted to the development of algorithms which
minimize other criteria such as the number of multiplications necessary to compute
a given function. The problem of evaluating a polynomial at one or more points and
the problem of multiplying two polynomials together are examples of problems which
have been looked at from this point of view. There has also been some investigation of
optimal iteration, as we will discuss later.
3.1 Arithmetic optimality. A number of frequently occurring computations, including
the two just mentioned, center around evaluating the product of a Toeplitz matrix and
a vector. That is, to evaluate p(s) = p_1 s^n + p_2 s^{n−1} + ··· + p_{n+1} at X using Horner's rule
one must compute the product

    [ 1     0        0        ··· ] [ p_1     ]   [ p_1                 ]
    [ X     1        0        ··· ] [ p_2     ]   [ Xp_1 + p_2          ]
    [ X²    X        1        ··· ] [ p_3     ] = [ X²p_1 + Xp_2 + p_3  ]
    [ ⋮     ⋮        ⋮            ] [  ⋮      ]   [  ⋮                  ]
    [ X^n   X^{n−1}  X^{n−2}  ··· ] [ p_{n+1} ]   [ p(X)                ]
To compute the product of p_1 s^n + p_2 s^{n−1} + ··· + p_{n+1} and q_1 s^m + q_2 s^{m−1} + ··· + q_{m+1}
one must compute
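The displayed product is truncated in this copy, but the computation it represents is ordinary polynomial convolution; a minimal sketch (helper name ours):

```python
def poly_product(p, q):
    """Convolve coefficient lists (highest degree first) to get the
    coefficients of the product polynomial."""
    out = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

# Example: (s + 2)(s + 3) = s^2 + 5s + 6.
prod = poly_product([1, 2], [1, 3])
```

Counting the multiplications in this double loop is exactly the arithmetic-optimality question raised above.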

References

D. E. Knuth, The Art of Computer Programming.
A. M. Turing, "On computable numbers, with an application to the Entscheidungsproblem."
R. Abraham, Foundations of Mechanics.
E. Nelson, Dynamical Theories of Brownian Motion.