Dynamical Systems: Introduction & Definition

Danilo Poccia

January 26, 1999

1  Introduction

Given the problem of studying a system, we first have to find a way to express its state. Usually we can use a set of variables, collected in a vector $x$, so that knowing this vector we can (in theory) predict the future states of the system. For example, in mechanics problems we need to know positions and velocities.

If the system is not in equilibrium, it will change as time flows, and so will the vector of variables describing it.

We may call the initial vector $x(t_0)$, to remind us that it is related to a time $t_0$. After some time $\Delta t$ things have changed and we have another vector, which we may call $x(t_0 + \Delta t)$. These vectors are both taken from a vector space $X$ where each element represents a possible configuration, so $x(t_0) \in X$ and $x(t_0 + \Delta t) \in X$. We would like to find a rule relating $x(t_0)$ to $x(t_0 + \Delta t)$. It turns out that it may be expressed in different ways.

Usually we know how the system is going to change when it is in a certain state. But the state is expressed by the vector $x(t)$, and its rate of change is just its derivative with respect to time, $\frac{dx}{dt}(t)$. So we find ourselves managing a differential equation like the following

$$\frac{dx}{dt}(t) = f(x(t)) \qquad (1)$$
Here we use the fact that $x(t)$ contains all the information about the state of the system, so we don't need to make $f$ depend explicitly on the time $t$, nor on older states ($t' < t$). By this I mean that there are no time-dependent forces, so that if you have two times $t_1$ and $t_2$, $t_1 \ne t_2$, such that

$$x(t_1) = x(t_2) \qquad (2)$$

then for all $\Delta t > 0$ we have¹

$$x(t_1 + \Delta t) = x(t_2 + \Delta t) \qquad (3)$$

If this is not the case, then we can't describe the system with (1).

Often it is not possible to solve equation (1) analytically, so we can try a numerical approach. But supposing that we are able to integrate $f(x(t))$ with respect to $t$, and that the indefinite integral is $F(t)$, we have²

$$\int_{t_0}^{t_0 + \Delta t} \frac{dx}{dt}(t)\, dt = \int_{t_0}^{t_0 + \Delta t} f(x(t))\, dt$$

$$x(t_0 + \Delta t) - x(t_0) = F(t_0 + \Delta t) - F(t_0) \qquad (4)$$
Now we would like to invert the function $x(t)$, but it may not be globally invertible, for example if the orbit is periodic or at an equilibrium. Discarding the still state, so that $x(t)$ is not a constant $k$, we define a function $\tau$ such that

$$\tau : X \to \mathbb{R} \qquad (5)$$

$$\tau(x) := \min \{ t \mid x(t) = x \} \qquad (6)$$

It may happen that

$$\tau(x(t_0)) \ne t_0 \qquad (7)$$
but this doesn't change the properties of $F$, as we see from (3), and we find

$$x(t_0 + \Delta t) - x(t_0) = F(\tau(x(t_0)) + \Delta t) - F(\tau(x(t_0))) \qquad (8)$$

We can choose $F$ so that $F(\tau(x(t_0))) = 0$, and it follows that

$$x(t_0 + \Delta t) - x(t_0) = F(\tau(x(t_0)) + \Delta t)$$

$$x(t_0 + \Delta t) = x(t_0) + F(\tau(x(t_0)) + \Delta t) \qquad (9)$$
If we choose a fixed $\Delta t$ we can define

$$M : X \to X \qquad (10)$$

$$M(x) := x + F(\tau(x) + \Delta t) \qquad (11)$$

and write

$$x(t + \Delta t) = M(x(t)) \qquad (12)$$
Moreover, if we call

$$x_0 := x(t_0)$$

$$x_1 := x(t_0 + \Delta t)$$

$$x_2 := x(t_0 + 2\Delta t)$$

$$\vdots$$

$$x_n := x(t_0 + n\Delta t) \qquad (13)$$

we can study the trajectory at a countable number of points $x_n$. The evolution rule is

$$x_{n+1} = M(x_n) \qquad (14)$$
where $M$ maps a state of the system into its successor after time $\Delta t$. In fact $M$ is called a map. Note that our construction of $M$ is self-recursive, so it may not be applicable in practice. There are other ways to build a map $M$ from a trajectory $x(t)$, like the Poincaré section method that we will see later.
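The iteration (14) is easy to play with numerically. Here is a minimal sketch; the function name `iterate` and the sample map (a simple halving toward zero) are illustrative placeholders, not taken from the text — any function $M : X \to X$ can be plugged in.

```python
def iterate(M, x0, n):
    """Return the orbit [x0, x1, ..., xn] of the map x_{k+1} = M(x_k)."""
    orbit = [x0]
    for _ in range(n):
        orbit.append(M(orbit[-1]))
    return orbit

# Example: a contracting map whose fixed point is 0.
orbit = iterate(lambda x: x / 2, 1.0, 5)
print(orbit)  # [1.0, 0.5, 0.25, 0.125, 0.0625, 0.03125]
```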

2  Definition

Let's try to get some results from what we have done above. If the state of a system can be fitted into a vector $x \in X$, then we have seen that the evolution of the system can be predicted in different ways:

- by a differential equation like (1), giving a continuous trajectory $x(t)$;
- by a map like (14), giving a discrete sequence of states $x_n$.

We can define a dynamical system as a space of configurations $X$ together with an evolution rule, such as $f$ in (1) or $M$ in (14).

As an example we can look at one of the physicists' classics: the harmonic oscillator. With $r$ the position, the differential equation is

$$\ddot r = -\omega^2 r \qquad (15)$$

The analytical solution of this equation is known to be

$$r = A\cos(\omega t + \varphi) \qquad (16)$$
where $A$, the amplitude, and $\varphi$, the phase, can be chosen freely. The state of the system, given by position $r$ and velocity $v$, can be fitted into the vector

$$x = \begin{pmatrix} r \\ v \end{pmatrix} \qquad (17)$$
The differential equation, put into first-order form, is now

$$\dot r = v \qquad (18)$$

$$\dot v = -\omega^2 r \qquad (19)$$
and in vectorial notation

$$\frac{d}{dt}\begin{pmatrix} r \\ v \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -\omega^2 & 0 \end{pmatrix}\begin{pmatrix} r \\ v \end{pmatrix} \qquad (20)$$
We can integrate this with respect to time over an interval $(t_0, t_0 + \Delta t)$ and obtain

$$\Delta r(\Delta t) = \int_{t_0}^{t_0 + \Delta t} v(t)\, dt \qquad (21)$$

$$\Delta v(\Delta t) = -\omega^2 \int_{t_0}^{t_0 + \Delta t} r(t)\, dt \qquad (22)$$
We can remove the explicit dependence on the time $t_0$, but not on the initial conditions $r(t_0)$ and $v(t_0)$:

$$\Delta r(\Delta t) = \int_{t_0}^{t_0 + \Delta t} [v(t_0) + \Delta v(t - t_0)]\, dt \qquad (23)$$

$$= v(t_0)\,\Delta t + \int_0^{\Delta t} \Delta v(t)\, dt \qquad (24)$$
and the same for $\Delta v$:

$$\Delta v(\Delta t) = -\omega^2 \int_{t_0}^{t_0 + \Delta t} [r(t_0) + \Delta r(t - t_0)]\, dt \qquad (25)$$

$$= -\omega^2 \left\{ r(t_0)\,\Delta t + \int_0^{\Delta t} \Delta r(t)\, dt \right\} \qquad (26)$$
so that

$$r(t_0 + \Delta t) = r(t_0) + \Delta r(\Delta t) \qquad (27)$$

$$v(t_0 + \Delta t) = v(t_0) + \Delta v(\Delta t) \qquad (28)$$
but, as you might expect, this definition is self-recursive, being the result of our method, and so not applicable in practice.

We can avoid this problem by using numerical approximations to solve the integrals, since they are computed over an interval of width $\Delta t$, which we can choose small enough to consider $r$ and $v$ constant during the interval. This technique for solving an ODE is known as the explicit Euler method. We could also use other, finer techniques, but that's another story...
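As a sketch of the explicit Euler idea (the function names below are illustrative, not from the text): holding $r$ and $v$ constant over each small step $\Delta t$ turns (18)-(19) into the update $r \leftarrow r + v\,\Delta t$, $v \leftarrow v - \omega^2 r\,\Delta t$.

```python
from math import cos, pi

def euler_step(r, v, omega, dt):
    """One explicit Euler step of r' = v, v' = -omega**2 * r."""
    return r + v * dt, v - omega**2 * r * dt

def euler_orbit(r0, v0, omega, dt, steps):
    """Iterate euler_step from (r0, v0)."""
    r, v = r0, v0
    for _ in range(steps):
        r, v = euler_step(r, v, omega, dt)
    return r, v

# With omega = 1, r(0) = 1, v(0) = 0 the exact solution is r(t) = cos(t);
# integrating up to t = pi should land near r = cos(pi) = -1.
r, v = euler_orbit(1.0, 0.0, 1.0, 1e-4, int(pi / 1e-4))
print(r)  # close to -1, up to the O(dt) error of the method
```

Explicit Euler slowly pumps energy into the oscillator (the orbit spirals outward), which is why finer techniques are usually preferred for long integrations.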

3  Studying

When studying a dynamical system, the first thing we might think of is to take an initial condition, $x(t_0)$ or $x_0$, and concentrate on a single trajectory, be it continuous or discrete. But another interesting possibility is to concentrate on the evolution of a set of initial conditions, i.e. a subset $Y \subset X$, and look at its behaviour under the evolution rule. Subsets of $X$ evolve into other subsets of $X$. So we can write, using a map $M$ and the set of initial conditions $Y_0$,

$$Y_1 = M(Y_0)$$

$$\vdots$$

$$Y_{n+1} = M(Y_n) \qquad (29)$$

This is conceptually a great change of view.

Let's go on with the example of the harmonic oscillator. As we have seen, the space $X$ of possible configurations is formed, in this case, by pairs of position and velocity $(r, v)$, so $X = \mathbb{R} \times \mathbb{R}$.

In a 2-d plane where $r$ is on the horizontal axis and $v$ is on the vertical axis, each point can be thought of as an initial condition for the system. This plane is called the phase space and can be built for every dynamical system. Clearly it always has the same dimension as the vector space $X$.

If we choose to begin at time $t_0 = 0$, then the initial condition $(r(0), v(0))$ gives the values of $A$ and $\varphi$

$$r(0) = A\cos(\varphi) \qquad (30)$$

$$v(0) = -A\omega\sin(\varphi) \qquad (31)$$

so that

$$A = \sqrt{r(0)^2 + (v(0)/\omega)^2} \qquad (32)$$

$$\varphi = \arctan\left( -\frac{v(0)}{\omega\, r(0)} \right) \qquad (33)$$

and the evolution is given by

$$r(t) = A\cos(\omega t + \varphi) \qquad (34)$$

$$v(t) = -A\omega\sin(\omega t + \varphi) \qquad (35)$$
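A quick numerical sketch of this recipe (helper names are mine, not from the text): recover $A$ and $\varphi$ from the initial condition, then evolve analytically. `atan2` stands in for the arctangent to get the right quadrant, and the minus signs come from $v = \dot r = -A\omega\sin(\omega t + \varphi)$.

```python
from math import atan2, cos, sin, sqrt

def amplitude_phase(r0, v0, omega):
    """A and phi of the harmonic oscillator from the initial condition."""
    A = sqrt(r0**2 + (v0 / omega)**2)
    phi = atan2(-v0 / omega, r0)  # quadrant-aware arctangent
    return A, phi

def evolve(r0, v0, omega, t):
    """Exact state (r(t), v(t)) starting from (r0, v0) at t = 0."""
    A, phi = amplitude_phase(r0, v0, omega)
    return A * cos(omega * t + phi), -A * omega * sin(omega * t + phi)

# Starting at (r, v) = (1, 0) with omega = 2: t = 0 reproduces the
# initial condition, and a quarter period later all the energy is in v.
print(evolve(1.0, 0.0, 2.0, 0.0))
```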

It's clear that a set of initial conditions $Y \subset X$ in the phase space rotates with time, and possibly stretches along the $v$ axis, depending on the value of $\omega$. If we choose $\omega = 1$ this is a pure rotation.

We can ask whether there are sets which are invariant under the evolution. A set with this property is called a manifold:

$$M(Y) \subseteq Y \qquad (36)$$

For the harmonic oscillator these manifolds are ellipses, or circles if $\omega = 1$. But thinking twice, we see that these sets are also the periodic orbits of this system. In fact (36) tells us that if a point is in a manifold, it will remain there forever, and that is true for all periodic orbits.
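We can check this invariance numerically with the exact solution: a handful of points of a set $Y_0$ chosen on one ellipse $\omega^2 r^2 + v^2 = c$ should still lie on that ellipse after evolving. A sketch, with illustrative names:

```python
from math import atan2, cos, sin, sqrt, pi

def flow(r0, v0, omega, t):
    """Exact harmonic-oscillator evolution of (r0, v0) after time t."""
    A = sqrt(r0**2 + (v0 / omega)**2)
    phi = atan2(-v0 / omega, r0)
    return A * cos(omega * t + phi), -A * omega * sin(omega * t + phi)

omega, c = 2.0, 4.0                          # the ellipse omega^2 r^2 + v^2 = c
Y0 = [(sqrt(c) / omega * cos(a), sqrt(c) * sin(a))
      for a in (0.0, pi / 3, 1.9, 4.0)]      # a few points of Y0 on the ellipse
Y1 = [flow(r, v, omega, 0.8) for r, v in Y0]  # Y1 = M(Y0) for Delta t = 0.8
print([omega**2 * r**2 + v**2 for r, v in Y1])  # each value stays (numerically) at c
```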

Here I used a map, but we can always pass from a continuous trajectory to a map. Let's call the trajectory

$$T := \{ x(t) \} \qquad (37)$$

We have

$$T \subset X \qquad (38)$$

and we can imagine this subset in the phase space.

Now let's spend some time on what we call a surface. In our 3-d space it is a 2-d subset. More generally, if $X$ has dimension $\dim(X) \ge 1$, $S$ is a surface of $X$ if

$$S \subset X \qquad (39)$$

and

$$\dim(S) = \dim(X) - 1 \qquad (40)$$
If we choose $S$ so that the intersection between $S$ and $T$ is non-empty and countable, we may call

$$t_1 := \min \{ t \mid x(t) \in S \}$$

$$t_2 := \min \{ t > t_1 \mid x(t) \in S \}$$

$$t_3 := \min \{ t > t_2 \mid x(t) \in S \}$$

$$\vdots$$

$$t_n := \min \{ t > t_{n-1} \mid x(t) \in S \} \qquad (41)$$
Defining

$$x_1 := x(t_1)$$

$$x_2 := x(t_2)$$

$$x_3 := x(t_3)$$

$$\vdots$$

$$x_n := x(t_n) \qquad (42)$$

we have a sequence extracted from the trajectory, and we can think of it as the iteration of a map. This method is called the Poincaré section.

As an example, let's see what this method can do with the harmonic oscillator. If we choose $S$ so that

$$S := \{ (r, v) \mid r = 0 \} \qquad (44)$$
then

$$x_1 = (0, v_{max})$$

$$x_2 = (0, -v_{max})$$

$$x_3 = (0, v_{max})$$

$$x_4 = (0, -v_{max})$$

$$\vdots$$

$$x_n = (0, (-1)^{n+1} v_{max}) \qquad (45)$$
so we have a sequence of period 2. This means that

$$x_n = x_{n+2} = M(M(x_n)) \qquad (46)$$

which we can write as

$$x_n = x_{n+2} = M^2(x_n) \qquad (47)$$

In the general case, if a map $M$ has period $p$, then each of the $p$ elements of the period is a fixed point of the map $M^p$.

If $T$ is periodic, then each map $M$ derived from $T$ is periodic and can be made to have period 1 with this method.
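The section (44) can also be computed numerically (all names below are illustrative): sample the exact trajectory with $\omega = 1$ and $\varphi = 0$, detect sign changes of $r$, and record $v$ at each crossing of $S = \{r = 0\}$. With this initial phase the first crossing happens at $v = -v_{max}$ rather than $+v_{max}$, but the period-2 alternation of (45) is the same.

```python
from math import cos, sin, pi

# Trajectory r(t) = A cos(t), v(t) = -A sin(t)  (omega = 1, phi = 0).
A, dt = 1.0, 1e-3
ts = [k * dt for k in range(int(4 * pi / dt))]   # two full periods

section = []                                      # v at each crossing of r = 0
for t0, t1 in zip(ts, ts[1:]):
    r0, r1 = A * cos(t0), A * cos(t1)
    if r0 > 0 >= r1 or r0 < 0 <= r1:              # r changes sign in (t0, t1]
        tc = t0 + dt * r0 / (r0 - r1)             # linear interpolation of t
        section.append(-A * sin(tc))              # record v on the section
print(section)  # four crossings, alternating near -1, +1, -1, +1
```

Each point of the sequence is (numerically) a fixed point of $M^2$, as in (47).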

4  Chaos

What is chaos? What do we mean by it?

To be continued...


Footnotes:

¹ This is not true for $\Delta t < 0$.

² This makes sense, since $x(t)$ is defined by $f$ via (1).


File translated from TEX by TTH, version 2.25.
On 10 Dec 1999, 22:10.