Dynamical Systems: Introduction & Definition
Danilo Poccia
January 26, 1999
1 Introduction
Given the problem of studying a system, we have to find a way to express its state. Usually we can use a set of variables, collected into a vector x, so that knowing this vector we can (in theory) predict the future states of the system. For example, in mechanics problems, we have to know positions and velocities.
If the system is not in equilibrium, it will change as time flows, and so will the vector of variables that describes it. We may call the initial vector x(t0), to remind us that it refers to a time t0. After some time Δt things have changed and we have another vector, which we may call x(t0 + Δt). These vectors are both taken from a vector space X where each element represents a possible configuration, so x(t0) ∈ X and x(t0 + Δt) ∈ X.
We'd like to find a rule relating x(t0) and x(t0 + Δt). It turns out that it may be expressed in different ways. Usually we know how the system is going to change when it is in a certain state. But the state is expressed by the vector x(t), and its rate of change is just its derivative with respect to time, (dx/dt)(t). So we find ourselves managing a differential equation like the following:

    (dx/dt)(t) = f(x(t))    (1)
Here we use the fact that x(t) contains all the information about the state of the system, so we don't need to make f depend on the time t explicitly, nor on older states (t′ < t). By this I mean that there are no time-dependent forces, so that if you have two times t1 and t2, t1 ≠ t2, such that

    x(t1) = x(t2)    (2)

then for all Δt > 0 we have¹

    x(t1 + Δt) = x(t2 + Δt)    (3)

If this is not the case, then we can't describe the system with (1).
Often it's not possible to solve equation (1) analytically, so we can try a numerical approach. But supposing that we are able to integrate f(x(t)) with respect to t, and that the indefinite integral is F(t), we have²

    x(t0 + Δt) − x(t0) = ∫_{t0}^{t0+Δt} (dx/dt)(t) dt = F(t0 + Δt) − F(t0)    (4)
Now we'd like to invert the function x(t), but it may not be globally invertible, for example if the orbit is periodic or in equilibrium. Discarding the still state, so that x(t) ≠ k for any constant k, we define a function t(x) such that

    t(x) := min { t | x(t) = x }    (6)

It may happen that t(x(t0)) ≠ t0, but this doesn't change the properties of F, as we see from (3), and we find

    x(t0 + Δt) − x(t0) = F(t(x(t0)) + Δt) − F(t(x(t0)))    (8)
We can choose F so that F(t(x(t0))) = 0, and it follows that

    x(t0 + Δt) = x(t0) + F(t(x(t0)) + Δt)

If we choose a fixed Δt we can define

    M(x) := x + F(t(x) + Δt)    (11)

and write

    x(t0 + Δt) = M(x(t0))

Moreover, if we call

    x_n := x(t0 + n Δt)

we can study the trajectory at a countable number of points x_n. The evolution rule is

    x_{n+1} = M(x_n)

where M maps a state of the system into its successor after time Δt. In fact, M is called a map. I have to note that our construction of M is self-recursive, so it may not be applicable in practice.
There are other ways to build a map M from a trajectory x(t),
like the Poincaré section method that we'll see later.
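To make the idea of iterating such a map concrete, here is a minimal Python sketch of my own (not from the text), using the 1-d linear system dx/dt = −x, whose time-Δt evolution happens to be known exactly: M(x) = e^(−Δt) x.

```python
import math

DT = 0.5  # the fixed time step Delta t

def M(x):
    # Exact time-DT evolution of dx/dt = -x: x(t0 + DT) = exp(-DT) * x(t0)
    return math.exp(-DT) * x

# Sample the trajectory at the countable points x_n = x(t0 + n*DT)
x = 1.0            # initial condition x(t0)
points = [x]
for n in range(4):
    x = M(x)       # x_{n+1} = M(x_n)
    points.append(x)

# Each x_n agrees with the continuous solution x(t) = e^(-t) at t = n*DT
for n, xn in enumerate(points):
    assert abs(xn - math.exp(-DT * n)) < 1e-12
print(points)
```

Here the map is available in closed form, which is exactly what the self-recursive construction above cannot give us in general.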
2 Definition
Let's try to get some results from what we have done above. If the state of a system can be fitted into a vector x ∈ X, then we have seen that the evolution of the system can be predicted in different ways:
- a differential equation;
- its continuous solution;
- a map.
We can define a dynamical system as:
- a space of possible configurations;
- a rule to evolve such configurations.
As an example we can look at one of the physicists' classics: the harmonic oscillator. Letting r be the position, the differential equation is

    d²r/dt² = −ω² r

The analytical solution of this equation is known to be

    r(t) = A sin(ωt + φ)

where A, the amplitude, and φ, the phase, can be chosen freely. The state of the system, indicated by position r and velocity v, can be fitted into the vector

    x = (r, v)

The differential equation, put into a first-order form (a system of first-order ODEs), is now

    dr/dt = v
    dv/dt = −ω² r

and in vectorial notation
    d/dt [ r ]   [  0    1 ] [ r ]
         [ v ] = [ −ω²   0 ] [ v ]    (20)
We can integrate this with respect to time over an interval (t0, t0 + Δt) and obtain

    r(t0 + Δt) = r(t0) + ∫_{t0}^{t0+Δt} v(t) dt
    v(t0 + Δt) = v(t0) − ω² ∫_{t0}^{t0+Δt} r(t) dt

We can remove the explicit dependence on the time t0, but not on the initial conditions r(t0) and v(t0). Writing r(t) = r(t0) + Δr(t − t0) and v(t) = v(t0) + Δv(t − t0),

    Δr(Δt) = ∫_{t0}^{t0+Δt} [v(t0) + Δv(t − t0)] dt    (23)
           = v(t0) Δt + ∫_0^{Δt} Δv(t) dt    (24)
and the same with Δv

    Δv(Δt) = −ω² ∫_{t0}^{t0+Δt} [r(t0) + Δr(t − t0)] dt    (25)
           = −ω² { r(t0) Δt + ∫_0^{Δt} Δr(t) dt }    (26)
so that

    r(t0 + Δt) = r(t0) + v(t0) Δt + ∫_0^{Δt} Δv(t) dt
    v(t0 + Δt) = v(t0) − ω² [ r(t0) Δt + ∫_0^{Δt} Δr(t) dt ]

but, as you can expect, this definition is self-recursive, being the result of our method, and so not applicable in practice. We can avoid this problem by using numerical approximations to resolve the integrals: since they are computed on an interval of width Δt, which we can choose small enough to consider r and v constant during the interval, the remaining integrals can be dropped, leaving

    r(t0 + Δt) ≈ r(t0) + v(t0) Δt
    v(t0 + Δt) ≈ v(t0) − ω² r(t0) Δt

This technique for solving an ODE is known as the explicit Euler method. Or we can use other, finer techniques, but that's another story...
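The explicit Euler scheme can be written in a few lines of Python; this is my own minimal sketch (function and variable names are mine), applied to the harmonic oscillator with ω = 1, whose exact solution from (r, v) = (0, 1) is r(t) = sin t.

```python
import math

def euler_step(r, v, omega, dt):
    # Treat r and v as constant over the interval dt, dropping the
    # leftover integrals: dr ~ v*dt, dv ~ -omega^2 * r * dt
    return r + v * dt, v - omega**2 * r * dt

omega, dt = 1.0, 1e-4
r, v, t = 0.0, 1.0, 0.0          # initial condition (r, v) = (0, 1)
while t < 1.0:
    r, v = euler_step(r, v, omega, dt)
    t += dt

# For omega = 1 the exact solution is r(t) = sin(t), v(t) = cos(t)
print(r, math.sin(t))            # close for small dt; Euler slowly drifts
```

Note that explicit Euler does not conserve the oscillator's energy exactly, so over long times the numerical orbit slowly spirals outward.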
3 Studying
When studying a dynamical system, the first thing we can think of is to take an initial condition, x(t0) or x0, and concentrate on a single trajectory, be it continuous or discrete. But another interesting possibility is to concentrate on the evolution of a set of initial conditions, i.e. a subset Y ⊂ X, and look at its behaviour under the evolution rule. Subsets of X evolve into other subsets of X. So we can write, using a map M and a set of initial conditions Y0,

    Y_{n+1} = M(Y_n),  where M(Y) := { M(x) | x ∈ Y }

This is conceptually a great change of view.
Let's go on with the example of the harmonic oscillator. As we have seen, the space X of the possible configurations is formed, in this case, by pairs of position and velocity (r, v), so X = ℝ × ℝ. In a 2-d plane where r is on the X axis and v is on the Y axis, each point can be thought of as an initial condition for the system. This plane is called the phase space, and it can be built for every dynamical system. Clearly it always has the same dimension as the vector space X. If we choose to begin at time t0 = 0, then the initial condition (r(0), v(0)) gives the values of A and φ, so that

    r(0) = A sin φ,  v(0) = A ω cos φ

and the evolution is given by

    r(t) = A sin(ωt + φ),  v(t) = A ω cos(ωt + φ)

It's clear that a set of initial conditions Y ⊂ X on the phase space rotates with time, and eventually stretches along the Y axis, the latter depending on the value of ω. If we choose ω = 1, this is a perfect rotation.
We can ask if there are sets which are invariant under the evolution, i.e. sets Y such that

    M(Y) = Y    (36)

A set with this property is called a manifold. For the harmonic oscillator these manifolds are ellipses, or circles if ω = 1. But thinking twice, we see that these sets are also the periodic orbits of this system. In fact (36) tells us that if a point is on a manifold, it will remain there forever, and that is true for all periodic orbits.
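For ω = 1 this invariance can be checked directly from the exact solution. The following Python sketch is my own (the helper `evolve` is just the closed-form solution written out): it verifies that each point of a set of initial conditions keeps its distance from the origin, so circles centered at the origin are mapped into themselves.

```python
import math

def evolve(r0, v0, t, omega=1.0):
    # Exact evolution of the harmonic oscillator in phase space
    r = r0 * math.cos(omega * t) + (v0 / omega) * math.sin(omega * t)
    v = v0 * math.cos(omega * t) - omega * r0 * math.sin(omega * t)
    return r, v

# A small set Y of initial conditions, evolved for a quarter period
Y0 = [(1.0, 0.0), (1.1, 0.0), (1.0, 0.1), (1.1, 0.1)]
Yt = [evolve(r, v, math.pi / 2) for r, v in Y0]

# With omega = 1 the evolution is a rotation: r^2 + v^2 is conserved,
# so every circle centered at the origin is an invariant set
for (r0, v0), (r, v) in zip(Y0, Yt):
    assert abs((r * r + v * v) - (r0 * r0 + v0 * v0)) < 1e-12
print(Yt)
```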
Here I used a map, but we can always pass from a continuous trajectory to a map. Let's call the trajectory

    T := { x(t) | t ≥ t0 }

We have T ⊂ X, and we can imagine this subset in the phase space.
Now let's spend some time on what we call a surface. In our 3-d space it is a 2-d subset. More generally, if X has dimension dim(X) ≥ 1, S is a surface of X if

    S ⊂ X

and

    dim(S) = dim(X) − 1

If we choose S so that the intersection between S and T is nonempty and countable, we may call
    t_2 := min { t > t_1 | x(t) ∈ S }
    t_3 := min { t > t_2 | x(t) ∈ S }
    ...
    t_n := min { t > t_{n−1} | x(t) ∈ S }    (41)
Defining

    x_n := x(t_n)

we have a sequence extracted from the trajectory, and we can think of it as the iteration of a map. This method is called the Poincaré section. As an example, let's see what it can do with the harmonic oscillator.
If we choose S so that

    S := { (r, v) | r = 0 }    (44)

then the trajectory crosses S alternately with v = Aω and v = −Aω, so we have a sequence of period 2. This means that

    x_{n+2} = x_n

which we can write as

    M(M(x_n)) = x_n

In the general case, if a map M has period p, then each of the p elements of the period is a fixed point for the map M^p. If T is periodic, then each map M derived from T is periodic, and it can be set to have period 1 with this method.
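This period-2 behaviour can be checked numerically. Here is my own sketch (the sampling step and rounding are arbitrary choices): walk along the trajectory r(t) = sin t, v(t) = cos t and record the crossings of S = {(r, v) | r = 0}.

```python
import math

def x(t):
    # Trajectory of the harmonic oscillator with omega = 1, A = 1, phi = 0
    return math.sin(t), math.cos(t)

# Record the first few points where the trajectory crosses the surface
# S = {(r, v) | r = 0}, detected as a sign change of r between samples
dt = 1e-3
section = []
r_prev, _ = x(0.0)
t = dt
while t < 10.0 and len(section) < 3:
    r, v = x(t)
    if r_prev * r < 0:               # crossed r = 0 inside (t - dt, t)
        section.append((0.0, round(v, 2)))
    r_prev = r
    t += dt

print(section)   # the v coordinate alternates in sign: period 2
```

Counting only the crossings in one direction (say v > 0) would extract a period-1 sequence instead, which is the usual convention for a Poincaré section.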
4 Chaos
What is chaos? What do we mean by it?
To be continued...
Footnotes:
¹ This is not true for Δt < 0.
² This makes sense since x(t) is defined by f via (1).