# Anatomy of a finite difference scheme: How to simulate a simple harmonic oscillator with Julia

In this post, I will analyse a finite difference simulation of a simple harmonic oscillator (SHO) with damping, and implement the algorithm in the programming language Julia. If you are not familiar with Julia, I recommend visiting http://www.julialang.org.

As you probably know, the equation for an SHO is the following:

$$\frac{d^2 u}{dt^2} = -\omega_0^2 u - 2\sigma \frac{du}{dt}$$

where $u$ is the position of the oscillator around the equilibrium point, and $\omega_0$ and $\sigma$ are two positive parameters. This second order differential equation requires two initial conditions to be fully specified, the initial position and velocity of the SHO, i.e.:

$$u(0) = u_0, \qquad \text{ and } \qquad \frac{du}{dt}\Bigr|_{t=0} = v_0.$$

A finite difference implementation of this equation requires the discretisation of time in integer multiples of $k$, the time step. Let $u^n$ denote the discrete approximation to $u(t)$ calculated at the discrete time $nk$, where $n$ is a non-negative integer. Given the initial values $u^0$ and $u^1$, the idea is to find a recursion that gives $u^2, \dots, u^n$ in a way that is consistent with the continuous equation. Many different schemes are available, each with different properties. One possibility is the following:

$$\frac{u^{n+1} - 2u^n + u^{n-1}}{k^2} = -\omega_0^2 u^n - 2\sigma \frac{u^{n+1} - u^{n-1}}{2k}.$$
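Here both derivatives have been replaced with centred differences evaluated at time $nk$, which are second-order accurate:

$$\frac{d^2 u}{dt^2}\Bigr|_{t=nk} \approx \frac{u^{n+1} - 2u^n + u^{n-1}}{k^2}, \qquad \frac{du}{dt}\Bigr|_{t=nk} \approx \frac{u^{n+1} - u^{n-1}}{2k}.$$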

Multiplying both sides by $k^2$ and collecting the terms that multiply $u^{n+1}$ gives $(1 + \sigma k)\,u^{n+1} = (2 - \omega_0^2 k^2)\,u^n - (1 - \sigma k)\,u^{n-1}$; dividing through by $(1 + \sigma k)$, it is possible to write the unknown $u^{n+1}$ in terms of the known values, thus obtaining the update formula for the system:

$$u^{n+1} = \frac{(2 -\omega_0^2 k^2 )}{(1 + \sigma k)} u^n - \frac{(1 - \sigma k)}{(1 + \sigma k)} u^{n-1}.$$

Initial conditions in the finite difference domain can be specified as follows:

$$u^0 = u_0, \qquad \text{ and } \qquad \frac{u^1-u^0}{k} = v_0.$$
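A small caveat worth noting: the forward difference used here to set the initial velocity is only first-order accurate,

$$\frac{u^1 - u^0}{k} = \frac{du}{dt}\Bigr|_{t=0} + O(k),$$

while the scheme itself is second-order accurate. For the purposes of this post this is fine, but more accurate initialisations exist if needed.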

### Anatomy of the code

The Julia code that implements this recursion is very simple, and can be divided into two parts: the precomputation stage and the main loop. The former computes the various parameters and allocates memory for the variables, while the latter iterates the above formula and solves the differential equation numerically.

First of all, let me define some parameters that characterise the model. These are the natural frequency $f$ of the SHO and the loss parameter $\sigma$, together with the initial conditions $u_0$ and $v_0$. Then, the sampling rate and the duration of the simulation are chosen.

In [ ]:
# SHO parameters
f = 100
σ = 0.1

# Initial conditions
x0 = 1
v0 = 0

SR = 44100    # Sampling rate
T = 1         # Duration of the simulation (s)


Then, some additional constants are derived from these values.

In [ ]:
# Calculate the time step
k = 1/SR
# Calculate the number of iterations (as an integer, so it can be used for indexing)
NF = floor(Int, SR*T)

# Calculate ω0 and a0. They will be useful later
ω0 = 2*pi*f
a0 = σ*k

# Calculate the recursion coefficients
B = (2 - ω0^2*k^2)/(1 + a0)
C = (1 - a0)/(1 + a0)


The scheme presented above is stable only when $k \leq 2/\omega_0$. In order to ensure this, a check is inserted to prevent the program from running if this condition is not met.

In [ ]:
# Check the stability condition
# (`error` is used because `return` is not allowed outside a function)
if k > 2/ω0
    error("Stability condition violated!")
end
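Where does this condition come from? A standard argument considers the lossless case $\sigma = 0$, in which the recursion reduces to $u^{n+1} = (2 - \omega_0^2 k^2)u^n - u^{n-1}$. Substituting the trial solution $u^n = z^n$ yields the characteristic equation

$$z^2 - (2 - \omega_0^2 k^2)\,z + 1 = 0.$$

The product of the two roots is 1, so the solution stays bounded only if both roots lie on the unit circle, which requires $|2 - \omega_0^2 k^2| \leq 2$, i.e. $k \leq 2/\omega_0$.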


The program then initialises the finite difference scheme, by setting the values of $u^0$ and $u^1$ according to the initial conditions. Memory is also allocated for the output array that will store the position of the SHO at every time step, and the first two values are saved.

In [ ]:
# Initialise and allocate (a vector of NF zeros)
u0 = x0
u1 = x0 + k*v0
out = zeros(NF)
out[1] = u0
out[2] = u1


Finally, the program enters the main loop, where the differential equation is solved numerically. The operations are very simple, and consist of:

• computing the new value $u^{n+1}$ from the recursion
• saving it in the output array
• resetting the known values for the next iteration

In [ ]:
# Start the main loop
for nn = 3:NF
    # Update the recursion
    u = B*u1 - C*u0

    # Store the new value for the position
    out[nn] = u

    # Reset the known values of the recursion
    # (`global` is needed because the loop runs at the top level)
    global u0, u1 = u1, u
end


### Results

Here is the entire program, expressed as a function that takes two parameters, $f$ and $\sigma$, and returns the output array.

In [38]:
function sho(f, σ)

    # Initial conditions
    x0 = 1
    v0 = 0

    SR = 44100    # Sampling rate
    T = 1         # Duration of the simulation (s)

    # Calculate the time step
    k = 1/SR
    # Calculate the number of iterations
    NF = floor(Int, SR*T)

    # Calculate ω0 and a0. They will be useful later
    ω0 = 2*pi*f
    a0 = σ*k

    # Calculate the recursion coefficients
    B = (2 - ω0^2*k^2)/(1 + a0)
    C = (1 - a0)/(1 + a0)

    # Check the stability condition
    if k > 2/ω0
        error("Stability condition violated!")
    end

    # Initialise and allocate
    u0 = x0
    u1 = x0 + k*v0
    out = zeros(NF)
    out[1] = u0
    out[2] = u1

    # Start the main loop
    for nn = 3:NF
        # Update the recursion
        u = B*u1 - C*u0

        # Store the new value for the position
        out[nn] = u

        # Reset the known values of the recursion
        u0, u1 = u1, u
    end

    # At the end of the loop, return the output
    return out
end

Out[38]:
sho (generic function with 1 method)

Now we can play around with this function, and plot a graph. You can see the oscillation with an exponentially decaying profile.

In [44]:
using Gadfly
out = sho(100, 0.9)
NF = length(out)
t = collect(0:NF-1)/44100   # sample index to time in seconds (SR = 44100)
plot(x=t, y=out, Geom.line,
     Guide.xlabel("time (s)"), Guide.ylabel("position (m)"),
     Guide.title("FD simulation - SHO"))

Out[44]:

(Plot: position of the SHO vs. time, an oscillation with an exponentially decaying envelope.)

Feel free to download this code from my Github profile and to play around with it.
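
As a final sanity check (not part of the original code), the scheme can be compared against the closed-form solution of the damped SHO, $u(t) = e^{-\sigma t}\bigl(u_0 \cos \omega_d t + \frac{v_0 + \sigma u_0}{\omega_d} \sin \omega_d t\bigr)$ with $\omega_d = \sqrt{\omega_0^2 - \sigma^2}$. The snippet below is a self-contained sketch that re-runs the recursion with the same parameters and measures the maximum deviation:

```julia
# Sketch: compare the FD output against the analytic solution of the damped SHO.
f, σ = 100.0, 0.1
SR, T = 44100, 1.0
k = 1/SR
NF = floor(Int, SR*T)
ω0 = 2π*f
ωd = sqrt(ω0^2 - σ^2)   # damped natural frequency

# Recursion coefficients, as in the scheme above
B = (2 - ω0^2*k^2)/(1 + σ*k)
C = (1 - σ*k)/(1 + σ*k)

# Run the recursion
x0, v0 = 1.0, 0.0
u0, u1 = x0, x0 + k*v0
num = zeros(NF)
num[1], num[2] = u0, u1
for nn = 3:NF
    u = B*u1 - C*u0
    num[nn] = u
    global u0, u1 = u1, u
end

# Analytic solution and maximum absolute error over the whole run
exact(t) = exp(-σ*t)*(x0*cos(ωd*t) + (v0 + σ*x0)/ωd*sin(ωd*t))
err = maximum(abs.(num .- exact.((0:NF-1).*k)))
println("maximum absolute error: ", err)
```

With these parameters the error stays far below the oscillation amplitude; what remains is dominated by the slight frequency warping of the scheme and by the first-order initialisation of $u^1$.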