import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.dpi'] = 120
plt.rcParams['figure.figsize'] = [8,4]
plt.rcParams['axes.grid'] = True
#plt.rcParams.keys()
Two basic types of problems, initial value problems (IVPs) and boundary value problems (BVPs), both for a system of first-order equations:
$$\vec y'=\vec F(t, \vec y), \qquad t\in \mathbb{R}, \qquad \vec y, \vec F\in \mathbb{R}^N$$Consider a two-point boundary value problem (a second-order equation can be rewritten as a system of two first-order equations with $\vec y = (u, u')^T$):
$$u''+u=0,\qquad u(0)=1,\quad u(L)=b$$The general solution is
$$u(x)=A\cos x+B\sin x,\qquad A=1,\;\;\cos L+B\sin L=b$$If $\sin L=0$, the condition at $x=L$ has either no solution or infinitely many, so the BVP may fail to be well posed. However, the IVP always has a unique solution:
$$u(0)=a, u'(0)=b \qquad\Rightarrow\qquad u(x) = a\cos x + b\sin x$$If $\vec F(x,\vec y)$ and its derivatives $\partial \vec F/\partial \vec y$ are continuous in some vicinity of $(x_0, \vec y_0)$, then there exists a neighborhood of $x_0$ in which the solution to the IVP $\vec y'=\vec F(x, \vec y)$, $\vec y(x_0)=\vec y_0$ exists and is unique.
But one can also approximate the derivative at the new point $t_{n+1}$:
$$y'_{n+1}=\frac{y_{n+1}-y_n}{h} + \mathcal{O}(h)$$and this gives
$$y_{n+1} = y_n+ hF(t_{\color{red}{n+1}}, \color{red}{y_{n+1}})$$which is an equation for $y_{n+1}$. This is called the implicit Euler method.
Consider $y'= -y$ with initial condition $y(0)=1$; the exact solution is $y(t)=y(0)e^{-t}$.
The explicit method gives simply $$y_{n+1} = y_{n} - hy_{n} \Rightarrow y_{n+1} = y_0(1-h)^{n+1}$$
The implicit method gives $$y_{n+1} = y_{n} - hy_{n+1} \Rightarrow y_{n+1} = \frac{y_n}{1+h}=\frac{y_0}{(1+h)^{n+1}}$$
t = np.linspace(0, 10, 20)
h = t[1]-t[0]
y_ex = np.empty(t.size)
y_im = np.empty(t.size)
y_ex[0] = y_im[0] = 1 # initial conditions
for n, T in enumerate(t[:-1]):
    y_ex[n+1] = y_ex[n] + h*(-y_ex[n])
    y_im[n+1] = y_im[n]/(1 + h)
plt.plot(t, y_ex, '-+', label='Explicit method')
plt.plot(t, y_im, '-o', label='Implicit method')
plt.plot(t, np.exp(-t), label='Exact solution')
plt.xlabel('$t$')
plt.ylabel('$y(t)$')
plt.legend()
plt.show()
so
$$y(t+h) = y(t) + h\,F\left(t+\frac{h}{2}, \color{red}{y\left(t+\frac{h}{2}\right)}\right)+\mathcal{O}(h^3)$$But we still don't know $y(t+h/2)$. It can be estimated by a half Euler step, $y(t+h/2)\approx y(t)+\frac{h}{2}F(t, y(t))$, which gives the explicit midpoint method
$$y_{n+1}=y_n+hF\left(t_n+\frac{h}{2},\; y_n+\frac{h}{2}F(t_n,y_n)\right)$$Local error $\mathcal{O}(h^3)$, total error $\mathcal{O}(h^2)$
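A minimal sketch (added here for illustration; the helper names euler_step and midpoint_step are ours) comparing the two methods on $y'=-y$: halving $h$ should cut the Euler error roughly in half and the midpoint error roughly fourfold.
def euler_step(F, t, y, h):
    return y + h*F(t, y)
def midpoint_step(F, t, y, h):
    k1 = F(t, y)
    return y + h*F(t + h/2, y + h/2*k1)
F = lambda t, y: -y
for h in [0.2, 0.1, 0.05]:
    t, y_eu, y_mp = 0.0, 1.0, 1.0
    while t < 5.0 - 1e-12:       # integrate to t = 5 with fixed step h
        y_eu = euler_step(F, t, y_eu, h)
        y_mp = midpoint_step(F, t, y_mp, h)
        t += h
    print(f"h={h:5.3f}  Euler err={abs(y_eu-np.exp(-t)):.2e}  midpoint err={abs(y_mp-np.exp(-t)):.2e}")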
These were the simplest examples of Runge–Kutta methods. The most popular method is the explicit RK4: $${\displaystyle {\begin{aligned}k_{1}&=h\ F(t_{n},y_{n}),\\k_{2}&=h\ F\left(t_{n}+{\frac {h}{2}},y_{n}+{\frac {k_{1}}{2}}\right),\\k_{3}&=h\ F\left(t_{n}+{\frac {h}{2}},y_{n}+{\frac {k_{2}}{2}}\right),\\k_{4}&=h\ F\left(t_{n}+h,y_{n}+k_{3}\right).\end{aligned}}}$$
$${\displaystyle {\begin{aligned}y_{n+1}&=y_{n}+{\tfrac {1}{6}}\left(k_{1}+2k_{2}+2k_{3}+k_{4}\right),\\t_{n+1}&=t_{n}+h\\\end{aligned}}}$$It is especially useful for solving PDEs with a fixed step.
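A minimal RK4 stepper following the formulas above, checked on $y'=-y$ (the helper name rk4_step is ours):
def rk4_step(F, t, y, h):
    # the four stages of classical RK4
    k1 = h*F(t, y)
    k2 = h*F(t + h/2, y + k1/2)
    k3 = h*F(t + h/2, y + k2/2)
    k4 = h*F(t + h, y + k3)
    return y + (k1 + 2*k2 + 2*k3 + k4)/6
t, y, h = 0.0, 1.0, 0.1
while t < 1.0 - 1e-12:
    y = rk4_step(lambda t, y: -y, t, y, h)
    t += h
print(y, np.exp(-1.0))   # agree to about six digits: total error O(h^4)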
Most universal solver: LSODE (Livermore Solver for Ordinary Differential Equations) and its variant LSODA, which switches automatically between methods for non-stiff (Adams) and stiff (BDF) problems.
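In SciPy this family is exposed as the 'LSODA' method of scipy.integrate.solve_ivp; a minimal sketch on a stiff linear test problem of our own choosing:
import scipy.integrate
# stiff test problem: fast decay toward the slowly varying manifold y ~ cos(t)
def F_stiff(t, y):
    return -1000*(y - np.cos(t))
res = scipy.integrate.solve_ivp(F_stiff, (0, 2), [0.0], method='LSODA', rtol=1e-6, atol=1e-9)
print(res.success, res.t.size, res.y[0, -1])   # modest number of steps despite stiffness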
When solving equation $y'=F(y)$ we can expand $F$ in a Taylor series
$$y'=F(y_n) + (y-y_n) F'(y_n) +\ldots$$Since $y'_n=F(y_n)$, we can linearize the equation by introducing $\tilde y=y-y_n$
$$ \tilde y'=F'(y_n) \tilde y$$For simplicity we drop the tilde from now on.
The method is called absolutely stable (for a given $\lambda h$) if for the test equation $y'=\lambda y$ the numerical solution tends to zero: $y_n\to 0$ as $n\to\infty$. For example, the explicit Euler method gives $y_{n+1}=(1+\lambda h)y_n$, so it is stable for $|1+\lambda h|<1$.
def test_method_stability(F):
    # plot |amplification factor per step| over the complex λh plane;
    # the stability region is where this factor is below 1
    xlist = np.linspace(-3.0, 3.0, 100)
    ylist = np.linspace(-3.0, 3.0, 100)
    X, Y = np.meshgrid(xlist, ylist)
    λ = X + Y*1j
    Z = np.abs(F(-λ))   # the stability functions below are written for y' = -λy, hence the sign
    levels = np.linspace(0, 1, 6)
    contour = plt.contour(X, Y, Z, levels, colors='k')
    contour_filled = plt.contourf(X, Y, Z, levels)
    plt.title('Stability of ' + F.__name__)
    plt.xlabel(r'Re $\lambda h$')
    plt.ylabel(r'Im $\lambda h$')
# per-step amplification factors for y' = -λy (here λ stands for λh)
def Euler_explicit(λ): return (1 - λ)
def Euler_implicit(λ): return 1/(1 + λ)
def explicit_midpoint(λ): return 1 - λ + λ**2/2
def implicit_midpoint(λ): return (1 - λ/2)/(1 + λ/2)
plt.figure(figsize=(10, 10), dpi=100)
for n, method in enumerate([Euler_explicit, Euler_implicit, explicit_midpoint, implicit_midpoint]):
    plt.subplot(2, 2, n+1)
    test_method_stability(method)
plt.show()
xlist = np.linspace(-4.0, 4.0, 500)
ylist = np.linspace(-4.0, 4.0, 500)
X, Y = np.meshgrid(xlist, ylist)
λ = X+Y*1j
Λ = λ.flatten()
Λ.shape
#print(Λ)
def test_f(t, y):
    # print("Calculating: t =", t, "y =", y)
    return Λ*y    # right-hand side for all λ values at once
import scipy.integrate

def check_scipy_solvers(Method):
    # take a single step of size h = 1 for the whole grid of λ values at once;
    # the huge atol makes the solver accept the prescribed first step
    t_span = (0, 1)
    y0 = np.ones(Λ.shape, dtype=complex)
    res = scipy.integrate.solve_ivp(test_f, t_span, y0, method=Method,
                                    atol=1e8, first_step=1)
    z = res.y[:, 1]                  # state after the first step
    Z = np.abs(z.reshape(λ.shape))
    plt.grid(True)
    levels = np.linspace(0, 1, 6)
    contour = plt.contour(X, Y, Z, levels, colors='k')
    contour_filled = plt.contourf(X, Y, Z, levels)
    plt.title('Stability of ' + Method)
    plt.xlabel(r'Re $\lambda h$')
    plt.ylabel(r'Im $\lambda h$')
plt.figure(figsize=(10, 10), dpi=100)
for n, method in enumerate(['RK23', 'RK45']):
    plt.subplot(2, 2, n+1)
    check_scipy_solvers(method)
plt.show()
For diffusion equations $$u_{t} = u_{xx}+F(t, x, u)$$the discretized $u_{xx}$ operator has large negative real eigenvalues (down to $\sim -4/\Delta x^2$), so the system is stiff. Below, a few sample $\lambda h$ values (red) and the same values with the step reduced to $0.7h$ (blue) are drawn on top of the stability region of the explicit Euler method.
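A quick numerical check of this claim (an added illustration): the eigenvalues of the standard second-difference matrix fill the interval $(-4/\Delta x^2, 0)$.
N, dx = 50, 0.1
# tridiagonal second-difference approximation of d^2/dx^2, Dirichlet BCs
D2 = (np.diag(-2*np.ones(N)) + np.diag(np.ones(N-1), 1)
      + np.diag(np.ones(N-1), -1))/dx**2
print(np.linalg.eigvalsh(D2).min())   # close to -4/dx**2 = -400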
plt.figure(figsize=(8, 8), dpi=100)
rel = np.array([-0.1, -2.5, 0, 0])   # Re λh for a few sample eigenvalues
iml = np.array([0, 0, -1, 1])        # Im λh
plt.plot(rel, iml, "ro")             # original step h
plt.plot(rel*0.7, iml*0.7, "bo")     # step reduced to 0.7h
test_method_stability(Euler_explicit)
N = 10
x = np.linspace(0,10, N)
y = np.zeros(N)
y[0] = 1
h = x[1]-x[0]
# explicit method
for n in range(N-1):
    y[n+1] = y[n] + h*np.sin(y[n])   # y' = sin(y)
y_explicit = y.copy()
# implicit method, naive fixed-point iteration
y[0] = 1
for n in range(N-1):
    """
    solve y[n+1] = y[n] + h*np.sin(y[n+1])
    by iterating y[n+1] <- y[n] + h*np.sin(y[n+1])
    """
    y[n+1] = y[n]        # initial guess
    for k in range(20):
        u = y[n] + h*np.sin(y[n+1])
        dy = abs(y[n+1] - u)
        y[n+1] = u
        if dy < 1e-10: break
    #print(k)
y_implicit1 = y.copy()
# implicit method, Newton iteration
y[0] = 1
for n in range(N-1):
    """
    solve y[n+1] = y[n] + h*np.sin(y[n+1]), i.e. find a root of
    F(y[n+1]) = y[n] + h*np.sin(y[n+1]) - y[n+1]
    """
    y[n+1] = y[n]        # initial guess
    for k in range(8):
        F = y[n] + h*np.sin(y[n+1]) - y[n+1]
        dF = h*np.cos(y[n+1]) - 1    # dF/dy[n+1]
        dy = F/dF
        y[n+1] -= dy                 # Newton step
        if abs(dy) < 1e-10: break
    #print(k)
plt.grid(True)
plt.plot(x,y_implicit1, "-o", label="naive implicit")
plt.plot(x,y_explicit, "-o", label='explicit')
plt.plot(x,y, "-o", label='Newton implicit')
plt.legend()
plt.show()
In the naive method we solve the equation by iteration: $$y_{n+1}^{(k+1)}=y_n+hF\left(y_{n+1}^{(k)}\right)$$This fixed-point iteration converges when
$$\left|hF'\left(y_{n+1}^{(k)}\right)\right|<1,$$which is essentially the same condition $|\lambda h|<1$ as the stability condition of the explicit method.
A second-order equation $$y''+y=0$$ can be solved directly using $$y_n''\approx \frac{y_{n-1}-2y_n+y_{n+1}}{h^2},$$which leads to a multistep scheme $$y_{n+1} = -y_{n-1}+2y_n-h^2 y_{n}$$
t = np.linspace(0, 20*np.pi, 1000)
h2 = (t[1]-t[0])**2
y = np.empty(t.size)
y[0] = y[1] = 1
for n in range(1, t.size-1):    # multistep: needs the two starting values y[0], y[1]
    y[n+1] = -y[n-1] + (2-h2)*y[n]
plt.plot(t, y)
plt.show()
E = (y[1:]-y[:-1])**2/h2 + y[1:]**2   # discrete energy  y'^2 + y^2
plt.plot(t[1:], E)
Or it can be transformed into a system of two equations, $\vec u=(y, y')^T$:
$$ \vec u'= \left(\begin{array}{cc} 0& 1\\ -1& 0\end{array}\right)\vec u = M\vec u $$Explicit Euler method gives $$\vec u_{n+1}=(\mathbb{1}+hM)\vec u_n$$ Implicit Euler method gives $$\vec u_{n+1}=(\mathbb{1}-hM)^{-1}\vec u_n$$
import sympy as sp
sp.init_printing(use_unicode=True)
h = sp.symbols('h')
A = sp.Matrix([[1, h], [-h, 1]])    # explicit map  1 + hM
B = sp.Matrix([[1, -h], [h, 1]])    # 1 - hM; the implicit map is B.inv()
B.inv()
t = np.linspace(0, 10*np.pi, 1000)
h = (t[1]-t[0])
u_exp = np.empty([t.size, 2])
u_exp[0, :] = np.array([1, 0])
for n, T in enumerate(t[1:]):
    u_exp[n+1, 0] = u_exp[n, 0] + h*u_exp[n, 1]
    u_exp[n+1, 1] = u_exp[n, 1] - h*u_exp[n, 0]
u_imp = np.empty([t.size, 2])
u_imp[0, :] = np.array([1, 0])
fac = (1 + h*h)     # det(1 - hM)
for n, T in enumerate(t[1:]):
    u_imp[n+1, 0] = (u_imp[n, 0] + h*u_imp[n, 1])/fac
    u_imp[n+1, 1] = (u_imp[n, 1] - h*u_imp[n, 0])/fac
plt.plot(t, u_exp[:, 0], label='explicit')
plt.plot(t, u_imp[:, 0], label='implicit')
plt.legend()
For Hamiltonian systems the flow preserves volume in the phase space $(y, y')$; for the oscillator the phase-space area enclosed by a trajectory is proportional to the energy.
The explicit Euler method is the linear map $\vec u_n\to \vec u_{n+1}=A\vec u_n=(\mathbb{1}+hM)\vec u_n$
The implicit Euler method is the linear map $\vec u_n\to \vec u_{n+1}=B\vec u_n=(\mathbb{1}-hM)^{-1}\vec u_n$
A.det()
B.inv().det().simplify()
Phase-space volume grows under the explicit method ($\det A=1+h^2>1$) and shrinks under the implicit one ($\det B=1/(1+h^2)<1$), as the plots above show.
Update the position from the old velocity, then the velocity from the updated position: $$u_{n+1, 0} = u_{n, 0} + h\,u_{n, 1}$$
$$u_{n+1, 1} = u_{n, 1} - h\,u_{n+1, 0}$$Or equivalently, as a composition of two maps: $$\vec u_{n+1}=\left(\begin{array}{cc} 1& 0\\ -h& 1\end{array}\right) \left(\begin{array}{cc} 1& h\\ 0& 1\end{array}\right) \vec u_{n}=M_2M_1\vec u_n$$
h = t[1]-t[0]
u_sym = np.empty([t.size, 2])
u_sym[0, :] = np.array([1, 0])
for n, T in enumerate(t[1:]):
    u_sym[n+1, 0] = u_sym[n, 0] + h*u_sym[n, 1]     # position from old velocity
    u_sym[n+1, 1] = u_sym[n, 1] - h*u_sym[n+1, 0]   # velocity from new position
plt.plot(t, u_exp[:, 0], label='explicit')
plt.plot(t, u_imp[:, 0], label='implicit')
plt.plot(t, u_sym[:, 0], label='symplectic')
plt.legend()
Symplectic methods preserve the energy (or a closely related quantity): $$\vec u_{n+1}= \left(\begin{array}{cc} 1& 0\\ -h& 1\end{array}\right) \left(\begin{array}{cc} 1& h\\ 0& 1\end{array}\right) \vec u_{n}=M_2(h)M_1(h)\vec u_n,$$ where obviously $\det M_1=\det M_2=1$. But they lose this property when the step size is changed, so they are used mainly for conservative systems integrated with a fixed step.
Higher-order methods can be constructed by symmetric composition, e.g.
$$\vec u_{n+1}=M_1\left(\frac h2\right)M_2(h)M_1\left(\frac h2\right)\,\vec u_{n}$$Symplectic (geometric) integrators can be applied to Hamiltonian systems with a separable Hamiltonian $H = H(p, q)=T(p)+V(q)$
and the Hamiltonian equations $$\dot q = \frac{\partial H}{\partial p}=T'(p),\qquad \dot p = -\frac{\partial H}{\partial q}=-V'(q)$$
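A minimal sketch of the composed step $M_1(h/2)\,M_2(h)\,M_1(h/2)$ (a leapfrog-type scheme) for the oscillator above; the array name u_lf is ours, following the layout of the earlier cells:
t = np.linspace(0, 10*np.pi, 1000)
h = t[1] - t[0]
u_lf = np.empty([t.size, 2])
u_lf[0, :] = np.array([1, 0])
for n in range(t.size - 1):
    q = u_lf[n, 0] + 0.5*h*u_lf[n, 1]      # half drift  M1(h/2)
    p = u_lf[n, 1] - h*q                   # full kick   M2(h)
    u_lf[n+1, 0] = q + 0.5*h*p             # half drift  M1(h/2)
    u_lf[n+1, 1] = p
E = u_lf[:, 0]**2 + u_lf[:, 1]**2          # stays close to 1, with no drift
print(E.min(), E.max())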
Formally $y'=H(t, y)\,y$ can be solved using the relation
$$y(t)=e^{\int H\, dt}\,y(0)$$(exact when the $H$'s at different times commute). For very stiff systems, if $H$ can be easily separated as
$$H=O^TH_1O + H_2$$where $H_1$ and $H_2$ are diagonal and $O$ is orthogonal (or unitary) one can use a split-step method
$$y_{n+1}\approx e^{(O^TH_1O + H_2)h}y_{n}\approx O^Te^{H_1h}O\,e^{H_2h}y_{n}$$This method is often used for solving the time-dependent Schrödinger equation, but it can be used for diffusion as well.
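A minimal numerical check of the splitting with random $2\times 2$ matrices (scipy.linalg.expm computes the exact exponentials; the whole example is our added illustration): the per-step error of the split step scales as $\mathcal{O}(h^2)$.
from scipy.linalg import expm
rng = np.random.default_rng(0)
# build H = O^T H1 O + H2 with H1, H2 diagonal and O orthogonal
O, _ = np.linalg.qr(rng.standard_normal((2, 2)))
H1 = np.diag(rng.standard_normal(2))
H2 = np.diag(rng.standard_normal(2))
H = O.T @ H1 @ O + H2
for h in [0.1, 0.05]:
    exact = expm(H*h)
    split = O.T @ expm(H1*h) @ O @ expm(H2*h)   # e^{H1 h}, e^{H2 h} are cheap: diagonal
    print(h, np.abs(exact - split).max())       # error shrinks roughly 4x when h halves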
Examples:
$\;\;\;\displaystyle y''=2 y (y^2-1)\,, \qquad y(0) = 0,\, y(\infty)\approx y(L)=1$
Eigenvalue problem (find $\omega$) $\;\;\;\displaystyle -y''+\left(4-\frac{6}{\cosh^2x }\right)y=\omega^2 y\,,\qquad y(0)=1,\,y'(0)=0,\, y(\infty)\approx y(L)=0$
or $y(0)=0,\,y'(0)=1,\, y(\infty)\approx y(L)=0$
One can adopt a shooting method: integrate the IVP from $x=0$ and adjust the free parameter ($y'(0)$ or $\omega^2$) until the boundary condition at $x=L$ is satisfied; a sketch is given below.
Note: Solutions of the ODE can diverge exponentially fast
$$|\delta y(L)|\approx |\delta y(0)|\,e^{\lambda L},$$where $\lambda$ is a Lyapunov exponent.
Sometimes modifications of the original problem are required to decrease the Lyapunov exponent, for example replacing the equation by $y''=0$ when $|y|>10$.
One can also shoot from both ends and match at some point in the middle: more difficult, but better conditioned.
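A minimal shooting sketch for the first example above (the bracket, tolerances, and $L=2$ are our illustrative choices; a modest $L$ keeps the exponential divergence mild). The exact solution is the kink $y=\tanh x$ with $y'(0)=1$:
from scipy.integrate import solve_ivp
from scipy.optimize import brentq
L = 2.0
def rhs(x, u):                       # u = (y, y')
    return [u[1], 2*u[0]*(u[0]**2 - 1)]
def miss(s):                         # shooting residual y(L) - 1 for y'(0) = s
    sol = solve_ivp(rhs, (0, L), [0.0, s], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1] - 1.0
s = brentq(miss, 0.9, 1.1)           # adjust y'(0) until y(L) = 1
print(s)                             # close to y'(0) = 1 of the exact kink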
Often better: discretize the equation and solve a huge system of algebraic equations (linear or nonlinear). Use sparse matrix solvers.
Best: use spectral methods (and full matrices).
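For instance, the eigenvalue example above becomes a matrix eigenproblem after discretization; a minimal finite-difference sketch (the grid parameters are our choice, and a spectral discretization of $y''$ would converge much faster):
L, N = 10.0, 1000
x = np.linspace(-L, L, N)
dx = x[1] - x[0]
# -y'' + (4 - 6/cosh^2 x) y = ω^2 y, Dirichlet BCs at x = ±L
D2 = (np.diag(-2*np.ones(N)) + np.diag(np.ones(N-1), 1)
      + np.diag(np.ones(N-1), -1))/dx**2
Hmat = -D2 + np.diag(4 - 6/np.cosh(x)**2)
w2 = np.linalg.eigvalsh(Hmat)
print(w2[:4])   # the lowest values approximate the bound-state ω^2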