## Loading Python libraries
# necessary to display plots inline:
%matplotlib inline
# load the libraries
import matplotlib.pyplot as plt # 2D plotting library
import numpy as np # package for scientific computing
from math import * # package for mathematics (pi, atan, sqrt, factorial, ...)
See the lecture notes of MAA102.
For the sake of simplicity we review the definitions in the case of positive sequences.
Imagine that we observe a sequence $(a_n)_n$ which seems to grow exponentially fast. More precisely, we make the guess that $$ a_n\stackrel{n\to+\infty}{\sim} cn^\alpha \rho^n, $$ for some $c>0$, $\alpha\in\mathbb{R}$, $\rho>1$. The dominating term in the above approximation is of course $\rho^n$. How can we approximate $\rho$ from an observation of $a_0,a_1,\dots,a_n$?
Proposition. Assume that $a_n=cn^\alpha \rho^n(1+\varepsilon_n)$ with $c>0$, $\alpha\in\mathbb{R}$, $\rho>1$ and $\varepsilon_n\to 0$. Then
$$
\frac{a_{n+1}}{a_n}=\rho\left(1+\frac{\alpha}{n}+\varepsilon_{n+1}-\varepsilon_n+\mathrm{o}(1/n)+\mathrm{o}(\varepsilon_n)\right),
\qquad
a_n^{1/n}=\rho\left(1+\alpha\frac{\log(n)}{n}+\mathrm{o}(\log(n)/n)\right).
$$
In particular both sequences converge to $\rho$, and the first one converges much faster.
Proof of the Proposition.
Case of $a_{n+1}/a_n$.
\begin{align*}
\frac{a_{n+1}}{a_n}&=\frac{c(n+1)^\alpha \rho^{n+1}(1+\varepsilon_{n+1})}{cn^\alpha \rho^n(1+\varepsilon_n)},\\
&=\rho \left(\frac{n+1}{n}\right)^{\alpha} (1+\varepsilon_{n+1})(1-\varepsilon_{n}+o(\varepsilon_{n}))\qquad \text{ (Recall }1/(1+u)=1-u+o(u).)\\
&=\rho \left(1+\frac{1}{n}\right)^{\alpha} (1+\varepsilon_{n+1}-\varepsilon_{n}+o(\varepsilon_{n}))\\
&=\rho \left(1+\frac{\alpha}{n}+o(1/n)\right) (1+\varepsilon_{n+1}-\varepsilon_{n}+o(\varepsilon_{n}))\qquad \text{ (Recall }(1+u)^\alpha=1+\alpha u+o(u).)\\
&=\rho\left(1+\frac{\alpha}{n}+\varepsilon_{n+1}-\varepsilon_n+\mathrm{o}(1/n)+\mathrm{o}(\varepsilon_n) \right).
\end{align*}
Case of $a_{n}^{1/n}$.
\begin{align*}
a_{n}^{1/n}&=\exp\Big(\frac{1}{n}\log\big(cn^\alpha \rho^n(1+\varepsilon_n)\big)\Big)\\
&=e^{\log(c)/n}\, \exp\big(\alpha\log(n)/n\big)\times \rho\times \exp\Big(\frac{\log(1+\varepsilon_n)}{n}\Big)\\
&=\rho\left(1+\alpha\frac{\log(n)}{n}+\mathrm{o}(\log(n)/n) \right),
\end{align*}
since $\frac{\log(c)}{n}$ and $\frac{\log(1+\varepsilon_n)}{n}$ are both $\mathrm{o}(\log(n)/n)$.
End of proof.
In order to compare the two estimates (and check that the first one is the best), let us illustrate the convergence in the case of the Fibonacci numbers. In that case we have that $$ F_n=\frac{1}{\sqrt{5}}\left(\frac{1+\sqrt{5}}{2}\right)^n+o(1), $$ which is (a stronger form of) the hypothesis above with $c=1/\sqrt{5},\alpha=0,\rho=(1+\sqrt{5})/2$. Observe that in this case $\varepsilon_n =\mathrm{o}(\rho^{-n})=\mathrm{o}(1/n)$.
def fibonacci(n,memory={0:0,1:1,2:1}):
    # memoized computation of the n-th Fibonacci number
    # (the mutable default dictionary is used on purpose as a cache shared between calls)
    if n not in memory:
        memory[n]=fibonacci(n-1)+fibonacci(n-2)
    return memory[n]
N_min=10
N_max=50
fibo_values=[fibonacci(n) for n in range(N_max+1)]
# first estimate of rho: ratio of consecutive terms a_{n+1}/a_n
plt.plot(range(N_min,N_max),[fibo_values[n+1]/fibo_values[n] for n in range(N_min,N_max)],'o-',label='a_{n+1}/a_n')
# second estimate of rho: n-th root a_n^{1/n}
plt.plot(range(N_min,N_max),[fibo_values[n]**(1/n) for n in range(N_min,N_max)],'o-',label='a_n^{1/n}')
plt.legend()
plt.show()
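In the Fibonacci example $\alpha=0$, so the terms $\alpha/n$ and $\alpha\log(n)/n$ of the Proposition do not show up. As a complementary sketch (the parameters $c=2$, $\alpha=1.5$, $\rho=1.3$ below are arbitrary illustrative choices, not taken from the notes), one can compare the error of the two estimates on a synthetic sequence $a_n=cn^\alpha\rho^n$ with $\alpha\neq 0$:
# illustration only: synthetic sequence a_n = c * n^alpha * rho^n with arbitrary parameters
c, alpha, rho = 2.0, 1.5, 1.3
nn = np.arange(1, 200)
aa = c * nn**alpha * rho**nn
plt.plot(nn[:-1], np.abs(aa[1:]/aa[:-1] - rho), label='|a_{n+1}/a_n - rho|')  # error of order 1/n
plt.plot(nn, np.abs(aa**(1/nn) - rho), label='|a_n^{1/n} - rho|')             # error of order log(n)/n
plt.yscale('log')
plt.xlabel('n')
plt.legend()
plt.show()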
For $x\in \mathbb{N}$ let $$ \pi(x)=\mathrm{card}\left\{\text{prime numbers }\leq x\right\}. $$ Thus $\pi(1)=0,\pi(2)=1,\pi(3)=2,\pi(4)=2,\dots$ In tutorials we will investigate the asymptotic behaviour of $(\pi(x))_x$. Let us state and prove bounds for $\pi(x)$.
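Before doing so, here is a quick sanity check of the values above (a minimal sketch by trial division; the helpers `is_prime` and `prime_counting` are names introduced here for illustration only):
# illustration only: compute pi(x) = card{ prime numbers <= x } by trial division
def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def prime_counting(x):
    return sum(1 for m in range(2, x + 1) if is_prime(m))

print([prime_counting(x) for x in range(1, 11)])  # [0, 1, 2, 2, 3, 3, 4, 4, 4, 4]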
Proposition. For every integer $x\geq 2$,
$$
\log_2(\log_2(x))-1\leq \pi(x)\leq x.
$$
The lower bound relies on the following bound on the $n$-th prime number $p_n$.
Lemma. For every $n\geq 1$, $p_n\leq 2^{2^n}$.
Proof of the Lemma.
The proof goes by induction; the idea is to make effective Euclid's argument that there are infinitely many primes.
For $n=1$ the inequality is clear since $p_1=2\leq 2^{2^1}=4$. Assume that the inequality holds for all indices up to some $n\geq 1$. The integer $p_1p_2\dots p_n +1$ must have a prime factor, and it cannot be any of $p_1,\dots,p_n$ (dividing by any of them leaves remainder $1$), so we have found a new prime between $p_n$ and $p_1p_2\dots p_n +1$. Thus
\begin{align*}
p_{n+1}&\leq p_1p_2\dots p_n +1\\
&\leq 2^{2^1}2^{2^2}\dots 2^{2^n} +1\\
&=2^{2+2^2+\dots+2^n}+1\\
&=2^{2^{n+1}-2}+1\\
&=2^{2^{n+1}}\left(2^{-2}+2^{-2^{n+1}}\right)\leq 2^{2^{n+1}}.
\end{align*}
End of proof of the Lemma.
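A quick numerical check of the Lemma for the first few primes (illustration only, reusing the trial-division helper `is_prime` introduced above):
# illustration only: check p_n <= 2**(2**n) for the first ten primes
primes = []
m = 2
while len(primes) < 10:
    if is_prime(m):
        primes.append(m)
    m += 1
for n, p in enumerate(primes, start=1):
    print(n, p, p <= 2**(2**n))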
Proof of the Proposition.
The upper bound $\pi(x)\leq x$ is obvious, so we focus on the lower bound.
By definition of $x\mapsto \pi(x)$ we have, for every integer $x\geq 2$, that
$$
p_{\pi(x)}\leq x <p_{\pi(x)+1}.
$$
Using the Lemma we therefore have
$$
x\leq 2^{2^{\pi(x)+1}}.
$$
Taking the base-2 logarithm of both sides twice, we obtain
$$
\log_2(\log_2(x))\leq \pi(x)+1.
$$
End of proof of the Proposition.
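A small numerical illustration of these bounds (a sketch reusing the hypothetical `prime_counting` helper above; the values of $x$ are arbitrary choices):
# illustration only: check log2(log2(x)) - 1 <= pi(x) <= x for a few values of x
for x in [2, 4, 16, 256, 1000]:
    lower = np.log2(np.log2(x)) - 1
    print(f"x = {x:4d}   lower bound = {lower:6.2f}   pi(x) = {prime_counting(x):4d}   upper bound = {x}")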
In experimental mathematics we often encounter sequences defined (more or less explicitly) by a recurrence of the form $u_{n+1}=f(u_n)$. Let us recall from MAA102 how to deal with such recurrences.
a=1
for n in range(31):
    print('n = ',n,' u_n = ',a)
    a=np.sqrt(2)**(a)  # iterate u_{n+1} = sqrt(2)^(u_n)
n = 0 u_n = 1
n = 1 u_n = 1.4142135623730951
n = 2 u_n = 1.632526919438153
n = 3 u_n = 1.7608395558800285
n = 4 u_n = 1.8409108692910108
n = 5 u_n = 1.892712696828511
...
n = 29 u_n = 1.9999846987470957
n = 30 u_n = 1.9999893940078126
The above script seems to indicate that $u_n \to 2$. The following plot gives an insight into what happens:
xx=np.arange(0,5,0.01)
plt.plot(xx,xx,label='x -> x')  # the diagonal y = x
plt.plot(xx,np.sqrt(2)**xx,label='x -> sqrt(2)^x')  # the iterated map
plt.legend()
plt.show()
Indeed, the map $f:x\mapsto \sqrt{2}^x$ has two fixed points, $x=2$ and $x=4$. The interval $[0,2]$ is clearly stable under $f$, and on this interval $\sqrt{2}^x \geq x$. This shows that $(u_n)$ is increasing and bounded above by $2$, so it converges, and by continuity of $f$ the limit $\ell$ is the unique fixed point of $f$ in that interval: $\ell=2$.
Actually one can do better. First observe that $f'(x)=\log(\sqrt{2})\sqrt{2}^x$, so that the mean value theorem applied on the interval $[0,2]$ gives, for every $x,y\in [0,2]$, $$ |f(x)-f(y)|\leq \sup_{[0,2]} f'\times |x-y|= \log(\sqrt{2})\sqrt{2}^2\times |x-y|=\log(2)|x-y|. $$ In other words, $f$ is a contraction. This implies, for $n\geq m$, $$ |u_n-u_m|\leq \log(2)|u_{n-1}-u_{m-1}|\leq \log(2)^2|u_{n-2}-u_{m-2}|\leq \dots \leq\log(2)^{m}|u_{n-m}-u_{0}|\leq \log(2)^{m}\times 1. $$ Letting $n\to +\infty$ (while keeping $m$ fixed) finally gives $$ |\ell-u_m|\leq \log(2)^{m}, $$ so that the convergence is exponentially fast.
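As a short numerical illustration of this error bound (a minimal sketch; the number of iterations is an arbitrary choice), one can compare the actual error $|u_m-2|$ with the bound $\log(2)^m$:
# illustration only: compare the actual error |u_m - 2| with the contraction bound log(2)^m
u = 1
for m in range(11):
    print(f"m = {m:2d}   |u_m - 2| = {abs(u - 2):.3e}   log(2)^m = {np.log(2)**m:.3e}")
    u = np.sqrt(2)**u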