Now we come to a technique that’s very important in physics and engineering, and can be used to calculate logarithms, exponentials, and trigonometric functions to any desired precision.
Before we start I’d like to refresh your memory on some notations that will be used in this section. The factorial notation, for example \[5! = 5\times4\times3\times2\times1 = 120\] is covered in Section 13.3: Factorials.
Sigma notation, for example \[\sum_{n=3}^6 n^2 = 3^2 + 4^2 + 5^2 + 6^2\] is covered in Section 7.7: Sequences.
I’ll also be using Lagrange’s notation for higher derivatives, for example $f^{(4)}(x)$ means the fourth derivative of $f(x)$, i.e. the derivative of $f'''(x)$ (see Section 15.3.5: Higher Derivatives). For the sake of simplicity, I’ll be using $f^{(0)}(x)$ to mean $f(x)$, which could be considered as the `zeroth' derivative.
Suppose we want an approximation for $\e^x$ for small values of $x$. Imagine that we want to find $\e^{0.1}$, for example, and calculators and log tables haven’t been invented yet. First of all, we know it’s approximately $1$, because $\e^0 = 1$. Already that’s a pretty good approximation (the real value is about $1.10517$), but how can we make it better?
What if we approximated $\e^x$ with a linear function $a+bx$, choosing $a$ and $b$ so that the linear function is tangent to $\e^x$ at $x=0$? In other words, we want the values of the two functions to match when $x=0$ (so that they pass through the same point) and their derivatives to match there too (so that they have the same slope). What would $a$ and $b$ be? What is our approximation for $\e^{0.1}$ now? Show answer
\begin{align*} \frac{\dif }{\dif x}(a+bx) &= b\\ \frac{\dif }{\dif x} \e^x &= \e^x = 1 \text{ when }x=0\\ \therefore b&= 1\\ \\ a+bx &= a \text{ when }x=0\\ \e^0 &= 1\\ \therefore a&= 1 \end{align*} So now we have $\e^x\approx 1+x$, and $\e^{0.1}\approx 1 + 0.1 = 1.1$, which is even better than our first approximation.
Now maybe we decide that we want an even better approximation, so not only do we want the values and the first derivatives to match when $x=0$, but also the second derivatives, so we choose a quadratic function, $a+bx + cx^2$. Find $a$, $b$, and $c$. What is our new approximation for $\e^{0.1}$? Show answer
As before, $a=1$ and $b=1$. The second derivative is $2c$, which we want to be $1$, so $c = 1/2$. Now we have \begin{align*} \e^x &\approx 1 + x + \frac{x^2}{2}\\ \e^{0.1} &\approx 1 + 0.1 + \frac{0.1^2}{2} = 1.105 \end{align*}
What if we now decide we also want the third, fourth, and fifth derivatives to match $f(x)=\e^x$ when $x=0$, so we use a quintic function \[g(x) = a_0 + a_1x + a_2x^2 + a_3x^3 + a_4x^4 + a_5x^5.\] In other words, we want $f(0)=g(0)$, $f'(0)=g'(0)$, and $f''(0) = g''(0)$ as before, but also $f'''(0)=g'''(0)$, $f^{(4)}(0) = g^{(4)}(0)$, and $f^{(5)}(0) = g^{(5)}(0)$. Find the coefficients $a_0$ to $a_5$. Show answer
The $n$th derivative of $\e^x$ is always $\e^x$, which is $1$ when $x=0$, so we want all the derivatives to be $1$. \begin{align*} g(0) &= a_0\\ g'(x) &= a_1 + 2a_2x + 3a_3x^2 + 4a_4x^3 + 5a_5x^4 \\ g'(0) &= a_1\\ g''(x) &= 2a_2 + 3\times2a_3x + 4\times3a_4x^2 + 5\times4a_5x^3 \\ g''(0) &= 2a_2\\ g'''(x) &= 3\times2a_3 + 4\times3\times2a_4x + 5\times4\times3a_5x^2 \\ g'''(0) &= 3\times2a_3\\ g^{(4)}(x) &= 4\times3\times2a_4 + 5\times4\times3\times2a_5x \\ g^{(4)}(0) &= 4\times3\times2a_4 = 4!a_4\\ g^{(5)}(x) &= 5\times4\times3\times2a_5 \\ g^{(5)}(0) &= 5!a_5 \end{align*} You’ve probably noticed the pattern by now: $g^{(n)}(0) = n!a_n$. (We can consider $g^{(0)}$, the `zeroth' derivative, to be the function $g$ itself, and we can define $0! = 1$ so that this works for $n=0,1,2,\ldots$.) We want all the derivatives to be $1$, so \begin{align*} a_0 &= 1 = \frac{1}{0!}\\ a_1 &= 1 = \frac{1}{1!}\\ a_2 &= \frac{1}{2!}\\ a_3 &= \frac{1}{3!}\\ a_4 &= \frac{1}{4!}\\ a_5 &= \frac{1}{5!} \end{align*}
We could continue this process indefinitely. Write $\e^x$ as a polynomial of infinite degree (using sigma notation). Show answer
\begin{align*} \e^x &= \frac{x^0}{0!} + \frac{x^1}{1!} + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \cdots\\ &= \sum_{n=0}^\infty \frac{x^n}{n!} \end{align*}
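To see this series in action, here is a minimal Python sketch (the helper name `exp_taylor` is my own) that computes partial sums of $\sum x^n/n!$ and watches them home in on $\e^{0.1}$:

```python
import math

def exp_taylor(x, terms):
    """Partial sum of the Taylor series e^x = sum of x^n / n!."""
    return sum(x**n / math.factorial(n) for n in range(terms))

# Successive approximations to e^0.1, matching the values worked out
# above: 1, 1.1, 1.105, ... (up to floating-point rounding)
for terms in range(1, 6):
    print(terms, exp_taylor(0.1, terms))

print(math.exp(0.1))  # the 'true' value for comparison
```

Even five terms already agree with $\e^{0.1}$ to about seven decimal places, which is why the series is so useful for small $x$.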
What we just found is called the Taylor series for $\e^x$ (or Maclaurin series since it’s around $x=0$). Many mathematicians were involved in inventing Taylor series, including Brook Taylor, James Gregory, Colin Maclaurin, and Madhava of Sangamagrama, but Taylor was the first one to write down the general method.
A Taylor series can be found for any function we can keep differentiating, but it doesn’t always behave nicely (the series may be divergent for some values of $x$ – see Question 15.5.8 below). In the case of $\e^x$ it always converges though, and we can even use this Taylor series as an alternative definition of $\e^x$ and say that they are the same thing.
This is the basic idea that calculators use to find $\e^x$. They use as many terms of the Taylor series as they want to get the desired number of significant figures. (To find something like $\e^{2.3}$ they might calculate $\e\times\e\times\e^{0.3}$, using the Taylor series to find $\e^{0.3}$, rather than apply the Taylor series directly with $x=2.3$, which would take longer to converge.)
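Here is a rough sketch of that range-reduction idea (the function names are mine, not taken from any particular calculator): split off the integer part of the exponent and only apply the series to the small fractional part.

```python
import math

def exp_series(x, terms=20):
    # Maclaurin series for e^x; converges fastest when |x| is small.
    return sum(x**n / math.factorial(n) for n in range(terms))

def exp_reduced(x):
    # e.g. e^2.3 = e^2 * e^0.3, so the series only ever sees a value
    # with |fractional part| < 1.
    k = math.floor(x)
    frac = x - k
    return exp_series(1.0) ** k * exp_series(frac)

print(exp_reduced(2.3), math.exp(2.3))
```

The integer power $\e^k$ is just repeated multiplication, so the slow-to-converge large argument never reaches the series.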
Let’s apply the same idea to $\sin x$. Write down a polynomial of infinite degree with the same value and (higher) derivatives as $\sin x$ when $x=0$. Show answer
\begin{align*} \sin 0 &= 0\\\\ \frac{\dif }{\dif x}\sin x &= \cos x \\ &= 1 \text{ when }x=0\\\\ \frac{\dif^2 }{\dif x^2}\sin x &= -\sin x \\ &= 0 \text{ when }x=0\\\\ \frac{\dif^3 }{\dif x^3}\sin x &= -\cos x \\ &= -1 \text{ when }x=0\\\\ \frac{\dif^4 }{\dif x^4}\sin x &= \sin x \\ &= 0 \text{ when }x=0 \end{align*} ... and so on. The derivatives go through the pattern $0,1,0,-1,\ldots$ forever.
We know from our work on $\e^x$ that the $n$th derivative of the polynomial evaluated at $x=0$ will be $n!$ times the $n$th coefficient. So to find the coefficients we need to divide the derivatives by $n!$: \begin{align*} \sin x &= \frac{0}{0!} + \frac{1}{1!}x + \frac{0}{2!}x^2 + \frac{-1}{3!}x^3 + \frac{0}{4!}x^4 + \frac{1}{5!}x^5 + \cdots\\ &= x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots\\ &= \sum_{n = 0}^\infty \frac{(-1)^n x^{2n+1}}{(2n+1)!} \end{align*}
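The sine series is just as easy to test numerically; this sketch (with a made-up helper name) sums $\sum(-1)^n x^{2n+1}/(2n+1)!$ and compares against the library value:

```python
import math

def sin_taylor(x, terms):
    """Partial sum of sin x = sum of (-1)^n x^(2n+1) / (2n+1)!."""
    return sum((-1)**n * x**(2*n + 1) / math.factorial(2*n + 1)
               for n in range(terms))

# Successive odd-degree approximations to sin 1
for terms in (1, 2, 3, 4):
    print(terms, sin_taylor(1.0, terms))
print(math.sin(1.0))
```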
Infinite series for sine and other trigonometric functions were found much earlier than for $\e^x$, by Madhava of Sangamagrama around the fourteenth century.
The following graphs show how the approximations to sine get better and better as we add more terms to the polynomial: the cubic, $x - x^3/3!$, is a pretty good approximation up to about $x=1$, then goes too low. The quintic works well up to about $x=2$, then goes too high. And the degree-$7$ polynomial works up to about $x=3$. Of course, for sine we don’t really need anything beyond $\pi/2$, because we can use the sine of the principal angle (see Section 11.4.3: The Unit Circle) to work out the sine of any angle beyond this range. Many calculators instead use the CORDIC algorithm to work out sine (see Question 11.4.29).
Let’s work out a general formula for finding the Taylor series of any function.
We have a function $f(x)$, and we can work out $f(0)$, $f'(0)$, $f''(0)$, and so on. We want to find a polynomial \[g(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \cdots\] that matches $f(x)$ in value and (higher) derivatives when $x=0$. In other words, we want $g^{(n)}(0) = f^{(n)}(0)$ for $n=0,1,2,\ldots$. Find the coefficients of the polynomial (write them in terms of $f(0)$, $f'(0)$, and so on). Show hint
Back when we were working on $\e^x$ we found that the $n$th derivative of $g(x)$, evaluated at $x=0$, was given by $g^{(n)}(0) = n!a_n$. We want $g^{(n)}(0)$ to be equal to $f^{(n)}(0)$, so all we need to do is choose $a_n = f^{(n)}(0)/n!$.
This leads to: \begin{align*} g(x) &= a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \cdots\\ &= \frac{f^{(0)}(0)}{0!} + \frac{f^{(1)}(0) x}{1!} + \frac{f^{(2)}(0) x^2}{2!} + \frac{f^{(3)}(0) x^3}{3!} + \cdots\\ &= f(0) + f'(0) x + \frac{f''(0) x^2}{2!} + \frac{f'''(0) x^3}{3!} + \cdots\\ &= \sum_{n=0}^\infty \frac{f^{(n)}(0)x^n}{n!} \end{align*}
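This general formula is easy to turn into code. The sketch below (the function name is my own) takes a list of derivative values $f(0), f'(0), f''(0), \ldots$ and evaluates the corresponding partial Maclaurin polynomial:

```python
import math

def maclaurin(derivs_at_0, x):
    """Evaluate sum of f^(n)(0) x^n / n! from the given derivative values."""
    return sum(d * x**n / math.factorial(n)
               for n, d in enumerate(derivs_at_0))

# For e^x, every derivative at 0 is 1:
print(maclaurin([1] * 10, 0.1))
# For sin x, the derivatives at 0 cycle through 0, 1, 0, -1:
print(maclaurin([0, 1, 0, -1, 0, 1, 0, -1], 1.0))
```

Feeding in the derivative patterns found earlier reproduces the $\e^x$ and $\sin x$ approximations from the previous questions.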
What if we want to find the Taylor series of $\ln x$, which is undefined at $x=0$? So far we’ve been making the Taylor series match $f(0)$, $f'(0)$, and so on; in other words, we’ve been focusing on the point $x=0$. We don’t have to do that; a Taylor series can be found around any point. We might instead want the Taylor series to have a value of $f(a)$ when $x=a$ for some number $a$, and a derivative of $f'(a)$, a second derivative of $f''(a)$, and so on.
How can we do this? If we use \[g(x) = f(a) + f'(a)x + \frac{f''(a)}{2!}x^2 + \frac{f'''(a)}{3!}x^3 + \cdots\] then $g(0) = f(a)$, $g'(0) = f'(a)$, and so on, but what we really want is $g(a) = f(a)$, $g'(a) = f'(a)$, and so on. How can we make it work? Show hint
Just like when we want a graph to shift $a$ units to the right, we need to replace $x$ with $x-a$. So what we need is: \[g(x) = f(a) + f'(a)(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \frac{f'''(a)}{3!}(x-a)^3 + \cdots\] If you like sigma notation, you can write it as \[\sum_{n=0}^\infty \frac{f^{(n)}(a)(x-a)^n}{n!}\]
Let’s use this result to explore the Taylor series of $f(x) = \ln x$.
Find the first $4$ terms of the Taylor series around $x=1$. (Technically it will be only $3$ terms, because the constant term is $0$). Show answer
First we need to differentiate $\ln x$ several times so we can find $f'(1)$, $f''(1)$, and $f'''(1)$: \begin{align*} f'(x) &= \frac{1}{x}\\ f''(x) &= -\frac{1}{x^2}\\ f'''(x) &= \frac{2}{x^3} \end{align*} This gives: \begin{align*} f(1) &= \ln 1 = 0\\ f'(1) &= 1\\ f''(1) &= -1\\ f'''(1) &= 2 \end{align*} \begin{align*} g(x) &= 0 + 1(x-1) -\frac{1}{2!}(x-1)^2 + \frac{2}{3!}(x-1)^3 + \cdots\\ &= (x-1) - \frac{1}{2}(x-1)^2 + \frac{1}{3}(x-1)^3 + \cdots \end{align*}
Find an expression for $f^{(n)}(1)$. Show hint
So far we’ve found the derivatives up to $f'''(x) = 2/x^3$. Let’s keep going: \begin{align*} f^{(4)}(x) &= -\frac{3\times2}{x^4}\\ f^{(5)}(x) &= \frac{4\times3\times2}{x^5}\\ f^{(6)}(x) &= -\frac{5\times4\times3\times2}{x^6} \end{align*} Evaluated at $x=1$, this gives: \begin{align*} f^{(4)}(1) &= -3\times2\\ f^{(5)}(1) &= 4\times3\times2 = 4!\\ f^{(6)}(1) &= -5\times4\times3\times2 = -5! \end{align*} Now we start to see a pattern: $f^{(n)}(1) = (-1)^{n-1} (n-1)!$. (You may have chosen to write it as $(-1)^{n+1}(n-1)!$ or $-(-1)^n(n-1)!$, which is perfectly fine and comes to the same thing, but I preferred the repetition of $n-1$.)
If we’d like this to work for $n=0$ then we’ll have to use piecewise notation: \[f^{(n)}(1) = \left\{\begin{array}{ll}0&\text{for }n=0\\(-1)^{n-1}(n-1)!&\text{for }n\gt 0\end{array}\right.\]
Hence state the Taylor series of $\ln x$ about $x=1$, using sigma notation. Show answer
\begin{align*} \sum_{n=0}^\infty \frac{f^{(n)}(1)(x-1)^n}{n!} &= \sum_{n=1}^{\infty} \frac{(-1)^{n-1}(n-1)!(x-1)^n}{n!}\\ &= \sum_{n=1}^{\infty} \frac{(-1)^{n-1}(x-1)^n}{n} \end{align*} (The summation starts at $n=1$ in this case since the $n=0$ term is zero.)
Hence write $\ln 2$ as an infinite series. Show answer
\[\sum_{n=1}^{\infty} \frac{(-1)^{n-1}(2-1)^n}{n} = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots\]
Write $\ln 3$ as an infinite series. Do you think this series will converge? Show hint
\[2 - \frac{4}{2} + \frac{8}{3} - \frac{16}{4} + \frac{32}{5} - \frac{64}{6} + \cdots\] The terms keep getting bigger in size, so this series diverges and can’t help us work out $\ln 3$.
For what values of $x$ do you think the series will converge? Show hint
Informally, because of the $(x-1)^n$ part of the terms, if $\abs{x-1}\gt 1$ then the terms keep growing in size, so the series can’t converge.
For a more rigorous argument, we can do a ratio test: \begin{align*} \lim_{n\to\infty} \abs{\frac{(-1)^n(x-1)^{n+1}}{n+1} \div \frac{(-1)^{n-1}(x-1)^n}{n}} &= \lim_{n\to\infty} \abs{\frac{-n}{n+1}(x-1)}\\ &= \abs{x-1} \end{align*} If $\abs{x-1}\lt 1$ then the series will converge, so it converges when $0\lt x\lt 2$. At $x=2$ it also converges (by the alternating series test), while at $x=0$ it becomes the negative of the harmonic series, which diverges. So the series converges for $0\lt x\le 2$.
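We can watch the ratio test's prediction play out numerically; this little sketch compares term sizes inside and outside the interval of convergence:

```python
def term(x, n):
    # Absolute value of the nth term of the ln-series about x = 1.
    return abs(x - 1)**n / n

# Inside the interval (|x - 1| < 1) the terms shrink towards zero...
print([round(term(1.5, n), 8) for n in (1, 5, 10, 20)])
# ...but at x = 3 (|x - 1| = 2) they blow up, so the series diverges.
print([round(term(3, n), 1) for n in (1, 5, 10, 20)])
```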
How might a calculator find $\ln 3$ using only addition, subtraction, multiplication, and division? Show hint
We could find $\ln \frac{1}{3}$ using Taylor series, then take its negative to get $\ln 3$. Or we could find $\ln\frac{3}{\e}$, which is $\ln 3 - \ln\e = \ln3 - 1$.
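Here is a sketch of the first approach: $x=\frac{1}{3}$ lies inside the interval of convergence, and $\ln 3 = -\ln\frac{1}{3}$. (The helper name is my own.)

```python
import math

def ln_series(x, terms):
    # Taylor series for ln x about x = 1; converges for 0 < x <= 2.
    return sum((-1)**(n - 1) * (x - 1)**n / n for n in range(1, terms + 1))

# x = 1/3 is inside the interval of convergence, so the series works,
# and negating the result gives ln 3.
ln3 = -ln_series(1/3, 100)
print(ln3, math.log(3))
```

Because $\abs{\frac{1}{3}-1}=\frac{2}{3}\lt 1$, the terms shrink geometrically and a hundred terms are already far more than enough.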
Although $\e^x$ is equal to its Taylor series for all values of $x$, this is not true for every function. The Taylor series can diverge for some values of $x$.
Remember:
The Taylor series of $f$ at $a$:
\[f(a) + f'(a)(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \frac{f'''(a)}{3!}(x-a)^3 + \cdots\]
or written using sigma notation:
\[\sum_{n=0}^\infty \frac{f^{(n)}(a)(x-a)^n}{n!}\]
When $a=0$, it is also known as a Maclaurin series:
\[\sum_{n=0}^\infty \frac{f^{(n)}(0)}{n!}x^n = f(0) + f'(0)x + \frac{f''(0)}{2!}x^2 + \frac{f'''(0)}{3!}x^3 + \cdots\]
Taylor series are very useful in physics and engineering, because they allow complicated functions to be approximated by simple polynomials. If we’re interested in values of $f(x)$ when $x$ is very close to $a$ then we usually only need two or three terms of the Taylor series to get a good approximation.
Classical mechanics says that kinetic energy is $\frac{1}{2}mv^2$ where $m$ is mass and $v$ is speed, but special relativity says that kinetic energy is $\gamma(v) mc^2 - mc^2$ where $c$ is the speed of light, and \[\gamma(v) = \frac{1}{\sqrt{1-\frac{v^2}{c^2}}}.\] Show that the relativistic formula is approximately the same as the classical one when the speed is slow ($v\approx 0$). Show hint
\begin{align*} \gamma(v) &= \left(1-\frac{v^2}{c^2}\right)^{-1/2}\\ \gamma'(v) &= -\frac{1}{2} \left(1-\frac{v^2}{c^2}\right)^{-3/2} \frac{-2v}{c^2}\\ &= \frac{v}{c^2\left(1-\frac{v^2}{c^2}\right)^{3/2}} \end{align*} This gives $\gamma(0) = 1$ and $\gamma'(0) = 0$, so all we have so far of the Maclaurin series is $\gamma(v)\approx 1$. We’ll need to find the next term to get a better approximation: \begin{align*} \gamma''(v) &= \frac{1c^2\left(1-\frac{v^2}{c^2}\right)^{3/2} - vc^2\frac{3}{2}\left(1-\frac{v^2}{c^2}\right)^{1/2}\frac{-2v}{c^2}}{c^4\left(1-\frac{v^2}{c^2}\right)^{3}}\\ &= \frac{c^2\left(1-\frac{v^2}{c^2}\right)^{3/2} + 3v^2\left(1-\frac{v^2}{c^2}\right)^{1/2}}{c^4\left(1-\frac{v^2}{c^2}\right)^{3}}\\ \gamma''(0) &= \frac{c^2 + 0}{c^4}\\ &= \frac{1}{c^2}\\ \\ \gamma(v) &\approx \gamma(0) + \gamma'(0)v + \frac{\gamma''(0)}{2!}v^2\\ &\approx 1 + 0v + \frac{1/c^2}{2}v^2\\ &\approx 1 + \frac{v^2}{2c^2} \end{align*} Now we can substitute this into the relativistic formula for kinetic energy: \begin{align*} \gamma(v)mc^2 - mc^2 &\approx \left(1 + \frac{v^2}{2c^2}\right) mc^2 - mc^2\\ &\approx mc^2 + \frac{mv^2c^2}{2c^2} - mc^2\\ &\approx \frac{1}{2} m v^2 \end{align*}
A less obvious, but neater, way to show this would be to define \[f(x) = \frac{1}{\sqrt{1-x}}\] and find its Maclaurin series. Then that would give us the Maclaurin series for $\gamma(v)$ if we just substitute $x=\frac{v^2}{c^2}$. In this case we only need two terms of the series to get a useful approximation: \begin{align*} f(0) &= 1\\ f'(x) &= \frac{1}{2}(1-x)^{-3/2}\\ f'(0) &= \frac{1}{2}\\ f(x) &\approx 1 + \frac{1}{2}x\\ \gamma(v) &= f\left(\frac{v^2}{c^2}\right)\\ &\approx 1 + \frac{1}{2}\times\frac{v^2}{c^2}\\ &\approx 1 + \frac{v^2}{2c^2} \end{align*} So this method gives us the same answer as before.
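A quick numerical sanity check (with a made-up test mass and speed) shows the two-term approximation and the exact $\gamma$ agreeing closely when $v$ is much smaller than $c$:

```python
import math

c = 299_792_458.0  # speed of light in m/s

def gamma_exact(v):
    # The exact Lorentz factor 1 / sqrt(1 - v^2/c^2).
    return 1.0 / math.sqrt(1.0 - v**2 / c**2)

def gamma_approx(v):
    # Two-term Maclaurin approximation: gamma ~ 1 + v^2 / (2 c^2).
    return 1.0 + v**2 / (2.0 * c**2)

v = 3000.0  # 3 km/s, a hypothetical fast spacecraft (v/c = 1e-5)
m = 1000.0  # kg, an arbitrary test mass

ke_classical = 0.5 * m * v**2
ke_relativistic = (gamma_exact(v) - 1.0) * m * c**2
print(ke_classical, ke_relativistic)
```

At these speeds the two kinetic energies agree to within a tiny fraction of a percent, exactly as the Taylor-series argument predicts.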