## 15.5 Taylor Series

Now we come to a technique that’s very important in physics and engineering, and can be used to calculate logarithms, exponentials, and trigonometric functions to any desired precision.

Before we start I’d like to refresh your memory on some notation that will be used in this section. Factorial notation, for example $5! = 5\times4\times3\times2\times1 = 120$, is covered in Section 13.3: Factorials.

Sigma notation, for example $\sum_{n=3}^6 n^2 = 3^2 + 4^2 + 5^2 + 6^2$, is covered in Section 7.7: Sequences.

I’ll also be using Lagrange’s notation for higher derivatives: for example, $f^{(4)}(x)$ means the fourth derivative of $f(x)$, i.e. the derivative of $f'''(x)$ (see Section 15.3.5: Higher Derivatives). For the sake of simplicity, I’ll be using $f^{(0)}(x)$ to mean $f(x)$ itself, which could be considered the ‘zeroth’ derivative.

Suppose we want an approximation for $e^x$ for small values of $x$. Imagine that we want to find $e^{0.1}$ for example, and calculators and log tables haven’t been invented yet. First of all, we know it’s approximately $1$, because $e^0 = 1$. Already that’s a pretty good approximation (the real value is about $1.10517$) but how can we make it better?

##### Question 15.5.1

What if we approximate $e^x$ with a linear function $a+bx$, choosing $a$ and $b$ so that the linear function is tangent to $e^x$ at $x=0$? In other words, we want the values of the two functions to match when $x=0$ (so that they pass through the same point), and we want their derivatives to match there too (so that they have the same slope). What would $a$ and $b$ be? What is our approximation for $e^{0.1}$ now?

##### Question 15.5.2

Now suppose we want an even better approximation, so that not only the values and first derivatives match when $x=0$, but also the second derivatives. For this we choose a quadratic function, $a+bx+cx^2$. Find $a$, $b$, and $c$. What is our new approximation for $e^{0.1}$?

##### Question 15.5.3

What if we now decide we also want the third, fourth, and fifth derivatives to match those of $f(x)=e^x$ when $x=0$, so we use a quintic function $g(x) = a_0 + a_1x + a_2x^2 + a_3x^3 + a_4x^4 + a_5x^5$? In other words, we want $f(0)=g(0)$, $f'(0)=g'(0)$, and $f''(0) = g''(0)$ as before, but also $f'''(0)=g'''(0)$, $f^{(4)}(0) = g^{(4)}(0)$, and $f^{(5)}(0) = g^{(5)}(0)$. Find the coefficients $a_0$ to $a_5$.

##### Question 15.5.4

We could continue this process indefinitely. Write $e^x$ as a polynomial of infinite degree (using sigma notation).

What we just found is called the Taylor series for $e^x$ (or the Maclaurin series, since it’s around $x=0$). Many mathematicians contributed to the invention of Taylor series, including Brook Taylor, James Gregory, Colin Maclaurin, and Madhava of Sangamagrama, but Taylor was the first to write down the general method.

A Taylor series can be found for any function we can keep differentiating, but it doesn’t always behave nicely (the series may diverge for some values of $x$ – see Question 15.5.8 below). In the case of $e^x$ it always converges, though, and we can even use this Taylor series as an alternative definition of $e^x$ and say that they are the same thing.

This is the basic idea that calculators use to find $e^x$: they use as many terms of the Taylor series as needed to get the desired number of significant figures. (To find something like $e^{2.3}$ they might calculate $e\times e\times e^{0.3}$, using the Taylor series only for $e^{0.3}$, rather than apply the Taylor series directly with $x=2.3$, which would take longer to converge.)
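As a rough sketch of this idea in Python (my own illustration, not how any particular calculator actually implements it), the partial sums of $\sum x^n/n!$ home in on $e^x$ very quickly when $x$ is small:

```python
import math

def exp_taylor(x, terms):
    """Partial sum of the Maclaurin series for e^x: sum of x^n / n! for n < terms."""
    total = 0.0
    term = 1.0  # current term, x^n / n!, starting with n = 0
    for n in range(terms):
        total += term
        term *= x / (n + 1)  # x^(n+1)/(n+1)! = (x^n/n!) * x/(n+1)
    return total

# For small x only a handful of terms are needed
print(exp_taylor(0.3, 8), math.exp(0.3))  # both approximately 1.3498588
```

Note that each term is obtained from the previous one with a single multiplication and division, so no factorials or powers need to be computed from scratch.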

##### Question 15.5.5

Let’s apply the same idea to $\sin x$. Write down a polynomial of infinite degree with the same value and (higher) derivatives as $\sin x$ when $x=0$.

Infinite series for sine and other trigonometric functions were found much earlier than for $e^x$, by Madhava of Sangamagrama around the fourteenth century.

We can see on the following graphs how the approximations to sine improve as we add more terms to the polynomial: the cubic, $x - x^3/3!$, is a pretty good approximation up to about $x=1$, then goes too low; the quintic works well up to about $x=2$, then goes too high; and the degree-$7$ polynomial works up to about $x=3$. Of course, for sine we don’t really need anything beyond $\pi/2$, because we can use the sine of the principal angle (see Section 11.4.3: The Unit Circle) to work out the sine of any angle beyond this range. Many calculators instead use the CORDIC algorithm to work out sine (see Question 11.4.29).
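The same comparison can be made numerically rather than graphically. A short Python sketch (my own illustration) evaluating the odd-degree Maclaurin polynomials for sine:

```python
import math

def sin_taylor(x, degree):
    """Maclaurin polynomial for sin x up to the given odd degree:
    x - x^3/3! + x^5/5! - ..."""
    total = 0.0
    term = x  # current term, starting with x^1 / 1!
    n = 1
    while n <= degree:
        total += term
        term *= -x * x / ((n + 1) * (n + 2))  # step from degree n to degree n + 2
        n += 2
    return total

# The cubic drifts low past x = 1; higher degrees hold on for longer
for x in (1.0, 2.0, 3.0):
    print(x, sin_taylor(x, 3), sin_taylor(x, 7), math.sin(x))
```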

Let’s work out a general formula for finding the Taylor series of any function.

##### Question 15.5.6

We have a function $f(x)$, and we can work out $f(0)$, $f'(0)$, $f''(0)$, and so on. We want to find a polynomial $g(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \cdots$ that matches $f(x)$ in value and (higher) derivatives when $x=0$. In other words, we want $g^{(n)}(0) = f^{(n)}(0)$ for $n=0,1,2,\ldots$. Find the coefficients of the polynomial (write them in terms of $f(0)$, $f'(0)$, and so on).

What if we want to find the Taylor series of $\ln x$, which is undefined at $x=0$? So far we’ve been making the Taylor series match $f(0)$, $f'(0)$, and so on; in other words, we’ve been focusing on the point $x=0$. We don’t have to do that: a Taylor series can be found around any point. We might instead want the Taylor series to have the value $f(a)$ when $x=a$ for some number $a$, a derivative of $f'(a)$, a second derivative of $f''(a)$, and so on.

##### Question 15.5.7

How can we do this? If we use $g(x) = f(a) + f'(a)x + \frac{f''(a)}{2!}x^2 + \frac{f'''(a)}{3!}x^3 + \cdots$ then $g(0) = f(a)$, $g'(0) = f'(a)$, and so on, but what we really want is $g(a) = f(a)$, $g'(a) = f'(a)$, and so on. How can we make it work?

##### Question 15.5.8

Let’s use this result to explore the Taylor series of $f(x) = \ln x$.

1. Find the first $4$ terms of the Taylor series around $x=1$. (Technically it will be only $3$ terms, because the constant term is $0$.)

2. Find an expression for $f^{(n)}(1)$.

3. Hence state the Taylor series of $\ln x$ about $x=1$, using sigma notation.

4. Hence write $\ln 2$ as an infinite series.

5. Write $\ln 3$ as an infinite series. Do you think this series will converge?

6. For what values of $x$ do you think the series will converge?

7. How might a calculator find $\ln 3$ using only addition, subtraction, multiplication, and division?

Although $e^x$ is equal to its Taylor series for all values of $x$, this is not true for every function: the Taylor series can diverge for some values of $x$.
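We can watch this happen numerically. Here is a short Python sketch (my own illustration) using the Taylor series of $\ln x$ about $x=1$, $\sum_{n=1}^\infty \frac{(-1)^{n+1}(x-1)^n}{n}$: at $x=2$ the partial sums converge (slowly) to $\ln 2$, but at $x=3$ they diverge.

```python
import math

def ln_series(x, terms):
    """Partial sum of the Taylor series of ln x about x = 1:
    sum over n >= 1 of (-1)^(n+1) * (x - 1)^n / n."""
    u = x - 1
    return sum((-1) ** (n + 1) * u ** n / n for n in range(1, terms + 1))

# At x = 2 the partial sums creep slowly towards ln 2 = 0.6931...
print(ln_series(2, 1000), math.log(2))
# ...but at x = 3 the terms 2^n / n blow up and the sums never settle
print(ln_series(3, 10), ln_series(3, 20))
```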

Remember:
The Taylor series of $f$ at $a$ is $f(a) + f'(a)(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \frac{f'''(a)}{3!}(x-a)^3 + \cdots$, or, written using sigma notation, $\sum_{n=0}^\infty \frac{f^{(n)}(a)}{n!}(x-a)^n$. When $a=0$, it is also known as a Maclaurin series: $\sum_{n=0}^\infty \frac{f^{(n)}(0)}{n!}x^n = f(0) + f'(0)x + \frac{f''(0)}{2!}x^2 + \frac{f'''(0)}{3!}x^3 + \cdots$
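The general formula translates directly into a small evaluator. This Python sketch (my own, assuming we can supply the list of derivative values at $a$) computes the partial sum $\sum_{n=0}^{N} \frac{f^{(n)}(a)}{n!}(x-a)^n$:

```python
import math

def taylor_eval(derivs_at_a, a, x):
    """Evaluate the partial sum f(a) + f'(a)(x-a) + f''(a)(x-a)^2/2! + ...,
    given the list of derivative values [f(a), f'(a), f''(a), ...]."""
    total = 0.0
    power = 1.0      # (x - a)^n
    factorial = 1.0  # n!
    for n, d in enumerate(derivs_at_a):
        if n > 0:
            power *= x - a
            factorial *= n
        total += d * power / factorial
    return total

# For f(x) = e^x about a = 1, every derivative at a equals e
derivs = [math.e] * 15
print(taylor_eval(derivs, 1.0, 1.5), math.exp(1.5))  # both approximately 4.48169
```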

Taylor series are very useful in physics and engineering, because they allow complicated functions to be approximated by simple polynomials. If we’re interested in values of $f(x)$ when $x$ is very close to $a$ then we usually only need two or three terms of the Taylor series to get a good approximation.

##### Question 15.5.9

Classical mechanics says that kinetic energy is $\frac{1}{2}mv^2$, where $m$ is mass and $v$ is speed, but special relativity says that kinetic energy is $\gamma(v) mc^2 - mc^2$, where $c$ is the speed of light and $\gamma(v) = \frac{1}{\sqrt{1-\frac{v^2}{c^2}}}$. Show that the relativistic formula is approximately the same as the classical one when the speed is slow ($v\approx 0$).
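This isn’t the answer (the question asks for a Taylor expansion), but a quick numerical sanity check in Python suggests what we should expect to find: at speeds small compared with $c$, the two formulas agree closely.

```python
def kinetic_classical(m, v):
    """Classical kinetic energy, (1/2) m v^2."""
    return 0.5 * m * v ** 2

def kinetic_relativistic(m, v, c=299_792_458.0):
    """Relativistic kinetic energy, gamma(v) m c^2 - m c^2."""
    gamma = 1.0 / (1.0 - (v / c) ** 2) ** 0.5
    return gamma * m * c ** 2 - m * c ** 2

# 1 kg moving at 300 km/s -- fast, but still only 0.1% of c
m, v = 1.0, 3.0e5
print(kinetic_classical(m, v))     # 45000000000.0 joules (4.5e10)
print(kinetic_relativistic(m, v))  # agrees to about six significant figures
```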