Trying out MathJax

David Ding

October 4, 2020

Hello again my fellow enthusiasts! Today I want to tell you about a piece of good news. I finally discovered a way to embed LaTeX directly into my blog! Previously, I had to use an online LaTeX editor, input my LaTeX code there, and then export and download the equations as images to embed into my articles. As you can imagine, that is a lot of steps that definitely should have alternatives in 2020. And indeed, the folks at MathJax have done a wonderful job providing exactly that! MathJax is a JavaScript library that renders LaTeX directly on an HTML page, and all I had to do was include a couple of scripts; from then on I could type LaTeX directly in my posts without any added work. What I mean is that the LaTeX goes directly into the inner text of HTML elements, without any extra tags, attributes, or nodes. No violation of the HTML5 standard, no special syntax, just LaTeX as HTML. Exactly what I have been looking for!

Therefore, this post is all about me trying out this neat tool that will undoubtedly make posting much more efficient in the future. Of course, I wouldn't do it without having a central theme for this post. We will continue our journey from last time, where we explored how infinite series and calculus go hand-in-hand, and we will do it while I get the hang of MathJax. Before I begin, however, I would just like to take the time to thank the folks over at MathJax for creating such a wonderful tool, and I encourage all my readers to take some time to look over their project site and donate, if you wish. I will also change all of my inline math text from previous posts into MathJax in a gradual process (please visit my blog often!).

Power Series and Analytic Functions

You cannot talk about infinite series without talking about power series, because the power series is patterned enough to be analyzed, but not so much that it becomes boring. Allow me to explain. First, a power series is of the form: \begin{equation} f(x) = \sum_{n=0}^\infty c_n(x-a)^n \end{equation}

A lot of things are happening in the above equation, so let me explain from left to right. First of all, the power series can be a function of \(x\). What I mean by that is that you can pick any value for \(x\), say \(x = 0\), and evaluate the power series on the right by substituting \(x\) with 0. The converse is not true--not all functions can be represented by power series. The functions that can are called analytic functions. Next, the power series is an infinite series that can be thought of as a generalized version of the geometric series. In a geometric series, every term has the same coefficient. In a power series, however, the coefficients can vary, and so we denote the \(n\)th term's coefficient as \(c_n\). Finally, we have the actual "powers" in the power series in the \((x-a)^n\) part. The parameter \(a\) denotes the value about which we are "centering" our input, \(x\). Don't worry about \(a\) too much here. In many cases, \(a\) is taken to be 0, and so we just have powers of \(x\) in the series.

This brings me to my previous point. The pattern in the power series is that the powers of \((x-a)\) increment by 1 with \(n\), just like in a geometric series. However, unlike a geometric series, each term now has a distinct coefficient, \(c_n\). We have a geometric series pattern made more exciting by the coefficients. This makes power series very useful in analyzing functions. For one, remember how we saw that projections are just breaking apart a complicated function into individual components so we can analyze things component by component and then put them back together? Well, we can do the same for a power series, provided it has a positive radius of convergence. This includes integration and differentiation!
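Since we will be evaluating quite a few power series numerically in this post, here is a minimal sketch (in Python, with function names of my own choosing, not from any particular library) of what evaluating a power series means in practice: truncate the infinite sum and add up the first several terms.

```python
# Minimal sketch: approximate f(x) = sum_{n>=0} c_n * (x - a)^n by a
# partial sum. The name and sample coefficients are illustrative only.

def power_series_partial_sum(coeff, x, a=0.0, terms=50):
    """Sum the first `terms` terms of sum_n coeff(n) * (x - a)**n."""
    return sum(coeff(n) * (x - a) ** n for n in range(terms))

# Example: c_n = 1 for all n gives the geometric series, which should
# approach 1 / (1 - x) for |x| < 1.
print(power_series_partial_sum(lambda n: 1.0, x=0.5))  # ~2.0
print(1 / (1 - 0.5))                                   # 2.0
```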

Radius of Convergence

If you look at the power series in general, you might get a sense that the series might not converge. You would be correct. For one, we are adding up an infinite number of terms, and each term contains \(x-a\) raised to higher and higher powers. Therefore, it seems that you are adding larger and larger numbers an infinite number of times. Surely it cannot converge?! Well, if the value of \(|x-a|\) is small enough, then as the powers increase, the value of \((x-a)^n\) goes to zero and the series has a chance at converging. But how small is small enough? The answer: \(R\), the radius of convergence.

In determining whether or not a series converges, there is a very useful test that usually serves as the first, and if we are lucky, the only step in finding the answer. It is called the ratio test. For the ratio test, we look at the ratios of consecutive terms in the series and see how those ratios behave in the limit, hence the name "ratio test". In essence, for a series: \begin{equation} \sum_{n=0}^\infty p_n \end{equation} we look at the following limit: \begin{equation} L = \lim\limits_{n \to \infty} \left | \frac{p_{n+1}}{p_n} \right | \end{equation} and arrive at three scenarios (a quick numerical sketch follows the list below):

  1. If \(L < 1\), then the series converges.
  2. If \(L > 1\), then the series diverges.
  3. If \(L = 1\), then the result is inconclusive. In this case, try another method like the comparison test.
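To make the ratio test concrete, here is a small numerical sketch, assuming we are content to sample the ratio at one large index as a stand-in for the limit (the term functions are illustrative examples of each scenario):

```python
import math

# Sample |p_{n+1} / p_n| at a moderately large index n as a stand-in
# for the limit L in the ratio test. Illustrative sketch only.

def ratio_estimate(term, n=100):
    return abs(term(n + 1) / term(n))

print(ratio_estimate(lambda n: 1 / math.factorial(n)))  # ~0.01 -> L = 0 < 1: converges
print(ratio_estimate(lambda n: 2.0 ** n))               # 2.0   -> L = 2 > 1: diverges
print(ratio_estimate(lambda n: 1 / (n + 1)))            # ~0.99 -> L = 1: inconclusive
```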

We are now going to focus on the first two scenarios as clues to find out about the radius of convergence of power series. Applying the ratio test to our power series, we find that the power series converges if: \begin{align} \lim\limits_{n \to \infty} \left | \frac{c_{n+1}(x-a)^{n+1}}{c_n(x-a)^n} \right | &< 1 \\ \lim\limits_{n \to \infty} \left | \frac{c_{n+1}(x-a)}{c_n} \right | &< 1 \\ |x-a| \lim\limits_{n \to \infty} \left | \frac{c_{n+1}}{c_n} \right | &< 1 \\ |x-a| &< \lim\limits_{n \to \infty} \left | \frac{c_n}{c_{n+1}} \right | \\ |x-a| &< R \end{align} In other words: \begin{equation} \boxed{R = \lim\limits_{n \to \infty} \left | \frac{c_n}{c_{n+1}} \right |} \end{equation}

There we go. The radius of convergence \(R\) is simply the value against which \(|x-a|\) is tested for power series convergence via the ratio test (see the sketch after this list):

  1. If \(|x-a| < R\), then the series converges.
  2. If \(|x-a| > R\), then the series diverges.
  3. If \(|x-a| = R\), then the result is inconclusive. In this case, actually plug \(\pm R\) for \(x-a\) into the series and see what happens.
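The same sampling trick gives a quick numerical estimate of \(R\) itself. A sketch with illustrative coefficient choices of my own:

```python
# Estimate R = lim |c_n / c_{n+1}| by sampling the ratio at a large
# index. Illustrative sketch; coefficient functions are my own examples.

def radius_estimate(coeff, n=100):
    return abs(coeff(n) / coeff(n + 1))

print(radius_estimate(lambda n: 1.0))          # geometric series: R = 1
print(radius_estimate(lambda n: 1 / (n + 1)))  # c_n = 1/(n+1): ratio -> 1, so R = 1
print(radius_estimate(lambda n: 3.0 ** n))     # c_n = 3^n: R = 1/3
```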

And remember, if \(R > 0\), i.e. the power series has a positive radius of convergence, then we can differentiate and integrate the series term by term, and those operations are equivalent to differentiating and integrating the analytic function \(f(x)\) directly. This will come in handy later on.

Taylor Series

One of the most well-known power series is the Taylor series, named after the British mathematician Brook Taylor. What makes the Taylor series special is that it has a specific form for \(c_n\) for the class of analytic functions that are infinitely differentiable. Here, let's do something fun and different. Rather than me showing you what the Taylor series looks like, let us derive it. Remember: \begin{equation} f(x) = \sum_{n=0}^\infty c_n(x-a)^n \end{equation} Our goal is to figure out what the set of \(\{c_n\}\)'s looks like. To do this, we need to isolate each term by picking values for \(x\) strategically. First, please observe that: \begin{equation} f(x) = c_0 + c_1(x-a) + c_2(x-a)^2 + c_3(x-a)^3 + \dots \end{equation} Writing out the first few terms of the power series allows us to see that something special happens at \(x=a\): \begin{align} f(a) &= c_0 + c_1(a-a) + c_2(a-a)^2 + c_3(a-a)^3 + \dots \\ &= c_0 \end{align} since 0 raised to any positive power is 0. What about \(c_1\)? Well, we can't use the expression for \(f(x)\) directly, since \(c_1\) is the coefficient of \(x-a\) and we also have the \(c_0\) term. However, let's see what happens if we take the derivative of our function at \(x=a\): \begin{align} f'(a) &= c_1 + 2c_2(a-a) + 3c_3(a-a)^2 + \dots \\ &= c_1 \end{align} Similarly: \begin{align} f''(a) &= 2c_2 + (2 \cdot 3)c_3(a-a) + (3 \cdot 4)c_4(a-a)^2 + \dots \\ &= 2c_2 \end{align} Do you see a pattern here?

In general, when we take the \(k\)th derivative of \(f(x)\) and evaluate it at \(x=a\), every term with \(n < k\) goes away, since a polynomial of degree less than \(k\) differentiates to zero after \(k\) derivatives, and every term with \(n > k\) goes away, since it retains a power of \((a-a)\). Only the \(c_k\) term remains, such that: \begin{equation} f^{(k)}(a) = k!c_k \end{equation} This means that \begin{equation} c_n = \frac{f^{(n)}(a)}{n!} \end{equation} where \(f^{(0)}(x) = f(x)\) and \(0!\) is defined as 1. This yields the Taylor series for an infinitely differentiable analytic function: \begin{equation} \boxed{f(x) = \sum_{n=0}^\infty \frac{f^{(n)}(a)}{n!} (x-a)^n} \end{equation} If we center the Taylor series about 0, i.e. letting \(a=0\), we get a special case called the Maclaurin series: \begin{equation} f(x) = \sum_{n=0}^\infty \frac{f^{(n)}(0)}{n!} x^n \end{equation} As an example, consider \(f(x) = e^x\): the exponential function's derivative is itself, and at \(x=0\), \(e^x = 1\). This means that: \begin{equation} e^x = \sum_{n=0}^\infty \frac{x^n}{n!} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \dots \end{equation} In particular, for \(x=1\), we have: \begin{equation} e = 2 + \frac{1}{2!} + \frac{1}{3!} + \frac{1}{4!} + \dots \approx 2.7183 \end{equation}
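Before moving on, here is a quick sanity check of that expansion, assuming nothing beyond Python's standard library: the partial sums of \(\sum x^n/n!\) at \(x=1\) should home in on \(e\).

```python
import math

# Illustrative check: partial sums of the Maclaurin series for e^x at
# x = 1 converge to e very quickly, since the factorials grow fast.

def exp_series(x, terms=20):
    return sum(x ** n / math.factorial(n) for n in range(terms))

print(exp_series(1.0))  # 2.7182818284590455
print(math.e)           # 2.718281828459045
```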

Radius of Convergence for Taylor Series

Putting everything together and using our earlier result for \(R\), we can derive the radius of convergence for a Taylor series as follows: \begin{align} R &= \lim\limits_{n \to \infty} \left | \frac{c_n}{c_{n+1}} \right | \\ &= \lim\limits_{n \to \infty} \left | \frac{\frac{f^{(n)}(a)}{n!}}{\frac{f^{(n+1)}(a)}{(n+1)!}} \right | \\ &= \lim\limits_{n \to \infty} \left | \frac{f^{(n)}(a)(n+1)!}{f^{(n+1)}(a)n!} \right | \\ &= \lim\limits_{n \to \infty} \left | (n+1) \frac{f^{(n)}(a)}{f^{(n+1)}(a)} \right | \end{align} For example, for \(f(x) = e^x\) centered about \(a=0\), we have: \begin{align} R &= \lim\limits_{n \to \infty} \left | (n+1) \frac{f^{(n)}(a)}{f^{(n+1)}(a)} \right | \\ &= \lim\limits_{n \to \infty} \left | (n+1) \frac{e^0}{e^0} \right | \\ &= \lim\limits_{n \to \infty} (n+1) \\ &= \infty \end{align} This means that for any value of \(x\), \(e^x\) can be written as a power series via the Taylor series expansion centered about 0.
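An infinite radius of convergence is easy to probe numerically: the same truncated series should reproduce \(e^x\) even for fairly large \(x\), provided we take enough terms. A small illustrative check:

```python
import math

# Because R = infinity, the Maclaurin series for e^x works for any x;
# larger |x| just needs more terms before the factorials win out.

def exp_series(x, terms=60):
    return sum(x ** n / math.factorial(n) for n in range(terms))

for x in (1.0, 5.0, 10.0):
    print(x, exp_series(x), math.exp(x))  # each pair agrees closely
```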

Infinite Geometric Series

The infinite geometric series is just a special case of a power series. To see this, we write the expression out: \begin{equation} f(x) = \sum_{n=0}^\infty ax^n = a + ax + ax^2 + ax^3 + \dots \end{equation} So the relationship is that the above series is a power series centered about 0 with \(c_n = a\) for all \(n = 0, 1, 2, \dots\)

The radius of convergence is therefore: \begin{align} R &= \lim\limits_{n \to \infty} \left | \frac{c_n}{c_{n+1}} \right | \\ &= \lim\limits_{n \to \infty} \left | \frac{a}{a} \right | \\ &= 1 \end{align} So for \(|x| < 1\) the series converges, and for \(|x| > 1\), the series diverges. At \(|x| = 1\), we have two cases: \(x = 1\) and \(x = -1\). Please substitute those two values into the infinite geometric series and see for yourself that the series converges in neither case (the terms do not even go to zero). So the interval of convergence is also \(|x| < 1\).

Using the formula for the partial sums of a geometric series, we see that, for \(|x| < 1\): \begin{equation} \boxed{\sum_{n=0}^\infty ax^n = \lim\limits_{N \to \infty}\frac{a(1-x^{N+1})}{1-x} = \frac{a}{1-x}} \end{equation} We will also make use of the above equation later on.
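A quick numerical check of the closed form, with arbitrary illustrative values of \(a\) and \(x\):

```python
# Illustrative check of sum a*x^n = a / (1 - x) for |x| < 1.
a, x = 3.0, 0.25
print(sum(a * x ** n for n in range(100)))  # ~4.0
print(a / (1 - x))                          # 4.0
```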

Fun Stuff

Let's apply the knowledge about power series we've seen so far to calculus. In my previous post, I showed how infinite series involving \(\pi\) go hand-in-hand with calculus à la Fourier series. This time, I will show other examples of how series and calculus tie together.

Pi Strikes Again!

Show that: \begin{equation} \int_0^\infty \frac{x}{e^x - 1} dx = \frac{\pi^2}{6} \end{equation}

Where do we begin? The integrand does not seem to have an elementary anti-derivative, and in fact, it doesn't. However, that \(\frac{\pi^2}{6}\) looks awfully familiar, because it is the sum of the reciprocals of the squares of the positive integers, which we derived in the last post! So if we somehow introduce an infinite series into our integral, we might get somewhere. Well, the integrand sure looks like the formula for the infinite geometric series, doesn't it? \begin{align} \int_0^\infty \frac{x}{e^x - 1} dx &= \int_0^\infty \frac{xe^{-x}}{1 - e^{-x}} dx \\ &= \int_0^\infty \left(\sum_{n=0}^\infty xe^{-x}e^{-nx}\right)dx \\ &= \int_0^\infty \left(\sum_{n=1}^\infty xe^{-x}e^{-(n-1)x}\right)dx \\ &= \int_0^\infty \left(\sum_{n=1}^\infty xe^{-nx}\right)dx \\ \end{align} Okay! We've gotten somewhere now. Before we continue, we check that \(0 < e^{-x} < 1\) for \(x > 0\), so that the infinite geometric series actually converges. Great, our next move is to swap the integral with the summation. Doing so is equivalent to saying that we wish to analyze the infinite series term by term. This is valid here because the radius of convergence for an infinite geometric series, like the one we've got here, is 1, which is greater than 0. So let's continue (integrating by parts in the second line): \begin{align} \int_0^\infty \left(\sum_{n=1}^\infty xe^{-nx}\right)dx &= \sum_{n=1}^\infty \int_0^\infty xe^{-nx}dx \\ &= \sum_{n=1}^\infty \left[\left.\frac{xe^{-nx}}{-n}\right|_{x=0}^{x=\infty} + \frac{1}{n} \int_0^\infty e^{-nx} dx \right] \\ &= \sum_{n=1}^\infty \frac{1}{n} \int_0^\infty e^{-nx} dx \\ &= \sum_{n=1}^\infty \left.\frac{e^{-nx}}{n^2}\right|_{x=\infty}^{x=0} \\ &= \sum_{n=1}^\infty \frac{1}{n^2} \end{align} Does the last line look familiar to you? \begin{equation} \sum_{n=1}^\infty \frac{1}{n^2} = \frac{\pi^2}{6} \end{equation} As desired. For how we established the last equation, see my previous post.
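For the skeptical, the identity is easy to check numerically. Here is a sketch assuming SciPy is available (its quad routine handles the infinite upper limit, and we guard the \(x=0\) endpoint, where the integrand's limiting value is 1):

```python
import math
from scipy.integrate import quad

# Illustrative numerical check of the boxed identity; assumes SciPy.
def integrand(x):
    # math.expm1(x) computes e^x - 1 accurately for small x.
    return x / math.expm1(x) if x > 0 else 1.0

value, abserr = quad(integrand, 0, math.inf)
print(value)             # ~1.6449340668...
print(math.pi ** 2 / 6)  # 1.6449340668...
```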

Series, Series, Series

What does the following series converge to? \begin{equation} \sum_{n=2}^\infty \sum_{k=2}^\infty \frac{1}{k^n k!} \end{equation} A word on double summation. Double summation, along with double integration, simply means fixing a particular value of the outer summation/integration variable, carrying out the inner summation/integration, and repeating for every value of the outer variable. So, for example, in our expression above, we would start with \(n=2\) and evaluate: \begin{equation} \sum_{k=2}^\infty \frac{1}{k^2 k!} \end{equation} and repeat for \(n=3, 4, 5, \dots\) Alright, let's roll up our sleeves and get to work. Yes, we can switch the order of summation, as long as the series converges; since every term is positive, finding a finite value after the switch justifies the switch itself. So we will swap the summations, making sure that we operate within the intended regions of convergence. \begin{align} \sum_{n=2}^\infty \sum_{k=2}^\infty \frac{1}{k^n k!} &= \sum_{k=2}^\infty \sum_{n=2}^\infty \frac{1}{k^n k!} \\ &= \sum_{k=2}^\infty \frac{1}{k!} \left[\frac{1}{1 - \frac{1}{k}} - \frac{1}{k} - 1 \right]\\ &= \sum_{k=2}^\infty \frac{1}{k!} \left(\frac{1}{k(k-1)}\right) \end{align} Once we switch the order of summation, the inner sum over \(n\) is, for each fixed \(k\), an infinite geometric series; the bracketed expression is its full sum minus the \(n=0\) and \(n=1\) terms. Since \(k\) starts at 2, the common ratio, \(\frac{1}{k}\), never exceeds \(\frac{1}{2}\), which means that the infinite geometric series converges. Following the formula, we arrive at our latest result.

Next we use a classic telescoping series to deal with \(\frac{1}{k(k-1)}\). Please note that \(\frac{1}{k(k-1)} = \frac{1}{k-1} - \frac{1}{k}\), so: \begin{align} \sum_{k=2}^\infty \frac{1}{k(k-1)} &= \frac{1}{2} + \frac{1}{6} + \frac{1}{12} + \frac{1}{20} + \dots \\ &= \left(1 - \frac{1}{2}\right) + \left(\frac{1}{2} - \frac{1}{3}\right) + \left(\frac{1}{3} - \frac{1}{4}\right) + \dots \\ & = 1 \end{align} Regrettably, though, we don't have such a clean telescope in our original expression, but it's still worth writing it out: \begin{align} \sum_{k=2}^\infty \frac{1}{k!} \left(\frac{1}{k(k-1)}\right) &= \frac{1}{2!}\left(1 - \frac{1}{2}\right) + \frac{1}{3!}\left(\frac{1}{2} - \frac{1}{3}\right) + \dots \\ &= \frac{1}{2!} - \frac{1}{2}\left(\frac{1}{2!} - \frac{1}{3!}\right) - \frac{1}{3}\left(\frac{1}{3!} - \frac{1}{4!}\right) - \dots \\ &= \frac{1}{2!} - \frac{1}{2}\left(\frac{3-1}{3!}\right) - \frac{1}{3}\left(\frac{4-1}{4!}\right) - \dots \\ &= \frac{1}{2!} - \frac{1}{3!} - \frac{1}{4!} - \frac{1}{5!} - \dots \\ \end{align} Aha! Now our expression looks much cleaner. If only there were a third series that could help us add up reciprocals of factorials....

Enter Taylor series! We've seen before that: \begin{equation} e = 2 + \frac{1}{2!} + \frac{1}{3!} + \frac{1}{4!} + \dots \end{equation} So that \begin{equation} \frac{1}{3!} + \frac{1}{4!} + \frac{1}{5!} + \dots = e - 2 - \frac{1}{2!} = e - \frac{5}{2} \end{equation} Using this result, let us continue: \begin{align} \frac{1}{2!} - \frac{1}{3!} - \frac{1}{4!} - \frac{1}{5!} - \dots &= \frac{1}{2!} - \left(e - \frac{5}{2}\right) \\ &= \frac{1}{2} - \left(e - \frac{5}{2}\right) \\ &= 3 - e \end{align} So in the end: \begin{equation} \boxed{\sum_{n=2}^\infty \sum_{k=2}^\infty \frac{1}{k^n k!} = 3 - e} \end{equation} A nice relay for a trio of series: infinite geometric series, telescoping series, and Taylor series!
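A brute-force numerical check of the boxed result (the truncation limits below are arbitrary illustrative choices; the terms decay very quickly in both \(k\) and \(n\)):

```python
import math

# Illustrative check: truncate both sums at 40 terms; the tail is
# negligible since terms decay like 1/(k^n * k!).
total = sum(1 / (k ** n * math.factorial(k))
            for n in range(2, 40)
            for k in range(2, 40))
print(total)       # ~0.2817181715...
print(3 - math.e)  # 0.2817181715409549
```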

Infinite Geometric Series...and Harmonic Series?

In our final example, let's take a look at a very interesting infinite series: \begin{equation} \sum_{n=1}^\infty \frac{1}{n2^n} \end{equation} It will help if we write the series out: \begin{equation} \sum_{n=1}^\infty \frac{1}{n2^n} = \frac{1}{2} + \frac{1}{2}\left(\frac{1}{4}\right) + \frac{1}{3}\left(\frac{1}{8}\right) + \frac{1}{4}\left(\frac{1}{16}\right) + \dots \end{equation} Well, well, well...what have we got here? It seems that we have a geometric series with common ratio \(\frac{1}{2}\), except each term is paired with the corresponding one from the harmonic series! The harmonic series diverges, while the infinite geometric series with this common ratio converges. Does the overall series even converge? What is this madness?

Cue George Pólya, the famous Hungarian mathematician and author of How to Solve It, who said that one of the strategies for solving any math problem is to solve the general case and apply the result to the particular question at hand. What we have seems like a power series, because it is a geometric series with varying coefficients. However, power series describe functions in general, so let's generalize our series to a function! \begin{equation} f(x) = \sum_{n=1}^\infty \frac{x^n}{n} \end{equation} Even though we start the summation at 1 instead of 0, the above function is still a power series. (Think of it as \(c_0 = 0\).) For a power series, it's not about how you start, but how you finish, when it comes to whether or not it converges (this is true for infinite series in general). For the above function, it is clear that \(c_n = \frac{1}{n}\) for \(n \geq 1\), so let's calculate the radius of convergence for our power series! \begin{align} R &= \lim\limits_{n \to \infty} \left | \frac{c_n}{c_{n+1}} \right | \\ &= \lim\limits_{n \to \infty} \left | \frac{\frac{1}{n}}{\frac{1}{n+1}} \right | \\ &= \lim\limits_{n \to \infty} \left | \frac{n+1}{n} \right | \\ &= 1 \end{align} So our power series has a radius of convergence of 1. We note that for our original problem: \begin{align} \sum_{n=1}^\infty \frac{1}{n2^n} &= \sum_{n=1}^\infty \frac{\left(\frac{1}{2}\right)^n}{n} \\ &= f\left(\frac{1}{2}\right) \end{align} The original problem is just a special case of the power series evaluated at \(x = \frac{1}{2}\). We also note that \(\frac{1}{2} < 1 = R\), so the original series converges. Finally, since \(R > 0\), we can differentiate and integrate our power series term by term without breaking any underlying mathematical rules.

Going back to the general problem, then, we want to find a closed-form expression for \(f(x)\) when \(|x| < 1\). In order to do so, we need to make a few observations. Recall that \(R > 0\), so we can differentiate term by term, which gives an expression for the derivative of \(f(x)\): \begin{align} f'(x) &= x' + \left(\frac{x^2}{2}\right)' + \left(\frac{x^3}{3}\right)' + \dots \\ &= 1 + \frac{2x}{2} + \frac{3x^2}{3} + \dots \\ &= 1 + x + x^2 + x^3 + \dots \\ &= \sum_{n=0}^\infty x^n \end{align} We got back our infinite geometric series! What's more convenient is that the derivative of \(f(x)\) has the same radius of convergence as \(f(x)\) itself (it's actually a theorem, but we'll take the convenience), so for \(x\) within the radius of convergence, from -1 to 1 exclusive, we can apply the infinite geometric series formula to get: \begin{equation} f'(x) = \frac{1}{1-x} \end{equation} Therefore, we can solve for \(f(x)\) from this differential equation: \begin{align} f(x) &= \int \frac{1}{1-x} dx \\ &= -\log(1-x) + C \\ f(0) &= 0 = -\log(1) + C \\ C &= 0 \end{align} For completeness, let us compute the interval of convergence. For \(x = 1\), plugging this value back into our power series yields the harmonic series, which diverges. So \(x = 1\) is not part of the interval of convergence. However, for \(x = -1\), plugging this value in gives us an alternating series whose terms decrease to 0 in absolute value, so that series converges by the alternating series test. Hence, the interval of convergence is \(-1 \leq x < 1\). (That the closed form below still holds at the endpoint \(x = -1\) follows from Abel's theorem.) \begin{equation} f(x) = \sum_{n=1}^\infty \frac{x^n}{n} = -\log(1-x), \quad -1 \leq x < 1 \end{equation}

Finally, plugging \(x = \frac{1}{2}\) into our result yields: \begin{align} \sum_{n=1}^\infty \frac{1}{n2^n} &= f\left(\frac{1}{2}\right) \\ &= -\log\left(1-\frac{1}{2}\right) \\ &= -\log\left(\frac{1}{2}\right) \\ &= \log(2) \end{align} So, in the end, \begin{equation} \boxed{\sum_{n=1}^\infty \frac{1}{n2^n} = \log(2)} \end{equation}
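And a quick numerical confirmation (an illustrative check with an arbitrary truncation; the terms shrink geometrically, so 60 terms are plenty):

```python
import math

# Illustrative check of sum 1/(n * 2^n) against log(2).
print(sum(1 / (n * 2 ** n) for n in range(1, 60)))  # ~0.6931471805...
print(math.log(2))                                  # 0.6931471805599453
```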

As an added bonus, let's see what \(f(-1)\) evaluates to, since \(x = -1\) is part of the interval of convergence: \begin{align} f(-1) &= \left.-\log(1-x)\right|_{x=-1} \\ &= -\log(1-(-1)) \\ &= -\log(2) \\ f(-1) &= \left.\sum_{n=1}^\infty \frac{x^n}{n}\right|_{x=-1} \\ &= -1 + \frac{1}{2} - \frac{1}{3} + \frac{1}{4} - \frac{1}{5} + \dots \end{align} Multiplying both expressions for \(f(-1)\) by -1, we get: \begin{equation} \boxed{1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5} - \dots = \log(2)} \end{equation} Pretty neat huh?
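One last illustrative check: the alternating harmonic series does converge to \(\log(2)\), though much more slowly, since the error is roughly the size of the first omitted term.

```python
import math

# Partial sums of the alternating harmonic series converge to log(2)
# slowly: the error after N terms is about 1/(2N).
N = 100_000
print(sum((-1) ** (n + 1) / n for n in range(1, N + 1)))  # ~0.6931422
print(math.log(2))                                        # 0.6931472
```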