
Is Math Broken?

On the sixth anniversary of our relationship, I dedicate this entry to my fiancée Amy.

What is 1 + 2 + 3 + 4 + 5 + ...? Infinite, right? Well, this is apparently not the only answer. You can find a lot of stuff on the web explaining how this sum is equivalent to the number -\frac{1}{12}. I’m not kidding. How can something seemingly infinite and positive be equal to a negative fraction?

Folks on the web usually show some manipulation of the sum that results in this weird answer. I have not found these methods satisfactory, so I worked through it myself in an effort to convince myself of this absurdity. In this entry I show my work.

First, I start with the Riemann zeta function:
\zeta(s) = \sum\limits_{n=1}^\infty \frac{1}{n^s}
and the Dirichlet eta function:
\eta(s) = \sum\limits_{n=1}^\infty \frac{(-1)^{n+1}}{n^s}

We can show there is a relationship between these two sums.
\eta(s) - \zeta(s) = \sum\limits_{n=1}^\infty \frac{(-1)^{n+1} - 1}{n^s}
Here it can be seen that the odd-n terms vanish, and the even-n terms (writing n = 2m) are -\frac{2}{(2m)^s}. The difference therefore reduces to:
= \sum\limits_{m=1}^\infty - \frac{2}{2^s m^s}
which is just the Riemann zeta function with a coefficient:
= -\frac{2}{2^s}\sum\limits_{m=1}^\infty \frac{1}{m^s} = -2^{1-s} \zeta(s)
So now we can express the difference between eta and zeta as:
\eta(s) - \zeta(s) = -2^{1-s}\zeta(s)
\eta(s) = (1-2^{1-s})\zeta(s)
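For values of s where both series converge (say s = 2), this relation can be checked numerically. A minimal sketch; the truncation N is an arbitrary choice:

```python
# Sanity check of eta(s) = (1 - 2^(1-s)) * zeta(s) at s = 2, where both
# series converge. The truncation N is arbitrary.
s = 2
N = 100_000
zeta_s = sum(1 / n**s for n in range(1, N + 1))
eta_s = sum((-1) ** (n + 1) / n**s for n in range(1, N + 1))
print(eta_s, (1 - 2 ** (1 - s)) * zeta_s)  # both near pi^2 / 12 = 0.8224...
```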

If we plug a few values of s into the zeta and eta functions, we get some series to evaluate:
\zeta(-1) = \sum\limits_{n=1}^\infty n = 1 + 2 + 3 + 4 + 5 + ...

\eta(-1) = \sum\limits_{n=1}^\infty (-1)^{n+1} n = 1 - 2 + 3 - 4 + 5 - ...

\eta(0) =  \sum\limits_{n=1}^\infty (-1)^{n+1}  = 1 - 1 + 1 - 1 + 1 - ...

Let’s look at that third series. Pulling off the first term and factoring a minus sign out of the rest shows that \eta(0) satisfies the recursion:
\eta(0) = 1 - \eta(0)
and by solving for \eta(0) we get:
\eta(0) = \frac{1}{2}

You can also get this result with the geometric series:
\sum\limits_{n=0}^\infty x^n = \frac{1}{1-x}
Plug in x = -1 to get:
\sum\limits_{n=0}^\infty (-1)^n = \frac{1}{2}

Now, this is weird but sort of makes sense. The partial sum of \eta(0) bounces around between 1 and 0. So this result of \frac{1}{2} is like the average of this bouncing.

The figure below shows the partial sum of \eta(0) in blue, bouncing. The “total” of \frac{1}{2} is shown in black.
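This averaging picture can be checked directly; a minimal sketch computing the partial sums and their running (Cesàro) average:

```python
# The partial sums of eta(0) = 1 - 1 + 1 - ... bounce between 1 and 0;
# their running (Cesaro) average settles at 1/2.
partials = []
total = 0
for n in range(1, 1001):
    total += (-1) ** (n + 1)
    partials.append(total)
cesaro = sum(partials) / len(partials)
print(cesaro)  # 0.5
```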


What about \eta(-1) = 1 - 2 + 3 - 4 + ...? Does this sum result in some finite value like \eta(0)? To find out, let’s use a method similar to the one above: take the difference between two series and see if the result is one of the series used in the difference.
\eta(-1) - \eta(0) = \sum\limits_{n=1}^\infty \left[ (-1)^{n+1} n - (-1)^{n+1} \right] = \sum\limits_{n=1}^\infty  (-1)^{n+1} (n-1)
The first term of this series is 0, so it is equivalent to starting the index at n=2:
= \sum\limits_{n=2}^\infty  (-1)^{n+1} (n-1)
We can then arbitrarily adjust the index by saying n = m + 1:
= \sum\limits_{m=1}^\infty  (-1)^{m+2} m
Since (-1)^{m+2} = -(-1)^{m+1}, this is equivalent to -\eta(-1). So now we have:
\eta(-1) - \eta(0) = -\eta(-1)
We know that \eta(0) = \frac{1}{2}, so it is a matter of solving for \eta(-1):
\eta(-1) = \frac{1}{4}

Weird! This implies that 1 - 2 + 3 - 4 + 5 - ... = \frac{1}{4}.
This too behaves like an average. The partial sum bounces around a center here as well: even though its magnitude keeps growing, it oscillates about \frac{1}{4}.

The figure below shows the partial sum of \eta(-1) in blue, the center of \frac{1}{4} in black, and the lines bounding the partial sums in red (note where they intersect).
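The same center can be recovered by Abel summation, a standard trick not used above: weight each term by x^n and let x approach 1 from below. A minimal sketch; the truncation N and the value of x are arbitrary choices:

```python
# Abel summation of eta(-1): weight each term by x^n with x just below 1.
# The weighted sum tends to x / (1 + x)^2, which approaches 1/4 as x -> 1.
x = 0.999
N = 100_000
abel = sum((-1) ** (n + 1) * n * x**n for n in range(1, N + 1))
print(abel)  # close to 0.25
```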


We can now take advantage of that equation we derived at the beginning to find the value for \zeta(-1):
\eta(s) = (1-2^{1-s})\zeta(s)
Plug in s = -1:
\eta(-1) = (1 - 2^{2})\zeta(-1) = -3 \zeta(-1)
We know that \eta(-1) = \frac{1}{4}, so we can solve for \zeta(-1):
\zeta(-1) = -\frac{1}{12}
Now this is especially weird. Recall the definition of \zeta(-1) above: this result implies that
1 + 2 + 3 + 4 + 5 + ... = -\frac{1}{12}
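The arithmetic of that last step can be spelled out in a couple of lines, recovering -\frac{1}{12} from \eta(-1) = \frac{1}{4}:

```python
# Recover zeta(-1) from eta(-1) = 1/4 using the relation derived above,
# eta(s) = (1 - 2^(1-s)) * zeta(s), evaluated at s = -1.
eta_m1 = 0.25
zeta_m1 = eta_m1 / (1 - 2 ** (1 - (-1)))  # divide by (1 - 4) = -3
print(zeta_m1)  # -0.0833... = -1/12
```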

The partial sum of this series is not bouncing around some value, so what is going on here?
The figure below shows the partial sum of \zeta(-1) in blue, with the value of the infinite sum, -\frac{1}{12}, in black.


Is math broken? Is this an inconsistency, à la Gödel? This sort of thing actually shows up in nature, namely in the Casimir effect, so there is something real happening here.

This result is also known as a Ramanujan summation: the partial sums do not converge, yet you can still arrive at a finite value that characterizes the infinite sum. This is not the only such sum that causes head scratching. Learn some more of them and baffle your friends at parties.

Translating a Wave Function

In algebra or pre-calc, you learn that you can change the position of a function by modifying its argument. In quantum physics this idea is used to displace wave functions: if a function starts at one position and moves to another, all that is needed is a change of argument. However, quantum physics likes to use linear operators to alter functions. What would an operator that changes the argument of a function look like? In this post, I will construct a 1-D example of such an operator.

A general wave function can be written as: \Psi (x), where the shape of \Psi is dependent on the spatial variable x.
To translate a function by a distance a, modify the argument of \Psi:
To move the function right by a, \Psi(x) \rightarrow \Psi (x-a)
To move the function left by a, \Psi(x) \rightarrow \Psi(x+a)
Let’s just take the \Psi(x+a) example, and without loss of generality say that a can be positive or negative.
Next we can take advantage of Taylor expansions.
A function f(x) can be expanded around a point a: f(x) = f(a) + (x-a)\frac{d}{dx}f(a) + \frac{(x-a)^2}{2!} \frac{d^2}{dx^2}f(a) + ...
Here, in our example, we want to expand \Psi(x+a) around x, to express the translated function \Psi(x+a) in terms of the original function \Psi(x):
\Psi(x+a) = \Psi(x) + a\frac{d}{dx}\Psi(x) + \frac{a^2}{2!} \frac{d^2}{dx^2} \Psi(x) + ...
Note: the momentum operator is p = -i\hbar \frac{d}{dx}, so \frac{d}{dx} = \frac{i}{\hbar} p.
A more complete version of this expression would be:
\Psi(x+a) = \sum\limits_{n=0}^{\infty} \frac{a^n}{n!} \frac{d^n}{dx^n} \Psi(x) = \Psi(x) + \sum\limits_{n=1}^{\infty} \frac{a^n}{n!} \frac{d^n}{dx^n} \Psi(x)
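This expansion can be tested numerically on a function whose derivatives are easy to write down. A minimal sketch using \sin(x), whose derivatives cycle through sin, cos, -sin, -cos; the function, evaluation point, shift, and truncation are all arbitrary choices:

```python
import math

# Translate sin(x) by a using the Taylor expansion sum a^n/n! d^n/dx^n.
# The derivatives of sin cycle with period 4: sin, cos, -sin, -cos.
def translated(x, a, terms=30):
    derivs = [math.sin(x), math.cos(x), -math.sin(x), -math.cos(x)]
    return sum(a**n / math.factorial(n) * derivs[n % 4] for n in range(terms))

print(translated(1.0, 0.5), math.sin(1.5))  # the two values agree
```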
This sum’s structure is similar to the Taylor expansion of the exponential function.
e^x = 1 + x + \frac{x^2}{2!} + ... = \sum\limits_{n=0}^{\infty} \frac{x^n}{n!}
Every operator in the \Psi(x+a) expansion can be collected into a single exponential operator: \Psi(x+a) = e^{a\frac{d}{dx}} \Psi(x)
= e^{\frac{ai}{\hbar} p} \Psi(x).
This new operator e^{a\frac{d}{dx}} can be expanded to return to what we had before: e^{a\frac{d}{dx}} = \sum\limits_{n=0}^{\infty} \frac{a^n}{n!} \frac{d^n}{dx^n}.
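One way to see e^{a\frac{d}{dx}} act as a translation is in Fourier space, where \frac{d}{dx} becomes multiplication by 2\pi i f, so the whole operator becomes multiplication by e^{2\pi i f a}. A sketch assuming NumPy is available; the Gaussian packet, grid, and shift a are arbitrary choices:

```python
import numpy as np

# Apply e^(a d/dx) in Fourier space: d/dx -> 2*pi*i*f, so the operator
# becomes multiplication by e^(2*pi*i*f*a) on the Fourier coefficients.
N = 1024
x = np.linspace(-20.0, 20.0, N, endpoint=False)
dx = x[1] - x[0]
a = 3.0

psi = np.exp(-x**2)                      # original wave packet
f = np.fft.fftfreq(N, d=dx)              # physical frequencies
shifted = np.fft.ifft(np.fft.fft(psi) * np.exp(2j * np.pi * f * a)).real

err = np.max(np.abs(shifted - np.exp(-(x + a) ** 2)))
print(err)  # tiny: the operator reproduces psi(x + a)
```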
So the translated function is the original function plus a specific type of interference, with the structure: Translated = Original + Interference.
\Psi(x+a) = \Psi(x) + \sum\limits_{n=1}^{\infty} \frac{a^n}{n!} \frac{d^n}{dx^n} \Psi(x) = \Psi(x) + \Delta(a)\Psi(x)
We can reduce the operators in the interference terms into an exponential in the usual way:
\Delta(a) = \sum\limits_{n=1}^{\infty} \frac{a^n}{n!} \frac{d^n}{dx^n}
= e^{a \frac{d}{dx} } - 1

The expectation value \left< e^{a\frac{d}{dx}} \right> characterizes the average overlap between \Psi(x) and \Psi(x+a).
\left< e^{a\frac{d}{dx}} \right> = \left< \Psi(x) \right| e^{a\frac{d}{dx}} \left| \Psi(x) \right>
= \left< \Psi(x) | \Psi(x+a) \right>
= \left< \Psi(x) \right| 1 + \Delta(a) \left| \Psi(x) \right>
= 1 + \left< \Delta(a) \right>
The expectation value of the translation operator is then unity plus the expectation value of the interference operator.
\left< \Delta (a) \right> = \sum\limits_{n=1}^{\infty} \frac{a^n}{n!} \left< \frac{d^n}{dx^n} \right>
= \sum\limits_{n=1}^{\infty} \frac{(ia)^n}{\hbar^n n!} \left< p^n \right>
The interference expectation value is shown to be an expectation value of a function of the momentum operator.
In the case of localized waves, as a gets much greater than the width of \Psi(x), the interference term approaches -1, since the overlap between the original and translated wave function decreases. This is equivalent to the original and displaced wave function becoming more and more orthogonal.
\lim\limits_{a\rightarrow \infty} \left< \Delta (a) \right> = -1
\lim\limits_{a\rightarrow \infty} \left< \Psi(x) | \Psi(x+a) \right> = 0
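This decay of the overlap can be checked on a grid. A sketch assuming NumPy is available; the Gaussian packet width and the grid are arbitrary choices:

```python
import numpy as np

# Overlap <psi(x)|psi(x+a)> for a normalized Gaussian packet on a grid.
# As the shift a grows past the packet width, the overlap drops toward 0
# and <Delta(a)> = overlap - 1 approaches -1.
x = np.linspace(-40.0, 40.0, 8001)
dx = x[1] - x[0]

def packet(center):
    """Normalized Gaussian packet centered at `center`."""
    p = np.exp(-(x - center) ** 2 / 4)
    return p / np.sqrt(np.sum(p**2) * dx)

# psi(x + a) is a packet centered at -a.
overlaps = {a: np.sum(packet(0.0) * packet(-a)) * dx for a in (0.0, 2.0, 10.0)}
for a, ov in overlaps.items():
    print(a, ov, ov - 1)  # shift, overlap, <Delta(a)>
```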