Sunday, July 4, 2010

0 = infinity, for certain values of infinity


Recall the basic antiderivative formula from first-year calculus: the integral of x^n is x^(n+1)/(n+1), plus a constant (unless n = -1, in which case this formula doesn't work - you remember our friend the logarithm, I hope).

Straightforward integral, right? Okay, then. Here's your task: integrate 1/x^2 from -1 to +1. Done? Congratulations - you just "proved" that the definite integral of a strictly positive function is equal to -2.
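
(In case you want the blind computation spelled out - this is just the arithmetic, nothing deep: the formula above with n = -2 gives an antiderivative of -1/x, so the integral from -1 to +1 comes out as (-1/1) - (-1/(-1)) = -1 - 1 = -2.)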

Good one - still think this is simple?

Let's dig deeper. Now take the same integral, except from +1 to infinity. Ready? I hope you got 1 as your answer. Do the same thing from -infinity to -1 - feel free to use the symmetry of the function if you want, I don't mind!
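
(Again, the arithmetic spelled out: -1/x evaluated from +1 to infinity gives 0 - (-1) = 1, and by the symmetry of 1/x^2 the piece from -infinity to -1 contributes another 1.)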

Now, put these two results together with the result from the last paragraph... what do we have?

(integral of 1/x^2 from -infinity to -1) + (integral from -1 to +1) + (integral from +1 to infinity) = 1 + (-2) + 1 = 0

That's right! Never mind the fact that 1/x^2 is positive everywhere along the real line, except at 0, where it blows up to infinity; we've just "proved" that its integral over the whole thing is equal to zero. Yay!

What is going on here? Something has clearly gone wrong, and you probably know what it is. You're not allowed to integrate over any point where a function becomes infinite. Our calculus teachers beat that mantra into our heads, and it certainly seems like reasonable advice if ignoring it gets us results like this!

But happily (sadly, if you hate math) that's not the entire story. We've got a few more rules to break: imagine the state of mathematics if we were still worried about the fact that the square root of two could not be perfectly expressed as a fraction! This is a similar situation, where we obtain more by drawing outside the lines than by following them precisely.

Here's the magic: the expressions above are, in fact, correct. However, they don't mean what you probably think they mean - we have not shown that 0 = infinity, alas. But what we have shown is far more useful and pretty!

To see what we've got here, we need to talk for a moment about analytic functions. I'm sure you know what a function f(x) over the real numbers is - for any real value x we can pick another real value f(x), and this defines a function. Now, for the complex numbers, we'll write z = x + iy where both x and y are real numbers. What is a complex function f(z)? Here we have a slight problem. A function of z alone is a different beast from a function of x and y separately. When we speak of complex functions, we usually mean analytic functions: functions of z alone, which don't depend separately on x or y. In other words, f(z) = 3z + z(z+1) is an analytic function of z, but f(z,x,y) = 3z + x + y*x is not, because it explicitly depends on x and y, not merely on the combination x + iy.

Now, determining whether a particular function f(x,y) is analytic as a function of z = x + iy is slightly tricky, though straightforward (just a bit of calculus - look up the Cauchy-Riemann equations if you care). There are actually some interesting connections here to electrostatics, but I digress... in any case, one of the most important facts about analytic functions is that once you know how the function looks on a finite region or strip (for instance, between 0 and 1 on the real line, or on a small disk), this actually determines the function's values everywhere. You can even get by merely knowing the power series at a point! It does not suffice to know the values at scattered points, though - you actually need a continuous piece. With that in hand, the only other trick is to figure out a practical way to get an expression everywhere else - and this is not always easy, or even possible! On this point, keep in mind that most functions are not expressible in elementary terms; in fact, most functions are not even nameable - pick a random function out of a hat, even from a fairly nice family, and with probability one you cannot describe it in any way whatsoever - simply because analytic functions are at least as numerous as the real numbers (which, with probability one, also cannot be named).
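
So that it's not a total black box, here is what that check looks like in the simplest cases (just an illustration of my own, nothing later depends on it): write f = u + iv with u and v real functions of x and y. The Cauchy-Riemann equations demand du/dx = dv/dy and du/dy = -dv/dx. For f(z) = z^2 = (x^2 - y^2) + i(2xy) we get du/dx = 2x = dv/dy and du/dy = -2y = -dv/dx, so z^2 passes. For f = x (the real part of z by itself) we get du/dx = 1 but dv/dy = 0, so it fails - exactly the kind of function that depends on x and y separately rather than on the combination x + iy.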

Back to the point. If knowing an analytic function along a strip is enough to determine it everywhere, then we have a fabulously useful corollary: if we know that two analytic functions are equal along any strip, we then know that they must be equal everywhere! So for instance, if we know that e^ix = cos x + i sin x along the real line, then we don't need to separately verify that e^iz = cos z + i sin z for complex z! It automatically holds, as long as each side is well defined; we'll see later that one of the most useful applications of analytic continuation is to handle cases where one side is not well defined in certain areas of the complex plane and we can match a nicer expression to it where it is defined. If they are equal there, then we can use the nice expression to actually define the nasty one elsewhere.
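
If you'd rather see that with your own eyes than take the theorem's word for it, here is a quick numerical spot-check (a throwaway Python sketch of my own - the argument doesn't depend on it, and the differences should come out at the level of floating-point roundoff):

```python
import cmath

# Evaluate both sides of e^{iz} = cos z + i sin z at a few genuinely
# complex points. The identity theorem says that agreement on the real
# line already forces agreement everywhere; this just confirms a few
# instances numerically.
for z in (0.7, 1 + 2j, -3.2 + 0.5j):
    lhs = cmath.exp(1j * z)
    rhs = cmath.cos(z) + 1j * cmath.sin(z)
    print(f"z = {z}:  |difference| = {abs(lhs - rhs):.2e}")
```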

So now you may see the gist of where we're going, though the details may be unclear. Alas, I have no time to finish up now, so the resolution of these issues will have to wait until I return. Until then...