I haven't done a mathnerd post yet. I think it's about time, but alas, I lack the time to write a fresh one right now; this is migrated from my now defunct math blog, and should prove beyond a shadow of a doubt that I'm a total weenie. This series will move into some fun stuff soon (if you're a loser like me), so stay tuned...
So you think you understand elementary calculus, eh? Good for you, but let's go slow at first, just to be sure. We're going to start at the beginning: finding the integral of a power of x. As a definite integral from a to b,

∫ x^n dx = (b^(n+1) - a^(n+1))/(n+1)
Or, as an indefinite integral,

∫ x^n dx = x^(n+1)/(n+1) + C
Of course, this is not the case for n = -1; indeed, if n = -1 then the denominator n+1 becomes zero. The indefinite integral in that case is

∫ x^(-1) dx = ln(x) + C (for x > 0)
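As a quick numeric sanity check (my own addition, not part of the original post), we can compare a crude midpoint Riemann sum against the closed forms above; the function names and tolerances here are arbitrary choices:

```python
import math

def riemann(f, a, b, steps=100_000):
    # Midpoint Riemann sum approximating the definite integral of f from a to b.
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

a, b = 1.0, 2.0

# n != -1: the integral of x^n from a to b is (b^(n+1) - a^(n+1)) / (n+1).
for n in (2, 0.5, -3):
    exact = (b**(n + 1) - a**(n + 1)) / (n + 1)
    assert abs(riemann(lambda x: x**n, a, b) - exact) < 1e-6

# n = -1: the same formula breaks down (division by zero);
# the integral is ln(b) - ln(a) instead.
assert abs(riemann(lambda x: 1 / x, a, b) - (math.log(b) - math.log(a))) < 1e-6
print("power-rule checks passed")
```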
Something seems fishy here, though. After all, the difference between x^(-1) and x^(-1+epsilon) is just about nil for a specific value of x; can it really be the case that the antiderivative is so different that we actually need a different functional form?
No! Miraculous as it may seem, the natural log function is, to some extent, the magical in-between value between the powers of x that blow up at x=0 and those that vanish there. Huh? Let's look further...
If we didn't happen to know that the antiderivative of 1/x is the ln function, we might inquire as to what happens in the limit as n -> -1 in the expression for the antiderivative of a power of x. An excellent question is: from which side? Well, we should look at both, as a rule. From above, we get, for small epsilon approaching zero,

∫ x^(-1+epsilon) dx = x^epsilon/epsilon + C
whereas from below we get

∫ x^(-1-epsilon) dx = -x^(-epsilon)/epsilon + C
Just for fun, let's look at the value x = e. Using the Taylor series e^x = 1 + x + x^2/2! + ... we see that the first expression becomes

e^epsilon/epsilon = 1/epsilon + epsilon/epsilon + epsilon^2/(2! epsilon) + ... = 1/epsilon + 1 + epsilon/2! + epsilon^2/3! + ...
Interesting - in the limit as epsilon approaches zero, the 1/epsilon will diverge to infinity, and the rest approaches 1 (each term except epsilon/epsilon goes to zero). Perhaps we can absorb that 1/epsilon into the constant of integration and declare that what remains is equal to 1? [In which case we have verified, at least for this particular case, that this limit appears to match what we'd expect of the ln function, since ln(e) = 1.]
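Here's a quick numeric check of that claim (mine, not from the original post): after subtracting off the 1/epsilon term, e^epsilon/epsilon does indeed tend to 1. The bound 2*epsilon is a rough choice based on the remainder being 1 + epsilon/2! + ...:

```python
import math

# After discarding the divergent 1/epsilon term, e^epsilon / epsilon -> 1.
for eps in (1e-2, 1e-4, 1e-6):
    remainder = math.exp(eps) / eps - 1 / eps
    assert abs(remainder - 1) < 2 * eps  # remainder = 1 + eps/2! + eps^2/3! + ...
print("remainder -> 1 as epsilon -> 0")
```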
This seems a bit shady - it smacks of the renormalization tricks that physicists love to play, haphazardly discarding infinities whenever they are inconvenient. In this case, it's not nearly as sinister as it looks; I'm just saving a bit of time by working with the indefinite integral instead of the definite one, where that infinity would never crop up in the first place - I encourage you to try it yourself.
And we have a bonus: both the "from above" and "from below" expressions turn out to be equal in the limit, as long as we remove those infinite terms in the appropriate way (or work with the definite integrals instead).
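The definite-integral version can be checked numerically; this is a sketch of my own (endpoints a and b and the tolerance are arbitrary choices), showing that both one-sided integrals approach ln(b/a) with no infinities in sight:

```python
import math

# The definite integrals of x^(-1+eps) and x^(-1-eps) from a to b
# both approach ln(b/a) as eps -> 0.
a, b = 1.0, 5.0

def from_above(eps):
    # integral of x^(-1+eps) from a to b: (b^eps - a^eps) / eps
    return (b**eps - a**eps) / eps

def from_below(eps):
    # integral of x^(-1-eps) from a to b: (a^(-eps) - b^(-eps)) / eps
    return (a**(-eps) - b**(-eps)) / eps

for eps in (1e-3, 1e-5, 1e-7):
    assert abs(from_above(eps) - math.log(b / a)) < 10 * eps
    assert abs(from_below(eps) - math.log(b / a)) < 10 * eps
print("both one-sided limits agree with ln(b/a)")
```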
Without further ado, the punchline: a few lines of algebra will quite easily confirm that the limit expression

ln(x) = lim (epsilon -> 0) (x^epsilon - 1)/epsilon
serves quite nicely as a definition of the natural logarithm function, at least over the positive real numbers; when we move to the complex plane, things become a little trickier because one must specify quite precisely what is meant by that limit and by the raising of x to that power (recall that complex roots may be multivalued).
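A numeric sanity check of that limit definition over the positive reals (my own, with an arbitrary choice of epsilon and test points):

```python
import math

# (x^eps - 1)/eps should match ln(x) for small eps and positive x.
eps = 1e-8
for x in (0.1, 1.0, 2.0, math.e, 100.0):
    approx = (x**eps - 1) / eps
    assert abs(approx - math.log(x)) < 1e-5
print("(x^eps - 1)/eps matches ln(x)")
```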
So to make a long story short, the logarithm function is not a special case at all, nor do we have to resort to its status as the inverse of the exponential function to define it naturally. It's simply what we get if we take a family of monomials normalized so that their derivatives are pure monomials (without constant factors) and push the exponent through zero. Modulo normalization and a shift, the logarithm is "merely" a non-constant power of x with zero exponent, which is pretty cool.
In fact, recalling that the inverse of the x^n function is x^(1/n), we see that the exponential function is, in a sense, a power of x with infinite exponent (again modulo normalization and a shift). From the expression for the ln function shown above, you can even derive the usual limit expression (the product one) for the exponential function (exercise!).
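I won't spoil the exercise, but here's a numeric illustration (mine, not the derivation itself) that the usual product limit for the exponential does hold; the error bound is a rough choice based on the leading 1/n correction term:

```python
import math

# (1 + x/n)^n -> e^x as n -> infinity.
for x in (-1.0, 0.5, 2.0):
    for n in (10**3, 10**5, 10**7):
        err = abs((1 + x / n)**n - math.exp(x))
        assert err < 10 * abs(x) * math.exp(abs(x)) / n
print("(1 + x/n)^n converges to e^x")
```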
Next time: we're going to start with the first of many "proofs" that 0 = infinity, 0 = 1, and 0 = just about anything you want it to equal. Amusingly enough, a couple of these "proofs" are arguably correct and even (gasp!) useful, as long as you know what you're talking about when you write down the symbols!
Tuesday, April 21, 2009