There are many rigorous ways to show these results. Perhaps the easiest is to derive them from the Laws of Indices. Take the example of multiplying powers:
a^m × a^n = a^(m+n)
2^0 × 2^3 = 2^(0+3) = 2^3 = 8
The result implies that 2^0 must be 1. Consequently, we can generalize that
a^0 = 1, and it follows that e^0 = 1.
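For anyone who wants to poke at this numerically, here is a minimal Python check of the index-law argument (purely illustrative, standard library only):

```python
# The index law 2^0 * 2^3 = 2^(0+3) forces 2^0 = 1, and likewise exp(0) = 1.
import math

print(2**0 * 2**3)   # 8
print(2**(0 + 3))    # 8, so 2**0 must equal 1
print(2**0)          # 1
print(math.exp(0))   # 1.0
```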
Next, we have to determine log(0). As Dr. Epenoy pointed out, from the relationship between the exponential function and the logarithmic function,
e^x = y,
x = log(y),
we can formulate the problem of log(0) as
e^x = 0.
Then we can use a limit approach to find the solution. Intuitively, we can rewrite the problem as
lim {1/e^(−x)} = 0.
This tells us that e^(−x) has to be an extremely large number. Naturally, this implies −x = ∞, because e^∞ = ∞ and lim {1/∞} = 0.
So, we have log(0) = x = −∞.
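A quick Python sketch of the limit behaviour, just for illustration: log(x) for positive x shrinking toward 0 becomes an arbitrarily large negative number.

```python
# log(x) runs off to -infinity as x -> 0 through positive values.
import math

for x in [1e-1, 1e-4, 1e-8, 1e-16, 1e-32]:
    print(f"log({x:.0e}) = {math.log(x):.4f}")
# -2.3026, -9.2103, -18.4207, -36.8414, -73.6827: decreasing without bound
```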
Edit: Since we get this result, the logarithm of zero is UNDEFINED, just like division by zero.
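This reading matches how standard numerical libraries behave; a small Python illustration (Python's math module rejects 0 outright, while NumPy would instead return -inf with a divide-by-zero warning):

```python
# math.log treats 0 as a domain error, much like division by zero is rejected.
import math

try:
    math.log(0)
except ValueError as err:
    print("log(0):", err)    # "math domain error"

try:
    1 / 0
except ZeroDivisionError as err:
    print("1/0:", err)       # "division by zero"
```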
Off the record, but I can't resist. The Sherlock Holmes rule (I like it) may be misleading without a few extra comments. Example: no number squared is negative, but that does not mean the sentence "John is in love with Mary" must be true.
Simply put, neither -infinity nor infinity is a number. They can be included in the set of numbers, but then the operations we would like to remain valid are no longer well defined; e.g., what is infinity - infinity? Operating on limits of expressions makes sense as an addendum to the arithmetic operations.
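A small floating-point illustration of this point: IEEE-754 arithmetic does include +inf and -inf as values, but expressions like inf - inf have no consistent answer.

```python
# inf behaves like a limit shorthand, not like a number: inf - inf is indeterminate (NaN).
inf = float("inf")

print(inf - inf)         # nan
print(inf / inf)         # nan
print(inf + 1 == inf)    # True: if inf were an ordinary number this would force 1 == 0
```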
Returning to the log: let me add that the definition log(0) = -infinity makes sense as a shorthand for the limit of log(x) being -infinity as x approaches 0 through positive numbers. However, admitting the complex numbers, the imaginary part of the limit would be i\pi if x approaches 0 through negative numbers (or -i\pi, 3i\pi etc., depending on which branch of the inverse of the periodic function \exp is chosen). Thus, every time one wants to use log(0), a strict assumption about the way the limit is attained should be formulated.
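For concreteness, here is what the principal branch does for small negative arguments (Python's cmath.log picks the branch with imaginary part in (-pi, pi]):

```python
# Principal-branch behaviour: real part -> -infinity, imaginary part stays at pi,
# so the "value" of log(0) depends on the direction of approach and the chosen branch.
import cmath

for x in [-1e-2, -1e-6, -1e-12]:
    z = cmath.log(x)
    print(f"log({x}) = {z.real:.4f} + {z.imag:.6f}i")
```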
Nice example! We also have log(2) = \sum_{n=1}^{\infty} (-1)^(n+1)/n = 1 - 1/2 + 1/3 - 1/4 + 1/5 - ... = 1/2 + 1/12 + 1/30 + ..., which is OK, too!
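A quick numerical check of both forms of the series against math.log(2) (the slowly converging alternating form and the pairwise-grouped form), just as a sketch:

```python
# Both the alternating series and its pairwise-grouped form approach log(2).
import math

N = 100000
alternating = sum((-1)**(n + 1) / n for n in range(1, N + 1))
paired = sum(1 / ((2*k - 1) * (2*k)) for k in range(1, N // 2 + 1))

print(alternating, paired, math.log(2))   # all close to 0.6931...
```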
All these 'positive' examples are exceptional in the huge sea of really bad ones. For instance, log(2.00000000001) cannot be obtained in this way, since it would require x = 1.00000000001 in the series for log(1+x), which lies outside the region of convergence.
A much worse one is 1/(1-x) = 1 + x + x^2 + ..., which accordingly would imply
1/(1-1) = 1 + 1 + 1 + 1 + 1 + ... = \infty.
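To make the failure visible numerically, here is a sketch; note that for x = 1.00000000001 the divergence is real but far too slow to observe directly, so a value clearly outside the disc of convergence is used instead:

```python
# Outside |x| <= 1 the terms of the log(1+x) series grow instead of shrinking,
# and the geometric series at x = 1 just piles up 1 + 1 + 1 + ...
x = 1.5                                  # clearly outside the region of convergence
terms = [x**n / n for n in range(1, 60)]
print(terms[9], terms[29], terms[58])    # term magnitudes grow, so no convergence is possible

s = 0
for _ in range(5):
    s += 1                               # partial sums of 1/(1-1) "=" 1 + 1 + 1 + ...
print(s, "and growing without bound")
```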
And how about the fact that 1/x goes to -\infty as x goes to 0 through negatives?
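The same one-sided behaviour in numbers, for illustration:

```python
# 1/x for x -> 0 through negative values decreases without bound.
for x in [-1e-1, -1e-4, -1e-8]:
    print(f"1/({x}) = {1/x:.1e}")
# -1.0e+01, -1.0e+04, -1.0e+08
```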
Additional remark
log(2+a) = log(2(1+a/2)) = log 2 + log(1+a/2), and for the second term the series expansion is OK whenever |a/2| < 1. In this way we can use the series expansion of log for an arbitrary positive number (and even for complex numbers!).
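A sketch of this rescaling trick in Python; log1p_series below is just a hypothetical helper name for the alternating series of log(1+y), applied only where |y| < 1:

```python
# Compute log(2 + a) as log 2 + log(1 + a/2), using the series only on the second term.
import math

def log1p_series(y, terms=30):
    """Alternating (Mercator) series for log(1 + y); valid for |y| < 1."""
    return sum((-1)**(n + 1) * y**n / n for n in range(1, terms + 1))

a = 0.00000000001
approx = math.log(2) + log1p_series(a / 2)
print(approx, math.log(2 + a))   # agree to machine precision
```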
YES, since for rational functions infinity is just one point of the compactified complex plane. For first-course students, when just real numbers are considered, a good explanation can be presented as follows:
I agree with the answers by Joachim Domsta and Salah A. M. There is a need to specify whether we are working in the complex number space or the real number space.