One can define the function x^k for any positive integer k quite naturally (i.e. x^k = x*x*...*x, k times). From this, the relation x^(k+m) = x^k * x^m is immediate (x*x*...*x k+m times is just x*x*...*x k times followed by x*x*...*x m times), and it is this idea that forms the basis for our extension of exponents. Indeed, for positive x, to define x^(1/k) we find the positive real number a such that a^k = x (trying to define this for negative numbers leads to trouble, as x^2, for example, never takes a negative value). That this number exists and is unique follows because the function x^k is continuous and strictly increasing on [0, Infinity). One can then extend this to positive rationals n/k by taking integer powers as above, and verify that x^((n/k) + (p/q)) = x^(n/k) * x^(p/q). By defining x^0 = 1 and x^(-a) = 1 / x^a, we can extend this to all rationals and guarantee that the additive property of the exponent is maintained (as x^(-a + a) = x^(-a) * x^a = x^a / x^a = 1 = x^0).
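The existence-and-uniqueness argument above is constructive: since t^k is continuous and strictly increasing on [0, Infinity), the root of t^k = x can be found by bisection. A minimal sketch in Python (the name kth_root is mine), also checking the addition rule for rational exponents numerically:

```python
def kth_root(x, k, tol=1e-12):
    """Find the unique a >= 0 with a^k = x by bisection, using that
    t -> t^k is continuous and strictly increasing on [0, infinity)."""
    lo, hi = 0.0, max(1.0, x)   # the root lies in [0, max(1, x)]
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid ** k < x:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# x^(n/k) is then (x^(1/k))^n; check the addition rule
# x^(1/2) * x^(1/3) = x^(1/2 + 1/3) = x^(5/6) for x = 5:
print(abs(kth_root(5.0, 2) * kth_root(5.0, 3) - kth_root(5.0, 6) ** 5) < 1e-6)  # True
```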
The next question is how to define something like 2^x for every real number x. One can check that the above definition of c^x (where c > 0) is continuous on the rationals. By some continuity theorems (the rationals are dense in [a,b] for every a < b, c^x is uniformly continuous on these intervals, and a continuous extension theorem applies), there exists a unique function c^x defined for ALL real values of x that is continuous and agrees with our original function on the rationals. After some playing around, one may find this function explicitly: it is exp(x log(c)) (where exp(a) = e^a, e being Euler's number, and log is log "base-e", a.k.a. "ln" or "natural log"). Note that exp and log can be defined independently of our above derivations: log(a) is the integral from 1 to a of dx/x for positive a, and exp is its inverse function, defined for all reals. Moreover, exp obeys our addition rule. Indeed, since exp is its own derivative, the quotient rule gives the derivative of exp(x+y)/exp(x) with respect to x as [exp(x+y)exp(x) - exp(x+y)exp(x)] / exp(x)^2 = 0, so exp(x+y)/exp(x) is constant in x by the mean value theorem. Plugging in x = 0 shows this constant is just exp(0+y)/exp(0) = exp(y), i.e. exp(x+y) = exp(x)exp(y).
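The integral definition of log can be checked against the built-in natural log, and exp(x log c) can be seen to reproduce the rational powers of c. A quick numerical sketch (the name log_via_integral is mine; the trapezoidal rule stands in for the integral from 1 to a of dt/t):

```python
import math

def log_via_integral(a, n=100000):
    """Approximate log(a) as the integral from 1 to a of dt/t,
    via the trapezoidal rule with n subintervals."""
    h = (a - 1) / n
    total = 0.5 * (1.0 + 1.0 / a)   # endpoint terms 1/1 and 1/a
    for i in range(1, n):
        total += 1.0 / (1 + i * h)
    return total * h

# The integral definition agrees with the built-in natural log:
print(abs(log_via_integral(2.0) - math.log(2.0)) < 1e-8)        # True

# And exp(x log c) interpolates the rational powers of c:
c = 2.0
print(abs(math.exp(0.5 * math.log(c)) - math.sqrt(c)) < 1e-12)  # c^(1/2) - True
print(abs(math.exp(3.0 * math.log(c)) - 8.0) < 1e-12)           # c^3 - True
```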
From the above, it makes sense to take the definition of a^b as exp(b log a) (for a > 0). Knowing the analytic expansion and absolute convergence of complex power series allows us to extend this notion to complex exponents. For any real y, we have exp(i y) = 1 + (i y) + (i y)^2 / 2! + (i y)^3 / 3! + ..., which converges absolutely for every y. Hence, we may define exp(x+iy) by its series expansion for all complex numbers. A little more series manipulation shows exp(iy) = cos(y) + i sin(y). This, together with the fact that exp(x+iy) = exp(x)exp(iy), allows us to compute exp (and thus a^z for any complex z and positive a) quite easily. In particular, for x+iy = i pi, this gives exp(0)[cos(pi) + i sin(pi)] = -1, i.e. e^(i pi) + 1 = 0.
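The series definition is directly computable with Python's complex arithmetic; summing a modest number of terms already recovers Euler's identity to machine precision (the name exp_series is mine):

```python
import math

def exp_series(z, terms=50):
    """exp(z) = sum over n of z^n / n!; the series converges
    absolutely for every complex z."""
    total = 0 + 0j
    term = 1 + 0j               # z^0 / 0!
    for n in range(terms):
        total += term
        term *= z / (n + 1)     # z^(n+1)/(n+1)! from z^n/n!
    return total

# exp(iy) = cos(y) + i sin(y); in particular exp(i*pi) = -1:
print(abs(exp_series(1j * math.pi) - (-1)) < 1e-12)                           # True
y = 0.7
print(abs(exp_series(1j * y) - complex(math.cos(y), math.sin(y))) < 1e-12)    # True
```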
As for 0^0, it's undefined. From limits like lim(a->0+) a^0 = 1 (a going to 0 from above), one might think it should be 1; but putting 0 in place of the base in the above interpolation gives 0^x = 0 for all x > 0, which suggests 0^0 = 0 instead. There is no right answer; the real question is why you want to know 0^0 in the first place. Usually it arises as a limit to be found, and as such it should be evaluated without explicitly plugging in 0.
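The two competing limits can be seen numerically; the two families approach the point (0, 0) in (base, exponent) along different paths and disagree:

```python
# Holding the exponent at 0: a^0 = 1 for every a > 0, suggesting 0^0 = 1.
for a in (0.1, 0.01, 0.001):
    print(a ** 0)       # 1.0 each time

# Holding the base at 0: 0^x = 0 for every x > 0, suggesting 0^0 = 0.
for x in (0.1, 0.01, 0.001):
    print(0.0 ** x)     # 0.0 each time
```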
On a similar note, 0! = 1 is defined mainly (from what I've seen) out of convenience: the value of the empty product should be one, since the empty product multiplied by any other product should leave that product unchanged.
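The empty-product convention is baked into Python's standard library: math.prod of an empty sequence returns 1, precisely so that prod(A + B) == prod(A) * prod(B) holds even when A is empty.

```python
import math

print(math.prod([]))                                        # 1
print(math.prod([]) * math.prod([2, 3, 4]) == math.prod([2, 3, 4]))  # True
print(math.factorial(0))                                    # 1
```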