01/07/2013, 09:13 AM
(01/06/2013, 11:28 PM)mike3 Wrote: A much simpler definition of zeration is just
\( \mathrm{zer}_a(b) = b + 1 \).
Hi Mike,
It's not that I'm on a crusade here or something, so take my reactions below only as a thought experiment.
What I find unsatisfying about this definition is that zeration then really is just a unary operator. Of course, an increment can hardly be binary, but still...
(01/06/2013, 11:28 PM)mike3 Wrote: Then, we can write out the next operations in the sequence as
\( a + b = \mathrm{add}_a(b) = \mathrm{zer}_a^b(a) \)
\( a * b = \mathrm{mul}_a(b) = \mathrm{add}_a^b(0) \)
\( a^b = \exp_a(b) = \mathrm{mul}_a^b(1) \)
\( ^b a = \mathrm{tet}_a(b) = \exp_a^b(1) \)
\( a \uparrow \uparrow \uparrow b = \mathrm{pen}_a(b) = \mathrm{tet}_a^b(1) \)
...
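This tower of definitions can be sketched directly in code (a quick illustration only; the names `iterate`, `zer`, etc. are mine, and `b` is restricted to non-negative integers):

```python
def iterate(f, n, x):
    """Apply f to x, n times (n a non-negative integer)."""
    for _ in range(n):
        x = f(x)
    return x

def zer(a, b):
    return b + 1                                  # zer_a(b) = b + 1

def add(a, b):
    return iterate(lambda x: zer(a, x), b, a)     # zer_a^b(a)

def mul(a, b):
    return iterate(lambda x: add(a, x), b, 0)     # add_a^b(0)

def power(a, b):
    return iterate(lambda x: mul(a, x), b, 1)     # mul_a^b(1)

def tet(a, b):
    return iterate(lambda x: power(a, x), b, 1)   # exp_a^b(1)

print(add(3, 4), mul(3, 4), power(2, 3), tet(2, 3))  # 7 12 8 16
```

Note how the starting value passed to `iterate` is `a` for addition but a constant (0 or 1) for every higher rank.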
That is, you think of the nth operation as applying the previous operation b times, with a "base" of a, to some starting value, which for zeration to build addition is *a* and not a constant 0 or 1.
You could also say that the different constants needed here indicate that the 'old' system is defined improperly.
When taking \( a * b \) to mean "take b plusses and put a's around them", you always end up with 0 as the constant (apart from zeration, maybe). After all, \( a * 1 \) means "take just 1 a and skip the adding", while \( a + 1 \) would in the same vein mean "take just one a and skip the zerating", which would logically again be just "a" - this logical imbalance just rubbed me the wrong way. Of course, you could also say that \( a [N] b = a [N-1] a [N-1] \ldots [N-1] a [N-1] C_{N-1} \) (with the a repeated b times), where \( C_{N-1} \) indeed fixes up the imbalance, being 0 for "+" and 1 for everything higher up, but this seems more like a hack to me than a proper universal definition.
(01/06/2013, 11:28 PM)mike3 Wrote: The big problem with the idea you mention is that we lose algebraic identities. Namely, addition and multiplication in the usual sense have
\( a + b = b + a \)
\( (a + b) + c = a + (b + c) \)
\( ab = ba \)
\( (ab)c = a(bc) \)
\( a(b + c) = ab + ac \)
\( (b + c)a = ba + ca \) (same as above)
This alternate definition of multiplication, which we'll denote here by \( a \otimes b \), is equivalent to \( a \otimes b = a(b+1) \). This fails all three laws, e.g. \( b \otimes a = b(a+1) = ba + b \ne ab + a = a(b+1) = a \otimes b \).
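The non-commutativity pointed out here is easy to check numerically (a throwaway sketch; `otimes` is just a placeholder name for \( \otimes \)):

```python
def otimes(a, b):
    """Alternate multiplication: a (x) b = a * (b + 1)."""
    return a * (b + 1)

# a (x) b = ab + a, while b (x) a = ba + b, so the two differ whenever a != b:
print(otimes(2, 3), otimes(3, 2))  # 8 9
```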
Indeed, this was also something I didn't like too much. So you then have to give up e.g. commutativity, but you also gain something nice, such as:
\( a [N] b > a \) for a > 0, b > 0
\( a [N] b < a \) for a > 0, b < 0
\( a [N] b = a \) for a > 0, b = 0
On the other hand, commutativity is already lost at exponentiation and higher up, so only addition and multiplication benefit from it. If tossing it out allows you to get the framework straight, then it doesn't seem like too big a loss anymore.
Again, this was just a thought experiment, not a plea to redefine the basic building blocks of mathematics. The rules used to define the operators of rank 2 and up just seem illogical when one tries to add zeration to the picture.
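To make those rules and their rank-dependent constants concrete, the conventional fold-with-constant definition \( a [N] b = a [N-1] \ldots [N-1] a [N-1] C_{N-1} \) can be sketched like this (`hyper` and `C` are just illustrative names; b is a non-negative integer):

```python
def C(rank):
    """C_{N-1} from the text: 0 when the folded operation is addition, else 1."""
    return 0 if rank == 1 else 1

def hyper(n, a, b):
    """a [n] b, defined by folding rank n-1 over b copies of a, closed by C_{n-1}."""
    if n == 0:
        return b + 1          # zeration taken as the increment
    if n == 1:
        return a + b          # addition taken as given
    x = C(n - 1)              # the constant that "fixes up the imbalance"
    for _ in range(b):
        x = hyper(n - 1, a, x)
    return x

print(hyper(2, 3, 4), hyper(3, 2, 3), hyper(4, 2, 3))  # 12 8 16
```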
Kind regards,
Carl Colijn

