Change of base formula for Tetration
#1
Branching this to a new topic, as I think it deserves one. Due to the length, I'll split into two posts, one to make my claim, the other to back it up.

This is in response to a comment made here:
http://math.eretrandre.org/tetrationforu...d=21#pid21
bo198214 Wrote:However because there is no base transform formula for tetration, it maybe that \( e^{1/e} \) is the only base with a certain uniqueness.

Depends on what you mean by base transform formula. For example, assume you have two bases a and b, each greater than \( \eta \) (the Greek letter eta). As a refresher, I've previously defined \( \eta=e^{1/e} \). I think this constant will turn out to be so important to tetration that it deserves its own name. The letter eta is basically an alternative version of "e", just as this constant serves a similar purpose to the constant e. And it also makes for an easily pronounced "cheta" function, written \( {}^x \check \eta \), which I've previously described.

Anyway, so long as you have an exact solution for base b, you can find the tetration of base a for any real exponent r > -2.

An exact formula for base conversion requires infinite iterations, but arbitrary precision can be achieved with very low iteration counts, so long as neither base is too close to eta. I've seen this fact alluded to by several authors; Peter Walker, for example, discussed almost exactly this same principle.

It's the basis for my solution for base e, by extending my function \( ^{x} \check \eta \). I don't think any of what I'm posting is "new", just put together so that the significance is obvious.

First, we need the constant of base conversion. It's essentially a form of superlogarithmic constant. Think of it as the equivalent of the constant \( log_b(a) \) used for converting \( a^z = b^{z \times log_b(a)} \), assuming a and b are positive real numbers. We can find \( log_b(a) \) by taking a and b to very high integer powers:

\( log_b(a)\ =\ \lim_{n \to \infty} \left(\frac{n}{k}\right),\ a^k \le b^n \le a^{k+1} \)
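Just to make that recipe concrete, here's a quick Python sketch of the n/k sandwich (purely my own illustration, with a made-up function name; I take a and b to be integers so the powers can be compared exactly):

[code]
import math

def log_via_powers(a, b, n):
    """Approximate log_b(a) as n/k, where k is the largest integer
    with a^k <= b^n (exact integer arithmetic, per the limit above)."""
    bn = b ** n
    k, ak = 0, 1
    while ak * a <= bn:
        ak *= a
        k += 1
    return n / k

print(log_via_powers(10, 2, 1000))   # ~3.3223
print(math.log(10, 2))               # log_2(10) ~ 3.3219, for comparison
[/code]

The point of the analogy is just that a crude integer "sandwich" already pins the constant down; the tetrational version below needs more care.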

By analogy, for tetration, we're going to tetrate them each a large number of times. However, as you will see, tetration to integer powers won't work, not if we want to find the superlogarithmic constant. If the superlogarithmic constant isn't an integer, you can only approximate without an exact solution for one of the bases. In other words, in almost all cases, we must have an exact solution for fractional iteration for at least one of the bases. That doesn't mean the constant doesn't exist, only that we can't uniquely determine its value without an exact solution for some base.

\( \boxed{\;{}^{x} a \;=\; \lim_{n \to \infty} log_a^{\circ n}\left({}^{\left(n+x+\mu_b(a)\right)} b\right)\;} \)
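To see what a finite-n evaluation of the boxed formula might look like in practice, here's a rough Python sketch. Everything in it is a stand-in: tet is just the naive linear interpolation on the critical interval (not an exact solution), mu is passed in as a free parameter, and the 1.1 in the example is a made-up placeholder rather than a claimed constant; posts #5 and #9 below discuss how mu_b(a) itself gets pinned down.

[code]
import math

def tet(b, h):
    """Stand-in real-height tetration ^h b for h > -2: exact at integer
    heights, naive linear interpolation on the critical interval (-1, 0]."""
    if h > 0:
        return b ** tet(b, h - 1)
    if h > -1:
        return 1.0 + h            # so tet(b, 0) = 1 and tet(b, 1) = b
    return math.log(tet(b, h + 1), b)

def convert_base(a, b, mu, x, n):
    """Finite-n version of the boxed formula:
       ^x a  ~=  log_a applied n times to ^(n+x+mu) b."""
    v = tet(b, n + x + mu)        # the tower overflows floats quickly, so n stays small
    for _ in range(n):
        v = math.log(v, a)
    return v

# e.g. a rough ^0.5 e built from base-2 towers, with a placeholder mu:
print(convert_base(math.e, 2, 1.1, 0.5, 3))
[/code]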

In a twist of irony, the logarithmic constant for exponentiation (hyper-3) is multiplicative (hyper-2), but the superlogarithmic constant for tetration (hyper-4) is additive (hyper-1). And I say it's a "superlogarithmic" constant, but it should not be confused with \( slog_b(a) \). I think they're related, but I haven't pinned down the nature of the relationship yet. This will require more study.

Moving along... Because logarithmic constants are multiplicative, we have:

\( log_a({c}) = log_a(b)\ \times\ log_b({c}) \)

On the other hand, since superlogarithmic constants are additive, we have:

\( \mu_a({c}) = \mu_a(b)\ +\ \mu_b({c}) \)

Edit: Boxed my base conversion formula, so it's easier to pick out later on when I revisit this thread.
#2
Now to back up my claim with some dense math. The conversions are all basic, so high school level calculus should be sufficient to follow (if you take your time), except for the new notation for tetration, which we should all be familiar with if we're visiting this forum.

To see that the constant \( \mu_b(a) \) exists, and furthermore that its value converges superexponentially, i.e., with an error on the order of \( \frac{1}{{}^{n} e} \), consider the following. Note that this is simply extending my previous calculations to generic bases a and b.

First, from the definition of the conversion formula I gave above, for a fixed large n, we get:
\( \begin{eqnarray}
{\Large ^{\normalsize x} a} & = &
{\Large log_a^{\circ n}\left({}^{\left(\normalsize n+x+\mu_b(a)\right)} b\right)}
\end{eqnarray} \)

Now, let's see what happens if we increase n by 1.

\( \begin{eqnarray}
{\Large ^{\normalsize x} a} & = &
{\Large log_a^{\circ \normalsize (n+1)}\left({}^{\left(\normalsize (n+1)+x+\mu_b(a)\right)} b\right)} \\
& = & {\Large log_a^{\circ \normalsize (n+1)}\left(b^{\left({}^{\left(\normalsize n+x+\mu_b(a)\right)} b\right)}\right)} \\
& = & {\Large log_a^{\circ n}\left(log_a \left(a^{\left( log_a(b) \times \left({}^{\left(\normalsize n+x+\mu_b(a)\right)} b\right)\right)}\right)\right)} \\

& = & {\Large log_a^{\circ n}\left({log_a(b) + \left({}^{\left(\normalsize n+x+\mu_b(a)\right)} b\right)}\right)} \\
& = & {\Large log_a^{\circ n}\left(\left({}^{\left(\normalsize n+x+\mu_b(a)\right)} b\right)\ \times\ \left(1\ +\ \frac{log_a(b)}{{}^{\left(\normalsize n+x+\mu_b(a)\right)} b}\right)\right)} \\

& = & {\Large log_a^{\circ n} \left({}^{\left(\normalsize n+x+\mu_b(a)\right)} b\right)\ +\ \epsilon_{\small 1}},\text{ given }\epsilon_{\small 1}\ \approx\ \frac{log_a(b)}{{}^{\left(\normalsize n+x+\mu_b(a)\right)} b} \to 0
\end{eqnarray}
\)

However, if you're not as convinced as I am, consider taking it to n+2:

\( \begin{eqnarray}
{\Large ^{\normalsize x} a} & = &
{\Large log_a^{\circ \normalsize (n+2)}\left({}^{\left(\normalsize (n+2)+x+\mu_b(a)\right)} b\right)} \\

& = & {\Large log_a^{\circ \normalsize (n+2)}\left(b^{\left(b^{\left({}^{\left(\normalsize n+x+\mu_b(a)\right)} b\right)}\right)}\right)} \\

& = & {\Large log_a^{\circ \normalsize (n+2)}\left(a^{log_a(b)\times \left(a^{log_a(b) \times \left({}^{\left(\normalsize n+x+\mu_b(a)\right)} b\right)}\right)}\right)} \\

& = & {\Large log_a^{\circ \normalsize (n+1)}\left(log_a(b)\ +\ a^{\left(log_a(b) \times a^{\left({}^{\left(\normalsize n+x+\mu_b(a)\right)} b\right)}\right)}\right)} \\

& = & {\Large log_a^{\circ \normalsize (n+1)}\left(\left(a^{\left(log_a(b) \times a^{\left({}^{\left(\normalsize n+x+\mu_b(a)\right)} b\right)}\right)}\right)\ \times\ (1+\epsilon_2)\right)} \\

& = & {\Large log_a^{\circ n}\left(\left(log_a(b)\ +\ a^{\left({}^{\left(\normalsize n+x+\mu_b(a)\right)} b\right)}\right)\ +\ log_a(1+\epsilon_2)\right)} \\

& = & {\Large log_a^{\circ n}\left(\left(\left({}^{\left(\normalsize n+x+\mu_b(a)\right)} b\right)\ \times (1+\epsilon_1)\right)\ +\ \epsilon_{2'}\right)} \\

& = & {\Large log_a^{\circ n} \left({}^{\left(\normalsize n+x+\mu_b(a)\right)} b\right)\ +\ \epsilon_{\small 1'}},\text{ where }\epsilon_{\small 1'} \approx \epsilon_1 (1+log_b(a)\epsilon_{2'})
\end{eqnarray}
\)

Notice that \( \epsilon_{2'} \ll \epsilon_1 \), thus supporting the claim that this superlogarithmic constant converges superexponentially. For base e, once you've found it to within 1 part in 1000, the next iteration will get you accuracy of 1 part in e^1000. The one after that will get you within 1 part in e^(e^1000). Of course, machine precision will necessarily cut your ascent off pretty early.
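For what it's worth, the claim is easy to poke at numerically. The sketch below (plain Python; big integers keep the towers exact) uses a = e, b = 2, and an integer offset y as a stand-in for x + mu_b(a), and just watches the finite-n values settle:

[code]
import math

def tower(b, n):
    """Exact integer tetration ^n b."""
    t = 1
    for _ in range(n):
        t = b ** t
    return t

def approx(n, y=1, b=2):
    """log_e applied n times to ^(n+y) b -- the finite-n analogue of the
    boxed formula, with the integer y standing in for x + mu_b(a)."""
    v = tower(b, n + y)
    for _ in range(n):
        v = math.log(v)
    return v

for n in range(1, 5):   # ^6 2 = 2^(2^65536) is already far too large to build
    print(n, approx(n))
# prints roughly 1.386, 1.020, 0.878, 0.864; each further step shrinks the
# remaining correction by roughly the size of the tower near the top, so it
# drops below double precision almost immediately after this.
[/code]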
#3
As I mentioned, for integer values, we can solve exactly. The inverse function \( \mu_e^{\small -1}(x) \) can be solved for integer values, as shown below:

\( \begin{eqnarray}
\mu_e^{\small -1}(-3) & \approx & 1.6882141580329708209627762045
\\
\mu_e^{\small -1}(-2) & \approx & 1.8030232398553705179704668746
\\
\mu_e^{\small -1}(-1) & \approx & 2.0404716999485556015209199031
\\
\mu_e^{\small -1}(0) & \approx &
2.7182818284590452353602874713 = e
\\
\mu_e^{\small -1}(1) & \approx & 7.514088623758291009709030684
\\
\mu_e^{\small -1}(2) & \approx & 3814263.950501952154234481426
\end{eqnarray}
\)

For base 2, we have:
\( \begin{eqnarray}
\mu_2^{\small -1}(-2) & \approx & 1.678120055270763107817891
\\
\mu_2^{\small -1}(-1) & \approx & 1.78467418460380830116416988
\\
\mu_2^{\small -1}(0) & = & 2 \\
\mu_2^{\small -1}(1) & \approx & 2.5861097400293629228959
\\
\mu_2^{\small -1}(2) & \approx & 6.121264365801078940145846
\end{eqnarray}
\)

For base 10:
\( \begin{eqnarray}
\mu_{10}^{\small -1}(-2) & \approx & 2.09007667990965647973914
\\
\mu_{10}^{\small -1}(-1) & \approx & 2.894243914558514321612672 \\
\mu_{10}^{\small -1}(0) & = & 10 \\
\mu_{10}^{\small -1}(1) & \approx & 9999999990.00000000043429448210153789
\end{eqnarray}
\)

Much like the tetration function itself, it would seem that this function, whatever it is, is iteratively definable for integer steps, but who knows how to interpolate in between? With an exact solution for a particular base, we may get an answer.

One careful observation as x decreases to negative infinity:

\( \lim_{x \to -\infty} \mu_e^{\small -1}(x) = \eta \)

Also note that it would appear to grow as quickly as tetration itself as we move into positive territory. At a glance, we can tell that for x > 2, \( \mu_e^{\small -1}(x) \) should be almost exactly \( {}^{(x+1)} e \), if you only look at the order of magnitude. More succinctly, we can state:

\( ln \left(\mu_e^{\small -1}(x)\right) \approx {}^{x} e \)
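A quick sanity check of that last relation, using nothing but the base-e table above (the two dictionary entries are copied straight from that table):

[code]
import math

mu_inv = {1: 7.514088623758291, 2: 3814263.950501952}   # from the table above

def e_tower(x):
    """Integer tetration ^x e."""
    t = 1.0
    for _ in range(x):
        t = math.e ** t
    return t

for x, v in mu_inv.items():
    print(x, math.log(v), e_tower(x))
# x = 1: ln(7.514...) = 2.02 vs ^1 e = 2.72 -- only a rough match this low;
# x = 2: ln(3814263.95...) and ^2 e are both about 15.1543 -- already very close.
[/code]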

I'm curious to see this function graphed, once we've put together the tools to solve the various bases. We're almost there. This graph would appear to grow hyperexponentially to the right, but asymptotically approaches eta to the left. In some ways, I bet this graph looks more interesting than the tetration graph, because the curvature should always be positive.

For tetration, remember, there's an inflection point. To the right of the inflection point, the graph becomes more and more hyperexponential, i.e., repeated exponentiations of the (for intuitive purposes) approximately linear critical interval, while to the left of the inflection point, the graph becomes more and more hyperlogarithmic (repeated logarithms of the critical interval).
#4
Quote:I'm curious to see this function graphed, once we've put together the tools to solve the various bases. We're almost there. This graph would appear to grow hyperexponentially to the right, but asymptotically approaches eta to the left. In some ways, I bet this graph looks more interesting than the tetration graph, because the curvature should always be positive.
Random guess, before I go to bed. This function seems very similar to the cheta function. It's asymptotic to the left, hyperexponential to the right. The asymptote itself is different, so there may be some scaling involved, and possibly some other small changes. Something for me to ponder over the next few days...
#5
jaydfox Wrote:First, we need the constant of base conversion. It's essentially a form of superlogarithmic constant. Think of it as the equivalent of the constant \( log_b(a) \) used for converting \( a^z = b^{z \times log_b(a)} \), assuming a and b are positive real numbers. We can find \( log_b(a) \) by taking a and b to very high integer powers:

\( log_b(a)\ =\ \lim_{n \to \infty} \left(\frac{n}{k}\right),\ a^k \le b^n \le a^{k+1} \)

By analogy, for tetration, we're going to tetrate them each a large number of times. However, as you will see, tetration to integer powers won't work, not if we want to find the superlogarithmic constant. If the superlogarithmic constant isn't an integer, you can only approximate without an exact solution for one of the bases. In other words, in almost all cases, we must have an exact solution for fractional iteration for at least one of the bases. That doesn't mean the constant doesn't exist, only that we can't uniquely determine its value without an exact solution for some base.

\( \boxed{\;{}^{x} a \;=\; \lim_{n \to \infty} log_a^{\circ n}\left({}^{\left(n+x+\mu_b(a)\right)} b\right)\;} \)

In a twist of irony, the logarithmic constant for exponentiation (hyper-3) is multiplicative (hyper-2), but the superlogarithmic constant for tetration (hyper-4) is additive (hyper-1). And I say it's a "superlogarithmic" constant, but it should not be confused with \( slog_b(a) \). I think they're related, but I haven't pinned down the nature of the relationship yet. This will require more study.
I should have trusted my first instinct. I called it a "superlogarithmic constant". As it turns out:

\( \mu_e(b)\ =\ \lim_{z \to \infty} slog_e(z)-slog_b(z) \)

And there you have it. We've got an exact formula for base conversion of tetration, and an exact formula for finding the superlogarithmic constant. But these two facts together are only sufficient to solve for integer tetration and integer superlogarithms (i.e., where \( slog_b(x) = n \), with n an integer).

We need only 1 exact solution to fill in all the gaps. But the solution must be unique. If we find "a" solution that is not "the" solution, then we get the wrong solution for all bases. In theory, if we find "the" solution for any base, we've found it for all of them, because we have an exact base conversion formula.
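For what it's worth, the slog-difference formula is easy to experiment with. The sketch below (Python; the crude slog just counts logarithms until the value lands in (0, 1] and interpolates linearly there, which is an assumption, not the exact solution this thread is after) estimates mu_e(2) and mu_e(10), and also shows where the additivity law from post #1 comes from: at any fixed z, the slog_2 terms cancel identically.

[code]
import math

def slog(b, z):
    """Crude super-logarithm base b: count logs until z lands in (0, 1],
    then interpolate linearly.  Only a stand-in for an exact slog."""
    n = 0
    while z > 1.0:
        z = math.log(z, b)
        n += 1
    return n + (z - 1.0)

def mu(a, b, z=1e100):
    """mu_a(b) ~= slog_a(z) - slog_b(z) for large z (the formula above)."""
    return slog(a, z) - slog(b, z)

print(mu(math.e, 2))               # roughly -1.2 with this crude slog
print(mu(math.e, 10))              # roughly  1.2
print(mu(math.e, 2) + mu(2, 10))   # identical to mu(math.e, 10): additivity
[/code]

The exact digits depend on the interpolation used on the critical interval, which is exactly the point above: without an exact solution for some base, only approximate information about the constant is available.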

Edit: It really bugs me that all the tags in TeX start with backslashes, but the closing [/tex] tag starts with a forward slash.
#6
I believe that you have a lot of good ideas, but actually I cannot quite follow your explanations. Most of your formulas come without any justification or proof.

For your formula
\( {}^x a = \lim_{n \to \infty} \log_a^{\circ n}\left({}^{\left( n+x+\mu_b(a)\right)} b\right) \)
to yield a proper tetration, the resulting \( {}^xa \) must at least satisfy the identities
\( ^{x+1}a=a^{{}^xa} \) and \( {}^1 a=a \)
if \( {}^xb \) satisfies them. Can you verify this first?

Then I don't understand how you compute \( \log_a^{\circ n} AB=\log_a^{\circ n}(A)+\epsilon_1 \). It is always unclear what you mean by an "exact" solution. Isn't everything in mathematics exact unless we say it's approximate? A limit, for example, is exact if it exists, regardless of what difficulties we have with numeric computation.

A side note: if you write \log in TeX then you get the properly displayed log, instead of the multiplication of the letters l, o and g.
#7
bo198214 Wrote:Then I don't understand how you compute \( \log_a^{\circ n} AB=\log_a^{\circ n}(A)+\epsilon_1 \).

This is one of the logarithmic laws:

\( \log(AB) = \log(A)+\log(B) \)

Then, we know that \( \log(1)=0 \), and that \( \log(1+\epsilon) \approx \log_b(e)\ \times\ \epsilon \). As epsilon goes to 0, this approximation becomes exact.

Therefore, if B is (1+epsilon), then we can state:
\( \log(AB) = \log(A) + \log(B) = \log(A) + \epsilon \)

Quote:It is always unclear what you mean by an "exact" solution. Isn't everything in mathematics exact unless we say it's approximate? A limit, for example, is exact if it exists, regardless of what difficulties we have with numeric computation.

I mean exact only when the limits I provide are taken to infinity. Otherwise, they are definitely approximate, just like I say the following value is "approximately" equal to e for a finite n:

\( e \approx \left(1+\frac{1}{n}\right)^n \)

Do you see the distinction? For numerical computation purposes, the solution will always be approximate. However, I still claim exactness when the specified limits are taken to infinity.
#8
jaydfox Wrote:
bo198214 Wrote:Then I don't understand how you compute \( \log_a^{\circ n} AB=\log_a^{\circ n}(A)+\epsilon_1 \).

This is one of the logarithmic laws:

\( \log(AB) = \log(A)+\log(B) \)

Then, we know that \( \log(1)=0 \), and that \( \log(1+\epsilon) \approx \log_b(e)\ \times\ \epsilon \). As epsilon goes to 0, this approximation becomes exact.

Therefore, if B is (1+epsilon), then we can state:
\( \log(AB) = \log(A) + \log(B) = \log(A) + \epsilon \)
Never mind, I see what you meant. The epsilon needs to be inside the parentheses. I'll post an update when I get a chance.

Essentially, the correction is:

\( \log_a^{\circ n} AB=\log_a^{\circ (n-1)}\left(\log_a(A)+\epsilon_1\right) \)

Again, it doesn't affect the limit case.
#9
bo198214 Wrote:I believe that you have a lot of good ideas, but actually I cannot quite follow your explanations. Most of your formulas come without any justification or proof.

For your formula
\( {}^x a = \lim_{n \to \infty} \log_a^{\circ n}\left({}^{\left( n+x+\mu_b(a)\right)} b\right) \)
to yield a proper tetration, the resulting \( {}^xa \) must at least satisfy the identities
\( ^{x+1}a=a^{{}^xa} \) and \( {}^1 a=a \)
if \( {}^xb \) satisfies them. Can you verify this first?
Sorry, I thought it was implicit in the formula, but I've been looking at this for several days, so perhaps I just took it for granted:

\( \begin{eqnarray}
{}^x a & = & \lim_{n \to \infty} \log_a^{\circ n}\left({}^{\left( n+x+\mu_b(a)\right)} b\right) \\
\\[15pt]

\\
{}^{x+1} a & = & \lim_{n \to \infty} \log_a^{\circ n}\left({}^{\left( n+(x+1)+\mu_b(a)\right)} b\right) \\
& = & \lim_{n \to \infty} \log_a^{\circ n}\left({}^{\left( (n+1)+x+\mu_b(a)\right)} b\right) \\
& = & \lim_{n \to \infty} {\Large a}^{\left[ \log_a^{\circ (n+1)}\left({}^{\left( (n+1)+x+\mu_b(a)\right)} b \right) \right]} \\
& = & \lim_{n \to \infty} {\Large a}^{\left[ \log_a^{\circ (n-1)}\left( \log_a \left({}^{\left( n+x+\mu_b(a)\right)} b \right)+\epsilon_1 \right) \right]} \\
& \approx & \lim_{n \to \infty} {\Large a}^{\left[ \log_a^{\circ n}\left({}^{\left( n+x+\mu_b(a)\right)} b \right) \right]} \\
& \approx & \lim_{n \to \infty} {\Large a}^{\left[{}^{x} a\right]} \\
\end{eqnarray}
\)

The base case \( {}^1 a=a \) is guaranteed by the definition of \( \mu_b(a) \). If you run the formula and you don't get \( {}^1 a=a \), then your constant \( \mu_b(a) \) needs to be adjusted.
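To make that adjustment step concrete, here's a self-contained Python sketch that bisects on mu_b(a) until the finite-n formula returns ^1 a = a. It reuses the same naive linear-interpolation stand-in for tetration as the sketch back in post #1, and n and the bisection bracket are kept deliberately small so the towers stay inside floating-point range; the number it prints is therefore only an estimate tied to those stand-ins, not the "exact" constant the thread is after.

[code]
import math

def tet(b, h):
    """Naive real-height tetration: exact at integer heights,
    linear interpolation on the critical interval (-1, 0]."""
    if h > 0:
        return b ** tet(b, h - 1)
    if h > -1:
        return 1.0 + h
    return math.log(tet(b, h + 1), b)

def converted_a(a, b, mu, n):
    """Finite-n base-change estimate of ^1 a built from base-b towers."""
    v = tet(b, n + 1 + mu)
    for _ in range(n):
        v = math.log(v, a)
    return v

def solve_mu(a, b, n=2, lo=0.5, hi=1.7):
    """Bisect on mu until converted_a(...) = a.  The bracket [lo, hi] is
    hand-picked for a = e, b = 2 so nothing overflows; a serious version
    would use arbitrary precision and a larger n."""
    f = lambda m: converted_a(a, b, m, n) - a
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

print(solve_mu(math.e, 2))   # roughly 1.15: an estimate of mu_2(e) under these stand-ins
[/code]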
#10
I'm going to re-write your equation in the form:

\( {}^{x}{a} = \lim_{n \rightarrow \infty} \log_a^{[n]}(\exp_b^{[n]}({}^{x+\mu_b(a)}{b})) \)

Using this form of your equation, as \( n \rightarrow \infty \) then \( \exp_b^{[n]}(\cdots) \) becomes \( {}^{\infty}b \) and \( \log_a^{[n]}(\cdots) \) becomes \( {}^{-\infty}a \). This then implies:

\( {}^{x}{a} = {}^{-\infty}a \)

which is strictly not true. If you want to make a change-of-base formula for tetration, at least make one that is consistent. This one is not. I've spent a great deal of time looking for a change-of-base formula, and I'm convinced that one does not exist. It could be that I forgot something about the limit process and the simplifications above do not occur; we'll have to investigate in more detail.

Andrew Robbins

