Tetration Forum

Full Version: Rational operators (a {t} b); a,b > e solved
(06/08/2011, 08:32 PM)sheldonison Wrote: [ -> ]So we have a definition for t=0..2 (addition, multiplication, exponentiation), for all values of a. Nice!
\( a\{t\}e = \exp_\eta^{\circ t}(\exp_\eta^{\circ -t}(a)+e) \)

So, now, I'm going to define the function f(b), which returns the base that has "b" as a fixed point.
\( f(e)=\eta \), since e is the fixed point of b=\( \eta \)
\( f(2)=\sqrt{2} \), since 2 is the lower fixed point of b=sqrt(2)
\( f(3)=1.442249570 \), since 3 is the upper fixed point of this base
\( f(4)=\sqrt{2} \), since 4 is the upper fixed point of b=sqrt(2)
\( f(5)=1.379729661 \), since 5 is the upper fixed point of this base
I don't know how to calculate the base from the fixed point, but that's the function we need, and we would like the function to be analytic! This also explains why the approximation using base eta works pretty well, since the base we're going to use isn't going to be much smaller than eta as b gets bigger or smaller than e.

Now, we use this new function in place of eta, in James's equation. Here, f=f(b).
\( a\{t\}b = \exp_f^{\circ t}(\exp_f^{\circ-t}(a)+b) \)
- Sheldon
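For concreteness, here is a quick numerical check (my own sketch, not part of the post) that this formula, taking f(b) = b^(1/b) (the expression Bo gives further down for the base with fixed point b), already reduces to addition, multiplication and exponentiation at the integer points t = 0, 1, 2; the key fact is that \( f^{\,b} = b \), so no fractional iterates are needed yet.

Code:
import math

def op(a, t, b):
    # a {t} b = exp_f^t( exp_f^(-t)(a) + b ) with f = b^(1/b), integer t >= 0 only
    f = b ** (1.0 / b)
    log_f = lambda x: math.log(x) / math.log(f)
    x = a
    for _ in range(t):          # exp_f^(-t)(a): iterate log_f  t times
        x = log_f(x)
    x = x + b
    for _ in range(t):          # then apply exp_f  t times
        x = f ** x
    return x

a, b = 4.0, 3.0
print(op(a, 0, b), a + b)       # addition
print(op(a, 1, b), a * b)       # multiplication, since f^b = b
print(op(a, 2, b), a ** b)      # exponentiation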


Woah, I wonder what consequences this will have on the algebra.


I guess \( \log_f^{\circ q}(a\,\,\bigtriangleup_{1+q}\,\,b) = b\log_f^{\circ q}(a) \)
\( \log_f^{\circ q}(a\,\,\bigtriangleup_{q}\,\,b) = \log_f^{\circ q}(a) + b \)

which I guess isn't too drastic.

But the question is, of course, does the following still hold for \( 0 \le q \le 1 \):
\( a\,\,\bigtriangleup_{1+q}\,\,2 = a\,\,\bigtriangleup_q\,\,a \)

\( \exp_{2^{\frac{1}{2}}}^{\circ 1+q}(\exp_{2^{\frac{1}{2}}}^{\circ -q-1}(a) + 2) \neq \exp_{a^{\frac{1}{a}}}^{\circ q}(a + a) \), so no, it doesn't. That's not good; we want operators to be recursive.

And I'm unsure if the inverse is still well defined, so I think we lose:
\( (a\,\,\bigtriangleup_{1+q}\,\, -1)\,\,\bigtriangleup_q\,\,a = S(q) \)
where S(q) is the identity function. We may even lose the identity function altogether, which is really bad.

We also lose:
\( (a\,\,\bigtriangleup_{1+q}\,\,b)\,\,\bigtriangleup_{1+q}\,\, c = a\,\,\bigtriangleup_{1+q}\,\,bc \)
\( (a\,\,\bigtriangleup_{1+q}\,\,b)\,\,\bigtriangleup_q\,\,(a\,\,\bigtriangleup_{1 + q} \,\,c) = a\,\,\bigtriangleup_{1+q}\,\,b+c \)

These are too many valuable qualities to lose by redefining semi-operators the way you do. Sure, it's analytic over \( (-\infty, 2] \), but it loses all the traits that make it an operator in the first place. I'm going to have to stick with the original definition of \( \vartheta \), which isn't fully analytic.

However, I am willing to concede the idea of changing from base eta to base root 2.

That is to say if we define:

\( \vartheta(a,b,\sigma) = \exp_{2^{\frac{1}{2}}}^{\circ \sigma}(\exp_{2^{\frac{1}{2}}}^{\circ -\sigma}(a) + h_b(\sigma)) \)

\( h_b(\sigma)=\left\{\begin{array}{c l}
\exp_{2^{\frac{1}{2}}}^{\circ -\sigma}(b) & \sigma \le 1\\
\exp_{2^{\frac{1}{2}}}^{\circ -1}(b) & \sigma \in [1,2]
\end{array}\right. \)

This will give the time-honoured result, and in my view aesthetic necessity, of:
\( \vartheta(2, 2, \sigma) = 2\,\,\bigtriangleup_\sigma\,\, 2 = 4 \) for all \( \sigma \)

I like this also because it makes \( \vartheta(a, 2, \sigma) \) and \( \vartheta(a, 4, \sigma) \) potentially analytic over \( (-\infty, 2] \), since 2 and 4 are fixed points.
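For what it's worth, here is a minimal numerical sketch (mine, not code from the thread) of \( \vartheta \) for \( \sigma \le 1 \), using plain regular (Koenigs) iteration of \( F(x) = \sqrt{2}^{\,x} \) at its lower fixed point 2 as a stand-in for the fractional iterates. It is only meaningful for arguments below the upper fixed point 4, and the accuracy is crude, but it is enough to see \( \vartheta(2,2,\sigma) = 4 \) numerically.

Code:
import math

B   = math.sqrt(2.0)            # base sqrt(2)
LNB = math.log(B)               # ln(sqrt(2))
LAM = 2.0 * LNB                 # multiplier F'(2) = ln(2), attracting

def F(x):     return B ** x               # F(x) = sqrt(2)^x
def F_inv(x): return math.log(x) / LNB    # log base sqrt(2)

def exp_frac(t, x, n=40):
    # crude exp_{sqrt(2)}^{o t}(x) by regular iteration at the fixed point 2;
    # valid only for x below the upper fixed point 4
    for _ in range(n):
        x = F(x)                          # push toward the fixed point 2
    y = 2.0 + LAM ** t * (x - 2.0)        # rescale the deviation by lam^t
    for _ in range(n):
        y = F_inv(y)                      # pull back out
    return y

def theta(a, b, sigma):
    # theta(a, b, sigma) for sigma <= 1, i.e. a triangle_sigma b with base sqrt(2)
    return exp_frac(sigma, exp_frac(-sigma, a) + exp_frac(-sigma, b))

print(theta(2.0, 1.5, 0.0))       # ~ 3.5  (sigma = 0: addition)
print(theta(2.0, 1.5, 1.0))       # ~ 3.0  (sigma = 1: multiplication)
for s in (0.25, 0.5, 0.75):
    print(s, theta(2.0, 2.0, s))  # ~ 4 for every sigma, as claimed above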

I also propose writing

\( a\,\,\bigtriangle_\sigma^f\,\,b = \exp_f^{\circ \sigma}(\exp_f^{\circ -\sigma}(a) + h_b(\sigma)) \)

\( h_b(\sigma)=\left\{\begin{array}{c l}
\exp_f^{\circ -\sigma}(b) & \sigma \le 1\\
\exp_f^{\circ -1}(b) & \sigma \in [1,2]
\end{array}\right. \)

Sheldon's analytic function is then:
\( g(\sigma) = a\,\,\bigtriangle_\sigma^{b^{\frac{1}{b}}}\,\,b \)

That's still very pretty though: \( g \) isn't piecewise over \( (-\infty, 2] \) and is potentially analytic.
Hi James,

I do not really know whether the following matches your input here, but screening through older discussions I just found an older post of Mike's (I'd saved it by copying from google.groups). He observed the following and asked:
\( \begin{eqnarray}
t_1 &=& \log(3) &\approx& 1.09861228867 \\
t_2 &=& \log(\log(3^3)) &\approx& 1.19266011628 \\
t_3 &=& \log(\log(\log(3^{3^3}))) &\approx& 1.22079590713 \\
&\vdots& \\
t_{n\to \infty} &\to& \text{constant (which?)} &\approx& 1.22172930187
\end{eqnarray} \)
Henryk had answered with a proof of convergence and of the rate of convergence. I had an idea to reformulate this using somehow "lower degree" operators than addition, but could not make it any better computable, so I didn't pursue it further at the time.

If I get your approach right, this can be used for such "lower order" operators? Say
\( \begin{eqnarray}
a &+& b &=& a + b \\
a &+_{-1}& b &=& \log(e^a + e^b) \\
a &+_{-2}& b &=& \log(\log(e^{e^a} + e^{e^b}))
\end{eqnarray} \)
and writing \( L^{\circ h}(3) \) for the h-fold iterated log of 3, Mike's limit can be expressed as
\( \begin{eqnarray}
t_1 &=& L^{\circ 1}(3) \\
t_2 &=& L^{\circ 1}(3) + L^{\circ 2}(3) \\
t_3 &=& L^{\circ 1}(3) + L^{\circ 2}(3) +_{-1} L^{\circ 3}(3) \\
t_4 &=& L^{\circ 1}(3) + L^{\circ 2}(3) +_{-1} L^{\circ 3}(3) +_{-2} L^{\circ 4}(3) \\
&\vdots& \\
t_{n\to \infty} &\to& \text{constant}
\end{eqnarray} \)
where the operator-precedence is lower the more negative the index at the plus is (so we evaluate it from the left).
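A quick numerical check of this chain (my own sketch). Note that \( L^{\circ 4}(3) \) is not real, because \( \ln\ln\ln 3 \) is already negative, so the sketch simply works with the complex log; the imaginary parts cancel down to rounding noise and the real parts reproduce Mike's values.

Code:
import cmath, math

L = [None, cmath.log(3)]          # L[h] = h-fold iterated log of 3 (complex log)
for h in range(2, 5):
    L.append(cmath.log(L[h - 1]))

def add_low(a, b, k):
    # lower-order addition  a +_{-k} b = log^k( exp^k(a) + exp^k(b) )
    ea, eb = a, b
    for _ in range(k):
        ea, eb = cmath.exp(ea), cmath.exp(eb)
    s = ea + eb
    for _ in range(k):
        s = cmath.log(s)
    return s

t2 = L[1] + L[2]                  # ordinary addition
t3 = add_low(t2, L[3], 1)         # t2 +_{-1} L^{o3}(3)
t4 = add_low(t3, L[4], 2)         # t3 +_{-2} L^{o4}(3)

print(t2.real, math.log(math.log(3 ** 3)))                # both ~ 1.19266011628
print(t3.real, math.log(math.log(math.log(3.0 ** 27))))   # both ~ 1.22079590713
print(t4.real)                                            # ~ 1.22172930187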

First question: is this in fact an application of your "rational operator"?

And if so, then the second question: does this help to evaluate it to a greater depth of iteration than we can reach by log and exp alone (iteration 4 or 5 at most, I think)?

Gottfried

cite:
Quote:In article
<20f2d3ea-1e87-45dc-86b2-a917f89e9370@i18g2000pro.googlegroups.com>,
mike3 <mike4ty4@yahoo.com> wrote:

> Hi.
>
> I noticed this.
>
> log(3) ~ 1.098612288668109691395245237
> log(log(3^3)) ~ 1.192660116284808707569579569
> log(log(log(3^3^3))) ~ 1.220795907132767865324020633
> log(log(log(log(3^3^3^3)))) ~ 1.221729301870251716316203810

> (calculated indirectly via identity log(x^y) = y log(x).)
> log(log(log(log(log(3^3^3^3^3))))) ~ 1.221729301870251827504003124

> (calculated indirectly via identity log(log(x^x^y)) = y log(x) + log
> (log(x)).)
>
> It seems to be stabilizing on some weird value, around 1.2217293.
> What is this? And we seem to run out of log identities here making
> it infeasible to compute further approximations.
>
> Has this been examined before?
(06/11/2011, 02:33 PM)Gottfried Wrote: [ -> ]Hi James,

\( \begin{eqnarray}
a &+& b &=& a + b \\
a &+_{-1}& b &=& \log(e^a + e^b) \\
a &+_{-2}& b &=& \log(\log(e^{e^a} + e^{e^b}))
\end{eqnarray} \)

Yes this is right:
\( a\,\,\bigtriangleup_{-1}^e\,\,b = \ln(e^a+e^b) \)

I did a lot of investigation into this operator (well, as best I could).

Quote:and writing \( L^{\circ h}(3) \) for the h-fold iterated log of 3, Mike's limit can be expressed as
\( \begin{eqnarray}
t_1 &=& L^{\circ 1}(3) \\
t_2 &=& L^{\circ 1}(3) + L^{\circ 2}(3) \\
t_3 &=& L^{\circ 1}(3) + L^{\circ 2}(3) +_{-1} L^{\circ 3}(3) \\
t_4 &=& L^{\circ 1}(3) + L^{\circ 2}(3) +_{-1} L^{\circ 3}(3) +_{-2} L^{\circ 4}(3) \\
&\vdots& \\
t_{n\to \infty} &\to& \text{constant}
\end{eqnarray} \)
where the operator-precedence is lower the more negative the index at the plus is (so we evaluate it from the left).

First question: is this in fact an application of your "rational operator"?

Well, it appears to be. I'm floored; I'm terrible at finding applications.

Quote:And if so, then the second question: does this help to evaluate it to a greater depth of iteration than we can reach by log and exp alone (iteration 4 or 5 at most, I think)?

Well, not so far, since the calculations involved in lower-order operators rely on iterations of exp. However, I investigated whether \( f(x) = a\,\,\bigtriangleup_{-1}^e\,\,x \) is analytic (which should help with calculations), but I only made it to the sixth or seventh derivative before I realized I wasn't going to recognize the pattern. The thread is here: http://www.mymathforum.com/viewtopic.php?f=23&t=20993 .
I assume it is analytic (the function looks analytic when graphed, if that's any argument). Also, I think that if \( a\,\,\bigtriangleup_{-1}^e\,\,x \) is analytic, then \( a\,\,\bigtriangleup_{-2}\,\,x \) is probably analytic as well, since it's basically the same function, just with faster convergence to y = x and a higher starting point at negative infinity.
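As an aside, \( a\,\,\bigtriangleup_{-1}^e\,\,x = \ln(e^a + e^x) \) is just the familiar log-sum-exp function, which is indeed real-analytic (the argument of the outer log is always positive for real inputs). A tiny numerical illustration of the asymptotic behaviour described above (my own sketch):

Code:
import math

def tri_m1(a, x):
    # a triangle_{-1} x with base e:  ln(e^a + e^x)  (log-sum-exp)
    m = max(a, x)                 # shift for numerical stability
    return m + math.log(math.exp(a - m) + math.exp(x - m))

def tri_m2(a, x):
    # a triangle_{-2} x with base e:  ln(ln(e^(e^a) + e^(e^x)))
    return math.log(tri_m1(math.exp(a), math.exp(x)))

a = 2.0
for x in (-20.0, 0.0, 2.0, 20.0):
    print(x, tri_m1(a, x), tri_m2(a, x))
# tri_m1(2, x) flattens to 2 for very negative x and hugs y = x for large x;
# tri_m2 does the same but approaches y = x faster, as noted above.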
(06/06/2011, 02:45 AM)JmsNxn Wrote: [ -> ]Well alas, logarithmic semi-operators have paid off and have given a beautiful smooth curve over the domain \( (-\infty, 2] \). This solution for rational operators is given for \( t \in (-\infty, 2] \) and \( a, b \in \mathbb{R} \) with \( a, b > e \):

\(
f(t) = a\, \{t\}\, b =
\left\{
\begin{array}{c l}
\exp_\eta^{\circ t}(\exp_\eta^{\circ -t}(a) + \exp_\eta^{\circ -t}(b)) & t \in (-\infty,1]\\
\exp_\eta^{\circ t-1}(b \cdot \exp_\eta^{\circ 1-t}(a)) & t \in [1,2]\\
\end{array}
\right.
\)

This extends the Ackermann function to a real domain (given the restrictions provided).
The upper superfunction of \( \exp_\eta(x) \) is used (i.e., the cheta function):
\( \exp_\eta^{\circ t}(a) = \text{cheta}(\text{cheta}^{-1}(a) + t) \)
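As a sanity check (mine, using only integer iterates of \( \exp_\eta \), so no cheta is needed), the piecewise formula does reduce to addition, multiplication and exponentiation at t = 0, 1, 2, and the two branches agree at t = 1:

Code:
import math

E = math.e

def exp_eta(x): return math.exp(x / E)    # exp_eta(x) = eta^x = e^(x/e)
def log_eta(x): return E * math.log(x)    # its inverse

def exp_eta_iter(k, x):
    # integer iterate exp_eta^k(x); negative k means iterated log_eta
    for _ in range(abs(k)):
        x = exp_eta(x) if k > 0 else log_eta(x)
    return x

def op(a, t, b):
    # a {t} b for integer t in {0, 1, 2}, following the piecewise formula above
    if t <= 1:
        return exp_eta_iter(t, exp_eta_iter(-t, a) + exp_eta_iter(-t, b))
    return exp_eta_iter(t - 1, b * exp_eta_iter(1 - t, a))

a, b = 4.0, 3.0
print(op(a, 0, b), a + b)       # addition
print(op(a, 1, b), a * b)       # multiplication
print(op(a, 2, b), a ** b)      # exponentiation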

Logarithmic semi-operators contain infinite rings and infinite abelian groups, insofar as {t} and {t-1} always form a ring and {t-1} is always an abelian group (therefore any operator greater than {1} is not commutative and not abelian). There is an identity function S(t); however, its values occur below e and are therefore still unknown for operators less than {1} (except at negative integers, where it is a variant of infinity and therefore difficult to play with, and at 0, where it is 0). Operators greater than {1} have identity 1.

The difficulty is that if we use the lower superfunction of \( \exp_\eta(x) \) to define values less than e, we get a hump in the middle of our transformation of \( a\,\{t\}\,b \). Therefore we have difficulty defining an inverse for rational exponentiation. However, we still have a piecewise formula:

\( g(t) = a\, \}t\{\, b =
\left\{
\begin{array}{c l}
\exp_\eta^{\circ t}(\exp_\eta^{\circ -t}(a) - \exp_\eta^{\circ -t}(b)) & t \in (-\infty,1]\\
\exp_\eta^{\circ t-1}(\frac{1}{b} \exp_\eta^{\circ 1-t}(a)) & t \in [1,2]\\
\end{array}
\right. \)

Therefore rational roots, the inverse of rational exponentiation, are defined so long as \( a > e \) and \( \frac{1}{b} \exp_\eta^{\circ 1-t}(a) > e \).
Rational division and rational subtraction are possible if \( a,b > e \) and \( \exp_\eta^{\circ -t}(a) - \exp_\eta^{\circ -t}(b) > e \).

Here are some graphs. I'm sorry about their poor quality, but I'm rather new to pari-gp so I don't know how to draw graphs with it; I'm stuck using python right now. Nonetheless, here they are.

The window for these is xmin = -1, xmax = 2, ymin = 0, ymax = 100.

[Image: 3x4.png][Image: 4x3.png]
[Image: 5x3.png]

If there's any transformation someone would like to see specifically, please just ask me. I wanted to do the transformation of \( f(x) = x\, \{t\}\, 3 \) as we slowly raise t, but the graph doesn't look too good since x > e.

Some numerical values:
\( 4\,\{1.5\} 3 = 4\,\{0.5\}\,4\,\{0.5\}\, 4= 21.01351835\\
3\, \{1.25\}\, 2 = 3\, \{0.25\}\, 3 = 6.46495791\\
5\, \{1.89\}\, 3 = 5\, \{0.89\}\,5\,\{0.89\}\, 5 = 81.307046337\\ \)
(I know I'm not supposed to be able to calculate the second one, but that's the power of recursion)

I'm very excited by this; I wonder if anyone has any questions or comments?

For more on rational operators in general, see the identities they follow in this thread: http://math.eretrandre.org/tetrationforu...hp?tid=546

thanks, James

PS: thanks go to Sheldon for the Taylor series approximations of cheta and its inverse, which allowed for the calculations.

Hello, James! Hello, Everyone!
I am really interested in how you made that beautiful graph. Could you tell me, please?
Thank you very much.
(08/21/2016, 06:56 PM)Xorter Wrote: [ -> ]Hello, James! Hello, Everyone!
I am really interested in how you made that beautiful graph. Could you tell me, please?
Thank you very much.

Search for the cheta function on the forum and get its power series.
Take
\( f(t,x) = cheta(cheta^{-1}(x) + t) \)

define

\( x [t] y = f(t,f(-t,x) + f(-t,y))\,\,t<1 \)

\( x [t] y = f(t-1,yf(1-t,x))\,\,t \ge 1 \)

This is a continuous solution which is analytic for t < 1 and t > 1, with a singularity at t = 1.

Oddly enough,

\( x [t] e \) is analytic everywhere.
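In code form the recipe reads roughly as below; cheta and cheta_inv are placeholders for the cheta power series and its inverse mentioned above (they are assumed, not implemented here), so this is only a skeleton:

Code:
def f(t, x, cheta, cheta_inv):
    # f(t, x) = cheta(cheta^(-1)(x) + t)
    return cheta(cheta_inv(x) + t)

def bracket_op(x, t, y, cheta, cheta_inv):
    # x [t] y as defined above: analytic for t < 1 and t > 1, singular at t = 1
    if t < 1:
        return f(t, f(-t, x, cheta, cheta_inv) + f(-t, y, cheta, cheta_inv),
                 cheta, cheta_inv)
    return f(t - 1, y * f(1 - t, x, cheta, cheta_inv), cheta, cheta_inv)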
(08/22/2016, 12:36 AM)JmsNxn Wrote: [ -> ]
(08/21/2016, 06:56 PM)Xorter Wrote: [ -> ]Hello, James! Hello, Everyone!
I am really interested in how you made that beautiful graph. Could you tell me, please?
Thank you very much.

Search for the cheta function on the forum and get its power series.
Take
\( f(t,x) = cheta(cheta^{-1}(x) + t) \)

define

\( x [t] y = f(t,f(-t,x) + f(-t,y))\,\,t<1 \)

\( x [t] y = f(t-1,yf(1-t,x))\,\,t \ge 1 \)

This is a continuous solution which is analytic for t < 1 and t > 1, with a singularity at t = 1.

Oddly enough,

\( x [t] e \) is analytic everywhere.

Thank you for the answer!
Could you show me an example, too, please?
E.g., how can I evaluate 3[0.5]3 or 3[1.5]3?
According to James' formula I could not tetrate. For example:
2[3]3 should be 2^^3 = 16, but according to the formula:
2[3]3 = f(2,3f(-2,2)) = f(2,3*1.869...) = 18.125...
But why?
(08/29/2016, 02:06 PM)Xorter Wrote: [ -> ]According to James' formula I could not tetrate. For example:
2[3]3 should be 2^^3 = 16, but according to the formula:
2[3]3 = f(2,3f(-2,2)) = f(2,3*1.869...) = 18.125...
But why?

The formula only works for \( t \le 2 \) in x [t] y
The formula isn't really that valuable.


(06/08/2011, 09:14 PM)bo198214 Wrote: [ -> ]
(06/08/2011, 08:32 PM)sheldonison Wrote: [ -> ]So, now, I'm going to define the function f(b), which returns the base that has "b" as a fixed point.
\( f(e)=\eta \), since e is the fixed point of b=\( \eta \)
\( f(2)=\sqrt{2} \), since 2 is the lower fixed point of b=sqrt(2)
\( f(3)=1.442249570 \), since 3 is the upper fixed point of this base
\( f(4)=\sqrt{2} \), since 4 is the upper fixed point of b=sqrt(2)
\( f(5)=1.379729661 \), since 5 is the upper fixed point of this base
I don't know how to calculate the base from the fixed point, but that's the function we need, and we would like the function to be analytic!

Oh, Sheldon seems to be quite tired from all the calculation and discussion.
Sheldon, give yourself some time to rest!
Your function is \( f(z)=z^{1/z} \)
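Indeed, a one-liner reproduces Sheldon's list of bases (a quick check of my own):

Code:
import math

# f(z) = z^(1/z): the base whose fixed point is z
for z in (math.e, 2.0, 3.0, 4.0, 5.0):
    print(z, z ** (1.0 / z))
# e -> 1.44466786...  (eta)
# 2 -> 1.41421356...  (sqrt(2))
# 3 -> 1.44224957...
# 4 -> 1.41421356...  (sqrt(2) again)
# 5 -> 1.37972966...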

About those fixpoints and bases.

Bo is correct of course, but I wanted to add how to find the other fixpoint.

I end with a joke because I did NOT show which T is the correct one: it is the SMALLEST one > 1, to be precise...
Analytic continuation makes a proof of that minimality property hard. The convex nature and fast growth of exp-type functions make it intuitive, but that is IMHO unconvincing / informal / weak.
So no satisfying proof. Perhaps food for thought.

( certainly possible ! )

Anyway, here it is (writing T = t):


\( x^{1/x} = y^{1/y} \)

\( \ln(x)/x = \ln(y)/y \)

Setting \( y = Tx \):

\( \ln(x)/x = \ln(Tx)/(Tx) \)

\( T^{1/T}\, x^{1/T} = x \)

\( Tx = x^T \)

\( x^{T-1} = T \)

Of course, this gives a new but similar problem:

\( t_1^{1/(t_1 - 1)} = t_2^{1/(t_2 - 1)} \)
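A small numerical check (my own sketch) of the relation just derived: solving \( x^{T-1} = T \) for its nontrivial root and multiplying by x does recover the companion fixed point, and both fixed points give the same base \( x^{1/x} \).

Code:
import math

def companion_fixed_point(x):
    # nontrivial root T of x^(T-1) = T; the companion fixed point is then T*x.
    # (x = e is excluded: e^(T-1) = T has only the double root T = 1.)
    g = lambda T: x ** (T - 1) - T
    lo, hi = (1.000001, 50.0) if x < math.e else (1e-9, 0.999999)  # brackets the root
    for _ in range(200):                     # plain bisection
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi) * x

for x in (2.0, 3.0, 5.0):
    y = companion_fixed_point(x)
    print(x, y, x ** (1 / x), y ** (1 / y))  # e.g. x = 2 gives y = 4, both base sqrt(2)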

Regards

Tommy1729
The master
Oh dear, \( e^{T-1} = T \) for \( T \neq 0 \) gave me the idea of "fake fixpoint theory", in analogy to "fake function theory".

Apart from that funny/annoying thing, another reason is that this innocent little thing apparently messes up some proof strategies for the desired nice proof mentioned before.

Regards

Tommy1729