<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/">
	<channel>
		<title><![CDATA[Tetration Forum - Hyperoperations and Related Studies]]></title>
		<link>https://tetrationforum.org/</link>
		<description><![CDATA[Tetration Forum - https://tetrationforum.org]]></description>
		<pubDate>Tue, 12 May 2026 11:43:04 +0000</pubDate>
		<generator>MyBB</generator>
		<item>
			<title><![CDATA[Zeration and Deltation]]></title>
			<link>https://tetrationforum.org/showthread.php?tid=1806</link>
			<pubDate>Sat, 30 Aug 2025 22:05:54 +0000</pubDate>
			<dc:creator><![CDATA[<a href="https://tetrationforum.org/member.php?action=profile&uid=348">Rayanso</a>]]></dc:creator>
			<guid isPermaLink="false">https://tetrationforum.org/showthread.php?tid=1806</guid>
			<description><![CDATA[Hi, I'm a new user and I wanted to share some questions and ideas related to Zeration (the rank 0 hyper-operation) and its inverse, Deltation.<br />
<br />
On this forum I discovered GFR's post and looked at the paper, and I also used the Wayback Machine to download an old 1996 paper on hyper-operations from KAR's website.<br />
If you are not familiar with Zeration and Deltation I suggest that you read the file that GFR attached in his post.<br />
<br />
One question I was asking myself was what 0.5 * (<span style="font-weight: bold;" class="mycode_b">△</span>0) would be equal to, and whether it could be reduced to a single number. In KAR's paper, this is resolved by introducing a new unit "j", analogous to the imaginary unit for (-1)^(0.5). We then have 2*j = <span style="font-weight: bold;" class="mycode_b">△</span>0. The operation 0.5 * (<span style="font-weight: bold;" class="mycode_b">△</span>0) then has two solutions: <span style="font-weight: bold;" class="mycode_b">△</span>j and j.<br />
<br />
We still don't know what 0.35801 * (<span style="font-weight: bold;" class="mycode_b">△</span>0), or the same expression with any other arbitrary coefficient, would be equal to. I worked out a formula that covers any real coefficient.<br />
<br />
I came up with x * (<span style="font-weight: bold;" class="mycode_b">△</span>0) = j*2*(2n+1)*x, with n an integer. I verified this formula using (-1)^a = b^(△0 * a).<br />
<br />
Verification: for (-1)^(1/2) we have b^(△0 * 1/2) = {b^j ; b^(△j)}, so we must have b^j = i,<br />
where b is any nonzero real number.<br />
<br />
For (-1)^(1/3) we know that there will be 3 solutions, therefore, △0 * 1/3 will have 3 solutions. These are: <span style="font-weight: bold;" class="mycode_b">△</span>0 ; 2/3 * j ; -2/3 * j. All other solutions can be reduced to one of these 3.<br />
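As an independent sanity check of the count (ordinary complex arithmetic, no j-units involved), the three cube roots of -1 can be listed directly:

```python
import cmath

# the three cube roots of -1: exp(i*pi*(2k+1)/3) for k = 0, 1, 2
roots = [cmath.exp(1j * cmath.pi * (2 * k + 1) / 3) for k in range(3)]
for r in roots:
    print(r, abs(r ** 3 - (-1)))   # each residual is ~0
```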
<br />
We also have an interesting rule that completes one that already exists:<br />
<br />
a^0 = 1<br />
a^(△0) = -1<br />
a^j = i<br />
a^(△j) = -i<br />
<br />
This makes sense because the absolute value is 1 in every case, and the exponent is linked in some way to the number 0. <br />
If we have b = a*(△0) for any real a, then c^b will always have an absolute value of 1.<br />
<br />
I also discovered the rule △j = -j, which I think KAR has not mentioned in his paper.<br />
<br />
Proof:<br />
<br />
△j + △j = 2*△j = △0<br />
△j + (-j) = △0 + j - j = △0<br />
(-j) + (-j) = -(2*j) = -△0 = △0<br />
<br />
This is as far as I went, and there are many gaps in the algebraic structure. KAR used a new operation below Zeration to define ln(△a) and created new units for other operations, and he mentioned that there are infinitely many new sets of numbers. It would be interesting to see whether this can be generalized in some way.<br />
<br />
<br />
I still have unanswered questions about this algebraic structure:<br />
<br />
- How can we extend Zeration to the complex numbers? What would (3*i) ° 4 be equal to? What about j ° 2?<br />
- What is the proof for Zeration's commutativity? I saw somewhere on the forum that there was a proof for that but I couldn't find it.<br />
- What about other hypothetical extensions I didn't mention?<br />
- Have I made any mistakes? What did other people discover?<br />
<br />
<br />
Rayanso]]></description>
			<content:encoded><![CDATA[Hi, I'm a new user and I wanted to share some questions and ideas related to Zeration (the rank 0 hyper-operation) and its inverse, Deltation.<br />
<br />
On this forum I discovered GFR's post and looked at the paper, and I also used the Wayback Machine to download an old 1996 paper on hyper-operations from KAR's website.<br />
If you are not familiar with Zeration and Deltation I suggest that you read the file that GFR attached in his post.<br />
<br />
One question I was asking myself was what 0.5 * (<span style="font-weight: bold;" class="mycode_b">△</span>0) would be equal to, and whether it could be reduced to a single number. In KAR's paper, this is resolved by introducing a new unit "j", analogous to the imaginary unit for (-1)^(0.5). We then have 2*j = <span style="font-weight: bold;" class="mycode_b">△</span>0. The operation 0.5 * (<span style="font-weight: bold;" class="mycode_b">△</span>0) then has two solutions: <span style="font-weight: bold;" class="mycode_b">△</span>j and j.<br />
<br />
We still don't know what 0.35801 * (<span style="font-weight: bold;" class="mycode_b">△</span>0), or the same expression with any other arbitrary coefficient, would be equal to. I worked out a formula that covers any real coefficient.<br />
<br />
I came up with x * (<span style="font-weight: bold;" class="mycode_b">△</span>0) = j*2*(2n+1)*x, with n an integer. I verified this formula using (-1)^a = b^(△0 * a).<br />
<br />
Verification: for (-1)^(1/2) we have b^(△0 * 1/2) = {b^j ; b^(△j)}, so we must have b^j = i,<br />
where b is any nonzero real number.<br />
<br />
For (-1)^(1/3) we know that there will be 3 solutions, therefore, △0 * 1/3 will have 3 solutions. These are: <span style="font-weight: bold;" class="mycode_b">△</span>0 ; 2/3 * j ; -2/3 * j. All other solutions can be reduced to one of these 3.<br />
<br />
We also have an interesting rule that completes one that already exists:<br />
<br />
a^0 = 1<br />
a^(△0) = -1<br />
a^j = i<br />
a^(△j) = -i<br />
<br />
This makes sense because the absolute value is 1 in every case, and the exponent is linked in some way to the number 0. <br />
If we have b = a*(△0) for any real a, then c^b will always have an absolute value of 1.<br />
<br />
I also discovered the rule △j = -j, which I think KAR has not mentioned in his paper.<br />
<br />
Proof:<br />
<br />
△j + △j = 2*△j = △0<br />
△j + (-j) = △0 + j - j = △0<br />
(-j) + (-j) = -(2*j) = -△0 = △0<br />
<br />
This is as far as I went, and there are many gaps in the algebraic structure. KAR used a new operation below Zeration to define ln(△a) and created new units for other operations, and he mentioned that there are infinitely many new sets of numbers. It would be interesting to see whether this can be generalized in some way.<br />
<br />
<br />
I still have unanswered questions about this algebraic structure:<br />
<br />
- How can we extend Zeration to the complex numbers? What would (3*i) ° 4 be equal to? What about j ° 2?<br />
- What is the proof for Zeration's commutativity? I saw somewhere on the forum that there was a proof for that but I couldn't find it.<br />
- What about other hypothetical extensions I didn't mention?<br />
- Have I made any mistakes? What did other people discover?<br />
<br />
<br />
Rayanso]]></content:encoded>
		</item>
		<item>
			<title><![CDATA[More literature on zeration]]></title>
			<link>https://tetrationforum.org/showthread.php?tid=1767</link>
			<pubDate>Thu, 20 Jul 2023 21:40:31 +0000</pubDate>
			<dc:creator><![CDATA[<a href="https://tetrationforum.org/member.php?action=profile&uid=319">mathexploreryeah</a>]]></dc:creator>
			<guid isPermaLink="false">https://tetrationforum.org/showthread.php?tid=1767</guid>
			<description><![CDATA[Hi, fellow tetration adventurers!<br />
<br />
Is there more literature on zeration in other languages?<br />
<br />
Best regards  <img src="https://tetrationforum.org/images/smilies/wink.gif" alt="Wink" title="Wink" class="smilie smilie_2" />]]></description>
			<content:encoded><![CDATA[Hi, fellow tetration adventurers!<br />
<br />
Is there more literature on zeration in other languages?<br />
<br />
Best regards  <img src="https://tetrationforum.org/images/smilies/wink.gif" alt="Wink" title="Wink" class="smilie smilie_2" />]]></content:encoded>
		</item>
		<item>
			<title><![CDATA[Mighty polynomials]]></title>
			<link>https://tetrationforum.org/showthread.php?tid=1733</link>
			<pubDate>Thu, 30 Mar 2023 15:04:01 +0000</pubDate>
			<dc:creator><![CDATA[<a href="https://tetrationforum.org/member.php?action=profile&uid=12">Daniel</a>]]></dc:creator>
			<guid isPermaLink="false">https://tetrationforum.org/showthread.php?tid=1733</guid>
			<description><![CDATA[It is well known that the map \[(e^\frac{1}{e})^x\] behaves differently: it displays parabolic tetration, or parabolic iteration. It is also a polynomial. It seems that each hyperoperator has a parabolic value where it is a polynomial. I'm trying to figure out whether one can move from one parabolic value to the parabolic value of the succeeding hyperoperator.]]></description>
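The "parabolic" behaviour at the base \(e^{1/e}\) can be checked numerically: the map x ↦ η^x with η = e^(1/e) has fixed point e, and its derivative there is exactly 1 (neither attracting nor repelling). A minimal stdlib-only sketch:

```python
import math

eta = math.exp(1 / math.e)   # the base e**(1/e)

# the fixed point of x -> eta**x is x = e, and the multiplier there is exactly 1,
# which is what "parabolic" iteration means
fp = eta ** math.e                       # eta**e should equal e
slope = math.log(eta) * eta ** math.e    # derivative of eta**x at x = e

print(fp, math.e)   # both ~2.71828...
print(slope)        # ~1.0
```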
			<content:encoded><![CDATA[It is well known that the map \[(e^\frac{1}{e})^x\] behaves differently: it displays parabolic tetration, or parabolic iteration. It is also a polynomial. It seems that each hyperoperator has a parabolic value where it is a polynomial. I'm trying to figure out whether one can move from one parabolic value to the parabolic value of the succeeding hyperoperator.]]></content:encoded>
		</item>
		<item>
			<title><![CDATA[[Question for Bo] about formal Ackermann laws]]></title>
			<link>https://tetrationforum.org/showthread.php?tid=1667</link>
			<pubDate>Sun, 30 Oct 2022 18:02:05 +0000</pubDate>
			<dc:creator><![CDATA[<a href="https://tetrationforum.org/member.php?action=profile&uid=233">MphLee</a>]]></dc:creator>
			<guid isPermaLink="false">https://tetrationforum.org/showthread.php?tid=1667</guid>
			<description><![CDATA[Hi Bo, I see you have a good fluency in formal power-series and I'd like to know if you have any immediate insight about the possibility/problems of two-variables power-series satisfying a "formal" Ackermann/Goodstein equation.<br />
<br />
Basically, formal power-series over a ring \(R[[X]]\) are formally the same as sequences in \(R^\mathbb N\), but with a much richer algebraic structure. Formal power-series in two variables are something like infinite matrices: \(R[[X,Y]]\simeq R^{\mathbb N\times \mathbb N} \).<br />
<br />
In some cases it may be possible to nest-compose two-variable formal power-series. What about a power-series \(A\in \mathbb R[[X,Y]]\) s.t.<br />
<br />
\[A(S_0(X),S_1(Y))=A(X,A(S_0(X),Y))\]<br />
<br />
Where \(S_0(X)=1+X\) and \(S_1(Y)=1+Y\). <br />
<br />
What would be the condition to impose on the coefficient matrix of \(A\) that ensures the existence of that composition "<span style="font-style: italic;" class="mycode_i">as a formal power-series</span>", and what would that equation imply about the coefficient matrix itself?<br />
<br />
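For intuition, the formal equation above is the analogue of the classical Ackermann–Peter recursion on the naturals, which can be sanity-checked pointwise (my own illustration, separate from the formal power-series question itself):

```python
import sys
sys.setrecursionlimit(10000)

def ackermann(m, n):
    # classical two-argument Ackermann-Peter function
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

# check the recursion A(m+1, n+1) == A(m, A(m+1, n)) on small values
for m in range(3):
    for n in range(4):
        assert ackermann(m + 1, n + 1) == ackermann(m, ackermann(m + 1, n))
print("recursion verified on small values")
```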
<hr class="mycode_hr" />
<br />
Inspired by the discussion in <a href="https://en.wikipedia.org/wiki/Formal_group_law" target="_blank" rel="noopener" class="mycode_url">(wikipedia) Formal group law.</a><br />
Let \(A\in R[[X,Y]] \) and \(A(X,Y)=\sum_{n,m}a_{n,m}X^nY^m\). If I'm not mistaken, the condition \(A(0,Y)=Y+1\) implies that \(a_{0,0}=1\), \(a_{0,1}=1\) and if \(1\lt m\) we have \(a_{0,m}=0\). <br />
<br />
\[<br />
A=<br />
        \begin{bmatrix}<br />
        1 &amp; 1 &amp; 0 &amp; 0 &amp; \cdots \\<br />
        a_{10} &amp; a_{11} &amp; a_{12} &amp; a_{13} &amp;\cdots \\<br />
        a_{20} &amp; a_{21} &amp; a_{22} &amp; a_{23} &amp;\cdots \\<br />
        \vdots &amp; \vdots  &amp; \vdots  &amp; \vdots  &amp; \ddots \\ <br />
        \end{bmatrix}\]<br />
<br />
So the initial condition, what I call "trivial zeration" or Goodstein condition, implies that<br />
<br />
\[A(X,Y)=1+Y+\sum_{0\lt n,m}a_{n,m}X^nY^m\]]]></description>
			<content:encoded><![CDATA[Hi Bo, I see you have a good fluency in formal power-series, and I'd like to know if you have any immediate insight about the possibility/problems of a two-variable power-series satisfying a "formal" Ackermann/Goodstein equation.<br />
<br />
Basically, formal power-series over a ring \(R[[X]]\) are formally the same as sequences in \(R^\mathbb N\), but with a much richer algebraic structure. Formal power-series in two variables are something like infinite matrices: \(R[[X,Y]]\simeq R^{\mathbb N\times \mathbb N} \).<br />
<br />
In some cases it may be possible to nest-compose two-variable formal power-series. What about a power-series \(A\in \mathbb R[[X,Y]]\) s.t.<br />
<br />
\[A(S_0(X),S_1(Y))=A(X,A(S_0(X),Y))\]<br />
<br />
Where \(S_0(X)=1+X\) and \(S_1(Y)=1+Y\). <br />
<br />
What would be the condition to impose on the coefficient matrix of \(A\) that ensures the existence of that composition "<span style="font-style: italic;" class="mycode_i">as a formal power-series</span>", and what would that equation imply about the coefficient matrix itself?<br />
<br />
<hr class="mycode_hr" />
<br />
Inspired by the discussion in <a href="https://en.wikipedia.org/wiki/Formal_group_law" target="_blank" rel="noopener" class="mycode_url">(wikipedia) Formal group law.</a><br />
Let \(A\in R[[X,Y]] \) and \(A(X,Y)=\sum_{n,m}a_{n,m}X^nY^m\). If I'm not mistaken, the condition \(A(0,Y)=Y+1\) implies that \(a_{0,0}=1\), \(a_{0,1}=1\) and if \(1\lt m\) we have \(a_{0,m}=0\). <br />
<br />
\[<br />
A=<br />
        \begin{bmatrix}<br />
        1 &amp; 1 &amp; 0 &amp; 0 &amp; \cdots \\<br />
        a_{10} &amp; a_{11} &amp; a_{12} &amp; a_{13} &amp;\cdots \\<br />
        a_{20} &amp; a_{21} &amp; a_{22} &amp; a_{23} &amp;\cdots \\<br />
        \vdots &amp; \vdots  &amp; \vdots  &amp; \vdots  &amp; \ddots \\ <br />
        \end{bmatrix}\]<br />
<br />
So the initial condition, what I call "trivial zeration" or Goodstein condition, implies that<br />
<br />
\[A(X,Y)=1+Y+\sum_{0\lt n,m}a_{n,m}X^nY^m\]]]></content:encoded>
		</item>
		<item>
			<title><![CDATA[Complex Hardy Hierarchy]]></title>
			<link>https://tetrationforum.org/showthread.php?tid=1663</link>
			<pubDate>Sun, 30 Oct 2022 03:34:31 +0000</pubDate>
			<dc:creator><![CDATA[<a href="https://tetrationforum.org/member.php?action=profile&uid=310">Catullus</a>]]></dc:creator>
			<guid isPermaLink="false">https://tetrationforum.org/showthread.php?tid=1663</guid>
			<description><![CDATA[How could the Hardy hierarchy be defined for complex numbers? Like, what are \(H_{\omega^\omega}(i)\) and \(H_{\varepsilon_0}(i)\) with respect to the Wainer hierarchy system of fundamental sequences and \(\varepsilon_0[n]={}^n\omega\)?]]></description>
			<content:encoded><![CDATA[How could the Hardy hierarchy be defined for complex numbers? Like, what are \(H_{\omega^\omega}(i)\) and \(H_{\varepsilon_0}(i)\) with respect to the Wainer hierarchy system of fundamental sequences and \(\varepsilon_0[n]={}^n\omega\)?]]></content:encoded>
		</item>
		<item>
			<title><![CDATA[How could we define negative hyper operators?]]></title>
			<link>https://tetrationforum.org/showthread.php?tid=1661</link>
			<pubDate>Sat, 22 Oct 2022 19:12:48 +0000</pubDate>
			<dc:creator><![CDATA[<a href="https://tetrationforum.org/member.php?action=profile&uid=313">Shanghai46</a>]]></dc:creator>
			<guid isPermaLink="false">https://tetrationforum.org/showthread.php?tid=1661</guid>
			<description><![CDATA[So, hyper operators are defined recursively over the natural numbers, which makes them easy to understand although hard to extend (tetration and beyond). But how could we define negative hyper operators? As the inverse functions of their positive counterparts? Like subtraction for the -1th hyperoperator? What about -3? Do we take the log, the root, both, or something else?]]></description>
			<content:encoded><![CDATA[So, hyper operators are defined recursively over the natural numbers, which makes them easy to understand although hard to extend (tetration and beyond). But how could we define negative hyper operators? As the inverse functions of their positive counterparts? Like subtraction for the -1th hyperoperator? What about -3? Do we take the log, the root, both, or something else?]]></content:encoded>
		</item>
		<item>
			<title><![CDATA[Pentation Fractals]]></title>
			<link>https://tetrationforum.org/showthread.php?tid=1660</link>
			<pubDate>Sat, 22 Oct 2022 11:59:51 +0000</pubDate>
			<dc:creator><![CDATA[<a href="https://tetrationforum.org/member.php?action=profile&uid=12">Daniel</a>]]></dc:creator>
			<guid isPermaLink="false">https://tetrationforum.org/showthread.php?tid=1660</guid>
			<description><![CDATA[See <a href="https://math.eretrandre.org/tetrationforum/showthread.php?tid=1650" target="_blank" rel="noopener" class="mycode_url">Flow</a> for background.<br />
<br />
I'm testing my tetration software by creating a pentation Fatou set. Of the two following fractals, the first is generated by Mathematica and the second by FractInt. The tetration equation that is iterated to get to pentation is \[^z(\sqrt{2})\]<br />
Note that the fractals are essentially the same.<br />
<br />
<img src="http://tetration.org/PentationJMathematica.GIF" loading="lazy"  width="300" height="300" alt="[Image: PentationJMathematica.GIF]" class="mycode_img" /><img src="http://tetration.org/PentationJFractint.GIF" loading="lazy"  width="300" height="300" alt="[Image: PentationJFractint.GIF]" class="mycode_img" /><br />
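For readers who want to reproduce something similar, here is a bare-bones escape-time loop for the underlying map z ↦ (√2)^z (my own minimal sketch, not the Mathematica/FractInt code used for the images above):

```python
import cmath

LOG_SQRT2 = cmath.log(cmath.sqrt(2))

def escape_time(c, max_iter=50, bailout=1e6):
    # iterate z -> sqrt(2)**z starting from c; return the step count at which |z| exceeds bailout
    z = c
    for n in range(max_iter):
        if abs(z) > bailout:
            return n
        try:
            z = cmath.exp(z * LOG_SQRT2)
        except OverflowError:
            return n + 1
    return max_iter

print(escape_time(2 + 0j))    # 50: z = 2 is a fixed point of sqrt(2)**z, never escapes
print(escape_time(100 + 0j))  # escapes almost immediately
```

Colouring each pixel of a complex grid by its escape time gives a picture of the Fatou/Julia structure of the map.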
<br />
Tetration compared to iterated exponentiation. The x-axis is x, while the y-axis is \[^x(\sqrt{2})\].<br />
<img src="http://tetration.org/Pentation.gif" loading="lazy"  width="400" height="200" alt="[Image: Pentation.gif]" class="mycode_img" />]]></description>
			<content:encoded><![CDATA[See <a href="https://math.eretrandre.org/tetrationforum/showthread.php?tid=1650" target="_blank" rel="noopener" class="mycode_url">Flow</a> for background.<br />
<br />
I'm testing my tetration software by creating a pentation Fatou set. Of the two following fractals, the first is generated by Mathematica and the second by FractInt. The tetration equation that is iterated to get to pentation is \[^z(\sqrt{2})\]<br />
Note that the fractals are essentially the same.<br />
<br />
<img src="http://tetration.org/PentationJMathematica.GIF" loading="lazy"  width="300" height="300" alt="[Image: PentationJMathematica.GIF]" class="mycode_img" /><img src="http://tetration.org/PentationJFractint.GIF" loading="lazy"  width="300" height="300" alt="[Image: PentationJFractint.GIF]" class="mycode_img" /><br />
<br />
Tetration compared to iterated exponentiation. The x-axis is x, while the y-axis is \[^x(\sqrt{2})\].<br />
<img src="http://tetration.org/Pentation.gif" loading="lazy"  width="400" height="200" alt="[Image: Pentation.gif]" class="mycode_img" />]]></content:encoded>
		</item>
		<item>
			<title><![CDATA[[Question] What are ranks? In your opinion.]]></title>
			<link>https://tetrationforum.org/showthread.php?tid=1654</link>
			<pubDate>Sat, 15 Oct 2022 11:53:58 +0000</pubDate>
			<dc:creator><![CDATA[<a href="https://tetrationforum.org/member.php?action=profile&uid=233">MphLee</a>]]></dc:creator>
			<guid isPermaLink="false">https://tetrationforum.org/showthread.php?tid=1654</guid>
			<description><![CDATA[The field of study of hyper-operations is relatively young. In its current form, we cannot say it is older than 20 years.<br />
As in every young field, and even more so in fields unknown to the mainstream community of mathematicians, we have a situation in which there is no established school of thought, no defined glossary of terms, and no standard definitions.<br />
<br />
It is no secret that I regard the problem of defining ranks and hyperoperations as the core of my research, and that I plan to bring a much-needed unification of terms and formalization. Stay tuned: I'll be dropping a major update between November and late December.<br />
I also expect, for the reasons explained in the first paragraph, that every member of this forum will have a different answer to the questions of what ranks are and what hyperoperations are in general.<br />
<br />
I hope you can help me learn your position by answering the following points in a few words.<br />
<br />
<span style="font-weight: bold;" class="mycode_b">1) What do you think of when you hear the term rank in the context of hyperoperations?<br />
2) What do you think is the deep meaning and importance of the rank parameter?<br />
3) What is, in your opinion, the main obstacle to solving the problem of non-integer ranks?<br />
4) How do you see the mathematicians of the future surpassing this obstacle, if they ever manage to do it at all? In which field of mathematics do you see the key to the solution residing?</span><br />
<br />
I don't need formulae, just the first thing that pops into your head. Take it as a philosophical chat.<br />
<br />
I thank you in advance.<br />
Regards]]></description>
			<content:encoded><![CDATA[The field of study of hyper-operations is relatively young. In its current form, we cannot say it is older than 20 years.<br />
As in every young field, and even more so in fields unknown to the mainstream community of mathematicians, we have a situation in which there is no established school of thought, no defined glossary of terms, and no standard definitions.<br />
<br />
It is no secret that I regard the problem of defining ranks and hyperoperations as the core of my research, and that I plan to bring a much-needed unification of terms and formalization. Stay tuned: I'll be dropping a major update between November and late December.<br />
I also expect, for the reasons explained in the first paragraph, that every member of this forum will have a different answer to the questions of what ranks are and what hyperoperations are in general.<br />
<br />
I hope you can help me learn your position by answering the following points in a few words.<br />
<br />
<span style="font-weight: bold;" class="mycode_b">1) What do you think of when you hear the term rank in the context of hyperoperations?<br />
2) What do you think is the deep meaning and importance of the rank parameter?<br />
3) What is, in your opinion, the main obstacle to solving the problem of non-integer ranks?<br />
4) How do you see the mathematicians of the future surpassing this obstacle, if they ever manage to do it at all? In which field of mathematics do you see the key to the solution residing?</span><br />
<br />
I don't need formulae, just the first thing that pops into your head. Take it as a philosophical chat.<br />
<br />
I thank you in advance.<br />
Regards]]></content:encoded>
		</item>
		<item>
			<title><![CDATA[Is successor function analytic?]]></title>
			<link>https://tetrationforum.org/showthread.php?tid=1644</link>
			<pubDate>Thu, 22 Sep 2022 11:19:04 +0000</pubDate>
			<dc:creator><![CDATA[<a href="https://tetrationforum.org/member.php?action=profile&uid=12">Daniel</a>]]></dc:creator>
			<guid isPermaLink="false">https://tetrationforum.org/showthread.php?tid=1644</guid>
			<description><![CDATA[Consider the Ackermann successor function \[\varphi(u)\] such that \[\varphi(a \uparrow ^m b)=a \uparrow ^{m+1} b\]. Is \[\varphi(u)\] analytic? If so, then the complex dynamics of \[\varphi^n(u)\] can be studied where there is a finite real fixed point \[a\uparrow^\infty\infty\]. For example, where \[1 \leq a &lt; e^\frac{1}{e}\].]]></description>
			<content:encoded><![CDATA[Consider the Ackermann successor function \[\varphi(u)\] such that \[\varphi(a \uparrow ^m b)=a \uparrow ^{m+1} b\]. Is \[\varphi(u)\] analytic? If so, then the complex dynamics of \[\varphi^n(u)\] can be studied where there is a finite real fixed point \[a\uparrow^\infty\infty\]. For example, where \[1 \leq a &lt; e^\frac{1}{e}\].]]></content:encoded>
		</item>
		<item>
			<title><![CDATA[Ackermann fixed points]]></title>
			<link>https://tetrationforum.org/showthread.php?tid=1642</link>
			<pubDate>Sun, 18 Sep 2022 14:13:44 +0000</pubDate>
			<dc:creator><![CDATA[<a href="https://tetrationforum.org/member.php?action=profile&uid=12">Daniel</a>]]></dc:creator>
			<guid isPermaLink="false">https://tetrationforum.org/showthread.php?tid=1642</guid>
			<description><![CDATA[Edit: Posting while asleep <img src="https://tetrationforum.org/images/smilies/smile.gif" alt="Smile" title="Smile" class="smilie smilie_1" /><br />
<br />
Generalizing \[^\infty z=a \implies z = a^{\frac{1}{a} }\] <br />
gives the base associated with the fixed point for the hyperoperators.<br />
<br />
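The tetration case \(^\infty z=a \implies z = a^{\frac{1}{a}}\) can be checked numerically; note that naive iteration only reaches the attracting fixed point (a quick stdlib-free sketch):

```python
# if z = a**(1/a), the power tower z^z^z^... should return a (for a in the convergent range)
a = 2.0
z = a ** (1 / a)       # sqrt(2)
t = 1.0
for _ in range(300):
    t = z ** t         # iterate x -> z**x
print(t)  # ~2.0
```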
\[z \uparrow^n \infty = a \implies z = a \uparrow^{n-1} (a \uparrow^{n-1} {-1})\]]]></description>
			<content:encoded><![CDATA[Edit: Posting while asleep <img src="https://tetrationforum.org/images/smilies/smile.gif" alt="Smile" title="Smile" class="smilie smilie_1" /><br />
<br />
Generalizing \[^\infty z=a \implies z = a^{\frac{1}{a} }\] <br />
gives the base associated with the fixed point for the hyperoperators.<br />
<br />
\[z \uparrow^n \infty = a \implies z = a \uparrow^{n-1} (a \uparrow^{n-1} {-1})\]]]></content:encoded>
		</item>
		<item>
			<title><![CDATA[Tetration fixed points]]></title>
			<link>https://tetrationforum.org/showthread.php?tid=1597</link>
			<pubDate>Wed, 20 Jul 2022 06:51:28 +0000</pubDate>
			<dc:creator><![CDATA[<a href="https://tetrationforum.org/member.php?action=profile&uid=12">Daniel</a>]]></dc:creator>
			<guid isPermaLink="false">https://tetrationforum.org/showthread.php?tid=1597</guid>
			<description><![CDATA[Edit: Changed from Pentation Fixed Points to Tetration Fixed Points. Thanks to JmsNxn for pointing this out<br />
 <br />
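The identity in this post can be checked numerically for \(z=\sqrt{2}\), whose tower converges to 2. A minimal stdlib-only sketch (the Newton solver for the principal Lambert W branch is my own illustration; scipy.special.lambertw would serve equally well):

```python
import math

def lambert_w(x, tol=1e-12):
    # principal branch W0 via Newton's method on w*exp(w) = x
    w = 0.0
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1))
        w -= step
        if abs(step) < tol:
            return w
    return w

z = math.sqrt(2)
fixed = lambert_w(-math.log(z)) / (-math.log(z))
print(fixed)  # ~2.0: the attracting fixed point, since sqrt(2)**2 == 2

# cross-check against the iterated exponential z^z^z^...
t = 1.0
for _ in range(200):
    t = z ** t
print(t)      # also ~2.0
```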
The identity \[z \uparrow \uparrow \infty=\frac{\mathrm{W}(-\ln{z})}{-\ln{z}}\] with some constraints gives a tetration fixed point. To extend this scheme to pentation I believe I need slog, which is fine, but I would also need a generalization of the Lambert W function. Haven't folks on this forum explored hyper or super Lambert W functions?]]></description>
			<content:encoded><![CDATA[Edit: Changed from Pentation Fixed Points to Tetration Fixed Points. Thanks to JmsNxn for pointing this out<br />
 <br />
The identity \[z \uparrow \uparrow \infty=\frac{\mathrm{W}(-\ln{z})}{-\ln{z}}\] with some constraints gives a tetration fixed point. To extend this scheme to pentation I believe I need slog, which is fine, but I would also need a generalization of the Lambert W function. Haven't folks on this forum explored hyper or super Lambert W functions?]]></content:encoded>
		</item>
		<item>
			<title><![CDATA[The modified Bennet Operators, and their Abel functions]]></title>
			<link>https://tetrationforum.org/showthread.php?tid=1595</link>
			<pubDate>Mon, 18 Jul 2022 23:31:37 +0000</pubDate>
			<dc:creator><![CDATA[<a href="https://tetrationforum.org/member.php?action=profile&uid=163">JmsNxn</a>]]></dc:creator>
			<guid isPermaLink="false">https://tetrationforum.org/showthread.php?tid=1595</guid>
			<description><![CDATA[As I've begun to reframe the discussion around semi-operators, I've chosen the language of Abel functions, which allows us to reduce a \(3\)-variable equation to \(2\). But it becomes a bit of a headache, and I've hit a kind of stop sign telling me: I need a creative leap to get past this hurdle.<br />
<br />
<br />
<br />
So, to begin, we can reintroduce everything. We start by writing the modified Bennet operators:<br />
<br />
<br />
<br />
\[<br />
<br />
x[s]y = \exp_{y^{1/y}}^{\circ s}\left(\log^{\circ s}_{y^{1/y}}(x) + y\right)\\<br />
<br />
\]<br />
<br />
<br />
<br />
Here there are a few things to take note of. I am restricting \(\Re(y) &gt; 1\), and I am restricting \(x &gt; e\). The variable \(s\) is also restricted to \([0,2]\). Each of these iterations is the repelling iteration, as well. So if \(y = 2\) then \(\exp_{\sqrt{2}}^{\circ s}(u)\) is the Schroder iteration about the fixed point \(4\). This additionally means that if we were to take \(y = e\), then we are not performing the normal \(\eta\) tetration; we are performing the repelling iteration, which is described as \(\exp_\eta^{\circ s}(u)\) for \(u &gt; e\).<br />
<br />
<br />
<br />
This function is always analytic, and has the benefit of satisfying:<br />
<br />
<br />
<br />
\[<br />
<br />
\begin{align}<br />
<br />
x[0]y &amp;= x+y\\<br />
<br />
x[1]y &amp;= x\cdot y\\<br />
<br />
x[2]y &amp;= x^y\\<br />
<br />
\end{align}<br />
<br />
\]<br />
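For integer \(s\) these three cases can be checked directly from the defining formula, using integer towers of \(\exp_b/\log_b\) with \(b = y^{1/y}\) (a minimal numerical sketch of my own, not the analytic continuation itself):

```python
import math

def bennett(x, y, s):
    # modified Bennet operator for integer s, base b = y**(1/y):
    # x[s]y = exp_b applied s times to ( log_b applied s times to x, plus y )
    b = y ** (1.0 / y)
    t = x
    for _ in range(s):          # apply log_b s times
        t = math.log(t, b)
    t = t + y
    for _ in range(s):          # apply exp_b s times
        t = b ** t
    return t

x, y = 3.0, 2.0
print(bennett(x, y, 0))  # x + y  -> ~5.0
print(bennett(x, y, 1))  # x * y  -> ~6.0
print(bennett(x, y, 2))  # x ** y -> ~9.0
```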
<br />
<br />
<br />
Additionally this function nearly satisfies the Goodstein equation:<br />
<br />
<br />
<br />
\[<br />
<br />
x[s]\left(x[s+1] y\right) \approx x[s+1] (y+1)\\<br />
<br />
\]<br />
<br />
<br />
<br />
We then enter into introducing a \(\varphi\) parameter. This parameter is intended to correct the modified bennet operators to satisfy the Goodstein equation.<br />
<br />
<br />
<br />
\[<br />
<br />
x[s]_\varphi y = \exp_{y^{1/y}}^{\circ s}\left(\log^{\circ s}_{y^{1/y}}(x) + y + \varphi\right)\\<br />
<br />
\]<br />
<br />
<br />
<br />
Previous attempts at solving for the function \(\varphi(s,x,y)\) include using the implicit function, where the surface of values:<br />
<br />
<br />
<br />
\[<br />
<br />
F(\varphi_1,\varphi_2,\varphi_3) = x[s]_{\varphi_1}\left(x[s+1]_{\varphi_2} y\right) - x[s+1]_{\varphi_3} (y+1) = 0\\<br />
<br />
\]<br />
<br />
<br />
<br />
Have a very planar structure, and that shows that we at least have a rough existence of the solution.<br />
<br />
<br />
<br />
I have switched gears now: instead of finding an implicit curve on this evolving surface, I am solving an Abel equation hidden in here. We start off, then, by defining the inverse function to these operators. These can always be found because \(x[s]y\) has monotone growth (easily checked by observing a non-zero derivative, which is easily checked on the real line). To begin, we're going to fix \(x\); it doesn't really move in the Goodstein equation, so it can be considered an initial point, and is therefore largely irrelevant, just so long as \(x &gt; e\).<br />
<br />
<br />
<br />
So let:<br />
<br />
<br />
<br />
\[<br />
<br />
\alpha(s,x[s]y) = y = x[s]\alpha(s,y)\\<br />
<br />
\]<br />
<br />
<br />
<br />
And to let things get a little difficult, let's let:<br />
<br />
<br />
<br />
\[<br />
<br />
\alpha_{\varphi}(s,x[s]_\varphi y ) = y = x[s]_\varphi\alpha_\varphi(s,y)\\<br />
<br />
\]<br />
<br />
<br />
<br />
And now our problem becomes very different. We are now asking for a function \(\varphi\) such that:<br />
<br />
<br />
<br />
\[<br />
<br />
\alpha_{\varphi}(s+1,x[s]_\varphi y)= \alpha_\varphi(s+1,y) + 1\\<br />
<br />
\]<br />
<br />
<br />
<br />
And here is where things become untenable. This is precisely where everything starts to break down. But momentarily, follow me. If we find the solution to this, then we call \(\varphi(s,y)\) the various values which allow this solution. This function by construction, will now satisfy:<br />
<br />
<br />
<br />
\[<br />
<br />
\varphi(s,x[s]_{\varphi(s,y)} y) = \varphi(s,y)\\<br />
<br />
\]<br />
<br />
<br />
<br />
Which means it is idempotent. This is largely the goal from here, to construct an idempotent function. Because...<br />
<br />
<br />
<br />
\[<br />
<br />
x[s]_{\varphi(s,y)} y = x [s+1]_{\varphi(s,y)} \left(\alpha_{\varphi(s,y)}(s+1,y) + 1\right)\\<br />
<br />
\]<br />
<br />
<br />
<br />
And from here the orbits are satisfied:<br />
<br />
<br />
<br />
\[<br />
<br />
x[s]_{\varphi(s,y)} x[s]_{\varphi(s,y)} ...\text{n times}...x[s]_{\varphi(s,y)} y = x [s+1]_{\varphi(s,y)} \left(\alpha_{\varphi(s,y)}(s+1,y) + n\right)<br />
<br />
\]<br />
<br />
<br />
<br />
And now we can affirm that these operators satisfy the Goodstein equation.<br />
<br />
<br />
<br />
The trouble?<br />
<br />
<br />
<br />
This formula diverges, or rather, has no solution for \(s &gt; 0.2\). Everything works fine on the interval \(s \in [0,0.2]\), but beyond that everything begins to diverge. Initially I thought it was a problem with my code, but no: this problem is intrinsic to this manner of solution. Namely:<br />
<br />
<br />
<br />
<br />
<br />
\[<br />
<br />
\alpha_{\varphi}(s+1,x[s]_\varphi y)= \alpha_\varphi(s+1,y) + 1\\<br />
<br />
\]<br />
<br />
<br />
<br />
<br />
<br />
Has no solutions for \(s &gt; 0.2\).<br />
<br />
<br />
<br />
<br />
<br />
<hr class="mycode_hr" />
<br />
<br />
Thus enters the more difficult problem... which is probably where I should have started. But I'm here now.<br />
<br />
<br />
<br />
\[<br />
<br />
\alpha_{\varphi_2}(s+1,x[s]_{\varphi_1} y)= \alpha_{\varphi_2}(s+1,y) + 1\\<br />
<br />
\]<br />
<br />
<br />
<br />
This describes a line in \(\mathbb{R}\), and it always has solutions, though we are required to let \(y\) grow. The reason is that:<br />
<br />
<br />
<br />
\[<br />
<br />
\alpha(s+1,x[s]y) - \alpha(s+1,y) - 1 = o(y^{\epsilon})\\<br />
<br />
\]<br />
<br />
<br />
<br />
For all \(\epsilon &gt; 0\). Essentially, the modified Bennet operators are so close to satisfying Goodstein's equation that the Abel equation is satisfied up to about \(O(\log(y))\). Thereby, moving \(\varphi_1\) and \(\varphi_2\) around always ensures there is at least one point at which the above equation is satisfied. In fact, there is a closed form for it; I won't write it because it's ugly as hell, but it's always solvable using the log rules:<br />
<br />
<br />
<br />
\[<br />
<br />
\begin{align}<br />
<br />
\log_{y^{1/y}}^{\circ s} \left(x [s]_\varphi y\right) &amp;= \log_{y^{1/y}}^{\circ s}(x) + y + \varphi\\<br />
<br />
\log_{y^{1/y}}^{\circ s} \left(x [s]_\varphi y\right) &amp;= \log_{y^{1/y}}^{\circ s}(x)y e^{\varphi}\\<br />
<br />
\end{align}<br />
<br />
\]<br />
<br />
<br />
<br />
<br />
<br />
So our problem becomes something new: we can solve for the function \(\varphi_1\) as a function of \(\varphi_2\) such that:<br />
<br />
<br />
<br />
\[<br />
<br />
x [s]_{\varphi_1} y = x [s+1]_{\varphi_2} \left(\alpha_{\varphi_2}(s+1,y) + 1\right)\\<br />
<br />
\]<br />
<br />
<br />
<br />
But in order for this to work, we still need \(\varphi_2\) to be idempotent. This should follow naturally because it satisfies the Abel equation, by which we have the orbits:<br />
<br />
<br />
<br />
\[<br />
<br />
x[s]_{\varphi_1} x[s]_{\varphi_1} ...\text{n times}...x[s]_{\varphi_1} y = x [s+1]_{\varphi_2} \left(\alpha_{\varphi_2}(s+1,y) + n\right)<br />
<br />
\]<br />
<br />
<br />
<br />
The trouble now?<br />
<br />
<br />
<br />
How to make sure this is a well-defined <span style="font-weight: bold;" class="mycode_b">operator</span>. As a function solution it works, but I'm not sure it constructs an operator. Analyticity isn't a problem. But we essentially need to satisfy the following equation:<br />
<br />
<br />
<br />
\[<br />
<br />
\begin{align}<br />
<br />
\varphi(s,y) = \varphi_1(s,y)\,\,\text{for}\,\,s\in[0,1]\\<br />
<br />
\varphi(s,y) = \text{some new formula I can't wrap my head around, for}\,\,s \in [1,2]\\<br />
\end{align}<br />
<br />
\]<br />
<br />
<br />
<br />
I've sort of confuddled myself into a circle here, and I'm mostly just writing this out to see if something obvious jumps out at me. But still, this has become endlessly frustrating...<br />
<br />
<hr class="mycode_hr" />
<br />
<br />
Nonetheless! We're getting closer by the minute to unlocking:<br />
<br />
\[<br />
x \langle s \rangle y = x[s]_{\varphi(s,x,y)} y\\<br />
\]<br />
<br />
Such that:<br />
<br />
\[<br />
x \langle s \rangle \left(x \langle s+1 \rangle y\right) = x \langle s+1 \rangle (y+1)\\<br />
\]<br />
<br />
<br />
<br />
<hr class="mycode_hr" />
Here is a graph of \(3[1.5]y\) over a pretty large domain, about \(0 &lt; \Re(y) &lt; 20\) and \(|\Im(y)| &lt; 10\). The artifacts are code artifacts: I haven't found a way to let my code pass the Shell-Thron boundary smoothly.<br />
<br />
<br />
<br />

<br />
<img src="https://tetrationforum.org/images/attachtypes/image.gif" title="PNG Image" border="0" alt=".png" />
&nbsp;&nbsp;<a href="attachment.php?aid=1778" target="_blank" title="">TRYING_TO_FILL_IN_THE_GAPS_YET_GAIN_PLEASE.png</a> (Size: 48.5 KB / Downloads: 793)
<br />
<br />
<br />
<br />
<br />
<br />
Here is a graph of \(3 [1.9] y\) where you can see it almost has the periodic structure of \(3^y\):<br />
<br />
<br />
<br />

<br />
<img src="https://tetrationforum.org/images/attachtypes/image.gif" title="PNG Image" border="0" alt=".png" />
&nbsp;&nbsp;<a href="attachment.php?aid=1779" target="_blank" title="">TRYING_WAY_LARGER_1_POINT_9_SEMI_OPERATOR.png</a> (Size: 23.78 KB / Downloads: 645)
<br />
<br />
<br />
<br />
I'll update with a plot of \(\alpha(1.9,y)\), which is almost logarithmic. I'm making a large complex plot, but the code is certainly suboptimal, so that'll probably take all night, lol.<br />
<br />
Here is the real plot!<br />
<br />

<br />
<img src="https://tetrationforum.org/images/attachtypes/image.gif" title="PNG Image" border="0" alt=".png" />
&nbsp;&nbsp;<a href="attachment.php?aid=1781" target="_blank" title="">LOG_1_9_20_100_3_TRY_IT.png</a> (Size: 9.09 KB / Downloads: 905)
<br />
<br />
I'll post the complex when it compiles.]]></description>
			<content:encoded><![CDATA[As I've begun to reframe the discussion around semi-operators, I've chosen to use the language of Abel functions, which lets us reduce a \(3\)-variable equation to a \(2\)-variable one. But it becomes a bit of a headache, and I've hit a kind of stop sign telling me I need a creative leap to get past this hurdle.<br />
<br />
<br />
<br />
So, to begin, let's reintroduce everything. We start by writing the modified Bennet operators:<br />
<br />
<br />
<br />
\[<br />
<br />
x[s]y = \exp_{y^{1/y}}^{\circ s}\left(\log^{\circ s}_{y^{1/y}}(x) + y\right)\\<br />
<br />
\]<br />
<br />
<br />
<br />
There are a few things to take note of here. I am restricting \(\Re(y) &gt; 1\), and I am restricting \(x &gt; e\). The variable \(s\) is restricted to \([0,2]\). Each of these iterations is the repelling iteration, as well. So if \(y = 2\) then \(\exp_{\sqrt{2}}^{\circ s}(u)\) is the Schröder iteration about the fixed point \(4\). This additionally means that if we take \(y = e\), we are not performing the normal \(\eta\) tetration; we are performing the repelling iteration, described as \(\exp_\eta^{\circ s}(u)\) for \(u &gt; e\). <br />
<br />
<br />
<br />
This function is always analytic, and has the benefit of satisfying:<br />
<br />
<br />
<br />
\[<br />
<br />
\begin{align}<br />
<br />
x[0]y &amp;= x+y\\<br />
<br />
x[1]y &amp;= x\cdot y\\<br />
<br />
x[2]y &amp;= x^y\\<br />
<br />
\end{align}<br />
<br />
\]<br />
<br />
<br />
<br />
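As a quick sanity check, these three identities can be verified numerically at the integer ranks, where the fractional iteration \(\exp_{y^{1/y}}^{\circ s}\) reduces to ordinary repeated \(\log_b\) and \(\exp_b\). A minimal sketch (the function name <span style="font-weight: bold;" class="mycode_b">op</span> and the integer-only restriction are mine):<br />

```python
import math

def op(x, y, s):
    # x[s]y = exp_b^(∘s)(log_b^(∘s)(x) + y) with base b = y^(1/y);
    # integer ranks s >= 0 only (the fractional case needs Schroder iteration)
    b = y ** (1.0 / y)
    u = x
    for _ in range(s):
        u = math.log(u, b)   # apply log_b, s times
    u += y
    for _ in range(s):
        u = b ** u           # apply exp_b, s times
    return u

# x[0]y = x+y, x[1]y = x*y, x[2]y = x^y
print(op(3.0, 2.0, 0), op(3.0, 2.0, 1), op(3.0, 2.0, 2))
```
<br />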
Additionally this function nearly satisfies the Goodstein equation:<br />
<br />
<br />
<br />
\[<br />
<br />
x[s]\left(x[s+1] y\right) \approx x[s+1] (y+1)\\<br />
<br />
\]<br />
<br />
<br />
<br />
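At the integer ranks the equation actually holds exactly (e.g. \(x\cdot x^y = x^{y+1}\)); the defect only appears at fractional \(s\). A one-line check at \(s=1\):<br />

```python
x, y = 3.0, 2.5
lhs = x * (x ** y)      # x[1](x[2]y)
rhs = x ** (y + 1.0)    # x[2](y+1)
print(lhs, rhs)         # the two sides agree to machine precision
```
<br />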
We then introduce a \(\varphi\) parameter. This parameter is intended to correct the modified Bennet operators so that they satisfy the Goodstein equation:<br />
<br />
<br />
<br />
\[<br />
<br />
x[s]_\varphi y = \exp_{y^{1/y}}^{\circ s}\left(\log^{\circ s}_{y^{1/y}}(x) + y + \varphi\right)\\<br />
<br />
\]<br />
<br />
<br />
<br />
Previous attempts at solving for the function \(\varphi(s,x,y)\) include an implicit-function approach, where the surface of values:<br />
<br />
<br />
<br />
\[<br />
<br />
F(\varphi_1,\varphi_2,\varphi_3) = x[s]_{\varphi_1}\left(x[s+1]_{\varphi_2} y\right) - x[s+1]_{\varphi_3} (y+1) = 0\\<br />
<br />
\]<br />
<br />
<br />
<br />
has a very planar structure, and that shows that we at least have rough existence of a solution.<br />
<br />
<br />
<br />
I have switched gears: instead of finding an implicit curve on this evolving surface, I am now solving an Abel equation hidden inside it. We start by defining the inverse functions to these operators, which always exist because \(x[s]y\) grows monotonically (it has a non-zero derivative, which is easy to check on the real line). To begin, we fix \(x\); it doesn't really move in the Goodstein equation, so it can be considered an initial point and is largely irrelevant, so long as \(x &gt; e\). <br />
<br />
<br />
<br />
So let:<br />
<br />
<br />
<br />
\[<br />
<br />
\alpha(s,x[s]y) = y = x[s]\alpha(s,y)\\<br />
<br />
\]<br />
<br />
<br />
<br />
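Since \(x[s]y\) is increasing in \(y\), this inverse \(\alpha\) can be computed numerically by bisection; a sketch at integer ranks (the function names <span style="font-weight: bold;" class="mycode_b">op</span> and <span style="font-weight: bold;" class="mycode_b">alpha</span>, and the bracketing interval, are my choices):<br />

```python
import math

def op(x, y, s):
    # x[s]y at integer rank s, base b = y^(1/y)
    b = y ** (1.0 / y)
    u = x
    for _ in range(s):
        u = math.log(u, b)
    u += y
    for _ in range(s):
        u = b ** u
    return u

def alpha(s, z, x=3.0, lo=1.0, hi=60.0):
    # solve op(x, y, s) = z for y by bisection (op is monotone in y)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if op(x, mid, s) < z:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(alpha(1, 6.0), alpha(2, 9.0))   # both recover y = 2
```
<br />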
And to let things get a little difficult, let's let:<br />
<br />
<br />
<br />
\[<br />
<br />
\alpha_{\varphi}(s,x[s]_\varphi y ) = y = x[s]_\varphi\alpha_\varphi(s,y)\\<br />
<br />
\]<br />
<br />
<br />
<br />
And now our problem becomes very different. We are now asking for a function \(\varphi\) such that:<br />
<br />
<br />
<br />
\[<br />
<br />
\alpha_{\varphi}(s+1,x[s]_\varphi y)= \alpha_\varphi(s+1,y) + 1\\<br />
<br />
\]<br />
<br />
<br />
<br />
And here is where things become untenable; this is precisely where everything starts to break down. But bear with me for a moment. If we find a solution to this, then we call \(\varphi(s,y)\) the family of values which realize that solution. By construction, this function will satisfy:<br />
<br />
<br />
<br />
\[<br />
<br />
\varphi(s,x[s]_{\varphi(s,y)} y) = \varphi(s,y)\\<br />
<br />
\]<br />
<br />
<br />
<br />
Which means it is idempotent. This is largely the goal from here, to construct an idempotent function. Because...<br />
<br />
<br />
<br />
\[<br />
<br />
x[s]_{\varphi(s,y)} y = x [s+1]_{\varphi(s,y)} \left(\alpha_{\varphi(s,y)}(s+1,y) + 1\right)\\<br />
<br />
\]<br />
<br />
<br />
<br />
And from here the orbits are satisfied:<br />
<br />
<br />
<br />
\[<br />
<br />
x[s]_{\varphi(s,y)} x[s]_{\varphi(s,y)} ...\text{n times}...x[s]_{\varphi(s,y)} y = x [s+1]_{\varphi(s,y)} \left(\alpha_{\varphi(s,y)}(s+1,y) + n\right)<br />
<br />
\]<br />
<br />
<br />
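For the record, the \(n\)-fold orbit follows from the single-step identity by induction, using the idempotence of \(\varphi\); the \(n=2\) case, for instance, unwinds as:<br />

\[
x[s]_{\varphi(s,y)}\left(x[s]_{\varphi(s,y)} y\right) = x [s+1]_{\varphi(s,y)} \left(\alpha_{\varphi(s,y)}(s+1,x[s]_{\varphi(s,y)} y) + 1\right) = x [s+1]_{\varphi(s,y)} \left(\alpha_{\varphi(s,y)}(s+1,y) + 2\right)
\]

where the last equality applies the Abel equation once more.<br />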
<br />
And now we can affirm that these operators satisfy the Goodstein equation.<br />
<br />
<br />
<br />
The trouble?<br />
<br />
<br />
<br />
This formula diverges, or rather, has no solution for \(s &gt; 0.2\). Everything works fine on the interval \(s \in [0,0.2]\), but beyond that everything begins to diverge. Initially I thought it was a problem with my code, but no: this problem is intrinsic to this manner of solution. Namely:<br />
<br />
<br />
<br />
<br />
<br />
\[<br />
<br />
\alpha_{\varphi}(s+1,x[s]_\varphi y)= \alpha_\varphi(s+1,y) + 1\\<br />
<br />
\]<br />
<br />
<br />
<br />
<br />
<br />
Has no solutions for \(s &gt; 0.2\).<br />
<br />
<br />
<br />
<br />
<br />
<hr class="mycode_hr" />
<br />
<br />
Thus enters the more difficult problem... which is probably where I should have started. But I'm here now.<br />
<br />
<br />
<br />
\[<br />
<br />
\alpha_{\varphi_2}(s+1,x[s]_{\varphi_1} y)= \alpha_{\varphi_2}(s+1,y) + 1\\<br />
<br />
\]<br />
<br />
<br />
<br />
This describes a line in \(\mathbb{R}\), and it always has solutions, though we are required to let \(y\) grow. The reason is that:<br />
<br />
<br />
<br />
\[<br />
<br />
\alpha(s+1,x[s]y) - \alpha(s+1,y) - 1 = o(y^{\epsilon})\\<br />
<br />
\]<br />
<br />
<br />
<br />
For all \(\epsilon &gt; 0\). Essentially, the modified Bennet operators are so close to satisfying Goodstein's equation that the Abel equation is satisfied up to about \(O(\log(y))\). Thereby, moving \(\varphi_1\) and \(\varphi_2\) around always ensures there is at least one point at which the above equation is satisfied. In fact, there is a closed form for it; I won't write it because it's ugly as hell, but it's always solvable using the log rules:<br />
<br />
<br />
<br />
\[<br />
<br />
\begin{align}<br />
<br />
\log_{y^{1/y}}^{\circ s} \left(x [s]_\varphi y\right) &amp;= \log_{y^{1/y}}^{\circ s}(x) + y + \varphi\\<br />
<br />
\log_{y^{1/y}}^{\circ s} \left(x [s]_\varphi y\right) &amp;= \log_{y^{1/y}}^{\circ s}(x)y e^{\varphi}\\<br />
<br />
\end{align}<br />
<br />
\]<br />
<br />
<br />
<br />
<br />
<br />
So our problem becomes something new: we can solve for the function \(\varphi_1\) as a function of \(\varphi_2\) such that:<br />
<br />
<br />
<br />
\[<br />
<br />
x [s]_{\varphi_1} y = x [s+1]_{\varphi_2} \left(\alpha_{\varphi_2}(s+1,y) + 1\right)\\<br />
<br />
\]<br />
<br />
<br />
<br />
But in order for this to work, we still need \(\varphi_2\) to be idempotent. This should follow naturally because it satisfies the Abel equation, by which we have the orbits:<br />
<br />
<br />
<br />
\[<br />
<br />
x[s]_{\varphi_1} x[s]_{\varphi_1} ...\text{n times}...x[s]_{\varphi_1} y = x [s+1]_{\varphi_2} \left(\alpha_{\varphi_2}(s+1,y) + n\right)<br />
<br />
\]<br />
<br />
<br />
<br />
The trouble now?<br />
<br />
<br />
<br />
How to make sure this is a well-defined <span style="font-weight: bold;" class="mycode_b">operator</span>. As a function solution it works, but I'm not sure it constructs an operator. Analyticity isn't a problem. But we essentially need to satisfy the following equation:<br />
<br />
<br />
<br />
\[<br />
<br />
\begin{align}<br />
<br />
\varphi(s,y) = \varphi_1(s,y)\,\,\text{for}\,\,s\in[0,1]\\<br />
<br />
\varphi(s,y) = \text{some new formula I can't wrap my head around, for}\,\,s \in [1,2]\\<br />
\end{align}<br />
<br />
\]<br />
<br />
<br />
<br />
I've sort of confuddled myself into a circle here, and I'm mostly just writing this out to see if something obvious jumps out at me. But still, this has become endlessly frustrating...<br />
<br />
<hr class="mycode_hr" />
<br />
<br />
Nonetheless! We're getting closer by the minute to unlocking:<br />
<br />
\[<br />
x \langle s \rangle y = x[s]_{\varphi(s,x,y)} y\\<br />
\]<br />
<br />
Such that:<br />
<br />
\[<br />
x \langle s \rangle \left(x \langle s+1 \rangle y\right) = x \langle s+1 \rangle (y+1)\\<br />
\]<br />
<br />
<br />
<br />
<hr class="mycode_hr" />
Here is a graph of \(3[1.5]y\) over a pretty large domain, about \(0 &lt; \Re(y) &lt; 20\) and \(|\Im(y)| &lt; 10\). The artifacts are code artifacts: I haven't found a way to let my code pass the Shell-Thron boundary smoothly.<br />
<br />
<br />
<br />

<br />
<img src="https://tetrationforum.org/images/attachtypes/image.gif" title="PNG Image" border="0" alt=".png" />
&nbsp;&nbsp;<a href="attachment.php?aid=1778" target="_blank" title="">TRYING_TO_FILL_IN_THE_GAPS_YET_GAIN_PLEASE.png</a> (Size: 48.5 KB / Downloads: 793)
<br />
<br />
<br />
<br />
<br />
<br />
Here is a graph of \(3 [1.9] y\) where you can see it almost has the periodic structure of \(3^y\):<br />
<br />
<br />
<br />

<br />
<img src="https://tetrationforum.org/images/attachtypes/image.gif" title="PNG Image" border="0" alt=".png" />
&nbsp;&nbsp;<a href="attachment.php?aid=1779" target="_blank" title="">TRYING_WAY_LARGER_1_POINT_9_SEMI_OPERATOR.png</a> (Size: 23.78 KB / Downloads: 645)
<br />
<br />
<br />
<br />
I'll update with a plot of \(\alpha(1.9,y)\), which is almost logarithmic. I'm making a large complex plot, but the code is certainly suboptimal, so that'll probably take all night, lol.<br />
<br />
Here is the real plot!<br />
<br />

<br />
<img src="https://tetrationforum.org/images/attachtypes/image.gif" title="PNG Image" border="0" alt=".png" />
&nbsp;&nbsp;<a href="attachment.php?aid=1781" target="_blank" title="">LOG_1_9_20_100_3_TRY_IT.png</a> (Size: 9.09 KB / Downloads: 905)
<br />
<br />
I'll post the complex when it compiles.]]></content:encoded>
		</item>
		<item>
			<title><![CDATA[[to do] fully iterative definition of goodstein HOS]]></title>
			<link>https://tetrationforum.org/showthread.php?tid=1593</link>
			<pubDate>Sun, 17 Jul 2022 22:51:56 +0000</pubDate>
			<dc:creator><![CDATA[<a href="https://tetrationforum.org/member.php?action=profile&uid=233">MphLee</a>]]></dc:creator>
			<guid isPermaLink="false">https://tetrationforum.org/showthread.php?tid=1593</guid>
			<description><![CDATA[Before I forget it, let me post here a note for the future.<br />
As defined formally by me elsewhere (document in preparation; formal in the sense of being deprived of interpretation/representation as an endofunction), a formal (pre-)Goodstein sequence inside a pointed non-commutative monoid \((M,s)\), with ranks belonging to an \(\mathbb N\)-iteration \((J,{(-)}^+)\), is a function \({\bf h}:J\to M\) satisfying the system of \(\mathbb N\)-equivariance conditions (aka superfunction equations)<br />
\[{\bf h}_{j^+}s={\bf h}_{j}{\bf h}_{j^+}\]<br />
Since the latest discussions on the forum, I have started to internalize and fully understand the relationship between being a superfunction, being a family of superfunctions, and being an iteration. I now believe the previous version of the Goodstein functional equation is just the \(\mathbb N\)-equivariant version of a more general \(A\)-equivariant Goodstein functional equation.<br />
<br />
<span style="font-weight: bold;" class="mycode_b">Definition (\(A\)-equivariant (pre-)Goodstein equation):</span> Fix the monoid of time \(A\) and a unit of time \(u\in A\). Take an \(A\)-pointed non-commutative monoid \((M,s)\); it will be our support, and the monoid morphism \(s:A\to M\) will be called the seed. Let \((J,{(-)}^+)\) be an \(\mathbb N\)-action, called the space of ranks. An \(A\)-equivariant (pre-)Goodstein map is a map \({\bf h}:J\to {\rm Hom}_{\rm Mon}(A,M)\), i.e. a sequence of \(A\)-iterations/monoid homomorphisms \({\bf h}_j:A\to M\) over \(M\) indexed by \(J\), that satisfies the \(A\)-equivariant Goodstein functional equation over the seed \(s\) with respect to the unit of time \(u\):<br />
\[\forall a\in A.\,{\bf h}_{j^+}(u)s(a)={\bf h}_{j}(a){\bf h}_{j^+}(u)\]<br />
<br />
<span style="font-weight: bold;" class="mycode_b">Example:</span> Consider the special case \(A=\mathbb R\) and \(u=1\). Then the \(\mathbb R\)-equivariant Goodstein functional equation doesn't just ask the next hyperoperation to be a superfunction of the previous one; it also asks it to respect \(\mathbb R\)-iterations of the previous one. This brings the definition closer to the naive expectation of what we would like Goodstein hyperoperations to be: we have a sequence of \(\mathbb R\)-iterations \(f_j^t\) such that \[f^{\circ 1}_{j^+}\circ s^{\circ a}=f^{\circ a}_{j}\circ f^{\circ 1}_{j^+}\]<br />
<br />
<span style="font-weight: bold;" class="mycode_b">Open problem.</span> One can clearly see that this is not perfection. The rank variable still belongs to the world of \(\mathbb N\)-iterations/actions. The ultimate Goodstein functional equation should be \(B\)-equivariant in the rank variable as well... but how? The only way I can think of is by iterating group conjugation, which requires \(M\) to be a group. In this way, maybe we can find a way to make an \(A\)-equivariant (pre-)Goodstein map \({\bf h}:J\to {\rm Hom}_{\rm Mon}(A,M)\) into a \(B\)-equivariant map, for some monoid \(B\) acting on the space of ranks... but how?]]></description>
			<content:encoded><![CDATA[Before I forget it, let me post here a note for the future.<br />
As defined formally by me elsewhere (document in preparation; formal in the sense of being deprived of interpretation/representation as an endofunction), a formal (pre-)Goodstein sequence inside a pointed non-commutative monoid \((M,s)\), with ranks belonging to an \(\mathbb N\)-iteration \((J,{(-)}^+)\), is a function \({\bf h}:J\to M\) satisfying the system of \(\mathbb N\)-equivariance conditions (aka superfunction equations)<br />
\[{\bf h}_{j^+}s={\bf h}_{j}{\bf h}_{j^+}\]<br />
Since the latest discussions on the forum, I have started to internalize and fully understand the relationship between being a superfunction, being a family of superfunctions, and being an iteration. I now believe the previous version of the Goodstein functional equation is just the \(\mathbb N\)-equivariant version of a more general \(A\)-equivariant Goodstein functional equation.<br />
<br />
<span style="font-weight: bold;" class="mycode_b">Definition (\(A\)-equivariant (pre-)Goodstein equation):</span> Fix the monoid of time \(A\) and a unit of time \(u\in A\). Take an \(A\)-pointed non-commutative monoid \((M,s)\); it will be our support, and the monoid morphism \(s:A\to M\) will be called the seed. Let \((J,{(-)}^+)\) be an \(\mathbb N\)-action, called the space of ranks. An \(A\)-equivariant (pre-)Goodstein map is a map \({\bf h}:J\to {\rm Hom}_{\rm Mon}(A,M)\), i.e. a sequence of \(A\)-iterations/monoid homomorphisms \({\bf h}_j:A\to M\) over \(M\) indexed by \(J\), that satisfies the \(A\)-equivariant Goodstein functional equation over the seed \(s\) with respect to the unit of time \(u\):<br />
\[\forall a\in A.\,{\bf h}_{j^+}(u)s(a)={\bf h}_{j}(a){\bf h}_{j^+}(u)\]<br />
<br />
<span style="font-weight: bold;" class="mycode_b">Example:</span> Consider the special case \(A=\mathbb R\) and \(u=1\). Then the \(\mathbb R\)-equivariant Goodstein functional equation doesn't just ask the next hyperoperation to be a superfunction of the previous one; it also asks it to respect \(\mathbb R\)-iterations of the previous one. This brings the definition closer to the naive expectation of what we would like Goodstein hyperoperations to be: we have a sequence of \(\mathbb R\)-iterations \(f_j^t\) such that \[f^{\circ 1}_{j^+}\circ s^{\circ a}=f^{\circ a}_{j}\circ f^{\circ 1}_{j^+}\]<br />
<br />
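In the classical endofunction interpretation (my illustration, not part of the formal framework above), with the seed taken as the unit successor and \(f_j\) the base-\(b\) hyperoperations, the \(\mathbb N\)-equivariance/superfunction equations can be checked directly:<br />

```python
b = 3

def s(x):   return x + 1     # seed: unit-of-time successor
def f0(x):  return x + b     # rank 0: addition of b
def f1(x):  return x * b     # rank 1: multiplication by b
def f2(x):  return b ** x    # rank 2: exponentiation base b

# h_{j^+} s = h_j h_{j^+}  reads  f_{j+1}(x+1) = f_j(f_{j+1}(x))
for x in range(1, 6):
    assert f1(s(x)) == f0(f1(x))   # b(x+1) = bx + b
    assert f2(s(x)) == f1(f2(x))   # b^(x+1) = b * b^x
print("superfunction equations hold")
```
<br />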
<span style="font-weight: bold;" class="mycode_b">Open problem.</span> One can clearly see that this is not perfection. The rank variable still belongs to the world of \(\mathbb N\)-iterations/actions. The ultimate Goodstein functional equation should be \(B\)-equivariant in the rank variable as well... but how? The only way I can think of is by iterating group conjugation, which requires \(M\) to be a group. In this way, maybe we can find a way to make an \(A\)-equivariant (pre-)Goodstein map \({\bf h}:J\to {\rm Hom}_{\rm Mon}(A,M)\) into a \(B\)-equivariant map, for some monoid \(B\) acting on the space of ranks... but how?]]></content:encoded>
		</item>
		<item>
			<title><![CDATA[Hyper-Operational Salad Numbers]]></title>
			<link>https://tetrationforum.org/showthread.php?tid=1571</link>
			<pubDate>Sun, 10 Jul 2022 02:35:00 +0000</pubDate>
			<dc:creator><![CDATA[<a href="https://tetrationforum.org/member.php?action=profile&uid=310">Catullus</a>]]></dc:creator>
			<guid isPermaLink="false">https://tetrationforum.org/showthread.php?tid=1571</guid>
			<description><![CDATA[Can you please tell me some salad numbers that use hyper-operations? If you want, you may make up your own hyper-operational salad numbers and post them here.<br />
<br />
A salad number is a large number created with a mishmash of numbers or functions.]]></description>
			<content:encoded><![CDATA[Can you please tell me some salad numbers that use hyper-operations? If you want, you may make up your own hyper-operational salad numbers and post them here.<br />
<br />
A salad number is a large number created with a mishmash of numbers or functions.]]></content:encoded>
		</item>
		<item>
			<title><![CDATA[Communitive Octonion Multiplication]]></title>
			<link>https://tetrationforum.org/showthread.php?tid=1550</link>
			<pubDate>Fri, 24 Jun 2022 06:12:30 +0000</pubDate>
			<dc:creator><![CDATA[<a href="https://tetrationforum.org/member.php?action=profile&uid=310">Catullus</a>]]></dc:creator>
			<guid isPermaLink="false">https://tetrationforum.org/showthread.php?tid=1550</guid>
			<description><![CDATA[What would happen if you defined a commutative multiplication for octonions as k^(log(k,a)+log(k,b)), for some k such as sqrt(2)?]]></description>
			<content:encoded><![CDATA[What would happen if you defined a commutative multiplication for octonions as k^(log(k,a)+log(k,b)), for some k such as sqrt(2)?]]></content:encoded>
		</item>
	</channel>
</rss>