Ok, I'm glad I was clear enough and you were able to grasp the idea so quickly... an idea which is pretty simple in the end. It's just the usual limit trick, but on a higher level, with all of its pros and cons.
1) Now the problem I see coming: \(f(y,s)\) does not seem easily invertible... also, you note the domain of \(f^{-1}\). Rightly so. In my toy model we are working in groups, where everything is invertible and we need not worry about domains. But in this case I feel this becomes a pretty fat problem. I experimented a lot back in the day with iterating the sub-function operator. Obviously, the more you iterate it on something non-surjective, the more the domain shrinks...
1.5) I haven't tried to invert your construction nor to compute the algebra, but inverting in the \(y\) sounds like we need the Lambert function \(W\) somewhere. Also... I tried to apply the limit formula to the easier Bennett family... and I totally don't see what should look polynomial... I still can't see anything behaving like a polynomial. Polynomial in the \(n\)? I'm truly lost xD
2) The second point... I get worried when you say it converges quickly... when iterating the operator \(\Sigma_s(f)=fsf^{-1}\) we have that \(s\) is a fixed point; everyone here is aware of that, and Tommy also commented on it as a carrier of potential problems. It is a fixed point even when \(s\) is not chosen to be the successor. In fact... we do not know how strongly it is "attracting".
Some functions converge quickly (and I mean after a few iterations), on a shrinking-but-stabilizing domain, to the seed function \(s\). I believe that the least number of iterations that sends the resulting function \(\Sigma^n_s(f)\) locally into a small neighborhood of \(s\) - I'm thinking here informally of some kind of pointwise convergence over a function space - should be called the local rank. It means that locally a function looks like the \(n\)-th solution of the equation \(xs=hx\), where \(h\) is an \((n-1)\)-th solution.
To sum it up: the operation of "shifting and sub-functioning" does indeed have as its fixed points solutions of the Goodstein equation. Maybe only locally, if we extend to non-bijective functions. The problem is that \((s,s,s,s,\ldots)\) is also a solution, where \(s\) is the seed of the family. How strongly does it attract arbitrary non-Goodstein sequences?
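To make the attraction concrete, here is a minimal numerical sketch (my own toy illustration, not the construction under discussion: it iterates plain \(\Sigma_s\) with \(s\) the successor, without the shift). The seed \(f(x)=x^2\) and the bisection inversion are assumptions for the sake of the example; already after three iterations the iterate hugs \(x+1\):

```python
def invert(f, y, lo=0.0, hi=1e6, steps=60):
    # numerically invert a strictly increasing f on [lo, hi] by bisection
    for _ in range(steps):
        mid = (lo + hi) / 2
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def sigma(f):
    # Sigma_s(f) = f o s o f^(-1), with s(x) = x + 1 the successor
    return lambda x: f(invert(f, x) + 1.0)

g = lambda x: x * x        # toy seed, far from the successor
for _ in range(3):         # three iterations of the operator
    g = sigma(g)

print(g(100.0))            # already very close to 101 = x + 1
```

(For this seed one can check by hand that the first iterate is \((\sqrt{x}+1)^2=x+2\sqrt{x}+1\), and each further iteration contracts the deviation from \(x+1\) dramatically.)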
I ask that because I tried, on Desmos, to compute the limit of
\[\lim (\Sigma_S \circ S^*)^n({\bf bennet})=\lim (\Sigma_S \circ S^*)^n(\oplus)\]
I know, it's not what you are using, but it seems to converge solidly to the successor after just \(3\) iterations. By solidly I mean that I added a slider for the base \(b\) in \(b\oplus_s x= \exp^s (\ln^s b +\ln^s x)\). After the third iteration, the slider no longer perturbs the shape of the graph of \( \left( (\Sigma_S \circ S^*)^n (\oplus) \right)_3\), and the function lies in the strip \(x+1+\rho\) for \(x>N\) and \(\rho \in [-\epsilon,\epsilon]\).
So... how strong is the basin of attraction of the trivial Goodstein sequence? How do we escape it? I know, my corollary was meant to prove that if a non-constant sequence of operations converges, then the limit is a non-constant solution. Tbh I need to check the argument again... I believe it is true in the theory over groups... but outside of it... idk.
A VISUAL EXPLANATION OF THE METHOD
If someone finds it hard to visualize what is happening, here is the picture: we start with a continuous family of binary operations, e.g. Bennett's, or James's variant \(\exp^s_{y^{1/y}}(\ln^s_{y^{1/y}}(x)+y )\).
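At the integer ranks the Bennett family can be sketched in a few lines. A toy sketch: the hypothetical `bennett` below only handles integer \(s\ge 0\) and arguments where the iterated logs are defined, ignoring the continuous interpolation entirely:

```python
import math

def iterate(f, n, x):
    # n-fold composition f^n(x)
    for _ in range(n):
        x = f(x)
    return x

def bennett(b, x, s):
    # b (+)_s x = exp^s( ln^s(b) + ln^s(x) ), integer rank s >= 0
    return iterate(math.exp, s, iterate(math.log, s, b) + iterate(math.log, s, x))

print(bennett(2.0, 3.0, 0))  # 5.0 : rank 0 is addition
print(bennett(2.0, 3.0, 1))  # ~6.0: rank 1 is multiplication, e^(ln 2 + ln 3)
```

Rank \(2\) already produces the first genuinely "new" commutative operation of the family.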
![[Image: f1.png]](https://i.ibb.co/DrsPMG0/f1.png)
Remember that it is a continuous family this time.
![[Image: f2.png]](https://i.ibb.co/crGN8qC/f2.png)
Given an operation \(b* y\) we can ask for its super-operator \(*^+\) and its sub-operator \(*^-\). Call \(*^+\) the solution of the equation \(b*^+(y+1)=b*(b*^+ y)\), and call \(*^-\) the solution of \(b*(y+1)=b*^-(b*y)\). Assuming we can iterate this procedure, we obtain a new family \(*^0=*\), \(*^{n+1}=(*^n)^+\) and \(*^{n-1}=(*^n)^-\).
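On the naturals the \(*^+\) recursion can be run directly. A toy sketch, where the base case \(b*^+ 1=b\) is my assumption (it matches the usual hyperoperation convention at these ranks):

```python
def super_op(op):
    # b *+ y defined by the recursion b *+ (y+1) = b * (b *+ y),
    # seeded with the base case b *+ 1 = b
    def plus(b, y):
        acc = b
        for _ in range(y - 1):
            acc = op(b, acc)
        return acc
    return plus

times = super_op(lambda b, x: b + x)   # super-operator of addition
power = super_op(times)                # super-operator of multiplication

print(times(3, 4))  # 12
print(power(2, 5))  # 32
```

Going the other way, \(*^-\), is where the inversion (and hence the domain trouble from point 1) enters: on this discrete toy it amounts to reading the recursion backwards.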
![[Image: f3.png]](https://i.ibb.co/LYLCfPb/f3.png)
Here the vertical links are the superfunction/subfunction relations. Let's apply this to our initial continuous family, e.g. Bennett's.
![[Image: f4.png]](https://i.ibb.co/0r7jB8V/f4.png)
We obtain a grid of operations. Traveling along the columns is the "iterate"/"take the subfunction" process.
![[Image: f5.png]](https://i.ibb.co/k981qkf/f5.png)
But traveling in the horizontal direction is NOT applying the "Bennett process" \(b*y \mapsto e^{\ln b * \ln y}\) unless we are moving along the 0-th row. This means that the grid is not commutative (horizontal moves don't commute with vertical moves). Every column is a discrete sequence that satisfies the Goodstein equation, but only the continuous family at the 0-th row satisfies Bennett's relation.
![[Image: f6.png]](https://i.ibb.co/j31PYNr/f6.png)
The grid itself seems very interesting in its own right: the non-commutativity implies that it can be extended in an infinite number of directions. A first study of this kind was initiated in December 2020/May 2021 by Jaramillo (Hyperoperations in exponential fields). Giving names to the nodes of the grid makes it look like this
![[Image: f7.png]](https://i.ibb.co/qnXwpkG/f7.png)
Now we try to visualize the limit formula. We want to find a continuous family of binary operations \(g_{s}(b,y)\) s.t. \[g_{s+1}(b,y+1)=g_{s}(b,g_{s+1}(b,y))\]
We want to obtain it as the limit of a family: \(\lim_{n\to \infty}g_{n,s}(b,y)=g_s(b,y)\). To do that we need a first approximation \(g_{0,s}(b,y)\).
We want \[g_{n,s+1}(b,y+1)=g_{n+1,s}(b, g_{n,s+1}(b,y))\]
Clearly \(g_{n+1,s}(b,y)\) is the subfunction of \(g_{n,s+1}(b,y)\) \[\boxed{g_{n+1,s}=g_{n,s+1}^-}\]
If we set \(g_{0,s}=\oplus_s^0\), i.e. as the first approximation we use Bennett, we get \(g_{1,s}=\oplus^{-1}_{s+1}\), \(g_{2,s}=\oplus^{-2}_{s+2}\), and in general \(g_{n,s}=\oplus^{-n}_{s+n}\).
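Spelled out, the induction behind the last step is just the boxed recursion applied to \(g_{n,s}=\oplus^{-n}_{s+n}\), together with the shift of the Bennett index: \[g_{n+1,s}=g_{n,s+1}^-=\left(\oplus^{-n}_{(s+1)+n}\right)^-=\oplus^{-(n+1)}_{s+(n+1)}\] so each step of the iteration adds exactly one shift and one subfunction.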
![[Image: f8.png]](https://i.ibb.co/wNY85P6/f8.png)
As we can see, at each step we are shifting and taking the subfunction. We expect the limit to be a fixed point, hence a continuous solution of the Goodstein equation.
![[Image: f9.png]](https://i.ibb.co/RCW7xc5/f9.png)
\[\boxed{g_{s}=\lim g_{n,s}=\lim_{n\to\infty}(\Sigma_S\circ S^*)^n(\oplus)_s=\lim \oplus_{s+n}^{-n}}\]
![[Image: f10.png]](https://i.ibb.co/7pyNjsb/f10.png)
MSE MphLee
Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)
S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)