[Question for Bo] about formal Ackermann laws
#1
Hi Bo, I see you have good fluency with formal power series and I'd like to know if you have any immediate insight into the possibility/problems of a two-variable power series satisfying a "formal" Ackermann/Goodstein equation.

Basically, formal power series over a ring \(R\), i.e. elements of \(R[[X]]\), are formally the same as sequences \(R^\mathbb N\), but with a much richer algebraic structure on them. Formal power series in two variables are something like infinite matrices: \(R[[X,Y]]\simeq R^{\mathbb N\times \mathbb N} \).

In some cases it may be possible to nest-compose two-variable formal power series. What about a power series \(A\in \mathbb R[[X,Y]]\) s.t.

\[A(S_0(X),S_1(Y))=A(X,A(S_0(X),Y))\]

where \(S_0(X)=1+X\) and \(S_1(Y)=1+Y\).

What conditions would have to be imposed on the coefficient matrix of \(A\) to ensure the existence of that composition "as a formal power series", and what would that equation imply about the coefficient matrix itself?
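For context, recall the standard substitution criterion for formal power series: a composition \(F(u,v)\) is unconditionally defined when the inner series have zero constant term, i.e.

\[F\in R[[X,Y]],\quad u,v\in (X,Y)\subseteq R[[X,Y]]\quad\Longrightarrow\quad F(u,v)\in R[[X,Y]],\]

since then every coefficient of \(F(u,v)\) is a finite sum. Here \(S_0(X)=1+X\) and the inner \(A(S_0(X),Y)\) both have nonzero constant term, so these substitutions are exactly the problematic kind, and some condition on the coefficient matrix of \(A\) is needed to make sense of them.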



Inspired by the discussion in the Wikipedia article on formal group laws.
Let \(A\in R[[X,Y]] \) and \(A(X,Y)=\sum_{n,m}a_{n,m}X^nY^m\). If I'm not mistaken, the condition \(A(0,Y)=Y+1\) implies that \(a_{0,0}=1\), \(a_{0,1}=1\), and \(a_{0,m}=0\) for \(m>1\).

\[
A=
        \begin{bmatrix}
        1 & 1 & 0 & 0 & \cdots \\
        a_{10} & a_{11} & a_{12} & a_{13} &\cdots \\
        a_{20} & a_{21} & a_{22} & a_{23} &\cdots \\
        \vdots & \vdots  & \vdots  & \vdots  & \ddots \\
        \end{bmatrix}\]

So the initial condition, what I call "trivial zeration" or Goodstein condition, implies that

\[A(X,Y)=1+Y+\sum_{n\ge 1}\sum_{m\ge 0}a_{n,m}X^nY^m\]

MSE MphLee
Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)
S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)
#2
Maybe it was wrong. Let's visualize the summands:
\[\begin{pmatrix}
        a_{00} & a_{01}Y & a_{02}Y^2 & a_{03}Y^3 & \cdots \\
        a_{10}X & a_{11}XY & a_{12}XY^2 & a_{13}XY^3 &\cdots \\
        a_{20}X^2 & a_{21}X^2Y & a_{22}X^2Y^2 & a_{23}X^2Y^3 &\cdots \\
        a_{30}X^3 & a_{31}X^3Y & a_{32}X^3Y^2 & a_{33}X^3Y^3 &\cdots \\
        \vdots & \vdots  & \vdots  & \vdots  & \ddots \\
        \end{pmatrix}\]
The condition \(A(0,Y)=1+Y\) is
\[\begin{pmatrix}
        a_{00} & a_{01}Y & a_{02}Y^2 & a_{03}Y^3 & \cdots \\
        0 & 0 & 0 & 0 &\cdots \\
        0&0 &0 & 0 &\cdots \\
       0& 0 & 0 & 0 &\cdots \\
        \vdots & \vdots  & \vdots  & \vdots  & \ddots \\
        \end{pmatrix}=1+Y
\]
I'd deduce from this that, as formal power series, \(a_{00}=1\) and \(a_{01}=1\), with all the higher terms of the first row equal to zero.

Let me try to explore the other possible boundary values and how they change the infinite matrix. We look for three formal power series \(S,A, G\in \mathbb R[[X,Y]]\), viewed as infinite matrices. Assume that they have trivial zeration.
  • (Simple Boundary condition) \(S(X+1,0)=1\) implies that \(1+\sum_{0\lt n}s_{n,0}(X+1)^n=1\), i.e. \[\sum_{0\lt n}s_{n,0}(X+1)^n=0.\]
  • (Ackermann Boundary condition) \(A(X+1,0)=A(X,1)\) implies that \(1+\sum_{0\lt n}a_{n,0}(X+1)^n=2+\sum_{n\ge 1}\left(\sum_{m\ge 0}a_{n,m}\right)X^n\), i.e. \[\sum_{0\lt n}a_{n,0}(X+1)^n=1+\sum_{n\ge 1}\left(\sum_{m\ge 0}a_{n,m}\right)X^n\]
    \[\begin{pmatrix}
            1 & 0 & 0 & 0 & \cdots \\
            a_{10}(X+1) & 0 & 0 & 0 &\cdots \\
            a_{20}(X+1)^2 & 0& 0 & 0 &\cdots \\
            a_{30}(X+1)^3 & 0 & 0 & 0 &\cdots \\
            \vdots & \vdots  & \vdots  & \vdots  & \ddots \\
            \end{pmatrix}-\begin{pmatrix}
            1 & 1 & 0 & 0 & \cdots \\
            a_{10}X & a_{11}X & a_{12}X & a_{13}X &\cdots \\
            a_{20}X^2 & a_{21}X^2 & a_{22}X^2 & a_{23}X^2 &\cdots \\
            a_{30}X^3 & a_{31}X^3 & a_{32}X^3 & a_{33}X^3 &\cdots \\       
          \vdots & \vdots  & \vdots  & \vdots  & \ddots \\
            \end{pmatrix}=0\]
    \[\sum_{n=1}^\infty a_{n,0}\left (\sum_{k=0}^n\binom{n}{k} X^k\right )-\sum_{n=1}^\infty \left (\sum_{m=0}^\infty a_{nm}\right )X^n=1  \]

  • (Goodstein Boundary conditions) \(G(1,0)=b,\,G(2,0)=0,\,G(X+2,0)=1\) imply, treating each constant \(2\leq j\) as a constant power series, that
    \[\begin{pmatrix}
            1-b &  0 & 0 & 0 & \cdots \\
            g_{10}&  0 & 0 & 0 &\cdots \\
            g_{20} &  0 & 0 & 0 &\cdots \\
            g_{30} &  0 & 0 & 0 &\cdots \\
            \vdots & \vdots  & \vdots  & \vdots  & \ddots \\
            \end{pmatrix}=0\,\quad
    \begin{pmatrix}
            1 &  0 & 0 & 0 & \cdots \\
            g_{10}2 &  0 & 0 & 0 &\cdots \\
            g_{20}4 &  0 & 0 & 0 &\cdots \\
            g_{30}8 &  0 & 0 & 0 &\cdots \\
            \vdots & \vdots  & \vdots  & \vdots  & \ddots \\
            \end{pmatrix}=0\,\quad
    \begin{pmatrix}
            0 &  0 & 0 & 0 & \cdots \\
            g_{10}j &  0 & 0 & 0 &\cdots \\
            g_{20}j^2 &  0 & 0 & 0 &\cdots \\
            g_{30}j^3 &  0 & 0 & 0 &\cdots \\
            \vdots & \vdots  & \vdots  & \vdots  & \ddots \\
            \end{pmatrix}=0\]
    \[1-b+\sum_{n=1}^\infty g_{n0}=0;\quad 1+\sum_{n=1}^\infty g_{n0}2^n=0;\quad \sum_{n=1}^\infty g_{n0}j^n=0\]

Not sure how to interpret the last expressions. It is not evaluation of the formal power series, because that may not exist... rather, it is composition, in the first formal variable, with a constant power series.
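To make the Ackermann case concrete, here is a minimal truncation sketch (an illustration of my own in Python/sympy, not Pari code from the forum): impose trivial zeration, truncate \(A\) at order \(N\), and collect the linear constraints that \(A(X+1,0)=A(X,1)\) places on the \(a_{n,m}\). Beware that the top-degree constraints are spurious, since truncation discards higher coefficients that would feed into them.

```python
# Minimal truncation sketch (illustration only): linear constraints from the
# Ackermann boundary condition A(X+1,0) = A(X,1) on a truncated A with
# trivial zeration A(X,Y) = 1 + Y + sum_{n>=1, m>=0} a_{nm} X^n Y^m.
import sympy as sp

N = 4  # truncation order: keep monomials X^n Y^m with n, m <= N
X, Y = sp.symbols('X Y')
a = {(n, m): sp.Symbol(f'a_{n}_{m}') for n in range(1, N + 1) for m in range(N + 1)}

A = 1 + Y + sum(c * X**n * Y**m for (n, m), c in a.items())

lhs = sp.expand(A.subs([(X, X + 1), (Y, 0)]))  # A(X+1, 0)
rhs = sp.expand(A.subs(Y, 1))                  # A(X, 1)

# one linear equation in the a_{nm} per power of X
for k, c in enumerate(reversed(sp.Poly(lhs - rhs, X).all_coeffs())):
    print(f"[X^{k}] {c} = 0")
```

Stacking these equations for growing \(N\) is exactly the kind of infinite linear system the matrix picture suggests.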

Can someone extract some insight from the Ackermann version?

MSE MphLee
Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)
S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)
#3
Hey, Mphlee.

I'm not that well versed in the matrices you are talking about, but I am very well versed in what you are trying to construct.

To begin, let's write:

\[
g(X,Y) = \sum_{n,m=0}^\infty g_{nm} X^nY^m\\
\]

And let's refer to this solely as a Formal Series--by which I mean there is no need to check for convergence. Now let's assume that:

\[
g(X,g(X+1,Y)) = g(X+1,Y+1)
\]

And let's pull a Gottfried and just collect coefficients, and create a matrix solution for all the values. (This is little different from solving an \(\infty \times \infty\) linear system--Heisenberg shit.)
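To make "pull a Gottfried" concrete, here's a minimal sketch (my own illustration, assuming Python with sympy): truncate \(g\) to a polynomial and collect the coefficient equations of \(g(X,g(X+1,Y))=g(X+1,Y+1)\). One caveat: because \(g\) gets composed with itself, the equations come out polynomial in the \(g_{nm}\) rather than linear, so the linear-system analogy is loose.

```python
# Truncate g and collect coefficient equations for
# g(X, g(X+1, Y)) = g(X+1, Y+1).  Illustration only.
import sympy as sp

N = 2  # truncation order
X, Y = sp.symbols('X Y')
g_c = {(n, m): sp.Symbol(f'g_{n}_{m}') for n in range(N + 1) for m in range(N + 1)}
g = sum(c * X**n * Y**m for (n, m), c in g_c.items())

inner = g.subs(X, X + 1)                 # g(X+1, Y)
lhs = g.subs(Y, inner)                   # g(X, g(X+1, Y)) -- fine for a polynomial
rhs = g.subs([(X, X + 1), (Y, Y + 1)])   # g(X+1, Y+1)

# one polynomial equation in the g_{nm} per monomial X^i Y^j
eqs = sp.Poly(sp.expand(lhs - rhs), X, Y).coeffs()
print(f"{len(eqs)} equations in {len(g_c)} unknowns")
```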

The trouble is, Mphlee, this shit will almost certainly not converge. Adding in boundary conditions makes this way way way fucking harder too.

BUT! Formally, yes. It all works fine, and you can definitely pull out coefficients here, and solve a formal series which "should" converge to semi-operators. But sadly, it'll be divergent, and probably brutally divergent.


The biggest problem with this approach is that it's not computable. It's not something we can plug into a calculator. But as a formal system, and an algebraic construct, it absolutely works. And this is, in many senses, an Abstract Algebraic construction. But it is not an Analytic construction; unless you can prove that \(g_{nm} = O(1)\) or something like that.
#4
I'd like to add one more point of discussion to this thread, to show that Mphlee is not very far off with his ideas; he's just taking the wrong angle, and the wrong base pair.

Let us write the following expansion:

\[
x [s]_\varphi y = \exp^{\circ s}_{y^{1/y}}\left(\log^{\circ s}_{y^{1/y}}(x) + y + \varphi\right)\\
\]

We are intrinsically assuming that \(\exp^{\circ s}_b\) is the repelling iteration of \(b^z\)--so that if \(b = \sqrt{2}\), we are choosing the repelling fixed point \(4\). There is always a repelling fixed point for \(b\) in the Shell-Thron region, and there is always a unique one nearest to the real line. We then use said fixed point and perform a Schröder iteration here.
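As a quick sanity check of that claim (a small Python illustration, nothing more): for \(b=\sqrt 2\) the real fixed points of \(b^z\) are \(2\) and \(4\), and the multiplier \(\ln(b)\,b^p=p\ln b\) confirms that \(4\) is the repelling one.

```python
import math

b = math.sqrt(2.0)

# Real fixed points of z -> b^z for b = sqrt(2): z = 2 and z = 4
# (since b^2 = 2 and b^4 = 4).  Multiplier at a fixed point p:
#   (d/dz) b^z |_{z=p} = ln(b) * b^p = p * ln(b).
for p in (2.0, 4.0):
    assert abs(b**p - p) < 1e-12                  # p really is a fixed point
    lam = p * math.log(b)
    kind = "repelling" if abs(lam) > 1 else "attracting"
    print(f"p = {p}: |multiplier| = {abs(lam):.6f} ({kind})")
```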

Now, some important things to note.

This function is holomorphic for \(\Re(x) > e\) and \(\Re(y) > 1\). While the domain in \(s\) is a tad finicky, it is analytic in \(s\), and the best I can say at the moment is:

\[
s \in \mathcal{S} = \{s \in \mathbb{C}\,|\,\Re(s) > -1/2,\, |\Im(s)|< 1\}\\
\]

Whereby this function is holomorphic in \(s\) on \(\mathcal{S}\). The value \(\varphi\) is more difficult, but the function is holomorphic in \(\varphi\) on these domains if we ask something like \(|\varphi| < \delta\), where \(\delta > 0\) isn't too large, but isn't as tiny as you'd expect either (I find it's usually around \(\delta = 0.5\)).



Now let's do everything Mphlee is describing, but let's do it in \(\varphi\) as opposed to in the actual base equation. For this, we're going to set \(x=3\), because \(x\) doesn't affect much and has nothing to do with the Goodstein equation. We are, in effect, ignoring \(x\), so just set it to \(3\). For this we write the following:

\[
3 [1+s]_\varphi y = \sum_{n,m=0}^\infty g_{nm}(\varphi)s^n(y-e)^m\\
\]

On the domains described:

\[
|g_{nm}(\varphi)| < M\,\,\text{for some constant } M\\
\]

And now we can perform something similar to Gottfried's approach (I have this written out in much more detail, just not in LaTeX--pounds and pounds of notes). Let us take this a step further and say there is a:

\[
\varphi(s,y) = \sum_{n,m=0}^\infty a_{nm} s^n(y-e)^m\\
\]

The values \(g_{nm}(\varphi)\) are fully determined; one only needs to study the repelling iteration of exponentials, and Bennet's form. The coefficients \(a_{nm}\) are the unknowns of the system. I have code to do this already; I do not have code for the next part.

Now, per Mphlee's suggestion, we are trying to solve:

\[
3 \langle 1+s \rangle y = 3 [1+s]_{\varphi(s,y)} y\\
\]

We do this using the knowledge of infinite matrices that Gottfried uses, that a lot of people use. So the functional equation we ask is:

\[
3 \langle 1+s \rangle \left(3 \langle 2+s \rangle y\right) = 3 \langle 2 +s \rangle (y+1)\\
\]
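To spell this out (under the convention, which I take from the definition above, that \(\langle 2+s\rangle\) is the \(s\mapsto 1+s\) shift), the equation asks that

\[
3\,[1+s]_{\varphi(s,\,w)}\; w = 3\,[2+s]_{\varphi(1+s,\,y+1)}\;(y+1),\qquad w = 3\,[2+s]_{\varphi(1+s,\,y)}\; y,
\]

with the unknowns \(a_{nm}\) sitting inside \(\varphi\), and matching Taylor coefficients in \(s\) and \((y-e)\) on both sides.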

This reduces to a system of equations for the \(a_{nm}\). I don't know how to solve it, but I know it can be solved, because I can construct this solution using implicit equations. Additionally, since we are just testing the holomorphy of \(\varphi\), we always have convergent Taylor series. Even setting this up, I've been modest about how well these Taylor series converge. Bennet's form of semi-operators has GREAT (AND I REPEAT, GREAT) Taylor data; I've been modest in the domains I chose.

You should definitely notice by now, if you haven't yet, that \(\langle 1+s\rangle\) is at least holomorphic for \(|s| < 1\), which in turn gives definition to \(\langle 2+s\rangle\) for \(s \in [-1,0]\) (and its analyticity there).

Now, as for actually brute-forcing this into code, I have no fucking clue. I can understand Gottfried's approach, and speak on it, but in no way would I be able to recreate it in Pari. Same way I can understand Sheldon's approach, but I can't code it myself. Although I do believe this will probably be the best way to write code for semi-operators, I will not continue down this path.

The solution that I have is pretty fucking ugly--as in, it is a mess of symbols. But no less so than what is seen here. All of which is to say: really, Mphlee, you're on the right path. This is just a fucking beast of a problem :P




Another final point I'd like to add is that if we truncate everything to \(O(s^3(y-e)^3)\), then I can get a solution. So instead:


\begin{align}
3 [1+s]_\varphi y &= \sum_{n,m=0}^2 g_{nm}(\varphi)s^n(y-e)^m + O(s^3(y-e)^3)\\
g_{nm}(\varphi) &= \alpha_{nm} + \beta_{nm}\varphi + \gamma_{nm}\varphi^2 + O(\varphi^3)\\
\varphi(s,y) &= \sum_{n,m=0}^2 a_{nm} s^n(y-e)^m + O(s^3(y-e)^3)\\
\end{align}

Then this produces a solvable \(3\times 3\) matrix system, and everything works fine. I've done this and calculated values, and everything is good; though it only gives a third-order approximation to the solution near \(y = e\) and \(s = 1\) (multiplication). Therefore it does not satisfy the functional equation as accurately as we'd like: we get about 1-2 digits of accuracy (so the first 2 decimals are correct). But the principle is the same. I simply do not know how to solve this beast at higher orders, as I'm terrible at matrices, let alone programming matrix solutions.

Nonetheless, the math I have says that if we let the order of the error go to infinity, we're in the clear :P
#5
There is a lot to unpack here, but this already sheds some light on my fragile understanding of your old attempt at turning Bennet into Goodstein.
If possible, it would be interesting to see if Gottfried/Sheldon can somehow manage to extract some efficient truncated-matrix black magic out of this... so as to have some runnable Pari/GP.

Also, I wonder if using the limit trick for hyperoperations would produce easier finite-order truncation matrices... then we could just have something that evaluates in a human amount of time and leave the formal convergence proof for another day, just like Sheldon's and Gottfried's codes.

In fact... I don't think a PC would take too long to work with order-5 or order-6 square matrices... or maybe it would?
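For scale, a quick timing illustration (in Python/numpy rather than Pari/GP; my own sketch): the dense linear solve itself is never the bottleneck on a PC--even \(1000\times 1000\) takes a fraction of a second. The real cost would be in building the coefficient equations.

```python
import time
import numpy as np

rng = np.random.default_rng(0)
for n in (6, 100, 1000):
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)
    t0 = time.perf_counter()
    x = np.linalg.solve(A, b)          # dense LU solve
    ms = (time.perf_counter() - t0) * 1e3
    print(f"{n:4d} x {n:<4d}: {ms:8.3f} ms, residual {np.linalg.norm(A @ x - b):.1e}")
```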

MSE MphLee
Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)
S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)
#6
(12/17/2022, 01:37 PM)MphLee Wrote: There is a lot to unpack here, but this already sheds some light on my fragile understanding of your old attempt at turning Bennet into Goodstein.
If possible, it would be interesting to see if Gottfried/Sheldon can somehow manage to extract some efficient truncated-matrix black magic out of this... so as to have some runnable Pari/GP.

Also, I wonder if using the limit trick for hyperoperations would produce easier finite-order truncation matrices... then we could just have something that evaluates in a human amount of time and leave the formal convergence proof for another day, just like Sheldon's and Gottfried's codes.

In fact... I don't think a PC would take too long to work with order-5 or order-6 square matrices... or maybe it would?

I believe, if I could transport my knowledge and understanding into Gottfried or Sheldon, they'd be able to write this for 100 terms. And it should converge fast. The only reason I stuck to \(3\times 3\) matrices in my experiments is that that's about all my brain can handle. I am quite literally handicapped when it comes to matrices. They are unintuitive, ugly, disgusting things in my brain--and nothing makes sense. If you ask me to brute-force a \(3 \times 3\) matrix, I can do it, but it takes me wayyyyyyyy longer than it should. You ask me to do it with \(100 \times 100\), and I am instantly crushed under the weight of that many terms.

I am still confident the Bennet-Goodstein approach is the correct approach to semi-operators. And I understand that I need working code to empirically justify it. I need more time though. Luckily I have 2-3 months come Jan 10th--and I plan to buckle down and rip this apart!

Regards.
#7
Quote: I believe, if I could transport my knowledge and understanding into Gottfried or Sheldon, they'd be able to write this for 100 terms. And it should converge fast.

Maybe this could be a game changer: since I firmly believe that such a normal-family/limit trick applied to Bennet yields a truly genuine, non-artifact, non-integer Goodstein, this will open up the field for numerical experiments and explorations! The gates to non-integer ranks, fractal-like behavior, and the like... it seems exciting.

I also have 1-2 weeks free from work, so I plan to produce something written on the foundations of all of this. Something that can be applied in the linear case... like a linear Goodstein, just to test a toy example.
I'm excited to see what comes out of this.

MSE MphLee
Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)
S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)