(05/10/2022, 11:38 AM)tommy1729 Wrote: Ok, I want to talk about the connection between the superfunction operator, left-distributivity, and analytic continuation.

First, the superfunction of a given function is not unique, but computing what a function is the superfunction of is almost unique; usually only a single parameter of freedom remains.

If we have a function f(x,y) that is analytic in x and y, and we take the superfunction F(z,x,y) (with the same method) with respect to x for every fixed y, where z is the number of iterations of f(x,y) (with respect to x), then F(z,x,y) is usually analytic in both x and y!

Therefore the superfunction operator is an analytic operator.

This makes going from x <s> y to x <s+1> y preserve analyticity, at least for sufficiently large s.

Secondly, we want

x <s> 1 = x for all s.

By doing that, we set up the step from x <s> y to x <s+1> y as a superfunction operator.

This gives us an opportunity to get analytic hyperoperators.

Combining x <s> 1 = x, the superfunction method to go from x <s> y to x <s+1> y, and the left-distributive property to go from x <s> y to x <s-1> y, we get a nice structure for hyperoperators that connects to the ideas of iteration and superfunctions.

You see, we then get that x <s> y is EXACTLY the y-th iterate of x <s-1> y with respect to x, with starting value x. If we set y = 1, then x <s> 1 = x, thereby showing that it is indeed taking superfunctions *we start with x* (for all s).

This implies that

x <0> y = x + y is WRONG.

By the above, we get:

x < 0 > y = x + y - 1

( x <0> 1 = x !! )

x < 1 > y = x y

( the super of adding x + 1 - 1, a.k.a. adding x, y times )

x < 2 > y = x^y

( the super of x y ; taking x * ... y times )

x < 3 > y = x^^ y

( starting at x and repeating x^... )

This also allows us to compute x < n > y for any n, even negative.
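For the integer points, the ladder above can be sketched numerically. This is my own toy code, not tommy's construction, and as an assumption I start the ladder at s = 1 with x <1> y = x·y rather than at the s = 0 level:

```python
def hyper(x, s, y):
    """x <s> y for integers s >= 1, y >= 1.

    Recursion (sketch): x <s> 1 = x, and x <s> (y+1) = x <s-1> (x <s> y),
    i.e. x <s> y is the (y-1)-fold iterate of t -> x <s-1> t starting at x.
    Assumption: the ladder is seeded at s = 1 with multiplication.
    """
    if s == 1:
        return x * y          # base level: multiplication
    if y == 1:
        return x              # x <s> 1 = x for every s
    return hyper(x, s - 1, hyper(x, s, y - 1))

# integer checks: x <2> y = x^y and x <3> y = x^^y
print(hyper(2, 2, 5))   # 2^5 = 32
print(hyper(2, 3, 3))   # 2^^3 = 2^(2^2) = 16
print(hyper(2, 4, 2))   # 2 <s> 2 = 4 at every level
```

Note that 2 <s> 2 = 4 falls out automatically at every integer level here, since 2 <s> 2 = 2 <s-1> 2 all the way down to 2 · 2 = 4.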

That is a sketch of my idea.

Not sure how this relates to 2 < s > 2 = 4 ...

Now we only need to understand x < s > y for s between 0 and 1, analytic at 0 and 1.

Gotta run.

Regards

tommy1729

Tom Marcel Raes

Hey, Tommy

Well aware there are many different ways of approaching this problem. The way I am doing it is just one way. Quite frankly, this looks a lot like my original approach from a long time ago. There, I started with \(x \uparrow^0 y = x \cdot y\), and then did everything you just described, trying to define \(x \uparrow^s y\), where notably I set \(x \uparrow^s 1 = x\). I'm trying something very, very different here. This does not equate to the old studies on this; that's why I shied away from the uparrow notation, because I wanted to insinuate an entirely different object.

That's why again, I'm only interested for \( 0\le s \le 2\) at the moment, because Bennet comes exceedingly close to satisfying the Goodstein recursion in this interval.

I'm well aware of the method you are describing though, I'm still confident a solution for that is given by:

\[

\alpha \uparrow^s (z+1) = \frac{d^{s-1}}{dw^{s-1}}\frac{d^{z-1}}{du^{z-1}} \Big{|}_{w=0}\Big{|}_{u=0} \sum_{k=0}^\infty \sum_{n=0}^\infty \alpha \uparrow^{n+1} (k+2) \frac{w^nu^k}{n!k!},\qquad\Re(s) >0,\,\Re(z) > 0,\,\alpha \in(1,\eta)\\

\]

Largely because if you take the first differintegral and set \(z=0\), we get:

\[

\alpha e^w\\

\]

Then taking the second differintegral, we just get:

\[

\alpha\\

\]
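To spell out why (my own bookkeeping, not from the thread; I use the convention \(\alpha \uparrow^{n+1} 1 = \alpha\) and the eigenrelation \(\frac{d^{s-1}}{dw^{s-1}} e^{\lambda w}\Big{|}_{w=0} = \lambda^{s-1}\)): setting \(z=0\) picks out \(\alpha \uparrow^{n+1} 1 = \alpha\) from each inner sum, so

\[

\begin{align}

\frac{d^{z-1}}{du^{z-1}}\Big{|}_{u=0}\sum_{k=0}^\infty \alpha \uparrow^{n+1} (k+2) \frac{u^k}{k!}\Big{|}_{z=0} &= \alpha \uparrow^{n+1} 1 = \alpha,\\

\sum_{n=0}^\infty \alpha\, \frac{w^n}{n!} &= \alpha e^w,\\

\frac{d^{s-1}}{dw^{s-1}}\Big{|}_{w=0} \alpha e^w &= \alpha \cdot 1^{s-1} = \alpha.

\end{align}

\]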

Since the Mellin transform converges at these points, it's very likely it'll converge in a half plane for \(\Re(s),\Re(z) \ge 0\). I made some progress on this, but I could never plug one of the leaks. Additionally, the first differintegral always converges by the work done on bounded analytic hyperoperators, where:

\[

\alpha \uparrow^n (z+1) = \frac{d^{z-1}}{du^{z-1}}\Big{|}_{u=0} \sum_{k=0}^\infty \alpha \uparrow^n (k+2) \frac{u^k}{k!}\\

\]

Where we define the terms in the sum recursively as:

\[

\alpha \uparrow^n (k+2) = \alpha \uparrow^{n-1} \alpha \uparrow^{n-1} \cdots\,(k+2\,\,\text{times})\,\cdots \uparrow^{n-1} \alpha\\

\]
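As a quick numerical sanity check of the Mellin-transform reading of that differintegral (my own illustrative computation, not part of the method here): at \(n=1\) the sum collapses to \(\sum_k \alpha^{k+2} u^k/k! = \alpha^2 e^{\alpha u}\), and the representation \(\frac{d^{z-1}}{du^{z-1}}\Big{|}_{u=0} f(u) = \frac{1}{\Gamma(1-z)}\int_0^\infty f(-u) u^{-z}\,du\) should recover \(\alpha \uparrow^1 (z+1) = \alpha^{z+1}\):

```python
import math

def differintegral_exp(alpha, z, t_max=4.0, steps=200_000):
    """(1/Gamma(1-z)) * Int_0^oo alpha^2 e^{-alpha u} u^{-z} du,
    computed with the substitution u = t^2 (midpoint rule) to tame
    the u^{-z} singularity at u = 0.  Valid for real 0 < z < 1."""
    h = t_max / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * h   # midpoint of each panel
        # integrand after u = t^2, du = 2t dt:  2 alpha^2 e^{-alpha t^2} t^{1-2z}
        total += 2.0 * alpha**2 * math.exp(-alpha * t * t) * t**(1.0 - 2.0 * z)
    return total * h / math.gamma(1.0 - z)

alpha, z = 1.5, 0.5
print(differintegral_exp(alpha, z))   # ~ alpha^(z+1) = 1.5^1.5 ≈ 1.8371
```

Here \(\alpha = 1.5\) sits in the post's interval \((1, \eta)\) with \(\eta = e^{1/e} \approx 1.444\)... actually just outside it; the integral itself converges for any \(\alpha > 1\), the \((1,\eta)\) restriction matters only for the higher towers.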

But this, as far as I'm concerned, is irrelevant to this thread. I'm trying to do something very different here.

YES!

I've got it down to one equation. This is only a good first order approximation, but if this is working it's a very good sign. So, let's call:

\[

\varphi_3(y,s) = \log^{\circ s+1}_{(y+1)^{1/(y+1)}}\left(x\langle s\rangle_{\varphi_1}\left(x \langle s+1\rangle_{\varphi_2} y\right)\right) - y-1-\log^{\circ s+1}_{(y+1)^{1/(y+1)}}(x)\\

\]

We can treat this linearly up to about 3 digits in the interval \(\varphi_1,\varphi_2 \in [-1,1]\). So call:

\[

\begin{align}

\rho_1(y,s) &= \frac{\partial \varphi_3}{\partial \varphi_1}\Big{|}_{\varphi_1 =0}\\

\rho_2(y,s) &= \frac{\partial \varphi_3}{\partial \varphi_2}\Big{|}_{\varphi_2 =0}\\

\end{align}

\]

And let \(C(y,s) = \varphi_3(y,s)\Big{|}_{\varphi_1,\varphi_2 = 0}\)

Then, the first order approximation, which is surprisingly accurate, looks like this:

\[

\varphi_3(y,s) = C(y,s) + \rho_1(y,s)\varphi_1 + \rho_2(y,s)\varphi_2\\

\]

Now, the first restriction we make on the plane is that \(\varphi_2 = \varphi_3(y-1,s)\) which I detailed above. So now we have the first order difference equation, that happens to be linear:

\[

\varphi_3(y,s) = C(y,s)+ \rho_1(y,s)\varphi_1 + \rho_2(y,s)\varphi_3(y-1,s)\\

\]

Can you guess the solution.....? [Enter infinite compositions, stage right.]

\[

\varphi_3(y,s) = \Omega_{j=1}^\infty \frac{\rho_1(y+j-1,s)\varphi_1 - C(y+j-1,s) + z}{\rho_2(y+j-1,s)}\bullet z \Big{|}_{z=0}\\

\]

Now, I initially doubted this converges when I looked at it. But the coefficients all decay just fast enough; I'd estimate something like \(1/n^{3/2}\), maybe a bit faster. But this converges... The value \(\rho_1\) drops to zero pretty damn fast. The value \(\rho_2\) tends to 1 moderately fast. The only trouble value I see is going to be \(C\), but it still looks like it's converging. All hope is not lost if this diverges; we'd just have to use a different technique to solve the first order recurrence equation.
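To see that decay behaviour in action, here is a toy truncation of the infinite composition, with entirely synthetic stand-ins for \(\rho_1, \rho_2, C\) chosen only to mimic the rates eyeballed above (\(\rho_1 \to 0\) fast, \(\rho_2 \to 1\) moderately, numerators decaying roughly like \(1/n^{3/2}\)); it is not the actual solution:

```python
def omega(phi1, y, depth):
    """Finite truncation of
        Omega_{j=1}^{depth} (rho1(y+j-1)*phi1 - C(y+j-1) + z)/rho2(y+j-1) . z |_{z=0},
    composed innermost-first (j = depth down to j = 1).
    rho1, rho2, C are toy stand-ins, not the real coefficients."""
    rho1 = lambda j: 1.0 / j**3            # drops to zero fast
    rho2 = lambda j: 1.0 + 1.0 / j**1.5    # tends to 1 moderately fast
    C    = lambda j: 1.0 / j**1.5          # the troublesome but decaying term
    z = 0.0
    for j in range(depth, 0, -1):          # innermost map first
        i = y + j - 1
        z = (rho1(i) * phi1 - C(i) + z) / rho2(i)
    return z

# successive truncations stabilise as depth grows -> the composition converges
print(abs(omega(0.5, 1, 1000) - omega(0.5, 1, 4000)))   # small tail
```

With these decay rates the truncations settle down, consistent with the convergence claimed above; whether the real \(\rho_1, \rho_2, C\) decay this fast is exactly the open question.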

This gives the first order approximation, which is almost exact. You'd have to do this in a more involved way for the actual solution, but if the linear version is converging, then the actual solution should converge too. I'm dreading writing that out though. It's going to be one helluva nasty infinite composition...

Then, all we have to do is make sure that \(\varphi_1\) satisfies its equation, and we'd have the solution. Jesus, that's going to be tough though. I doubt this will look like a linear equation.

But we definitely can use the infinite composition to get the correct answer if you guess about where \(\varphi_1\) is... But you also have to modify this equation a tad, and you have to let \(C\) depend on \(\varphi_1\) a tad, I'm not sure why. But that's what makes these equations converge.

Think of it like a Newtonian approximation. You have to guess about where \(\varphi_1\) is for \(x,y,s\), then you run this equation to get \(\varphi_3(y,s)\), then you get \(\varphi_2 = \varphi_3(y-1,s)\).

JESUS! This is working out too well. We're going to have a hell of a time solving this \(\varphi_1\) anomaly though.