05/26/2022, 10:00 PM
(05/10/2022, 08:29 PM)JmsNxn Wrote: (05/10/2022, 11:38 AM)tommy1729 Wrote: Ok, I want to talk about the connection between the superfunction operator, left-distributivity, and analytic continuation.
First, the superfunction is not unique, but computing what a function is the superfunction of is almost unique; usually it depends on just a single parameter.
If we have a function f(x,y) that is analytic in x and y, and we take the superfunction F(z,x,y) (with the same method) with respect to x for every fixed y, where z is the number of iterations of f(x,y) with respect to x, then F(z,x,y) is usually analytic in both x and y!
Therefore the superfunction operator is an analytic operator.
This makes going from x <s> y to x <s+1> y, for sufficiently large s, preserve analyticity.
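In symbols (taking z = 0 to mean no iterations, which is just my convention here):

F(0,x,y) = x , F(z+1,x,y) = f( F(z,x,y) , y ) ,

so F iterates f in its first argument, with y carried along as a parameter.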
Secondly, we want
x <s> 1 = x for all s.
By doing that, we set up going from x <s> y to x <s+1> y as a superfunction operator.
This gives us an opportunity to get analytic hyperoperators.
Combining x <s> 1 = x, the superfunction method going from x <s> y to x <s+1> y, and the left-distributive property to go from x <s> y to x <s-1> y, we then get a nice structure for hyperoperators that connects to the ideas of iterations and superfunctions.
You see, we then get that x <s> y is EXACTLY the y-th iterate of x <s-1>, with starting value x. If we set y = 1 then x <s> 1 = x, thereby proving that it is indeed taking superfunctions (we start with x) for all s.
This implies that
x <0> y = x + y is WRONG.
By the above, we get:
x <0> y = x + y - 1
( x <0> 1 = x !! )
x <1> y = x y
( the super of +x + 1 - 1, i.e. +x, done y times )
x <2> y = x^y
( the super of x y ; taking x * ... y times ; a quick check is written out after this list )
x <3> y = x^^y
( starting at x and repeating x^... )
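A quick check of the s = 2 line with this setup: x <2> 1 = x, and x <2> (y+1) = x <1> ( x <2> y ) = x * x^y = x^(y+1), so indeed x <2> y = x^y for every positive integer y. The s = 3 line works the same way, with x <2> in place of x <1>.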
This also allows us to compute x <n> y for any n, even negative.
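For positive integers this is easy to compute. Here is a minimal sketch (my own illustration, not part of the construction itself), which takes x <1> y = x*y as the base level and only covers integer s >= 1 and integer y >= 1, so the fractional and negative cases are not touched here:

def hyper(x, s, y):
    # x <s> y for integers s >= 1, y >= 1, using
    # x <s> 1 = x and x <s> (y+1) = x <s-1> ( x <s> y ),
    # with x <1> y = x*y taken as the base level.
    if s == 1:
        return x * y
    if y == 1:
        return x
    return hyper(x, s - 1, hyper(x, s, y - 1))

# quick checks against the closed forms listed above
assert hyper(3, 2, 4) == 3 ** 4                        # x <2> y = x^y
assert hyper(2, 3, 3) == 2 ** (2 ** 2)                 # x <3> y = x^^y
assert all(hyper(2, s, 2) == 4 for s in range(1, 6))   # 2 <s> 2 = 4 at integer s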
That is a sketch of my idea.
Not sure how this relates to 2 <s> 2 = 4 ...
Now we only need to understand x <s> y for s between 0 and 1, but analytic at 0 and 1.
Gotta run.
Regards
tommy1729
Tom Marcel Raes
Hey, Tommy
Well aware there are many different ways of approaching this problem. The way I am doing it is just one way. Quite frankly, this looks a lot like my original approach from a long time ago. There, I started with \(x \uparrow^0 y = x \cdot y\), and then did everything you just described to try to define \(x \uparrow^s y\), where notably I set \(x \uparrow^s 1 = x\). I'm trying something very, very different here. This does not equate to the old studies on this; that's why I shied away from the uparrow notation, because I wanted to insinuate an entirely different object.
That's why, again, I'm only interested in \( 0\le s \le 2\) at the moment, because Bennet comes exceedingly close to satisfying the Goodstein recursion in this interval.
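(For reference: the Bennet operations are the commutative family usually written \(x [s] y = \exp^{\circ s}\big(\log^{\circ s}(x) + \log^{\circ s}(y)\big)\), possibly to some base \(b\); at \(s = 0\) this gives \(x + y\) and at \(s = 1\) it gives \(x y\). The exact normalization may differ from what is used here.)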
I'm well aware of the method you are describing, though; I'm still confident a solution for that is given by:
\[
\alpha \uparrow^s (z+1) = \frac{d^{s-1}}{dw^{s-1}}\frac{d^{z-1}}{du^{z-1}} \Big{|}_{w=0}\Big{|}_{u=0} \sum_{k=0}^\infty \sum_{n=0}^\infty \alpha \uparrow^{n+1} (k+2) \frac{w^n u^k}{n!\,k!}, \quad \Re(s) > 0,\ \Re(z) > 0,\ \alpha \in (1,\eta)
\]
...
We talked about those fractional derivatives in the past, some 10 years ago.
The thing is, they are just some gamma-type interpolations.
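To make that concrete: in the convention I remember, for a sequence a_n the fractional derivative at 0 is essentially

\[
\frac{d^{s}}{dw^{s}}\Big|_{w=0} \sum_{n=0}^{\infty} a_n \frac{w^{n}}{n!}
\;=\; \frac{1}{\Gamma(-s)} \int_0^{\infty} w^{-s-1} \sum_{n=0}^{\infty} a_n \frac{(-w)^{n}}{n!}\, dw ,
\]

which, when it converges (say \(\Re(s) < 0\), and continued from there), is just the gamma/Mellin-type interpolation of the sequence a_n in the sense of Ramanujan's master theorem.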
But they usually do not satisfy the functional equations you want, and even when they do, it is hard to prove.
Also, if there is no closed form, only infinite sums, then this cannot be the solution.
Now you could try to alter the formula by replacing your list of functions f_n(x) by G( f_n(x) ), assuming that still makes the sum converge.
And then, with this functionally invertible G, you can retrieve your f_n, and somehow that satisfies the functional equation.
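Schematically, for a single index n (and fixed x), the modification would read

\[
F(s) \;=\; \frac{d^{s}}{dw^{s}}\Big|_{w=0} \sum_{n=0}^{\infty} G\big(f_n(x)\big)\,\frac{w^{n}}{n!},
\qquad f_s(x) \;:=\; G^{-1}\big(F(s)\big),
\]

so at integer s you get back the original f_n, and the hope is that some choice of invertible G also makes the interpolated f_s satisfy the functional equation.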
But there is no easy way to find this G; in fact, finding this G is not easier than any other method, as far as we know.
I hesitated to post this because I know you like these gamma and fractional derivative ideas and have invested a lot of time and effort in them.
But as of now I see little progress or hope...
I'm sorry.
Regards
tommy1729

