Holomorphic semi operators, using the beta method
#52
(05/26/2022, 10:00 PM)tommy1729 Wrote:
(05/10/2022, 08:29 PM)JmsNxn Wrote:
(05/10/2022, 11:38 AM)tommy1729 Wrote: Ok, I want to talk about the connection between the superfunction operator, left-distributivity, and analytic continuation.

First, the superfunction is not unique, but computing what a function is the superfunction of is almost unique; usually there is just a single free parameter.

If we have a function f(x,y) that is analytic in x and y, and we take the superfunction F(z,x,y) (with the same method) with respect to x for every fixed y, where z is the number of iterations of f(x,y) with respect to x, then F(z,x,y) is usually analytic in both x and y!

Therefore the superfunction operator is an analytic operator.

This makes going from x <s> y to x <s+1> y - for sufficiently large s - preserve analyticity.

Secondly we want

x <s> 1 = x for all s.

By doing that, we set up going from x <s> y to x <s+1> y as a superfunction operator.

This gives us an opportunity to get analytic hyperoperators.

Combining x <s> 1 = x, the superfunction method going from x <s> y to x <s+1> y, and the left-distributive property to go from x <s> y to x <s-1> y, we get a nice structure for hyperoperators that connects to the ideas of iteration and superfunctions.

You see, we then get that x <s> y is EXACTLY the y-th iterate of x <s-1> y with respect to x, with starting value y. If we set y = 1 then x <s> 1 = x, thereby proving that it is indeed taking superfunctions *we start with x* (for all s).

This implies that

x <0> y = x + y is WRONG.

We get by the above :

x <0> y = x + y - 1

( x <0> 1 = x !! )

x <1> y = x y

( the super of adding x, i.e. +x repeated y times, with the -1 shift )

x <2> y = x^y

( the super of x y ; multiplying by x, y times )

x <3> y = x^^y

( starting at x and repeating x^... )

This also allows us to compute x <n> y for any n, even negative.
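As a sanity check, the table above can be encoded directly. The sketch below is my own illustration (the name `hyper` is hypothetical, and tetration is implemented by naive repeated exponentiation); it only hard-codes the listed ranks 0 through 3, since how to pass *between* ranks is exactly the subtle point under discussion. It verifies the normalization x <s> 1 = x at every rank.

```python
# A minimal sketch (my illustration, not from the thread): hard-code the
# listed ranks with the normalization x <s> 1 = x.  The rank-raising
# recursion itself is NOT implemented here.

def hyper(s, x, y):
    """x <s> y for integer rank s in {0, 1, 2, 3} and integer y >= 1."""
    if s == 0:
        return x + y - 1      # x <0> y = x + y - 1  (so x <0> 1 = x)
    if s == 1:
        return x * y          # x <1> y = x y        (so x <1> 1 = x)
    if s == 2:
        return x ** y         # x <2> y = x^y        (so x <2> 1 = x)
    if s == 3:                # x <3> y = x^^y: start at x, repeat x^(...)
        r = x
        for _ in range(y - 1):
            r = x ** r
        return r
    raise ValueError("only ranks 0..3 are covered in this sketch")

# the normalization holds at every rank:
assert all(hyper(s, 7, 1) == 7 for s in range(4))
```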

That is a sketch of my idea.


Not sure how this relates to 2 < s > 2 = 4 ...

Now we only need to understand x <s> y for s between 0 and 1, but analytic at 0 and 1.

Gotta run.



Regards

tommy1729

Tom Marcel Raes

Hey, Tommy

Well aware there are many different ways of approaching this problem. The way I am doing it is just one way. Quite frankly, this looks a lot like my original approach from a long time ago. There, I started with \(x \uparrow^0 y = x \cdot y\), and then did everything you just described, trying to define \(x \uparrow^s y\), where notably I set \(x \uparrow^s 1  = x\). I'm trying something very very different here. This does not equate to the old studies on this; that's why I shied away from the uparrow notation, because I wanted to insinuate an entirely different object.

That's why, again, I'm only interested in \( 0\le s \le 2\) at the moment, because Bennet comes exceedingly close to satisfying the Goodstein recursion in this interval.

I'm well aware of the method you are describing though, I'm still confident a solution for that is given by:

\[
\alpha \uparrow^s (z+1) = \frac{d^{s-1}}{dw^{s-1}}\frac{d^{z-1}}{du^{z-1}} \Big|_{w=0}\Big|_{u=0} \sum_{k=0}^\infty \sum_{n=0}^\infty \alpha \uparrow^{n+1} (k+2) \frac{w^nu^k}{n!k!},\quad \Re(s) > 0,\,\Re(z) > 0,\,\alpha \in(1,\eta)\\
\]

...

We talked about those fractional derivatives in the past, some 10 years ago.

The thing is, they are just gamma-type interpolations.

But they usually do not satisfy the functional equations you want, and even when they do, it is hard to prove.

Also, if there is no closed form using infinite sums, then this cannot be the solution.

Now you could try to alter the formula by replacing your list of functions f_n(x) by G(f_n(x)), assuming that still makes the sum converge.

And then, with this functionally invertible G, you can retrieve your f_n, and somehow that satisfies the functional equation.

But there is no easy way to find this G; in fact, this G is not easier than any other method as far as we know.



I hesitated to post this because I know you like these gamma and fractional derivative ideas and invested a lot of time and effort in them.
But as for now I see little progress or hope...

I'm sorry.

regards

tommy1729

...?


Well first of all, you can solve many difference equations using them. And I can't make sense of anything else you said.

\[
\alpha \uparrow^n z = \frac{d^{z-1}}{dw^{z-1}}\Big|_{w=0} \sum_{k=0}^\infty \alpha \uparrow^{n} (k+1) \frac{w^k}{k!}\\
\]

I mean, that's absolute. I don't think you know what you're talking about, tommy.

I think you're misinterpreting a lot of things...

Who cares if there's no closed form for the sum? What does that have to do with anything...

This is precisely the action of iterating linear operators. So if \(E\) is a linear operator with specified conditions, then \(E^z e^{Ew} = \frac{d^z}{dw^z} e^{Ew}\). This converges fairly often, and the criterion of its convergence is pretty loose. I honestly have no clue what you're talking about. Maybe your equations aren't working when you try to solve whatever difference equations you are trying to solve--but it works fine for the purposes I've used it for. The equations I've used it for do in fact work. And the above (about semi operators \(\uparrow^s\)) was a conjecture, with a good amount of evidence to back it.
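To make the "iterating linear operators" point concrete in the simplest finite-dimensional setting (my own toy example, not from the post): for a symmetric 2x2 matrix \(E\) with positive eigenvalues, \(E^z\) defined through the eigendecomposition interpolates the integer powers, and the half power composes back to \(E\). The function name `mat_pow_sym2` is hypothetical.

```python
import math

# My toy example: a fractional power of a linear operator.  For a
# symmetric 2x2 matrix E = [[a, b], [b, d]] with positive eigenvalues,
# E^z is defined via the eigendecomposition E = Q diag(l1, l2) Q^T as
# E^z = Q diag(l1^z, l2^z) Q^T; then E^{1/2} E^{1/2} = E.

def mat_pow_sym2(a, b, d, z):
    """z-th power of [[a, b], [b, d]]; assumes b != 0, positive eigenvalues."""
    tr, det = a + d, a * d - b * b
    disc = math.sqrt(tr * tr - 4.0 * det)
    l1, l2 = (tr + disc) / 2.0, (tr - disc) / 2.0   # eigenvalues
    vx, vy = b, l1 - a                              # eigenvector for l1
    nrm = math.hypot(vx, vy)
    c, s = vx / nrm, vy / nrm                       # unit eigenvector (c, s)
    p1, p2 = l1 ** z, l2 ** z
    # E^z = p1 * v v^T + p2 * v_perp v_perp^T, with v_perp = (-s, c)
    return [[p1 * c * c + p2 * s * s, (p1 - p2) * c * s],
            [(p1 - p2) * c * s, p1 * s * s + p2 * c * c]]
```

For E = [[2, 1], [1, 2]] the half power multiplied by itself recovers E, so the "half iterate" of the operator exists in exactly this sense.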

EDIT: To clarify.


The correct way to say it is: if \(F\) is holomorphic in the right half plane and satisfies:

\[
|F(z)| \le e^{\rho|\Re(z)| + \tau|\Im(z)|}\,\,\text{for}\,\,\rho > 0,\ 0 < \tau < \pi/2\\
\]

Then what you call the gamma interpolation is equivalent to \(F\), so that \(F(z)\) is fully determined by its behaviour on the naturals.

Suppose that:

\[
EF(n) = F(n+1)\\
\]

And let's also suppose that:

\[
EF(z)\,\,\text{has the same bounds as above}\\
\]

Then:

\[
EF(z) = F(z+1)\\
\]


This is because:

\[
EF(z) - F(z+1) = \frac{d^{z-1}}{dw^{z-1}}\Big|_{w=0} \sum_{k=0}^\infty \left(EF(k+1) - F(k+2)\right)\frac{w^k}{k!} = \frac{d^{z-1}}{dw^{z-1}} 0 = 0\\
\]

This works for composition, because composition is a linear operator. Write \(Ez = g(z)\) locally about fixed points with real positive multipliers (it works with complex multipliers, but it's very tricky). Then the above "gamma interpolation" of \(g^{\circ n}\) just produces the standard Schroder iteration about that fixed point. This is apparent without doing any work at all. The function \(g^{\circ z}\), using the Schroder iteration and keeping the multiplier real positive, satisfies the above bounds I wrote, and is holomorphic in a right half plane. So we just use Ramanujan's master theorem. Absolutely it satisfies the equation.
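A numeric sketch of that Schroder iteration in the simplest case (my own example; the map \(g(x)=x/(2-x)\), with fixed point 0 and multiplier 1/2, is chosen because its Schroder function has a simple closed form): the limit \(2^n g^{\circ n}(x)\) computes \(\Psi\), iterating \(g^{-1}\) computes \(\Psi^{-1}\), and the resulting half-iterate composes back to \(g\).

```python
import math

# My numeric example of the Schroder iteration: g(x) = x / (2 - x) fixes
# 0 with multiplier 1/2.  The Schroder function Psi conjugates g to
# multiplication by the multiplier, Psi(g(x)) = Psi(x) / 2, and is
# computed as the limit 2^n * g^n(x); its inverse comes from iterating
# g^{-1}(y) = 2y / (1 + y).  The half-iterate of g is then
# Psi^{-1}(sqrt(1/2) * Psi(x)).

def g(x):
    return x / (2.0 - x)

def g_inv(y):
    return 2.0 * y / (1.0 + y)

def psi(x, n=60):
    for _ in range(n):
        x = g(x)
    return (2.0 ** n) * x

def psi_inv(u, n=60):
    u = u / (2.0 ** n)
    for _ in range(n):
        u = g_inv(u)
    return u

def half_iterate(x):
    return psi_inv(math.sqrt(0.5) * psi(x))

# composing the half-iterate with itself recovers g:
assert abs(half_iterate(half_iterate(0.3)) - g(0.3)) < 1e-9
```

Replacing sqrt(0.5) by (1/2)**t gives the t-th iterate for any real t, which is the Schroder interpolation the paragraph above refers to.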

I really don't understand what you're talking about at all. And I would appreciate clarity before you call the work useless.


Messages In This Thread
RE: Holomorphic semi operators, using the beta method - by JmsNxn - 05/26/2022, 10:18 PM
