Bounded Analytic Hyper-operators
#1
Hey everyone, check out my paper! It's written formally and rigorously, in a purist complex analysis format. It's about the bounded analytic hyper-operators, those with base between 1 and eta, and fractional iteration through the lens of fractional calculus. I've centered it around Ramanujan's master theorem; using this theorem as a base gives some very advantageous results. I give an expression for \( \alpha [n] z \) when \( \alpha \in [1,e^{1/e}], z \in \mathbb{C} \) using Ramanujan's master theorem. This is not as easy as it sounds offhand, but I do believe it's all been proved. I've uploaded it to arXiv; I'm just waiting for the wait period to end.
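For anyone who hasn't met the master theorem before, the standard (heuristic) statement is roughly this: if

\( \displaystyle F(x)=\sum_{k=0}^{\infty}\phi(k)\frac{(-x)^k}{k!} \)

with \( \phi \) suitably bounded (exponential bounds are enough), then

\( \displaystyle \int_0^{\infty}x^{s-1}F(x)\,dx=\Gamma(s)\,\phi(-s). \)

The paper spells out the precise hypotheses; the above is just the shape of the tool.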

Thank you if you do read it; I really appreciate any support, comments, or suggestions. I am trying to get this published, and I would never have been able to do it without the existence of this community. It has knocked me on the head so many times that my thoughts finally have the rigor and formality they have now. I really hope this paper makes up for all my stupidities when I have posted here. This is really a big step lol.


Attached Files
.pdf   Complex iterations and bounded analytic hyper-operators.pdf (Size: 396.86 KB / Downloads: 1,187)
#2
OMG... I'm reading page 4... and it looks so amazing... Finally I understand why the auxiliary functions... this really is "as beautiful as how the Gamma function interpolates the factorial" (as you told me)!
I guess that for me the time has come to study complex analysis!

MSE MphLee
Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)
S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)
#3
That is a fantastic paper!


I have an idea on how to extend this to real bases greater than eta
Define, for integer \( n \), the super root \( \text{srt}_n(z) \) as the inverse of tetration with respect to the base, so that we have \( ^n\text{srt}_n(z) = z \).
Then we simply apply your method to expand this in \( n \):
\( \vartheta(z,w) = \sum_{n=0}^{\infty} \text{srt}_{n+1}(z)\frac{w^n}{n!} \)
\( \text{srt}_{n+1}(z) = \frac{\mathrm{d}^n }{\mathrm{d} w^n} |_{w=0} \vartheta(z,w) \)
Then we can simply invert in z.
...unfortunately, there does not seem to be a nice recursion relation between super roots that would force the result to be a tetration.
But it seems to converge numerically to the super root for your tetration.
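To make the idea concrete, here is a rough numerical sketch (toy code of my own, not taken from the paper): it computes the integer super roots by bisection, assuming a real \( z > 1 \), and sums a truncated version of \( \vartheta(z,w) \). The names tet, srt and theta are just illustrative.

Code:
import math

def tet(x, n):
    # integer tetration {}^n x = x^(x^(...^x)) with n copies of x, evaluated top-down
    t = 1.0
    for _ in range(n):
        try:
            t = x ** t
        except OverflowError:
            return math.inf  # a tower that blows up counts as +infinity for the bisection
    return t

def srt(n, z):
    # super root: the x > 1 with {}^n x = z, found by bisection (assumes z > 1)
    lo, hi = 1.0, z  # {}^n z >= z when z > 1, so z brackets the root from above
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if tet(mid, n) < z:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def theta(z, w, terms=20):
    # truncated vartheta(z, w) = sum_{n>=0} srt_{n+1}(z) * w^n / n!
    return sum(srt(n + 1, z) * w ** n / math.factorial(n) for n in range(terms))

print(srt(2, 27.0))      # the x with x^x = 27, i.e. exactly 3
print(theta(27.0, 0.5))  # a sample value of the generating function

The fractional interpolation in n would then come from applying the master theorem in the w-variable, as in the paper's construction.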
#4
Is there any way to deduce from it what \( \frac {1}{n}x \) is, or how to solve \( ^{n}(^{m}x)=x \)?

I mean: given n and x, what is m as a function of n (and x)?
#5
@Marraco: read JmsNxn's paper, page 15, Theorem 4.1. It gives a closed form for complex tetration, but only for a small set of bases.

MSE MphLee
Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)
S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)
#6
@JmsNxn
I don't yet understand all the analysis behind it, but I'm trying to follow the logical implications of the lemmas. If I get it right, it is Ramanujan's master theorem that does all the work (but we have the exponential-bound requirement).

The other interesting point is the differential equation that holds for the auxiliary function (it seems crazy). And this is the really interesting trick, imho. But please tell me if I've got it right (I started studying calculus just a month ago).

We have a power series \( \vartheta \):

\( \vartheta(w)=a_0w^0+a_1w^1+a_2w^2+...+a_{n}w^n+a_{n+1}w^{n+1}+... \)

and its derivative becomes the following (differentiation distributes over addition, constants factor out, and applying the power rule we get a "shift" of the indices in the series):

\( \vartheta'(w)=a_1w^0+(2a_2)w^1+(3a_3)w^2+...+(na_{n})w^{n-1}+((n+1)a_{n+1})w^{n}+... \)

Now comes the trick... (Am I right?) Define the coefficients in the following form:

\( a_n=\phi_{n+1}/n! \), so that \( na_n=\phi_{n+1}\cdot n/n!=\phi_{n+1}/(n-1)! \). Substituting this into the previous series gives

\( \displaystyle\vartheta(w)= \phi_1{w^0\over0!}+\phi_2{w^1\over1!}+{\phi_3}{w^2\over 2!} + ... +\phi_{n+1}{w^n\over n!}+\phi_{n+2}{w^{n+1}\over(n+1)!}+ ...=\sum_{n=0}^{\infty}\phi_{n+1}{w^n\over n!} \)

\( \displaystyle\vartheta'(w)= \phi_2{w^0\over0!}+\phi_3{w^1\over1!}+{\phi_4}{w^2\over 2!}+...+\phi_{n+1}{w^{n-1}\over (n-1)!}+\phi_{n+2}{w^n\over n!}+...=\sum_{n=0}^{\infty}\phi_{1+n+1}{w^n\over n!} \)


So if we define

\( \displaystyle\Theta_\phi(k,w):=\sum_{n=0}^{\infty}\phi_{k+n+1}{w^n\over n!} \)

Then we get the following:

\( \displaystyle{d\over dw}\Theta_\phi(k,w)=\Theta_\phi(k+1,w) \)

and we hope that

\( \displaystyle{d^z\over dw^z}\Theta_\phi(k,w)=\Theta_\phi(k+z,w) \)

At this point you set \( \phi_n=\phi^{\circ n}(\xi) \) (in the other notation you're using, the \( \xi \)-based superfunction: \( \phi_n=\Sigma[\phi]_\xi(n) \)), so you can send differentiation in w to iteration of the transfer function (or application of the right-composition operator).
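Just to convince myself of the integer-order case, here is a little numerical check I put together (nothing from the paper; the base b = 1.3 and starting point \( \xi = 1 \) are arbitrary choices for which the iterates stay bounded). It compares a difference quotient of \( \Theta_\phi(0,w) \) with \( \Theta_\phi(1,w) \).

Code:
import math

b, xi = 1.3, 1.0  # base in (1, e^(1/e)), so the iterates of x -> b**x stay bounded

def phi_iter(n):
    # n-th iterate phi^{o n}(xi) of phi(x) = b**x, with phi^{o 0}(xi) = xi
    x = xi
    for _ in range(n):
        x = b ** x
    return x

def Theta(k, w, terms=40):
    # truncated Theta_phi(k, w) = sum_{n>=0} phi_{k+n+1} * w^n / n!
    return sum(phi_iter(k + n + 1) * w ** n / math.factorial(n) for n in range(terms))

w, h = 0.7, 1e-5
lhs = (Theta(0, w + h) - Theta(0, w - h)) / (2 * h)  # numerical d/dw of Theta(0, w)
rhs = Theta(1, w)
print(lhs, rhs)  # the two values agree to many decimal places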

Seeing this, I now understand why you told me long ago that this can be used for fractional ranks... but now you talk only about linear operators, and the recursion (I mean the b-based superfunction operator) and the antirecursion/subfunction operator aren't linear. Also, just before Lemma 3.1, you say that this is not yet proved for operators other than left-composition. Why?

Why can't we just repeat the trick using other sequences \( \tau_n \), for example a sequence of values of your bounded hyper-operators indexed by the rank?
We just need \( |T(n)|=\tau_n \) to be exponentially bounded, or have I missed something important somewhere in your paper?

For example, we could try to plug one of these two sequences into it.

Given an invertible function \( f \) and a \( t \) in its domain,

define the direct antirecursion sequence \( f_{\sigma} \) of \( f \) and its inverse antirecursion sequence \( g_{\sigma} \) by

\( f_0:=f \) and \( f_{\sigma+1}:=f_{\sigma}Sf_{\sigma}^{\circ-1} \), where \( S \) denotes the successor map \( S(x)=x+1 \),

\( g_0:=f^{\circ-1} \) and \( g_{\sigma+1}:=f_{\sigma}S^{\circ-1}f_{\sigma}^{\circ-1}=f_{\sigma+1}^{\circ-1} \).

These two sequences of functions give us two sequences of real numbers, \( \tau_\sigma:=f_{\sigma}(t) \) and \( \theta_\sigma:=g_{\sigma}(t) \), for the fixed \( t \).

By the way, those sequences satisfy these recurrences:

\( \tau_{\sigma+1}=f_\sigma(\theta_{\sigma}+1) \)

\( \theta_{\sigma+1}=f_\sigma(\theta_{\sigma}-1) \)
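And here is a tiny numerical toy of those two sequences (my own sketch; it takes \( S \) to be the successor, \( f(x)=2x \) and \( t=3 \), all arbitrary choices). The antirecursion steps send \( 2x \) to \( x+2 \) and then to \( x+1 \), and both recurrences check out.

Code:
S, S_inv = (lambda x: x + 1), (lambda x: x - 1)  # successor and its inverse

def antirecursion(f, f_inv):
    # one step: f_{s+1} = f_s S f_s^{-1} and g_{s+1} = f_s S^{-1} f_s^{-1} = f_{s+1}^{-1}
    return (lambda x: f(S(f_inv(x)))), (lambda x: f(S_inv(f_inv(x))))

f, f_inv = (lambda x: 2 * x), (lambda x: x / 2)  # f_0 = doubling
t = 3.0
tau, theta = [f(t)], [f_inv(t)]                  # tau_0 = 6, theta_0 = 1.5
for sigma in range(3):
    f_next, f_inv_next = antirecursion(f, f_inv)
    tau.append(f_next(t))
    theta.append(f_inv_next(t))
    # the recurrences above: tau_{s+1} = f_s(theta_s + 1) and theta_{s+1} = f_s(theta_s - 1)
    assert abs(tau[-1] - f(theta[-2] + 1)) < 1e-12
    assert abs(theta[-1] - f(theta[-2] - 1)) < 1e-12
    f, f_inv = f_next, f_inv_next

print(tau)    # [6.0, 5.0, 4.0, 4.0]  -- f_0 = 2x, f_1 = x+2, f_2 = f_3 = x+1
print(theta)  # [1.5, 1.0, 2.0, 2.0]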

MSE MphLee
Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)
S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)
#7
I haven't read the whole paper, but it seems good.
Is it really new? Why didn't Ramanujan see this?

1

I really like the idea for the super root.

I wonder if it agrees with other methods.

2

For a sequence of iterates going to a hyperbolic fixed point,
does this agree with Koenigs?

I think so.

I think proving that in the paper would impress.

3

I think this can be applied to non-bounded operators with some tricks.

Regards

Tommy1729
#8
Off topic: @Tommy, do you mean non-linear operators? From the wiki I see that boundedness comes with the definition of a norm... what is the relationship between bounded operators and linear ones? The Wikipedia page about operators is a little bit chaotic.

MSE MphLee
Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)
S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)
#9
What I meant is that you have tetration!

Not just for the bases between 1 and eta but for all real bases larger than 1.

Not sure if you realise it yet.

Here is a sketchy way to show it:

In short, since you can analytically interpolate x^x^... = m

where the ... denotes integer iterations and m is a real number > e...

you can THUS solve for the x in tet_x(t) = m for a given m > e and t > 0

(x is the base).

But this also means that you can solve for t, since you can set up the equation RAM(m,t) = x for any desired x.

When you have this t, you have found tet_x(t) = m for the given m.

In other words, from the relation tet_x(t) = m you can solve for either x or t.

Therefore you can solve sexp_x(t) = m for t,

which is slog_x(m).

Then invert this function and you have sexp for any base > 1.

Since all of this is done analytically, you have found tetration.

And it seems simpler than some other methods, like Kneser or Cauchy.

Hope this is clear enough.
I can explain more if required.

So JmsNxn finally has his own method, with credit to the brilliant comment by fivexthethird.

(I'm thinking of a variant of this method too.)

I just wonder what this will be called...

JMS method? JN method? Jms5x3 method?

Jms5x31729 method?

I already started calling it in my head the "Ramanujan-Lagrange method".
The reason seems clear: Ramanujan's master theorem and Lagrange's inversion theorem.

For those unfamiliar:

http://en.wikipedia.org/wiki/Lagrange_inversion_theorem



regards

tommy1729
#10
It seems that this method is equivalent to the method of Newton series.
To see this, note that
\( e^x \sum_{k=0}^{\infty}f(k) \frac{(-1)^k x^k}{k!} = (\sum_{k=0}^{\infty}\frac{x^k}{k!})(\sum_{k=0}^{\infty}f(k) \frac{(-1)^k x^k}{k!}) = \)
\( \sum_{k=0}^{\infty}x^k \sum_{j=0}^k\frac{(-1)^{j}f(j)}{(k-j)!j!} = \sum_{k=0}^{\infty}\frac{x^k}{k!} \sum_{j=0}^k(-1)^{j}f(j){k \choose j} = \)
\( \sum_{k=0}^{\infty}{\Delta}^kf(0) \frac{(-1)^{k}x^k}{k!} \)
where \( \Delta f(x) = f(x+1)-f(x) \).
So we can rewrite the integral as
\( \frac{1}{\Gamma(-z)}\int_0^{\infty} \sum_{k=0}^{\infty}{\Delta}^kf(0) \frac{(-1)^{k}x^{k-z-1}e^{-x}}{k!} dx \)
Since the power series defines an entire function, we can exchange the integral and the sum, so that we have
\( \sum_{k=0}^{\infty}{\Delta}^kf(0)\frac{(-1)^{k}}{k!\Gamma(-z)} \int_0^{\infty} x^{k-z-1}e^{-x} dx = \)
\( \sum_{k=0}^{\infty}{\Delta}^kf(0)\frac{(-1)^{k}\Gamma(k-z)}{k!\Gamma(-z)} =\sum_{k=0}^{\infty}{\Delta}^kf(0)\frac{(z)_{k}}{k!} \)
where \( (z)_{k} = z(z-1)(z-2)...(z-(k-1)) \) is the falling factorial.
This is just the Newton series of f around 0.
TPID 13 is thus solved, as \( x^{\frac{1}{x}} \) satisfies the bounds required for Ramanujan's master theorem to apply.
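Here is a quick numerical sanity check of the equivalence (toy code of my own; the test function \( f(k)=\frac{1}{k+1} \) is just a convenient exponentially bounded choice, for which \( \Delta^k f(0)=\frac{(-1)^k}{k+1} \) and \( \sum_k f(k)\frac{(-x)^k}{k!}=\frac{1-e^{-x}}{x} \) in closed form). Both sides should come out near \( \frac{1}{z+1} \).

Code:
import numpy as np
from scipy.integrate import quad
from math import gamma, expm1

z = -0.5  # evaluation point; the integral below converges for -1 < z < 0

# Ramanujan side: f(z) = 1/Gamma(-z) * Integral_0^oo x^(-z-1) F(x) dx,
# where F(x) = sum_k f(k) (-x)^k / k! = (1 - e^(-x)) / x for this test function.
def F(x):
    return -expm1(-x) / x if x != 0 else 1.0

integrand = lambda x: x ** (-z - 1) * F(x)
part1, _ = quad(integrand, 0, 1)       # split at 1: mild x^(-1/2) singularity at 0
part2, _ = quad(integrand, 1, np.inf)  # algebraic x^(-3/2) decay at infinity
rmt_value = (part1 + part2) / gamma(-z)

# Newton-series side: sum_k Delta^k f(0) * (z)_k / k!, where (z)_k / k! = binom(z, k).
K = 200_000  # the Newton series converges slowly at z = -0.5, so take many terms
k = np.arange(1, K)
binom_z_k = np.concatenate(([1.0], np.cumprod((z - (k - 1)) / k)))  # binom(z, k)
delta_k = (-1.0) ** np.arange(K) / (np.arange(K) + 1.0)             # Delta^k f(0)
newton_value = np.sum(delta_k * binom_z_k)

print(rmt_value, newton_value, 1.0 / (z + 1))  # both approximate 2 (the Newton sum to ~3 digits)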

@tommy: I don't see how inverting around t helps us recover tetration from the interpolated super root... it gets us the slog, yes, but in either case we need to just invert around m to get tetration.
Or am I misinterpreting your post?

