Using a family of asymptotic tetration functions...
#11
(04/26/2021, 02:11 AM)sheldonison Wrote: Thanks James,

Your latest paper looks really impressive; I will comment more later; I've been too busy to spend any time on tetration over the last month and a half.
- Sheldon

Thank you, Sheldon, for doing that variable change,

\(
s = \log(w)/\lambda\\
\)

And thinking of it as a weird Schroder equation. That was definitely the click I needed. Using \( \phi \) obviously doesn't work because it solves,

\(
g(ew) = we^{g(w)}
\)

Which is definitely not tetration at \( w = \infty \). Whereas, the function,

\(
g(e^{\lambda}w) = \frac{w}{w+1}e^{g(w)}\\
\)

IS definitely tetration at \( w = \infty \) because \( \frac{w}{w+1}|_{w=\infty} =1 \). Which isn't really obvious when you write it using the variable \( s \). They both go to infinity, and they both display chaotic behaviour; but one looks like \( \infty e^\infty \) and the other just looks like \( e^\infty \).
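To spell the variable change out: writing \( w = e^{\lambda s} \), so that \( s \mapsto s+1 \) corresponds to \( w \mapsto e^{\lambda} w \), and setting \( \beta_\lambda(s) = g(e^{\lambda s}) \), the second equation becomes

\(
\beta_\lambda(s+1) = g(e^{\lambda}\cdot e^{\lambda s}) = \frac{e^{\lambda s}}{e^{\lambda s}+1}\,e^{g(e^{\lambda s})} = \frac{e^{\beta_\lambda(s)}}{1+e^{-\lambda s}}\\
\)

which is the recursion everything further down is built on.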

If you ever get the time to do Tetration again, I'm happy to answer any questions. A lot of this is very complicated, and especially when pasting the tetration functions together to get the tetration we actually want; I imagine it'll be very confusing. The key though, is that once you can do this uniformly in \( \lambda \) and \( w \); you can talk about moving \( \lambda \to 0 \) while iterating; and everything still works.

At this point, I can't think of anything to add to the paper. And as a disclaimer: I am a horrible programmer. I only ever did a bunch of C programming back in the glory days of XBOX mods, where I dealt with hex code as a primary. My pièce de résistance was a hack for Capcom games that broke the encryption of picture files... to make girls naked on a certain Capcom game. So I do not know how to even approach generating a Taylor series. I imagine you, who have definitely dealt with constructing Taylor series before, especially in exotic situations, may have some light to shed. The trouble I'm having is approaching the function without it blowing up at other points -_-. And these blow-ups (which I just call short circuits) are quite clearly artifacts of my code -_-. I'm trying to even think of a way of generating Taylor series, and I have no F*n clue, lol.

   

You can see what I mean in the attached plots, where it converges much better in some areas (the singularity is more obviously a singularity; the dip to zero is more prominent; and the imaginary arguments seem more dramatic), but then some singularities in the code appear (these spikes pop out of the woodwork, which are definitely artifacts). This is no doubt because my iteration requires larger samples \( n \) of \( \beta_\lambda(s+n) \) to produce a more accurate result, but larger values of \( n \) produce overflow errors, because it looks like tetration... -_- A computer knows no difference between \( \log \exp^{\circ 10}(z) \) and \( \log \exp^{\circ 9}(z) \)--at least in MATLAB. Couple that with some round-off errors and some poorly coded exponentials, and we get blow-ups which follow a dynamic pattern. (A lot of the code errors look like Leau petals, so at least that's kind of cool, lol.)

Honestly, this would be the easiest code on planet earth if we had a processor that could handle arbitrarily large strings of binary digits. If it weren't for overflow errors, this problem would be elementary -_-. Sucks to live in a world of finite resources, lol.


Regards, James


BTW

I've added my primitive code on GitHub, so if anyone's interested in seeing how I ran the numbers, it's there.

https://github.com/JmsNxn92/Recursive_Tetration/

And this is the link to the arXiv publication.

https://arxiv.org/abs/2104.01990
#12
(04/01/2021, 05:19 AM)JmsNxn Wrote: **Here's the arXiv link with everything**

https://arxiv.org/abs/2104.01990

I hardly understand how to read your symbols. Did you complete a brand new holomorphic Tetration in base e? Where is your superlog?
#13
(05/03/2021, 01:58 PM)Ember Edison Wrote:
(04/01/2021, 05:19 AM)JmsNxn Wrote: **Here's the arXiv link with everything**

https://arxiv.org/abs/2104.01990

I hardly understand how to read your symbols. Did you complete a brand new holomorphic Tetration in base e? Where is your superlog?

Hey, I assume the problem you are having is with the Omega-notation. The Omega-notation isn't pivotal, but it saves a hell of a lot of space.

Assume we have a sequence of holomorphic functions \( \phi_j(s,z) : \mathcal{S} \times \mathcal{G} \to \mathcal{G} \) for domains \( \mathcal{S},\mathcal{G} \subseteq \mathbb{C} \). The Omega-notation has been developed repeatedly by me in about 6 papers now, and it can get a little repetitive re-introducing it in each paper, so I just ran with it here.

If I write,

\(
\Omega_{j=1}^n \phi_j(s,z) \bullet z
\)

Then this is interpreted as,

\(
\phi_1(s,\phi_2(s,...\phi_n(s,z)))\\
\)

The bullet essentially binds the variable that we compose across. It's similar to how \( ds \) behaves in \( \int...ds \); it binds the operation to a specific variable. And then the \( \Omega_{j=1}^n \) just means to compose these functions across the index \( j \) from \( 1 \) to \( n \). Included in the paper is a proof for a specific type of "infinite composition". An infinite composition is just what you get when you let \( n\to\infty \). These things can converge in many different ways; I only use a specific case in this paper.
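If a concrete toy helps: for finite \( n \) the notation is nothing more than a nested composition, which in Mathematica is a one-line Fold (the names omegaCompose and the sample family phi here are purely illustrative):

Code:
(* Omega_{j=1}^n phi_j(s,z) bullet z  =  phi_1(s, phi_2(s, ..., phi_n(s, z))) *)
omegaCompose[phi_, n_, s_, z_] := Fold[phi[#2][s, #1] &, z, Range[n, 1, -1]]

(* toy family phi_j(s,z) = z/2 + s/j, just to see the nesting: *)
phi[j_][s_, z_] := z/2 + s/j;
omegaCompose[phi, 3, s, z] == phi[1][s, phi[2][s, phi[3][s, z]]] (* True *)

Letting \( n \to \infty \) in that nesting is the infinite composition in question,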

\(
\Phi(s) = \lim_{n\to\infty} \Omega_{j=1}^n \phi_j(s,z)\bullet z = \lim_{n\to\infty} \phi_1(s,\phi_2(s,...\phi_n(s,z)))\\
\)

This will converge to a holomorphic function in \( s \) if the following sum converges. Let \( \mathcal{N} \subset \mathcal{S} \) and \( \mathcal{K} \subset \mathcal{G} \) be arbitrary compact sets. If there exists an \( A \in \mathcal{G} \) such that, for all \( \mathcal{N},\mathcal{K} \),

\(
\sum_{j=1}^\infty \sup_{s\in\mathcal{N},z\in\mathcal{K}} |\phi_j(s,z) - A| < \infty\\
\)

Then the sequence,

\(
\lim_{n\to\infty} \phi_1(s,\phi_2(s,...\phi_n(s,z)))
\)

Converges uniformly on \( \mathcal{N} \) and \( \mathcal{K} \), to a holomorphic function in \( s \) and a constant function in \( z \). The proof of this can be found in the appendix of the paper; but I've proven it a couple of times in other papers. Specifically, it was used when I constructed the \( \phi \) method before (which only made a \( C^\infty \) tetration).

From here, I move pretty fast in the paper; again, I've done this so many times it can be exhausting to rewrite introductions in every paper.

If I define the set \( \mathbb{L} = \{(s,\lambda) \in \mathbb{C}^2\,|\, \Re \lambda > 0,\,\lambda(j-s) \neq (2k+1)\pi i,\,j,k \in \mathbb{Z},\,j\ge 1\} \); then the following sequence of functions is holomorphic on \( \mathbb{L} \times \mathbb{C} \).

\(
q_j(s,\lambda,z) = \frac{e^z}{e^{\lambda(j-s)} + 1}\\
\)

Additionally, on compact subsets of \( \mathbb{L} \) and \( \mathbb{C} \), we know the sum converges,

\(
\sum_{j=1}^\infty ||q_j(s,\lambda,z)|| < \infty\\
\)

From this, we can write that

\(
\beta_\lambda(s) = \Omega_{j=1}^\infty q_j(s,\lambda,z)\bullet z\\
\)

Is a holomorphic function for \( (s,\lambda) \in \mathbb{L} \). We can write this more explicitly as:

\(
\beta_\lambda(s) = \Omega_{j=1}^\infty \frac{e^z}{e^{\lambda(j-s)} + 1}\bullet z\\
\)
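This is short enough to sanity-check numerically by just truncating the composition; a rough Mathematica sketch (the name betaLambda and the cutoff nMax are just for this sketch):

Code:
(* Truncation of beta_lambda(s): compose q_j(s,lambda,z) = e^z/(e^(lambda(j-s)) + 1) *)
(* from j = nMax inward to j = 1, starting the composition at z = 0.                 *)
betaLambda[s_, lambda_, nMax_: 100] :=
 Fold[Exp[#1]/(Exp[lambda (#2 - s)] + 1) &, 0, Range[nMax, 1, -1]]

betaLambda[1.5, 1.] (* a finite, moderate value for small Re(s) *)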

Now, if I shift \( \beta_\lambda(s) \) forward by \( s \mapsto s+1 \) then we get a re-indexing in our infinite composition, where we start from \( j=0 \) rather than \( j=1 \).

\(
\beta_\lambda(s+1) = \Omega_{j=0}^\infty \frac{e^z}{e^{\lambda(j-s)} + 1}\bullet z\\
\)

But, this just equals,

\(
\Omega_{j=0}^\infty \frac{e^z}{e^{\lambda(j-s)} + 1}\bullet z = q_0(s,\lambda,\Omega_{j=1}^\infty \frac{e^z}{e^{\lambda(j-s)} + 1}\bullet z)\\
\)

To make a long story short; this means that,

\(
\beta_\lambda(s+1) = \frac{e^{\beta_\lambda(s)}}{e^{-\lambda s} + 1}\\
\)
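And you can watch that recursion hold numerically with the truncated sketch from above (the truncation error only lives far out in the tail):

Code:
betaLambda[s_, lambda_, nMax_: 100] :=
 Fold[Exp[#1]/(Exp[lambda (#2 - s)] + 1) &, 0, Range[nMax, 1, -1]]

(* both sides of beta(s+1) = e^beta(s)/(e^(-lambda s) + 1) should agree to many digits: *)
With[{s = 1.5, lambda = 1.},
 {betaLambda[s + 1, lambda], Exp[betaLambda[s, lambda]]/(Exp[-lambda s] + 1)}]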

I assume this is where you were having trouble; as I did just blaze through this part. The rest of the paper then focuses on solving the Abel equation at \( \Re(s) = \infty \) using this function. Where,

\(
\log \beta_\lambda (s+1) = \beta_\lambda(s) - \log(1+e^{-\lambda s})\\
\)

And we want to add in a sequence of convergents \( \tau_\lambda^n \), which solve,

\(
\log (\beta_\lambda(s+1) + \tau_\lambda^n(s+1)) = \beta_\lambda(s) + \tau_\lambda^{n+1}(s)\\
\)

And we show that the limit \( \tau_\lambda = \lim_{n\to\infty} \tau_\lambda^n \) is holomorphic. This solves the Abel equation for any \( \lambda \) and large enough \( \Re(s) \); in which,

\(
F_\lambda(s) = \beta_\lambda(s) + \tau_\lambda(s)\\
\log F_\lambda(s+1) = F_\lambda(s)\\
\)
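Numerically, if you seed the convergents with \( \tau_\lambda^0 = 0 \) (just for the sketch), then \( \beta_\lambda(s) + \tau_\lambda^n(s) \) collapses to \( \log^{\circ n}\beta_\lambda(s+n) \), which is easy to play with for small \( n \):

Code:
(* F_lambda(s) ~ beta_lambda(s) + tau_lambda^n(s) = log^(n)( beta_lambda(s+n) ), with tau^0 = 0. *)
(* n has to stay small: beta grows tetrationally, so larger n overflows machine numbers.         *)
betaLambda[s_, lambda_, nMax_: 100] :=
 Fold[Exp[#1]/(Exp[lambda (#2 - s)] + 1) &, 0, Range[nMax, 1, -1]]
FLambda[s_, lambda_, n_: 4] := Nest[Log, betaLambda[s + n, lambda], n]

(* the Abel equation log F(s+1) = F(s), i.e. e^F(s) = F(s+1), holds approximately: *)
With[{s = 0.3, lambda = 1.}, {Exp[FLambda[s, lambda]], FLambda[s + 1, lambda]}]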


Then, the real trouble is that these tetrations are periodic and have a whole bunch of singularities; so you don't want them at all. Instead you want to take the limit \( \lim_{n\to\infty} \tau_\lambda^n \) while letting \( \lambda \to 0 \) with \( \lambda = \mathcal{O}(n^{-\epsilon}) \), and this will give you a real-valued tetration function, which is holomorphic and whose only singularities are at the negative integers less than \( -1 \).

At this point, I'm confident this constructs holomorphic tetration for the base point \( e \), and that it's real-valued. I'm working on making some Pari-GP code; but the trouble is, as I'm pulling back from infinity, we encounter overflow errors pretty quickly in the construction. I can already achieve a precision of about 10 digits, but only when the argument satisfies roughly \( \Re(s) < 3 \). I am not the best programmer, so I don't think I'll be able to improve it unless I can think of a way of generating Taylor series. I do not know how to generalize this result to other bases though, as the dynamics are particular to \( e \); but it should work... I think, not certain.

As to the superlog; it only exists through the implicit function theorem; I have no effective way of constructing it other than just functionally inverting the tetration function.

If you have any more questions, just ask. I'm happy to answer. A lot of this is new math centering around the Omega-notation, and it can be a little confusing, I understand. I have written a bunch of papers using it though, and it's just annoying to have to make a new section in each paper explaining what the notation means in depth. It's gotten to the point where I assume no one really reads these things, so I just reference previous work and give a quick run-through.

Regards, James

PS:

If you are having trouble picturing what \( \beta_\lambda \) looks like, you can always think of it the way Tommy and Sheldon think of it.

\(
{\displaystyle\beta_\lambda(s) = \frac{e^{\displaystyle\frac{e^{\displaystyle\frac{e^{\displaystyle...}}{e^{\lambda(3-s)} + 1}}}{e^{\lambda(2-s)} + 1}}}{e^{\lambda(1-s)} + 1}}
\)

Which is just pulling the iteration all the way back to infinity; where,

\(
\beta_\lambda(-\infty) = 0\\
\)

And

\(
\beta_\lambda(s+1) = \frac{e^{\beta_\lambda(s)}}{e^{-\lambda s} + 1}\\
\)

And we are solving this in a neighborhood of negative infinity, where the value is zero, and then just iterating forward to get the whole complex plane. I don't really like this way of thinking of it, largely because if you try to prove convergence that way, it doesn't generalize to more complicated constructions. I prefer viewing it as: if a sum converges compactly normally, then the infinite composition converges compactly normally. This generalizes well to much more exotic constructions than \( \beta_\lambda \).
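In code the two viewpoints are literally the same computation read from different ends: seed with \( 0 \) far to the left and push the recurrence forward. A quick check that the seed really is (numerically) zero out there, and that one forward step of the recurrence reproduces the next value (same truncated sketch as before):

Code:
betaLambda[s_, lambda_, nMax_: 100] :=
 Fold[Exp[#1]/(Exp[lambda (#2 - s)] + 1) &, 0, Range[nMax, 1, -1]]

{betaLambda[-20, 1.],                          (* essentially 0 *)
 betaLambda[-19, 1.],                          (* one step to the right *)
 Exp[betaLambda[-20, 1.]]/(Exp[20.] + 1)}      (* = e^beta(-20)/(e^(-lambda s) + 1) with lambda = 1, s = -20 *)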
#14
So I've stumbled across a better way of coding this object. I've managed to make some fairly flawless code; but for some reason I get too many errors when it's graphed. And as I looked at these errors, I realized I've stumbled across numerical evidence that this tetration IS NOT Kneser's tetration.

The key identity you have to recall in Kneser's tetration is that,

\(
\lim_{z\to\infty} \text{tet}_K(z) = L\,\,\text{for}\,\,\pi/2 \le \arg(z) <\pi\\
\)

And furthermore, that the principal branch of the logarithm, on the upper half plane \( \mathbb{H} = \{z \in \mathbb{C}\,|\,\Im(z) > 0\} \), satisfies,

\(
\lim_{n\to\infty} \log^{\circ n}(\mathbb{H}) = L\\
\)

Now, Pari-GP/MATLAB always choose the principal branch. The fact is, my tetration is only stable about the principal branch of \( \log \) near the real axis. Everywhere else, my tetration chooses different logarithms. This is because, as I already suspected, my tetration is not normal in the upper half plane like Kneser's is. This can be summarized as,

\(
\lim_{|\Im(s)| \to \infty} \text{tet}_\beta(s) = \infty\\
\)

So, although my tetration can be calculated perfectly point-wise using,

\(
\log^{\circ n}(\beta(s+n))\\
\)

With the principal branch of \( \log \). It cannot be holomorphic everywhere in \( \mathbb{H} \) if we only use the principal branch of \( \log \). So the anomalies I'm seeing in my code are in fact evidence that this function is NOT Kneser's tetration. And coding this is going to be even harder.

I'm preparing a second paper proving that this tetration is not Kneser's. At this point, I'm certain everything works. I was scared this may just reproduce Kneser's tetration, but my gut says no; and I think I can prove it now.

This is a real valued tetration that is completely different from what we've seen before. There is no special fixed point involved at all.
#15
Hey James,
I've lately been considering an analogous family of asymptotic tetration functions, satisfying the recurrence
\( f(z+1)=a^z*b^{f(z)} \) with arbitrary constants \( a \) and \( b \).
I simply take \( f(z)=g(a^z) \) and solve \( g(az)=z*b^{g(z)} \) term by term in series, using this code I wrote in Wolfram Mathematica 12; it iterates the recurrence until the function converges:
Code:
Clear[A, B, term, aa, IS, Z]
(* Solving \[Alpha] in coefficients *)
A = 1 + I;
B = 1/2;
term = 15;
aa[0] = 0;
aa[1] = 1/A;
IS = 1/A xx + Sum[aa[n] xx^n, {n, 2, term}];
Z = xx Series[Exp[IS Log[B]], {xx, 0, term}] - (IS /. xx -> A xx);
For[i = 2, i <= term, i++,
temp = Solve[Coefficient[Z, xx, i] == 0, aa[i]];
aa[i] = Simplify[temp[[1, 1, 2]]]]

Clear[\[Alpha], \[Beta], ff]
\[Alpha][z_] := Sum[aa[n] z^n, {n, 0, 15}] (* truncated series for g near the origin *)
(* |A| > 1: shrink z toward 0 by repeatedly dividing by A, evaluate the series there, then push back out with o -> x B^o *)
\[Beta][z_] := Module[{x, q, o},
  x = N[z, 200];
  x = SetPrecision[x, 200];
  q = 0;
  While[Abs[x] > 10^-50,
   x = x/A;
   q = q + 1];
  o = \[Alpha][x];
  While[q > 0, o = x B^o; q = q - 1; x = A x];
  Return[o]] /; Abs[A] > 1
(* |A| < 1: shrink z by multiplying by A instead, then climb back out, recovering values with base-B logarithms *)
\[Beta][z_] := Module[{x, q, o},
  x = N[z, 200];
  x = SetPrecision[x, 200];
  q = 0;
  While[Abs[x] > 10^-50,
   x = x A;
   q = q + 1];
  o = \[Alpha][x];
  While[q > 0, o = Log[B, o/x]; q = q - 1; x = x/A];
  Return[o]] /; Abs[A] < 1
ff[z_] := \[Beta][A^z] /; Abs[A] > 1
ff[z_] := \[Beta][A^(z - 1)] /; Abs[A] < 1
I think maybe there's a relation between these functions, especially seeing how they diverge when \( a \) is close to 1. So I think, if this is correct:
\( \lim_{a\to1}f(z-g(a))\sim\mathrm{tet}_b(z) \), where \( g(a) \) is determined only by \( a \) and explodes to infinity as \( a \) gets closer to 1.
Also, these functions are multivalued (given the relation between \( f(z) \) and \( f(z-1) \)); maybe they're associated with a Riemann surface?

Leo
#16
(08/05/2021, 04:51 PM)Leo.W Wrote: Hey James,
I've lately been considering an analogous family of asymptotic tetration functions, satisfying the recurrence
\( f(z+1)=a^z*b^f(z) \) with arbitrary constant a and b
I simply take \( f(z)=g(a^z) \) and solve \( g(az)=z*b^g(z) \) term by term in series, using this code I wrote in Wolfram Mathematica 12; it iterates the recurrence until the function converges
Code:
Clear[A, B, term, aa, IS, Z]
(* Solving \[Alpha] in coefficients *)
A = 1 + I;
B = 1/2;
term = 15;
aa[0] = 0;
aa[1] = 1/A;
IS = 1/A xx + Sum[aa[n] xx^n, {n, 2, term}];
Z = xx Series[Exp[IS Log[B]], {xx, 0, term}] - (IS /. xx -> A xx);
For[i = 2, i <= term, i++,
temp = Solve[Coefficient[Z, xx, i] == 0, aa[i]];
aa[i] = Simplify[temp[[1, 1, 2]]]]

Clear[\[Alpha], \[Beta], ff]
\[Alpha][z_] := Sum[aa[n] z^n, {n, 0, 15}]
\[Beta][z_] := Module[{x, q, o},
  x = N[z, 200];
  x = SetPrecision[x, 200];
  q = 0;
  While[Abs[x] > 10^-50,
   x = x/A;
   q = q + 1];
  o = \[Alpha][x];
  While[q > 0, o = x B^o; q = q - 1; x = A x];
  Return[o]] /; Abs[A] > 1
\[Beta][z_] := Module[{x, q, o},
  x = N[z, 200];
  x = SetPrecision[x, 200];
  q = 0;
  While[Abs[x] > 10^-50,
   x = x A;
   q = q + 1];
  o = \[Alpha][x];
  While[q > 0, o = Log[B, o/x]; q = q - 1; x = x/A];
  Return[o]] /; Abs[A] < 1
ff[z_] := \[Beta][A^z] /; Abs[A] > 1
ff[z_] := \[Beta][A^(z - 1)] /; Abs[A] < 1
I think maybe there's a relation between these functions, especially seeing how they diverge when \( a \) is close to 1. So I think, if this is correct:
\( \lim_{a\to1}f(z-g(a))\sim\mathrm{tet}_b(z) \), where \( g(a) \) is determined only by \( a \) and explodes to infinity as \( a \) gets closer to 1.
Also, these functions are multivalued (given the relation between \( f(z) \) and \( f(z-1) \)); maybe they're associated with a Riemann surface?

Leo

Hey, Leo

I'm sorry; I don't think I follow. Would you mind elaborating? What is \( b^f(z) \), particularly?

Regards, James



OHHHH WAIT, never mind, I get it. You meant to write \( b^{f(z)} \). You are absolutely correct.

What you have constructed here, using Sheldon's idea of a modified Schroder function, is

\(
f(s) = \Omega_{j=1}^\infty a^{s-j} b^z\,\bullet z\\
\)

This function will be holomorphic for \( |a| > 1,\, b \neq 0,\, s \in \mathbb{C} \). This is similar to how I constructed the \( \phi \) method, where I took \( a = b = e \). The conjecture that stands is that this can only construct a \( \mathcal{C}^\infty \) tetration on \( \mathbb{R}^+ \), and that it converges nowhere in \( \mathbb{C} \) when you apply iterated logs.
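For concreteness, here's a rough Mathematica sketch of that object and a check of your recurrence \( f(s+1) = a^s b^{f(s)} \) (the name fAB and the cutoff are just for the sketch):

Code:
(* truncation of f(s) = Omega_{j=1}^infty a^(s-j) b^z bullet z *)
fAB[a_, b_, s_, nMax_: 100] := Fold[a^(s - #2) b^#1 &, 0, Range[nMax, 1, -1]]

(* both sides of f(s+1) = a^s * b^f(s) should agree closely: *)
With[{a = 2., b = 3., s = 0.5}, {fAB[a, b, s + 1], a^s b^fAB[a, b, s]}]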

Your construction method is perfectly valid though; it's how Sheldon justified my method. Both ways are equivalent; his is more hands-on with Taylor series.

By this, I mean, you can construct a family of tetrations,

\(
F(a,b,s) = \lim_{n\to\infty} \log_b^{\circ n} f(s+n)\\
\text{for}\,\,a>1,\,b > 0,\,s \in \mathbb{R},\,s > R\,\,\text{for some}\,\,R > 0\\
b^{F(a,b,s)} = F(a,b,s+1)\\
\)

It will probably diverge in \( \mathbb{C} \) though. It's going to look like the \( \phi \) method.



I'd suggest looking at something that solves the asymptotic equation, keeping \( b > e^{1/e} \) and real. Something like,

\(
g(b,\lambda, s) = \Omega_{j=1}^\infty \frac{b^z}{e^{\lambda(j-s)} + 1}\,\bullet z\\
\)

Which satisfies the equation,

\(
\log_b g(b,\lambda, s+1) = g(b,\lambda,s) - \log_b(1+e^{-\lambda s})\\
\)

Or fiddle with Tommy's Gaussian approach. Much of this paper extends to all \( b > e^{1/e} \); I just kept it at base \( e \) to keep things simpler. Theoretically the beta method works for all \( b > e^{1/e} \); I'm not too sure about complex values yet.
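For what it's worth, here is the same kind of rough sketch for this base-\( b \) version (again, the name betaBase and the cutoff are only for illustration, not code from the paper):

Code:
(* truncation of g(b, lambda, s) = Omega_{j=1}^infty b^z/(e^(lambda(j-s)) + 1) bullet z *)
betaBase[b_, s_, lambda_, nMax_: 100] :=
 Fold[b^#1/(Exp[lambda (#2 - s)] + 1) &, 0, Range[nMax, 1, -1]]

(* check of log_b g(s+1) = g(s) - log_b(1 + e^(-lambda s)) for a sample base b > e^(1/e): *)
With[{b = 2., s = 1.5, lambda = 1.},
 {Log[b, betaBase[b, s + 1, lambda]], betaBase[b, s, lambda] - Log[b, 1 + Exp[-lambda s]]}]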

Regards, James

