My new ABEL_L.gp program
#1
UPDATE: I'll be updating this code on a GitHub repository. I've modified some of the functions to make them more conducive to Taylor series.

https://github.com/JmsNxn92/Recursive_Tetration_PARI

Mike's graphing tool is there too; if you want to make graphs.

Hey, everyone!

So I'm fairly satisfied with my code at this point. There are no glaring errors, and it supports 200-digit precision if you need it. The calling conventions are a little shoddy, but everything is working as it should, despite the slightly awkward gait. Before I attach the code (at the bottom), I'll recall what these functions are and do.

To begin, I constructed a family of asymptotic solutions to tetration using infinite compositions. These functions can be denoted,

\(
\beta_\lambda(s) = \Omega_{j=1}^\infty \frac{e^z}{e^{\lambda(j-s)} + 1}\,\bullet z\\
\beta_\lambda(s) = {\displaystyle \frac{e^{\displaystyle \frac{e^{\displaystyle \frac{e^{...}}{e^{\lambda(3-s)} + 1}}}{e^{\lambda(2-s)} + 1}}}{e^{\lambda(1-s)} + 1}}\\
\)

These are holomorphic for \( (s,\lambda) \in \mathbb{L} = \{(s,\lambda) \in \mathbb{C}^2\,|\,\Re \lambda > 0,\, \lambda(j-s) \neq (2k+1)\pi i,\,j,k \in \mathbb{Z},\,j \ge 1\} \). This family satisfies the equation,

\(
\log \beta_\lambda(s+1) = \beta_\lambda(s) - \log(1+e^{-\lambda s})\\
\)
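For anyone who wants to play with this construction outside of Pari-GP, here is a minimal Python sketch of the truncated composition. This is purely illustrative and not the attached code: the function name `beta_lambda` and the truncation depth `N = 100` are my own choices. It evaluates the tower from the innermost term down, then checks the functional equation numerically:

```python
import math

def beta_lambda(s, lam, N=100):
    """Truncated infinite composition for beta_lambda(s) (real case).

    Evaluates the nested tower from the innermost term j = N down to j = 1.
    The tail terms decay like exp(-lam*(j - s)), so N = 100 is plenty here;
    note that plain doubles overflow once beta itself gets large.
    """
    z = 0.0
    for j in range(N, 0, -1):
        z = math.exp(z) / (math.exp(lam * (j - s)) + 1)
    return z

# Numerical check of the functional equation
#   log beta_lambda(s+1) = beta_lambda(s) - log(1 + exp(-lam*s))
s, lam = 0.5, 1.0
lhs = math.log(beta_lambda(s + 1, lam))
rhs = beta_lambda(s, lam) - math.log(1 + math.exp(-lam * s))
```

With these parameters the two sides agree to near machine precision, since the truncation error is on the order of \( e^{-\lambda(N-s)} \).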

From here, we insert an error term \( \tau_\lambda(s) = - \log(1+e^{-\lambda s}) + o(e^{-\lambda s}) \) such that,

\(
F_\lambda = \beta_\lambda + \tau_\lambda\\
\exp F_\lambda(s) = F_\lambda(s+1)\\
\)

This produces a solution to the tetration Abel/superfunction equation. Each of these functions is periodic with period \( 2\pi i/\lambda \); for \( \lambda \in \mathbb{R}^+ \) they are real-valued, and they have a wall of singularities on the boundary of the period strip. These functions are included in the code dump.

From here, we take a new asymptotic solution, which we write \( \beta(s) = \beta_{1/\sqrt{1+s}}(s) \). The basic principle is that the tetration we want is,

\(
\text{tet}_\beta(s) = \lim_{n\to\infty} \log^{\circ n} \beta(s+n+x_0)\\
\)

Here \( x_0 \in \mathbb{R} \) is a normalizing constant (I haven't normalized the code yet; \( x_0 \approx 2 \), but I'm having trouble finding the constant exactly).
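Here is a rough Python sketch of this limit. Again, this is only an illustration, not the attached code: `x0 = 2.0` is just the approximate normalization mentioned above, and plain double precision overflows once \( n \ge 4 \) on the real line, so the sketch keeps \( n \) small.

```python
import math

def beta(s, N=100):
    """beta(s) = beta_{1/sqrt(1+s)}(s): the truncated composition with the
    variable multiplier lam = 1/sqrt(1+s) (real s > -1 only in this sketch)."""
    lam = 1.0 / math.sqrt(1.0 + s)
    z = 0.0
    for j in range(N, 0, -1):
        z = math.exp(z) / (math.exp(lam * (j - s)) + 1)
    return z

def tet_beta(s, n, x0=2.0):
    """Approximate tet_beta(s) with n iterated logarithms of beta(s + n + x0).

    x0 = 2.0 is only the approximate normalization constant from the post;
    plain doubles overflow for n >= 4 on the real line, so keep n small.
    """
    z = beta(s + n + x0)
    for _ in range(n):
        z = math.log(z)
    return z

# Successive approximations to tet_beta(0), which should land near 1
# if x0 is close to the right normalization:
approx = [tet_beta(0.0, n) for n in (2, 3)]
```

With \( x_0 = 2 \) the successive values already agree to a couple of decimal places; a serious implementation needs arbitrary precision (as in Pari-GP) to push \( n \) higher.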


I've tried to add as much documentation as I can, but I'm wondering how much is really needed; a lot of the code is pretty self-explanatory. The only difficult part is generating Taylor series, where you have to be fairly careful. I explained as well as I could how to grab Taylor series. This is necessary for calculating near the real line (but not on the real line). It is definitely not optimized, though.

I'm surprised by how simply you can construct tetration in Pari; all I really used was a couple of for loops, a couple of if statements, and a whole swath of recursion. This is significantly more elementary than Sheldon's fatou.gp, so expect more glitches. I also have a strong feeling this is not Kneser's tetration. It appears to diverge for large imaginary arguments, which can only happen if it isn't normal as the imaginary argument grows; Kneser's solution, by contrast, tends to the fixed point \( L \).

Here's the tetration solution \( \text{tet}_\beta(z) \) for \( -1 \le \Re(z) \le 4 \) and \( -0.8 \le \Im(z) \le 0.8 \).


Here's the Taylor series of \( \text{tet}_\beta(z) \) about \( z = 1 \), with 100 terms at 100-digit precision. These Taylor series converge pretty slowly, even with 100 terms and 100 digits; nonetheless the radius of convergence about \( z = 1 \) will be 3; we're just seeing very slow convergence. It's nice to see it diverging fairly evenly away from the center point, though.


Here's the Taylor series about \( z=0 \), graphed over a smaller region. Taylor series converge more slowly in proximity to the singularity at \( z=-2 \); at least, that's how it seems. This is a box about where it converges. You can see it already diverging in the corners; chalk that up to slow convergence, because it's happening a tad early.


Any questions or comments are greatly appreciated.

Regards, James.


Attached Files Thumbnail(s)

.gp   Abel_L.gp (Size: 4.16 KB / Downloads: 497)
#2
Nice pictures!

Sorry for asking, but I do not know why you are always involving that square root of \( 1+s \)?

Was it something with Riemann mapping? Getting closer to the Kneser solution? Avoiding a singularity at \( \infty \)?
Making \( \lim \ln \ln \ln ... f(s+t+a) \) be tetration?

Or just to make the function look smoother ?

You explained it a bit before, but I did not fully understand, tbh.
#3
Hey, Tommy

It's a little arbitrary that I use \( \sqrt{1+s} \); but I'll explain it again.

The function \( \beta_\lambda(s) \) has singularities at the points \( s = j + (2k+1)\pi i / \lambda \); so when we write,

\(
F_\lambda(s) = \lim_{n\to\infty} \log^{\circ n} \beta_\lambda(s+n)\\
\)

This will inherently have branch cuts/singularities at the points \( j + (2k+1)\pi i / \lambda \) for \( j,k \in \mathbb{Z} \). Other than that, this construction works very well. So what we want to do is move \( \lambda \) while we're taking this limit.

The way I proved this works was essentially with \( \lambda = 1/\sqrt{1+s} \); but it need not be this function. If \( \lambda : \mathbb{R}^+ \to \mathbb{R}^+ \) and,

\(
\beta_{\lambda(s)}(s) : \{s \in \mathbb{C}\,|\,|\arg(s)| < \theta < \pi/2\} \to \mathbb{C}\\
\)

where \( \lambda(s) = \mathcal{O}(s^{-\epsilon}) \) as \( s \to \infty \), for some \( 0 < \epsilon < 1 \); then the construction works. Additionally, they will all produce the same function (because I use Banach's Fixed Point Theorem to construct these things).

The way I think about it is as a Riemann mapping. We are going to move \( \lambda \) while we're iterating, to avoid the singularities. So for example, the solutions to the equation,

\(
s = j + (2k+1) \pi i \sqrt{1+s}\\
\)

get further and further out in the complex plane. And we can find a sector \( S_\theta = \{s \in \mathbb{C}\,|\,|\arg(s)| < \theta < \pi/2 \} \) on which,

\(
\beta_{1/\sqrt{1+s}}(s) : S_\theta \to \mathbb{C}\\
\)

and it is holomorphic there. This function still acts as an asymptotic solution to tetration, but it doesn't have a convenient functional equation. Now, when we do our construction with the iterated logarithms, we get,

\(
F(s) : S_\theta \to \mathbb{C}\\
\)

But what's so great about this is that for all \( s \in \mathbb{C} \) there exists an \( n \) such that \( s+n \in S_\theta \). So we can pull this back with logarithms, and extend \( F \) to \( s \in \mathbb{C} \setminus (-\infty,-2] \) for an appropriate normalization constant \( x_0 \approx 2 \).

Now, the reason I'm presuming this isn't Kneser's tetration is that, as the imaginary argument tends to infinity, we approach the boundary of the domain of \( F_\lambda(s) \) (the almost-cylinder where it's holomorphic), which, as I mentioned earlier, is a wall of singularities/branch cuts. This implies \( \text{tet}_\beta(s) \not\to L \) as \( \Im(s) \to \infty \); instead, it should display no normality condition. I'm having trouble making this heuristic a rigorous proof, but I'm getting there.

So, all in all, you don't really need \( \lambda = 1/\sqrt{1+s} \); you could just as well choose \( \lambda = 1/(1+s)^{1/3} \), and this will produce the same tetration (albeit with a different normalization constant).

Another way to think about this, which is a bit of an abuse of notation; but it works,

\(
\text{tet}_\beta(s) = F_\lambda(s+x_0)|_{\lambda = 0}\\
\)

Where we write this limit as,

\(
\text{tet}_\beta(s) = \lim_{n\to\infty}\lim_{\lambda \to 0} \log^{\circ n} \beta_\lambda(s+x_0+n)\,\,\text{where}\,\,\lambda = \mathcal{O}(n^{-\epsilon})\,\,\text{for}\,\,0 < \epsilon < 1\\
\)

I hope that clears it up. I mostly chose \( \lambda = 1 / \sqrt{1+s} \) because it's simple and effective. This choice effectively moves all the singularities from \( s = j + (2k+1)\pi i/\lambda \) to \( \Im(s) = \pm \infty \), which is what I mean by Riemann mapping.
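As a sanity check on the claim that these singularities recede, one can solve \( s = j + (2k+1)\pi i \sqrt{1+s} \) in closed form: substituting \( w = \sqrt{1+s} \) turns it into the quadratic \( w^2 - (2k+1)\pi i\,w - (1+j) = 0 \), and then \( s = w^2 - 1 \). A small Python sketch of this (my own illustration, using the principal branch of the square root):

```python
import cmath

def singularities(j, k):
    """Roots of s = j + (2k+1)*pi*i*sqrt(1+s), principal branch.

    Substituting w = sqrt(1+s) turns the equation into the quadratic
    w**2 - (2k+1)*pi*i*w - (1+j) = 0, and then s = w**2 - 1.
    """
    c = (2 * k + 1) * cmath.pi * 1j
    disc = cmath.sqrt(c * c + 4 * (1 + j))
    return [((c + e * disc) / 2) ** 2 - 1 for e in (1, -1)]

# The outer root for j = 1 recedes as k grows:
outer = [max(singularities(1, k), key=abs) for k in range(4)]
```

With the principal branch these particular roots land on the negative real axis, and their moduli grow roughly like \( ((2k+1)\pi)^2 \); so they do recede, as claimed.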

Regards, James


Additionally Tommy, we can think about this as a Riemann mapping on a whole bunch of tetration solutions.

If I take \( \lambda = 1+i \) and construct \( F_{1+i}(s) \) I get something like this,



This has singularities on its boundary and a period of \( \ell = 2 \pi i / (1+i) \); we want to use it to create the right tetration. We want to move the boundary of the cylinder to \( \Im(s) = \pm \infty \). But how do we do this?

We do it by moving \( \lambda \) while we take \( n\to\infty \), it's that simple. The function \( \lambda = 1/\sqrt{1+s} \) is just one of many functions which will work.
#4
I managed to make the following graph by storing a matrix of Taylor series. I'm working on a separate program intended to work through a matrix method. I'm kind of reverse-engineering how Sheldon stored Taylor series in a matrix for fatou.gp, and I think I'm getting the hang of it. Again, this is nothing like Sheldon's process; I'm just appreciating how he stored matrices to make such a great calculator.

Here is my tetration \( \text{tet}_\beta(z) \) for \( -1 \le \Re(z) \le 4 \) and \( |\Im(z)| \le 5 \). The graph isn't perfect by any measure. I only used a 100-term Taylor series; my Taylor series converge very, very slowly; and I spaced out my data points too much, I think, especially near the real line and near the singularity at \( z=-2 \). I'm hoping Sheldon may have some light to shed on a more efficient matrix-method approach.

The main reason I think this isn't Kneser's is that this tetration appears to diverge slowly as we increase the imaginary argument in either direction. I'm still working on a proof that this tetration is not Kneser's, but I've developed a couple of heuristic arguments by this point. It also doesn't look exactly like Kneser's.

I've kind of categorized this solution as the solution about the fixed point at infinity of \( \exp(z) \), where the fixed point only exists insofar as \( \exp(z) : \mathbb{C}_{\Re(z) > 0} \to \mathbb{C} \) satisfies \( \exp(\infty) = \infty \) in a well enough manner. This is then very god damn similar to Kneser's iteration. But it isn't about a fixed point, unless you count \( \infty \) as a weird kind of fixed point. It's structured on solving Schröder's equation at infinity.

#5
AHA!

I think I'm getting the hang of making this work. I'm making this up as I go, and reminding myself, as I go, how to code in C-based languages. I've improved some of the code, but kept the same structure. I will have more updates as I progress. But! I thought I'd share a really cool graph I just made. I'm broaching the territory of proving this isn't Kneser's; I'm trying to get there.

Over the domain \( -1 \le \Re(z) \le 4 \) and \( 10 \le \Im(z) \le 12 \), this is \( \text{tet}_\beta(z) \):


The more the chaos, the more I'm right. This tetration is not normal when we apply the principal branch of the logarithm. It behaves like a Julia set, not like a standard Schröder iteration.




Here's a larger portrait over the domain \( -1 \le \Re(z) \le 5 \) and \( 2 \le \Im(z) \le 12 \), this is \( \text{tet}_\beta(z) \),


And another large portrait over the domain \( -1 \le \Re(z) \le 5 \) and \( 10 \le \Im(z) \le 20 \).


I'm still having trouble making fast and accurate code near the real line, but I'm pretty sure I have the upper and lower half-planes well managed.


So I thought I'd post a graph full of artifacts and explain why we have them. Pari-GP always chooses the principal branch of the logarithm. Therefore, as we increase the imaginary argument, since this function is not normal on \( \Im(z) > 0 \), the computed values cycle between \( \Im(z) > 0 \) and \( \Im(z) < 0 \).

Nowhere is this more obvious than with the recursive definition of the code on the real line. Everything works great away from the real line; but on the real line the recursive definition fails, trying to swap between \( \Im(z) > 0 \) and \( \Im(z) < 0 \). This is because, in a neighborhood of the real line, the principal branch of \( \log \) is NOT NORMAL on \( \text{tet}_\beta \). So my pretty Taylor series versions are correct; these recursive graphs are not necessarily. Still, this looks similar to what I posted before (I'll add it after).
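The branch jump itself is easy to see in isolation; here's a two-line Python sketch (an illustration of the principal log, not part of the attached code):

```python
import cmath

# Two points straddling the negative real axis: the principal branch of the
# logarithm jumps by 2*pi*i across the cut, which is exactly the swap
# between +pi and -pi that forces the artifacts in the phase plots.
above = cmath.log(-1 + 1e-12j)  # imaginary part ~ +pi
below = cmath.log(-1 - 1e-12j)  # imaginary part ~ -pi
```

Any computation that lands on alternating sides of the cut from one data point to the next picks up this \( 2\pi i \) discontinuity.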

The purple (resp. green) which appears in the upper/lower half-plane is a swap between \( \pi \) and \( -\pi \), which forces all the errors. I'm still in the process of making a "Matrix Add On" for this code, which will solve this problem. But nonetheless, here's what a better recursive code looks like.

The recursive definition using the principal branch of \( \log \):


The definition using Taylor series, where we choose arbitrary branches of the logarithm, not just the principal one:

#6
I thought I'd remind everyone that \( \beta(z) \) already looks a lot like \( \text{tet}_\beta(z) \), and that we're really just calculating an error term.

Here's \( \beta(z) \) for \( \Re(z) \in (-2,6) \) and \( \Im(z) \in (0,4) \)

#7
I've had a very big AHA! moment. I will post updated code in a week or so.

I finally managed to solve the branching problem I was facing by turning my \( \log(X) \) for \( X \) large, into \( \log(1+w) \) for \( w \) small.

I'm very excited by this. I'm going to try to clean up all my code and make the GitHub repository final.

I have to find a way to speed up the solution I've made for the moment.
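I can only guess at the exact rewrite, but presumably it exploits the functional equation \( \log \beta_\lambda(s+1) = \beta_\lambda(s) - \log(1+e^{-\lambda s}) \), which trades the logarithm of a huge number for \( \log(1+w) \) with \( w \) small. Here is a generic Python illustration (my own, not the attached code) of why the \( \log(1+w) \) form is the numerically better one, using `math.log1p`:

```python
import math

# The functional equation  log beta(s+1) = beta(s) - log(1 + exp(-lam*s))
# replaces the log of a huge number by log(1+w) with w = exp(-lam*s) small.
# For tiny w, math.log1p keeps the precision that the naive form throws away:
w = 1e-18
naive = math.log(1 + w)    # 1 + w rounds to 1.0, so this is exactly 0.0
stable = math.log1p(w)     # correct to full precision, ~1e-18
```

In Pari-GP the same effect comes from working at high precision, or from series-expanding \( \log(1+w) \) directly when \( w \) is small.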
#8
(05/21/2021, 03:44 AM)JmsNxn Wrote: I'm surprised by how simply you can construct Tetration in Pari; all I really used was a couple for loops, and a couple if statements; and a whole swath of recursion. This is significantly more elementary than Sheldon's fatou.gp; so expect more glitches. I also have a strong feeling this is not Kneser's tetration. It appears to be diverging for large imaginary arguments; which can only happen if it isn't normal as the imaginary argument grows; which Kneser's solution tends to L.

We use Pari not only because it is simple; in Wolfram Mathematica, for example, many large-number operations (like exp/log) will overflow memory.

I'm very angry when I say it...Tetration is one of the most important fractal functions. Wolfram, what shit did you do?

I don't want to say that Pari is perfect (e.g. abs(x) < 1E-1000000 and calculating log(x)), but at least it won't fuck up my Windows.
#9
(06/17/2021, 04:29 AM)Ember Edison Wrote: We use Pari not only because it is simple; in Wolfram Mathematica, for example, many large-number operations (like exp/log) will overflow memory.

I'm very angry when I say it...Tetration is one of the most important fractal functions. Wolfram, what shit did you do?

I don't want to say that Pari is perfect (e.g. abs(x) < 1E-1000000 and calculating log(x)), but at least it won't fuck up my Windows.

Ya, I wrote virtually the same program in Matlab and it just overflows everywhere before doing anything.


I updated the code and managed to get it nearly working to perfection. Ironically, the error I'm having is precisely the error Ember is talking about. In my iteration we can dip a little too close to 0, and the logs either overflow, or you have to impose a cutoff function on the log. This produces some fractal anomalies. It works fine away from the real line, and on the real line itself; but in a neighborhood of the real line it can be a little fractal-looking.

Now, though, all you need is to call

\(
\text{sexp}(z)\\
\)

for the super-exponential; which will (almost everywhere) be accurate to 100 decimal places.

You can also grab a Taylor series about a point, say \( \pi \), by writing,

\(
\text{sexp}(\pi + z , z)\\
\)

Again, the code is nowhere near as good as Sheldon's, but I've polished it considerably since before.

It's available on the GitHub repository.


You can see what I mean by some hairs appearing near the real line; until I can figure out a way to fix this logarithm problem, they're there to stay. For points near the real line I still suggest expanding a Taylor series on the real line and just summing it; this works better and won't have the hairs. But nonetheless, this construction is made entirely from recursion.

That is to say, this is a purely recursive program, and I've only proved it works for a theoretical Turing machine with infinite memory. Without infinite memory we get glitches specific to the real line; and that's precisely the issue Ember raised about Pari-GP.
#10
Working on Tommy's method, I realized that I made a fatal flaw in this code: when catching overflows it may exit incorrectly, which causes a bunch of errors and is, I assume, responsible for the hairs. I'm going to go to work on solving it, but it requires a better method of evaluating the beta function; this undoubtedly needs to be done using Taylor series. For that, I'm going to use how Sheldon constructed the \( \phi \) function so well. I probably should've started from that idea.

