UPDATE: I'll be updating this code on a GitHub repository. I've modified some of the functions to make them more amenable to Taylor series.

https://github.com/JmsNxn92/Recursive_Tetration_PARI

Mike's graphing tool is there too, if you want to make graphs.

Hey, everyone!

So I'm fairly satisfied with my code at this point. There are no glaring errors, and it supports 200-digit precision if you need it. The way the functions are called is a little shoddy, but everything is working as it should, despite the slightly awkward gait. Before I attach the code (at the bottom), I'll recap what these functions are and do.

To begin, I constructed a family of asymptotic solutions to tetration using infinite compositions. These functions can be denoted,

\(

\beta_\lambda(s) = \Omega_{j=1}^\infty \frac{e^z}{e^{\lambda(j-s)} + 1}\,\bullet z\\

\beta_\lambda(s) = {\displaystyle \frac{e^{\displaystyle \frac{e^{\displaystyle \frac{e^{...}}{e^{\lambda(3-s)} + 1}}}{e^{\lambda(2-s)} + 1}}}{e^{\lambda(1-s)} + 1}}\\

\)

These functions are holomorphic for \( (s,\lambda) \in \mathbb{L} = \{(s,\lambda) \in \mathbb{C}^2\,|\,\Re \lambda > 0,\, \lambda(j-s) \neq (2k+1)\pi i,\,j,k \in \mathbb{Z},\,j \ge 1\} \). This family satisfies the equation,

\(

\log \beta_\lambda(s+1) = \beta_\lambda(s) - \log(1+e^{-\lambda s})\\

\)
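As a rough illustration, here is how the infinite composition can be evaluated: truncate it at a finite depth and nest from the innermost (largest \( j \)) factor outward. This is a double-precision Python sketch, not the repository's PARI/GP code; the name `beta_lambda` and the `depth` parameter are my own.

```python
import cmath

def beta_lambda(s, lam, depth=100):
    # Truncated infinite composition: apply the j = depth factor first
    # (innermost), then work outward down to j = 1.
    z = 0
    for j in range(depth, 0, -1):
        z = cmath.exp(z) / (cmath.exp(lam * (j - s)) + 1)
    return z
```

The truncation error decays roughly like \( e^{-\lambda\cdot\text{depth}} \), so the functional equation above — equivalently \( \beta_\lambda(s+1) = e^{\beta_\lambda(s)}/(1+e^{-\lambda s}) \) — can be verified numerically to high accuracy.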

From here, we insert an error term \( \tau_\lambda(s) = - \log(1+e^{-\lambda s}) + o(e^{-\lambda s}) \) such that,

\(

F_\lambda = \beta_\lambda + \tau_\lambda\\

\exp F_\lambda(s) = F_\lambda(s+1)\\

\)

This produces a solution to the superfunction equation for tetration; and each of these functions is periodic with period \( 2\pi i/\lambda \). Since they are real-valued for \( \lambda \in \mathbb{R}^+ \), they are periodic and have a wall of singularities on the boundary of the period strip. These functions are included in the code dump.

From here, we take a new asymptotic solution, which we write \( \beta(s) = \beta_{1/\sqrt{1+s}}(s) \). The basic principle is that the tetration we want is,

\(

\text{tet}_\beta(s) = \lim_{n\to\infty} \log^{\circ n} \beta(s+n+x_0)\\

\)

for a normalizing constant \( x_0 \in \mathbb{R} \). (I haven't normalized the code yet; \( x_0 \approx 2 \), but I'm having trouble pinning the constant down exactly.)
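A naive double-precision sketch of this limit follows (again with my own names; the repository works at far higher precision, and here \( n \) must stay small or \( \beta(s+n+x_0) \) overflows a double):

```python
import cmath

def beta_lambda(s, lam, depth=200):
    # truncated infinite composition, nested from j = depth down to j = 1
    z = 0
    for j in range(depth, 0, -1):
        z = cmath.exp(z) / (cmath.exp(lam * (j - s)) + 1)
    return z

def beta(s):
    # lambda is tied to the argument: beta(s) = beta_{1/sqrt(1+s)}(s)
    return beta_lambda(s, 1 / cmath.sqrt(1 + s))

def tet_beta(s, n=3, x0=2.0):
    # approximate lim_{n->inf} log^{(n)} beta(s + n + x0);
    # n kept small so beta(s + n + x0) fits in double precision
    z = beta(s + n + x0)
    for _ in range(n):
        z = cmath.log(z)
    return z
```

A quick consistency check: \( \exp(\text{tet}_\beta(s)) \) at iteration depth \( n \) equals \( \text{tet}_\beta(s+1) \) at depth \( n-1 \), since both unwind the same value of \( \beta \).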

I've tried to add as much documentation as I can, though I'm wondering how much is really needed; a lot of the code is pretty self-explanatory. The only difficult part is generating Taylor series, where you have to be fairly careful; I explained how to grab them as well as I could. This is necessary for calculating near the real line (but not on the real line). It's definitely not optimized, though.

I'm surprised by how simply you can construct tetration in PARI; all I really used was a couple of for loops, a couple of if statements, and a whole swath of recursion. This is significantly more elementary than Sheldon's fatou.gp, so expect more glitches. I also have a strong feeling this is not Kneser's tetration: it appears to diverge for large imaginary arguments, which can only happen if it isn't normal as the imaginary part grows, whereas Kneser's solution tends to the fixed point \( L \) there.

Here's the tetration solution \( \text{tet}_\beta(z) \) for \( -1 \le \Re(z) \le 4 \) and \( -0.8 \le \Im(z) \le 0.8 \).

Here's the Taylor series of \( \text{tet}_\beta(z) \) about \( z = 1 \), with 100 terms at 100-digit precision. These Taylor series tend to converge pretty slowly, even at 100 terms and 100 digits; nonetheless the radius of convergence about \( z = 1 \) should be 3 (set by the singularity at \( z = -2 \)); we're just seeing very slow convergence. It's nice to see it diverging fairly evenly away from the center point, though.
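For anyone wanting to grab Taylor coefficients of an analytic function numerically, the standard trick is to sample it on a small circle about the expansion point and take a discrete Fourier transform (Cauchy's integral formula). A generic sketch, demonstrated on \( \exp \) rather than \( \text{tet}_\beta \) to keep it self-contained (the function and parameter names are my own):

```python
import cmath

def taylor_coeffs(f, center, radius, n_terms, samples=128):
    # a_k ~ (1 / (N r^k)) * sum_m f(c + r e^{2 pi i m/N}) e^{-2 pi i k m/N}
    vals = [f(center + radius * cmath.exp(2j * cmath.pi * m / samples))
            for m in range(samples)]
    coeffs = []
    for k in range(n_terms):
        a = sum(v * cmath.exp(-2j * cmath.pi * k * m / samples)
                for m, v in enumerate(vals)) / (samples * radius ** k)
        coeffs.append(a)
    return coeffs

# demo on exp about 0: the coefficients should approach 1/k!
c = taylor_coeffs(cmath.exp, 0, 0.5, 6)
```

The aliasing error decays like \( r^N \), so with a modest radius and sample count the low-order coefficients come out essentially to machine precision.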

Here's the Taylor series about \( z = 0 \), graphed over a smaller region. Taylor series seem to converge more slowly in proximity to the singularity at \( z = -2 \). This is a box around where it converges. You can see it already diverging in the corners; chalk that up to slow convergence, since it's happening a tad early.

Any questions or comments are greatly appreciated.

Regards, James.
