After I finished my Zoom call with Sheldon, I realized a couple of things.
Unless you can show that \( \log^{100} \beta(z+100) \) eliminates the singularities, this is a nowhere analytic solution on \( \mathbb{R} \). It is still holomorphic almost everywhere on \( 0 < \Im(z) < \pi \), though, so all is not lost yet. But the sawtooth effect (which I had noticed myself) is a real thing, not a glitch like I thought; so unless the sawtooth disappears at n = 10000000 or whatever, we are stuck with nowhere analytic on \( \mathbb{R} \). The trouble is that on the real line our iterations cap at about n = 4, so there is no way to confirm this numerically unless you had a supercomputer.
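As a quick sanity check on why real-line iterations cap around n = 4: iterating exp from even a modest seed blows past double precision almost immediately. This is just a toy overflow demonstration, not the actual \( \beta \) iteration:

```python
import math

def exp_tower_depth(seed=1.0, max_steps=10):
    """Count how many times exp can be applied to `seed`
    before the result overflows a double."""
    x = seed
    for n in range(1, max_steps + 1):
        try:
            x = math.exp(x)
        except OverflowError:
            return n - 1  # number of successful iterations
    return max_steps

print(exp_tower_depth())  # -> 3: exp(exp(exp(1))) ~ 3.8e6, one more exp overflows
```

With arbitrary-precision arithmetic you buy a few more steps, but the tower grows so violently that each extra n roughly exponentiates the required working precision.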
We should remember, though (and you too, Tommy), that this is the periodic tetration. The actual tetration we care about is created by letting the period tend to infinity. And the preliminary tests, as I've always run them, show little to no sawtooth effect in the final beta-tetration--which is when we let the multiplier move from \( \lambda = 1 \) to \( \lambda \to 0 \). Here is where we lose the sawtooth data. I am still on the fence about nowhere analyticity on \( \mathbb{R} \) for this case, but the upper half plane should be fine, just as the strip \( 0 < \Im(s) < \pi \) is fine for the \( \lambda = 1 \) case. And I can show this with much better numbers, now that Sheldon has optimized my code.
To those interested in the main difference between my code and Sheldon's: it's kind of silly, really. The way I wrote the initialization file, the multiplier was a free variable, so you could run any multiplier and everything would work. Sheldon chose to fix the multiplier to \( \lambda = 1 \)--and upon doing this, everything was streamlined. I never expected that one move to speed everything up so much.
I'm in the process of writing much better code, which hybridizes both methods. Unfortunately, you can no longer graph across the multiplier as you could before; the multiplier is no longer a free variable. It is fixed to whatever constant you choose to set it to. Similarly, I've added a protocol for any base. Again, we have to fix the base and the multiplier, but we still get Sheldon-level speeds. I've added rudimentary normalization protocols, but they still run slowly.
But now, finding a function:
\(
F_{b,\lambda}(z+1) = \exp(b\, F_{b,\lambda}(z)) \\
F_{b,\lambda}(z+2\pi i/\lambda) = F_{b,\lambda}(z)
\)
is a hell of a lot easier.
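As a toy illustration of the first functional equation: once F is known at a single point, the recursion F(z+1) = exp(b F(z)) generates its values along z, z+1, z+2, .... The seed value and the choice b = 0.5 below are made up for illustration; this is not the actual \( \beta \) initialization:

```python
import cmath

def extend_by_recursion(f_z0, b, steps):
    """Given F(z0), return [F(z0), F(z0+1), ..., F(z0+steps)]
    using F(z+1) = exp(b * F(z))."""
    values = [f_z0]
    for _ in range(steps):
        values.append(cmath.exp(b * values[-1]))
    return values

# hypothetical seed and multiplier, purely to exercise the recursion
vals = extend_by_recursion(0.1 + 0.2j, b=0.5, steps=3)
```

The hard part, of course, is producing a seed function on a unit strip that is also \( 2\pi i/\lambda \)-periodic; the recursion itself is the easy half.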
So in that sense, you can initialize beta in a couple of seconds; the trouble is initializing the normalization constant so that \(\text{Sexp}(0) = 1\). This takes much longer, especially for bad bases. And there is no way to graph across \(b\) or \(\lambda\); both are fixed values. This is still just as fast as Sheldon's code, which was a specialization to one base and multiplier with a simple initialization. Mine works similarly but for arbitrary bases and multipliers. And it will work much better--I cleaned up a couple of Sheldon's protocols.
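The normalization I have in mind is just a shift: find \(x_0\) with \(F(x_0) = 1\) and set \(\text{Sexp}(z) = F(z + x_0)\), so \(\text{Sexp}(0) = 1\). A minimal sketch with a stand-in real-valued F (bisection on a bracket; in the real code this root-find runs against the solved beta, which is what makes it slow):

```python
import math

def normalize_shift(F, lo, hi, tol=1e-12):
    """Bisect for x0 with F(x0) = 1, assuming F is increasing
    on [lo, hi] with F(lo) < 1 < F(hi)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if F(mid) < 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# toy stand-in for the solved F (NOT the actual beta function)
F = math.exp                      # F(x0) = 1 at x0 = 0
x0 = normalize_shift(F, -1.0, 1.0)
Sexp = lambda z: F(z + x0)        # now Sexp(0) = 1
```

For bad bases the bracket itself is awkward to find, which is part of why the normalization step dominates the runtime.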
An important note: I always take the "base" of the exponential to be \(b\) as in \(\exp(bz)\), as opposed to \(b^z\)... this fixes so many errors. I apologize if this is confusing. We are taking the log of what we usually call the base, but we gain a free imaginary argument.
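Concretely: writing the exponential as \(\exp(bz)\) with \(b = \log(\text{base})\) (principal branch) agrees with the usual \(\text{base}^z\), but now \(b\) is free to carry an imaginary part:

```python
import cmath

base = 2.0
b = cmath.log(base)      # b = log(base), principal branch
z = 0.75 + 0.25j

lhs = cmath.exp(b * z)   # the exp(bz) convention used here
rhs = base ** z          # the usual base^z notation
# lhs and rhs agree on the principal branch; with exp(bz)
# there is no branch ambiguity to chase in the first place
```

The point is that \(b^z\) silently commits to a branch of the logarithm, whereas \(\exp(bz)\) makes that choice explicit in \(b\).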
I'll release this code soon. I'm just working on an efficient graphing protocol that works with the error-catching protocols Ember Edison described.
Tommy: remember that all is not lost just because the \(2 \pi i\)-periodic tetration is not analytic on \(\mathbb{R}\). This is possible, per my construction. The trouble would be if the limit \(\lambda \to 0\) is also not analytic; then we have a problem. And the more I think about it, the more I believe the reason the \( \lambda \to 0 \) limit is analytic on \( \mathbb{R} \) is that it's Kneser. I'm seeing more and more evidence of this. It would solidify that the only analytic tetration which maps \( (-2,\infty) \to \mathbb{R} \) bijectively is Kneser's.
Regards, James