Writing Kneser's super logarithm using values of Kneser at a single point
#1
I am going to draw a graph here; it's the standard Kneser tetration graph, drawn using minimal data. Assume that:

\[
\text{tet}_K'(\tau(0)) = 1
\]

The value is \(\tau(0) = 0.806...\). Then we bank a Taylor series:

\[
\text{tet}_K(\tau(0) + z) = \text{tet}_K(\tau(0)) + z + \sum_{k=2}^\infty a_k z^k
\]

I developed this Taylor series using Sheldon's fatou.gp, and this is all that I've used from Sheldon. Once you have this Taylor series, you can create the super logarithm, and by proxy, the Kneser tetration. My code is roughly on par with Sheldon's in speed, but it can take some time. I think a big graph should show what I mean. Here is \(|\Re(z)| < 4\) and \(|\Im(z)| < 1\), graphing \(\text{slog}(z)\):

[attached graph]

It's still not perfect; but this is Kneser's super logarithm from a single Taylor series of Kneser's super exponential, centered at the point \(\tau(0)\) where \(\text{tet}_K'(\tau(0)) = 1\). To get perfection we will need to sample along \(\tau\) and get Taylor series about those points, which would speed up the current slog code I have. Additionally, this would fix some issues I have with large graphs.

So, to refresh: we can take a single 50kb file, which stores \(a_k\), and use it to run Kneser's slog with little to no interruption. My code is not faster than Sheldon's, but it is more malleable. When we write slog(A+z), this expression returns the Taylor series of slog about A in the variable z. My slog uses much more of Pari's built-in machinery, so you can treat it like \(\sin\) or any other built-in Pari function.

AGAIN! I've done nothing mathematically here. I've simply stored the Taylor series:

\[
\text{tet}_K(z+\tau(0)) = \text{TAYLOR}
\]

I used Sheldon's program to get the Kneser numbers. But now, all we need is TAYLOR to construct the slog function. From there we can write Kneser's tetration everywhere. If we use a single Taylor series from Kneser, generated by Sheldon, we can bypass all of Sheldon's code and write simpler code which does a good enough job.

This code is very rough and slightly inaccurate, but we're good for about 15-20 digits of accuracy, which is the cap imposed by the 50kb of imported Taylor data. This technique does not work generally. The code relies on the fact that the iterates of \(z \mapsto \exp(z)\) are nowhere normal, and that each orbit \(z_0 \mapsto \exp(z_0) \mapsto \cdots\) eventually gets close to the orbit \(0 \mapsto \exp(0) \mapsto \exp(1) \mapsto \exp(\exp(1)) \mapsto \cdots\). This is a proven property of the exponential function, and it is why my code works. If you want to take arbitrary tetrations, this code will not work, and in no way will it even look like it works.
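To see that reference orbit concretely, here is a tiny Python check (purely an illustration; the actual code is Pari-GP) of how fast the orbit of \(0\) under \(\exp\) blows up:

```python
import math

# The reference orbit of 0 under exp: 0, e^0 = 1, e^1, e^e, ...
# Any real point's exp-orbit eventually shadows this rapid blow-up,
# which is the divergence property the recursion leans on.
orbit = [0.0]
for _ in range(4):
    orbit.append(math.exp(orbit[-1]))

print(orbit)  # [0.0, 1.0, e, e^e ~ 15.15, e^(e^e) ~ 3.8e6]
```

Four steps already reach the millions, which is why only a handful of recursion levels are ever needed to separate orbits.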

This algorithm produces \(\text{slog}_K(z)\) with vertical branching points at \(L\) and \(L^*\). It works solely off the 50kb of data that the \(a_k\) generate. The \(\text{slog}\) we create is the unique one about the real line, but there are branches in the complex plane.

[attached graph]

Inverting this super logarithm creates Kneser's tetration. This is all Sheldon's heavy lifting, but it shortens the story: we only need 50kb of data, and we can process it just as fast in a more manageable language. The Taylor series at the point where \(\text{tet}_K'(\tau(0)) = 1\) gives the raw solution.
#2
Hey, everyone!



So I'd like to publish this code, but first I'd like to draw out what it does differently than Sheldon's.



First of all, we are going to rely on Sheldon's code for the construction of a single sequence. We will call this sequence:



\[
a_n = \frac{\text{slog}^{(n)}(1)}{n!}
\]



Where Kneser's super logarithm can be written as:



\[
\text{slog}(1+z) = \sum_{n=0}^\infty a_n z^n
\]



Sheldon's code is pre-run; its output is the attached 80kb file. This is done to grab the sequence to a good enough accuracy: 200 terms at about 50-60 digits, which will be good for our purposes. This series converges for \(|z| < |L|\), but for our purposes we only need it to converge for \(|z| < |\Im(L)|\). For the generation of the sequence \(a_n\) I use, Sheldon's code is unbelievably accurate.



The reason I wrote this code, rather than just using the masterpiece that is fatou.gp, is pretty straightforward: fatou.gp does not accept polynomial arguments. It is purely numerical in its arguments. This means something pretty concrete. In Pari-GP, we can write something like



Code:
sin(1+z)


And out spits the Taylor series of \(\sin\) about \(z = 1\):



Code:
%1 = 0.84147098480789650665250232163029899962 + 0.54030230586813971740093660744297660373*z - 0.42073549240394825332625116081514949981*z^2 - 0.090050384311356619566822767907162767288*z^3 + 0.035061291033662354443854263401262458318*z^4 + 0.0045025192155678309783411383953581383644*z^5 - 0.0011687097011220784814618087800420819439*z^6 - 0.00010720283846590073757955091417519377058*z^7 + 2.0869816091465687168960871072180034713 E-5*z^8 + 1.4889283120263991330493182524332468136 E-6*z^9 - 2.3188684546072985743289856746866705237 E-7*z^10 - 1.3535711927512719391357438658484061942 E-8*z^11 + 1.7567185262176504350977164202171746391 E-9*z^12 + 8.6767384150722560201009222169769627834 E-11*z^13 - 9.6522995946024749181193209902042562590 E-12*z^14 - 4.1317801976534552476671058176080775159 E-13*z^15 + O(z^16)


Sheldon's code does not do this. Sheldon's code cannot adapt to the polynomial case.
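For a sense of what Pari is doing with that polynomial argument, here is a rough Python sketch (purely illustrative, and nothing to do with the packaged GP code) of truncated power-series composition; the sin(1+z) coefficients it produces match the Pari output above:

```python
import math

def series_compose(f, p, order):
    """Taylor coefficients of f(p(z)) about z = 0, truncated to `order`
    terms.  `f` lists the Taylor coefficients of f about the point p[0];
    `p` lists polynomial coefficients [p0, p1, ...]."""
    q = [0.0] + list(p[1:])                 # q(z) = p(z) - p(0)
    out = [0.0] * order
    power = [1.0] + [0.0] * (order - 1)     # running q(z)^k, truncated
    for k in range(min(order, len(f))):
        for n in range(order):
            out[n] += f[k] * power[n]
        nxt = [0.0] * order                 # power <- power * q
        for i, ai in enumerate(power):
            if ai == 0.0:
                continue
            for j, bj in enumerate(q):
                if i + j < order:
                    nxt[i + j] += ai * bj
        power = nxt
    return out

# Taylor coefficients of sin about 1: the derivatives of sin cycle
# through sin, cos, -sin, -cos.
derivs = [math.sin, math.cos,
          lambda t: -math.sin(t), lambda t: -math.cos(t)]
f = [derivs[k % 4](1.0) / math.factorial(k) for k in range(10)]

coeffs = series_compose(f, [1.0, 1.0], 10)  # expansion of sin(1 + z)
```

Feeding it `p = [3.0, 1.0, 0.0, 1.0]` (with `f` re-expanded about 3) would compose with \(3+z+z^3\) instead; this truncated-multiplication loop is, in spirit, what Pari's built-in power-series arithmetic automates.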



Now, Sheldon's code has all the tools to produce, say, the super logarithm's Taylor series at \(1\). But nowhere in Sheldon's code can we just write



Code:
slog(1+z)


And expect it to print out the Taylor series at 1.



This may seem trivial, and from a mathematical point of view it definitely is. In Sheldon's code, you would write:



Code:
slogtaylor(1)


But this is lengthy, and time-consuming to say the least. And good luck doing this on the fly with random points.



That is what my code solves. It's a two-birds-one-stone kind of thing. To define \(\text{slog}(z)\), we only need it defined in a neighborhood (which is the initial grab from Sheldon's code). From there we can run a recursive algorithm that looks super simple, but I've been slaving at this thing for a while. It uses the divergence of orbits of \(\exp\) in a super clever way (if I do say so myself).
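The actual recursion stays under wraps until the release, but the general shape of such an algorithm can be sketched in a few lines. Here is a hedged Python illustration (real arguments only, with made-up placeholder coefficients rather than Sheldon's \(a_n\), and without the branch bookkeeping that makes the complex case hard): use the functional equation \(\text{slog}(\exp(z)) = \text{slog}(z) + 1\) to funnel the argument into the region where the local series converges.

```python
import math

# Placeholder Taylor coefficients of slog about 1 -- NOT Sheldon's data.
# Only the recursion's shape matters here, not the numerical values.
A = [-0.9, 0.9, -0.1, 0.05]

def slog_series(x):
    """Evaluate the placeholder series for slog about 1 at the point x."""
    z = x - 1.0
    return sum(a * z**n for n, a in enumerate(A))

def slog(x):
    """Sketch: funnel a real x into a base interval around 1 using
    slog(x) = slog(log x) + 1 and slog(x) = slog(exp x) - 1, then fall
    back on the local series.  Complex arguments additionally need a
    careful branch choice for log (the chi-star discussed below)."""
    if x > 2.0:
        return slog(math.log(x)) + 1
    if x <= -0.4:
        return slog(math.exp(x)) - 1
    return slog_series(x)
```

Both recursive branches shrink toward the base interval on the real line, so the recursion always terminates; the functional equation then holds by construction along the recursion path.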






I am going to package code for one thing, and one thing only: Kneser's super logarithm. The code I am packaging draws a chi-star branch cut near the fixed point \(L\), and branches from \(\Re(L) + \pi i\) towards positive infinity horizontally (I've included a graph, plus a legend explaining some of these points). The chi-star is actually seen completely through recursion. (I've included a graph of the branch cut near \(L\) as well, which has the unmistakable look of the chi-star.)



This function is \(2 \pi i\) periodic, and we can choose how it branches. There are multiple versions of Kneser's super logarithm; we just have to move the branch cut around. Think of this as moving the branch cut of the logarithm, though here it can move much more chaotically. I'd love to have more control over it, but for the moment this branch cut looks really good.



I like calling this a T-shaped super logarithm, because it takes its major branch cut along the lines \(\Re(z) > \Re(L)\), \(\Im(z) = (2k+1)\pi\), and curls towards \(L\) or \(L^*\) as a fancy calligraphy version of the letter T would. But for the moment, I am just calling this the recursive Sheldon method, because it turns fatou.gp's Taylor series for slog at \(1\) (which, as far as I am concerned, is the hard part) into a workable slog that handles polynomial arguments.



So, again, I can write something like:



Code:
slog(3+z+z^3)

%5 = (1.0882491361342880006740084286921804608 - 3.0063724780796382184006418466606948781 E-213*I) + (0.29115832796648167809725524322185564842 - 4.2985348618744138323506310791882604230 E-213*I)*z + (-0.073058087280068937319111790527254704389 - 1.8234095242410207856218419746620351633 E-213*I)*z^2 + (0.30906430318032394074383665825674159246 - 4.1897609825248264566947421175187408385 E-213*I)*z^3 + (-0.15003947348997580178695026854441790161 - 3.7375361335197652472952818219673848366 E-213*I)*z^4 + (0.054393160100824104295983990735795975243 + 3.6554042757599526721536442242419098001 E-214*I)*z^5 + (-0.088795793636932988949383072894007777190 - 2.1706608366752410124466129020609338442 E-213*I)*z^6 + (0.057063148885711291737111899080612861627 + 4.9496068494730215180038825093665536939 E-214*I)*z^7 + (-0.023786965031661616616678995786661053535 - 4.3770862104259230226798875141047733361 E-214*I)*z^8 + (0.024433796556719230288603965703938032319 + 3.0396655573025851940848179720683926404 E-214*I)*z^9 + (-0.016199252859544990921842738471640929930 - 2.4210942404539025512507413445001625652 E-215*I)*z^10 + (0.0060308759446497098213760978794565312281 - 2.2763777761389472516107042355740946622 E-214*I)*z^11 + (-0.0042317802061312785718865677904055132077 + 5.9505884466721817452992013498376064760 E-214*I)*z^12 + (0.0020024615039366304497142300439925822512 - 9.3758538853969812664997453501690138299 E-214*I)*z^13 + (0.00055848650419835654512223904676720455130 + 1.0105935609685363361212628308636116681 E-213*I)*z^14 + (-0.0011039988112221366853484752442266692821 - 1.3223560560644804059580500561334890551 E-213*I)*z^15 + O(z^16)


And this will be the Taylor data of the function \(\text{slog}(3+z+z^3)\), the same way you could do it with \(\sin\) or \(\cos\) or \(\exp\) or any function in Pari. It takes polynomial data and produces polynomial data, which, to me, addresses the only real flaw in Sheldon's fatou.gp.



This works much more simply. It is not as good as Sheldon's code, not in any way; not until we can run Sheldon's initialization and adapt the functions I have in the works will this be as good. And even then, I'm only confident for bases \(b > \eta\). I'm going to post my version of Kneser's tetration soon too; it relies on Sheldon's series at \(0\) again, and then runs a similar recursive protocol to get the value everywhere. But we need slog first...






I'm going to graph the Kneser super logarithm for \(-5 \le \Re(z) \le 5\) and \(-5 \le \Im(z) \le 5\). And then I will draw some labels on the graph to give you an idea of what this means.



[attached graph]



Here is a small legend describing what's happening in this graph:



[attached graph: legend]



Here is a zoom-in on the chi-star about \(L\):



[attached graph]



This just describes the branching points of my code and how I've written the super logarithm. That's pretty much all you need to know about it.








I am releasing my code as a .zip file, and there are two things you need to know. You have two .gp files. The first is "SLOG_SERIES_ABOUT_1.gp", an 80kb file which is just Sheldon's generation of \(a_n\). The second file is the actual code, "recursive_sheldon.gp", which is a very brief code block. Despite how brief it is, I hope you guys don't underestimate it. This is a very difficult recursion protocol that required many sleepless nights.



I should note now that \(\text{slog}(\exp(z)) - 1 = \text{slog}(z)\) under my code; but this does not mean that \(\text{slog}(\log(z)) + 1 = \text{slog}(z)\) necessarily, because the principal branch of the logarithm may not be the logarithm we want--which I think has a lot to do with the chi-star. The correct statement is that, for some \(k \in \mathbb{Z}\), we have \(\text{slog}(\log(z) + 2\pi i k) + 1 = \text{slog}(z)\)--which is the entire point of Kneser's construction.
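That branch bookkeeping is easy to check in isolation. A quick Python sanity check (independent of the slog code) shows the principal log dropping a multiple of \(2\pi i\) when \(\Im(z)\) leaves \((-\pi, \pi]\):

```python
import cmath
import math

z = 1 + 4j                      # imaginary part outside (-pi, pi]
w = cmath.log(cmath.exp(z))     # principal branch of log

# The principal branch folds Im back into (-pi, pi], so w = z - 2*pi*i;
# recovering z requires adding back the right multiple of 2*pi*i.
k = round((z - w).imag / (2 * math.pi))

print(w, k)  # w = z - 2*pi*i, k = 1
```

This is exactly the \(k \in \mathbb{Z}\) in the statement above: the slog has to pick the log branch matching its own sheet, not the principal one.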



But I digress.



Unzip this file in your Pari-GP home folder (or whatever your home folder is) so that this works. You should only need to read "recursive_sheldon.gp" to use this program, but "recursive_sheldon.gp" reads the first file as its first action, so Pari needs "SLOG_SERIES_ABOUT_1.gp" somewhere it can read it.



From there you have the function:



Code:
slog(z)


which handles polynomial operations in its argument, and is a specific Kneser super logarithm.



The code is super fast. Go wild.


Attachment: Recursive Sheldon.zip (41.47 KB)



If you're interested in coding, look how simple my program is!



This has always been my schtick: making recursions within recursions within recursions look clean!

LET'S GO!

On to getting sexp through slog now. It's been giving me a headache lately...

God, I wish Sheldon already had polynomial arguments built in...

Regards, James

