10/15/2010, 09:20 PM (This post was last modified: 11/21/2011, 09:37 PM by sheldonison.)
(10/15/2010, 04:03 PM)nuninho1980 Wrote: your latest code is faster.
I tried bases 1.5, 1.68, 1.7 and 100, and they worked (they did not work with your older code). It's better, but I get errors after trying init(1.6) and init(200).
errors here: base 1.6 base 1.5
....
Hi Nuninho,
I finally got around to debugging this (you reported it last time too). What is happening is that the isuperf/superf equations start misbehaving as the base approaches \( \eta \). Below is a graph for base 1.7, which just barely works. You can see that isuperf "jumps" when sexp(z) becomes slightly less than zero; sexp(z)<0 for z<-1. For the riemup code to work, superf/isuperf need to be consistent over at least the unit circle. The problem occurs in the last logarithm of the isuperf equation, after subtracting the fixed point "L". There is a term that tries to counteract the multi-valued logarithm problem by multiplying by c^n before the last log_c; it works for bases>=1.7, but not for smaller bases approaching eta. I'm checking in a "patch" that adds a new "xterminc" to push the function back into the well-behaved region for bases B>=1.49. It still fails for B=1.48, but for a different reason: the algorithm that generates "L" fails there. The number of log/exp iterations required for superf/isuperf grows very large for bases near \( \eta \); in my program this is the variable "scnt", and the algorithm for generating "scnt" does not work for bases B<1.49. A leading-order sketch of the superf/isuperf iteration appears after the patch snippet below.
- Sheldon. Update: fixed a small typo in the new code and uploaded a new version. Also fixed it so that it works down to B=1.47; B<1.47 still fails. Bases this close to eta run very slowly.
Code:
xterminc = 0; /* xterminc used to stabilize isuperf for bases<1.7 */
if ((B<1.7), xterminc=1);
if ((B<1.55), xterminc=2);
....
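For reference, here is a minimal sketch (in pari-gp, not the kneser.gp code itself) of the leading-order regular superfunction and its inverse about the complex fixed point L, showing where the final log_c and the c^n factor mentioned above come in. The names superf0/isuperf0, the Newton starting guess, and the fixed iteration count n are illustrative; kneser.gp picks the iteration count ("scnt") adaptively and includes higher-order terms.
Code:
/* hedged sketch, not kneser.gp: leading-order regular iteration for base B */
B  = exp(1);              /* base e; the primary fixed point is near 0.318+1.337*I */
lB = log(B);
L  = 0.3 + 1.3*I;         /* rough starting guess for the fixed point */
for (k=1, 50, L = L - (exp(lB*L) - L)/(lB*exp(lB*L) - 1));  /* Newton: solve B^L = L */
c  = lB*L;                /* multiplier of exp_B at the fixed point */

/* leading-order superfunction: start very close to L, iterate exp_B forward n times */
superf0(z, n=40) =
{
  my(w = L + c^(z-n));
  for (k=1, n, w = exp(lB*w));
  return(w);
}

/* leading-order inverse: iterate log_B toward L, then take one log base c,
   multiplying by c^n first -- the multi-valued-logarithm correction discussed above */
isuperf0(z, n=40) =
{
  my(w = z);
  for (k=1, n, w = log(w)/lB);   /* best behaved for imag(z)>0 */
  return(log(c^n*(w - L))/log(c));
}
With these, superf0(isuperf0(2+3*I)) should return roughly 2+3*I to a few digits; for bases close to eta, |c| is close to 1, so n has to grow very large, which is the "scnt" growth described above.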
10/19/2010, 03:33 AM (This post was last modified: 11/21/2011, 09:37 PM by sheldonison.)
(10/15/2010, 04:03 PM)nuninho1980 Wrote: ... but I get errors after trying ... init(200). base 200
Nuninho noticed that large bases don't work, so I modified the program (attached below) so that kneser.gp now uses "centerat=-0.5" for bases>20. sexp(0) will still return 1, but if you evaluate the raw Taylor series, TaylorSeries(z=0.5)=1 (a short illustration follows the code block below). Bases>20 may also require more than the 13 or so iterations that smaller bases need to reach optimal precision. I also changed the initial sexp estimate to use the linear estimate, sexp(z)=log(e), sexp(z-1)=log(log(e)), and I slightly improved the renormsub routine. So now bases from 1.47 to 50,000 will converge by simply typing init(1.47) ... init(50000). Some bases higher than 50,000 converge too, but centerat needs to be manually moved closer to z=-1. If anyone wants to examine bases larger than 50,000, here's the code to try; this is for the new kneser.gp code, attached below. As far as I know, nobody else has tried generating an analytic version of tetration for such large bases before.
Code:
init(50000);
loop(25); /* larger bases need more than 13 iterations */
dumparray;
init(300000);centerat=-0.8;recenterup; /* manually adjust centerat */
loop(25);
dumparray;
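As a hedged illustration of what "centerat" means (not the kneser.gp internals; evalsexp and coeff are hypothetical names): if coeff held the Taylor coefficients of sexp about z=centerat, evaluation would shift the argument, so plugging z=0 into the shifted series is the same as plugging 0.5 into the raw series.
Code:
/* illustrative only: coeff is a hypothetical vector of Taylor coefficients */
evalsexp(z, coeff, centerat=-0.5) = sum(k=1, #coeff, coeff[k]*(z - centerat)^(k-1));
/* evalsexp(0, coeff) is the raw series evaluated at 0.5, which should return 1 */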
The problem with computing a Taylor series for sexp(z) for larger bases is that iterating with the Taylor series centered around z=0 doesn't work very well. This is a parametric plot of sexp_150 around the unit circle. What amazes me is that the program worked as well as it did, somehow converging up to B=100.
ploth(t=0,1,z=exp(Pi*I*2*t);y=sexp(z);[real(y),imag(y)],1);
Also, notice the little green dot representing sexp_e. The behavior around the unit circle becomes increasingly poor as the base gets larger.
ploth(t=0,1,z=exp(Pi*I*2*t)-0.5;y=sexp(z);[real(y),imag(y)],1);
Now we plot the unit circle with the Taylor series for sexp_150 and "centerat"=-0.5, so that taylorseries(0.5)=1; the program still returns sexp(0)=1. The green circle is the Taylor series for sexp_e, also centered at -0.5.
And finally, a plot of sexp_150(z) and sexp_e(z) for z from -1.99 to 0.5; a one-base version of this plot is sketched below. Notice that sexp_150(z) has the smaller slope. The fixed point for B=150 is L=-0.169+0.394*I; since that value is never taken on, the slope for z<0 at the real axis needs to be small. For the most recent code version, go to the Nov 21st, 2011 thread.
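A hedged sketch of how to reproduce the last plot for one base at a time (the plot range comes from the description above; only the real axis is plotted, and no output is shown):
Code:
init(150);
loop(25);                             /* larger bases need more iterations */
ploth(x=-1.99, 0.5, real(sexp(x)));   /* sexp_150 along the real axis */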
10/30/2010, 09:47 PM (This post was last modified: 11/21/2011, 09:38 PM by sheldonison.)
This is a small update, which improves performance and adds flexibility. It's a little faster (25 seconds for 105-110 bits of binary precision, as opposed to 1 minute). I added an slog function, as well as a gentaylorseries function that will generate the sexp Taylor series about any point in the complex plane. I also cleaned up the code and made it a little simpler: the "n" parameter for loop(n) is now optional, and the default base for "init;" is base e. You can still "loop(1,2...)" at a time if desired.
Code:
\r kneser.gp
init;loop
base 2.71828182845904523536029
fixed point 0.318131505204764135312654 +
.... 25 seconds later, after 13 loop iterations
gp > sexp(0.5)
1.6463542337511945809719240315921
gp > slog(I)
-1.3937809364969141717386716853218 + 1.0171347771806901474061430995874*I
gp > gentaylor(3*I,1)
complex sexp Taylor series centered at 3*I
gp > for (s=1,4,print(tseries[s])); /* prints first 4 series terms */
0.37090658903228120608281774428121 + 1.3368216707889122043183467452060*I
0.018300482687991735872774054492565 + 0.069611076949751462108351480615434*I
-0.042221079601603961951042313227903 + 0.024296334049072988793474267457353*I
-0.015851643810851135752181652414535 - 0.014789535958785875826049470725178*I
There is a much larger performance increase for "\p 134", which now takes about 5 1/2 minutes, as opposed to the previous version, which took 32 minutes. "\p 134" gives sexp results accurate to 64 decimal digits. The new algorithm cleverly converges the sexp(z) with the riemaprx(z) at imag(z)=0.12*I, thereby avoiding all of the Riemann mapping singularity problems that required larger and larger numbers of Riemann approximation Fourier series terms; that's where the speedup comes from. I had to fix a couple of pari-gp precision loss problems to make this new algorithm reliable for all bases, init(1.47) - init(100000). The earlier algorithm is still available via "slowmode=1". One reason for the new kneser.gp code is to allow a secondary program to generate pentation, which I will be posting separately.
Another code example, showing \p 134 and the Taylor series function.
Code:
\r kneser.gp
\p 134
init(exp(1));loop
.... wait 5 1/2 minutes, after 26 iterations ....
gp > sexp(0.5)
1.646354233751194580971924031592114518205311648969041530394879048
Code:
type "morestuff" on the command line to see new functions
new functions:
slog(z,est) inverse sexp(z), using optional est as initial estimate
gentaylor(w,r) generates tseries sexp taylor series, centered at w, radius r
taprx(z) evaluates the tseries Taylor series from gentaylor at z
This implementation shows that excellent convergence can be had at 0.12*I. The Cauchy unit circle that is sampled to generate the sexp(z) Taylor series is divided into two regions. Where imag(z) on the unit circle is >= 0.12*I, the samples come from the Riemann approximation generated from the 1-cyclic transformation of the superfunction. Where imag(z) < 0.12*I, the samples come from the exp or logarithm of the previous version of sexp(z) itself. These samples are stitched together to generate the next sexp(z) series, and then the next Riemann approximation Fourier series is generated at 0.12*I. A rough sketch of the stitching step follows. The new algorithm required identifying and fixing the pari-gp precision problems, which results in much more predictable, stable behavior; that convinces me that the convergence will continue with more and more iterations.
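Here is a rough sketch of that stitching idea (not the kneser.gp internals): riemsample and sexpsample are hypothetical stand-ins for riemaprx(z) and for the exp/log construction from the previous sexp(z) described above, and the coefficients are recovered by a discrete Cauchy/Fourier sum over the unit-circle samples.
Code:
/* hedged sketch: stitch two sample sources on the unit circle, then recover
   Taylor coefficients with a discrete Cauchy/Fourier sum.  riemsample and
   sexpsample are hypothetical closures, not kneser.gp functions. */
stitchtaylor(riemsample, sexpsample, N=64, cut=0.12) =
{
  my(f = vector(N), a = vector(N));
  for (j=1, N,
    my(z = exp(2*Pi*I*(j-1)/N));                  /* sample point on the unit circle */
    f[j] = if (imag(z) >= cut, riemsample(z), sexpsample(z)));
  for (k=1, N,                                    /* a[k] ~ (k-1)-th Taylor coefficient */
    a[k] = sum(j=1, N, f[j]*exp(-2*Pi*I*(j-1)*(k-1)/N))/N);
  return(a);
}
/* hypothetical call once both sources exist, e.g. stitchtaylor(z->riemaprx(z), z->sexp(z)) */
kneser.gp does the equivalent internally, with its own sample counts, recentering, and the exp/log splice for the lower region.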
- Sheldon
11/15/2010, 02:53 PM (This post was last modified: 11/21/2011, 09:38 PM by sheldonison.)
This is an update to support an sexp(z) mapping for bases<eta. The program starts with the regular entire superfunction developed from the repelling fixed point, and calculates \( \text{sexp}(z)=\text{RegularSuperf}(z+\theta(z)) \), where \( \theta(z) \) decays to zero as imag(z) goes to +I*infinity. Thus the solution for bases<eta differs from the standard solution, developed from the attracting fixed point. See this post for discussion, and graphs.
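A hedged note on the form of \( \theta(z) \) (standard for this kind of construction, assuming \( \theta(z) \) is 1-cyclic, analytic in the upper half plane, and decays to zero as described): its Fourier expansion then contains only decaying terms, \( \theta(z)=\sum_{n\ge 1} c_n e^{2\pi i n z} \), so \( \text{sexp}(z) \) approaches RegularSuperf(z) itself as imag(z) goes to +I*infinity. The coefficients \( c_n \) are whatever the iteration converges to; nothing here is computed in closed form.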
This version will converge for 1.449<=B<=1000000, and for 1.02<B<1.444. The program is very slow for bases close to, but greater than, eta; in those cases, I recommend using "\p 28" for less accurate but faster results (using 5 iterations), for example as shown below. Second update: added cosmetic changes and a warning message for sexp(z) when imag(z)>I. Converges for B>1.02.
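Spelled out as a transcript (hedged: base 1.45 is just an example near eta, and no output is shown):
Code:
\r kneser.gp
\p 28
init(1.45); loop(5);   /* lower precision and only 5 iterations: much faster near eta */
sexp(0.5)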
-Sheldon For the most recent code version: go to the Nov 21st, 2011 thread.
11/15/2010, 03:26 PM (This post was last modified: 11/17/2010, 05:29 PM by nuninho1980.)
(11/15/2010, 02:53 PM)sheldonison Wrote: This is an update to support an sexp(z) mapping for bases<eta. The program starts with the regular entire superfunction developed from the repelling fixed point, and calculates \( \text{sexp}(z)=\text{RegularSuperf}(z+\theta(z)) \), where \( \theta(z) \) decays to zero as imag(z) goes to +I*infinity. Thus the solution for bases<eta differs from the standard solution, developed from the attracting fixed point. See this post for discussion, and graphs.
This version will converge for 1.449<=B<=1000000, and 1<bases<1.444. This program is very slow for bases close to but greater than eta; in those cases, I recommend using "\p 28" for less accurate, but faster results (using 5 iterations).
-Sheldon
I can calculate base 1.1 without problems.
I tried bases 1.01 and 1.001, but I get low precision and far too little precision, respectively.
11/17/2010, 06:52 PM (This post was last modified: 11/21/2011, 09:39 PM by sheldonison.)
(11/15/2010, 03:26 PM)nuninho1980 Wrote: I can calculate base 1.1 without problems.
I tried bases 1.01 and 1.001, but I get low precision and far too little precision, respectively.
I'm aware it doesn't converge for bases<1.02 (I had updated the post with the latest code). The code update supporting a kneser mapping to generate sexp for bases<eta is still somewhat of an experiment, since it is inherently less efficient than just using regular iteration.
There are much better methods for calculating sexp(z) for bases less than eta, and I'm sure you know much more about that than I do. When I have time, I may try to figure out whether there is a way to get the code converging for smaller bases. Also on my todo list is supporting complex bases (I'm about a third of the way through). And I still need to rigorously defend why the algorithm converges at all, which will be a difficult task.
New version of kneser.gp download attached below. For bases<eta, this supports generating the kneser mapping Laurent series to get the regular iteration sexp solution via a mapping from the upper sexp. I got this working yesterday. It should be almost exactly equal to the new superf2(z) function, which is only valid for bases<eta and is the sexp(z) from the lower attracting fixed point. If you want the kneser mapping for the newsexp function I posted earlier, type "init(sqrt(2));gennewsexp=1;loop;". Then you can compare sexp(z) to superf2(z), but you'll need to set "\p 134;" or larger to calculate the differences; a transcript of that comparison is sketched below.
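The comparison as a transcript (commands only; this assumes superf2(z) is normalized the same way as sexp(z), as the description above implies):
Code:
\r kneser.gp
\p 134
init(sqrt(2)); gennewsexp=1; loop;
sexp(0.5) - superf2(0.5)   /* the difference should be tiny at this precision */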
- Sheldon For the most recent code version: go to the Nov 21st, 2011 thread.
12/26/2010, 12:35 PM (This post was last modified: 11/21/2011, 09:39 PM by sheldonison.)
Another small update. I added "slogtaylor(w,r)" to generate the slog Taylor series centered at "w", generated with a sample radius of "r"; for sexp, there is the sexptaylor(w,r) function. I also sped up the slog(z,est) routine, where "est" is an optional initial estimate. Both the sexptaylor/slogtaylor series can be centered anywhere in the complex plane, but I round to a real-valued Taylor series if imag(w)=0, at the real axis. Update Jan 6th, 2011: fix for bases B>=500, which I had accidentally broken with my slog routine updates.
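A minimal usage sketch (hedged: only the calls and the (w,r) signature above come from the post; the stored output variables aren't documented here, so no results are shown):
Code:
sexptaylor(0, 1);      /* sexp Taylor series centered at w=0, sample radius r=1 */
slogtaylor(1, 0.5);    /* slog Taylor series centered at w=1, sample radius r=0.5 */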
June 3rd, 2011 update: added support for base eta, which is handled by separate functions. The new functions cheta(z), sexpeta(z), invcheta(z), and invsexpeta(z) generate the superfunctions and inverse superfunctions for base \( \eta=\exp(1/e) \). The default precision for these new functions is approximately 50 decimal digits of accuracy with the default "\p 67" precision setting, and 120 decimal digits of accuracy with the "\p 134" setting.
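A minimal usage sketch for the base-eta functions (hedged: this assumes they are available right after loading, as their description as separate functions suggests; no outputs are shown):
Code:
\r kneser.gp
\p 67
sexpeta(0.5)                 /* superfunction for base eta=exp(1/e) */
invsexpeta(sexpeta(0.5))     /* should return approximately 0.5 */
invcheta(cheta(0.5))         /* likewise for the cheta/invcheta pair */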
- Sheldon
kneser.gp (Size: 35.15 KB / Downloads: 1,025)
June 2011 version, with \( \text{sexp}_\eta(z) \) support; update: fixed a typo in slog. For the most recent code version, go to the Nov 21st, 2011 thread.
01/06/2011, 02:53 PM (This post was last modified: 01/06/2011, 04:35 PM by sheldonison.)
(01/06/2011, 02:48 AM)JmsNxn Wrote: Thank you for this. This is exactly what I need.
I was wondering, though: since sexp(5*i) = 0.999 + 4.999i, is it safe for me to write sexp(fi) = 1 + fi and assume it's an error in approximation?
Perhaps you didn't execute the "loop". Also, the current version gives a warning if you evaluate sexp(z) for imag(z)>I. If you download the kneser.gp program (I would suggest the most recent version; go to this thread) and type in:
Code:
init(exp(1));loop;
.... goes through 13 iterations .....
(07:43) gp > sexp(5*I)
!!! WARNING, riemaprx(z) much better than sexp(z) for imag(z)>I !!!
%2 = 1.3786691576693131111676650899624 E40 - 3.2923562701722998997666622377240 E40*I
To get the correct result for imag(z)>I, use riemaprx(z) instead, which is accurate for imag(z)>=0.12*I. Or use splicesexp(z), which is valid everywhere in the complex plane; a quick comparison is sketched after the example below.
Code:
(07:43) gp > riemaprx(5*I)
%3 = 0.32106749434792621140043570196732 + 1.3394349973320786642136709026508*I
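As a hedged consistency check (no output shown, since it depends on the precision setting), splicesexp(z) should agree with riemaprx(z) in this region:
Code:
riemaprx(5*I) - splicesexp(5*I)   /* should be near zero; both are valid for imag(z)>I */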
I plan to post again, giving a clearer overview of the sexp and riemaprx routines, how they are generated, and how the different versions of the code improved convergence. The latest version leads to a definition of an infinite sequence of both functions. In particular, the current version allows for a continuous Cauchy integral for the sexp function, as opposed to the earlier version, which was restricted to a strictly discrete Cauchy integral estimate, and I think that had more theoretical problems. I think the current version will help prove convergence, but I'm not there yet.
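For reference, the continuous Cauchy integral referred to here is the standard coefficient formula (textbook material, not a quote from the code): for a Taylor series of sexp about a center \( z_0 \) sampled on the unit circle, \( a_k=\frac{1}{2\pi i}\oint_{|z-z_0|=1}\frac{\text{sexp}(z)}{(z-z_0)^{k+1}}\,dz \). The earlier versions approximated this with a finite sum over equally spaced samples on the circle (the discrete estimate mentioned above).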
- Shel