Posts: 93
Threads: 30
Joined: Aug 2016
I have just downloaded it from somewhere. It includes the following:
\r kneser.gp
/* hyperoperators */
h(a,s,b) = {
fatbx(a,s-1,b);
}
/* root-inverse of the hyperoperators (inverse with respect to b) */
hr(a,s,b) = {
invfatbx(a,s-1,b);
}
/* support function for fatbx and invfatbx, allowing a,b < e (uses sexpeta below the fixed point e, cheta above it) */
expeta(t,a) = {
if (real(a)<exp(1), sexpeta(invsexpeta(a)+t), cheta(invcheta(a)+t));
}
/* fatbx: main calculation function for the hyperoperators */
fatbx(a,t,b) = {
if (real(t)<1, return (expeta(t,expeta(-t,a)+expeta(-t,b))));
if (real(t)>=1, return (expeta(t-1,b*expeta(1-t,a))));
}
/* invfatbx: main calculation function for the root-inverse of the hyperoperators */
invfatbx(a,t,b) = {
if (real(t)<1, return (expeta(t,expeta(-t,a)-expeta(-t,b))));
if (real(t)>=1, return (expeta(t-1,(1/b)*expeta(1-t,a))));
}
It might be incorrect according to the graph.
Xorter Unizo
Posts: 684
Threads: 24
Joined: Oct 2008
01/03/2017, 09:24 AM
(This post was last modified: 01/03/2017, 09:36 AM by sheldonison.)
(01/02/2017, 02:26 PM)Xorter Wrote: I have just downloaded it from somewhere. It includes the following:
\r kneser.gp
/* hyperoperators */
h(a,s,b) = ....
It might be incorrect according to the graph.
This was a reply to me asking, "I'm not familiar with H(x); sorry. How do you define/implement it? What function in fatou.gp are you using?"
First off, this post should never have been posted here in the fatou.gp discussion, since Xorter's H function has nothing to do with fatou.gp. Xorter's H function has two different definitions (ignoring the subtle use of both superfunctions of eta), depending on whether s<2 or s>=2. Let's focus only on the second definition, for s>=2. For integer values of s, Xorter's pari-gp code for H(a,s,b) (with t=s-1 in the code) is equivalent to the following equation.
\( H(a,s,b) = \exp^{[\circ(s-2)]}_\eta(b \cdot \log^{[\circ(s-2)]}_{\eta}(a))\;\;\;\eta=\exp(1/e) \)
For s=2, this is just b*a, which is what Xorter expected.
For s=3, this is \( \eta^{b\cdot\log_{\eta}(a)}\;=\;a^b \), which is also what Xorter expected.
So it "works" for the integer values s=2 and s=3. But Xorter "complains" that other values make a funny graph.
For s=4: H(2,4,3) = 11.713
For s=5: H(2,5,3) = 11.103
For s=6: H(2,6,3) = 3.411
So H(2,s,3) is doing exactly what Xorter's equation says it should do, with s extended to non-integer values by using the superfunction and the inverse superfunction of eta. Of course it's "incorrect" according to the graph... because H(2,s,3) is only "correct" at s=2 and s=3, and not at other integer values of s, where H has nothing to do with the extended hyperoperator Xorter claims it should be!
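For reference, these values can be reproduced directly from the quoted code; this is just a sketch, assuming kneser.gp and Xorter's definitions quoted above have been loaded (the printed digits are the ones listed above):
Code: \r kneser.gp
/* ... plus Xorter's expeta(), fatbx() and h() definitions quoted above ... */
h(2,4,3)   /* ~ 11.713 */
h(2,5,3)   /* ~ 11.103 */
h(2,6,3)   /* ~ 3.411 */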
- Sheldon
Posts: 684
Threads: 24
Joined: Oct 2008
08/08/2017, 12:32 AM
(This post was last modified: 08/08/2017, 11:12 AM by sheldonison.)
I posted a new version of fatou.gp; to download, see the first post. The most important addition to the program is a routine called "matrix_ir(k,n,m)", where k is log(log(B))+1 if B is the tetration base. Internally, fatou iterates exp(z)-1+k near the parabolic fixed point. The matrix_ir routine solves a matrix equivalent to finding a solution on n sample points that matches exactly with the theta approximation, or abel(f(z))-1 or abel(finv(z))+1. sexpinit(B) is still 2-2.5x faster than the matrix solution on average, but for some complicated complex bases, the times are closer. I also added loop1(n), which is used after loop(k) to keep iterating the loop solution without adding any additional sample points. Then loop1 will converge to the exact same solution that matrix_ir generates! I will add more details later on the matrix_ir solution, and how it samples a set of points to get the exact same kind of convergence as sexpinit/loop.
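For example, a rough sketch of the loop/loop1 workflow (the base and the counts are purely illustrative, and I'm assuming the arguments of loop and loop1 are iteration counts):
Code: \r fatou.gp
sexpinit(exp(1));   /* the usual iterative slog/sexp solution, base e */
loop(2);            /* illustrative: a couple of extra iterations, adding sample points */
loop1(5);           /* illustrative: keep iterating without adding any new sample points */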
The output has been cleaned up, and shows 32 decimal digits by default now. Precision is reported in number of decimal digits now. With 64-bit pari-gp implementations, it's still generally less than 0.6 seconds for an sexpinit(B), but if you are calculating lots of bases, try limitp=16 to speed up computation by another factor of about 5x, with >16 decimal digits of precision. You may also want to enable quietmode=1.
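A quick sketch of those settings, with fatou.gp already loaded (the base and evaluation point are just examples):
Code: limitp = 16;     /* target ~16 decimal digits; roughly 5x faster when scanning many bases */
quietmode = 1;   /* suppress the progress output */
sexpinit(3);
sexp(0.5)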
I had to change how the guts of fatou.gp worked to allow matrix_ir to work. These changes are extensive but mostly invisible. Some bases require a couple more iterations than before. Precision will generally be higher than in the last version for all bases. The new version converges to full precision on far more bases than the old version, including real bases up to 39000, and complex bases like sexpinit(I) and sexpinit(-exp(-2)) converge fully as well. I cleaned up the help menus, and the output doesn't show the imaginary part for real bases. In general, the program is more stable than it used to be over a much wider range of real and complex bases. For real bases>39000, try setting ctr=19/20. This allows real bases up to about sexpinit(10^6). The only thing kneser.gp does better than fatou.gp is real bases>10^6, but I am no longer supporting or updating kneser.gp.
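For example, for a very large real base, something like this should work (a sketch; the base itself is illustrative):
Code: ctr = 19/20;        /* needed for real bases > 39000 */
sexpinit(100000);   /* a base well above 39000; up to about 10^6 works */
sexp(1)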
My old pentation.gp program is broken with the latest versions of pari-gp, and since I'm not supporting kneser.gp, I added an improved, more elegant version of pentation to fatou.gp. Look at help_pentation(). I added support for eta=exp(1/e); see help_eta. Internally, I implemented Ecalle's formal asymptotic solution for exp(x)-1, the parabolic case corresponding to base eta.
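A minimal sketch of the pentation support (the base and height here are illustrative; see help_pentation() for the details):
Code: pentinit(2);   /* initialize pentation for base 2 */
pent(1.5)      /* pentation at a non-integer height */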
- Sheldon
Posts: 1,630
Threads: 106
Joined: Aug 2007
Hey Sheldon, just want to say how amazed I am at how extensively you developed your program, including the findings of Ecalle and the perturbed Fatou coordinates, while I was absent. Perhaps later I will ask some questions about the analytic continuation of the base, which still puzzles me. Meanwhile I will read your fatou.gp to finally understand what you are doing there.
Posts: 684
Threads: 24
Joined: Oct 2008
07/18/2019, 10:58 PM
(This post was last modified: 11/11/2019, 05:13 PM by sheldonison.)
(07/18/2019, 11:52 AM)bo198214 Wrote: Hey Sheldon, just want to say how amazed I am at how extensively you developed your program, including the findings of Ecalle and the perturbed Fatou coordinates, while I was absent. Perhaps later I will ask some questions about the analytic continuation of the base, which still puzzles me. Meanwhile I will read your fatou.gp to finally understand what you are doing there.
Thanks Henryk. I wanted to prove fatou would converge, but I hit interesting roadblocks when I tried. Here is a base-e picture showing 60 sample points and their correspondence. I don't remember where I posted this before, but it belongs in this thread.
You can run sexpinit(exp(1)), which iterates, but you can also solve a linear system of equations. That works equally well, although you need to know ahead of time how many samples you want for the theta mapping, and it's generally extra work; I think it is more interesting for understanding how the program works, though. matrix_ir(1,60,8) also solves the slog for base e. With 60 sample points, the result is good for about 12 digits of precision, with 61 terms in the Taylor series ranging from x^1 to x^60. For the system of equations, or the iterative solution, the constant term is required to be zero.
Anyway, the points in yellow on the middle circle border are paired up with their exponent, in yellow, in the inner circle. The points in brown are paired up with their logarithm in the inner circle. The points in pink in the inner circle (and green for the conjugate) are used to tell the regular slog's Abel function what its theta mapping is. Here, alpha_u is the upper Abel function, from the logarithm of the Schröder function, and theta_u is the theta mapping. That is used to define the points in pink (and green) in the middle circle. Solve the system of equations, or iterate, adding sample points as you go, and you are solving this picture as the number of sample points goes to infinity. Here, all the points are paired up with data points in a smaller-radius circle, so the high-frequency components aren't as relevant, and the solution is both linear and very stable. The same is true of the theta mappings.
For Kneser's slog, there is a complex-valued inverse superfunction in the upper half of the complex plane, and another in the lower half of the complex plane, generated from the formal Schröder function, which my program generates.
\( \alpha_u(z)=\frac{\ln(\psi(z))}{\ln(\lambda)};\;\; \) upper Abel function from the formal Schröder function \( \psi \), with multiplier \( \lambda \); for base e, \( \lambda=L \)
\( \alpha_r(z)=\alpha_u(z)+\theta_u(\alpha_u(z));\;\; \) real-valued Abel function via the 1-cyclic theta mapping \( \theta_u \)
\( \alpha_r(z)=\frac{\ln(z-L_1)}{\ln(\lambda_1)} + \frac{\ln(z-L_2)}{\ln(\lambda_2)} + p(z)\;\; \) real-valued Abel function via a Taylor series p(z) centered between the fixed points
\( \text{slog}(z)=\alpha_r(z)-\alpha_r(1) \)
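As a small sanity check of these definitions (a sketch; the evaluation points are arbitrary), the resulting sexp and slog should be inverses of each other and satisfy the Abel equation slog(exp(z)) = slog(z)+1:
Code: \r fatou.gp
sexpinit(exp(1));
slog(sexp(0.5))          /* should return ~0.5; slog is the inverse of sexp */
slog(exp(2)) - slog(2)   /* should return ~1, by the Abel equation */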
I have lots more charts I could post as time permits. The algorithm works for lots of repelling and attracting complex bases, as long as they're not on the Shell-Thron boundary and don't have a pseudo period too close to 2, but you can get really close to the boundary and it still works, just a little slower.
- Sheldon
Posts: 92
Threads: 10
Joined: May 2019
07/22/2019, 11:56 PM
(This post was last modified: 07/23/2019, 01:50 PM by Ember Edison.)
(08/08/2017, 12:32 AM)sheldonison Wrote: I posted a new version of fatou.gp; to download, see the first post. The most important addition to the program is a routine called "matrix_ir(k,n,m)", where k is log(log(B))+1 if B is the tetration base. Internally, fatou iterates exp(z)-1+k near the parabolic fixed point. The matrix_ir routine solves a matrix equivalent to finding a solution on n sample points that matches exactly with the theta approximation, or abel(f(z))-1 or abel(finv(z))+1. sexpinit(B) is still 2-2.5x faster than the matrix solution on average, but for some complicated complex bases, the times are closer. I also added loop1(n), which is used after loop(k) to keep iterating the loop solution without adding any additional sample points. Then loop1 will converge to the exact same solution that matrix_ir generates! I will add more details later on the matrix_ir solution, and how it samples a set of points to get the exact same kind of convergence as sexpinit/loop.
The output has been cleaned up, and shows 32 decimal digits by default now. Precision is reported in number of decimal digits now. With 64-bit pari-gp implementations, it's still generally less than 0.6 seconds for an sexpinit(B), but if you are calculating lots of bases, try limitp=16 to speed up computation by another factor of about 5x, with >16 decimal digits of precision. You may also want to enable quietmode=1.
I had to change how the guts of fatou.gp worked to allow matrix_ir to work. These changes are extensive but mostly invisible. Some bases require a couple more iterations than before. Precision will generally be higher than in the last version for all bases. The new version converges to full precision on far more bases than the old version, including real bases up to 39000, and complex bases like sexpinit(I) and sexpinit(-exp(-2)) converge fully as well. I cleaned up the help menus, and the output doesn't show the imaginary part for real bases. In general, the program is more stable than it used to be over a much wider range of real and complex bases. For real bases>39000, try setting ctr=19/20. This allows real bases up to about sexpinit(10^6). The only thing kneser.gp does better than fatou.gp is real bases>10^6, but I am no longer supporting or updating kneser.gp.
My old pentation.gp program is broken with the latest versions of pari-gp, and since I'm not supporting kneser.gp, I added an improved, more elegant version of pentation to fatou.gp. Look at help_pentation(). I added support for eta=exp(1/e); see help_eta. Internally, I implemented Ecalle's formal asymptotic solution for exp(x)-1, the parabolic case corresponding to base eta.
And that is only for sexp; slog does not work for real bases >= 5. (I believe base = 4.968 passes the test.)
Code: sexpinit(5);slog(-0.015 - 8*I)
Is this a bug or a feature of slog?
Posts: 684
Threads: 24
Joined: Oct 2008
(07/22/2019, 11:56 PM)Ember Edison Wrote: ...
And that is only for sexp; slog does not work for real bases >= 5. (I believe base = 4.968 passes the test.)
Code: sexpinit(5);slog(-0.015 - 8*I)
Is this a bug or a feature of slog?
Thanks for testing, Ember; I fixed the typo in the pari-gp code for fatou.gp. It was missing a check for complextaylor before calling the isuperf2 routine. It is fixed now. Sorry for the delay; things have been very, very busy for me. If a base is real valued, this routine isn't generated for the lower half of the complex plane. Apparently, this particular piece of code isn't used much in the slog routine.
- Sheldon
Posts: 92
Threads: 10
Joined: May 2019
(07/27/2019, 07:38 AM)sheldonison Wrote: (07/22/2019, 11:56 PM)Ember Edison Wrote: ...
And that is only for sexp; slog does not work for real bases >= 5. (I believe base = 4.968 passes the test.)
Code: sexpinit(5);slog(-0.015 - 8*I)
Is this a bug or a feature of slog?
Thanks for testing, Ember; I fixed the typo in the pari-gp code for fatou.gp. It was missing a check for complextaylor before calling the isuperf2 routine. It is fixed now. Sorry for the delay; things have been very, very busy for me. If a base is real valued, this routine isn't generated for the lower half of the complex plane. Apparently, this particular piece of code isn't used much in the slog routine.
- Sheldon
Sorry, but please test it:
Code: sexpinit(32000*(-1)^(1/30))
Posts: 684
Threads: 24
Joined: Oct 2008
08/14/2019, 09:57 AM
(This post was last modified: 08/14/2019, 10:24 AM by sheldonison.)
(07/29/2019, 10:35 AM)Ember Edison Wrote: Sorry, but please test it:
Code: sexpinit(32000*(-1)^(1/30))
I updated the main fatou.gp code to include the more resilient matrix version from Ember's post. But matrix_ir doesn't know how many theta sample points to use and defaults to 18 theta samples. sexpinit(B) is still the suggested version to use for generic real and complex bases; it loops, increasing precision at each iteration, instead of using the matrix version of the same solution family. So if you have a base that gives errors for sexpinit(B) ....
matrix_ir(B,lctr,ltht,myctr,myir); /* B is now the tetration base, instead of the internally used k=log(log(B))+1 */
lctr is the number of sample points in the slog Taylor series
ltht is the number of sample points in the theta mapping; defaults to 18
myctr is the sampling radius ratio.
myir is the inner radius ratio, relative to the sampling radius ratio.
matrix_ir(exp(1)); /* automatic settings work well for many real valued bases to give >32 digits of precision */
matrix_ir(31825+3345*I,400,,8/9); /* 400 samples, myctr=8/9 for convergence; 29 digits */
Here are some settings for other crazy bases, like base=0.15, base=0.1, or base=0.2*I ....
matrix_ir(0.2*I,400,90,14/15,45/46); /* 20 decimal digits of precision */
matrix_ir(0.15,400,90,14/15,45/46); /* 16.5 decimal digits of precision */
matrix_ir(0.1+I*1E-30,400,250,14/15,45/46); /* 16 decimal digits of precision */
- Sheldon
Posts: 92
Threads: 10
Joined: May 2019
(08/14/2019, 09:57 AM)sheldonison Wrote: I updated the main fatou.gp code to include the more resilient matrix version from Ember's post. But matrix_ir doesn't know how many theta sample points to use and defaults to 18 theta samples. sexpinit(B) is still the suggested version to use for generic real and complex bases; it loops, increasing precision at each iteration, instead of using the matrix version of the same solution family. So if you have a base that gives errors for sexpinit(B) ....
matrix_ir(B,lctr,ltht,myctr,myir); /* B is now the tetration base, instead of the internally used k=log(log(B))+1 */
lctr is the number of sample points in the slog Taylor series
ltht is the number of sample points in the theta mapping; defaults to 18
myctr is the sampling radius ratio.
myir is the inner radius ratio, relative to the sampling radius ratio.
matrix_ir(exp(1)); /* automatic settings work well for many real valued bases to give >32 digits of precision */
matrix_ir(31825+3345*I,400,,8/9); /* 400 samples, myctr=8/9 for convergence; 29 digits */
Here are some settings for other crazy bases, like base=0.15, base=0.1, or base=0.2*I ....
matrix_ir(0.2*I,400,90,14/15,45/46); /* 20 decimal digits of precision */
matrix_ir(0.15,400,90,14/15,45/46); /* 16.5 decimal digits of precision */
matrix_ir(0.1+I*1E-30,400,250,14/15,45/46); /* 16 decimal digits of precision */
Sorry, but please test it:
Code: matrix_ir(1E6,,,19/20);circchart("1E6.csv")
Code: hexinit(1.6);[ipent(3),ihex(2)]
hexinit(1.7);[pent(8),ihex(3)]
hexinit(22);hex(2)
pentinit(3381);pent(2)
matrix_ir(1E9,,,19/20);sexp(1)