fixed point formula - Printable Version

+- Tetration Forum (https://tetrationforum.org)
+-- Forum: Tetration and Related Topics (https://tetrationforum.org/forumdisplay.php?fid=1)
+--- Forum: Computation (https://tetrationforum.org/forumdisplay.php?fid=8)
+--- Thread: fixed point formula (/showthread.php?tid=728)
fixed point formula - sheldonison - 02/25/2012

I'm in the process of updating Mike's merged fixed point code to work for a generic base in the complex plane. But first I needed a way to figure out both primary fixed points for a generic complex tetration base, and then to figure out whether the base is inside or outside the Shell Thron region, which tells whether or not one of the fixed points is attracting.

The formula that generates the base from the fixed point, \( b=L^{\frac{1}{L}} \), is pretty well known, but what about the function's inverse, if you don't have access to the Lambert W function? What if you want to get the fixed point \( b^L=L \) from the base, using something other than brute force?

For this Taylor series formula, for a base b, the input to the series is \( z=i\sqrt{\log(\log(b))+1} \). In this formula, z=0 corresponds to base \( \eta=\exp(1/e) \). The two primary fixed points are then calculated by putting both the positive and the negative square root into the Taylor series, which is accurate to more than 32 decimal digits for 1.05<=b<=25 or so, and gives the equation for the two fixed points. Since I don't have access to the Lambert W function (it is not a part of Pari-GP), I think this Taylor series is very helpful.

This Taylor series was inspired by the observation that you need to loop around \( \eta \) twice to return to your starting point, with the fixed point slowly changing as you loop around \( \eta \). The log(log(b)) part of the equation maximizes the radius of convergence: there is a singularity in the fixed point at b=1, and log(log(b)) moves that singularity off to infinity, which helps the series converge over a very large range of bases, probably for any real base>1. I haven't had time to figure out the total range of convergence yet, but it is surprisingly large, and includes negative bases.
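As an aside, the "brute force" alternative mentioned above is easy to sketch. The following is a minimal Python illustration (not the thread's PARI/gp code) of Newton's method on \( b^L=L \), demonstrated on base sqrt(2), whose two primary fixed points are the real numbers 2 and 4; the seeds are hand-picked for this base.

```python
import cmath
import math

def find_fixed_point(b, seed, tol=1e-14, max_iter=100):
    """Newton's method on g(L) = b^L - L, with g'(L) = ln(b)*b^L - 1.
    Converges quadratically from a reasonable seed."""
    lnb = cmath.log(b)
    L = complex(seed)
    for _ in range(max_iter):
        bL = cmath.exp(lnb * L)           # b^L
        step = (bL - L) / (lnb * bL - 1)  # Newton step
        L -= step
        if abs(step) < tol:
            break
    return L

# base sqrt(2) < eta: the two primary fixed points are real, 2 and 4
b = math.sqrt(2)
L1 = find_fixed_point(b, 1.5)   # attracting fixed point: 2
L2 = find_fixed_point(b, 4.5)   # repelling fixed point: 4
```

For complex bases the hard part, as discussed later in the thread, is seeding the iteration; that is exactly what the Taylor series above provides.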
For any base b with abs(b)<4000, except in the immediate vicinity of b=1 and b=0, the Taylor series gives double precision accurate results for the fixed point.
- Sheldon

\( z=i\sqrt{\log(\log(b))+1} \)
\( L(z)=\sum_{n=0}^{\infty} a_n z^n \)

Code:
a0= 2.7182818284590452353602874713526624977572470937000

RE: fixed point formula - jaydfox - 09/23/2014

(02/25/2012, 06:35 AM)sheldonison Wrote: (...) I needed a way to figure out both primary fixed points for a generic complex tetration base, and then to figure out if the base was inside or outside the Shell Thron region, which tells whether or not one of the fixed points is attracting. The formula to generate the base from the fixed point, \( b=L^{\frac{1}{L}} \), is pretty well known, but what about the function's inverse, if you don't have access to the Lambert W function? What if you want to get the fixed point \( b^L=L \) from the base, using something other than brute force? For this Taylor series formula, for a base b, z is the input to the Taylor series.

Hey Sheldon, I see we found the same Taylor series, though I couldn't tell what algorithm you used to derive it. I have an iterative fixed point finder with quadratic convergence, which I posted perhaps 6-7 years ago. I didn't have a good method of "seeding" the iterative solver, so about 4-6 months ago I decided to come up with one. My initial plan was just to do a polynomial regression to get decent estimates, but as I was playing around with it, I noticed the same thing you did: near base eta, the fixed points follow a sort of sqrt-ish behavior.

I derived a method to calculate the Taylor series to arbitrary precision, which I initially coded in SAGE/python. Then, after doing all that work, I found this thread. Anyway, after that, I was working on making improvements to your kneser.gp / tetcomplex.gp code (which I'll get around to posting some day), so I ported the code to PARI/gp.
In the process, I checked your coefficients, and the accuracy degrades pretty quickly. By the last few coefficients, they're not accurate to more than a few decimal places. I've been meaning to post it here for a few months, but I got busy with other projects (most recently, the binary partition sequence). MorgothV8's recent thread on fixed points prompted me to finally post my fixed point code. If you're interested, the code is posted in that thread:
http://math.eretrandre.org/tetrationforum/showthread.php?tid=917&pid=7460#pid7460

By the way, you mentioned that there is a large range of convergence. When I first analyzed the Taylor series, it seemed to have a radius of convergence of about 2.5-2.6, give or take. Keep in mind we're working in units of sqrt(log(log(z))+1), which corresponds to a radius of about 6.0-6.5 in units of log(log(z))+1. That number is really close to 2*pi, so I was initially tempted to assume that it had something to do with a singularity in a nearby branch. However, when I looked at where the values of the function "explode", it was near large negative values of log(log(z))+1, which corresponds to bases just above 1, e.g., 1.0001.

Well, when I thought about it, this actually makes sense. For bases very close to 1, there will be a fixed point very close to the base, and another fixed point (on the other side of e) that is very large: a thousand, a million, a billion, etc. This is what constricts the radius. The radius of convergence should be infinite (I think), but the coefficients in the Taylor series decay so slowly that it behaves like a singularity when you approach base 1. I've only calculated the first 500 or so coefficients of the Taylor series for analysis, but I've found a method to roughly estimate the rate at which the coefficients decay, and it's very slow.
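For readers who want to reproduce these radius estimates: the root-test idea used in this thread can be sketched in a few lines of Python. The actual fixed point coefficients are not reprinted here, so a geometric series with a known radius of convergence stands in as the toy example.

```python
def effective_radius(coeffs, n):
    """Root-test estimate of the radius of convergence from the
    n-th coefficient: r ~ 1 / |a_n|^(1/n)."""
    return 1.0 / (abs(coeffs[n]) ** (1.0 / n))

# toy series 1/(1 - z/2) = sum (z/2)^n, known radius of convergence 2
coeffs = [0.5 ** n for n in range(200)]
r = effective_radius(coeffs, 199)
```

For slowly decaying coefficients like those described above, this estimate drifts with n, which is exactly the "effective radius" behavior discussed in the next paragraph.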
In fact, the "effective radius" seems to climb up to about 2.6 before actually coming back down to 2.5 or less, and then eventually, slowly, climbing back up. Consider that for a base just slightly larger than 1, the upper fixed point will be very large. For example, for a base with an upper fixed point of 2^12 (4096), the value of abs(sqrt(ln(ln(z))+1)) is about 2.28. For 2^20 (1048576), the value of abs(sqrt(ln(ln(z))+1)) is about 3.20. Think about how many terms it would take, at a radius of 3.20, to encode a value of about a million: a hundred thousand at least, I would think. So even by a hundred thousand terms, the function will behave as if it has a radius of convergence of 3.5 or less. At least, that's how I perceive it intuitively; my complex analysis isn't sharp enough to do a formal analysis.

Anyway, the good news is that, on the top side, a radius of convergence of 2.5 implies we can find fixed points for bases up to about exp(exp(2.5^2-1)), which is about 10^80 (rounding conservatively), provided we use enough terms in the Taylor series. If we conservatively stay within a radius of 2, that still gives us a range up to exp(exp(2^2-1)) = e^e^3 = 5.3 x 10^8, down to about 1.00676. I haven't tried calculating anything that large yet, so I don't know how many terms are needed. I've played around with the small numbers, and they don't seem to blow up until about 1.001 or lower.

As for my algorithm, I thought I had a pretty TeX-ified description of it somewhere, but I can't find it. I at least had the presence of mind to put comments in my SAGE/python code (4-6 months ago), so I'll post those until I have time to TeX-ify it. This is just the comments; I started modifying the (SAGE/python) code separately from the comments, so I still need to merge the code and comments back together.

Code:
# The base is developed as a function of the fixed point (fp).

One final thought.
I did an analysis, and I found that precision decays at about 3 bits per term in the Taylor series. Therefore, to calculate 200 terms to 150 bits of precision, you need to start with about 150+3*200 = 750 bits, plus a little more just to be safe. This turns out to be easy in PARI/gp, because precision is measured in decimal digits, and each decimal digit of precision is about 3.3 bits. So I just added an extra digit of precision for each term.

RE: fixed point formula - jaydfox - 09/28/2014

(09/23/2014, 11:01 PM)jaydfox Wrote: By the way, you mentioned that there is a large range of convergence. When I first analyzed the Taylor series, it seemed to have a radius of convergence of about 2.5-2.6, give or take. Keep in mind we're working in units of sqrt(log(log(z))+1), which corresponds to a radius of about 6.0-6.5 in units of log(log(z))+1. That number is really close to 2*pi, so I was initially tempted to assume that it had something to do with a singularity in a nearby branch.

Okay, so I've calculated the Taylor series out to 1600 terms. It took almost two full days. The bulk of the time (over 99%) was spent on the series reversion. I'm using SAGE/python to do the series reversion, but I checked the source code, and it's using the PARI library to do the reversion, so the SAGE code and the PARI code should take just about as much time for a given level of precision. And as I mentioned before, the minimum precision during the reversion step is dictated by the number of terms in the series.

Anyway, I should have trusted my gut about the 2*pi thing. The root test is still limited to about 2.51, which is really close to sqrt(2*pi). I did an analysis of where the function "blows up", and found something interesting. In terms of absolute value, the function blows up in the direction of theta=pi/2, e.g., sqrt(2*pi)*I = sqrt(-2*pi). However, there is no obvious singularity in that direction.
Instead of looking at where the absolute value is the largest, I looked for anything strange, and found it at odd multiples of pi/4. So at theta=pi*(2*k+1)/4, e.g., sqrt(2*pi)*(sqrt(1/2)+sqrt(1/2)*I), there is a singularity. It's not an "infinite" singularity; it turns out to be the location of base eta, in other branches. Once I realized this, it was like, duh! Why didn't I think of that! You see, the value is well-defined at base eta, unlike, say, base 0 or 1. But it's an algebraic singularity, because there are pairs of fixed points that converge when we get to that base.

Remember, we're working with sqrt(ln(ln(base))+1). Well, consider base eta. The first logarithm is 1/e. The second iterated logarithm is -1. Add 2*pi*I and then add 1, and now you have sqrt(2*pi*I). We could just as easily have added -2*pi*I, and the sqrt has two branches, so this gives us all four possibilities for the sqrt:

sqrt(2*pi)*sqrt(1/2)*(+/-1 +/- I) = \( \sqrt{\pi} (\pm 1 \pm i) \)

This means that, without analytically continuing the function, we will be limited to a radius of convergence of about 2.5. For reference, sqrt(pi) is about 2.506628...
Edit: Oops, I forgot the factor of 2: sqrt(2*pi) = 2.506628... End Edit

Now, I haven't put much thought yet into how to analytically extend the function. However, just to get something to work with, we could calculate a bunch of fixed points for a list of bases, and then use that information to come up with an approximate Taylor series. This would boil down to either polynomial interpolation or least squares regression. You can do both at the same time if you use a discrete Fourier transform, or DFT (use a list of equally spaced points on the circumference of a circle). I believe this is what Sheldon refers to as a Cauchy integral. It has the benefit of being both a polynomial interpolation and a least squares fit, which is why it's so useful.
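The sampling idea just described can be sketched compactly. Below is a minimal Python illustration (not jaydfox's PARI/gp library): recover approximate Taylor coefficients by sampling a function on a circle and applying a plain O(N^2) DFT, demonstrated on exp(z), whose true coefficients 1/n! are known.

```python
import cmath

def taylor_coeffs_dft(f, n_coeffs, radius=1.0, n_samples=64):
    """Approximate Taylor coefficients of f about 0: sample f at
    equally spaced points on a circle, then apply a DFT (the "Cauchy
    integral" idea).  a_n ~ (1/N) * sum_k f(r*w^k) * w^(-k*n) / r^n."""
    samples = [f(radius * cmath.exp(2j * cmath.pi * k / n_samples))
               for k in range(n_samples)]
    coeffs = []
    for n in range(n_coeffs):
        acc = sum(samples[k] * cmath.exp(-2j * cmath.pi * k * n / n_samples)
                  for k in range(n_samples))
        coeffs.append(acc / (n_samples * radius ** n))
    return coeffs

# demo on exp(z); the estimates alias coefficients n, n+N, n+2N, ...
# so with N=64 samples the first few coefficients are very accurate
a = taylor_coeffs_dft(cmath.exp, 8)
```

To use this for the fixed point series itself, f would be the fixed point finder evaluated at the base corresponding to each sample point z, as described in the next posts.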
Another benefit of a DFT (compared to, say, Lagrange or Newton interpolation) is that it uses considerably less memory, and it's quite a bit faster if you use an FFT (Fast Fourier Transform). I'll see what I can put together over the next few days.

RE: fixed point formula - jaydfox - 10/01/2014

(09/28/2014, 09:22 PM)jaydfox Wrote: Now, I haven't put much thought yet into how to analytically extend the function. However, just to get something to work with, we could calculate a bunch of fixed points for a list of bases, and then use that information to come up with an approximate Taylor series. This would boil down to either polynomial interpolation or least squares regression. You can do both at the same time if you use a discrete Fourier transform, or DFT (use a list of equally spaced points on the circumference of a circle). I believe this is what Sheldon refers to as a Cauchy integral. It has the benefit of being a polynomial interpolation and a least squares fit, which is why it's so useful.

Okay, here's a somewhat stable version of my DFT/FFT library, a slightly updated version of the fixed point finder, and a piece of code that calculates the fixed point Taylor series using a DFT (i.e., a "Cauchy integral"). I was able to calculate 512 terms to 96 digits of precision in just a couple of seconds. It took my other code (closed-form calculation of terms, using reversion of a power series) over 2 minutes to get the same result. I then calculated 1600 terms, again in only a few seconds; it took nearly two days using my other code. Needless to say, in this specific scenario, the DFT method is extremely fast. And just as accurate.

So, the files. First, the fixed point iterative solver. It also has code for finding the inverse Schroeder function at the fixed point of an arbitrary Taylor series. This is used to find the inverse Schroeder function for the exponential function, but I assume it can be used for arbitrary functions.
Edit: I forgot to mention, in this version of the library, I decided to go with Sheldon's convention of applying a factor of sqrt(-1) to the "z" term: z = I*sqrt(ln(ln(base))+1). The benefit of this is that the Taylor series takes on real coefficients only, which makes it a little easier to study. I haven't double-checked all my other code yet to make sure I didn't break something in the process, so if you get an unexpected result, check against the previous version of the code. End Edit

The second file is the DFT library. It has a few comments here and there, but it needs more. I was originally writing it to speed up the "Cauchy integrals" in Sheldon's kneser/tetcomplex code, by converting them from simple DFTs to Fast Fourier Transforms. But FFTs turned out to be so much fun that I got carried away and built a library, useful for other applications as well.

Finally, here's the code that uses these two libraries to calculate the fixed point formula. There's a lot of experimental code in here, so I apologize in advance for any bugs, or for any code that's hard to reverse engineer.

Note the performance of the FFT. I can use thousands of samples and still get a result in a few seconds. Heck, I can do tens of thousands of samples and still get a result in a reasonable timeframe. I did 8! (40320) samples in 11 seconds at 67 digits of precision, or 30 seconds at 144 digits.

Update: I suppose I should post a screenshot. The following three lines of code were typed at the prompt (it's hard to tell, because of the long help text that displays when you load the file):

Code:
\r jdft_fp_0.2.gp

RE: fixed point formula - sheldonison - 03/11/2015

(10/01/2014, 04:43 AM)jaydfox Wrote: ...

I missed the posts in the computation section! This looks like great fun. I've never written an FFT, but I would like to. It's just that a generic DFT works pretty well within reasonable radius-of-convergence scenarios.
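For anyone in the same position, the classic recursive radix-2 Cooley-Tukey FFT really is only a few lines. This is a from-scratch Python sketch, not the jdft library posted above:

```python
import cmath

def fft(a):
    """Recursive radix-2 Cooley-Tukey FFT; len(a) must be a power of 2.
    Splits into even/odd halves and combines with twiddle factors,
    giving O(N log N) instead of the O(N^2) of a plain DFT."""
    n = len(a)
    if n == 1:
        return [complex(a[0])]
    even = fft(a[0::2])
    odd = fft(a[1::2])
    out = [0j] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle
        out[k + n // 2] = even[k] - twiddle
    return out

# an impulse transforms to a flat spectrum
spectrum = fft([1, 0, 0, 0])
```

A production version (like the jdft library) would work iteratively and in arbitrary precision, but the recursion shows the structure.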
Back to the original topic: generating the fixed points from \( \ln(\ln(b))+1 \). The \( k=\ln(\ln(b))+1 \) form is also very interesting because it means the tetration iteration switches from iterating \( z \mapsto b^z \) to iterating \( z \mapsto \exp(z)-1+k \). Here, \( \text{sexp}_k(z) = \text{sexp}_b(z)\cdot \ln(b)+\ln(\ln(b)) \).

Another comment: some time after this post, I did succeed in getting a complex base tetration algorithm to work; it's posted online here: http://math.eretrandre.org/tetrationforum/showthread.php?tid=729. But it doesn't converge for bases near the Shell Thron boundary, and it's also dog slow for bases close to \( \eta=\exp(1/e) \). So I'm thinking about a tetration algorithm that iteratively generates the slog, centered exactly between the two fixed points, using the "k" form, and using the very same fixed point Taylor series as above; well, actually \( L_k = \ln(\ln(L_b)) \). The algorithm I have in mind should have convergence like \( n^{-r} \), where I think r should be the real(pseudo period + 1), so a 500-term Taylor series for base e would be accurate to 15 decimal digits or so. Smaller values of k (bases closer to eta) have larger real periods. An FFT would help because it allows for thousands of terms instead of hundreds, so I may eventually need your FFT.

RE: fixed point formula - sheldonison - 03/30/2015

(02/25/2012, 06:35 AM)sheldonison Wrote: \( z=i\sqrt{\ln(\ln(b))+1} \)

This evening, I stumbled across an equivalent formal power series for the same fixed-point function. Instead of the fixed point function above, this function gives the ln(ln(fixed-point)) of the fixed-point function quoted above.
\( z=\sqrt{-2\left(\ln(\ln(b))+1\right)} \)
\( L(b)=\exp(\exp(\text{xfixed}(z))) \)

Here are the first 60 terms of the "xfixed" formal power series. Update: xfixed is defined as follows (see my answer to this mathstack question):

\( g(x)=\sqrt{2(\exp(x)-x-1)};\;\;\;\;\text{xfixed}=g^{-1}(x) \)

Code:
a1= 1

The series above calculates the fixed point x for the function f(x), as z varies in the neighborhood of zero. Here, if x is the desired fixed point L, then f(x)=x.

\( f(x)=\exp(x)-1+k \;\mapsto \; f(x) = \exp(x)-1-\frac{z^2}{2};\;\;\; k \mapsto -\frac{z^2}{2} \) — this mapping gives rational coefficients for L(z).

k=0 corresponds to the parabolic case, \( f(x)=\exp(x)-1 \), which has a parabolic fixed point of zero.

\( L(z) = \sum_{n=1}^{\infty}a_n z^n \) — the Taylor series from above
\( L(z) = \exp(L(z))-1-\frac{z^2}{2} \) — plugging L(z) into f(x) as x

For \( \text{sexp}_e:\; k=1;\; f(x)=\exp(x);\; z=\pm\sqrt{-2} \)

With an 80-term formal power series, one gets the fixed point for base e accurate to 35 decimal digits. Smaller bases closer to exp(1/e) are more accurate; for base 2, the fixed point is accurate to 43 decimal digits.

So, for example, for tetration base e, the fixed point is \( L\approx 0.318132 + 1.337236i \). To get this value for L and its conjugate, plug \( z=\pm\sqrt{-2(\ln(\ln(e))+1)}=\pm\sqrt{-2} \) into the series above:

\( L = \sqrt{-2} - \frac{\left(\sqrt{-2}\right)^2}{6} + \frac{\left(\sqrt{-2}\right)^3}{36} - \frac{\left(\sqrt{-2}\right)^4}{270} + \frac{\left(\sqrt{-2}\right)^5}{4320} + \frac{\left(\sqrt{-2}\right)^6}{17010} - \frac{139\left(\sqrt{-2}\right)^7}{5443200}+\ldots \approx 0.318132 + 1.337236i \)

For base e, conveniently, \( L=\exp(L) \). For other bases, \( L=\exp(L)-1+k \).

For tetration base 2, we first convert it to the conjugate form:

\( k=\ln(\ln(2))+1 \approx 0.63348708 \)
\( f(x)\approx \exp(x)-1+0.63348708 \)

If we have a fixed point L of f(x), then the fixed point for tetration base 2 is \( \exp(\exp(L)) \). The value z for the series above is \( z = \sqrt{-2\left( \ln(\ln(2))+1 \right)} \).

RE: fixed point formula - mike3 - 05/23/2015

I want to point out that there are far superior methods:
http://math.eretrandre.org/tetrationforum/showthread.php?tid=917&pid=7470#pid7470
http://math.eretrandre.org/tetrationforum/showthread.php?tid=917&pid=7512#pid7512
and below. This code calculates arbitrary branches of the Lambert W function for arbitrary inputs and is extremely fast (I can get thousands of digits of precision in a fraction of a second on my machine) by using Newton's method.
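As a closing addendum, here are two small numeric sanity checks on the k-form machinery discussed above, sketched in Python rather than the thread's PARI/gp. The conjugating substitution w = z*ln(b) + ln(ln(b)) is my reading of Sheldon's sexp relation (an assumption, not spelled out in the thread); the series coefficients are the seven terms printed in the base-e worked example.

```python
import cmath
import math

# --- Check 1: z -> b^z is conjugate to w -> exp(w) - 1 + k ---
# with k = ln(ln(b)) + 1, under the substitution w = z*ln(b) + ln(ln(b))
b = 2.0
lnb = math.log(b)
k = math.log(lnb) + 1          # ~ 0.63348708 for base 2, as in the post

def phi(z):
    """Coordinate change from base-b dynamics to the 'k' form."""
    return z * lnb + math.log(lnb)

def g(w):
    return cmath.exp(w) - 1 + k

z0 = 0.75                      # arbitrary test point
lhs = phi(b ** z0)             # iterate first, then change coordinates
rhs = g(phi(z0))               # change coordinates, then iterate
conj_err = abs(lhs - rhs)      # should vanish up to rounding

# --- Check 2: the printed series terms at z = sqrt(-2) (base e) ---
coeffs = [1, -1/6, 1/36, -1/270, 1/4320, 1/17010, -139/5443200]
z = cmath.sqrt(-2)
L = sum(a * z ** (n + 1) for n, a in enumerate(coeffs))
# L should approximate the base-e fixed point 0.318132 + 1.337236i,
# and therefore be an approximate fixed point of exp
fp_err = abs(cmath.exp(L) - L)
```

With only seven terms the series already reproduces L to about four decimal places; Sheldon reports 35 digits from an 80-term series.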