01/27/2019, 12:42 AM
(This post was last modified: 01/29/2019, 08:56 PM by sheldonison.)
jaydfox Wrote:I remember Andrew Robbins noticed something similar with his (unaccelerated) slog. It appears to be centered at 0, with a radius of convergence (which I determined was limited by the logarithmic singularities), but the series is also convergent in a second approximately circular region centered at 1, and also limited in radius by the singularities. To me, the region looks like a peanut shell.
The peanut shell region is really cool. Normally, one views the function as centered at slog(z=0), like your truncated 800 term series. But if we used all 4000 terms to full accuracy, then we have two peanut regions of very good convergence, one peanut centered around zero, and the other peanut centered around one, with another hollow peanut shell of even better convergence connecting the two peanuts! I conjecture that it may very well be that the best behaved part of Jay's slog and Andrew's slog is the hollow peanut shell between the two regions of convergence, between zero and one.... instead of around either individual peanuts. More on a real example of this in a later post. I want to talk about theta \( (\theta) \) mappings.
In my last post, I started using a naming convention to distinguish between the two real-valued 1-cyclic theta mappings on the real axis, which behave similarly.
\( f(z)=z+\theta_{Rk}(z);\;\;\;f^{-1}(z)=z+\theta_{Rj}(z) \)
\( \text{KneserSlog}(z)\approx\text{JaySlog}(z)+\theta_{Rj}(\text{JaySlog}(z)) \)
\( \text{JaySlog}(z)\approx\text{KneserSlog}(z)+\theta_{Rk}(\text{KneserSlog}(z)) \)
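The idea that two Abel/slog functions for the same base differ by a real-valued 1-cyclic theta mapping can be illustrated with a toy example that is not tetration. Below is a minimal Python sketch, assuming the doubling map \( T(x)=2x \) with Abel function \( \alpha(x)=\log_2(x) \); the 1-cyclic mapping \( 0.1\sin(2\pi z) \) is invented purely for the demonstration.

```python
import numpy as np

# Toy illustration (not tetration): for the doubling map T(x) = 2x,
# alpha(x) = log2(x) satisfies the Abel equation alpha(T(x)) = alpha(x) + 1.
alpha = lambda x: np.log2(x)
alpha_inv = lambda z: 2.0 ** z     # superfunction: alpha_inv(z + 1) = T(alpha_inv(z))

# A second Abel function for the same map, differing by a hypothetical
# 1-cyclic mapping theta(z) = 0.1*sin(2*pi*z), chosen just for this demo:
beta = lambda x: alpha(x) + 0.1 * np.sin(2 * np.pi * alpha(x))

# Then beta(alpha_inv(z)) - z recovers the 1-cyclic theta mapping:
z = np.linspace(0.0, 3.0, 301)
theta = beta(alpha_inv(z)) - z
print(np.allclose(theta, 0.1 * np.sin(2 * np.pi * z)))   # True
print(np.allclose(theta[:100], theta[100:200]))          # 1-periodic: True
```

The same composition pattern, \( f(\alpha^{-1}(z))-z \), is what produces \( \theta_{Rk} \) and \( \theta_{Rj} \) above, except that there the Abel functions are the two slogs.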
In my error plots, I used \( \theta_{Rk}(z) \) to approximate Jay's slog starting from Kneser's slog, and then plotted the error term. What we really want next is the inverse direction: a way to approximate the inverse function involving \( \theta_{Rj} \), so as to use Jay's slog to get a superb approximation of Kneser's slog, and to do so without starting with Sheldon's solution for Kneser's slog. To do that, we start with the following equation, which involves a third 1-cyclic theta mapping that generates Kneser's slog from the complex valued Abel/Schröder function. What this equation says is that if you start with the complex valued Abel function and add a 1-cyclic theta mapping to it, then you get Kneser's real valued slog within a suitable region of convergence; Kneser's slog is then extended to the entire complex plane via a Schwarz reflection.
\( \text{KneserSlog}(z)=\alpha(z)+\theta_s(\alpha(z)) \)
\( \text{KneserSlog}(\alpha^{-1}(z))=z+\theta_s(z) \)
\( \theta_s(z)=\text{KneserSlog}(\alpha^{-1}(z))-z;\;\; \) plugging in the complex valued superfunction
\( \theta_s(z)\approx\text{JaySlog}(\alpha^{-1}(z))+\theta_{Rj}\left(\text{JaySlog}(\alpha^{-1}(z))\right)-z;\;\; \) approximation for Kneser's slog from Jay's slog; from above
The general forms of \( \theta_{Rj} \) and \( \theta_s \) are:
\( \theta_{Rj}(z)=\sum_{n=1}^{\infty}\left(a_n\exp(2n\pi i z) + \overline{a_n}\exp(-2n\pi i z) - 2\Re(a_n)\right) \)
\( \theta_{s}(z)=\sum_{n=0}^{\infty}b_n\exp(2n\pi iz)+\sum_{n=1}^{\infty}c_n\exp(-2n\pi iz);\;\; \) with \( c_n=0 \) for Kneser's slog
Note that for Kneser's slog, all of the \( c_n \) terms in \( \theta_s \) are zero, which is not the case when approximating \( \theta_s \) from Jay's slog. So we need to calculate these undesirable \( c_n \) error terms for Jay's slog, and then force the \( c_n \) terms to zero by using \( \theta_{Rj} \). Since Jay's slog is an approximation, we'll probably only force the first few terms to zero. For example, for a 120 term computation of JaySlog, we might force the first 10 terms \( c_1 \ldots c_{10} \) to zero, which should give accuracy of more than 50 decimal digits at the real axis; more in a later post.
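As a small sanity check on the general form of \( \theta_{Rj} \) above, here is a minimal Python sketch that evaluates a truncated version of the series; the coefficients \( a_n \) are made up for the demo. It verifies two properties implied by the form: the mapping is real valued on the real axis, and it vanishes at the integers.

```python
import numpy as np

def theta_Rj(z, a):
    """Truncated 1-cyclic mapping: sum over n of
    a_n e^{2 n pi i z} + conj(a_n) e^{-2 n pi i z} - 2 Re(a_n).
    a[0] is a_1, a[1] is a_2, ... (hypothetical coefficients)."""
    z = np.asarray(z, dtype=complex)
    total = np.zeros_like(z)
    for n, an in enumerate(a, start=1):
        total += (an * np.exp(2j * n * np.pi * z)
                  + np.conj(an) * np.exp(-2j * n * np.pi * z)
                  - 2 * an.real)
    return total

a = np.array([0.03 - 0.02j, 0.005 + 0.001j])   # invented sample coefficients
x = np.linspace(-2, 2, 401)
vals = theta_Rj(x, a)
print(np.allclose(vals.imag, 0))                 # real-valued on the real axis: True
print(np.allclose(theta_Rj([0, 1, -3], a), 0))   # vanishes at the integers: True
```

On the real axis each pair \( a_n e^{2n\pi ix}+\overline{a_n}e^{-2n\pi ix} \) collapses to \( 2\Re(a_n e^{2n\pi ix}) \), which is why the \( -2\Re(a_n) \) constant makes the mapping vanish at the integers.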
We start with the simplest approximation, where \( \theta_{Rj} \) has one unknown term, and we approximate \( a_1 \) such that \( c_1=0 \):
\( \theta_{Rj}(z) = a_1 \exp(2\pi i z) + \overline{a_1} \exp(-2\pi i z) - 2\Re(a_1) \)
Unfortunately, we actually need two equations in two unknowns; the two equations are complex conjugates of each other, one for the upper half and one for the lower half of the complex plane, as follows:
\( c_1 \) in terms of \( a_1;\;\overline{a_1};\;\;\; \) upper half of complex plane; \( \theta_s \)
\( \overline{c_1} \) in terms of \( \overline{a_1};\;a_1;\;\;\;\; \) lower half of complex plane; \( \overline{\theta_s} \)
Equivalently, one could have two equations in two unknowns: one equation for the real part of \( a_1 \) and a second equation for the imaginary part of \( a_1 \); we then use those two equations to force the real and imaginary parts of \( c_1 \) to zero.
So let's sample some points of the complex valued \( \alpha^{-1} \) superfunction along the dotted line from the previous posts, so as to get an approximation for \( c_1 \). At each sample point, we have an equation in terms of Jay's slog and the two unknowns, \( a_1 \) and \( \overline{a_1} \). We then combine these equations to get an approximation for \( \Re(c_1) \) and \( \Im(c_1) \), which we want to force to zero.
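The extraction of \( c_1 \) from sample points can be sketched numerically. The following Python sketch invents a 1-cyclic function with known \( b_n \) and \( c_n \) coefficients (the values are hypothetical), samples it at equally spaced points over one period, and recovers \( c_1 \) from the orthogonality relation \( c_m=\int_0^1\theta_s(z)\exp(2m\pi iz)\,dz \), approximated by a discrete mean.

```python
import numpy as np

# Invented coefficients for the demo: b_0, b_1 and the "undesirable" c_1
b = [0.2 + 0.1j, 0.05 - 0.03j]
c = [0.04 + 0.07j]

N = 64
z = np.arange(N) / N   # equally spaced samples over one period
theta = (sum(bn * np.exp(2j * np.pi * n * z) for n, bn in enumerate(b))
         + sum(cn * np.exp(-2j * np.pi * n * z) for n, cn in enumerate(c, start=1)))

# Orthogonality: multiplying by e^{+2 pi i z} and averaging over one period
# kills every b_n term and every c_n term except c_1.
c1_est = np.mean(theta * np.exp(2j * np.pi * z))
print(np.allclose(c1_est, c[0]))   # True: recovers Re(c_1) and Im(c_1)
```

In the actual problem the samples of \( \theta_s \) would come from evaluating JaySlog along the dotted line rather than from known coefficients; this sketch only shows the coefficient-extraction step.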
We could repeat the process, with four unknowns \( a_1;a_2;\overline{a_1};\overline{a_2}; \), and solve for \( c_1;c_2;\overline{c_1};\overline{c_2}=0 \)
with six unknowns \( a_1;a_2;a_3;\overline{a_1};\overline{a_2};\overline{a_3}; \), and solve for \( c_1;c_2;c_3;\overline{c_1};\overline{c_2};\overline{c_3}=0 \)
So, in general, to solve for n terms in the approximation of the real valued 1-cyclic mapping, we need to solve a \( 2n\times 2n \) matrix equation for the real and imaginary parts of \( a_1, a_2, \ldots, a_n \). Then it is just a matter of programming such a complicated indirect Fourier series, followed by solving the \( 2n\times 2n \) matrix, which is somewhat complicated due to the real and imaginary parts of each coefficient and lots of other details, but it should simply be a programming problem. Then, in theory, the result is a much more accurate version of Jay's slog:
\( \text{JaySlog}(z)+\theta_{Rj}(\text{JaySlog}(z)) \)
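The \( 2n\times 2n \) solve described above can be sketched in Python. Since I don't reproduce JaySlog here, the map from the \( a \) coefficients to the \( c \) coefficients is replaced by an invented, well-conditioned affine stand-in (`c_of_a`); in the real problem each evaluation would go through JaySlog and the Fourier-coefficient extraction. The sketch linearizes the black box by finite differences and takes one Newton step, which is exact for an affine map.

```python
import numpy as np

n = 3
rng = np.random.default_rng(0)

# Stand-in for the real computation: given the 2n real unknowns
# x = [Re a_1, Im a_1, ..., Re a_n, Im a_n], return the 2n values
# [Re c_1, Im c_1, ..., Re c_n, Im c_n].  Invented affine map for the demo.
M = np.eye(2 * n) + 0.1 * rng.normal(size=(2 * n, 2 * n))   # well-conditioned
c0 = rng.normal(size=2 * n)
def c_of_a(x):
    return M @ x + c0

# Estimate the 2n x 2n Jacobian by finite differences, then solve c(a) = 0.
x = np.zeros(2 * n)
eps = 1e-4                       # fine for an affine map; nonlinear maps need care
base = c_of_a(x)
J = np.empty((2 * n, 2 * n))
for k in range(2 * n):
    e = np.zeros(2 * n)
    e[k] = eps
    J[:, k] = (c_of_a(x + e) - base) / eps
x = x - np.linalg.solve(J, base)   # one Newton step (exact here)

print(np.allclose(c_of_a(x), 0))   # the first n c-terms forced to zero: True
a = x[0::2] + 1j * x[1::2]         # recovered complex coefficients a_1..a_n
```

Working with the \( 2n \) real unknowns \( \Re(a_k),\Im(a_k) \) rather than \( a_k,\overline{a_k} \) matches the equivalent real/imaginary formulation above and keeps the matrix real.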
More later...
- Sheldon

