01/29/2019, 09:15 PM
(This post was last modified: 01/29/2019, 09:30 PM by sheldonison.)
I added a critical clarification on how to calculate \( \theta_s(z) \) in my previous post.
\( \theta_s(z)=\text{KneserSlog}(\alpha^{-1}(z))-z;\;\; \) plugging in the complex valued superfunction
\( \theta_s(z)\approx\text{JaySlog}(\alpha^{-1}(z))+\theta_{Rj}\left(\text{JaySlog}(\alpha^{-1}(z))\right)-z;\;\; \) approximation for Kneser's slog from Jay's slog; from above
So this gives us an equation for \( \theta_s \) in terms of Jay's slog and the unknown 1-cyclic mapping \( \theta_{Rj} \). We sample the superfunction \( \alpha^{-1}(z) \) at a set of equally spaced points, and then use those sample points to take the Fourier transform of \( \theta_s \). Then we drive the undesired terms of the \( \theta_s \) Fourier transform to zero, expressed in terms of the unknown \( \theta_{Rj} \) terms, by solving a matrix of simultaneous equations.
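To make the linear-solve step concrete, here is a minimal toy sketch (in numpy rather than pari-gp, and with entirely illustrative names and functions, not the actual tetration code): since the discrete Fourier transform is linear, the DFT bins of a sampled 1-periodic function plus an unknown correction series are linear in the correction's coefficients, so the requirement that selected "undesired" bins vanish becomes a matrix simultaneous equation.

```python
import numpy as np

# Toy illustration only: drive selected DFT bins of a sampled 1-periodic
# function to zero by solving a linear system for the coefficients of a
# 1-cyclic correction series. This stands in for the theta_Rj solve; it is
# NOT the actual Kneser/Jay slog computation.

N = 64                      # number of equally spaced sample points
x = np.arange(N) / N        # sample points on [0, 1)

# "Known" part (stand-in for the sampled theta_s): a smooth 1-periodic function
theta_s = np.cos(2*np.pi*x) + 0.3*np.sin(4*np.pi*x) + 0.1*np.cos(6*np.pi*x)

undesired = [1, 2, 3]       # DFT bins we want driven to zero

# Correction series: sum_k c_k * exp(2*pi*i*k*x) with unknown c_k.
# The DFT is linear, so each undesired bin of the corrected function is
# linear in c; build the matrix column by column from the basis functions.
A = np.zeros((len(undesired), len(undesired)), dtype=complex)
for j, k in enumerate(undesired):
    basis = np.exp(2j*np.pi*k*x)
    A[:, j] = np.fft.fft(basis)[undesired]

b = -np.fft.fft(theta_s)[undesired]
c = np.linalg.solve(A, b)   # coefficients that zero the undesired bins

correction = sum(ck*np.exp(2j*np.pi*k*x) for ck, k in zip(c, undesired))
residual = np.abs(np.fft.fft(theta_s + correction)[undesired]).max()
print(residual)             # tiny: the undesired bins are eliminated
```

In the actual construction the matrix couples the \( \theta_{Rj} \) unknowns to the \( \theta_s \) Fourier terms through the sampled superfunction, so the system is not diagonal as in this toy, but the shape of the solve is the same.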
The programming is complicated, because we need the complex valued Schroder function and its inverse, plus lots of other details. I will post the pari-gp code, once I finish cleaning it up a little more, along with some results. Anyway, remarkably this all works and generates superbly accurate results for Kneser's slog in terms of Jay's slog and a 1-cyclic real valued Fourier series which is calculated via a matrix of simultaneous equations in terms of the Schroder function. The results are generated without starting with Kneser's slog, but match Kneser's slog much better than one would have naively predicted, since the \( \alpha^{-1}(z) \) samples can be chosen to take maximum advantage of the peculiar peanut shaped convergence region of Jay's slog. More later...
- Sheldon

