01/08/2019, 07:04 PM
(01/08/2019, 06:14 PM)sheldonison Wrote: In Jay's slog matrix, if one replaces Jay's speedup with the exact version via Kneser, but only for terms not in Jay's slog matrix computation, then the only solution left for Jay's matrix to find is Kneser's slog, right? Of course, this assumes infinite precision. Wouldn't 1500 bits approximate infinite precision? I think some more experiments would be required here. As I recall, Jay's 100-term matrix needs about 67 digits of computation precision to get 16 or 17 decimal digits of result accuracy. I didn't experiment with larger matrices. But let's say I used fatou.gp to feed in a speedup for a 900-term series for x^101 .. x^1000, accurate to 150 decimal digits or so. Then how many digits matching Kneser could I get out of Jay's slog matrix for a 100x100 matrix? And how much computation precision would I need?
Hi Sheldon,
I have to admit, I haven't looked at your solution in years, and even then, I didn't fully understand it. I don't want to bias my work towards yours; I would rather let it evolve on its own and see where it takes me. Please don't take that the wrong way, it's nothing personal, but I will probably wait a while before I make another attempt at understanding your solution. Part of it is also the quest for knowledge: knowledge like this feels more "earned" when it is derived through hard work and perseverance.
I did something similar when I was working on my Kneser solution (which probably used a very different method of solving the Riemann mapping than yours does: convergence was painfully slow for my method, while yours seems to converge very quickly). I didn't want to bias my Kneser solution towards matching my accelerated matrix solution for the Abel function, so I deliberately initialized my data with very naive approximations of the sexp function. I've lost my Kneser code, but I lost it once before and recreated it, so I'm sure I can do it again.
Regarding the very high internal precision and relatively low accuracy of my solution: I don't think it's necessarily the result of a theta-function perturbation. I think it's simply an artifact of slow convergence. If you solve the (unaccelerated) Abel matrix for a simple iteration like 2x+1 (whose Abel function has the exact closed form log_2(x+1)), you see a similar situation: high internal precision, but low accuracy. In the 2x+1 example, the solution appears to converge on the exact solution. My hope (as yet unproven) is that the Abel matrix solution for the slog, whether accelerated or not, likewise converges on the "exact" solution, i.e., the Kneser solution.
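For anyone who wants to reproduce the 2x+1 experiment, here is a minimal sketch of the unaccelerated Abel-matrix setup (my own illustrative code, not the original forum code; the function name abel_matrix_coeffs and the choice of exact rational arithmetic are my assumptions). Writing alpha(x) = sum a_k x^k with alpha(0) = 0 and matching the coefficients of x^j (j = 0..n-1) in alpha(2x+1) - alpha(x) = 1 gives an n x n integer linear system. Its solution should drift toward the exact coefficients a_k = (-1)^(k+1)/(k ln 2) of log_2(x+1) as n grows; solving over the rationals removes round-off from the picture so only the truncation error remains:

```python
from fractions import Fraction
from math import comb, log

def abel_matrix_coeffs(n):
    """Solve the truncated Abel-matrix system for f(x) = 2x + 1.

    Unknowns a_1..a_n are Taylor coefficients of the Abel function
    alpha(x) = sum_{k>=1} a_k x^k with alpha(0) = 0.  Matching the
    coefficient of x^j (j = 0..n-1) in alpha(2x+1) - alpha(x) = 1
    gives an n x n integer system, solved here exactly with Fractions.
    """
    # [x^j] (2x+1)^k = C(k,j) * 2^j; the -alpha(x) term subtracts a_j.
    M = [[Fraction(comb(k, j) * 2**j - (1 if j == k else 0))
          for k in range(1, n + 1)]
         for j in range(n)]
    b = [Fraction(1)] + [Fraction(0)] * (n - 1)  # RHS of the Abel equation
    # Plain Gaussian elimination with partial pivoting (exact rationals).
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            b[r] -= f * b[col]
            for c in range(col, n):
                M[r][c] -= f * M[col][c]
    a = [Fraction(0)] * n
    for r in range(n - 1, -1, -1):
        s = b[r] - sum(M[r][c] * a[c] for c in range(r + 1, n))
        a[r] = s / M[r][r]
    return a

# Watch a_1 approach the exact value 1/ln(2) as the matrix grows.
exact_a1 = 1 / log(2)
for n in (2, 4, 8):
    a = abel_matrix_coeffs(n)
    print(n, float(a[0]), abs(float(a[0]) - exact_a1))
```

For n = 3 the exact truncated solution works out to a = (10/7, -4/7, 1/7), and a_1 = 10/7 already approximates 1/ln(2) to about two digits; larger n should tighten the match, illustrating the slow-but-apparent convergence described above.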
~ Jay Daniel Fox