Revisiting my accelerated slog solution using Abel matrix inversion
#10
(01/17/2019, 06:44 PM)sheldonison Wrote: I have no idea why or how Jay's slog chooses a different less well behaved theta mapping.  I imagine the theta mapping in Jay's slog gradually becomes more and more well behaved as the number of terms used in the Matrix calculation approaches infinity.

Wow, thanks for the analysis!  I still haven't attempted to get into the details of your slog solution and how you calculate the Kneser mapping, because I'm still trying to understand my solution.  I mean, I understand it at a high level, but I keep seeing patterns and trying to tie them together.

But I think I have an answer to your question above, and unfortunately, it's not a very exciting answer.  Brace yourself:

The first term in my Taylor series is inaccurate.

I know, it's not very exciting.  But here's the thing.  My accelerated slog solution is internally consistent to hundreds of decimal places.  But the first term of the Taylor series has an error on the order of \( O\left(2^{-7 \log_{2} n}\right) = O\left(n^{-7}\right) \), for a matrix of size \( n \times n \), based on my analysis of matrix solutions from 2x2 up to 1024x1024.  Indeed, see the attached graph, showing log_2 of the matrix size on the x-axis and log_2 of the error on the y-axis.

[Attached graph: log_2(matrix size) on the x-axis vs. log_2(error) on the y-axis]

Red and blue indicate a positive or negative error (I made that image a couple of weeks ago, so I don't remember now which color is which).  As you can see, the error oscillates, and the black line is an estimate of the maximum error, with a slope very close to -7, hitting 72 bits at log_2(1024) = 10.  Presumably, then, the first term in my 4096 solution has an error of about 2^(-86).
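As a sanity check on those numbers, here is a small Python sketch (purely illustrative; the function name and anchoring constants are my own) of the empirical error model: ~7 bits gained per doubling of the matrix size, anchored at ~72 bits of accuracy for the 1024x1024 solution.

```python
import math

def error_bits(n, ref_n=1024, ref_bits=72, slope=7):
    # Empirical model from the attached graph: each doubling of the
    # matrix size n gains ~7 bits, anchored at ~72 bits for n = 1024.
    return ref_bits + slope * (math.log2(n) - math.log2(ref_n))

print(error_bits(1024))  # 72.0
print(error_bits(4096))  # 86.0, i.e. an error of about 2^(-86)
```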

At any rate, let's do the thought experiment.  The solution is internally consistent to hundreds of decimal places near z=0 and z=1, but the slope at z=0 and z=1 is off by about 10^-27, and hence the slope at sexp(0) and sexp(1) will be off by a similar amount.  This means that the sexp (i.e., the reversion of my slog series) will be off by a (smooth) 1-cyclic function that is equal to 0 at the integers, but whose slope there has a magnitude of about 10^-27.  A simple sine function would do the trick.
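To make that concrete, here is a minimal sketch of such a 1-cyclic correction (my own illustration, not the actual theta mapping): it vanishes at every integer, and its derivative at the integers has magnitude eps, which I set to the 10^-27 slope error mentioned above.

```python
import math

def theta(z, eps=1e-27):
    # A minimal 1-cyclic correction: zero at every integer, with
    # derivative theta'(z) = eps * cos(2*pi*z), so the slope at the
    # integers has magnitude eps (~10^-27 in the discussion above).
    return (eps / (2 * math.pi)) * math.sin(2 * math.pi * z)
```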

But the second and third and fourth terms are also inaccurate, so of course there will be additional harmonics.

I know it's a very boring reason why there's a theta mapping, but there it is.

For me, the theta mapping isn't a big deal.  It doesn't mean that this solution is "wrong".  It just means that convergence is too slow.  To get 70+ decimal digits of accuracy, i.e., 256+ bits of accuracy, I would need to solve a matrix with billions of rows and columns.  Based on my estimate of 7 bits per doubling of the matrix size (doubling the rows, so 4x the terms), I would need 2^(256/7) ~= 100 billion rows and columns.  Convergence is just too slow to get the kind of precision that you're using in your solution, even with the acceleration technique I came up with.
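The arithmetic behind that estimate can be written out as a one-liner (again just an illustrative sketch; the function name is mine):

```python
def required_size(target_bits, bits_per_doubling=7):
    # Invert the ~7-bits-per-doubling convergence estimate:
    # k doublings give k*7 bits, so the size needed is ~2^(bits/7).
    return 2 ** (target_bits / bits_per_doubling)

n = required_size(256)
print(f"{n:.3g}")  # ~1e11, i.e. on the order of 100 billion rows
```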

The good news is, the acceleration can be taken a step further, by removing the algebraic singularity that remains after removing the logarithmic singularity.  A few years ago, I determined that my accelerated solution appears to be converging on an algebraic singularity in terms of \( (z-L)^{2\pi i / L} \), or rather a power series in such.  (I typically call the fixed point a in my code, but I'm using L here, since that's what you mentioned in your post.)

This makes sense when you look at the Kneser mapping, which involves a mapping from the unit disk to the Kneser region, which is itself an algebraic conformal mapping of the Schroeder mapping.  In other words, when you take the logarithm of the Schroeder mapping, multiply it by 2*pi*i/L, and then exponentiate to get the Kneser region, you are effectively mapping (z-L) to (z-L)^(2 pi i / L), so you get a power series in that algebraic term.  The algebraic singularity, with its complex periodicity, is what allows the Kneser solution to have non-trivial branches (where the value in one branch is not merely a constant difference from the value of the same point in a different branch).

When I was researching this a few years ago, I had moderate success in calculating the first algebraic term (maybe 4-8 digits of accuracy), and then using that to accelerate my solution even more.  However, I've since lost all my work from that time period (the hard disk failed), so I'm having to rebuild it from scratch.  I hope to post something in the coming weeks or months, as time permits.
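For anyone who wants to experiment, the algebraic term itself is easy to evaluate numerically.  Here is a sketch (my own, on the principal branch of the complex logarithm) using the primary fixed point L of exp, i.e. the L with e^L = L and L ~ 0.318132 + 1.337236i:

```python
import cmath

# Primary fixed point L of exp in the upper half plane: e^L = L.
L = 0.3181315052047641 + 1.3372357014306895j

def algebraic_term(z):
    # The singular term (z - L)^(2*pi*i/L), computed as
    # exp((2*pi*i/L) * log(z - L)) on the principal branch.
    # Its complex periodicity under continuation of log(z - L)
    # is what gives the Kneser solution non-trivial branches.
    return cmath.exp((2j * cmath.pi / L) * cmath.log(z - L))
```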
~ Jay Daniel Fox