Improving convergence of Andrew's slog
Hmm, it would seem that my accelerated solution produces much better results than I had previously anticipated.

First, we have to remember that I've made an unproven (but very probably valid) assumption: that Andrew's matrix solution converges on a solution with a logarithmic singularity at the upper and lower primary fixed points (at approximately \( 0.318 \pm 1.337i \)), allowing acceleration as I've described.
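For anyone who wants to reproduce the unaccelerated solution, here is a minimal sketch of the truncated linear system that Andrew's method boils down to: impose the Abel equation \( \mathrm{slog}(e^z) = \mathrm{slog}(z) + 1 \) coefficient-by-coefficient. The function name is my own, and I use double precision and a small N purely for illustration; the real computations need thousands of bits.

```python
import math
import numpy as np

def natural_slog_coeffs(N):
    """Solve the truncated N x N 'natural' system for slog coefficients a_1..a_N.

    With slog(z) = sum_n a_n z^n, the Abel equation slog(e^z) = slog(z) + 1
    gives, matching the coefficient of z^k:
        k = 0:   sum_n a_n = 1
        k >= 1:  (1/k!) * sum_n a_n * n^k - a_k = 0
    """
    M = np.zeros((N, N))
    rhs = np.zeros(N)
    rhs[0] = 1.0
    for k in range(N):             # equation index (coefficient of z^k)
        for n in range(1, N + 1):  # unknown index (a_n)
            M[k, n - 1] = n**k / math.factorial(k)
            if k == n:
                M[k, n - 1] -= 1.0  # the -a_k term
    return np.linalg.solve(M, rhs)

a = natural_slog_coeffs(20)
slog = lambda z: sum(c * z**(i + 1) for i, c in enumerate(a))
# Check the Abel equation at a point well inside the radius of
# convergence (the fixed-point modulus, about 1.374):
resid = abs(slog(math.exp(0.1)) - slog(0.1) - 1.0)
```

Even at N = 20 the truncated system satisfies the Abel equation to near machine precision at small z; the slow part is the convergence of the individual coefficients as N grows, which is exactly what the graphs below measure.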

So, to provide some very preliminary numerical evidence that this assumption is valid, I've solved a 1250x1250 unaccelerated system. For now, following Henryk's suggestion, I'll call the unaccelerated systems the "natural" solutions.

I solved the 1250x1250 natural system at 2560 and 2816 bits, in order to estimate precision. In the 2560-bit system, every term had at least 994 bits of absolute precision (i.e., relative to 1.0), and at least 993 bits of relative precision (i.e., relative to the coefficient in question). I then used the 2816-bit solution, which should have approximately 256 additional bits of precision, though for the purposes of this experiment I probably only need 50-100 bits of relative precision.
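The precision estimate works by solving the same system at two working precisions and counting how many bits of each coefficient agree. A sketch of that comparison (the function name is mine, for illustration):

```python
import math

def agreement_bits(x, y, relative=True):
    """Bits of agreement between two computations of the same coefficient.

    relative=True divides the difference by the coefficient's magnitude
    (relative precision); relative=False measures against 1.0
    (absolute precision).
    """
    diff = abs(x - y)
    if diff == 0:
        return float('inf')
    if relative:
        diff /= max(abs(x), abs(y))
    return -math.log2(diff)
```

Running this over every coefficient pair and taking the minimum gives the "at least 994 bits" style figures quoted above.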

Okay, the following graph might be a bit difficult to follow. The blue lines are the pseudo-relative error in bits of various natural solutions. The red lines are the error of various accelerated solutions.

The x-axis represents the coefficient's index (0 being the first term, the \( z^1 \) coefficient). The y-axis is the pseudo-relative error in bits, with \(-10\) representing an error of \( \frac{1}{2^{10}}=\frac{1}{1024} \).

Finally, I should explain what I mean by pseudo-relative error. The coefficients themselves decrease approximately exponentially, but they follow an approximately 7-term cycle that makes the graph of the relative error spike twice every 7 terms or so, which makes analysis difficult. Therefore, I've used the magnitude of the primary fixed points' distance from the origin as an exponential scaling factor (it's a little more complicated than that, but that's the gist of it).
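A simplified sketch of that scaling, including the factor of n appropriate for a logarithmic singularity (the actual scaling used for the graphs is a little more involved):

```python
import math

# Modulus of the primary fixed point of exp (approx. 0.318 + 1.337i),
# which sets the exponential decay rate of the coefficients.
R = abs(complex(0.3181315052, 1.3372357014))

def pseudo_rel_error_bits(approx, exact):
    """Pseudo-relative error in bits for coefficients a_1, a_2, ...

    Instead of dividing by the (oscillating) coefficient itself, scale the
    absolute error by n * R**n, the rough decay law for a logarithmic
    singularity at distance R from the origin.
    """
    out = []
    for n, (x, e) in enumerate(zip(approx, exact), start=1):
        err = abs(x - e) * n * R**n
        out.append(math.log2(err) if err > 0 else float('-inf'))
    return out
```

This removes both the exponential decay and the 7-term cycle's spikes, so the error curves come out smooth enough to compare.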

[Graph: pseudo-relative error in bits vs. coefficient index; blue lines = natural solutions, red lines = accelerated solutions]

The blue lines, from darkest to lightest (and, of course, from most error to least error): 50, 100, 200, 300, and 500 terms of the natural solution.

The red lines, from darkest to lightest: 10, 20, 30, and 40.

Yes, that's right. The 10-term accelerated solution is about as accurate as the 150-term natural solution. The 20-term accelerated solution is about as accurate as the 500-term natural solution. The 30- and 40-term solutions don't look much different from each other, but don't be fooled: that's due to inaccuracies in the 1250-term natural solution.

I hope this provides strong numerical evidence that the natural solution converges, very slowly, on the accelerated solution. If you don't agree, the following graph will be somewhat pointless.

The following graph uses a 1200-term accelerated solution as its baseline for determining pseudo-relative error. The additional (lightest) blue line is for the 1250-term natural solution:

[Graph: pseudo-relative error in bits vs. coefficient index, measured against the 1200-term accelerated baseline; lightest blue line = 1250-term natural solution]

Assuming you agree that the acceleration technique is valid, you can see that the 20-term accelerated solution is more accurate than the 500-term natural solution, and the 30-term accelerated solution is more accurate than the 1250-term natural solution!

So how accurate is the 1200-term accelerated solution? I have no idea. If you take the square root of the number of terms in the various natural solutions, you get a number approximately equal to the size of a comparable accelerated system. If this trend holds, then in theory the 1200-term solution is more accurate than a one-million-term natural solution. However, I don't have enough data points to make so bold a claim. I'll hedge my bets and assume it's more accurate than a 100,000-term natural solution.
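The square-root trend can be checked against the numbers read off the graphs (the pairings are approximate, so treat this as illustrative arithmetic, not a proof):

```python
import math

# (accelerated terms, comparable natural terms), read off the graphs above
pairs = [(10, 150), (20, 500)]
for acc, nat in pairs:
    # sqrt of the natural size lands close to the accelerated size
    print(acc, '~', round(math.sqrt(nat), 1))

# Extrapolating the trend: a 1200-term accelerated solution would be
# comparable to a natural system of roughly 1200^2 terms
print(1200**2)
```

Squaring 1200 gives 1.44 million, which is where the one-million-term comparison comes from; the 100,000-term figure is the more conservative hedge.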

PS: Sorry I don't have labels on the graphs themselves. I'll add those at some point, once I figure out how to do it within SAGE.

Edit: Updated graphs, to reflect a correction to the exponential scaling I was using. I forgot a factor of n, where n is the index, because these are logarithmic singularities. The graphs now correctly show errors in the 0-bit range for the natural solutions, i.e., pseudo-relative errors as large as the terms themselves.
~ Jay Daniel Fox
RE: Improving convergence of Andrew's slog - by jaydfox - 12/01/2007, 02:11 AM