Improving convergence of Andrew's slog
I'll post code once I've had a chance to clean it up, but I've been really busy the past couple of weeks. Here are some coefficients I calculated two weeks ago; honestly, today is the first chance I've had to look at them and run a quick sanity test. The first few (omitting the constant term):

Code:
0.9159460564995333939479691812942268437
0.2493545986721730438836546471260900921
-0.1104647597964313574537187937346440024
-0.09393625509985870822446364031599412950
0.01000323329323155623460883844375422560
0.03589792159454311060893719837092056919
0.006573401099605069030878509226706541076
-0.01230685951818438834898435047586546818
-0.006389802569157469189015008370967404941
0.003273589822817257105090732708082107253
0.003769202952828280970722223160084597794
-0.0002802170195369746576871369468148542093
-0.001775106557196463508225358976394413676
-0.0004279699575246649277874304606275712555
0.0006797232612443379504312897205488076911
0.0004127926181657687663755328531761878708
-0.0001865977837752200311706517694607367697
-0.0002535491984167313806722016634301260175
0.000007474329223085892546373262727349306521
0.0001231669079299400651864105820209693007

And I've attached all 700 coefficients (slog_700_accel.txt). Beyond the 700th term, just use the coefficients you get by adding the basic logarithms at the fixed points.
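For concreteness, here's how you'd evaluate the truncated series in Python. This is just a sketch, not the code I'll be posting: it uses only the twenty coefficients quoted above (rounded to double precision), so the accuracy is correspondingly limited; read all 700 from the attachment for real use.

```python
# First twenty coefficients of the accelerated slog series (constant term omitted),
# rounded to double precision from the values quoted above.
coeffs = [
    0.9159460564995334, 0.24935459867217304, -0.11046475979643136,
    -0.09393625509985871, 0.010003233293231556, 0.03589792159454311,
    0.006573401099605069, -0.012306859518184388, -0.006389802569157469,
    0.0032735898228172571, 0.003769202952828281, -0.00028021701953697466,
    -0.0017751065571964635, -0.00042796995752466493, 0.000679723261244338,
    0.00041279261816576877, -0.00018659778377522003, -0.00025354919841673138,
    0.0000074743292230858925, 0.00012316690792994007,
]

def slog(z, c=coeffs, const=-1.0):
    """Truncated accelerated slog series about the origin.
    The constant term is omitted from the coefficient list, so it is
    supplied separately here, chosen so that slog(0) = -1."""
    # Horner evaluation of const + c[0]*z + c[1]*z**2 + ...
    acc = 0.0
    for a in reversed(c):
        acc = acc * z + a
    return acc * z + const
```

A quick sanity check: since slog(0) = -1 and slog(e^z) = slog(z) + 1, we should get slog(1) ≈ 0, and even the twenty-term truncation gets within about 1e-5 of that.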

I've also attached the coefficients for the "residue", for the first 700 terms (slog_700_residue.txt). To use these, simply calculate the logarithms for the two primary fixed points, then add the residue. Again, please note that I've omitted the constant term.
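Roughly, in Python, the recipe looks like the following sketch. To be clear, this is not my actual code: the singularity form log(z - L)/log(L) at each primary fixed point L is the simplest consistent choice, the Newton solver and function names are mine for illustration, and the branch conventions shown are the naive principal-branch ones, which you'll need to adjust when picking branches carefully.

```python
import cmath

def primary_fixed_point(guess=0.3 + 1.3j, tol=1e-12):
    """Newton's method for the upper primary fixed point L of e^z = z."""
    z = guess
    for _ in range(50):
        f = cmath.exp(z) - z
        z -= f / (cmath.exp(z) - 1)
        if abs(f) < tol:
            break
    return z

L = primary_fixed_point()  # roughly 0.318 + 1.337i

def log_terms(z):
    """Sum of the basic logarithms at the two conjugate primary fixed points.
    Principal branches shown; choose branches consistently in practice.
    By conjugate symmetry the sum is real for real z."""
    return (cmath.log(z - L) / cmath.log(L)
            + cmath.log(z - L.conjugate()) / cmath.log(L.conjugate()))

def slog_via_residue(z, residue_coeffs, const=0.0):
    """residue_coeffs: Taylor coefficients (constant term omitted) read from
    slog_700_residue.txt; const is tuned so that, e.g., slog(0) = -1."""
    r = sum(c * z ** (k + 1) for k, c in enumerate(residue_coeffs))
    return log_terms(z) + r + const
```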

Using the residue negates the need for more than 700 terms. Picking branches must be done carefully, obviously; just adjust your constant term to ensure that slog(0) is -1, or whatever value you want. The residue is only valid inside a radius of convergence limited by the primary fixed points, though the root test for these coefficients indicates that you can get decent precision a little further out, as long as you stay away from the singularities themselves. At any rate, it's best to use iterated logarithms or exponentials to find the image of your point closest to the origin, evaluate at that point, then iterate back to where you started (adding or subtracting an integer as necessary).
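The iterate-then-shift trick looks like this in Python (again just a sketch; the radius and branch handling are simplified, and the function name is mine):

```python
import cmath

def pull_inward(z, radius=1.0):
    """Iterate logarithms until z lands inside the disk of convergence,
    counting the steps; then slog(original z) = slog(new z) + count.
    (For points very near zero, iterate exponentials and subtract instead.)"""
    count = 0
    while abs(z) > radius:
        z = cmath.log(z)  # principal branch; pick branches carefully near the cut
        count += 1
    return z, count

# Example: slog(100) = slog(w) + n, where w lies inside the unit disk:
w, n = pull_inward(100.0)
```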

As far as accuracy goes, it's only about 20 decimal digits or so per coefficient, at least for the first few hundred terms, but the total accuracy is much better, probably 40+ digits inside the unit circle centered at the origin. I haven't had time to really dig in and investigate it thoroughly yet, however.


Attached Files
.txt   slog_700_accel.txt (Size: 54.5 KB / Downloads: 1,541)
.txt   slog_700_residue.txt (Size: 53.5 KB / Downloads: 1,370)
~ Jay Daniel Fox
Messages In This Thread
RE: Improving convergence of Andrew's slog - by jaydfox - 10/10/2007, 08:25 AM
