Revisiting my accelerated slog solution using Abel matrix inversion
#23
I've just come across this thread again and took some time for it - time which I couldn't spare when the thread was young & fresh in 2019.

I'd never understood what "a Taylor series at 2 fixpoints" means - from Sheldon's post I now get that it simply meant subtracting from the power series of slog() the real part of the series (log(1-L)/L + log(1-L*)/L*). That's all...
So, OK, now I've got it: I could reproduce the related power series and then inspect the coefficients of the residual power series for recognizable characteristics.
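To make the effect of such a subtraction visible with a minimal toy example (Python, my own invented function f, not the slog itself): subtracting the known series of the log-singularity term by term turns slowly decaying coefficients into geometrically decaying ones, which is exactly the acceleration one hopes for.

```python
# Toy model: f(x) = -log(1-x) + 1/(1-x/2).  The log-singularity at x=1 forces
# the coefficients c_k = 1/k + 2^(-k) to decay only like 1/k.  Subtracting the
# series of -log(1-x) leaves the residual 2^(-k), which decays geometrically
# (radius of convergence 2 instead of 1).
K = 30
f_coeffs   = [1.0/k + 2.0**(-k) for k in range(1, K)]   # coefficients of x^k, k>=1
log_coeffs = [1.0/k             for k in range(1, K)]   # -log(1-x) = sum x^k/k
residual   = [a - b for a, b in zip(f_coeffs, log_coeffs)]

# residual[k-1] should be (close to) 2^(-k): geometric decay
for k in range(1, K):
    assert abs(residual[k-1] - 2.0**(-k)) < 1e-12
print(residual[:4])
```

The same mechanism is at work below, only with the logarithmic singularities of slog() at the complex fixpoints of exp.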

After this I asked myself: if I reduce the power series of slog() by the real part of the series for the *primary* fixpoint, what happens if I also reduce it by the equivalent series for the *second* fixpoint, the *third*, and, of course, by the sum over *all* fixpoints?

We have the nice Pari/GP LambertW() function by Mike3, which also accepts the branch index as a parameter, and I produced the residual of the slog() power series after removing the sum of the real parts of all (well: the first N=2048 ...) power series 2*real(log(x-L_k)/L_k), accumulated into the power series SU_N(x) by use of the first N fixpoints.
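For readers without Mike3's branch-enabled LambertW(): the fixpoints satisfy exp(L) = L, and (as in the code below) L_k = exp(-LambertW(-1, k)). A plain Newton iteration on exp(z) - z recovers the primary fixpoint just as well; this Python stand-in is my own sketch, not the Pari/GP routine from the post:

```python
import cmath

def exp_fixpoint(start):
    # Newton iteration for a root of f(z) = exp(z) - z; f'(z) = exp(z) - 1
    z = start
    for _ in range(50):
        z = z - (cmath.exp(z) - z) / (cmath.exp(z) - 1)
    return z

# primary fixpoint in the upper halfplane, approx. 0.31813 + 1.33724i
L = exp_fixpoint(0.3 + 1.3j)
assert abs(cmath.exp(L) - L) < 1e-12   # exp(L) = L indeed
print(L)
```

Starting values near the other branches of Lambert-W would give the higher fixpoints L_k in the same way.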

I think this gives a nice residual series res_N(x) = slog(x) - SU_N(x); see my first 60 coefficients below.

-  Note that the limits of the coefficients of SU_N(x) seem to approximate simple rational numbers (but I couldn't detect their construction rule so far).

-  The last column of the table is a logarithm-like rescaling of the res(x) coefficients by means of the asinh() function. It attempts to mark a constant (here, for example, 1.64) which should be the radius of convergence, insofar as the resulting rescaled coefficients stay bounded within a certain small interval.
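The idea behind that diagnostic can be seen with toy coefficients (a Python sketch with an invented geometric series, not the slog data): when the trial constant rho equals the true radius of convergence, asinh(0.5/(c_k*rho^k)) stays bounded; when rho is too small, the values drift off.

```python
import math

def a4(coeffs, rho):
    # the rescaling of the last table column: reciprocal, then asinh(0.5*...)
    return [math.asinh(0.5 / (c * rho**k)) for k, c in enumerate(coeffs)]

r = 2.0
coeffs = [r**(-k) for k in range(40)]   # toy series with radius of convergence 2

good = a4(coeffs, 2.0)   # rho = r: every value equals asinh(0.5), bounded
bad  = a4(coeffs, 1.5)   # rho < r: c_k*rho^k -> 0, values grow without bound

assert max(good) - min(good) < 1e-12
assert bad[-1] > bad[0] + 5
```

The asinh is used instead of a plain logarithm because it keeps the sign of the residual coefficient while still scaling like a logarithm for large arguments.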

PHP Code:
                                
         A1=slog(x)                A2=SU_N(x)                A3=res(x)=slog(x)-SU_N(x)   A4=asinh(0.5*res(1.64 x))   \\ for exact expression of A4 see below
-----------------------------------------------------------------------------------------------------------------
x^0*      0                        -6.01362838850             6.01362838850               0.0830489795521
x^1*      0.915946056500            0.999975272452           -0.0840292159527            -2.00036627431
x^2*      0.249354598672            0.249999999999           -0.000645401327096          -6.35624875167
x^3*     -0.110464759796           -0.111111111111            0.000646351314663           5.86008677396
x^4*     -0.0939362550999          -0.0937500000000          -0.000186255099859          -6.60961016732
x^5*      0.0100032332932           0.0100000000000           0.00000323329323156        10.1685281561
x^6*      0.0358979215945           0.0358796296296           0.0000182919649135          7.94087134595
x^7*      0.00657340109961          0.00657596371882         -0.00000256261921579        -9.41160700518
x^8*     -0.0123068595182          -0.0123046875000          -0.00000217201818439        -9.08228386197
x^9*     -0.00638980256916         -0.00639023613561          0.000000433566449037       10.1989545924
x^10*     0.00327358982282          0.00327323082011          0.000000359002711437        9.89297347993
x^11*     0.00376920295283          0.00376926734881         -0.0000000643959845224     -11.1165558979
x^12*    -0.000280217019537        -0.000280141200642        -0.0000000758188951953     -10.4585633972
x^13*    -0.00177510655720         -0.00177511385911          0.00000000730191846398     12.3040775762
x^14*    -0.000427969957525        -0.000427988270449         0.0000000183129248088      10.8899113673
x^15*     0.000679723261244         0.000679722859771         0.000000000401472977297    14.2154372616
x^16*     0.000412792618166         0.000412797297023        -0.00000000467885679836    -11.2650721615
x^17*    -0.000186597783775        -0.000186597001042        -0.000000000782733541762   -12.5583926709
x^18*    -0.000253549198417        -0.000253550392217         0.00000000119380072367     11.6415913810
x^19*     0.00000747432922309       0.00000747390655756       0.000000000422665522288    12.1852113823
x^20*     0.000123166907930         0.000123167193596        -0.000000000285665608851   -12.0822743523
x^21*     0.0000359226636883        0.0000359228452628       -1.81574577162E-10         -12.0407335746
x^22*    -0.0000477147691069       -0.0000477148257309        5.66239713926E-11          12.7112713772
x^23*    -0.0000327288948796       -0.0000327289645648        6.96852050893E-11          12.0090195241
x^24*     0.0000125870328510        0.0000125870377672       -4.91620353562E-12         -14.1657747183
x^25*     0.0000200057062797        0.0000200057307739       -2.44941717121E-11         -12.0651798701
x^26*     0.000000328421886987      0.000000328425308225     -3.42123869912E-12         -13.5389161500
x^27*    -0.00000969753198878      -0.00000969753980480       7.81601612587E-12          12.2180476085
x^28*    -0.00000331044768235      -0.00000331045059455       2.91220724068E-12          12.7106150489
x^29*     0.00000370224855088       0.00000370225071381      -2.16293012462E-12         -12.5133662611
x^30*     0.00000284048701230       0.00000284048856000      -1.54770233960E-12         -12.3533623913
x^31*    -0.000000907373128721     -0.000000907373571659      4.42938343139E-13          13.1097623179
x^32*    -0.00000170542583695      -0.00000170542651692       6.79973549150E-13          12.1864427571
x^33*    -0.000000105766930521     -0.000000105766923618     -6.90332855510E-15         -16.2817967197
x^34*     0.000000814959687357      0.000000814959947939     -2.60582712173E-13         -12.1561838486
x^35*     0.000000319378912383      0.000000319378972565     -6.01823151704E-14         -13.1270293894
x^36*    -0.000000302803952745     -0.000000302804039175      8.64299142248E-14          12.2703778435
x^37*    -0.000000259113609213     -0.000000259113654049      4.48362263787E-14          12.4319990098
x^38*     0.0000000668680950706     0.0000000668681180191    -2.29484578713E-14         -12.6070684661
x^39*     0.000000152047985378      0.000000152048008960     -2.35822623706E-14         -12.0851281284
x^40*     0.0000000163010167610     0.0000000163010135384     3.22266788681E-15          13.5807171687
x^41*    -0.0000000711928410564    -0.0000000711928513879     1.03314351218E-14          11.9210392786
x^42*    -0.0000000316203136559    -0.0000000316203150032     1.34721633503E-15          13.4634937482
x^43*     0.0000000255268411676     0.0000000255268450390    -3.87137271321E-15         -11.9132288456
x^44*     0.0000000243757334112     0.0000000243757349738    -1.56262822101E-15         -12.3257725934
x^45*    -0.00000000488385551816   -0.00000000488385670824    1.19008703699E-15          12.1034190675
x^46*    -0.0000000139492357905    -0.0000000139492367539     9.63479126480E-16          11.8199537261
x^47*    -0.00000000212263049456   -0.00000000212263025320   -2.41354209894E-16         -12.7095427025
x^48*     0.00000000637533532526    0.00000000637533579420   -4.68945196339E-16         -11.5506261564
x^49*     0.00000000318068006715    0.00000000318068009120   -2.40456264779E-17         -14.0264626951
x^50*    -0.00000000218960125797   -0.00000000218960145023    1.92260232881E-16          11.4528697483
x^51*    -0.00000000234030729101   -0.00000000234030735336    6.23544593291E-17          12.0841881498
x^52*     0.000000000341361331440   0.000000000341361396652  -6.52121328220E-17         -11.5446815607
x^53*     0.00000000130469052711    0.00000000130469057172   -4.46064141569E-17         -11.4297531928
x^54*     0.000000000257242243110   0.000000000257242227261   1.58491746740E-17          11.9698171869
x^55*    -0.000000000580366971924  -0.000000000580366995593   2.36690532344E-17          11.0740699490
x^56*    -0.000000000323237411750  -0.000000000323237411424  -3.26330860817E-19         -14.8633856329
x^57*     1.89449577736E-10         1.89449588140E-10        -1.04038597499E-17         -10.9066690225
x^58*     2.27877205677E-10         2.27877208438E-10        -2.76018307444E-18         -11.7388526386
x^59*    -2.14327446696E-11        -2.14327484604E-11         3.79079075152E-18          10.9268787670

update: I've just found that the power series of SU_N(x) for N -> infinity seems to converge to that of (a slight shift of) H(x) = log(1 - x*exp(-x))/x.
The first coefficient of SU_N(), however, possibly diverges to infinity or converges to some finite value with increasing N, and the observation of the second coefficient is somewhat inconclusive; however, I'd guess that it converges to 1.
It should be mentioned that I found this relationship via lookup of the integer approximations (achieved by some standard trial & error rescalings of the SU_N() coefficients by factorials and reciprocals) in the OEIS, finding http://oeis.org/A009306
So instead of computing finite sums SU_N(x) we can simply use the coefficients h_k of H(x), rescaled as h_k*k/(k-1) (for k>1).
H(x) is here the power series of the sum of all 2*real(log(x-L_k)/L_k) for k=0 to infinity, and perhaps a better, more natural candidate for the reduction of the slog() power series.
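The simple rationals which the A2 column approaches can indeed be checked exactly. The following Python sketch is my own re-derivation (not the Pari/GP code): expanding log(1 - x*exp(-x))/x = -sum_{m>=1} x^(m-1)*exp(-m*x)/m gives the coefficients h_n in closed form, and applying the rescaling factor (n+1)/n (which is r/(r-1) in the 1-based Pari/GP indexing) reproduces the table's limit values.

```python
from fractions import Fraction
from math import factorial

def H_coeff(n):
    # coefficient of x^n in H(x) = log(1 - x*exp(-x))/x:
    # h_n = -sum_{m=1}^{n+1} (-m)^(n-m+1) / ((n-m+1)! * m)
    h = Fraction(0)
    for m in range(1, n + 2):
        j = n - (m - 1)                  # take the x^j term of exp(-m*x)
        h -= Fraction((-m) ** j, factorial(j) * m)
    return h

def A2_coeff(n):
    # the rescaling from the update: factor (n+1)/n for n >= 1
    return H_coeff(n) * Fraction(n + 1, n) if n >= 1 else H_coeff(n)

assert A2_coeff(1) == 1                    # matches the guessed limit 1
assert A2_coeff(2) == Fraction(1, 4)       #  0.25        in the A2 column
assert A2_coeff(3) == Fraction(-1, 9)      # -0.111111...
assert A2_coeff(4) == Fraction(-3, 32)     # -0.09375
assert A2_coeff(5) == Fraction(1, 100)     #  0.0100...
```

So the rational limits observed in the table (1/4, -1/9, -3/32, 1/100, ...) fall out of H(x) exactly.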

update 2: I've just put a question focused on the coincidence of SU_N(x) and H(x) on stackexchange: https://math.stackexchange.com/questions...oefficient

update 3: a much better, more coherent and more focused presentation of this is now in the already mentioned stackexchange post: https://math.stackexchange.com/questions...oefficient
end-update

I don't have much more to say at the moment, but it is perhaps an additional, and maybe even fruitful, observation.

- - -
Here the Pari/GP-code:
PHP Code:
\ps 60     \\ powerseries expansion up to 60 terms
N = 2048
SU_N = O(x^60);
for(k=0, N,
     L_K = exp(-LambertW(-1,k));      \\ branch-enabled version of LambertW() by Mike3
     SU_N += 2*real(log(x-L_K)/L_K);
   );

\\ --------------------------------------------------------------------------

A1 = VE(JF700,60)                \\ first 60 coefficients of Jay's 700x700-solution
A2 = polcoeffs(SU_N,60)~         \\ first 60 coefficients of the accumulated fixpoint-related series
A3 = A1 - A2                     \\ the residual
A4 = asinh(0.5*VR(dV(1.64,60)*A3))  \\ dV(1.64,60)*A3 rescales A3's coefficients by consecutive powers of 1.64
                                    \\ VR takes the reciprocals of the entries of its argument vector,
                                    \\ asinh(0.5*...) provides an approximate but signed logarithmic scaling for values >10 or >100

\\ update
dim  = 60
A2_a = polcoeffs(log(1-x*exp(-x))/x, dim)~
A2   = vectorv(dim, r, A2_a[r]*if(r==1, 1, r/(r-1)))
A3   = vectorv(dim, r, A1[r]-A2[r])   \\ the better residual
\\ end update

The slog-coefficients were taken from Jay's 700x700 matrix solution posted in this thread.
Gottfried Helms, Kassel
Messages In This Thread
Analysis of Jay's slog vs Kneser - by sheldonison - 01/17/2019, 06:44 PM
RE: Analysis of Jay's slog vs Kneser - by jaydfox - 01/18/2019, 06:35 AM
RE: Analysis of Jay's slog vs Kneser - by jaydfox - 01/18/2019, 06:42 AM
RE: Analysis of Jay's slog vs Kneser - by jaydfox - 01/18/2019, 06:17 PM
RE: Revisiting my accelerated slog solution using Abel matrix inversion - by Gottfried - 05/16/2021, 11:51 AM
