Computing Andrew's slog solution
#1
I've been tinkering with Andrew's slog solution, since I think it probably has a much better claim to uniqueness for base e than my solution.

Anyway, I've only tested solutions for the 50-, 60-, 70-, and 80-term truncations. I've noticed that I can speed up the initial matrix solving by about 10-15% if I avoid the rationals created by dividing through by the factorials.

To be useful in a power series, I would have to divide those factorials back out anyway, so there's an additional (negligible) savings there as well.
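In matrix terms (a sketch, reading the construction off the gp code further down): multiplying column \( k \) of the rational matrix \( A_Q \) by \( k! \) clears the denominators and yields the integer matrix \( A_Z \), so the two solution vectors differ entrywise by exactly those factorials:

\( A_Z = A_Q\,\operatorname{diag}(1!,2!,\ldots,n!), \qquad A_Z x_Z = b \iff A_Q x_Q = b \ \text{with}\ (x_Q)_k = k!\,(x_Z)_k \)

In other words, solving the integer system directly returns the coefficients with the factorials already divided out.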

Here are some sample solver times (matsolve in gp), in seconds. The rows in this chart are the number of rows/columns in the matrix, and the columns are with and without the rationals, as well as the approximate savings:

\( \begin{array}{c|ccc}
n & \text{Rationals (s)} & \text{Integers (s)} & \text{Approx. savings (s)} \\
\hline
50 & 5.066 \pm 0.093 & 4.400 \pm 0.063 & 0.666 \\
60 & 17.325 \pm 0.150 & 14.984 \pm 0.415 & 2.341 \\
70 & 48.661 \pm 0.433 & 43.883 \pm 0.387 & 4.778 \\
80 & 123.464 \pm 0.181 & 112.218 \pm 0.474 & 11.245
\end{array} \)

I don't know how much precision is lost if you try to solve for bases with non-rational logarithms. However, for a base b where ln(b) is rational, the solver times should be only slightly longer than for base e.
~ Jay Daniel Fox
#2
jaydfox Wrote: Anyway, I've only tested solutions for the 50-, 60-, 70-, and 80-term truncations. I've noticed that I can speed up the initial matrix solving by about 10-15% if I avoid the rationals created by dividing through by the factorials.

Hm, in Maple the solving is quite fast, taking into account that it is exact, using fractional arithmetic with a symbolic \( \log(b) \). Computing the initial 100 coefficients of the series takes perhaps a minute.
#3
Well, gp would appear to be pretty slow then. I need to get a faster library...

Anyway, do you see the same speed difference in Maple? Here's my gp code:
Code:
\\ Rational system: column c is divided by c!
aQ = matrix(80, 80, r, c, (c^(r-1))/(c!) - if(c-r+1, 0, 1))
\\ Integer system: the factorial moved onto the subdiagonal instead
aZ = matrix(80, 80, r, c, (c^(r-1)) - if(c-r+1, 0, c!))
\\ Right-hand side: the first basis vector
b = vector(80, n, if(n-1, 0, 1))
\\ `##` prints the time used by the preceding command
cQ = matsolve(aQ, b~)
##
cZ = matsolve(aZ, b~)
##
~ Jay Daniel Fox
#4
jaydfox Wrote: Well, gp would appear to be pretty slow then. I need to get a faster library...

Anyway, do you see the same speed difference in Maple? Here's my gp code:
Code:
\\ Rational system: column c is divided by c!
aQ = matrix(80, 80, r, c, (c^(r-1))/(c!) - if(c-r+1, 0, 1))
\\ Integer system: the factorial moved onto the subdiagonal instead
aZ = matrix(80, 80, r, c, (c^(r-1)) - if(c-r+1, 0, c!))
\\ Right-hand side: the first basis vector
b = vector(80, n, if(n-1, 0, 1))
\\ `##` prints the time used by the preceding command
cQ = matsolve(aQ, b~)
##
cZ = matsolve(aZ, b~)
##

Oh, I was talking nonsense; it is even faster than that. However, Maple does quite a bit of caching, so I restarted before each test. For some reason it is faster with the factorial division than without: 39s with factorials and 1m 35s without. I guess it is slower to compute with really big integers than with medium-sized fractions.
#5
Wow! I finally figured out how to do this in SAGE (the documentation is almost useless!), and I was able to solve the 100x100 rational system in 7.35 seconds, and the 100x100 integer system in 4.78 seconds!

That's about two orders of magnitude faster than the gp solver. And I'm running SAGE in a VM on a laptop.

For a 150x150 system, it took 62.87 seconds for the rational system and 35.98 seconds for the integer system. The difference is far more pronounced than it was in gp.
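For reference, here's a minimal sketch of one way to gather such timings in SAGE. This is hypothetical reconstruction code, not the script behind the numbers above, and time_solve is a made-up helper (base e, so the log(b) factor drops out):
Code:
import time

def time_solve(size, rational=True):
    # Build the base-e Abel matrix in rational or integer form.
    if rational:
        # Rational form: column c carries a 1/(c+1)! factor
        m = matrix(QQ, size, size, [[(c+1)^r / factorial(c+1) - (c == r-1)
            for c in xrange(size)] for r in xrange(size)])
    else:
        # Integer form: the factorial sits on the subdiagonal instead
        m = matrix(ZZ, size, size, [[(c+1)^r - factorial(c+1) * (c == r-1)
            for c in xrange(size)] for r in xrange(size)])
    v = vector(QQ, size, [(k == 0) for k in xrange(size)])
    t0 = time.time()
    m.solve_right(v)
    return time.time() - t0

for n in (50, 100, 150):
    print "n=%3d: rational %7.2fs, integer %7.2fs" % (
        n, time_solve(n, True), time_solve(n, False))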
~ Jay Daniel Fox
#6
Wow, check this out. Using a 150-term truncation, I graphed the first 20 odd derivatives. I scaled each derivative so they would fit nicely on the same graph.

[Attached graph: the first 20 odd derivatives of the 150-term slog, each scaled to fit a single plot]

You can see the 20th odd derivative (i.e., the 39th derivative) is not quite symmetric, and it seems to backtrack a little. Up to that point, each odd derivative bottomed out a little to the right of the previous one, but the 20th odd derivative seems to bottom out a little to the left. Perhaps this is supposed to happen, but my hunch is that a few derivatives later we'll lose concavity, etc. I didn't save the data, so I can't get the next couple odd derivatives to check.

With only 50 terms, this breakdown happened several derivatives earlier (I don't have the graph; I accidentally overwrote it), followed by wild oscillations (similar to what my solution does at the 7th derivative). With only 30 terms, the breakdown occurs even earlier. Oddly enough, the 30-term truncation behaves better than my solution (if you don't count the fact that it has discontinuities in its higher derivatives at x = 0, 1, e, etc.).

I'm currently calculating a 400-term truncation to 2048 bits of precision, and I'm going to extract as many derivatives as the precision allows, unless and until the concavity of the odd derivatives breaks down. This will likely be an all-day calculation, but my curiosity must be satisfied...
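In case anyone wants to reproduce these graphs, here's a minimal sketch of the derivative extraction; slog_derivative is a hypothetical helper that simply differentiates the truncated series term by term:
Code:
# m-th derivative of the truncated series sum_k a[k]*x^k, evaluated at x:
#   d^m/dx^m (a[k]*x^k) = k!/(k-m)! * a[k] * x^(k-m)
def slog_derivative(a, m, x):
    return sum(falling_factorial(k, m) * a[k] * x^(k - m)
               for k in xrange(m, len(a)))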
~ Jay Daniel Fox
#7
wow
#8
While I'm waiting for this massive script to run, I was hoping to get a comparison of the speed of the matrix solvers in the various libraries. Andrew, since you seem to have copies of Mathematica, Maple, and SAGE, could you generate comparative solver times for various matrix sizes (e.g., 50 to 250 in steps of 50) in each library? One set of times with rational matrices, and one set with integer.

By integer, I mean don't divide through by the factorial, which means you have to multiply the "-1" terms (the first subdiagonal) by the factorial to compensate. For bases where 1/ln(b) is non-integer, this will introduce rationals, so I don't mean "integer" in the general case, but... Anyway, Andrew, could you generate these numbers for comparison? If, as I'm beginning to strongly believe, your solution is the "unique" solution for bases greater than eta (I've only tested e so far), then it would be good to know the quickest way to compute it in each library, as well as sample expected times.
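Concretely, reading the 1-indexed \( (i,j) \) entry off the integer-form SAGE code in the next post, the matrix for base \( b \) is

\( A_{i,j} = j^{\,i-1} - \frac{j!}{(\ln b)^{\,i-1}}\,\delta_{j,\,i-1} \)

so the factorials sit on the first subdiagonal, and any non-integer \( 1/\ln b \) reintroduces rationals there.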
~ Jay Daniel Fox
#9
Here's my SAGE code preparing the coefficients, for rational and for integer matrices:

For the rational solution, you'll need to divide the kth coefficient by k!. I did that step elsewhere, which is why it isn't shown below; but for comparison purposes, it's probably best to put it in this function. I'd do it myself, but SAGE is still crunching numbers, so I can't test a code change right now.
Code:
# For optimal speed and accuracy, try to use a rational LogB
def Prepare_Slog_Q(LogB, size):
    print "Preparing slog for base e^(%s) with %s terms" % (LogB, size)
    # Rational form: column c carries a 1/(c+1)! factor
    m = matrix(QQ, size, size, [[(c+1)^r / factorial(c+1) -
        (c == (r-1)) / (LogB^r) for c in xrange(size)]
        for r in xrange(size)])
    # Right-hand side: the first basis vector
    v = vector(QQ, size, [(k == 0) for k in xrange(size)])
    s = m.solve_right(v)
    # Prepend the slog's constant term, -1
    ret = vector(QQ, len(s)+1, [-1] + list(s))
    return ret

Code:
# For optimal speed and accuracy, try to use a rational LogB
def Prepare_Slog(LogB, size):
    print "Preparing slog for base e^(%s) with %s terms" % (LogB, size)
    # Integer form (for integer 1/LogB): factorials on the subdiagonal
    m = matrix(QQ, size, size, [[(c+1)^r -
        factorial(c+1) * (c == (r-1)) / (LogB^r)
        for c in xrange(size)] for r in xrange(size)])
    # Right-hand side: the first basis vector
    v = vector(QQ, size, [(k == 0) for k in xrange(size)])
    s = m.solve_right(v)
    # Prepend the slog's constant term, -1
    ret = vector(QQ, len(s)+1, [-1] + list(s))
    return ret
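Here's a hypothetical usage example (base e, so LogB = 1), with the factorial division from the note above made explicit:
Code:
# 100-term rational solve for base e
s = Prepare_Slog_Q(1, 100)
# As noted above, the k-th coefficient must still be divided by k!
# to obtain the power series coefficients:
a = [s[k] / factorial(k) for k in xrange(len(s))]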
~ Jay Daniel Fox
#10
My graphing code had a few bugs in it, so I essentially started from scratch to clean it up. While doing so, I noticed some weird behavior in the matrix solver.

To start with, I was curious how the time to solve increases with n. Theoretically, if we double n, we get four times as many elements and must perform twice as many row reductions, so we're already at n^3 = 8 times longer. Of course, with a larger matrix the operands get bigger too (the bottom-right corner contains n^(n-1)); looking solely at that element, the operand size in bits grows as n*log(n). Additions and subtractions therefore take n*log(n) times longer. Multiplications take longer still: depending on the math library, they can cost up to the square of the operand size, though a 1.5-power or size*log(size) cost is possible; I'm not sure what SAGE uses. If it's quadratic, that's really (n*log(n))^2 per multiplication, since the operand size is n*log(n). Anyway, I was hoping SAGE's library was at least the 1.5-power variety, if not faster.

Stringing it all together, I was expecting something in the ballpark of O(n^4.5) time. Interestingly, it looks like it's at least O(n^5), possibly slightly more.
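Spelling out that ballpark with the estimates above: roughly \( n^3 \) element operations on operands of roughly \( n\log n \) bits, at a hoped-for multiplication cost of (bit size)\( ^{1.5} \), gives

\( n^3 \times (n \log n)^{1.5} = n^{4.5} (\log n)^{1.5} \)

i.e., \( O(n^{4.5}) \) up to log factors, while a quadratic multiplication routine would instead give \( n^3 \times (n\log n)^2 = n^5 (\log n)^2 \).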

But even more interestingly, for n approximately 10+45*k, e.g., 55, 100, 145, 190, 235, etc., the solve time suddenly jumps by a significant amount. Increasing n by 1 might suddenly make the solve time increase by 30-40%.

I was so perplexed the first time I noticed this that I spent several hours playing around with it, trying to figure out the pattern. I'm still not sure if this is an artifact of SAGE internals, or if it's a property of the matrix method itself. Andrew found repetitive ratios every 7th term, if I recall correctly, so perhaps there's another pattern every 44th or 45th term.

So I graphed the fifth root of the solve times (because a linear graph would mean O(n^5) solve time). Here's what I got:

[Attached graph: fifth root of the solve time versus matrix size n]

Notice the jumps at roughly evenly spaced intervals. There aren't any jumps shown beyond 280 because the solve times were too long for me to verify that they continue regularly.

Now, here's a detailed graph of each data point. There are gaps where I determined (heuristically) that the values wouldn't be interesting. In the transition ranges, I ran every value of n, up to 10 times each, to ensure stable times for comparison.

[Attached graph: individual solve times for each n near the transition points]

The first thing to notice is the chaotic nature. Near the transition points (55, 100, 145, 189, etc.), the time might increase by 30% when adding a term, then decrease by 15% when adding another. I ran multiple tests, so it wasn't just a random effect of processor load.
~ Jay Daniel Fox

