SAGE code for computing flow matrix for exp(z)-1
#1
(Update: Look two posts down, where I have updated code that is much faster, probably on par with the Mathematica code that Andrew posted.)


I don't recall the exact term for the matrix that this code computes. I call it the FMA, which I believe stands roughly for "flow matrix"; "fma" was the name of a variable in the code that Andrew provided two years ago:

http://math.eretrandre.org/tetrationforu...147#pid147

I don't remember what the matrix is called, but I know what it represents. The columns (or rows, depending on transposition) are the coefficients of Lagrange interpolation polynomials in n, and each polynomial interpolates a coefficient of the power series of the nth iterate of the desired function, provided the series is centered at a parabolic fixed point. For example, I have been using it to calculate the flow matrix for exp(z)-1.
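
To spell out what I mean (my notation, just a sketch): writing the nth iterate as
\[ f^{\circ n}(z) \;=\; \sum_{k \ge 1} c_k(n)\, z^k , \]
the integer iterates give the values \( c_k(0), c_k(1), \dots, c_k(\mathrm{size}) \), and each coefficient \( c_k(n) \) is taken to be the Lagrange polynomial in n passing through those values. The fma matrix simply stores the coefficients of these polynomials.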

Well, the code runs much slower in SAGE than it does in Mathematica. Andrew apparently converted the code directly, so efficiencies in Mathematica are not preserved in the move to SAGE. Fear not: this merely means we must find an equivalent form that is efficient in SAGE, or at least relatively so.

If you look at the code, there are three subfunctions that Andrew wrote. The first calculates the coefficients of the integer iterates of a function centered at its parabolic fixed point (I can't recall if the fixed point needs to be 0, but the change is trivial if so). The other two functions create the interpolation polynomials.

Well, the first change I made was in the parabolic_idm function. Rather than computing the Taylor series of ser(expr), I changed it to compute expr(ser). It turns out that internal optimizations make this latter form much faster: about four times faster for size=100, five times faster for size=150, with the speedup generally improving slightly as the size increases.
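
To make the change concrete, the relevant line looks roughly like this before and after (a sketch, not a verbatim diff of Andrew's code):
Code:
# slower (original) form: expand the Taylor series of ser(expr)
ser = taylor(ser(expr), x, 0, size)
# faster form used below: expand expr(ser) instead; both compose f with its
# own iterate, so the result is the same, but SAGE handles this form better
ser = taylor(expr(ser), x, 0, size)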

Using the matrix approach that Gottfried helped me figure out, I was able to reduce the second and third function calls to a matrix multiplication, which takes only a matter of seconds for matrices this small. These latter two function calls were taking almost as much time as the first, so reducing this to a few seconds helps dramatically.

Without further ado, here is the code, which runs in SAGE 4.0.1 and 4.1, but is probably backwards compatible as well. (Depending on how you convert Andrew's fma list to a matrix, it might be a transposition of the one I calculate below. So don't freak out if you run into this when trying to compare.)

Code:
# Define the custom IDM: row i holds the power series coefficients
# (constant term first) of the i-th integer iterate of expr at 0.
def parabolic_idm(expr, x, size):
    ret = [];
    ser = x;
    for i in xrange(size):
        # compose once more with expr and re-expand to order "size"
        ser = taylor(expr(ser), x, 0, size);
        cof = ser.coeffs(x);
        # coeffs() lists only the nonzero terms, so prepend the (zero) constant
        # term; this assumes no gaps in the exponents, which holds for exp(z)-1
        ret.append([0] + [cof[int(k)][0] for k in xrange(len(cof))]);
    # the 0th iterate is the identity, i.e. the series z itself
    top = [0, 1] + [0 for i in xrange(size-1)];
    return [top] + ret;

# Create a Lagrange Interpolation polynomial for each
# coefficient of the iteration function. Note that
# multiplying on the left by a row-vector of powers
# of n (where n is the desired iteration) will give
# a row vector which represents the coefficients of
# a power series in z. Alternatively, multiplying
# on the right by a column vector of powers of z
# will give the coefficients of a power series in n,
# which allows direct calculation of the nth iterate,
# centered at z.
def LagrangePolynomialMatrix(pidm, size):
    # Column c of XI/factorial(size) holds the coefficients, in powers of n,
    # of binomial(n, c); the factorial(size) scaling keeps the entries in ZZ.
    XI = matrix(ZZ, size+1);
    XI[0,0] = factorial(size);
    for ccol in xrange(1, size+1):
        XI[0, ccol] = (1-ccol)/ccol * XI[0, ccol-1]
    for rrow in xrange(1,size+1):
        for ccol in xrange(rrow, size+1):
            XI[rrow, ccol] = (XI[rrow-1, ccol-1] + (1-ccol) * XI[rrow, ccol-1])/ccol
    # PI[r,c] = (-1)^(r-c)*binomial(r,c), so row r of PI*pidm holds the r-th
    # forward differences (in the iteration count, at 0) of the coefficient
    # rows of the integer iterates.
    PI = matrix(ZZ, size+1);
    PI[0,0] = 1;
    for rrow in xrange(1, size+1):
        for ccol in xrange(rrow+1):
            PI[rrow, ccol] = PI[rrow-1, ccol-1] - PI[rrow-1, ccol];
    # Net effect: Newton forward-difference interpolation, expanded in powers of n.
    return (XI*PI)*matrix(QQ, pidm)/factorial(size);

# test data: change or delete as needed
size = 10
time mtx = parabolic_idm(exp(x)-1, x, size)
time fma = LagrangePolynomialMatrix(mtx, size)
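
For reference, here is how the fma matrix would be used to pull out, say, the half-iterate (just a quick sketch following the comments in the code above; the variable names are mine):
Code:
# left-multiply fma by the row vector [1, n, n^2, ...] to get the
# z-power-series coefficients of the n-th iterate; e.g. n = 1/2
n = 1/2
nvec = vector(QQ, [n^k for k in xrange(size+1)])
half_coeffs = nvec * fma   # half_coeffs[k] is the coefficient of z^k in the half-iterate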

Note: the code gives a deprecation warning when calculating the Taylor series, but I'm not sure about the best way to avoid it. (I'll post an update if I can work around it without hurting performance.)
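
One candidate I haven't timed yet (so treat this as an untested guess) is to substitute into the expression rather than call it, replacing the corresponding line inside parabolic_idm:
Code:
# untested: substitute rather than call the symbolic expression (assuming the
# variable is literally named x); should give the same series without the warning
ser = taylor(expr.subs(x=ser), x, 0, size)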

PS: If anyone tries this code and it doesn't work, let me know. And if anyone can suggest further optimizations, I'd like to hear them, since learning to optimize SAGE code will be helpful to me in general.
~ Jay Daniel Fox
#2
Alas, it seems that Maxima and/or the underlying Lisp crashes for large sizes, or after repeated calculations, due to a small, hard-coded memory limit in the Maxima installation. According to discussions on the sage-support Google group, fixing it might require recompiling Maxima!

Well, I know ways to work around this, since there are other ways to compute the Taylor series of composed functions, for example by going through an alternative library such as gp/pari, but I don't have time to work on a workaround today. Sorry, folks!
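
For the record, the sort of thing I have in mind with gp/pari is along these lines (an untested sketch, so don't hold me to the details):
Code:
# untested sketch: do the composition step as a truncated power series in gp,
# bypassing Maxima entirely
gp("f = exp(x) - 1 + O(x^11)")
gp("g = subst(f, x, f)")   # g is the second iterate of exp(x)-1, to order 10
print gp("g")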
~ Jay Daniel Fox
#3
(08/19/2009, 09:17 PM)jaydfox Wrote: Alas, it seems that Maxima and/or the underlying Lisp crashes for large sizes, or after repeated calculations, due to a small, hard-coded memory limit in the Maxima installation. According to discussions on the sage-support Google group, fixing it might require recompiling Maxima!

Well, I know ways to work around this, since there are other ways to compute the Taylor series of composed functions, for example by going through an alternative library such as gp/pari, but I don't have time to work on a workaround today. Sorry, folks!
Okay, so I decided to just calculate the Taylor series of the iterates "by hand", using matrix math. It turns out to be way faster: I am now getting times comparable to what Andrew was getting in Mathematica. Hooray!
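
The idea, spelled out (this is just a description of what the code below does): if \( f(x) = \sum_{e \ge 1} f_e x^e \), build the matrix \( M \) with \( M_{r,c} = f_{c-r} \). Then for a row vector \( g \) of series coefficients,
\[ (gM)_c \;=\; \sum_r g_r f_{c-r}, \]
which are the coefficients of \( g(x) f(x) \). Multiplying the series 1 by \( M \) repeatedly gives rows holding the coefficients of \( f(x)^i \) (the matrix the code calls "abel"); multiplying a coefficient vector \( g \) by that matrix then gives the coefficients of \( g(f(x)) \), so starting from the identity series \( x \) and composing repeatedly yields the integer iterates.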

Code:
# Define the custom IDM: row i of the returned matrix holds the power series
# coefficients of the i-th integer iterate of expr at 0.
def parabolic_idm(field, expr, x, size):
    mulm = matrix(field, size+1, size+1);
    abel = matrix(field, size+1, size+1);
    ret = matrix(field, size+1, size+1);
    cof = taylor(expr, x, 0, size).coeffs();
    # mulm[r, r+e] = coefficient of x^e in expr, so that for a row vector g of
    # series coefficients, g*mulm holds the coefficients of g(x)*expr(x).
    for rrow in xrange(size+1):
        for ccol in xrange(len(cof)):
            col2 = cof[ccol][1]+rrow;
            if col2 < size+1:
                mulm[rrow, col2] = field(cof[ccol][0]);
    # Starting from the constant series 1 and multiplying by mulm repeatedly,
    # row i of abel ends up holding the coefficients of expr(x)^i.
    vec = vector(field, size+1);
    vec[0] = field(1);
    abel[0] = vec;
    for ii in xrange(size):
        vec = vec * mulm;
        abel[ii+1] = vec;
    # Now g*abel holds the coefficients of g(expr(x)); starting from the
    # identity series x and composing repeatedly gives the integer iterates.
    vec = vector(field, size+1);
    vec[1] = field(1);
    ret[0] = vec;
    for ii in xrange(size):
        vec = vec * abel;
        ret[ii+1] = vec;
    return ret;

# Create a Lagrange Interpolation polynomial for each
# coefficient of the iteration function. Note that
# multiplying on the left by a row-vector of powers
# of n (where n is the desired iteration) will give
# a row vector which represents the coefficients of
# a power series in z. Alternatively, multiplying
# on the right by a column vector of powers of z
# will give the coefficients of a power series in n,
# which allows direct calculation of the nth iterate,
# centered at z.
def LagrangePolynomialMatrix(pidm, size):
    XI = matrix(ZZ, size+1);
    XI[0,0] = factorial(size);
    for ccol in xrange(1, size+1):
        XI[0, ccol] = (1-ccol)/ccol * XI[0, ccol-1]
    for rrow in xrange(1,size+1):
        for ccol in xrange(rrow, size+1):
            XI[rrow, ccol] =  (XI[rrow-1, ccol-1] + (1-ccol) * XI[rrow, ccol-1])/ccol
    PI = matrix(ZZ, size+1);
    PI[0,0] = 1;
    for rrow in xrange(1, size+1):
        for ccol in xrange(rrow+1):
            PI[rrow, ccol] = PI[rrow-1, ccol-1] - PI[rrow-1, ccol];
    return (XI*PI)*matrix(QQ, pidm)/factorial(size);

# test data: change or delete as needed
size = 100
time mtx = parabolic_idm(QQ, exp(x)-1, x, size)
time fma = LagrangePolynomialMatrix(mtx, size)

Update: It occurs to me that memory can be saved by eliminating the "mulm" variable and repurposing the "ret" variable. Thus:
Code:
# Define the custom IDM:
def parabolic_idm(field, expr, x, size):
    abel = matrix(field, size+1, size+1);
    ret = matrix(field, size+1, size+1);
    cof = taylor(expr, x, 0, size).coeffs();
    # ret temporarily holds the multiplication-by-expr matrix (the "mulm" of
    # the previous version); its rows are overwritten with the iterate
    # coefficients in the final loop below.
    for rrow in xrange(size+1):
        for ccol in xrange(len(cof)):
            col2 = cof[ccol][1]+rrow;
            if col2 < size+1:
                ret[rrow, col2] = field(cof[ccol][0]);
    vec = vector(field, size+1);
    vec[0] = field(1);
    abel[0] = vec;
    for ii in xrange(size):
        vec = vec * ret;
        abel[ii+1] = vec;
    vec = vector(field, size+1);
    vec[1] = field(1);
    ret[0] = vec;
    for ii in xrange(size):
        vec = vec * abel;
        ret[ii+1] = vec;
    return ret;
~ Jay Daniel Fox
#4
You may also use the powerseries package that was an outcome/branch of an effort by Andrew and me to put together Sage routines for hyperoperations.
I recently adapted the code to work with Sage 4.1.1:
http://github.com/bo198214/hyperops/raw/...rseries.py

You would use it to iterate exp(x)-1 like this:
Code:
----------------------------------------------------------------------
| Sage Version 4.1.1, Release Date: 2009-08-14                       |
| Type notebook() for the GUI, and license() for information.        |
----------------------------------------------------------------------
In [2]: load formal_powerseries.py
In [3]: P = FormalPowerSeriesRing(QQ)
In [4]: w = PolynomialRing(QQ,'w')([0,1])
In [5]: time sum(P.Dec_exp.it(w)[0:100]);
CPU times: user 326.53 s, sys: 1.49 s, total: 328.02 s
Wall time: 345.83 s
In [7]: P.Dec_exp.it(w)
Out[7]: [0, 1, 1/2*w, 1/4*w^2 - 1/12*w, 1/8*w^3 - 5/48*w^2 + 1/48*w, ...]
(This is on an Athlon XP 2500; it consumes roughly 200-300 MB of memory.)

"it(w)" does just what one would expect: it iterates the given powerseries by w.
#5
(08/21/2009, 12:38 PM)bo198214 Wrote: You may also use the powerseries package that was an outcome/branch of an effort by Andrew and me to put together Sage routines for hyperoperations.
Very cool, I've perused the code a bit and will take a closer look when I can dedicate the time. Interestingly, calculating the 101x101 (i.e., size=100) matrix with my code takes about 8 seconds and consumes negligible RAM (<10 MB), and it takes about 160 seconds for a 201x201 matrix (and perhaps 40-50 MB of RAM). So there seems to be a lot of overhead involved somewhere in the formal power series ring.

On the other hand, the matrix system is nowhere near as flexible; it's built specially for one purpose. Memory-wise, 1 GB of RAM should be sufficient to calculate somewhere between 600 and 1000 terms (probably closer to the low end of that estimate).

Update: 400 terms took 2800 seconds for the parabolic_idm function, and 960 seconds for the "fma" matrix of Lagrange polynomials. Memory usage was about 410 MB (on top of the roughly 130 MB used by SAGE itself). Based on the growth rate, it would seem that 1 GB would only be good enough for about 500 terms, perhaps slightly more. For 600 terms, one would need at least 1.5 GB, perhaps slightly more.
~ Jay Daniel Fox

