06/24/2010, 07:19 AM
Adding to Henryk's post -
the slog-matrix SLOG is the (supposed) inverse of the "Bell-matrix minus I" (B - I), constructed after dismissing the empty column.
For each size nxn the truncated solution thus draws only on the information of (n-1)x(n-1) parameters, so we have a loss of information. After inversion the matrix is shifted back to fit the nxn size, and the parameter -1 is filled into the head of the empty column.
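A minimal sketch of how I read that construction (the indexing convention, the function names and the choice of base b = 2 are my own assumptions, not the original code): for f(x) = b^x the column k of the Bell matrix carries the coefficients of (b^x)^k = exp(k x ln b), the k = 0 column of B - I is the empty one that gets dismissed, and the truncated system is solved against the unit vector; the constant -1 is then filled in by hand.

```python
import numpy as np

def truncated_system(b, n):
    """(B - I) for f(x) = b^x, truncated to n x n, empty k=0 column dismissed.

    Row m, column k (k = 1..n):  [x^m] (b^x)^k  -  [m == k],
    where (b^x)^k = exp(k*x*ln b) has coefficients (k*ln b)^m / m!.
    """
    lb = np.log(b)
    A = np.zeros((n, n))
    fact = 1.0
    for m in range(n):
        if m > 0:
            fact *= m
        for k in range(1, n + 1):
            A[m, k - 1] = (k * lb) ** m / fact - (1.0 if m == k else 0.0)
    return A

def slog_series(b, n):
    """Coefficients s_0 = -1, s_1, ..., s_n of the truncated slog."""
    rhs = np.zeros(n)
    rhs[0] = 1.0                        # encodes slog(b^x) - slog(x) = 1
    s = np.linalg.solve(truncated_system(b, n), rhs)
    return np.concatenate(([-1.0], s))  # the -1 filled into the empty column-head

s = slog_series(2.0, 16)
slog = lambda x: np.polyval(s[::-1], x)
print(slog(1.0))                        # slog_2(1) = 0 by construction
print(slog(2.0 ** 0.3) - slog(0.3))     # should be close to 1
```

Evaluating well inside the radius of convergence, the functional equation slog(b^x) = slog(x) + 1 is reproduced closely, since the first n Taylor rows of the equation are enforced exactly by the linear system.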
If that loss of information decreases as the size is increased, then I think we have good reason to assume convergence of the method. Andrew showed in his base article the "stabilizing" of the coefficients for some truncations and bases.
However, if we multiply (B-I) * SLOG (or was it SLOG*(B-I)?) we do not get exactly the identity matrix, but (systematically) a non-negligible value in the last row. This value is far bigger than the last entry in the relevant column of SLOG and - in my opinion - disturbs the convergence characteristic of the SLOG function: we get diminishing coefficients with higher index (an apparent convergence, but to a false value!), yet the correction needed for the final value sits in the last term of the truncated power series - and it cannot be neglected.
Now - how does this matter with respect to the asymptotics of infinite size? There a "last" term is not available, and the "correction" can only be neglected if its value vanishes. Well, since the argument x of the SLOG function can be between 0 and 1, its high powers suppress any influence of that correction term; but if, for instance, x = 1, I don't know whether we can dismiss that effect.
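A tiny numeric illustration of that suppression (same assumed truncated system as above, base b = 2 chosen only for illustration): the weight |s_n x^n| of the last, correction-carrying term is crushed by the factor x^n for 0 < x < 1, but at x = 1 it enters with its full coefficient size.

```python
import numpy as np

def slog_tail_term(b, n, x):
    """|s_n * x^n|: weight of the last term of the size-n truncated slog at x."""
    lb = np.log(b)
    A = np.zeros((n, n))
    fact = 1.0
    for m in range(n):
        if m > 0:
            fact *= m
        for k in range(1, n + 1):
            A[m, k - 1] = (k * lb) ** m / fact - (1.0 if m == k else 0.0)
    rhs = np.zeros(n)
    rhs[0] = 1.0
    s = np.linalg.solve(A, rhs)
    return abs(s[-1]) * x ** n

for n in (8, 16, 24):
    # at x = 0.5 the factor x^n suppresses the last term; at x = 1 it does not
    print(n, slog_tail_term(2.0, n, 0.5), slog_tail_term(2.0, n, 1.0))
```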
I cannot really estimate the weight of that problem, and maybe it vanishes for some relevant range of the parameters. But I remember the problems Jay Fox reported with some difficult oscillating error terms when the SLOG computation was accelerated and each bit of the result was examined...
Gottfried
Gottfried Helms, Kassel

