11/26/2007, 07:22 AM
jaydfox Wrote:I finally got back to this, and it appears that shifting the center prior to solving the system produces the same results as shifting the center after solving the system. This means we're limited by the radius of convergence at the origin, which is to say, the system explicitly solves at the origin, not where we recenter it.
I did some additional testing with the Bell matrix for exponentiation, because the "answer" is well-known. I started with a 26x26 Bell matrix for \( e^x-1 \), and after inverting, I got a power series for \( \ln(x+1) \) that was accurate to machine precision. This is to be expected, since this would represent a simple reversion of a power series.
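To reproduce this kind of check, here is a small sketch in Python (my own illustration, not the code jaydfox used; the size N and the use of exact rational arithmetic are my choices). It builds a truncated Bell matrix for \( e^x-1 \) from Stirling numbers of the second kind and inverts it exactly; column 1 of the inverse recovers the series of \( \ln(1+x) \):

```python
from fractions import Fraction

N = 8  # truncation size (illustrative)

# Stirling numbers of the second kind via the standard recurrence
S2 = [[0]*N for _ in range(N)]
S2[0][0] = 1
for n in range(1, N):
    for k in range(1, n+1):
        S2[n][k] = S2[n-1][k-1] + k*S2[n-1][k]

fact = [1]*N
for i in range(1, N):
    fact[i] = fact[i-1]*i

# Bell matrix U for f(x)=e^x-1: column k holds the series of (e^x-1)^k,
# so U[n][k] = k! * S2(n,k) / n!  (lower triangular, unit diagonal)
U = [[Fraction(fact[k]*S2[n][k], fact[n]) for k in range(N)] for n in range(N)]

# invert the lower-triangular U exactly by forward substitution
Uinv = [[Fraction(0)]*N for _ in range(N)]
for j in range(N):
    Uinv[j][j] = 1/U[j][j]
    for i in range(j+1, N):
        s = sum(U[i][m]*Uinv[m][j] for m in range(j, i))
        Uinv[i][j] = -s/U[i][i]

# column 1 of U^-1 is the series of ln(1+x): 0, 1, -1/2, 1/3, -1/4, ...
print([Uinv[n][1] for n in range(N)])
```

Because U is triangular with unit diagonal, the truncated inverse agrees exactly with the infinite one in its upper-left block, which is why this case works to machine precision at any size.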
However, I then calculated the Bell matrix for \( e^x-2 \), which is the inverse of \( \ln(x+2) \). I got bogus coefficients that grew larger as I increased matrix size, just as I had observed with the Abel matrix for exponentiation.
Just to make sure that the issue wasn't that the function I was inverting didn't have a fixed point at 0, I tried the Bell matrix for \( e^{x+\ln(2)}-2 \), which is the inverse of \( \ln(x+2)-\ln(2) \). Once again, I got the bogus coefficients that grew with matrix size.
However, I noticed that if I took the inverse of a submatrix, say the upper-left 15x15 matrix, I got approximately correct answers, though not to machine precision.
It occurred to me then that the infinite summations were involved when computing the recentered matrix system, and I was only getting partial sums. Had I been paying more attention, I would have seen this. At any rate, this gives me hope that I can recenter the system if I proceed carefully. Exactly how to accomplish this is something that will require quite a bit of experimentation, however, so I'll probably wait until early next year to pursue this line of study again.
Wow. That's exactly what I'm doing with my matrix notation; however, I didn't consider your special cases.
Say U is the Bell matrix for e^x-1, which in my version is the factorially scaled Stirling matrix (second kind).
So V(x)~ * U = V(e^x-1)~
Since the Pascal matrix P applies the addition +1 to the argument of a power-series vector, I have
V(x)~ * U*P~ = V(e^x)~
where U*P~ is the square Bell matrix.
V(x)~ -> V(log(1+x))~
is then performed by U^-1, which contains the Stirling numbers of the first kind:
V(x)~ * U^-1 = V(log(1+x))~
This inverse can be computed exactly, and since it is triangular, also its powers.
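A quick sketch of this exactness (Python for illustration; sizes and names are mine): build U from the Stirling numbers of the second kind, build U^-1 from the signed Stirling numbers of the first kind, and check that their product is exactly the identity:

```python
from fractions import Fraction

N = 7  # truncation size (illustrative)

# Stirling numbers: second kind S2 and signed first kind s1, via recurrences
S2 = [[0]*N for _ in range(N)]; S2[0][0] = 1
s1 = [[0]*N for _ in range(N)]; s1[0][0] = 1
for n in range(1, N):
    for k in range(1, n+1):
        S2[n][k] = S2[n-1][k-1] + k*S2[n-1][k]
        s1[n][k] = s1[n-1][k-1] - (n-1)*s1[n-1][k]

fact = [1]*N
for i in range(1, N):
    fact[i] = fact[i-1]*i

# factorially scaled matrices: U[n][k] = k! S2(n,k)/n!, Uinv[n][k] = k! s1(n,k)/n!
U    = [[Fraction(fact[k]*S2[n][k], fact[n]) for k in range(N)] for n in range(N)]
Uinv = [[Fraction(fact[k]*s1[n][k], fact[n]) for k in range(N)] for n in range(N)]

# U * Uinv should be exactly the identity (Stirling inversion)
prod = [[sum(U[i][m]*Uinv[m][j] for m in range(N)) for j in range(N)]
        for i in range(N)]
```

Since both factors are triangular, no infinite sums occur in the product, and truncation loses nothing.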
To accomplish
V(x)~ -> V(e^x-2)~
we have to use
V(x)~ * (U * P^-1~) = V(e^x-2)~
and (U * P^-1~) is no longer triangular; thus computing its powers requires evaluating series (possibly divergent series) for each entry.
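Here is a sketch of that loss of triangularity (Python for illustration; sizes and names are mine). P^-1 has entries (-1)^(n-k)*C(n,k); the product U * P^-1~ has nonzero entries above the diagonal, yet its column 1, summed against powers of x, still reproduces e^x-2:

```python
from fractions import Fraction
from math import comb, exp

N = 12  # truncation size (illustrative)

# Stirling second kind and the Bell matrix U for e^x-1, as before
S2 = [[0]*N for _ in range(N)]; S2[0][0] = 1
for n in range(1, N):
    for k in range(1, n+1):
        S2[n][k] = S2[n-1][k-1] + k*S2[n-1][k]
fact = [1]*N
for i in range(1, N):
    fact[i] = fact[i-1]*i
U = [[Fraction(fact[k]*S2[n][k], fact[n]) for k in range(N)] for n in range(N)]

# P^-1 has entries (-1)^(n-k) C(n,k); M = U * P^-1~ is the Bell matrix for e^x-2
Pinv = [[Fraction((-1)**(n-k)*comb(n, k)) for k in range(N)] for n in range(N)]
M = [[sum(U[n][k]*Pinv[j][k] for k in range(N)) for j in range(N)]
     for n in range(N)]

# M is not triangular: the entry in row 0, column 1 is -1, not 0
print(M[0][1])

# yet V(x)~ * M reproduces V(e^x-2)~: column 1 evaluated at x gives e^x-2
x = 0.3
approx = sum(float(M[n][1])*x**n for n in range(N))
print(approx, exp(x) - 2)
```

In the truncated product only finitely many terms occur, but powers and inverses of the full non-triangular M are where the infinite (possibly divergent) sums enter.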
Again the inverse performs V(x)~ -> V(log(2+x))~, so we have
V(x)~ * (U * P^-1~)^-1 = V(log(2+x))~
But the computation of the entries of the inverse does not converge to final values as the size increases, at least not with the ordinary numerical inversion procedure.
The change of coefficients, however, can be better managed if one inverts (U * P^-1~) by inverting its factors in reverse order:
(U * P^-1~)^-1 = P~ * U^-1
The computation of this matrix then involves the summation of divergent series, though with alternating signs, so we may accelerate convergence by Euler summation - although a different order of Euler summation must be applied for each entry! Anyway, in this way we may obtain a reasonable approximation to the true inverse.
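As a sketch of the kind of acceleration meant here (a plain first-order Euler transformation in Python; in practice the order is varied per entry), take the alternating series 1 - 1/2 + 1/3 - ... = ln 2, whose partial sums converge very slowly:

```python
import math

def euler_sum(a, terms):
    """Euler transformation of sum_k (-1)^k a[k]:
    sum_n (-1)^n (Delta^n a_0)/2^(n+1), Delta = forward difference."""
    diffs = list(a)
    total = 0.0
    for n in range(terms):
        total += (-1)**n * diffs[0] / 2**(n+1)
        diffs = [diffs[k+1] - diffs[k] for k in range(len(diffs)-1)]
    return total

a = [1.0/(k+1) for k in range(50)]                # terms of 1 - 1/2 + 1/3 - ...
direct = sum((-1)**k*a[k] for k in range(25))     # plain partial sum, 25 terms
accel  = euler_sum(a, 25)                         # Euler-transformed, 25 terms
print(direct - math.log(2), accel - math.log(2))
```

With the same 25 terms, the plain partial sum is still off by about 2 per cent, while the transformed sum is accurate to many digits; higher-order variants extend this to (mildly) divergent alternating series.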
Numerically it would be better to use associativity:
(V(x)~ * P~) * U^-1 = V(log(2+x))~
= V(x+1)~ * U^-1 = V(log(2+x))~
and one can then numerically deal with the truncated series of powers of (x+1) and the coefficients of the ordinary Stirling matrix of the first kind, which makes the Euler summation much more convenient, and the approximation is better controllable.
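A minimal numerical sketch of this associativity trick (Python; the truncation N and the test point x = -0.5 are my choices, picked so that the powers of (x+1) decay and no acceleration is needed): entry 1 of V(x+1)~ * U^-1 should equal log(2+x):

```python
import math

N = 60
x = -0.5   # then (x+1) = 0.5 and the series converges geometrically

# column 1 of U^-1 holds the series of log(1+y): entry n is (-1)^(n-1)/n
col1 = [0.0] + [(-1)**(n-1)/n for n in range(1, N)]

# entry 1 of V(x+1)~ * U^-1: sum_n (x+1)^n * col1[n] = log(1+(x+1)) = log(2+x)
approx = sum((x+1)**n * col1[n] for n in range(N))
print(approx, math.log(2+x))
```

For x near 0 the powers (x+1)^n no longer decay and the series is only conditionally (or not at all) convergent; that is where the Euler summation described above has to take over.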
Gottfried
Gottfried Helms, Kassel

