11/26/2007, 01:46 AM
jaydfox Wrote:I finally got back to this, and it appears that shifting the center prior to solving the system produces the same results as shifting the center after solving the system. This means we're limited by the radius of convergence at the origin, which is to say, the system explicitly solves at the origin, not where we recenter it.
I did some additional testing with the Bell matrix for exponentials, because there the "answer" is well-known. I started with a 26x26 Bell matrix for \( e^x-1 \), and after inverting it, I got a power series for \( \ln(x+1) \) that was accurate to machine precision. This is to be expected, since it amounts to a simple reversion of a power series.
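The \( e^x-1 \) check can be sketched as follows. This is my own illustration, not the original code: it uses sympy, a smaller truncation order than the 26x26 above, and the Carleman convention (the transpose of the Bell matrix, which makes no difference to the inversion property). The hypothetical helper `carleman` builds the matrix whose entry \( (j,k) \) is the coefficient of \( x^k \) in \( f(x)^j \); since \( e^x-1 \) fixes 0, the truncated matrix is triangular, and row 1 of its inverse holds the series of \( f^{-1}=\ln(1+x) \) exactly.

```python
# Sketch of the reversion check described above (my own illustration,
# not the poster's code).  Exact rational arithmetic via sympy.
import sympy as sp

x = sp.symbols('x')
N = 8  # truncation order; the post used a 26x26 matrix

def carleman(f, N):
    """Entry (j, k) is the coefficient of x^k in f(x)**j."""
    M = sp.zeros(N + 1, N + 1)
    for j in range(N + 1):
        s = sp.series(f**j, x, 0, N + 1).removeO()
        for k in range(N + 1):
            M[j, k] = s.coeff(x, k)
    return M

C = carleman(sp.exp(x) - 1, N)
Cinv = C.inv()  # triangular, so the truncated inverse is exact

# Row 1 of the inverse holds the series coefficients of ln(1+x).
recovered = [Cinv[1, k] for k in range(N + 1)]
expected = [sp.Integer(0)] + [sp.Rational((-1)**(k + 1), k)
                              for k in range(1, N + 1)]
print(recovered)  # [0, 1, -1/2, 1/3, -1/4, ...]
```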
However, I then calculated the Bell matrix for \( e^x-2 \), which is the inverse of \( \ln(x+2) \). This time I got bogus coefficients that grew larger as I increased the matrix size, just as I had observed with the Abel matrix for exponentiation.
To make sure the issue wasn't simply that the function being inverted lacked a fixed point at 0, I also tried the Bell matrix for \( e^{x+\ln(2)}-2 \), which does fix 0 and is the inverse of \( \ln(x+2)-\ln(2) \). Once again, I got bogus coefficients that grew with the matrix size.
However, I noticed that if I inverted only a leading submatrix, say the upper-left 15x15 block, I got approximately correct answers, though not to machine precision.
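For what it's worth, the \( e^x-2 \) experiment can be reconstructed along the following lines. This is my own sketch at smaller sizes than the 26x26 and 15x15 matrices above, again in the Carleman convention; the helper name and size choices are mine, and the code only prints the deviations rather than asserting which truncation behaves better.

```python
# Reconstruction sketch of the e^x - 2 experiment (my own code, not the
# poster's).  Because e^x - 2 does not fix 0, its truncated matrix is
# not triangular, so inverting the truncation only approximates the
# matrix of the inverse function ln(x+2).
import sympy as sp

x = sp.symbols('x')

def carleman(f, N):
    """Entry (j, k) is the coefficient of x^k in f(x)**j."""
    M = sp.zeros(N + 1, N + 1)
    for j in range(N + 1):
        s = sp.series(f**j, x, 0, N + 1).removeO()
        for k in range(N + 1):
            M[j, k] = s.coeff(x, k)
    return M

N = 12
C = carleman(sp.exp(x) - 2, N)
assert C[2, 0] == 1  # (e^x - 2)^2 has constant term 1: not triangular

# Exact series coefficients of the true inverse, ln(x+2), about 0.
target = [sp.series(sp.log(x + 2), x, 0, N + 1).removeO().coeff(x, k)
          for k in range(N + 1)]

# Invert the full truncation and a smaller leading submatrix, and
# compare row 1 (the candidate inverse-function coefficients) to target.
for size in (8, N):
    sub = C[:size + 1, :size + 1]
    row = sub.inv().row(1)
    errs = [abs(sp.N(row[k] - target[k])) for k in range(size + 1)]
    print(size, max(errs))
```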
It then occurred to me that infinite summations are involved in computing the recentered matrix system, and that I was only getting partial sums of them. Had I been paying more attention, I would have seen this sooner. At any rate, it gives me hope that I can recenter the system if I proceed carefully. Exactly how to accomplish that will require quite a bit of experimentation, however, so I'll probably wait until early next year to pursue this line of study again.
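To make the partial-sum effect concrete: in the infinite matrices, entry \( (1,0) \) of the product of the matrices for \( \ln(x+2) \) and \( e^x-2 \) is exactly 0, since \( \ln((e^x-2)+2)=x \). Under an NxN truncation, that same entry becomes the partial sum \( \ln 2 - \sum_{m=1}^{N} 2^{-m}/m \), which only tends to 0 as N grows. A minimal sketch of this (my own, not from the post):

```python
# Entry (1, 0) of the truncated product of the matrices for ln(2+x) and
# e^x - 2 (my own illustration).  Row 1 of the first matrix holds the
# coefficients of ln(2+x): constant term ln 2, then (-1)^(m+1) 2^(-m)/m
# for m >= 1.  Column 0 of the second holds the constant terms of
# (e^x - 2)^m, which are (-1)^m.  The infinite dot product is exactly 0;
# the truncation leaves only a partial sum.
import math

def truncated_entry(N):
    """Entry (1, 0) of the product of the NxN-truncated matrices."""
    return math.log(2) - sum(2.0**-m / m for m in range(1, N + 1))

for N in (5, 10, 20):
    print(N, truncated_entry(N))  # shrinks toward 0 as N grows
```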
~ Jay Daniel Fox