11/11/2007, 10:37 PM

The solution to the Abel equation A, such that A(e^z)=A(z)+1, can be calculated by solving \( (E-I)\vec{f}=\vec{1} \). Here, E is a Carleman or Bell matrix for the exponential function. I'm still unsure which name applies, but to be clear, it is the matrix whose columns are the coefficient vectors of the powers of the power series for e^z. The matrix I is the identity matrix, and \( \vec{1} \) is the column vector (1, 0, 0, ...). Solving for \( \vec{f} \) gives the power series for the slog.
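For anyone who wants to play with this, here is a minimal sketch in Python/numpy of the truncated system, under my assumptions about how the matrix is laid out (the truncation size N and the variable names are mine, not canonical). Since (e^z)^n = e^{nz}, the coefficient of z^k in f(e^z) is \( \sum_n a_n n^k / k! \); the constant term a_0 cancels from every equation (it's the slog's free additive constant), so we solve for a_1..a_N:

```python
import numpy as np
from math import factorial

N = 20  # truncation size; an arbitrary choice for illustration

# Coefficient of z^k in f(e^z), where f(z) = sum_n a_n z^n, is
# sum_n a_n * n^k / k!  (from (e^z)^n = e^{nz}).  The constant a_0
# cancels from every equation, so solve rows k = 0..N-1 for a_1..a_N.
M = np.array([[n ** k / factorial(k) for n in range(1, N + 1)]
              for k in range(N)], dtype=float)
A = M - np.eye(N, k=-1)      # subtract a_k from row k (for k >= 1)
rhs = np.zeros(N); rhs[0] = 1.0
a = np.linalg.solve(A, rhs)  # a[0] approximates the slog coefficient a_1
# With a_0 = slog(0) = -1:  slog(z) ~ -1 + a[0]*z + a[1]*z^2 + ...
```

Row 0 of the system is just \( \sum_{n\ge 1} a_n = 1 \), which makes a handy sanity check on the solve. The coefficients converge only slowly as N grows, which is exactly why the recentering question below matters.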

Now, to shift to a different center, we could use basic shifting operations, e.g., multiplying the power series by a suitable Pascal matrix. However, this isn't feasible when dealing with a truncation of the power series: recentering a truncated series reduces the number of terms that can be considered accurate, due to the radius of convergence.
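The Pascal-matrix shift itself is simple; a sketch (the function name `recenter` is mine, for illustration):

```python
from math import comb

def recenter(coeffs, c):
    """Shift p(z) = sum_n coeffs[n] z^n to powers of w = z - c:
    p(w + c) = sum_k b[k] w^k, with b[k] = sum_{n>=k} coeffs[n] * C(n,k) * c**(n-k).
    This is multiplication by a (generalized) Pascal matrix."""
    N = len(coeffs)
    return [sum(coeffs[n] * comb(n, k) * c ** (n - k) for n in range(k, N))
            for k in range(N)]
```

For example, recentering z^2 to c = 1 gives (w+1)^2 = 1 + 2w + w^2. The trouble described above is that on a truncated series, the high-order terms that would have contributed to each b_k are missing, so the shifted coefficients degrade.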

This bothers me somewhat, because we want to develop the slog at 0, but I'd also like to develop a sexp at 0. The sexp at 0 is the reversion of the slog at 1, and the slog at 0 is the sexp at -1. So no matter what, a recentering will be needed if we solve for the slog at 0.

This got me wondering if we could shift the center before solving and not lose precision. In other words, if we shift the system before solving, do we retain, for the same number of terms, the accuracy we would have had by solving at the origin, or do we end up with the same reduced precision we would expect from solving at the origin first and then recentering?

I hope to answer this question through experimentation, but if anyone knows the answer off the top of their heads, please share.

I initially feared that recentering before solving wouldn't help, but further reflection makes me hopeful that I will gain precision, because I can shift a larger version of the matrix (for greater accuracy), then truncate to the size I'm solving for.
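One way to set up the experiment is to build the system directly at the shifted center. If F(w) = slog(c + w), then the Abel equation becomes F(h(w)) = F(w) + 1 with h(w) = e^{c+w} - c, and the composition matrix is built from powers of h's Taylor series. A hedged sketch (my own construction, with c = 1 and a small N just for illustration):

```python
import numpy as np
from math import factorial, e

N = 16          # truncation size (small, for illustration)
c = 1.0         # solve the slog system centered at z = c instead of 0

# F(w) = slog(c + w) satisfies F(h(w)) = F(w) + 1 with
# h(w) = e^{c+w} - c; its Taylor coefficients about w = 0:
h = np.array([e ** c - c] + [e ** c / factorial(k) for k in range(1, N + 1)])

# H[n][k] = coefficient of w^k in h(w)^n, by truncated convolution.
H = np.zeros((N + 1, N + 1))
p = np.zeros(N + 1); p[0] = 1.0
H[0] = p
for n in range(1, N + 1):
    p = np.convolve(p, h)[:N + 1]
    H[n] = p

# The constant b_0 cancels (the slog's additive constant), so solve
# rows k = 0..N-1 for b_1..b_N:  sum_n b_n H[n][k] - b_k = [k == 0].
A = H[1:, :N].T - np.eye(N, k=-1)
rhs = np.zeros(N); rhs[0] = 1.0
b = np.linalg.solve(A, rhs)
# b[0] approximates slog'(1); note that by the chain rule applied to the
# Abel equation at z = 0, slog'(1) should equal slog'(0).
```

Because h has a nonzero constant term, the matrix is no longer triangular-friendly the way the origin-centered one is, but the solve goes through the same way. Comparing the coefficients from this solve against a recentered origin solve is exactly the experiment I have in mind.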

If so, I could solve the slog at z=1, then revert the series to get the sexp at 0, with more valid terms than I previously could have gotten. I'm actually hoping to get several hundred terms of the "residue" (after subtracting the logarithms at the primary fixed points) to within, say, 0.1%, sufficient to try to solve a small Abel system for the pentalog, just to get a rough idea of what it looks like.
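The reversion step is routine: since slog(1) = 0, the series at z=1 has no constant term and can be inverted compositionally. A sketch of one standard way to do it, solving the triangular system \( [z^k]\,\sum_m b_m\, a(z)^m = \delta_{k,1} \) (the function name is mine):

```python
import numpy as np
from math import factorial

def revert_series(a, N):
    """Given coefficients a[0..N] of a(z) = a1*z + a2*z^2 + ...
    (a[0] == 0, a[1] != 0), return coefficients b of the compositional
    inverse, so that b(a(z)) = z + O(z^{N+1})."""
    # A[k][m] = coefficient of z^k in a(z)^m; lower triangular since
    # a(z)^m starts at z^m.
    A = np.zeros((N + 1, N + 1))
    p = np.zeros(N + 1); p[0] = 1.0      # a(z)^0 = 1
    A[0][0] = 1.0
    for m in range(1, N + 1):
        p = np.convolve(p, a)[:N + 1]    # truncated multiplication by a(z)
        A[:, m] = p
    rhs = np.zeros(N + 1); rhs[1] = 1.0  # target: b(a(z)) = z
    b = np.zeros(N + 1)
    b[1:] = np.linalg.solve(A[1:, 1:], rhs[1:])
    return b
```

As a check, reverting e^z - 1 (coefficients 1/n!) should recover log(1+z) (coefficients (-1)^{n+1}/n). Applied to the slog at z=1, the reverted series is the sexp at 0.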

However, going further, I would like to be able to recenter an arbitrary Abel equation to an arbitrary location, as long as it's not a singularity. One motivation for this is to recenter the \( \text{slog}_{\sqrt{2}} \) to z=3.5, or z=3. I suspect that if I recenter it to z=3.5, it will use the regular slog for the fixed point at z=4, because the root test will be dictated by that singularity, and hence the contribution from the singularity at z=2 should get washed out.

If this happens, I'd be delighted. If not, I'd still be delighted if the solution still somehow managed to "choose" the lower fixed point.

If it does choose the closer singularity, then I wonder what will happen at z=3. The center would be equidistant to both singularities, and both singularities would be of the logarithmic variety with only slightly different bases, such that neither series would be able to dwarf the other. What would happen then?

Anyway, these are questions I intend to explore this week.

~ Jay Daniel Fox