Ivars wrote:
> Gottfried wrote:
> > AS(2,I) = I - 2^I + 2^2^I - 2^2^2^I + ... - ... = ??
> > My proposal is
> > AS(2,I) = -0.440033096027 + 0.928380628227*I
> > What do you think?
> > Gottfried
>
> I can only say it diverges, and I understand you want me to find a method to sum it and a result. (...)
> Ivars
Well, diagonalization seems to give an answer here.
Remember, in my matrix-notation, we have
\( \hspace{24} V(x)\sim * B_b = V(b^x)\sim \)
or more general
\( \hspace{24} V(T_b^{{o}h}(x))\sim * B_b = V(T_b^{{o}h+1}(x))\sim \)
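As a concrete, purely illustrative check of this matrix action, here is a small numpy sketch. The truncation size n = 32 and the closed form B[k,j] = (j·ln b)^k / k! (the Taylor coefficients of (b^x)^j in powers of x) are my own assumptions for a truncated Bb-matrix, not taken verbatim from this post:

```python
import math
import numpy as np

n = 32          # assumed truncation size
b = 2.0
lb = math.log(b)

# Truncated Carleman/Bell matrix for x -> b^x:
# column j holds the Taylor coefficients of (b^x)^j = exp(j*lb*x),
# so B[k, j] = (j*lb)^k / k!
B = np.array([[(j * lb) ** k / math.factorial(k) for j in range(n)]
              for k in range(n)])

# Check the defining property  V(x)~ * B_b = V(b^x)~  on the first few entries
x = 0.5
Vx = x ** np.arange(n)            # V(x)~ = (1, x, x^2, ...)
lhs = Vx @ B
rhs = (b ** x) ** np.arange(n)    # V(b^x)~
print(lhs[:4])
print(rhs[:4])
```

The first few entries of the two printed vectors agree to high accuracy; only the high-index entries suffer from truncation.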
and using diagonalization
\( \hspace{24} B_b^h = W *^dV(u^h) * W^{-1} \)
Now the alternating sum is (using the small letter "i" for the imaginary unit to prevent confusion with "I" as the identity matrix)
\( \hspace{24} AS(2,i) = V(i)\sim * B_b^0 - V(i)\sim *B_b^1 + V(i)\sim*B_b^2 - ... + ... \)
\( \hspace{24} = V(i)\sim*(B_b^0 - B_b^1 + B_b^2 - ... +...) \)
\( \hspace{24} = V(i)\sim*W *(^dV(u^0)-^dV(u^1)+^dV(u^2)-...+...)*W^{-1} \)
\( \hspace{24} = V(i)\sim*W *(X)*W^{-1} \)
where we have to determine the entries of the diagonal matrix X. But they can all be determined by the geometric-series rule:
\( \hspace{24} X[0] = (u^0)^0 - (u^1)^0+(u^2)^0 ... = 1/(1+u^0) = 1/(1+1)=1/2 \)
\( \hspace{24} X[1] = (u^0)^1 - (u^1)^1+(u^2)^1 ... = 1/(1+u^1) \)
\( \hspace{24} X[2] = (u^0)^2 - (u^1)^2+(u^2)^2 ... = 1/(1+u^2) \)
...
(Note that no fractional powers are needed, so the multiplications in the exponents are commutative and can be reordered.)
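To see the geometric-series rule at work numerically, here is a toy illustration of my own; the value u = ln 2 simply stands in for a case with |u| < 1 (as for base b = sqrt(2)). The Cesàro mean of the partial sums is used so that the boundary entry k = 0, whose ratio is exactly 1, also lands on 1/(1+u^0) = 1/2:

```python
import numpy as np

u = np.log(2)   # assumption: an example value with |u| < 1 (e.g. base b = sqrt(2))
m = np.arange(400)

for k in range(4):
    q = u ** k                              # ratio of the k-th alternating geometric series
    partial = np.cumsum((-1.0) ** m * q ** m)
    cesaro = partial.mean()                 # Cesaro mean; handles the oscillating case q = 1 (k = 0)
    print(k, cesaro, 1 / (1 + q))           # the two values agree closely
```

For k = 0 the partial sums oscillate 1, 0, 1, 0, ... and the Cesàro mean gives exactly 1/2; for k > 0 the series converges outright.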
Then
\( \hspace{24} X = diag([1/(1+u^0),1/(1+u^1),1/(1+u^2),...]) \)
which can be described by
\( \hspace{24} X = (I +^dV(u))^{-1} \) // "I" means here the identity-matrix
so the eigenvalues of
\( \hspace{24} AS_b = W * X * W^{-1} \)
do not form a Vandermonde vector, by the way.
In its shortest form we may simply write, again invoking the identities of the diagonalization,
\( \hspace{24} AS(2,i) = V(i)\sim *(I + B_b)^{-1} [,1] \) // [,1] means: use only column 1 (the second) for the coefficients
The nice aspect now is that even for b > e^(1/e), i.e. abs(u) > 1, which leads to divergent trajectories, the eigenvalues of the matrix (I+Bb)^-1 form a convergent sequence (well, I still have to check the precise range), and AS() gives a reasonable result.
I've attached a plot for some bases, with x=1 (instead of i as in the case here), which compares the (Cesàro- and) Euler-summable bases and the Shanks-summable bases against the values from diagonalization; the diagonalization extends the summability to arbitrarily high bases. (Note: this is an old picture; for instance I used "s" for the base and "matrix-method" for "diagonalization".)
[attachment=77]
The result looks pretty smooth...
Because of the nice eigenvalue-configuration, you may arrive at these results even without fixpoint-shifts and the expression by the Ut-matrix: simply invert I+Bb (using an empirical Bb-matrix) of reasonable size, say 32x32 or 64x64, to get good approximations.
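A minimal numpy sketch of that empirical recipe, under my own assumptions (truncation n = 32 and the closed form B[k,j] = (j·ln b)^k / k! for the empirical Bb-matrix); the printed complex number is simply what the 32x32 truncation yields and can be compared against the proposal quoted at the top:

```python
import math
import numpy as np

n = 32
b = 2.0
lb = math.log(b)

# assumed empirical Carleman matrix for x -> b^x: B[k, j] = (j*ln b)^k / k!
B = np.array([[(j * lb) ** k / math.factorial(k) for j in range(n)]
              for k in range(n)])

# column 1 (the second) of (I + Bb)^{-1}, via a linear solve instead of a full inverse
e1 = np.zeros(n)
e1[1] = 1.0
coeffs = np.linalg.solve(np.eye(n) + B, e1)

# sanity check: AS(x) + AS(b^x) = x holds exactly within the truncated algebra,
# since V(x)~ * (I + B) * coeffs = V(x)~ * e1 = x
x = 0.3
Vx = x ** np.arange(n)
print(Vx @ coeffs + (Vx @ B) @ coeffs)   # close to x = 0.3

# evaluate at x = i:  AS(2,i) = V(i)~ * coeffs
Vi = 1j ** np.arange(n)
AS = Vi @ coeffs
print(AS)
```

Using a linear solve for just the needed column avoids forming the full inverse; increasing n (e.g. to 64) gives a quick empirical check of the stability of the result.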
Gottfried
Gottfried Helms, Kassel

