bo198214 Wrote: However, I want a clear distinction whether you use the "matrix method with fixpoint shift", which is nothing else than regular iteration, or the "matrix method at a non-fixpoint", which is a different method that is also capable of obtaining real iterates for \( b>e^{1/e} \)
I see the specification of "finite matrices" again in the earlier post...
Whatever I derive in my matrix formulae, I consider it for the case of infinite dimension only. This makes no difference where we have triangular matrices, but it does matter, for instance, when inverses or eigenmatrices of non-triangular matrix operators (Bell/Carleman matrices) are under discussion.
Only in my very first postings in summer 2007 did I refer to empirical, truncated matrices, take inverses, and compute fractional iterates just from the given dimension. Since then I have tried to determine the entries of the matrices I work with for the infinite size, and to rely strictly on such matrices (even though they too are truncated for actual computation). So, for instance, it surprised me when I recently studied Andy's slog-matrices in more depth and understood that the slog uses the inverse of a finite square matrix and develops its coefficients from that (tending towards the limiting values, however; see especially Jay D. Fox's massive investigation of the characteristics of the numerical errors). You gave a partial alternative computation scheme, which at least keeps the coefficients constant as the matrix size increases. Such a class of approaches I would call essentially polynomial approximations, where not only does the truncation of the series introduce errors, but the coefficients themselves are approximate as well.
The concept of using infinite matrices then faces the problem of non-invertibility, for instance with the (square) Bell matrix for b^x. However, the Bell matrix can be decomposed into two triangular matrix factors (both of infinite size) which *can* be inverted meaningfully (retaining exact entries). Only the product of the inverted factors *cannot* be defined. This is where the fixpoint-shift steps in:
If we have the Bell matrix B for b^x, then we cannot invert B in the infinite case. But
B = fS2F * P~ // both triangular, invertible
and if we want to use the inverse, we can formally write
PInv~ * fS1F
but this has singularities in the infinite case and we are not allowed to evaluate it (in the original post I marked the "*"-multiplication red as "forbidden").
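The factorization itself can be checked in exact rational arithmetic. Below is a small sketch (my own illustration, not Gottfried's code; the names fS2F, Pt, B follow his notation): the factorially-scaled Stirling matrix fS2F has entries k!·S2(n,k)/n!, P~ has entries C(k,n), and their product reproduces the Bell/Carleman matrix of e^x with entries k^n/n!. Note that because fS2F vanishes above its diagonal, the truncated product is *exact* entrywise, unlike products involving the non-triangular B.

```python
from fractions import Fraction
from math import comb, factorial

N = 8  # truncation size; the product below is exact because fS2F is triangular

# Stirling numbers of the second kind, S2[n][k], by the standard recurrence
S2 = [[0]*N for _ in range(N)]
S2[0][0] = 1
for n in range(1, N):
    for k in range(1, n+1):
        S2[n][k] = k*S2[n-1][k] + S2[n-1][k-1]

# fS2F[n][k] = k! * S2(n,k) / n! = coeff of x^n in (e^x - 1)^k   (lower triangular)
fS2F = [[Fraction(factorial(k)*S2[n][k], factorial(n)) for k in range(N)] for n in range(N)]

# P~ [n][k] = C(k,n) = coeff of x^n in (x+1)^k                   (upper triangular)
Pt = [[Fraction(comb(k, n)) for k in range(N)] for n in range(N)]

# B = fS2F * P~ should be the Bell matrix of e^x: B[n][k] = coeff of x^n in e^(kx) = k^n/n!
B = [[sum(fS2F[n][j]*Pt[j][k] for j in range(N)) for k in range(N)] for n in range(N)]
for n in range(N):
    for k in range(N):
        assert B[n][k] == Fraction(k**n, factorial(n))
```

The assertions pass for every entry of the truncation, illustrating that the two triangular factors carry exact coefficients while the non-triangular B is only invertible via them.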
But P (and with it PInv) is the binomial matrix, and it performs addition on the argument when operating on a power series:
V(x)~ *PInv~ = V(x-1)~
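This shift property of the binomial matrix can be verified exactly on a truncated example (a sketch of mine; `Pt`/`PtInv` stand for P~ and PInv~). Because P~ is triangular, the truncated identities hold with no error at all:

```python
from math import comb

N = 6
x = 3  # any sample value; the identities are exact since P~ is triangular

# P~ : entry [n][k] = C(k,n) = coeff of x^n in (x+1)^k
Pt    = [[comb(k, n)                for k in range(N)] for n in range(N)]
# PInv~ : entry [n][k] = (-1)^(k-n) C(k,n) = coeff of x^n in (x-1)^k
PtInv = [[(-1)**(k-n)*comb(k, n)    for k in range(N)] for n in range(N)]

Vx = [x**n for n in range(N)]   # V(x)~ = [1, x, x^2, ...]

shift_up   = [sum(Vx[n]*Pt[n][k]    for n in range(N)) for k in range(N)]
shift_down = [sum(Vx[n]*PtInv[n][k] for n in range(N)) for k in range(N)]

assert shift_up   == [(x+1)**k for k in range(N)]   # V(x)~ * P~    = V(x+1)~
assert shift_down == [(x-1)**k for k in range(N)]   # V(x)~ * PInv~ = V(x-1)~
```

This is just the binomial theorem written as a matrix product, which is exactly what the fixpoint-shift below exploits.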
That's where the shift comes in:
we can safely consider the matrix-equation
(using b = exp(1) = e, and writing the result on the lhs):
V(e^x)~ = V(x)~ * B = V(x)~ * (fS2F * P~)
Then, rearranging the invertible P~ as PInv~ to the left:
V(e^x)~ * PInv~ = V(x)~ * fS2F
then the invertible fS2F as fS1F to the left:
V(e^x)~ * PInv~ * fS1F = V(x)~
where the product PInv~ * fS1F that would construct B^-1 is forbidden *in the infinite case*; but applying the binomial theorem via PInv gives
V(e^x - 1)~ * fS1F = V(x)~
which is perfectly ok. Since fS1F performs log(1+x), this leads to
. V(e^x - 1)~ * fS1F = V( log(1 + (e^x - 1)) )~
.                    = V( log(e^x) )~
.                    = V(x)~
Note: this matrix algebra is only valid if infinite size is assumed throughout.
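The key step above, fS2F * fS1F = I (i.e. log(1 + (e^x - 1)) = x coefficient-wise), can be confirmed in exact arithmetic on truncations, since both factors are triangular (again a sketch of mine using Gottfried's matrix names):

```python
from fractions import Fraction
from math import factorial

N = 8  # truncation; the product of the triangular truncations is exactly the identity

# Stirling numbers: second kind S2 and signed first kind s1, by their recurrences
S2 = [[0]*N for _ in range(N)]; S2[0][0] = 1
s1 = [[0]*N for _ in range(N)]; s1[0][0] = 1
for n in range(1, N):
    for k in range(1, n+1):
        S2[n][k] = k*S2[n-1][k] + S2[n-1][k-1]
        s1[n][k] = s1[n-1][k-1] - (n-1)*s1[n-1][k]

# fS2F performs x -> e^x - 1, fS1F performs x -> log(1+x) on V(x)~
fS2F = [[Fraction(factorial(k)*S2[n][k], factorial(n)) for k in range(N)] for n in range(N)]
fS1F = [[Fraction(factorial(k)*s1[n][k], factorial(n)) for k in range(N)] for n in range(N)]

# column 1 of fS1F = coefficients of log(1+x): 0, 1, -1/2, 1/3, -1/4, ...
assert [fS1F[n][1] for n in range(1, N)] == [Fraction((-1)**(n-1), n) for n in range(1, N)]

# fS2F * fS1F = I encodes log(1 + (e^x - 1)) = x coefficient-wise
prod = [[sum(fS2F[n][j]*fS1F[j][k] for j in range(N)) for k in range(N)] for n in range(N)]
assert prod == [[Fraction(int(n == k)) for k in range(N)] for n in range(N)]
```

The truncated product equals the identity exactly, with no truncation error, which is the sense in which the triangular factors "can be inverted meaningfully" while B itself cannot.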
I do the same with the eigensystem decomposition / Schröder function. I found the fixpoint-shift in my matrix notation by simply proceeding from the initial equation
V(x)~ * B = V(y)~
decomposing B into the matrix factors P^-t~, P^t~ and a triangular C
V(x)~ * P^-t~ * C * P^t~ = V(y)~
applying the binomial theorem with the -t'th power of P~ on rhs and lhs
V(x-t)~ * C = V(y - t)~ // implements shift by t = fixpoint
where C is triangular and allows an eigendecomposition providing exact values.
Again: this holds only in the case of *infinite* size (and the often inadmissible or badly converging product P^t~ * C is avoided by the fixpoint-shift).
So this method then has its (t)error of computation *only* in the truncation of the power series (or in the truncation of the dot-product V(x-t)~ * C[,1]), while all *used coefficients* are *exact* (as far as logs and exps are assumed exact).

All in all, I use "regular iteration with fixpoint-shift", but only as far as I can represent it coherently in terms of infinite matrices / known closed-form expressions for the sums of the infinite power series which result from the implicit dot-products. Thus I have difficulties with b > eta, since the occurring complex-valued matrices C give unsatisfying power series, and I do not yet have a remedy to deal with those series appropriately.
Quote: I'll answer the other part of your post later and wish you all the best for your health! Henryk

Yes, thanks! It's progressing diabetes; sometimes I'm fitter, sometimes struck down, and in general just less powerful and less quick to recover than in recent years. Just life..
Gottfried
Gottfried Helms, Kassel

