Yesterday I succeeded in analytically solving the eigensystem-decomposition of the tetration-matrix Bs.

Recall my usual notation:

----------------------------------------------------------------

\( \hspace{24} V(x) = column(x^0,x^1,x^2,...) \)

is a type of column-vector. This notation is only used if the vector in an equation has this form of consecutive powers.

\( \hspace{24} \phantom{a}_dV(x) = diag(V(x)) \) is the corresponding diagonal matrix

\( \hspace{24} V(x)\sim \) is the transpose of V(x), thus a row-vector

\( \hspace{24} F = column(0!,1!,2!,...) \hspace{24}\phantom{a}_dF=diag(F) \) the factorial column-vector resp. its diagonal version

\( \hspace{24} B := b_{row,column} = b_{r,c} = \frac{c^r}{r!} \) with indices r,c beginning at zero,

is then the matrix which performs x -> e^x (so e -> e^e)

\( \hspace{24} B_s := bs_{row,column} = bs_{r,c} = \frac{(\log(s)*c)^r}{r!} \) the matrix which performs x -> s^x (so s -> s^s),

so that

\( \hspace{24} V(s)\sim * B_s = V(s^s)\sim \)

The simplest case:

\( \hspace{24} V(1)\sim * B_s = V(s^1)\sim \)

and the continuous tetration, where s is within the bounds described below:

\( \hspace{24} V(1)\sim * B_s^y = V(s\^\^y)\sim \)
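As a purely numerical illustration (my sketch, not the author's code; the dimension d and the base t are example choices), one can truncate Bs to a d×d matrix and check the mapping \( V(x)\sim * B_s = V(s^x)\sim \) on the first few columns:

```python
import numpy as np

d = 32                    # truncation dimension (example choice)
t = 2.0                   # example base in (1/e, e)
s = t ** (1.0 / t)        # s = t^(1/t)
ls = np.log(s)            # ls = log(s) = log(t)/t

# factorials 0!..(d-1)! as floats
fact = np.cumprod(np.concatenate(([1.0], np.arange(1.0, d))))

# truncated matrix: bs[r,c] = (ls*c)^r / r!
r = np.arange(d).reshape(-1, 1)
c = np.arange(d).reshape(1, -1)
Bs = (ls * c) ** r / fact.reshape(-1, 1)

def V(x):
    """V(x) = (x^0, x^1, ..., x^(d-1))."""
    return x ** np.arange(d)

lhs = V(s) @ Bs           # V(s)~ * Bs
rhs = V(s ** s)           # V(s^s)~
print(np.max(np.abs(lhs[:8] - rhs[:8])))   # tiny truncation error
```

The low columns agree to near machine precision; columns near the truncation edge are less accurate, as is usual for such truncations.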

----------------------------------------------------------------

Assume for the following:

Code:

`bounds for scalars`

t in the range 1/e < t < e , lt = log(t)

s = t^(1/t) , ls = log(s) = lt/t

Bs is then also defined as

\( \hspace{24} B_s = \phantom{a}_dV(\log(s)) * B = \phantom{a}_dV(\frac{\log(t)}{t}) * B \)
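This factorization is an exact elementwise identity, since \( (\log(s))^r \cdot \frac{c^r}{r!} = \frac{(\log(s)*c)^r}{r!} \); a quick check on a truncation (my sketch, with example values for d and t):

```python
import numpy as np

d, t = 16, 2.0
s = t ** (1.0 / t)
ls = np.log(s)                                  # log(s) = log(t)/t

r = np.arange(d).reshape(-1, 1)
c = np.arange(d).reshape(1, -1)
fact = np.cumprod(np.concatenate(([1.0], np.arange(1.0, d)))).reshape(-1, 1)

B = c ** r / fact                               # b[r,c] = c^r / r!
Bs = (ls * c) ** r / fact                       # bs[r,c] = (ls*c)^r / r!
dV_ls = np.diag(ls ** np.arange(d))             # dV(log(s))

print(np.allclose(dV_ls @ B, Bs))               # the two constructions agree
```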

The eigensystem of Bs is then

\( \hspace{24} B_s = W_s * \phantom{a}_d\Lambda * W_s^{-1} \)

where \( \phantom{a}_d\Lambda \) is the diagonal matrix containing the eigenvalues \( \lambda_k \).

The first hypothesis was:

\( \hspace{24} \phantom{a}_d\Lambda = \phantom{a}_dV(lt) = \phantom{a}_dV(\log(t)) \)

This hypothesis fits the result and allows a consistent proof of the structural description of Ws.
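The hypothesis can be probed on a finite truncation (my sketch; numpy's general eigensolver, the dimension d, and the base t are example choices, and a truncation only approximates the eigenvalues of the infinite matrix, so the match degrades for the smaller eigenvalues):

```python
import numpy as np

d, t = 8, 1.5                    # example truncation and base, t in (1/e, e)
s = t ** (1.0 / t)
ls, lt = np.log(s), np.log(t)

r = np.arange(d).reshape(-1, 1)
c = np.arange(d).reshape(1, -1)
fact = np.cumprod(np.concatenate(([1.0], np.arange(1.0, d)))).reshape(-1, 1)
Bs = (ls * c) ** r / fact        # truncated Bs

eigvals, W = np.linalg.eig(Bs)
order = np.argsort(-np.abs(eigvals))
approx = eigvals[order].real             # eigenvalues, sorted descending
hypoth = lt ** np.arange(d)              # hypothesis: lambda_k = log(t)^k

print(approx[:4])                        # compare with the hypothesis
print(hypoth[:4])

# sanity check: the numerical decomposition reconstructs the truncation
Winv = np.linalg.inv(W)
print(np.allclose(W @ np.diag(eigvals) @ Winv, Bs))
```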

Because it was simpler to analyze Ws^-1, I'll start with that description.

From Ws^-1 the diagonal matrix dV(t) can be factored out, and the remaining matrix is called Qs, which still depends on s:

\( \hspace{24} W_s^{-1} = Q_s * \phantom{a}_dV(t) \)

or

\( \hspace{24} Q_s = W_s^{-1} * \phantom{a}_dV(t)^{-1} \)

Qs can be factored further:

\( \hspace{24} Q_s = X_s * P\sim \)

where P is the lower triangular Pascal-matrix (of binomial-coefficients).
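For reference, a sketch of the Pascal matrix P (with \( p_{r,c} = \binom{r}{c} \)), which can be built exactly; its transposed inverse \( (P\sim)^{-1} \) also has a known exact form, the signed binomial coefficients:

```python
import numpy as np
from math import comb

d = 6
# lower triangular Pascal matrix: p[r,c] = binomial(r, c)
P = np.array([[comb(r, c) for c in range(d)] for r in range(d)], dtype=float)

PI = np.linalg.inv(P.T)          # (P~)^{-1}
# the inverse has signed binomial entries: PI[r,c] = (-1)^(r+c) * binomial(c, r)
PI_exact = np.array([[(-1) ** (r + c) * comb(c, r) for c in range(d)]
                     for r in range(d)], dtype=float)
print(np.allclose(PI, PI_exact))
```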

Having decomposed the structure of the entries of Xs analytically, we now have the analytical solution for Ws^-1 as well. In other words: we can easily compute the correct finite-dimensional truncation of the theoretically infinite square matrix Ws^-1. That was the aim of my consideration of this matrix.

It also turns out that Xs, the only part depending on s (or t), is a triangular matrix, which can also be proven.

Its rows, up to a finite dimension d, can be computed by linear combinations of powers of log(t), at most up to the d'th power of log(t). So, if we assume logarithms to be exact, the d'th row of Xs can be determined exactly in d terms of powers of such logarithms with maximal exponent d, plus one reciprocal.

This is a very satisfactory result, since we no longer rely on numerical approximations from implementations of numerical eigensystem solvers, and we can develop checks for bounds and for the quality of numerical approximations at a given finite dimension d, and possibly extend to analytic continuation based on the formal description of W^-1.

----------------------------------------

However, determining the non-inverse, W itself, is still not trivial.

On the one hand, we have, with the decomposition of W^-1 into two triangular matrices (and the final column-scaling),

\( \hspace{24} W^{-1} = X_s * P\sim * \phantom{a}_dV(t) \)

where each component can be inverted exactly for any finite dimension d, so that formally

\( \hspace{24} W = \phantom{a}_dV(t)^{-1} * (P\sim)^{-1} * X_s^{-1} \)

Let's denote the inverses as PI = (P~)^-1 and XIs = Xs^-1:

\( \hspace{24} W = \phantom{a}_dV(\frac1t) * PI * XI_s \)

Then, on the other hand, the theoretical and practical matrix multiplication PI * XIs involves summation of infinite series, and for the example t=2 these series are not absolutely convergent. So, unless I can make further progress in describing the entries of XIs too, this means we still get only approximate results for W itself, and thus still for the general powers of Bs.

However, the series which have to be summed have alternating signs, and the growth rate of their terms is lower than exponential, so they can be summed regularly by Euler summation. This approach has already given satisfactory results for determining W numerically from W^-1 along this analytical way.
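A minimal sketch of such an Euler summation (my illustration, not the author's implementation): the Euler transform realized as iterated averaging of partial sums, demonstrated on the classic slowly converging alternating series \( \log(2) = 1 - \frac12 + \frac13 - ... \):

```python
import numpy as np

def euler_sum(terms):
    """Sum an alternating series via the Euler transform, implemented as
    repeated averaging of adjacent partial sums. Effective when the terms
    alternate in sign and grow slower than exponentially."""
    s = np.cumsum(np.asarray(terms, dtype=float))   # partial sums
    while len(s) > 1:
        s = 0.5 * (s[:-1] + s[1:])                  # one averaging pass
    return s[0]

terms = [(-1) ** k / (k + 1) for k in range(40)]    # 1 - 1/2 + 1/3 - ...
est = euler_sum(terms)
print(est, abs(est - np.log(2)))    # error far below the plain truncation error
```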

(But let this part, improving the computation of W, be the next step for further study.)

-----------------------------------------

The derivation of the structure of the entries of W^-1 is relatively simple in terms of matrix operations. However, since I don't see much discussion of the matrix concept here, I'll describe it if requested (I'm also a bit exhausted from the concentration of the last days).

One note, anyway: the way of proving the analytical solution involves solving linear equations, which I now found to have been studied earlier, for instance by Keith Briggs (page 12, about the Schröder equations), and I think I'm beginning to understand these concepts for the first time.

Btw: I had a very nice, easy afternoon yesterday... :-)

Gottfried