I don't really know whether my earlier approach to finding a diagonalization for the b^x problem really constitutes a proof, but it may serve as the basis for one.
Assume the Carleman matrix of infinite size for a base b, 1 < b < e^(1/e), using the letters c = log(b), b = t^(1/t) (so that t is the fixpoint, b^t = t), u = log(t),
written as
Bb = (its top-left edge)
\( \hspace{24}
\begin{array}{rrrrrr}
1 & 1 & 1 & 1 & 1 & 1 \\
0 & c*1 & c*2 & c*3 & c*4 & c*5 \\
0 & c^2*\frac{1^2}{2!} & c^2*\frac{2^2}{2!} & c^2*\frac{3^2}{2!} & c^2*\frac{4^2}{2!} & c^2*\frac{5^2}{2!} \\
0 & c^3*\frac{1^3}{3!} & c^3*\frac{2^3}{3!} & c^3*\frac{3^3}{3!} & c^3*\frac{4^3}{3!} & c^3*\frac{5^3}{3!} \\
0 & c^4*\frac{1^4}{4!} & c^4*\frac{2^4}{4!} & c^4*\frac{3^4}{4!} & c^4*\frac{4^4}{4!} & c^4*\frac{5^4}{4!} \\
0 & c^5*\frac{1^5}{5!} & c^5*\frac{2^5}{5!} & c^5*\frac{3^5}{5!} & c^5*\frac{4^5}{5!} & c^5*\frac{5^5}{5!}
\end{array}
\)
to the effect that
\( \hspace{24} V(x) * Bb = V(b^x) \)
where, as usual, V(x) denotes the Vandermonde vector [1, x, x^2, x^3, ...] of a parameter x. Differing from my usual notation, let's take V(x) to be a row vector in this sequel, to reduce notational overhead.
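As a quick numeric plausibility check (not part of the argument), here is a minimal Python sketch with a truncated Bb; the concrete values b = sqrt(2), t = 2 (so b^t = t), the test point x = 0.5 and the truncation size N = 32 are my own choices for illustration:

```python
from math import log, factorial

N = 32                       # truncation size; rows decay like (c*k)^r / r!
t = 2.0                      # chosen fixpoint, b^t = t
b = t ** (1 / t)             # b = sqrt(2), inside 1 < b < e^(1/e)
c = log(b)

# truncated Carleman matrix: entry [r][k] = c^r * k^r / r!
Bb = [[c**r * k**r / factorial(r) for k in range(N)] for r in range(N)]

def V(x):
    """Vandermonde row vector [1, x, x^2, ...] (truncated to N entries)."""
    return [x**r for r in range(N)]

def vecmat(v, M):
    """Row vector times matrix."""
    return [sum(v[r] * M[r][k] for r in range(N)) for k in range(N)]

x = 0.5
lhs = vecmat(V(x), Bb)       # V(x) * Bb
rhs = V(b**x)                # V(b^x)
print([round(lhs[k] - rhs[k], 12) for k in range(6)])  # leading entries ~ 0
```

The leading entries agree to machine precision, since the column sums are just the exponential series for (b^x)^k = exp(c*k*x).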
Now, if we look for a diagonalization of Bb, it would have the form
\( \hspace{24} Bb = W^{-1} * D * W \)
(W and W^{-1} exchanged relative to my usual notation elsewhere).
The diagonalization theorems for finite matrices then give
\( \hspace{24} W * Bb = D * W \)
so each row of W becomes a \( d_r \)-multiple of itself under this transformation, where \( d_r \) denotes the r-th eigenvalue on the diagonal of D and r is the row number, beginning at zero.
Then, obviously, if we use the fixpoint t of b^x (so that b^t = t) we have, for instance,
\( \hspace{24} V(t) * Bb = 1 * V(b^t) = 1 * V(t) \)
and V(t) satisfies the condition to be an eigenvector for the eigenvalue 1. So assume \( d_0 = 1 \) and \( W_0 = V(t) \).
Next one can show (the proof, which I'll write out later, follows from differentiating the relation V(x) * Bb = V(b^x) with respect to x and setting x = t) that another vector E_1(t) also satisfies the eigenvector condition,
\( \hspace{24} E_1(t) = [0, 1*t, 2*t^2, 3*t^3, ...] \)
such that
\( \hspace{24} E_1(t) * Bb = u * E_1(t) \)
so we have W_1 = E_1 and d_1 = u.
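Both eigenvector claims, E_0 = V(t) for d_0 = 1 and E_1 for d_1 = u, can be checked with the same truncated matrix as in the sketch above (again with the illustrative values t = 2, b = sqrt(2); the truncation only disturbs the far-out entries, so only the leading ones are compared):

```python
from math import log, factorial

N = 32
t = 2.0
b = t ** (1 / t)
c = log(b)
u = log(t)

Bb = [[c**r * k**r / factorial(r) for k in range(N)] for r in range(N)]
vecmat = lambda v, M: [sum(v[r] * M[r][k] for r in range(N)) for k in range(N)]

E0 = [t**r for r in range(N)]         # E_0 = V(t) = [1, t, t^2, ...]
E1 = [r * t**r for r in range(N)]     # E_1 = [0, 1*t, 2*t^2, 3*t^3, ...]

lhs0 = vecmat(E0, Bb)                 # E_0 * Bb
lhs1 = vecmat(E1, Bb)                 # E_1 * Bb
print([round(lhs0[k] / E0[k], 10) for k in range(1, 6)])        # ratios ~ 1 (= d_0)
print([round(lhs1[k] / (u * E1[k]), 10) for k in range(1, 6)])  # ratios ~ 1 (d_1 = u)
```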
It is difficult to get an idea about E_2 by sheer inspection of example data; but it seems that an extrapolation makes sense: we may rescale W by dV(1/t), where dV(x) denotes the diagonally written Vandermonde vector diag(1, x, x^2, x^3, ...), with the effect that the descriptions of E_0 and E_1 (and hopefully of all E_k) reduce to their numeric coefficients, independent of t.
So we restate the diagonalization in the following form:
\( \hspace{24} W = X * dV(t) \)
\( \hspace{24} X = W * dV(1/t) \)
\( \hspace{24} Bb = dV(1/t) * X^{-1} * D * X * dV(t) \)
\( \hspace{24} dV(t) * Bb * dV(1/t) = X^{-1} * D * X \)
and investigate X instead of W.
Second, the base equation for a parameter x changes.
Let's call dV(t) * Bb * dV(1/t) = Bb1.
The parameter x now has to be divided by t beforehand, and the result has to be multiplied by t afterwards:
\( \hspace{24} (V(x) * dV(1/t)) * (dV(t) * Bb * dV(1/t)) * dV(t) = V(b^x) \)
\( \hspace{24} V(x/t) * Bb1 * dV(t) = V(b^x) \)
\( \hspace{24} V(x/t) * Bb1 = V(b^x) * dV(1/t) \)
\( \hspace{24} V(x/t) * Bb1 = V(b^x/t) \)
Note that in the above definition of Bb we have the constant factor c^r in each row, which is now multiplied by t^r due to the premultiplication by dV(t). But c = log(b) = u/t, and we have
\( \hspace{24} t^r * c^r = (t*c)^r = u^r \)
so the row multiplicator of dV(t) * Bb * dV(1/t) is now dV(u).
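Again purely as an illustration (same test values as before), the closed form of Bb1 and the rescaled base equation can be verified numerically:

```python
from math import log, factorial

N = 32
t = 2.0
b = t ** (1 / t)
u = log(t)

# Bb1 = dV(t) * Bb * dV(1/t): entry [r][k] picks up the factor t^r / t^k,
# so the row factor c^r becomes (t*c)^r = u^r
Bb1 = [[u**r * k**r / factorial(r) / t**k for k in range(N)] for r in range(N)]

V = lambda x: [x**r for r in range(N)]
vecmat = lambda v, M: [sum(v[r] * M[r][k] for r in range(N)) for k in range(N)]

x = 1.5                                 # arbitrary test point
lhs = vecmat(V(x / t), Bb1)             # V(x/t) * Bb1
rhs = V(b**x / t)                       # V(b^x / t)
print([round(lhs[k] - rhs[k], 12) for k in range(6)])  # leading entries ~ 0
```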
Well, let's go back to the previous considerations.
First we have
\( \hspace{24} X_0 = E_0 * dV(1/t) = [1,1,1,1,1,...] \)
\( \hspace{24} X_1 = E_1 * dV(1/t) = [0,1,2,3,4,...] \)
and we may assume that X_2, X_3, ... follow a simple scheme.
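That X_0 and X_1 are indeed left eigenvectors of Bb1 for the eigenvalues 1 and u can again be confirmed with the illustrative truncation used above:

```python
from math import log, factorial

N = 32
t = 2.0
u = log(t)

Bb1 = [[u**r * k**r / factorial(r) / t**k for k in range(N)] for r in range(N)]
vecmat = lambda v, M: [sum(v[r] * M[r][k] for r in range(N)) for k in range(N)]

X0 = [1.0] * N                        # X_0 = [1,1,1,...]
X1 = [float(r) for r in range(N)]     # X_1 = [0,1,2,3,...]

print([round(y, 10) for y in vecmat(X0, Bb1)[:6]])  # ~ 1 * X_0 = [1,1,1,1,1,1]
print([round(y, 10) for y in vecmat(X1, Bb1)[:6]])  # ~ u * X_1 = [0, u, 2u, ...]
```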
One assumption is that X could come out as a composition with the Pascal matrix, say
\( \hspace{24} X = S * P^{\sim} \)
(with \( P^{\sim} \) the transposed Pascal matrix of binomial coefficients).
The interesting thing - and maybe the basis for a final proof - is now that, with this assumption, S can be found by an iterative process if we assume that d_r = u^r. The iterative process requires an eigensystem solution for each row of X, but one which requires only the results of the previous steps, and it leads to a triangular solution S (which comes out to be the Ut-matrix, btw).
Because the latter solution is a) solvable and b) not arbitrary under the assumption d_r = u^r, I think this may be a path to the proof. However - even if the solution is unique under this assumption - one may find other solutions with another assumption.
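I cannot reproduce the row-by-row construction of S here, but the assumption d_r = u^r itself can at least be probed numerically: the leading eigenvalues of a truncation of Bb1 should come out near the powers of u, while the higher ones get increasingly distorted by the truncation. A minimal sketch (numpy for the eigensystem, same illustrative values as above):

```python
from math import log, factorial
import numpy as np

N = 24
t = 2.0
u = log(t)

Bb1 = np.array([[u**r * k**r / factorial(r) / t**k for k in range(N)]
                for r in range(N)])

# eigenvalue moduli, sorted descending; expected to approximate 1, u, u^2, ...
ev = np.sort(np.abs(np.linalg.eigvals(Bb1)))[::-1]
for r in range(6):
    print(r, ev[r], u**r)
```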
Gottfried Helms, Kassel

