Eigenvalues of the Carleman matrix of b^x
I don't really know whether my earlier approach to finding a diagonalization for the b^x problem is really a proof, but it may be a basis for one.



Assume the Carleman matrix of infinite size for a base b, 1 < b < e^(1/e), using the letters c = log(b), t given by b = t^(1/t), and u = log(t),
written as
Bb = (showing its top-left corner)
\( \hspace{24pt}
\begin{array}{rrrrrr}
1 & 1 & 1 & 1 & 1 & 1 \\[8pt]
0 & c\cdot 1 & c\cdot 2 & c\cdot 3 & c\cdot 4 & c\cdot 5 \\[8pt]
0 & c^2\frac{1^2}{2!} & c^2\frac{2^2}{2!} & c^2\frac{3^2}{2!} & c^2\frac{4^2}{2!} & c^2\frac{5^2}{2!} \\[8pt]
0 & c^3\frac{1^3}{3!} & c^3\frac{2^3}{3!} & c^3\frac{3^3}{3!} & c^3\frac{4^3}{3!} & c^3\frac{5^3}{3!} \\[8pt]
0 & c^4\frac{1^4}{4!} & c^4\frac{2^4}{4!} & c^4\frac{3^4}{4!} & c^4\frac{4^4}{4!} & c^4\frac{5^4}{4!} \\[8pt]
0 & c^5\frac{1^5}{5!} & c^5\frac{2^5}{5!} & c^5\frac{3^5}{5!} & c^5\frac{4^5}{5!} & c^5\frac{5^5}{5!}
\end{array}
\)

to the effect that
\( \hspace{24pt} V(x) * Bb = V(b^x) \)

where, as usual, V(x) denotes a Vandermonde vector [1, x, x^2, x^3, ...] of a variable parameter x. But differing from my usual notation, let's assume V(x) to be a row vector in this sequel, to reduce notational overhead.
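This can be checked numerically with a finite truncation of Bb. The following Python sketch (my own illustration, not part of the original argument: base b = sqrt(2) and truncation size n = 32 are arbitrary choices) builds the n x n top-left section, with general entry Bb[r][k] = (c*k)^r / r!, and compares V(x)*Bb with V(b^x) on the first few components, where the truncation error is negligible:

```python
import math

n = 32                       # truncation size of the infinite Carleman matrix
b = math.sqrt(2.0)           # illustrative base with 1 < b < e^(1/e)
c = math.log(b)

# Bb[r][k] = (c*k)^r / r!  -- coefficient of x^r in (b^x)^k = e^(c*k*x)
Bb = [[(c * k) ** r / math.factorial(r) for k in range(n)] for r in range(n)]

def V(x):
    """Row Vandermonde vector [1, x, x^2, ...] of length n."""
    return [x ** r for r in range(n)]

def row_times_matrix(v, M):
    return [sum(v[r] * M[r][k] for r in range(n)) for k in range(n)]

x = 0.7
lhs = row_times_matrix(V(x), Bb)
rhs = V(b ** x)
# compare only the first few components, where truncation error is negligible
print([abs(lhs[k] - rhs[k]) < 1e-9 for k in range(6)])
```

Each component of V(x)*Bb is a truncated exponential series for e^(c*k*x) = (b^x)^k, so the agreement improves with n.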

Now, if we look for a diagonalization of Bb, it would have the form

\( \hspace{24pt} Bb = W^{-1} * D * W \)
(W and W^{-1} are exchanged here relative to my usual notation elsewhere.)

The diagonalization theorems for finite matrices then give

\( \hspace{24pt} W * Bb = D * W \)

so each row of W becomes a \( d_r \)-multiple of itself under this transformation, where \( d_r \) denotes the r'th eigenvalue on the diagonal of D, and r is the row number, beginning at zero.

Then, obviously, if we use a fixpoint t of b^x (so that b^t = t), we have, for instance,

\( \hspace{24pt} V(t) * Bb = 1 * V(b^t) = 1 * V(t) \)
and V(t) satisfies the condition of being an eigenvector for the eigenvalue 1. So assume \( d_0 = 1 \) and \( W_0 = V(t) \).
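Numerically, for the illustrative base b = sqrt(2) the fixpoint is t = 2 (since sqrt(2)^2 = 2), and a truncated section of Bb indeed maps V(t) (approximately) onto itself; a small sketch, with my own choice of truncation size:

```python
import math

n = 32
b = math.sqrt(2.0)           # illustrative base; its fixpoint is t = 2 since b^2 = 2
t = 2.0
c = math.log(b)

Bb = [[(c * k) ** r / math.factorial(r) for k in range(n)] for r in range(n)]
Vt = [t ** r for r in range(n)]          # V(t) = [1, t, t^2, ...]

Vt_Bb = [sum(Vt[r] * Bb[r][k] for r in range(n)) for k in range(n)]
# V(t)*Bb should reproduce V(t): eigenvector for the eigenvalue d_0 = 1
print([abs(Vt_Bb[k] / Vt[k] - 1.0) < 1e-9 for k in range(6)])
```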

Next one can show (I'll add the proof later) that another vector E_1(t) also satisfies the eigenvector condition:
\( \hspace{24pt} E_1(t) = [0,\; 1*t,\; 2*t^2,\; 3*t^3, \ldots] \)
such that
\( \hspace{24pt} E_1(t) * Bb = u * E_1(t) \)
so we have \( W_1 = E_1 \) and \( d_1 = u \).
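Reading the general term of E_1(t) as k*t^k (the reading that makes the later rescaling by dV(1/t) produce [0,1,2,3,...]), this eigenvector condition can also be verified numerically; again a sketch with my illustrative choices b = sqrt(2), t = 2:

```python
import math

n = 32
b = math.sqrt(2.0)
t = 2.0                      # fixpoint of b^x for b = sqrt(2)
c = math.log(b)
u = math.log(t)              # u = c*t

Bb = [[(c * k) ** r / math.factorial(r) for k in range(n)] for r in range(n)]
E1 = [r * t ** r for r in range(n)]      # E_1(t) = [0, 1*t, 2*t^2, 3*t^3, ...]

E1_Bb = [sum(E1[r] * Bb[r][k] for r in range(n)) for k in range(n)]
# E_1(t)*Bb should equal u * E_1(t) componentwise (skip k = 0, where both sides are 0)
print([abs(E1_Bb[k] / (u * E1[k]) - 1.0) < 1e-9 for k in range(1, 6)])
```

Analytically, component k of E_1(t)*Bb is the series sum_r r*(u*k)^r/r! = u*k*e^(u*k) = u * k*t^k, which is exactly u times component k of E_1(t).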

It is difficult to get an idea of E_2 by sheer inspection of example data; but it seems that an extrapolation makes sense: we may rescale W by dV(1/t), with the effect that the descriptions of E_0 and E_1 (and hopefully of all E_k) reduce to their numeric coefficients, independent of t.

So we restate the diagonalization in the following form:
\( \hspace{24pt} W = X * dV(t) \)
\( \hspace{24pt} X = W * dV(1/t) \)
\( \hspace{24pt} Bb = dV(1/t) * X^{-1} * D * X * dV(t) \)
\( \hspace{24pt} dV(t) * Bb * dV(1/t) = X^{-1} * D * X \)

and investigate X instead of W.

Second, the base equation for a parameter x changes.
Let's call dV(t) * Bb * dV(1/t) = Bb_1.
The parameter x now has to be divided by t, and the result has to be rescaled by dV(t):
\( \hspace{24pt} (V(x) * dV(1/t)) * (dV(t) * Bb * dV(1/t)) * dV(t) = V(b^x) \)
\( \hspace{24pt} V(x/t) * Bb_1 * dV(t) = V(b^x) \)
\( \hspace{24pt} V(x/t) * Bb_1 = V(b^x) * dV(1/t) \)
\( \hspace{24pt} V(x/t) * Bb_1 = V(b^x/t) \)
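The rescaled base equation can be checked with a truncated Bb_1 as well (my illustrative setup again: b = sqrt(2), t = 2, arbitrary x). Here the entries of Bb_1 are written directly via the closed form Bb_1[r][k] = u^r * k^r / (r! * t^k), which follows from t^r * c^r = u^r:

```python
import math

n = 32
b = math.sqrt(2.0)
t = 2.0
u = math.log(t)
x = 0.9                      # arbitrary test parameter

# Bb_1 = dV(t)*Bb*dV(1/t), written directly: Bb_1[r][k] = (u*k)^r / (r! * t^k)
Bb1 = [[(u * k) ** r / (math.factorial(r) * t ** k) for k in range(n)] for r in range(n)]

lhs = [sum((x / t) ** r * Bb1[r][k] for r in range(n)) for k in range(n)]
rhs = [(b ** x / t) ** k for k in range(n)]      # V(b^x/t)
# V(x/t) * Bb_1 should approximate V(b^x/t) in the leading components
print([abs(lhs[k] - rhs[k]) < 1e-9 for k in range(8)])
```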


Note that in the above definition of Bb each row r carries the constant factor c^r, which now gets multiplied by t^r due to the premultiplication by dV(t). But c = log(b) = u/t, so we have
\( \hspace{24pt} t^r c^r = (t c)^r = u^r \)
and the row multiplicator of dV(t) * Bb * dV(1/t) is now u^r, i.e. dV(u).
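This can be confirmed entrywise on a small truncation (again my illustrative choices b = sqrt(2), so t = 2 and u = log 2): scaling row r of Bb by t^r and column k by t^-k reproduces the u^r row factor exactly, up to float rounding.

```python
import math

n = 16
b = math.sqrt(2.0)
t = 2.0
c = math.log(b)
u = math.log(t)

Bb  = [[(c * k) ** r / math.factorial(r) for k in range(n)] for r in range(n)]
# similarity transform: Bb_1 = dV(t) * Bb * dV(1/t)
# i.e. scale row r by t^r and column k by t^(-k)
Bb1 = [[t ** r * Bb[r][k] / t ** k for k in range(n)] for r in range(n)]

# row r of Bb_1 should carry the factor u^r instead of c^r:
# Bb_1[r][k] = u^r * k^r / (r! * t^k)
ok = all(abs(Bb1[r][k] - u ** r * k ** r / (math.factorial(r) * t ** k)) < 1e-12
         for r in range(n) for k in range(n))
print(ok)
```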

Well, let's go back to the previous line of argument.
First we have
\( \hspace{24pt} X_0 = E_0 * dV(1/t) = [1,1,1,1,1,...] \)
\( \hspace{24pt} X_1 = E_1 * dV(1/t) = [0,1,2,3,4,...] \)
and we may assume that X_2, X_3, ... follow a simple scheme.
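Indeed, with a truncated Bb_1 the purely numeric vectors X_0 and X_1 behave (approximately) as left eigenvectors for the eigenvalues 1 and u; a sketch under the same illustrative assumptions (b = sqrt(2), t = 2) as before:

```python
import math

n = 32
b = math.sqrt(2.0)
t = 2.0
u = math.log(t)

# Bb_1 = dV(t)*Bb*dV(1/t), entries via the u^r row factor: (u*k)^r / (r! * t^k)
Bb1 = [[(u * k) ** r / (math.factorial(r) * t ** k) for k in range(n)] for r in range(n)]

X0 = [1.0] * n                       # X_0 = [1,1,1,...]
X1 = [float(r) for r in range(n)]    # X_1 = [0,1,2,3,...]

X0_B = [sum(X0[r] * Bb1[r][k] for r in range(n)) for k in range(n)]
X1_B = [sum(X1[r] * Bb1[r][k] for r in range(n)) for k in range(n)]

print([abs(X0_B[k] - 1.0) < 1e-9 for k in range(6)])          # eigenvalue d_0 = 1
print([abs(X1_B[k] - u * X1[k]) < 1e-9 for k in range(1, 6)]) # eigenvalue d_1 = u
```

Componentwise, X_0 * Bb_1 gives e^(u*k)/t^k = 1 and X_1 * Bb_1 gives u*k*e^(u*k)/t^k = u*k, matching the claimed eigenvalues.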
One assumption is that this could come out as a composition with the Pascal matrix, say
\( \hspace{24pt} X = S * P\sim \)

The interesting thing (and maybe the basis for a final proof) is that with this assumption S can be found by an iterative process, if we assume that d_r = u^r. The iterative process requires an eigensystem solution for each row of X, but it needs only the results of the previous steps and leads to a triangular solution S (which comes out to be the Ut-matrix, btw.)

Because the latter solution is a) solvable and b) not arbitrary under the assumption d_r = u^r, I think this may be a path to the proof. However, even if the solution is unique under this assumption, one may find other solutions under a different assumption.
Gottfried Helms, Kassel
Eigenvalues of the Carleman matrix of b^x - by Gottfried - 05/30/2008, 07:12 AM
