12/06/2022, 10:05 PM
(12/06/2022, 02:59 PM)Gottfried Wrote: Hmm, last point first. The Carleman-matrix approach gives a simple representation of the Schroeder mechanism, true. But the Abel equation falls outside the elegance that comes with the basic machinery of those matrices. Note that the Abel equation can be understood as the logarithm of the Schroeder equation, but there is no Carleman matrix for \( \log() \) (only for \( \log(1+x) \)). So when I worked with the Abel equation I only took the Carleman matrices as basic food and then applied my general (low) college math to them.
That Andydude's method can be understood as an Ansatz to model the Abel solution, and that I expressed this in my Carleman-matrix toolbox, is another thing: I simply introduced the (hypothetically applicable) Neumann series of a Carleman matrix and derived Andydude's Ansatz formally on this basis. But whether the (infinite) Neumann series of matrix powers (the analogue of the geometric series, with a matrix as argument) can formally be used, I have no definite theorem/proof for, and no sufficient answer to my question on MO.
After skimming the mentioned Aldrovandi article once I had downloaded it, I was discouraged by what were, at the least, imprecisions in their assumptions.
For instance, they write on the subject of diagonalization (and matrix functions):
Quote (pg 16): Bell matrices are not normal, that is, they do not commute with their transposes. Normality is the condition for diagonalizability. This means that Bell matrices cannot be put into diagonal form by a similarity transformation.
I had never heard this "are not normal" in the context of diagonalization. Taking the keyword "commute" here, I assume they mean that the eigenvector matrices must be orthogonal ("orthogonal": they are not only inverses of each other, but even transposes). But "diagonalizability" means only that the eigenvector matrices are inverses of each other.
I left this then as an open riddle for me, and did not involve myself much further with that article.
In Wikipedia one can find a proper definition of "diagonalizability":
Quote: In linear algebra, a square matrix \(A\) is called '''diagonalizable''' or '''non-defective''' if it is similar to a diagonal matrix, i.e., if there exists an invertible matrix \(P\) and a diagonal matrix \(D\) such that \(P^{-1}AP=D\), or equivalently \(A = PDP^{-1}\). (Such \(P\), \(D\) are not unique.)
For a finite-dimensional vector space \(V\), a linear map \(T:V\to V\) is called '''diagonalizable''' if there exists an ordered basis of \(V\) consisting of eigenvectors of \(T\).
These definitions are equivalent: if \(T\) has a matrix representation \(T = PDP^{-1}\) as above, then the column vectors of \(P\) form a basis consisting of eigenvectors of \(T\), and the diagonal entries of \(D\) are the corresponding eigenvalues of \(T\); with respect to this eigenvector basis, \(A\) is represented by \(D\).
'''Diagonalization''' is the process of finding the above \(P\) and \(D\).
(The Aldrovandi text continues with the following remark:
Quote: As it happens, this will not be a difficulty because we know their eigenvalues. That functions of matrices are completely determined by their spectra is justified on much more general grounds.
Ahh. At least we can still use the results of diagonalization... )
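To make the objection concrete: a matrix can fail to be normal and still be diagonalizable; non-normality only rules out diagonalization by an *orthogonal* (unitary) similarity, not by a general invertible one. A minimal numpy sketch (the particular matrix is just an illustrative choice):

```python
import numpy as np

# A is NOT normal (it does not commute with its transpose), yet it IS
# diagonalizable, since it has two distinct eigenvalues (1 and 2).
A = np.array([[1.0, 1.0],
              [0.0, 2.0]])

# Not normal:
print(np.allclose(A @ A.T, A.T @ A))     # False

# Diagonalizable: the eigenvector matrix P is invertible and
# P^{-1} A P = D, with D diagonal.
eigvals, P = np.linalg.eig(A)
D = np.linalg.inv(P) @ A @ P
print(np.allclose(D, np.diag(eigvals)))  # True
```

Here \(P^{-1}\) and \(P\) are merely inverses of each other, not transposes, which is exactly the distinction made above.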
I don't remember whether they considered the difference between finite and infinite matrices at all (besides the rough remarks on this on the same page). While we have a nice Carleman matrix for the \( \exp() \) function, we don't have one for the \( \log() \) function, although it would simply be its inverse... Well, we could truncate the (infinite) Carleman matrix to a finite, software-handleable size and find the inverse of that trimmed creature,
but the inverse of the infinite Carleman matrix is not defined as long as we cannot assign definite values to the harmonic series and its powers. In the case of infinite dimension we can even have *infinitely many* "inverses": for \( f(z)= \exp(z)-1 \) we have not only the Carleman matrix of \( \log(1+z) \) as an inverse, but also that of \( \log(1+z) + k \cdot 2 \pi i \), and those matrices are even square matrices: not easy to handle in the Carleman toolbox, especially when it comes to "matrix functions" (which Aldrovandi also refers to as an easily available tool).
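The truncation remark can be checked symbolically. A minimal sympy sketch, assuming one common indexing convention (entry \((m,n)\) holds the coefficient of \(z^n\) in \(f(z)^m\)); because \(f(0)=0\) and \(f'(0)=1\) here, the truncated matrix is triangular with unit diagonal, so inverting the truncation happens to be exact and reproduces the principal branch \( \log(1+z) \):

```python
import sympy as sp

z = sp.symbols('z')
N = 6  # truncation size

def carleman(f, N):
    """Truncated Carleman-style matrix: entry (m, n) is the Taylor
    coefficient of z**n in f(z)**m (one of several conventions)."""
    M = sp.zeros(N, N)
    for m in range(N):
        ser = sp.expand(sp.series(f**m, z, 0, N).removeO())
        for n in range(N):
            M[m, n] = ser.coeff(z, n)
    return M

Cf = carleman(sp.exp(z) - 1, N)   # f(z) = exp(z) - 1
Cg = carleman(sp.log(1 + z), N)   # its principal-branch inverse

# f(0)=0, f'(0)=1  =>  triangular with unit diagonal, so the inverse
# of the truncation equals the truncation of the inverse:
print(Cf.inv() == Cg)             # True
```

For general (non-triangular, genuinely infinite) Carleman matrices this shortcut is exactly what fails, which is the point made above.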
Because of your question I looked at the article again, but I still have the same old difficulties reading through that text, since they did not lay out the prerequisites for the use of (infinitely sized!) Bell matrices more deliberately.
Ahh, and p.s.: I would prefer to discuss this always in terms of "Carleman" matrices instead of "Bell" matrices. While the "Carleman" matrix takes the plain coefficients of, for instance, the exp() function as \( \exp(x) = 1+ a_1 x + a_2 x^2 + \ldots \), the "Bell" convention assumes the coefficients as \( \exp(x) = 1+ b_1/1! \, x + b_2/2! \, x^2 + \ldots \). Converting between the "Carleman" and "Bell" conventions then needs a similarity scaling with the factorials, and to me it always seemed easier to handle the "Carleman" route when diagonalizing, so I got used to it. But that might not be so significant and is up to personal preference.
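Reading the two conventions as \(b_n = n!\,a_n\) (my interpretation of the above; the exact placement of the scaling matrix depends on the indexing convention chosen), the conversion is a similarity transform with the diagonal factorial matrix, which in particular leaves all eigenvalues, and hence all diagonalization results, untouched. A small sympy sketch:

```python
import sympy as sp

N = 4
# Coefficient level: Carleman stores plain Taylor coefficients a_n,
# Bell stores b_n = n! * a_n.  For exp(x), a_n = 1/n!, so b_n = 1:
a = [sp.Rational(1, sp.factorial(n)) for n in range(N)]
b = [sp.factorial(n) * a[n] for n in range(N)]
print(b)   # [1, 1, 1, 1]

# Matrix level: conjugation by F = diag(0!, 1!, 2!, ...) is a
# similarity scaling, so eigenvalues agree in both conventions.
F = sp.diag(*[sp.factorial(k) for k in range(N)])
C = sp.Matrix(N, N, lambda m, n:
              sp.Rational(1, sp.factorial(n - m)) if n >= m else 0)
B = F * C * F.inv()
assert C.eigenvals() == B.eigenvals()
```

The matrix `C` here is just a triangular stand-in with Carleman-like entries; the invariance of the spectrum under the factorial conjugation is the point.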
Thank you for your exceedingly quick response, Gottfried. I agree with you about the imprecision of Aldrovandi's paper, but it is something I feel comfortable with and can write an article extending. I'm writing up the additional virtues of my paper in the introduction, but handling both the Schroeder and Abel equations is not one of them. The main strength of my work is that it can answer the open question of what the combinatorial structure in the paper is. Also, I like the algebraic notation (linear operator?) I use; of course, more tools are always a good thing.
While we are here, you mentioned geometric series of matrices. I work with geometric series and want my work to handle matrices. Here is my totally stupid problem: I couldn't quickly find a closed form for the geometric series of matrices, while the geometric series of reals is a high-school problem.
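For the finite-dimensional case the closed form is the Neumann series mentioned above: when the spectral radius of \(A\) is below 1, \(\sum_{k\ge 0} A^k = (I-A)^{-1}\), the direct analogue of \(1/(1-x)\) for \(|x|<1\) (and the partial sums satisfy \(\sum_{k=0}^{n-1} A^k = (I-A)^{-1}(I-A^n)\) whenever \(I-A\) is invertible). A quick numerical check with an arbitrary small matrix:

```python
import numpy as np

# Geometric (Neumann) series of a matrix: if the spectral radius of A
# is < 1, then sum_{k>=0} A^k = (I - A)^{-1}.
A = np.array([[0.2, 0.1],
              [0.0, 0.3]])

assert max(abs(np.linalg.eigvals(A))) < 1   # convergence condition

# Partial sums of the series...
S = np.zeros_like(A)
term = np.eye(2)
for _ in range(200):
    S += term
    term = term @ A

# ...agree with the closed form:
closed = np.linalg.inv(np.eye(2) - A)
print(np.allclose(S, closed))   # True
```

The open question in the infinite-dimensional Carleman setting is precisely whether this formula may still be applied, since the convergence condition is no longer easy to verify there.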
Daniel

