Matrix question for Gottfried
#3
(12/06/2022, 02:59 PM)Gottfried Wrote: Hmm, last point first. The Carleman-matrix approach gives a simple representation of the Schroeder mechanism, true. But the Abel equation falls outside the elegance which comes with the basic technique of those matrices. Note, the Abel equation can be understood as the logarithm of the Schroeder equation, but there is no Carleman matrix for \( \log() \) (only for \( \log(1+x) \)). So when I worked with the Abel equation I only took the Carleman matrices as basic food and then applied my general (low) college math to it. That Andydude's method can be understood as an Ansatz to model the Abel solution, and that I expressed this in my Carleman-matrix toolbox, is another thing: I simply introduced the (hypothetically applicable) Neumann series of a Carleman matrix and formally derived Andydude's Ansatz on this basis. But whether the (infinite) Neumann series of matrix powers (the analogue of the geometric series, with a matrix as argument) can formally be used - I have no definite theorem/proof and no sufficient answer to my question on MO.

After skimming the mentioned Aldrovandi article when I downloaded it, I was discouraged by - at the least - imprecisions in their assumptions.
For instance, they write on the subject of diagonalization (and matrix functions):

Quote: --> (pg 16):
Bell matrices are not normal, that is, they do not commute with their transposes. Normality is the condition for diagonalizability. This means that Bell matrices cannot be put into diagonal form by a similarity transformation.
I had never heard this "are not normal" in the context of diagonalization. Taking the keyword "commute" in this, I assume they mean that the eigenvector matrices must be orthogonal ("orthogonal" --> they are not only inverses of each other, but even transposes). But "diagonalizability" means only that the eigenvector matrices are inverses of each other.
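Indeed, normality is sufficient but not necessary for diagonalizability. A quick numerical counterexample (a sketch in Python/NumPy; the matrix is just an ad-hoc choice of mine):

```python
import numpy as np

# An upper-triangular matrix with distinct eigenvalues: it is NOT normal,
# yet it IS diagonalizable (distinct eigenvalues guarantee a full eigenbasis).
A = np.array([[1.0, 1.0],
              [0.0, 2.0]])

print(np.allclose(A @ A.T, A.T @ A))             # False: A is not normal

vals, P = np.linalg.eig(A)                       # columns of P are eigenvectors
D = np.diag(vals)
print(np.allclose(P @ D @ np.linalg.inv(P), A))  # True: A = P D P^{-1}
```

So non-normality only rules out diagonalization by an *orthogonal* (unitary) similarity, not by a general invertible one.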

I left this then as an open riddle for me, and did not involve myself much more in that article.
In Wikipedia one can find a proper definition of "diagonalizability":

Quote: In linear algebra, a square matrix \(A\) is called "diagonalizable" or "non-defective" if it is similar to a diagonal matrix, i.e., if there exists an invertible matrix \(P\) and a diagonal matrix \(D\) such that \(P^{-1}AP=D\), or equivalently \(A = PDP^{-1}\). (Such \(P\), \(D\) are not unique.)
For a finite-dimensional vector space \(V\), a linear map \(T:V\to V\) is called "diagonalizable" if there exists an ordered basis of \(V\) consisting of eigenvectors of \(T\).
These definitions are equivalent: if \(T\) has a matrix representation \(T = PDP^{-1}\) as above, then the column vectors of \(P\) form a basis consisting of eigenvectors of \(T\), and the diagonal entries of \(D\) are the corresponding eigenvalues of \(T\); with respect to this eigenvector basis, \(T\) is represented by \(D\).
"Diagonalization" is the process of finding the above \(P\) and \(D\).
(The Aldrovandi text continues with the following remark:
Quote: As it happens, this will not be a difficulty because we know their eigenvalues. That functions of matrices are completely determined by their spectra is justified on much more general grounds.
Ahh. At least we can still use the results of diagonalization... )

I don't remember whether they considered at all the difference between finite and infinite sized matrices (besides the rough remarks on this on the same page). While we have a nice Carleman matrix for the \( \exp() \)-function, we don't have one for the \( \log() \)-function, although it would simply be its inverse... Well, we could truncate the (infinite) Carleman matrix to a finite, software-handleable size, and find the inverse of that trimmed creature - but the inverse of the infinite Carleman matrix is not defined, as long as we cannot assign definite values to the harmonic series and its powers. In the case of infinite dimension we can even have *infinitely many* "inverses": for \( f(z)= \exp(z)-1 \) we do not only have the Carleman matrix for \( \log(1+z) \) as inverse, but also that of \( \log(1+z) + k \cdot 2 \pi i \) - and those matrices are even square matrices: not easy to handle in the Carleman toolbox, especially when it comes to "matrix functions" (which Aldrovandi also refers to as an easily available tool).
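For illustration: with the convention that row \(j\) of the Carleman matrix holds the coefficients of \(f(x)^j\) (other texts use the transpose), the truncated matrices of \( \exp(x)-1 \) and \( \log(1+x) \) are *exact* inverses at every finite size, because both are triangular - a small SymPy sketch (the helper name `carleman` is mine):

```python
import sympy as sp

x = sp.symbols('x')
n = 6  # truncation size

def carleman(f, n):
    """Truncated Carleman matrix: entry (j, k) is the coefficient
    of x**k in the Taylor expansion of f(x)**j."""
    return sp.Matrix(n, n, lambda j, k:
                     sp.series(f**j, x, 0, n).removeO().coeff(x, k))

M = carleman(sp.exp(x) - 1, n)      # Carleman matrix of exp(x) - 1
N = carleman(sp.log(1 + x), n)      # Carleman matrix of log(1 + x)

# Both matrices are triangular, so truncation loses nothing in the product:
print(M * N == sp.eye(n))           # True: exact inverses at finite size
```

This exactness is special to functions fixing 0 with triangular Carleman matrices; for \( \exp(x) \) itself no such triangular structure is available, which is exactly the difficulty described above.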

Because of your question I looked at the article again, but I still have the same old difficulties reading through that text, since they did not lay out the prerequisites for the use of (infinitely sized!) Bell matrices more deliberately.

Ahh, and p.s.: I would like to discuss this always in terms of "Carleman" matrices instead of "Bell" matrices. The "Carleman" matrix takes the coefficients of, for instance, the \( \exp() \)-function as \( \exp(x) = 1 + a_1 x + a_2 x^2 + \dots \), while the "Bell" convention assumes the coefficients \( \exp(x) = 1 + \frac{b_1}{1!} x + \frac{b_2}{2!} x^2 + \dots \). Converting between the "Carleman" and "Bell" conventions then requires a similarity scaling with the factorials, and to me it had always seemed easier to handle the "Carleman" route when diagonalizing, so I got used to it - but that may not be so significant and comes down to personal preference.
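As a small check that the convention change is harmless: the factorial rescaling is a diagonal similarity, so it preserves eigenvalues and hence everything diagonalization delivers. A SymPy sketch (the `carleman` helper and the choice of which side carries the inverse are mine):

```python
import sympy as sp

x = sp.symbols('x')
n = 5

def carleman(f, n):
    # entry (j, k) = coefficient of x**k in f(x)**j, truncated to n x n
    return sp.Matrix(n, n, lambda j, k:
                     sp.series(f**j, x, 0, n).removeO().coeff(x, k))

F = sp.diag(*[sp.factorial(k) for k in range(n)])

C = carleman(sp.exp(x) - 1, n)   # "Carleman" convention: raw coefficients
B = F * C * F**-1                # "Bell" convention via factorial scaling
                                 # (which side gets the inverse is a choice)

# A similarity transform preserves the spectrum, hence the diagonalization:
print(C.eigenvals() == B.eigenvals())   # True
```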

Thank you for your exceedingly quick response, Gottfried. I agree with you about the imprecision of Aldrovandi's paper, but it is something I feel comfortable with and can write an article extending. I'm writing up the additional virtues of my paper in the introduction, but handling both Schroeder and Abel is not one of them. The main strength of my work is that it can answer the open question of what the combinatorial structure in the paper is. Also, I like the algebraic notation (linear operator?) I use; of course, more tools are always a good thing.

While we are here, you mentioned geometric series of matrices. I work with geometric series and want my work to handle matrices. Here is my totally stupid problem: I couldn't quickly find a closed form for the geometric series of matrices, while the geometric series of reals is a high-school problem.
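(For what it's worth, the scalar telescoping argument does carry over verbatim: \( (I-A)\sum_{k=0}^{n} A^k = I - A^{n+1} \), so the partial sum equals \( (I-A)^{-1}\,(I-A^{n+1}) \) whenever \( I-A \) is invertible, and the infinite Neumann series converges to \( (I-A)^{-1} \) when the spectral radius of \(A\) is below 1. A quick NumPy sanity check; the example matrix is an arbitrary small-norm choice:

```python
import numpy as np

# An arbitrary matrix with row sums <= 0.4, so the spectral radius is < 1.
A = np.array([[0.1, 0.2, 0.0],
              [0.0, 0.1, 0.3],
              [0.2, 0.0, 0.1]])
I = np.eye(3)
n = 10

# finite geometric series vs. its closed form (telescoping identity)
partial = sum(np.linalg.matrix_power(A, k) for k in range(n + 1))
closed = np.linalg.solve(I - A, I - np.linalg.matrix_power(A, n + 1))
print(np.allclose(partial, closed))   # True

# Neumann series: the infinite sum converges to (I - A)^{-1}
print(np.allclose(np.linalg.inv(I - A),
                  sum(np.linalg.matrix_power(A, k) for k in range(60))))
```

The open question from the earlier post is precisely whether this identity survives for infinite Carleman matrices, where "spectral radius" is no longer straightforward.)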
Daniel

