11/07/2007, 09:32 PM
Gottfried Wrote:I can only restate: in no situation did I assume the matrices to be truncated by principle - all my considerations treat the unavoidable truncations in praxi as giving approximations in numerical evaluation, as placeholders for the basically infinite matrices. I never regarded a finite (truncated) matrix as more than such an approximation for the determination of any intermediate result.

I don't understand this attitude (whatever you mean by "truncated by principle"). The matrix operator method is based on the unique diagonalization of *finite* matrices (in the same way as Andrew's method is based on the solution of *finite* equation systems). Of course these are used to (hopefully) approximate the diagonalization of the infinite matrix, which then facilitates the iteration of the function. But the essential idea is that we cannot handle the infinite case (continuum many solutions), whereas we can uniquely handle the finite approximating cases and in this way arrive at a unique solution (again, as with Andrew's solution).
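To make the "diagonalize the finite truncation" idea concrete, here is a small numerical sketch of my own (not anyone's reference implementation; the base b=1.3 and size 8 are arbitrary choices): build the truncated Carleman matrix of \( f(x)=b^x \), diagonalize it, and form the matrix square root \( V D^{1/2} V^{-1} \). Squaring it recovers the truncation, which is exactly the mechanism by which fractional iterates are obtained from finite matrices.

```python
import numpy as np
from math import log, factorial

def carleman(b, n):
    """n x n truncation of the Carleman matrix of f(x) = b**x.
    Row j holds the Taylor coefficients of f(x)**j = b**(j*x),
    so M[j, k] = (j*ln b)**k / k!."""
    lb = log(b)
    return np.array([[(j * lb) ** k / factorial(k) for k in range(n)]
                     for j in range(n)])

b = 1.3          # base below eta = e**(1/e), so a real fixed point exists
n = 8
M = carleman(b, n)

# Diagonalize the *finite* truncation and take the principal square root
# of the eigenvalues; Mh plays the role of the half-iterate's matrix.
w, V = np.linalg.eig(M)
Mh = V @ np.diag(np.sqrt(w.astype(complex))) @ np.linalg.inv(V)

# Squaring the half-power recovers the truncated matrix
# (up to rounding amplified by the conditioning of V).
assert np.allclose(Mh @ Mh, M, atol=1e-4)
```

The uniqueness in the finite case comes from the eigendecomposition being essentially unique when the eigenvalues are distinct, which is what the truncations exhibit here.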
The interesting thing (for me) about the matrix operator method was that it picks one solution for developments at non-fixed points, out of the infinitely many possible solutions.
I mean, hey, that is about all our discussion of tetration amounts to: choosing one "best" solution. Who needs a method that provides all solutions?
Quote:For instance: if the Carleman matrix is thought of as having finite size, then I don't see any relation between Carleman matrices and any of my matrices.

I don't know what you mean by this. The Carleman matrix is of infinite size but is approximated by finite truncations. And sometimes "Carleman matrix" perhaps also refers to the truncated matrices.
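A quick sketch (my own illustration, with arbitrary base b=1.3 and truncation size 12) of how well the finite truncations approximate the infinite matrix: multiplying two truncated Carleman matrices of \( f(x)=b^x \) approximates the Carleman matrix of \( f\circ f \), so row 1 of \( M^2 \) gives the Taylor coefficients of \( b^{b^x} \).

```python
import numpy as np
from math import log, factorial

b, n = 1.3, 12
lb = log(b)
# Truncated Carleman matrix: M[j, k] = coefficient of x**k in b**(j*x)
M = np.array([[(j * lb) ** k / factorial(k) for k in range(n)]
              for j in range(n)])

# Composition of functions becomes multiplication of Carleman matrices,
# so row 1 of M @ M holds the Taylor coefficients of b**(b**x).
M2 = M @ M
x = 0.5
approx = sum(M2[1, k] * x ** k for k in range(n))
exact = b ** (b ** x)
assert abs(approx - exact) < 1e-6   # truncation error is tiny here
```

The entries of the infinite matrix are unchanged by truncation; the truncation only affects products and decompositions, and for this base the error dies off factorially.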
Gottfried Wrote:Yes, \( \exp_b \) translates into \( {B_b}^\sim \) and \( \tau_a \) translates into something similar to the lower Pascal matrix. You can however put everything into the transposed view (as you use it) if you swap the order of the operands of \( \circ \). But for the diagonalization question this does not matter.

bo198214 Wrote:But if we translate this into matrix notation by simply replacing \( \circ \) by matrix multiplication and replacing each function by the corresponding Bell matrix (which is the transposed Carleman matrix), then we see a diagonalization of \( B_b \), because the Bell matrix (and also the Carleman matrix) of \( \mu_{\ln(a)}(x)=\ln(a)x \) is just your diagonal matrix \( {_dV}(\ln(a)) \)!

Yes, this seems so - but did we not already establish the identity of the matrix B (or Bs) with the Bell/Carleman transposes? I thought that had settled the question already. I was very happy when you pointed out the relation in one of your previous posts - I could not have done it myself, due to my lack of understanding of those concepts (described in elaborate articles, in more detail than I could follow).
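The claim that the Carleman (and hence Bell) matrix of the linear map \( \mu_c(x)=cx \) is exactly the diagonal matrix \( {_dV}(c)=\mathrm{diag}(1,c,c^2,\dots) \) is easy to check numerically. A small sketch of my own (the helper `carleman_poly` and the value c=0.7 are just for illustration):

```python
import numpy as np

def carleman_poly(coeffs, n):
    """n x n truncation of the Carleman matrix of a polynomial:
    row j holds the Taylor coefficients of f(x)**j."""
    M = np.zeros((n, n))
    M[0, 0] = 1.0                            # f**0 = 1
    row = np.array([1.0])
    for j in range(1, n):
        row = np.convolve(row, coeffs)[:n]   # multiply by f once more
        M[j, :len(row)] = row
    return M

c = 0.7
# f(x) = c*x has coefficient vector [0, c]
M = carleman_poly(np.array([0.0, c]), 6)
# The Carleman matrix of x -> c*x is diag(1, c, c**2, ..., c**5);
# a diagonal matrix equals its transpose, so the Bell matrix is the same.
assert np.allclose(M, np.diag(c ** np.arange(6)))
```

Since \( (cx)^j = c^j x^j \), each row has a single nonzero entry on the diagonal, which is why replacing \( \mu_{\ln(a)} \) by its matrix produces a genuine diagonalization of \( B_b \).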
Quote:If from there some shortcomings of the method are already *known* then I would like to know them, too.

The shortcoming is that it works only at fixed points, and it differs for different fixed points, so nobody knows which is the "best" fixed point to choose. Also, for \( b>\eta \) there are only complex fixed points, and regular iteration at a complex fixed point yields non-real values for real arguments, which is not desirable. (The matrix operator method does not have this deficit.)
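To see the non-real values concretely, here is a sketch (the base b=2, the Newton starting guess, and the test point x=1 are my own choices, and the half-iterate uses only the first-order linearization of regular iteration near the fixed point, not the full Schröder series):

```python
import cmath

b = 2.0                      # b > eta = e**(1/e), so no real fixed point
lb = cmath.log(b)

# Find a fixed point p with b**p = p by Newton iteration in the complex plane.
p = 0.5 + 1.5j               # starting guess near the known fixed point
for _ in range(50):
    p = p - (b ** p - p) / (b ** p * lb - 1)
assert abs(b ** p - p) < 1e-12
assert abs(p.imag) > 1       # the fixed point is genuinely complex

# Regular iteration linearizes f around p with multiplier
# mu = f'(p) = ln(b) * b**p = ln(b) * p; to first order the half-iterate is
#   f^{1/2}(x) ~ p + sqrt(mu) * (x - p).
mu = lb * p
half = p + cmath.sqrt(mu) * (1.0 - p)   # half-iterate of the real point x = 1
assert abs(half.imag) > 1e-6            # non-real result for a real argument
```

The imaginary part does not cancel: the real argument sits at a complex distance from the complex fixed point, and multiplying by the complex \( \sqrt{\mu} \) rotates it off the real axis.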
