02/23/2009, 03:52 AM
bo198214 Wrote:I think I somewhere read that for infinite matrices not even \( A(BC)=(AB)C \) is valid, perhaps exactly because one limit does not exist.
Sure; I think that is the basic reason, and I would like to see such a statement made explicit somewhere, with a specific description of the range of applicability/non-applicability. For instance, in all my experience with that formula we can even use (AB)C = A(BC) in some instances: if the implicit dot-products are Cesàro- or Euler-summable, that is, if the infinite series are, for instance, alternating Dirichlet or geometric series. There occurred also some additional restrictions, but I don't have them systematically.
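A minimal sketch of the Cesàro case mentioned above (the function name is mine, for illustration): for Grandi's alternating series 1 - 1 + 1 - 1 + ..., the partial sums oscillate between 1 and 0, but their running averages converge to 1/2, the value Cesàro summation assigns to such a divergent dot-product.

```python
# Cesàro summation of Grandi's divergent series 1 - 1 + 1 - 1 + ...
# (an alternating geometric series): the partial sums oscillate,
# but the averages of the partial sums converge to 1/2.
def cesaro_means(terms):
    partial = 0.0            # running partial sum s_n
    sum_of_partials = 0.0    # s_1 + s_2 + ... + s_n
    means = []
    for n, t in enumerate(terms, start=1):
        partial += t
        sum_of_partials += partial
        means.append(sum_of_partials / n)
    return means

grandi = [(-1) ** k for k in range(10000)]
print(cesaro_means(grandi)[-1])  # → 0.5
```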
Moreover, even if (AB)C =/= A(BC) in many cases, this does not mean that we cannot deal with the matrix concept in infinite dimension meaningfully at all. If we have triangular matrices, all either row-finite or column-finite, then we can get a very good deal done, and no implicit infinite dot-product occurs.

One example which I like much (it was the first I've seen and translated into the matrix formula) is that of Helmut Hasse, who used the Worpitzky formula to prove the connection between Bernoulli numbers and zeta values, and who introduced a special summation procedure for the zeta function, which in principle exploited that composition of S2*P~ which I mentioned in my previous post.

A whole family of procedures in the theory of divergent summation deals automatically with infinite matrices: the Hausdorff means. They unite Cesàro and Euler summation by the definition P * D * P^-1 (where D is diagonal); the characteristic of D then determines the type of divergent summation.
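To make the Hasse example concrete, here is a sketch of his globally convergent series for zeta (the result connected to the Worpitzky formula), zeta(s) = 1/(s-1) * sum_{n>=0} 1/(n+1) * sum_{k=0}^{n} (-1)^k C(n,k) (k+1)^(1-s). The function name and term count are my own choices. For integer s the inner alternating sum is computed with exact rationals, because its individual terms grow like 2^n and cancel almost completely, so naive floating point fails beyond small n.

```python
# Hasse's globally convergent series for the Riemann zeta function:
#   zeta(s) = 1/(s-1) * sum_{n>=0} 1/(n+1)
#                     * sum_{k=0}^{n} (-1)^k * C(n,k) * (k+1)^(1-s)
# Exact Fraction arithmetic avoids catastrophic cancellation in the
# inner alternating binomial sum.
from fractions import Fraction
from math import comb, pi

def hasse_zeta(s, n_terms=200):
    total = Fraction(0)
    for n in range(n_terms):
        inner = sum(Fraction((-1) ** k * comb(n, k), (k + 1) ** (s - 1))
                    for k in range(n + 1))
        total += inner / (n + 1)
    return float(total) / (s - 1)

print(hasse_zeta(2))  # converges toward pi**2 / 6 = 1.6449... as n_terms grows
```

For s = 2 the inner sum collapses exactly to 1/(n+1), so the outer sum is a partial sum of sum 1/(n+1)^2, which shows why the truncation error shrinks like 1/n_terms.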
Possibly the concise compilation of the conditions and requirements for the algebra of infinite matrices is actually a job some competent author should take on...
Gottfried
Gottfried Helms, Kassel

