11/11/2007, 10:05 PM
Gottfried Wrote:Since you put the focus on this I assumed that this is some principal aspect of the approach...
Well, I don't want to dive into the philosophical difference between a practical and a theoretical approach. What counts for me is that it is a clear definition and that the result of this approach is different from that of the other approaches (as far as I can see).
Quote:then decomposes uniquely via Eigenvalues
... and I assume now that this is the difference.
I would call this the "practical approach"; your other, directly infinite approach looks quite as if it were the fixed point approach merely written with matrices. So I wouldn't call it a different approach (compared to the fixed point method).
One more interesting fact about the (truncated) matrix operator method is that for \( 1<b<\eta \) it even chooses a fixed point! The eigenvalues of the truncated B converge to the powers of the derivative at the *lower* fixed point! For \( b>\eta \), however, they do not converge to the powers of the derivative at any (complex) fixed point of \( b^x \). This is striking! Some kind of intelligence of its own.
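One can check this convergence numerically. Below is a minimal sketch (my own, not from the original posts), assuming the column-matrix convention in which entry [k,j] of the truncated B is the coefficient of x^k in (b^x)^j; it compares the leading eigenvalues of growing truncations with the powers of the derivative at the lower fixed point for b = sqrt(2).
Code:
import numpy as np
from scipy.special import lambertw, factorial

# Truncated matrix operator ("column" Carleman matrix) of f(x) = b^x:
# entry [k, j] = coefficient of x^k in (b^x)^j = (j*ln b)^k / k!
def truncated_B(b, n):
    lnb = np.log(b)
    j = np.arange(n)                      # column index
    k = np.arange(n).reshape(-1, 1)       # row index
    return (j * lnb) ** k / factorial(k)

b = np.sqrt(2.0)                                        # 1 < b < eta = e^(1/e)
a = float(np.real(-lambertw(-np.log(b)) / np.log(b)))   # lower fixed point (= 2 here)
d = a * np.log(b)                                       # derivative of b^x there (= ln 2)

for n in (8, 16, 32):
    ev = np.sort(np.abs(np.linalg.eigvals(truncated_B(b, n))))[::-1]
    print(n, np.round(ev[:5], 4), np.round([d**q for q in range(5)], 4))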

Quote:it is useful for first approximations and gives good results for a certain range of parameters b, x and h (base, top-exponent and height)
What is a bad result in this context?
Quote:But in my view, this was always only a rough approximation, whose main defect is that we "don't know about the quality of approximation".
If the approximation converges, it is neither rough nor fine. (Of course the convergence is still not verified, if I see it correctly; but the convergence of Andrew's method is not yet verified either, and we work with it by supposing it, so we can do the same here.)
Quote:However, the structure of the set of eigenvalues is not as straightforward as one would hope. Especially for bases outside the range e^(-e) < b < e^(1/e) the set of eigenvalues has partially erratic behaviour, which makes it risky to base assumptions about the final values for the tetration-operation T on them.
For instance, I traced the eigenvalues for the same base parameter but increasing size of truncation, to see whether we find some rules which could be exploited for extrapolation.
But that's exactly the beauty and the potential (for investigation) of the method: one cannot see a certain structure in the eigenvalues (yet), but somehow it provides a (even real) solution nevertheless.
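For what it is worth, here is a small sketch of such a trace (base and truncation sizes are my own choice, not from Gottfried's experiment): it lists the largest eigenvalues of the truncated B for a base outside \( e^{-e}<b<e^{1/e} \) while the truncation size grows, so one can watch how (or whether) they settle.
Code:
import numpy as np
from scipy.special import factorial

# Truncated matrix operator of f(x) = b^x, entry [k, j] = (j*ln b)^k / k!
def truncated_B(b, n):
    lnb = np.log(b)
    j = np.arange(n)                      # column index
    k = np.arange(n).reshape(-1, 1)       # row index
    return (j * lnb) ** k / factorial(k)

b = 2.0                                   # b > e^(1/e): b^x has no real fixed point
for n in (8, 16, 32, 48):
    ev = np.linalg.eigvals(truncated_B(b, n))
    ev = ev[np.argsort(-np.abs(ev))]      # largest magnitude first
    print(n, np.round(ev[:4], 4))         # trace the leading eigenvalues over n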
Quote:Thus the need for a solution of the exact eigensystem (infinite size) occurs. If we find one, then again the truncations lead to approximations - but we are in a situation to make statements about the bounds of error etc.
If I have a solution to the infinite system, why would I bother myself with truncated matrices?
Quote: B * W(B)[col] = d[col] * W(B)[col]
You probably mean
B * W(B)[col] = W(B)[col] * d[col]
but that's just the matrix version of the Schroeder equation \( \sigma(\exp_b(x))=c\sigma(x) \).
If you find a solution \( \sigma \) to the Schroeder equation, you make the Carleman/column matrix \( W(B) \) out of it and you have a solution of this infinite matrix equation. Vice versa, if you have a solution \( W(B) \) to the above matrix equation that is the Carleman/column matrix of a power series \( \sigma \), then this power series is a solution to the Schroeder equation. Nothing new is gained by this consideration.
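Just to spell the equivalence out, in the column-matrix convention I assume here (column j of \( M_f \) holds the coefficients of \( f(x)^j \), so that \( M_f M_g = M_{g\circ f} \)):
\[
(M_f M_g)_{k,j} = \sum_m [x^k]\, f(x)^m \cdot [y^m]\, g(y)^j = [x^k]\, g(f(x))^j .
\]
With \( B = M_{\exp_b} \), \( W = M_\sigma \) and \( D = \operatorname{diag}(1,c,c^2,\dots) \):
\[
B\,W = W\,D
\iff [x^k]\,\sigma(\exp_b(x))^j = c^j\,[x^k]\,\sigma(x)^j \ \text{ for all } j,k
\iff \sigma(\exp_b(x)) = c\,\sigma(x).
\]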
