bo198214 Wrote:re a)
I would stay with the established notation, i.e. use
\( \exp_b^{\circ n}(x) \) or \( \exp_b^{[n]}(x) \)
instead of \( \{b,x\}^n \).
Well, I didn't want to express a preference for this more specialized notation. (As a matter of taste, I would even prefer the "big-T" notation with parameters, as suggested here in another thread.)
My main focus in a) was to point out the conceptual difference between the binary operator notation (two operands) and the notation which also involves the initial value as a parameter. I think this has consequences for the definition of partial evaluation when infinite iteration is involved, analogously to the concept of partial sums for summable infinite series.
bo198214 Wrote:re b)
It was not clear what function \( f(b,x_0) \) do you want to use, I would guess \( f(b,x_0)=b^{x_0} \). It is also not clear whether this average method converges. All fixed points of \( b^x \) are repelling except the lower real fixed point (if existing).
Yeah, this point is an amusing one... I implemented the function off the top of my head, and it worked, wow! But reviewing it the next day, I thought it shouldn't work at all... I checked all the results, but they were all ok. I even tried some complex parameters, for instance i; still ok :-)
The iteration is simply (with tmp the iteratively improved guess):
tmp1 = (base^tmp + a*tmp)/(a+1)
tmp = tmp1
where a = 2 is an optimizing (damping) parameter; for most bases a = 1..3 gave convergence, and a ~ 2 was on average the best over a reasonable range of bases.
The initial guess for tmp is the principal complex fixed point for base = 2.
After some iterations the values trace out a spiral, and once one round is completed (I check for the next local minimum of the distances to the first value), the center of this approximate round/circle is taken as the next initial guess.
Why I was sceptical: a positive error in tmp seems to grow when averaging base^tmp and tmp. But, well, we are in the complex numbers, so a rotation is involved as well; I'll put that aside for later consideration...
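For concreteness, here is a minimal Python sketch of the damped iteration plus the circle-center ("lasso") refinement described above. The function name lasso_fixpoint, the tolerance, and the round-detection details are my own assumptions; the original is a Pari/GP script, which this does not reproduce.

```python
import cmath

def cpow(base, z):
    """base**z via the principal branch of the complex logarithm."""
    return cmath.exp(z * cmath.log(base))

def lasso_fixpoint(base, guess, a=2.0, tol=1e-12, rounds=50, max_steps=2000):
    """Approximate a complex fixed point of z -> base**z.

    Damped iteration tmp = (base**tmp + a*tmp)/(a+1); once the iterates
    complete one round of their spiral (detected as the next local
    minimum of the distance to the first point), restart from the
    center of that approximate circle. A sketch of the method described
    in the post, not the original Pari/GP implementation.
    """
    z = guess
    for _ in range(rounds):
        pts = [z]
        prev_d, rising = None, False
        for _ in range(max_steps):
            z = (cpow(base, z) + a * z) / (a + 1.0)
            if abs(cpow(base, z) - z) < tol:
                return z
            pts.append(z)
            d = abs(z - pts[0])
            if prev_d is not None:
                if d > prev_d:
                    rising = True
                elif rising:          # local minimum passed: one round done
                    break
            prev_d = d
        z = sum(pts) / len(pts)       # center of the approximate circle
    return z
```

Starting from (an approximation of) the principal complex fixed point for base 2, roughly 0.8247 + 1.5674i, as the initial guess, this converges for a reasonable range of bases, including base i.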
bo198214 Wrote: re c)
Hmm, c) is a bit speculative. Ok, also the Euler-sum analogy is not the best choice. I didn't get the Euler summation for the diverging Newton method...
I see two problems with my b) and c):
First, by my own arguments a Cesàro sum would be the better analogy, because I use averages of untransformed values.
Second, it is not exactly analogous to evaluating partial sums, which always sum from the first term (possibly after a regular transformation). The lasso method is rather an approximation utility that helps to improve guesses iteratively. Maybe that is also the reason why I was unable to transfer the method into my other matrix context. ("Gut, daß wir mal darüber geredet haben" / "good that we've talked about it" :-) )
In my contour plot http://math.eretrandre.org/tetrationforu...432#pid432 one sees the lines as borders of the leaves; each of these lines contains the whole set of positive reals (possibly restricted to those greater than a certain bound), so the lines are the loci (curves) of the fixed points for those real numbers. The line of the biggest leaf marks the set of fixed points that you might refer to as the principal fixed points, or something like that; I'll look at Gianfranco's wording in his reply tomorrow.
(I'm a bit sick today, so I'll stop the message here. If you would like to try or improve the Pari/GP function, I could just upload the script.)
Gottfried
Gottfried Helms, Kassel

