I just came across a connection between two problems which I do not yet fully understand:
a) the problem of different values for fractional iteration when the power series for tetration is developed around different fixpoints, and
b) the problem of inconsistency in alternating iteration series of increasing (iteration) height when evaluated by serial summation of terms versus when computed by matrix inversion in the sense of the "Neumann series" (see http://en.wikipedia.org/wiki/Neumann_series , which I had so far referred to only as the "closed form of the geometric series for matrices", not knowing that this is a well-known method).
(A basically relevant thread related to this is the "bummer" thread opened by Hendrik, see http://math.eretrandre.org/tetrationforu...php?tid=69 . In that thread Hendrik also considered an example polynomial similar to the one I use here.)
I'll explain b) first. To keep things simple I do not use the tetration function but a polynomial of order 2, which has two fixpoints and is bijective on the whole range between them. One choice is \( f(x) = (x/4)^2 + x - 1 \). Its inverse is \( g(x) = f^{-1}(x) = 4\sqrt{x+5} - 8 \). The fixpoint approached by increasing heights is \( x_\omega = -4 \), that approached by decreasing heights is \( x_{-\omega} = 4 \).
Denote the iterates beginning at \( x_0 \) as \( x_0, x_1, x_2 , \ldots , x_\omega , \ldots \) and the backwards iteration as \( x_0, x_{-1}, x_{-2}, \ldots, x_{-\omega}, \ldots \) .
Then the alternating iteration series from \( x_0 \) towards \( x_\omega \) may be denoted by
\( \hspace{24} sp(x_0) = x_0 - x_1 + x_2 - x_3 + \ldots - \ldots \)
and the complementary one by
\( \hspace{24} sn(x_0) = x_0 - x_{-1} + x_{-2} - x_{-3} + \ldots - \ldots \) .
Then the overall sum is
\( \hspace{24} su(x_0) = sp(x_0) + sn(x_0) - x_0 \) .
With this it also turns out that, letting x vary from \( x_0 \) to \( x_2 \), the sum \( su(x) \) is periodic with a sinusoidal shape (though it is not exactly a sine curve).
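To make this concrete, here is a small numerical sketch (my own illustration in Python, not the original computation). Since the iterates tend to the fixpoints \( \mp 4 \) rather than to 0, the alternating series do not converge in the ordinary sense; I assume here that the intended "serial summation" is the Cesàro/Abel value, obtained by averaging two consecutive partial sums:

```python
# Serial evaluation of sp, sn and su for f(x) = (x/4)^2 + x - 1 (illustration only).
from math import sqrt

f = lambda x: (x / 4.0)**2 + x - 1.0      # forward step, iterates tend to x_omega  = -4
g = lambda x: 4.0 * sqrt(x + 5.0) - 8.0   # inverse step, iterates tend to x_-omega = +4

def alt_sum(step, x0, n=200):
    """x0 - step(x0) + step(step(x0)) - ..., averaging two consecutive partial sums."""
    s, sign, x = 0.0, 1.0, x0
    for _ in range(n):
        s_prev = s
        s += sign * x
        sign, x = -sign, step(x)
    return 0.5 * (s + s_prev)

def sp(x0): return alt_sum(f, x0)             # alternating sum towards x_omega
def sn(x0): return alt_sum(g, x0)             # alternating sum towards x_-omega
def su(x0): return sp(x0) + sn(x0) - x0       # overall sum, x0 counted only once

x0 = 1.0
print(su(x0), su(f(f(x0))))   # equal: su has period 2 in the iteration height
print(su(f(x0)))              # antisymmetric after one step: su(x1) = -su(x0)
```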
The matrix method:
The next step is the implementation of the sums sp and sn by a Neumann series of the matrix operator for the (recentered) functions f and g. In the context of tetration/powertower series it seemed that this matrix-based method could provide an analytic continuation of such alternating series even for the (extremely) divergent case of bases \( b \gt \exp(\exp(-1)) \); see the initial powertower article (the last link, to powertower.pdf, below) and the discussion in sci.math.research (the first external link below).
Let's denote the version recentered at \( x_\omega = -4 \) as
\( \hspace{24} f_p(x) = f(x-4)+4 = {1 \over 16} x^2 + {1 \over 2} x \)
and that at \( x_{-\omega} = 4 \) as
\( \hspace{24} f_n(x) = f(x+4)-4 = {1 \over 16} x^2 + {3 \over 2} x \)
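(Quick check of the recentering: each version fixes the origin, and its linear coefficient is the multiplier of \( f \) at the respective fixpoint.)
\( \hspace{24} f(x-4)+4 = {(x-4)^2 \over 16} + (x-4) - 1 + 4 = {1 \over 16} x^2 + {1 \over 2} x \; , \qquad f_p'(0) = f'(-4) = {1 \over 2} \)
\( \hspace{24} f(x+4)-4 = {(x+4)^2 \over 16} + (x+4) - 1 - 4 = {1 \over 16} x^2 + {3 \over 2} x \; , \qquad f_n'(0) = f'(4) = {3 \over 2} \)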
The matrix method then requires the Carleman matrix \( M_p \) associated with \( f_p(x) \); the Neumann-series interpretation of the problem is to use \( N_p = ( I + M_p)^{-1} \) to compute the matrix-based version of \( sp(x_0) \), let's call it \( sp_{matrix}(x_0) \). Originally I used the inverse of \( M_p \), i.e. \( N_n = ( I + M_p^{-1})^{-1} \), to compute the complementary sum \( sn_{matrix}(x_0) \).
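Here is a sketch of my own of this construction in Python/numpy, continuing the code above (the truncation size and the way I undo the recentering shift, namely adding the Abel value \( x_\omega/2 \) of the constant alternating tail, are my choices, not necessarily the original setup). With \( V(u)=(1,u,u^2,\ldots)^T \) and \( M_p V(u) = V(f_p(u)) \), entry 1 of \( N_p V(x_0 - x_\omega) \) is the regularized alternating sum of the recentered iterates:

```python
# Matrix version of sp via a truncated Carleman matrix and its Neumann inverse.
import numpy as np

Nsize = 32            # truncation size (assumption; larger sizes sharpen the agreement)
x_omega = -4.0

def carleman(h_coeffs, N):
    """Truncated Carleman matrix: row k holds the coefficients of h(x)**k (with h(0)=0)."""
    M = np.zeros((N, N))
    M[0, 0] = 1.0                                   # h(x)**0 = 1
    row = np.array([1.0])
    for k in range(1, N):
        row = np.polynomial.polynomial.polymul(row, h_coeffs)[:N]
        M[k, :len(row)] = row
    return M

M_p = carleman([0.0, 1.0 / 2, 1.0 / 16], Nsize)     # f_p(x) = x/2 + x^2/16, centered at x_omega
N_p = np.linalg.inv(np.eye(Nsize) + M_p)            # "Neumann series" (I + M_p)^(-1)

def sp_matrix(x0):
    V = (x0 - x_omega)**np.arange(Nsize)            # power vector of the recentered argument
    return (N_p @ V)[1] + x_omega / 2.0             # entry 1 = sum (-1)^k (x_k - x_omega)

x0 = 1.0
print(sp_matrix(x0), sp(x0))   # should agree up to truncation effects (sp from the sketch above)
```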
The result of this was the surprising observation that either \( sp(x_0) = sp_{matrix}(x_0) \text{ and } sn(x_0) \ne sn_{matrix}(x_0) \) or, complementarily, \( sp(x_0) \ne sp_{matrix}(x_0) \text{ and } sn(x_0) = sn_{matrix}(x_0) \). I observed this first in the case where the iterated function is the exponential/logarithm, but it occurs in the same manner with this simple polynomial, too.
The view from the different fixpoints
Now I find that the view from the different fixpoints reflects precisely that problem: if I use the matrix \( N_p \) for \( sp_{matrix}(x_0) \), but for \( sn_{matrix}(x_0) \) the matrices \( M_n, N_n \), which are created analogously, only using the function recentered at the other fixpoint, then I get equality for both sums:
\( \hspace{24} sp(x_0) = sp_{matrix}(x_0) \text{ and } sn(x_0) = sn_{matrix}(x_0) \)
(at least to some observable accuracy; I have to check this in more depth later).
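A sketch of how I would realize this two-fixpoint version numerically (again my own reconstruction, continuing the code above; in particular, using the inverse of the truncated Carleman matrix of \( f_n \) as the Carleman matrix of the inverse function g recentered at \( x_{-\omega} \) is my reading of "created analogously at the other fixpoint"):

```python
# sn via the matrices built at the *other* fixpoint x_-omega = +4 (continues the code above).
x_minus_omega = 4.0

M_n = carleman([0.0, 3.0 / 2, 1.0 / 16], Nsize)   # f_n(x) = 3x/2 + x^2/16, centered at x_-omega
# Because the Carleman matrices are triangular by degree, inv(M_n) is (within the truncation)
# the Carleman matrix of the inverse function g recentered at x_-omega.
N_n = np.linalg.inv(np.eye(Nsize) + np.linalg.inv(M_n))

def sn_matrix(x0):
    V = (x0 - x_minus_omega)**np.arange(Nsize)    # powers of the argument recentered at +4
    return (N_n @ V)[1] + x_minus_omega / 2.0     # entry 1 = sum (-1)^k (x_-k - x_-omega)

x0 = 1.0
print(sn_matrix(x0), sn(x0))   # with both fixpoints used, both sums should now agree
# (whereas N_n built from M_p^(-1) at the same fixpoint gave the mismatch described above)
```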
The interesting observation here is that the fixpoint problem occurs although we do not use fractional iteration at all; it seems to suffice that infinitely many terms are involved.
I cannot yet claim any useful consequences from this observation, but perhaps we can use the two fixpoint versions to quantify the difference, and possibly find its source and an explicit correction term for it. This is then also a backing argument that my matrix-based computations for the alternating powertower series are correct and meaningful even in the divergent cases. See the excerpt of a related discussion in the newsgroup sci.math.research, which I have documented at
http://go.helms-net.de/math/tetdocs/_ser...tion_1.htm
A detailed discussion of the discrepancies mentioned above:
http://go.helms-net.de/math/tetdocs/Tetr...roblem.pdf
The initial article (should be reworked):
http://go.helms-net.de/math/tetdocs/10_4_Powertower.pdf
Gottfried
Gottfried Helms, Kassel