(Part 2 of 2 posts)
The practical reason for using the derivatives is, in my case, that I want to apply Newton iteration to find zeros and extrema to high/arbitrary accuracy. Because the matrix-based approach is powerful for computing the original values of \( \operatorname{asum}() \), I want to apply it here too.
The naive approach for the derivative at some x, taking \( \lim_{h \to 0} \), is of course
\( \operatorname{asum}(x)'={(\operatorname{asum}(x+\frac h2)-\operatorname{asum}(x- \frac h2)) \over h} \)
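As a quick illustration (a minimal Python sketch; math.sin is only a stand-in test function, since \( \operatorname{asum}() \) itself is not reproduced here), the symmetric difference quotient looks like:

```python
import math

def central_diff(f, x, h=1e-6):
    # symmetric difference quotient; truncation error is O(h^2) for smooth f
    return (f(x + h/2) - f(x - h/2)) / h

# stand-in for asum(): any smooth function with a known derivative
approx = central_diff(math.sin, 1.0)
# math.cos(1.0) is the exact derivative; approx agrees to roughly 10 digits
```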
This can be evaluated numerically from the iteration series using sumalt(), or, better, using the matrix-based method.
But because the derivative of the series is the series of the derivatives of its terms, and the terms have an analytical expression for the derivative, we could try to base an evaluation on the derivatives of the iterates. Again, the matrix method is superior here; since the partial iteration series \( \operatorname{asuma}() \) and \( \operatorname{asumb}() \) are expressible as analytic power series, we can simply insert the coefficients for the term-by-term differentiation, such that in a first step we would write:
\( \begin{array} {rcl} \operatorname{asuma}(x)' &=& \sum_{k=1}^\infty k * a_{0,k}*x^{k-1} \\
\operatorname{asumb}(x)' &=& 0 + \sum_{k=1}^\infty k*a_{1,k}*(x-t_1)^{k-1} \end{array} \)
and the asum(x) by
\( \operatorname{asum}(x)' = \operatorname{asuma}(x)' + \operatorname{asumb}(x)' - 1 \)
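Term-by-term differentiation of a truncated power series amounts to a simple transformation of the coefficient vector. A minimal Python sketch (with generic coefficients \( a_k \), not the actual \( a_{0,k} \), \( a_{1,k} \) produced by the matrix method) would be:

```python
import math

def diff_coeffs(a):
    # coefficients of d/dx sum_k a[k]*(x-x0)^k : k*a[k], degree shifted down by one
    return [k * c for k, c in enumerate(a)][1:]

def eval_series(a, x, x0=0.0):
    # evaluate the truncated power series at x, expanded around x0
    return sum(c * (x - x0)**k for k, c in enumerate(a))

# sanity check against exp(x), whose series coefficients are 1/k!
a = [1.0 / math.factorial(k) for k in range(30)]
d = diff_coeffs(a)
# the derivative of exp is exp itself, so eval_series(d, 0.5) ~ math.exp(0.5)
```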
This works very well in principle; even if I use some x near 3.5, both series seem to converge sufficiently well with n=64 terms. However, to improve the approximation we should again shift x towards the fixpoints for each partial series and also compute the derivatives of the individual terms around \( x=x_0 \) by the explicit analytical formulae for the individual terms, such that we had something like
\( \operatorname{asum}(x)' = \operatorname{asuma}(x_n)' + \operatorname{asumae}(x_{n-1},x_{1-m})' + \operatorname{asumb}(x_{-m})' \)
But now: I cannot make this computation come out correctly if I assume more terms for the middle part of the series. Everything is still fine if I use the numerically approximated derivatives for the two partial series
\( \begin{array} {rcl}
\operatorname{asum}(x)' &\sim& {\operatorname{asuma}(f(x+h/2,n))-\operatorname{asuma}(f(x-h/2,n)) \over h}\\
& + & {\operatorname{asumb}(g(x+h/2-t_1,m)+t_1) - \operatorname{asumb}(g(x-h/2-t_1,m)+t_1) \over h} \\ &+& \operatorname{asumae}(x_{n-1},x_{1-m})' \end{array} \)
where in \( \operatorname{asumae}() \) I can use the analytical expressions for its single terms, but for the h in the evaluations of the other partial series I can only go down to something like \( h=10^{-12} \) and not much smaller, because of loss of precision.
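That precision wall at small h is the usual cancellation effect of difference quotients in fixed-precision arithmetic. A small Python sketch (IEEE double precision, with math.sin again only standing in for the partial series) shows the error growing once h becomes too small:

```python
import math

def central_diff(f, x, h):
    return (f(x + h/2) - f(x - h/2)) / h

# total error ~ C*h^2 (truncation) + eps/h (cancellation):
# below the optimal h, shrinking h makes the result *worse*
exact = math.cos(1.0)
errs = {h: abs(central_diff(math.sin, 1.0, h) - exact)
        for h in (1e-4, 1e-8, 1e-12)}
# errs[1e-12] is typically several orders of magnitude larger than errs[1e-4]
```

In multiprecision software such as Pari/GP the working precision (and hence the usable h) can be pushed further, but the trade-off between truncation and cancellation remains.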
This is especially unsatisfactory because an analytical expression seems to be hovering very close by!
I tried for a couple of days to find appropriate expressions for the arguments of the matrix-based analytical derivatives \( \operatorname{asuma}()' \) and \( \operatorname{asumb}()' \), but I always got stuck.
Gottfried
Gottfried Helms, Kassel

