09/06/2007, 10:57 PM
I have one improvement, which supports the estimation of bounds and gives control over the error in the numerical approximation.
I stated:
Quote:\( \hspace{24pt}
W = {}_dV(\frac{1}{t}) * PI * XI_s
\)
then, on the other hand, the matrix-multiplication PI * XI_s involves summing infinite series of terms, and for the example t=2 these series are not absolutely convergent. So, unless I can make further progress in describing the entries of XI_s too, this means approximate results for W itself, and hence also for general powers of Bs.
If we write the whole equation
\( \hspace{24pt}
V(y)\sim = V(x)\sim * Bs
\)
we have
\( \hspace{24pt} \begin{eqnarray}
V(y)\sim &=& V(x)\sim * \left(W * {}_d\Lambda * W^{-1} \right) \\
V(y)\sim &=& V(x)\sim * \left({}_dV(\frac{1}{t}) * P^{-1}\sim * XI_s * {}_d\Lambda * W^{-1} \right)
\end{eqnarray}
\)
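Here is a minimal numerical sketch of this basic relation (Python/NumPy). The explicit entries of Bs below are only assumed to follow the usual Carleman-type construction (column k = Taylor coefficients of (s^x)^k = exp(k x log s)); the dimension, the base and all names are arbitrary and only for illustration:

```python
import numpy as np
from math import log, factorial

d = 32                  # truncation dimension
s = 2.0 ** 0.5          # example base s = sqrt(2) (the text's example t = 2, assuming s = t^(1/t))
x = 0.5

def V(u, dim=d):
    """Vector of consecutive powers [1, u, u^2, ...], truncated to dim entries."""
    return np.array([u**j for j in range(dim)])

# truncated Bs: column k holds the Taylor coefficients of (s^x)^k = exp(k*x*log(s)),
# so that V(x)~ * Bs reproduces V(s^x)~ up to the truncation error
Bs = np.array([[(k * log(s))**j / factorial(j) for k in range(d)]
               for j in range(d)])

lhs = V(x) @ Bs                            # V(x)~ * Bs
print(lhs[1], s**x)                        # the entry of index 1 should approximate s^x
print(np.allclose(lhs[:8], V(s**x)[:8]))   # first entries of V(s^x)~ -> True
```

With d = 32 the entry of index 1 agrees with s^x essentially to machine precision for such moderate arguments.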
Using associativity, we may change the order of computation:
\( \hspace{24pt} \begin{eqnarray}
V(y)\sim &=& \left(V(x)\sim * {}_dV(\frac{1}{t}) * P^{-1}\sim \right) * \left( XI_s * {}_d\Lambda * W^{-1} \right)
\end{eqnarray}
\)
to compute
\( \hspace{24pt} \begin{eqnarray}
V(x')\sim &=& \left(V(x)\sim * {}_dV(\frac{1}{t}) * P^{-1}\sim \right) \\
&=& V(\frac{x}{t}-1)\sim
\end{eqnarray}
\)
first, which gives exact terms up to dimension d (the factor P^{-1}~ is triangular, so each entry of this product is only a finite sum and no truncation error occurs here).
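As a quick check of this first factor (again only a sketch, with P assumed to be the lower-triangular Pascal matrix of binomial coefficients, so that V(x)~ * P~ = V(x+1)~):

```python
import numpy as np
from math import comb

d = 16
t = 2.0
x = 0.5

def V(u, dim=d):
    return np.array([u**j for j in range(dim)])

# inverse of the lower-triangular Pascal matrix: alternating-sign binomials,
# written out exactly rather than inverted numerically
P_inv = np.array([[(-1)**(j - k) * comb(j, k) for k in range(d)]
                  for j in range(d)], dtype=float)

dV_1_over_t = np.diag(V(1.0 / t))             # diagonal matrix of V(1/t)

x_prime = V(x) @ dV_1_over_t @ P_inv.T        # V(x)~ * dV(1/t) * P^-1~
print(np.allclose(x_prime, V(x / t - 1.0)))   # exact up to dimension d -> True
```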
Then the remaining right-hand side of the formula also gives correct terms up to the finite dimension ...
\( \hspace{24pt} \begin{eqnarray}
V(z) &=& \left( XI_s * {}_d\Lambda * W^{-1} \right) [,1]
\end{eqnarray}
\)
... and we only have to consider the convergence of the terms of a simple vector product
\( \hspace{24pt}
s^x = y = V(y)\sim[,1] = V(x')\sim * V(z)
\)
where the error due to the finite truncation of the matrices occurs only in this last step and may be minimized if convergence-acceleration can be applied.
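Finally, a rough end-to-end sketch of the reordered computation. Since W = dV(1/t) * P^{-1}~ * XI_s, the right factor can be rewritten as XI_s * dΛ * W^{-1} = P~ * dV(t) * Bs; the sketch below rebuilds V(z) that way from a truncated Bs (whereas in the text V(z) comes from the exact entries of XI_s and dΛ), so it only illustrates the mechanics of the regrouping and of the final dot product, not the error behaviour of the exact construction. The [,1] is read here as the column of index 1, counting from 0, so that V(y)~[,1] = y:

```python
import numpy as np
from math import comb, log, factorial

d = 32
t = 2.0
s = t ** (1.0 / t)        # s = sqrt(2); assumed relation s = t^(1/t), so t is a fixpoint of x -> s^x
x = 0.5

def V(u, dim=d):
    return np.array([u**j for j in range(dim)])

# truncated Bs for x -> s^x: column k = Taylor coefficients of exp(k*x*log(s))
Bs = np.array([[(k * log(s))**j / factorial(j) for k in range(d)]
               for j in range(d)])

P_T = np.array([[comb(k, j) for k in range(d)]        # P~ (upper triangular Pascal)
                for j in range(d)], dtype=float)
dV_t = np.diag(V(t))                                  # dV(t) = dV(1/t)^-1

# V(z) = (XI_s * dLambda * W^-1)[,1], rebuilt here as (P~ * dV(t) * Bs)[,1]
Vz = P_T @ dV_t @ Bs[:, 1]

# left factor evaluated exactly in closed form: V(x')~ = V(x/t - 1)~
Vx_prime = V(x / t - 1.0)

# the single vector product that carries the truncation error
y = Vx_prime @ Vz
print(y, s**x)            # should agree up to the accuracy the truncation allows
```

If the exact entries of V(z) are available, convergence acceleration (Euler summation, for instance) would be applied to the partial sums of this last dot product only.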
In fact, this is how I computed the terms of the series for the half-iterates in my other post.
Gottfried
Gottfried Helms, Kassel