Here are two plots with a different precision for the nslog, i.e. matrix size 50 and exact fractional arithmetic. The precision of dsexp is the same as in the previous post.
The peak is roughly around 0.3, but it will probably move towards 0.2 as the matrix size of the matrix power approaches the matrix size of the natural Abel function. Since the computation is very time consuming, I continue computing only for \( t=0.2, 0.3 \):
\( \delta(190,19)(0.3)\approx 0.53\times 10^{-5} \)
\( \delta(200,20)(0.3)\approx 0.43\times 10^{-5} \)
\( \delta(250,25)(0.3)\approx 0.15\times 10^{-5} \)
\( \delta(250,25)(0.2)\approx 0.16\times 10^{-5} \)
\( \delta(300,25)(0.2)\approx 0.16\times 10^{-5} \)
\( \delta(350,30)(0.2)\approx 0.65\times 10^{-6} \)
The first two arguments of \( \delta \) are the precision (number of digits) and the matrix size. Note that the value degenerates if the precision is too low: each arithmetic operation in the computation of the matrix power introduces a small error, and these errors accumulate over the whole computation.
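For readers who want to experiment, here is a minimal Python/mpmath sketch of a fractional matrix power \( A^t \) by diagonalization, where `mp.dps` plays the role of the first argument of \( \delta \). The function name and the toy 2x2 matrix are my own illustrations; they merely stand in for the much larger matrices used here.

```python
# Hedged sketch (not the code behind the numbers above): fractional
# matrix power A**t via diagonalization, at a chosen working precision.
from mpmath import mp, matrix, eig, diag, inverse

def fractional_power(A, t):
    # A**t = V * diag(lambda_i**t) * V^{-1}; every multiplication and
    # the inversion is done with mp.dps digits, so rounding errors
    # accumulate -- with too few digits the result degenerates.
    E, V = eig(A)                      # eigenvalues, right eigenvectors
    D = diag([lam**t for lam in E])    # t-th power on the diagonal
    return V * D * inverse(V)

mp.dps = 250                           # working precision (decimal digits)
A = matrix([[2, 1], [0, 3]])           # illustrative toy matrix
half = fractional_power(A, mp.mpf('0.5'))
print(half * half - A)                 # ~0 up to accumulated rounding error
```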
Actually I think we can see that \( \delta \) decreases with increasing precision and matrix size, very slowly but steadily. So it rather seems that the diagonalization method and Andrew's method coincide in the limit!
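To make the trend easier to see at a glance, here is a quick tabulation of exactly the values quoted above (no new data):

```python
# The measured delta values from above; within each t, delta shrinks
# as both the precision (digits) and the matrix size grow.
measurements = [
    # (digits, matrix size, t, delta)
    (190, 19, 0.3, 0.53e-5),
    (200, 20, 0.3, 0.43e-5),
    (250, 25, 0.3, 0.15e-5),
    (250, 25, 0.2, 0.16e-5),
    (300, 25, 0.2, 0.16e-5),
    (350, 30, 0.2, 0.65e-6),
]
for digits, size, t, d in measurements:
    print(f"delta({digits},{size})({t}) = {d:.2e}")
```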
