(08/20/2022, 03:31 PM)Gottfried Wrote: How do you compute the coefficients? Using Pari/GP and self-tailored routines for triangular matrices, I get CPU times for n x n matrices with n = 16*[12,14,15,16,32] of [6,12,15,20,314] secs using integer arithmetic. A trend calculation in Excel gives a power-law estimate with exponent 3.8 or even 4.0, telling me that calculating n=1024 needs 68 mins. For n=2048, which was my next goal, my routines would likely need 18 hrs. My computations work on (optimized) procedures for the Mercator series for the Carleman matrices, and I don't think I can tweak the time consumption further down. With float numbers (but with the risk of too few decimals) I get better timings: n=1024 should need 23 mins, and n=2048 about 4.25 hrs...
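The power-law trend from the quote can be reproduced with a quick log-log least-squares fit. A small sketch (only the (n, seconds) pairs are from Gottfried's post; the fit itself is my illustration, and the predicted minutes will differ somewhat from his Excel figure depending on which points dominate the fit):

```python
import math

# Measured (n, seconds) pairs from Gottfried's post: n = 16*[12,14,15,16,32]
sizes = [16 * m for m in (12, 14, 15, 16, 32)]
times = [6, 12, 15, 20, 314]

# Fit log(t) = p*log(n) + log(c), i.e. a power law t ~ c * n**p
xs = [math.log(n) for n in sizes]
ys = [math.log(t) for t in times]
k = len(xs)
xbar, ybar = sum(xs) / k, sum(ys) / k
p = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
    / sum((x - xbar) ** 2 for x in xs)
c = math.exp(ybar - p * xbar)

# Extrapolate the runtime to the sizes discussed in the post
for n in (1024, 2048):
    print(n, "-> fitted exponent", round(p, 2),
          "predicted", round(c * n ** p / 60), "min")
```

On these five points the fitted exponent lands near 4.0, consistent with the "3.8 or even 4.0" trend, and doubling n multiplies the predicted runtime by roughly 2^4 = 16.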
In Sage I work with integer fractions (QQ), using the ordinary recursion formula that comes from the defining equation.
I can easily compute more values whenever I feel like it, because I save all the previous values and the recursion just computes the next ones.
There is no optimization, though; I think it takes a little longer, perhaps 1h 30m for 1024 values.
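The caching pattern looks roughly like this in plain Python, with fractions.Fraction standing in for Sage's QQ. The actual recursion depends on the defining equation, which is not spelled out here, so the formula below is only a placeholder to show the save-and-extend idea:

```python
from fractions import Fraction

# All coefficients computed so far; extending the series only computes
# the new entries, exactly as described above.
coeffs = [Fraction(1)]  # a_0 (placeholder initial value)

def extend_to(n):
    """Return a_n, computing and caching any missing coefficients.

    The real recursion would come from the defining functional equation;
    a_k = (1/k) * sum_{j<k} a_j / (k - j) is used here purely as a stand-in.
    """
    while len(coeffs) <= n:
        k = len(coeffs)
        a_k = sum(coeffs[j] / (k - j) for j in range(k)) / k
        coeffs.append(a_k)  # exact rational arithmetic, no rounding
    return coeffs[n]
```

With exact rationals there is no precision risk, only growing numerators and denominators, which is why the timings are slower than a float computation.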
But Gottfried, by doubling to 2048 you gain only a tenth more in x values: you go from 10 = log2(1024) to 11 = log2(2048), so you will not see much more. That's what I mean: it's exponential.
(08/20/2022, 03:31 PM)Gottfried Wrote: Anyway, I'll try to reproduce and extend some of your curves today and/or tomorrow. Curious!
Haha, now we are in the field of experimental mathematics: as in physics, results have to be reproduced by different teams, lol.
(08/20/2022, 03:31 PM)Gottfried Wrote: I'm moreover curious whether a stretch of the x-axis should be an option, to capture the periodicity better. Say log(2.1) instead, or so ... and see whether there would be a meaningful value there. Yet I did not collect example data so far...
You mean that the zeros go to integers?
