Let's assume we have a stack of size PIXELS BY PIXELS, where X and Y each run through 0 to PIXELS.
Let's call WRITE(PIXELS BY PIXELS) the operation which runs through X and Y and writes each value at (X, Y) to a file.
Let us call the time expended to do this E = O(PIXELS BY PIXELS).
When running Mike3's program, we not only have a stack size of PIXELS BY PIXELS and run through PIXELS BY PIXELS; we also evaluate each pixel. Instead of a fast run through of X and Y, we have to assign a value F(X,Y), which can take much more computation time: for each X and each Y, evaluate F(X,Y). If F is a tetration-type function, it takes a long time to evaluate these values...
This creates a far longer expended time than E, especially because F(X,Y) sits at the bottom of the for-loops.
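In pseudocode terms, the naive scheme looks roughly like this (a minimal Python sketch for illustration; `F`, `render_naive`, and the grid layout are hypothetical stand-ins, not Mike3's actual code):

```python
def render_naive(F, PIXELS):
    # Evaluate F at every pixel: the expensive call sits at the bottom of
    # both for-loops, so the cost is O(PIXELS * PIXELS) calls to F.
    grid = [[0j] * PIXELS for _ in range(PIXELS)]
    for X in range(PIXELS):
        for Y in range(PIXELS):
            grid[X][Y] = F(X, Y)   # one expensive call per pixel
    return grid
```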
The goal of my program is simple. We create a stack of size PIXELS*R BY PIXELS*R, where R is between 0 and 1 and PIXELS*R is a natural number. My code excels in the fact that:
OVERALL RUNTIME = E + STACK(PIXELS*R BY PIXELS*R)
So, if we set R = 0.5 and increase PIXELS BY PIXELS, we lower the OVERALL RUNTIME, though we still have to account for the initial write time E.
Instead of running:
O(PIXELS BY PIXELS) evaluations of F(X,Y)
which is a for-loop over X = 0..PIXELS and then Y = 0..PIXELS, where we evaluate F(X,Y) and then write it,
we instead run:
E + O(PIXELS*R BY PIXELS*R)
where X = 0..PIXELS*R and then Y = 0..PIXELS*R and we evaluate F(X,Y); then we write PIXELS BY PIXELS, i.e. we pay an offset of E amount of time to write PIXELS BY PIXELS using the values already in the registry.
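A minimal sketch of that two-phase scheme, assuming one F call per coarse sample and a nearest-sample lookup during the write phase (names and structure are illustrative only, not my actual implementation):

```python
def render_coarse(F, PIXELS, R):
    n = int(PIXELS * R)              # assumes PIXELS*R is a natural number
    # Phase 1: only n*n expensive evaluations of F, taken at the coarse
    # sample points (X/R, Y/R) of the full-resolution grid.
    coarse = [[F(X / R, Y / R) for Y in range(n)] for X in range(n)]
    # Phase 2: E = O(PIXELS * PIXELS) cheap writes; each fine pixel just
    # looks up the nearest stored coarse value.
    return [[coarse[min(int(X * R), n - 1)][min(int(Y * R), n - 1)]
             for Y in range(PIXELS)] for X in range(PIXELS)]
```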
The hypothesis of this code is that \(F(X+h_1, Y+h_2) \approx F(X,Y) + DF(X,Y)\,(h_1 + i h_2)\).
The reason large values produce a more pixelated graph is that this approximation becomes less sharp. But nonetheless, we still have a very good approximation, and the speed-ups are worth it. Pixelation is a small price to pay, especially considering the following.
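The hypothesis above can be checked numerically. Here is a minimal sketch, where `F`, `DF`, and the test function \(f(z) = z^2\) are stand-ins chosen purely for illustration:

```python
def taylor_fill(F, DF, X, Y, h1, h2):
    # First-order hypothesis: F(X+h1, Y+h2) ~ F(X,Y) + DF(X,Y)*(h1 + i*h2),
    # where DF is the complex derivative of F. The error shrinks with the
    # offsets h1, h2, which is why a larger R keeps the graph sharper.
    return F(X, Y) + DF(X, Y) * (h1 + 1j * h2)
```

For \(f(z) = z^2\) with offsets of 0.01, the error of this fill is \(|h|^2 = 2 \times 10^{-4}\), i.e. invisible at normal color resolution.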
Here is a graph of 1000 by 1000 pixels of BETA(z), the same BETA as before. This graph would normally take about 6-8 hours; I have done it in 2 hours, with the "blur factor" R = 0.5.
This means I have spent the CPU time of 500 by 500 pixels evaluating F(X,Y), plus the time it takes to WRITE 1000 by 1000 pixels (E = O(1000 BY 1000)). Instead of 6-8 hours of CPU time, it's 2 hours.
This graph is the same \(\beta\) function as above, done over \(0 \le \Re(z) \le 4\) and \(-2 \le \Im(z) \le 2\)... You can't even tell it's blurred.
The run time for my program is:
E + O(F(PIXELS*R,PIXELS*R))
Rather than:
O(F(PIXELS,PIXELS))
If that makes sense.....
This might seem like nothing. But when you need to spend a long time calculating each value F(X,Y), my code slashes the time: the number of calls to F scales as R^2, so at R = 0.5 we only call F a quarter as many times, and the runtime above fell from 6-8 hours to 2.
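For concreteness, the call-count arithmetic at R = 0.5 (assumption: one F call per coarse sample, with writes comparatively cheap):

```python
PIXELS, R = 1000, 0.5
calls_naive = PIXELS * PIXELS           # 1,000,000 expensive F calls
calls_coarse = int(PIXELS * R) ** 2     # 250,000 expensive F calls
ratio = calls_coarse / calls_naive      # scales as R**2
print(calls_naive, calls_coarse, ratio)
```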

