02/23/2023, 07:59 AM
Alright! I'm still looking into this and seeing if I can create speed-ups, but for the moment everything is working.
Here is a graph of \(\beta(z)\) for \(0 \le \Re(z) \le 6\) and \(|\Im(z)| \le 3\); for 300 by 300 pixels using Mike3's program:
And here is the exact same graph, with the "blur factor" set to 1/2.
The second graph isn't dramatically faster to produce, but it is noticeably faster. Once we get to something like 1000 by 1000 pixels, or hi-res graphs, my program is going to be considerably faster. I set the blur factor to 1/2 because I wanted to show the noticeable degradation; normally I'd suggest something like a 3/4 blur factor, which won't be as noticeable and will make the pixel blocks smaller.
Also, you can notice it getting particularly pixelated near large values. I'm trying to think of a way around this, and I think I have one, but I'm worried it'll be useless, because that's just the nature of tetration and iterated exponentials.
But nonetheless...
Instead of graphing pixel by pixel, I have successfully implemented graphing "pixel block" by "pixel block", where the "blur factor" determines the size of the block. And by choosing a smaller blur factor, we can noticeably reduce the run time.
For example, with 300x300 pixels and a 1/2 blur factor, we only need to evaluate \(\beta(z)\) 150x150 times rather than 300x300 times. And we aren't losing much detail, despite a bit of pixelation at large values. I think the speed makes it worth it.
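To make the idea concrete, here is a minimal Python/NumPy sketch of pixel-block rendering (the function and parameter names are my own for illustration, not from the actual program): the expensive function is evaluated only on a coarse grid of side \(\lfloor n \cdot \text{blur} \rfloor\), and each coarse sample is replicated to fill its pixel block.

```python
import numpy as np

def block_render(f, re_range, im_range, n, blur=0.75):
    """Render f over a complex rectangle at n-by-n pixels,
    but only evaluate f on a coarse m-by-m grid, m = round(n * blur).
    Each coarse sample fills a whole "pixel block" in the output."""
    m = max(1, round(n * blur))                      # coarse grid side
    re = np.linspace(re_range[0], re_range[1], m)
    im = np.linspace(im_range[0], im_range[1], m)
    # Only m*m evaluations of f, instead of n*n.
    coarse = np.array([[f(x + 1j * y) for x in re] for y in im])
    # Nearest-neighbour upscale: pixel k reads coarse sample k*m//n.
    idx = (np.arange(n) * m) // n
    return coarse[np.ix_(idx, idx)]
```

With n = 300 and blur = 1/2 this calls `f` exactly 150x150 times while still returning a 300x300 image, which is where the speed-up comes from; the visible cost is the blocky pixelation, since each sample is shared by a 2x2 block.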
I'm still working on optimizing it, but there's only one thing I see as a problem at the moment, and I'm just trying to find the best way to code it. Will keep you guys updated.
PM me if you are interested in beta testing. I don't want to post this until I have everything working!

