Hey, Sheldon
This is very much the problem I've been facing when trying to get Taylor series. And I'd like to say that my Taylor series, as I'm currently grabbing them, are rather defunct. I can explain why pretty plainly. You're pointing out a very good problem, which I don't know how to avoid perfectly just yet. But ultimately, I feel it's a problem with how I've coded this--as opposed to the math. For that, I'm going to run through how this works.
Each point \( s\in \mathbb{C} \) is matched to a different level of iteration \( 1 \le n \le 10 \). If I write,
\(
\tau^0 = 0\\
\tau^{n+1}(s) = -\log(1+e^{-s}) + \log\left(1+\frac{\tau^{n}(s+1)}{\beta_1(s+1)}\right)\\
\)
Then for each point \( s \in \mathbb{C} \), running Abel_N(s,1) performs \( n \le 10 \) iterations for some \( n = n(s) \). This was done using a limiter in my program, which quits as soon as \( \Re(\beta(s,1)) > 3 \) or \( n > 10 \). So what you are seeing here is a discrepancy, where,
\(
\beta(s,1) + \tau^9(s) \neq \beta(s,1)+ \tau^{10}(s)\\
\)
But they agree fairly well pointwise. The trouble is that their Taylor series are vastly different, especially as you go further out in the terms. After our talk, and further testing today, I realize my protocol for grabbing Taylor series is defunct, and not done perfectly. Each point \( s \) has its own depth of iteration; if on the real line we get \( n = 4 \), but at \( s = 1+i \) we get \( n = 10 \), then the two functions, even though they may somewhat agree pointwise, have vastly different Taylor series. So I've been thinking about:
\(
\beta(s,1) + \tau^n(s)\\
\)
And for the Taylor series to work naturally, every \( s \) must have the same \( n \). Now, I avoided this mostly because it produces many errors on the real line. If I set \( n = 10 \), the Abel function will not work on the real line at all, because we need to sample values of about \( \beta(10,1) \). Which are astronomical.
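To see how fast this blows up, here's a toy Python illustration: iterating a plain exponential (a stand-in for the tower-like growth of \( \beta \), not the actual beta function) overflows double precision almost immediately.

```python
import math

# Iterating exp as a toy stand-in for the tower-like growth of beta(s,1):
# math.exp overflows double precision once its argument passes ~709.78.
x = 1.0
height = 0
try:
    while True:
        x = math.exp(x)
        height += 1
except OverflowError:
    print(f"overflow at tower height {height + 1}")
```

Even at modest heights the values are unrepresentable, which is why fixing \( n = 10 \) for every \( s \) is hopeless on the real line.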
I've been thinking about a workaround for how to code this. But, as your post shows, at least the Taylor series code is entirely wrong. We require another analytic expression for this. We need a way such that each \( s \) has the same depth of iteration, rather than a patchwork of different amounts of iterations at different points. This may work pointwise, but it's a disaster analytically. I think the recursive method is ineffective for Taylor series.
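To make the patchwork concrete, here is a minimal Python sketch of the depth selection my limiter performs; `beta_toy` is a hypothetical stand-in for \( \beta(s,1) \), purely for illustration:

```python
import cmath

def beta_toy(s):
    # hypothetical stand-in for beta(s,1), just to drive the limiter
    return cmath.exp(s)

def depth(s, max_n=10, cutoff=3.0):
    """Depth n(s) at which the limiter quits:
    stop as soon as Re(beta(s+n)) > cutoff or n > max_n."""
    n = 0
    while n <= max_n and beta_toy(s + n).real <= cutoff:
        n += 1
    return n

# Neighbouring points land on different depths, so the sampled "function"
# is a patchwork of different analytic expressions.
print(depth(0), depth(1 + 1j))
```

Different points getting a different \( n \) is exactly what poisons the Taylor coefficients.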
From this enters the fixed-iteration method. This was the initial way I had coded this problem, but it proved to create inaccuracies. It is, however, analytic in nature. So, taking two steps back, we instead write,
\(
\text{Abel}_N(s,1,n) = \beta(s) + \tau^n(s)\\
\)
Where \( n \) is fixed for all \( s \), and we completely remove the limiter \( \Re \beta(s) \le 3 \). This will not work as well pointwise, but it will work analytically, because this is a single analytic expression.
I abandoned this method because it caps at about 15 digits of precision before overflowing, and can lose precision in the complex plane. But it is analytic. Again, the main evil is the overflow errors we get from nested recursion. To recode this, we start by changing our rho function into,
Code:
rho(z,y,count)={
  if(count > 0,
    count--;
    \\ one step of the tau recursion at the given depth
    log(1 + (rho(z+1,y,count) - log(1+exp(-y*(z+1))))/beta(z+1,y)),
    \\ base case: tau^0 = 0
    0
  );
}
And our Abel_N function into:
Code:
Abel_N(z,y,count) = {
  \\ fixed-depth approximation: beta + tau^count, with the log correction
  beta(z,y) + rho(z,y,count) - log(1+exp(-y*z));
}
And now, every time you call Abel_N you have to specify the amount of recursion. count=4 gets good accuracy on the real line, but less accuracy in the complex plane. However, the benefit is that this is an analytic function. Remember, the answer is count = \infty. It's not a bunch of analytic functions pasted together which agree fairly well pointwise.
So for example, if you work with Abel_N(s,1,5) you get about 15 digits on the real line (avoiding overflows, so sticking with z < 1).
And now we move on to your question, which I answer using this table:
Code:
Abel_N(1+I,1,3)
%72 = 0.3326962392308007020014865854237297888996872301263950452212819011740822116949238956645687894808235713 + 0.8253196416421547770143811245258918770537667057235651771844372452625085974948591308978646820358718994*I
exp(Abel_N(I,1,3))
%73 = 0.3331324362934382533107762285571036421697911477662160925662937067298050267131043999222875942632743548 + 0.8265817438212454693846804181500372134007618310660305633420340147921518796924964209508964583105448308*I
Abel_N(1+I,1,4)
%74 = 0.3338405164485978560617226914964707072664591940724549428755239283104216900912384022667310746444439571 + 0.8297707536819189534491342070803442963391124193412046139999079825530844387021891777815842418547851239*I
exp(Abel_N(I,1,4))
%75 = 0.3326962392308007020014865854237297888996872301263950452212819011740822116949238956645687894808235713 + 0.8253196416421547770143811245258918770537667057235651771844372452625085974948591308978646820358718994*I
Abel_N(1+I,1,5)
%76 = 0.3342824352516798494105860729409608796878908824771145078019334622592328575006054064884925397153343555 + 0.8315813619339502416453819519085905691938993051369868315671650853929466887077756760576359583380320225*I
exp(Abel_N(I,1,5))
%77 = 0.3338405164485978560617226914964707072664591940724549428755239283104216900912384022667310746444439571 + 0.8297707536819189534491342070803442963391124193412046139999079825530844387021891777815842418547851239*I
Abel_N(1+I,1,6)
%78 = 0.3343503812545163782483121839224093960869801264764702427619396667729679185297203272127713675669724870 + 0.8318522150534258467642344294075900135334283086787844285845867243763170498636223660748960810403490459*I
exp(Abel_N(I,1,6))
%79 = 0.3342824352516798494105860729409608796878908824771145078019334622592328575006054064884925397153343555 + 0.8315813619339502416453819519085905691938993051369868315671650853929466887077756760576359583380320225*I
Abel_N(1+I,1,7)
%80 = 0.3343528247466435854370710702152854431493564677151529776954052706900708135747687578304817050057444114 + 0.8318607651729233849245226812501299571093924010418649837027066585722523959004595892396238594261250086*I
exp(Abel_N(I,1,7))
%81 = 0.3343503812545163782483121839224093960869801264764702427619396667729679185297203272127713675669724870 + 0.8318522150534258467642344294075900135334283086787844285845867243763170498636223660748960810403490459*I
Abel_N(1+I,1,8)
%82 = 0.3343528247661539766247053296097915790613022346645454910710939522968467496625386880329966406027443183 + 0.8318607651979606419012858881111704414647307119906690903661656988862193774443656979962248951639318901*I
exp(Abel_N(I,1,8))
%83 = 0.3343528247466435854370710702152854431493564677151529776954052706900708135747687578304817050057444114 + 0.8318607651729233849245226812501299571093924010418649837027066585722523959004595892396238594261250086*I
Abel_N(1+I,1,9)
%84 = 0.3343528247661539766247053296097915790613022346645454910710939522968467496625386880329966406027443183 + 0.8318607651979606419012858881111704414647307119906690903661656988862193774443656979962248951639318901*I
exp(Abel_N(I,1,9))
%85 = 0.3343528247661539766247053296097915790613022346645454910710939522968467496625386880329966406027443183 + 0.8318607651979606419012858881111704414647307119906690903661656988862193774443656979962248951639318901*I
And after this there's an overflow in the process. But the TRUE value of Abel_N(1+I,1) is the limit as count goes to infinity, which obviously can't be computed without overflows.
Now this is important because Abel_N(z,1,9) is NOT THE SAME ANALYTIC FUNCTION as Abel_N(z,1,3); but on the real line, we roughly cap at 3. So of course the Taylor series won't be equal. I can't believe I missed this.
So, in essence, my code is pointwise only. And grabbing Taylor series won't work perfectly, because sometimes we call Abel_N(z,1,3) and sometimes we call Abel_N(z,1,9). Think of my code as a piecewise approximation to 100 digits... I hadn't realized this, but your challenges make it clear.
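Your point can be reproduced in miniature: two functions that agree pointwise to around \( 10^{-12} \) can still have wildly different high-order Taylor coefficients. Here's a small Python check (my own toy example, nothing to do with the Abel function itself):

```python
import math

eps, k = 1e-12, 40.0
f = lambda x: math.exp(x)                          # stand-in for the "depth 9" branch
g = lambda x: math.exp(x) + eps * math.sin(k * x)  # stand-in for the "depth 3" branch

# Pointwise the two agree to ~1e-12 on [0, 1] ...
max_gap = max(abs(f(i / 100) - g(i / 100)) for i in range(101))

# ... but the n-th Taylor coefficient of the perturbation at 0 is
# eps * k^n / n! for odd n, which is already large by n = 21.
n = 21
coeff_gap = eps * k**n / math.factorial(n)
print(max_gap, coeff_gap)
```

The values match to a dozen digits while a 21st coefficient differs by tens of units: exactly the "agrees pointwise, disagrees analytically" disaster.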
My solution: pull back further into the left half plane. I can't believe I hadn't thought of this sooner. Let's restrict ourselves to nonzero imaginary values; so screw the real line for the moment, and let's work just with the values \( 0 < \Im(s) < \pi \). Then I'll add the code:
Code:
Abel_N(z,y,{count=1000}) = {
  if(real(Const(z)) <= -1000,
    \\ far enough left: evaluate the fixed-depth expansion directly
    beta(z,y) + rho(z,y,count) - log(1+exp(-y*z)),
    \\ otherwise pull back one step and push forward with exp
    exp(Abel_N(z-1,y,count))
  );
}
Then you get 100-digit accuracy, displayed by this:
Code:
Abel_N(-1000+I,1)
%218 = 0.3181315052047641353126542515876645172035176138713998669223786062294138715576269792324863848986361638 + 1.337235701430689408901162143193710612539502138460512418876312781914350531361204988418881323438794016*I
exp(Abel_N(-1001+I,1))
%219 = 0.3181315052047641353126542515876645172035176138713998669223786062294138715576269792324863848986361638 + 1.337235701430689408901162143193710612539502138460512418876312781914350531361204988418881323438794016*I
This allows us to run 1000 iterations, but only way off in the left half plane. You can see it converging towards the fixed point \( L \); which is to say, \( \tau(-\infty) = L \) for \( 0 < \Im(s) < \pi \).
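As a sanity check on that limit, the fixed point \( L \) of \( e^z \) in the upper half plane can be computed independently of the beta machinery; here is a short Newton iteration in Python (a standard method, not part of the GP code above):

```python
import cmath

# Newton's method on f(z) = exp(z) - z, seeded in the upper half plane;
# it converges to the primary fixed point L of exp, satisfying L = e^L.
z = 0.3 + 1.3j
for _ in range(50):
    z -= (cmath.exp(z) - z) / (cmath.exp(z) - 1)

# L ~ 0.3181315052... + 1.3372357014...j
print(z)
```

The printed value matches the leading digits of the 100-digit Abel_N output shown in the table above, as it should if \( \tau(-\infty) = L \).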
And now we push forward. In fact, I think you'll be able to make a change of variables \( w = e^s \) in which at \( w = 0 \) we have \( \text{Abel}_N = L \); and then we're back to just pushing forward with exponentials, but we might be able to grab Taylor series better.
Now, SIGNIFICANTLY, we alter the normalization constant a lot. So now Abel_N(1+I,1) is way off in the right half plane compared to before. We get 1000 iterations, but we have to find the normalization constant again. My best guess is that this normalization constant should be about \( \approx -180 \); I'm not really sure. Either way, this gives a tetration for \( 0 < \Im(s) < \pi \) and \( \Re(s) < -1000 \). And crucially, we get a fixed number of iterations rather than pasting different iterations together. This should be much better for Taylor series.
All in all, I feel your challenge is perfectly correct, and you are absolutely right to doubt these Taylor series. But in my defense, it's a problem with how I coded it, not with the math. The math is solid, even if this is the hill I die on. I just don't know how to code it efficiently without screwing everything up.
I need your help coding this better, so thank you for any challenge you make. As I said to Ember, let's break this method together.
Here are some values for \( \Re(s) \approx -180 \):
Code:
Abel_N(-180+0.5*I,1)
%23 = -4.594979390707927534666609124398354584088205114874356021315346273924960915837111059622974914088977405 - 1.245536222570619397930679863903537441983317514571627551506395779887607679115072488921172120017242391*I
Abel_N(-181+I,1)
%24 = -4.646093398162349222749913062587164021884607537129389945895507538405018235559517002184091847195395429 - 2.111079009223824705007214393403535783852031910962549740360763109382552393207990708425053211387566108*I
Abel_N(-182+0.25*I,1)
%25 = 1.319781624143128386109148423892091221888459095835226587197839366868690149315196477658600798681783902 + 1.104119945601300209454094790959724015807443342953939457605317862482700111558216705808742393140752610*I
And beware: this is insanely slower than it was before! It takes a minute to calculate a single value. I've avoided all optimization in favour of exact calculation.
I hope this helps you assess the situation. I'm so happy to have you back, Sheldon. This is the best way I can answer your question. I hope it makes sense.
Regards, James
Actually, Sheldon, I believe this is the proof that the beta method (including Tommy's version) IS Kneser's. We need to use the normality as \( \Re(s) \to -\infty \) in our coding; and consequently the behaviour as \( \Im(s) \to \infty \) must also be normal, just by inspection. And if it's normal there, Paulsen & Cowgill take care of the rest.
Additionally, Sheldon: we pass your previous Taylor series test. The Taylor series converges ridiculously fast, as evidenced by this code:
Code:
Y = Abel_N(X+I-1000,1)
%23 = (0.3181315052047641353126542515876645172035176138713998669223786062294138715576269792324863848986361638 + 1.337235701430689408901162143193710612539502138460512418876312781914350531361204988418881323438794016*I) + (-5.075883674631298447 E-116 - 2.537941837315649223 E-116*I)*X + (0.E-115 + 0.E-115*I)*X^2 + (8.459806124385497412 E-117 + 0.E-116*I)*X^3 + (3.172427296644561529 E-116 + 0.E-116*I)*X^4 + (2.5379418373156492235 E-117 - 6.027611863624666906 E-117*I)*X^5 + (4.229903062192748706 E-117 - 4.175640896903583953 E-117*I)*X^6 + (2.9458253468842357056 E-117 - 2.3563062119608880554 E-117*I)*X^7 + (1.3879369422819956690 E-117 - 9.125598249612160771 E-118*I)*X^8 + (5.067071376585063553 E-118 + 1.0158238880640300643 E-117*I)*X^9 + (3.965534120805701911 E-119 + 1.0439800113328959935 E-117*I)*X^10 + (-1.2392294127517818472 E-119 + 4.510898478223302923 E-118*I)*X^11 + (5.163455886465757697 E-120 + 9.517826428875257808 E-119*I)*X^12 + (7.238767963910648772 E-120 + 3.0485315845312846290 E-120*I)*X^13 + (4.259851106334250100 E-120 - 4.956852446317326376 E-120*I)*X^14 + (1.5119244267557546757 E-120 - 2.0008533378606380267 E-120*I)*X^15 + (4.021974122502197249 E-121 - 4.710772690704263553 E-121*I)*X^16 + (8.495487390177039476 E-122 - 7.894916805677055555 E-122*I)*X^17 + (1.4686098895831561809 E-122 - 9.776168703672051202 E-123*I)*X^18 + (2.0967608494748364605 E-123 - 8.979059969411839749 E-124*I)*X^19 + (2.430858610490628516 E-124 - 7.397819886053798921 E-125*I)*X^20 + (2.0121573193076205434 E-125 - 1.2662079281119933470 E-125*I)*X^21 + (2.945634939708346955 E-127 - 3.762574621168038482 E-126*I)*X^22 + (-3.417597698460109009 E-127 - 9.698025968913432879 E-127*I)*X^23 + (-9.634592148971158615 E-128 - 1.991065259289842006 E-127*I)*X^24 + (-1.8499050904353787682 E-128 - 3.444000794290251030 E-128*I)*X^25 + (-2.981946873618937088 E-129 - 5.195743215424565520 E-129*I)*X^26 + (-4.238317566690980895 E-130 - 7.055907248459257794 E-130*I)*X^27 + (-5.460417455040163927 E-131 - 8[+++]
func(z) = sum(j=0,99, polcoef(Y,j,X)*z^j)
%24 = (z)->sum(j=0,99,polcoef(Y,j,X)*z^j)
func(0)
%25 = 0.3181315052047641353126542515876645172035176138713998669223786062294138715576269792324863848986361638 + 1.337235701430689408901162143193710612539502138460512418876312781914350531361204988418881323438794016*I
exp(func(-0.5)) - func(0.5)
%26 = -2.4046998908565776391 E-113 + 4.974366001138672478 E-114*I
