My favorite integer sequence - Printable Version
+- Tetration Forum (https://tetrationforum.org)
+-- Forum: Etc (https://tetrationforum.org/forumdisplay.php?fid=4)
+--- Forum: Community (https://tetrationforum.org/forumdisplay.php?fid=6)
+--- Thread: My favorite integer sequence (/showthread.php?tid=911)
RE: My favorite integer sequence - Gottfried - 08/22/2014

Just an accidental finding.

[update]: Hmm, after some more investigation it seems that the half-iterate of sinh alone is not the "nugget" I felt it was in the beginning. It might still be a good starting point, but after some regressions with various cofactors it seems to me that it likely cannot be substantially improved by adding terms of standard functions and/or further fractional iterates. So I think Jay's very well worked-out function is still the way to go, and this posting should only survive for historical reasons... [/update]

I use the sequence A[k] = (1,1,2,2,4,4,6,6,...) with k beginning at 0, and tried the half-iterate of the asinh() function to translate A[k] into an approximation of k. I get that nice approximation:

Code: .

In reverse it looks like

Code: .

Perhaps it is easy to fine-tune this much more without too much effort.

[update]: a first fine-tuning: a(k) = sinh°0.5( (1.018 k - 0.174)/2 ). See the pictures with the already improved data (plots of the absolute values and of the ratios). [/update]

[update]: oops, after looking beyond index k>150 things no longer look so nice... :-( [/update]

Technical details: I computed the power series for the half-iterate from the square root of the Carleman matrix for sinh to 128 terms. The convergence radius is zero, but for x near zero one can evaluate the first 128 terms of the power series and get a value which, reinserted, indeed gives the one-time iterate to, say, 20 or 30 decimal digits of precision. So a large argument x of sinh°0.5(x) must first be brought sufficiently near zero by asinh() iterations (say 0 < x < 0.5, or x < 0.2 for k ~ 80); then the power series can be evaluated at that x, and the result must be transferred back by the corresponding number of sinh() iterations. I computed this with 400 digits of internal precision and float algebra (although the square root of the Carleman matrix can be determined in rational algebra).
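Gottfried's Pari/GP Carleman-matrix computation isn't reproduced in this thread, but the same formal half-iterate series can be obtained by solving h(h(x)) = sinh(x) coefficient by coefficient, and the pull-in/push-out evaluation trick he describes is easy to mimic. A minimal Python sketch (all function names are mine, and this is a cross-check, not his code):

```python
from fractions import Fraction
import math

def compose(f, g, n):
    # Coefficients of f(g(x)) truncated at degree n; requires g[0] == 0.
    out = [Fraction(0)] * (n + 1)
    out[0] = f[0]
    power = [Fraction(0)] * (n + 1)
    power[0] = Fraction(1)                    # g(x)^0
    for k in range(1, n + 1):
        nxt = [Fraction(0)] * (n + 1)         # power <- power * g, truncated
        for i, a in enumerate(power):
            if a:
                for j in range(min(len(g), n + 1 - i)):
                    if g[j]:
                        nxt[i + j] += a * g[j]
        power = nxt
        if k < len(f) and f[k]:
            for d in range(n + 1):
                out[d] += f[k] * power[d]
    return out

def half_sinh_series(n):
    # Solve h(h(x)) = sinh(x) order by order: h(x) = x + a3 x^3 + a5 x^5 + ...
    # Only odd degrees appear, since sinh is odd.
    s = [Fraction(0)] * (n + 1)
    for k in range(1, n + 1, 2):
        s[k] = Fraction(1, math.factorial(k))
    h = [Fraction(0)] * (n + 1)
    h[1] = Fraction(1)
    for m in range(3, n + 1, 2):
        hh = compose(h, h, m)
        # Introducing the unknown h[m] changes the x^m term of h(h) by 2*h[m].
        h[m] = (s[m] - hh[m]) / 2
    return h

def half_sinh(x, coeffs, pulls=0):
    # Gottfried's trick: pull x toward 0 with asinh iterations, evaluate the
    # truncated series (Horner), then push back with the same number of sinh's.
    for _ in range(pulls):
        x = math.asinh(x)
    y = 0.0
    for c in reversed(coeffs):
        y = y * x + float(c)
    for _ in range(pulls):
        y = math.sinh(y)
    return y
```

The first coefficients come out as x + x^3/12 - x^5/160 + ..., and `half_sinh(half_sinh(x, h, p), h, p)` reproduces sinh(x) to high accuracy once `pulls` brings the argument under ~0.2, consistent with the zero-radius caveat above.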
Gottfried

(If someone needs it I can supply the Pari/GP code)

RE: My favorite integer sequence - jaydfox - 08/22/2014

(08/21/2014, 01:15 AM)jaydfox Wrote: Sorry to post two versions in one day, but the performance boost was too huge to ignore. In the table below, the top row shows the number of polynomials I calculated. Below it is the time in seconds. As you can see, version 3 cuts the time by 30%-50% compared to version 2. But version 4 blows them both out of the water, cutting the time by more than an order of magnitude.

Okay, I updated the code again. The algorithm is only changed slightly, but due to the manner of the change, the code actually changed quite a bit. (Just in the "details"; the structure remains largely intact.) Here's the performance for version 5, stacked up against previous versions. Interestingly, there's a rough match between the time to calculate 2n polynomials in version 5 and n polynomials in version 3.

Code: Vers. 60 100 150 200 250 300

So what spiffy trick did I come up with this time? Well, I probably should have figured this out a while back, but sometimes you miss the forest for the trees, so to speak. Anyway, the generator polynomials are either even or odd. That is, they contain only even (or only odd) powers of x. This means they are symmetric (or antisymmetric) about 0, so:

\( \begin{array}{rcl} g_{2k}(-x) & = & g_{2k}(x) \\ g_{2k+1}(-x) & = & -g_{2k+1}(x) \end{array} \)

By taking advantage of the symmetry, you only need to evaluate about half as many points. But figuring out an efficient way to calculate the coefficients for the Newton polynomials was eluding me (I couldn't reuse previous iterations). Then it occurred to me that I could change the even functions into full polynomials of half the degree, by substituting x^2 in place of x. The odd functions require a pre-multiplication by x, and then the same algorithm can be used. An example is probably in order.
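Both ingredients here are easy to sketch: the integer node values g_n(k) come from the recurrence g_{n+1}(k+1) = g_{n+1}(k) + g_n(2k+1) with g_0 ≡ 1 (the recurrence used in the worked example that follows), and evenness means an even g_n is a polynomial of half the degree in u = x^2, so a handful of squared-abscissa nodes pin it down. A hypothetical Python version (function names mine, exact rationals via Fraction):

```python
from fractions import Fraction

def g_table(nmax, kmax):
    # g_0(k) = 1; g_n(0) = 0 for n > 0; g_{n+1}(k+1) = g_{n+1}(k) + g_n(2k+1).
    # Level n-1 must extend to node 2*widths[n] - 1, so widths double downward.
    widths = [kmax]
    for _ in range(nmax):
        widths.append(2 * widths[-1] + 1)
    widths.reverse()                      # widths[n] = last node kept at level n
    g = [[1] * (widths[0] + 1)]
    for n in range(1, nmax + 1):
        row = [0]
        for k in range(widths[n]):
            row.append(row[-1] + g[n - 1][2 * k + 1])
        g.append(row)
    return g

def lagrange_eval(us, vs, u):
    # Exact Lagrange interpolation through the points (us[i], vs[i]).
    total = Fraction(0)
    for i in range(len(us)):
        term = Fraction(vs[i])
        for j in range(len(us)):
            if j != i:
                term *= Fraction(u - us[j], us[i] - us[j])
        total += term
    return total
```

With `g = g_table(6, 5)`, the even g_6 with g_6(0) = 0 has only three free coefficients, so the cubic in u fitted through (0, 1, 4, 9) already predicts g_6 at the next squared node u = 16, matching the recurrence's value for g_6(4).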
To calculate the 6th-order polynomial, we start by evaluating the 5th-order polynomial in order to generate the interpolation points:

g_5(x) = 128/15*x^5 - 28/3*x^3 + 9/5*x

(Internally, it's stored as 1024*x^5 - 1120*x^3 + 216*x; divide that by 5! to get the proper form.)

We know that g_6(0) = 0 and g_6(1) = 1. We evaluate g_5(x) at 3 and 5, giving 1827 and 25509. The recurrence relation is:

\( \begin{array}{ccl} g_{n}(0) & = & 0 \text{ for } n>0 \\ g_{n+1}(k+1) & = & g_{n+1}(k)+g_{n}(2k+1) \end{array} \)

This gives:

g_6(-3) = 27337
g_6(-2) = 1828
g_6(-1) = 1
g_6(0) = 0
g_6(1) = 1
g_6(2) = 1828
g_6(3) = 27337

Now we rewrite this in terms of the squared abscissas:

g_6(9) = 27337
g_6(4) = 1828
g_6(1) = 1
g_6(0) = 0
g_6(1) = 1
g_6(4) = 1828
g_6(9) = 27337

Notice that the first three nodes are redundant. At this point, it's just a matter of divided differences (taking into account the non-equal spacing) and applying Newton's interpolation formula. Because of the way the problem is constructed, we can reuse most of the calculations for the various interpolation polynomials. Also, it's just plain cool.

That's the beauty of reducing the degree of the polynomial like this: half the polynomial evaluations to calculate the interpolation points; approximately 1/4 the number of divided differences; and the interpolation polynomials are half the degree as well. Overall, this speeds up the process by a factor of about 3.

For odd functions, you'll see a pre-multiplication step before it starts calculating the divided differences.

Edit: Fixed some formatting, and fixed the recurrence formula for g(k).

Amazing variant - tommy1729 - 08/26/2014

An amazing variant imho is

f(n) = 2 f(n-1) + f(n/2)

where n/2 is rounded down. The sequence starts like 1, 2, 5, 12. Amazingly, f(9) = 2^9. We get similar properties. The first, second and third differences are fascinatingly linked. I will try to illustrate some things with an attachment.
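Tommy's variant is quick to check. Note that to reproduce the printed start 1, 2, 5, 12, the first two values must both be seeds: with only f(1) = 1, the rule would force f(2) = 2·1 + f(1) = 3. A sketch (the seeding assumption is mine):

```python
def tommy_f(nmax):
    # f(n) = 2*f(n-1) + f(n // 2), with f(1) = 1 and f(2) = 2 taken as seeds
    # so that the sequence starts 1, 2, 5, 12 as in the post.
    f = [0] * (nmax + 1)
    f[1], f[2] = 1, 2
    for n in range(3, nmax + 1):
        f[n] = 2 * f[n - 1] + f[n // 2]
    return f
```

Running it confirms the claimed coincidence: f(9) = 512 = 2^9.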
regards

tommy1729

RE: My favorite integer sequence - tommy1729 - 09/01/2014

I'm considering equations and relations of the form

f(x) - 3 f(x-1) + f(x/2) = g(x) - 2 g(x-1) + g(x/3).

Maybe I'm going crazy.

regards

tommy1729

RE: My favorite integer sequence - tommy1729 - 09/09/2014

(08/07/2014, 01:19 AM)jaydfox Wrote:
(08/06/2014, 04:38 PM)Gottfried Wrote:
(08/05/2014, 05:57 PM)jaydfox Wrote: I'm working on a set of generator polynomials that would allow me to compute an arbitrary term in the sequence in log-polynomial time. For instance, the naive approach requires O(n) calculations to calculate the nth term.

As for those polynomials, factoring the main terms is interesting. For degree above 3, for instance, 137438953472/14175 x^10 = 2^37 / (3^4 5^2 7) x^10. Notice 4, 8, 128, 2048, 131072, ... are all powers of 2! Also, for the x^n term, all denominators are of the form 3^a 5^b 7^c ... p^alpha, where p is the largest prime up to n and a, b, ... are nonzero! The denominators are multiples of primorials of primes smaller than n. Fascinating.

regards

tommy1729

RE: My favorite integer sequence - jaydfox - 09/09/2014

(08/08/2014, 12:55 AM)jaydfox Wrote: Deriving the continuous function is actually pretty straightforward. I'm kind of disappointed I didn't see this earlier.

I didn't even arrive at it by "first principles". Rather, I was playing around with creating a function that obeys the functional equation f'(x) = f(x/2), but which oscillates in a similar manner to the way the binary partition function oscillates. It's actually pretty cool, especially the code I came up with. Perhaps I'll share it in a later post.

At any rate, it dawned on me that the oscillating version of the function behaves like a Laurent series, with positive powers of x creating oscillations far from the origin (with a periodicity of about two zeroes per doubling of scale), and negative powers of x creating oscillations near the origin (with a periodicity of about one zero per doubling of scale).
And as I was trying to formalize it, I realized something that I'd missed earlier, which should have been obvious:

\( f(x) = \sum_{k=0}^{\infty}\frac{1}{2^{k(k-1)/2} k!} x^k \\ \Rightarrow f(x) = \sum_{k=0}^{\infty}\frac{1}{2^{k(k-1)/2} \Gamma(k+1)} x^k \)

Treating Gamma(k) at negative integers as infinity, and the reciprocal of such as zero, we can extend the sum from negative to positive infinity. And we can replace k with (k+b), where b is zero in the original solution, but can now be treated as any real number (well, any complex number, but the complex versions are less interesting).

\( f_{\beta}(x) = \sum_{k=-\infty}^{\infty}\frac{2^{-(k+\beta)(k+\beta-1)/2}}{\Gamma(k+\beta+1)} x^{k+\beta} \)

So, without further ado, here are some graphs comparing f_{1/2} to f_0:

RE: My favorite integer sequence - jaydfox - 09/09/2014

(09/09/2014, 07:43 PM)jaydfox Wrote: Treating Gamma(k) at negative integers as infinity, and the reciprocal of such as zero, we can extend the sum from negative to positive infinity. And we can replace k with (k+b), where b is zero in the original solution, but can now be treated as any real number (well, any complex number, but the complex versions are less interesting).

One minor correction: treating Gamma(k) at non-positive integers as infinity...

Anyway... Sheldon or Gottfried, if either of you is still interested in this problem, here's some code to calculate f_b(x):

Code: InitF(beta=0, n=50, prec=77) = {

I named this code bp1.gp, and here's a screenshot showing how it can be used. Basically, you can use beta=0 for the original version (which is entire), or use a non-integer, preferably negative. If you use a positive value for beta, or a negative value close to zero, it won't be accurate near the origin. I used -25/2 in this example. The closer to the origin you intend to evaluate, the smaller beta needs to be (i.e., negative and larger in absolute value).
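The PARI/GP body of InitF didn't survive the printable conversion, but the truncated two-sided sum is straightforward to sketch in Python (function name and truncation bounds are mine). Each term satisfies the functional equation f'(x) = f(x/2) exactly, so a wide enough truncation should inherit it to float precision:

```python
import math

def f_beta(x, beta=0.0, kmin=-60, kmax=60):
    # Truncated two-sided series
    #   f_beta(x) = sum_k 2^(-(k+b)(k+b-1)/2) / Gamma(k+b+1) * x^(k+b).
    # Terms where k+b+1 is a non-positive integer are dropped: Gamma has a
    # pole there, so 1/Gamma = 0 (math.gamma would raise at the pole).
    total = 0.0
    for k in range(kmin, kmax + 1):
        a = k + beta
        if a + 1 <= 0 and float(a + 1).is_integer():
            continue
        total += 2.0 ** (-a * (a - 1) / 2) * x ** a / math.gamma(a + 1)
    return total
```

With beta=0 this reduces to the one-sided entire series (f(0) = 1); with a negative non-integer beta such as Jay's -25/2, the bilateral terms contribute, and `kmin` must be far enough below zero that the edge terms have underflowed.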
RE: My favorite integer sequence - tommy1729 - 09/10/2014

(07/31/2014, 01:51 AM)jaydfox Wrote: \( f(x) = \sum_{k=0}^{\infty}\frac{1}{2^{k(k-1)/2} k!} x^k \)

I'm pretty sure this can be written as an integral. That might advance things. Time is unfortunately against me, as usual.

regards

tommy1729

RE: My favorite integer sequence - jaydfox - 09/11/2014

As promised previously, here's a version that initializes the SeqParts array to an arbitrary non-negative index in the sequence:

(08/20/2014, 02:11 PM)sheldonison Wrote: (...) I figured out how to initialize the SeqParts array for a power of 2, and then it is fast for numbers not too much bigger than that exact power of 2. What is the algorithm for initialization halfway to the next power of 2? For example, (2^24+2^23)? I don't understand the polynomials well enough to write my own SeqPowers2 initialization starting with 3, 6, 12, 24, ...

(08/20/2014, 07:57 PM)jaydfox Wrote: I'm still getting around to writing the code that calculates arbitrary terms of the sequence. It's actually a bit complicated, which is why I've been putting it off.

Okay, I've updated quite a few functions, mostly to make them compatible with the new InitSeq() functions (which replace InitBinPartSeq()). There's a new PrevSeq() function, which runs the sequence backwards. As such, the Seq() function can now take negative arguments and will handle them accordingly.

The InitSeq() function should demonstrate how the polynomials are used to calculate terms in the sequence. It is not optimized, but the sub-calls are, so it should still be reasonably fast.

The InitSeq2() function is optimized. There are two main optimizations. First, the SeqPowers2() function is used to initialize the SeqParts array for the highest power of 2 in the provided index. Second, "negative" bits are used to reduce the number of calculations required. Between these two optimizations, you can quickly calculate terms which have low information content.
For example, InitSeq2(2^100) takes about 10 ms on my computer. InitSeq2(2^100 - 2^95) takes about 90 ms. However, InitSeq2(2^100\3) takes about 640 ms. Note that the binary representation of 2^100\3 has alternating 1's and 0's, so the use of negative bits cannot simplify it.

For the InitSeq2() function, set the "show" parameter to 2 to view the bits. (Set the show parameter to 3 to get a printout of the SeqParts array at completion of the function call.) The output will look something like this:

Code: (15:02) gp > InitSeq2(2^27-2^23-2^17+2^11-2^7, 2)

In the example above, the first size/bits printout uses only positive bits. The second size/bits printout shows the same number using negative bits. The left-most bit is the least significant.

Once the sequence is initialized with InitSeq() or InitSeq2(), you can then use Seq() with positive or negative counts, or use NextSeq() and PrevSeq() directly. For example, to calculate the terms from 2^13-5 to 2^13+5 (which you can validate against this list at OEIS):

Now let's check out 2^60-5 through 2^60+5:

Final disclaimer: this was a complicated set of logic to pull together, and it doesn't help that I'm still getting used to indexing arrays from 1 instead of from 0 (PARI/gp is an outlier in this regard). So I expect there are still a few bugs. Indeed, I had to re-edit the file several times while writing this post, as I found bugs while making screenshots or validating what I was writing. If I find any more bugs in the coming days, I'll post a small update, perhaps version "6b".

RE: My favorite integer sequence - jaydfox - 09/16/2014

I'm not sure if anyone else is still playing around with the binary partition function. I'm still learning lots of new things (about the function, or about complex analysis), so I keep poking at it. Here are a few of my latest observations. Sorry for the long post; I probably should have split this up into multiple posts.
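For readers wanting a reference point to validate such calculations against: the binary partition function itself can be computed naively in O(n) time, which is exactly the baseline the polynomial machinery above is designed to beat. This appears to be OEIS A018819 (Gottfried's list 1, 1, 2, 2, 4, 4, 6, 6, ...), whose even-index subsequence is A000123; a hedged Python sketch:

```python
def binary_partitions(nmax):
    # b(n) = number of partitions of n into powers of 2:
    # b(0) = 1, b(2m+1) = b(2m), b(2m) = b(2m-1) + b(m).
    b = [1] * (nmax + 1)
    for n in range(2, nmax + 1):
        if n % 2 == 0:
            b[n] = b[n - 1] + b[n // 2]
        else:
            b[n] = b[n - 1]
    return b
```

The first terms reproduce Gottfried's A[k] exactly, and taking every other term gives the sequence 1, 2, 4, 6, 10, 14, 20, 26, ... discussed throughout the thread.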
Locally Polynomial

While the relationship isn't exact, we can approximately say that as we double the scale on the x-axis, the function resembles a polynomial one degree higher. For example, if we say that the function resembles a 3rd-order polynomial between about 4 and 8, then it will resemble a 4th-order polynomial between about 8 and 16. (Those numbers are made-up examples, but better ones are demonstrated below.)

This relationship is easy to see when you build the discrete binary partition function, because of the polynomials that I have previously discussed. The polynomials are found by interpolating points at intervals of 2^(n-1) for the nth-degree polynomial. In other words, we have to double the scale to interpolate one degree higher.

I was curious to measure just how well each region could be approximated by a polynomial. My idea was simple: for any given x, determine the contribution of a_k x^k to f(x), for each k in the Taylor series. What I found was pretty interesting. Based on the definition of the power series, we can find the exact value of x for which a_k x^k = a_{k+1} x^(k+1). Below this point, the function gets more of its "character" from the kth-order term, while above that point, it gets more of its character from the (k+1)th-order term.

For example, at x=12, the 2nd and 3rd order terms are equal: 1/4 * 12^2 = 1/48 * 12^3. At x=32, the 3rd and 4th order terms are equal: 1/48 * 32^3 = 1/1536 * 32^4. Given these two observations, we can say that f(x) behaves approximately like a 3rd-order polynomial on the interval (12, 32).

Relatively Precise Approximation of f(x)

In general, at x = k 2^(k-1), the (k-1)th- and kth-order terms are equal. The really nice thing about this observation is that it leads to a relatively precise approximation of f(x).
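The crossover rule follows directly from the coefficients: a_{k-1}/a_k = k 2^(k-1), since a_k = 1/(2^(k(k-1)/2) k!). A few lines of Python confirm the quoted examples (helper names are mine):

```python
import math

def a(k):
    # Taylor coefficient a_k = 1 / (2^(k(k-1)/2) * k!) of f(x).
    return 1.0 / (2.0 ** (k * (k - 1) // 2) * math.factorial(k))

def crossover(k):
    # x where the (k-1)th and kth order terms tie: a_{k-1}/a_k = k * 2^(k-1).
    return k * 2 ** (k - 1)

# Spot checks from the post: x=12 for orders 2/3, x=32 for orders 3/4.
assert crossover(3) == 12 and crossover(4) == 32 and crossover(7) == 448
for k in range(2, 12):
    x = crossover(k)
    assert abs(a(k - 1) * x ** (k - 1) - a(k) * x ** k) < 1e-9 * a(k) * x ** k
```

So the interval on which f behaves like a kth-order polynomial is roughly (k 2^(k-1), (k+1) 2^k), doubling in scale per degree as described.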
We have already seen an approximation of ln(f(x)) earlier:

(08/04/2014, 11:49 PM)sheldonison Wrote: I made some numerical approximations, and for big enough values of x, using Jay's Taylor series, I get that \( \ln(f(x)) \approx \frac{(\ln(x))^2}{2\ln(2)} \), which matches the equations in Gottfried's link. If ln(x)=1E100, then ln(f(x))~=7.213E199, ln(f(f(x)))~=3.753E399

Note that ln(f(x)) = ln(x)^2/(2 ln(2)) is equivalent to log_2(f(x)) = log_2(x)^2/2. It takes (in my opinion) a very large value of log_2(x), which in turn implies an extremely large value of x, before this approximation gets close (i.e., before the ratio of the LHS and RHS approaches 1). And this is just an approximation of the logarithm of f(x). Exponentiate, and it's way off.

My approximation gets closer to 1, faster, and it approximates f(x) directly. To use it, however, we have to be able to invert the relatively simple equation q(x) = (x-1) log_2(x) (Update: rather, q(x) = x 2^(x-1). End Update). This yields h(q(x)) = x. For example, h(12) = 3, h(32) = 4, h(448) = 7, etc. h(x) can be solved iteratively (taking the limit, but truncating when the desired accuracy has been reached):

\( h_0(x) = \log_2(x) \\ h_k(x) = h_0(x)+1 - \log_2(h_{k-1}(x)) \\ h(x) = \lim_{k \to \infty} h_k(x) \)

I also need to define a new constant, g_2, which I found experimentally, but which I believe I will be able to define rigorously in a later post:

\( g_2 = 1 + 2 \sum_{k=1}^{\infty} 2^{-k^2/2} \approx 3.010767 \)

With these definitions in place, the approximation of f(x) is:

\( f(x) \approx g_2 \frac{x^{h(x)-1/2}}{\Gamma(h(x)+1/2)\, 2^{(h(x)-1/2)(h(x)-3/2)/2}} \)

Let's start with an easy example. For x = 448, we find that h(x) = 7. That is, 7*2^(7-1) = 448.
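The whole pipeline (fixed-point iteration for h, the g_2 sum, the approximation, and a direct summation of the Taylor series as ground truth) fits in a short Python sketch. Function names are mine; the `corrected` flag applies the (1 + 1/(h ln 4)) adjustment Jay introduces later in this post:

```python
import math

def h_inverse(x, tol=1e-12):
    # Solve h * 2^(h-1) = x by the fixed-point iteration from the post:
    # h_0 = log2(x), h_k = log2(x) + 1 - log2(h_{k-1}).
    target = math.log2(x)
    h = target
    while True:
        nxt = target + 1 - math.log2(h)
        if abs(nxt - h) < tol:
            return nxt
        h = nxt

# g_2 = 1 + 2 * sum over k >= 1 of 2^(-k^2/2); the tail underflows quickly.
G2 = 1 + 2 * sum(2.0 ** (-k * k / 2) for k in range(1, 40))

def f_approx(x, corrected=False):
    # Jay's direct approximation of f(x); the optional denominator factor
    # (1 + 1/(h ln 4)) is his experimentally found refinement.
    h = h_inverse(x)
    denom = math.gamma(h + 0.5) * 2.0 ** ((h - 0.5) * (h - 1.5) / 2)
    if corrected:
        denom *= 1 + 1 / (h * math.log(4))
    return G2 * x ** (h - 0.5) / denom

def f_series(x, kmax=60):
    # Direct summation of f(x) = sum x^k / (2^(k(k-1)/2) k!), accumulating
    # via the term ratio x / (2^k (k+1)) to avoid huge intermediate powers.
    term, total = 1.0, 1.0
    for k in range(kmax):
        term *= x / (2.0 ** k * (k + 1))
        total += term
    return total
```

At x = 448 this reproduces the numbers worked out next in the post: the plain approximation lands within about 10% of the series value, and the corrected one within well under 1%.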
Therefore, we have (updated, see note at end):

\( f(x) \approx g_2 \frac{448^{6.5}}{\Gamma(7.5)\, 2^{(6.5)(5.5)/2}} \approx 3.01076739 \times \frac{1.7112249\mathrm{E}{+}17}{(1871.2543)(2^{17.875})} \)

This evaluates to f(x) ~= 1145355510.094. Compare this to the actual value of f(448), which is 1042092969.261. The approximation is off by about 9.9%. For larger values of x, the approximation only gets better. Indeed, convergence in h(x) is linear.

With just a small adjustment to this approximation, convergence in h(x) is quadratic:

\( f(x) \approx g_2 \frac{x^{h(x)-1/2}}{\Gamma(h(x)+1/2)\, 2^{(h(x)-1/2)(h(x)-3/2)/2}\left(1+\frac{1}{h(x) \ln(4)}\right)} \)

I found this last adjustment experimentally as well, but I'm hoping there's a rigorous method to derive it. I suspect it's a tweak to the sum which yielded the g_2 "constant". g_2 isn't actually a constant, but a function of h(x), which goes to the "constant" value only in the limit as h(x) goes to infinity. With this better approximation, f(448) ~= 1038353545.081, which is accurate to within 0.36%.

Coefficients for the Discrete Sequence

I've previously described how to derive polynomials which can be used to calculate arbitrary terms in the discrete sequence. I've tried analyzing the coefficients of the polynomials, and I'm just not finding any useful patterns. The coefficient of the highest-order term matches the corresponding coefficient of f(x), and the coefficient of the next term (two degrees lower) is generally negative. However, I can't find any useful patterns yet. It's a shame, because a closed-form solution for these coefficients might help determine the a_0 (or C) constants previously discussed: a_0 ~= 1.0830629, C = 1/a_0 ~= 0.923307.

However, what does seem to bear fruit is to look at the polynomials evaluated at powers of 2. The first few are:

Code: 1

From left to right, these are the evaluations of the 0th, 1st, 2nd, 3rd, etc. order polynomials.
From top to bottom, these are the polynomials evaluated at 2^k, where k is the row number, with the first row being k=-1 (to make the math simpler, so that the second row is evaluated at 2^0 = 1). The diagonal elements are 1, with 0's above, and the following elements below:

The first column is 1 for all rows.
The second column is 1 * 2^k.
The third column is 2^{2(k-1)} = 1/4 * 2^{2k}.
The fourth column is 2^{3(k-2)} + 2*a_{k-1,3}, which is floor(1/48 * 2^{2k+2}) * 2^{k-2}.
The fifth column is 2^{4(k-3)+1} + 4*a_{k-1,4}, which is (4*floor(1/1536 * 2^{2k+2}) + 1) * 2^{2k-8}.

Note the use of the floor function to try to find a way to get these terms to approximate the continuous function. The coefficients of the continuous function's Taylor series are highlighted in bold above. Here, a_{k-1,3} refers to the previous row, third column (or fourth column, since I consider the leftmost column the 0th column). In other words, for the third column:

k=2, 1 = 2^0 + 0 = floor(2^6/48)
k=3, 10 = 2^3 + 2*1 = floor(2^8/48)*2
k=4, 84 = 2^6 + 2*10 = floor(2^10/48)*2^2
k=5, 680 = 2^9 + 2*84 = floor(2^12/48)*2^3
k=6, 5456 = 2^12 + 2*680 = floor(2^14/48)*2^4

Similarly for the fourth column:

k=3, 1 = 2^1 + 4*(-1/4) = (floor(2^10/1536)*4+1) * 2^0
k=4, 36 = 2^5 + 4*1 = (floor(2^12/1536)*4+1) * 2^2
k=5, 656 = 2^9 + 4*36 = (floor(2^14/1536)*4+1) * 2^4
k=6, 10816 = 2^13 + 4*656 = (floor(2^16/1536)*4+1) * 2^6

I feel like I'm close to a pattern here that will really speed up calculations. I might be able to ditch the polynomials entirely. I've found some other strange recurrences, one of which uses the Thue-Morse sequence, which Gottfried previously described in post #10 on the first page of this thread. I'll try to detail those later this week.

PS: The smiley is driving me nuts, because I have to insert a space to get it to be ignored: f( should be f(8 ).

PPS: I made an edit above, and tried to make it obvious what the edit was.
And I just had to make a second edit to clean up some TeX that concatenated two numbers, and to change the formatting of the scientific notation to remove ambiguity about the "e".
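The two column patterns quoted in the worked examples above can be checked for many more rows. A hedged sketch (the names c_k and d_k are mine; I follow the exponents used in the per-row examples, which differ from the general column formulas by the k-offset of the first row):

```python
# Third-column pattern: c_2 = 1, c_k = 2^(3(k-2)) + 2*c_{k-1};
# claimed closed form: c_k = floor(2^(2k+2)/48) * 2^(k-2).
c = {2: 1}
for k in range(3, 25):
    c[k] = 2 ** (3 * (k - 2)) + 2 * c[k - 1]
for k in range(2, 25):
    assert c[k] == (2 ** (2 * k + 2) // 48) * 2 ** (k - 2)

# Fourth-column pattern (values 1, 36, 656, 10816, ...): d_3 = 1,
# d_k = 2^(4(k-3)+1) + 4*d_{k-1};
# claimed closed form: d_k = (floor(2^(2k+4)/1536)*4 + 1) * 2^(2k-6).
d = {3: 1}
for k in range(4, 25):
    d[k] = 2 ** (4 * (k - 3) + 1) + 4 * d[k - 1]
for k in range(3, 25):
    assert d[k] == ((2 ** (2 * k + 4) // 1536) * 4 + 1) * 2 ** (2 * k - 6)
```

Both identities hold exactly for every row tested, consistent with the floor-pattern conjecture (and both can in fact be proved by solving the linear recurrences in closed form, since 2^(2k-5) ≡ 2 (mod 3)).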