My favorite integer sequence
#51
Just an accidental finding.

[update]: Hmm, after some more investigation it seems that the half-iterate of the sinh alone is not the "nugget" I felt it was in the beginning. It might still be a good starting point, but after some regressions with various cofactors it seems to me that it likely cannot be improved substantially by adding terms of standard functions and/or further fractional iterates. So I think Jay's very well worked-out function is still the way to go, and this posting should survive only for historical reasons... [/update]



I use the sequence A[k]=(1,1,2,2,4,4,6,6,...) with k beginning at 0, and tried the half-iterate of the asinh() function to map A[k] -> k^.
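(For context: A[k] here is the binary partition sequence discussed throughout this thread, i.e. the number of partitions of k into powers of 2. The following two-line GP sketch of its recurrence, my addition for readers who want to reproduce the tables, gives the A[k] column below.)

Code:
A(k) = if (k == 0, 1, if (k % 2, A(k-1), A(k-1) + A(k/2)));
print(vector(17, j, A(j-1)));  \\ 1, 1, 2, 2, 4, 4, 6, 6, 10, 10, 14, 14, 20, 20, 26, 26, 36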
I get this nice approximation:

Code:
  k  |  k^ = -1            |      |
     |  +2*asinh°0.5(A[k]) | A[k] |  d = k^ - k
-----+---------------------+------+-------------------
   0   0.871122566717836     1     0.8711225667178365
   1   0.871122566717836     1    -0.1288774332821635
   2   2.333123406391677     2     0.3331234063916770
   3   2.333123406391677     2    -0.6668765936083230
   4   4.444125935755581     4     0.4441259357555807
   5   4.444125935755581     4    -0.5558740642444193
   6   5.996686817157767     6  -0.003313182842233339
   7   5.996686817157767     6     -1.003313182842233
   8   8.309891291850146    10     0.3098912918501464
   9   8.309891291850146    10    -0.6901087081498536
  10   10.06605408134559    14    0.06605408134559265
  11   10.06605408134559    14    -0.9339459186544073
  12   12.14359799584454    20     0.1435979958445373
  13   12.14359799584454    20    -0.8564020041554627
  14   13.82177298614942    26    -0.1782270138505756
  15   13.82177298614942    26     -1.178227013850576
  16   16.08985351232628    36    0.08985351232627991
  17   16.08985351232628    36    -0.9101464876737201
  18   17.94134162168168    46   -0.05865837831832136
  19   17.94134162168168    46     -1.058658378318321
  20   20.09383140688473    60    0.09383140688473422
  21   20.09383140688473    60    -0.9061685931152658
  22   21.90407738775139    74   -0.09592261224861381
  23   21.90407738775139    74     -1.095922612248614
  24   24.09366985432521    94    0.09366985432520982
  25   24.09366985432521    94    -0.9063301456747902
  26   25.95938189509372   114   -0.04061810490628340
  27   25.95938189509372   114     -1.040618104906283
  28   28.04805382734139   140    0.04805382734139237
  29   28.04805382734139   140    -0.9519461726586076
  30   29.86181858151267   166    -0.1381814184873316
  31   29.86181858151267   166     -1.138181418487332
  32   32.04658019188079   202    0.04658019188078527
  33   32.04658019188079   202    -0.9534198081192147
  34   33.95209498128483   238   -0.04790501871516933
  35   33.95209498128483   238     -1.047905018715169
  36   36.08894669829039   284    0.08894669829039405
  37   36.08894669829039   284    -0.9110533017096059
  38   37.97426693042275   330   -0.02573306957725277
  39   37.97426693042275   330     -1.025733069577253
  40   40.14969749681103   390     0.1496974968110288
  41   40.14969749681103   390    -0.8503025031889712
  42   42.07952641659918   450    0.07952641659918464
  43   42.07952641659918   450    -0.9204735834008154
  44   44.20137456045023   524     0.2013745604502300
  45   44.20137456045023   524    -0.7986254395497700
  46   46.10105039582103   598     0.1010503958210259
  47   46.10105039582103   598    -0.8989496041789741
  48   48.26520579699401   692     0.2652057969940090
  49   48.26520579699401   692    -0.7347942030059910
  50   50.20979388257698   786     0.2097938825769775
  51   50.20979388257698   786    -0.7902061174230225
  52   52.33643071077409   900     0.3364307107740934
  53   52.33643071077409   900    -0.6635692892259066
  54   54.26025209047609  1014     0.2602520904760932
  55   54.26025209047609  1014    -0.7397479095239068
  56   56.40159041927338  1154     0.4015904192733788
  57   56.40159041927338  1154    -0.5984095807266212
  58   58.34609979287966  1294     0.3460997928796596
  59   58.34609979287966  1294    -0.6539002071203404
  60   60.44611088173228  1460     0.4461108817322844
  61   60.44611088173228  1460    -0.5538891182677156
  62   62.36400319817613  1626     0.3640031981761257
  63   62.36400319817613  1626    -0.6359968018238743
  64   64.49766201469299  1828     0.4976620146929893
  65   64.49766201469299  1828    -0.5023379853070107
  66   66.45061225735103  2030     0.4506122573510340
  67   66.45061225735103  2030    -0.5493877426489660
  68   68.56130277140938  2268     0.5613027714093829
  69   68.56130277140938  2268    -0.4386972285906171


In the reverse direction it looks like:
Code:

     |  a^ =               |      |                      
  k  |  sinh°0.5(k/2+1/2)  | A[k] |   r = a^ /A[k] - 1
-----+---------------------+------+-------------------
   0   0.5102310310942976    1    -0.4897689689057024
   1   1.078138996064359     1     0.0781389960643593
   2   1.748304390930525     2    -0.1258478045347374
   3   2.552519461332870     2     0.2762597306664349
   4   3.515579526352944     4    -0.1211051184117640
   5   4.659355506757197     4     0.1648388766892992
   6   6.004813731822326     6  0.0008022886370542993
   7   7.572966044912433     6     0.2621610074854055
   8   9.385336346538606    10   -0.06146636534613942
   9   11.46420789661713    10     0.1464207896617127
  10   13.83276703665167    14   -0.01194521166773818
  11   16.51519557483509    14     0.1796568267739347
  12   19.53673656870888    20   -0.02316317156455596
  13   22.92374579523910    20     0.1461872897619548
  14   26.70373529679550    26    0.02706674218444246
  15   30.90541246272600    26     0.1886697101048460
  16   35.55871659041376    36   -0.01225787248850666
  17   40.69485405483909    36     0.1304126126344191
  18   46.34633276239224    46   0.007528973095483409
  19   52.54699630468353    46     0.1423260066235550
  20   59.33205807477817    60   -0.01113236542036384
  21   66.73813551558644    60     0.1123022585931074
  22   74.80328461281173    74    0.01085519747042881
  23   83.56703470866675    74     0.1292842528198210
  24   93.07042368928640    94  -0.009889109688442508
  25   103.3560335835290    94    0.09953227216520219
  26   114.4680266007278   114   0.004105496497612463
  27   126.4521816281218   114     0.1092296634045772
  28   139.3559312040241   140  -0.004600491399828113
  29   153.2283989795549   140    0.09448856413967771
  30   168.1204376795053   166    0.01277372096087535
  31   184.0846675712996   166     0.1089437805499975
  32   201.1755154498853   202  -0.004081606683735970
  33   219.4492541455617   202    0.08638244626515684
  34   238.9640425611633   238   0.004050598996484624
  35   259.7799662445876   238    0.09151246321255298
  36   281.9590785023372   284  -0.007186343301629582
  37   305.5654420595219   284    0.07593465513916145
  38   330.6651712715941   330   0.002015670519982010
  39   357.3264748929723   330    0.08280749967567349
  40   385.6196994076169   390   -0.01123153998046941
  41   415.6173729265633   390    0.06568557160657261
  42   447.3942496573710   450  -0.005790556316953349
  43   481.0273549504231   450    0.06894967766760693
  44   516.5960309269911   524   -0.01412971197139100
  45   554.1819826939725   524    0.05759920361445136
  46   593.8693251502086   598  -0.006907483026407088
  47   635.7446303892922   598    0.06311811101888332
  48   679.8969757037854   692   -0.01748991950320035
  49   726.4179921957754   692    0.04973698294187201
  50   775.4019139987164   786   -0.01348356997618778
  51   826.9456281155144   786    0.05209367444721939
  52   881.1487248778374   900   -0.02094586124684738
  53   938.1135490316460   900    0.04234838781293998
  54   997.9452514539660  1014   -0.01583308535111831
  55   1060.751841505941  1014    0.04610635256996131
  56   1126.644240027227  1154   -0.02370516462112075
  57   1195.736332976817  1154    0.03616666635772742
  58   1268.145025725405  1294   -0.01998066018129422
  59   1343.990298004411  1294    0.03863237867419740
  60   1423.395259516844  1460   -0.02507174005695639
  61   1506.486206215162  1460    0.03183986727065867
  62   1593.392677251357  1626   -0.02005370402745561
  63   1684.247512604481  1626    0.03582257847754043
  64   1779.186911390872  1828   -0.02670300252140491
  65   1878.350490862376  1828    0.02754403220042451
  66   1981.881346097857  2030   -0.02370377039514422
  67   2089.926110393341  2030    0.02952025142529100
  68   2202.635016356147  2268   -0.02882053952550826
  69   2320.161957708406  2268    0.02299909951869768


Perhaps this can be fine-tuned much further without too much effort.

[update]: a first fine-tuning: a^(k) = sinh°0.5( (1.018 k - 0.174)/2 )
See the pictures with the already-improved data.

Absolute values: [attached plot]

Ratios: [attached plot]
[update]: oops, looking beyond index k>150 things no longer look so nice... :-( [/update]


Technical details:
I computed the power series for the half-iterate from the square root of the Carleman matrix for sinh, to 128 terms.
The radius of convergence is zero, but for x near zero one can evaluate the first 128 terms of the power series and get a value which, reinserted, indeed reproduces the one-time iterate to, say, 20 or 30 decimal digits of precision.
So a large argument x of sinh°0.5(x) must first be brought sufficiently close to zero by repeated asinh() iterations (say 0 < x < 0.5, or x < 0.2 for k ~ 80); then the power series can be evaluated at that x, and finally the result must be transferred back by the same number of sinh() iterations.
I computed this with 400 digits of internal precision and float algebra (although the square root of the Carleman matrix can be determined in rational algebra).
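In outline, the evaluation scheme looks like this Pari/GP sketch (my paraphrase, not the actual code; hser stands for the 128-term truncated power series for sinh°0.5, assumed precomputed):

Code:
sinh_half(x, hser) = {
  my(m = 0, y = x);
  \\ pull the argument close to zero, where the divergent series is usable
  while (abs(y) > 0.2, y = asinh(y); m++);
  \\ evaluate the truncated power series at the small argument
  y = subst(hser, 'x, y);
  \\ push the result back out with the same number of sinh() iterations
  for (i = 1, m, y = sinh(y));
  return (y);
}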

Gottfried

(If someone needs it, I can supply the Pari/GP code.)
Gottfried Helms, Kassel
#52
(08/21/2014, 01:15 AM)jaydfox Wrote: Sorry to post two versions in one day, but the performance boost was too huge to ignore. In the table below, the top row shows the number of polynomials I calculated. Below it is the time in seconds. As you can see, version 3 cuts the time by 30%-50% compared to version 2. But version 4 blows them both out of the water, cutting the time by more than an order of magnitude.

Code:
Vers.   60      100     150     200
v2      0.7     9.1     91      --
v3      0.5     5.6     49
v4      0.15    0.5     3.2     15.3

How is this possible? Simple: I got rid of the rational coefficients. (...)

Okay, I updated the code again.

.gp   jdf_seq_v5.gp (Size: 9.65 KB / Downloads: 569)

The algorithm is only changed slightly, but due to the manner of the change, the code actually changed quite a bit. (Just in the "details"; the structure remains largely intact.)

Here's the performance for version 5, stacked up against previous versions. Interestingly, there's a rough match in the time to calculate 2n polynomials in version 5, versus n polynomials in version 3.

Code:
Vers.   60      100     150     200     250     300
v2      0.7     9.1     91      --      --      --
v3      0.5     5.6     49      --      --      --
v4      0.15    0.5     3.2     15.3    53.2    150
v5      0.15    0.33    1.3     5.4     17.4    46.3

So what spiffy trick did I come up with this time? Well, I probably should have figured this out a while back, but sometimes you miss the forest for the trees, so to speak.

Anyway, the generator polynomials are either even or odd. That is, they contain only even (or only odd) powers of x. This means they are symmetric (or antisymmetric) about 0, so:

\(
\begin{array}{rcl}
g_{2k}(-x) & = & g_{2k}(x) \\
g_{2k+1}(-x) & = & -g_{2k+1}(x)
\end{array}
\)

By taking advantage of the symmetry, you only need to evaluate about half as many points.

But figuring out an efficient way to calculate the coefficients for the Newton polynomials was eluding me (I couldn't reuse previous iterations). Then it occurred to me that I could change the even functions to full polynomials with half the degree, by substituting x^2 in place of x. The odd functions require a pre-multiplication by x, and then the same algorithm can be used.

An example is probably in order. To calculate the 6th-order polynomial, we start by evaluating the 5th order polynomial, in order to generate the interpolation points:

g_5(x) = 128/15*x^5 - 28/3*x^3 + 9/5*x

(Internally, it's stored as 1024*x^5 - 1120*x^3 + 216*x; divide that by 5! to get the proper form.)

We know that g_6(0)=0 and g_6(1) = 1. We evaluate g_5(x) at 3 and 5, giving 1827 and 25509. The recurrence relation is:

\(
\begin{array}{ccl}
g_{n}(0) & = & 0 \text{ for } n>0 \\
g_{n+1}(k+1) & = & g_{n+1}(k)+g_{n}(2k+1)
\end{array}
\)

This gives:
g_6(-3) = 27337
g_6(-2) = 1828
g_6(-1) = 1
g_6(0) = 0
g_6(1) = 1
g_6(2) = 1828
g_6(3) = 27337
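(As a quick check, not part of Jay's post: the recurrence reproduces these values directly from the g_5 given above.)

Code:
g5(x) = 128/15*x^5 - 28/3*x^3 + 9/5*x;
g6 = vector(4);                        \\ g6[k+1] holds g_6(k), starting at g_6(0) = 0
for (k = 0, 2, g6[k+2] = g6[k+1] + g5(2*k+1));
print(g6);                             \\ [0, 1, 1828, 27337]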

Now, treating g_6 as a polynomial in x^2, we rewrite this as:
g_6(9) = 27337
g_6(4) = 1828
g_6(1) = 1
g_6(0) = 0
g_6(1) = 1
g_6(4) = 1828
g_6(9) = 27337

Notice that the first three nodes are redundant. At this point, it's just a matter of divided differences (taking into account the non-equal spacing), and applying Newton's interpolation formula. Because of the way the problem is constructed, we can reuse most of the calculations for the various interpolation polynomials.

Also, it's just plain cool. That's the beauty of reducing the degree of the polynomial like this: half the polynomial evaluations to calculate the interpolation points; approximately 1/4th the number of divided differences; and the interpolation polynomials are half the degree as well. Overall, this speeds up the process by a factor of about 3.
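To make the divided-difference step concrete, here is a minimal Pari/GP sketch (my illustration, not the optimized code from the attachment) that recovers the half-degree polynomial from the four distinct nodes of the g_6 example:

Code:
u = [0, 1, 4, 9];                    \\ the nodes k^2
v = [0, 1, 1828, 27337];             \\ g_6 at those nodes
dd = v;                              \\ divided-difference table, computed in place
for (j = 2, #u,
  forstep (i = #u, j, -1,
    dd[i] = (dd[i] - dd[i-1]) / (u[i] - u[i-j+1]);
  );
);
\\ Newton form, collapsed by Horner's rule
h = dd[#u];
forstep (i = #u-1, 1, -1, h = h*('t - u[i]) + dd[i]);
print(h);                            \\ leading term 2048/45*t^3: g_6 in t = x^2
print(subst(h, 't, 'x^2));           \\ the even degree-6 polynomial g_6(x)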

For odd functions, you'll see a pre-multiplication step before it starts calculating the divided differences.

Edit: Fixed some formatting, and fixed the recurrence formula for g(k)
~ Jay Daniel Fox
Reply
#53
An amazing variant imho is

f(n) = 2 f(n-1) + f(n/2)

where n/2 is rounded down.

The sequence starts 1, 2, 5, 12, ...

Amazingly f(9) = 2^9.
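The initial values aren't stated above; assuming f(1)=1 and f(2)=2 (my guess, read off from the listed start), a one-line GP check reproduces both the start and the coincidence f(9) = 2^9 = 512:

Code:
f(n) = if (n <= 2, n, 2*f(n-1) + f(n\2));  \\ n\2 is floor division
print(vector(9, n, f(n)));                 \\ [1, 2, 5, 12, 26, 57, 119, 250, 512]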

We get similar properties.
The first, second, and third differences are fascinatingly linked.


I will try to illustrate some things with an attachment.

regards

tommy1729





Attached Files
.pdf   amazing variant2.pdf (Size: 95.42 KB / Downloads: 586)
#54
I'm considering equations and relations of the form

f(x) - 3 f(x-1) + f(x/2) = g(x) - 2 g(x-1) + g(x/3).

Maybe I'm going crazy :)

regards

tommy1729
#55
(08/07/2014, 01:19 AM)jaydfox Wrote:
(08/06/2014, 04:38 PM)Gottfried Wrote:
(08/05/2014, 05:57 PM)jaydfox Wrote: I'm working on a set of generator polynomials that would allow me to compute an arbitrary term in the sequence in log-polynomial time. For instance, the naive approach requires O(n) calculations to calculate the nth term.
(...)
I'll work on converting this to PARI/gp code, since I know that several of the active members of this forum seem to use that.

Wow, that's an ingenious approach. I was trying to arrive at a similar short-cut computation method using the idea of squaring the matrix M until sufficiently few columns are relevant, and only computing the required intermediate members of the sequence, but it seemed to become too complex and I have left it for now. I'll be happy to try the Pari/GP code...

I'll also be much interested to learn how to determine the function f(x) which predicts/approximates the elements of the sequence. I could implement f(x) in Pari/GP, but I just don't see how one would arrive at it.
Gottfried

Alright, here's the first draft of PARI/gp code.

(Begin Edit: Adding link to attachment)
(End Edit)

Note the use of a constant V0 in my array indexes. Converting code from a typical language with 0-based arrays to a language like PARI with 1-based arrays can be very difficult to debug. So I took the easy way out and made all the code implicitly 0-based, using the V0 constant to add the 1 back in at the last moment. This allows me to port code back and forth between PARI/gp and SAGE/python or c++ or whatever.
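In miniature, the idiom looks like this (my illustration; only the V0 constant itself comes from the attached file):

Code:
V0 = 1;  \\ PARI/gp arrays are 1-based; V0 adds the 1 back in at the last moment
a = vector(10);
for (k = 0, 9, a[k + V0] = 2^k);  \\ the loop logic itself stays 0-based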

Here are sample outputs. The first shows how to calculate elements from the sequence... sequentially.
[attached screenshot]

The next shows how to initialize the generator polynomials and then print out the terms from the sequence with indexes as powers of 2 (i.e., A(1), A(2), A(4), A(8), etc.).

[attached screenshot]

The next couple pictures show me calculating the first 30 terms with power of 2 indexes. You can see that the calculation is very fast, a fraction of a second. You can validate the terms up to A(8192) against the list at the OEIS page:
http://oeis.org/A000123/b000123.txt

As for those polynomials, factoring the main terms is interesting.

For degrees above 3, for instance:

137438953472/14175 x^10 = 2^37 / (3^4 5^2 7) x^10.

Notice that 4, 8, 128, 2048, 131072, ... are all powers of 2!

Also, for the x^n term, all denominators are of the form 3^a 5^b 7^c ... p^alpha, where p is the largest prime up to n and a, b, ... are nonzero!

The denominators are multiples of primorials of primes smaller than n.

Fascinating.

regards

tommy1729
#56
(08/08/2014, 12:55 AM)jaydfox Wrote: Deriving the continuous function is actually pretty straightforward.

(...)

In general:
\(
D\left[b_k x^k\right] = b_{k-1} \left(\frac{x}{2}\right)^{k-1} \\
\Rightarrow k b_k x^{k-1} = \frac{1}{2^{k-1}} b_{k-1} x^{k-1} \\
\Rightarrow b_k = \frac{1}{k} \left(\frac{1}{2^{k-1}}\right) b_{k-1}
\)

By induction:
\(
b_k = \frac{1}{2^{(k-1)k/2} k!}
\)

Then, as I posted previously:

\(
f(x) = \sum_{k=0}^{\infty}\frac{1}{2^{k(k-1)/2} k!} x^k
\)

I'm kind of disappointed I didn't see this earlier. I didn't even arrive at it by "first principles". Rather, I was playing around with creating a function that obeys the functional equation f'(x) = f(x/2), but which oscillates in a similar manner to the way the binary partition function oscillates. It's actually pretty cool, especially the code I came up with. Perhaps I'll share in a later post.

At any rate, it dawned on me that the oscillating version of the function behaves like a Laurent series, with positive powers of x creating oscillations far from the origin (with a periodicity of about two zeroes per doubling of scale), and negative powers of x creating oscillations near the origin (with a periodicity of about one zero per doubling of scale).

And as I was trying to formalize it, I realized something that I'd missed earlier, which should have been obvious:

\(
f(x) = \sum_{k=0}^{\infty}\frac{1}{2^{k(k-1)/2} k!} x^k \\
\Rightarrow f(x) = \sum_{k=0}^{\infty}\frac{1}{2^{k(k-1)/2} \Gamma(k+1)} x^k
\)

Treating Gamma(k) at negative integers as infinity, and the reciprocal of such as zero, we can take the limit from negative to positive infinity. And we can replace k with (k+b), where b is zero in the original solution, but can now be treated as any real (well, any complex number, but the complex versions are less interesting).

\(
f_{\beta}(x) = \sum_{k=-\infty}^{\infty}\frac{2^{-(k+\beta)(k+\beta-1)/2}}{\Gamma(k+\beta+1)} x^{k+\beta}
\)

So, without further ado, here are some graphs comparing f_{1/2} to f_0:

[three attached plots]
~ Jay Daniel Fox
#57
(09/09/2014, 07:43 PM)jaydfox Wrote: Treating Gamma(k) at negative integers as infinity, and the reciprocal of such as zero, we can take the limit from negative to positive infinity. And we can replace k with (k+b), where b is zero in the original solution, but can now be treated as any real (well, any complex number, but the complex versions are less interesting).

\(
f_{\beta}(x) = \sum_{k=-\infty}^{\infty}\frac{2^{-(k+\beta)(k+\beta-1)/2}}{\Gamma(k+\beta+1)} x^{k+\beta}
\)

One minor correction: treating Gamma(k) at non-positive integers as infinity... Anyway...

Sheldon or Gottfried, if either of you is still interested in this problem, here's some code to calculate f_b(x):

Code:
InitF(beta=0, n=50, prec=77) = {
  local (defprec, a_k, func);
  defprec = default(realprecision);
  default(realprecision, prec);
  \\ leading coefficient: a_0 = 2^(-beta(beta-1)/2) / Gamma(beta+1)
  a_k = 2.^(-beta*(beta-1)/2) / gamma(beta+1.);
  func = List();
  listput(func, a_k);   \\ func[1]: the truncated series in x
  listput(func, beta);  \\ func[2]: beta, for the x^beta prefactor
  for (k=1, n,
    \\ term ratio: a_k / a_(k-1) = 1 / (2^(k-1+beta) * (k+beta))
    a_k /= (2.^(k-1.+beta) * (k+beta));
    func[1] += a_k * x^k;
  );

  default(realprecision, defprec);
  return (func);
}

EvalF(func, k) = {
  \\ evaluate the series at k and restore the factor k^beta
  return (subst(func[1], x, k) * k^func[2]);
}

I named this code bp1.gp, and here's a screenshot showing how it can be used. Basically, you can use beta=0 for the original version (which is entire), or use a non-integer, preferably negative. If you use a positive value for beta, or if you use a negative value close to zero, it won't be accurate near the origin. I used -25/2 in this example. The closer to the origin you intend to evaluate, the smaller beta needs to be (i.e., negative and larger in absolute value).

[attached screenshot]
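Since the screenshot is not preserved here, a hypothetical session along the lines described above would be:

Code:
\r bp1.gp
func = InitF(-25/2, 50, 77);  \\ beta = -25/2, 50 terms, 77 digits
EvalF(func, 3.0)              \\ approximates f_{-25/2}(3)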
~ Jay Daniel Fox
#58
(07/31/2014, 01:51 AM)jaydfox Wrote: \( f(x) = \sum_{k=0}^{\infty}\frac{1}{2^{k(k-1)/2} k!} x^k \)

I'm pretty sure this can be written as an integral.

That might advance things.

Time is unfortunately against me as usual.

regards

tommy1729
#59
As promised previously, here's a version that initializes the SeqParts array to an arbitrary non-negative index in the sequence:

.gp   jdf_seq_v6.gp (Size: 13.35 KB / Downloads: 553)

(08/20/2014, 02:11 PM)sheldonison Wrote: (...) I figured out how to initialize the SeqParts array for a power of 2, and then it is fast for numbers not too much bigger than that exact power of 2. What is the algorithm for initialization halfway to the next power of 2? For example (2^24+2^23)? I don't understand the polynomials well enough to write my own SeqPowers2 initialization starting with 3,6,12,24....

(08/20/2014, 07:57 PM)jaydfox Wrote: I'm still getting around to writing the code that calculates arbitrary terms of the sequence. It's actually a bit complicated, which is why I've been putting it off.

Okay, I've updated quite a few functions, mostly to make them compatible with the new InitSeq() functions (which replace InitBinPartSeq()).

There's a new PrevSeq() function, which runs the sequence backwards. As such, the Seq() function can now take negative arguments, and will handle them accordingly.

The InitSeq() function should demonstrate how the polynomials are used to calculate terms in the sequence. It is not optimized, but the sub-calls are, so it should still be reasonably fast.

The InitSeq2() function is optimized. There are two main optimizations. First, the SeqPowers2() function is used to initialize the SeqParts array for the highest power of 2 in the provided index. Second, "negative" bits are used to reduce the number of calculations required. Between these two optimizations, you can quickly calculate terms which have low information.

For example, InitSeq2(2^100) takes about 10 ms on my computer. InitSeq2(2^100 - 2^95) takes about 90 ms. However, InitSeq2(2^100\3) takes about 640 ms. Note that the binary representation of 2^100\3 has alternating 1's and 0's, and the use of negative bits cannot simplify this.

For the InitSeq2() function, set the "show" parameter to 2 to view the bits. (Set the show parameter to "3" to get a printout of the SeqParts array at completion of the function call.) The output will look something like this:

Code:
(15:02) gp > InitSeq2(2^27-2^23-2^17+2^11-2^7, 2)
size=27, bits=List([0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0])
size=28, bits=List([0, 0, 0, 0, 0, 0, 0, -1, 0, 0, 0, 1, 0, 0, 0, 0, 0, -1, 0, 0, 0, 0, 0, -1, 0, 0, 0, 1])
A(125699968)=13989020196114224701143563476000978180153807452722418367111885874553950782754824115506668
(15:03) gp >

In the example above, the first size/bits printout is using only positive bits. The second size/bits printout shows the same number, using negative bits. The left-most bit is the least significant.
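The signed-bit rewriting is essentially the non-adjacent form of the index. A minimal Pari/GP sketch (my illustration, not the routine in jdf_seq_v6.gp) that produces such bits, least significant first as in the printout above:

Code:
naf(n) = {
  my(bits = List(), d);
  while (n != 0,
    if (n % 2 == 0,
      listput(bits, 0)
    ,
      d = if (n % 4 == 1, 1, -1);  \\ sign chosen so that n - d is divisible by 4
      listput(bits, d);
      n -= d;
    );
    n \= 2;
  );
  return (bits);
}

For the index in the example, naf(2^27-2^23-2^17+2^11-2^7) reproduces the second bits list.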

Once the sequence is initialized with InitSeq() or InitSeq2(), you can then use Seq() with positive or negative counts, or use NextSeq() and PrevSeq() directly. For example, to calculate the terms from 2^13-5 to 2^13+5 (which you can validate against this list at OEIS):

[attached screenshot]

Now let's check out 2^60-5 through 2^60+5:

[attached screenshot]

Final disclaimer: this was a complicated set of logic to pull together, and it doesn't help that I'm still getting used to indexing arrays from 1 instead of from 0 (PARI/gp is an outlier in this regard). So, I expect there are still a few bugs. Indeed, I had to re-edit the file several times while writing this post, as I found bugs while making screenshots or validating what I was writing. If I find any more bugs in the coming days, I'll post a small update, perhaps version "6b".
~ Jay Daniel Fox
#60
I'm not sure if anyone else is still playing around with the binary partition function. I'm still learning lots of new things (about the function, or about complex analysis), so I keep poking at it.

Here are a few of my latest observations. Sorry for the long post; I probably should have split this up into multiple posts.

Locally Polynomial

While the relationship isn't exact, we can approximately say that as we double the scale on the x-axis, the function resembles a polynomial one degree higher. For example, if we say that the function resembles a 3rd order polynomial between about 4 and 8, then it will resemble a 4th order polynomial between about 8 and 16. (Those numbers are made-up examples, but better ones are demonstrated below.)

This relationship is easy to see when you build the discrete binary partition function, because of the polynomials that I have previously discussed. The polynomials are found by interpolating points at intervals of 2^(n-1) for the nth-degree polynomial. In other words, we have to double the scale to interpolate one degree higher.

I was curious to measure just how well each region could be approximated by a polynomial. My idea was simple: for any given x, determine the contribution of a_k x^k to f(x), for each k in the Taylor series. What I found was pretty interesting. Based on the definition of the power series, we can find the exact value of x for which a_k x^k = a_{k+1} x^(k+1). Below this point, the function gets more of its "character" from the kth-order term, while above that point, it gets more of its character from the (k+1)th-order term.

For example, at x=12, the 2nd and 3rd order terms are equal: 1/4 12^2 = 1/48 12^3. At x=32, the 3rd and 4th order terms are equal: 1/48 32^3 = 1/1536 32^4. Given these two observations, we can say that f(x) behaves approximately like a 3rd order polynomial on the interval (12, 32).

Relatively Precise Approximation of f(x)

In general, at x = k 2^(k-1), the (k-1)th- and kth-order terms are equal.

The really nice thing about this observation is that it leads to a relatively precise approximation of f(x). We have already seen an approximation earlier of ln(f(x)):

(08/04/2014, 11:49 PM)sheldonison Wrote: I made some numerical approximations, and for big enough values of x, using Jay's Taylor series, I get that \( \ln(f(x)) \approx \frac{(\ln(x))^2}{2\ln(2)} \), which match the equations in Gottfried's link. If ln(x)=1E100, then ln(f(x))~=7.213E199, ln(f(f(x)))~=3.753E399

Note that ln(f(x)) = ln(x)^2/(2 ln(2)) is equivalent to log_2(f(x)) = log_2(x)^2/2. It takes (in my opinion) a very large value of log_2(x), which in turn implies an extremely large value of x, before this approximation gets close (i.e., before the ratio of the LHS and RHS approaches 1). And this is just an approximation of the logarithm of f(x). Exponentiate, and it's way off.

My approximation gets closer to 1, faster, and it approximates f(x) directly.

To use it, however, we have to be able to invert the relatively simple equation q(x) = (x-1) log_2(x) (Update) q(x) = x 2^(x-1) (End Update). This yields h(q(x)) = x. For example, h(12) = 3, h(32) = 4, h(448) = 7, etc.

h(x) can be solved iteratively (taking the limit, but truncating when the desired accuracy has been reached):

\(
h_0(x) = \log_2(x) \\
h_k(x) = h_0(x)+1 - \log_2(h_{k-1}(x)) \\
h(x) = \lim_{k \to \infty} h_k(x)
\)
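The iteration converges quickly in practice; a small GP sketch (mine, not from the post):

Code:
h(x) = {
  my(h0 = log(x)/log(2), hk = h0);
  for (i = 1, 50, hk = h0 + 1 - log(hk)/log(2));  \\ fixed-point iteration
  return (hk);
}
\\ h(12) -> 3, h(32) -> 4, h(448) -> 7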

I also need to define a new constant, g_2, which I found experimentally, but which I believe I will be able to define rigorously in a later post:

\(
g_2 = 1 + 2 \sum_{k=1}^{\infty} 2^{-k^2/2} \approx 3.010767
\)

With these definitions in place, the approximation of f(x) is:

\(
f(x) \approx g_2 \frac{x^{h(x)-1/2}}{\Gamma(h(x)+1/2) 2^{(h(x)-1/2)(h(x)-3/2)/2}}
\)

Let's start with an easy example. For x = 448, we find that h(x) = 7. That is, 7*2^(7-1) = 448.

Therefore, we have (updated, see note at end):
\(
f(x) \approx g_2 \frac{448^{6.5}}{\Gamma(7.5) 2^{(6.5)(5.5)/2}} \\
f(x) \approx 3.01076739 \frac{1.7112249\mathrm{E}+17}{(1871.2543) (2^{17.875})} \\
\)

This evaluates to f(448) ~= 1145355510.094

Compare this to the actual value of f(448) of 1042092969.261. The approximation is off by about 9.9%. For larger values of x, the approximation only gets better. Indeed, convergence in h(x) is linear. With just a small adjustment to this approximation, convergence in h(x) is quadratic:

\(
f(x) \approx g_2 \frac{x^{h(x)-1/2}}{\Gamma(h(x)+1/2) 2^{(h(x)-1/2)(h(x)-3/2)/2}(1+\frac{1}{h(x) \ln(4)})}
\)

I found this last adjustment experimentally as well, but I'm hoping there's a rigorous method to derive it. I suspect it's a tweak to the sum which yielded the g_2 "constant". g_2 isn't actually a constant, but a function of h(x), which goes to the "constant" value only in the limit as h(x) goes to infinity.

With this better approximation, f(448) ~= 1038353545.081, which is accurate to within 0.36%.
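Assembled in GP (again my transcription of the formulas above, reusing h(x) from the earlier sketch):

Code:
g2 = 1 + 2*suminf(k=1, 2.^(-k^2/2));  \\ ~ 3.010767
fapprox(x) = {
  my(hx = h(x));
  g2 * x^(hx-1/2) / (gamma(hx+1/2) * 2^((hx-1/2)*(hx-3/2)/2)
                     * (1 + 1/(hx*log(4))));
}
\\ fapprox(448) gives about 1.03835 E9, vs. the true value of about 1.04209 E9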

Coefficients for the Discrete Sequence

I've previously described how to derive polynomials which can be used to calculate arbitrary terms in the discrete sequence. I've tried analyzing the coefficients of the polynomials, and I'm just not finding any useful patterns. The coefficient of the highest order term matches the corresponding coefficient of f(x), and the coefficient of the next term (two degrees lower) is generally negative. However, I can't find any useful patterns yet. It's a shame, because a closed form solution for these coefficients might help determine the a_0 (or C) constants previously discussed: a_0 ~= 1.0830629, C = 1/a_0 ~= 0.923307

However, what does seem to bear fruit is to look at the polynomials evaluated at powers of 2. The first few are:

Code:
1
1     1
1     2     1
1     4     4     1
1     8    16    10     1
1    16    64    84    36     1
1    32   256   680   656   202     1
1    64  1024  5456 10816  8148  1828     1

From left to right, these are the evaluations of the 0th, 1st, 2nd, 3rd, etc. order polynomials. From top to bottom, these are the polynomials evaluated at 2^k, where k is the row number, with the first row being k=-1 (to make the math simpler, so that the second row is evaluated at 2^0 = 1).

The diagonal elements are 1, with 0's above, and the following elements below:
The first column is 1 for all rows.
The second column is 1 * 2^k.
The third column is 2^{2(k-1)} = 1/4 2^{2k}.
The fourth column is 2^{3(k-2)} + 2*a_{k-1,3}, which is floor(1/48 2^{2k+2}) * 2^{k-2}.
The fifth column is 2^{4(k-3)+1} + 4*a_{k-1,4}, which is (4*floor(1/1536 2^{2k+4}) + 1) * 2^{2k-6}.

Note the use of the floor function to make these terms approximate the continuous function: the factors 1/4, 1/48 and 1/1536 appearing above are coefficients of the continuous function's Taylor series.

Here, a_{k-1,3} is referring to the previous row, third column (or fourth column, since I consider the leftmost column the 0th column). In other words, for the third column:
k=2, 1 = 2^0 + 0 = floor(2^6/48)
k=3, 10 = 2^3 + 2*1 = floor(2^8/48)*2
k=4, 84 = 2^6 + 2*10 = floor(2^10/48)*2^2
k=5, 680 = 2^9 + 2*84 = floor(2^12/48)*2^3
k=6, 5456 = 2^12 + 2*680 = floor(2^14/48)*2^4

Similarly for the fourth column:
k=3, 1 = 2^1 + 4*(-1/4) = (floor(2^10/1536)*4+1) * 2^0
k=4, 36 = 2^5 + 4*1 = (floor(2^12/1536)*4+1) * 2^2
k=5, 656 = 2^9 + 4*36 = (floor(2^14/1536)*4+1) * 2^4
k=6, 10816 = 2^13 + 4*656 = (floor(2^16/1536)*4+1) * 2^6
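A quick GP check of these closed forms (my verification sketch; col3/col4 are made-up names for the third and fourth 0-based columns):

Code:
col3(k) = floor(2^(2*k+2)/48) * 2^(k-2);
col4(k) = (4*floor(2^(2*k+4)/1536) + 1) * 2^(2*k-6);
print(vector(5, i, col3(i+1)));  \\ [1, 10, 84, 680, 5456]  (k = 2..6)
print(vector(4, i, col4(i+2)));  \\ [1, 36, 656, 10816]     (k = 3..6)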

I feel like I'm close to a pattern here that will really speed up calculations. I might be able to ditch the polynomials entirely. I've found some other strange recurrences, one of which uses the Thue-Morse sequence, which Gottfried previously described in post #10 on the first page of this thread. I'll try to detail those later this week.


PS: I made an edit above, and tried to make it obvious what the edit was. I also had to make a second edit to clean up some TeX that concatenated two numbers, and to change the formatting of the scientific notation to remove ambiguity about the "e".
~ Jay Daniel Fox

