My favorite integer sequence
#21
(08/06/2014, 04:38 PM)Gottfried Wrote:
(08/05/2014, 05:57 PM)jaydfox Wrote: I'm working on a set of generator polynomials that would allow me to compute an arbitrary term in the sequence in log-polynomial time. For instance, the naive approach requires O(n) operations to compute the nth term.
(...)
I'll work on converting this to PARI/gp code, since I know that several of the active members of this forum seem to use that.

Wow, that's an ingenious approach. I was trying to arrive at a similar shortcut by squaring the matrix M until sufficiently few columns are relevant, and computing only the required intermediate members of the sequence, but it seemed to become too complex and I have set it aside for now. I'll be happy when I can try the Pari/GP code....

I'd also be very interested to learn how one determines the function f(x) that predicts/approximates the elements of the sequence. I could implement f(x) in Pari/GP, but I just couldn't see how one would arrive at it.
Gottfried

Alright, here's the first draft of PARI/gp code.

(Begin Edit: Adding link to attachment)
.gp   jdf_seq_v1.gp (Size: 2.57 KB / Downloads: 952)
(End Edit)

Note the use of a constant V0 in my array indexes. Converting code from a typical language with 0-based arrays, to a language like PARI with 1-based arrays, can be very difficult to debug. So I took the easy way out and made all the code implicitly 0-based, using the V0 constant to add the 1 back in at the last moment. This allows me to port code back and forth between PARI/gp and SAGE/python or c++ or whatever.
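As a minimal sketch of the trick (illustrative only, not the attached script; A_naive is a made-up name), the naive O(n) computation looks like this with 0-based logic:

V0 = 1; \\ 1 for PARI's 1-based vectors; would be 0 in c++ or python
A_naive(n) = {
  my(a = vector(n + 1)); \\ a[k + V0] holds A(k)
  a[0 + V0] = 1;
  for(k = 1, n, a[k + V0] = a[k - 1 + V0] + a[k\2 + V0]);
  a[n + V0];
}

Porting to a 0-based language is then just a matter of setting V0 = 0 and swapping the container type.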

Here are sample outputs. The first shows how to calculate elements from the sequence... sequentially.
[attached screenshot: sequential calculation of the first terms]

The next shows how to initialize the generator polynomials and then print out the terms from the sequence with indexes as powers of 2 (i.e., A(1), A(2), A(4), A(8), etc.).

[attached screenshot: initializing the generator polynomials and printing the power-of-2 terms]

The next couple pictures show me calculating the first 30 terms with power of 2 indexes. You can see that the calculation is very fast, a fraction of a second. You can validate the terms up to A(8192) against the list at the OEIS page:
http://oeis.org/A000123/b000123.txt

[attached screenshots: the first 30 power-of-2 terms, calculated in a fraction of a second]
~ Jay Daniel Fox
#22
(08/07/2014, 01:19 AM)jaydfox Wrote: Alright, here's the first draft of PARI/gp code. Note the use of a constant V0 in my array indexes. Converting code from a typical language with 0-based arrays, to a language like PARI with 1-based arrays, can be very difficult to debug. So I took the easy way out and made all the code implicitly 0-based, using the V0 constant to add the 1 back in at the last moment. This allows me to port code back and forth between PARI/gp and SAGE/python or c++ or whatever.
Okay, first of all, I didn't make all the arrays 0-based with the V0 trick. Just in the sections of code I was porting directly from SAGE/python, which I had written and debugged in SAGE. Other sections of code (like the NextSeq function), which I wrote from scratch in PARI/gp, don't use the trick. I just wanted to point that out, in case I confused anyone.

Also, please note that when the polynomials are being printed in the InitPolys function, they are rescaled to match my previously posted polynomials. However, "under the hood", the polynomials are scaled properly. Just check out the SeqNum list to see the unscaled polynomials.

By the way, now that the code is finally written in PARI/gp, I took the opportunity to calculate the 2^99th element. (Note that a degree 100 polynomial only gets us up to 2^99. A very small tweak to the code will get the 2^100th element, so I'll be sure to include that in the next version.)

It took just 2.5 minutes to calculate the polynomials, about twice as fast as SAGE/python, if I recall correctly. Not sure why the difference, but I'm not complaining:

[attached screenshots: polynomial timing and the value of A(2^99)]

~ Jay Daniel Fox
#23
(08/06/2014, 04:38 PM)Gottfried Wrote: I'd also be very interested to learn how one determines the function f(x) that predicts/approximates the elements of the sequence. I could implement f(x) in Pari/GP, but I just couldn't see how one would arrive at it.
Gottfried

Deriving the continuous function is actually pretty straightforward. I'll use exp(x) as an example:

\(
\exp(x) = \sum_{k=0}^{\infty} a_k x^k \\
a_0 = 1 \\
D\left[\exp(x)\right] = \exp(x)
\)

From this, you can derive the coefficients of exp(x) one at a time.

\(
D\left[a_1 x\right] = a_0 \\
\Rightarrow a_1 = 1
\)

\(
D\left[a_2 x^2\right] = a_1 x \\
\Rightarrow 2 a_2 x = a_1 x \\
\Rightarrow a_2 = 1/2
\)

\(
D\left[a_3 x^3\right] = a_2 x^2 \\
\Rightarrow 3 a_3 x^2 = a_2 x^2 \\
\Rightarrow a_3 = \frac{1}{3}\left(\frac{1}{2}\right) = \frac{1}{6}
\)

In general:
\(
D\left[a_k x^k\right] = a_{k-1} x^{k-1} \\
\Rightarrow k a_k x^{k-1} = a_{k-1} x^{k-1} \\
\Rightarrow a_k = \frac{1}{k} \left(a_{k-1}\right)
\)

By induction:
\(
a_k = \frac{1}{k!}
\)


Now let's try it with the problem at hand:

\(
f(x) = \sum_{k=0}^{\infty} b_k x^k \\
b_0 = 1 \\
D\left[f(x)\right] = f(x/2)
\)

From this, we start recursively solving:
\(
D\left[b_1 x\right] = b_0 \\
\Rightarrow b_1 = 1
\)

\(
D\left[b_2 x^2\right] = b_1 \left(\frac{x}{2}\right) \\
\Rightarrow 2 b_2 x = \frac{1}{2} b_1 x \\
\Rightarrow b_2 = \frac{1}{2} \left(\frac{1}{2}\right) = \frac{1}{4}
\)

\(
D\left[b_3 x^3\right] = b_2 \left(\frac{x}{2}\right)^2 \\
\Rightarrow 3 b_3 x^2 = \frac{1}{2^2} b_2 x^2 \\
\Rightarrow b_3 = \frac{1}{3} \left(\frac{1}{2^2}\right) \left(\frac{1}{4}\right) = \frac{1}{48}
\)

In general:
\(
D\left[b_k x^k\right] = b_{k-1} \left(\frac{x}{2}\right)^{k-1} \\
\Rightarrow k b_k x^{k-1} = \frac{1}{2^{k-1}} b_{k-1} x^{k-1} \\
\Rightarrow b_k = \frac{1}{k} \left(\frac{1}{2^{k-1}}\right) b_{k-1}
\)

By induction:
\(
b_k = \frac{1}{2^{(k-1)k/2} k!}
\)

Then, as I posted previously:

\(
f(x) = \sum_{k=0}^{\infty}\frac{1}{2^{k(k-1)/2} k!} x^k
\)
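If you want to check this numerically, here is a quick PARI/gp sketch (A_seq and f_series are throwaway names of mine, not from the attached script). The ratio of the series to the sequence should hover near the scaling constant 1.083063... that comes up elsewhere in this thread:

A_seq(n) = {
  my(a = vector(n + 1)); a[1] = 1; \\ a[k+1] holds A(k)
  for(k = 1, n, a[k + 1] = a[k] + a[k\2 + 1]);
  a[n + 1];
}
f_series(x) = sum(k = 0, 100, (1.0*x)^k / (2^(k*(k-1)/2) * k!)); \\ terms die off superexponentially
for(m = 4, 12, print(2^m, "  ", f_series(2^m) / A_seq(2^m)));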
~ Jay Daniel Fox
#24
(08/08/2014, 12:55 AM)jaydfox Wrote:
(08/06/2014, 04:38 PM)Gottfried Wrote: I'd also be very interested to learn how one determines the function f(x) that predicts/approximates the elements of the sequence. I could implement f(x) in Pari/GP, but I just couldn't see how one would arrive at it.
Gottfried

Deriving the continuous function is actually pretty straightforward. I'll use exp(x) as an example:

(...)

By induction:
\(
b_k = \frac{1}{2^{(k-1)k/2} k!}
\)

Then, as I posted previously:

\(
f(x) = \sum_{k=0}^{\infty}\frac{1}{2^{k(k-1)/2} k!} x^k
\)

Ahh, that's very straightforward and very nice. I'll have to contemplate this a while until I can handle this smoothly myself. Thank you so far!

Gottfried
Gottfried Helms, Kassel
#25
In post 19 I wrote:

quote:
5) I would like to comment that f(x) - f(x-1) can always be approximated by
f(x) - ( f(x) - f'(x) + f''(x)/2 - f'''(x)/6 + f''''(x)/24 )
(end quote)

Jay tried to approximate the original sequence(s) by the equation

f'(x) = f(x/2)

A better approximation is probably

f(x) - f(x-1) = f(x/2)

It seems too hard at first, however

f(x) - f(x-1) can always be approximated by
f(x) - ( f(x) - f'(x) + f''(x)/2 - f'''(x)/6 + f''''(x)/24 )

Therefore we can rewrite

f'(x) - f''(x)/2 + f'''(x)/6 - ... = f(x/2).

These coefficients can be solved for by using the binomial expansion of the powers in the Taylor series for f(x-1).

This gives us a linear system of equations.

Actually we do not need the equation with the derivatives; it's just equivalent. Just expand (x-1)^n and solve for the Taylor coefficients.

So:

(a0 + a1 x + a2 x^2 + ...) - (a0 + a1 (x-1) + a2 (x-1)^2 + ... ) =
a0 + a1/2 x + a2/4 x^2 + ...

Get rid of the (x-1)^n terms by using the binomial theorem.
Solve the system of equations.
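Here is a minimal PARI/gp sketch of that truncated system (solveT is my own name; the normalization a0 = 1 and the cutoff at degree N are assumptions, and the top few coefficients of the truncation are the least trustworthy):

solveT(N) = {
  \\ Row n+1 matches the coefficient of x^n in f(x) - f(x-1) = f(x/2):
  \\   sum_{k>n} a_k * binomial(k,n) * (-1)^(k-n+1)  =  a_n / 2^n
  my(M = matrix(N, N), rhs = vector(N)~);
  for(n = 0, N-1,
    for(k = n+1, N, M[n+1, k] = binomial(k, n) * (-1)^(k-n+1));
    if(n >= 1, M[n+1, n] -= 1/2^n);
  );
  rhs[1] = 1; \\ the n = 0 row carries a0 = 1 on the right-hand side
  matsolve(M, rhs); \\ returns the column [a1, ..., aN]~
}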

regards

tommy1729
#26
(08/07/2014, 01:34 AM)jaydfox Wrote: It took just 2.5 minutes to calculate the polynomials, about twice as fast as SAGE/python, if I recall correctly. Not sure why the difference, but I'm not complaining:

Here is the second draft of the pari/gp code to calculate the sequence:

.gp   jdf_seq_v2.gp (Size: 3.15 KB / Downloads: 906)

I tried my hand at Newton's method, using Stirling numbers of the first kind, and the increase in speed is incredible. It took 2.5 minutes to calculate the first 100 polynomials in the previous version. Now it takes about 10 seconds!!

[attached screenshot: timing of the new interpolation routine]

I calculated 150 polynomials in about 1.5 minutes. And for reference, 60 polynomials took about 1 second (between 850 and 1100 ms in repeated tests). Based on initial tests, the time seems to scale as the 5th power of N (and there are probably some factors of log(n) as well, but I don't have enough data to tease them out). If this trend continues, then 200 polynomials should take about 5 minutes, and 300 should take about 45 minutes.
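As a quick sanity check on that exponent: scaling the ~1 second at 60 polynomials by (150/60)^5 ≈ 98 predicts roughly 1.6 minutes for 150 polynomials, right in line with what I measured.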

Just to confirm that I didn't break anything when I changed the interpolation routine, here's a screenshot showing "manual" calculation of A(2^8) through A(2^13), as well as the fast polynomial calculation of A(0), A(2^0)..A(2^13). These can be validated against a public list of the first 10000 terms (which I already posted a link to previously, but here it is again for reference):
http://oeis.org/A000123/b000123.txt

[attached screenshot: manual and polynomial calculation of A(2^8) through A(2^13)]

Quote: Also, please note that when the polynomials are being printed in the InitPolys function, they are rescaled to match my previously posted polynomials. However, "under the hood", the polynomials are scaled properly. Just check out the SeqNum list to see the unscaled polynomials.

By the way, now that the code is finally written in PARI/gp, I took the opportunity to calculate the 2^99th element. (Note that a degree 100 polynomial only gets us up to 2^99. A very small tweak to the code will get the 2^100th element, so I'll be sure to include that in the next version.)

I also fixed this last issue I mentioned. With the 10th degree polynomial, the previous version would only calculate up to A(2^9). This version will calculate up to A(2^10). (In either case, if you try to go further than the advertised limit, you'll get an array index error.)
~ Jay Daniel Fox
#27
(08/09/2014, 09:42 AM)jaydfox Wrote:
(08/07/2014, 01:34 AM)jaydfox Wrote: It took just 2.5 minutes to calculate the polynomials, about twice as fast as SAGE/python, if I recall correctly. Not sure why the difference, but I'm not complaining:

Here is the second draft of the pari/gp code to calculate the sequence:


I tried my hand at Newton's method, using Stirling numbers of the first kind, and the increase in speed is incredible. It took 2.5 minutes to calculate the first 100 polynomials in the previous version. Now it takes about 10 seconds!!

(...)

Thanks for the scripts; they should help me a lot in getting familiar with the computation idea. Everything went well, except that with my Pari/GP version I had to introduce "listcreate" and could not use multiple case-selection in the if() procedure.
Here are some of my timings:

InitPolys(10) \\ 15 ms
InitPolys(20) \\ 31 ms
InitPolys(30) \\ 172 ms
InitPolys(40) \\ 577 ms
InitPolys(50) \\ 1,716 ms
InitPolys(60) \\ 4,649 ms
InitPolys(70) \\ 8,954 ms
InitPolys(100) \\ 52,041 ms

A(2^100) \\ 7,769 ms


Gottfried
Gottfried Helms, Kassel
#28
(08/08/2014, 11:02 PM)tommy1729 Wrote: In post 19 I wrote:

quote:
5) I would like to comment that f(x) - f(x-1) can always be approximated by
f(x) - ( f(x) - f'(x) + f''(x)/2 - f'''(x)/6 + f''''(x)/24 )
(end quote)

Jay tried to approximate the original sequence(s) by the equation

f'(x) = f(x/2)

A better approximation is probably

f(x) - f(x-1) = f(x/2)

It seems too hard at first, however

f(x) - f(x-1) can always be approximated by
f(x) - ( f(x) - f'(x) + f''(x)/2 - f'''(x)/6 + f''''(x)/24 )

Therefore we can rewrite

f'(x) - f''(x)/2 + f'''(x)/6 - ... = f(x/2).

These coefficients can be solved for by using the binomial expansion of the powers in the Taylor series for f(x-1). (...)

That's actually a pretty interesting approach. I'll have to try it out. I do have one concern though.

For integer k:

A(2k) = A(2k-1) + A(k)
A(2k+1) = A(2k) + A(k)

Depending on whether x is even or odd, the functional equation would look like:

even: f(x) - f(x-1) = f(x/2)
odd: f(x+1) - f(x) = f(x/2) or f(x) - f(x-1) = f(x/2 - 1/2)

In effect, the even terms will tend to be larger than the odd terms, compared to a smooth interpolation function. The reason isn't hard to grasp. Look at the relations I posted above: A(2k) and A(2k+1) are each A(k) larger than the previous term. For a smooth function with positive curvature (which is intuitively obvious for large k), the difference A(2k+1)-A(2k) should be larger than A(2k)-A(2k-1), but it's not: both differences equal A(k). The net effect is that odd terms tend to underestimate the continuous estimator, and the even terms tend to overestimate it.
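You can see this directly in the first differences, which come out in equal pairs. A minimal check (my own throwaway lines, not from the attached scripts):

a = vector(33); a[1] = 1; \\ a[k+1] holds A(k)
for(i = 1, 32, a[i+1] = a[i] + a[i\2 + 1]);
print(vector(32, i, a[i+1] - a[i])); \\ 1, 2, 2, 4, 4, 6, 6, 10, 10, ...

A convex interpolant would need strictly increasing differences, so it can't hit every one of these equal-height steps.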

But of course it gets worse. If you look at the terms by comparing the index modulo 4, you'll find that the terms with k = 3 mod 4 will tend to be smaller than the terms with k = 1 mod 4. Indeed, this pattern continues indefinitely. The terms with k = 15 mod 16 tend to be smaller than the terms with k = 7 mod 16. The "largest" odds will be the terms with k = 1 mod 2^m for arbitrary m. The smallest odds will be the terms with k = -1 mod 2^m.

The evens are more complex to analyze, but in general, they tend to be much better behaved. But they still have their "cycles" at power of 2 intervals (for arbitrary powers of 2).

So in the end, the only way to get a continuous function that will calculate every term in the sequence is to introduce cyclic functions with periods of 2, 4, 8, 16, etc. These cyclic functions will guarantee that the function becomes unbounded with large imaginary part. Perhaps even that's not a harsh enough assessment: while the sequence itself grows subexponentially on the real line, it would grow at least exponentially in the imaginary direction.

With this in mind, I've given up trying to find a continuous function that interpolates every term in the sequence. I'd rather just use a concatenation of line segments, connecting the odd terms, with the even terms at the midpoint of each line segment.

The function f(x) which I described before provides a useful first-order approximation, once you scale it by the constant 1.083063... I'm still trying to find better ways to determine the value of this constant.

I'm also working on a second-order approximation. This would consist of two parts. The first is to fix oscillations in the error term (1/1.083...)*f(k)-A(k). These oscillations follow approximately a scaled version of f(-x). Remember the zeroes on the negative real line?

The second aspect of the second-order approximation is to come up with a function (or functions) to help approximate the even and odd terms. In essence, I want to split the (2^m)-cyclic aspects of the sequence from the smooth continuous approximation function.

Getting back to your post, I feel like your idea may provide more insights into the smooth function f(x) (and it might help me figure out the large-scale oscillations, which do not follow an exact cycle), but I don't see it helping with the (2^m)-cyclic aspects of the even/odd issue.
~ Jay Daniel Fox
#29
I realised that solving the equation

f'(2x) = 3 f(x)

can be done in the same way as the equation f'(2x) = f(x), or equivalently f'(x) = f(x/2).

Maybe that leads to a closed form solution too.
(closed form here means expressible as standard functions, sums and integrals)

---

As for the equations

f(x) - f(x-1) = f(x/2)
or
f(x+1) - f(x) = f(x/2)

and Jay's comments; I'm aware of the weaknesses of the equations.

I was just trying to find a better approximation than the equation f'(x) = f(x/2) gives.

Let A(x) be one of the original sequences and let T(x) be the nonconstant analytic solution to T(x) - T(x-1) = T(x/2).

Then I think one of the A(x) satisfies

A(x) - T(x) = O(T(x)^C)

where 0<C<1

and also

lim x-> +oo A(x)/T(x) = 1.

---

I think I have seen 2d cellular automata that generate the original sequences or approximate the functional equations given here.

I believe they came from Wolfram and Cook, but I'm not sure.
Anyone else who might recall this?

---


As for J(x) <=> J'(x) = J(x/2)
I think I know a way to prove lim x-> +oo J(x)/A(x) = 1.083...

But what really interests me is a closed form for 1.083...
Haven't I seen this number before??


regards

tommy1729

" Choice is an illusion " tommy1729
#30
(08/09/2014, 09:16 PM)tommy1729 Wrote: As for J(x) <=> J'(x) = J(x/2)
I think I know a way to prove lim x-> +oo J(x)/A(x) = 1.083...

But what really interests me is a closed form for 1.083...
Haven't I seen this number before??

Well, I think a good place to start is to treat f(x) (or are we calling it J(x) now?) as a limit:

Define a step size h, and set x=hk
\(
J_h(x) = J_h(hk) = A_h(k) \\
A_h(2k) = A_h(2k-1) + h A_h(k) \\
A_h(2k+1) = A_h(2k) + h A_h(k)
\)

Then, the standard A(x) becomes A_1(x), and J(x) becomes:
\(
J(x) = \lim_{h \to 0} J_h(x) = \lim_{h \to 0} A_h(\lfloor x/h \rfloor)
\)
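Here's an illustrative numerical sketch of this limit (my own throwaway code, not from the attached scripts), combining the two relations above into A_h(n) = A_h(n-1) + h*A_h(floor(n/2)) with A_h(0) = 1:

J_h(x, h) = {
  my(n = floor(x/h), a = vector(n + 1));
  a[1] = 1; \\ a[i+1] holds A_h(i)
  for(i = 1, n, a[i+1] = a[i] + h * a[i\2 + 1]);
  a[n + 1];
}
for(m = 0, 6, print(1.0/2^m, "  ", J_h(8, 1.0/2^m))); \\ h = 1 reproduces A(8) = 36

Shrinking h should settle toward the smooth value, though only at first order, like Euler's method.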

This form presents a couple of nice benefits. For starters, we can derive imaginary iterates without having to find a Taylor series and plug in imaginary arguments. We simply use a complex/imaginary step size. The sequence itself is still linear. (Not linear as in degree 1, but linear in the sense that we follow a line in the complex plane, like a line integral, but with a sequence that simply has an integer index.)

The second benefit is that we can draw a direct analogy with Euler's constant e = 2.718... Let's redefine exponentiation using a similar form:

\(
\exp_h(x) = \exp_h(hk) = E_h(k) \\
E_h(k+1) = E_h(k) + h E_h(k)
\)

Then e is given by the limit:
\(
e = \lim_{h \to 0} \exp_h(1) = \lim_{h \to 0} E_h(\lfloor 1/h \rfloor)
\)
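In fact, the recurrence gives \(E_h(k) = (1+h)^k\) exactly, so this is just the familiar limit
\(
e = \lim_{h \to 0} (1+h)^{1/h}
\)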

So what's the analogy to 1.083...?

1.083... is J_0(oo) / J_1(oo)

\(
a_1 = \left(\lim_{h \to 0} \lim_{x \to \infty} \frac{J_h(x)}{J_1(x)}\right) \\
\Rightarrow a_1 = \lim_{k \to \infty} \frac{J(k)}{A_1(k)} \\
\Rightarrow a_1 \approx 1.08306...
\)

Therefore, a_1 is analogous to e_1, where e_h = exp_0(1) / exp_h(1), with exp_0(1) = e. Numerically, e_1 = e/2 = 1.35914..., since exp_1(1) = E_1(1) = 2.
~ Jay Daniel Fox

