My favorite integer sequence - Tetration Forum (https://tetrationforum.org), thread /showthread.php?tid=911
My favorite integer sequence - tommy1729 - 07/27/2014

My favorite integer sequence is http://oeis.org/A018819, or its companion sequence http://oeis.org/A000123:

1, 2, 4, 6, 10, 14, 20, 26, 36, 46, 60, etc.

The idea that they (A018819) are computed as f(1) = f(2) = 1, f(2m-1) = f(1) + f(2) + ... + f(m) = f(2m) fascinates me. In my opinion this is a nicer sequence/equation than Fibonacci, Somos, or even Sylvester.

Also very appealing is that the second difference of any of these is itself and the first difference is the other one. This looks like a natural analogue or extension of D exp(x) = exp(x), of the hyperbolic case (sinh, cosh), or of the fact that the differences of 2^n are again 2^n. Yet the sequence does not grow exponentially: A^n is not a good fit for f(n). Its growth rate lies between polynomial and exponential, which is a bit counterintuitive considering the differences (?). This is the simplest functional equation I know of for a strictly nondecreasing integer sequence with a growth rate between polynomial and exponential.

I wonder if this can be extended naturally to a real-analytic function!? I assume this has already been done if it is possible, but I can't recall it immediately. Perhaps the equation Dif^[2](f(z)) = f(z+q) helps. (Dif is the difference operator and q is a real number.)

Generalizations of this are awesome too. No connection between these generalizations and tetration has been proven yet, so I post this here and not in the General math section. Continued sums and continued repeated sums are part of these generalizations.

It seems that the generalizations of this not-so-well-known sequence are also hidden from most books, just as tetration and cellular automata do not appear in the complex analysis books or any other standard text.

Still digging for the hidden fundamentals of math.

regards

tommy1729

RE: My favorite integer sequence - jaydfox - 07/31/2014

(07/27/2014, 08:44 PM)tommy1729 Wrote: My favorite integer sequence is http://oeis.org/A018819

Wow, that's a very cool sequence!

Quote: Also very appealing is that the second difference of any of these is itself and the first difference is the other one.

You're right, I think we can generalize to the reals:

f'(x) = f(x/2) for real x

Rewriting as f'(2x) = f(x) and taking repeated derivatives (each step brings in a factor of 2 from the chain rule), you get

\( \frac{d^n}{dx^n} f(2^{n} x) = 2^{n(n+1)/2} f(x), \) or equivalently \( f^{(n)}(x) = \frac{1}{2^{n(n-1)/2}} f\left(\frac{x}{2^n}\right). \)

This becomes a differential equation. We only need the first derivative to solve it numerically (a small step size is needed), but evaluating the derivatives at x = 0 gives a neat recurrence for the Taylor coefficients, which leads to the following power series:

\( f(x) = \sum_{k=0}^{\infty}\frac{1}{2^{k(k-1)/2}\, k!} x^k \)

Interestingly, the coefficients shrink a lot faster than the coefficients of exp(x), so this function should be entire. I haven't played with complex inputs yet, but I assume they will yield some additional strange insights.

Anyway... Solving the first-order DE numerically with step size 2^-16 and comparing against the power series, I get this:

Code:
k    Seq(k)    f(k), solved numerically    f(k) from taylor series

Pretty dang cool if you ask me. If we take k very high, the continuous function seems to converge on a constant multiple of the discrete sequence. I'm not sure what to make of this constant yet, but I assume it has an interesting interpretation. Based on analysis of the first 2^18 terms of the sequence, the constant is somewhere between 1.083035 and 1.083089, and probably pretty close to the center of that interval, about 1.083062.

Here is SAGE (python) code:

Code:
RF = RealField(128)

And here is the output of that code:

Code:
0 1.0000000000000000000000000000000000000
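[Editor's sketch] As a companion to the two ingredients above, here is a minimal sketch in plain Python, independent of jaydfox's SAGE code shown above; the names a018819_list and f_taylor are purely illustrative. It builds A018819 from tommy1729's recurrence f(1) = f(2) = 1, f(2m-1) = f(2m) = f(1) + ... + f(m), evaluates the truncated power series f(x) = sum_k x^k / (2^(k(k-1)/2) k!), and checks the functional equation f'(x) = f(x/2) with a finite difference. (As noted in the thread, the ratio between f(k) and the sequence converges only slowly, so the 1.083... constant is not visible at small k.)

Code:
def a018819_list(n_terms):
    """First n_terms values of A018819 (1-indexed), via
    a(1) = a(2) = 1 and a(2m-1) = a(2m) = a(1) + ... + a(m)."""
    a = [0, 1, 1]          # a[0] unused; a[1] = a[2] = 1
    s = [0, 1, 2]          # s[m] = a[1] + ... + a[m]
    while len(a) <= n_terms:
        m = (len(a) + 1) // 2      # next index n = len(a) is 2m-1 or 2m
        a.append(s[m])             # a(n) = a(1) + ... + a(m)
        s.append(s[-1] + a[-1])
    return a[1:n_terms + 1]

def f_taylor(x, terms=60):
    """Truncated series f(x) = sum_k x^k / (2^(k(k-1)/2) * k!)."""
    total, term = 0.0, 1.0         # term = k-th summand, starting at k = 0
    for k in range(1, terms + 1):
        total += term
        term *= x / (2 ** (k - 1) * k)   # ratio of consecutive summands
    return total

print(a018819_list(12))            # [1, 1, 2, 2, 4, 4, 6, 6, 10, 10, 14, 14]

x, h = 3.0, 1e-6                   # finite-difference check of f'(x) = f(x/2)
print((f_taylor(x + h) - f_taylor(x - h)) / (2 * h), f_taylor(x / 2))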
RE: My favorite integer sequence - jaydfox - 08/01/2014

(07/31/2014, 01:51 AM)jaydfox Wrote: If we take k very high, the continuous function seems to converge on a constant multiple of the discrete sequence. I'm not sure what to make of this constant yet, but I assume it has an interesting interpretation. Based on analysis of the first 2^18 terms in the sequence, the constant is somewhere between 1.083035 and 1.083089, and probably pretty close to the center of that interval, about 1.083062.

I did some googling, and I found this reference: http://link.springer.com/article/10.1007/BF01933448

I can't read the whole article, but when I was googling 1.083063 (going to 2^21 terms showed this to be a slightly better approximation), I found that same article again [the quoted search snippet is not reproduced in this printable version]. So I'm hoping that someone who can get access to that article might be able to shed a little light on this constant factor.

RE: My favorite integer sequence - Gottfried - 08/01/2014

I've got the generating rule in terms of a matrix. Assume an (ideally infinitely sized) matrix M with a simple generating scheme, whose top-left segment is [matrix display not reproduced in this printable version]. Then the sequence A018819 occurs by taking M to infinite powers. Because the diagonal is zero except for the top-left element, M is nilpotent apart from that entry, and the powers of M tend to a matrix whose only nonzero column is the first. See for instance M^2, and a higher power M^6 [displays not reproduced].

Usually I try such approaches in order to consider diagonalization, Jordan forms, and the like, hoping for a suggestive pattern; for instance, A is an eigensequence for M (the supporting display is not reproduced here). However, I have not yet found anything more interesting in this manner.

Interestingly, the sequence generated by Jay's function \( f(x) \) (the real-valued approximations to the elements of A) has a vanishing binomial transform: if that sequence of coefficients is premultiplied by the matrix P^-1 (the inverse of the lower triangular Pascal matrix), then the transformed coefficients vanish quickly [the table is not reproduced here; from left to right it showed Jay's coefficients, their binomial transforms, the original coefficients, and the binomial transforms of the original coefficients].

One more observation which might be interesting. Consider the dot product of a Vandermonde vector V(x) with M, where V(x) is the vector of consecutive powers of x, \( V(x)=[1,x,x^2,x^3,...] \). The dot product \( V(x) \cdot M = Y \) gives a row vector Y whose entries evaluate to geometric series, such that

\( V(x) \cdot M = \frac{1}{1-x} [1,x^2,x^4,x^6,...] = \frac{1}{1-x} V(x^2) \).

Clearly this can be iterated:

\( V(x^2) \cdot M = \frac{1}{1-x^2} [1,x^4,x^8,x^{12},...] = \frac{1}{1-x^2} V(x^4) \)

and, expressed with powers of M,

\( V(x) \cdot M = \frac{1}{1-x} V(x^{2}) \)

\( V(x) \cdot M^2 = \frac{1}{(1-x)(1-x^2)} V(x^{4}) \)

...

\( V(x) \cdot M^h = \frac{1}{(1-x)(1-x^2)\cdots(1-x^{2^{h-1}})} V(x^{2^{h}}) \).

In the limit of infinite powers of M this gives, for the first column of the result, the scalar value

\( y = w(x) = \frac{1}{(1-x)(1-x^2)(1-x^4)\cdots} \)

and all other columns tend to zero. We might say that, in the above notation, w(x) is the generating function for the sequence A. (Update: I changed the name of the function so that it does not interfere with Jay's function f(x), which interpolates the sequence A for real arguments.) The value y, on the other hand, is the evaluation at x of the power series whose coefficients are the terms of the original sequence:

\( y = w(x) = 1 + 1x + 2x^2 + 2x^3 + 4x^4 + 4x^5 + \ldots \)

which seems to be convergent for |x| < 1.

I'm fiddling a bit more with it but do not yet expect much exciting news...

Gottfried
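[Editor's sketch] Gottfried's closing identity, that w(x) = 1/((1-x)(1-x^2)(1-x^4)...) is the generating function of the sequence A, is easy to check numerically by multiplying out a truncated product. Here is a small sketch in plain Python (not Gottfried's matrix code; w_coeffs is an illustrative name):

Code:
def w_coeffs(n_terms):
    """Coefficients of prod_{k>=0} 1/(1 - x^(2^k)), truncated below x^n_terms."""
    c = [1] + [0] * (n_terms - 1)          # the series "1"
    step = 1
    while step < n_terms:                  # factors with 2^k >= n_terms cannot matter
        for i in range(step, n_terms):     # multiply by 1/(1 - x^step):
            c[i] += c[i - step]            # forward cumulative sum with stride "step"
        step *= 2
    return c

print(w_coeffs(12))    # [1, 1, 2, 2, 4, 4, 6, 6, 10, 10, 14, 14] = A018819

Multiplying a series by 1/(1 - x^s) is the same as taking cumulative sums with stride s, which is why each factor needs only a single in-place pass.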
RE: My favorite integer sequence - jaydfox - 08/01/2014

(08/01/2014, 03:25 PM)Gottfried Wrote: I've got the generating rule in terms of a matrix.

Hmm, I hadn't thought of doing it that way. Interesting...

I've been analyzing the information flow, and came up with code to generate the sequence which does not need to store all the previous entries. Here is an illustration of the data flow (calculations from an Excel spreadsheet) [illustration not reproduced in this printable version].

The naive implementation calculates the nth entry from the (n-1)th entry and the (n/2)th entry, so it has to keep at least n/2 entries in memory. The version below only keeps log_2(n) entries. Below is SAGE (python) code.

Code:
size = 6

Here is the output for validation:

Code:
0 1

RE: My favorite integer sequence - sheldonison - 08/01/2014

(08/01/2014, 01:53 AM)jaydfox Wrote: If we take k very high, the continuous function seems to converge on a constant multiple of the discrete sequence... Based on analysis of the first 2^18 terms in the sequence, the constant is somewhere between 1.083035 and 1.083089, and probably pretty close to the center of that interval, about 1.083062.

Jay, I can't read the article either, but I am very intrigued by this slowly converging ratio, and by your Taylor series approximation! Very impressive. I'm still working on understanding your "log_2(n)" term-count memory model for the function too, but the Taylor series approximation is easy to calculate for reasonably large inputs. At some point, I'd like to see if I can figure out how fast \( f^{\circ n}(1) \) grows compared to tetration, \( \exp^{\circ n}(1) \). Of course, a graph of the Taylor series in the complex plane would be fun too, as well as the location of the zeros.

RE: My favorite integer sequence - jaydfox - 08/01/2014

(08/01/2014, 11:36 PM)sheldonison Wrote: Jay,

I'm working on a set of generator polynomials that would allow me to compute an arbitrary term of the sequence in log-polynomial time. For instance, the naive approach requires O(n) calculations to compute the nth term. The generator polynomials are a set of log_2(n) polynomials of degree 1 through log_2(n). Calculating the kth polynomial requires calculating k+1 points from the (k-1)th polynomial. I haven't fully worked out the math, but I think it runs in something like O(log_2(n)^m) time, where m is a small constant, maybe 3 or 4. So, for example, calculating the 2^100th term of the sequence took about 10 minutes (as opposed to O(2^100) calculations, which would take forever on pretty much any processor).

I'll try to post gp code next week (it's in SAGE at the moment, and I'm not sure who else is using SAGE besides me... Andy and Henryk were using it a few years ago, not sure if they still do...).

RE: My favorite integer sequence - jaydfox - 08/02/2014

(08/01/2014, 11:36 PM)sheldonison Wrote: Of course, a graph of the Taylor series in the complex plane would be fun too, as well as the location of the zeros.

The first zero on the real line is close to -1.5, and the following recurrence gives a good approximation of the next zero, as a seed for Newton's method:

z_1 ~= -1.5
z_{n+1} ~= 2*z_n - 2^n

Interestingly, if there is a zero at z0, then there is a local minimum/maximum at 2*z0 and an inflection point at 4*z0. This follows immediately from the recursively defined derivatives: f'(2*z0) = f(z0) = 0 and f''(4*z0) = f(z0)/2 = 0.

Here are the first 50 real zeroes to double precision:

Code:
-1.48807854559971029
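[Editor's sketch] jaydfox's log_2(n)-memory SAGE listing above (the one beginning with size = 6) survives only in truncated form in this printable version. As a purely illustrative sketch of the data-flow idea he describes (entry n needs only entry n-1 and entry n/2, so a chain of about log_2(n) half-speed readers suffices), here is a recursive generator in plain Python; it is not his code, and the name a018819_stream is made up here.

Code:
from itertools import islice

def a018819_stream():
    """Yield a(0), a(1), a(2), ... of A018819 using
    a(2m) = a(2m-1) + a(m) and a(2m+1) = a(2m).
    Each generator keeps O(1) local state plus one child generator
    running at half speed, so about log_2(n) values are held in total."""
    yield 1                        # a(0)
    yield 1                        # a(1)
    half = a018819_stream()        # supplies a(m) for m = 1, 2, 3, ...
    next(half)                     # discard its a(0)
    prev = 1                       # a(2m-1), starting with a(1)
    while True:
        even = prev + next(half)   # a(2m) = a(2m-1) + a(m)
        yield even                 # a(2m)
        yield even                 # a(2m+1)
        prev = even

print(list(islice(a018819_stream(), 16)))
# [1, 1, 2, 2, 4, 4, 6, 6, 10, 10, 14, 14, 20, 20, 26, 26]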
RE: My favorite integer sequence - sheldonison - 08/02/2014

(07/31/2014, 01:51 AM)jaydfox Wrote: .... I think we can generalize to the reals:

[Plot: f(x) in the complex plane, from real -60 to +100, with grids every 20 units]
[Plot: f(f(x)) in the complex plane, from real -60 to +100, with grids every 20 units]

Notice that for positive reals it acts somewhat like exp(x), but the imaginary period is slowly increasing. These two plots can be compared to the equivalent plots in the exp^0.5 post, http://math.eretrandre.org/tetrationforum/showthread.php?tid=863&page=3, where the asymptotic converges much more closely to exp(z) and to a 2 pi i period, whereas this function has an imaginary cycle that keeps increasing. In many other ways, though, the plots are similar.

RE: My favorite integer sequence - Gottfried - 08/02/2014

(Update: I changed the names of the functions so that they do not conflict with Jay's f(x).)

One more curious property. Let

\( w(x)=1+1x+2x^2+2x^3+4x^4+4x^5+ \ldots \)

or

\( w(x) = \sum_{k=0}^\infty a_k x^k \)

where the \( a_k \) are the terms of the original sequence (1, 1, 2, 2, 4, 4, 6, 6, 10, 10, ...). Then

\( v(x)=1/w(x) = 1 - x - x^2 + x^3 - x^4 + x^5 + x^6 - x^7 \pm \ldots \)

is a much-discussed series (even on MSE and MO, see http://mathoverflow.net/questions/157326 ). The signs follow the Thue-Morse sequence (see Wikipedia, or google "the ubiquitous Thue-Morse sequence"); written as letters:

+--+ -++- -++- +--+ ...

and it has an interesting recursion (here -X denotes the sign-flipped block and juxtaposition denotes concatenation):

\( A_1=+ \qquad B_1 = -A_1 \\ A_2=A_1 B_1 \qquad B_2 = -A_2 \\ A_3=A_2 B_2 \qquad B_3 = -A_3 \\ \cdots \)

Hmm, I do not yet see here a useful hint for improving the computation of the terms of the sequence A, nor for Jay's Taylor series, but I thought it might be of additional interest.

Gottfried
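[Editor's sketch] Gottfried's sign pattern can be verified directly by expanding the finite product prod (1 - x^(2^k)) and comparing each coefficient with the parity of the binary digit sum, which is the Thue-Morse sequence. A short sketch in plain Python (v_coeffs and thue_morse_sign are illustrative names):

Code:
def v_coeffs(n_terms):
    """Coefficients of prod_{k>=0} (1 - x^(2^k)) = 1/w(x), truncated below x^n_terms."""
    c = [1] + [0] * (n_terms - 1)
    step = 1
    while step < n_terms:
        # multiply by (1 - x^step); go downward so c[i - step] is still the old value
        for i in range(n_terms - 1, step - 1, -1):
            c[i] -= c[i - step]
        step *= 2
    return c

def thue_morse_sign(n):
    """+1 if n has an even number of 1-bits, else -1 (Thue-Morse signs)."""
    return -1 if bin(n).count("1") % 2 else 1

coeffs = v_coeffs(16)
print(coeffs)                                                       # 1, -1, -1, 1, -1, 1, 1, -1, ...
print(all(c == thue_morse_sign(n) for n, c in enumerate(coeffs)))   # True

The first sixteen coefficients printed this way reproduce Gottfried's sign blocks +--+ -++- -++- +--+.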