Dissecting Andrew's slog solution
#1
I've been looking at the terms of Andrew's slog function, as computed with various matrix sizes. Andrew notes in his paper that there is a periodicity of 7 in the terms. As a rough first approximation, he's correct, but a closer look reveals a more interesting pattern.

Perhaps it's best if I walk you down the path I followed to get to where I am now. I started, of course, with the graph of the root test for the power series. Here I show values for the 400-term solution, with root test values between 0.4 and 0.8, to expose a little more detail:

[Image: root test of the 400-term solution, plotted for values between 0.4 and 0.8]

So far, nothing terribly interesting stands out. Sure, there's a periodicity, though it's not exactly at 7 terms. As we get about halfway through the series, the periodicity breaks down, but that would appear to be an artifact of the finite matrix we solved. For a 200x200 matrix, the periodicity is pretty consistent out to about 100 terms. For a 500x500 matrix, it's pretty consistent out to about 250 terms, give or take.
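For anyone who wants to reproduce the graph above: the root test values are just |a_k|^(1/k). A minimal sketch in SAGE, assuming cofs holds the list of slog coefficients (the variable name is just for illustration):

Code:
# root test of the slog coefficients: |a_k|^(1/k), the quantity in the graph above
rt = [[k, abs(RR(cofs[k]))^(1/k)] for k in range(1, len(cofs))]
list_plot(rt).save('roottest.png', ymin=0.4, ymax=0.8, figsize=(7,5))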

My first "aha!" moment came when I forgot to set the plotjoined flag when I saved the graph to disk. I was zooming in, which also helped. This fortuitous accident made something very striking stand out:

[Image: the same root test, zoomed in and plotted as unconnected points]

Notice the overlapping parabolic-looking arcs? The overlapping part is key, because it exposes the pattern that's nearly indiscernible in the connected plot. I knew at once that I needed to look at evens and odds. Here's the next graph I looked at, with the odd terms of the series graphed in red, the evens in blue:

[Image: root test with the odd terms in red and the even terms in blue]

I decided to take a closer look at the unaltered terms. The first thing that struck me was that the sign of the even terms seemed to alternate: graphing only the even terms, every other one flips sign, so reversing the sign of every other even term should yield a (relatively) smooth curve. Of course, the terms diminish quite rapidly, so in the following graph I also added a scaling factor, using (-2)^(k/2) to flip the signs and undo the decay in one step. For evens such as 0, 4, 8, 12, etc., this factor is positive, while for 2, 6, 10, etc., it is negative. Here's the resulting graph, again for the 400-term solution:

[Image: even terms of the 400-term solution, scaled by (-2)^(k/2)]
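For reference, the transformation I just described amounts to something like this in SAGE (a sketch; cofs is again a stand-in name for the list of coefficients):

Code:
# even terms only, with every other even term sign-flipped and the rapid decay
# roughly undone, via the (-2)^(k/2) factor:
# positive for k = 0, 4, 8, ..., negative for k = 2, 6, 10, ...
evens = [[k, RR(cofs[k]) * (-2)^Integer(k/2)] for k in range(0, len(cofs), 2)]
list_plot(evens, plotjoined=True)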
~ Jay Daniel Fox
#2
Here's the interesting part. Remember how I said I had to invert the sign of every other even term? The same thing happens for every other odd term. Moreover, the even and odd terms behave much like sine and cosine: one is near zero while the other is approaching its maximum distance from zero. Put these pieces of information together, and it appears as if we could put all four sets of numbers together, using a sine or cosine function with an argument that increases by about pi/2 for each successive term. Those who dabble in signal analysis are used to this sort of aliasing effect. After subtracting out the pi/2 per k, we're left with a little excess, which repeats about every 27 terms, perhaps closer to 26.8 or 26.9, based on my initial studies. That's not far from 28, and 28 divided by 4 explains the nearly 7-term periodicity that Andrew observed.
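To illustrate the aliasing effect with a toy example (this is not the slog data, just a pure rotation of a little less than a quarter turn per term): sampling only every fourth term turns the fast rotation into a slow cycle of about 26.9 terms.

Code:
# toy example of the aliasing: rotate just shy of a quarter turn per term,
# then look only at the terms k = 0, 4, 8, ...
excess = 2*RR(pi)/26.9              # phase slip per term, matching the observed cycle
theta = RR(pi)/2 - excess           # rotation per term
evens = [[k, cos(theta*k)] for k in range(0, 160, 4)]
list_plot(evens, plotjoined=True)   # slow oscillation with a period of about 27 in k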

When you look at only the evens or only the odds, and reverse the signs of every other term, you can make out the sine waves much more easily, and it's easy to confirm that the periodicity is less than 28 and greater than 26. More accuracy than that requires solving a larger system. My system maxes out at about 560 terms. Beyond that, 1.5 GB of RAM is no longer enough, and I start using swap space. Once Maxima starts using swap space, CPU utilization drops to 20%, which means that what should take half a day to solve will take 2.5 days or more. I can't tie up my system that long, so I can only analyze the data out to 560 terms, and I don't have enough data to draw any solid conclusions about the solution that we're approximating with these small systems.

Anyway, moving along. I was trying to figure out why the root test initially dips, then climbs upwards towards some asymptote near 0.73. Taking a cue from the series for the logarithm, I took the derivative of the power series for slog, and found the following for its root test (showing results for the 200-, 300-, 400-, 500-, and 560-term solutions):

[Image: root test of the derivative for the 200- through 560-term solutions]

Here's a detailed view:

[Image: detailed view of the derivative's root test]

Apologies for the axis labels, but the plotting engine in SAGE doesn't give me fine control over the axis scale and tick labels, and depending on the range I try to plot, I don't get any tick labels at all. So I have to pick very strange ranges to get the labels to appear, and I'm not entirely sure they're 100% accurate.
~ Jay Daniel Fox
#3
As you can see, the derivative appears to have a very well-behaved root test, right from the start. Excepting the first 10 terms or so, it appears to be approaching an asymptote of about 0.729 or 0.730. This can be seen by zooming in even further on the root test of the derivative:

[Image: close-up of the derivative's root test near its asymptote]

It's a bit messy, but I hope you can see my point. Presumably, with a significantly larger system (5000x5000, etc.), this should resolve itself into a very tidy pattern. For now, the best I can do is try to pick out the patterns.

One simple test I've found informative is to graph the terms of the derivative, scaled by a factor appropriate to undo the exponential decrease in size that's visible from the root test. In SAGE (notice I load a precalculated set of coefficients, which I can comment out on successive runs to save time):

Code:
# 1024-bit reals for the scaled terms
Flt = RealField(1024)

# scale factor used to undo the exponential decay visible in the root test
FltScale = Flt(1.374)

# load the precalculated coefficients once; comment the line out on later runs to save time
#cofs560 = load('coeffs560.sobj')
sle_coeffs = vector(Flt, len(cofs560), cofs560)

# derivative terms k*a_k, odd and even k handled separately, with every other
# sign flipped and the terms rescaled by FltScale^(k-1)
rad1 = [[k-1, Flt(((-1)^(Integer((k-1)/2))*k*sle_coeffs[k])*(FltScale^(k-1)))] for k in range(1, len(sle_coeffs), 2)]
rad2 = [[k-1, Flt(((-1)^(Integer((k-2)/2))*k*sle_coeffs[k])*(FltScale^(k-1)))] for k in range(2, len(sle_coeffs), 2)]
L560_1 = list_plot(rad1, plotjoined=True, rgbcolor=(0.00, 0.00, 0.85))
L560_2 = list_plot(rad2, plotjoined=True, rgbcolor=(0.00, 0.50, 0.85))

G = Graphics()
G += L560_1
G += L560_2
G.save('dev-cycles-560.png', xmin=0, xmax=300, ymin=-1.0, ymax=1.0, figsize=(7,5))

The resulting graph:

[Image: scaled derivative terms of the 560-term solution]
~ Jay Daniel Fox
#4
Finally, I'll show some of the things I'm looking at at the moment. For starters, here's a graph of the 300-, 400-, 500-, and 560-term solutions, graphing the even terms of the derivative (i.e., based off the odd terms of the original power series). I've reversed the signs on alternating terms, and I've scaled by the same 1.374 factor as before:

[Image: even derivative terms of the 300- through 560-term solutions, sign-flipped and rescaled]

See how the series seem to converge to some cyclic "curve"? I say curve, though it's a discrete set of points that I've artificially connected. Currently, I'm trying to work with the limited data I have to figure out what values each of these peaks approaches. I would like to ask if anyone has a machine/math library capable of solving a 600x600, 640x640, or even 700x700 system, to get more data points to work with. (If anyone has a library capable of solving an 800x800 system in less than a week, I'd very much appreciate seeing the results.) I don't need a lot of precision, though you'd need to solve the system exactly with rational math to ensure quality, then reduce the values to e.g. 100 digits or even just double precision so we could analyze further.
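In case it helps anyone set this up from scratch: the system is just the truncated Abel equation slog(exp(x)) = slog(x) + 1, collected by powers of x. Here's a bare-bones sketch in SAGE using exact rational math (no acceleration, and a small N just to show the shape of it; the variable names are mine):

Code:
# bare-bones setup of the truncated Abel system for slog, over the rationals
N = 50                        # small, just to show the structure; 400+ is where it gets expensive
M = matrix(QQ, N, N)
for n in range(N):            # row n: coefficient of x^n in slog(e^x) - slog(x) - 1
    for k in range(1, N+1):   # column k-1: unknown coefficient a_k
        M[n, k-1] = QQ(k)^n / factorial(n) - (1 if n == k else 0)
rhs = vector(QQ, [1] + [0]*(N-1))
a = M.solve_right(rhs)        # exact rational solve; this is the expensive step
cofs = [-1] + list(a)         # constant term -1, so that slog(1) = 0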
~ Jay Daniel Fox
#5
In this context it seems all the more astonishing that, for the inverse function, the odd derivatives are all convex / greater than or equal to 0.
#6
Yes!!!

The coefficients are based off the fixed point of natural exponentiation! Remember the cycle with a period of about 26.9? And remember that I said that each term behaves as if it had been rotated about a quarter turn, i.e. that it was close to pi/2?

Well, take the imaginary part of the fixed point: 1.3372357...

Subtract it from pi/2, and you get 0.2335606...

Now divide this into 2*pi, and guess what you get! 26.9 and change.
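If you want to check the arithmetic yourself, the fixed point pops out of simply iterating the principal logarithm; a quick sketch:

Code:
# iterate the principal log to land on the fixed point of e^z in the upper half plane
C = ComplexField(212)
z = C(0.3, 1.3)                             # a convenient seed; the iteration is attracting
for i in range(400):
    z = z.log()                             # converges to 0.3181315... + 1.3372357...*I
excess = RealField(212)(pi)/2 - z.imag()    # ~0.2335606
print(2*RealField(212)(pi)/excess)          # ~26.90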

The coefficients of the series behave as if they are the real parts of points that are rotating by about 1.337... radians per term, and the root test of the first derivative indicates an exponential decline that is very close to the rate at which iterated logarithms approach the fixed point!

So the fixed point rears its ugly head after all. Armed with this knowledge, I think we should be able to analyze the larger systems (300, 400, 500, etc.) and determine what the solution of the infinite system is going to look like.
~ Jay Daniel Fox
#7
Random speculation, but what if the coefficients look like the real part of a logarithmic spiral because one of the complex solutions based on the continuous iteration from the fixed point has a complex spiral for its coefficients? By dropping the imaginary parts, we recover a solution that yields real results...

Again, random speculation, based on nothing more than an observation...
~ Jay Daniel Fox
#8
jaydfox Wrote: Random speculation, but what if the coefficients look like the real part of a logarithmic spiral because one of the complex solutions based on the continuous iteration from the fixed point has a complex spiral for its coefficients? By dropping the imaginary parts, we recover a solution that yields real results...
So you think that Andrew's solution is the real part of the regular fractional iteration at a complex fixed point?!
What, then, are the real parts of the other solutions at a fixed point?
And is this Kneser's solution at all? (He started by regularly iterating at a certain fixed point, and then, by some transformation I have not yet fully understood, arrived at a real solution.)
Questions upon questions ...
#9
If we call the fixed point x, then here's a look at the coefficients a_k of Andrew's slog, divided by the real part of x^(k+1), multiplied by k (to effect the derivative), and multiplied by abs(x^2).

[Image: the scaled coefficients k*a_k*abs(x^2)/Re(x^(k+1))]
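Concretely, the quantity in the graph above was computed along these lines (a sketch, reusing Flt and sle_coeffs from the code in post #3, with the fixed point plugged in numerically):

Code:
# k * a_k * abs(x^2) / Re(x^(k+1)), with x the fixed point of e^z
Cplx = ComplexField(1024)
x = Cplx(0.3181315052, 1.3372357014)
scaled = [[k, k*sle_coeffs[k]*abs(x^2)/(x^(k+1)).real()] for k in range(1, len(sle_coeffs))]
list_plot(scaled, plotjoined=True)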

Here are a couple more detailed shots. Apologies for the axis labels on this first one, but SAGE's plotting engine has a few deficiencies.

[Image: detailed view of the scaled coefficients]

[Image: second detailed view of the scaled coefficients]

With a few exceptions in the first handful of terms, the values seem to be converging on 1.0579. As it turns out, \( x^{-1.057939991157\,i} \) is equal to \( x^{1.057939991157\cdot\left|\frac{x^{-i}}{x}\right|} \). In other words, if you start at a point very near the fixed point, then 4.44695 real iterations and -1.05794 imaginary iterations will get you to the same point.
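A quick numerical check of that last statement (a sketch): since ln(x) = x at the fixed point, x^(2*pi*I/x) = e^(2*pi*I) = 1 exactly, and 2*pi*I/x works out to roughly 4.44695 and 1.05794 for the two iteration counts (the sign of the imaginary part just depends on which of the two conjugate fixed points you use).

Code:
# the two iteration counts come out of 2*pi*i divided by the fixed point
C = ComplexField(106)
x = C(0.3181315052, 1.3372357014)     # fixed point of e^z, so ln(x) = x
t = 2*C(pi)*C(I)/x                    # x^t = e^(t*ln(x)) = e^(2*pi*i) = 1
print(t)                              # ~4.44695 + 1.05794*I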

And as it turns out, Andrew's solution would appear to be strongly reliant on complex iteration counts, and this isn't the only evidence. More on that to come.
~ Jay Daniel Fox
#10
It's hard to tell if the later terms will truly converge on 1.05794 or not. They're heading in that direction, but the convergence is painfully slow. Oh, and about the labels: the purple line is for the coefficients from solving the 580x580 system, then 560, 540, 520, 500, 480, and 440 (I skipped 460 and will fill it in later). Yellow is 440.

The convergence is slow, but I would not be surprised if we eventually got there, by the time we'd solved the million-by-million system, etc. Obviously, this will have to remain hypothetical, because even a supercomputer would have trouble solving a system of several thousand terms using exact rational math. Beyond that is out of the reach of all the distributed computing in the world (we're talking five orders of magnitude above what my system can handle, just to get to 6,000!!!). The best we could hope for would be to use floating-point math with maybe millions of bits of precision, and let a distributed effort try to solve it. Probably not worth it anyway, even if we could organize such a feat.
~ Jay Daniel Fox

