09/25/2009, 03:46 PM
(09/25/2009, 03:24 PM)jaydfox Wrote: Armed with the referenced paper, I began experimenting with the first phase, which involves creating a parameterized closed curve around a point of interest and then solving an integration problem numerically, via matrix methods. I initially performed the test with functions I could compute exactly, in order to verify the results against a theoretically exact answer. The method is quite stable: in my tests, I can solve a 400x400 matrix with merely 53-bit (double) precision and retain 46 bits (measured against a solution computed with 1024 bits). That means I lose only 7 bits during the matrix solution!

I should point out an interesting quirk with this method. The authors of the paper describe a fairly rapid rate of convergence as they increase the system size. However, based on numerous tests I performed, such rapid convergence can only be expected when two conditions are met:
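(For concreteness, here is a minimal sketch of how such a bit-retention measurement could be reproduced, assuming numpy and mpmath, and using a random stand-in matrix rather than the actual boundary-integral system:)

Code:
import numpy as np
from mpmath import mp, matrix, lu_solve

mp.prec = 1024                    # reference precision, as in the post

n = 400                           # shrink n for a quick run; 1024-bit LU is slow
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))   # stand-in matrix, NOT the boundary-integral system
b = rng.standard_normal(n)

x_double = np.linalg.solve(A, b)  # 53-bit (double) solve

# Reference solve at 1024 bits.
A_mp = matrix(A.tolist())
b_mp = matrix(b.tolist())
x_ref = lu_solve(A_mp, b_mp)

# Bits retained = -log2 of the worst relative error versus the reference.
rel = max(abs((mp.mpf(float(x_double[i])) - x_ref[i]) / x_ref[i])
          for i in range(n))
print("bits retained:", float(-mp.log(rel, 2)))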
1) The first and second derivatives of the parameterized boundary function must be fairly "tame", by which I mean that they do not grow too large compared to their average values. In particular, singularities seem to prevent rapid convergence. This condition is of course not met with the kslog, but was easily met by the epitrochoid function used by the authors.
2) The parameterization must be the "correct" one, such that the mapping to the unit disk yields a boundary traversed at a constant rate (i.e., the derivative of the mapped boundary has constant magnitude). Any cyclic deviation from the "correct" parameterization results in poor convergence, though of course the smaller the deviation, the smaller the effect. So the closer the initial "prior" parameterization is to the correct one, the better! As far as I could tell, the authors used the "correct" parameterization, but they weren't very explicit on the point. (A rough numerical illustration of both conditions follows below.)
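(The following sketch illustrates both conditions on a generic epitrochoid; the parameters are assumptions, since the authors' exact test curve isn't quoted here, and the constant-speed reparameterization at the end is only a rough proxy for the "correct" parameterization, which depends on the mapping itself:)

Code:
import numpy as np

def epitrochoid(t, R=3.0, r=1.0, d=0.5):
    # Assumed parameters; the paper's exact test curve isn't quoted above.
    k = (R + r) / r
    return (R + r) * np.exp(1j * t) - d * np.exp(1j * k * t)

t = np.linspace(0.0, 2 * np.pi, 4096, endpoint=False)
z = epitrochoid(t)

# Spectral (FFT) differentiation of the periodic boundary function.
w = 2 * np.pi * np.fft.fftfreq(t.size, d=t[1] - t[0])   # angular frequencies
z1 = np.fft.ifft(1j * w * np.fft.fft(z))                # z'(t)
z2 = np.fft.ifft(-(w ** 2) * np.fft.fft(z))             # z''(t)

# Condition 1 diagnostic: how far |z''| strays from its average.
print("max|z''| / mean|z''| =", np.abs(z2).max() / np.abs(z2).mean())

# Condition 2 proxy: reparameterize so the curve is traversed at constant
# speed (the true "correct" parameterization depends on the map itself).
dt = t[1] - t[0]
s = np.cumsum(np.abs(z1)) * dt                          # approximate arclength
t_new = np.interp(np.linspace(s[0], s[-1], t.size), s, t)
z_const_speed = epitrochoid(t_new)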
For the kslog, condition 2 could be met by parameterizing with the ksexp function, assuming we knew it ahead of time. I was tempted to use the islog, but I settled for testing a few different "naive" approaches. The simplest is to parameterize the (0,1) interval linearly. Surprisingly, the results with the linear parameterization were only slightly worse than those using fairly decent approximations. I think this is because the effect of the singularities is the overriding hindrance to convergence, so other sources of inaccuracy are rather muted.
However, the mapping integral relies on the first and second derivatives, and the linear parameterization does not have a continuous second derivative. So I decided in the end to use the aforementioned cubic polynomial parameterization, which yields a continuous second derivative for the sexp "prior".
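(For illustration, a minimal sketch of one way such a C2 parameterization could be built, using a periodic cubic spline; the placeholder samples below stand in for the sexp prior, and scipy's CubicSpline is an assumption, not necessarily the cubic construction described above:)

Code:
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical samples of a closed boundary curve on the parameter
# interval [0, 1] (placeholder curve, not the actual sexp prior).
t = np.linspace(0.0, 1.0, 33)
samples = np.exp(2j * np.pi * t) + 0.1 * np.cos(6 * np.pi * t)
samples[-1] = samples[0]          # enforce exact periodic closure

# Periodic cubic splines on the real and imaginary parts give a
# parameterization with continuous first AND second derivatives.
x_spl = CubicSpline(t, samples.real, bc_type='periodic')
y_spl = CubicSpline(t, samples.imag, bc_type='periodic')

def boundary(u, nu=0):
    # nu-th derivative of the closed-curve parameterization z(u)
    return x_spl(u, nu) + 1j * y_spl(u, nu)

z1 = boundary(t, 1)   # z'(t), continuous
z2 = boundary(t, 2)   # z''(t), continuous, as the mapping integral requires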
~ Jay Daniel Fox