Well, the Gaussian method has a better chance, especially if you choose a proper function \(t(s)\) such that:
\[
T(s+1) = e^{t(s)T(s)}
\]
And then you ask that:
\[
T(s) \sim \text{tet}_K(s)
\]
Then, since \(\text{tet}_K(s+1) = e^{\text{tet}_K(s)}\), we get that:
\[
T(s+1)/\text{tet}_K(s+1) = e^{t(s)T(s) - \text{tet}_K(s)}
\]
So the ratio tends to \(1\) exactly when the exponent vanishes; and since \(t(s) \to 1\), that amounts to asking that:
\[
T(s) - \text{tet}_K(s) \to 0
\]
Then the iterated logarithms should produce Kneser. The question then becomes: how much better is \(T\) (for Tommy's Gaussian) at approximating Kneser than the \(\beta\) method (which does not approximate Kneser off the real line)? The function \(T\) has a really good shot, so long as we ask that \(t(s)\) tends to \(1\) in a half plane, and in a somewhat regular manner.
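A minimal numerical sketch of the idea on the real line, assuming (purely for illustration, not as Tommy's actual choice) \(t(s) = \tfrac{1}{2}(1+\operatorname{erf}(s))\), the Gaussian CDF, which tends to \(0\) as \(s \to -\infty\) and to \(1\) as \(s \to +\infty\):

```python
import math

def t(s):
    # Gaussian CDF: tends to 0 as s -> -inf and to 1 as s -> +inf.
    # This is an assumed stand-in for the actual choice of t(s).
    return 0.5 * (1.0 + math.erf(s))

def T_on_integers(s0=-8.0, steps=12):
    # Far to the left t(s) ~ 0, so T(s+1) = e^{t(s) T(s)} ~ 1;
    # initialize there and iterate the recursion forward.
    vals = {s0: 1.0}
    s = s0
    for _ in range(steps):
        vals[s + 1] = math.exp(t(s) * vals[s])
        s += 1
    return vals

vals = T_on_integers()

# The recursion runs backwards via a logarithm (the iterated-log
# pullback): T(s) = log(T(s+1)) / t(s).
recovered = math.log(vals[-2.0]) / t(-3.0)

# Once t(s) is essentially 1, the recursion is essentially
# tetration's own, so the ratio T(s+1) / e^{T(s)} approaches 1.
ratio = vals[4.0] / math.exp(vals[3.0])
```

The overflow wall is immediate, of course: one more forward step past \(s = 4\) exceeds double precision, so anything serious needs arbitrary precision; this only illustrates the mechanism of the recursion and its log pullback.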

