Generalized Kneser superfunction trick (the iterated limit definition)
#1
Teaser

What is the general argument behind the "superfunction trick"?
It seems to me that it is possible to study the general machinery lurking behind this superfunction trick. As JmsNxn correctly notices, he is using a more complex "mutation" of the mentioned trick, but if I can make some progress understanding the old trick, I guess I will know how to expand the construction category-theoretically to include his iterated composition. Let's begin by setting some notation and by stating a "fake theorem" that illustrates my sentiment about this matter better than words.

Disclaimer on notation.
From now on I will call "a function" only a binary relation that is everywhere defined and single-valued, i.e. every element of the domain is related to exactly one element of the codomain; I'll denote functional composition by juxtaposition \( gf=g\circ f \) and integer iteration by \( f^n \). By \( s \) I mean the successor endomap and by \( {\rm mul}_a \) the scaling endomap \( x\mapsto a\cdot x \).

For two given endofunctions \( f:X\to X \) and \( g:Y\to Y \) define the set  \( [f,g]\subseteq Y^X \) as the solution set of the equation
\( \chi f=g\chi \)

We define two subsets of the set of sequences \( (Y^X)^{\mathbb N} \) as follows

\( [f,g]_{\Delta}:=\{\phi_n|\phi_{n+1}f=g\phi_{n}\} \) and \( [f,g]_{\Delta}^{op}:=\{\phi_n|\phi_nf=g\phi_{n+1}\} \)


Ok! We are now ready for the...

SuperLazy Prototheorem. Given functions \( f:X\to X \) and \( g:Y\to Y \) and a sequence of functions \( \phi_{-}:{\mathbb N}\to Y^X \), if the conditions
  1. for every natural number \( n \), \( \phi_nf=g\phi_{n+1} \) or \( \phi_{n+1}f=g\phi_{n} \);
  2. \( \phi_0 \) is "appropriate";
are met, then the "limit" of the sequence \( \phi_n\to \phi \) exists and lands in the subset \( [f,g]\subseteq Y^X \), i.e.
\( \phi f=g\phi \)

This "fake theorem" depends fundamentally on the existence and on our ability to build sequences of maps satisfying (1). Luckily this is not a problem! Such kinds of sequences exist, are definable by recursion and are abundant: in fact we can prove these

Easy Lemmas. Given functions \( f:X\to X \) and \( g:Y\to Y \), for every function \( \phi:X\to Y \) we can prove that:
  • \( f \) is split-mono \( \Rightarrow \) there exists a sequence \( \alpha_n \) s.t. \( \alpha_0=\phi \) and \( \alpha_{n+1}f=g\alpha_{n} \);
    \( f \) is split-epi \( \Rightarrow \) given \( \alpha_n,\alpha'_n\in[f,g]_{\Delta} \), if \( \alpha_0=\alpha'_0 \) then \( \alpha_n=\alpha'_n \) for every \( n \);
    if \( f \) is iso then \( Y^X\simeq [f,g]_{\Delta} \);
  • \( g \) is split-epi \( \Rightarrow \) there exists a sequence \( \beta_n \) s.t. \( \beta_0=\phi \) and \( \beta_nf=g\beta_{n+1} \);
    \( g \) is split-mono \( \Rightarrow \) given \( \alpha_n,\alpha'_n\in[f,g]^{op}_{\Delta} \), if \( \alpha_0=\alpha'_0 \) then \( \alpha_n=\alpha'_n \) for every \( n \);
    if \( g \) is iso then \( Y^X\simeq [f,g]^{op}_{\Delta} \);
  • if \( \phi\in [f,g] \), the constant sequence \( \phi! \) is in both \( [f,g]_{\Delta} \) and \( [f,g]_{\Delta}^{op} \);
  • for every \( \gamma_n\in [f,f]_\Delta \) and \( \delta_n\in [g,g]_\Delta \), if \( \phi_n\in[f,g]_\Delta \) then \( \delta_n\phi_n\gamma_n\in[f,g]_\Delta \).

Here split-mono (split-epi) means that the function has a retraction (a section): in the context of arbitrary set functions this is equivalent to being injective (respectively surjective, though that direction requires the axiom of choice).
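
To make the first of these constructions concrete, here is a minimal numerical sketch of one natural way to realize the first Easy Lemma (the concrete \( f \), \( g \), \( \phi \) and retraction below are hypothetical, chosen only for illustration): if \( r \) is a retraction of \( f \), i.e. \( rf=1_X \), the recursion \( \alpha_{n+1}:=g\alpha_n r \) gives \( \alpha_{n+1}f=g\alpha_n rf=g\alpha_n \).

```python
import math

f = lambda x: 2.0 * x      # a split-mono endomap of R (injective)
r = lambda y: y / 2.0      # a retraction of f:  r(f(x)) = x
g = math.exp               # an arbitrary endomap of R
phi = lambda x: x          # the seed alpha_0

def alpha(n):
    """alpha_n built by the recursion alpha_{n+1} = g . alpha_n . r,
    which forces alpha_{n+1} f = g alpha_n r f = g alpha_n."""
    a = phi
    for _ in range(n):
        a = (lambda prev: lambda x: g(prev(r(x))))(a)
    return a

x = 0.7
for n in range(4):
    # the two columns agree: this is exactly the defining relation of [f,g]_Delta
    print(alpha(n + 1)(f(x)), g(alpha(n)(x)))
```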

For the proof of the former I'll have to proceed by a sequence of, hopefully finite and convergent, approximate attempts: you can read such an attempt in the pdf. The latter result is pretty easy to prove and to generalize because it's pure algebra (it's in Appendix A of the same pdf).

Some context

But let's start from scratch and provide some context. I'm sure I'm not citing the oldest references on this site, but in
[TF] 2008 jul, Trappmann, Robbins - Tetration Reference

the principal Abel and Schroeder functions are defined as follows
\( \mathcal A[f]={\lim_{n\to\infty}}(s^{-1})^n\circ\log_a\circ f^n \)
\( \mathcal S[f]={\lim_{n\to\infty}}({\rm mul}_a^{-1})^n\circ f^n \)

where \( a=f'(0) \), \( s \) is the successor and \( {\rm mul}_a \) is multiplication by \( a \). Meanwhile, in the thread
[KSuLog] 2008 nov, bo198214 - Kneser's Super Logarithm:

bo198214 writes that one way to construct the principal Schroeder function is given by the limit
\( \chi=\lim_{n\to\infty}({\rm mul}_c^{-1})^n\circ(s^{-1})^c\circ f^n \)

where \( c \) is a fixed point, which Kneser (1949) composes with an Abel function of multiplication, \( \log \), to obtain an Abel function and solve the half-iterate of exp.

A few posts later Sheldonison defines an Abel function of \( \exp \) (the \( \rm{slog} \) developed at the fixed point \( c \)) with the limits
\( \psi^{-1}=\lim_{n\to\infty}\exp^n\circ s^c\circ \exp_c\circ (s^{-1})^n \)
\( \psi=\lim_{n\to\infty}s^n\circ\log_c\circ(s^{-1})^c\circ\ln^n \)

Question! What is the general pattern here?! Let's ignore the convergence issues of the limits for a moment; I'll try later to at least black-box them, and return to them when the underlying algebraic argument is clear to me.

The general scheme here seems to comprise the following:
  1. They select a pair of functions \( f:X\to X \) and \( g:Y\to Y \) with some properties (continuous, analytic, linear, or no property at all), e.g. \( (f,s) \) in the case of the principal Abel function; \( (f,{\rm mul}_a) \) in the case of the principal Schroeder function; \( (s,\exp) \) in the case of tetration; \( (\exp,s) \) in the case of the super-logarithm;
  2. they go on drawing from their magician's hat a function \( \phi:X\to Y \), a kind of first approximation chosen so as to obtain some desired properties related to the fixed points, to the behavior, or to the very success of the limiting construction, e.g. \( \log_{f'(0)} \) for the principal Abel function; the identity for the principal Schroeder function defined in [TF]; subtraction of the fixed point, \( s^{-c} \), in [KSuLog]; and, if I'm not mistaken, \( 2\sinh \) in the Tommy-method;
  3. in all the cases shown, by inverting \( f \) or \( g \), they define recursively, from a "broken conjugation", a sequence of functions \( \phi_n \) with base case \( \phi_0=\phi \), such that \( \phi_n\in [f,g]_{\Delta} \). In other words, in case the definition of \( [f,g]_{\Delta} \) wasn't very suggestive, this means that the sequence \( \phi_n \) behaves, imperfectly, almost like a solution of \( \chi f=g\chi \);
  4. they take the limit of the sequence \( \phi_n \) and get the desired function, \( \lim_{n\to\infty}\phi_n=\chi \). What they probably mean is that for every \( x\in X \) they evaluate the sequence \( \phi_n(x)\in Y \), defining for every \( x \) a sequence in \( Y \), and then they take its limit in \( Y \) (see the numerical sketch right after this list)
       \( \phi_n(x_0)\to y_0 \)
       \( \phi_n(x_1)\to y_1 \)
       \( \phi_n(x_2)\to y_2 \)
       \( \vdots \)
    studying the subset \( R\subseteq X \) for which the sequence \( \phi_n(x) \) converges.
  5. what we obtain at the end should be one of the desired inaccessible elements of \( [f,g] \), e.g. \( [f,s] \) is the solution set of the Abel equation on \( f \); \( [f,{\rm mul}_{a}] \) is the solution set of the Schroeder equation on \( f \); \( [\exp, s] \) contains the slog; \( [s,\exp] \) contains tetration; in general \( [s,f] \) is the set of superfunctions of \( f \);
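
To make step 4 concrete, here is a rough numerical sketch with a hypothetical toy map (this only illustrates the pointwise limit, not the general convergence issue): \( f(x)=x/2+x^2/4 \) fixes \( 0 \) with multiplier \( a=f'(0)=1/2 \), \( g={\rm mul}_a \), \( \phi_0={\rm id} \), and \( \phi_n={\rm mul}_a^{-n}f^n \) satisfies \( \phi_nf={\rm mul}_a\phi_{n+1} \), i.e. the sequence lies in \( [f,{\rm mul}_a]^{op}_{\Delta} \), so its pointwise limit should land in \( [f,{\rm mul}_a] \).

```python
a = 0.5

def f(x):
    return x / 2 + x**2 / 4

def phi(n, x):
    """phi_n(x) = a^(-n) * f^n(x): the n-th 'broken conjugation'."""
    for _ in range(n):
        x = f(x)
    return x / a**n

x0 = 0.3
for n in (5, 10, 20, 40):
    print(n, phi(n, x0))          # step 4: the pointwise sequence stabilizes

chi = lambda x: phi(40, x)        # a proxy for the limit
print(chi(f(x0)), a * chi(x0))    # Schroeder equation chi(f(x)) = a*chi(x), approx.
```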

As I said earlier, the proof attempt is in a LaTeX version of this post, which also has an appendix with a real proof of the construction of the sequences.


.pdf   (2021 01 16) Generalized_superfunction_trick.pdf (Size: 307.64 KB / Downloads: 366)

MSE MphLee
Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)
S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)
#2
That was very interesting. I really like the rephrasing. Got a little lost towards the end with the categories; but other than that it seems like a very good standardization of what we think of when we say recursive functions. It'd be interesting to see if you can derive the same analysis if \( f \) and \( g \) are defined with multiple variables, i.e., producing a function

\(
\chi(s,z): \mathbb{C} \times \mathbb{C} \to \mathbb{C}\\
f(s,\chi(s,z)) = \chi(s,g(s,z))\\
\)

...I had an incorrect idea here...

 I've only ever worked with \( \chi(s+1,z) = f(s,\chi(s,z)) \) and I was quick to jump the gun

Now I'm actually scratching my head at how in the hell you could solve these equations...

Very interesting though. This reads as a good foundation to all the super-function tricks.

Regards, James


EDIT:

So the best I could think of is solving for the successor case and then through conjugation solving the general case (which is not ideal).

If,

\( \phi \bullet s = f \bullet \phi \)

Then, start with a function,

\(
A(s) = \Omega_{j=1}^\infty e^{s-j} f(z) \bullet z\\
A(s+1) = e^s f(A(s))\\
\)


It may be more helpful to use a convergent factor other than \( e^{s-j} \); but let's just say it works. Then taking,

\(
F(s) = \lim_{n\to\infty} f^{-n} \bullet A(s+n)\\
\)

This could solve the successor problem. This is, for instance, definitely doable if we stick to a real monotone function, say taking \( \mathbb{R} \to \mathbb{R} \), with a sufficiently well-behaved inverse (think like \( \log \)...).

Then, taking \( f,g \) (which are similar functions), we can construct a \( G \) similarly. Then,

\(
F\bullet G^{-1} = \phi\\
\)

Should be a suitable function on \( \mathbb{R} \) satisfying the required,

\(
\phi g = f \phi\\
\)
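
To spell out why the conjugation step works (assuming \( G \) is invertible on the relevant range): writing \( y = G(t) \), and using \( F(t+1)=f(F(t)) \) and \( G(t+1)=g(G(t)) \),

\(
\phi(g(y)) = F(G^{-1}(g(G(t)))) = F(G^{-1}(G(t+1))) = F(t+1) = f(F(t)) = f(\phi(y))\\
\)

which is the required \( \phi g = f\phi \).
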
...........


I'm actually kind of curious now if I can make this work more generally. Do you think this might give you concrete ground to stand on? If we can make your "convergence criteria" a little more absolute with an example of a space where this works? I.e.: holding a space \( f,g \in \mathcal{B} \) such that we can always find functions \( \phi \in \mathcal{B} \) such that \( f \phi = \phi g \). Maybe not so perfectly, but I imagine something like this. Makes for good normal subgroup; functor; etc. nonsense I imagine. Lmao.




EDIT 2:


One example I'm running through my head, because it's simple, is,

\(
f(z) = 1+z^3\\
\)

which is \( \mathbb{R} \to \mathbb{R} \), it's well behaved, and its inverse is too. Then,

\(
A(s) = e^{s-1} (1+(e^{s-2}(1+...)^3)^3)\\
\)

This certainly converges geometrically. And the limit \( \phi = \lim_{n\to\infty} \sqrt[3]{\sqrt[3]{A(s+n)-1}...-1} \) probably converges (at least from what I'm running it through). Which would solve the equation,

\(
\phi(s+1) = 1+ \phi(s)^3
\)

If we can find a common family of functions like this, I'm sure the exponential convergents will suffice. I'm thinking surjective/injective \( \mathbb{R}\to\mathbb{R} \); at least continuously differentiable. The problem I see is that \( \phi \) will not be in the same family of functions (which is a bummer, but probably to be expected).
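
For what it's worth, here is a rough numerical sketch of exactly this example (the depth choices are hypothetical and kept small, since \( A(s+n) \) grows roughly like a power tower and overflows double precision already around \( n\approx 6 \); this only illustrates the shape of the computation, not its convergence):

```python
import math

def f(z):
    return 1 + z**3

def f_inv(y):
    # real cube root of (y - 1), the inverse of f on the reals
    return math.copysign(abs(y - 1) ** (1.0 / 3.0), y - 1)

def A(s, depth=30):
    """Truncation of A(s) = Omega_{j=1}^inf e^(s-j) f(z) . z : nest
    h_j(s,z) = e^(s-j)*(1+z^3) from the inside out (the tail is ~ e^(s-depth))."""
    z = 0.0
    for j in range(depth, 0, -1):
        z = math.exp(s - j) * f(z)
    return z

def F(s, n):
    """F_n(s) = f^{-n}(A(s+n)), the n-th approximation of the successor solution."""
    y = A(s + n)
    for _ in range(n):
        y = f_inv(y)
    return y

s = 0.3
print(A(s + 1), math.exp(s) * f(A(s)))   # exact: A(s+1) = e^s (1 + A(s)^3)
print(F(s + 1, 5), f(F(s, 6)))           # exact telescoping: F_n(s+1) = f(F_{n+1}(s))
print(F(s + 1, 5), f(F(s, 5)))           # so in the limit F(s+1) = 1 + F(s)^3
```

The telescoping identity in the second line is what makes the limit work; the last pair only agrees roughly at this shallow depth.
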
#3
Hi James, I need more time to properly reply to this (without pulling out another 8-page paper).
I'm actually very excited because you are on my wavelength!
I'm glad you found that interesting and don't worry about appendix B and C. I wanted to use that functorial definition to introduce generally iterated composition.

JmsNxn Wrote:That was very interesting. I really like the rephrasing. Got a little lost towards the end with the categories;

Btw if something seems off or is not clear I can simplify and explain it better to you.

I'm very open to criticism; I'd love to have some.

About your ideas... at the moment I'm confident I can derive almost the same rephrasing for your approach, up to some subtle details in the "convergence" black-box... Very soon I'll reply to every point you've made: I need some days to order the ideas and concepts in a way a human can check. You make very good points actually, not nonsense! You'll be surprised by how much concrete ground we have under our feet there... I've been rediscovering land since our first pm, 6 years ago.


Anyway, before going at full speed on your specific case, i.e. your tetration paper and your Omega functor \( \Omega_{i=m}^n \) (yes, it is a functor), I have to achieve the following:
-I have to be 100% sure that this first elementary case has no shadows... so I need to ask you for some help with Kneser and Iteration in general;
-I have to go back seriously to your papers.

But as I said I'm confident, extremely confident.

ps: why do you use the bullet for composition (instead of \( \circ \))?

MSE MphLee
Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)
S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)
#4
(01/24/2021, 09:04 PM)MphLee Wrote: ps: why do you use the bullet for composition (instead of \( \circ \))?

I use a bullet because I think of it as a product. And it's not exactly the same thing as \( \circ \). The bullet acts as a binding variable; so we bind the variable \( z \) to \( \Omega \). Then, once we've bound it, we can perform group operations (key word group: composition (as written with \( \circ \)) is not necessarily a group operation).

If I were to write;

\(
\int_b^c f(s,z)\,ds\circ \int_a^b f(s,z) ds\circ z = \int_a^c f(s,z)\,ds\circ z\\
\)

It's not entirely obvious that we are, first of all, binding \( z \) to the integral; and second of all, that we are in a group structure. The bullet looks more like multiplication; and it's a novel symbol in this scenario, so we don't run into the problem of overriding a symbol that we already use a lot.


It's largely an issue of novelty and precision, and of trying to hammer home that it means something different from composition; it's composition restricted to a group structure.
#5
(01/25/2021, 01:19 AM)JmsNxn Wrote: \(
\int_b^c f(s,z)\,ds\circ \int_a^b f(s,z) ds\circ z = \int_a^c f(s,z)\,ds\circ z\\
\)

So, if I get it, this kind of interpretation problem arises often in category theory (when you have to compose a lot of weird stuff).
Let's test my understanding: if I get this right, the following should make sense to you as well

\(
\int_b^c f(s,-)\,ds\circ \int_a^b f(s,-) ds = \int_a^c f(s,-)\,ds\\
\)

That is the same practical reason (let's ignore the historical one) for using \( ds \) to specify the variable you are integrating over.

MSE MphLee
Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)
S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)
#6
(01/25/2021, 11:26 AM)MphLee Wrote:
(01/25/2021, 01:19 AM)JmsNxn Wrote: \(
\int_b^c f(s,z)\,ds\circ \int_a^b f(s,z) ds\circ z = \int_a^c f(s,z)\,ds\circ z\\
\)

So, if I get it, this kind of interpretation problem arises often in category theory (when you have to compose a lot of weird stuff).
Let's test my understanding: if I get this right, the following should make sense to you as well

\(
\int_b^c f(s,-)\,ds\circ \int_a^b f(s,-) ds = \int_a^c f(s,-)\,ds\\
\)

That is the same practical reason, let's ignore the historical one, of the use of \( ds \) to specify the variable you are integrating over.


Yes, it's exactly the same thing!

Especially when I write,

\(
\Omega_{j=1}^\infty \phi_j(s,z)\\
\)

Does this mean,

\(
\lim_{n\to\infty}\phi_1(s,\phi_2(s,...\phi_n(s,z)))\\
\)

or does it mean,

\(
\lim_{n\to\infty} \phi_1(\phi_2(...\phi_n(s,z)...,z),z)\\
\)

So we add a bullet to bind the variable. Then, when I write,

\(
\Omega_{j=n}^{m} \phi_j(s,z)\bullet \Omega_{j=m+1}^k\phi_j(s,z)\bullet z = \Omega_{j=n}^k \phi_j(s,z)\bullet z\\
\)

Tell me this doesn't look better than writing:

\(
\Omega_{j=n}^{m} \phi_j(s,z)\bullet z \circ \Omega_{j=m+1}^k\phi_j(s,z)\bullet z
\)

But if I did all this bullet stuff with \( \circ \)--that's not really how \( \circ \) is usually used, so I'd be overriding the meaning of an existing symbol within this context. Better to use a new symbol and be fresh. This is especially beneficial when we talk about \( ds\bullet z \), which is almost like a differential form. Writing \( ds\circ z \) would be going a step too far I think.

Edit:

I think a good idea is to think of it this way.

\(
f(g(z)) = f \circ g = f\bullet g \bullet z\\
\)

When we use a bullet, we should declare what we're binding it to. We don't need to do that with \( \circ \). It would be wrong to override this and write,

\(
f \circ g \circ z\\
\)

Wtf is that nonsense? lol

Plus, now we can write,

\(
\Omega f \bullet g \bullet z\\
\)

and it's **almost** like a differential form
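
If it helps to pin the convention down operationally, here is a small sketch (the family \( \phi_j \) is hypothetical, just to have numbers; I'm assuming the first of the two readings above, the one that matches the \( A(s) \) example earlier in the thread): \( \Omega \) binds \( z \), nests in the \( z \) slot only with \( s \) as a passive parameter, and the ranges then glue exactly as in the displayed identity.

```python
import math

def omega(phi, a, b, s, z):
    """Omega_{j=a}^{b} phi_j(s, z) . z : nest in the z-variable only,
    i.e. phi_a(s, phi_{a+1}(s, ..., phi_b(s, z)...)); s is just a parameter."""
    for j in range(b, a - 1, -1):      # innermost index b first, outermost a last
        z = phi(j, s, z)
    return z

# a hypothetical family phi_j(s, z) = e^(s-j) * (1 + z)
def phi(j, s, z):
    return math.exp(s - j) * (1 + z)

s, z = 0.5, 0.25
lhs = omega(phi, 1, 4, s, omega(phi, 5, 10, s, z))   # Omega_{1..4} . Omega_{5..10} . z
rhs = omega(phi, 1, 10, s, z)                        # Omega_{1..10} . z
print(lhs, rhs)                                      # identical: the ranges glue
```
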
#7
I read your paper again, and I think I have some more thoughts, but I have more questions. I think I'll formulate a couple of questions and try to explain myself through an air of questioning; and hone the questions better and then ask. First, I thought it warranted to try to talk categorically.

Can we write,

\(
\mathcal{F} = \{f \in \mathcal{C}(\mathbb{R}^+,\mathbb{R}^+),\,f\,\text{is an isomorphism},\,f' \neq 0\}
\)

So that \( f \) is, say, a diffeomorphism (I believe that's the word; if not, it's something like that) of \( \mathbb{R}^+ \). Just so my shallow brain can think of a representative of the category, and it's not all up in the air. Let's additionally assume that:

\(
|f(x)| \le Ae^{Bx}
\)

For some constants \( A,B \), which will make the exponential convergents behave well. And it would imply its inverse at worst grows like \( \log \) somethin' somethin'. This would be a perfectly good algebraic space where we could derive,

\(
\forall f,g \in \mathcal{F}\ \exists \phi \in \mathcal{F}:\quad f\phi=\phi g\\
\)

Now I haven't proven that, not entirely sure how to, but it's manageable--I could probably prove something close enough to continue the discussion.

-------------------------

With that out of the way, I'm going to keep thinking about this as operations on \( \mathcal{F} \) and functors; but to me they make sense as functors on \( \mathcal{F} \); or subgroups, or different versions or whatever. What I mean is, can we think of \( \mathcal{F} \) as an almost IDEAL space. Like the best space possible; where all the algebra is simple. Rather than monsters like \( e^x \) we look at simple amoebas like \( x^2 + x \). And build from the bottom up. Because I agree with a lot of what you are saying. But from a categorical perspective, start simple, no?

Unless I'm missing something drastic. Your paper was the most riveting the 3rd time... Maybe I just got over-analytical, lmao
#8
Hey James, you are touching many topics.
I'm in the middle of a huge effort of tackling all the points you raise.
The answer is growing longer and longer so I guess I'll cut it into little pieces.

In order:
  1. Very soon I'll make a brief comment on a detail of notation and the bullet: this is self-contained and short, so it will come first;
  2. Then I'll go on with those superfunction-complete spaces. This is more self-contained and the categorical treatment is much cleaner than the one needed for the "compositorial" and for the various multi-valued superfunction tricks. I'll post a new thread for this since you seem very interested in it.
    I'll make you wait longer for this, since I want to gain more understanding of your paper first and solve some unsolved mysteries in this thread;
  3. I need some clarifications on the convergence issues;
  4. I'm beginning to translate the example you presented in this thread and coming up with a proof sketch of the general multivalued superfunction trick. This part is longer and more involved and it has my priority now.
Oh btw, I'm very glad you found something worth thinking about in my "paper". I, instead, love to finally see some concrete stuff that seems to fill the empty abstractions that I play with.

EDIT 1 (Jan 30, 2021): The first point is completed. Here are my preliminary thoughts on the bullet notation: I'll make a new thread about this once I've TeX-ed it, very soon, so that we can talk about it there if you want.

.pdf   (2021 01 30) Composition bullet notation and the general role of categories.pdf (Size: 544.4 KB / Downloads: 360)

EDIT 2 (Feb 02, 2021): Updated, corrected and polished the draft, made some paragraphs more clear. Created a new thread about the role of categories and composition.

.pdf   (2021 02 02) Composition bullet notation and the general role of categories - the softest introduction ever made.pdf (Size: 585.27 KB / Downloads: 335)

MSE MphLee
Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)
S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)
#9
PROGRESS UPDATE (Feb 05, 2021):
I'm making a lot of progress on point 4 but the road is way longer than I believed. I must delay a post on this for at least another week. I need time to check all the details and I'm convinced that I can go very far in generalizing the trick even more. Now I'm going from the special cases to the more general ones, adding many pages and correcting many typos and phrasings.

Here is a spoiler of where I am currently: I can now, 99%, take on the "Nixon's trick" at the same level of generality I treated the "Superfunction trick". That's not much but it's promising. Here's the vague statement.

[Image: image.png]


What you do in fact is to take \( \chi(\sigma):=\chi(\sigma,0) \), for \( h_j(\sigma,x)=e^{\sigma-j}\cdot F(x) \) and \( f \) the successor, and to plug \( \chi(\sigma) \) into the classical superfunction trick to obtain a superfunction of \( F \) (apologies for mixing up letters and notation, I'll work on uniformity later).
The plan is to prove all the algebraic stuff and then use JmsNxn's theorems as black boxes to derive something useful as corollaries.

---
For this reason I'll take more time for this and turn to the point 3), aka the "superfunction complete spaces".

MSE MphLee
Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)
S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)
#10
(02/05/2021, 03:26 PM)MphLee Wrote: PROGRESS UPDATE (Feb 05, 2021):
I'm making a lot of progress on point 4 but the road is way longer than I believed. I must delay a post on this for at least another week. I need time to check all the details and I'm convinced that I can go very far in generalizing the trick even more. Now I'm going from the special cases to the more general ones, adding many pages and correcting many typos and phrasings.

Here is a spoiler of where I am currently: I can now, 99%, take on the "Nixon's trick" at the same level of generality I treated the "Superfunction trick". That's not much but it's promising. Here's the vague statement.

[Image: image.png]


What you do in fact is to take \( \chi(\sigma):=\chi(\sigma,0) \), for \( h_j(\sigma,x)=e^{\sigma-j}\cdot F(x) \) and \( f \) the successor, and to plug \( \chi(\sigma) \) into the classical superfunction trick to obtain a superfunction of \( F \) (apologies for mixing up letters and notation, I'll work on uniformity later).
The plan is to prove all the algebraic stuff and then use JmsNxn's theorems as black boxes to derive something useful as corollaries.

---
For this reason I'll take more time for this and turn to the point 3), aka the "superfunction complete spaces".

Yes, that is exactly what I'm driving at. I think in the space,

\(
Q = \{f \in \mathcal{C}^1(\mathbb{R}^+ \to \mathbb{R}^+); f\,\text{ is a bijection}, f' \neq 0\}
\)

This method definitely works with the exponential convergents. In the general case you definitely have to be more subtle. In this space, for all \( f,g \in Q \) we can always find \( \phi \in Q \) such that \( f\phi=\phi g \).


EDIT:

The trouble I see with this space is that the superfunction of \( f\in Q \) will not exist here: \( F\not\in Q \). I don't envy you if you're trying to create a general structure where the superfunction sits.