Posts: 1,214
Threads: 126
Joined: Dec 2010
04/30/2022, 08:37 AM
(This post was last modified: 04/30/2022, 08:43 AM by JmsNxn.)
So I've started running more advanced trials. I haven't changed the code at all, but the trials themselves are more advanced.
For this, I'd like to paste a bunch of commands you can run, which will reproduce the output I paste below. I will focus solely on first-degree variations. This is an explanation of the semi_operators program.
To begin initialization, once in pari-gp, read the program and declare A as follows.
Code: \r semi_operators
A = Ibennet(1+s,2,2+w+O(w^2),f+O(f^2))
This is precisely the equation:
\[
A = 2 <1+s>_{f} (2+w)\\
\]
But we only care about the first two terms of the Taylor series in \(f\) and \(w\), each centered about zero.
Now we introduce the numbers this produces:
Code: %11 = ((4.000000000000000000000000000000000000000 + 1.386294361119890618834464242916353136151*f + O(f^2)) + (2.000000000000000000000000000000000000000 + 1.000000000000000000000000000000000000000*f + O(f^2))*w + O(w^2)) + ((4.028429993553611161 E-43 + 0.4528112327564592919426380883134781216581*f + O(f^2)) + (0.9580900168644644000649219792917077097099 - 0.1356667678466469569688361793634485948803*f + O(f^2))*w + O(w^2))*s + ((6.579116249093475050 E-44 + 0.07395183096062961295613131306770006209139*f + O(f^2)) + (-0.09757488571442024085704436528909808885552 - 0.1314752626782190355417664773490717959648*f + O(f^2))*w + O(w^2))*s^2 + ((7.163215891112436507 E-45 + 0.008051733859954727214726890947358116898670*f + O(f^2)) + (-0.06932117930707267344880303791788394197782 - 0.05392397722344977943503659016787524597137*f + O(f^2))*w + O(w^2))*s^3 + ((5.849379304145433158 E-46 + 0.0006574930327220950519769745338508746188389*f + O(f^2)) + (-0.01583215451145976116656433947903545843674 - 0.01734912798323829586380034471450982423896*f + O(f^2))*w + O(w^2))*s^4 + ((3.821215360683631440 E-47 + 4.295195003681144004114867909746573876425 E-5*f + O(f^2)) + (-0.002444332250449864366921702815873886462024 - 0.004305681437306917432108522766354604094587*f + O(f^2))*w + O(w^2))*s^5 + ((2.0802330859242300099 E-48 + 2.338263069149667219796167858305565345683 E-6*f + O(f^2)) + (-0.0002959644872275539864420689126690125585809 - 0.0008401974800674236013287136324014847675665*f + O(f^2))*w + O(w^2))*s^6 + ((9.706791351474238090 E-50 + 1.091081181751779435490420298220746956322 E-7*f + O(f^2)) + (-2.994057135719400337679700228929075018729 E-5 - 0.0001340837717811466774764455199877469224530*f + O(f^2))*w + O(w^2))*s^7 + ((3.963213262315457120 E-51 + 4.454806179721509602629562334846970016409 E-9*f + O(f^2)) + (-2.620033049418160161413540043074308064995 E-6 - 1.812641173160424769156356958489636153262 E-5*f + O(f^2))*w + O(w^2))*s^8 + ((1.4383569234139090268 E-52 + 1.616769244288898090374658488227443062511 
E-[+++]
From here we talk about:
\[
B = 2 <s>_{g} A\\
\]
Which becomes:
Code: B = Ibennet(s,2,A,g+O(g^2))
%12 = (((6.000000000000000000000000000000000000000 + 1.000000000000000000000000000000000000000*g + O(g^2)) + (1.386294361119890618834464242916353136151 - 5.613967822811889894 E-46*g + O(g^2))*f + O(f^2)) + ((2.000000000000000000000000000000000000000 - 8.089529890453804219 E-46*g + O(g^2)) + (1.000000000000000000000000000000000000000 - 1.2780703748038775050 E-44*g + O(g^2))*f + O(f^2))*w + O(w^2)) + (((1.189949083815681480393430888415606037397 + 0.8417315202049436792677019359063981245388*g + O(g^2)) + (1.000719794693837950862764734580566482002 + 0.1469473776340554402038720597133570834643*g + O(g^2))*f + O(f^2)) + ((1.748554982032124044032368915689313800842 + 0.2120002529842895601913183207125649505424*g + O(g^2)) + (-0.04373588768808528901181784667221274173166 - 0.05458972429145882245883848409293278725907*g + O(g^2))*f + O(f^2))*w + O(w^2))*s + (((0.5008088256433267578529147867862586443978 + 0.4898149264719639103174144458346616286488*g + O(g^2)) + (0.6675480426182790657218391397189526167333 + 0.2286945117977071640326024100678923402640*g + O(g^2))*f + O(f^2)) + ((0.8792787821023755987942472648656339966780 + 0.3622475552788372384714479063158105266271*g + O(g^2)) + (-0.1333431007554850822479490261212832581241 - 0.1745944775957549810056107935950344863934*g + O(g^2))*f + O(f^2))*w + O(w^2))*s^2 + (((0.1942849409981862815449175271024313895223 + 0.2456453146530388712583021862265536282185*g + O(g^2)) + (0.3998559258485188889027993342244624720898 + 0.2171633103305817165950812905984866299433*g + O(g^2))*f + O(f^2)) + ((0.5063705716324703251059901499869113497935 + 0.3313802751353067258060520803433950314312*g + O(g^2)) + (0.05650644674046454733268988387440112126009 - 0.1593880733599506333918730958105527370538*g + O(g^2))*f + O(f^2))*w + O(w^2))*s^3 + (((0.07307635427874960048126815750544007013580 + 0.1129435824959106148489227445288041493028*g + O(g^2)) + (0.2107034201691668111303555331928121547808 + 0.1598975709948236622798142534183039736352*g + O(g^2))*f + O(f^2)) + ((0.244[+++]
Now the goal from here is to relate the final result to \(f,g\). These play the role of \(\varphi_2,\varphi_1\) respectively. We will design our solution such that \(\varphi_3\) is a function of the above two; at least, that is how we design the solution in this instance. We will say \(h\) is a function of \(f,g\) so that:
\[
2<s>_g\left(2<s+1>_{f}2+w\right) = 2<s+1>_h 3+w\\
\]
The value \(h\) is then called with this code:
Code: h = Iexp(-s-1,B,3+w+O(w^2)) - Iexp(-s-1,2,3+w+O(w^2)) - 3 - w -O(w^2)
*** log: Warning: increasing stack size to 16000000.
%13 = (((-4.993567076550786624856049516800516312200 E-10 + 0.4551196131538278335227906183724725879997*g + O(g^2)) + (0.6309297533502175232422057906869026880288 - 0.1051549589200787456803620238772458073549*g + O(g^2))*f + O(f^2)) + ((3.543397536875013131988607015654339882352 E-8 - 0.1380892300724521722372012563450441842459*g + O(g^2)) + (0.2636872921730001624603967617469509578375 - 0.008896226977222334731860334301022909213812*g + O(g^2))*f + O(f^2))*w + O(w^2)) + (((-0.07768558393188282342115547358779740775592 - 0.01015344185200757851025127121996980849904*g + O(g^2)) + (-0.08970191377896801856281812185334754508365 - 0.06141814843793742831710323647194593304319*g + O(g^2))*f + O(f^2)) + ((0.04918565647026930110099196500453115445801 - 0.1065173717674995882109371953568912347709*g + O(g^2)) + (-0.6716227325727445260977542869162375520244 + 0.05965190728832847433737714435779781136407*g + O(g^2))*f + O(f^2))*w + O(w^2))*s + (((0.08252415909077602415472930552111683061178 + 0.01248579915994197062667945382975394706707*g + O(g^2)) + (0.07742856714620473248450754980829984371186 + 0.002615207622404777499200964124979216939756*g + O(g^2))*f + O(f^2)) + ((-0.08901111934548888374832533444093805256379 + 0.006867486033378285216751176346463146563634*g + O(g^2)) + (-0.01037995218916216468293471665016699477237 + 0.04844999500533523427556484357205950302367*g + O(g^2))*f + O(f^2))*w + O(w^2))*s^2 + (((-0.007020215702594515380164121231704846731659 - 0.002631184343897851646355404663080024544446*g + O(g^2)) + (0.007829887393306312960052145960192432909575 - 0.001950730365551465780848676898844163507034*g + O(g^2))*f + O(f^2)) + ((0.05364123779427932696347221092482634071556 + 0.01255478455261672717193317739436683665404*g + O(g^2)) + (0.001785790365460631865975588101929528360753 - 0.001978829162948244458338459868584428891072*g + O(g^2))*f + O(f^2))*w + O(w^2))*s^3 + (((0.002359456171603009420132877252429216608766 + 0.0003843534947151930638314944520071316729913*g + O(g^2)) + (0.0043290220474440[+++]
This is the exact functional equation I wrote above. Remember that I'm only asking pari-gp to check the first-order term in every variable other than \(s\).
And now the goal is to check that, up to first-order terms:
\[
2<s>_g\left(2<s+1>_{f}2+w\right) -2<s+1>_h (3+w) = 0\\
\]
We run this check with:
Code: B - Ibennet(1+s,2,3+w+O(w^2),h)
*** log: Warning: increasing stack size to 16000000.
%16 = (((1.097198830817115840580640383740426105346 E-9 + 5.335234388653624176126351941311194771086 E-10*g + O(g^2)) + (7.396205348243446130059830035065780100486 E-10 + 1.433670230476728327818058387004388550141 E-10*g + O(g^2))*f + O(f^2)) + ((-7.752349706696407465510397411648075463539 E-8 - 3.823679586907149411923075600094968883301 E-8*g + O(g^2)) + (-5.247393106172077730471169094032216921336 E-8 - 1.037201762553142918609860579552292670284 E-8*g + O(g^2))*f + O(f^2))*w + O(w^2)) + (((7.916717181790533867820027857920293452362 E-10 + 6.561601955365761690827143577917803392156 E-10*g + O(g^2)) + (8.209767281168871684138449979351427186223 E-10 + 3.079293025448427775001022847589823762440 E-10*g + O(g^2))*f + O(f^2)) + ((-5.518542542422709936196748114259743075607 E-8 - 4.663778484674427092988315411890056052904 E-8*g + O(g^2)) + (-5.827302856245977431549306248837868963595 E-8 - 2.223683439631890100008727112315780612967 E-8*g + O(g^2))*f + O(f^2))*w + O(w^2))*s + (((4.452378780099874635612548098715402982312 E-10 + 5.035278599165156491403799417503666241698 E-10*g + O(g^2)) + (6.575036890453595230403040984504917560050 E-10 + 3.787711689056494287191792808246850539164 E-10*g + O(g^2))*f + O(f^2)) + ((-3.082750185203135359157015393672450861482 E-8 - 3.551309064897895084302374043923006774901 E-8*g + O(g^2)) + (-4.671337498729872955564733070778658889302 E-8 - 2.733869995004656929501998483482988880244 E-8*g + O(g^2))*f + O(f^2))*w + O(w^2))*s^2 + (((2.203917275845259628744966439945247657971 E-10 + 3.117927660141714454224385568600062300231 E-10*g + O(g^2)) + (4.502252583065156472977668171370664419663 E-10 + 3.516625137387558164346354075930654313331 E-10*g + O(g^2))*f + O(f^2)) + ((-1.512024102446443131058097835961820175504 E-8 - 2.180739465138867015518149311894135354548 E-8*g + O(g^2)) + (-3.188865758065010989314802401209628163679 E-8 - 2.532089299817959287259590818245032503508 E-8*g + O(g^2))*f + O(f^2))*w + O(w^2))*s^3 + (((1.004143152172660312944934132436136179849 E-10 + 1.69[+++]
Which is 0 in all variables, up to 8 digits. I ran this at low precision, because the moment you ask this to be 50 digits you better be ready to melt your computer. I've set a default of 30 series precision and 40-digit precision; so across all of these equations we lose something like 25 digits of accuracy.
There exists a differential equation here in a much simpler form than what I started with, though I don't see it yet. But there's a differential equation which solves all of my troubles. Either way, the code I posted works for this shit.
Posts: 374
Threads: 30
Joined: May 2013
05/01/2022, 04:48 PM
(This post was last modified: 05/01/2022, 09:34 PM by MphLee.)
Let me just add a note about the choice of the identity 2[s]4=2[s+1]3. It's interesting to understand whether other similar identities of the Goodstein hyperoperations can be used to study other aspects of what you are doing. That identity is just one of the many arising once one fixes the base and the exponent to integer values and lets the rank be the variable.
Just one example: let \(h_s(n):=3+_s n\) then using the funny fact that for ranks above multiplication \(h_{s+2}(2)=h_{s+1}(3)=h_{s}h_{s+1}(2)\) holds and defining \(T(s):=h_s(2)\) we obtain \[T(s+n)=\prod_{i=-1}^{n-2}h_{s+i}\bullet T(s)\]
where the product is meant in omega notation for outer composition. Probably this needs some fine-tuning for the domains of the variables and some indices. It is remarkable that in this identity the ranks increase from right to left, while in most of the Goodstein iterative identities one can easily deduce, the ranks increase from left to right.
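For integer ranks the "funny fact" above is easy to verify numerically. A minimal sketch, assuming the usual Goodstein convention (rank 1 = addition, rank 2 = multiplication, rank 3 = exponentiation); the names `H` and `h` are mine, not from the semi_operators program:

```python
# Integer-rank Goodstein hyperoperation: H(s, b, n) = b +_s n,
# with rank 1 = addition, rank 2 = multiplication, rank 3 = exponentiation.
def H(s, b, n):
    if s == 1:
        return b + n
    if s == 2:
        return b * n
    if n == 1:
        return b
    return H(s - 1, b, H(s, b, n - 1))

def h(s, n):
    # h_s(n) := 3 +_s n, as in the post
    return H(s, 3, n)

# The "funny fact" for ranks above multiplication:
#   h_{s+2}(2) = h_{s+1}(3) = h_s(h_{s+1}(2))
for s in (1, 2):  # so s+2 = 3 (exponentiation) and 4 (tetration)
    assert h(s + 2, 2) == h(s + 1, 3) == h(s, h(s + 1, 2))

print(h(4, 2), h(3, 3), h(2, h(3, 2)))  # 27 27 27
```

At \(s=2\) this is the familiar chain \(3[4]2 = 3^3 = 3\cdot(3[3]2) = 27\).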
addendum 1: in the same way, by induction, one might be able to derive some iterated composition for the case \(h_s(n):=2+_s n\). Maybe using the equation \(h_{s+2}(3)=h_{s+1}(4)=h_{s}h_{s+1}(3)\). I wonder if it is possible to describe completely the class of all the equations of this kind.
addendum 2: the general case, fixing the exponent to 2, is: let \(h_{b,s}(n):=b+_s n\) and define \(T_b(s):=h_{b,s}(2)\). Use this notation to rewrite the identity \(h_{b,s+2}(2)=h_{b,s+1}(b)=h_{b,s}^{b-2}h_{b,s+1}(2)\) as \[T_b(s+2)=h_{b,s}^{b-2}T_b(s+1)\] and obtain
\[T_b(s+n)=\prod_{i=-1}^{n-2}h^{b-2}_{b,s+i}\bullet T_b(s)\]
Obviously, if I'm able to follow your methods a lil bit, you could use these kinds of equations for ranks near the integers, something like \(s\in \bigcup_{n\in\mathbb N} [n-\epsilon,n+\epsilon]\).
addendum 3: we can also look at the previous recursion in the other direction: define \({\bf A}_b(s):=h_{b,s}(b)\) and rewrite the previous recursion as \({\bf A}_b(s+1)=h_{b,s}^{b-2}h_{b,s+1}(2)=h_{b,s}^{b-2}{\bf A}_b(s)\), obtaining
\[{\bf A}_b(s+n)=\prod_{i=0}^{n-1}h^{b-2}_{b,s+i}\bullet {\bf A}_b(s)\]
addendum 4: the case \(h_{b,s+1}(3)=h_{b,s}(4)\) works only for \(b=2\) and seems much more difficult to generalize. First of all, it doesn't belong to the previous family of iterated compositions, even if it is very similar in form. Define \(E(s):=h_{s}(3)=2+_s3\); then \(E(s+2)=h_{s+1}(4)=h_s(E(s+1))\) and \[E(s+n)=\prod_{i=-1}^{n-2}h_{s+i}\bullet E(s)\]
MSE MphLee
Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)
S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)
Posts: 374
Threads: 30
Joined: May 2013
05/02/2022, 12:35 PM
(This post was last modified: 05/02/2022, 05:47 PM by MphLee.)
addendum 5: found a nice generalization that covers most of the cases, using the "hypersquare" equation \(b+_{s+2}2=b+_{s+1}b\), and also found the general formula that uses the "four-rule" equation \(2+_{s+1}2=4\).
Setting \(T_{b,k}(s):=b+_s k\), just note that \(T_{b,k}(s+2)=h_{b,s+1}^{k-2}T_b(s+2)\):
\[\begin{align}T_{b,k}(s+2)&=h_{b,s+2}(k)\\
&=h_{b,s+1}^{k-2}h_{b,s+2}(2)&& 2\leq k\\
&=h_{b,s+1}^{k-2}h_{b,s+1}(b) && \textrm{"hypersquare" equation for} \,\, s\in\mathbb N\\
&=h_{b,s+1}^{k-2}h_{b,s}^{b-k}h_{b,s+1}(k)&& k\leq b \\
&= h_{b,s+1}^{k-2}h_{b,s}^{b-k}T_{b,k}(s+1)
\end{align}\]
This gives an extension of the recursion when \(2\leq k\leq b\). Just to be enlightening, let's convert it into box notation with \(b\) implicit:
\[\begin{align}T_{b,k}(s+n+2)&=\prod_{i=0}^n h_{b,s+i+1}^{k-2}h_{b,s+i}^{b-k}\bullet T_{b,k}(s+1)\\
[s+n+2]k&=[s+n+1]^{k-2}[s+n]^{b-k}[s+n]^{k-2}[s+n-1]^{b-k}...[s+3]^{k-2}[s+2]^{b-k}[s+2]^{k-2}[s+1]^{b-k}[s+1]^{k-2}[s]^{b-k}[s+1]k \\
&=[s+n+1]^{k-2}[s+n]^{b-2}[s+n-1]^{b-2}...[s+3]^{b-2}[s+2]^{b-2}[s+1]^{b-2}[s]^{b-k}[s+1]k\\
&=h_{b,s+n+1}^{k-2}\circ \left(\prod_{i=1}^n h_{b,s+i}^{b-2}\bullet h_{b,s}^{b-k}T_{b,k}(s+1)\right)
\end{align}\]
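The two-step recursion in the last line of the align can be spot-checked at integer ranks. A minimal sketch under the same rank convention as before (rank 1 = addition, 2 = multiplication, 3 = exponentiation); the helper names are mine:

```python
# Integer-rank check of the two-step recursion derived above:
#   T_{b,k}(s+2) = h_{b,s+1}^{k-2} h_{b,s}^{b-k} T_{b,k}(s+1),   2 <= k <= b.
def H(s, b, n):
    if s == 1:
        return b + n
    if s == 2:
        return b * n
    if n == 1:
        return b
    return H(s - 1, b, H(s, b, n - 1))

def T(b, k, s):
    # T_{b,k}(s) := b +_s k
    return H(s, b, k)

def iterate(f, m, x):
    # m-fold composition f^m applied to x
    for _ in range(m):
        x = f(x)
    return x

s = 1  # so s+2 = 3 (exponentiation); keeps the integers small
for b in (4, 5):
    for k in range(2, b + 1):
        inner = iterate(lambda n: H(s, b, n), b - k, T(b, k, s + 1))
        rhs = iterate(lambda n: H(s + 1, b, n), k - 2, inner)
        assert T(b, k, s + 2) == rhs
```

For example, \(b=4,k=3\): \(4[3]3 = 64\), and indeed \(((4\cdot 3)+4)\cdot 4 = 64\).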
To extend the exponent in the other case, we can perform a similar trick using the \(4\)-rule for hyperoperations, i.e. that 4 is a fixed point. Set \(E_k(s):=h_s(k)=2+_sk\)
\[\begin{align}E_k(s+2):&=h_{s+2}(k)\\
&=h_{s+1}^{k-3}h_{s+2}(3)&& 3\leq k\\
&=h_{s+1}^{k-3}h_{s+1}h_{s+2}(2)\\
&=h_{s+1}^{k-3}h_{s+1}(4) && \textrm{"four-rule" equation for} \,\, s\in\mathbb N\\
&=h_{s+1}^{k-3}h_{s}^{4-k}h_{s+1}(k)&& k\leq 4 \\
&= h_{s+1}^{k-3}h_s^{4-k}E_{k}(s+1)
\end{align}\]
In this case we are less lucky: we can only have \(k=3\) or \(k=4\).
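The base-2 "four-rule" recursion can be checked the same way for the two admissible exponents. A minimal sketch under the same integer-rank convention; helper names are mine:

```python
# Integer-rank check of the base-2 "four-rule" recursion derived above:
#   E_k(s+2) = h_{s+1}^{k-3} h_s^{4-k} E_k(s+1),   k = 3 or 4,
# where h_s(n) = 2 +_s n.
def H(s, b, n):
    if s == 1:
        return b + n
    if s == 2:
        return b * n
    if n == 1:
        return b
    return H(s - 1, b, H(s, b, n - 1))

def E(k, s):
    # E_k(s) := 2 +_s k
    return H(s, 2, k)

def iterate(f, m, x):
    for _ in range(m):
        x = f(x)
    return x

for k in (3, 4):
    for s in (1, 2):
        inner = iterate(lambda n: H(s, 2, n), 4 - k, E(k, s + 1))
        rhs = iterate(lambda n: H(s + 1, 2, n), k - 3, inner)
        assert E(k, s + 2) == rhs
```

E.g. \(k=4,s=2\): \(E_4(4)=2[4]4=65536\), and the right side is \(2^{2[3]4}=2^{16}\).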
Note: There should be some hidden structure underneath this that generates all these iterated compositions... but idk what the ultimate source is. I have a feeling that all the identities of this kind are important to improve our understanding of the behaviour of possible rank-extensions in a neighborhood of the positive integers (bigger than 1).
Wtf is going to happen if you apply your Ramanujan black magic or your Infinite iteration/limit-trick to these outer compositions? Genuine non-integer ranks?
Because, if this is the case, I have a bunch of iterated compositions (outer and inner) that I've derived for Goodstein's H.Os that we can work on.
MSE MphLee
Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)
S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)
Posts: 1,214
Threads: 126
Joined: Dec 2010
Woooh, very interesting! Those are very nice-looking identities. The trouble I'm having is that none of my computations will work outside of the region \(0 \le \Re(s) \le 2\). So any of the iteration formulas that go beyond that, I wouldn't be able to compute and verify. To do that you would first have to solve for \(f(z) = 2 <s+1> z\), and take the iteration \(f^{\circ z}\) to get \(<s+2>\), and that would be far too computationally expensive.
I can test \(2 <s> y\) for all \(y > 1\), and verify the functional equation. The trouble is, I don't know how to glue these solutions together. The monodromy theorem guarantees there is a function, but I can't for the life of me find a way to compute it. All the trouble now is describing the surface \(\boldsymbol{\varphi} = (\varphi_1,\varphi_2,\varphi_3)\). I can generate all the solutions (which produces a surface in \(\mathbb{C}^3\); it has complex dimension 2 for \(s,y\) fixed), but I don't know which is the correct solution we want. Well, I mean, I do know the correct solution, but I don't know how to test for it. So for the moment I've been trying to derive a differential equation.
I'm trying to see if there's an obvious formula for the first term in \(w\):
\[
2<s>(2<s+1> y+w) = 2 <s+1> (y+1+w)\\
\]
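At the integer ranks, with \(w=0\), this is just the classical Goodstein recursion, which holds exactly. A minimal sketch, assuming \(<0>\) = +, \(<1>\) = ×, \(<2>\) = ^ as in the thread's convention (the `ops` list is my own shorthand):

```python
# The thread's integer ranks: <0> = +, <1> = *, <2> = ^.
ops = [lambda x, y: x + y,
       lambda x, y: x * y,
       lambda x, y: x ** y]

# At w = 0 the equation is 2<s>(2<s+1>y) = 2<s+1>(y+1):
for s in (0, 1):  # both s and s+1 must be integer ranks here
    for y in (1.0, 2.0, 3.5):
        lhs = ops[s](2, ops[s + 1](2, y))
        rhs = ops[s + 1](2, y + 1)
        assert abs(lhs - rhs) < 1e-12
```

The open question is precisely the first-order behavior in \(w\) away from these integer ranks.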
I think there might be a first-term relation which can help us determine the solution we want. That's mostly what I'm trying to track by what I wrote above. I'm just scratching my head trying to figure out how to ensure:
\[
\varphi_2(2<s+1>y+w,s) = \varphi_1(y+w,s)\\
\]
Because we require this identity to paste all the solutions together. The trouble is, this isn't a local equation, so it doesn't help us for \(s,w \approx 0\). If we had a local equation it'd be easy to test in pari-gp; which is why I'm looking for a differential-equation test instead. Jesus this is frustrating... There's gotta be something I'm missing to test this better.
I am very sure that this method is not equivalent to the Ramanujan method, for a simple reason. For \(\Re(s) \ge 2\), we are given \(\alpha \uparrow^s 1 = \alpha\), which, by the identity theorem, must then hold for \(0 \le \Re(s) \le 2\). This doesn't happen for \(<s>\). So instantly they are in disagreement. This is, again, why I didn't use the uparrow notation: to make sure there was no confusion.
As to the iterated/infinite composition, I'm not sure where this would stand. As far as I'm concerned, zoologically speaking, these are two very different animals, from two entirely different genera; they just both happen to live in tetrationland, lol.
Posts: 374
Threads: 30
Joined: May 2013
05/02/2022, 07:25 PM
(This post was last modified: 05/02/2022, 07:32 PM by MphLee.)
Please, forgive my ignorance and my silly points and observations... but really, I don't have the time to put in the required effort.
First, I'd bet $100 that those equations do not hold for non-integer ranks... but they do, in some way, put a constraint on the rank as it infinitesimally approaches integer ranks. So I hoped there could be a way to use those identities to force something on the ranks in \((0,\epsilon) \cup (1-\epsilon,1+\epsilon)\cup (2-\epsilon,2)\) - or regard this as a sum of open balls in \(\mathbb C\).
Secondly, the \(2<s+1>2=2<s>2=4\) identity is just one special case of the identities I showed above. That's why I believe it is not going to hold for non-integer ranks. In other words, since the rule breaks at rank zero, I'd expect an oscillating behavior of that function, hitting the value \(4\) only at integers.
If I'm not misunderstanding completely what you are doing, that would rule out your choice of fixing one coordinate of your surface to zero... that would pump the complex dimension to three, probably making it impossible to parametrize with just two complex parameters.
I'm sure you have good reason to claim that those identities are not going to help with your pasting-solution works, so I trust you.
Last thing: I cited infinite composition because those recurrence relations I showed you have as solutions finite iterated compositions, as I made evident... just as if they were Gamma-function-like recurrence relations, but in the rank variable!! So, maybe, composition calculus could jump in, find the limit of those compositions, and then obtain non-integer ranks.
ps: just ignore my remark about the surface... I went back re-reading again what you wrote about it... and I'm not sure I know what I'm talking about... srry.
MSE MphLee
Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)
S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)
Posts: 1,214
Threads: 126
Joined: Dec 2010
05/02/2022, 09:40 PM
(This post was last modified: 05/03/2022, 05:44 AM by JmsNxn.)
(05/02/2022, 07:25 PM)MphLee Wrote: First, I'd bet $100 that those equations do not hold for non-integer ranks... but they do, in some way, put a constraint on the rank as it infinitesimally approaches integer ranks. So I hoped there could be a way to use those identities to force something on the ranks in \((0,\epsilon) \cup (1-\epsilon,1+\epsilon)\cup (2-\epsilon,2)\) - or regard this as a sum of open balls in \(\mathbb C\).
You are precisely correct. By a sum of open balls, you are referring to the monodromy theorem. This is guaranteed for non-integer ranks, and there's at least one solution.
We can choose \(2 <s> 2 \neq 4\), but that takes this case from a 1-variable surface to a 2-variable surface, so we're still perfectly fine. If you assume that it's 4, then you are assuming that \(\varphi_2 = 0\) in the equation:
\[
2 <s>_{\varphi_1} \left(2 <s+1>_{\varphi_2} 2\right) = 2 <s+1>_{\varphi_3} 3\\
\]
Then we only have one free variable, either \(\varphi_1\) or \(\varphi_3\), as they are bound to each other by a weird logarithmic equation. I wanted to set \(2 <s> 2 = 4\) because I wanted to reduce the equation by assuming that \(\varphi_2 = 0\).
I know what you're saying about the 0th rank not working, but I'm not concerning myself with successorship at all. I only care about making a holomorphic solution for \(0 \le \Re(s) \le 2\). My presumption is that along \((-\infty,0)\) we will have a branch cut in \(s\). So at best, we'll have holomorphy on \(\mathbb{C}\setminus(-\infty,0]\). And so the discontinuity at the 0th rank is perfectly fine in my terms.
As to whether this equation works or not, the monodromy theorem will take care of all the trouble.
\[
2<s>_{\varphi_1}(2 <s+1>_{\varphi_2} 2) = 2 <s+1>_{\varphi_3} 3
\]
The implicit function theorem ensures these three functions always exist in \(s\). The derivatives are non-zero, and we have existence of points. This means we have a bunch of open balls in \(s\), and in the \(\varphi_i\). This means we can paste together these open balls using the monodromy theorem. This gives multiple functions \(\boldsymbol{\varphi}(s) = (\varphi_1(s),\varphi_2(s),\varphi_3(s))\) for \(0 \le \Re(s) \le 1\). These functions satisfy:
\[
2<s>_{\varphi_1(s)}(2 <s+1>_{\varphi_2(s)} 2) = 2 <s+1>_{\varphi_3(s)} 3
\]
There are multiple functions which satisfy this; I can program arbitrary ones. But the trouble is which one is the right one, such that we can extend globally (switch \(2\) for a variable \(y\), and \(3\) for \(y+1\)). This is the trouble, where now we add \(\boldsymbol{\varphi}(y,s)\); the monodromy theorem ensures we can glue together all the open balls in \(y\) to make one function (though there are still many candidate functions).
So now we have a whole bunch of candidate functions \(\boldsymbol{\varphi}(y,s)\). It's actually pretty simple to program in these equations. I made a couple of tests with some dummy versions. For example, take \(y=2\), set \(\varphi_2 = 0\), and set \(\varphi_1 = 0.1\,s(1-s)\); this will perfectly satisfy the equation. It won't be generalizable, though; there's only one that is generalizable (uniqueness is part of the monodromy theorem once we add initial conditions).
We want the one such that \(\boldsymbol{\varphi}(y,0) = \boldsymbol{\varphi}(y,1) = 0\). And now we are asking that the individual terms \(\varphi_1(y,s),\varphi_2(y,s),\varphi_3(y,s)\) satisfy relations to each other:
\[
\begin{align}
\varphi_2(2 <s+1>_{\varphi_2(y,s)} y, s-1) &= \varphi_1(y,s)\\
\varphi_3(y,s) &= \varphi_2(y+1,s)\\
\end{align}
\]
This is absolutely doable... We are making 2 restrictions on a surface of complex dimension 2, while still allowing \(y,s\) to move and perturb the surface. Yes, I know. It hurts my fucking head too.
The key is to ensure that \(\varphi_1(y,1)=\varphi_2(2 <2>_{\varphi_2(y,1)} y, 0)\) as a Taylor series; the rest will take care of itself. They both equal zero, but you have to make sure the Taylor series are exactly the same.
This is essentially just asking that:
\[
\frac{d^k}{ds^k}\Big{|}_{s=1} \varphi_2(2 <s+1>_{\varphi_2(y,s)} y, s-1) = \frac{d^k}{ds^k}\Big{|}_{s=1}\varphi_1(y,s)\\
\]
From here then, if we call the single function \(\phi(y,s+1) = \varphi_2(y,s)\), and define:
\[
2<s+1>_{\phi(y,s+1)} y = 2<s+1> y\\
\]
And similarly we can define a \(\phi(y,s)\) for \(0 \le \Re(s) \le 1\), using the relations between \(\varphi_{1,2,3}\).
As to your suggestion that we would have a discontinuity: normally, with this much stuff going on, I would agree. But I'm confident this doesn't happen, because the derivatives in each \(\varphi\) are non-zero. And additionally, we have points everywhere, so there's always existence of points. So we won't have the trouble of a branching singularity, something like solving \(\exp(y) =0\). This is thanks to the fact that \(||\boldsymbol{\varphi}|| < \delta\); it's a very small perturbation, so it doesn't really affect us too much.
All of this is making my head swim. I apologize for inconsistencies in the notation; I'm still trying to figure this out. My head just keeps going in circles. I'm confident we have an implicit solution, but I'm not sure how to construct/program it. I'm certain I'm not there yet, but I'm getting close. This will probably be my project for this summer. See if I can get it going more straightforwardly.
Another way to think about it is that there are two functional equations we are requesting of the function \(\phi\). One relates \(\varphi_2\) and \(\varphi_3\), and the other \(\varphi_1\) and \(\varphi_2\). We have a surface in two complex dimensions, so two restrictions reduce us to a point. Which is what we want, as it's a single value for the given \(s,y\). The two equations are simple:
If you shift \(\varphi_2\) forward from \(y \mapsto y+1\) you get \(\varphi_3\) without this shift in \(y\).
If you nest \(\varphi_2\) twice in the specific manner I wrote, you get \(\varphi_1\).
This is no different than a matrix equation in \(3\) variables that belong to a surface of complex dimension \(2\), where we make two restrictions. As long as the points exist, as long as the equation is non-singular, you're golden. It's precisely the same thing. It's just god-awfully more difficult than matrices. lmao..
To send this home, remember: we can describe a surface \(y = x\); the moment you add one restriction, \(y = x^2\), we know the distinct points are \((x,y) = (0,0)\) and \((x,y) = (1,1)\). It's the exact same principle here. Now, move the surface by \(s\):
\[
y = s+x\\
\]
This constructs an evolving surface when only talking about \(x\) or \(y\). When we ask that \(y = x^2\), we are asking for the quadratic formula, such that \(s+x -x^2 = 0\); this gives \((x(s),y(s))\). These are unique once you add initial conditions. This is the same for higher-order polynomials. There may be 10 solutions, but they're unique up to initial conditions.
Now \(y(s)\) is reduced to \(x(s)\); so the two variables become one. And then we evolve the surface over time, where time is really complex-dimensional time in \(s\). There may be cuts. But surprise surprise, since there's always existence of points (there are always \(x\)'s and \(y\)'s and \(s\)'s satisfying this equation regardless of the other values), we always have a number at least. To confirm analyticity, just confirm that:
\[
\frac{d}{dx} \left(s+x-x^2\right) = 1-2x \neq 0\\
\]
This certainly works if we just focus on \(x\neq 1/2\). Since it's a quadratic equation, we have "two solutions" about this branching problem. They are each unique up to initial conditions though. How do we prove this? Well, you can take the unsophisticated route and just use the quadratic formula. Or you can use the monodromy theorem... Which covers everything!!!!
This is sort of the template of my argument. It's just Taylor series, but I know how to talk about analytic functions like polynomials. I'm not the best at it, but we're nearly there, Mphlee. So damn close.
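The quadratic toy example is easy to check numerically. A minimal sketch, following the branch with \(x(0)=1\) (the name `x_of_s` is mine):

```python
import math

# The branch of s + x - x^2 = 0 with x(0) = 1; the other branch has x(0) = 0.
# The branches collide at s = -1/4, where x = 1/2 and 1 - 2x = 0.
def x_of_s(s):
    return (1 + math.sqrt(1 + 4 * s)) / 2

for s in (0.0, 0.1, 0.5, 1.0, 2.0):
    x = x_of_s(s)
    assert abs(s + x - x * x) < 1e-12   # the surface equation holds
    assert abs(1 - 2 * x) > 1e-9        # nondegenerate: away from x = 1/2
    # and y(s) is then determined: y = x^2 = s + x
    assert abs(x * x - (s + x)) < 1e-12
```

Along this branch the implicit-function condition \(1-2x \neq 0\) never fails, which is exactly why the solution continues analytically in \(s\).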
I want you to know I really appreciate you engaging. I tend to forget subtle details, or misexplain, assume you know, without someone engaging me and reminding myself what I forgot to explain. I really appreciate you asking questions.
I also don't want you to think I'm writing off your equations above. They are absolutely beautiful equations. And I know what you mean: they are a priori truths on any extension. The trouble is, they don't help too much from the computational angle. To even attempt programming \(\Re(s)>3\) is a nightmare. And that's dependent on a solution for \(0 \le \Re(s) \le 2\).
I understand that we need an effective manner to describe these outer composition identities. I think solving these equations will be far more fruitful, and the above equations will fall into place.
I will also keep in mind that \(\varphi(2,2,s) \neq 0\) necessarily. I will explore \(\varphi(2,2,s) = 0\) as constant, and I will explore non-constant, to readdress the 0-rank discontinuity. Though I'm expecting successorship to be an essential singularity in 3 variables, and I have no idea how crazy it can get... Again, this is why I chose the notation \(<s>\); this is only for a bennet modifier. We'll get to higher/lower ranks when we get there. Let's just stick to gluing bennet together...
HOLY FUCKING SHIT! You're right \(\varphi(2,2,s) \neq 0\)!!!!!!
The only solution must handle the border solutions holomorphically. I was secretly dreading looking at \(x<s> e\), because \(e\) means we hit the boundary value \(\eta = e^{1/e}\). And this can cause a branching problem. THIS IS WHERE WE ASSIGN OUR \(\varphi_2 = 0\). This means, everywhere at \(e\):
\[
x<s> e = \exp^{\circ s}_{\eta}(\log_\eta^{\circ s}(x) + e)\\
\]
THAT'S WHERE THE \(\varphi\) IS CONSTANT!!!!! NOW MY GRAPHS ARE CALM AS HELL MPHLEE!!!!! YASSSSSSS!!! SO FUCKING PUMPED!!!!!!! THE BOUNDARY OF SHELL-THRON WE ASSIGN EXACT VALUES TO BENNET. HOLY FUCK!!!! THANK YOU MPHLEE!!!
veni, vidi, vici, thanks to my bro MphLee... So instead of:
\[
2<s> 2 = 4\\
\]
We are instead saying that:
\[
x<s> e = x<s>_0 e\\
\]
This is far more intrinsically tied to the eta tetration and the cheta solution than I originally imagined. YES!!!!!!!!!!!!! SO PUMPED! I SEE HOW TO SOLVE THESE EQUATIONS!!!!
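For integer ranks the boundary identity above can be checked directly, since \(\eta^e = e\). A minimal sketch, reading \(x<s>e\) as \(s\)-fold \(\log_\eta\), add \(e\), then \(s\)-fold \(\exp_\eta\), per the displayed formula (the name `bennet_at_e` is mine, not from semi_operators):

```python
import math

ETA = math.e ** (1 / math.e)  # eta = e^(1/e); note eta**e == e

def bennet_at_e(s, x):
    # x <s> e for integer s >= 0: s-fold log base eta, add e, s-fold exp base eta
    for _ in range(s):
        x = math.log(x) / math.log(ETA)
    x += math.e
    for _ in range(s):
        x = ETA ** x
    return x

x = 3.0
assert abs(bennet_at_e(0, x) - (x + math.e)) < 1e-9   # rank 0: addition
assert abs(bennet_at_e(1, x) - (x * math.e)) < 1e-9   # rank 1: multiplication
assert abs(bennet_at_e(2, x) - (x ** math.e)) < 1e-9  # rank 2: exponentiation
```

This is only the integer-rank face of the identity; the fractional iterates of \(\exp_\eta\) need the full machinery. But it shows why \(y=e\) is the natural place to pin \(\varphi = 0\): there the unmodified Bennet expression already recovers \(x+e\), \(x\cdot e\), \(x^e\) exactly.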
Posts: 1,918
Threads: 414
Joined: Feb 2009
(03/23/2022, 03:19 AM)JmsNxn Wrote: Hey everyone! Some more info dumps!
I haven't talked too much about holomorphic semi-operators for a long time. For this brief exposition I'm going to denote the following:
\[
\begin{align}
x\,<0>\,y &= x+y\\
x\,<1>\,y &= x\cdot y\\
x\,<2>\,y &= x^y\\
\end{align}
\]
Where we have the identity: \(x<k>(x<k+1>y) = x<k+1> (y+1)\). Good ol' fashioned hyper-operators.
...
First note :
In my notebook - and maybe posted here too - I found that the identity \((x<k+1>y) <k>x= x<k+1> (y+1)\) is consistent with
\[
\begin{align}
x\,<0>\,y &= x+y\\
x\,<1>\,y &= x\cdot y\\
x\,<2>\,y &= x^y\\
\end{align}
\]
Does this seem a nicer choice or not?
Why not this? Because it is slower?
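For what it's worth, the variant identity quoted above does hold at the integer ranks, just like the right-handed one. A minimal sketch, assuming \(<0>\) = +, \(<1>\) = ×, \(<2>\) = ^ as in the quoted post (the `ops` list is mine):

```python
# <0> = +, <1> = *, <2> = ^, as in the quoted posts.
ops = [lambda a, b: a + b,
       lambda a, b: a * b,
       lambda a, b: a ** b]

# Check (x <k+1> y) <k> x = x <k+1> (y+1) for the ranks where both sides are defined:
for k in (0, 1):
    for x in (2.0, 3.0):
        for y in (1.0, 2.5):
            lhs = ops[k](ops[k + 1](x, y), x)   # (x <k+1> y) <k> x
            rhs = ops[k + 1](x, y + 1)          # x <k+1> (y+1)
            assert abs(lhs - rhs) < 1e-9
```

At \(k=0\) this is \(xy+x = x(y+1)\), and at \(k=1\) it is \(x^y\cdot x = x^{y+1}\), so the two recursions only diverge at higher ranks.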
Second note: I'm going to ignore holomorphy for now because I do not believe that. I might explain later (when I have more time), and maybe I already did in the past.
Third note:
Which fractional iteration for exp and log? There are many, and they do not agree on the 2 real fixpoints (or, in the case of base \(e^{1/e}\), a single real fixpoint that is not analytic!).
These problems and choices are not simultaneously addressed, picked, and motivated.
Fourth note : why non-commutative ?
Fifth note:
You are basically looking for a function f_1(a,b,s) and "find" the solution f_2(a, b, s, f_3(a,b,s)), where f_3(a,b,s) is unknown, undefined and unproven analytic.
That feels like solving the quintic polynomial as exp(a_0) + f(a_0,a_2,a_3,a_4,a_5) for some unknown f...
Forgive my parody.
I could continue, but I respect you.
regards
tommy1729
Posts: 374
Threads: 30
Joined: May 2013
05/03/2022, 01:20 PM
(This post was last modified: 05/03/2022, 01:21 PM by MphLee.)
Hi Tommy!
I'm not going to discuss the details of James approach, but I can say something about some lateral matters.
That identity you found is long known as the recursion law for right-associative hyperoperations. The oldest reference I'm aware of dates back to 1953: Giuseppe Arcidiacono, "On the extension of arithmetic operations".
It is a variant of Goodstein's original definition and they both agree for ranks 1, 2 and 3.
That said, idk if it is a nicer choice. Ofc, if James's strategy of pasting together the Bennet spectrum of operations interpolating +, x and ^ works for extending Goodstein's (left-associative), it will work too for Arcidiacono's, because the ranks beyond are extended by a piecewise method...
Third note: maybe I'm mistaken, but James is using his beta method algorithm for computing fractional iterations of exp; this post was born exactly from that.
Fourth note: sorry, but I missed that part... where is he talking about non-commutativity?
MSE MphLee
Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)
S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)
Posts: 1,918
Threads: 414
Joined: Feb 2009
05/04/2022, 12:25 PM
(This post was last modified: 05/04/2022, 12:26 PM by tommy1729.)
(03/23/2022, 03:19 AM)JmsNxn Wrote: Hey everyone! Some more info dumps!
I haven't talked too much about holomorphic semi-operators for a long time. For this brief exposition I'm going to denote the following:
\[
\begin{align}
x\,<0>\,y &= x+y\\
x\,<1>\,y &= x\cdot y\\
x\,<2>\,y &= x^y\\
\end{align}
\]
Where we have the identity: \(x<k>(x<k+1>y) = x<k+1>(y+1)\). Good ol' fashioned hyper-operators.
Now there exists a really old thread on here where, using fatou.gp, you could get really close to a solution of semi-operators. Let \(b \in \mathfrak{S}\) be in the interior of the Shell-Thron region. Let \(\exp/\log\) be base \(b\). Let \(\omega\) be the fixed point assigned.
Then:
\[
x <s> \omega = \exp^{\circ s}(\log^{\circ s}(x) + \omega)\\
\]
Which is holomorphic and allows us to solve for all \(\omega \pm k\) for all \(s\). Now, the idea is that we have to solve implicit equations in log. I haven't had much familiarity with this since I've been investigating \(\beta\), but it should be doable on the following domain. If you take all forward and backward iterates of \(\omega \pm k\) for \(\omega \in \mathcal{W}\) (which is the domain of the fixed points), you should be able to construct an implicit solution to the equation:
\[
x <s> (x<s+1>y) = x <s+1> (y+1)\\
\]
For all \(x \in \mathbb{C} \setminus \mathcal{E}\) and \(y \in \mathcal{W} + \mathbb{Z}\), where \(\mathcal{E}\) is measure zero in \(\mathbb{R}^2\).
I mean, this problem is really solved if you think of it implicitly. We are just varying \(\mu,\lambda\) until we find a solution to the above equation while we freely move \(s\). This is very fucking difficult to do. I have not done it, as this would require a good 20 pages of work, but it is definitely possible. I may come back to this, but for the moment my brain is switching to PDE/ODE territory, and this type of research is secondary.
Regards, James
Also, using \(x<k>(x<k+1>y) = x<k+1>(y+1)\)
we get, for the rank-3 operation (taking k = 2) and using that x <2> y = x^y:
x^(x<3>y) = x<3>(y+1)
and that is just tetration base x.
Nothing with base x^(1/x) or y^(1/y) or 3^(1/3).
x <3> y = x^^(y + constant)
and probably x<4> is pentation or so.
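The integer-rank version of this recursion is easy to machine-check. Below is a small sketch of my own (not code from the thread) that extends +, x, ^ upward by the right-recursion and confirms both the identity and the rank-3 tetration observation:

```python
def hyper(k, x, y):
    """Integer hyperoperations: x<0>y = x+y, x<1>y = x*y, x<2>y = x**y,
    extended to k >= 3 (integer y >= 1) via x<k+1>y = x<k>( x<k+1>(y-1) ),
    with base case x<k>1 = x."""
    if k == 0:
        return x + y
    if k == 1:
        return x * y
    if k == 2:
        return x ** y
    if y == 1:
        return x
    return hyper(k - 1, x, hyper(k, x, y - 1))

# rank 3 is tetration base x: hyper(3, 2, 4) = 2^(2^(2^2)) = 65536
```

With the base case x<3>1 = x, the constant in x <3> y = x^^(y + constant) comes out to 0 in this convention.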
regards
tommy1729
Posts: 1,214
Threads: 126
Joined: Dec 2010
05/04/2022, 10:31 PM
(This post was last modified: 05/05/2022, 01:23 AM by JmsNxn.)
(05/03/2022, 12:16 PM)tommy1729 Wrote: (03/23/2022, 03:19 AM)JmsNxn Wrote: Hey everyone! Some more info dumps!
I haven't talked too much about holomorphic semi-operators for a long time. For this brief exposition I'm going to denote the following:
\[
\begin{align}
x\,<0>\,y &= x+y\\
x\,<1>\,y &= x\cdot y\\
x\,<2>\,y &= x^y\\
\end{align}
\]
Where we have the identity: \(x<k>(x<k+1>y) = x<k+1>(y+1)\). Good ol' fashioned hyper-operators.
...
First note:
In my notebook - and maybe posted here too - I found that the identity \((x<k+1>y)<k>x = x<k+1>(y+1)\) is consistent with
\[
\begin{align}
x\,<0>\,y &= x+y\\
x\,<1>\,y &= x\cdot y\\
x\,<2>\,y &= x^y\\
\end{align}
\]
This seems a nicer choice, or not?
Why not this? Because it is slower?
Second note: I'm going to ignore holomorphy for now, because I do not believe that. I might explain later (more time), and maybe I already did in the past.
Third note:
Which fractional iteration for exp and log? There are many, and they do not agree on the 2 real fixpoints (or, in the case of base e^(1/e), a single real fixpoint that is not analytic!).
These problems and choices are not simultaneously addressed, picked and motivated.
Fourth note: why non-commutative?
Fifth note:
You are basically looking for a function f_1(a,b,s) and "find" the solution f_2(a, b, s, f_3(a,b,s)), where f_3(a,b,s) is unknown, undefined and unproven analytic.
That feels like solving the quintic polynomial as exp(a_0) + f(a_0,a_2,a_3,a_4,a_5) for some unknown f...
Forgive my parody.
I could continue, but I respect you.
regards
tommy1729
1) I don't want left-associative; who wants left-associative....
2) It is certainly holomorphic. I have infinite choices; I'm just looking for the correct choice at the moment. I have a holomorphic expansion in my hands, of which you can calculate the Taylor series...
3) I explained which choice of iteration I'm using, but I did play pretty fast and loose. We don't have to use beta (though it's certainly preferable; since I'm sticking to a rough outline at the moment, I'll give the rough layout). I initially thought it'd be easier, but it's computationally exhausting, so instead I chose the alternative. For example, take \(\sqrt{2}\); we have two tetration functions: one which tends to \(2\) at infinity, and one which tends to infinity. For simplicity I'll stick to the real line, which is really the only place this comparison is viable at the moment.
\[
\exp^{\circ s}(x)\\
\]
For \(x \in (-\infty,4)\), use the iteration about \(2\). Similarly, for \(x \in (2,\infty)\), use the iteration about \(4\). These functions agree on the line \((2,4)\) (up to small discrepancies), and therefore there's an iteration \(\exp^{\circ s}(x)\) for \(x \in \mathbb{R}\) (of course, depending on \(s\), we may have a branching problem). There. There's my iteration. It works perfectly fine, for now. The same principle holds for all \(y > 1\). Even when \(\eta = e^{1/e}\), you can use the unbounded or the bounded solution; it doesn't matter, it still produces a holomorphic iteration... up to the branching problem at both fixed points, which I'm not worried about momentarily.
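Here is a numeric sketch of the attracting half of this pasting (my own code, not the fatou.gp or beta machinery): the regular (Koenigs) iteration of \(f(x) = \sqrt{2}^x\) about the fixed point \(2\), computed by pushing \(x\) toward the fixed point, scaling the deviation by the multiplier \(\lambda^s\) with \(\lambda = f'(2) = \ln 2\), and pulling back out with logarithms.

```python
import math

B = math.sqrt(2)          # base sqrt(2): B**x fixes 2 (attracting) and 4 (repelling)
LAM = 2 * math.log(B)     # multiplier f'(2) = ln(2) ~ 0.693 < 1

def frac_iter(s, x, n=40):
    """Regular iteration of f(x) = B**x about the fixed point 2, valid for x
    in the attracting basin (x < 4):
    f^s(x) ~ f^(-n)( 2 + LAM**s * (f^n(x) - 2) ) for large n."""
    for _ in range(n):                    # push toward the fixed point
        x = B ** x
    x = 2 + LAM ** s * (x - 2)            # scale the deviation by lambda^s
    for _ in range(n):                    # pull back with n logarithms base B
        x = math.log(x) / math.log(B)
    return x
```

For s = 1 this recovers \(B^x\) to several digits, and the half-iterate composes with itself back to \(B^x\); the repelling iteration about \(4\) can be sketched the same way with the roles of exp and log swapped.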
The trouble is when you start talking about \(s\); both of these iterations have different periods. I'm not too worried about that at this moment. And that's why the \(\beta\) method is superior in my eyes: you can fix the period for both iterations.
I understand your worry with how haphazardly I'm writing the iteration of the exponential, but I'm not too worried at this point in time. I need to write the details out much better, but the core is still there. I'm concerning myself with \(\varphi\) first, and then I'll concern myself with a perfectly constructed holomorphic function:
\[
\exp^{\circ s}(x)\\
\]
I'm not too concerned about the branching problem at \(2\) and at \(4\) right now, but this would be the easy part compared to \(\varphi\). As this would just imply that there are branches in \(x,y,s\) in the expression \(x <s> y\). Which I'm fully aware of. I would not be surprised if they disappear though. Honestly, if you stick to the repelling case primarily, and focus solely on \(x > \eta\) and \(y \ge e\), you will never encounter this problem, so long as you stick to the repelling iteration.
This is what I mean when I say I am not looking forward to the case \(y = e\) and neighborhoods of it. I wouldn't be surprised if we end up with one solution \(x <s> y\) for \(y \ge e\) and another solution \(x <s> y\) for \(1/e \le y \le e\), and for \(y < 1/e\) we'd get some unholy complex mess of a solution.
But if it makes you feel better.
Assume that \(y > e\), and assume \(\exp^{\circ s}_{y^{1/y}}(x)\) is the repelling iteration about the repelling fixed point \(y\). Which is holomorphic for \(x > y\). With a branching problem at \(y\). Then we're only concerned with checking the equation when \(x > y+1\). I think this really castrates the problem though, as we're going to have to talk about branching at some point. Might as well get a head start... So I'm doing that by only referring to local holomorphy, which is always true. My iteration \(\exp^{\circ s}(x)\) is locally holomorphic everywhere, except at \(x = 4\).
You can make this better too, instead of \(x > y\) all you need is \(\log^{\circ s}(x) > 0\) for \(0 \le \Re(s) \le 2\). Which essentially reduces to \(x > y^{1/y}\). So if we're setting \(y \ge e\), let's stick to \(x > \eta\) and \(y > e\). Then the expression:
\[
\exp^{\circ s}_{y^{1/y}}\left(\log^{\circ s}_{y^{1/y}}(x) + y\right)\\
\]
Is analytic. If it makes you happy, let's stick to that. And now we're solving for \(\varphi \approx 0\), which lets us solve Goodstein's equation.
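At the integer heights this expression is easy to verify directly (a numeric sketch of my own, under the assumption \(y > e\), \(x > \eta\)): with base \(b = y^{1/y}\) we have \(b^y = y\), so height \(s=0\) returns \(x+y\) and height \(s=1\) returns \(x\cdot y\).

```python
import math

def semi_op(s, x, y):
    """x <s> y at integer heights s >= 0 via exp_b^s(log_b^s(x) + y),
    with b = y^(1/y). Fractional s is the hard part discussed above and
    is not attempted here."""
    b = y ** (1 / y)
    v = x
    for _ in range(s):
        v = math.log(v) / math.log(b)   # log base y^(1/y), s times
    v = v + y
    for _ in range(s):
        v = b ** v                      # exp base y^(1/y), s times
    return v

# s = 0: x + y;  s = 1: b^(log_b(x) + y) = x * b^y = x * y
```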
4)
Of course it's non-commutative. I think you'd be hard pressed to genuinely believe that there'd be a commutative operator between addition and multiplication that is still holomorphic. I mean come on...
\[
a <s> b = b <s> a\\
\]
Well then... \(a^b \neq b^a\), done, contradiction. Can't be commutative. Sure an odd operator may be commutative, but good luck finding one.
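For comparison, a commutative interpolation between + and x does exist, namely the Bennet operations \(x \oplus_s y = \exp^{\circ s}(\log^{\circ s}(x) + \log^{\circ s}(y))\) (natural base), but it visibly fails to be exponentiation at rank 2, which is exactly the point. A quick sketch (my own code):

```python
import math

def bennet(s, x, y):
    """Bennet's commutative rank-s operation at integer s:
    exp^s( log^s(x) + log^s(y) ), natural base; needs x, y large
    enough that the iterated logs are defined."""
    def logs(v):
        for _ in range(s):
            v = math.log(v)
        return v
    v = logs(x) + logs(y)
    for _ in range(s):
        v = math.exp(v)
    return v

# rank 0 is +, rank 1 is *, but rank 2 is exp(ln x * ln y), not x^y
```

So rank 2 gives \(e^{\ln x \ln y}\): commutative and holomorphic, but not \(x^y\); an interpolation hitting \(x^y\) at rank 2 is forced to be non-commutative.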
5)
Well, I'm not sure what this means, so I won't take it as a slight.
I'm still roughing out how to do this, but locally everything is kosher. The trouble I'm having is making sure they can be pasted together properly using the monodromy theorem.
6)
Also, your parody is pretty much exactly what I'm doing. But I'm not trying to solve in radicals the roots of a quintic polynomial. I'm just trying to describe the surface and achieve a Taylor expansion, which is perfectly possible (up to branching, of course).