Posts: 4
Threads: 2
Joined: Jul 2019
Hello! I'm a 21-year-old amateur mathematician who was homeschooled until the age of 16 and has never set foot in a public school or college (yet)... everything I know about math I learned either from my father (an electrical engineer) or from my own studies. I'm a hyper-creative type, so it's hard for me to hold myself to studying one specific subfield thoroughly and getting grounded in the basics - instead I keep coming up with crazy ideas and trying to figure out how to make them work (and usually getting told all my ideas are useless / meaningless / incomprehensible by "real" mathematicians on reddit...)
I discovered this forum while looking for information on fractional hyperoperations; that's also how I discovered zeration and the "controversy", so to speak, around it. I am now studying hyperoperations a bit to see what would be required to axiomatically derive the properties of the different variants of zeration and of the higher operations. I have some extremely vague suspicions about how this will turn out, but nothing consequential yet. (And yes, I've read all the threads I can find on the subject here, but I should probably read them again multiple times!)
Besides that, other things I've played with, which are less relevant to this forum, are: a mixed-radix number system with place values in Sylvester's sequence (minus one), whose digits are themselves entire numbers of one fewer digit, and which thus has a tree structure rather than a linear one... *another* number system based on golden ratio base, but with the only legal digits being 1 and -1 (so that zero is an infinite sequence of 1,-1,-1)... a logic of directed multigraphs whose arrows can sort of "travel along" one another to make inferences, which has interesting similarities to Linear Logic... etc.
Basically I'm prone to inventing weird stuff and not knowing how to actually prove anything about it.
Anyway, I look forward to getting to know you all, and maybe you'll be able to help me learn more about Tetration and the other hyper-operations (and your attempts to make Zeration and fractional ops work). I don't know any analysis yet, except the vague general ideas of differential and integral calculus, so I can't really follow a lot of the theorizing about tetration yet :3 but I hope to learn.
Posts: 1,631
Threads: 107
Joined: Aug 2007
Hi Syzithryx, welcome to the forum. For me zeration is the increment. But yes, I remember fiery discussions on this board with Gianfranco; there are surely other opinions about zeration. On the topic of fractional ranks of hyperoperations, I don't remember much being said here. Do you have an approach to that?
Posts: 4
Threads: 2
Joined: Jul 2019
07/18/2019, 05:13 PM
(This post was last modified: 07/18/2019, 05:49 PM by Syzithryx.)
(07/18/2019, 04:05 PM)bo198214 Wrote: Hi Syzithryx, welcome to the forum. For me zeration is the increment. But yes, I remember fiery discussions on this board with Gianfranco; there are surely other opinions about zeration. On the topic of fractional ranks of hyperoperations, I don't remember much being said here. Do you have an approach to that?
My particular view on zeration is that a ∘ b = max(a,b)+1, the variant which Cesco Reale used - I just finished his paper, and he came to many of the same conclusions as I have. His way of producing the inverse operation, by extending the reals with "stigma-reals", is slightly different from what I would have done, though it may be better! I was thinking that given a ∘ b = c < (b+1), the resulting a must then be greater than b but have a successor less than b's successor, so my version of "stigma numbers" would have had ςx>y for all real x,y - but Reale decided instead to have ςx>y iff x<y. His method may actually be more "symmetrical" than mine; I'm not sure.
Either way, though, I think it makes more sense to define zeration as a truly binary operation relying on both of its arguments, rather than as a unary operation that just takes the successor of the right argument. This definition also obeys the Mother Law, properly generalized - here are two equivalent versions, one basing the definition of an operation on the next lower operation, the other on the next higher:
a [n] b = a [n-1] (a [n] (b - 1)) OR a [n] b = (a [n] (b - 1)) [n-1] a
a [n] b = a [n+1] ((b [n+1]\ a) + 1) OR a [n] b = b [n+1] ((a [n+1]\ b) + 1)
Notice that OR. This statement is true for all n≥1 - in particular, it doesn't matter which of the two possibilities you choose for n=1 or n=2, because addition and multiplication are commutative. It DOES matter which you choose for n=0 or n≥3, but zeration can still be made commutative by taking zerate(a,b) = max(a,b)+1.
Though note, there are other operations that also obey this law, such as min(a,b)+1, or even {floor(a+b) is odd: a+1; floor(a+b) is even: b+1}! So, the choice of exactly which pseudo-zeration is the "correct" one still depends on adding other identities it must obey, and I think given all the things Reale proved in his paper, it seems clear max(a,b)+1 is the most useful. (Note also btw that my new version of the Mother Law, which is more permissive, *still* doesn't accept Rubtsov and Romerio's variant!)
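The downward form of the Mother Law can be checked numerically for the max-based zeration. Here is a minimal Python sketch (the function names are mine, not from the post), building addition out of max(a,b)+1 with base case a [1] 0 = a:

```python
def zeration(a, b):
    # The candidate zeration from the post: a ∘ b = max(a, b) + 1.
    return max(a, b) + 1

def addition_from_zeration(a, b):
    # Downward form of the Mother Law: a [1] b = a [0] (a [1] (b - 1)),
    # with base case a [1] 0 = a.  Integer b >= 0 is assumed here.
    result = a
    for _ in range(b):
        result = zeration(a, result)
    return result

# The recursion built on max(a, b) + 1 reproduces ordinary addition:
for a in range(-3, 4):
    for b in range(7):
        assert addition_from_zeration(a, b) == a + b
```

The key step is that once the running value reaches a or more, max(a, ·) always picks the running value, so each application of zeration is exactly an increment.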
As for intermediate / fractional ops, I have thought of various approaches. My initial concept relied on using averages. For instance, with 1.5ation, the arithmetic average of (a+b) and (ab) obviously in some sense "favors" addition, because it's summing them before dividing by two; but the geometric average "favors" multiplication because it multiplies them before taking the square root; so properly we want to go right between those two averages.
So, define two functions: f(a,b) = (a+b)/2, and g(a,b) = ²√(ab). Then make an iterative sequence: c₀ = a+b, d₀ = ab; c_i = f(c_(i-1), d_(i-1)); d_i = g(c_(i-1), d_(i-1)). Then c_i and d_i should get closer together with each iteration and converge to a single value, which is then defined to be the 1.5ation of a and b; but unfortunately I don't know enough math to actually *prove* that, or to find any kind of closed expression for the limit.
Another way of doing 1.5ation, which probably wouldn't give the same values (but of course, who knows), is something I came up with yesterday while looking at graphs of y=x+a versus y=xa: they are both lines, which meet at some point except when a=1, so one could define a new line which bisects the angle they form (or which is exactly intermediate between them in the a=1 case), and that would be 1.5ation. I haven't really had a chance to analyze the properties of this variant yet, though - and I don't know how to extend this method to other pairs of ops.
Posts: 213
Threads: 47
Joined: Jun 2022
06/26/2022, 10:47 AM
(This post was last modified: 06/30/2022, 11:21 PM by Catullus.)
(07/18/2019, 05:13 PM)Syzithryx Wrote: As for intermediate / fractional ops, I have thought of various approaches. My initial concept relied on using averages. For instance, with 1.5ation, the arithmetic average of (a+b) and (ab) obviously in some sense "favors" addition, because it's summing them before dividing by two; but the geometric average "favors" multiplication because it multiplies them before taking the square root; so properly we want to go right between those two averages.
So, define two functions: f(a,b) = (a+b)/2, and g(a,b) = ²√(ab). Then make an iterative sequence: c₀ = a+b, d₀ = ab; c_i = f(c_(i-1), d_(i-1)); d_i = g(c_(i-1), d_(i-1)). Then c_i and d_i should get closer together with each iteration and converge on a single value, then defined to be the 1.5ation of a and b; but unfortunately I don't know enough math to actually *prove* that or find any kind of closed expression for what that actually is.
That kind of mean is the arithmetic–geometric mean.
There was a thread about using it to do 1.5ation here.
Quote:Another way of doing 1.5ation, which probably wouldn't end up with the same values (but of course, who knows), is something I came up with yesterday looking at graphs of y=x+a versus y=xa - they are both lines, which meet at some point except when a=1; so, one could define a new line which bisects the angle they form, or which is exactly intermediate between them in the a=1 case, and that would be 1.5ation. I haven't really had a chance to analyze the properties of this variant yet though - and I really don't know how to extend this method to other pairs of ops.
Then that would equal b*tan(π/8+arctan(a)/2)+a+a/(a-1)-a/(a-1)*tan(π/8+arctan(a)/2), using radians.
That definition of 1.5ation is not the same as the other one.
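As a quick numerical sanity check (a sketch added here, not part of the original post), the closed form above can be compared against the angle-bisector construction it comes from, assuming a > 0 and a ≠ 1:

```python
import math

def bisector_line(a, b):
    # Geometric construction from the quoted idea: y = x + a has slope 1,
    # y = x * a has slope a; they meet at x0 = a / (a - 1).  The line
    # bisecting the angle between them through that point has slope
    # tan(pi/8 + arctan(a)/2).  Assumes a > 0 and a != 1.
    m = math.tan(math.pi / 8 + math.atan(a) / 2)
    x0 = a / (a - 1)
    y0 = x0 + a
    return m * (b - x0) + y0

def closed_form(a, b):
    # The closed form stated above, evaluated at x = b.
    m = math.tan(math.pi / 8 + math.atan(a) / 2)
    return b * m + a + a / (a - 1) - a / (a - 1) * m

for a in (0.5, 2.0, 3.0, 5.0):
    for b in (0.0, 1.0, 2.5, 7.0):
        assert math.isclose(bisector_line(a, b), closed_form(a, b))
```

The two agree because a + a/(a-1) is exactly the y-coordinate a²/(a-1) of the intersection point.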
Please remember to stay hydrated.
ฅ(ミ⚈ ﻌ ⚈ミ)ฅ Sincerely: Catullus /ᐠ_ ꞈ _ᐟ\
Posts: 1,214
Threads: 126
Joined: Dec 2010
(07/18/2019, 05:13 PM)Syzithryx Wrote:
So, define two functions: f(a,b) = (a+b)/2, and g(a,b) = ²√(ab). Then make an iterative sequence: c₀ = a+b, d₀ = ab; c_i = f(c_(i-1), d_(i-1)); d_i = g(c_(i-1), d_(i-1)). Then c_i and d_i should get closer together with each iteration and converge on a single value, then defined to be the 1.5ation of a and b; but unfortunately I don't know enough math to actually *prove* that or find any kind of closed expression for what that actually is.
Welcome to the forum!
This approach was constructed here a long time ago! The problem with it is that it doesn't satisfy Goodstein's equation. It produces very, very smooth solutions, but it doesn't actually satisfy the necessary recursion.
So, if you were to solve this iterative formula for \(0 \le s \le 1\), with \(s = 0\) giving \(+\) and \(s = 1\) giving \(\cdot\), the result will not be extendable analytically to \(s = 1+\delta\) as \(\delta \to 0\). Which is to say, if you extend this definition to \(0 \le s \le 2\) and you ask that:
\[
a [s] (a [s+1] b) = a [s+1] (b+1)
\]
Then the solution is not analytic at \(s=1\), and is essentially a piecewise solution.
The graphs are somewhere on this forum; you should be able to find them. I forget who posted them - it was so long ago. This essentially becomes a very pretty-looking interpolation between addition and multiplication. Unfortunately, we have many, many of these, but we've yet to find one that is analytic and satisfies Goodstein's equation.
I'm excited to see what you bring to the table with fractional hyper-operators. It's a personal problem that I've been stuck on for 10 years, lmao.