"circular" operators, "circular" derivatives, and "circular" tetration.
#1
Well, before I can begin our trip into abstract algebra, I must first explain what a "circular" operator is, but to do that, I must first explain what a "hyperbolic" operator is.

Hyperbolic exponentiation is what we normally refer to as exponentiation. I've coined it "hyperbolic" because of the following:

\( \exp(x) = e^x = \cosh(x) + \sinh(x) \)
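
A quick check of the identity behind the name, straight from the exponential forms of \( \cosh \) and \( \sinh \):

\( \cosh(x) + \sinh(x) = \frac{e^x + e^{-x}}{2} + \frac{e^x - e^{-x}}{2} = e^x \)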

Therefore, hyperbolic multiplication is normal multiplication, hyperbolic exponentiation being the superfunction of hyperbolic multiplication; and likewise, hyperbolic multiplication is the superfunction of hyperbolic addition. Very simple.

Circular exponentiation is a different form of exponentiation, defined by the following:
\( \text{cxp}(x) = \delta^{\circ x} = \cos(x) + \sin(x) \) where \( \delta \) is a constant to be revealed shortly.

Interestingly, for imaginary arguments, it behaves the same as hyperbolic exponentiation, only reversed.
\( \text{cxp}(xi) = \delta^{\circ xi} = \cosh(x) + i\sinh(x) \)
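
(This follows from \( \cos(xi) = \cosh(x) \) and \( \sin(xi) = i\sinh(x) \).)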



That was all fairly simple to understand, but now we come to the part where we must define circular multiplication. It's still fairly simple, but may seem a bit awkward.

\( x \odot (x^{\circ n}) = x^{\circ n+1} \)

but seeing as \( x^{\circ n} \) is still unknown to us, we'd better stick to using \( x = \delta \):

\( \delta \odot (\delta^{\circ n}) = \delta^{\circ n+1} \)

therefore
\( \delta \odot (\delta^{\circ 0}) = \delta^{\circ 1} \)

And now, to make things simple, since \( \delta^{\circ 0} = \cos(0) + \sin(0) = 1 \), we can say that:

\( \delta \odot 1 = \delta^{\circ 1} \), and if we let \( \delta = \text{cxp}(1) = \cos(1) + \sin(1) \), then 1 becomes the identity of circular multiplication and circular exponentiation.
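
A trivial numerical sanity check of these values (a quick Python sketch; nothing deep is assumed here):

```python
import math

def cxp(t):
    """Circular exponential: cxp(t) = cos(t) + sin(t)."""
    return math.cos(t) + math.sin(t)

delta = cxp(1)     # delta = cos(1) + sin(1)
print(cxp(0))      # 1.0, i.e. delta^{o 0} = 1
print(delta)       # ~1.3818
```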

Also, we know that circular multiplication and circular exponentiation behave like normal multiplication and exponentiation:

\( a^{\circ x} \odot a^{\circ y} = a^{\circ x+y} \)
\( (a^{\circ x})^{\circ y} = a^{\circ x \cdot y} \)

And so we get the beautiful circular logarithm laws (cln is taken to mean the natural circular logarithm, which is just the circular logarithm base \( \delta \)); the principal branch always returns values in \( [0, 2\pi) \):

\( \text{cln}(x \odot y) = \text{cln}(x) + \text{cln}(y) \)
\( \text{cln}(x^{\circ y}) = y \cdot \text{cln}(x) \)

where: \( \delta^{\circ \text{cln}(x)} = x \)

And now, with these laws, defining circular multiplication is as simple as:

\( x \odot y = \delta^{\circ \text{cln}(x)} \odot \delta^{\circ \text{cln}(y)} = \delta^{\circ \text{cln}(x) + \text{cln}(y)} = \cos(\text{cln}(x) + \text{cln}(y)) + \sin(\text{cln}(x) + \text{cln}(y)) \)

Circular division, the inverse of circular multiplication, is:
\( x\oslash y = \delta^{\circ \text{cln}(x) - \text{cln}(y)} \)

And circular exponentiation is as simple as:
\( x^{\circ y} = \delta^{\circ y \cdot \text{cln}(x)} = \cos(y \cdot \text{cln}(x)) + \sin(y \cdot \text{cln}(x)) \)
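
If you want to play with these three definitions numerically, here is a minimal sketch (Python). It gets \( \text{cln} \) by brute-force bisection of cxp on the single monotone piece \( [\pi/4, 5\pi/4] \), so it only handles real arguments in \( [-\sqrt{2}, \sqrt{2}] \) and quietly fixes one branch; the helper names are mine, and this is not the series expansion I'm actually after.

```python
import math

SQRT2 = math.sqrt(2)

def cxp(t):
    """Circular exponential: cxp(t) = cos(t) + sin(t)."""
    return math.cos(t) + math.sin(t)

def cln(x):
    """Invert cxp by bisection on [pi/4, 5*pi/4], where cxp decreases
    monotonically from sqrt(2) down to -sqrt(2).  Real x only."""
    if not -SQRT2 <= x <= SQRT2:
        raise ValueError("real cln(x) needs |x| <= sqrt(2)")
    lo, hi = math.pi / 4, 5 * math.pi / 4
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if cxp(mid) > x:
            lo = mid   # cxp is decreasing here, so the preimage lies to the right
        else:
            hi = mid
    return 0.5 * (lo + hi)

def circ_mul(x, y):
    """x (.) y = delta^{o (cln x + cln y)}"""
    return cxp(cln(x) + cln(y))

def circ_div(x, y):
    """x (/) y = delta^{o (cln x - cln y)}"""
    return cxp(cln(x) - cln(y))

def circ_pow(x, y):
    """x^{o y} = delta^{o (y * cln x)}"""
    return cxp(y * cln(x))

# The product law a^{o x} (.) a^{o y} = a^{o (x+y)} checks out numerically
# as long as x*cln(a) and y*cln(a) both stay inside the chosen branch.
a, x, y = 1.0, 0.6, 0.8
print(circ_mul(circ_pow(a, x), circ_pow(a, y)))   # ~0.221
print(circ_pow(a, x + y))                         # ~0.221
print(circ_div(0.7, 0.7))                         # 1.0, i.e. delta^{o 0}
```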

And now, with all three of these defined, we come to the issue of defining circular addition. Since circular multiplication is its superfunction, we can write the equation as such:

\( x \oplus y = y \odot (x \oslash y + 1) \)
and
\( x \ominus y = y \odot (x \oslash y - 1) \)
or generally:
\( x \oplus (y \odot n) = y \odot (x \oslash y + n) \)

This gives the very strange equation that "converts" addition into circular addition:

\( a \odot (x + y) = (a \odot x) \oplus (a \odot y) \),

which also implies:
\( a \odot (x \oplus y) \neq (a \odot x) \oplus (a \odot y) \)

Circular addition has an identity \( \upsilon \); therefore:

\( a \oplus \upsilon = a \)
\( a \odot \upsilon = \upsilon \)
\( a \oslash \upsilon = \infty \)

These equations become necessary to observe when defining the circular derivative, which we shall do now.

\( \frac{c}{cx} f(x) = \lim_{h\to \upsilon} (f(x+h) \ominus f(x))\oslash h \)

Therefore:
\( \frac{c}{cx} k \odot x = \lim_{h\to \upsilon} (k \odot (x + h) \ominus k \odot x) \oslash h = \lim_{h\to \upsilon} (k \odot x \oplus k\odot h \ominus k \odot x) \oslash h = \lim_{h\to \upsilon} (k\odot h) \oslash h = k \)

and:
\( \frac{c}{cx} (f(x) \oplus g(x)) = \frac{c}{cx} f(x) \oplus \frac{c}{cx} g(x) \)

and also:
\( \frac{c}{cx} x^{\circ n} = n \odot x^{\circ n-1} \)

We'll probably find:
\( \frac{c}{cx} f(g(x)) = \frac{c}{cg} f(g(x)) \odot \frac{c}{cx} g(x) \), but I don't want to just assert it, since I cannot prove it.

Therefore, with this, we can now create a "circular" polynomial with infinitely many terms that will be its own circular derivative.

If
\( n \dagger = 1 \odot 2 \odot 3 \odot \cdots \odot (n-1) \odot n \)

and
\( \oplus \sum_{n=0}^{R} f(n) = f(0) \oplus f(1) \oplus f(2) \oplus \cdots \oplus f(R-1) \oplus f( R ) \)

then the circular series

\( J(x) = \oplus \sum_{n=0}^{\infty} x^{\circ n} \oslash n\dagger \)

should satisfy

\( \frac{c}{cx} J(x) = J(x) \)

And this is where I'm stuck and need help. If J(x) turns out to equal \( \delta^{\circ x} = \text{cxp}(x) \), I believe we may have a sort of symmetry between circular tetration and hyperbolic (i.e. normal) tetration. We should be able to, by regular iteration of the normal equation for cxp, create the superfunction of cxp, or "circular" tetration. It should be simpler to do than for exp, because cxp has a real fixpoint.
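
As a quick numerical check that the real fixpoint is really there (a small sketch; the loop below is just naive fixed-point iteration, which converges because the multiplier has absolute value below 1):

```python
import math

def cxp(t):
    return math.cos(t) + math.sin(t)

# cxp has a real fixed point p = cxp(p) near 1.26; since |cxp'(p)| < 1,
# plain fixed-point iteration homes in on it.
p = 1.0
for _ in range(200):
    p = cxp(p)

print(p)                          # ~1.2587
print(cxp(p) - p)                 # ~0, up to rounding
print(math.cos(p) - math.sin(p))  # the multiplier cxp'(p), roughly -0.64
```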

I have the equations sort of worked out, but they rely on confirmation that \( J(x) = \text{cxp}(x) \).

The only way I can properly confirm it is if I had a way of calculating \( \text{cln}(x) \). I tried using Lagrange's inversion theorem to solve for its power series, but to no avail. It's necessary that the values work for imaginary numbers as well, since the range without imaginary numbers is only \( [-\sqrt{2}, \sqrt{2}] \).

If anybody is curious, the test I'd be doing is seeing whether:
\( J(1) = \oplus \sum_{n=0}^{\infty} 1 \oslash n\dagger = \oplus \sum_{n=0}^{\infty} (n\dagger)^{\circ -1} = \cos(1) + \sin(1) \).

If it does, then we're in business, and there's a whole lot more I can post.
#2
This one can be solved via trigonometric identities. We have

\( \cos(a - b) = \cos(a) \cos(b) + \sin(a) \sin(b) \)

If we can find a value \( b \) for which \( \cos(b) = \sin(b) \), then we can use division to make the right-hand side equal your "\( \mathrm{cxp}(a) \)". Solving the equation yields \( 1 = \frac{\sin(b)}{\cos(b)} = \tan(b) \), \( b = \arctan(1) = \frac{\pi}{4} \). Then, \( \cos(b) = \sin(b) = \frac{1}{\sqrt{2}} \), and

\( \mathrm{cxp}(x) = \sqrt{2} \cos\left(x - \frac{\pi}{4}\right) \).

Then, the inverse, the "circular logarithm", is given by

\( \mathrm{cln}(x) = \arccos\left(\frac{x}{\sqrt{2}}\right) + \frac{\pi}{4} \).

Of course, since \( \mathrm{cxp}(x) \) is not injective (i.e. not "one-to-one"), then this is actually a multivalued "function" (relation). But if we choose the principal branch of \( \arccos \), then the above will range in \( \left[\frac{\pi}{4}, \frac{5\pi}{4}\right] \), and the domain is \( \left[-\sqrt{2}, \sqrt{2}\right] \) if we interpret as a real-valued function of a real number. The circular logarithm will not return in \( [0, 2\pi) \) if taken as a single-valued branch, because \( \mathrm{cxp} \) is not injective over that interval.
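
A quick numerical sketch of these two closed forms, in case anyone wants to check them (Python; nothing beyond the formulas above is assumed):

```python
import math

SQRT2 = math.sqrt(2)

def cxp(x):
    return math.cos(x) + math.sin(x)

def cxp_closed(x):
    return SQRT2 * math.cos(x - math.pi / 4)

def cln(x):
    """Principal-branch circular logarithm, real-valued for |x| <= sqrt(2)."""
    return math.acos(x / SQRT2) + math.pi / 4

# cxp(x) agrees with sqrt(2)*cos(x - pi/4) for every x
for x in (-3.0, -0.5, 0.0, 1.0, 2.7):
    assert abs(cxp(x) - cxp_closed(x)) < 1e-12

# cln is a right inverse of cxp on [-sqrt(2), sqrt(2)] ...
for x in (-1.4, -0.3, 0.0, 0.9, 1.4):
    assert abs(cxp(cln(x)) - x) < 1e-12

# ... but cln(cxp(t)) = t only when t lies in the branch [pi/4, 5*pi/4]
print(cln(cxp(1.0)))   # ~1.0         (1.0 is inside the branch)
print(cln(cxp(0.0)))   # pi/2, not 0  (0 is outside the branch)
```
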
#3
(06/23/2011, 12:58 AM)mike3 Wrote: This one can be solved via trigonometric identities. We have

\( \cos(a - b) = \cos(a) \cos(b) + \sin(a) \sin(b) \)

If we can find a value \( b \) for which \( \cos(b) = \sin(b) \), then we can use division to make the right-hand side equal your "\( \mathrm{cxp}(a) \)". Solving the equation yields \( 1 = \frac{\sin(b)}{\cos(b)} = \tan(b) \), \( b = \arctan(1) = \frac{\pi}{4} \). Then, \( \cos(b) = \sin(b) = \frac{1}{\sqrt{2}} \), and

\( \mathrm{cxp}(x) = \sqrt{2} \cos\left(x - \frac{\pi}{4}\right) \).

Then, the inverse, the "circular logarithm", is given by

\( \mathrm{cln}(x) = \arccos\left(\frac{x}{\sqrt{2}}\right) + \frac{\pi}{4} \).

Of course, since \( \mathrm{cxp}(x) \) is not injective (i.e. not "one-to-one"), then this is actually a multivalued "function" (relation). But if we choose the principal branch of \( \arccos \), then the above will range in \( \left[\frac{\pi}{4}, \frac{5\pi}{4}\right] \), and the domain is \( \left[-\sqrt{2}, \sqrt{2}\right] \) if we interpret as a real-valued function of a real number. The circular logarithm will not return in \( [0, 2\pi) \) if taken as a single-valued branch, because \( \mathrm{cxp} \) is not injective over that interval.

Wow, that's incredible how you did that. That's awesome that cxp has a closed-form expression using only cos; that means for imaginary arguments it should be purely positive. I thought it would be way harder to solve for it... I guess sometimes the answer's just so simple and right in front of you that you can't think of it.

Thanks a lot for your help. I'll try to see if this pans out to anything. :P
#4
(06/23/2011, 12:58 AM)mike3 Wrote: Then, the inverse, the "circular logarithm", is given by

\( \mathrm{cln}(x) = \arccos\left(\frac{x}{\sqrt{2}}\right) + \frac{\pi}{4} \).
You did not rationalize your denominator. Can you imagine trying to compute x/√(2) on a slide rule, without the denominator being rationalized?
Please remember to stay hydrated.
ฅ(ミ⚈ ﻌ ⚈ミ)ฅ Sincerely: Catullus /ᐠ_ ꞈ _ᐟ\
#5
this is a curious thread.

But cxp(x) satisfies the nice equation 

f ' (x) = f( - x)
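
Quick check that this holds: \( \frac{d}{dx}\left(\cos(x) + \sin(x)\right) = \cos(x) - \sin(x) = \cos(-x) + \sin(-x) = \text{cxp}(-x) \).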

now I know you like fractional derivatives so

D^z f(x) = f( (-1)^z x)

or

D^z f(x) = f( cos(pi z) x)

???

regards

tommy1729
#6
(06/18/2022, 12:43 AM)tommy1729 Wrote: this is a curious thread.

But cxp(x) satisfies the nice equation 

f ' (x) = f( - x)

now I know you like fractional derivatives so

D^z f(x) = f( (-1)^z x)

or

D^z f(x) = f( cos(pi z) x)

???

regards

tommy1729

Lol, I was like 18 years old when I made this thread, tommy. It's hot garbage!
#7
(06/18/2022, 01:22 AM)JmsNxn Wrote:
(06/18/2022, 12:43 AM)tommy1729 Wrote: this is a curious thread.

But cxp(x) satisfies the nice equation 

f ' (x) = f( - x)

now I know you like fractional derivatives so

D^z f(x) = f( (-1)^z x)

or

D^z f(x) = f( cos(pi z) x)

???

regards

tommy1729

Lol, I was like 18 years old when I made this thread, tommy. It's hot garbage!

Ah yes, but how about the fractional derivatives? :)

Is that garbage too?

Why prefer one garbage over another? :)

I'm not convinced by fractional derivatives, nor by their applications like interpolation and such.

I find it funny that you can define fractional derivatives in infinitely many ways, but they only make sense for integers imo...
#8
(06/18/2022, 11:48 PM)tommy1729 Wrote: Ah yes, but how about the fractional derivatives? :)

Is that garbage too?

Why prefer one garbage over another? :)

I'm not convinced by fractional derivatives, nor by their applications like interpolation and such.

I find it funny that you can define fractional derivatives in infinitely many ways, but they only make sense for integers imo...

Tommy, for you to so explicitly call fractional derivatives garbage shows me how little you understand about them.

Yes, you can also define exponentiation in INFINITELY many ways, but there is one which satisfies a uniqueness condition.

There's only one Mellin transform. Don't be a dunce.
#9
(06/19/2022, 08:19 PM)JmsNxn Wrote:
(06/18/2022, 11:48 PM)tommy1729 Wrote: Ah yes, but how about the fractional derivatives? :)

Is that garbage too?

Why prefer one garbage over another? :)

I'm not convinced by fractional derivatives, nor by their applications like interpolation and such.

I find it funny that you can define fractional derivatives in infinitely many ways, but they only make sense for integers imo...

Tommy, for you to so explicitly call fractional derivatives garbage shows me how little you understand about them.

Yes, you can also define exponentiation in INFINITELY many ways, but there is one which satisfies a uniqueness condition.

There's only one Mellin transform. Don't be a dunce.

What are you talking about?

Maybe I know little about fractional derivatives.

But how is it not an arbitrary thing?

What uniqueness criterion? And why that one?

I know the Mellin transform. So?

You sure you don't mean Laplace transforms?

Enlighten me.

regards

tommy1729
#10
There is ONE fractional derivative to be used for interpolation, as I've said. It's nothing more than the Mellin transform.

\[
\frac{d^{-z}}{dw^{-z}}\vartheta(w) = \frac{1}{\Gamma(z)} \int_0^\infty \vartheta(w-x)x^{z-1}\,dx\\
\]

The integral converges in a vertical strip \(0 \le \Re(z) \le b\). We assume that \(\vartheta\) is entire and has some kind of \(O(x^{-b})\) decay as \(x \to \infty\). This fractional derivative can be extended to \(-b \le \Re(z)\):

\[
\Gamma(-z)\frac{d^{z}}{dw^{z}}\vartheta(w)  = \sum_{n=0}^\infty \vartheta^{(n)}(w) \frac{(-1)^n}{n!(z-n)} + \int_1^\infty \vartheta(w-x)x^{-z-1}\,dx\\
\]

This is absolutely a unique operator, so long as you assume that \(F(z) = \frac{d^{z}}{dw^{z}}\Big{|}_{w=0}\vartheta(w) \) is in the exponential space:

\[
|F(z)| \le Ce^{\rho|\Re(z)| + \tau|\Im(z)|}\,\,\text{for}\,\,\rho,\tau \in \mathbb{R}^+\,\,\tau < \pi/2\\
\]

There then exists a correspondence of functions \(\vartheta\) which produce \(F\)'s. What I have shown is that certain functions produce \(\vartheta\), which then produce \(F\) again through this formula. It's nothing really more than Ramanujan's Master Theorem, described using fractional calculus. THIS IS UNIQUE.


NOW, what I think you are getting at is that there are uncountably many different fractional derivatives. This is true. We can iterate the derivative in uncountably many ways. BUT! There is only one way to do this such that:

\[
\frac{d^z}{dw^z} e^w = e^w\\
\]

THERE'S ONLY ONE FRACTIONAL DERIVATIVE THAT DOES THIS!

It's known as the Riemann-Liouville, or the Exponential, fractional derivative. It's also known as the Weyl fractional derivative. IT IS UNIQUE.

So it is not arbitrary to interpolate sequences using fractional calculus, if you only use this operator. It's absolutely the exact opposite of arbitrary. It's unique. It either works or it doesn't. And if it works, it's unique.

I don't know what to say, tommy. You've never liked the fractional calculus approach. But you clearly also don't understand it. And I don't know what else to say.
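
For anyone who wants to see this concretely, here is a quick numerical sketch of the eigenfunction property \( \frac{d^{-z}}{dw^{-z}} e^w = e^w \) using the integral at the top of this post (scipy is assumed; the helper names are mine):

```python
import math

from scipy.integrate import quad
from scipy.special import gamma

def frac_integral(theta, w, z):
    """(d^{-z}/dw^{-z}) theta(w) = (1/Gamma(z)) * int_0^oo theta(w - x) x^{z-1} dx,
    used here for real z > 0 and integrands that decay as x -> infinity."""
    val, _err = quad(lambda x: theta(w - x) * x**(z - 1), 0, math.inf)
    return val / gamma(z)

w = 0.3
for z in (0.5, 1.0, 1.5, 2.5):
    print(z, frac_integral(math.exp, w, z), math.exp(w))   # each z returns ~e^w = 1.3499...
```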