02/26/2023, 04:56 PM
(02/24/2023, 12:55 AM)Caleb Wrote:

(02/24/2023, 12:30 AM)tommy1729 Wrote: Apart from the idea of reflection formulas (which may or may not be a good idea)

This is a good question. Let me try to answer it by sharing my motivation in studying the series I decided to study in the post.
I want to take, for example, the prime zeta function P(s).
It is well known to have representations that converge for Re(s) > 1 or for Re(s) > 0.
There also exists a formula for Re(s) > 1/2:
Assuming the RH then \[P(s) = s \int_2^\infty \pi(x) x^{-s-1}dx = s \int_2^\infty (\pi(x)-Li(x)) x^{-s-1}dx+ L(s) \\ = s \int_2^\infty (\pi(x)-Li(x)) x^{-s-1}dx+L(2) - \log(s-1) - \int_2^s \frac{2^{1-u}-1}{u-1}du\] where the latter integrals converge and are analytic for Re(s) > 1/2 (under RH, \( \pi(x)-Li(x) = O(\sqrt{x}\log x) \), which makes the first integral converge there).
\[ Li(x) = \int_2^x \frac{dt}{\log t}\] \[L(s) = s\int_2^\infty Li(x) x^{-s-1}dx = \int_2^\infty \frac{x^{-s}}{\log x}dx \] \[ = L(2)+\int_2^s L'(u)du = L(2) - \log(s-1) - \int_2^s \frac{2^{1-u}-1}{u-1}du\]
since \[L'(s) = -\int_2^\infty x^{-s}dx = -\frac{2^{1-s}}{s-1} = -\frac{1}{s-1} - \frac{2^{1-s}-1}{s-1},\] where the last fraction is analytic at s = 1.
I guess that is clear to all here.
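As a quick numerical sanity check (my own sketch, not part of the original post): for Re(s) > 1 the prime zeta function is just \( P(s) = \sum_p p^{-s} \), so we can approximate it by truncating the sum over primes. The cutoff 10^6 is an arbitrary choice; the known value P(2) = 0.45224742... serves as the reference.

```python
def primes_up_to(n):
    """Simple sieve of Eratosthenes returning all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [i for i, is_p in enumerate(sieve) if is_p]

def prime_zeta(s, cutoff=10**6):
    """Truncated P(s) = sum over primes p of p**(-s).

    Only meaningful for Re(s) > 1, where the tail beyond the cutoff
    is tiny (roughly 1/(cutoff * log(cutoff)) for s = 2)."""
    return sum(p ** -s for p in primes_up_to(cutoff))

print(prime_zeta(2.0))  # close to the known value P(2) = 0.45224742...
```

Of course this says nothing about Re(s) <= 1; that is exactly where the representations above (and the natural boundary at Re(s) = 0) take over.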
Now the natural boundary at Re(s) = 0 is made up entirely of log singularities becoming dense.
Maybe we should distinguish between the types of natural boundaries we are getting.
I mean, for instance,
g(x) = (1 - x)(1 - x^3)(1 - x^5)(1 - x^7)...
or h(x) = 1 + x^2 + x^{2^2} + x^{2^3} + ...
have "different" natural boundaries, like accumulations of zeros.
i SAID MAYBE lol
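To make this concrete, here is a small numerical illustration (my own, not from the post) for the lacunary series h(x) = 1 + x^2 + x^{2^2} + x^{2^3} + ...: approaching any 2^k-th root of unity radially, all but finitely many terms become positive real, so the partial sums blow up. Dense blow-up points of this kind are what force the natural boundary on |x| = 1.

```python
import cmath

def h(x, terms=200):
    """Partial sum of the lacunary series h(x) = sum_{n>=0} x**(2**n)."""
    total = 0.0 + 0.0j
    for n in range(terms):
        term = x ** (2 ** n)
        total += term
        if abs(term) < 1e-18:  # for |x| < 1 the remaining terms are negligible
            break
    return total

# Approach the 4th root of unity i = exp(2*pi*i/4) radially from inside:
# for n >= 2 we have (r*i)**(2**n) = r**(2**n), a positive real term,
# so |h| grows without bound as r -> 1.
zeta4 = cmath.exp(2j * cmath.pi / 4)
for r in (0.9, 0.99, 0.999, 0.9999):
    print(r, abs(h(r * zeta4)))
```

The same argument works at every x = exp(2*pi*i*j/2^k), which is why these singular directions are dense on the circle.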
But now,
what is the value of P(s) for Re(s) < 0 ??
OR is this type of natural boundary unsuitable because it has logs instead of poles and zeros ??
AND IF UNSUITABLE , WHAT DOES THAT MEAN ?? no continuation for some but for others we do ?
I will definitely have a talk about that with my friend mick.
I want to point out that the derivative of the prime zeta has an infinite number of poles instead of logs,
and the inverse of the derivative of the prime zeta has an infinite number of zeros on Re(s) = 0.
I'm holding back on making conjectures; I'm a bit confused.
regards
tommy1729
Analytic continuation beyond natural boundaries is hard! I don't think anyone knows how to do it in general. I suspect it can't be done in general-- because I don't think it would be meaningful to analytically continue a series that has purely random coefficients, for instance. Since the problem is so hard, I'm choosing to study the easiest possible example I could come up with.
The examples I study have a really nice property. For instance, consider the following series
\[f(x)=\sum_{n=0}^\infty \frac{x^n}{1+x^n} \frac{1}{2^n}\]
This series has a natural boundary, since 1+x^n provides a dense set of poles on |x| = 1. So, it cannot be analytically continued to |x|>1. However, the series itself is still well-defined for |x|>1. In particular, I can compute f(2) by just plugging into the series. This gives me a very natural candidate for a definition of \( f(x)\) outside the boundary.
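A quick sketch of this "just plug in" computation (my own check, following the description above): for |x| > 1 the factor x^n/(1+x^n) tends to 1, so the terms behave like 2^{-n} and the series converges. At x = 2 each term reduces to 1/(1 + 2^n).

```python
def f(x, terms=200):
    """Partial sum of f(x) = sum_{n>=0} (x**n / (1 + x**n)) / 2**n.

    Converges for |x| < 1 (away from the poles at x**n = -1) AND for
    |x| > 1, even though |x| = 1 is a natural boundary."""
    return sum((x ** n / (1 + x ** n)) / 2 ** n for n in range(terms))

# At x = 2: f(2) = sum of 1/(1 + 2**n) = 1/2 + 1/3 + 1/5 + 1/9 + ...
print(f(2))  # approximately 1.2645
```

The value exists and is perfectly computable; the open question in the post is whether it deserves to be called "the" continuation.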
So, I choose to study these much easier functions, and try to analyze what sort of properties they have. My goal looks like this
\[ \text{Get a bunch of very easy examples of functions with natural boundaries } \to \]
\[\text{ Study those examples in depth, and understand the mechanism underlying how those functions behave } \to \]
\[\text{ Try to generalize that mechanism into harder functions }\]
The prime zeta function is definitely in the category of "harder functions." I don't know if logarithmic singularities will cause the continuation to behave differently than regular poles-- that's something I will only find out once I've studied the easy examples in depth!
So, it's not necessarily that the prime zeta function has properties that make it unsuitable for continuation-- it's that I haven't yet figured out the right way to do continuation beyond natural boundaries.
Falsifiability is a deductive standard of evaluation of scientific theories and hypotheses, introduced by the philosopher of science Karl Popper in his book The Logic of Scientific Discovery (1934).
Black swan theory is also interesting.
Why the f do I mention this ? ( I sound like James now lol )
Well, falsifiability is a major concept in my philosophical and moral thinking (it extends to politics and other topics, and to many debates and criticisms, but that is not the subject here).
I think Karl Popper's ideas are not respected enough these days.
But this is about math.
Proof and falsifiability are somewhat important concepts in math.
Now, are there statements made that are true or false, falsifiable or unfalsifiable?
Well maybe.
Here is the thing.
If you define things arbitrarily, meaning there were multiple possible ways,
then you basically create something like axioms, rules, tautologies, ZFC, etc.
Such things are, even when self-consistent, unfalsifiable.
This was once the criticism of summability methods:
you can define those values however you like.
AND MOST IMPORTANTLY: nobody can disprove them.
1 + 1 + 1 + ... = - sqrt (89)
sure why not.
The only " disproof " is using another way to do the summability.
or add additional rules or properties like analytic continuation.
But the problem is clear here.
This " magic "
1 + 1 + 1 + ... = - sqrt (89)
and similar was once considered bad math
" the master forbids it " is a famous quote we all know.
Ok so how does this relate here ?
mick and I created functions that " satisfy " or " should satisfy "
f( -s ) = f( s )
Now we find a natural boundary
and we get a contradiction when we use that " zeta expansion ".
So you work around that by adding residues and thereby defining the function for Re(s) < 0,
while losing the property f( -s ) = f( s ).
So we end up with
f( -s ) =/= f( s ).
HOWEVER
we could also just plug in values and then
f( s ) = f( -s )
and then criticize the zeta expansion as invalid.
YOU picked the first.
But is there proof it is better ??
Does that even make the zeta expansion valid, or is it still not valid ? ( after all, the boundary still exists ! )
If the zeta expansion is not valid , your motivation is weak.
And even if it is valid , your motivation is still weak.
And what do you do here ( with your function ) ??
YOU SAY JUST PLUG IN THE VALUE !
Now James may find that the residues cancel anyway, but still.
In fact you did not even consider the residues; you just plugged in the value.
Now I hate to use your example function against you.
And I love your ideas and posts.
I use caps and the f-word, but I'm not hostile.
It is just something that is bugging me.
If sometimes plugging in is ok and sometimes not and both choices are analytic at the same places ...
That makes me feel like I watched a magic show.
It clearly is an unfinished theory.
And that is for the cases where we already have a continuation !!!!
When we don't have one yet and use those ideas, it becomes even more ... dubious ?
And then there will be " masters who forbid it "
And critics will say it is unfalsifiable.
Analytic continuation is, however, falsifiable when the function is properly defined.
Now I know I was more optimistic in the past, but I had this issue from the start. I only express it now.
My summability method might be a way to resolve things,
then again it was intended
1) for entire functions
2) it used an interpolation, which is arguably also something " random " that the master forbids.
Also, what if there are multiple natural boundaries ??
regards
tommy1729

