Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - Xeda112358

481
Thanks! Next I plan to make a similar algorithm for the complex logarithm using the same LUTs. It's pretty similar.


EDIT: Complex log will offer a logarithm function (real and complex) as well as an arctangent function (real).

482
Overview
I was reading about the BKM algorithm and I thought it was similar to my previous attempt at making an algorithm for the complex exponential function. However, I tried implementing it and failed miserably. So I altered my previous algorithm with an idea from the BKM algorithm, and ended up with my own algorithm that works just as quickly, uses a smaller LUT, works on a wider range of values, and has a more straightforward implementation.


Why is the complex exponential function useful?
The complex exponential function can be evaluated as ##e^{x+iy}=e^{x}\left(\cos(y)+i \sin(y)\right)##. So for people who actually need a complex exponential function, this is useful, but it can also compute the real exponential function (take y=0), or real sine and cosine simultaneously (take x=0).

Goals of the algorithm
Although I only advertise my algorithm as working on the rectangle [-1,1]+[-1,1]i, it actually works on approximately (-1.754365792,1.754365792)+(-1.213427912,1.213427912)i. Feel free to use this to your advantage.
As well, since shifting and adding are easy [citation needed], this is a shift-and-add algorithm. It takes approximately n iterations for n bits of accuracy, and 8 tables of n/2 floats (versus the 18 tables suggested by the BKM algorithm).


The Algorithm
We will first give simple pseudo code, then proceed to optimize it.
As in my old algorithm and the BKM algorithm, we start by observing that ##e^{z-s}=e^{z}e^{-s}=e^{z}/e^{s}##. So if ##z-a-b-c-d-\dots=0##, then ##e^{z-a-b-c-\dots}=e^{z}/(e^{a}e^{b}e^{c}\dots)##, so ##1=e^{z}/(e^{a}e^{b}e^{c}\dots)##, and therefore ##e^{a}e^{b}e^{c}\dots=e^{z}##.

The trick is finding values for a,b,c,... so that the multiplications are "easy," as in comprising a shift and an add.

Since we are working with complex numbers, take the complex number d where the real and imaginary parts are in the set {-1,0,1}. Multiplication by those values is even more trivial than a shift and add. What we will do is take ##e^{a}=1+d2^{-n}##. Multiplication by ##2^{-n}## is a right shift by n.

If ##e^{a}=1+d2^{-n}##, then ##\log(e^{a})=\log(1+d2^{-n})##, so ##a=\log(1+d2^{-n})##. There are 9 possible values for d, so you might think, "oh no Zeda, we need 9 tables of complex numbers, which is basically 18 tables of floats," but we all know that math+programming=magic [citation not needed]. So let's look at how the complex logarithm is defined, and call the real part of d "s" and the imaginary part "t," so d=s+t*i. By the way, references to "log" imply the natural logarithm in the math world, so ##\log_{e}## or ln(), not ##\log_{10}##.
##\log(x+iy) = \log\left(\sqrt{x^{2}+y^{2}}\right)+i\tan^{-1}(y/x)= \frac{1}{2}\log(x^{2}+y^{2})+i\tan^{-1}\left(\frac{y}{x}\right)## (side note: atan has infinitely many valid outputs -- just add ##2k\pi## for integers k, but we are only interested in k=0, the principal Arg).

So our tables would be built with ##x=1+s2^{-n}##, ##y=t2^{-n}##.
Spoiler For tables for n:
s=-1, t=-1: ##\frac{1}{2}\log((1-2^{-n})^{2}+2^{-2n})-i\tan^{-1}(\frac{2^{-n}}{1-2^{-n}})##
s=-1, t=0: ##\log(1-2^{-n})##
s=-1, t=1: ##\frac{1}{2}\log((1-2^{-n})^{2}+2^{-2n})+i\tan^{-1}(\frac{2^{-n}}{1-2^{-n}})##
s=0, t=-1: ##\frac{1}{2}\log(1+2^{-2n})-i\tan^{-1}(2^{-n})##
s=0, t=0: 0
s=0, t=1: ##\frac{1}{2}\log(1+2^{-2n})+i\tan^{-1}(2^{-n})##
s=1, t=-1: ##\frac{1}{2}\log((1+2^{-n})^{2}+2^{-2n})-i\tan^{-1}(\frac{2^{-n}}{1+2^{-n}})##
s=1, t=0: ##\log(1+2^{-n})##
s=1, t=1: ##\frac{1}{2}\log((1+2^{-n})^{2}+2^{-2n})+i\tan^{-1}(\frac{2^{-n}}{1+2^{-n}})##
But there are lots of repeated values, so we actually need tables for:
Spoiler For Actual tables needed:
##\frac{1}{2}\log((1-2^{-n})^{2}+4^{-n})##
##\frac{1}{2}\log((1+2^{-n})^{2}+4^{-n})##
##\tan^{-1}(\frac{2^{-n}}{1-2^{-n}})=\tan^{-1}(\frac{1}{2^{n}-1})##
##\tan^{-1}(\frac{1}{2^{n}+1})##
##\log(1-2^{-n})##
##\log(1+2^{-n})##
##\frac{1}{2}\log(1+4^{-n})##
##\tan^{-1}(2^{-n})##
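
If it helps to see it concretely, here is a quick Python sketch (mine, not from the original post) that tabulates these eight tables. I start at n=1 because the s=-1, t=0 entry is degenerate at n=0 (log(1-2^0)=log(0)), and the list layout is just my own choice:
Code:
import math

def build_tables(n_max):
    # Eight LUTs, in the same order as the list above.
    t = [[] for _ in range(8)]
    for n in range(1, n_max):
        p = 2.0 ** -n                                  # 2^-n, so p*p = 4^-n
        t[0].append(0.5 * math.log((1 - p)**2 + p*p))  # (1/2)log((1-2^-n)^2+4^-n)
        t[1].append(0.5 * math.log((1 + p)**2 + p*p))  # (1/2)log((1+2^-n)^2+4^-n)
        t[2].append(math.atan(1.0 / (2**n - 1)))       # atan(1/(2^n-1))
        t[3].append(math.atan(1.0 / (2**n + 1)))       # atan(1/(2^n+1))
        t[4].append(math.log(1 - p))                   # log(1-2^-n)
        t[5].append(math.log(1 + p))                   # log(1+2^-n)
        t[6].append(0.5 * math.log(1 + p*p))           # (1/2)log(1+4^-n)
        t[7].append(math.atan(p))                      # atan(2^-n)
    return t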

For our purposes, we will note that to choose s and t, we will use s=sign(iPart(Re(z)*2^(m+1))), t=sign(iPart(Im(z)*2^(m+1))), where iPart() returns the integer part of the input, and sign() returns 1 if the input is positive, -1 if negative, and 0 if 0. Without further ado, the algorithm:
Code:
1.0→v
0.0→w
For k,0,n-1
    sign(x)→s
    sign(y)→t
    if |x|<2^(-k-1)
        0→s
    if |y|<2^(-k-1)
        0→t
    if s≠0 or t≠0
        v→a
        if t=0
            x-log(1+s*2^-k)→x
            v+s*v>>k→v
            w+s*w>>k→w
        else if s=0
            x-.5*log(1+4^-k)→x
            y-t*atan(2^-k)→y
            v-t*w>>k→v
            w+t*a>>k→w
        else if s=-1
            x-.5*log((1-2^-k)^2+4^-k)→x
            y-t*atan(1/(2^k-1))→y
            v-v>>k-t*w>>k→v
            w-w>>k+t*a>>k→w
        else if s=1
            x-.5*log((1+2^-k)^2+4^-k)→x
            y-t*atan(1/(2^k+1))→y
            v+v>>k-t*w>>k→v
            w+w>>k+t*a>>k→w
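
If you want to play with the recurrence before worrying about tables and fixed-point details, here is a floating-point Python sketch (my own reference version, not the original routine): it computes the logs and arctangents directly instead of using LUTs, and it skips the k=0 step so the factor ##1+d2^{-k}## can never be zero (that skip is my choice, and it shrinks the usable input range a bit):
Code:
import cmath
import math

def cexp_shift_add(z, n=40):
    # Drive the residual z toward 0 while multiplying the accumulator
    # v+iw by the matching factors (1 + d*2^-k), where d = s+ti.
    x, y = z.real, z.imag
    v, w = 1.0, 0.0                      # accumulator starts at e^0 = 1
    for k in range(1, n + 1):            # k=0 skipped so 1+d*2^-k != 0
        p = 2.0 ** -k
        s = 0 if abs(x) < p / 2 else (1 if x > 0 else -1)
        t = 0 if abs(y) < p / 2 else (1 if y > 0 else -1)
        if s == 0 and t == 0:
            continue
        re, im = 1.0 + s * p, t * p              # the factor 1 + d*2^-k
        x -= 0.5 * math.log(re * re + im * im)   # subtract Re(log(factor))
        y -= math.atan2(im, re)                  # subtract Im(log(factor))
        v, w = v * re - w * im, w * re + v * im  # (v+iw) *= factor
    return complex(v, w)

z = 0.5 + 0.5j
print(cexp_shift_add(z))  # should closely match...
print(cmath.exp(z))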

The result is v+iw to n bits of accuracy. It seems complicated, but it is basically just taking care of each case for s and t. As well, we can multiply x and y by 2 each iteration, so instead of checking the nth bit, we can check if they are less than 1. Then we would need to multiply each table entry by ##2^{k+1}##. For further optimization, we note that to 2m bits of accuracy, when k>m-1, ##\log(1+d2^{-(k+1)})\approx\frac{1}{2}\log(1+d2^{-k})##. So we could basically reuse the last table value, dividing it by 2 each iteration (from the halving identity) while also multiplying it by 2 (from the ##2^{k+1}## scaling). But wait, x/2*2=x, so we can directly reuse the last value in each table!
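Here is a quick numerical check of that halving identity (mine, not from the post); the error shrinks on the order of ##4^{-k}##, which is below the target precision once k>m-1:
Code:
import math

# For large k: log(1 + d*2^-(k+1)) ≈ 0.5*log(1 + d*2^-k), shown here with d=1.
for k in (8, 16, 24):
    a = math.log(1 + 2.0 ** -(k + 1))
    b = 0.5 * math.log(1 + 2.0 ** -k)
    print(k, a - b)  # difference is roughly 4^-k / 8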
To clean things up a bit, let's name the tables and use an index to address each value:
Spoiler For tables:

for n from 0 to m-1:
table0=##2^{n}\log((1-2^{-n})^{2}+4^{-n})##
table1=##2^{n}\log((1+2^{-n})^{2}+4^{-n})##
table2=##2^{n+1}\tan^{-1}(\frac{2^{-n}}{1-2^{-n}})=2^{n+1}\tan^{-1}(\frac{1}{2^{n}-1})##
table3=##2^{n+1}\tan^{-1}(\frac{1}{2^{n}+1})##
table4=##2^{n+1}\log(1-2^{-n})##
table5=##2^{n+1}\log(1+2^{-n})##
table6=##2^{n}\log(1+4^{-n})##
table7=##2^{n+1}\tan^{-1}(2^{-n})##
For the optimized code to 2m bits of accuracy,
Code:
1.0→v
0.0→w
For k,0,2m-1
    if k<m
        k→n
    x<<1→x
    y<<1→y
    sign(x)→s
    sign(y)→t
    if |x|<1
        0→s
    if |y|<1
        0→t
    v→a
    if s=0
        if t=-1
            x-table6[n]→x
            y+table7[n]→y
            v+w>>k→v
            w-a>>k→w
        if t=1
            x-table6[n]→x
            y-table7[n]→y
            v-w>>k→v
            w+a>>k→w
    if s=-1
        if t=-1
            x-table0[n]→x
            y+table2[n]→y
            v-v>>k+w>>k→v
            w-w>>k-a>>k→w
        if t=0
            x-table4[n]→x
            v-v>>k→v
            w-w>>k→w
        if t=1
            x-table0[n]→x
            y-table2[n]→y
            v-v>>k-w>>k→v
            w-w>>k+a>>k→w
    if s=1
        if t=-1
            x-table1[n]→x
            y+table3[n]→y
            v+v>>k+w>>k→v
            w+w>>k-a>>k→w
        if t=0
            x-table1[n]→x
            v+v>>k→v
            w+w>>k→w
        if t=1
            x-table1[n]→x
            y-table3[n]→y
            v+v>>k-w>>k→v
            w+w>>k+a>>k→w
Summary:
For floating point math with floats in base 2, arbitrary shifting is trivial (just change the exponent). So each iteration requires at most 6 trivial shifts, 6 trivial compares, and 6 non-trivial additions/subtractions. So for 2m bits of precision, the total speed cost is 12mA+epsilon, where A is the speed of an addition or subtraction, and epsilon is a small, positive number. The total space cost is 8m floats. Compared to the BKM algorithm, the speed is the same (with just a change in epsilon), but the BKM uses 18m floats, so we save 10m floats. We also work on a larger space than the BKM algorithm, which works on a trapezoidal region of the complex plane that is entirely contained in (a proper subset of) the region on which this algorithm works.

Future Plans
I plan to write this up more formally, including the proofs of why we can use m table elements for 2m bits of accuracy, and why we can just look at the signs and the nth bits of the real and imaginary components of the input to choose our d. This last bit was a difficult task for the creators of the BKM algorithm, and in their paper they mentioned that the bounds were theoretical and only as tight as they could manage. In practice they found that the bounds were loose and they suspected that they could expand them, but they couldn't prove it. I found an easy way to prove this for my algorithm, which I suspect will apply to the BKM algorithm since at this point they are nearly identical.






Gosh I hope I don't have typos, formatting issues, or errors in the algorithms I posted. I have been typing and revising this for hours .__.

483
Math and Science / Re: Approximating Tangent
« on: November 25, 2014, 12:47:51 pm »
The best part about being friends with mathematicians a gazillion times smarter than me? They look at a problem and are like, "why don't you just use stuff you learned in Calc 2?" .__. Anyways, I figured out the main stuff, but one of my professor friends made the final, "obvious" observation. The constant does go to ##\frac{4}{\pi^{2}}## and here is how it works:

Say we have a Maclaurin series of the form ##\sum\limits_{k=0}^{\infty}{c_{k}x^{2k+1}}=c_{0}x+c_{1}x^{3}+c_{2}x^{5}+...##. Then applying Horner's Method:
##c_{0}x+c_{1}x^{3}+c_{2}x^{5}+c_{3}x^{7}+c_{4}x^{9}+...##
##=c_{0}x(1+\frac{c_{1}}{c_{0}}x^{2}+\frac{c_{2}}{c_{0}}x^{4}+\frac{c_{3}}{c_{0}}x^{6}+\frac{c_{4}}{c_{0}}x^{8}+...##
##=c_{0}x(1+\frac{c_{1}}{c_{0}}x^{2}(1+\frac{c_{2}c_{0}}{c_{0}c_{1}}x^{2}+\frac{c_{3}c_{0}}{c_{0}c_{1}}x^{4}+\frac{c_{4}c_{0}}{c_{0}c_{1}}x^{6}+...##
##=c_{0}x(1+\frac{c_{1}}{c_{0}}x^{2}(1+\frac{c_{2}}{c_{1}}x^{2}+\frac{c_{3}}{c_{1}}x^{4}+\frac{c_{4}}{c_{1}}x^{6}+...##
...
##=c_{0}x(1+\frac{c_{1}}{c_{0}}x^{2}(1+\frac{c_{2}}{c_{1}}x^{2}(1+\frac{c_{3}}{c_{2}}x^{2}(1+\frac{c_{4}}{c_{3}}x^{2}(1+...##

Here is the last piece of the puzzle:
Quote
Me: So I know this converges because I am basically using the ratio test of the Maclaurin Series for tangent, which I already know converges.
Dr. B: Well Zeda, you know that the limit of the ratio test is inversely related to the radius of convergence, and you know that the radius of convergence for tangent about 0 is pi/2 before it gets wonky.
So if we were finding the radius of convergence, we would want to get r in the following:
##\lim\limits_{n\rightarrow\infty}{|\frac{c_{n+1}(x-r)^{2n+3}}{c_{n}(x-r)^{2n+1}}|}=1##
However, we know r, and we know it is centered about x=0 (because we are using a Maclaurin series), so we get:
##\lim\limits_{n\rightarrow\infty}{|\frac{c_{n+1}(0-r)^{2}}{c_{n}}|}=1##
##\lim\limits_{n\rightarrow\infty}{r^{2}|\frac{c_{n+1}}{c_{n}}|}=1##
##\lim\limits_{n\rightarrow\infty}{|\frac{c_{n+1}}{c_{n}}|}=\frac{1}{r^{2}}##
Also, it happens to be that all the coefficients in the Maclaurin series of tan(x) are positive, so getting rid of the absolute value:
##\lim\limits_{n\rightarrow\infty}{\frac{c_{n+1}}{c_{n}}}=\frac{4}{\pi^{2}}##
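
As a sanity check (my own snippet, not part of the original post), the coefficient ratios really do head toward ##\frac{4}{\pi^{2}}\approx 0.405285##; this uses the Bernoulli-number formula for the tangent coefficients quoted in the earlier post:
Code:
import sympy as sp

def tan_coeff(k):
    # c_k in tan(x) = sum of c_k * x^(2k+1), via Bernoulli numbers
    return ((-1)**k * 4**(k + 1) * (4**(k + 1) - 1)
            * sp.bernoulli(2*k + 2) / sp.factorial(2*k + 2))

for k in (1, 5, 10, 20):
    print(k, float(tan_coeff(k + 1) / tan_coeff(k)))
print(float(4 / sp.pi**2))  # ≈ 0.405285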

Yay :D For extra fun, instead of using the shorthand ##c_{n}## to refer to the coefficients, let's plug in actual values:
##\lim\limits_{n\rightarrow\infty}{-\frac{B_{2n+4}4^{n+2}(4^{n+2}-1)(2n+2)!}{B_{2n+2}4^{n+1}(4^{n+1}-1)(2n+4)!}}=\frac{4}{\pi^{2}}##
##\lim\limits_{n\rightarrow\infty}{-\frac{B_{2n+4}4(4^{n+2}-1)(2n+2)!}{B_{2n+2}(4^{n+1}-1)(2n+4)!}}=\frac{4}{\pi^{2}}##
##\lim\limits_{n\rightarrow\infty}{-4\frac{B_{2n+4}(4^{n+2}-1)}{B_{2n+2}(4^{n+1}-1)(2n+3)(2n+4)}}=\frac{4}{\pi^{2}}##
##\lim\limits_{n\rightarrow\infty}{-4\frac{B_{2n+4}4}{B_{2n+2}(2n+3)(2n+4)}}=\frac{4}{\pi^{2}}##
##\lim\limits_{n\rightarrow\infty}{-16\frac{B_{2n+4}}{B_{2n+2}(2n+3)(2n+4)}}=\frac{4}{\pi^{2}}##
##\lim\limits_{n\rightarrow\infty}{-\frac{B_{2n+4}}{B_{2n+2}(2n+3)(2n+4)}}=\frac{1}{4\pi^{2}}##
##\lim\limits_{n\rightarrow\infty}{-\frac{B_{2n+2}(2n+3)(2n+4)}{B_{2n+4}}}=4\pi^{2}##
##\lim\limits_{n\rightarrow\infty}{-\frac{B_{2n}(2n+1)(2n+2)}{B_{2n+2}}}=4\pi^{2}##


From this you can get that ##B_{2n+2}\approx -\frac{B_{2n}(2n+1)(2n+2)}{4\pi^{2}}##. And further:
##B_{2n+4}\approx -\frac{B_{2n+2}(2n+3)(2n+4)}{(2\pi)^{2}}\approx \frac{B_{2n}(2n+1)(2n+2)(2n+3)(2n+4)}{(2\pi)^{4}}##
And in general:

##B_{2n+2m}\approx \frac{B_{2n}(2n+2m)!(-1)^{m}}{(2\pi)^{2m}(2n)!}##


So let's put that into practice. Say we know ##B_{20}=-174611/330## and we want to estimate ##B_{30}##:


##B_{20+2\cdot 5}\approx \frac{B_{20}(30)!(-1)^{5}}{(2\pi)^{10}(20)!}##
##= \frac{174611}{330}\cdot\frac{(30)!}{(2\pi)^{10}(20)!}##
##= \frac{55016531182.1502685546875}{\pi^{10}}##
##B_{30}\approx 601581447.225687068979178849893094649083608778571...## according to Wolfram Alpha.  Meanwhile the actual value:
##B_{30}\approx 601580873.900642368384303868174835916771400642368...##
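
For anyone who wants to reproduce those numbers (my snippet, not from the post), sympy makes the comparison a couple of lines:
Code:
from sympy import bernoulli, factorial, pi

# B_{2n+2m} ≈ B_{2n}*(2n+2m)!*(-1)^m / ((2*pi)^(2m)*(2n)!), with 2n=20, 2m=10
est = bernoulli(20) * factorial(30) * (-1)**5 / ((2*pi)**10 * factorial(20))
print(est.evalf(30))            # ≈ 601581447.2...
print(bernoulli(30).evalf(30))  # ≈ 601580873.9...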


This is a useful identity since the Bernoulli numbers are used in many applications, but they are tough to compute efficiently.  I am pretty sure I have seen this identity before, and I've actually hypothesized it earlier in the year (in one of my previous notebooks), but it is awesome to actually prove it!

484
Math and Science / Re: Approximating Tangent
« on: November 22, 2014, 11:09:30 am »
Quote
Good job! Do you think this can have applications in z80 programming (is it faster, more memory efficient, etc.)?
This would not be the best way to compute tangent (though it is decent). For 21 bits of accuracy, there are probably much better ways :P It's just neat and if you've never seen Horner's method, it can get you thinking about math programming in the future.
Quote
I think that if you really want speed, you should use a LUT for the sine function, calculate the cosine as sine+90°, and calculate the tangent as sine/cosine. This does require a bit of memory and isn't really related to math though.

Related: I previously had no idea how the sine, cosine and tangent were actually calculated, and good job on simplifying it so much!

EDIT: while the z80 will probably not be able to calculate that at high speed for use in games, I guess it might be useful for CAS-related programs.
Yeah, LUT's are the fastest method. There are fairly efficient ways to use small LUTs to compute the fun functions (trig and exponentials, among others). Glad to show you something new!
Quote
This is really cool! I found using ries that the constant is arctan(pi)/pi (if it turns out to be a constant, which seems to be the case).

EDIT: that is probably not correct. I just saw that first and it had a tangent in it. Looking at the others, 4/(pi^2) looks like it could be promising. But those are all approximations anyway.
Yeah, 4/pi^2 looks most promising, but it might not converge to anything nice. Then again, I have lots of identities that I've derived about the Bernoulli numbers, so who knows! For example, if you sum them up to get 'C', then do C/(C-1), you get Euler's number, and if you sum the absolute value of the terms, you get a function involving cotangent!
Quote
Wow this topic is really interesting!
It's pretty funny that I'm reading this now, because it's Math-day at school today :)
Math day is best day.


I worked on this a little last night, and I will probably work on this some more. I might actually figure out the constant and I would be more inclined to believe that it has something to do with e.

485
Introduce Yourself! / Re: Hi, I'm André! :D
« on: November 20, 2014, 10:32:44 am »
Hi Zealot, welcome to Omni!
 !peanuts
People rarely know where it is from, glad to see someone that knows the game.
I'm really bad at the game, though, since I play casually (in SC2 I only play unranked).

486
Math and Science / Approximating Tangent
« on: November 20, 2014, 10:07:48 am »
Hi again! This time I was fooling around with tangent (who knows why? I don't .__.). Anyways, I was having some fun and I figured out that Horner's method can be abused to approximate tangent by logic similar to what I used to approximate arctangent: all you have to do is approximate infinitely many terms of the Maclaurin series with one simple function!


So before the fun, I'll show what Horner's method is (it's a super simple idea):
If you've never seen the identity ##\sin(x)=x-\frac{x^{3}}{3!}+\frac{x^{5}}{5!}-\frac{x^{7}}{7!}+...##, that's fine, just trust me (you'll get to that in calculus and it's crazy cool stuff). Horner's method is like what a true 31337 programmer would come up with if you want to efficiently approximate that polynomial. First, factor out the first term in the series (##x##):
##\sin(x)=x(1-\frac{x^{2}}{3!}+\frac{x^{4}}{5!}-\frac{x^{6}}{7!}+...##
Now in that inside term, factor out the ##-\frac{x^{2}}{3!}##:

##\sin(x)=x(1-\frac{x^{2}}{3!}(1-\frac{x^{2}}{4\cdot 5}+\frac{x^{4}}{4\cdot 5\cdot 6\cdot 7}-\frac{x^{6}}{4\cdot 5\cdot 6\cdot 7\cdot 8\cdot 9}+...##
Rinse, repeat:

##\sin(x)=x(1-\frac{x^{2}}{3!}(1-\frac{x^{2}}{4\cdot 5}(1-\frac{x^{2}}{6\cdot 7}(1-\frac{x^{2}}{8\cdot 9}(1-...##
So the programmer sees, "aw, shweet, I only need to compute ##x^{2}## once and reuse it a bunch of times with some constant multiplications!" So yeah, Horner's method is snazztacular, and I urge you to use it when possible.
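
In code, the nested form turns into a tiny loop. Here is a Python sketch (mine, not from the post) of the sine series evaluated with Horner's method:
Code:
import math

def sin_horner(x, terms=6):
    # sin(x) = x(1 - x^2/(2*3)(1 - x^2/(4*5)(1 - x^2/(6*7)(1 - ...))))
    x2 = x * x
    acc = 1.0
    for k in range(terms, 0, -1):  # build from the innermost factor out
        acc = 1.0 - acc * x2 / ((2*k) * (2*k + 1))
    return x * acc

print(sin_horner(1.0), math.sin(1.0))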


Now let's look at tan(x). The Maclaurin series expansion for tan(x) is uuuugly. Each term uses Bernoulli numbers which are recursively computed and are intricately linked with some really cool functions like the Riemann Zeta function. For comparison (thanks to Mathworld for the table):

##\cos(x)=\sum\limits_{k=0}^{\infty}{\frac{(-1)^{k}}{(2k)!}x^{2k}}## (simple, cute, compact)
##\sin(x)=\sum\limits_{k=0}^{\infty}{\frac{(-1)^{k}}{(2k+1)!}x^{2k+1}}## (simple, cute, compact)
##\tan(x)=\sum\limits_{k=0}^{\infty}{\frac{(-1)^{k}4^{k+1}\left(4^{k+1}-1\right)B_{2k+2}}{(2k+2)!}x^{2k+1}}## (wtf are math)
That's what it looks like to divide sine by cosine. Not pretty (well, okay, it is actually pretty gorgeous, but whatevs). So the first few terms of tan(x) are:
##\tan(x)=x+\frac{x^{3}}{3}+\frac{2x^{5}}{15}+\frac{17x^{7}}{315}+\frac{62x^{9}}{2835}+\frac{1382x^{11}}{155925}+...##
Nasty gorgeous, but applying Horner's Method, we end up with:
##\tan(x)=x(1+\frac{x^{2}}{3}(1+\frac{2x^{2}}{5}(1+\frac{17x^{2}}{42}(1+\frac{62x^{2}}{153}(1+...##

So seeing this, I realized, "wait a minute, those coefficients sure don't look like they are converging to zero or one, but something in between!" And that made me think about how ##\frac{1}{1-ax^{2}}=(1+ax^{2}(1+ax^{2}(1+ax^{2}(1+ax^{2}(1+...##. Now, I am a mathematician, but I am also a programmer so what I am about to say is blasphemous and I am sorry, but I decided to numerically evaluate and assume those constants converge. I ended up with approximately .405285. What I am saying is:

##\tan(x)\approx x(1+\frac{x^{2}}{3}(1+\frac{2x^{2}}{5}(1+\frac{17x^{2}}{42}(1+\frac{62x^{2}}{153}(1+.405285x^{2}(1+.405285x^{2}(1+.405285x^{2}(1+...##
So then I ended up with:

##\tan(x)\approx x(1+\frac{x^{2}}{3}(1+\frac{2x^{2}}{5}(1+\frac{17x^{2}}{42}(1+\frac{62x^{2}}{153}\frac{1}{1-.405285x^{2}}))))##
##\tan(x)\approx x(1+\frac{x^{2}}{3}(1+\frac{2x^{2}}{5}(1+\frac{17x^{2}}{42}(1+\frac{62x^{2}}{153-62x^{2}}))))##
##\tan(x)\approx x(1+\frac{x^{2}}{3}(1+\frac{2x^{2}}{5}(1+\frac{17x^{2}}{42}\cdot\frac{153}{153-62x^{2}})))##
It's still a bit of work to compute, but I had fun! Now to do the mathy thing and actually prove the constants converge, and maybe even find the exact value!
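
That last closed form is easy to test numerically. A quick Python sketch (mine, not from the post) comparing it against math.tan:
Code:
import math

def tan_approx(x):
    # x(1 + x^2/3(1 + 2x^2/5(1 + 17x^2/42 * 153/(153 - 62x^2))))
    x2 = x * x
    inner = 1 + (17 * x2 / 42) * 153 / (153 - 62 * x2)
    return x * (1 + (x2 / 3) * (1 + (2 * x2 / 5) * inner))

for x in (0.1, 0.5, 1.0, 1.4):
    print(x, tan_approx(x), math.tan(x))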

487
Math and Science / Re: PrettyPrinting your Math Posts
« on: November 13, 2014, 12:20:44 pm »
Good idea to post this for those who didn't know the forum supported this, those who don't know how to LaTeX, or those who know but forgot :P

Something also worth noting is the difference between those codes :
\sum_{k=1}^{n}{\frac{1}{k}}: ##\sum_{k=1}^{n}{\frac{1}{k}}##
\sum\limits_{k=1}^{n}{\frac{1}{k}}: ##\sum\limits_{k=1}^{n}{\frac{1}{k}}##
One is faster to write and doesn't kill line alignment too much, the other one is more realistic.
Ooh, thanks for that! I had a version on my old computer that worked the natural way without needing the \limits thingy.

488
TI-Nspire / Re: micropython - Python for Nspire calculators
« on: October 28, 2014, 01:28:17 pm »
Wow, this is cool! Will it work for grayscale models?

489
Math and Science / A question about zeroes of the Zeta function
« on: October 28, 2014, 01:24:11 pm »

The problem:
Is there an ##s\in \mathbb{C}## such that ##\zeta(s)=\zeta(2s)=0##? If so, what is ##\lim_{z\rightarrow s}{\frac{\zeta(s)^{2}}{\zeta(2s)}}##?



Okee dokee, I know this is getting dangerously close to the Riemann Hypothesis, but I swear that wasn't my original intent!


I was playing around with infinite sums and products and I wanted to figure out the sum form of ##\prod_{p}{\frac{1+p^{-s}}{1-p^{-s}}}##. I started by knowing that ##\prod_{p}{(1-p^{-s})}^{-1} = \zeta(s)##, and from previous work, ##\prod_{p}{\sum_{k=0}^{n-1}{(p^{-s})^{k}}} = \frac{\zeta(s)}{\zeta(ns)}##. From that, I knew ##\prod_{p}{(1+p^{-s})} = \frac{\zeta(s)}{\zeta(2s)}##, thus I know the product converges when ##\zeta(2s) \neq 0, \zeta(s) \in \mathbb{C}##. I knew convergence wasn't an issue (for the most part) so I expanded the product some (a reversal of Euler's conversion from ##\zeta(s) = \sum_{k=1}^{\infty}{k^{-s}} = \prod_{p}{(1-p^{-s})}^{-1}##) and obtained:

##\prod_{p}{\frac{1+p^{-s}}{1-p^{-s}}}= \sum_{k=1}^{\infty}{2^{f(k)}k^{-s}}##,


where ##f(k)## is the number of distinct prime factors of k. It turns out that this has been known since 1979, and after I had access to the internet, I figured out that the typical symbol used for my ##f(k)## in this context is ##\omega(k)##. So that was cool, and I could write that sum and product in terms of the zeta function as ##\frac{\zeta(s)}{\zeta(2s)}\zeta(s) = \frac{\zeta(s)^{2}}{\zeta(2s)}##, so do with that what you will (I tried to use it to find the number of prime factors of a number n, but I didn't get anywhere useful). What I decided to pursue was when this function is zero. As long as ##\zeta(s)=0, \zeta(2s)\neq 0##, we know that it is 0, but I haven't yet figured out if there is ever an instance where ##s\in \mathbb{C}, \zeta(s)=\zeta(2s)=0##. If there is such an s, then the Riemann Hypothesis is false. However, if such an s does not exist, this says nothing about the Riemann Hypothesis :P .
Spoiler For Why the existence of an s would prove RH false:
It is simple: RH says that ##\zeta(s)=0## if and only if ##Re(s)=\frac{1}{2}##. So if ##\zeta(s)=\zeta(2s)=0##, assuming RH is true, then ##Re(s)=Re(2s)=2Re(s)##, which would imply ##Re(s)=0##, a contradiction to the Riemann Hypothesis which we just assumed was true!


As well, if there is such an s, I wanted to know if it could be one of those removable discontinuity thingies.


EDIT: fixed a tag and an incorrect restatement of the Riemann Hypothesis :P

490
Math and Science / Re: Graph coloring mini challenge
« on: October 28, 2014, 12:37:52 pm »
Well, I've been wanting to get back into graph theory, so I guess this will make an excuse to look into some of my books.

491
Math and Science / PrettyPrinting your Math Posts
« on: October 28, 2014, 12:24:57 pm »
If you want to post math stuff, it is a fantastic idea to take advantage of the [tex] <expression> [/tex] tags. Here is an example:

##e^{\sqrt{\frac{-3x}{2}}}## versus e^(sqrt(-3x/2)).


The code used in the tags is called LaTeX and you can look up the fun stuff you can do with it via Google. However, for a bit of a quick start I will give the following nonsense expression and its code:


##\sqrt{3} \neq \frac{\pi^{2}}{6} = \sum_{k=1}^{\infty}{\frac{1}{k^{2}}} \gt \zeta \left(2+\epsilon\right), \epsilon \in \mathbb{R}, \epsilon>0##


The code is:
[tex]\sqrt{3} \neq \frac{\pi^{2}}{6} = \sum_{k=1}^{\infty}{\frac{1}{k^{2}}} \gt \zeta \left(2+\epsilon\right), \epsilon \in \mathbb{R}, \epsilon>0[/tex]


That might look scary, so let's break it down:
\sqrt{} will put a square root symbol over everything inside {}
\neq is the "not equals" sign. There are also \leq, \lt, \geq, \gt.
\frac{numerator}{denominator} makes an expression in fraction form.
\pi is the pi symbol
^{exponent} raises the expression to a power
= is the = symbol
\sum_{lower limit}^{upper limit}{expression} creates a sum from a lower to an upper limit (optional arguments). As a note, _{} creates a subscript and ^{} creates a superscript, if that helps to figure the stuff out.
\infty is the infinity symbol.
\gt is the greater than symbol
\zeta is the lower case zeta symbol. \Zeta is uppercase. LaTeX is case sensitive.
\left( is a left parenthesis. It gets sized according to what comes between it and its matching \right).
\in makes the "element of" symbol
\mathbb{} makes a "blackboard bold" character for certain special sets (N,Z,Q,A,R,C,H,O,S)


So for fun, here are some simpler codes and their outputs:
\left( 3^{2} \right)
##\left( 3^{2} \right)##


x_{n}
##x_{n}##


\frac{1}{k}
##\frac{1}{k}##


\sum_{k=1}^{n}{\frac{1}{k}}
##\sum_{k=1}^{n}{\frac{1}{k}}##

492
Humour and Jokes / Re: Why are tr1p1ea's still so expensive?
« on: October 14, 2014, 12:00:05 pm »
At the store where I work, I can buy a pack of 8 tr1p1eas for either $1 or $1.50 (I forget). They are cheap store-brand ones, though.

493
Other / Re: Dysphoria, a Knex Ball Machine
« on: August 15, 2014, 08:20:13 am »
I can't wait to see more of what you do with it o.o Have you ever experimented with making a machine controlled by a calculator, like through the I/O port? That's something I wanted to do when I had enough stuff!

494
Well I could do it all except the CSE stuff, but I doubt I'll ever again have time or drive :P

495
Grammer / Re: Grammer
« on: July 11, 2014, 09:58:49 am »
no o.o You mean after the ".0:" ?
