do
    rannum = random()          // generate a random 512-bit (or larger) number
    num = gcd(rannum, n)
while (num == 1 or num == n)   // break out of the loop once rannum is a multiple of one of the primes
print "Great !!!, factor found ", num
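For the record, here is what that loop looks like as an actual runnable program - just a sketch in C with GMP, using a toy semiprime (999983 * 1000003) as the modulus so that it terminates in a reasonable time. Against a real 1024-bit N the expected number of iterations is on the order of N/(P+Q), i.e. somewhere around 2^511, so this approach has no chance in practice.

#include <stdio.h>
#include <time.h>
#include <gmp.h>

int main(void)
{
    mpz_t n, rannum, g;
    gmp_randstate_t state;

    mpz_inits(n, rannum, g, NULL);
    gmp_randinit_default(state);
    gmp_randseed_ui(state, (unsigned long) time(NULL));

    /* toy semiprime N = 999983 * 1000003 - a placeholder, not a real RSA modulus */
    mpz_set_str(n, "999985999949", 10);

    do {
        mpz_urandomb(rannum, state, 64);  /* the pseudocode asks for 512 bits or more */
        mpz_gcd(g, rannum, n);
    } while (mpz_cmp_ui(g, 1) == 0 || mpz_cmp(g, n) == 0);

    gmp_printf("Great !!!, factor found %Zd\n", g);

    mpz_clears(n, rannum, g, NULL);
    gmp_randclear(state);
    return 0;
}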
I'm working on a server now... what do you guys suppose would be a good range length to begin with? I think a whole 512-bit range is a bit too much.
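For a sense of scale (the numbers below are just my own example, not anything agreed on in this thread): if a work unit covered, say, 2^40 candidates, handing out the whole 512-bit range would still take 2^471 work units. A quick GMP snippet to see that number printed out:

#include <stdio.h>
#include <gmp.h>

int main(void)
{
    mpz_t lo, hi, range, chunks;
    mpz_inits(lo, hi, range, chunks, NULL);

    mpz_ui_pow_ui(lo, 2, 511);            /* smallest 512-bit number */
    mpz_ui_pow_ui(hi, 2, 512);
    mpz_sub(range, hi, lo);               /* 2^511 candidates in the full 512-bit range */
    mpz_fdiv_q_2exp(chunks, range, 40);   /* split into work units of 2^40 candidates (arbitrary size) */

    gmp_printf("work units needed: %Zd\n", chunks);  /* prints 2^471 */

    mpz_clears(lo, hi, range, chunks, NULL);
    return 0;
}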
Um, people, remember ( http://ourl.ca/6418/129294 and other posts of mine): factoring a 1024-bit RSA key, using the best known algorithm, is a task more than three orders of magnitude harder than the state of the art (using unpublished algorithm implementations - the open source implementations aren't at that level yet, by a significant margin). IOW, it's a problem that only very few persons in the entire world would have an idea how to tackle, let alone have the resources to run to completion - we're talking about tens of thousands of cores for the sieving phase, and some pretty odd clusters and implementations for the post-processing phase...

As I wrote, some TF'ing can be done, although it is extremely unlikely to succeed. It's even OK for you to make some software similar in spirit to the PRPNet/ECMNet server+client (that's where I'd start looking, if I wanted to make client+server software for RSA TF'ing). So I'm not criticizing the idea itself. But if you're into TF'ing, then please don't make an inferior, independent reimplementation of a working wheel that was created months ago, on this very forum...

Namely, a C# TF'ing program with primality checking is a two-fold waste of CPU power - and a significant one at that, judging by the very low rate of progress indicated in various posts here (IIUC DJ's post, 3K divisions per hour... ouch !! Even 3M/hour would be bad):
* using C#, Java or any other language of the same class for this kind of very heavy number crunching is a cardinal sin. Even AOT compilation of those languages doesn't let one get down to the performance of C/C++ directly calling into highly optimized ASM-based libraries;
* the computational cost of primality-testing the candidates is extremely prohibitive, as I showed when optimizing the C program started by Tribal. Mindlessly dividing N, without any form of primality testing for P, makes the program much faster. And it can even be made slightly faster by adding several trial divisions to weed out the most obvious non-prime P (a rough sketch of what I mean follows this post). Check the post I linked to above, and the topic that it links to.

And what's more, P or Q are very unlikely to be close enough to 0xxxxx00<many zeros>00 to be vulnerable to TF. Look at the hexadecimal representation of the factors of the TI-Z80 and TI-68k RSA keys. Choosing P and Q to make a good RSA key N=P*Q is a complicated process, which must (in the RFC sense) contain checks to ensure that P and Q are not too easy to find by indirect methods: strong primality testing/proving (APRT-CLE / ECPP / AKS) for P and Q; P-1, P+1, Q-1, Q+1 each having at least one very large prime factor (i.e. P and Q are what's dubbed "strong primes"); and more.

I'm no cryptographer, and I don't know how any of the efficient integer factoring algorithms work. But after feeding the RSALS grid with more than 200 integers considered large for individuals or small groups of individuals, attending the serious sections of MersenneForum for a year and a half, and having some data about the harder factorizations done by the NFS@Home grid (which are still significantly easier than the factorization of a 768-bit RSA key), I think I have a better idea of the difficulty of factoring a 1024-bit RSA key, and of the most efficient methods to achieve that aim, than all of you do.

TL;DR: facepalm. You guys fail at implementing a decent program for the task of factoring an RSA key, fail at using it, and as a result you're wasting the vast majority of the CPU time you're devoting to it.
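To illustrate that second bullet point, here's a minimal TF sketch in C with GMP - my own illustration, not the program from the topic I linked to. The modulus is a toy semiprime (999983 * 1000003), not a real RSA key; the point is only that the inner loop does nothing but a couple of trivially cheap divisibility skips on the candidate P and then one division of N - no primality test anywhere.

#include <stdio.h>
#include <gmp.h>

int main(void)
{
    mpz_t n, p, limit;
    mpz_inits(n, p, limit, NULL);

    mpz_set_str(n, "999985999949", 10);  /* toy N = 999983 * 1000003 */
    mpz_sqrt(limit, n);                  /* only need to test P <= sqrt(N) */

    for (mpz_set_ui(p, 3); mpz_cmp(p, limit) <= 0; mpz_add_ui(p, p, 2)) {
        /* cheap weeding of obvious composites - anything heavier (a real
         * primality test) costs more than the single division it saves;
         * 3, 5 and 7 themselves would be checked once up front in a real program */
        if (mpz_divisible_ui_p(p, 3) || mpz_divisible_ui_p(p, 5)
            || mpz_divisible_ui_p(p, 7))
            continue;
        if (mpz_divisible_p(n, p)) {
            gmp_printf("factor found: %Zd\n", p);
            break;
        }
    }

    mpz_clears(n, p, limit, NULL);
    return 0;
}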
Heaven forbid anyone notify MersenneForum, especially personalities such as Professor Robert Silverman, of this topic...