I'm not quite sure that arbitrary-precision (here, 1024- / 512-bit) arithmetic has been implemented on a GPU, even though I know that trial factoring Mersenne numbers to at least ~80 bits can be done on GPUs.

But let's assume arbitrary-precision arithmetic has been implemented on a GPU, and for the sake of the argument, let's also assume that the GPU trial factoring program is a billion times faster than the CPU program (which is wildly optimistic!). It still wouldn't improve the chances of finding the factor by much: 1e154 - 1e22 ~ 1e154 >> the number of atoms in the universe >> 1e22 >> 1e13.

Reminder: trial factoring is completely impractical for our purposes, if nothing else because it requires far more operations than there are atoms in the universe.
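For concreteness, here is a minimal back-of-the-envelope sketch of those orders of magnitude in Python. It assumes trial divisors up to 2^512 (~1e154, as above), a CPU budget of roughly 1e13 candidates, and the hypothetical billion-fold GPU speedup; the ~1e80 atoms-in-the-universe figure is just the commonly quoted ballpark.

```python
# Back-of-the-envelope check of the orders of magnitude above.
# Assumptions: trial divisors up to ~2^512, a CPU that could ever test on
# the order of 1e13 candidates, and a hypothetical GPU a billion times faster.

candidates_needed = 2 ** 512          # ~1.34e154 possible trial divisors
cpu_tested        = 1e13              # rough count a CPU could ever test
gpu_tested        = cpu_tested * 1e9  # = 1e22 with the optimistic GPU speedup
atoms_in_universe = 1e80              # commonly quoted ballpark figure

print(f"candidates needed : ~{candidates_needed:.2e}")
print(f"GPU could test    : ~{gpu_tested:.2e}")
print(f"fraction covered  : ~{gpu_tested / candidates_needed:.2e}")
# The covered fraction comes out around 7e-133, i.e. the search space
# is essentially untouched even with the billion-fold speedup.
```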