#include <stdio.h>

int factorial(int n);

int main() {
    int n;
    scanf("%d", &n);                /* number of values to read */
    int i;
    for (i = 0; i < n; i++) {
        int x;
        scanf("%d", &x);
        printf("%i\n", factorial(x));
    }
    return 0;
}

int factorial(int n) {
    if (n == 0) {
        return 1;                   /* 0! = 1 */
    }
    int result = 1;
    int i;
    for (i = n; i > 0; i--) {       /* multiply n * (n-1) * ... * 1 */
        result = result * i;
    }
    return result;
}
printf("%i",factorial(x));
printf("%i\n",factorial(x));
That may or may not be faster depending on CPU design, memory bandwidth, caching, integer size, et cetera. If you have high-latency memory, BCD may well make things much faster, as the code for fiddling with the individual nibbles could be cheaper than the penalty for a cache miss.

The Z80 has instructions for helping with BCD a little, namely the DAA instruction. You might consider making an Axiom out of a generic open-source BCD library.
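For reference, here is a rough C sketch of that nibble fiddling: adding two packed-BCD bytes (two decimal digits each) by hand, doing roughly what DAA does after an ADD on the Z80. The name bcd_add is just for illustration, not taken from any existing library.

#include <stdint.h>
#include <stdio.h>

/* Add two packed-BCD bytes; each nibble holds one decimal digit. */
static uint8_t bcd_add(uint8_t a, uint8_t b, int *carry) {
    int lo = (a & 0x0F) + (b & 0x0F);
    int hi = (a >> 4) + (b >> 4);
    if (lo > 9) { lo -= 10; hi += 1; }    /* decimal-adjust low nibble  */
    *carry = 0;
    if (hi > 9) { hi -= 10; *carry = 1; } /* decimal-adjust high nibble */
    return (uint8_t)((hi << 4) | lo);
}

int main(void) {
    int carry;
    uint8_t sum = bcd_add(0x47, 0x85, &carry);  /* 47 + 85 = 132 */
    printf("%d%02X\n", carry, sum);             /* prints 132 */
    return 0;
}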
Ah, doubles will hold numbers as large as 170!, but will you get all of the decimal precision? I think not; if I remember correctly, a double in C only gives you around 16 decimal digits.
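As a quick sanity check (assuming an ordinary IEEE 754 double), float.h will tell you how many decimal digits actually survive:

#include <float.h>
#include <stdio.h>

int main(void) {
    /* On IEEE 754 doubles, DBL_MANT_DIG is 53 bits and DBL_DIG is 15,
       so only about 15-17 significant decimal digits are kept,
       while 100! alone has 158 digits. */
    printf("mantissa bits: %d\n", DBL_MANT_DIG);
    printf("guaranteed decimal digits: %d\n", DBL_DIG);
    return 0;
}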
Quote from: Builderboy on June 17, 2011, 12:25:28 am
Ah, doubles will hold numbers as large as 170!, but will you get all of the decimal precision? I think not; if I remember correctly, a double in C only gives you around 16 decimal digits.

If it's an integer data type, then yes. A double floating-point number will not have the precision, though. As for BCD... anyone who uses BCD for any reason other than ease of use should be shot. You get all the precision of a floating-point number with a little more than half the space efficiency.

If you need to operate on large numbers without math.h, then I recommend using arrays to store your numbers. 100! takes up 66 bytes in integer format, so a custom routine that mallocs an array and then applies the operations to that array will be able to represent massive numbers with no loss of precision. You'll also likely want to improve that algorithm for numbers like 100!
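Here is a minimal sketch of that array idea. It stores base-10 digits least-significant-first in a malloc'd buffer rather than the packed binary format mentioned above, purely because the multiply-and-carry loop is easier to follow; all names here are made up for the example.

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int n = 100;
    int cap = 200;                      /* 100! has 158 digits */
    unsigned char *digits = malloc(cap);
    if (!digits) return 1;
    digits[0] = 1;                      /* start with the value 1 */
    int len = 1;

    for (int k = 2; k <= n; k++) {
        /* multiply the whole digit array by k, schoolbook style */
        int carry = 0;
        for (int i = 0; i < len; i++) {
            int t = digits[i] * k + carry;
            digits[i] = t % 10;
            carry = t / 10;
        }
        while (carry && len < cap) {    /* append any leftover carry digits */
            digits[len++] = carry % 10;
            carry /= 10;
        }
    }

    for (int i = len - 1; i >= 0; i--)  /* print most significant digit first */
        putchar('0' + digits[i]);
    putchar('\n');
    free(digits);
    return 0;
}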