Author Topic: Computing Logarithms


Xeda112358
Computing Logarithms
« on: March 06, 2014, 12:45:55 pm »
One of the cooler ways of computing logarithms that I know of generates 1 bit per iteration using a lookup table and, in programming terms, add and shift operations. It works using some simple rules about logs, in particular:
##log(xy)=log(x)+log(y)##



So for example:
##log(x)=log(x\frac{3}{4}\frac{4}{3})=log(x\frac{3}{4})+log(\frac{4}{3})##


log(1)=0 in any base, so if we multiply x by enough stuff to get it close to 1, we end up with 0+stuff, and "stuff" can come from a precomputed table. Multiplying by 3/4, for example, is easy: division by a power of 2 can be done with bit shifts, so in code it is x-x>>2. If we only ever multiply by ##\frac{2^{n}-1}{2^{n}}##, then each step is an add and a shift, and we just need to add ##log(\frac{2^{n}}{2^{n}-1})## to the accumulator. Let's do some practice to find log(23):


First, get 23 into the range [1,2). If we divide by ##2^{4}=16##, we get 1.4375, so:
##log(23)=4log(2)+log(1.4375)##
Since 1.4375>4/3, if we multiply by 3/4, we will get a smaller number that is still bigger than 1. So:
##4log(2)+log(1.078125)+log(4/3)##
In turn, 1.078125<8/7, so we skip that factor.
Next, 1.078125>16/15, and 1.078125*15/16=1.0107421875, so:
##4log(2)+log(1.0107421875)+log(4/3)+log(16/15)##

Continuing this process up to 1024/1023 for 3 digits of accuracy, we get:

##4log(2)+log(1.000887081...)+log(4/3)+log(16/15)+log(128/127)+log(512/511)##
##log(23)\approx 4log(2)+log(4/3)+log(16/15)+log(128/127)+log(512/511)##


For comparison, ##log_{10}(23)\approx1.361727836##, and  ##4log_{10}(2)+log_{10}(4/3)+log_{10}(16/15)+log_{10}(128/127)+log_{10}(512/511)\approx 1.361342752##. We only wanted 3 digits of accuracy, but this is actually accurate to within 1/2596.
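In C, the whole loop might look something like this minimal sketch (the Q2.30 fixed-point format, the libm-filled table, the base-10 logs to match the comparison above, and the double accumulator are my choices for illustration; a real routine would hard-code the table constants and keep the accumulator in fixed point too):
Code: [Select]
#include <stdio.h>
#include <stdint.h>
#include <math.h>

/* Q2.30 fixed point: 1.0 == 1 << 30, so multiplying by (2^n-1)/2^n is
   exactly the integer operation x -= x >> n. */
#define FRAC 30
#define ONE  ((uint32_t)1 << FRAC)

int main(void) {
    /* table[n] = log10(2^n/(2^n-1)), filled with libm only to stay short */
    double table[FRAC + 1];
    for (int n = 2; n <= FRAC; n++)
        table[n] = log10((double)(1u << n) / ((1u << n) - 1));

    uint32_t x = (uint32_t)(1.4375 * ONE);  /* 23 = 1.4375 * 2^4 */
    double acc = 4 * log10(2.0);            /* the 4log(2) from range reduction */

    for (int n = 2; n <= FRAC; n++) {
        /* multiply by (2^n-1)/2^n while the result stays at least 1
           (the same factor can be needed twice in a row) */
        while (x - (x >> n) >= ONE) {
            x -= x >> n;
            acc += table[n];
        }
    }
    printf("%.9f vs %.9f\n", acc, log10(23.0));  /* both close to 1.3617278 */
    return 0;
}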

This is fast, but can we make it faster? (hint: You know the answer to this.)


Basically, we are comparing to 4/3, 4/3 (yes, twice: the same factor can be needed two times in a row), 8/7, 16/15, ..., but if we are computing logs in a program, especially at the assembly level, we will be working with bits. Allow me to rewrite these numbers in binary:


4/3=1.0101010101...
8/7=1.0010010010...
16/15=1.000100010001...
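In general, this is just the geometric series for ##\frac{1}{1-2^{-n}}## written out in binary, so the 1s are spaced n digits apart:
##\frac{2^{n}}{2^{n}-1}=1+2^{-n}+2^{-2n}+2^{-3n}+...##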


Do you see the pattern? On top of that, we can really just count the leading zeroes after the decimal point. If there are 5, we can multiply by 63/64, and in the worst case this is an overestimate by 1 shift (for example, 1.000001=65/64 is not bigger than 64/63, so we would actually need x*127/128 = x-x>>7 instead of x-x>>6). This lets us skip all of the comparison steps, and we don't need an LUT of these comparison values (a code sketch of this is after the example below). As a new example, let's compute log(1.890625). In binary, this is log(1.111001). There are no leading zeroes, so multiply by 3/4:
1.111001-.01111001=1.01101011
[Aside: this is a case where we multiply by 3/4 once and then have to do it again.]


Now we jump into the main iteration. There is 1 leading 0, so multiplying by 3/4, we still get a number bigger than 1:
1.01101011-.0101101011=1.0001000001
Now we have 3 leading zeroes, which suggests multiplying by 15/16, but the number happens to be smaller than 16/15 (notice it is "3 zeroes, 1, 5 zeroes, 1", whereas 16/15 is "3 zeroes, 1" repeating), so this is the worst case from before and we multiply by 31/32 instead:
1.0001000001
-.000010001000001
--------------------------
1.000001111011111

Now there are 5 leading zeroes, so we will stop here. The remainder, 1.000001111011111, is still bigger than 64/63, so we fold in that one last factor and estimate log(4/3)+log(4/3)+log(32/31)+log(64/63), which is accurate to two decimal places.
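Here is the comparison-free version as a sketch, in the same Q2.30 format as above (the __builtin_clz leading-zero count is a GCC/Clang extension that I am using as a stand-in; in assembly you would count with shifts):
Code: [Select]
#include <stdio.h>
#include <stdint.h>
#include <math.h>

#define FRAC 30                      /* Q2.30 again: 1.0 == 1 << 30 */
#define ONE  ((uint32_t)1 << FRAC)

/* log10 of v in [1,2), picking n from the leading-zero count instead of
   comparing against stored constants */
double log10_shift_add(double v) {
    uint32_t x = (uint32_t)(v * ONE);
    double acc = 0.0;
    while (x > ONE) {
        int z = __builtin_clz(x - ONE) - 2; /* zeroes after the binary point */
        int n = z + 1;                      /* first candidate shift */
        if (n <= FRAC && x - (x >> n) < ONE)
            n++;                            /* the off-by-one-shift worst case */
        if (n > FRAC)
            break;
        x -= x >> n;                        /* x *= (2^n-1)/2^n */
        acc += log10((double)(1u << n) / ((1u << n) - 1));
    }
    return acc;
}

int main(void) {
    printf("%.9f vs %.9f\n", log10_shift_add(1.890625), log10(1.890625));
    return 0;
}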


I promised faster, though. That was an improvement, but not by as much as the following. Notice that multiplying by 15/16 is the same as shifting x right 4 times and then subtracting that from x. We only do this when x looks like 1.0001abcdefgh: three leading zeroes after the point, then a 1. Shifting right by 4 reproduces that pattern four places down, so the shifted copy's leading 1 is followed by the same three zeroes and another 1, and the subtraction looks like this:
Code: [Select]
1.0001abcdefgh
-.00010001abcdefgh
But when we subtract them, we get about 1.0000abc.... The abc bits are left alone if d=1 (no borrow from the subtraction); otherwise the borrow propagates. Still, if a is set, we can expect that the next iteration is going to require multiplying by 31/32, but that won't likely change b or c, so the iteration after that depends on what b and c are.

Basically, what I am saying is this: if we have the number log(1.01abcdefg), we would multiply by 3/4, and if we stopped there, we could get a better approximation by adding a*log(8/7). If we stop at log(1.000001abcdefgh), though, there are 5 leading zeroes, so we can add to the end of this for a better approximation: a*log(128/127)+b*log(256/255)+c*log(512/511)+d*log(1024/1023)+e*log(2048/2047). Going back to an example in the previous section, we had x=1.000001111011111 with the approximation log(4/3)+log(4/3)+log(32/31)+log(64/63). If we want to guess and get pretty close to double the accuracy, we can add 1*log(128/127)+1*log(256/255)+1*log(512/511)+0*log(1024/1023)+1*log(2048/2047). Indeed, we get 4 digits of added accuracy (.2766723863 estimated vs. .2766053963... actual).
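As a sketch of that finishing step, again in Q2.30 and again leaning on __builtin_clz (the name tail_estimate and the choice to only trust bits out to position 2p-1, where p is the position of the leading fractional 1, are my interpretation of the a..e window used above):
Code: [Select]
#include <stdio.h>
#include <stdint.h>
#include <math.h>

#define FRAC 30                      /* Q2.30: 1.0 == 1 << 30 */
#define ONE  ((uint32_t)1 << FRAC)

/* Estimate the remaining log10(x) for x = 1.0...01abc... (x > ONE) straight
   from the bits: each set fractional bit k adds roughly log10(2^k/(2^k-1)).
   Bits past position 2p-1 are not trusted; that is the doubling window. */
double tail_estimate(uint32_t x) {
    int p = __builtin_clz(x - ONE) - 1; /* position of the leading fractional 1 */
    double acc = 0.0;
    for (int k = p; k <= 2 * p - 1 && k <= FRAC; k++)
        if ((x >> (FRAC - k)) & 1)      /* k-th bit after the binary point */
            acc += log10((double)(1u << k) / ((1u << k) - 1));
    return acc;
}

int main(void) {
    /* the leftover x = 1.000001111011111 from the example above; its leading
       bit contributes the log(64/63) term of the stop-estimate */
    uint32_t x = ONE + (991u << (FRAC - 15));
    double stopped = 2 * log10(4.0 / 3.0) + log10(32.0 / 31.0);
    printf("%.10f vs %.10f\n", stopped + tail_estimate(x),
           log10(1.890625));            /* ~0.2766724 vs 0.2766054 */
    return 0;
}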


We can apply this recursively to double the digits of accuracy at each iteration, but it usually overestimates, especially early on. That is easy to detect and correct when you subtract x-x>>n→x for each bit, so you can adjust there. Ideally, this is used to squeeze as much extra precision as you can out of the last iteration: if I wanted to write a logarithm routine accurate to 64 bits, I could wait until 32 or more bits were completed, and then I would have a pretty safe estimate for at least the next 31 bits, and most likely the 64th bit, without all of the x-x>>n computation.