466
Art / Re: Mockups "please say this is going to be a game"« on: February 09, 2015, 09:03:06 am »
@Nanowar and tr1p1ea: Wow, those are amazing o.o
467
ASM / Re: eZ80 Optimized Routines« on: February 06, 2015, 08:35:51 am »
For the Z80, I made floating point routines for 80-bit floats (64-bit precision) and 24-bit floats (16-bit precision). I just started working on a single-precision floating point library for the eZ80 and Z80 that conforms to the IEEE-754 standard, and it is going alright.
I've written a bunch of fixed-point and integer routines, too.
468
ASM / Re: eZ80 Optimized Routines« on: February 05, 2015, 02:17:37 pm »
GCD is the Greatest Common Divisor. So GCD(15,27) is 3, since 3 is the largest number that divides both 27 and 15. It isn't used often, but if somebody wanted to make a 16-bit rational library *cough*I should*cough* it is extremely useful.
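For illustration (a Python sketch rather than the Z80 routine), the classic Euclidean algorithm computes it:

```python
def gcd16(a, b):
    # Euclidean algorithm: repeatedly replace the pair (a, b)
    # with (b, a mod b) until the remainder hits zero.
    while b:
        a, b = b, a % b
    return a

print(gcd16(15, 27))  # 3, matching the example above
```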
469
Other / Re: Raspberry Pi 2« on: February 03, 2015, 08:34:15 am »
Yessss! I am totally getting this. I'm glad I waited.
470
ASM / Binary Search Algorithm« on: February 02, 2015, 03:00:47 pm »
Hi all, I was writing a binary search algorithm and since I cannot test it just yet, I was hoping people might be able to spot any problems or optimizations. My intent was to optimize this for the eZ80. Feel free to offer suggestions that include instructions from the eZ80 instruction set.
First, the input is a "key" to search for. It is zero-terminated, so it may be something like .db "Hello",0. The table to look it up in is comprised of indices to the actual data, so it might look like:
Code: [Select] table_start:
The actual strings do not need to be in order, just the order in which they are indexed. The strings must be zero-terminated, too.

So, passing a pointer to the keyword in DE and the table start in HL:
Code: [Select] BinarySearch:
The output should have the z flag set if a match was found, else nz. In the case of a match being found, BC is the index number, DE points to the byte after the key, and HL points to the byte after the match. If no match was found, HL and DE point to the index in front of where the match should have been, and BC points to the table data start. In any case, the c flag is reset.
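Since the assembly itself isn't reproduced above, here is a small Python model of the same idea (a sorted index table into unordered strings); the names and the (found, position) return shape are my own, not the routine's register conventions:

```python
def binary_search(strings, order, key):
    """Search for `key` using `order`, a list of indices into
    `strings` sorted so the referenced strings are in order.
    Returns (found, slot): the index-table slot on a hit, or the
    slot where the key would be inserted on a miss."""
    lo, hi = 0, len(order) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        s = strings[order[mid]]
        if s == key:
            return True, mid
        if s < key:
            lo = mid + 1
        else:
            hi = mid - 1
    return False, lo

# The strings themselves need not be stored in order;
# only the index table is sorted.
data = ["world", "abc", "hello"]
index = [1, 2, 0]            # "abc" < "hello" < "world"
print(binary_search(data, index, "hello"))  # (True, 1)
```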
471
TI Z80 / Re: SPASM-ng, now with eZ80 support!« on: January 30, 2015, 09:35:00 am »
Yay, thank you!
472
ASM / Re: eZ80 Optimized Routines« on: January 30, 2015, 09:11:02 am »
As it is only a 16-bit by 16-bit multiplication, yes, only up to 16-bit factors.
And the upper 16 bits of the result are in DE, lower 16 in BC.
473
TI Z80 / Re: SPASM-ng, now with eZ80 support!« on: January 30, 2015, 09:03:38 am »
Oh, am I able to use it on my Raspberry Pi?
474
ASM / Re: eZ80 Optimized Routines« on: January 30, 2015, 09:02:00 am »
No, the eZ80 has two modes, ADL mode (24-bit) and Z80 (16-bit). In Z80 mode, certain instructions are one or two cycles faster, so it is beneficial for me to use that mode since I don't use any ADL-specific instructions. As well, add hl,hl sets the c flag on 16-bit overflow in Z80 mode, which I needed, but doesn't set it in ADL mode (it has to overflow 24 bits). Since there is no easy way to access the top bits of HL, it would have to be performed in Z80 mode, making it take even more clock cycles.
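For reference, a 16-bit multiply built from 8-bit products can be modeled in a few lines of Python. This is a sketch of the partial-product scheme that an 8x8 multiply like MLT enables, not a transcription of the register-level routine:

```python
def mul16(x, y):
    # Split each 16-bit factor into high and low bytes.
    xh, xl = x >> 8, x & 0xFF
    yh, yl = y >> 8, y & 0xFF
    # Four 8x8->16 partial products, shifted into place.
    result = (xh * yh << 16) + (xh * yl << 8) + (xl * yh << 8) + xl * yl
    # Return as (upper 16 bits, lower 16 bits), like DE and BC above.
    return result >> 16, result & 0xFFFF

print(mul16(0xFFFF, 0xFFFF))  # (0xFFFE, 0x0001)
```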
The full 32-bit result is returned.
475
ASM / eZ80 Optimized Routines« on: January 29, 2015, 09:25:53 am »
EDIT: Routines so far:
mul16 mul32 gcd16 sqrt16 sqrt24 rand24 setbuffer hl*a/255 div16 prng24 atan8 [link]

Now that the eZ80 has been confirmed as the new processor in the next line of TI calculators, I thought it would be fun to start making optimized routines in preparation! We can take advantage of the 8-bit multiplication instruction for multiplication, of course. As a note, I did not find Karatsuba multiplication to be faster (it was 66 to 76 clock cycles).

mul16 optimized
Code: [Select] mul16:
Notice that the instruction timings are on the right side. MLT is always 6 cycles and can be used in Z80 or ADL mode; all they did was include it in the extended instructions. In this case, I use Z80 mode to take advantage of some of the math, and as a result I also save 3 t-states over ADL mode, which would require that add hl,bc be done in Z80 mode as "add.s hl,bc" (2 cycles instead of one), and the pushes/pops would require 4cc instead of 3cc each. So, who wants to have fun?

EDIT: 5-Feb-15
mul32 optimized
Code: [Select] mul32:
gcd16 optimized
Code: [Select] GCD16:
sqrt16 optimized
Code: [Select] sqrt16:
rand24 optimized
Code: [Select] Rand24:
Set Buffer optimized
Code: [Select] setBuf:

EDIT: 9-Feb-15
HL*A/255 (can be used for division by 3, 5, 15, 17, 51, and 85, among others)
This one performs A*HL/255. Be warned that this does not work on certain boundary values. For example, A=85 would imply division by 3, but if you input an HL divisible by 3, you get one less than the actual result. So 9*85/255 should return 3, but this routine returns 2 instead.
Code: [Select] fastMul_Ndiv255:
HL*A/255 "fixed"
Modifying this to correct the rounding issue is a pain in terms of the speed hit and size hit:
Code: [Select] fastMul_Ndiv255_fix:
HL*A/255 Rounded
However, rounding it comes at little cost and works well:
Code: [Select] fastMul_Ndiv255_round:

sqrt24 optimized
Finally, a routine that expects ADL mode instead of Z80 mode!
This is an integer square root routine, but it can be used for 8.8 fixed-point numbers by copying the fixed-point number to HL as 8.16 where the lower 8 bits are zero (or if you have extended precision from a previous calculation, feel free to use that). Then the square root is 4.8.
Code: [Select] sqrt24:
It is huge, and I believe it is less than 240cc.

div16 optimized
Code: [Select] div16:

prng24 optimized
This is a lot faster and smaller than the rand24 routine. I'm also pretty sure this routine wins on unpredictability, too, and it has a huge cycle length.
Code: [Select] prng24:
Spoiler For template for Zeda because she is lazy:
476
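The assembly for the HL*A/255 routines isn't shown above, but one common shift-and-add approximation of x/255 reproduces both the boundary bug and the cheap rounding fix described there. This is my assumption about the underlying idea, not a transcription of the posted routines:

```python
def mul_div255_trunc(a, hl):
    # Approximate a*hl/255 as (p + (p >> 8)) >> 8.
    # Off by one on exact multiples: 9*85/255 gives 2, not 3.
    p = a * hl
    return (p + (p >> 8)) >> 8

def mul_div255_round(a, hl):
    # Adding 128 up front rounds to nearest and fixes those cases.
    p = a * hl + 128
    return (p + (p >> 8)) >> 8

print(mul_div255_trunc(85, 9))  # 2 (the boundary bug from the post)
print(mul_div255_round(85, 9))  # 3
```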
TI Z80 / Re: Figuring out what's in front of you.« on: January 18, 2015, 04:16:51 pm »
So do you have the coordinates of the objects?
477
Math and Science / Re: Quadratic and Faster Convergence for Division« on: December 24, 2014, 10:26:23 am »
When I first learned about Goldschmidt division (the case a=2), my first thought was to generalize it. Of course, after I did that, I researched whether it had already been done, and it has. I even think it is known that a=3 is the most optimal case, judging by this.
EDIT: And now I found a paper that mentions an even more efficient way of doing it. It finds the inverse of D using one fewer mul at each iteration, then you use a final multiplication to do N*(1/D). For the curious:
Code: [Select] r=1 (or some approximation of 1/D)
The last step is carried out finitely. The number of 'x' terms determines how quickly it converges.
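The paper's exact pseudocode is elided above, but the reciprocal-then-multiply structure it describes can be sketched with the standard Newton-Raphson reciprocal iteration (an assumption on my part, not a transcription):

```python
def divide(n, d, iterations=6):
    # Assumes d has been range-reduced into (0.5, 1],
    # so r = 1 is a usable starting approximation of 1/d.
    assert 0.5 < d <= 1.0
    r = 1.0
    for _ in range(iterations):
        r = r * (2.0 - d * r)    # two muls per iteration, quadratic
    return n * r                 # one final mul: N * (1/D)

print(divide(0.765625, 0.609375))  # close to 49/39 = 1.2564102564...
```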
478
Math and Science / Quadratic and Faster Convergence for Division« on: December 24, 2014, 02:12:45 am »
EDIT: See the necro-update below-- it provides a much more efficient algorithm, even 1 multiply better than most suggested implementations of Newton-Raphson division
In the study of numerical algorithms, quadratic convergence refers to an iterative algorithm that approximates a function where each iteration doubles the digits of accuracy. In this post, I will present an algorithm for division that can double, triple, quadruple, etc. the number of digits of accuracy at each iteration. I will then show that the optimal algorithm is the one that offers cubic convergence (digits of accuracy tripled).

What does it mean to multiply the number of correct digits by a? Suppose that we have a sequence ##x_{0},x_{1},x_{2},x_{3},...## and suppose that ##x_{n}\rightarrow c##. Then by the definition of convergence, ##\lim\limits_{n\rightarrow\infty}{|x_{n}-c|}=0##. You should always think of |x-y| as a "distance" between two points, and in this case, that distance is the error of our approximations, ##x_{n}##. If the number of correct digits is multiplied by a, then that means ##\log(|x_{n+1}-c|)=a\cdot\log|x_{n}-c|##, or rewritten using logarithm tricks, ##|x_{n+1}-c|=|x_{n}-c|^{a}##. This assumes that the error is initially less than 1.

The Algorithm
The algorithm is very simple, but to break it down we need to look at some "obvious" bits of math. Firstly, ##\frac{N}{D}=\frac{N\cdot c}{D\cdot c}##, and secondly, all real numbers can be written in the form ##x=y\cdot 2^{m}, y\in(.5,1]##. You may have said "duh" at the first statement, but if you need convincing of the second, it basically says that if you divide or multiply a number by 2 enough times, then it will eventually land between 1/2 and 1. Or shown by example: ##2^{11}<3767<2^{12}##, so dividing it all by ##2^{12}##, we get ##.5<3767/4096<1##. All real numbers can be bounded by consecutive powers of 2, therefore they can all be written in the above form. The final piece of the puzzle is to recognize that ##\frac{N}{1}=N##. What we will do is recursively multiply D by some constant c at each iteration in order to drive it closer to 1.
We use range reduction techniques to get D on (.5,1] (multiply both N and D by the power of 2 that gets D in that range). Then what we want is to choose c so that ##|1-D_{n}\cdot c|= |1-D_{n}|^{a}##. If ##0\leq D_{n}\leq 1##, then we have ##1-D_{n}\cdot c = (1-D_{n})^{a}##. When a is a natural number greater than 1 (if it is 1, then there is no convergence), we have the following:
##1-D_{n}\cdot c = \sum\limits_{k=0}^{a}{{a \choose k}(-D_{n})^{k}}##
##D_{n}\cdot c = 1-\sum\limits_{k=0}^{a}{{a \choose k}(-D_{n})^{k}}##
##D_{n}\cdot c = 1-(1+\sum\limits_{k=1}^{a}{{a \choose k}(-D_{n})^{k}})##
##D_{n}\cdot c = -\sum\limits_{k=1}^{a}{{a \choose k}(-D_{n})^{k}}##
##D_{n}\cdot c = {a \choose 1}D_{n}-{a \choose 2}D_{n}^{2}+{a \choose 3}D_{n}^{3}-\cdots##
##c = {a \choose 1}-{a \choose 2}D_{n}+{a \choose 3}D_{n}^{2}-\cdots##
Using a technique similar to Horner's Method, we can obtain:
##c = {a \choose 1}-D_{n}({a \choose 2}-D_{n}({a \choose 3}-D_{n}({a \choose 4}-D_{n}\cdots##
Then the selection for c requires a-2 multiplications and a-1 subtractions. If we take ##N_{n+1}=N_{n}\cdot c## and ##D_{n+1}=D_{n}\cdot c##, then the number of digits of accuracy is multiplied by a using a total of a multiplications and a-1 subtractions.

Example
Let's say we want quadratic convergence. That is, a=2. Then ##c_{n+1}=2-D_{n}##. That's right-- all you have to do is multiply by 2 minus the denominator at each iteration, and if you started with 1 digit of accuracy, you next get 2 digits, then 4, then 8, 16, 32, ....

Let's perform 49/39. Divide numerator and denominator by ##2^{6}=64## and we get .765625/.609375.
First iteration: c=2-D=1.390625, Nc=1.064697266, Dc=.8474121094
Second iteration: c=1.152587891, Nc=1.227157176, Dc=.9767169356
Next iterations in the form {c,N,D}:
{1.023283064,1.255729155,.9994578989}
{1.000542101,1.256409887,.9999997061}
{1.000000294,1.256410256,.9999999999}
{1.000000294,1.256410256,1.000000000}
And in fact, 49/39 is 1.256410256 according to my calc.
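The a=2 iteration is easy to check in a few lines of Python (a direct transcription of the update rule, with names of my choosing):

```python
n, d = 49 / 64, 39 / 64          # range-reduce 49/39 so d is in (.5, 1]
for _ in range(6):
    c = 2.0 - d                  # quadratic convergence: c = 2 - D
    n, d = n * c, d * c          # scale N and D by the same c
    print(c, n, d)
print(n)                         # converges to 49/39 = 1.2564102564...
```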
Can we find an optimal choice for a?
The only variable over which we have control is a, which determines how quickly the algorithm converges. Therefore we should find an optimal choice of a. In this case, "optimal" means getting the most accuracy for the least work (computation time). First, ##1-D_{0}<.5=2^{-1}##. So after n iterations, we have at least ##a^{n}## bits of accuracy, and since ##D_{0}## may get arbitrarily close to .5, this is the best lower bound on the number of bits of accuracy achieved for n iterations of a randomly chosen D. As well, at n iterations we need ##a\cdot n## multiplications, and the subtractions are treated as trivial. Then in order to get m bits of accuracy, we need roughly ##a\log_{a}(m)## multiplications. We want to minimize this function of a, where a is an integer greater than 1. To do that, we find when ##0=\frac{d}{da}(a\log_{a}(m))##:
##0=\frac{d}{da}(a\log_{a}(m))##
##0=\frac{d}{da}(a\frac{\log(m)}{\log(a)})##
##0=\frac{d}{da}(\frac{a}{\log(a)})##
##0=\frac{1}{\log(a)}-\frac{1}{\log^{2}(a)}##
Since a>1:
##0=1-\frac{1}{\log(a)}##
##1=\frac{1}{\log(a)}##
##\log(a)=1##
##a=e##
However, e is not an integer, but it is bounded by 2 and 3, so check which is smaller: 2/log(2) or 3/log(3). We find that a=3 is the optimal value that achieves the most accuracy for the least work.

Example:
Our algorithm body is:
Code: [Select] c=3-D*(3-D)
Then 49/39=.765625/.609375 and {c,N,D}=:
{1.543212891,1.181522369,.9403953552}
{1.063157358,1.256144201,.9997882418}
{1.000211803,1.256410256,1.000000000}

Conclusion
In reality, there is a specific M so that for all m>M, a=3 always requires less computational power than a=2. Before that cutoff, there are values for which they require the same amount of work, or the cost is even in favor of a=2. For example, 32768 bits of accuracy can be achieved with the same amount of work either way. The cutoff appears to be m=2^24, which is pretty gosh darn huge (16777216 bits of accuracy is over 5 million digits).
479
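The scheme for general a can be sketched using the Horner form of c derived in the post above (Python, with hypothetical names; for a=3 the inner loop reduces to exactly c=3-D*(3-D)):

```python
from math import comb

def goldschmidt(n, d, a=3, iterations=4):
    # d must be range-reduced into (0.5, 1] first.
    assert 0.5 < d <= 1.0
    for _ in range(iterations):
        # Horner form: c = C(a,1) - D*(C(a,2) - D*(C(a,3) - ...))
        c = comb(a, a)
        for k in range(a - 1, 0, -1):
            c = comb(a, k) - d * c
        n, d = n * c, d * c
    return n

print(goldschmidt(49 / 64, 39 / 64))  # approximately 49/39 = 1.2564102564...
```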
Math and Science / Re: Efficient Computation of the Complex Exponential Function« on: December 08, 2014, 01:52:53 pm »
So for the actual, cleaned up code:
First, take the following tables:
##L1[k]:=2^{k}\log((1-2^{-k})^{2}+4^{-k})##
##L2[k]:=2^{k+1}\log(1-2^{-k})##
##L3[k]:=2^{k}\log(1+4^{-k})##
##L4[k]:=2^{k}\log((1+2^{-k})^{2}+4^{-k})##
##L5[k]:=2^{k+1}\log(1+2^{-k})##
##A1[k]:=2^{k+1}\arctan(\frac{1}{2^{k}-1})##
##A2[k]:=2^{k+1}\arctan(2^{-k})##
##A3[k]:=2^{k+1}\arctan(\frac{1}{2^{k}+1})##
Code: [Select] 1→v
It might also be convenient to move all the v,w calculations outside of the cases, since they are all essentially the same with components multiplied by s or t, which are just multiplications by {1,0,-1}.
Code: [Select] 1→v
Also, the last m iterations can be optimized even further if you are using a binary floating-point format for x and y. Convert the mantissas so that you have m*2^0 (so 1.1011*2^-5 would have a mantissa of 110110000... but would be converted to 000011011000...). Now use these as your new, converted x and y values, treating them as ints. If the function "MSb(x)" returns the most significant bit of x (so for 64-bit numbers, bit 63):
Code: [Select] sign(x)→b
In the last part, s,t are either 0 or 1, and b,c are either -1, 0, or 1. If any of them are 0, the algorithm can of course be optimized further (just plug in 0 to see how it reduces). For example, suppose x=0 upon entering:
Code: [Select] for k,m,2m-1:
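The tables themselves are straightforward to generate from the definitions above; a Python sketch (starting at k=1, since L2[0] and A1[0] would hit log(0) and a division by zero, and with the table length K an arbitrary choice of mine):

```python
from math import log, atan

K = 16  # table length; index 0 of each list corresponds to k=1
L1 = [2**k * log((1 - 2**-k)**2 + 4**-k) for k in range(1, K)]
L2 = [2**(k+1) * log(1 - 2**-k) for k in range(1, K)]
L3 = [2**k * log(1 + 4**-k) for k in range(1, K)]
L4 = [2**k * log((1 + 2**-k)**2 + 4**-k) for k in range(1, K)]
L5 = [2**(k+1) * log(1 + 2**-k) for k in range(1, K)]
A1 = [2**(k+1) * atan(1 / (2**k - 1)) for k in range(1, K)]
A2 = [2**(k+1) * atan(2**-k) for k in range(1, K)]
A3 = [2**(k+1) * atan(1 / (2**k + 1)) for k in range(1, K)]
```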
480
News / Re: https by default« on: December 06, 2014, 05:59:32 pm »
I don't know anything about these things, so I don't actually know the significance of this news .___.