Ooh, that is awesome. It seems the code gets smaller, right? How well does it get optimized for particularly complex programs?
Thanks!
Well, the Source program is a lot smaller, but that's just because the compiler doesn't inline simple statements yet. The equivalent HPPL program would be a bit smaller if a human wrote it by hand right now. Eventually, I'll make the compiler replace the TMPs with the expressions they stand for; right now, these temporary variables are created by the Source compiler so that statements that can't be inlined in HPPL can still be inlined in Source. Also, there's no optimization yet. But trust me, I have a few ideas.
Conclusion: PIXON is impossibly fast. So fast, in fact, that I'm wondering if I messed up my time test somewhere. Of course, even if pixels are really fast, you have to worry about working with ints, which are noticeably slower than reals. If those results are real, though, I might have to suggest working with grobs myself.

Also, more progress. Here's a Hello Source program:

test() BEGIN LOCAL TMP_1:=1; LOCAL TMP_2:=1; LOCAL TMP_0:=TMP_1+TMP_2; x:=TMP_0; END;

This is a huge step for the Source compiler. Right now, there is a huge lack of minification, as you can see. That will be fixed once the full syntax of Source gets added to the compiler.
That might be quite problematic since the code gets compiled when you send it across... I'll have to check that.
Also, I think storing a char directly doesn't require any memory allocation either, since the size isn't changing.
Hmm? You can STORE characters using that notation too? Didn't know that.
It's a good thing you hang around here, Tim. We'd never have any documentation otherwise!
EDIT: Speaking of documentation, I was looking at #pragma. Can it accept any arguments other than mode? And within mode, can it only take the separator and integer modes?
(If this test were done on a ClassPad II, I would be 65 years old and retired by the time it finished.)
Thanks a lot for this post, by the way. This might be pretty handy, because sometimes we might need the fastest possible speed but not be aware of what is actually faster. It sucks to have to rewrite most of the code after discovering we could have done it faster in certain ways >.<
Yeah, I know what you mean. Looking back at HP Tetris, I realize that representing the grid with a complex matrix was a bad idea.
Also, speed was a major concern for me regarding tilemapping, because I was not sure if I should use lists, matrices, strings or images. I am not surprised that strings are slower, though, because on the TI models they were much slower than lists (in fact, the farther a character was into a string, the slower it was to read, so in a 2000-byte string, reading map data at the beginning was about 3 times faster than at the end). For data storage and reading, could you test how fast/slow it is with pixel-test commands? Using such data would make it much easier to design tilemap data, but if the speed loss is considerable, then it might not be worth it.
Do you mean testing list access vs. pixel access? If that's what you mean, sure! I'm going to guess that lists are faster, but who knows! This is why I'm testing.
As you guys may know, I'm working on an application that (in a way) optimizes HPPL code. So recently, I've been time-testing several methods of computation. If you want to know definitively what's faster to execute, you now have this thread!
I take 10,000 to 40,000 executions of a code snippet and average the returned execution times. When programs are tested against each other, all other variables and code are held constant. These aren't EXACTLY how long each operation takes; since there's boilerplate in there too, all the times are relative. I did all of these on a physical calculator.
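For reference, here's a rough sketch of the kind of harness I mean (not necessarily the exact setup used for these numbers); SNIPPET() is just a placeholder for whatever code is being measured, and it assumes TICKS returns a millisecond counter:

EXPORT TIMEAVG(n)
BEGIN
  LOCAL k;
  LOCAL t0:=TICKS; // millisecond tick count at the start
  FOR k FROM 1 TO n DO
    SNIPPET(); // placeholder for the code under test
  END;
  RETURN (TICKS-t0)*1000/n; // average time per execution, in μs
END;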
Conclusion: GETPIX is slower than list access, at least for smallish indices. However, the main problem is probably the fact that you're getting back an integer, which is slow.
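For anyone who wants to play with the grob-as-storage idea from the PIXON/GETPIX tests, here's a rough sketch (it assumes PIXON_P accepts a color argument and that GETPIX_P hands the same number back; how large a value survives the round trip is something to verify):

EXPORT GROBSTORE()
BEGIN
  LOCAL v;
  DIMGROB_P(G1,320,240); // scratch grob to hold the data
  PIXON_P(G1,5,7,1234); // store 1234 as the "color" of pixel (5,7)
  v:=GETPIX_P(G1,5,7); // read it back (comes back as an integer)
  RETURN v;
END;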
Big 1D Data
Let's compare strings to lists again. Let's bring this to THE MAX.
Conclusion: Setting values is much, much slower than getting them. Here, strings serve as an EXTREMELY good storage mechanism. So strings are better if you're willing to have a longer access time.
Making large strings
How does one make a large string programmatically? Well, there's the built-in ΣLIST function, which works, but what else could work? CHAR could do.
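Something along these lines, for instance (a rough sketch; it assumes CHAR accepts a list of character codes and that ΣLIST "sums" a list of one-character strings by concatenation):

EXPORT BIGSTR(n)
BEGIN
  LOCAL a:=CHAR(MAKELIST(65,X,1,n)); // n copies of "A" from one CHAR call on a list of codes
  LOCAL b:=ΣLIST(MAKELIST("A",X,1,n)); // same string built by concatenating a list of "A"s
  RETURN {a,b};
END;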
EXPORT TST3() BEGIN FOR I FROM 1 TO 1000 DO FF(); END; END;
The results are as follows:
MAKELIST: 10,500 μs
ITERATE: 7,900 μs
FOR: 18,900 μs
Conclusion: If you don't care too much about the iteration count, ITERATE is better. I would bet it would still be superior even if you put code in FF to make ITERATE give you back a real iteration value.
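For reference, the MAKELIST and ITERATE versions of that test look roughly like this (program names are just placeholders; the FOR version is the TST3 above, and the ITERATE form simply ignores X):

EXPORT TSTM() BEGIN MAKELIST(FF(),X,1,1000); END; // evaluates FF() for X=1..1000
EXPORT TSTI() BEGIN ITERATE(FF(),X,0,1000); END; // evaluates FF() 1000 times, X unused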
EXPORT TST3() BEGIN FOR I FROM 1 TO 1000 DO FF(I); END; END;
The results are:
MAKELIST: 35,700 μs
ITERATE: 39,300 μs
FOR: 48,200 μs
Conclusion: Passing a parameter costs you a lot of time.
Oh, and I guess I lost that bet.
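(For the curious, the parameterized versions look roughly like this; program names are placeholders, and the FOR version is the TST3 above with FF(I). Note that in the ITERATE form, X isn't a loop counter at all: it gets fed the previous result of FF(X), which is why FF would need extra code to recover a real iteration value.)

EXPORT TSTM2() BEGIN MAKELIST(FF(X),X,1,1000); END; // X runs 1..1000
EXPORT TSTI2() BEGIN ITERATE(FF(X),X,1,1000); END; // X is whatever the previous FF(X) returned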
Fun fact with lists: Use LIST[0] to get the last element of a list. Set something to LIST[0] to append it to the end of the list. This is a sensible way to do things. Yessir.
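In code, that quirk looks roughly like this (a rough sketch, assuming the bracket notation behaves the same on a local list variable):

EXPORT ZEROTEST()
BEGIN
  LOCAL L:={10,20,30};
  LOCAL last:=L[0]; // reads the last element, i.e. 30
  L[0]:=40; // appends, so L becomes {10,20,30,40}
  RETURN {last,L};
END;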
Well, that's really all the useful ones I have right now. I'll do more later. Feel free to ask me for more comparisons!
Indeed. I am really curious about what this language will look like. Just make sure it's not overly cryptic like Antidisassemblage on the 84+. Ironically, that language was meant to bridge the gap between TI-BASIC and Z80 ASM, yet it required learning ASM just to understand why Squirrelbox works in certain ways, which defeated the entire point.
As for the new name, just keep in mind it might confuse people if they post actual source code literally, but I like the idea. Btw, do you plan to write a tutorial on how to code in Source when it's finished? It might be good to post about it here if that's the case, so more people see it.
Source is a good name. I don't see how it could be confusing.
Anyways, I plan on writing Source tutorials soonish. Probably before the whole project is done, actually! Sooner rather than later, though, I'll post a description of the syntax. Syntax is important! How else can I get feedback on this thing?
Btw, is this still a language written in HP PPL, or does it now use ASM/C through an HP Prime exploit?
Sadly, I have not found any good haxx. It will initially be compiled entirely to HPPL. However, as we learn more about the OS, I am willing to add features in ASM/C.
So you guys may be surprised to learn that this project isn't dead. Actually, I've restarted from scratch.
Today, I'm introducing Source.
Source is the new name for HPP+. I made the change because I might (might!) make this language compilable to other places eventually.
There is currently the Source compiler, which parses Source programs but doesn't compile them yet, and a Source plugin for the NetBeans IDE. The plugin already has syntax highlighting and error display built in.
Soon, I plan on giving you guys a WIP document on the exact syntax of Source. Also, I really need to update the OP.
One last thing, concerning pointers: I've worked with QUOTE a little (I need to show you guys how it works), but it acts quite... uniquely. I may or may not be able to figure out how to use this to make pointers fast.
I loaded the .BIN file from the old SDK OS into an ARM disassembler (armU, to be exact), and I found that I can emulate its basic operations from there. I discovered which memory addresses correspond to the splash screen bootup and the main input loop.