This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.
Messages - shkaboinka
46
« on: March 06, 2012, 04:18:48 pm »
Honestly, I do think they have some good points. However, just because a language is easy (and for many, including myself, it's not) doesn't mean worthless things come out of it. (Case in point: PortalX, Graviter.) Actually, in my experience with games, high-quality Axe games totally beat all but the highest-quality ASM games. (Again, from my experience)
Each language has its uses. Axe is designed for speedy development of games. It was made so that programmers can make games of as high quality as ASM, in a fifth of the time. (However, as with every language, there is no such thing as a perfect design or programmer)
Something I find personally funny about the Axe/ASM debate is that I actually find ASM easier than Axe!
So I agree, each language is capable of great things. It's not about what you're given beforehand or how you go about it (though the experience of it is important). It's about the product. It's the same in professional development as well: 1. Make it work, 2. Make it fast, 3. Make it small. If one and two are taken care of in one fell swoop, then all the better. And if that's not your style, then just make something even awesomer in ASM! But don't bash the Axe programmer because of it.
As a side note, I wonder how OPIA will influence this.
The hope is that OPIA will be to z80 what C/C++ is to everything bigger! I'm quite certain that there will be nothing closer to assembly (for TI z80) that provides all the high-level stuff in place of it (or along with it). OPIA will not be what Axe or Grammer is; but also, nothing will be what OPIA is either! Thanks, Homer-16, for including my project!!
47
« on: February 16, 2012, 12:15:09 pm »
No gotos. Instead, I try to provide constructs/commands that should make them unnecessary if you know how to structure your code decently. I do provide labels and labelled-break/continue:
foo: while(...) {
  bar: while(...) {
    while(...) {
      break;        // break from the innermost loop (or switch)
      continue;     // continue the innermost loop (or switch)
      break foo;    // break from the "foo" loop (or switch)
      continue bar; // continue (skip to next iteration of) the "bar" loop (or switch)
    }
  }
}

One person suggested that "do goto start { ... }" would be more readable, though personally I prefer to leave that word out of the language. I suggested "do @ start {" or "do start {" ... I like "do start" or "do(start)" because I think it reads well enough (and yeah, you might have to know about this construct in the first place; but that's what it means to know a language. Stuff can be obvious "enough" without having to have a built-in readme, I think). ... Nevertheless, opinions?
48
« on: February 16, 2012, 08:50:36 am »
Agreed  (and do-for would be bizarre...). I retract my suggestion about just having "for" ... any opinions about my follow-up comment though?
49
« on: February 16, 2012, 05:11:16 am »
You are probably right. One reason I liked it (other than that Go uses it) is that the infinite loop is just "loop { ... }", but without another keyword. Otherwise "while(true) { ... }" is easily detectable. "Arr" is some array variable. In other languages this sometimes looks like "foreach(var in arr)" (I used shortened names so as to not totally give away stuff; but perhaps people have seen that mechanism, or perhaps the code itself would make it clear enough). I do think that perhaps a version of the "for" that mirrors TI-BASIC could coexist and make some code easier to write (e.g. "for(x,1,10)" versus "for(x=1;x<=10;x++)"). To keep things simple, the for is also used for "foreach" loops in some languages; and with that, I figured that allowing it to have just a condition (or perhaps have nothing to signify a "forever" loop) would not be much of a stretch ... but I admit that things like "while" and "do while" read much better.
I can stick with the standard setup, but perhaps keep extra options for "for":
while(c) { ... }                  // loop while c is true
until(c) { ... }                  // loop until c is true
do { ... } while(c);              // do ... and then repeat while c is true
do { ... } until(c);              // do ... and then repeat until c is true
do { ... }                        // do "forever" (useful)
for(init; test; update) { ... }   // C++/Java/C# "for"
for(var, start, end, inc) { ... } // BASIC "for" (optional "inc")
for(var : array) { ... }          // "foreach" (probably added in later); possibly also
                                  // work with funcs/cofuncs where the last of the return-values is a bool

One thing I would like to provide is a way to start a loop in the middle somewhere, to avoid having to use duplicate code before the loop and within the loop (trust me, everybody runs into this sooner or later):
// "A" (a large chunk of code) is duplicated to do this:
A;
while(C) {
  B;
  A;
}
// My solution (just as it reads, i.e. begin at label "start"):
do(start) {
  B;
start:
  A;
} while(C); // ...but repeat the WHOLE loop "while C" is true
// A common solution (but requiring "loop manipulation"):
do {
  A;
  if(!C) { break; }
  B;
} // !C is evil
50
« on: February 16, 2012, 12:15:02 am »
To make my point about what I am about to propose, first look at each construct to see if it's obvious enough what each would mean:

for { ... }
for(test) { ... }
for(init; test; next) { ... }
for(var : a, b) { ... }
for(var : a, b, i) { ... }
for(var : arr) { ... }
for(num) { ... }

What I am proposing is to just use "for" for every kind of loop ("loop" didn't look as good, and "repeat" feels like the weird cousin of "while"). The idea is that there are fewer keywords, and the context would be easy enough anyway. The answers are (smashed together here so that people can guess first):

while(true) { ... }
while(test) { ... }
init; while(test) { ... next; }
for(var = a; var < b; var++) { ... }
for(var = a; var < b; var += i) { ... }
for(ix = 0; ix < size; ix++) { var = arr[ix]; ... }
for(blah = number; blah > 0; blah--) { ... } // uses DJNZ

Of course, I could throw out the last one if that's going a bit far (and I'd probably implement some of these only once everything else is in place), but I wanted to include several examples to make my point. ...Opinions on this? (i.e. instead of having while/until/etc.)
51
« on: February 14, 2012, 02:32:03 am »
A few days ago, I made a post (elsewhere) re-debating the syntax for using & declaring arrays and pointers; but I take it back (post deleted)!  ... I was about to replace it with a post asking opinions on whether to use standard order (e.g. {*a[j]} is resolved to either {(*a)[j]} or {*(a[j])} depending on the datatype); but I just answered that question myself when I realized that OPIA inserts dereferences etc. automatically if the context is clear. This means that {a*[j]} can be shortened to {a[j]}, assuming that "a" is a pointer to an array, so that the compiler would know that the "*" is implied. The reason for not allowing "*" to be in front and "[]" to be in back (even though the compiler could determine the correct order of evaluation from the type) is the following: since {a*[j]*} can be shortened to {a[j]*}, the other form would have to mean that {**a[j]} can be shortened to {*a[j]}, which means that something like {*a[j] = *b[k]} could be ambiguous!! If all that is confusing, then just consider this a recap on the syntax for array & pointer declaration/usage:

*[]byte x; // pointer to array of bytes
[]*byte y; // array of pointers to byte
byte b;
x[i] = b;  // short for x*[i] = b;
y[i] = b;  // short for y[i] = &b;
y[i]* = b; // no shorthand
b = x[i]; // short for b = x*[i];
b = y[i]; // short for b = y[i]*;

The rule is to FIRST insert dereferences "*" so that array-indexing "[]" (and dot ".") makes sense (thus {x[j]} is ALWAYS {x*[j]}), and SECONDLY to make the right side suitable for the left side. The only issue with this is that it creates asymmetry (note the differences between {y[j] = b} and {b = y[j]}). This is not fixable, because {&b = ...} is illegal anyway, and I'm set on having assignment-to-pointer always give an address (plus it allows values to be passed "by reference" more transparently).
52
« on: February 04, 2012, 01:39:54 am »
BlakPilar: Just to be very clear on what's what and how each is used:
struct S {
  func fp(byte);            // function-pointer
  func mp(this, byte);      // method-pointer (caller is passed to "this" automatically)
  cofunc cf(byte x) { ... } // member cofunc ("this" is implied)
  // NOTE: cf is NOT directly modifiable (it changes after each 'yield')
}
func f(*S, byte x) { ... } // non-member function
func S.m(byte x) { ... }   // non-member method
s := S{f,m}; // cf is not in the list (not a pointer)
// To show that they are all of type func(*S,byte):
s.fp = s.mp = f;    // all point to f
s.fp = s.mp = S.m;  // all point to S.m
s.fp = s.mp = s.cf; // ... (Yes, even the cofunc!)
// To show how call syntax differs:
s.fp(s,5); // pass an *S directly (does not have to be s!)
s.mp(5);   // s is automatically passed as "this"
s.cf(5);   // s is automatically passed as "this"

NOTE: cofuncs cause the containing struct to contain a tail-call to the underlying function (and any other data that needs to be saved between 'yield' commands). However, the containing struct is passed as "this" (rather than just the cofunc itself). This allows the cofunc to access the other members of the struct (i.e. the implication of declaring a cofunc WITHIN a struct is that you are giving a function-behavior to the actual STRUCT rather than to some separate cofunc WITHIN the struct). You COULD declare a cofunc separately and then have a struct contain one, but then the cofunc takes a reference to itself rather than to the struct:
cofunc CF(byte x) { ... } // cannot access y in there

struct S { byte y; CF cf; }

s := S{5,CF{}};
s.cf(5);
ptr := s.cf; // ptr is a func(*CF, byte), rather than a func(*S,byte)
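Go's method expressions versus method values draw the same fp/mp distinction as the call-syntax examples above (the struct, field, and method names below are mine, for illustration only):

```go
package main

import "fmt"

type S struct{ x byte }

// m is a method on *S; Go gives us two ways to treat it as a value.
func (s *S) m(y byte) byte { return s.x + y }

func main() {
	s := &S{x: 10}
	fp := (*S).m // method expression: a plain func(*S, byte) byte,
	//              receiver passed explicitly at the call (like fp)
	mp := s.m //    method value: the receiver s is bound automatically (like mp)
	fmt.Println(fp(s, 5), mp(5)) // 15 15
}
```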
53
« on: February 03, 2012, 01:33:44 am »
EDIT: I am going to change the form a bit to be closer to how it was before (i.e. still use the "cofunc" keyword). I am going to update the overview to reflect this, and modify this (and previous posts) to reflect it. TO ANSWER THE QUESTION (with modifications applied):
cofunc T(byte x : byte) { // THIS SHORT-HAND FORM...
  for(byte last = 0; true; last += x) {
    yield last;
  }
}
struct T { // ...REALLY MEANS THIS
  cofunc(byte x : byte) {
    for(byte last = 0; true; last += x) {
      yield last;
    }
  }
}
t := T{}; // make a T variable
t(1);     // 0+1 = 1
t(2);     // 1+2 = 3

You can also name cofuncs, and have multiple of them in one struct:
struct Foo {
  byte x;
  cofunc f(byte y) { ... }
  cofunc g(char c) { ... }
}
f := Foo{1}; // x
f.f(2);      // y
f.g('H');    // c

Embedding cofunc-definitions in a struct causes the struct to store the data needed to allow "yield" commands to work (i.e. local variables [like "last"], and the point of execution to continue from after the yield). In the first case, the cofunc is anonymous (has no name), hence "t(1)" in the first part versus "f.f(2)" in the second.
54
« on: February 02, 2012, 06:41:31 am »
I discovered something fundamentally wrong with how I present "cofuncs" in my language: they are METHODS! ... In other words, I cannot just embed a function-call at the start and say it acts like a function. Therefore, I propose a new scheme which allows cofuncs to act as embedded methods of a struct:
struct S {
  byte x;
  cofunc f(...) { ... } // f and g may contain yields
  cofunc g(...) { ... } // f and g have access to each other and x
}
s := S{1};
s.f(...);
s.g(...);

And there is a short form as well:
struct T { cofunc(...) { ... } } // the cofunc is an anonymous field of T

cofunc T(...) { ... } // This is a shorthand for the same thing above
t := T{};
t();

The benefits of this approach:
- A struct may embed multiple cofuncs like this (anonymous ones referred to as ".cofunc")
- These cofuncs count as methods (and are thus compatible with function-pointers and interfaces)
- These cofuncs may call each other or access other members of the struct (they "are" the struct)
On a side note, would it be nice to have ".x" be a shorthand for "this.x"?
55
« on: February 01, 2012, 02:39:23 am »
Those tools certainly make stable and effective parsers; but I've seen the code for some of them (I do not recall which), and it looked very generated ... I prefer to do it on my own, for the same reason that some people prefer to code in assembly. Sure, those tools make it a lot easier and are already proven to work perfectly, but I don't think that necessarily makes them "better". Though it is ridiculous to code everything from scratch when the tools are already provided and proven, being able to master those techniques means that you know when it might be better to make a custom version yourself (e.g. I use my own LinkedList class because I found that java.util.LinkedList concatenates lists by converting them to arrays first!). Also, sometimes it's better to have a custom structure that has all the general-purpose features removed, or works as needed without extra layers ... but let's just settle on this: I am doing it all manually because that's part of what makes it mine. And trust me, I know what it takes (experience and research) to make a solid parser & compiler, and how to do it efficiently (e.g. recursive descent, predefined Tokens with flags marking precomputed aspects, good parse-tree constructs, and keeping things modular). ... If anyone can do it, I can. I want to DO this. The "change in grammar" from before was also a big change in the semantics of the grammar. The language was also particular about what could be contained where (e.g. whether classes, functions, etc. were abstract, virtual, static, etc.). After it was parsed, I also had to check for integrity between classes in regards to inheritance (things that had to be consistent or exclusive, etc.). This is also a sign that the grammar was not of a simple form ... which is one of many reasons that I like the Go-ish style much better than the C++/C#/Java style for a polymorphic language (especially one geared to such a low level!)
56
« on: January 31, 2012, 04:59:26 am »
Quigibo: I've had a few premature run-throughs with compilers before. Last time, I went through and changed some key rules which really affected parsing and verification rules for classes & interfaces. I do have a couple of aspects already in place; however, the new design is free of MUCH of that red tape (since it lacks a type-hierarchy), so I might be able to put more up sooner (school etc. permitting)
BlakPilar: There will be a Tokenizer, Preprocessor, and a Parse-Tree (which indeed makes it worlds easier to parse and optimize. I will go a step further by tracing values across variables and allowing things to be marked as "interpreted"). Each will be modularized so that other tools can use them, just as the compiler itself does, to analyze code at different levels. This compiler will NOT be tied into a GUI, but will allow optional arguments to specify file-paths, extra files to include, and which environment to use. Allowing these as compiler arguments means that they don't HAVE to be present in the source code (e.g. so that other tools/editors can organize things however they like, and then pass that information on to the compiler).
[General]: I want to reemphasize that OPIA is not "better" than other languages, though I do intend for it to be the most versatile and (computationally) powerful. I do this by carefully choosing flexible and efficient mechanisms necessary to make the language "limitless", while making the design as simple as possible. Unlike Axe or Grammer, OPIA does not offer a fully integrated environment with most of the tools you need to make a game; but OPIA is extensible to allow such things to be coded-in/linked-in so as to seamlessly integrate directly with/into any environment (and that is where people can contribute)!
57
« on: January 30, 2012, 01:57:52 am »
Thanks for clarifying (I just thought that Omnimaga & Cemetech were just different sites, though both more communal than ticalc.org). I think that your suggestions would do me/OPIA well when it is ready to present. However, it's not ready to say "here it is!" ... but in announcing that it is upcoming, or allowing others to participate in the design (or discussion thereof), it would be beneficial to make it more readable / visual / involved / etc. Let me take just ONE paragraph to explain why this has not been much of the case yet (and then I will explain where/when this will come into play more): The reason that my approach is so different from that of Axe (etc.) is that it is NOT a bunch of features crammed together. I have done my best to analyze features for their power, efficiency, usability, etc., doing my best not to include anything just because it looks nice or is familiar, etc. ... I am using strong principles of programming-language and compiler theory to design a language that is as simple, expressive, powerful, understandable, and unrestricted as possible (which means "Ooh, can you put X into the language? I like X" has less sway than all those other factors). For example (skip the rest of the paragraph if you don't want one), questions about closures and coroutines led me to consider their necessity versus other techniques versus efficiency. Upon finding that they require the same internal mechanisms, I provided the "cofunc" as a clean and efficient way to do either, removing some of the headache and overlap that either presents in its typical usage.

WHEN THE LANGUAGE IS IN PLACE (and at least on its WAY to being developed [e.g. the design already resolved]), then the language will need a LOT of contribution! Unlike Axe & Grammer, OPIA will NOT have any functions or libraries built into it. Using the core language, libraries (for graphics, input/output, math, OS calls, etc.) can be written to extend the language by (1) tying directly into other tools/OS's, (2) providing direct access to other assembly resources, (3) embedding (hiding) assembly code into functions, or (4) using pure OPIA code to design new constructs, tools, etc. One is NOT required to know anything of assembly in order to use OPIA, since such integration techniques are only provided so that it CAN tie directly into other already-existing tools (no matter where they came from). When I can provide something which can generate runnable code, I will provide some basic functionality for printing etc.; but I intend to involve as many people as want to participate when it comes time to come up with some sort of standard(s) and/or write some of the libraries that people are going to want. THIS is where people can be actively and freely involved in developing the language (through expansion) ... I, however, will design the core language (which I am mostly done with anyway).

I will most certainly need to provide some more friendly documentation. So far, I've decided that posting updates and opening things up for discussion is better than just doing it in the dark, since (1) I can get meaningful feedback from anyone willing to discuss internal theory/design, (2) perhaps I can find what would/would not be "acceptable" for people wanting to use the language, (3) people can see active design and get an idea for what is to come, while (4) building up a larger group of followers who will be anxious to help test it and develop the language further.  EDIT: Perhaps I will provide some better examples before then though, so people can see what it would look like etc.
58
« on: January 27, 2012, 10:46:35 pm »
Since I am still developing it, I use my Overview to lay out every single aspect (in detail) as a representation of what I have so far (that way, none of it just becomes lost thought) ... I've been working on designing a language since 2004 now (that means tons of research and experimentation). This is going to be the language that I've meant for previous attempts to be, and much more ... Trust me, I'm being thorough because I know what it takes; not because I'm crapping out ideas left and right (though I started there)  These posts are a way for people to see my thoughts as I design this, and contribute responses (e.g. intelligent opinions about what is/isn't useful/understandable/etc.). This is helpful because I hope to offer a dimension of z80 Calc programming not yet realized. However, I understand that my examples are too abstract for most people (focusing more on intricacies than realistic usage, because THOSE are the details I have yet to polish up). ... The purpose of my discussion is not so much to discuss how to use OPIA, but to discuss compiler/language design/theory directly, with OPIA being a realization of that design. Some of it is very dry and lacking explanation because I figure that people can look at the (constantly updated) overview to reference what I mean (that is partially the point of the overview right now). However, I am SO close to having it all set, so I will try to wrap it up. Hopefully you'll see more coding being done soon (on the compiler, I mean; it currently only has some very foundational stuff). As I have more testable stuff, then perhaps I can focus more on giving people something that actually DOES something (and yes, example programs like PONG or something). I do apologize for all the convoluted code; I've been hammering out intricacies rather than trying to provide realistic examples. I figure that the overview provides all details, but is not meant to teach people how to program (so it's very dry) ... 
I realize that the lack of useful examples makes it difficult though, so maybe I will have to do better at that  ... I kinda like the overview to be cut-and-dried though, since it's meant to be "THE" language definition. Perhaps I can make some tutorials/examples/etc. as things progress though. By the way, there have been responses and discussion on some of these topics (especially anonymous fields) on other sites. I post it all everywhere though so that nobody is left out from knowing what is being discussed or changed. I will stop posting links everywhere though (I had my reasons; but now I have reason not to, thanks).
59
« on: January 23, 2012, 07:21:14 pm »
Ok, I've made an update:

* Anonymous fields DO bring their methods into the containing struct (i.e. they can be used to fulfill interface requirements)
* Changed "virtual methods" to "method pointers", and altered the syntax from "func blah(...)" to "func(this,...) blah" (so that they just act as special function-pointers). There is also no "declared within" paradigm (i.e. the "with or without a body" thing), though they MAY be given a default value just like anything else (e.g. " ... = func(...) { ... }").
* Made some minor formatting adjustments with the changes (shortened some examples, but added some others)

I did not mention it in the overview, but bridge-functions will have to be used to map non-initial anonymous fields to their methods when they are used with interfaces.
60
« on: January 12, 2012, 10:09:54 pm »
(whole post edited ... again)
Ok, I am going to reinforce some new rules:
(1) Struct members with default values must come last in the struct. When initializing a struct instance, the initial value for a particular data member may be left out of the list only if the values after it are also left out of the list. This includes function bodies for virtual methods (though "= null" or "= otherFunc" may be used as default values for member-functions). Also, default values can be expressions (e.g. the sum of the other values)
(2) Cofunctions are declared as a combination of a struct and a function: "cofunc name { members } (args : returns) { body }", but "{members}" may be left out. They are initialized the same as structs.
(3) Switch-variables may be declared directly, but not "on" other enums (just "switch{X,Y,Z} foo"). Values will be assigned directly by name ("foo=X" rather than "foo=foo.X"). The ":=" operator is not allowed for switch variables, since the initial value can be given directly by name anyway ("switch{X,Y,Z} foo = X"). The compiler will treat these assignments specially, so that "foo=X" does not interfere with some other "X".
(4) "Methods" are only allowed for structs, cofuncs, and interfaces ("single identifier" types), so as to avoid strange dot-expressions on functions and numbers, etc. This does not stop anyone from making a struct containing some other type as a work-around though (since the storage and manipulation would be the same; especially if the member is anonymous)