Re: more questions (none / 0) (#90)
by nile on Fri Jul 13, 2001 at 05:46:49 PM EST
1.In particular, they couple syntactical relationships of objects (i.e., it is legal for
'+' to come after '0-9') with semantical relationships (i.e., what it means for
plus to come after a number).
As I mentioned elsewhere, this is meaningful only for people writing
parsers. Most objects in applications don't even have a syntactic
relationship. What relationships they do have are far more complex than
one-dimensional positions in a token string.
All problems have elements with syntactical and semantic relationships. Consider an ATM: it has relationships between its money feeder, its button interface, the customer,
and the bank. Consider a calculator: it has relationships between numbers,
operators, and parentheses. Consider a web browser: it has relationships between
navigation, history, the user interface, etc.
I think part of the confusion here is that we mean different things by syntactical. When I say syntactical relationships, I mean legal relationships. It is legal, for example, for a customer to ask for money from the bank and vice versa. It is not legal for the button interface to ask for a loan from the money feeder. All programs restrict the ways in which elements in their domains can interact. The fact that object-oriented programming does not provide an explicit mechanism for handling these relationships does not mean that they do not exist.
What words do is restrict what relationships these elements can have with each other to prevent bad programming behavior. They accomplish this by coupling the syntactical relationships between elements (i.e., how they can be legally related) with the semantic relationships (i.e., what those relationships mean).
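As a rough sketch of what this coupling might look like (hypothetical Python of my own devising, not BlueBox code; all class and method names here are invented for illustration), each "word" carries both a rule about what may legally follow it and a rule about what the combination means:

```python
# Hypothetical sketch: each "word" couples a syntactic rule (what may
# legally follow it) with a semantic rule (what the combination means).

class Number:
    def __init__(self, value):
        self.value = value

    # Syntax: a number may legally be followed by '+'.
    def can_follow_with(self, other):
        return isinstance(other, Plus)

class Plus:
    # Syntax: '+' may legally be followed by a number.
    def can_follow_with(self, other):
        return isinstance(other, Number)

    # Semantics: what it means for '+' to sit between two numbers.
    def apply(self, left, right):
        return Number(left.value + right.value)

def evaluate(tokens):
    """Left-to-right evaluation that rejects illegal relationships."""
    result = tokens[0]
    i = 1
    while i < len(tokens):
        op, operand = tokens[i], tokens[i + 1]
        if not (result.can_follow_with(op) and op.can_follow_with(operand)):
            raise SyntaxError("illegal relationship between tokens")
        result = op.apply(result, operand)
        i += 2
    return result

print(evaluate([Number(1), Plus(), Number(2), Plus(), Number(3)]).value)  # prints 6
```

The point of the sketch is only that the legality check and the meaning live in the same place, so an illegal arrangement (say, two operators in a row) is rejected before any meaning is computed.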
I agree, by the way, that there are some relationships that should be presented to the user as something other than a text string (matrices, for example). That is a representation problem, though. Words can parse multi-dimensional matrices fine.
2.The following token might also match tokens
So I take it that's a yes, you are limited to prefix notation. There's no way
for me to say: "'x' must come between two numbers" without changing the
number definition to allow 'x' to follow it.
Again, I didn't give enough detail. If you look in BlueBox, the matchers actually match a beginning and an ending expression, and the syntax/semantic structures are matched in between them.
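As a minimal illustration of the idea (hypothetical names and code, not BlueBox's actual matcher), anchoring a match on a beginning and an ending expression makes it easy to state things like "'x' must come between two numbers" without touching the number definition:

```python
# Illustrative sketch only: a matcher anchored by a beginning and an
# ending expression, with the structure matched in between them.
import re

class Matcher:
    def __init__(self, begin, end, between):
        self.begin, self.end, self.between = begin, end, between

    def matches(self, tokens):
        """True if the first and last tokens match the anchors and
        every token in between matches the middle expression."""
        return (re.fullmatch(self.begin, tokens[0]) is not None
                and re.fullmatch(self.end, tokens[-1]) is not None
                and all(re.fullmatch(self.between, t) is not None
                        for t in tokens[1:-1]))

# "'x' must come between two numbers": the anchors are numbers,
# the middle is the infix operator itself.
times = Matcher(begin=r"\d+", end=r"\d+", between=r"x")
print(times.matches(["3", "x", "4"]))   # prints True
print(times.matches(["x", "3", "4"]))   # prints False
```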
So it is very easy to match pairs of parentheses, 'x' between two numbers, etc. I am also increasing the power of the matching system with something I call abstract regular expressions: regular expressions that can match words as well. Most of the work on this is done -- I'm just finishing the backtracking algorithm.
3.I don't see that your code example has anything to do with "words." It
could be written in a variety of ways with a variety of languages, and some
are just as easy to read as your second implementation. What you didn't
show was the rest of the implementation it would require to define the
functions you use: with words, you'd have to write your own parser to allow
you to use nested parentheses like you have, while most languages
provide a useful syntax ready-made. I'd like to see evidence that defining
your own syntax every time you program is worth the work; what's wrong
with the standard notation?
I provide this evidence farther down the page in "5 benefits of word-oriented programming." However, I can see your concern here so let me see if I can answer it.
The worry appears to be that programmers will now be saddled with the additional task of defining a syntax. I think this will rarely be the case, though. Most syntaxes inherit from other syntaxes.
For example, in English, most words inherit from nouns, verbs, and modifiers. Creating a new domain in English, then, does not mean respecifying all of the grammar rules that define how nouns, verbs, and modifiers can be combined. Instead, the programmer simply inherits from those three core words that already define the English language. No parser has to be written to create a domain-specific dialect of English, since the core language is already defined.
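To sketch what that inheritance might look like (hypothetical Python; none of this is BlueBox's actual code, and the class names are invented), a domain author subclasses the core words and gets their grammar for free:

```python
# Hypothetical sketch of language inheritance: the core "words" carry
# the grammar, so defining a new domain needs no new parser.

class Noun:
    # Core grammar rule: a noun may be followed by a verb.
    follows = ("Verb",)

class Verb:
    # Core grammar rule: a verb may be followed by a noun.
    follows = ("Noun",)

# Creating a banking domain is just subclassing: "Account" is legal
# wherever a noun is, "Withdraw" wherever a verb is. No grammar rules
# are respecified.
class Account(Noun): pass
class Withdraw(Verb): pass

def legal(sentence):
    """Check each adjacent pair against the inherited core grammar."""
    for a, b in zip(sentence, sentence[1:]):
        # b is legal after a if any of b's base classes is allowed.
        if not any(base.__name__ in a.follows for base in type(b).__mro__):
            return False
    return True

print(legal([Account(), Withdraw(), Account()]))  # prints True
print(legal([Withdraw(), Withdraw()]))            # prints False
```

The design point is that `Account` and `Withdraw` never state any syntax of their own; their legal positions come entirely from the core words they inherit from.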
Something similar happens in object land, by the way; we just don't see it. When we inherit from the root object of a language, what we are really saying is, "I am creating a new 'word' that can be used wherever 'object' can be used in this language." Of course, since objects don't have the properties of words, we don't look at it this way, even though that is what is happening.
Understanding language inheritance is key to understanding words. Without inheritance, the system would require a great deal of work for ordinary programmers.