The Word Model: A Detailed Explanation

By nile in Technology
Mon Mar 26, 2001 at 03:20:15 PM EST
Tags: Software

In this paper we explain the word model from a programmer's perspective. We begin by solving a simple programming problem with the traditional object-oriented approach. Then, we solve the same problem with words and contrast the solution with the object-oriented approach. Finally, we explain why the word model is better than the object model and briefly examine its implications.


Terms

Word: Words couple data, methods, and grammar rules in the same way that objects couple data and methods.

Action-Oriented Programming: More commonly known as procedural programming.

Coupling: The process of grouping programming elements (e.g., methods and data) so that they must be accessed as a single entity (e.g., objects).

Domain: An area of study like mathematics, networking, physics, or chemistry. Most domains have their own peculiar syntax (i.e., jargon) that maps to concepts or elements in the domain. These elements have both syntactical and semantic relationships.

Grammar Rule: A rule that parses a document, finds a structure, and provides access to the properties of the structure it finds.

Semantic Relationships: The relationships allowed between different concepts in a problem domain. In math, semantic relationships are the relationships allowed between numbers, operations like addition and subtraction, and parentheses. In English, semantic relationships are relationships between the meanings of nouns, verbs, adverbs, and adjectives.

Syntactical Relationships: The literal relationships between different words as seen by an imaginary parser. In math, syntax is the set of relationships allowed between the characters '0-9', operators like '+' and '-', and parentheses. In English, syntax is the set of relationships allowed between the literal text strings of nouns, verbs, adverbs, and adjectives.

 

An Object-Oriented Solution

Readers who are not programmers should be forewarned that the next two sections cover programming examples and as such are necessarily technical. If you are unfamiliar with XML, you may find the section entitled "Why Words are Better than Objects" more useful. If you are familiar with XML, however, the next two sections will provide concrete examples of the differences between objects and words. We will demonstrate these differences by building a simple XHTML syntax.

Let's start by building a parser class for XHTML with the Apache XML Parser from IBM. The following is an XHTMLParser class that parses XHTML documents.

class XHTMLParser
{
public:
    XHTMLParser();
    virtual ~XHTMLParser();

    virtual DOM_Node findXHTML();
    static bool isImage(DOM_Node currentNode);
    static bool isBR(DOM_Node currentNode);
    .......
};

To use an XHTMLParser object, a programmer passes it the filename of an XHTML file and then calls its methods to navigate through the document. To find the XHTML root, the programmer calls the "findXHTML" method, which returns the root of the document.
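
For illustration, using the parser might look like the following sketch. The parse method here is an assumption made for the sketch; the class as shown specifies only the grammar methods.

XHTMLParser parser;
parser.parse("page.xhtml");          // hypothetical entry point: load the XHTML file

DOM_Node root = parser.findXHTML();  // returns the root of the document
// ... navigate onward from the root with the other grammar methods ...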

Now, we need to use the parser to actually do something: that is, we have to create the semantics of the XHTML language by creating a browser. This is accomplished by creating objects that use the parser to perform actions. The "Body" class, for example, would use the parser to cycle through the body of the XHTML document and lay out images and text as it came across them. It would look like:

class Body
{
public:
    Body();
    virtual ~Body();
    virtual void drawImage(image i);
    virtual void drawText(text t);
    virtual void drawLine(height h);
    virtual void processElements();
private:
    DOM_Node fCurrentElement;
    Position fPosition;
    .....
};

The code for processElements would be:

void
Body::processElements()
{
    while (!fCurrentElement.isNull())
    {
        if (XHTMLParser::isImage(fCurrentElement))
        {
            Image image(fCurrentElement);
            drawImage(image);
        }
        else if (XHTMLParser::isBR(fCurrentElement))
        {
            fPosition.incrementX(fLineSize);
        }
        .....
        fCurrentElement = XMLUtil::getNextElement(fCurrentElement);
    }
}

In this way, the Body class cycles through the elements under the "Body" tag in the XHTML document and instantiates objects, inserts images, and updates layout as it comes across new tags. Some of the objects that it creates, like Image, are passed the current element so that they can access the attributes of their node, like width and height.
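
The internals of Image are not shown in this paper; as a rough sketch of what they might look like (the getAttribute/transcode calls follow the Xerces DOM API of the era and are an assumption, not part of the example above):

class Image
{
public:
    Image(DOM_Node currentElement)
    {
        // Read layout attributes off the node handed in by Body.
        DOM_Element& element = (DOM_Element&)currentElement;
        fWidth  = atoi(element.getAttribute("width").transcode());
        fHeight = atoi(element.getAttribute("height").transcode());
    }

private:
    int fWidth;
    int fHeight;
};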

The important thing in this example is how the objects use the Xerces parser to parse the syntax of the document. Note how the Body object uses the grammar methods of the XHTMLParser to cycle through elements and find out their properties. Note also that the "findXHTML" method does not know who its clients are.

 

Solving the Same Problem with Words

Let's solve this same problem with words. In contrast to the object model, there is no central parser class. Instead, the grammar rules on how to find elements and navigate through the document are localized to the words themselves. In this way, words couple the syntax and semantics of solving problems in a domain space.

The Body word, in contrast to the Body object above, would have the grammar rules to recognize "Img" tags coupled with its data and methods:

word Body
{
public:
    Body();
    virtual ~Body();
    virtual void drawImage(image i);
    virtual void drawText(text t);
    virtual void drawLine(height h);
    virtual void runGrammarRules();

    // Grammar rules
    virtual void ImgRule() match "Img" with XML;
    virtual void BRRule() match "Br" with XML;
    virtual void TableRule() match "Table" with XML;
    .....
private:
    DOM_Node fCurrentElement;
    Position fPosition;
    .....
};

The code for runGrammarRules would be:

void
Body::runGrammarRules()
{
    while (!fCurrentElement.isNull())
    {
        ImgRule();
        BRRule();
        ....
        fCurrentElement = XMLUtil::getNextElement(fCurrentElement);
    }
}

When a grammar rule matched an element, it would execute the code in its body. TableRule, for example, would run the following code:

void
Body::TableRule()
{
    Table table(fCurrentElement);
    table.runGrammarRules();
}


Here's where it gets interesting. When the Table word is instantiated, it can also have grammar rules that parse the document. The Table word would look like:

word Table
{
public:
    Table();
    virtual ~Table();
    virtual void drawRow(Row row);
    virtual void drawCell(Cell cell);
    virtual void resizeTable();
    virtual void runGrammarRules();

    // Grammar rules
    virtual void TrRule() match "Tr" with XML;
private:
    TableStructure fTableStructure;
    .....
};

When the method runGrammarRules was called on Table, it would start parsing the document with its own grammar rules and would run the code in TrRule every time it encountered a "Tr" element. In this way, the parsing would be passed from the "Body" word to the "Table" word.

In the word model, the central parser of the object model disappears. Words are a decentralized means of handling syntax in the same way that objects are a decentralized means of handling data. The word model's power is a result of this decentralization.
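
Because the "match ... with XML" syntax above is invented, it may help to see one possible desugaring of a grammar rule into the plain C++ of the earlier example. This is only an illustration of how such a rule can be read, not a prescription for how it must be implemented: each rule pairs a syntactic test with the semantic action the match triggers.

void
Body::ImgRule()
{
    // Syntax: does the current element match the "Img" pattern?
    if (XHTMLParser::isImage(fCurrentElement))
    {
        // Semantics: what a match means for layout.
        Image image(fCurrentElement);
        drawImage(image);
    }
}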

 

Why Words are Better than Objects

We want to explain why the word model is better at solving problems than the object model. Let's start by looking at why the object model is better than the procedural or action-oriented model of programming. Of the many books on the theory of programming languages, Object-Oriented Design Heuristics by Arthur Riel has one of the best explanations of the benefits of the object model. In it, the author discusses how the action-oriented model handles data, what complexity problems it causes for programmers, and why the coupling of data and methods in the object model solves this complexity problem:

[In action-oriented programming, the] underlying data structures are created as part of the implementation of functions[(e.g., f1() and f2() are dependent on a piece of data X)]. The developers of these data structures have realized that some functions can share parts of their underlying data. In the action-oriented world, it is easy to find data dependencies simply by examining the implementation of functions. However, we have a problem if we wish to know the functional dependencies on a piece of data in the system. In the action-oriented paradigm, there is not an explicit relationship between data and functionality ...

[Imagine that] last weekend another developer created f6() without your knowledge. It is also dependent on the data marked X. You make all of your changes to the data X and the functions f1() and f2(). You compile, link, and run the resultant executable and things do not work properly. You spend the next n days trying to find out what went wrong. Anyone who has ever developed any application of reasonable size has undoubtedly run into this problem. Most action-oriented systems have these undocumented data/behavior dependencies due to the unidirectional relationship between code and data. Most action-oriented systems have a spaghetti-like underlying data structure on which all developers hang their code.

How does object-oriented programming control this complexity? While action-oriented software development is involved with functional decomposition through a very centralized control mechanism, the object-oriented paradigm focuses more on the decomposition of data with its corresponding functionality in a very decentralized setting. It is this decentralization of software that gives the object-oriented paradigm its ability to control essential complexity. It is also the cause for much of the learning curve. When the object-oriented community talks about the need for designers to undergo a paradigm shift, it is this decentralization to which they refer. (OODH, pp. 29-32; rearranged to condense information.)

According to Riel, the problem with the action-oriented model is that programs only have a unidirectional relationship between data and methods (that is, methods use data) when they should have a bidirectional relationship. As a result, when a new method is created that has a dependency on existing data, any changes to that data could and probably will have undesirable side effects.

Riel's analysis is a tool for examining programming models to see if they are correctly designed. We will use this tool to examine the object-oriented framework and see if there are any bidirectional relationships between data, methods, timing, semantics, syntax, etc. that have been miscategorized as unidirectional. If there are, we can expect to find that the absence of this relationship leads to undesirable side effects.

It is not very difficult to see that there is a very strong relationship between syntax and semantics: it practically follows from the definitions of the terms at the beginning of the paper. The relationships between the literal tokens '0-9', '+' and '-', and '()' are identical to the relationships between the concepts of number, plus and minus, and parentheses in a math program. In such a program, '0-9' is related to '+' in the same way that numbers are related to addition. Creating this relationship is a necessary part of good programming. A mathematical syntax that allowed '()))66(+' to be entered as a valid problem would be forcing the algorithms that handled addition, numbers, and parentheses to work in ways they were not designed for. A program that allowed this syntax would probably crash.

The question, then, is how the object model represents the relationship between syntax and semantics. Again, this is not a difficult question; the answer can be seen simply by looking at the previous XHTML example. In the example, objects were dependent on the Xerces parser. The "XHTML" object used the "findXHTML" grammar rule in the "XHTMLParser" object to find the "XHTML" tag in the document. In the relationships presented in the XHTML example, it is very easy to find out what dependencies the XHTML object has on grammar rules, but it is not possible, by looking at the "findXHTML" grammar rule, to determine what objects have dependencies on it. The relationship is unidirectional.

The object model does not have any formal coupling between syntax and semantics. The standard way of creating this relationship is to have the semantics depend on the syntax and not vice versa. Given this lack of coupling in the object model, we would expect unintended side effects when one tried to integrate the semantics of different domains, and there are. If another programmer creates an object that uses the Img rule in XHTMLParser and then changes that grammar rule, there will be direct semantic consequences on the Body/Img relationship. The program would then have hidden bugs and the original programmer would have to search through the code to find which of the dependencies on the Img grammar rule were broken.

The central claim of the word model is that the relationship between syntax and semantics is unidirectional in the object model when it should be bidirectional. As a result, integrating the semantics of different domains is hard. Programmers are forced to search through dozens of files to find what the consequences of changing a syntactical relationship are on the semantic relationships in a program. Words solve this problem by coupling syntax and semantics with each other into a single entity.

 

Concluding Remarks

The word model is important because it increases the number of problems that computer scientists can solve. Consider calculus. Calculus uses both the domains of set theory and logic. If a programmer were presented with two libraries, one for sets and the other for logic, and wanted to solve calculus problems, the programmer would have to integrate the semantics of those two domains. In the object model, this would be hard because of the improper coupling between syntax and semantics. In the word model, it would be easy, because creating new semantic relationships between words has no side effects on existing relationships.

Calculus is but one example of a much larger picture. Humans weave intricate thoughts with language. Walk through a college and, in one room, a professor will be solving gambling problems with statistics and, in the next, philosophy students will be analyzing the ontologies of Plato with logic theory. Elsewhere, design students will be mixing different styles of art and physicists will be using the mathematics of topology to study black holes. Human language, by integrating multiple domains to express complex thoughts, can solve problems that today's programming languages cannot. The word model, by eliminating the side effects of integrating the semantics of domains, increases the number of problems that computer scientists can solve.

On Monday, pending comprehension of this paper, the third paper in this five-part series will be released. In the meantime, readers who are interested in the technical details of the word model should read the more theoretical paper. Programmers are also welcome to examine the BlueBox source code when it is made available through CVS on Monday, March 26th (a more official release will occur at the end of April). Until then, we welcome help working on the code and writing and rewriting these papers to make them clearer to programmers. We thank the readers of Kuro5hin.org for their feedback.

 

Short Answers to Common Questions

Why are words important?
Most problems involve integrating the semantics of multiple domains. In mathematics, for example, calculus problems cannot be solved without integrating the domains of set theory and logic. Dynamic HTML is an integration of Javascript and HTML. Topology was integrated into physics by Roger Penrose to look at physics problems from a new angle. The list goes on and on, and most examples integrate substantially more than two domains. The word model, by making these domains interoperable, expands the types of problems that computer science can solve.

Can't object-oriented programming solve every type of problem?
Modern programming languages, be they C, Java, Python, or even Basic, can solve any Turing-computable problem. Programmers can couple grammar rules, data, and methods together themselves in object-oriented languages like Python or in procedural languages like C. In the BlueBox source code, for example, we are currently translating the word structure down to Python classes that couple data, methods, and grammar together. GNOME programmers perform a similar trick by coupling data and methods together in the structured C programming language.

Why doesn't this paper's word model match the one currently used by dLoo?
In the same way that there are different ways to create the syntax of a class, there are different ways to create words. dLoo uses XML for its word structure. This paper presented words in C++ because readers requested examples in a familiar language.

So, what does a natural language browser do?
It doesn't understand English or other languages: that's natural language processing, not natural language programming. Technically, BlueBox reads words that are on the Internet and caches them on the user's machine. Then, when a user comes across a programming language that speaks those words (for example, HTML, regular expressions, mathematics, etc.), the browser is able to run it correctly for the user. Because words do not have side effects on each other, the browser can keep downloading more and more words. In this way, the language the browser understands becomes richer and richer.


Related Links
o theoretical paper


The Word Model: A Detailed Explanation | 166 comments (144 topical, 22 editorial, 0 hidden)
No clearer than before (5.00 / 1) (#1)
by Paul Johnson on Fri Mar 23, 2001 at 05:19:03 PM EST

Sorry, but this whole idea is still no clearer than before. The example suggests that some kind of rule-based pattern matching is going on here. It sounds a bit like a parser-generator such as YACC, but without the shift-reduce rules. Is this the case? Or are we looking at something more like Haskell or Prolog?

Paul.
You are lost in a twisty maze of little standards, all different.

Not like YACC, Prolog, or Haskell (none / 0) (#2)
by nile on Fri Mar 23, 2001 at 05:37:26 PM EST

Words couple grammar rules, methods, and data together in the same way that objects couple data and methods.

The parsing that you mention is decentralized. Each word parses its own piece of syntax. The central parser of the OO world disappears.

Does this clear things up?

Nile

[ Parent ]
Also confused (a bit) (none / 0) (#5)
by BigZaphod on Fri Mar 23, 2001 at 06:27:50 PM EST

So, does that mean that, in the case of the example you gave above, each Rule() function would actually be an implementation of a parser? But that it would be restricted to caring only about its purpose?

So, the IMGRule() function would take a node and check it to see if it matches the IMG tag, and if it does, it would handle it right then and there, right? And you're saying that this is different from OO because in OO you would generally build a single parser that, once it found a tag, would pass it off to the right function using something like a switch/case statement, right?

That can't possibly be the only difference here, because that's really nothing more than changing (very slightly) how you look at the problem. For example, I've built tag parsers/handlers for Apache modules in C, and what I do is parse out my standard tag format (in one case it was just like HTML, so I was parsing my special tags like <silly_tag> out of docs that had <html> tags in them as well). What I would do is, when I pulled out a complete <---> tag, I would take the name part out and simply check it in an array that contained tag names and a function pointer to the handler. If I found a match, the function pointed to by that tag would be called and the whole tag would be passed in. After that point the parser no longer cared what happened.

I'm really not seeing how this is much different. Basically, all I see is that you would be doing mostly the same thing (in effect). You are pulling out the tag and making a node object out of it. Then, you just call all the handlers (Rule() functions) and let them deal with it. If a handler doesn't know what to do with the current tag, it just returns. Simple enough. But that's not really much different from just looking it up in a list and directly calling the right function, is it? The only possible difference, in the case of words being called in a long line like that, is that you could have more than one word handle a given node/tag and no other rule/word function would need to know. But that could be easily done if I just continued to search through the list of function pointers once the first call returned.
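
Roughly, the lookup I mean is this (sketched here in C++ terms rather than my actual C, with illustrative names):

#include <map>
#include <string>

typedef void (*TagHandler)(const std::string& wholeTag);

static std::map<std::string, TagHandler> handlers;  // tag name -> handler

void dispatchTag(const std::string& name, const std::string& wholeTag)
{
    std::map<std::string, TagHandler>::iterator it = handlers.find(name);
    if (it != handlers.end())
        it->second(wholeTag);  // after this point, the parser no longer cares
    // unknown tags just fall through
}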

Anyway, I voted +1 because I want to talk about this. But I am still confused as to what the big deal is here. :-)

"We're all patients, there are no doctors, our meds ran out a long time ago and nobody loves us." - skyknight
[ Parent ]
Re: Not like YACC, Prolog, or Haskell (none / 0) (#6)
by alisdair on Fri Mar 23, 2001 at 06:29:28 PM EST

The parsing that you mention is decentralized. Each word parses its own piece of syntax. The central parser of the OO world disappears.

Why? What does that gain you, apart from spreading your parser code all throughout the program (or class, or sentence, or whatever you call it).



[ Parent ]
In OO, you spread data all over the place. (5.00 / 1) (#25)
by nile on Sat Mar 24, 2001 at 02:09:58 AM EST

I know that this is difficult to understand and your comments are very helpful because they point to what needs to be explained.

In OO, data is decentralized because it is coupled with the methods that it relates to. There is not a single file that all of the methods in a program access: that would be a pretty bad design, actually.

In words, grammar rules are decentralized because they are coupled with methods and data to create a new fundamental unit of programming. The decentralization is a result of this coupling, but if you don't catch the coupling, it does look like one is spreading the parser all over the place.

Nile

[ Parent ]
OK, I think I got it (5.00 / 2) (#10)
by nymia_g on Fri Mar 23, 2001 at 08:00:34 PM EST

Finally, after reading the document, I got what you were trying to convey. But I won't explain it here, since it's too hard for me to do that. Instead I'll ask you some questions regarding words.

Question #1:

The word model is interesting. But why does it benefit the programmer?

Question #2:

The word model focuses on words and lets the grammar rules determine which action to take. Now, how does this model handle polymorphic words?



Answers to (1) and (2) (none / 0) (#11)
by nile on Fri Mar 23, 2001 at 10:42:25 PM EST

(1) How does the word model benefit the programmer?

Let's give both a theoretical and a practical answer to this question:

Theory

The object model creates spaghetti-like relationships between concepts in a domain. In an object-oriented mathematics program, for example, the relationships between numbers, addition, multiplication, etc. are scattered throughout the program. This occurs because of the lack of coupling between syntactical and semantic relationships.

Practical

Say you want to integrate two libraries that two different programmers wrote. One is a Javascript interpreter. The other is a Web browser that can only read plain HTML (i.e., sans Javascript). In the object-oriented world integrating these two would be difficult because you would have to change the syntax of the browser so that it would allow Javascript to be embedded in it. As noted, though, in OO, this could have unintended side effects on the semantic relationships between tables, images, etc.

What words offer to the programmer is the ability to seamlessly integrate syntaxes and semantics without side effects. In the word model, integrating Javascript under the Script element is one change to one grammar rule in one word. All the Script word has to do is pass off the job of parsing to the Javascript word and use its access to the Javascript word to access its methods and data.

Now, for the second question: what is a polymorphic word? Here's a brief theoretical example taken from another paper.

The encapsulation of grammar rules in words means that they can be inherited by their children in the same way that data and methods can be inherited by objects in object-oriented programming. Consider a programmer building an object-oriented version of Javascript with words like "Object," "for," "if," and "return." Users of the language would not need any knowledge of grammar rules to create objects. They could simply derive their new classes from the "Object" word and automatically inherit the grammar rules, data and methods that make it an object in the Javascript syntax. This inheritance and the resulting polymorphism make it easy to deal with irregular words simply by overriding the grammar rules of their parents.
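
In the pseudo-C++ word syntax of this series, that might look like the following sketch (the rule names are hypothetical, not dLoo's actual structure):

word Window : public Object
{
public:
    // Window inherits the grammar rules, data, and methods that make
    // it an object in the Javascript syntax; no grammar code is needed.
    virtual void draw();
};

word Array : public Object
{
public:
    // An irregular word overrides only the inherited rule that differs.
    virtual void IndexRule() match "[" with Javascript;
};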

More information on this can be found in the slightly more theory-heavy "The Word Model," which can be found here (note this is not the earlier posting to Kuro5hin).

Thanks for the good questions. Do these answers make sense to people?

Nile

[ Parent ]
You keep using that word... (none / 0) (#17)
by KnightStalker on Sat Mar 24, 2001 at 12:51:23 AM EST

...but I don't think it means what you think it means.

Say you want to integrate two libraries that two different programmers wrote. One is a Javascript interpreter. The other is a Web browser that can only read plain HTML (i.e., sans Javascript). In the object-oriented world integrating these two would be difficult because you would have to change the syntax of the browser so that it would allow Javascript to be embedded in it. As noted, though, in OO, this could have unintended side effects on the semantic relationships between tables, images, etc.

Why? It would be nice if you provided some basis for these allegations you keep making. Here's one easy way to allow this, even in straight C:

The browser defines an HTMLNode structure that contains a pointer to the containing document, type information (a string describing the type, perhaps), pointers to subnodes, a void pointer to node-specific information, and a pointer to a function that processes this node. The browser implements many node types this way, including tables, images, horizontal rules, etc. The browser also implements functions that allow the node processing function to modify its containing document and the nodes therein. The browser contains an internal registry that maps node types to functions that create the node structure. The browser exports a plugin interface to allow third parties to implement unpredicted node types, which register themselves and can provide implementations for types the browser doesn't recognize, by modifying the document when asked to process themselves.

Is that so hard?

[ Parent ]

It's possible to do words in C (none / 0) (#22)
by nile on Sat Mar 24, 2001 at 01:33:03 AM EST

I mapped out your example and it looks to me like you're almost using words in C, which is perfectly legal (in the same way one can do OO in C).

Looking at the node we have:

Node ---- pointer to document
          type information
          pointers to subnodes
          pointer to node-specific information
          pointer to function that processes node

That last is the key. It means you allow the job of parsing to be passed on to a third party at any time, who also handles the semantics of what that parsing means (as I understand it). The distance between related grammar rules, data, and methods may not be completely eliminated here, but it sounds like it is significantly reduced. So, we would expect the problems discussed in this paper to be mitigated.

Again, you can use the word model in structured or OO languages without problem. As your example shows, you should.

Nile

[ Parent ]
Need an example that's not parsing related (none / 0) (#55)
by Gat1024 on Sat Mar 24, 2001 at 10:37:28 PM EST

I think things are so unclear because you're using a parsing related example.

Say you want to integrate two libraries that two different programmers wrote. One is a Javascript interpreter. The other is a Web browser that can only read plain HTML (i.e., sans Javascript). In the object-oriented world integrating these two would be difficult because you would have to change the syntax of the browser so that it would allow Javascript to be embedded in it. As noted, though, in OO, this could have unintended side effects on the semantic relationships between tables, images, etc.
Parsing and grammars are well understood. The above problem is not a problem at all, especially if you're using a program like Flex. Even if you weren't using Flex and you coded your parser up with the worst dependencies and coupling you can imagine, it would still be easy to merge the two. You indicate this below:

In the word model, integrating Javascript under the Script element is one change to one grammar rule in one word. All the Script word has to do is pass off the job of parsing to the Javascript word and use its access to the Javascript word to access its methods and data.
All you need to do is modify your HTML parser to recognize the <SCRIPT> tag, suspend that code, and execute the parser for Javascript. The Javascript parser must be completely self-contained -- as it would be if it was a separate library to begin with. Of course you would have to change the Javascript parser to recognize the </SCRIPT> tag and return control to the calling parser. The script tag is simply an "escape" code into a completely separate parser. Making these two modifications is hardly likely to stomp on any other dependencies. You can even say that the script tag can appear anywhere in an HTML document and still there would be no unintended side effects.

The reason is that any unambiguous grammar is basically a DAG. Take the DAG, replicate all of the nodes that have more than one parent so that every node has only a single parent, and you wind up with a tree. No matter how badly you code the parser, it will basically be a tree walker that attempts to find a path through the tree (forward, backward, depth first, breadth first, loop the loop -- it doesn't matter). Merging two trees is simply a matter of choosing an "escape" node and making that node in tree1 be the root node in tree2.

Most of the people here know this, so they're having trouble with the concept when you rely on parsing so heavily. I think you should provide more meat and jump ahead a little. We need a less trivial example or maybe a case study where you've solved a problem using your model. Something that was extremely difficult or impossible to solve using any other technique. And don't worry about talking up AI. If that's where the word model shines, then you should really start with that.



[ Parent ]
An addition example (none / 0) (#68)
by nile on Sun Mar 25, 2001 at 06:38:28 PM EST

Look under the post that has several bolded criticisms (very helpful by the way) and you'll see an addition example there.

Here's a quick repost, though it's not in the original context:



Here's a definition of a word:

Word
----

Self-identify function (i.e., a parser to recognize its symbol)
Definition (data/methods from object land)
Rule/relationships: the rule says how the word's symbol can be syntactically related to other symbols; the relationship defines what that means.

So, how does one approach problems with words? Well, first one identifies the elements of a domain. In arithmetic, for example, the elements are subtraction, addition, and numbers.

Now, one writes a word for each of these elements that can recognize itself. That is, the addition word can recognize '+', the subtraction word can recognize '-', and the number word can recognize '0-9'.

Now, one defines what each word means by writing data and methods. In the addition and subtraction words, one has to write addition and subtraction methods, and the number word would have a value for the number it actually parses.

Next, one writes the rule/relationships so that these words can be connected to each other. The rule part says literally how they can be written out. So, the rule part in the number word that says the plus word can come next literally means that a programmer using them could write: 8 +, or 3 +, etc.

Now, the relationship part says what that means. In this case, it means pass your value on to the plus sign and tell it to parse its relationships.

Now, the plus sign will have rules that a number can be to the right of it and a relationship that says get the value from those numbers. It will then add that value to the value it already has to form a sum.

Let's say a programmer writes 3 + 4. Well, then the number word will recognize 3 and store it as a value. It will then check its rule/relationships and ask them to see if there are any matches. The plus will say "I match" and the number word will pass on the three. The plus will then check its rule/relationships for a match and the number word (4) will say "I match." The plus will then take the 4's value and add the two together to form a sum.

As you can see, this is a very different way to frame problems. Its power, though, is that one can easily integrate concepts from other problem domains by simply creating new rule/relationships (i.e., coupled syntax/semantics). There are no side effects from creating new ones.
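
Here is a rough sketch of that walkthrough in plain C++ (the class shapes and names are just for illustration; this is not BlueBox's actual word structure):

#include <cctype>
#include <iostream>
#include <string>

struct Plus;

struct Number
{
    int value;

    // Self-identify function: does the text at 'pos' start a number?
    static bool matches(const std::string& text, std::string::size_type pos)
    {
        return pos < text.size() && std::isdigit((unsigned char)text[pos]);
    }

    // Definition plus rule/relationship: read our digits, then check
    // whether the plus word matches what comes next.
    void parse(const std::string& text, std::string::size_type& pos);
};

struct Plus
{
    int sum;

    // Relationship: a number may follow; fold its value into the sum.
    void parse(const std::string& text, std::string::size_type& pos)
    {
        ++pos;  // consume '+'
        if (Number::matches(text, pos))
        {
            Number rhs;
            rhs.parse(text, pos);
            sum += rhs.value;
        }
    }
};

void Number::parse(const std::string& text, std::string::size_type& pos)
{
    value = 0;
    while (pos < text.size() && std::isdigit((unsigned char)text[pos]))
        value = value * 10 + (text[pos++] - '0');

    if (pos < text.size() && text[pos] == '+')
    {
        Plus plus;
        plus.sum = value;       // pass our value on to the plus word
        plus.parse(text, pos);
        value = plus.sum;       // "3+4" evaluates to 7
    }
}

int main()
{
    std::string problem = "3+4";
    std::string::size_type pos = 0;
    Number n;
    n.parse(problem, pos);
    std::cout << n.value << std::endl;  // prints 7
}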



cheers,

Nile

[ Parent ]
Somewhat better, but still needlessly thick (5.00 / 1) (#14)
by KnightStalker on Sat Mar 24, 2001 at 12:17:50 AM EST

If I understand correctly, word programming consists of this:

1) creating clearly defined input tokens (in this case, XML data)

2) unambiguously associating methods with those tokens

for the purpose of

3) using generic code to operate on unpredicted data types.

Am I right?

Assuming I'm right, this can be addressed in several ways with existing methodologies. When linking against provided source code, one can use parameterized types (i.e. template classes in C++) to provide this functionality. You create a template class logic<T> and instantiate it with logic_object = new logic<ontologies_of_Plato>;. Then you can use the functionality provided by logic on ontologies, sets, bananas, whatever. Or, you can use polymorphism and abstract inheritable types (interfaces in Java, pure virtual base classes in C++). Create your logic class to operate on pointers to LogicalObject which is implemented by bananas, Plato's ontologies, etc. Again, generic operations on unpredictable types.
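
To sketch the first of those (the types and the member-of operation are invented for the example):

#include <vector>

// A generic logic<T> template class: logic operations over any element
// type T, with a member-of test standing in for the rest.
template <class T>
class logic
{
public:
    bool memberOf(const T& element, const std::vector<T>& set) const
    {
        for (typename std::vector<T>::size_type i = 0; i < set.size(); ++i)
            if (set[i] == element)
                return true;
        return false;
    }
};

// Instantiate it with any type at all:
//   logic<ontologies_of_Plato>* logic_object = new logic<ontologies_of_Plato>();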

When one must deal with code provided in compiled form, component technologies such as COM and CORBA provide this functionality. A COM logic object can operate on a COM ILogicalObject interface which is implemented by what-have-you. Libraries can then be glued together with COM- or CORBA-aware scripting languages.

This ain't exactly rocket science, and anyone who could understand your column should be bored to tears with my comment. So what don't I understand? What does word programming offer that these methods do not?

Not right, but maybe this will help (none / 0) (#16)
by nile on Sat Mar 24, 2001 at 12:46:26 AM EST

Good try. This stuff is hard. It sounds to me like you're defining templates, which can't solve the problems that words can. I'll give an example of this down below. Let's first look at your definitions and modify them. Change:

1) creating clearly defined input tokens (in this case, XML data)
2) unambiguously associating methods with those tokens

To:

(1) Writing grammar rules that parse a portion of text (emphasis on rules, not tokens).
(2) Coupling these grammar rules with methods and data (the association needs to be bidirectional and include data).

And we have the word model! (3) has nothing to do with it.

Now, let's show how templates can still have unintended side effects when someone tries to integrate them. In this example, we will imagine that someone tries to integrate a Chemistry template with a Logic template where both use the Element token in a different semantic way. See the original Word model posting for the context.

The following is my understanding of what you are doing. Let's say we write a logic and a set template so that we can relate elements between both. One of those relationships (borrowed from an earlier commentator) might be:

....
LogicalElement<Element, RuleMemberOf> le;
SomeLogicalSet<Element> set;


Now, a quick way to analyze this to see if it is identical to words is to look at where the grammar rules exist. In this example, the syntax rules are expressed in the above global file and the semantics are encapsulated in the templates.

The problem that a programmer writing this code will face is that the relationships above are expressed globally and, as a consequence, new relationships could have unintended side effects on existing ones: that's the point of the paper. The template example works with just a few rules, but let's now imagine another chemistry template:

template<class T, class Y> class ComplexMolecule
{
    // ... stuff
public:
};

The syntax of chemistry and material science could then be integrated with:

#include <...>
ComplexMolecule<Element, AnotherElement>
Material<Element>


Now, let's say that we wanted to prove logical claims about sets of molecules. This would require integrating the two sets of relationships:

#include <words>
LogicalElement<Element, RuleMemberOf> le;
SomeLogicalSet<Element> set;
ComplexMolecule<Element, AnotherElement>
Material<Element>


Notice that Element has two different meanings here. This is the exact problem words are trying to solve.

Nile

[ Parent ]
Namespaces (none / 0) (#20)
by KnightStalker on Sat Mar 24, 2001 at 01:15:22 AM EST

I'm convinced that you actually have something here. Nobody would put this much effort into a troll. If this whole "word programming" thing is a hoax, it's by far the best one on a weblog that I've ever seen, and you ought to get a medal of some kind. :-)

However, the problem here can be neatly resolved with namespaces in C++. I don't know whether Java has a similar feature. In any case, you can no doubt get around it, as you can in C, by writing wrappers around those function names that conflict. In C at least, a modern compiler should optimize away any thusly-induced inefficiency. (Maybe it won't, but it should. :-) I'm sure I don't need to give an example of either.

I don't, however, understand how word programming would separate a logical Element from a chemical Element, except by processing the context, and that context is essentially what namespaces provide, but unambiguously to the programmer. Wouldn't word programming have to provide an equivalent feature if it is to avoid ambiguity?

[ Parent ]

Clever. You can do words in C++, too (none / 0) (#24)
by nile on Sat Mar 24, 2001 at 01:53:16 AM EST

Damn, you are very creative and clearly a good programmer.

Let me see if I understand this right.

You write a file that uses a set of templates to express the syntactical relationships between one domain and another. By itself, this would be a problem because, as mentioned above, all of the template relationships are global.

To fix this, you make them local to the file. Notice, though, that now we have all of the related grammar rules, data, and methods in one file and we are now binding the grammar rules to them in the same way we bind data to methods in objects. This looks very close to a word to me. So, there is no problem here. I think you did the same thing in C too.

Nile

[ Parent ]
Seems to be awfully generic (none / 0) (#26)
by KnightStalker on Sat Mar 24, 2001 at 02:30:39 AM EST

Damn, you are very creative and clearly a good programmer.

Thanks. Now if only I could convince everyone in just a few short paragraphs, I'd have a job by now. :-)

Still, I fail to see what word based programming uniquely offers me as a programmer, if I (and what I've proposed is a technique as ancient as C; it's not a product of my creativity) can create words or word-like constructs this easily.

You state in another reply to a comment of mine that word-based programming doesn't map to English words because it's also meant to represent mathematical concepts, chemical concepts, potted meat, etc. However, these other things it's supposed to represent cannot be described as words, and therefore "word programming" is, IMO, an inappropriate term. They can, however, be described as objects. I thus conclude that word programming is nothing more than a method (and an idiomatic method at that) of applying the concepts of object-oriented programming.

I don't quite understand all of your comments about grammar. You've said that my two examples both encapsulated grammar rules along with the methods and data, yet that was certainly not my intention and I still have no idea what the grammar is that I encapsulated. Perhaps it's just something I already understand, to which you're applying a different word than I'm used to seeing applied to it.

You also haven't commented on component technologies. Since I can take a COM object written in C++, no matter what it is, make it implement the ICanOfSpam interface, and send it to another COM object implementing the ISingingViking interface, we might have a banana and a rabbit talking to each other in language both objects can understand. Doesn't this summarize your word-based methodology? It's as old as the hills, and it more or less maps to the same concept in C++ or Java, just with pre-compiled programs.

Also, who is "we"? Your paper almost sounds like the unabomber's manifesto :-)

[ Parent ]

It's a new programming model so it is generic (none / 0) (#32)
by nile on Sat Mar 24, 2001 at 02:24:26 PM EST

You can do object-oriented programming in C and, in fact, you should. Coupling data and methods is a good design pattern for all problems. C++ just enforces this relationship.

You can also couple data, methods, and syntactical relationships in any language, too. The question is, to what set of problems does this better way of doing things apply? If there are only a few problems with syntactical relationships, then it's just a design pattern. Nothing exciting.

So, how many problems have relationships between elements? It's not hard to see that they all do. Consider an ATM. It has relationships between its money feeder, its button interface, the customer, and the bank. Consider a calculator. It has relationships between numbers, operators, and parentheses. Consider a web browser. It has relationships between navigation, history, user interface, etc. All programs have syntactical and semantic relationships.

This means that, like the object model, the coupling of grammar rules, data, and methods in the word model applies to the entire domain of programming. The proper name for a pattern that applies to all programming problems is a programming model.

Does this make sense?

Nile

[ Parent ]
We are dLoo (none / 0) (#34)
by nile on Sat Mar 24, 2001 at 02:48:09 PM EST

The company behind BlueBox, which is being open sourced, consists of more than just me. Other people at the company edit and make dozens of suggestions on the papers that are being written, so I was using "we" to give them credit for their help.

Nile

[ Parent ]
Answers to other questions. (none / 0) (#65)
by nile on Sun Mar 25, 2001 at 04:49:15 PM EST

Sorry, I missed your other questions the first time through.

As for components, it's not really the same thing. Yes, COM, CORBA, etc. allow interlanguage interoperability, but they don't allow rich integration. For example, you can't take the regular expressions from Perl and insert them in C++; you'll get a compiler error. You can reuse a Perl object, but that's about it. Does this make sense at all? It's not "computer language" interoperability, but the ability to richly integrate the syntax and semantics of different domains. As in: I have a math library and a modeling program, and now I can do physics problems visually. You have to seriously rewrite programs, even with components, to do this in the OOP world.

I don't quite understand all of your comments about grammar. You've said that my two examples both encapsulated grammar rules along with the methods and data, yet that was certainly not my intention and I still have no idea what the grammar is that I encapsulated. Perhaps it's just something I already understand, to which you're applying a different word than I'm used to seeing applied to it.

Ok, here goes. Syntax is the literal way in which things are allowed to be put next to each other. For example, if a program allows '0' and '+' to be put next to each other, that's syntax. A grammar rule is the code in the program that enforces/allows that. Now the semantics is what that relationship means. In a math program, that would mean zero plus .... There would be algorithms to implement that: those algorithms would be the semantics.

So, when I say that you are moving the grammar rules closer to the data and methods, I mean that you are moving the algorithms that define valid relationships closer to the algorithms that define what those relationships mean. Reread the C example you gave and tell me if this makes any sense. If it doesn't, let me know and I'll try again.
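
In code terms, the distinction is just this (a deliberately trivial illustration):

// Grammar rule: enforces which adjacencies are allowed (syntax).
bool plusMayFollowNumber(char next)
{
    return next == '+';
}

// Semantics: what the allowed adjacency means.
int add(int lhs, int rhs)
{
    return lhs + rhs;
}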

cheers,

Nile

[ Parent ]
Thoughts (4.50 / 2) (#15)
by slaytanic killer on Sat Mar 24, 2001 at 12:44:48 AM EST

Hello, there are some things that come to mind while reading your paper.

1) Penrose's linking of topology to physics is an old practice, similar to Kepler using crude calculus to find the volume of wine barrels. What happens is that mathematics is the study of consequences if something fits certain criteria; and when you find something that ends up fitting these assumptions, then you suddenly have a wealth of properties that some mathematician dreamed up. If you can call something a "field," then you have centuries of algebra at your disposal.

A lot of work is done, mapping things from one domain into another. A big discovery often is that one thing also happens to be another.

In this case, it seems like Words are trivially implemented in normal object-oriented programs. Event models. When you wrote the improved version of your program using Words, it seems as if you finally just used good design. What if the first version was just naively implemented?

I do think a lot of work should be done to create languages to map better onto other fields. All too often we care more about accessing hardware, and we end up not creating durable languages. But then again, maybe there is some point where a language is sufficient, and reasonably accessible to other concepts. Perhaps, object-oriented programming is well-understood enough yet for us to find The Next Big Thing, and we make errors in thinking there are flaws in OOP, when they are in fact handled smoothly.

But I am always happy for someone to prove this wrong. It is inevitable.

2) If this really is deeper than I think, then the cost may come as an increase in internal complexity. Basically what you are arguing for is better notation. But is that compatible with the introspective nature of "real-world" computing, which is all about algorithms? Will this new method reduce orthogonality? I know that the trend is toward greater abstraction, away from the bare metal. However, there are many, many situations where the time to execute and space constraints remain inputs into the program's operation. This cannot be overstated.


Quick example of a bug with "Words" (none / 0) (#18)
by slaytanic killer on Sat Mar 24, 2001 at 12:58:56 AM EST

Perhaps, object-oriented programming is well-understood enough yet for us to find The Next Big Thing...
There should be a "not" in there somewhere. I'll blame K5 for leaving it out.

[ Parent ]
Not event models or notation, actually (none / 0) (#19)
by nile on Sat Mar 24, 2001 at 01:11:00 AM EST

Trying to reduce words to something that already exists is a very good practice. It's a good way of explicating the differences. Let's deal with the second part first because it is more important.

2) If this really is deeper than I think, then the cost may come at an increase in internal complexity. Basically what you are arguing for is better notation.

No, absolutely not. Candy grammars completely miss the role that syntax plays in languages. Words decrease the essential complexity of programming because they eliminate the unintended side effects of adding new dependencies on existing grammar rules. This makes it possible to solve more complicated problems.

In this case, it seems like Words are trivially implemented in normal object-oriented programs. Event models.

I think I see the problem here. You've correctly noticed that words pass the flow control around through grammar rules. That is good. However, that is not all that is going on. What is really going on is that they are passing the job of parsing on to different words. And this isn't really the important part of the model. What's important is the coupling between grammar rules, data, and methods. This coupling does not exist in Event Models.

But I can see where the confusion is coming from. You're thinking of the methods and data as being coupled with the tag in the document that is being parsed. Instead, they are coupled with the grammar rule that is doing the parsing. The data and methods are not in the document being parsed, but in the program doing the parsing. Look at the examples again. Does this make sense?

cheers,

Nile

[ Parent ]
Hmm... (none / 0) (#28)
by slaytanic killer on Sat Mar 24, 2001 at 11:09:49 AM EST

I'm looking back at the article and thinking either "design patterns" or "you're trying to define a new unit of program."

Ok, I see that OOP may not necessarily be the best way to express Words. It still looks to me like Words are a map between some "rule" and an object (or at least, something which could be expressed as an object).

So since you want to have compositing rules, you create sentences out of Words. And therefore the state you modify is the state of that sentence, and not of others, keeping disastrous side effects to a minimum..?

I tend to see methods in terms of side effects and return values, and I take care of the side-effect problem through documentation. Using regular, well-known coding styles (design patterns, beans) is part of this. I don't see that using Words will relieve this need. It would seem that the documentation would lie in the words themselves, but any sentence of sufficient complexity creates its own state that can be damaged, and extra documentation would still be needed.

[ Parent ]
Yes, it's a new unit of programming. (none / 0) (#31)
by nile on Sat Mar 24, 2001 at 02:16:53 PM EST

You're definitely on the right track here. Words are a new unit of programming because they couple grammar rules, data, and methods.

You're also thinking right when you start wondering about side effects. What words do is eliminate side effects by coupling related programming elements in the same way that objects do.

Does this make sense?

Nile

[ Parent ]
A different tactic (4.00 / 1) (#21)
by KnightStalker on Sat Mar 24, 2001 at 01:26:07 AM EST

Perhaps, to enrich our understanding, you could give an example of different types of words. If the metaphor is to have any significance, different types of "words" should correspond roughly to nouns, verbs, adjectives, prepositions, etc. How would one write a complete sentence using word-based programming?

Programming examples (none / 0) (#23)
by nile on Sat Mar 24, 2001 at 01:43:46 AM EST

If this were natural language processing, that would be a very good route to take. It's a programming model, though, so it's better to give programming examples. BlueBox will be a fairly large one and hopefully will answer several questions.

As to the metaphor, different domains have different syntaxes. Integrals, for example, are words, but in calculus syntax their relationships are determined not by the rules of English but by the rules of math. In this way, words are broader than the syntax of English. Also, I try to avoid giving English examples, because I am afraid that this will be confused with natural language processing.

Nile

[ Parent ]
How does it differ from standard design patterns? (4.66 / 3) (#27)
by bgalehouse on Sat Mar 24, 2001 at 04:13:26 AM EST

In particular:
  • How does this differ from the 'parser' design pattern from the GoF book?
  • How is this so different from the lambda design pattern - the idea of creating behavioral objects dynamically, possibly out of other behavioral objects? Functional programming is all about this.
Basically, it looks like you are building a data structure to match the input, then spreading functionality over it to perform the correct actions for the data structure. I don't see this as a particularly new concept. Important maybe, worthy of more use maybe, but not particularly new.

Actually, it's not a design pattern. (none / 0) (#29)
by nile on Sat Mar 24, 2001 at 02:07:33 PM EST

This might be a little difficult to explain, but let's give it a try.

A design pattern is a way of solving a particular type of problem. The idea behind design patterns is that whenever you come across a particular problem, you can benefit from past architectural work by using a well-tested solution.

The lambda design pattern that you mentioned, for example, only applies to certain problems. So, to see if words are a design pattern, we have to look and see what types of problems the word model solves better than traditional models. This means we have to specifically state which problems have semantic and syntactical relationships that will benefit from coupling.

But all problems have semantic and syntactical relationships. Consider an ATM. It has relationships between its money feeder, its button interface, the customer, and the bank. Consider a calculator. It has relationships between numbers, operators, and parentheses. Consider a web browser. It has relationships between navigation, history, user interface, etc. All programs have syntactical and semantic relationships. This means that, like the object model, the coupling of grammar rules, data, and methods in the word model applies to the entire domain of programming.

The proper name for a pattern that applies to all of programming problems is a programming model. We call the object model, for example, a programming model and not a design pattern because it applies to all problems.

Does this make sense?

cheers,

Nile

[ Parent ]
A little. (none / 0) (#36)
by bgalehouse on Sat Mar 24, 2001 at 02:52:37 PM EST

I guess I'm starting to see it more as a flavor/style of OO programming. A style in which more emphasis is placed on making objects which match the data and then putting functionality within those objects.

But if you didn't put functionality with the data, it wouldn't be OO programming.

So I still just find myself seeing this as an aspect of good OO design style. Perhaps one that is hard to do correctly, and so worthy of discussion. But not something particularly revolutionary.

[ Parent ]

OOP is just a style of procedural programming (none / 0) (#39)
by nile on Sat Mar 24, 2001 at 03:11:03 PM EST

I think you're actually close to grasping it. I posted the subject line (which is false) to draw out the differences between words and OOP. Obviously, it's not true that OOP is just a procedural programming style. It's a new programming model that arises from proper coupling. One can do OOP in C, but then one is no longer doing procedural programming.

The same is true about words vs. objects. The goal isn't to make the object match the data. Rather it is to make the rules by which objects can relate to each other (i.e., the syntax) be in the objects themselves. The resulting unit of programming consists of coupling between syntactical relationships, methods, and data. Now, one can do this coupling in an OOP language, but then one is no longer doing object-oriented programming.

Thanks for responding. Does this make it any clearer?

Nile

[ Parent ]
Umm... coupling is an integral part of behavior (none / 0) (#48)
by bgalehouse on Sat Mar 24, 2001 at 05:31:30 PM EST

I mean, OO collaboration diagrams are all about describing how objects couple to each other.

Putting behavior with the data/objects is the basic principle of OOP. So, I'm still not sure I see it as something other than OOP. Good OOP. OOP better than a lot of what is out there. But OOP nonetheless.

[ Parent ]

True, but that's not the coupling being discussed (none / 0) (#49)
by nile on Sat Mar 24, 2001 at 06:10:32 PM EST

You're right that semantic relationships are in objects. That's exactly right. What's not there, however, are the syntactical relationships. Let's explain the difference between the two:

The syntax relationships are the literal ways that the characters '0-9,' '+-*/,' and '()' can be put together. These are expressed through grammar rules.

The semantic relationships are how the concepts in the domain relate to each other, that is, how numbers, operators, and parentheses relate to each other. This is expressed through data and methods.

Now, OOP languages do put behavior - i.e., data and methods - in a single place. But they do not put the syntax relationships there also. In fact, most syntax relationships are statically fixed when the compiler is written. In C++, for example, you can't add new syntax relationships so that the language can understand "Perl" like you can in Lisp or Pliant.

The coupling being discussed is not the coupling between data and methods, but the coupling between data/methods (i.e., semantics) and grammar rules (i.e., syntax).
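
To make the contrast concrete, here is a minimal C++ sketch (the names are mine and hypothetical, not from any real library). The plain object carries only semantics; the word carries the same semantics plus the grammar rule for its own symbol:

#include <string>

// Plain object: semantics only. The syntax of "3 + 4" lives in a separate parser.
class Number {
    int value;
public:
    explicit Number(int v) : value(v) {}
    Number plus(const Number& other) const { return Number(value + other.value); }
};

// Word-style: the same semantics, with the grammar rule carried alongside.
class NumberWord {
    int value;
public:
    explicit NumberWord(int v) : value(v) {}
    // Grammar rule: this word's symbol is a run of the characters '0'-'9'.
    static bool matches(const std::string& token) {
        return !token.empty() &&
               token.find_first_not_of("0123456789") == std::string::npos;
    }
    NumberWord plus(const NumberWord& other) const {
        return NumberWord(value + other.value);
    }
};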

Does this make sense?

Nile

[ Parent ]
It's a programming model. (none / 0) (#30)
by nile on Sat Mar 24, 2001 at 02:12:46 PM EST

See the above comment. I posted a separate comment because the subject line was too short.

Nile

[ Parent ]
Interesting discussions (4.00 / 1) (#33)
by nile on Sat Mar 24, 2001 at 02:31:38 PM EST

I want to thank everyone who is commenting on this and point out some of the more interesting discussions that are going on.

The first is the question of whether this is a design pattern or a new programming model to replace the object model. The key to deciding which it is, is whether it applies to only some problems or to all. Look below and you'll see a discussion around this issue.

The second set of discussions center on whether one can do the same thing in C, C++, or other languages. The claim of this article is that it is impossible to do so unless you couple the syntactical relationships with data and methods (just as it is impossible to do OOP in C unless you couple data and methods). There are several threads on this issue.

There are other discussions as well. These two stick out at the moment.

Nile

Meta-comment (4.00 / 1) (#35)
by rusty on Sat Mar 24, 2001 at 02:49:09 PM EST

Thanks for your energy and attention in replying to people here, nile. I have to admit, I'm not getting much of it either, but you have been very patient and energetic in trying to help people figure out what you're saying. Here's hoping we can get something posted about this eventually. :-)

____
Not the real rusty
[ Parent ]
Anyone have any ideas? (none / 0) (#38)
by nile on Sat Mar 24, 2001 at 03:03:37 PM EST

Thanks Rusty for the encouragement! I'm really grappling with how to explain a new programming model. What's the best way to explain OOP to a programmer who has used C all their lives and has never seen an OOP language? How do you explain the differences between OOP and structured programming?

If you give concrete examples, OOP is mistaken for a design pattern....

If you explain the benefits of coupling data and methods, the programmer notes that they can just do the same thing in C ...

These are the same responses that I'm hearing when I try to explain words. So, maybe, I'm taking the wrong tack. Anyone have any ideas on how to best explain OOP to a C programmer? Please post them because I can then use the same tactic to explain the word model to OOP programmers.

cheers,

Nile

[ Parent ]
I don't know (none / 0) (#40)
by rusty on Sat Mar 24, 2001 at 03:20:29 PM EST

Unfortunately, you're probably asking the wrong person. I beat my head against OO for a long, long time before I finally "got it" in practice. Like, I understood the concept, but working with it is a whole different ball of fish. The biggest problem for me, in working with OO, was always designing useful classes and subclasses (which, if you look at Scoop, is still a bit of a problem :-)). Also, one of the big gripes I have with OO is that to design good classes, you really have to fully understand the problem space beforehand, which is really hard to do before you've worked in it. It's probably just an experience thing.

Anyway. I think you're still not quite there with the "explaining this to humans" thing yet. But keep at it. I'll pore through the two articles some more and see if I can get it. :-)

____
Not the real rusty
[ Parent ]

Comments (5.00 / 1) (#41)
by nymia_g on Sat Mar 24, 2001 at 03:38:27 PM EST

I think I almost got the idea. But, it took me several passes, like 'scanning' and 'parsing' them to understand your statements (sorry for the term but that's the best word I could come up with).

What I think would be a good delivery is to use a detailed focus and then zoom out on the subject (the object model). Then slowly pan to the Word model and then slowly zoom in, carefully defining every term you use in layman's terms.

What seems to be confusing is the terms word and class. To an OO programmer, those terms will cause confusion and will probably muddy the water even more.

Another aspect that caused confusion is the presence of the grammar rules in the word. It was shown like a member of a class, which probably gave the impression that it was like a design pattern. Actually, it's not. It's an entirely new way of expressing methods, data, and grammar rules.

I hope you can explain it because you're probably the only one who can explain it. Me, I don't know if I can explain it. I'm still on the learning curve of the Word model though.

Anyway, I would like to thank you for showing your ideas here. It is really something new on the syntax and semantics level.

Congratulations, BTW.

I like that idea. (none / 0) (#42)
by nile on Sat Mar 24, 2001 at 03:48:53 PM EST

The zoom in/zoom out/zoom in idea sounds good. Right now, it starts out zoomed in on two examples and then zooms out. It should probably return back to the examples.

Thanks, that really helps.

Nile

[ Parent ]
Programming Model (none / 0) (#58)
by nymia_g on Sun Mar 25, 2001 at 01:01:27 AM EST

Regarding the programming model of Words. I would assume a word is an instance of something and it could be taken from a symbol, which is like a class or definition. To make a distinction between declaration and allocation, certain terms should be defined. In the case of Words, a symbol could be any syntax token that can relate to any other token. A symbol becomes a word when instantiated or allocated in a domain or multiple domains, provided the relations between words are semantically correct.

Does this make sense to you, nile?

[ Parent ]
Yes! (none / 0) (#61)
by nile on Sun Mar 25, 2001 at 03:08:19 PM EST

That's exactly right! I was working it over last night and I actually think the above examples are confusing, as you pointed out. A word consists of:

Word
----
Self Identify Function (i.e., a parser to recognize its symbol)
Definition (Data/Methods from object land)
Rule/Relationships: The rule says how its symbol can be syntactically related to other symbols. The relationship defines what that means.
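
In rough C++ terms, the shape is something like this (a sketch only, with made-up names; BlueBox's actual classes may differ):

#include <string>
#include <vector>

struct Word {
    // Self Identify Function: does this token match the word's symbol?
    virtual bool identify(const std::string& token) const = 0;
    // Definition: data and methods live in concrete subclasses, as in object land.
    // Rule: the symbols that may legally follow this word's symbol...
    std::vector<std::string> allowedNext;
    // ...and Relationship: what such an adjacency means when it occurs.
    virtual void relate(Word& next) = 0;
    virtual ~Word() {}
};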

So, you are dead on! Thanks for discussing this with me. It's really encouraging to have someone understand.

Thanks for coming up with 'symbol', by the way. It's much better nomenclature than just talking about syntax. What do you think of the definition? Is it easier to understand?

cheers,

Nile

[ Parent ]
Service Discovery and Domain Integration (4.66 / 3) (#44)
by zephiros on Sat Mar 24, 2001 at 04:15:52 PM EST

If I'm keeping up with the discussion, the real value to this is not simply a new way of decomposing application functions, but in allowing dynamic integration of new functionality. In other words, if we have a static mapping for processing "tr" tags, it doesn't really matter where this logic lives. OTOH, if we encounter a "foo" tag, and can ask "does anyone know what to do with a 'foo' tag?" then we might have something interesting. In this case, the application, as designed, does not have to include logic for parsing this new element; we can add it on later in a manner which is transparent to the core application.
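
Something like a handler registry, to make the idea concrete (a pure sketch with hypothetical names, not a real API):

#include <iostream>
#include <map>
#include <string>

typedef void (*TagHandler)(const std::string& tag);

// The registry that applications can ask about unknown tags.
std::map<std::string, TagHandler>& registry() {
    static std::map<std::string, TagHandler> r;
    return r;
}

void handleFoo(const std::string& tag) { std::cout << "handling " << tag << "\n"; }

int main() {
    registry()["foo"] = &handleFoo;   // added later, transparently to the core app
    std::map<std::string, TagHandler>::iterator it = registry().find("foo");
    if (it != registry().end())
        it->second("foo");            // someone knows what to do with a "foo" tag
    else
        std::cout << "no handler for foo\n";
    return 0;
}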

The problem is, this is Real Damn Hard. As it turns out, there is a very real business application for this type of functionality, and there is (and has been for some time) a small army of functional consultants chipping away at the issue. The business application is, of course, distributed service provision. If I have a shipment of bananas that I want to send to Outer Elbonia, and I'm not clear on how much I need to bribe the Elbonian Fruit Junta, I should be able to query my network of ASPs, and find one that handles Elbonian import paperwork. More importantly though, my purchasing system should be able to contact that network, and programmatically figure out who can handle that task, whether or not we should trust them with it, what information we need to send them, what information we need back, and what else needs to happen on our end to support this transaction.

The Real Damn Hard part comes from semantic ambiguity in describing services and business objects. Just because a remote system also happens to know what a "shipping manifest" is doesn't mean that the concept is the same thing our system thinks of as a "shipping manifest." Nor does it mean the format is the same. Nor does it mean that the steps involved in processing a "shipping manifest" are even vaguely similar.

One solution would be packing the "shipping manifest" concept with a complete definition of all its parts. This will never, ever happen, because you'd get into silly levels of definition recursion. You'd end up padding every shipping document with a Voyager-style complete definition of your entire company. If you wanted to handle requests from a system which had no understanding of the business domain, you'd need to add in the contents of a basic MBA program, as well.

There are three general initiatives to correct this problem. One is the construction of discovery languages, like UDDI. The second is business process integration techniques, like ECO and RosettaNet. The third is catalog normalization efforts, like UNSPSC and BMEcat. Essentially, people solving the service discovery and integration problem today are doing so by ensuring everyone speaks the same language, rather than by inventing a universal translator.

To roll this back into the discussion regarding the word model, I still think the best mechanism for joining classes from two separate problem domains is the creation of a third set of classes which represent the intersection point between those two problem domains. In other words, rather than let Logic objects talk to Set objects, it seems easier to code up some Calculus classes that implement and/or make calls to Logic and Set objects.
 
Kuro5hin is full of mostly freaks and hostile lunatics - KTB

Not really related. (none / 0) (#45)
by nile on Sat Mar 24, 2001 at 04:41:49 PM EST

Thanks for responding.

You can use words for service discovery and domain integration, but words are not really a particular solution; they are a general programming model in the same way that objects are. I'm trying to grasp how to respond to this and would appreciate any pointers to what gave you the impression that words were the former.

cheers,

Nile

[ Parent ]
Re: Not really related. (none / 0) (#51)
by zephiros on Sat Mar 24, 2001 at 06:30:01 PM EST

You can use words for service discovery and domain integration, but words are not really a particular solution; they are a general programming model in the same way that objects are.

I realize this. However, evidently I'm still not clear what specific modeling problem "words" are intended to solve. The only advantage I can see to decentralizing application logic is the ability to easily integrate and reuse classes, because each class carries its context with it. If the goal of words, in fact, has nothing to do with facilitating class integration, then I no longer have even the vaguest notion of what you're talking about.

I'm trying to grasp how to respond to this and would appreciate any pointers to what gave you the impression that words were the former.

I was not suggesting they were. I was pointing out the problems associated with attempting to integrate problem domains via decentralizing application logic. I was also highlighting some of the existing thinking in the area of connecting applications from Field A to applications in Field B. If the scope of the "word" model is simply to move around where application logic lives, you might want to scale down some of the more breathtaking claims in your white papers, such as:

The word model, by making these domains interoperable, expands the types of problems that computer science can solve.

As this rather suggests you're trying to do full-on, cross-domain application integration.
 
Kuro5hin is full of mostly freaks and hostile lunatics - KTB
[ Parent ]

The purpose of words (none / 0) (#52)
by nile on Sat Mar 24, 2001 at 06:54:14 PM EST

Great! You understand what the purpose of words is. Now, we just have to establish what domain of problems they operate over.

However, evidently I'm still not clear what specific modeling problem "words" are intended to solve.

So this is a question of what types of problems words apply to. To answer this, we have to look at what problems have semantic and syntactical relationships that would benefit from coupling.

But all problems have semantic and syntactical relationships just like all problems have data and behavior. Consider an ATM. It has relationships between its money feeder, its button interface, the customer, and the bank. Consider a calculator. It has relationships between numbers, operators, and parentheses. Consider a web browser. It has relationships between navigation, history, user interface, etc. All programs have syntactical and semantic relationships.

This means that, like the object model, the coupling of grammar rules, data, and methods in the word model applies to the entire domain of programming. The proper name for a pattern that applies to all programming problems is a programming model.

You understand the purpose perfectly. Does this help explain the domain better?

thanks,

Nile

[ Parent ]
My mistake! (none / 0) (#60)
by nile on Sun Mar 25, 2001 at 02:58:47 PM EST

I reread your comment and you completely understand the purpose and a really important application of it,

whoops,

Nile

[ Parent ]
How would you explain OOP to a C programmer? (3.00 / 1) (#47)
by nile on Sat Mar 24, 2001 at 05:15:23 PM EST

I'm trying to explain a new programming model to OOP programmers and it's very difficult. So, I'm going to solicit help with a thought experiment.

Imagine that you were trapped on a desert island with a C programmer that had never seen an OOP language. You want to teach that programmer OOP, but keep running into specific difficulties.

When you give a concrete example of an OOP solution, the programmer thinks that OOP is a design pattern that only applies to a few problems rather than a programming model that applies to all.

When you explain the coupling relationship between objects and data, the programmer says that they can just do that anyway in C. So, they think that OOP is just a way to do structured programming, rather than a new programming model.

Although a few people have grasped it, I'm facing the same problem here trying to explain words (which couple syntactical relationships, data, and methods). I would really appreciate some suggestions on how to explain OOP to a C programmer because then I can use the same tactics to explain the word model to OOP programmers.

Thanks in advance,

Nile

Minor Corr: That should read methods and data (none / 0) (#54)
by nile on Sat Mar 24, 2001 at 07:14:10 PM EST

The text should read coupling relationship between methods and data, not the nonsensical objects and data.

[ Parent ]
Let me trust the machine (5.00 / 1) (#75)
by leviathan on Mon Mar 26, 2001 at 06:38:32 AM EST

I think I've got a handle on the general concept you're trying to put across (that in OO the dependency upon grammar is intrinsic in every piece of code, whereas with words it's local to the word which uses it, and both the word and the grammar get updated as required) - but that was from the 'Why words are better than objects' section. I think that concrete examples comparing OO design to word design are the right idea, but that the domain you were attacking with it was too complex to do justice to in the space available.

I read the OO section with a quizzical hat on. Swathes of the API seem to be missing, and the pattern that you're using doesn't seem to be the best fit. It could almost be read as if you were making the OO deliberately obtuse to favour the word model. Let me reiterate, comparing OO and words is the best way to go about explaining this, but before I really understood OOP for the first time, I needed to touch some real, complete code.

Once I'd seen the mechanics of how methods are called, how objects are instantiated and so on, I didn't need to see it again. I now can program OO without thinking about how the bytes are laid out and each circumstance where a method may be called. I just needed to see the foundations once so I could trust that under all the high-level stuff, it was still the same machine I knew and trusted.

I expect that if you were to attack a simpler domain with concrete OO and word oriented code you could fully specify it from both angles and I could see, for example, what the 'match' keyword is actually doing in terms I am familiar with. High-level overviews of concepts are all well and good for a taster, but to feel I have a grip on something I have to see the strings that attach it to my world.

--
I wish everyone was peaceful. Then I could take over the planet with a butter knife.
- Dogbert
[ Parent ]

Good point. (none / 0) (#78)
by nile on Mon Mar 26, 2001 at 10:44:33 AM EST

I agree. Real world examples trump theory. The BlueBox source code is going to be released tonight and has an entire word-oriented syntax that you can examine. The only negative is that it is XMLGUI and people have pointed out that XML examples make people think that it is just a parsing model. I really need to implement the calculator example above.

The OOP example wasn't the best either. I wasn't deliberately trying to do it injustice; there just wasn't space. The word example suffers from the same problems too. It's clear I chose an example that was too complex.

Thanks for the comment. I think it's dead on.

Nile

[ Parent ]
I did just that! (none / 0) (#76)
by caracal on Mon Mar 26, 2001 at 07:15:08 AM EST

I once set up a crash course on Java in which I had to explain OOP to C programmers, and I managed to fit this into a 35-minute time window.
It mostly came down to explaining how a virtual method call works (through the method table, etc.) and suggesting that they think about the implications with respect to inheritance, overriding, etc., and what possibilities this would bring.
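
Roughly, the picture I drew for them looked like this (hand-rolled function pointers; not literally what a compiler emits, just the idea):

#include <iostream>

struct Shape;   // forward declaration

// One table per class, one slot per virtual method.
struct ShapeVTable {
    void (*draw)(const Shape*);
};

// Every instance carries a pointer to its class's table.
struct Shape {
    const ShapeVTable* vtable;
};

void drawCircle(const Shape*) { std::cout << "drawing a circle\n"; }

static const ShapeVTable circleVTable = { &drawCircle };

int main() {
    Shape c;
    c.vtable = &circleVTable;   // what a Circle constructor would set up
    c.vtable->draw(&c);         // what the call shape->draw() boils down to
    return 0;
}
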
So, what I would suggest is that you tell us what your top-level logic is doing when running a "word" program, just like LISP can be explained by describing the cons cell and the top-level eval loop.
Then we (myself at least...) will figure out by ourselves what can come out of this design without resorting to marketroid talk like "This coupling eliminates the side effects of integrating different syntaxes".


[ Parent ]
I agree. (none / 0) (#79)
by nile on Mon Mar 26, 2001 at 10:46:45 AM EST

I'm the author and I agree. The above isn't marketroid talk; it's just way too heavy computer science that is keeping people from seeing the forest for the trees.

Thanks for the suggestion!

Nile

[ Parent ]
Strange. (4.00 / 1) (#59)
by i on Sun Mar 25, 2001 at 05:41:33 AM EST

Observe how your "Body" word includes matches against "Br", "Img" and probably dozens of others. Observe that the "P" word, the "Frame" word, and probably dozens of others would also include matches against "Br" and "Img" and dozens of others already matched in "Body".

Now please demonstrate:

  1. There's really no duplication; or
  2. This duplication is somehow desirable.

Oh, by the way. In my shop, having a "drawCell" method in the "Table" class is a quick and sure way to get yourself fired. Well, not in my real shop, in my ideal shop :)



and we have a contradicton according to our assumptions and the factor theorem

Correct, but inheritance solves this problem (none / 0) (#62)
by nile on Sun Mar 25, 2001 at 03:19:42 PM EST

Right. Duplication is definitely not desirable. In a real-world example with a fully implemented word model, there would be a root word with a base set of relationships. Body, P, Frame, Div, Td, and other XHTML words would then inherit from it and add, modify, or subtract relationships as needed.
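
A rough sketch of what I mean (hypothetical names; real XHTML words would be much richer):

#include <string>

// Root word: the base set of relationships shared by the XHTML words.
struct FlowWord {
    virtual bool matchesChild(const std::string& tag) const {
        return tag == "br" || tag == "img";   // matched once, here
    }
    virtual ~FlowWord() {}
};

struct BodyWord : public FlowWord { };        // inherits the rules unchanged

struct PWord : public FlowWord {
    // Subtract a relationship: suppose P may not contain img in this grammar.
    bool matchesChild(const std::string& tag) const {
        if (tag == "img") return false;
        return FlowWord::matchesChild(tag);
    }
};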

I agree that drawCell in Table is probably a bad place, but it's not a part of the model. I will wear sackcloth for a week ;]

Nile

[ Parent ]
A Simple Arithmetic Example (4.00 / 1) (#69)
by nile on Sun Mar 25, 2001 at 07:07:01 PM EST

Several people have asked for a different example, which is given below. First, though, I'd like to thank everyone who has participated in this discussion and made recommendations. Special thanks to nymia_g (for the symbol recommendation), to KnightStalker (for pointing out the term grammar rules was difficult to understand), to tmoertel (for several recommendations), to rusty (for his encouragement), to zephiros (for pointing out an application), and too many others to mention in a post. All of you have really helped refine and broaden the discussion.

That said, here is a better definition of a word:

Word
----

Self Identify Function (i.e., a parser to recognize its symbol)
Definition (can be defined by its data and methods - like objects - and other words)
Rule/Relationships: The rule says how its symbol can be syntactically related to other symbols. The relationship defines what that means.

So, how does one approach problems with words? Well, first one identifies the elements of a domain. In arithmetic, for example, the elements are subtraction, addition, and numbers.

Now, one writes a word for each of these elements that can recognize itself. That is, the addition word can recognize '+,' the subtraction word can recognize '-,' and the number word can recognize '0-9.'

Now, one defines what that word means by writing data and methods. In the addition and subtraction words, one writes addition and subtraction methods, and the number word would have a value for the number it actually parses.

Next, one writes the rule/relationships so that these words can be connected to each other. The rule part says literally how they can be written out. So, the rule part in the number word that says the plus word can come next literally means that a programmer using them could write: 8 +. Or 3 +. Etc.

Now, the relationship part says what that means. In this case, it means pass your value on to the plus sign and tell it to parse its relationships.

Now, the plus sign will have rules that a number can be to the right of it and a relationship that says get the value from those numbers. It will then add that value to the value it already has to form a sum.

Let's say a programmer writes 3 + 4. The number word will recognize 3 and store it as a value. It will then check its rule/relationships to see if there are any matches. The plus will say "I match" and the number word will pass on the three. The plus will then check its rule/relationships for a match, and the number word (4) will say "I match." The plus will then take that value and the value it already has and add them together to form a sum.
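
Here is the whole flow as a compilable C++ sketch (the names are illustrative only; this is not BlueBox's actual API):

#include <cctype>
#include <cstdlib>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

// A word couples a self-identify function (syntax) with a meaning (semantics).
struct OperatorWord {
    virtual bool matches(const std::string& token) const = 0;   // the rule
    virtual int combine(int left, int right) const = 0;         // the relationship
    virtual ~OperatorWord() {}
};

struct PlusWord : public OperatorWord {
    bool matches(const std::string& t) const { return t == "+"; }
    int combine(int a, int b) const { return a + b; }
};

struct MinusWord : public OperatorWord {
    bool matches(const std::string& t) const { return t == "-"; }
    int combine(int a, int b) const { return a - b; }
};

static bool isNumber(const std::string& t) {
    return !t.empty() && isdigit((unsigned char) t[0]);
}

// "3 + 4": the first number stores its value; then each operator word that
// says "I match" takes that value and the next number's value and combines them.
int evaluate(const std::string& input, const std::vector<const OperatorWord*>& words) {
    std::istringstream in(input);
    std::string tok;
    in >> tok;
    int value = std::atoi(tok.c_str());
    std::string op;
    while (in >> op >> tok) {
        for (size_t i = 0; i < words.size(); ++i) {
            if (words[i]->matches(op) && isNumber(tok)) {       // "I match"
                value = words[i]->combine(value, std::atoi(tok.c_str()));
                break;
            }
        }
    }
    return value;
}

int main() {
    PlusWord plus;
    MinusWord minus;
    std::vector<const OperatorWord*> words;
    words.push_back(&plus);
    words.push_back(&minus);
    std::cout << evaluate("3 + 4", words) << std::endl;   // prints 7
    return 0;
}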

As you can see, this is a very different way to frame problems. Its power, though, is that one can easily integrate concepts from other problem domains by simply creating new rule/relationships. Notice how relationships are encapsulated in the word. Therefore, there are no side effects from creating new ones. This means that if there were a Paint library of words, for example, we could easily create new rule/relationships between Shape, Image, Screen, Fill, and other words.

This last part is very important and what the entire paper is about. The object model is better than structured programming because it couples data and methods. The word model is better than the object model because it couples syntax and semantics (i.e., rule/relationships). The Riel quote in the paper is very important in understanding how this coupling eliminates side effects.

Thanks again to everyone who's contributing,

Nile

Symbols vs. Words (none / 0) (#70)
by nile on Sun Mar 25, 2001 at 08:16:21 PM EST

I forgot to distinguish between symbol and words. nymia_g deserves credit for making this distinction, and I cannot improve on his original post so here it is:

Regarding the programming model of Words. I would assume a word is an instance of something and it could be taken from a symbol, which is like a class or definition. To make a distinction between declaration and allocation, certain terms should be defined. In the case of Words, a symbol could be any syntax token that can relate to any other token. A symbol becomes a word when instantiated or allocated in a domain or multiple domains, provided the relations between words are semantically correct. nymia_g

Nile

[ Parent ]
Slight mods (none / 0) (#73)
by nile on Sun Mar 25, 2001 at 10:03:43 PM EST

I'm reading this closer and there is one small mod I want to make.

There is a word's definition, its instantiation, and its symbol.

The definition is its data, methods, symbol recognizer, and rule/relationships.

Its instantiation is when it is actually allocated.

Its symbol is the literal token that it recognizes. For example, the '+' is the symbol for the Plus word.

Following nymia_g, I think we need to make a distinction again between word definition and the literal it recognizes. The latter is the symbol.

cheers,

Nile

[ Parent ]
Just a Thought (5.00 / 1) (#71)
by zephiros on Sun Mar 25, 2001 at 08:17:03 PM EST

Here's one possible addition, which may end up helping you get past the content-to-context ratio I was talking about earlier. You might want to create a central, http accessible repository for code. I guess you would need some sort of QA to ensure people aren't checking in malicious code or whatever. Anyway. If an application ran into a new case or data type or object that it couldn't parse or talk to, it could query the repository. The application would pass on as much context as it currently had, and the repository would figure out which new "words" the application needed to download. Then someone (the application, the user, the repository, the Trilateral Commission, whatever) would decide whether or not to press on. Since words store their own context, it should be fairly painless to build a chain to get the application from here to there. In some cases, like parsing a new HTML tag, this could be done transparently. In other cases, like opening an unrecognized document format, this may require user approval or purchasing a license for the parser or whatever. YGTP. The neat part is that never-used features would never bloat the application, because the app would grow (and transform) as it was used.

This is not intended as a practical suggestion, because building the library would be a monumental task. I mean, you'd have to recode everything from the ground up. But, for George Jetson-style futurist speculation, it's kind of neat to imagine a framework for self-assembling applications.
 
Kuro5hin is full of mostly freaks and hostile lunatics - KTB

Exactly! (none / 0) (#72)
by nile on Sun Mar 25, 2001 at 08:22:58 PM EST

Great idea! That's exactly what BlueBox is about. If you visit gaming sites, you'll end up getting more gaming words and so you'll speak the gaming language. If you visit science sites, you'll have more mathematics words. Rather than a central repository, what we're thinking of is a Web where each word has an http address. In this way, if you don't have a word, you can use its address to download it into your cache. The browser will be as smart in a subject as it needs to be.

We can almost do this with the XMLGUI words that come as a sample with BlueBox. I say almost because the cache is currently broken.

BTW, it's really exciting to see someone grasping the implications.

Nile

[ Parent ]
Quick Overview (none / 0) (#77)
by nile on Mon Mar 26, 2001 at 10:36:23 AM EST

For those who are just joining the discussion, here are the four claims:

1) The structured model of programming has a spaghetti-like relationship between data and behavior that leads to undesirable side effects, integration, and scalability problems.
2) The object model eliminates these problems by coupling data and methods.
3) Object-oriented programming has a spaghetti-like relationship between syntactical and semantic relationships that leads to undesirable side effects, integration, and scalability problems.
4) The word model eliminates these problems by coupling grammar rules, data, and methods.

To learn more, I suggest reading the arithmetic example a few posts down, then returning to the paper above and looking closely at Riel's analysis.

There are also several discussions going on below, three that stick out:

Is the word model a design pattern or a programming model?
Is the word model just good OOP?
Can't you do the same thing in C and C++?

A heavily revised version of this paper combined with the theoretical one will be submitted to the ACM. A version of these papers will be released under the Open Content License tonight and readers interested in participating in their refinement will be given credit as authors.

cheers,

Nile

Unconvinced (5.00 / 1) (#82)
by Simon Kinahan on Mon Mar 26, 2001 at 11:26:14 AM EST

OK, so I think I may be starting to understand what you are talking about: you want to integrate syntactic rules for the use of objects ("words" if you like) with the objects themselves, so you tie together the object with a representation of the object. It's terribly hard to figure out, as even your "theoretical" paper is very vague and handwavy. I really think you need to sit down and come up with some one-sentence explanations of what you're trying to achieve, how you're doing it, and why anyone should be interested, as I strongly suspect *you* don't really know what it is you're trying to do, or you'd be able to write more clearly about it. Now some (rather harsh, I'm afraid) comments.

Firstly, I disagree strongly that "The relationships between the literal tokens '0-9', '+' and '-', and '()' are identical to the relationships between the concepts of number, plus and minus, and parentheses in a math program." The string "(9 + 0) = 9" could mean "I have three cows and an ostrich" (which would be false) as easily as it could mean what it does (and be true), while following the same grammatical rules. Similarly, you could just as easily write "(9 + 0) = 9" as "9 x 9", and that's using the same notation for the numbers. I contend that syntax does not determine the possible semantics of a sentence, and neither do semantics constrain the possible syntactic representations. Anyone who's written a parser designed to construct a particular data model from an unrelated file format could tell you this.

Secondly, I do not see anything in your essay that indicates that we're *wrong* to treat the relation between syntax and semantics as unidirectional. When I change the semantics of an expression in a language, I don't always change the syntax. Indeed, most programming languages use the same syntax (function or method calls) for almost all possible semantics. English does the same, albeit with a much hairier grammar: we always use the same sentence word order. I don't see this limiting what can be expressed elegantly in English.

Third, I don't see that your example actually gets us anywhere towards making syntax depend usefully on semantics. All you've done is bundle them in the same box. What does this actually achieve?

Lastly, I have to say, as I have already hinted, that this idea is basically misconceived. Representations of things inside a computer are essentially meaningless, in the same way as words are meaningless unless you speak the language. They're only given meaning by the fact that computers can map from one representation to another, and ultimately get to one that can be presented to, and interpreted by a human being. Taking a representation and hooking it up to another representation (the class) on which some manipulations can be performed is basically pointless. You're doing nothing more than, say, serialization does.

Simon

If you disagree, post, don't moderate

Three Responses (none / 0) (#84)
by nile on Mon Mar 26, 2001 at 11:42:09 AM EST

I appreciate criticism and I agree that there are better ways to express some of these things. That said, I would like to reply to what you've said.

I strongly suspect *you* don't really know what it is you're trying to do

Not true. I've written a full implementation of words with rule/relationships called BlueBox. I know exactly what I'm trying to do and say, though - I'll gladly admit - I am having difficulty conveying it to everyone. Part of that is my fault; part of that is because - like the object model - it requires conceptualizing problems in a new way, and it is naturally difficult for people to make the transition.

Secondly, I do not see anything in your essay that indicates that we're *wrong* to treat the relation between syntax and semantics as unidirectional. When I change the semantics of an expression in a language, I don't always change the syntax. Indeed, most programming languages use the same syntax (function or method calls) for almost all possible semantics. English does the same, albeit with a much hairier grammar: we always use the same sentence word order. I don't see this limiting what can be expressed elegantly in English.

Thank you. I'm trying to collect different ways that what I'm saying can be misunderstood and you just found another one. This isn't a literal '0' must be zero argument. As you point out, that's painfully false. We can represent the semantics of a domain with any symbols we want.

I'm saying that when a program runs - if it's running correctly - how it lets elements in a domain get grouped together is isomorphic to what those relationships mean. When a user enters '2 + 3', whether it's represented by 'two plus three' or any other expression, the ways those literal tokens (whatever they may be) can be grouped together must be isomorphic to the relationships between elements in the domain (i.e., the semantics).

Lastly, I have to say, as I have already hinted, that this idea is basically misconceived. Representations of things inside a computer are essentially meaningless, in the same way as words are meaningless unless you speak the language.

Obviously, the same misunderstanding is going on here. Does it make sense that I'm not talking about literal tokens but the syntax that is actually used, i.e., the relationships that the program allows as legal?

Nile



[ Parent ]
Core Point (none / 0) (#130)
by Simon Kinahan on Tue Mar 27, 2001 at 06:44:58 AM EST

Let me try to be clearer, and briefer, about my core point. You can see the class definition in an OO language as a description of a representation of a class of objects in some language. In OO languages this "datatype language" consists of a type identifier followed by named values of primitive types or references to other objects.

When you parse or output a file, or present a GUI on the screen, or even when you call getThingy(), this can be seen as taking a description in the datatype language that meets the grammar rules for the class and transforming it into another, or transforming another into it. The reason for, as far as possible, bundling up the function definitions with the datatype definition is pretty clear. When you change the datatype definition (the language syntax, as it were) you almost always also have to change the function definitions.

Now, as far as I understand what you are proposing, and I must confess it still seems pretty vague and woolly to me, you want to bundle up grammar rules with the datatype definition and the function definitions to indicate what syntactic constructs in some language correspond to this construct (the class) in the datatype language. The immediate issue here is that most datatypes can be represented in multiple different ways: you can serialize Java objects or store them in a database, netlists can be represented in EDIF, VHDL, Verilog, and so on. Similarly, as I said before, a syntactic construct can be interpreted in several different ways, even in several ways consistent with a single meaning. For example, the representation of a netlist you want to use if you want to lay it out on a chip is different to the one you use if you want to correct its formatting, or convert it to another format.

So, in reality, there is a many-to-many correspondence between syntactic forms in languages and datatypes. This is unsurprising, since a datatype itself is a syntactic form in a language defined by the compiler, so all I'm saying is that there are as many possible translations of a form in one language as there are other languages capable of representing it. Indeed, in some target languages there will be two or more representations.

How does this affect you? Well, as I said, you're still being very vague about what you're actually *doing*, but as far as I can see you've got class definitions with grammar rules in them, and you're calling these "Words". But we've got two problems that it's not clear how you solved: syntactic forms in the object language can correspond to two different datatypes, so how do you resolve this conflict? And datatypes can correspond to two different syntactic forms, so how do you resolve this? And this assumes full correspondence, which is not necessarily the case anyway. A datatype may be able to represent things that cannot be represented in the language into which you want to translate it.

This brings us back to what I said before. What is actually gained by bundling the grammar rules for representing a datatype in some other language in with its definition? You seem to be constraining the thing to have a single representation, but the advantage that exists in OO systems - that the strong dependency between the datatype definition and the functions over it is clearly represented - does not seem to exist, because there is no strong dependency between a syntax in one language and a syntax in another (the datatype).





Simon

If you disagree, post, don't moderate
[ Parent ]
Re: Good questions (none / 0) (#154)
by nile on Thu Mar 29, 2001 at 06:52:41 PM EST

Hi Simon,

I read this a few times and found it difficult to understand. I think - correct me if I'm wrong - that you have two questions. The first revolves around the fact that the same semantics (e.g., a netlist) can have multiple syntaxes (e.g., all the different formats for netlists like Verilog, VHDL, etc) and vice versa. The second revolves around what one is supposed to do when the syntax is richer than the libraries at one's disposal.

How does this affect you? Well, as I said, you're still being very vague about what you're actually *doing*, but as far as I can see you've got class definitions with grammar rules in them, and you're calling these "Words". But we've got two problems that it's not clear how you solved: syntactic forms in the object language can correspond to two different datatypes, so how do you resolve this conflict?

The answer would be to inherit the rule/relationships (i.e., the grammar rules) and not inherit the data and methods. In this way, one would automatically gain the syntax (as I understand you to be using the word) and would only have to fill in the data and methods.

And datatypes can correspond to two different syntactic forms, so how do you resolve this?

Here, we would do the reverse: inherit the data and methods and change the rule/relationships.

And this assumes full correspondence, which is not necessarily the case anyway. A datatype may be able to represent things that cannot be represented in the language into which you want to translate it.

The purpose of words is not really translation as in these two examples, though they are well suited for it. I assume here you are talking about a case where the syntax is richer than the semantics of the libraries at your disposal, as occurs when Lynx opens a Web page with images. This is a difficult problem that is really orthogonal to the word model as a whole. To solve the problem, there would probably have to be semantic loss, as when Lynx does not display images.

To sum up, the two types of inheritance and polymorphism that the word model allows actually make it ideal for dealing with problems that have a many-to-many relationship between syntax and semantics.
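
In sketch form (hypothetical names), the first case looks like this; the reverse case inherits identify() and overrides the semantics instead:

#include <string>

// One set of semantics, many syntaxes: inherit the definition, swap the rule.
struct NetlistWord {
    void place() { /* shared data and methods for netlists */ }
    virtual bool identify(const std::string& token) const = 0;
    virtual ~NetlistWord() {}
};

struct VerilogNetlist : public NetlistWord {
    bool identify(const std::string& t) const { return t == "module"; }
};

struct VhdlNetlist : public NetlistWord {
    bool identify(const std::string& t) const { return t == "entity"; }
};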

cheers,

Nile

[ Parent ]
Please give me a "real world" example (4.00 / 1) (#86)
by kostya on Mon Mar 26, 2001 at 12:23:42 PM EST

Actually, please give me an example for my real world :-)

Right now, I write large scale distributed systems for people like Banks and Investment firms. So far, all of your examples seem to revolve around mathematics or parsing. This is easy to explain: they are both grammar intensive concepts. So far, I kind of understand your concept, but I can't see how it is useful for me in "every day" work. Let's face it, most of us don't write calculus formula machines ;-) But what advantage will "words" and "grammar rules" give me when designing an authentication system that deals with Users, Organizations, Groups, Applications, and Permissions? That's one example--another might be Funds, Accounts, Account Managers. Granted, maybe you don't know enough about these domains to comment, but give me something "higher" level than a calculator example or a parser example.

I'm trying to understand this, but so far it actually seems very limited in its immediate usefulness--it seems limited in its domain application. The first time I saw OOP, I immediately saw how it would be useful. It took me a while to design objects correctly, but the benefits seemed obvious. Words just seem like well-defined ways to solve problems in "grammar intensive" domains. But what about stuff beyond parsing and calculus? Can you give me something higher level than parsing or numbers? To me, the numbers and English grammar examples seem confusing.

Thanks.



----
Veritas otium parit. --Terence
Dense software composition (none / 0) (#87)
by nile on Mon Mar 26, 2001 at 12:31:57 PM EST

Because words couple syntax and semantics, you can easily merge the semantics of different libraries to solve complex problems. Say you have a banking customer with a financial library that covers costs, investments, and checking, and a trading customer with a trading library that covers puts, gets, etc. You need to merge the two richly: not just show the same information on the same page.

In the object model, this would be very hard and would require a substantial rewrite of the libraries.

In the word model, each of the elements in these domains - checking, puts, gets, etc. - would be their own word. Richly integrating the two to express complex programs involving the concepts in both libraries only requires making new rule/relationships. No massive rewrites, no 6-month schedules, no massive code reviews, because there are no syntactical side effects of creating new rule/relationships. This is completely unlike the object model, which has a spaghetti-like relationship between syntax and semantics.
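
Here is a toy sketch of the integration step (hypothetical names; nothing like a real financial library):

#include <iostream>
#include <string>
#include <vector>

// A word carries its own rule/relationships: the symbols it may relate to.
struct Word {
    std::string symbol;
    std::vector<std::string> relatedSymbols;
    explicit Word(const std::string& s) : symbol(s) {}
    void addRelationship(const std::string& other) {
        relatedSymbols.push_back(other);   // local to this word: no side effects
    }
};

int main() {
    Word checking("checking");   // from the financial library
    Word put("put");             // from the trading library

    // The whole integration step: one new rule, and neither library is touched.
    checking.addRelationship(put.symbol);

    std::cout << checking.symbol << " may now relate to "
              << checking.relatedSymbols[0] << std::endl;
    return 0;
}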

Does this help?

Nile

[ Parent ]
Ok, a bit better ... (none / 0) (#89)
by kostya on Mon Mar 26, 2001 at 02:26:05 PM EST

Could you unpack the example and kind of give me a blow by blow? I'm trying to understand how this isn't just simple "instance level" methods--especially template-like ones?

Example: gets and puts

template<class T> void Account::put(T obj);

In the case of a "put" or a "get", if we use templates, we might not need to do anything (i.e. the template would be able to handle the object because it already is compliant). In the case of the object being a bit too tricky, we just write a custom method for the template to match to--one which takes into account the trickiness of the object.

Now, I'll grant you that the "word" or "relationship" is a method call, and so maybe not syntactically pretty. But isn't this accomplishing what you are aiming at?

Again, I'm still not convinced that this requires language level support--short of looking prettier or saving typing. But then I might still be not grokking your point ;-)

Whatever the case, a stimulating discussion.



----
Veritas otium parit. --Terence
[ Parent ]
Templates have the same problem (none / 0) (#92)
by nile on Mon Mar 26, 2001 at 02:37:02 PM EST

Give me another half hour to write out a blow-by-blow bank example: it will be similar to the blow-by-blow math example below. For now, here's a quick run-through of why templates can't solve the problem. Note below how there are undesirable side effects when we try to integrate.

Let's say we write a logic and a set template so that we can relate elements between both. One of those relationships (borrowed from an earlier commentator) might be:

.... LogicalElement<Element, RuleMemberOf> le;
SomeLogicalSet<Element> set;


Now, a quick way to analyze this to see if it is identical to words is to look at where the grammar rules exist. In this example, the syntax rules are expressed in the above global file and semantics are encapsulated in the templates.

The problem that a programmer writing this code will face is that the relationships above are expressed globally and, as a consequence, new relationships could have unintended side effects on existing ones: that's the point of the paper. The template example works with just a few rules, but let's imagine another chemistry template, now:

template<class T, class Y> class ComplexMolecule {
    // ...stuff
public:
};

The syntax of chemistry and material science could then be integrated with:

#include <...>
ComplexMolecule<Element, AnotherElement> cm;
Material<Element> m;


Now, let's say that we wanted to prove logical claims about sets of molecules. This would require integrating the two sets of relationships:

#include <words>
LogicalElement<Element, RuleMemberOf> le;
SomeLogicalSet<Element> set;
ComplexMolecule<Element, AnotherElement> cm;
Material<Element> m;


Notice that Element has two different meanings here. This is the exact problem words are trying to solve. Now, of course, you could work with templates more perhaps to form a word, but you would have to couple syntax and semantic relationships to do so, just as in structured languages you have to couple data and methods to do OOP.

The point of this article is that OOP languages don't naturally support a crucial coupling in the same way that structured languages don't naturally support the coupling between data and methods, not that you can't do words in C++ (you can do OOP in C, for example, but it's harder). The big benefit for programmers is that when you support this coupling, it is much easier to model domains and richly integrate them with other domains just like OOP makes programming easier. Does this make sense?

I'll work on the blow by blow example,

cheers,

Nile

[ Parent ]
I still don't see the flaw (none / 0) (#99)
by kostya on Mon Mar 26, 2001 at 03:41:02 PM EST

You keep pointing out that OO can't do it, but I don't see why. Maybe our examples are flawed.

This was my example, so I'd like to officially "scrap it" and try something else to see if we can have a meeting of the minds here!

New example:


File: rule.hpp:

class Rule {
... stuff
};

class Rule_MemberOf {
... stuff
};

//others ...


Ok, now we have a whole bunch of rules. For the sake of simplicity, let's say they are nice templates with nice concept constraints (see boost website). So for instance, rules could be template objects that take 1 or 2 template arguments (element, element-element, element-set, and set-element). You can create a Rule object for understanding how objects relate to one another.

Ok, now we define another file that has some nice STL containers, but customized to allow for Rules. This could include an Object or Element or Chunk class that wraps any object and holds rules for that object. It would then contain a bunch of STL containers that are rule aware. We would then also have some iterators and rule-specific collections--i.e. they hold rules, not objects. They are made to interact with and iterate over rules.

We now have generic rules, covering all sorts of relationships. We have a generic wrapper object that allows us to attach a rule to an object, generic containers that can interact with relationship objects, and a specialized set of containers that act on, manipulate, and hold rules. Let's assume these are nice and complete like the Standard C++ library. Users who find quirky or hard to generalize rules and relationships could inherit and tweak classes and functions from these libraries ... let's call them <rule> and <rule_objs>.

Now, let's try that example, slightly modified:


#include <rules>
#include <rule_objs>

Foo fooObj; // new Foo
RObject<Foo> r_fooObj(fooObj); // new Rule-based object that wraps a Foo, specifically fooObj
RSet r_set;

Ok, let's assess. We have the Foo type--which represents some sort of type. Some sort of concept. Anything. But let's just say that it might be useful, especially to someone in chemistry. We will place foo in our set, r_set, below.

Next ...


// for the sake of the example, Material is something chemistry-esque and
// ComplexMolecule is a specialized set for interacting with collections of Materials or something
Material fe("Iron-composite");
RObject<Material> r_fe(fe); // we can now do things with it
ComplexMolecule cm;
cm.add(fe);
RObject<ComplexMolecule> r_cm(cm);

// now, let's do some stuff
r_set.add(r_fooObj);
r_set.add(r_cm);
r_fooObj.associate(AS_CONCEPT, r_cm);

Ok, so why can't r_set be a collection of concepts (let's say foo is a manufacturing process or something)? We have foo and cm; they are associated in some way. R_set could have some functionality for iterating over its contents via rule-based requests (i.e., show me all dependent elements or ...).

Why wouldn't that work? Now, what I am looking for in an answer is this: how would your language or language modifications make that simpler? I'm not looking for a C vs. C++ answer. I understand that OO can be done in C but it is generally a pain. It's just that your examples so far don't show me any savings--they don't seem all that much clearer or more flexible.

Just demonstrate it for me :-) I'm interested.

P.S. Hasn't this been done already? Aren't there a bunch of rule-supporting languages out there? Isn't R++ what you are talking about? I thought that rules have been largely unaccepted or non-useful.

----
Veritas otium parit. --Terence
[ Parent ]

Not rule supporting (none / 0) (#103)
by nile on Mon Mar 26, 2001 at 04:11:32 PM EST

Sorry for the ambiguity; they're different things. Rules in words are how words can be grouped together, not if-thens that are activated when data changes.

I'm maintaining more than one conversation at the moment, so give me a little time to respond to you,
cheers,

Nile

[ Parent ]
Ok, here goes (none / 0) (#106)
by nile on Mon Mar 26, 2001 at 04:22:38 PM EST

This needs to be a dialogue, so I'm going to answer part of it here. First, to avoid conflicts, you would introduce namespaces, and that would be a start.

Taking what you posted, though, what if someone else came along with a different library that also defined "Material" and the other stuff? How would you integrate it?

Before continuing, though, go back and look at the bank example, which is on a sub-thread. It also talks about inheritance and polymorphism of rule/relationships: something you can't do with templates/namespaces. Think of it this way: what if you want to derive a new set of relationships other than the ones there? How would you do it? When you solve that problem, look at what you have!

Keep asking questions. I'll get to them as I can.

Nile

[ Parent ]
Perhaps I was too general (none / 0) (#111)
by kostya on Mon Mar 26, 2001 at 05:12:03 PM EST

The more we talk, the more I am convinced I hit this the wrong way.

I saw words as a general concept. Now you are talking specifics--inheritance, interaction, conditions based on type.

Now that sounds exactly like a method. A method that acts on a state and/or an object. Templates would allow you to handle more general cases. Specific cases could then be implemented by the programmer.

Since I have to declare a rule and then implement it, I'm missing how a lexical-level construct would do much more than auto-gen code for me--code which I would mostly write myself in "special cases" or additional cases. Since templates can be inherited and further specialized, it seems to all fit.

Top class has the template. Further specializations add and change specific cases for the template function (i.e. rule pattern) to suit their needs.

Perhaps this is just half the problem? What is the other part if this is only part of it? If this is it, then it looks like at least C++ is able to support rules. Not because it is OOP, but because it supports parameterized programming--templates!



----
Veritas otium parit. --Terence
[ Parent ]
I'm worried about clarity here (none / 0) (#114)
by nile on Mon Mar 26, 2001 at 05:30:02 PM EST

I agree, I'm worried that continuing to go down this path might miss the forest for the trees. The only reason I'm willing to discuss templates at all is that, due to their nature, they are a way of constructing syntax rules as well as semantic relationships. The problem is that templates are a smoke screen: it appears to people that the fact that they are templates matters to the discussion. It doesn't. Their syntactical features are the only reason they're interesting.

Perhaps a better way to do this would be to talk about syntax for a while. If what I mean by syntactical relationships vs. semantic relationships is clear, the other stuff might come into focus.

Syntax is the legal ways in which elements can be joined together, e.g., '0-9' and '+'. Semantics is what those legal joinings mean, e.g., numbers and addition. The overall point I'm trying to make is that syntactical and semantic relationships, as defined in programs, are isomorphic to each other. Hence, they should be coupled to eliminate side effects.

The part that keeps getting dropped from the conversation is the syntactical relationships (as programmers we're almost trained to ignore them), and that's where the disconnect is coming from.

C++, in general, doesn't allow one to specify new legal relationships. Its syntax is statically set when the compiler is compiled. Now, there is one loophole, and that is templates, which allow one to specify syntax relationships. But there is no enforcement there of the coupling between syntax relationships and semantic relationships. What's worse, all of the syntax relationships are, by default, global. As a result, one would predict a spaghetti-like relationship between syntax and semantics, and that is exactly what one sees.
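By contrast, here is a toy sketch of what coupling the two looks like (purely illustrative; this is not BlueBox's actual API). The hypothetical Plus word owns both the syntax rule and the semantics of the joining:

#include <iostream>
#include <string>

// Hypothetical Plus word: the syntax rule (recognizing "+") and the
// semantics (what joining two numbers with "+" means) live together.
class Plus
{
public:
    // Syntax: is this token the one this word recognizes?
    static bool recognize(const std::string& token) { return token == "+"; }

    // Semantics: the meaning of the legal joining "number + number".
    static int apply(int lhs, int rhs) { return lhs + rhs; }
};

int main()
{
    if (Plus::recognize("+"))
        std::cout << Plus::apply(2, 3) << std::endl; // prints 5
    return 0;
}

Anyone changing what '+' may legally join is editing the same class that defines what the joining means, so the two cannot drift apart silently.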

Does this make it clearer? It seems obvious to me that syntax is where we're missing each other.

Nile

[ Parent ]
Ok, it's working! (none / 0) (#132)
by kostya on Tue Mar 27, 2001 at 10:01:13 AM EST

Alright, I'm starting to see what you are talking about. Kind of :-)

I guess my main issue is that I am kind of grasping the concept, but failing to see its relevance/applicability. Your attempt to show the difference in the C++ parser versus a Word parser didn't click--mainly because I didn't see some huge problem in the code (I do XML all day long, so maybe I am too numbed to the actual problem to see a better way <grin>).

Perhaps a follow-up article with examples of things that can only be expressed succinctly in words and not in objects would be helpful. And diagrams. I mentioned that in the other post. Diagrams are good :-)

I remember objects "clicking" for me. It was when someone showed me the polymorphic example in a real-life situation--we were working on agent stuff at a conference. When they demonstrated the "base object" being able to be substituted for more specialized objects, it "clicked". I saw that you could do something similar in C or Pascal, but not without great pain. I saw the "black-box" paradigm, objects being private little boxes that had all sorts of mechanisms in them--but you didn't want to know, nor did you care. You only cared that it got its job done. Modular programming taken a step further! I saw it. The encapsulation coupled with the polymorphism hooked me. I figured out the other benefits afterwards.

I'm still waiting for that "click" ;-)



----
Veritas otium parit. --Terence
[ Parent ]
I'm writing another paper (none / 0) (#155)
by nile on Thu Mar 29, 2001 at 06:54:50 PM EST

I'm putting together all the suggestions to write another paper with real concrete examples.

Thanks for all your questions. They've really helped me narrow down what needs to be in it.

cheers,

Nile

[ Parent ]
How do you reuse relationships? (none / 0) (#108)
by nile on Mon Mar 26, 2001 at 04:39:42 PM EST

Think about it this way. Now that you've put all of those relationships in a template, how are you going to reuse them? Let's say you want to relate the concepts in just a slightly different way. How would you reuse your work in the template approach?

Now look at the same problem with rule/relationships that can be added/subtracted/overridden at will.

Nile

[ Parent ]
What do you mean "reuse"? (none / 0) (#110)
by kostya on Mon Mar 26, 2001 at 05:05:56 PM EST

I ask because what you are saying doesn't seem to make sense (to me).

Templates, or the functions defined by templates, would be inherited by subclasses of the templated class/object. So you do reuse them. As to extending them, how would you extend it beyond defining specific instances--i.e. writing templates? Which is how you use templates.

A template function would be attached to an object. You could define a template:

template<class T> void uses(T obj);

This method now defines the "uses" rule. It works on any object. Now if you need to tailor it, you write a specialization of the template for that particular type.

Attached to the object. Inherited. Can be changed and customized. Short of a change in syntax, isn't that the same thing?
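Concretely, I'm picturing something like this (a rough sketch; all the names are made up):

#include <iostream>

class Material
{
    // some domain type
};

// Generic "uses" rule: works on any object.
template <class T>
void uses(T obj)
{
    std::cout << "generic uses rule" << std::endl;
}

// Tailored rule: a full specialization for Material.
template <>
void uses<Material>(Material obj)
{
    std::cout << "Material-specific uses rule" << std::endl;
}

int main()
{
    uses(42);         // picks the generic rule
    uses(Material()); // picks the specialized rule
    return 0;
}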

I looked at the bank example. Templates would do EXACTLY what you are saying. Granted, it would be a method, not some lexical construct, but it would do it. To say that C++ doesn't support the concept would be like saying that Java doesn't support multiple inheritance. It doesn't, but you use interfaces to get almost the same thing. Not a perfect comparison, but I kind of see it that way.

C OO requires major name mangling. Words look like they have to be declared and implemented (from the examples). If that is the case, how is it any different from the example of events: sure, Java and C++ do not support events at the language level, but they easily support the paradigm with standard OO features. So far, short of some automatic call (which could be implemented), it looks like C++ would support words just fine. You just see classes as words and implement specially named or formatted functions to implement the "rule" pattern.



----
Veritas otium parit. --Terence
[ Parent ]
By reuse (none / 0) (#113)
by nile on Mon Mar 26, 2001 at 05:17:05 PM EST

I'm really trying to explain this, so bear with me.

Think of it purely from an object-oriented perspective. You have a large system of objects. Those objects relate to each other in a certain way. You want to change those relationships. Now, if the syntactical and semantic relationships are not coupled, how will you do so? You'll have to search through the entire system to find all of the relationships. It's entirely possible, in fact, that one object might hold the semantic relationship to two other objects.

Now, words mean that, by default, you can trivially change the relationships in a system - both syntactical and semantic - simply by inheriting from a word and changing a rule/relationship.

Does this help at all?

Nile

[ Parent ]
Kind of :-) (none / 0) (#131)
by kostya on Tue Mar 27, 2001 at 09:52:10 AM EST

I see what you are saying--i.e., if the relationships are spread across the system, it becomes difficult, if not impossible, to maintain or change them. My current problem is that I am failing to see how well-written OOP would not solve that problem anyway.

Or is your point that well-written OOP would, but most implementations would be error prone? Therefore the need for the language level support?

It is unfortunate, because we seem to just be missing one another--i.e. I am kind of getting what you are talking about, but not enough to phrase my questions in meaningful or clear ways.

Maybe I need a diagram :-) I am unfortunately crippled by the need to always draw what I am explaining--which in turn affects how I learn. Damn visual learning :-)



----
Veritas otium parit. --Terence
[ Parent ]
More questions (none / 0) (#100)
by Gat1024 on Mon Mar 26, 2001 at 03:51:59 PM EST

Although I'm beginning to understand your model, I'm still trying to figure out how it can be useful. Unfortunately, the examples you've provided so far have left me somewhat confused. As in:

#include <words>

LogicalElement<Element, RuleMemberOf> le;
SomeLogicalSet<Element> set;
ComplexMolecule<Element, AnotherElement>
Material<Element>
Here, although you use element as the parameter in two different libraries, they are actually two completely different things. Element in LogicalElement represents a member of some set. Element in ComplexMolecule represents an atom. Note that I say element in something, since it's that something that defines the context. Element is simply the value of a formal parameter that requires that certain rules be met.

Sure, having two objects with the same name but different definitions may pollute a global namespace, but there is a simple solution: qualifying the word with a context, like MoleculeLib::Element and SetLib::Element.
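A rough sketch of what I mean (the library names are made up):

namespace MoleculeLib { class Element { /* an atom */ }; }
namespace SetLib { class Element { /* a member of a set */ }; }

int main()
{
    MoleculeLib::Element atom;  // chemistry's Element
    SetLib::Element member;     // set theory's Element
    return 0;                   // same name, no collision
}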

To reason about sets of molecules or atoms you could do something like this:

LogicalElement< ComplexMolecule<MoleculeLib::Element> >
LogicalElement< MoleculeLib::Element >
And if LogicalElement required certain functionality of the elements passed to it, you could wrap molecules and atoms with a Sets::ElementWrapper that handles all of those niceties.

Your way amounts to having element know about logical sets as well as complex molecules. There's no separation of concerns. If you have a word that can be used 20 different ways, you'll have 20 different concerns wrapped up in a single file. I think things can get cluttered this way, especially when only one or two of those concerns may be needed. Talk about bloat.

Also, every time you decide to use element in a different way, you'll have to add that behavior to its definition and to every word that has a rule involving that element. Let's say you've built a calculator with infix notation. Later on, you decided that RPN is better. You'd still have the complexity of changing the grammar. But since the behavior relies on just how the parser works, you'd have to change that as well. You'd have to modify the number word so that a plus comes before it and another number comes after it. You'd have to modify the plus to expect two numbers after it.

My problem is that all of the concerns seem to be embedded in the word. The parsing concern, which simply specifies how to create sentences, is mixed in with action concerns that implement behavior. With separate concerns for both, you could change the parsing portion without touching the action portion at all. The action part of plus would say: give me two numbers, any two. The parsing part would be concerned with figuring out which two to provide.

Without seeing an implementation, all of this is just speculation. I can't wait to see your library.

[ Parent ]

You just did something important. (none / 0) (#102)
by nile on Mon Mar 26, 2001 at 04:06:57 PM EST

Notice that you just localized the rules of how objects can be put together with what those relationships mean. You've just coupled syntax and semantics (to some degree) in the same way C programmers couple data and methods together when they want to do OOP in a struct. There might still be some problems if it's not fully coupled, but I would have to look at it in more detail.

It's really important to understand that I am not claiming that one cannot do words in C++, only that the language does not naturally support it. The claim is that objects should have a self-identity function and rule/relationships as well. You can put something together using other features of the language in the way C programmers make objects out of structs, but you're still doing word-oriented programming when you do so, in the same way they are still doing OOP. To understand this, think about how you would implement inheritance of the rule/relationships you've just created. Notice how the language does not naturally support such inheritance.

There is more to your comment and I will try to respond to it as soon as possible. I suggest looking at the bank example which is in this same main thread, though in a different subthread.

cheers,

Nile

[ Parent ]
More detailed bank example (none / 0) (#101)
by nile on Mon Mar 26, 2001 at 03:57:04 PM EST

So let's try a more detailed bank example. I obviously cannot give all of the code right here, but I can give you a pseudocode explanation.

Let's imagine a simple bank with savings accounts, checking accounts, customers, money, and account handlers.

Each of these would be a word with a self-identity function so that it can recognize itself. The bank word can recognize "Bank," the customer word can recognize "Customer," the money word would recognize any sum of money, and so on.

Now, each of these words would also have data and methods to define it. I imagine that the bank word, for example, might have a list of accounts as a property of the word. The bank word would probably also have methods like "withdrawMoneyFromAccount" and "callFederalReserve." All of this may be compared analogously to objects doing the same thing.

Now, where the analogy falls apart is that these words would also have rule/relationships with each other that defined how they could literally be put together and what those groupings mean. For example:

Customer withdraws $20 from bank.

Now, there are 5 words up there. The rule/relationships of Customer would say that withdraws can be put after it and that, when it is, the Customer word should query the withdraws word for how much money it gets. The Customer word would then pass control to the withdraws word, which would check its own rule/relationships.

The money word - which is in those rule/relationships - would shout "I have a match" and the bank word would also shout "I have a match." Notice how withdraws has two rule relationships, one going to the $20, the other to the bank.

Finally, the withdraws word would use the bank and the $20 money word to make a withdrawal and pass that amount back to the customer.
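To make this less abstract, here is a very rough C++ sketch of the walkthrough above. This is illustration-grade pseudocode, not BlueBox's real interface; every name is made up:

#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

// Every word has a self-identity function: a mini parser that
// recognizes its own symbol.
class Word
{
public:
    virtual ~Word() {}
    virtual bool recognize(const std::string& token) const = 0;
};

class Money : public Word
{
public:
    bool recognize(const std::string& token) const
    {
        // recognizes any sum of money, like "$20"
        return !token.empty() && token[0] == '$';
    }
};

class Bank : public Word
{
public:
    bool recognize(const std::string& token) const { return token == "bank"; }
};

class Withdraws : public Word
{
    // rule/relationships: the words allowed to follow "withdraws"
    std::vector<const Word*> rules;

public:
    void addRule(const Word* word) { rules.push_back(word); }

    bool recognize(const std::string& token) const { return token == "withdraws"; }

    // Ask each related word whether it matches: "I have a match!"
    bool accepts(const std::string& token) const
    {
        for (size_t i = 0; i < rules.size(); ++i)
            if (rules[i]->recognize(token))
                return true;
        return false;
    }
};

int main()
{
    Money money;
    Bank bank;
    Withdraws withdraws;

    withdraws.addRule(&money); // withdraws -> $20
    withdraws.addRule(&bank);  // withdraws -> bank

    std::cout << withdraws.accepts("$20") << " "
              << withdraws.accepts("bank") << std::endl; // prints "1 1"
    return 0;
}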

Why is this powerful? Because the rule/relationships mean that syntactical relationships and semantic relationships are coupled with each other in the same way that data and behavior are coupled in objects. In practice, this means that we can add as many new words as we want without any side effects. In this way, it is possible to build very rich and complex software that is easy to understand, because we have eliminated the problem of undesirable side effects.

Does it make sense, now? Is there a part of the example that should be expanded on?

cheers,

Nile

[ Parent ]
Are you saying ... (none / 0) (#90)
by kostya on Mon Mar 26, 2001 at 02:28:50 PM EST

Wait, are you saying that the behavior/rule/grammar would live outside of the object? I.e., there would be object libs and then grammar libs?

If so, how would that work?

Also, wouldn't that just be nifty template functions that can then operate on a wide variety of well-understood object types?

Again, my brain is liking the concept, but I'm still seeing this as a great library or design pattern. Not something that requires a language-level support mechanism.



----
Veritas otium parit. --Terence
[ Parent ]
Not quite (none / 0) (#93)
by nile on Mon Mar 26, 2001 at 02:44:00 PM EST

A word is composed of data, methods, rule/relationships, and a self-identity function (i.e., a mini parser that can recognize its symbol).

What we're really doing is adding things to the standard object - rule/relationships and self identity - and we're adding them for the same reasons that we invented the object in the first place: they have bidirectional relationships with each other.

As to whether it's a design pattern or a programming model, this might be a little difficult to explain, but let's give it a try.

A design pattern is a way of solving a particular type of problem. The idea behind design patterns is that whenever you come across a particular problem, you can benefit from past architectural work by using a well-tested solution.

The lambda design pattern, for example, only applies to certain problems. So, to see if words are a design pattern, we have to look and see what types of problems the word model solves better than traditional models. This means we have to specifically state which problems have semantic and syntactical relationships that will benefit from coupling.

But all problems have semantic and syntactical relationships. Consider an ATM. It has relationships between its money feeder, its button interface, the customer, and the bank. Consider a calculator. It has relationships between numbers, operators, and parentheses. Consider a web browser. It has relationships between navigation, history, user interface, etc. All programs have syntactical and semantic relationships. This means that, like the object model, the coupling of grammar rules, data, and methods in the word model applies to the entire domain of programming.

The proper name for a pattern that applies to all programming problems is a programming model. We call the object model, for example, a programming model and not a design pattern because it applies to all problems.

Of course, that's a terminology issue. If you still want to call it a design pattern even though it applies to all problems (I think the GoF would disagree), it's not that important because it's just terminology. What's important is that it's clear that it applies to all problems. You say tomato, I say tomato.

You're getting a lot closer. Does the above make sense at all?

Nile

[ Parent ]
I still don't see how that isn't just an object (none / 0) (#96)
by kostya on Mon Mar 26, 2001 at 03:10:39 PM EST

It still looks like an object with some very "cool" methods on them.

I guess I don't see how OO doesn't support your concepts. Maybe I'm dense or maybe I have just implemented that type of stuff unconsciously so I'm just not "seeing it".



----
Veritas otium parit. --Terence
[ Parent ]
Think of it in terms of structured programming (none / 0) (#97)
by nile on Mon Mar 26, 2001 at 03:26:57 PM EST

Here we go.

The rule/relationships can be inherited and are polymorphic. On a practical level, this means that when you inherit from a word, you are inheriting not just data and methods but both the syntactical and semantic relationships that word has with other words. This is pretty important.

Taking it from another angle, though, imagine a C programmer looking at C++. The programmer says that it looks just like C to them since C already has data and methods. The C programmer will also note - correctly - that one could do the same thing in C. Look at the GNOME people who are very clear that they are doing OOP in C, just like BlueBox does word programming in C++ and Python.

I probably should have explained the inheritance and polymorphism of rule/relationships from the start, rather than the coupling relationships. The former explains what you can do.

Does this make sense?

Nile

[ Parent ]
Maybe this will help some (none / 0) (#98)
by nile on Mon Mar 26, 2001 at 03:39:44 PM EST

The result of programming with words is that the design of the system is modularized and encapsulated in all of the rule/relationships of the words. As a result, simply by inheriting from a word, you can change all of the rule/relationships related to that word. In this way, the design of a system can be changed on a local level, word by word.

Now, of course, you can do the same thing with objects, just as - once again - you can do OOP in C. To do so, though, you would have to couple the syntactical and semantic relationships of objects and then localize all of the relationships related to an object to that object. The point is that today's OOP languages don't naturally support this coupling just like structured languages don't naturally support the coupling between data and methods.

As a result, most OOP programs have a spaghetti-like relationship between syntactical relationships (how stuff can be legally grouped together) and semantic relationships (what those groupings mean). Enforcing this coupling would lead to better-designed programs and, of course, one gets the real-world benefits of inheritance and polymorphism of rule/relationships.

You're asking very good questions.

Nile

[ Parent ]
Here's how (none / 0) (#88)
by nile on Mon Mar 26, 2001 at 12:54:14 PM EST

The post below was more of a "Software Integration on Internet Time" example. I recommend reading that but I think you're more curious about how you'll approach problems using this model and how it will benefit you.

The word model is a better way to design systems. Like the object model, you would have customer, account, and other words for each of the elements in a domain. The difference is that the syntax - i.e., the legal way they can relate to each other - is coupled with the semantics -- i.e., what those relationships mean.

On a practical level, this means that once you create the words for a domain, it is very easy to extend the domain with more words because additional rule/relationships do not have side effects on other words. It's the elimination of these side effects that makes it such a better model. That's why the integration example worked below.

The resulting system is also much easier to understand because you're not forcing programmers to remember syntax/semantic relationships in the same way that OO systems are easier to understand because you are not forcing programmers to remember data/behavior relationships.

Nile

[ Parent ]
And There's The Problem (none / 0) (#91)
by zephiros on Mon Mar 26, 2001 at 02:32:26 PM EST

One of the selling points of OO is that the programmer can control the degree and type of abstraction in the design. Take, for example, the concept of a floor in an office building. If I'm designing elevator control software, I'm going to be concerned about floors. However, I should probably model them as a property of the elevator class. In this case, floor simply tells me something about the state of a given elevator.

Now let's say I'm designing intra-office mail delivery software. I'm still concerned about floors, but this time I want to abstract them as containers for offices. So I'll probably have a floor class, from which offices would inherit certain properties (like mail delivery schedules).

As a third case, let's pretend I'm writing software to centralize control of the building's various HVAC systems. Once again, floor is a container, but now it's a very big container that exposes a very rich set of data about the various construction details, ventilation systems, electrical and data wiring, etc.

In the first case, we need a thumbnail view of the floors. In the second case, we need the concept of floors to encapsulate organization details. In the third case, we need floors to include whatever structural information is meaningful to architects, electricians, and city planning officials. In all cases, the concept of floors refers to the same set of physical objects.

If we take a wide view, and build all that context into a single, monolithic floor object, we'll end up bloating all our software with needless details. In a nutshell, that's the problem I'm seeing with "words": in order to cover every possible semantic use of a given object, you need to pack it with a huge amount of context.
 
Kuro5hin is full of mostly freaks and hostile lunatics - KTB
[ Parent ]

Response to first case (none / 0) (#94)
by nile on Mon Mar 26, 2001 at 02:47:53 PM EST

Thanks zephiros, you're really tackling this. Let's look at these three cases in more detail.

One of the selling points of OO is that the programmer can control the degree and type of abstraction in the design. Take, for example, the concept of a floor in an office building. If I'm designing elevator control software, I'm going to be concerned about floors. However, I should probably model them as a property of the elevator class. In this case, floor simply tells me something about the state of a given elevator.

Words can be composed out of other words just like objects can be composed out of other objects. They can also be properties - i.e., data - in other words. One way to think of words is as objects with a self-identity function and rule/relationships tacked on.

Good questions. I'll answer your other two in a moment.

cheers,

Nile

[ Parent ]
Response to second and third cases (none / 0) (#95)
by nile on Mon Mar 26, 2001 at 02:58:36 PM EST

Now, that we've solved the composition problem, let's look at the rest of the examples. The thrust of them, as far as I can tell, is that a "floor" word has to handle every type of relationship and thus becomes bloated.

I just don't see why this is necessary. The Floor class wouldn't have to have all of that information, so why would the Floor word? Why couldn't we just have three Floor words? That's what you would do in the object world. Like objects, words don't have to have all of the relationships or all of the information of the real physical object.

Does this make any sense? I suspect that my emphasis on integration has misled people into thinking that it is always necessary to integrate with new rule/relationships. It isn't: the power is that you can, not that you must. Programmers have complete freedom to model problems as coarsely or as fine-grained as they want.

cheers,

Nile

[ Parent ]
Okay. (none / 0) (#105)
by zephiros on Mon Mar 26, 2001 at 04:16:58 PM EST

So let's say I have two "words" in two different applications which both model the same real world object (but in different ways). If I want to connect the applications, I'm still writing new "words" (or modifying the old "words"). Which is to say, the only connectivity gain I get over classes is that any new "words" I create can inherit context from the "words" in the original applications.

IMO, this is a pretty thin advantage over just using OOP. My experience tells me this advantage, even on really large projects, would not justify the additional cost of the approach. I'd recommend, before you release/publish, you put together some metrics on teams using "words," and demonstrate some measurable productivity gains.
 
Kuro5hin is full of mostly freaks and hostile lunatics - KTB
[ Parent ]

Wrong example (none / 0) (#107)
by nile on Mon Mar 26, 2001 at 04:28:37 PM EST

Hi zephiros,

That's right, if you integrate two libraries that are using the same element in a domain but in different ways, you'll still have to manually integrate them. That's not where the advantage is.

The advantage is that it is trivial to integrate different concepts from different domains. You have a set library with different set words like "Set," "Membership," "Contains," etc. and a logic library with "implies," "false," "true," etc. Richly integrated, these libraries could solve calculus problems.

To richly integrate these words with each other, all one has to do is form new rule/relationships from one word to another. There are no side effects. This is the power: not in integrating the same element, but in integrating different ones. It's in the rich composition of different libraries to solve complex problems that the word model has a phenomenal advantage over the traditional OOP model.

Nile

[ Parent ]
The inheritance test (none / 0) (#109)
by nile on Mon Mar 26, 2001 at 04:59:28 PM EST

Several people have been pointing out that it is possible to couple syntactical and semantic relationships in OOP and structured languages. I, in turn, have been noting that it is also possible to couple data and methods in languages like C to do OOP. GNOME programmers, for example, do this.

I think I just found a way to make the point clearer. I thought this was obvious, but clearly it's not. The rule/relationships in words can be added/subtracted/modified by their children. That is, they can be inherited, and there is polymorphism as well. Notice how neither OOP nor structured languages support syntactical relationship/semantic relationship inheritance.

Just like OOP does with structured languages, then, we can pose an inheritance test. Do today's OOP languages naturally support the inheritance of rule/relationships? Clearly they do not.

cheers,

Nile

This might make it clearer (none / 0) (#112)
by nile on Mon Mar 26, 2001 at 05:13:54 PM EST

This is inheritance of both semantic relationships and syntactical relationships -- i.e., the legal rules by which objects can be put together and what those relationships mean.

One easy way to miss the point is to think that it is just semantic inheritance, i.e., how objects are related to each other.

hope this makes it clearer,

Nile

[ Parent ]
Problems with the Word Model (5.00 / 3) (#115)
by tmoertel on Mon Mar 26, 2001 at 09:35:44 PM EST

First, I should point out that nile and I have been having an interesting discussion that spun off of an earlier editorial comment that I posted. In that discussion, nile (with much patience) explains to me from a number of angles what the "Word Model" really means. If you're having trouble with the main story above, you may wish to visit our discussion, which clarifies many conceptual and other points.

Second, if you want to understand the author's claims about the Word Model, you should read the paper The Word Model on SourceForge. The main k5 story contains a few glitches in the examples, probably owing to k5's once-you-submit-it-it's-frozen editorial process. The glitches make it difficult to deduce the nuances of the Word Model. The paper is much clearer and easier to understand. It has diagrams that are particularly helpful in understanding the essence of "Word-oriented programming."

Third, I am in disagreement with many of the claims made in the above story and in the paper. If you will please forgive me for my bluntness, I do not believe that the Word Model contains any novel or noteworthy contributions. Now, I've made a lot of boneheaded mistakes in my day, and maybe this is one of them. In fairness to nile, I'll explain my reasoning below so that if I've tipped into foolishness, somebody can point out where it happened.

In any case, nile has demonstrated nothing but a genuine patience and professionalism in his explanations to me so far. Since one of nile's posts indicates that the Word Model paper will be submitted to the ACM for possible publication, please consider my criticism below to be an early round of peer review designed to strengthen the final submission.

My observations about the Word Model

My understanding of the Word Model is as follows:

(1) You have a system of objects.
 
(2) Among the objects there are relationships, the meaning of which can be considered the system's "semantics."
 
(3) The way in which the objects can legally be combined in the system is its "syntax" and is governed by a "grammar".
 
(4) In traditional OO approaches this grammar is expressed outside of the objects (e.g., w.r.t. language domains, in an external parser).
 
(5) Therefore, the overall relationship between the system's syntax and its semantics is difficult to understand and ultimately limits the usefulness of the system because changes to the syntax may have hard-to-find, unintentional consequences for the system's semantics.
 
(6) In response, the Word Model suggests that the grammar should be represented within the objects, creating "words". This representation takes the form of self-recognizers in each of the objects and rule/relationships for each object that define how the object can relate to others. It is claimed that the Word Model representation creates an obvious, bidirectional relationship between the system's syntax and its semantics, the ultimate consequence of which is that the system is easier to extend and integrate with other systems.
My analysis of these claims is as follows:

Claims (1), (2), and (3) I accept.

Claim (4) I must question on the grounds that grammatical information is easily and commonly incorporated into objects. Regarding computer languages, for example, OO implementations of recursive descent parsers commonly do this. Similar designs are manifest in other problem domains. Therefore, I don't think that OOP suffers from the problem described in Claim (4); and I believe that normal OOP techniques suffice to solve this "problem" when desirable.
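To make that concrete, here is a bare-bones sketch (illustrative only) of the kind of OO recursive descent parser I mean, where each grammar rule lives inside the object right next to its semantics:

#include <iostream>
#include <string>

// An ordinary OO recursive descent parser: the grammar rules
// (number, expr) are methods of the object, and each rule carries
// its semantics with it. No Word Model required.
class ExprParser
{
    std::string src;
    std::string::size_type pos;

    // grammar rule: number := digit+
    int number()
    {
        int value = 0;
        while (pos < src.size() && src[pos] >= '0' && src[pos] <= '9')
            value = value * 10 + (src[pos++] - '0');
        return value;
    }

public:
    ExprParser(const std::string& s) : src(s), pos(0) {}

    // grammar rule: expr := number ('+' number)*
    int expr()
    {
        int value = number();
        while (pos < src.size() && src[pos] == '+')
        {
            ++pos;
            value += number(); // the semantics, coupled with the rule
        }
        return value;
    }
};

int main()
{
    ExprParser parser("2+3+4");
    std::cout << parser.expr() << std::endl; // prints 9
    return 0;
}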

Claim (5) I must question on similar grounds. The relationship between a system's syntax and its semantics is unclear only in a poorly designed system. The author's OO examples in this story, for example, aren't what I would call good designs. That they suffer from syntax-semantic obfuscation isn't an indication that OOP designs in general suffer this problem.

Claim (6) I must also question, even if I were to stipulate that (4) and (5) are real problems (which I don't), because the benefits that the proposed Word Model offers are readily available with good design techniques that do not suffer fundamental problems that I believe exist in the Word Model:

  • Foremost, the Word Method does not allow for the painless integration claimed because of semantic mismatch in the domains to be integrated. One of the author's posts about the Word Method suggests that disparate domains like banking and trading can be integrated by simply editing the two domains' respective words' rules to incorporate the other domain's words. However, upon scrutiny, this doesn't seem to hold. For example, it's likely that both domains have "Account" words -- after all, I have banking accounts and trading accounts. Which of the two domains' Account words do you use? What if their syntaxes overlap? Don't forget that each has its own semantics. The banking Account word's semantics may equate it with an account identifier on a local mainframe, but the trading Account word's semantics may equate it with a queue in which trades are placed for processing by a third-party brokerage. Just because you can integrate the domains' syntaxes doesn't mean that their underlying semantics are compatible. Aligning mismatched semantics is a difficult problem, and what the Word Model promises doesn't help in this regard.
  • The Word Method doesn't seem to offer any way in which context-sensitive grammars can be handled without undoing its supposed primary benefit of bundling a word's syntax and its semantics in a clean, obvious, and self-contained package. For example, consider a word whose semantic interpretation of the words that follow it on the input stream varies depending on the context in which the word is used, e.g., an Account word used in a banking context vs. a trading context. Now we have semantic interpretation varying depending on syntactic context. There doesn't seem to be a clean way to handle this type of relationship in the Word Model. In order for that word to select which rules to allow, it would need to have its own context provided to it. So we could pass the current context -- banking or trading -- from word to word so that when we finally encounter an Account word it would know which semantic interpretation to follow. But this seems to pierce the clean coupling that the Word Method promises.
  • The Word Method obfuscates the system's grammar by creating needless redundancy. For example, in the HTML example in the paper, it would appear that the Body word would need to have rules for its relationship with the Br word, the Img word, etc. These rules -- or at least the bulk of their content -- would need to be duplicated in the words Td, P, Div, Center, and so on for any other HTML element that can contain IMG or BR elements. (Of course, this could just be a limitation of the word-oriented solution presented in the HTML example. Perhaps one could create an Inline word to represent all the different inline elements like IMG, BR, etc., and rather than duplicating BrRule, ImgRule, etc. all over the place, whenever a word like Body or Div can accept inline content, we could give it a single InlineRule which would indirectly incorporate the BrRule, ImgRule, etc. via the Inline word. But if that's the case, we're building a recursive descent parser, and we might as well use the standard techniques.)
  • The Word Method seems to require that, in a system of N related objects, the programmer must maintain O(N^2) rule/relationships. The cost of making each word "stand on its own" is proportional to the number of words in the system. Adding one new word could require modifying N other words. Thus changes to the grammar become increasingly costly. For example, if you wanted to bring back HTML's BLINK element by adding a Blink word to your Word-based solution, you would have to add rules for Blink to the Body, Td, P, Div, Center, H1, ..., H6, Li, and a bunch of other words. Ouch. (The same Inline word comments apply here, too.)
That's my reasoning. If I am mistaken, I would appreciate any corrections that could be provided. Thanks for taking the time to read and consider my comments.

--
My blog | LectroTest

[ Disagree? Reply. ]


Answers to Two Points (none / 0) (#117)
by nile on Mon Mar 26, 2001 at 09:49:35 PM EST

Hi, I agree with tmoertel that we have been having a useful discussion below. He's pointed out some very useful criticisms with the format of this paper that I agree with. He's also been very kind in responding at length even though we disagree. I'm going to respond to all of his points after dinner. Right now, I'll hit two:

The Word Method obfuscates the system's grammar by creating needless redundancy.

and

The Word Method seems to require that, in a system of N related objects, the programmer must maintain O(N^2) rule/relationships.

Both of these can be answered very simply. Words can inherit/subtract/modify their rule/relationships. This is very critical: it is something OOP languages cannot naturally do.

In a real-world example, there would be a root word with a base set of relationships. Body, P, Frame, Div, Td, and other XHTML words would then inherit from it and add, modify, or subtract relationships as needed.
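Here is a quick illustrative sketch of that root-word idea (a hypothetical shape, not the real BlueBox code):

#include <set>
#include <string>

// A word's rule/relationships, modeled here as a plain set of the
// word names it may legally contain.
class Word
{
protected:
    std::set<std::string> rules;

public:
    virtual ~Word() {}
    void addRule(const std::string& rule) { rules.insert(rule); }
    void subtractRule(const std::string& rule) { rules.erase(rule); }
    bool allows(const std::string& rule) const { return rules.count(rule) > 0; }
};

// Root word: the base set of relationships shared by the XHTML words.
class RootWord : public Word
{
public:
    RootWord() { addRule("img"); addRule("br"); }
};

class Body : public RootWord {}; // inherits img and br unchanged

class Head : public RootWord
{
public:
    Head() { subtractRule("img"); subtractRule("br"); } // subtracts rules
};

int main()
{
    Body body;
    Head head;
    return (body.allows("img") && !head.allows("img")) ? 0 : 1;
}

On this reading, bringing back BLINK would mean one new rule in the root word rather than edits to Body, Td, P, Div, and the rest.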

Hope this helps,

I'll reply to the others after dinner,

Nile

[ Parent ]
Claims 4 and 5 (none / 0) (#121)
by nile on Mon Mar 26, 2001 at 11:37:41 PM EST

Ok, let's first look at the response to the claims:

Claim (4) I must question on the grounds that grammatical information is easily and commonly incorporated into objects. Regarding computer languages, for example, OO implementations of recursive descent parsers commonly do this. Similar designs are manifest in other problem domains. Therefore, I don't think that OOP suffers from the problem described in Claim (4); and I believe that normal OOP techniques suffice to solve this "problem" when desirable.

Claim (5) I must question on similar grounds. The relationship between a system's syntax and its semantics is unclear only in a poorly designed system. The author's OO examples in this story, for example, aren't what I would call good designs. That they suffer from syntax-semantic obfuscation isn't an indication that OOP designs in general suffer this problem.


So, there's a few things going on here. First, it's clear that my example of OOP gone bad has been misunderstood as a claim that OOP has to be done this way. This is my fault.

Riel, in OODH, was very clear that he was giving a worst-case structured programming example to show the spaghetti-like relationship between data and behavior that frequently occurred. Riel was also clear that structured languages like C could easily solve this problem by coupling data and methods as the GNOME people do. I should have been very explicit about this.

You can couple syntactical and semantic relationships in any programming language you want - C++, C, Lisp, Basic, etc. - and, in fact, you should if you want to avoid side effects.

There are many ways to view the coupling of syntax/semantic relationships, just as there are many ways to view data/method coupling. Let's take a safety/flying perspective here. The claim is not that you can't fly with C++, just that it doesn't have the safety mechanisms built in, in the same way C is missing safety mechanisms between data and methods. Good C++ programmers, like good pilots, will naturally construct these safety mechanisms.

The problem is that they are always needed. All problems have syntactical and semantic relationships. Consider an ATM. It has relationships between its money feeder, its button interface, the customer, and the bank. Consider a calculator. It has relationships between numbers, operators, and parentheses. Consider a web browser. It has relationships between navigation, history, user interface, etc. Like the object model, the coupling of grammar rules, data, and methods in the word model applies to the entire domain of programming. If you don't couple the relationships, then, just as the lack of coupling in the structured model leads to undesirable side effects from additional data dependencies, you'll have undesirable side effects with new syntactical dependencies.

I know that was long, but it only had two points: Safety mechanisms vs. Flying and that all problems have syntax/semantic relationships.

We can end on one more point. Just like in the object model, this coupling brings a new type of reuse to the table. Rule/relationships can be added/subtracted/modified, etc., by their children. In this way, the syntactical/semantic relationships in a word can be reused!

cheers,

Nile

[ Parent ]
I'm looking, but I can't see it. (none / 0) (#123)
by tmoertel on Tue Mar 27, 2001 at 12:48:40 AM EST

I'm in agreement that spaghetti code is bad. So is design that hides the relationship between syntax and semantics. These are problems, no doubt about it.

But, I don't see how "words" solve these problems any better than what's available now with existing techniques. I'm trying to see it, believe me, but I can't. My intuition is telling me that I can't see it because it's not there. If you can show it to me, however, I'll gladly look at it.

Here's what I (and I suspect others) would like to see:

  1. A concrete demonstration or proof that existing programming models (e.g., OO and functional) when applied sensibly are in some way fundamentally deficient. (All we've seen so far are examples that show that bad OO programming is deficient.)
  2. A crisp, concise definition of what these deficiencies are. (The deficiencies I've seen have been extrapolated from or substantiated by the OO examples, which are not examples of good OO design. I will suggest that you need a definition that stands on its own and can be tested to demonstrate its veracity.)
  3. A crisp, concise explanation of why OO and functional models cannot elegantly address these deficiencies. (The explanations that have been presented so far have overstated the limitations of OO and (especially) FP. Nobody is arguing that bad OO or bad FP isn't bad. If you could show that good OO and FP were in some fundamental way bad, however, then you would have something.)
  4. A crisp, concise definition of what the "Word Model" is.
  5. A concrete demonstration or proof that the Word Model does elegantly address the aforementioned deficiencies. (Your papers say that the Word Model handles the deficiencies, but don't show it.)
  6. A list of what you are claiming as new contributions to the field of Computer Science. (I include this because of the subtitle "Why Computer Science Hasn't Solved the Big Questions?" in the paper "Natural Programming and Integration" and this claim in "The Word Programming Model: A Detailed Explanation" paper: "The word model is important because it increases the number of problems that computer scientists can solve," suggesting a novel, noteworthy contribution.)
  7. Finally, a literature search and comparison with existing methods that demonstrates that the claimed contributions are novel and noteworthy. (Show that the Word Method isn't a repackaging of existing techniques.)

--
My blog | LectroTest

[ Disagree? Reply. ]


[ Parent ]
These criteria would knock out OOP (none / 0) (#125)
by nile on Tue Mar 27, 2001 at 01:10:06 AM EST

1 and 3. The object model would never be a programming model by these criteria. A C programmer would simply say that coupling data and methods is a good technique (3), and they would be right. A C programmer would also say that not coupling them is a bad way to program, and they would be right (1). Therefore, OOP is not a programming model.

Your criteria are much too strong. You need to construct a set of criteria that, if presented to a structural programmer, would allow the programmer to say that OOP was a new model.

2. Here's a very simple one: the inability to look at grammar rules and see the semantic dependencies. Here's a more complex one: put the syntax rule with the semantics of what that relationship means, and the encapsulation of the syntax rule will keep it from having side effects.

4. A new fundamental unit of programming that couples data, methods, and rule/relationships, and has a self-identifier.

5. Looking at 2 again, where I aspired to minimalism, the example in the paper shows that it does solve the problem. The syntax relationship is with the semantic relationships, so you can look at it and see the dependencies. This is intentionally minimalistic and intentionally involves a physical inspection. You can also verify the more complex claim by noticing that the word example does not have side effects.

6. Words allow you to richly compose different domains because they don't have side effects. This is traditionally hard.

7. Finally, a literature search and comparison with existing methods that demonstrates that the claimed contributions are novel and noteworthy. (Show that the Word Method isn't a repackaging of existing techniques.)

I've done such a search. I've also answered a large number of questions that have tried to explain it in terms of one thing or another. Invariably, the person has to restrict some part of the system in order to make it work. There is also never any inheritance or polymorphism of rule/relationships.

I need to finish the code and go to bed, but I'll answer any new questions you post in the morning,

Nile

[ Parent ]
Now we're getting somewhere... (none / 0) (#127)
by tmoertel on Tue Mar 27, 2001 at 02:16:48 AM EST

Thanks again for taking the time to respond.

On (1) and (3) being too strict, I don't think so. In order to show that structural programming has a deficiency, one can show that the real world is perceived by humans as a collection of related things (objects). Humans are very good at grappling with object-related complexity; we do it every day. One can therefore argue that SP, by not modeling these objects and their relationships directly, is inefficient at representing the human-perceived real world, which is a frequent domain that must be modeled in software. OOP addresses this fundamental deficiency by directly supporting the ways that humans deal with the real world (via abstraction, encapsulation, modularity, and hierarchy).

Regarding (2) you wrote:

Here's a very simple [definition of what the deficiencies in OOP and FP are]. The inability to look at grammar rules and see the semantic dependencies. Here's a more complex one. Put the syntax rule with the semantics of what that relationships means and the encapsulation of the syntax rule will keep it from having side effects.
This is the crux of the difficulty that I'm having. Where is the inability to look at grammar rules and see the semantic dependencies? It's in the examples of bad OO coding in your paper, certainly, but in good code? That's what I want to see. Please look at, as just one example of how this is already done today, Parsec. Are you telling me that you can't immediately see the semantic dependencies from looking at the grammar rules? In the same way that Parsec uses monadic combinators elegantly to capture the syntactic-semantic binding in the parsing domain, combinators are commonly used in other domains to do the same thing. This stuff isn't new.

Your (4) I'll accept, with the provision that the "fundamental unit of programming" may not be new.

On (5) and (6), I don't see it. If the examples (both OO and Words) had taken the next step, to show how you would merge two domains, then perhaps the benefits could be weighed. However, just coupling the grammar with the objects is not sufficient to demonstrate that you can easily integrate different domains. For example, what if the cost of updating/adding rules and relationships is greater than that of doing it the old-fashioned way? Where, then, is the benefit, even if your objects and grammars are coupled? In other words, if the cost of avoiding undesirable effects is greater than the effects themselves, where is the benefit?

On (7), our previous discussions suggest that you didn't consider functional programming in your literature search. You may wish to examine modern FP more deeply, because it addresses what you consider to be outstanding problems.

Regards,
Tom

--
My blog | LectroTest

[ Disagree? Reply. ]


[ Parent ]
Definitely, notice the coupling relationship (none / 0) (#138)
by nile on Tue Mar 27, 2001 at 05:22:40 PM EST

You're right. Notice the coupling relationship that occurs in monadic combinators between syntax and semantics.

Now, notice the coupling that occurs between data and methods in OOP.

Do you see how analyzing different programming models in terms of side effects and coupling provides a common ground? The same analysis can be applied to logic programming, by the way. Think of implications and the side effects that they can have on each other.

The point of this coupling analysis is that we need to join the coupling discoveries from different fields of programming to form a new unit of programming that eliminates all of the side effects. I have only discussed syntax/semantics here, but look at logic programming and parallel programming and maybe it will make more sense.

Thanks for the reference by the way. It makes telling the story much easier. My apologies for my lack of knowledge of modern functional programming, when I studied it, this coupling was not discussed.

cheers!

Nile

[ Parent ]
Further followup (none / 0) (#153)
by nile on Thu Mar 29, 2001 at 06:17:50 PM EST

This is the crux of the difficulty that I'm having. Where is the inability to look at grammar rules and see the semantic dependencies? It's in the examples of bad OO coding in your paper, certainly, but in good code? That's what I want to see. Please look at, as just one example of how this is already done today, Parsec. Are you telling me that you can't immediately see the semantic dependencies from looking at the grammar rules? In the same way that Parsec uses monadic combinators elegantly to capture the syntactic-semantic binding in the parsing domain, combinators are commonly used in other domains to do the same thing. This stuff isn't new.

I realized I was not explicit here in my response above. Monadic combinators do couple syntax and semantics, but they are stateless. After running their parsing rules and their operations, they are subsumed by the combinators above them. In this way, unlike words, they do not have persistent data and methods. They only couple two of the elements being discussed.

However, just coupling the grammar with the objects is not sufficient to demonstrate that you can easily integrate different domains.

This is based on my experience programming BlueBox. I agree that it needs to be made explicit with some simple examples.

cheers and thanks again for the conversation,

Nile

[ Parent ]
Regarding state... (none / 0) (#165)
by tmoertel on Sat Mar 31, 2001 at 11:20:24 PM EST

Regarding functional programming (FP) you wrote:
Monadic combinators do couple syntax and semantics but they are stateless.
Monadic combinators (and just about any FP idiom) can easily incorporate state. It's just that pure functional programming is not imperative, and so the way that state is maintained and updated is unfamiliar to (and easy to overlook by) people that aren't accustomed to it. For example, in Parsec, the Parser type is parameterized on the type of your choice. This type can hold any state that you want.

--
My blog | LectroTest

[ Disagree? Reply. ]


[ Parent ]
Can and Must (none / 0) (#166)
by nile on Sat Apr 14, 2001 at 06:16:37 PM EST

Missed this comment ...

I agree that monadic combinators can easily incorporate state. C++ can also incorporate rule/relationships and, in fact, does in the original BlueBox code.

Modern programming languages are Turing complete, so there are many different technologies in which it is possible to do word-oriented programming. It is also possible to do functional programming in C, word-oriented programming in assembly, OOP with monadic combinators, etc.

It is useful to distinguish between can and must. In word-oriented programming languages, one must couple data, methods, and syntactical and semantic relationships together just as one must couple data and methods together in OOP. The programming languages enforce good habits. In contrast, one can couple data and methods in C, and one can couple data, methods, and syntactical and semantic relationships with monadic combinators, but until this coupling is enforced, C will not be an OOPL and monadic combinators will not be a WOPL.

sorry for the delay, the question was buried,

cheers,

Nile

[ Parent ]
Answer to context point (none / 0) (#122)
by nile on Tue Mar 27, 2001 at 12:23:29 AM EST

To quote:

The Word Method doesn't seem to offer any way in which context-sensitive grammars can be handled without undoing its supposed primary benefit of bundling a word's syntax and its semantics in a clean, obvious, and self-contained package. For example, consider a word whose semantic interpretation of the words that follow it on the input stream varies depending on the context in which the word is used, e.g., an Account word used in a banking context vs. a trading context. Now we have semantic interpretation varying depending on syntactic context. There doesn't seem to be a clean way to handle this type of relationship in the Word Model. In order for that word to select which rules to allow, it would need to have its own context provided to it. So we could pass the current context -- banking or trading -- from word to word so that when we finally encounter an Account word it would know which semantic interpretation to follow. But this seems to pierce the clean coupling that the Word Method promises.

This does not break coupling if the context is information passed through proper coupling channels, as it is in BlueBox. Objects, for example, couple data and methods. Yet, in spite of this, a Customer object can see how much money is in a Bank object. In turn, the Customer object can share with a Neighbor object how much money that was. If all of this information is shared through the proper coupling channels (i.e., object methods and not direct access to object data), then the data/method coupling has not been violated.

So as long as the context information is passed through proper coupling channels (i.e., rule/relationships), it is not a violation of rule/relationship coupling.
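A small C++ sketch of the object analogy may help (Bank, Customer, and Neighbor are hypothetical classes invented for this example, not BlueBox code):

#include <iostream>

// Information flows only through methods -- the "proper coupling
// channels" -- never through direct access to another object's data.
class Bank
{
public:
    explicit Bank(double balance) : balance_(balance) {}
    double getBalance() const { return balance_; }  // a channel, not raw data
private:
    double balance_;                                // encapsulated
};

class Neighbor
{
public:
    void hearAbout(double amount) { std::cout << "Heard about: " << amount << "\n"; }
};

class Customer
{
public:
    // The Customer learns the balance through Bank's interface and
    // passes it along through Neighbor's interface; the coupling of
    // Bank's data to Bank's methods is never violated.
    void gossip(const Bank &bank, Neighbor &neighbor)
    {
        neighbor.hearAbout(bank.getBalance());
    }
};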

This was a very good question, by the way. I had to really scratch my head over dinner.

Nile

[ Parent ]
Seems like something is broken... (none / 0) (#128)
by tmoertel on Tue Mar 27, 2001 at 02:49:58 AM EST

Regarding:
This does not break coupling if the context is information passed through proper coupling channels as it is in BlueBox.
Are you sure you haven't broken the fundamental claimed benefit of Words? You would be passing syntactical context into the body of a rule/relationship, which can be responsible for providing semantics. Consider (to use the parser domain as an example) that the context could have been passed from several levels up the parse chain and that the token for the passed-from word probably would not appear in any of the passed-to word's syntax rules (because the words' respective tokens are never adjacent), even though the semantics attached to those rules may be dependent upon the syntax relating to the passed-from word (via the passed-in context). Here, then, is a syntax-semantic dependency that is not made apparent by the grammar rules. In order to understand the dependency, you must know what the passed-in context represents, how its value was initially determined, what rules it depends on, etc., and this information requires a code hunt, just like you claim is the problem with OO.

Doesn't this contradict the claim that Words address the big limitation of OO/FP? ("The inability to look at grammar rules and see the semantic dependencies.")

--
My blog | LectroTest

[ Disagree? Reply. ]


[ Parent ]
Well, I'm still up (none / 0) (#129)
by nile on Tue Mar 27, 2001 at 04:58:24 AM EST

Still working on the code for release, so I'm a little tired.

I think we need to look at the object example closer. I took your exact criticism and showed how one object could gain information from another and pass it to yet another without violating the object model. As long as I do not allow direct access to the syntax relationship so that it can be modified without the word's knowledge, there are no side effects.

I think we need to draw a careful line, and again I will return to Riel's analysis. The method dependencies on data that he was concerned about were those that could *change* the data. This makes sense, since the coupling is done to avoid side effects, and the only methods that can have unintended side effects through data are those that actually manipulate it. Let's distinguish, then, between dependencies that change and dependencies that do not change.

Now, the claim of Riel is that all of the dependencies that change should be put with the data that they change. That way, if you are using a client that relies on that data, you can look in one place and find out what is going on when the data changes unexpectedly.

Notice that using context does not directly change the syntax rules: in fact, it can't, since they are encapsulated. The only methods that can change the syntax rules are in the word.
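In code, the distinction looks something like this (a hedged sketch with hypothetical names, not BlueBox's actual classes): outside code can read a word's rules, but only the word's own methods can change them.

#include <cstddef>
#include <string>
#include <vector>

class Word
{
public:
    // Read-only channel: cannot cause a side effect on the rules.
    std::size_t ruleCount() const { return rules_.size(); }

    // The only path that *changes* the rules lives inside the word,
    // so a client handed derived context can never mutate them.
    void addRule(const std::string &rule) { rules_.push_back(rule); }

private:
    std::vector<std::string> rules_;  // encapsulated syntax rules
};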

Very useful distinction. Thanks,

Nile

[ Parent ]
Missing the point... still broken (none / 0) (#136)
by tmoertel on Tue Mar 27, 2001 at 04:42:23 PM EST

You suggest that we return to the object example from your earlier response, so let's do that:
[Passing around context information] does not break coupling if the context is information passed through proper coupling channels as it is in BlueBox.
I'm not talking about coupling, I'm talking about dependencies. You're saying that it's okay to pass contextual information about earlier-parsed Words into later Words because it doesn't necessarily break coupling. But it does create hard-to-see dependencies, just like you say is the problem with the OO approach. The later Words are now dependent upon the implementation of the earlier Words and their syntax rules. This is the exact problem that the Word Model supposedly solves. In other words, who cares if your coupling is intact if you still have dependency problems that require code hunts when you make changes?

For example, if you change a Word that creates or uses context that is passed on to other Words, your changes may have broken those other Words (because they depend on the context, which you don't encapsulate but pass on). You are now obligated to see which Words the context is passed to, see how they use it, see what Words they pass it to, and continue in this way until you have traced out the full flow path(s) of the context. It's a code hunt.

Also, you've assumed that the context will take the form of Words, but that's not the case. The form of the context will need to capture the syntactic relationships among the previously parsed Words that have been encountered during parsing. So, you can't just pass a Bank as context, for example. You would need to pass a richer structure that says, "You are in the context of a Bank (and here it is)." The reason is that contexts may be stacked: "You are in the context of a Bank (and here it is), which is in the context of a ForeignCountry (and here it is)". Sometimes, you won't even need the Word itself, just the information that you are within its context: "You are in the context of a Bank."
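For illustration, the richer structure described above could be sketched in C++ as a stack of context frames (hypothetical names; nothing here is taken from BlueBox):

#include <string>
#include <vector>

// Each frame records one enclosing Word; the innermost context is at
// the back. Sometimes the Word itself is carried, sometimes only its name.
struct ContextFrame
{
    std::string wordName;   // e.g., "Bank", "ForeignCountry"
    const void *word;       // the enclosing Word, when it is needed (may be null)
};

class ParseContext
{
public:
    void enter(const std::string &name, const void *word = nullptr)
    {
        frames_.push_back(ContextFrame{name, word});
    }
    void leave() { frames_.pop_back(); }

    // "Am I, perhaps several levels up, inside a Bank?"
    bool within(const std::string &name) const
    {
        for (const ContextFrame &f : frames_)
            if (f.wordName == name) return true;
        return false;
    }

private:
    std::vector<ContextFrame> frames_;
};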

--
My blog | LectroTest

[ Disagree? Reply. ]


[ Parent ]
Let's make a distinction (none / 0) (#139)
by nile on Tue Mar 27, 2001 at 07:01:55 PM EST

Again and again, you force me towards greater clarity Tom. Thanks for taking the time to do this.

So, there are two different types of dependency information we might want from a system. The first is the forward-looking repercussion view that you mentioned: if I make a change here, what will the effects be on the rest of the system? Neither the object model (in terms of data) nor the word model (in terms of syntax) provides that. You are dead on in this observation.

The other type of dependency is the debugging type: "Why did this change?" If you have a large system with thousands of files and one of them starts exhibiting strange behavior, you want to know what is causing this behavior. You want to be able to look at an object and see its data dependencies, or look at a word and see its syntactical dependencies. And you want to do it fast, because your main server is giving away $100 credits. [cut to IBM commercial]

You're right that the coupling does not solve the forward-dependency problem. No arguments there. To be honest, I hadn't made this distinction because I was only looking at dependencies from the debugging perspective, as Riel does. Thanks for bringing more clarity to the conversation by forcing this distinction.

The claim can now be made more exact as follows:

Coupling syntax and semantic relationships guarantees that information about why a syntax change occurs will be localized to the word the change occurs in.

Nile

[ Parent ]
The big question (none / 0) (#124)
by nile on Tue Mar 27, 2001 at 12:49:07 AM EST

To quote:

Foremost, the Word Method does not allow for the painless integration claimed because of semantic mismatch in the domains to be integrated. One of the author's posts about the Word Method suggests that disparate domains like banking and trading can be integrated by simply editing the two domains' respective words' rules to incorporate the other domain's words. However, upon scrutiny, this doesn't seem to hold. For example, it's likely that both domains have "Account" words -- after all, I have banking accounts and trading accounts. Which of the two domains' Account words do you use? What if their syntaxes overlap? Don't forget that each has its own semantics. The banking Account word's semantics may equate it with an account identifier on a local mainframe, but the trading Account word's semantics may equate it with a queue in which trades are placed for processing by a third-party brokerage. Just because you can integrate the domains' syntaxes doesn't mean that their underlying semantics are compatible. Aligning mismatched semantics is a difficult problem, and what the Word Model promises doesn't help in this regard.

Let's break this up into separate parts.

The banking Account word's semantics may equate it with an account identifier on a local mainframe, but the trading Account word's semantics may equate it with a queue in which trades are placed for processing by a third-party brokerage.

Let's start with the clash problem, since it is mainly one of namespaces. Each word in BlueBox has a unique Web-addressable ID, so there are no literal clashes there. Which word is being used is determined by the rule/relationships that are invoked, which point to this ID (not the token). The token parser in the word simply verifies that the token is indeed there.
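A sketch of the disambiguation being described (hypothetical names and URLs; the BlueBox source is the authority here):

#include <string>

// A rule/relationship refers to a word by its unique ID; the word's own
// recognizer merely verifies that the surface token is present.
struct WordRef
{
    std::string id;     // e.g., "http://bank.example/words/Account"
    std::string token;  // the surface token, e.g., "Account"
};

// Two words can share the token "Account" without clashing, because
// rules point at IDs, not tokens:
const WordRef bankingAccount = { "http://bank.example/words/Account",  "Account" };
const WordRef tradingAccount = { "http://trade.example/words/Account", "Account" };

bool recognize(const WordRef &w, const std::string &tok)
{
    return tok == w.token;  // verification only; identity lives in w.id
}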

What if their syntaxes overlap? Don't forget that each has its own semantics.

I assume that this is the unique ID problem again, but this time on a larger scale, with multiple words and rule/relationships. For example, the worry is that if you say:

Bank: Is the interest rate increasing on my bank account?

Trading: What bonds is the interest rate increasing on?

It could be the case that interest rate increases had different semantic values (I'm not really a money person, so bear with my ignorance). This would indeed be a problem if the rule/relationships pointed to tokens and not unique identifiers.

Now, of course, the following would be ambiguous if allowed in both:

Merged Bank/Trader: Please give me money from my account?

Merged Bank/Trader: Please give me money from my account?

But it should be ambiguous. There has been a merger, and the customer needs to use a syntax that - through its unique rule/relationships - will allow it to identify the unique IDs it needs. That is, the customer needs to speak in such a way as to specify which account - trading or savings - they are interested in.

Now, I did make one error here. In my examples above, I parsed tokens. Those examples were constructed in a fake version of C++, because readers requested that, and I couldn't compile the result to catch such errors. Look at the BlueBox source and you will see unique Web-addressable IDs.

cheers,

Nile

[ Parent ]
Forth, some commentary and an example (5.00 / 1) (#116)
by Captain Napalm on Mon Mar 26, 2001 at 09:45:17 PM EST

First off, have you ever studied Forth? Its basic programming model is a word (literally), and each word is responsible for its own actions and any other parsing that might be required. For the most part, the syntax (if you can really call it that) is RPN, so the data comes first, then the operation:

3 4 +

But that's not always the case; for instance, `:', which defines new words, has semantics (or syntax?) dictating that it is to be followed by the name of the word:

: add4 4 + ;

So `:' looks ahead in the parse stream for a sequence of characters (and in ANSI Forth, any character between ASCII 33 and 126 inclusive can be used in the label, so something like `: 3.14 3.1415926 ;' is perfectly legal) and creates a new word from that. In effect, it does its own parsing. Such words are typically rare, but that doesn't stop you from defining your own forms of syntax. There's one package that allows you to type in equations using infix notation (FORTRANesque, actually), and it only takes a few lines of code to get OOP in Forth (heck, I've seen an 8080 assembler in one page of Forth, and an 80386 assembler in 20 pages).

Now, onto other things. I have a problem with both of your XML parser examples. In the first one, you explicitly check for each tag type and handle it in the main while loop, while in the second example, you seem to call a function for each tag type, which either does something or not. Both are horrible ways to code, I'm afraid.

I'd do something like (and forgive me if I don't use C++; I do OOP in C):

typedef struct mltag    *MLTag;
typedef struct mlparser *MLParser;

void xhtml_body(MLTag self, MLParser input)
{
  MLTag next;
  void (*function)(MLTag, MLParser);

  /* pull items off the parse stream until it is exhausted */
  while ((next = MLParserNext(input)) != NULL)
  {
    if (MLTagType(next) == TAG)
    {
      /* dispatch to the handler registered for this tag name */
      function = lookuptag(MLTagName(next));
      (*function)(next, input);
    }
    else if (MLTagType(next) == DATA)
    {
      /* process non-tag data */
    }
    else
    {
      /* error, neither tag nor data */
    }
  }
}

lookuptag() would look through a table of tag names (say, "BR") and, when found, return an associated function. If a function for a tag didn't exist, then a function that does nothing would be returned. This makes it easy enough to add support for new tags, and I think it was discussed in an earlier post. But I fail to see how this is a revolutionary new programming paradigm, since I've been coding like this for years now (unless it's so obvious to me, and to no one else, that something like this is revolutionary).

Your calculator example is rather sparse. Let's see if we can't flesh things out a bit. Start with a simple calculator specified by the following:

<expr> ::= <term> | <expr> `+' <term> | E
<term> ::= <factor> | <term> `*' <factor> | E
<factor> ::= NUMBER | `(' <expr> ')' | E

(E is epsilon, or null input; literals are in quotes; upper case defines sets, which I'll leave unspecified for now; and alternatives are separated by vertical bars.) An empty expression returns `0'. So far so good. Code that up (and only that) using words (don't bother with minus and division; I'm keeping things simple).
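For readers who want a concrete baseline to compare a word-based solution against, here is one conventional, non-word reading of the grammar above as a C++ recursive-descent evaluator (iterating where the spec is written left-recursively; the class and its names are invented for this sketch, not part of the challenge):

#include <cctype>
#include <cstddef>
#include <string>

// Calc("2+3*(4+1)").run() == 17; Calc("").run() == 0, per the spec above.
class Calc
{
public:
    explicit Calc(const std::string &s) : s_(s), i_(0) {}
    long run() { return expr(); }

private:
    long expr()                       // <expr> ::= <term> { `+' <term> }
    {
        long v = term();
        while (peek() == '+') { ++i_; v += term(); }
        return v;
    }
    long term()                       // <term> ::= <factor> { `*' <factor> }
    {
        long v = factor();
        while (peek() == '*') { ++i_; v *= factor(); }
        return v;
    }
    long factor()                     // <factor> ::= NUMBER | `(' <expr> ')' | E
    {
        if (peek() == '(')            // peek() also skips any blanks
        {
            ++i_;
            long v = expr();
            if (peek() == ')') ++i_;
            return v;
        }
        long v = 0;
        while (i_ < s_.size() && std::isdigit(static_cast<unsigned char>(s_[i_])))
            v = v * 10 + (s_[i_++] - '0');
        return v;                     // epsilon: no number here means 0
    }
    char peek()                       // next non-blank character, or '\0'
    {
        while (i_ < s_.size() && s_[i_] == ' ') ++i_;
        return i_ < s_.size() ? s_[i_] : '\0';
    }

    std::string s_;
    std::size_t i_;
};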

Now do a different style of calculator, a LISPish one. You don't have to worry about precedence here, just the operators (add) and (mult). For example:

(add 2 3 (mult 5 6))
(add 1 2 3 4 5)
(mult 1 2 3 4 5)

Again, an empty list returns `0' (and, if you like, a parameterless function (add) can return 0 as well).

Now do a third style, this time in RPN. In this case, each operator will only expect two items on the data stack. Again, an example:

5 6 * 2 3 + +
1 2 + 3 + 4 + 5 +
1 2 * 3 * 4 * 5 *

No input returns `0'. If there is only 1 item on the stack, return either that item (for the case of +) or 0 (for the case of *), and if there are no parameters on the stack, return 0.

Now, since this was done using words, mix and match them. I'll leave how to switch between them up to you, but I would expect you to be able to nest the parsing.

-spc (Don't quite see what the fuss is about)


Answers (none / 0) (#118)
by nile on Mon Mar 26, 2001 at 10:10:12 PM EST

I have answers to three of your samples (i.e., EBNF, Lisp, and RPN). I believe Forth falls under the same objection, but I need to study it more.

I've downloaded a Forth manual and will be studying its dictionary over dinner.

As a preview to my answer, check out this page on global grammar rules here.

cheers,

Nile

[ Parent ]
Forth and other questions (none / 0) (#126)
by nile on Tue Mar 27, 2001 at 01:33:34 AM EST

Ok, I've read the manual. Here's the quick gist. Forth and all of the other examples don't have a coupling of syntax and semantic relationships. They also have global grammar rules, largely as a result of this lack of coupling.

I have to release the code and go to bed, but check out the link below and look at where it talks about the side effects of global grammar rules. This is a syntactical analysis of the problem.

I'll be happy to answer more questions if you have any in the morning.

Nile

[ Parent ]
Static oop-> reflection-> dynamic oop-> w (none / 0) (#119)
by doomsayer on Mon Mar 26, 2001 at 10:55:51 PM EST

Here's what I think the word model partly is. The languages I know best are C++, java and python so I'll use them in my explanation.

Static oop; Classes, inheritance, polymorphism
Available in C++, java, python

Reflection; getClassName() in the language and get object properties in general
Available in java, python

Dynamic oop; ability to load or evaluate a class at runtime
Available in python, kind of in java, where you can load a .class file which is derived from a .java file

Word; the ability to have different classes with the same class name, as long as they have different namespaces; classes should have a function like getNamespace(). This is useful when you want, for example, to have a table class used in html and in a game. Without namespace differentiation, you'd have to make two classes: Table and Gametable. The word syntactic sugar enables you to make both your classes Table; one's getNamespace() function would return 'html' and the other's getNamespace() function would return 'game'. Both could be loaded into the machine and not overwrite each other even though they have the same class name.

The full word model could very well have more abilities; but, from what I've read so far, it at least handles different namespaces well. This ability would be useful in very large projects, where it would become a pain to have to name every class htmltable, gametable, architecturetable, etc. C++, java and python have some namespace support; but, for instance, I've never seen a way of getting the namespace by querying an object. In the same way that you can simulate C in C-- by, say, using only globals and prefixing their names with the function names, as was posted before, so you can simulate different class names in C++; but it would be nice to have the syntactic sugar so the names don't become too unwieldy. A sketch of the idea follows.
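A quick C++ sketch of that idea (hypothetical code): two classes may both be named Table, distinguished by namespace, and each can report its namespace at runtime.

#include <string>

namespace html
{
    class Table
    {
    public:
        static std::string getNamespace() { return "html"; }
        // ... layout-oriented data and methods would go here ...
    };
}

namespace game
{
    class Table
    {
    public:
        static std::string getNamespace() { return "game"; }
        // ... game-oriented data and methods would go here ...
    };
}

// html::Table::getNamespace() == "html"; game::Table::getNamespace() == "game".
// Both classes coexist even though they share the name Table.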

I believe part of the reason that people have trouble seeing the extra abilities of the word model is that a good oop programmer is usually using most of the word model already, often with a lot of effort. Dynamic oop is easily used in python already, with some extra effort in java, and with more effort in C++ by using corba and dynamic libraries. The different namespaces, while potentially useful, are generally not used, because some compilers don't support them and the project has to be huge before they become useful. It would also help if filesystems could regularly handle attributes as well as filenames; because even if your compiler / interpreter can see that two classes called table are different, the filesystem will still see two files with the same name, which can easily cause trouble later when people merge / update projects.

Actually, it's more than that (none / 0) (#120)
by nile on Mon Mar 26, 2001 at 11:13:45 PM EST

Thanks for commenting and trying to understand what I'm saying. Actually, the word model is a little different than you described. It can indeed avoid clashes, but C++ can do this as well with namespaces.

Words couple syntax/semantic relationships (i.e., rule/relationships) and encapsulate them with data, methods, and a self identifier. The self identifier is a function that recognizes the word as a token. A table word, for example, would recognize the token "Table." So, this isn't reflection, it's token recognition.

Now, the rule/relationships are really what words are all about. I recommend reading these examples here and here. Notice how the rule/relationships connect concepts in a domain and at the same time encapsulate those connections so that they do not have side effects on existing connections. This is why it is easy to semantically integrate two different domains.
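To picture what is being described, here is an illustrative C++ sketch of a word (hypothetical names and shapes; see the BlueBox source for the real thing): data, methods, a self identifier, and rule/relationships in one unit.

#include <string>
#include <vector>

class TableWord
{
public:
    // Self identifier: recognizes the word's own token.
    bool recognize(const std::string &token) const { return token == "Table"; }

    // Ordinary data and methods, as in an object.
    int rows() const { return rows_; }

    // Rule/relationships: which words may legally relate to this one,
    // and what the grouping means (reduced here to a description).
    struct Rule
    {
        std::string relatedWordId;  // unique ID of the related word
        std::string meaning;        // the semantics of the relationship
    };
    const std::vector<Rule> &rules() const { return rules_; }

private:
    int rows_ = 0;
    std::vector<Rule> rules_ = {
        { "http://example/words/Row", "a Row nested in a Table adds a row" }
    };
};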

Please let me know if this makes any sense. I'll be happy to answer any questions.

Nile

[ Parent ]
A question of "how" (none / 0) (#133)
by kostya on Tue Mar 27, 2001 at 10:04:09 AM EST

I'm curious. Will words be implemented the way C++ implemented many of its features -- i.e., no cost if you don't use them? Will it be an extension to a language like C++, another paradigm supported by the extended language? I.e., you add words in addition to parameterized programming and OOP?

Or is this too radical a concept? I.e., in order to do it right you need to really change how you program?



----
Veritas otium parit. --Terence
Check out the BlueBox source (none / 0) (#135)
by nile on Tue Mar 27, 2001 at 03:43:45 PM EST

An early version of BlueBox is available specifically to answer questions like these. Be forewarned that it has several bugs and a few inconsistencies with what is described here, and is still raw.

It should provide a very good foundation for understanding rule/relationships. Look under the language/xmlgui portion for all of the words in XMLGUI and, although this isn't implemented yet, it should be clear that the rule/relationships can be added/subtracted/modified by their children: i.e., there is a new type of inheritance and polymorphism going on here that is absent in the OOP world.
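As a rough sketch of what inheriting rule/relationships could look like (hypothetical code, not the BlueBox implementation): a child word starts from its parent's rules and adds to or overrides them.

#include <string>
#include <vector>

class ParentWord
{
public:
    virtual ~ParentWord() = default;
    virtual std::vector<std::string> rules() const
    {
        return { "Row follows Table" };
    }
};

class ChildWord : public ParentWord
{
public:
    // Polymorphic rule/relationships: the child extends the inherited set.
    std::vector<std::string> rules() const override
    {
        std::vector<std::string> r = ParentWord::rules();
        r.push_back("Caption may precede Row");  // rule added by the child
        return r;
    }
};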

Let me know if this answers your questions. You do have to change how you program, but the change is a coupling constraint so it makes the design process easier.

Nile

[ Parent ]
Question about the source (none / 0) (#141)
by KnightStalker on Tue Mar 27, 2001 at 07:50:52 PM EST

What version of wxWindows are you compiling this against? I installed the latest version, and your code appears to expose a bug in it. That is, <wx/object.h> only declares the class wxClassInfo if wxUseDynamicClasses is #defined, but then it refers to that class if that symbol is *not* defined.

[ Parent ]
The configure files need some fixing (none / 0) (#149)
by nile on Thu Mar 29, 2001 at 12:05:57 PM EST

Thanks for pointing this out; this should be in the release notes. The configure files should also be using wx-config to determine the version that is on the user's system, and exiting if it is incompatible.

Here are the dependencies:

xerces1_30
wxGTK-2.2.3

I just looked on the wxGTK site and noticed that they have released a new version. This is the first time a new version has broken the code, so there does appear to be a bug. Here's a link to the old version.

The April release, which will be more official, should install out of the box. The goal is for it also to have a virtual symbol recognizer table and polymorphic rule/relationships. This way the new types of inheritance and polymorphism that the word model brings to software design will be immediately obvious to users. Hopefully, it will also make the story much easier to tell.

Thanks for looking at the source. Let me know if you have any more problems. I'll be putting up the current version (written in Python) soon in CVS.

Nile

[ Parent ]
My problem (none / 0) (#156)
by KnightStalker on Thu Mar 29, 2001 at 07:51:17 PM EST

Sorry about that. For some reason I didn't have wx/gtk/setup.h installed correctly, which was causing my problem.

[ Parent ]
Another problem (none / 0) (#157)
by KnightStalker on Thu Mar 29, 2001 at 08:19:45 PM EST

src/xmlgui/Makefile is trying to create an archive with a couple of other archives that it assumes I have -- /usr/lib/libwords.a and /usr/lib/libantiwords.a. I assume libwords.a is the same as what's in $BLUEBOX/lib/libwords.a, but what libantiwords.a is, I have no idea, as the only files that contain the string "antiwords" are that makefile and some leftover Emacs temp files.

[ Parent ]
I figured it out (none / 0) (#161)
by KnightStalker on Fri Mar 30, 2001 at 01:01:11 AM EST

It works if you just take out the line that tells it to archive that.

[ Parent ]
Thanks for pointing this out. (none / 0) (#163)
by nile on Fri Mar 30, 2001 at 12:57:15 PM EST

This is a bug that didn't appear on my machine because I had that library.

I'm fixing it right now.

Nile

[ Parent ]
This is the most stupid thing I've ever read (1.00 / 1) (#134)
by exa on Tue Mar 27, 2001 at 03:10:27 PM EST

Well, it's clear that the author has almost no knowledge of programming language semantics, compiler design, or artificial intelligence. Similar issues have been discussed at a much more intellectual level in the semantics field. This is only a half-hearted attempt at "I want to be in the game," with no significant contributions.

It's funny that there is no reference to real work on language design. It seems as if the author knows nothing about related computer science fields. I guess he's not a computer scientist after all, because his research(!) seems to converge to a local minimum.

It's absurd that no reference to work on knowledge representation or advanced grammar theories exist. I'm certain that this poor guy knows nothing about them.

Sorry, but the acceptance of this article also shows how ignorant kuro5hin people really are. It would be a nice exercise to write an article that makes fun of this post, but unfortunately I have no time for such a waste of time.

You guys are saying kuro5hin is a much more intellectual place than slashdot, but at least I can find some recent news at slashdot. All I find here lately are lame articles like this.

__
exa a.k.a Eray Ozkural
There is no perfect circle.

Read the theoretical paper for references (none / 0) (#137)
by nile on Tue Mar 27, 2001 at 04:54:28 PM EST

As posted in the earlier version, there is a theoretical paper with several references here. There is also an earlier paper that sets out the syntactical aspects of the problem I am addressing here. Both of these papers have substantial references.

The reason that this article does not is that the readers of kuro5hin made it very clear, when I posted an earlier version, that they wanted a simpler explanation that avoided dense computer science theory.

Many have complained that this paper itself is too dense with its discussion of coupling and side effects, even though it eliminates much of that theory.

I understand your concerns, but I'm trying to meet the needs of my audience. If you want to get a taste of my background in the field, read my responses to dozens of questions below. There are several technical discussions occurring lower in the paper.

cheers,

Nile

[ Parent ]
Look at the comments below (none / 0) (#140)
by nile on Tue Mar 27, 2001 at 07:36:46 PM EST

The kuro5hin people are not idiots. Look at the comments below and you'll see that readers of kuro5hin possess a wide breadth of technical knowledge. There are discussions on syntax/semantic coupling in relation to templates/concepts, structured programming solutions, monadic combinators, OOP solutions, recursive descent parsers, Haskell, Lisp, Forth, logic programming, and dozens of other technical angles.

These people have taken the time - which I greatly appreciate - to try to understand what I am saying and to respond to it. As an author, this is invaluable. As a person interested in the subject, their questions have also forced me to think much more clearly about the subject. Look in particular at Tom's (tmoertel's) comments and the discussion there. He just forced me to make a distinction in how I used one of my terms.

I think the work here stands, by the way, and would welcome a discussion on it. The conversation is on the coupling of syntax relationships, semantic relationships, data, and behavior. If you want to bring unification-based grammars with typed feature structures into this conversation, I'll be happy to discuss them.

cheers,

Nile

[ Parent ]
Sorry but (none / 0) (#142)
by exa on Tue Mar 27, 2001 at 08:30:49 PM EST

I don't see anything that's reminiscent of a programming paradigm here. For a programming paradigm shift, you need different foundations, approach, PL constructs; something significant.

In particular, what you're describing doesn't seem to have much semantic significance.

In case you haven't noticed, you can already do what you describe with current languages. I'd written an LR(1) parser in C++ which was object-oriented pretty much like you describe: one class for each type in the language. Sounds cool? Unfortunately, I didn't have the time to make it into a release. In my opinion it's still better than lex/yacc, but you'd really need extensible syntax/semantics to achieve a flexible language.

Now, that's just a great way to write compilers. But unfortunately, caching words from the Internet does not solve programming problems in the real world. Programming needs *formal* languages; this won't do. That bluebox thing is just a toy, and I don't see in what way it is different from a dumb search engine. You can't acquire any language like that.

Anyway, the underlying idea is good, but the presentation is so awful and the application of the idea has not been worked into a concrete state.

You should look into logic languages and functional languages, I think. You're still thinking in C++, and imperative languages don't take you anywhere. (And neither does XML.) Be careful about one thing: the mapping between syntax and semantics is not always straightforward.

You may also want to consider looking into Ontology research if you are interested in KR.

__
exa a.k.a Eray Ozkural
There is no perfect circle.

[ Parent ]
On KR and other topics (none / 0) (#146)
by nile on Wed Mar 28, 2001 at 04:15:16 PM EST

I don't see anything that's reminiscent of a programming paradigm here. For a programming paradigm shift, you need different foundations,

The coupling of data, methods, syntax relationships, and semantic relationships.

approach,

Looking at the elements in a domain and, for each element, writing a cohesive unit that recognizes its symbol, defines it through methods and data, and uses rule/relationships to relate it to other elements.

PL constructs;

Symbol recognizers and rule/relationships.

something significant.

The inheritance and polymorphism of rule/relationships.

Anyway, the underlying idea is good, but the presentation is so awful and the application of the idea has not been worked into a concrete state.

I really recommend actually looking at the source code to BlueBox here. It's not a search engine. Look in particular at the words in XMLGUI and the rule/relationships in them. Notice how it's possible to inherit/add/subtract/modify them. I agree the presentation can be improved, but most readers want less theory, not more.

Be careful about one thing: the mapping between syntax and semantics is not always straightforward.

Can you give some examples here? I would be very interested if there were a Russell's-type paradox in the mapping between syntax and semantic relationships. Although it's tangential to this discussion, such puzzles are always fun to play around with and sometimes lead to deeper insights. I am familiar with the standard philosophers, so I assume you're talking about something new here.

You may also want to consider looking into Ontology research if you are interested in KR.

I did look at Ontolingua at Stanford, and it appears to be an attempt to merge deductive databases with OOP. Is this correct? Although we are also doing research in the same problem space at dLoo, that wasn't discussed in this paper, and we have a very different approach to the problem.

cheers,

Nile

[ Parent ]
How do you integrate different domains? (4.00 / 1) (#143)
by exa on Tue Mar 27, 2001 at 08:31:21 PM EST

Well, not the way you describe. This has already been done for real domains (not programming domains). I mentioned in another comment "look into Ontology research if you're interested in KR" which was a bit obscure.

Just type Ontolingua into google, and there you go. Yes, I saw the references in the "theoretical paper," but the theory part of it is a bit uncooked. You have to read some papers. Sorry for sounding like a bigot, but that's how you should have proceeded. This article feels like an undergrad project report. Is it possible that you're taking XML too seriously? :/

Applying Ontologies to programming (sub)domains: I honestly don't know how best to manage this, but my feeling is that you can't really integrate the low-tech that we have today. Your best bet is to use some kind of a wrapper language, with a message passing lib. Close to what corba does. He he, and you can also use UML to create thick documents :P

Any work done in this area would feel more like Software Engineering than PL design. All that OO trivia. You might come up with some brilliant design pattern to show how to do this for real problems; some executive would love to look at the pretty pictures.

You've talked about the programming trick you're doing in python. That sounds nice; why not describe it in detail as an OO design pattern?

Thanks,
__
exa a.k.a Eray Ozkural
There is no perfect circle.

Re: Logic programming and ontologica (none / 0) (#147)
by nile on Wed Mar 28, 2001 at 04:41:13 PM EST

You may also want to consider looking into Ontology research if you are interested in KR.

I did look at Ontolingua at Stanford, and it appears to be an attempt to merge deductive databases with OOP. Although we are also doing research in the same problem space at dLoo, that wasn't discussed in this paper, and we have a very different approach to the problem.

There is a strong relationship to functional programming languages in this paper, however. As kuro5hin readers below have pointed out, monadic combinators couple syntactical and semantic relationships in functional languages just as objects couple data and methods in object-oriented languages.

What's going on in this paper is the coupling of all four to form a new unit of programming. That unit has the standard inheritance and polymorphism of objects and also a new type of inheritance and polymorphism with rule/relationships. Logic programming languages don't really factor into the discussion.

Is it possible that you're taking XML too seriously? :/

XML is not really designed for rich integration of different domains. This is discussed in the introduction to the theoretical paper that was pointed out in the last post. It also includes a reference to an interview with Bill Joy, who pointed out the same problems with XML.

Your best bet is to use some kind of a wrapper language, with a message passing lib. Close to what corba does. He he, and you can also use UML to create thick documents :P

Any work done in this area would feel more like Software Engineering than PL design. All that OO trivia. You might come up with some brilliant design pattern to show how to do this for real problems, some executive would love to look at the pretty pictures.


Corba, COM, and other component technologies experience the same problems discussed in this paper. As discussed in the paper, Riel's analysis of action-oriented languages can also be applied to object-oriented programming. In the latter, side effects can occur because syntactical and semantic relationships are not naturally coupled.

Be careful about one thing: the mapping between syntax and semantics is not always straightforward.

I pasted this from a lower post because I find it interesting. Can you give some examples here? I would be very interested if there were a Russell's-type paradox in the mapping between syntax and semantic relationships. Although it's tangential to this discussion, such puzzles are always fun to play around with and sometimes lead to deeper insights. I am familiar with the standard philosophers, so I assume you're talking about something new here.

Nile

[ Parent ]
Ontology has *nothing* to do with OOP (none / 0) (#151)
by exa on Thu Mar 29, 2001 at 12:59:19 PM EST

>I did look at Ontolingua at Stanford, and it appears to be an attempt to merge deductive databases with OOP. Although we are also doing research in the same problem space at dLoo, that wasn't discussed in this paper, and we have a very different approach to the problem.


Well, Ontolingua has *nothing* to do with deductive databases *or* OOP. It's about ontology, and I must stress that Ontolingua represents one of the best uses of knowledge engineering and derives from a wealth of philosophy and CS. You have to _read_ ontology papers to understand the application. There must be some presentations at Stanford; read them. Read the Ontolingua manuals and look for other frame-based knowledge interchange languages. Also look at other papers that deal with extracting knowledge from the web using machine learning/data mining.


> There is a strong relationship to functional programming languages in this paper, however. As kuro5hin readers below have pointed out, monadic combinators couple syntactical and semantic relationships in functional languages just as objects couple data and methods in object-oriented languages.


So what does this imply? It means that you aren't contributing anything by saying that you can couple syntactic and semantic relationships in the same abstraction. This has been done before in various ways in the design of advanced compilers and computational linguistics. It's not a new way to program.


>Can you give some examples here? I would be very interested if there were a Russell's-type paradox in the mapping between syntax and semantic relationships.


The richer the language is, the harder the relationship between semantics and syntax becomes. That's basically what happens in natural language. Although in Cat. Gram. we tend to claim the existence of a syntax -> semantics homomorphism, when we go out to the real world all we see is an interplay of syntax and semantics, such that the mapping between them is complex enough to devote entire studies to it. Example: try to derive semantic categories from syntactic rules; try to resolve the semantics of ellipsis; etc., etc. At the discourse level such difficulties, which are often avoided by many linguists, become even more prominent. I don't think it has much relation to a paradox caused by an unhealthy formulation of set theory :) but still, the knowledge that semantics is not trivial to derive is noteworthy. What I imply is that you can't hope to attain semantics with the composition of simple imperative rules, even for very simple languages. Excuse me for this shallow talk about a very non-trivial topic.
[Have a look at the subject of "enriched composition."]

For programming, I think there is simply no point in writing a recursive descent parser and wishing that it will change the world. Still, as I said before, this is a known method, and I think writing it up as an interpreter-writing design pattern for OO languages would be very beneficial. I bet if you looked up a few papers about compiler construction in software engineering, you could see similar ideas. To summarize what I did: there are classes which are symbols in the grammar, implementing their own syntax, and they also employ representation and functions that adhere to the semantics of the same (terminal or non-terminal) symbol.

Regards,
__
exa a.k.a Eray Ozkural
There is no perfect circle.

[ Parent ]
Nor with the topic of the article (none / 0) (#152)
by nile on Thu Mar 29, 2001 at 03:19:22 PM EST

A generalized description of the field can be found on the Indiana CS site:

The aim of the Computational Ontology project is to extract the issue of identity as a distinct technical problem, and to develop a calculus of generalized object identity, one in which identity---the question of whether two entities are the same or different---is taken to be a dynamic and contextual matter of perspective, rather than a static or permanent fact about intrinsic structure.

Now, one of the questions that computational ontology is interested in has a superficial similarity to the topic of this paper: given two languages with the same underlying semantics, how can one construct a map between a set of sentences in one and a set of sentences in the other? To quote from Ontology-Based Semantics by Ciocoiu and Nau:

What exactly do we mean when we say that a set S2 of L2 sentences is a translation of a set S1 of L1 sentences?

Ciocoiu and Nau correctly point out that such mappings involve more than just syntax because there can be implicit assumptions in the L1 and L2 languages themselves. As a result, trying to create a mapping based solely on syntax will frequently fail.

From the computational ontology papers I read on the Internet, I now understand where we are missing each other. Computational ontology is not really relevant to the topic at hand, but I can see why it would appear to be. There are some striking superficial similarities:

  1. Both this paper and computational ontology deal with mapping relationships between two different groups.
  2. The mappings in both are mappings in semantics and syntax.
  3. Both deal with homomorphisms. In ontology, assuming a simple syntax/syntax homomorphism is naive because there can be implicit assumptions in the language which restrict relationships that are not immediately evident in the syntax.
If you come from a computational ontology background, it would be very easy to see this as a very naive description of a basic problem in the field and an incorrect answer to that problem. This paper is tackling a very different problem, though: eliminating undesirable semantic side effects.

Using the terms defined in the paper, there are several different problem domains in the world: mathematics, chemistry, mechanical engineering, etc. Each of these domains has different elements - numbers, atoms, forces - that have relationships with other elements in the domain. These relationships are both syntactical (i.e., how elements can legally be put together) and semantic (i.e., what those groupings mean). The question here is not what the relationship is between two different languages, but what the relationship is between syntax and semantics in a single language.

The way in which syntax is defined is a little different than in the field of computational ontology as well. In particular, in this paper, syntax is defined formally as the legal relationships that a program allows between elements when solving problems in a domain. To clarify the difference, a computational ontologist might see the fact that one cannot divide by zero as an implicit assumption of the language and thus not part of the language. In contrast, as defined in this paper, that rule would be part of the syntax, because the mathematics program would not allow it. Why it would not allow it would be part of the semantics.

Given this definition, there does appear to be a homomorphism between syntax and semantics in programs. Indeed, it appears there would have to be for a program to function correctly. The syntactical relationships that a math program allows between '0-9,' '+,' and '()' should map to the semantics of what those relationships mean. If they do not, then the program will restrict valid math expressions like "5 + (4 - 3)" or allow invalid math expressions like "((5((++2()(." Either way, the program will not function correctly. The claim of this paper is that coupling the legal ways in which programs allow elements in a domain to be related with what those relationships mean eliminates side effects, for the same reason that coupling data and methods does.

For programming, I think there is simply no point in writing a recursive descent parser and wishing that it will change the world. Still, as I said before, this is a known method, and I think writing it up as an interpreter-writing design pattern for OO languages would be very beneficial. I bet if you looked up a few papers about compiler construction in software engineering, you could see similar ideas.

This I think is the largest source of misunderstanding. Most C programmers, before the advent of C++, coupled data and methods together. It was simply a good programming practice. This did not make C an object-oriented language, nor did the fact that they could couple them in such a way mean that object-oriented programming was really just structured programming in disguise. The central claim of this paper is that all programs should couple syntax and semantic relationships, not just compilers and interpreters. Furthermore, this coupling should be enforced in the language itself, and enforcing it allows a new type of inheritance and polymorphism.

So what does this imply? It means that you aren't contributing anything by saying that you can couple syntactic and semantic relationships in the same abstraction. This has been done before in various ways in the design of advanced compilers and computational linguistics. It's not a new way to program.

This criterion would imply that OOP is not a new model because it can be done in structured languages. Furthermore, if you look at monadic combinators, you will notice that they do not couple data and methods with their coupling of syntax and semantic relationships. To repeat, the claim is that this should be done for all programs, that the language should have safety mechanisms to enforce the coupling, as occurs in C++ between data and methods, and that there is a new type of inheritance and polymorphism possible as a result.

regards,

Nile

[ Parent ]
Write a PL, and I'll see you (none / 0) (#158)
by exa on Thu Mar 29, 2001 at 09:29:03 PM EST

Write a PL that implements this and I'll see you. Making grammar specs part of a PL is one of my own projects, if you wonder, but I'm not claiming it's a new programming paradigm, because it isn't. I doubt how acquainted with compiler theory you are, but you should be aware that there are limits to this kind of program, *and* you are only scratching the surface.



__
exa a.k.a Eray Ozkural
There is no perfect circle.

[ Parent ]
How about some complete toy programs? (none / 0) (#144)
by keyeto on Wed Mar 28, 2001 at 06:13:45 AM EST

I've read the material at SourceForge, and found the more theoretical paper rather easier to understand than the detailed explanation given here. As far as I can tell, it looks like a neat idea.

However, the examples given don't really give much of an idea about how you would approach writing actual programs. What I think is really needed are some complete programs. A couple of toy versions of simple unix command line tools, such as "uniq" or "sort", would be good. If you can give them some kind of common ancestor, so much the better. Something that reads lines from an input, and writes lines to an output, would put the entire approach on the sort of ground that most programmers can already walk.

Now, the immediate response to this is "that's not the point". You'd be right: you don't need OOP to write these, and using OOP would be considered overkill by many folk, let alone using the extra power that words appear to provide. However, seeing how the paradigm operates in terms of entire programs would really, really help us to understand. You make the problem worse by presenting us with an obviously extended version of C++, without any explanation of what the extensions actually are, or how they might relate to the new data model or programming model. If you can write these examples in several programming languages, including some action-oriented and object-oriented ones, it would really help to make clear what the differences between our existing paradigms and your new one are.

Please use the term "word-oriented" to describe this programming paradigm. This matches terms used for existing paradigms such as "action-oriented", "object-oriented" and "aspect-oriented". Computing is really full of words that get used to mean something different from their everyday use. Using a phrase that's similar to some we can more reasonably be expected to have come across would also help. The word "word" itself is a brilliant example, since it already gets used to mean a chunk of memory 4 or 8 bytes long.

I don't mean to be only critical, since I do think there might be something to word-orientation, but it needs to be explained to us using real code that runs on our real machines.


--
"This is the Space Age, and we are Here To Go"
William S. Burroughs
As the author, I agree (none / 0) (#145)
by nile on Wed Mar 28, 2001 at 02:58:18 PM EST

What I think is really needed are some complete programs. A couple of toy versions of simple unix command line tools, such as "uniq" or "sort", would be good. If you can give them some kind of common ancestor, so much the better. Something that reads lines from an input, and writes lines to an output, would put the entire approach on the sort of ground that most programmers can already walk.

Thanks for the recommendation. It has been echoed by several people here and is a really good one. I was really concerned about getting enough theory into the paper: I should have concentrated on examples over theory.

For now, you can find source to BlueBox here. Look at the XMLGUI syntax in BlueBox. It consists of a group of words that have rule/relationships in them. It should be obvious that children of these words could easily add new rule/relationships (i.e., there's a new type of inheritance and polymorphism going on) and that adding a new rule/relationship in a word will not have an adverse effect on any other words.

XMLGUI, however, is not sufficient (for one thing, it's too large). There really needs to be an example showing two very simple programs being merged.

Now, the immediate response to this is "that's not the point". You'd be right: you don't need OOP to write these, and using OOP would be considered overkill by many folk, let alone using the extra power that words appear to provide. However, seeing how the paradigm operates in terms of entire programs would really, really help us to understand. You make the problem worse by presenting us with an obviously extended version of C++, without any explanation of what the extensions actually are, or how they might relate to the new data model or programming model. If you can write these examples in several programming languages, including some action-oriented and object-oriented ones, it would really help to make clear what the differences between our existing paradigms and your new one are.

Another good recommendation. I did the example with C++ words rather than BlueBox words because people requested it. Looking back, it's clear that I should have been more explicit about what those additions were. I've been working on this stuff for a long time so I missed that.

Please use the term "word-oriented" to describe this programming paradigm. This matches terms used for existing paradigms such as "action-oriented", "object-oriented" and "aspect-oriented". Computing is really full of words that get used to mean something different from their everyday use. Using a phrase that's similar to some we can more reasonably be expected to have come across would also help. The word "word" itself is a brilliant example, since it already gets used to mean a chunk of memory 4 or 8 bytes long.

That sounds like a good idea. Today, people refer to the object model, but it has been around for a long time, so there's no confusion. "Model" is a very generic term and shouldn't be used.

thanks, these are very good suggestions,

Nile

[ Parent ]
Cheers (none / 0) (#150)
by keyeto on Thu Mar 29, 2001 at 12:06:51 PM EST

I'm glad you liked the ideas, even with the mistake in the first paragraph that you quote: I meant to say "execution model" rather than "programming model". The two terms, "data model" and "execution model", come straight out of the Python 2.0 Language Specification, and make for a very useful distinction, in my opinion. (I learnt damn near all of Python in a four-hour session by reading this document, whilst drinking several pints of beer. I have yet to see a better language specification, even though collecting programming languages is a sort of hobby of mine.)

In the second paragraph that you quote, there is another mistake. On rereading your more theoretical paper, I notice that you do actually use the term "word-oriented". Excellent, keep it up.


--
"This is the Space Age, and we are Here To Go"
William S. Burroughs
[ Parent ]
The word model in 8 words or less. (1.00 / 1) (#148)
by pdubroy on Wed Mar 28, 2001 at 05:03:56 PM EST

I'm not sure if there's still some discussion going on here, but I was reading this the other day, and I just had a bit of an epiphany. First, I must admit that I am nowhere near an expert, and in fact I haven't had time to read all the comments, so maybe someone has raised this already. However, I can summarize what I understand to be the "word model" in 8 words:

Object-oriented programming in a context-sensitive language.

Before I discuss this any further, how close am I? Now, it seems from the general description that nile is actually implementing this logically, and not as part of the language, but in essence it is the same thing. Thoughts?



Here's one definition (none / 0) (#164)
by nile on Fri Mar 30, 2001 at 01:47:59 PM EST

First of all, I think you're getting close. Context is definitely available to words.

Here's a quick definition.

A word couples data, methods, symbol recognizers, and rule/relationships together to form a new unit of programming.

Ok. It's more than eight words. I lose. Is this clearer, though?

Nile

[ Parent ]
Is This A Joke? (1.00 / 2) (#159)
by exa on Thu Mar 29, 2001 at 09:49:44 PM EST

You Mean:
All Your Syntax Belongs To Us!
in the "introduction to bluebox" but as far as I can tell, this is no more than a futile attempt. Fly too high and your wings will burn.

You are saying this will facilitate new types of programs that are "beyond the reach of computer science"? Ha ha. Excuse me, but either this is some sort of really well designed hoax and an elaborate joke, or it is the work of an amateur. Can we come and watch you drown in the Turing Tar Pit?

It's very, very ironic that you are claiming these superficial programs you are talking about convey the state of the art. Who are you deceiving; yourself? And again, I'm surprised that this story got published at all. Is this to be followed by more pseudo-science in this manner?

I'm bookmarking bluebox.sourceforge.net under Humor/Hoax.

Thanks!

> Welcome to the development site of BlueBox, the natural language browser. With BlueBox, it is possible to integrate different syntaxes seamlessly to write new types of programs that solve problems currently beyond the reach of computer science. In the next few months, dLoo will be finishing an initial version of Bluebox and submitting a formal proposal to the W3C detailing the technology by which this is accomplished. Individuals, communities, open source companies, and W3C participants are welcome to join us in this process.



__
exa a.k.a Eray Ozkural
There is no perfect circle.

I meant "to solve problems currently beyond (none / 0) (#160)
by exa on Thu Mar 29, 2001 at 09:52:30 PM EST

the reach of computer science" Ha ha ha ha ha hah a haha hah ha

__
exa a.k.a Eray Ozkural
There is no perfect circle.

[ Parent ]
Hold on there, Punchy (none / 0) (#162)
by slaytanic killer on Fri Mar 30, 2001 at 06:31:48 AM EST

I'm bookmarking bluebox.sourceforge.net under Humor/Hoax.
Did someone hurt you recently?

I find these sort of things important, especially in the long run, as people come up with all sorts of interesting ideas and perhaps 10 years later someone finds something of worth in the work. What I am not interested in is your brilliant wit in cutting someone down who is likely smart enough to find your rant amusing.

Find something more important to care about.

[ Parent ]