Kuro5hin.org: technology and culture, from the trenches
GPL as Code

By nile in Technology
Thu Apr 19, 2001 at 08:41:53 AM EST
Tags: Software

Today, the differences between open source and closed source end the moment products are shipped. Open source, however, enables a new type of reuse model that eliminates several fundamental problems in computer science. This model, which we call open reuse, is already being used by different projects in the community. With open reuse, open source software has significant advantages over closed source products, even after the code is compiled. The GPL and other open source licenses become part of the technology of the project.


Introduction

The open source development model has brought significant benefits to software creators. Opening the source code of a project attracts developers, results in more stable software, and introduces unexpected creativity. Every programmer reading the source code is another programmer who can find subtle bugs. Every new perspective can result in unexpected breakthroughs, like the extensions to Zope and the modules of Apache. The benefits of the open source model have become clear over the past few years, and more and more companies have begun to open source their work[1].

Few doubt the advantages that open source brings to software development, but most believe the benefits stop when the code is shipped. A lack of observable consequences prevents users from seeing any differences between the finished products of open and closed source development. Presented with the binaries of two different libraries, a software developer cannot determine, simply by using them, which one is open source and which one is closed. Presented with two spreadsheets, an end user cannot tell from using them which was developed openly[2].

Open reuse makes open source a technology and not just a development model. It eliminates the traditional problems of entropy death and frozen APIs. An open reuse library has advantages over a closed source counterpart even after it is compiled and shipped. The license under which a project is distributed becomes part of its technology.

Traditional Reuse Models

Software reuse has always been tied to the openness of code. The most primitive reuse model is "Copy and Paste." With copy and paste, a programmer literally copies the source code from one place and pastes it into another. Rediscovered every year by novice programmers, copy and paste is the worst reuse model. Its most serious flaw is that the multiple copies of code it creates make it extremely difficult to guarantee that all of the code has been updated throughout a project.

APIs are a significant improvement over copy and paste. In API reuse, software is released in unreadable bundles that are made to perform actions through a small set of public names (i.e., APIs). After the source code is compiled, these names are the only parts of a software project that are open. Clients that want to make the software perform actions use APIs to tell the software what to do. In this way, the same code can be reused in different parts of a program simply by calling a method or instantiating an object.
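To make the dependency concrete, here is a minimal sketch (the article names no language, so Python is used, and the library and its function are invented for illustration): the client can reach the library only through its public name, so every call site becomes a dependency on that name.

```python
# A stand-in for a compiled library: after compilation, only the public
# name charge_card is visible; the body below merely simulates behavior.
def charge_card(number, amount_cents):
    """Approve a charge if the card number looks valid (toy logic)."""
    return len(number) == 16 and amount_cents > 0

# Client code: reuse by calling the public name. If the library ever
# renames charge_card, this line -- and every line like it -- breaks.
approved = charge_card("4111111111111111", 1999)
```

If a new release renamed `charge_card` to, say, `authorize_payment`, every client call site would have to be rewritten by hand: this is the intrinsic dependency described above.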

But there is a cost. In the process of using APIs, clients create intrinsic dependencies on the names of the library. As a result, every year programmers all over the world update their code to the new APIs of the libraries they depend upon[3]. When heavily used libraries like core Windows APIs change, the entire industry rewrites its programs[4]. As software projects mature and programmers incorporate more and more libraries, the number of intrinsic dependencies in code increases sharply until it reaches a critical point. At that point, the cost of moving the algorithms in the code to new technologies is greater than the time it would take to rewrite them[5].

The programmers of commonly used APIs also suffer as a result of API reuse. Everyone who has written a heavily used library is aware of the paradox of success. As the number of users of that library increases, the ability of the library's programmers to change APIs decreases proportionately. Every change the library writer introduces forces users of the library to rewrite code. The more revolutionary the improvements to a library, the less likely that clients will adopt them, due to the additional work that is required. APIs become victims of their own success.

The problems that both the creators and users of libraries experience are among the largest in the industry[6]. These problems are going to become exponentially worse as software moves to the Internet. A single SOAP component for a credit card service could have a thousand dependencies, none of them known to the provider. Those dependencies, in turn, could be providing services themselves and have millions of dependencies. In this way, a change in one component could cause millions of services to stop working and cause problems for tens of millions of people.

Open reuse

There is a direct relationship between how software can be reused and how open it is. Copy and paste, for example, requires that all of the code be open while the traditional API model only requires that method signatures be open. If none of the code is open, neither API reuse nor copy and paste are possible. If only some of it is open, the only reuse that is possible is API reuse. In general, as source code is opened, the number and types of reuse models that can be used by software developers increase.

"Open reuse", like copy and paste, requires all of the source code to be open. It solves the problems of the API model by making software more flexible than is traditionally possible with closed source libraries. In contrast to API reuse, which puts direct dependencies on the syntax of library names and violates the encapsulation discussed in "Word Oriented Programming," open reuse couples syntactical and semantic dependencies together. This coupling brings a flexibility to reuse that eliminates entropy death and API freezing.

The reuse model is simple: writers of APIs manage API calls for their clients. Clients reuse software by writing parsable descriptions of their needs. These descriptions are then either parsed to generate the source code that makes the API calls, or mapped directly onto libraries by generators through internal API calls. From a software developer's perspective, it is as if software rewrites itself to adapt to changes in technology over time.
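As a sketch of that model (all names here are invented; the article does not specify a concrete system), a generator owned by the library's authors can map a client's parsable description onto whatever the current API names happen to be:

```python
# The client's parsable description of its needs: words plus arguments,
# with no reference to any concrete API name.
description = [
    ("open", {"host": "example.org"}),
    ("send", {"text": "hello"}),
    ("close", {}),
]

# Version 2 of the library renamed everything. Only this mapping table,
# shipped by the library's authors, has to know about the change.
MAPPING_V2 = {"open": "connect", "send": "transmit", "close": "disconnect"}

def generate(description, mapping):
    """Emit source code that calls the library's current API names."""
    lines = []
    for word, args in description:
        arg_src = ", ".join(f"{k}={v!r}" for k, v in sorted(args.items()))
        lines.append(f"{mapping[word]}({arg_src})")
    return "\n".join(lines)

print(generate(description, MAPPING_V2))
```

The client's description never changes; rerunning the generator against a new mapping table "rewrites" the client's code for it.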

Open reuse is already beginning to appear in the open source world. The most common form occurs when a CGI script reads an XML description of a web page and generates versions for Internet Explorer, Netscape, or small wireless devices[7]. In contrast to cross-browser libraries, generators and parsable descriptions give web authors the ability to control their dependencies on different browsers. In addition, because all of the dependencies of a platform are in the generator rather than scattered throughout hundreds of files, the generator is the proof that a browser is being supported correctly.
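The pattern can be sketched in a few lines (the element names and renderers are invented for illustration; contemporary systems such as Cocoon used XSL stylesheets for the same job): one description, several generators, one per target.

```python
import xml.etree.ElementTree as ET

# One parsable description of the page, free of any browser dependency.
doc = ET.fromstring("<page><title>News</title><body>Hello</body></page>")

def to_desktop_html(page):
    """Generator for desktop browsers."""
    return (f"<html><head><title>{page.findtext('title')}</title></head>"
            f"<body>{page.findtext('body')}</body></html>")

def to_wml(page):
    """Generator for the small wireless devices of the era, which spoke WML."""
    return (f'<wml><card title="{page.findtext("title")}">'
            f'<p>{page.findtext("body")}</p></card></wml>')

print(to_desktop_html(doc))
print(to_wml(doc))
```

All knowledge of a given browser lives inside its generator, which is why the article can say the generator itself is the proof that a browser is supported correctly.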

There are two reasons that open reuse has not become more widespread. The first is that it requires source code - i.e., descriptions - to be open. The second, and historically more difficult, is that it requires the syntaxes of different domains to be integrated. Even simple programs like FTP clients integrate networking, output to a user, and the file system. All of these tasks can be done in different ways and would benefit from open reuse. But since it is very difficult to integrate different syntaxes in traditional programming models, FTP clients are not written with parsable descriptions.

Enter word-oriented programming. Word-oriented programming naturally uses open reuse through its coupling of syntactical and semantic relationships. In word-oriented programming, it is possible to inherit the data, behavior, and rule-relationships of words. The result is scalable, parsable languages that can be richly integrated with each other. In this way, a client can use integrated network, GUI, and file syntaxes to describe a complex solution like an FTP client. These complex descriptions can then be mapped through inheritance to the Java, Windows, or GTK toolkits for display; HTTP or SMTP for network transport; and any of a number of file storage technologies.

Consider a word-oriented HTML syntax. Any of the words in the syntax - HTML, Body, Table, Tr, Img, DIV, etc. - can be overridden and given a different meaning. A GTKBody word will match 'Body' and create a GTK window. A QTBody will match 'Body' and create a QT window. In this way, the same HTML page can have two different meanings depending on which word it is instructed to use at runtime. Open reuse is not the purpose of word-oriented programming any more than the API model is the purpose of structured and object-oriented programming. It is simply the standard method for reusing software.
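A rough Python analogue of that overriding (class names follow the GTKBody/QTBody example above, but the mechanics here are an illustration, not a real word-oriented runtime):

```python
class Body:
    """The word 'Body' with no binding to any toolkit."""
    def render(self, text):
        raise NotImplementedError

class GTKBody(Body):
    def render(self, text):
        return f"[gtk window] {text}"   # would create a GTK window

class QTBody(Body):
    def render(self, text):
        return f"[qt window] {text}"    # would create a QT window

def interpret(page_words, bindings):
    """Run the same description under a particular set of word bindings."""
    return [bindings[word]().render(arg) for word, arg in page_words]

page = [("Body", "Hello")]                 # the same 'HTML page' both times
print(interpret(page, {"Body": GTKBody}))
print(interpret(page, {"Body": QTBody}))
```

The description (`page`) is fixed; only the binding chosen at runtime decides what it means.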

The only types of programs that word-oriented languages and open reuse can produce are open source. A developer who releases a word-oriented program has released the source to the program, even if it is compiled and even if the libraries it uses are compiled. This is because the words in the libraries that the program uses can be inherited and instructed to call their parents and print their symbol. As the program runs under the new interpretations, it will print itself out. It is possible to release a single static binary, but then all of the benefits of word-oriented programming are lost. Open source is a fundamental part of the programming model.
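A toy illustration of that self-revealing property (class names invented; a real word-oriented runtime would presumably do this for every word automatically): a word is inherited, told to report the symbol it came from, and still calls its parent for the original behavior.

```python
class Word:
    symbol = "Word"
    def run(self):
        return ""

class Greet(Word):
    symbol = "Greet"
    def run(self):
        return "hello"

class RevealingGreet(Greet):
    """Same behavior as Greet, but records its parent's symbol as it runs --
    run a whole program this way and it prints its own description out."""
    revealed = []
    def run(self):
        RevealingGreet.revealed.append(Greet.symbol)  # emit the source word
        return super().run()                          # then the real behavior

output = RevealingGreet().run()   # behaves exactly like Greet
```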

Formally, the code produced by word-oriented languages, even if distributed in binary form, always has the following characteristics:

  1. The source code is retrievable in the form of a parsable description.

  2. The meaning of that description can be changed through simple inheritance of data, methods, and rule-relationships.

These two characteristics are very close to the characteristics of code that is produced under the GPL and other open source licenses: open access to source code and the freedom to distribute modifications of it[8]. What is different is that the code of word-oriented programs is open, not because the licenses legally require it, but because the code cannot be closed without eliminating all of the benefits of the programming model. The licenses transition from being legal documents to part of the technology of the project.

A Closer look at the Benefits to the Open Source World

When we read books, we often discover that a phrase or a word has a different meaning than we originally thought. The phrase "What do you mean by that?" turns out not to have been asked by a student, but as a prelude to a bar fight. Chris turns out to be the girl and Pat the boy in a relationship. When this occurs, we simply remap definitions to create a new interpretation of what we are reading. This comes to us so naturally that we are unaware of how remarkable it is. In the software world, when APIs change, programs crash[9].

Today, there are well over a hundred different open source platforms for developing applications and web sites. Some popular projects with strong communities are Python, Perl, PHP, wxWindows, GTK, QT, and the Apache XML tools. Every year, these projects release new versions of their libraries with changes to their APIs. Every year their clients have to update their source code to benefit from the changes in the new libraries.

This cycle of release and update limits the speed at which projects can evolve. The full benefits of a new version of the KDE desktop, for example, are only realized after all of the KDE application developers adopt the new KDE APIs. The process of adoption is frequently longer than the time it takes to update the platform. In addition, because the platform needs clients to verify that edge cases have been correctly accounted for, libraries do not reach full stability until widespread adoption.

Open reuse offers an alternative model. With open reuse, every time developers release a new version of their libraries, they release a generator that maps the old descriptions to new APIs. In this way, all of the clients are instantly updated to the new APIs, greatly accelerating the development cycle. The edge cases of the platform can be instantly tested because all of the clients are test cases for the new APIs. The traditional cycle of releasing a platform and updating clients to that platform is eliminated.

The stability of the platform is also increased. Rather than depending on all of the clients to correctly use libraries and scattering dependencies through thousands of client files, the generator is proof that the clients are correctly using the APIs[10]. Today, when clients incorrectly use APIs, programmers have to painstakingly search through thousands of files to eliminate bugs and potential security flaws. With open reuse, the security flaws that arise from incorrect use of APIs are localized in a single program and can be instantly fixed by changing the way the generator maps client dependencies.

Open reuse also benefits the clients of libraries. Parsable descriptions have a natural tendency to attract more and more generators to themselves over time. HTML can now be read by dozens of different browsers. The proliferation of browsers for HTML occurs because a new browser instantly benefits from all of the content that has already been created in HTML. By creating a generator that internally maps HTML dependencies, KDE's Konqueror automatically benefited from the content of the Web.

In practice, this means that clients that use parsable descriptions can expect more and more generators to be written for their code. As support increases, a project's code can be remapped instantly to a different language, a different platform, a different technology. A library written with the intention of using CORBA as its networking layer and Java Swing as its interface will be able to switch instantly to GNOME and SOAP by using their generators.

Open reuse means that algorithms are no longer locked into the particular platforms or technologies they were originally designed for. Words in software projects that were originally designed to target one set of specifications regarding graphical toolkits, network technologies, and mathematical frameworks can be used to target a different set. Apache can benefit from words from GNOME, GNOME can benefit from words from Perl, and Perl can benefit from words from Python[11].

Conclusion

Humans adjust automatically when the meaning of a word or phrase changes. Today's software does not - it just crashes. The API model is a dead albatross from the world of closed source software and should be abandoned. In a web of software only a minuscule fraction of the size of the current Web, a change in a single API could render entire sections inoperable. As software moves to the Web, as the service model grows in importance, as the number of dependencies increases, closed source solutions will no longer be practical. Scalable network models moving on Internet time demand open parsable descriptions.

Open parsable descriptions have a long history of prevailing over closed standards. Although there are many closed source alternatives to HTML, from Microsoft Word documents to Adobe PDF files, none of these standards has experienced the success of HTML. HTML can be read by dozens of different browsers[12]. Every year, these HTML browsers undergo major changes[13] and operating systems change their APIs. Yet the Web, those billions of pages, requires no rewrites[14].

These facts change our view of software licenses. In the past, it has generally been assumed that although there are substantial differences between closed source and open source development models, there are no technology differences between the finished products. With word-oriented programming and open reuse, though, the license is part of the code. Closed source is not just an inferior way to develop software, it is inferior technology.

Resources

A heavily revised version of The Word Model, based on feedback from dozens of Kuro5hin readers, has been posted. A FAQ is also up with answers to the most common questions concerning word-oriented programming and open reuse. Many thanks to Eric Raymond for his terminology suggestions. Any additional clarity is a result of his gracious help. All mistakes are the author's.

References

1. See Eric Raymond's "The Cathedral and the Bazaar" for the benefits open source licenses can bring to the development process if managed correctly.

2. The widespread belief is that when programmers have finished fixing all of a program's critical bugs, when the project has been converted from source code to a binary package, and the customer is looking at the finished software product on the shelf, the fact that the software was developed openly no longer matters. As the customer weighs one program against another, they choose the program with the best support, the largest feature set, and the greatest stability. The fact that a library or program was developed openly plays only an incidental role in the decision making process.

As software moves to the Internet, the importance of open source in the decision making process is expected to decrease even further. Clients of a SOAP service do not care if the service is implemented with closed or open source software: all that matters is that the service works. Nor are the providers of the service under any obligations, under the terms of current open source licenses, to release their changes to the software back to the community. There are widespread worries as a result that the trend of software moving to the Internet presents a grave challenge to the open source movement. This paper explains why those worries are unfounded.

3. Today, the industry continually refactors code as technology changes to avoid entropy death. Programs are still fragile, however, whenever there is a substantial technology shift. A few years ago most popular programs depended on libraries from Microsoft. When the Internet, Linux, and other disruptive technologies became important, these dependencies became liabilities and prevented companies from moving quickly to new markets. In many cases, companies rewrote their applications or critical libraries from scratch to make them usable with new Internet-driven technologies. These rewrites were very expensive.

4. The problems with having numerous dependencies on closed source libraries are well documented in many books. For example, a popular book from Microsoft Press - Maguire, Steve, Debugging the Development Process, Microsoft Press, (c) 1998, p. 15 - says: "One of the easiest ways for your project to spin out of control is to have it be too dependent on groups you have no control over."

5. This analysis has been heavily influenced by the work of Simon Phipps from IBM. Readers are strongly encouraged to read his work: "Parallel worlds: Why Java and XML will succeed" and "Escaping Entropy Death: Where XML Fits in and Why."

6. For the open source world, the consequences of the traditional API model are even more severe. Both GNOME and KDE are great desktops created by talented programmers. A rivalry has arisen between the two because of the consequences should one of these desktops become the category leader.

7. The Apache Project is using parsable descriptions to generate different content for a variety of platforms. Cocoon, one of the most interesting projects on the site, uses XSL stylesheets to render HTML, PDF, XML, WML, and XHTML from XML documents.

8. See http://www.opensource.org and http://www.gnu.org/licenses.html for a discussion on the different software licenses currently in use.

9. The reuse model described in this paper allows for better disambiguation algorithms than traditional natural language theory provides. A very simple algorithm, for example, is to assume an interpretation until the interpretation "breaks," then remap the closest dependency to an alternative path and try again. Although inelegant, such an algorithm provably disambiguates content as new information is processed and also appears to model human behavior. For an introduction to the disambiguation problem, see Russell and Norvig, Artificial Intelligence: A Modern Approach, (c) 1995, ch. 22.

10. Thanks to Eric Raymond for terminology suggestions and his observation on generators as proofs of correct library use.

11. "Examples of memes are tunes, ideas, catch-phrases, clothes fashions, ways of making pots or of building arches. Just as genes propagate themselves in the gene pool by leaping from body to body via sperm or eggs, so memes propagate themselves in the meme pool by leaping from brain to brain ..." Dawkins, The Selfish Gene.

12. There are a number of popular browsers, the two most popular being Mozilla and Internet Explorer.

13. Mozilla was a complete rewrite of the original Netscape code. See http://www.mozilla.org for a complete history of the open source project.

14. Java, in contrast, is a different story. While HTML pages are open for all to read, Java programs are distributed as binaries that are run by the Java Virtual Machine (JVM) on a user's platform. Because Java code is always released in binary form, Java clients are forced to make intrinsic dependencies on specific Java virtual machines. When Java was a young language and there was only one Java virtual machine, this was not a problem. But with the advent of Java Swing and new libraries in Java 1.2, Java 1.3, and now Java 2.0, Java programs written for an older version of the language do not work with new JVMs. If the Web were built around applets rather than HTML, the entire Web would have to be rewritten every time a new JVM was released.

This is not a criticism of Java, just its reuse model, which the majority of the software world uses today. Cross-platform libraries promised to solve the problem of entropy death by providing a common API for incompatible platforms. Java, for example, promised to leverage code across multiple operating systems, eliminating the need to rewrite code for each new platform. Its wide adoption has resulted from this promise and Sun's aggressive marketing.

But cross-platform solutions do not live up to this hope. They do not solve the problems of entropy death or API freezing. Clients are still required to create intrinsic dependencies on those platforms. When the market shifts rapidly, the clients have no recourse but to wait for the platform to adapt to the market or rewrite their code to new APIs. The developers of the cross-platform solution, in turn, experience all of the standard problems of frozen APIs as they acquire more and more clients.

Many users of Sun's Java solution discovered the extreme limitations of the cross-platform approach when KDE and GNOME arrived on the scene. In the first two years of their existence, neither of these platforms was supported by Java. As a result, open source Java programmers who wanted their programs to work in these environments either had to content themselves with a long wait or rewrite their code in another language.



GPL as Code | 65 comments (38 topical, 27 editorial, 0 hidden)
So what? It's all about money baby. (2.00 / 3) (#4)
by rebelcool on Sat Apr 14, 2001 at 08:48:23 PM EST

Well, you've completely ignored the entire reason software and all the hardware you wrote this on exists:

To make money. All business, which has developed (or been the root of) every single advancement in computers..and everything else of modern life pretty much.

With the GPL you cannot make money! The GPL is as tasteful to business as communism is. This is why you don't see any businesses basing their software on GPL, and those that are (ahem..linux software corps) are failing, despite their plans of making up the development cost difference with t-shirts, support and stuffed penguins. We all know how stuffed penguins are a fantastic revenue generator.

This is the exact reason I would never want a geek running my business, as opposed to an MBA. They simply don't have a clue about basic economics.

COG. Build your own community. Free, easy, powerful. Demo site

The article is about reuse models, not licenses (5.00 / 1) (#6)
by nile on Sat Apr 14, 2001 at 08:52:35 PM EST

The article is about reuse models, not licenses or the success of open source as a business model.

cheers,

Nile

[ Parent ]
Toilets made of gold. (1.00 / 2) (#7)
by rebelcool on Sat Apr 14, 2001 at 08:56:25 PM EST

So basically, it's a dream about what you would like to see, never mind how in Reality it would be implemented.

Well, I'd like a toilet made of solid gold too, but since I can't afford one, it'll be up to somebody to come up with a way to make a free one.

[ Parent ]

Source can be found here (none / 0) (#9)
by nile on Sat Apr 14, 2001 at 09:10:07 PM EST

You can find the BlueBox source at http://www.dloo.org.

cheers,

Nile

[ Parent ]
All Is Business? (none / 0) (#10)
by DesiredUsername on Sat Apr 14, 2001 at 09:16:08 PM EST

"Well, you've completely ignored the entire reason software and all the hardware you wrote this on exists: To make money."

No, money is the reason software and hardware companies exist. There are several reasons software itself exists, only one of which is money. Other reasons are prestige and usefulness (as in tools you make yourself and possibly release for free).

"With the GPL you cannot make money! The GPL is as tasteful to business as communism is."

So what? I try to use only GPL'd software. If no companies are producing it, all that means is that none of my software comes from companies. It also means no software companies get my money.

Play 囲碁
[ Parent ]
but think.. (none / 0) (#14)
by rebelcool on Sun Apr 15, 2001 at 12:01:15 AM EST

for all the GPL'd software to exist, how much research and development done by companies FOR PROFIT had to be invested just so the developer of your software could do it? The hardware you run it on, the firmware on that hardware, is all pay software.

The point is that the GPL has no place in the software-making *business*. Individuals are welcome to GPL whatever they like..but a business creating GPL'd software, or trying to extend GPL'd software, is just asking to go out of business since it can't sell its wares effectively.

[ Parent ]

Hm (none / 0) (#16)
by regeya on Sun Apr 15, 2001 at 12:53:14 AM EST

With the GPL you cannot make money!

Implying that no-one would pay for GPL software *cough*Redhat is in the black*cough* or are you incorrectly assuming that one cannot profit from GPL-based software?

The GPL is as tasteful to business as communism is.

Um....yeah.

This is why you don't see any businesses basing their software on GPL, and those that are (ahem..linux software corps) are failing, despite their plans of making up the development cost difference with t-shirts, support and stuffed penguins. We all know how stuffed penguins are a fantastic revenue generator.

Oh, yippee, you see companies catering to the Linux world failing, and jump to the conclusion that it's all due to the fact that Linux is GPL, a lot of Linux software is GPL, and since obviously companies would never base any of their code on the GPL, and since companies who offer software always plan on their primary source of income being the software and not hardware, services, or whatever, that the GPL is bad for business and dooms a company to failure.

/me gets out the 500lb. cluebat.

Look, all I'm saying is that instead of talking out your ass and assuming that companies like VA Linux are in trouble because of Linux and its GPLed nature, try to find out the real reason why they failed. I could come up with a dozen companies who had closed-source licensing models who failed. Guess that means that closed-source is a bad plan.

[ yokelpunk | kuro5hin diary ]
[ Parent ]

i'll tell you why it's bad. (none / 0) (#20)
by rebelcool on Sun Apr 15, 2001 at 10:59:34 AM EST

By pulling out some good ole' economics, why we can say the GPL is bad for business.

First, let me start with saying just being "in the black" means nothing at all. Sure, it's better than being in the red, but the real question is how much profit you are making. If you're not making the industry average rate of return (and no, Red Hat certainly is not) then your business is not successful. At least not from an investment standpoint..after all, if you can get a higher average return from a different company, why in the world would you invest in a company that can't make the average, much less beat it? You wouldn't, and neither do intelligent investors. Thus, economically speaking, even if you are turning a profit, that means nothing. It's all about how much profit.

Now for some simple economics. Software is an enormous investment. Software developers are currently among the highest paid professionals on the planet. Any kind of major development easily runs into million dollar investments, simply based on wages. Then you have of course your fixed costs... office space, machines to do the work on, and so on. In the end, it costs a software company *millions* to crank out big software. This is why most software companies start in the garage with 1 or 2 guys using their own machines.

So how do Linux and the GPL fit into this? Well, thanks to that wonderful GPL which forces companies to give away their source, they can't really charge for their software. They cannot charge directly for what cost millions to produce. Economically speaking, that is incredibly unsound, and plain idiotic. Now, there are other things out there, such as support and stuffed penguins, which a company can try to sell to make up for this loss.

Think for a moment though..virtually every *other* closed source software company does that too. But they can charge for the software. Who do you think will make more money? Exactly. And when it comes down to investors..they will go for who is making the most money, something a company whose sole product is GPL'd can never achieve.

Further, things like support and penguins will *never* offset the cost of development, unless your software is so poor, customers must spend hours a day on the phone with your support people.

Think of a car company giving away all the parts for the car, and all the tools to put it together. It takes you a good weekend or so to put your car together. In recent months, they've made it even easier for you by half-building it for you. If you need help, they charge you for that, but that's it. The rest is free. Certainly, the high production cost went into producing the parts... which is why it costs you thousands to buy a car. This is the GPL of the auto industry, and it's plainly obvious to see why it wouldn't work well. The few people out there who are technically competent enough to even start building a car will do so, and won't even bother with the manufacturer. The people who are NOT technically competent enough will just go to a manufacturer which builds its stuff for you.

Am I anti-GPL? No. It's great for personal or volunteer projects. But basing a business on it was just another stupid business plan of the late 20th century. Mark my words: Red Hat and the other Linux companies are either going to drop out of business within the next 5 years or, more likely, get bought up by a big old competitor (such as Microsoft or IBM).

COG. Build your own community. Free, easy, powerful. Demo site
[ Parent ]

Hm again (none / 0) (#34)
by regeya on Sun Apr 15, 2001 at 11:13:53 PM EST

I'm sure I'm an unenlightened asshole who needs to be thwapped over the head repeatedly for reading that long-winded (and the air was a bit hot, at that) response and replying, "Ah, bullshit." You make the common mistake of assuming that because the source has to be given away, the program absolutely has to be given away for free. No, no, no, no, no, no, no, and no matter how many times you say "it's simple economics," that doesn't make up for the fact that you don't understand the GPL. You can charge for distribution. Yes. You could produce your own software, license it under the GPL, and charge for distribution. IIRC, RMS used to charge people for tapes(!) of Emacs. True, someone else could come along with their own compiled versions and charge or distribute for free, but if you add things like printed manuals, subscription plans, service plans, etc., you might not make billions; still, if the cards are played right, a profit could be made. Just because it's not been done effectively yet doesn't mean it can't be done. :-)

[ yokelpunk | kuro5hin diary ]
[ Parent ]

distribution is not enough. (none / 0) (#35)
by rebelcool on Mon Apr 16, 2001 at 01:00:43 AM EST

Why would I buy the box/manual/registration cards when I can go download the thing? When I can read the manual online?

Of course I wouldn't, and neither have any of my Linux-using friends, nor the company I work for. The same is true for thousands of other people. To sell software, you need to sell IT. Not complements. The software itself. And you can't just give it away where it can be obtained easily by the masses. It simply doesn't work.

Look at it this way: you have a coffee company which sells both coffee and creamer. Creamer is a complement of coffee, and is cheaper to manufacture. Also, not everyone who likes coffee likes creamer... let's say 1 in 6, just for shits and giggles. Now, coffee is expensive to produce. Suppose this company gives the coffee away, but sells the creamer. Of course, creamer still costs money to produce, and they don't charge much for it. Further, only a sixth of the coffee drinkers buy it. This is what the Linux corps are doing... and it simply doesn't work.

[ Parent ]

feh (none / 0) (#36)
by regeya on Mon Apr 16, 2001 at 01:26:07 AM EST

This has steered into "we covered this in 1997" land. Nobody ever said you had to make things simple for someone who wanted to download the code; nobody said you had to give the manual away for free. Just because the program is GPL'd doesn't mean the docs have to be. Oh wait; if they do, can I sue anyone who's ever written a Linux-oriented book, or a book on The GIMP, or GNOME programming, or what have you? :-)

"It simply doesn't work" isn't a compelling argument to me. What sort of research have you done on the subject? I admit I've done none, but you're speaking in absolutes here, so obviously you've put considerable work into researching the subject. And please, don't link to VA Linux stock quotes. I can point out examples of companies who sell product that fail. It's not proof that selling product isn't commercially viable.

[ Parent ]

heh.. (none / 0) (#38)
by rebelcool on Mon Apr 16, 2001 at 10:44:20 AM EST

Find me a Linux product that doesn't give its manual away, and I'll show you a Linux product that no one will use. It doesn't help that the Linux culture is all about getting everything for free.

Have you not read what I wrote? Those are basic economic concepts. Marginal costs, product complements, marginal revenue, rates of return... these are all very basic, and highly important, economic concepts. I imagine whoever came up with the "I know, we'll sell support instead of the software!" idea either doesn't know anything about the costs of producing software, or did poorly in their college economics courses.

One can simply look at a business plan and apply some simple economics to see why it doesn't work. To sum it up: you have a high-input-cost product (the software), the GPL effectively places a price cap on that product (at $0.00), so the company attempts to make it up by selling complements, which themselves cost money to make, and which it still gives away anyway to those who want them. So, the company's revenue comes from the few people buying the product complements. It simply does not make up the difference.
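The shape of this argument can be put into a toy model. Every number below is hypothetical, picked only to illustrate the structure being claimed, not taken from any real company's books:

```python
# Toy model of the argument above; all figures are made up for illustration.

def annual_profit(dev_cost, unit_price, units_sold,
                  complement_price, complement_cost, complement_buyers):
    """Profit = software revenue + complement margin - development cost."""
    software_revenue = unit_price * units_sold
    complement_margin = (complement_price - complement_cost) * complement_buyers
    return software_revenue + complement_margin - dev_cost

DEV_COST = 2_000_000   # wages, machines, office space (hypothetical)
USERS = 100_000

# Closed-source vendor: charges $50 a seat and also sells support.
closed = annual_profit(DEV_COST, 50, USERS, 100, 60, USERS // 10)

# GPL vendor: the software itself is effectively capped at $0,
# so all revenue must come from complements (support, manuals, shirts).
gpl = annual_profit(DEV_COST, 0, USERS, 100, 60, USERS // 10)

print(closed)  # 3400000
print(gpl)     # -1600000
```

Under these (invented) assumptions the complement margin alone never covers the development cost, which is exactly the claim being made in the post; whether the real numbers look like this is, of course, the whole dispute in this thread.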

I don't have hard numbers to plug into equations, but it's not too hard to figure out anecdotally. Software costs millions to produce in wages alone (I am a developer... I'm not cheap, and neither is any other one). Add to that the cost of machines, and fixed costs such as office space and so on. Product complements such as support, t-shirts, etc. also cost money to make. Of course, you can't charge $90 for a t-shirt... who the hell would buy it? Shrinkwrap and manuals also cost money. But you can still get all that stuff for free off the web... and most people do. See the coffee and creamer example.

[ Parent ]

WTF (none / 0) (#39)
by regeya on Mon Apr 16, 2001 at 12:04:01 PM EST

The last newspaper I worked for had a "free" website with the day's headlines . . . oddly, people still bought the dead-tree version. They found, though, that they had to remove obituaries from the website, as subscriptions dropped off; it seemed that most people were getting the newspaper to find out who died. :-) Lesson? Nobody said you have to give away the full manual, nobody said you have to make the build process easy, nobody said you have to give the same QoS to free users as to paying customers just because the software is GPL'd. For that matter, nobody said the core business had to be the GPLed software. Do you think that Digital Domain's core business was ever FLTK or FLWM? Do you really think that Eazel's core business will be a file manager? We could go on, but I'm tired of this thread. :-)

And please, stay away from simple economics; you're just fooling yourself if you think the business world operates on simple economics. I know of at least one large corporation (and they're not alone) that would rather spend $20K U.S. per unit on HPs than spend $2K on Windows boxes, because they can get a better service contract. The way I look at it, there are two types of consumers (and this includes business consumers): the person who looks for the bargain simply for the sake of the bargain, and the person who weighs several factors including, but not exclusively, price, as well as warranty, product guarantees, QoS, and so on. How does that fit into the world of GPLed software? Well, if someone like Ximian or Eazel has a USP (Unique Selling Point), can get enough paying customers, and can deliver on their promises, I would be willing to bet that they'd be able to turn a profit. Okay, they won't be worth as much as small nations, but they'll be able to make a living, yes, doing things they probably want to do. If you still want to call them fools for doing so, I'm taking your newspaper away. ;-)

[ Parent ]

hah (none / 0) (#43)
by rebelcool on Mon Apr 16, 2001 at 02:22:57 PM EST

Corporations don't follow basic laws of economics? And I don't breathe oxygen... I breathe air!

*All* businesses (ones in a capitalist nation, anyway) operate under fundamental rules. Think of supply/demand, marginal revenue, profit, marginal cost and the like as the kernel of economics. To say big corporations don't operate under these is absurd. Corporations very much do have to spend money to make things, and they very much have to SELL those things to get money back. They can also choose to sell complementary goods to generate additional revenue, but selling only complementary goods whilst giving away the core product doesn't work very well. Yes, you can make money doing it. But not much, and not enough to coax investors away from companies that sell both.

As such, your example of spending $20K on service contracts and the like is a *detail* of overhead costs. These fall under capital investments, and are relatively unimportant for our purposes.

As for not giving away full manuals, or making things difficult to install... well, that would inspire people to buy manuals and support, but only to a point. If I could go out and buy an easy-to-install OS for the same price the pain-in-the-ass OS would cost me in support, and the easy-to-install OS had numerous benefits over the hard one (such as more software, a better GUI, etc.)... which would I pick? The closed one, of course.

Of course, making something difficult to install would give it a bad name, and bad PR. It drives people away, which is why the Linux companies are working hard to create new, easy methods. Yet at the same time, that's shooting them in the foot.

Of course, all of this could be solved if there were no price cap on the software itself. Time after time, history has shown us that price caps cause more problems than they solve. Just look at the California energy crisis, which stems from a price freeze on electricity.

[ Parent ]

Very good! (none / 0) (#44)
by regeya on Mon Apr 16, 2001 at 07:20:57 PM EST

Corporations don't follow basic laws of economics? And I don't breathe oxygen... I breathe air!

Good! IIRC, about 78% of "air" is nitrogen, and oxygen makes up only about a fifth of it. Good for you. You get a sticker.

Look, at this point, I'm going to sigh and say, "I give up, but not because you win." If one wanted to go with simple economics, why the hell isn't WinNT dead already? I don't know. Why isn't the Win9x series dead yet? I don't know. Ever installed either from scratch? All I've dealt with is the Win9x series and its predecessors. Pit it against, say, Linux-Mandrake, and the closed-source OS looks a lot less attractive.

BTW, I think it's a bit funny that you're doing a complete 180 on this... *sigh*... first stuff like support, ease of use, etc. isn't as important as simple economics, then price isn't as relevant as the rest. Please, make up your mind... regeya signing off this crazy thread.

[ Parent ]

Congratulations! (none / 0) (#46)
by caracal on Tue Apr 17, 2001 at 02:28:23 AM EST

I admire you for your patience, rebelcool, but this is just denial of reality.
You cannot win an argument against such people; let Darwinian selection do the job.
Let them try and die off! But this will take a little while, and we (non-supporters of the GPL business model) have to watch out not to be hurt by the fallout.


[ Parent ]
Putting screw makers out of business (none / 0) (#57)
by nile on Wed Apr 18, 2001 at 02:47:43 PM EST

This is fairly bad reasoning.

There was a time when it was very expensive to make screws. No doubt, when it became possible to make screws automatically, a large number of screw makers were scared. Some might have thought it even threatened the whole screw-making enterprise. After all, it costs a great deal of money to make screws if there has to be a screw maker for each set of screws. The process of commoditization appears to threaten the individual maker.

Of course, we know the outcome of the screw story. As screws became commodities, we moved to higher forms of design. Now, it is not screws themselves, but the products made out of screws, nuts, bolts, large sheets of metal, electronics, etc. that are valuable. The commoditization of basic parts led to higher forms of industry and greater products. The screw makers' jobs transitioned to car manufacturers, toaster makers, etc.

The industry is currently undergoing a transition towards the commoditization of software. Open source, .NET, Enterprise JavaBeans, and several other technologies are evidence of this. When I worked in closed source, I had to make each part, every individual screw if you will. When I work in open source, in contrast, I can take whole assemblages of software and make them work together to form something entirely new. BlueBox, for example, was originally a combination of wxWindows, Xerces, and my own code. There is no way that I could have written a cross-platform toolkit and an industry-grade XML parser in the time frame I had to work on BlueBox. The commoditization of software through open source made it possible, though.

At the same time, new scalable business models are also being formed. Service can be automated in the same way that software is. Red Hat Network and the Helix Code update service do not require a person at each end. What these services do is deliver whole assemblages of software to the end user, not just individual pieces. It is how these assemblages fit together that makes them valuable. For example, if I can download a complete server solution to my computer, with all security packages, several extensions, and a specific e-commerce solution, I have gained something much more than a simple Web server. The fact that someone has put the software together in this way makes me willing to pay for the service. When software is a commodity, it is the assemblage of its pieces that makes it valuable.

I believe this business model works and scales. Although I personally receive it for free, I would pay for the Helixcode update. So would a lot of other people.

cheers,

Nile

[ Parent ]
Time to take econ 1001 again (none / 0) (#65)
by RandomPeon on Sun Apr 22, 2001 at 05:00:27 AM EST

Have you not read what I wrote? Those are basic economic concepts. Marginal costs, product complements, marginal revenue, rates of return.. these are all very basic, and highly important, economic concepts. I imagine whoever came up with the "i know, we'll sell support instead of the software!" idea either doesn't know anything about the costs of producing software, or did poorly in their college economics courses.

I hate to break it to you, but there's a whole industry built on selling support: solutions providers, consultants, value-added resellers. You seem to think of support in very small terms, but setting up an e-commerce site is also a form of "support," substantially more expensive than a helpdesk. Another Econ 1001 term is "loss leader": you can CENTER your business around selling complements. The textbook example that comes to mind is the conventional shaving razor, which is sold at a vastly below-cost price. The complement (replacement blades) is where the money is. Sounds like a trivial case until you consider that Gillette spent an obscene amount of money developing their latest razor ($200M??? Read it in TIME last year). Internet Explorer is another example: Microsoft loses money giving away IE, but it killed browser-centric computing and allowed the complement (Windows) not to get blown off the face of the earth. In order to use IE on an x86, you have to have a computer that runs an MS OS, assuring that you'll still have an OS in the traditional sense. (Of course, this one's illegal, but that never bothered Bill.)
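The loss-leader pattern described above is easy to sketch. The prices and quantities here are invented purely for illustration; the point is only that a per-customer loss on the core product can be swamped by margin on the complement over time:

```python
# Loss-leader sketch: sell the razor below cost, profit on the blades.
# All numbers are hypothetical.

RAZOR_PRICE, RAZOR_COST = 5.00, 12.00   # razor sold at a $7 loss
BLADE_PRICE, BLADE_COST = 2.50, 0.40
BLADES_PER_YEAR = 30
YEARS_OF_LOYALTY = 4

def customer_lifetime_profit():
    razor_loss = RAZOR_PRICE - RAZOR_COST            # negative: the loss leader
    blade_margin = (BLADE_PRICE - BLADE_COST) * BLADES_PER_YEAR
    return razor_loss + blade_margin * YEARS_OF_LOYALTY

print(round(customer_lifetime_profit(), 2))  # 245.0
```

The analogy to GPL businesses is the one the thread is arguing about: whether support and services can play the role of the blades when the "razor" (the software) is free.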

The profit margin on support is far better than the profit margin on software, and supporting open-source software makes that margin better still. Closed software has almost no marginal cost to produce (unlike a car, which makes your analogy pointless), but it has tremendous up-front costs. Support is the opposite. One is a high-risk business (with possible high returns, of course); the other is much safer. Of course, with open source, development costs drop dramatically: little software is genuinely innovative, so the cost to refine a GPLed program which already does what you want is far lower than recreating it from scratch. If you assume open-source software is more stable (often, but not always, true), then your support costs go down, but there's no reason support prices have to go down too. Supporting stable software is cheaper, so I can charge the same amount (or less) and make a greater profit (or the same profit). Oh yeah, open-source software failures are generally easier to resolve, since there's no incentive to hide the cause (Windows errors are cryptic for a reason) and because you can implement the fix yourself if it's cost-effective. Wow, spending money on something other than the product so the product is more profitable... I think that's called a "capital investment" (although the IRS might not agree in this case).

If GM spends $200M on robots that allow them to make cars more cheaply, they've made a capital investment. If Red Hat (or somebody else) spends $10M on coders who make GPLed programs work great, sells "solutions" (God, I hate that word) based on that software, and keeps those coders on staff to instantly put out a diff when something doesn't work....

Of course, anybody else can sell the support, but Red Hat gets a good return on its capital investment compared with the other option (writing the distribution from scratch and keeping the code closed), along with the advantages above of being both the solutions provider and the developer. It's a little weird, since the loss leader is also the capital investment. But IT has never been much for normal business practices: where else is it industry-standard practice for the manufacturer to disclaim all liability for defects, with no refund if the product doesn't work?

[ Parent ]
where's the customer in your theory? (none / 0) (#23)
by anonymous cowerd on Sun Apr 15, 2001 at 11:51:01 AM EST

What silliness! First let's dispose of your obvious factual error:

This is why you don't see any businesses basing their software on GPL...

Both IBM and Apple currently ship products including Apache. IBM isn't a business now? Hmm.

Well, you've completely ignored the entire reason software and all the hardware you wrote this on exists...

As far as I can see, at least one of the reasons that software, and hardware, and cars and hamburgers and N'Sync CDs exist is to satisfy the needs and/or desires of customers.

This isn't a trivial objection. Instead it addresses a thoroughly stupid, persistent and headache-inducing misunderstanding of the basis of the entire capitalist system. You claim that "everything else of modern life" exists merely so some corporation's stockholders can amass wealth. But that point of view obstinately and blindly leaves out the other half - my half - of the buyer-seller equation.

Think about it for a second. Sure, in any commercial transaction the seller wants to make money, and in most businesses that's why they play the game. But does any buyer ever willingly part with his cash just so the seller can accumulate wealth? That is precisely what you assert, yet nothing could be more obviously absurd. No, the buyer spends his hard-earned money with considerable reluctance so that he can achieve something else, and more often than not that something else is non-economic in nature. That is, I might buy a computer to do professional work on it, e.g. edit CAD drawings for my civil engineering business, but for every PC sold to a business there are three sold to people who surf the net at home for amusement - and for that matter, even in my office a good half of the CPU cycles are spent playing mp3s and surfing for Star Trek trivia. Or, for every vehicle sold to build some company's profit-making work-truck fleet, there are twenty SUVs, luxury sedans and sports cars sold to gratify some private citizen's desire for comfort and prestige. And this is how it should be! Keep in mind, damn it all, that we ordinary (i.e. non-investment-class) people are not merely drone worker bees whose sole purpose in life is to stuff the cells of a bee's nest with honey.

Just as it is the intent of every seller to get the absolute most out of every transaction - in other words, I want to sell you this quart bottle of drinking water for all the cash you have in the world - it is equally the intent of the buyer to get his goods for the lowest possible price, or if possible, for free. In order for a so-called free-market system to thrive, it needs this dynamic tension between the desires of the seller and of the buyer. That's what competition is all about. But by your theory of capitalism, the interests and desires of the buyer are simply ignored, rejected, cancelled out. And it's not just you who persistently makes this dumb error. There's the Calvin Coolidge school of politics: "The business of America is business." There's the Charles Wilson philosophy: "What's good for General Motors is good for America." There's the Reagan/Dubya tax theory: "Good policy mainly consists of making the already-too-rich yet richer." How blockheaded can you get?

There's an old saying, and perhaps it's gone out of style these dumb new days, but they always used to say "the customer's always right!" Admittedly some companies lose track of this notion, generally to their subsequent regret. Here I'm thinking, for example, of Microsoft, which wants us to replace, at our own expense, the operating system we currently have on our PCs with a "new improved one," Windows XP, which is specifically engineered not to allow you to listen to or record mp3s. Gee, what an excellent idea! I'll just run down to the store and empty my wallet so I can break my PC! Or those drooling morons over at the RIAA, who imagine that the best way to sell teenagers essentially unnecessary amusement products is to rant and yowl at their prospective customers, hostilely accusing them of what they mislabel as "thievery" and threatening them with the prospect of going to jail. And all this stupidity goes back to the pinheaded notion that the world turns on its axis, and the sun lights the sky, and the wind blows and the grass grows, just so and only so that corporations can make money. Feh.

Yours WDK - WKiernan@concentric.net

"This calm way of flying will suit Japan well," said Zeppelin's granddaughter, Elisabeth Veil.
[ Parent ]

heh..oh where to begin.. (none / 0) (#25)
by rebelcool on Sun Apr 15, 2001 at 12:35:18 PM EST

"Both IBM and Apple currently ship products including Apache. IBM isn't a business now? Hmm."

Go read what I wrote. I said "basing" businesses on it. That would mean the core product of the business is GPL'd, such as with Red Hat. Are Apple and IBM using GPL'd software? Yes. Are they based on it? Certainly not. They make money using tried-and-true methods. :)

"But by your theory of capitalism, the interests and desires of the buyer are simply ignored, rejected, cancelled out."

This made me laugh. So apparently a short post about why software tool companies are not going to GPL their stuff, and why it would be foolish to do so, obviously means I have some huge economic theory. I'm the next Adam Smith!

The rest of your post seems to be bitching about how customer satisfaction doesn't exist anymore... or at least that's what I gather from it.

[ Parent ]

Ummm... (none / 0) (#63)
by LukeyBoy on Wed Apr 18, 2001 at 07:40:01 PM EST

I hate to bitch, but Apache is under its own open source license, which allows a company to modify and extend the code without releasing the changes, as long as they include a small one-liner in their documentation stating what code was used.

[ Parent ]
nitpicks (4.00 / 1) (#15)
by kei on Sun Apr 15, 2001 at 12:29:34 AM EST

Because Java code is always released in binary form, Java clients are forced to make intrinsic dependencies on specific Java virtual machines.
Your blurb about Java is a bit skewed... Java bytecode is "always released in binary form" because that's a lot better than having an end user download Java source code, run it through a compiler that supports the same API version, and then run it in a JVM. That's just wholly unreasonable (and slow, too). Java's API changes because it's a new language. You don't really expect that the first version of K&R C and the latest ANSI standard would be entirely compatible, do you? Anyway, this is not to say that the Java licensing model is a particularly good one, just that the claim that downloading bytecode is somehow a bad thing needs to be contested.

If the Web was built around applets rather than HTML, the entire Web would have to be rewritten every time a new JVM was released.
One could argue that much of the Web does have to be rewritten. I use Mozilla for my daily browsing, and many pages that are designed only for Netscape 4.x or MSIE look terrible and use badly malformed HTML and proprietary tags to achieve their functionality. The fact that billions of pages require no rewrites is because billions of pages use nothing fancier than tables or framesets.
--
"[An] infinite number of monkeys typing into GNU emacs would never make a good program."
- /usr/src/linux/Documentation/CodingStyle
On Java and Mozilla (none / 0) (#29)
by nile on Sun Apr 15, 2001 at 03:00:40 PM EST

One could argue that much of the Web does have to be rewritten. I use Mozilla for my daily browsing, and many pages that are either designed only for Netscape 4.x or MSIE look terrible and use badly malformed HTML and proprietary tags to achieve their functionality.

Look at Mozilla's history, though. Five months ago, there were a lot more pages that Mozilla was not able to render correctly. The history of Mozilla has been one of better supporting the different pages on the Internet by trying to read malformed HTML. Mozilla could also read proprietary tags if it wanted to - since they are parsable - but does not, because the project wants W3C compliance.

The fact that billions of pages require no rewrites is because billions of pages use nothing fancier than tables or framesets.

Using XML technologies alone, this is correct: open reuse would only work on a limited scale, because of the scalability problem of creating open reuse solutions richer than tables and framesets. With word-oriented programming, though, it is possible to create scalable parsable syntaxes. This means that open reuse can be a general solution.

Your blurb about Java is a bit skewed... Java bytecode is "always released in binary form" because it's a lot better than having an end user download Java source code, run it through a compiler that supports the same API version, and then run it in a JVM. That's just wholly unreasonable (and slow too). Java's API changes because it's a new language. You don't really expect that the first version of K&R C and the latest ANSI standard would be entirely compatible do you? Anyways, this is not to say that the Java licensing model is a particularly good one, just that downloading bytecode is somehow a bad thing needs to be contested.

I think we disagree here. Web browsers can still read HTML 1.0 Web pages. Parsable descriptions do not have the versioning problems of Java. Nor do they need to be compiled, just translated to the technologies that implement them (a substantially smaller computational task).

cheers,

Nile

[ Parent ]
Drop The OSS Part (4.75 / 4) (#21)
by zephiros on Sun Apr 15, 2001 at 11:31:13 AM EST

In reading through this, I'm not seeing how open source software is key to making this sort of technology work. It seems perfectly feasible to create a closed-source library fronted by an XML interface. Future iterations of the library would simply ship with a set of XSLT documents describing how to translate calls, depending on what version of the library the calling app was expecting. If anything, access to the library source would encourage writing interface calls more tightly coupled to the underlying code, which runs counter to modern development wisdom.
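The version-translation idea above can be sketched without any access to the library's source. This is a minimal illustration in Python's standard xml.etree rather than real XSLT, and the element and attribute names (`call`, `func`, `method`, `api`) are invented for the example:

```python
# Sketch of rewriting a v1-style XML call document into a hypothetical v2
# schema, standing in for the XSLT shim described above. In this made-up
# v2, <func> was renamed to <method>; argument elements are unchanged.
import xml.etree.ElementTree as ET

def translate_v1_to_v2(call_xml: str) -> str:
    """Rewrite a <call api="1.0"> document into the v2 shape."""
    root = ET.fromstring(call_xml)
    new_root = ET.Element("call", {"api": "2.0"})
    for func in root.findall("func"):
        method = ET.SubElement(new_root, "method",
                               {"name": func.get("name", "")})
        for arg in func.findall("arg"):
            method.append(arg)  # arguments pass through untouched
    return ET.tostring(new_root, encoding="unicode")

old_call = '<call api="1.0"><func name="draw"><arg>10</arg></func></call>'
print(translate_v1_to_v2(old_call))
# <call api="2.0"><method name="draw"><arg>10</arg></method></call>
```

The point of the sketch is that the translation operates purely on the XML interface, which is why, on this argument, the library behind it could stay closed.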

Now, on the other hand, if you're talking about truly abstracting interfaces to the point where calling applications simply describe what services they want to consume (but don't rely on a specific library), we're back to the whole ontology issue. Which is a discussion we've had before. Speaking of which, did you ever check out existing work in this field, such as DAML, SHOE, OIL, SKC, BUSTER, OKBC, or KIF?
 
Kuro5hin is full of mostly freaks and hostile lunatics - KTB

In retrospect... (4.00 / 1) (#24)
by zephiros on Sun Apr 15, 2001 at 12:04:44 PM EST

DAML and SHOE are really poor examples. I was cutting-and-pasting from the "Ontology" section of my bookmarks, without really thinking how appropriate they were. I also missed UDDI and RDF, but that was intentional. I really don't think either of those are headed in the right direction, despite the industry support of both. I'm starting to think it's going to require a fundamental refactoring of how we create systems in order to facilitate this sort of deep, broad-scale integration. UDDI and RDF are simply stop-gaps on the path to this Way New Architecture(tm).
 
[ Parent ]
Why OSS can't be dropped (4.00 / 2) (#27)
by nile on Sun Apr 15, 2001 at 02:28:01 PM EST

Hi zephiros!

First, thanks for taking the time to understand the articles and respond to them. You've consistently provided help in this regard, and I appreciate it.

OSS is key for two reasons. The first is technical: in order to use open reuse as a general reuse model, it is necessary to integrate the syntaxes of different domains. To scalably integrate syntaxes, it is necessary to use the rule-relationships of word-oriented programming. One of the consequences of word-oriented programming is that it is always possible to retrieve the source code, even if a program and its libraries are distributed in binary form.

The second reason is more fundamental to open reuse itself, because it explains why open reuse libraries need to be open source without talking about their implementation. Consider a GNOME programmer who wants to use a library that was originally written for SOAP and Windows. The library is using the wrong technologies (the GNOME programmer wants it to use GTK and Bonobo) and, if the programmer does not have access to its descriptions to map it to new ones, the library will not be useful. If the library is open source, though, and the programmer has access to its descriptions, the GNOME programmer can map it to the GTK toolkit and Bonobo. This is why open source libraries are better than closed source libraries: the programmers using them can always map them to the new technologies they want to use.

I think it helps to break down whom open reuse benefits into two groups. From the library developer's perspective, only releasing the generator guarantees, as you correctly pointed out, that developers will use it rather than making direct API dependencies.

However, from the client's perspective, if the library is released as a binary, the client will not be able to map its algorithms to new technologies. This means that the client is at the mercy of the library developer. If the library developer decides not to support new technologies, there is nothing the client can do.

cheers,

Nile

[ Parent ]
Re:Why OSS can't be dropped (5.00 / 2) (#32)
by zephiros on Sun Apr 15, 2001 at 03:43:17 PM EST

The first is technical: in order to use open reuse as a general reuse model, it is necessary to integrate the syntaxes of different domains. To scalably integrate syntaxes, it is necessary to use the rule-relationships of word-oriented programming. One of the consequences of word-oriented programming is that it is always possible to retrieve the source code, even if a program and its libraries are distributed in binary form.

It would be simple to architect a cost structure into this sort of scheme, and implement a trivial Bayesian tree mechanism for identifying the best path to take to get from functionality A to functionality A+B. Users could weight the importance of price, reliability, speed, and trust, and the discovery engine could assemble the best series of components and report back a price.
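The weighted-discovery mechanism sketched above might look like this. The candidate catalogue, the weights, and a plain weighted score (standing in for the Bayesian tree) are all invented for illustration:

```python
# Weighted selection among candidate components, as described above.
# The catalogue entries and the scoring weights are hypothetical.

CANDIDATES = [
    # (name, price, reliability 0-1, speed 0-1, trust 0-1)
    ("oss_parser",    0.0,  0.90, 0.70, 0.80),
    ("closed_parser", 50.0, 0.95, 0.90, 0.60),
]

def best_component(weights, max_price=100.0):
    """Return the candidate name with the highest user-weighted score.
    Price is normalized so that cheaper components score higher."""
    w_price, w_rel, w_speed, w_trust = weights
    def score(candidate):
        _, price, rel, speed, trust = candidate
        price_score = 1.0 - min(price, max_price) / max_price
        return (w_price * price_score + w_rel * rel
                + w_speed * speed + w_trust * trust)
    return max(CANDIDATES, key=score)[0]

print(best_component((0.7, 0.1, 0.1, 0.1)))  # price-sensitive user
print(best_component((0.0, 0.4, 0.5, 0.1)))  # performance-first user
```

A price-heavy weighting favors the zero-cost OSS component, while a reliability/speed weighting can pick the closed one, which is the "subtle advantage but no prohibition" point made in the comment.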

Granted, this gives a subtle advantage to OSS components (since their cost is 0), but it doesn't expressly forbid closed source. There's a great deal of closed source development going on out there. If you ignore that, you're creating a serious impediment to adoption.

If the library is open source, though, and the programmer has access to its descriptions, the GNOME programmer can map it to the GTK toolkit and Bonobo.

By writing a Bonobo to SOAP bridge? Why would he need the source of the Windows component to do that? SOAP is a well documented protocol which defines standard interface and data types. If our hypothetical programmer, armed with the SOAP spec, can't figure out how to author a method call over the wire, giving him the source to the Windows component isn't going to help.

If you mean our hypothetical programmer wants to rewrite the Windows component for Gnome, then a new description language isn't going to help. The programmer is still stuck poring over lines of VB code. Unless you mean VB/SOAP code written with "words" can automagically transform itself into GTK+/Bonobo code, which is a trick I'd pay actual money to see.
 
Kuro5hin is full of mostly freaks and hostile lunatics - KTB
[ Parent ]

Ambiguous Explanation. A second try. (none / 0) (#40)
by nile on Mon Apr 16, 2001 at 12:40:51 PM EST

As a preface, it is important not to think in terms of component models when discussing words, because today's component models are inherently object-oriented. I shouldn't have used components in my example, since they only lead to this confusion.

That said, I think my original explanation was ambiguous. Rereading it, one could reasonably interpret it as describing a Win32 SOAP library rather than a Win32 library that used SOAP.

Imagine that you have a company that has designed an architecture product that uses SOAP and the Win32 GUI classes. One of the libraries that the architecture product uses is a construction parts library that uses Win32 GUI classes to display the construction pieces and SOAP calls to download new pieces from the Internet. The market changes and now you want your product to run on Linux. What do you do?

If the construction library is closed source or does not generate its code from parsable descriptions, porting the project is going to be difficult. In the first case, there is nothing one can do except try to use something like Wine (Wine is getting better, but even if it works, the look and feel will not be right). In the second case, one has the option of porting it, but that takes a lot of time, and porting efforts are frequently riddled with bugs.

If, however, the library is both open source and uses parsable descriptions to generate its Win32 and SOAP code, one only needs to use GTK and Bonobo generators to write out a new set of code for the library. The GUI descriptions might look like:

<Window height = "500">
<Toolbar>
<Tool image="stop.jpg">
.....
</Toolbar>
<ScrolledBox height = "60">
.....
</ScrolledBox>
</Window>

Because this is parsable, a generator could produce either VB Win32 GUI code or C GTK code from it, in the same way a generator can create Netscape- and IE-specific versions of a page from XML. Now, the reason that open source cannot be eliminated is that in order to use a generator on a library, one has to have access to its descriptions. The claim can be made even stronger: if a generator of your choice is run on parsable descriptions, even if someone runs it for you, one still has access to the source, because one can have the generator print out each element it encounters.
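A minimal sketch of that idea, using hypothetical tag names and GTK-flavored pseudo-calls (this is not the actual BlueBox description format): one generator walks the description and emits toolkit-specific code; a second generator for another toolkit could walk the very same description.

```python
import xml.etree.ElementTree as ET

# A description along the lines of the example above.
DESCRIPTION = """
<Window height="500">
  <Toolbar>
    <Tool image="stop.jpg"/>
  </Toolbar>
  <ScrolledBox height="60"/>
</Window>
"""

def generate_gtk(node, out, depth=0):
    """Emit a GTK-flavored pseudo-call for each element encountered."""
    pad = "  " * depth
    if node.tag == "Window":
        out.append(f'{pad}gtk_window_new(height={node.get("height")})')
    elif node.tag == "Toolbar":
        out.append(f"{pad}gtk_toolbar_new()")
    elif node.tag == "Tool":
        out.append(f'{pad}gtk_toolbar_append(image="{node.get("image")}")')
    elif node.tag == "ScrolledBox":
        out.append(f'{pad}gtk_scrolled_window_new(height={node.get("height")})')
    for child in node:
        generate_gtk(child, out, depth + 1)
    return out

calls = generate_gtk(ET.fromstring(DESCRIPTION), [])
print("\n".join(calls))
```

Swapping in a VB or Bonobo generator means replacing only the emit strings, and because the generator visits every element, it can just as easily print the description back out, which is the "still have access to the source" point above.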

Unless you mean VB/SOAP code written with "words" can automagically transform itself into GTK+/Bonobo code, which is a trick I'd pay actual money to see.

That's almost it. What I mean is that if the VB/SOAP code is generated from a parsable description, you can use another generator on the parsable description to generate GTK/Bonobo code.

My first explanation was ambiguous, so my apologies.

Nile

[ Parent ]
No More Secrets (none / 0) (#49)
by zephiros on Tue Apr 17, 2001 at 11:25:16 AM EST

If, however, the library is both open source and uses parsable descriptions to generate its Win32 and SOAP code, one only needs to use GTK and Bonobo generators to write out a new set of code for the library.

You know, from a strictly conceptual standpoint, I'll buy that this will work. But you've got some serious coding ahead of you:

  • Create a meta-language which describes, in a generic fashion, all possible system and library calls on all target platforms.
  • Create a usable number of generators to transform this meta-language into code.
  • Create a set of "word"-enriched core system libraries upon which to build applications. Again, for all target platforms.
  • Rewrite legacy applications that need to be "word" compatible.
Part of the problem is information hiding. If I wrap some procedural code using OO code, I don't need to rewrite the procedural code. Hell, I don't even need to truly understand what the procedural code is doing. As a result, making legacy functionality available to OO programmers is a feasible task.

On the other hand, if I wrap some OO code with Word code, I need to include a complete functional description of the code I'm wrapping. Which either means I need to rewrite my legacy code or reverse-engineer it. IMO, that's asking a lot from adopters.

I will admit, though, your underlying premise (that sufficiently detailed, machine readable service descriptions will make all code transparent) is very interesting, and slightly disturbing. It's been the source of much talk around the office lately. However, I think this will come about from rich, cross-system/cross-discipline ontologies, not the creation of a new programming technique.
 
Kuro5hin is full of mostly freaks and hostile lunatics - KTB
[ Parent ]

Stone soup and Darwinian advantages (none / 0) (#52)
by nile on Tue Apr 17, 2001 at 09:18:38 PM EST



On the other hand, if I wrap some OO code with Word code, I need to include a complete functional description of the code I'm wrapping. Which either means I need to rewrite my legacy code or reverse-engineer it. IMO, that's asking a lot from adopters.

Fortunately, we don't have to have complete adoption. You're right about that being the wrong way to go. What we are going to do is hook people in by doing smaller examples that benefit them. It is possible to wrap OO code with Word code and there are still significant benefits.

The first thing we are doing is using descriptions to provide a GUI syntax for Windows, Mac, GTK, and Qt. A program retrofitted with it can send its interface across the network to BlueBox on any of these machines, and client interactions with the program are then handled through network calls. The benefit is that existing GNOME and KDE programs can be retrofitted with the syntax to serve programs to all computers in an office. This will greatly accelerate the adoption of open source by making existing open source programs an option for Windows and Macintosh computers.

When the description reaches the point of being fully parsable, it can be used to generate GTK and KDE code in the same way Glade does. At that point, the code can directly wrap the existing C and C++ libraries with C and C++ code that the descriptions have previously been reaching through network calls. As this mixed method of supporting multiple solutions catches on, we expect to have help extending descriptions to component models, networking, databases, etc.

Our philosophy is small adoption costs, high interoperability benefits, and stone soup growth. There is still a lot of work, of course, but I think as open reuse is used to allow more and more communities to share work by completely abstracting out their algorithms, we'll have a lot of helping hands.

I will admit, though, your underlying premise (that sufficiently detailed, machine readable service descriptions will make all code transparent) is very interesting, and slightly disturbing. It's been the source of much talk around the office lately. However, I think this will come about from rich, cross-system/cross-discipline ontologies, not the creation of a new programming technique.

Computational ontology is another approach to problems in the same space. When I started this project, I originally was using RDF. There are severe scalability problems with the ontology approach that are still two to three years off the radar. More strongly, I suspect that there is an exponentially exploding complexity problem with the ontology approach (as articulated by TBL), but I haven't teased it out yet.

I'm intrigued with the idea that ontologies lead to open source. Could you explain this idea more?

Nile

[ Parent ]
On computational ontology (none / 0) (#28)
by nile on Sun Apr 15, 2001 at 02:49:21 PM EST

I've worked in computational ontology a little before, back when I used RDF. I agree with you that it is not the solution to problems that the industry thinks it is.

This article is tackling a different problem than computational ontology. Computational ontology is interested in relating sets of terms and sentences in different ontologies with each other. If two different providers are using different syntaxes but the same underlying concept, knowing that the concept is the same allows one to write software that can use the concept from both providers.

The goal of open reuse is to abstract out algorithms from their implementations. It's a more abstract form of programming (much more abstract than templates and generic programming), which is where the word model comes in. Whereas computational ontology is concerned with mapping the semantic relationships between different sets of concepts, open reuse is concerned with mapping an abstract description of a solution to different physical instantiations.

cheers,

Nile

[ Parent ]
great job (none / 0) (#45)
by jdtux on Mon Apr 16, 2001 at 08:56:11 PM EST

a little long though... :\ +1FP anyway though

Some Questions (5.00 / 2) (#50)
by pos on Tue Apr 17, 2001 at 04:30:57 PM EST

Once again, I enjoy reading your writings. Keep it up

The phrase "What do you mean by that?" turns out not to have been asked by a student, but as a prelude to a bar fight. Chris turns out to be the girl and Pat the boy in a relationship. When this occurs, we simply remap definitions to create a new interpretation of what we are reading.

and later on....

Open reuse offers an alternative model. With open reuse, every time developers release a new version of their libraries, they release a generator that maps the old descriptions to new APIs. In this way, all of the clients are instantly updated to the new APIs, greatly accelerating the development cycle. The edge cases of the platform can be instantly tested because all of the clients are test cases for the new APIs. The traditional cycle of releasing a platform and updating clients to that platform is eliminated.

If in 100 years the phrase "What do you mean by that?" has the semantic meaning of a person asking for someone to marry them, doesn't a story written today have a new unintended meaning? I remember snickering in school when the teacher read a story and the main character's name was "Dick" or a character was described as "gay". Are the two different meanings of "gay" actually two different words that simply share a token?

My main question is this: Don't all the edge cases still have to be tested by people to check for changes in semantics? I can see how they will write themselves, but how do you avoid the unintended consequences of a dynamically changing vocabulary over time?

I am reminded of legal writing. Legal documents are very explicit about their language in part because they are aware that meanings of words are not only different from person to person (or more importantly judge to judge) but also change as time moves on.

Even so, lawyers often cannot protect against unintended consequences that occur when legal language gets interpreted in new ways. The functionality of their contract is broken. Furthermore, it requires a person to decide that the functionality is broken.

Still, it is good to see someone addressing these problems.

-pos

The truth is more important than the facts.
-Frank Lloyd Wright
Good question (none / 0) (#51)
by nile on Tue Apr 17, 2001 at 08:29:48 PM EST

One way to think of open reuse is to think of programs as file formats (file formats are also parsable). Some of the semantics of HTML have changed since the first version, but HTML 1.0 can still be read correctly by good browsers, because those browsers apply 1.0 rather than 4.0 semantics. Thus, even if a tag has a different semantic meaning in HTML 1.0 than in 4.0, the browser knows to use the 1.0 meaning.

Your analysis is a strong argument for versioning syntaxes just as file formats are versioned. A program that mapped a GUI description to GTK calls might have a 1.0, a 1.2, a 1.4, etc. version. As long as the version was correctly identified, mapping an old tag that is now used differently would not be a problem.
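A sketch of that versioning idea, with hypothetical tag-to-call tables: because a description carries its syntax version the way a file carries its format version, an old tag keeps its old meaning even after the current syntax has moved on.

```python
# Hypothetical per-version mapping tables for one tag of a GUI syntax.
# The call names are stand-ins, not a real generator's output.
GENERATORS = {
    "1.0": {"ScrolledBox": "make_scrolled_box_v1"},
    "1.2": {"ScrolledBox": "make_scrolled_panel"},  # semantics changed in 1.2
}

def map_tag(tag, version):
    # A description declaring version 1.0 is always read with 1.0 semantics,
    # just as a browser reads an HTML 1.0 page with 1.0 semantics.
    return GENERATORS[version][tag]

print(map_tag("ScrolledBox", "1.0"))
print(map_tag("ScrolledBox", "1.2"))
```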

We do all of this less precisely in English by looking at the date when things were written and interpreting documents by the version of English that was used at that date. Many older works include definitions of words that have changed to aid readers. I'm surprised - now that I think about it - that lawyers don't include the definitions of their words with their documents.

My main question is this: Don't all the edge cases still have to be tested by people to check for changes in semantics? I can see how they will write themselves, but how do you avoid the unintended consequences of a dynamically changing vocabulary over time?

You're right, the edge cases do have to be tested in both open reuse and API reuse. Open reuse does not eliminate the possibility that a library writer changes semantics unexpectedly, any more than the API model does. What it does is shift the burden from the clients of the library to the library writer. An open reuse developer can check all of the clients to see whether any of them break because of semantic changes before releasing the library; if any do, the developer fixes the library before releasing it. An API developer, in contrast, has to release the library and then wait to hear whether there are any semantic bugs. At that point, it may not be possible to fix them because the library has already been adopted.

Open reuse, then, places the burden of changing semantics where it belongs, on the library writer, and provides a debugging method to instantly check whether there is a problem.
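The pre-release check described above can be sketched as a loop, with a stand-in generator and made-up client descriptions (none of these names come from the actual project): regenerate every known client and flag any that break before the new library ships.

```python
def new_generator(description):
    # Stand-in for a new-version generator run over a client's description;
    # here, one tag's semantics have changed and such clients now break.
    if "deprecated_tag" in description:
        raise ValueError("semantic change breaks this client")
    return "generated code for " + description

# Hypothetical registry of client descriptions known to the library writer.
clients = ["app_one <Window/>", "app_two deprecated_tag"]

broken = []
for client in clients:
    try:
        new_generator(client)
    except ValueError:
        broken.append(client)

print(f"{len(broken)} of {len(clients)} clients break; fix before release")
```

The point is who runs this loop and when: the library writer before release, rather than each client after adoption.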

Nile

[ Parent ]
Flatland anyone? (none / 0) (#56)
by pos on Wed Apr 18, 2001 at 01:31:00 PM EST

I'm surprised - now that I think about it - that lawyers don't include the definitions of their words with their documents.

They often do. The laws still get interpreted according to contemporary thought. Even if we could roll back the human brain to interpret a law according to the intent when it was written, there is little desire to do so.

Doesn't your argument that it should be the library writer's responsibility mean that the library writer must have knowledge of all (or a reasonably large sample) of the clients? My point here is that in a large development setting, the library writers (or QA) would be waiting on all of the client groups to say that everything is OK. I guess that's what a good API programmer would do anyway though.

I can see how it could work that way. It's just very different. This whole discussion reminds me of the book Flatland by Edwin Abbott. He tries to describe what a 4-dimensional creature could do that a 3-dimensional one cannot by demonstrating the advantages a 3D one has over 2D and 1D. For example: a 4D creature could touch my heart without breaking my skin, just like I can put my finger on the inside of a circle without going through the circumference first.

It is very hard to get people to "see" your 4th dimension of programming. I think you are doing a pretty good job.

One last question: what happened to bluebox.sourceforge.net? Seems kinda broken. When do you plan on developing in an open cvs?

-pos

The truth is more important than the facts.
-Frank Lloyd Wright
[ Parent ]
Good analogy (none / 0) (#58)
by nile on Wed Apr 18, 2001 at 03:21:08 PM EST

They often do. The laws still get interpreted according to contemporary thought. Even if we could roll back the human brain to interpret a law according to the intent when it was written, there is little desire to do so.

Very interesting. I can definitely see cases where one would want to map an old semantic meaning to a new one. For example, a "save" word might once have saved to a flat file, but now the company saves to a database. In that case, the semantic meaning should be changed. Either way is possible with open reuse, so it will be interesting to see how things play out in practice.

Doesn't your argument that it should be the library writer's responsibility mean that the library writer must have knowledge of all (or a reasonably large sample) of the clients? My point here is that in a large development setting, the library writers (or QA) would be waiting on all of the client groups to say that everything is OK. I guess that's what a good API programmer would do anyway though.

Yes, and you're right that a good API programmer should do so anyway. The problem with the API method is that it takes time for all of the clients to rewrite themselves, so semantic bugs cannot be found instantly. In contrast, when the library can instantly rewrite clients as they are compiled, one has an immediate debugging method for finding subtle bugs in libraries.

I can see how it could work that way. It's just very different. This whole discussion reminds me of the book Flatland by Edwin Abbott. He tries to describe what a 4-dimensional creature could do that a 3-dimensional one cannot by demonstrating the advantages a 3D one has over 2D and 1D. For example: a 4D creature could touch my heart without breaking my skin, just like I can put my finger on the inside of a circle without going through the circumference first.

I like this analogy; it suggests a good way of explaining the idea. Start with OOP and show how coupling data and methods benefited structured programmers, then move to word-oriented programming and show how coupling syntactical and semantic relationships with data and methods can benefit OOP programmers. The new-thing approach (i.e., finger in a heart) is a good idea too. One of the things that seems to help people is talking about rule/relationship inheritance and polymorphism, which is a new feature of word-oriented programming. Thanks. I may use that in the future.

One last question: what happened to bluebox.sourceforge.net? Seems kinda broken. When do you plan on developing in an open cvs?

We're working at building a community site at dloo.org. This will allow us to experiment in several different ways. The code is still open, so nothing will change there.

Open CVS is coming very soon. I wanted to get the code into better shape before putting it in a public CVS tree. Right now, there are too many dependencies for developers (both wxWindows and Xerces) and the install is very difficult. The current version will only require Python (with wxPython being a later requirement for GUI developers). In addition, the word compiler will itself be written in words, which is a critical step. It should be in CVS in the next two weeks.

Thanks for the encouragement,

Nile

[ Parent ]
Good example of re-writing laws (none / 0) (#59)
by pos on Wed Apr 18, 2001 at 03:26:01 PM EST

Just in case you were curious, I found a good example of someone re-writing laws to include things beyond the intentions of the original documents.

Courtney Love describes on page 2 how, last November, Congressional aide Mitch Glazier added an amendment to the Copyright Act that allowed audio recordings to fall under "works for hire". This means that after the normal 35-year period, when copyright would be returned to the author, it will now never be returned.

my favorite quote: So an assistant substantially altered a major law when he only had the authority to make spelling corrections. That's not what I learned about how government works in my high school civics class.

Three months later, the RIAA hired Mr. Glazier to become its top lobbyist at a salary that was obviously much greater than the one he had as the spelling corrector guy.

This same change also allows a record company to hijack your CourtneyLove.com domain name the day you write a "work for hire" which now means a song.

Lots of fun.

-pos

The truth is more important than the facts.
-Frank Lloyd Wright
[ Parent ]
Very disturbing (none / 0) (#61)
by nile on Wed Apr 18, 2001 at 04:07:44 PM EST

Do I understand this right? Did the record industry rob music artists of the rights to music that was going to be returned to them in the next ten years? Or did they just change the meaning of the law as applied to music after November, 2000?

If it is the former, then it is one of the most disturbing things I've read in the past few years. Although everyone knows that money influences politics, I've always believed a few things held. One of them is that one cannot change the meaning of a law after the fact and retroactively apply it.

Thanks for the reference.

Nile

[ Parent ]
GPL as Code | 65 comments (38 topical, 27 editorial, 0 hidden)