It doesn't have anything to do with information hoarding or IP -- it has to do with people being different, and therefore relating things together in different ways -- ways that carry information for that person, and maybe for others, but not for everybody.
Here's the part where I have to explain XLink two-way links. I'm going to borrow from a Sun article:
XLink solves both of these problems. How? In HTML you have to surround the starting point with your link markup, and then provide a URI to the ending point; you have to own the resource containing the starting point in order to do any linking at all. In XLink you simply provide a URI for both the starting point and the ending point -- no permission required for either half, and no need to edit the starting points to fix them when the ending points get moved around.
Of course, when you store a link apart from its own starting point, then anyone viewing the document containing the starting point needs explicit access to the link information -- because if they can't find the link, the starting point just looks like undistinguished text. XLink defines some ways that browsers can hunt down the relevant links for a document, so that the links can be loaded and revealed to the user, for example by making the starting-point text blue and underlined.
With XLink, a site organizer could simply store a database of links to associate with his/her content. If the user uses one of these links to go elsewhere, he can continue to use the link database, in which case two-way links are operational whether or not the new site's organizer has been involved.
Even better, a third-party could store a database of links for users to browse when they want to find information on, for example, digital cameras. None of the actual content providers need be involved in the creation of these associations.
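To make that concrete, here's a rough sketch of what such a third-party linkbase might look like. The element names and URIs are invented for illustration; only the xlink:* attributes and the namespace come from the XLink spec:

```xml
<!-- An out-of-line XLink "extended" link, stored in its own document,
     away from both resources it connects -->
<links xmlns:xlink="http://www.w3.org/1999/xlink">
  <link xlink:type="extended">
    <!-- locators name both ends by URI; neither document gets edited -->
    <loc xlink:type="locator"
         xlink:href="http://example.com/reviews/dsc-100.html"
         xlink:label="review"/>
    <loc xlink:type="locator"
         xlink:href="http://example.org/specs/dsc-100.xml"
         xlink:label="specs"/>
    <!-- arcs declare traversal in both directions: a two-way link -->
    <go xlink:type="arc" xlink:from="review" xlink:to="specs"/>
    <go xlink:type="arc" xlink:from="specs" xlink:to="review"/>
  </link>
</links>
```

A browser that loads this linkbase can present the review page with the specs link attached, even though the review's author never heard of the linkbase.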
I still think the web is a counterexample to this. HTML still bears the hallmarks of its ad hoc birth (why is IMG not a container tag, for example?). Nevertheless, I don't think most people think the web is "doomed to failure".
HTML has a solid foundation, but it's limited and simplistic. Idiosyncrasies aside -- like the <IMG> tag -- HTML is a very good, very valuable implementation. It's simply outliving its usefulness.
Web pages. With words, and pictures, and links. To other web pages. Ya know, the kind some hopelessly archaic people still hand code? Does anybody sane hand-code XML?
I've hand-coded all of these replies in XHTML, except for the <br> tag, which is <br/> in XHTML. Regardless, this is where HTML and XML will seem most different for the average user. I hate to have to harp on this, but you need to remember that data and presentation were/are unified in HTML. The author had to be part writer, part geek, and part graphic designer to write HTML code. The only tools that proved any good at automating the authoring process were WYSIWYG editors, which -- you will doubtless agree -- were pretty clumsy.
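For anyone who hasn't seen the difference, it comes down to XML's insistence on well-formedness -- every element must be closed. This fragment is just an illustration:

```xml
<!-- HTML tolerates the bare form: <p>line one<br>line two</p> -->
<!-- XHTML, being XML, requires the empty-element syntax: -->
<p>line one<br/>line two</p>
```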
XML, by comparison, will enable the use of much, much more metadata, and that metadata will be focused in its use. The separation of data and presentation will mean that XML lends itself to workflows: a writer, a pagesetter, an artist... all of these people can be collaboratively involved in the production of documents using tools and XML applications (note the XML meaning of that word) that lend themselves to the appropriate tasks.
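As a sketch of what that separation looks like in practice (the element names and the stylesheet filename here are hypothetical), the writer's document carries only meaning, and a processing instruction points at presentation rules that someone else maintains:

```xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="article.xsl"?>
<!-- "article.xsl" is a made-up stylesheet a pagesetter could own;
     the writer never touches fonts, colors, or layout -->
<article>
  <headline>Digital Cameras Reviewed</headline>
  <byline>J. Writer</byline>
  <body>The copy goes here, marked up by role, not by appearance.</body>
</article>
```

Swap in a different stylesheet and the same document becomes a printed page, a WAP deck, or whatever -- without the writer's involvement.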
To bring this down to earth, we could posit an example involving a regular-Joe Web page... MP3.com, for example. A content manager is responsible for decisions over what the site will contain. A writer types up the copy in a WYSIWYG text editor, an artist creates the imagery, and the content manager lays these out in the traditional pages. Link analysis tools are used to establish the site's structure and to connect resources. The content manager works with a database programmer, who organizes data for retrieval according to the site's established query interfaces.
All of these people are working with XML documents, but none of them ever need know the details of the XML code. The robust and flexible nature of XML applications enables user-agents that can act like day-to-day software: graphics programs, text editors, network analyzers, page-layout tools, and others. When these user-agents output XML documents, they can be embedded easily without a single hand-written line of code. Two major applications, SVG and MathML, are already enabling this kind of workflow.
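That kind of embedding works through namespaces: a tool's SVG output can drop straight into an XHTML page unaltered. The trivial graphic below is a hand-rolled stand-in for what a drawing program would emit:

```xml
<html xmlns="http://www.w3.org/1999/xhtml">
  <body>
    <p>Quarterly results:</p>
    <!-- the svg element and its contents live in their own namespace,
         so an XML-aware browser knows exactly who should render them -->
    <svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
      <circle cx="50" cy="50" r="40" fill="blue"/>
    </svg>
  </body>
</html>
```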
I haven't heard anything that makes me think XML is going to replace HTML for making web pages -- they're two related languages, with historically similar roots, but completely disparate purposes.
I suppose it depends on what you think HTML is for. XML is for data -- any kind of data. I consider that to be little more than a generic expansion of HTML's role. For a more eloquent description, the Sun article mentioned before may be of interest:
Back in the early days -- that is, two years ago -- XML was most often compared to HTML, and in the eyes of many, XML came up short. HTML was a simple language anyone could learn; XML had complexities that could confuse developers. HTML had built-in formatting; XML needed a style sheet to be displayed as anything other than raw code. HTML had built-in hyperlinking with the <A HREF> tag; XML didn't even give you a linking starter kit for embedding hyperlinks into XML in a standardized way.
Today, we know that XML is scalable and flexible in ways that would stretch HTML to the breaking point, allowing XML to become the universal solvent for all data, not just the narrative information that HTML was originally designed to hold. However, if XML is to capture one of the most important features of the Web, it still needs to offer a standardized way to do linking. The goal of the XML Linking Working Group of the World Wide Web Consortium is to provide exactly this, and we're closing in on our goal.
The rest is at http://www.sun.com/software/xml/developers/xlink.html