Kuro5hin.org: technology and culture, from the trenches

Systems of the future

By Alhazred in News
Mon Jun 12, 2000 at 05:27:18 PM EST
Tags: Technology (all tags)

Having spent a good bit of time contemplating, reading, and experimenting with some of the newer technologies and ideas floating around, I had some not-quite-random thoughts on the future of IT systems in general.

I see a bunch of elements which are starting to converge towards a new way of doing things. The systems of the future will be much more expressive, generalized, and powerful than those of today.

A lot of people have talked about faster hardware, programming languages, etc., but what I have not yet seen is anything like a real discussion of how these systems will REALLY LOOK to those of us who are expected to implement and use them.

1. Directions in System Development.

I have identified four key technological innovations which represent the kernels of the building blocks that will eventually make up the massive IT systems of the future.

  • Reiser File System, which is just the tip of the iceberg in terms of implementing name spaces and highly efficient next-generation file systems.
  • Plan 9, which really does unified naming in a big way. Its open sourcing will start to push this more into the mainstream as people get a look at the implementations and wrap their heads around the concept. True unified naming is the Holy Grail of system design, whether system architects yet realize it or not ;o)
  • Transmeta's CMS. The Transmeta Code Morphing Software and associated innovations in hardware design herald a truly new era in the design of systems and their relationship with software. I predict that in a few years the concept of a "binary" is going to start getting pretty fuzzy...
  • XSL and XSLT. Systems like Cocoon represent the data-processing environments of the future. XSLT has one very important property: it is ITSELF XML, which means that XML can describe how to process itself! This brings XML to the status of a "logically complete" system, and at a rather high level at that.

2. What are these things?

Reiserfs is a Linux file system which allows the efficient storage of massive numbers of small files in very large and very deep directory structures. This allows the existing file-naming scheme to be pushed down to the level of individual records, removing lots of logic from applications. (I.e., what if you just had to open the file /home/jsmith/contactlist/joeblogs/phonenumber to get at Joe Blogs's phone number, instead of writing a program that opens the contact-list file and searches it for that information? Can you see how much less logic this requires?) You can do that now, but if you had 10k records in contactlist you would have a horribly inefficient mess under existing file systems. Reiserfs fixes that problem.
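The record-as-path idea can be sketched in a few lines. This is only an illustration of the layout described above (the `put_record`/`get_field` helpers and field names are invented for the example); the point is that the lookup becomes a single open, with no parsing or searching:

```python
import tempfile
from pathlib import Path

# Hypothetical layout: each contact field lives in its own tiny file,
# so "look up Joe's phone number" is just a path open, not a search.
root = Path(tempfile.mkdtemp()) / "home" / "jsmith" / "contactlist"

def put_record(name: str, **fields: str) -> None:
    """Store each field of a contact as a separate small file."""
    record_dir = root / name
    record_dir.mkdir(parents=True, exist_ok=True)
    for field, value in fields.items():
        (record_dir / field).write_text(value)

def get_field(name: str, field: str) -> str:
    """Fetch one field with a single open() -- no record parsing needed."""
    return (root / name / field).read_text()

put_record("joeblogs", phonenumber="555-0100", email="joe@example.org")
print(get_field("joeblogs", "phonenumber"))  # -> 555-0100
```

On an ordinary file system this falls over once a directory holds thousands of entries; Reiserfs's claim is precisely that such directories stay fast.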

Plan 9 is an OS descended from Unix, developed at Bell Labs by the group that created Unix (including Rob Pike, Ken Thompson, and Dennis Ritchie), which uses a totally unified name space. Unix unified I/O and the file system to some extent with the /dev entries, which let you treat a raw device as a file. It was also the first system to let you treat partitions and remote mounts as just subdirectories of a single name space. Plan 9 extends that to all OS entities with few limitations, including network connections, resources on remote machines, etc. As with the example above under Reiserfs: why should I have to write a completely different piece of code to get stuff off an FTP server vs. off my local hard drive? Everything needs to be a file (or some more advanced successor to the concept of a file). This will wipe away much of the idiocy of current system design, where 97 incarnations of basically the same code get written and rewritten every day.
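The unified-namespace idea can be sketched as a mount table that dispatches one generic read() on a path prefix. This is a toy (the mount points and handlers are invented, and real Plan 9 does this per-process in the kernel via the 9P protocol), but it shows why callers stop caring what backs a name:

```python
import os
from typing import Callable, Dict

# Toy mount table: prefix -> handler that serves the rest of the path.
mounts: Dict[str, Callable[[str], bytes]] = {}

def mount(prefix: str, handler: Callable[[str], bytes]) -> None:
    mounts[prefix] = handler

def read(path: str) -> bytes:
    """One read() for everything: dispatch on the longest mount prefix."""
    for prefix in sorted(mounts, key=len, reverse=True):
        if path.startswith(prefix):
            return mounts[prefix](path[len(prefix):])
    raise FileNotFoundError(path)

# Two very different backends look identical to the caller:
mount("/n/memory/", lambda rest: {"motd": b"hello"}[rest])
mount("/n/env/", lambda rest: os.environ.get(rest, "").encode())

print(read("/n/memory/motd"))  # b'hello'
```

An FTP handler mounted at, say, /n/ftp/ would slot in the same way, and every program that speaks read() would get FTP access for free.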

I think we all know more or less what Transmeta is doing. The Code Morphing Software translates one machine language to another on the fly, running on custom silicon, which means that it is VERY fast. I see no reason why it is in any way limited to x86 code. It could translate PPC code to the Crusoe instruction set just as easily. What about Java bytecode? It should be able to do a pretty good job with that. And I see no reason why multiple CMS implementations shouldn't coexist happily in theory, so that your kernel might be written in one machine code and applications in other ones! This provides the necessary bridge to the future, where we may well see one single standard machine language, or at least generations which can smoothly migrate upwards, ending the chaos of hardware architectures.
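The shape of dynamic translation can be shown with a toy: translate a tiny invented stack-machine bytecode into a native callable once, cache the translation, and run the cached version thereafter. This is only the skeleton of the idea (CMS translates real x86 into VLIW words, with far more sophistication), not Transmeta's actual algorithm:

```python
from functools import lru_cache
from typing import Callable, Tuple

@lru_cache(maxsize=None)  # cache translations, like CMS's translation cache
def translate(program: Tuple[str, ...]) -> Callable[[int], int]:
    """Compile a toy ('push N' / 'add' / 'mul') bytecode into one callable."""
    ops = []
    for instr in program:
        if instr.startswith("push "):
            n = int(instr.split()[1])
            ops.append(lambda stack, n=n: stack.append(n))
        elif instr == "add":
            ops.append(lambda stack: stack.append(stack.pop() + stack.pop()))
        elif instr == "mul":
            ops.append(lambda stack: stack.append(stack.pop() * stack.pop()))

    def run(x: int) -> int:
        stack = [x]
        for op in ops:
            op(stack)
        return stack.pop()
    return run

f = translate(("push 2", "mul", "push 3", "add"))
print(f(10))  # (10 * 2) + 3 = 23
```

Nothing in `translate` cares that the source bytecode is this invented one; swap the front end and the same machinery would chew on another instruction set, which is the "fuzzy binary" point.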

XSLT is a W3C specification for creating an XSL stylesheet which can be "applied" to an XML document to produce some arbitrary transformation. This sort of technology lets us treat programs themselves as data, since XSL is itself XML. Technically any Turing-complete language can transform its own code in arbitrary ways, but XSLT really brings these capabilities down to a usable level. When combined with a better concept of what a "file" is, we can do some very powerful stuff!
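The "program is data" property is concrete enough to demonstrate: an XSLT stylesheet is a well-formed XML document, so the same parser that reads your data can read (and in principle rewrite) the program that transforms it. Python's standard library has no XSLT processor (third-party lxml provides one), but the stdlib parser alone makes the point; the stylesheet below is a minimal invented example:

```python
import xml.etree.ElementTree as ET

stylesheet = """\
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="contact">
    <entry><xsl:value-of select="phone"/></entry>
  </xsl:template>
</xsl:stylesheet>"""

# Parse the *program* as plain XML data and inspect its structure.
tree = ET.fromstring(stylesheet)
XSL = "{http://www.w3.org/1999/XSL/Transform}"
matches = [t.get("match") for t in tree.findall(f"{XSL}template")]
print(matches)  # ['contact']
```

A tool that reads stylesheets this way can analyze or generate transformation logic mechanically, which is exactly what could not be done cheaply when the "program" was, say, a C parser.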

3. The system of the future

I see the data-processing systems of the future as essentially a "seamless compute fabric". To a user the distinction between systems will vanish and only the interface will be apparent. Fifty years from now it will be senseless to ask what sort of computer, or what process, is performing any certain task. There will be what appears to be a single, all-pervasive computing infrastructure of massive power, unimagined in even the wildest sci-fi of today. Ordinary people will have petaFLOPS of compute power at their disposal.

From a system perspective, what will the architecture be like? Obviously with such huge resources we will be able to attack VERY hard problems by sheer brute force. The range of useful functions for systems will be so huge that we will no longer be able to afford to program them at the statement level we use today. We see this in "Star Trek" whenever someone says "Computer, show me...". At the least we need formal grammars that are very powerful and allow us to specify operations at a much higher level than we do now. Something like "Merge the two user databases at X and Y, filtering out all records not present in Z, and store the result to Q" would be a start. Incompatible schemas would be dealt with automatically; fetching the data would be invisible to code. None of the gory details we deal with today are going to be visible to programmers 50 years from now.
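The merge-and-filter sentence above already has the shape of a one-line program once the plumbing disappears. A sketch, with plain dicts standing in for the hypothetical databases X, Y, Z, and Q (all names are from the example in the text, not any real system):

```python
def merge_filter(x: dict, y: dict, z: dict) -> dict:
    """Merge X and Y, keeping only records whose keys also appear in Z."""
    merged = {**x, **y}  # on key conflicts, Y's entry wins
    return {k: v for k, v in merged.items() if k in z}

X = {"alice": "a@x.example", "bob": "b@x.example"}
Y = {"bob": "b@y.example", "carol": "c@y.example"}
Z = {"bob": None, "carol": None}

Q = merge_filter(X, Y, Z)
print(Q)  # {'bob': 'b@y.example', 'carol': 'c@y.example'}
```

The hard part the article is pointing at is not this logic, which is trivial, but making schema reconciliation and data fetching as invisible as the dict lookups are here.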

Specifically, I see XML/XSL/XSLT and the SAX and DOM APIs, and their successor technologies, as giving us a "lingua franca" for data representation and manipulation which will consume all other representations in the long run. Something like Reiserfs could be a very efficient XML database, storing each property and value in separately named places and providing direct SAX and DOM interfaces to the system. A universal naming system like Plan 9's would extend that functionality to the whole network and make it look like a fairly seamless single database. With CMS-like applications providing complete hardware transparency, such a system would provide the logical platform for this seamless compute fabric.
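The "file system as XML database" correspondence is mechanical: a directory tree maps directly onto an element tree, with leaf files as text nodes. A minimal sketch of that mapping (the `dir_to_xml` helper and the sample paths are invented for illustration):

```python
import tempfile
import xml.etree.ElementTree as ET
from pathlib import Path

def dir_to_xml(path: Path) -> ET.Element:
    """Map a directory tree onto an XML element tree: dirs become
    elements, leaf files become elements with text content."""
    elem = ET.Element(path.name)
    if path.is_file():
        elem.text = path.read_text()
    else:
        for child in sorted(path.iterdir()):
            elem.append(dir_to_xml(child))
    return elem

root = Path(tempfile.mkdtemp()) / "joeblogs"
root.mkdir(parents=True)
(root / "phonenumber").write_text("555-0100")

xml = ET.tostring(dir_to_xml(root), encoding="unicode")
print(xml)  # <joeblogs><phonenumber>555-0100</phonenumber></joeblogs>
```

A file system that stored millions of such tiny leaves efficiently could expose exactly this view through SAX or DOM without any intermediate serialization step.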

Naturally we would use existing technologies like IP, MIME, HTTP, CORBA, etc. (or their successors) as underlying elements. There would also be management pieces, but those present no real fundamental hurdles. We will simply harvest the system's vast resources to let it manage itself to a large extent, reconfiguring and rerouting as necessary to fix problems.




Systems of the future | 27 comments (27 topical, 0 editorial, 0 hidden)
Note that the concept of code morph... (3.50 / 2) (#1)
by Neuromancer on Mon Jun 12, 2000 at 04:45:43 PM EST

Neuromancer voted 1 on this story.

Note that the concept of code morphing is older than the company Transmeta; they are just the company of the moment (tm) and happen to be selling chips that do it. Also, several of these concepts are older than the examples given, but I can live with it ;-)

Re: Note that the concept of code morph... (none / 0) (#25)
by current on Wed Jun 14, 2000 at 05:05:53 AM EST

The concept actually came from Russian scientists. Russian mathematics evolved greatly during the Cold War, and it still holds many revolutionary concepts (algorithms); it is just a question of time before Western industry utilizes them, as Transmeta did in the case of code morphing.

The Eternal Meta-Discussion

[ Parent ]
Re: Note that the concept of code morph... (none / 0) (#27)
by Neuromancer on Thu Jun 15, 2000 at 08:27:09 AM EST

Right, there are also processors which were designed without really having any lower-level functions, just sets of programmable logic gates, which could be configured to have an application loaded directly onto the chip. I forget where the research was done for this, but the chips are pretty cool. Personally, I can't wait to have one or two of these sitting in my computer with a good real-time OS on them.

[ Parent ]
The XSL "code as data" bit sounds i... (none / 0) (#3)
by Eimi on Mon Jun 12, 2000 at 05:06:17 PM EST

Eimi voted 1 on this story.

The XSL "code as data" bit sounds interesting, but didn't LISP already do that LONG ago? Or is there something more significant behind this?

Re: The XSL (none / 0) (#13)
by Anonymous Hero on Tue Jun 13, 2000 at 09:58:32 AM EST

Actually, that's the first thing I thought the first time I saw an XML file -- "Hey, it's LISP with greater and less thans!" And without the power, of course (closures, anyone?)

[ Parent ]
Re: The XSL (none / 0) (#16)
by Anonymous Hero on Tue Jun 13, 2000 at 12:34:24 PM EST

yeah, more verbose than lisp, and without the *semantics*, for god's sake! xml isn't a programming language, unfortunately.

[ Parent ]
Re: The XSL (none / 0) (#26)
by Alhazred on Wed Jun 14, 2000 at 09:17:23 AM EST

Well, XSLT IS a programming language, in a sense.

I agree that LISP is more elegant in many ways, but it's also too "low level". We have to be thinking MUCH higher level, and we have to make things much more ORTHOGONAL. That's where XSLT shines.

As for closures, they are a nice concept in LISP, but not necessarily too critical in the overall scheme of things. They are, after all, really not much different from C's "static" variables.
That is not dead which may eternal lie And with strange aeons death itself may die.
[ Parent ]
Re: The XSL (3.00 / 1) (#14)
by Alhazred on Tue Jun 13, 2000 at 10:00:52 AM EST

Yes, LISP did it years ago! Maybe LISP is the technology that will be used, but LISP isn't as good at representing arbitrary data as XML is. I think the solution will come from the DATA space, not the code space. People tried embedding data in code for years, and it didn't get anywhere. Data as code, instead of code as data, is the XML/XSLT paradigm.
That is not dead which may eternal lie And with strange aeons death itself may die.
[ Parent ]
Re: The XSL (none / 0) (#23)
by Anonymous Hero on Wed Jun 14, 2000 at 01:18:22 AM EST

Um, Lisp is just as capable of representing arbitrary data as XML is; I don't know where you got that idea. Secondly: data = code = data is what Lisp is all about. You seem to have the idea that Lisp is only code. This is false. Lisp is both. Unfortunately, XML alone is *only* data, and needs code (like a C backend, or JavaScript) to make it actually do anything.

Maybe you are right about "the solution" coming from data structures rather than code, but lisp can do *both*. It can have data structures that *are* code, and vice-versa.

[ Parent ]
Well, +1 because I have been thinki... (none / 0) (#2)
by pdubroy on Mon Jun 12, 2000 at 05:23:10 PM EST

pdubroy voted 1 on this story.

Well, +1 because I have been thinking about this exact same stuff for the last little while. I am interested in seeing what kind of discussion this generates. Last week was the first time I had heard of Plan 9, and the idea of a completely unified namespace really grabbed me. I'd like to hear more about what other people think of this. However I must say that I question exactly how much thought/research the author has put into these "predictions". It seems like (s)he has simply been wowed by a couple of things in the news, and has jumped to the conclusion that this is where computing is heading.

Re: Well, +1 because I have been thinki... (4.00 / 1) (#12)
by Alhazred on Tue Jun 13, 2000 at 09:58:22 AM EST

hehehe, well I've been involved in building information systems of all sorts for almost 20 years now.

I actually wasn't trying to predict the EXACT technologies that are going to succeed. In fact I do a lot of "research", since I work at a high-tech startup company whose livelihood depends on being on the cutting edge of information technology in many areas. I'd say I spend 4 or 5 hours a day learning about stuff, at least, and more at home experimenting with code after work.

What I mean is, some of these approaches are the right ones, and maybe some aren't, but they are addressing the key concerns.
That is not dead which may eternal lie And with strange aeons death itself may die.
[ Parent ]
the future... (none / 0) (#4)
by eMBee on Mon Jun 12, 2000 at 06:16:14 PM EST

... will be Plan 9, on the Reiser filesystem, running on the Transmeta chip, using XML config files...

greetings, eMBee.
Gnu is Not Unix / Linux Is Not UniX

FTP Mounting (none / 0) (#5)
by Anonymous Hero on Mon Jun 12, 2000 at 06:32:20 PM EST

AFAIK, mounting a directory through FTP would require a kernel-space driver (or root privileges at least). It's usually a bad idea to have things like FTP clients in the kernel. This would work much better with a microkernel, such as GNU HURD. IIRC, they have considered writing (or have already written, I'm not sure) a translator to allow FTP/HTTP sites to be mounted. The HURD allows you to attach translators to any file/directory, so you could probably also do stuff like mounting a tgz/zip file.

Re: FTP Mounting (none / 0) (#8)
by Cironian on Tue Jun 13, 2000 at 04:01:43 AM EST

Sharity-Light/Rumba already does internal conversion of an SMB mount to NFS calls to the kernel. I suppose you could do the same thing for FTP (though it would be very slow, of course). While I agree that FTP doesn't belong in the kernel, such a userspace mounting system for it wouldn't be bad IMHO.

[ Parent ]
Re: FTP Mounting (none / 0) (#15)
by Alhazred on Tue Jun 13, 2000 at 10:04:32 AM EST

Yeah, well, if you ask me the Linux kernel's design is WAY behind the times anyway. I wouldn't draw many conclusions from that. A microkernel might be the answer, and maybe not. I'm sure a lot of solutions could be tried. The key point is that I have to be able to mount an FTP site so I don't have to BUILD FTP SUPPORT INTO EVERY PROGRAM. I mean, that's the point of file systems in the first place! We use the net like we use a raw device right now. Network technology is at about the level disk technology was at in 1960, when your program had to read and write data from raw sectors. We badly need to go beyond that.
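The "don't build FTP support into every program" point can be illustrated one layer up from the kernel: Python's urllib gives a single open-by-name call that dispatches on the URL scheme (file://, ftp://, http://), so the caller's code is identical regardless of where the bytes live. A real mount would push this down into the filesystem; this sketch (using a temporary local file so it actually runs) is the same idea at library level:

```python
import tempfile
from pathlib import Path
from urllib.request import urlopen

# Create a local file to stand in for a remote resource.
tmp = Path(tempfile.mkdtemp()) / "greeting.txt"
tmp.write_text("hello from a 'mounted' namespace\n")

# The same call shape would work for ftp://host/path --
# no FTP-specific code appears in the caller.
with urlopen(tmp.as_uri()) as f:
    data = f.read()

print(data.decode())
```

That one-interface-many-backends property is exactly what a kernel- or translator-level mount would give to *every* program, not just those written against one library.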
That is not dead which may eternal lie And with strange aeons death itself may die.
[ Parent ]
Re: FTP Mounting (none / 0) (#20)
by mbrubeck on Tue Jun 13, 2000 at 04:33:37 PM EST

Actually, an article was published a couple of years ago in ACM Transactions on Computer Systems describing UFO, a user-space and user-run implementation of FTP and HTTP filesystems.

The approach they used (intercepting system calls made by user programs) is an interesting and easily generalized method. It will run on standard, unmodified operating systems and provide functionality that would otherwise require kernel-level or at least superuser-authorized system modifications. So don't be too quick to assume that new functionality always has to be low-level or privileged.

Still, I agree with the earlier point that FTP and HTTP are best suited to the connect-request-fetch applications for which they were designed. True network filesystems such as Coda, AFS, and NFS are far better suited to the traditional filesystem role.

[ Parent ]

On the less technological front... (3.70 / 3) (#6)
by paranoidfish on Mon Jun 12, 2000 at 06:57:09 PM EST

...someone will finally break away from the WIMP metaphor for UIs, and we may start to see some real innovation on that front, leading to more intuitive and usable systems for both learners and power users. This is where some real innovation may be done soon, similar to the movement from command line to GUI. And we'll all be thinking "How did we live without it?"

It is only with different data visualisations that some of the intended functionality of the more web-oriented technologies mentioned above can really begin to show.

Of course, there will always be uses for the command line. That bit goes without saying.

Maybe someone will invent a truly paperless office at the same time? :-)

FTP filesystems? Dear God, no.... (none / 0) (#7)
by Anonymous Hero on Mon Jun 12, 2000 at 08:06:29 PM EST

The system of the future will have a much better way with distributed files than FTP, HTTP, NFS, SMB, or most other things we may be using at this point. The idea of an FTP subsystem in the kernel scares me to death -- it has to be one of the most God-awful protocols ever designed. A distributed filesystem like Coda comes to mind.


XSL XSLT considered very UGLY! (none / 0) (#9)
by Yzorderex on Tue Jun 13, 2000 at 05:43:22 AM EST

and darned confusing, with brackets all over the place. The syntax is a step backwards too.
It's all gonna be program-generated, because nobody sane is gonna mess with that mess for anything more complicated than a demo.
If that sounds harsh, then try it yourself.

Re: XSL XSLT considered very UGLY! (none / 0) (#10)
by joshv on Tue Jun 13, 2000 at 08:17:00 AM EST

Agreed. This stuff will have to be machine generated and maintained.

I am disheartened by the move towards using XML configuration files for newer applications without an appropriately simple (preferably GUI) interface for editing these files. I don't find XML particularly 'human readable'.

Granted, when I code HTML I do it by hand. But this is only because most of the GUI tools don't even come close to the full range of formatting possibilities inherent in the HTML spec. Eventually though, they will get there.


[ Parent ]
Re: XSL XSLT considered very UGLY! (none / 0) (#11)
by Alhazred on Tue Jun 13, 2000 at 09:53:13 AM EST

Yeah, I agree with you guys. XML is not a very human friendly format, even though it is technically "human readable". I'm not suggesting that people will write XSLT programs by hand, certainly not as a routine thing, but it will in the long run become the lingua franca of data processing. Any algorithm can be described using it, and it can itself be analyzed and transformed.

The powerful thing is that with XSL you could actually write a RAD-type tool that really WORKED. XSL could be generated by machines from human input specifications. Even more important, you could actually go back and forth between code and spec, so you could fairly easily take existing code and use an analyzer to determine its function.

In any case XSLT certainly represents a mechanism for quickly specifying document production logic, which is badly needed!
That is not dead which may eternal lie And with strange aeons death itself may die.
[ Parent ]
Hmmm - I don't think so (5.00 / 1) (#17)
by Anonymous Hero on Tue Jun 13, 2000 at 12:48:06 PM EST

I think the future is never what you imagine.

Some random thoughts follow...

The more things change, the more they stay the same. That is to say, it's fairly unusual for there to be a fundamental shift in underlying systems. DOS has been with us for some time, and is only now disappearing (unless you count Win 9x as DOS). Unix has been around for a while too. I fully expect to be running Windows 2050 and Linux 32.0. I expect that some of the metaphors will have morphed; for instance, perhaps there will be 3D manipulative interface devices.

I think that one of the major changes likely to happen is that computers will be used less for data processing (so none of that 'Computer, merge the two user databases...'); instead I guess that they will be fabric (like roads). I see Microsoft's COM+ events as a first step towards the technical infrastructure necessary to support this sort of thing. (For those who don't follow MS stuff, this is a publish/subscribe technology, integrated into the OS, which allows applications to subscribe to arbitrary events defined by other applications, like 'I'm interested in hearing about it when a person gets married.')
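The publish/subscribe pattern described above can be sketched in a few lines. This is a toy illustration of the pattern itself, not COM+ Events' actual API; the event name and payload are invented:

```python
from collections import defaultdict
from typing import Any, Callable, DefaultDict, List

# Registry: event name -> list of interested handlers.
subscribers: DefaultDict[str, List[Callable[[Any], None]]] = defaultdict(list)

def subscribe(event: str, handler: Callable[[Any], None]) -> None:
    """An application declares interest in a named event."""
    subscribers[event].append(handler)

def publish(event: str, payload: Any) -> None:
    """A publisher fires an event without knowing who is listening."""
    for handler in subscribers[event]:
        handler(payload)

received = []
subscribe("person.married", received.append)
publish("person.married", {"who": "Joe Blogs"})
print(received)  # [{'who': 'Joe Blogs'}]
```

The decoupling is the point: publisher and subscriber share only the event name, which is what lets an OS-level event service act as "fabric" between applications that were never designed to know about each other.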

There was an Association for Computing Machinery conference about the future of computing early last year. I expect that some details are available to the public at http://www.acm.org.


Managment is the future (none / 0) (#18)
by Anonymous Hero on Tue Jun 13, 2000 at 01:18:40 PM EST

I feel that the direction is secure management. The way we are going, there will be a blur between "admins" and "experts" as we know them today. Systems will become so easy to manage and implement that there will be the computer-literate masses (on the level of today's mid-level admins) and the extremely proficient. Systems will have a simple, unified way of being set up to communicate and perform, will come with certain levels of default security, and will be mostly modular. Example: you would have a small portable machine with standard interface slots for adding modules; modules would be very small embedded-system chips that carry the hardware and software on them. So your main device has a high-level OS, a screen/interface, and open slots. Add the hardware encryption chip to a slot to encrypt ALL data coming out of the device; add the voice module for free VoIP capability; etc.

convergence is real (none / 0) (#19)
by Anonymous Hero on Tue Jun 13, 2000 at 01:53:55 PM EST

Once affordable computers have the capacity to deal with real-time video, there will be no technical barrier to a general-purpose appliance that includes all audio, video, telecom, and computer functions. UI and software issues will take some time to resolve, but sheer economics will demand it.

The same goes for portable machines -- there will be a "handheld" class of devices that incorporates cellphone, PDA, personal media player, smart debit card and so forth.

Distributed personal databases will be a great thing when they arrive. However, I don't think you'll be able to totally ignore bandwidth, security, and jurisdiction issues. You have to be able to trust your access point, or you'll get stung sooner or later.

Automatic micropayment protocols will eventually be necessary for load-balancing on the net. We won't be able to keep increasing bandwidth to keep ahead of demand forever; eventually we'll need to distinguish between fat local links and 20-hop routes through undersea cable to Australia. If done right, it will be cheaper than current access protocols, but it will require substantial reworking of infrastructure.

It's not just reiserfs (none / 0) (#21)
by adamsc on Tue Jun 13, 2000 at 11:36:44 PM EST

BeOS has been doing this with BFS for quite a while. BFS is not a true database but has many database-like properties. You can instantly query on not just filenames but also arbitrary attributes. Here's an example, over telnet, from a system with 90,000 files on the drive (query is the equivalent of find, but much, much more efficient):

$ time query Faster

real    0m0.040s
user    0m0.002s
sys     0m0.020s

$ time query Audio:Artist=Rush
[MP3 list snipped]
real    0m0.092s
user    0m0.008s
sys     0m0.046s

$ time query META:email=*@digitaria.com
[Coworker list snipped]
real    0m0.046s
user    0m0.004s
sys     0m0.023s

Many BeOS applications take advantage of the database-like filesystem. My [completely legal - no flames needed] MP3 ripping system is actually just a bunch of scripts which set the appropriate attributes; rather than having some sort of jukebox program, any attribute-aware program can enjoy full access to the meta data. My email client uses the same People files as the standard contact manager program, which means that any app capable of doing a query (which is trivial using the Be API) can use all of that data, giving me a real combined addressbook for the entire system.

One other really nice feature BeOS has is efficient monitoring. An app can easily request to be notified if a given file or directory is changed, which makes it trivial to add lots of "live" status monitors. As an example, the Tracker's Info view on a file or directory will update in real-time as the size of that file or directory changes; the included web browser has a download manager which sets a progress bar in the downloaded file's icon; you can watch this bar change from any view in the Tracker in real-time along with the display in the download manager window.

Modern filesystems are definitely one of those features that makes everything you've used before seem primitive.

Spring (none / 0) (#22)
by Anonymous Hero on Tue Jun 13, 2000 at 11:47:44 PM EST

I read about Plan 9 a few years ago - the other OS that was in development was Sun's Spring - which has (had?) heaps of similar concepts.

Taligent's OS could dynamically recompile and link at the function level - that is a pretty powerful concept - kind of like diving into the Smalltalk browser...

Mirror Worlds is my favourite book for introducing the concept of a pervasively connected world to someone... I have no idea what happened to the Linda language and everything, but the data-processing ideas are interesting.



Re: Spring (none / 0) (#24)
by Anonymous Hero on Wed Jun 14, 2000 at 01:22:41 AM EST

The Linda language (and the tuple-space stuff) has essentially been reincarnated as JavaSpaces (which is at the core of Jini. Yay! More Java hype!) At least, I don't know if Linda is specifically dead, but the ideas behind it are being used by Sun.

[ Parent ]


All trademarks and copyrights on this page are owned by their respective companies. The rest © 2000 - Present Kuro5hin.org Inc.
See our legalese page for copyright policies. Please also read our Privacy Policy.