Kuro5hin.org: technology and culture, from the trenches

The Ghost Of MULTICS?

By jd in Technology
Sat Apr 21, 2001 at 03:48:24 PM EST
Tags: Software (all tags)
Software

Is MULTICS dead? Or is it merely sleeping? It was a very ambitious design for its time. OS design was hardly a mature field at its inception, and the people funding it were not exactly, ummm, stable. On the other hand, its cut-down cousin, Unix, has been having the time of its life.... Is it time the sleeping giant awoke?


Project GNU was an attempt to produce a complete re-implementation of the Unix development environment from the specification. Linux, after Linus realised it had grown beyond a terminal emulator, was the same for the kernel.

Are these one-off events, or can they be repeated? This is the question I'm going to try to answer, here and here.

First, the (ir)rationale. The original specification for MULTICS was big. I mean big. Three thousand pages big. This got cut down as people discovered new programming techniques and some functions were determined to be unnecessary. Even so, what was finally used was still one of the most complex programming tasks ever attempted.

(An estimate given to me by a MULTICS coder was that the complete MULTICS system was in excess of three orders of magnitude more complex than Unix. This puts a re-implementation at one thousand man-years, using Linux as a rough guide.)

On the flip side, a monolithic kernel that is sufficiently elegant AND sufficiently comprehensive should be not only faster but also more useful than its conventional counterparts.

The scheme I have proposed is to start with the libraries and development suites, à la the GNU Project. Re-implement the environment first, and the kernel last. This is how Linux was born. (It's also largely how [386|free|net|open]bsd have developed.)

The idea is to be able to reproduce libraries and development tools which provide some well-defined set of MULTICS operations on architectures actually in use. Then, these can be used, developed, tested and honed, without needing any kind of MULTICS platform. (It's just as well -- the last MULTICS machine was shut down in the year 2000.)
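
To make that a little more concrete, here is a minimal sketch of what one such library call might look like when hosted on an ordinary POSIX system, with mmap() as the backing mechanism. The names ms_make_seg and MS_MAX_SEG are made up for the example; nothing here is drawn from the actual MULTICS specification.

    /* Hypothetical sketch: a MULTICS-style "make segment" call implemented
     * on top of ordinary POSIX files and mmap().  The names ms_make_seg and
     * MS_MAX_SEG are invented for illustration; they are not part of any
     * real MULTICS API. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define MS_MAX_SEG (256 * 1024)   /* cap loosely echoing the 256K limit */

    /* Map a named segment into the caller's address space, creating it if
     * necessary.  Returns the base address, or NULL on failure. */
    static void *ms_make_seg(const char *name, size_t len)
    {
        if (len == 0 || len > MS_MAX_SEG)
            return NULL;

        int fd = open(name, O_RDWR | O_CREAT, 0600);
        if (fd < 0)
            return NULL;

        if (ftruncate(fd, (off_t)len) < 0) {
            close(fd);
            return NULL;
        }

        void *base = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        close(fd);                    /* the mapping survives the close */
        return base == MAP_FAILED ? NULL : base;
    }

    int main(void)
    {
        char *seg = ms_make_seg("scratch_seg", 4096);
        if (!seg)
            return 1;
        seg[0] = 'M';                 /* the write lands in the backing file */
        printf("segment mapped, first byte = %c\n", seg[0]);
        return 0;
    }

A library like this could be used, tested and honed on Linux or *BSD long before any kernel exists, which is the whole point of doing the environment first.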

Once there's something that can handle the system calls and the more standard APIs, it should be trivial to build a skeletal OS (not unlike what Linux 0.1 was) which can support those tools more directly.

From there, merging in the more esoteric MULTICS system operations & APIs, plus the more advanced Unix operations & APIs, should be relatively trivial. (Relatively! I'm not suggesting that a scratch implementation of SMP, on an architecture that almost predates the concept of a single CPU, is going to be simple. AFAIK, efficient, scalable SMP has proved to be a phenomenally complex task even on modern OS designs.)

IMHO, MULTICS contains the seeds of ideas that could not germinate at that time. The technology and the manpower needed were beyond anything available. Unix, though, is an idea that has grown at a phenomenal rate. However, it's proving to be an awkward design for distributed and parallel architectures. MOSIX is one of the most creative solutions I've seen for distributed *nix, but it's not perfect.

There have been many experimental kernel designs, and many of those have simply vanished, because the ideas they explored proved of little interest outside academia, even when there was something of genuine widespread use in them.

By re-implementing MULTICS as completely as possible, and then extending it with any and all new concepts that have appeared in the Unix world, the possibility of borrowing some of the more useful but unused concepts in the OS research world is definitely there.

This concept may prove simply too ambitious, even for all that it's been done before. I'm no Stallman, nor can I muster the kind of enthusiasm and drive that Linus Torvalds did. But for every function and call that does get implemented, MULTICS can be said to be at least alive, and maybe awakening.

Poll
MULTICS is...
o ...what Sid@UserFriendly had on those punched cards 16%
o ...a breakfast cereal 4%
o ...something to repel insects 2%
o ...a classic OS that needs to be left in the museums 32%
o ...a classic OS that needs a good polish, and a PL/I compiler 8%
o ...something that needs reviving, as modern OSes are lacking something 6%
o ..."SCITLUM" written backwards 17%
o ...alien technology now kept securely in Area 51 12%

Votes: 111

Related Links
o MULTICS
o here
o Also by jd


The Ghost Of MULTICS? | 38 comments (33 topical, 5 editorial, 0 hidden)
guh? (2.60 / 5) (#1)
by delmoi on Thu Apr 19, 2001 at 09:32:27 PM EST

Why would anyone want to create something new with such old ideas?
--
"'argumentation' is not a word, idiot." -- thelizman
Re: guh (3.00 / 1) (#4)
by General_Corto on Thu Apr 19, 2001 at 10:40:24 PM EST

Why...
Because, invariably, people repeat known mistakes if they don't examine the past closely.

Multics offers you an operating system environment which has had many man-decades of research performed on it. I'm certain that a lot of clever things have been purloined from it already, and I'm also certain that many more have yet to be uncovered.

After all, Bayes was a church minister over 200 years ago, and only recently has his work really been utilized.


I'm spying on... you!
[ Parent ]
1000 man years (2.50 / 2) (#7)
by cameldrv on Thu Apr 19, 2001 at 11:49:59 PM EST

Yes, but you can gain most of that knowledge without spending 1000 man-years on the damn thing. Multics is nearly forty years old. General-purpose computers had only been around about twenty years when Multics was invented. It's a pretty big insult to computer scientists to say that the best OS you can write today is forty-year-old technology with no modifications.

[ Parent ]
someone, go tell the Barbarians (4.33 / 3) (#8)
by cp on Fri Apr 20, 2001 at 01:05:25 AM EST

Their pipe-turning and irrigation skills must surely have been better than those of the Romans whom they defeated.

Sometimes, it's the early work that explores but abandons some interesting approaches to problems that are still with us today. Analog computers, for example, are finally coming back into vogue after all these years.

[ Parent ]

Learn from it, don't repeat it. (3.00 / 1) (#9)
by cameldrv on Fri Apr 20, 2001 at 01:29:57 AM EST

I'm not saying that Multics wasn't a good OS for its time. However, there have been tons of advances in OS theory and practice in the last forty years. Even if Multics is a good place to start, why not just read the manuals and bring it up to modern standards? We're not learning anything by re-implementing old technology. The only way to advance is to do something different from that which was done before.

[ Parent ]
Go to the Sourceforge link (3.00 / 2) (#13)
by makaera on Fri Apr 20, 2001 at 02:54:19 AM EST

Go to the Sourceforge link. The author clearly states (on that page) that the result will be a "GPL OS which meets the MULTICS specifications, but also builds on those specifications." The project is not just to replicate, but also to expand, which is what you seem to be proposing.

makaera


"Ninety rounds in there," Joel Andrews said. "If you can't take it down with 90 rounds, you better turn in your badge!" -- from Washington Post
[ Parent ]

Hmmm... (none / 0) (#32)
by Miniluv on Tue Apr 24, 2001 at 01:36:44 AM EST

Aside from the GUI, what are these major advances?

The UNIX spec is 30 years old now, and hasn't seen any major overhauls in that entire time. A modern UNIX system looks virtually identical to the original layout, from a high level perspective.

Sure, some of the details have changed, but even some of those are very similar. We still use streams, pipes, sockets. Memory management algorithms have evolved some, as has SMP technique. But you know the real reason everyone is so arrogant about current OS "technology" compared to the old shit? Because the hardware is thousands of times faster. Everyone wrenches their arm out of the socket patting themselves on the back for making a purchasing decision.

Come on, tell me how to moderate. I DARE YOU!


[ Parent ]

well (4.66 / 3) (#15)
by vsync on Fri Apr 20, 2001 at 04:17:21 AM EST

Considering that pretty much nothing of real importance has happened in the "computer industry" in the past 20-30 years, it wouldn't surprise me in the least.

--
"The problem I had with the story, before I even finished reading, was the copious attribution of thoughts and ideas to vsync. What made it worse was the ones attributed to him were the only ones that made any sense whatsoever."
[ Parent ]
os/390 (4.33 / 6) (#16)
by finkployd on Fri Apr 20, 2001 at 08:18:21 AM EST

Yet OS/390 (a direct descendant of 40+ year old operating system technology) has yet to be surpassed in terms of reliability, security, fault tolerance, and I/O speed (although that is more the hardware). While everyone is racing to jump on the next technology buzzword, Fortune 500 companies and governments are quietly running their mission-critical work on old technology, because it cannot be entrusted to Windows, or even Unix variants.

There have been many advances in computer science, but there have also been many steps backwards. Not everything follows Moore's law.

Finkployd
Sig: (This will get posted after your comments)
[ Parent ]
heh.. (3.66 / 3) (#6)
by rebelcool on Thu Apr 19, 2001 at 11:34:52 PM EST

i ask myself the same thing about unix, every single day.

COG. Build your own community. Free, easy, powerful. Demo site
[ Parent ]

Old Ideas? (4.00 / 2) (#25)
by lavaforge on Fri Apr 20, 2001 at 09:56:25 PM EST

The wheel is an old idea. Why, pray tell, would anyone want to use them to make cars?
"In theory, there is no difference between theory and practice. But, in practice, there is." -- Jan L.A. van de Snepscheut
[ Parent ]
yes, but (3.50 / 2) (#26)
by delmoi on Sat Apr 21, 2001 at 10:58:10 AM EST

The wheel is tried and true, whereas Multics has tried and failed.


--
"'argumentation' is not a word, idiot." -- thelizman
[ Parent ]
Could be cool... (4.66 / 3) (#2)
by calmacil on Thu Apr 19, 2001 at 09:53:17 PM EST

I've read a little about Multics, enough to think that it sounds like something I'd like to play around with. Are there any full design specifications available online? I checked out Multicians.org, which had parts of some of the docs... are there any other good places to look?

Sounds fantastic! (3.75 / 4) (#3)
by regeya on Thu Apr 19, 2001 at 10:36:20 PM EST

YOWZA! Can I get it on PUNCH CARDS for my TURING ENGINE?

[ yokelpunk | kuro5hin diary ]

well... (4.12 / 8) (#5)
by Estanislao Martínez on Thu Apr 19, 2001 at 11:01:36 PM EST

The whole thing kind of hinges on the following question: what would people gain from it?

Your essay continually implies that MULTICS is a superior design compared with Unix. How is this so? I know nearly nothing about MULTICS, so I really can't evaluate your idea without some examples of things you'd gain out of it.

--em

Neat, but why (3.75 / 4) (#14)
by strlen on Fri Apr 20, 2001 at 02:58:58 AM EST

Yes, I do understand that "learning how", or showing "it's possible", is a good reason why. But where are the tools for that to be done? How many working MULTICS systems are there? Perhaps a better idea would be to set up a public-access MULTICS machine, perhaps something using an over-the-net serial console, that people could play with to get an appreciation for MULTICS. Now, I may have my history _VERY_ wrong, but didn't MULTICS run at some stage on the PDP series? And since the PDP-11 is easily emulated (many packages exist for its emulation), it may well be possible to set up a public virtual MULTICS machine. Also, what innovative aspects of MULTICS, not present in UNIX, are there that are really of use to the public? Are there any problems with migration from MULTICS machines? Now, I'm not saying I know the answers to these; quite the contrary, I am looking for the questions. But I am a bit skeptical about the whole project. After all, the creation of UNIX was indeed a project to duplicate some MULTICS functionality, so a MULTICS work-alike may not be as pie-in-the-sky as it seems.



--
[T]he strongest man in the world is he who stands most alone. - Henrik Ibsen.
Multics Machine Info (3.00 / 1) (#21)
by Captain_Tenille on Fri Apr 20, 2001 at 01:23:00 PM EST

Unfortunately, Multics never ran on the PDP-11, or any other PDP. IIRC, Multics only ran on gigantic Honeywell-Bull mainframes that I know very little about, and have never heard of emulators for. Also, the last operating Multics machine, used by the Canadian DoD, was shut down last year.

I'm not saying it's impossible, but it might be easier to bring the Multics ideas back into UNIX/Linux. The scarcity of the specs, the lack of working models, and the utterly unportable code (PL/I and assembler, I believe) make this exercise rather more difficult than copying UNIX or even VMS (efforts to do that appear to have stalled).
----
/* You are not expected to understand this. */

Man Vs. Nature: The Road to Victory!
[ Parent ]

Some reasons why... (2.00 / 1) (#27)
by jd on Sat Apr 21, 2001 at 06:12:10 PM EST

First, MULTICS included a number of features as part of its inherent design: SMP, security, etc.

These are all things simply bolted on to modern OSes. (Largely because it's cheaper to bolt things on than it is to re-design entire OSes from scratch.)

This means that, for modern OSes, all these extras are likely to impact performance and stability. Worse, the performance hit may not be linear, which would make the system non-scalable.

In fact, this is what we see. SMP in Linux, for example, is not that useful beyond 4 processors, and is a positive disaster when you get to 16+.

The same is true for security. Secure Solaris is not exactly Sun's star product, and SE Linux is progressing at a snail's pace and may never produce anything useful.

It's my opinion that there is something fundamentally wrong with the entire approach, if huge multinationals and massive Open Source projects, with more corporate, R&D and Government assistance than any project before them, cannot overcome obstacles that the MULTICS group was able to defeat.

There HAS to be something simple, something fundamental, to the entire MULTICS way of thinking that cannot be replicated in a UNIX environment. It's this "something" I want to find.

[ Parent ]

entering warehouse 23, you open a random box... (4.62 / 8) (#17)
by unstable on Fri Apr 20, 2001 at 08:19:03 AM EST

You find thousands of computer punch cards... all carefully numbered and sorted...
You notice on the top of the number one card it reads "Multics for the x86" ...also in the box is a card reader that has a USB plug on it.





Reverend Unstable
all praise the almighty Bob
and be filled with slack

www.multicians.org (1.83 / 6) (#18)
by your_desired_username on Fri Apr 20, 2001 at 08:58:59 AM EST

www.multicians.org

Not much point. (3.75 / 4) (#22)
by Parity on Fri Apr 20, 2001 at 01:54:10 PM EST

Please note that Unix came out after MULTICS and so presumably learned from MULTICS' mistakes. Further, the "innovative features" of MULTICS (remote terminals; paged, virtual, and segmented memory; multitasking; SMP support) are all implemented in modern Unices. The one missing feature is the B2 security rating, which conventional Unices cannot have no matter how well secured, because they lack some features of the model specified by B2 (notably, separation of administrative powers, so that the technical-side sysadmin cannot read classified files, etc.). This, of course, is the purpose of Trusted Solaris and, similarly, though not here yet, Trusted Linux.

Now, this is from my personal knowledge of Unix and a quick reading of the FAQ, so perhaps there's some underlying greatness to the architecture.

Truthfully, though, in the interests of disclosure (and a desire to vent, okay, it's true), I had a professor who had been on the MULTICS team, whose closed-minded, narrowly-focused, virulently anti-Unix, anti-C (not to mention sexist, racist, etc., which was more reprehensible and almost got him fired, but didn't affect me directly) attitudes have turned me off any interest in MULTICS ever, for purely emotional reasons, based on one person who is probably not representative of Multicians as a whole. But there it is.

So, perhaps my opinion that MULTICS isn't worth anything except a footnote as a predecessor to Linux is technically unfounded, but then again, neither the article nor the FAQ show me any reason to bother overcoming my biases.


Parity Odd


Very interesting. (3.00 / 1) (#23)
by Mr. Piccolo on Fri Apr 20, 2001 at 06:00:42 PM EST

Multics (not MULTICS) II certainly would be an interesting test for the bazaar model of software development. If we can get 3,000 coders to implement one page of the specification each, we're home free! ;-)

Anyway, perhaps we could at least use those extra protection levels of the x86 processor that nobody uses. ;-) ;-)

Seriously, though, it sounds like a cool uber-hack, but is probably useless. How about taking the concepts of Multics that didn't catch on and building something brand-new around them? Or has that been done already?

P.S. Cool, new feature, changing sig behavior!

The BBC would like to apologise for the following comment.


The Ponderings of a Meandering Mind (3.50 / 2) (#24)
by jd on Fri Apr 20, 2001 at 08:35:11 PM EST

Firstly, MULTICS, being an acronym and all that, really does need to be capitalized. (Or, as a QA person where I work would put it - capitalizified.)

People have almost certainly borrowed from MULTICS, in one form or another. However, nobody has ever built anything useful from such work, aside from UNIX.

The problem is, most architectures for Operating Systems are designed by people who probably know a lot about design, but not very much about usability or extensibility.

A couple of examples:

  • L4 is a nice microkernel, and there are two flavours of Linux built around it - vanilla & real-time. Seen any Linux distros pick up on it?
  • "Ants" is a distributed kernel, which farms processes efficiently over networks with any kind of topology. If it were actually useful, it would render projects such as Cosm, distributed.net and SETI@Home obsolete and petty. If. You see it in use, lately?

This is the problem. There are a lot of brilliant concepts out there, but that's ALL they end up being. For an OS to become anything more than an interesting side-show, it has to be practical, useful, and extensible.

Ok, so would a re-write of MULTICS meet these criteria?

First, let's try practical. The project is going to start by creating a development suite that complies with MULTICS' specification. This would allow developers on non-MULTICS platforms to play with the API. This is the keystone to the whole project. Without that suite, development would be impossible.

However, not only does it make MULTICS development possible, it also provides added tools for programmers across the board, for completely unrelated work. In a real sense, then, the development suite is practical, regardless of whether the kernel itself ever is, or indeed is ever developed. (Much like GCC is useful, even though HURD has remained a pipe-dream.)

Now, how about useful? Here, we need to examine a couple of MULTICS' strengths -- SMP and security. Modern OSes, including Linux and Windows 2000, support only very primitive security models. In today's heavily-networked world, these models are as effective as a paper watchdog. MULTICS, on the other hand, was the first B2-certified OS. Unlike in most OSes, security was built in from the start, rather than wrapped around inherently insecure components. There is no telling if that improved the design of Classic MULTICS, but it does mean that a MULTICS II should have negligible overhead from the security.
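
To give a flavour of what "built-in" access checking means in practice, here is a toy sketch of a MULTICS-style per-segment ACL check. The struct and function names are invented for illustration; real MULTICS ACL entries also matched person.project.tag patterns and carried ring brackets, which this omits.

    /* Toy sketch of a MULTICS-flavoured ACL check.  The types and names are
     * invented; real MULTICS entries matched person.project.tag and carried
     * ring brackets as well. */
    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    enum { ACC_READ = 1, ACC_WRITE = 2, ACC_EXEC = 4 };

    struct acl_entry {
        const char *principal;   /* e.g. "jd.SysAdmin" */
        unsigned    modes;       /* ACC_* bits granted */
    };

    /* First matching entry wins; no match means no access. */
    static bool seg_access_allowed(const struct acl_entry *acl, int n,
                                   const char *principal, unsigned want)
    {
        for (int i = 0; i < n; i++)
            if (strcmp(acl[i].principal, principal) == 0)
                return (acl[i].modes & want) == want;
        return false;
    }

    int main(void)
    {
        struct acl_entry acl[] = {
            { "jd.SysAdmin",   ACC_READ | ACC_WRITE },
            { "guest.Network", ACC_READ },
        };
        printf("guest write allowed? %d\n",
               seg_access_allowed(acl, 2, "guest.Network", ACC_WRITE));
        return 0;
    }

The point is that a check of this shape sits on the path of every segment reference, rather than being an optional wrapper added afterwards.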

SMP is another issue that causes endless problems. Very, VERY few Operating Systems are scalable, because of the complexity of all the various issues. Again, SMP tends to be bolted-on, rather than an inherent part of the design. As a result, it tends to scale badly. By using an OS specification that includes SMP, again, this should not be a problem. If the design is correct, it should be as correct for N processors as it is for 1, where N is any number you feel like throwing in there.

Finally, we get onto extensibility. This project is not about building around a closed, sealed specification, set in stone. It's about using that specification as a starting point, as a guaranteed minimal set of functions, and extending it. Again, this goes back to how OSes are written. If Linux 0.65 had had the same capacity to load/unload driver modules as 2.4.3 has, it would not have needed the substantial re-writes it underwent each time a design flaw came up.

If MULTICS II is built with extensibility as a priority, then anyone can add whatever capability they feel like, at any time, without causing endless conflicts and upgrade problems.

[ Parent ]

3k pages? AAARG (3.00 / 1) (#28)
by ksandstr on Sat Apr 21, 2001 at 06:14:34 PM EST

Sounds like a severe case of overengineering to me. If the MULTICS design really needed to be so damn large, why haven't the "good bits" found their way into UNIX yet?

I'm sure that you could provide a UNIX98 compatibility layer on top of completely memory-mapped I/O, for example, so compatibility with existing software wouldn't be an issue.

What I'm not so sure about is whether UNIX really is the "little brother" of MULTICS, or the small, well-designed operating system that was hiding inside MULTICS, waiting to be discovered one day.


--
A gentleman always has an IDE cable in his coat pocket.



Migrating the good bits (none / 0) (#29)
by fluffy grue on Sat Apr 21, 2001 at 08:38:45 PM EST

I think I'll add all of the good stuff from Multics into Linux right now.

So let's see here... first I gotta put in robust timesharing... check.

Then I gotta put in pipes... check.

Finally, remote execution... check.

Okay, I'm all done! That was easy!
--
"Is not a quine" is not a quine.
I have a master's degree in science!

[ Hug Your Trikuare ]
[ Parent ]

Robust? (2.00 / 1) (#31)
by Miniluv on Tue Apr 24, 2001 at 01:33:03 AM EST

People use the word robust in reference to Linux? And they aren't referring to the developers?

When did this happen?

Come on, tell me how to moderate. I DARE YOU!


[ Parent ]

Lack of experience, I tell you. (none / 0) (#38)
by ksandstr on Thu Apr 26, 2001 at 08:20:24 PM EST

Oh well. In my book, anything that lasts at least 31 days as a terminal server for a bunch of secondary school students (i.e. ages 16->18 inclusive) counts as "robust".


--
Somebody has to set Imperial America up the bomb.



[ Parent ]
contact information (none / 0) (#30)
by briandunbar on Sun Apr 22, 2001 at 06:36:27 PM EST

I've looked at your sourceforge web site, and don't see any contact information listed. How would a semi-bright person sign up for this (maybe) crusade of children?


Feed the poor, eat the rich!

Contact information (none / 0) (#35)
by jd on Tue Apr 24, 2001 at 06:49:46 PM EST

This is (finally! oops!) on the project website. (The website has also been extended, and will continue to extend until it collapses into a black hole.)

However, if you want a quick email address now, then it's: imipak@sourceforge.net

[ Parent ]

What Multics had and Unix doesn't (none / 0) (#33)
by Maniac on Tue Apr 24, 2001 at 12:51:05 PM EST

First, let me refer you to my previous Multics article.

With that behind me, I used to work for Honeywell, not as a Multics developer but as a software developer using Multics to develop software for other systems (simulators for aircraft). One of the reasons we used Multics was a general "Buy Honeywell" edict. However, we used Multics until the VAX was out for a few years because...

  • Dynamic linking. Compile a file & it is "ready to run" - really. No linking was ever required. There was a "bind" step you could do, but it was optional.
  • I/O redirection by function calls. The other article explains this more fully, but let me emphasize the punch line - every program could take advantage of a filter w/ no coding changes.
  • Reliable operation. Linux is getting good in this area. Multics was still better by allowing parts of a system to be taken off line and fixed while the system stayed up.
  • Better tools. Many tools in Unix are an 80% solution to tools that were on Multics. Specifically, lrk was better than both yacc and bison, ted was better than ed and sed, and so on.
  • Much better security. Access control lists - expanded for email (didn't have much spam then but might handle today's spam), rings, and so on.
I would take dynamic linking tomorrow on Unix or Linux if I could have it. The benefits for a software developer far exceed the costs (we're spending weeks trying to build a suitable command line to ld to include all the right items, in the right order, etc.).
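
Unix can at least approximate the effect today with the POSIX dynamic loader. A minimal sketch follows; the library name libfilter.so and the symbol filter_init are made up for the example, and the program builds with "cc demo.c -ldl".

    /* Minimal sketch: approximating "compile it and it's ready to run" with
     * the POSIX dynamic loader.  libfilter.so and filter_init are made-up
     * names for the example. */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        void *h = dlopen("./libfilter.so", RTLD_LAZY);
        if (!h) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return 1;
        }

        /* Look the entry point up by name at run time, much as the Multics
         * dynamic linker did on the first out-of-segment call. */
        int (*filter_init)(void) = (int (*)(void))dlsym(h, "filter_init");
        if (!filter_init) {
            fprintf(stderr, "dlsym: %s\n", dlerror());
            return 1;
        }

        int rc = filter_init();
        dlclose(h);
        return rc;
    }

The difference, of course, is that on Multics no such ceremony was needed: every freshly compiled object was already callable by name.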

Miscellaneous, and other suntry items :) (none / 0) (#34)
by jd on Tue Apr 24, 2001 at 05:00:53 PM EST

Ok, let's start with the fact that the SIMTICS project has a slightly improved website now. :) (It can be found at: http://simtics.sourceforge.net.) Again, comments and criticism welcome.

It has links to the SIMTICS project page on Sourceforge, which details some of the work that is going to be involved in this.

These pages are going to be heavily expanded, as I pick the brains of MULTICS developers, and will be radically improved if/once I've got actual design documents for it.

Ok, now to move on to the advantages of MULTICS.

First, MULTICS offers a variety of things that don't exist in conventional UNIX, but can usually be added on. B2-level protection of memory and files, for example.

There are some advantages that are much harder to implement. These include "stretchy" memory segments, where you can always allocate and access more memory, without slamming into some other process' area. Users can also have an effectively unlimited number of segments. (In the original MULTICS system, users could claim up to 256 thousand segments, each of which could hold anywhere from 1 byte to 256K. Puts the original PC - only built 20 years later!! - to shame!)
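
For comparison, here is a rough, Linux-specific way to fake a "stretchy" segment today, using mremap(). The names are invented for the example, and this only imitates the effect; real MULTICS segments grew transparently under the hardware's segmented addressing.

    /* Linux-only sketch of a "stretchy" segment: an anonymous mapping grown
     * in place (or moved) with mremap() whenever more room is needed. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    struct stretchy_seg {
        char  *base;
        size_t len;
    };

    static int seg_grow(struct stretchy_seg *s, size_t newlen)
    {
        void *p = mremap(s->base, s->len, newlen, MREMAP_MAYMOVE);
        if (p == MAP_FAILED)
            return -1;
        s->base = p;
        s->len  = newlen;
        return 0;
    }

    int main(void)
    {
        struct stretchy_seg s;
        s.len  = 4096;
        s.base = mmap(NULL, s.len, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (s.base == MAP_FAILED)
            return 1;

        strcpy(s.base, "hello");
        if (seg_grow(&s, 1 << 20) < 0)    /* stretch to 1 MB */
            return 1;
        printf("still there after growing: %s\n", s.base);
        return 0;
    }

Note that the mapping may move when it grows, which is exactly the kind of bookkeeping a process never had to care about on MULTICS.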

Secondly, addressing is done symbolically. The symbols are resolved -during- execution, by the memory manager, not at load time. This means that you can effectively have run-time optimization of code, dependent upon the way in which the program is being used.

Thirdly, the MULTICS system was designed with buggy operators in mind. As a result, everything is fail-safe. The concept of undeleting files exists, although the implementation worked via magnetic tape, as a comprehensive real-time transaction log.

Those are a few of the major advantages to the MULTICS design. I've not listed them all, just some of the ones that are of specific interest to me.

The memory management is intriguing to me. Looking up the old papers on MULTICS (on the Multician's web site), it would seem that one way to implement this would be via a "memory filing system". This would be a 3-layer memory manager.

Layer 1: This would take the process:segment:offset ordered triplet, and convert it into a filename:offset ordered pair.

Layer 2: This would be the filing system. Ext2 would be nice, or ReiserFS. This simply "opens" the file, and sets the pointer to the correct position. This involves converting the filename:offset ordered pair into a track:sector ordered pair.

Layer 3: This takes the track:sector and converts it into an "internal segment:offset" address. This could be done with a minor change to the existing memory manager in most OSes.

This allows you to replicate MULTICS' hierarchical memory manager by re-using just about any piece of code that could be remotely useful. The only layer that would involve any substantial work would be layer 1, and that's just a basic lookup table.
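
A toy sketch of that layer-1 lookup, turning a process:segment:offset triplet into a filename:offset pair by table lookup. All names here are invented for illustration; a real implementation would of course build and tear down the table dynamically.

    /* Toy sketch of "layer 1" above: resolve process:segment:offset to
     * filename:offset by table lookup. */
    #include <stddef.h>
    #include <stdio.h>

    struct seg_map {
        int         pid;
        int         segno;
        const char *filename;   /* file backing this segment */
    };

    static const struct seg_map table[] = {
        { 101, 0, "/segs/101/stack" },
        { 101, 1, "/segs/101/heap"  },
        { 102, 0, "/segs/102/stack" },
    };

    /* Resolve (pid, segno, offset) -> (filename, file offset).
     * Returns 0 on success, -1 if the segment is unknown. */
    static int resolve(int pid, int segno, long offset,
                       const char **file_out, long *off_out)
    {
        for (size_t i = 0; i < sizeof table / sizeof table[0]; i++) {
            if (table[i].pid == pid && table[i].segno == segno) {
                *file_out = table[i].filename;
                *off_out  = offset;   /* segment offset maps 1:1 to file offset */
                return 0;
            }
        }
        return -1;
    }

    int main(void)
    {
        const char *file;
        long off;
        if (resolve(101, 1, 128, &file, &off) == 0)
            printf("pid 101, seg 1, offset 128 -> %s:%ld\n", file, off);
        return 0;
    }

Layers 2 and 3 then fall out of whatever filing system and memory manager you already have.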

A few clarifications (none / 0) (#37)
by Maniac on Wed Apr 25, 2001 at 09:28:44 AM EST

There are a lot of good things in Multics, but you may have a few misconceptions about the system:
  • The 256k word (1 megabyte) segment limit was pretty hard to work around. A "normal file" was simply mapped into your address space as a segment. A "multi-segment" file was implemented as a directory with files named 0, 1, 2, .... The vfile_ interface took care of that for you, but a lot of early tools were broken w/ files > 1 Mbyte.
  • I'm not quite sure what you mean by "run time optimization" of code. The first time you made a function call outside the current segment, the dynamic linker had to search for the target of that call. The dynamic linker used already-initialized segments first, and if the target was not found there, searched your PATH for a file that matched the name, initialized it, and then fixed up the references. In essence, a call to a separately compiled routine was done with an indirect reference [to the fixup routine the first time, then to the routine called on all subsequent calls; see the sketch at the end of this comment]. I don't see any opportunities for optimization there [unless you used bind...].
  • The web site refers to PDP-10's. I used them too and they never ran Multics. Multics primarily ran on DPS-8M's (Multics version) built by Honeywell. There were the early systems (e.g., MIT) which ran on GE hardware before Honeywell bought them out.
  • On the memory manager, you can do part of that today w/ Unix. The main problem is that mmap gets you a "finite" address range to map your file into the address space. Try to grow the file beyond that and you are stuck. Segments can help that, but tend to move the problem.
  • The last PL/I compiler I used was on a DEC VAX. As an aside, it was originally developed on Multics. I would not put too much effort into using the original PL/I (except as design guidance) unless you translate it into C++ or Ada (for exceptions, and other language features).
I'm busy with current projects, but would not mind providing some guidance or clearing up misconceptions your team might have.
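
A tiny sketch of the indirection described above: the first call lands in a fixup routine, which patches the call slot, and later calls go straight to the target. The names are invented, and only the indirection is illustrated, not the real Multics search over initialized segments and the search path.

    /* Tiny sketch of lazy binding via an indirect call slot. */
    #include <stdio.h>

    static int fixup(void);                  /* forward declaration */
    static int (*call_slot)(void) = fixup;   /* slot starts out pointing at fixup */

    static int real_routine(void)
    {
        puts("real routine running");
        return 42;
    }

    /* First call lands here: resolve the target, patch the slot, call through. */
    static int fixup(void)
    {
        puts("fixup: resolving target and patching the slot");
        call_slot = real_routine;
        return call_slot();
    }

    int main(void)
    {
        call_slot();   /* first call: via fixup */
        call_slot();   /* second call: straight to real_routine */
        return 0;
    }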

[ Parent ]
An idea whose time has come (none / 0) (#36)
by 80md on Wed Apr 25, 2001 at 12:33:08 AM EST


It seems clear that the next-generation tech-savvy computing power of Windows XP, combined with the best-of-breed multi-object protocol layer compliance implementation architecture of MULTICS, should combine to produce a combination of factors which, in a distributed system implementation environment implementation layer, would combine to power a true best-of-breed tech-savvy protocol computing layer into the twenty-first century, if not indeed beyond.

Well beyond. I mean, like, way the hell beyond, more than you dare contemplate.

So I say, "Yes! Absolutely! More MULTICS for Me, Mom! (TM)"


