NG unix lookalikes and other directions in OS design.

By matman in Technology
Tue Sep 26, 2000 at 04:03:14 PM EST
Tags: Software

Linux (in the sense of more than just the kernel) has got its foot in the door, and the Linux kernel is sneaking in. Linux is being put on servers, desktops, and embedded devices, and the media is helping to make sure that everyone knows it. It's got a culture surrounding it. But that's okay, because Linux is really pretty cool - it sure seems to beat NT and most of the other major contenders out there. However, I'll bet that the Linux kernel isn't the pinnacle of technology, never to be replaced. I hope that people don't forget that better IS possible.


If something's to replace the Linux kernel, it's got to do just that - most software should work without needing much more than a quick recompile. Compatibility with existing operating systems is not a total necessity, but it does help an implementation gain acceptance. Remember that UNIX is an environment - an interface that software written for Unix expects. You can't modify the environment, but you can append to it. You can even offer alternative environments. Obviously, there's room for improving the Unix design... not that it doesn't work well. Interfaces presented to software could be modular, and a system could offer many interfaces. This is already done to some extent using Wine on *nix machines to present a Windows API to applications that want it. Changes to what lies underneath are even less limited.

There are many OS designs out there: from the microkernels used by Mac OS X, QNX, Hurd, and even Windows NT, to the caching kernel concepts used in V++, to monolithic kernels like Linux, and others. There are hybrids of these designs, and systems designed for distributed operation.

A microkernel is one where the main kernel is designed to be minimalistic. All other parts of the kernel - filesystems, drivers, network stacks, etc. - run as userspace servers. This design offers more sandboxing possibilities and results in a more stable and more secure system. It's a more object-oriented approach, but its critics have said that it can't be as fast as a monolithic kernel because of the overhead involved in communicating between parts of the kernel. However, microkernels have been shown to come close to monolithic kernels in speed. Caching kernels aim to be a sort of lightweight microkernel. They cache threads and address spaces similarly to how hardware caches data. They have been shown to be just as fast as monolithic kernels while being more modular and secure.
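
To make the "userspace servers" idea concrete, here's a minimal sketch in plain C - not any real microkernel's API, just a fake filesystem server and a client trading request/reply messages over a socketpair, which is roughly the shape of the interaction (and where the extra message-passing overhead comes from):

    /* Toy illustration of the "userspace server" idea: a fake filesystem
     * server running as an ordinary process, with a client talking to it
     * purely by message passing.  This is NOT any real microkernel's API,
     * just the general shape of the interaction.
     * Assumes a POSIX system with AF_UNIX SOCK_SEQPACKET (e.g. Linux).
     * Build: cc -o ipc_sketch ipc_sketch.c */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/wait.h>

    struct msg {                /* the "IPC message" both sides agree on */
        int  op;                /* 1 = "how long is this path", say      */
        char path[64];
        long answer;
    };

    static void fs_server(int fd)   /* stands in for a userspace FS server */
    {
        struct msg m;
        while (read(fd, &m, sizeof m) == sizeof m) {
            if (m.op == 1)
                m.answer = (long)strlen(m.path);  /* pretend to do real work */
            write(fd, &m, sizeof m);              /* reply to the client      */
        }
    }

    int main(void)
    {
        int sv[2];
        if (socketpair(AF_UNIX, SOCK_SEQPACKET, 0, sv) < 0) {
            perror("socketpair");
            return 1;
        }

        if (fork() == 0) {              /* child plays the filesystem server */
            close(sv[0]);
            fs_server(sv[1]);
            _exit(0);
        }

        close(sv[1]);                   /* parent plays an application         */
        struct msg m = { 1, "/etc/passwd", 0 };
        write(sv[0], &m, sizeof m);     /* the "system call" becomes a message */
        read(sv[0], &m, sizeof m);      /* ...and the answer comes back        */
        printf("server says: %ld\n", m.answer);

        close(sv[0]);                   /* server's read() sees EOF and exits  */
        wait(NULL);
        return 0;
    }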

Microkernels are obviously in. Hurd aims to be an NG Linux of sorts, offering a Unix interface - things will already run on it. It appears to be a contender. OS X looks interesting for those running Macs. These are OSs that already have code... very working code. However, there are more bleeding edge designs (even though they may be old, many have not been implemented on a large scale). Yahoo gives a little bit of an index. There's a lot more out there than just Unix and Windows. Ever heard of an exokernel? The exokernel cuts down on what a kernel is expected to do, abstracting hardware less than normal kernels do. It doesn't sound like desktop end-user material, but it certainly could be useful for high-end specialized applications.

A hybrid sounds attractive - something which can offer extreme stability, flexibility, and security like a microkernel aims to do while, when required, offering low-level access to hardware for purposes like multimedia and gaming, as an exokernel does. Although it's certainly possible, people don't like to reboot to access a specific application, so I doubt that two different operating systems for two different tasks will ever be very popular. Obviously, as computing becomes a more integral part of our lives, we're going to want continuity between our devices. A distributed operating system, designed to manage many devices and provide services to them, is certainly a possibility. In the end, of course, end users don't care so much about the design as they do about the results.

This isn't the place for me to insert 50 pages of OS design essay... the resources are out there for people to learn from. This, I hope, can be a place for gossip, excitement, learning, and wild speculation. More awareness of weird designs can only result in more of them, and that benefits everyone.


Poll
What kernel design is to be the next big thing?
o Microkernels 27%
o Caching Kernels 6%
o Exokernels 7%
o Monolithic Kernels 5%
o Distributed Kernels 20%
o No Kernel 31%

Votes: 88

Related Links
o Yahoo
o QNX
o Hurd
o caching kernel
o Linux
o index
o exokernel


NG unix lookalikes and other directions in OS design. | 37 comments (36 topical, 1 editorial, 0 hidden)
Linus on microkernels (4.06 / 15) (#1)
by mattdm on Tue Sep 26, 2000 at 03:10:29 PM EST

There is a historic (ca. 1992) argument between Linus Torvalds and OS guru Andrew Tanenbaum over this very issue. Tanenbaum disses Linux heavily for being monolithic, and Linus flames back. What fun! Check it out.

Re: Linus on microkernels (3.42 / 7) (#2)
by kovacsp on Tue Sep 26, 2000 at 03:21:50 PM EST

I love this quote from that exchange:
"... but 5 years from now everyone will be running free GNU on their 200 MIPS, 64M SPARCstation-5"

-- Andrew Tanenbaum, 1992 (referring to Linus' decision to give away Linux when it only ran on "high end" hardware)



[ Parent ]
Re: Linus on microkernels (2.42 / 7) (#3)
by matman on Tue Sep 26, 2000 at 03:29:23 PM EST

A funny quote in that conversation by A.S.T.: "Of course 5 years from now that will be different, but 5 years from now everyone will be running free GNU on their 200 MIPS, 64M SPARCstation-5."

hehe oh yah? seems like everyone is running linux on intel hardware :P

[ Parent ]
Re: Linus on microkernels (4.00 / 5) (#8)
by adamsc on Tue Sep 26, 2000 at 05:32:03 PM EST

I found the comments about the "upcoming Windows NT" being a microkernel interesting. While you can hardly classify any shipping version of NT as a microkernel, the original design was interesting and reflected a number of good ideas, as you'd expect from someone with Cutler's credentials.

NT is really an example of what happens when a decent design runs into deadlines and backwards compatibility. Most of the worst parts about NT exist in the Win32 layer and the programs like explorer.exe running on top of it. There have been a lot of rumors of heavy feuding at Microsoft between the Win9x and NT teams; as an example, apparently the NT people thought the Win95 Explorer was garbage, an opinion I share. Microsoft has held the state of the art in user interfaces back by almost a decade, but there is some interesting and significantly better code hidden under the muck.

[ Parent ]

Re: Linus on microkernels (4.25 / 4) (#15)
by Pseudonym on Tue Sep 26, 2000 at 09:40:09 PM EST

The irony of reading that debate nowadays is that the Hurd, while it was the cutting edge of technology ten years ago, is obsolete now. The Mach kernel even contains such bloat as networking support! Research has moved on to the next generation of microkernels (typified by L4/Fiasco, QNX, Plan 9 and BeOS) in the meantime.


sub f{($f)=@_;print"$f(q{$f});";}f(q{sub f{($f)=@_;print"$f(q{$f});";}f});
[ Parent ]
Distribution Will Make the Next Killer OS (4.13 / 15) (#5)
by cysgod on Tue Sep 26, 2000 at 03:55:54 PM EST

Microkernels are already a step in the direction of distribution. The idea is to move as much of the process specifics off the raw hardware as possible, making everything movable, loadable and easily replaceable. It is not so gigantic a leap to think about replacing the hardware out from under a process if the hardware is needed for another task, or needs to have more RAM added, etc.

The advantage to all of these is that with greater abstraction from hardware you gain greater portability both for the processes and for the OS itself.

In the end it becomes an economy of scale question. 40 486s linked together can give a modern system a good run for its money, at possibly a significant cost savings, except for more specialized computational tasks.

The failure of most modern languages to exploit parallelism goes hand in hand with the lack of good parallel-natured kernels.
As an example, as long as I trust the other 5 machines on my network, I don't really care which ones are translating my MP3 into a form that the DSP chip can digest. And I don't care where the DSP chip is, as long as I get sound out the speakers. The key is to make the clustering built in, not a result of having to hack software up.

Being able to transparently move processes around between machines that are going up and down or being upgraded is certainly an area where a lot of research and money is going right now. It's just a matter of time until some nice student of higher education gets some time off and implements this sort of trusted process sharing for >insert your fave free OS here<.

In the future, we'll likely take this for granted. It'll be quaint to think of the days when processes couldn't be shared. The challenges involved are quite some food for thought though. My mind boggles at executing foreign code on my system with some level of trust and how to protect data, etc. Should be some good fun to develop, when do we start?
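
Kernel support aside, the minimum a migratable process needs is state that can be externalized and picked up somewhere else. Here's a toy sketch in C - nothing like real checkpoint/restart (no registers, no memory image, no open descriptors), just a worker whose whole state lives in one struct that gets written to a shared file, so it can be killed on one box and resumed on another:

    /* Toy "migratable" worker: all of its state lives in one struct that is
     * checkpointed to a file, so the same binary can be killed on one machine
     * and resumed on another that can see the file (NFS, scp, whatever).
     * Real process migration (registers, memory image, open descriptors) is
     * vastly harder; this only shows the "externalize your state" half.
     * Build: cc -o worker worker.c ; run it, kill it, run it again. */
    #include <stdio.h>
    #include <unistd.h>

    struct state {
        long next_item;             /* where in the job we are   */
        long done;                  /* how much work is finished */
    };

    static const char *ckpt = "worker.ckpt";

    int main(void)
    {
        struct state st = { 0, 0 };
        FILE *f = fopen(ckpt, "rb");

        if (f) {                                /* resume from a checkpoint */
            if (fread(&st, sizeof st, 1, f) == 1)
                printf("resuming at item %ld\n", st.next_item);
            fclose(f);
        }

        for (;;) {
            st.done += st.next_item % 7;        /* pretend to transcode a chunk */
            st.next_item++;

            if (st.next_item % 1000 == 0) {     /* ...and checkpoint as we go */
                f = fopen(ckpt, "wb");
                if (f) {
                    fwrite(&st, sizeof st, 1, f);
                    fclose(f);
                }
                printf("checkpointed at item %ld\n", st.next_item);
                sleep(1);
            }
        }
    }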


Re: Distribution Will Make the Next Killer OS (1.60 / 5) (#13)
by Didel on Tue Sep 26, 2000 at 08:42:34 PM EST

How about now? Sounds like you just volunteered. :)

[ Parent ]
Re: Distribution Will Make the Next Killer OS (none / 0) (#28)
by Misagon on Wed Sep 27, 2000 at 03:01:23 PM EST

I think that we need a better foundation than Unix if we are going to have something where distribution is ubiquitous. The problem is that it is really hard to create a system that implements distributed shared memory if you want both speed and causality. You need to let the same system be responsible for monitoring all communication of all processes that communicate with any application that uses DSM.

For Linux, the starting point would be the Distributed IPC project.
--
Don't Allow Yourself To Be Programmed!
[ Parent ]

Re: Distribution Will Make the Next Killer OS (4.00 / 1) (#32)
by negcreep on Thu Sep 28, 2000 at 05:25:26 PM EST

check out:
http://www.disi.unige.it/project/gamma/

It's a modified Linux which does native parallel processing.

[ Parent ]
What about L4? (3.93 / 15) (#6)
by faichai on Tue Sep 26, 2000 at 04:04:06 PM EST

I thought I'd mention a kernel that I am intensely curious about, but which also seems pretty unknown in the community: that kernel is L4.

L4 is a high-speed microkernel designed by Jochen Liedtke, written primarily in assembler.

The original goal was to develop a microkernel that imposes as little policy as possible on the full operating environment (sound familiar!).

The end result, L4, was a fast, real-time, multi-threaded microkernel. Its primary benefit is extremely fast IPC (inter-process communication), which reduces the overhead of pushing traditional kernel services into user space.
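
That overhead is easy to get a rough feel for yourself. The sketch below is plain C on an ordinary POSIX system - nothing L4-specific - bouncing one byte between two processes over pipes and timing the round trip, which is (loosely) the cost a microkernel pays every time an application talks to a server:

    /* Rough ping-pong "IPC benchmark": two processes bounce a byte back and
     * forth over a pair of pipes and we time the round trips.  Plain POSIX C,
     * nothing L4-specific.  Build: cc -O2 -o pingpong pingpong.c */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/time.h>
    #include <sys/wait.h>

    #define ROUNDS 100000

    int main(void)
    {
        int to_child[2], to_parent[2];
        char c = 'x';
        struct timeval start, end;

        if (pipe(to_child) < 0 || pipe(to_parent) < 0) {
            perror("pipe");
            return 1;
        }

        if (fork() == 0) {                     /* child: echo every byte back */
            for (int i = 0; i < ROUNDS; i++) {
                read(to_child[0], &c, 1);
                write(to_parent[1], &c, 1);
            }
            _exit(0);
        }

        gettimeofday(&start, NULL);
        for (int i = 0; i < ROUNDS; i++) {     /* parent: send, wait for reply */
            write(to_child[1], &c, 1);
            read(to_parent[0], &c, 1);
        }
        gettimeofday(&end, NULL);
        wait(NULL);

        double usec = (end.tv_sec - start.tv_sec) * 1e6
                    + (end.tv_usec - start.tv_usec);
        printf("%d round trips, %.2f microseconds each\n",
               ROUNDS, usec / ROUNDS);
        return 0;
    }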

The approach taken with L4 opposes that taken with Hurd, in that the kernel is heavily optimised for a particular architecture (indeed, even for specific instances of an architecture). This optimisation is deemed necessary to ensure maximum kernel performance and to negate the argument that microkernels are too slow, as demonstrated with Mach and the Hurd.

To prove the viability of the L4 design, they built L4Linux on top of L4, as a monolithic server. The amazing thing is that the performance reduction was only 10% on average. Paper here.

While I say monolithic server, functions of the Linux kernel were still split off into different threads - however, only the bare minimum to get it working. The implication is that using L4 as a base and increasingly splitting Linux up into different servers could very well improve Linux performance, as well as making it a good environment for real-time computing and low-latency work (audio and video). This is indeed the aim of IBM's Sawmill project.

The problem with L4 is that, at the moment, the code is non-free, and the only version available is an old one from 1995. I believe that there are talks underway to GPL it. However, there are projects such as L4/Fiasco to produce a C++ version of the microkernel, albeit with the performance drawbacks of using such a language.

So L4, to me, seems to have everything required of a microkernel. Are more architecture-dependent kernels the way to go? I can't say for sure, but I think the general basis is sound, and porting an optimised microkernel probably takes about as much effort as porting the hardware-dependent components of Linux to a different architecture anyway, so portability is not a factor (although feel free to prove me wrong!).

Re: What about L4? (3.60 / 5) (#12)
by Broco on Tue Sep 26, 2000 at 08:32:51 PM EST

I'd have to disagree with the idea that we should have more architecture-dependent kernels. I don't know much about L4 specifically, but with today's technology, I think there's very little reason to code anything in assembly, and certainly never an OS. All-assembly OSes are nice to play with, but they'll never go past that stage for mainstream users (i.e. anyone who isn't doing a 2-month-long scientific calculation).

First, I'm not sure how you can say that portability is not a factor. With a pure assembly program, each and every line of it depends on its original platform. Every single line would have to be changed if you wanted to port it to a different CPU! And it's even worse if it's optimized: the code will often depend on obscure features of the target platform that aren't found on other systems, so simply going around doing a global search-and-replace of "ax" with "zc4" or whatever won't work either. So I would say that assembly code is not only somewhat nonportable, it's completely impossible to port. "Porting" such a beast would amount to a full rewrite.

So, since assembly code is 100% non-portable, that would mean if a pure asm OS caught on, we'd be stuck with the horrible x86 architecture for even longer. "arrgh!" is putting it mildly :).

And the argument that assembly programs are faster is only valid for badly-designed architectures like the x86. Because there are so few registers, human programmers can keep up with the internal workings of the system. But on the Alpha, for example, there are 64 (?) registers and there is no way a human programmer can make effective use of all of them without ending up with a mess of spaghetti. OTOH, a compiler can. So C code can be actually faster than optimized machine language!

Anyway, I would argue that speed is not really an important concern in most cases. Any lack of optimization is rapidly made insignificant by advances in hardware; OTOH, improvements in flexibility, ease of use, stability etc, last forever. Optimization should in many cases be the last concern of the developers of a large project.

Klingon function calls do not have "parameters" - they have "arguments" - and they ALWAYS WIN THEM.
[ Parent ]

Re: What about L4? (3.50 / 2) (#16)
by tzanger on Tue Sep 26, 2000 at 10:12:42 PM EST

And the argument that assembly programs are faster is only valid for badly-designed architectures like the x86.

Even with 80x86 hardware, I find it very difficult to believe that a human can program better assembly than a compiler with x > 4.

The reason is that all the special cases, branch prediction optimizations and cache fill loopholes which exist on the P5 and P6 cores are ludicrously complex to deal with for anything more than the tightest of loops. Hell, 80386 optimizations are "incompatible" with the 80486, which are incompatible with the P5, which are incompatible with the P6. It's insane.

Optimization should in many cases be the last concern of the developers of a large project.

Agreed. My rule of thumb is that you should only have to hand-optimize where absolutely necessary. Don't waste time trying to get the menu popup as fast as possible. Instead optimize that little function which parses lines from the input data. I'd estimate that under 3% of code on any given project even needs hand-optimization, and under 1% requires hand-coded assembly. The coding guidelines for the Linux kernel explain it well:

    ...

Okay, I can't find it now (go figure) -- it was a good description (I think by Linus) of why you should let the compiler do the optimizing for you. In short: it has a much better idea for how it is organizing things and what the processor is going to be doing than you will, for the most part.

Inner Loops is a pretty decent book which goes on to describe all the nastiness you will run into if you think you can outsmart a compiler for general-use code on the P5/P6 processors. I used to code for 80386 and it wasn't too difficult to optimize. I don't think I'd dare (mostly because I'm paid to do these kinds of things now) since the new x86 chips are so damn full of inconsistencies and loopholes.
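
A toy illustration of the "find the hot spot before you hand-tune anything" rule - the parse_line() here is made up, and in a real project you'd point a profiler at the whole program rather than timing one function by hand:

    /* Toy version of "profile first, then optimize the hot spot": time the
     * suspect inner loop in isolation.  parse_line() is made up; in a real
     * project you would point a profiler at the whole program instead.
     * Build: cc -O2 -o hotspot hotspot.c */
    #include <stdio.h>
    #include <time.h>

    /* the "little function which parses lines from the input data" */
    static int parse_line(const char *line)
    {
        int fields = 1;
        for (const char *p = line; *p; p++)
            if (*p == ',')
                fields++;
        return fields;
    }

    int main(void)
    {
        const char *sample = "12,34.5,foo,bar,2000-09-26,0xdead,42";
        long iterations = 5 * 1000 * 1000;
        volatile int sink = 0;              /* keep the compiler honest */

        clock_t t0 = clock();
        for (long i = 0; i < iterations; i++)
            sink += parse_line(sample);
        clock_t t1 = clock();

        double secs = (double)(t1 - t0) / CLOCKS_PER_SEC;
        printf("%ld calls in %.3f s (%.1f ns per call), sink=%d\n",
               iterations, secs, secs * 1e9 / iterations, sink);
        return 0;
    }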

x86.org is a good site for x86 info as well. Just going there now I see it has been acquired by Doctor Dobbs.



[ Parent ]
Re: What about L4? (none / 0) (#30)
by sety on Wed Sep 27, 2000 at 11:18:09 PM EST

A couple of questions:

1. Are other architectures easier to optimize for with the compiler and / or by hand? Such as Sparc or Alpha? I run x86 linux on PIII and need it to run a small numerical program very fast.

2. If I don't know exactly what I am doing am I more likely to screw up hand x86 assembly modifications on a PIII?

[ Parent ]
Re: What about L4? (none / 0) (#31)
by tzanger on Thu Sep 28, 2000 at 10:47:51 AM EST

1. Are other architectures easier to optimize for with the compiler and / or by hand? Such as Sparc or Alpha? I run x86 linux on PIII and need it to run a small numerical program very fast.

If you don't know what you're doing you probably won't get the best performance out of hand optimization. I wish I had that Inner Loops book in front of me; I'd give you an example of how the instructions must be specifically ordered or one of the pipelines will stall waiting for the other. It's not straightforward like other processors.

I have no experience optimizing Sparc or Alpha processor code so I can't give a firsthand statement on them. However I have heard from people who do have experience in multiple 32-bit proc coding that the Intel chips are the "weirdest", to use the technical term.

2. If I don't know exactly what I am doing am I more likely to screw up hand x86 assembly modifications on a PIII?

You can't HARM anything, but you can get yourself a lot worse performance if you start stalling pipelines, starving cache, screwing with the branch prediction flow, etc.



[ Parent ]
Re: What about L4? (3.00 / 1) (#34)
by Not Jon Katz on Tue Oct 03, 2000 at 02:22:33 PM EST

General question: If I am compiling a program with GCC 2.95.2 - what optimizations should I use to get the program to run the fastest on a i486 and a Pentium II? I have been using -i486 -03 when compiling for my 486, and i586 -03 for my Pentium. Cool?

[ Parent ]
Re: What about L4? (4.00 / 1) (#35)
by tzanger on Tue Oct 03, 2000 at 02:30:55 PM EST

General question: If I am compiling a program with GCC 2.95.2 - what optimizations should I use to get the program to run the fastest on a i486 and a Pentium II? I have been using -i486 -03 when compiling for my 486, and i586 -03 for my Pentium. Cool?

You won't get a single compilation that runs fastest on both a 486 and a P2; the optimizations required are completely different. I believe that GCC 2.95.2 will emit code for -mpentiumpro, which gives you P2-optimized code, since the P5 is different from the P6. I'm not a total expert on GCC though.



[ Parent ]
Re: What about L4? (2.00 / 3) (#36)
by Signal 11 on Tue Oct 03, 2000 at 06:21:42 PM EST

-O6 and -pedantic will get you more optimizations regardless of platform.


--
Society needs therapy. It's having
trouble accepting itself.
[ Parent ]
Re: What about L4? (none / 0) (#37)
by fluffy grue on Fri Oct 06, 2000 at 06:25:16 PM EST

Hey Siggy, what're ya smoking? Obviously not tobacco Sigs.

From the gcc manpage:

       -pedantic
              Issue  all  the  warnings  demanded  by strict ANSI
              standard C; reject all programs that use  forbidden
              extensions.

              Valid ANSI standard C programs should compile prop-
              erly with or without this option (though a rare few
              will  require  `-ansi').  However, without this op-
              tion, certain GNU extensions and traditional C fea-
              tures  are  supported  as  well.  With this option,
              they are rejected.  There is no reason to use  this
              option; it exists only to satisfy pedants.

              `-pedantic' does not cause warning messages for use
              of the alternate keywords whose names begin and end
              with  `__'.  Pedantic warnings are also disabled in
              the expression that follows __extension__.   Howev-
              er,  only  system header files should use these es-
              cape  routes;  application  programs  should  avoid
              them.

--
"Is not a quine" is not a quine.
I have a master's degree in science!

[ Hug Your Trikuare ]
[ Parent ]

Re: What about L4? (4.00 / 1) (#24)
by faichai on Wed Sep 27, 2000 at 05:20:02 AM EST

I could have been a little clearer: yes, the microkernel will have to be rewritten for new architectures, but the thing with microkernels is that they are not that big!

I can't remember exactly offhand, but I think L4 was something like 30,000 lines of assembler.

Remember that the whole point of microkernels is to do as little as possible and push everything into user space. So L4 provides a minimal set of primitives for memory management, passing interrupts as IPC, thread and process management, etc.

Once you have this microkernel running, it will provide a degree of hardware abstraction, such that the servers running on top of it (which do all the real work, like I/O and virtual memory management) can be written in a high-level language and ported with much more ease.



[ Parent ]

exokernel? (3.12 / 8) (#7)
by the coose on Tue Sep 26, 2000 at 04:50:05 PM EST

Now, that's a new one on me. After reading the supplied link, I think this could really go somewhere. The article talks about each application using a Library Operating System (LibOS) but why not build up a more complete system with the generic OS functions built into the LibOS? To me this seems like the epitome of portability since to port the OS over to a new platform it's just a matter of obtaining a LibOS that runs on that platform. As long as the hardware I/O primitives are consistent among the various exokernels, porting is no problem.

I know it's a lot easier said than done but this is neat stuff to think about.

It looks vaguely like the subsystems in NT 3.5 (2.00 / 1) (#29)
by marlowe on Wed Sep 27, 2000 at 06:11:37 PM EST

At one time NT even had a POSIX subsystem. I'm told it really sucked. No GUI, X11 or otherwise. I think even networking was absent.

-- The Americans are the Jews of the 21st century. Only we won't go as quietly to the gas chambers. --
[ Parent ]
Check out EROS (3.00 / 8) (#9)
by Paul Crowley on Tue Sep 26, 2000 at 05:33:37 PM EST

My favourite alternative operating system project is EROS (http://www.eros-os.org/). Still nothing like a TCP/IP stack, but lots of neat, unusual, and highly advanced operating system features. I think we're going to have to rethink the way we design operating systems if we're to have secure computer systems, and something like EROS's flexibility could make a real difference there.
--
Paul Crowley aka ciphergoth. Crypto and sex politics. Diary.
May not have to reboot (2.00 / 6) (#10)
by dead_radish on Tue Sep 26, 2000 at 07:01:41 PM EST

Caveat: My understanding of kernel operations is fairly slim.

people dont like to reboot to access a specific application

Couldn't something along the lines (conceptually, at least) of VMWare use this? Not a full reboot, but a separate instance of the kernel? So that you have the microkernel running for most of your applications/functions, but when it comes time to launch Diablo 4: Duke Nukem's Heretical Quaking Doom (Coming soon to a store near you!) a small process splits off, launches an exokernel, and runs until the game is done? Granted, it would require a split of the system resources, but with the amount of resources currently in boxes, if they were used efficiently, we could get by quite nicely.

Is this just a horribly naive argument, or could something like this work?

Cheers,
dead radish
I knew I shoulda brought a crossbow. -- Largo. www.megatokyo.com

Re: May not have to reboot (2.00 / 1) (#11)
by matman on Tue Sep 26, 2000 at 07:47:30 PM EST

Of course anything is possible. The thing is that having a second instance is more resource intensive. To be able to sorta load a second, different kernel beside the main kernel instead of on top of it would be a good thing, especially if you could set one of the kernels to a higher priority. I mean, I'm learning, but I don't know enough about it to make any particular claims... I'm just speculating :)

[ Parent ]
Re: May not have to reboot (3.00 / 2) (#14)
by scheme on Tue Sep 26, 2000 at 09:14:46 PM EST

The whole point of using an exokernel is to reduce the overhead that the kernel requires. Although spawning another virtual machine would let you start an exokernel for each app easily, you would have a lot of overhead, since you need to virtualize the hardware for each exokernel. It would probably be easier to just have a normal kernel or microkernel and run the app under that than to do what you suggest.


"Put your hand on a hot stove for a minute, and it seems like an hour. Sit with a pretty girl for an hour, and it seems like a minute. THAT'S relativity." --Albert Einstein


[ Parent ]
The issue would be mediating hardware access (3.00 / 1) (#17)
by Alhazred on Tue Sep 26, 2000 at 11:22:03 PM EST

What you're talking about really just amounts to having different drivers with different levels of abstraction. Putting one kernel on top of another is always a performance penalty because of context switching and memory access issues. What you would want would be something like a "direct hardware access API", which is basically what DirectX is in the Windows world... It's perfectly feasible, but as I started out saying, you have to design the system to allow the proper mediation of access to hardware resources. Certainly having two different task schedulers and virtual memory subsystems doesn't make a lot of sense in general. My guess is that even PC-class machines will eventually support virtualization in hardware, which will make things much simpler. In a sense the x86 did that to a certain extent with "real mode".
That is not dead which may eternal lie And with strange aeons death itself may die.
[ Parent ]
gnu hurd... (2.40 / 5) (#18)
by pulsar on Tue Sep 26, 2000 at 11:30:29 PM EST

...is very interesting to me. They use gnumach as the minimal kernel, then have the gnu hurd handle everything else. gnumach handles all the low-level hardware access, and you could think of gnu hurd as a super server of sorts. gnumach is based on Utah's mach 4, which was based on CMU's mach 3 (and older versions). A lot of gnumach developers/users use oskit. The two coupled together provide access to drivers from Linux, FreeBSD, NetBSD and other kernels. This is actually quite cool, as you are not "stuck" with one kernel's driver support, but rather have several to choose from.

There are actually several operating systems based on CMU's mach. For example, Compaq's Tru64 UNIX (formerly Digital UNIX (formerly OSF/1)) is based on mach 3 (older versions were based on mach 2.5). Tru64 is an excellent example of extreme performance! Linux on Alpha is quite fast itself, but from what I've seen Tru64 is still a bit faster (probably some optimizations that Linux doesn't yet have). Compared to other unix-like operating systems, Tru64 smokes 'em all in performance and reliability (also IMHO, but there are many example cases).

gnu hurd caught my eye a few years ago and I've followed it with interest since. In recent months I have cofounded a project to start porting gnumach/hurd to Alpha. Don't take that as advertisement please; I only mention it to show that development of gnu hurd is quite active (also check out the Debian GNU/HURD mail list archives). There was an audio interview with one of the co-creators of gnu hurd not too long ago - check the gnu hurd list archives... anyway, the interview was interesting. Many items were discussed, and I have to give the guy an A for avoiding the interviewer's attempt to get him to say gnu hurd competes with Linux. Instead he talked about similar features and the cool features of gnu hurd. gnu hurd has a transparent ftp interface. Very cool stuff! There have been patches floating around for a while that allow you to get xfree86 up and going on gnu hurd. Work has picked up on that recently and I hope they get everything fully functional soon.

All that said and done, I for one am very interested in seeing which of the different kernel types will become popular in the future.

Re: gnu hurd... (4.00 / 1) (#19)
by pulsar on Tue Sep 26, 2000 at 11:52:07 PM EST

I also want to say (got a little sidetracked with that post, sorry) that a good majority of the people who claim microkernels are slow are usually the ones who took a monolithic kernel and broke it into two or more "processes" without really doing much to make it a true microkernel. CMU's MACH 3 site has a lot of good documentation (I don't remember the URL off the top of my head, but it's linked from the gnu hurd site) about microkernels in general and some about MACH specifically. If you are really interested, go there and read some of it. There are even some slides that you can print out and put on a projector (assuming you have that equipment). IIRC there is something like 20MB worth of docs available to download (this is from memory, so I could be off on the size).

[ Parent ]
Plan9, a distributed OS (3.00 / 6) (#20)
by 3john on Wed Sep 27, 2000 at 12:14:05 AM EST

Apart from the cult movie, which is scoring so badly on the front page poll, this is an operating system from Lucent, née Bell Labs, that (to quote from the press release):

"The Plan 9 team was led by researchers Rob Pike, Ken Thompson, Dave Presotto and Phil Winterbottom, with contributions from others in the Computing Science Research Center and support from Dennis Ritchie, head of the Computing Techniques Research Department.
...
The Plan 9 system is based on the concept of distributed computing in a networked, client-server environment. The set of resources available to applications is transparently made accessible everywhere in the distributed system, so that it is irrelevant where the applications are actually running."

and something from the faq:

"Plan 9 users do Internet FTP by starting a local program that makes all the files on any FTP server (anywhere on the Internet) appear to be local files. Plan 9 PC users with a DOS/Windows partition on their disk can use the files stored there. ISO 9660 CD-ROMs and tar and cpio tapes all behave as if they were native file systems. The complete I/O behavior and performance of any application can be monitored by running it under a server that sees all its interactions. The debugger can examine a program on another machine even if it is running on a different hardware architecture."

I have always liked the sound of it, and since June it has been available under some kind of open source license.

Re: Plan9, a distributed OS (4.00 / 1) (#21)
by sergent on Wed Sep 27, 2000 at 02:14:04 AM EST

I'm posting this from Plan 9 right now.

(In particular, it's from Charon inside Inferno running on Plan 9.)

It's pretty nifty. Check out comp.os.plan9 (moderated) for more info.

It's a bit hard to get used to--definitely not Unix. But there are many nice things.

For a touch of things 9ish that is not quite as esoteric, try using wily on Unix. I really like it and have started using it for everyday hacking at the office recently.

[ Parent ]

Re: Plan9, a distributed OS (2.00 / 1) (#22)
by 3john on Wed Sep 27, 2000 at 02:30:47 AM EST

I do use wily ;) it is so much better for doing compile-installs than vi. And, come to think of it, is probably why I remember Plan9.

[ Parent ]

Re: Plan9, a distributed OS (2.00 / 1) (#23)
by matman on Wed Sep 27, 2000 at 02:41:26 AM EST

Care to go into any more detail? :) You've got me curious.

[ Parent ]
Re: Plan9, a distributed OS (none / 0) (#33)
by Arker on Sun Oct 01, 2000 at 08:38:49 AM EST

Hrmm I don't see anything here I don't have already under linux.

Plan 9 users do Internet FTP by starting a local program that makes all the files on any FTP server (anywhere on the Internet) appear to be local files.

I do that too. The program is called emacs. Ported to damn near everything.

Plan 9 PC users with a DOS/Windows partition on their disk can use the files stored there.

Yep, mount /dev/hda1 /dos-c.

ISO 9660 CD-ROMs and tar and cpio tapes all behave as if they were native file systems.

Got that too.

The only thing that looks at all different is the emphasis on distributed computing, but it's my understanding that the software is out there already that can do that as well, whether with windows, linux, or bsd... so why use plan 9?



[ Parent ]
Barrier to replacing Linux (4.00 / 2) (#25)
by edderly on Wed Sep 27, 2000 at 08:13:36 AM EST

To a certain extent the easy bit for any new OS platform is providing compatibility for existing applications on existing platforms. The tricky part is the hardware support.

How important is this? If you are happy for the OS to run on a limited hardware range, this is no problem. However, if we are talking about replacing Linux or Windows (on Intel) then obviously you need to support the range of hardware available. This is a huge task - only now is Linux getting to grips with this problem, and that has taken years. Just take a look at the Linux source and you will see that what constitutes the core kernel is relatively small - the software/hardware driver range is huge.

This problem makes the debate between micro vs. monolithic kernels irrelevant for most folks. It's a bit of a shame really - Linux seems to be largely copying traditional UNIX kernel concepts rather than coming up with something really cutting edge. It relies (like most other Unices) on the ability of developers to come up with something reliable in a hard-to-debug but very performant environment.

BSD already is a linux replacement (3.00 / 3) (#26)
by mr on Wed Sep 27, 2000 at 08:56:09 AM EST

Even though the original poster is ignoring BSD
"From microkernels used by Mac OS X, QNX, Hurd, even Windows NT, to caching kernel concepts used in V++, monolithic kernels like Linux,"

The reality is that BSD is a replacement for the Linux kernel already.

1) The userland on Linux distros is already found shipping on FreeBSD.
2) Debian has had a desire to take the BSD kernel and make a GNU/BSD to complement their GNU/Linux.
3) BSD runs Linux binaries, and some people report a 30% faster run time.


The better question is "What will replace the Unix model"?

Ken Olson (DEC CEO) - Why would anyone want Unix? VMS is so well documented (points to orange wall of manuals)
Microsoft - Windows NT 3.1 will be a better Unix than Unix
Think of all the OS corpses that fell before the Unix model... Unix has won.
So... what will replace UNIX is a more important question than "what will happen to Linux", as Linux is the hyped Unix flavor of the week.


What specifically needs to be improved? (3.00 / 3) (#27)
by Nelson on Wed Sep 27, 2000 at 02:12:43 PM EST

I'm not sure why microkernels are always brought up as the panacea. The idea is clever enough and easy to understand and I think we all like the idea of the abstraction but in practice it is very difficult to implement a microkernel OS (I did my time at CMU on this very subject) and it is even more difficult if not impossible to do it and keep speed. Tru64 and MacOSX take liberties with the "microkernel architecture" and NT has as much "microkernel" as Linux does. I'd really like to see the microkernel dream realized with multiple servers providing different personas but I have no reason to believe that it is the future or even a good target for OSes to aim at, at this point I think it is still a research concept more than anything. The only real area where microkernels have succeeded is on large hardware with many processors.

There *are* L4Linux and MkLinux, and neither is very popular. Hurd is also making great progress but is quite a ways from prime time. (In recent months one of the core developers said that they are 1/10th of the way to beta on the hurd list...)

Linux has a very, very good design; it is very clean and most of the code is quite simple. The benchmark is high: it performs well. The way I see the kernel issue evolving is like this: Debian BSD/OS will be the first step. The ability to take a Linux system and drop in a BSD kernel is desirable for a number of reasons - it makes it possible to actually compare BSD and Linux kernels in a fair manner, and it provides competition, which will ultimately spark development changes. There are some serious religious issues to sort out to make this happen; BSD is far more than a kernel, it's a way of thinking and a way of life for some. I personally don't think Linux should have two tool sets, but that's the heart of the issue.

Down the road I see HURD becoming a factor of sorts. How big and in what capacity remains to be seen. HURD is a long way from providing what Linux provides, let alone offering something that it doesn't. L4Linux (FiascoLinux?) and MkLinux (dead?) and possibly others are the same: if they can do what Linux does and then some, then either the Linux kernel will change to match them or they will become viable drop-in replacements. I can't foresee something non-UNIX-like any time soon though. Something else you have to consider is the dynamic nature of Linux and the overall high quality of the code; they have not shied away from rewriting parts of the kernel, redoing parts, and dramatically changing parts to improve it. They avoid changing it for the hell of it, but different parts of Linux have undergone radical changes over the last few iterations. A new kernel would have to have something that Linux simply couldn't do, or couldn't do in an efficient manner. At one point it looked like real-time was it, but there are very good real-time solutions for Linux now. Buzzwordy kernels - microkernel, "cachekernel" (is that a microkernel that fits into L1?), "nanokernel" (clearly better because nanoseconds are faster than microseconds...), "exokernels" - aren't enough; bragging rights ("I've got a *microkernel*") are cool to less than 2% of the user base.

