Kuro5hin.org: technology and culture, from the trenches

Security properties needed from operating systems

By holofukami in Technology
Tue May 28, 2002 at 07:15:26 PM EST
Tags: Security (all tags)

The VFiasco project aims at the source-code verification of an operating-system kernel. A major goal of this project is to produce a trusted kernel that can be used to run both security-sensitive and untrusted system software and applications on one machine.

One question that has come up is how to find out which security properties are worth proving. While everybody can quickly come up with a wish list of items like separate address spaces, authenticated inter-process communication, or support for secure installation and booting, so far nobody seems to have produced a list of formal security properties that are worth verifying.

I recently attended a talk by a security expert who introduced an anonymous-web-browsing service. During the talk, I noticed that the attacker model with respect to which the system was secure -- the assumed strength of a potential attacker -- excluded attackers that can execute code on the local machine. When I asked the expert what was needed from the operating system to protect the system from Trojan-horse attacks or malicious local users, he listed a few properties, such as provision of protected address spaces and a safe clock, but said that there currently was no systematic method for deriving such properties.

I got a disturbing feeling from this answer. The security community seems to have accepted that operating systems are insecure, leaving the problem of host-based security to OS vendors and system and firewall administrators.

Contrast this situation with the current and expected use of computers. Users want to use their desktop computers and PDAs to perform more and more security-sensitive tasks such as generating electronic signatures or encrypting communication. However, users also expect to be able to use the rich variety of services offered by modern Unix and Windows systems, including the possibility of downloading and running the latest and greatest games and toys from untrusted sources. Given the complexity and size of Unix and Windows, it will never be possible to trust that they do not contain bugs that allow malicious agents to compromise the user's security goals.

The VFiasco project aims at a different approach to provide security guarantees: We want to formally verify a microkernel operating system (the Fiasco microkernel, an implementation of L4), proving security properties about it. On top of this microkernel, system servers and user applications run in two compartments: trusted and untrusted.

(Trusted applications would use only small, verified or certified, trusted operating-system servers, while the untrusted compartment mainly consists of an L4Linux subsystem -- a Linux "kernel" running in user mode, with arbitrary Linux applications. More information on the intended microkernel-based architecture is available at the Perseus website, which describes a prototype implementation of such a compartmentalized system.)

With this approach, an interesting question arises: Which security properties should be proved about the kernel? In other words: Which properties are needed to implement secure applications on top of the kernel? As I said earlier, there seems to be no accepted list of such properties; furthermore, it is unclear how to derive them.

A simple answer would be: Look at the Unix specification and filter it for security-relevant features; when your system provides these, it is secure. Unfortunately, this answer is not useful for two reasons. First, as verification is very expensive, minimalism (both in number and complexity) of the security requirements is mission-critical to the VFiasco project. Therefore, we aim for a small API which is easy to reason about; this is why we chose a microkernel as the base of the system. Second, it is far from clear that the Unix interface is complete in the sense that it is possible to guarantee security properties given a correct implementation.

We think that the following approaches are better suited to finding the right security requirements:

  1. Trivial pursuit. All of us can think of properties which are obviously required for a safe operating system, such as address-space protection, kernel-memory protection, trusted software-installation process, and so on.

    This approach would require thinking of more such obvious properties (or squeezing them out of experts such as implementors of secure applications), and finding a rationale for them.

  2. Examine a subject. Look at one or two specific applications that need security guarantees from the system, for example the Perseus system or the DROPS console subsystem (DROPS is an operating system that runs on top of L4/Fiasco). Assume an attacker model in which the attacker can execute arbitrary code (such as Trojan horses) on the system. What guarantees does the application require from the operating system in order to be able to guarantee its own security goals?

  3. Bug hunt. Classify known attacks (such as those described by CERT) based on attacker model and violated security goal. Filter out attacks on operating systems or system software. Think of countermeasures that could have prevented the remaining classes of attacks (e.g., better access control, stack not executable).

  4. Access-control breakdown. As one representative class of OS-level security mechanisms, look at access-control mechanisms such as access-control lists, mandatory access control, BSD jails, virtual machines, and other sand-boxing mechanisms. Derive security properties these mechanisms provide, and break these down to requirements on the microkernel.

  5. The rules of the game. Analyze a security calculus. There exist calculi (similar to process calculi) that allow reasoning about security protocols. Examples are the BAN logic [Anderson: Security Engineering, page 29], the spi calculus (search the web, for instance [ResearchIndex results]), and perhaps also the calculus of mobile ambients [ResearchIndex results]. In these calculi one can prove the correctness of security protocols. The derivation rules of such a calculus encode its assumptions about the environment and the underlying system. One can analyze these rules to find a set of properties that guarantee the correctness of the rules. Any system that fulfills these properties is a model for the calculus and thus preserves the truth of all proofs done in the calculus.

  6. Question the truth. Analyze a security proof. Take a description of a security protocol and an explanation of why it is secure -- for instance, the RSA key-exchange protocol or the SSH protocol. Examine the explanation for basic operations such as "send a message", "decipher", or "read key". Consider these basic operations as operating-system primitives, and derive the set of properties of this hypothetical system that the explanation implicitly assumes.
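To make approach 5 concrete, here is a toy sketch (with invented class and field names) of one BAN-logic derivation rule, the message-meaning rule. The point is that the rule's soundness silently assumes system-level properties such as key secrecy and message integrity -- and those assumptions are exactly the kind of kernel requirements being sought.

```python
# Toy model of one BAN-logic derivation rule (the "message-meaning" rule):
#   if P believes that P and Q share key K, and P sees {X}_K,
#   then P believes that Q once said X.
# Class and field names are invented for this sketch.

from dataclasses import dataclass, field

@dataclass
class Principal:
    name: str
    beliefs: set = field(default_factory=set)  # facts this principal believes
    seen: set = field(default_factory=set)     # messages this principal has received

def message_meaning(p: Principal, q: Principal, key: str, payload: str) -> bool:
    """Apply the rule; return True if a new belief was derived."""
    shares_key = ("shares", p.name, q.name, key) in p.beliefs
    saw_cipher = ("enc", key, payload) in p.seen
    if shares_key and saw_cipher:
        p.beliefs.add(("said", q.name, payload))
        return True
    return False

# The rule is sound only if the environment guarantees, among other things:
#  - key secrecy: nobody but P and Q can produce ("enc", key, ...) -- so the
#    OS must protect key storage (address-space isolation, protected memory);
#  - message integrity: what P "sees" is what was actually sent -- so the
#    OS/IPC layer must deliver messages unmodified.
# Reading such assumptions off each rule yields kernel-level requirements.

alice = Principal("alice", beliefs={("shares", "alice", "bob", "Kab")})
alice.seen.add(("enc", "Kab", "nonce-42"))
assert message_meaning(alice, Principal("bob"), "Kab", "nonce-42")
assert ("said", "bob", "nonce-42") in alice.beliefs
```

Any system that can be shown to uphold the commented guarantees is a model for the rule, so proofs done in the calculus carry over to it.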

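Approach 6 can likewise be sketched as a mechanical exercise: each basic operation found in a protocol explanation becomes a method, and each method's docstring states the guarantee the underlying system must provide. All names below are hypothetical illustrations, not the VFiasco or L4 API.

```python
# Sketch of approach 6: the basic operations of a protocol explanation
# ("send a message", "decipher", "read key") written down as a system
# interface. The in-memory model is functionally trivial; the docstrings
# carry the real payload -- the properties a verified kernel must supply.

class HypotheticalSecureSystem:
    """Toy model; every method name and guarantee here is an assumption
    for illustration, not an existing API."""

    def __init__(self):
        self._keys = {}    # key_id -> key material (must live in protected memory)
        self._queues = {}  # destination -> queue of delivered messages

    def store_key(self, key_id: str, key: bytes) -> None:
        """Property: key material is writable only by its owner."""
        self._keys[key_id] = key

    def read_key(self, key_id: str) -> bytes:
        """Property: key material is released only to its owner."""
        return self._keys[key_id]

    def send(self, dest: str, msg: bytes) -> None:
        """Property: msg reaches dest unmodified, and only dest can read it."""
        self._queues.setdefault(dest, []).append(msg)

    def receive(self, dest: str) -> bytes:
        """Property: messages arrive in order and cannot be forged."""
        return self._queues[dest].pop(0)

s = HypotheticalSecureSystem()
s.store_key("Kab", b"\x01\x02")
s.send("bob", b"hello")
assert s.receive("bob") == b"hello"
assert s.read_key("Kab") == b"\x01\x02"
```

The functional half of each operation is easy; the security half of each docstring is the list of properties this approach is meant to extract.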
My questions for you:

  1. Do you know any publications that derive security requirements for operating systems? Especially for an attacker model that includes Trojan horses?
  2. What do you think of the approaches we proposed for finding security properties?
  3. Which properties does your security-sensitive application require from the underlying system?
  4. What is the best forum to ask these questions? (In the old days, I would have tried comp.security, but most experts seem to have vacated that newsgroup.)




Related Links
o VFiasco project
o Fiasco
o L4
o L4Linux
o Perseus website
o the Perseus system
o ResearchIndex results
o ResearchIndex results [2]
o comp.security
o Also by holofukami

Security properties needed from operating systems | 52 comments (49 topical, 3 editorial, 0 hidden)
Microsoft... (3.25 / 4) (#1)
by DeadBaby on Tue May 28, 2002 at 12:40:00 PM EST

If you remember, Microsoft was going to make XP able to run only "certified" programs for just this reason (and, of course, others). The idea was rejected by just about everyone, but it would have helped solve a lot of problems. A similar scheme that was less dependent on Microsoft might actually still work.
"Our planet is a lonely speck in the great enveloping cosmic dark. In our obscurity -- in all this vastness -- there is no hint that help will come from elsewhere to save us from ourselves. It is up to us." - Carl Sagan
Not really the same (4.80 / 5) (#2)
by kuran42 on Tue May 28, 2002 at 12:44:07 PM EST

The focus of the article seems to be more how you can securely run any programs. Microsoft's approach is to look at each program and make a determination about whether it is safe to run or not. Time consuming, inelegant, fails to address underlying problems (which would require a rewrite of Windows, of course), and, in the end, little more than a new source of revenue for Microsoft.

kuran42? genius? Nary a difference betwixt the two. -- Defect
Certificates do solve the problem (2.40 / 5) (#4)
by Ken Pompadour on Tue May 28, 2002 at 12:51:37 PM EST

Of making decisions about "Who to trust" somewhat easier. Which is what they're meant for, and do solve, and would work on Windows XP, without a rewrite of fuck all.

But just keep pissing in the wind, Linux boy.

...The target is countrymen, friends and family... they have to die too. - candid trhurler
Mathematical Correctness (5.00 / 3) (#7)
by Blarney on Tue May 28, 2002 at 01:18:57 PM EST

I may be wrong - perhaps I do not understand this article as well as I should - but it seems to me that this discussion is about mathematically provable security. Can a system be designed such that, for any arbitrary program that it may run, this program cannot access anything it should not or interfere with other programs?

Certificates are merely a statement that the program was created by an approved software vendor. They make no claims of mathematically guaranteed security. Perhaps they would have security value if Microsoft or another certification authority carefully went through each submitted program's source code line-by-line, but it seems very unlikely that they would undertake this sort of effort and that software vendors would willingly submit their source code (possibly containing valuable trade secrets) to this sort of scrutiny.

Anyhow, if such a software security validation process existed - if there existed a program that could prove the security and correctness of any other program - it would be best to simply sell this program or, better yet, build it into the operating system a la the Java theorem prover.
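The distinction drawn above can be made concrete in a few lines: a certificate check answers "who shipped this?", not "is this safe to run?". A minimal sketch follows, using an HMAC as a stand-in for a real code-signing certificate; the key and both programs are invented for illustration.

```python
# A signature proves origin, not behavior: once signed by the vendor,
# a benign and a hostile program verify equally well.

import hashlib
import hmac

VENDOR_KEY = b"vendor-signing-secret"  # stands in for the vendor's private key

def sign(code: bytes) -> bytes:
    return hmac.new(VENDOR_KEY, code, hashlib.sha256).digest()

def verify(code: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sign(code), sig)

benign = b"print('hello')"
hostile = b"import os; os.remove('/precious/file')"

# Both verify: the check says nothing about what the program may access
# or which other programs it may interfere with.
assert verify(benign, sign(benign))
assert verify(hostile, sign(hostile))
# It only detects a mismatched origin:
assert not verify(hostile, sign(benign))
```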


If that's so (5.00 / 1) (#10)
by Ken Pompadour on Tue May 28, 2002 at 01:46:25 PM EST

Then Java is already (to pick a number out of a hat) 70% of the way to this secure operating system.

Good point (none / 0) (#34)
by MfA on Wed May 29, 2002 at 03:59:27 AM EST

Albeit not wholly original, these guys beat you to it.

At the moment it still lags a bit in performance, but it might catch up in the future. The MMU in x86 processors does not lend itself well to very fine-grained protection; without a big monolithic kernel space you get a lot of context switches. For an OS which relies on the type system for security, this is not an issue.

The reason... (4.50 / 4) (#5)
by trhurler on Tue May 28, 2002 at 12:59:47 PM EST

The reason this sort of thing hasn't become more popular is that, to put it simply, it is really hard to get right, and the difficulty increases at a greater-than-exponential rate as the desired feature list for the system in question grows. Given twenty years or so and a large number of people, doing something like this and producing a system similar to BSD as the result might be feasible (assuming you don't get ambitious later on and change the goal, which is an odd requirement over that timespan in our field) -- or you could just go use OpenBSD and deal with "not quite perfect, but almost certainly good enough for the real world." In the commercial world, things are even more bleak, as a system like Windows, even redesigned for security, would take time measured in centuries to verify, and of course Microsoft probably won't accept that as the new lower bound on the product cycle time...

This is a great thing they're doing - but you need to be realistic: it is a niche endeavor, and that is all it will ever be. For a very few, it will be useful, but to most of us, even most of us serious computer people, it will be a curiosity at best. In order to maintain theoretical security, they'll be producing something rather austere, to put it mildly.

'God dammit, your posts make me hard.' --LilDebbie

Maybe I'm wrong.. (4.00 / 1) (#9)
by rcs on Tue May 28, 2002 at 01:21:45 PM EST

I could be completely off base here, but hey, where better?

Isn't there a huge benefit to everyone if this is done correctly once? The bar is significantly lowered for anything trying to build on it, or for anything similar to it with similar goals.

Once something comes out of a pure, 'untainted', 'theoretically ideal' project such as this, doesn't it allow people to point to it and at the very least say "Yeah, we're not theoretically ideal, but we're a lot like Foo, which is"? Eh. Look at "provably secure" in cryptosystems, where you can say that breaking foo is at least as hard as doing bar (where bar is preferably an NP-complete problem with major research behind it).

And if this is done in a general enough manner, couldn't it be used as an atom for future growth? (Yes, I'm still young and idealistic enough to pursue the idea of perfect components to build systems out of.)

I've always felt that there was something sensual about a beautiful mathematical idea.
~Gregory Chaitin

One problem (none / 0) (#16)
by greenrd on Tue May 28, 2002 at 02:30:23 PM EST

Run an old version of bind on top of OpenBSD, and poof - you've instantly got an exploitable system.

Secure foundations do not imply a secure operating system.

"Capitalism is the absurd belief that the worst of men, for the worst of reasons, will somehow work for the benefit of us all." -- John Maynard Keynes

Or perhaps (none / 0) (#17)
by greenrd on Tue May 28, 2002 at 02:35:50 PM EST

actually telnet would be a better example, seeing as it's inherently insecure.


Well, (4.00 / 1) (#21)
by trhurler on Tue May 28, 2002 at 03:22:58 PM EST

You can do things that make this secure from an OS perspective (i.e., even if the program is exploitable, the end result is useless and cannot be used to gain further access to the system), especially in the area of mandatory access controls and limited view, but even so, let's say I compromise your DNS. Maybe this doesn't directly allow any access to your DNS server, but so what? I now have your DNS, and creative manipulation thereof will get me more, because identity is one of the fundamental properties of a network entity.

Basically, the problem is that you CAN, at great usability cost, compartmentalize security - but if each compartment gives way when attacked, then you haven't really gained much in practical terms; even if the attacker never "roots" a single machine, he can still do what he wants to do. To prevent this, you must ensure that every line of code that ever runs on your system is properly written.

That may be easier than you realize (none / 0) (#25)
by MickLinux on Tue May 28, 2002 at 04:44:51 PM EST

...but more expensive than OpenSource can handle.  What I mean is that M$ or Sun or Apple could perhaps do it, if they chose to.  But I'm not sure that they would actually make a profit at it, so they might not, anyhow.

Essentially, the number of algorithms that must be run is limited -- think what is involved in a browser, a WP program, and so on.  It's large, but limited.

So you first code the microkernel, and you make sure that that is strong. The microkernel has to handle security and call subroutines.

Next, you code the basic subroutines that people will want.  Because you are going to be suffering losses on the compartmentalization, you want to take C code and then hand-code Assembler.  But you document each routine extensively.  

Now, as you code these subroutines and check them for security-robustness, you enter them as "Secure", trusted routines.  The first of them will be SYS-Operations.  The next batch will be mathematical algorithms; the next batch will be network algorithms, and so on.  

Now, as users want programs, they can write them using the secure algorithms.  But their code is still untrusted.  But then, as you go, you code more and more layers, providing people with the things they want in a trusted environment.  And once coded, there won't have to be more changes.  

This is the kind of thing that M$ wants to do anyways, but they don't want to spend the effort to do right.  They just want the benefits of monopoly and being the *only* software provider.  

But ultimately, for this to work, you also have to open-source the code, so that people can see it, test it, break it if possible.  If they can't break it, then it can be presumed unbreakable.  If they can break it, then you determine what principles of security were violated, and then look at the rest of the code to see if it too is vulnerable.

So I'm not sure if this is practical... but it is definitely doable.

I'm pretty sure I could work on this kind of project (I'm great with Assembler!), but I'd have to be paid.  I can't afford to just throw time (yah, right, like I'm being paid to type this.  But that's mild depression for you.)
But aside from that, I'm not really certain that security is a good idea.  I think that open everything might be better.

I make a call to grace, for the alternative is more broken than you can imagine.

No (4.00 / 1) (#26)
by trhurler on Tue May 28, 2002 at 06:37:23 PM EST

What I mean is that M$ or Sun or Apple could perhaps do it, if they chose to.
At the cost of going out of business, yes.
So you first code the microkernel, and you make sure that that is strong. The microkernel has to handle security and call subroutines.
Five to ten years later, you've got something that, by itself, is useless - and provably so - and you've proven it. Great.
Because you are going to be suffering losses on the compartmentalization, you want to take C code and then hand-code Assembler.
No you don't. You want to use C, yes. Using assembler for this job is beyond stupid; the compartmentalization isn't all that big a performance hit anyway, and moreover, verifying correctness gets harder as the number of lines of code increases; you NEVER want to increase that unnecessarily. (For that matter, debugging it would be harder, and that's not even a beginning of a proof of correctness.) Furthermore, you'd need assembly programmers from the great beyond to really make the performance gains noticeable in most circumstances, and if you're going to go to all this effort, wouldn't you want a portable result?
Now, as you code these subroutines and check them for security-robustness, you enter them as "Secure", trusted routines.
And the first mistake you make might as well be every mistake. Human beings are not perfect, and the scale of what you're talking about makes putting men on the moon look like wiping your ass. Remember, even your proving process can produce errors; only a correct proof is any good, and then only if the code matches the proof exactly. This endeavor will take you decades (probably centuries), and at the end, you'll have -- not centuries of progress -- but rather today's OS technology well implemented. Is that what the people of the year 2500 will want? I bet not.
But their code is still untrusted.
Then it cannot be used. To use it is to rely on it, is to trust it. "Trust" is not just a label you put on a jar full of good code. If I'm using your DNS server, then I trust it, whether I like that or not.
And once coded, there won't have to be more changes.
Very obviously you've never written any serious computer program, or you'd never have said this. Aside from debugging, which is never really a complete process (you discover new kinds of bugs, given enough time), no useful specification is ever graven in stone.
This is the kind of thing that M$ wants to do anyways, but they don't want to spend the effort to do right. They just want the benefits of monopoly and being the *only* software provider.
Wrong again. Microsoft is big because they release new stuff all the time. Your model would prohibit that. How can you succeed in selling software that never changes? People only have to buy it once!
But ultimately, for this to work, you also have to open-source the code, so that people can see it, test it, break it if possible.
Which they won't bother doing.
If they can't break it, then it can be presumed unbreakable.
As the British intelligence services are fond of saying, "interesting, if true." However, this is neither true, nor interesting. Most breakable systems are never broken, open source or otherwise.
So I'm not sure if this is practical... but it is definitely doable.
In precisely the same sense in which building a Dyson sphere is "doable."
But aside from that, I'm not really certain that security is a good idea. I think that open everything might be better.
You and rms can go have a big democracy cakewalk all over Tienanmen Square while the rest of us watch the tanks roll over you. We'll keep our security, thanks.

Actually, I have done coding on projects, (3.00 / 1) (#32)
by MickLinux on Wed May 29, 2002 at 01:04:10 AM EST

....  both big and small.  

When I talk about coding something into assembler, I refer to coding routines. Routines are containable, and once coded -- like object code for C compilers -- they typically are not recoded. If something else is needed, it is built from the ground up.

So suppose you code a 1-D complex FFT of variable length; if you later need an n-D complex FFT, you code that anew, you don't recode the 1-D FFT.

But to use this, documentation has to be extremely good. That means that there exists documentation telling (a) *how* it works, (b) *what it does* in a block-by-block breakdown, (c) how to *interface* with it, (d) exceptions, and (e) returns.

As far as code goes, my Assembler code has typically been smaller than any C code. For example, at one point when I wrote a virtual-memory task switcher back in the days of DOS, the code size of the entire thing was < 4k. The code size for one "task" program -- one that would scale an image with aliasing and dithering -- was about 200-300 bytes. The code size for a floating-point divide (no coprocessor) was about 200 bytes; that for a double-precision floating-point logarithm was about 1200 bytes, but only because the logarithm required a basic lookup table. Typically you can expect a line of code to require about 5 bytes, and a loop to require about 20-30 lines of code. So the square root, for example, was about 40 lines long. That's pretty short and readable. If all code is broken down like this, then things actually become usable.
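The rule of thumb in the last few sentences multiplies out consistently (a back-of-envelope check, not a measurement):

```python
# ~5 bytes per assembly line; a ~40-line square-root routine should then
# come to roughly 200 bytes, in line with the divide figure quoted above.
BYTES_PER_LINE = 5
sqrt_routine_lines = 40
print(sqrt_routine_lines * BYTES_PER_LINE)  # 200
```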

However, that being said, there are later posts than mine that describe some other interesting security flaws, such as the time taken between operations, and page faults that allowed programs to figure out the password.

Ideally, a secure task manager should be transparent to such things.

Thus the complexity of the system does grow.


Ah (4.00 / 1) (#39)
by trhurler on Wed May 29, 2002 at 11:50:32 AM EST

Well, if you've written assembler (I'm guessing Intel?), that's cool, but some of your assumptions aren't particularly good for our purposes.

First off, I am certain that your code will be smaller than that produced by a C compiler - but smaller is not always faster. In fact, compilers often generate their fastest code by "wasting" memory.

Second, while mathematical routines and so on may be of the "get it right and never rewrite" variety, this method of doing things does not scale. You are going to need to rewrite operating-system pieces, both entirely and in parts, including bug fixes (some of this stuff isn't easy to unit test to the point that you're absolutely certain it works), but also including modifications to adapt to new specifications (OS specs never stay the same, no matter what anyone tells you). You are unlikely to choose to rewrite 10,000 lines of VM hardware map code just to make a 50 line alteration. Trust me on that :)

Securing against power, timing, emissions, and other similar attacks is very, very difficult. In fact, nobody is quite sure how to do it reliably. In any case, those are hardware problems, not software problems; if the hardware has those characteristics that allow side channel attacks to succeed, then the software is helpless to do anything about it.

Here's a sample (none / 0) (#35)
by MickLinux on Wed May 29, 2002 at 04:40:43 AM EST

It occurred to me that I'm not really sure whether you're just trolling when you say that I haven't worked on any projects.  But I think you're one of the regulars, so I'm going to guess that you weren't trolling.

But just so you know I'm really being genuine that I have, I am posting on my diary some of my own source code.

Needless to say, I am not about to post proprietary stuff that belongs to some company I've worked for.  So anything I post has to be *nonprofessional stuff*.  By definition.  

But it's on my diary webpage. It is an algorithm that does dither and scale, shift-and-add only, in commented Assembly language. Since it's the only thing there, it should be easy to identify.



Ah (4.00 / 1) (#20)
by trhurler on Tue May 28, 2002 at 03:19:10 PM EST

In theory, your idea is feasible. The problem is that, as I said, in order to make the verification feasible, they're going to produce something very austere -- something with limited capability compared to systems "real people" use for "real work." It will likely have heavily restricted IPC mechanisms, filesystem provisioning that is truly draconian, and so on -- anywhere a resource might allow two distinct privilege sets to "mingle," you will find spartan and/or nonexistent functionality, in order to make security provable. The end result is a system that isn't very usable, but is very secure.

Heh. (4.00 / 1) (#22)
by rcs on Tue May 28, 2002 at 03:49:31 PM EST

I don't know. I still hold hope that it can be achieved by working from two angles: one from a basic set of functionality that's actually useful, and one from a base that's provable.

It's obviously more work to do this. I understand the problems with proving code, in one form or another, especially in the context of security proofs. Which are hard to even get a good grasp on in the first place... Eh. The really interesting part here is the intersection. What is the minimal functionality needed in a base so that you can expand it to the level of something that real people use? Without that, it's just so much wanking.


Problem (4.00 / 1) (#23)
by trhurler on Tue May 28, 2002 at 04:15:31 PM EST

When people "use" it, they're going to do so by putting more code on the system.

I've done more in this particular area than most people. What it comes down to is simple. You can have several different kinds of "security" in an OS.

First, you can have none. For all intents and purposes, this is what most commercial systems today have, their proponents notwithstanding.

Second, you can have something weak but good enough for most purposes. Like a door lock, it won't keep a dedicated intruder out for long, but it might convince him to try your neighbor's place instead, and it isn't too annoying to deal with.

Third, you can have something that is itself strong, but which makes no guarantees for any code running on the system. These require you to use the same methods for developing apps that you do for the OS itself, which makes this very hard to use. If you don't see the difficulty here, we can do a little thought exercise. Just let me know. If so, the starting question is this: how would you configure a small network with name service and a web server that uses some dynamic content?

Fourth, you can have something that manages to truly box everything away, to the point that a failure in one part means nothing to any other part. This system will have the functionality of a small network in each little compartment, with huge inefficiency - and will therefore be so bulky that proving it correct will be an unbelievably long and arduous task.

Fundamentally, your building block idea is sound -- but what you may be missing is that all the other blocks have to be built right too; the proper analogy is a chain. Maybe the anchor link is good and solid, but if the links from there are weak, or if even one of them is, then even if that bottom link stays anchored, the parts you care about (your data and apps) will not.

The building block analogy (4.00 / 1) (#28)
by Ebon Praetor on Tue May 28, 2002 at 09:20:02 PM EST

Rather than the building block analogy, I think what we may be looking for is very similar to how various resistance and terrorist cells operate. Theoretically speaking, each cell should be able to operate independently of the others, so that if one is compromised, it doesn't damage the rest of the whole. Ideally, the cells also shouldn't have knowledge of where the directions are coming from, only that the directions are valid. Thus the top (the OS) should never really be compromised.

Unfortunately, this is really hard to implement in either a resistance cell or an operating system. Both need to send/receive information without compromising the other end; both need to be able to lose a block without compromising the whole system. But yes, everyone is right that the software and the OS would have to be built for that purpose -- but a model already exists in the real world.


Interesting comparison, because... (4.00 / 1) (#38)
by trhurler on Wed May 29, 2002 at 11:42:15 AM EST

It is probably mathematically provable that you cannot have provable security in resistance cells, as long as information has to flow both ways (i.e., up AND down the communications chain). You can use redundancy to trivially accomplish survivability of losses, but the only way to secure communications is either to hide them (they can be discovered) or to use a secret of some sort (encryption). Secrets can be compromised along with the cells that use them. What this means is that there is no way to guarantee that the guy on the other end is in fact the guy you think he is, and no way to guarantee your communications can't be intercepted (and possibly, in some cases, even altered). Under those circumstances, "security" is a matter of taking the best precautions and hoping it works out.

'God dammit, your posts make me hard.' --LilDebbie

[ Parent ]
Cell structure (4.00 / 1) (#40)
by Miniluv on Wed May 29, 2002 at 02:28:10 PM EST

I'd go further than "probably" and say "almost definitely" on the provable insecurity. The only thing is, provable within the realm of mathematics is different from provable within the realm of human endeavor.

I could easily construct a cell system which would escape virtually all attempts at compromise and be isolated enough that a single compromised cell would be compartmentalized to just that cell. It'd obviously involve redundant channels of communication, zero human interaction, and heavy use of cryptography. It'd be cumbersome, but doable. The only difficult part is initial key distribution, but given enough time I'm sure people could come up with a secure way to do that as well.

Fuck Walmart
[ Parent ]

Heh (none / 0) (#41)
by trhurler on Wed May 29, 2002 at 03:33:30 PM EST

Key distribution is the great unsolved problem of the universe. Saying "I'm sure people could come up with a secure way to do that" sounds like "I'm sure people could figure out how to teleport" to my ears.

'God dammit, your posts make me hard.' --LilDebbie

[ Parent ]
Well (none / 0) (#43)
by Miniluv on Thu May 30, 2002 at 09:37:44 AM EST

I'm still poking around the edges, and I agree that it is truly a huge problem, but I think there could be some systematic yet relatively secure method of key generation rather than plain distribution.

One scenario would be using something innocuous about the person who needs the key as a seed for some mathematical process that generates the key. Say, their street address raised to the power of their age, times their weight. That generates a relatively large number which can be used to seed some other algorithm. Remember, this only actually needs to be used once, which will probably slip under the radar of most surveillance networks. Using this initial key, you distribute the more secure key.
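To make the proposal concrete, here is a minimal sketch of that seeding idea with hypothetical numbers. The `seed_key` helper and its inputs are my own illustration, not part of any real protocol, and the tiny entropy of such personal attributes means this is NOT cryptographically secure - it only shows the mechanics being discussed.

```python
import hashlib

def seed_key(street_number: int, age: int, weight: int) -> bytes:
    """Derive one-time key material from personal attributes, per the
    scheme above: street address to the power of age, times weight.
    WARNING: illustration only -- the seed space is tiny and guessable."""
    seed = (street_number ** age) * weight
    # Hash the big integer down to fixed-size key material.
    return hashlib.sha256(str(seed).encode()).digest()

key = seed_key(street_number=1600, age=34, weight=82)
print(len(key))  # 32 bytes of (weak) key material
```

The hash step is only for convenience; it cannot add entropy that the seed never had.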

Like I said, I'm still groping around the edges of it, and not really devoting much time or energy to the problem as I'm not a terrorist, radical activist, or professional security researcher. Also remember, seemingly ludicrously expensive schemes might actually be reasonable if they only need to be done once.

Fuck Walmart
[ Parent ]

Problems (none / 0) (#44)
by trhurler on Thu May 30, 2002 at 12:13:26 PM EST

Your scheme relies on the algorithm being secure. The problem is, that will only work as long as your organization never does anything. The first time you draw attention to yourselves, they're going to start looking real hard at what you're up to. They're going to torture captives (and there will be captives). In short, they're going to compromise your methods.

Incidentally, this is the problem with almost any nonrandom key-material generator. Instead of distributing keys, you've distributed a method of obtaining them, and now anyone who gets that method owns your ass.

A second problem is that if a key generated this way is compromised, there's no way to generate a different key. This is also why biometrics are a total sham (aside from the fact that they don't work, as people are starting to demonstrate in research these days).

'God dammit, your posts make me hard.' --LilDebbie

[ Parent ]
Yes but (none / 0) (#45)
by Miniluv on Thu May 30, 2002 at 02:28:04 PM EST

The first time you draw attention to yourselves, they're going to start looking real hard at what you're up to.
Oh, absolutely, but the algorithm doesn't matter in this case; the seeding does. There are plenty of numbers attached to just about any person; you just have to pick them. Hell, I can think of a couple hundred individual body measurements which can be used in hundreds of thousands of combinations to seed an algorithm. Many of them can be worked into an innocent-sounding conversation which would reveal to both sides how to generate the one-time key used to transport the actually secure public/private keypair.

A second problem is that if a key generated this way is compromised, there's no way to generate a different key.
That's why you only use it once. You use a not-particularly-secure method of generating the key only to solve forward security for key distribution. Obviously you shouldn't need this very often, since public/private keypairs shouldn't change terribly often. Limit your liability where your liability is greatest, and devote the most effort to the places where the greatest reward can be gotten: use your weak key only once, recognizing that its security isn't terribly important, but spend a lot of effort generating long, secure keypairs for routine communications.

Fuck Walmart
[ Parent ]

Still problems (none / 0) (#46)
by trhurler on Thu May 30, 2002 at 04:33:43 PM EST

The seeding is an algorithm, and if it is easy for the terrorists to converse and tell how to generate a key, it will be easy for an eavesdropper to figure out how to generate the key too. If doing so requires special information, then you've merely substituted special-information distribution for key distribution - and odds are the keys are easier to distribute! If this is compromised, then the keys transmitted using it are compromised, and you have lost.

Security is not like a football game, where you can have special teams come out that are totally unsuited for regular play and do some particular job for you. Security is more like a defensive line in war; if it has a weak point, then the strength of the rest does not matter.

'God dammit, your posts make me hard.' --LilDebbie

[ Parent ]
This is where I disagree (none / 0) (#48)
by Miniluv on Fri May 31, 2002 at 02:41:42 AM EST

Security is exactly like both of those. Cryptography is the special teams unit, infosec is the Maginot line.

The goal of my system is to allow keys to be generated in an essentially unique fashion every time, by conveying a vague set of instructions, such as "Answer every third question honestly", which keys both people to know that, in fact, the current date to the third power mod the days in the month is the relevant question interval. Obviously recruiting happens to some extent in person, which allows for covert passing of this information, and for frequent changes, since it's essentially a one-time pad.

I'm not claiming it's perfect - I've already admitted it's half-baked - but I do believe it addresses some of the concerns you've raised.

Fuck Walmart
[ Parent ]

Two things (none / 0) (#49)
by trhurler on Fri May 31, 2002 at 12:39:09 PM EST

First off, it is not even remotely a one-time pad. It is a form of one-time keying, but one-time pads are systems with specific mathematical properties that yours does not even begin to approach.

Second, it is a bad system, because all that is required to break it is surveillance of recruiting operations, which are among the most high-profile things your hypothetical resistance movement is going to be doing, and therefore are what is liable to be under surveillance anyway - and unless you detect that surveillance, nobody involved will ever know their communications are compromised!

I'm not trying to insult you; the fact is, doing this stuff is ridiculously hard - to the point that people have dedicated lifetimes to it and found no good solutions. The idea that you'd dash one off the top of your head is absurd, no matter how clever you are.

'God dammit, your posts make me hard.' --LilDebbie

[ Parent ]
Understood (5.00 / 1) (#50)
by Miniluv on Fri May 31, 2002 at 01:19:49 PM EST

One-time pad wasn't what I meant; one-time keying was. One-time use was what I was really driving at.

I recognize that this is terribly difficult, and fairly well impossible to make perfect. My idea is more of a ghetto solution that achieves reasonable security within certain limits.

Fuck Walmart
[ Parent ]

permissions to run.. (3.00 / 1) (#11)
by sasquatchan on Tue May 28, 2002 at 01:53:18 PM EST

To make an OS fully secure, how does one restrict what runs with "kernel" privileges? Drivers and anything dealing with the hardware need such capabilities. And once anyone has access at the kernel level, pretty much anything is possible (rewriting function tables, changing permissions, etc.). So if you restrict your OS to disallow such things, it'll never catch on, because it can't be extended.

Further, I'd think the theorists would be all over you about a program to verify code. Sounds like a Turing problem.
-- The internet is not here for your personal therapy.

Verification (5.00 / 1) (#14)
by greenrd on Tue May 28, 2002 at 02:23:09 PM EST

Further, I'd think the theorists would be all over you about a program to verify code. Sounds like a Turing problem.

You misunderstand. There is a whole subfield of CS devoted to formally verifying code against certain models. There is no attempt to write a code verifier that can verify any possible legal code - that would amount to solving the halting problem. The aim is to verify a particular, restricted set of code. My understanding is that you restrict the kinds of code you're going to process sufficiently to avoid running into the halting problem.

"Capitalism is the absurd belief that the worst of men, for the worst of reasons, will somehow work for the benefit of us all." -- John Maynard Keynes
[ Parent ]

Sorry - mistake (none / 0) (#15)
by greenrd on Tue May 28, 2002 at 02:27:40 PM EST

There is no attempt to write a code verifier to verify any possible legal code - that would be solving the halting problem.

Actually - correcting myself - there are such verifiers - for example, Java bytecode verifiers. I'm a bit rusty on what exactly they do, but I know that (obviously) none of the properties they verify involve a fully general proof of halting (which would be undecidable, of course). Certainly checking low-level things like "must not try to cast a Button to an int" doesn't require any analysis of whether code will halt.

"Capitalism is the absurd belief that the worst of men, for the worst of reasons, will somehow work for the benefit of us all." -- John Maynard Keynes
[ Parent ]

Java Bytecode Verification (none / 0) (#18)
by Simon Kinahan on Tue May 28, 2002 at 02:42:25 PM EST

Here's what I know: all it really does is type checking at a fairly simple level. All JVM instructions take operands of known types and produce outputs of known types. For these purposes, all references have the same type. The bytecode verifier steps through the bytecode, recording the types of everything on the operand stack and in the local variables (it obviously already knows the types of the fields), and making sure no instruction attempts to use data as something different from what the instruction that created it thought it was.

Even then, there are legal Java bytecode programs that cannot be verified. For the verifier to work, all forks of any conditional statement must leave the operand stack in the same state when they merge back together. These are mostly pathological cases, and I think no Java program will ever compile to something that's not verifiable, but it's an interesting example of how even very simple verification is hard.
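The stack-type tracking described above can be sketched with a toy stack machine. This is not real JVM bytecode - the instruction set and the `verify` function are invented purely for illustration of the idea: step through the code, tracking only types, and reject any instruction whose operands don't match.

```python
# Toy bytecode verifier: abstract interpretation over types only.
# Each instruction declares the types it pops and the types it pushes.
INSTRUCTIONS = {
    "push_int": ([], ["int"]),
    "push_ref": ([], ["ref"]),
    "iadd":     (["int", "int"], ["int"]),   # add two ints
    "getfield": (["ref"], ["int"]),          # read an int field off a ref
}

def verify(code):
    stack = []  # types currently on the operand stack
    for op in code:
        pops, pushes = INSTRUCTIONS[op]
        for expected in reversed(pops):
            if not stack or stack.pop() != expected:
                return False  # type mismatch: reject the program
        stack.extend(pushes)
    return True

print(verify(["push_int", "push_int", "iadd"]))  # True
print(verify(["push_ref", "push_int", "iadd"]))  # False: ref where int expected
```

Note that this never asks whether the code halts - it only checks that every instruction's inputs have the types its producer promised.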


If you disagree, post, don't moderate
[ Parent ]

Might be off-topic but... (none / 0) (#12)
by MickLinux on Tue May 28, 2002 at 02:08:37 PM EST

Here's how I see the security problem, in light of the 486 architecture (and perhaps similar architectures):

You need to secure the memory and the system-operation instructions against all kinds of things like overflow attacks... essentially, the memory must be impervious to everything.  So must the system operation (disk I/O, etc.).

As an ignoramus in the security field, I think it may be possible to do this by making use of the protection levels provided by the 486+ architecture [big caveat about the closed-source system architecture here... I'll get to it]:

Level 0:  Task switching, security control.
Level 1:  All routines/memory not currently in use
Level 2:  Secure routines currently in use [such as hard disk access]
Level 3:  Unsecure routines currently in use [such as programs].

According to the 486 literature, a trespass normally results in an exception, which is handled by returning the task to Task Switching, which then shuts down the routine and sends a "routine failure" signal to the calling source.  You will have to set up the response so that a program cannot intentionally generate a double exception, but that shouldn't be too difficult.

All task switches are then controlled from Task Switching, which in turn keeps track of the different processes, child processes, and permissions, as well as hard-disk access permissions [depending on the area].  It then only allows through those hard-disk access requests that meet the security requirements.  Yes, it will take a long time.  More than that, hard-disk space and memory space are divided into "program private", "system private", "public", "unused", and "unused -- clear first (formerly private)", and cross-listed by user ID.  So it's going to be slow.

You will also have to make the computer highly resistant to surprise shutdowns, through the method by which data is written (write the data first, then write the pointer that "turns it on").
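That write-the-data-first, then-publish-the-pointer ordering has a familiar modern analogue: write to a temporary file, force it to disk, then atomically rename it into place. A minimal sketch (the `atomic_write` helper and filename are my own illustration, not part of the proposal):

```python
import os
import tempfile

def atomic_write(path: str, data: bytes) -> None:
    """Crash-safe write: a surprise shutdown leaves either the old
    contents or the new contents, never a pointer to garbage."""
    dirname = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=dirname)
    with os.fdopen(fd, "wb") as f:
        f.write(data)           # 1. write the data...
        f.flush()
        os.fsync(f.fileno())    # 2. ...and force it to disk
    os.replace(tmp, path)       # 3. atomically flip the "pointer"

atomic_write("demo.txt", b"hello")
print(open("demo.txt", "rb").read())  # b'hello'
```

The rename in step 3 plays the role of the pointer that "turns it on": until it happens, readers still see the old data.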

But that is the price you pay for security.

Now, the testing phase -- it would seem to me that if you can prove that Task Switching is impervious to attacks through its simplicity of interface, and keeps the privacy secure, then the only other security issue is whether a trusted/untrusted program is allowing private material to become public.  But that is a security issue for the individual programs.

---> Now for the big caveat.  When you have a closed system architecture, you always have the possibility (indeed, with Intel, the probability) that there are undocumented routines out there, some of which may be designed specifically to compromise security.  So whatever you develop, there should be this caveat.  People have to understand that if they don't watch the production of every part, and then keep every part with them at all times, there could be *hardware trojan horses* involved.  Indeed, I have heard that the British have a way to watch what is on your TV in order to enforce some kind of TV tax.  I have no idea whether that is true, but people have to be aware that security is not infinite.  If the TV or keyboard can store info for playback later, then your security is gone right there.  And how hard is it, anyway, to swap keyboards?  You need access, and approximately 20 seconds, or 10 with practice.

Just something to think about.

I make a call to grace, for the alternative is more broken than you can imagine.

Oh yes, details (none / 0) (#13)
by MickLinux on Tue May 28, 2002 at 02:21:20 PM EST

Just thought of a bit more.  In writing to the hard disk, the proper process would be:

(1) Mark unused space as "unused -- erase first".
(2) Write data to space.
(3) Write pointer to new data in directory tree.
(4) Change data space to "private" / "public", etc.

It will even be desirable to set the bit values of "private", "public", and so on such that a power-down in the middle of writing these values cannot turn a transfer between "unused -- erase first" and "private" into "public".  You do this with the following bit structure:

"Unused" -- 00
"Unused -- Erase first" -- 01
"Private -- " 11
"Public" --10

That way, the transfer between "Unused-erase first" and "Private" involves the change of only one bit.
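A quick way to check that claim: with this assignment, the routine transition from "unused -- erase first" to "private" flips a single bit, while landing on "public" would require flipping two. A small sketch (the state names and the `bits_changed` helper are illustrative):

```python
# The two-bit state codes proposed above.
STATES = {
    "unused":             0b00,
    "unused_erase_first": 0b01,
    "private":            0b11,
    "public":             0b10,
}

def bits_changed(a: str, b: str) -> int:
    """Hamming distance between two state codes."""
    return bin(STATES[a] ^ STATES[b]).count("1")

print(bits_changed("unused_erase_first", "private"))  # 1: safe transition
print(bits_changed("unused_erase_first", "public"))   # 2: can't happen by one flip
```

So a power failure that corrupts at most one bit mid-write can never silently promote private data to public.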

Directories should have a special structure, in that they should be filtered to appear only if the user and the program have permission to see the data stored beneath them.  To do this, though, essentially each directory will have to have a username attached to it, or else be marked as "user-public" from the get-go.

It's not an easy job.  I'm not sure if it is possible.  But if you do build it, you will definitely get customers -- starting with the German Army, which discovered all its email routing itself through Denver.
Then again, that brings up the interesting point that if you're going to do this, you'd better do it simultaneously in all countries, because each country will claim that export is "weapons export".

Aaah, freedom is such a complicated thing.  To maintain it, you have to insist that nobody have it.


I make a call to grace, for the alternative is more broken than you can imagine.
[ Parent ]

Oh, and robustness too (none / 0) (#19)
by MickLinux on Tue May 28, 2002 at 02:47:21 PM EST

That's another thing.  

The kernel has to be robust.  That means the kernel has to be able to identify problem areas in memory and avoid them.  It also has to check the results every time it writes a security bit, to make sure things were actually written properly.  A lot of this can be done in the background (there can be a 'halt' instruction when things really slow down, allowing the kernel to go through and check data against checksums and such).

Ideally, the kernel will also run a watchdog timer (WDT) on itself, so that if there is an electrical glitch and the kernel gets lost, it will catch itself.  But if there *is* an electrical glitch, the kernel will need to shut itself down, saving its state as it goes, and then bring itself back up from zero, checking the saved state for errors.

This might seem to belong more in the macrokernel, but since it deals with basic security, it really needs to go in the microkernel.

Sorry about all these replies to myself.  I hope this isn't spamming, but I'm trying to offer what help (ideas) I can, in response to your specific request.

I make a call to grace, for the alternative is more broken than you can imagine.
[ Parent ]

Back-channel insecurity. (4.00 / 2) (#24)
by jabber on Tue May 28, 2002 at 04:24:30 PM EST

The military has a considerable, multi-tier specification for security levels, including requiring unique usernames and passwords, requiring no perceivable interaction between applications running on a system, and EM-shielded, physically isolated systems guarded by Marines.

The interesting one here is the middle item, dealing with interactions. On a particular mainframe system, a VAX IIRC, a password is checked against the valid version in a peculiar way: each character of the password is compared in turn, and if one does not match, the system requests re-entry of a valid password.

The problem? When the system reloads the password prompt for an incomplete password, it does so from memory, but when it does so for an incorrect one, it page-faults. A page fault is an observable phenomenon which can be monitored, and so a password can eventually be determined.

To build a truly secure system, such observable 'side-effects' would need to be taken into account.

In addition, DoS attacks on multiuser machines are also a factor in providing secure services. Disk quotas are a start, but memory-usage and CPU-utilization quotas must also be considered.

[TINK5C] |"Is K5 my kapusta intellectual teddy bear?"| "Yes"

Native code is bad (none / 0) (#27)
by k31 on Tue May 28, 2002 at 08:13:57 PM EST

My personal feeling is that as little as possible should be written in native code.

In the past, the performance penalty would have made this impossible. But Java (a software VM) and Transmeta (a hardware VM, of a sort) are commercially successful platforms which illustrate that PCs now have more than enough power to do emulation in real time.

Now, what if instead of emulating an existing system per se, code was written that took advantage of the fact that the program ran within a VM?

Thus VM meta-information could be used to make decisions, and each process could run within a very strictly defined sandbox.

Ultimately, I think, it is non-native machines which will allow for "real" security.

That, and more great programmers.

Your dollar is you only Word, the wrath of it your only fear. He who has an EAR to hear....
[ Parent ]

All you're changing is the name. (5.00 / 1) (#30)
by NateTG on Tue May 28, 2002 at 10:42:44 PM EST

By using a VM you're just moving the problem around. You're still vulnerable to weaknesses in the VM -- both in terms of corrupting the VM's behavior, and in terms of the VM's failure to protect the underlying environment.

[ Parent ]
A matter of scale (none / 0) (#47)
by k31 on Thu May 30, 2002 at 10:09:23 PM EST

Yes, the VM is vulnerable... but if you can secure the VM, then you don't have to secure the programs running within it.

Something like the F0 0F bug would be fixable by patching the VM; replacing a CPU is not as easy as patching the VM that runs on it... the VM could be like a BIOS EEPROM, just as easily updated.

I agree that we're moving the problem, but that's a good thing, it's moving to where it is more easily contained.

Your dollar is you only Word, the wrath of it your only fear. He who has an EAR to hear....
[ Parent ]

TENEX (4.66 / 3) (#29)
by FlipFlop on Tue May 28, 2002 at 10:00:55 PM EST

In a particular mainframe system, a VAX IIRC, a password is checked against the valid version in a peculiar way..

Perhaps you're thinking of TENEX for the PDP-10. Someone coded the password-checking algorithm to compare the characters one by one. The first time it found a mismatched character, it would stop and indicate failure. This was a nice, efficient comparison routine.

To exploit the security hole, an attacker would simply make a guess at the password. Then they would place their guess across a page boundary, with the first character in one page and the rest of the password in another. They would force the second page out of memory. Then they would call the password-checking routine.

If the first character was incorrect, the password-checking algorithm would return right away. If the first character was correct, the algorithm would load the second page back into memory to check the next character.

When the password checking routine returned, the attacker would check if the second page had been loaded into memory. If so, they knew the first character was correct. If not, they knew the first character was incorrect.

With this approach, an attacker could guess the password in at most 128*n tries, where n is the password length. That's far better than the 128^n tries that brute force requires.
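The arithmetic behind that comparison, with the 7-bit character set of the PDP-10 and an n-character password (the `guesses` function below is my own illustration of the work factor, not the attack itself):

```python
# A prefix oracle (like the paging side channel) turns "guess the whole
# password" into "guess one character at a time": the work drops from
# alphabet**n to alphabet*n.
ALPHABET = 128  # 7-bit characters

def guesses(n: int, per_char_oracle: bool) -> int:
    """Worst-case number of guesses for an n-character password."""
    return ALPHABET * n if per_char_oracle else ALPHABET ** n

n = 8  # an 8-character password
print(guesses(n, per_char_oracle=False))  # brute force: 128**8, about 7.2e16
print(guesses(n, per_char_oracle=True))   # with the page-fault leak: 1024
```

This is why leaking even one bit per attempt ("did the comparison get past the first character?") is catastrophic: it makes the cost linear rather than exponential in the password length.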

AdTI - The think tank that didn't
[ Parent ]

I've got a secure os. (3.00 / 3) (#31)
by NateTG on Tue May 28, 2002 at 10:54:22 PM EST

  • It runs on any hardware.
  • It consumes no power.
  • It is guaranteed not to run hostile code.
  • It is free.
  • It is already available.
  • It is hard real-time.
  • It is proven technology.
  • And it's completely useless (tm).
Just remember that you can't power the computer on while using it.

Comments from the author (5.00 / 4) (#33)
by holofukami on Wed May 29, 2002 at 03:57:37 AM EST

As the author of this article, I would like to reply to some of the comments that have been posted.

First, I would like to apologize to those who posted editorial comments for not having addressed them in the article. It looks like the Kuro5hin editors posted the story only six hours after I submitted it to Edit mode, so I just didn't make it in time.

Second, I should have explained better how software verification works. Basically, you prove by mathematical means that some model of your code conforms to its specification. For software, you usually do this either by model checking or by using powerful theorem provers. (If you are interested, please read the section on related work in this VFiasco technical report.)

In all but the simplest cases, you do not expect a program to do the verification automatically. Verification is an interactive process that is very expensive.

Third, I got the impression that many posters have not understood the architecture of the system correctly. What we are going to verify is only a microkernel. The Unix operating system (in this case, L4Linux) runs as an application program on top of this microkernel. The (small) secure OS servers and security-sensitive applications (such as a key-signing application) also run as application programs on top of the microkernel. As all components (including the Unix OS) run in their own address spaces, the Unix components -- including the Unix "kernel" -- can be completely untrusted. Therefore, a broken DNS daemon or telnet daemon or a root break-in cannot compromise the security goals of the security-sensitive applications.

(Yes, it is true, we will only verify the microkernel, not all of the trusted system components. But you have to start somewhere!)

Fourth, the usability of the system we envision is that of a normal Unix desktop machine. There is no need to take Draconian measures to restrict users from breaking into the "trusted compartment": Unix users can compromise the security of the Unix subsystem (and based on the historical record, they will, because the Unix system is huge and riddled with bugs!), but the properties of the verified microkernel will prevent them from breaking into the secure compartment of the system, which runs beside Unix, not on top of it.

Finally, I would like to encourage readers to think about the questions I have asked in my article. What properties should we prove about the microkernel? What properties does your security-sensitive application need from the underlying kernel or operating system? What is the best way of deriving such properties?

Another forum to ask these questions on ... (none / 0) (#36)
by MfA on Wed May 29, 2002 at 05:07:51 AM EST

Have you tried one of the mailing lists related to Eros? (Or one of the other capability system related lists on the eros-os.org site.)

[ Parent ]
Why not draw on existing materials (none / 0) (#37)
by Maniac on Wed May 29, 2002 at 10:26:26 AM EST

> Which security properties should be proved about the kernel? ...

Well, to summarize the material from the "Rainbow Books"...

  • discretionary access controls (user specified limits to access, system enforced)
  • mandatory access controls (system defined limits to access, system enforced)
  • identification (who the user is)
  • authentication (what the user has access to)
  • auditing (of normal operation and attempts to circumvent controls)
The details of what you need to implement are described in those books, or updated in the new "Common Criteria" for secure systems. This is going to be hard - another comment mentioned out-of-band signalling through an interaction with the paging system. I don't recall that particular problem on the VAX, but it was present (and fixed) in Multics. There was a very subtle interaction that exposed that particular interface.

Umm... (none / 0) (#51)
by ectospasm on Tue Jun 04, 2002 at 04:49:30 PM EST

  • identification (who the user is)
  • authentication (what the user has access to)
Correct me if I'm wrong, but isn't authentication proving the user is who they say they are, and authorization what the user has access to?

[ Parent ]
Correction (none / 0) (#52)
by Maniac on Wed Jun 05, 2002 at 11:36:39 AM EST

I stand corrected. I reread the relevant sections in the TCSEC (orange book) and network interpretation (red book) to be sure.
  • identification (who you claim to be)
  • authentication (method used to establish the validity of a claimed identity)
However, there isn't a separate "authorization" concept in these documents, since that is part of the discretionary and mandatory access controls. That's not to say the phrase "authorization" doesn't show up all over the place - just that it is not managed separately from the other basic concepts.

By the way, the reason I distinguish between identification and authentication is the split allowed by the red book, where one system can determine the identity of an individual and another can do the authentication steps (typically a username/password entered on a client machine, with authentication by a server via Kerberos, NIS, etc.).

[ Parent ]

Looks valid to me (4.00 / 1) (#42)
by MickLinux on Wed May 29, 2002 at 03:45:37 PM EST

Everything you said looks valid to me.  Even so, you can do some things to be (for example) resistant to power-failure or power-pulse attacks, even if not completely so.  

For example, consider the way the mouse-detection wheel is built on some early equipment.  Four bits cycle through the pattern

0000, 0001, 0011, 0010, 0110, 0111, 0101, 0100,
1100, 1101, 1111, 1110, 1010, 1011, 1001, 1000, and repeat...

That bit pattern ensures that only one bit changes at a time, and is important for some very slow equipment.
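That pattern is the 4-bit reflected Gray code, which can be generated directly; here is a small sketch checking the one-bit-change property (the `gray` helper uses the standard i XOR (i >> 1) construction, which reproduces the sequence quoted above):

```python
def gray(i: int) -> int:
    """i-th value of the reflected binary Gray code."""
    return i ^ (i >> 1)

seq = [gray(i) for i in range(16)]
print([format(g, "04b") for g in seq])

# Verify that exactly one bit changes at each step, including the
# wrap-around from the last value back to the first.
for a, b in zip(seq, seq[1:] + seq[:1]):
    assert bin(a ^ b).count("1") == 1
print("only one bit changes at each step")
```

It is exactly this single-bit-transition property that makes the encoding safe against reads (or power failures) that land mid-transition.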

Well, if you use a similar data-construction method when adjusting your hard-disk permissions, you can ensure that a powerdown will never compromise the data on that score.  On the other hand, if a powerdown also wipes the write head across the drive, then you have problems [and hardware is, as you indicated, a problem then].  But even then, with certain hardware-dependent checks, you can minimize those problems -- as long as you know the hardware.

If you don't know the hardware, then you'd better provide for changes later [also as you indicated].

So yes, I would say I agree with your most recent comment wholeheartedly.

Also... what I had written for sqrt and log was, I believe, the fastest possible for a computer without a coprocessor.  The sqrt was done much like a binary divide, except that it was based on (a+b)*(a+b) and therefore had some variations.  The log did use tables, but only with as many entries as the number of bits used [each entry a full double-precision number].  I don't know if it was the fastest possible, but it is the fastest I could come up with.  Likewise, I am pretty sure my code for the dither and scale was as fast as possible, though I would do things differently if I were rewriting it, representing color as vectors.  My 3-D panel-drawing code (with texture mapping), though, I believe was slower than it had to be.  It was good, but not the best.

I make a call to grace, for the alternative is more broken than you can imagine.

Security properties needed from operating systems | 52 comments (49 topical, 3 editorial, 0 hidden)

