Here's how I see the security problem, in light of the 486 construction (and perhaps similar constructions):
You need to secure the memory and the system operation instructions against all kinds of things like overflow attacks... essentially, the memory must be impervious to everything, and so must the system operations (disk I/O, etc.).
As an ignoramus in the security field, I think it may be possible to do this by making use of the privilege levels provided by the 486+ construction [big caveat about the closed-source system architecture here... I'll get to it]:
Level 0: Task switching, security control.
Level 1: All routines/memory not currently in use.
Level 2: Secure routines currently in use [such as hard disk access].
Level 3: Unsecure routines currently in use [such as programs].
According to the 486 literature, a trespass normally results in an exception, and is responded to by handing the task back to Task Switching, which then shuts down the offending routine and sends a "routine failure" signal to the calling source. You will have to set up the response so that a program cannot intentionally generate a double exception, but that shouldn't be too difficult.
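The trespass-and-shutdown response can be sketched in Python. This is only a model of the scheme as described: the level numbers, the "routine failure" return value, and the double-fault guard are my own guesses at how it would hang together, not anything the 486 does in software (real ring checks happen in hardware).

```python
# Hypothetical model of the trespass-handling scheme. Lower level
# numbers are more privileged, matching the 486 ring convention.

class Trespass(Exception):
    """Raised when a routine touches something above its privilege level."""

class TaskSwitcher:
    def __init__(self):
        self.handling_fault = False   # guard against double exceptions

    def check_access(self, caller_level, required_level):
        # A level-3 program may not touch a level-2 resource directly.
        if caller_level > required_level:
            raise Trespass("privilege violation")

    def run(self, routine, caller_level, required_level):
        try:
            self.check_access(caller_level, required_level)
            return routine()
        except Trespass:
            if self.handling_fault:
                # A fault while already handling a fault: halt rather
                # than recurse, so a program can't bootstrap a double
                # exception into anything worse.
                raise SystemExit("double fault - halt")
            self.handling_fault = True
            try:
                # Shut down the routine; signal the calling source.
                return "routine failure"
            finally:
                self.handling_fault = False
```

So a level-2 disk routine calling at its own level runs normally, while a level-3 program reaching for level-2 resources gets only "routine failure" back.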
All task switches are then controlled by Task Switching, which in turn keeps track of the different processes and child processes and their permissions, as well as hard disk access permissions [depending on the area]. It then allows through only those hard disk access requests that meet the security requirements. Yes, it will take a long time. More than that, hard disk space and memory space are divided into "program private", "system private", "public", "unused", and "unused - clear first (formerly private)", cross-listed by user id. So it's going to be slow.
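A minimal sketch of that disk-access gate, assuming a lookup table keyed by area: the five area categories come from the scheme above, but the table layout, the example areas and user ids, and the check logic are invented for illustration.

```python
# Hypothetical access table: area name -> (category, owner user id).
# Categories are the five from the scheme; areas/owners are made up.
AREAS = {
    "boot":      ("system private", None),
    "alice_doc": ("program private", "alice"),
    "shared":    ("public", None),
    "old_tmp":   ("unused - clear first", "bob"),
}

def may_access(user, area, task_switching_request=False):
    """Return True only if the request meets the security requirements."""
    category, owner = AREAS[area]
    if category == "public":
        return True
    if category == "system private":
        return task_switching_request        # only the system itself
    if category == "program private":
        return owner == user                 # cross-listed by user id
    if category.startswith("unused"):
        # Formerly-private space must be cleared before anyone reuses it.
        return False
    return False
```

Every disk request funnels through one function like this, which is exactly why it will be slow, and also exactly why it is checkable.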
You will also have to make the computer highly resistant to surprise shutdowns, through the order in which data is written (write the data first, then write the pointer that "turns it on").
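Here is a small simulation of the write-the-data-first, flip-the-pointer-last idea. The two-slot "disk" is my own invention to keep it short; a real implementation would also have to force the writes to reach the platter in that order (a flush between the two steps).

```python
# Simulated disk: a pointer slot plus two data slots. The pointer
# names whichever slot currently holds the live record.
disk = {"pointer": "slot_a", "slot_a": "old data", "slot_b": None}

def write_record(new_data, crash_before_pointer=False):
    # Step 1: write the new data into the *inactive* slot, so the
    # live record is never overwritten in place.
    spare = "slot_b" if disk["pointer"] == "slot_a" else "slot_a"
    disk[spare] = new_data
    if crash_before_pointer:
        return                       # surprise shutdown: pointer untouched
    # Step 2: only now "turn it on" - a single pointer write commits it.
    disk["pointer"] = spare

def read_record():
    return disk[disk["pointer"]]
```

If the machine dies between the two steps, the pointer still names the old slot, so after reboot the reader sees the old record whole rather than a half-written new one.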
But that is the price you pay for security.
Now, the testing phase -- it would seem to me that if you can prove that Task Switching is impervious to attack (thanks to the simplicity of its interface) and keeps private data private, then the only remaining security issue is whether a trusted/untrusted program is making private material public. But that is a security issue for the individual programs.
---> Now for the big caveat. When you have a closed system architecture, you always have the possibility (indeed, with Intel, the probability) that there are undocumented routines out there, some of which may be designed specifically to compromise security. So whatever you develop, it should carry this caveat. People have to understand that unless they watch the production of every part, and then keep every part with them at all times, there could be *hardware trojan horses* involved. Indeed, I have heard that the British have a way to watch what is on your TV in order to enforce some kind of TV tax. I have no idea whether that is true, but people have to be aware that security is not infinite. If the TV or keyboard can store information for playback later, then your security is gone right there. And how hard is it, anyway, to swap keyboards? You need access, and approximately 20 seconds, or 10 with practice.
Just something to think about.
I make a call to grace, for the alternative is more broken than you can imagine.