An interesting read, and a cool idea, but I have to disagree to an extent...
Carrying this out would, I think, make a good building block, but I wouldn't consider it an end goal in the pursuit of any form of A.I. For me, the end goal would be to understand (not just define, but UNDERSTAND) intelligence and consciousness. I'd like to know how I'm conscious, why, whether plants and animals are self-aware, etc.
What this looks like to me is a sophisticated program with programming capabilities, submerged in a separate universe that would hinder it from growing to understand our universe. That hindrance bothers me a little. It reminds me of an article about what you would do if you were (a) God: create a universe within our own containing intelligent life - possibly more intelligent than us - who in turn must create a universe of their own to answer the question 'Why are we here?'... This machine, if given this particular breed of A.I., would be a sysadmin's dream (or nightmare), but the question remains whether it constitutes anything of merit beyond being another software solution.
What happens when it discovers the rm -rf command? What if it were to accidentally delete part of its own structure, or found a way to augment itself so that instead of performing cursory tasks it became an uber-virus of sorts? HAL? Open the pod bay doors!
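One obvious guard rail against that scenario - purely a sketch of my own, with invented names, not anything from the original proposal - is to interpose a supervisor between the program and the real system, so that a command like rm -rf never reaches the filesystem unless it's explicitly whitelisted:

```python
# Hypothetical supervisor layer: the sandboxed program proposes
# shell-style commands, and only explicitly whitelisted programs
# are allowed through; everything else (rm -rf included) is blocked.
ALLOWED = {"ls", "cat", "echo"}

def supervise(command: str) -> str:
    """Return 'run' if the command's program is whitelisted, else 'block'."""
    stripped = command.strip()
    program = stripped.split()[0] if stripped else ""
    return "run" if program in ALLOWED else "block"

print(supervise("ls /tmp"))   # → run
print(supervise("rm -rf /"))  # → block
```

Of course, a sufficiently clever program would find a way around any fixed whitelist, which is rather the point of the HAL joke above.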
In theory I could see this as useful (and detrimental, obviously) in several ways, and I am reminded (can't find the link, though) of a semi-sentient network-protection program (written in LISP) - something that recognizes all sorts of attacks on a system and reports that information to other sections of the network to prepare them for similar attacks... it would then take the necessary measures to ensure that the network remained stable and unbreached. I think they even used some Matrix-type terms like "Agent". A much scaled-down version of what you're proposing, at least as I picture it.
However, I could only see this as a step. Instead of giving up on all other forms of A.I. research, I would propose building this as a modular component - a workable product that could be joined with other A.I.-based projects to form an uber-A.I. Constructicons, form Devastator!
To relate this idea to a more down-to-earth situation, I like to consider the computer as a living entity, much like ourselves (although metal, plastic, and quite noticeably slower at moving around). Our bodies are teeming with cells, viruses, bacteria, ... each a living force (sentient? who knows). Some cells group together to form tissues, organs, and systems, and the brain is the organ acting as the fileserver. The A.I. you propose would be something like our subconscious layer, in charge of keeping all the systems in check (digestion, endocrine, circulatory - the things necessary to stay alive). We're aware that all of this goes on in the background of our waking consciousness, and perhaps I should ask whether our subconscious is considered intelligent, or whether it would pass a much-restricted Turing Test of sorts. Perhaps it could, if only we knew the proper questions to ask....
These last thoughts lead me to your question about whether we could create a true A.I. with more ease if we confined it to its own living space. I believe no, we couldn't. This A.I. would be equivalent to our subconscious, which isn't easier to deal with, but considerably more difficult! In my opinion a true A.I. would be capable of dealing with the world as we know it, like animals can, because despite its being made of silicon and a bunch of other fancy components it's still forced to follow the Laws of Physics, and in my mind a "true" A.I. would need to engage with that world rather than with its own composition.
There was more I was going to say, but I lost my train of thought - perhaps someone else can pick up where I left off ...?
(Discordia) :: Hail Eris!
Everything you've just read was poetry and art - no infringement!