The question of why the computer interface isn't changing
is regularly asked on IRC, often on Kuro5hin, and over on
Edge.Org, Jaron "Virtual Reality" Lanier practically
despairs over it.
The first reason, I assert, is that the
physical devices used for the human-computer interface
sharply limit the range of motifs, metaphors, and
ideas that will work well with it. The computer interface
nowadays is constrained to a pointing device, a keyboard,
a pair of speakers, and a two-dimensional, usually flat,
multicolored screen. Throw in a microphone if you feel like it.
The pointing device comes in several versions, but the
rolling mouse seems to be the only version most people
will tolerate for long periods of time. Some prefer a
trackball, but the remaining options are even
less likely to be adopted. The nipple is approaching
extinction, and the finger pad is for limited use only.
The stakes here are no longer academic; they are
ergonomic. That leaves little room for physical creativity.
The same goes for the keyboard. The layout will be
tweaked for years to come to maximize the tolerance
people show for the keyboard, and there is an ongoing
redesign of the layout of speed keys, but the interface
is defined. Other input devices are unlikely to make
much headway.
Speech recognition's only use is to let you keep
working when you take that carpal tunnel-mandated
break. Talking into the computer for prolonged periods
is a sure way to wreck your vocal cords. Foot pedals
are similarly unattractive. We may learn to like
enhanced mice, like the roller-for-a-middle-button
type that Logitech sells, or maybe mice with tactile
feedback, but these changes would not be revolutionary.
As for the output devices, the speakers are limited
in the interactivity they afford. We use them mostly for
music, partly because canned sounds as computer messages
get tiresome after a while, and partly out of concern
for privacy.
The screen itself may get more versatile as it becomes
lighter, or as LCD technology lets us sit closer to it,
but once again, these changes will not be earthshaking.
This physical configuration allows for huge amounts
of creativity, but creativity doesn't pay well
in the realm I am hawking here, which is
mundane computing (MC). MC is what demands
optimization of the graphical user interface
while limiting innovation. MC does not reward
clever ideas. It rewards clever ideas that people
will use eight hours a day. Vive la différence.
I want the term 'mundane computing'
to catch on as an anti-hype agent.
Jaron Lanier's work pioneering virtual reality
shows that he is a creative genius. But his ideas
simply do not work on the interface front because
of the limits of the devices involved. People
do not like head-mounted displays. People
do not like motion sickness. And when it comes to
organizing the information they use in conjunction
with computers, people (unless they do CAD/CAM)
do not need fancy 3D viewing programs.
So, with tweaks here and there, the 'desktop' metaphors
for the GUI will prevail (though they are already
moving outside what was once obviously a 'desktop' motif).
This is not a bad thing. The user interface for the
automobile is not changing much nowadays, either.
However, there is much to be done in the optimization
arena, and for the same reasons.
Because computing has established its place in the
modern office, there is a realm of computing that deserves
to be set apart from the rest of the field; this is the
realm I am calling "mundane computing." MC is the second
reason why the GUI is not about to change much, nor
should it.
By 'mundane' I do not mean 'boring.' First of
all, I mean 'no longer novel.' MC is every
use of computing which has been in the public view
long enough, which people employ for long parts of their
day, and in which the computer and software are obviously
a means to an end, and not an end in themselves.
Word processing, spreadsheets, graphic design,
email, and many other uses of computers should
fall in this realm.
In MC, a user interface must work well, and this
'works well' attribute is enhanced by previous
familiarity on the part of the user. So in MC there
is a barrier to entry for new ideas (which is,
of course, the other reason), and progress
largely takes the form of evolution, not revolution.
Hackers should rejoice, however, because dammitall,
evolution remains necessary in the graphical
user interface. Enough evolution has already happened
to make my point. The desktop motif is no longer
so close to a real desktop. Remember when you discovered
that you could mark a rectangle on the background of
a Macintosh and then move all the icons within it in unison?
Remember when you realized that was an utterly
useless feature? I forget when that last one happened, myself.
There are other ideas (motifs, widgets, et cetera)
that have sprung up in various programs, ideas
that are creative but unlikely to make it into
mainstream software. A nifty thing I discovered
(and tried to use) is Open Data
Explorer. Take the pipe motif from the Unix
shells, use simple widgets to represent it graphically,
and then make the widgets perform Matlabesque tasks.
Nice idea, but I'll stick to Matlab and Octave.
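To make the pipe motif concrete, here is a minimal
sketch in Python of the same idea expressed textually
rather than with graphical widgets. This is not the
program's actual interface; the stage functions are
illustrative stand-ins for the Matlab-esque tasks.

    import numpy as np

    def pipe(data, *stages):
        # Thread the data through each stage in turn,
        # the way a Unix shell pipes text between programs.
        for stage in stages:
            data = stage(data)
        return data

    signal = np.random.randn(1024)
    smoothed = pipe(
        signal,
        lambda x: x - x.mean(),                                 # detrend
        np.abs,                                                 # rectify
        lambda x: np.convolve(x, np.ones(8) / 8, mode="same"),  # smooth
    )
    print(smoothed[:5])

The textual form is terse and scriptable, which is much
of the reason I'll stick to the textual tools.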
This is just one example out of many things that
are creative and clever but unusable for long periods
(for most people, anyway). The creative
programmer is going to find resistance to
his ideas entering mundane computing.
Since the need for optimal usability will drive the GUI
to change in the coming years, let's take a look at where
it will go. The bone-rattler example of a GUI was
something I encountered in a physiology class.
It was a simulation written with Xlib in which there were
quite a few labeled buttons to 'press'. The target for
each button, however, was a 6 by 6 pixel bullet
next to the label, rather than a rectangle surrounding
both. Mousing needs to be minimized, and therefore targets
enlarged; the sketch after this paragraph puts rough
numbers on that claim. The new Mac trick of treating the mouse
cursor like a magnifying glass when appropriate is
a step exactly in that direction. Another is the ability
to have sloppy focus in most X window managers. (Note
that Microsoft is looking at what X window managers do well.)
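Here is that sketch, using Fitts's law,
MT = a + b * log2(D/W + 1), which estimates the time to
acquire a target of width W at distance D. The constants
a and b and the 300-pixel travel distance are assumed
values for illustration, not measurements.

    import math

    def movement_time(distance_px, width_px, a=0.1, b=0.15):
        # Fitts's law: acquisition time grows as the target
        # shrinks relative to the distance traveled.
        # a and b are assumed device constants, not measured ones.
        return a + b * math.log2(distance_px / width_px + 1)

    for width in (6, 60):  # the 6x6 bullet vs. a full button rectangle
        print(f"{width:3d}px target: {movement_time(300, width):.2f} s")

With these assumed constants, the 6-pixel bullet takes
roughly twice as long to hit as a rectangle spanning the
whole label would, which is exactly why targets need
enlarging.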
What will ultimately limit changes in the GUI
is the need to use only those widgets that represent
two-dimensional physical objects. Three-dimensional
widgets won't work on a CRT.
But with a slowdown in the evolution of
most software (knock wood, cross fingers,
throw salt over the left shoulder), the average
user will come to expect software to be
more stable, more reliable, faster, and
to contain fewer nasty security surprises.
Wouldn't maturity in the software industry
be nice? Just a thought.