If not windows...what?

By jamesarcher in Technology
Wed Jan 10, 2001 at 03:52:31 PM EST
Tags: Software (all tags)
Software

Since the Xerox days, the concept of "windows" has been the fundamental unit of the GUI. Are there alternatives?


We are used to seeing strange and wondrous GUIs in movies, but in the real world everything comes back to the lowest common denominator: windows. I'm not referring to the operating system of that name, but to the "resizeable box" paradigm that has been around since before even the Mac.

I wonder if this is the only feasible approach to a GUI. Does anyone have any ideas about how to get rid of the legacy windows metaphor and move to something else? What interesting GUIs have you seen in movies, TV shows, etc.? Could any of them work in the real world?


Poll
Best Windowing Innovation
o Maximize/Minimize 24%
o Tiling 5%
o Cascade 2%
o Roll-up 25%
o Flaming Monkeys 41%

Votes: 77

If not windows...what? | 62 comments (62 topical, editorial, 0 hidden)
PalmOS is the best GUI! (3.69 / 23) (#1)
by 11223 on Wed Jan 10, 2001 at 02:01:43 PM EST

To me, the perfect concept of a GUI is the Palm OS. It's something that has to be experienced to be understood, but once you have, you realize that you can devote the entire screen to an application and still switch between applications efficiently.

Really, I wish more full-fledged computer operating systems were designed with those same principles. (To those who will claim that it doesn't scale, well, go read the MLP about "what to say when you're losing a technical argument.")

--
The dead hand of Asimov's mass psychology wins every time.

Hey?!?!?! (1.78 / 19) (#6)
by 11223 on Wed Jan 10, 2001 at 02:21:21 PM EST

GandalfGreyhame, I know you're a BeOS user, but if you feel differently you should probably reply rather than simply rate down and be a K5 coward. Ratings should be reserved for how well stated a position is, and replies for the content of that position.

--
The dead hand of Asimov's mass psychology wins every time.
[ Parent ]

Improving on the PalmOS gui... (3.75 / 8) (#9)
by theboz on Wed Jan 10, 2001 at 02:36:42 PM EST

Launcher III. It is almost the same, but it has tabs instead of the list of software categories, and to beam something you click on its icon and drag it down to the toolbar at the bottom, where you can beam, delete, etc. If you want to check it out, go to http://www.benc.hr/. It's one of the most useful things for the PalmOS since hackmaster and portamonkey.

Stuff.
[ Parent ]

Can anybody say, "Amiga?" (4.00 / 10) (#10)
by sinclair on Wed Jan 10, 2001 at 02:37:00 PM EST

Probably not. A flame war might erupt. ;-)

(FYI, under AmigaOS a program can open a 'screen' just as easily as it can open a window. This 'screen' takes over the whole display, and can even use different color depths, resolutions, and/or scan rates. The program can open windows on the screen, or draw directly into it. Switching screens is near-instantaneous, making it a very useful GUI concept.)

[ Parent ]

the problem isn't not scaling, it's usability (4.22 / 9) (#11)
by Anonymous 242 on Wed Jan 10, 2001 at 02:37:12 PM EST

To me, the perfect concept of a GUI is the Palm OS.

I somewhat agree. I think PalmOS is a tremendously ingenious GUI for a limited purpose device. Some apps just don't transfer well to the Palm GUI paradigm. Some other apps can work together in ways that can not be accomplished in the type of GUI used in PalmOS.

The real challenge is to redefine the GUI for a general purpose device, such as the PC.

[ Parent ]

Jurassic Park (2.75 / 12) (#2)
by mattx on Wed Jan 10, 2001 at 02:02:19 PM EST

If you remember, in Jurassic Park:

"It's a Unix system!"

The file manager was a 3D representation of the filesystem, which I guess could kind of count as a different GUI. (When the girl tries to find something on the computer to close all the doors, IIRC.)

I've seen it running on an SGI with Irix; has anyone else used it?

-- i fear that i am ordinary, just like everyone


"Its a Unix system!" (3.50 / 4) (#5)
by GandalfGreyhame on Wed Jan 10, 2001 at 02:15:50 PM EST

It's an SGI of some sort running Irix somewhere in the 4.0.1-5.3 range. How do I know this, you ask? Am I some sort of Jurassic Park freak? No. Just an SGI freak in training. :)

If you've got an SGI running Irix 5.3, you can get the app, which is called fsn.

[ Parent ]

Thanks! (3.00 / 3) (#8)
by Cheerio Boy on Wed Jan 10, 2001 at 02:25:06 PM EST

Thanks! I couldn't remember the name of the app.

Has anyone ported it to the newer versions of Irix yet? Last I heard it would only run on 5.3 or thereabouts.

[ Parent ]
I've got this on my Indy... (3.50 / 4) (#7)
by Cheerio Boy on Wed Jan 10, 2001 at 02:22:29 PM EST

I can't recall the name of this app at the moment but I do have it loaded on my SGI Indy under Irix 5.3.

The issue with using it is that you tend to lose track of where you are in the file system unless you have the extra 2D representation open as well.

For instance, it is much easier to know that you're at /usr/local/apps/foo/bin from a 2D view than by looking at a solid block that looks like a building labeled foo.bin on what looks like an endless field. File access becomes more like playing the old Zork games:
E[enter]
E[enter]
"You are at a house with a sign."
READ SIGN[enter]
"The sign says FOO.BIN
ENTER HOUSE[enter]
Executing FOO.BIN - please wait.

See what I mean?
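For what it's worth, the flavour of this is easy to fake in a few lines of Python. This is only a toy sketch of the adventure-style navigation idea (the commands and wording are made up); it has nothing to do with the real fsn:

import os

def adventure_shell(start="."):
    """Navigate the real filesystem like a text adventure, room by room."""
    here = os.path.abspath(start)
    print("You are standing in %s." % here)
    while True:
        try:
            words = input("> ").strip().split(None, 1)
        except EOFError:
            break
        if not words:
            continue
        verb = words[0].lower()
        arg = words[1] if len(words) > 1 else ""
        target = os.path.join(here, arg)
        if verb == "look":                       # list the "doors" and "signs" here
            for name in sorted(os.listdir(here)):
                kind = "door" if os.path.isdir(os.path.join(here, name)) else "sign"
                print("You see a %s labelled %s." % (kind, name))
        elif verb == "enter" and os.path.isdir(target):
            here = target                        # walk through the door
            print("You are standing in %s." % here)
        elif verb == "back":
            here = os.path.dirname(here)
            print("You are standing in %s." % here)
        elif verb == "read" and os.path.isfile(target):
            with open(target, errors="replace") as f:
                print(f.read(200))               # peek at the first 200 characters
        elif verb in ("quit", "exit"):
            break
        else:
            print("You can't do that here.")

if __name__ == "__main__":
    adventure_shell()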

[ Parent ]
Zork Interface (2.33 / 3) (#29)
by majcher on Wed Jan 10, 2001 at 07:25:46 PM EST

File access becomes more like playing the old Zork games...

Actually, I have used something like this on a Unix machine way back. I think it was called "ash", for the "Adventure SHell". Fun for about five minutes, but that's about it.

(A Google search for "Adventure Shell" turns up all kinds of good stuff.)
--
http://www.majcher.com/
Wrestling pigs since 1988!
[ Parent ]

There's a Linux version (3.50 / 4) (#30)
by itsbruce on Wed Jan 10, 2001 at 07:30:05 PM EST

available here. So far, all it does is show you your files. I think it would go great with a large touch screen, myself.


--

It is impolite to tell a man who is carrying you on his shoulders that his head smells.
[ Parent ]
Can There be Anything Better than Windows? (3.18 / 11) (#3)
by Morn on Wed Jan 10, 2001 at 02:06:48 PM EST

We mainly want to read text and view pictures; these fit best (and least confusingly) into rectangular shapes. We find three-dimensional environments for purposes like this more confusing than two-dimensional ones, since the information we want to view is mainly two-dimensional (and any 3D environment, at the moment, needs to be projected onto a 2D display device for viewing, which is also confusing because it messes up depth perception).

From this, I deduce we need something two-dimensional, which provides rectangular sections in which to place text and pictures. Does anything that doesn't rely on 'windows' as we know them satisfy this constraint?

the next plateau (2.38 / 21) (#4)
by Refrag on Wed Jan 10, 2001 at 02:07:00 PM EST

I really think that the next form that GUI interfaces will take on will be the flaming monkeys. After all, flaming monkeys have been the single greatest productivity enhancement to windowing systems. However, many feel that flaming monkeys are simply being held back by only being used within a windowing context. If Gnome (or Apple) would just concentrate on flaming monkeys and get all of the other window-centric ideas out of their head, they would be able to build the true best-of-breed GUI.

Refrag


Ion (3.30 / 10) (#12)
by evvk on Wed Jan 10, 2001 at 02:40:18 PM EST

You might want to take a look at Ion. It is "only" an experimental window manager, but it takes a rather different approach to managing windows. Instead of overlapping, paper-imitating stacks of windows, the screen is divided into frames, and client windows are "attached" to these frames. It is rather nice to use from the keyboard (though not the mouse), and it has text-editor-reminiscent "minibuffers" for going to a named window and so on (with tab-completion, of course!). Like all user interfaces it has its problems, the biggest of which, I think, is existing applications that use multiple windows per document (toolboxes etc.) and other shortcomings in application support for its features.
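For the curious, the core of the tiling-frame idea is small enough to sketch in Python. This is only an illustration (the structure, names and numbers are invented), not Ion's actual code:

from dataclasses import dataclass, field

@dataclass
class Frame:
    """One non-overlapping region of the screen; client windows attach to it."""
    x: int
    y: int
    w: int
    h: int
    clients: list = field(default_factory=list)

    def split(self, vertical=True):
        """Split this frame in two and return the newly created frame."""
        if vertical:
            half = self.w // 2
            new = Frame(self.x + half, self.y, self.w - half, self.h)
            self.w = half
        else:
            half = self.h // 2
            new = Frame(self.x, self.y + half, self.w, self.h - half)
            self.h = half
        return new

# Divide a 1280x1024 screen into three frames and attach client windows.
root = Frame(0, 0, 1280, 1024)
right = root.split(vertical=True)
bottom_right = right.split(vertical=False)
root.clients.append("emacs")
right.clients.append("netscape")
bottom_right.clients.append("xterm")
for f in (root, right, bottom_right):
    print("%dx%d+%d+%d: %s" % (f.w, f.h, f.x, f.y, f.clients))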

Tiled windows (3.33 / 6) (#21)
by Kaa on Wed Jan 10, 2001 at 05:26:33 PM EST

Instead of overlapping paper-imitating stacks of windows the screen is divided into frames.

Ugh. You might recall that Windows 1.0 (and maybe 2.0 as well) used tiled (that is, non-overlapping) windows. I don't think going back to that era is a good idea. Overlapping windows are much more useful and convenient than tiled ones and the task bar helps manage them.

Tiled windows are a remnant from the text terminal time. They have been dead for a very long time and should stay that way.

Kaa
Kaa's Law: In any sufficiently large group of people most are idiots.


[ Parent ]

And the command line too? (3.00 / 5) (#25)
by evvk on Wed Jan 10, 2001 at 06:04:00 PM EST

> Ugh. You might recall that Windows 1.0 (and maybe 2.0 as well) used tiled (that is, non-overlapping) windows. I don't think going back to that era is a good idea.

Why not? Everything was better in the good old days before bloated, slow, "user friendly" software...

> Overlapping windows are much more useful and convenient than tiled ones and the task bar helps manage them.

Ever tried navigating between overlapping windows from the keyboard? That is a pain. Even if it weren't, I always tile my windows in conventional window managers anyway. In those, however, it is not easy to go to, say, the window to the right of the current one (it is either not possible, or you get the wrong one, because there are too many candidates in the mess).

The mouse should die, in my opinion. I don't like the device. It hurts my wrist. Like some other post already said, the screen is not an input device but an output device. There's just no place for overlapping windows without some stupid dragging device, and all of those suck.

> Tiled windows are a remnant from the text terminal time. They have been dead for a very long time and should stay that way.

And the command line too?


[ Parent ]
Windows and mice (3.00 / 2) (#50)
by Kaa on Thu Jan 11, 2001 at 01:52:06 PM EST

Everything was better in the good old days

I take it, then, that you are typing this on a good old green-on-black text terminal, right?

Ever tried navigating between overlapping windows from the keyboard?

Sure. Alt-Tab works quite well.

The mouse should die, in my opinion.

Well, windows (and WIMP environment in general) were designed to be used with a mouse. If you don't want to use a mouse, no wonder you find GUIs inconvenient. I hope you understand that you are in a very small minority.

I don't like the device. It hurts my wrist.

(1) Get a proper wrist rest. Proper, for me, means at least 2 inches high. Yes, I had to make my own. A stack of four learn-Windows-in-10-minutes books with an old soft-rubber Sun mouse pad on top worked very well for me.

(2) Get a trackball. Your wrist will thank you.

And the command line too?

Nope. A command line is a way for me to interact with a system using language (as opposed to graphical manipulation which is what GUI does). There is nothing old-fashioned about using a language to communicate with a system.



Kaa
Kaa's Law: In any sufficiently large group of people most are idiots.


[ Parent ]

Frames and overlapping windows (3.75 / 4) (#26)
by Tim Locke on Wed Jan 10, 2001 at 06:51:40 PM EST

I would like to see both frames and overlapping windows used. I would like the taskbar in Windows to be in its own frame. ICQ would work well in its own frame. I wouldn't want to be limited to frames exclusively, but it would be useful to have both.

It would probably not be good to allow the user to break an application's window into frames, but it would definitely be good to allow applications to be placed into frames that are on the desktop. This would limit the size of a window when it was maximized. That way I could still see whatever was in the other frames.

Let's build it.

--- On the Internet, no one knows you're using a VIC-20.
[ Parent ]
I prefer pwm (3.66 / 3) (#35)
by fluffy grue on Wed Jan 10, 2001 at 11:05:40 PM EST

pwm, ion's parent project, is IMO much easier to use - you have both the window and pane paradigms going on, you can still use the mouse in a traditional way if you want, and it's much more (and much more easily) configurable on the whole. And nowadays whenever I sit down at a more traditional interface I keep on trying to tab-drop my webbrowsing sessions into a single frame, and meet with very little success in that regard. :)
--
"Is not a quine" is not a quine.
I have a master's degree in science!

[ Hug Your Trikuare ]
[ Parent ]

Why the "resizeable box" paradigm (3.75 / 12) (#13)
by rednecktek on Wed Jan 10, 2001 at 02:45:22 PM EST

Habit, ease of design.

Having designed interfaces, I can tell you from experience that a straight line is easier to make than a curved or non-conventional interface.

Paper is easier to cut in straight lines, meaning less waste for the manufacturer and cost savings for the consumer. Therefore, unless you specifically need an odd shape, rectangular is what you get. BTW, the original design of ^most^ GUIs was meant to mimic paper so it would be easier for people to understand.

Our bodies aren't rectangular, but it's easier to build a door that way. Sure, you can design a door that is curved (think Middle East architecture), but is it practical?

Software that uses a non-rectangular interface does so for aesthetic or artistic value. K-jofol (sorry for the search, the home page appears to be down) and Sonique are excellent examples of non-conventional interfaces. Symantec tried it some versions ago for one of their commercial products (Norton Utilities 4.0, IIRC). I assume they abandoned it because users couldn't understand the interface. Joe Six-pack doesn't understand that those pretty pipe graphics on the edges don't do anything.

Just remember, if the world didn't suck, we'd all fall off.

resizable boxes (3.75 / 4) (#23)
by joto on Wed Jan 10, 2001 at 05:40:01 PM EST

Do you think it is innovative to use something else than resizable boxes? In that case I must disagree. I think it is stupid, and nonproductive. Innovation in user interfaces should be about making the computer easier for the beginner and/or more productive for the expert. It should not be about making everything look weird. Whether windows or buttons are square, rounded, triangular, or shaped like a tennis racket doesn't matter. It is still the same old window or button. It is the same paradigm. If you want to innovate, you have to create something new, and something better.

[ Parent ]
Is there a particular reason .... (3.00 / 3) (#37)
by rednecktek on Thu Jan 11, 2001 at 12:15:04 AM EST

you directed this at me?

I believe I expressed a similar opinion with "Software that uses a non-rectangular interface is done so for aesthetic or artistic value."

Slightly OT:
Although I agree that Art != Innovation, it has been my experience that Ease of Use (for the user) != Productivity for the expert. IMHO, any time you "dumb-down" the system for ease of use, you introduce security holes.

Just remember, if the world didn't suck, we'd all fall off.
[ Parent ]

Two questions (3.53 / 15) (#14)
by slaytanic killer on Wed Jan 10, 2001 at 02:47:02 PM EST

It looks like there are two questions here:

. Is there a better way to model tree structures than inside windows?

. Is there a better alternative to tree structures for normal file systems?

Windows are just one view of a filesystem. Right now, we are all using another -- the web of Kuro5hin. Most files on the web reside in normal filesystems; in Kuro5hin's case, we're pulling stuff from a relational database. So, very often we're using lots of different "filesystems" without being aware of it.

BTW, in sci-fi, you often hear of people traversing 3D "nodes" instead of windows. That's another alternative.

Hmm... (3.20 / 5) (#16)
by 11223 on Wed Jan 10, 2001 at 02:53:34 PM EST

There was a company that tried a pure, non-nested database for its filesystem. It quickly fell apart when they realized that users wanted an easy way to categorize their items.... aka folders.

The Star Trek system, while interesting, really only works when you've got an AI for a computer and an infinite number of displays to interact with. In the meantime, we're stuck with dumb computers and one (or two, if you're lucky) non-touchscreen monitors, and it really doesn't work.

--
The dead hand of Asimov's mass psychology wins every time.
[ Parent ]

Filesystems (3.50 / 4) (#22)
by Kaa on Wed Jan 10, 2001 at 05:39:06 PM EST

Is there a better way to model tree structures than inside windows?

??? You probably mean "Is there a better way to represent a hierarchical filesystem than what Windows Explorer does in its left pane?"

The answer greatly depends on the size of your filesystem. If you have a couple hundred files, most anything works. If you have a couple thousand files, the Windows Explorer system works quite well. If you have a couple hundred thousand files, most everything gives up and dies. You start needing something like an object-oriented database, which will likely work much better with a command language than with any point-and-click interface.

Is there a better alternative to tree structures for normal file systems?

Again, it all depends on the size of your filesystem. It also depends on what you want. Tell us why a tree-based filesystem makes you unhappy and people may suggest ways to make you happier. But again, keep in mind that size changes everything.

Windows are just one view of a filesystem. Right now, we are all using another -- the web of Kuro5hin

Ahem. Windows is not a view of a filesystem -- it's an operating system which includes a filesystem and several ways of accessing it. Kuro5hin is not a filesystem either. It seems that you are using the word "filesystem" not in its technical meaning, but rather as "any storage-retrieval system where you could retrieve a chunk of information by knowing its name/location". In this case the WWW is a much more interesting example.

Kaa
Kaa's Law: In any sufficiently large group of people most are idiots.


[ Parent ]

Thing is... (3.50 / 2) (#42)
by slaytanic killer on Thu Jan 11, 2001 at 05:47:45 AM EST

Ahem. Windows is not a view of a filesystem -- it's an operating system which includes a filesystem and several ways of accessing it.

Hey, I think you're trying too hard to say I'm wrong. ;) I wrote "Windows are..."; I'm not talking about MS Windows, I'm talking about just normal windows. It's an ambiguous word, but using it in a plural sense I think makes it clear.

Kuro5hin is not a filesystem either.

Again, I did not say that. I said that the web of Kuro5hin is a view of a filesystem. If you look more deeply, a filesystem is storage of discrete data. It can be hierarchical, or it can be relational -- the details are hidden from you by design. Just because we are used to hierarchical tree structure doesn't mean you can't have a relational file system. Your filesystem commands may be in SQL rather than the normal command-line commands you're used to. And you can make a relational file system emulate a traditional tree-based one.
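A quick sketch of that last point, since it is easy to show concretely. The table layout and names below are made up for illustration (this is not how Kuro5hin or any real OS stores files); it just shows a relational store emulating a tree:

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE files (
    id     INTEGER PRIMARY KEY,
    parent INTEGER REFERENCES files(id),   -- NULL for the root directory
    name   TEXT NOT NULL,
    data   BLOB)""")

def add(parent, name, data=None):
    """Create one 'file' or 'directory' as a row rather than a directory entry."""
    cur = db.execute("INSERT INTO files (parent, name, data) VALUES (?, ?, ?)",
                     (parent, name, data))
    return cur.lastrowid

root = add(None, "/")
usr = add(root, "usr")
local = add(usr, "local")
bindir = add(local, "bin")
add(bindir, "foo", b"#!/bin/sh\necho hello\n")

def resolve(path):
    """Walk a /-separated path with repeated parent lookups -- the familiar tree view."""
    node = root
    for part in path.strip("/").split("/"):
        row = db.execute("SELECT id FROM files WHERE parent = ? AND name = ?",
                         (node, part)).fetchone()
        if row is None:
            raise FileNotFoundError(path)
        node = row[0]
    return node

print(resolve("/usr/local/bin/foo"))   # prints the row id standing in for an inode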

[ Parent ]
2D (2.62 / 8) (#15)
by simmons75 on Wed Jan 10, 2001 at 02:49:00 PM EST

While it may be popular for an interface to be 3D, we seem to be 2D creatures IMHO. The 2.1D interfaces we have now are kludgy, and unless we wear some sort of clunky headgear, 3D isn't all that appealing, either. Perhaps doing away with the resizable window altogether is the best paradigm (I had thought that Microsoft was going this direction when I first saw the Win95 taskbar.)
poot!
So there.

a true 3d interface is not currently possible (3.25 / 4) (#18)
by Anonymous 242 on Wed Jan 10, 2001 at 03:30:30 PM EST

The 3D interface for a computer would be something akin to the Star Trek Holodeck colliding with the virtual world in Rudy Rucker's The Hacker and the Ants.

I can only imagine my productivity if I could have a web search come back as a bookshelf full of encyclopedia or a stack of index cards sorted by topic and relevancy ratings. The hard part is developing a computer fast enough with enough display bandwidth to render a real time, realistic display. The harder part is building a computer capable of either reading my brain waves or interpreting all of my motions from head turning to finger twitching in a way that I expect.

[ Parent ]

Have a gander (2.71 / 7) (#17)
by leviathan on Wed Jan 10, 2001 at 02:59:58 PM EST

Some ideas similar to this have been explored before here. I'm thinking of this story. I'm even feeling big-headed enough to point out my own contributions to that story (well, it saves me covering the same ground here), although I do rabbit on about ROX too much (the replies get more focused on this story than my original comment).

--
I wish everyone was peaceful. Then I could take over the planet with a butter knife.
- Dogbert
my thoughts (3.47 / 17) (#19)
by joto on Wed Jan 10, 2001 at 04:37:32 PM EST

I think windows as a way of showing the output of more than one program at a time are such a good idea that they are likely to stay with us until eternity. However, I really believe that the mouse must go away. In the future, we will instead use voice recognition. We will not tell the computer to move the pointer above the "open file" button and click, we will simply tell it to open a file (much like emacs users do now, thank you!). If, when editing some document, we need to move the focus somewhere else, the computer should be able to track our eye movements in order to find out what to do. We will also continue to use keyboards for some tasks (i.e. programming, spreadsheets, ...).

I hope any user-interface input elements visible on the computer screen (icons, menus, buttons, scrollbars, etc...) will go away. The screen is an output device, and it's not a good idea to use it for input. Of course, if you really need to draw (i.e. CAD, diagrams, etc), you will use a flat-screen touch-sensitive monitor lying in front of you on the desk with a pen or stylus in your right hand.

There must also be a change in the way we work with computers. I don't like making mistakes, so the computer should be able to undo anything. It shouldn't matter whether I say "oops" now or 5 years in the future. Everything should be under revision control and saved for posterity. And as a result of this, there should not be any confirmation dialogs or similar stupidities.

The computer should also be able to sense whether I am concentrating or just idling. I don't want to be disturbed by email or other events when concentrating on some problem, but when idling, it would be a welcome break if the computer told me I had unopened email. I don't know how you would go about measuring this, since it is possible to be very concentrated without touching the computer at all (when really concentrating, I am just thinking). I don't know how, but I guess it is possible...

Lastly, a bit of intelligence would be a good thing. When I tell my computer to move that paragraph up, it would be a good idea if it understood by context which paragraph, and how far I really meant. But that is probably a bit more difficult, even humans have problems with that.

I can't decide... (3.87 / 8) (#27)
by Wormwood on Wed Jan 10, 2001 at 07:12:02 PM EST

what this is. Troll, flamebait, or ignorance of the facts.

However, I really believe that the mouse must go away. In the future, we will instead use voice recognition. We will not tell the computer to move the pointer device above the "open file" button and click, we will simply tell it to open a file (much like emacs users do now, thank you!). If, when editing some document, we need to move the focus somewhere else, the computer should be able to track our eye-movements in order to find out what to do. We will also continue to use keyboards for some tasks (i.e. programming, spreadsheets, ...).

First: I doubt that the mouse will die, ever. Unless touchscreens become cheap and self-cleaning, they will never be used heavily. Do you know what happens when a greasy, oily finger touches practically every part of the screen for a day? It gets disgusting and hard to read.

Also, let's think about people who do a lot of data entry and other such things. It's much easier (in a setup where ergonomics is taken into account) to place your hand on the mouse and move stuff around that way; there's less chance of repetitive stress injuries. You must also remember that not everyone sits within arm's length of their monitor. I, for one, don't. For me, every time I wanted to open a program or click a [Submit] button, I'd have to lean forward. By the end of the day, my shoulder and neck would probably be pulsing epicentres of pain.

A note about speech recognition: It's useless for anyone who does a lot of document work. Take a secretary: I know some who can type at 200 WPM. There isn't anyone who can talk that fast and still be coherent to a human, much less a computer.

You also mention focus changing by what we're currently looking at. Apart from being expensive and incredibly impractical, this would probably scare most people. Do you like it when a computer does something for you without telling you? I can gather that most of the k5 readership doesn't. I know that my eyes skip all over the screen, all the time. I have a WinAMP window under this one, and I dart back and forth between the two. Now, if the focus was changed every time I moved my eyes, I'd lose my IE window, because that would disappear under WinAMP. I can't call that convenient. Also, how many users would be freaked out at the computer second-guessing them? I think that most older people, who are mostly used to the world in front of them, would be driven insane. Example: Clippit. I write "Dear Mom" in Word and that asshole says, "It looks like you're writing a letter!" and asks me if I want help (this was before I turned it off). Clippit is never helpful. He tries to improve your productivity by doing things to your document based on what other people have done. I can't tell you how bad that is.

You go on to contradict yourself, here:

I hope any user-interface input elements visible on the computer screen (icons, menus, buttons, scrollbars, etc...) will go away. The screen is an output device, and it's not a good idea to use it for input.

You said you wanted to use the screen for input, but now you say you don't. What do you want? Why do you hope scrollbars, icons, and menus will go away?

The computer should also be able to sense whether I am concentrating or just idling. I don't want to be disturbed by email or other events when concentrating on some problem, but when idling, it would be a welcome break if the computer told me I had unopened email. I don't know how you would go about measuring this, since it is possible to be very concentrated without touching the computer at all (when really concentrating, I am just thinking). I don't know how, but I guess it is possible...

Lastly, a bit of intelligence would be a good thing. When I tell my computer to move that paragraph up, it would be a good idea if it understood by context which paragraph, and how far I really meant. But that is probably a bit more difficult, even humans have problems with that.

Hate to break it to you, but this won't happen until computers are mind-readers that possess a greater intelligence and wisdom than our own. And when that happens, do you think they'll be formatting documents for us? I don't.

True, these are your thoughts, but they're all extremely impractical. In my opinion, they would decrease productivity and user happiness. Overall, it sounds like you want the computer to 'just know'. Don't hold your breath.



[ Parent ]
That damn paper clip (4.00 / 4) (#32)
by odaiwai on Wed Jan 10, 2001 at 08:54:53 PM EST

While I find the MS Office Actors (that paper clip and the others) to be unbelievably irritating, the concept behind them is quite interesting. I found a way once to get rid of the actor, but still have the hints come up on a task bar. That was quite useful and it helped me to learn a lot of useful shortcuts in Excel.

WRT the mouse going away, I think that a move towards tablets and pen-based systems will happen for certain things like drawing. Replacement of these huge clunky CRT displays with 300dpi LCD screens will happen at some point too.

Hmm, how about a 300dpi LCD flat screen which lies flat on your desk and also functions as a drawing tablet? E.g., I'm editing text and can remove a paragraph by scribbling it out, or insert text by putting a caret (^) and hand-writing the text in. Of course a keyboard is better for lots of text entry, so part of the display can become a keyboard (like on a Palm) which is sensitive to your fingers. Not sure how the self-cleaning is going to work, but hey, we all give our monitors a little clean now and then anyway, right?

The wave of the future: Touch screens are input/output devices and the Windolene company rules the world. :)

dave
-- "They're chefs! Chefs with chainsaws!"
[ Parent ]
scribbling paragraphs? (4.00 / 3) (#38)
by QueenFrag on Thu Jan 11, 2001 at 02:08:05 AM EST

You know, between removing text by scribbling it out and just using a caret wherever you want to input text (with the display being the input device), I know that you're talking about the Apple Newton, but to someone who hasn't used one it may not be obvious.

The Newton itself had quite a few nice things about its UI, and it only partially used windows for display. Most activities were modal, full-screen affairs.

This is all probably a good deal of the reason that I use my Newton every day, and my Palm3 gathers dust.


--- Sponsored by: Tulip Eyeglasses Shop
[ Parent ]

Newtons (3.00 / 3) (#40)
by odaiwai on Thu Jan 11, 2001 at 03:30:28 AM EST

I wasn't actually talking (or even thinking) about the Newton, but now that you mention it, it does match what I said. I've only used one very briefly and quite some time ago.

I'd agree that a Newton has a better interface than a Palm for many things, but try carrying your Newton in your shirt pocket for a while... :)

dave
-- "They're chefs! Chefs with chainsaws!"
[ Parent ]
LCD Tablets (3.00 / 1) (#51)
by DCMonkey on Thu Jan 11, 2001 at 01:53:28 PM EST

Take a look at the Wacom PL Series

[ Parent ]
Mouse will disappear. What will replace it? (4.00 / 4) (#33)
by plastik55 on Wed Jan 10, 2001 at 09:03:53 PM EST

The parent of your post was engaging in productive (if vague) speculation which is entirely on-topic. On the other hand, you are pointing at current technologies, saying that none of them are as good as a mouse and keyboard, and then concluding that current input devices are the best they will ever be and won't go away. That's missing the point.

I like the way you quote the parent post talking about eye movements and then irrelevantly go off talking about touchscreens. Who's the troll/flamebait/ignorance person here?

Now, as a neuroscientist myself, I will have to break it to you that your second-to-last paragraph is BS. There are many channels of expression and sensory input that contain much more information and are more "intuitive" than the standard pushing-buttons/moving mice paradigm (which is the only way we currently give input to computers.)

One could imagine an interface where a major part of the input comes from where you point your eyes--the technology to do this exists in some experimental headsets, and it's possible to do it without a headset at all, with some advances in image processing. (existence proof: you, as a human, can tell at a distance of 20 feet whether someone is looking at your eyes or three inches to the right. Therefore it is possible to use a camera pointed at someone's face, perhaps built in to the monitor, to tell where that person is looking. The implementation is difficult and requires research, but progress is being made.)

Detecting "attention" is more difficult, but by no means impossible. It is possible to detect most forms of what the cognitive psychologists are currently calling "attention" non-invasively with magnetoencephalography, which implies that it may be useful channel for input.

A famous experiment recently planted a bunch of electrodes in the motor cortex of a monkey, determined how the electrical activity in the monkey's brain correlated with the movements of his right arm, and used that information to move a robot arm that the monkey could see. The monkey, observing the robot arm, progressively learned how to move the robot arm without moving his real arm--he acquired a new output channel.

As for "usability," well, take my example of an eye-directed interface. Early incarnations may not be useful to everyone because hte interface is not refined. However It will most certainly be useful (and in some forms, already is useful to disabled people who cannot operate the pushing-buttons/moving mice interface. As the interface becomes more refined, it will become more useful to more people This will require additional channels of input (for example to discriminate intentional eye movements from "just looking around,"--you can probably do this tiday with magnetoencephalography, which answers your ob. Voice recognition technology, while not presently useful enough to make it the only channel of input, is useful enough to improve the productivity of scores of people with their hands in wrist braces.
w00t!
[ Parent ]

Try this experiment for yourself! (3.50 / 2) (#39)
by edric on Thu Jan 11, 2001 at 02:52:33 AM EST

A famous experiment recently planted a bunch of electrodes in the motor cortex of a monkey, determined how the electrical activity in the monkey's brain correlated with the movements of his right arm, and used that information to move a robot arm that the monkey could see. The monkey, observing the robot arm, progressively learned how to move the robot arm without moving his real arm--he acquired a new output channel.

Most people cannot wiggle their ears. However, most people can learn how to wiggle their ears after sitting in front of a mirror for half an hour. (No need for electrodes - just a little perseverance.)

[ Parent ]

A question (2.50 / 2) (#47)
by itsbruce on Thu Jan 11, 2001 at 09:32:40 AM EST

"A famous experiment recently planted a bunch of electrodes in the motor cortex of a monkey, determined how the electrical activity in the monkey's brain correlated with the movements of his right arm, and used htat information to mvoe a robot arms that the monkey could see. The monkey, observing the robot arm, progressively learned how to move the robot arm without moving his real arm--he acquired a new output channel. "

Most people cannot wiggle their ears. However, most people can learn how to wiggle their ears after sitting in front of a mirror for half an hour. (No need for electrodes - just a little perseverance.)

If you sat an infinite number of monkeys in front of an infinite number of mirrors, would they eventually produce the score to Dumbo in ear-flapping semaphore?


--

It is impolite to tell a man who is carrying you on his shoulders that his head smells.
[ Parent ]
let me help you to decide then... (4.28 / 7) (#34)
by joto on Wed Jan 10, 2001 at 09:42:33 PM EST

what this is. Troll, flamebait, or ignorance of the facts.

It is wishful thinking.

First: I doubt that the mouse will die, ever. Unless touchscreens become cheap and self-cleaning, they will never be used heavily. Do you know what happens when a greasy, oily finger touches practically every part of the screen for a day? It gets disgusting and hard to read.

Hey, did I advocate touch-screens? No, I advocated voice recognition. I did, however, say that for some applications, like CAD and anything involving drawing, touch screens would be a good interface -- lying flat on your desk, not like a giant CRT. But now that you mention it, yes, I certainly think touch-screens will become cheap with time. And it's not unlikely that they will come up with some surface that doesn't get so oily. I am not exactly talking about what we have today, I am talking about future user interfaces.

Also, let's think about people who do a lot of data entry and other such things. It's much easier (in a setup where ergonomics is taken into account) to place your hand on the mouse and move stuff around that way; there's less chance of repetitive stress injuries. You must also remember that not everyone sits within arm's length of their monitor. I, for one, don't. For me, every time I wanted to open a program or click a [Submit] button, I'd have to lean forward. By the end of the day, my shoulder and neck would probably be pulsing epicentres of pain.

Why would you have people doing manual data entry? It seems very counter-productive to me. Why would you move stuff around manually? What stuff are you talking about? And why is it so difficult to imagine an ergonomic setup for you that uses something other than cathode ray tubes and a mechanical mouse?

A note about speech recognition: It's useless for anyone who does a lot of document work. Take a secretary: I know some who can type at 200 WPM. There isn't anyone who can talk that fast and still be coherent to a human, much less a computer.

There isn't anyone who can think that fast either. So unless you are just mindlessly copying text (which a computer should be able to do for you anyway), this is probably not realistic. But you are free to use a keyboard if you really wish. As I said, I think keyboards are practical for some tasks.

You also mention focus changing by what we're currently looking at. Apart from being expensive and incredibly impractical, this would probably scare most people. Do you like it when a computer does something for you without telling you? I can gather that most of the k5 readership doesn't. I know that my eyes skip all over the screen, all the time. I have a WinAMP window under this one, and I dart back and forth between the two. Now, if the focus was changed every time I moved my eyes, I'd lose my IE window, because that would disappear under WinAMP. I can't call that convenient. Also, how many users would be freaked out at the computer second-guessing them? I think that most older people, who are mostly used to the world in front of them, would be driven insane. Example: Clippit. I write "Dear Mom" in Word and that asshole says, "It looks like you're writing a letter!" and asks me if I want help (this was before I turned it off). Clippit is never helpful. He tries to improve your productivity by doing things to your document based on what other people have done. I can't tell you how bad that is.

Well, if you don't like focus changing without you telling the computer, how about you give the computer a command to change focus to what you look at when you want it to, then? Anyway, when I am typing, I usually look at the text I am typing, so for me, it would make sense. But, you should be able to do what you want without interruption from the computer, so you should be allowed to configure it any way you want.

Anyway, a WinAMP window is exactly what I consider stupid; why do you need a window there just because you are playing music? Anyway, the computer should be intelligent enough to understand you are not typing some letter into WinAMP! I don't know who Clippit is, but I guess you are referring to that annoying MS Office thingy? I don't know what your point is. Yes, it's annoying! So what, did I advocate it anywhere?

You go on to contradict yourself, here: I hope any user-interface input elements visible on the computer screen (icons, menus, buttons, scrollbars, etc...) will go away. The screen is an output device, and it's not a good idea to use it for input. You said you wanted to use the screen for input, but now you say you don't. What do you want? Why do you hope scrollbars, icons, and menus will go away?

No, I am not contradicting myself. I said I wanted to use voice as the primary input device, but that keyboards and touch-screens would also be practical in some situations. I mentioned CAD and drawing as examples of things that would be useful with touch-screens. And I was thinking of a touch-screen more in the style of a tablet, not exactly what you can buy in the shops today. And it should probably not be the only screen attached to the computer.

Scrollbars/menus/buttons/icons/etc are counter-productive. They take up screen space that could be used for stuff I really want to see. They are also not an efficient means of input. The mouse-wheel is an excellent replacement for scroll-bars. As for all the others, they are better replaced by voice input.

Hate to break it to you, but this won't happen until computers are mind-readers that possess a greater intelligence and wisdom than our own. And when that happens, do you think they'll be formatting documents for us? I don't.

Well, I am pretty sure the idling/concentrating part can be solved without any intelligence. Perhaps it is even possible now. I am sure it would be possible to scan radiation from the brain or something like that. There should probably be a difference in some patterns between a person concentrating deeply and someone just being bored.

The part about intelligence is unrealistic, I admit. And that is why I wrote it, too, as a bit of self-irony.

True, these are your thoughts, but they're all extremely impractical. In my opinion, they would decrease productivity and user happiness. Overall, it sounds like you want the computer to 'just know'. Don't hold your breath.

No, I think many of these ideas will become practical during the next 10-20 years, and that they will increase productivity dramatically. Just because you are not willing to look beyond present hardware capabilities doesn't make the ideas impractical. Anyway, this inability to look past present hardware surprises me very much, because I take it for granted that it will improve very soon. What concerns me much more is whether practical speech recognition will ever be a reality. It seems to me a much harder problem, and perhaps the only real hindrance to what I envision. By the way, I am not holding my breath!

[ Parent ]

Scrollbars are output (none / 0) (#49)
by DCMonkey on Thu Jan 11, 2001 at 01:49:36 PM EST

They tell you where you are scrolled to in a document or list.

[ Parent ]
A signal is worth a thousand words (3.85 / 7) (#28)
by itsbruce on Wed Jan 10, 2001 at 07:12:48 PM EST

In the future, we will instead use voice recognition.

You're welcome to. I'll pass. It'll take a lot to convince me that telling a computer to open a file will be easier than just pointing at it (assuming you have the use of your arm/hand). Cheap, large, flat touch-sensitive screens - or something that tracks the motion of your hand - would be much faster in most situations.

Imagine how quick it would be to move a bunch of files by drawing a circle round them with your finger and then tapping (or pointing towards) the folder they should go into. You could do it in half a second. Now try coming up with a set of vocal commands to do it that takes under 30 seconds.
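For what it's worth, the selection half of that gesture is only a point-in-polygon test. A toy sketch (the icon names and coordinates are invented), assuming the drawn stroke arrives as a list of screen coordinates:

def inside(point, polygon):
    """Ray-casting test: is `point` inside the closed `polygon`?"""
    x, y = point
    hit = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):                     # edge crosses the ray's height
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                hit = not hit
    return hit

# Icon centres on screen, and the loop the user drew with a finger.
icons = {"report.doc": (120, 80), "notes.txt": (140, 95), "song.mp3": (400, 300)}
stroke = [(100, 60), (170, 60), (170, 120), (100, 120)]

selected = [name for name, pos in icons.items() if inside(pos, stroke)]
print(selected)   # ['report.doc', 'notes.txt'] -- then tap a folder to move them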


--

It is impolite to tell a man who is carrying you on his shoulders that his head smells.
[ Parent ]
That's the old CLI/GUI argument ... (3.80 / 5) (#41)
by StrontiumDog on Thu Jan 11, 2001 at 05:31:09 AM EST

... repeated. There are plenty of examples where CLIs are faster than GUIs. Similarly, there are plenty of examples where GUIs are faster to use than CLIs. In the same vein, UIs designed specifically for voice recognition will not use GUI paradigms (not when mature, anyway), and there will be plenty of cases in which they are faster to use than a GUI. I suspect that voice recognition interfaces will have more in common with CLIs than GUIs. I also suspect that you are very much used to the idea of a computer as being something you sit in front of and manipulate.

[ Parent ]
Thoughts of voice/speech recognition (3.66 / 3) (#44)
by evvk on Thu Jan 11, 2001 at 08:08:02 AM EST

Well, I think _voice_ recognition is mostly useless. At least I'd find it awkward to dictate this message, for example. Unless someone comes up with an altogether new idea, I think we'll be using devices based on the concept of the keyboard for writing for a long time to come (I would rather write plain text with a keyboard than a pencil). And, \insert{The usual privacy argument.} Beyond (awkwardly) speaking formal commands (that one used to input by other methods), dictation is just about all that plain voice recognition can be used for. Maybe this is enough for home automation and such (there it would be useful; "Raise the volume" without having to go to the amplifier or find the remote control), but not for workstations, where being able to translate speech to text and recognize simple formal commands is not very useful except to disabled people. To be useful, what is required is _speech_ recognition (that is, voice recognition _and_ natural language understanding), and that is far, far away, if ever possible in practice. And even then, I still wouldn't consider it the primary input method for workstations, just a method for formulating more difficult queries (and that doesn't require the voice part, really, just natural language understanding).


[ Parent ]
I don't accept the comparison. (3.00 / 1) (#46)
by itsbruce on Thu Jan 11, 2001 at 09:25:14 AM EST

There are plenty of examples where CLIs are faster than GUIs.

You compare voice to the CLI and gesture to the GUI. I don't accept that. Gestures can represent symbols as well as actions - ask any deaf signer. Why not a gesture-controlled CLI? I can't see any reason why not.

In the same vein, UIs designed specifically for voice recognition will not use GUI paradigms

I don't see a future for such a system except in cases where any other input option is unavailable, for reasons given below.

I also suspect that you are very much used to the idea of a computer as being something you sit in front of and manipulate.

(Thanks for telling me how unimaginative I am). A UI that responded to gesture would be just as easy to control from a distance as a voice-activated one (neither are practical now, which is why I mentioned flat touch-screens as a nearly-here example). Easier (and easier to program) in fact, since gesture can convey complex information quickly and in parallel. Voice communication, on the other hand, is serial and slow.

I can see a future for a system that mixed gesture and voice control - e.g. quickly select one/some/all from a group by pointing in the general direction and saying a control word - but I bet the result would be voice for content and gesture for control. For example, one major problem with dictation software is distinguishing between dictation and editorial commands - or between words and punctuation. Even where it works reliably, the result is very, very clunky and awkward. A system that allowed you to use hand signals to edit and punctuate would make dictation practical.

To sum up: gestures convey complex information more quickly, can communicate more than one thing at once (even just using one hand, since you can make a sign and move your arm as a modifier), and can represent actions or convey symbolic information. For these reasons I think gesture would be the senior, voice the junior partner in any combined interface.


--

It is impolite to tell a man who is carrying you on his shoulders that his head smells.
[ Parent ]
Reply (4.00 / 2) (#52)
by StrontiumDog on Thu Jan 11, 2001 at 01:56:09 PM EST

Gestures can represent symbols as well as actions - ask any deaf signer. Why not a gesture-controlled CLI? I can't see any reason why not.

For the same reason you (and most other people) don't use sign language for almost anything: you (generic you, not specific you) don't know sign language. You do know how to speak. The difference is an immense learning curve.

I don't see a future for such a system except in cases where any other input option is unavailable, for reasons given below.

Actually I see the GUI becoming the minority rapidly. I'm willing to bet that in 20 years, most communication between humans and computers will not occur via a GUI.

(Thanks for telling me how unimaginative I am).

You're welcome.

A UI that responded to gesture would be just as easy to control from a distance as a voice-activated one (neither are practical now, which is why I mentioned flat touch-screens as a nearly-here example). Easier (and easier to program) in fact, since gesture can convey complex information quickly and in parallel. Voice communication, on the other hand, is serial and slow.

Voice communication may be serial and slow, but for most of the world's inhabitants, it is the preferred method of communication. It is also one they have already mastered. There is a reason why MS Word is used more widely than vi plus TeX. That very same reason will ensure that voice recognition software will win out commercially over gestures or GUIs in the long run. That reason is simplicity for the end user. Yes, there will be a lot of things that GUI users will perceive as being dumbed-down. Undoubtedly Internet flame wars of the future will ensue between GUI users ("punctuation with a voice interface is awkward and error-prone") and voice recognition users ("Yeah, but all I have to do is shout 'Joseph Stalin' from the crapper, and my computer starts up.").

There are two problems with GUIs. The first is the learning curve. This curve is not trivial. It is somewhat mitigated by the fact that GUI paradigms are limited, with MS Windows as de facto standard, but it will get worse as computers diversify into all sorts of niches: home control, the kitchen, the car, etc.

The second problem is sitting in front of you. On your desk you have a 15-pound CRT, a keyboard, and a mouse, plus the requisite space to move that mouse. A GUI requires two things, you see: space to house the screen, and a location (to use a GUI for any extended period of time you must be seated close to the screen). Hence my remark about your perception of computers as machines you sit in front of. These are the forms of computers for which the GUI works reasonably well.

But I do not want to get out of my car to place an order via a GUI at a McDriveIn -- I want to talk through the car window. My remote does not have a GUI. My fridge does not have a GUI. Electronic doors I walk through react to my motion, I do not use a GUI. When driving my car I want to be able to do everything from checking my latest stock quotations to sending an email: I cannot use a GUI for this, nor is it safe to do so. Elderly people who have trouble using GUIs have no trouble learning and using a few standard phrases for voice-recognition usage. On small machines like mobile phones the limitations of GUIs become apparent: in such a small area very few commands are visible at any one time, and the user has to go through a hierarchy of commands in order to carry out complex activities. With a voice recognition system, "Remove John's number" is just that: one command, instead of Menu->scroll->John->click->Submenu->scroll->click->Remove->click. As computers spread and diversify, this will only become worse.

More difficult to program? You bet. A software nightmare? Certainly. Will it require vast amounts of CPU horsepower compared to GUIs? Definitely. So what? To add insult to injury, once people are used to voice interfaces in their cars and phones, they will start to expect it to be the default on their PCs. The GUI will go the way of the CLI.
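The "Remove John's number" example above is also easy to sketch. The recognizer itself is waved away here (handle() just takes an already-transcribed string), and the command patterns and phone-book entries are invented for illustration:

import re

phone_book = {"John": "555-0101", "Mary": "555-0102"}

# Each spoken command is one pattern; compare with Menu->scroll->click->... on a GUI.
COMMANDS = [
    (re.compile(r"remove (\w+)'s number", re.I),
     lambda m: phone_book.pop(m.group(1), None)),
    (re.compile(r"what is (\w+)'s number", re.I),
     lambda m: print(phone_book.get(m.group(1), "no such entry"))),
]

def handle(utterance):
    """Dispatch one transcribed utterance to the first matching command."""
    for pattern, action in COMMANDS:
        m = pattern.search(utterance)
        if m:
            return action(m)
    print("Sorry, I didn't catch that.")

handle("What is John's number")    # prints 555-0101
handle("Remove John's number")     # one utterance, no menu hierarchy
handle("What is John's number")    # prints "no such entry"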

[ Parent ]

Computers People (5.00 / 1) (#54)
by itsbruce on Thu Jan 11, 2001 at 04:03:46 PM EST

[why not a sign-controlled gui]
you (generic you, not specific you) don't know sign language. You do know how to speak. The difference is an immense learning curve.

I couldn't disagree more. I learned to touchtype in a few days, got up to a decent speed in a few weeks. 2 weeks ago I had no knowledge of Graffiti (the PalmOS script), now I can write in it as quickly as I can write freehand. When I worked in a disability advice centre, I learned enough sign to communicate in simple ways with deaf colleagues/volunteers in a similar timescale. New communication skills aren't hard at all if the data communicated is familiar or simple.

Besides, just as the Unix command line uses some familiar concepts (pseudo-english commands, Roman alphabet), a sign-and-gesture CLI could be based on familiar signals (thumbs up, finger drawn across neck etc).

Voice communication may be serial and slow, but for most of the world's inhabitants, it is the preferred method of communication

Communication with people, that's entirely different. Complex speech is necessary for communicating with complex human beings. Computers, OTOH, are stupid. You don't communicate with them, you tell them what to do. That's why it's called a command line. For "communication" with computers, speech is needlessly over-complex.

Consider the example earlier in this thread where the writer wanted the computer taking his dictation to analyze his speech for undertones so that it would know how to react. One example would be for the computer to notice that he's talking to someone else. That is a very complex task for a stupid computer. How much easier it would be for it to see that he's holding his hand up, palm forward (visual analysis is a complex task for computers, but nowhere near as much as speech analysis).

There are two problems with GUIs. The first is the learning curve. This curve is not trivial. It is somewhat mitigated by the fact that GUI paradigms are limited

In case you didn't notice (and you don't seem to have, since you seem consistently to assume that I'm being dense), I dropped GUI for UI some time back in this conversation. When I've been talking about computers, I didn't restrict myself to ones sitting on a desk. I didn't elaborate on all the other possibilities because it wasn't relevant to the point.


--

It is impolite to tell a man who is carrying you on his shoulders that his head smells.
[ Parent ]
itsbruce ... (3.00 / 1) (#58)
by StrontiumDog on Fri Jan 12, 2001 at 04:40:34 AM EST

I couldn't disagree more. I learned to touchtype in a few days, got up to a decent speed in a few weeks. 2 weeks ago I had no knowledge of Graffiti (the PalmOS script), now I can write in it as quickly as I can write freehand. When I worked in a disability advice centre, I learned enough sign to communicate in simple ways with deaf colleagues/volunteers in a similar timescale.

... you are an exceptionally flexible specimen of humanity. My compliments. Touch typing in days, mastery of Graffiti in weeks, sign language in weeks -- you are apparently an excellent learner. (You see, I am capable of the occasional compliment :-). However, pay attention: the vast majority is not like you. Few people learn touch typing in days, even fewer people have ever learnt a programming language at all, and as for sign language: that remains essentially limited to the deaf. My advice to you is: never release a program based on the assumptions that (1) end users are as quick in acquiring new skills as you are and (2) are prepared to invest the time and energy to acquire these skills.

For "communication" with computers, speech is needlessly over-complex.

At the risk of hammering this home repeatedly: speech is over complex for the developer, yes, but simpler for the user (who is generally not as capable as the average developer). The same thing went for CLIs and GUIs: a CLI can be implemented in a few lines of C code, while a GUI requires tens of thousands of lines of code (contrast the sizes for instance of GNU readline versus MFC). Guess which one is a hit with the end user?

[ Parent ]

Shucks (4.00 / 1) (#59)
by itsbruce on Fri Jan 12, 2001 at 07:13:52 AM EST

you are an exceptionally flexible specimen of humanity.

Hem. I didn't mean that to sound the way it seems to have come across. I learned to touchtype in a few days because I set aside those days and spent several hours each day. That taught me where the keys were, at a speed of about 15 words a minute. Becoming proficient took a couple of weeks of solid practice, at least an hour a day. I think most people could cope with that. And some would never learn it at all. (And I did say basic sign communication).

My advice to you is: never release a program based on the assumptions that (1) end users are as quick in acquiring new skills as you are and (2) are prepared to invest the time and energy to acquire these skills.

I'm the one who trains our staff on any new software, including the stuff we write, so I'm aware of the problems. Having trained nearly 200 people in the use of Windows 95 (all strictly DOS users before that) over the course of five months, I can tell you that the Windows GUI isn't simple either. Most people who think it is think so because it's the only interface they've ever learned, and they forget how long it took to absorb the lessons (or, more often, they aren't aware of how much they underuse the interface, not having explored all its potential).

Interfaces evolve. If our staff had used Windows 3, they might have had a gentler learning curve (though they'd also have had to unlearn some things). The kind of UI I'm envisaging would also evolve, just as the Palm UI is evolving, with new ideas and feedback from users being integrated over time, not imposed all at once. Take that future interface and try to learn it all at once - well, sure, you'd face the same kind of problems that someone from the 50's would have trying to use a modern GUI. That doesn't mean the modern GUI is bad.

CLI can be implemented in a few lines of C code, while a GUI requires tens of thousands of lines of code (contrast the sizes for instance of GNU readline versus MFC). Guess which one is a hit with the end user?

You might be surprised. Some of our staff hate Windows and pine for WordStar and the hierarchical, task-oriented menus we designed for them under DOS. The wide-open spaces of Windows intimidate them. The productivity of some of our staff (including some who like it) has never recovered.

As for the CLI/GUI thing - again I reject the comparison. Both the gesture-sign-speech UI I'm proposing and the speech-only UI are more complex (both to use and to program) than the current CLI. I simply think the gesture-sign-speech one would be more intuitive. After all, that is how people actually communicate, with less than 40% of the information being verbal. Speech-only communication is relatively rare - phone communication, basically. And look how uncomfortable many people are with phones, how many mistakes and misunderstandings happen when most of the context is removed - it's nearly as bad as text-only communication for that.

If speech-only interfaces do become common, the world will be full of people hitting various devices and shouting "That isn't what I meant, you stupid machine."

Much like now, in fact.


--

It is impolite to tell a man who is carrying you on his shoulders that his head smells.
[ Parent ]
I've tried it (4.40 / 5) (#43)
by Joeri Sebrechts on Thu Jan 11, 2001 at 05:49:11 AM EST

In the future, we will instead use voice recognition.

I've tried doing this, with Philips Freespeech. Although it still needs a lot of work, it was actually quite usable. I managed to do away with my mouse almost completely. Simple navigation IS possible using this kind of tech, and dictation is getting better so fast that near-faultless dictation isn't that unreasonable to expect from a PC.
But. It's not going to happen. Maybe dictation is, but we're never ever going to do more than that with voice recognition. Not only because it's hard to express in spoken language some concepts that are SO easy to express with a pointing device, but also for two simple reasons:

- A sore throat
Try talking all day to your PC, if you can. You'd be surprised how much input we feed our PCs. Replace that with voice commands and it just becomes too much for our poor vocal cords to manage. Everyone who's had to talk a whole night non-stop (without listening!) knows how sore their throat is afterwards. Imagine doing that the whole day.

- Ears
There are a lot of them out there, and more importantly, there are two stuck to your head. Imagine how irritating your workplace would be if everyone around you was talking to their PCs. The noise would be so much more irritating than usual, and it's already bad enough. You could claim that by the time voice recognition has replaced regular input we'd have successful soundproofing. Well, good luck finding that. Anything and everything is sensitive to vibration, so the only way you can kill sound is to either use a LOT of soundproofing (I've seen this done, and it costs piles of money), or generate negative vibration that cancels out the original sound.
Anyway, whatever you do to reduce the noise of whole societies talking to their PCs, it's either going to cost too much, or be too irritating (like wearing headphones that generate an inverted soundwave).

So, what do I think we will use for input?
After having used a Palm for a while, I think stylus-operated touchscreens with embedded handwriting recognition (but recognizing YOUR handwriting, not some designer script like Graffiti) are the most likely candidate. The surface of the desk could be your screen (although the surface would need to be tilted towards you, of course, to keep it ergonomic).
The funny thing is that with what's out there on the market TODAY you can already build this kind of setup. Sure, the screen might not be big enough (although you could use several LCD touchscreens acting as one larger touchscreen, using X2X as a workaround), and the handwriting recognition has some rough edges, but it's doable. Now if only I'd win the lottery ...

[ Parent ]

UI or GUI? (2.60 / 10) (#20)
by Cyberdeck on Wed Jan 10, 2001 at 04:43:46 PM EST

I haven't seen a GUI that I really liked. The MS GUI feels awkward, and the Mac GUI tries to hide the underlying machine too much (and makes it difficult for me to use). Both Gnome and KDE have enough bugs still in them to leave me feeling like I should wait until they come out of beta. I haven't tried the BeOS.

My best idea of a UI from movies was the interface used by the HAL 9000 in "2001: A Space Odyssey". Simple speech. Yes, they had screens, but you told the machine what you wanted to do and it did it. No windows, icons, mousing, or pointers to mess with. Just speak and listen. Simple. (yah, right!)

-C
You can never have a bad day when you start it with "FORMAT C:".
Speech interface (4.28 / 7) (#24)
by Kaa on Wed Jan 10, 2001 at 05:46:55 PM EST

My best idea of a UI from movies was the interface used by the HAL 9000 in "2001: A Space Odyssey". Simple speech. Yes, they had screens, but you told the machine what you wanted to do and it did it. No windows, icons, mousing, or pointers to mess with. Just speak and listen. Simple. (yah, right!)

I have a deep suspicion that speech interface is severely overrated. For some things it would be good, but for others it would be a nightmare. One example is editing text, a *very* common activity.

Driving a car is another good example. Speech input would work at a very high level ("Car! Take me to 555 Main Street!"), but imagine it at a low level: "Steer left! More! No, stop! I mean, don't brake but stop steering! Steer a bit back..."

Kaa
Kaa's Law: In any sufficiently large group of people most are idiots.


[ Parent ]

The obvious example... (3.00 / 1) (#48)
by CrayDrygu on Thu Jan 11, 2001 at 10:09:09 AM EST

I have a deep suspicion that speech interface is severely overrated. For some things it would be good, but for others it would be a nightmare. One example is editing text, a *very* common activity.

Well, I have to take the obvious example here -- Star Trek. I think they've got it down nicely. Voice interface for mid- to high-level commands ("Eject warp core!" "Locate Lt. Worf" "Give me a slice of New York cheesecake, please."), PADDs for what may be the ultimate PDA (small, lightweight, focused on a specific application, but with a fast wireless connection to the ship's database), and desktop "terminals" for anything else.

I know, Star Trek references are usually pretty lame, but you gotta admit they got this one right, and given the state of voice recognition, palmtop computers, and Bluetooth, we're closer to this kind of technology than a lot of people realize.

[ Parent ]

Ummm... (4.00 / 4) (#36)
by Mr. Excitement on Wed Jan 10, 2001 at 11:10:54 PM EST

My best idea of a UI from movies was the interface used by the HAL 9000 in "2001: A Space Odyssey". Simple speech. Yes, they had screens, but you told the machine what you wanted to do and it did it.

Most of the time, anyways... ;)

1 141900 Mr. Excitement-Bar-Hum-Mal-Cha died in The Gnomish Mines on level 10 [max 12]. Killed by a bolt of lightning - [129]
[ Parent ]

Logically... (4.26 / 15) (#31)
by sinclair on Wed Jan 10, 2001 at 07:56:03 PM EST

Assumption #1: We have a typical 2001-era computer, which has a limited, rectangular, two-dimensional area for its display.

Plenty of computer displays don't use windows, because they can dedicate the display to one application: game consoles, DVD players, public information kiosks, et cetera.

Assumption #2: A typical 2001-era personal computer runs more than one program at a time.

A personal computer, then, needs a way for programs to share its display. I can think of three ways to do so:

  1. Give each program the entire display, in turn.
  2. Create a unified user interface that integrates all programs.
  3. Sub-divide the display into sections, and give each program a section of the display.

AmigaOS can do 1 in a nicely integrated fashion (and 3, of course), and most versions of Microsoft Windows now can do it with DirectX. (PalmOS, too, although it's technically running only one program at a time.) A command-line interface does 2, although many programs slip into 1. I can't think of many other examples, because it's not terribly practical for GUIs.

That leaves 3. There are many ways to sub-divide a computer display, but since the display is rectangular, and most of what we wish to display fits well in a rectangular area, we can make most efficient use of the display area if we sub-divide the display into rectangular sections.

Et voilà! Windows!

Once we've got this far, we find that most computer displays, even in 2001, are pretty limited in area, so we can't sub-divide them too far (i.e. run all that many programs) before each section becomes impractically small. Therefore, it makes sense to allow these sections to overlap, but that's just a refinement of the window concept.

So yes, I've seen many strange, wonderful GUI concepts out there, but none which can erase the fundamental need for non-dedicated computers to share their display among several programs. IMHO, we won't get away from using windowed GUIs until we get away from using two-dimensional computer displays.

remember the Siemens RTL window manager? (3.50 / 8) (#45)
by nickwkg on Thu Jan 11, 2001 at 08:10:55 AM EST

I did my university dissertation on writing a tiled window manager for X, and the most interesting one I found was Siemens RTL (although it was so old I was unable to get it to compile). It had conventional draggable and resizable windows, but did not allow them to overlap. Instead, dragging/resizing a window over the boundaries of another inactive window would cause the inactive window to move/resize. So windows would push each other around or squeeze against the side of the screen - a neat idea IMO.

Sadly my window manager never got completed, and I am now of the opinion that, unless you have a 1 metre squared screen, the current windowing model is the best.

Full screen! (3.00 / 4) (#53)
by Rainy on Thu Jan 11, 2001 at 03:54:55 PM EST

If not windows... then it's gotta be full screen! If not full screen, it's got to be windows. The real question is how you implement windowing. Here's a neat idea I've been meaning to play around with but haven't so far, because I'm just not a good enough coder yet: a modal window manager, sort of like vim. In command mode you can hit 'q' to close the current window, 'm' to minimize, 'i' (or enter) to enter 'input mode', i.e. start typing into the current window. 'l' can move to the next workspace to the right, 'j' to the next workspace down, etc. '5ml' can move the window 5 units to the right, and so on. 'ctrl-j' or some other 'easy' combination will exit from input to command mode (because you need esc for some apps, like say vim). The selling point is that you can have a lot of easily typed commands in command mode that would normally take up awkward key combinations.
--
Rainy "Collect all zero" Day
RE: Full screen! (none / 0) (#60)
by bluebomber on Fri Jan 12, 2001 at 09:02:27 AM EST

If not windows... then it's gotta be full screen! If not full screen, it's got to be windows.

I'm not sure that I see why. E.g., what if your only interface was through a browser? I guess, technically, the buttons and form elements displayed in the browser could be considered "windows" of a sort, but I don't think this was the intent of the original question.

There are numerous possibilities for user interfaces. Think of all of the computer-controlled devices that you use every day. Does your microwave have either a full-screen or windowed interface? How would you describe the interface to your cell phone? I've seen a couple of PDA/palmtop-type devices that have email, calendar, address book, and simple notetaking/text editor applications built in. I guess you could say that each of these runs "full screen", but not really, because icons for the other applications are displayed across the top (or bottom?) edge of the screen, allowing the user to switch back and forth between applications.

Anyway, that's just my (sort of long-winded) way of saying "open your mind to the possibilities". To use a cliche, "think outside the box"...


-bluebomber
[ Parent ]

Um, there's some misunderstanding here. (none / 0) (#61)
by Rainy on Fri Jan 12, 2001 at 05:58:08 PM EST

What I understand by 'windows' is that each application takes up part of the screen and can be moved around, resized, etc. A browser can either be in a window or full-screen, i.e. lynx open in a Linux console (or DOS console) is a full-screen interface, while Netscape open in X is a window. Buttons *inside* the browser have nothing to do with this. I suppose you could say that a frame is a sort of non-resizable (by the user) window. But essentially, the story talks about the windows paradigm as first introduced by PARC, I think, which augmented the full-screen interfaces of the day.
--
Rainy "Collect all zero" Day
[ Parent ]
one main window, lots of peripherals (3.40 / 5) (#55)
by kimbly on Thu Jan 11, 2001 at 05:20:39 PM EST

How about something like a daisy flower? Your current application would be in the middle of the screen, taking up say 75% of the viewable area. All other windows are smooshed into one of the petals around the edge. You can quickly switch from one app to another by just clicking one of the petals. And you get a kind of overview of all the apps at once. The advantages of this are: no window is obscured, you never have to rearrange windows, and no space is wasted displaying a useless background image.

The only other mode of usage that I think would be useful would be the ability to compare two windows side-by-side at full size. So modify it a little so that the middle can be shared by two windows at once. But that would be an unusual mode that you would have to explicitly ask for.

One drawback: it's hard to quickly minimize your time-wasting application of choice when the boss comes around to check on you.

That's a great idea (3.00 / 2) (#56)
by DJBongHit on Thu Jan 11, 2001 at 10:35:48 PM EST

How about something like a daisy flower? Your current application would be in the middle of the screen, taking up say 75% of the viewable area. All other windows are smooshed into one of the petals around the edge. You can quickly switch from one app to another by just clicking one of the petals. And you get a kind of overview of all the apps at once. The advantages of this are: no window is obscured, you never have to rearrange windows, and no space is wasted displaying a useless background image.

That sounds like it would be an excellent way to manage the screen for most usage, especially if the little windows updated in real time (or at least more smoothly than the little thumbnail windows that Gnome can do). This is probably pushing X's limits a bit, though. Also, like you said, you'd need to be able to put the windows in other arrangements, and there should be keystrokes to quickly switch between arrangements.

But that really does sound like a neat idea - if you're thinking about actually coding up something like this, let me know, I'd be willing to help out a bit.

~DJBongHit

--
GNU GPL: Free as in herpes.

[ Parent ]
'tis hard... (none / 0) (#62)
by kimbly on Sun Feb 18, 2001 at 07:02:03 PM EST

I looked into doing it, based on aewm, which is a really simple window manager. I decided that I didn't want to just relocate non-active windows and push them to the sides, because the apps inside them probably wouldn't scale themselves down -- so you'd basically just see the upper-left corner of the program, instead of a smaller version of the whole thing.

I got relatively far in the implementation, but then I found a large obstacle: apparently there is no way to get the contents of a window that isn't actually visible on the screen. XCopyArea just doesn't do anything in that case. So I think that if I really wanted to do this, I would have to create some kind of imitation X server -- so applications talk to the imitation, which passes on most calls to the actual X server. The imitation would then be in a position to intercept all draw requests from applications, and thereby gain access to their pixels even if the app isn't being displayed. But this is more work than I'm currently interested in doing, so I've dropped the idea for now.

[ Parent ]

larswm (3.50 / 2) (#57)
by evvk on Fri Jan 12, 2001 at 03:40:35 AM EST

I think larswm does something close to what you described. I don't find it very usable, but at least it is an experiment with a different kind of window management idea (just like Ion). That's what we need: experiments and research, not just thousands of window managers that look and behave the same.

[ Parent ]