and will use whatever anti-aliasing functionality the
underlying toolkit provides. Application programmers will use only one API
...and if that one API handles aliasing and anti-aliasing the same way, that choice can percolate all the way up to give the user a choice at runtime. Or, any code thus written can easily be repurposed in another app with a different context that calls for a different aliasing decision. The same code could display on a portable phone's LCD, a desktop displaying text, a desktop displaying a print preview (to use your good example), or a Jumbotron.
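As a rough sketch of what "one API, runtime choice" could look like: the aliasing decision lives in a display context object rather than in application code. All of the names here (DisplayContext, draw_text) are invented for illustration, not any real toolkit's API.

```python
from dataclasses import dataclass

@dataclass
class DisplayContext:
    """Hypothetical output context; antialias is chosen per device/user at runtime."""
    name: str
    antialias: bool

def draw_text(ctx: DisplayContext, text: str) -> str:
    # The application makes one call; the context decides the rendering mode.
    mode = "anti-aliased" if ctx.antialias else "aliased"
    return f"[{ctx.name}] {mode}: {text}"

# The same application code serves very different displays:
phone = DisplayContext("phone LCD", antialias=False)
jumbotron = DisplayContext("jumbotron", antialias=True)
print(draw_text(phone, "hello"))
print(draw_text(jumbotron, "hello"))
```

The point is only that the decision is data, not code: repurposing the app for a new display means constructing a different context, not rewriting drawing calls.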
Anti-aliased or not, fonts are polymorphic. Anti-aliased fonts carry more information and are not theoretically less readable. Perhaps a future implementation will be more to your liking, or a future display technology will produce something you find more pleasing anti-aliased. An application programmer should be able to write code today that benefits from future improvements.
make it more transparent to the user (who will never see that audio is being handled by
The programmer is also a user, as is the sysadmin in charge of the network. It should be transparent to all of them when they are not actually dealing with the differences. Pausing the stream should pause both audio and video so they don't need to be resynced. A programmer is going to have to accomplish this task at some point. When that happens, it should be inside a shared API call so everybody does not have to reinvent the wheel.
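A minimal sketch of that shared call, with all names (AVStream, MediaStream) invented for illustration: callers only ever pause the combined stream, so the two substreams can never drift apart.

```python
class MediaStream:
    """Hypothetical single-medium stream (audio or video)."""
    def __init__(self, kind: str):
        self.kind = kind
        self.paused = False

    def pause(self):
        self.paused = True

    def resume(self):
        self.paused = False

class AVStream:
    """One abstraction over both substreams; nobody pauses them separately."""
    def __init__(self):
        self.audio = MediaStream("audio")
        self.video = MediaStream("video")

    def pause(self):
        # A single shared call keeps audio and video in lockstep.
        self.audio.pause()
        self.video.pause()

    def resume(self):
        self.audio.resume()
        self.video.resume()

av = AVStream()
av.pause()    # both substreams stop together; no resync needed on resume
av.resume()
```

Solving the synchronization once, behind this call, is exactly the "don't reinvent the wheel" point: every application gets correct pause/resume for free.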
That's not to say that your other points are not important; they are, and your suggestions should be heeded. But there are more needs to be served than just the ones you are raising, and those should be served as well.
Some of this may be terminology. When force-feedback, smellovision, and other VR technologies become mainstream, they will need to be integrated into the client-server-display-stream API as well. You might say, "that's not Xwindows," and maybe it isn't. But then we should not run Xwindows; we should run Xsensurround, and maybe it's implemented in terms of low-level protocols nobody ever heard of called Xwindows and ESD. In this paragraph I'm making the point that it doesn't matter what you call the abstraction: whatever the user thinks of as one abstraction should be observed as one abstraction by the programmers, systems, networks, etc. In some sense, turning down the volume should turn down the smell too, and the relationship between the two (linear? logarithmic?) might depend on the user, or the species, or who knows what, but once solved it should not be the concern of everybody else who encounters the abstracted stream.