
Computer Science - Have we actually learned anything?

By porkchop_d_clown in Technology
Sat Apr 13, 2002 at 06:51:28 AM EST
Tags: Software (all tags)

Hopfrog's discussions of error handling and variable naming started me thinking. I've been programming longer than (I suspect) most k5 readers have been alive. But even today, the concepts and languages programmers use are the same ones I learned when I was still a kid.

Have we actually learned anything about programming since I first broke into the AP math classroom so I could play with their new TRS-80?


So, some background. My first "computer" was a calculator that had fifty keystrokes (not bytes) of memory.

My second computer was that aforementioned TRS-80. It had a 2 MHz processor, 4K of RAM and an audio cassette for data storage.

In those days, you used BASIC on microcomputers and professionals used ForTran and COBOL on big iron. C and Pascal existed but were rare (and, frankly, in those days Pascal was better).

Most of your skills as a programmer were dedicated to overcoming the limitations of the hardware. Obnoxious programming shortcuts, mind-bending machine language coding tricks, sophisticated algorithms - all were part of the game. Real professionals argued about the relative efficiency of different hashing techniques, and they invented every technique and language feature in use today.

Structured programming languages, List Processing, object orientation, rules-based expert systems - most of it was laid down by 1970 and the rest was in place by the early 1980s.

In the meantime a strange reversal happened. Programmers stopped having to deal with the limitations of the hardware and, instead, hardware started dealing with the limitations of programmers. CPU designers started adding instructions that were only of use to compiler writers and, thanks to Moore's law, RAM and processor speed grew so fast that no one cared if Joe Programmer knew the difference between QuickSort and BogoSort.

Instead of skill, programmers began simply writing incredibly bloated code. I've written payroll software that ran on the 4k computer I mentioned above. I've written adventure games that ran on a calculator with 4.6k of RAM and a processor speed that was measured in kilohertz. These days simple hello-world programs compile to over 128k in size, and a letter to grandmom can require over a megabyte of disk storage!

Call me an old fart, but the fact is that nothing on my 500 MHz iBook, or my gigahertz PC, actually makes me more entertained than Ultima III did, or more productive than SpeedScript.

I enjoy playing with my toys, but can anyone actually dispute me? Can you show me something, somewhere in the software business that is actually less than 20 years old?

Java? Please. Syntactically, yes Java does OO very well. But OO itself has been around about as long as I've been alive and virtual machines have been around since before the UCSD Pascal I wrote software for on the Apple II. C++? I remember when C++ was a pre-processing hack on K&R C - and it's still a hack, although the standard libraries have certainly improved.

What else? Ummmmmm.... I'm thinking, but I can't think of a single idea in programming that I learned after I graduated school.

Can you teach me something?


Poll
What was your first language?
o BASIC 69%
o Pascal 9%
o C 9%
o ForTran or COBOL 3%
o ALGOL or PL/I 1%
o Java 2%
o Perl 1%
o Lisp 1%

Votes: 282


Computer Science - Have we actually learned anything? | 263 comments (246 topical, 17 editorial, 0 hidden)
implementation lags behind theory (4.20 / 10) (#1)
by Delirium on Fri Apr 12, 2002 at 08:24:49 PM EST

There's a ton of theory that's been written up in just the last ten years. OO is mostly 1970s and early 1980s work yes, but OO is not the epitome of modern programming -- it's simply the newest paradigm that's caught on for widespread use. Aspect-oriented programming is one particularly popular field of research not yet in widespread use. Parallel programming? Security-typed languages? Prediction-based caching algorithms? All these have had major advances in "modern" times. Actual programming will catch up to the theory (usually) within another ten years or so, though much of the cutting-edge theory is already finding its way into compilers.

And if you consider C++ a pre-processor hack on C, then you obviously don't know C++. It's got a lot of stuff thrown in it, but it certainly is not just "C with classes" anymore.

Parallel programming is *not* modern. (5.00 / 2) (#10)
by porkchop_d_clown on Fri Apr 12, 2002 at 08:50:12 PM EST

Hypercube topologies were being explored while I was still in school and clusters and vector processors go back to the 60s. And, if I remember correctly, wasn't Occam a language designed just for parallel processing?

Predictive caching - actually, that's one of the things I mention in the article, chips being designed to get around the limits of the programmer. When was the last time a VB programmer worried about it?

C++ - as I mentioned, the libraries included with C++ have certainly improved, but libraries are not the same as the language itself.

Technology finding its way into compilers - okay. How many compilers does the world actually need? I gotta tell you, it's been a good ten years since I've seen a compiler that wasn't a flavor of GCC. And again, adding optimization techniques to the compiler is a way of getting around dain bramaged application programmers - it doesn't improve the quality of their programs and, in fact, encourages poorer programming techniques by those self-same application coders.

Security-typed language - now, there is a term I am unfamiliar with. Tell me more?


--
Uhhh.... Where did I drop that clue?
I know I had one just a minute ago!


[ Parent ]
new stuff (5.00 / 6) (#15)
by Delirium on Fri Apr 12, 2002 at 09:06:02 PM EST

Parallel programming is certainly not new. Solutions to important parallel programming problems (especially efficient solutions) are new though -- with the advent of commercially available SMP systems there's a huge demand for efficient SMP. There've been significant advances in writing multiprocessor-efficient operating systems; some of them are just implementations of old theory, but there's been new theory as well.

And of course predictive caching isn't done by the VB programmer. It shouldn't be done by an applications programmer. That's something best handled by optimized algorithms that dynamically handle caching. There has been a bit of a debate over the possibility of adding caching hints to programming languages (i.e. "I will use this data again often, please keep it cached"), but it's generally been concluded that algorithmic solutions are both less onerous to the programmer and more efficient in most cases, because the dynamic system can compensate for current conditions on the fly. And with modern processing such systems have overhead in the 1% range. Same thing for garbage collection -- with any significantly complex program the GC can do a more efficient job than you can, and the program will be more maintainable and less buggy as well. In general I tend to favor this sort of abstraction. For example, was the slight speed increase that unbounded buffers offer in C worth the literally thousands of serious security holes caused by buffer overflows?
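
To make that trade-off concrete, here is a rough sketch (the function names and the 16-byte buffer are just made up for illustration):

    #include <string.h>

    /* Unbounded copy: writes past the end of buf if input is longer than 15
       characters -- the classic overflow. */
    void risky(const char *input) {
        char buf[16];
        strcpy(buf, input);
    }

    /* Bounded copy: costs a length check per byte, but can never write outside buf. */
    void safer(const char *input) {
        char buf[16];
        strncpy(buf, input, sizeof buf - 1);
        buf[sizeof buf - 1] = '\0';
    }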

As for C++ I was talking about the language itself. Things like templating, virtual functions, etc. are completely impossible in C. The libraries are a nice addition of course.

I'd say optimization techniques should be in the compiler. With the complexity of modern systems it's simply impossible for a programmer to take care of the optimization issues. For example take the upcoming Itanium. It has a deeply pipelined superscalar architecture which results in multiple "delay slots" (similar to those on the SPARC) that cause out-of-order code execution on branches. For your program to be optimal, you have to fill these with code that can be executed out-of-order (otherwise you just fill them with a NOP, which is suboptimal of course). To do this, you have to program in pure assembly, as this is working on the single-instruction level -- you can't even do that in C. But a compiler can do it for you just fine.

In fact the compiler with any complex program can probably do it better than you can, because it can scan thousands of lines of code and millions of permutations and find the most efficient way to do things. Similar reasoning to why all modern C compilers ignore the "register" keyword -- because they know that they can manage the registers better than you can. Soon garbage collectors will be more efficient than manual memory management as well, if they aren't already for some applications.

I wouldn't call any of this "poorer programming" -- it's good programming practice because it's encapsulation. Good optimization requires arcane knowledge of minute architecture details, and is impossible to do in a portable fashion. Better to implement this optimization in a very robust way once on each system -- in the compiler -- and have the programmer write higher-level portable code. Thus when new optimization techniques are discovered you just modify the compiler, instead of every single program. And every single programmer doesn't have to be an architecture expert -- just the compiler writer does.

Security-typed languages have many variations, but the ones I'm familiar with are designed to be used with operating systems that have fine-grained information security policies (none of which are in widespread use yet). For example, a file in such an operating system might be marked "can be read by any user, but cannot be copied or transmitted off the system." Then your program (written in a security-typed language) will open it in "read-only with no retransmission privileges" mode, and the compiler will enforce this. If you try to do anything prohibited to the file, the compiler will issue an error and refuse to compile your code. Of course to be effective this requires that the system be a mainframe-type multi-user OS with a trusted compiler and no way to run programs that were not compiled locally. But it's being researched particularly vigorously by places like the CIA that care about information security, and some corporations are interested as well.

[ Parent ]

Huh? (5.00 / 2) (#111)
by pb on Sat Apr 13, 2002 at 12:53:08 PM EST

Templates can be done through macros. Virtual functions are done through function pointers.

The only thing C++ does is bolt support for this into the language and give it a new name; of course you can do it all in C.
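
For instance, a rough sketch of a macro-based "template" (DEFINE_MAX and the generated names are just made up for the example):

    #include <stdio.h>

    /* Expands to a separate max function for each type it is instantiated with. */
    #define DEFINE_MAX(T) \
        static T max_##T(T a, T b) { return a > b ? a : b; }

    DEFINE_MAX(int)       /* defines max_int()    */
    DEFINE_MAX(double)    /* defines max_double() */

    int main(void) {
        printf("%d %f\n", max_int(2, 3), max_double(1.5, 2.5));
        return 0;
    }
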
---
"See what the drooling, ravening, flesh-eating hordes^W^W^W^WKuro5hin.org readers have to say."
-- pwhysall
[ Parent ]
virtual functions (5.00 / 1) (#162)
by Delirium on Sat Apr 13, 2002 at 08:28:34 PM EST

Templates can be done through macros. Virtual functions are done through function pointers.
Yeah, templates can be done in C with macros (though it's even messier than C++ templating is), but I don't think you can do automatic virtual function dispatch in C. In C++ for example you have a pointer declared as being a pointer to the virtual class's type, but it actually points to one of the derived classes; when you call a member function it calls the right one automatically. In C you could approximate the "pointer of one type can point to multiple types of objects" by using a void*, but then if you wanted to call a function to operate on this object, how would you know which function to call? There's no way to set up automatic virtual function dispatch -- you have to have some manual way of determining which type the void* actually points to and then calling the appropriate function, in which case you might as well not have bothered in the first place.
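
To spell out what I mean, a minimal sketch (Shape and Circle are just illustrative names):

    #include <iostream>

    struct Shape {                       // base class with a virtual function
        virtual void draw() const { std::cout << "shape\n"; }
        virtual ~Shape() {}
    };

    struct Circle : Shape {              // derived class overrides draw()
        void draw() const { std::cout << "circle\n"; }
    };

    int main() {
        Circle c;
        Shape* p = &c;   // declared as pointer-to-base, actually points to a Circle
        p->draw();       // prints "circle": the right override is chosen automatically
    }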

[ Parent ]
Incorrect (5.00 / 1) (#171)
by statusbar on Sat Apr 13, 2002 at 09:19:28 PM EST

Virtual functions most definitely can be done in plain old C, and quite often were.

All you do is make the first item in your struct a pointer to your virtual table. You set this pointer when you create the object. The virtual table itself is just a static struct containing pointers to the various virtual functions.

To call a virtual function, with single dynamic dispatch, all you do is something like:

self->vtbl->Clear( self );

Yes it is more wordy, but this is fundamentally what the C++ compiler is doing behind the scenes. It was a common 'pattern' in C long ago, and as such it made sense to make C++ make it less wordy to construct, define, and use these function pointers.
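
Spelled out, the whole pattern looks roughly like this (Widget and the function names are just made up for illustration):

    #include <stdio.h>

    struct Widget;                              /* forward declaration */

    struct WidgetVtbl {
        void (*Clear)(struct Widget *self);     /* one slot per virtual function */
    };

    struct Widget {
        const struct WidgetVtbl *vtbl;          /* first member points to the vtable */
        int value;
    };

    static void Widget_Clear(struct Widget *self) { self->value = 0; }

    static const struct WidgetVtbl widget_vtbl = { Widget_Clear };

    static void Widget_Init(struct Widget *self) {  /* "constructor" sets the vtable */
        self->vtbl = &widget_vtbl;
        self->value = 42;
    }

    int main(void) {
        struct Widget w;
        Widget_Init(&w);
        w.vtbl->Clear(&w);                      /* single dynamic dispatch */
        printf("%d\n", w.value);                /* prints 0 */
        return 0;
    }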

So C++ does not expand the field of computer science much, except in the area of 'generic programming', which most people do not do. Although they may utilize the STL, which uses generic programming techniques, most programmers do not write their own code with generic programming techniques.

And even in that respect, generic programming in C++ is really at the level that object-oriented programming in plain C was at before - overly wordy, prone to obscure errors. Expect newer languages which make generic programming better than it is in C++, perhaps better than Ada95 is.

--Jeff

[ Parent ]

C vs C++ (none / 0) (#191)
by boris on Sun Apr 14, 2002 at 09:15:43 AM EST

Indeed, virtual functions can be simulated in C using this technique. But there's another feature of C++ that cannot be implemented in C: constructors/destructors. OK, so technically you can do without constructors - just call the construction code explicitly. But no amount of C trickery will buy you the functionality of destructors.

With properly designed destructors, you may acquire any resource, be it an allocated piece of memory, an open file, or a reference-counted object, and never worry about releasing it. No matter how a block of code is structured or how many exit points it has - you're guaranteed that everything will be cleaned up. This is especially useful when you have to acquire several resources, and on failure, release everything you've acquired so far (common scenario: open input file, open output file, allocate some memory, etc.). With destructors, the code is more readable, and life is simpler :-)

You can even fake exceptions in C - but you can't fake destructors.
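
A rough sketch of the idiom (FileHandle and copy_first_byte are just names I made up for the example):

    #include <cstdio>
    #include <stdexcept>

    class FileHandle {
        std::FILE* f_;
    public:
        FileHandle(const char* path, const char* mode) : f_(std::fopen(path, mode)) {
            if (!f_) throw std::runtime_error("open failed");
        }
        ~FileHandle() { if (f_) std::fclose(f_); }  // runs on every exit path
        std::FILE* get() const { return f_; }
    };

    void copy_first_byte(const char* in, const char* out) {
        FileHandle src(in, "rb");    // if this throws, there is nothing to clean up
        FileHandle dst(out, "wb");   // if this throws, src is closed automatically
        int c = std::fgetc(src.get());
        if (c != EOF) std::fputc(c, dst.get());
    }                                // both files closed here, however we leave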

[ Parent ]

Templates can't be done through macros. (none / 0) (#194)
by i on Sun Apr 14, 2002 at 11:12:32 AM EST

Partial specialization is the name of the game nowadays.
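
A minimal illustration of what the preprocessor cannot express (IsSame is just a made-up name):

    #include <iostream>

    template <typename T, typename U>
    struct IsSame { static const bool value = false; };       // primary template

    template <typename T>
    struct IsSame<T, T> { static const bool value = true; };  // partial specialization

    int main() {
        bool different = IsSame<int, long>::value;  // false: primary template
        bool same = IsSame<int, int>::value;        // true: the specialization matches
        std::cout << different << " " << same << "\n";  // prints "0 1"
    }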

and we have a contradicton according to our assumptions and the factor theorem

[ Parent ]
Pleeeaze... (3.16 / 6) (#4)
by maroberts on Fri Apr 12, 2002 at 08:37:03 PM EST

I voted your article up but disagree profoundly with what it says.

I looked in my Stroustrup C++ Programming Language first edition and C++ didn't hit the road till '83, with C with Classes out in '80. There's no denying that C++ does make you more productive compared to C, and the increasing OO emphasis in most languages does have a return in development speed.

You may have written adventure games that ran on a calculator, but did they have the beauty of Myst or the fun of Diablo? [FWIW I was at the University of Essex in the early 80s when the first multiplayer MUD was being developed there]

Your payroll software - how much analysis could you perform on your employees? You can do lots of financial analysis on huge numbers with very little effort.

The other area I can think of that has come on in leaps and bounds is computer graphics, and I'm not just talking about the hardware. Fog, T&L, bump mapping, various shading techniques, etc.

Next there's the computer interface. OK, we owe a big debt to Xerox at Palo Alto, which was early 80s, but in terms of functionality it has come a long way since then.

Mind you in your favour I note it takes longer to load Word 2000 than it does Word 1.0 for DOS!

P.S. First decent program I wrote was a copy of Pong for an Acorn Atom in 6502 Assembler [1980-1981]

~~~
The greatest trick the Devil pulled was to convince the world he didn't exist -- Verbil Kint, The Usual Suspects
Spiffy! (4.50 / 2) (#19)
by porkchop_d_clown on Fri Apr 12, 2002 at 09:14:00 PM EST

Let's tussle.

C++ came out in 83? Okay. But Stroustrup didn't invent OO, he retrofitted OO ideas that were born from much older languages, like Smalltalk and (I think?) Simula(?).

OO does improve productivity, no argument - but like I said, OO was thought up in the 60's and 70's.

As for Diablo and Myst, yup. Beautiful games; visually stunning. Beat them both in considerably less time than I invested in Ultima III or Elite, but maybe I'm just astonishingly good at games. (probably not, though). Still, Delirium is right - I threw games in as a side whine and we should probably have a separate "best games ever" story to argue that out. (And, I will admit that the original 3d techniques developed by the guys at Id and so forth were absolutely amazing - but bump mapping was old when I was in school. I wrote Fortran programs that rendered tanks and teapots on the PR1ME's we had at Drexel).

Xerox Parc and the WIMP interface were early 80s? Dude, when I started at Drexel in 82-83, we were already starting to get Apple Lisas shipped in. Macintosh was 1984. Remember the commercials? Windows, Icons, Mouse and Pointer go back a lot farther than that. And we're still using the same basic desktop metaphors they originally used. In any case, WIMP is user interface stuff, not Comp Sci.

As for that payroll package - in theory, 500 employees stored on that tape cassette. Don't think we ever tried that many though. But my dad was still using my C64 and the fuel-tax program I wrote for him when he died in 2000. It had a small database for tracking when/where he had bought fuel for his semi - because the states make you pay extra taxes if you drive your truck on their roads but don't buy enough diesel in their state.

You used Acorns? Cool. I had a lot of fun with the 6502 processor, although that was on the C64.


--
Uhhh.... Where did I drop that clue?
I know I had one just a minute ago!


[ Parent ]
Theory and practice (none / 0) (#61)
by maroberts on Sat Apr 13, 2002 at 05:17:15 AM EST

You are correct in saying that a lot of the spadework had been done in the 70s, but there was precious little sign of it in the real world until the late 80s and early 90s. It's the difference between theories being discussed in the academic world and coming into mainstream use.

There are a few items that spring to mind as starting in the early 80s and continuing today - compression techniques and cryptography. [I checked the filing date to be sure ;-0 ] In addition, JPEG, MPEG, Ogg and MP3 compression techniques have all come along in leaps and bounds. These techniques are Comp Sci without a shadow of a doubt, whether hardware or software implemented. Public key encryption started in the 70s, but a lot of work, enhancement, research and papers have gone on since then to make it the commonplace tool it is today.

Part of your argument is true - a lot of the basic work was done then, but a lot of comp sci is evolution, not revolution. If you teleported someone from 1980 I'm willing to bet he would not be so blase about the changes in PCs and what they are now capable of as you are.
~~~
The greatest trick the Devil pulled was to convince the world he didn't exist -- Verbil Kint, The Usual Suspects
[ Parent ]
CS education has changed over the years... (4.57 / 7) (#12)
by jeffy124 on Fri Apr 12, 2002 at 09:02:08 PM EST

I think I'll run away with how college has changed. Years ago CS was about what you discussed - which hash algorithm is best? Is this grammar SLR(1)? Is this problem NP-complete?

Schools today (I should know - I'm an undergrad) still teach these topics, but the mentality of students has changed. Years ago it was all about theory and stuff. Nowadays you only find that in grad students and undergrads who will go straight to grad school.

Students today are all about "My box is faster", "I got Linux working!", and the leetness type stuff. The reason for this is the commercialization of computing. Students enter college looking to get 50k/year upon graduation (myself included), not to improve the theory. A few students mature and find their niche in wanting to improve things. Those are the ones that end up in grad school and with the PhDs.

In answer to the question "what has been learned in the past 20-30 years?" -- You have me stumped. In terms of what can be done with computers, too much to list, but I can't think of anything in terms of truly major breakthroughs.

For the record - my first language was QBasic (a superset of BASIC) in MS-DOS 5.0 ten years ago. I'm now 22 and in my 4th year of a 5-year undergrad program. I work in a research lab developing tools for static code analysis (buzzword version: Reverse Engineering). I'm the type of student who will find himself in grad school right after I get the BS.
--
You're the straw that broke the camel's back!
And what's wrong with this? (5.00 / 1) (#22)
by skim123 on Fri Apr 12, 2002 at 09:20:40 PM EST

I think I'll run away with how college has changed. Years ago CS was about what you discussed - which hash algorithm is best? Is this grammar SLR(1)? Is this problem NP-complete?

Right, because then it was harder to program, to use computers, etc., so only those with formal training and a knack for math were computer scientists. Kind of like how your average Joe isn't a chemical engineer.

Schools today (I should know - I'm an undergrad) still teach these topics, but the mentality of students has changed. Years ago it was all about theory and stuff. Nowadays you only find that in grad students and undergrads who will go straight to grad school.

IMO, this is a Good Thing. It shows that computers are easier to use, easier to program, and have permeated into our everyday lives. More people using a technology is good, even if bringing in more people also brings in those who are not into the theory, but into leetness.

In answer to the question "what has been learned in the past 20-30 years?" -- You have me stumped. In terms of what can be done with computers, too much to list, but I can't think of anything in terms of truly major breakthroughs.

Egad. Please get a subscription to the ACM Digital Library and start reading the compilations from years past. Granted, once Turing and Church laid out the theoretical underpinnings of computation, it became clear what computers could and could not do; however, every year there are a plethora of new ideas, new algorithms, new ways for solving old problems, new problems coming up that get solved, etc. You make computer science sound like a dead science, which it hardly is.

Money is in some respects like fire; it is a very excellent servant but a terrible master.
PT Barnum


[ Parent ]
State of Students (3.00 / 2) (#80)
by Matrix on Sat Apr 13, 2002 at 09:57:43 AM EST

Also being an undergrad, I agree with the original poster that the state of CS students these days is horrible. I don't think it bodes well for the field (and industry), but I'm just a tiny bit pessimistic. ;)

Most of these students aren't interested in theory. Ordinarily, this would not be a problem. At the risk of being speared and roasted over an open fire, I'm going to claim that you don't need to know too much theory to be a good programmer. General things, like what you can do easily, what different algorithms and data structures do well, yes. But that's more knowledge of good software engineering practices.

Which is where we run into problem #2. Most students aren't interested in that, either. You start talking about the advantages of learning program generators or alternative software models (even, in some extreme cases, OO), and they wonder why they should bother learning another language. You start talking about design practices or rigorous testing, and they wonder why anyone would need or want to take the time to use it.

Part of the blame does lie on the students, yes. We've all been told by the news media that the tech industry is the place to be. Big money, easy! (Pfft. Yeah, right) Part of the blame lies on the professors and administrators who're trying to convert "Computer Science" to "Programming School" and failing miserably.

I've no idea if this is a trend specific to my school, but it seems to be quite widespread. And solutions escape me. Waiting for these people to get out into the real world, discover how things actually work, and run back to get a different degree doesn't strike me as particularly effective. It screws over the rest of us who actually want to learn something, and will probably swamp companies with job applications, making it harder for those of us who are actually interested to get jobs.


Matrix
"...Pulling together is the aim of despotism and tyranny. Free men pull in all kinds of directions. It's the only way to make progress."
- Lord Vetinari, pg 312 of the Truth, a Discworld novel by Terry Pratchett
[ Parent ]

And how much "real world" experience hav (4.50 / 4) (#119)
by skim123 on Sat Apr 13, 2002 at 01:54:00 PM EST

Waiting for these people to get out into the real world, discover how things actually work, and run back to get a different degree doesn't seem to strike me as particularly effective

You seem to assume that the tech jobs in the "real world" require kick-ass developers who are well versed in computational theory. Such a statement makes me wonder what your real world experience(s) have been. I am a mere 23 years old, so I am not trying to say that I've been around the block and then some, but I've worked at both Microsoft and a computer consulting firm. At Microsoft the theory was important, and the top developers were hired. At the consulting firm, knowledge of VB was essentially all that was required. There were many employees at the consulting company that fit your model of "programming scientists."

There are tons of jobs out there for those middle-of-the-road programmers, for the majority of computer scientists, who are programmers first, scientists last (if at all). And there are also jobs for those scientists (albeit many fewer ones). Personally I don't think that having more people in computer science is a bad thing. Yes, having more people will bring down the average IQ, the average aptitude, the average interest in theory, but it will broaden the field, provide jobs for many more people than before, and bring technology more to the masses. If computer science were something that could only be done by folks who invested decades of their life in study and research, then nearly all of the computer-related things we enjoy today would not be possible.

Money is in some respects like fire; it is a very excellent servant but a terrible master.
PT Barnum


[ Parent ]
Neither Programmers Nor Scientists (5.00 / 2) (#181)
by Matrix on Sat Apr 13, 2002 at 10:49:42 PM EST

When I begin by wondering if you read my post or just skimmed it, it's not a good sign.

No, I do not have much real world experience. However, I work with people who do. And when I have far more than practically any of my classmates, I begin to see a serious problem developing. Of course, that is supposedly why we have a co-op program, but I digress.

The problem occurs when you get people who are not interested in the theory (which is perfectly fine, I'm not sure that I am) and are not particularly interested in improving their practical software engineering skills. These people aren't "programmers first, scientists last (if at all)". They're neither programmers nor scientists. They don't care about good programming, development practices, or even just doing things right. They just want to do the minimum amount of work necessary to get one of those wonderful high-paying tech jobs that they've been told (since high school) are everywhere, where they can goof off and get $50k CDN a year.

As I said above, they don't need to be "programming scientists". They do, however, need to be competent software engineers. Most of them, from what I've seen, aren't even that. After working at a job helping students with assignments, etc. for two terms, I've discovered that most don't even know the basics of debugging. Much less good design practices, OO or otherwise.

As for bringing technology to the masses, I do not believe that more programmers is the way to do that. I think the last few years have proved that flooding the market with crappy software is not the way to get good software. Or inspire consumer confidence. Or investor confidence. To bring technology to the masses, you need reliability and to do something that they need/want done. To get those, you would seem to need good programmers and a stable base of software and knowledge on which to build.

Unfortunately, "stable base of software and knowledge" seems to describe pretty much exactly what the current mostly-hype-driven computer industry is not.


Matrix
"...Pulling together is the aim of despotism and tyranny. Free men pull in all kinds of directions. It's the only way to make progress."
- Lord Vetinari, pg 312 of the Truth, a Discworld novel by Terry Pratchett
[ Parent ]

Vocational (none / 0) (#261)
by awgsilyari on Mon Apr 22, 2002 at 02:52:03 PM EST

IMO, this is a Good Thing. It shows that computers are easier to use, easier to program, and have permeated into our everyday lives. More people using a technology is good, even if bringing in more people also brings in those who are not into the theory, but into leetness.

If you are a person who wants to become an automotive mechanic, you go to vocational school, not an engineering school. I think the time is getting ripe for computer programming vocational schools -- this huge influx of "practical" students who just want to learn job skills is drowning the few of us who are trying to take CS for the sake of CS.

I'm not being negative about "practical" programmers, or people who want to become them. But universities are being forced to change their traditional CS programs to accommodate the masses of people who are studying CS just in order to get jobs programming computers. The two groups of people need to be recognized as distinct, and the schools should be split.

--------
Please direct SPAM to john@neuralnw.com
[ Parent ]

Invisible Theory (3.66 / 6) (#13)
by Woundweavr on Fri Apr 12, 2002 at 09:02:32 PM EST

Maybe it's not so much that there is no more theory as that there's no additional theory yet implemented? Also, since the programmers were practically working from scratch back then, they had a lot less of a hurdle. That is to say, modern computer science theorists have to master everything that came before and then go beyond that. It's a slower process than it was earlier, as the 'learning' process for them was also their own innovation.

I think most of the development has been with UI over the last two decades, as computers became viable as a commercial product for the masses and a more efficient way of interfacing with the novice became more important than squeezing out those last few bytes. Especially with the advancements in hardware and the scope of applications, it makes much more sense to make a program that runs adequately with a good UI than a tight codebase with a crappy ASCII menu. That, in combination with the sheer size of development teams on the average project, means that if it compiles they'll be satisfied. To get the absolute best efficiency would not be efficient in economic terms, as the added benefit would be less than the added cost.

There is still room for theory, though. Especially among crypto and, for the real far sighted/optimistic, Quantum Algorithms.

Theres one!! (5.00 / 1) (#16)
by jeffy124 on Fri Apr 12, 2002 at 09:09:14 PM EST

Quantum Algorithms

You just hit a nail on the head of what the author might be looking for. I know there's a teacher at my school teaching a course in Quantum Computing; I had him last semester for OS. He said most students are probably gonna drop by the fourth page of the textbook. Quantum is certainly an interesting field, and tons of algorithms have been developed for it, all while having no quantum computer to run them on.
--
You're the straw that broke the camel's back!
[ Parent ]

Quantum Computing... (none / 0) (#29)
by porkchop_d_clown on Fri Apr 12, 2002 at 10:05:33 PM EST

Actually, you've got a point there - it doesn't affect the real world (yet) but I should dig up some books on that subject and see what's going on. From my perspective, quantum computing is sort of like fusion ("The technology of the future - and it always will be") but maybe it is getting closer to reality now.


--
Uhhh.... Where did I drop that clue?
I know I had one just a minute ago!


[ Parent ]
yes and no (none / 0) (#78)
by martingale on Sat Apr 13, 2002 at 09:48:45 AM EST

QC is geared towards different problems than classical computing (phrase dodgy?). For example, it is very unlikely that QC would ever be useful for data entry/database type applications. The terminal at your local bank will always be classical.

What QC is good at, AFAIK, is massively parallel computations at a snail's pace. So you want to factor that integer? Just try out all possible factors *simultaneously*. You'll need to set up the system and perform a measurement afterwards. Then a small amount of classical computing. That's Shor's algorithm.



[ Parent ]
Tons of algorithms? (none / 0) (#86)
by erp6502 on Sat Apr 13, 2002 at 10:41:38 AM EST

The only algorithms I'm aware of that have better scaling properties on a QC than on a CC are
  • Shor's factoring
  • Grover's search
  • Simulation of quantum systems
Please correct this list if you know better..

[ Parent ]
does it matter if we don't teach you anything? (4.00 / 4) (#17)
by SocratesGhost on Fri Apr 12, 2002 at 09:11:48 PM EST

It's called maturity. Things get to the point where they don't really grow anymore, but that doesn't mean that it's stagnating. I haven't grown an inch in the last ten years (except in the waistline), but I think I have some usefulness left.

So what if the principles of CS haven't changed in the past 20 years? The principles of logic haven't changed since Aristotle, but does that mean that we think of it as quaint, old or in decline? No. Even in logic, there's still plenty of work and new ground to cover. There's still new ways of looking at it, and new ways to combine old principles to uncover yet new truths about it.

Honestly, with your experience, it is you who should be creating the new models, uncovering new ground, coming up with unique ideas. Be an explorer, friend! That you haven't says something very interesting: either there is no possible further growth (I think we both don't believe that), or the foundations of CS are still being established and haven't taken sufficient root yet for people to play around with them. We're all still too busy laying the foundations for everyone else while the world catches up. This play time will happen, though. The steam engine existed in Ancient Egypt, but it was a while before anyone did anything new with it...


-Soc
I drank what?


All good points. (none / 0) (#28)
by porkchop_d_clown on Fri Apr 12, 2002 at 09:50:49 PM EST

I guess my main frustration is that the advances in hardware, coupled with marketing-driven software design (i.e., creeping featurism), have led to a regression in the programmers. I'm hoping I'm wrong, hence my story.

You also have a point about helping push the edges outward; it just gets so frustrating listening to all the babies bragging about Linux when I know full well it offers nothing that BSD wasn't doing back in, what, 79? 80? (And, yeah, my day job right now is writing device drivers for Linux for a hardware company.) Meanwhile, when I try to explain to some HR person that learning a new OS won't be hard (since I've already learned 15 or 16 in my career, along with 20 or so variations of programming languages) they get rather, hmmmm, disbelieving - since the average kid out of school barely knows one of each.


--
Uhhh.... Where did I drop that clue?
I know I had one just a minute ago!


[ Parent ]
parable (5.00 / 2) (#30)
by SocratesGhost on Fri Apr 12, 2002 at 10:06:07 PM EST

back when i was on the other side of the software fence (as the client requesting features, instead of programming those features for clients), my IT department asked, "What reports would you like to have?" I named a few. Then they said, "Did you know that we can do this, that & the other thing with these reports? Would you like us to do that, too?" I said, "Yes! I didn't know you could do that. What else can you do?" They said, "We can help meet your needs, but you have to tell us what they are. Tell us what you need and I'll see if we can do it."

of course, here i go spoiling the nature of parables by explaining what i mean by it:

I'm sure you've said this to clients. I say it to them all the time now. But let's look at it from the perspective of CS as the science that should advance, and we are the clients asking for it to do things. From that point of view, we are asking what is in the science's bag of tricks that we can use to exploit the world. Of course, it will shrug and say, "Ask me to do something and I'll do it. Tell me what you need." But in our case, it isn't another entity that will create the solution: it's us. When the needs arise, advances will come. Perhaps if it is your need, you'll be the one to create it.

Best of luck! I'm glad I don't feel your frustration. For me, it's still a brand new world.


-Soc
I drank what?


[ Parent ]
Programming is changing (none / 0) (#33)
by CaptainSuperBoy on Fri Apr 12, 2002 at 10:25:06 PM EST

The field of programming is changing, and I think most current programmers are worried at least a little bit. I know I am, but I am trying to be realistic and plan ahead. I believe that developing custom software is getting progressively easier. This means that sooner or later, your average operations manager will sit down and write a custom software package that handles all aspects of business. The time will come when any user will be able to sit down and write an app of the same quality as an experienced developer right now. This will be enabled by faster hardware and huge advances in software development tools. Just look at what spreadsheets enabled the average user to create! This empowerment will continue.

This isn't happening tomorrow, but I do anticipate being forced to change careers sometime during my life. I think there will always be a market for the kind of analytical skills we programmers gain, just not jobs developing most kinds of software.

--
jimmysquid.com - I take pictures.
[ Parent ]

Your Main Frustration (none / 0) (#151)
by Kwil on Sat Apr 13, 2002 at 05:59:02 PM EST

Is absolutely correct.

Programmers are regressing. There are fewer "Iron Men" out there who can figure out a more efficient compression algorithm, or tweak their way through the hardware to eke out that extra iota of performance.

Along similar lines, there are also fewer shoe sellers out there who can make a shoe, fashion consultants who can make clothes, farmers who can use a hand driven plow, or in our society - even fewer people who can make a loaf of bread from scratch.

Yet somehow we manage to have shoes, clothes, food, and bread - and in some cases, better than what could be had previously.

Hell, there have been no real significant improvements to transportation since the jet engine in the 40s. Do we declare that industry stagnant because of it?

So your colleagues are ignorant of history. Too bad for them, I guess.

Personally, I'm happy about the regression in programmers and you should be too, for two reasons:
1. It makes you more valuable personally.
2. It means the tools we do it with are getting better - so more of society is able to benefit.
When society as a whole can benefit, and I can too at the same time.. that's a good thing.




That Jesus Christ guy is getting some terrible lag... it took him 3 days to respawn! -NJ CoolBreeze


[ Parent ]
One word (3.00 / 2) (#65)
by qpt on Sat Apr 13, 2002 at 06:42:44 AM EST

The principles of logic haven't changed since Aristotle
S5.

Domine Deus, creator coeli et terrae respice humilitatem nostram.
[ Parent ]

New stuff... (3.00 / 4) (#20)
by chipuni on Fri Apr 12, 2002 at 09:14:01 PM EST

I enjoy playing with my toys, but can anyone actually dispute me? Can you show me something, somewhere in the software business that is actually less than 20 years old?

  1. Anything dealing with music over a network.
  2. Anything dealing with video over a network.
  3. Design Patterns
  4. XML and all related technologies
  5. Very large-scale (100,000+-machine) parallel programs.
  6. Fuzzy logic

--
Perfection is not reached when nothing more can be added, but only when nothing more can be taken away.
Wisdom for short attention spans.
Not really (5.00 / 1) (#23)
by Woundweavr on Fri Apr 12, 2002 at 09:21:48 PM EST

1 & 2 are just implementations of existing theory.

3 - I assume you mean Modeling, especially UML stuff? That's not new either, in or outside Comp Sci.

4 - It's a markup language...

5 - True, but it's just a scale issue.

6 - Fuzzy logic was introduced in the 1960s.

You kind of prove the author's point. There are new ideas, but they're not widely implemented as far as the algorithm level goes.

[ Parent ]

DP (none / 0) (#43)
by delmoi on Sat Apr 13, 2002 at 12:27:28 AM EST

I assume you mean Modeling, especially UML stuff? That's not new either, in or outside Comp Sci.

No, design patterns are design patterns. They don't really have that much to do with UML
--
"'argumentation' is not a word, idiot." -- thelizman
[ Parent ]
Yep, but they're old (none / 0) (#123)
by linca on Sat Apr 13, 2002 at 02:30:18 PM EST

Alexander's book came out in the seventies. Of course, it was talking about architecture rather than CS, but design patterns got into CS in the early 80's. Newer stuff might be frameworks.

[ Parent ]
Design Patterns (none / 0) (#95)
by porkchop_d_clown on Sat Apr 13, 2002 at 11:29:00 AM EST

Patterns are ways to talk and think about design, rather than a design language. For example, if I say that I'm using the model/view/controller pattern, another programmer will immediately understand the general organization of my code, and the design philosophy, even if he hasn't seen anything of the actual design or implementation.
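
A bare-bones illustration of what I mean by model/view/controller (the class names are invented for the example):

    #include <iostream>

    struct Model {                        // the data; knows nothing about display
        int value;
        Model() : value(0) {}
    };

    struct View {                         // presentation only
        void render(const Model& m) const { std::cout << "value = " << m.value << "\n"; }
    };

    struct Controller {                   // routes user actions to model and view
        Model& model;
        View& view;
        Controller(Model& m, View& v) : model(m), view(v) {}
        void increment() { ++model.value; view.render(model); }
    };

    int main() {
        Model m;
        View v;
        Controller c(m, v);
        c.increment();                    // prints "value = 1"
    }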


--
Uhhh.... Where did I drop that clue?
I know I had one just a minute ago!


[ Parent ]
Oh, man. (5.00 / 2) (#27)
by porkchop_d_clown on Fri Apr 12, 2002 at 09:33:27 PM EST

Are you kidding?

  1. Hate to break it to you, but I was downloading music to my C64 in 84. Granted, that was only 18 years ago, but Usenet and the Arpanet were already old by then and I'm sure we can find somebody who was doing the same thing in 79.
  2. Ditto for graphics. In either case, do these things have anything to do with Computer Science?
  3. Design Patterns. Design Patterns is the fancy habit of assigning names to things people were doing anyway. I will admit, though, that by giving them standard names it helps programmers communicate with each other. Doesn't seem to help them write better code, though.
  4. XML - oooooo, another standard for formatting data and text. So was, let's see, EDI, HTML, SGML, TeX, properties files and SQL. Heck, so were ASCII and EBCDIC. The only thing XML does for me is save me a week's work writing the config file handler for a given project.
  5. Very large scale parallel processing: Yeah, distributing a problem across the net means many more machines are available now to do the work on. But that's a difference in DEGREE not a difference in kind. In addition, it's a hardware solution not a Comp Sci solution. The basic principles of parallel processing are 30 years old.
  6. Fuzzy logic. Fuzzy logic is new? Didn't Marvin Minsky do seminal work on Perceptrons (i.e., "neural nets") way back in the 60s?

Sorry, but none of these things are new at all.


--
Uhhh.... Where did I drop that clue?
I know I had one just a minute ago!


[ Parent ]
Nothing new in CS? (4.00 / 1) (#39)
by X3nocide on Fri Apr 12, 2002 at 11:45:16 PM EST

It's because of attitudes like the one you've been flaunting here. Surely someone with your background knows better than a freshman CS student; they didn't just invent bubble sort to prove that quicksort was faster--one came before the other! Algorithms take time to develop and mature. We can't and shouldn't just find some solution when we can find an optimal one, right? I thought this was one of the teachings and philosophies of your "real professionals."

So how is discussing the differences between MP3 and Ogg Vorbis any different than, say, using a hash table versus a tree to hold names to sort? If you're looking for new uses for computers, perhaps you should look away from academia.

pwnguin.net
[ Parent ]

Not really (none / 0) (#40)
by bugmaster on Fri Apr 12, 2002 at 11:50:18 PM EST

Once again, in a way, every technology we have today is based on the excellent groundwork laid down by the Ancient Greeks, Egyptians, assorted Arab countries, and Oog the wheel-inventor. So what? Humans advance in gradual steps, not by leaps and bounds. An iron sword is just a variation on a bronze sword, after all. I could go on, but I think you see my point.

Anyway, if you actually concentrate, you might see that technology today actually is better in some ways than yesterday. Streaming music and video, for example, did not exist until very recently. No, downloading a sequence file from the Usenet does not count. Computer graphics has made major breakthroughs, as another poster has mentioned (for the record, I did not know BSP trees were so new). Yes, XML is a text format; but that's like saying, "yes, the car is just another carriage". What makes XML different is that it is a structured text format; with a DTD, XML becomes a structured text format which describes its own structure; which I personally think is pretty cool. And parallel processing involves all kinds of bandwidth and scheduling problems that did not exist in the Good Old Days, since all you had were the two UNIVACs connected together.

Basically what I am saying is, just because not everything we do today is shiny and new, doesn't mean that nothing is.
>|<*:=
[ Parent ]

I'm supposed to type something here? (none / 0) (#208)
by matthead on Sun Apr 14, 2002 at 06:39:16 PM EST

Technology advancement usually goes incrementally, but sometimes there are leaps and bounds. I'm not good on my history, so I'll be waiting for someone to point out problems here, but I think all the below advances count as "leaps" as opposed to "steps."

  • The internal combustion engine
  • Development of nuclear fission (for use as weapons and as a power source (and its subsequent abandonment due to political campaigning by prissy little paranoid environmentalists who give decent people a bad name))
  • Recombining DNA as a method for genetic engineering

It's not that technology hasn't advanced, it's that there have been no big leaps in the last twenty years. A bronze sword is to an iron sword as XML is to SGML. A horse-drawn carriage is to a Model T Ford as the Bourne shell is to PARC's initial GUI. Well, that one won't hold up as well.


-- - Matt S.
[ Parent ]
Devil's Advocate (none / 0) (#212)
by bugmaster on Sun Apr 14, 2002 at 07:41:41 PM EST

Playing Devil's Advocate for a moment:
The internal combustion engine
Me pappy had one o'dem steam engine thangs. Yeah, it was big, and black, and I guess it had fire inside too. How's this any different ?
Development of nuclear fission
What, you mean blowing people up ? We have been doing this for a long time now.
Recombining DNA as a method for genetic engineering
Farmers and breeders have been doing that for thousands of years. Just look at chihuahua... Don't tell me that's natural.

See? It's easy to dismiss any new technology as same old, same old, because a) no new technology is born in a vacuum, and b) being ignorant of how the technology actually works restricts you to only looking at the applications.
>|<*:=
[ Parent ]

Steps, etc. (none / 0) (#237)
by matthead on Mon Apr 15, 2002 at 12:30:53 PM EST

How does a steam engine work?


--

I'm not talking about blowing people up, I said both as weapons and a power source. I was trying to get across that I think of nuclear fission, period, as a technical leap.

Genetic engineering has indeed been around since at least Gregor Mendel's time. "Recombining DNA" was the key phrase there.

I know that nothing is born in a vacuum. I'm suggesting that sometimes a technical advance counts as a leap, instead of an incremental step.


--

I haven't just been "trolled," have I?


-- - Matt S.
[ Parent ]
I think I can do better than that (none / 0) (#51)
by debolaz on Sat Apr 13, 2002 at 02:15:34 AM EST

Hate to break it to you, but I was downloading music to my C64 in 84. Granted, that was only 18 years ago, but Usenet and the Arpanet were already old by then and I'm sure we can find somebody who was doing the same thing in 79.

I think it's possible to dig up a much earlier example of audio and video over network. If I recall correctly, wasn't this a part of the demonstration of NLS in 1968, 18 years before you started downloading C64 music? That would probably also invalidate delmoi's comment about the web.

By the way, I couldn't agree more with the article, and I'm 18 years old; I'm not saying this out of nostalgia for a time period I wasn't even alive in. I think most programmers today are lazy, plain and simple. There's no need to stop and think about "How could I have done this better?" - you don't need to do it better... but does that mean that you shouldn't do it better?

As a final note, what some people just fail to realise today is that just because you make efficient code doesn't mean you have to give up all the other stuff. I asked a programmer once why his program required a 600MHz CPU to run, while other programs doing the same thing could easily run on a 100MHz machine. (Of course) he made a remark about how we should recode everything in assembler... I think he contributes patches to the Linux kernel today (no, I'm not joking).

-
--
If they can buy one, why can't we?
[ Parent ]
LoL. Ah, the Linux Kernel... (none / 0) (#96)
by porkchop_d_clown on Sat Apr 13, 2002 at 11:34:43 AM EST

Yeah, I think people who just broadly pronounce that open source code is better than closed have never actually looked at the Linux source.

For example: 2.4.7 shipped with a broken string.c file. It promptly broke the /proc/ file handler I had written for my drivers, because strsep() was returning zero-length strings for all tokens.

Hope they've fixed it in the newer revisions!


--
Uhhh.... Where did I drop that clue?
I know I had one just a minute ago!


[ Parent ]
Kernel 2.4.7 is broken (none / 0) (#97)
by xtremex on Sat Apr 13, 2002 at 11:52:06 AM EST

It's been noted for months, ever since 2.4.7 came out. 2.4.10 is the new "minimum" kernel build, and the current is 2.4.19. All those problems are gone.

[ Parent ]
Unfortunately, (none / 0) (#176)
by porkchop_d_clown on Sat Apr 13, 2002 at 09:43:45 PM EST

Unfortunately, I'm stuck with the version my hardware vendor is writing their pre-release drivers for. *they* like to ship things precompiled and with NDA's coming out the, errr, manifest.


--
Uhhh.... Where did I drop that clue?
I know I had one just a minute ago!


[ Parent ]
What??? (none / 0) (#190)
by xtremex on Sun Apr 14, 2002 at 07:22:49 AM EST

Go to www.kernel.org and see the "glaring" posts about that kernel tree! The VM is completely fscked. That was the "embarrassing" kernel release. Linus himself apologized for it. Why would a vendor release drivers for a faulty kernel??? Even if it IS binary only, if they're customizing the kernel, don't they ever glance at kernel.org?

[ Parent ]
Neural nets are not fuzzy logic (4.00 / 1) (#100)
by zenit on Sat Apr 13, 2002 at 12:04:03 PM EST

Fuzzy logic. Fuzzy logic is new? Didn't Marvin Minsky do seminal work on Perceptrons (i.e., "neural nets") way back in the 60s?

Yes, neural nets are that old, but fuzzy logic is something else. The mathematical/logical groundwork was laid by Charles S. Peirce, who first invented "vague" logic instead of the crisp "true/false" logic ("invented" by Aristotle).

Lotfi Zadeh was the one who really developed fuzzy logic, for example by using it to describe reality. As we all know, reality cannot be described by just "true" and "false", but rather by "almost", "more", "less", "maybe", etc. I believe the ideas were formalized in the 70s, and I know for a fact that an advanced fuzzy system has controlled Japanese trains since 1987 (in the city of Sendai).
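
For example, a fuzzy predicate is just a function that returns a degree of membership between 0 and 1 rather than a hard true/false (the "warm" ramp below is made up for illustration):

    #include <iostream>

    // Degree to which a temperature is "warm": 0 below 15 C, 1 above 25 C, graded in between.
    double warm(double celsius) {
        if (celsius <= 15.0) return 0.0;
        if (celsius >= 25.0) return 1.0;
        return (celsius - 15.0) / 10.0;
    }

    int main() {
        std::cout << warm(10.0) << " " << warm(20.0) << " " << warm(30.0) << "\n";  // 0 0.5 1
    }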

Neural nets and fuzzy logic are very different and at the same time very similar. They partly deal with the same things, but in very different ways.



[ Parent ]
More XML Brain Wash (none / 0) (#259)
by jo42 on Sun Apr 21, 2002 at 12:00:48 PM EST

> XML and all related technologies

Oh, great. We've gone from "let's have applications talk to each other over sockets on well-defined ports using well-defined protocols" to "let's use text files with tags 'talking' to each other over HTTP".

XML and all related technologies are pure kludgery because the wankers that came up with them either 1) don't have a clue how to run a secure network, or, 2) never learned how to write network applications.

[ Parent ]

python (3.33 / 3) (#24)
by zephc on Fri Apr 12, 2002 at 09:22:26 PM EST

i choose you, python!

Grin (none / 0) (#94)
by porkchop_d_clown on Sat Apr 13, 2002 at 11:22:39 AM EST

Sorry, I was really wishing for about ten more poll options. I can think of a pile of languages that didn't get listed.

With enough entries we could get really esoteric. Snobol, anyone?


--
Uhhh.... Where did I drop that clue?
I know I had one just a minute ago!


[ Parent ]
RTL/2 (none / 0) (#189)
by pwhysall on Sun Apr 14, 2002 at 06:32:13 AM EST

Bleh. I was scarred by that.
--
Peter
K5 Editors
I'm going to wager that the story keeps getting dumped because it is a steaming pile of badly formatted fool-meme.
CheeseBurgerBrown
[ Parent ]
+1FP -- this is so right (3.50 / 2) (#25)
by VoxLobster on Fri Apr 12, 2002 at 09:29:07 PM EST

This is the most accurate article about programming I've ever read. Hardware is so far ahead of software that software development could probably continue almost indefinitely on the hardware we currently have. If people programmed like they used to (and yeah, I realize how hard it is and how much time it takes) then life would be great.

VoxLobster -- Program!! Program like you mean it!!

VoxLobster
I was raised by a cup of coffee! -- Homsar

Interesting (4.25 / 8) (#31)
by CaptainSuperBoy on Fri Apr 12, 2002 at 10:14:34 PM EST

Yes, it's a little old-farty and it does stray towards the common misconception that everything was better back then (whenever). However you do illustrate what most people don't understand - most concepts in software haven't changed fundamentally in 20 years.

As for your challenge... due to its vagueness you have an excuse for every answer someone gives you. If I were to suggest MP3, you might tell me that the concept of compression has been around for over 20 years. I'll still bite.

Database technology has changed fundamentally since 20 years ago. For example, transactional databases have revolutionized multiple user access to data. I'm pretty sure the concept of a transaction log was developed since 1982, since relational databases only date to the mid seventies. Another example would be new database paradigms such as object-relational and OO databases. They will revolutionize what people do with their data. No, neither of them are widely used right now. But your example from 20 years ago, OO programming, was barely used back then outside of academia.

Another example would be object access in heterogeneous environments. That (DCOM, CORBA, etc) couldn't be older than the 80s and it is coming of age right now with SOAP, XML, EJB, and .Net.

Yes, back then you could write a similar system to one today.. but you couldn't separate business logic from display code, you couldn't create a multiple tier application, and you were tied to a very specific hardware platform.

--
jimmysquid.com - I take pictures.

People keep suggesting MP3... (none / 0) (#92)
by porkchop_d_clown on Sat Apr 13, 2002 at 11:14:31 AM EST

I still can't understand what music has to do with writing software. (Unless you're writing an MP3 player).

I guess I need to work on the clarity of my writing...

Database technologies... Well, I was certainly learning about SQL in school, but you might be right about COMMIT and ROLLBACK not being standard features of the language then. I would point out, though, that the current SQL standard is ten years old.

Distributed computing is, indeed, a child of the net. All my work with it (CORBA and EJB, in particular) has been negative, though. All it did was slow things down (compared with banging on the database directly and doing the computing locally). Working directly isn't as portable but, in the real world, people rarely care about portability.


--
Uhhh.... Where did I drop that clue?
I know I had one just a minute ago!


[ Parent ]
New technologies and mature technologies (none / 0) (#120)
by CaptainSuperBoy on Sat Apr 13, 2002 at 02:07:11 PM EST

MP3 wasn't developed by musicians, it was developed by experts in perceptual audio coding. In addition to computer scientists, they needed experts on the human ear and brain, physicists, and other specialists. It had little to do with music - it was just another patent being added to a large research lab's intellectual property collection.

It's very unlikely for any technology to be created today and be mature tomorrow - it just doesn't work that way. So when you ask for technologies that have been developed recently, and dismiss any examples because they're not mature, you're ignoring how things actually work. There is a process that takes an idea from initial development to maturity. If you judged every technology based on its initial, 'proof of concept' implementation, you might get the idea that all new technologies are crap. A little patience would be in order, though.

EJB and CORBA have been clunky in the past, but I am convinced that remote object access is beyond the 'fad' stage. It should continue to mature until it's completely transparent to the user that they are accessing remote objects. People frequently dismiss Java RMI as lightweight (and yes, it's clunky), but it has been used to create many distributed applications in the real world.

--
jimmysquid.com - I take pictures.
[ Parent ]

MP3 codecs (none / 0) (#163)
by statusbar on Sat Apr 13, 2002 at 08:44:00 PM EST

    MP3 wasn't developed by musicians, it was developed by experts in perceptual audio coding. In addition to computer scientists, they needed experts on the human ear and brain, physicists, and other specialists. It had little to do with music - it was just another patent being added to a large research lab's intellectual property collection.

Exactly - the people designing MP3 codecs were not breaking any new ground with ANY software engineering concepts or new computer science. They were only breaking new ground with respect to perceptual audio coding.

--Jeff

[ Parent ]

What's MP3 got to do with it? (none / 0) (#139)
by mbrubeck on Sat Apr 13, 2002 at 04:27:19 PM EST

I still can't understand what music has to do with writing software.
The point is not that MP3 is a new tool for writing software. The point is that MPEG codecs are software, and the people creating them are using knowledge and techniques that didn't exist twenty years ago.

Similarly, I would give elliptic curve cryptography and fast wavelet transforms as examples of genuine advances in our understanding of software and computation over the past two decades. I could name other advances in real-time operating systems, new functional languages like Haskell and OCaml, and many others. However, it seems you are actually asking, "What has changed in my narrow subfield of computing?" You are quick to dismiss anything that hasn't affected you personally, or that is outside your expertise.
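As a concrete taste of the wavelet example above: one level of the Haar transform simply replaces each pair of samples with its average and its difference. The sketch below is a toy illustration only, not any particular codec's code; real transforms iterate this across levels, normalise, and quantise:

    /* One level of the (unnormalised) Haar wavelet transform: each pair of
       input samples becomes an average (low band) and a difference (high
       band).  Toy sketch only; n must be even. */
    void haar_step(const double *in, double *out, int n)
    {
        int i, half = n / 2;
        for (i = 0; i < half; i++) {
            out[i]        = (in[2*i] + in[2*i + 1]) / 2.0;  /* averages    */
            out[half + i] = (in[2*i] - in[2*i + 1]) / 2.0;  /* differences */
        }
    }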

I would point out, though, that the current SQL standard is ten years old.
So there hasn't been a sudden revolution. But compare the state of the art in database systems design to that of twenty years ago, and try saying that there hasn't been a giant leap forward. I am reminded of this article and my response, which seems relevant to the current discussion.

[ Parent ]
Old database technology (none / 0) (#136)
by KWillets on Sat Apr 13, 2002 at 03:54:19 PM EST

I don't recall the dates, but just about all of the algorithms for undo/redo logging go back to the 70's. Even the stuff people acclaim as "new", like the tux2 filesystem's no-log transaction management, goes back to that era.

OODB's, and the recent development, XML DB's, remind me a lot of the old IMS system. A big hierarchy of stuff that's navigated by pointer chasing and hand-coded queries.

I don't wish to condemn ideas merely for being old, but certainly we need to be aware of history.

[ Parent ]

Some have obviously not learned much - others have (3.75 / 12) (#32)
by Groby on Fri Apr 12, 2002 at 10:24:47 PM EST

Oh yes, hopfrog's articles can really make you think. Mainly, you start thinking about programmers and education. Same for your article....

So, let's look at your article, and the misconceptions in there...

Memory Bloat

Yes, some programs do use more memory. One issue is that we actually have decent runtime support. Nobody needs to write their own multiply routines any more.
Another reason is the fact that most programs now contain icons and other nice-looking stuff. That actually costs memory, you know.

If you ever compared the size of programs that actually do things, as opposed to 'Hello World', you'd find they don't differ that much. Yes, there's bad coding out there - but a good programmer can keep his files pretty small. If he needs to.

File Size Bloat

If you're using something that is capable of typesetting for your letters to Grandma, then they use more space. As easy as that. Go back to Notepad, and they're still as small as they've ever been. The more information you want to include in your file, the more space you need. Yes, again, some overdo it. (Not to name any names here :)

But the old games are just better!

Well, then don't buy the new ones. I actually happen to like the eye candy I get in Quake III, or MS Flightsim, or whatnot. And you need a beefier machine for that. Incidentally, I happen to think that 3D graphics have improved quite a bit since 1980.

Java is just a rehash of old ideas

Yup. But it's a pretty good one. It's the first OO language that comes with a nice set of libraries. And, as opposed to other languages, it does introspection....

There's nothing new out there

That's just so ridiculous, it's unbelievable. New ideas out there include:

  • Aspect oriented programming
  • Refactoring
  • Patterns
  • Agile Programming
  • XML
  • Agent based programming
There's more, I just don't have the time to list it all for you.

That's not new - it was all around back then

That's debatable. Nobody ever published it, that's for sure. And if you want something really new-fangled: it's that shiny toy you're playing with. They call it the Internet; you might've heard of it.

You might actually want to follow what's going on in the software field before posting articles like that. Right now, it's nothing more than one of those 'Back then, everything was better' whinings.

I know, you had to go 5 miles uphill to the bathroom. Against the wind. Both ways.
Sorry things are easier now.

Bull. (4.50 / 2) (#89)
by porkchop_d_clown on Sat Apr 13, 2002 at 11:03:19 AM EST

I've already addressed most of your comments, but the one that really floors me is this:

If you ever compared the size of programs that actually do things, as opposed to 'Hello World', you'd find they don't differ that much.

Oh. That explains why GCC is the exact same size it was when I was running it on a DG Aviion.

Gimme a break. At least try to pay attention!

As for XML, agents and patterns, I'll let my other responses stand.


--
Uhhh.... Where did I drop that clue?
I know I had one just a minute ago!


[ Parent ]
The internet--newfangled? (none / 0) (#134)
by Macrobat on Sat Apr 13, 2002 at 03:48:08 PM EST

And if you want something really new-fangled: it's that shiny toy you're playing with. They call it the Internet; you might've heard of it.

The internet has been around since the late '60s. You know, if you absolutely must be smug, at least be right, too.

"Hardly used" will not fetch a better price for your brain.
[ Parent ]

Wireless Internet is old too, (none / 0) (#165)
by ka9dgx on Sat Apr 13, 2002 at 08:56:40 PM EST

Before someone else says Wireless networking is new...
The reason we have TCP/IP instead of the previous generation (NCP?) is because the wireless networks in Europe couldn't guarantee delivery, so NCP was split into IP and TCP. All of this back in 1973.

--Mike--

[ Parent ]

WWW is a bit newer than that (none / 0) (#200)
by Groby on Sun Apr 14, 2002 at 02:56:59 PM EST

See above. I said they _call_ it the Internet.

[ Parent ]
How Sonny Bono forced us to buy new games (none / 0) (#235)
by pin0cchio on Mon Apr 15, 2002 at 09:52:24 AM EST

Well, [if you like the old games better than the new games,] then don't buy the new ones.

The old games are out of print and no longer available lawfully because of a long copyright term. (The poster boy for counterproductive copyright terms is the late Sonny Bono; the poster mouse is Mickey Mouse.) The game publishers pull their old titles out of print because they compete with the publishers' new titles; they enforce the copyrights as a side effect of their obligation under U.S. Federal law to enforce their trademarks.


lj65
[ Parent ]
XML Brain Wash (none / 0) (#258)
by jo42 on Sun Apr 21, 2002 at 11:55:49 AM EST

XML a new idea? A text-based file format that uses tags? Good God man, you are a young one, aren't you?

[ Parent ]
No offence... (4.41 / 12) (#34)
by Nagash on Fri Apr 12, 2002 at 10:43:30 PM EST

Part of the reason that no one seems to learn anything is that everyone and their dog seems to think that C/C++ is the be-all and end-all of languages. The other part is that people don't (and shouldn't) write programs that are geared toward specific architectures.

Face it: C sucks. Ok, so I'm being a tad harsh. C certainly has its uses, but it is not the panacea of languages. Some will argue to their death bed over this point, incredibly enough. I'm not going to get into a language war, since they are pointless. All I'm going to say is that you use the right tool for the job, and many times C is not that tool. Yet some programmers will use it anyway, probably because they learned nothing else (and refuse to believe that another language might actually be better).

And by the way, computer science is not programming. Programming is merely part of computer science. You didn't assert that CS == programming, but the rant-ish style of the story smacks of it.

As for your games that ran in x amount of memory, where x is small, do they run on multiple architectures? As it turns out, software ends up being used in the darnedest of places. Writing portable stuff is usually a good idea, so that tends to cause some bloat. Now, there's no denying there's some real crap out there, but it's to be expected in a world driven by the need to push more product out rather than spend the time on it (I want it now!).

For those in academic CS, there is lots of interesting stuff happening. I'm in the world of biocomputing and there are just neat things happening. The idea that we might better understand what is going on in biological systems (or use them as storage devices) is quite fun to work with.

At any rate, much has been learned in the last 20 years. You just have to go looking for it.

Woz



Portablity (5.00 / 1) (#87)
by porkchop_d_clown on Sat Apr 13, 2002 at 10:48:46 AM EST

You're right, coding like that is often very unportable. Heck, that adventure game did data compression/expansion by hacking into quirks of the oddball processor.

But how much windows code is portable?

POSIX compliant stuff is fairly portable in source form (hah! I wonder how many Linux programmers know what "POSIX" is, or that they are inadvertently complying with the standard?)

As for C - I won't defend it except to say that each language has design goals and tasks for which it is intended. C was designed for writing low-level OS-type code and I think it does that exceptionally well. I'm a big fan of Java, but I wouldn't write a SCSI driver with it.


--
Uhhh.... Where did I drop that clue?
I know I had one just a minute ago!


[ Parent ]
Wtf? (Posix) (5.00 / 1) (#152)
by Parity on Sat Apr 13, 2002 at 06:20:57 PM EST

POSIX compliant stuff is fairly portable in source form (hah! I wonder how many Linux programmers know what "POSIX" is, or that they are inadvertently complying with the standard?)

I have no idea where this random ad-hominem attack on a whole class of people came from, but the answer is 'most of them, and not inadvertently'. J. Random Newbie writing yet-another-text-editor or yet-another-2d-tiled-adventure-game may not really know or care what POSIX is, but certainly everyone at the FSF and on the Linux kernel development team knows, and is concerned about, POSIX compliance. Just leafing through the manual pages and the source code for the kernel and the libraries that have POSIX standards shows this very quickly. So, while J. Random Newbie may be writing POSIX compliant code because those are the interfaces available (but probably not, because there's usually an easier non-POSIX interface for each task too, except maybe RT and threading), the systems programmers, at kernel, device driver, compiler, and library levels are all being very -deliberately- POSIX compliant. I'm not involved in any of these things, but nonetheless I keep the POSIX programmer's guide in easy reach both at work and when hacking free software, and I don't think I'm the only one.
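For what it's worth, "deliberately POSIX compliant" mostly means sticking to interfaces the standard actually pins down. A small hedged example in C: installing a signal handler with sigaction(), whose semantics POSIX specifies precisely, rather than old-style signal(), whose behaviour varies between systems:

    /* Sketch: installing a signal handler the way POSIX specifies it.
       sigaction() has well-defined semantics on POSIX systems, unlike
       old-style signal(). */
    #include <signal.h>

    static volatile sig_atomic_t got_sigint = 0;

    static void on_sigint(int signo)
    {
        (void)signo;
        got_sigint = 1;           /* just set a flag; do the real work elsewhere */
    }

    int install_handler(void)
    {
        struct sigaction sa;
        sa.sa_handler = on_sigint;
        sigemptyset(&sa.sa_mask); /* block nothing extra while handling */
        sa.sa_flags = SA_RESTART; /* restart interrupted system calls   */
        return sigaction(SIGINT, &sa, NULL);
    }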

Parity None



[ Parent ]
Maybe I'm looking in the wrong parts of the kernel (none / 0) (#199)
by porkchop_d_clown on Sun Apr 14, 2002 at 02:38:05 PM EST

I've never noticed references to POSIX in any of the kernel source code I've read or worked with. I'll admit, though, that I've been working on device drivers and not core OS functionality.


--
Uhhh.... Where did I drop that clue?
I know I had one just a minute ago!


[ Parent ]
Posix in the kernel... (none / 0) (#222)
by Parity on Sun Apr 14, 2002 at 09:55:13 PM EST

$ rgrep -i posix /usr/src/linux | wc -l
744

It's in there... it may not be the foremost concern, but it's definitely addressed.
Parity None

[ Parent ]
Using the right tool for the job (5.00 / 1) (#113)
by pb on Sat Apr 13, 2002 at 01:02:34 PM EST

C, practically by definition, is great for systems programming and compiler writing. That's because it started as a small language to do rapid development (as opposed to coding straight assembler) without too much extra overhead (like 40% instead of 400%). And UNIX was rewritten in C, and C compilers were written in C. Because of this, there are also great tools for developing programming languages in C.

One of the consequences of this is that C became an easy language to implement on a new machine, and UNIX became an easy operating system to port to a new machine. Before this time, almost every really new machine basically had a custom operating system written (or entirely ported) just for that platform. Since C has been ported everywhere, C programs are very portable as well.

So there you have it; if you're doing any of these things, C is great. If you aren't, it might not be so great. It doesn't try to be the end-all and be-all language for everything, like some other languages do.

And you can probably find add-ons and libraries for or written in C to do just about anything, if you want to. But of course you'll ultimately have to write some glue code in C, and C is not a great language for just writing glue code; for that, you might want something like Perl.

And yes, computer science is most definitely not programming. Programmers came up with C, for example. :)
---
"See what the drooling, ravening, flesh-eating hordes^W^W^W^WKuro5hin.org readers have to say."
-- pwhysall
[ Parent ]
I like this/you're absolutely right (4.33 / 6) (#36)
by riceowlguy on Fri Apr 12, 2002 at 11:20:48 PM EST

First, a little editorial comment: this is the sort of thing I like reading about on K5, so +1FP.

I think you're making some very prescient observations, which are good to see even if they're not something I think every programmer past the high school level has made for themselves. Nobody really seems to do anything about them, unfortunately.

It is quite easy for people to dismiss expanded code size as a nonproblem, due to the ever-lessening cost of storage. However, when people say cost, they're only thinking of their wallets. I can have 256MB of SDRAM and a 512KB L2 cache for, like, what, 50 bucks? Most people don't really consider the other costs of such large storage spaces. Like the widening gap between clock speed and memory access latencies. Like the predictions that in a few years, more than 90% of your average CPU's transistor budget will be going to memories of one kind or another. Like the way all those transistors add up to more power consumption, and that is rapidly approaching a breaking point (either cooling technology (for desktops) or overall power consumption (for server farms) will become a BIG problem if current trends continue).

But increasing code size is just a symptom of the larger, overall problem which you describe, which is that there seem to be very few advances in software development techniques. Fads come and go as the years go by (goto-less programming, structured programming, object-oriented programming, extreme programming) and they don't seem to solve fundamental problems. As my favorite web celebrity, Phillip Greenspun, says:

"Hardware engineers have done such a brilliant job over the last 40 years that nobody notices that, in the world of commercial software, the clocks all stopped in 1957. Society can build a processor for $50 capable of executing 200 million instructions per second. Marvelous. With computers this powerful, the amazing thing is that anyone still has to go into work at all. Perhaps part of the explanation for this apparent contradiction is that, during its short life, the $50 chip will consume $10,000 of system administration time."

Having gone through the Rice CS grinder, I have some nice iconoclastic views about the way CS should be taught. I took two classes with Dr. Robert Cartwright, who loved to gripe about how he thinks that most CS teaching in America is regressing (when he was an undergrad at Stanford, they used Forth or some such language, I can't remember what it was exactly, but anyway it was type-safe and had garbage collection. Now they're teaching in C.). He resigned from the committee that oversees the Advanced Placement tests in CS because they were switching from Pascal to C++, not because Pascal is all that great, but because they had so many other fine alternatives. Having your first CS class taught in Scheme requires a major mental attitude adjustment on the part of all the people who came into the program with previous experience, usually in C or Pascal. However, it is an adjustment I am very glad I made. I would love to go back to my high school and try and teach an intro CS class in Scheme as opposed to BASIC or whatever crap they're still using.

Ugh. I guess I'm just ranting now.

"That meant spending the night in the living room with Frank watching over me like some kind of Lovecraftian soul-stealing nightmare creature-Azag-Frank the Blind God of Feet, laughing and drooling from his black throne of madness." -TRASG0

Graphics (4.70 / 17) (#37)
by fluffy grue on Fri Apr 12, 2002 at 11:30:57 PM EST

Given that I'm working on my PhD in computer graphics, I feel qualified in stating that the world of graphics is definitely advancing quite a bit. Not just rasterization techniques (such as programmable pixel rasterization - which isn't strictly a hardware issue, it's just that the current crop of video cards are starting to implement that part of things in hardware, which is a wholly different matter), but fundamental rendering techniques, such as photon mapping, new applications of subdivision surfaces, visibility determination, volumetric rendering, and so on. Yes, all of that stuff has its "basis" in stuff done 20 years ago, but there are plenty of new things coming out all the time.

Sure, photon mapping is based on the original concepts of forward raytracing, but all of the "new" things from 20 years ago can easily trace their heritage back thousands of years (after all, the idea of a programmable computer is only based on giving instructions to be carried out, and honeybees have been doing that for millions of years). Everything is evolutionary, and truly revolutionary things are extremely rare.

For a perfect example of an algorithm which is definitely younger than 20 years: PVS determination based on a BSP. I forget when Seth Teller's pivotal dissertation was written, but I'm pretty sure it was sometime in the early 90s. (I'm too lazy to look it up, though. He's at the MIT LCS, graphics group.) Pretty much every current 3D game (most notably everything by id software since Quake) is based in part on this paper.

Modern volumetric rendering is definitely newer than 20 years; yes, the marching cubes algorithm is "only" a 3D version of Wyvill's contours, but it's an evolution. Not a revolution, but neither were Wyvill's contours (since it was just a procedural implementation of what mapmakers had been doing by hand for ages).

Also, even though ray tracing has been around for ages, a lot of the newer techniques done with ray tracing are evolutions. Rays traced directly to NURBS, volume integration, proper caustics and physically-modelled lighting (not just lame approximations devised by Lambert and Phong in the late 70s), realtime techniques... read the SIGGRAPH proceedings someday.

Radiosity as we know it today is pretty much brand-new. The various optimizations on it (hemicube approach to form-factor determination, BSP-based visibility acceleration, adding raytracing and/or photonmapping to put in reflections, etc.) are things that weren't even conceived of 20 years ago. Hell, 20 years ago, pretty much everything was on (and geared towards) vector devices; Gouraud's pivotal 1979 dissertation on interpolated shading was seen as practically worthless then, as raster devices were only just beginning to be useful. Hidden-surface determination was all done based on polygon intersections; the idea of using a z-buffer was unheard of. Additionally, it's the readily-available "overkill" hardware which even makes a z-buffer possible.

Basically, things have progressed since the "good old days." Don't get so caught up looking behind you that you run into a wall.
--
"...but who knows, perhaps [stories about] technology and hardware will come to be [unpopular]." -- rusty the p

Again, it's Moore's Law (5.00 / 1) (#64)
by gordonjcp on Sat Apr 13, 2002 at 05:48:08 AM EST

20 years ago, pretty much everything was on (and geared towards) vector devices; Gouraud's pivotal 1979 dissertation on interpolated shading was seen as practically worthless then, as raster devices were only just beginning to be useful
Bear in mind that it was only with the falling price of memory that decent bitmapped graphics became possible. One of the reasons the first (128k) Apple Mac was so expensive was the huge amount of memory it had. When you consider that as recently as ten years ago, a fairly meaty graphics card might have 256k (a place I worked for bought a card for doing CAD with a whole 1 meg framebuffer - it cost as much as a new small car), it puts things in perspective.

Give a man a fish, and he'll eat for a day. Teach a man to fish, and he'll bore you rigid with fishing stories for the rest of your life.


[ Parent ]
So what? (5.00 / 3) (#125)
by fluffy grue on Sat Apr 13, 2002 at 02:52:52 PM EST

Just because the hardware has gotten to a point where the software can do more doesn't mean the software hasn't made any advances. Yes, the hardware made the advances possible, but that doesn't mean the advances don't exist!
--
"...but who knows, perhaps [stories about] technology and hardware will come to be [unpopular]." -- rusty the p
[ Parent ]
Practical computing (5.00 / 1) (#179)
by pslam on Sat Apr 13, 2002 at 10:13:31 PM EST

Now this is where real software engineering departed (thankfully) from computer science in the last two decades or so. Computers suddenly became fast enough to perform massive calculations in seconds, not days. Software engineers noticed this and invented new uses for the new hardware. As your original comment says - memory suddenly got cheap and processing power suddenly got higher - so z-buffers suddenly became practical. Not only that, but z-buffers have an O(n) property vs number of polygons, whereas sorting (and similar) has O(n log n). It just so happened that the constants associated with both crossed over round about 1990-1995 (for desktop PCs) - and suddenly z-buffering became way more efficient than polygon sorting.
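For readers who haven't seen it, the z-buffer test itself is tiny, which is exactly why it wins once memory is cheap: per fragment it is just a compare-and-store. A hedged sketch in C, assuming flat width*height arrays and "smaller z means nearer":

    /* Per-pixel z-buffer test: plot the fragment only if it is nearer than
       whatever has already been drawn at that pixel.  Sketch only. */
    void plot_pixel(float *zbuf, unsigned *framebuf, int width,
                    int x, int y, float z, unsigned color)
    {
        int idx = y * width + x;
        if (z < zbuf[idx]) {        /* nearer than the current occupant? */
            zbuf[idx]     = z;      /* remember the new depth            */
            framebuf[idx] = color;  /* and overwrite the pixel           */
        }
    }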

Go back a bit further to the time when people started using texture mapping in games. It took somebody a mental leap to realise that desktop computers had suddenly become fast enough to do it without being laughable. Perhaps they even saw it in a demo first - maybe a single deforming texture mapped rectangle (remember Unreal by Future Crew?) There must be tons of examples of "discoveries" like this.

I'd say that's the essence of computer science: figuring out what algorithm to use for a problem. It's plain wrong to say, as the article does, that computer science is only about discovering the algorithms in the first place. If anything, nothing revolutionary happened until 1980, because nobody made any practical use of computers until then.

[ Parent ]

Not quite (none / 0) (#202)
by fluffy grue on Sun Apr 14, 2002 at 04:35:47 PM EST

For starters, z-buffers did not get their start in "practical computing." z-buffers are also not O(n) in respect to polygons, but in respect to pixels. Also, there are many O(n) sort algorithms which work just fine for graphics (such as radix), and that still doesn't deal with polygon splits (which zbuffers do, of course). Also, only an idiot would rely solely on a zbuffer for visibility determination - you typically want a precull stage, such as BSP-based PVS (from a paper by Seth Teller) as well as frustum culling (from a paper by Cohen and Sutherland).
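To make the "precull stage" concrete, the cheapest form of frustum culling tests a bounding sphere against the six frustum planes and rejects the object if it lies entirely outside any one of them. A hedged sketch, assuming unit-length, inward-pointing plane normals (not taken from the papers mentioned above):

    /* Bounding-sphere vs. view-frustum cull.  Each plane is (a,b,c,d) with
       a unit-length, inward-pointing normal; an object is culled if its
       sphere lies entirely outside any one plane.  Sketch only. */
    typedef struct { float a, b, c, d; } plane_t;

    int sphere_visible(const plane_t frustum[6],
                       float cx, float cy, float cz, float radius)
    {
        int i;
        for (i = 0; i < 6; i++) {
            float dist = frustum[i].a * cx + frustum[i].b * cy
                       + frustum[i].c * cz + frustum[i].d;
            if (dist < -radius)
                return 0;          /* completely outside this plane: cull */
        }
        return 1;                  /* potentially visible */
    }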

Yes, of course I remember demos. Demos were probably the first things to bring the academic papers into the "real world," but that doesn't mean that they invented the stuff, they just finally implemented it. Huge difference.
--
"...but who knows, perhaps [stories about] technology and hardware will come to be [unpopular]." -- rusty the p
[
Parent ]

Depends if you're talking average case (none / 0) (#229)
by pslam on Mon Apr 15, 2002 at 07:41:08 AM EST

From my understanding, z-buffers and all other pixel-to-depth mapping algorithms are:
  • Best, worst and average case - Drawing: O(pixels * polygons).
Polygon sorting on its own and all other polygon-to-depth mapping algorithms without culling and intersection are:
  • Best, worst and average case - Drawing: O(pixels * polygons), Sorting: O(polygons log polygons)
Algorithms which perform culling and intersection with possible z-buffer usage are:
  • Worst case - Drawing: O(pixels * polygons), Manipulating: O(polygons ^ 2)
  • Best and average case - Drawing: O(pixels), Manipulating: O(polygons log polygons)

I'm not aware of any sorting algorithm which is O(n) in the general case.

The trouble is with the coefficients. This is why I was saying z-buffering took off around 1990-1995. Polygon intersection and culling were too expensive to compute even for low numbers of polygons, so plain polygon sorting was usually employed, with BSPs for static precalculated scenes. Suddenly the read-modify-write for every pixel in the depth buffer became cheap enough for it to be better than polygon sorting on its own. Nowadays we're seeing the coefficients for intersection and culling becoming low enough to be reconsidered. E.g., tile-based rendering and other tricks to reduce memory bandwidth through overdraw that are making their way into 3d hardware.

I didn't claim that demos invented any of that stuff. They were pretty much the first people to actually do anything useful with it. By useful I mean realtime like most people talk about realtime - frames per second, not seconds per frame. It's a shame today's demos hardly ever show any of the same lateral thinking. Most of them are just a pretty dull exercise in straight-line optimization.

The interesting thing today is the battle between big z-buffer optimized graphics cards (e.g. GeForce) and tile-based rendering (e.g. Kyro), although GeForce seems to have won that for now. I'm still not entirely convinced that tile rendering (or equivalent) will win any time soon because the advances that make the computation faster are pretty much matched by advances in memory cost and speed.

[ Parent ]

Then you're not very aware (4.00 / 1) (#238)
by fluffy grue on Mon Apr 15, 2002 at 02:05:49 PM EST

For starters, zbuffers (which are strictly O(pixels), or O(pixels + polygons) if you want to be pedantic though the pixels dominate in general) tend to be O(n lg n) simply because n polygons tend to represent O(n lg n) pixels - it's very rare that you have a uniform distribution of polygons all at the same distance from the camera! In fact, the further you get away from the camera, the more polygons you get, and the smaller they are...

Second, radix sort is strictly O(n) in the general case. You can only use it on fixed-precision numbers, though.

I agree about z-buffers vs. tiles. Technically tiles are just a specific form of z-buffers, though. What I'd like to see happen in hardware are hierarchical (pyramid topology) z-buffers (from Greene's paper on visibility determination); they can, at least in theory, remove a lot of overdraw. They still don't do anything to help with the bus bottleneck, though; in my engine, the bottleneck is just in sending data to the video card, and I don't even have hardware T&L (about 25% of CPU time is spent in client space with all of my fancy geometry manipulations, 25% of CPU time is spent in software T&L, and the other 50% of CPU time is spent waiting for AGP 4x to send data!).

Another thing we need to see in more video cards is proper geometry caching (which, of course, requires hardware T&L). Compiled vertex arrays are a step in the right direction, but the CVAs need to actually be stored on the card to be useful. That alone would speed my engine up a whole lot on a T&L card, since then I could cache the displaylists where they're needed. (As it is, it already builds and caches displaylists as it goes to effectively make CVAs without requiring OpenGL extensions, but since I don't have a hardware T&L card it only serves to free up a little tiny bit of CPU time in client-space.)
--
"...but who knows, perhaps [stories about] technology and hardware will come to be [unpopular]." -- rusty the p
[ Parent ]

One last thing :) (none / 0) (#246)
by pslam on Tue Apr 16, 2002 at 05:19:14 PM EST

I guess if I'm going to talk about average case, I should have considered an average case distribution of polygons, which you say is about O(log n) overlap. I suppose you're right.

I've been thinking far too academically recently - I totally forgot about radix sort. Radix sort isn't unconditionally O(n) - it's O(n*k) for k-digit keys. But it's O(n) in any non-academic case because the numbers and storage are both finite. It's again a case of preferring a worse-order algorithm because the coefficients are in your favour. In this case, O(c*n) where c is vastly smaller than n (c is about 4, e.g. one pass per byte).

Radix sort isn't limited to fixed precision either. You can sort IEEE floating point numbers by choosing the radix sort keys carefully. Seeing as the floats are normalised, every possible 32-bit word maps to a unique floating point number. You sort by sign, exponent then mantissa. Every positive number is greater than every negative number, which is easy to sort. Each exponent maps a range of numbers which doesn't overlap any other exponent's (seeing as the mantissa is normalised from 1.0-1.999), so those order easily. And finally the mantissa sorts in an obvious way. There are a few pages detailing how to do this on a google search.
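The "choose the keys carefully" trick can be written down in a few lines: flip all the bits of negative floats and only the sign bit of non-negative ones, and the resulting 32-bit unsigned keys sort in the same order as the floats. This is a commonly used sketch rather than anyone's canonical code; NaNs and the two zeros need separate care:

    /* Map an IEEE-754 single to a 32-bit unsigned key whose unsigned
       ordering matches the float ordering: negative floats get every bit
       flipped, non-negative floats get only the sign bit flipped.
       Sketch only; NaNs and -0.0 versus +0.0 need extra handling. */
    #include <stdint.h>
    #include <string.h>

    uint32_t float_to_radix_key(float f)
    {
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);      /* reinterpret the bit pattern */
        if (bits & 0x80000000u)
            return ~bits;                    /* negative: reverse the order */
        return bits | 0x80000000u;           /* non-negative: above negatives */
    }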

I'd go further than asking for display lists and other structures to get moved onto the card. You can't do it on the main processor because you have to ship that data across a bus. What I'd do is move most of the engine onto the 3d hardware. Why stop with texture programs? A general purpose processor on the graphics card is the logical extension of shifting more and more into 3d hardware. It's becoming somewhat of a joke calling x86 processors "general purpose" these days anyway, looking at the purposes they're usually put to, and how bad they are at it.

[ Parent ]

By the way (none / 0) (#203)
by fluffy grue on Sun Apr 14, 2002 at 04:39:43 PM EST

Z-buffers predate even Gouraud shading. They come from a 1974 paper by Catmull.
--
"...but who knows, perhaps [stories about] technology and hardware will come to be [unpopular]." -- rusty the p
[ Parent ]
Another note (none / 0) (#204)
by fluffy grue on Sun Apr 14, 2002 at 04:43:14 PM EST

Academic graphics stuff has always been about pushing the hardware to its absolute limits; consider that in research circles, "real-time algorithm" means "at least 0.1fps," since if it renders at 0.1fps now, in a few years it'll run at 30 (both due to hardware speedups and various optimizations which people find).
--
"...but who knows, perhaps [stories about] technology and hardware will come to be [unpopular]." -- rusty the p
[ Parent ]
Well... (3.50 / 6) (#38)
by bugmaster on Fri Apr 12, 2002 at 11:39:29 PM EST

What you say is mostly true; "programming" today usually involves nothing more than snapping Lego-like blocks together. However, I believe that we have made some advances, such as:
  • The C++ Standard Template Library, and its bastard child, java.util.*. True, the ideas presented in these libraries have been kicking around for a while, but nothing like the STL existed until recently. It is true that generic algorithms are basically lambdas; however, C++ supports strongly typed generic algorithms, and it makes a lot of difference. Ditto for operator overloading.
  • P2P systems, such as (now defunct) Napster, Gnutella, Kazaa, etc. These things did not exist before broadband became widespread because they could not. P2P is a very interesting concept that is still in development today.
  • FPGAs. True, the technology is quite old, but it has only gained massive acceptance recently, AFAIK.
  • MP3, DivX and other compression technologies. Once again, the ideas used by these technologies are quite old, but that's not saying much. After all, Newtonian mechanics is quite old too, but a lot more is involved in launching a satellite today than mere equations.
  • Open-source movement and Linux/FreeBSD. Enough said.
Now, these are just some practical things that I know of; I am not a theoretical scientist. As far as I know, there is massive research being done right now on distributed robotics (an anathema to any 70s AI buff), computer vision, etc.; however, I have yet to see any massive application of those, and so I can't include them in my list.
>|<*:=
Hmmm... (none / 0) (#84)
by porkchop_d_clown on Sat Apr 13, 2002 at 10:31:52 AM EST

  1. Standard Template Library, etc... others have discussed that - but as you yourself admit, none of the ideas themselves are new. They may be well done, but they aren't new.
  2. P2P... New? What, nobody had fidonet, part time bulletin boards, ftp or usenet binaries groups before napster? I wonder how I used to distribute those freeware programs I wrote for the HP41 and C64 then...
  3. MP3, etc... Explain to me how they have anything to do with writing software.
  4. Open Source. Jeez. Now I understand what my father meant when he said every generation thinks they invented sex. Freeware has been around since the first programmer said to the second "here's how I did it". GNU and the GPL have been around since I started school. That adventure program I wrote? I distributed it as source code - and got thank yous and credits from all over the globe.

--
Uhhh.... Where did I drop that clue?
I know I had one just a minute ago!


[ Parent ]
Explanation (5.00 / 2) (#180)
by bugmaster on Sat Apr 13, 2002 at 10:22:54 PM EST

Standard Template Library, etc... others have discussed that - but as you yourself admit, none of the ideas themselves are new. They may be well done, but they aren't new.
There is a big difference between saying, "Hey, it would be cool to have these collection things... But these parentheses are enough for now" and actually implementing a standardized, benchmarked, tested, and, most importantly, coherent collections/algorithms framework. It's like saying, "well, all modern programs use variables and flow control... bah, nothing new under the sun".
P2P... New? What, nobody had fidonet, part time bulletin boards, ftp or usenet binaries groups before napster? I wonder how I used to distribute those freeware programs I wrote for the HP41 and C64 then...
FTP is radically different from P2P; so are usenet binaries and any other server/client model. The interesting part of P2P is that every client is a server. To put this into terms you can understand, consider the old BBS model. You have a central BBS, to which all users connect. Some BBSs can talk to each other through specialized protocols, but all they do is mostly synchronize some specific content. Now imagine if there were no distinct BBSs; instead, each user is a node in a major, distributed BBS. Content is spread all over this network; nodes hand off requests to one another when someone asks, "does anyone have the latest Metallica music video?". Content is cached locally on the nodes where it is needed most, for extra efficiency. Of course, with all the advantages come some serious disadvantages, such as bandwidth saturation. But hey, what do you expect, it's an emerging technology.

Anyway, do not confuse the usage of a technology (downloading files) with its actual structure. I can share files by mailing floppies to people; however, that is not the same as FTP or email. For more info, check out the freenet homepage; they describe their take on P2P very concisely.

MP3, etc... Explain to me how they have anything to do with writing software.
I am not sure I understand the question. Who do you think actually compresses and decompresses an MP3 or a DivX file? Magical forest smurfs?
Open Source. Jeez. Now I understand what my father meant when he said every generation thinks they invented sex.
Once again, giving your coworker a copy of your punchcards is not the same as having a distributed code repository with multiple contributors, unit testing, periodic builds, etc. It is also not the same as having an actual open-source license (GPL), and building a business model on top of it.

Basically, I think you might want to research these topics (and all the other stuff people posted, especially XML) before making sweeping generalization-type statements about them. It's easy to say "all technology is the same, bah humbug" when you don't really know much about the details.
>|<*:=
[ Parent ]

Thank you. (none / 0) (#201)
by nstenz on Sun Apr 14, 2002 at 04:29:26 PM EST

I've just found a new .sig.

[ Parent ]
FreeBSD is an advance? (none / 0) (#85)
by Tau on Sat Apr 13, 2002 at 10:39:51 AM EST

FreeBSD descends from BSD, which originated from THE original UNIX. Closed source is the 'new' thing, not open source. Software wasn't sold before the 80's (though I wouldn't know, because I wasn't around before the 80's). Why else do you think Stallman founded the FSF when he did?

---
WHEN THE REVOLUTION COMES WE WILL MAKE SAUSAGES OUT OF YOUR FUCKING ENTRAILS - TRASG0
[ Parent ]
Software wasn't sold before the 80s... (5.00 / 1) (#88)
by porkchop_d_clown on Sat Apr 13, 2002 at 10:59:30 AM EST

*THUD*

(That was the sound of my jaw bouncing off the floor, by the way....)


--
Uhhh.... Where did I drop that clue?
I know I had one just a minute ago!


[ Parent ]
OK OK fine go back another ten years (none / 0) (#148)
by Tau on Sat Apr 13, 2002 at 05:38:28 PM EST

but free software still predates commercial software ;)

---
WHEN THE REVOLUTION COMES WE WILL MAKE SAUSAGES OUT OF YOUR FUCKING ENTRAILS - TRASG0
[ Parent ]
Ooooo yeah. (none / 0) (#175)
by porkchop_d_clown on Sat Apr 13, 2002 at 09:36:46 PM EST

I can remember when it started dawning on people that we actually needed something like the GPL to protect ourselves. Some of my early freeware code didn't have any license on it at all. One day, a guy called me and asked permission to use the button layout I had designed for a calendar program. Asking permission? To reuse code? (BTW - the astonishing button layout was to have two left arrows to the left of the date and two right arrows on the right. I'll leave it to you to guess what the buttons did...)

Course, I can also remember Bill Gates whining about how people were ripping him off by giving each other copies of MS BASIC on paper tape.


--
Uhhh.... Where did I drop that clue?
I know I had one just a minute ago!


[ Parent ]
Recent advances == Mainstream? (4.20 / 5) (#41)
by whatwasthatagain on Fri Apr 12, 2002 at 11:55:01 PM EST

Isn't it possible that most of the recent advances in Computer Science (in other words, not just programming languages) have not yet become mainstream?

Consider, for example, hardware-level multithreading. The first publications on the subject are only about 6-7 years old. There are a handful of commercial processors which implement these concepts.

Sure, these machines aren't PCs. This doesn't mean that 1) they aren't used at all (why else would Sun come up with a processor?) or 2) this doesn't count as a CS advance.

Joe user doesn't need a multithreaded machine today and, not unnaturally, does not use one. Maybe development on PCs is stagnant, maybe it's programming languages that are the problem. But I wouldn't go so far as to say that CS itself has been a waste.


--

With profound apologies to whomsoever this sig originally belonged.

That's not right (none / 0) (#74)
by erp6502 on Sat Apr 13, 2002 at 09:22:57 AM EST

The Denelcor HEP was shipping in 1984, which means it was developed 20 years ago.

[ Parent ]
Multithreading isn't the same as multiprocessing (none / 0) (#186)
by whatwasthatagain on Sun Apr 14, 2002 at 04:18:40 AM EST

The HEP is a multiprocessor system, which means that every processor has its own cache. In a multithreaded system, there is a single processor running many threads, all sharing a single cache, but with replicated functional units. The basic idea here is to hide memory latency.

Multithreading on mass-market machines is fairly new - the U. of Washington website indicates that the first commercial multithreading processors were introduced only in 2001.


--

With profound apologies to whomsoever this sig originally belonged.
[ Parent ]

OpenGL., XML, photoshop, video editing. (3.33 / 3) (#42)
by delmoi on Sat Apr 13, 2002 at 12:13:59 AM EST

Didn't have hardware accelerated 3D back in those days. You also didn't have XML/web services/etc. You didn't have graphics editing software like Photoshop, or video editing software for your PC. You didn't have web browsers.

I guess that mostly covers the software side of things. As far as routines and the like go, I dunno. It's like math. Most of the math we study is hundreds of years old, and although new math is always cropping up, most of it is too esoteric for anyone in the mainstream to learn about. There have been a lot of advances in parallelization and stuff in CS lately that weren't around back in the 4MHz days.
--
"'argumentation' is not a word, idiot." -- thelizman
missing poll option (2.33 / 3) (#44)
by nodsmasher on Sat Apr 13, 2002 at 12:27:49 AM EST

im empty on the inside and don't know how to program in any language
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Most people don't realise just how funny cannibalism can actually be.
-Tatarigami
Back in my day... (3.71 / 14) (#45)
by Torgos Pizza on Sat Apr 13, 2002 at 12:34:26 AM EST

...we didn't have any of your fancy computers or calculators. We just had a pile of rocks. And we liked it.

We programmed by lining up the rocks in small groups. You would compile the code by replacing the small rocks with big rocks. Then you used your program by throwing the rocks at people. Throwaway code, we called it.

You and your smarmy TRS-80. Why, I had to upgrade once from a stick to a branch. Then the bark would peel off and bugs would get into the wood. Hell, our bugs would eat your fingers off. What do your bugs do, huh? Make your screen go blue? Boo hoo hoo. Well, our bugs would lay eggs on you and you'd itch for a month.

You young 'uns think you're so special...

I intend to live forever, or die trying.

Pile of rocks? (3.50 / 2) (#48)
by Wondertoad on Sat Apr 13, 2002 at 01:28:22 AM EST

You had it lucky! What we wouldn't have given for a pile of rocks. In my day, the only thing we had was a pile of clipped toenails. You couldn't program with them, all you could do was write in the dirt with them. But we liked it!

[ Parent ]
Toenail clippings to write in the dirt? (4.60 / 5) (#58)
by gnovos on Sat Apr 13, 2002 at 03:34:09 AM EST

You lucky sod! In my day we didn't even have molecules to form high order matter like dirt, let alone toenails. We had quantum foam, and we liked it. We would have to program by hoping some patch of quantum foam would randomly configure itself into a representation of a bit. It took billions of years just to create a single "1" or "0", and most of the time that bit would vanish back into the void before you could program your second bit. Bah, you kids these days, you don't know how good you have it!

A Haiku: "fuck you fuck you fuck/you fuck you fuck you fuck you/fuck you fuck you snow" - JChen
[ Parent ]
You had ones AND zeros? (4.00 / 3) (#82)
by porkchop_d_clown on Sat Apr 13, 2002 at 10:07:23 AM EST

Dang. All we had were zeros.


--
Uhhh.... Where did I drop that clue?
I know I had one just a minute ago!


[ Parent ]
_. (none / 0) (#145)
by Souhait on Sat Apr 13, 2002 at 05:27:58 PM EST

.

[ Parent ]
Interpreted langs (3.83 / 6) (#47)
by carbon on Sat Apr 13, 2002 at 01:09:25 AM EST

Well, nowadays, interpreted languages have become much more advanced. Have you looked at the close-to-release Perl 6? It's absolutely fantabulous. Same for PHP and Ruby and Python and several other scripted languages that simply didn't exist 20 years ago. Interpreted code is where the current language evolution is at.

Interpreted code may not be as fast as compiled code, but if computer hardware gets faster as rapidly as it has been (and it certainly will) then there will come a time when the speed difference between compiled and interpreted code is negligible in 95% of non-embedded cases. Think about it: most interpreted languages are easier to use than compiled languages (not always, but usually). Just look at how most interpreted languages handle data; for instance, PHP and Perl have scalars, which perform the functions of C++'s ints, floats, doubles, longs, chars, char*s, strings, bools, long longs, pointers, references, etc, etc. And both PHP and Perl have arrays/hashes, which perform the functions of C++/STL arrays, vectors, lists, maps, multimaps, stacks, queues, etc, etc, most of which are variously optimized variations on the same theme.

Also note how much better the garbage collection is in just about any interpreted language, not to mention less of a defend-the-bastille method of OOP. Plus, interpreted code is inherently cross platform (even when it comes to platform specific things, due to handy stuff like File::Spec and co.) and does not need to be compiled, making the barrier of entry for a newbie to alter an existing app much lower.

My favorite feature of interpreted languages, however, is the ability they have to execute dynamically created code. This is really very useful in many situations. It allows you to, for instance, create an entirely new syntax based upon an old language, implemented in a pre-processor. Or to allow savvy users to have extremely powerful control over their apps (TkDesk comes to mind).

In fact, I think that (if you haven't been roused into an uproar of compiled language defense by now, here's the big one) it would be great if someone built an entire OS (kernel, modules, and applications) this way, making everything but the extremely low-level code interpreted. Not now, but in 5 years when computers will have become many times as quick, this sort of thing would give users a much better way of customizing their machines: editing the code itself. If the barrier of entry for developing an application is much lower (as it would be, as interpreted languages are usually easier to use because of the reasons I've described above) then there will be many more OSS developers, which means many more OSS patches and improvements to many more OSS projects.


Wasn't Dr. Claus the bad guy on Inspector Gadget? - dirvish
What's this? (5.00 / 2) (#49)
by debolaz on Sat Apr 13, 2002 at 02:07:02 AM EST

Lisp isn't good enough for you anymore?

[ Parent ]
Unfortunately, this is nothing new (4.33 / 3) (#50)
by Jonathan Walther on Sat Apr 13, 2002 at 02:07:41 AM EST

Or fortunately, if you are inclined that way :-)

As Paul Graham points out, Perl, Python, and related interpreted languages are just getting asymptotically closer to what LISP was 20 years ago. He posits that eventually they will BE LISP. If you want to truly advance, you must learn the state of the art in LISP.

There truly is nothing new under the sun.

(Luke '22:36 '19:13) => ("Sell your coat and buy a gun." . "Occupy until I come.")


[ Parent ]
Actually, it is (1.25 / 4) (#53)
by carbon on Sat Apr 13, 2002 at 02:55:34 AM EST

Uh, LISP is not that great. It doesn't have the advantage of easy coding (come on, it's an entire language built around self-recursion, and it doesn't even have its own real 'for' construct), it doesn't have OOP. There are other things that I'm probably forgetting, but saying something like "LISP does everything!" is just plain incorrect.

Wasn't Dr. Claus the bad guy on Inspector Gadget? - dirvish
[ Parent ]
Trollbait (4.60 / 5) (#55)
by Jonathan Walther on Sat Apr 13, 2002 at 02:59:54 AM EST

I can only assume you posted that comment as trollbait. However, just in case you did not, I will clarify. Possibly you consider LISP to be the same as the language McCarthy designed 30 years ago. When I said LISP, I was referring to both Common Lisp, and the family of modern LISP dialects like Scheme. None of your assertions hold true for Common Lisp and the other modern LISP dialects.

(Luke '22:36 '19:13) => ("Sell your coat and buy a gun." . "Occupy until I come.")


[ Parent ]
Ooops, just looked, you're quite right (4.50 / 6) (#57)
by carbon on Sat Apr 13, 2002 at 03:10:46 AM EST

I haven't extensively used any of the more recent LISPs, but I checked Google, and you seem to be right on all the counts I mentioned. But still, syntax is an issue; I've found the modern LISP I've looked at (for example, certain parts of The Gimp) to be a little hyperparenthesized. Sorry to waste everyone's time with an uninformed opinion, I should've done a little more research first.


Wasn't Dr. Claus the bad guy on Inspector Gadget? - dirvish
[ Parent ]
I just started learning Scheme (none / 0) (#60)
by leviramsey on Sat Apr 13, 2002 at 04:27:33 AM EST

And I love it. It's the fastest of the interpreted/bytecoded languages and can do just about anything you want it to. The syntax takes a little getting used to, but that's more because every other language (that's in wide use, at least) is ALGOL inspired.

[ Parent ]

Well.. (none / 0) (#155)
by carbon on Sat Apr 13, 2002 at 07:42:02 PM EST

I don't know about Scheme being the fastest, though it certainly does seem fast. That award probably goes to Lua, because of its pathologically minimalist design.


Wasn't Dr. Claus the bad guy on Inspector Gadget? - dirvish
[ Parent ]
Scheme doesn't replace Perl easily (none / 0) (#234)
by pin0cchio on Mon Apr 15, 2002 at 09:47:11 AM EST

[Scheme is] the fastest of the interpreted/bytecoded languages and can do just about anything you want it to.

Not if you want to use regular expressions or other string-handling. When you work with strings, you want it to be fast to cut down on the number of application servers you have to colo, and regex algorithms implemented using Scheme's primitive string handling are bound to be slow. Scheme doesn't look like it'll replace the Three P's (Perl, Python, PHP) soon.

That is, unless you meant "anything you want it to" in the Turing machine sense, but then you bring in the Turing tarpits such as PDP-8, Brainfu(k, and Unlambda.


lj65
[ Parent ]
Interpreted code (4.50 / 2) (#52)
by jacob on Sat Apr 13, 2002 at 02:43:34 AM EST

None of the features you mentioned require an interpreter, actually, and in fact most languages that started out with classic interpreters have abandoned them in favor of bytecode compilation (essentially, compiling to a language that's higher-level than machine code but lower-level than the input language). That's certainly true of Perl and PHP; and, though you didn't mention it, it's true of Common Lisp and Scheme as well.

The features you mentioned are cool, though, and they definitely make programming more fun and more productive, but your implication that they exist because the system that reduces the programs to values is an interpreter rather than a compiler doesn't hold.

(One more thing: interpreted languages are definitely not inherently cross-platform. There are a million system-level gotchas that language implementors have to discover and work around all the time.)



--
"it's not rocket science" right right insofar as rocket science is boring

--Iced_Up

[ Parent ]
Yes and no (4.00 / 1) (#54)
by carbon on Sat Apr 13, 2002 at 02:59:34 AM EST

The features you mentioned are cool, though, and they definitely make programming more fun and more productive, but your implication that they exist because the system that reduces the programs to values is an interpreter rather than a compiler doesn't hold.

That's true in only some cases. In ease of programming, universal scalars, better dynamic array handling, etc etc, you're right, theoretically. A compiled language could do all that, and faster too. But where is this language? I haven't seen it, though I'd really like to be proved wrong. And the various extensions added onto C++ do not count, since the considerable delay between the invention of the language and the addition of the STL, the popularity of Boost, etc, has resulted in far too much fracturing and homemade versions of these things.

Also, AIUI, a compiled language cannot compile internal dynamic code, because if it did, that would be interpreting, and it would become (at least partially) an interpreted language.


Wasn't Dr. Claus the bad guy on Inspector Gadget? - dirvish
[ Parent ]
Addendum (4.00 / 1) (#56)
by carbon on Sat Apr 13, 2002 at 03:06:21 AM EST

BTW, as an addition to my other comment, I do understand that technically, all languages are interpreted. But by 'interpreted language' I mean a language that isn't compiled to native code ahead of time and is distributed in human readable format (or something rather close, as can be the case with Perl :-D ).

Oh, and about interpreted languages not being inherently cross platform: you're right, to a degree. A well programmed interpreted language (one with cross platform services) running a decent script (one which uses those services) will be almost completely cross platform. Since your app and language must be halfway decently designed to get to any size without becoming unusably unstable anyways, it's a safe bet that a given interpreted program will be cross platform to a fairly high degree.


Wasn't Dr. Claus the bad guy on Inspector Gadget? - dirvish
[ Parent ]
Common Lisp (4.00 / 1) (#69)
by pdw on Sat Apr 13, 2002 at 09:01:58 AM EST

Actually, every decent Common Lisp and Scheme implementation can compile to native code. There exist a lot of interpreters for these languages (especially for Scheme), but that's because they're easy to write :)

[ Parent ]
Been there, done that (none / 0) (#244)
by Salamander on Tue Apr 16, 2002 at 08:01:11 AM EST

it would be great if someone built an entire OS (kernel, modules, and applications) this way, making everything but the extremely low-level code interpreted.

I just happen to be writing this from one of the buildings where this was rather famously done - the former home of Symbolics. It was so long ago that OSF has since been born, lived, and died here.



[ Parent ]
Uh-huh (1.33 / 9) (#59)
by DranoK 420 on Sat Apr 13, 2002 at 04:25:31 AM EST

Java? Please. Syntactically, yes Java does OO very well. But OO itself has been around about as long as I've been alive...

Yeah, right. Please think before you speak. I'm sorry, creating a struct with pointers to functions does not an object make.

Anyhow, I'm sure more efficient algorithms have been created over the years, but people don't notice because, really, there's not much variety in variables, conditionals and loops. But nobody said the pathetic transistor-based circuitry was the be-all and end-all of computing. Wait until we advance in how we build computers, and CS will evolve with it.

DranoK


Poetry is simply a convenient excuse for incoherence.


Oh please (4.60 / 5) (#68)
by pdw on Sat Apr 13, 2002 at 08:51:26 AM EST

The first version of Smalltalk, the canonical OO language, appeared in the early 1970s, and the current standard dates from 1980. OO concepts were first introduced in Simula 67, in 1967.

[ Parent ]
Okaay... All together now.... (3.50 / 4) (#79)
by porkchop_d_clown on Sat Apr 13, 2002 at 09:54:34 AM EST

"Read A Book!"

Smalltalk and Simula have been around since before I entered kindergarten, kid.

And I really like how sure you are that things have advanced, even though you can't think of a single way they actually have.


--
Uhhh.... Where did I drop that clue?
I know I had one just a minute ago!


[ Parent ]
Java!!!! (2.33 / 3) (#83)
by bayankaran on Sat Apr 13, 2002 at 10:22:14 AM EST

I don't care much for Java anymore. It was promising, but Sun lost its way somewhere along the line. It is now a collection of many unwieldy components.

[ Parent ]
Oh-hoh (4.00 / 1) (#106)
by chbm on Sat Apr 13, 2002 at 12:29:52 PM EST

Yeah, right. Please think before you speak. I'm sorry, creating a struct with pointers to functions does not an object make.

Yeah, right. Please think before you speak.
Spewing out garbage in C++ or Java does not an OO program make. OO is a method, not a language, which means you can make beautiful OO programs in ASM and crap procedural programs in C++.

-- if you don't agree reply don't moderate --
[ Parent ]
Uh, no (3.00 / 2) (#117)
by budlite on Sat Apr 13, 2002 at 01:33:09 PM EST

you can make beautiful OO programs in ASM

No you can't. OO is a set of language attributes rather than something that can be applied to any language.

[ Parent ]

Yeah you can (none / 0) (#128)
by Sir Rastus Bear on Sat Apr 13, 2002 at 03:16:29 PM EST

OO has little to do with syntax, lots to do with how you view the problem.

If your point is that there isn't a whole lot of support in assembler for OO, that's absolutely true. There's no OO support in C either, but you could still craft an OO-style app in it.

'Course, you'd have to be either a very disciplined coder or mad as a hatter.


"It's the dog's fault, but she irrationally yells at me that I shouldn't use the wood chipper when I'm drunk."
[ Parent ]

hehe yup... (none / 0) (#143)
by pb on Sat Apr 13, 2002 at 05:05:11 PM EST

Take a look at libwww, put out by the w3c; it's a great example of a C program that doesn't know that it isn't written in Java...
---
"See what the drooling, ravening, flesh-eating hordes^W^W^W^WKuro5hin.org readers have to say."
-- pwhysall
[ Parent ]
Grin. (2.00 / 1) (#174)
by porkchop_d_clown on Sat Apr 13, 2002 at 09:28:29 PM EST

I got the chance to "maintain" some C code. Here are the relevant stats:

  • One function
  • Two thousand lines
  • Thirty-seven gotos

After reading through it I realized that the original author had been a very bright assembly language programmer. His code was very clearly laid out as if he had written it with a macro assembler.

After I rewrote it, it was 500 lines - of which 33% were block comments. It was also 75 times faster.


--
Uhhh.... Where did I drop that clue?
I know I had one just a minute ago!


[ Parent ]
Learn Some History Before Flaming (4.00 / 1) (#130)
by czolgosz on Sat Apr 13, 2002 at 03:22:29 PM EST

Java? Please. Syntactically, yes Java does OO very well. But OO itself has been around about as long as I've been alive...
Yeah, right. Please think before you speak. I'm sorry, creating a struct with pointers to functions does not an object make.


Object-oriented variants of Lisp have been around since the early '70s, and Smalltalk since at least the late '70s. So we're not talking about just pointers. It's clear that you're extrapolating from a very narrow base of personal experience. Simula and Simscript had a full set of OO features in the 1960s.
Why should I let the toad work squat on my life? --Larkin
[ Parent ]
spOOj-OO-matic (none / 0) (#257)
by jo42 on Sun Apr 21, 2002 at 11:49:58 AM EST

> creating a struct with pointers to functions does not an object make

So Java, C++, et al. hide the implementation details from you. Now you write one line of OO code that proceeds to execute several million instructions to, say, change a string to upper case in the OO way. That's progress.

[ Parent ]

High-level vs. low-level (3.00 / 5) (#63)
by TheophileEscargot on Sat Apr 13, 2002 at 05:26:39 AM EST

Interesting article, though I don't agree. Lots of people have posted counterexamples now.

I think the thing is that pretty much all the new developments are to do with high-level programming. Low-level programming does indeed seem to have been pretty much stuck for the last twenty years.

I've seen this argument a few times now. Basically it seems to go: low-level programming is the only "real" programming, low-level programming is stuck, therefore programming is stuck.
----
Support the nascent Mad Open Science movement... when we talk about "hundreds of eyeballs," we really mean it. Lagged2Death

Pay me in weed (1.71 / 7) (#66)
by premier on Sat Apr 13, 2002 at 06:52:19 AM EST

Your vote (1) was recorded.
This story currently has a total score of 95.
You're the straw that broke the camel's back!
Your vote put this story over the threshold, and it should now appear on the front page. Enjoy!


My experience (3.57 / 7) (#67)
by xtremex on Sat Apr 13, 2002 at 07:52:06 AM EST

My first computer was the Commodore PET, in '79. I was 9. I got it as a gift from my uncle. The SHEER joy of being able to type
10 ? "HELLO"
20 GOTO 10
at 9 yrs old! I gave it a command! And it listened!
When I was in college, it was very different from how it is now. Every person in my classes was like me: into "computers" since they were kids, enjoying "researching". Now I am SHOCKED when I see kids who enter college for CS. They get a cracked copy of FrontPage and think they're l33t.
In MY day, it was pretty much platform-neutral. We had every type of UNIX, VMS, Apple IIe's and Commodores! Now they shove Microsoft down their throats and praise the beauty of Java. I'm pissed off about all these 21-year-olds being Java gurus, when I've been programming since 1980 (I learned Z80 assembly at 11!) and I STILL can't get OOP. Java just PISSES me off! I've tried for 4 years to "get" Java, and I can't get it. :( I guess I'm still stuck in a procedural frame of mind.

Grin. OO is like one of those optical illusions (4.66 / 3) (#77)
by porkchop_d_clown on Sat Apr 13, 2002 at 09:46:44 AM EST

You know, the one where the old lady turns into a young woman? All of a sudden you see it, and after that you'll always be able to see it.


--
Uhhh.... Where did I drop that clue?
I know I had one just a minute ago!


[ Parent ]
don't java for oop, python! (4.00 / 1) (#107)
by Rainy on Sat Apr 13, 2002 at 12:31:09 PM EST

Python is most likely the easiest way to learn OOP. It does not try to shove it down your throat. You can use a class where you want and procedural code elsewhere.
--
Rainy "Collect all zero" Day
[ Parent ]
Explaining OOP (4.00 / 1) (#114)
by NotZen on Sat Apr 13, 2002 at 01:20:28 PM EST

I found it hard to 'get' OOP too.

I've discovered that people used to GUIs 'get' OOP a lot faster, because GUIs are naturally made of objects.

Take the preview button on the comment page, for instance. It's a button object, with a height property and a width property. It has a function (or method, depending on your jargon) called "Click" which is part of it. When a user clicks on the button, that code is run. There's yer basic object right there. It's called (for instance) PreviewButton, is an instance of the Button class, PreviewButton.Text="Preview", PreviewButton.Left=10, PreviewButton.Width=30, etc., etc.

Inheritance is quite simple too: if I'm going to want a whole bunch of purple buttons, it makes sense to define a new button called "PurpleButton" that is descended from the "Button" class. I then override the 'color' property of the PurpleButton class to be whatever purple is. From that point on, my toolbox contains "PurpleButton" as well as "Button", and whenever I splat one down it is purple. It 'inherits' all the properties (and functions) of the Button class, except where I've explicitly overridden them with new ones (like the color).

I can then create a "WidePurpleButton" class inheriting from the "PurpleButton" class, with an overridden Width property. All instances of that class will then be wider. If I'm going to have only a couple of wide ones, of course, it's not worth making a class; I can just change each instance I put down to have the width I want. Also, any changes I make to PurpleButton later on will also affect WidePurpleButton (unless they are in properties/functions that are overridden).
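
For concreteness, here's roughly what that hierarchy looks like as a minimal Java sketch - the Button class here is made up for illustration (a plain class, not java.awt.Button or any real toolkit's):

// A made-up Button hierarchy illustrating the inheritance described above.
class Button {
    int left = 0, width = 30, height = 20;
    String text = "";
    String color = "grey";              // default colour for every button

    void click() {                      // the code that runs on a click
        System.out.println("clicked " + text);
    }
}

class PurpleButton extends Button {
    PurpleButton() { color = "purple"; }   // override only the colour
}

class WidePurpleButton extends PurpleButton {
    WidePurpleButton() { width = 90; }     // inherits purple, overrides width
}

public class ButtonDemo {
    public static void main(String[] args) {
        Button preview = new Button();
        preview.text = "Preview";
        preview.left = 10;
        preview.click();                              // prints "clicked Preview"

        Button b = new WidePurpleButton();            // still usable as a Button
        System.out.println(b.color + ", " + b.width); // prints "purple, 90"
    }
}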

An example of actual use of this was a database application I wrote, where numerous data classes were created. They all inherited from a central 'dataObject' class, which had one function left empty, defining exactly how the basic data was obtained. Each subclass overrode that empty function so that the class could be filled with data, but all the other functions (the ones to sort, convert, make totals, filter, etc.) were generally left alone unless they needed to be overridden. The programmer using the interface needed to know nothing about the data access objects' internal workings. They simply instantiated the one they wanted, called the "Fetch" function, followed by any filtering or whatnot that they wanted, and it all magically happened internally until they requested some output. The fact that the internals might be very different didn't matter, because the interface was the same for each class/object.

Have I muddied the waters sufficiently there?

[ Parent ]
Associating code with data (none / 0) (#122)
by CaptainSuperBoy on Sat Apr 13, 2002 at 02:26:32 PM EST

OO is the practice of associating code with data. Fundamentally, that is the only necessary concept I see in OO.

A counter:

Procedural implementation

To implement a counter procedurally, you'd create a counter structure. This would probably just contain an integer. Then you would create two procedures that incremented and decremented the counter. These would be three separate entities - you could stick them in their own source file, but they wouldn't be tied to each other in any way. If your counter was called c, you might type increment(c); You could call increment() on any structure, but it would give you an error if you tried to increment something that wasn't your counter structure.

OO implementation

Implementing an OO counter is pretty much the same - EXCEPT that the increment and decrement procedures are tied to the structure that stores the data for the counter. The structure is now called an object, and the procedures are now called methods of that object. You would do something like this:

Counter c = new Counter();
c.increment();
c.increment();

The data (the integer storing the counter) is associated with the code (the methods). That is the fundamental concept of OO, and the first concept you should understand. Constructors, inheritance, and implementations will all come later. You may want to play around with the built-in Java objects like Vector, StringBuffer, and String. Don't worry about creating your own objects just yet. You don't have to do that until you're familiar with how OO works. Good luck.
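
To make that concrete, here's a minimal sketch of what such a Counter class might look like (the class is hypothetical, written just to match the snippet above):

// A minimal Counter: the data (the integer) and the code (the methods)
// live together in one object.
public class Counter {
    private int value = 0;          // the data, hidden inside the object

    public void increment() { value++; }
    public void decrement() { value--; }
    public int  getValue()  { return value; }

    public static void main(String[] args) {
        Counter c = new Counter();
        c.increment();
        c.increment();
        System.out.println(c.getValue());   // prints 2
    }
}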

--
jimmysquid.com - I take pictures.
[ Parent ]

Encapsulation, Inheritance, etc. (none / 0) (#149)
by swr on Sat Apr 13, 2002 at 05:46:58 PM EST

You would do something like this:
Counter c = new Counter();
c.increment();
c.increment();
The data (the integer storing the counter) is associated with the code (the methods).

For a programmer who doesn't get OOP, the real question is "why not just type foo++?"

One reason (there are many!) is encapsulation. If the internal value is declared "private" and can only be reached by the accessor/mutator methods (like increment above, or the ubiquitous getFoo/setFoo in Java), then you know for a fact that all changes to that variable, and to whatever other variables make up the object's state, go through the accessor/mutator methods.

This allows you to do some cool things. For example, you could have another piece of code that displays the value of the counter (or whatever) to the screen, and be sure that it will always be kept up to date by having the increment (or setFoo) method call that piece of code. Then, no matter how that data gets changed, the view on the screen is always kept up-to-date.

You could even take it a step further and not have any code referencing the screen-display stuff inside your counter. In C you would do it by having the counter maintain a list of function references, and have the screen display code send the counter code a reference to its update function. Then, when the counter gets incremented, it just calls all the referenced functions and the screen (and whatever other code has provided function references) magically gets updated.

In Java you would typically do it by having the counter extend the Observable class. Observable has some methods (functions/subroutines/whatever) called addObserver, setChanged, and notifyObservers. By having Counter extend Observable, Counter inherits these methods from Observable, which is quite handy. All you have to do is have your screen display code implement the Observer interface, by having a method called "update" (this is a classic callback mechanism). Then your screen code can just call counterInstance.addObserver(this) and magically be notified of all changes to the counter.

As an added bonus, it's all strongly typed. You can't ever call addObserver and specify some object that doesn't actually have a proper update method, because the compiler will yell at you. Compare that to C, where a function reference could point to any function at all, regardless of what arguments it takes, and you wouldn't be able to detect the error until the program crashed. Sure, you can do all of these things in C (and you could even still call it object-oriented programming), but because Java is an object-oriented language these things are better supported.

What I've described above is quite common in applications that use the good old model-view-controller pattern. The counter is a model, the screen update is a view, and the parts of the program that increment the counter are controllers. The controllers and views don't need to be at all aware of each other, and the model doesn't have to be aware of anything; it just provides a well-defined interface for the controllers and views to use.
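
Here's a bare-bones sketch of that setup using java.util's Observable and Observer (the Counter and CounterView classes are invented for the example):

import java.util.Observable;
import java.util.Observer;

// The model: extends Observable so interested parties can register with it.
class Counter extends Observable {
    private int value = 0;

    public void increment() {
        value++;
        setChanged();                          // mark that our state changed...
        notifyObservers(new Integer(value));   // ...and tell every observer
    }
}

// The view: implements Observer, so its update method gets called back.
class CounterView implements Observer {
    public void update(Observable source, Object arg) {
        System.out.println("counter is now " + arg);
    }
}

public class ObserverDemo {
    public static void main(String[] args) {
        Counter counter = new Counter();
        counter.addObserver(new CounterView());  // hook the view to the model
        counter.increment();                     // prints "counter is now 1"
        counter.increment();                     // prints "counter is now 2"
    }
}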

Neat, eh?



[ Parent ]
may I suggest... (none / 0) (#131)
by unusualPerspective on Sat Apr 13, 2002 at 03:28:20 PM EST

an excellent resource which helped me "get it" when moving from structured to OO:

Objekt-orientierte Programmierung mit ANSI-C (Object-Oriented Programming with ANSI-C) by Axel-Tobias Schreiner (note: despite the title, the paper is in English).

[ Parent ]
25 years ago my university professor told me .. (2.75 / 4) (#70)
by cem on Sat Apr 13, 2002 at 09:12:26 AM EST

... there has been no improvement in programming since Turing, and there is no artificial intelligence programming. well, i think he was right. in the end, all of it is based on the ability to communicate, directly or indirectly, at run time with the von Neumann machine in the instruction code of the machine. what is changing is the USER-INTERFACE (!!!) of the languages ... the only new things are the emphases new languages place on certain aspects of programming: structured, object-oriented, etc. there is nothing new under the sun ...


Young Tarzan: I'll be the best ape ever!
AI programming (3.50 / 2) (#76)
by porkchop_d_clown on Sat Apr 13, 2002 at 09:41:53 AM EST

I've wondered if the problem with AI is the nature of the von Neumann architecture itself.


--
Uhhh.... Where did I drop that clue?
I know I had one just a minute ago!


[ Parent ]
Re: AI programming (3.00 / 1) (#101)
by khallow on Sat Apr 13, 2002 at 12:05:50 PM EST

I've wondered if the problem with AI is the nature of the von Neumann architecture itself.

It appears to me to be a problem with the programmers rather than the architecture. In that case, coming up with better UI's should help some. IMHO the real problem is coming up with a true AI model.

Stating the obvious since 1969.
[ Parent ]

Heh. A good AI model? (none / 0) (#172)
by porkchop_d_clown on Sat Apr 13, 2002 at 09:21:45 PM EST

Before we can do that, we have to have a good definition of intelligence.


--
Uhhh.... Where did I drop that clue?
I know I had one just a minute ago!


[ Parent ]
right on! (none / 0) (#192)
by cem on Sun Apr 14, 2002 at 10:06:03 AM EST

what is human intelligence really?

i'd like to write an essay about it: "A machine is a machine is a machine, or why R2D2 is a rusty tin can" ...

sorry for my funny Tarzan-english ...


Young Tarzan: I'll be the best ape ever!
[ Parent ]
It gets tough, though. (none / 0) (#198)
by porkchop_d_clown on Sun Apr 14, 2002 at 02:33:35 PM EST

You follow the "machines are just machines" argument too far and you end up like my undergrad philosophy professor who argued that even though my dog had learned how to spell "ice cream" (long story), my dog wasn't actually intelligent - just a collection of learned responses.

You follow that argument and you end up arguing that everyone but you is just a collection of learned responses.


--
Uhhh.... Where did I drop that clue?
I know I had one just a minute ago!


[ Parent ]
von Neumann (none / 0) (#164)
by statusbar on Sat Apr 13, 2002 at 08:54:42 PM EST

I agree very much.

I feel that a big problem with programming languages today is that they all assume a 'von Neumann' architecture CPU, when in fact most processors today are NOT strictly von Neumann architecture anymore!

Therefore there is an 'impedance mismatch' between the code and the efficiency the CPU would be able to reach. This effect is most notable with the Intel Itanium and various other very long instruction word, software-pipelined DSPs (with 'flying registers').

The capabilities of these systems will always be underutilized because of the old-style programming languages that everyone prefers to use today.

--Jeff

[ Parent ]
ease of use (3.00 / 4) (#71)
by turmeric on Sat Apr 13, 2002 at 09:15:18 AM EST

computer science 'advances' all come basically as frustrated users get fed up and invent something new. funny that. hell the WWW was invented by a physicist right?

computers -> large vocal masses of the old school of slide rules and logarithm tables decried the uselessness, impossibility, 'waste of effort', and 'dumbing down' of this foolish new invention. but people were tired of using the old crap so they invented computers anyway, to make it easier to do math.

cobol/fortran -> large vocal masses of the old school machinecode/assembly people decried the uselessness, impossibility, 'waste of effort', and 'dumbing down' of this foolish new invention. but people were tired of using the old crap so they invented symbolic languages anyways, to make it easier to program computers.

timesharing -> large vocal masses of the old school punchcard/batch-job/computer-room people decried the uselessness, impossibility, 'waste of effort', and 'dumbing down' of this foolish new invention, but people were tired of using the old crap so they invented a way for people to have their 'own' session, to make it easier to get access to computers.

UI revolution -> large vocal masses of the old school unix people decried the uselessness, impossibility, 'waste of effort', and 'dumbing down' of this foolish new invention, but people were tired of using that old crap so they invented new ways of interacting with the computer, namely the windows/mouse thing, to make it easier to use computers.

networking -> large vocal masses of the old school floppy-disk/reel-tape people decried the uselessness, impossibility, 'waste of effort', and 'dumbing down' of this foolish new invention, but people were tired of using that old crap so they invented new ways for computers to talk to each other, making it easier to transfer data between computers.

internet/web -> use the computer networks.

do you see a pattern here? well, i do. advances in so called 'computer science' that have meant the most were in fact just improvements in EASE OF USE. i put that in all caps so that i would get flamed by ten million unix bigots. hee hee

now for my final trick:

free software -> large vocal masses of the old proprietary low end consumer software industry decried the impossibility, 'waste of effort', and 'dumbing down' of this foolish new invention, but people were tired of having to be thieves or pay hundreds of dollars of license fees just to do simple things like print a letter or draw a picture on their computer, so they invented free software to make computer software easier to get. in fact, linux is merely an improvement in 'ease of use', nothing more, nothing less.

the only thing that changes in time ... (none / 0) (#72)
by cem on Sat Apr 13, 2002 at 09:19:20 AM EST

... is the USER-INTERFACE of the tools. That's it. The machine part (the instructions) has stayed the same since Alan Turing and von Neumann.


Young Tarzan: I'll be the best ape ever!
[ Parent ]
oops (4.00 / 1) (#73)
by turmeric on Sat Apr 13, 2002 at 09:20:55 AM EST

that paragraph about the internet/www was supposed to be how it toppled compuserve, prodigy, the bbs, fidonet, arpanet, aol, and a hundred other disconnected networks.

and then some physics guys got sick of this crap so they plopped out 'universal name spaces' just like librarians had been doing for oh, a hundred years or more, and 'hyper links', just like librarians and academics had been doing for, oh, i dont know, a hundred years or more. so basically the WWW was to improve 'ease of use'.



[ Parent ]

An interesting way of looking at it. (5.00 / 1) (#75)
by porkchop_d_clown on Sat Apr 13, 2002 at 09:32:21 AM EST

And there's a lot of truth in what you say, although I don't know about large masses clamoring for ForTran and COBOL. There weren't any "large masses" of computer users then, and COBOL, at least, owed much of its design to a single person - a woman by the name of Rear Admiral Grace Hopper.

But overall I think your argument stands up - particularly the claim that Linux is really about "ease of use". It is - when you consider the "use" that hackers (in the original sense, i.e., "hobbyists") like to put their computers to.

And, I guess, I shouldn't run Linux down too much. It didn't advance the state of the art, but the culture behind Linux is, I think, more important than Linux itself.

Sure wish someone would do an open source version of Plan 9, though.


--
Uhhh.... Where did I drop that clue?
I know I had one just a minute ago!


[ Parent ]
good show (4.00 / 1) (#81)
by VoxLobster on Sat Apr 13, 2002 at 10:06:06 AM EST

but I don't agree with computer networks being a "dumbed down" type of thing... I mean, have you ever studied networking protocols? Just to get a little reliability and routing takes a huge level of complexity. In fact, as far as simplicity goes, reel-to-reel and diskettes are way simpler ways to transfer data. Otherwise, I see your point.

VoxLobster -- Insert Quote Here

VoxLobster
I was raised by a cup of coffee! -- Homsar
[ Parent ]

Yep (none / 0) (#103)
by DeadBaby on Sat Apr 13, 2002 at 12:13:45 PM EST

Although I can add one more rule:

computer scientists: a large vocal group of people who take themselves WAY too seriously.

"Our planet is a lonely speck in the great enveloping cosmic dark. In our obscurity -- in all this vastness -- there is no hint that help will come from elsewhere to save us from ourselves. It is up to us." - Carl Sagan
[ Parent ]
That pattern does not hold (none / 0) (#126)
by Macrobat on Sat Apr 13, 2002 at 02:53:43 PM EST

I do not think the "old school hated it, new school won" paradigm you use is applicable in all the instances you cite. Notably, I think it would be hard to find evidence that "large, vocal masses" resisted networking. While I suppose there were a few dissenters, the overwhelming majority of the people actually working on the machines welcomed the ability to interconnect on a wide scale. Likewise for timesharing. And the mouse was conceived and a prototype was built in the late '60s to provide a different kind of interface (not merely an easier one, but one more suited to specific tasks), long before the advent of mass-market PCs.

As far as the "dumbing down" argument goes, in many cases, the old school was right. Few engineers thought that using calculators was in and of itself a bad thing--what many decried, including most teachers, was the reliance on a tool as a crutch and a replacement for deep understanding. Nowadays we have college graduates who can't, for example, figure out a six percent sales tax on whole dollar amounts. So yes, there has been some dumbing down.

Moreover, these milestones to which you refer are not computer science accomplishments so much as they are software engineering ones. The term "science" refers to questions of fundamental nature and hypothetical possibilities. The term "engineering" implies a predetermined goal, which scientific knowledge might be applied to help achieve. While intertwined, they are not the same thing. And fundamental questions like NP-completeness or the robustness of algorithms, while they may have advanced in the last twenty years, are still covering territory that was pioneered decades ago. That is what this article is about.

"Hardly used" will not fetch a better price for your brain.
[ Parent ]

so (2.00 / 3) (#90)
by Prophet themusicgod1 on Sat Apr 13, 2002 at 11:04:42 AM EST

you don't think that modern computers/programs are better than older computers/programs? that's all well and good; actually i use my 386 and Mac IIcx daily... and i don't really have any problems... i'm imagining once i get a handle on I/O i could use my apple //e instead of the 386 for some things... but really, if you miss the old computers... why don't you use them? no one is forcing you to connect to the internet using anything higher than, say, DOS-LYNX (i'm unaware of any browser less intensive than this; if you know of one i'd love to hear it)... and you can use a 386 for that... maybe even a 286 if you were to really know what you were doing~
if this isn't good enough... i'm kind of short on change and if you want i could sell you my apple //e :)
"I suspect the best way to deal with procrastination is to put off the procrastination itself until later. I've been meaning to try this, but haven't gotten around to it yet."swr
Did you deliberately misunderstand my point? (4.00 / 1) (#93)
by porkchop_d_clown on Sat Apr 13, 2002 at 11:17:50 AM EST

First, there are a half dozen machines I'd rather own than an x86 box of any vintage; Amigas come to mind. But, second, and in any case, that isn't the point. I want to know why programmers are using their fancy new machines as an excuse to avoid writing good software.


--
Uhhh.... Where did I drop that clue?
I know I had one just a minute ago!


[ Parent ]
and i LOVE amigas! (none / 0) (#98)
by Prophet themusicgod1 on Sat Apr 13, 2002 at 12:01:41 PM EST

i saw your point and i agree with you...!
there is plenty of good code written for those old machines... and you, being a good coder, could probably fill in whatever else you need. in the meanwhile... do you think it could possibly be that us youngins are having to deal with a whole load of awesome coding from your generation... and that you guys have written so much good code that we just can't expose ourselves to it all without devoting a large portion of our lives to it??? or the alternative is to take the easy route, not really give a shit about anything and learn the Java stuff and whatnot... you know who i'm talking about... those of us who care about computers would much rather go another route, and we still can. at least i hope so...
*holds his cs major acceptance form tight*
"I suspect the best way to deal with procrastination is to put off the procrastination itself until later. I've been meaning to try this, but haven't gotten around to it yet."swr
[ Parent ]
I think... (none / 0) (#102)
by DeadBaby on Sat Apr 13, 2002 at 12:11:31 PM EST

It sounds like you're just pining for the "good ole days" instead of giving any real facts as to why things are any worse now. Even the crappiest software these days is better than some junk software package from the mid-80's.


"Our planet is a lonely speck in the great enveloping cosmic dark. In our obscurity -- in all this vastness -- there is no hint that help will come from elsewhere to save us from ourselves. It is up to us." - Carl Sagan
[ Parent ]
Nice. (none / 0) (#170)
by porkchop_d_clown on Sat Apr 13, 2002 at 09:17:27 PM EST

Yet another rant with no actual data, or (dare I say) clue.

I never said things were worse. I asked why they weren't getting better. Notice the difference?


--
Uhhh.... Where did I drop that clue?
I know I had one just a minute ago!


[ Parent ]
One word (none / 0) (#115)
by budlite on Sat Apr 13, 2002 at 01:28:44 PM EST

Productivity. Software houses want their programmers to get their software out of the door simply in a working state rather than going through again and again actually optimising it. Optimisation by hand is time-consuming, and the companies probably want to start making returns on the wages paid to the programmers as soon as possible.

[ Parent ]
The Age of Plastic Software (4.33 / 12) (#91)
by localroger on Sat Apr 13, 2002 at 11:06:51 AM EST

Modern software is like the car made out of plastic. Hidden advances make it more fuel efficient, theoretically safer, and more convenient to use, but it's still embarrassing to park it next to a well-restored classic which is actually made out of metal and weighs 4,500 pounds.

Like the plastic car, modern software is quickly produced, anonymous, and ugly. Contrary to popular belief, GUIs and icons do not make you more productive; in fact they discourage you from developing skills which could really make you more productive. In a CLI system the way to be more productive is to learn the commands and type fast. In a GUI system the way to become more productive is, hmmm, wait a minute, there isn't any way to be more productive. Everyone is reduced to the lowest common denominator, the Universal Newbie.

Traditional software didn't all have the same user interface because, surprise! the same user interface doesn't necessarily work very well for different applications. Programs like WordPerfect and AutoCad were crafted to make the most common functions as quickly accessible as possible, and to perform the most common operations as quickly and reliably as possible.

No processor is so fast and no RAM so extensive that it cannot be overused by shoddy software. Since most programmers don't worry about performance until it becomes a problem, most software has annoyingly poor performance -- the minimum level which will get by. This is equivalent to the car manufacturer which balances likely lawsuit judgements against a $4 bracket when designing that plastic car. If it's "good enough" there is no incentive to make it better; get it out the door and make some money. If it crashes occasionally, so what? People expect it and will put up with it to a certain extent. Nobody cares if it is done "right."

Good software is a beautiful thing. It may be demanding when you learn to use it (just as cars don't drive themselves), but when you depend on it it will be smooth, elegant, and reliable. The last piece of software which impressed me in this way was the Amstrad LocoScript word processor for the PCW9512. It was written entirely in assembly language by a team who cared, and it showed. Running on a 4 MHz Z80 in 512K of RAM, it blew away similar products running on 12MHz '286 based PC's with 2 meg RAM and hard drives.

Word may allow me to put a picture in my document, but it is an annoying piece of crap to use for the 99.9% of work that doesn't require any of its "advanced" features. Sure it's nice to be able to multitask, but it would be nicer if the software I multitasked were efficient and reliable. It would be nicer if the user interfaces weren't all the same, but were optimized for the task at hand. It would be nicer if I could use consistent key shortcuts to do the most common things, and if instead of trying to impress me by rearranging the menus with every release they left those functions in the same place so I could find them easily.

But I guess if the 2002 model plastic car looked just like the 2001 model, a lot of 2002 models would go unsold. It's one of the sicknesses of capitalism that talent which could be directed at making a better product is instead directed at making pointless geegaws for marketing hype. I guess it's a sign that our craft has come of age that companies can act this way and flourish. Next up, look for the automatic safety system that rats you out to the feds if you copy a CD.

I can haz blog!

CLI is *NOT* faster than GUI (2.25 / 4) (#108)
by brunes69 on Sat Apr 13, 2002 at 12:33:19 PM EST

I wish I could find my link to back this up, but you are going to have to trust me on it. WAYYY back in the day, Apple spent millions of dollars (a lot more then than it is now) on HCI research. One of the big questions was: is a CLI faster than a GUI? It turns out no, not by a long shot.

One of the major tests I remember vivdly was a test that involved a large amount of text - several pages. The participants, who were mixed between new computer users and experienced 100wpm+ keyboarders, were asked to go through the text and everywhere they saw one word, replace it with another. In the CLI trial you could not use the mouse, and in the GUI trial people could not use the keyboard. Both could use cut-and-paste. Everyone did both trials.

In the post-trial survey, when asked which was faster, everyone said the keyboard. But the actual times showed a different story, with even the most experienced keyboarders showing a marked improvement using the GUI rather than the keyboard. Similar results were found in all kinds of trials, from launching applications to formatting documents.

The reason (at least as postulated by the scientists) is that when you are keyboarding your brain is active, having to move your fingers in a multitude of different ways, recalling keystrokes, etc. When you are mousing, you are essentially doing the same repetitive tasks (move, click, move) over and over - your brain finds it boring. This is the reason it appears to take longer.

No matter what you "think" is faster, all that matters in terms of productivity is what is "actually" faster.



---There is no Spoon---
[ Parent ]
Not for me (4.50 / 4) (#112)
by localroger on Sat Apr 13, 2002 at 12:57:24 PM EST

The reason is (at least as postulated by the scientists), that when you are keyboarding your brain is active, having to move your fingers in a multitude of different ways, recalling keystrokes, etc. When you are mousing, you are essentially doing the same repettve tasks (move, click, move) over and over - your brain find it boring. This is the reason it appears to take longer.

Actually I can keyboard and hold a conversation at the same time, but this must be a pretty unusual skill since it freaks some people out.

The big problem (and it is a huge problem) is that GUI key shortcuts tend to require multiple keystrokes so they cannot be repeated by holding down a key. GUI based programs also tend not to have macro capabilities at all. It's true that without these tools, keyboardists are handicapped.

For example, if I were using my favorite text editor, the DOS program BSE (Basic Screen Editor) which was distributed with the DOS 3.3 Developer's Kit for the Heath/Zenith computer series, I would do the following:

F7 (search and replace first)
OriginalWord <enter>
NewWord <enter>

BSE would advance the cursor to the first occurrence of the word and replace it. I would then hold the F8 key (search and replace next) down until every occurrence had been completed. BSE is fast enough on a Pentium to keep up with my 30 kps repeat setting at this task, making it very fast but also slow enough that I could avoid part of the document I wanted to skip. Show me the GUI that lets you pull off a stunt like that.

Let the skilled keyboardists compete on software they are familiar with, against skilled GUI users on typical GUI software, and there is no way the GUI will be faster. I have used both and it's not a matter of erroneous perception; the tools I use in CLI software are not there to be used in GUI software.

I can haz blog!
[ Parent ]

Repetitive tasks.... (4.75 / 4) (#127)
by spcmanspiff on Sat Apr 13, 2002 at 02:55:23 PM EST

Um, erh.

That seems like a pretty dumb test. Anyone familiar with basic UNIX commands could do it far faster, and for an arbitrarily long document.

Why? Because a command line environment has enough power to tell the computer "repeat this a bunch of times, in these situations" and sit back and let it do it.

A GUI, in order to enable that sort of flexibility, would sprout more buttons than anyone would ever know what to do with.

The speed of "keyboarding" vs the speed of "clicking" has nothing to do with it; it is all about the expressiveness available to the user when she tells the computer to perform a task.



[ Parent ]

That's not the point (none / 0) (#251)
by brunes69 on Wed Apr 17, 2002 at 07:46:15 PM EST

The point of the test is that it always seems like the CLI is faster, because of the way your brain works, even though the truth is this is rarely the case.



---There is no Spoon---
[ Parent ]
Noooo.... (none / 0) (#256)
by spcmanspiff on Fri Apr 19, 2002 at 01:11:57 AM EST

It seems like *typing* is faster than mousing, even if that isn't always true. That is a very different thing.

If you really want to talk about CLI v. GUI, then let's bring in grep, awk, sed, and perl and the powerful things an experienced CLI user can do with them.

Of course, even in a GUI I would be using the Edit->Replace function.



[ Parent ]

Doing your job and doing something once (4.00 / 3) (#132)
by autonomous on Sat Apr 13, 2002 at 03:28:33 PM EST

There is a big difference between doing something once and being productive. To increase productivity you need to automate - have things handled for you without requiring your full and constant attention. I would like to see you convince your GUI to do your quarterly report formatting automagically, or your GUI to download your latest fact packs for your stat analysis. In reality a GUI lets you point and click, but only point and click. My CLI, on the other hand, lets me script, automate, and delegate without having to touch the keyboard.
-- Always remember you are nothing more than a collection of complementary chemicals worth not more than $5.00
[ Parent ]
Well (5.00 / 2) (#135)
by DeadBaby on Sat Apr 13, 2002 at 03:53:34 PM EST

Given that just about anyone who uses the command line would know how to use an editor's search & replace function, I really can't see the value of this study. Even if we want to compare search and replace from within an editor:

GUI:
(hands on mouse) Edit - Replace
(hands on keyboard) "search for"
(hands on keyboard) "replace with"
(hands on mouse) "OK"

CL:
(hands on keyboard) :%s/search for/replace with/g



"Our planet is a lonely speck in the great enveloping cosmic dark. In our obscurity -- in all this vastness -- there is no hint that help will come from elsewhere to save us from ourselves. It is up to us." - Carl Sagan
[ Parent ]
Apple Study (5.00 / 1) (#161)
by Frigorific on Sat Apr 13, 2002 at 08:28:21 PM EST

The study you refer to is mentioned and described here. However, it is not used to "prove" that the GUI is faster than the CLI--just that, for most tasks, keyboard shortcuts are slower than mouse movements.

As others have suggested, the power of the CLI is not in the keyboard, but rather in the ability to apply a command an arbitrary number of times to an arbitrarily large chunk of data; while this can be done with a GUI, it works more naturally (in my opinion) under the paradigm of the CLI.

Of course, that's just my sqrt(2**2) cents (obligatory obfuscation of common phrase).


Who is John Galt? Rather, who is Vasilios Hoffman?
[ Parent ]
HCI (4.00 / 2) (#116)
by enthalpyX on Sat Apr 13, 2002 at 01:30:34 PM EST

Traditional software didn't all have the same user interface because, surprise! the same user interface doesn't necessarily work very well for different applications. Programs like WordPerfect and AutoCad were crafted to make the most common functions as quickly accessible as possible, and to perform the most common operations as quickly and reliably as possible.

But were they? Were studies really done to ascertain which features were the most important to users? How was "working well" defined? It appears that the interface to most older programs was created in an "intuitive" fashion-- that is, what does the programmer think is the best way to do it? Sure, a programming team might sit down and whiteboard it out-- but that's no guarantee that the actual user will enjoy using the interface, or find it to be any less cumbersome.

To make this somewhat more on-topic: the application of HCI research [which, arguably, peaked @ Xerox PARC] is still lacking in many software products. The actual implementation of task-centered and user-centered design can create great software nowadays. Still not a new idea, but shops who use HCI techniques are reaping the benefits.

* This is if you consider HCI part of CS. Which is certainly debatable.

[ Parent ]

The way I do it (4.57 / 7) (#124)
by localroger on Sat Apr 13, 2002 at 02:36:11 PM EST

Were studies really done to ascertain which features were the most important to users? How was "working well" defined?

You give it to the people who will use it and watch them. If they are having problems with the function layout, change it and try again.

It's not rocket science.

It appears that the interface to most older programs was created in an "intuitive" fashion-- that is, what does the programmer think is the best way to do it?

Some were. When the programmer is typical of an end user and has the knack for thinking like one, this can be the best way. When the programmer has an "engineer" mentality and is asked to write code for use by clerks and truck drivers, the result can be dreadful.

Sure, a programming team might sit down and whiteboard it out-- but that's no guarantee that the actual user will enjoy using the interface, or find it to be any less cumbersome.

Right, design by committee -- we all know how well that's likely to work 8-O

Seriously, there is a right way to do this, and that is to put the beta product in the hands of users, and be responsive to their complaints and to observe for yourself any time/motion awkwardness which becomes apparent. It is nearly impossible to predict how a program will be used by people whose skills are different than yours, and anyone who says otherwise is either a liar or a fool.

Some older programs were really good in this way, though a lot weren't. Nearly all new programs are really crappy in this way, because the default Windows user interface is really crappy at doing certain things. (Think behavior of the ALT key, or using TAB instead of ENTER to terminate each field in data entry, or the uselessness of key repeat with menu-based keyboard shortcuts). Even if you tried to fix these problems -- and believe me, in some applications they are big problems -- it can be difficult or even impossible to coerce the operating system into doing what you want.

To celebrate Y2K my company moved from a very antique IBM System 36 based accounting system to a spiffy new Windows based solution which, unlike the Sys36 software, was actually designed for our industry. The politest thing I can say about the result is that it has been a disaster. Despite a network with 100x the computing power of the Sys36 the new system is slower, crashes frequently, raises weird error messages meant for the eyes of the Access programmer who developed it, and is universally harder to understand and more difficult to use than the "cryptic" function-key based Sys36 software. Our best secretaries are about half as effective on the new system as they were on the old one, and this is after nearly two years of using the GUI. On the old system they had critical command sequences and codes memorized and flew through them; in the GUI there are no such shortcuts to be found and many actions have different, unpredictable results depending on account activity, network traffic, phase of the Moon, and so on.

The Sys36 system ran continuously for 18 years without crashing. With luck, the new software will make it through one day. When our customers ask why I won't provide X, where X is some buzzword-compliant "feature" they've read about in the trade journals, I point them at our own accounting system as an excellent example of why I work the way I do. If I ever provided such a crappy product to a paying customer I'd be ashamed to show my face at work. But most people don't seem to feel that way.

I can haz blog!
[ Parent ]

Going to the users... (4.00 / 1) (#158)
by enthalpyX on Sat Apr 13, 2002 at 08:01:41 PM EST

It's not rocket science.

It's not rocket science, true-- which makes the fact that so many shops do NOT do this even more absurd. We have lots of software that programmers love to use, but users absolutely hate.

Seriously, there is a right way to do this, and that is to put the beta product in the hands of users, and be responsive to their complaints and to observe for yourself any time/motion awkwardness which becomes apparent.

There are two major problems with this method, both related in such a way as to sort of cancel each other out. First, when the user is presented with a functioning piece of software, many users are going to be loath to change anything. It's hard for a user to formulate a new idea about how something should look on a screen when it's already there. When I ask users for changes on something that's already coded up, the changes will typically be minute - color changes, items missing, etc. Developing low-fidelity prototypes with good ol' pencil & paper is the surest way to get new layout ideas out of a user. Secondly, as the interface and underlying code already exist, truly sweeping design changes have the potential to warrant complete re-writes. Programmers typically don't want to do this, either.

If you have an extremely large user base, it may not even be possible to go directly to your users and find out what they want. For example, even though it seems that the new accounting system used by your company is the worst POS ever, it's possible that the application works well for someone else. Rather unlikely, but still.

Users have been trained to just "put up" with the horrendous software out there and beat their heads against the wall until they [somehow] get it working. It shouldn't have to be this way.

I guess my point is that design (especially graphical) isn't easy.

Humans are finicky creatures, and wants in software vary significantly.

[ Parent ]

The Standard Speech, and other methods (none / 0) (#166)
by localroger on Sat Apr 13, 2002 at 08:57:08 PM EST

First, when the user is presented with a functioning piece of software, many users are going to be loathe to change anything. It's hard for a user to forumlate a new idea about how something should look on a screen, when it's already there.

When I put in a new system, I will watch the customer use it myself; often even if they're shy about asking for changes, I'll notice and suggest something myself because I can see that the way they are using it is awkward and frustrating.

After an initial install I'll usually leave it in place a couple of weeks, with instructions to keep a "wish list" near the machine. "Anything you'd like it to do, even if you don't think it's possible or don't know if it's a good idea, write it down," I instruct. "Sometimes a thing you think is really unreasonable will turn out to be easy to give you, and sometimes a thing you think is trivial will turn out to be costly. You'll never know if you don't ask." I will also encourage managers and bosses to ask for changes, so they will put the actual end users (the source of real information) at ease.

Of course it doesn't hurt that I've been doing this for 17+ years and have gotten pretty good at getting it right the first time. Most people don't think of themselves as "computer users;" they think of themselves as operators, secretaries, or whatever, who have things to get done. My job is to make getting their things done easy. Often it means the opposite of every cow that is sacred in GUI land; in a repetitive and unchanging environment the interface that leads you through the process a step at a time can be easier and more reliable than one that gives you the "freedom" to click on anything whenever you want.

There are places where a multitasking GUI is absolutely the last thing you need or want, but if the hardware you need only supports Windows drivers you are simply fucked.

If you have an extremely large user base, it may not even be possible to go directly to your users and find out what they want.

It is always possible to go to a few users and see how they react to an alpha or beta copy of the software. Always. There is no excuse for not doing this, ever. Not doing it is simply elitist and/or cheap. I've lost count of the number of times the specification turned out to be totally misleading w/r/t how an application was actually used.

End users are the only ground truth that means anything in this field. It doesn't matter what platform you're programming, what language you use, whether you prefer GUI or CLI or how you optimize your code: Someone will use your work, and it will either make their life easier or harder. That is the only judgement I will accept on my work. That is the only judgement which is worth a hill of beans. And if your manager or boss feels otherwise, your company or department is fucked and you should seek employment elsewhere.

I can haz blog!
[ Parent ]

lol (3.45 / 11) (#99)
by Rezand on Sat Apr 13, 2002 at 12:02:36 PM EST

This is the most ridiculous article I've ever read on kuro5hin. I almost fell over. This has to be a huge 'troll'-- otherwise we live in a sad world.

Any scientific topic I mention, no matter how unique to the modern world, has to have been based in some part on findings from the past. Just about every technique we mention in computer science has to mention computers somehow, and, boy, it's a bit hard to get around the fact that they were built before the mid-1980's. So we're forced to come up with super-original sciences generated in the last 15 years. I'd like to see any aging field that was able to accomplish this.

No "science" (oh, btw, I have a huge gripe about using 'computer science' to refer merely to programming optimization and languages) can progress without building and proving and tearing down findings from previous generations. That's how progression behaves. "Deep Blue" was a breakthrough for computer science in its niche, but obviously it has to be based on mixing and matching ideas from the past. And you'll surely balk at the idea of "DNA computing" and "Quantum Computing" because they don't run on your Apple II. The Apple II and its ilk and no ability to generate the graphics we are able to process today-- this has lead to numerous amazing advances to graphics. Unfortunately, it's all got basis on the past mathematics and findings from history. (Damn Newton and all his calculating!)

I hope when I get older, I won't be as blind. You always hear the "back in my day it was better" stories, and perhaps I'll feel the same way. The day, though, that I stop noticing the number of advances in science (especially those happening right under my nose) will be a very tragic day indeed.

Back in my day (5.00 / 1) (#168)
by ka9dgx on Sat Apr 13, 2002 at 09:12:18 PM EST

Back when I started (1980), we thought the computers we had were pretty darn good, because they were WAY better than their predecessors. No punch cards or paper tape to worry about; you actually had a whole computer to yourself, waiting for you to tell it what to do.

Flash forward to 2002: HAL isn't here, but he was crazy anyway. We now have multiple processors waiting for us to tell them what to do, computers all over the world at our disposal, and we're able to send so-so video over the internet for about $50/month (cable modem). Things are much better.

However, my 90 MHz Pentium machine worked much better at playing MP3 files than my current 400 MHz Pentium II, thanks to the goofballs at Micro$oft who decided that it's a good idea to put sound, video, and network cards all on INT 9. The slack taken up by better hardware is an obscene waste of resources.

Things will get better, but it'll take market forces to focus attention back on software performance as the world realises that upgrades aren't any more.

There is hope, but you have to be patient.

--Mike--

[ Parent ]

Yawn. (4.00 / 7) (#104)
by rebelcool on Sat Apr 13, 2002 at 12:14:06 PM EST

With a little intuitive thinking (and real experience), one can quickly ascertain the real reason why code sucks these days.

That reason is deadlines. Yeah, if I had 6 months to tweak, massage and optimize every project I've worked on, I too would turn out the most beautiful, gorgeous and clever code you've ever seen.

Functionally, it would be identical (nearly) to something slapped together in a month, but hooboy would it be nice underneath!

Unfortunately, in the real world, nobody cares, so long as the functionality works. And the faster you get that done, the faster your company has a product to generate revenue to pay you.

In any case, there's a good reason why 'techniques' haven't changed any. The hardware is still almost identical to what it was 20 years ago. Electrical engineering hasn't changed any. Sure, machines are faster and have more memory, but so what? The underlying things are all the same. Someone has implemented a better adder on a processor, more powerful floating point... but the load/store architecture is still the most popular way to design a system. The fundamentals of programming such a system haven't changed at all.

COG. Build your own community. Free, easy, powerful. Demo site

Elegance brings forth maintainability (5.00 / 1) (#233)
by pin0cchio on Mon Apr 15, 2002 at 09:36:54 AM EST

Unfortunately, in the real world, nobody cares, so long as the functionality works.

Not if your client ever wants further updates to the software. Elegant code is maintainable code.

And the faster you get that done, the faster your company has a product to generate revenue to pay you.

Pay a little now, or a lot later.


lj65
[ Parent ]
Neural networks (3.40 / 5) (#105)
by zenit on Sat Apr 13, 2002 at 12:22:01 PM EST

Artificial neural nets have been around since the 1950s, but there are several types of neural networks. The primitive perceptron was invented in 1957 by Rosenblatt, but it wasn't really usable for much. The real breakthrough was made with the backpropagation net in 1986 (less than 20 years ago!).

Another very important area of neural networks is the research done by Teuvo Kohonen. The "self-organizing map" (SOM) is a revolutionary technique, radically different from existing neural nets, but still based on the same basic ideas. (Much like object orientation is a radically different programming paradigm, based on traditional programming.)



Redefinition of skill (3.62 / 8) (#109)
by mech9t8 on Sat Apr 13, 2002 at 12:39:06 PM EST

Instead of skill, programmers began simply writing incredibly bloated code. I've written payroll software that ran on the 4k computer I mentioned above. I've written adventure games that ran on a calculataor with 4.6k of RAM and a processor speed that was measured in kilohertz. These days simple hello world programs compile to over 128k in size, and a letter to grandmom can require over a megabyte of disk storage!

Skilled programmers nowadays write code that's maintainable and that's user friendly. Writing code that fits in a thumbnail no longer needs to be a priority.

Programming since the 80's has concentrated on writing software that *everyone*, not just CS majors, uses - on writing software that's useful *everywhere*, not just in CS departments - on making computers a means rather than an end. There's naturally less progress in the basic computer grammar; the explosion of new discoveries made in the 70's is simply being refined into something useful.

If you want to see what's new and cutting edge, try the Research pages of a Computer Science department website. But it's just like math - there are still plenty of new advances being made, but they take time to carry over to the mainstream - and, in any case, the mathematics known in Newton's time is all 99% of the population needs...

Same with Computer Science - the basic grammar known in the early 80's, combined with the new technology, is all that's needed to exploit what computers can do. English has been around for hundreds of years, but does that mean everything written today is old?

--
IMHO

Maintainability vs code size (none / 0) (#167)
by porkchop_d_clown on Sat Apr 13, 2002 at 09:07:58 PM EST

Skilled programmers nowadays write code that's maintainable and that's user friendly. Writing code that fits in a thumbnail no longer needs to be a priority.

A good point; and I guess I have to agree - I write that way myself out of pure self-defense: the dumb bastard who will have to maintain my code might be me!

Still, that doesn't explain why the current version of MS Word is a full hundred times (if not more!) the size of MS Word 1.0. Is the resulting source code 100 times as readable? Is it even 10% easier to maintain?


--
Uhhh.... Where did I drop that clue?
I know I had one just a minute ago!


[ Parent ]
Sure, bring up Word ;) (none / 0) (#182)
by mech9t8 on Sat Apr 13, 2002 at 11:49:37 PM EST

Well, Word has a million more features now than Word 1.0, from grammar checking, to DTP capabilities, to object embedding, to all those autocorrect things, to collaboration tools, to web capabilities, etc. You may not use them, but they're there.

Whether those features are "bloat" or not is another question, but they're obviously going to need more code - and another truism is the more user-friendly and user-tolerant an app becomes, the more code it requires. Writing a GUI obviously requires far more code (in total, although of course most of it is now handled by libraries) than a command line tool. And even writing a command-line tool with graceful error handling and useful error messages requires far more code than one that just fucks out when it encounters an error.

Word is usually the single worst example people can come up with to illustrate code bloat, so I'm hardly going to say that its size makes it better or more readable or whatnot (esp. since I've never seen its code). But I will say that pretty much any app written today is probably much more readable and maintainable than any app written 10 or 20 years ago, given (a) equivalent functionality, and (b) that it isn't based on the same codebase that's simply been expanded for 10 years. And any app written in, say, Java or Visual Basic is far more maintainable than something written in C++ given the same restrictions.

--
IMHO
[ Parent ]
Embedded systems and user friendliness (none / 0) (#232)
by pin0cchio on Mon Apr 15, 2002 at 09:33:03 AM EST

Skilled programmers nowadays write code that's maintainable and that's user friendly.

Define "user friendly." A huge binary for an embedded system raises the cost of the ROM part that contains the code, which in turn raises the price of the good. How is an expensive good more "user friendly" in economic terms than a cheap good?

Writing code that fits in a thumbnail no longer needs to be a priority.

Unless the thumbnail in question is the processor's instruction cache. That can mean the difference between 15 updates per second and 60 updates per second in a real-time system such as an interactive simulation.


lj65
[ Parent ]
what technologies are in use... (3.00 / 4) (#110)
by pb on Sat Apr 13, 2002 at 12:51:25 PM EST

Just list some common technologies that are in use, where they came from, and what's new. Since people are suggesting 'MP3' for some reason, let's take a look at it.

MP3 -- The big technology innovations I see here are perceptual audio coding, lossy compression, and later variable bit rates and streaming. Lossy compression can be traced back to JPEGs, which came about in the late 80's; I remember reading an article in BYTE magazine about the compression techniques used, (cosine interpolation? what's that?) and later downloading programs like 'convert' (then part of "Image Alchemy"?) and 'dvpeg' and playing around with compression options.

So what about the other ideas? Variable bit rates don't seem like much of a leap to me; that's conceptually like having a 'quality' setting in different chunks of a file, or just an efficient encoder.

We've known about what the human ear can and can't hear for quite some time now, so the idea of "perceptual audio coding" isn't really a new one; that's how the telephone system worked (and also .AU files). And streaming was what we did with .AU files when we'd download entire songs from ftp archives and play them on SunOS; we didn't have much of a buffering system, but ultimately it all depends on having enough bandwidth. And no one cared about the legality of it all because these AU files sounded really crappy; few people would pay for telephone quality music. :)

So yes, there are some innovations and discoveries here, the latest one being the introduction and adoption of the JPEG file format, which was a significant innovation at the time. And as for the rest, they are achieved whilst 'standing on the shoulders of giants', as are most things once there is an established field for it.
---
"See what the drooling, ravening, flesh-eating hordes^W^W^W^WKuro5hin.org readers have to say."
-- pwhysall
PSTN doesn't do adaptive psychoacoustics (none / 0) (#231)
by pin0cchio on Mon Apr 15, 2002 at 09:29:37 AM EST

so the idea of "perceptual audio coding" isn't really a new one; that's how the telephone system worked (and also .AU files).

The public switched telephone network's codec uses a static model of a 300-3600 Hz bandpass filter followed by a mu-law quantization. (See my previous comment.) On the other hand, MP3, Vorbis, AAC, AC3, and the like use more advanced models of hearing to allocate bits to frequencies dynamically.
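For illustration, here is a rough Java sketch of that static companding step (using the standard mu = 255 curve; the sample values are made up). Note that there is no model of the listener anywhere in it, which is the whole difference from the psychoacoustic codecs:

// Rough sketch of mu-law companding (G.711 style, mu = 255). The same
// curve is applied to every sample; no psychoacoustics decide where
// the bits should go.
public class MuLaw {
    static final double MU = 255.0;

    // x is a sample in [-1.0, 1.0]; returns the companded value in [-1.0, 1.0]
    static double compress(double x) {
        return Math.signum(x) * Math.log(1 + MU * Math.abs(x)) / Math.log(1 + MU);
    }

    static double expand(double y) {
        return Math.signum(y) * (Math.pow(1 + MU, Math.abs(y)) - 1) / MU;
    }

    public static void main(String[] args) {
        for (double x : new double[]{0.01, 0.1, 0.5, 1.0}) {
            double q = Math.round(compress(x) * 127) / 127.0;  // crude 8-bit-ish quantization
            System.out.printf("in=%.3f  out=%.3f%n", x, expand(q));
        }
    }
}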


lj65
[ Parent ]
The explanation is simple. (4.25 / 4) (#118)
by bgalehouse on Sat Apr 13, 2002 at 01:37:46 PM EST

For a great, huge number of everyday tasks, we don't need new fancy algorithms. Web development, for example, doesn't require much that isn't already in place. You can debate whether this is because these tasks are simpler, or because computer scientists went for the areas with maximum ROI first.

Now, there are things which have come out of academia more recently. One is the Java bytecode verifier. Proof Carrying Code is an interesting research field. I think that it could and should replace memory management for operating system security. Imagine how much faster programs would communicate with each other and the os if you didn't have context switches.

But even in this case, faster hardware reduces the practical need for the innovation. So though the academics work on it, it isn't in the public eye.

There is also the train garbage collection algorithm, now available with Java: garbage collection that puts a bound on the time spent in each collection stage. There is a paper which documents the first published implementation of the train algorithm.

Of course, some might claim that GC is just a way for programmers to be lazy, or some such. I remember a story told by a CMU CS prof about debugging an X11 program. It was seg faulting; as far as he could figure the libraries were holding some pointer that he was freeing. So he found himself randomly removing deallocation statements to make it work. He would claim that all non-trivial X programs almost have to have memory leaks. True or not, the possibility says much for the value of garbage collection in large or complex systems.

Truth. (none / 0) (#159)
by porkchop_d_clown on Sat Apr 13, 2002 at 08:13:23 PM EST

For a great, huge number of everyday tasks, we don't need new fancy algorithms.

True. The problem is that few coders seem to know (a) when fancy algorithms are needed or (b) when they've met their Peter Principle and should ask someone for help.

As for memory leaks - yeah, the single biggest bugaboo in C coding and Java too (since GC is unpredictable). Thanks for the link to the paper, I'm very interested in GC stuff.


--
Uhhh.... Where did I drop that clue?
I know I had one just a minute ago!


[ Parent ]
GC is quite predictable (none / 0) (#255)
by bgalehouse on Thu Apr 18, 2002 at 04:33:41 PM EST

The only leak in a GC system comes from the fact that any pointer might be followed. Typical memory leaks in a GC system involve, say, listeners being created to inform transitory windows of events, and then not being unlinked when the windows close. Mostly you need to take a very close look at any global/long lived variables, and at any operation which adds to them.

Of course, these things would still be memory leaks in non-gc systems. Best case, you'd get a hashtable of dangling pointers instead of objects, but that would still be a bunch of extraneous hashtable structure objects.
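A minimal Java sketch of the kind of leak described above (the class names are invented for illustration): the long-lived registry keeps the listener reachable, and the listener keeps the window's state alive with it.

import java.util.ArrayList;
import java.util.List;

// Sketch of a GC-era "leak": nothing dangles, but a long-lived registry
// keeps a transitory object reachable forever.
class EventBus {
    static final List<Runnable> listeners = new ArrayList<Runnable>();
    static void subscribe(Runnable r)   { listeners.add(r); }
    static void unsubscribe(Runnable r) { listeners.remove(r); }
}

class TransientWindow {
    private final byte[] pixels = new byte[1 << 20];   // ~1 MB of per-window state
    private final Runnable listener = new Runnable() {
        public void run() { repaint(); }               // inner class holds a reference to the window
    };

    void open()  { EventBus.subscribe(listener); }

    void close() {
        // Forget this line and the bus still references the listener, the
        // listener still references the window, and the collector can never
        // reclaim either.
        EventBus.unsubscribe(listener);
    }

    void repaint() { /* redraw pixels */ }
}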

[ Parent ]

DLL hell and pointer arithmetic (none / 0) (#230)
by pin0cchio on Mon Apr 15, 2002 at 09:27:01 AM EST

I think that [code proofs] could and should replace memory management for operating system security. Imagine how much faster programs would communicate with each other and the os if you didn't have context switches.

You just described loading applications in-process. On win32 operating systems, those are called "DLLs," and you know how those go. Plus, if the hardware doesn't enforce segment limits or paging limits, you have no security in the face of untrusted binaries such as mass-market proprietary software. Remember multitasking in Mac OS 6-9 or Windows 9x?

The .NET Framework includes some facility for proving type- and bounds-safety of programs, and it starts by prohibiting pointer arithmetic. Other program proof systems I've seen also work on strongly-typed languages that don't provide for pointer arithmetic (such as Lisp derivatives and the Java programming language). Without pointer arithmetic, it will be much more difficult to write software that talks to the hardware. In addition, the constant checking of pre- and post-conditions may decrease performance on CPU-bound code, and suddenly, your application requires twice the server hardware investment and twice the electricity and floor space at the colo.


lj65
[ Parent ]
Is a start (none / 0) (#254)
by bgalehouse on Thu Apr 18, 2002 at 04:19:48 PM EST

The java byte code verifier and strong typing are a start. But once every bit of code is verified, there won't be any need for context switches at all - it is a different paradigm completely for OS design. So much of OS design is about minimizing/optimizing communications with the kernel. Make each program as integrated with the kernel as a DLL (without giving up any safety) and we will see just how much faster things can get.

Also, it is possible to verify any assembly code that is safe to execute, though often the verifier needs hints on how to prove the safety of loops. Note that these hints themselves can't poison the system, so they, along with the code, can be provided by an untrusted producer.

[ Parent ]

CS hasn't stopped - it's still evolving (4.22 / 9) (#121)
by pslam on Sat Apr 13, 2002 at 02:08:58 PM EST

It's just that the changes are not as revolutionary as the examples in the article. There are very few "revolutionary" ideas in Computer Science. In recent times there are a few notable major advances:
  • From 1980: Neural networking. It's only in the last two decades that people have figured out a) how it works biologically, and b) how to do it practically.
  • From 1985-1990: Efficient, realistic computer graphics. You can argue the dates here, but I'd say you can throw away most books written before about 1985-1990. They will describe algorithms which are impractical on real architectures, or just plain suck for efficiency. The last 10 years have seen increases in processing power that obsolete the old choices of algorithms.
  • From 1985-1990: Practical parallel programming techniques. In other words, more than just theory on computational complexity. I believe it's also only recently that any real work on formalising methods has been done, such as standard methods for performing common algorithms in parallel. Try to find a book on parallel processing written before 1985 and you'll see what I mean.
  • From 1990: Psychoacoustics, psychovisuals. Or whatever you call visual perception. This is still pretty young as far as practical implementation goes. I can't find any useful detailed information on visual perception as regards to possible compression, other than hand-wavey "quantise higher frequencies more using this arbitrary curve we got by trial and error". You can get your hands on audio perception books, but where's the research papers on visual perception?

I'd say even the last decade has seen significant advances in the art. Does everyone see how much easier it is to do system level programming today than it was, say, 10 years ago? How about how much easier it is to write a program which shovels data between two computers?

Saying that computer science hasn't advanced is like saying that going to the moon wasn't an advancement because people had shown a rocket could do it a hundred years before. But it took a lot more knowledge to actually get it done than just showing that it could be!

One of the other comments refers to the time delay between theory and implementation. Perhaps the reason we haven't seen ideas of the same scale as circa 1970 is that we won't see the ideas of 1990-2000 for another decade - the time when they become practical to implement.

Check those example fields above - neural networking, graphics, parallel processing and perception. All of them are still pretty much in their infancy. We'll see the ultimate benefits of today's ideas in another decade or two. Don't dismiss the current generation too quickly.

CS hasn't stopped - it's still evolving (5.00 / 3) (#144)
by xenotrope on Sat Apr 13, 2002 at 05:06:00 PM EST

I have a problem with the way this topic is being tackled from both sides of the argument. Honestly, it can be argued that anything involving the encoding of instructions as electronic signals to be processed in a deterministic way by a silicon chip is this gentleman's definition of "computer science". To make observations from several perspectives:

1. Since the inception of CS, we've worked on basic principles and defined larger concepts based on them.

The very bottom of CS is discrete mathematics and model theory. Before you can program, you have to define everything you intend to use, such as a character, an integer, what an iteration is and how to do it. These fundamentals are still important, so it's no wonder you're still going to get the same basics of CS in school now as you would years and years ago. These constructs are abstract and universal, and are thus without a language or an architecture. They are primitive and necessary. By comparison, I find it absurd for a mathematician to lament that nothing new has been taught in high schools since the 1970s. If you find 15 year olds learning geometry and trig, don't be surprised. It's not that nothing new has occurred in the field of math, it's that you're looking in the wrong place.

2. New ideas don't happen in the classroom.

Einstein was a patent clerk when he conceived the basics of relativistic physics. Scientific journals don't publish papers authored by "Dr. Hanson's CHEM-332 class". All these signs point to someplace else. Innovation is not the same as education. Rather, education is the enforcement of old ideas. If you want something new, you shouldn't be looking at what is taught today. Ideas like clockless computers and quantum computing aren't exactly new, but they're also not exactly mainstream. Research universities are working on brilliant new concepts -- a good example is the MIT Media Lab -- that would knock the socks off of a CS undergrad. Great, sweeping changes in the basic paradigm of computation aren't going to be made into a concise 1-semester course with a simple outline, some recommended reading, and a certified curriculum. Look to professors and their grad students, not the courses that a board of directors has approved.

3. What constitutes revolutionary?

If you're asking to be educated, you have to be more specific. Suggestions here abound about MP3 and multimedia graphics. True, while these things certainly weren't around 30 years ago, they're based on very old concepts. Claude Shannon, father of information theory, gave us the sampling theorem that underlies digital audio. I'll bet he never dreamt of being able to get full, rich sound off of magnetic media like we take for granted today. So is it revolutionary or not? It's new, but based on something old. Where does the demarcation lie? Do you need something completely outrageous in order to be impressed? In that case, bioengineering has great potential to wow you. DNA computing is becoming more and more feasible, with Dr. Adleman (the "A" in RSA cryptography) recently setting a record by using DNA to solve a 20-variable satisfiability problem with over a million candidate solutions. Is this impressive enough for you?

4. There is more to life than 0 and 1.

Ternary computing, using an additional state other than "on" and "off", is developing nicely. Don Knuth, who we should all be listening to, suggests that "balanced ternary" is a better and more elegant system than binary. In it, you have three states: -1, 0, and 1. I don't even know how to go about building a viable computational system based on this idea, but it's certainly possible and unlike anything you'll read about in an O'Reilly book. Is that revolutionary? The idea is old, but it's remained in an infantile state since, well, ever.
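For the curious, here is a small Java sketch of how an integer maps onto balanced ternary digits (written here as '-', '0', '+'); this is only the representation, not a proposal for how to build hardware around it:

// Sketch: write an int in balanced ternary, '-' for -1, '0' for 0, '+' for +1.
public class BalancedTernary {
    static String toBalancedTernary(int n) {
        if (n == 0) return "0";
        StringBuilder sb = new StringBuilder();
        while (n != 0) {
            int r = ((n % 3) + 3) % 3;           // remainder in {0, 1, 2}
            int d = (r == 2) ? -1 : r;           // balanced digit in {-1, 0, 1}
            sb.append(d == -1 ? '-' : d == 0 ? '0' : '+');
            n = (n - d) / 3;                     // exact division, so no rounding trouble
        }
        return sb.reverse().toString();          // most significant digit first
    }

    public static void main(String[] args) {
        for (int i = -5; i <= 5; i++)
            System.out.println(i + " = " + toBalancedTernary(i));
    }
}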


[ Parent ]
The real revolutions (5.00 / 2) (#169)
by pslam on Sat Apr 13, 2002 at 09:15:34 PM EST

You're right - the problem is people are being too narrow in their definition of "Computer Science" and "Revolutionary". Until recent decades (notably, after the "revolutionary" times in the article), Computer Science was regarded as merely a branch of mathematics as far as University teaching was concerned. That's because it had very little real world use because of hardware limitations.

There's a lot of comments concentrating on the end products. This is wrong. MP3 itself isn't revolutionary - the research into psychoacoustics behind it is. MPEG video isn't revolutionary - it's the research into efficient algorithms that made it possible that is. I'd even say that C++ (or Java, if you prefer) isn't revolutionary - the modern concepts of defensive programming, strong typing and problem layering and encapsulation are the real advances.

I'd like to counter ternary computing as an example of something revolutionary. The first computers (mechanical and electric) used decimal arithmetic. It wasn't until later that people realised that binary was far easier to design and that decimal only really meant something to humans as input and output - the intermediate stages really didn't need it. Knuth is generally the model of good computer science, but I have to disagree with this one. We use binary and not decimal for one good reason - it's efficient and practical.

Ternary computing has yet to be proven efficient or practical. I suspect it never will be. If it's either inefficient or impractical, no chip manufacturer will ever use it. And let's not forget (like everyone always does) the software part of the equation. Even if somebody makes a practical ternary-based chip, you still need to write software for the thing. If that's impractical, the whole idea is useless regardless of whether the hardware is practical.

[ Parent ]

This article is 100% accurate. (3.00 / 7) (#129)
by jd on Sat Apr 13, 2002 at 03:19:58 PM EST

We haven't learned anything, over the last 20 years or so. At least, not at the lower levels.

I started programming in 1978, on a Commodore PET. It was an excellent first machine, and taught me a lot about how to write efficient code, and to design before I write.

My next machine was the infamous BBC model B. This machine had to have been God's Gift to Geeks! Parallel banks of memory, ADC converters, the best sound system at the time (4 channels, with sound envelope controls), parallel & serial ports, TWO video ports, and a 2nd processor port.

There's virtually nothing on a BBC B that does not exist today, on standard AMD SMP systems, provided they have a sound card. But, then, there's virtually nothing on a standard SMP system that did not exist on the BBC, either!!!

Ok, now we'll move on to parallel processing. The first "real" parallel processing machine was, of course, Colossus. Yes, that machine. One of the reasons it could break codes quickly was because it could do many things at the same time. It wasn't dependent on one function finishing before it could do something else.

The first -programmable- parallel architecture was the Cray X-MP. (MP is multi-processor.) This was an ingenious design, but a little expensive to be actually practical.

The first -practical- parallel architecture was the Inmos Transputer. This could scale indefinitely. There was no limit of 2, 4, or 8 processors. You didn't need any special chip-sets to make it work. Arrays of 1000+ were commonplace in large Universities in Europe, where a Transputer-based machine could outperform a Cray at 1% of the list-price, and >>1% of the running cost.

The Transputer was a mid 80's architecture, designed as a military-grade system, with as few external components as possible, and as few requirements as possible. If it had been taken seriously by Thorn EMI, and backed by the Thatcher regime, we would not be using Intel processors today. That much is certain. Sadly, Inmos was sold to SGS-Thomson, and hasn't really been heard of since.

Let's get onto stuff that is perceived as modern. Take "neural networks", for example. A neural network is just a collection of programmable gates. You might as well use an FPGA chip, and spare yourself the complex overtones. You have certain inputs which produce an output, and other inputs which don't. That is nothing more than an n-ary gate. All that has been added is an improvement in the programmability.

Bloat vs. Readability: I've rarely seen code that is genuinely readable. It's often poorly commented and variables are given obscure names. No, I'm not talking about BASIC, although I wish I was. I'm referring to Motif, X11R6, Gnome, KDE and even the Linux kernel itself!

If the benefit of readability actually existed, it would be worth the space. At present, though, readable code, formal specifications, and structured designs simply DON'T EXIST!

Let's take some simple examples. When should you use a global variable? Answer: NEVER! Global variables, especially in multi-threaded code, are totally unpredictable and susceptible to causing dangerous side-effects. Pass in what you want in, and pass out what you want out. That way, if two threads try to grab the same variable at the same time, they each have an instance to play with.
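A tiny Java sketch of the difference (the names and numbers are invented for illustration): the version that updates a shared static variable can lose updates when two threads interleave, while the version that takes its input as a parameter and returns its result cannot.

import java.util.Arrays;

// Sketch: a shared global vs. "pass in what you want in, pass out what you want out".
public class GlobalsDemo {
    static int globalTotal = 0;                   // shared, unsynchronized global

    static void addToGlobal(int[] values) {
        for (int v : values) globalTotal += v;    // read-modify-write: threads can interleave here
    }

    static int sum(int[] values) {                // no shared state: thread-safe by construction
        int total = 0;
        for (int v : values) total += v;
        return total;
    }

    public static void main(String[] args) throws InterruptedException {
        final int[] data = new int[100000];
        Arrays.fill(data, 1);

        Thread a = new Thread(() -> addToGlobal(data));
        Thread b = new Thread(() -> addToGlobal(data));
        a.start(); b.start();
        a.join();  b.join();

        System.out.println("shared global:  " + globalTotal);             // often less than 200000
        System.out.println("passed in/out:  " + (sum(data) + sum(data))); // always 200000
    }
}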

How many exit points should a function have? ONE! One way in, one way out. This may seem time-consuming, but it really does make the code much more predictable, and therefore better.

When should type not matter? NEVER! If you want to enforce a particular behaviour, then cast the type. Otherwise, the behaviour is dependent on the compiler, and the phase of the moon. You can't be sure exactly what will happen. You should NEVER get a single warning about type mismatches.

Out-of-memory errors? NEVER! If you're checking for errors on returns from system functions, you can handle them, in a controlled manner, so that what you want done gets done. If you allow uncontrolled behaviour to occur, then expect the program to explode from time to time.

Algorithms that are new: Hmmmm. Tough. Someone mentioned MP3, but that's just a form of lossy compression, and the loss of data to compress is something that's been around a long time. (The old analog phone system used lossy compression, in a sense, as it supported only a very narrow band, and simply eliminated all sounds outside that band.)

I've still got my Beeb. (none / 0) (#137)
by pwhysall on Sat Apr 13, 2002 at 04:12:41 PM EST

And it still works. 21 years old this coming Christmas.
--
Peter
K5 Editors
I'm going to wager that the story keeps getting dumped because it is a steaming pile of badly formatted fool-meme.
CheeseBurgerBrown
[ Parent ]
Yeeesh! (none / 0) (#262)
by jd on Sat May 04, 2002 at 03:13:06 PM EST

I knew those machines were well-built! But 21 years is good going!

[ Parent ]
Not entirely true... (none / 0) (#150)
by Uller on Sat Apr 13, 2002 at 05:52:53 PM EST

There are a couple projects out there that are well structured. For example, nearly six months was spent planning out the structure for LCDriver v2 - it's well documented down to the file and function level, well before any coding started. It's about 80% done code-wise at this point - the driver core is being released under LGPL, and the demo apps that come with it under BSD. Currently in private beta.

- Ryan "Uller" Myers

Given an infinite amount of time, an infinite number of monkeys with typewriters must eventually produce the collected written works of Shakespeare. John Romero's Daikatana was a five minute, ten monkey job.
[ Parent ]
Psychoacoustics are the difference (none / 0) (#223)
by pin0cchio on Sun Apr 14, 2002 at 10:16:14 PM EST

Algorithms that are new: Hmmmm. Tough. Someone mentioned MP3, but that's just a form of lossy compression, and the loss of data to compress is something that's been around a long time [since the invention of PSTN].

But notice that the public switched telephone system's bandpass filter on 300-3600 Hz grossly distorts voiceless fricative consonants such as [s], [f], [ʃ] (sh), [θ] (th), and [x] (ch in loch). The difference between the PSTN's codec (bandpass filter followed by μ-law quantization) and modern audio codecs (MP3, Vorbis, M3V, SBM[1], etc) is that the modern codecs use an adaptive method (either Fast Fourier Transform or linear prediction) to adapt to a particular input and allocate more bits to what the human auditory system can actually hear based on a psychoacoustic simulation.

[1] SBM is Super Bit Mapping, Sony's trademark for a noise-shaped dither from 20-bit to 16-bit PCM that shoves all the quantization noise up into the top third of the spectrum (16-22 kHz) where the ear is least sensitive. It is used primarily in the CD Audio and DAT formats. Microsoft's competing HDCD technique is apparently SBM on the high-order bits and then a lossy codec in the last bit to encode the difference between the SBM'd signal and the true 20-bit signal. Or something.


lj65
[ Parent ]
Relational Gap (3.57 / 7) (#133)
by Baldrson on Sat Apr 13, 2002 at 03:30:04 PM EST

When Gauss said "Mathematics is the study of relations." he didn't have computer science in mind, but both CS and math curricula leave relational theory as a backwater subject even though it was the primary objective of Principia Mathematica and, due to the relational database industry, is arguably the most important field of mathematics for the computer industry outside boolean algebra.

-------- Empty the Cities --------


I'm not convinced (none / 0) (#146)
by jolly st nick on Sat Apr 13, 2002 at 05:29:23 PM EST

I'm not convinced that relational theory has all that much additional practical potential over what it has already achieved. I'd be interested in specific benefits you think could be gained by additional research.

Many of the subtler aspects of relational theory (for example membership in higher normal forms) hinge on niceties of semantic assertions which aren't all that reliable in practice, which is why a lot of design guides say go to 3NF and don't worry much more. I can think of a few results that have some nice practical applications, such as R. Fagin's theorems on certain structural special cases (e.g. 3NF + all candidate keys simple = 4NF), but nothing earth shattering.



[ Parent ]

I Wouldn't Expect You To Be Convinced (4.50 / 2) (#187)
by Baldrson on Sun Apr 14, 2002 at 05:08:05 AM EST

If it took Tony Hoare most of his career to come to the conclusion that relations were the focus of CS, despite his being a Turing award winner, I'd hardly expect a random software expert to be so convinced by a short comment at K5.

Nevertheless, it is verifiable that Russell and Whitehead had set out to develop what they called Relation Arithmetic as the crowning achievement of Principia Mathematica and that Codd's work, while manifestly valuable, didn't scratch the surface of what R & W set out to accomplish in the final volume of P.M.

Basically all you have to do to make major advances in the mathematics of relations is look at the grid-lock into which R & W put themselves in P.M.'s last volume, as though it were a failure to specify spaces. Essentially their conception of relational similarity could not allow composition of relations due to the fact that they didn't conceive of spaces properly. Relation spaces are much like geometric spaces that allow congruence, as well as similarity. Take it from there if you're serious.

I am doing so.

This is where I worked from with the E-Speak project's advanced research with relation arithmetic (that was still too abstract after a short few months for direct application).

If you want to really get into the really heavy implications to natural science here go get a copy of Bit String Physics and read the article "Process, System, Causality and Quantum Mechanics: A Psychoanalysis of Animal Faith" as well as the following paper.

-------- Empty the Cities --------


[ Parent ]

Why Apps and Not Concepts? (3.75 / 4) (#138)
by czolgosz on Sat Apr 13, 2002 at 04:15:34 PM EST

It seems that the majority of posts list new kinds of applications, rather than new concepts.

I'd say that, in the past 20 years, the biggest changes have been:

  • Emergence of software engineering as a discipline. Yeah, it was there in 1980, but quite embryonic.
  • Increased adoption of functional programming concepts. Again, FP was there in 1980, but it's much more widely used now.
  • Growth of interpreted language use and the adoption of high-level scripting languages.
  • Visual programming paradigms.

    As far as fundamental new algorithms, how about public-key encryption? Most of that happened in the past two decades (though Diffie-Hellman goes all the way back to 1976).
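As a reminder of how simple the core trick is, here is a toy Diffie-Hellman exchange in Java using the textbook parameters p = 23, g = 5 (real deployments need enormous primes plus authentication on top; this only shows the shape of the idea - the shared secret is never transmitted):

import java.math.BigInteger;

// Toy Diffie-Hellman key agreement with textbook-sized numbers.
public class ToyDiffieHellman {
    public static void main(String[] args) {
        BigInteger p = BigInteger.valueOf(23);   // public prime modulus
        BigInteger g = BigInteger.valueOf(5);    // public generator
        BigInteger a = BigInteger.valueOf(6);    // Alice's private exponent
        BigInteger b = BigInteger.valueOf(15);   // Bob's private exponent

        BigInteger A = g.modPow(a, p);           // Alice sends 8 in the clear
        BigInteger B = g.modPow(b, p);           // Bob sends 19 in the clear

        System.out.println("Alice's shared secret: " + B.modPow(a, p));  // 2
        System.out.println("Bob's shared secret:   " + A.modPow(b, p));  // 2
    }
}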
    Why should I let the toad work squat on my life? --Larkin
  New Concepts? (3.00 / 1) (#156)
    by porkchop_d_clown on Sat Apr 13, 2002 at 07:58:55 PM EST

    1. Software Engineering was "embryonic" in 1980? In what context? At that point, people had been receiving advanced degrees in computer science for over twenty years. Actually, as far as I can tell it was embryonic then - and it still is. Everyone talks about it, but almost no one actually does it in the field.
    2. FP - define "widely used".
    3. Growth of interpreted languages? What, you mean because Java has replaced BASIC? Have you ever heard of shell scripting or JCL?
    4. "Visual programming paradigms" - I'll admit, I don't even know what that means. UML? Turtle graphics?

    --
    Uhhh.... Where did I drop that clue?
    I know I had one just a minute ago!


    [ Parent ]
    Well, Maybe New-ish (4.00 / 1) (#183)
    by czolgosz on Sun Apr 14, 2002 at 12:23:05 AM EST

    Those are some good questions...
    1. Software Engineering was "embryonic" in 1980? In what context? At that point, people had been receiving advanced degrees in computer science for over twenty years. Actually, as far as I can tell it was embryonic then - and it still is. Everyone talks about it, but almost no one actually does it in the field.
    Well, SE isn't CompSci. I've been doing SE for a very long time, and it was still in its formative stages in 1980, degrees or not. There were few institutions offering an SE (as opposed to Comp Sci) degree. Fewer yet offered systems engineering. Comp Sci then was often quite vocational, and when it was rigorous, it was algorithm-focused rather than process-focused (think Don Knuth). Nothing wrong with that, but definitely not "big picture." The closest to that was probably operations research. Very little was known about what made some software projects succeed and others fail, and what was known often had only anecdotal support. Now at least there are some "islands" of process repeatability (the Gang of Four patterns on the technical side, a couple of reasonable end-to-end development methodologies), and some real statistics that demonstrate the effectiveness of certain processes (such as firm real-world evidence proving the cost-effectiveness of in-phase defect correction). But you're right that software engineering is more honored in the breach than in the observance, and it's not a fully mature discipline. Still, it's MUCH better now than it was then.
    2. FP - define "widely used".
    Well, for one thing, now I can occasionally find people who know how to do it. And several reasonably well-known higher-level languages, including Python, have at least some readily accessible FP features. OK, my kid still isn't being taught Haskell in 6th grade, but maybe next year, huh?
    3. Growth of interpreted languages? What, you mean because Java has replaced BASIC? Have you ever heard of shell scripting or JCL?
    I was thinking of Perl and Python as well as Java. Scheme seems to be showing up in more places too. Shell scripting and JCL aren't semantically rich enough to be all that useful for general-purpose coding. It's quite possible to code up a "real" app in Perl or Python, though performance might be an issue in some cases. I'd certainly never write a 5,000-line app in (say) C shell or JCL if I could possibly avoid it. There are quite a few apps now that contain a lot of interpreted content. Not to mention extension languages, embedded scripting capabilities, etc.
    4. "Visual programming paradigms" - I'll admit, I don't even know what that means. UML? Turtle graphics?
    Oops, that WAS a bit obscure, I went to get a beer in the middle of writing that and halfway lost the thought. I was referring to IDEs with well-integrated GUI builders, and graphic tools to support things like class browsing and method disclosure. UML, well... good for what it is, but I'm still waiting to see the promise fulfilled. IDEs made a huge difference. Remember, there were still punched cards in use in 1980 (though I wasn't having to use them by then). The code-unit test cycle could take a couple of days at times. Now it can be minutes. BTW, I smiled at the "turtle graphics" comment... remember LOGO? Sheesh.
    Why should I let the toad work squat on my life? --Larkin
    [ Parent ]
    IDEs (3.00 / 1) (#197)
    by porkchop_d_clown on Sun Apr 14, 2002 at 02:25:37 PM EST

    Yeah, IDEs are a huge step forward in terms of the tools we have to use. In my opinion, good IDEs obliterated the main advantage of incremental interpreters like Forth and BASIC - being able to measure the code-test-debug cycle in seconds or minutes instead of hours.

    Python and Perl - well, Perl is nice, I haven't really used Python, but neither is really all that new - one person has already mentioned REXX, and I should be dope slapped for not mentioning AppleScript. It's not 20 years old (more like 10 or 15), but it does everything that Perl can do, plus directly interfaces with nearly every Mac application ever written.


    --
    Uhhh.... Where did I drop that clue?
    I know I had one just a minute ago!


    [ Parent ]
    AppleScript (3.00 / 1) (#209)
    by czolgosz on Sun Apr 14, 2002 at 06:42:59 PM EST

    I should be dope slapped for not mentioning AppleScript.
    And so should I. Talking about RAD tools earlier, the first really good GUI builder I played with was... well... Hypercard.
    Why should I let the toad work squat on my life? --Larkin
    [ Parent ]
    High-level scripting (4.00 / 1) (#185)
    by KWillets on Sun Apr 14, 2002 at 02:59:01 AM EST

    * Growth of interpreted language use and the adoption of high-level scripting languages. *
    Try REXX (1979- ). Many IBM internal apps were created in the 80's in REXX, and it was used as a macro language for editors and other apps. Having used it, I don't see anything major about perl - same easy variable typing, file handling and parsing, and command execution. In fact I noticed the other day that someone implemented REXX's PARSE command in a perl module. Surprisingly, it still seems to have a following.

    [ Parent ]
    I can't believe I forgot about REXX! (none / 0) (#195)
    by porkchop_d_clown on Sun Apr 14, 2002 at 02:20:55 PM EST

    And here I was being swayed by the python and perl fans.

    Heck, and AREXX was a standard part of AmigaDOS, too.


    --
    Uhhh.... Where did I drop that clue?
    I know I had one just a minute ago!


    [ Parent ]
    The glass has gotten fuller. (4.62 / 8) (#140)
    by jolly st nick on Sat Apr 13, 2002 at 04:30:32 PM EST

    I've been programming professionally since around 1980. In some respects, fundamentals don't change. The field is much more commercially important now, and has many more people in it, and so is subject to bewildering fads like XML. I'm not against XML, mind you, it's just that its committee origins show too plainly.

    On the other hand, skillfully constructed software is much more common. So is unskillfully constructed software; software is just much more pervasive and there are a lot more people doing it. Still, in absolute terms there are many, many more programmers capable of creating well designed, complex software systems than when I started out.

    Methodologies have gotten better, in my opinion. XP and other modern development models are focusing on faster delivery of business value, which is good; back then methodologies were more cargo-cultish attempts to recreate rare successes through outlining.

    So far as 'bloat' is concerned, we are just writing much, much more ambitious software than we used to. Back then 90% of programs could probably be characterized as utilities or filters. They'd take a file of time entries and create a file of payroll records or some other kind of discrete and easily characterizable transformation. In some ways, the "software tools" movement, laudable as it was, has been passed by. Now, software almost always has numerous interfaces. Not only user interfaces, but interfaces to persistent, long term databases, network interfaces, etc. A piece of software is now almost always an actor or agent within a larger system, and may have a variety of responsibilities. The principles of design haven't changed, but the ways in which they can be violated have multiplied vastly.

    The complex responsibilities of software drive greater sophistication and, unfortunately, complexity. This can be seen in terms of design movements. In the 1970s, the focus was on clearly written expressions and well organized representation of algorithms. Thus books like "Elements of Programming Style" (still a very good read IMO) and movements like structured programming. Recently, you are more likely to have heard of things like "Design Patterns" -- this is in a sense orthogonal to the earlier buzzwords, and indicates a shift in focus towards organization on a grander scale -- software as actor within a system. One cost of this shift in emphasis is that there may have been some backsliding on the earlier quality metrics.

    I believe the next great design movement will probably be around even larger structures distributed in space and time, engendered by things like SOAP and .NET. Certainly people have written things like CORBA based distributed systems in the past, just as some people back in the 1970s could somehow magically produce large "systems" programs. But I think we will begin to see people struggling to codify what it takes to produce a service or agent running in such environments that is reliable, maintainable and secure.



    Hidden Markov Modelling for Speech is less than 20 (3.66 / 3) (#141)
    by StephenThompson on Sat Apr 13, 2002 at 04:41:43 PM EST

    A major breakthrough in speech recognition was invented within the last twenty years. Continuous speech recognition would not work if it weren't for the advent of hidden Markov modelling. This and other algorithms have been invaluable to any sort of pattern recognizer.
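For anyone who hasn't met them, here is a toy Java sketch of the forward algorithm that HMM recognizers are built on (the probabilities are invented; real recognizers score acoustic feature frames rather than coin-flip symbols):

// Toy 2-state HMM: the forward algorithm computes the probability of an
// observation sequence given the model.
public class HmmForward {
    public static void main(String[] args) {
        double[]   pi = {0.6, 0.4};                    // initial state probabilities
        double[][] a  = {{0.7, 0.3}, {0.4, 0.6}};      // transition probabilities
        double[][] b  = {{0.5, 0.5}, {0.1, 0.9}};      // emission probabilities for symbols 0 and 1
        int[] obs = {0, 1, 1};                         // observed symbol sequence

        int n = pi.length;
        double[] alpha = new double[n];
        for (int i = 0; i < n; i++) alpha[i] = pi[i] * b[i][obs[0]];   // initialization

        for (int t = 1; t < obs.length; t++) {         // induction step
            double[] next = new double[n];
            for (int j = 0; j < n; j++) {
                double sum = 0;
                for (int i = 0; i < n; i++) sum += alpha[i] * a[i][j];
                next[j] = sum * b[j][obs[t]];
            }
            alpha = next;
        }

        double total = 0;
        for (double v : alpha) total += v;             // termination: sum over final states
        System.out.println("P(observations | model) = " + total);
    }
}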

    HMMs go back to the seventies (none / 0) (#160)
    by YU Knicks NE Way on Sat Apr 13, 2002 at 08:17:36 PM EST

    The forward-backward algorithm for training HMMs was published in 1970. Janet Baker published a summary of results with DRAGON, an HMM-based SR system, in 1975. Bahl et al. published a preliminary study of the performance of a continuous speech recognition system based on HMMs in 1976.

    [ Parent ]
    *yawn* (3.00 / 3) (#142)
    by 0xdeadbeef on Sat Apr 13, 2002 at 04:56:59 PM EST

    Can you teach me something?
    Probably not. In fact, you seem to have beaten the master.

    Nothing like snotty sarcasm (none / 0) (#157)
    by porkchop_d_clown on Sat Apr 13, 2002 at 07:59:59 PM EST

    When you have nothing intelligent to say.


    --
    Uhhh.... Where did I drop that clue?
    I know I had one just a minute ago!


    [ Parent ]
    Ex-actly (none / 0) (#205)
    by 0xdeadbeef on Sun Apr 14, 2002 at 05:05:20 PM EST

    I had a more thoughtful rebuttal, comparing your article to that "where's my flying cars" commercial, but then I thought, "you know, he hasn't said anything substantial. Just an old fogey, trolling the newbie programmers, and pissing in the face of everyone who has advanced the state of the art in the last 20 years." So I was like, "fuck it, I'll say something snide and get on with me life."

    BTW, you can't see the forest for the trees. You're using the most influential advance in computing since the integrated circuit. Just because it was invented in the sixties doesn't discount the myriad applications discovered for it in the last decade or so.

    If you're going to keep splitting hairs, then who cares, because as others have said, it's all Turing machines.



    [ Parent ]
    TRS-80 (3.00 / 1) (#153)
    by vile on Sat Apr 13, 2002 at 06:39:12 PM EST

    1986 baby! Right there with ya. New revisions of old ideas.. isn't that how everything goes?

    ~
    The money is in the treatment, not the cure.
    1986? (none / 0) (#245)
    by b1t r0t on Tue Apr 16, 2002 at 03:01:21 PM EST

    1986 baby! Right there with ya. New revisions of old ideas.. isn't that how everything goes?

    Huh? 1986 was the year the TRS-80 died, if it was even that late. Furrfu. Kids these days.

    -- Indymedia: the fanfiction.net of journalism.
    [ Parent ]

    Actually.. (none / 0) (#249)
    by vile on Wed Apr 17, 2002 at 05:20:57 PM EST

    Being my first computer, it was turned on in december of '85. And I know that it was still available in your local Radio Shack store for at least a year afterwards..

    Yeah.. kids these days.. we still have a decent memory. ;)

    ~
    The money is in the treatment, not the cure.
    [ Parent ]
    And.. (none / 0) (#250)
    by vile on Wed Apr 17, 2002 at 05:30:17 PM EST

    http://www.trs-80.com/trs80-c.htm

    Google is your friend. Be friendly.

    ~
    The money is in the treatment, not the cure.
    [ Parent ]
    Learn Haskell (3.25 / 4) (#154)
    by SIGFPE on Sat Apr 13, 2002 at 07:21:24 PM EST

    Can you teach me something?
    Yes, it's just lambda calculus which is decades old, but it's a far cry from Trash 80 BASIC and pretty mind expanding.
    SIGFPE
    The next revolution - bidirectional compiler IDEs (3.00 / 2) (#173)
    by ka9dgx on Sat Apr 13, 2002 at 09:23:45 PM EST

    I'm just egocentric enough to think that I've discovered what's really going to change the face of programming in the next 20 years, and here it is:
    Compilers are evil.

    When you take your tenderly crafted human readable source code, then feed it to a compiler, it perpetuates a great evil, and you allow it!

    The evil is subtle, and very powerful. Your source code gets shredded, in a one way transformation into whatever the compiler wants to make of it.

    When you can work the compiler in both directions, programming gets a LOT simpler.

    --Mike--

    Uhh (none / 0) (#177)
    by BlackTriangle on Sat Apr 13, 2002 at 09:51:28 PM EST

    You mean the way Java does?

    Moo.


    [ Parent ]
    No known compiler (none / 0) (#193)
    by ka9dgx on Sun Apr 14, 2002 at 10:32:06 AM EST

    I know of no compiler that works both forwards and backwards. It needs to be a transformation engine, allowing you to modify the symbol table, and have that cause the source to be updated, for example.

    --Mike--

    [ Parent ]

    yep (none / 0) (#247)
    by gps on Tue Apr 16, 2002 at 08:26:07 PM EST

    that's why i prefer to code in python these days. working with a language that has an interactive interpreter can make life so much easier.

    really what high level languages have bought us these days is a huge set of available libraries on top of being able to write useful code much faster. so what if it runs a bit slower, profile your code later and optimize the 5% that takes 99% of the execution time.

    [ Parent ]
    I don't think it's a bad thing (4.66 / 6) (#178)
    by Sethamin on Sat Apr 13, 2002 at 10:10:13 PM EST

    I would have to disagree with you that this is such a bad thing. The reason that Computer Science is a "Science" is that there are some fundamental concepts and theories behind it. If there weren't, then it would be nothing but vocational. As one professor in my department used to say (paraphrasing):

    If you want to learn how to program some language, go to Computer Learning Center. We're here to teach you the fundamentals behind programming; the theories, the algorithms, and the conceptual framework you need. If you know how to program in C++, you'll probably be useful for 10 years. If you know the science behind it, your skills will never be obsolete.

    You are clearly living proof of that. It means that your professors taught you well enough that your skills are still as applicable today as they were when you began. There is inherent similarity in all programming that transcends any particular language you use. And once you know it, you're golden.

    Your disdain for the "reversal" in programming is also misplaced. It is nothing less than a paradigm shift in the world of computing. It used to be that programmers were cheap and hardware expensive; now it is completely the opposite. It's kind of like lamenting that furniture is rarely made by hand anymore. Handmade furniture is (generally) better quality, but this is reality we're talking about here. Time is money, and nothing more so than a programmer's time. Sure, we can all appreciate good, solid, efficient code when we see it, but from an economic perspective it's not worth most programmers' time to do it.

    Lastly, I do think you have a point about the major advances being gone. Software is far more about penetrating the commercial world than coming up with new and radical ideas. I think it is fair to say that most of the field has been pioneered and at this point we're dealing with a maturing industry. Hope you enjoyed the golden years of advancement while they lasted. Most fields probably go through this and eventually quiet down to a sustainable pace, and Computer Science is really no different.

    On a final note, I think that graphics have come a hell of a long way since your time, to name one thing. AI hasn't moved much as of late but I still think there's a ton of innovation ready to happen there as well. And databases have become far more efficient in the past decade as well. I agree that the core concepts have mostly been mapped out, but a lot of the more peripheral stuff has changed quite a bit.

    A society should not be judged by its output of junk, but by what it thinks is significant. -Neil Postman

    Linear logic, anyone? (3.00 / 2) (#184)
    by washort on Sun Apr 14, 2002 at 02:44:27 AM EST

    the point you're missing is that there's about a 20 year lag between theory/initial implementations and initial acceptance. hence the gap between Smalltalk and Java, etc.

    Learned anything? I don't know... (3.00 / 2) (#188)
    by AWalker on Sun Apr 14, 2002 at 05:23:35 AM EST

    I have to say, I'm a PhD student, and I've been programming since I was about 10 (on my good old Atari ST!). Since then, I've seen languages come and go, and I have seen what we are currently left with. I'm doing quite a lot of Java, Perl, Python and PHP, with some C++ dropped in there too.

    The scripting languages are just lots of fun to use, and you can really get things done. A few lines of Perl here, and all of your files are re-edited the way you want and sent to your database. These languages are fast and generally very nice to use. You write something, it doesn't work, you insert the bracket you missed, and that is it. In C++ at least you can build things relatively quickly with decent APIs these days, and if you need to build something yourself, you can get it done in a logical way.

    Java? It sucks. It seems that every different subsection of a task has a different 'framework', so that maybe there are three ways to do a given task, two of which aren't capable of doing it, and the other is deprecated and undocumented. Non-uniform naming of base classes and their methods is a big issue as well. Classes which don't work as they intuitively should; classes where you have to do complicated bit shifting all over the place to try and get your structures working.

    Here is an example. Anyone who has used Java will probably be familiar with the Integer.parseInt() method, which takes a String and produces an int. Now, suppose you wanted to take a double held in a String and get it out as a double? Create a Double object from that String, then get the Double to output a primitive double into an assignment. It took what, three or four years until Java 1.2 before some bright spark thought of creating a Double.parseDouble() method. Compiling for Java before 1.2? Write a string parser yourself. You would expect a very basic class such as this, and the group of Integer, Double, Float and Boolean, to share the same method types, but no. This complete lack of standardisation throughout the entire language leaves you flicking through massive reference volumes every time you want to use a method. Even then, they may not work as advertised, or have shortcomings that can't be foreseen until a user puts in some extreme data. I guess I'm just frustrated with Java; it seems the language slows me down rather than allows me to be more productive.

    To be frank, if someone took the Atari GFA Basic and added some modern GUI hooks and comms commands, it would be so very easy to write the quality applications that we need. Granted, it isn't OO, but any decent programmer can usually write around that. I've run my old Atari games lately for nostalgia, on the original machine and under emulation (you don't want to try and play them with the CPU speed unlimited :) ), and then tried coding something in Java. Why is the Java dramatically slower on my 1.4 GHz Athlon with 512MB of RAM than the old games on my 8 MHz Atari? "Java is almost as fast as C these days..." I'll believe it when I see it.

    In Java's Support (none / 0) (#228)
    by philwise on Mon Apr 15, 2002 at 06:18:04 AM EST

    While I agree that these problems with Java exist, I think that they are more to do with the implementation of Java (and its libraries) than any technical flaws in the design. K&R C didn't have type checking on function calls (yes, I understand why) but given time it evolved. I believe that Java, while not perfect at the moment, is a solid base to work from. For example, compare RMI to CORBA/RPC. RMI involves practically zero effort to use within a well-written OO program. Sun have spent a lot of effort getting things like this right.

    The Sun gui libraries suck, but given time I think they will improve. Java has only been going a few years, and given time it might mature as the 'standard' language in use at the moment.

    In terms of asking if we have learned anything, I don't think that the rock pools are the right place to be looking. The big developments have happened 'in the large.' I can now read my email from just about anywhere. This isn't because any new algorithms have been discovered, but because all the parts are coming together. If you want a maths analogy then consider vectors. Explaining the precession of a gyroscope takes about half a page with vectors, but I couldn't even attempt the maths without using them.

    Developments like Java are, in my opinion, the equivalent of vectors in maths: neither lets us do anything we couldn't do before, but both bring more things within reach at a given level of difficulty.
    --
    (presenter) "So, altogether now, what are we?"
    (audience) "We are all Free Thinkers."
    [ Parent ]

    Bad example (you're just plain wrong) (none / 0) (#236)
    by lordpixel on Mon Apr 15, 2002 at 12:08:17 PM EST

    Now, I might be inclined to agree with some of what you say, but you're not doing yourself any favours by choosing the double example you did.

    Firstly, as you note it got fixed in JDK 1.2, which was what? Over three years ago now...? So a mistake was made and it got fixed - what's your point?

    Well, I have to use JDK 1.1 every day at work still, so maybe you do have one:

    >Now, suppose you were wanting to try and take a >double in a String and get that out to a double?

    JDK 1.2+:

    double d = Double.parseDouble(someString);

    JDK 1.1:
    double d = new Double(someString).doubleValue();

    Yes, the 1.1 version creates an unnecessary object, and there should have been a .parseDouble() from the start, but it was never as bad as how you described it.

    I don't know. You may have a point - I'd agree that some of the Java libraries have design flaws, but you're not convincing me with that argument. My point would be that while I can see better ways to implement some of the standard library functionality, it's mostly a matter of it being less convenient, rather than impossible. Unless you have a good example of where it's actually a serious problem?

    One example that springs to mind is java.io.File; also, the pre-JDK 1.4 socket/IO stuff badly needed what was done in 1.4 in _some_ cases, but for most people the original stuff was good enough.

    Which is my real point: if you listen to a certain vocal contingent whine, especially the self proclaimed "experts" found on The Other Site, the problems with the Java libraries should mean that my professional career for the last 4 years was impossible. Every piece of software I've been involved with at work was doomed to fail from the outset.

    Plainly this is not true: much good work got done when all we had was JDK 1.1. Where the APIs are good, progress is swift; where they could stand improvement, people manage. My assessment is things are not perfect, but what language is? More importantly, the overall trend with each JDK release is upwards.


    I am the cat who walks through walls, all places and all times are alike to me.
    [ Parent ]
    Blame Microsoft (3.00 / 2) (#206)
    by brandon21m on Sun Apr 14, 2002 at 05:30:44 PM EST

    Instead of skill, programmers began simply writing incredibly bloated code. I've written payroll software that ran on the 4k computer I mentioned above. I've written adventure games that ran on a calculataor with 4.6k of RAM and a processor speed that was measured in kilohertz. These days simple hello world programs compile to over 128k in size, and a letter to grandmom can require over a megabyte of disk storage!

    You can blame MS for making binaries so big. I can make a binary in Linux that is 20k, and the same code compiled with VC++ comes out at 125k or so. That is pathetic and unnecessary.

    Another thing: if you want an example of code bloat, just look at any of MS's applications; a flight sim in a spreadsheet, anyone?



    Get off your high zealot horse (4.00 / 2) (#226)
    by erlando on Mon Apr 15, 2002 at 05:14:36 AM EST

    You can blame MS for making binaries so big. I can make a binary in Linux that is 20k and the same code in VC++ compiles 125k or so. That is pathetic and unnecessary.
    Oh please! Go learn VC++ before posting something like this. "Hello World" without any effort put into it is easily down to 40K, compared to the 20K that gcc produces (on the same platform, mind you - there's no sense in comparing two different compilers on two different platforms). And that's with a lot of unneeded libs linked in. But why should I care? Why should I spend time getting my program down in size?

    The time spent tweaking the size of the binary is far better spent at the other end of the spectrum: caring for the design, for clean, structured code, and for maintainability. Face it, the days when the size of the binary mattered are over. Standard disk size is what? 20GB? 40?

    Joel on Software has a pretty good article about so-called bloatware, including comparisons of the cost of hard-disk space.

    [ Parent ]

    Not really. (none / 0) (#242)
    by andylx on Mon Apr 15, 2002 at 09:16:20 PM EST

    Last year I had to create an "internet voice communication" app that recorded your voice, compressed it, uploaded it to a server, and managed a contact/mailing list - so, a little more complex than "hello world". It ended up being only 85k. Of course, it required the MFC runtime libs (~500k), but since everyone has them already it's not (for me) a big deal. Still, when I linked statically the executable size came to ~180k. The typical bloat in MS apps has little to do with their dev tools and more to do with "features" you don't need (yes, I don't need/want full-motion video in my finance app).

    [ Parent ]
    i can see the point... (none / 0) (#260)
    by juln on Sun Apr 21, 2002 at 06:38:13 PM EST

    Microsoft based their business around selling upgrades to software, and the hardware companies based theirs around selling progressively faster hardware. I would say both are to blame for computers getting faster and faster while doing about the same things, just a little more quickly.

    [ Parent ]
    You're right, sort of (4.66 / 3) (#207)
    by epepke on Sun Apr 14, 2002 at 06:32:11 PM EST

    I'm an old fart, too, and to some degree you're right. However, not quite. Here are some things I can think of that are less than 20 years old. Most of them are in computer graphics, which is where I have done most of my research:

    1. The Rendering Equation (ca 1986, I think). The basics of ray tracing have been known for a long time (Newton did it, and ray tracing for a sign company was one of Turing's first paying jobs). Radiosity has been in practice since the 1970's. However, a single equation that could be solved several ways - something like the Navier-Stokes equations for light - didn't appear until the Rendering Equation (a standard modern statement of it appears after this list).
    2. Non-Photorealistic Rendering (ca 1990-present). This wasn't really anticipated by the pioneers, let alone done. I'm talking about painterly renderings, artificial engraving, pen-and-ink illustration from geometry. Most of this still hasn't made it into the mainstream yet.
    3. Image-based rendering (ca 1995-present). This is very hard to describe but is basically the opposite of geometry; images are combined in different ways to generate 3-D models, sucking perspective and stereopsis out of image information. 20 years ago people basically knew that images existed and you could apply the FFT algorithm; the stuff nowadays is way advanced.
    4. Automatic capturing of geometry. As of a year or two ago, it is possible to walk around with an uncalibrated hand-held camera and, completely automatically, generate a satisfactory, solid 3-D model. No hand tweaking required, but the math is enough to drive you to drink. Heavily.
    5. Satisfactory artistic tools. 1984 was the date of the first good ray-traced image with motion blur. Who Framed Roger Rabbit used a purely optical system for the highlights on characters. The Hitchhiker's Guide to the Galaxy used traditional, by-hand animation to make the Guide look computer-like, but there was no computer graphics. Nowadays, the tools in the hands of the artists are good enough for them to do real art on a regular basis.
    6. Massive parallelism (ca 1985). Not just the fact that you can put together a hypercube, but the fact that there are intelligent algorithms to be able to organize the pieces.
    7. Cluster computing (ca 1988). I'm talking about the good stuff, like Linda, not rehashed crap like CORBA.
    8. Compiler optimization (ca 1985-1995). The modern algorithms really are a lot better than they were; the whole RISC philosophy (which of course is just Turing's ACE), but actual algorithms to handle it reasonably well. Twenty years ago, you could always hand-tweak assembly to get factors of several improvement over compilers; now compilers are pretty good. Of course, it hardly matters if everybody buys an Intel CISC chip.
    9. Speech recognition. It's still crap, but it doesn't smell as bad.
    10. OCR. Fifteen years ago, you could shell out $45,000 for a Kurzweil or suck wind. Now you can do a fairly passable job.
    11. Fonts look good. Hint-based font algorithms are way better than they used to be. TeX documents in 1983 looked terrible.
    12. Multimedia. It used to be that people thought desk-top moviemaking was a pipe dream. Now it's pretty common.
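
    For reference, the rendering equation as it is usually written today (notation varies by author; this is the common hemispherical form rather than Kajiya's original surface-to-surface formulation):

        L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, d\omega_i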

    Of course, much of this is simply refinement, but there are some genuinely new things in the list. Also, what has gotten better vanishes in comparison to what has stayed the same or even gotten worse. Most of what has gotten worse involves delusions of adequacy. I occasionally look at the Scheme 5 report for pleasure. It has a line in it that goes something like "the reason Scheme programs do not (usually!) run out of space is that a Scheme implementation is permitted to reclaim storage if it can prove that it cannot affect any future computation. [italics mine]" This is an entirely different mindset from "gee, we've got a memory leak; let's throw together a garbage collector." As I write this, there's another thread about Python and how it didn't have more than reference counts until recently.


    The truth may be out there, but lies are inside your head.--Terry Pratchett


    I always learn (4.50 / 2) (#210)
    by danimal on Sun Apr 14, 2002 at 07:28:18 PM EST

    Whether it is optimizing or a new algorithm, I am always learning. I definitely feel it when I'm not learning.
    --
    <bestest> what does the dark side lead to
    <@justinfinity> a gleeful life of torturing people and getting your way
    and also (3.50 / 2) (#211)
    by danimal on Sun Apr 14, 2002 at 07:31:10 PM EST

    there is a large body of work to be learned from our peers. I enjoy learning from the older programmers at work.
    --
    <bestest> what does the dark side lead to
    <@justinfinity> a gleeful life of torturing people and getting your way
    [ Parent ]
    oh, um (3.50 / 2) (#213)
    by danimal on Sun Apr 14, 2002 at 08:07:14 PM EST

    (i hate it when k5 goes down)

    being in a really small R&D group helps a lot. we are all independent workers, but there is lots to be learned from looking at things that have worked for years.
    --
    <bestest> what does the dark side lead to
    <@justinfinity> a gleeful life of torturing people and getting your way
    [ Parent ]

    a bit like knuth (3.50 / 2) (#214)
    by danimal on Sun Apr 14, 2002 at 08:08:07 PM EST

    it's a bit like knuth. great history to refer to and it keeps you guessing.
    --
    <bestest> what does the dark side lead to
    <@justinfinity> a gleeful life of torturing people and getting your way
    [ Parent ]
    did i mention (3.50 / 2) (#215)
    by danimal on Sun Apr 14, 2002 at 08:08:51 PM EST

    did I mention that some of the code is crazy to look at? The optimizations and different coding styles are nutso sometimes.
    --
    <bestest> what does the dark side lead to
    <@justinfinity> a gleeful life of torturing people and getting your way
    [ Parent ]
    I do hate... (3.50 / 2) (#216)
    by danimal on Sun Apr 14, 2002 at 08:09:51 PM EST

    picking up bad habits from that code though
    --
    <bestest> what does the dark side lead to
    <@justinfinity> a gleeful life of torturing people and getting your way
    [ Parent ]
    of course (3.50 / 2) (#217)
    by danimal on Sun Apr 14, 2002 at 08:10:42 PM EST

    a lot of what i write is new code. integrating it can be a pain though, too many ways of doing things that I don't like.
    --
    <bestest> what does the dark side lead to
    <@justinfinity> a gleeful life of torturing people and getting your way
    [ Parent ]
    and systems, don't get me started (3.50 / 2) (#218)
    by danimal on Sun Apr 14, 2002 at 08:12:01 PM EST

    the messiest part of all that code is supporting multiple platforms. all those damn #ifdefs and stuff.
    --
    <bestest> what does the dark side lead to
    <@justinfinity> a gleeful life of torturing people and getting your way
    [ Parent ]
    better that than a monoculture though (3.50 / 2) (#219)
    by danimal on Sun Apr 14, 2002 at 08:13:30 PM EST

    however, a monoculture in computing would be boring. Part of the challenge is making the code work right and compile well, with optimization, on every arch.
    --
    <bestest> what does the dark side lead to
    <@justinfinity> a gleeful life of torturing people and getting your way
    [ Parent ]
    and finally (3.50 / 2) (#220)
    by danimal on Sun Apr 14, 2002 at 08:14:14 PM EST

    finally, i just want to say that some archs are sucky and should die. like IRIX.
    --
    <bestest> what does the dark side lead to
    <@justinfinity> a gleeful life of torturing people and getting your way
    [ Parent ]
    LSP ! (4.00 / 1) (#221)
    by kaltan on Sun Apr 14, 2002 at 08:26:49 PM EST

    It is hard to believe that any decent computer program could ever have been written before the Liskov Substitution Principle (Barbara Liskov, 1988) was formulated.
    This paper is a nice introduction.
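
    For anyone who hasn't met it, the principle just says that code written against a supertype must keep working, unchanged, when handed any of its subtypes. A minimal sketch (the Shape/Report names are mine, not taken from the paper):

        // Hypothetical sketch: code written against Shape must not care which
        // concrete subclass it is actually handed.
        abstract class Shape {
            abstract double area();
        }

        class Circle extends Shape {
            private final double r;
            Circle(double r) { this.r = r; }
            double area() { return Math.PI * r * r; }
        }

        class Rect extends Shape {
            private final double w, h;
            Rect(double w, double h) { this.w = w; this.h = h; }
            double area() { return w * h; }
        }

        class Report {
            // Written once against the supertype; the LSP is the promise that
            // this keeps working for every well-behaved subclass, present or future.
            static double totalArea(Shape[] shapes) {
                double sum = 0;
                for (int i = 0; i < shapes.length; i++) sum += shapes[i].area();
                return sum;
            }
        }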

    Not all computer programs are OO. (none / 0) (#224)
    by i on Mon Apr 15, 2002 at 03:12:12 AM EST

    Not even all decent ones.

    and we have a contradiction according to our assumptions and the factor theorem

    [ Parent ]
    nothing new since the dark ages (4.00 / 1) (#225)
    by boxed on Mon Apr 15, 2002 at 03:26:40 AM EST

    Programming is math. No advance in maths made since the 1700s has made it into ANY programming language yet, so from a mathematician's view there has really been no progress in computing, except in the hardware, since the dark ages! Of course, both the mathematicians and you are horribly wrong and close-minded. Advances don't have to be revolutionary to be significant. OO is not a revolutionary idea; it's just the logical continuation of functional programming. Garbage collection is not revolutionary either, nor are byte-code, serializable classes, reflection and the other advanced features of Java. If you care about whether something is revolutionary or not, you are missing the point. What's important is programming productivity. In that area there have been huge advances since the computer was invented, and they continue to this very day and will continue for many, many years.
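
    To pick one of those features: reflection is a good example of a productivity win that isn't revolutionary maths. A minimal sketch (the Describe class name is mine):

        import java.lang.reflect.Method;

        // Print the public methods of any class named on the command line,
        // without knowing the class at compile time - the kind of trick that
        // makes serialization, RMI and component frameworks practical.
        public class Describe {
            public static void main(String[] args) throws ClassNotFoundException {
                Class c = Class.forName(args[0]);
                Method[] methods = c.getMethods();
                for (int i = 0; i < methods.length; i++) {
                    System.out.println(methods[i]);
                }
            }
        }

    Running "java Describe java.util.Vector" lists every public method of Vector, with no Vector-specific code anywhere.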

    I woulda (none / 0) (#240)
    by Kinthelt on Mon Apr 15, 2002 at 06:34:38 PM EST

    just used the Lambda Calculus as a counterexample to the 1700s argument.
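
    (To put a date and a formula on that: Church's lambda calculus is from the 1930s, and even recursion falls out of nothing but anonymous functions via the fixed-point combinator

        Y = \lambda f.\,(\lambda x.\, f\,(x\,x))\,(\lambda x.\, f\,(x\,x))

    which is about as far from 18th-century maths as you can get.)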

    [ Parent ]
    I'm no expert on math history but ... (none / 0) (#241)
    by gauze on Mon Apr 15, 2002 at 08:07:25 PM EST

    I remember reading that somewhere around the turn of the century (maybe 1880? I forget, something like that) one man could understand all known math concepts, but that by the 1990s (this was a few years ago) no one person could hope to have a deep understanding of more than maybe 5% of everything known. I dunno how true this is, but it certainly implies there is new math out there.


    There's nothing wrong with a PC that a little UNIX won't cure.
    [ Parent ]
    you are right but miss the point (none / 0) (#243)
    by boxed on Tue Apr 16, 2002 at 04:07:25 AM EST

    The total knowledge of math is huge; you are correct in that respect. But 99% of all math isn't used in normal programming at all. That was the point I was trying to make.

    [ Parent ]
    Fourier kills your argument (none / 0) (#252)
    by Rhodes on Wed Apr 17, 2002 at 09:25:27 PM EST

    One more example: Fourier transforms (along with others) allow the transformation from the time domain to the frequency domain and back.
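
    In the usual angular-frequency convention (where the 2π ends up is a matter of taste and varies between fields), the transform pair is:

        F(\omega) = \int_{-\infty}^{\infty} f(t)\, e^{-i\omega t}\, dt
        f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} F(\omega)\, e^{i\omega t}\, d\omega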

    [ Parent ]
    many, many things (5.00 / 3) (#227)
    by freefall on Mon Apr 15, 2002 at 05:33:45 AM EST


    I'm a comp. sci. student at U of T, and as such, I've become increasingly interested in what the hell "computer science" MEANS. This naturally leads to the question: "Where has computer science been, and where is it headed?" The answers certainly reveal that computer science has always been and continues to be a very active and inventive discipline.

    First of all, That Other Site recently had a discussion about something quite related to this: DEEP ALGORITHMS:

    "A paper ... quotes Donald Knuth as saying the computer science has 500 deep algorithms. He mentions that Euclid's algorithm is one of the most important, and he seems to agree with the idea that CS will be mature when it has 1000 deep algorithms."

    There's also this list of Top Ten Algorithms, discovered between 1946 and 1987. But this doesn't really answer the author's question. I like to collect information about new and strange ideas in the world of comp. sci., and I've come across a lot of ideas that seem quite new, quite extraordinary, and often very weird:
  • IBM Research on Unstructured Data - new ideas in the study of data mining and analysis.
  • the Ruby Language - a new, innovative little language with lots of potential.
  • MIT Exokernel Operating System - a project to transfer more control of the computer to individual applications
  • Blogdex - this is more media-related, but "blogs" have been gaining a great deal of popularity on the web as a way of publishing information
  • Data Detectors - Apple's research on dynamic data interfaces
  • LiveDoc - beyond Data Detectors
  • P2P innovations and research are yet another example
  • GUI research is always fun
  • Many new comp sci problems have been tackled during the development of various opensource projects. This has led to, for example, the advent of new file systems (SGI XFS, IBM JFS, ReiserFS, ext3, etc).
  • Stemming from Linux/BSD/etc is the fairly recent concept of Beowulf clusters, which connect hundreds of individually inferior PCs to form a larger, collective supercomputer.
  • Graphics Theory is a rapidly changing field of computer science that will certainly bring about great changes in the world.
  • Google probably wasn't around when you were playing with your TRS-80, and it is certainly a practical and ingenious invention of this recent period


    Computer science has come a long way, and continues to progress, regardless of whether you're paying attention to the industry or complaining about how it used to be.


    Flip the coin (none / 0) (#239)
    by Jevesus on Mon Apr 15, 2002 at 05:43:57 PM EST

    I find attributes such as code readability, development time, etc, to be as important as, or even more important than, attributes such as program size in today's business climate.

    And please, don't tell me that you could whip out a resource management application (for instance) with a user-friendly GUI (by today's standards) in the same amount of time using an "old" language as you (or whoever else) could using any of the "new" high-level languages - and producing readable code, too.

    But maybe this is beyond the scope of your article, which, I guess, is just about the purely technical evolution of programming languages and algorithms.
    Either way, the development of programming languages has shifted towards more programmer-friendly attributes: readability, ease of use, shorter development time, and so on.
    That's not all bad.

    - Jevesus
    CS doesn't make news (5.00 / 1) (#248)
    by aigeek on Tue Apr 16, 2002 at 08:59:50 PM EST

    People often confuse Computer Science with programming or software engineering. Whether or not that's the case here, you seem to be asking what CS has done for you lately. A few people have listed a few counterexamples, and I'll point to tons more. Most advances are in better ways of doing very difficult things. These advances never make it to mainstream programming lore for several reasons:
    1. They're usually not necessary. As you pointed out, hardware has gotten pretty fast, so the old methods are often good enough.
    2. They're usually not relevant. If you're writing the next great business workflow app, you don't really need to know about support vector machines. If you're making a web site, you don't need to use distributed resource allocation algorithms, no matter how good they've gotten.
    3. Most programmers don't know about them. Most programmers are not scientists, and many are barely engineers. These people don't keep up with the latest advances in constraint satisfaction algorithms, for example, even if one might turn out to be relevant every now and then. Why not? See #1 and #2.


    Optimization Technology (none / 0) (#253)
    by Will Sargent on Thu Apr 18, 2002 at 03:37:07 AM EST

    Compiler technology today uses techniques which were simply impractical and/or unknown 20 years ago.

    Adaptive optimization may not be exactly new (I remember it being used in The 7th Guest as a codec optimization technique), but HotSpot is a significant advance in the state of the art, making Java actually faster than some traditionally compiled C programs.

    Meanwhile, cheap processor time means that genetic algorithms can crank away at multiple different pathways for a particular program, producing tailored optimizations that might never be found using traditional techniques.
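
    As a toy illustration of the idea (not a real compiler pass - the "program" here is just a bit string and the fitness function is deliberately trivial), a genetic algorithm boils down to selection, crossover and mutation in a loop:

        import java.util.Random;

        public class ToyGA {
            static final int GENES = 32, POP = 40, GENERATIONS = 200;
            static final Random rnd = new Random();

            // Fitness: count the 1 bits.  A real system would measure something
            // like the run time of the program variant the genome describes.
            static int fitness(boolean[] g) {
                int f = 0;
                for (int i = 0; i < g.length; i++) if (g[i]) f++;
                return f;
            }

            static boolean[] fitter(boolean[] a, boolean[] b) {
                return fitness(a) >= fitness(b) ? a : b;
            }

            public static void main(String[] args) {
                // Start from a random population.
                boolean[][] pop = new boolean[POP][GENES];
                for (int i = 0; i < POP; i++)
                    for (int j = 0; j < GENES; j++) pop[i][j] = rnd.nextBoolean();

                for (int gen = 0; gen < GENERATIONS; gen++) {
                    boolean[][] next = new boolean[POP][GENES];
                    for (int i = 0; i < POP; i++) {
                        // Tournament selection: each parent is the fitter of a random pair.
                        boolean[] a = fitter(pop[rnd.nextInt(POP)], pop[rnd.nextInt(POP)]);
                        boolean[] b = fitter(pop[rnd.nextInt(POP)], pop[rnd.nextInt(POP)]);
                        int cut = rnd.nextInt(GENES);            // one-point crossover
                        for (int j = 0; j < GENES; j++) {
                            next[i][j] = (j < cut) ? a[j] : b[j];
                            if (rnd.nextInt(100) < 2)            // ~2% mutation per gene
                                next[i][j] = !next[i][j];
                        }
                    }
                    pop = next;
                }

                int best = 0;
                for (int i = 0; i < POP; i++) best = Math.max(best, fitness(pop[i]));
                System.out.println("best fitness after evolution: " + best);
            }
        }

    Swap the fitness function for "build this variant and time it" and you have the shape of the search described above.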


    ----
    I'm pickle. I'm stealing your pregnant.
    give it some time! (none / 0) (#263)
    by eeee on Wed May 22, 2002 at 06:48:03 PM EST

    It's only been 30 years -- give it some time, man!  That's like Isaac Newton invents calculus then 30 years later he's like "Okay, where's the rocketships?  HURRY UP!".

    It hasn't even been a generation yet.  There are thousands of new algorithms and techniques to be discovered, only we haven't yet run into the problems that will make these algorithms and techniques necessary -- they are hundreds of years down the road.  I was programming in BASIC for Apple II+ when I was in 6th grade or something like that, and that was the first time I saw a real computer (Atari game console not included).  But when I think about how my kids will think about/relate to computers, it gives me the chills -- they'll have their grubby little fingers on the keyboard from BIRTH.  I mean, we don't even have household robots yet.  Be patient.
