Kuro5hin.org: technology and culture, from the trenches

NOPs, NOPs everywhere, but not a cycle in use

By hardburn in Op-Ed
Tue Aug 28, 2001 at 04:39:11 PM EST
Tags: Technology (all tags)

The early years of computing were marked by hulking giants where every cycle was precious. Some of the first hackers grew out of this environment to create brilliant programs that shock and amaze programmers both then and now.

Today, however, we are bombarded with such an overabundance of computing power that the modern programmer's mantra seems to be "just throw cycles at the problem".

Now don't get me wrong. I love having a fast computer. I was the first among my friends to get a gigahertz processor. There is no doubt that there are times when you absolutely need that much power (such as 3D rendering).

What I'd like to know is how we went from "640k should be enough for anybody" to recommending 128 MB for an e-mail program. I don't want to turn this into an anti-MS rant, so I will only say that Microsoft hasn't done anything to help curb the trend this article is about.

I came face-to-face with this problem just recently, as I went back to school (a technical college) yesterday. Last year, the library on campus contained an area of about thirty computers running Windows 98. They were around Pentium II 400s with 128 MB of RAM. They ran a bit slowly, but I bet a little disk formatting and reinstalling over the summer would have fixed that. Apparently the tech staff felt differently.

I came back yesterday to find that the library computers had been completely replaced. All right, I thought, there's nothing wrong with getting new computers, and maybe the tech staff knew something about the old ones I didn't.

It was still fairly early, and I was about the only one in the computer area. There was a little sticker on each computer telling what was inside. They were all identical. Here is what was written on that sticker:

  • CPU: Pentium 4 1.4 GHz
  • RAM: RDRAM 256 MB
  • Video Card: Matrox G450 W/32 MB RAM
  • Hard Drive: 20.5 GB @ 7200 RPM

That is the configuration of computers used for little more than web surfing and e-mail, with the occasional bit of word processing or a CS student compiling a program.

I stood all alone surrounded by thirty such machines, all of them displaying the boring light-blue Windows 98 desktop color with its smattering of icons, humming away with the massive fans needed to keep a P4 cool. I was in a room with more computing power than there probably was in the entire world just thirty years ago. NOPs, NOPs everywhere, but not a cycle in use.

I have no idea what these systems cost the school. It certainly wasn't cheap. They were probably ordered over the summer or towards the end of last year so that they could be ready by the time students came in for the new semester. Given that, a 1.4 GHz Intel processor was probably the best you could buy at the time, and we all know how much Intel charges for its top-of-the-line processors.

So maybe the computers here last year really did need replacing, but did they really need to go to such overkill? I just don't see how you can justify this much processing power. Even running Windows, a computer with half the clock speed would be sufficient for what these machines are used for, and at greatly reduced cost.

This goes back to the root of the problem. "Just throw cycles at it". When institutions are willing to spend so much on new computers, how can you not fall into that trap?

Some people say that I am advocating the false-god of efficiency, but this isn't so. I will accept that some people don't write uber-efficient code in the interests of readability and reuse. However, I think it is the job of a good programmer to properly balance reusability, efficiency, portability, and readability (not necessarily in that order) in a program. Forget 128 MB for e-mail; how can one say that 128 MB for just the OS is anywhere near balanced?

How can programmers counteract this effect? Why not try to take a simple program and bum it down? Buy a TI graphing calculator and learn assembly for it. These calculators are about as powerful as an Apple ][, and there are many brilliant programs out there for them. If you don't want to get a $90 calculator just for fun, try making some 4k intros. You need not program everything so that it's bummed to the max, but knowing how to do it will influence your real programs for the better.
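To make "bumming" concrete, here is a toy example of my own (not from any particular program): counting the one-bits in a word. The obvious loop tests all 32 positions every time; the bummed version uses the old trick that x & (x - 1) clears the lowest set bit, so it loops only once per bit actually set.

```c
/* The obvious version: test every bit position, every time. */
unsigned popcount_naive(unsigned long x)
{
    unsigned n = 0;
    for (int i = 0; i < 32; i++)
        if (x & (1UL << i))
            n++;
    return n;
}

/* The bummed version: x & (x - 1) clears the lowest set bit,
 * so the loop body runs once per one-bit instead of 32 times. */
unsigned popcount_bummed(unsigned long x)
{
    unsigned n = 0;
    while (x) {
        x &= x - 1;
        n++;
    }
    return n;
}
```

On a 1.4 GHz machine nobody will ever notice the difference, which is exactly the point: the exercise is for the programmer, not the program.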

There is a lot of emphasis today on programming for reusability. Programmers should not forget, though, that a horribly inefficient program, even if the code is highly reusable, is still a horribly inefficient program. If it's reused in another program, you've just created two horribly inefficient programs.

Do not lose your reusability, but neither should you ignore the efficiency.




Most important part of programming
o Efficiency 35%
o Reusability 26%
o Readability 23%
o Portability 14%

Votes: 34

Related Links
o hackers
o "640k should be enough for anybody."
o recommending 128 MB for an e-mail program
o false-god of efficiency
o 128 MB for just the OS
o bum
o assembly
o 4k intros
o Also by hardburn

NOPs, NOPs everywhere, but not a cycle in use | 40 comments (40 topical, editorial, 0 hidden)
TI calcs (3.00 / 5) (#1)
by Anonymous 6522 on Tue Aug 28, 2001 at 02:28:15 PM EST

Buy a TI graphing calculator and learn assembly for it. These calculators are about as powerful as an Apple ][

They're more powerful than an Apple ][. The one I have is faster than every Mac (and probably every PC) up until about 1987.

The power of TI Calculators (4.80 / 5) (#20)
by Legolas on Tue Aug 28, 2001 at 09:40:45 PM EST

I have a TI-89, which (along with the TI-92 Plus) is the highest-end calculator that TI sells. As far as power goes, it is pretty close to being on par with the Apple Macintosh Plus, a computer released in January 1986.

Macintosh: MC68000, 8 MHz
TI-89: MC68000, 10 MHz
Macintosh: 1 MB standard, 4 MB maximum
TI-89: 188K RAM, 384K Flash ROM

Still, it's neat knowing that there is the power of a top of the line personal computer (in 1986) in the palm of my hand.


[ Parent ]
urban legends (4.50 / 14) (#2)
by ucblockhead on Tue Aug 28, 2001 at 02:29:51 PM EST

I almost voted -1 on this because it perpetuates that whole "640k is enough for anyone" urban legend. Two things about this: first, Gates never said it. Second, the machine that DOS was designed for could only address 1 megabyte of memory, and the first models produced shipped with 64k. IBM's marketing department expected the machine to have a lifetime of a few years. At the time, machines were not typically "backward compatible". Those were the constraints under which DOS was designed, and as such, 640k was a reasonable software limit given the 1 MB hardware limit. The limit was needed so that there was address space for video memory and such.

This was not at all unusual. The Apple ][+, for instance, stuck video memory smack in the middle of the address space, so that any program larger than 16k would overflow into it. But you never hear anyone make snide comments about the Woz saying "16k is enough for anyone". (Admittedly an assembly programmer who knew what he was doing could work around this, but still...)

Had anyone foreseen that people would try to drag these OSes through the decades on multiple hardware platforms, decisions might have been made differently, but it is hard to imagine how they could have done anything substantially different on the particular hardware platform in question. At best, they might have bumped that 640k up by 128k or so, but not much more. You've got to put video memory somewhere. You could have picked a better chip, I suppose, like the 68000, but that was IBM's call, not Microsoft's.

The shortsightedness came in not killing DOS dead when the 80286 came out, or later, when the 80386 did. But "market forces" are as responsible for this as anything. OS/2, originally a joint IBM/Microsoft project, came out not all that long after the 80386, and when Microsoft decided to kill that, they went on to the 32-bit Windows we know and love. (It just took them a while to manage to code it.) If you want to blame anyone for the "640k limit", you've got to place the blame squarely where it belongs: on the shoulders of customers who didn't understand why they needed a 32-bit OS.
This is k5. We're all tools - duxup

OT - but... (3.40 / 5) (#5)
by DeadBaby on Tue Aug 28, 2001 at 02:49:06 PM EST

When the 286 was released the PC was still a growing platform. I can understand why neither IBM nor Microsoft wanted to simply kill DOS off. I think the real time to have made a move was when Windows NT 3.51 shipped.

Microsoft could have included MS-DOS free with it, and NT has always run win16 far better than Windows 3.x ever did.

As for the quote, I think it's very sad that so many people who seem to be smart spread lies around just because they happen to sound neat. I don't know how many times a month I see that "Bill Gates" quote from Linux zealots or your average know-it-all type.

"Our planet is a lonely speck in the great enveloping cosmic dark. In our obscurity -- in all this vastness -- there is no hint that help will come from elsewhere to save us from ourselves. It is up to us." - Carl Sagan
[ Parent ]
NT 3.51 (none / 0) (#36)
by weirdling on Wed Aug 29, 2001 at 01:20:14 PM EST

When it came out, there wasn't a personal machine alive that could run it well, and games never ran on it. It sucked as a desktop OS and was barely competent as a server OS. OS/2 ran much lighter, with a smaller footprint and less OS overhead, and had a much faster FS to boot.

I'm not doing this again; last time no one believed it.
[ Parent ]
Slight correction (3.00 / 2) (#10)
by ucblockhead on Tue Aug 28, 2001 at 03:52:12 PM EST

I just wanted to point out that the first versions of OS/2 were 16-bit (rereading, I seem to imply it was 32-bit originally), before some pedant corrects me...it did, however, offer protected memory and was free of the "640k limit".
This is k5. We're all tools - duxup
[ Parent ]
I hope so (none / 0) (#32)
by hardburn on Wed Aug 29, 2001 at 09:13:47 AM EST

it did, however, offer protected memory and was free of the "640k limit".

I would hope so, because as the rift grew between Microsoft and IBM, Microsoft took its portion of the OS/2 code and put it into Windows NT.

while($story = K5::Story->new()) { $story->vote(-1) if($story->section() == $POLITICS); }

[ Parent ]
FUD (4.42 / 14) (#3)
by DeadBaby on Tue Aug 28, 2001 at 02:44:44 PM EST

I don't want to turn this into an anti-MS rant, so I will only say that Microsoft hasn't done anything to help curb the trend this article is about.

Well, neither has KDE, Gnome, the good people at xfree.org, or such bloat-fest distro vendors as Mandrake or Redhat. Let's be fair while we're at it, you ever try running MacOSX on a machine with less than 256MB of ram? What about on a first-gen ppc (oops, you can't. No support for NuBus in MacOSX). Sure, you can single Microsoft out, I'm sure it's a great agenda, but ignoring the issue doesn't change the fact that almost everyone uses the power that is there.

I don't see much of a problem... RAM and CPU prices are at all-time lows, so why should a programmer spend 3 months trying to make his code run well on a Pentium 200? 3 months = more features and more testing. On top of it all, my P3 has no problems at all running modern code or multi-tasking heavily.

the issue of budgets... You never know what you're going to need. Buying the best now ensures you'll have room to expand down the road. A P4 1.4ghz, after all, might just be enough to run a Java app quickly.

"Our planet is a lonely speck in the great enveloping cosmic dark. In our obscurity -- in all this vastness -- there is no hint that help will come from elsewhere to save us from ourselves. It is up to us." - Carl Sagan
Red Hat and other bloat (3.66 / 6) (#6)
by hardburn on Tue Aug 28, 2001 at 02:54:53 PM EST

neither has KDE, Gnome, the good people at xfree.org, or such bloat-fest distro vendors as Mandrake or Redhat.

Yes, crap like this is why I use Debian and only use X and GNOME on one machine, and that only for games and graphical web browsing. Even so, that one machine isn't at all fancy (P2 300, 64 MB RAM) and it runs things just fine.

Let's be fair while we're at it, you ever try running MacOSX on a machine with less than 256MB of ram?

I've never used MacOS X, period. When I heard that MacOS X was going to be based on BSD, I considered buying a G3/4. When the final version came out, I was glad I didn't. If I ever get a Mac, I'm putting Debian PPC on it pronto.

the issue of budgets... You never know what you're going to need. Buying the best now ensures you'll have room to expand down the road. A P4 1.4ghz, after all, might just be enough to run a Java app quickly.

While you might not know exactly what you need, I'm fairly confident that if you're not running games, you will never need 1.4 GHz. The only justification I can think of for computers in the 1.5-3 GHz range is as a server that runs the programs of many different systems (essentially graphical dumb terminals). In which case you need only one or two such systems, not thirty.

while($story = K5::Story->new()) { $story->vote(-1) if($story->section() == $POLITICS); }

[ Parent ]
Long term use.. (1.00 / 5) (#9)
by DeadBaby on Tue Aug 28, 2001 at 03:38:47 PM EST

Well, if President Dumbo keeps stealing money from social security to give his rich white friends a tax break who knows how screwed up we'll be in a few years..

Those P4 1.4ghz might have to last for a very long time. If they had the money to buy them, I think it was a good move to get the best they could.

"Our planet is a lonely speck in the great enveloping cosmic dark. In our obscurity -- in all this vastness -- there is no hint that help will come from elsewhere to save us from ourselves. It is up to us." - Carl Sagan
[ Parent ]
Hey... (2.25 / 4) (#15)
by ghjm on Tue Aug 28, 2001 at 05:59:59 PM EST

Let's be fair to the man, after all. His rich black friends got a tax break too.

[ Parent ]
retro-distros are fun. (3.00 / 1) (#12)
by chuma on Tue Aug 28, 2001 at 04:03:21 PM EST

such bloat-fest distro vendors as Mandrake or RedHat
My solution to this? No, not Debian (tried it, really I did), but retro-distros. My cable modem gateway/firewall/NAT/whatever box is a silly little 6x86 clone box that I haven't managed to fry yet, try as I might. It runs RedHat 6.0 (yes. point zero) with a 2.2.16 kernel. It has a 500MB hard drive. rpm -qa | wc tells me that there are 191 packages installed, which I suppose is pretty excessive for what this box does (NAT/portfw, sshd, smtp, pop3), but it was much easier to trim RH 6.0 than wrestle with 7.1.

[ Parent ]
ha! i got you beat... (none / 0) (#39)
by Spiral Man on Wed Aug 29, 2001 at 10:46:09 PM EST

first of all, i'd like to say that redhat (especially 6.0) is hardly a "retro-distro". if you want retro, let me dig up some of my old info-magic slackware cds (2.0.x kernel, at most). and even that is barely retro.

as per the subject, my firewall/gateway is a pentium 133 (no, there is no roman numeral after that...) w/ 64mb of ram (which was a lot, at the time). and it's running slackware 7.1 (definitely not retro).

concerning bloat, i would never be able to run NT or 2000 on this, and would probably have problems w/ 95 (especially using it as a router). the fact is, while, yes, x and kde etc. are all bloated, at least you don't have to run them. sure i've got the latest kde and x windows on my desktop, but why put any of that on my server when it doesn't even have a monitor plugged into it...

[ Parent ]

Valid point, though anecdotal (4.25 / 8) (#4)
by KWillets on Tue Aug 28, 2001 at 02:47:13 PM EST

It might be more relevant to cite other stats, like the declines in PC sales as consumers find little reason to upgrade their machines to read email. The library most likely has some plausible need to run high-resolution MPEG encoding or some such; unfortunately plans and reality seldom coincide.

Still, I think the question is always on the mind of the person watching a huge machine produce a blinking prompt. What is the next big application?

For instance, the web is consolidating into a corporate-controlled mall. The "glass rooms" of the old mainframes have reappeared as server farms. Can the PC (or anything) fight this slide into central control?

What is the point of this article when 128MB=$20 ? (3.40 / 15) (#7)
by Carnage4Life on Tue Aug 28, 2001 at 03:19:23 PM EST

I don't see what the point of this article is. Daily we are inundated with articles about how the PC industry is laying off thousands because consumers don't need more powerful PCs.

So besides pointless geek mental masturbation, what purpose is served by trying to create apps that use fewer resources when
  1. The current crop of apps do not even greatly tax the resources of the current crop of machines

  2. The cost of RAM has dropped so much that you can pick up a 128MB stick for $19.99 and a 256MB stick for $39.99.
Arguments like this made sense when computers and RAM were extremely expensive and limited, but in this day and age, this seems like complaining about the lack of skilled basket weavers and textile loom operators in an industrial age.

Uhm, I'm maxed out .. (3.50 / 2) (#25)
by arcade on Wed Aug 29, 2001 at 02:40:22 AM EST

My P200 has a maximum upper limit of 64MB.

The ads claimed the upper limit was 128MB - but I've proved that false. :-/

The end result is that I can't just spend money on RAM. I need to upgrade the mobo, the CPU, and THEN the RAM. Oh, and I may find that the new mobo no longer has any slots for my good old ISA cards. So that means I've got to replace the NIC and the soundcard too.

Damn, this is getting expensive..

[ Parent ]
Embedded systems (3.00 / 1) (#31)
by hardburn on Wed Aug 29, 2001 at 09:11:05 AM EST

Sure, 128 MB might be $20 on a desktop, but I go take a look at modules for my Handspring Visor and see that an 8 MB flash module is around $70.

Now take a programmer who is used to the "throw cycles at it" camp and give him an embedded system to program on. As another poster pointed out, he will probably say "I demand more memory on this thing!"

When one day you can get a palmtop that's as fast as today's desktops, we'll probably go down to nanotech, where things will be as limited by space as today's embedded machines. And so it will go on until the laws of physics don't let you get any smaller.

while($story = K5::Story->new()) { $story->vote(-1) if($story->section() == $POLITICS); }

[ Parent ]
Simple answer (4.07 / 14) (#8)
by jabber on Tue Aug 28, 2001 at 03:25:10 PM EST

What I'd like to know is how we went from "640k should be enough for anybody." to recommending 128 MB for an e-mail program.

Same reason we went from Wagner and Shakespeare to Friends and Britney Spears; from Mom & Pop groceries to Walmart; and from talking to our kids to giving them Prozac and a TV remote. Thinking hurts. It's easier to take the easy way out. And it's 'not my problem'.

You are completely right in your rant, but realize that the Universal Law of Entropy virtually guarantees that any excess in resources will be squandered. You cannot reasonably expect efficiency amid excess - efficiency requires a shortage, else there is no 'real' motivation for it. Time and money are always more limited than computing power, and writing bloat is faster and cheaper than crafting tight, fast code.

Given a fast chip and plenty of RAM, even a monkey can be taught to bang out LoC's, and monkeys cost less than artists and engineers.

I've noticed a degree of monkeyfication in myself since I got a workstation that doubled the performance of my previous issue. I have become semi-simian simply because I don't have to wait as long for my system anymore. It's so much easier to write crap (with bloat, which equates to more lines of code), and cross another item off my todo list in less time, when it runs 'well enough' on my new, fast machine. And hey! My new PC cost my company 1 week of my pay.. Quite the RoI for my improved 'efficiency', wouldn't you say?

[TINK5C] |"Is K5 my kapusta intellectual teddy bear?"| "Yes"

A turning point... (4.00 / 6) (#11)
by ucblockhead on Tue Aug 28, 2001 at 04:00:54 PM EST

I remember in my own career when one day I had a project that required me to store information about whether my company had to charge sales tax and what the rate was, by zipcode. I spent about thirty minutes thinking about data structures like linked lists of zip code ranges with bitfields and such. Then I said to myself "Self, what the hell am I wasting all this time for when machines typically have 16 meg of memory", and I did:

long SalesTax[10000];
This is k5. We're all tools - duxup
[ Parent ]
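A sketch of the trade-off in that anecdote (my own reconstruction, with made-up names; note that a full table of 5-digit zip codes would actually need 100000 slots, not 10000): the flat array buys simplicity with memory, while the data-structure approach buys memory with code you have to write and debug.

```c
#include <stddef.h>

/* Brute force: one slot per possible 5-digit zip code.
 * 100000 longs is roughly 400KB -- trivial on a 16 meg workstation. */
long sales_tax_flat[100000];

long lookup_flat(int zip)
{
    return sales_tax_flat[zip];
}

/* The "clever" alternative: a sorted list of zip code ranges,
 * binary-searched.  Tiny, but more code to write and debug. */
struct tax_range { int lo, hi; long rate; };

long lookup_ranges(const struct tax_range *r, size_t n, int zip)
{
    size_t lo = 0, hi = n;
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (zip < r[mid].lo)
            hi = mid;
        else if (zip > r[mid].hi)
            lo = mid + 1;
        else
            return r[mid].rate;
    }
    return 0;  /* no tax known for this zip */
}
```

At roughly 400KB on a 16 meg workstation destined for a handful of machines, "wasting" the memory on the flat table was almost certainly the right call.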

Bingo (1.00 / 1) (#24)
by smallstepforman on Wed Aug 29, 2001 at 02:33:30 AM EST


[ Parent ]
Yes, but... (5.00 / 1) (#27)
by linus nielsen on Wed Aug 29, 2001 at 07:41:04 AM EST

...what happens when a programmer who is used to such "sloppiness" comes to an embedded system with 128K of RAM? "We must have more RAM!!!" he will say, when he can easily solve the problem by using his brain. I meet these people all the time at work. Sigh.

[ Parent ]
Oh, I agree... (4.00 / 1) (#35)
by ucblockhead on Wed Aug 29, 2001 at 10:51:14 AM EST

At the same time I did that, I was also maintaining software (written in C) that ran on a machine with 140k of memory, using bank-switching to cram it all into the 64k address space of the Z80 it ran on.

One of the keys to good programming is not just knowing how to be efficient, but also knowing when to be efficient. In the zipcode example above, the code was going to be run only on a few workstations at the corporate headquarters. Spending time saving space just wasn't worth it if it ran effectively on the four or five machines it was destined for.

This is k5. We're all tools - duxup
[ Parent ]

School machines (2.40 / 5) (#13)
by Abstraction on Tue Aug 28, 2001 at 04:49:05 PM EST

Your school could either:
1) Buy cheap machines and have to replace them every year because of excess OS and software bloat.
2) Buy expensive machines and replace them every 3 years or so for the same reasons

Which would you pick?

How about using older software? (4.00 / 2) (#26)
by Dievs on Wed Aug 29, 2001 at 04:47:20 AM EST

How about win95/office95, which would run screamingly fast on the 'slow' machines they replaced?
or win 3.11/office 6, which would run great on anything with 'pentium' in its name?
The fact is, M$ software often gets worse with new versions.

[ Parent ]
Reminiscence (3.80 / 5) (#14)
by jd on Tue Aug 28, 2001 at 04:59:22 PM EST

My first computer was a Commodore PET 3032, with (guess what!) 32K of RAM. Yes, that's a K, not an M. :)

My second computer, though, was a BBC Model B, which had 64K of memory, but 32K of that was allocated to ROM and "Sideways ROM". Of the remainder, the graphics could take anything from 1K ("Mode 7" teletext) to 20K (modes 0-2, I believe).

These computers were large enough and powerful enough to be used as embedded computers, although the term didn't exist back then.

(The BBC was especially good for that, having a parallel port, a serial RS-423 port -and- a 4-channel ADC, not to mention RGB and analogue video. It even had a connector for another processor, though this would largely turn the BBC into an over-powerful video processor.)

Way before I was born, the Manchester Mark I (nicknamed "Baby") managed to run a program for finding the highest common factor of two 32-bit integers. The program, plus storage, took less memory than the URL for this site.

So? what's the point of reciting all that?

The point is, we -can- become very much more efficient in what we're doing. Reusability is not the enemy of efficiency - it's the much-maligned ally. (Take the Linux kernel as an example - just how many min/max functions DO you need? By having one reusable function, you save yourself space -AND- grief.)

The overhead of a reusable tool is not insignificant, but if the tool is designed right, the overhead is considerably less than that of re-implementing. The secret is in that "design" bit.
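As a sketch of the min/max point (simplified; the Linux kernel's real macros add type-checking tricks that this leaves out):

```c
/* The duplication being described: every subsystem rolling its own. */
static int net_min(int a, int b)     { return a < b ? a : b; }
static long disk_min(long a, long b) { return a < b ? a : b; }

/* One reusable definition instead.  The caveat is documented once,
 * in one place: each argument is evaluated twice, so don't pass
 * expressions with side effects. */
#define MIN(a, b) ((a) < (b) ? (a) : (b))
#define MAX(a, b) ((a) > (b) ? (a) : (b))
```

One shared definition means one place to document the caveats and one place to fix the bugs, instead of dozens scattered through the tree.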

It's been said that if builders built buildings the way coders wrote programs, the first woodpecker that came along would destroy civilization. I can believe that, too. Code is generally designed to meet deadlines, win approval from the boss and awe the customer with the fancy feature list. It's not designed to function.

One dream I have, that I know will never be realised, would be for a small army of coders to take Linux, the entire FSF software collection, Gnome, KDE, Qt, Berlin, and every other free piece of software in existence, move into a closed camp, and just get it right.

It shouldn't be hard, assuming anyone would give a damn enough about quality code to actually fund something like that. You just produce specs from the code, debug the specs, then re-engineer the code, using the specs to avoid redundancy.

I did something like this when working for a former company. I reduced a 12 meg source / 10 meg binary to a 6 meg source / 1.6 meg binary. It was almost effortless, lifting off all that fat. Compiling with optimization reduced the binary to 720K - the size of a double-density 3.5" floppy.

Not only did I reduce the waste, I increased the performance. (Much less swapping, for a start!) I increased the reliability (fewer components to go wrong), and I increased the understandability (one function, one purpose; one purpose, one function.)

If I can do that, so can any other coder. So get on with it!

Luxury (4.80 / 5) (#17)
by ghjm on Tue Aug 28, 2001 at 07:32:57 PM EST

Okay, apparently if you want to post on this thread you must first establish credentials, so here goes. My first home computer was a homebrew using a Motorola 6802 cpu. For those of you who might have forgotten (ha!), the 6802 was just like a 6800 except it had 128 bytes of on-chip RAM. It ran at 890kHz. Yes, you read that right, less than 1MHz. Eventually we built a memory card but initially, the on-board chip memory was all the storage available. No M, no K, just 128 bytes. No mass storage of any kind, either. If your 128-byte program is important enough to you, just commit it to memory. We had a ROM monitor that could accept opcodes in mnemonics instead of hex, which we thought was quite luxurious at the time. I called BBSs on a 300-baud acoustic coupler and interacted with the machine on a TTY. Not a /dev/tty* device, a full, 150-pound Teletype 33 ASR printing terminal with paper tape reader/writer. So yes, I know what it means to optimize out one more cycle. I mean, this stuff was only one small step beyond toggle switches on the front panel. (I know this all sounds like I'm spoofing the whole "my computer was worse than yours" bit, but it is all literally true. For the record, I'm 33 years old.)

But come on, people - back then it was really cool if your app sent VT100 codes to boldface some of the text on your screen. Back then (actually years later) people said things like 'the graphical user interface will never catch on because of the text scrolling problem - you will never be able to scroll a page of graphically rendered text as fast as you can in text mode, therefore people will never accept it.' This was a serious argument back then.

It's like saying, "Cars suck! They weigh 2000 to 5000 pounds! We used to accomplish the same thing with 30 pound bicycles!" Well, from a broad view maybe that's true, but cars provide a totally different menu of benefits. There are lots of things you can say about bicycles - they're healthy, environmentally friendly, etc - but ultimately you'd prefer to have a car if you want to visit Grandma in Des Moines or go buy a mattress.

I do indeed miss those old machines. It was really cool to be able to say "Hey, I figured out a way to run the Sieve of Eratosthenes and get all the primes up to 1000 in less than a second!" But if I need to figure out the terms of my car loan, just shut up and gimmie 128Mb and a copy of Excel, because I have to go to the bank in 30 minutes and quite frankly I'm a happier person now that I can't tell you how many instructions a 16-bit add accumulator to indexed memory takes to execute on my box.

Hell, even if I did have the instruction sheet memorized I couldn't tell you how long any given instruction takes, because it would depend on what else is in the pipelines and blah blah...maybe people don't like hand-optimizing assembler these days because assembler has become really irritating? The most fun instruction set of any CPU ever was the 6809, everything after that has been progressively more tedious. I mean, all these old dogs talking about how programmers should know how to optimize assembler - have you ever really tried to get down to the metal on a VLIW architecture, for example?


[ Parent ]
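For reference, the benchmark mentioned above is tiny - a minimal Sieve of Eratosthenes sketch of my own, capped at 1000 to match the example:

```c
#include <string.h>

#define SIEVE_MAX 1000

/* Sieve of Eratosthenes: count the primes below `limit`
 * (clamped to SIEVE_MAX).  composite[i] nonzero means i has
 * been crossed off as a multiple of some smaller prime. */
int count_primes(int limit)
{
    static char composite[SIEVE_MAX];
    int count = 0;

    if (limit > SIEVE_MAX)
        limit = SIEVE_MAX;
    memset(composite, 0, sizeof composite);
    for (int i = 2; i < limit; i++) {
        if (composite[i])
            continue;
        count++;                      /* i survived every cross-off: prime */
        for (int j = 2 * i; j < limit; j += i)
            composite[j] = 1;
    }
    return count;
}
```

On the hardware described above, getting this under a second was an achievement; on a P4 it is noise - which is rather the point.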

Yes and no (3.33 / 3) (#22)
by jd on Tue Aug 28, 2001 at 10:23:12 PM EST

I don't think it's necessary for a coder to know how to wrestle the last clock cycle out of the source code. That's the compiler's job. It's my belief that the coder should stop -obstructing- the compiler from doing its job.

In the example of Linux, you have dozens, if not hundreds, of ways to find minima and maxima. Even the best compilers aren't going to always spot the duplications. This wastes space and time.

There are millions of other such examples. Eloquent code is easily optimized, because it's clean. Bloated code is hard to optimise, because how do you distinguish the useful from the useless?

[ Parent ]

Back in the old days, we ate the TAIL (3.40 / 5) (#16)
by slaytanic killer on Tue Aug 28, 2001 at 06:32:06 PM EST

The point I agree with is the need to know data structures, as well as the ability not to lose your head when faced with bit-twiddling.

However, the amount of organization and infrastructure gained by "squandering" resources is worth it. Nowadays, it's commonplace to store Python code within XML within Swing within Java, which leads to sane design that erases the entire compile cycle for most development tasks.

The first time I saw that, I realized how absurd it is to compile code and store it on bare filesystems.

People always say that programming in Lisp is good for you, because its list structure is sufficiently general to represent anything in a pure, uniform way. But it never caught on outside of academia and Gnu because of the difficulty of optimizing list structures without knowing the specifics of the application. So now we have the chance to do what we want. And if people decide to waste it... that just means more for us.

Python within XML within Swing within Java (4.00 / 1) (#23)
by swr on Tue Aug 28, 2001 at 11:18:19 PM EST

Nowadays, it's commonplace to store Python code within XML within Swing within Java, which leads to sane design that erases the entire compile cycle for most development tasks.

Can you provide links to more information? I've been doing things the hard way for way too long.

[ Parent ]

Jext (none / 0) (#29)
by slaytanic killer on Wed Aug 29, 2001 at 08:42:28 AM EST

At the moment I know only that Jext does this, which is where I found it when I poked around their CVS. Emacs has been doing something similar for decades, using emacs-lisp instead of Python or XML, so it would surprise me if this is a Jext-only innovation.

I'll ask around on Usenet whether there are any libs devoted to this, or places where it's mentioned. FYI, here are the source files that implement this:

[ Parent ]

my lord, we are LOST (4.25 / 8) (#18)
by h2odragon on Tue Aug 28, 2001 at 08:20:56 PM EST

I read through the story, going "yeah, yeah, this is somebody who gets it." I've thought these things myself, ever since I played with the "PDQ" replacement library for MicroSloth QB 4.5, nigh onto a decade ago now. A "hello world" went from a 30k .EXE file to 1k; it used assembly replacements for stock internal routines plus linker tricks and such, very cool. MontaVista's Library Optimizer plus a glibc lite might be a modern equivalent. MicroSloth bought that product's producer not long after it came out, and buried them.

Since that time I've been cringing at each new generation of software, which manages to completely erase any gains that the hardware has made in the intervening time... if not gain on the hardware a little. I have no budget for the newest computer kit anyway, so my "old fart" fetish for useless things like efficiency still benefits me.

Anyway, then I hit the comments; 17 so far... and while I certainly won't castigate everybody as a cycle-frittering maniac, I really see no reason for hope in the long term here. Yes, hardware is cheap, etc. That helps, but it has somewhat masked the symptoms of this horrible sickness that is pervading our hacker culture ("culture" in the biological sense).

"Efficient" is just not a goal for the modern hacker, it seems. "I realized how absurd it is to compile code and store it on bare filesystems," says one egregious example. The root cause is the fact that hackers have been valuable the last few years, and anybody with a bit of talent has been spoiled by a constant infusion of newest and best hardware to play with.

In all the places where that's happened, the hacker culture may be irreparably contaminated. The tendency to think in overblown, overobfuscated, overdesigned ways is incredibly hard to shake once acquired. Mozilla. Jabber.* Perl. IPv6. I'd cite more examples but, really, who needs to? Look around. At best, there's a couple of good ideas struggling to escape a massive framework of ambition, but I can't recall that it's ever happened.

Perhaps there remain enough technology-starved proto-geeks out there to help us retain some memory of why we once desired that our systems be efficient. Maybe there are enough of us old farts with some ability to teach, to impress the lesson... It won't be me; I'm feeling cranky.

*I was told it was alive. It isn't. I looked. It just ain't quit twitching yet.

Shut up, fogey ;) (2.66 / 3) (#21)
by slaytanic killer on Tue Aug 28, 2001 at 10:00:13 PM EST

We didn't get dumber... just the constraints are different. And you'll have to trust like you were trusted.

Hell, it's always been a war of complexity battling with hardware limitations.

[ Parent ]
Thank the lurker in heaven for bloat (4.62 / 8) (#19)
by slaytanic killer on Tue Aug 28, 2001 at 09:36:49 PM EST

I think there is a lot of confusion on this topic.

First, we have to determine what we're talking about. There are in general two kinds of programs for the PC:

  • Server side programs
    These are designed for scalability, efficiency, and the long term. The code is nestled in a controlled environment, without the vagaries of programming for the lowest common denominator. Sure, inefficiency can be found here, but in general all sorts of performance strategies are bolted on because these are designed for the long haul.

  • Client side programs
    Anything goes. On one hand there are the Winamps of the world, with minimal footprint and maximum flexibility. Then there's MS Bob, pieces of software that for one reason or another serve as embarrassments to someone. Things go in and out of existence as entire environments grow and die in 18-month cycles. There is absolutely no financial point in paying a hundred very skilled programmers to squeeze out performance.

    The dirty secret here is that human interactions set absolute bounds on the need to optimize performance. The accepted maximum time for a graphical widget to respond to you is 50 milliseconds. And when you code in a competitive environment, you don't go all-out for performance. You go into the code with a profiler, look at the hotspots, and bring everything down to the specification goals. Once you hit the specification goals, you're done. And that 50 ms stays constant no matter how fast hardware gets. Human response times don't get any faster.
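    The profile-then-fix workflow described above can be sketched in Python with the standard cProfile/pstats modules; the slow "widget" function here is a made-up stand-in for whatever your hotspot actually is:

```python
import cProfile
import io
import pstats

# Hypothetical hotspot: naive repeated string concatenation,
# standing in for some slow piece of UI code.
def slow_widget_redraw(n):
    s = ""
    for i in range(n):
        s += str(i)
    return s

pr = cProfile.Profile()
pr.enable()
slow_widget_redraw(10_000)
pr.disable()

# Rank by cumulative time; the top entries point at what to fix.
buf = io.StringIO()
pstats.Stats(pr, stream=buf).sort_stats("cumulative").print_stats(5)
print(buf.getvalue())
```

    Once the report says the hotspot is inside your spec budget, you stop; that is the whole discipline being described.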


    And there's even further confusion. There's two ways we talk about better software:

  • Faster, cheaper better
    "I had a 66 MHz machine last year, now it's 132 MHz. Everything had better be twice as fast now." No, it ain't, unless you pay ten times as much. No one's going to break their backs to bum every single instruction out of your hardware -- WordPerfect got wrong-footed against Word when they decided to throw out their code and optimize for a processor that was nearly obsolete by then. The customers spoke, and crowned the worse-is-better technology.

  • Better design, more reliability
    Why did I mention that compiling programs and storing them on bare filesystems is absurd? Because it is a kludge. Many terrible habits have been learned by the computer industry because of scarce resources. "Object-Oriented Programming -- Leads to code bloat?" Before that, I'm sure the headlines were asking the exact same thing about Structured programming, which actually did lead to code bloat if you compare it to beautiful, error-prone hacks.

    Want to know something that will make you start slashing your wrists? My project for months has been purely to optimize everything -- performance, memory, stability, readability. Before then, I wrote a game for a Linux company where I had to share the mountain-creation routine to also build the AI model, under severe code-size constraints. So don't tell me I haven't got any chops, because I do. If they're not honed at the moment, it's because I have to balance many more considerations.

    And hell, despite my stance, I actually do work to put out performance fires, pushing to restructure toward a performance-friendly design. But I think that everyone shocked at performance had better expand their worldviews. I've read Knuth 1 & 3, and I've studied writing structures in some form of asm. But at the same time, I use virtual machines on top of virtual machines. I use structures that are blissfully ignorant of the fact that they're running on dirty hackish logic gates, because their internal logic looks nothing like digital logic.

  • Agree and disagree. (4.33 / 3) (#28)
    by Rainy on Wed Aug 29, 2001 at 07:54:52 AM EST

    I agree that the whole mail-on-128MB thing feels way wrong. But learning assembly is just as wrong, just as extreme, just as wasteful! Instead of wasting cycles, you waste programmers' time. There are far easier and faster-to-develop languages that are also efficient: Python, Ruby, Smalltalk. A Python program that prints "hello" takes up 14 bytes. The Python interpreter I'm using right now takes up 1.4 MB. If I were to write an email client like mutt in Python, it'd run nearly instantaneously. Efficient programming doesn't need to be torture.

    In regard to that school.. think globally. These processors are silicon, which is one of the most abundant elements. You are witnessing the process of funds being transferred from dumb people to smart people, and I think that's a good process, generally. Although.. scratch that one, we're paying a huge price here. If schools everywhere would instead raise teachers' salaries and teaching became a high-paying job, which would in turn pull the brightest people into the trade.. that would be just great. Then again, that's 30 systems at $2k each, probably - just $60k, just a bunch of tiny bonuses for the faculty.
    Rainy "Collect all zero" Day

    But it's just FUN! (4.50 / 2) (#30)
    by hardburn on Wed Aug 29, 2001 at 08:46:53 AM EST

    learning assembly is just as wrong, just as extreme, just as wasteful!

    There are just some things that you absolutely MUST do in assembly. Beyond those few things, assembly is just plain fun. I've never felt so much joy in programming as I do when I learn some new ASM technique that shaves a few instructions off my program. No, not everything should be in ASM, but I think everyone should at least know how to do it.
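    The joy of counting instructions isn't limited to native assembly. As a rough analogue in Python (an illustrative example, not anything from the thread), the standard dis module lets you compare how many bytecode instructions two equivalent functions compile down to:

```python
import dis

def swap_with_temp(a, b):
    # the classic three-step swap via a temporary
    t = a
    a = b
    b = t
    return a, b

def swap_with_tuple(a, b):
    # idiomatic Python tuple-unpacking swap
    a, b = b, a
    return a, b

def instruction_count(f):
    # number of bytecode instructions the compiler emitted for f
    return len(list(dis.get_instructions(f)))

# The tuple swap usually compiles to no more instructions than
# the temp-variable swap; exact counts vary by Python version.
print(instruction_count(swap_with_temp), instruction_count(swap_with_tuple))
```

    Same idea, different level: shaving instructions off, just against a virtual machine instead of real silicon.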

    I suggest reading at least the first chapter of Art Of Assembly Language (available on-line) to get a feel for it.

    These processors are silicon, which is one of the most abundant elements.

    Actually, I've heard that in some countries they're worried about too much sand being shipped off to chip fab plants (although I'm too lazy to go find a link).

    while($story = K5::Story->new()) { $story->vote(-1) if($story->section() == $POLITICS); }

    [ Parent ]
    Fair enough. (4.00 / 2) (#33)
    by Rainy on Wed Aug 29, 2001 at 09:53:48 AM EST

    Well, that is a matter of taste :-). If you love it, of course there's nothing wrong with coding in it. However, I think it's not a solution for MS's bloat - the bloated programs cannot be rewritten in assembly within practical constraints of time. Thanks for the link - I bookmarked it and I'll read it when I feel like studying a new language. I have a sizable backlog already, though - Haskell, OCaml, Scheme, CL, Smalltalk, Ruby.. :P. I've never done any assembly, just C, C++, Java and Python, and of these the only one I enjoy is Python.
    Rainy "Collect all zero" Day
    [ Parent ]
    Something I forgot to mention. (4.50 / 2) (#34)
    by Rainy on Wed Aug 29, 2001 at 10:46:53 AM EST

    There seems to be a class of hackers.. in fact, this is probably the difference between a hacker and a programmer - a hacker will greatly enjoy a *different* way of doing something, perhaps using a different algorithm to solve the same problem, or, as you put it, "shaving a few instructions off".

    My attitude is completely different - I'm a programmer. When I code something, I enjoy the functionality of the program. I.e., I needed an mp3 player that had certain features no existing player had, so I coded my own. If there had been a player like that, I would have seen no point in doing the same thing in a different manner.

    Some of its insides are quite ugly and even inefficient, strictly speaking, but I will only fix these things when they become a problem for adding new features or when the inefficiency becomes a problem for the user. For example, when random mode is on, a new list of tracks is built each time. This could be done more efficiently by perhaps caching the list, but things are complicated by the fact that random mode tries to play high-scored tracks more often, so I had a choice of either polishing this feature until it's perfect and efficient, or adding some other features. Since it already works instantaneously, I chose to leave it as is. IOW, as a user there's no difference to me between it working in 0.002s or 0.2s, so there's no difference for me as a programmer either.
    Rainy "Collect all zero" Day
    [ Parent ]
    Heresy! (5.00 / 2) (#37)
    by weirdling on Wed Aug 29, 2001 at 01:29:19 PM EST

    Everyone knows HP calculators are *far* more powerful than TI calculators, or I've just spent most of my life learning Reverse Polish Lisp for nothing. Seriously, it does amaze me that my HP-48 can do so much, although my Handspring Visor is also a marvel of efficiency.

    However, I wanted to put in my $0.02 about efficiency. Right now, Windows is clearly the efficiency loser. However, the same can't be so easily said of Java. Why? For one, Java bytecode, when jarred, is pretty small compared to equivalent C executables. For another, while C is theoretically faster than Java, your average programmer has neither the time nor the knowledge to extract that speed -- which has always been true -- so the built-in optimizations in Java will often make your average programmer's program run *faster* in Java than in C.

    So, I guess what I'm trying to say is that we need to delineate the difference between bad code and rapid development. Windows is written badly. Programs written in Java, Python, Perl, what have you, are not necessarily written badly, and, while often less efficient, they certainly take a lot less time to write and are less error-prone than uber-optimised code. So, we have bad code written in C and assembler in the Win kernel vs. good code running much of the web, written in interpreted languages that happen to be at a mild performance disadvantage because of their interpreted status, but are defensible on a cost-to-develop basis.

    So, rail away at bad code, but leave the interpreters alone...

    I'm not doing this again; last time no one believed it.
    Finding balance (5.00 / 1) (#38)
    by hardburn on Wed Aug 29, 2001 at 03:15:57 PM EST

    Yes, this is why I pointed out that you should write programs with a certain balance in mind. Personally, I love Java. It's the most fun I've gotten out of programming since I learned ASM. And as far as most applications are concerned, the little extra overhead of translating bytecode to machine code on the fly (in a JIT compiler, at least) is well worth it.

    The same can't be said for Windows. A kernel and its libraries shouldn't dictate (within reason *) how efficient your program is going to end up being. The libraries and kernel should allow the program to work on a 386 if the programmer so desires.

    * I say within reason, because there is a certain point where it's just too much work to get it to scale downwards. Trying to get a Linux kernel to work on an 8088 probably isn't worth the effort.

    while($story = K5::Story->new()) { $story->vote(-1) if($story->section() == $POLITICS); }

    [ Parent ]
    Embedded developers (none / 0) (#40)
    by MSBob on Sat Sep 01, 2001 at 10:48:55 PM EST

    People who do embedded development can still create great things with very few resources. QNX has the 1.44 MB challenge: on a single floppy they packed their OS with proc, the file system, their Photon GUI environment, their Voyager web browser, an email program, a file manager and an internet dialer. On a single floppy! Microsoft struggles to bundle that much functionality on a single CD-ROM!

    It's been my dream for some time to work for the makers of QNX, as I share your views on efficiency. Looking at their 1.44 MB challenge and later at their QNX RTP has totally changed the way I view software development. I hope these guys will give me a chance one day. Here's to hoping...

    I don't mind paying taxes, they buy me civilization.

    NOPs, NOPs everywhere, but not a cycle in use | 40 comments (40 topical, 0 editorial, 0 hidden)