Kuro5hin.org: technology and culture, from the trenches

Programming and Separation from Reality

By snowphoton in MLP
Tue Aug 14, 2001 at 02:58:51 AM EST
Tags: Technology

Slant-Six is running an interesting piece called Layers of Separation: The Future of Programming. The general gist of it is that as programs become increasingly high-level, we will soon find ourselves with a workforce of computer 'gurus' who actually know very little about how computers really work. The real question is: is this a prediction of the future or a description of the present?


Related Links
o Slant-Six
o Layers of Separation: The Future of Programming


Programming and Separation from Reality | 82 comments (80 topical, 2 editorial, 0 hidden)
Description of the present (4.46 / 15) (#1)
by localroger on Mon Aug 13, 2001 at 09:44:15 PM EST

Do you really know how a computer works? If you do, you won't have to consult a reference to answer these questions.

  • What is the 2's-complement binary representation of -1?
  • Using only logical operations (AND, OR, NOT) and addition, how do you take the additive inverse of a 2's-complement number?
  • What is the fundamental problem with representing decimal fractions in binary floating point math?
  • To avoid the problem just mentioned, would you be better off using the single or double precision library, and why?
  • You have a large array of 37-byte fields. In assembly language, what is the most efficient way to multiply an index by 37? (Hint: it uses only shifts and adds)
  • You must very quickly scale a 24-bit input into a positive 2's-complement 16-bit space. This step must include a calibration factor so it can't be done by simple shifts. What's the quickest way to do it (assuming you have hardware multiply) and how do you determine the conversion factor?
  • In the last example, if your processor only has a 16x16=32 bit result multiply, how do you proceed?
Well, that's enough. These are all problems I've had to deal with on a recent project. There are many, many more that would pop into mind if that project had been a bit different, or if I had been asked to make this list 10 years ago.
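
For the curious, here is a minimal C sketch of one common answer to the 24-bit scaling question above. It is not necessarily what the poster did; the function name and the Q16 fixed-point format are assumptions made purely for illustration.

#include <stdint.h>

/* Determine the conversion factor once from the calibration constant:
 *   scale_q16 = round(cal * 32767.0 / 0xFFFFFF * 65536.0)
 * i.e. (desired full scale / input full scale), expressed in Q16 fixed point. */
uint16_t scale_24_to_16(uint32_t raw24, uint32_t scale_q16)
{
    uint64_t product = (uint64_t)raw24 * scale_q16; /* the hardware multiply */
    uint32_t out = (uint32_t)(product >> 16);       /* drop the Q16 fraction */
    return (uint16_t)(out > 0x7FFF ? 0x7FFF : out); /* clamp to the positive 16-bit range */
}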

These are also all things that recent CS and EE graduates did not know when I explained them to them.

And they are not trivial. These tricks produced a 1,000-fold performance increase in an existing instrument. That is not something you can get by saying "fuck it" and paying for a better CPU. It doesn't come up often, but when it does, the only way to do it is to have an awareness of how the hardware works so you can schmooze it.

And it's already a lost art. I'm one of the youngest people I know who has these skills. The future looks bleak indeed; I fear when we get those 100 GHz Pentium XXXVI's they really won't perform much better than an 8088 running WordPerfect 1.0 because they'll be programmed in Objective GUIhack 9.x which generates "portable assembly language" as written by a dazed crackhead who just opened up Using Assembly Language three days ago for the first time and has to meet a Friday deadline.

I can haz blog!

Well (3.50 / 4) (#5)
by eroberts00 on Mon Aug 13, 2001 at 10:18:15 PM EST

Certainly in some cases it's important to be able to achieve maximum performance from a specific piece of hardware. But that is not always, probably not even very frequently, the case. In most things that CS people will work on, by far the most expensive piece is programming time. If something makes it twice as easy for me to comprehend and program, I'll gladly sacrifice half the performance in nearly every case.

And as far as having to know something without consulting a reference, that's like making people memorize the periodic table. Why bother? If you can easily look it up, I say leave it in the book and simply remember where it is that you need to look.



[ Parent ]
It really does matter (4.00 / 6) (#8)
by localroger on Mon Aug 13, 2001 at 10:31:40 PM EST

Even if you don't use it, knowledge of how the system works informs you as to when you need to step out of that nifty high-level interface and get your hands dirty. I guess I didn't mention that I do most of my work in VB, did I? I do, but I also know when a convenience tool like VB isn't going to work. You don't know that unless you know what's possible beyond the convenience tool.

I have personally found that C and variants are a useless middle ground. I do the easy stuff in BASIC (V or otherwise) and the necessary stuff in Assembler and I have little need for anything else. (I can get away with this because I'm writing one-time single-use apps for individual customers; YM _would_ V if writing commercial software for distribution.) Most of what I do is either embedded or custom apps collecting data from embedded devices, so my perspective is a bit closer to the hardware than most programmers get nowadays.

My experience has been that skilled, talented programmers who only know the high end stuff do some amazingly bone-headed and stupid things when performance is an issue. I have the greatest admiration, for example, for the designer of the box I've spent the last few months hacking; the user interface is brilliant, the allocation of resources is brilliant, and the form factor of the box itself is perfect for its market. If only the designer had written about 300 lines of assembly language into the firmware it would have been at least 100 times faster. (It uses an 80186 for reasons that made good sense back in 1995 when it was designed. And I have, of course, proven that the '186 was adequate, if programmed properly.)

I'm sure my life would be a bit richer if I had the time to study some of the OO and structured concepts going around nowadays -- but OTOH, I wouldn't give up my experience with raw assembly in limited environments to get that. There are times when you simply have to know how the machine works. And when that happens, you either do or you don't. Increasingly often, talented and skilled programmers don't know what they should. And that is a shame.

I can haz blog!
[ Parent ]

Ok (4.00 / 4) (#12)
by eroberts00 on Mon Aug 13, 2001 at 10:48:48 PM EST

I'm not disagreeing that it's good to know how something works if you need to. But consider this: how long will it take someone to learn assembly language for the next generation of processors, such as the Itanium? Are they going to have to learn about instruction ordering and pipelining and whatever else? What if it is actually faster to do a multiply than a shift on some future processor in a certain instance because it uses a separate pipeline that would otherwise be idle? Do they have to know the timings for each instruction in each possible case? Unless they are writing the compiler, I doubt it. Most general purpose computing in the future will be done on processors that are too complicated to bother learning how to write to at a low level.

Certainly there will always be embedded computers that are simple and need to be programmed in assembly for performance considerations, and there will always be people who can do that or who can learn to if needed. But to say it's a shame that everyone can't do that, when it is of marginal utility for most, is pointless. In my opinion it's the programmer's job to write correct, easy to understand code and the compiler's job to make it fast. Very few trade-offs of readability and comprehensibility are worth the performance gains they impart. But then again, I'm not an embedded programmer...



[ Parent ]
Modern processors and Assembly (3.50 / 4) (#18)
by localroger on Mon Aug 13, 2001 at 11:24:02 PM EST

The whole thing about pipelining and instruction sequencing is way overblown. The thing is, badly written 808x assembly will always trump compiler object code. Always has, always will. Why? In those tight critical situations, you will know where to put the data. Registers, memory, or stack? Most compilers are horribly inefficient at this no matter what their PR machines say. For high-level code they're fine, but in those low-level loops where performance really matters, no, they don't do the job.

And the thing is, if it's really important to you, you can do your own pipeline optimization. It's a pain, but you'll never be doing it for more than a few hundred instructions, so it's worth the effort. Of course, this goes back to the original question -- you have to know how the machine works to do that. It's not FM. It's a circuit you could breadboard out of raw transistors, if you had a lot of transistors and a lot of time. That's what people aren't taught any more.

I have run across a lot of pure-D crap that was written by people who just didn't know what they were doing. Sometimes it was that they didn't know the machine, sometimes that they didn't know some other aspect of the man-machine interface. But it all comes down to knowing what you are doing. I have a fundamental problem with API's that leave you 30 stories in the air with no safety harness. I want to know things like how good is the library, is it optimized for X operation, how much memory does it use, etc. Today nobody asks questions like that. That's what makes disastrous resource misallocation possible.

I can haz blog!
[ Parent ]

Real optimization (4.00 / 3) (#48)
by simon farnz on Tue Aug 14, 2001 at 08:52:26 AM EST

Optimising code should be a four stage process, to avoid wasting programmer time.
  1. Can you get faster hardware? Don't bother optimizing if your application needs a 4MHz 8086, and has a 100THz Pentium 900 devoted to it.
  2. Are there better algorithms available? For example, I can calculate the MPEG-2 CRC32 using 32 shifts, 32 mask instructions and 32 adds per 32 bit word. OTOH, I can use a 256 entry lookup table, and convert the process to 4 table lookups and 4 xors per 32 bit word. Which one to use depends on the memory pressure vs CPU load at the time (a sketch of both follows this list).
  3. Can you implement the algorithm better? For example, are you using doubles to store and process bytes? Are you using an optimizing compiler?
  4. Finally, assuming that none of the above help, write hand coded assembly to do the task; on modern 16 bit, 20MHz MCUs, this is rarely needed.
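
For readers who have not seen the table trick from point 2, here is a minimal C sketch of both approaches. It is illustrative only: it assumes the common reflected CRC-32 polynomial 0xEDB88320 rather than the MPEG-2 variant, and omits the usual initial-value and final-XOR conventions, but the memory-vs-CPU tradeoff is the same.

#include <stdint.h>
#include <stddef.h>

/* Bit-at-a-time: a shift, a mask and an xor for every bit processed. */
uint32_t crc32_bitwise(uint32_t crc, const uint8_t *p, size_t n)
{
    while (n--) {
        crc ^= *p++;
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0xEDB88320u & -(crc & 1u));
    }
    return crc;
}

/* Table-driven: one lookup and one xor per byte (four per 32-bit word),
   at the cost of a 1 KB table in memory. */
static uint32_t crc_table[256];

void crc32_init_table(void)
{
    for (uint32_t i = 0; i < 256; i++) {
        uint32_t c = i;
        for (int k = 0; k < 8; k++)
            c = (c >> 1) ^ (0xEDB88320u & -(c & 1u));
        crc_table[i] = c;
    }
}

uint32_t crc32_by_table(uint32_t crc, const uint8_t *p, size_t n)
{
    while (n--)
        crc = (crc >> 8) ^ crc_table[(crc ^ *p++) & 0xFFu];
    return crc;
}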

Above all however, DON'T GUESS AT WHERE TO OPTIMIZE, USE A PROFILING TOOL. Often, the worst case loops are not where you would expect them to be; halving the execution time of a loop that takes .01% of the processor has much less effect than trimming 1% off the loop that runs at 99% of CPU time.
--
If guns are outlawed, only outlaws have guns
[ Parent ]

OO (3.00 / 2) (#23)
by ucblockhead on Mon Aug 13, 2001 at 11:40:52 PM EST

OO doesn't have to be the enemy of efficiency, as long as you understand efficiency first.

One of the whole points of the C++ templates that people bitch about so much is efficiency. (Though unfortunately as a high-level tool, it burns a lot of memory space to gain speed.)

Not that it will reach assembly speeds.

But IMHO, you don't really understand a high-level language unless you've looked at the raw assembly output and studied it at a deep level.
-----------------------
This is k5. We're all tools - duxup
[ Parent ]

Hmm... (4.00 / 2) (#31)
by eroberts00 on Tue Aug 14, 2001 at 12:18:52 AM EST

But IMHO, you don't really understand a high-level language unless you've looked at the raw assembly output and studied it at a deep level.

I would have to suggest against this unless you are writing your own compiler. Say you do look through the output and figure out the absolute best way to use each and every language feature in terms of its raw output. Well, you've pretty much just attached yourself to that one version of the compiler forever. As soon as you move to a later version of the compiler all of that knowledge, which probably took a great deal of effort to learn, will be useless. The whole point of a compiler is to take care of those details for you. Your code should not be dependent upon what the compiler outputs or you risk binding yourself too tightly to one version. If something is not working right, then it is the compiler that needs to be fixed, not your program.

Of course, this does not apply if you are writing and using your own compiler. In that case I would definitely suggest understanding the code that it outputs.



[ Parent ]
"Taking care of the details" (4.00 / 2) (#34)
by ucblockhead on Tue Aug 14, 2001 at 12:27:46 AM EST

Yes, the compiler takes care of the details. I don't have to write the code to set up the stack frame every time I make a function call. Hell, I don't even have to worry about it at all.

However, I do know exactly how much of a performance hit setting up a stack frame takes, and knowing that, and using that knowledge in tight loops makes me a better programmer.

And believe me, a new compiler generally does not make the knowledge useless, because the truth is, most compilers are pretty similar in this respect, especially those generating code for i86 machines.
-----------------------
This is k5. We're all tools - duxup
[ Parent ]

I agree. (3.50 / 2) (#13)
by rebelcool on Mon Aug 13, 2001 at 10:49:05 PM EST

Sacrificing speed for legibility is often something that must be done. And if it's in an area where speed really does not matter, then by all means, sacrifice it.

Saves you (and whoever else needs to read your code later) a whole lot of trouble 2 years down the road.

It's kind of like saying if you can't integrate a large equation in your head without the use of pen and paper...you're not a *real* mathematician. Ridiculous and elitist.

COG. Build your own community. Free, easy, powerful. Demo site
[ Parent ]

comments (4.00 / 3) (#24)
by ucblockhead on Mon Aug 13, 2001 at 11:43:04 PM EST

If you know what "//" and "/* */" and "#" and ";" and "REM" are for, you won't be sacrificing legibility for speed. You'll just be getting speed.


-----------------------
This is k5. We're all tools - duxup
[ Parent ]

Well... (3.50 / 2) (#27)
by eroberts00 on Tue Aug 14, 2001 at 12:02:49 AM EST

Of course you must realize that that statement is far too simplistic. Simply having the ability to comment something does not automatically lead to comprehensibility. Some decisions about performance vs readability could affect thousands of lines of code in many different places. It could even affect the environment you choose to write in. I'll take an uncommented piece of nicely object-oriented Java code over a nicely commented piece of assembly code any day.

Besides, it is all too easy for comments and code to get out of sync when changes are made. It's much better if the code is easy to comprehend in the first place. But still, you should definitely always comment your code :)



[ Parent ]
I don't buy it... (3.66 / 3) (#29)
by ucblockhead on Tue Aug 14, 2001 at 12:10:09 AM EST

"self documenting code" is never truly self documenting, no matter what the language.
-----------------------
This is k5. We're all tools - duxup
[ Parent ]
not exactly... (4.00 / 3) (#36)
by rebelcool on Tue Aug 14, 2001 at 12:35:14 AM EST

If you need to comment every single line of code, it can be easy to lose where the algorithm is in between the lines of comment. I've seen it before, and it's actually worse than just having the algorithm there in front of you.

Comments are needed, but an easily read piece of code is better.

COG. Build your own community. Free, easy, powerful. Demo site
[ Parent ]

Poor commenting. (4.25 / 4) (#58)
by ucblockhead on Tue Aug 14, 2001 at 11:34:30 AM EST

You don't have to "comment every line". What you need to do is write a clear and cogent description of what you are doing and put it in a comment block. If you can't write a simple, plain English description of what you are doing, then you probably don't really understand what you are doing yourself.

Also, "easy to read" code is a siren that misleads. Everyone understands their own code because, of course, they just wrote it. But what seems simple to you when you write it isn't simple to the intern they hire five years later to figure it out.
-----------------------
This is k5. We're all tools - duxup
[ Parent ]

Where then did you learn it ? (4.00 / 3) (#7)
by mami on Mon Aug 13, 2001 at 10:21:13 PM EST

If even the EE graduates did not know it, then where did you learn it and why is it "a lost art" ? Wouldn't that indicate that for some reason some decision makers think it's not necessary to teach it ? Who are the decision makers in this regard ?

[ Parent ]
Computer Architecture (3.50 / 2) (#10)
by duffbeer703 on Mon Aug 13, 2001 at 10:47:01 PM EST

If you were a CS or EE major, you must have been subjected to it.

At my school we used Hennessy & Patterson and Mano.

[ Parent ]
Good question, easy answer (4.00 / 5) (#11)
by localroger on Mon Aug 13, 2001 at 10:48:20 PM EST

I had my first contact with a computer at the age of 10. This was in 1974. My Dad had gotten a grant request approved and got a brand-spankin'-new Hewlett-Packard 2100A minicomputer for his lab. It had a whopping 4Kx16bit ("8k" by today's parlance) magnetic core RAM, ran at IIRC 175 KHz (NOT MegaHertz, KiloHertz), had no user interface or mass storage at all, and was the size of a dorm refrigerator. With a model 22 teletype machine (operated at 110 baud) for UI it set the university back a cool $45,000.

Later, around 1978, we finally got a home computer. It was an off-brand surplus reject and the user's group had about 300 members. It had 16K RAM but, unlike most computers of its day, no BASIC in ROM so much of that 16K was eaten up by the programming environment.

When you learn in limited resources, you learn to use the resources you have. As computers have become more powerful I've let myself get a bit more wasteful, but I'm always conscious of it when I am. The PC I'm typing this into could emulate 1,000 of those old HP2100's and I wouldn't even notice the resource drain. But that's no excuse for profligate waste. Every system has its limits, and it seems that every increase in computing power is used up by wasteful programming techniques and unnecessary bloat. PC's today aren't any faster or more efficient for the real work that's done than any 8-bit computer of the 80's. They just maintain a lot more options at the ready that few people ever use. There have been examples all through the last 20 years of awfully wasteful programs that were just badly written, by people who were unaware of efficiencies that were possible.

I am probably one of the youngest people alive who actually used a computer with magnetic core memory to do real work. (I'm 37, and interested in hearing any rebuttals to that statement.) It just burns me up to think of the miracles that this little machine in front of me could do if it were available to the people who worked with what they had in 1978. It wasn't really that long ago. But in computer time, it's an aeon or two.

I can haz blog!
[ Parent ]

Miracles? you already have them (3.00 / 2) (#76)
by Ceej on Wed Aug 15, 2001 at 11:55:34 AM EST

That little machine in front of you *already* works miracles, from the point of view of somebody in 1974. Take a look at the windowed applications in front of you and think about what goes on behind the scenes. Run any modern 3d game and consider what that little machine is modelling.

[ Parent ]
Abstraction (4.33 / 3) (#16)
by sigwinch on Mon Aug 13, 2001 at 11:12:01 PM EST

What is the 2's-complement binary representation of -1?
All bits set.
Using only logical operations (AND, OR, NOT) and addition, how do you take the additive inverse of a 2's-complement number?
Bitwise complement plus one.
What is the fundamental problem with representing decimal fractions in binary floating point math?
Problem? What problem? The SNR for a double is 200+ dB. That's plenty for *anything*. ;-)
You have a large array of 37-byte fields. In assembly language, what is the most efficient way to multiply an index by 37? (Hint: it uses only shifts and adds)
y = (x << 5) + (x << 2) + x

(Although I'd look pretty hard for a multiply instruction before I resorted to crap like that.)

In the last example, if your processor only has a 16x16=32 bit result multiply, how do you proceed?
Use 16x16 multiply and addition together to do a 32x32 multiply.
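
A minimal C sketch of that decomposition, purely as an illustration (written with 64-bit C arithmetic for clarity; on the 16-bit hardware in question the partial products and carries would be handled in registers):

#include <stdint.h>

/* Build a 32x32 multiply out of 16x16 -> 32-bit multiplies and adds. */
uint64_t mul32x32(uint32_t a, uint32_t b)
{
    uint32_t aL = a & 0xFFFFu, aH = a >> 16;
    uint32_t bL = b & 0xFFFFu, bH = b >> 16;

    uint32_t ll = aL * bL;   /* each 16x16 product fits in 32 bits */
    uint32_t lh = aL * bH;
    uint32_t hl = aH * bL;
    uint32_t hh = aH * bH;

    /* Sum the partial products at their 16-bit offsets. */
    return (uint64_t)ll
         + ((uint64_t)lh << 16)
         + ((uint64_t)hl << 16)
         + ((uint64_t)hh << 32);
}
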
These are also all things that recent CS and EE graduates did not know when I explained them to them. ... It doesn't come up often, but when it does, the only way to do it is to have an awareness of how the hardware works so you can schmooze it.
I don't think the problem is really 'hardware knowledge'. These days it is unreasonable to expect a CS graduate to have much assembler experience. I wouldn't fault a new graduate for not knowing how to write an opcode like SHL AX, 5. If they are clueful, I can teach it to them in a few hours and they'll be productive in a few weeks.

The problem is the ones who are *not* clueful. The whole concept of binary representation is fuzzy to them, and they simply don't understand the isomorphism between 'shift left five bits' and 'multiply by 32', let alone why you might want to use shift because it is faster. These are the people to avoid.

And it's already a lost art. I'm one of the youngest people I know who has these skills. The future looks bleak indeed;...
I wouldn't go that far. Of the ~10 people who do programming where I work, about half already have assembler experience, most of them could easily come up the learning curve for any assembly language you ask them to work with, only a few would need tutoring on the basic concepts, and none of them would be incapable of doing assembler. (Although whether they would be happy about doing it, and whether it is wise to subject them to the annoyance, are different questions.)

Then again, I'm kind of lucky in that regard. Company policy is to only hire the people who actually paid attention to their classes, and who know the fundamentals of the subject. We try to sort out the people who got their diploma through sheer persistence (which is well over half of the class at second-tier and lower universities).

--
I don't want the world, I just want your half.
[ Parent ]

Errr, wrong. (2.00 / 4) (#42)
by smallstepforman on Tue Aug 14, 2001 at 04:48:42 AM EST

What is the 2's-complement binary representation of -1?
All bits set.

The humiliation of getting the first question wrong in front of your K5 peers.
- 2's complement is the first complement plus one
- -1 = 11111111 (since -1+1=0, therefore 11111111+00000001 = 0)
- first complement of 11111111 = 00000000
- second complement = 00000000+00000001 = 00000001

The second complement is the negative value, hence -1 and 1.

I haven't got the patience to go through the rest...
Have fun.

[ Parent ]
Note the word representation (3.00 / 2) (#47)
by simon farnz on Tue Aug 14, 2001 at 08:28:34 AM EST

He wanted to know what -1 looks like in two's complement binary, not what the 2's complement of -1 is.

-1 in 2's complement is indeed all bits set, so he got it right, for any number of bits in a word.
--
If guns are outlawed, only outlaws have guns
[ Parent ]

1/10 in binary is like 1/3 in decimal (4.20 / 5) (#44)
by swr on Tue Aug 14, 2001 at 07:26:19 AM EST

What is the fundamental problem with representing decimal fractions in binary floating point math?
Problem? What problem? The SNR for a double is 200+ dB. That's plenty for *anything*. ;-)

A lot of people don't know this, so I'll explain...

In decimal, a number like 1/3 comes out to an infinite repeating series of 3s. Everyone takes this for granted, but there is a reason for it. The denominator (3) has a prime factor (uh, 3 :) that is not shared by the base (10 (=5x2)). So, there is no way to represent one third in base 10 without resorting to the repeating numbers (which I can't remember the proper name for :).

In base 2 you can't even represent numbers like 0.1 without resorting to the infinite series things, because given 1/10 the denominator (10) contains a prime factor (5) that is not shared by the base (2).

If you try to represent 0.1 in floating point with a finite number of bits, the computer has to round, just as if you were to try to represent 1/3 in with a finite number of decimal digits. So you can have problems if you do something naive like "for (double x = 0; x != 10; x += 0.1) { }". That loop will never terminate[*], because X will never be exactly 10. The rounding errors that happen with each x+=0.1 are cumulative, and X slowly creeps away from the value you would expect.

This is actually one example where you do need to know what is going on at lower layers. If you just assumed that math on computers worked the way that math you learned in algebra class worked, you would not understand why the for loop never terminates.


[*] I'm not certain offhand (it may be system dependent), but I think the loop might terminate when the number overflows. That would take a very long time of course. :)
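
For example, this little C program makes the drift visible (illustrative only):

#include <stdio.h>

int main(void)
{
    double x = 0.0;
    for (int i = 0; i < 100; i++)
        x += 0.1;
    /* Mathematically x is exactly 10 here; in binary floating point it is not. */
    printf("x = %.17g, (x == 10.0) is %s\n", x, (x == 10.0) ? "true" : "false");
    return 0;
}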

[ Parent ]

...and about those libraries (4.50 / 4) (#45)
by localroger on Tue Aug 14, 2001 at 08:05:00 AM EST

And the last question, which nobody has answered yet...

The floating point libraries still in use today were mostly written back in the 1970's when every clock cycle still cost money.

The single precision library does some rounding to prevent the 1/10*10=.999999 problem, since you'd see this problem a lot if it didn't.

The writers of the double precision routines didn't put this rounding in -- they figured that if you needed double precision, you'd probably prefer to figure out for yourself when to round off the result.

This burns a lot of high-level programmers who don't know how floating point math really works; not being sure of their precision, they decide to throw double-precision at everything (hey, CPU cycles are cheap) and they don't have any idea why the results keep coming up truncated.

This is the most important of the questions I posed for high-level programmers, and it's noteworthy that nobody has answered it yet. This is the kind of stuff that will get you into serious trouble if you don't know it even if you never go near assembly language.

There is no substitute for knowing how the machine works. Yeah, you can drive a car if you regard the pedals as magic buttons that make it go and stop, but you'll always be a better and more attentive driver if you have some awareness of what happens between those pedals and the wheels. It's the same in software -- you don't have to be capable of writing 10,000 lines of .asm in order to get the benefit of a little low-level theory. And sometimes, as when the brakes start squealing and steam starts coming from under the hood, knowing a little of that theory will get you out of a serious bind even if you aren't Mr. Goodwrench.

I can haz blog!
[ Parent ]

Lost knowledge (4.00 / 4) (#21)
by ucblockhead on Mon Aug 13, 2001 at 11:32:58 PM EST

When I was right out of college, in 1987, I could have answered every one of those questions. I used that knowledge every day in my first couple of jobs. I still remember distinctly trying to pack programs into a 64k segment. (It seemed like paradise at the time. When I cut my teeth on 6502 assembly, 64k was the whole damn machine, so writing 30k TSRs seemed like a snap.) I remember distinctly doing things like removing all uses of "float" from a program because that avoided linking to the floating point libraries, saving 16k...

Today? Well, I'd have to look up about half the answers. But that's better than a lot of the younger guys.

I'm amazed at how little "kids today" know about efficiency in programming. That, IMHO, is half the reason for code bloat. But the sad thing is that most don't even realize how bad it is; if the algorithm is O(n^3), no amount of Moore's law improvements is gonna save you.

Microsoft is the worst. It is all encapsulated in this one line from the Windows header files:

typedef unsigned long BOOL

That says it all right there...

Anyway, if I were in charge of designing a Computer Science program at a university, at least one year of it would be spent programming 64k, or even 16k boxes using C, assembly, Forth or something similar.
-----------------------
This is k5. We're all tools - duxup
[ Parent ]

BOOL (4.00 / 3) (#28)
by eroberts00 on Tue Aug 14, 2001 at 12:07:14 AM EST

Actually, I fail to see why this is bad. As far as I know on 32-bit processors it is better to deal only with longs as they have the best performance. If you tried to make BOOL be some kind of a bit mask, you would definitely take a performance hit. Quite an acceptable trade off of memory for performance, I would say.



[ Parent ]
You have to be careful with that... (3.00 / 3) (#30)
by ucblockhead on Tue Aug 14, 2001 at 12:16:02 AM EST

The trouble with trading memory for performance on a modern machine is that increased memory usage means more page faults. Disk speed is three orders of magnitude slower than memory speed. It only takes a couple page-faults to blow that performance you traded the memory for all to hell.

This is especially true as the performance hit that you take dealing with individual bits instead of words is fairly small.

(And I won't even get started on what happens if those "BOOLS" are part of packets going over the Internet on a 56k modem.)
-----------------------
This is k5. We're all tools - duxup
[ Parent ]

True (2.33 / 3) (#33)
by eroberts00 on Tue Aug 14, 2001 at 12:27:11 AM EST

I had not thought of that, but still I don't know if I would agree that it should be anything other than a long. Memory is cheap, programming time is expensive.

As far as over a 56K, the modem should compress this so sending eight bits, seven of which are 0, should not take too much longer than just one bit, although I have never tried. Sending a one meg text file takes considerably less time than sending a one meg zip file, though, so there is definitely something to be said for the modem's compression ability.



[ Parent ]
Dangerous thinking... (3.66 / 3) (#35)
by ucblockhead on Tue Aug 14, 2001 at 12:32:58 AM EST

That is exactly why Windows (and other things) are so bloated, and why Windows 2000 often seems to perform worse than DOS did fifteen years ago.

Yeah, memory is cheap...
-----------------------
This is k5. We're all tools - duxup
[ Parent ]

Dangerous thinking... (3.50 / 4) (#37)
by eroberts00 on Tue Aug 14, 2001 at 01:03:41 AM EST

That is nonsense. The only dangerous thinking would be to not think about it at all, which is hardly what I was suggesting. Determining the performance value of things like this is quite complicated and very subjective. What is more important, memory or processor time? Well, obviously, it depends on a lot of things. I am quite sure that whoever decided to make BOOL a long thought about his decision, and I have no desire to go through a sophisticated evaluation to second-guess it. It's not my job to write the compiler. If I were to start everything from scratch, I'd never get anything done.



[ Parent ]
Thought... (4.00 / 2) (#57)
by ucblockhead on Tue Aug 14, 2001 at 11:27:07 AM EST

I'm not so sure about the "thought" involved. Clearly the compiler folks thought differently, which is why sizeof(bool) = 1 while sizeof(BOOL) = 4.

Anyway, the best way to exemplify the thought Microsoft put into this issue is the fact that a couple of Windows API calls that are defined to return "BOOL" can return things other than "TRUE" or "FALSE". But that's not a performance issue.
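
One frequently cited example of that last point (an illustration of my own, not necessarily the calls ucblockhead had in mind): GetMessage() is declared to return BOOL but is documented to also return -1 on error, so the natural-looking comparison against TRUE hides a bug.

#include <windows.h>

/* GetMessage() returns BOOL, but the documented return values are
 * nonzero (a message), 0 (WM_QUIT), and -1 (error). */
void message_loop(void)
{
    MSG msg;
    BOOL r;

    /* The natural-looking "while (GetMessage(&msg, NULL, 0, 0) == TRUE)"
     * silently treats the -1 error return like WM_QUIT. */
    while ((r = GetMessage(&msg, NULL, 0, 0)) != 0) {
        if (r == -1)
            break;                 /* handle the error instead of treating it as a message */
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
}
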
-----------------------
This is k5. We're all tools - duxup
[ Parent ]

Do *you* really know how a computer works? (4.66 / 9) (#22)
by srichman on Mon Aug 13, 2001 at 11:37:12 PM EST

Do you really know how a computer works? If you do, you won't have to consult a reference to answer these questions.
  • What is the 2's-complement binary representation of -1?
  • Using only logical operations (AND, OR, NOT) and addition, how do you take the additive inverse of a 2's-complement number?
    ...
Oh yeah? How is that 2s-complement -1 being stored? In RAM? What's that? How's it physically stored? Transistors? What's the physics behind the transistors? Electrons and holes in silicon lattices? How is the RAM laid out in rows and columns? How often is it refreshed? How much power does the refresh take? What internal error correction does the RAM module use? Or maybe you're not reading your -1 from RAM at all! Maybe it's in the level-1 cache? Or the L2 cache? Or maybe it's on disk? How do all those things work? How did you even get the physical address for that -1? From a page table? A two-level page table? A three-level page table? An inverted page table? Or maybe it was in the TLB?

I agree that programmers should know as much about computers as possible: the implementations, capabilities, and limits of their compiler, their operating system, their hardware. However, there is necessarily a limit to a person's field of expertise, and this is what abstraction layers are for. As smart as I'm sure you are, and as much as you know about binary arithmetic, there are no doubt many areas in which you are not an expert. You're only mortal, and have only so many days in your life to acquire knowledge. I have a lot more faith in a system with a layered design in which the implementors of the layers were expert in their respective areas, than in a flat design in which the implementors of the system knew a bit about everything.

Abstraction layers have been one of the greatest design boons in the history of computing. Abstraction layers speed development and minimize bugs because they allow for the creation of great complexity out of relatively simple components with simple interfaces.

If everyone was still coding in machine code, you would not be using a computer right now. I'm glad that there are gaggles of Java programmers out there who know nothing about endianness, dynamic memory management, memory model portability issues, etc. I'm glad I don't need to worry about the bookkeeping when my process gets context switched, or the dependency headaches when my processor speculatively executes code out of order. I'm glad I don't have to talk to my hard drive to write a file, that I don't have to find free blocks, that I don't have to schedule my hard drive's disk arm. If I am not an expert at some detail of the computer because of the layers sitting between me and the hardware, so be it. It's a small price to pay for the many orders of magnitude increase in productivity and reliability we've all gained as a result.

In closing, thank you to layers for bringing us the ISO OSI reference model and TCP/IP layer cakes. I don't know if I'd be posting this if the folks implementing end-to-end internetwork reliability and congestion control for TCP had to deal with talking to their ethernet card, ethernet collisions, routing, etc. Well, actually, that's possible, but I doubt Tim Berners-Lee would have given us HTTP if he had to implement reliability, congestion control, fragmentation, packetization, routing, etc. along with it in a flat design. Well, actually, that's possible, but I doubt I'd be able to plug in an alternative network protocol.

[ Parent ]

indeed. much like unix, do one thing, do it well (3.33 / 3) (#43)
by Justinfinity on Tue Aug 14, 2001 at 05:37:14 AM EST

layering is definitely good in the aspect that each layer is optimized for a specific purpose. the good ol' "unix philosophy".

everyone has been talking about time saving and such. if the guy working on the GUI *has to* know the fastest way to blit a matrix of bytes into the front buffer, then that programmer won't have the *time* to make the GUI as useful and productive as possible.

the problem comes when people don't *care* about the lower level things. i don't know much assembler, but i do know about things like the compiler pushing my function parameters onto the stack. i know the difference between saving memory and gaining speed.

IMHO, with CPUs and (some) subsystems getting so fast, take the speed hit and abstract a little bit, *if* you have the hardware. if you don't, well then you're probably on something small enough that one or two people *can* memorize the whole system. in those cases, fuck abstraction, go for the ultra-optimization.

as everything in the world of computers comes down to, use the best tool for the job. programming PICs and little embedded (also read: static) hardware, optimize out the ass, because you can. programming a gigahertz speed CPU with hundreds of millions of spare bytes and cycles, abstract a bit. it's impossible to know everything about a system that huge. instead, know everything about what you deal with, and a little bit more on each side of your layer, ya know, to interact and shit :-P

* justinfinity pimps his ripoff of the unix philosophy some more, for no real reason :-P

use the best tool/layer for the job.
if you don't like the best tool/layer, make it better. (open source, baby!)
tools/layers should do one thing and do it well. (unix)
tools/layers should talk to each other in standard and known ways (open standards, POSIX)

-Justin
Why don't you listen to me? If you listen, you get some of that clean, refreshing, new world water.
ok bye
:wq


[ Parent ]
Hint: it uses only multiplies (4.88 / 9) (#32)
by srichman on Tue Aug 14, 2001 at 12:25:30 AM EST

I forgot to mention this in my previous post:
You have a large array of 37-byte fields. In assembly language, what is the most efficient way to multiply an index by 37? (Hint: it uses only shifts and adds)
Starting with the Pentium 4, shifts and rotates use a normal clock speed shift-rotate execution unit rather than the fast barrel shifter from previous x86s. This is bad news for shift fans; shifts now take 4+ clock cycles. So, your shift and add implementation of multiply by 37 (two shifts, two adds) weighs in at 12-14 clock cycles or so, while the single multiply instruction executes in 14. (If anyone has a P4, I'd be interested to see an execution time comparison.) See The Microarchitecture of the Pentium® 4 Processor, page 5, or point 6 in this document for reference.

I'd hate to write unreadable code to multiply two integers, only to find out that my ingenious solution ran slower than the straightforward one. The moral of the story, then, is that this is a job for the compiler. A reasonable compiler can do simple multiply to shift-add conversions, and knows how long each instruction takes. A reasonable compiler can also do crazy global analysis and come up with optimizations that my little head would never think of.
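
In source form that just means writing the obvious multiply and leaving the strength reduction to the code generator; a tiny illustrative sketch (the function name is made up):

/* Index into an array of 37-byte records (the quiz example above).
 * Whether this becomes shifts and adds, lea instructions, or an actual
 * multiply is the compiler's decision, made per target. */
unsigned char *field_at(unsigned char *base, unsigned long i)
{
    return base + i * 37;
}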

Compilers are experts at compilation. The layer model at work.



[ Parent ]

Hmmmm... (3.50 / 2) (#71)
by localroger on Tue Aug 14, 2001 at 07:53:55 PM EST

This is bad news for shift fans; shifts now take 4+ clock cycles. So, your shift and add implementation of multiply by 37 (two shifts, two adds) weighs in at 12-14 clock cycles or so, while the single multiply instruction executes in 14.

So, despite the poor implementation of shifts in the unusual architecture of the P4, it's still a bit faster to do them. Also, with the shifts you have the option of dumping the result straight into SI or DI where you can use it, and not stomping on DX which you might be using for something else.

Of course, you need to be aware of things like this when you are working with such an architecture. But most of the CPU's in the world are not in PC's. They are in embedded devices, and they are not P4-class devices.

I'd hate to write unreadable code to multiply two integers,

What's unreadable about it? Especially when you have the comment "here we multiply AX by 37 and place in DI" next to it. Even as object code, if you work with this stuff very much you learn to spot these tricks. This is very much one of my points; these things aren't unreadable code. They are standard practice where they produce good results.

The most important question in my set is the one nobody got, even though it affects people who never touch .asm: Which library avoids the insidious truncation problem which most people don't even know exists, until one day their code starts spitting out 3.999999999999999 where they expected a 4.

I can haz blog!
[ Parent ]

answer (none / 0) (#80)
by srichman on Thu Aug 16, 2001 at 06:31:39 PM EST

Sorry for the late reply, but I just got back from a camping trip.

Which library avoids the insidious truncation problem...?
"Truncation problem" is a very poor name for this; the problem has nothing to do with truncation. For instance, the closest 32-bit IEEE floating point representation of 0.1 (base 10) is closer to 0.100000001 (base 10) than to 0.100000000 (base 10). If by truncation you're refering to the way the decimal-binary conversion algorithm works, then truncation is still not a good name; the problem still exists with rounding conversion algorithms (which perform much better than truncation).

The answer is, neither library can avoid the problem: given a binary floating point representation of X bits, there will be a number of decimal digits Y < X beyond which it can't be guaranteed that a decimal -> binary -> decimal conversion will recover the original decimal number.

Unless your question is a trick question or there's something I'm missing, a double library will obviously be better. A double precision representation can perfectly represent all the numbers a single precision representation can represent, and then some.

So, for single precision we have a mantissa that can include the sum of any of the terms ((1/2)^1, (1/2)^2, ..., (1/2)^n), while the double precision representation allows our mantissa to include any of the terms in the set ((1/2)^1, (1/2)^2, ..., (1/2)^(2n)) in its sum. This gives us greater precision, and allows us to get closer to the real value of any given decimal fraction.

Note that this is dealing with converting the same decimal number to single and double precision binary representations. If you scale the length of the decimal representation with the length of the binary representation (i.e., you give the double precision representation a harder job), then double precision still performs better. For single precision, the number Y mentioned above should be about 6 (given a good, rounding conversion algorithm), while for double it should be about 15.
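
Those two figures correspond to the standard C constants FLT_DIG and DBL_DIG (typically 6 and 15), which a tiny program will confirm on a given platform:

#include <float.h>
#include <stdio.h>

int main(void)
{
    /* Decimal digits guaranteed to survive a decimal -> binary -> decimal round trip. */
    printf("FLT_DIG = %d, DBL_DIG = %d\n", FLT_DIG, DBL_DIG);
    return 0;
}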

[ Parent ]

No, no, no, you missed it entirely (none / 0) (#81)
by localroger on Fri Aug 17, 2001 at 05:10:48 PM EST

"Truncation problem" is a very poor name for this; the problem has nothing to do with truncation.

The problem is truncation. The reason it arises is that the standard double-precision libraries do not automatically round off their results. This decision was made back in the 70's when computing power was expensive and it was felt that users of double-precision math would want to make their own decision on when to do rounding.

The single-precision libraries, having more trouble with rounding, being in more general use, and needing less CPU power to do the actual math, do rounding by default.

The upshot is that in many languages single precision 1/10*10=1, while double precision 1/10*10=0.99999999999999999999999 unless you specifically round off the result yourself.

I can haz blog!
[ Parent ]

Reference? (none / 0) (#82)
by srichman on Sun Aug 19, 2001 at 01:32:30 PM EST

What's a reference for this?

The only relevant thing I can find is a recent Apple book that a coworker owns. It details computer arithmetic on Apple hardware, and states that they use rounding decimal to binary conversion for all precisions.

This is obviously not a very good reference for the rest of the computer world, and I'm curious what you can come up with. Truncation in double libraries strikes me as something that might be a vestige, eliminated as hardware speeds obviated it.

[ Parent ]
The question is, who are you hiring? (4.25 / 4) (#40)
by nads on Tue Aug 14, 2001 at 02:45:30 AM EST

.. All those questions are covered in a sophomore level computer architecture class. At least it's covered at the state university I attend (I know, I just took the class this summer). Of course I might not remember the answers two years from now when I graduate, but I feel confident that if the task that was posed to me required any of the knowledge you mention, I could easily look it up in my notes and whip up a solution pretty quickly. This is definitely not an example of what knowledge is lost.

This whole question of levels of abstraction and people not knowing what they are doing is, imho, idiotic. There is no possible way you could know everything because there is just too much. Even if you are a comp sci and comp eng double major, you are never going to know everything about compilers and OSes and at the same time understand all the little details in a transistor. It's just too much information. Research is being actively done in all the fields mentioned. Things are changing constantly. There is no way one man could keep up. That is why we specialize. Just like I have almost no idea how my car, microwave, or phone work. These tasks are other people's responsibilities. Computers will be split up in a similar manner.

[ Parent ]
The right tool for the right job (4.33 / 3) (#46)
by Obvious Pseudonym on Tue Aug 14, 2001 at 08:26:50 AM EST

And they are not trivial. These tricks produced a 1,000-fold performance increase in an existing instrument. That is not something you can get by saying "fuck it" and paying for a better CPU. It doesn't come up often, but when it does, the only way to do it is to have an awareness of how the hardware works so you can schmooze it.

These tricks may well have produced a 1,000-fold performance increase for your code. Good for you. That's what you were after. In my code a 1,000-fold speed increase would be completely irrelevant. After all, if the user drags something across the screen and it follows the mouse with practically no lag, what use is making it 1,000 times faster?

Most of the code I write at work is for aeronautical and civil engineering design. It has to follow manuals that are up to 500 pages long detailing complex geometry for things like flight procedure design and air safeguarding zones.

I write this code in VB. This is not because VB is some 'wunderkind' language, but simply because it is the best tool for the job. I do not need the raw speed of hand-tooled assembly. What I need is easy-to-read, easy-to-debug, easy-to-check-against-the-geometry-in-the-manual, quick-to-rewrite code.

If I were to get all worked up about using shifts instead of multiplies and things like that the code would be an unreadable mess - and would take years to write.

At home, I used to write in either C or Assembly depending on the needs of the program. These days, nothing I write ever needs the speed of Assembly.

Abstraction is good. Low-level control is also good. It all depends on what the task in hand needs.

Obvious Pseudonym

I am obviously right, and as you disagree with me, then logically you must be wrong.
[ Parent ]

assembly v. C (3.50 / 2) (#52)
by claudius on Tue Aug 14, 2001 at 09:29:59 AM EST

At home, I used to write in either C or Assembly depending on the needs of the program. These days, nothing I write ever needs the speed of Assembly.

You are undoubtedly much more proficient with assembly than I am. With compilers as good as they are now, I doubt that I could even write assembly that is appreciably faster than "C with all the appropriate optimizations turned on." Almost anything trivial enough for me to spot is caught by any compiler I'd care to use--and I'd be willing to bet that the same goes for 97% of the k5 readership as well (this thread's authors notwithstanding).

[ Parent ]
Actually... (3.50 / 2) (#74)
by Obvious Pseudonym on Wed Aug 15, 2001 at 03:01:28 AM EST

It's more to do with the fact that I didn't have a good C compiler so assembly was almost always faster. These days I doubt I could write assembly that ran significantly faster than a decent C compilation would.

Obvious Pseudonym

I am obviously right, and as you disagree with me, then logically you must be wrong.
[ Parent ]

Not all programming is for the pc (3.66 / 3) (#55)
by Giant Robot on Tue Aug 14, 2001 at 10:40:51 AM EST

Keep in mind that not all 'programmers' program for the web/software for the pc/unix/workstations etc, even though the majority do.

A lot of us have to work with tiny processors that costs pennies in bulk quantities that will eventually go into your mp3 player, control cars, microwaves, tv's, washrooms that will have to be 'hooked up' to a lot of analog and complex dsp stuff.

Writing some object code like

Lamp lamp=new Lamp("room 101");
lamp.turnOn();

and compiling in some "Java" embedded weeny code that takes 100K of Embedded Memory just for the KVM! costs a lot more than the ugly asm

mov $01 $LAMP_LOC

Abstraction layers are nice, but to the end user of simple electronics, it simply must Do the Job, fast and efficiently!


[ Parent ]
Overstatement (4.00 / 2) (#56)
by simon farnz on Tue Aug 14, 2001 at 11:04:57 AM EST

Even in embedded (which is my line of work), abstraction layers help. They are different abstractions to PCs, but (for example), a good compiler for the 8051 will turn the C code *lamp = LAMP_ON; into the mov you give.

It is just a case of use the appropriate tools; don't use Java/Swing to run the UI on an MP3 player. Likewise, don't use assembler to write code better expressed in C.
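
At the C level that abstraction can be as thin as a volatile pointer to a memory-mapped register; a sketch in which the register address and names are made-up placeholders:

#define LAMP_ON  0x01
#define LAMP_OFF 0x00

/* Hypothetical memory-mapped lamp register at a made-up address. */
static volatile unsigned char * const LAMP_REG = (volatile unsigned char *)0x0080;

void lamp_set(unsigned char state)
{
    *LAMP_REG = state;   /* a decent embedded compiler emits a single mov here */
}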
--
If guns are outlawed, only outlaws have guns
[ Parent ]

That was my whole point (3.50 / 2) (#75)
by Obvious Pseudonym on Wed Aug 15, 2001 at 03:06:03 AM EST

Keep in mind that not all 'programmers' program for the web/software for the pc/unix/workstations etc, even though the majority do.

Abstraction layers are nice, but to the end user of simple electronics, it simply must Do the Job, fast and efficiently!

That was my point. If you need compactness or speed then obviously you don't use an abstracted language. On the other hand, if you need flexibility and readability then you do.

I wasn't saying that abstraction is always good - I was pointing out that it isn't always bad. It depends on the purpose of the code and the hardware it is designed for.

Obvious Pseudonym

I am obviously right, and as you disagree with me, then logically you must be wrong.
[ Parent ]

Already here (4.00 / 6) (#2)
by davidduncanscott on Mon Aug 13, 2001 at 09:45:49 PM EST

Kids these days don't know shit, with their objects and do loops and whatever the hell they're doing now. Back in my day, we knew what the blinking lights meant, and by God we could write code in split-octal, right there on the bare metal. The sissies used assemblers and flowcharts -- the rest of us just knew...

LOL! (3.50 / 4) (#6)
by regeya on Mon Aug 13, 2001 at 10:18:58 PM EST

That's pretty funny...until you realize that classes in universities are taught, by and large, by crotchety old bastards who just knew, yet want to teach kids that Scheme is the most elegant language ever written *rolls eyes*

You helped me realize why I didn't try very hard to get a C.S. degree...what fun is wrapping your brain around C++ just to get elegant yet bloated solutions, when, say, a bit of x86 asm (yeah, I know, for wussies, right? :-) would work.

And don't give me that "it's not portable" crap, either. I'd be willing to bet that most modern code isn't portable, anyway. :-P If you know the concepts and document your work it's portable. :-)

[ yokelpunk | kuro5hin diary ]
[ Parent ]

let me save everyone the trouble (2.83 / 6) (#41)
by eLuddite on Tue Aug 14, 2001 at 03:14:35 AM EST

In my day, blah, blah, blah.
>In my day, sis, boom, bah
>>In my day, babble, babble, babble
>>> ...
>>>>>>In my day, yadda, yadda, yadda.

You had ones? Luxury! In my day, we did everything with 0s -- uphill.

---
God hates human rights.
[ Parent ]

This isn't necessarily a bad thing. (4.40 / 5) (#3)
by la princesa on Mon Aug 13, 2001 at 10:13:34 PM EST

The article notes that fifty or so years back, it would have taken scientists a sixmonth to write some of the functions presently taken for granted. That's fine and dandy, but just how deep must the understanding run for someone to be a 'real' programmer? A person only has so many years, only so much brainspace to give over to learning things. Does anyone really want to insist that a person learn every detail of a machine in order to call oneself a legit programmer? Must the wheel be reinvented by hand backwards in the snow both ways just for one's code to 'count'?

It's true that too many coders/programmers/scripters just connect the preset dots and don't even try to acquire a theoretical grasp of the underlying process. However, that doesn't mean the old way of inventing each process as one needed it is somehow better. A lot less got done, too. Windows is a bloated mess, but its most recent incarnations are capable of functions that used to be limited to supercomputers. And much of that coding was done by people who didn't necessarily cut their teeth on one piece of hardware or another (another thing--there was a lot of hardware to program on, many kinds of bare metal. Exactly how many would one have to be adept in to be a Mel-style programmer?).

The key to good programming is typically a solid grasp of applied logic. That's not something that MUST be acquired by coding in some flavor of assembly or learning how to manipulate individual binary bits. It's not contingent on knowing the bare metal. If more modern programmers had rigor and structure in their coding, applied logic cleanly, they'd have a solid enough theoretical grasp to be able to drill their way down to bare metal if the need did arise to code on it. One shouldn't get derailed into thinking that programming relies on a certain kind of hardware or language to be 'real programming'. It distracts from the graceful applications of logic that are well crafted programs and which can occur in any language or piece of hardware.

Different skillsets? (4.25 / 4) (#9)
by danceswithcrows on Mon Aug 13, 2001 at 10:42:25 PM EST

There's another way to think about this--maybe not a popular way, but what the heck: A Real Programmer, like the famous Mel, is like an extremely skilled artisan. Give him some tools and some time, and he'll turn out the finest widget you've ever seen.

Now take someone who's not a Real Programmer, but cut his teeth on VC++. Now this non-Real programmer is like a manager on a widget assembly line. More widgets get produced in less time, but they're of lower quality than the widgets that Mel produced. Thing is, for maximum widget output, the non-Real programmer must not think like a master artisan, but more like a middle manager. Different skillset, different methods, same goal.

Generally this is a good thing; people only complain when the assembly line gets so huge that the widgets aren't worth the upkeep, or the widgets are so badly made that they break at the slightest misuse. I think the author of the original piece might've done better to rail against the quality of the assembly lines (flaky compilers, crufty libraries, huge inconsistent poorly-documented APIs) than despair that there are no more Mels.

Hyperspecialization, multiple layers of abstraction, and progressive disconnection from reality are part of the price we pay for civilization. You think M. Antoinette was being flippant when she said, "Let them eat cake?" And the author of the piece linked to was a bit off base wrt debt... lots of English and Scottish peasants ended up in debt back in the 1700s, and they were not particularly separated from life's harsh realities by coined money or bank drafts.

Ah well. Software engineering's a young field, and will probably see many more shakeups and consequent lamentations about how the young'uns don't know how good they have it. I for one am glad I don't have to write "LDA #$40; JSR $FDED;" anymore.

Matt G (aka Dances With Crows) There is no Darkness in Eternity/But only Light too dim for us to see

ObHeinlein. (none / 0) (#54)
by ronin212 on Tue Aug 14, 2001 at 09:50:15 AM EST

Specialization is for insects.

--
Now is the time... get on the right side! You'll be godlike.
[ Parent ]
A9 40 20 ED FD (5.00 / 2) (#69)
by tmoertel on Tue Aug 14, 2001 at 05:47:33 PM EST

I'm amazed that I can still assemble that off the top of my head. It's been -- what? -- over 15 years since I hacked on a 6502-based computer. Judging by the code you used, I suspect a fellow Apple II hacker from the Olde Days. Print an inverse "@", right?

I guess the things we do as children are indelibly burned into our brains. It would seem that I'll never forget how to ride a bike, and I'll never forget how to code the 6502.

Woe unto me that I didn't cut my teeth on Haskell. Would be a bit more useful to have that flowing in my blood today. ;-)
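For anyone who never hand-assembled 6502 code, those five bytes decode mechanically. A minimal annotated sketch, assuming the standard 6502 encodings and taking $FDED to be the Apple II monitor's COUT character-output entry, as both comments imply:

    // A9 is LDA immediate, 20 is JSR absolute, and the JSR target is
    // stored little-endian, so ED FD means address $FDED.
    const unsigned char bytes[] = {
        0xA9, 0x40,        // LDA #$40   ; put the character code in A
        0x20, 0xED, 0xFD   // JSR $FDED  ; call the monitor's COUT routine
    };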

--
My blog | LectroTest

[ Disagree? Reply. ]


[ Parent ]
specialization (3.66 / 3) (#14)
by rebelcool on Mon Aug 13, 2001 at 10:51:51 PM EST

much like the Real World, coding will become specialized. Think of it like this..there are physicists and there are mechanical engineers.

Both are well versed in the principles of physics. However, mechanical engineers are specialists in applying that knowledge in one way, while physicists work on the building blocks.

Programming has become complex enough that this is what will happen. And it should. Hopefully it will lead to better products, much like how specialization has benefited the Real World.

COG. Build your own community. Free, easy, powerful. Demo site

"How computers really work" (4.50 / 4) (#15)
by onyxruby on Mon Aug 13, 2001 at 10:56:50 PM EST

I'd say we reached that day long ago; perhaps we haven't had people who truly understand "how computers really work" since the Home Brew Computer Club. I know a lot of people who know a lot about computers, many of whom have probably forgotten more in their 30+ year careers than I have ever learned. Yet there is not a single one of them that I think could truly say they know "how computers really work".

To truly know how they work, you would have to know far too many disciplines. There isn't a school around that can teach you all of this. There is simply too much information. Knowing everything (microchip architecture, assembly language, memory, storage, I/O interfaces, programming, kernels, graphics, sound, timing, etc.) is nearly impossible. On top of this, to truly know how a computer works you need to know how the operating system itself works: how it interfaces with the hardware, system calls... and it's not enough to know an operating system; now we have cross-platform software, and protocols for OSes to communicate with one another... You need to know far more than how to simply use the OS.

The other complication is knowing exactly what a "computer" is. To know a computer, you certainly need to know how it communicates with other computers. This adds additional layers of complexity as you get into protocols, transmission methods and the like. For example, to know TCP/IP you can't just study what happens on one end, at one level; you get into things like handshakes, retransmits and so on. Right now in the queue there is an article saying that the "Internet could be the computer". There are certainly strong arguments for this; even MS's .NET is leaning in that direction.

Simply put, it is possible to truly understand how computers work, but far fewer of us actually know than think we do. Did I forget to cover troubleshooting?

The moon is covered with the results of astronomical odds.

Mid-eighties... (3.00 / 2) (#26)
by ucblockhead on Mon Aug 13, 2001 at 11:52:06 PM EST

Even as late as the mid-eighties, I'd say that a person could truly understand the entirety of a computer. The 80286 was not a hugely complicated chip by today's standards. And machines were still small enough (640k!) that a person could know the entirety of DOS and the entirety of the IBM PC BIOS.

I personally didn't. I was too young. But back then, the whole body of knowledge could conceivably fit in the mind of one person. Not so today, with the 15+ million lines of code found in the average operating system.

The first machine I used extensively, the Apple ][+, put all the OS in 16k of ROM. That's about 300 or so 25-line pages of assembly output.


-----------------------
This is k5. We're all tools - duxup
[ Parent ]

Understanding Electronics & Computers (3.50 / 2) (#62)
by topham on Tue Aug 14, 2001 at 01:50:28 PM EST

For the sake of discussion, let's at least split the knowledge of computers between the electrical components (electronics) and the logical side (software at any level, plus the logic of the interface chips, i.e. the hardware).

Very few programmers need an understanding of electronics to program a computer. They may need an understanding of how timing can affect a device, but not of the actual electronics behind it.

So, you have a landscape of processors which execute code, and support chips which perform functions, and this is all accessible via assembler. (Arguments about how low-level assembler really is are moot, provided you stick to basic processor instructions and don't lean on an excessive amount of pre-built macros.)

Your next step up is the assembler programmer who lets the assembler handle stack creation and basic housekeeping. They likely know how the stack is handled but take advantage of the assembler to handle it for them.

Then you have a programmer who writes in a higher-level language like C, who may or may not have a clue how the stack or heap works in a system. And they can still write at the OS level.

The application programmer needs to know even less.

[ Parent ]

Yikes! (3.50 / 2) (#65)
by ucblockhead on Tue Aug 14, 2001 at 03:36:26 PM EST

I'd be very, very afraid of C code written by someone who didn't understand how the stack or heap worked... I'm hard-pressed to imagine how you could write efficient C code without that understanding.
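To make the worry concrete, here is the kind of bug that comes from not knowing where stack memory ends -- an illustrative C-style sketch, not code from anyone in this thread:

    #include <cstdlib>
    #include <cstring>

    // Classic mistake: returning a pointer to stack memory that dies
    // the moment the function returns.
    char *broken_greeting() {
        char buf[32];
        std::strcpy(buf, "hello");
        return buf;                 // dangling pointer: buf lived on the stack
    }

    // One fix: allocate on the heap so the memory outlives the call
    // (the caller now owns it and must free it).
    char *working_greeting() {
        char *buf = static_cast<char *>(std::malloc(32));
        if (buf != NULL)
            std::strcpy(buf, "hello");
        return buf;
    }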


-----------------------
This is k5. We're all tools - duxup
[ Parent ]

Has anyone considered embedded programming? (3.50 / 2) (#66)
by nazhuret on Tue Aug 14, 2001 at 03:40:47 PM EST

And I quote (comments follow):

"Some experts estimate that embedded systems technology, which in 1998 is a $250 million industry, will be worth more than $2 billion within three years. Predictions are based on the commercial promise of smart devices. According to market researchers, consumers love electronic equipment that can do "smart" things like: transmit instructions to other devices wirelessly via infrared signals; be programmed to operate automatically; and connect to super-technologies, such as satellites, to bring remote power into their own hands."
The Moschovitis Group, The History of The Internet, 1999

"The just-emerging third era will be dominated by computers in disguise--"invisible" computers hidden inside cell phones, cars, home electronics, appliances, game consoles, personal digital assistants, and all manner of other gizmos. And no one or two companies will enjoy the dominance of IBM, Microsoft, or Intel in their heydays."
David Coursey, Executive Editor, AnchorDesk, ZDNet

Embedded development tools sales went from $690M in 1997 to $814.7M in 1998 - an increase of 18.1%. - Venture Development Corporation. Growth occurred only in companies that had quality products and customer service.
Microcontroller.com, CPU Technologies

It seems clear to me that embedded systems are a substantial portion of programming practices. And the embedded systems programmer _needs_ to understand the hardware underlying the software. If an embedded systems programmer does not know how the electronics work, how can s/he program efficiently?

[ Parent ]
Embedded programming is FUN! (3.33 / 3) (#67)
by djkimmel on Tue Aug 14, 2001 at 04:27:05 PM EST

And cheap too!

I build electronic stuff as a hobby. It waxed and waned over the years, but then I bought myself a Basic Stamp (see this article: An Intro to Microcontrollers) and got back into it in a big way.

Since buying the Basic Stamp, I've also bought two OOPics. These are cool too. The Basic Stamp has been relegated to being used for one-offs, prototyping, and the occasional use as a test tool (providing simulated inputs, recording the outputs). One of the OOPics is sitting on a robot right now, the other is on my workbench waiting for my next devious plan.

Somewhere along the way, I built a PIC programmer and bought a bunch of PICs (2 16F877s, 4 or 5 16F84s) and use them for all sorts of stuff. One of the 16F877s runs a robot, one of the 16F84s runs a digital clock on my desk.

Programming PICs is too cool - they may be slow (4MHz for the one on the robot, 32.768kHz on the clock), but you can make them do a LOT when working at such a low level!

You do really need to know how stuff works to make it all work, though. When I was building the clock, for example, I learned EXACTLY why quartz clocks use 32.768kHz crystals (it's easy to divide down to 1Hz using powers of 2). I also learned a little bit about persistence of vision, since I'm using multiplexed LED displays. I also learned that "proper code" is not necessarily the best thing for an embedded device. My display routine used to be a subroutine that automatically incremented a pointer, displayed the next digit, etc. -- all you had to do was call it every so often. Well, that worked fine when I was prototyping using a 4MHz clock, but not so well when I was using a 32.768kHz one, so I streamlined and inlined that code.
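The crystal arithmetic is worth spelling out: 32,768 is exactly 2 to the 15th power, so fifteen divide-by-two stages bring the crystal frequency down to a clean 1Hz tick. A tiny illustrative sketch of that chain (not the actual PIC code, which isn't shown here):

    #include <iostream>

    int main() {
        int freq = 32768;                  // 32.768kHz watch crystal
        for (int stage = 1; stage <= 15; ++stage) {
            freq /= 2;                     // one binary divider (flip-flop) stage
            std::cout << "after stage " << stage << ": " << freq << " Hz\n";
        }
        return 0;                          // freq is now exactly 1
    }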

I think that the most valuable lesson I learned out of this though was this rather simple one: Human flesh REALLY, REALLY HURTS when it comes into contact with a hot soldering iron!!
-- Dave
[ Parent ]

It all boils down to the same thing (3.50 / 2) (#68)
by Big Dave Diode on Tue Aug 14, 2001 at 05:29:36 PM EST

In embedded systems you have to deal with hardware interfaces, while when programming on larger more complex machines you have to deal with complex APIs and class libraries instead. In my experience anyway, you still draw on the same kinds of skills.

There's just more "bit-twiddling" in embedded systems. It is kind of fun working with them, although tiny memory spaces can get kind of tiresome.



[ Parent ]
Favorite Quote (4.42 / 7) (#17)
by eroberts00 on Mon Aug 13, 2001 at 11:13:52 PM EST

Men are no longer men. Programmers no longer need to understand how computers work, they just have to understand the layer of separation nearest to them.

I wonder how many programmers ever actually understood how their computers worked? I mean down to the atomic level of each and every transistor and all of their interconnections.

Every programmer has always had layers of separation between them and the real hardware. That's the whole point. It's the job of the person who made the layer below yours to make it so you don't have to understand anything except the interface. It's not a problem that everyone doesn't work at the same level of abstraction; it's a benefit.



Understanding (3.66 / 3) (#20)
by egerlach on Mon Aug 13, 2001 at 11:30:59 PM EST

If you don't understand what's going on at all the levels in theory, you'll be weaker at the higher levels. It's like trying to build a skyscraper without digging a foundation.

I'm in a university programme where they teach Java first and then C++, compilers first and then hardware. In the former case, students see pointers for the first time and freak, because they don't understand what's happening inside the computer and what pointers really mean. In the latter case, students doing code generation don't have a picture in their minds of what those instructions are actually doing, and so the generated code doesn't end up doing what they expect.
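A pointer is less mysterious once you see it is just an address with a type attached; a minimal sketch (illustrative only, not tied to any particular curriculum):

    #include <iostream>

    int main() {
        int x = 42;
        int *p = &x;        // p holds the address of x, nothing more
        *p = 7;             // writing "through" the pointer changes x itself
        std::cout << "x = " << x << ", stored at address " << p << "\n";
        return 0;
    }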

I don't know all the specifics of an Intel processor, or what algorithms are used to optimise code in gcc... but I have an idea of the theory, and that makes my coding ability on high-level projects that much superior.

"Free beer tends to lead to free speech"
[ Parent ]
General understanding (4.33 / 3) (#25)
by eroberts00 on Mon Aug 13, 2001 at 11:48:32 PM EST

Having a general understanding of the theory behind something and knowing the in-depth specifics are two separate things. This article was bemoaning the fact that people no longer understand the hardware level in every detail. I was merely pointing out that this has always been the case and always will be. And that is a good thing.

As for building a skyscraper, no one person understands and comprehends every detail of one of those either. It takes many people with many different specialties, at many different levels of abstraction, to do anything that complex.



[ Parent ]
Acquiring understanding (4.00 / 3) (#38)
by jasonab on Tue Aug 14, 2001 at 01:07:24 AM EST

I'm in a university programme where they teach Java first and then C++, compilers first and then hardware.
And that's how it should be. You teach abstraction first -- algorithms. Once a student understands how to think, then you get into details. Pointers are useless unless you understand how to use them to get to a result. Many CS programs I know teach a very high level language (e.g. Scheme or just pseudocode) as an intro class.

[ Parent ]
Dangers of top down (3.50 / 2) (#59)
by waif on Tue Aug 14, 2001 at 11:50:02 AM EST

You teach abstraction first -- algorithms

Let me explain why I am cautious about this approach with a story that I was told [0]. A company was interviewing newly graduated CS students for programming positions. One of the questions was to write up a simple linked list interface (something very simple, say a LL of ints). They kept getting applicants who would start trying to make classes and object types, constructors, black-box type implementations... and they were taking a long time to think it all out and get it on paper.

The company was thinking more along the lines of a C implementation (or outline thereof), which takes about 10 minutes to sketch up on a blackboard.
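For reference, a blackboard-sized, C-style sketch of roughly what the interviewers presumably had in mind -- the actual question isn't given, so the interface below is only a guess at "a LL of ints":

    #include <cstdlib>

    // A singly linked list of ints: one node struct and a few functions.
    struct node {
        int value;
        struct node *next;
    };

    // Push a value onto the front of the list; returns the new head.
    struct node *push(struct node *head, int value) {
        struct node *n = (struct node *)std::malloc(sizeof(struct node));
        if (n == NULL)
            return head;                 // out of memory: leave the list alone
        n->value = value;
        n->next = head;
        return n;
    }

    // Count the nodes -- an O(n) walk, as discussed elsewhere in the thread.
    int length(const struct node *head) {
        int count = 0;
        for (; head != NULL; head = head->next)
            ++count;
        return count;
    }

    // Free every node.
    void destroy(struct node *head) {
        while (head != NULL) {
            struct node *next = head->next;
            std::free(head);
            head = next;
        }
    }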

The applicants had basically gone through their program taking introductory classes in C++ or Java (I forget which), and then sticking with that language for all or most of the required courses. The students could graduate without really knowing how to do things at a lower level.

This isn't necessarily a direct contradiction of your point of view that algorithms should take precedence. The problem seems to be that when you do teach algorithms, you need some illustration of the algorithm (i.e. an implementation in some language). Students may tie the algorithm and its implementation together too tightly to separate them mentally. IMO, this is more of a problem at the higher levels, where the abstraction may not be easily mappable to the lower-level representation.

I think there's other reasons to opt for a more bottom-up approach to teaching CS, but I won't get into them here. (I could go on for all too long if I tried...)

Aside from my bias, I believe that the above problem would be less prevalent if the students had been taught a more diverse range of languages, methodologies, abstraction levels, and so on.

I'm not against higher-level languages. I'm just of the opinion that unless we start having formal levels of CS-ness (akin to nurses, nurse practitioners, doctors, etc.), computer scientists should have a solid grounding in lower-level programming. I also think it's easier to abstract more rather than less. (It seems that those who start with high-level languages are more reluctant to learn about lower levels than vice versa.)

Finally, a random wishlist I have regarding what I think my university should have done, or should do in the future:

  • Have a required language course, or courses. It seems like at most places you can get by knowing whatever the introductory courses are taught in. I'd've loved to have a class available on Lisp or ML. Scripting languages weren't even mentioned; how about an introduction to Perl, Python, shell scripts? Instead, I taught myself. I don't think many are motivated to do so[1].
  • Programming tools outside of the language itself. From a Unix-centric standpoint: debuggers (like gdb), makefiles, lint. Common libraries. Regular expressions. CVS or something similar.
  • Heck, most approaches inside the language aren't taught either. No required class discussed benchmarking (profiling), the dangers of buffer overflows, proper memory management, any good way to do garbage collection, and so on.
  • I need to re-emphasize that buffer overflow thing. I ran into it A LOT as a grader (see the sketch just after this list). The professors I was grading for only let me take off so many points for it, unfortunately. This needs to be driven home hard at an introductory level (or at least, at whatever level it's first possible, given the introductory language). It's simple, it's been known forever, and it's a gaping security hole in the wrong application. (And an annoyance in any application.) The only time this was mentioned at all was in a (non-required) Virus course that most people avoid (it's mostly in assembly and taught by a grad student who has a reputation for being tough; IMO he's one of the better teachers in the CS department).
  • A real OO course. I wanted to get a better grounding in this area, really[2]. But the "OO" course was not a course in OO, it was a course in a certain Smalltalk interpreter. The only other possibility was the Java course, which was more about the API than the language. I didn't really learn anything. (Granted, the problem here is lack of review, not lack of courses.)
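Since the buffer overflow point above keeps coming up, here is the textbook case graders see, next to one safer variant -- an illustrative sketch, not any particular student's code:

    #include <cstdio>
    #include <cstring>

    // The classic overflow: the input may be longer than the buffer,
    // and strcpy will happily write past the end of name.
    void greet_unsafe(const char *input) {
        char name[16];
        std::strcpy(name, input);                        // no bounds check
        std::printf("Hello, %s\n", name);
    }

    // A safer version: cap the copy at the size of the destination.
    void greet_safer(const char *input) {
        char name[16];
        std::snprintf(name, sizeof(name), "%s", input);  // always terminated
        std::printf("Hello, %s\n", name);
    }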

-waif

[0] This was told informally (not in class) by some professor, unfortunately I don't remember which one much less what company this was at, etc.
[1] Partially because they aren't aware, and partially because - IMNSHO - there's a number of people in CS and even graduating from CS programs who shouldn't be. But that's another rant....
[2] I'm not so fond of the concept any more. That's not really relevant though.

[ Parent ]

Abstraction != Incompetence (4.50 / 2) (#77)
by ttfkam on Wed Aug 15, 2001 at 02:53:33 PM EST

The company was thinking more along the lines of a C implementation
Yes, I agree with you, linked lists should be known, understood, and easily implemented by a CS graduate. Also note that the use of objects is a totally valid method of creating linked lists. If you wanted a linked list specifically without the use of classes, you should have specified that as part of the test requirements.

However, a programmer should almost never be writing a linked list implementation in production code. Why? Most languages' standard libraries come with already made, highly optimized linked lists. In short, a CS graduate should (a) know how to conceptualize a linked list, (b) know its performance characteristics, and (c) know that they should be using things like std::list or a C equivalent instead.

The students could graduate without really knowing how to do things at a lower level
How low is necessary? Should they know how to construct OR-gates? Is this necessary for a general-purpose programmer? Any level below the level at which you work is mostly irrelevant to the problem at hand. Linked lists should be thought of as base programmatic constructs only slightly above struct itself. All a programmer really needs to know is that insertion at an arbitrary point is constant time, O(1), once you're positioned there; that finding the length is a linear operation, O(n); and when a list is the correct use of resources for a problem (versus a binary tree, array, vector, etc.).
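To make that concrete, a small illustrative std::list example (not a benchmark): the insert itself is O(1); walking an iterator to the insertion point is the O(n) part.

    #include <iostream>
    #include <iterator>
    #include <list>

    int main() {
        std::list<int> xs;
        for (int i = 1; i <= 5; ++i)
            if (i != 3)
                xs.push_back(i);           // list holds 1 2 4 5

        std::list<int>::iterator it = xs.begin();
        std::advance(it, 2);               // O(n): step past 1 and 2
        xs.insert(it, 3);                  // O(1): list is now 1 2 3 4 5

        for (std::list<int>::iterator j = xs.begin(); j != xs.end(); ++j)
            std::cout << *j << ' ';
        std::cout << '\n';
        return 0;
    }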

Who cares if I know how to implement it in C unless (a) for my job, I am going to be writing C and (b) my job entails writing relatively low level libraries.

Note: I know C. I know how to write a linked list in C. I have not had to write a linked list in C from scratch for a job for a long time now. I have mostly migrated away from C. And I challenge someone to prove to me that std::list and boost::slist are fundamentally slower than any C-based linked list that you plan on whipping up.

If they know the concept and can do their job, creation of a linked list in C is a non-issue. Chances are that some of your applicants were taught to make lists that were typesafe, generic, and did not easily allow memory leakage: qualities usually lacking in C-based example code.

I submit that knowledge of the system should come first. Looking at how it is implemented comes next (with linked lists, binary trees, red-black trees, hashtables, maps, etc.). Next comes a look at how those mid-level constructs are created (allocation of nodes, setting ->next, etc.). Next is stack and heap implementations. Then on to the underlying hardware architecture. And on and on.

The more advanced the programmer, the lower they go. When a job doesn't go very low, there is little need for the advanced programmer (except in an advisory/mentor role).


If I'm made in God's image then God needs to lay off the corn chips and onion dip. Get some exercise, God! - Tatarigami
[ Parent ]

Ok (2.50 / 2) (#49)
by Simon Kinahan on Tue Aug 14, 2001 at 09:12:28 AM EST

Are you ready for your postgraduate course in quantum mechanics? Understanding how those transistors work will really improve your Java programming skills.

</sarcasm>

Simon

If you disagree, post, don't moderate
[ Parent ]
My favorite quote (3.00 / 1) (#60)
by Y on Tue Aug 14, 2001 at 12:18:54 PM EST

A few hundred years ago, it was much more difficult to get into serious debt because people were much closer to reality

There is no factual evidence to support this. It is a wild claim which the author would have you believe in order to support her preposterous argument. In fact, a few hundred years ago, getting into debt could lead to indentured servitude, which I would consider pretty damn serious, much more serious than shooting your credit rating in the foot.

So why reinvent the wheel? Because it's manly? This article was little more than an incoherent ramble bemoaning the end of the "good old days." Yep, I remember the good old days, when people were apes and we didn't have this civilization thing keeping us from the reality of the outside world.

- Mike Y.

[ Parent ]

How does reliability figure in to this? (3.66 / 3) (#19)
by mjs on Mon Aug 13, 2001 at 11:26:18 PM EST

The idea of 'layers of separation' has been brought up before (not necessarily on K5; in the professional literature, at any rate). Certainly it is easier today to build a complex end-user application than it was, say, 20 years ago, but one thing I've always wondered and never had the data to answer is: are applications built using modern frameworks more or less reliable than applications built 'from the ground up' 20 years ago? Personally, I have mixed feelings: memory tells me that after the first couple of weeks an application was in use, the reason we most often had to go back into it was that the business had changed, so the application had to change too. But that's obviously a very narrow perspective: at the same time, it was common wisdom that one didn't upgrade to the latest version of the operating system until IBM had a pretty full PTF tape of bug fixes to apply. If you upgraded before your CE actually had a PTF tape in his hands, you were asking for trouble.

I realize that there are a lot of programmers here with less than 20 years of experience, but I don't know that this is particularly relevant. We all use or have used applications written 20-ish years ago (DOS comes to mind, as well as some games), and in all likelihood someone who has come into the field in the last 10 years has a very different viewpoint than I do, so I'm interested in hearing everyone's opinion.

Classic (4.00 / 2) (#50)
by avdi on Tue Aug 14, 2001 at 09:14:24 AM EST

You can never return to true programming.
Because real programmers code in assembly! Pah. In every age we'll have people saying "real programmers do x", where x is whatever level of abstraction was all the rage when the speaker went to school. I have no doubt that in a few years people will be saying "real programmers code in Perl". The insinuation that by coding at a higher level of abstraction we aren't as l33t as the hackers of yore is just absurd. Programming has always been about adding layer upon layer of abstraction, which has nothing to do with the skill level of the people writing the code. 50 years from now there will still be the equivalent of the average VB programmers of today, who know very little about the systems they code on; and there will still be wizardly engineers who manage unbelievable amounts of complexity in their heads to arrive at an elegant, easy-to-use system. The complexity won't have changed; merely the abstraction level will have.

--
Now leave us, and take your fish with you. - Faramir
The hard part (4.50 / 2) (#51)
by Simon Kinahan on Tue Aug 14, 2001 at 09:25:30 AM EST

The hard part of programming is not knowing the details of some hardware platform, be it x86 assembler or the Win32 API, but being able to reduce a human-level understanding of what needs to be done into a specification precise enough for the machine to use. Even harder is being able to take something someone else understands at a human level, get the details from them, and turn that into code.

This is why "dragging a couple of icons" will never be the be-all and end-all of programming, at least not until the advent of strong AI. Computers simply cannot accept specifications that are not concrete and precise in every detail, because they don't have the background knowledge and social context humans have.

Of course, it is still worth learning how things work, if only for the hell of it, and it is worth learning the lower level details underlying the platform you use, because if you do anything non-trivial one day it will force you to back down a level of abstraction. Layering is not perfect, but it is still a good idea.

Simon

If you disagree, post, don't moderate
Printing Press and economies of scale (4.66 / 3) (#53)
by jabber on Tue Aug 14, 2001 at 09:46:05 AM EST

Every once in a while, an article like this comes up bemoaning the loss of knowledge and skill to new technology. To a certain extent and from a certain perspective these articles are dead-on, but from another, they are all just so much Chicken Little fear-mongering.

How many people mourned the passing of illuminated manuscripts when movable type took the world by storm? How many people missed oil lighting when electricity became commonplace? How great was the outcry when synthetic fibers displaced natural ones in the manufacture of clothing?

We see the same things today. Children can not spell, write or do arithmetic in their heads because of spelling and grammar checking software and calculators. Penmanship is a 'lost art' due to the prevalence of the keyboard. Everyone is out of shape because they tend to drive rather than walk. Hardly anyone knows how to pluck, clean and dress a chicken anymore. So what?

The point I am trying to make is three-fold.

First, computing, being a part of engineering, is about making life simpler. It is about taking previous knowledge and building upon it. If the previous layer of knowledge can be encapsulated, automated and abstracted into a 'black box', so that new and easier things can be built on top of it, then why not? Should we all swim across the river rather than taking the bridge?

Second, the 'Ye Olde Arts' are not about to go the way of the Dodo. Illuminated manuscript is still being produced today, not out of necessity but out of passion for the art form. A dear friend of mine does beautiful calligraphy, and could very easily pick up the additional skill of foiling to make her work 'illuminated'. I know people who make armor and white-arms for a living. I know people who own and regularly use oil lamps. I'm wearing cotton at this very moment.

Third, specialization improves efficiency. Yes, the neighborhood auto garage staffs jack-of-all-trades mechanics who are able to do a decent job with almost any problem you may face, but your dealer probably has a brake specialist trained to do factory-spec work on the make of car you drive. The specialist can likely get twice as many cars done in a day as Joe the all-around-guy.

Interchangeable parts were a boon to all manufacturing, as were experts. Guilds were composed of Master Craftsmen who knew the trade inside and out, and Apprentices who wanted to become Masters. This is no longer necessary. An apprentice can easily be 'certified' to be 'master' of a specific niche; then he can easily be trusted not to foul up that part of the work. He is useful sooner.

There is a lesson to be learned in the marginalization of the piece-worker. Not only does assembly-line work stupefy the mind, it makes the 'expert' disposable as soon as their job can be wrapped into an automagical 'black box', yes. This is a shame and a pity, which is why I always advocate University education over professional training. But developing software as if by assembly line is a very effective approach when you have a jelled team of domain experts who can hand 'black boxes' to one another as they crank out yet another killer app, on time and under budget.

There is a balance to be had here between one-trick-ponies and renaissance men. There is room to be moderately specialized within some layer of abstraction and curious enough about others to keep learning. This is far from a black and white issue, and while a GUI developer ought to be aware of how computers work in general, they really should not have to care about their particular platform from such a great height. To ask a VB or Java coder to worry about the hardware layer is like asking Netscape to implement 802.3, OS service routines, IDE driver, etc...

Separation of concerns allows computers to do what they do. It allows us to do what we do. I don't need to be schooled in Law, I have a 'black box' I can hire. I don't need to go to Medical school when I get sick, I just make a call through the interface to a layer of abstraction containing such knowledge. This is no different.

[TINK5C] |"Is K5 my kapusta intellectual teddy bear?"| "Yes"

Misses the point entirely (4.00 / 2) (#61)
by Big Dave Diode on Tue Aug 14, 2001 at 01:42:22 PM EST

The article was obviously written by someone who confuses the necessity of encapsulating complexity with the problem of incompletely understanding the system as a whole. As computer hardware and operating systems inevitably become more complex, more "layers of separation" are required in order to use them.

Programming hasn't become easier with all the fancy APIs and RAD tools that we use these days. However, with all the infrastructure available, programmers can be far more productive than they used to be. Good programmers still understand how things work at lower levels, but getting into picky little low level details is no longer necessary.

It is doubtful that programming will ever be as easy as dragging icons around. Some parts of it may be reduced to that kind of activity, but there will always be a requirement for the same kind of thinking and creativity that is needed today.

I have often considered programming a form of "complexity management", but that is a philosophical digression that can be left for another post...



To all those who say kids don't know anything... (5.00 / 5) (#63)
by evanbd on Tue Aug 14, 2001 at 02:36:08 PM EST

I think I make a nice counter-example. No, I'm not your average teenage programmer geek. I have a couple of friends who are, but I'm not. I program some. I could knock off 300 lines in an afternoon, or a thousand in a week, or maybe more given more time, but I don't do huge projects. I mostly code Java. But this fall, I decided to learn in detail how the very guts of a computer worked, from the logic gates up. I decided to build one.

A few friends and I undertook the task of designing and building a functional computer from basic logic gates and a few more complicated chips (the complete list: basic gates, adders, memory, multiplexers, latches, flip-flops, and shift registers). The design goal was to have it work with the minimum circuit complexity. The final design had around 200 discrete chips. As for specs: about 1MHz, with one instruction every five clocks. Some executed faster, but they had an idle cycle to make the routing logic simpler. One ALU (also used for address computation). The whole thing was 16 bits wide. Instructions were all 1 word (which made loading from memory easier, but they were bitchy; addresses came in the form of a register plus a five-bit offset). 8 registers, with r0 being a hard 0, r6 being the stack pointer (nothing special, just how it was used), and r7 being the program counter (which made for PC-relative jumps). The ISA said r7 was read-only except for jumps, but the circuit couldn't care less.
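That register-plus-five-bit-offset addressing is easy to picture in code. The sketch below assumes the offset is sign-extended and ignores the rest of the (unspecified) instruction encoding -- those details are an assumption for illustration, not the team's actual design:

    #include <iostream>

    // Hypothetical decode of a "register + five-bit offset" address.
    // The sign-extension (range -16..+15) is assumed, not documented above.
    unsigned short effective_address(unsigned short reg_value,
                                     unsigned short offset_field) {
        int offset = offset_field & 0x1F;   // keep the low 5 bits
        if (offset & 0x10)                  // high bit of the 5-bit field set?
            offset -= 0x20;                 // sign-extend
        return static_cast<unsigned short>(reg_value + offset);
    }

    int main() {
        // 0x1F is -1 in 5-bit two's complement, so this prints 4095 (0x0FFF).
        std::cout << effective_address(0x1000, 0x1F) << "\n";
        return 0;
    }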

There were nine total instructions, if you count IO read/write (no memory-mapped IO), which the circuit didn't implement; the ISA had them for completeness in case we cared to go back and change it. They were: load, store, jump, conditional jump, add, AND, IO put, IO get, and the funky non-memory-referencing instruction that did stuff to one register. That last one gave you four kinds of shift, NOT, increment, and a byte swap, in any combination; it just exposed the ALU control lines. (The byte swap dealt with the fact that you couldn't load a byte from an odd address: you loaded from the odd address, which actually gave you the even one before it and set a flag, and you could then conditionally byte-swap to get the right byte into the low-order half of the register.) We pulled some clever tricks so we could get a 32-bit multiply. All very cool. Did I mention it was probably the funkiest ISA ever designed?

We had hardwired electronics for loading off a carefully sequenced parallel port on a PC. The plan was to read in all 64K of memory off the parallel port, let it run, and then read it back out. The UI was nonexistent, just direct memory manipulation. Not even switches and lights. It was really cool. We designed a simpler chip than the original 8008, by transistor count. Programming was a *bitch*, but that's not the point. And you thought x86 memory addressing was bad...

Anyway, we had the vast majority of the circuit design done in the first semester. Interest in actually doing the wire wrap was low, and we were slightly behind schedule, and we ended up not building it. But, we still learned a hell of a lot. And, it appears as a quarter credit on my transcript (half-time, one semester). Oh, did I mention it was fun?

Anyway, I don't know x86 assembly, I don't really understand the guts of any OS, but I did craft an ISA and hardware to run it from the ground up. Oh, and we did go learn about the basic functioning of the CMOS we were building from, just for the hell of it. I'm not entirely sure what the point of this was, except to tell all the "old folks" out there who think their TRS-80s were cool: our computer was cooler. And it only cost about $300. Well, had it been finished. And I'll bet we learned just as much anyway -- about other things, but still. Knowing that every extra chip you have is going to be another chip you (personally) have to squeeze onto the wire-wrap board, purchase, place, verify the placement of, and wire-wrap properly is as good a motivator for creative use of resources as anything. We're all heading off to college now. Of the 7 of us, one is in CE, one in CS, one in Chem E, one in Psych, and 3 undecided.

Hey, kid... (3.50 / 2) (#64)
by ucblockhead on Tue Aug 14, 2001 at 03:33:58 PM EST

Were I able to hire, you'd be getting a job offer.
-----------------------
This is k5. We're all tools - duxup
[ Parent ]
Heh :) (none / 0) (#79)
by evanbd on Thu Aug 16, 2001 at 10:51:18 AM EST

And here I am, entering as a freshman at NCSU. I have a job this summer doing web stuff, mostly Java / HTML. It doesn't put these things to use, but much of it's interesting enough. Send me a summer internship offer and I might just be interested :)

[ Parent ]
Separation is fine: Optimize the bigger picture! (4.50 / 2) (#70)
by tmoertel on Tue Aug 14, 2001 at 06:17:59 PM EST

Optimize not just your code, but the entire development process. Work efficiently. Work at the right level of abstraction, which is usually the highest you can use. Most of the time, you don't need to know or even care about what is happening on the bare metal, so why not work at a higher level?

Almost always, the single most important resource is programmer time. If you optimize for programmer time first, you'll find that you're in a much better position to solve other problems like slow execution speed or overly-greedy resource consumption.

Here are the rules of thumb that I use:

  • Work at the highest-possible level of abstraction.
  • Measure -- don't guess about -- the qualities of your software that are important.
  • In areas where the measurements indicate real problems -- not imagined -- drop down into lower levels of abstraction as necessary to solve the problems.
The key is to avoid fooling yourself into believing that every cycle, every byte of memory is significant -- they're not. Only some cycles count, and only some bytes matter. Don't waste your time optimizing the rest. Measure. And then spend your time where it counts.
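In that spirit, even the cheapest measurement beats a guess. A minimal sketch of timing a suspected hot spot before deciding it deserves hand-tuning (illustrative; a real profiler gives far better data):

    #include <chrono>
    #include <cstddef>
    #include <iostream>
    #include <vector>

    int main() {
        std::vector<double> v(1000000, 1.5);

        std::chrono::steady_clock::time_point start =
            std::chrono::steady_clock::now();

        double sum = 0.0;
        for (std::size_t i = 0; i < v.size(); ++i)
            sum += v[i];                   // the loop we suspect is "slow"

        std::chrono::steady_clock::time_point stop =
            std::chrono::steady_clock::now();
        long long us = std::chrono::duration_cast<std::chrono::microseconds>(
                           stop - start).count();

        // Only if this number matters in context is it worth dropping down
        // a level of abstraction to speed the loop up.
        std::cout << "sum=" << sum << " took " << us << " us\n";
        return 0;
    }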

--
My blog | LectroTest

[ Disagree? Reply. ]


It is a description of the past. (3.00 / 2) (#72)
by your_desired_username on Tue Aug 14, 2001 at 08:23:00 PM EST

The real question is, is this a prediction of the future or a description of the present?

It is a description of the past. Modern programmers have advanced beyond having no understanding of how computers really work. We have reached a higher plane where many things are beneath Our understanding. Dark secrets such as how our computers work, how the software we write works, why it even compiles and what it is good for are all safely encapsulated in sugar-coated layers of abstraction. OOP has blessed us with a godlike superiority that places us above and beyond the needs and limitations of reality. We are unfamiliar with machine language for the same reasons modern protestant pastors know nothing of the hideous rites of ancient Lovecraftian gods.

This state of enlightenment is both good and natural. To imply that we should lower ourselves to understand the difference between nop and add, or the operation of a power switch is both obscene and perverse. Most importantly, it threatens our over-inflated egos. The importance of low-level computer operation must be understated as much as possible, lest we start losing jobs faster than html-monkeys at a dot.bomb.



*ahem* (4.33 / 3) (#73)
by Elendale on Wed Aug 15, 2001 at 12:03:45 AM EST

Hate to say this, but it doesn't matter whether average Joe Coder knows six flavors of assembly, how to fab chips from scratch, and God knows what else -- it does matter that this stuff is learnable. For example, I don't know one bit of machine code for my box, but given the need/desire and time I can work it out. The difference (as is pointed out in the article) between many abstractions and the programming abstraction is that with programming you can look at any of the abstraction layers you want. Sure, in another ten years Perl will have evolved to the point that the only command is "compile", but if you feel like it you can hand-craft an entire operating system from the ground up. The credit card example perfectly demonstrates my point here: have you tried to barter for anything lately? Have you tried to code some assembly lately? I imagine one of these is going to work, and one will not.

-Elendale (Well, OK, so you can barter if you go to the right places, and one has to learn assembly to code assembly, but you know what I mean...)
---

When free speech is outlawed, only criminals will complain.


Almost forgot (4.00 / 2) (#78)
by ttfkam on Wed Aug 15, 2001 at 03:07:51 PM EST

I'm just of the opinion that unless we start having formal levels of CS-ness (akin to nurses, nurse practitioners, doctors, etc.), computer scientists should have a solid grounding in lower level programming.
Like degrees in Computer Science, Computer & Information Sciences, Computer Engineering, etc.?

It has been proposed that Computer Science has little to do with programming. It is algorithm analysis, and overall it is closer to the mathematics department than a programming-specific discipline.

So if there is to be a split as you propose (not a bad idea now that I think of it), maybe there should be some distinction between a programming track (closer to CIS) and the standard CS track.

On the other hand, maybe we are expecting too much for an undergraduate education? Maybe some of these things are aspects of programming that should be thought of more at the graduate level or indicative of professional experience.

Maybe we aren't pushing students as hard as we should. Maybe pushing them harder won't improve things much. Maybe four years simply isn't sufficient to take someone who passed the AP tests to the levels you are suggesting. Maybe it's the fault of the instructor. Maybe buffer overflows are too deep into security concerns for someone who is just learning what a linked list is. Maybe it's the responsibility of the business world to train people properly instead of expecting University to teach everything. It's taken me years to get from basic for-loops to where I am now.

Maybe University should be thought of as the place to start learning instead of the place to learn everything an individual will need. Maybe I'm just ranting. Hopefully, this will spawn some lively debate.


If I'm made in God's image then God needs to lay off the corn chips and onion dip. Get some exercise, God! - Tatarigami
