My first computer was a Commodore PET 3032, with (guess what!) 32K of RAM. Yes, that's a K, not an M. :)
My second computer, though, was a BBC Model B, which had 64K of memory, but 32K of that was allocated to ROM and "Sideways ROM". Of the remainder, the graphics could take anything from 1K ("Mode 7" teletext) to 20K (modes 0-2, I believe).
These computers were large enough and powerful enough to be used as embedded computers, although the term didn't exist back then.
(The BBC was especially good for that, having a parallel port, a serial RS-423 port -and- a 4-channel ADC, not to mention RGB and analogue video. It even had a connector for another processor, though this would largely turn the BBC into an over-powered video processor.)
Way before I was born, the Manchester Baby (the Small-Scale Experimental Machine, forerunner of the Mark I) managed to run a program for finding the highest common factor of two 32-bit integers. The program, plus storage, took less memory than the URL for this site.
So, what's the point of reciting all that?
The point is, we -can- become far more efficient in what we're doing. Reusability is not the enemy of efficiency - it's the much-maligned ally. (Take the Linux kernel as an example - just how many min/max functions DO you need? By having one reusable function, you save yourself space -AND- grief.)
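To make that min/max point concrete, here's a rough sketch of the kind of reusable, type-checked min the kernel defines - my_min is my own name for it, and the kernel's real macro is more elaborate, but the idea is the same. One definition, each argument evaluated exactly once, and the compiler complains if you mix types:

    #include <stdio.h>

    /* my_min() is a sketch of the idea, not the kernel's actual macro.
     * It relies on GNU C extensions (statement expressions and typeof),
     * just as the kernel does.  Each argument is evaluated exactly once,
     * and the pointer comparison makes the compiler warn if x and y have
     * different types - one definition, shared by every caller. */
    #define my_min(x, y) ({                             \
            typeof(x) _a = (x);                         \
            typeof(y) _b = (y);                         \
            (void)(&_a == &_b); /* type check only */   \
            _a < _b ? _a : _b; })

    int main(void)
    {
            int burst = 7, budget = 5;

            /* no hand-rolled ternary, no double evaluation of side effects */
            printf("min is %d\n", my_min(burst, budget));
            return 0;
    }

Multiply that by a few hundred call sites and you can see where the saving in space - and in grief - comes from.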
The overhead of a reusable tool is not insignificant, but if the tool is designed right, the overhead is considerably less than that of re-implementing. The secret is in that "design" bit.
It's been said that if builders built buildings the way coders wrote programs, the first woodpecker that came along would destroy civilization. I can believe that, too. Code is generally designed to meet deadlines, win approval from the boss and awe the customer with the fancy feature list. It's not designed to function.
One dream I have, that I know will never be realised, would be for a small army of coders to take Linux, the entire FSF software collection, Gnome, KDE, Qt, Berlin, and every other free piece of software in existence, move into a closed camp, and just get it right.
It shouldn't be hard, assuming anyone gave enough of a damn about code quality to actually fund something like that. You just produce specs from the code, debug the specs, then re-engineer the code, using the specs to avoid redundancy.
I did something like this when working for a former company. I reduced 12 megs of source and a 10-meg binary to 6 megs of source and a 1.6-meg binary. It was almost effortless, lifting off all that fat. Compiling with optimization brought the binary down to 720K - the size of a double-density 3.5" floppy.
Not only did I reduce the waste, I increased the performance. (Much less swapping, for a start!) I increased the reliability (fewer components to go wrong), and I increased the understandability (one function, one purpose; one purpose, one function.)
If I can do that, so can any other coder. So get on with it!