Kuro5hin.org: technology and culture, from the trenches

Computers are funny

By Defect in Op-Ed
Mon Oct 02, 2000 at 09:52:22 PM EST
Tags: Software (all tags)

If funny were a feeling roughly equivalent to getting a nail shoved through your forehead.

Whose fault is it that software cripples our computers? Are we just too understanding? I'm sick of it.

Don't get me wrong, I love computers. But why is it that so many software developers can't ship software free of fatal bugs? What gets me even more is that software can be released with huge bugs even after going through a beta test period.

A quote from a video game forum posted by one of the developers:

"The memory leak is a known bug and will be fixed in the next patch."

A known bug that will be patched at an as-yet-unannounced date.

For those of you unfamiliar with memory leaks, refer to the metaphor in my opening paragraph, but add "having fun" (or "doing work," if that's your vice) before the nail-through-the-forehead bit, and that's what it's like. There is nothing quite as frustrating as going along expecting everything to be fine and then, BAM, your computer freezes. (The expecting is the problem. Expect nothing from computers.)
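For readers who have never chased one: a memory leak in C boils down to allocated memory whose last pointer is lost. A minimal sketch (the function name and the per-frame framing are invented for illustration, not taken from any actual game):

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical per-frame function: allocates a scratch buffer and,
   by mistake, never frees it. Returns the number of bytes leaked
   so the effect can be tallied. */
static size_t render_frame(void)
{
    char *scratch = malloc(1024);
    if (scratch == NULL)
        return 0;               /* allocation failure: the eventual crash site */
    memset(scratch, 0, 1024);
    /* ... draw things using scratch ... */
    return 1024;                /* BUG: missing free(scratch); the pointer
                                   goes out of scope and the 1024 bytes can
                                   never be reclaimed */
}
```

At 60 frames per second, a 1 KB-per-frame leak like this loses roughly 3.5 MB a minute, until some perfectly innocent allocation elsewhere finally fails.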

And memory leaks are not exactly "minor" bugs; they are crippling, obvious bugs. How is it that they can get past the testers and coders and end up on store shelves? Is it our fault for not complaining? This problem is not isolated to the gaming industry, it is EVERYWHERE, but it is a lot more noticeable with video games. How can Ion Storm get away with releasing a forty-some-megabyte patch that fixes game-stopping bugs for Daikatana? Forty megs? Back on my modem I downloaded everything I could almost without thinking, but I had a hard time swallowing the idea that I needed to download over four hours' worth of patch to play a game I had already bought.

Right now, I am going to wait for my computer to freeze, because it always does so about 15 minutes after I get the first blue screen. And you know what? I'm going to accept it, because I don't think I have any other choice.


Computers are funny | 40 comments (38 topical, 2 editorial, 0 hidden)
insects (3.00 / 7) (#1)
by tokage on Mon Oct 02, 2000 at 02:03:44 AM EST

Well, first off, the blue screen of death tells us you're running m$, probably 95 or 98. I think one reason programs on Windows are so buggy is that, since we don't have access to the source, it's hard to find out the really low-level information about how code is going to interact with the kernel and other parts of the OS. Add on top of that poorly written drivers for your hardware, almost always closed source, which themselves interact badly with the kernel and your game (or whatever), and you have a problem that starts to compound itself. There's also the sheer variety of hardware and situations that software must run on and interact with, all on top of this OS you have no control over and no idea of what's going on in. Products get pushed out quickly for various reasons: the costs of development mounting, the urge to get the product out, companies that just don't care how much time you spend patching, etc. They don't -have- to care as long as we keep buying and running their products.

As far as poorly coded OSS-type stuff goes, I think it's partly the sheer number of programs available. You have an idea for something, so you just start coding and release the product under a GNU/BSD license (or whatever). There aren't any regulations on who can code (and there shouldn't be), so you get college kids who have just started programming writing something they think would be cool and useful, but with other considerations on their minds (like passing classes). Even experienced OSS hackers generally have another job and can't dedicate full time to their projects, as much as we'd all like to.

When you say you have no other choice, that's partly true for some programs you run which need m$, but you do have a choice. Bug the manufacturer into supporting *nix, or find another program that does similar things and doesn't require an operating system over which you have no control. BTW, k5 seems like an odd place to be ranting about the blue screen of death and such; it's more a site of people who want to stay away from m$ products in general. I dual-boot a box at home, though, with Win98, for EverQuest, which sadly has no Linux support ;)

I always play / Russian roulette in my head / It's 17 black, or 29 red

Re: insects (4.40 / 5) (#5)
by khym on Mon Oct 02, 2000 at 04:57:09 AM EST

I think some of the reasons programs on windows are so buggy is because since we don't have access to the source, it's hard to find out some really lower level information about how code is going to interact with the kernel and other parts of the OS.

If an OS is in such a sorry state that you need to read the source code to make well-functioning programs, then that OS just plain sucks. Outside of device drivers, kernel modules, and a few other specialized pieces of software, you should never have to look inside the kernel source to write good software. There are libraries that make system calls into the kernel, and you should only ever have to look at the documentation for those libraries.

The things that generally affect the quality of software on various OSes are:

  1. Bad documentation. If the documentation for system-level stuff is unreadable or just plain wrong, software written against that documentation will be bad.

    But speaking of Win95 and 98, the content of the documentation for the Windows system libraries is pretty good (in my limited experience). It can be hair-pullingly difficult to even find what you want, the APIs themselves might be horrendous, and the example code can be useless, but the documentation is correct and readable.
  2. Missing documentation, a.k.a. "undocumented features". There are many types of programs that function poorly, or not at all, if they don't use the various undocumented functions in the Windows libraries. But since they're undocumented, you can never know if you're using them correctly.

    This is one advantage of OSS OSes: things are only undocumented by accident, not by design, and if they are undocumented you can always read the source.
  3. The OS itself is buggy. This is another area where OSS OSes have the advantage: there are many eyeballs to find and fix bugs, and the OS doesn't get released until it's good and ready.
  4. Bad OS design/architecture. For instance, Windows has the registry (*gag* *gag* *choke*), plus "DLL Hell", where programs overwrite DLLs with their favorite version, clobbering the versions that other software depends on. There's no intrinsic reason why OSS OSes would be designed any better than closed-source OSes, but the ones I've seen seem to have very solid designs.

Give a man a match, and he'll be warm for a minute, but set him on fire, and he'll be warm for the rest of his life.
[ Parent ]
Bugs (4.16 / 6) (#2)
by charter on Mon Oct 02, 2000 at 02:52:05 AM EST

Ever had to make something for an office potluck, and it didn't turn out quite right, but you ran out of time so you brought it to the potluck anyway, even though it was a little scorched on the bottom? Same phenomenon here.

It's not the developer's fault that there are problems with the code. Problems arise all the time, and most of them get fixed before the product launches.

The remaining problems would be (WILL be) fixed, but market pressures often force companies to rush a product launch before all the bugs have been squashed. Canny consumers wait until the first patch kit has been released before they buy a new software product.

I'm not excusing this effect; I'm just pointing out that it's not necessarily the developers who are at fault. Don't forget to blame those marketing weasels! And the sales pukes, too! ALWAYS blame the sales pukes!

-- Charter

Bugs are natural (3.50 / 6) (#3)
by Arkady on Mon Oct 02, 2000 at 03:02:16 AM EST

Don't be too hard on the coders; misbehavior like this is natural in any large, complicated system. Once a program gets beyond a few thousand lines it starts to get difficult to hold the entire thing in your mind at once, though some programmers are certainly better at this than others. Object systems, and other attempts at isolating functionality, were designed to ameliorate this effect, but to some degree the issue will always be with us.

A wise person once wrote "Programming is the Art of debugging an empty text file". I really wish I could remember who, since it's the most concise and knowing description that I've read.

Turning and turning in the widening gyre
The falcon cannot hear the falconer;
Things fall apart; the centre cannot hold;
Mere Anarchy is loosed upon the world.

Complex question. Use a better OS for starters (2.66 / 6) (#4)
by NKJensen on Mon Oct 02, 2000 at 03:35:28 AM EST

"Fatal-bug-free" is not a valid description. Bugs are, by definition, able to disrupt any kind of function within the limits of the process (if the OS provides any such limits at all).

Bugs are bugs. Bugs that cause a system crash are just easier to understand.

By the way, your OS sucks. No application should be able to take down your system.

So do most of the tools used for coding applications that must achieve maximum speed. They just can't provide stack protection, range checks, etc., because video games need three things: performance, performance, and performance.

That's why you will have to live with bugs in the video games.

For critical applications, choose better tools. Use a better OS and buy quality software. Return software with too many bugs. You can do that, you know.
From Denmark. I like it, I live there. France is another great place.

Re: Complex question. Use a better OS for starters (2.00 / 1) (#8)
by thomas on Mon Oct 02, 2000 at 06:36:32 AM EST

"By the way, your OS sucks. No application should be able to take down your system."

True, it shouldn't...

For some reason, though, on my system (Red Hat 6.0, basic install), if I try to open Spruce or CSCMail or pretty much any other GTK+ email program while using a particular GTK theme, with more than about 50 messages in the inbox, X almost instantly manages to use up ALL of the available memory (128 MB RAM + 128 MB swap), then totally freezes the machine.

Yes, I'm sure there's a way to fix this, but I just don't know how off the top of my head and I'm too busy to figure it out :-( so for now I'm just using the default GTK theme... it's no big loss, but still...

War never determines who is right; only who is left.
[ Parent ]

Re: Complex question. Use a better OS for starters (4.00 / 1) (#17)
by fluffy grue on Mon Oct 02, 2000 at 11:03:44 AM EST

Root processes with low-level hardware access (such as XFree) eating up all the available memory and not exiting gracefully on a crash (something XFree is notorious for), and forkbombs, are the two things that can reliably bring down most OSes.

"Doctor, whenever I move my arm like this, it hurts!" "Then don't move your arm like that."

You've already figured out the solution to this problem. Your choice of GTK theme takes up all your system resources (probably due to something stupid, like a symlink pointing to itself or otherwise triggering a bug in the GTK theme engine which causes it to allocate all your memory as XPixmaps), and so you just don't use that theme. Problem solved. ;)

BTW, bonus points to anyone who can figure out why this variant of the forkbomb works and is particularly nasty:

while (!fork()) fork();

"Is not a quine" is not a quine.
I have a master's degree in science!

[ Hug Your Trikuare ]
[ Parent ]

Re: Complex question. Use a better OS for starters (4.00 / 1) (#23)
by XScott on Mon Oct 02, 2000 at 02:34:40 PM EST

BTW, bonus points to anyone who can figure out why this variant of the forkbomb works and is particularly nasty:

while (!fork()) fork();

I'll try. Each parent gets a nonzero value back in the conditional; negating that means the loop exits. Each child gets a zero the first time through, so it executes the body of the loop once (forking a second time) and then fails the test on the next pass. So each child after the firstmost parent creates exactly two new children.
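That walkthrough of fork()'s return values can be checked directly with a single, defused fork: there is no loop here, so nothing bombs. This is a generic POSIX sketch (the helper name fork_once is mine, not from the thread):

```c
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Forks once and returns the child's exit status as seen by the
   parent, demonstrating fork()'s two return values. */
static int fork_once(void)
{
    pid_t pid = fork();
    if (pid < 0)
        return -1;              /* fork failed */
    if (pid == 0) {
        /* Child: fork() returned 0, so the bomb's !fork() test is
           true here; this is the process that would loop and fork again. */
        _exit(42);
    }
    /* Parent: fork() returned the child's PID (nonzero), so !fork()
       is false and the bomb's original process leaves the loop --
       which is why the surviving line of processes keeps changing PID. */
    int status = 0;
    if (waitpid(pid, &status, 0) != pid)
        return -1;
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```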

Why is it nastier? It still grows exponentially, just as while(1) fork(); would. Each child only creates two new processes, though, so if the system accounting that prevents fork bombs works per process instead of per user ID, this might slip by. That seems like a correctable problem with the accounting, though.

What am I missing? How many points did I get?

-- Of course I think I'm right. If I thought I was wrong, I'd change my mind.
[ Parent ]
Re: Complex question. Use a better OS for starters (5.00 / 1) (#24)
by fluffy grue on Mon Oct 02, 2000 at 03:54:04 PM EST

The reason it's nastier is that the parent process terminates: effectively, with each iteration of the while(), it changes its PID (the parent process gets a nonzero return, so it exits). Thus, even if a sysadmin were to catch this early on, it would be hard to kill just by killing the parent PID, and even with per-user limits on the process table, actually killing all of the processes would be difficult (more so than usual, anyway), since there's no longer a parent process you can kill to take all the children down with it. (Unless I'm mistaken and the death of the parent process doesn't matter to the children anyway; I was under the impression that signals were propagated downwards like that, and that a signal sent to a child sends a SIGCHLD to the parent process, which is by default ignored except to exit a wait().)

Also, there's not really any such thing as exponential growth in a forkbomb, since each process takes a constant amount of time to create and the processes aren't truly running in "parallel." That is, it takes O(n) time to create n processes, not the 2^n blowup one would assume from a common-sense reading of the code.
"Is not a quine" is not a quine.
I have a master's degree in science!

[ Hug Your Trikuare ]
[ Parent ]

Re: Complex question. Use a better OS for starters (4.00 / 1) (#29)
by XScott on Mon Oct 02, 2000 at 06:59:24 PM EST

Honestly, if I'm the admin and the fork bomb is running, I'm just going to kill every process associated with that user. In fact, I'd probably run a little script to do it for me while I disabled his account. So in that regard, cleaning up doesn't seem much worse than with the traditional while(1) fork(); example.

You're right, of course, that you can't create processes any faster than the number of processors allows. Still, there is something that seems exponential about it; otherwise, why is your example worse than
     while(!fork()) ;
which is linear, in that each process only creates one new process?

-- Of course I think I'm right. If I thought I was wrong, I'd change my mind.
[ Parent ]
Re: Complex question. Use a better OS for starters (none / 0) (#33)
by fluffy grue on Tue Oct 03, 2000 at 01:21:36 AM EST

while(!fork()) ; terminates as soon as it gets another timeslice. The process table doesn't grow without bound; it just gets thrashed. Of course, that means it might be considered even worse: a huge load on the OS, but still impossible to kill based on PID.

And how do you suppose the 'killall' command works, even on the UNIXes where killall works by user rather than by process name? Last I checked, it didn't lock the whole kernel in most implementations, which means that by the time every process matching the criterion has been iterated through and killed, chances are some others have been spawned. Race conditions and so forth.

Of course, the simple solution is just to kill the controlling process from which the forkbomb was started; that would probably do a good job of killing everything no matter what. But it could have been started as a background task of the shell (which was then exited, which also detaches the TTY, so it won't get a HUP either). And even in the case of kills, it's trivially simple for the forkbomb process to set all of the catchable signals to SIG_IGN anyway.

Basically, unless killall has a special kernel-level hook which can absolutely guarantee that no fork()s and exit()s will happen in the process of killall doing its thing, there's still a possibility of the forkbomb continuing and just eating up the process table again after root has tried killing it. And in UNIXen where there's no per-user limits by default (such as Linux), root will have a pretty tough time even starting up killall to begin with (though granted, the while(!fork()) fork(); variant would make that possible since there'll be a gap in the allocated PIDs at least part of the time).

Of course, everything regarding the nastiness of this forkbomb depends heavily on OS-specific issues.

In any case, "the parent process seemingly hops PIDs" is the answer I was going for.

Jeeze, now I forget what the topic of this article was to begin with. (looks up) Oh yeah, whining about software development... :)
"Is not a quine" is not a quine.
I have a master's degree in science!

[ Hug Your Trikuare ]
[ Parent ]

Re: Complex question. Use a better OS for starters (none / 0) (#37)
by XScott on Tue Oct 03, 2000 at 01:11:16 PM EST

"while(!fork()) ; terminates as soon as it gets another timeslice."

Well, not necessarily. Who says they wouldn't put a while(1) malloc(1); after that? Of course I'm being a nitpicker, but my question was more about why my intuition says the bomb that forms a tree of processes is worse than the one that just forms a list.

"[killall] didn't lock the whole kernel in most implementations"

That's what negative nice values are for. (Probably not provably guaranteed to work with the non-realtime schedulers in most Unixes, but it would probably work in practice.)

"Jeeze, now I forget what the topic of this article was to begin with. (looks up) Oh yeah, whining about software development... :)"

Yeah, but this was more interesting than bitching about memory leaks. It even made me go look into the mess around signals (apparently POSIX, BSD, SysV, and Linux each do it however they please).


-- Of course I think I'm right. If I thought I was wrong, I'd change my mind.
[ Parent ]
Re: Complex question. Use a better OS for starters (none / 0) (#38)
by fluffy grue on Tue Oct 03, 2000 at 02:59:11 PM EST

Again, a negative nice level makes no guarantees, it just says that the process is more important to be scheduled than a normal user process.

Oh, and my implication was that while(!fork()) fork(); was the entire source code (I didn't feel like wrapping it up inside a well-formed main()); I figured you'd understand that it was the entire program, not just two lines out of something random. Yeah, of course you can always add lines of code to the program, but that doesn't mean the point of the original program is any different. :)
"Is not a quine" is not a quine.
I have a master's degree in science!

[ Hug Your Trikuare ]
[ Parent ]

memory leaks (3.50 / 4) (#6)
by aphrael on Mon Oct 02, 2000 at 05:39:20 AM EST

My favorite memory leak is one that we encountered about a year ago in a version of the product I work on, shortly before it was released. The leak occurred in a large quantity of code that I had inherited, and boiled down to a some_object* foo = new some_object; that was never matched by a delete foo;. Which is obvious and easy to fix, right? Except that some_object wraps a system resource that is essentially a giant tree containing other objects, which contain yet other objects, *and which may recurse*; a typical instance of some_object would represent a system resource of around 20K minimum and might leak 64 bytes. After a week of trying to figure out which subobject in this multitude was leaking, and when, I gave up.
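For what it's worth, one low-tech way to corner a leak like this (a sketch of the general counting-allocator idea, not whatever tool was actually available) is to route a suspect subsystem's allocations through wrappers that keep a live count, then check the balance after exercising that subsystem:

```c
#include <stdlib.h>

/* Minimal leak-tally wrappers: every allocation bumps a counter and
   every free decrements it. After exercising one subsystem, a nonzero
   balance tells you that subsystem is the leaker, narrowing the search. */
static long live_allocations = 0;

static void *leakcheck_malloc(size_t n)
{
    void *p = malloc(n);
    if (p != NULL)
        live_allocations++;
    return p;
}

static void leakcheck_free(void *p)
{
    if (p != NULL)
        live_allocations--;
    free(p);
}
```

Real leak checkers (Purify, and later Valgrind) go further and record a call stack per allocation, which is exactly what you need to find *which* subobject in a tree like the one described above is the culprit.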

Just have to live with it. (4.71 / 7) (#7)
by zakalwe on Mon Oct 02, 2000 at 06:06:54 AM EST

"A known bug that will be patched at an as-yet-unannounced date."

Just because a bug is known doesn't mean its causes are. Memory leaks especially are notoriously difficult to pinpoint and, despite what you say, often not even easy to find; in some cases they aren't noticeable until days of uptime spent doing one particular task. The problem with memory leaks is that the cause could be anywhere in the program, while the effect shows up in a completely unrelated area. The actual crash was probably some completely correct, innocent piece of code trying to allocate memory that isn't there, because it leaked away over time. These things are nightmares to track down, so I'm not really surprised at the practice of setting no fixed date for the fix.

Yes, they probably did find this at the testing stage, and probably some coder spent the night before the release frantically, vainly trying to find the cause of this and probably a whole lot of other bugs; but in the end, it was released the next day. And the reason they can get away with this? You said it yourself:

"And you know what? I'm going to accept it, because I don't think I have any other choice."

We'll probably never see an end to bugs. In any complex system it's usually impossible to find and fix them all, especially since most developers are focused on adding features and getting the latest version out in time. And currently they're right to do so.

I remember a quote from Bill Gates a while back where he said that people don't want bug fixes; they want new features. Sadly, he seems to be right: people probably won't rush to buy the next version of Windows if all it is is bug fixes. (In fact, they'd probably start to wonder why they should have to pay extra just to get a version of the product that actually does what it said it would the first time they bought it.)

Various solutions have been proposed to the problem of bugs. Open source has some advantages here. I don't really buy the claim that "many eyes make all bugs shallow"; I don't think enough people actually read the source code to make a big difference in finding bugs, though it does help a user find the cause of a bug and so give a better bug report, or even a solution. More relevant is the fact that there is a good culture of actually passing bug reports back to the author, and the fact that there are no set deadlines, so open source software is immune to the "we know it's buggy, but the release date's tomorrow" problem of commercial software. Even so, there's no silver bullet here. Some of the causes of bugs are removed, but not even close to all of them.

So is there a solution? Probably not. The best we can do is minimise the damage that can be caused by a single program (memory protection and other OS-level safeguards), and try to use techniques that minimise or catch bugs (higher-level languages, extra checks in the code, etc.). Of course, the main problem with these is that they're slow, which is especially relevant for games, which require performance at all costs. Some people feel that as hardware speed continues to grow, we'll have enough resources that these methods will be viable, even for games. I doubt it, though; another Microsoft quote comes to mind (for some reason they keep coming to mind when I'm talking about bugs...): "software is a gas; it expands to fill the available resources."

Use open source software... (1.55 / 9) (#9)
by Luke Scharf on Mon Oct 02, 2000 at 08:03:46 AM EST

Use open source software, so you can post patches rather than rants. :-)

Problem with how everyone thinks (2.75 / 4) (#10)
by maketo on Mon Oct 02, 2000 at 08:07:04 AM EST

There may come a time when you have self-inspecting and self-correcting software. Some attempts at this have already been made, and more are still to come ;). Until then, you can hope that your favourite OS guarantees that one loose process won't bring all the others down. As for why your software is bound to have bugs (some of which may actually be usability "features" you call bugs), read Dijkstra's "The Humble Programmer".
agents, bugs, nanites....see the connection?
Not necessarily the developers fault (3.00 / 6) (#11)
by ribone on Mon Oct 02, 2000 at 08:42:19 AM EST

I understand your frustration, but you need to realize that much of the time, developers at commercial entities such as MS are not given the proper time to complete something. This happens even when they protest to management (usually the cause of the problem anyway) that there will be serious problems with the code. The overwhelming motto at many companies is "ship the product yesterday." I know it may not seem like that, but that's actually the way a lot of good developers are treated. It's sad that these people's work suffers because of greedy/ignorant higher-ups.

Note: I realize that there are good managers out there who take care of their coders. I just happen to have seen/heard of too many of the other type to really be optimistic.

Complexity issues (4.50 / 4) (#12)
by madams on Mon Oct 02, 2000 at 08:45:18 AM EST

Software bugs are usually a complexity issue. While any program is really just an FSM (finite state machine), it is often a hideously complex FSM. It is almost impossible to keep track of all of the interactions between your program and the computer, because of all the components you can't see (particularly the OS, even in the case of free software OSes, because Not Everyone Looks At The Code).

Non-software products suffer from bugs and defects as well; they are just easier to deal with because they are usually not fatal. For instance, ever notice that the ink in a fountain pen will dry up if you don't use it often enough? This I would call a "bug," in that I was not expecting it to happen. Or sometimes you just can't get the plug in the bottom of your kitchen sink to stop the water from draining. But while these are both bugs, they aren't show-stoppers in the way that many computer bugs are.

I'll quote rusty on this one:

Software tends to exhibit the same properties, and shakes itself into little useless bits with greater frequency and less provocation than bridges.

Software is, unfortunately, terribly brittle. While there are several methods for proving programs correct, it's just not feasible for large programs (proving one algorithm correct is hard enough).

The only answer I can think of is building less feature rich software. Concentrate on more robustness and less whiz-bang.

Mark Adams
"But pay no attention to anonymous charges, for they are a bad precedent and are not worthy of our age." - Trajan's reply to Pliny the Younger, 112 A.D.

Re: Complexity issues (3.50 / 4) (#13)
by sbeitzel on Mon Oct 02, 2000 at 09:03:06 AM EST

I agree, mostly. When a program has thousands of lines (or, more to the point, more than a couple of subsystems/components that talk to each other), it gets hard to keep track of the interactions. Add into that mix a buggy foundation (yes, I mean MFC! It's crap!) and you've got pain just waiting to happen.

However. There's really no excuse for releasing software with crashing bugs. Sure, it happens all the time, particularly in the game business. The reason it happens is that the game business is not a software business. It's a content business, and the deadlines are driven by marketing, not by development.

So, when you wonder how some crap piece of code made it out the door, ask yourself if the product is software (compiler, word processor, spreadsheet) or time-sensitive content. Nobody's interested in last year's tax software, and nobody wants to buy a game that was cutting-edge five years ago.

[ Parent ]

Re: Complexity issues (none / 0) (#40)
by jnik on Wed Oct 04, 2000 at 04:59:00 PM EST

"Nobody's interested in last year's tax software, and nobody wants to buy a game that was cutting-edge five years ago."

Actually, I spend a lot of time tracking down old games. I like to play good games, even if they're not the latest.

That said, games really should be allowed to slip more, and there's a very simple mechanism that would allow for it: the detail slider. Code the thing so that, at full detail, it just plain won't run on a current machine. Yes, it's hard. Yes, it's added complexity. But it's a nice piece of future-proofing.

[ Parent ]
You clearly don't know what you're talking about. (4.11 / 9) (#14)
by Inoshiro on Mon Oct 02, 2000 at 09:10:52 AM EST

First off, a memory leak is not an obvious or simple issue. Most games are written on a 9-18 month life cycle and rushed past QA so they can be first to market; most of the code in games is marginal; and most people assume that garbage collection is too hard (which is very wrong, especially in this day of GHz CPUs and GBs of RAM). The result is poor-quality games and code that rarely, if ever, works correctly the first time. That's why you try before you buy. I can think of three game companies whose software I will pay for without getting a demo first (or, if there isn't a demo, borrowing a fully functional version from a friend): id, Valve, Blizzard. Even then, their code can and does have bugs, and they're some of the highest-quality producers.

You also have to understand that a lot of people buy the cheapest crap they can get for their computers. "Wow, this online site has RAM for $20 cheaper," they think to themselves. "I think I'll buy that." Then a few months later, they wonder why their systems randomly lock up. Bad RAM? Probably. Given the (ahem) stability of Windows for many years, a lot of companies got away with downright non-functional hardware and pathetic drivers. This is another reason people have problems.

So before you go rant and rave about some problem, figure out whether it's hardware or software. If it's a hardware problem, learn not to buy cheap crap. If it's a software problem, then understand what software is like. Go learn how to program (you sound like someone without the foggiest idea of a program's internals), and learn how much fun it can be to find an off-by-one memory error in an environment where freeing the same pointer twice is a big, fat no-no. That's why so many of us programmers write Free Software: if there's a problem, we can fix it so that others don't have to live with it.
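On the double-free point: since the C standard defines free(NULL) as a no-op, one common defensive idiom is to null a pointer the moment it's freed, so an accidental second free is harmless instead of undefined behavior. A minimal sketch (the macro name is mine):

```c
#include <stdlib.h>

/* Defensive idiom against double-free: free and immediately null the
   pointer. free(NULL) is a no-op per the C standard, so an accidental
   second "free" through this macro does nothing instead of corrupting
   the heap. */
#define FREE_AND_NULL(p) do { free(p); (p) = NULL; } while (0)
```

It doesn't catch the case where two *different* pointers alias the same block, which is exactly why double-frees stay such fun bugs to hunt.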

[ イノシロ ]
Re: You clearly don't know what you're talking abo (4.00 / 2) (#20)
by maketo on Mon Oct 02, 2000 at 11:29:03 AM EST

First off, a memory leak is not an obvious or simple issue. Given the fact that most of the games are written in a 9-18 month life cycle, and rushed past Q and A so they can be first to market coupled with the fact that most of the code in games is marginal, and since most people assume that garbage collection is too hard (which is very wrong, especially in this day of Ghz CPUs and GBs of ram)

The link you posted on garbage collection is not especially enlightening; it looks like a rant to me. Point me to a fast and secure GC implementation and explain to me in technical terms how and why it will help with memory-leak problems, and we can continue to talk. I have seen 40+ MB memory leaks accumulate in what should be a simple and clean Java servlet, and people still couldn't figure out why that happened, despite Java's GC. By the way, the author of that GC rant should know better about Python's implementation.

To add to the discussion - the main problem is not poor programmers being pushed over the limit by deadlines - it is bad programming practice (!) and an influx of people with bad habits and no education into the field. While time is certainly a factor, when you do not have a paradigm facilitating safe programming practices, the only hope rests with the designers/programmers. And they very often do not have full knowledge of the platform they are working on, very often miss important clues about the problem they are solving and, sadly, are very lazy. The proper way to accelerate is to revise and recode. Finally, where there is no way to prove whether a solution is correct, any solution that resembles correct is sufficient. And along with the correct ones, many incorrect ones pass. The same goes for the people who make these solutions.
agents, bugs, nanites....see the connection?
[ Parent ]
Java's GC... (4.00 / 1) (#25)
by nuntius on Mon Oct 02, 2000 at 04:15:25 PM EST

Java has a very good garbage collector--it's accurate and rather fast. It includes many advanced features whose importance is not obvious when you start studying GC...

However, even a great garbage collector cannot collect all garbage. Case in point:
Two objects (A and B) share common objects and pass references back and forth. Only A is supposed to keep the objects, but B does anyway. Later when the program wants to free memory, it tells A to free all its objects. However, since B still has references to them, garbage collection will not occur, and no memory will be freed.
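A minimal, hypothetical sketch of that scenario (class and variable names are invented, and the syntax is modern Java): A "frees" its list, but B's lingering reference keeps the object reachable, so no collector, however good, is allowed to reclaim it.

```java
import java.util.ArrayList;
import java.util.List;

// A is the intended owner of the shared objects; B quietly keeps
// its own references to them anyway.
class Owner {
    final List<Object> objects = new ArrayList<>();

    void freeAll() {
        objects.clear(); // A "frees" everything it owns
    }
}

public class LingeringReference {
    public static void main(String[] args) {
        Owner a = new Owner();
        List<Object> b = new ArrayList<>(); // B's unintended copies

        Object shared = new byte[1024 * 1024]; // a 1 MB payload
        a.objects.add(shared);
        b.add(shared); // B holds a reference it was never meant to keep

        a.freeAll(); // the program believes the memory is released...

        // ...but the payload is still reachable through B, so the
        // garbage collector must keep it alive. That is the leak.
        System.out.println("still reachable via B: " + (b.get(0) != null));
    }
}
```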

[ Parent ]
Re: Java's GC... (none / 0) (#31)
by molo on Mon Oct 02, 2000 at 11:39:47 PM EST

That's not a problem with the garbage collector. A GC only frees objects with no references. The problem is that B maintains references to the objects. Whether this is a feature or a bug of B depends on its nature.

Java GC works the way it is supposed to. It is up to the programmer to not keep unneeded references.

Whenever you walk by a computer and see someone using pico, be kind. Pause for a second and remind yourself that: "There, but for the grace of God, go I." -- Harley Hahn
[ Parent ]
Look no further (none / 0) (#34)
by Inoshiro on Tue Oct 03, 2000 at 07:47:06 AM EST

Try the Hans Boehm Garbage Collector. I found it with a quick gopher.

[ イノシロ ]
[ Parent ]
You are obviously not a programmer (4.00 / 5) (#15)
by Thaniel on Mon Oct 02, 2000 at 10:00:11 AM EST

But why is it that so many software developers can't code software that is fatal bug-free?

There is no such thing as perfect code. In general, the number of bugs rises at least linearly with the complexity of the code. Thus, very complex pieces of code, like most modern games, have a lot of bugs. Compare writing code to building a skyscraper. Both take a ton of design, planning, and building. Now take one bolt in the base of the skyscraper and move it 1/8 of an inch to the left. What happens? Most likely, nothing. The building still stands just fine. Now take a piece of code and change just one letter. Most likely you've just created a major bug... if the code runs at all. Computers are simply that unforgiving.
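A hypothetical illustration of the one-letter point: in the correct loop below, changing the single character `<` to `<=` makes the code read one element past the end of the array and crash at runtime.

```java
public class OneCharBug {
    // Correct version: sums the elements of an array.
    static int sum(int[] xs) {
        int total = 0;
        for (int i = 0; i < xs.length; i++) { // '<' is the load-bearing character
            total += xs[i];
        }
        // With 'i <= xs.length' the final iteration reads xs[xs.length],
        // throwing ArrayIndexOutOfBoundsException.
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sum(new int[]{1, 2, 3})); // prints 6
    }
}
```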

And memory leaks are not exactly "minor" bugs, they are crippling, obvious bugs. How is it that they can get by the testers and coders and end up on store shelves?

Yes, they are crippling and they are extremely obvious... at least, the effect is very obvious. The cause may be very difficult to determine. It's like asking a doctor why he can't cure cancer - it's crippling and extremely obvious...

As for your computer freezing after getting the first blue screen, that's 100% microsoft. Don't blame the rest of us for their mistakes. Be happy your computer runs at all after getting a blue screen, mine usually stays blue til I pull the plug.

Re: You are obviously not a programmer (4.00 / 1) (#19)
by fluffy grue on Mon Oct 02, 2000 at 11:10:49 AM EST

It's actually bounded by O(n^2) for n blocks of functionality, since that's how many possible unforeseen interactions there are between functionality blocks.
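A quick sketch of where that bound comes from: n blocks of functionality give n(n-1)/2 possible unordered pairs, which grows quadratically.

```java
public class Interactions {
    public static void main(String[] args) {
        // n blocks -> n*(n-1)/2 unordered pairs, i.e. O(n^2) possible
        // unforeseen interactions between blocks of functionality.
        for (int n : new int[]{10, 100, 1000}) {
            System.out.println(n + " blocks -> " + n * (n - 1) / 2 + " pairs");
        }
    }
}
```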

This is why I use global variables VERY sparingly, and NEVER across modules. :) And even then I get some interesting unforeseen interactions, like my renderer forgetting exactly what the semantics of a function call in my visibility system are, or forgetting to deallocate light sources when moving between rooms - whee, a tiny, slow, non-obvious memory leak that one was; I'm glad I was using Paul Nettle's memory manager, otherwise I'd have NEVER found it (and would be left to wonder why, after a few weeks of a client running, it was wasting a few megs of RAM).

For anyone who codes in C++, go to Ask Midnight on FlipCode and pick up the memory manager there. It's a pain to add into existing projects, but it's WELL worth the effort. From now on I believe I will use it in all of my new projects from the ground up, as well.
"Is not a quine" is not a quine.
I have a master's degree in science!

[ Hug Your Trikuare ]
[ Parent ]

On bad software (4.50 / 2) (#16)
by Denor on Mon Oct 02, 2000 at 10:19:10 AM EST

A lot of comments here have made the observation that bugs are unavoidable, hard to find, and hard to fix. This is true, but I don't think it's what the rant was really about. I saw it as a reflection of my own thoughts on some games.

The author mentions Daikatana as an example of a poorly developed game. I've got another one - Ultima IX. From every review that I'd seen, the game itself was excellent, but one thing ruined the experience for nearly everyone: The bugs.

I didn't have a powerful enough computer to experience it for myself, but I had seen the results: characters getting stuck on parts of the landscape, quests going unfinished, patches upon patches that didn't really help. I remember reading these reviews and thinking to myself "Do these people have no pride in their work? Don't they care that they haven't made something good?"

Being slightly more world-wise nowadays, I realize it was likely that the coders had a great deal of pride in what they did, but management just wanted to ship a product.

I think what rants like this are decrying is the lack of craftsmanship in many products for computers. Whenever something like Daikatana or Ultima IX comes out, I do the same thing. I don't blame the coders - I think with enough time they would have created something excellent. But there's always someone else further up the line who has a schedule in hand, and doesn't care about quality.


Re: On bad software (5.00 / 1) (#30)
by MrSpey on Mon Oct 02, 2000 at 11:31:51 PM EST

Denor's choice of Ultima IX was a good one. The entire development team for Ultima IX told management that they wouldn't have the game ready in time for the release date management wanted to hit (Christmas season, I believe). Management said, "You'd better be done, because we're releasing it in time for the Christmas season." The coders were, as they said, unable to have the game ready in time for the Christmas season, so when it was released during the Christmas season, it sucked.

I think the reason shoddy code upsets people who post and read sites like kuro5hin or The Other Site, for example, is that most of them have at least coded something at some point in their life, and many of them code daily, and almost all of them code because they enjoy it. As a result, they work hard to do a good job when they code. When they see crap coming out of professional development studios, they get really upset, since whenever they code anything they do everything they can to make it as good as possible. After all, they're coding for fun, so putting more work into it doesn't bother them.

Don't flame me over how hard it can be to code something as big and complex as a major game or office productivity suite. I'm not saying that the people reading this post could do a better job than the people who actually coded something like Ultima IX or Windows 2K (though I'm not saying they can't either). I'm just theorizing that coders are upset when they see a bad commercial program in the same way a professional architect looks at an ugly building and says to himself, "I could have done a better job than that. Why do they build ugly crap like that, anyway?"

Mr. Spey
Cover your butt, Bernard is watching.

[ Parent ]
An Old Question (5.00 / 6) (#18)
by Simon Kinahan on Mon Oct 02, 2000 at 11:05:46 AM EST

Software sucks because it's complicated. That's the final word. It's not mine either; Fred Brooks (author of "The Mythical Man-Month", manager of the OS/360 project for IBM) wrote an essay called "No Silver Bullet" in which he explained exactly this point. Many software problems have irreducible complexity, in which lots of different factors interact to produce one big, complex problem that cannot usefully be broken down into bits. When you have to solve such a problem, it takes a long time, and, business being what it is, software firms often ship stuff that's not properly finished. This is the flip side of software's flexibility: the more flexible something is, the harder it is to use it right. Compare bash vs Windows Explorer.

That's the core problem. However, we shoot ourselves in the foot in all kinds of little ways too. We fail to reuse techniques for solving problems that are well established, we choose to solve irreducibly complex problems when a simple solution would do for our purposes, we aim for performance before stability, etc. In these ways we make the task more complex than it needs to be.

This is the primitiveness of the craft of software. Any claim you hear that we're a mature engineering discipline is facile, at present. There's perilously little established lore about how to solve problems across the industry, and similarly there is no good way to evaluate someone's skill as a developer except by experience and references. Compare this with other disciplines: are you going to let some bloke who did an OK job on a bike shed remodel your house? No chance. But that's exactly the state of affairs in software.


If you disagree, post, don't moderate
Better Languages Can Help (4.60 / 5) (#26)
by WonderClown on Mon Oct 02, 2000 at 04:53:49 PM EST

There is no easy fix for this, because the simple fact is that software systems are often inherently complex, and usually developed on crunched schedules. However, the right programming language can be rather helpful in improving software quality.

First of all, nobody should be using C and C++ to write typical applications anymore. When I say "typical," I mean apps which are not performance-critical, hard real-time, or very close to the hardware level. For operating systems, device drivers, hard real-time industrial controllers, and the like, C/C++ is the way to go. For everything else, use a garbage-collected language. Then there will be no more memory leaks.

Secondly, use a strongly and statically-typed object-oriented language. I don't have time to produce a lengthy discussion of why now, but trust me, it helps you catch bugs earlier in the development process (like, at compile time). It also greatly improves the ability to manage the complexity of complex software systems by enhancing modularity and facilitating reuse.

Lastly, build quality-control mechanisms into the code. That means checking parameter validity using assertions, including robust exception-handling, and designing code to facilitate testing and debugging. (Each module/class should be testable in isolation, or near isolation.)

As far as I know, the only widely-used language that satisfies all of these requirements is Eiffel. Java comes close, but lacks strong typing (because of the lack of genericity, aka templates) and does not include an assertion mechanism. (You can roll your own assertions, but then they impact run-time performance. You need to be able to turn off assertion checking when you put the code in production.)
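A sketch of the roll-your-own approach described above (the class and method names are invented for illustration): guarding the checks behind a compile-time constant lets javac strip them entirely from production builds, which addresses the run-time performance objection.

```java
// Hand-rolled assertions for a Java without a built-in assert
// mechanism. Because ENABLED is a compile-time constant, setting it
// to false makes the guarded branch dead code that the compiler
// eliminates, so production builds pay nothing for the checks.
public final class Check {
    public static final boolean ENABLED = true; // flip to false for production

    public static void require(boolean condition, String message) {
        if (ENABLED && !condition) {
            throw new IllegalArgumentException("precondition failed: " + message);
        }
    }

    public static void main(String[] args) {
        require(1 + 1 == 2, "arithmetic works"); // passes silently

        try {
            require(args.length > 99, "need 100 args"); // fails when run bare
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```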

Eiffel is cool. Go to the website, read about it, code in it. 'Nuff said.

Re: Better Languages Can Help (3.00 / 1) (#27)
by CentrX on Mon Oct 02, 2000 at 05:11:51 PM EST

Doing what you say will just propagate huge, bloated programs with garbage collection. Yeah, they won't crash, but it will take forever to do anything :)
-- "The price of freedom is eternal vigilance." - Thomas Jefferson
[ Parent ]
Re: Better Languages Can Help (4.00 / 2) (#28)
by WonderClown on Mon Oct 02, 2000 at 05:23:06 PM EST

Bloated programs are usually a combination of crappy programming, dumb feature requirements from the marketing department, feature creep, and hastened development cycles. Object-oriented programming and garbage collection do not make these problems worse. (I've seen plenty of bloated C apps in my day.)

Writing good code requires good programmers working in a proper environment. If the programmers suck or the environment is not conducive to quality work, the end result will suck.

[ Parent ]

Re: Better Languages Can Help (none / 0) (#36)
by Spendocrat on Tue Oct 03, 2000 at 11:34:30 AM EST

What specifically are you doing in C++ that's causing memory leaks?

Don't do that any more.

If you're actually programming *in* C++ (and not C; they're hugely different now, despite people's predilection for lumping them together) you shouldn't ever be getting memory leaks. Sure, you can still get them, but only if you're a clueless programmer, or if you try.

Also, there are free garbage collectors now for C++.

[ Parent ]

Re: Better Languages Can Help (none / 0) (#39)
by WonderClown on Tue Oct 03, 2000 at 03:07:59 PM EST

To do memory handling in C++ properly (using destructors) requires setting firm ownership relationships, which ties the lifecycle of each object to its owner. This causes complications when sharing objects among multiple other objects. In some cases, there is no one particular owner object that each object should share a lifecycle with. This is where garbage collection is quite useful; it automatically knows when all references to a shared object have gone away, and deallocates the object.

Adding a garbage collector to C++ is nice, but that turns it into a garbage-collected language, which qualifies it as a reasonable language for typical application development. I still have some issues with C++, but making it garbage-collected makes it much more usable. The last time I programmed in C++ (which will hopefully be the last time I program in C++), I ended up implementing my own reference-counting scheme to get automatic memory management. It was OK, but it had some problems. I'd rather just let the compiler and runtime handle it for me.

Also, C++, while being different from C, was designed to be "compatible" with C. This helped in the adoption of C++, but it makes the language needlessly complex. Eiffel, being designed from the start to be object-oriented and not needing compatibility with non-OO languages (well, you can interface Eiffel with C, but that's different), is much cleaner and simpler. But if you want to use C++ with garbage collection, that's a reasonable choice.

[ Parent ]

I don't know, all these questions...! (none / 0) (#32)
by aztektum on Tue Oct 03, 2000 at 12:56:39 AM EST

Hey, just make a standardized computer. My N64 and PlayStation don't lock up on me. If you have one type of architecture to code for, then you have fewer problems to worry about. But nooooo, companies have to squander their resources and manpower by making major computing breakthroughs that are overshadowed in a day. Nice. Commercialized society sucks because you know they're working towards the almighty dollar and not a more bug-free (not just computer bugs) society, which brings me back to buggy software. If people would just take their damn time in the first place.

Well, the answer to my question is obvious (5.00 / 1) (#35)
by Defect on Tue Oct 03, 2000 at 08:23:20 AM EST

We're just too understanding.

And to respond to a few comments, i /am/ a programmer and have been for over 7 years.

I know software is hard to create, but should we be this understanding? We pay a lot of money for both the software and the hardware, shouldn't there be many more standards that are actually followed?

Is there a happy medium between standards and innovation?
defect - jso - joseth || a link
Computers are funny | 40 comments (38 topical, 2 editorial, 0 hidden)

