Crazy ideas in your head

By hardburn in Technology
Sun Sep 09, 2001 at 09:35:28 AM EST
Tags: Round Table

Do you ever wake up in the middle of the night, and suddenly a great idea pops into your head? I know I do; many (perhaps most) of my best ideas come to me that way.


I think a lot of us feel that modern software sucks. I feel that we need a lot of good ideas if we are to solve this problem. Therefore, I am asking the K5 community to share their ideas here. It doesn't matter how crazy or seemingly useless the idea is; I say you should announce your idea and let other people worry about how useful it is.

In nearly every field of science, there is a "pure research" department, and millions of dollars of corporate and government money go into it each year. Within these departments, there is a strong urge to create things which have no practical value whatsoever. Part of the reason for this is that the internal culture has grown up around this idea, but the main reason is that you have no way of knowing just how useful a given discovery is going to be.

Why should Computer Science be any different? In fact, there have been several such corporate-run departments (such as the old Xerox PARC), but this site has lots of Free Software/Open Source developers. In harmony with those ideals, there should be a forum within the community for spreading these seemingly useless ideas to people who can make them useful.

Again, I stress that it doesn't matter how seemingly useless your idea is. Follow the tradition of other pure research and actually strive for uselessness. You get bonus points for ideas which are slightly insane or extremely efficient. Now go ahead and post them below.

Crazy ideas in your head | 151 comments (143 topical, 8 editorial, 0 hidden)
Software doesn't suck (3.50 / 2) (#3)
by DeadBaby on Fri Sep 07, 2001 at 09:33:03 AM EST

While I do believe 90% of everything sucks, I would be willing to say most of the software I actually use is quite good. The only test for me is: does it get the job done? The answer is almost always yes. Often even poor-quality software can get the job done, so I have never understood where the idea that most software sucks comes from.

It seems to me it's like saying a screwdriver sucks because you have to remember which way to turn it.
"Our planet is a lonely speck in the great enveloping cosmic dark. In our obscurity -- in all this vastness -- there is no hint that help will come from elsewhere to save us from ourselves. It is up to us." - Carl Sagan
Getting the job done is the first requirement (4.00 / 2) (#5)
by roiem on Fri Sep 07, 2001 at 09:41:07 AM EST

Then comes ease of use, appearance (yes, that does count), efficiency and a bunch of other things, most of which probably fall into a few major categories.

Just as an example, at work I have the misfortune of using Visual SourceSafe. (Not my choice; I'd have chosen something else.) Now, yes, it gets the work done, and the engine's pretty OK as far as features. (Most features, such as diffs, are actually part of the front end and not part of the engine.) I was thinking of using the OLE interface to create a new UI to the same thing. Naturally, the new UI will get the job done just as well (or just as badly) as the old one, but it will be more aesthetically pleasing, easier to use, etc., if I ever get it done the way I want it.
90% of all projects out there are basically glorified interfaces to relational databases.
[ Parent ]

answer (2.00 / 2) (#17)
by regeya on Fri Sep 07, 2001 at 01:50:08 PM EST

Go find me a modern word processor whose code fits in <16K. General-purpose text editors don't count. Go find me a recent, semi-popular game whose machine code fits in <4K.

[ yokelpunk | kuro5hin diary ]
[ Parent ]

Good point (4.00 / 3) (#24)
by jacob on Fri Sep 07, 2001 at 06:32:18 PM EST

Since I have only 40GB of hard drive space and 512 MB physical RAM, having sub-16k apps is a reasonable priority. =p

--
"it's not rocket science" right right insofar as rocket science is boring

--Iced_Up

[ Parent ]
Bah (3.00 / 3) (#37)
by regeya on Sat Sep 08, 2001 at 05:14:33 PM EST

Not an excuse for poor programming. Shame on you.

[ yokelpunk | kuro5hin diary ]
[ Parent ]

Of course (4.50 / 2) (#51)
by jacob on Sun Sep 09, 2001 at 12:41:13 PM EST

it's not an excuse for poor programming, but it IS an excuse to prioritize speed of development and gee-whiz nifty features over generated code size.

Consider: If I write my program in Scheme and use a compiler to turn it into C code and then a C compiler to turn it into assembly, I'm going to necessarily bloat the code size considerably unless I have two amazingly good code-size optimizing compilers, which nobody will write (for good reasons). Therefore, is my Scheme program poorly-written, even though it abstracts things so well I can replicate the behavior of your hundred-line C program in 5 lines of my Scheme code while making it much clearer what the program is supposed to do and facilitating its reuse in other programs? That depends on the platform you're running on. I'd say that on machines with 64k of RAM, yes, my program is worse. On machines with 512M of RAM, my program is ten times better.



--
"it's not rocket science" right right insofar as rocket science is boring

--Iced_Up

[ Parent ]
Jesus...you don't get it. (none / 0) (#73)
by regeya on Sun Sep 09, 2001 at 09:23:00 PM EST

And that attitude is why my machine, a K6-300 with 64MB of RAM, a 20GB hard drive, and a Voodoo3 2000 video card (not to mention that my only Internet connection is a 56K dialup connection), is considered to be less than adequate. Never mind that, 10 years ago, I could get real work done on a machine with 640KB of RAM, a 360K floppy drive, and a crappy dot-matrix printer...without a 'net connection, thank you very much. And yes, I know that 10 years ago the machine in question was way obsolete.

The attitude that "Oh, look, I have CPU cycles to burn," is such a crap attitude. If that's your attitude, stop programming. Now. I don't want your shitty software. Go away. Piss off and learn to program, not how to do things elegantly or quickly. I credit that attitude as most of the reason I got alienated and switched majors. I may be poor, but I'm not writing piss-poor software that people hail as wonderful simply because their computers are fast enough to run the trash.

[ yokelpunk | kuro5hin diary ]
[ Parent ]

Erm, sorry. (none / 0) (#74)
by regeya on Sun Sep 09, 2001 at 09:45:59 PM EST

Let the parent comment stand, though, as a living testament to the necessity of thinking things through before replying.

Okay, I do stand by my conviction that software needs to be more efficient. A vicious cycle seems to exist: hardware gets faster, RAM gets cheaper, storage space gets cheaper, consumers feel that they're going to be able to run software faster than ever before, and programmers take advantage of the increased cycles/storage/memory. It does give us nice eye-candy applications, as well as making supercomputing ultra cheap.

I suppose there are benefits to sloppy programming, as well.

And I'm not saying I'm not guilty of the same (not that I program any more, as I don't see a need at the moment...though I'm considering a project, which will probably encompass many bloat technologies, mainly for the reasons you listed above. Hey, I don't want to write everything myself. :-) I still stand by my conviction that the Tandy 1000 EX currently relegated to serial-terminal status could be of use to modern programmers...if only they'd bother to learn platform-specific assembly-language techniques.

[ yokelpunk | kuro5hin diary ]
[ Parent ]

Better sorry than wrong. (none / 0) (#86)
by stfrn on Mon Sep 10, 2001 at 05:51:16 AM EST

Let me start by saying thank you. As someone who often has crazy ideas (but isn't crazy enough to post them here), I was saddened by all the harsh words exchanged on this story, just because people didn't like others' ideas. So your retraction gave me enough hope to post.

My view on programming (bloat vs. time-consuming) is this: we are not using good computers yet. These things? They are just prototypes. Functional prototypes, but just like most people joke about Microsoft products, they are being used well before they actually work. Just look at cars, for example. The idea of a horseless carriage was around for a while; then people started buying them, but they were a hassle to use. You had to crank them, oil them every 5 miles, whatever. Now you just jump in a car and off you go.

Well, I guess cars are still being worked on, but mainly in the areas where computers are involved. My point, however, is that I am hoping that, at some not-too-distant point, computers will move beyond the limitations we currently attach to them. Very idealistic, but it has led me to code mostly at the highest levels of abstraction, where I can experiment with what will happen more than with what is happening.

"Man, I'm going to bed. I can't even insult people properly tonight." - Imperfect
What would you recomend to someone who doesn't like SPAM?
[ Parent ]

I have to say it anyway. (1.00 / 2) (#78)
by delmoi on Sun Sep 09, 2001 at 11:21:15 PM EST

The attitude that "Oh, look, I have CPU cycles to burn," is such a crap attitude. If that's your attitude, stop programming. Now. I don't want your shitty software. Go away. Piss off and learn to program, not how to do things elegantly or quickly.

If you don't like stuff programmed that way, then don't use it. But don't tell me how to program. I don't owe you anything.

Of course, you calmed down a bit and replied to this comment already, but I really wanted to post this reply :P
--
"'argumentation' is not a word, idiot." -- thelizman
[ Parent ]
Small code (5.00 / 1) (#115)
by k31 on Mon Sep 10, 2001 at 11:01:22 PM EST

Colorforth, and related ilk.

In particular, the stuff about the VLSI design program.

As for games: look at any Game Boy game... most of the space is probably graphics/sound. In fact, that was true even for huge Neo-Geo games; all that space for mostly graphics (and great sound samples!!!).

Your dollar is you only Word, the wrath of it your only fear. He who has an EAR to hear....
[ Parent ]
Compression using Checksums (4.00 / 9) (#7)
by hulver on Fri Sep 07, 2001 at 10:07:29 AM EST

Right, this is going to be a very very quick compression, but a very very slow decompression.

So, to compress something: you take a 4K block of it and calculate the MD5 sum and the SHA sum of it. You then store these (quite small, yes). You also store the MD5 sum and the SHA sum at 8K and 16K intervals.

Right so for a 32K file, you have.

  • 8 4K block checksums
  • 4 8K block checksums
  • 2 16K block checksums

That's it, that's all you store.

Now, to decompress it: you brute-force the MD5 sum of the first 4K block. When you find one that matches (OK, this might take a while), you check the SHA checksum. If this doesn't match, you carry on brute-forcing until you get one that does. Repeat for the next 4K block.

Provided the universe still exists, you then checksum these two blocks together, and compare against the checksum that was stored for the 8k block.

Repeat until either:

A. You have recovered a reasonable facsimile of the data (well, the checksum will be right anyway)
Or
B. The heat death of the universe occurs.

Is that wacky enough for ya?

--
HuSi!
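
A minimal sketch of the "compression" side of this scheme, assuming OpenSSL's one-shot MD5() and SHA1() functions (link with -lcrypto); the 8K and 16K levels and the brute-force decompression are left out:

    /* Store the MD5 and SHA-1 digests of each 4K block of a buffer.
    ** Writes 36 bytes (16 MD5 + 20 SHA-1) per block into out, so out
    ** must hold 36 * ceil(len / BLOCK) bytes. */
    #include <stddef.h>
    #include <openssl/md5.h>
    #include <openssl/sha.h>

    #define BLOCK 4096

    static void checksum_blocks(const unsigned char *data, size_t len,
                                unsigned char *out)
    {
        size_t off;
        for (off = 0; off < len; off += BLOCK) {
            size_t n = (len - off < BLOCK) ? len - off : BLOCK;
            MD5(data + off, n, out);        /* 16-byte digest */
            SHA1(data + off, n, out + 16);  /* 20-byte digest */
            out += 36;
        }
    }

The "decompressor" would then enumerate candidate 4K blocks until both digests match, which is exactly the part that never finishes.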

Perfect! (3.25 / 4) (#8)
by hardburn on Fri Sep 07, 2001 at 10:16:12 AM EST

This is exactly the kind of stuff I was looking for. One warning: your idea has a problem with birthday attacks. However, using both the MD5 and SHA checksums should avoid that.


----
while($story = K5::Story->new()) { $story->vote(-1) if($story->section() == $POLITICS); }


[ Parent ]
Birthday attack? (4.00 / 2) (#62)
by delmoi on Sun Sep 09, 2001 at 03:33:21 PM EST

It's not really an attack. And the algorithm is crap. It will never compress anything.
--
"'argumentation' is not a word, idiot." -- thelizman
[ Parent ]
Compression (2.00 / 2) (#11)
by westgeof on Fri Sep 07, 2001 at 11:53:34 AM EST

Uh, do you mean encryption? It doesn't seem like it will be compressed very well if you convert a 32K file into 14 files totaling 96K. You just tripled the file size.

Or it could have been a typo (32M file maybe?) Or maybe I just totally failed to follow your logic.


As a child, I wanted to know everything. Now I miss my ignorance
[ Parent ]
Misinterpreted what he meant (I think) (4.00 / 2) (#13)
by gcmillwood on Fri Sep 07, 2001 at 12:32:39 PM EST

He said:
Right so for a 32K file, you have.
8 4K block checksums
4 8K block checksums
2 16K block checksums


To be a bit clearer he should have said:
8 x The checksum of a 4k block in the original file
...
Each MD5 checksum is 128 bits or 16 bytes, and each SHA-1 checksum is 20 bytes.

So for a 32k file you end up with
(4x16) + (4x20) + (2x16) + (2x20) + (1x16) + (1x20) = 252 bytes.

Sounds like a good compression ratio to me.


[ Parent ]
Pigeonhole (5.00 / 1) (#36)
by fluffy grue on Sat Sep 08, 2001 at 11:41:38 AM EST

You have 2^(32*1024*8) = 2^262144 possible 32K files. You can only represent 2^(8*252) = 2^2016 of them. So only a tiny, tiny fraction of the files (1 in 2^260128 - which is pretty damn close to Nothing) can be recoverably compressed by that scheme.
--
"Is not a quine" is not a quine.
I have a master's degree in science!

[ Hug Your Trikuare ]
[ Parent ]

Huh? (2.50 / 2) (#15)
by Surial on Fri Sep 07, 2001 at 01:27:43 PM EST

How does this compress anything?

I don't know how to prove this or anything,
but won't there be more than 1 piece of 32k data that will generate all those checksums?

I understand that it's enormously unlikely to find a piece of 32k that generates all those checksums (just generating the SHA1 with random bytes should take essentially forever), but there will *still* be more than 1 piece of data generating the same checksum.

I have a hunch that nothing is getting compressed (I don't see how you are compressing anything; encryption generally doesn't compress anything, and makes further compression impossible too). If nothing is getting compressed, and you reduce 32k to 252 bytes of checksum, then there would be x different 32k messages which create that exact set of checksums, where the size of x in bytes (taking x as a bignum) comes to 32000 - 252. In other words, an *ENORMOUS* number of messages (2^(32000-252)) match all those checksums. Then again, this is completely nothing compared to 2^32000, which is the number of different possible combinations of bytes generating a 32k message.

Am I making any sense?
--
"is a signature" is a signature.

[ Parent ]
No, he's right. (3.00 / 1) (#25)
by mindstrm on Fri Sep 07, 2001 at 08:35:43 PM EST

He's not encrypting.. he's hashing it...

And, in theory, what he says is perfectly valid. You are taking a set of data and breaking it down into something that you can apply an algorithm to in order to recover the original. That's what compression is all about.

Of course, it can't be done today, and probably can't be done in the foreseeable future... so it has no practical application whatsoever.



[ Parent ]
Hash implies more than 1 dataset, no? (4.00 / 1) (#33)
by Surial on Sat Sep 08, 2001 at 08:52:50 AM EST

Isn't hashing reducing your data to a set of 'bins', where each bin only contains a relatively small number of different pieces of data, by virtue of a certain formula (the hash formula)?

Unless it's a one-to-one hash algorithm, something which I doubt (I'm not good enough in crypto math to prove that it either is or isn't one-to-one, though...), you will need an 'index' to go along with your signatures. And my guess is the number of bytes needed to store that index is 32k - 252. But that's just a hunch.
--
"is a signature" is a signature.

[ Parent ]
Not getting you... (none / 0) (#39)
by mindstrm on Sat Sep 08, 2001 at 08:45:39 PM EST

He's using two different hash algorithms, because they aren't one-to-one hashes. The odds against a pair of data generating the same hash on two totally different algorithms are very high, is what he's implying.
I'm no mathematician.. dunno if you could prove it or not.


[ Parent ]
Right! (2.50 / 2) (#43)
by Surial on Sun Sep 09, 2001 at 09:45:36 AM EST

... unfortunately I can't actually prove it. However, because of these two facts, I strongly believe but can't quite prove that this in fact will not compress anything:
  • Hashes don't compress. They ease lookups, but they won't compress. Storing the hash number in addition to the 'index' into the hash table will end up taking as many bytes as the original number (give or take a few; some a bit more, some a bit less)
  • If this really compresses, you can use it to compress 100% random data. And that's impossible. This is as close as I can get to a proof.

--
"is a signature" is a signature.

[ Parent ]
Nope. You missed the point.. (3.00 / 2) (#48)
by mindstrm on Sun Sep 09, 2001 at 12:01:25 PM EST

He's not talking about later using hash tables to look things up; he's talking about generating those tables on the fly during 'decompression', i.e. the same method used in brute-force password cracking.

He stores 2 hashes, and then a big algorithm on a quantum computer sits there and looks for data (of the known length) that matches both hashes. If you find that, you have the original data.

The reason it can't work now, and doesn't, is that we don't have the computing power, period. Nowhere near it. Won't happen.

The hash table you mention would, in effect, be the equivalent of an infinite number of monkeys.... you get the idea. Impractical.
But if we had a computer fast enough to generate those tables quickly, and test against them realtime.....

Yes, it is possible to compress 100% random data this way. This is not really 'compression' in the traditional sense.. it's more like 'brute-force recovery'.

And why would you ever *want* to compress random data?


[ Parent ]
ARGH (4.50 / 2) (#56)
by fluffy grue on Sun Sep 09, 2001 at 01:06:18 PM EST

And it's still mathematically impossible. Read up on my other posts on this thread, which do informally prove this!
--
"Is not a quine" is not a quine.
I have a master's degree in science!

[ Hug Your Trikuare ]
[ Parent ]

See this comment (none / 0) (#89)
by hulver on Mon Sep 10, 2001 at 06:11:45 AM EST

For a solution



--
HuSi!
[ Parent ]

Sorry (none / 0) (#108)
by fluffy grue on Mon Sep 10, 2001 at 02:07:12 PM EST

Quantum computers don't magically solve the pigeonhole problem.
--
"Is not a quine" is not a quine.
I have a master's degree in science!

[ Hug Your Trikuare ]
[ Parent ]

Did you actually read the post? (none / 0) (#121)
by hulver on Tue Sep 11, 2001 at 04:04:58 AM EST

Nowhere in that post did I say that Quantum computers would solve the pigeonhole problem. I did say that Quantum computers might make brute-force calculation of MD5 hash values quicker.

--
HuSi!
[ Parent ]
Thing is... (none / 0) (#124)
by fluffy grue on Tue Sep 11, 2001 at 10:23:43 AM EST

Even fast md5 bruteforcing wouldn't make the problem any less intractable. Even assuming a means of instantaneously finding all bit patterns which can generate a particular hash, you still have a massively disproportionate number of unhashed bitpatterns generating the same hash (i.e. the pigeonhole problem).

Speed isn't the issue in an incomputability problem; incomputability proofs never bring up the issue of how much time it'd take, because it's assumed that you have "unbounded finite" (technically not "infinite" - there's a subtle difference) computation time available. My informal incomputability proof assumed that even if you have billions of years with computers millions of times as fast as today's, you will never be able to recover the "compressed" data.
--
"Is not a quine" is not a quine.
I have a master's degree in science!

[ Hug Your Trikuare ]
[ Parent ]

Yes (none / 0) (#125)
by hulver on Wed Sep 12, 2001 at 05:29:57 AM EST

I get it now.

--
HuSi!
[ Parent ]
for fun? (5.00 / 1) (#61)
by delmoi on Sun Sep 09, 2001 at 03:28:25 PM EST

And why would you ever *want* to compress random data?

To prove how bad-ass your compression engine is.
--
"'argumentation' is not a word, idiot." -- thelizman
[ Parent ]
more than 1 matching dataset. (1.00 / 1) (#70)
by Surial on Sun Sep 09, 2001 at 08:21:38 PM EST

The point I'm trying to make is that there is more than 1 32k dataset which will generate all those hashes. If a single compressed datablock decompresses non-deterministically (ie: can decompress into more than 1 block), you aren't compressing anything...
--
"is a signature" is a signature.

[ Parent ]
Yes! (none / 0) (#71)
by mindstrm on Sun Sep 09, 2001 at 08:54:23 PM EST

Which is why he's suggesting using multiple hashes to see where they intersect.

He's going on the assumption that no pair of datasets exist that will generate equal hashes using two different hash algorithms.



[ Parent ]
... Which is ludicrous (1.00 / 2) (#72)
by Surial on Sun Sep 09, 2001 at 09:00:06 PM EST

that's a stupid idea.

Then again, I feel pretty stupid for not immediately realizing the pigeonhole principle proves that this won't work.

My apologies, too, for not reading some other responses; I was relying a bit too much on clicking on my own comments in the 'your comments' section.
--
"is a signature" is a signature.

[ Parent ]
NO! (4.50 / 2) (#64)
by delmoi on Sun Sep 09, 2001 at 03:40:13 PM EST

If you combine hashes, all you have is a bigger hash!

That's it, nothing else. And you won't get the original file, you'll just get one of 2^n, where n is the difference in the number of bits between the size of the data and the size of the hash. In order to find out which block you have, you need an index of n bits. n bits + the size of the hash is the same size as the original data!
--
"'argumentation' is not a word, idiot." -- thelizman
[ Parent ]
It can't *ever* be done. (5.00 / 1) (#59)
by delmoi on Sun Sep 09, 2001 at 03:20:05 PM EST

Just like you can't brute-force a one-time pad, you can't brute-force 'de-hash' anything. Remember, all he's doing here is creating one big hash that's 14 times the size of a normal one, nothing else. A 14x MD5 hash would be 224 bytes, or 1,792 bits. 32K is 262,144 bits. So there are only 2^1,792 possible combinations of the hash, but 2^262,144 possible 32K chunks. That means 2^(262,144-1,792) 32K chunks are possible for each 224-byte hash. That's 2^260,352 blocks. How do you know which one is the right one? You don't.
--
"'argumentation' is not a word, idiot." -- thelizman
[ Parent ]
which is why... (none / 0) (#91)
by jas on Mon Sep 10, 2001 at 07:40:43 AM EST

...he said to also use a SHA hash. Find the one that works for both, and you have "decompressed" your data

[ Parent ]
Use both hashes, and find many that fit both (none / 0) (#94)
by simon farnz on Mon Sep 10, 2001 at 08:48:53 AM EST

The pigeonhole problem implies that using both hashes still leaves many sets of data that work for both; how do you select the appropriate set?
--
If guns are outlawed, only outlaws have guns
[ Parent ]
Actually.... (none / 0) (#148)
by DavidTC on Sun Sep 23, 2001 at 11:52:46 PM EST

I know all the pigeonhole stuff, but this might work if, for example, you know it's text, and you have a *lot* of time on your hands. You simply discard each binary result, discard all results with more than 5% misspelled words, discard all blatant grammatical nonsense, like missing or added punctuation, discard all files that don't have ' the ' and ' and ' in them somewhere, then sort them by decreasing spelling and grammar quality, and stick everything left in front of a human.

Basically, use the same logic rules that computers use when brute forcing encrypted text files.

I don't really know enough about the odds of getting a 'legal' English document from random characters, though. (At least, legal enough to not fail the machine test.) I suspect you wouldn't get two that would fool a human, though, especially if they knew what kind of information the message was supposed to contain. (Should it be a sappy love letter, or military orders?)

Of course, bandwidth- and processor-wise, we'd be using quantum computers to decode less information than some people can store in their heads. Not a very good use of time, and, come on, we have enough bandwidth to transfer *real* encoded (after being compressed) files. ;)

-David T. C.
Yes, my email address is real.
[ Parent ]

No. (none / 0) (#95)
by delmoi on Mon Sep 10, 2001 at 08:50:32 AM EST

All you're doing is creating a 'bigger' hashing algorithm. You still only have one hash; it's just bigger. You can combine different algorithms and different bit lengths all you want, and you're really just running in circles.
--
"'argumentation' is not a word, idiot." -- thelizman
[ Parent ]
Won't work (5.00 / 3) (#35)
by fluffy grue on Sat Sep 08, 2001 at 11:33:54 AM EST

Read up on the pigeonhole principle as applied to compression schemes. I can't find a link offhand, but the basic idea is that you can't have guaranteed-lossless compression. Consider that you have a compression scheme which can always compress 1024 bits down to 512 bits. So you have 2^1024 possible files before compression and 2^512 possible files after compression. Because there is no way to distinguish between two uncompressed files with the same compressed signature, you can only possibly compress 2^512 of the 2^1024 possible files - so it cannot work for all files (and, in fact, for every file which you can recoverably compress by that scheme, there are 2^512 files which you can't compress).

There's a reason that hashing algorithms are considered "one-way." There's a many-to-one mapping, and so there is (in the pedantic mathematical sense) no possible way to have an inverse function, even a brute-force one.

Even really fancy checksum schemes can't get past the pigeonhole principle - no matter what you do, for any bit pattern in the uncompressed domain, there'll be some other bit pattern in the uncompressed domain which maps to the same bit pattern in the compressed domain. (Assuming that you're doing this "guaranteed" compression mechanism, that is.)

Even the best lossless compression schemes mathematically can not guarantee a certain level of compression on every file. The best you can hope for is that a file which cannot be compressed by that algorithm will not grow substantially.
--
"Is not a quine" is not a quine.
I have a master's degree in science!

[ Hug Your Trikuare ]
[ Parent ]
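
A tiny counting demonstration of the same pigeonhole argument, shrunk to toy sizes; the 16-bit "files" and the 8-bit XOR-fold standing in for the hash are illustrative only:

    /* Map every 16-bit input through an 8-bit "hash" and count how many
    ** inputs land on each hash value: every output has 256 preimages,
    ** so no inverse can pick the original input from the hash alone. */
    #include <stdio.h>

    int main(void)
    {
        unsigned count[256] = {0};
        unsigned x, min, max;

        for (x = 0; x < 65536; x++)
            count[(x ^ (x >> 8)) & 0xff]++;   /* toy 8-bit "hash" */

        min = max = count[0];
        for (x = 1; x < 256; x++) {
            if (count[x] < min) min = count[x];
            if (count[x] > max) max = count[x];
        }
        printf("preimages per hash value: min=%u max=%u\n", min, max);
        /* prints: preimages per hash value: min=256 max=256 */
        return 0;
    }

The same many-to-one counting holds for MD5 or SHA-1 over 32K blocks; only the exponents get bigger.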

Okay.. (none / 0) (#113)
by mindstrm on Mon Sep 10, 2001 at 07:06:42 PM EST

NOW I see what you are saying.

Makes perfect sense. Thanks.


[ Parent ]
Stupid algorithms (none / 0) (#147)
by DavidTC on Sun Sep 23, 2001 at 11:35:21 PM EST

The best you can hope for is that a file which cannot be compressed by that algorithm will not grow substantially.

Of course, once you've compressed the data and found it winds up bigger, you really should, duh, just save the uncompressed data, using an extra byte (or a different file extension) at the start of the file to indicate compressed or non-compressed.

No file, no matter what kind of program compressed it, should ever end up more than one byte bigger when 'compressed'.

(Plus, if you ever get around to it, you can invent more than one type of compression and stick that value in the extra byte, using whatever compression works best on that file)

-David T. C.
Yes, my email address is real.
[ Parent ]
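
A rough sketch of that one-byte wrapper; compress_fn here is a hypothetical stand-in for any real compressor, assumed to return 0 when its output would not fit in the scratch buffer:

    /* Prepend a method byte and fall back to storing the raw data
    ** whenever compression would make the result larger, so the output
    ** is never more than one byte bigger than the input. */
    #include <stdlib.h>
    #include <string.h>

    enum { METHOD_STORED = 0, METHOD_COMPRESSED = 1 };

    typedef size_t (*compress_fn)(const unsigned char *in, size_t in_len,
                                  unsigned char *out, size_t out_cap);

    /* Returns a malloc'd buffer of *out_len bytes, or NULL on failure. */
    static unsigned char *wrap(const unsigned char *in, size_t in_len,
                               compress_fn compress, size_t *out_len)
    {
        unsigned char *scratch = malloc(in_len ? in_len : 1);
        unsigned char *out = NULL;
        size_t clen = scratch ? compress(in, in_len, scratch, in_len) : 0;

        if (clen > 0 && clen < in_len) {         /* compression helped */
            out = malloc(1 + clen);
            if (out) {
                out[0] = METHOD_COMPRESSED;
                memcpy(out + 1, scratch, clen);
                *out_len = 1 + clen;
            }
        } else {                                 /* store raw: costs 1 byte */
            out = malloc(1 + in_len);
            if (out) {
                out[0] = METHOD_STORED;
                memcpy(out + 1, in, in_len);
                *out_len = 1 + in_len;
            }
        }
        free(scratch);
        return out;
    }

This is essentially what formats like gzip and zip already do with their "stored" method.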

It won't work (5.00 / 2) (#60)
by delmoi on Sun Sep 09, 2001 at 03:21:12 PM EST

Just like you can't brute-force a one-time pad, you can't brute-force 'de-hash' anything. Remember, all he's doing here is creating one big hash that's 14 times the size of a normal one, nothing else. A 14x MD5 hash would be 224 bytes, or 1,792 bits. 32K is 262,144 bits. So there are only 2^1,792 possible combinations of the hash, but 2^262,144 possible 32K chunks. That means 2^(262,144-1,792) 32K chunks are possible for each 224-byte hash. That's 2^260,352 blocks. How do you know which one is the right one? You don't.
--
"'argumentation' is not a word, idiot." -- thelizman
[ Parent ]
So, all you do (none / 0) (#88)
by hulver on Mon Sep 10, 2001 at 06:09:40 AM EST

Assume you've solved the problem of brute-forcing a set of data to cover every eventuality of an MD5 sum, using a quantum computer or something. You then store which iteration is going to solve it and give you back the right answer.
This requires that the compressing program also perform brute-force lookups of the MD5.
So this is how to compress a file:

The compressing program takes a 32K block of the file and generates an MD5 hash and an SHA hash of it.
It then starts by filling a 32K block with all zeros and counting up until it finds a block which matches both hashes. It then compares the block it found with the original; every time it finds a block with the same two hashes that doesn't match, it increments its counter. When it eventually finds which iteration matches the original block, it stores that. Sure, it might be the 4343495735345878th iteration, but what's wrong with that?
The decompressing program knows the algorithm, it knows the output, and it knows that the 4343495735345878th iteration of that algorithm is going to give it the correct answer. Might take a while, but it will work; it is just not feasible at the moment.
There will be better algorithms, but the article was asking for silly ideas, and that's what this is.

--
HuSi!
[ Parent ]

Nope (none / 0) (#93)
by delmoi on Mon Sep 10, 2001 at 08:48:01 AM EST

The problem is, the index you need to use will take the same number of bits to store as the data you are removing. Let's go over the numbers again.

There are 2^1,792 possible 224-byte blocks.

There are 2^262,144 possible 32K chunks.

That means, for every 224-byte block there are 2^260,352 possible 32K chunks that can create it. That is a very large number. In fact, due to our 2^x wonderfulness, we can see that it would require 260,352 bits to store, which is way more than the size of your hashes. In fact, when you add 260,352 bits to the size of the hashes, you end up with the original size.
--
"'argumentation' is not a word, idiot." -- thelizman
[ Parent ]
I might be being thick (none / 0) (#96)
by hulver on Mon Sep 10, 2001 at 09:00:03 AM EST

More than likely, but I don't see how your numbers add up.
I'm assuming the hash algorithm is well spread, and that each hash value would have the same number of possible values that could have created it.
So, there are 2^1,792 possible 224-byte chunks, with 2^262,144 possible 32K chunks.
That means, spreading them out, each 224-byte chunk has got a possible 2^146 32K chunks that could create that hash. (262,144 div 1,792)
Is it just me being thick? Maths was never my strong point ;)

--
HuSi!
[ Parent ]
Nope (none / 0) (#98)
by delmoi on Mon Sep 10, 2001 at 09:15:55 AM EST

Ah, ok. Hopefully this will convince you :)

When you divide numbers with exponents, you subtract the exponents. In other words, x^y/x^z = x^(y-z).

Example: Let's look at a simple exponent. 10^3 and 10^2. That's ten cubed and ten squared: 10*10*10 and 10*10, or 1,000 and 100. (You'll note that since we are using base 10, you also get 3 and 2 zeros for 10^3 and 10^2 respectively. It works in base two the same way: 2^32 takes 32 bits to store.)

Now, as you can see, 1000/100 = 10. So we see that 10^3/10^2 = 10^1. And 3 - 2 = 1. See?

So, to find out what 2^262,144/2^1,792 is, you subtract 1,792 from 262,144. And you get.... 260,352!

Also, think about it this way. If what you said were true, why use the complex, one-way hashing algorithm? Why not, say, just take the last 224 bytes or whatever? There would be the same number of 'possible' extractions using that as there would be with the hashes, and you wouldn't waste all that computer power. But I think you can see why it wouldn't work.

Anyway, if you're still not sure, ask any other question you have.
--
"'argumentation' is not a word, idiot." -- thelizman
[ Parent ]
Ah! (none / 0) (#100)
by hulver on Mon Sep 10, 2001 at 09:48:14 AM EST

Ding! (Sound of light going on in my head)
So, the moral of the story is, don't ask hulver any questions about maths.
I bet somebody has already patented my algorithm now as a way of infinitely compressing random data.

Of course, if you think about it, it is impossible. If it worked, you could reduce every file down to a couple of bits.



--
HuSi!
[ Parent ]

Dividing exponents means subtracting (none / 0) (#99)
by zavyman on Mon Sep 10, 2001 at 09:23:01 AM EST

Yeah, math may not be your strong point.

Dividing exponents with a common base means subtracting the exponents like so:

262,144 - 1,792 = 260,352

Yielding 2^260,352.

[ Parent ]

Actually... (3.66 / 3) (#14)
by Surial on Fri Sep 07, 2001 at 01:09:32 PM EST

What I'd love to see is a website. You leave your idea on the web-site, and you categorize it through some fairly specific tree system, a lot like egroups, SourceForge, DMOZ, etc etc trees.

Ideas can be rated by other users of this system.

The database should be 'open', that is, anybody can just go ahead and implement an idea.

In order to make sure that happens, patented stuff is not allowed on the website. The site would also play a useful role in providing a huge database of potential 'prior art' (which you can use to get rid of patents).

Of course, the submitter of the idea could be reached by interested parties for a more elaborate explanation, or perhaps even help on making it happen. That would probably be done for payment.

I often come up with ideas for various things, and I would gladly hand 'em out for free, because usually, if anybody works out my idea, I benefit from it somehow. Better software, a better washing machine, whatever.

You could of course say that you'd rather want to cash in on your idea, but in order to do that you need expertise, time, and money, and not everybody is willing to expend all that.

Then again, maybe something like this exists and somebody can point me to it?
--
"is a signature" is a signature.

It already exists (none / 0) (#122)
by mumble on Tue Sep 11, 2001 at 06:32:28 AM EST

shouldexist.org

-----
stats for a better tomorrow
bitcoin: 1GsfkeggHSqbcVGS3GSJnwaCu6FYwF73fR
"They must know I'm here. The half and half jug is missing" - MDC.
"I've grown weary of googling the solutions to my many problems" - MDC.
[ Parent ]
Already possible, and simple (none / 0) (#151)
by bpt on Mon Oct 08, 2001 at 02:08:40 PM EST

The infrastructure to create this kind of system already exists; it only takes a bit of configuration to do this. Requirements:
  • NNTP server: probably INN, perhaps with custom extensions for keeping archives.
  • Collaborative scoring system: the only one I know of is GroupLens.
  • A newsreader capable of using the GroupLens system (the only one I know of is Gnus).
  • A moderator to check for patented material: train your cat to do this if you're too lazy.
The tree hierarchy is the hierarchy of ``ideagroups''. Eg, sci.time.travel, comp.os.emacs.games.roguelike.emacs-angband.
--
Lisp Users: Due to the holiday next Monday, there will be no garbage collection.
[ Parent ]
Related Sites (4.60 / 5) (#16)
by quam on Fri Sep 07, 2001 at 01:43:36 PM EST

See Halfbakery and ShouldExist.

-- U.S. Patent 5443036 concerns a device for encouraging a cat to exercise by chasing a light spot.
Revolutionizing compilers, one nanometer at a time (4.00 / 4) (#18)
by jd on Fri Sep 07, 2001 at 01:58:53 PM EST

The Multi-Pass Semantic Compiler, Optimizer, Linker, Prover

One of the big problems with modern compilers and linkers is that they are often single-pass, and often only look at a small section of code at a time. This means that you will get spurious errors, poor optimization, and possibly incorrect code generation.

What you -really- want is to first "semantically" compile the program. In other words, establish what it does, rather than how. I suspect that this is a lot the same as the Stanford Validator, but we won't know until they release it.

Once you've established the semantics, the code can be optimally re-structured. All you need is a structure S that has identical semantics, but which is optimized by whatever parameter you want. This beats the rule-based optimizations compilers usually do - a rule isn't guaranteed to work for all situations, whereas you can prove the semantics are the same.

Again, the linker would operate semantically, not syntactically. Syntactically, if A calls B, then the linker will link both A.o and B.o into the final binary. If you were to do the same thing semantically, you'd want to know what bits of A and B are "live". If chunks of A are never reachable, and those are the only ones to call B, then you can omit those parts of A.o and omit B.o entirely.

Besides potentially producing smaller compilers, what would be the benefit of semantic compilation?

First, you'd pick up a whole lot more errors. Things can't just "look right", and get away with it. This is where this new element, the "Prover" comes in. You can state, mathematically, the bounds given elements can lie within, and then compare these specifications with the potential values those elements can have at those same points.

Let's say you have two files, "myprog.c, myprog.vfy". myprog.c contains within it the line: a += b ** 2;

Now, myprog.vfy contains the following line to go along with that: a >= 0

The rule in the .vfy file is your checkpoint. It can only be true if the value of 'a' is positive or greater than 'b ** 2'. Assuming all conditions at the last checkpoint were met, then if this check fails, the error exists between here and the last checkpoint, and nowhere else.

By validating programs this way, you can ensure that they either have no bugs at all, or a known set of bugs under a known set of conditions. (And, ideally, you'd aim for the "no bugs" category.)

Now, at first you might argue that the validator violates the Turing Halting Problem. After all, you can't know (in general) if a program will work correctly or not, without running it. This is still true. ALL you have done is automated the proving of those sections that can be proved. However, unless you like using lots 'n' lots of recursion, you really shouldn't have a lot of non-provable code, so this kind of validation should be extremely effective at eliminating bugs very early on.
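
For comparison, the closest plain-C approximation of that checkpoint is an ordinary assert() after the statement; the .vfy file format is purely hypothetical here, and b ** 2 is written b * b since C has no ** operator:

    /* myprog.c's "a += b ** 2" followed by myprog.vfy's "a >= 0"
    ** checkpoint, rendered as a runtime assertion rather than the
    ** static proof described above. */
    #include <assert.h>

    static int accumulate(int a, int b)
    {
        a += b * b;        /* the statement under scrutiny   */
        assert(a >= 0);    /* the checkpoint from myprog.vfy */
        return a;
    }

The difference is that the proposed tool would check the condition at compile time for all inputs, not just the ones a test run happens to exercise.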

Let's see (4.00 / 1) (#55)
by fluffy grue on Sun Sep 09, 2001 at 01:02:39 PM EST

Programmers are already too lazy to use assert()s, and what you're describing is basically a really complicated version of assert() which would require solving one of the holy grails of Computer Science to implement.

What's wrong with just using assert()?
--
"Is not a quine" is not a quine.
I have a master's degree in science!

[ Hug Your Trikuare ]
[ Parent ]

assert() (none / 0) (#97)
by jd on Mon Sep 10, 2001 at 09:06:17 AM EST

The assert() statement is excellent, and provides a reasonable first-step in what I'm trying to describe, here.

That it's largely unused is something that needs to be tackled, but one impossible mission at a time. :)

What I'm looking for is something that will determine if the -meaning- of a function is correct, not just the syntax. If you were looking at the assert() function, it would be the same as the compiler running a bunch of test cases against the assert()s, to see if they were logically consistent -and- whether, for EACH function, assuming the initial assert() holds true, the assert() at the end of that function is necessarily true.

This is difficult, but not impossible, to do. Let's say your initial assert() states that x > 1, and your function reduces x by 1. A final assertion of x > 0 is necessarily true. A final assertion of x < y, where y is any number less than 0, is necessarily false. A final assertion of x > y, where y is any number greater than 0, is potentially false, and (if you want strict checking) should be treated as if necessarily false.

Ok, so how do you do this, without clogging up the machine by testing every possible value, and seeing if it ends up invalid?

Well, there are short-cuts. If you want the compiler to also do test runs, it at most needs to run through four cases for every function that can be reversed. (Take the minimum and maximum values of the pre- and post-conditions, and run the code forwards or backwards, then see if the values still lie within bounds.)

For more complex functions, you'd need to be a bit more crafty. You need to describe the function in abstract terms, collect operations dealing with the same data as far as possible without changing what the program does, simplify, and test each block of what's left, as above. (This would bloat the compiler something chronic, but if the resultant code was better, it might be worth it.)

PLEASE NOTE: You can't sort all code, all the time. Let's say you have the code:

y += 1;
x += 1;
y += 1;
if (x > y) do_something();
x += 1;

Now, the two y += 1's -can- be combined, because there is nothing dependent on y between those two statements. The two x += 1's cannot, because there is an intermediate dependency, the if.

Now, y += 2 will meet the same assertions, at the start and end, as two y += 1's.

If you want to be even smarter about it, then you would take the initial assertion, and concatenate operations onto it. If the resulting expression is functionally identical to the last assertion, then the code is correct.

This is harder, because what you're doing involves being able to simplify equations, collect terms, and generally do the stuff some of the better maths packages do, only for potentially horrible equations, and multiple variables.

This is the same as running the 4 cases, in lots of ways, except that you don't have this nagging "what if..." - what if some other value hits a discontinuity in the maths or logic? what if the results are horribly non-linear, so taking the extremes -doesn't- give an indication of how the rest of the values will behave?

By showing that your initial assertion, plus your function, equals your final assertion, for each path through that function, you're assured that if anything is incorrect, it is at least incorrect in a predictable way, and to the coder's specification.

Now for the REALLY complicated part - something that'll handle low-level operations. Here, we can make use of the fact that we are moving more and more towards validating the function, rather than the form. A low-level operation (such as calling a function via a pointer) really doesn't matter, provided we know what's passed in, what's returned and that the called function doesn't try to manipulate anything outside of its scope. (When you get side-effects in a program, ALL bets are off. You can't check it, because you have no means of knowing what to check.)

So, what we're doing, here, is to say that read-only parameters are going to be unchanged, and that read-write parameters AND any return value can have any value that is valid for that type. If it's a function within the program, you can also assume that the value(s) will comply with the post-condition.

Since we're testing all functions independently (even those in the same file), we can assume a valid output from any function, and thus just swap the function call with the post-condition, for the purpose of validating.

[ Parent ]
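
The x > 1 example above, written out with plain assert() pre- and postconditions; a hypothetical prover would check that the second assertion follows from the first without ever running the code:

    /* Precondition x > 1; the function only subtracts 1, so the
    ** postcondition x > 0 is necessarily true. */
    #include <assert.h>

    static int decrement(int x)
    {
        assert(x > 1);     /* initial assertion (precondition)  */
        x -= 1;
        assert(x > 0);     /* final assertion (postcondition)   */
        return x;
    }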

Yes, I understood (none / 0) (#107)
by fluffy grue on Mon Sep 10, 2001 at 01:34:20 PM EST

I understood that you were looking for a semantic compiler. That's one of the holy grails of computer science, and I believe it's been proven that you can't write one which works in general to begin with (as an extension of the halting problem).
--
"Is not a quine" is not a quine.
I have a master's degree in science!

[ Hug Your Trikuare ]
[ Parent ]

impossible; already done; they won't use it (5.00 / 1) (#67)
by ryanc on Sun Sep 09, 2001 at 04:43:52 PM EST

One of the big problems with modern compilers and linkers is that they are often single-pass, and often only look at a small section of code at a time. This means that you will get spurious errors, poor optimization, and possibly incorrect code generation.
I don't know of any compiler that anyone uses that only has one pass. I know of at least one that has more than a dozen passes. Errors and incorrect code means that your compiler is broken. It's not an optimization if it ever produces incorrect code.
What you -really- want is to first "semantically" compile the program. In other words, establish what it does, rather than how. I suspect that this is a lot the same as the Stanford Validator, but we won't know until they release it.

Once you've established the semantics, the code can be optimally re-structured. All you need is a structure S that has identical semantics, but which is optimized by whatever parameter you want. This beats the rule-based optimizations compilers usually do - a rule isn't guaranteed to work for all situations, whereas you can prove the semantics are the same.

In a language with low-level features (C, C++, FORTRAN, etc) this would probably be infeasible for any but the smallest blocks of code. And it has been done as a peephole optimizer for small blocks of code (on the order of 5-10 instructions, I think). You can't just wave your hands and pick the optimal representation for a computation. That's why compilers are based on analysis and transformations that focus on one issue at a time. If you could figure out how to do scheduling and register allocation at the same time, you'd get better results, but it's hard.

On high-level languages, you should get more mileage. Still, you'll hit the exponential blowup before you get the results you want. I think the current approach of finding solvable problems with big payoff is going to stay around for a while. Kind of like the principle that 90% of your opportunities for performance improvement are in 10% of your code.

Again, the linker would operate semantically, not syntactically. Syntactically, if A calls B, then the linker will link both A.o and B.o into the final binary. If you were to do the same thing semantically, you'd want to know what bits of A and B are "live". If chunks of A are never reachable, and those are the only ones to call B, then you can omit those parts of A.o and omit B.o entirely.
Sure. On the other hand, "never reachable" can be tricky in low-level languages. Can't you open an executable and resolve symbols in it?
Besides potentially producing smaller compilers, what would be the benefit of semantic compilation?

First, you'd pick up a whole lot more errors. Things can't just "look right", and get away with it. This is where this new element, the "Prover" comes in. You can state, mathematically, the bounds given elements can lie within, and then compare these specifications with the potential values those elements can have at those same points.

Let's say you have two files, "myprog.c, myprog.vfy". myprog.c contains within it the line: a += b ** 2;

Now, myprog.vfy contains the following line to go along with that: a >= 0

The rule in the .vfy file is your checkpoint. It can only be true if the value of 'a' is positive or greater than 'b ** 2'. Assuming all conditions at the last checkpoint were met, then if this check fails, the error exists between here and the last checkpoint, and nowhere else.

As another reply commented, this is assert(). Or this is Eiffel. Or this is Java with pre/postcondition extensions (iContract). The trick is getting people to use these features, and getting them to write perfect assertions. Kind of like getting people to write perfect test cases with good coverage.
By validating programs this way, you can ensure that they either have no bugs at all, or a known set of bugs under a known set of conditions. (And, ideally, you'd aim for the "no bugs" category.)
The fundamental problem with this is that proving the correctness of a program requires a precise definition of what the correct behavior of the program is. This is as hard or harder to write than the program itself. The goal of language design and language tools is to find the cases that provide the most significant benefit and are easy enough to actually implement (bang for your buck). A type system is an example of this. Type checkers prove that your program doesn't contain a certain class of errors, or it refuses to compile it.
Now, at first you might argue that the validator violates the Turing Halting Problem. After all, you can't know (in general) if a program will work correctly or not, without running it. This is still true. ALL you have done is automated the proving of those sections that can be proved. However, unless you like using lots 'n' lots of recursion, you really shouldn't have a lot of non-provable code, so this kind of validation should be extremely effective at eliminating bugs very early on.
Recursion isn't the problem. Recursion is easy. Low-level language features are the problem. If your program reads an integer in from some I/O device and turns it into a pointer, you're out of luck as far as analysis goes. Maybe your code doesn't do that, but what about those libraries you're linking against?

In general, the problem is getting your compiler to be able to efficiently model the behavior of your program. You can work on this from the compiler end or from the language end. Hopefully both.

[ Parent ]

Are you looking at things backward? (5.00 / 1) (#114)
by tmoertel on Mon Sep 10, 2001 at 07:10:56 PM EST

jd wrote:
What you -really- want is to first "semantically" compile the program. In other words, establish what it does, rather than how.
Unfortunately, this is not a practical goal for most popular languages, none of which have formal semantics that define the meaning of atomic expressions, let alone entire programs. A C++ statement, for example, has no formal meaning, so not even the programmer who wrote the statement knows exactly what it means. Now if a C++ compiler supplied a precise meaning via "semantic compilation," the meaning wouldn't necessarily be the same as the one in the programmer's head, and the subtle difference could cause errors. Thus semantic compilation for a language is worthless unless the language has formally defined semantics that end-programmers and semantic-compiler writers can agree upon.

Additionally, "semantic compilation" seems like a backward process: The programmer writes programs that specify how to do something (i.e., the programs are imperative) and the semantic compiler deduces what the programmer means. Why not let the programmer declare what he wants in the first place?

As it happens, most functional programming languages work this way. They are declarative: the programmer declares what he means and leaves the underlying representation to the compiler.

To make the distinction between the two approaches clearer, consider the Fibonacci Series: 1, 1, 2, 3, 5, 8, 13, 21, 34, ..., which is the infinite series whose first two elements are both defined to be 1 and whose remaining elements are each defined to be the sum of the two elements that come before them.

How would you express the Fibonacci Series in a program? In most mainstream imperative languages, say Java, Perl, or C/C++, the task immediately forces you to delve into hows: how are you going to represent an infinite series within a finite run-time environment? how are you going to handle the distinction between the first two elements and all the rest? how will you speed successive uses of the Series? and so on. Notice how considering even the first of these how-questions forces your design decisions into the interface of your representation, causing how issues to ripple ever outward into your callers' code.

One implementation in C might be:

   /* Starting with start_elem of the Fibonacci Series (0 being the
   ** first), fill the buffer at buf with count successive elements.
   ** E.g., to compute the first five elements:  fibs(0, buf, 5); */

    static void fibs(int start_elem, int* buf, int count)
    {
        int a = 1, b = 1, elem = 2;
    
        while (start_elem < 2 && count)
            *buf++ = 1, start_elem++, count--;
    
        while (count) {
            int next_val = a + b;
            a = b;
            b = next_val;
            if (elem >= start_elem)
                *buf++ = next_val, count--;
            elem++;
        }
    }


In writing that code, most of my effort related to minutiae -- caching, keeping track of how many elements I still need to compute, whether I'm in the special-case section of the first two elements, etc. All that stuff is noise. The signal -- the definition of the Fibonacci Series -- is buried somewhere in the noise. Good luck finding it.

Now, given that we have a perfect semantic compiler for C, what meaning is it going to assign to this code? Whatever it comes up with, the result is going to be way more complicated than what we really want (the Fib. Series) because the compiler will be forced to infer what I meant from how I implemented the code. The essential semantics -- those that define the Fibonacci Series -- will be lost in a sea of semantic noise deduced from implementation noise. How is the compiler to decide which semantics are essential and which are extraneous (and perhaps fair game to change during optimization) or potential errors? This implementation overflows a 32-bit int and "wraps around" after the 45th element of the Series. Is this wrap-around intentional -- part of the programmer-intended semantics -- or an implementation artifact? How is the semantic compiler to decide?

All of which goes to support my original point: If you care about semantics, start with semantics and let them govern your implementation -- not the other way around. Don't try to deduce semantics from implementation. There's too much noise. Rather, declare your essential semantics, and let the compiler do the grunge work. Let it hide the noise from you.

Of course, I'm not the first guy to have thought of this, which is why we have wonderful things like typed lambda calculus and declarative languages like Haskell. Just for comparison, here's how I might define the Fibonacci Series in Haskell:

    fibs = 1 : 1 : zipWith (+) fibs (tail fibs)


That's it. The whole infinite series in one line. I've declared the essence of the Series, and the compiler did the rest. No problem with overflows, either. Want the 1000th element of the Series?

    70330367711422815821835254877183549770\
    18126983635873274260490508715453711819\
    69335797422494945626117334877504492417\
    65991088186363265450223647106012053374\
    12127386733911119813937312559876769009\
    1902245245323403501


Now, regarding this comment:

However, unless you like using lots 'n' lots of recursion, you really shouldn't have a lot of non-provable code, so this kind of validation should be extremely effective at eliminating bugs very early on.
If your language is imperative and has side effects (like almost all popular languages), plan on having a lot of unprovable code. Additionally, "lots 'n' lots of recursion" usually makes correctness proofs easier: it often maps straight to proof by induction.

Again, if you care about being able to prove the correctness of your code, you'll want to look at declarative languages. They let you (and the compiler) use equational reasoning to prove properties about your code. (The above Haskell code is an equation: the left- and right-hand sides are considered equal to one another.) You'll often hear functional programmers remark that once their code "type checks," it works perfectly on the very first run. Types are closely related to theorems, and most functional languages take advantage of this relationship to catch large classes of programming errors.

In summary, you're working backwards. Instead of starting with implementation details and deducing semantics and then using the semantics to prove things, start with the semantics. There are lots of programming systems that do this already, and they provide much of what you want out of the box. Check 'em out.

--
My blog | LectroTest

[ Disagree? Reply. ]


[ Parent ]
Everyone, try Formal Methods! (My crazy idea :) (5.00 / 1) (#146)
by larsdahl on Sun Sep 23, 2001 at 09:58:37 PM EST

I agree - but even the best programming system (and declarative functional languages are very good) can't infer what you want from it, if you don't know what it is you want first. This is where Formal Methods come in - the (seemingly) heretical idea that you plan out what you want to develop, before you develop it :)

Once you've got your plan (a specification, to use the correct term), you can do a lot more than just type-checking - you can model-check, which is a lot like ordinary, post-production testing, except that it -

  • is complete - it'll find all the faults in your design/declarative code (unlike testing, which can only prove the presence of faults, never their absence),
  • is largely automatic,
  • can be done before a single line of development code is written, and
  • costs nothing but the time to write the original specification and run the model checker - brief compared to the time (typically spent by paying customers) finding your bugs after you've written your code.

If that wasn't good enough, you can also validate your specification (through a technique called 'specification animation'), and show off most of what your program will do to your clients before you start development. Then, if the client finds a flaw in the design, you can re-design at that stage, instead of having to scrap months (even years) of development because your client didn't like the way your final program looked, let alone worked.
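
To give a feel for what model checking buys you, here's a hand-rolled toy - not any real FM tool or specification language, and the design, state encoding and property are all invented for illustration. It brute-forces every reachable state of a deliberately broken two-process mutual-exclusion design; because the search is exhaustive, the race it contains is guaranteed to be found, not merely likely to be found:

    #include <array>
    #include <cstdio>
    #include <queue>
    #include <set>
    #include <tuple>

    // A (deliberately broken) two-process mutex: each process checks the
    // other's flag and only *then* raises its own -- a check-then-act race.
    struct State {
        std::array<int, 2>  pc;    // 0 = idle, 1 = decided to enter, 2 = critical
        std::array<bool, 2> flag;
        bool operator<(const State &o) const {
            return std::tie(pc, flag) < std::tie(o.pc, o.flag);
        }
    };

    int main() {
        std::set<State>   seen;
        std::queue<State> todo;
        State init{{0, 0}, {false, false}};
        seen.insert(init);
        todo.push(init);

        while (!todo.empty()) {
            State s = todo.front();
            todo.pop();

            // The property we care about: never both processes in the critical section.
            if (s.pc[0] == 2 && s.pc[1] == 2) {
                std::printf("mutual exclusion violated in a reachable state\n");
                return 1;
            }

            for (int i = 0; i < 2; ++i) {   // explore each process's next step
                State n = s;
                if (s.pc[i] == 0 && !s.flag[1 - i]) n.pc[i] = 1;           // check...
                else if (s.pc[i] == 1) { n.flag[i] = true;  n.pc[i] = 2; } // ...then act
                else if (s.pc[i] == 2) { n.flag[i] = false; n.pc[i] = 0; } // leave
                else continue;
                if (seen.insert(n).second) todo.push(n);
            }
        }
        std::printf("all %zu reachable states satisfy mutual exclusion\n", seen.size());
        return 0;
    }

Real model checkers work on a specification language rather than hand-coded transition functions, handle vastly larger state spaces, and check temporal properties too - but the flavour is the same: enumerate, don't sample.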

Unfortunately, planning ahead in this fashion seems to be a secret vice indulged by far too few - and typically only where the consequences of faults (i.e. being sued, because the software controls potentially life-threatening devices) outweigh the perceived costs of specification. But some big companies and organizations (for example, Boeing, HP, and the US and Australian DoDs, in parts) have been using these techniques for years, and report record levels of cost savings, reliability and consumer confidence in their software development process, as compared to the cycle of early (premature?) release, and continually patching up bugs afterwards.

Of course, Formal Methods aren't a panacea for all the ills of software development and testing - there are theoretical limitations to the size of the specification you can validate or verify with some of these techniques, and you still need to come up with the specification, which requires specifiers with special training. However, a lot of leading Universities are surging forward with research to overcome these limitations (OO specification languages and tools - good :), and are providing new graduates with these skills. Try asking for Formal Methods skills next time you hire, and see how much better your software development project goes.

Try Formal Methods in your project today! </rampant advocacy> :)


--
A .sig? Now what would I want with one of those?
[ Parent ]
Heavy Modular Coding (4.66 / 6) (#19)
by jd on Fri Sep 07, 2001 at 02:16:51 PM EST

One of the biggest problems with large programs is that they tend to be unwieldy to modify and maintain. Modular code is much cleaner, as you can focus on one task in one module, without worrying about what another module is doing.

Linux has gone this route, some. It's a far cry from being totally that way, though. Many options are not compilable as modules, and many options probably never will be made modular.

Other applications, such as Netscape, use plug-ins which are loaded at load-time, as a kind-of pseudo-module. But they aren't terribly effective, and the bulk of the software is still in one place.

When you were taught computer programming at school, you probably learned that the "main" part of the program should really only ever be one to three lines long - a call to the actual program, and an optional loop bracketing that. I'm going to suggest something slightly different - the "main" part of the program should consist of exactly three items: a loadmodule call, a runmodule call, and an unloadmodule call. Nothing else.

Why should the main program be so tiny? In "classical" computing, the real reason was that the main function was considered "special" and was an expensive place to call. Getting out & staying out was generally a very good idea.

In modular code, the reason is the same, PLUS you no longer have massive load times. Your tiny routine is all the application needs to start up. Everything else can be loaded as and when needed.
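
As a very rough sketch of what that three-call main might look like on a system with dlopen() - the module name "core.so" and the run_module entry point are made-up conventions, not anything standard:

    #include <cstdio>
    #include <dlfcn.h>   // link with -ldl

    // Hypothetical convention: every module is a shared object exporting run_module().
    typedef int (*run_fn)(int, char **);

    int main(int argc, char **argv) {
        void *mod = dlopen("./core.so", RTLD_NOW);        // loadmodule
        if (!mod) {
            std::fprintf(stderr, "%s\n", dlerror());
            return 1;
        }

        run_fn run = (run_fn)dlsym(mod, "run_module");    // find the entry point
        int rc = run ? run(argc, argv) : 1;               // runmodule

        dlclose(mod);                                     // unloadmodule
        return rc;
    }

Everything interesting then lives behind run_module, which is free to dlopen further modules as and when they're needed.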

Ok, but isn't it still better to have everything compiled in? After all, there are overheads with modules, and the loading isn't free.

Modular code enforces very strict rules on how data can be passed from one point to another. This means that incorrect calls, memory leaks, etc, can very quickly be detected and traced back to their origin. Since these are two of the big killers for applications, that would suggest that modular code will generally be of a higher standard.

The loading is also not as bad as it might first appear. Since, at point X, there are only a finite number of modules you can call, you can pre-load the module you need, before you get to it. Actually, a slightly better way to achieve the same result is to map out a graph of how the modules inter-relate. Provided all modules a distance of 1 from your current module are loaded, you are guaranteed no "visible" load-time. By retaining the modules listed as "most probable next but one", when switching, you reduce the loading and unloading this would involve to a minimum.

Lastly, modular code gives you infinite extensibility. Instead of having to massively patch, re-patch, over-patch and psycho-patch the entire program every time you want to add a new menu option, ALL you need to do is add that module to the module set. The other modules can detect it and see if it's relevant to them, as needed.

You'd like the HURD (4.00 / 2) (#23)
by BlckKnght on Fri Sep 07, 2001 at 04:11:53 PM EST

If you are interested in an extremely modular OS, you should check out the GNU HURD. It moves most of the kernel functionality to user space servers, and only leaves a bare minimum in a microkernel (well, that's how it should be; Gnumach is bloated by microkernel standards). This means that changing the running version is as simple as killing and restarting the process.

You can do lots of other cool stuff with the HURD too, as userspace servers often can be run as any user without any less security (modulo bugs, of course). Users can mount NFS shares and filesystems. Daemons can run without any UID. Other cool things include a lot of translators (filesystem servers) that make networking a transparent part of the filesystem (so cp /ftp/alpha.gnu.org/gnu/hurd/contrib/marcus/gnu-20010308.tar.gz . will download the latest HURD base tarball), or allow multiple filesystems to be "shadowed" on top of each other.

Ok, that's enough HURD propaganda for today. I encourage anyone interested in wild OS ideas to check it out. It has a lot of cool ideas already, and offers the flexibility to add many more.

-- 
Error: .signature: No such file or directory


[ Parent ]
Patching Code (4.20 / 5) (#20)
by jd on Fri Sep 07, 2001 at 02:47:40 PM EST

Ok, one last idea. :)

One thing I've learned, with maintaining the FOLK project, is that there are lots of lists of items, where the order really isn't that critical. HOWEVER, you try applying a "diff" to those lists, with the elements -out-of-order- and you're in for a fun game of "chase the glitch".

What is needed is another style of "patch" - an "order-independent, context-independent" patch, for those code blocks where it just doesn't matter what goes where, so long as it goes in the right general area.

For these, the patch might look something like:

--- myList.c.orig
+++ myList.c
typedef struct __myStruct {
*
| int myNewElement;
| int myOtherNewElement;
*
} myStruct;

This would read as follows:

"Find a line which matches the typedef and another which matches the terminator and contains zero or more elements between. Add the two new elements anywhere in that list, provided it's at the base level. If one already exists, add the other, relative to it, and skip the existing one. If both exist, skip both."

This would mean that applying patch A, followed by B, even if both were successful, would NOT necessarily produce -syntactically- the same code as applying B followed by A, but WOULD produce semantically identical code.

You'd need one other operation in there, for enumerated types and other lists. Instead of having something like:

enum {
previousEntry = 122,
+ aNewEntry = 123,
nextEntry = 124,
}

which would require that all three entries existed, existed in that order, and existed with those numbers, you might want to have something like:

enum {
*
> aNewEntry++
> nextEntry++
*++
}

This would read as follows: Find the enumeration. Ignore all the elements at the start of the list. Add or replace "aNewEntry", with a value one greater than the previous entry. Add or replace "nextEntry" with a value one greater than the previous entry. Keep the order of those two as above. Renumber all subsequent elements, starting from one more than nextEntry.

These "enhancements" would enable "patch" to add patches much more intelligently. And it's often in the "blind patching" that problems arise. Patch can't tell if the code it is producing is meaningless, as things stand. The simplest solution to this is to reduce the chances of the code being produced being meaningless. Order-independence, increments, and other subtleties of the semantics can often totally wreck a perfectly good, syntactically-useful diff, because the way that the syntax is handled can't cope with these.

This means you can either throw away your development tools, and go entirely semantic, or you can cobble together some sort of half-way toolset, that's not so rigid.

Semantic tools, though, are much more complex to design and maintain, turning a simple few minute task of fixing a glitchy patch into a few decades of hair-pulling and frustrated screams of rage. Going for a mid-way point seems much more logical, and offers many of the benefits of both styles of operation.

RE: Patching Code (none / 0) (#109)
by Eddie the Jedi on Mon Sep 10, 2001 at 03:17:04 PM EST

I could definitely see how that would be useful. I go thru enough trouble applying Alan Cox, Intel ACPI, and crypto patches, and then trying to reverse the Alan Cox to apply a newer one. I don't even wanna think about what you have to go thru with FOLK.

But there's one obstacle that I see: how on Earth would you modify diff to create order-independent patch files? As far as I can tell you couldn't; you'd have to create the patches by hand. At best you could edit a unified diff to turn it into an order-independent diff, but that could get ugly with large patches (and I'd guess that most kernel patches are large).

[ Parent ]
Some spam.. (4.00 / 1) (#21)
by BigZaphod on Fri Sep 07, 2001 at 03:41:26 PM EST

Oddly enough, I'm trying to start a site dedicated to discussing crazy ideas in computing. It uses the Scoop engine and everything, so it should be familiar to everyone. It is currently very early in its production (so basically all we have is a nice logo :-). However, that doesn't mean it can't be used. Why not post your ideas here and then think about writing up a complete story on them and posting them over at Sumballo? Since the site was just put up a couple days ago there are currently no major users (aside from myself and a couple friends) and just some news and information postings I put up to get some content on the site. But hey, it has to start somewhere. :-) Eventually we'll even have topic icons! Hee hee..

"We're all patients, there are no doctors, our meds ran out a long time ago and nobody loves us." - skyknight
A suggestion about that... (5.00 / 1) (#26)
by Elendale on Fri Sep 07, 2001 at 08:40:22 PM EST

I would say that if you end up doing this, encourage people not to vote stories down on the quality of the ideas - with the exception of truly inane ideas. People won't post if they just get laughed out of the queue.

-Elendale (of course, you already knew that...)
---

When free speech is outlawed, only criminals will complain.


[ Parent ]
Thanks! (none / 0) (#28)
by BigZaphod on Fri Sep 07, 2001 at 11:15:59 PM EST

Thanks for the tip. I've thought a bit about that. I'm going to look into changing the actual text of the -1, +1 options when voting to "-1, garbage submission", "+1 good discussion material" or something similar. The idea being to drive home the point that this is not a vote of agreement.

"We're all patients, there are no doctors, our meds ran out a long time ago and nobody loves us." - skyknight
[ Parent ]
actually (4.00 / 1) (#129)
by yesterdays children on Fri Sep 14, 2001 at 07:46:11 PM EST

allow the inane ideas too. That way folks like me could learn something in the process. It's hard to have fun around elitist uberhax0rs. I'd even accept a moderation system having "-1 clueless" as a bit of feedback hehe

[ Parent ]
Transputers for web serving (3.00 / 1) (#27)
by MSBob on Fri Sep 07, 2001 at 09:08:04 PM EST

I'm currently evaluating clustering solutions for my employer and I've come to the conclusion that we simply take the wrong approach in the whole issue of web serving. A typical J2EE clustered application resides on say, five fat boxes where each box handles n sessions concurrently, usually by spawning n separate threads or processes. However, if one of the boxes crashes all sessions on that system are usually lost unless your app persists all conversational state each time the user interacts with it.

My idea is to use a massively parallel system to host enterprise applications. Imagine a cluster of say, 10,000 very lightweight CPU systems, each equipped with its own memory, a small solid state disk for config data and a cheap CPU (would an M68000 suffice?). Now each of these transputers would handle only one session at a time. It would be a single process, single thread machine equipped with a basic JVM. Obviously the size constraint would imply giving up on all the J2EE bloat and writing your app as a single user, single thread program that must fit within a single transputer environment. Not only would this baby fly, it would also be much more resilient to failure. If any single node dies, only one session is lost! If you have 10,000 transputer nodes in your server you can (theoretically) handle 10,000 concurrent client sessions. The actual limitation would lie in the DBMS. However, I have an answer to that too. Each transputer that couldn't get access to the DBMS could keep its persistent data in its local storage and only persist it when more DBMS connections become available.

One might notice that while this highly parallel application server would easily accommodate N concurrent users, where N is the number of transputers in the matrix, N is also the maximum number of concurrent users the system can support. That means there is no so-called "graceful degradation". In order to accommodate more than N users one must purchase additional transputer nodes. That in my opinion is not a limitation, as "graceful degradation" is hardly ever graceful and usually if the application cluster gets saturated it goes tits up for everybody.

So there. My wild computing idea is to resuscitate transputers to use them in enterprise applications. The benefits would include lightning fast performance, immense scalability and easier coding, as programmers would code to single-threaded systems.

I don't mind paying taxes, they buy me civilization.

You must keep in mind ACID (5.00 / 1) (#134)
by pin0cchio on Sun Sep 16, 2001 at 11:19:39 PM EST

The actual limitation would lie in the DBMS. However, I have an answer to that too. Each transputer that couldn't get access to the DBMS could keep its persistent data in its local storage and only persist it when more DBMS connections become available.

It may have an advantage for the web application code, but you still have to keep in mind ACID. If you try to free a connection to the DBMS before you "persist" (i.e. commit) the transaction, the DBMS will cancel the transaction to maintain atomicity (either full commit or full rollback). And you can't just open, read, commit, write, commit because the database may have changed between the commits, sending consistency down the toilet if you try to update a row that another transaction has already updated.


lj65
[ Parent ]
User friendliness (4.25 / 4) (#29)
by Lord INSERT NAME HERE on Sat Sep 08, 2001 at 01:07:38 AM EST

Okay, here's a crazy usability idea. It occurred to me while playing about with MenuetOS (this OS has recently risen to fame in a story on the other site). It occurred to me that if Menuet, which fits on a single floppy, were to include an RTF editor, a web browser, an email client, an IM client, and an art package, then I wouldn't really need anything else. It already has an assembler with full documentation.

So my second thought was, if all that stuff fit onto another couple of floppies, I could carry my OS with me wherever I went, and all my favourite programs with it. Never again would I have to use Internet Explorer on some luser's machine; a quick reboot and I'd be in Menuet.

But that wasn't the height of my craziness. My next thought was, why do I want a fancy GUI anyways? By that, I don't mean "command prompt is good", I mean, what's the point of all these fancy menus and so on? So here's my idea for a new type of interface, in some ways a return to the good old days of the Amiga and Atari ST...

When you boot up, you would be presented with a menu. It would be hierarchical, and contain every installed program on the system, including a terminal emulator and file manager, though ordinary users would never go near those. When you run a program, it would open up on a desktop all of its very own. Desktops would be switched between with some simple key combination, possibly just by hitting the function keys. On those occasions when you need to see more than one window at a time, another key combination (come to think of it, a right-click menu would do just as well) would allow you to split the screen into halves or quarters and display a different desktop in each.

It seems to me that this system would be extremely easy to use, but still allow power for more advanced users (through the terminal emulation and so on). It could have different skins, with a "beginners" setup that only includes a few essential programs (office suite, email, graphics package, web browser, IM).

Would I use this system myself? Maybe. Would it be perfect for my technophobe grandmother? Undoubtedly.


--
Comics are good. Read mine. That's an order.
Anonymous, signed currency. (3.25 / 4) (#30)
by ramses0 on Sat Sep 08, 2001 at 04:58:34 AM EST

The U.S. government wants key escrow. There's no reason for it. However, if the U.S. Treasury 'minted' electronic bills, in certain denominations, and signed them with the "Treasury Private Key", anybody could verify that the signed bundle they have is an e-$100-bill (or something).

Now, all that is needed is a way to establish that one person owns a bill, to the exclusion of others. Perhaps the currency would no longer be anonymous, but imagine a structure similar to the following:

DATABLOCK{
Denomination: $100
Minted By: U.S. Government.
Serial #1234567
}
------------
SIGNATURE {
54adfa098adf...098098f908bec5
}

Due to that whole "bits can't be copy-protected" thing... I can't see how one user could *give* a bill to another user without somehow being able to maintain a copy of it themselves. However, if Alice gives Bob the $100 bill, and signs it with her private key, it can be proven that the bill that Bob received was indeed given to him by Alice. Only trouble is that Alice can "give" that bill to many people without getting caught unless signatures are checked against a central authority.

If anybody can figure out how to make this scheme work, please post below. The first person to figure it out will end up being a very wealthy person. :^)=

--Robert
[ rate all comments , for great ju

Protection from double spending. (4.00 / 1) (#42)
by i on Sun Sep 09, 2001 at 02:19:06 AM EST

There are schemes that can successfully prevent double spending. The idea is like this. Your personal data is encrypted within each e-note the bank gives you (in such a way that even the bank itself cannot trace the note to your account -- the technique is called a "blind signature"). Now, when spending an e-note, you must partially reveal the encryption key. This is not enough to recover your data. If you try and spend your e-bill once again, you reveal another portion of the key -- and two portions are enough to recover the data and trace the bill back to you.

So if Alice gives Bob a $100 e-bill, and both spend it, this will be traced back to Alice who shall be held accountable.
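
To make the "reveal another portion of the key" trick concrete, here's a toy of the secret-splitting idea behind it. This is wildly simplified -- a real offline e-cash scheme uses blind signatures and many share pairs chosen by cut-and-choose, none of which appears here, and the names and numbers are invented:

    #include <cstdint>
    #include <cstdio>
    #include <random>

    // The spender's identity is hidden as two shares: one share alone is just
    // random noise, but XORing both shares together recovers the identity.
    struct Note {
        std::uint64_t share0;   // r, a random pad
        std::uint64_t share1;   // r XOR identity
    };

    Note mint(std::uint64_t identity, std::mt19937_64 &rng) {
        std::uint64_t r = rng();
        return Note{r, r ^ identity};
    }

    // At spend time the merchant picks a random challenge bit and the spender
    // must reveal the matching share. Spend the same note twice and two
    // different challenges expose both shares.
    std::uint64_t spend(const Note &n, int challenge) {
        return challenge == 0 ? n.share0 : n.share1;
    }

    int main() {
        std::mt19937_64 rng(42);
        std::uint64_t alice = 0xA11CE;

        Note bill = mint(alice, rng);
        std::uint64_t first  = spend(bill, 0);   // legitimate spend
        std::uint64_t second = spend(bill, 1);   // double spend, different challenge

        std::printf("bank recovers identity %#llx\n",
                    (unsigned long long)(first ^ second));
        return 0;
    }

One share reveals nothing about the spender; the bank only learns the identity when it sees the same note spent under two different challenges.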

and we have a contradicton according to our assumptions and the factor theorem

[ Parent ]

This only works if somebody is checking... (none / 0) (#120)
by ramses0 on Tue Sep 11, 2001 at 02:10:31 AM EST

...so if I took my hypothetical e-bill to Saudi Arabia, and spent it there, nobody would know if I had already spent it in the U.S, or France, or Australia.

Once they were able to match everything up, my key would be compromised, but it still doesn't change the fact that it appears to be much easier to double-spend than it is to counterfeit.

--Robert
[ rate all comments , for great ju
[ Parent ]

it's been tried before. (none / 0) (#66)
by Wouter Coene on Sun Sep 09, 2001 at 04:36:44 PM EST

It's called DigiCash, and it's been tried before (at least here in the Netherlands). The basic idea was that you could spend money anonymously, but to receive money, to add it to your own account, and to validate it, you had to identify yourself.

But the banks didn't like the idea of anonymous spending (even for small amounts) and some years ago DigiCash was bought by a large Dutch bank and simply stopped.

They still have a site (http://www.digicash.com/), and there are rumours they're planning to do a restart. I do hope so, since as far as I understood at the time it was a fairly decent, and secure, system.

[ Parent ]

Hrm... (none / 0) (#77)
by delmoi on Sun Sep 09, 2001 at 11:12:48 PM EST

But the banks didn't like the idea of anonymous spending (even for small amounts) and some years ago DigiCash was bought by a large Dutch bank and simply stopped.

Does this mean they don't let people get cash?
--
"'argumentation' is not a word, idiot." -- thelizman
[ Parent ]
Overlapping windows are bad (4.66 / 3) (#31)
by BigZaphod on Sat Sep 08, 2001 at 05:00:09 AM EST

Plug: I posted this on my new site (which I hope can become a place to discuss things like this), but I'm posting it here because this is where the discussion is happening. :-)

I've been giving some thought to user interfaces, and I think one of the biggest problems with them right now is the whole concept of overlapping windows. They waste screen space, they can be confusing, and most of the time they are totally unnecessary.

One requirement for an overlapping window is a border. Without it, it would be quite difficult to figure out where one window ends and another begins. That means you need to waste a few pixels all the way around the window in order to draw a border. To make matters worse, windows often have larger title bars and resize tabs at the tops and bottoms. And as if that wasn't enough pain, a lot of GUIs end up putting a whole menu bar in each window with common names such as "File", "Edit", "Help", etc. That's quite a lot of wasted screen space only required because windows overlap.

Overlapping windows are also confusing. It is almost never obvious when there is a smaller window being totally blocked out by larger foreground windows. Often you are required to move windows out of the way of each other just to get work done. When applications pop up information or error windows, they pop on top of everything else regardless of what you were doing. This can lead to even more confusion because it may not always be obvious where this new window came from and what actions caused it to pop up.

I would argue that there are very few instances where overlapping windows are even needed. Usually you are only interacting with one window at a time. Even in something like an image editor you are really only concerned with the image at hand and not all the others you might have open at the same time. The only times when you need access to other windows is when choosing new tools or dragging elements from one to another.

So what's my solution? In general I think windows should not be allowed to overlap and the entire screen should be dedicated to the app you are currently interacting with. That means things should be full screen all the time with no way of seeing what's behind anything else. Assuming a handy app switching bar and key combo, I think the general computing experience would be improved as less clutter on screen means there is less to confuse. When applications need to pop messages up or do other things like that, they can do it within their own context and nothing interferes with anything else. In the case of an image editor where multiple images may be open at the same time, I would suggest that each open image should be treated as an independent app instance and should get its own context. Why bother with windows-in-windows when there is no reason to?

Anyway, I'm sure there are some cases where overlapping windows might be nice, but I think those situations should be rethought and solved in other ways.

"We're all patients, there are no doctors, our meds ran out a long time ago and nobody loves us." - skyknight
Overlapping (5.00 / 1) (#34)
by fluffy grue on Sat Sep 08, 2001 at 11:20:48 AM EST

That's why I run pwm. It's probably the most intuitive window manager I've used, and it's too bad that it's only seen as a toy niche one. If you try it, pick up my theming engine (on the same page), which also has a nicer (IMO) set of default key/mouse bindings.

FWIW, when I helped my brother install Linux on his computer, I showed him a bunch of window managers, and he didn't like any of them... he asked me, "Which do you use?" and I told him, "I run pwm, but it's probably a bit weird for a beginner." But I installed it anyway, and he immediately fell in love with it. Now he can't stand traditional WMs, including fvwm (which he's forced to use at school) and MS Windows.
--
"Is not a quine" is not a quine.
I have a master's degree in science!

[ Hug Your Trikuare ]
[ Parent ]

Ion is the next evolutionary step from pwm (2.00 / 1) (#47)
by lollipop on Sun Sep 09, 2001 at 11:19:24 AM EST

I also used pwm for a while and found it to be the best window manager by far. Of course, that was until I tried pwm's creator's second work, Ion (http://www.students.tut.fi/~tuomov/ion/). At first it is nothing but pain. However, after you customize the keys and get used to the program, its superiority to pwm becomes clear. Give Ion a try if you get a chance; I recommend it highly.

[ Parent ]
I did (2.50 / 2) (#52)
by fluffy grue on Sun Sep 09, 2001 at 12:54:31 PM EST

I tried ion, but it made things all funky with some programs (especially games); I really hated not being able to easily size an xterm to 80 columns wide, and Netscape hated it.

IMO, pwm is still more usable... also, I don't want to give up mouse functionality, I just don't want it to be so necessary. pwm seems to strike the perfect balance. Also, now that pwm has the "throw a window until it hits" functionality, repacking windows which have gone astray is easy.
--
"Is not a quine" is not a quine.
I have a master's degree in science!

[ Hug Your Trikuare ]
[ Parent ]

Gah? (5.00 / 1) (#40)
by ajf on Sun Sep 09, 2001 at 01:35:36 AM EST

I agree that overlapping windows are an inconvenience. But...

They waste screen space,

The "wasted" screen space you're talking about really is tiny, unless the user chooses absurdly thick window borders and large title bar font.

And as if that wasn't enough pain, a lot of GUIs end up putting a whole menu bar in each window with common names such as "File", "Edit", "Help", etc. That's quite a lot of wasted screen space only required because windows overlap.

I'm totally missing the connection between menu bars and window overlap. Even if this browser window I'm typing in right now were taking up 100% of my screen space and it had no borders or title bar, I'd still want to be able to use all the functions that are only accessible from those menus.

While it's true that applications can and have been written without a menu bar at all - applications running on Acorn's RISC OS, the first GUI I ever used, treat all menus like context menus; they're activated by middle-clicking on the application window - it's really got nothing to do with whether or not windows overlap.

Overlapping windows are also confusing. It is almost never obvious when there is a smaller window being totally blocked out by larger foreground windows.

While that's true, it's a shortcoming of the window manager, rather than inherent to overlapping windows.

I'm currently using a Gnome applet which displays the position of all visible windows on my various virtual desktops. I expect it could trivially be modified to draw the outlines of all windows, rather than just the visible ones.

The Enlightenment crowd (and other eye candy fans) might prefer some sort of translucent glowing border which shines through all higher windows to show where these hidden small windows appear. I don't know. But I suspect that if people really are annoyed by "losing" windows in this way, somebody has already written a window manager or utility application that solves the problem.

Often you are required to move windows out of the way of each other just to get work done.

Well, the problem there is that you haven't got enough desktop space. Allowing windows to overlap can mean the difference between showing you most of the information you need, and only showing half.

By proposing that no windows overlap, you're saying that instead of being able to see all of this window and some of that, I should see all of this window and none of that one. I don't see how that's an improvement.

Right now I've got two browser windows visible; this reply composition window, and the story page on which your comment appears. Although they overlap, I can see enough of your comment as I write to see the point I'm going to reply to next. If I couldn't overlap the windows in this fashion, I'd have to make the two windows so narrow that they didn't overlap (which would mean I would have to scroll to see your entire comment), switch between two separate screens in some fashion, or scroll up and down this reply window to refer to the copy of your comment provided by k5 which I can't actually see while typing in this textarea.

Although it's annoying to have to scroll or switch windows to read your comment in full, the problem exists because I have limited screen space. Causing the windows to overlap uses the space I have available better than not allowing overlap, as I can see some of your text all the time, which I consider better than nothing.

When applications pop up information or error windows, they pop on top of everything else regardless of what you were doing. This can lead to even more confusion because it may not always be obvious where this new window came from and what actions caused it to pop up.

True, that can be annoying. But I believe good GUIs can arrange it so that such windows only interfere with windows belonging to that application. I'm assuming that the application has a good reason to be popping up an error message; if not, again your problem isn't with allowing windows to overlap.

I would argue that there are very few instances where overlapping windows are even needed. Usually you are only interacting with one window at a time. Even in something like an image editor you are really only concerned with the image at hand and not all the others you might have open at the same time. The only times when you need access to other windows is when choosing new tools or dragging elements from one to another.

I disagree strongly. Suppose I'm using a GUI text editor and a terminal to run "make" in. I want to be able to look at my code and the compiler error messages relating to it. I may only be interacting with one or the other at a given moment, but I'm using both.

Perhaps someone more artistically inclined than I am would know better, but I suspect if somebody was trying to create one image in a similar style to another (f'rexample, creating a program launch icon from a pretty splash screen) then they'd want to be able to see the source image while creating the new one.

I'd also expect that if somebody is attempting to create an illustration to accompany a piece of text, that it would be useful to be able to read the text while editing the image. :-)

So what's my solution? In general I think windows should not be allowed to overlap and the entire screen should be dedicated to the app you are currently interacting with.

Your "solution" is to reduce useful flexibility. Of your assumptions, I believe the most problematic is that the user is only using the application with which he/she is directly interacting. I believe being able to see two or more completely separate applications at a time is far too useful to give up simply to avoid the confusion or inconvenience you describe.

That means things should be full screen all the time with no way of seeing what's behind anything else. Assuming a handy app switching bar and key combo, I think the general computing experience would be improved as less clutter on screen means there is less to confuse.

For what it's worth, I keep unrelated windows in separate virtual desktops. This solves precisely that problem. I've got an IRC client open in one desktop, and these browser windows in another. I can switch from Mozilla to xchat by pressing Ctrl+Alt+4, and back with Ctrl+Alt+8, to my mail client with Ctrl+Alt+6, Ctrl+Alt+1 to get to the terminals where I'm editing some code, Ctrl+Alt+5 where I've got some documentation open, and so on. Because I always use the same virtual desktop for particular types of application, these keypresses are almost subconscious.

When somebody speaks in one of the IRC channels I'm in, xchat's gnome applet highlights the channel name, using a different colour if my nick is mentioned, and I find that quite useful, because it means I can respond to trout-slapping attacks rapidly even when my attention is directed at another application.

When applications need to pop message up or do other things like that, they can do it within their own context and nothing interferes with anything else.

I must admit, I would prefer that frequent messages (such as Mozilla's "do you want to accept this cookie?" window) would appear in the existing window - I find lynx and w3m's way of sticking it in the status line quite handy, though I must admit sometimes I don't notice it appear if I'm paying attention to something else - but that approach really doesn't work with the kinds of GUI widgets we're accustomed to using. I suspect that, for what it's worth, you'd have to do a lot more than forbid overlapping windows to achieve the usability improvements you're after.



"I have no idea if it is true or not, but given what you read on the Web, it seems to be a valid concern." -jjayson
[ Parent ]
Mac OS X (5.00 / 1) (#50)
by calimehtar on Sun Sep 09, 2001 at 12:35:02 PM EST

OS X uses overlapping windows and has solved most of the problems you mention:

  1. It doesn't use borders, but soft drop-shadows instead
  2. Like all Mac OSes before it, OS X has only one menu visible on the screen, that of the window in the foreground. This is one of my favorite things about Mac OS.
  3. The drop-shadows actually emphasize the foreground window while partially obscuring background windows.

Overlapping windows, to me, are primarily useful while navigating the filesystem because they are still the best, most intuitive way of moving things from folder to folder and accessing several related folders simultaneously.

They can also come in handy when dealing with large quantities of relatively insignificant files. Using Photoshop and Textpad simultaneously to create a complicated layout with dozens of little images is a use-case many k5'ers will be familiar with. I find myself comparing image files to each other, using Photoshop to check pixel image sizes, and flipping back and forth between the text editor's HTML view and the browser's rendering.

While I agree that overlapping windows are frequently awkward enough to outweigh most of their benefits, I haven't yet seen or conceived of a reasonable replacement.


+++

The whole point of the Doomsday Machine is lost if you keep it a secret.


[ Parent ]
window selection/identity (none / 0) (#84)
by ant on Mon Sep 10, 2001 at 03:40:41 AM EST

Presumably, the user only has windows open that they need open. If the user is focusing on a window for an extended period of time, that window can be maximized to fill the screen, problem solved. If the user is switching between windows often, then they need some way of communicating what window they want to make active.

By having windows overlapping on the desktop, the user can do this by clicking on the window itself. The visual position and what is visible in the window make up a way for the user to tell a window's identity. As windows are moved, created, and destroyed, the user will subconsciously be able to keep track of the visual identity of a particular window fairly easily. Because there's a fair amount of visual information to associate, even windows which the user hasn't used in a bit can be recalled quickly by seeing them in the background (for example, the window for the account confirmation e-mail I received for this site has about 20 pixels on the left edge visible, which jogged my memory when I glanced over at it just now).

This is a core issue I see - overlapping windows fill this need of window identity and selection. Say you do away with overlapping windows. Either you can only have as much information ever available as will fit on the screen, or you hide background windows. If you hide them, you need to provide another means for the user to make them visible. Whatever the means, you lose this visual connection, thus the user has to make a fairly conscious mental mapping between windows and their identifier in the selection mechanism (say, a menu listing the window's titles).

I use a GUI-based text editor, and it has a menu of open windows for files, listing the filename as the menu text. At times when I have to use this menu, I notice myself devoting more mental resources to finding what filename corresponds to the window I'm looking for. If I am editing many files at once and using the windows menu often, I do form a stronger "muscle memory" map in my mind of where the items are in the menu, speeding use of it. But it's still a more distinct mapping than the direct mapping with the window's visual location and appearance.

At a mental level, I notice that as I focus in on particular tasks, I progressively think more and more in terms of what I'm working on, at its basic level, rather than the user interface or other artifacts of my tools. I may be editing two text files, switching between them often. After a while, the switching becomes automatic, relying on visual association. I'm thinking "OK, edit this here, now that there" and the rest is subconscious. Overlapping windows map closely to physical rectangular objects stacked on one another, each at different positions on a flat surface. Because of this mapping, we can use our automatic responses to physical objects on the overlapping windows.

Since most of the above is conjecture, it would probably be best to run experiments, if one is interested in finding solutions to these problems. Others have mentioned that there are window managers available that attempt to solve some of the problems in various ways.

Personally I get pretty frustrated at times when I have a mess of 10 or more windows on my desktop. Usually when that's the case, it's because I haven't cleaned up previous work.

Thanks for your posting, as it was a small catalyst for some thought on this matter.

[ Parent ]

Tried larswm ? (none / 0) (#131)
by pandeviant on Sun Sep 16, 2001 at 10:21:30 PM EST

You should try using Larswm. This window manager operates in a very different way from others. Here is the documentation pdf. Have a read of the design criteria on page 6. The window manager itself is not much to look at, and it is best experienced by using it.

[ Parent ]
the links - I'll use preview next time ! (none / 0) (#132)
by pandeviant on Sun Sep 16, 2001 at 10:39:02 PM EST

Larswm and the Documentation

[ Parent ]
:tcujbuS (none / 0) (#85)
by Holloway on Mon Sep 10, 2001 at 04:16:19 AM EST

You, my friend, need a ZUI.


== Human's wear pants, if they don't wear pants they stand out in a crowd. But if a monkey didn't wear pants it would be anonymous

[ Parent ]
Optimising debugger watchpoints (4.42 / 7) (#32)
by jesterzog on Sat Sep 08, 2001 at 06:56:40 AM EST

I've been doing lots of work with gdb lately (meaning the gnu debugger). This isn't directly about debugging, it's more about profiling processes as they run for a project I'm doing.

So anyway, throughout this I've learnt much more about gdb than I ever wanted to know... including that it has a hideous and unintuitive interface for driving it from a third party application. But that's another story. To be fair, it's designed to be used by people.

My main problem has been with tracking watchpoints, because one of the things I'm profiling is to do with variable assignments and referencing from other parts of the program.

Watchpoints are sloooooow, because to follow a watchpoint, a debugger usually has to check the value of a variable before and after each instruction step in the program. Sometimes it can be more intelligent, but generally not. After setting a watchpoint, it was taking several seconds to get through a method call that did almost nothing. You can also get hardware assisted watchpoints, but to do that you need the right hardware.

I woke up this morning with a revelation of how to fix this problem. It's too bad that I'm not in the gcc/g++ or gdb development teams.

I was thinking that in addition to the debugging-information option, a compiler should have an option to allow optimising of watchpoints.

What would happen with this option is that the compiler would translate variable accesses to internally hidden method or function calls. There would be a method for setting and a method for getting, so that assignment and references could be distinguished. This is a lot like good programming policies with hiding variables at the moment, but this would be completely internal to the compiler.

Every time the program accessed a variable, the compiler would use the method instead of the direct access. This way when setting a watchpoint, the debugger could actually get away with setting a breakpoint and claim it was a watchpoint. Breakpoints are much more efficient than watchpoints, so every time the respective breakpoints were hit, the debugger could report it as a watchpoint.
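
A rough sketch of the transformation I think you're describing, written out by hand. The switch name and accessor names are invented - no such gcc option exists as far as I know:

    // What the programmer writes:
    //
    //     int total;
    //     total = total + n;
    //
    // What a hypothetical --instrument-watchpoints build might emit instead,
    // so the debugger can plant ordinary breakpoints inside the accessors:

    static int total_storage;

    static int  get_total()      { return total_storage; }   // breakpoint here = read "watchpoint"
    static void set_total(int v) { total_storage = v; }       // breakpoint here = write "watchpoint"

    void bump(int n) {
        set_total(get_total() + n);   // was: total = total + n;
    }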

Is this a good idea, or am I too late? It could really speed up watchpoints and make them much more efficient for debugging sessions. I just wish it was implemented across gcc/g++ and gdb. Maybe I should ask for it.

And gdb should also be able to catch c++ exceptions on alphas and sparcs. That would just make my day.


jesterzog Fight the light


that rules. (3.00 / 2) (#38)
by sayke on Sat Sep 08, 2001 at 05:34:02 PM EST

write to the gdb people.


sayke, v2.3.1 /* i am the middle finger of the invisible hand */
[ Parent ]

Bijou problemette (5.00 / 1) (#90)
by pw201 on Mon Sep 10, 2001 at 07:16:41 AM EST

If I'm doing this in C and I'm looking for the point at which a variable gets trashed by some horrible pointer overrun, then your method won't work since the problem I'm trying to catch is precisely that the variable isn't being accessed in the usual way.

I guess there'd be some more general problems with mixing this idea with pointers, too. What do you do about

int a = 10;
int *b;
b = &a;
*b = 11;
for example?

[ Parent ]

Well, (3.00 / 1) (#92)
by delmoi on Mon Sep 10, 2001 at 08:30:41 AM EST

Both *b and b would use wrapper functions too, so it would be possible to handle stuff like that. Another possibility would be to use an array of function pointers as 'memory' but that might get to be really slow :P
--
"'argumentation' is not a word, idiot." -- thelizman
[ Parent ]
Something I'd like in a debugger... (5.00 / 1) (#126)
by SIGFPE on Thu Sep 13, 2001 at 01:00:04 PM EST

What would you give for a debugger that allowed you to roll back the last 1000 instructions or so? If CPUs were reversible that would be easy. However, the amount of information required to reverse a non-reversible instruction isn't that large. If an Intel CPU were to execute "mov eax,ebx", say, you'd need to store the old value of eax and some flags saying that eax had been changed. If a CPU came with a few K of memory to allow the reversal of the last 1000 instructions it'd make debugging so much easier. Maybe this is something you could implement with a transmeta chip. I wouldn't mind if while debugging the CPU ran at half the speed because of the overhead. The ability to debug instructions before your breakpoint would be awesome!
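
The bookkeeping really is cheap. Here's a toy sketch of the idea, done in software for writes routed through one helper rather than per instruction in the CPU - write32, the history limit and everything else here are invented for illustration:

    #include <cstddef>
    #include <cstdio>
    #include <deque>
    #include <utility>

    // Every write goes through write32(), which remembers (address, old value).
    // undo() rolls the most recent write back. A real reversible debugger would
    // record this per executed instruction, in hardware or via binary translation.
    static std::deque<std::pair<int *, int>> history;
    static const std::size_t HISTORY_LIMIT = 1000;   // "the last 1000 instructions"

    void write32(int *addr, int value) {
        history.emplace_back(addr, *addr);            // save the old value first
        if (history.size() > HISTORY_LIMIT)
            history.pop_front();                      // forget the oldest entry
        *addr = value;
    }

    bool undo() {
        if (history.empty())
            return false;
        std::pair<int *, int> last = history.back();
        history.pop_back();
        *last.first = last.second;                    // restore the old value
        return true;
    }

    int main() {
        int x = 1;
        write32(&x, 2);
        write32(&x, 3);
        undo();                                       // x is 2 again
        std::printf("x after one undo: %d\n", x);
        return 0;
    }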
SIGFPE
[ Parent ]
Bidirectional debugging (none / 0) (#135)
by jesterzog on Tue Sep 18, 2001 at 06:48:45 AM EST

Strangely enough the other day as part of the project I'm working on, I actually found a paper someone had written on this.. so apparently you're not the only person who's been thinking about it.

It's called Efficient Algorithms for Bidirectional Debugging by Bob Boothe of the Computer Science Dept at the University of Southern Maine, Portland, ME. (Mail boothe%cs-usm-maine-edu)

It was published in 2000 by ACM. The abstract is here but you might need to be a member to download it.


jesterzog Fight the light


[ Parent ]
Already done -- for Windows (none / 0) (#149)
by Zan Lynx on Tue Sep 25, 2001 at 08:28:09 PM EST

SoftIce by NuMega does this. They have a thing called a back trace buffer. It works by setting a range of hardware breakpoints, and at each breakpoint it records the instruction and CPU state. It's very slow and you need to know in advance which part of the program you want to record. You could record everything, but then your 1GHz Pentium will run code at about 50 MHz. Doing something like this for GDB would be an awesome project. It might be possible to get GCC to do something like it with the IA64 chip as well. Insert code to run in parallel with the real code and record CPU state. As a programmer I would love this ability. It would be like having a stack trace but 100 times better.

[ Parent ]
Watchpoints (none / 0) (#130)
by Sax Maniac on Fri Sep 14, 2001 at 10:37:26 PM EST

Disclaimer: I work for a company that sells debuggers.

I was thinking that in addition to the debugging-information option, a compiler should have an option to allow optimising of watchpoints.

While real high-performance watchpoints require hardware support that generates a trap when an address is modified, watchpoints can be emulated. Simplified, it's like this: page protection is built into most hardware. With the right tricks, it's possible to mark the page as read-only. This causes a trap to be sent that the debugger can interpret. It checks if the instruction that generated the trap is trying to write to the address being watched. Not as fast as a true hardware watchpoint (it generates a lot of false hits), but quite a bit faster than checking between each instruction. This is a similar concept to Electric Fence.
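
For the curious, here's a minimal sketch of the page-protection trick on a POSIX system (Linux-flavoured). A real emulated watchpoint would compare the fault address in siginfo against the watched address, single-step the faulting instruction and then re-protect the page; this deliberately skips all of that and just reports the first hit:

    #include <csignal>
    #include <cstdio>
    #include <sys/mman.h>
    #include <unistd.h>

    static void *page;
    static long  pagesize;

    static void on_fault(int, siginfo_t *, void *) {
        // Any write to the protected page lands here. This sketch just reports
        // the hit and unprotects the page for good, so the write can retry.
        const char msg[] = "write hit the watched page\n";
        write(2, msg, sizeof msg - 1);
        mprotect(page, pagesize, PROT_READ | PROT_WRITE);
    }

    int main() {
        pagesize = sysconf(_SC_PAGESIZE);
        page = mmap(NULL, pagesize, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        int *watched = (int *)page;

        struct sigaction sa = {};
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = SA_SIGINFO;
        sa.sa_sigaction = on_fault;
        sigaction(SIGSEGV, &sa, NULL);

        mprotect(page, pagesize, PROT_READ);    // "arm" the watchpoint
        *watched = 42;                          // faults, handler runs, write retries
        std::printf("watched value is now %d\n", *watched);
        return 0;
    }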

Your idea has merit, but I don't know how useful it is. Here's the wrong word in this sentence: compiler. When you add breakpoints or watchpoints to a program, it has to be done on-the-fly, otherwise it's of limited use. Are you saying that to find a bug, you will stop, recompile the application, and then fire it back up again? Could you imagine recompiling your program to add a breakpoint?

If you really wanted to do this, you don't even need compiler support. Write a C++ wrapper class that wraps your data and defines operator=. Now all your assignments are channeled through one source line. See where I'm going? Set a conditional breakpoint on operator= that says "if this equals address X, stop". There, cheap watchpoint on a single variable.
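
Spelling that wrapper out - just a sketch of the trick described above; the Watched template and the variable names are mine:

    #include <cstdio>

    // Every assignment funnels through operator=, so one conditional breakpoint
    // there ("break if this == &retries") acts as a cheap write watchpoint.
    template <typename T>
    class Watched {
    public:
        Watched(const T &v = T()) : value(v) {}
        Watched &operator=(const T &v) {   // <- put the conditional breakpoint here
            value = v;
            return *this;
        }
        operator T() const { return value; }
    private:
        T value;
    };

    int main() {
        Watched<int> retries;   // drop-in replacement for "int retries;"
        retries = 3;            // stops here under the conditional breakpoint
        std::printf("%d\n", (int)retries);
        return 0;
    }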

And gdb should also be able to catch c++ exceptions on alphas and sparcs. That would just make my day.

<PLUG> You might have to go out and buy a real debugger... </PLUG>


Stop screwing around with printf and gdb and get a debugger that doesn't suck.
[ Parent ]

compilers and watchpoints (none / 0) (#136)
by jesterzog on Tue Sep 18, 2001 at 07:00:29 AM EST

Thanks for the input. It's great having input from someone experienced and qualified to talk about it.

With regard to the compiler, I wouldn't go and recompile something in the exact way you describe.

I wasn't thinking of the compilation issues as much of an issue though. gcc needs a compiler option (-g) to include general debugging information anyway and I usually leave this on when I'm developing something. Being able to refer to things by their actual name in the source code is so refreshing.

If I found I was setting lots of watchpoints in my debugging, I'd probably consider a watchpoint optimisation switch, too. I can see how this could be problematic with a really big project, but I can also see how it would be very useful with small to medium projects if it was possible.

Watchpoint optimisation isn't something I'd leave on in any production release. Production releases wouldn't usually be used for debugging. If they need to be, there's always the existing way of setting watchpoints.


jesterzog Fight the light


[ Parent ]
Debug information vs. watchpoint optimization (none / 0) (#145)
by Sax Maniac on Sat Sep 22, 2001 at 09:25:01 AM EST

Well, when you use -g, it produces debug information for the compilation unit you're compiling. Typically, this is only useful if you set it on the whole program: since the nature of debugging is exploratory, you typically don't know where a bug is. You don't know exactly which modules you are going to need to look at, so you make them all debug.

Putting in debug information has no effect on the code generated. It just puts more info into the file, and doesn't slow the executable down at all. This is why you can strip the debug information out after the fact. So, it's typical to leave the -g option on most of the time, because it only costs you a bit of disk space.

Compare that to a "watchpoint optimization switch". Which variables does it trap then? All variables defined in the unit? Declared? Referenced? Or just one called out by name, as in -g myvariable? Trapping them all would slow your program to a crawl and make it unusable, and naming just one violates the "just-in-time rule" of debugging. You'd have to recompile each time you wanted to debug a different variable.

Stop screwing around with printf and gdb and get a debugger that doesn't suck.
[ Parent ]

I don't have any computer science related ideas... (2.80 / 5) (#41)
by theboz on Sun Sep 09, 2001 at 01:58:57 AM EST

But, I do have one good idea for clothing.

I think all shoes should come with a teflon coating on the bottom, so when I step in dog shit it doesn't stick all over my shoes and piss people off when I walk into their house, smelling so natural and soiling their carpet.

Stuff.

Frictionless shoes (4.00 / 4) (#54)
by fluffy grue on Sun Sep 09, 2001 at 12:57:14 PM EST

I don't care what the creators of Sonic the Hedgehog say, but frictionless shoes will make walking very difficult. :)

Wouldn't it be easier to just not step in dog shit?
--
"Is not a quine" is not a quine.
I have a master's degree in science!

[ Hug Your Trikuare ]
[ Parent ]

Uh (4.25 / 4) (#57)
by delmoi on Sun Sep 09, 2001 at 02:38:13 PM EST

The traction of teflon on anything is the same as wet ice on wet ice. Having teflon on the bottom of your shoes would be like living in a world of ice. At least for the few days you have before the teflon scratches off...
--
"'argumentation' is not a word, idiot." -- thelizman
[ Parent ]
Disposable Soles (none / 0) (#82)
by farkit on Mon Sep 10, 2001 at 01:54:00 AM EST

Why not just attach your soles with velcro, and change them as you need to. You could even have "slicks" and "wets"...

[ Parent ]
alright! (2.00 / 2) (#44)
by Ender Ryan on Sun Sep 09, 2001 at 10:06:04 AM EST

So, when we have quantum computers, we won't need to worry about bandwidth anymore!


-
Exposing vast conspiracies! Experts at everything even outside our expertise! Liberators of the world from the oppression of the evil USian Empire!

We are Kuro5hin!


A new user interface (4.75 / 4) (#45)
by Skippy on Sun Sep 09, 2001 at 10:09:52 AM EST

I'd like a completely new computer user interface. Rodents need not apply. The hardware doesn't exist to use it properly, but we're getting there. Here goes:

It's still a little fuzzy in my mind but I'll try and describe it as best I can. The desktop is exactly that, the desktop. It's a BIG (a meter wide and 2/3 of a meter high), touch-capable LCD. There will be no mice.

The work area is a 2D representation of a 3D space. This will work, lemme get there. The 3D workspace is the surface of a sphere, the top of which touches the screen at a user-definable point (within easy reach of the person's arms). The point where the sphere touches the desktop is the point at which a document is maximized; call this the workpoint. As documents are moved away from the workpoint they get smaller as they move "down" the surface of the sphere BUT they remain open. At a certain distance from the workpoint they become icons. All moving of documents is done by dragging with a finger. This allows you to "spread out" multiple working documents and still see them. You would close a document by moving it past the point where it becomes an icon. This mirrors how people actually work with documents on a physical desktop. What is being worked on is right in front of you, and other documents you need are on the periphery.

Applications would have all of their interface on floating palettes that are always at the level of the desktop, regardless of where on the desktop they are moved. All interaction - menu use, hitting the bold "button" - is done with a finger.

The desktop interacts with the filesystem like this. The desktop represents only one folder in a filesystem at a time. As a folder is dragged to the workpoint the desktop changes to reflect the contents of that folder. Moving back out of a folder ("up") is accomplished by another user-defined hotpoint. File attributes are shown as part of the icon. Either as color tints or small overlays (like the shortcut overlay in Windows).

I don't think I've described it very well but that's about the best I could do. I think of it looking kind of like Apple's scalable displays where documents iconize into the dock but are still identifiable. If anyone has questions, I'll certainly elaborate.

# I am now finished talking out my ass about things that I am not qualified to discuss. #

HUD (4.00 / 1) (#49)
by mrBlond on Sun Sep 09, 2001 at 12:03:38 PM EST

> The desktop is exactly that, the desktop. It's a BIG (meter
> wide and 2/3 meter high) and touch interface capable LCD.

Why not just a heads-up display on your eyeglasses and little sensors on your fingers? That way you can still focus behind the image unlike normal VR helmets, your PC is as portable as you are, and you don't need an expensive "desktop" - reason knows big monitors are already expensive.
--
Inoshiro for cabal leader.
[ Parent ]

User Interface: deeper issues. Parallels? (5.00 / 1) (#138)
by gnomon on Tue Sep 18, 2001 at 04:56:53 PM EST

I really like the idea of having a user interface that fits into the flow of how work is done, rather than providing various bits and bobs that help out with isolated parts or sections of the overall task. Document organization is a very important part of this concept, especially in our current data-rich paradigm (it's a shame that a truly powerful hyperlinking infrastructure was never established. HTML links work, most of the time, but do some reading about Ted Nelson's Xanadu project, HyTime - heck, even XLink and XPath - and you'll realize that HTML linking delivers only the barest minimum of the potential power of a deep hypermedia system).

There are reams upon reams of writing about interesting new directions for organization systems, at various levels of abstraction: Lifestreams is a fascinating concept for long-term personal data organization; Startrees, formerly known as hyperbolic trees, are interesting constructions that allow users to navigate huge dataspaces without getting hopelessly lost; heck, if you think about it, there's nothing that Tim Berners-Lee's new creation Curl (or even other interesting ideas like adding support for ontologies to the structure of online documents - the so-called "semantic web") attempts to do that a little creative Scheme or Common Lisp can't accomplish.

Aside:

In fact, there are a great many parallels between these languages. Also, if you decide to investigate, remember that although Curl offers what looks like a relatively familiar syntax, packages like CTAX can make Scheme look and work like other languages. Carrying this even further, although unfortunately back into the realm of (currently) proprietary languages, Rebol seems to combine the power and elegance of Scheme and Forth, the familiar syntax of C and the text-hacking power of Perl with a level of network transparency that I've seen nowhere else.

Anyhow, I digress. My point is that there are hundreds of really cool paradigms out there just waiting to be implemented. Something that very few of them have, however, is a sense of humility - most of these concepts work only by excluding every other paradigm, sometimes radically. Doing away with traditional filesystems is a huge change, for example, as is abandoning the concept of starting, using and then exiting applications.

What I would really like to see is an emphasis on the semantics of productive computer work in addition to the current focus on and interest in how these tasks are presented to the user. Instead of forcing the user into a new overarching philosophy of computation that allows vast benefits in a narrow scope of tasks, I would rather see specific applications that allow these kinds of benefits while simultaneously offering clear, powerful interoperability with other applications - any other applications, not just the ones written by some particular vendor. Ideally, this interoperability would be so seamless that the user wouldn't know (or even have to bother knowing) which application is currently providing the services in use - there would just be a set of verbs to choose from in a context-dependent fashion.
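To make the "set of verbs" idea concrete, here is a toy Python sketch of a verb registry that applications could publish into. The registry, the decorator and the verb names are all invented for illustration, not a proposal for a real API:

    # Hypothetical verb registry: applications register verbs for the kinds of
    # objects they understand, and the interface only ever asks "what can be
    # done with this?" -- it never needs to know which application answers.
    VERB_REGISTRY = {}

    def provides(object_kind, verb_name):
        # Decorator an application would use to publish a verb.
        def register(func):
            VERB_REGISTRY.setdefault(object_kind, {})[verb_name] = func
            return func
        return register

    @provides("text", "count-words")
    def count_words(text):
        return len(text.split())

    @provides("text", "shout")
    def shout(text):
        return text.upper()

    def verbs_for(object_kind):
        # What the interface would offer the user, however it chooses to show it.
        return sorted(VERB_REGISTRY.get(object_kind, {}))

    def apply_verb(object_kind, verb_name, obj):
        return VERB_REGISTRY[object_kind][verb_name](obj)

    print(verbs_for("text"))                      # ['count-words', 'shout']
    print(apply_verb("text", "shout", "calm and ubiquitous computing"))

The caller never learns which application supplied a verb; it only asks what can be done with the kind of object in hand.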

Aside:

Note that the method for choosing these verbs is left deliberately undefined - it could be dragging virtual documents around on an electronic desktop with your fingertips, pointing and clicking on icons, running text-based commands from an interactive shell or waving at a video-capture device hooked up to a gesture-recognition system. The interface should be dictated by the verbs available to the user, not by the whims of an interface designer (except, of course, if the point of the application is to demonstrate the interface): there are some tasks for which a system that provides haptic feedback is best (playing with protein folding in 3-dimensional space comes to mind), others that are best served by a standard "sheet of paper" WYSIWYG interface (desktop publishing of small documents), and still others for which a gesture-based interface would be ideal (home automation, for example - motion-sensor lights and "clappers" already occupy a corner of this interface space).

Most software tools work best in conjunction with others, but integration can sometimes lead to a tool with a vast number of functions, each performed in a sub-par fashion; the field of home electronics is rife with examples of this (TVs with built-in VCRs that have terrible picture quality and no programmability; sound cards that feature built-in modems, or vice versa; CD players with built-in radios that couldn't tune in to an FM station if they were a meter away from a 3000-watt broadcast antenna). On the other hand, a tool that is built to be incrementally modified, and for which the modifications can be easily distributed, can eventually become the locally optimal solution for a good many problems. Emacs is (arguably) a good example of this, although I personally don't think that enough people engage in serious elisp hacking to make the program ideal for many tasks. If you think about it a little, though, you might come to the same conclusion that I have: in order to have a truly mature, useful set of tools that work together in concert to satisfy complex demands from the user in such an intuitive fashion that the tools themselves sink into the background and only the task is emphasized (a tenet of calm and ubiquitous computing, if you're curious), the user must be able to customize those tools in some fashion. In fact, there should be no distinction between "using" and "customizing" the tools - like conjugating verbs and pluralizing nouns in a sentence, modifying the structure of a software tool should be a natural thing to do.

I think that some basic rethinking of the role of computers is necessary for the entire field of computing to move forward. I don't mean to imply that there is no more innovation possible in our current paradigm - far from it! - only that as we press on further and further with our current assumptions about software development, system architecture and interface design (and the issues raised by the deep interrelation of these concepts), it becomes more and more difficult to create powerful, simple systems for non-technical users.



[ Parent ]
Good source for information on different ideas (none / 0) (#150)
by bpt on Mon Oct 08, 2001 at 01:43:36 PM EST

For a general overview of lots of new ideas, the TUNES project is a good place to start. Besides the actual project, there is a Review subproject which has information on different programming languages and operating systems.
--
Lisp Users: Due to the holiday next Monday, there will be no garbage collection.
[ Parent ]
video compression (3.00 / 3) (#46)
by Ender Ryan on Sun Sep 09, 2001 at 10:15:37 AM EST

Ok, I have a rather crazy idea, IMO. I have no idea how practical it would be, or even if it could work, or even if anyone else has ever tried this, as I know absolutely nothing at all about video compression. In fact, I know nothing about compression at all.

So, here goes.

From what I understand, video compression is usually done with techniques similar to MP3's: removing imperceptible data from the video, and compressing repeated data the same way normal lossless compression tools do.

My idea, however, is completely different. Instead of modifying the data to make it small, why not actually store a few hundred megs, or more, of imagery likely to appear in any given video? These could simply be blocks of pixels, 10x10 or 20x20, and instead of sending the actual video, you would re-create it using the blocks and stream only information telling which blocks appear where in the video.

Obviously, the more data stored for this, the better the video quality would be. Is this absolutely ridiculous, or could it work? Has it ever been tried before? Like I said, I have absolutely no idea at all.


-
Exposing vast conspiracies! Experts at everything even outside our expertise! Liberators of the world from the oppression of the evil USian Empire!

We are Kuro5hin!


A message (2.20 / 5) (#53)
by fluffy grue on Sun Sep 09, 2001 at 12:55:37 PM EST

Kids, this is why you shouldn't talk out your ass about things you know nothing about.
--
"Is not a quine" is not a quine.
I have a master's degree in science!

[ Hug Your Trikuare ]
[ Parent ]

oh I don't know (none / 0) (#79)
by odaiwai on Mon Sep 10, 2001 at 12:53:17 AM EST

The thought of someone downloading a warez copy of Star Wars EpIII: Yoda Kicks Butt only to find that the actors and the scenery have all been replaced with random images from his hard drive sounds pretty nifty.

A compression scheme which would utilise previously stored images already exists. It's called a Book. The description says "a tall blonde in a small black dress" and you search your brain for pictures of tall blondes in black dresses and use that image unless the author says something which makes you change the image.

dave
-- "They're chefs! Chefs with chainsaws!"
[ Parent ]
Thing is... (4.00 / 1) (#80)
by fluffy grue on Mon Sep 10, 2001 at 12:58:43 AM EST

What he was talking about falls under a similar category as the other crackheaded compression thread on this story. It'd take less data to represent the 16x16-pixel block algorithmically than it would to enumerate existing, catalogued data, and it would use a lot less I/O bandwidth as well.
--
"Is not a quine" is not a quine.
I have a master's degree in science!

[ Hug Your Trikuare ]
[ Parent ]

message (none / 0) (#128)
by yesterdays children on Fri Sep 14, 2001 at 07:39:35 PM EST

Kids, let this be a lesson to you. You can be as smart and knowledgeable as anybody and negate all of these advantages by being an ass.

[ Parent ]
subject: (2.00 / 3) (#63)
by delmoi on Sun Sep 09, 2001 at 03:39:52 PM EST

From what I understand, video compression is usually done with techniques similar to MP3's: removing imperceptible data from the video, and compressing repeated data the same way normal lossless compression tools do.

Your understanding, then, is totally incorrect.

Obviously, the more data stored for this, the better the video quality would be. Is this absolutely ridiculous, or could it work?

It's absolutely ridiculous.
--
"'argumentation' is not a word, idiot." -- thelizman
[ Parent ]
curt response (1.00 / 1) (#69)
by Ender Ryan on Sun Sep 09, 2001 at 07:27:09 PM EST

Care to elaborate? Why is it ridiculous?

Another poster didn't think it was ridiculous and gave an example of something similar.

Fucker


-
Exposing vast conspiracies! Experts at everything even outside our expertise! Liberators of the world from the oppression of the evil USian Empire!

We are Kuro5hin!


[ Parent ]

Fucker? (none / 0) (#76)
by delmoi on Sun Sep 09, 2001 at 11:10:48 PM EST

I'm sorry, your idea is bunk. But thanks for the personal insult. It makes me feel special.

Vector Quantization deals with pixels, not squares. And while your example shares some traits with VQ, it is radically different in scope. The samples on Gamasutra used codebooks of 256 entries or so. You're talking about a system that would need codebooks on the order of 22,400 to 240,000 entries. Not exactly trivial, especially the storage requirement. And even if you could figure out a way around those encoding requirements, you'd still have to do a huge search on each block of each frame for encoding. Encoding a second of 640x480 video would require 92,160 lookups. And that's sifting through gigs and gigs of data... even if you only use 1/1.58e+12041 of all the possible blocks.
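For what it's worth, the lookup figure checks out if you assume 10x10-pixel blocks and 30 frames per second (neither is stated above, so both are assumptions). A quick Python check:

    # Back-of-the-envelope check, assuming 10x10-pixel blocks and 30 frames/sec.
    width, height, block, fps = 640, 480, 10, 30
    blocks_per_frame = (width // block) * (height // block)  # 64 * 48 = 3072
    lookups_per_second = blocks_per_frame * fps              # 92,160
    print(blocks_per_frame, lookups_per_second)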

The idea just sounds ridiculous and ill-conceived. If you think you can do it, then prove me wrong. Until then, I think it's not a viable solution.
--
"'argumentation' is not a word, idiot." -- thelizman
[ Parent ]
thank you (none / 0) (#102)
by Ender Ryan on Mon Sep 10, 2001 at 10:28:41 AM EST

Sorry for the personal insult; I felt insulted that you answered so curtly, without any explanation of why you thought it was ridiculous.

The topic at hand was, after all, crazy ideas in your head that may not seem practical at all.


-
Exposing vast conspiracies! Experts at everything even outside our expertise! Liberators of the world from the oppression of the evil USian Empire!

We are Kuro5hin!


[ Parent ]

i don't believe you (5.00 / 1) (#127)
by yesterdays children on Fri Sep 14, 2001 at 07:35:30 PM EST

Explain why the 'codebooks' would have to be so big. Why does the entire possible set of images have to be in the codebook? I once read that every programming problem could be expressed as a case of 'caching'. I'm actually pretty interested in what he posted, if only as something to spur good learning all around. If you are interested in teaching, it'd be great to do this in as non-asshole a fashion as possible.

So what if ultimately what somebody proposes would have no feasibility or provide no net gain? It's just as valid to learn why this would be the case, and your sour post just ruins the spirit of what seemed like a fun thread where folks could pick up some knowledge and have a bit of fun at the same time.

[ Parent ]

Vector Quantization? (3.50 / 2) (#68)
by arjan de lumens on Sun Sep 09, 2001 at 07:01:54 PM EST

Such a method sounds a lot like Vector Quantization - dunno if it has ever been tried with video, but variations of this method have been tried with still images, apparently with quite good results. Building a codebook of 'likely' imagery for a movie or even a set of movies sounds like a rather difficult and time-consuming task, though, if it is feasible at all.
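For the curious, here is a minimal Python/NumPy sketch of VQ-style block coding on a single greyscale frame. The codebook here is just random blocks, and the block size, codebook size and frame are placeholders, so it only illustrates the shape of the scheme, not its quality:

    import numpy as np

    BLOCK = 10            # 10x10-pixel blocks, as in the idea above
    CODEBOOK_SIZE = 256   # tiny codebook; the scheme above imagines a far bigger one

    rng = np.random.default_rng(0)
    codebook = rng.integers(0, 256, size=(CODEBOOK_SIZE, BLOCK, BLOCK))  # "likely imagery"
    frame = rng.integers(0, 256, size=(480, 640))                        # stand-in greyscale frame

    def encode(frame):
        # Replace each block with the index of the nearest codebook entry.
        indices = []
        for y in range(0, frame.shape[0], BLOCK):
            for x in range(0, frame.shape[1], BLOCK):
                block = frame[y:y + BLOCK, x:x + BLOCK]
                errors = ((codebook - block) ** 2).sum(axis=(1, 2))  # one comparison per entry
                indices.append(int(errors.argmin()))
        return indices

    def decode(indices, shape):
        out = np.zeros(shape, dtype=int)
        i = 0
        for y in range(0, shape[0], BLOCK):
            for x in range(0, shape[1], BLOCK):
                out[y:y + BLOCK, x:x + BLOCK] = codebook[indices[i]]
                i += 1
        return out

    indices = encode(frame)   # 3072 one-byte indices instead of 307,200 pixels
    rebuilt = decode(indices, frame.shape)

It also makes the cost visible: every block is compared against every codebook entry, which is exactly the search the objection above is about.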

[ Parent ]
Another message (5.00 / 2) (#75)
by reishus on Sun Sep 09, 2001 at 11:10:21 PM EST

Two of the responses you got totally dismissed your idea without going into the details of why it will not work. They were also rude and discouraging.

I say, to be more general, that everyone should feel free to come up with ideas in subjects they don't know much about. 95% of what comes out of brainstorming is complete crap - but the idea is to defeat your "internal censor" and spit out anything you can think of, and then separate the good from the bad. An outsider might even bring a fresh perspective to the table.

In short, don't disrespect someone because you think their idea is bad. Everyone comes up with both crappy ideas and good ideas.

[ Parent ]
A message (3.00 / 2) (#81)
by fluffy grue on Mon Sep 10, 2001 at 01:07:35 AM EST

The real world does not work like the happy special fuzzy sunshine bunch. I also can't stand to see "false experts" rambling on without even the slightest clue about what they're talking about. It does no good to talk about stuff without even a basic goddamned understanding, because all it does is raise the hopes of others who don't know anything about the field while, at the same time, pissing off people who actually do know at least something about this stuff.

His compression scheme was akin to someone, on the topic of social policy, saying, "Why don't we just have everyone cooperate and be nice to each other?" It's like someone on the subject of chemistry asking, "Why don't we just extract the energy from the chemical bonds of everything around us? That would solve the energy crisis!"

I mean, come on, people, at least have some basic understanding of what you're talking about first. Or at least show that you put some basic goddamned thought into what implementation would require. It's not a matter of crappy vs. good ideas, it's a matter of completely fucking clueless vs. somewhat-informed ideas.
--
"Is not a quine" is not a quine.
I have a master's degree in science!

[ Hug Your Trikuare ]
[ Parent ]

put up or shut up (5.00 / 1) (#83)
by ant on Mon Sep 10, 2001 at 03:06:53 AM EST

  • Current ideas have flaws. To use them to absolutely judge new ideas is limiting.
  • If someone posts an idea that you think has flaws, point the flaws out. Put up or shut up.
  • Great, you're an expert on clueless people who think they have good ideas. What are your good ideas?


[ Parent ]
I'll put up (none / 0) (#105)
by fluffy grue on Mon Sep 10, 2001 at 01:31:02 PM EST

How about a 3D engine which has realtime reflections and shadows, LOD, wavelet-compressed mesh geometry, dynamic geometry morphing (regardless of source and destination topologies), blah blah blah.
--
"Is not a quine" is not a quine.
I have a master's degree in science!

[ Hug Your Trikuare ]
[ Parent ]

grrr... (none / 0) (#103)
by Ender Ryan on Mon Sep 10, 2001 at 10:42:41 AM EST

I am not part of some "happy special fuzzy sunshine bunch". I had an idea about something I know nothing about; it was a completely different approach to a problem that is extremely difficult. I have put SOME thought into it, and I assure you, it is POSSIBLE, but almost definitely not PRACTICAL.

The point of this discussion was ideas that seem crazy and impractical, which describes my idea fairly well.

Also, consider: there is a company claiming to be able to compress video at about 100,000:1 (or something crazy like that), and it has had outside parties verify that it actually works. Yes, it's most likely bullshit, but maybe not. If in fact it isn't bullshit, it seems likely to me that however they are doing it is something totally off the wall that would probably sound ridiculous to you.


-
Exposing vast conspiracies! Experts at everything even outside our expertise! Liberators of the world from the oppression of the evil USian Empire!

We are Kuro5hin!


[ Parent ]

I see (none / 0) (#106)
by fluffy grue on Mon Sep 10, 2001 at 01:31:45 PM EST

You yourself said you had no clue about video compression when you posted your ill-formed idea. delmoi has already stated in detail why it is a bad idea; I feel no need to repeat him.
--
"Is not a quine" is not a quine.
I have a master's degree in science!

[ Hug Your Trikuare ]
[ Parent ]

Hey, what gives? (5.00 / 1) (#119)
by locke baron on Tue Sep 11, 2001 at 01:51:05 AM EST

You guys (fluffy grue and delmoi) are flaming him stupid over this... So he's talking out his ass about something he knows nothing about. Big fat hairy fucking deal. He included the disclaimer, right? E'gads. Show some frigging respect.


Micro$oft uses Quake clannies to wage war on Iraq! - explodingheadboy
[ Parent ]
Flames (none / 0) (#123)
by fluffy grue on Tue Sep 11, 2001 at 10:17:34 AM EST

Ender Ryan posted his idea. delmoi posted a brief (but polite) message saying that it wouldn't work. Ender Ryan then inflated it to a personal attack against delmoi. Other people came to his "rescue" by bringing up this whole "so what if he doesn't know what he's talking about, he might have a legitimate idea anyway!" thing. At that point, I consider it fair game.
--
"Is not a quine" is not a quine.
I have a master's degree in science!

[ Hug Your Trikuare ]
[ Parent ]

Voice CODECs (5.00 / 1) (#116)
by mmcc on Mon Sep 10, 2001 at 11:47:32 PM EST

Actually some Voice CODECs work in this way.

They keep a small codebook of commonly occurring sound snippets and try to match them to the raw sound, then send an index and some parameters (e.g. volume, frequency).

The algorithm in question is the CELP (Code Excited Linear Predictive) voice compression algorithm. IIRC, it's used in the GSM standard.

It might be impractical for image compression because of the large number of comparisons you would need to perform on each block of each frame.

Maybe a good thing to match would be pieces of flesh :-)



[ Parent ]

Holographic Thought Recognition (1.75 / 4) (#58)
by ganglian on Sun Sep 09, 2001 at 03:03:13 PM EST

Fuck windows, and everything it runs on. A 3-dimensional open-air interface that streams out of a hardware hook into a thought-recognition interface. You think it, the hologram shows it, and in my demented vision, sorry Redmond, it's open source and renders anything Windows not so much obsolete as irrelevant..... sleep well
You heard me.
Some crazy or not so crazy ideas (4.50 / 4) (#65)
by Misagon on Sun Sep 09, 2001 at 04:10:33 PM EST

Something like Java's retargetable class files, but geared toward generating optimized code for modern processors, and with the source language being C. Many types of architecture-independent optimizations and analyses would already have been done by the front end, with the results encoded in the files to make the job easier for the code generator. I have also thought of many ways in which the code could be transformed for better compression. This would be great for handheld computers, where there are a large number of target CPU architectures but a smaller number of developers willing to install a cross-compiler. A Linux distribution could be based on it, and hopefully it would make binary compatibility between CPUs less important ...
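A toy Python sketch of what such a retargetable intermediate form might look like: one architecture-independent "object file" and two trivial back ends. The ops and the pseudo-assembly are invented purely for illustration and bear no relation to any real format:

    # An architecture-independent "object file": a list of stack-machine ops
    # that the front end would emit after doing its analysis and optimization.
    PROGRAM = [
        ("push", 6),
        ("push", 7),
        ("mul",),
        ("print",),
    ]

    def run(program):
        # Back end #1: interpret the IR directly (a "soft" target).
        stack = []
        for op, *args in program:
            if op == "push":
                stack.append(args[0])
            elif op == "mul":
                b, a = stack.pop(), stack.pop()
                stack.append(a * b)
            elif op == "print":
                print(stack.pop())

    def emit_pseudo_asm(program):
        # Back end #2: lower the same IR to a made-up register machine.
        lines, stack, regs = [], [], 0
        for op, *args in program:
            if op == "push":
                lines.append(f"load r{regs}, {args[0]}")
                stack.append(f"r{regs}")
                regs += 1
            elif op == "mul":
                b, a = stack.pop(), stack.pop()
                lines.append(f"mul {a}, {a}, {b}")
                stack.append(a)
            elif op == "print":
                lines.append(f"call print, {stack.pop()}")
        return "\n".join(lines)

    run(PROGRAM)                   # prints 42
    print(emit_pseudo_asm(PROGRAM))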

A hybrid between an airship and an airplane. It would be a "lifting body" where the wing is the fuselage - this would provide good space/wing-area ratio while still being a good lifting wing. The idea is that some of the extra space could be filled with helium to make the craft lighter, needing shorter runways and less fuel.

A computer on your wrist. The problem is heat, sweat and comfort. I am thinking about plastic bubbles filled with air next to the skin. Running through these bubbles there would be channels filled with a cooling liquid pumped around using kinetic energy from the user's arm motions.

A key-chain USB credit-card adapter. Insert your credit card in the adapter, insert the adapter into the USB port of a computer and the adapter would emulate a disk drive with keys being files and with an executable program that could be run for creating use-once credit card numbers for online purchases. With this contraption, it should be possible to buy stuff over the Internet from just about any Internet Café where the computers have USB peripherals (such as Macintoshes), and it would be secure.

Sorry about the incoherent/creative use of the English language here. It is not my native tongue.
--
Don't Allow Yourself To Be Programmed!

Those ideas (none / 0) (#117)
by fluffy grue on Tue Sep 11, 2001 at 12:50:45 AM EST

Retargetable objects: I had discussed this with a friend of mine a while ago... seemed like a great idea. What we came up with was pretty simple: gcc has a number of stages, one of the specific ones being 'language frontend' which compiles everything into an internal representation (this is how a single compiler can support C, C++, Fortran, Java, etc., and why gcc now stands for "GNU Compiler Collection" instead of "GNU C Compiler"). What we decided was that if there were some way to get gcc's language frontends to poot out the internal representation in some nice, incrementally-supportable form, then we could effectively put out "binaries" which could be retargeted on any system. We never really took it anywhere, though. It might be a nice feature to add into gcc 3 now that it's nearing some semblance of release.

Airship/airplane hybrid: Hm, my gut feeling (bearing in mind that I'm a computer scientist, not a physicist) would be that the internal lift would end up cancelling out its effect with Bernoulli (since it'd be adding pressure to the top and removing from the bottom), and so the net change in lift would be 0. That's just my uninformed opinion, though.

Wrist computer: The big problem isn't heat, sweat, or comfort. The big problem with today's tech is battery life.

Key-chain USB credit-card adaptor: Even assuming you mean one of the newer credit cards with a chip on it, I don't think the credit card companies would go for it, since it puts too much trust in the client. And anyway, in order to authorize the new credit card number it'd probably have to go through a central authority anyway, and an SSL-enabled website is a better way in general. Plus, most Internet cafes don't like people plugging random stuff into their ports (most of the ones I've been to don't even give you physical access to anything other than the mouse and keyboard).
--
"Is not a quine" is not a quine.
I have a master's degree in science!

[ Hug Your Trikuare ]
[ Parent ]

GUIs disrupt user activity (3.66 / 3) (#87)
by Steeltoe on Mon Sep 10, 2001 at 05:52:29 AM EST

One really big annoyance of modern GUIs is that they show no regard for what the user is currently doing. Let's say you are writing a memo, and new mail arrives in your mailbox. However, as you are watching the keyboard or a paper you are copying from, you don't see the pop-up. Suddenly, you have lost half your memo because you typed it into the pop-up window. Other times, I feel like I have to fight the other applications for focus in order to get some work done. Multitasking is a joke on GUI OSes.

GUIs should NEVER disrupt what the user is currently doing. It's really amazing that this hasn't been fixed yet. We have the taskbar; something similar could be used for pop-ups too: a place where they can pop up without shifting focus, giving a good overview of what came first and from where, preferably with a button to get more details and help. This also means that newly loaded GUI applications should not steal the user's focus either.

As with everything in a modern GUI-OS/Window Manager, this should be standardized. I'm sick & tired of all the bastardized pop-ups you can get, and I'm sure programmers are getting tired of reinventing the wheel over and over again too.
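A minimal Python sketch of the policy (not of any particular toolkit's API - the class and its methods are made up): applications post notifications into a queue, and nothing ever takes focus away from the document being worked on:

    import time
    from collections import deque

    class NotificationArea:
        # Pop-ups go into a queue the user reads when ready; nothing ever takes
        # keyboard focus away from the document being worked on.

        def __init__(self):
            self.pending = deque()

        def post(self, source, summary, details=""):
            # What applications would call instead of raising a focused dialog.
            self.pending.append({"when": time.time(), "source": source,
                                 "summary": summary, "details": details})

        def review(self):
            # Only runs when the user explicitly asks to see what came in,
            # oldest first, so arrival order is preserved.
            while self.pending:
                note = self.pending.popleft()
                yield f"[{note['source']}] {note['summary']}"

    area = NotificationArea()
    area.post("mailer", "New mail has arrived")
    area.post("updater", "3 packages can be upgraded")
    for line in area.review():
        print(line)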

- Steeltoe
Explore the Art of Living

I want to create a computer interface ... (5.00 / 1) (#104)
by drivers on Mon Sep 10, 2001 at 11:16:34 AM EST

Since I read Jef Raskin's "The Humane Interface" I've thought of making a new computer interface based on the principles from the book. The computer interrupting what you are doing is definitely something I would not allow. I haven't designed everything that it should be able to do yet, but I know it will not have:
icons for commands
filenames
CAPS LOCK
"applications" as such. (all commands will be available at all times)
"modes". The computer will always do the same thing. Type an "h", an h appears on the screen at the cursor.

Some things I want it to have:
universal undo and redo. (and somehow always be able to see what the undo and redo command will do, if it is a command)
incremental search forward and backward (like emacs). I'm thinking of perhaps using the ALT keys. For example, if you want to search forward, hold down the right ALT key and start typing. Backwards is left ALT. To "search again", just press and release the ALT key again. (Leap keys; see the sketch below.)
Of course all these ideas are straight out of the book. The interesting part will be defining what the user should be able to do in the system and making it as easy as possible.
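Here is a rough Python sketch of the leap-style search behaviour, with the buffer and key handling simplified away; the function name and signature are invented for illustration. As each character is typed, the pattern grows and the leap is re-run from the original cursor:

    def leap(buffer, cursor, pattern, forward=True):
        # Leap-style incremental search: return the position of the next match
        # of `pattern` from `cursor`, or leave the cursor alone if there is none.
        if forward:
            hit = buffer.find(pattern, cursor + 1)
        else:
            hit = buffer.rfind(pattern, 0, cursor)
        return hit if hit != -1 else cursor

    text = "the quick brown fox jumps over the lazy dog"
    pos = leap(text, 0, "the")                   # leap forward to the second "the"
    pos = leap(text, pos, "the", forward=False)  # leap back to the first one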

A lot of people think easy means "dumbing down". On the contrary, I think it would be cool to design such a consistent ("Humane") interface while still exposing as much of the computer system (hardware and/or software) as possible, without too much abstraction.



[ Parent ]
Prior work (none / 0) (#118)
by fluffy grue on Tue Sep 11, 2001 at 01:26:27 AM EST

The interface you describe sounds a lot like Emacs. :)
--
"Is not a quine" is not a quine.
I have a master's degree in science!

[ Hug Your Trikuare ]
[ Parent ]

Tried Linux? (none / 0) (#111)
by Dlugar on Mon Sep 10, 2001 at 04:53:39 PM EST

Just about every Window Manager for Linux (including the one I'm using right now, Enlightenment) has an option to have pop-up windows not get the focus. It even has an "exception" to this rule that if the parent window is focused, then the pop-up window does get the focus after all. That's the configuration I like best.

Dlugar

[ Parent ]
Mac OS pre-X (none / 0) (#139)
by scruffyMark on Tue Sep 18, 2001 at 10:45:58 PM EST

The Mac OS up to OS 9 is beautiful in this way - if an application that is not the foreground app shows a popup window, or for whatever reason wants your attention, here is what happens:
  • There is a single beep. No more.
  • The application's icon flashes in the application menu (top right of the screen).
I used to get furious with Windows every time I had to use it, because of the bloody popup windows that would appear to tell me the most trivial things about an app I wasn't even using.

Now, with OS X, Mac has copied this wretched behaviour - new windows, popup or otherwise, always become foreground windows. Apps that you are not currently using interrupt your work. OS X may be more powerful and stable, but it is less usable (in my limited experience, still the most usable out there, but a step back from OS 9).

[ Parent ]

Exactly... (none / 0) (#142)
by lithmonkey on Thu Sep 20, 2001 at 05:39:09 PM EST

...the problem I had with WinAmp and ShoutCast. Between tracks, WinAmp had to reconnect to the streaming server. This would cause a window to pop up with "connecting to such and such... blah blah." Totally useless, and since I use shortcuts in Photoshop, whenever it would pop up I'd hit a button and, not realizing what was going on, I'd bring up the tracklist or something like that. I made a comment about it in WinAmp's "suggestions" forum and they fixed it! I do believe that's the first suggestion WinAmp took and implemented.

To everyone who uses winamp: You're welcome. :)

[ Parent ]
Mute Button for my ears (3.50 / 2) (#101)
by MicroBerto on Mon Sep 10, 2001 at 10:18:16 AM EST

I'm still looking for a biomedical device that can be put into my ears so that I can simply push a button, and I will not hear anything at all - a mute button!

This way, I can study ANYWHERE, and when I'm married, I can just shut the wifey off after a hard/hungover day of work :-)

Berto
- GAIM: MicroBerto
Bertoline - My comic strip

A bit modified (none / 0) (#110)
by akharon on Mon Sep 10, 2001 at 03:29:29 PM EST

What I'd like is something I can wear that blocks out certain pitches. I have mildly muffled hearing (it varies by day, too), so when I'm in a large group of people talking, I can't make out what anyone is saying. If I could block everything out but the pitch the person I'm listening to speaks at, that would make it much easier. I realize this is a pretty crude method, but I don't imagine it would be terribly difficult to implement.

[ Parent ]
Already being done and surpassed (none / 0) (#112)
by arjan de lumens on Mon Sep 10, 2001 at 05:14:29 PM EST

at least in research labs, or so it seems - check out this.

[ Parent ]
Earplug, noise, and headphones (none / 0) (#133)
by pin0cchio on Sun Sep 16, 2001 at 10:50:47 PM EST

I'm still looking for a biomedical device that can be put into my ears so that I can simply push a button, and I will not hear anything at all - a mute button!

Put earplugs in your ears, and then play brown noise over headphones. You will hear nothing.


lj65
[ Parent ]
Pain based User Interface (3.00 / 1) (#137)
by Scrymarch on Tue Sep 18, 2001 at 07:07:29 AM EST

A friend and I came up with this one a while ago. Harness one of the strongest learning mechanisms humans have - the avoidance of pain - for learning computer interfaces. Still design your interface well, a la Nielsen, Norman et al.; just use pain to reinforce the foolishness of using parts of it a certain way.

Another idea for compression (none / 0) (#140)
by Filip on Thu Sep 20, 2001 at 05:39:22 AM EST

I got this idea when I read about PI being illegal due to the DMCA. Someone had calculated how far into PI you'd have to go to find DeCSS in tar.gz format.

Why not store compressed files as how far into an irrational number they start (and which irrational number, too, BTW)? The files would have to be compressed by normal means first, so that there'd be as short a sequence as possible to match. The distribution format would look like:
PI 1896324649873267:3487346
Looks convenient, doesn't it? :)

I realize that one would need a pretty silly precision on a couple of irrational numbers in order to store software of any significant size in them. Still, large software packages could be split into chunks, which would then be individually matched.

The idea requires that the irrational numbers aren't stored, but calculated. Otherwise we'd have to distribute GBs of irrational digits in order to mine MBs of rational data out of them.

Talk about irrational ideas!

/Filip


-- I'm just a figment of your imagination.
Compression (none / 0) (#141)
by gjbloom on Thu Sep 20, 2001 at 10:09:13 AM EST

Yeah, you could use an offset and length to identify a patch of a pseudo-random digit string to "compress" something, but, as it turns out, the number of bits needed to express the offset and length is, on average, at least as large as what you're trying to compress.

To see that this is true, imagine we're trying to compress a single byte of data by giving the offset into a random byte stream where that byte is found. Each byte in the stream has a 1/256 chance of matching. The probability of finding a match within the first two bytes is about 2/256. I think you can see where this is headed. Even after looking at the first 256 bytes of the random stream, there is only about a 63% chance that the byte you're finding an offset for has turned up; the probability only approaches 1 as you search further. That means you'd typically need more than one byte of offset to "compress" the byte. And, in a real application, you would also have to record how many bytes you're compressing.

This is akin to using "God's Dictionary" to compress things. In theory, there will only be a finite number of documents that anyone in the universe will ever want to compress. Since an all-knowing being can know every document that will ever be compressed, it can assign a serial number to each one. To uncompress the document, you'd just hand over your serial number and get back your original pile of bits. This algorithm would compress everything, even the longest video, down to something in the neighborhood of 64 bits. The only tricky bit is supplying the all-knowing being who is content to handle all your requests for compression and decompression.
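A quick Python simulation of that counting argument, assuming the "irrational number" behaves like a uniform random byte stream (the digits of PI are believed, though not proven, to behave this way); the stream length and targets are arbitrary:

    import random

    random.seed(1)

    def first_offset(target, stream_len=1 << 20):
        # Offset of the first match of `target` in a simulated random byte stream.
        stream = bytes(random.randrange(256) for _ in range(stream_len))
        return stream.find(target)

    for n in (1, 2):
        target = bytes(random.randrange(256) for _ in range(n))
        offset = first_offset(target)
        if offset < 0:
            print(f"{n}-byte target not found in the first {1 << 20} bytes")
        else:
            # Recording the offset takes about as many bits as the data itself:
            # the first match of an n-byte string typically sits around 256**n bytes in.
            print(f"{n}-byte target first found at offset {offset}, "
                  f"which takes ~{offset.bit_length()} bits to write down")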

[ Parent ]

Indeed, though... (none / 0) (#143)
by Filip on Fri Sep 21, 2001 at 06:10:57 AM EST

...I'd go looking for a match in more than one byte stream. What are the odds if I have a large number of irrational numbers that I look through in parallel?

This is a variation of God's Dictionary, though the sentient being is not God but the compressor, with the aid of his/her 'puter. And though the compressor isn't all-knowing, s/he is an investigating being.

Still, if the compression turns out to be ineffective, there is always the choice not to use this method, or to alter the bytestream that is to be compressed (using methods like Caesar's cipher) and make another go.

In the previous post I stated that someone found DeCSS in PI; it seems I was wrong. It was in a prime number. (URL: http://primes.utm.edu/curios/page.php?number_id=953 )

So, even though it appears to be folly, it can be done (though I realize finding a match in an irrational number is probably a bit harder than turning an existing number into a prime and then distributing the index of that prime).

So here is the revised idea:
Treat the program, already compressed by normal means, as a number. Find the closest prime. Then find out how many smaller primes there are, and how far to offset from that prime to the program-number.

It seems to me that this method is guaranteed to give a reduction in size.

It might be possible to compress these two numbers again (by normal means) after the prime compression. If so, it may be possible to prime-compress them again.
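As a sanity check on the sizes involved, here is a rough Python estimate using the prime number theorem (the number of primes below N is approximately N / ln N); the file sizes are arbitrary examples:

    import math

    # How many bits "the k-th prime, plus a small offset" really takes for a
    # file of `file_bits` bits treated as one big number N ~ 2**file_bits.
    for file_bits in (1_000, 1_000_000, 8_000_000):
        ln_n = file_bits * math.log(2)             # ln(N) for N ~ 2**file_bits
        index_bits = file_bits - math.log2(ln_n)   # bits in pi(N) ~ N / ln N
        print(f"{file_bits:>9}-bit file -> the prime's index alone still needs "
              f"~{index_bits:,.0f} bits (a saving of ~{math.log2(ln_n):.0f} bits)")

Since the average gap between primes near N is also about ln N, the offset to the nearest prime costs roughly the same handful of bits the index saved, quite apart from the fact that counting the primes below a number that large is not remotely feasible.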

/Filip
-- I'm just a figment of your imagination.
[ Parent ]

A related scheme used in 8-bit micro games (none / 0) (#144)
by simon farnz on Fri Sep 21, 2001 at 06:26:11 AM EST

Although, as other posters have pointed out, this scheme is rather impractical, 8-bit micro games occasionally used a related trick to store game data: it would be kept as a sequence of addresses and sizes. Now that we have more than 32k of program memory, this is no longer needed.
--
If guns are outlawed, only outlaws have guns
[ Parent ]