Kuro5hin.org: technology and culture, from the trenches

Dream Tools

By QuantumG in Op-Ed
Thu May 16, 2002 at 03:51:38 PM EST
Tags: Software (all tags)

This is a collection of thoughts I have had on compilers, re-engineering and debugging tools for software engineering. None of these ideas is meant to be plausible; one could say this is a foray into the realm of science fiction, although certainly not "popular" science fiction. I think there is some insight here as to the state of the art in computer science and software development. If we had these tools, just think of how productive we all could be.


As I write this the Perl language is at version 5.6, and yet a large amount of Perl code written in the days of 4.x will still run happily with little to no modification. This is a tribute to the concept of backwards compatibility, something the mongers of Perl honour with reverence. But are there alternatives to backwards compatibility? The most obvious alternative is to throw away old code. Another is to not update your compiler/interpreter: one can happily use Perl 3 today, as long as one does not want the features and support of Perl 5.

It is unfortunate that sometimes languages die. The community around the Perl language is quite strong, so this may not apply, but in the case of small or application specific languages, a lack of support can lead to programmers abandoning the language. If one has a lot of money invested in source code written in this language it can be difficult or expensive to find programmers able to maintain or extend this source code.

A solution to both these problems presents itself in source-to-source translation. A program written in Perl 3 can be translated to a program written in Perl 5 with an automatic translator. A program written in an arcane or unsupported language (including assembler languages) can be translated to a language that is cheaper to maintain. The translator is likely to be somewhat limited and will probably have to be hand coded, although tools like compiler compilers can help. This leads me to my first dream tool:
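In miniature, such a translator might start as nothing more than rewrite rules over text. A toy sketch (the Perl idiom shown is illustrative; real translators work on a parsed representation, not regexes):

```python
import re

# Toy source-to-source translator: rewrites one dated idiom into a newer one.
# A real translator parses the program; a single regex pass is only a sketch.
def translate(source):
    # Perl 4 used an apostrophe as the package separator: &Pkg'sub(...)
    # Perl 5 prefers Pkg::sub(...). (One illustrative rule, not a full list.)
    return re.sub(r"&(\w+)'(\w+)", r"\1::\2", source)

print(translate("&Text'wrap($para);"))  # Text::wrap($para);
```

A usable tool would accumulate many such rules, and quickly hit cases where only a full parse of the source language can decide whether a rewrite is safe.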

readcode is a (non-existing) program which extracts the meaning of a program given a specification for the language in which it is written. This program requires some explanation, so allow me to digress for a moment and define the acronym YINACC: Yacc Is Not A Compiler Compiler. At best, Yacc is a parser compiler. It takes a representation of Backus-Naur Form plus C code and generates a table-based parser. The parser depends on a scanner to break the input stream up into tokens; the program lex is usually used to make the scanner. Other tools exist that solve some of Yacc's problems: non-LALR and ambiguous grammars, EBNF notation, languages other than C. All of these have been an issue to me in the past, but the latter is, I think, most telling.

A Yacc file is essentially a declaration of the syntax of a programming language using a set of rules and alternatives. The Yacc engine does some magic to transform the grammar into something that can be efficiently parsed and outputs a finite state machine. For each rule in the file the programmer can attach "side effects" which are executed when the rule is matched. The side effects essentially define the semantics of the language (in the case of a one pass compiler) or build some internal representation which is then transformed by the compiler proper to turn that input program into something executable by a machine.
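The rule-plus-side-effect pattern can be sketched without Yacc itself: in a hand-written recursive-descent parser, each function mirrors a grammar rule and its return value plays the role of the attached action, giving a one-pass evaluator (the simplest kind of "compiler"). A sketch, not generated parser code:

```python
# Each parse function mirrors a grammar rule; the value it computes is the
# rule's "side effect" -- here, evaluating arithmetic in a single pass.
def parse_expr(tokens):
    # expr : term (('+'|'-') term)*
    value, tokens = parse_term(tokens)
    while tokens and tokens[0] in "+-":
        op, rest = tokens[0], tokens[1:]
        rhs, tokens = parse_term(rest)
        value = value + rhs if op == "+" else value - rhs
    return value, tokens

def parse_term(tokens):
    # term : NUMBER
    return int(tokens[0]), tokens[1:]

def evaluate(text):
    value, rest = parse_expr(text.split())
    assert not rest, "trailing tokens"
    return value

print(evaluate("1 + 2 - 3 + 10"))  # 10
```

A multi-pass compiler would instead have the actions build a syntax tree, as the article describes, and transform that tree later.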

It is absolutely true that some compilers (if not all) have no real "understanding" of what it is the program means. However, some attempts at specification based compilation (such as the New Jersey Machine Code Toolkit and the related compiler projects, and to a lesser degree, the Gnu Compiler Collection) have blurred the truth of this assertion. They've done this by adding to the grammar file a specification of the semantics of the language as well as the syntax. One could say the specification tells the compiler how to read a given source file. The compiler forms an internal understanding of the program and can then manipulate it to perform sophisticated optimisations and generate outputs.

This front end of a compiler on steroids has been called a "fact extractor" by some in the reverse engineering community. Although language specific, a very promising project in this area is CPPX, a fact extractor for the C++ language based on GCC. There are a number of things wrong with CPPX, not the least of which is that it extracts facts from pre-processed C++ code. Any facts that might be gathered from preprocessing directives are lost, as are formatting and comments. For a compiler this is inconsequential, but other reverse engineering applications require this kind of information, so CPPX isn't much use for them.

Surely there is a better language than C to specify the semantics of a programming language, but if we are to replace C in specification files, what should we replace it with? I think it is obvious that a declarative programming language is what we need, as clearly we are declaring the semantics of the language. This is, however, deceptive. The apparent completeness here hides some unhappy truths about information loss. True, we can define the semantics of any language by defining the exact untyped lambda calculus which, when instantiated and evaluated, will yield the correct result, but how much information about the source language are we losing? A language like Java hides many important semantics in its superficially simple syntax. The constraints of the type system, for example, vastly outweigh the imperative object-oriented execution. Are we to add these constraints to the specification in this low-level format also?
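In the very small, "declaring the semantics" can mean the semantics live in a data structure rather than in interpreter code. A toy illustration (the operation table and mini instruction format are invented for this sketch):

```python
# A declarative "semantics specification": the operations are data, not code
# baked into the interpreter. Swapping the table swaps the language.
SPEC = {
    "add": lambda a, b: a + b,
    "mul": lambda a, b: a * b,
}

def run(program, env):
    # program: a list of (op, dest, src1, src2) tuples over named variables
    for op, dest, a, b in program:
        env[dest] = SPEC[op](env[a], env[b])
    return env

env = run([("add", "t", "x", "y"), ("mul", "z", "t", "x")], {"x": 2, "y": 3})
print(env["z"])  # 10
```

The article's point survives even here: nothing in this table captures a type system, evaluation order constraints, or any of the richer semantics a language like Java hides in its syntax.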

lang2lang is a (non-existing) program which translates any arbitrary computer language to any other computer language, maintaining comments, formatting and symbols where-ever possible. There are some things of importance to note here:

  • the tool does not discriminate between languages. If you give procedural code (say, C) to the tool and request a functional program be generated in some language such as Haskell, the tool is expected to be able to do it.
  • the tool is good for reverse as well as forward engineering. Again, if I give procedural code and ask for object oriented code (say, C to Java) I will expect to get some reasonable clustering of procedures into classes.
  • the tool generates good code. What is "good code"? Is it objectively definable, or is good code purely a subjective concept? Below I will have more to say on code quality, but for the moment, let's just say that we can specify what good code is in some given language and the tool is expected to meet these requirements.
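The first bullet's procedural-to-functional request, shrunk to a single idiom: the same accumulation written as an explicit loop and as a fold. A hypothetical lang2lang would have to discover correspondences like this one automatically, at whole-program scale:

```python
from functools import reduce

# Procedural form: an explicit loop mutating an accumulator.
def total_loop(xs):
    acc = 0
    for x in xs:
        acc = acc + x
    return acc

# Functional form: the same computation expressed as a fold (reduce).
def total_fold(xs):
    return reduce(lambda acc, x: acc + x, xs, 0)

print(total_loop([1, 2, 3]), total_fold([1, 2, 3]))  # 6 6
```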

One could imagine the tool being used in more than the situations outlined above. One can even imagine it being used on a daily basis. Perhaps I have written a lot of code and it occurs to me that a different language would have been more appropriate. Today, changing languages in mid-project is a significant investment and the pay-off is anything but measurable. Or perhaps I simply dislike, or don't know, a language in which I have acquired a large body of code.

coderate is a (non-existing) program which evaluates the quality of the source code of a program written in an arbitrary computer language. What is code quality? I would define code quality as the degree to which the source code of a program conveys the meaning/functionality/semantics to a reader of that source code. Some people define code quality as an inverse function to the amount of time it takes for a skilled programmer who is unfamiliar with the source code to extend or fix a flaw in the program. There are other definitions, some objective, some subjective, but one can envision the possibility that all are somehow machine describable. That is, there is some way that each of us with a concept of "good code" can write a specification for at least some of what we are looking for in a well written program.
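One machine-describable fragment of "good code" might be as crude as comment density and identifier length. A toy scorer under those (debatable) assumptions; the two metrics and their equal weighting are invented for illustration:

```python
import re

# Toy code-quality score: comment density plus average identifier length.
# Both metrics, and the weighting, are arbitrary stand-ins for a real,
# user-specifiable definition of quality.
def coderate(source):
    lines = [l for l in source.splitlines() if l.strip()]
    comment_ratio = sum(1 for l in lines if l.lstrip().startswith("#")) / len(lines)
    idents = re.findall(r"[A-Za-z_]\w*", source)
    avg_len = sum(map(len, idents)) / len(idents)
    return comment_ratio + avg_len / 10

terse = "def f(a,b):\n    return a+b"
chatty = "# add two numbers\ndef add_numbers(first, second):\n    return first + second"
print(coderate(chatty) > coderate(terse))  # True
```

The point is not that these metrics are right, but that each of us could write down our own table of them, which is exactly the "specifiable" property the article asks for.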

Some programs like this tool already exist, although they are largely language specific and not specifiable. As we have already stated, opinions on code quality differ. The writer of lint, a static analyser of C programs, may have very different opinions from mine as to what makes good C code, and he/she appears to have no opinion on the quality of Java programs.

The use of this tool is more than academic. When making source code acquisitions (or IP-centric mergers) a company will often evaluate a program solely on its functionality. This can lead to disproportionate estimations of the acquisition's value when more money must be spent later adapting the source code to the needs of its new owner.

improvecode is a (non-existing) program which performs source to source translations to improve code quality. As a logical next step, how hard is it to transform code so as to maintain the same semantic meaning whilst maximising some specified metric? Compiler developers will immediately recognise this as the core of an optimising compiler. However, unlike an optimising compiler (most of which perform only local, conservative optimisations) the tool will understand the code to a degree such that it can perform not only interprocedural analysis but actual algorithmic complexity analysis.
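A local, conservative optimisation of the kind compilers already do, constant folding, fits in a few lines over Python's own syntax tree (only addition is folded here, as a sketch). The algorithm-level rewrites improvecode would need are far beyond this sort of pattern match:

```python
import ast

# Constant folding: a local, conservative source-to-source improvement.
# Only Add is handled; a real folder covers all operators and guards
# against overflow/float corner cases.
class Folder(ast.NodeTransformer):
    def visit_BinOp(self, node):
        self.generic_visit(node)  # fold children first, bottom-up
        if isinstance(node.left, ast.Constant) and isinstance(node.right, ast.Constant):
            if isinstance(node.op, ast.Add):
                return ast.copy_location(ast.Constant(node.left.value + node.right.value), node)
        return node

def fold(source):
    tree = Folder().visit(ast.parse(source))
    ast.fix_missing_locations(tree)
    return ast.unparse(tree)

print(fold("x = (1 + 2) + 3"))  # x = 6
```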

Suppose you write a program which, for some reason or another, performs a naive search in a linked list for an element and returns the result. The complexity of using this data structure is suboptimal: linear in the number of elements to be searched. A better data structure would be a tree or a hash table. These are the kinds of concerns that typically dog programmers who care about performance. To perform the optimisation a programmer must know something about the likely input to the program, i.e. the number of elements likely to be in the list, because at small numbers of elements a linked list may be more efficient than a tree or a hash table. If we can provide this kind of information to our tool, it can perform the optimisation for us.
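The trade-off in miniature: both functions below answer the same membership question, but the hash-based one pays its O(n) cost once up front and then answers each query in roughly constant time. Where the crossover lies depends on the workload, which is exactly the input information the article says the tool would need:

```python
# Naive search: O(n) per query, like scanning a linked list each time.
def contains_scan(items, target):
    for item in items:
        if item == target:
            return True
    return False

# Hash-based search: build once, then roughly O(1) per query.
def make_lookup(items):
    table = set(items)          # one O(n) pass up front
    return lambda target: target in table

items = list(range(1000))
lookup = make_lookup(items)
print(contains_scan(items, 999), lookup(999), lookup(-1))  # True True False
```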

codequery is a (non-existing) program which answers questions about source code. The field of program understanding is a reverse engineering subject interested in developing techniques to help programmers understand source code faster or better. It draws from many fields including program visualisation, compilation and slicing technologies, debuggers and profilers. Although most of these tools are rarely used by programmers, their value is enormous to anyone who has inherited a large code base or is working on something which is simply too big to keep all in one's head.

A field that has yet to be seriously incorporated into program understanding is computer reasoning. Indeed, a similar field to program understanding, text understanding, has been attacking the same problems from a different perspective. Text understanding has a much loftier goal than program understanding. Rather than supplying tools to aid humans to understand content, text understanding aims to write a program that, in some sense, itself understands the content.

For example, a text understanding program may be given a novel to read. A series of questions may then be queried of the novel. Who did Alice follow into Wonderland? Why did Alice accept tea from the Mad Hatter? When did Alice's discontent with the inverted value system of Wonderland first become apparent? To answer all these questions the program truly must understand both the book and the question and possess some vast array of common (and not so common) knowledge.

The major barriers to text understanding today lie in the difficulties of parsing the English language: ambiguity, missing information about inflection, tone, etc. A vast array of techniques have been developed to overcome these problems (some of which, such as probabilistic parsing, could be applied to programming languages with interesting results), but perhaps the techniques already developed can be applied to languages without these problems. In fact, suggestions such as this have been made in the text understanding community and have led to constructed natural languages such as Lojban, which has an unambiguous grammar, phonetic spelling, regular rules, and claims to be culturally neutral.

However this raises another problem: there's nothing written in Lojban (yet), at least not enough to warrant the construction of a text understanding program focused on it. Languages that are easily parsable and do have a lot written in them are programming languages. Once we apply the principles of text understanding to programming languages we get something truly interesting. Perhaps we can even teach a program to code.

code2spec is a (non-existing) program which extracts a high level formal specification from source code. Again, the tool is language independent. I should state up front that this dream tool is in some ways already a reality: Software Migrations Ltd will, for a fee, take your Assembler, C or COBOL code and transform it into a Wide Spectrum Language, which can then be abstracted into a formal specification. Formal methods people may be a little confused at this point. I seem to be advocating a backwards process: writing the specification after writing the code is bad enough, so automatically generating it must be tantamount to heresy. The use I see for such a tool is simple: summaries. Any tool which can take a million lines of code and generate a smaller specification stating what the program is doing is an aid to program understanding. The reverse engineering step tells me what the program is doing. If I modify the specification I expect to be able to "compile" it back down to code. Using such a tool I could fix a bug in a program without ever writing a line of code.
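The "summaries" use case has a shallow but real analogue today: walk a syntax tree and keep only interfaces and documentation. A sketch over Python source; a real code2spec would have to abstract behaviour, not just signatures:

```python
import ast

# Extract a crude "specification": function names, parameters, docstrings.
# This summarises interface, not behaviour -- a small step toward code2spec.
def code2spec(source):
    spec = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            params = ", ".join(a.arg for a in node.args.args)
            doc = ast.get_docstring(node) or "(undocumented)"
            spec.append(f"{node.name}({params}): {doc}")
    return spec

src = '''
def area(width, height):
    """Return the area of a rectangle."""
    return width * height
'''
print(code2spec(src))  # ['area(width, height): Return the area of a rectangle.']
```

Note the asymmetry the article ends on: this direction is easy precisely because it throws information away; "compiling" the specification back down is the hard half.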

code4me is a (non-existing) program which writes code, fixes bugs, and bakes cookies for you. Ok, I'm lying about the cookies, but that is about the level of credence given to such an idea, and not without just cause. To be a good (or even mediocre) programmer a program would have to be able to read and understand not only source code but also the English language. Or would it? Using the previous tool we expect to be able to hand the program some automatically generated, useful information. Could we not add more information in machine readable formats? Be they higher or lower level than specifications, we can babysit our tool until it produces acceptable code. Using our coderate tool we can even automate the babysitting.

At this point a lot of people are fearing for their jobs. Is history doomed to repeat itself? As factory workers were (supposedly) ousted by robot workers, are programmers to be ousted by their programs? To answer this question I will turn to the old standby: creativity. Too often I hear that programmers are artists, yet as artists we spend most of our day hacking out code to do the same old things. Code reuse, dynamic programming, new programming languages: all are symptoms of us trying to throw off the shackles of actually having to program. If machines can write the programs for us, then what will we do, other than tell them what to write? We've answered our own question. What we'll do is tell them what to write, in a super-declarative fashion.

I've largely focused on source transformations, compilers, and what I suppose someone might call AI. This is because these are the thoughts that largely dominate my days holed up, as I am, at the Centre for Software Maintenance at the University of Queensland. There I am working on a decompiler: a real life dream tool that gives the user a high level source view and navigation of a program's binary, i.e. one that has been compiled and for which the source code is no longer available. The problem is not solvable in the general case (it is equivalent to the halting problem), but that's ok: a partial solution is better than no solution at all.


Dream Tools | 169 comments (97 topical, 72 editorial, 0 hidden)
Fearing for jobs. (4.60 / 5) (#2)
by Stealth Tuna on Wed May 15, 2002 at 05:57:23 AM EST

I always used to tease CS majors by telling them their days were numbered: soon someone would write a program that writes programs, effectively obsoleting their own profession.

I managed to get a few good trolls at the uni bar with this story until someone came up with what I regard as the definitive answer: even if such a program is created, someone must still hand it a specification of the program to be created. End users aren't able to do this, unless the program can somehow read minds, and even then the indecisiveness of clients is a notorious factor. Therefore there will always be something akin to a programmer/analyst.

What this will do is dramatically cut down on development time and inertia, allowing for much more rapid prototype/test cycles and ultimately (hopefully) better, more mature, software.

Dumbing down (5.00 / 2) (#5)
by LQ on Wed May 15, 2002 at 07:43:19 AM EST

There have been advances in programming tools to make programming easier. They suck up resources to add increasing levels of abstraction, but most software is still rubbish. There is still a shortfall in the number of people available with the skills to fit together usable systems.

You can have all sorts of clever tools but there remains the need for somebody to devise and specify the requirement. It's no good having a tool that can understand natural language specs if you can't express clearly what it is you want. The person describing the desired system is a programmer: providing a set of instructions to a computer.

[ Parent ]

Dumbing down, or building up? (none / 0) (#154)
by piman on Sun May 19, 2002 at 06:49:07 PM EST

I assume you are referring to comprehensive IDEs like VC++, RAD environments like Delphi, and "easy" languages like PHP. Yes, in general, these produce poor quality programs, because people don't need to understand as much to use them.

But don't extrapolate from that that things will always be that way, or that higher levels of abstraction always produce poorer code. For example, people really didn't want to write machine code, so they wrote an assembler. Assembler proved too low level for some tasks, so along came compilers. Compilers are too generalized for some problem domains, so we get languages like Perl for text parsing, MAPLE for math, and so on. The fact that the highest levels of abstraction currently suck doesn't mean they always will - by definition, each level of abstraction will be less mature than the ones under it.

I would say that absolutely no one can fit together a "usable system" anymore, and I would question if anyone ever could. Building from your own logic gates to a computer just isn't possible anymore, because "usable system" has changed so much. Even back in the 50s and 60s, I doubt many individuals were capable of designing and building an entire system.

A question for people older than me: Was there the same kind of "stupid wussy programming" backlash against compilers when they were new, like there often seems to be now against IDEs or RAD environments?

[ Parent ]

Heh (none / 0) (#45)
by karb on Wed May 15, 2002 at 03:09:38 PM EST

Until they took their software engineering course and read "The Mythical Man-Month" by Frederick Brooks. (And it's been around for about 30 years, I believe)

What he said is that programmers spend at least 10% of their time designing. Hence, an order-of-magnitude increase in production speed is impossible.

Besides, we have better languages, better tools, better understanding, better education, better platforms, and I guarantee you we still have more programmers today than we did just 10 years ago.
Who is the geek who would risk his neck for his brother geek?
[ Parent ]

I say that we have fewer programmers than ever. (4.50 / 2) (#57)
by steveftoth on Wed May 15, 2002 at 04:08:03 PM EST

We have fewer programmers but more code monkeys than ever before. More people are churned through this machine of software development and none of them are actually programming, but rather re-implementing the same old tired rhetoric for the new machine/language.
Rather than moving forward with software development, we are running faster and faster in place.

[ Parent ]
Code Monkeys (none / 0) (#135)
by swr on Fri May 17, 2002 at 02:18:10 AM EST

More people are churned through this machine of software development and none of them are actually programming, but rather re-implementing the same old tired rhetoric for the new machine/language.

Earlier this afternoon I had one of the "code monkeys" who was working on his ASP + javascript stuff ask me how to parse a comma-delimited string with "java" (meaning javascript). Smelling something funny (how often do you have to parse a comma-delimited string in a web app?) I asked him to show me what he was doing.

It turns out, he was trying to reinvent two-dimensional arrays by using an array of comma-delimited strings.

Further examination of the problem at hand made it clear that what he really needed was an associative array of regular arrays. He was using two arrays: one with values that matched the "real" indexes, and another with the comma-delimited list, and the indexes in the two arrays had to line up.

When I said he really needed "an associative array of arrays" I thought his head would explode. I know many languages, but ASP and Javascript are not among them, so I had to spend a few minutes with Google before I was able to show him the Right Way to do what he needed. When I showed him how easy it was he was ecstatic. "Wow, javascript can do that? javascript is cool!"

The weird thing is, this guy seems to be able to read UML, and generally has no trouble navigating a relational database with over 100 tables. Go figure.
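The fix described above, in miniature (Python standing in for the JavaScript): parallel arrays with hand-aligned indexes replaced by one associative array whose values are regular arrays:

```python
# Parallel arrays: the indexes in the two lists must line up by hand,
# and the second list reinvents 2-D arrays as comma-delimited strings.
keys = ["alice", "bob"]
rows = ["1,2,3", "4,5"]

# The same data as an associative array of arrays: one structure,
# no comma-delimited strings, no index bookkeeping.
table = {k: [int(n) for n in row.split(",")] for k, row in zip(keys, rows)}
print(table["bob"])  # [4, 5]
```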

[ Parent ]
Computability (4.58 / 12) (#3)
by IwakuraLain on Wed May 15, 2002 at 06:29:39 AM EST

A lot of these tools are inherently infeasible, as they reduce to basic computability problems.

Get some books on Theoretical Computing, Complexity and Computability and you will see why most of this is impossible.
-- close the world, open the next

Re: Computability (5.00 / 4) (#46)
by Qarl on Wed May 15, 2002 at 03:13:18 PM EST

That's not entirely true. A program does not have to be 100% successful to be useful. Problems which are not computable in the general case may be computable in most specific cases, and in the few cases where you can't find a quick solution you can give up with an error message. Practically, I think most of the utilities listed above can be implemented well enough to be useful.
[ Parent ]
Which ones ? (4.75 / 4) (#59)
by Simon Kinahan on Wed May 15, 2002 at 04:12:40 PM EST

And why ? I'm not convinced any of these problems reduces to the halting problem, in spite of the attempts some have made at proof by repeated assertion.

Some of them are probably impossible, though not for that reason, but rather because they involve reintroducing information into the system that has been lost.


If you disagree, post, don't moderate
[ Parent ]

like type systems? (5.00 / 1) (#85)
by QuantumG on Wed May 15, 2002 at 07:58:30 PM EST

When you compile a typed language to machine code you lose typing information, right? Well, as demonstrated by Mycroft, type information can be recovered from machine code. It's still there, it just needs to be inferred. However, inevitably, there is going to be a loss of information at some point, and it isn't going to be recoverable, but that's why we have programmers.

I've mentioned specifications a fair bit. I think the problem with formal methods is basically that you're told to write this stringent document that is to be set in stone before you start coding, and yet we often don't know what we want from a program until it is partially written. Extending specifications to match implementations is error prone, so at least one other dream tool would be the ability to validate an implementation as conforming to a specification. Tools like this already exist and are getting better, but I still have to write that specification, and for something like Mozilla that's not going to be easy.

Which leads me to a question: is extracting a specification from code a forward or reverse engineering problem? If you take the software process view it's a reverse engineering problem, because you are going back to specifications from source. But if you take the informational point of view it is a forward engineering problem, because you are losing information about the code by abstracting it into a specification.
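A toy version of the type recovery mentioned here (Mycroft-style inference in spirit only; the mini instruction set and the two rules are invented for illustration): classify registers by how they are used.

```python
# Toy type recovery over an invented mini-ISA: a register that appears
# inside [brackets] is dereferenced, so it is inferred to be a pointer;
# registers used arithmetically default to int.
def infer_types(instructions):
    types = {}
    for instr in instructions:
        op, *args = instr.replace(",", "").split()
        if op == "load":                      # load dest, [addr]
            types[args[1].strip("[]")] = "pointer"
            types.setdefault(args[0], "int")
        elif op == "add":                     # add dest, src
            for r in args:
                types.setdefault(r, "int")
    return types

print(infer_types(["load r1, [r2]", "add r1, r3"]))
# {'r2': 'pointer', 'r1': 'int', 'r3': 'int'}
```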

Gun fire is the sound of freedom.
[ Parent ]
Reconstituting information (5.00 / 1) (#105)
by Simon Kinahan on Thu May 16, 2002 at 04:54:43 AM EST

In going from a lower level to a higher level language you may well be able to infer low level type information, such as what is an integer and what is a pointer, and maybe even the sizes of arrays and constitution of structures. What I doubt you could recover, unless the code was produced by some known compiler for the high level language, is any indication of the semantics behind the higher level constructs.

You can almost certainly work out from machine code that a given object consists of two strings and an integer, but what you won't be able to discover is that it's intended to represent someone's personal information, or that one string is their name and the other their address while the integer is their age. Those things are indicated by human conventions in higher level languages, such as variable naming and the use of encapsulation.


If you disagree, post, don't moderate
[ Parent ]

Usage of data (5.00 / 1) (#130)
by Cheetah on Thu May 16, 2002 at 05:51:43 PM EST

It seems to me that all data in a computer can eventually be traced back to two sources: the real world (i.e. input from a human, or perhaps some measurement device, e.g. a camera), and entropy (i.e. a random number generator).  Data stored on a disk is often an input to a program, but it had to come from somewhere, and that can be traced back to one of these two sources.

If a program has a structure with two strings and a number that represent personal information, then at some point it will either input into this structure or output its contents.  Inputs and outputs have to have defined meaning, or they are useless.  So you only need to see what the input source is giving (either by protocol definition, or by the program prompting the user), or what the output destination is expecting (again, protocol or prompt).  If a program is dealing with personal information, then it might dump out mailing labels, or ask the user to enter their information.

This kind of analysis is actually fairly simple, and is the basis for a lot of low level reverse engineering. You give a program some easily tracked input, see where it ends up, and then see what the program does with whatever place(s) your bit of data ends up.

In short, of course you can't extract semantics from the in-memory storage format alone. Semantics are inferred from what is actually there in memory and from what the program does to that bit of memory (i.e. the code that references it).
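The "easily tracked input" technique described above reads like taint propagation. A toy version, with the assignment-list representation invented for the sketch: mark one input source and watch which variables its data flows into.

```python
# Toy taint tracking: propagate a "came from input" mark through
# assignments, the low-level reverse engineering trick described above.
def track(assignments, tainted_source):
    tainted = {tainted_source}
    for dest, sources in assignments:   # (destination, [source variables])
        if any(s in tainted for s in sources):
            tainted.add(dest)
    return tainted

flows = [("name", ["stdin"]), ("label", ["name", "template"]), ("count", ["template"])]
print(sorted(track(flows, "stdin")))  # ['label', 'name', 'stdin']
```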

[ Parent ]

Don't agree (5.00 / 2) (#140)
by Simon Kinahan on Fri May 17, 2002 at 05:09:10 AM EST

Inputs and outputs have to have defined meaning, or they are useless. So you only need to see what the input source is giving (either by protocol definition, or by the program prompting the user), or what the output destination is expecting (again, protocol or prompt). If a program is dealing with personal information, then it might dump out mailing labels, or ask the user to enter their information.

This is where you go wrong, I think. I agree that a human being can reverse engineer the intent behind a piece of information in this way, and indeed using other tricks (we know what addresses look like, for instance).

I don't agree that there's any general way to program a computer to do it: it's a question of symbols and their referents. When we see particular symbols, they reach out into the world, through our minds, and refer to particular things, and it's this process that gives the symbols meaning. They borrow it from other things. There's no way to come up with a general formalism for meanings. So, since computers just manipulate symbols, there's no way for them to divine meaning.

You might be able to come up with a partial solution of some kind, based on heuristics about particular IO devices and the formatting of the information, but it would not be a general solution.


If you disagree, post, don't moderate
[ Parent ]

loss of information (5.00 / 1) (#146)
by kubalaa on Fri May 17, 2002 at 09:36:28 AM EST

Information is never lost, it's just stored at a higher level. But that's why extracting it is so hard, because usually that higher level is the "human culture/philosophy/abstract thought" level. That's the difference between the statement "compute the Fibonacci sequence" and the equivalent code; the Fibonacci sequence is a concept with a history and a great deal of knowledge surrounding it, while the code is just code which happens to produce some numbers. In this case, it's not too hard to match the two together, but there are more ambiguous cases; for example, where's the bug in this Haskell code?

k p d@(s:r) = u (o s) p d where
  u _ _ [] = []
  u c [] y@(a:r)
    | o a == c  = (k p r)
    | otherwise = k p y
  u c (w:j) d@(s:m)
    | o s == c  = (u c j (w s)) ++ (k p m)
    | otherwise = k p d

(I've simulated the computer's limited knowledge by taking away cultural-linguistic cues like real names and documentation.) It's not so easy to figure out what the heck this function is doing, much less use your knowledge of programming to deduce what it's supposed to do. But you expect a computer to do this?

[ Parent ]

Bug in your code. (none / 0) (#153)
by i on Sat May 18, 2002 at 12:44:23 PM EST

It doesn't compile. If you give a definition for 'o', or just its type, I think I'll find the bug.

and we have a contradicton according to our assumptions and the factor theorem

[ Parent ]
make something up (none / 0) (#163)
by kubalaa on Tue May 21, 2002 at 12:53:10 PM EST

It's a record selector.

[ Parent ]
Ok. (none / 0) (#165)
by i on Wed May 22, 2002 at 02:11:04 AM EST

First, k has wrong type. Assuming the following:

data A = A x
o (A x) = x

we have

k :: [A -> [A]] -> [A] -> [a]

which is obviously wrong. No wonder: it either returns a [] or concatenates two lists returned earlier.

Further, the == comparisons are only performed with equals and therefore redundant.

Further, there is a recursion over the second list (of type [A]) but k doesn't say what to do with the empty list.

What it's supposed to do is beyond me as it's garbled on so many levels.

and we have a contradicton according to our assumptions and the factor theorem

[ Parent ]

Not quite (none / 0) (#166)
by kubalaa on Wed May 22, 2002 at 08:12:56 AM EST

It's possible I made a mistake in the garbling, but the types should be
k :: [(A -> [A])] -> ([A] -> [A])
u :: x -> [(A -> [A])] -> ([A] -> [A])

[ Parent ]
I could recover this much. (none / 0) (#168)
by i on Wed May 22, 2002 at 09:15:26 AM EST

Also you probably wanted to compare adjacent list elements instead of same elements. When the list of functions is empty, k probably wants to leave just one of the adjacent elements that compare equal. Then the stuff gets too special-purpose to guess correctly. Why would anyone want a function of type A->[A]? Why would anyone want a list of such functions? What's the significance of comparison here? And so on. Too many unknowns. My AI is unable to grok it. Perhaps if you supply documentation, it will do better :)

and we have a contradiction according to our assumptions and the factor theorem

[ Parent ]
the "answer" (none / 0) (#169)
by kubalaa on Thu May 23, 2002 at 07:46:02 AM EST

You're right, it is too hard; that's the problem with functional programs: they're usually only mysterious at large scales. What it does, generically, is create a pipeline out of one-to-many functions, making sure every unique object goes through the entire pipeline once. Since new objects can be produced at any point in the pipeline, it has to catch them and feed them back through the beginning.

The bug is there should be an `a:' in the 4th line; this handles the case where the original object has reached the end of the pipeline and should be added to the results without further processing.

What it's used for is a plugin system for transforming files into rich objects. The one-to-many functions analyze and "embellish" file objects. To-many is necessary because many files (like mbox, or directories) actually represent many objects.
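The pipeline described above can be sketched imperatively. This is a hypothetical Python rendering of the idea as stated, not a translation of the original Haskell: `run_pipeline`, the toy stages, and the key-based identity test are all invented here for illustration.

```python
def run_pipeline(stages, items, key):
    """Feed items through a pipeline of one-to-many functions.

    Each stage maps one object to a list of objects.  Outputs that keep
    the input's key continue to the next stage; outputs with a new key
    are treated as newly discovered objects and are fed back through
    the pipeline from the first stage.  Each key is processed once.
    """
    results = []
    seen = set()
    work = list(items)
    while work:
        item = work.pop(0)
        if key(item) in seen:
            continue
        seen.add(key(item))
        survivors = [item]
        for stage in stages:
            next_survivors = []
            for obj in survivors:
                for out in stage(obj):
                    if key(out) == key(obj):
                        next_survivors.append(out)
                    else:
                        work.append(out)  # new object: back to the start
            survivors = next_survivors
        # Objects that reach the end of the pipeline become results
        # (the case the missing `a:` in the Haskell was meant to handle).
        results.extend(survivors)
    return results

# Toy stages: 'unpack' discovers new objects inside a container file,
# 'annotate' embellishes an object without changing its identity.
def unpack(obj):
    name, data = obj
    if name == "mbox":
        return [obj, ("msg1", "a"), ("msg2", "b")]
    return [obj]

def annotate(obj):
    name, data = obj
    return [(name, data + "!")]
```

With `key` picking out the name, an mbox is unpacked into messages, each of which is then sent back through the whole pipeline exactly once.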

[ Parent ]

One more dream tool (3.50 / 2) (#6)
by FredBloggs on Wed May 15, 2002 at 07:50:03 AM EST


bs4you (4.85 / 7) (#22)
by Sir Rastus Bear on Wed May 15, 2002 at 12:27:13 PM EST

bs4you is a (non-existing) program which posts meaningful, well-crafted and correctly-spelled comments to Internet-based discussion communities such as K5. In a nutshell, it reads articles, figures out what the author is talking about, and posts a trenchant response. These responses are consistently rated between 4 and 5 by the K5 community. Note that soon the K5 community will be entirely populated by bs4you-based bots rating up each other's comments, and the unsightly human population can then be eliminated.

The description of troll4you and porn4me is left as an exercise for the reader...

"It's the dog's fault, but she irrationally yells at me that I shouldn't use the wood chipper when I'm drunk."

porn4me (4.00 / 1) (#125)
by am3nhot3p on Thu May 16, 2002 at 04:54:23 PM EST

porn4me uses a list of 'TGP' sites to gather a list of free porn galleries currently available.

Each gallery is compared against a list of 'require' and 'exclude' expressions to determine whether the content is of interest to the owner.

Finally, the images from each selected gallery are downloaded into a directory. Images below a preset size are rejected. If a gallery contains less than a predefined number of good (i.e. big enough and successfully transferred) pictures, the gallery is aborted.

The URLs of visited galleries are stored in a history file to prevent duplicate visits.

porn4me provides a great service to the community by reducing incidence of RSI through deleterious repetitive right-click-save-picture actions.

Oh, and by the way, it already exists. It's actually called picbot, and I wrote it. I might change the name now, though!

[ Parent ]

Me too! (none / 0) (#129)
by Mr.Surly on Thu May 16, 2002 at 05:43:51 PM EST

Mine is called 'pornreaper'. It would search (predefined) newsgroups for binaries, ignore those with certain keywords, and then it would automatically re-construct multi-part files into one, and then save them into appropriate directories. Written using Perl/Mysql.

I haven't used it in over a year, but my friend still does.

[ Parent ]
So, ummm (none / 0) (#161)
by Sir Rastus Bear on Mon May 20, 2002 at 01:11:31 PM EST

Is this program available? Sounds like it would save me some serious time, although it might in fact increase my risk of RSI ... ;)

What does "TGP" stand for?
"It's the dog's fault, but she irrationally yells at me that I shouldn't use the wood chipper when I'm drunk."
[ Parent ]

New proposed tool: "Hacker" (5.00 / 7) (#23)
by avdi on Wed May 15, 2002 at 12:29:58 PM EST

hacker is an unkempt biped with a predilection for pizza, caffeine, and electronic devices.  Hacker can construct a fully-functional, debugged, tersely documented program to accomplish any well-defined task given minimal instruction and a steady supply of coffee.  Hacker is a versatile tool, being able to perform the function of all of the above-mentioned special-purpose tools, including optimization, translation of legacy code to new languages, answering arbitrary questions about a given codebase, and extracting metrics (although hacker may look at you funny for requesting this last). Compared to the other tools mentioned in this article, hacker has one great advantage: it exists in plentiful supply.

Now leave us, and take your fish with you. - Faramir
two types of hackers (none / 0) (#27)
by speek on Wed May 15, 2002 at 01:14:31 PM EST

One must distinguish between the two types of Hacker tool - the Smart Hacker and the Dumbass Hacker.

The Smart Hacker will spend much of its downtime designing and creating new tools for itself that increase its productivity significantly (as in orders of magnitude increases). This means your Smart Hacker tool will become more productive as times goes on.

The Dumbass Hacker (easily recognized by its master's degree in IT) will not create new tools, nor is it able to accept new tools as plugins, and the Dumbass Hacker's productivity will never increase (that is to say, it will stay at or near zero).

al queda is kicking themsleves for not knowing about the levees
[ Parent ]

"Dumbass Hacker" an oxymoron (none / 0) (#54)
by avdi on Wed May 15, 2002 at 04:05:09 PM EST

By the accepted nonderogatory definitions of "hacker", the "Dumbass Hacker" as you define it is an impossibility.  The term "hacker" implies a certain skill level, not just a profession.  If a programmer has the traits you describe under "Dumbass Hacker", they are not a hacker.  They are simply a bad or mediocre programmer.

Now leave us, and take your fish with you. - Faramir
[ Parent ]
Doing the impossible (5.00 / 4) (#53)
by ucblockhead on Wed May 15, 2002 at 04:01:10 PM EST

Convert a program in a procedural language to one in a functional language, and vice versa? Well, maybe... Do it and produce good code? Not in a million years...

At best, you'll take code in a functional language and create functional code for a procedural language. And that code is going to suck, because the language won't be meant to work that way. It's hammering a square peg into a round hole. I'm sure you could take a C program and do some line-by-line conversion to a Lisp program. It would probably even run. And it would probably be the worst piece of shit Lisp program ever written.

Improve code? Programmatically? Be real... At best, you can have things that refactor code, but they need human brains behind them. As for the mythical thing that codes itself, well, people have tried that before. There is a fundamental problem here in that the hardest part of programming is not coding, it is specifying exactly what you want to do. That is the hard thing to do, and that is exactly what a computer gives you little help with, since computers don't do "want".
This is k5. We're all tools - duxup

I don't think you've thought that through. (5.00 / 1) (#56)
by autonomous on Wed May 15, 2002 at 04:07:51 PM EST

Perhaps if you did a straight literal conversion between the programs you would end up with crap. I agree. However, why not make several passes? The only thing limiting how well you can translate between two formats for creating machine code is how much time you want to spend on the translation. I'm sure given enough processor power and scratch space, you could come up with some of the prettiest code writable. You point out that computers do not "want", so we can't expect good code from them; I think that is flawed. I mean, computers don't want to balance my resources, but my BSD does a pretty reasonable job of allocating resources and reclaiming them once a task is done. And it is only using a few simple rules. Isn't manipulation of resources what programming is all about?
-- Always remember you are nothing more than a collection of complementary chemicals worth not more than $5.00
[ Parent ]
the thing is... (5.00 / 2) (#60)
by ucblockhead on Wed May 15, 2002 at 04:16:27 PM EST

The thing is that this sort of thing requires intelligence, not processing power. Much of what "good code" is is using appropriate algorithms for the language and for the problem. That's very, very difficult. An algorithm that is fast in Lisp might be slow in C. This might matter, if speed is a concern, and it might not. Making it faster may result in harder-to-read code. Whether this is a problem may depend on how fast it needs to be. These are the sorts of decisions only people can make.

And if you really think it is only about a few simple rules, then I strongly suggest that you do it by hand a couple times. Try translating a significantly sized program from one language into another.
This is k5. We're all tools - duxup
[ Parent ]

Silly. (3.33 / 3) (#61)
by autonomous on Wed May 15, 2002 at 04:25:41 PM EST

For a long time there have been people saying, "It requires intelligence to foo", where foo is a task that appears to require intelligence to be executed successfully. For a long time, there have been people thumbing their noses at that, and creating programs using logical sidesteps, brute force, clever hacks, or combinations of all the above, which perform the task. Deep Blue broke chess, we've got quite skillful robotic drivers, robots that construct 3d models of what is pulled in via camera, programs to create paintings, poems, haiku. Don't be so silly as to assume programming takes intelligence. If it did, most of the people currently employed as programmers wouldn't be there.
-- Always remember you are nothing more than a collection of complementary chemicals worth not more than $5.00
[ Parent ]
uh....yeah.... (4.00 / 2) (#65)
by ucblockhead on Wed May 15, 2002 at 04:34:11 PM EST

Do yourself a favor, and take a class in AI.

Chess is a classic example of something where people underestimated the intelligence needed...and chess is easy compared to many other things. We still can't program something to play a good game of "go".

Anyway, you might want to investigate the Japanese "Fifth Generation" project, which was designed to get rid of the messy programmer by doing many of the things mentioned here. It failed. Badly.
This is k5. We're all tools - duxup
[ Parent ]

what takes intelligence (5.00 / 1) (#145)
by kubalaa on Fri May 17, 2002 at 09:18:16 AM EST

The problem of intelligence is precisely one of scope; how much information is in the form, and how much in the context? Chess has a limited scope; all the information you need to play chess is pretty well contained in the rules and the layout of the board. With language, on the other hand, most of the rules/information are in the context. That's the whole idea of "meaning": that a simple sentence is interpreted in the context of human experience to convey a great deal of information. We can get around this by narrowing the context -- for example, a computer can obey spoken commands from a set it recognizes -- but there is no getting around the fact that there is more information behind everyday speech than computers have access to. If you think we can brute-force this, consider how hard it is to brute-force chess (with rules so simple a kid can learn them in a few minutes), and then consider that it takes a human 1-2 years just to learn the /rules/ for language, and then once your computer has done that it still has to do the real work of making decision trees. Ludicrous.

Likewise for programming. Programming is the act of solving a problem, and to do this well requires information about the universe, about humans, about language, and emotions, and philosophy, that a computer doesn't have.

[ Parent ]

ehh... (5.00 / 1) (#74)
by pb on Wed May 15, 2002 at 05:28:34 PM EST

I don't believe it.  I think it's certainly possible to translate between the two and write efficient code.  Maybe not "good" code in the sense of "readable" code--that would be a tougher problem, but likely still doable.

Functional code in procedural languages often does suck; I know that tail-recursion is rarely implemented in your average C compiler, for example.  However, it is possible to transform recursive code into iterative code and vice versa.  It's easy enough to write a function in Lisp that loops, or to transform for loops in C into functions.
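The recursion-to-iteration transformation mentioned above is mechanical for tail calls. A minimal sketch, in hypothetical Python for brevity (the function names are invented here):

```python
def length_rec(xs, acc=0):
    """Tail-recursive list length: the recursive call is the last thing done."""
    if not xs:
        return acc
    return length_rec(xs[1:], acc + 1)

def length_iter(xs):
    """The same function after mechanical tail-call elimination: the
    recursive call becomes a rebinding of the parameters plus a jump."""
    acc = 0
    while xs:
        xs, acc = xs[1:], acc + 1
    return acc
```

This rebind-and-jump rewrite is essentially what a Scheme compiler performs automatically, which is why tail recursion costs nothing there while the average C compiler leaves the calls in place.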

In my mind, the tougher challenge in translating between Lisp and C is the type systems, not the loops vs. functions.  Obviously closures are a problem too (and garbage collection vs. memory allocation), but of course there are libraries to do all of these things in C (often used to implement Lisp interpreters), and of course you can put type checking into Lisp, and keep track of memory management as needed, and write simple functions to implement C keywords.

The question isn't "is it doable"; of course it is.  The question is "why".  And the answer can always be--if nothing else--"for hack value".  :)

I think the best--and perhaps most underrated--tools we have for programmatically improving code today are code profilers.  Compilers have gotten quite good at optimizing, and dynamic recompilation techniques are quite impressive now, but combining dynamic recompilation, profiling, and optimizing seems to be the most interesting and promising (as well as tricky) approach I've seen yet.  (that's the sort of thing Transmeta was doing, and I believe HP was researching some of the same techniques)

I think that allowing code to use different implementations of the same algorithms would also be pretty neat (if Java could profile your data accesses and allow you to use different collections that supply the same interfaces, but might be more appropriate for your access patterns, for example); this would be similar in spirit to my suggestion that you can generate an iterative and a recursive version of the same algorithm, benchmark the two, and use the faster one.
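The benchmark-and-select idea in the last paragraph can be sketched directly. Here `fastest` is a hypothetical helper, and the two factorial variants stand in for machine-generated versions of one algorithm:

```python
import timeit

def fact_rec(n):
    """Recursive variant of factorial."""
    return 1 if n == 0 else n * fact_rec(n - 1)

def fact_iter(n):
    """Iterative variant of the same algorithm."""
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

def fastest(candidates, arg, repeat=1000):
    """Benchmark equivalent implementations and return the quickest."""
    timings = {f: timeit.timeit(lambda: f(arg), number=repeat)
               for f in candidates}
    return min(timings, key=timings.get)

# Bind the name to whichever variant wins on this machine and workload.
factorial = fastest([fact_rec, fact_iter], 50)
```

A real system would benchmark against the program's observed access patterns rather than a fixed argument, but the selection step itself is this simple.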

No, computers aren't going to magically make your code better without a lot of work on our part.  But I don't think that these are entirely impossible goals; rather, we have a lot more room for improvement left to us before we run out of steam.
"See what the drooling, ravening, flesh-eating hordes^W^W^W^WKuro5hin.org readers have to say."
-- pwhysall
[ Parent ]

lisp2c (4.00 / 1) (#144)
by kubalaa on Fri May 17, 2002 at 09:02:18 AM EST

If you write a lisp compiler in c, haven't you essentially created lisp2c -- all you have to do is wrap the compiler with the data. I'm thinking of the way currying works in a functional language -- you've just curried your c-functor with a lisp-function to create a new c-function. Damn ugly, though.

[ Parent ]
Maybe I'm way off here but... (5.00 / 1) (#70)
by dissonant on Wed May 15, 2002 at 04:52:41 PM EST

Isn't something like "code4me" essentially just a supersmart compiler? I mean at some point, you'd have to input parameters for the program you want created, and presumably code4me (or any program) isn't going to be able to deal with the utterly clueless ambiguity some mindless end user would spout at the poor thing... You basically would have just created a really, really high-level language.

I mean, you could probably make tools like you've mentioned work by creating some sort of metalanguage that describes different pieces of commonly used logic, as well as how they interact, and then relates those terms to language-specific implementations. Then maybe have language-specific metalanguages that apply "good form" and "style" type rules to the chunks of logic handed to them from the big bad metalanguage, but the end result will probably still be fugly and full of holes. Seems like even if it were executed flawlessly you'd end up with something similar to Java or .NET, but with multiple interrelated intermediary languages that would be as easy to decompile as compile...

Could be interesting. Feel free to give it a try...

Think Star Trek style programming (5.00 / 1) (#151)
by Arkaein on Fri May 17, 2002 at 09:45:03 PM EST

I think it would be a little more than a very high level compiler. It would probably parse natural language. This would give a trade-off of less precise specifications for more flexibility and ease of use.

Another thing that would probably be necessary is interactive refactoring and feature addition. My idea for this comes from watching characters "program" a computer on any of the newer Star Trek shows, especially for the holodeck. The user describes the basic specification very briefly. When there are multiple options available, the computer queries the user for more info. At any step the user can make changes or add features. After the initial programming is done, the user executes the program. Since only a basic specification is given, it is up to the computer to fill in "intelligent default" values or infer the remaining values from the given input. In many cases the program is a bit lacking when first run, so the user will perform live modifications.

Something like this might be possible in a limited fashion for well known domains of programs (minus the natural language parsing). The user could start out by specifying the basic type of program, say a web browser. The compiler (or whatever you want to call it) would load a template for a web browser including a basic HTML renderer, simple GUI and HTTP support modules. At this point the user could compile a finished program, but in most cases would want customization. JavaScript might be turned off and the HTML renderer may disable frames. This type of thing can pretty much be done with reusable components, but still requires writing some actual code to bind the components, and making custom modifications varies greatly in capability.

As the programming discipline evolves we should see an ever-growing number of reusable code components that tackle more and more sophisticated needs. A plug-in HTML renderer is one example of this; sound and video players are others. A lot of these are fairly GUI-centric though; more work will need to be done making more general-purpose algorithms that can be used as portable modules. Code4me is a ways off, but I could see a lot of programming 10 or 15 years down the road done with drag and drop or wizards that make it easy to build basic custom apps. These apps will not perform great out of the box but would be useful for rapid prototyping and could be refactored into final, custom applications.

The ultimate plays for Madden 2003-2006
[ Parent ]

One's missing (none / 0) (#72)
by epepke on Wed May 15, 2002 at 05:03:33 PM EST

To be executed once all the other tools are in place.

The truth may be out there, but lies are inside your head.--Terry Pratchett

brainsim (none / 0) (#78)
by xriso on Wed May 15, 2002 at 06:04:03 PM EST

Emulate the human brain. Comes with various premade brains such as "hacker" and "ranter".
*** Quits: xriso:#kuro5hin (Forever)
brainsim (none / 0) (#116)
by vrai on Thu May 16, 2002 at 01:37:39 PM EST

Basic code - add input cases as needed, patches welcome:

#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <humanio.h>
#include <humanreasoning.h> // TODO - Still core dumps occasionally

int main ( int argc, char * argv [] )
    char * longTerm, shortTerm;
    unsigned long heartbeats;

    longTerm = ( char * ) malloc ( HUMAN_BRAINSIZE );
    memcpy ( longTerm, 0, HUMAN_BRAINSIZE );
    shortTerm = ( char * ) malloc ( HUMUNIO_BUFFERSIZE );
    memcpy ( shortTerm, 0, HUMANIO_BUFFERSIZE );
    srand ( clock ( ) );

    // Loop until dead
    for ( heartbeats = rand ( ) % HUMAN_MAXBEATS; heartbeats > 0; --heartbeats )
        // You never know
        if ( rand ( ) % 1000 < 5 )
            memcpy ( longTerm, 0, HUMAN_BRAINSIZE );
            if ( rand ( ) % 10 == 0 )

        // Input cases
        if ( HumanIO_IsInput ( ) )
           switch ( HumanIO_GetInputType ( ) )
               // TODO - Add more cases
               case HUMANIO_EATING:

               case HUMANIO_SMOKING:
                   if ( HumanIO_AnalyseBreath & HUMANIO_CHEMICAL_THC )
                       memcpy ( shortTerm, 0, HUMANIO_BUFFERSIZE );

                case HUMANIO_VOTING:
                    int electoralCycle = HUMANIO_TIME_YEAR * atoi ( getenv ( ELECTORAL_CYCLE ) );
                    memcpy ( longTerm, 0, HumanIO_CurrentMemPos - electoralCycle, electoralCycle );
                    HumanIO_CastVote ( );

           HumanIO_PopInput ( );

    // You're screwed now
    free ( shortTerm );
    free ( longTerm );
    return 0;

[ Parent ]

Memento? (3.66 / 3) (#126)
by bags43 on Thu May 16, 2002 at 05:12:23 PM EST

// You never know
if ( rand ( ) % 1000 < 5 )
    memcpy ( longTerm, 0, HUMAN_BRAINSIZE );
    if ( rand ( ) % 10 == 0 )

(emphasis mine)

Why would I want to erase my memory approximately once in every two hundred heartbeats?

[ Parent ]

D'oh! (4.00 / 1) (#136)
by vrai on Fri May 17, 2002 at 02:49:35 AM EST

Clearly this 'feature' would cause a problem in the field (perhaps a one in a hundred million would be better). But rather than fix this it can be sold as-is by simply branding it: BrainSimXP Goldfish Edition.

Also the memcpys should be memsets. The current version will coredump like a bastard.

[ Parent ]

The crux of the problem (4.66 / 9) (#79)
by tmoertel on Wed May 15, 2002 at 06:09:25 PM EST

As I demonstrated in a comment I made regarding an earlier story on a similar topic, the problem with the proposed "Dream Tools" is that while it may be possible to inspect a program's source code to determine what the program does, it is not possible to determine what the program is supposed to do, and ultimately that "supposed-to" knowledge is what is necessary to make tools like the proposed Dream Tools useful. Without this knowledge, Dream Tools such as lang2lang are reduced to little more than specialized run-time generators for the original language, and tools like coderate and improvecode are worthless.

What matters are the semantics of what the programmer was trying to do, not the semantics of what a particular implementation of the former semantics happens to be. I give an example of this problem in my earlier comment. In short, the semantics deduced from a C implementation of the Fibonacci Series is so noisy that the original underlying definition of the Series is lost. Thus, no translation to any other language can make effective use of that other language's features and idioms.
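To make the point concrete (a hypothetical illustration in Python, not the example from the earlier comment): both functions below compute Fibonacci numbers, but the second, a fast-doubling implementation of the kind an optimizer favors, retains almost nothing a tool could map back to the defining recurrence.

```python
def fib_spec(n):
    """The definition itself: F(0)=0, F(1)=1, F(n)=F(n-1)+F(n-2)."""
    return n if n < 2 else fib_spec(n - 1) + fib_spec(n - 2)

def fib_fast(n):
    """Fast-doubling Fibonacci.  The identities F(2k)=F(k)(2F(k+1)-F(k))
    and F(2k+1)=F(k)^2+F(k+1)^2 are nowhere stated; a translator sees
    only bit tests and arithmetic, and the "Fibonacci-ness" is gone."""
    def go(m):
        if m == 0:
            return (0, 1)           # (F(m), F(m+1))
        a, b = go(m >> 1)
        c = a * (2 * b - a)
        d = a * a + b * b
        return (d, c + d) if m & 1 else (c, d)
    return go(n)[0]
```

Translating `fib_fast` idiomatically into another language would require first recovering `fib_spec` from it, which is exactly the "supposed-to" knowledge the source no longer carries.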

My blog | LectroTest

[ Disagree? Reply. ]

quick reply (4.00 / 1) (#81)
by QuantumG on Wed May 15, 2002 at 07:14:38 PM EST

Thank you for your well thought out comment. Indeed the problem of writing good code is probably AI-complete: you have to be a human to do it, and even then it is hard. If after running my code through lang2lang I see that the output code is a little crufty, I can use code2spec to see the meaning of the program and modify it to reflect my intent. Propagating those changes down may not be necessary; I could just supply the specification and the input code to lang2lang and see if it does a better job. I don't even have to be subjective about "better": if I don't feel I can be unbiased, I can use coderate to give me an objective opinion on how crufty the code is. Actually doing any of this stuff isn't necessarily possible (and certainly won't be without some serious research into these areas), but it's the idea of what programming would be like if we had these tools that I was trying to convey.

Gun fire is the sound of freedom.
[ Parent ]
The problem is tricky, but not totally intractable (4.00 / 2) (#127)
by Cheetah on Thu May 16, 2002 at 05:26:55 PM EST

Consider the (bad) sentence "Store went I to."  It is very bad English, but it doesn't take a very intelligent person to understand what it's supposed to be.

Jump over to our magic programming tools.  It seems to me that, for most of the tools QuantumG described, you would need a system with a fairly advanced and intelligent AI.  Given that such an AI is present, I think it would be possible (but far from easy) to write a code reader that can be told to 'read what I meant, not what I wrote.'

In fact, I think that would be necessary for many of these tools, especially the ones that deal with high level specifications, since that specification is essentially what the program means.

And remember, these are dream tools.  Of course they don't work without a miraculous AI that does a good job of understanding what a program really means.  Extracting and manipulating that meaning is what most of these tools are about.  And if the meaning finding AI were to output lots of noise for your fibonacci C code, then I'd argue that it wasn't a good AI.  Part of the point of extracting meaning is to get rid of noise and language specific semantics.

Using a declarative language certainly makes the extractor's job easier.  However, the fact that just about any programmer who knows a bit of C and what the fibonacci sequence is can understand what your bit of code means is proof that extracting meaning is not an impossible task.

[ Parent ]

But, then, why would you need the tools? (4.66 / 3) (#133)
by tmoertel on Thu May 16, 2002 at 10:01:11 PM EST

"Jump over to our magic programming tools. It seems to me that, for most of the tools QuantumG described, you would need a system with a fairly advanced and intelligent AI. Given that such an AI is present, I think it would be possible (but far from easy) to write a code reader that can be told to 'read what I meant, not what I wrote.'"

I think you're missing my point about the Dream Tools. Without an all-powerful AI like the one you imagined above, the Tools won't work. And with the all-powerful AI, there's no need for the Tools. Given an AI sufficiently intelligent to examine some code and not only figure out what the code does but also deduce the underlying problem that the original programmer was trying to solve (something many human programmers cannot do), and then solve that problem in terms of some given other languages, platforms, etc. -- given such an amazing AI, why would you need the Tools at all? Why couldn't that same AI just do all your programming chores for you? Why would you need to interfere with its work by means of the crude manipulation afforded by the Dream Tools? You don't need the Tools; you have the AI!

So, just to be clear, without a brilliant AI, the tools are practically worthless for the reasons I give in my earlier post. With the brilliant AI, the tools are practically worthless because you don't need the tools.

My blog | LectroTest

[ Disagree? Reply. ]

[ Parent ]
perl3 perl4 (4.00 / 1) (#86)
by mpalczew on Wed May 15, 2002 at 08:08:51 PM EST

You could just install perl3 alongside perl4 and perl5 and specify which one in each perl file.

i.e. #!/usr/local/bin/perl4

This has been done forever; what's the big deal?
-- Death to all Fanatics!

Hey (none / 0) (#87)
by naugerrooger on Wed May 15, 2002 at 09:00:39 PM EST

Why would you want to use an outdated version of perl?  Did they take out some good features, or are there like compatibility issues and shit?  I'm trying to learn Awk.
-- "Would?" Alice in Chablis
[ Parent ]
Mostly compatibility issues (none / 0) (#99)
by carbon on Thu May 16, 2002 at 02:10:07 AM EST

Perl3 apps don't always run on Perl5, though many will. However, this isn't much of an issue recently, as Perl5 apps will run in Perl6, even though Perl6 has radically different syntax in most areas. This is done simply with a differentiating tag at the top of a given source file. I believe that 5 uses 'package' and 6 uses 'module' to define code blocks, and which you use determines which kind of Perl it's expecting to find.

Wasn't Dr. Claus the bad guy on Inspector Gadget? - dirvish
[ Parent ]
tags are cool (none / 0) (#103)
by QuantumG on Thu May 16, 2002 at 03:21:52 AM EST

Wouldn't it be nice if every program you made was expected to have a language version number embedded in it? Whoever you gave the source to would know exactly which compiler/interpreter you expected to use. If we had lang2lang we could translate every time we compile, so we would always be up to date with the latest version of the language.

Gun fire is the sound of freedom.
[ Parent ]
Program transformation (4.00 / 1) (#89)
by ka9dgx on Wed May 15, 2002 at 10:38:13 PM EST

You need to change your view a bit. Consider the source code for a program, as it flows through a compiler on the way to being an executable. If you link the source code bidirectionally to a symbol table and the parsed structures, you get a very powerful tool.

If you take a Pascal compiler, and use it to crunch on some source, you then end up with the symbols, type information, structures, etc. If you took care to write a decompiler in C, or Forth, or whatever... you could suck source in, and spit it back out in whatever language you wrote decompilers for. It's not rocket science, but just a different way of looking at things.
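A toy version of that architecture, sketched in Python for brevity (the node type and emitters are invented for illustration): once the front end has reduced source to parsed structures, each target language is just another pretty-printer over the same structures.

```python
from dataclasses import dataclass

@dataclass
class Assign:
    """A parsed structure a front end might keep: name gets left op right."""
    name: str
    left: str
    op: str
    right: str

def emit_c(node: Assign) -> str:
    """'Decompile' the structure into C syntax."""
    return f"{node.name} = {node.left} {node.op} {node.right};"

def emit_pascal(node: Assign) -> str:
    """'Decompile' the same structure into Pascal syntax."""
    return f"{node.name} := {node.left} {node.op} {node.right};"

stmt = Assign("total", "price", "+", "tax")
```

Adding a target language means adding one emitter; the symbol table and type information gathered by the front end are shared by all of them.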

I personally can't stand C/C++, if I ever get this off the ground, I'll be able to suck in source code in C, and spit it back out in nice, case insensitive Delphi/Pascal. 8)


I'm not really seeing your point (none / 0) (#90)
by QuantumG on Wed May 15, 2002 at 11:06:10 PM EST

Sounds like you are talking about a compiler-specific decompiler, but I can't really be sure. Can you expand on your points a little?

Gun fire is the sound of freedom.
[ Parent ]
Integrating the compiler into an environment (5.00 / 1) (#120)
by ka9dgx on Thu May 16, 2002 at 02:46:13 PM EST

If you consider the idea of an honestly integrated IDE, one where the source code window connects bidirectionally to the symbol table, etc... you get a boatload of power back that you currently throw away. Consider the following Pascal fragment:

procedure a;
var i : integer;
begin
   for i := 1 to 5 do
      writeln (i);
end;

procedure b;
var i : integer;
begin
   for i := 1 to 8 do
      writeln (i);
end;

If you want to rename the second i, and not the first, you have to do it manually. The compiler knows, however, that the two references are different in scope, and so it could "magically" change only the second set of references, with little to no work.
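A scope-aware rename along these lines can be sketched with Python's own parse tree (a toy, and an assumption-laden one: it treats each top-level function as one scope and ignores nesting and closures; `rename_local` is invented here):

```python
import ast

def rename_local(source, func_name, old, new):
    """Rename a variable inside one function only, using the parse
    tree's structure rather than text matching."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef) and node.name == func_name:
            for inner in ast.walk(node):
                if isinstance(inner, ast.Name) and inner.id == old:
                    inner.id = new
    return ast.unparse(tree)  # ast.unparse requires Python 3.9+

# Two procedures, each with its own local i, as in the Pascal fragment.
src = """
def a():
    for i in range(1, 6):
        print(i)

def b():
    for i in range(1, 9):
        print(i)
"""
```

`rename_local(src, "b", "i", "j")` touches every reference to i in b and none in a, which is exactly what a string search cannot promise.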

You should be able to see a list, in real time, of the variables in a program, change them, sort them, etc.. If you decide to split the use of a variable into two parts, the tools should provide an EASY means to see all the references of THAT variable (not just all string matches).

The point is that a separate TEXT editor and compiler, just because they share the same menu structure, are not truly integrated in today's IDE suites. This needs to change.

Properly done, an IDE could result in an order of magnitude improvement in productivity.


[ Parent ]

Have a look at ... (5.00 / 1) (#121)
by Simon Kinahan on Thu May 16, 2002 at 03:13:47 PM EST

The Smalltalk refactoring browser, and the new generation of Java IDEs that support automated refactoring. I think these do pretty much what you're looking for.


If you disagree, post, don't moderate
[ Parent ]
conversions (none / 0) (#113)
by aphrael on Thu May 16, 2002 at 10:12:52 AM EST

We've got some tools that will munch C code into Pascal. Sadly, though, there are some things that aren't expressible --- a lot of the preprocessor magic in C has no equivalent in Pascal --- and C++ is an impossibility: converting multiple inheritance just isn't going to work.

[ Parent ]
One more tool (5.00 / 4) (#91)
by rodoke3 on Thu May 16, 2002 at 12:15:19 AM EST

How about "InfLoop", a program that tests whether source code will run into a never-ending loop.  Maybe I'll just code one up ;-)

I take umbrage with such statments and am induced to pull out archaic and over pompous words to refute such insipid vitriol. -- kerinsky

no you wont (3.25 / 4) (#92)
by QuantumG on Thu May 16, 2002 at 01:17:17 AM EST

because you're a peon who accepts the gospel of Gödel. Read the rest of the comments, I won't bother expounding, yet again, why the halting problem is purely theoretical.

Gun fire is the sound of freedom.
[ Parent ]
yes i will, biatch! (5.00 / 1) (#94)
by rodoke3 on Thu May 16, 2002 at 01:46:20 AM EST

It was a joke, QuantumG! (hence the smiley) Of course, you learn in Discrete Math courses that this is impossible. I read that problem in my textbook just a couple of weeks ago, but I can't find it. I would welcome anyone who could provide a proof, though.

Don't you hate it when the subject has nothing to do with the content? :-)

I take umbrage with such statments and am induced to pull out archaic and over pompous words to refute such insipid vitriol. -- kerinsky

[ Parent ]
cut and paste (4.00 / 3) (#96)
by QuantumG on Thu May 16, 2002 at 01:52:49 AM EST

You clearly misunderstand how the halting problem (and Gödel) limit us. No, we can't write a tool that solves the halting problem for every possible program, but we can write a tool that solves the halting problem for some programs, and if my program is one of those programs then the tool is useful to me. If the tool runs for more than 15 minutes I'll stop it. The theoretical constraint of the halting problem has done more to harm computer science than it has to help it -- as evidenced by the fact that serious attempts at partial solutions to the halting problem for real languages are not generally available. I would have thought that the hacker ethic dictated that we all give it a shot. Just the journey of writing a partial solution would be worthwhile -- you'd learn about parsing, referential transparency, theorem proving -- and that would take a lot of effort and a lot of code, in which time the skills you learned would be more valuable than the tool itself.
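In that spirit, here is a deliberately tiny partial checker (my own toy, not an existing tool). It decides only the easiest cases and honestly answers "unknown" for everything else, which is exactly the kind of partial solution the undecidability theorem still permits:

```python
import ast

def halts(source):
    """Toy partial halting checker: returns True, False, or "unknown".

    It decides only the easiest cases and punts on everything else --
    which is all the halting theorem actually requires of us.
    """
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.While):
            always_true = (isinstance(node.test, ast.Constant)
                           and bool(node.test.value))
            has_break = any(isinstance(n, ast.Break) for n in ast.walk(node))
            if always_true and not has_break:
                return False      # `while 1:` with no break never halts
            return "unknown"      # any other loop: we honestly don't know
    # straight-line code with no loops (this toy ignores recursion)
    return True

print(halts("x = 1 + 1"))             # True
print(halts("while 1:\n    pass"))    # False
print(halts("while x:\n    x -= 1"))  # unknown
```

A real tool would grow the set of decided cases (bounded for-loops, structural recursion, and so on) while keeping the honest "unknown" answer for the rest.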

Gun fire is the sound of freedom.
[ Parent ]
Ancient Chinese Secret (5.00 / 3) (#97)
by rodoke3 on Thu May 16, 2002 at 01:59:46 AM EST

I still need to catch up on my Discrete Math.  Therefore, I'll have to take your word for it.  Though, I had no idea the "halting problem" was such a "hot button" issue in Computer Science.

I take umbrage with such statments and am induced to pull out archaic and over pompous words to refute such insipid vitriol. -- kerinsky

[ Parent ]
"it cant be done" (4.00 / 2) (#98)
by QuantumG on Thu May 16, 2002 at 02:05:09 AM EST

is always a hot button in any science. If you're gunna tell someone to stop researching something because it is a waste of time you better be ready for a fire fight and you better have a well thought out argument to back you up. "My professor told me so" is not such an argument.

Gun fire is the sound of freedom.
[ Parent ]
For the 'Halting Problem', there is proof (4.50 / 2) (#100)
by rodoke3 on Thu May 16, 2002 at 02:13:33 AM EST

For the halting problem, you could use Turing's proof (since you called it the "Halting Problem", I was able to find the section) of why it can't be done.  Though, I don't know if this has been refuted yet.

I take umbrage with such statments and am induced to pull out archaic and over pompous words to refute such insipid vitriol. -- kerinsky

[ Parent ]
you simply dont die do you? (2.80 / 5) (#101)
by QuantumG on Thu May 16, 2002 at 02:31:47 AM EST

Have you read the proof? Do you understand how it works? If you had, and you actually thought about it, you should be anything but sure of how undecidable the halting problem is. The proof that the halting problem is undecidable relies on a philosopher's trick:

This statement is a lie.

What is the truth of this statement? The answer is undecidable. Why? Because the only way the statement can be true is if it is false. This is a contradiction, and when we find contradictions we know that the question is nonsensical.

The proof that the halting problem is undecidable works the same way. If we write some program which solves the halting problem and wrap it up in a statement like the one above, running this program through our solution will cause it to loop forever, therefore we have given one example where, logically, the halting problem cannot be solved.

Big deal! It's a mathematical curiosity. The simple answer is: don't do that. If I were to write you a program that told you whether your program halts or not, you are guaranteed to be able to write a program that will cause it to loop forever. Does this somehow reduce the worth of this program to you when you use it in sensical ways? Of course not. But as a result of your discrete logic teacher telling you it isn't possible (what this has to do with discrete logic is anyone's guess; this is pure mathematics, not set theory) you've never had a go at it. Not too many people have had a go at it. This isn't because such a program wouldn't be useful -- it would -- but because writing such a partial solution to the halting problem would be hard and we've all got the perfect excuse not to bother -- "it's mathematically impossible!"
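The construction being argued about looks like this as code. Here `halts` stands for the assumed perfect oracle; the placeholder body returning True is mine, since by the theorem no real body can exist:

```python
def halts(program):
    """Hypothetical perfect oracle: True iff program() eventually returns.

    The proof assumes such a function exists; this body is only a
    stand-in so the sketch runs.
    """
    return True

def contrarian():
    """Does the opposite of whatever the oracle predicts about itself."""
    if halts(contrarian):
        while True:   # oracle said "halts", so loop forever
            pass
    # oracle said "loops forever", so return immediately

# Whatever answer `halts` gives about `contrarian`, that answer is wrong:
# this single adversarial input is the entire proof. On every *other*
# program, a partial checker remains free to be right.
print(halts(contrarian))  # True
```

Note that the proof only manufactures one wrong answer per claimed oracle; it says nothing about how useful a checker can be on programs that don't quote the checker back at itself.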

Gun fire is the sound of freedom.
[ Parent ]

Its harder than you think. (4.80 / 5) (#111)
by zakalwe on Thu May 16, 2002 at 09:41:30 AM EST

The proof that the halting problem is undecidable relies on a philosopher's trick:

This statement is a lie.

No.  You can produce such a program as a quick example of one such problem, but that is not the only way to prove it.  Turing, I think, used a variation on Cantor's diagonalisation method when he was proving it.  It is not only such problems as "if (!halts(me)): halt" that are necessarily undecidable.

As regards writing a "partial solution" I think you're mis-stating things.  No-one has said "Don't bother" - in fact formal analysis of algorithms is a big area in Computer Science.  Certainly it is possible to prove that "if 1: halt" halts.  But if you can prove that (for example):

i = 0
while 1:
  if is_not_sum_of_two_primes(i): break
  i += 1

halts, you've got something.  As you can see, though, it's not an easy problem.
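For the curious, the fragment above becomes runnable once is_not_sum_of_two_primes is filled in. This sketch starts the search at 4 (the first even number Goldbach's conjecture covers) and adds a cap, which I've invented here, so it actually returns:

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_not_sum_of_two_primes(n):
    """True if no pair of primes p, q satisfies p + q == n."""
    return not any(is_prime(p) and is_prime(n - p)
                   for p in range(2, n // 2 + 1))

# Halts iff some even number >= 4 is not the sum of two primes.
# Nobody knows whether the *uncapped* loop terminates: that is the point.
i = 4
while True:
    if is_not_sum_of_two_primes(i):
        break
    i += 2
    if i > 1000:   # artificial cap so this sketch finishes
        break
print(i)  # 1002: every even number up to the cap passed the check
```

Remove the cap and a halting verdict on this loop settles Goldbach's conjecture, which is the whole argument of the comment above.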

[ Parent ]

So, here's the deal (3.66 / 3) (#117)
by jacob on Thu May 16, 2002 at 01:56:14 PM EST

A program that solved the halting problem for input programs would be useful for programmers because it would catch unintentional infinite-loop mistakes. We all know that there will be programs that can be specifically designed to break your halting-problem solver; that's not the point.

"it's not rocket science" right right insofar as rocket science is boring


[ Parent ]
No, thats not the point (5.00 / 3) (#138)
by zakalwe on Fri May 17, 2002 at 04:32:50 AM EST

A program that could solve the halting program would be a lot more useful than catching infinite loop mistakes.

The above program is not designed to catch out a halting checker.  It's designed to show how difficult such a thing would be.  It may be theoretically possible to design a checker that tells whether that program stops.  If you do so, please feel free to collect your million dollars -- you've just proved (or disproved) Goldbach's conjecture [1].

If your program eventually halts, then there must exist some even number that is not the sum of two primes - Goldbach is disproven.  If you prove it never halts, then you've shown that Goldbach's Conjecture is valid for all even integers.

In fact, your solver is applicable to any problem that can be expressed in terms like "p(x) is true for all x where x = f(x)" (assuming the range is countably infinite - this still isn't too good for solving theorems about real numbers).

This is why even a "sometimes" halting problem solver is so difficult.  Scratch beneath the surface and you find that you're really trying to develop a formal system for producing mathematical proofs - something most mathematicians would give their left arm for - hardly an unexplored problem.  And don't think that just because Goldbach's conjecture is a mathematical theorem it is different from real problems.  The logic needed to solve it would be fairly simple compared to many real world programs.

[1] Or would have if I had actually written the program correctly.  Replace "i=0" with "i=2" and "i+=1" with "i+=2".  Doh.  As it stands, it's less impressive, since it fails on the first pass.

[ Parent ]

Loop end conditions (5.00 / 2) (#147)
by jacob on Fri May 17, 2002 at 10:39:21 AM EST

I think I wasn't very clear in my last message. My bad. You're right, of course; a general-purpose halting problem solver will necessarily choke on lots of inputs, and basically any problem that's unsolvable can be reduced to the halting problem given the method you've shown. What I want to know is, how often do such programs come up in real live programs? I would expect that in most programs, and even most 'interesting' or 'difficult' programs there's very little iteration or recursion that isn't inductive on the shape of the data coming in.

"it's not rocket science" right right insofar as rocket science is boring


[ Parent ]
I'd expect even common programs would be difficult (5.00 / 1) (#148)
by zakalwe on Fri May 17, 2002 at 11:38:56 AM EST

I'm not sure, but I'd expect even common programs would be difficult.  Goldbach's conjecture isn't really that complex to express as a program, and while it's probably possible to determine whether it halts or not, doing so would constitute solving one of the most difficult mathematical problems currently unsolved.  Consider other popular problems, like the "3n+1" function, whose behaviour we also don't know.  The logic and math are trivial, but proving them is incredibly hard.

Most examples I can think of are either so trivially solvable as to be of no practical use, or virtually impossible as above.  When it comes down to it, "detecting halting" is really a very active field - it just happens to be called "Formal Systems".

[ Parent ]

goldbach, 3n+1, etc (none / 0) (#149)
by jacob on Fri May 17, 2002 at 12:09:27 PM EST

are all examples of 'unnatural recursion,' for lack of a better name -- recursion (or iteration) that doesn't take the form of a natural descent over some part of the data it's given. For example, a loop that processes all the elements in a list is naturally iterating over the list, visiting it in a regular pattern. So is a program that compiles a C syntax tree to assembly. The recursion in the 3n+1 problem, however, isn't. It iterates, but without a bound that's determined by the shape of the data it's provided.

The largest program I've yet written -- a large compiler that still isn't finished -- contains precisely zero occurrences of unnatural recursion. Honestly, the only practical loop I've ever written that took advantage of an unnatural recursion was deep inside a genetic algorithm solver I wrote once for a class.
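The distinction can be made concrete with two small functions (my own illustration). The first recurs on the shape of its input, so termination follows by induction on the list's length; the second is the 3n+1 iteration, where no bound comes from the input's shape and termination for all inputs is an open problem:

```python
def total(xs):
    """Natural recursion: each call receives a strictly shorter list,
    so termination is immediate by induction on the input's shape."""
    if not xs:
        return 0
    return xs[0] + total(xs[1:])

def collatz_steps(n):
    """'Unnatural' iteration: n may grow before it shrinks, and nothing
    about the input bounds how long the loop runs."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(total([1, 2, 3, 4]))  # 10
print(collatz_steps(6))     # 8
print(collatz_steps(27))    # 27 climbs as high as 9232 before reaching 1
```

A checker that only certifies structural descent would cover total (and, per the comment above, most real loops) while honestly refusing to rule on collatz_steps.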

"it's not rocket science" right right insofar as rocket science is boring


[ Parent ]

oh very witty (none / 0) (#156)
by QuantumG on Sun May 19, 2002 at 08:09:43 PM EST

I can see what you are saying here, I just don't think it is anything useful. Rewrite your code without that big huge black box in the middle. About the only thing we can prove about a black box is that it is black; take it out and we'll talk about how impossible this is.

Gun fire is the sound of freedom.
[ Parent ]
Aargh (5.00 / 1) (#139)
by zakalwe on Fri May 17, 2002 at 04:34:13 AM EST

Oops - just noticed a mistake in the program.  What I meant to say is:

i = 2
while 1:
  if is_not_sum_of_two_primes(i): break
  i += 2

[ Parent ]

Glad I read down to this comment, (3.00 / 1) (#152)
by Ian Clelland on Fri May 17, 2002 at 11:37:38 PM EST

because I was about to point out that your initial program halts at i==11 :)

[ Parent ]
You don't seem to understand logic very well. (5.00 / 2) (#131)
by lordpixel on Thu May 16, 2002 at 06:02:28 PM EST

Leaving aside the fact that there are other ways to prove the halting problem undecidable than the standard proof you refer to, your assertion that it's based on the philosopher's trick

"This statement is a lie"

is completely wrong.
It actually uses "proof by contradiction", which I can demonstrate with a silly example:

Here's some known facts in my world:
My ball is entirely red
Red and blue are not the same color

Here's something we'd like to prove:
My ball is not blue.

Proof by contradiction goes like this...

[*] Assume (for a contradiction) that my ball is blue.

but - we know "My ball is entirely red"
and - we know "Red and blue are not the same color"
Therefore we have a contradiction.

When you get a contradiction in a proof by contradiction, it means your assumption (line [*]) is wrong.

As the only assumption we made was "My ball is blue" we have therefore proved "My ball is NOT blue" which is what we set out to do [1].

Of course, this case is trivial, a child could understand a red ball is not also blue.

It's also easy to understand the principle of proof by contradiction just demonstrated, and I hope everyone can see why it has nothing to do with:

"this statement is a lie"

That's not a proof by contradiction - it's something different.

[1] well, there are a lot of implicit assumptions about the way the world works in my discussion, but naturally one can be more precise in other problem domains. eg, we assumed the world of my ball is logically consistent and that the ball doesn't magically change color when it feels like it ;)

I am the cat who walks through walls, all places and all times are alike to me.
[ Parent ]

Eat my dust, Gödel ;) (3.33 / 3) (#106)
by cyberdruid on Thu May 16, 2002 at 05:53:38 AM EST

I have always thought that just because the halting problem is a nice piece of logic, people tend to think it says something of practical importance. It does not.

Now, this may be complete nonsense, but I think that I have found a flaw in the halting problem that actually shows that a general "inf-loop-detector" is just as plausible as any other program. The thing is that on a very fundamental level, when we talk of symbol manipulation, we have to include a manipulator (often the Turing machine). But the quantum laws of nature basically state that you cannot have such a thing as a machine that processes information and gives the same answer every time. There must always be a tiny probability of error, for example tunneling electrons in a digital computer.

This means that the Turing machine is a false model for computing. To really model computing it has to have an (incredibly small) chance of executing the wrong instruction each step. This probability can be made so small that it is normally completely unnecessary even to include it. There is, however, one time when it does get important. You guessed it - infinite loops. Infinite loops are simply no more rational to discuss than infinite mass, or whatever. All the paradox of the halting problem shows is that infinite loops are not a logically possible entity. You cannot write a program that has an infinite loop, because your hardware has no such instruction. You can write a program that has an almost infinite loop, but this will give no paradox.

<wild speculation>
An interesting effect that shows up here is that when you decrease the probability of error in your CPU, you drastically increase the time it takes to give an answer to some questions. When you increase the error frequency, you will get more errors, but the occasional almost infinite loop will be dealt with faster. Reminds me of the no-free-lunch theorem. Perhaps hardware that should deal with these things has to be as error-prone as human wetware to push down those almost infinite loops to just a few seconds?

[ Parent ]

re: Eat my dust, Gödel (3.00 / 1) (#107)
by jlm on Thu May 16, 2002 at 07:51:23 AM EST

As I recall (please note: I am not a mathematician, I may well be wrong, please tell me to shut up), this is part of the Gödel/Turing thesis: a mechanistic system can't be both complete and consistent. What you are saying is that real, physical computers can't possibly be complete (no infinite loops), so they can be consistent (no paradoxes). Increase the completeness (accuracy of the CPU), and you approach paradox.

It's an interesting limitation of formal systems, not just a 'mathematical curiosity' (see, for example, Roger Penrose's 'Shadows of the Mind' for some real-world implications), but as several people have pointed out, it doesn't have much to do with actually building software, which is always about compromises and trade-offs. You can build all of the tools the author suggests, as long as you don't require that they be formally 'complete' - i.e. work perfectly for any possible program.

"He who sleeps is a looser" -- John Bunyan, Pilgrim's Progress.
[ Parent ]

Computation != Computers (4.00 / 1) (#110)
by zakalwe on Thu May 16, 2002 at 08:48:48 AM EST

This means that the Turing machine is a false model for computing.

No.  It means that the Turing machine[1] is a false model for computers.  Arguing that an infinite loop is impossible because the atoms of your computer could decide to jump 3 feet left is like arguing addition is impossible because someone counting stones could have a heart attack before he finishes.

The symbol manipulation is an abstract theory, which we can imperfectly model with real-world things.  Otherwise you don't even have to bring in quantum theory - the solution to "does this program halt?" is: "Yes - because the heat death of the universe will eventually lead to a situation where the computer will not run."

You could propose a theory that more accurately models computing by adding the possibility of such random events.  This theory would be completely useless, however.  Computing is only useful if we assume that everything was calculated correctly.  Introduce "cosmic rays" and you have to admit that it may produce a completely wrong answer.  This may in fact happen - but it's not useful.  The theory of computing "assuming everything works" is far more useful, since we can use it to get actual answers.

[1] <Pedantic>Actually, computers are not even theoretically as powerful as Turing machines.  It's more accurate to call them Finite State Machines.</Pedantic>

[ Parent ]

wrong (none / 0) (#155)
by washort on Sun May 19, 2002 at 06:53:43 PM EST

<Pedantic>Actually, computers are not even theoretically as powerful as Turing machines. It's more accurate to call them Finite State Machines.</Pedantic>

finite state machines are things like regular expression matchers, and definitely nowhere near Turing equivalent; the von Neumann computer design is certainly Turing equivalent.

[ Parent ]

Should have been more explicit (none / 0) (#158)
by zakalwe on Mon May 20, 2002 at 04:29:53 AM EST

By computers, I meant real computers (i.e. the thing on my desk).  Theoretical models of computers are Turing equivalent, but any actually existing computer is no more than a finite state machine, because it has only finite memory/storage.  In practice this is rarely important because there are lots of states.  (2^(bytes_of_storage*8) states, in fact - though lots are irrelevant, meaningless or redundant)
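As a back-of-envelope check of that state count (ignoring registers, I/O, and anything beyond main storage), the number of states is two raised to the number of bits:

```python
# A machine with n bytes of storage has 2^(8n) distinct memory states:
# each of the 8*n bits is independently 0 or 1.
def state_count(n_bytes):
    return 2 ** (8 * n_bytes)

print(state_count(1))               # 256 states for a single byte
print(len(str(state_count(1024))))  # the count for 1 KB is a 2467-digit number
```

Astronomical, but still finite, which is the pedantic point being made.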

[ Parent ]
Compilers & the Halting Problem (3.50 / 2) (#141)
by sangdrax on Fri May 17, 2002 at 05:10:22 AM EST

The Halting Problem only takes away our dreams about the existence of a Perfect Compiler. As stated before, the HP doesn't only restrain infinite-loop detection, it also restrains code optimisation (because if you had the Perfect Optimizer, infinite-loop programs would become '10: goto 10' and tada, you would have solved the HP, which is proven to be unsolvable).

But that doesn't mean CS is giving up on trying to make better optimizers (see the ever increasing complexity in both compilers and microprocessors) or things like deadlock-detection.

Of course, it would be nice to have a program detecting (some) infinite loops, but it is impossible to ever detect them all. Doesn't mean we can't try and detect 99% :).

PS: If you thought the HP was restraining CS: there exist more problems (infinitely more) that are non-computable (i.e. cannot be solved with a Turing machine) than there are computable problems :)

[ Parent ]

quantum computation (5.00 / 1) (#124)
by Shren on Thu May 16, 2002 at 04:12:13 PM EST

If we had quantum computation, could a quantum computing program determine if a non-quantum program halts?

[ Parent ]
no (5.00 / 1) (#128)
by mikpos on Thu May 16, 2002 at 05:39:26 PM EST

The answer is no. The Church-Turing thesis still holds, even with quantum mechanics involved. Quantum computers cannot decide languages which other ("non-quantum") Turing machines cannot. Quantum computers are just faster (in theory). Nothing more.

[ Parent ]
Can't you ... (4.00 / 1) (#142)
by Simon Kinahan on Fri May 17, 2002 at 05:13:40 AM EST

... in principle solve NP problems in P time? Because you have an infinite number of worlds to which you could delegate parts of the computation? Or is there some limitation that prevents this?


If you disagree, post, don't moderate
[ Parent ]
Computability vs. Complexity Theory (none / 0) (#160)
by nicksand on Mon May 20, 2002 at 12:28:55 PM EST

You can solve NP problems in P time on a quantum computer because a quantum register allows you to run operations on all the numbers holdable in that register at the same time. This essentially allows you to simulate a nondeterministic turing machine in the same amount of time as you could a deterministic one. Writing quantum algorithms is still extremely tricky because it involves collapsing the final solution into a usable state.

NP and P belong to complexity theory. The halting problem (where this whole thread started) belongs to computability theory. It really doesn't matter how powerful you make your machine (even if you give it god-like, unlimited powers to know the answer to any question you pose it), the halting problem is still undecidable. If you want to know why, pick up any good introductory text on computability theory (eg: Sipser's Introduction to the Theory of Computation).

[ Parent ]
if... (none / 0) (#159)
by Shren on Mon May 20, 2002 at 08:26:01 AM EST

the quantum computers are "just faster", then why do the programs look so different?

[ Parent ]
Godel (4.50 / 2) (#95)
by rodoke3 on Thu May 16, 2002 at 01:49:18 AM EST

<ignorance>Who is he, btw?</ignorance>

I take umbrage with such statments and am induced to pull out archaic and over pompous words to refute such insipid vitriol. -- kerinsky

[ Parent ]
ai complete (4.40 / 5) (#93)
by Rainy on Thu May 16, 2002 at 01:29:30 AM EST

To do these things practically, you need AI-complete capability. If we had that, I think we'd find more exciting things to do than translating Perl 5 to Haskell and back :-).
Rainy "Collect all zero" Day
hear hear (none / 0) (#143)
by kubalaa on Fri May 17, 2002 at 08:54:48 AM EST

At this level, the idea of working with code is silly. The very tools would make it obsolete. Just as almost nobody programs in machine code any more, because we have compilers and cross-compilers to do it for us, once high-level code can be manipulated this easily we'll take advantage of it to work at an even higher level -- for example, by simply letting the computer write the program from scratch itself.

The differences between languages are there because they make things easier for humans working with them. If the computer can work with any language as well as a human, then different languages aren't needed.

[ Parent ]

I'm old. This is not new. (4.00 / 2) (#122)
by iGrrrl on Thu May 16, 2002 at 03:37:49 PM EST

Reminds me of the many joke commands circulating for the old big iron mainframes. They included:
  • RPM - read programmer's mind
  • RDI - reverse drum immediate
Serious geek points for anyone under 30 who can explain the last one.

You cannot have a reasonable conversation with someone who regards other people as toys to be played with. localroger
remove apostrophe for email.

well... (3.50 / 2) (#123)
by Shren on Thu May 16, 2002 at 04:08:37 PM EST

If my interpretation of the lost arts from Mel's tale is correct, then drum computers kept their memory on a rotating drum. You can't reverse a running drum at a moment's notice, and probably wouldn't want to at all. I gather it spins at a near-constant velocity, like a gyroscope.

[ Parent ]
Magnetic drum storage (4.00 / 2) (#134)
by pin0cchio on Fri May 17, 2002 at 01:58:40 AM EST

RDI - reverse drum immediate

Some old computers used a rotating magnetic drum to store information; the head(s) moved up and down the drum.

The modern equivalent of this instruction, referring to modern hard disk systems, would be RPI - reverse platter immediate

Serious geek points for anyone under 30 who can explain the last one.

(I'm of an age roughly comparable to a 21-year-old Homo sapiens.)

[ Parent ]
You forgot: (4.00 / 1) (#150)
by awgsilyari on Fri May 17, 2002 at 01:35:35 PM EST

DWIM -- Do What I Mean
MOH -- Magic Occurs Here

Please direct SPAM to john@neuralnw.com
[ Parent ]
OFA (none / 0) (#157)
by QuantumG on Sun May 19, 2002 at 08:13:52 PM EST

Old Fart Annoying. If you have something to offer, say it.

Gun fire is the sound of freedom.
[ Parent ]
What you're forgetting (4.33 / 3) (#132)
by peanutbadr on Thu May 16, 2002 at 08:35:42 PM EST

What people seem to be forgetting is that "a program that writes code for you" is merely a compiler. You tell it what you want it to do, and it codes it for you (in machine code of course). What the author is suggesting is merely natural-language programming, which has already been suggested, discussed, analyzed, etc...the only difference is that the author seems to want his natural language processed into an intermediary language such as C. This step is completely unnecessary.
Forget new tools (none / 0) (#137)
by Obvious Pseudonym on Fri May 17, 2002 at 03:25:41 AM EST

I just want a new option on the tools I use.

Most C/C++ compilers have options not to generate warnings for your code. Why can't they put in an option not to generate errors in your code?

Life for a programmer would be so much easier if everything compiled and worked first time...

Obvious Pseudonym

I am obviously right, and as you disagree with me, then logically you must be wrong.

The closest you'll get (none / 0) (#162)
by abo on Mon May 20, 2002 at 03:13:52 PM EST

Many have already pointed out that most of these tools are AI complete, but there are people who have similar but more realistic ideas. They've gathered in the Tunes project, in which metaprogramming is an important tool. I find the article Metaprogramming and Free Availability of Sources by Faré interesting.
-- Köp BRUX!
New tool: halt (none / 0) (#164)
by valency on Tue May 21, 2002 at 03:04:57 PM EST

Takes a program in any language as an argument. Returns 1 if the program halts, 0 if it does not halt.

If you disagree, and somebody has already posted the exact rebuttal that you would use: moderate, don't post.
Dream Tools | 169 comments (97 topical, 72 editorial, 0 hidden)