Kuro5hin.org: technology and culture, from the trenches

Reversible Simulation
There has in fact been some research on developing debuggers that allow reverse execution of code. But it's not just debuggers that benefit from reverse execution. Consider the problems of writing simulations of complex systems. Many simulations, ranging from simulations of tiny microprocessor circuits to simulations of world-wide computer networks, require clusters of computers working in parallel in order to obtain results in a reasonable time. But clusters like this have a problem: the time for one computer to talk to another can be longer than the time it takes to perform a large chunk of simulation. By the time a CPU has received the message that an event in the simulation has taken place on another CPU, it might have carried the simulation well beyond the time of that event. One way to deal with this is to roll back the CPU's clock to before the time of the event and then restart with the new information from the incoming message. It's like receiving a delayed memo from your boss telling you not to start the task you're now working on and to work on something else instead. A nice technique for implementing this is reversible computation: if every instruction in your code can be executed backwards, it becomes easy to perform this roll-back from wherever you currently are. So we can see that reversibility is potentially very useful.
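To make the roll-back idea concrete, here is a minimal sketch in Python (all the names are illustrative, not taken from any real simulation kernel) of an "optimistic" simulator that checkpoints its state before each event so it can rewind when a message about an earlier event arrives late:

```python
# A toy optimistic simulator: it applies events eagerly, but keeps enough
# history to roll back when a straggler message arrives from another CPU.

class OptimisticSimulator:
    def __init__(self, state):
        self.state = dict(state)
        self.now = 0
        self.history = []  # (time, saved copy of state) checkpoints

    def apply(self, timestamp, event):
        # Checkpoint before each event so any earlier time can be restored.
        self.history.append((self.now, dict(self.state)))
        event(self.state)
        self.now = timestamp

    def rollback(self, timestamp):
        # A message says something happened at `timestamp`: rewind to the
        # last checkpoint taken strictly before that time.
        while self.history and self.now > timestamp:
            self.now, self.state = self.history.pop()
```

After `rollback` the simulator can re-apply the newly arrived event and carry on forward again, which is exactly the memo-from-the-boss scenario above.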

Reversibility in General
So what is required to make a computer reversible? Consider a line of C code like this:

a = a + 1;
It's very easy to roll the clock back on that one. We can execute it backwards simply by running
a = a - 1;
In fact, a great many mathematical operations are reversible, for example any operation of the form
a = a-b
is reversible, except for one very special case: when b is another name for the variable a. In that case we have
a = a-a
and that is the same as saying a = 0. This operation isn't reversible because we can no longer recover the original value of a. Now we have come to one of the central issues of reversible computing: how to deal with the clobbering of variables by new values. If we want to reverse a computation that does this, we need to log the value destroyed by every non-reversible change - in this case, storing the previous value of a somewhere. Alternatively, we can try to write our code in such a way that we never actually need to clobber anything. There are all sorts of workarounds, but you have to watch out: you might end up having to keep a lot of baggage sitting around that you're not allowed to overwrite. But would you believe that this issue is much, much deeper than it looks and in fact points to the very heart of modern physics?
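The two strategies can be sketched in a few lines of Python (a toy model; the instruction names are made up): a reversible update is undone by running its inverse instruction, while a destructive assignment must log the value it clobbers.

```python
# A toy machine with one reversible instruction (add a constant) and one
# irreversible instruction (assignment), plus a trace for undoing both.

class ReversibleMachine:
    def __init__(self):
        self.vars = {}
        self.trace = []  # one undo record per executed instruction

    def add(self, name, c):
        # a = a + c is reversible: its inverse is simply a = a - c.
        self.vars[name] = self.vars.get(name, 0) + c
        self.trace.append(("add", name, c))

    def assign(self, name, value):
        # a = value clobbers a; log the old value so it can be restored.
        self.trace.append(("assign", name, self.vars.get(name)))
        self.vars[name] = value

    def step_back(self):
        op, name, x = self.trace.pop()
        if op == "add":
            self.vars[name] -= x        # run the inverse instruction
        elif x is None:
            del self.vars[name]         # the variable didn't exist before
        else:
            self.vars[name] = x         # restore the logged value
```

Notice the "baggage": every `assign` grows the trace, which is exactly the storage cost the paragraph above warns about.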

Reversibility and Physics
Many years ago computer scientists realized that they were doing a fine job of shrinking down electronic components so that they used less and less power. So an obvious question arose: just how much power do we need to perform a computation? What is the fundamental physical limit on computation? We can imagine the sort of answer we might expect: a kind of menu with different kinds of operations and a cost for each. In fact, in the 1950s von Neumann hypothesized that each fundamental logical operation could be performed using just kT ln(2) joules of energy, where T is the absolute temperature at which the computer is running and k, the Boltzmann constant, is around 10^-23 joules per kelvin. But it didn't take long for someone to point out that it was in fact possible to perform a NOT operation with essentially no energy. It turns out that the correct answer is very elegant. There is one fundamental operation that requires energy, and if you don't perform it you don't need any energy at all. What's more, it's an operation we've already discussed: the clobbering of variables. Destroying 1 bit of information takes kT ln(2) joules of energy. We can get a vague idea of how this happens quite easily: at a fundamental level the laws of physics are reversible - you can't really destroy information. Instead, when we clobber a variable, we really have to move the information about the previous state out of the computer. This information eventually finds its way into the state of the molecules around the computer, adding to their disorder. In other words, this information is dissipated as heat. And that's why it costs energy. (Incidentally: think about black holes. It is well known that black holes destroy information that falls into them. They have no hair. In a sense a black hole is the ultimate variable clobberer. But isn't that violating the reversibility of physics? Well, this is a subject under much debate at the moment so I can't give any answers here!)
One of the interesting things about information erasure is that in some sense it is a very fundamental operation, yet one that researchers previously didn't consider important at all. Today we realise that ERASE, and its opposite partner FANOUT, are actually very important to consider in electronic design. In the future you might like to think a little about the information you wipe out each time you assign a value to a variable.
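Plugging numbers into the kT ln(2) formula (Python; room temperature assumed for illustration) shows just how tiny this cost is:

```python
import math

# Landauer's bound: erasing one bit at absolute temperature T dissipates
# at least k*T*ln(2) joules, where k is the Boltzmann constant.
k = 1.380649e-23   # Boltzmann constant, in J/K
T = 300.0          # roughly room temperature, in kelvin

e_bit = k * T * math.log(2)   # energy to erase a single bit, in joules

# Even erasing a billion 32-bit variables costs well under a microjoule.
e_billion_words = e_bit * 32 * 1e9
```

At 300 K this comes out near 3×10^-21 joules per bit, many orders of magnitude below what real gates dissipate today, which is why the bound is of mostly theoretical interest so far.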

So now we can see that some of the people who most want reversible computers are in fact people who have no interest in performing computations in reverse at all. Reversible computers can, in principle, run with zero energy and so are of great interest to engineers who wish to reduce power consumption. This is no pie-in-the-sky either. Logic gates, including adders, that run without external power supplies have already been developed and reversible computer hardware is well on its way to reality. Anyone familiar with digital logic will be aware that all logic gates can be built from NAND gates. All reversible logic gates can be built using Fredkin gates. In fact there is a whole type of logic I left out of my previous article: reversible or conservative logic that is specially adapted for reversible computing.
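A Fredkin gate is just a controlled swap, and its key properties can be checked in a few lines of Python (a sketch of the logic only, not tied to any hardware implementation):

```python
def fredkin(c, a, b):
    """Fredkin (controlled-swap) gate: if control bit c is 1, swap a and b."""
    return (c, b, a) if c else (c, a, b)

# Properties worth checking:
#  - it is its own inverse: applying it twice restores any input;
#  - it is conservative: the number of 1 bits never changes, so no
#    information is ever destroyed;
#  - ordinary logic embeds in it: with inputs (x, y, 0) the third output
#    is x AND y, at the price of extra "garbage" outputs to carry along.
```

Those garbage outputs are the baggage of reversible logic: they must be kept (or uncomputed later) rather than simply discarded, because discarding them is exactly the erasure that costs energy.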

Quantum Computing
There is another group of researchers that also has an interest in reversible computing even though they have no desire to actually reverse their computations: the quantum computer scientists. This is a large topic but the central idea is simple. In quantum mechanics it is possible to arrange for the existence of physical states that are a superposition, a kind of mixture, of different physical states. The most famous such state is Schrödinger's Cat, which is both dead and alive. In a quantum computer we can arrange for a single computer to perform many computations simultaneously by being in a superposition of many physical states. Unfortunately we can only get useful output from one of these states, but with some cunning, before reading the output we can mix these states together in such a way as to perform useful computations that would run much more slowly on classical (i.e. non-quantum) computers. But one of the key points about quantum computers is that they need to be reversible. Unfortunately there is a catch that makes them harder to build than energy-saving computers. If we are trying to save power and unintentionally make our computer ever so slightly non-reversible then it's no big deal: we just waste some power. But a quantum computer needs to be 100% reversible or it is almost completely useless. This is a very difficult task and so far only the simplest of quantum devices have been built.

Computing Sensitivities
Time to bring things back down to Earth, at least metaphorically if not literally. Consider the task of the rocket scientists at NASA trying to design the shape of the wings for the space shuttle so as to maximize safety on re-entry. They have a need to reverse computations too. The idea is this: they can run fluid dynamics simulations to understand how the wing interacts with the atmosphere so as to compute just how the wing will be heated. All useful stuff, but once they have made such a computation how should they modify the wing so as to lower temperatures? The obvious thing is to run many simulations with different input parameters and see how the temperature depends on these variables. But if there are, say, a thousand input parameters to the simulation then we need to perform at least 1000 simulations just to get a rough idea of how the result depends on the inputs. What we really need is not only to compute the output of a simulation but also how that output varies with the input. How an output varies with an input is called the sensitivity of the output with respect to that input. (Anyone who knows their calculus will recognise that this is just another name for the derivative.) But a fluid dynamics simulation is a complex operation, and computing the sensitivities is a difficult task. We can, however, break it down into simpler steps. If the final output is X, and X depends on A, which in turn depends on B, which depends on C and so on, then we can find the sensitivity of X with respect to A, then use that computation to find the sensitivity of X with respect to B, and so on. It turns out that if we take our original program and rewrite it backwards, it becomes a relatively routine process to write a new piece of code that computes the sensitivities. (The backwards code is known as the adjoint code.) So even if we have thousands of control variables we can compute all of the sensitivities by running the program forward once and then running a modified version once backwards.
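Here is a toy version of that forward-then-backward pattern in Python (the chain of functions is made up for illustration; a real solver would have thousands of steps):

```python
import math

# A toy adjoint computation for the chain X = f(A), A = g(B), B = h(C).
# Forward pass: run the chain, recording the intermediate values.
# Backward pass: sweep those records in reverse, accumulating dX/d(each)
# via the chain rule.

def forward(c):
    b = c * c            # B = h(C) = C^2,    so dB/dC = 2C
    a = math.sin(b)      # A = g(B) = sin(B), so dA/dB = cos(B)
    x = 3.0 * a          # X = f(A) = 3A,     so dX/dA = 3
    return x, (c, b, a)  # output plus the "tape" of intermediates

def backward(tape):
    c, b, a = tape
    dx_da = 3.0                   # sensitivity of X w.r.t. A
    dx_db = dx_da * math.cos(b)   # ... w.r.t. B, one chain-rule step back
    dx_dc = dx_db * 2.0 * c       # ... w.r.t. C, the original input
    return dx_da, dx_db, dx_dc
```

One forward run records the tape; one backward sweep then yields every sensitivity at once, where naive finite differencing would need a fresh simulation per input parameter.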
These methods aren't just useful for re-entry, they have been used to design efficient ways to kick satellites into higher orbits as well as countless applications that have nothing to do with space. Reverse computation really is turning out to be quite useful!

Reversible Programming in Haskell
So given all this need for reverse computation, are there any computer languages out there that make it easy for us? Life for the NASA researchers can be tough - they often have to work with concrete code written in FORTRAN with little abstraction to help them. (Though some tools exist.) It's slightly easier in C++, where we can use operator overloading to transparently record our operations so they can be played back in reverse. But at the other extreme of the abstraction scale is a language that is ideally suited to working with reverse computation: Haskell. In some sense Haskell doesn't strictly work in any particular order. That's because it's what is known as a lazy language, and it will try to put off doing actual work until the last possible moment. If the result of a computation isn't actually needed yet, it will just move on to the next thing that is needed. If the thing we skipped was never actually needed, it will never get computed at all. This is a great form of optimization! But Haskell is also unusual in that it is a pure functional language. Everything is a function call, but function calls can only return results; they can't have side effects. Going back to the example we started with: you can't write a = a+1 normally in Haskell, because after we have executed the line we have caused a side effect - the value of a has been updated. You might imagine this causes major problems: for example in C, printf returns the number of characters printed but also has the side effect of printing stuff out. We can't do this in Haskell! Haskell functions can only return things. Nonetheless, we can still write functions that return an object that contains a list of all the I/O that the program would have done if it were allowed to do I/O. Sounds crazy, but it works well enough that you can write a web server in Haskell without too much difficulty.
(This is where the laziness comes in, we can start returning the beginning of the list of output even though we haven't bothered to compute the rest of the list yet. You don't have to wait for the end of your program run to see the output!)
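For readers who don't know Haskell, Python generators give a rough (and much weaker) analogy: the body below only runs as far as the values actually demanded, so the start of an output stream can be consumed before the rest of it exists.

```python
def responses():
    # Each line is only produced at the moment a consumer asks for it.
    yield "HTTP/1.0 200 OK"
    yield "Content-Type: text/plain"
    # Never reached unless someone demands a third line:
    raise RuntimeError("the rest was never needed")

r = responses()
first = next(r)    # forces only the first yield
second = next(r)   # forces the second; the error is never raised
```

The generator is a crude stand-in for a lazy list: unconsumed tail, uncomputed work.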

Anyway, in order to 'fake' operations that have side effects in Haskell an object called a monad is used. One way to think about a monad is that it is the glue that sticks one statement to the next. For example in C you can think of the semi-colon character as a function that first executes the statement on its left and then executes the statement on its right. But in Haskell the monad is completely under our control and we can write new ones if we like. And if we're in the mood we can use a monad that performs the computation on the right before the computation on the left. In fact, one Haskell researcher uses just such a technique to solve the kind of sensitivities problem that is faced by the NASA researchers.
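A crude model of that "glue" in Python (real monads involve more machinery; this only shows sequencing as a first-class, swappable operation):

```python
# Model a statement as a function from state to state; the glue joining
# two statements is then an ordinary function that we are free to replace.

def then(f, g):
    # Usual sequencing, like C's semicolon: run f, then run g.
    return lambda state: g(f(state))

def then_reversed(f, g):
    # The same glue with the order flipped: run g first, then f.
    return lambda state: f(g(state))

step_a = lambda log: log + ["a"]
step_b = lambda log: log + ["b"]

forwards = then(step_a, step_b)([])            # runs a, then b
backwards = then_reversed(step_a, step_b)([])  # runs b, then a
```

Swapping the combinator flips the execution order without touching the statements themselves, which is the trick the reverse-order monad exploits.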

A Final Thought
Well, we're coming to the end of our survey of reverse computation. We've seen applications ranging from blue-sky quantum computing to mundane debugging. So now I'll leave you with one last thing to ponder. If someone develops an artificial intelligence on a reversible computer, how would it feel to be that intelligence running backwards?





Related Links
o reversible computer
o research
o simulations of world-wide computer networks
o how much power
o fundamental physical limit
o no hair
o debate
o reversible
o Fredkin gates
o previous article
o reversible or conservative logic
o central
o routine
o tools
o C++
o Haskell
o monad
o such a technique
o Also by My Alternative Account

Turning back the CPU clock | 218 comments (187 topical, 31 editorial, 0 hidden)
Running cpu operations in reverse huh? (3.33 / 15) (#2)
by Tex Bigballs on Mon Sep 08, 2003 at 02:28:35 PM EST

Well if the servers here get any slower, you might just see this happen on K5
(p.s. to the other trolls for this article: i left "monad" for you guys. hth, tex)

similar idea, different approach. (4.50 / 6) (#10)
by pb on Mon Sep 08, 2003 at 02:59:16 PM EST

This same issue often crops up with filesystems and databases, and the solution used there is journalling -- adding in whatever extra information is necessary to later figure out what you were doing so you can go back a few steps, correct a failure, or possibly arbitrarily roll back to a previous state.

Also, it has been possible to save the state of a computer at a given time and reload it later--however, this generally involves dumping the entire contents of its RAM to disk (many laptops do this), or alternatively just that of the currently running task (DOSShell did this, for example). Recent versions of Windows try to do the same thing for the hard drive, by establishing 'checkpoints'.

Obviously you would want to make the process more efficient (say, by devoting a specific segment of RAM and disk to journalling changes, and only record the differences from specific points) and perhaps add in some extra hardware to do some of this for you (perhaps on a PCI card using DMA and whatnot).

An easy alternative (or test harness) would be to write/use some sort of virtualization software (like VMWare) and build your journalling capabilities into that first. Then the virtual CPU and its resources could run undisturbed, while the host computer does all the dirty work of logging.
"See what the drooling, ravening, flesh-eating hordes^W^W^W^WKuro5hin.org readers have to say."
-- pwhysall

I disagree (2.50 / 10) (#12)
by Tex Bigballs on Mon Sep 08, 2003 at 03:20:44 PM EST

[ Parent ]
hey, me too! [nt] (2.37 / 8) (#13)
by pb on Mon Sep 08, 2003 at 03:21:21 PM EST

"See what the drooling, ravening, flesh-eating hordes^W^W^W^WKuro5hin.org readers have to say."
-- pwhysall
[ Parent ]
Been done, actually (4.00 / 3) (#46)
by ghackmann on Mon Sep 08, 2003 at 10:08:06 PM EST

Omniscient Debugger takes periodic snapshots while debugging Java code, so you can unwind the stack and debug programs backwards. Unfortunately, IMHO the UI is so obtuse that actually using the debugger for much more than a tech demo is impractical. (And I frequently use gdb, so that's saying a lot.)

That said, I'm pretty sure it's supposed to be more of a proof-of-concept than something for day-to-day use, so hopefully some other debuggers will start integrating something like this.

[ Parent ]

Actually, (3.00 / 1) (#72)
by tkatchev on Tue Sep 09, 2003 at 05:57:57 AM EST

You could do this in hardware very easily simply by adding a second instruction cache.

   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

Solution for debugging already exists (4.55 / 9) (#22)
by StephenThompson on Mon Sep 08, 2003 at 05:02:26 PM EST

Soft-Ice already has a backtrace feature that lets you 'run' a program backwards to find bugs. It does this by recording the instructions as they get executed. This takes some ram and a bit of time, but it works today on PC logic.

And virus authoring. (3.00 / 1) (#85)
by Ranieri on Tue Sep 09, 2003 at 11:40:29 AM EST

Not that I would know of course.
Taste cold steel, feeble cannon restraint rope!
[ Parent ]
Trite rubbish (1.51 / 33) (#26)
by A Proud American on Mon Sep 08, 2003 at 06:12:41 PM EST

This is the stuff of second-rate computer scientists at the graduate level -- aka those who couldn't get a real job and work on real problems.

The weak are killed and eaten...

Let's work this out (4.33 / 3) (#28)
by My Alternative Account on Mon Sep 08, 2003 at 07:24:29 PM EST

Let's say we have a 3GHz Pentium. Suppose it clobbers 32 bits of information per instruction at one instruction per clock cycle (many instructions don't clobber data in the arithmetic registers; on the other hand, a branch instruction can jump anywhere, so any instruction needs to keep track of whether it's being executed because it was the target of a jump or simply because it follows the previous instruction). Assume we're running at a temperature of 270K. That gives... about 4×10^-10 watts. Pretty small. I'd guess that's of the same order of magnitude as what a single logic gate puts out today, maybe more. So it's not as insignificant as, say, the limit on memory sizes that comes from Bekenstein's bound. (See the article by Seth Lloyd I link to above.)

Except that... (4.00 / 2) (#47)
by Pseudonym on Mon Sep 08, 2003 at 10:31:53 PM EST

...you're not counting the number of destroyed bits correctly.

A three-terminal logic gate (e.g. NAND, NOR) takes two bits and returns one bit. This means that one bit is destroyed for every gate that information flows through. How many gates do you suppose are used every cycle in a modern Pentium?

sub f{($f)=@_;print"$f(q{$f});";}f(q{sub f{($f)=@_;print"$f(q{$f});";}f});
[ Parent ]
That's not what I'm trying to compute (3.00 / 1) (#49)
by the on Mon Sep 08, 2003 at 10:48:48 PM EST

I'm trying to compute the minimum energy computation to reproduce the functionality of a Pentium IV. If I compute all of the energy losses in a Pentium I'll just get whatever it is that Pentiums currently consume!

The Definite Article
[ Parent ]
Ah, OK (3.00 / 1) (#54)
by Pseudonym on Tue Sep 09, 2003 at 12:23:45 AM EST

I thought you were trying to understand where all the energy losses in a Pentium come from. That makes sense.

Even so, way more than 32 bits are going to get clobbered in most cycles, I would wager. FPU operations work on at least 80 bits. Most L1 cache operations work on at least 256 bits.

As a thought, it might be worth implementing a simple CPU (say, an older ARM) in as much reversible logic as you can (on paper). Someone's probably already done this.

sub f{($f)=@_;print"$f(q{$f});";}f(q{sub f{($f)=@_;print"$f(q{$f});";}f});
[ Parent ]
Cache line (3.00 / 1) (#60)
by dufke on Tue Sep 09, 2003 at 01:14:46 AM EST

FPU operations work on at least 80 bits. Most L1 cache operations work on at least 256 bits.

Hmm this makes me wonder... do bits that are loaded and then thrown away count? You're not actually using them to calculate anything. Unless you are doing a vector operation, you probably only want 32 or 64 of those bits, and you can't just load half a cache line...

On the other hand, don't forget the instruction bits. The average x86 instruction is, what, 32 bits or somesuch?
I am a Lurker. If you are reading this, I surfaced momentarily.
[ Parent ]

how to count it (3.00 / 1) (#82)
by glor on Tue Sep 09, 2003 at 11:09:09 AM EST

Hmm this makes me wonder... do bits that are loaded and then thrown away count?
It's not when they're thrown away, but when they're overwritten. When you write destructively, you increase the number of possible histories for the disk, which increases its entropy. Transfer of entropy is accompanied by transfer of heat: remember that dS = dQ/T, which is why moving the same heat (Q) from hot to cold temperature (T) increases the total entropy (S).

So, if the bits are written, the heat is generated. I'm not aware of any memory that checks the bits and only flips them if necessary, but that would generate less heat.

Disclaimer: I am not the most intelligent kuron.
[ Parent ]

IN SOVIET RUSSIA (1.62 / 24) (#29)
by noogie on Mon Sep 08, 2003 at 07:31:39 PM EST

fowards is backwards!

No, no, no... (3.42 / 7) (#34)
by evilpenguin on Mon Sep 08, 2003 at 08:16:13 PM EST

IN SOVIET RUSSIA, computer reverses you!
# nohup cat /dev/dsp > /dev/hda & killall -9 getty
[ Parent ]
In Soviet Russia, Goatse is ... (3.00 / 4) (#59)
by BlowCat on Tue Sep 09, 2003 at 01:14:08 AM EST

... nauseating looking at YOU!

[ Parent ]
Useful for branch prediction? (4.00 / 4) (#37)
by curien on Mon Sep 08, 2003 at 08:24:45 PM EST

I'm not a computer engineer, so if anyone has any corrections, please share.

As I understand it, modern CPUs are able to achieve their stratospheric frequencies by (among other things) elongating the instruction pipeline. A consequence of this is that the processor must be constantly fed with long chains of instructions in order to keep it busy. But this runs into a problem where a logical branch must occur (say, an if-statement). Modern (compilers? machine code interpreters?) use branch prediction to "guess" which of the branches will get executed, and the processor is fed the instructions from that branch. If the guess is wrong, the entire pipeline must be cleared before proceeding to the correct branch.

Perhaps a "reversible CPU" could simply reverse the last few instructions, saving valuable CPU cycles. If the reversion is cheaper than clearing the pipeline, you'd see a speed benefit.

Or am I totally off?

John Ashcroft hates me for my freedom.

Depends (4.00 / 2) (#43)
by Lacero on Mon Sep 08, 2003 at 09:34:45 PM EST

In the classic pipeline system the only useful information is the address of the next instruction, and nothing permanent can have been done before you find out what it is. This means you gain nothing by reversing the cpu compared to just flushing the pipeline.

But I heard modern GPUs, as used in nvidia graphics cards, have hundreds of pipeline stages, compared to 5 or so in the classic example used in lectures. I cannot even begin to imagine what those stages would do.

[ Parent ]

branch prediction occurs in hardware (4.50 / 4) (#50)
by Work on Mon Sep 08, 2003 at 10:53:07 PM EST

not by the compiler, though the compilers can have optimizations to avoid stupid or obvious branches among other things.

BP is the most complicated part of a processor and there's a bunch of ways to do it. I don't really see how 'reversing' instructions would be any faster than dumping them out of the pipeline, as you would have to enter the reverse instructions into the pipeline to execute. But before then you'd have to have hardware to decide how to reverse it, or whether it needs to be reversed (say an instruction is only partway through the pipeline - no need to bother). Then the hardware becomes increasingly complicated, and that slows things down.

The way to make branch prediction failures faster is to improve the correctness of the guess and to minimize the number dumped by moving branch execution earlier in the pipeline.

[ Parent ]

Sorry, you're totally off. (4.00 / 3) (#79)
by cburke on Tue Sep 09, 2003 at 10:19:42 AM EST

You have an interesting idea, but sadly not something that can improve branch performance. It may sound like a lot of work, but "clearing the pipeline" is actually very simple. While there are examples that show otherwise*, in general to "clear" the pipeline you have one signal that gets sent out that just marks all the instructions in the machine as invalid. So clearing the pipe, which is done in hardware like Work mentioned, takes only a few cycles.

Clearing the pipe is still bad because it punches holes in the pipeline.  Cycles you spent doing the wrong thing are cycles wasted.  Longer pipelines -- the part between getting the branch and deciding the branch, specifically -- leave longer holes, and thus more wasted cycles when you redirect.

That doesn't change if you undo instructions, unfortunately.

[ Parent ]

good article (2.20 / 10) (#40)
by originalbigj on Mon Sep 08, 2003 at 08:51:14 PM EST

In fact, so good I didn't read it.

AI is a myth (1.90 / 11) (#48)
by losang on Mon Sep 08, 2003 at 10:46:24 PM EST

In response to this comment

If someone develops an artificial intelligence on a reversible computer, how would it feel to be that intelligence running backwards?

Computers will never be able to think as we do.

hmm (3.66 / 3) (#51)
by milk on Mon Sep 08, 2003 at 10:57:02 PM EST

that's a rather arguable statement. no-one knows either way yet, so don't count your chickens before they've hatched and all that.

[ Parent ]
I know it (2.20 / 5) (#69)
by losang on Tue Sep 09, 2003 at 05:36:34 AM EST

Actually some people do know it. Your world is too limited to see this.

[ Parent ]
i don't know if you're a troll or not... (3.00 / 1) (#78)
by jolt rush soon on Tue Sep 09, 2003 at 10:03:44 AM EST

...but i'm interested to see what reasons you have for thinking this.
Subosc — free electronic music.
[ Parent ]
Empirical evidence. (3.00 / 1) (#87)
by tkatchev on Tue Sep 09, 2003 at 12:17:32 PM EST

For the vast majority of people, the difference between "AI" and "dumb machine" isn't the complexity of logical inferences (the "dumb machine" is already beating people hands-down in this department) but rather the fact that, for most people, "intelligence" means the ability to make decisions in a non-deterministic fashion, bypassing the laws of cause and effect.

   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

My reason (3.00 / 1) (#103)
by losang on Tue Sep 09, 2003 at 06:44:56 PM EST

Because the mind is not made of atoms.

[ Parent ]
What a coincidence! Software isn't made of... (4.00 / 2) (#104)
by the on Tue Sep 09, 2003 at 06:54:31 PM EST

...atoms either. That means you can do things like download it from a website without a single atom being transferred. Hey! Minds aren't made of atoms. Software isn't made of atoms. I see where you're leading me! Next you're going to argue that the mind is a kind of software and that we can therefore implement it on a computer.

The Definite Article
[ Parent ]
Software is not made of atoms (3.33 / 3) (#105)
by losang on Tue Sep 09, 2003 at 07:11:48 PM EST

Then what is it made of.

[ Parent ]
Bits of information (3.00 / 1) (#107)
by the on Tue Sep 09, 2003 at 07:37:09 PM EST

But don't ask what bits are made of. Bits are fundamental, they aren't made of anything and can exist in many different forms: in atoms, electrons, in electromagnetic fields or in any number of things that we are yet to discover. But the bits aren't actually made of these things. The bits form part of the state of these things. That's a quite different kind of existence. Not everything has to be made of atoms.

The Definite Article
[ Parent ]
O.K. then (1.00 / 1) (#117)
by losang on Tue Sep 09, 2003 at 09:42:04 PM EST

Well, why are they not made of atoms?

[ Parent ]
Say I have a computer (none / 0) (#136)
by the on Wed Sep 10, 2003 at 11:06:50 AM EST

And I send it in the mail to someone. Then what they receive will be made of the atoms in the computer I originally had.

If I email someone a piece of software no atoms at all get transferred. They receive the piece of software and yet no atoms were transferred.

Clearly the software is independent of the atoms because it can be transferred from one bunch of atoms to another. So whatever software is, it isn't the atoms that temporarily serve as home for it.

The Definite Article
[ Parent ]

That's not true. (none / 0) (#143)
by tkatchev on Wed Sep 10, 2003 at 01:21:30 PM EST

The only difference between paper and electronic mail is the fact that electronic mail can be assembled and destroyed automatically at great speeds.

It's still made of the same atoms, though.

   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

It's not made of the same atoms (5.00 / 1) (#147)
by the on Wed Sep 10, 2003 at 02:42:00 PM EST

I can have a bit of software at A and send it to B. The copy at A shares not one single atom with that at B. So whatever the software is, it's not atoms. It might reside temporarily in a substrate made of atoms (just as it can also reside in substrates like an EM field that is not made of atoms).

Consider the task of a person comparing the code at A and B to see if they are the same code. The last thing they would do is compare the atoms. The atoms are completely irrelevant to the software. Saying software is made of atoms is like saying that beer is made of glasses and kegs, after all you usually find beer in glasses and kegs. But that's confusing the content for the container.

The Definite Article
[ Parent ]

If no atoms were transferred (1.00 / 1) (#160)
by losang on Wed Sep 10, 2003 at 07:10:07 PM EST

how did the email get there?

[ Parent ]
When you send an email... (5.00 / 1) (#161)
by the on Wed Sep 10, 2003 at 07:18:57 PM EST

...no atoms are transferred. Not even electrons are transferred - they only move at a very low velocity and they can't travel at all through some types of CMOS logic gate. (Electrons travel through wires at velocities of the order of cm s^-1, but an email can travel down a wire at a speed something like a third of the speed of light. Clearly the email can't be identified with the electrons themselves.)

If you drop a pebble in a lake the waves propagate outwards. Nonetheless, no atoms are transferred. The atoms in the water basically move up and down. But the signal still propagates outward. There is more to this world than atoms and a transverse water wave is an excellent example. The atoms act as a substrate but a peak in the wave only resides temporarily in any particular group of atoms before it moves on.

Emails are similar in principle, though more complex in the sense that emails can pass through many different media (and a vacuum if they go by satellite) before arriving.

The Definite Article
[ Parent ]

That is very nice (1.00 / 1) (#162)
by losang on Wed Sep 10, 2003 at 07:38:03 PM EST

but how do you know no atoms are transferred.

[ Parent ]
Er...that's easy... (5.00 / 1) (#163)
by the on Wed Sep 10, 2003 at 07:44:27 PM EST

The devices I'm talking about (modems, wireless cards, satellites dishes, ethernet cables etc.) aren't designed to transmit atoms. What makes you think they do? If an ethernet cable transmitted atoms then it'd eventually wear down as all of the material passes out of it! It'd be easy to stop wireless transmissions by putting a wall between transceivers (OK, it does if you use a crappy wireless card like I have). If you transmit your signal by satellite it goes at the speed of light. I'd like to see someone fling atoms reliably through an atmosphere that fast!

The Definite Article
[ Parent ]
You still have not answered the question. (1.00 / 1) (#164)
by losang on Wed Sep 10, 2003 at 07:49:55 PM EST

How did the email get there if no atoms are transferred?

[ Parent ]
I answered implicitly (5.00 / 1) (#166)
by the on Wed Sep 10, 2003 at 07:55:11 PM EST

By talking about water waves. If you really wanted (and I'm sure you don't) I could write out a bunch of equations that describe how a signal can be transferred through matter, say, without any matter itself being transferred. You can see in a water wave how the signal gets transferred without transference of matter. Emails are just like that. They may be electromagnetic waves in a vacuum or EM pulses down a wire. Either way a signal arrives without matter being transferred.

The Definite Article
[ Parent ]
I don't want implicit explanations (1.00 / 1) (#167)
by losang on Wed Sep 10, 2003 at 07:57:01 PM EST

Please either answer the question or stop writing. BTW, if you would like to use equations that would be very nice.

[ Parent ]
this guy... (none / 0) (#169)
by jolt rush soon on Wed Sep 10, 2003 at 08:18:10 PM EST

...is fractally idiotic.
Subosc — free electronic music.
[ Parent ]
Shhh...I'm curious to see how his mind works (none / 0) (#170)
by the on Wed Sep 10, 2003 at 08:27:37 PM EST

He could be a troll but I'm suspecting not.

The Definite Article
[ Parent ]
The fact that (none / 0) (#172)
by losang on Wed Sep 10, 2003 at 08:47:51 PM EST

you may think I am a troll shows how little you understand.

[ Parent ]
Hey! (none / 0) (#174)
by the on Wed Sep 10, 2003 at 09:04:48 PM EST

I said 'might' and I'm still answering questions very politely and listening to everything you say. So get back to questioning my answers :-)

The Definite Article
[ Parent ]
You don't read very well. (none / 0) (#175)
by losang on Wed Sep 10, 2003 at 09:16:33 PM EST

Maybe I should start writing in baby English. The fact that you would even consider me a troll shows the limitations of your knowledge.

From before...

Please either answer the question or stop writing. BTW, if you would like to use equations that would be very nice.

[ Parent ]

I've changed my mind (none / 0) (#178)
by the on Wed Sep 10, 2003 at 10:32:04 PM EST

You are an idiot. Any troll could write the stuff you've written. You've not written anything particularly insightful or clever. I was hoping it was going to lead somewhere but clearly it isn't. Your writing is indistinguishable from that of a troll, and if you can't see that then that's a pity for you.

The Definite Article
[ Parent ]
OK. (none / 0) (#179)
by losang on Wed Sep 10, 2003 at 10:35:10 PM EST

But it helps the conversation if you have an original thought every once in a while.

You still have not answered my question.

[ Parent ]

My mistake (none / 0) (#197)
by losang on Thu Sep 11, 2003 at 10:46:13 PM EST

I thought I was having an intelligent conversation with an intelligent person. Sorry about the mistake.

[ Parent ]
Hey! Cut that out! (none / 0) (#199)
by the on Fri Sep 12, 2003 at 08:38:16 PM EST

Now...back to the question. Could you restate it very carefully as I think I've already answered it.

The Definite Article
[ Parent ]
Still want more? (none / 0) (#200)
by losang on Fri Sep 12, 2003 at 09:37:50 PM EST

I asked: How did the email get there if no atoms are transferred?

No implicit explanations.

[ Parent ]

Pussy (none / 0) (#201)
by losang on Sat Sep 13, 2003 at 05:59:04 PM EST


[ Parent ]
You can cut that out too. (none / 0) (#202)
by the on Sat Sep 13, 2003 at 07:06:06 PM EST

I do have other things to do from time to time.

Information can be transferred without atoms being transferred. Atom A interacts with atom B. Some of the kinetic energy in A is transferred to B without A itself being transferred. Then B interacts with C and so on. That way a signal, in the form of kinetic energy, can be transferred along a chain of atoms without any atoms being transferred. But it doesn't even have to involve atoms. A varying electromagnetic field at point X can generate another one at Y which in turn produces one at Z and so on. So a signal in the form of an EM wave can travel through a vacuum without even the medium of atoms. It is well known how to encode emails as such signals.
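A toy numerical sketch of that chain picture, for what it's worth (every parameter here is made up for illustration, nothing is physical): kick the first mass in a line of coupled masses and watch the energy move down the chain, even though each mass only wiggles around its own rest position.

```python
# Toy model: a pulse of kinetic energy travelling along a chain of
# coupled masses. The energy (the "signal") moves down the chain,
# but no mass travels with it. All parameters are illustrative.

N = 200          # number of masses in the chain
k = 1.0          # spring constant coupling neighbours
m = 1.0          # mass of each particle
dt = 0.05        # integration time step

x = [0.0] * N    # displacement of each mass from its rest position
v = [0.0] * N
v[0] = 1.0       # kick the first mass: this is the "message"

def peak(xs, vs):
    """Index of the mass with the largest rough local energy."""
    e = [0.5 * m * vi**2 + 0.5 * k * xi**2 for xi, vi in zip(xs, vs)]
    return e.index(max(e))

for step in range(2000):
    # semi-implicit (symplectic) Euler update of the coupled chain
    for i in range(N):
        left = x[i - 1] if i > 0 else 0.0
        right = x[i + 1] if i < N - 1 else 0.0
        a = k * (left + right - 2 * x[i]) / m
        v[i] += a * dt
    for i in range(N):
        x[i] += v[i] * dt

# The energy peak is now far from mass 0, yet every mass has stayed
# within a small distance of its own rest position.
print(peak(x, v))
```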

The Definite Article
[ Parent ]

Oh really, (none / 0) (#203)
by losang on Sat Sep 13, 2003 at 08:40:10 PM EST

if it is so well known please explain how emails are encoded as the signals you describe.

[ Parent ]
In theory I could keep doing this... (none / 0) (#207)
by the on Mon Sep 15, 2003 at 12:54:39 PM EST

I could describe how digital signals are modulated as analog electromagnetic waves. I could describe the protocols used to encode the email and the transport layer. I could describe the error-correcting codes used, and so on. I could go into quite a bit of detail. But it would take a long time and the information is widely available on the web and in books.
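For what it's worth, here is a toy sketch of just the bottom of that stack: the bits of a message keyed onto a carrier wave and recovered again (simple on-off keying). Everything here is illustrative; real modems use far more sophisticated modulation and error correction.

```python
# Toy "text -> signal -> text" round trip via on-off keying:
# the carrier is present for a 1 bit and absent for a 0 bit.

import math

def bits_of(text):
    """Yield the bits of a UTF-8 encoded string, MSB first."""
    for byte in text.encode("utf-8"):
        for i in range(7, -1, -1):
            yield (byte >> i) & 1

def modulate(text, samples_per_bit=16, carrier_cycles=4):
    """On-off keying: carrier present for a 1 bit, silence for a 0."""
    signal = []
    for bit in bits_of(text):
        for n in range(samples_per_bit):
            phase = 2 * math.pi * carrier_cycles * n / samples_per_bit
            signal.append(bit * math.sin(phase))
    return signal

def demodulate(signal, samples_per_bit=16):
    """Recover bits by measuring the energy in each bit period."""
    bits = []
    for start in range(0, len(signal), samples_per_bit):
        chunk = signal[start:start + samples_per_bit]
        energy = sum(s * s for s in chunk)
        bits.append(1 if energy > samples_per_bit / 4 else 0)
    return bits

msg = "hi"
assert demodulate(modulate(msg)) == list(bits_of(msg))  # round trip
```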

Most importantly of all - I don't see where any of this is going and these details seem hardly relevant to me. Atoms don't need to be transferred to transfer an email. I don't feel any particular need to defend that statement. It seems to me that if someone wishes to argue the contrary point then the burden of proof lies with them.

The Definite Article
[ Parent ]

I need to step in.. (1.00 / 1) (#209)
by losang on Mon Sep 15, 2003 at 10:35:50 PM EST

since you are off track and going nowhere I am going to make this short and sweet.

First, your analogy with water and waves is not accurate. In the case of a water wave, the wave is nothing more than a particular arrangement of water atoms relative to one another. There is nothing special about this shape. When comparing your example with electical signals you claim there is something that is being transmitted along the wave. In the case of water there is nothing being transmitted.

Second, if software is not made of atoms you need to explain how I can hold a Windows CD in my hand.

Lastly, I spent 6 years at one of the country's best technical universities. I have degrees in physics and math. After a few of your posts I knew the kind of person you are. My advice to you is that you put down your technical and science books and begin thinking for yourself and a bit more critically. That is, question the very tradition from which you come.

For example, a little investigation will show that the ability of equations to make empirical predictions does not necessitate a particular ontological connection between the mathematical model and reality. This is exemplified in physics, where many contradictory theories predict the exact same results. If you need a particular example consider the so-called 'wave-particle duality.'

Now that you have seen the time and depth I have put into investigating these issues maybe you will think before you write next time.

[ Parent ]

The least you could do... (none / 0) (#212)
by losang on Wed Sep 17, 2003 at 08:28:44 PM EST

is acknowledge defeat.

[ Parent ]
What happened... (none / 0) (#217)
by losang on Thu Sep 25, 2003 at 05:48:34 AM EST

professor? Did you give up without a fight?

[ Parent ]
I think I'm in love. (none / 0) (#194)
by tkatchev on Thu Sep 11, 2003 at 07:06:57 PM EST


   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

A/S/L? (none / 0) (#195)
by the on Thu Sep 11, 2003 at 07:26:27 PM EST


The Definite Article
[ Parent ]
Sorry... (none / 0) (#173)
by losang on Wed Sep 10, 2003 at 08:49:19 PM EST

this discussion is for intelligent people.

[ Parent ]
mind as in brain or mind as in thoughts or what? (none / 0) (#146)
by jolt rush soon on Wed Sep 10, 2003 at 02:17:13 PM EST

because it's conceptual in the same way your 'mind' is. what is your 'mind' made of?
Subosc — free electronic music.
[ Parent ]
Not transferring atoms. (none / 0) (#131)
by tkatchev on Wed Sep 10, 2003 at 08:38:53 AM EST

Tell that to my AMD Athlon when it is about to explode from overheating.

   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

Those aren't atoms. (none / 0) (#138)
by Canthros on Wed Sep 10, 2003 at 12:34:18 PM EST

Those are most likely electrons.

It's now obvious you are either A) Gay or B) Female, or possibly both.
[ Parent ]
Electrons aren't being transferred anywhere (none / 0) (#139)
by the on Wed Sep 10, 2003 at 12:41:07 PM EST

Almost all of that heat comes from the AC power which has a zero net flow of electrons. They just wiggle back and forth in a 60Hz sinusoidal dance.
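That claim is easy to check numerically: integrate a 60Hz sinusoidal current over one full cycle and the net charge through the wire comes out zero, while the dissipated energy I^2 R does not. The amplitude and resistance below are made-up illustrative values.

```python
# Zero net electron flow, nonzero heat: integrate one 60 Hz cycle.

import math

f = 60.0          # mains frequency, Hz
I0 = 1.0          # current amplitude, A (illustrative)
R = 10.0          # resistance, ohms (illustrative)
steps = 100000
T = 1.0 / f
dt = T / steps

net_charge = 0.0  # integral of I dt over one cycle
energy = 0.0      # integral of I^2 R dt over one cycle
for n in range(steps):
    i = I0 * math.sin(2 * math.pi * f * n * dt)
    net_charge += i * dt
    energy += i * i * R * dt

print(abs(net_charge))   # ~0: no net charge transferred
print(energy)            # > 0: heat is still produced
```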

The Definite Article
[ Parent ]
Of course they wiggle back and forth. (none / 0) (#142)
by tkatchev on Wed Sep 10, 2003 at 01:15:10 PM EST

Just like the water in your tap.

   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

Not like water in a tap (none / 0) (#148)
by the on Wed Sep 10, 2003 at 02:57:44 PM EST

Because usually water supply isn't AC!

The Definite Article
[ Parent ]
Good (4.00 / 2) (#93)
by the on Tue Sep 09, 2003 at 02:20:06 PM EST

I'm more interested in computers that think differently from us. If they thought like us then talking to them would be just like talking to other people. I can do that any time. I look forward to conversing with artificial intelligences that think in ways that are completely alien to me. Especially if they are smarter than me and can teach me new stuff.

The Definite Article
[ Parent ]
My theory. (5.00 / 1) (#118)
by mindstrm on Tue Sep 09, 2003 at 09:56:24 PM EST

That computer will feel exactly the same way. It only looks like it's running backwards to us.. the experience, the pattern, for the computer, the relationships that exist within it, will still feel the same way.

[ Parent ]
AI reversible? (none / 0) (#119)
by thrae on Tue Sep 09, 2003 at 10:00:03 PM EST

If true AI is ever developed, could it possibly be reversible? It's hard to fathom how such a complex system could work without ever blowing away values. It would presumably take in new information as time goes on, so unless it keeps expanding itself, it would eventually run out of space to store the information.

[ Parent ]
Looks like you got picked up... (3.50 / 6) (#53)
by thelizman on Tue Sep 09, 2003 at 12:20:30 AM EST

...over at OSNews. Just when you thought K5 was completely in the shitter...

"Our language is sufficiently clumsy enough to allow us to believe foolish things." - George Orwell
OSNews links to K5, the story is on K5 only (3.40 / 5) (#58)
by BlowCat on Tue Sep 09, 2003 at 01:10:16 AM EST

Which makes me think that My Alternative Account is Eugenia Loli-Queru

[ Parent ]
OSnews comment (3.33 / 3) (#61)
by Repton on Tue Sep 09, 2003 at 01:16:56 AM EST

I don't know enough to evaluate this comment, but the guy acts like he knows what he's talking about :-) → comment.

They say that only an experienced wizard can do the tengu shuffle..
[ Parent ]

I think the guy is a narrow... (3.66 / 3) (#63)
by the on Tue Sep 09, 2003 at 01:35:15 AM EST

...minded little moron but you can probably guess that from my replies to him :-)

The Definite Article
[ Parent ]
WARNING! METACOMMENT: do not read (2.80 / 5) (#123)
by felixrayman on Wed Sep 10, 2003 at 01:37:00 AM EST

Just taking this opportunity to accept your admission of defeat in the debate we were having the other day. It takes a guy with real class to admit he lost, which is what I assume you were doing when, after being soundly whipped in the thread involving this post, you responded to it twice in semi-coherent fashion, then rated it a zero, then went back and rated 10 other comments of mine to zero. When I get a post with 8 ratings of 5 and one of zero, and the zero comes from the guy whose post I responded to, I know I just kicked someone's ass. Anyway, like I said, it takes real class to admit you got your ass kicked. Cheers!

Call Donald Rumsfeld and tell him our sorry asses are ready to go home. Tell him to come spend a night in our building. - Pfc. Matthew C. O'Dell

[ Parent ]
IHBT... (none / 0) (#177)
by piranha jpl on Wed Sep 10, 2003 at 09:26:56 PM EST

Into rating this comment 5. Notice how the author and content are identical to the one I'm replying to? I didn't until I noticed you're actually spamming the same message to thelizman's comments. (I've rerated that comment to 2 as a result.) You've also been systematically down-rating dipierro.

It's really unfortunate thelizman is abusing his trusted user status.  Please don't use it as an excuse to (further?) be childish. (I say 'further?' because I don't know--and don't care who "started it".)

- J.P. Larocque
- Life's not fair, but the root password helps. -- BOFH

[ Parent ]

WARNING: Don't read this one either! (1.00 / 2) (#196)
by felixrayman on Thu Sep 11, 2003 at 07:37:14 PM EST

You've also been systematically down-rating dipierro.

If you dug a little deeper you would realize that dipierro was at one time systematically down-rating me.

So, I certainly could have responded to thelizman the same way I responded to dipierro. Which, in a perfect world, would have been the ideal way to respond to someone with trusted user status rating 10 of my posts to 0 because he lost an argument, ( You don't go back and rate 10 of someone's comments to zero when you felt you WON an argument ) but this scheme was unworkable because 1) I don't ( and am way too blunt to ever have ) trusted user status, and 2) The guy has posted over 3000 comments and I don't feel like writing a Perl script to rate them all.

So I figured the next best way to deal with it, since ignoring something like that isn't my style, would be to point it out in a few places. And it certainly wasn't a troll, you were in fact warned in the title not to read it as it was as much of a waste of time to read as it was to write, much as this comment is.

Call Donald Rumsfeld and tell him our sorry asses are ready to go home. Tell him to come spend a night in our building. - Pfc. Matthew C. O'Dell

[ Parent ]
I didn't "lose" any argument with you. (2.33 / 3) (#205)
by thelizman on Sun Sep 14, 2003 at 11:51:12 PM EST

You're a mindless troll, and got 10 well deserved 0's because of your trollish content.

"Our language is sufficiently clumsy enough to allow us to believe foolish things." - George Orwell
[ Parent ]
You got your ass kicked and you fully realize it. (none / 0) (#210)
by felixrayman on Tue Sep 16, 2003 at 02:02:29 AM EST

On Mon Sep 8th, 2003 at 01:38:11 AM EST, I made this comment. You came back that morning and replied to it. You came back the morning after that and replied to it again, and after replying to the comment a second time, spent the next five minutes rating related and unrelated comments of mine to 0.

These are not the actions of someone who won an argument. These are the actions of someone who realized that in debate, he was quite simply outclassed. Post whatever you want, you know the facts as well as I do. You can not debate with me on any level whatsoever. You are a moral coward, you are a near illiterate, and you are pathetic. You accuse me of not having "anything better to do than attempt to spread your bereft ideaology on some obscure Internet blog" while in fact, you were the one who has posted over 3000 comments to that blog while I have posted less than 400. Several of the comments I posted, in contrast to yourself, had actual content.

To sum up, you are a mewling pathetic fucktard who aspires one day to reach the level of humanity of Gollum, and perhaps aspires too high. You can not debate, you regularly get slapped down, and your only response is to rate your opponents' comments to 0. One wonders why you post responses to comments you have rated to 0. Moral bankruptcy is the most likely answer.

Call Donald Rumsfeld and tell him our sorry asses are ready to go home. Tell him to come spend a night in our building. - Pfc. Matthew C. O'Dell

[ Parent ]
thelizman didn't abuse his trusted user status (2.33 / 3) (#206)
by thelizman on Sun Sep 14, 2003 at 11:53:07 PM EST

I rated this guys troll comments as 0 because that is precisely what they deserved.

"Our language is sufficiently clumsy enough to allow us to believe foolish things." - George Orwell
[ Parent ]
I know this article is good... (3.20 / 5) (#55)
by supahmowza on Tue Sep 09, 2003 at 12:38:09 AM EST

I just wish I understood everything I was reading so I could post something worthwhile here.  At any rate, it appears to be a well written and well researched (wish I could prove that, lol) article on an interesting topic.  Zero energy computing? Sounds good to me.

Drugs are the solution to all life's problems
Well, drugs and handguns
Say one is 100 lines back (3.25 / 4) (#57)
by CoolName on Tue Sep 09, 2003 at 01:04:55 AM EST

The detection of the error is 100 lines forward. If the error is 100 lines back, the evidence of the error is non-existent, though the line 100 lines back is indeed the error. The computer has been reversed. No evidence of an error is given by a computer that has been reversed, because the computer has been stepped back from the evidence of the error. If one is just hoping to pick up a syntax error, an interpreter will do that. There is no reason for a reversible debugging device to stop at any particular line: there is no evidence of error, so there is no guide for the debugger. Even if one knows the answer, the computer is unable to compute how the line 100 lines back affects the answer, because the 100 successive lines have been reversed out of existence. Reversible computations are interesting, but reversible computers as debugging devices are, I believe, most likely unhelpful.
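For what it's worth, practical reverse debuggers sidestep this objection by logging rather than uncomputing: an undo entry is recorded per step, so the forward state (the evidence of the error) is still in hand while you step backwards. A toy sketch of the idea, not any real debugger's implementation:

```python
# Toy reversible interpreter state: destructive assignments are made
# reversible by recording each variable's previous value in a log.
# Stepping back restores old state without discarding the evidence.

class ReversibleVM:
    def __init__(self):
        self.vars = {}
        self.undo_log = []   # (name, previous value) per executed step

    def assign(self, name, value):
        self.undo_log.append((name, self.vars.get(name)))
        self.vars[name] = value

    def step_back(self):
        name, old = self.undo_log.pop()
        if old is None:
            del self.vars[name]
        else:
            self.vars[name] = old

vm = ReversibleVM()
vm.assign("a", 1)
vm.assign("a", 2)          # destructive overwrite, normally irreversible
vm.step_back()
assert vm.vars["a"] == 1   # earlier state recovered from the log
```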

"What does your conscience say? -- 'You shall become the person you are.'" Friedrich Nietzsche

You have breakpoints?!! (4.00 / 2) (#67)
by QuantumG on Tue Sep 09, 2003 at 04:39:16 AM EST

I envy you. Right now I'm trying to debug a segfault caused by our garbage collector (I think) and I'm doing it with printf statements.

Gun fire is the sound of freedom.
[ Parent ]
Can you (2.50 / 2) (#70)
by whazat on Tue Sep 09, 2003 at 05:51:08 AM EST

use gdb? Bit of a learning curve but it does have breakpoints.

[ Parent ]
welcome to solaris (none / 0) (#114)
by QuantumG on Tue Sep 09, 2003 at 09:11:23 PM EST

gdb ./boomerang
GNU gdb 5.3
Copyright 2002 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB. Type "show warranty" for details.
This GDB was configured as "sparc-sun-solaris2.8"...
(gdb) break main
Breakpoint 1 at 0x888a0: file driver.cpp, line 5.
(gdb) run
Starting program: /home/44/trent/stuff/work1/boomerang/boomerang
[New LWP 1]
[New LWP 2]
[New LWP 3]
[New LWP 4]
[New LWP 5]

Program received signal SIGSEGV, Segmentation fault.
0xff32dd68 in GC_find_limit (p=0xffbee8cc "", up=1) at os_dep.c:806
806 GC_noop1((word)(*result));
Current language: auto; currently c

Gun fire is the sound of freedom.
[ Parent ]
You have printf?!! (3.33 / 3) (#89)
by dufke on Tue Sep 09, 2003 at 01:23:35 PM EST

Hey, I worked on a microcontroller project not long ago, and not only did we not have a debugger, we didn't have printf. :-)

Progress (or lack thereof) through code was monitored by means of single-character printouts to a serial line.

(The platform has a debugger, I just don't have a budget to buy it on.)
I am a Lurker. If you are reading this, I surfaced momentarily.
[ Parent ]

I'm playing with some Atmel AVR stuff (2.50 / 2) (#91)
by the on Tue Sep 09, 2003 at 01:43:55 PM EST

I have a bunch of LEDs and a multimeter for debugging. Serial line? That's luxury :-)

Well, to tell the truth I did eventually get the serial line hooked up and even have an LCD display driver now.

The Definite Article
[ Parent ]

Yes, breakpoints helpful but? (none / 0) (#126)
by CoolName on Wed Sep 10, 2003 at 03:30:36 AM EST

You set a breakpoint and run the program forward from before the breakpoint. How is this reversible computing?

"What does your conscience say? -- 'You shall become the person you are.'" Friedrich Nietzsche

[ Parent ]

Useful for periodic errors (none / 0) (#214)
by coulson on Sat Sep 20, 2003 at 03:41:48 AM EST

This would be most useful for debugging periodic issues (thread timing, interaction with other processes, etc.)

Breakpoints aren't usually much help when trying to track down timing errors. For one thing, the error may only happen 1 in 10,000 times. You may never be able to reproduce it by hand. Second, breaking in and stepping through the code in a debugger may change the timing enough that the symptom no longer occurs.

In the best case scenario, you would be able to walk up to a hung machine (server, client's computer), sit down, and say, "Computer, roll back 10 seconds. Now replay." Momentarily ignoring the problems with voice control (;>), this still isn't easy -- production servers and end-user machines generally won't be running your debugger.

Still, a reversible debugger would let you run the program on a dev machine until it crashes (perhaps by calling it from a test harness loop). After you're finally able to reproduce the problem, you could then say "Aha! Now roll back 10 seconds and replay."

The only effective way I know of now to troubleshoot these errors is with aggressive logging. Log all input and output. Scatter debug printlns throughout sensitive code. Then start trying to backtrack or narrow down the section where the initial error occurred. It's a laborious and time-consuming process.
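The test-harness loop mentioned above can be sketched in a few lines. The flaky "program" below is a hypothetical stand-in, not a real workload:

```python
# Run a program repeatedly until it fails, keeping the failing run's
# output for post-mortem analysis (or, with a reversible debugger
# attached, for rolling back and replaying).

import subprocess
import sys

def run_until_failure(cmd, max_runs=10000):
    """Return (run_number, combined_output) of the first failing run,
    or None if every run succeeds."""
    for n in range(1, max_runs + 1):
        proc = subprocess.run(cmd, capture_output=True, text=True)
        if proc.returncode != 0:
            return n, proc.stdout + proc.stderr
    return None

# Stand-in program that fails roughly 1 run in 10:
flaky = [sys.executable, "-c",
         "import random, sys; sys.exit(1 if random.random() < 0.1 else 0)"]
result = run_until_failure(flaky)
if result is not None:
    n, output = result
    print(f"first failure on run {n}")
```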

[ Parent ]
Richard Feynman (3.00 / 3) (#64)
by Shimmer on Tue Sep 09, 2003 at 01:58:07 AM EST

I remember reading one of Richard Feynman's lectures where he mentions the idea of reversible computing.  He described a computer that used almost no energy -- a calculation would sort of slosh backward and forward randomly until it happened to get to the end.  Very slow and strange, but very energy efficient.

Perhaps someone who knows more about this could dig up a link to the lecture I'm thinking of?

-- Brian

Feynman Lectures on Computation (3.66 / 3) (#65)
by CoolName on Tue Sep 09, 2003 at 02:35:24 AM EST

"What does your conscience say? -- 'You shall become the person you are.'" Friedrich Nietzsche

[ Parent ]

Determinism again. (3.00 / 5) (#66)
by sakusha on Tue Sep 09, 2003 at 04:23:55 AM EST

This is all Determinism and Newtonian physics again. If your program is deterministic, if it would always yield the same results in the same sequence of steps, it could be reversible. But once you get to a certain level of complexity, especially if you use realtime data, the sequence becomes an epiphenomenon, results depend on the state of the system and are almost irreproducible. This is like the old Newtonian belief that if you could know the speed and location of every atom in the universe at time T, you could computationally determine where every atom would be at time T+1. Alas it don't work that way.

What kind of pseudo-science bullshit is this? (2.66 / 3) (#71)
by tkatchev on Tue Sep 09, 2003 at 05:53:41 AM EST

WTF do you get this idiocy?

You are flat-out wrong.

The fact that we currently lack computing power to calculate something doesn't make it non-deterministic.

For example, a SNES emulator can be run forwards and backwards at will because we can keep the whole state machine in memory and analyze it as much as we want.

Similarly, if you had a computer as large as the universe, you could calculate the position of any given atom in the universe at any given time.

P.S. A non-deterministic computer hasn't been invented yet.
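The emulator point can be made concrete with a sketch: a deterministic state machine is "run backwards" by snapshotting its state periodically and replaying forward from the nearest earlier snapshot. The toy update rule below is a stand-in for a real emulator core.

```python
# Rewinding a deterministic state machine via snapshot-and-replay,
# the technique rewinding emulators use. No step is ever uncomputed.

def step(state):
    """One deterministic step of a toy state machine."""
    return (state * 1103515245 + 12345) % (2**31)

def run(state, snapshots, n_steps, snapshot_every=10):
    """Run forward, saving (step_index, state) snapshots as we go."""
    for i in range(n_steps):
        if i % snapshot_every == 0:
            snapshots.append((i, state))
        state = step(state)
    return state

def rewind_to(target_step, snapshots):
    """Recover the state at target_step by replaying from the
    nearest snapshot at or before it."""
    base_step, state = max(s for s in snapshots if s[0] <= target_step)
    for _ in range(target_step - base_step):
        state = step(state)
    return state

snapshots = []
final = run(42, snapshots, 100)
# "Reverse" to the state as it was after 57 steps:
state_57 = rewind_to(57, snapshots)
```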

   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

So you counter pseudo-science BS with more BS? (3.00 / 3) (#75)
by marinel on Tue Sep 09, 2003 at 09:11:56 AM EST

You say:
Similarly, if you had a computer as large as the universe, you could calculate the position of any given atom in the universe at any given time.
This is impossible [unless you believe in intersecting parallel universes]...
Proud supporter of Students for an Orwellian Society
[ Parent ]
Now, you need to elaborate what you mean. (3.00 / 2) (#76)
by tkatchev on Tue Sep 09, 2003 at 09:25:43 AM EST

What do you mean?

Simply the fact that the universe cannot contain another universe larger than it inside itself?

If yes, then thanks, but I already know this fact. I merely cited this example to let you know how far the theoretical limits of computability stretch.

(I'm asking whether or not you really meant to bring in quantum uncertainty mumbo-jumbo into the discussion instead...)

Anyways, my point was that the Pentium IV on your desktop isn't anywhere close in complexity compared to the universe, so, obviously, you cannot talk about any sort of "non-determinism" in computing.

   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

this is what it means (3.00 / 2) (#90)
by phred on Tue Sep 09, 2003 at 01:34:38 PM EST

because you messed up and specialized in math, you neglected physics, HTH!

[ Parent ]
Ah no. (3.00 / 2) (#94)
by tkatchev on Tue Sep 09, 2003 at 02:56:30 PM EST

It means that you shouldn't throw around words like "non-deterministic" and "incomputable" if you don't know what they mean.

   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

More BS to counter the BS countering the BS (3.75 / 4) (#88)
by the on Tue Sep 09, 2003 at 12:35:34 PM EST

Few physicists expect that a classical computer can calculate the position of a given atom in the universe at any time. Not just in practice but in theory as well. The world seems to be described by quantum mechanics, and so there's no way we could actually measure an initial state to feed into the simulation, let alone simulate it with the required accuracy.

The Definite Article
[ Parent ]
Look, (2.66 / 3) (#95)
by tkatchev on Tue Sep 09, 2003 at 03:04:58 PM EST

The equations are all there, and given a large enough computer and an accurate enough initial state, you can extrapolate the state of all the atoms to any given point in time.

Now you tell me where the "non-determinism" is in this picture. (Hint: "we cannot build a computer powerful enough" and "we don't know the initial state" are the wrong answers.)

   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

We cannot measure an initial state... (3.50 / 2) (#97)
by the on Tue Sep 09, 2003 at 03:15:16 PM EST

...even in principle. Seems like a good answer to me. Sure, on a big enough computer (in a handy parallel universe) we could simulate the known laws that appear to hold in this universe. But how would we know that the simulation we're running corresponds precisely to the universe in which we are now living? There is no way to compare reliably, even in principle.

The Definite Article
[ Parent ]
Dude.. (3.50 / 2) (#99)
by Magnetic North on Tue Sep 09, 2003 at 03:49:31 PM EST

It's a thought experiment.

Practicality and uncertainty in measuring the initial state don't come into the equation.

"Given an initial state, could we.."

The results of the thought experiment are questions like; are the laws of physics deterministic? is space discrete?; is time discrete? etc.

[ Parent ]
there is no well defined initial state (2.50 / 2) (#100)
by Captain Segfault on Tue Sep 09, 2003 at 04:22:23 PM EST

Most modern interpretations of QM hold that the universe is simply not deterministic; there *is* no initial state to measure. Consider a hydrogen atom, with an electron and a proton. Suppose there were no uncertainty: then I could know that the electron is moving very slowly and is falling into the nucleus. There is nothing keeping electrons from all falling into their nuclei except their kinetic energy. On the other hand, if we have uncertainty, then the fact that an electron has a lot less mass than a proton comes into play: although the electron and the proton attract each other, the uncertainty in the momentum is too great for an electron held close to the nucleus, due to its small mass. As such, the electron eventually flies off with high momentum, rather than getting stuck in the nucleus. That process fundamentally requires uncertainty; you're not going to be able to simulate the universe in such a way that it will match up with reality, at least not with current physical models.
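The standard back-of-envelope version of that argument, for the record: confine the electron within a radius r, take the momentum scale p ~ ħ/r from the uncertainty relation, and minimise the total energy. The minimum lands at the Bohr radius rather than at r = 0, which is why the electron doesn't sit in the nucleus.

```latex
E(r) \;\approx\; \frac{\hbar^2}{2 m_e r^2} \;-\; \frac{e^2}{4\pi\varepsilon_0 r},
\qquad
\frac{dE}{dr} = 0
\;\Rightarrow\;
r_0 = \frac{4\pi\varepsilon_0 \hbar^2}{m_e e^2} \approx 0.53\ \text{\AA},
\qquad
E(r_0) = -\frac{m_e e^4}{2\,(4\pi\varepsilon_0)^2 \hbar^2} \approx -13.6\ \text{eV}.
```

This crude estimate happens to reproduce the exact ground-state energy of hydrogen; the point is only that the kinetic term ħ²/(2m_e r²) blows up as r shrinks, so total energy is not minimised by collapse.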

[ Parent ]
Notice something: (2.50 / 2) (#102)
by tkatchev on Tue Sep 09, 2003 at 05:21:36 PM EST

I said "state", not "coordinates".

The state of any given particle doesn't necessarily have to be a number; it could just as well be a distribution function.

Now, this makes the computational math more difficult, but I don't see how this fundamentally changes anything.

(Granted, from a physics context, where you need concrete results, i.e. coordinate vectors or whatnot, this uncertainty presents a problem; but from the standpoint of computability, you haven't changed anything. Certainly there isn't any sort of non-determinism going on here.)

Besides, what exactly do you mean by "matching up with reality"? Is a distribution function close enough to "reality"?

   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

Your problem is in the epistemology. (none / 0) (#120)
by ghjm on Tue Sep 09, 2003 at 10:52:16 PM EST

Your "thought experiment" is to suppose that a momentary state of the universe could be known (T), and then ask whether subsequent states (T+1) can be calculated from the initial state.

The problem is, what does "known" mean? Are you proposing to construct a computer capable of storing T and then calculating T+1? If so, you run into problems with conservation of mass/energy. Presumably a single elementary particle is the most compact possible way of storing the state of a single elementary particle. Unless you can determine a method of storing the state of multiple particles within a single particle, your computer will require the use of all the particles in the universe. Which particles will be left over to form the entity that throws the switch and reads the result?

Or does "known" mean that T can exist in an abstract symbolic form, outside and separate from the universe of material particles? If so, in what sense can T+1 be calculated from T? By definition, it cannot take on physical form, and therefore cannot be directly acted upon. No material computer or mathematician could ever perform the calculation.

Or does "known" mean that T, and therefore T+1, are knowable to God? But God has the property of being able to know the unknowable, including particularly the things that are non-deterministic from man's point of view. Being deterministic-to-God is not a particularly interesting property, since everything is.

Or does "known" mean that regardless of the impossibility of actually performing the calculation, the method is describable - that no new information is being added to the system over time, and T+1 (or by induction T+N) is entailed by the existence of T? This is a rather limited sense of the word "deterministic" but even this is incorrect. Suppose T contains at least one quantum wave function on the verge of collapse. When the wave function collapses, a photon either will or will not be emitted, but there is no way of predicting this except statistically (let's say there is a 30% chance the photon will be emitted). At T+1 the photon observably does or does not exist. What is the method by which we determine from T, in which only the wave function exists, whether T+1 will or will not contain this particular photon?

Or in other words, you're basically describing a 19th-century viewpoint. There was a lot of interesting work done in the 20th century, why don't you try reading some of it?


[ Parent ]

Ugh. (none / 0) (#127)
by tkatchev on Wed Sep 10, 2003 at 08:14:37 AM EST

I repeat again, the question isn't "can we build such-and-such computer", the question is "is such-and-such computer theoretically possible".

The answer is yes -- such a computer is theoretically possible. Anything that can be described using mathematical notation is theoretically computable, given an infinitely large state machine and infinite time. (The reverse is not necessarily true, I guess.)

And again, what we get as the output of such a computer isn't necessarily coordinate vectors -- the answer could be an abstract "state", for example, a distribution function.

There is absolutely nothing "non-deterministic" about probability theory. e.g. There is nothing non-deterministic about computing the probability of a thrown coin landing heads or tails.

   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

What do you mean by theoretically? (none / 0) (#187)
by ghjm on Thu Sep 11, 2003 at 09:38:46 AM EST

An infinitely large state machine and infinite time is not possible, even in theory.

And while you can perform deterministic calculations to find a probability, a toss of an actual coin is non-deterministic because it is dependent on quantum events that are themselves non-deterministic.

[ Parent ]

Come on. (none / 0) (#193)
by tkatchev on Thu Sep 11, 2003 at 03:56:43 PM EST

Yes, it is theoretically possible.

It's called a Turing machine, something you study in freshman computer science 101.

   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

Ah (none / 0) (#218)
by ghjm on Mon Sep 29, 2003 at 09:40:52 PM EST

I see the confusion. You're conflating arbitrarily large with infinitely large. If you prove that solving a particular problem would necessarily require a Turing machine to have an infinitely long tape, you have proven that the problem is not solvable by a Turing machine.

[ Parent ]
Dude... (3.66 / 3) (#101)
by the on Tue Sep 09, 2003 at 04:33:00 PM EST

Thought experiments. Yes. That's the problem. Heisenberg had a thought experiment of his own that prevents me from carrying out this thought experiment. The uncertainty principle prevents the state of the universe being measured even in principle. That means even in a thought experiment.

The Definite Article
[ Parent ]
Computability (2.33 / 3) (#77)
by sakusha on Tue Sep 09, 2003 at 10:02:45 AM EST

You have much to learn about computing theory. There are many finite but incalculable problems. Any computer program with a true random number generator is non-deterministic.
You seem to be living in the Newtonian fantasy that the universe is not irreducibly complex, that T+1 is calculable. This may be true for simple systems, but it is easy to become ensnared in incomputable problems.

[ Parent ]
Don't make me laugh. (3.00 / 4) (#84)
by tkatchev on Tue Sep 09, 2003 at 11:22:31 AM EST

I have a master's degree in math, whereas you are just some quack posting pseudo-science on a website.

Look, if you want me to take you seriously, you'd better define what you mean by "incalculable". If you mean "something that takes too long to compute", i.e. NP-hard, then you are wrong.

Also, just because something cannot be computed in finite time on a Turing machine (i.e. the halting problem) doesn't mean that the universe is non-deterministic. How long the computation takes (even if it is infinite time) has absolutely no bearing on non-determinism.

P.S. Do you realize that "true" random number generation is impossible?

   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

Dr. Science (1.00 / 1) (#121)
by sakusha on Wed Sep 10, 2003 at 12:20:22 AM EST

Wow, you have a Master's Degree in Science. That makes you... another quack posting on the internet.
FYI, truly random number generation is not so difficult, there are plenty of devices using everything from radioactive scintillation to quantum tunnelling to lava lamps, as seeds for random algorithms.
You also obviously have never worked in coding. I had a rude awakening early in my programming career: I wrote an iterative solution to an equation, intended to run in a loop that would converge on the solution, but in practice it never reached the true answer. It produced approximations that bounced back and forth around the answer, never converging closer than a certain limit, sometimes greater and sometimes less than the true number. The algorithm is Turing-incomputable; it is also nondeterministic because it can never arrive at a determined answer, since the results always depend on how long you let it run. Even if you devoted infinite computing resources to the algorithm, it would never come closer to the answer than a certain limit. Come on, Mr. Math, you should know of several infamous problems like this.

[ Parent ]
True random number generation. (1.00 / 1) (#128)
by tkatchev on Wed Sep 10, 2003 at 08:32:47 AM EST

True random number generation is impossible.

When you're using e.g. radioactivity, all you're doing is increasing the reliability of pseudo-random number generation by shifting the burden of computation to more effective analogue devices.

This is similar to those old analogue diffeq computers that gave out remarkably accurate answers remarkably efficiently: just because you're using a special trick for analogue computation doesn't make those diffeqs any different; you certainly don't get any sort of magic "non-determinism" simply by hooking up an oscillating pendulum to a computer.

P.S. I wrote more code than you ever will in your whole life, boy. Don't ever try to talk down to me, cause I will eat you for breakfast if you do.

Also, you really need to get a clear definition of what "non-determinism" really is. There is nothing non-deterministic about a non-converging series. You always get a determined answer out of a Turing machine.

Also, Turing machines have infinite state and infinite time. So, certainly, if you let a Turing machine run forever, it will give you the exactly correct answer. (e.g. The exactly correct answer has an infinite number of digits after the decimal point; assume that the machine prints out a new digit every time cycle.)

   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

Radioactivity (none / 0) (#150)
by glor on Wed Sep 10, 2003 at 04:32:54 PM EST

If you have evidence that radioactive decays can be described deterministically, rather than statistically, there are some people in Stockholm who would like to give you a big prize.

If you have faith that quantum processes are deterministic, you are silly.

Disclaimer: I am not the most intelligent kuron.
[ Parent ]

The situation is more subtle (none / 0) (#151)
by the on Wed Sep 10, 2003 at 04:41:14 PM EST

You can have your cake and eat it too. Everett's relative state formulation of Quantum Mechanics (popular among a very high proportion of physicists BTW) is completely deterministic. Unfortunately it doesn't make it any easier to make predictions about whether or not a given atom is about to decay. A crude explanation for this seeming contradiction is that the universe splits into two: one with a decay and one without, and you don't know for sure which one you are in.

But regardless of whether or not this is a good formulation of QM, any probabilistic model of physics can trivially be converted into a deterministic one with multiple worlds that is indistinguishable from it. Curiously enough the method is the same as that used to turn NFAs into DFAs in automata theory.

Anyway, my point is this: there is no dichotomy between probabilistic and deterministic models.
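For the curious, the NFA-to-DFA conversion alluded to above is the subset construction, which can be sketched in a few lines of Python (the dictionary-based NFA encoding here is just an illustrative choice):

```python
from collections import deque

def nfa_to_dfa(nfa, start, alphabet):
    """Subset construction: each DFA state is the *set* of NFA states
    the machine could currently be in.  Nondeterminism disappears by
    tracking all possibilities at once, the same move as reading a
    probabilistic model as a deterministic many-worlds one."""
    start_set = frozenset([start])
    dfa = {}
    queue = deque([start_set])
    while queue:
        state = queue.popleft()
        if state in dfa:
            continue
        dfa[state] = {}
        for sym in alphabet:
            nxt = frozenset(t for q in state for t in nfa.get((q, sym), ()))
            dfa[state][sym] = nxt
            if nxt not in dfa:
                queue.append(nxt)
    return dfa

# NFA: in state 0, reading 'a' may lead to state 0 or state 1.
nfa = {(0, 'a'): {0, 1}, (1, 'a'): {1}}
dfa = nfa_to_dfa(nfa, 0, ['a'])
```

The resulting DFA state {0, 1} plays the role of "both worlds at once".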

The Definite Article
[ Parent ]

Impractical distinction (none / 0) (#152)
by glor on Wed Sep 10, 2003 at 05:42:48 PM EST

True, but you're left with the experimental fact that the time between radioactive decays from a source (or the final location of a particle passed through a double slit, or whatever) is a random variable:  you can predict its distribution, but not its next value, no matter how much you know about the system.  If you have a deterministic model that puts you in a random universe, you still get random numbers in your experiment.  

Disclaimer: I am not the most intelligent kuron.
[ Parent ]

Very fine distinction (none / 0) (#154)
by the on Wed Sep 10, 2003 at 05:52:34 PM EST

Yes, the distinction makes no practical difference. But it does make a difference.

For one thing I disagree with your description: it doesn't put you in a random universe, it puts you in all of them. The catch is that each one of you doesn't know which model to use.

Secondly: I think you need a sharper definition of "non-deterministic model". It might mean something like "a model in which you don't know which branch of the model to use to model an actual physical situation". But it certainly can't be used to mean "a model in which the event that follows any given event is determined randomly", because that is indistinguishable (to an observer living in it) from one in which all possibilities happen next.

The Definite Article
[ Parent ]

Exactly. (none / 0) (#153)
by tkatchev on Wed Sep 10, 2003 at 05:46:51 PM EST

Probability theory is completely and utterly deterministic.

Well, at least from the standpoint of computability; perhaps physicists use some sort of different definition of the word "deterministic" that I am not aware of.

The point is, a deterministic Turing machine is a machine that always gives out one definite answer; that answer could well be a distribution function instead of a number, this doesn't matter one bit.

(A non-deterministic Turing machine is, roughly, a hypothetical machine that could compute several answers at the same time.)
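To illustrate the point that a distribution can itself be the definite output of a computation, here is a small Python sketch that computes the exact distribution of heads over n fair coin tosses; nothing about computing it is random:

```python
from fractions import Fraction

def coin_distribution(n):
    """Deterministically compute the exact distribution of the number
    of heads in n fair coin tosses.  The answer is a distribution
    function, but every step of producing it is deterministic."""
    dist = [Fraction(1)]              # zero tosses: certainly zero heads
    for _ in range(n):
        new = [Fraction(0)] * (len(dist) + 1)
        for k, p in enumerate(dist):
            new[k] += p / 2           # this toss comes up tails
            new[k + 1] += p / 2       # this toss comes up heads
        dist = new
    return dist

dist = coin_distribution(2)           # exact: [1/4, 1/2, 1/4]
```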

   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

I guess we're in agreement here (none / 0) (#155)
by the on Wed Sep 10, 2003 at 06:27:24 PM EST

Let me go back and find some other thing to pick a hole in :-)

The Definite Article
[ Parent ]
Yes, we better. (none / 0) (#158)
by tkatchev on Wed Sep 10, 2003 at 06:55:31 PM EST

I'm starting to forget what the original argument was about.

   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

Well it wasn't with me! nt (none / 0) (#159)
by the on Wed Sep 10, 2003 at 07:00:49 PM EST

The Definite Article
[ Parent ]
Umm... (4.00 / 3) (#86)
by Gully Foyle on Tue Sep 09, 2003 at 11:47:05 AM EST

A computer program can't contain a true random number generator. So sure, a TM running on random input would be unpredictable, but no more unpredictable than its input tape.

If you weren't picked on in school you were doing something wrong - kableh
[ Parent ]

This is amazing. (none / 0) (#125)
by Lethyos on Wed Sep 10, 2003 at 02:16:51 AM EST

By far, this thread is the funniest conversation I have ever witnessed on a discussion forum. Keep it up! You guys are riots. :-)

earth, my body; water, my blood; air, my breath; fire, my spirit
[ Parent ]
Slightly OT (3.00 / 1) (#68)
by bob6 on Tue Sep 09, 2003 at 04:59:46 AM EST

You might take a look at Mercury. It is a logic language with a syntax borrowed from Prolog, and it also has pure functional features. The order of goals within a conjunction is totally irrelevant; the compiler actually reorders them as it sees fit for optimization. Side effects are declared by using references to a structure of type io__state that represents the state of the runtime context. Under certain conditions the backtracking engine can roll the state back to previous states.

One answer to your Q in the intro: (3.00 / 1) (#73)
by Jetifi on Tue Sep 09, 2003 at 07:10:53 AM EST

''How are you going to figure out just what went wrong a billion instructions earlier?''

The CoVirt project has code out there that ''logs enough information to replay a long-term execution of a virtual machine instruction-by-instruction''. IIRC it does this using UML.

No affiliation, saw this on sweetcode, but it looks like this is one answer to that question.

Debugging? Snapshot, don't reverse. (4.00 / 3) (#74)
by inode on Tue Sep 09, 2003 at 08:49:47 AM EST

If it's debugging you want, then the ability to take a snapshot and roll back to that point is all you need.  You snapshot the system and step through a function.  If the bug occurs, you then roll back and look more closely.

This can be implemented for many programs using the VM system - you simply save the state of the machine registers and mark the entire memory area of the process as copy-on-write, so that copies of any modified pages are made as the program runs.  This image can then be restored.

This won't work for programs that do file or network I/O.  You can't "unsend" a TCP segment.

Many digital circuit simulators have this feature to some extent.  You can snapshot the circuit at some simulation time and then reload that state at a later time.  They do this entirely in software - no VM or OS support required.

I'm surprised that this has not been attempted in Linux or NetBSD.  It would be reasonably simple to implement in the kernel, and for GDB to access.
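As a rough user-space sketch of this snapshot-and-roll-back idea (Unix only; `run_with_rollback` and its retry policy are invented for illustration, not an existing API), fork() already gives you a copy-on-write snapshot of the whole process image:

```python
import os

def run_with_rollback(work, max_retries=3):
    """Run work(attempt) in a fork()ed child, so the parent retains a
    copy-on-write snapshot of the entire address space.  If the child
    exits non-zero, re-fork from the pristine snapshot and try again."""
    for attempt in range(max_retries + 1):
        pid = os.fork()
        if pid == 0:
            os._exit(work(attempt))       # child: exit status 0 = success
        _, status = os.waitpid(pid, 0)
        if os.WIFEXITED(status) and os.WEXITSTATUS(status) == 0:
            return attempt                # parent: child succeeded
    raise RuntimeError("no successful run within retry budget")

# A task that "crashes" on its first run but succeeds when retried:
tries = run_with_rollback(lambda attempt: 0 if attempt > 0 else 1)
```

As the comment notes, this only rolls back memory and registers; it cannot unsend a TCP segment.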

It's implemented in any UNIX (5.00 / 1) (#112)
by trixx on Tue Sep 09, 2003 at 08:49:21 PM EST

Like this: if (!fork()) abort();

now you've got an instant core dump of your program (actually, a copy of your program), and it keeps running.

[ Parent ]

Doesn't work so well in Linux (none / 0) (#133)
by p3d0 on Wed Sep 10, 2003 at 09:33:55 AM EST

I seem to recall trying to do something like this under Linux, and the way Linux confuses threads with processes meant that it didn't work. I wish I could remember the specifics. I don't think I aborted the parent though; I think I tried to suspend it so I could attach gdb and continue debugging.
Patrick Doyle
My comments do not reflect the opinions of my employer.
[ Parent ]
Addendum (3.00 / 1) (#80)
by My Alternative Account on Tue Sep 09, 2003 at 10:26:02 AM EST

One thing I'd like to make clearer: I wasn't discussing Haskell as a means of programming reversible computers. Both Haskell and reversible computers provide an environment in which it makes sense to execute programs from back-to-front as well as from the usual front-to-back. But they are quite different systems. (Though Haskell gives a nice elegant way to simulate systems running backwards so these aren't entirely separate topics of course.)

And although debuggers that step backwards do exist, debugging using reversible instructions is intended more as a thought experiment than anything else.

Reversible computation vs reversibility in physics (3.00 / 1) (#81)
by codingwizard on Tue Sep 09, 2003 at 10:34:43 AM EST

Reversibility in physics and at the quantum level is an interesting thing to study, although at the quantum level states need to be set up very specially so information isn't lost when they collapse or go concrete. (Heisenberg Uncertainty Relation, etc.)

However, software isn't bound by the laws of physics. It can work in any way you want it to. Consequently, it seems to me going off and building a reversible computer is the hard way to do this. People working on such computers might cite easier debugging as a reason to do it, but that's a red herring.

In particular, if the semantics of assignment were changed so that an assignment statement like:

a := 3 + a
were interpreted to mean instead
Push the current value of the variable "a" onto a system stack tied uniquely to the variable "a" in this scope, then replace the value of "a" with 3 plus its former value
the assignment could be reversed by executing this backwards as
Pop the value from the system stack associated with "a" in this scope and assign it to "a"
There would also need to be context stacks which preserved the contents of a scope when blocks were exited.
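The proposed assignment semantics can be sketched in Python (the class and method names are invented for illustration):

```python
from collections import defaultdict

class ReversibleScope:
    """Assignment with the semantics proposed above: every assignment
    pushes the variable's old value onto a stack tied uniquely to that
    variable, so each assignment can later be executed backwards."""
    def __init__(self):
        self.vars = {}
        self.history = defaultdict(list)   # per-variable shadow stacks

    def assign(self, name, value):
        # None marks "previously unbound"; in this sketch we ignore the
        # clash with a legitimately stored None value.
        self.history[name].append(self.vars.get(name))
        self.vars[name] = value

    def unassign(self, name):
        """Pop the saved value and restore it: the reverse assignment."""
        old = self.history[name].pop()
        if old is None:
            del self.vars[name]
        else:
            self.vars[name] = old

env = ReversibleScope()
env.assign('a', 5)
env.assign('a', 3 + env.vars['a'])   # a := 3 + a
env.unassign('a')                    # execute the assignment backwards
```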


Jan Theodore Galkowski (o°)
"There is nothing new/Beneath the sun." (Koheleth 1:9)
Yes, there is: Carbon nanotubes, for one.

Finally! (2.33 / 3) (#83)
by dpkg on Tue Sep 09, 2003 at 11:14:55 AM EST

I have been waiting for somebody to post an interesting article since I read the last interesting one!

I don't like the mention of Haskell though, since I had to learn it in a course I took in school, and that's a pain I don't enjoy being reminded of.

this world rejects me
Haskell (none / 0) (#115)
by ComradeFork on Tue Sep 09, 2003 at 09:19:15 PM EST

I think Haskell has some really good points. I view it as Lisp should have been. Personally I am avoiding functional languages at the moment, but I do think they are worth notice.

[ Parent ]
How dare you. (none / 0) (#130)
by tkatchev on Wed Sep 10, 2003 at 08:36:33 AM EST

Comparing sweet, beautiful Lisp to Haskell...

   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

Lisp (none / 0) (#134)
by ComradeFork on Wed Sep 10, 2003 at 10:50:30 AM EST

Lisp is a bit of a halfhearted functional language. Haskell however *is* a functional language.

[ Parent ]
All I can say in reply to that... (none / 0) (#141)
by tkatchev on Wed Sep 10, 2003 at 01:14:20 PM EST

...thank ghod Lisp is only half-heartedly functional.

I believe in tools that work, not in tools that fit the current party agenda.

   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

Yep (none / 0) (#156)
by ComradeFork on Wed Sep 10, 2003 at 06:33:06 PM EST

Thats why I don't use Lisp. There is not one good implementation of Lisp. Oh, and I like the Lisp language (well, CL and Scheme) and have worked with it a lot.

I certainly don't use Haskell either.

[ Parent ]

Good implementation of Lisp. (none / 0) (#157)
by tkatchev on Wed Sep 10, 2003 at 06:54:30 PM EST

Actually, there is one -- the LispMe Scheme interpreter for PalmOS.

Too bad it hasn't been updated in a while and doesn't support hi-res.

   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

Heh (none / 0) (#176)
by ComradeFork on Wed Sep 10, 2003 at 09:25:22 PM EST

Funnily enough I am creating a system in that at this very moment (no kidding). It is quite good.

However, it is VERY much a niche thing.

[ Parent ]

Yeah. (none / 0) (#182)
by tkatchev on Thu Sep 11, 2003 at 05:50:10 AM EST

Still, it makes me feel good that there is still good software written somewhere out there in the wide world.

   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

ocamldebug can do that ;-) (4.00 / 2) (#92)
by malekith on Tue Sep 09, 2003 at 02:19:22 PM EST

It can run a program forwards and backwards. It does this by fork()ing from time to time. It's pretty useful. (OCaml is a functional-OO-imperative hybrid language.)

-- Michal Moskal

Microplanner? (3.00 / 1) (#96)
by GuillaumeLeblanc on Tue Sep 09, 2003 at 03:09:32 PM EST

There was a programming language at MIT that did something like this. It was called uPlanner, son of Planner, and father of Conniver. It was lisp based.
  1. Where lisp has setq and prog, uPlanner had thsetq and thprog. (The th is silent.)
  2. It also had a database, and an instruction (thassert ...) which matched a pattern in the database. There might be several possible matches, so (thassert) would choose one.
  3. Every execution of thsetq saved the variable that was set and the old value.
  4. If a form in a thprog returned nil, it was considered a failure. The interpreter would then march backward through the thprog, undoing all of the assignments until it got to a place where it made a choice, and then make another.
It was used to do some kinds of AI based on depth first searches of a semantic net. The problem with it seemed to be that it is too easy in Lisp to write an ordinary function that returns nil. If you do that, it looks like a failure, and you silently back up, where you didn't expect to.

In about 1976 or so I took a programming language summer course in Bloomington, Indiana, where we typed in the entire implementation from a listing. I don't remember why we didn't get a tape, but each of us had 5 or so pages to type in. Getting it to work right was kind of difficult, I recall.
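A toy Python sketch of that control structure, with thsetq-style saved assignments undone on failure (the function names and the tiny constraint problem are invented for illustration):

```python
def search(choices, test):
    """Depth-first search in the uPlanner style described above: each
    assignment records (variable, old value) on a trail; when a goal
    fails, the interpreter marches backward through the trail, undoing
    assignments back to the last choice point, then tries another."""
    env, trail = {}, []

    def assign(var, val):
        trail.append((var, env.get(var)))   # like thsetq: save old value
        env[var] = val

    def undo(mark):
        while len(trail) > mark:
            var, old = trail.pop()
            if old is None:
                env.pop(var, None)
            else:
                env[var] = old

    def step(i):
        if i == len(choices):
            return test(env)
        var, alternatives = choices[i]
        for val in alternatives:
            mark = len(trail)
            assign(var, val)
            if step(i + 1):
                return True
            undo(mark)                      # failure: back up, try next
        return False

    return env if step(0) else None

# Find x, y with x + y == 5 by backtracking over the alternatives:
result = search([('x', [1, 2, 3]), ('y', [1, 2, 3])],
                lambda e: e['x'] + e['y'] == 5)
```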

Codex gratia Codici.

Simulation (3.00 / 1) (#98)
by jabber on Tue Sep 09, 2003 at 03:47:40 PM EST

It's not such a great stretch of the imagination to envision a simulator that logs the entire state of the machine being simulated, at each time increment or change of state. While running instructions in reverse might hold merit in genetic algorithms, the aim of this article is achieved simply by navigating in reverse through such a high-resolution log of the runtime.

[TINK5C] |"Is K5 my kapusta intellectual teddy bear?"| "Yes"

A giant "undo" buffer (none / 0) (#132)
by p3d0 on Wed Sep 10, 2003 at 09:14:28 AM EST

It's like a giant "undo" buffer in a text editor. The question is whether the size of this buffer is anywhere near practical, and whether it can be constructed in anywhere near a reasonable amount of time. I have grave doubts about both of these.
Patrick Doyle
My comments do not reflect the opinions of my employer.
[ Parent ]
I don't (none / 0) (#137)
by epepke on Wed Sep 10, 2003 at 11:11:00 AM EST

Actually, a lot of this can be done more efficiently than a giant Undo buffer by building algorithms around Fredkin gates, etc.

But still, why should a giant Undo buffer in a text editor be a hard problem? Considering the number of deletions and replacements that people actually do when writing text, any undo buffer would be swallowed up compared to the extra crap you get in a Word document.

The truth may be out there, but lies are inside your head.--Terry Pratchett

[ Parent ]
You're paying attention, right? (none / 0) (#186)
by p3d0 on Thu Sep 11, 2003 at 09:23:24 AM EST

Obviously an undo buffer is practical in a text editor. We're talking about an undo buffer for the billions of instructions that get executed in a program.
Patrick Doyle
My comments do not reflect the opinions of my employer.
[ Parent ]
Your analogy; not mine (none / 0) (#188)
by epepke on Thu Sep 11, 2003 at 10:39:24 AM EST

But, like so many others in this thread, the reason that you think it's such a humongous problem is that you're assuming a naive approach based on trying to compensate for traditional operators in a world of irreversible computing.

The actual limitations of reversible computing are small:

  1. The number of bits input to the computation must be the same as the number of bits that the computation outputs. Call this N.
  2. The number of bits that a reversible computation needs to remember at any point is also N.
  3. Given an irreversible computation with Ni input bits and No output bits, it is possible to produce a reversible computation with N not greater than Ni + No.

Now, actually figuring out how to do this in the best way might actually require a few CS majors to get off their dot-com-polished heinies and think some, but don't I also see people bitching about how there aren't any interesting problems left in CS?

One of the possible ways is to do it bottom-up. Reversible gates have been around for decades. They're easy to make with transistors, etc. They can even be made with ordinary gates. There are three gates: NOT, Controlled Not (CN), and Controlled Controlled Not (CCN) which, together, are as powerful as NOT, AND, and OR put together. There's even a make-it-easy-for-the-die gate, the Fredkin gate, which can do everything. Plus, it's equivalent to a single frickin' SPDT relay, and it's hard to get more basic than that.
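For concreteness, here is a Python sketch of those gates as bit-level functions; the reversibility claim can be checked exhaustively, since each gate is its own inverse:

```python
def cn(a, b):
    """Controlled-NOT: flips b when a is set.  Applying it twice
    restores the inputs, so it is its own inverse."""
    return a, a ^ b

def ccn(a, b, c):
    """Controlled-controlled-NOT (CCN, the Toffoli gate): flips c when
    a and b are both set.  With c = 0 it computes AND reversibly."""
    return a, b, c ^ (a & b)

def fredkin(c, a, b):
    """Fredkin gate: swaps a and b when the control line c is set."""
    return (c, b, a) if c else (c, a, b)

# Reversibility: each gate undoes itself on every possible input.
for bits in [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]:
    assert ccn(*ccn(*bits)) == bits
    assert fredkin(*fredkin(*bits)) == bits
```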

The truth may be out there, but lies are inside your head.--Terry Pratchett

[ Parent ]
Take another look at this thread (none / 0) (#211)
by p3d0 on Tue Sep 16, 2003 at 03:55:50 PM EST

Thanks for your effort, but I don't think you are understanding the context of this thread. Here's a replay:
  • jabber: All you need to do is log everything that happens, and then play it in reverse.
  • me: That sounds like an undo buffer in a text editor, only impractically large.
  • you: Why are undo buffers in text editors impractically large?
  • me: They are not. That's not what I said.
  • you: Blah blah reversible computing.
  • me: Dude, pay attention.

Patrick Doyle
My comments do not reflect the opinions of my employer.
[ Parent ]
OK, dude. Whatever. (none / 0) (#215)
by epepke on Sun Sep 21, 2003 at 03:19:20 AM EST

The truth may be out there, but lies are inside your head.--Terry Pratchett

[ Parent ]
Sorry if I missed your point (nt) (none / 0) (#216)
by p3d0 on Mon Sep 22, 2003 at 09:09:37 AM EST

Patrick Doyle
My comments do not reflect the opinions of my employer.
[ Parent ]
Corrections (none / 0) (#189)
by epepke on Thu Sep 11, 2003 at 10:41:10 AM EST

It's equivalent to a frickin' DPDT relay with a wire between the arms. But still.

The truth may be out there, but lies are inside your head.--Terry Pratchett

[ Parent ]
Read more about how processors work... (3.00 / 2) (#106)
by baseinfinity on Tue Sep 09, 2003 at 07:20:53 PM EST

note: IAACE (working on my MS at Carnegie Mellon). Umm, basically, there's no possibility I can think of to reverse the computation of any processor based on technology. A clock on a computer is just a 0-1 pulse that triggers the next sequential combinatorial calculation based on the previous state. The state of even the most basic processor is so complex that the information needed to deal with it can exceed the number of atoms in the universe. http://www.ece.cmu.edu/~ee760 is the old webpage (the new one is on Blackboard) of an interesting class I'm in that focuses on algorithms that work with industry-size logic problems. http://www.ece.cmu.edu/~ece741 is the class whose homework I have to get back to right now, with lectures about contemporary processor organization. http://www.ece.cmu.edu/~ece347 is the more basic version of that class.

Elaboration (3.00 / 1) (#109)
by baseinfinity on Tue Sep 09, 2003 at 08:12:52 PM EST

I had meant to say "based on technology available" for that reason.  In order to create a machine (in the abstract sense) based on current technology, you would have to remember every change in state of the processor, which is pretty much impossible.  Now, if you're a purist CS student, you'd say to me that's because things shouldn't be modeled the way they are.  The fundamental problem is that the imperative model that all processors operate on can mutate any piece of data in the context set of the processor (all of the memory).  Tracking the changes that a mutator instruction (such as a store, or a trap) makes would be an impossible task.  Now, if you were to base a processor off of function programming ideas, the affects of various instructions would be far more manageable.

To those of you who might say to me "well I can simulate and reverse such and such an architecture": well, that's great. A fundamental principle of computer design is that our ability to completely understand and simulate an architecture lags behind the cutting edge.  As the cutting edge advances, our aspirations of what we expect from a computer do too.  Being able to master an antiquated implementation doesn't buy you much besides understanding to apply to the future.

[ Parent ]

Lazy unproofreading me (3.00 / 1) (#111)
by baseinfinity on Tue Sep 09, 2003 at 08:39:57 PM EST

Guess I should proofread more: I meant functional programming ideas, and affect should be effect.

[ Parent ]
It's not that hard (none / 0) (#124)
by QuantumG on Wed Sep 10, 2003 at 02:03:15 AM EST

Instead of moving a value into a register, you simply exchange the values in two registers. This actually moves your heat from the logic circuitry to the peripherals (for example, your memory isn't going to be reversible).
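A minimal Python sketch of this exchange-only style of execution (the tiny register machine is invented for illustration): since every exchange instruction is its own inverse, replaying the trace backwards rolls the machine back with no history buffer at all:

```python
def run(program, regs, reverse=False):
    """Run a trace of exchange (XCHG) instructions over a register
    file.  Each exchange is its own inverse, so replaying the trace in
    reverse order restores the machine's initial state exactly."""
    for i, j in (reversed(program) if reverse else program):
        regs[i], regs[j] = regs[j], regs[i]

regs = [1, 2, 3, 0]
prog = [(0, 3), (1, 2), (0, 1)]   # XCHG r0,r3; XCHG r1,r2; XCHG r0,r1
run(prog, regs)                    # forward execution scrambles the registers
run(prog, regs, reverse=True)      # backward execution restores [1, 2, 3, 0]
```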

Gun fire is the sound of freedom.
[ Parent ]
Memory. (none / 0) (#129)
by tkatchev on Wed Sep 10, 2003 at 08:35:13 AM EST

Simple, you assume that all data in memory is loaded there from registers.

(Now, what to do about hard-disks?)

   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

Been done in software. (3.00 / 1) (#110)
by darkseer on Tue Sep 09, 2003 at 08:39:32 PM EST

Friend of mine wrote his masters thesis on this. He could capture and playback the execution of a java program. Think of it as a VCR for java.

[ Parent ]
Clarification (none / 0) (#113)
by baseinfinity on Tue Sep 09, 2003 at 08:51:20 PM EST

It's a very unremarkable thing to be able to simulate and replay. That's what we used to do to design processors. Thing is, you can miss cases, like Intel did with the FDIV bug a decade ago. It's a far more remarkable thing to be able to formally prove the paths of execution that a system can take in order to verify correctness. We have data structures and algorithms that can represent that now. Being able to model the reverse is a far more complicated thing. This would be the first step in understanding how to create maybe an overlay system that can perform the reverse. I don't think that would work though.

[ Parent ]
You couldn't be more wrong about that MIT link (2.50 / 2) (#116)
by baseinfinity on Tue Sep 09, 2003 at 09:25:33 PM EST

That MIT site focuses on reversible logic to recuperate charge used in CMOS designs in order to create lower-power chips, not on doing computations in reverse. Quite interesting actually; I'm going to consult with one of my professors about possible applications in differential power analysis attack prevention for smart cards, a research area I'm sorta stumped on right now.

Aspect Oriented Programming (none / 0) (#122)
by dantheman on Wed Sep 10, 2003 at 12:34:29 AM EST

You could use AOP to quickly prototype different solutions to this problem.  

Here is a quick link: http://www.aspectj.com.

It's a relatively simple concept that has a large amount of possibilities.


pure functional - what's the use? (none / 0) (#140)
by dimaq on Wed Sep 10, 2003 at 01:04:15 PM EST

When you're discussing reversibility in Haskell you make it sound as if no information was ever lost in a pure functional language, and as if the only point of loss was that monad thingy.

I don't think that's the case, consider this for example (pardon imperative notation)

bool function(a,b){ return a>b; }

There's a great number of combinations of a and b and yet the result is reduced to only two states - true or false.

Now you could say that since 'a' and 'b' are not evaluated until their values are needed, the information loss is moved to another [arbitrary?] point in time/execution; but I don't think that helps anything.

So here we have a pure functional program (and thus without any monads, I presume?) that loses information quite nicely.

In layman terms, when that function completes and we find it returned, say, false, and we wanna know why, the actual values of 'a' and 'b' are already gone.

Come to think of it, there's also fanout happening at undefined points during the execution of a pure functional program; how are you going to catch that?

P.S. I apologize for not really knowing what a monad is in Haskell.
P.S.2. Perhaps you should provide some links to the discussion on whether there's any real use in lossless computations? My feeling is that the only useful programs have to be lossy. Or something.

Umm... (none / 0) (#144)
by epepke on Wed Sep 10, 2003 at 01:23:01 PM EST

bool function(a,b){ return a>b; }

There's a great number of combinations of a and b and yet the result is reduced to only two states - true or false.

I think this is what the "reversible computing sux0rz" people are failing to appreciate. Data reduction is perfectly OK in reversible computing. It's data destruction that isn't.

The truth may be out there, but lies are inside your head.--Terry Pratchett

[ Parent ]
OK hot shot: (none / 0) (#183)
by it certainly is on Thu Sep 11, 2003 at 06:52:51 AM EST

the function (A > B) has evaluated to true. As only the result of this function is needed going forward, A and B have been discarded, last seen in the parameter stack frame of our function.

Let's go back one step. What were A and B?

kur0shin.org -- it certainly is

Godwin's law [...] is impossible to violate except with an infinitely long thread that doesn't mention nazis.
[ Parent ]

Functional languages and reversibility (4.00 / 1) (#184)
by epepke on Thu Sep 11, 2003 at 07:27:33 AM EST

In a purely functional language designed to be reversible, a and b are either inputs to the program, or they are the return values of other functions. If they're inputs, they're known. If they're the return values of other functions, then they've already been handled in a recoverable manner by the return mechanisms of the functions that produced them.

Having a functional language does not automatically cause a computation to be reversible; it simply makes it a lot easier to implement it reversibly, because there's only one place that you need to worry about it.

The most obvious way of doing this is in the continuation chain. It isn't the most efficient, though.

The point is that in a purely functional language, it is impossible to form an information-destroying algorithm. Any information-destroying operations are at the choice of the implementation. This is usually specified in the definition of the language. For example, the Scheme report says, "All objects created in the course of a Scheme computation, including procedures and continuations, have unlimited extent. No Scheme object is ever destroyed. The reason that implementations of Scheme do not (usually!) run out of storage is that they are permitted to reclaim the storage occupied by an object if they can prove that the object cannot possibly matter to any future computation." Of course, Scheme isn't a purely functional language, because it has set! and set-car!. But they can be gotten rid of and still have a completely useful Scheme. Beyond that, the only thing that has to change is the criterion for "cannot possibly matter to any future computation."

The other advantage is that, with a purely functional language, programmers are already used to not having those capabilities. So, they don't think in terms of "k = k + 1" and already know how to get work done without that. This is important because, if you have reversible computing, how are you going to get someone to write code on it? In a purely functional language, it doesn't require any new ideas on the part of the programmer. The programmer's model is already consistent with reversibility.
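To make the contrast with "k = k + 1" concrete, here is a minimal sketch in C (all names hypothetical) of the functional style the comment describes: each step produces a new value linked to the old one instead of clobbering a variable, so every earlier state remains reachable.

```c
#include <assert.h>
#include <stddef.h>

/* A value together with a link to the value it was computed from. */
typedef struct Cell {
    int value;
    const struct Cell *prev;
} Cell;

/* The functional analogue of "k = k + 1": rather than overwriting k,
   produce a new cell whose prev pointer keeps the old k alive. */
const Cell *inc(const Cell *k, Cell *storage) {
    storage->value = k->value + 1;
    storage->prev  = k;
    return storage;
}
```

Because nothing is destroyed, "running backwards" is just following the prev links; the implementation, not the programmer, decides when an old cell can be reclaimed.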

The truth may be out there, but lies are inside your head.--Terry Pratchett

[ Parent ]
and still... (none / 0) (#185)
by dimaq on Thu Sep 11, 2003 at 08:03:36 AM EST

I suppose I still fail to see a couple of things:

first, you don't make a distinction between generic programme [execution] reversal and automatic differentiation.

secondly, I can't see why a functional language is superior to an imperative one for either - generic reversal is something really complicated, perhaps impossible, and AD can be done with a modified compiler or overloaded math functions; either way, AD doesn't require reversal per se (?).

[ Parent ]

reminds me of thermodynamic engines (3.00 / 1) (#149)
by fae on Wed Sep 10, 2003 at 04:10:15 PM EST

You can ideally make a heat engine/refrigerator perfectly reversible (as a Carnot cycle), but those isothermal processes would take infinite time in real life.

So, we give up efficiency for speed, and turn the isotherms into something faster.

-- fae: but an atom in the great mass of humanity

Nicely written troll. (1.50 / 2) (#165)
by mcgrew on Wed Sep 10, 2003 at 07:53:04 PM EST

In fact, it is quite simple to "roll back the clock" if you allow for it at the outset.

Save each register set to disk with each machine-code instruction executed. This will, of course, slow the computation down as if it were a simulation, and it will take massively superior hardware.

It would help to do your coding in assembler for something like this.
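The register-saving scheme above can be sketched in C (a toy model with hypothetical names, not real machine code): snapshot the whole register file before every instruction, so rolling back is just restoring an earlier snapshot.

```c
#include <assert.h>

#define NREGS    4
#define MAXSTEPS 256

/* Toy register file for a hypothetical machine. */
typedef struct { int r[NREGS]; } Regs;

static Regs history[MAXSTEPS];  /* one checkpoint per executed instruction */
static int  step_count = 0;

/* Execute one instruction (here: increment register dst),
   checkpointing the register file first. */
void step_inc(Regs *m, int dst) {
    history[step_count++] = *m;  /* save state before mutating */
    m->r[dst] += 1;
}

/* Undo the most recent instruction by restoring the last checkpoint. */
void step_back(Regs *m) {
    *m = history[--step_count];
}
```

The cost is exactly what the comment predicts: storage and time proportional to the number of instructions executed, which is why doing it for every machine-code step needs "massively superior hardware".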

Reversibility in General
So what is required to make a computer reversible? Consider a line of C code like this: a = a+1

Your computer doesn't understand "a=a+1". It doesn't really understand this either, though it is closer to its actual binary computation:
pop a
inc a
push a

The higher the level of language, the slower and more bloated the code will be to begin with, and the more opportunity for error.

at a fundamental level the laws of physics are reversible - you can't really destroy information

The laws of physics don't apply. There is no such thing as "information". Moo.

Reversible computers can, in principle, run with zero energy

It will never happen, the ionization of the tinfoil hats will cancel the effect.

If someone develops an artificial intelligence on a reversible computer, how would it feel to be that intelligence running backwards?

If someone develops a tooth fairy, how would it feel to lose a tooth?

-1, resection to fiction

"The entire neocon movement is dedicated to revoking mcgrew's posting priviliges. This is why we went to war with Iraq." -LilDebbie

Optimize! (5.00 / 1) (#168)
by the on Wed Sep 10, 2003 at 08:01:09 PM EST

pop a
inc a
push a
Hey! You forgot to turn on optimization.

The Definite Article
[ Parent ]
Ancient solved problem (5.00 / 1) (#171)
by eodell on Wed Sep 10, 2003 at 08:28:54 PM EST

Reversibility has been a solved problem in many instances for decades. Obvious examples include the 'undo' command in the text adventure games of the early 80's and the undo functionality in practically every significant end-user application out there.

Implementing reversibility at the hardware level is a topic I'll leave to the hardware engineers, but having implemented it in interpreters before, I can say it's really no big deal. All you have to do, at the level of atomic language operations in your VM, is decide on an opposing operation for every atomic operation. "Opposing" here shouldn't be taken to mean mathematically opposite, as in addition and subtraction. That's too abstract and theoretical; you end up pondering useless questions like what the opposite of assignment is.

Take, for example, the expression:

a = 5 + 2;

When this is executed, 5 and 2 are pushed onto the stack, and the addition operator is executed, popping 5 and 2 off the stack and pushing 7 on. The assignment operator is then executed and 7 is popped from the stack and copied to the memory location represented by a. (What happens in native machine code is, obviously, a bit more complex than that, but for the purpose of a reversible VM, we don't care.)

If what you want to achieve is reversibility at the statement level, the assignment operator in this case must additionally push onto the undo stack an assignment instruction, the old value of a, and the address represented by a. To begin undoing immediately after the expression (or wherever) is simply a matter of executing the accumulated instructions on the undo stack. Note that for statement-level reversibility, this is little more than storing the value of variables at the same time they are changed.
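The statement-level scheme described above can be sketched in a few lines of C (names are hypothetical, and the "VM" is reduced to two functions): the assignment operation records the target's address and old value on an undo stack before overwriting it, and undoing pops one record and restores it.

```c
#include <assert.h>

#define UNDO_MAX 128

/* One undo record: where we wrote, and what was there before. */
typedef struct { int *addr; int old; } UndoRec;

static UndoRec undo_stack[UNDO_MAX];
static int     undo_top = 0;

/* Reversible assignment: save (address, old value), then assign. */
void vm_assign(int *var, int value) {
    undo_stack[undo_top].addr = var;
    undo_stack[undo_top].old  = *var;
    undo_top++;
    *var = value;
}

/* Undo the most recent assignment. */
void vm_undo(void) {
    undo_top--;
    *undo_stack[undo_top].addr = undo_stack[undo_top].old;
}
```

Running "a = 5 + 2;" through vm_assign and then vm_undo restores a to whatever it held before, which is exactly the "store the old value at the same time it is changed" point made above.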

Obviously this is not applicable to compiled code without specialized hardware, but constructing an interpreter/debugger isn't any harder than writing a compiler, and is frequently easier. Watcom's C debugger implemented reversibility, as I recall; I imagine there must have been some interpretation going on there.

There are some special cases where this model breaks down, such as file and network I/O, which require building simulated interfaces, but it works well enough. Really fine-grained reversibility -- at the level of the stack -- is also a step up in complexity, but it's not rocket science, either. Certainly, it's not necessary to choose a special-purpose language.

you missed one --- (5.00 / 1) (#180)
by washort on Thu Sep 11, 2003 at 01:52:54 AM EST

No mention of reversible computing is complete without a reference to Henry Baker's paper on the topic: NREVERSAL of Fortune -- The Thermodynamics of Garbage Collection. He addresses the thermodynamic implications of reversible computing and garbage collection and proposes a basic language for such a machine.

I'm reading that article (none / 0) (#192)
by the on Thu Sep 11, 2003 at 03:54:03 PM EST

Check out the description of a Carnot heat engine as a device that 'summarizes' the random data in a reservoir in a smaller one at a lower temperature. Heat engines are essentially informational devices. Neat way of looking at things!

The Definite Article
[ Parent ]
Maxwell's Demon (none / 0) (#181)
by obsidian head on Thu Sep 11, 2003 at 02:06:47 AM EST

A question at OSNews reminded me of my own:  Does anyone know why Maxwell's Demon doesn't expend more energy closing the flap than just thinking?  Is there some thermodynamic result that says closing the flap is no big deal compared to erasing bits in one's brain?

Opening and closing the flap can... (none / 0) (#190)
by the on Thu Sep 11, 2003 at 03:36:30 PM EST

...take zero work if it's frictionless. Or it can be spring loaded (so to speak). So it might take work to push it open but you'll get it back again when it springs back, so the amortized cost is zero. The point is that opening and closing the flap doesn't cause a change in entropy, so opening the flap is a reversible operation. (And this isn't just a thought experiment, there are exactly analogous situations in biochemistry.)

The Definite Article
[ Parent ]
compiling vs interpreting.. (none / 0) (#198)
by ehrichweiss on Fri Sep 12, 2003 at 07:08:44 PM EST

I seem to remember back in the late/mid-90's there being C/C++ *interpreters* that would allow you to debug code quite easily. Do these not exist anymore or does no one even remember them? I'm sure that if they've been forgotten, someone will pull them back up and start working with them again and rename it as a "realtime debugger"(tm) or some such.
Business intelligence is a nice way of saying "You've got spies".
CenterLine's ObjectCenter (none / 0) (#204)
by GGardner on Sun Sep 14, 2003 at 09:42:42 PM EST

Centerline's ObjectCenter, which used to be called Sabre C, is what you are thinking of. They had to change the name when the Sabre airline reservation folks found out about it. Looks like it is still being sold, but the company has seen better days.

I remember using this 10 years ago, and it was great for tiny programs, but anything real (e.g. that used Motif) was hopelessly slow -- it took hours until our first window was drawn.

[ Parent ]

Centerline (none / 0) (#208)
by ehrichweiss on Mon Sep 15, 2003 at 05:43:38 PM EST

Go figure..I only used it a few times and never for anything huge or intricate..but it'd be great if they'd get their act together and optimize it a bit...maybe they could use pre-fetching, etc. for the interpreter...
Business intelligence is a nice way of saying "You've got spies".
[ Parent ]
very end of the article (none / 1) (#213)
by metalhurlant on Fri Sep 19, 2003 at 04:02:52 AM EST

The one-way hash at the end of the article is particularly relevant for a text regarding reversible computing.
As you've said, the clobbering of bits is an issue. It is also the only thing that stands between those one-way hashes and the original data they were generated from.
Effectively, when you run a program forward, you follow a line: no branching you have to keep track of, no growing complexity, just a plain straight line the CPU follows without hesitating much.
When you take the same program and run it backward, however, you run very fast into combinatorial explosions. Every operation that allows for any sort of bit clobbering (including bits in the CPU flag register) causes a bunch of leaves to appear in the run tree.
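One standard way to tame that combinatorial explosion is to record the clobbered bits on a trace as the program runs forward, so the backward run never has to guess. A minimal C sketch (hypothetical names) for a bit-destroying operation like AND:

```c
#include <assert.h>

static unsigned trace[64];  /* history of clobbered pre-images */
static int      top = 0;

/* Forward AND: a & b destroys bits of a (every cleared bit of b
   wipes out the corresponding bit of a), so save a on the trace
   before computing the result. */
unsigned and_fwd(unsigned a, unsigned b) {
    trace[top++] = a;
    return a & b;
}

/* Backward AND: instead of branching over every bit pattern that
   could have produced `result`, pop the recorded pre-image -- the
   run tree collapses back to a straight line. */
unsigned and_back(unsigned result, unsigned b) {
    (void)result; (void)b;   /* not needed once the trace exists */
    return trace[--top];
}
```

The cost, of course, is that the trace grows with every destructive operation, which is the same storage-for-reversibility trade-off the article discusses; it makes ordinary programs reversible, but it is no help against a one-way hash whose trace you don't have.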

The article doesn't really explain what techniques are available to limit/avoid this problem.
My intuition is that there isn't anything publicly known that would be of significant help here, or the net landscape would be a bit different.

Turning back the CPU clock | 218 comments (187 topical, 31 editorial, 0 hidden)