**Reversible Simulation**

There has in fact been some
research
on developing debuggers that allow reverse execution of code.
But it's not just the debuggers that benefit from reverse execution. Consider the
problems of writing simulations of complex systems. Many simulations, ranging from
simulations of tiny microprocessor circuits to
simulations of world-wide computer networks,
require the use of clusters of computers working in parallel in order to obtain results in a reasonable time.
But clusters like this have a problem - the time for one computer to talk to another can be longer than it takes to perform a large chunk of simulation. By the time a CPU has received
the message that an event in the simulation has taken place on another CPU it might have carried on the simulation well beyond
the time of the event. One way to deal with this is to roll back the CPU clock to before
the time of the event and then restart with the new information from the incoming message.
It's like receiving a delayed memo from your boss telling you *not* to start the task you're now working on and work on something else instead. A nice technique for doing that is to use reversible computation.
If every instruction in your code can be executed backwards it becomes easy to perform this roll-back from wherever you currently are. So we can see
that reversibility is potentially very useful.

**Reversibility in General**

So what is required to make a computer reversible? Consider a line of `C` code like this:

`a = a+1`

It's very easy to roll the clock back on that one. We can execute it backwards
simply by running
`a = a-1`.

In fact, a great many mathematical operations are reversible, for example any operation
of the form
`a = a-b`

is reversible, except for one very special case:
when `b` is another name for the variable `a`.
In that case we have
`a = a-a`

and that is the same as saying `a = 0`. This operation isn't reversible because we can no longer
recover the original value of `a`. Now we have come to one of the central issues of
reversible computing: how to deal with the clobbering of variables by new values. If we want to
reverse a computation that does this we need to log the result of every non-reversible change, in this
case storing the previous value of `a` somewhere. On the other hand we can simply try to
write our code in such a way that we never actually need to clobber anything.
There are all sorts of workarounds, but you have to watch out: you might end up having to keep a lot of baggage sitting around that you're not allowed to overwrite. But would you believe that this issue is much, much deeper than it looks
and in fact points to the very heart of modern physics?

**Reversibility and Physics**

Many years ago computer scientists realized that they were doing a fine job of shrinking down electronic components
so that they used less and less power. So an obvious question arose: just how much power do we need
to perform a computation? What is the fundamental physical limit
on computation? We can imagine the sort of answer we might expect: a kind of menu with different
kinds of operations and a cost for each. In fact, in the 1950s von Neumann
hypothesized that each fundamental logical operation could be performed using
just *kT*log_{e}(2) joules of energy, where T is the absolute temperature at which the computer is running and k, the Boltzmann constant,
is about 1.38 × 10^{-23} joules per kelvin.
But it didn't take long for someone to point out that it was in fact possible to perform a NOT
operation with essentially no energy.
It turns out that the correct answer is very elegant.
There is one fundamental operation that requires energy and if you don't perform it you don't need
any energy at all. What's more, it's an operation we've already discussed: clobbering of variables.
Destroying 1 bit of information takes *kT*log_{e}(2) joules of energy.
We can get a vague idea of how this happens quite easily: at a fundamental
level the laws of physics are reversible - you can't really destroy information.
Instead, when we clobber a variable, we really have to move the information about the previous state out
of the computer. This information eventually finds its way into the state of the molecules around the computer
adding to their disorder. In other words this information is dissipated as heat. And that's why it costs
energy.
(Incidentally: think about black holes. It is well known that black holes destroy information that falls into them - as physicists put it, they have no hair. In a
sense a black hole is the ultimate variable clobberer. But isn't that violating the reversibility of
physics? Well, this is a subject under much
debate at the moment so I can't give any answers here!)
One of the interesting things about information erasure is that in some sense it is a very fundamental operation, yet previously
researchers didn't consider it important at all. Today we realise that ERASE, and its opposite partner
FANOUT, are actually very important to consider in electronic design. In the future you might like to think a little about the information you wipe out each time you assign a value to a variable.

So now we can see that some of the people who most want reversible computers are in fact people who have no interest
in performing computations in reverse at all. Reversible computers can, in principle, run with zero energy and
so are of great interest to engineers who wish to reduce power consumption. This is no pie-in-the-sky either.
Logic circuits, including adders, that run without external power supplies have already been developed and
reversible computer
hardware is well on its way to reality. Anyone familiar with digital logic will be aware that all logic gates can be built from NAND gates. All *reversible* logic gates can be built using Fredkin gates. In fact there is a whole type of logic I left out of my previous article: reversible or conservative logic that is specially adapted for reversible computing.

**Quantum Computing**

There is another group of researchers that also have an interest in reversible computing even though they
have no desire to actually reverse their computations: these are the quantum computer scientists. This is
a large topic but the central idea is simple.
In quantum mechanics it is possible to arrange for the existence of physical states that are a
*superposition*, a kind of mixture of different physical states. The most famous such state
is Schrödinger's Cat which is both dead and alive. In a quantum computer we can arrange for
a single computer to perform many computations simultaneously by being in a superposition of many
physical states. Unfortunately we can only get useful output from one of these states, but with some ingenuity, before reading the output, we can mix these states together in such a way as
to perform useful computations that would be much slower on classical (i.e. non-quantum) computers.
But one of the key points about quantum computers is that they need to be reversible. Unfortunately
there is a catch that makes them harder to make than energy saving computers. If we are trying to save
power and unintentionally make our computer ever so slightly non-reversible then it's no big deal: we just waste some power.
But a quantum computer needs to be 100% reversible or it is almost completely useless.
This is a very difficult task and so far only the simplest of quantum devices have been built.

**Computing Sensitivities**

Time to bring things back down to Earth, at least metaphorically if not literally. Consider the task of
the rocket scientists at NASA trying to design the shape of the wings for the space shuttle so as
to maximize safety on re-entry. They have a need to reverse computations too. The idea is this: they can
run fluid dynamics simulations to understand how the wing interacts with the atmosphere so as to compute
just how the wing will be heated. All useful stuff, but once they have made such a computation how should
they modify the wing so as to lower temperatures?
The obvious thing is to run many simulations with different
input parameters and see how the temperature depends on these variables. But if there are, say, a thousand
input parameters to the simulation then we need to perform at least 1000 simulations just to get a rough idea of how the result depends on the inputs.
What we really need is not only to compute the output of a simulation but also how that output varies with
the input. How an output varies with an input is called the sensitivity of the output with respect to that input. (Anyone who knows
their calculus will recognise that this is just another name for the derivative.)
But a fluid dynamics simulation is a complex operation. Computing the sensitivities is a difficult task.
But we can break it down into simpler steps. If the final output is X, and X depends on A that in turn depends
on B that depends on C and so on, then we can find the sensitivity of X with respect to A, then use that computation to find the sensitivity of X with respect to B and so on. It turns out that if we take our original program and
rewrite it backwards it becomes a relatively routine process to write a new piece of code that computes the
rewrite it backwards it becomes a relatively routine process to write a new piece of code that computes the
sensitivities. (The backwards code is known as the *adjoint* code.)
So even if we have thousands of control variables we can compute all of the sensitivities by running the program forward
once and then running a modified version once backwards. These methods aren't just useful for re-entry, they have been used to design efficient ways to kick satellites into higher orbits as well as countless applications that have nothing to do with space.
Reverse computation really is turning out to be quite useful!

**Reversible Programming in Haskell**

So given all this need for reverse computation are there any computer languages out there that make it easy for us?
Life for the NASA researchers can be tough - they often have to work with concrete code written in FORTRAN with little abstraction to help them. (Though some tools exist.) It's slightly easier in C++ where we can use operator overloading to transparently record our operations so they
can be played back backwards.
But at the other extreme of the abstraction scale is a language that
is ideally suited to working with reverse computation:
Haskell.
In some sense Haskell doesn't strictly work
in any particular order. That's because it's what is known as a lazy language and will try to put off doing
actual work until the last possible moment. If the result of a computation isn't actually needed yet it will just go onto the next thing that's needed. If the thing we skipped was never actually needed it'll never get computed at all. This is a great form of optimization! But Haskell is also unusual in that it is a pure functional language.
Everything is a function call but function calls can only return results, they can't have side effects. Going
back to the example we started with: you can't write `a = a+1` normally in Haskell because after
we have executed the line we have caused a side effect, the value of `a` has been updated.
You might imagine this causes major problems: for example in C, `printf` returns the number of characters printed but
also has the side effect of printing stuff out. We can't do this in Haskell! Haskell functions can only return things.
Nonetheless, we can still write functions that return an object that contains a list of all the I/O that
the program would have done if it were allowed to do I/O. Sounds crazy but it works well enough that you can
write a web server in Haskell without too much difficulty. (This is where the laziness comes in, we can start returning the beginning of the list of output even though we haven't bothered to compute the rest of the list yet. You don't have to wait for the end of your program run to see the output!)

Anyway, in order to 'fake' operations that have side effects in Haskell an object called a monad is used.
One way to think about a monad is that it is the glue that sticks one statement to the next.
For example in C you can think of the semi-colon character as a function that first executes the
statement on its left and then executes the statement on its right. But in Haskell the monad
is completely under our control and we can write new ones if we like. And if we're in the mood we can
use a monad that performs the computation on the right before the computation on the left. In fact,
one Haskell researcher uses just such a technique
to solve the kind of sensitivities problem that is faced by the NASA researchers.

**A Final Thought**

Well, we're coming to the end of our survey of reverse computation. We've seen applications ranging
from blue-sky quantum computing to mundane debugging. So now I'll leave you with one last thing
to ponder. If someone develops an artificial intelligence on a reversible computer, how would it feel
to be that intelligence running backwards?