I'm fresh out of high school and working at the Department of Energy's Argonne National Laboratory as part of the climate group. I think many people misunderstand all the issues at hand here. So here is a bunch of background:
Part of the problem with climate modeling is that, because of strange quirks in U.S. trade law about selling below cost, U.S. research groups have been unable to buy fast vector supercomputers from Japanese companies such as NEC (Cray's anti-dumping complaint is largely at fault here) and were instead typically forced to buy Cray machines. Over the past decade, however, the groups came to the conclusion that Crays are a waste of money, and since other vector supercomputers could not be bought, they switched to massively parallel machines (a.k.a. Beowulf clusters in the Linux world). Argonne itself has Chiba City, a 512-processor cluster (256 dual Pentium III nodes) with ultrafast Myrinet networking [great sales ad :-)].
Programming for vector machines is quite easy: the programmer writes ordinary code, the compiler optimizes it into vector operations, and the supercomputer rips through them. Great. Programming for parallel machines, however, is wholly different. Some implementations exist where parallelism is handled by the compiler, but they are quite slow and rarely what is needed. Instead, researchers defined a standard called the Message Passing Interface (MPI), which forces the programmer to design the topology and the way data gets sent around. Computations are much faster under this model.
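To make that concrete, here is a toy MPI program in C (nothing from our codebase, just the classic ring exchange): the programmer, not the compiler, decides who talks to whom and when. Compile with mpicc, run with mpirun.

    /* Toy ring exchange: each rank sends a value to its right
       neighbor and receives from its left. The topology is laid
       out explicitly by the programmer. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int right = (rank + 1) % size;        /* neighbors chosen  */
        int left  = (rank - 1 + size) % size; /* by the coder      */
        double mine = (double)rank, theirs;

        /* Sendrecv avoids the deadlock a naive Send/Recv pair can hit. */
        MPI_Sendrecv(&mine,  1, MPI_DOUBLE, right, 0,
                     &theirs, 1, MPI_DOUBLE, left,  0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        printf("rank %d got %g from rank %d\n", rank, theirs, left);
        MPI_Finalize();
        return 0;
    }

Even this trivial exchange shows the flavor: every byte that moves between processors is moved because you said so.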
But programming under MPI is quite complicated. On top of that, many climate models have been designed, but each deals with only one aspect of the global climate: there is an atmospheric model, an ocean model, a land model, and a sea-ice model. When running alone, the atmospheric model, for instance, has to assume a constant ocean temperature. It would be nice to couple these systems so they work together and produce more accurate predictions, right?
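As a cartoon of what coupling buys you, here is a made-up serial sketch (the "physics" is fake; the structure is the point): the atmosphere computes a heat flux from the live sea-surface temperature instead of a constant one, and the ocean updates that temperature from the live flux.

    /* Toy coupled time loop. Both components see each other's
       current state every step instead of a frozen assumption. */
    #include <stdio.h>

    static double atmosphere_step(double sst)   /* returns heat flux */
    {
        return 0.1 * (15.0 - sst);              /* made-up relaxation */
    }

    static double ocean_step(double sst, double flux) /* new SST */
    {
        return sst + 0.5 * flux;                /* made-up heat uptake */
    }

    int main(void)
    {
        double sst = 10.0;                      /* initial SST, deg C */
        for (int step = 0; step < 10; step++) {
            double flux = atmosphere_step(sst); /* sees live SST      */
            sst = ocean_step(sst, flux);        /* sees live flux     */
            printf("step %2d  sst=%.3f  flux=%.4f\n", step, sst, flux);
        }
        return 0;
    }

In the real world each of those two calls is itself a huge parallel model running on its own set of processors, which is exactly why the exchange in the middle is hard.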
This is where I come in. My small team works on the Model Coupling Toolkit, which makes it easy for climatologists to write couplers that hook models together and spread the computation evenly over all the available processors. Because each model's data lives on different processors, the toolkit has to manage the transfers in a timely and efficient manner.
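The toolkit's actual interface is Fortran and I won't reproduce it from memory, but the core problem looks like this in plain MPI: a field decomposed one way for one model has to be reshuffled into a different decomposition for another. Below, a block layout becomes a cyclic layout; for simplicity each rank holds exactly as many values as there are ranks, which lets one MPI_Alltoall do the whole shuffle.

    /* Hand-rolled redistribution sketch (NOT the MCT API).
       Block layout: rank r owns global indices r*P .. r*P+P-1.
       Cyclic layout: rank r owns global indices r, r+P, r+2P, ... */
    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, P;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &P);

        double *blockv  = malloc(P * sizeof(double));
        double *cyclicv = malloc(P * sizeof(double));

        for (int k = 0; k < P; k++)
            blockv[k] = (double)(rank * P + k);   /* global index */

        /* Item k on rank r belongs to rank k in the cyclic layout
           and lands in local slot r there, so with P items per rank
           the redistribution is a single all-to-all. */
        MPI_Alltoall(blockv, 1, MPI_DOUBLE, cyclicv, 1, MPI_DOUBLE,
                     MPI_COMM_WORLD);

        for (int k = 0; k < P; k++)
            printf("rank %d cyclic slot %d = global %g\n",
                   rank, k, cyclicv[k]);

        free(blockv);
        free(cyclicv);
        MPI_Finalize();
        return 0;
    }

Real couplers face the general case: mismatched grids, uneven counts, and different processor pools per model, which is why you want a toolkit to build and cache these routing tables instead of writing the index arithmetic by hand each time.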
What does all this lead to? People working on climate are trying to make the models as efficient and as realistic as possible. Climate@home sounds nice, but it really isn't that workable: fast parallel computation requires constant communication between the nodes, something SETI@home never has to do, since its work units are independent. Having extra funding does sound nice, though. I could use the $$ :-).
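A back-of-envelope, using rough numbers I'm assuming rather than quoting (~10 microseconds per exchange on Myrinet, ~100 ms round trip over the internet), shows why latency alone kills the idea:

    /* Latency-only cost of one boundary exchange per timestep. */
    #include <stdio.h>

    int main(void)
    {
        const double steps        = 1e6;    /* timesteps in a long run (assumed) */
        const double myrinet_lat  = 10e-6;  /* ~10 us per exchange (rough)       */
        const double internet_lat = 0.1;    /* ~100 ms round trip (rough)        */

        printf("latency alone, Myrinet:  %.0f s\n", steps * myrinet_lat);
        printf("latency alone, internet: %.0f s (~%.0f hours)\n",
               steps * internet_lat, steps * internet_lat / 3600.0);
        return 0;
    }

That's about 10 seconds of waiting on the cluster versus roughly a day of pure waiting over the internet, before a single byte of actual field data or any computation is counted.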
Do take a look at our Accelerated Climate Prediction Initiative. Before coming to work here, I had no idea about the effort involved.