Wednesday, March 10, 2010

How to Get Enough Computing Power for Climate Modeling

The Bishop Hill blog posted notes from a recent lecture by Professor Tim Palmer on the computational challenges of weather and climate modeling. Palmer asks:
How much resolution is needed to capture climate change details? For example, convective instabilities (~km scale) aren't included in climate models; should they be? Does higher resolution reduce uncertainty? There’s no good theory for estimating how well climate simulations converge with increasing resolution. Even worse, the equations themselves change with finer resolution as new features have to be included...

He answers his own question with an obvious truth: "We need bigger computers." But that raises a new question: where do we get them?

The answer, I suggest, is right in front of our noses--quite literally. We already have enough computing power on our desks or in our laptops. Climate modeling is probably the perfect application for a worldwide network of personal computers.

It's not as though it can't be done--it already has been! Oxford University networked 3.5 million personal computers back in 2002 to search for a cure for anthrax. Dr. Graham Richards' "Screensaver Lifesaver" project was a huge success, and there's no reason it couldn't be replicated.

I envision "screensaver" software that runs on an all-volunteer network of PCs in their idle time. Assign each participating computer a point on the global grid and give it access to "live" meteorological measurements from as many observation stations as possible. Then, using a set of competing climate models (more on that later!), have each computer generate the predictions each climate model makes for the area around its unique grid point. As more people volunteer their computers for the project, make the grid increasingly fine.
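To make the idea concrete, here is a minimal sketch of what one volunteer's worker process might look like. Everything in it is an assumption for illustration: the coordinator URL, the observation feed, and the reporting endpoints are hypothetical placeholders, not an existing service.

import json
import time
import urllib.request

# Hypothetical coordinator that assigns grid points and collects results.
COORDINATOR_URL = "https://example.org/climate-grid"

def fetch_assignment():
    """Ask the coordinator which grid point this volunteer PC is responsible for."""
    with urllib.request.urlopen(f"{COORDINATOR_URL}/assign") as resp:
        return json.load(resp)  # e.g. {"grid_id": 4217, "lat": 51.75, "lon": -1.26}

def fetch_observations(grid_id):
    """Pull the latest 'live' measurements for the stations nearest this grid point."""
    with urllib.request.urlopen(f"{COORDINATOR_URL}/obs/{grid_id}") as resp:
        return json.load(resp)

def run_models(models, observations):
    """Run every competing model on the same observations and collect predictions."""
    return {name: model(observations) for name, model in models.items()}

def report(grid_id, predictions):
    """Send each model's local prediction back to the coordinator for comparison."""
    payload = json.dumps({"grid_id": grid_id, "predictions": predictions}).encode()
    req = urllib.request.Request(f"{COORDINATOR_URL}/report", data=payload,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

def idle_loop(models, poll_seconds=3600):
    """The screensaver loop: do one unit of work per period of idle time."""
    assignment = fetch_assignment()
    while True:
        obs = fetch_observations(assignment["grid_id"])
        report(assignment["grid_id"], run_models(models, obs))
        time.sleep(poll_seconds)

The essential design choice is that every volunteer runs every competing model on the same local observations, so the network produces directly comparable predictions at each grid point.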

The primary point of this network would be to test competing climate models. To that end, anyone would be invited to turn their theory about the weather into an algorithm that could run on this system. The network could test any number of theoretical models, so I would make the "model building" component an essentially "open source" system, with just enough editorial control to keep hackers from implanting malware in the system.
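What would a contributed model look like? One possibility, sketched below, is a small function conforming to a single shared interface, registered by name so the network can load, vet, and compare submissions. The interface itself is my assumption for illustration, not a specification from the post.

from typing import Callable, Dict

# A model maps local observations (e.g. temperature, pressure, humidity)
# to a predicted state for the same grid point at the next time step.
Observations = Dict[str, float]
Model = Callable[[Observations], Dict[str, float]]

# Open registry of competing models; editorial review happens before anything
# is accepted here, which is the "just enough control" mentioned above.
REGISTRY: Dict[str, Model] = {}

def register(name: str):
    """Decorator a contributor uses to submit a model to the registry."""
    def wrap(fn: Model) -> Model:
        REGISTRY[name] = fn
        return fn
    return wrap

@register("persistence-baseline")
def persistence(obs: Observations) -> Dict[str, float]:
    """A trivial baseline: tomorrow looks like today. Useful as a yardstick."""
    return {"temperature": obs["temperature"], "pressure": obs["pressure"]}

Keeping submissions behind one narrow interface also makes the editorial review tractable: reviewers check that a model only reads observations and returns predictions, rather than auditing arbitrary code.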

It would seem appropriate to require every climate modeler to disclose his or her algorithm (but not the source code). An "open source" system of this sort should make the results of every model accessible to all researchers at all times. That would allow the maximum number of researchers to learn from other people's successes--and failures.