Friday, March 12, 2010
PaleoCLAMatology
I’m talking about what I can't resist calling “paleoCLAMatology,” a lovely new method of detecting not just the climate, but the WEATHER over the last two thousand years. Clams live in shallow water and build their shells using the minerals and other elements that are in the water. One of the elements that goes into a clamshell is oxygen, and the ratio of oxygen isotopes that ends up in the shell varies predictably with the temperature of the water: heavy oxygen (O-18) is more prevalent in shells formed in colder water.
By slicing ancient clamshells with a microtome and sending those slices through a mass spectrometer, scientists can read the O-18 concentrations down to week-by-week resolution. Preliminary results using clams from a bay in Iceland show clear evidence of both the “Medieval Warm Period” (MWP) and a “Roman Warm Period” (RWP).
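Just to make the isotope-to-temperature step concrete, here is a minimal sketch using the classic carbonate paleotemperature equation (the Epstein/Craig form). The delta values and the assumption of constant seawater composition are purely illustrative--they are NOT numbers from the Iceland study.

```python
# Illustrative sketch: converting a shell's oxygen-isotope value to water
# temperature with the classic carbonate paleotemperature equation
# (Epstein et al. 1953, as refit by Craig 1965). All numbers are invented.

def carbonate_temperature_c(delta_shell, delta_water):
    """Estimate water temperature (deg C) from the shell's delta-18O
    and the ambient seawater's delta-18O."""
    d = delta_shell - delta_water
    return 16.5 - 4.3 * d + 0.14 * d ** 2

# Hypothetical weekly slices from a single growth band (per mil):
weekly_delta_shell = [2.1, 2.3, 2.6, 2.4]   # higher values -> colder water
delta_seawater = 0.0                        # assumed constant for the bay

for week, d18o in enumerate(weekly_delta_shell, start=1):
    print(f"week {week}: ~{carbonate_temperature_c(d18o, delta_seawater):.1f} C")
```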
That doesn't resolve the debate about the MWP--proponents of the man-made global warming theory don't pretend it never happened. They just believe it was a localized phenomenon that affected northern Europe but not the planet as a whole. Clams from Iceland can’t rebut that argument--but “clamatology” can be used on shells from anywhere. We finally have a methodology that gives us fine-grained information about the temperature of shallow waters anywhere we’d like to look--there are LOTS of clams out there!
Note: shallow-water temperature measurements are NOT the same as surface temperature measurements, so we’ll have to do some new modeling to see how air temperature relates to shallow seas. I think it’s promising work in its own right--a huge amount of heat is stored in the top layer of the ocean, and it’s hard to model planetary climate just by looking at proxies of inland air temperature.
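One rough way that modeling could start: calibrate coastal air temperature against shallow-water temperature wherever the two instrumental records overlap, then apply the fit to shell-derived estimates. Everything in the sketch below (the numbers, the simple linear form) is an illustrative assumption, not an established relationship.

```python
# Rough illustration of a calibration: regress coastal air temperature on
# shallow-water temperature where instrumental records overlap, then use the
# fit to interpret shell-derived water temperatures. Numbers are invented.
import numpy as np

shallow_water_c = np.array([4.0, 6.5, 9.0, 11.2, 8.1, 5.3])   # buoys/loggers
coastal_air_c   = np.array([2.1, 6.0, 10.4, 13.0, 8.9, 3.8])  # weather stations

# Least-squares fit: air_temp ~ a + b * water_temp
b, a = np.polyfit(shallow_water_c, coastal_air_c, 1)
print(f"air ~ {a:.2f} + {b:.2f} * water")

# Apply the fit to a hypothetical clam-derived water temperature:
print(f"estimated air temp for 7.0 C water: {a + b * 7.0:.1f} C")
```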
So--is there ANYBODY here who isn’t happy to get a new scientific tool that gives us more information about reality? If so, speak out--I’d like to know who can be unhappy about ancient clamshells!
Thursday, March 11, 2010
Answering George Monbiot
The thing that would MOST convince me that humans are causing global warming would be to see the "green" movement advocate nuclear power. Nukes are the most obvious solution to CO2 emissions, yet the people who CLAIM to be most passionate about "saving the planet" refuse to take nuclear power seriously. Without that evidence of their good faith, I have to evaluate the science on my own.
My amateur exploration of the science has produced more questions than answers. Here's what it would take to convince ME:
(1) A clear acknowledgment of the diminishing impact of increasing CO2. Adding more paint to a window that has already been painted over doesn't block much additional light. Likewise, doubling the CO2 in an atmosphere that already absorbs most of the radiation in CO2's absorption bands doesn't double the amount of energy that gets trapped (see the sketch after this list).
(2) A clear list of the SECONDARY effects that are supposed to amplify the CO2 effects. I've heard how water vapor and methane are supposed to rise as the planet warms, resulting in a second round of forcing. What other gases are we talking about?
(3) A clear acknowledgment of the impact of solar variability on weather cycles. I don't trust any model that can't explain why ice caps on Mars are retreating.
(4) A clear list of testable predictions made by any climate model--and an equally clear list of anomalies. I don't expect any model to be perfect. I do expect its flaws to be clearly identified!
(5) A clear recognition of the man-made impact on surface temperatures which is unrelated to CO2. Matched-pairs analysis of neighboring measurement stations shows that even a low human population density has a positive impact on temperature--an effect that CANNOT be caused by CO2, since neighboring stations are breathing the same air. This confounding variable MUST be addressed before any temperature dataset can be deemed reliable.
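On point (1), the standard simplified expression for CO2 forcing is logarithmic--here's a minimal sketch using the commonly cited Myhre et al. (1998) approximation, delta_F = 5.35 * ln(C/C0) W/m^2. The concentrations below are round illustrative numbers, not measurements.

```python
# Minimal sketch of the logarithmic (diminishing-returns) CO2 forcing,
# using the widely cited simplified expression from Myhre et al. (1998):
#     delta_F = 5.35 * ln(C / C0)   [W/m^2]
import math

def co2_forcing_w_m2(c_ppm, c0_ppm=280.0):
    """Radiative forcing relative to a pre-industrial baseline C0."""
    return 5.35 * math.log(c_ppm / c0_ppm)

for c in (280, 420, 560, 840, 1120):
    print(f"{c:>4} ppm: {co2_forcing_w_m2(c):5.2f} W/m^2")

# Note how each doubling adds the same ~3.7 W/m^2 -- the second doubling
# does not double the total forcing.
```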
This may seem like a lot to ask, but it's our planet that's at stake.
Wednesday, March 10, 2010
How to Get Enough Computing Power for Climate Modeling
“How much resolution is needed to capture climate change details? For example, convective instabilities (~km scale) aren't included in climate models; should they be? Does higher resolution reduce uncertainty? There’s no good theory for estimating how well climate simulations converge with increasing resolution. Even worse, the equations themselves change with finer resolution as new features have to be included...”
He answers his own question with an obvious truth: "We need bigger computers." But that raises a new question: where do we get them?
The answer, I suggest, is right in front of our noses--quite literally. We already have enough computing power on our desks or in our laptops. Climate modeling is probably the perfect application for a worldwide network of personal computers.
It's not like it can't be done--it already has been! Oxford University networked 3.5 million personal computers back in 2002 to find a cure for anthrax. Dr. Graham Richards' "Screensaver Lifesaver" project was a huge success, and it seems like it could be replicated.
I envision "screensaver" software that runs on an all-volunteer network of PCs in their idle time. Assign every station a point on the global grid and give it access to "live" meteorological measurements from as many observation stations as possible. Then, using a set of competing climate models (more on that later!), have each station generate the data each climate model would predict for the area around its unique grid point. As more people volunteer their computers for the project, make the grid increasingly fine.
The primary point of this network would be to test competing climate models. To that end, any person would be invited to turn their theory about the weather into an algorithm that could run on this system. The network could test out any number of theoretical models, so I would make the "model building" component an essentially "open source" system, with just enough editorial control to keep hackers from implanting malware in the system.
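As a rough sketch of what one volunteer node might do under this scheme--every function name, data source, and "model" below is a hypothetical placeholder, not a real API:

```python
# Hypothetical sketch of one volunteer "screensaver" node: it is assigned a
# grid point, pulls recent observations for that point, asks each competing
# model for a prediction, and reports the errors back.
import random

def fetch_observations(lat, lon):
    """Placeholder for pulling 'live' measurements near the grid point."""
    return {"temp_c": 10.0 + random.uniform(-2, 2)}

def model_a(lat, lon):
    """Toy 'climate model' A: predicts a flat 9.5 C."""
    return {"temp_c": 9.5}

def model_b(lat, lon):
    """Toy 'climate model' B: predicts a flat 11.0 C."""
    return {"temp_c": 11.0}

COMPETING_MODELS = {"model_a": model_a, "model_b": model_b}

def run_node(lat, lon):
    """Score each competing model against observations at this grid point."""
    obs = fetch_observations(lat, lon)
    scores = {}
    for name, model in COMPETING_MODELS.items():
        pred = model(lat, lon)
        scores[name] = abs(pred["temp_c"] - obs["temp_c"])  # absolute error
    return scores

if __name__ == "__main__":
    # Suppose this node is assigned the grid point near Reykjavik.
    print(run_node(64.1, -21.9))
```

A real system would also need work distribution, result verification, and redundancy, along the lines of existing volunteer-computing frameworks such as BOINC.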
It would seem appropriate to require every climate modeler to disclose his or her algorithm (but not the source code). An "open source" system of this sort should make the results of every model accessible to all researchers at all times. That would allow the maximum number of researchers to learn from other people's successes--and failures.