Wednesday, 21 July 2010

Can science study everything?

Science can be seen as a body of facts, a set of theories or a set of methods. In our view it is primarily a set of methods. These methods are both technical, eg controlled experiments, and social, eg peer review, and there is no definitive list.

Science is not, therefore, an area of study or even several such areas. You can do science in any area to which scientific methods can be applied including human behaviour and social dynamics. It’s true that studies of people present various distinctive difficulties – but so do astronomy and particle physics.

When thinking about methods in the natural sciences it’s easy to suppose that science requires controlled experiments, but this is incorrect. Controlled experiments are impractical in studies of stellar, geological and biological evolution. In medicine some controlled experiments are unethical – that’s why we have ethics committees.

In his wonderful book Guns, Germs and Steel, Jared Diamond showed how comparisons between societies and ecologies could be used to illuminate human history and prehistory. In this he applied scientific methods to history – he took a similar approach even more explicitly in Collapse.

Now Diamond has tackled the methodology question head-on. His new book, Natural Experiments of History, edited with the political scientist James Robinson, looks at eight ‘natural experiments’, that is, sets of historical episodes from which general conclusions can be drawn. In four, similar societies experienced different impacts, eg conquest, whilst in the other four different societies experienced similar impacts. The book then compares the consequences in the various cases.

The specific results are interesting, of course, but the real importance of Diamond’s work is to show how scientific method can be applied to the unpromising material of human history.

Truly, science is method, not subject matter.

Friday, 9 July 2010

Down with naked numbers!

On the front page of today's Guardian, Larry Elliott reported (‘Watchdog under new pressure for cutting job loss forecast’) that the Office for Budget Responsibility (OBR) had forecast job losses of 499,000. That number is ridiculous – not in itself, but in its claim to precision.

“499,000” means that the number of jobs lost will be neither 498,000 nor 500,000 but 499,000. That’s a claim worthy of a racing tipster or a psychic. It has no place in a sensible economic discussion.

In its pre-budget report the OBR discusses the uncertainty in its forecasts with care. It says “the probability of growth being within one percentage point of our central forecast [2.5%] in [2014] is around 30 per cent". In science we generally quote a range or the standard deviation. In the OBR’s case the growth forecast is 2.5% +/- 2.1%, ie a range of 0.4% to 4.6%. That's a dismal level of precision, but at least it's honest – and that is not something you always get in official statements.
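To make that concrete, here is a quick back-of-the-envelope check (my own sketch, assuming roughly normal forecast errors, which the OBR does not claim): it turns the central forecast and standard deviation into a range and asks how often the outcome would fall outside it.

```python
import math

# Back-of-the-envelope sketch, assuming roughly normal forecast errors
# (an assumption for illustration, not the OBR's own calculation).
central = 2.5   # OBR central growth forecast for 2014, in per cent
sd = 2.1        # standard deviation discussed above, in percentage points

low, high = central - sd, central + sd
print(f"+/- 1 SD range: {low:.1f}% to {high:.1f}%")          # 0.4% to 4.6%

# Under normality, the chance that growth falls outside the +/- 1 SD range:
outside = 1 - math.erf(1 / math.sqrt(2))                     # about 0.32
print(f"Chance of falling outside that range: {outside:.0%}")  # roughly a third
```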

It’s obvious that this uncertainty affects every other OBR forecast. I can't find the job loss forecast in the OBR's Pre-Budget Report, but if its uncertainty is proportional, the +/- 1 SD range would be roughly 100,000 to 900,000. (And let's remember that there's a one-third chance that reality will fall outside this range!)
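To show where that rough range comes from, here is the same back-of-the-envelope arithmetic (assuming, as above, that the job-loss forecast carries the same relative uncertainty as the growth forecast):

```python
# Sketch of the proportional scaling above: apply the growth forecast's
# relative uncertainty to the 499,000 job-loss figure.
growth_central = 2.5      # per cent
growth_sd = 2.1           # percentage points
jobs_central = 499_000    # OBR job-loss forecast

relative_sd = growth_sd / growth_central      # 0.84
jobs_sd = jobs_central * relative_sd          # about 419,000

low, high = jobs_central - jobs_sd, jobs_central + jobs_sd
print(f"+/- 1 SD range: {low:,.0f} to {high:,.0f} job losses")
# about 80,000 to 918,000 - call it 100,000 to 900,000
```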

You can't understand forecasts without knowing the uncertainties. Let's push to have them spelt out.