Friday, 26 October 2012

Should Predictive Science Be Liable For Failures?

I've never before used my blog to comment on someone else's writing, but when I read Simon Jenkins' piece in the Guardian I found myself moved to say something.  Not to do so would have allowed a profoundly flawed, and potentially damaging, argument to go unanswered.

First, I think one has to divorce the style of Mr Jenkins' writing from the message.  It's difficult not to conclude that leading with a headline that states "Wave a banknote at a pundit and he'll predict anything", and continuing with such rhetoric throughout the text, is intended to be inflammatory.  However, as much as I find such journalism disappointing, let's put that to one side.

Let's focus on what I believe to be his main thesis: those making predictions should be held accountable if a) their prediction is incorrect, or b) they fail to predict a damaging event.  Unfortunately, Mr Jenkins begins by confusing the two parts of this proposition. Failure to predict and a failed prediction are two quite different matters.

As an engineer I live with the knowledge that when I design something with safety-related implications I carry an inherent liability unless I act with "skill and care".  I am "predicting" that my design will not fail. Should the engine management system I design fail, resulting in a Super Jumbo crashing, then I might expect those who suffer the consequences to attempt to blame me.  However, provided I can show that I exercised all reasonable skill and care in designing my system, then I have a valid defence.  That is a well-trodden path in the courts.  If it were not so then no one in their right mind would ever build anything that is safety related, let alone safety critical.

Exercising reasonable skill and care in engineering includes a luxury that is not always available in other sciences: testing.  Part of my defence would be to show that I tested my design using the best practice available.  If I cut corners to save money then I quite rightly should suffer the wrath of those affected.  But if it was an error that any engineer could reasonably have made, and extensive testing that should have revealed all such errors was conducted yet failed to do so, then I would feel any punishment quite unjust.

Now let's consider the case of the seismologists whom Mr Jenkins says should be held accountable.  Their science involves predicting events that depend on so many variables that they make engineering problems pale into insignificance.  As a statistician I know that predictions in fields such as meteorology or seismology are subject to significant margins of error precisely because of the complexities involved.

Yes, we have models that have been tested, but even those considered state-of-the-art are known to be simplifications of the real world in which we live.  We all have everyday experience of this when watching the weather forecasts. Over the past 20 years the models and computing power applied by the Met Office have significantly improved the 3-5 day forecasts, but does that mean that 20 years ago you didn't want to hear their best prediction for five days hence? No.

Mr Jenkins seems to be suggesting that if the margin for error is too high then forecasters of all sorts should "shut up". Really?  Is it better to say nothing rather than give the prediction you consider most likely?  After all, we all understand the capricious nature of weather and that forecasting is not 100% accurate.  We use our collective intelligence and do not blindly follow the forecasters, but take the forecast they give us and calibrate it with our understanding of the nature of such forecasts.

If a seismologist saw incontrovertible evidence that an earthquake was about to occur and failed to issue a warning (assuming it was their job to do so) then I can understand why they should be held liable.  But to hold someone liable for not predicting an event for which there was no clear evidence suggests that a warning should be issued whenever there is the slightest indication, or even when there is no indication at all!  Imagine the impact this would have.  Just like the little boy who cried wolf, you would find that people ignored the warning on the occasion when the event actually occurred.

I also have a problem with Mr Jenkins using the word "accountable" in a context where scientists have been held liable.  Scientists absolutely should be held accountable in such situations, just like engineers, statisticians and the like, so that we can understand what went wrong and, more importantly, how we can improve.  That is a far cry from making someone responsible for the loss suffered, which is what has effectively happened in this case.

Finally, Mr Jenkins seems to extrapolate from a prediction failure to a generalisation that if you pay someone then they'll give you the prediction you would most like to hear.  Doubtless there are rogues in the predictive sciences, but as someone who works in science and technology I find it extraordinary that he makes such a sweeping statement.  In my (many) years I have not come across such people.  Quite the opposite.  When there is pressure of many sorts, including commercial pressure, I have always found those I work with to exercise professionalism and to say what they genuinely believe, even if it is something that others would rather not hear.

I can only assume that Mr Jenkins has been dealing with a group of people who have left him deeply cynical about other professions. However, that is no excuse for making the sweeping generalisations he does, and all it will achieve is to damage the reputation of scientists and engineers, who, in my opinion, are the most likely source of solutions to our world's problems.