I read this Ben Goldacre piece a couple of weeks ago. The question one always has to ask is: is this kind of bad science accidental, or in some sense deliberate – a skilled incompetence on the part of the practitioners or their managers / editors / reviewers, or both in a kind of tacit collusion? In complex human endeavours, some hypocrisy is inevitable, to balance motives and goods across multiple levels, and a degree of trust is therefore also inescapable. Science, taken as a whole “business”, is no different.
Personally, I’m more against bad scientism – using science badly in situations that are far from scientific – than against good or bad science per se. With infinite time and resources you could argue that all situations can be reduced to science, but the reduction can discard the real-world value. Statistics is of course one of those techniques used to bring the vagaries of human behaviour into the scientific space in quantifiable chunks. This adds another level of complexity to the whole exercise, leading to more possibilities of evaluating the wrong things, and/or evaluating them wrongly.
Ben’s story above is about the statistical methods; this story today in The Scholarly Kitchen (via David Gurteen and Stephen Downes) is about choosing the wrong inputs for the wrong motives – citations, again. It proves the point that science is a messy business, parts of which are far from scientific.
And of course, the “Measuring the Wrong Things” headline is one in a long line that includes Einstein’s “Not everything that counts can be counted.”