I’ve seen a lot of clients pick and choose data to make a point. The most extreme case was an exec at a massive UK organisation who went through a report deleting the graphs that didn’t support her point, before presenting it to the board.
What’s interesting is that we seem to do something similar to ourselves when consuming information. Ben Goldacre wrote about this in his Guardian column a few days ago.
He outlines a study in which 108 students were told about an imaginary disease (Lindsay Sindrome) and an imaginary treatment (Batarim). Each student heard about 100 cases, one patient at a time, learning the outcome and whether or not the patient had been “given” Batarim. The students were split into two groups: in one group, 80 of the 100 cases had been “given” Batarim; in the other, only 20 of the 100 had. In both groups, 80% of the patients “got better”. In reality Batarim made no difference whatsoever to the patient outcome.
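The arithmetic behind the study is worth spelling out. A minimal sketch in Python (the exact per-subgroup breakdown is my assumption for illustration; the 80/20 split and 80% recovery rate are from the study as described) shows that the recovery rate is identical whether or not a patient received the drug:

```python
def recovery_rate(cases):
    """cases: list of (given_batarim, got_better) tuples."""
    return sum(got_better for _, got_better in cases) / len(cases)

def make_cases(n_treated, n_total=100, recovery=0.8):
    # Recovery is independent of treatment: exactly 80% of each
    # subgroup gets better, so the drug adds nothing.
    cases = []
    for treated, n in ((True, n_treated), (False, n_total - n_treated)):
        n_better = round(n * recovery)
        cases += [(treated, True)] * n_better + [(treated, False)] * (n - n_better)
    return cases

for n_treated in (80, 20):  # the two groups' case lists
    cases = make_cases(n_treated)
    treated = [c for c in cases if c[0]]
    untreated = [c for c in cases if not c[0]]
    print(n_treated, recovery_rate(treated), recovery_rate(untreated))
    # both rates come out at 0.8 in each group
```

The only thing that differs between the two groups is how many treated cases the students heard about, not the drug's actual effect.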
What’s interesting is that the group that heard about 80 treated cases overestimated the effectiveness of the drug, whereas the other group formed a much more realistic impression of the drug’s effectiveness.
The reason this study caught my eye is that it explains something I’ve seen a bit too often: managers, very enthused by some activity (an improvement initiative or a policy change, say), assuming that the activity is what led to a change in results. It’s entirely possible that there was lots of activity and the result happened in spite of it.
It shows that the perceptive manager needs to challenge the causal link between activity and results data, and insist on rigour and objectivity in assessing impact, particularly if the business is “betting the ranch” on a particular initiative.