I’ve often found that people get very uncomfortable if you change a measure so that it shows poorer performance, even if you lower the target proportionately.
Let me give you an example.
I worked with a paper manufacturer a few years ago. Their (flawed) efficiency measure showed them bobbing between 95% and 105% efficiency. Anyone with some familiarity with OEE (or any other rigorous efficiency measure) will know that anything over 100% is nonsense, as 100% is a theoretical maximum that you will never achieve for a sustained period. We reworked their efficiency measure and showed that their true performance was really in the 65–70% range.

Nothing had actually changed: they were still making roughly the same number of reels of paper per week. What we had shown was that, in theory, there was a 50% “opportunity” to increase output (moving from 66% to 100% efficiency, though of course you are only ever likely to capture a portion of that theoretical opportunity). What upset the foremen in particular was that they were used to getting 100%. The measure and the target had become blurred. What we did was separate the measure (OEE, with a maximum achievable score of 100%) from the target (initially 65%, but delivering exactly the same performance as before).
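To make the arithmetic concrete, here is a minimal sketch of the standard OEE calculation (Availability × Performance × Quality) and the “opportunity” figure quoted above. The shift numbers are illustrative, not the mill’s real data:

```python
def oee(availability: float, performance: float, quality: float) -> float:
    """Overall Equipment Effectiveness: the product of three ratios,
    each at most 1.0, so OEE itself can never exceed 100%."""
    return availability * performance * quality

# Illustrative shift: 90% uptime, 80% of rated speed, 92% right-first-time.
efficiency = oee(0.90, 0.80, 0.92)
print(f"OEE: {efficiency:.0%}")  # -> OEE: 66%

# Theoretical "opportunity": the extra output available if OEE rose
# from its current level all the way to the 100% ceiling.
opportunity = (1.0 - efficiency) / efficiency
print(f"Opportunity: {opportunity:.0%}")  # -> Opportunity: 51%
```

Because each factor is a ratio capped at 1.0, an OEE above 100% is impossible by construction, which is exactly why the mill’s original 95–105% readings signalled a flawed measure.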
So why were they upset? The off-the-cuff answer was “people don’t like looking bad”. I think that was part of it, but it ran deeper than that. The best way I can describe it is “dissonance”: that uncomfortable feeling you get when you haven’t finished something, a bit like when you think you may have left the cooker on after leaving the house.
Now dissonance is interesting. It’s not a pleasant feeling, but it’s precisely that discomfort that leads us to take action. With the foremen in the paper mill it drove the behaviour that ultimately lifted their efficiency from 65% to 80% (a 23% increase in output). Do you think they would have driven that improvement if they thought they were already running at 100%?
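The “23% increase” is worth spelling out, because a gain in percentage points and a gain in output are easy to conflate. A quick check of the figures above:

```python
before, after = 0.65, 0.80

# The OEE score rose by 15 percentage points...
point_gain = after - before

# ...but relative to the old baseline, that is ~23% more output
# from the same equipment in the same time.
relative_gain = (after - before) / before
print(f"{point_gain:.0%} points of OEE = {relative_gain:.0%} more output")
```

A 15-point OEE improvement from a 65% baseline is a 23% lift in actual production, which is the number that matters to the business.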