We live in a world where we want to measure everything, both to track progress and to make sure we are heading in the right direction. That makes sense and helps us make more rational decisions, but it is important to account for the potential adverse effects. One of the most important is Goodhart’s law, which tells us that “When a measure becomes a target, it ceases to be a good measure.”
The law is named after the British economist Charles Goodhart, who described it in a 1975 article on monetary policy, but it has since become a generalized concept applicable far beyond economics. It captures the valuable insight that when a measure becomes a target, it loses its effectiveness as an indicator, because the focus shifts from improving the actual system to merely meeting the targets. This shift often leads to manipulation and gaming of the system, and to the oversimplification of complex scenarios. It can result in negative unintended consequences, as the narrow pursuit of specific targets overshadows the original, broader objectives.
Goodhart's law is particularly relevant when incentives are tied to specific metrics. For example, if a school's performance is measured solely by its students' test scores, teachers might focus primarily on test preparation, potentially neglecting other important aspects of learning and skill development. Similarly, if a company focuses solely on hitting quarterly financial targets, it may neglect long-term strategy, innovation, or employee satisfaction.
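To make that tradeoff concrete, here is a minimal sketch in Python (all names and numbers are hypothetical, not from the source) modeling the school example: a fixed effort budget is split between test preparation and broader skill development, while the tracked metric only counts test scores.

```python
# Hypothetical toy model of Goodhart's law: a school splits a fixed
# effort budget between test prep and broader skill development.
# The tracked metric counts only test scores; the "true" educational
# value needs both sides of the split.

BUDGET = 10.0  # total effort available (arbitrary units)

def metric(test_prep: float) -> float:
    # The target: test scores rise with test prep alone.
    return test_prep

def true_value(test_prep: float) -> float:
    # Broader value is the product of prep and everything else,
    # so it collapses when either side is neglected.
    skills = BUDGET - test_prep
    return test_prep * skills

for prep in (2.0, 5.0, 8.0, 10.0):
    print(f"prep={prep:4.1f}  metric={metric(prep):5.1f}  "
          f"true value={true_value(prep):5.1f}")

# The metric climbs monotonically and peaks at prep=10.0, which is
# exactly where the true value falls to zero: the measure stopped
# being a good measure the moment it became the target.
```

In this toy model the true value peaks at a balanced split (prep=5.0 gives a true value of 25.0), yet anyone rewarded purely on the metric is pushed toward the degenerate allocation.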
Measuring success is essential, and there are situations where it makes sense to set targets on those measures. But Goodhart’s law reminds us that tradeoffs arise whenever you set a target, and it is important to keep exercising qualitative judgment. Thinking about the second-order consequences of optimizing for a specific metric helps attenuate this effect and capture the best of both worlds.
Craving more? Check out the source behind this Brain Snack!