Can Composite Indicators be Scientific?
by Stacey Barr

Perhaps we can take a scientific approach to composite indicators, but that doesn’t mean they are worth the effort.
Most guides for composite indicators (also called composite indexes or indices) are hardly scientific. Their instructions are trivial and non-specific, and they don’t reference enough (or any) of the more robust research into composite indicators, like this article on what needs to be done right, and this article on weighting complications.
After reviewing some of this research, I’m still unconvinced that composite indicators are useful in performance measurement. And I offer you three arguments against them…
ARGUMENT 1: Creating a useful composite indicator is too difficult.
If a composite indicator is to have any meaning and usefulness at all, the bare essentials that must be based on scientific rigour include:
- a specific definition of the ‘thing’ the indicator is a gauge of
- a sufficiently comprehensive set of component measures, which collectively do provide a strong gauge of the ‘thing’
- definitions of each of the component measures, so they are individually reliable and representative of the component they are measuring
- weights for combining the component measures to reflect their individual contribution to the ‘thing’ the composite indicator is a gauge of (sketched below)
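To make that last step concrete, here is a minimal sketch of how a weighted composite is typically assembled: normalise each component measure to a common scale, then combine them with weights. The components, weights, and min-max normalisation used here are illustrative assumptions, not a prescribed method.

```python
# A minimal sketch of a weighted composite indicator.
# The components, weights, and min-max normalisation are
# illustrative assumptions, not a prescribed methodology.

def min_max_normalise(values):
    """Rescale raw values to a common 0-1 scale."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# Hypothetical component measures for three cities, in different raw units.
components = {
    "air_quality":   [42.0, 55.0, 61.0],      # index points
    "median_income": [52000, 48000, 67000],   # dollars
    "green_space":   [12.5, 30.1, 22.4],      # percent of land area
}

# Hypothetical weights reflecting each component's assumed contribution.
weights = {"air_quality": 0.5, "median_income": 0.3, "green_space": 0.2}

# Normalise each component, then combine with the weights.
normalised = {name: min_max_normalise(vals) for name, vals in components.items()}
n_units = len(next(iter(components.values())))
composite = [
    sum(weights[name] * normalised[name][i] for name in components)
    for i in range(n_units)
]

print(composite)  # one composite score per city
```

Notice that every choice in this sketch (which components, which normalisation, which weights) changes the final scores and rankings. Each choice has to be scientifically defensible, which is exactly why the design work is so demanding.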
Creating a scientific composite indicator is not a trivial task. It will take much more time than a workshop or two, and much more thinking than a vote to reach consensus. The Human Development Index (HDI), for example, is a very well-known composite measure. But even it has been criticized for having a weak methodological framework.
How much rigour have you put into the design and construction of your composite indicators?
ARGUMENT 2: There isn’t a strong enough purpose for composite indicators.
While there is much written about how to create composite indicators, there is little about why we should. Composite indicators, particularly the big-picture social indicators, are generally used for ranking and comparing. But does ranking and comparing lead to improvement?
Of course not. Improvement only comes from action, not ranking or comparing. And action can only be focused on specific and observable results, not broad, abstract concepts (like livability or environmental sustainability). And specific and observable results don’t need composite indicators to monitor them, because we can create direct evidence-based measures for them.
Perhaps one of our recent PuMP Blueprint Workshop participants gives us another good reason to have composite indicators:
“In the workshop, I asked about composite/index measures that take multiple measures into account so that optimizing for just one measure at the expense of other important measures (that didn’t make it into the performance dashboard) would drag the overall score down and discourage ‘optimize for one measure at the expense of others’-type behaviour.”
It’s true; we don’t want KPI myopia, or focusing only on what’s up close. But piling all the possibly-important KPIs into a single composite measure just swings the pendulum to the opposite extreme: hyperopia, or being completely unable to focus on what’s up close. Composite indicators stop us from seeing anything up close. Rather, we need to share our focus over a set of related and important KPIs, using them in concert to manage bigger-picture performance.
How much discipline do you have to decide the fewest and most important KPIs that need improvement now, and to use them collectively?
ARGUMENT 3: What exactly a composite indicator is telling us can be hard to figure out.
To interpret real changes in performance measures, we need to separate the signals from the noise. Noise is the random routine variation that every measure will have. And the noise isn’t telling us anything about how performance is changing. It’s a distraction.
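One standard way to make this separation is with an XmR chart’s natural process limits, derived from the average moving range. Here is a minimal sketch with made-up monthly data (the technique is standard statistical process control; the numbers are assumptions):

```python
# A minimal sketch of XmR-chart natural process limits, a standard
# control-chart way to separate signal from routine variation.
# The monthly data below is made up for illustration.

def xmr_limits(series):
    """Return (centre line, lower limit, upper limit) for an XmR chart."""
    mean = sum(series) / len(series)
    moving_ranges = [abs(b - a) for a, b in zip(series, series[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    # 2.66 is the standard XmR constant that converts the average
    # moving range into natural process limits.
    return mean, mean - 2.66 * avg_mr, mean + 2.66 * avg_mr

monthly_kpi = [102, 98, 105, 97, 101, 99, 104, 96, 103, 100]
centre, lower, upper = xmr_limits(monthly_kpi)
print(f"centre={centre:.1f}, limits=({lower:.1f}, {upper:.1f})")
# Points outside the limits are signals worth investigating;
# points inside are routine variation (noise).
```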
Some measures have a lot of noise, others not so much. But the more noise a measure has, the harder it is to pull out the signals. Dr. Bill McNeese illustrated this recently with his blog post about a noisy sales KPI. So what happens when you combine lots of measures into a composite indicator?
What happens is that the noise from each measure compounds, and any genuine change in one component gets diluted by the routine variation of all the others. The result is a composite indicator with so much noise that it’s nigh impossible to detect signals of change. Virtually all the variability in the composite indicator over time is just random.
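A minimal simulation makes the dilution visible (the component count, noise levels, and size of the step change are all assumptions for illustration): one component undergoes a genuine step change, yet the equally weighted composite barely shifts relative to its own routine variation.

```python
# A minimal simulation (hypothetical data) of signal dilution in an
# equally weighted composite of five noisy component measures.
import random

random.seed(42)
N_PERIODS, N_COMPONENTS = 24, 5

# Each component is routine noise around a stable level; from period 12
# onward, component 0 undergoes a genuine step change (the "signal").
series = []
for t in range(N_PERIODS):
    row = [random.gauss(100, 5) for _ in range(N_COMPONENTS)]
    if t >= 12:
        row[0] += 15  # a real, sizeable shift in one component
    series.append(row)

component_0 = [row[0] for row in series]
composite = [sum(row) / N_COMPONENTS for row in series]

def shift_in_spreads(values):
    """Before/after change in the mean, in units of the before-period spread."""
    before, after = values[:12], values[12:]
    mean_b = sum(before) / len(before)
    mean_a = sum(after) / len(after)
    spread = (sum((x - mean_b) ** 2 for x in before) / len(before)) ** 0.5
    return (mean_a - mean_b) / spread

print(f"shift in component 0: {shift_in_spreads(component_0):.1f} spreads")
print(f"shift in composite:   {shift_in_spreads(composite):.1f} spreads")
# The shift stands out clearly in the component itself, but in the
# composite it is averaged across five measures, so relative to the
# composite's own routine variation it is far weaker and harder to spot.
```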
What exactly do you want to know from any performance measure or indicator you use?
Composite indicators take more than they can give.
Best practice performance measurement, for the purpose of performance improvement, is to keep a sharp focus on each result that matters most, so we can respond to improve what matters. Composite indicators do not sharpen our focus; they blunt our ability to choose and take performance-improving action.
What feels like a simpler way to monitor performance is, in fact, a much more complicated way to have less actionable information about performance.
Composite indicators are not worth the immense effort they require to construct, when all they do is hide the problem of too many measures, hide true signals of change, and hide the specific results we need to act on.