Does Improving Data Accuracy Make Historic KPI Values Useless?
by Stacey Barr

Improving data accuracy doesn’t mean your historic KPI values are useless. Just don’t confound data integrity with performance improvement.
When a client in the energy industry shifted his data collection systems from manual to electronic, he asked me what impact that would have on the performance measures based on that data. In particular, he was concerned about the change in accuracy from the old data to the new:
“Should a KPI reflect the analysis from both data (manual and electronic) on a single visual or should the analysis be refreshed using the new electronically recorded (and more accurate and realistic) data in a new visual?”
Let’s start by addressing this term, ‘data accuracy’.
Data accuracy is only part of what matters.
It helps to think of two components of how much integrity data has:
- Accuracy = how closely the data matches reality. If we throw darts at a dartboard, accuracy is about how close we can get them to the bullseye. If data is more accurate, it more closely matches the true level of performance, rather than under- or over-estimating it, e.g. everyone weighs the same thing carefully, but the scales are calibrated wrongly, so every reading is off in the same direction.
- Precision = how much random error the data has. When we throw darts at that dartboard, precision is how close the darts are to each other. If data is more precise, it’s more predictable and we can pick up changes in performance more quickly, e.g. the scales are accurate, but everyone weighs that thing differently and carelessly, so each gets a different reading. (The simulation sketch after this list illustrates the difference.)
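To make the dartboard analogy concrete, here’s a minimal simulation sketch in Python. The bias and spread values are invented for illustration: bias shifts every reading in the same direction (an accuracy problem), while spread scatters readings randomly (a precision problem).

```python
import random

random.seed(42)

TRUE_WEIGHT = 50.0  # the real value we are trying to measure

def simulate_readings(bias, spread, n=8):
    """Simulate n readings with a systematic bias (accuracy problem)
    and random measurement error (precision problem)."""
    return [TRUE_WEIGHT + bias + random.gauss(0, spread) for _ in range(n)]

# Accurate but imprecise: the scales are right, but everyone weighs carelessly.
accurate_imprecise = simulate_readings(bias=0.0, spread=3.0)

# Precise but inaccurate: everyone weighs carefully, but the scales are off.
precise_inaccurate = simulate_readings(bias=4.0, spread=0.3)

for label, readings in [("accurate but imprecise", accurate_imprecise),
                        ("precise but inaccurate", precise_inaccurate)]:
    mean = sum(readings) / len(readings)
    spread = max(readings) - min(readings)
    print(f"{label}: mean={mean:.1f}, range={spread:.1f} (true value {TRUE_WEIGHT})")
```

The first series averages close to the true value but jumps around; the second is tightly clustered around the wrong value. Both are data integrity problems, just different kinds.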
I like the term ‘data integrity’ to encapsulate both its accuracy and precision. When we say we want more accurate data, we probably mean we want more data integrity.
Data is neither accurate nor inaccurate, precise nor imprecise. It’s always on a sliding scale, and of course we should improve it when we can and when it matters. The trick is working out how to improve data integrity in a way that doesn’t ruin our KPI interpretation.
Data with lower integrity can still be useful.
Even if a measure’s data is not highly accurate, we can still get a sense of whether it’s going up or going down over time. And even if it’s not precise, we can still use techniques to help us distinguish signals of performance changes from the variation in the measure that’s caused by things like data integrity problems.
In PuMP, we use XmR charts to make this easier. No matter how much variation a measure has, as long as the underlying data integrity stays the same, any signals we see in the measure are likely due to our change or improvement initiatives.
If the data integrity was relatively low, our XmR chart would likely have wide natural process limits. This is because the measure’s variability would be inflated by variability in the data collection process. A consequence of more variability is less certainty, and so we’d need more measure values before we could see a true signal of performance change. It might mean our decisions are a bit delayed and not quite as effective, but a measure based on lower integrity data is not necessarily useless!
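To illustrate why, here’s a minimal sketch of the standard XmR limit calculation (centre line plus or minus 2.66 times the average moving range). The two data series are invented for the example; notice how the noisier, lower-integrity series produces much wider limits.

```python
def xmr_limits(values):
    """Compute the centre line and natural process limits for an XmR chart,
    using the standard constant 2.66 applied to the average moving range."""
    centre = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    return {"centre": round(centre, 1),
            "lower": round(centre - 2.66 * avg_mr, 1),
            "upper": round(centre + 2.66 * avg_mr, 1)}

# Lower-integrity data: noisy measurement inflates the moving ranges,
# which widens the natural process limits.
low_integrity = [52, 43, 58, 41, 55, 47, 60, 44]
high_integrity = [50, 51, 49, 50, 52, 49, 51, 50]

print("low integrity: ", xmr_limits(low_integrity))
print("high integrity:", xmr_limits(high_integrity))
```

Wider limits mean a point has to move further from the centre line before it counts as a signal, which is exactly why lower-integrity data delays (but doesn’t destroy) our insights.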
On the flip side, if we implemented no change or improvement initiatives except an improvement to the data integrity, we may see a signal in the measure too. This signal would likely be due to the change in data integrity. So now we need to be careful…
Interpreting KPI signals when data integrity changes can be tricky.
If we’ve done some work to improve our data integrity, it will likely show up as a signal in our measure. If there is a signal, we should flag it in our graph with a label that says the signal is due to the data collection improvement. (If there isn’t a signal, it might be surprising, but perhaps our data integrity didn’t really improve. Even so, it’s still worth putting that flag in our graph.)
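A label like that can go directly on the chart. Here’s a hypothetical sketch using matplotlib, with invented values and an invented changeover point between periods 10 and 11:

```python
import matplotlib.pyplot as plt

periods = list(range(1, 17))
values = [52, 43, 58, 41, 55, 47, 60, 44, 53, 46,   # manual data collection
          50, 51, 49, 50, 52, 49]                    # electronic data collection

fig, ax = plt.subplots()
ax.plot(periods, values, marker="o")

# Mark the changeover and label the signal so it isn't read as a
# change in performance.
ax.axvline(x=10.5, linestyle="--", color="grey")
ax.annotate("Signal due to data collection improvement",
            xy=(11, values[10]), xytext=(6, 62),
            arrowprops=dict(arrowstyle="->"))
ax.set_xlabel("Measurement period")
ax.set_ylabel("KPI value")
plt.show()
```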
For my client, after the new electronic data collection starts, he might see a signal in his measure around the same time. This signal would likely relate to the change in data integrity, not to a change in the performance of what’s being measured. In other words, the signal would reflect a change in the measure’s accuracy and precision, not a change in the underlying performance result.
The tricky part comes if we had hoped to see a signal in our measure showing that one of our previous improvement initiatives had worked (or not). If we expected this signal around the same time as the data integrity improved, we have a problem. The two changes that happened – our performance improvement initiative and our data integrity improvement – are now blended.
This is called confounding, and it means we can’t test the effect of one change on performance if we made other changes around the same time. The only way to know whether our improvement initiative worked is to continue the old data process for a while, or delay the new data process, so the two changes don’t overlap.
If we avoided confounding our measure, and weren’t expecting to see any signal of performance change other than the data integrity effect, we have the opportunity to make a fresh start with our measure.
Set a new baseline for the KPI after the new data process.
Personally, I like to keep the historic KPI values based on the old data in the graph, along with the label I mentioned earlier. And we can calculate a new baseline of performance for our measure, starting from the date at which the new data process kicked in.
Based on the minimum rules for XmR charts, we’d need to wait for at least five measurement periods after the new data process began to establish a new baseline. And we need that baseline before making any further changes to performance. Otherwise, we won’t be able to test whether those new changes worked; we’d just be confounding again.
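As a sketch of what that looks like in code, here’s a hypothetical helper (reusing the xmr_limits function from the earlier sketch) that recalculates the baseline from the post-changeover values only, and refuses to do so until at least five new values exist:

```python
def new_baseline(values, change_index, min_points=5):
    """Recalculate XmR baseline limits using only the values recorded
    after the new data process began (at position change_index)."""
    post_change = values[change_index:]
    if len(post_change) < min_points:
        raise ValueError(f"Need at least {min_points} post-change values; "
                         f"only have {len(post_change)}. Keep waiting.")
    return xmr_limits(post_change)

# Suppose the electronic data collection started at period 11
# (index 10 in the list):
measure = [52, 43, 58, 41, 55, 47, 60, 44, 53, 46,   # old, manual data
           50, 51, 49, 50, 52, 49]                    # new, electronic data
print(new_baseline(measure, change_index=10))
```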
You can see, I hope, how important it is to think about the frequency of calculating our measure values. If we don’t measure frequently enough, we have to wait too long before we get insights from our measures. We need to make sure it’s regular enough to establish new baselines quickly and pick up signals of change quickly. (Of course, any measure’s calculation cadence depends on a few other things too.)
Have you confounded your KPI interpretation by implementing data integrity improvement at the same time as performance improvement?