INTERVIEW TRANSCRIPT: Doug Hubbard, on imprecise measures

by Stacey Barr

STACEY: Doug Hubbard, of Hubbard Decision Research, is an internationally recognized expert in the field of IT value, with over 18 years’ experience in IT management consulting, including 10 years’ experience specifically in teaching organizations to use his AIE (Applied Information Economics) method. But today I’m talking to him about the challenge of getting practical measures when people are overly concerned about precision.

Doug Hubbard, author of How to Measure Anything

Doug, for years now you’ve been coaxing people in the IT field to change their views from “IT is too intangible to measure” to “everything is measurable.” And I can attest that it’s not just in IT that this needs to happen. Why has this been a hard transformation for them to make?

DOUG: IT often sees measurement as a choice between perfect 100% certain precision and nothing. Since they see perfect certainty as unachievable, they opt for no measurement at all. They’ve overlooked the usefulness of a third option: the “good enough” measurement. There may be three reasons why IT seeks illusory precision over something less precise but still useful.

STACEY: What have you found are the reasons this third option – and in my view often the only option – has been overlooked?

DOUG: Well, measurement doesn’t mean what they think it means. In my book, I explain that the practical scientific understanding of measurement is quite different from how IT management often uses the term. When I ask CIOs and other IT managers at my seminars what measurement means, I often get an answer like “Assigning a specific value”. This is wrong in at least two ways.

First, in science values aren’t simply “assigned”, they are based on observation. This is rarely the case in IT (when’s the last time you saw a random survey or controlled experiment used to measure something in IT?). Second, a measurement is seen as a reduction in uncertainty, almost never the elimination of uncertainty. In effect, science uses the term measurement to mean “observations that reduce uncertainty about a quantity”.

IT, on the other hand, thinks of measurement more like accountants think of it: absolutely precise, but often arbitrary and not always based on an observation. The book value of an asset, for example, is not based on any observation, just the “accepted procedure”, and it may be very different from market value. IT should deal with uncertain reality, not a false sense of precision. When we make a real measurement, it is expressed as a range, like a “90% Confidence Interval”, that shows our uncertainty about that measurement. Further measurements should make this range narrower, but will rarely shrink it to a point value.
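
To make “measurement as a range” concrete, here is a minimal sketch (the task times and sample sizes below are made up, not from the interview) showing how a 90% confidence interval narrows as more observations come in, without ever collapsing to a single point:

```python
# Minimal sketch: a 90% confidence interval as a statement of uncertainty
# that narrows (but rarely collapses to a point) as we observe more.
# All numbers are hypothetical.
import random
import statistics

random.seed(42)

def ci_90(sample):
    """Return a 90% confidence interval for the mean (normal approximation)."""
    mean = statistics.mean(sample)
    sem = statistics.stdev(sample) / len(sample) ** 0.5  # standard error of the mean
    z = statistics.NormalDist().inv_cdf(0.95)            # ~1.645 for a 90% interval
    return mean - z * sem, mean + z * sem

# Simulated "hours per week spent on the activity" for 5,000 employees.
population = [max(0.0, random.gauss(6.0, 2.0)) for _ in range(5000)]

for n in (5, 30, 200):
    low, high = ci_90(random.sample(population, n))
    print(f"n={n:>3}: 90% CI is {low:.1f} to {high:.1f} hours")
```

Each extra batch of observations tends to tighten the interval, which is exactly the “reduction in uncertainty” Doug describes.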

STACEY: So that’s one thing you can do – reframe measurement as a process that helps to reduce uncertainty, not eliminate it. Are there any more hurdles to jump over before they get comfortable enough to start measuring imperfectly?

DOUG: Yes, and that’s accepting the idea that the presence of noise does not mean a lack of signal. Many managers, IT and otherwise, start anticipating potential errors in any measurement and assume that the existence of any kind of error undermines the value of the measurement.

A client of mine was considering a way to measure how much time employees spend on some activity the company wishes to automate. The solution I proposed was to conduct a random survey of the staff over the course of a few weeks, in which employees were asked to describe how much time they spent on that activity on that particular day. As soon as the idea was proposed, the mid-level IT managers were almost in a contest with each other to think of all the errors such a survey could have in it. Would the survey be truly random if people who spent more time in the activity were more likely to respond to the survey? Would some people be dishonest in their responses? And so on. Yet when the survey was conducted, they found the potential errors they identified could not possibly account for the findings.

The people who had no stake in the outcome gave about the same answers as people who had a stake in the outcome (addressing potential bias in the responses). The response rate was 95%, so it is unlikely that the remaining 5% could have changed the findings by much. And simple statistics showed how unlikely it would be that, by chance alone, we happened to pick employees who spent more time in this activity.
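
As a back-of-envelope illustration of why a 5% non-response rate cannot move the result by much, here is a hypothetical worst-case bound (the observed mean and answer bounds are assumptions for illustration, not the client’s actual numbers):

```python
# Hypothetical worst-case check: even if every one of the missing 5% of
# respondents had given an extreme answer, how far could the average move?
observed_mean = 6.0                     # hours/week reported by the 95% who responded
response_rate = 0.95
extreme_low, extreme_high = 0.0, 20.0   # assumed bounds for any single answer

worst_low = observed_mean * response_rate + extreme_low * (1 - response_rate)
worst_high = observed_mean * response_rate + extreme_high * (1 - response_rate)
print(f"True mean must lie between {worst_low:.1f} and {worst_high:.1f} hours")
# -> roughly 5.7 to 6.7 hours: non-response alone cannot swamp the signal.
```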

The managers’ original presumption was that merely identifying a potential error is enough to determine that any survey would be useless. But if we remember the definition of measurement above, we see that as long as the error is less than the previous state of uncertainty, it counts as a measurement. They were presuming that these errors must be too great to allow for any uncertainty reduction, but without conducting the survey and doing the math, there is no basis for such a claim.

The fact is that they were making assumptions about how common these errors are, and what effect they would have on the outcome, without having any idea of the relative frequencies of these problems. To put it another way, there is more error in their “error identification” method than the measurement itself is likely to have.

STACEY: Ha! I like that! It would be a really interesting experiment to find out how big the error in the measurement could get before it drowned out signals in the measure. Wouldn’t that help convince the skeptics?

DOUG: Actually, a formula for the value of information has been around since shortly after WWII. It is used to compute the monetary value of information in a wide variety of industries and government agencies. Ironically, the very existence of such a formula is mostly unknown to IT management.

In my consulting practice I compute something the decision sciences call the “Expected Value of Perfect Information” (EVPI) and the “Expected Value of Sample (Partial) Information” (EVSI). Of course, the cost of perfect information would almost always be greater than the value of perfect information. In fact, it is almost always the case that the “biggest bang for the buck” is the initial, small amount of uncertainty reduction in a measurement.
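
To show the flavour of the calculation, here is a hypothetical sketch of EVPI for a simple automate-or-not decision. The scenarios, probabilities, and dollar figures are invented for illustration and are not Hubbard’s AIE models:

```python
# Hypothetical EVPI illustration with made-up numbers (not an AIE model).
# Decision: automate a task or not. The payoff depends on an uncertain quantity
# (hours per week actually spent on the task), modelled here as three scenarios.
scenarios = [        # (probability, hours per week spent on the task)
    (0.3, 2.0),
    (0.5, 6.0),
    (0.2, 12.0),
]
VALUE_PER_HOUR = 50_000 / 52 / 5   # rough $ value of an hour saved (illustrative)
ANNUAL_COST = 40_000               # annual cost of the automation project

def payoff(automate, hours):
    if not automate:
        return 0.0                       # do nothing: no savings, no cost
    saved = hours * 52 * VALUE_PER_HOUR  # dollar value of annual hours saved
    return saved - ANNUAL_COST

# Best expected payoff with current (uncertain) information:
ev_automate = sum(p * payoff(True, h) for p, h in scenarios)
ev_do_nothing = sum(p * payoff(False, h) for p, h in scenarios)
ev_without_info = max(ev_automate, ev_do_nothing)

# With perfect information we could pick the best choice in each scenario:
ev_with_info = sum(p * max(payoff(True, h), payoff(False, h)) for p, h in scenarios)

evpi = ev_with_info - ev_without_info   # upper bound on what any measurement is worth
print(f"EVPI is roughly ${evpi:,.0f}")
```

The EVPI puts a ceiling on what any measurement of that uncertain quantity could be worth, so it tells you how much measurement effort is justified before you spend it.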

I tell my clients to start by taking a few observations (a random sample, a controlled experiment, etc.) and see if the results are surprising in some way. Sometimes the initial observations are surprising enough that they significantly reduce the initial range for a value, and further observations may not be justified.

STACEY: In my previous life as a survey statistician, we’d use pilot samples in the same way, to estimate how much error there would be in the measurement. And this would help us choose a sample size only just large enough to make the measure useful without costing more than the value it would give. Doug, what’s the take-home point you’d like to leave Mezherment readers with, regarding the imperfect nature of measurement?
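
To make Stacey’s pilot-sample point concrete, here is a minimal sketch (the pilot data and target margin are assumptions, not from any real survey) of using a pilot’s standard deviation to choose a sample size that is only just large enough for a 90% confidence interval of tolerable width:

```python
# Hypothetical sketch: use a pilot sample to pick a "just big enough" sample size,
# i.e. the n at which the 90% CI half-width falls within an acceptable margin.
import math
import statistics

pilot = [4.5, 7.2, 5.8, 6.1, 9.0, 3.3, 6.7, 5.5]   # made-up pilot measurements (hours)
target_margin = 0.5                                 # acceptable +/- margin (hours)

sigma_hat = statistics.stdev(pilot)                 # error estimate from the pilot
z = statistics.NormalDist().inv_cdf(0.95)           # ~1.645 for a 90% CI
n_needed = math.ceil((z * sigma_hat / target_margin) ** 2)

print(f"Estimated sample size needed: about {n_needed} respondents")
```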

DOUG: These three concepts should help IT – and anyone hung up on precision – start to make usefully imperfect measurements. Of course, they require thinking about measurement more like a statistician, scientist, or actuary would and less like an accountant normally would. Adopting these ideas will not only encourage people to settle for “good enough” measurements, but will probably also cause them to focus on very different measurements in the first place.

In the last 12 years, I’ve completed 55 major IT valuation studies, and I’ve seen that most things IT measures have little or no information value, while the measurements with the highest information value tend to be the things IT almost never measures. This means that the measurements IT seeks, even if they achieve infinite precision, probably have no bearing on pragmatic business decisions.

You should always choose imperfect but relevant measurements over arbitrarily precise and irrelevant measurements.

STACEY: Measures just need to have enough precision to give you the signals you need to make meaningful improvements to the business. Makes sense. Thanks Doug for sharing your ideas.

