How to Measure Design Success in a Meaningful Way
by Stacey Barr

Design is a creative process; it’s about making something new or different that hasn’t already been made. So, can we really measure design success in any meaningful way?
Just because something is creative, never-before-seen, intangible, highly qualitative, complex, and even subjective, doesn’t mean we’re not interested in making it successful.
Some of the wisest words I’ve ever heard in the performance measurement space are Douglas Hubbard’s Clarification Chain:
“If something is better, then it is different in some relevant way. If it is different in some relevant way, then it is observable. If it is observable, then it can be counted.”
Creative processes, and their outcomes, most definitely are about making something better. And if we could never observe whether, or by how much, that something is better, then what’s the point of the creative process in the first place?
Measuring things, *when we do measurement well*, tells us more than we can know without measuring. And doing measurement well means being logical, practical and comprehensive as we expand Doug’s Clarification Chain into practice.
Let’s do this now, using design engineering as our example. Thanks go to Measure Up subscriber John K. for providing the example when he asked me this question:
“I am trying to evaluate what is the best performance measure for the design engineering world?”
If something is better…
Expanding the first part of Douglas Hubbard’s Clarification Chain requires us to be very specific about what the “something” is. It will be tempting to skip this and assume everyone already knows. But often, assumptions like this turn into roadblocks later on.
To get a handle on what the “something” is for our design engineering example, my first question to John K. was “Who are your design engineering customers?” John responded with:
- For consultancy-type projects: mine developers and government entities (e.g. transport).
- For large infrastructure projects: a construction joint venture or a construction contractor who is responsible for delivering or building the project.
Customers are a great place to start to figure out what the “something” is, because it’s customers that ultimately define success. The “something” that gets delivered to John’s customers is basically a final design package, which is certified and provides all documentation and drawings needed for construction to begin.
… then it’s different in some relevant way.
Expanding the next part of Douglas Hubbard’s Clarification Chain requires us to be very specific about what about our “something” should be “different”.
To get a handle on what “different” means in the design engineering context, my second question to John was “What do customers consistently complain about?” John explained that customers do not consistently complain, but they do criticise the quality of some of the work:
- The format used throughout the design package was inconsistent, or it contained spelling or typographical errors.
- For designs that include multiple disciplines (like civil, structural, architectural, electrical), the design contained disconnects (e.g. a structural column passed through the middle of the bathtub).
- Not all of the information necessary to build what is on the drawings was provided (e.g. a light is shown to be mounted in a ceiling, but no switch or power supply is included).
- Not all the requirements of the original scope of services were met by the design package.
Now we have specific definitions of the aspects that might define design success.
If it is different in some relevant way, then it is observable.
Expanding the next part of Douglas Hubbard’s Clarification Chain means we need to decide how we could observe those aspects of the design that we want to be “different”.
Take, for starters, this aspect of design success (reworded to describe what we want): The format used throughout the design package is consistent and free of spelling and typographical errors. This is observable in the following ways:
- The design team gets a call or email from customers about any errors found when they read the design package.
- A proof-reader notices (and corrects) any errors in the final draft of the design package.
- The design engineer managing the project notices instances in the design package where standard formatting was not followed.
And now take the next aspect of design success: For designs that include multiple disciplines, the design coordinates seamlessly across disciplines. This might be observable like so:
- The design team gets a call or email when the customer requests a correction to the design that relates to disconnects between electrical, architectural, structural, etc.
- The customer or their contractors describe times when construction was held up due to disconnects between electrical, architectural, structural, etc.
You will likely think of additional examples of observable evidence for these aspects of design success. The most relevant will come from the team that owns and works in the design process.
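Observable evidence is only useful if it gets captured consistently. Here is a minimal sketch of one way the design team could log each call or email about a cross-discipline disconnect. The record structure and every field name are my assumptions for illustration, not something John described:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DisconnectReport:
    """One observed instance of a cross-discipline disconnect.

    All field names here are hypothetical; adapt them to whatever
    your design team actually records about each customer contact.
    """
    design_id: str                 # which design package the report is about
    reported_on: date              # when the customer call or email arrived
    disciplines: tuple[str, ...]   # e.g. ("structural", "architectural")
    description: str = ""          # what the customer reported
```

The same pattern works for the other kinds of evidence too: a proof-reader’s corrections, or formatting deviations noticed by the managing design engineer.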
If it is observable, then it can be counted.
Expanding the final part of Douglas Hubbard’s Clarification Chain means exploring ways to quantify the observable evidence of design success.
You’ll see how this works with just one of the examples we’ve been using for engineering design: For designs that include multiple disciplines, the design coordinates seamlessly across disciplines.
Here are potential ways to quantify the observable evidence listed above for that example. We might quantify “the design team gets a call or email when the customer requests a correction to the design that relates to disconnects between electrical, architectural, structural, etc.” this way (a small calculation sketch follows the list):
- Total number of customer requests for corrections relating to disconnects that we received, for all designs, by month.
- Average number of customer requests for corrections relating to disconnects per design, for all designs, by month.
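As a minimal sketch of how those two measures could be calculated, assuming reports are logged as `DisconnectReport` records (from the earlier sketch) and that you separately know how many designs were worked on each month (the `designs_per_month` input is an assumption for illustration):

```python
from collections import defaultdict

def correction_request_measures(reports, designs_per_month):
    """Total and average disconnect-related correction requests, by month.

    reports           -- iterable of DisconnectReport records
    designs_per_month -- hypothetical dict like {"2024-06": 12} giving the
                         number of designs worked on in each month
    Returns {month: (total_requests, average_requests_per_design)}.
    """
    totals = defaultdict(int)
    for report in reports:
        totals[report.reported_on.strftime("%Y-%m")] += 1
    return {
        month: (count, count / designs_per_month[month])
        for month, count in totals.items()
    }
```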
Then doing the same for “the customer or their contractors describe times when construction was held up due to disconnects between electrical, architectural, structural, etc.” (again, sketched in code after the list):
- Total number of instances when construction was held up due to disconnects in the design, by month.
- Total hours of delay in construction caused by disconnects in the design, by month.
- Average perception of severity of impact of disconnects in the design, provided by customers, by quarter.
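And a similar sketch for these three hold-up measures, assuming each hold-up is logged with a start date, the hours of delay it caused, and a customer-supplied severity rating. All of those fields are hypothetical, not data John described:

```python
from collections import defaultdict

def holdup_measures(holdups):
    """Count, total delay hours, and average perceived severity of hold-ups.

    holdups -- iterable of hypothetical records, each with a started_on
               date, a delay_hours number, and a severity rating
               (say, 1 to 5) supplied by the customer.
    """
    count_by_month = defaultdict(int)
    hours_by_month = defaultdict(float)
    severity_by_quarter = defaultdict(list)
    for h in holdups:
        month = h.started_on.strftime("%Y-%m")
        quarter = f"{h.started_on.year}-Q{(h.started_on.month - 1) // 3 + 1}"
        count_by_month[month] += 1
        hours_by_month[month] += h.delay_hours
        severity_by_quarter[quarter].append(h.severity)
    avg_severity_by_quarter = {
        q: sum(ratings) / len(ratings)
        for q, ratings in severity_by_quarter.items()
    }
    return count_by_month, hours_by_month, avg_severity_by_quarter
```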
The final choice of which potential measures to select will depend on the feasibility of getting the data, and on the relative usefulness of each measure. Again, it’s the design team that can make the best call on this.
Thinking a bit more deliberately is the key to measuring design success in a meaningful way.
Douglas Hubbard’s Clarification Chain is timeless wisdom, and practical advice for those who too quickly discount the measurability of their goals.
It was one of the inspirations for PuMP’s Measure Design technique when I created it twenty years ago. The Measure Design technique blends the Clarification Chain into a practical procedure for designing evidence-based measures for just about any goal that matters. Try it today, with one of your design success goals.
TAKE ACTION:
Is there any area of your organisation’s work that creates new or different things or experiences? How could these ideas about how to measure design success help to improve the performance of those areas?