I was watching Corey Haines’ video, and I had these thoughts:
The challenge I see is not so much gathering the metrics. I have been using metrics since the days of PC-Metric and PC-Lint (IIRC). I would try to get my code and designs as good as possible, without being too crazy about it all.
Later, I added 60+ metrics and 60+ audits to Together; you could even trend your results to see progress over time with a given code base. Big whoop. I even had dreams of anonymously uploading audits & metrics to a website so people could collaborate on arriving at good, meaningful values for various metrics. (I routinely got asked, “What is a good level for metric x?”)
Yeah, so we all know: “You can’t improve what you don’t measure.”
But what are we trying to improve? Quality? Reliability? Agility to make changes? Profit?
Just how do we correlate a measurement to a desired outcome? Can we tie a set of metrics to their impact on the business goals for the software? Less complexity equals more profit and more (happy) customers?
Or do we stop short of that and tie metrics to achieving “quality” and presume that if we target a given application to meet the “right” amount of quality, the business value will naturally follow?
This is a difficult conundrum for our industry. But we do have to start somewhere.
In the world of engineering, there exist measurements that can be tied to desired performance and cost.
We need something similar if we want to mature beyond seat-of-the-pants, gut-feel techniques.
I am sure some folks have it down to a science… and for them, it must be a nice competitive advantage that is probably hard to share publicly.