Steve Gordon posted the following in response to the original poster.
Original Post:
I explained, in as simple words as possible, the competitive advantage they could get if they gave agile a chance, so I hope they will agree to a pilot project. Otherwise, how can they tell whether they like chocolate cake without tasting it?
Steve replies:
With no measurable problems, what would be the success criteria of a pilot project anyway? Do you really expect them to adopt agile just because they liked a taste of it, even though it did not measurably improve anything?
IMHO, lack of “measurement” is the albatross around IT’s neck…
Heck, I can’t even prove why my design is better than the next guy’s with anything other than a damn-sure guarantee that his design will be worse. I base this solely on experience. Yeah, I suppose I could throw out a few metrics… but tying those to “success” or cost is hard to do. One of the problems is that BOTH solutions will work. I know he could get his solution to work, but I am also pretty sure it would be overkill, a nightmare to maintain, and difficult to extend.
I am tempted to put up the two competing examples and get folks to comment… one of these days.
This is generally not so difficult to do in a “hard” engineering discipline. One can usually compute the costs of two solutions, roll forward estimates of how each design will cover future scenarios, determine the value proposition going forward, and choose which solution is best for the business.
Until we get to this point in Software Engineering, it will forever be a fool’s challenge to “prove” that one solution or process is the right way to go. (Not to mention the excessive variability that is inherent in software’s very nature.)
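To make the “roll forward estimates” idea concrete, here is a hedged sketch of that kind of comparison in Python. Every figure is invented purely for illustration; a real analysis would substitute your own build estimates, maintenance rates, and change scenarios with their probabilities.

```python
# Hypothetical comparison of two designs by rolled-forward lifetime cost.
# All numbers below are invented for illustration only.

def total_cost(build_cost, annual_maintenance, change_scenarios, years=5):
    """Estimate lifetime cost: build + maintenance over `years` +
    the expected cost of future change scenarios (probability * cost)."""
    expected_changes = sum(prob * cost for prob, cost in change_scenarios)
    return build_cost + annual_maintenance * years + expected_changes

# Design A: cheaper to build, but costlier to maintain and change.
design_a = total_cost(
    build_cost=100_000,
    annual_maintenance=30_000,
    change_scenarios=[(0.8, 50_000), (0.3, 90_000)],  # (probability, cost)
)

# Design B: more up-front work, but changes are cheaper later.
design_b = total_cost(
    build_cost=140_000,
    annual_maintenance=20_000,
    change_scenarios=[(0.8, 15_000), (0.3, 25_000)],
)

print(f"Design A five-year cost: {design_a:,.0f}")  # 317,000
print(f"Design B five-year cost: {design_b:,.0f}")  # 259,500
```

With these made-up inputs, Design B wins despite its higher build cost — exactly the kind of “value proposition going forward” calculation that harder engineering disciplines take for granted.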
If you have two designs before implementation (I know that’s a really big if) you can run a series of metrics to calculate your “ilities”. Each design can have a quantifiable value in:
Maintainability – a more maintainable product will SIGNIFICANTLY reduce long-term costs. CMM came about when the US Navy discovered that 50% of all delivered products did not function as planned and that maintenance costs were double development costs.
Adaptability – how well can the s/w product adapt to changing environments?
Extensibility – how well can this product grow to add new features?
There are more “ilities” here to consider but back to the point…
Once you can quantify these prior to implementation, you can point to the better solution given the customer’s priorities. There are products out there that will analyze a design in moments, compare it against industry-level standards, and present the results in Kiviat charts and the like, so that it is easy to explain to your customer.
Use metrics to quantify your analysis and you won’t have to defend your choice on experience alone. In the end, your customer will value your experience more because you can back it up with numbers.
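The “ilities” scoring above can be sketched as a simple priority-weighted comparison. The weights and per-design scores below are hypothetical; in practice they would come from a design-analysis tool or a structured design review, not from thin air.

```python
# Hedged sketch: scoring two candidate designs on the "ilities" above,
# weighted by the customer's priorities. All values are invented.

# Customer priorities (weights sum to 1.0).
weights = {"maintainability": 0.5, "adaptability": 0.2, "extensibility": 0.3}

# Per-design quality scores on a 0-10 scale (hypothetical).
design_a = {"maintainability": 6, "adaptability": 8, "extensibility": 5}
design_b = {"maintainability": 9, "adaptability": 6, "extensibility": 8}

def weighted_score(scores, weights):
    """Priority-weighted quality score for one design."""
    return sum(weights[k] * scores[k] for k in weights)

for name, scores in [("A", design_a), ("B", design_b)]:
    print(f"Design {name}: {weighted_score(scores, weights):.1f}")
```

Here Design B scores higher because the customer weights maintainability most heavily — the same data with different priorities could flip the answer, which is precisely why the priorities must be captured first.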
John Lamb
Sr Principal Software Engineer
L-3 Communications
Wasn’t the albatross a good omen? Unless you hang a dead one around the neck of IT, I suppose… but then wouldn’t a live one be worse?
By the way, I forgot to mention in my previous comment: once you identify your customer’s priorities for the product, you can better select the agile methodology to implement it. That is a further benefit of collecting metrics early. For example, if their priority is extensibility, then Feature Driven Development (FDD) might be more beneficial than some fast-track approach.
If they want to reduce costs during development, then how will they feel during maintenance? That is a good argument for agile as a whole, but also a potential indicator that pair programming would not be a good idea for this customer.
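The priority-to-methodology idea above could be sketched as a toy lookup. The mappings simply mirror the examples in this comment (extensibility → FDD, cost pressure → a leaner approach); they are illustrative assumptions, not a serious recommendation engine.

```python
# Toy illustration: mapping a customer's top priority to a candidate
# approach. The entries just echo the examples in the comment above.

priority_to_approach = {
    "extensibility": "Feature Driven Development (FDD)",
    "low development cost": "lean/fast-track (but budget for maintenance)",
}

def suggest(priority):
    """Return a candidate approach for a priority, or flag a discussion."""
    return priority_to_approach.get(priority, "no clear fit; discuss trade-offs")

print(suggest("extensibility"))  # Feature Driven Development (FDD)
```

A real selection would weigh several priorities at once, of course — the point is only that the metrics you collect early directly inform which methodology you pitch.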
-John Lamb
LOL on the dead albatross not being such a good omen. The thing I recall from dozens of conversations with Together customers was the questions surrounding what the values should be for various metrics.
And yeah, we produced cool Kiviat graphs, but so few people knew what to do with them :-)