Scrum Dust-Up

This post is in response to this:

Kripanidhi said the following on 10/25/10 9:34 AM:

Scott,

It may have been a very interesting and useful idea to have included one more question in your survey:

“How many Certified Scrum …..Masters, Teachers, etc know how to code or have ever written a piece of software that was paid for ? ”

I could easily speculate that figure at less than 50%. Can guys who have never hacked in their lifetime, call themselves Agile ? and worse still teach others how to hack and Scrum ?

Food for Thought

I view Scrum more as a form of Project Management. Nothing more. Nothing less. But I might be mistaken. I am not certified. At least not in that way.

Anyway, given that as my starting point… I am not sure if teaching how to do a particular style of Project Management (e.g., CSM) requires knowing how to do the work that will result in the end goal. AFAIK, Scrum can be applied to many things beyond software. Even for software, does a good PM have to know how to do great graphic design, or be a brilliant UX person, or be the most awesome coder/hacker? Maybe folks drawn to CSM actually realize they can do more for the project than simply do mediocre coding.

Now, would knowledge of the “how” help a manager be a better manager? Probably. But isn’t a truly good Project Manager (Scrum or otherwise) cognizant of the need to trust (but verify) their folks to do the right things? Sure, sometimes it is helpful to question an estimate, or critique a design approach, or help shape the product roadmap, or ask probing questions… But does that require intimate knowledge solely on the part of the CSM? Or does it require a good team effort? To turn the argument on its head: is taking the best coder out of the loop to be the CSM a good idea?

Despite my personal preference for (and being) generalizing specialists, I do not think it is a fair pass/fail “litmus” test to say a CSM has to be a hacker/coder/doer of everything.

Personally, I don’t think we have to challenge the CSM individual’s hacking skills. After all, do hackers inherently make better managers? Rather, we should challenge the terminology that bestows the ludicrous moniker of “Master” after a few days of classroom training. Of course, as a counterpoint, the concepts behind Scrum are simple enough to be mastered in a couple of days — that’s the beauty of a simple technique. But the skill of being a Project Manager can take a lifetime to master, Scrum or otherwise, IMHO.

I think what is worse is the blind faith that someone who has been certified as a CSM is qualified to run an agile project. Or the illusion that everything else we do can suck, but as long as we have Scrum, we’ll succeed.

The beauty of a free market: this CSM brouhaha will sort itself out. Companies will learn one way or the other that there is no magic bullet to hiring (despite the world’s stupidest job postings for CSM folks), nor is there a silver bullet to getting projects completed successfully.

Using Basecamp in an Agile Manner

Software Development Life Cycle

In software development, we want to make our process as visible as possible to all participants and stakeholders. To that end, behold a simple process that uses a handful of Basecamp To-Do Lists as markers for how a feature or bug is working its way toward completion.

At any point in time, you can look at the “To-Do” list page for a status on your favorite new feature, task, or bug. And, if you have a chunk of features piled up in an “Iteration 1” list, you can see the number of items being worked through to completion. (There are even third-party apps that allow you to do burndown charts if they appeal to you.)

Using Basecamp's To-Do List in a simple manner

Requirements/Bugs

The business has a desired priority for features as expressed by the order of items in the “staging” lists, and especially the On-Deck list (to borrow a baseball metaphor for “What’s next”). For major new features, the business and the developers will work out a rough Release Plan. But for simple enhancements and bugs, the business can simply add desired items to the lists.

Development

The Developer will pick a new task off the top of the On-Deck list. There will usually be a conversation around the issue to ensure proper understanding (unless it is self-explanatory). Ideally, we get the business/testers helping to write User Acceptance Tests in the BDD form of Given… When… Then… (see the sketch below). The Developer moves this feature into the “In Progress” list. If desired, a date estimate can be added to the task.
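
For instance, such an acceptance test might be expressed as Cucumber step definitions. Here is a minimal sketch; the Invoice model and its pay!/paid? methods are hypothetical names invented purely for illustration:

    # Hypothetical Cucumber step definitions (Ruby) for a
    # Given... When... Then... acceptance test. All names are illustrative.
    Given("an order with an unpaid invoice") do
      @invoice = Invoice.create!(paid: false)
    end

    When("the customer pays the invoice") do
      @invoice.pay!
    end

    Then("the invoice is marked as paid") do
      expect(@invoice.reload).to be_paid
    end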

The Developer will create tests, be they Cucumber, RSpec, or unit tests — or all three. If possible, the Testers can help write the up-front acceptance tests, so that they are intimate with the feature/bug. The Developer writes code until the tests pass.
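
At the developer level, an RSpec example might look like this minimal, self-contained sketch (the Invoice class and its behavior are invented here solely for the example):

    require "date"
    require "rspec"

    # Illustrative only: a tiny model and a spec that drives it.
    class Invoice
      def initialize(due_on:, paid: false)
        @due_on = due_on
        @paid   = paid
      end

      # Unpaid and past the due date means overdue.
      def overdue?(today = Date.today)
        !@paid && today > @due_on
      end
    end

    RSpec.describe Invoice do
      it "is overdue when unpaid past its due date" do
        invoice = Invoice.new(due_on: Date.new(2010, 1, 1))
        expect(invoice.overdue?(Date.new(2010, 2, 1))).to be true
      end

      it "is not overdue once paid" do
        invoice = Invoice.new(due_on: Date.new(2010, 1, 1), paid: true)
        expect(invoice.overdue?(Date.new(2010, 2, 1))).to be false
      end
    end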

Once the issue is complete and the tests are passing (with no other tests broken), the Developer commits the code, updates the TEST server, and does a smoke test to ensure things still work as expected. The Developer then moves the task to the “To Be QA’d” list, where the Testers will verify the issue is indeed complete and correct on TEST. Close collaboration between Tester and Developer is warranted here, as the nature of the fix and any developer tests can be explained so that testing is as effective as possible. We also want to maximize the use of automated tests and, given the nature of the code changes, minimize unnecessary manual tests (they are time-consuming and expensive).
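
The smoke test can be as simple as hitting a few key pages right after the deploy. A minimal sketch, assuming a Rake task in the project’s Rakefile and a hypothetical test.example.com host:

    require "net/http"
    require "uri"

    # Hypothetical URLs; substitute your TEST server's key pages.
    SMOKE_URLS = %w[
      http://test.example.com/
      http://test.example.com/login
    ].freeze

    desc "Smoke test the TEST deployment"
    task :smoke_test do
      SMOKE_URLS.each do |url|
        code = Net::HTTP.get_response(URI(url)).code
        abort "FAILED: #{url} returned #{code}" unless code == "200"
        puts "OK: #{url}"
      end
    end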

Testing

If the QA passes, the Tester moves the task to the “To Be Released” list. Depending on the nature of the feature, the Business can also help to “test” and ensure things work as expected. If the test fails, the task is moved back to the On-Deck list, with the details added to the issue as comments (if simple) plus any image uploads (add the image link to the comments), or logged in the issue tracker if it is more complex (even though it is technically not an issue against the released product).

Release

The Developer will schedule the move of “To Be Released” tasks to the PRODuction box. This may require coordination with the client’s IT team if they have internal controls. Following release, the application is smoke tested to be sure the basics are still functioning. If any problems arise, the server can be rolled back, or the problem fixed ASAP, depending on the nature of the error.

Once the items are released to PROD, you can indicate that they are “complete.” If the issues came from a list that you want to keep for posterity, simply drag the completed task to the original list (“Iteration 1” for example). Now you can see the items in that list that have been released (i.e., completed).

Summary

I’ve used Jira and much more elaborate SDLC schemes in the past. But a recent Rails project with just a few folks on the team found us trying out Basecamp. By creating various lists, we can mimic the Kanban-style wall board that GreenHopper provides within Jira. Because the lists allow simple ordering to show priority, support drag-and-drop between lists, and let you track your time, I think we have a very lightweight, winning combination.

This process is equally at home doing one feature all the way through, or an entire iteration’s worth of features in a bigger chunk. (Basically dependent on the cost of testing and the appetite of the end-user for changes.)

[Figure: SDLC process]

Tale of the Demanding Consultant

So, I just heard that a consultant canceled a gig at a company because — get this — the company did not move off of a big VCS prior to his Scrum (etc.) engagement. (In the words of one friend, it sounds a bit like a temper tantrum or elitism.)

In my opinion, any consultant worth their salt would simply deal with it… the choice of VCS is *not* an issue. Neither is the choice of “Use Cases” or “User Stories.”

If you think these sorts of things are a big deal,

  1. go directly to jail,
  2. do not pass go,
  3. do not collect $200.

meh!

Consider this a “Bad Smell” should you face such a pre-engagement demand!

Why Coding is Not Like Woodworking

In addition to my opinion that coding excellence is more akin to personal discipline, a la yoga, I think that software has an intrinsic escape hatch/built-in flaw (so to speak).

The feedback loop on software is often simply whether the client was happy with the feature. (Or the boss being happy with the work effort.)

Now, if the developer gets positive feedback even though they might not have coded it as well as an expert would, what would drive them to think they need to do better than that? I submit: only an internal drive to push yourself, to stretch your mind, to see if there are alternative ways to do things more effectively. And that is not the norm in most professions, and quite probably not the norm in the software profession.

Most folks are satisfied at doing a “good job” as defined by their “clients.” And why shouldn’t they be?

Were our profession more like woodworking or even music, I think we would have an easier time spotting the difference between a mediocre woodworker, a competent woodworker, and a master craftsman. In woodworking or music, lousy technique is very evident — even to the lay person: from poor design aesthetic, to poor construction technique, to bad finishing.

So, we continue to struggle to show the downstream impact of today’s decisions. How can we improve this?

Why Coding is Like Yoga

This has been nagging me for a while… like it is an answer, maybe. I’m curious what you might think.

I have sort of determined that writing code is more like yoga, rock climbing, or mountain climbing. It is mostly personal and all about self discipline. Extrinsic motivation has some measure of effectiveness, but I submit that it is largely intrinsic motivation that makes any of us do what we do.

In yoga, it is all personal. I do “battle” only with myself. If I don’t get as much into or out of a pose, I cheat only my own body. So this is like when I am doing relatively isolated coding work on a project. However, when my body or skills are required to be part of a team, the cheating of my own self can impact others.

For example, when I am working on bits that touch others’ work (common in coding), it is more like alpineering/mountain/rock/ice climbing. The actions of the other guy(s) on my rope will impact me to a great degree (and vice versa). Fortunately, I can work with my close buddy to ensure proper protection and smart decision-making. You see, in mountaineering — like in coding — you constantly need to be evaluating risk, as changes occur every step of the way (literally).

Another parallel to alpineering: My roped team might be fine and dandy and making good progress towards our goal. But believe me, we care a lot about other teams on the same route (for some routes are more crowded than others). Someone else’s carelessness can wipe us right off the mountain. So we are motivated to ensure that other people are safe, or that we are not exposing ourselves to their foolishness. (Ever get “bitten” by someone else’s rotten code?)

In each example, what I do to achieve the goal depends on my commitment to do the right thing. It depends a lot on discipline (it is hard to always do BDD and not just sling code). It depends a lot on being in good shape and having mental toughness (constantly learning new things surrounding coding). It depends a lot on being able to see the big picture (why am I writing this?) while also being able to dive deep into the details (down-n-dirty coding).

So, try as we might, there is no way to get around the fact that coding is an individual team sport. Another analogy that might help cement this idea of mine: coding might be closer to gymnastics “team” competition (with individual performances counting a great deal towards the team’s success) than, say, football (of both kinds).

Get your practice on!

Motivational Posters

The sight of motivational posters and laminated “corporate” values posters plastered around the conference room always raises my antennae.

It is frequently a portent of an organization full of well-meaning folks mistaking activity for progress.

Maybe that is why I like de-motivational posters so much 😐

Code City Metrics

Here is a cool “metrics” visualization tool that Thomas blogged about: Visualizing Code Aesthetics

For me, the city-layout metaphor, while intriguing, might not generate immediate grokking by most people.

It is cool how it shows multiple dimensions:

  • blocks represent packages, and
  • buildings represent classes, with
    • footprint based on the number of attributes, and
    • height based on LOC.

I am not sure that people have an automatic reaction to a cityscape that can equate “good code” to a good-looking city. For example, I think most people would say that a good-looking city has a nice skyline with tall buildings grouped in an aesthetically-pleasing manner. That might represent bad code, who knows?

Also, some bad code smells at the class level:

  • All attributes — a data blob — would be a big footprint with low height.
  • All methods — an overachiever — would be tall and skinny.

I guess the code cityscape would lead you to see some obvious outliers. But does it tell you much more than that? Does it tell you anything about the “goodness” of the design? Does it tell you anything that a list of computed metrics doesn’t point out with less fanfare?
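
For comparison, here is a rough sketch of such a computed-metrics list for Ruby source. It is illustrative only and deliberately naive (it ignores nesting, reopened classes, and multi-attribute declarations):

    # Naive per-class metrics from a Ruby file: LOC, method count, and
    # attribute declarations -- the same dimensions Code City visualizes.
    metrics = Hash.new { |h, k| h[k] = { loc: 0, methods: 0, attributes: 0 } }
    current = nil

    File.foreach(ARGV.fetch(0)) do |line|
      case line
      when /^\s*class\s+(\w+)/                     then current = Regexp.last_match(1)
      when /^\s*def\s+\w+/                         then metrics[current][:methods] += 1 if current
      when /^\s*attr_(?:reader|writer|accessor)\b/ then metrics[current][:attributes] += 1 if current
      end
      metrics[current][:loc] += 1 if current
    end

    metrics.each do |klass, m|
      puts format("%-24s LOC=%4d methods=%3d attributes=%3d",
                  klass, m[:loc], m[:methods], m[:attributes])
    end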

What is missing in the code city — and arguably of greater import, IMHO — are indications of high coupling and low cohesion, plus cyclomatic complexity values (i.e., how convoluted your LOC are).

Nonetheless, Code City does get your attention, as it is pretty cool-looking at first glance 🙂

Thanks for sharing!

NOTE: Thomas pointed out that there are some different ways to view the metrics that address some of my metric faves:

Just so you know, there's a bunch of other metrics out of the box:

Color buildings by:
* Brain class
* God class
* Data class

You can also break the classes down into methods (they look like floors in the buildings) to study this:
* Brain method
* Feature envy
* Intensive coupling
* Dispersed coupling
* Shotgun surgery

Agile Bashing?

On a few different lists, on twitter, and elsewhere, there seems to be some disillusionment with “Agile.”

Here is a great article that outlines the rancor.

Much like pre-Snowbird, in my last few engagements over the past 4 years (just by way of recent examples), I always pushed solving the underlying business problem and providing solutions, as that was what the folks I was talking to needed. It just so happens that I use a lightweight, agile approach, and it just so happened that the client was cool with that (and even if they weren’t, I would have found a way to do it anyway <g>).

In other words, I did not come in selling agile and accidentally found a project. I was selling being able to help their team solve a business problem and deploy a solution.

One engagement (I was leading the team for about 2 years) was with a company devoid of much effective s/w process (inefficient waterfall and a palpable business/IT schism). Now they have an effective (distributed) agile process, a lightweight set of tools, and a good relationship between the business and IT. As agile as I would like? No. But on a scale of 0 to 10, I am very happy that they are probably a 6-7, and even happier that they continue to try and improve. After all, they are in charge of which practices they choose to adopt within their own constraints. I simply shared with them how I have had success doing projects. Some things I suggested we adopt were rejected. Some of those bit the team later — which was a better lesson than had I forced them down their throats.

However, many other times, I think, people/orgs are seeking to just become *more agile.* Or they want their people to be certified in this or that. For some reason, they probably think this will help them succeed. And maybe it will. For me, it is a bit nebulous. I always like to tie education/training to actual doing, so that the lessons sink in and people have a reason to succeed.

Unfortunately, these orgs seeking “more agile” may not necessarily tie the agile transformation to any sort of project success. Rather, they just want to learn better techniques in the hope that it will foster improvement to their bottom line. Sometimes it is just throwing a bone to the folks to try and take their mind off of the truly waterfall world in which they operate.

At times, the agile hype is almost like “If I wear these jeans, I will be buff and have great-looking people surrounding me.” As Madison Avenue has proven by brainwashing most of us, that technique is very effective.

Frankly, this is the normal arc: a good idea turns into a movement, the movement turns into a fad, the fad turns into a money-making opportunity, and the result is disillusionment and spotty success.

Warts and all, I’ll happily take the rewards that having agile “out there” has provided “humanity” — even if it is abused at times.

The smart and skeptical will prevail and the easily duped will be duped. The beauty of a free market 🙂

Caveat Emptor!

Detailed Estimation Sucks

Pretend you have a team cranking away at a production app (likely full of technical debt). Part of the IT staff, so to speak. Management has a year’s worth of features, and of course wants to be able to plan when things will get done. This was my response to an agile coach who is working with such a team that is having trouble with their estimation accuracy.

Please have a seat. Yes, over there by the window is fine. Buckle up. You may experience some turbulence.

If you want to know “when you can get all that stuff done,” sit the team in a room, show them the list of stuff, and ask them. I’ll give you an hour. From the sounds of the current mess, I suggest you have them estimate in seasons 🙂

If you really want to know, to within a month, when the list of 1000 issues will be done (including the 250 that you don’t yet know about), then wait until you are about to release, and you will have a very accurate estimate.

I doubt that estimating accurately or poorly will help all that much in knowing when an entire list of stuff will be done, especially when the list changes over time. Given that, why spend much energy on it? “Don’t polish a turd.”

Why not simply guarantee to management that the team will give its best effort? Just grab a stack of important (i.e., prioritized) stuff to do (loosely based on a logical roadmap/release plan), and track the number of things done each iteration on a chart. Skip the up-front waste of time estimating everything in detail. I mean really… just take a stab at “we can do these 20 things this iteration” and see how it goes — adjust as needed. The effort should take all of about 30 minutes each iteration, depending on your tools, your team collaboration, and your ability to know what those 20 things are.

Put an indication on the chart of the total number of issues — but be ready to continually update it as new issues/bugs arrive (it’s lots of fun to watch the goal continually move further away). Or have the chart represent the bare minimum marketable features (bigger-ticket items without the details). Now anyone can sense the team’s rate of closure towards the goal as each iteration piles on more closed issues. After two iterations you can draw a straight line; after three, you can use fancier curve fits and show the least-squares correlation coefficient <g>. (A sketch of the straight-line version follows.)
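
To make that concrete, here is a minimal sketch of the straight-line projection. All of the numbers are invented purely for illustration (and remember that the total is a moving target):

    # Fit a straight line (ordinary least squares) to cumulative closures
    # and project when the backlog empties. All numbers are made up.
    closed_per_iteration = [12, 15, 14]
    total_issues         = 120

    cumulative = []
    closed_per_iteration.each { |c| cumulative << (cumulative.last || 0) + c }
    xs = (1..cumulative.size).to_a

    n      = xs.size.to_f
    sum_x  = xs.sum
    sum_y  = cumulative.sum
    sum_xy = xs.zip(cumulative).sum { |x, y| x * y }
    sum_x2 = xs.sum { |x| x * x }

    slope     = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x**2)
    intercept = (sum_y - slope * sum_x) / n

    eta = ((total_issues - intercept) / slope).ceil
    puts "~#{slope.round(1)} issues/iteration; backlog empties around iteration #{eta}"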

Instead of spending time estimating, spend time making the project-progress “information radiator” as meaningful as possible. Believe me, developing software is merely working a to-do list. Anybody can watch a list of things getting ticked off and do a mental estimate of the rate of progress. Build a better view into what the project is actually doing, and let people think about how it might look going forward — no need to debate future predictions. After all, at least your historical data is guaranteed accurate!

The only time I would estimate “more accurately” is if I have competing ideas or ways to solve a problem via wildly differing approaches. Then maybe we might want to weigh one versus the other from a business point of view because it could make a big difference in the long run.

But for an entrenched team on an entrenched product to estimate each little feature/issue/bug fix/story at the outset of each iteration? I question the value of the team’s time being spent doing that. I gave it up for Lent.

Try not doing detailed iteration estimation. It can be very liberating to just do the work as best and as fast as you can.

At some point, you and management will realize that it doesn’t matter if the team is good at estimating or not. At some point you may realize that most of the estimation process is an illusion, a grand trip into self-delusion fantasy land. The same amount of work will get done (well, more will likely get done if less time is spent estimating).

If you need to control what work gets done by when, then you prioritize mercilessly and do a good job of making development efficient (which includes smart architecture and design, among other things). If you need to speed up when the work gets done, then you need to consider resources (better or more) and/or reducing scope.

Now, if you want to spend time making the team better at estimating for their own edification (a la PSP), let them do their own estimates, track actuals, and try to improve over time. (A sketch of that kind of tracking follows.)
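
A minimal sketch of such personal tracking, with hypothetical task data:

    # Illustrative only: track the ratio of actual to estimated effort per
    # task, a la PSP, and use the trend to calibrate future estimates.
    tasks = [
      { name: "login fix",   estimate: 4.0, actual: 6.5 },
      { name: "report page", estimate: 8.0, actual: 7.0 },
      { name: "csv export",  estimate: 2.0, actual: 5.0 },
    ]

    ratios = tasks.map { |t| t[:actual] / t[:estimate] }
    tasks.each_with_index do |t, i|
      puts format("%-12s est=%4.1fh act=%4.1fh ratio=%.2f",
                  t[:name], t[:estimate], t[:actual], ratios[i])
    end
    puts format("Mean ratio: %.2f (scale the next estimate by this)",
                ratios.sum / ratios.size)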

(My agile coach friend drew some aircraft analogies…) If only software were as simple as aerodynamics. I can predict quite accurately how a NACA airfoil is going to behave in 2D and 3D airflow. I have never been able to get a software team to predict to that level of accuracy. Predicting software is not much different from predicting a relationship between groups of people.

Something to ponder… Why bother with estimates at all?

1. Who needs them and what are they for?
2. What happens if you are dead-on accurate?
3. What happens if you are off by 100%?
4. What if #2 == #3??
5. Are the workers going to be fired with poor estimates? Given raises with good ones?
6. How did they estimate before?
7. Is estimation the best use of their time?

We’re now ready for take-off!

Using Buzz to Take Advantage of Bureaucrats

Carson Holmes made an interesting post: Big Consultants Use “Lean” Buzz to Take Advantage of Bureaucrats (http://tech.groups.yahoo.com/group/agilemodeling/message/8986).

My response:

And you can bet that when Big Company and Big Consulting Inc fill out an industry survey about the success of the projects there, it will be all rosy.

Too often I have seen execs roll out large deployments of enterprise-wide systems — largely, I think, to get it on their resumes — with little true ability to actually measure the pre- and post-outcomes. This just seems to be normal for large, bureaucratic organizations — even if they are ostensibly capitalist/free-market driven.

I think the problem is that very few large organizations (I know of none, but there must be some) truly know whether the IT budget is delivering business value *as well as it could*. For example, do you think CIOs from two companies can discuss the output of their IT staffs on an absolute basis? Someone from the outside might be able to discern that one organization appears to do twice as much work as the other, with fewer people. But even that is more of a gut feel than a measurement in a consistent unit of measure.

If the output of development were easy to see and measure (e.g., painting walls or laying carpet), it would be easy to examine return on investment. Eventually, competitors that do a better job of efficiently leveraging technology for business advantage will win out in the free marketplace. But it can take years or decades to reveal bad decisions (think Y2K) — by which time the responsible individuals are long gone, or have been promoted up the chain.

(Even in the small, an individual developer may often not be around long enough to learn the consequences of decisions/code they made two years ago — hence they never get to learn and grow from that experience. Instead, more often than not, they see and learn from other people’s mistakes, yet are unable to understand the original thought processes that went into those decisions, because the author is long gone.)

Until our industry is able to resolve the conundrum of how to compare expenditure versus return on IT, it will be easy for Big Consultants Inc to do Selling By Buzzword, and easy for IT execs to do Management By Magazine.

It gets worse in large bureaucracies, like the government, where there are no market forces to expunge wasteful practices.

We are indeed a nascent industry.