Requirements Traceability Matrix

In the Agile Modeling forum, the subject of a “Traceability Matrix” (TM) was broached.

Fabio asked:

I am trying to put together a traceability matrix of the functionalities of the system I work with. I would like to know if someone has an example of how to build such a matrix. I have a draft of the matrix that I made, but I would like some ideas so I can improve it.

The traceability matrix will be used mostly by the QA team, but I think it can be useful for the whole IT team. With the traceability matrix, when any requirement changes, we will know exactly what the change affects just by looking at the matrix. We will also use the matrix to build up our test cases.

IMHO, the TM is a holy grail of an idea. In my years of working on tools like Together, I got requests now and then for a traceability feature. The first one I recall was from a big company in Texas, maybe defense related, I forget (circa 1998 or 99). I asked for the details behind why they wanted it and what it was supposed to solve. I didn’t get a very good answer.

Since that time, others asked for it here and there. Though I had the technology at my fingertips to build traceability into the Together tools, I never found the feature’s value worth its cost. (And yes, it has to do with relating a test to the feature, and doing code tracing…) A few years back, when I was with OptimalJ (an MDA tool), we added SteelTrace as a front-end requirements tool. There was a technical way to relate a requirement down to the lines of code.

But what does it buy you?

While the concept seems appealing at first glance, reality often bites. A TM is hard to maintain, and its value diminishes rapidly as the required level of detail increases. In all my years, I was never once shown an example of a TM in practice that was used for anything remotely justifying its expense.

However, that doesn’t mean a TM is not useful to somebody; I just never ran into that group or person.

My suggestion, Fabio, is to revisit the true needs of the organization, then see what the best way is to satisfy those needs. Should some form of traceability matrix prove useful, I suggest starting small and inexpensive: see what you can get easily, and see whether it is actually useful. Conversely, think of “prototyping” the end result of a TM. Pretend to use it in some scenario for which your organization thinks it would be useful. Explore alternative ways to achieve the same result (classic manual code searching, a step-wise debugger, reverse-engineered sequence diagrams, or even code-based tools that allow insane levels of search; there are some recent web-based offerings I don’t recall off the top of my head). Typically, the hardest things to find that might be impacted by a requirements change (nuances of data storage, little XML tweaks here and there, metadata, resource files, things generally not discoverable in just the source code) are not tracked in a TM anyhow.
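As one concrete way to “start small,” here is a minimal sketch in Python. It is entirely hypothetical in its conventions: it assumes the team embeds requirement IDs such as REQ-42 in test names or comments, and it simply scans the test tree to produce a bare-bones requirement-to-test matrix. The tag format and the “tests” directory are assumptions for illustration, not a recommendation of any particular tool.

```python
# Hypothetical sketch of a "start small" traceability matrix: scan the
# test sources for requirement tags (assumed convention: "REQ-<number>"
# embedded in test names or comments) and map each requirement ID to
# the test files that mention it.
import os
import re
from collections import defaultdict

REQ_TAG = re.compile(r"\bREQ-\d+\b")  # e.g. REQ-42

def build_matrix(test_dir):
    """Return {requirement id -> set of test file paths mentioning it}."""
    matrix = defaultdict(set)
    for root, _dirs, files in os.walk(test_dir):
        for name in files:
            if name.endswith(".py"):
                path = os.path.join(root, name)
                with open(path, encoding="utf-8") as f:
                    for tag in REQ_TAG.findall(f.read()):
                        matrix[tag].add(path)
    return matrix

if __name__ == "__main__":
    # "tests" is an assumed directory layout; adjust to your project.
    for req, paths in sorted(build_matrix("tests").items()):
        print(req, "->", ", ".join(sorted(paths)))
```

If an hour’s worth of script like this answers the questions people actually ask, that is a strong hint a heavyweight TM is overkill; if it doesn’t, you have at least learned what the real questions are.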

There are many ways other than a TM to keep the team informed of how requirements are implemented throughout the application. Acceptance tests that prove the existence of a specific feature are one way. Design documents and models are another, assuming you do a good job of naming things, so that a method in a class reads a lot like the English of the requirement it implements. A wiki can help keep these documents front and center.
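To make that naming idea concrete, here is a small illustrative acceptance test. The requirement wording, the class, and the figures are invented for this example: the point is that the test name echoes the English of the requirement, so anyone tracing that requirement can find its proof by searching for its own words.

```python
# Illustrative acceptance test for an invented requirement:
#   "An overdue invoice accrues a late fee."
# The class and method names deliberately read like the requirement
# itself, so the test doubles as lightweight traceability.
import unittest

class Invoice:
    """Toy domain object, invented for this example."""
    LATE_FEE = 25.0  # assumed flat fee, purely illustrative

    def __init__(self, amount, days_overdue=0):
        self.amount = amount
        self.days_overdue = days_overdue

    def total_due(self):
        fee = self.LATE_FEE if self.days_overdue > 0 else 0.0
        return self.amount + fee

class OverdueInvoiceAccruesLateFee(unittest.TestCase):
    def test_overdue_invoice_accrues_late_fee(self):
        self.assertEqual(Invoice(100.0, days_overdue=5).total_due(), 125.0)

    def test_current_invoice_owes_only_its_amount(self):
        self.assertEqual(Invoice(100.0).total_due(), 100.0)

if __name__ == "__main__":
    unittest.main()
```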

IMHO, the best way to keep the QA team informed about the detailed impact of requirements changes is through developer communication (both in the issue comments and verbally as required). And IMO, there are two schools of thought on keeping QA informed of details. One is that the QA group stays informed: a developer tells them that a code change just made for Reqt X may have caused some havoc over in Reqt B, so the QA team can hammer the tests for B just in case. The other is that the QA group stays “blind” to the details of code changes, and simply runs its tests to ensure things still work.