Monthly Archives: June 2008

RIA as a part of Agile

part of the struggle with discussing a topic as broad as UX, in or outside the ASD (agile software development) loop, is one of “size.”

for reasonably “straightforward” features that require UX design, a couple of iterations is usually sufficient to get to a good place. my steps are typically:

  • understanding the basic requirement (e.g., we need a special service to look up critical info from a ZIP code as part of a custom address UI),
  • working out the domain details (often down to methods/sequence diagrams),
  • sketching the dynamic behavior of how the UI might interact with the domain model/server (after the ZIP is entered, make a server call and get back the linked lists of related objects), and
  • doing a rough sketch of the UI.
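As a concrete sketch of that dynamic behavior, the ZIP-lookup flow might take roughly this shape. All names here (the service, the classes, the handler) are invented for illustration; this is not from the original post:

```python
# Hypothetical sketch: after the ZIP is entered, the UI hands it to the
# domain layer, which calls the lookup service and returns the related
# objects for the UI to bind into the city/state fields.

from dataclasses import dataclass
from typing import Optional


@dataclass
class ZipInfo:
    city: str
    state: str


class ZipLookupService:
    """Stand-in for the "special service"; a real one would hit a server."""
    _data = {"10001": ZipInfo("New York", "NY")}

    def lookup(self, zip_code: str) -> Optional[ZipInfo]:
        return self._data.get(zip_code)


def on_zip_entered(zip_code: str, service: ZipLookupService) -> dict:
    """Called by the custom address UI once the ZIP field is filled in."""
    info = service.lookup(zip_code)
    if info is None:
        return {"error": f"unknown ZIP {zip_code}"}
    # The UI binds these related objects back into its fields.
    return {"city": info.city, "state": info.state}
```

The point of working this out in the first couple of iterations is that it surfaces the methods and sequence (UI → domain → service → UI) before anyone commits to a final visual design.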

When to spawn a separate UX project

if the team senses that:

  • UX is a major and critical aspect of the final product (as something like the iPhone probably was),
  • the design is very complex,
  • “getting it right” will greatly impact the success or failure of the product,
  • there is a need to explore a handful of design ideas before committing…

well, it sounds like you need a separate project for this effort in its own right.

The UI is Independent of Domain

In general and virtually “always,” while the “UX Big Design” project is underway, the rest of the team can carry on with much of the other part of the system development. The UI — regardless of the complexity or simplicity of the UX design — is rather independent of the server or domain side of the world. So, even tho my project may have a parallel cool UX design component going on with its own features/iterations, i have all of the features still visible in the iterations through what I termed the “Rough UI.”

Whether the feature is surfaced in an ugly, simple UI control world, or a slick Flex/RIA world matters little. The core “back-end” API still needs to function. As the fancy UI components get finalized, they can be connected to the real API one by one.
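A minimal sketch of that decoupling (all class and method names invented for illustration): both the “Rough UI” and the eventual slick UI program against the same back-end API, so swapping one for the other touches nothing on the domain side:

```python
# Sketch of UI/domain independence: the core back-end API must work
# regardless of which front end sits on top of it.

class OrderApi:
    """The core "back-end" API; it still needs to function either way."""
    def total(self, quantities, unit_price):
        return sum(quantities) * unit_price


class RoughUi:
    """Ugly-but-functional placeholder that keeps the feature visible."""
    def __init__(self, api):
        self.api = api

    def render(self, quantities, price):
        return f"TOTAL: {self.api.total(quantities, price)}"


class SlickUi:
    """The polished RIA component, connected to the same API later."""
    def __init__(self, api):
        self.api = api

    def render(self, quantities, price):
        return f"Your total is ${self.api.total(quantities, price):.2f}"
```

Because both UIs depend only on `OrderApi`, the fancy components can replace the rough ones feature by feature, exactly as described above.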

Soon the ugly duckling UI turns into a beautiful swan.


more folks should follow the WISKY principle… NOT!

(Why Isn’t Somebody Koding Yet?)

in 1995, my team and i were architecting a C++ solution for IBM’s next-generation manufacturing software. i had a crack team of 6 senior engineers/developers.

while i was gathering requirements and building object models, sketching UI ideas, and doing release planning, others on the team were building out (coding) the technical architecture we envisaged for this radical, thin-client system. i also had my team setting up some real-world simulation tests to thrash the architecture about, poke and prod it, and see if it was robust enough for our use.

the client was getting restless/antsy… they wanted to get moving on development of the whole app. they wondered “why aren’t more people coding? won’t we finish sooner if you have a couple dozen people working from the start?” but in reality, it seemed they wanted to bring on the dozen or so developers that were allocated, start billing them out, and report some “progress.”

i remember telling management “flat out” that if they insisted on bringing forth the horde of developers, the developers would just sit there and do nothing until my team and i were ready for them. i explained that this up-front work was essential to getting the team propelled in the right direction, with the right architecture, and consistent coding standards and coding templates. i also mentioned that this normally required about 10% of the project effort.

i got my way. no WISKY was allowed! (Allowing a bunch of developers to begin to develop prior to understanding the architecture, coding patterns, and the basic priorities is a big mistake.)

A few weeks later, we were ready to begin and brought in the other developers. By that time, we had our thin-client, layered architecture determined, and a pattern for folks to follow.

(footnote: the 3 months upfront was pretty close to that 10% mark, for the project ended up being ~3 years, ~250 domain classes, and ~1 Million LOC. This was my first large “agile” project.)

The Cost of Complexity

In reading this very nice post from Manoel Pimentel Medeiros, I am reminded of another value I ascribe to simplicity — “quick-and-dirty” can be a valuable technique.

Many times, when faced with some very complex issues in my projects (usually surrounding integration with bizarro other systems), I strive to have developers take a very simple approach at first. Sometimes these approaches seem like hacks: inelegant, and the exact opposite of what I normally practice. Sometimes I can tell it is hard for the developers to do this “ugly” work. (Maybe they fear I will leave the ugly hack in place.)

However, this is a technique I use when the route is not clear, when there are viable alternatives from which to choose, or when we are probing for the right solution and are not confident that the chosen one will work. So, instead of designing a more complex and elegant “correct” solution, I do the simplest thing. Maybe it means dummying up the data to achieve the effect. Maybe dummying up some objects. Hard-coding. Passing in a fixed XML file to test a new format versus changing the code to generate it that way.
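For instance, the fixed-XML probe might look something like this sketch (the format, element names, and consumer function are all invented for illustration): hand-write a sample of the proposed format and push it through the existing downstream consumer before investing in any generator code:

```python
# Quick-and-dirty probe: rather than changing the real code to *generate*
# the new XML format, feed a hard-coded sample through the downstream
# parser and see whether the format itself actually works.

import xml.etree.ElementTree as ET

# Hand-written sample of the proposed format (hard-coded on purpose).
FIXED_SAMPLE = """<order id="42"><item sku="A1" qty="3"/></order>"""


def downstream_consumer(xml_text):
    """The existing code we want to validate against the new format."""
    root = ET.fromstring(xml_text)
    return {
        "id": root.get("id"),
        "items": [(i.get("sku"), int(i.get("qty")))
                  for i in root.findall("item")],
    }


# If this round-trips cleanly, *then* we go implement the real generator.
result = downstream_consumer(FIXED_SAMPLE)
```

The hack is disposable by design: it answers the “will this format work downstream?” question cheaply, and is thrown away once the real implementation begins.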

By choosing a simple solution *now*, the value is that we can get to evaluating the results and downstream impact of our ideas sooner rather than later. Once we determine that our solution will work, then we go about implementing it correctly. At times, though, we discover that the idea didn’t work as expected. So off we go to look for another solution.

So, simplicity can also be very useful when trying to quickly and cheaply determine the best course of action in creating a viable solution to some aspect of a problem you face.