Wednesday, July 31, 2013

When is a user story done?

I am beginning to hate the word "done". It means so many different things in software development and causes so much confusion that I am trying to avoid the term altogether, instead opting for statements that convey more information and provide more transparency into our product development efforts.

In the past, I was asked to figure out why two development groups under a common work area had radically different velocities and were therefore viewed very differently by management. The first group was chewing through stories; the other, not so much. Looking at each group's user stories, clear differences emerged. The "higher performing" group had simple stories with a few acceptance criteria. Those criteria seemed very high level; my gut feeling was that they would be difficult to turn into actual acceptance tests. The "lower performing" group had more acceptance criteria, and more concrete ones--I could visualize actual acceptance tests behind them. Digging into the groups a bit more, it became apparent that the "higher performing" group's quality assurance effort lagged behind development, yet the group was considering stories "done" when the developers reached code complete. The "lower performing" group did not claim a story complete until quality assurance had verified it and the customer had signed off on the functionality (usually through some demonstration of the functionality).

So which group is really the high performer here? Calling a story done at code complete is misleading and builds a false picture of where the group actually stands. This "higher performing" group is setting itself up for a big fall--there is no notion of quality assurance complete or customer complete.

Some in the agile community call this "done, done, done". I guess that's OK, but I think we need to be really careful with our terminology as we communicate within our development groups and out to stakeholders. I feel we should use specific terminology to describe where a story currently sits in the "done" spectrum. Daniel Gullo has a good article about this, in which he enumerates criteria for completion:

  • “Code complete”: Development completed, including accompanying unit tests (assumption here is that everyone is operating under test-driven development).
  • “Unit tested”: Unit tests completed. I don't really care for this designation, especially since my groups engage in test-driven development. I would call this Acceptance tested or something like that. Basically, demonstrate that we have satisfied the conditions for completion (aka acceptance criteria). Hopefully most of these acceptance tests are automated and have been written concurrently during the iteration by QA with the cooperation of the developers.
  • “Peer reviewed”: Developer code reviews completed. I like this concept when paired with feature branching and pull requests.
  • “QA complete”: QA testing, automated and exploratory, completed. I like having this as a separate stage-gate, allowing for exploratory testing by QA.
  • “Documented”: As needed. There's probably another blog post here around documentation and the agile process, but I will leave that alone for a while.

With this terminology there is little ambiguity about where a story stands. I may throw another level in there, "customer complete", for when the customer signs off on the newly developed feature. Thoughts? How do you communicate when a story is complete?
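The stages above behave like an ordered state machine: a story advances through each stage-gate in turn and cannot honestly be called "done" until it has passed all of them. Here is a minimal sketch in Python; the stage names follow the list above, but the class and method names are purely illustrative, not from Gullo's article:

```python
from enum import IntEnum

class DoneStage(IntEnum):
    """Ordered stages in the "done" spectrum; order matters."""
    IN_PROGRESS = 0
    CODE_COMPLETE = 1
    ACCEPTANCE_TESTED = 2
    PEER_REVIEWED = 3
    QA_COMPLETE = 4
    DOCUMENTED = 5
    CUSTOMER_COMPLETE = 6

class Story:
    def __init__(self, title):
        self.title = title
        self.stage = DoneStage.IN_PROGRESS

    def advance(self):
        # Move to the next stage-gate; a story may not skip stages.
        if self.stage < DoneStage.CUSTOMER_COMPLETE:
            self.stage = DoneStage(self.stage + 1)

    def status(self):
        # Report exactly where the story sits in the "done" spectrum,
        # rather than an ambiguous "done".
        return f'{self.title}: {self.stage.name.replace("_", " ").title()}'

story = Story("Export report as PDF")
story.advance()
print(story.status())  # Export report as PDF: Code Complete
```

The point of the sketch is that "done" is never a boolean: a status report always names the specific gate a story has cleared, which is exactly the transparency the terminology above is meant to provide.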

11 comments:

  1. How about... "A User Story is done when the value it has added has been measured".

    1. I like that statement. It puts the emphasis on measurement and metrics--quantify what you are doing and what the value proposition of your actions is. Thanks Hans! And good to hear from you again.

    2. Thanks Chris! Know that I'm a loyal reader of your blog :-).

      I try hard to define the smallest possible stories that "add value", but my "value added" is always subjective. I've never had an objective and quantitative measure of the value added by my individual stories - it would be pretty amazing to be able to quantify the fruits of one's labor for each story.

      Along with "value added" comes "debt incurred". The NPV of a story goes down if Unit Tests, Peer Reviews, etc. are neglected. The NPV of "completing" a story without appropriate completion criteria could even be negative!

      For the two groups that you describe--every story that each group completes has an NPV = value of the new features - new debt incurred. If you could measure that, you could answer objectively which group is the higher performer.

  2. Wow, that sounds eerily familiar!
