
Wednesday, July 31, 2013

When is a user story done?

I am beginning to hate the word "done". It means so many different things in software development and causes so much confusion that I am trying not to use the term, opting instead for statements that convey much more information and provide more transparency into our product development efforts.

In the past, I have been asked to figure out why two development groups under a common work area have radically different velocities and thus are viewed very differently by management. The first group is chewing up stories; the other, not so much. Looking at the user stories of each group, I see definite differences. The "higher performing" group has simple stories with a few acceptance criteria. Those acceptance criteria are very high level; my gut feeling is that they would be difficult to turn into actual acceptance tests. The "lower performing" group has more acceptance criteria, and they are more concrete--I can visualize actual acceptance tests built from them. Digging into the groups a bit more, it becomes apparent that the "higher performing" group's quality assurance effort lags behind its development effort, yet the group considers stories "done" when the developers reach code complete. The "lower performing" group does not claim a story is complete until it has been verified by quality assurance and the customer has signed off on the functionality (usually through some demonstration of it).

So which group is really the high performer here? Calling a story done at code complete is misleading and builds a false picture of where the group really is. This "higher performing" group is setting itself up for a big fall--there is no notion of quality assurance complete or customer complete.

Some in the agile community call this "done, done, done". I guess that's OK, but I think we need to be really careful with our terminology as we communicate within our development groups and outward to stakeholders. I feel we should use specific terminology to describe where a story currently resides in the "done" spectrum. Daniel Gullo has a good article about this. He enumerates criteria for completion (I sketch the idea in code after the list):

  • “Code complete”: Development completed, including accompanying unit tests (assumption here is that everyone is operating under test-driven development).
  • “Unit tested”: Unit tests completed. I don't really care for this designation, especially since my groups engage in test-driven development. I would call this “Acceptance tested” or something like that. Basically, demonstrate that we have satisfied the conditions for completion (a.k.a. the acceptance criteria). Hopefully most of these acceptance tests are automated and have been written during the iteration by QA in cooperation with the developers.
  • “Peer reviewed”: Developer code reviews completed. I like this concept when paired with feature branching and pull requests.
  • “QA complete”: QA testing, automated and exploratory, completed. I like having this as a separate stage-gate, allowing for exploratory testing by QA.
  • “Documented”: As needed. There's probably another blog post here around documentation and the agile process, but I will leave that alone for a while.
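
To make the idea concrete, here is a minimal sketch of how these stage gates might be tracked explicitly. It is purely illustrative: the status names and the Python code are my own invention, not anything Gullo prescribes.

    from enum import IntEnum

    class StoryStatus(IntEnum):
        """Ordered stage gates for a user story; names are illustrative only."""
        IN_PROGRESS = 1
        CODE_COMPLETE = 2      # development and accompanying unit tests finished
        ACCEPTANCE_TESTED = 3  # acceptance criteria demonstrated, ideally automated
        PEER_REVIEWED = 4      # developer code reviews completed
        QA_COMPLETE = 5        # automated and exploratory QA testing finished
        DOCUMENTED = 6         # documentation written, as needed

    def counts_toward_velocity(status: StoryStatus) -> bool:
        # A story is "done" only after every gate has been passed,
        # not when development alone reaches code complete.
        return status >= StoryStatus.DOCUMENTED

Had both groups above measured velocity against the last gate rather than the second, their numbers would have been directly comparable.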

Now there is little ambiguity as to where a story stands when using this terminology. I may throw another level in there, “Customer complete”, for when the customer signs off on the newly developed feature. Thoughts? How do you communicate when a story is complete?

Sunday, February 27, 2011

It's all about the conversations!

More pondering as I contemplate my previous consulting gig. This time, I'm considering requirements discovery. Pre-agile, people would write large requirements documents, hoping to capture everything the developers needed to build a system that would satisfy the customers. Unfortunately, this notion of discovering all the requirements ahead of time is awfully naive. Things change. Requirements go undiscovered. Requirements that are captured are not thought out as well as we would like, and when it comes time to implement them in software, they don't make sense or are plain wrong. Thus, a large effort to capture all the requirements up front includes some amount of wasted work. It also concentrates most of the conversations at the beginning of the project. Conversations after the requirements document has been written and signed off are discouraged; they're viewed as evidence of errors in the requirements document. There doesn't seem to be any room for learning with requirements documents.

Now we have agile methods and the user story.  User stories are not requirements.  They're a planning tool for the agile team.  They are a statement of value that the business would like built.  There may or may not be acceptance criteria associated with the user story, depending on where the user story is in its lifecycle.  The most important part of the user story is the conversations that need to happen to flesh the story out so the feature value can be realized in the product.  This detail seems to escape a lot of people trying to use user stories to build products.  User stories give me the freedom to have conversations with all interested parties regarding the statement of value.  I learn just-in-time about the requirements as I implement the feature.  The business learns about how those requirements they communicated manifest themselves in a product.  And we both have the freedom to adjust, learning about the product along the way.
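
As a concrete (and entirely invented) illustration of criteria emerging from conversation: suppose talking through the story "As a payroll clerk, I want overtime paid at time-and-a-half for hours over 40" surfaces acceptance criteria that QA and the developers pin down as automated tests. A minimal sketch, with hypothetical names and numbers:

    # Feature code that emerges from the conversation (hypothetical).
    def overtime_pay(hours_worked: float, hourly_rate: float) -> float:
        regular = min(hours_worked, 40.0) * hourly_rate
        overtime = max(hours_worked - 40.0, 0.0) * hourly_rate * 1.5
        return regular + overtime

    # Acceptance criteria captured as executable tests (runnable with pytest).
    def test_no_overtime_below_forty_hours():
        assert overtime_pay(38, 10.0) == 380.0

    def test_hours_over_forty_paid_at_time_and_a_half():
        # 40 hours * $10.00 regular + 5 hours * $15.00 overtime = $475.00
        assert overtime_pay(45, 10.0) == 475.0

The value is not in the code itself; it's that each criterion was discovered and pinned down just in time, through conversation, rather than guessed at months earlier in a document.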

The above statements about user stories do not preclude using prepared, written documentation to feed the conversation. At my previous gig, we had a lot of federal government documentation covering reporting and calculation requirements. Those requirements are set in stone by the government. Yet there still need to be conversations about how those requirements will be accomplished across releases and sprints. This is where the conversations come back to the forefront.

User stories allow me to deliver value to the business or customer in bite-size portions. Because value is delivered in small increments, I can deliver more frequently and solicit feedback from the business or customer. With short feedback loops in place, I can nimbly change course when I need to.