I am beginning to hate the word "done". It means so many different things in software development and causes so much confusion that I am trying not to use the term, instead opting for statements that convey more information and provide more transparency into our product development efforts.
In the past, I have been asked to figure out why two development groups in a common work area have radically different velocities and are thus viewed very differently by management. The first group is chewing up stories; the other, not so much. Looking at the user stories of each group, clear differences emerge. The "higher performing" group has simple stories with a few acceptance criteria. Those criteria seem very high level; my gut feeling is that they would be difficult to turn into actual acceptance tests. The "lower performing" group has more acceptance criteria, and they are more concrete--I can visualize actual acceptance tests for them. Digging into the groups a bit more, I find that the "higher performing" group's quality assurance effort lags behind development, yet the group considers stories "done" when the developers reach code complete. The "lower performing" group does not consider a story complete until it has been verified by quality assurance and the customer has signed off on the functionality (usually through some demonstration of it).
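To make that contrast concrete, here is a minimal sketch of how a concrete criterion translates almost mechanically into an automated acceptance test. The story, the criterion, and the `search_products` function are all hypothetical; the point is only that a testable criterion leaves little room for interpretation.

```python
# A vague criterion like "user can search products" is hard to test directly.
# A concrete criterion -- "searching for a term returns only products whose
# names contain that term" -- maps almost one-to-one onto a test.

def search_products(catalog, term):
    """Toy stand-in for whatever the story actually delivers."""
    return [name for name in catalog if term.lower() in name.lower()]

def test_search_returns_only_matching_products():
    catalog = ["Red Widget", "Blue Widget", "Red Gadget"]
    assert search_products(catalog, "widget") == ["Red Widget", "Blue Widget"]

def test_search_with_no_matches_returns_empty_list():
    assert search_products(["Red Widget"], "sprocket") == []
```

Run it with pytest and the acceptance criterion either passes or it doesn't; the high-level criteria I saw in the first group offer no such clarity.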
So which group is really the high performer here? Calling a story done at code complete is a misnomer and likely builds a false picture of where the group actually stands. This "higher performing" group is setting itself up for a big fall--there is no sense of quality assurance complete or customer complete.
Some in the agile community call this "done, done, done". I guess that's OK, but I think we need to be really careful with our terminology as we communicate within our development groups and outward to stakeholders. I feel we should use specific terminology to describe where a story currently resides in the "done" spectrum. Daniel Gullo has a good article about this. He enumerates criteria for completion (a small sketch of tracking these as explicit stages follows the list):
- “Code complete”: Development completed, including accompanying unit tests (the assumption being that everyone is practicing test-driven development).
- “Unit tested”: Unit tests completed. I don't really care for this designation, especially since my groups practice test-driven development. I would call this “Acceptance tested” or something like that. Basically, demonstrate that we have satisfied the conditions for completion (aka the acceptance criteria). Hopefully most of these acceptance tests are automated and have been written during the iteration by QA in cooperation with the developers.
- “Peer reviewed”: Developer code reviews completed. I like this concept when paired with feature branching and pull requests.
- “QA complete”: QA testing, automated and exploratory, completed. I like having this as a separate stage-gate, allowing for exploratory testing by QA.
- “Documented”: As needed. There's probably another blog post here around documentation and the agile process, but I will leave that alone for a while.
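Here is the sketch promised above: the “done” spectrum encoded as ordered stages that a board or status report could display instead of a bare “done”. This is my illustration, not anything from Gullo's article; the stage names and the helper are hypothetical.

```python
from enum import IntEnum

class StoryStage(IntEnum):
    """One possible encoding of the "done" spectrum as ordered stages."""
    IN_PROGRESS = 0
    CODE_COMPLETE = 1
    ACCEPTANCE_TESTED = 2
    PEER_REVIEWED = 3
    QA_COMPLETE = 4
    DOCUMENTED = 5
    CUSTOMER_COMPLETE = 6  # the extra level I suggest below

def describe(story_id: str, stage: StoryStage) -> str:
    # Report the explicit stage rather than a bare, ambiguous "done".
    return f"{story_id}: {stage.name.replace('_', ' ').lower()}"

print(describe("STORY-42", StoryStage.QA_COMPLETE))       # STORY-42: qa complete
print(StoryStage.QA_COMPLETE > StoryStage.PEER_REVIEWED)  # True
```

Because the stages are ordered, questions like "which stories are past peer review?" become simple comparisons instead of debates over what "done" meant.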
Now there is little ambiguity as to where a story stands when using this terminology. I may throw another level in there, “Customer complete”, for when the customer signs off on the newly developed feature. Thoughts? How do you communicate when a story is complete?