Saturday, January 31, 2009
Thursday, January 29, 2009
I've been involved in this agile project at my client for some time, and for the past three months or so I've been working in a group that favors a low-fidelity mechanism for managing their stories--Post-It notes on a board. The stories, tasks, and acceptance tests are all written on Post-It notes. We are required to keep the stories and tasks in Team Foundation Server (TFS) as well, but the group manages the iteration's work from the Post-It notes. The TFS eScrum template is not a particularly good system for managing stories and tasks, so we tend to stick to the low-fidelity method.
I really dislike this practice. The Post-It notes are difficult to keep stuck to anything for an entire iteration; they drop off the board and mysteriously get lost. They're written in many different hands, with no consistent format or display of stories and tasks, and some are almost unreadable. It's difficult to write much on a single Post-It note. Reporting, well...
I'm a big believer in managing your stories and tasks electronically. Use the Post-It notes during iteration planning to facilitate teamwork and collaboration, but don't leave the stories, acceptance tests, and tasks in that form for any length of time. Get them into a tracking system that allows you to easily manage the stories and tasks for the iteration. I'm also a big believer in a dashboard view that gives you a high-level overview of all of the iteration's stories and their progress in a single view.
Powered by Zoundry Raven
Sunday, January 25, 2009
The Google Testing Blog has a very timely article about keeping your fake object implementations simple. I previously blogged about this issue. It's nice to see others have run into similar issues with the fake object test double.
Saturday, January 24, 2009
Now I'm wondering what value these charts actually provide. If management is interested in seeing that people are busy working and completing tasks, then these charts are spot on. They definitely will show the amount of work completed during the iteration and when it's completed. But really, is that metric all that important? I tend to say no. One issue off the top of my head that I have seen in our group: the tasks on a story get completed, but the story itself never does. For whatever reason, our user stories drift from one iteration to the next, never reaching the point of completion. I conclude that we're keeping ourselves busy, but not adding any value to the overall product.
I want to know how much value I have built into the product. To measure that quantitatively, you need to measure user story burn-up. If you measure user story burn-up, you will focus the development team on completing stories. I think the emphasis needs to be on the user story; the task is a planning construct that just helps us decompose the story into units of development that can be worked on concurrently by a number of developer pairs. I don't know if I really care about task estimates anymore either. I'm drifting towards estimating at the user story level only.
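To make the distinction concrete, here is a minimal sketch of how story burn-up might be computed. The `Story` record and its fields are hypothetical, invented for illustration; the point is that only fully completed stories count toward the chart, so half-finished work that drifts between iterations adds nothing.

```java
import java.util.List;

// Hypothetical story record: an estimate in points plus the iteration
// in which the story was actually finished (0 = not done yet).
record Story(String name, int points, int doneInIteration) {}

public class BurnUp {
    // Cumulative completed story points through the given iteration.
    // Only fully done stories count; partially done work contributes zero.
    static int burnUpThrough(List<Story> stories, int iteration) {
        return stories.stream()
                .filter(s -> s.doneInIteration() > 0 && s.doneInIteration() <= iteration)
                .mapToInt(Story::points)
                .sum();
    }

    public static void main(String[] args) {
        List<Story> backlog = List.of(
                new Story("login", 3, 1),
                new Story("search", 5, 2),
                new Story("checkout", 8, 0)); // drifting, never completed
        for (int i = 1; i <= 3; i++) {
            System.out.println("iteration " + i + ": " + burnUpThrough(backlog, i));
        }
    }
}
```

A task burn-down over the same backlog would keep climbing as checkout's tasks close, while this burn-up stays flat at 8 points, exposing the drift.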
Friday, January 23, 2009
I'm a big believer in grooming the backlog while developers are working on the current iteration's stories; streamline your process by getting more things completed concurrently. The coach and one or more of the testers get together with the customer and determine which features the customer would like to focus on in the next iteration. If you need technical expertise, add the tech lead to this group. Use the release backlog to jumpstart this conversation with candidate stories. Fill out the details of this group of candidate stories for the next iteration. This meeting between coach, testers, and customer focuses on the feature details and the acceptance tests. When you try to do this during iteration planning, there tend to be too many people and the conversation becomes chaotic.
Try to solidify your stories and acceptance tests well ahead of the iteration planning meeting. Send out the stories and the accompanying acceptance tests to the rest of the group before the iteration planning meeting. Now the participants of the upcoming iteration planning can prepare offline before the meeting.
Thursday, January 22, 2009
Today, I was having a conversation with a co-worker of mine, Ryan Anderson, about the use of the ArgumentMatcher. Ryan wondered if one could use JUnit assertions within the matches method implementation and just return true if all of the assertions passed; a failed assertion never reaches the return statement. Indeed, you can use assertions in the ArgumentMatcher implementation. JUnit assertions throw an instance of java.lang.AssertionError, so your stack trace will show exactly which assertion within the ArgumentMatcher failed. That's much better than testing conditions and returning a boolean value. Might be helpful for your testing efforts.
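The technique can be sketched as follows. To keep the snippet self-contained I've hand-rolled a stand-in for the matcher interface and a minimal assertEquals; in a real test you would implement Mockito's org.mockito.ArgumentMatcher and call org.junit.Assert.assertEquals instead. The Order and OrderMatcher classes are hypothetical examples.

```java
import java.util.Objects;

// Stand-in for Mockito's ArgumentMatcher interface so this sketch is
// self-contained; with Mockito on the classpath, implement
// org.mockito.ArgumentMatcher<T> instead.
interface ArgumentMatcher<T> {
    boolean matches(T argument);
}

// Hypothetical domain object used as the matched argument.
class Order {
    final String customer;
    final int quantity;
    Order(String customer, int quantity) {
        this.customer = customer;
        this.quantity = quantity;
    }
}

// Matcher that asserts on each field instead of returning false.
// A failed assertion throws AssertionError before the return statement,
// so the stack trace points at the exact expectation that failed.
class OrderMatcher implements ArgumentMatcher<Order> {
    public boolean matches(Order actual) {
        assertEquals("acme", actual.customer);
        assertEquals(42, actual.quantity);
        return true; // only reached when every assertion passed
    }

    // Minimal JUnit-style assertion; use org.junit.Assert.assertEquals
    // in real tests.
    static void assertEquals(Object expected, Object actual) {
        if (!Objects.equals(expected, actual)) {
            throw new AssertionError(
                    "expected <" + expected + "> but was <" + actual + ">");
        }
    }
}
```

When the matcher is used to verify a mocked call, a mismatch now fails with a message like "expected <42> but was <7>" and a stack trace into the matcher, instead of an opaque "argument didn't match" report.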
Wednesday, January 21, 2009
Monday, January 19, 2009
UPDATE on February 3, 2009: Bill Dudney has seen this phenomenon in his trainings and blogs about it here, with screenshots.
Wednesday, January 14, 2009
Tuesday, January 13, 2009
All of these techniques have been shown to increase quality. And, if we look closely we can see why: all of them force us to reflect on our code. That’s the magic, and it’s why unit testing works also. When you write unit tests, TDD-style or after your development, you scrutinize, you think, and often you prevent problems without even encountering a test failure.
I've been thinking more and more about my own testing behaviors these days, especially given my endeavors teaching TDD and mentoring other developers on unit testing and the use of mock objects.