Thursday, January 29, 2009

Managing stories electronically

I've been involved in this agile project at my client for some time, and for the past three months or so I've been working in a group that favors a low-fidelity mechanism for managing their stories--Post-It notes on a board. The stories, tasks, and acceptance tests are all written on Post-It notes. We are required to keep the stories and tasks in Team Foundation Server (TFS) also, but the group manages the iteration's work from the Post-It notes. The TFS eScrum template is not a particularly good system for managing stories and tasks, so we tend to stick to the low-fidelity method.

I really dislike this practice. The Post-It notes are difficult to keep stuck to anything for an entire iteration; they drop off the board and mysteriously get lost. They're written in many different hands, some almost unreadable, and there's no consistent format for displaying stories and tasks. It's difficult to write much on a single Post-It note. Reporting, well...

I'm a big believer in managing your stories and tasks electronically. Use the Post-It notes during iteration planning to facilitate teamwork and collaboration, but don't leave the stories, acceptance tests, and tasks in that form for any length of time. Get them into a tracking system that allows you to easily manage the stories and tasks for the iteration. I'm also a big believer in a dashboard that gives you a high-level overview of all the iteration's stories and their progress in a single view.

Powered by Zoundry Raven

Sunday, January 25, 2009

Follow up on keeping fake objects simple

The Google Testing Blog has a very timely article about keeping your fake object implementations simple. I previously blogged about this issue. It's nice to see others have seen similar issues with the fake object test double.


Saturday, January 24, 2009

Programming vibration on the iPhone

Erica Sadun has a timely blog posting about how to get your iPhone to vibrate from within your iPhone applications. I've been working on an educational app for the iPhone and wanted to use vibration for user feedback. And now I know how to.

Measuring value during an iteration

I'm currently working on a large agile project where we measure task burn-up. Our user stories are decomposed into individual tasks, and the tasks are estimated. When a developer or a development pair completes a task, they mark that task complete in the tracking system. The tracking system then generates a daily chart displaying a line graph of task units completed in the current iteration. The cumulative total of task units from the estimates is another line on the graph, and it typically stays horizontal across the entire iteration. Until recently I hadn't given much thought to these generated charts.

Now I'm wondering what value these charts actually provide. If management is interested in seeing that people are busy working and completing tasks, then these charts are spot on. They will definitely show how much work was completed during the iteration and when. But is that metric really all that important? I tend to say no. One issue off the top of my head that I have seen in our group is completing tasks on stories without fully completing the stories. For whatever reason, our user stories drift from one iteration to the next, never reaching completion. I conclude that we're keeping ourselves busy, but not adding any value to the overall product.

I want to know how much value I have built into the product. To measure that quantitatively, you need to measure user story burn-up. If you measure user story burn-up, you will focus the development team on completing stories. I think the emphasis needs to be on the user story; the task is a planning construct that just helps us decompose the story into units of development that can be worked on concurrently by a number of developer pairs. I don't know if I really care about task estimates anymore either. I'm drifting towards estimating at the user story level only.
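The difference between the two metrics can be sketched in a few lines of Java. The numbers and the Story shape here are entirely hypothetical, invented for illustration: task burn-up gives credit for every completed task, while story burn-up gives credit only when a story is fully done.

```java
import java.util.Arrays;
import java.util.List;

class Story {
    final int points;      // story-level estimate
    final int tasksTotal;  // tasks the story was decomposed into
    final int tasksDone;   // tasks marked complete so far
    Story(int points, int tasksTotal, int tasksDone) {
        this.points = points;
        this.tasksTotal = tasksTotal;
        this.tasksDone = tasksDone;
    }
    boolean complete() { return tasksDone == tasksTotal; }
}

public class BurnUp {
    // Task burn-up: credit for every completed task, finished story or not.
    static int taskBurnUp(List<Story> stories) {
        int units = 0;
        for (Story s : stories) units += s.tasksDone;
        return units;
    }

    // Story burn-up: credit only when a story is fully complete.
    static int storyBurnUp(List<Story> stories) {
        int points = 0;
        for (Story s : stories) if (s.complete()) points += s.points;
        return points;
    }

    public static void main(String[] args) {
        List<Story> iteration = Arrays.asList(
            new Story(5, 4, 4),   // finished
            new Story(8, 6, 5),   // one task left: busy, but no value delivered
            new Story(3, 3, 2));  // one task left

        System.out.println("task burn-up:  " + taskBurnUp(iteration) + " of 13 units");
        System.out.println("story burn-up: " + storyBurnUp(iteration) + " of 16 points");
    }
}
```

With these made-up numbers, the task line shows 11 of 13 units complete while the story line shows only 5 of 16 points delivered--busy, but not much value.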

Friday, January 23, 2009

Using ArgumentMatcher to capture indirect output arguments

Yesterday, I posted a blog entry about performing assertions directly in your ArgumentMatcher implementations. I wanted to close the loop on my previous usage of the ArgumentMatcher. After some discussion with the main committer to mockito, Szczepan Faber, it seems that ArgumentMatcher should only be used to capture the indirect output argument, making it accessible to the test code. Once the test code has the indirect output argument available, you can assert on it to your heart's delight. Szczepan recommended not embedding the assertions directly in the ArgumentMatcher implementation. More information can be found here. It seems like a good approach.
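A minimal plain-Java sketch of the capture-then-assert shape (the mockito verify/argThat wiring is elided so the sketch stays self-contained, and the class and argument format are invented for illustration): matches() just records the argument and returns true, and the test body does all the asserting afterwards.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a capturing matcher. In real mockito 1.x code this would extend
// org.mockito.ArgumentMatcher<String> and be passed via argThat(matcher)
// inside a verify(...) call; the capture-then-assert shape is the point here.
class CapturingMatcher {
    private final List<String> captured = new ArrayList<String>();

    // Called with the actual argument: record it and always match,
    // deferring all assertions to the test body.
    public boolean matches(Object argument) {
        captured.add((String) argument);
        return true;
    }

    public String lastCaptured() {
        return captured.get(captured.size() - 1);
    }
}

public class CaptureDemo {
    public static void main(String[] args) {
        CapturingMatcher matcher = new CapturingMatcher();

        // Imagine mockito invoking the matcher during verification...
        matcher.matches("deposit:50");

        // ...and the test asserting on the captured value afterwards.
        if (!"deposit:50".equals(matcher.lastCaptured())) {
            throw new AssertionError("unexpected argument: " + matcher.lastCaptured());
        }
        System.out.println("captured " + matcher.lastCaptured());
    }
}
```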

The importance of grooming the story backlog

I'm currently on an agile project, primarily in a developer role. Our iteration planning has been a sore spot for me since I joined the group. We've made some tweaks and improvements, but our stories and their acceptance tests are of poor quality. We aren't doing much of any pre-planning, harvesting of stories, or grooming of the backlog. It really hurts when we get to iteration planning day. I dread iteration planning.

I'm a big believer in grooming the backlog while developers are working on the current iteration's stories; streamline your process by getting more things done concurrently. The coach and one or more of the testers get together with the customer and determine which features the customer would like to focus on in the next iteration. If you need technical expertise, add the tech lead to this group. Use the release backlog to jumpstart this conversation with candidate stories. Fill out the details of this group of candidate stories for the next iteration. This meeting between coach, testers, and customer focuses on the feature details and the acceptance tests. When you try to do this during iteration planning, there tend to be too many people and the conversation becomes chaotic.

Try to solidify your stories and acceptance tests well ahead of the iteration planning meeting. Send out the stories and the accompanying acceptance tests to the rest of the group before the iteration planning meeting. Now the participants in the upcoming iteration planning can prepare offline before the meeting.

Thursday, January 22, 2009

Using assertions within your mockito ArgumentMatcher implementations

I've been using mockito in my Java work recently and have really enjoyed its functionality and ease of use. Lately I've been trying to do a better job of verifying my SUT's indirect outputs, both behavior and state. Behavior is easy to verify with mocking frameworks, and mockito is no different. State verification of indirect outputs is a bit trickier, but still pretty easy to do once you know which matcher combination to use. For mockito, you use the org.mockito.ArgumentMatcher<T> class and its subclasses (all implement the org.hamcrest.Matcher interface). There is a single abstract method to implement, matches(Object argument), which returns a boolean result.

Today, I was having a conversation with a co-worker of mine, Ryan Anderson, about the use of the ArgumentMatcher. Ryan wondered if one could use JUnit assertions within the matches method implementation and just return true if all of the assertions passed; failed assertions never reach the return statement. Indeed, you can use assertions in the ArgumentMatcher implementation. JUnit assertions actually throw an instance of java.lang.AssertionError, so your stack trace will show exactly which assertion within the ArgumentMatcher failed. That's much better than testing conditions and returning a boolean value. Might be helpful for your testing efforts.
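A plain-Java sketch of the idea (the matcher class, the order format, and the assertTrue stand-in are all invented so the example is self-contained; real code would extend org.mockito.ArgumentMatcher<String> and call org.junit.Assert.assertTrue): assertions run inside matches(), and any failure throws AssertionError before the return statement is reached.

```java
// Sketch: assertion-style checks inside a matches() implementation.
class OrderMatcher {
    public boolean matches(Object argument) {
        String order = (String) argument;
        assertTrue("order id missing", order.startsWith("ORD-"));
        assertTrue("amount missing", order.contains(":"));
        return true; // only reached when every assertion above passed
    }

    // Stand-in for org.junit.Assert.assertTrue: it throws
    // java.lang.AssertionError, so a failure's stack trace points
    // at the exact assertion line inside matches().
    private static void assertTrue(String message, boolean condition) {
        if (!condition) throw new AssertionError(message);
    }
}
```

A well-formed argument like "ORD-7:50" makes matches() return true; a malformed one blows up with an AssertionError naming the failed check instead of silently returning false.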

Wednesday, January 21, 2009

Taking fake objects too far

I recently worked on a Java project where we utilized the Fake Object pattern ad nauseam. We used this pattern to provide a test double layer for our data access objects, but in doing so, we tried to implement too much of a fake repository for our domain objects. The fake data access objects became a hindrance to writing unit tests and thus dissuaded people from writing them. Well, a friend of mine still on the project told me today that they are ripping out all the fake object implementations and putting in mocks and stubs using the excellent mockito library. They hope to get back to easy test double usage and thus promote the use of unit testing and TDD. Moral of the story: your tests can accumulate technical debt. Constantly ensure that test double usage is healthy in your unit tests.

Monday, January 19, 2009

Interface Builder: Remember to hook up the view to the ViewController's File's Owner view outlet

I don't know if I like or despise Interface Builder. Most of the time it's an effective tool for building UIs for the Mac and the iPhone. Every so often, I'm completely confounded by its tediousness. Case in point: the ViewController's File's Owner has an outlet for the view. Even if you have only a single view in your XIB, you still need to make a connection between the File's Owner's view outlet and the View object in the XIB file. I spent significant time today trying to figure out why a multi-view iPhone example I was working on kept throwing exceptions after I started adding the subviews to the codebase. To me, Interface Builder seems a bit opaque; you never really know if everything is wired together properly. I'm sure experience will help keep this type of issue from cropping up in the future, but from my very limited exposure to the Xcode/Interface Builder tooling, it seems like it could use a revamp to make things much more user friendly. I believe the precursor to Eclipse, Visual Age for Java/Smalltalk, had a similar UI wiring scheme, but with a much easier view of all the existing wirings. Something similar would be helpful in Interface Builder.

UPDATE on February 3, 2009: Bill Dudney has seen this phenomenon in his trainings and blogs about it here, with screenshots.

Wednesday, January 14, 2009

Google's Quick Search Box for the Mac

If you like Quicksilver for the Mac, you may want to look at Quick Search Box for the Mac. One of the developers of qsb-mac is the Quicksilver creator, Nicholas Jitkoff. Google is hosting it on Google Code as an open source project. I like what I see of it already.

Tuesday, January 13, 2009

Beginning iPhone Development book

I finished reading Beginning iPhone Development and working through most of the examples. It's an excellent book for developers starting out on the iPhone platform. The book is laid out with examples across 17 chapters or so, all very relevant to core iPhone functionality: accelerometer, autorotation, autosizing, multiview applications, persistence, Quartz, OpenGL ES, gestural interfaces, Core Location, and others. The examples are straightforward to understand and really give you a great taste of what's available in the Apple iPhone SDK. There is nothing in this book about web applications on the iPhone--go elsewhere if you want to learn about that. I'm looking forward to building and distributing some iPhone apps in the near future.

Interesting unit testing thoughts from Michael Feathers

Michael Feathers is well known within the agile community for his efforts and thoughts around testing. He has a very thoughtful post entitled The Flawed Theory Behind Unit Testing. The main takeaway I glean from the posting is how tests make us slow down and reflect on our code. From the blog entry:

All of these techniques have been shown to increase quality. And, if we look closely we can see why: all of them force us to reflect on our code. That’s the magic, and it’s why unit testing works also. When you write unit tests, TDD-style or after your development, you scrutinize, you think, and often you prevent problems without even encountering a test failure.

I've been thinking more and more about my own testing behaviors these days, especially with my endeavors of teaching TDD and mentoring other developers on unit testing and the use of mock objects.

Thursday, January 08, 2009

.NET Base Class Library types not friendly to mocking with MoQ

I spent some quality time this week driving a design implemented in .NET using unit tests. It was an enlightening experience and gave me a good glimpse at the quality of the BCL design. I'm not impressed. We had issues trying to mock System.Net.WebRequest and System.Net.WebResponse and their descendants. It seems that Microsoft prefers to expose abstract base classes instead of interfaces. The aforementioned types are not interface types; they're abstract classes. We've been using MoQ, a fine mocking framework, at the client I've been working at, and I tried to create mock implementations of the System.Net.WebRequest and System.Net.WebResponse types. No go--these types do not have public constructors. The non-virtual, non-abstract methods also give MoQ headaches (good write-up here about why methods of these abstract class types have to be explicitly marked virtual if you want to mock them). Personally, I'm more fond of exposing interface types, which alleviates the problem altogether. Very frustrating. We did continue to unit test and utilized mocks, but it forced us to wrap these BCL types with our own types where we could control the virtual decoration. Yuck!!
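The wrap-and-mock workaround generalizes beyond .NET, so here is a sketch of the shape in Java terms. All names here are hypothetical, and java.net.HttpURLConnection stands in for the hard-to-mock WebRequest/WebResponse types: hide the awkward concrete type behind an interface you own, and let unit tests substitute a trivial double for that interface.

```java
// The seam the unit tests can substitute; production and test code
// depend only on this interface, never on the concrete network type.
interface HttpFetcher {
    String fetch(String url);
}

// Production adapter that delegates to the awkward concrete type.
class UrlConnectionFetcher implements HttpFetcher {
    public String fetch(String url) {
        // ... open a java.net.HttpURLConnection and read the body ...
        throw new UnsupportedOperationException("network code elided");
    }
}

// A unit test swaps in a canned double instead of hitting the network.
class CannedFetcher implements HttpFetcher {
    public String fetch(String url) {
        return "<html>stubbed response</html>";
    }
}

public class WrapperDemo {
    // Hypothetical SUT logic that depends only on the seam.
    static int pageLength(HttpFetcher fetcher, String url) {
        return fetcher.fetch(url).length();
    }

    public static void main(String[] args) {
        System.out.println(pageLength(new CannedFetcher(), "http://example.com"));
    }
}
```

Because HttpFetcher is an interface you control, a mocking framework (mockito here, MoQ in the .NET case) can generate the double with no virtual-method or constructor gymnastics.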

Thursday, January 01, 2009

Now deploying my development app to my iPhone 3G device

I went through the steps to get my Standard Developer Program activation hooked up through the Apple iPhone Developer Portal. I wrote up a wiki page on how I did this. I was prepared for the worst, as I had heard the process was really convoluted and complex. For me, following their user guide, the process was almost painless and quite simple. I do recommend following the user guide though.