config.time_zone = 'Central Time (US & Canada)'
config.active_record.default_timezone = :local

Hope this helps others who discover that the default behavior of UTC is not what is desired. Official documentation on these configuration items can be found at http://guides.rubyonrails.org/configuring.html#rails-general-configuration and http://guides.rubyonrails.org/configuring.html#configuring-active-record.
Thursday, December 13, 2012
Pinning Rails 3.2 to a specific time zone
Thursday, September 13, 2012
Resolving connectivity issues with Verizon 4G LTE JetPack 4510L MiFi and your Apple devices
- Make sure the firmware on the hotspot is up-to-date. Update it if not.
- Change the radio protocol to 802.11n (mine was set to 802.11g).
- Change the network key on the hotspot.
- Forget the network on the Apple device.
- Connect to the network again, selecting your MiFi hotspot network. When prompted for the network key, enter the new one.
- Done!
Tuesday, July 10, 2012
Adding HTTP headers to AJAX calls in Backbone.js
messagesCollection.fetch({
headers: {
'x-my-custom-header-1':'foobar',
'x-my-rest-api-version':'1.0'
}
});
Wednesday, May 23, 2012
Potential issue when mocking in Groovy
groovy.mock.interceptor.MockFor
and grails.test.GrailsMock
allow for mock objects in Groovy and Grails, respectively. I've been using both of these classes with good success for a long time. But recently, a small refactoring involving the removal of a parameter from a method signature caused me to re-evaluate mock object usage in a dynamic language like Groovy. As I said, a refactoring was done on the public contract of a service, where the method name stayed the same but a parameter was removed from the method signature. The contract unit tests for this service were changed to drive the refactoring ("test-driving the refactoring"). However, the collaboration unit tests, where this service acts as a dependency, were not changed, and they continued to pass. I tried cleaning the old .class files and recompiling the Groovy tests, but to no avail: the unit tests which mocked this service continued to pass, even though the method signature no longer existed on the real service implementation. After perusing the javadoc documentation, there does not seem to be any functionality in either of these classes to verify that the type being mocked actually has a method signature matching the one being mocked. Therefore, these classes can mock methods which do not exist on the real dependency implementations. Fixing the issue involved finding the stale method signature with a text search.
The whole episode was a bit unsettling; we have a lot of unit tests and we may be testing scenarios which are not representative of the real world. In my case, the real world scenario manifested itself as a runtime exception stating that the method was missing. In the case of Java and Mockito, the method signature change would result in a compilation error where the changed method signature was mocked in unit tests. My takeaway was to be more diligent with my refactoring and really ensure that I have changed all places in the code where a particular method is referenced.
Friday, May 04, 2012
Allocating business logic in Grails
The dilemma
I've been on a couple of larger Grails projects in the past year and a half and I'm witnessing a disturbing phenomenon: the allocation of business logic responsibility across the abstractions that Grails provides is causing considerable pain. Grails provides controllers, services, and domain objects where business logic can reside. I'll contend in this blog entry that these abstraction categories work well for small- to medium-sized Grails projects, but things quickly start to unravel once your application gets to be large.
Controllers
I see a lot of business logic code in controllers these days. Controllers shouldn't contain any business logic whatsoever. The controller's responsibility is to handle the web request and response; anything else should be delegated to a collaborator. Don't do it!
Domain objects
The next logical place to put business logic is in the domain class. Allocating responsibility here works to a point, but you will quickly encounter issues when you need business logic that resides in services. I'm not a fan of injecting Grails services into domain classes; this situation quickly spirals out of control and makes unit testing very difficult. For simple per-domain business logic, feel free to allocate it to the domain class. Anything more, and it belongs in a service (or something else, which we'll discuss in a bit).
Services
So most business logic seems to end up in Grails services these days. That's what the creators of Grails intended, and I have no qualms about that. The beef I have with services is that a method is a crappy abstraction for representing business logic in the system. On larger Grails projects, the service approach seems to break down as services take on more and more responsibility, making them more difficult to test. I'm also witnessing a lot of code duplication; in its current incarnation, there is no delineation between public API services and the private services from which public API services compose larger sets of business logic. What we end up with are large, hard-to-test service methods that collaborate with too many dependencies and do too much.
The desire
I want an abstraction in Grails that promotes proper factoring of business logic into unit-testable abstractions. These abstractions can be composed into larger abstractions to provide the necessary logic to fulfill the requirements of the system. The chain of responsibility design pattern may offer some value here. Individual commands that have a singular responsibility can be created, unit tested, and finally composed into "chains" of commands that provide the necessary functionality of the system. The command chains can be integration tested to ensure that the composition of individual commands provides the functionality required by the customer/business. When new functionality is needed, a new command chain is created, reusing existing commands where appropriate and creating new commands where functionality does not exist. Spring Batch has a similar concept that is core to its design.
Conclusion
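A minimal Java sketch of the command-chain idea described above (the Command/Context types and the ValidateOrder/ChargeCard commands are invented for illustration, not from any Grails plugin):

```java
import java.util.List;

// Each command has a singular responsibility and can be unit tested in
// isolation; a chain composes commands into a larger unit of business logic.
interface Command {
    void execute(Context context);
}

class Context {
    // Shared state the commands read and write as the chain runs.
    StringBuilder log = new StringBuilder();
}

class ValidateOrder implements Command {
    public void execute(Context ctx) { ctx.log.append("validated;"); }
}

class ChargeCard implements Command {
    public void execute(Context ctx) { ctx.log.append("charged;"); }
}

class CommandChain implements Command {
    private final List<Command> commands;
    CommandChain(List<Command> commands) { this.commands = commands; }
    public void execute(Context ctx) {
        for (Command c : commands) c.execute(ctx);  // run each link in order
    }
}

public class ChainDemo {
    public static void main(String[] args) {
        Context ctx = new Context();
        new CommandChain(List.of(new ValidateOrder(), new ChargeCard())).execute(ctx);
        System.out.println(ctx.log);  // prints validated;charged;
    }
}
```

A new feature then becomes a new chain built from existing commands plus whatever new commands are missing, which is roughly the reuse story Spring Batch's step composition offers.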
I hope to blog a bit more around this in the coming weeks. I really like Grails and would love to see its usage increase in the coming months and years. I think it has some really cool features that allow you to get up and running very quickly. The plugin system alone is a huge advantage to using Grails, because features like a Chain of Responsibility executor can easily be added to the core Grails system.
Monday, April 30, 2012
Using Apache Commons Net FTPClient on AWS EC2 instances
enterLocalPassiveMode()
method on the FTPClient
instance. This had me confused for a while, which isn't difficult to do and seems to happen often.
Saturday, January 28, 2012
Understanding the power of isolated object testing
I spent some time last night watching J.B. Rainsberger's excellent InfoQ presentation entitled "Integration Tests are a Scam". I've been recently contemplating why I favor unit testing (or as J.B. calls it, isolated object testing) over integration testing and I found his presentation particularly relevant. I highly recommend watching it. He also has a series of blog entries that support the presentation (Parts 1, 2, and 3). I also recommend reading those. They're truly gems.
Testing to prove basic correctness
I find many developers using integrated tests as a way to prove the basic correctness of the class or system under test. J.B. writes that "While integration tests offer value in other contexts, too many programmers use them to show basic correctness, and when they do that they waste a tremendous amount of time and effort." Integrated testing can be used within a project (I'm personally fond of acceptance testing), but integrated testing should not be used to prove basic correctness of your code. Focused, isolated object tests (aka unit tests using test doubles) should be used for this endeavor. If you discover behavior that a collaborator demonstrates and that you have not accounted for in your isolated object tests, you should mimic this behavior in your test doubled collaborator contracts. You want to cover as much of your code as possible with isolated object tests. We'll talk more about contract tests later.
J.B. mentions using an integrated test to learn about how a collaborator might support its contract, but that this integrated test is not included in the basic correctness test suite. I'm wondering if there isn't some other test suite here that we could use to keep integrated tests that support our learning about the runtime and external dependencies. This test suite would be run periodically, but would not be part of the continuous integration process of building a software system. Need to noodle on this more.
Isolated object testing
J.B. states in his aforementioned presentation that he does not use the term "unit testing" and instead favors a more focused term of "isolated object testing". He makes a point to call out the isolated word; these tests isolate the class under test by using test doubles to stub or mock the collaborators of the class under test. These tests focus on a single object and a singular behavior. Any collaborations are realized using test doubles.
I tend to agree that the phrase "unit testing" is a weak phrase for describing the type of testing that we use to drive the design of a single class and a single method. These tests focus on the responsibility of a single method of a single class. These tests also help you focus on understanding the contracts of collaboration between the class under test and its collaborators. This is really where the design part of the equation comes in. Classes that are hard to test are screaming out that the design is wrong. The tests are saying you have too much responsibility in the class under test or too many collaborators. Using your fast isolated object tests, you can engage in a reactive design activity: moving responsibilities out of the class under test, creating new collaborators, and making other design changes.
The collaboration contract
J.B. rightfully makes a big deal about contracts (aka interfaces) to explicitly specify collaboration protocols between objects. He mentions that in Domain-Driven Design, popularized by the Eric Evans book of the same name, three concepts can be used to express a domain-driven model: Values, Entities, and Services. All Services should have contracts and those contracts manifest themselves as interfaces. By specifying interfaces, we explicitly declare the protocol supported by each interface implementation. Constraining ourselves to interface types when specifying collaborations results in looser coupled systems, which is considered a "Good Thing". When engaging in isolated object testing, J.B. details the concept of collaboration tests and the closely associated contract tests. This concept of collaboration tests and contract tests is something new to me and is a big reason the presentation was so valuable in my eyes.
Collaboration tests and contract tests
After watching the presentation, I seem to be doing a pretty good job of using collaboration tests, but I'm not making the association to contract tests. Collaboration tests prove that the client interacts with its collaborators correctly; the client sends the correct messages and message arguments to the collaborator and appropriately handles all outputs from the collaborator. This is traditionally what I have used mock objects for and that seems to be what J.B. is arguing one should do for collaboration testing.
Contract testing, on the other hand, deals with testing that an interface implementation accurately respects the interface it is implementing. Does the implementor support the contract it declares to support? I haven't typically written these types of tests, but I'm going to start. Interestingly, I don't see a lot of this sort of testing in the wild. The part I really dig about contract tests as Rainsberger explains them is that they can be reused across interface implementations. He uses the List interface and two of its implementations, ArrayList and LinkedList, and details how to use implementation inheritance in the tests to DRY up your contract tests.
It really gets interesting when he declares that for every collaboration test that implies a test double behavior, there had better be a corresponding contract test that demonstrates that the interface implementation(s) actually support that behavior. The same goes for values returned: if a test double returns a value in a collaboration test, there should be a contract test that demonstrates that the real implementation(s) actually return that value. This is where I think selective, integrated learning tests can help you discover how your classes on the edges of a system may act when integrated with real external dependencies. But again, those integrated tests are not providing basic code correctness semantics. They're in your project to help you learn, but are not part of the isolated object test suite. Don't lump them in with your isolated tests and don't run them as part of your code/update repos/run tests/commit cadence. This learning test suite should be run periodically during the day, but not as part of the CI build process.
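Rainsberger's List example can be sketched in plain Java with asserts (no test framework assumed; the class names here are my own). The abstract class captures the contract of java.util.List, and implementation inheritance reuses the same checks for each implementation:

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

// Abstract contract test: encodes what any java.util.List must do.
// Each implementation's test subclass only supplies the instance.
abstract class ListContractTest {
    protected abstract List<String> createList();  // factory method per implementation

    void addedElementIsRetrievable() {
        List<String> list = createList();
        list.add("a");
        assert list.contains("a") : "added element should be retrievable";
    }

    void sizeGrowsWithAdd() {
        List<String> list = createList();
        list.add("a");
        list.add("b");
        assert list.size() == 2 : "size should reflect two adds";
    }

    int runAll() {
        addedElementIsRetrievable();
        sizeGrowsWithAdd();
        return 2;  // number of contract checks exercised
    }
}

class ArrayListContractTest extends ListContractTest {
    protected List<String> createList() { return new ArrayList<>(); }
}

class LinkedListContractTest extends ListContractTest {
    protected List<String> createList() { return new LinkedList<>(); }
}

public class ContractTests {
    public static void main(String[] args) {
        new ArrayListContractTest().runAll();
        new LinkedListContractTest().runAll();
        System.out.println("both implementations honor the contract");
    }
}
```

Run with `java -ea ContractTests` so the asserts are enabled. The same shape works for your own Service interfaces: one abstract contract test, one small subclass per implementation.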
The death spiral of integrated testing for proving basic code correctness
Both Rainsberger's tutorial presentation and his blog postings go into great detail about the fallacy of using integrated testing to prove code correctness. I won't rehash what he has to say about it. The conclusions I draw from his material are:
- Focus on isolated object tests and isolation techniques and learn how to do collaboration and contract testing.
- If you need to learn, by all means, write an integrated test. If you need to reproduce a defect, initially write it as an integrated test. However, in all cases, take what you have learned and replicate that in your isolated object test suite. Replicate the behavior you discovered in the integration test(s) with test doubles. This will ensure that your isolated object tests stay true to the behavior of the integrated system.
- Don't get sucked into the downward death spiral of using integrated testing to guide your test-driven design efforts. It will only cause you pain.
J.B.'s series on Integrated Tests are a Scam can be found here.
Friday, November 18, 2011
Spring Security Grails plugin 1.2.4 and transparent password encoding in the User domain object
class User {

    transient springSecurityService

    String username
    String password
    boolean enabled
    boolean accountExpired
    boolean accountLocked
    boolean passwordExpired

    static constraints = {
        username blank: false, unique: true
        password blank: false
    }

    static mapping = {
        password column: '`password`'
    }

    Set<Role> getAuthorities() {
        UserRole.findAllByUser(this).collect { it.role } as Set
    }

    def beforeInsert() {
        encodePassword()
    }

    def beforeUpdate() {
        if (isDirty('password')) {
            encodePassword()
        }
    }

    protected void encodePassword() {
        password = springSecurityService.encodePassword(password)
    }
}
Notice that the domain object now has a springSecurityService injected into it. There are also some GORM callbacks that will be invoked before the state of the domain object is saved or updated in the database. This is where the encoding now occurs; you should not be doing the encoding explicitly yourself.
Lesson learned!
Thursday, October 13, 2011
Grails GSPs consume Underscore.js ERB-style interpolation delimiters
<%= %>
) before Underscore ever gets a chance to use it. I assume evaluations would also be consumed by GSP. Took me a while to figure this out, so I thought I'd raise it up as a concern. Underscore.js does give you a way to change the delimiters using regexes.
Wednesday, October 12, 2011
Rationale for CoffeeScript's 'Fat Arrow' syntax
I was building a Backbone.View today (in JavaScript) and had to deal with this binding. In my JavaScript, I wanted to call fadeOut on the view's el property (which happens to be a jQuery wrapped element). Without using the jQuery proxy method to bind this appropriately, my statement of 'this.el.fadeOut();' will not work. The this reference is no longer the view at the point I'm using it in the anonymous function; the context for this has changed within the anonymous function declaration. The correct JavaScript code is below.
var MyView = Backbone.View.extend({
initialize: function() {
this.template = _.template($('#my-template').html(), this.model.toJSON());
this.render();
},
render: function() {
this.el.html(this.template);
return this;
},
events: {
"click button#doSomethingButton": "doSomething",
},
doSomething: function(e) {
this.model.set({someValue: $('#someValueTextField').val()});
var promise = this.model.doSomethingOnModel();
promise.done($.proxy(function() {
this.el.fadeOut();
}, this)).fail(function() {
alert('Failed to check sequence uniqueness.');
});
}
});
So how does this relate to CoffeeScript? Well, the fat arrow operator is performing the proxying of the 'this' reference for you. If I wrote the above in CoffeeScript, I could write the done callback as:
…
promise.done => @el.fadeOut()
…
Chaining the done and the fail callbacks would necessitate the use of parentheses, but still very succinct. Score one for CoffeeScript!
Wednesday, October 05, 2011
Using JDBC URLs containing LDAP URI to connect to Oracle databases within Grails
I'm working on a Grails application that needs to connect to an Oracle database using an LDAP context. The URL format is something like the following:
jdbc:oracle:thin:@ldap://tns.mycompany.com:389/marketing,cn=OracleContext,dc=com,dc=marketing
I'm also not using the Grails DataSource.groovy configuration for this. I'm managing a separate DataSource in resources.groovy using the Spring DSL, with org.springframework.jdbc.datasource.DriverManagerDataSource. I have not tried this with the standard DataSource.groovy setup. When I first tried using this, I would get an exception with the following text: "javax.naming.NotContextException Not an instance of DirContext". There seems to be a bug involving Spring LDAP and the SimpleNamingContextBuilder class: the SimpleNamingContextBuilder returns a Context implementation, not a DirContext implementation. You can work around this in Grails by adding the following to the Config.groovy file:
grails.naming.entries = null
Problem solved. The DataSource now bootstraps correctly and I can go on my merry way. Kudos to Luke Daley for bringing this to my attention.
Monday, October 03, 2011
Hiking around in Big Woods State Parks
Big Woods State Park
Wednesday, September 07, 2011
CoffeeScript presentations
I did the first CoffeeScript presentation this past August to the Ruby Users of Minnesota (RUM) group and it looks like I'll be doing a second take on the presentation to the Groovy Users of Minnesota (GUM) here in October. If you're interested in the presentation and the examples, you can find them here.
Tuesday, September 06, 2011
Tracing and profiling SQL in Grails using log4jdbc
I spent some time today tracing and profiling SQL in one of the Grails applications that I support. I was looking around for a proxy JDBC driver and happened on log4jdbc. It's similar to p6spy, but it seems to be actively developed and supported. I downloaded the driver, dropped it in my lib directory, changed the logging and datasource configurations a bit in Grails, and I was up and running. Very handy. I made copious use of the SQL timings profiling today. There are many other options for tracing and profiling with this tool. Here are my changes to Config.groovy for enabling SQL timings on all SQL statements:
Config.groovy change to enable logging of SQL information from log4jdbc:
log4j = {
info 'org.codehaus.groovy.grails.web.servlet',
...
'grails.app',
'jdbc.sqltiming'
}
DataSource.groovy changes to enable log4jdbc:
development {
dataSource {
driverClassName = "net.sf.log4jdbc.DriverSpy"
url = "jdbc:log4jdbc:mysql://localhost/mydb-DEV?useUnicode=true&characterEncoding=UTF-8&useCursorFetch=true&autoReconnect=true"
}
}
Can't say enough good things about this tool. Really helped me zero in on some queries that were performing poorly with large data sets.
Wednesday, August 24, 2011
Identifying producer/consumer scenarios in the wild
I've been working on a data import process the past couple of days, trying to solve some memory issues (OOMEs). Essentially we have a reader (the producer) and a writer (the consumer). The writer part of this scenario operates much more slowly than the reader part. The reader is implemented as an iterator, so it originally produced only enough work for the writer to consume. As this design evolved over time, parallel execution of the writer was added in an effort to speed up the overall writing process. The coordination of the parallelization is handled by an ExecutorService implementation. With this executor service in place, the iteration of the reader can operate independently of the writer. Thus, the reader now creates lots of tasks that are submitted to the executor service, where they queue up. The executor service is not bounded, so it just keeps accepting tasks. This wouldn't be a problem if the number of tasks were small and the memory footprint of those tasks was low, but that is not our situation. Thus, we keep blowing out our Java VM process with OOMEs. We're in the process of fixing this issue, using a bounded concurrent collection to handle the buffering of items between the reader, the executor service, and ultimately the writer.
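The bounded-buffer fix described above can be sketched in Java (pool size, queue capacity, and the class name are illustrative, not from the real import process): give the ThreadPoolExecutor a bounded queue and a CallerRunsPolicy, so a fast reader blocks on its own submissions instead of queueing tasks without limit.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedImport {

    static int runImport(int items) {
        AtomicInteger written = new AtomicInteger();

        // Bounded queue of 16 pending tasks; CallerRunsPolicy applies
        // back-pressure by making the submitting (reader) thread run the
        // task itself when the queue is full, rather than rejecting it or
        // letting an unbounded queue eat the heap.
        ThreadPoolExecutor writers = new ThreadPoolExecutor(
                4, 4, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(16),
                new ThreadPoolExecutor.CallerRunsPolicy());

        for (int i = 0; i < items; i++) {            // the "reader" producing work
            writers.execute(written::incrementAndGet); // stand-in for the slow write
        }

        writers.shutdown();
        try {
            writers.awaitTermination(1, TimeUnit.MINUTES);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return written.get();
    }

    public static void main(String[] args) {
        System.out.println(runImport(1_000));  // prints 1000
    }
}
```

The key point is that memory use is now bounded by pool size plus queue capacity, no matter how far ahead of the writer the reader gets.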
In support of Groovy's dynamic method dispatch
Came across a design issue today where we really could have used Groovy's dynamic dispatch and multi-method support to invoke the proper method based on the runtime type of an object instance passed as a parameter to the method. This blog posting by MrHaki gives a great description of how Groovy solves this problem without the need for a double dispatch pattern implementation. We're writing our code in Java, so we don't get the Groovy goodness of dynamic method dispatch and multi-methods. We resorted to a double dispatch implementation to solve our problem, rather than using the instanceof operator and a whole bunch of conditionals.
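For readers unfamiliar with the pattern, here is a minimal sketch of double dispatch in Java (the Shape/Handler types are invented for illustration, not our actual domain). Instead of instanceof checks, each concrete type calls back into the handler with its own static type, so overload resolution picks the right method:

```java
interface Shape {
    String accept(Handler handler);  // first dispatch: virtual call on the shape
}

class Circle implements Shape {
    // Second dispatch: 'this' has static type Circle here, so the
    // compiler binds to Handler.handle(Circle).
    public String accept(Handler handler) { return handler.handle(this); }
}

class Square implements Shape {
    public String accept(Handler handler) { return handler.handle(this); }
}

class Handler {
    String handle(Circle c) { return "circle"; }
    String handle(Square s) { return "square"; }
}

public class DoubleDispatchDemo {
    public static void main(String[] args) {
        Handler handler = new Handler();
        Shape shape = new Circle();                 // static type is Shape
        System.out.println(shape.accept(handler));  // prints circle
    }
}
```

In Groovy, `handler.handle(shape)` alone would pick the Circle overload at runtime via multi-methods; Java needs the accept() callback to recover the concrete type.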
Sunday, August 07, 2011
Book review: Clean Coder

My rating: 3 of 5 stars
Decent follow-up to Clean Code. Much of the book has been talked about in other avenues. The professional attitude discussion was decent, but mostly common sense. I liked the book, but have seen the content before. Nothing really new here. Definitely meant more for new developers and would wholeheartedly recommend it for that group of people.
View all my reviews
Book review: The Great Derangement

My rating: 3 of 5 stars
Not nearly as good as Griftopia, also by Taibbi. It kind of wanders, and there doesn't seem to be any real plot to this book. Kind of makes me wonder if Matt was still on drugs when he wrote this thing. The religion parts were semi-interesting; the political insights were boring. Can't recommend this. Some good parts, but the whole doesn't make for a good read.
View all my reviews
Monday, July 25, 2011
Controlling transitive dependency resolution in Gradle
Just hit this, so I thought I would write up a quick entry. I'm trying to get Hibernate and Apache CXF to work together in a Gradle build. I ran my test suite and saw issues with CGLib classes. After a little bit of research, it seems there's an incompatibility between the ASM library that Hibernate's CGLib uses and the one that Apache CXF uses. The solution is to exclude cglib-2.1_3.jar and use cglib-nodep-2.1_3.jar instead. To do this in Gradle:
configurations {
all*.exclude group: 'cglib', module: 'cglib'
...
}
dependencies {
compile group: 'cglib', name: 'cglib-nodep', version: '2.1_3'
...
}
Adding these lines to the build.gradle file allows me to remove the cglib-2.1_3.jar dependency and specify the nodep version instead. Pretty slick.
Thursday, July 14, 2011
Sharing a 27" Cinema Display using an Atlona AT-MDP21 KVM switch
Quick blog post here. I hooked up an Atlona AT-MDP21 2x1 Mini DisplayPort KVM switch to my 2009 Mac Pro and my 2010 MacBook Air. Both run the 27" Cinema Display at its highest native resolution, 2560 x 1440, even though the Atlona documentation states the highest supported resolution is 1920 x 1200. It was easy to set up, and I have the Cinema Display's USB connection working between the two computers. The only downside is the slight 1-2 second pause when switching the KVM from one computer to the other. Not a big deal. Highly recommended.
Friday, June 24, 2011

My rating: 5 of 5 stars
Awesome non-fiction. Author does a great job keeping the drama and action going. I couldn't put this book down after I started reading it. Interesting to watch Max Butler's (aka Max Vision) downward spiral into cybercrime and black hat hacking. Very eye opening to read the details of how these talented hackers can cloak their infiltration and syphon information from computers for weeks or months.
View all my reviews
Friday, June 17, 2011
Zero Day by Mark Russinovich

My rating: 4 of 5 stars
Pretty good cyber-thriller. I think this is Mark's first novel. Knowledgeable author. The plot is very plausible and believable, with a fair amount of technical information around malware, rootkits, and computer viruses. A fun, very quick read, and I found myself not wanting to put it down, especially after I got to the second half of the book. The last third of the book is like a runaway freight train of suspense. Recommended!
View all my reviews
Sunday, March 27, 2011
Grails JAR dependencies with classifiers
Quick post on specifying Grails dependencies in BuildConfig.groovy. The recommended way to suck in JAR dependencies in Grails is to use the dependencies DSL maintained in BuildConfig.groovy. I had a need to bring down a dependency that has a classifier attribute on it. Didn't really find anything definitive on how to do it, but it seemed like following a convention might do the trick. Here's how I solved the issue:
repositories {
grailsPlugins()
grailsHome()
grailsCentral()
mavenCentral()
ebr() // SpringSource Enterprise Bundle Repository
}
dependencies {
runtime group:'net.sf.json-lib', name:'json-lib', version:'2.4', classifier:'jdk15'
}
Saturday, March 05, 2011
Griftopia by Matt Taibbi
I recently read this book after seeing that Alan Cooper had read it and stated that it was a terrifying book. I wondered what would be so terrifying about "Bubble Machines, Vampire Squids, and the Long Con That is Breaking America". After reading it, I wouldn't characterize it as terrifying as much as I would characterize it as infuriating. The incompetence, greed, self-interest, and gluttony that is repeatedly portrayed in the book is extremely infuriating to me as a hardworking American citizen who pays taxes. The book chronicles some of the most audacious power grabs this nation has ever seen, and in most instances, those power grabs have happened during the past two decades. Taibbi chronicles why the Tea Party is chasing its own tail, lambasts Alan Greenspan as "a one-in-a-billion asshole that has made America the mess it is today", and details the mortgage, commodities, and wealth fund scams that we, American taxpayers, have had to endure the last couple of years. The book is written in a no-holds-barred fashion with a fair amount of profanity thrown in to spice up the prose. It's an entertaining read, but also very thought provoking, and it sheds some interesting light on the current political climate, especially around Obamacare and the health insurance industry. Very highly recommended.
Friday, March 04, 2011
Groovy Remote Control plugin via Maven
I had some issues getting the Groovy Remote Control plugin to pull down through Maven today. The documentation currently in place is not correct. Here are the fragments of my Maven POM that enabled me to pull the plugin in as a dependency:
Sunday, February 27, 2011
It's all about the conversations!
More pondering as I contemplate my previous consulting gig. This time, I'm considering requirements discovery. Pre-agile, people would write large requirements documents, hoping to capture all the requirements the developers needed to build a system that would satisfy the customers. Unfortunately, this view of discovering all the requirements ahead of time is awfully naive. Things change. Requirements go undiscovered. Requirements that are captured are not thought out as well as we would like, and when it comes time to implement them in software, they don't make sense or are plain wrong. Thus, large efforts to capture all the requirements involve some amount of wasted effort. This approach focuses most of the conversations at the beginning of the project. Conversations after the requirements document has been written and signed off are discouraged; they're viewed as evidence that there are errors in the requirements document. There doesn't seem to be any room for learning with requirements documents.
Now we have agile methods and the user story. User stories are not requirements. They're a planning tool for the agile team. They are a statement of value that the business would like built. There may or may not be acceptance criteria associated with the user story, depending on where the user story is in its lifecycle. The most important part of the user story is the conversations that need to happen to flesh the story out so the feature value can be realized in the product. This detail seems to escape a lot of people trying to use user stories to build products. User stories give me the freedom to have conversations with all interested parties regarding the statement of value. I learn just-in-time about the requirements as I implement the feature. The business learns about how those requirements they communicated manifest themselves in a product. And we both have the freedom to adjust, learning about the product along the way.
The above statements about user stories do not preclude one from using prepared, written documentation to feed the conversation. On my previous gig, we had a lot of federal government documentation regarding reporting and calculation requirements. These requirements are set in stone by the government. Yet there still need to be conversations about how those requirements will be accomplished across releases and sprints. This is where the conversations come back to the forefront.
User stories allow me to deliver value to the business or customer in bite-size portions. Value is delivered in small increments, thus I can deliver these more frequently and solicit feedback from the business or customer. When I have short feedback loops in place, I can nimbly change my course if I need to.
Agile database modeling
My previous project has me doing a lot of introspection lately. One of the recurring themes that I have been noodling on has been evolving a data model using agile development techniques. The applications being built on this project are based on the .NET platform. The development group is currently using the Database Project template in Visual Studio 2010, but is looking to adopt a database migration process soon. The VS 2010 Database Project template does an admirable job of keeping track of all the DDL for your project, but it offers nothing for refactoring your database over time and migrating a production database. The template seems meant for rebuilding the database from the ground up; there is no concept of database schema changes or migrations.
The database will evolve over time. Development groups should learn how to build their data model incrementally over many iterations and releases. Database migration tools can be very helpful in your quest to evolving your database over time. Tools like Liquibase and Rails migrations are very good at supporting this sort of development behavior. There are tools on the .NET platform that do this sort of thing.
Another thing that has caused quite a bit of headache is the desire of the data group (data architects and DBAs) to try to get out ahead of the developers in some ways and build out much more of the data model than the developers need for the current sprint. We have found that when your data modeling efforts are not driven from user stories, they tend not to align with the efforts of completing the user stories. Thus developers and data people end up conversing about data model changes that could have been avoided in the first place by waiting for the right time to initiate changes to the data model. My advice for building an operational data store that one or more applications will be developed on top of: don't try to develop the entire data model upfront. You will inevitably end up changing the data model to support requirements as they change (hopefully you're using stories to guide your development). Typically your application developers will have constraints and needs that must be accommodated in the data model. As the data person, you need to be communicating with them and working from user stories. Ideally, the data people are part of the project team and are dedicated to the agile process. We really did not have that on this project. I think that caused issues and slowed us down a bit.
Refactoring Databases should be required reading for development groups.
Great experience with Acceptance Test Driven Development (ATDD) and SpecFlow
I recently left a .NET gig where I was brought in to bring agile and craftsmanship behaviors to the entire project team. One of the more successful endeavors was the acceptance test-driven development (ATDD) effort. I was lucky enough to bring Joel Levandoski (joellevandoski) on board to head up this effort. Joel is an awesome developer and on this project, he was running as the lead QA resource, bringing automated testing to the group. The QA group traditionally had not done automated testing, so we had a bit of learning to do. Joel did a fabulous job of learning a new tool (SpecFlow for .NET) and training other QA people on its use.
Our QA developers created acceptance tests from the acceptance criteria specified in the user stories. We spent a lot of time evolving our user stories, but by the end of the first six months, I think we finally had a format that we liked and that facilitated communication between all the participants on the project. Our acceptance tests were written using SpecFlow, a Gherkin-compliant BDD testing framework for the .NET platform. SpecFlow is an amazing tool and its integration with Visual Studio is pretty nice. Having this integration with the IDE is a great selling point for using SpecFlow; SpecFlow specifications can be translated to normal xUnit tests using the VS integration. SpecFlow generates a stub unit test driver class for every feature file and it's compliant with several unit testing frameworks. This SpecFlow feature allows it to run directly from the various GUI unit test runners. This is a nice convenience. We used the MSTest generation strategy baked into SpecFlow.
Initially we used WatiN for testing our ASP.NET MVC app, but the product later grew into a suite of web and Silverlight applications. Therefore, we gravitated to using WebAii from Telerik. This tool worked well for testing both web and Silverlight environments.
Our specifications were very focused on actions and the outcomes of those actions. We made a concerted effort to push the details of the steps into the fixture code, keeping the specifications light and to the point. We followed the technique advice from Concordion.org. Doing this really made our specifications communicate "what" was being tested, not "how" the test worked.
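The action/outcome focus looks something like this in Gherkin syntax (a hypothetical sketch, not an actual feature from the project):

```gherkin
Feature: Message search

  Scenario: Filtering messages by sender
    Given 3 messages from "alice" and 2 messages from "bob" exist
    When I search for messages from "alice"
    Then I should see 3 messages
```

Notice the steps only state what happens and what should result; the browser clicks, waits, and element lookups all live down in the step-definition (fixture) code, where they can change without touching the specification.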
Many thanks to Joel Levandoski and Tim Anderson for driving the ATDD efforts forward and making this effort a reality. There were numerous times that I would find Joel writing specifications during sprint planning as the rest of the group was reviewing user stories and associated acceptance criteria for the upcoming sprint. It is incredibly powerful to come out of sprint planning with a good portion of your acceptance tests already executable (all failing as inconclusive).
Wednesday, January 12, 2011
Tweaking your user story mapping efforts
I had a great day leading a business group through a user story mapping session at my current client. Story mapping is a technique that Jeff Patton has popularized for giving your product backlog some structure. I've done a couple of these user story mapping sessions with pretty good success. Today, we changed up a couple of things with the session and saw some good results. Thought it might be worth a blog posting.
First, identify your high-level activities and lay them out across a wall. We used large Post-It sheets and attached one high-level activity to each sheet. By doing this, we could move activities and associated tasks around the room, allowing us to rearrange priorities easily.
Next, give the customer/business group five (5) minutes to come up with as many tasks as they can think of for each high-level activity. Time-boxing the effort keeps you on a regular cadence. Post the task Post-Its on the large Post-It sheets in no particular order. Move from one activity to the next, spending the same amount of time on each. Don't worry about duplicate tasks or the prioritization of the tasks. You'll come back to these, culling and prioritizing the tasks associated with each activity.
After harvesting the tasks for each activity, go back to each activity and cull out the duplicate tasks and prioritize the tasks according to Jeff Patton's story mapping technique. We spent 20 minutes on each activity and were able to get a backbone of tasks defined, with other non-core tasks associated with the activity.
Another technique for ensuring that high priority tasks percolate to the walking skeleton row of the story map is to give the business people sticky dots to place on the tasks that they think are core. We had our business folks put their initials on the sticky dots so we knew who voted up the task. The dots stand out on the story map and the business really liked using this prioritization technique. Having the initials on the dots gives you added information regarding who is connected to what stories.
Now that the tasks are prioritized, you can walk your story map and talk about it with your customers/business people. Walking the story map ensures that the ordering of the activities and tasks makes sense and nothing has been missed. By tweaking our story mapping session today, we were able to keep everyone in the business group engaged and the conversations flowing.
Tuesday, January 11, 2011
PeepCode has new Rails 3 videos up
If you're interested in getting up and running with Rails 3, I recommend PeepCode's videos.
Testing as a learning sandbox
I've been spending some quality time with NHibernate 3.0. Last night I got stuck on an issue with a Criteria query where a collection passed to a constructor was null and the framework was complaining. After writing a couple of integration tests that exercised various parts of my domain object model, I was able to determine that the collection types I was using for the many-side of relationships (ISet&lt;T&gt; and HashSet&lt;T&gt; in this case) were inappropriate for my collection semantics configuration. I'm continually amazed at how powerful testing, both unit and integration, can be. My tests today allowed me to create a sandbox to try things and work out a misunderstanding that I had with NHibernate. Pretty cool.
Friday, December 31, 2010
Anatomy of a successful large agile project
I recently had a conversation with a colleague of mine at a company where I’m currently consulting. We’ve been trying to bootstrap a collection of projects using an agile development process and associated software craftsmanship behaviors. We have had mixed success to date. Frustrated with the progress, my colleague asked me to enumerate what I felt were the success factors on the WestlawNext project in which I had recently participated.
I worked at Thomson Reuters from January 2008 to August 2010, during the initial releases of the WestlawNext project. I was with the project from the beginning of its software development; the product development group had been working on the inception of the WestlawNext project for a couple of years prior. WestlawNext was a very large project. Hundreds of people were involved and millions of dollars were spent to build the next generation legal and regulatory research tool. A lot was riding on this product. It had to be a success—there was no option for failure. The following themes are what I feel made this project a success.
Attitude
Now that I’ve had time to ponder my WestlawNext experience from afar, I think the number one reason for its success was attitude. This was an audacious effort: build a new legal research tool in two years’ time, with that many people involved, using an agile software development process.
But from the very beginning, a “can-do” attitude was instilled in the group that we would succeed. We were going to “knock the ball out of the park” with this product. There was never a thought that this thing might fail. Many concerted efforts were made to continually propagate this attitude throughout the participants of the project. Project tee-shirts, baseball trading cards, raffles, and summer socials were utilized to promote this team spirit. This infectious attitude allowed us to overcome obstacles that would probably derail other projects. People were willing to take responsibility for their work and put in the effort over and above the call of duty time and time again.
Communication
Communication is one of the most important functions of a software development project. Large projects are very susceptible to communication breakdowns as the number of people increases. We tried to minimize these breakdowns by favoring face-to-face communication as much as possible. We were encouraged as software developers to pair program. Designers were encouraged to work directly with developers on styling concerns. We were encouraged to collaborate together when tough problems arose. No GoToMeeting. No conference calls. Face-to-face conversations.
We were extremely lucky to be able to co-locate almost everyone on the project in three areas of the Thomson Reuters facility in Eagan. When I say everyone, I mean business people, vice presidents, directors, testers, designers, managers, coaches, and software developers. This is one of only a few places where I have consulted that have had the luxury of co-locating people in common areas.
Leadership
WestlawNext benefited from strong leadership that was 100% dedicated to the project. No other obligations—they were focused solely on the development of the new product. Our leaders were also quite familiar with the agile software development process. Some of them had come from other agile projects, both within the company and from outside. They didn’t have to start from square one and many already knew the key behaviors of the process. A few of them were software developers at one time (or still are). This is refreshing from a developer’s point of view. They understand what it takes to build software; they’ve been in the trenches.
There is one moment in particular that I am very fond of. I was working on a tough networking issue with some .NET code that we had provided another group. The code was throwing spurious socket-closed exceptions, but it didn’t happen all the time and seemed to occur only when server load increased. Our senior director, one of our leaders, was helping triage the issue and participating in our root cause analysis. This senior director had technical chops and was quite proficient at network analysis. He rolled up his sleeves, got right in, and loved being able to help solve the issue. We did solve the issue; it turned out to be tied to a deprecated thread-local storage API in .NET. That leader earned a ton of my respect that day.
Testing
Testing is paramount to building a quality software product. The WestlawNext project embraced testing like I have never seen before in my career. We evolved our designs with unit testing. We used integration testing to ensure that software components were wired together correctly. Acceptance testing ensured that features did not regress in future iterations of development. Load and performance testing was continuously run in an effort to tune the overall product. Beta testers were allowed to play with the software well in advance of its initial release date, ensuring that it satisfied the customer.
All of this testing allowed us to build tight feedback loops, giving us near-instantaneous data on the health of our growing and evolving product. The suites of tests infused confidence within the project group; we knew exactly how the software performed at specific load levels. I cannot fathom working on a software development project that does not fully embrace the aforementioned levels of testing.
Conclusion
In conclusion, I’m starting to realize that the WestlawNext project may have been one of those rare moments where everything came together in near perfect harmony to produce a great product. As I have moved on from Thomson Reuters, I yearn to replicate a similar experience at my other clients. My current engagement only reinforces the fact that every software development project takes a different path to success, and some may never make it to the end.
Thursday, December 02, 2010
Sunday, October 31, 2010
Kaleidoscope diff tool for Mac OS X
Found a really interesting new diff and merge tool for the Mac: Kaleidoscope. Native app, integrates with Versions, Cornerstone, and the command line. Looks promising.
Autofixture: A generic Test Data Builder implementation for .NET
Just came across a Test Data Builder implementation for .NET, Autofixture. The Test Data Builder pattern has become quite popular recently since it was mentioned in Growing Object-Oriented Software, Guided by Tests. I've used the pattern before, but I've always built the builder implementations by hand. This implementation looks really promising.
Wednesday, October 13, 2010
Great video on what motivates us
Great video on drive and motivation. Love the whiteboard drawings. Spend 10 minutes watching this video.
Tuesday, August 31, 2010
Loving my new Magic Trackpad
Just received my Magic Trackpad from Apple today. Very impressed after a bit of use with it. Love all the different gestures that you can map to. After using this, a mouse is going to seem awfully archaic.
Monday, May 17, 2010
Brief history of mock objects
http://www.mockobjects.com/2009/09/brief-history-of-mock-objects.html
Monday, May 10, 2010
Steve Freeman on sustainable TDD
Excellent presentation by Steve Freeman on sustainable TDD. Lots of great tips for making your unit tests easier to comprehend and maintain.
http://www.infoq.com/presentations/Sustainable-Test-Driven-Development
Sunday, May 09, 2010
Practical styles of pair programming
Excellent blog on pair programming.
http://blog.xebia.com/2010/05/09/practical-styles-of-pair-programming/
A quote from the blog entry:
"No you're not faster on your own, you're just creating more crap for your colleagues to puzzle over and eventually delete. The code you write alone sucks. That guy that is getting on your nerves is trying to tell you (clumsily) that your code sucks, try to listen to him and you'll turn into a better programmer."
Have you encountered one or more of these styles? How many developers are pair programming these days?
Monday, May 03, 2010
Tool coming to Xcel Energy Center
Tool concert on July 1, 2010
Tickets go on sale this Saturday.
Thursday, April 29, 2010
Uninstalling Mac developer tools
sudo /Developer/Library/uninstall-devtools --mode=all
Found this advice here.
Monday, April 26, 2010
Mac OS X key bindings
http://www.erasetotheleft.com/post/mac-os-x-key-bindings/
Unfortunately, this doesn't seem to work with Xcode. My original investigation started here.
Sunday, April 18, 2010
Test-Driven in Groovy presentation in the bag
Thursday, April 08, 2010
iPhone presentation
http://groups.google.com/group/jamsessions/web/10-04-07-iphone-os-the-next-killer-platform
A huge thanks goes out to my co-presenter, Bob McCune. The SenateJam demo was a big hit.
Sunday, April 04, 2010
Preparing for DevJam Jam Session iPhone presentation
Wow! Talk about growth. Looking forward to the presentation.
Saturday, April 03, 2010
First impressions of the new iPad
The processor in this device is very fast; the device is very snappy with animations, games, and video. I can see game makers really going after this thing. EA seems to have a couple of games out for the iPad (I tried Scrabble and Need for Speed). Game control was very good for Need for Speed.
Transferring .epub files to your iPad
Tuesday, March 23, 2010
Nice Mercurial tutorial by Joel Spolsky
Friday, March 19, 2010
Mockito's @InjectMocks annotation does reflection-based DI
Tuesday, March 16, 2010
Cool features in Cucumber, part 1
Saturday, March 13, 2010
PeepCode screencast: Use the Cucumber
CoverScout for retrieving iTunes album art
Monday, March 08, 2010
New annotations in mockito 1.8.3
@Captor
This annotation will automatically create typed argument captors (
org.mockito.ArgumentCaptor<T>
) in your unit tests. Argument captors are essential in verifying indirect outputs to your mocked collaborators.
public class FoobarCaptorTest {
@Mock FoobarService mock; // the collaborator being verified
@Captor ArgumentCaptor&lt;Foobar&gt; foobarCaptor;
@Before
public void init() {
MockitoAnnotations.initMocks(this);
}
@Test
public void shouldDoSomethingUseful() {
//...
verify(mock).doStuff(foobarCaptor.capture());
Foobar capturedFoobar = foobarCaptor.getValue();
assertEquals("foobar", capturedFoobar.getName());
}
}
@InjectMocks
Automatically injects mocks by type using setter injection. Constructor injection is not currently available, but if you want to provide a patch, the mockito team will gladly consider your contribution. I'm actually more interested in reflection-based injection, similar to what Spring uses when annotating dependency fields using @Autowired. Having your unit tests inject dependencies via reflection would help me avoid the set* methods on the implementations. I may have to play with this a bit.
public class FooBarManagerTests {
@Mock private FooDependency mockFoo;
@Mock private BarDependency mockBar;
@InjectMocks private FooBarManager manager = new FooBarManagerImpl();
@Before
public void initMocks() {
// Initializes all mocks and then injects those mocks into the FooBarManager instance.
MockitoAnnotations.initMocks(this);
}
@Test
public void shouldDoSomething() {
manager.doSomething();
verify(mockFoo).doSomethingToFoo(any(String.class));
verify(mockBar).doSomethingToBar(any(Integer.class));
}
}
Saturday, February 27, 2010
Gerard Meszaros "From Concept to Product Backlog" talk
Friday, February 26, 2010
Adopting Agility with XP, Scrum and Lean Thinking
Monday, February 08, 2010
DevJam course offerings for February-March 2010
Wednesday, January 13, 2010
IntelliJ IDEA 9 as an Adobe Flex IDE
I can't comment on how it stacks up against Flash Builder 4, but Flex development in IntelliJ is a joy. I'll post more as I get a chance to use it some more.
Wednesday, December 16, 2009
Are you cultivating your software development expert mind?
I spend a fair amount of time reading books and then applying the reading to personal coding projects. Typically I have used these projects to learn a new technology. Recently I've started working through Dave Thomas's Code Katas. From Dave's Code Kata website:
Code Kata is an attempt to bring this element of practice to software development. A kata is an exercise in karate where you repeat a form many, many times, making little improvements in each. The intent behind code kata is similar. Each is a short exercise (perhaps 30 minutes to an hour long). Some involve programming, and can be coded in many different ways. Some are open ended, and involve thinking about the issues behind programming. These are unlikely to have a single correct answer.
So, again, what are you doing to practice software development?
Friday, November 13, 2009
Not a fan of Apple's new Magic Mouse
Anyone else seeing issues with the basic laser tracking performance?
Wednesday, November 11, 2009
Using argument matchers in EasyMock and mockito
expect(mockObject.retrieveSomething((String) anyObject(), false)).andReturn(someObject);
Unfortunately, EasyMock and mockito do not like this. They both want you to use matchers for all parameters if you use matchers for any parameters. However, the two libraries react quite differently when this situation occurs. EasyMock complains with a somewhat confusing message that at first blush makes it seem like we declared the expectation for multiple invocations. It really threw us off for a while (at least an hour) trying to figure out what was wrong with our expectations. Here is how we fixed it in EasyMock:
expect(mockObject.retrieveSomething((String) anyObject(), eq(false))).andReturn(someObject);
Mockito does a much better job of stating that when you use an argument matcher in your expectation, you have to use argument matchers for all of the parameters of the method call participating in the expectation. I find it interesting that mockito retains EasyMock's behavior with regard to argument matching (mockito began as a fork of EasyMock) but improves the error messaging when something goes wrong with the mock object setup. This further reinforces my decision to forgo EasyMock in favor of mockito.
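For comparison, here is the same fix expressed in mockito syntax (a sketch reusing the hypothetical mockObject from the EasyMock example above and assuming static imports from org.mockito.Mockito and org.mockito.Matchers):

// Mixing a matcher with a raw value makes mockito fail fast with an
// InvalidUseOfMatchersException whose message explains the all-or-nothing rule:
//   when(mockObject.retrieveSomething(anyString(), false)).thenReturn(someObject);

// Wrap every argument in a matcher and the stubbing works:
when(mockObject.retrieveSomething(anyString(), eq(false))).thenReturn(someObject);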
Getting a handle on code quality with Sonar
We've been working on getting unit tests built around a legacy code base and Sonar has been a big help in identifying classes that are the biggest code coverage offenders. We used the Clouds feature, a word cloud that weights the class names in the cloud based on code coverage and complexity. The less test coverage on the class and/or the more complex the class, the larger the weight of that class name word in the word cloud. It really helped us focus on where to direct our testing efforts.
I have yet to get this tool up and running in one of my own projects, but things are finally starting to simmer down with consulting and training activities, so I hope to focus on building out a CI environment using Hudson and hooking Sonar into that environment. Stay tuned.
Tuesday, November 10, 2009
Promoting keystroke use in Eclipse
Here is a screencast of the MouseFeed plugin in action:
Thursday, November 05, 2009
Completed another Test Driven and Refactoring course for DevJam
One area that we will need to work on is the mock objects content. We don't have any hands-on exercises for using mock objects, and we heard about it in the reviews of the course. I did walk everyone through a demonstration of using mock objects in your unit tests, but I mis-gauged how much interest the participants had in mock objects and their desire to get their feet wet with them. Some of the class participants stayed after the course ended and we did another 40 minutes of live coding demos on the use and features of mock objects (using moq as the mocking framework in .NET).
All in all, an awesome two days for me and hopefully for the course participants.
Tuesday, October 27, 2009
Test Driven and Refactoring class in Chicago
I did have a few participants who actually bowed out after the first day of training. One in particular was adamant about not writing tests and really does not believe unit testing, and TDD in particular, is useful in software development. This person was very much in favor of big design up front. This person's views really threw me for a loop. The group that I gave the training to has significant issues with quality, so the view that testing is not worth the effort seemed very ironic in this situation. Needless to say, I was not able to get this person to realize how unit testing and TDD help you in the design process. Oh well, you can't win them all over.
I heard a lot of good feedback around the mock objects example that I demonstrated. In this example, I demonstrated not only behavior verification with the mock objects, but also capturing indirect outputs on the mock objects and then verifying the state of those indirect outputs. I used mockito 1.8 for the demo. All in all, a great class.
Second Groovy and Grails training in the bag
Wednesday, October 07, 2009
Using the new Groovy-Eclipse V2 plugin
Monday, October 05, 2009
First Groovy and Grails training is in the bag
Don't put developers on an operating system that they don't know for training
The DevJam training room has Mac minis which boot either Mac OS X Snow Leopard or Windows Vista (via Boot Camp). I had the systems booted to Mac OS X for the training. Unfortunately, all of the developers who came to this training were unfamiliar with Mac OS X, but willing to try it. Bad move on my part. I ended up answering far too many questions about the operating system and the tools we were going to use. Oracle SQL Developer also gave me problems in the Mac OS X environment when trying to update the tool with the MySQL drivers through its software updating system.
If you think you have enough code examples, you don't!
I had about 10-12 Groovy code examples to demonstrate various features of the language. Far too few for the questions that cropped up. Luckily, it was Groovy, so writing new code examples or changing existing ones was pretty straightforward. Kudos to Groovy for being very easy to explore and play with. The participants thought very highly of the interactivity of the coding during the session.
Don't try to do both Groovy and Grails in a single day.
I knew going in that doing both was going to be very difficult. I just didn't realize how difficult it would be. Again, due to operating system and tool snafus, I didn't finish up the Groovy stuff until well into the afternoon. Not much time for Grails. I was looking forward to the exercise in Grails and we didn't get very far with it.
Automate the packaging of the student materials in electronic format.
We decided to put all the training handouts, examples, and anything else helpful for the students on 4 GB flash drives and give the flash drives to the students to keep. That's good. What's not good is missing some things on the flash drive and updating flash drives during the course. Next time I'll use an Ant build script to build a distribution and clean out any Subversion metadata from the student materials.
The Groovy Eclipse plugin seems to be making headway.
One of the participants in the group, Nick Spilman, had his laptop along and was using Eclipse Galileo and the new Groovy Eclipse plugin during the Groovy portion of the training. He thought it worked well with Groovy. I used IntelliJ 9.0 EAP (Maia) and that also works well. Looks like SpringSource (or shall I say VMware now) is getting serious on the tooling for Groovy and Grails.
Need to spend more time on understanding Groovy's meta-programming facilities.
It's one thing to use Groovy and apply its meta-programming facilities (aka the MOP) successfully in your own work. It is a far different thing to try to teach others about Groovy's meta-programming facilities. Teaching a concept, especially one as complicated as meta-programming, is extremely difficult.
Performing test runs of presentations and trainings is essential.
I'm very pleased that I was afforded the opportunity to be able to offer a couple of test runs of this training to some developers before offering it to the public. Like anything else, it takes practice and feedback to get good at something. I have another test run of this training later this month and I'm sure it will be much better than it was the first time out.
Saturday, September 26, 2009
ColorSchemeDesigner.com
Monday, September 21, 2009
Video conversion with HandBrake
Thursday, September 17, 2009
Is Spring Framework becoming a configuration nightmare?
Tuesday, September 15, 2009
Keeping traceroute/tracepath by my side these days
Monday, September 07, 2009
Snow Leopard ships with Subversion 1.6.5 support
Thursday, September 03, 2009
Snow Leopard installation frees up a ton of disk space
.NET thread local storage implementation kicks my butt
From Microsoft:
The .NET Framework provides two mechanisms for using thread local storage (TLS): thread-relative static fields (that is, fields that are marked with the ThreadStaticAttribute attribute) and data slots. Thread-relative static fields provide much better performance than data slots, and enable compile-time type checking. For more information about using TLS, see Thread Local Storage: Thread-Relative Static Fields and Data Slots. (http://msdn.microsoft.com/en-us/library/system.threading.thread.allocatenameddataslot.aspx)
Monday, August 31, 2009
Mac Pro upgraded to Snow Leopard
Tuesday, August 18, 2009
Know your SUT and your mocks
Wednesday, August 12, 2009
My top 5 favorite Mac OS X apps/tools/utilities
- Google Quick Search Box/Quicksilver: I was a big fan of Quicksilver, but it seems to be dead. Sounds like the Quicksilver developer is now working with/for Google on something similar: Google Quick Search Box (QSB). Love QSB and it seems to be in active development too.
- iTerm: Gotta have a command line. This one does tabbed terminal consoles.
- TextMate: Great programmer's editor.
- Versions: A good Subversion client for OS X.
- Growl: Notification system. Extremely helpful.
Sunday, June 21, 2009
First reactions to Xcode 3.1.3
Thursday, May 14, 2009
Learn IDE key mappings while pair programming
I've been doing a fair amount of pair programming lately on my current gig, a good thing. One behavior that we have been practicing while pair programming is gently forcing the driver of the pair to use keymappings to activate various actions within the IDE. Neal Ford describes this in his book The Productive Programmer. Practicing this behavior while pair programming has proven very valuable to increasing our productivity while in the code base. Neal mentions the KeyPromoter plugin for IntelliJ; I've tried this plugin and I haven't become a big fan of it. Much easier if your navigator keeps you honest.
I've done this before with pair programming while working at Identix in 2004 with Hans Loedolff. Hans knew all the IntelliJ keymappings and could type at about 90 words per minute. He was an excellent pair programmer.