
Friday, November 18, 2011

Spring Security Grails plugin 1.2.4 and transparent password encoding in the User domain object

Just spent some significant time today integrating the Spring Security Grails plugin 1.2.4. It should have been a quick integration, but I got stuck on some new functionality in the User domain object that this plugin generates when you use the quick start script. You should now set the password on this domain object using the cleartext string. Look at your User domain object (you may have named it something else, but it's the domain object that represents the user): this domain class now handles the encoding of the password transparently. I had copied some code from another Grails app that encoded the password explicitly. You should no longer do this--it will cause problems when attempting to authenticate. Basically, I was doubly encoding the password, and the plugin definitely does not like that. Here's what the plugin generates for a user domain object in 1.2.4:

class User {

    transient springSecurityService

    String username
    String password
    boolean enabled
    boolean accountExpired
    boolean accountLocked
    boolean passwordExpired

    static constraints = {
        username blank: false, unique: true
        password blank: false
    }

    static mapping = {
        password column: '`password`'
    }

    Set<Role> getAuthorities() {
        UserRole.findAllByUser(this).collect { it.role } as Set
    }

    def beforeInsert() {
        encodePassword()
    }

    def beforeUpdate() {
        if (isDirty('password')) {
            encodePassword()
        }
    }

    protected void encodePassword() {
        password = springSecurityService.encodePassword(password)
    }
}


Notice that the domain object now has a springSecurityService injected into it. There are also GORM callbacks that fire before the state of the domain object is inserted or updated in the database. This is where the encoding now occurs--you should not be doing the encoding explicitly yourself.
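
In practice, that means you just assign the cleartext password and let the callbacks do the work. A minimal sketch of the right and wrong usage (the username and password values are illustrative):

// Correct: assign the cleartext password; beforeInsert()/beforeUpdate()
// encode it exactly once when the object is saved.
def user = new User(username: 'jdoe', password: 'secret', enabled: true)
user.save()

// Wrong (what I was doing): encoding explicitly before the save means the
// GORM callback encodes the already-encoded value, and authentication fails.
// user.password = springSecurityService.encodePassword('secret')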

Lesson learned!

Thursday, October 13, 2011

Grails GSPs consume Underscore.js ERB-style interpolation delimiters

If you are planning to do any work with Grails and Underscore.js (or Backbone.js), and you have view templates maintained in Groovy Server Pages (GSP) with Underscore.js ERB-style interpolation markup, watch out! GSP will consume the Underscore.js interpolation markup (<%= %>) on the server before Underscore ever gets a chance to use it. I assume evaluation markup (<% %>) would also be consumed by GSP. It took me a while to figure this out, so I thought I'd raise it as a concern. Underscore.js does give you a way to change the delimiters using regexes.
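
For example, here's a sketch that switches Underscore to Mustache-style delimiters (the {{ }} pattern is just one choice) so GSP leaves the markup alone:

// Tell Underscore to interpolate {{ ... }} instead of <%= ... %>,
// which GSP would otherwise evaluate on the server.
_.templateSettings = {
  interpolate: /\{\{(.+?)\}\}/g
};

var greet = _.template('Hello, {{ name }}!');
console.log(greet({ name: 'Grails' })); // => "Hello, Grails!"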

Wednesday, October 12, 2011

Rationale for CoffeeScript's 'Fat Arrow' syntax

I gave a CoffeeScript presentation last night at our local Groovy Users of Minnesota meetup. One of the questions that came up had to do with CoffeeScript's fat arrow (=>) syntax, which both defines a new function and binds that function to the current value of this. This is super helpful for callback-based libraries, such as jQuery.

I was building a Backbone.View today (in JavaScript) and had to deal with this binding. I wanted to call fadeOut on the view's el property (which happens to be a jQuery-wrapped element). Without using the jQuery proxy method to bind this appropriately, the statement 'this.el.fadeOut();' will not work: inside the anonymous callback, this no longer refers to the view, because the context for this changes within the anonymous function declaration. The correct JavaScript code is below.


var MyView = Backbone.View.extend({

  initialize: function() {
    this.template = _.template($('#my-template').html(), this.model.toJSON());
    this.render();
  },

  render: function() {
    this.el.html(this.template);
    return this;
  },

  events: {
    "click button#doSomethingButton": "doSomething"
  },

  doSomething: function(e) {
    this.model.set({someValue: $('#someValueTextField').val()});
    var promise = this.model.doSomethingOnModel();
    promise.done($.proxy(function() {
      // $.proxy binds 'this' back to the view inside the callback
      this.el.fadeOut();
    }, this)).fail(function() {
      alert('Failed to check sequence uniqueness.');
    });
  }
});


So how does this relate to CoffeeScript? Well, the fat arrow operator performs the proxying of the 'this' reference for you. If I wrote the above in CoffeeScript, I could write the done callback as:



promise.done => @el.fadeOut()



Chaining the done and the fail callbacks would necessitate the use of parentheses (see the sketch below), but it's still very succinct. Score one for CoffeeScript!
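
A sketch of that chained version, assuming the same promise and failure message as the JavaScript above:

promise.done(=> @el.fadeOut()).fail -> alert 'Failed to check sequence uniqueness.'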

Wednesday, October 05, 2011

Using JDBC URLs containing LDAP URI to connect to Oracle databases within Grails

I'm working on a Grails application that needs to connect to an Oracle database using an LDAP context.  The URL format is something like the following:

jdbc:oracle:thin:@ldap://tns.mycompany.com:389/marketing,cn=OracleContext,dc=com,dc=marketing

I'm not using the Grails DataSource.groovy configuration for this.  I'm managing a separate DataSource in resources.groovy using the Spring bean DSL, with org.springframework.jdbc.datasource.DriverManagerDataSource.  I have not tried this with the standard DataSource.groovy setup.  When I first tried using this, I would get an exception with the following text: "javax.naming.NotContextException Not an instance of DirContext".  There seems to be a bug between Spring LDAP and the SimpleNamingContextBuilder class: SimpleNamingContextBuilder returns a Context implementation, not a DirContext implementation.  You can work around this in Grails by adding the following to the Config.groovy file:

grails.naming.entries = null

Problem solved.  The DataSource now bootstraps correctly and I can go on my merry way.  Kudos to Luke Daley for bringing this to my attention.
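
For reference, a minimal sketch of the resources.groovy wiring described above (the bean name and credentials are placeholders):

import org.springframework.jdbc.datasource.DriverManagerDataSource

beans = {
    // A DataSource managed outside DataSource.groovy; the JDBC URL embeds the
    // LDAP URI that resolves the Oracle service through the OracleContext.
    marketingDataSource(DriverManagerDataSource) {
        driverClassName = 'oracle.jdbc.OracleDriver'
        url = 'jdbc:oracle:thin:@ldap://tns.mycompany.com:389/marketing,cn=OracleContext,dc=com,dc=marketing'
        username = 'marketingUser'     // placeholder
        password = 'marketingPassword' // placeholder
    }
}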

 

Monday, October 03, 2011

Hiking around in Big Woods State Park

Some photos from our family hike through Big Woods State Park near Nerstrand, MN. Beautiful fall colors. Not quite peak in this area, but we'll take it. Awesome weather.

Big Woods State Park

Wednesday, September 07, 2011

CoffeeScript presentations

I gave the first CoffeeScript presentation this past August to the Ruby Users of Minnesota (RUM) group, and it looks like I'll be doing a second take on the presentation for the Groovy Users of Minnesota (GUM) in October.  If you're interested in the presentation and the examples, you can find them here.

Tuesday, September 06, 2011

Tracing and profiling SQL in Grails using log4jdbc

I spent some time today tracing and profiling SQL in one of the Grails applications that I support.  I was looking around for a proxy JDBC driver and happened upon log4jdbc.  It's similar to p6spy, but it seems to be actively developed and supported.  I downloaded the driver, dropped it in my lib directory, changed the logging and datasource configurations a bit in Grails, and I was up and running.  Very handy.  I made copious use of the SQL timings profiling today.  There are many other options for tracing and profiling with this tool.  Here are my changes to Config.groovy for enabling SQL timings on all SQL statements:

Config.groovy change to enable logging of SQL information from log4jdbc:

log4j = {
    info 'org.codehaus.groovy.grails.web.servlet',
         ...
         'grails.app',
         'jdbc.sqltiming'
}

 

DataSource.groovy changes to enable log4jdbc:

development {
    dataSource {
        driverClassName = "net.sf.log4jdbc.DriverSpy"
        url = "jdbc:log4jdbc:mysql://localhost/mydb-DEV?useUnicode=true&characterEncoding=UTF-8&useCursorFetch=true&autoReconnect=true"
    }
}

 

Can't say enough good things about this tool.  It really helped me zero in on some queries that were performing poorly with large data sets.

Wednesday, August 24, 2011

Identifying producer/consumer scenarios in the wild

I've been working on a data import process the past couple of days, trying to solve some memory issues (OOMEs).  Essentially, we have a reader (the producer) and a writer (the consumer).  The writer operates much more slowly than the reader.  The reader is implemented as an iterator, so it only produces as much work as the writer asks for.  As this design evolved, parallel execution of the writer was added in an effort to speed up the overall writing process, with an ExecutorService implementation coordinating the parallelization.  With this executor service in place, the reader can iterate independently of the writer.  The reader now creates lots of tasks that are submitted to the executor service, where they queue up.  The executor service's work queue is unbounded, so it just keeps accepting tasks.  This wouldn't be a problem if the number of tasks were small and the memory footprint of those tasks were low, but that is not our situation.  Thus, we keep blowing out our Java VM process with OOMEs.  We're in the process of fixing this issue, using a bounded concurrent collection to handle the buffering of items between the reader, the executor service, and ultimately the writer.
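
A minimal sketch of the shape of the fix, in Java (the pool size and queue capacity are illustrative):

import java.util.concurrent.*;

class BoundedWriterPool {
    // Bound the hand-off between the reader (producer) and the writer pool
    // (consumers). When the queue fills, CallerRunsPolicy makes the submitting
    // reader thread run the task itself, throttling production instead of
    // letting queued tasks pile up and exhaust the heap.
    static ExecutorService newWriterPool() {
        return new ThreadPoolExecutor(
                4, 4,                                    // fixed writer pool size (illustrative)
                0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>(1000), // bounded task queue (illustrative)
                new ThreadPoolExecutor.CallerRunsPolicy());
    }
}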

 

In support of Groovy's dynamic method dispatch

Came across a design issue today where we really could have used Groovy's dynamic dispatch and multi-method support to invoke the proper method based on the runtime type of an object instance passed as a parameter to the method.  This blog posting by MrHaki gives a great description of how Groovy solves this problem without the need for a double dispatch pattern implementation.  We're writing our code in Java, so we don't get the Groovy goodness of dynamic method dispatch and multi-methods.  We resorted to a double dispatch implementation to solve our problem, rather than using the instanceof operator and a whole bunch of conditionals.
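
A small sketch of the Groovy behavior we were missing (the method names are illustrative):

String describe(Integer n) { 'an Integer' }
String describe(String s)  { 'a String' }
String describe(Object o)  { 'some Object' }

Object value = 42
// Groovy selects the method by the runtime type, so this calls describe(Integer).
// Java binds against the static type and would call describe(Object) here.
assert describe(value) == 'an Integer'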

Sunday, August 07, 2011

Book review: Clean Coder

The Clean Coder: A Code of Conduct for Professional Programmers by Robert C. Martin

My rating: 3 of 5 stars


Decent follow-up to Clean Code. Much of the book has been talked about in other avenues. The professional attitude discussion was decent, but mostly common sense. I liked the book, but have seen the content before. Nothing really new here. Definitely meant more for new developers and would wholeheartedly recommend it for that group of people.



View all my reviews

Book review: The Great Derangement

The Great Derangement: A Terrifying True Story of War, Politics, and Religion at the Twilight of the American Empire by Matt Taibbi

My rating: 3 of 5 stars


Not nearly as good as Griftopia, also by Taibbi. It kind of wanders, and there doesn't seem to be any real thread to this book. Kind of wonder if Matt was still on drugs when he wrote this thing. The religion parts were semi-interesting; the political insights were boring. Can't recommend this. Some good parts, but the whole doesn't make for a good read.



View all my reviews

Monday, July 25, 2011

Controlling transitive dependency resolution in Gradle

Just hit this, so I thought I would write up a quick entry.  I'm trying to get Hibernate and Apache CXF to work together in a Gradle build.  I ran my test suite and saw issues with CGLib classes.  After a little research, it seems there's a conflict between the ASM library that Hibernate's CGLib uses and the one that Apache CXF uses.  The solution is to exclude cglib-2.1_3.jar and use cglib-nodep-2.1_3.jar instead.  To do this in Gradle:

configurations {
   all*.exclude group: 'cglib', module: 'cglib'
   ...
}

dependencies {
   compile group: 'cglib', name: 'cglib-nodep', version: '2.1_3'
   ...
}

Adding these lines to the build.gradle file allows me to remove the cglib-2.1_3.jar dependency and specify the nodep version instead.  Pretty slick.

 

 

Thursday, July 14, 2011

Sharing a 27" Cinema Display using an Atlona AT-MDP21 KVM switch

Quick blog post here.  I hooked up an Atlona AT-MDP21 2x1 Mini DisplayPort KVM switch to my 2009 Mac Pro and my 2010 MacBook Air.  Both run the 27" Cinema Display at its highest native resolution, 2560 x 1440, even though the Atlona documentation states the highest supported resolution is 1920 x 1200.  It was easy to set up, and the Cinema Display's USB ports work between the two computers.  The only downside is the slight 1-2 second pause when switching the KVM from one computer to another.  Not a big deal.  Highly recommended.

Friday, June 24, 2011

Kingpin: How One Hacker Took Over the Billion-Dollar Cybercrime Underground by Kevin Poulsen

My rating: 5 of 5 stars


Awesome non-fiction. The author does a great job keeping the drama and action going; I couldn't put this book down after I started reading it. Interesting to watch Max Butler's (aka Max Vision) downward spiral into cybercrime and black hat hacking. Very eye-opening to read the details of how these talented hackers can cloak their infiltration and siphon information from computers for weeks or months.



View all my reviews

Friday, June 17, 2011

Zero Day by Mark Russinovich

Zero Day by Mark Russinovich

My rating: 4 of 5 stars


Pretty good cyber-thriller. I think this is Mark's first novel. Knowledgeable author, and the plot is very plausible and believable. A fair amount of technical information around malware, rootkits, and computer viruses. A fun, very quick read; I found myself not wanting to put it down, especially after I got to the second half of the book. The last third of the book is like a runaway freight train of suspense. Recommended!



View all my reviews

Sunday, March 27, 2011

Grails JAR dependencies with classifiers

Quick post on specifying Grails dependencies in BuildConfig.groovy.  The recommended way to pull in JAR dependencies in Grails is to use the dependencies DSL maintained in BuildConfig.groovy.  I had a need to bring down a dependency that has a classifier attribute on it.  I didn't really find anything definitive on how to do it, but it seemed like following a convention might do the trick.  Here's how I solved the issue:

repositories {
  grailsPlugins()
  grailsHome()
  grailsCentral()
  mavenCentral()
  ebr() // SpringSource Enterprise Bundle Repository
}
dependencies {
  runtime group:'net.sf.json-lib', name:'json-lib', version:'2.4', classifier:'jdk15'
}

Saturday, March 05, 2011

Griftopia by Matt Taibbi

I recently read this book after seeing that Alan Cooper had read it and stated that it was a terrifying book.  I wondered what would be so terrifying about "Bubble Machines, Vampire Squids, and the Long Con That is Breaking America".  After reading it, I wouldn't characterize it as terrifying so much as infuriating.  The incompetence, greed, self-interest, and gluttony repeatedly portrayed in the book are extremely infuriating to me as a hardworking American citizen who pays taxes.  The book chronicles some of the most audacious power grabs this nation has ever seen, most of them happening during the past two decades.  Taibbi explains why the Tea Party is chasing its own tail, lambastes Alan Greenspan as "a one-in-a-billion asshole that has made America the mess it is today", and details the mortgage, commodities, and wealth fund scams that we, American taxpayers, have had to endure the last couple of years.  The book is written in a no-holds-barred fashion with a fair amount of profanity thrown in to spice up the prose.  It's an entertaining read, but also very thought provoking, and it sheds some interesting light on the current political climate, especially around Obamacare and the health insurance industry.  Very highly recommended.


 

 

Friday, March 04, 2011

Groovy Remote Control plugin via Maven

I had some issues getting the Groovy Remote Control plugin to pull down through Maven today.  The documentation currently in place is not correct.  Here are the fragments of my Maven POM that enabled me to pull the plugin down as a dependency:


 

 

Sunday, February 27, 2011

It's all about the conversations!

More pondering as I contemplate my previous consulting gig.  This time, I'm considering requirements discovery.  Pre-agile, people would write large requirements documents, hoping to capture all the requirements the developers would need to build a system that satisfied the customers.  Unfortunately, this view of discovering all the requirements ahead of time is awfully naive.  Things change.  Requirements go undiscovered.  Requirements that are captured are not thought out as well as we would like, and when it comes time to implement them in software, they don't make sense or are plain wrong.  Thus, large efforts to capture all the requirements upfront carry some amount of wasted effort.  This approach focuses most of the conversations at the beginning of the project.  Conversations after the requirements document has been written and signed off are discouraged; they're viewed as evidence that there are errors in the requirements document.  There doesn't seem to be any room for learning with requirements documents.

Now we have agile methods and the user story.  User stories are not requirements.  They're a planning tool for the agile team.  They are a statement of value that the business would like built.  There may or may not be acceptance criteria associated with the user story, depending on where the user story is in its lifecycle.  The most important part of the user story is the conversations that need to happen to flesh the story out so the feature value can be realized in the product.  This detail seems to escape a lot of people trying to use user stories to build products.  User stories give me the freedom to have conversations with all interested parties regarding the statement of value.  I learn just-in-time about the requirements as I implement the feature.  The business learns about how those requirements they communicated manifest themselves in a product.  And we both have the freedom to adjust, learning about the product along the way.

The above statements about user stories do not preclude using prepared, written documentation to feed the conversation.  At my previous gig, we had a lot of federal government documentation regarding reporting and calculation requirements.  These requirements are set in stone by the government.  Yet there still need to be conversations about how those requirements will be accomplished across releases and sprints.  This is where the conversations come back to the forefront.

User stories allow me to deliver value to the business or customer in bite-size portions.  Value is delivered in small increments, thus I can deliver these more frequently and solicit feedback from the business or customer.  When I have short feedback loops in place, I can nimbly change my course if I need to.

Agile database modeling

My previous project has had me doing a lot of introspection lately.  One of the recurring themes I have been noodling on is evolving a data model using agile development techniques.  The applications being built on this project are based on the .NET platform.  The development group is currently using the Database Project template in Visual Studio 2010, but is looking to adopt a database migration process soon.  The VS 2010 Database Project template does an admirable job of keeping track of all the DDL for your project, but it offers nothing for refactoring your database over time or migrating a production database.  The template seems meant for rebuilding the database from the ground up; there is no concept of database schema changes or migrations.

The database will evolve over time.  Development groups should learn how to build their data model incrementally over many iterations and releases.  Database migration tools can be very helpful in the quest to evolve your database over time.  Tools like Liquibase and Rails migrations are very good at supporting this sort of development behavior, and there are tools on the .NET platform that do this sort of thing as well.

Another thing that has caused quite a bit of headache is the desire of the data group (data architects and DBAs) to get out ahead of the developers and build out much more of the data model than the developers need for the current sprint.  We found that when data modeling efforts are not driven from user stories, they tend not to align with the efforts to complete those stories.  Developers and data people thus end up conversing about data model changes that could have been avoided in the first place by waiting for the right time to initiate them.  My advice for building an operational data store that one or more applications will be developed on top of: don't try to develop the entire data model upfront.  You will inevitably end up changing the data model to support requirements as they change (hopefully you're using stories to guide your development).  Typically, your application developers will have constraints and needs that must be accommodated in the data model.  As the data person, you need to be communicating with them and working from user stories.  Ideally, the data people are part of the project team and are dedicated to the agile process.  We really did not have that on this project, and I think it caused issues and slowed us down a bit.

Refactoring Databases should be required reading for development groups.

 

Great experience with Acceptance Test Driven Development (ATDD) and SpecFlow

I recently left a .NET gig where I was brought in to bring agile and craftsmanship behaviors to the entire project team.  One of the more successful endeavors was the acceptance test-driven development (ATDD) effort.  I was lucky enough to bring Joel Levandoski on board to head up this effort.  Joel is an awesome developer, and on this project he was the lead QA resource, bringing automated testing to the group.  The QA group traditionally had not done automated testing, so we had a bit of learning to do.  Joel did a fabulous job of learning a new tool (SpecFlow for .NET) and training other QA people on its use.

Our QA developers created acceptance tests from the acceptance criteria specified in the user stories.  We spent a lot of time evolving our user stories, but by the end of the first six months we finally had a format that we liked and that facilitated communication between all the participants on the project.  Our acceptance tests were written using SpecFlow, a Gherkin-compliant BDD testing framework for the .NET platform.  SpecFlow is an amazing tool, and its integration with Visual Studio is pretty nice.  That IDE integration is a great selling point: SpecFlow generates a stub unit test driver class for every feature file, compliant with several unit testing frameworks, so specifications can run directly from the various GUI unit test runners.  This is a nice convenience.  We used the MSTest generation strategy baked into SpecFlow.

Initially we used WatiN for testing our ASP.NET MVC app, but the project later grew into an application suite of web and Silverlight applications, so we gravitated to WebAii from Telerik.  This tool worked well for testing both the web and Silverlight environments.

Our specifications were very focused on actions and the outcomes of those actions.  We made a concerted effort to push the details of the steps into the fixture code, keeping the specifications light and to the point, following the technique advice from Concordion.org.  Doing this really made our specifications communicate "what" was being tested, not "how" the test worked.
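
To make that concrete, here's a hypothetical feature in that style (the domain and step wording are invented for illustration):

Feature: Sequence management
  Scenario: Adding a sequence with a unique name
    Given an existing sequence named "Alpha"
    When I add a sequence named "Beta"
    Then the sequence list contains "Alpha" and "Beta"

Notice the steps state what should happen; how the application is driven (WebAii calls, page navigation, control lookups) lives in the step definition fixtures.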

Many thanks to Joel Levandoski and Tim Anderson for driving the ATDD efforts forward and making this effort a reality.  There were numerous times that I would find Joel writing specifications during sprint planning while the rest of the group was reviewing user stories and associated acceptance criteria for the upcoming sprint.  It is incredibly powerful to come out of sprint planning with a good portion of your acceptance tests already executable (all failing as inconclusive).

Wednesday, January 12, 2011

Tweaking your user story mapping efforts

I had a great day leading a business group through a user story mapping session at my current client.  Story mapping is a technique that Jeff Patton has popularized for giving your product backlog some structure.  I've done a couple of these user story mapping sessions with pretty good success.  Today we changed up a couple of things with the session and saw some good results, so I thought it might be worth a blog posting.

First, identify your high-level activities and lay them out across a wall.  We used large Post-It sheets and attached one high-level activity to each sheet.  By doing this, we could move activities and associated tasks around the room, allowing us to rearrange priorities easily.

Next, give the customer/business group five (5) minutes to come up with as many tasks as they can think of for each high-level activity.  Time-boxing the effort keeps you on a regular cadence.  Post the task Post-Its on the large Post-It sheets in no particular order.  Move from one activity to the next, spending the same amount of time on each.  Don't worry about duplicate tasks or the prioritization of the tasks.  You'll come back to these, culling and prioritizing the tasks associated with each activity.

After harvesting the tasks for each activity, go back to each activity and cull out the duplicate tasks and prioritize the tasks according to Jeff Patton's story mapping technique.  We spent 20 minutes on each activity and were able to get a backbone of tasks defined, with other non-core tasks associated with the activity.

Another technique for ensuring that high priority tasks percolate to the walking skeleton row of the story map is to give the business people sticky dots to place on the tasks that they think are core.  We had our business folks put their initials on the sticky dots so we knew who voted up the task.  The dots stand out on the story map and the business really liked using this prioritization technique.  Having the initials on the dots gives you added information regarding who is connected to what stories.

Now that the tasks are prioritized, you can walk the story map and talk through it with your customers/business people.  Walking the story map ensures that the ordering of the activities and tasks makes sense and nothing has been missed.  By tweaking our story mapping session today, we were able to keep everyone in the business group engaged and the conversations flowing.

Tuesday, January 11, 2011

PeepCode has new Rails 3 videos up

If you're interested in getting up and running with Rails 3, I recommend PeepCode's videos.

https://peepcode.com/pages/rails-3-screencasts

Testing as a learning sandbox

I've been spending some quality time with NHibernate 3.0.  Last night I got stuck on an issue with a Criteria query where a collection passed to a constructor was null and the framework was complaining.  After writing a couple of integration tests that exercised various parts of my domain object model, I was able to determine that the collection types I was using for the many side of relationships (ISet<T> and HashSet<T> in this case) were inappropriate for my collection semantics configuration.  I'm continually amazed at how powerful testing, both unit and integration, can be.  My tests gave me a sandbox to try things out and work through a misunderstanding I had with NHibernate.  Pretty cool.