Sunday, June 09, 2013
Connecting DbVisualizer to Heroku PostgreSQL database
Tuesday, April 23, 2013
Jasmine spying on jQuery selectors
Suppose a Backbone.View function contains the following line of code:

var $element = $('div.my-element');

How would you write a Jasmine specification to drive this line of code? Here it is:

it("finds the element using a jQuery selector", function() {
  var spy = spyOn(jQuery.fn, 'find');
  this.view.doSomething();
  expect(spy).toHaveBeenCalledWith('div.my-element');
});

The reason you can do this is that
$(selector, context) becomes $(context).find(selector). By default, selectors perform their searches within the DOM starting at the document root. However, an alternate context can be given for the search by using the optional second parameter to the $() function (from http://api.jquery.com/jQuery/#jQuery1). Had to write this down in a blog so I remember it again some day.
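The mechanics behind this trick can be sketched without jQuery or Jasmine. Everything below (Wrapper, documentRoot, the hand-rolled spyOn) is an illustrative stand-in, not the real jQuery or Jasmine internals:

```javascript
// Sketch: $(selector) delegates to a shared prototype method, so
// replacing that one method observes every selector lookup.
function Wrapper(root) { this.root = root; }
Wrapper.prototype.find = function (selector) {
  return []; // a real implementation would search this.root
};

var documentRoot = {}; // stand-in for the DOM document
function $(selector) {
  return new Wrapper(documentRoot).find(selector);
}

// Hand-rolled spy, mimicking spyOn(jQuery.fn, 'find')
function spyOn(obj, name) {
  var calls = [];
  obj[name] = function () {
    calls.push(Array.prototype.slice.call(arguments));
    return [];
  };
  return { calls: calls };
}

var spy = spyOn(Wrapper.prototype, 'find');
$('div.my-element');
// spy.calls[0][0] is now 'div.my-element'
```

Because every wrapper shares the one prototype method, a single spy intercepts all lookups, which is exactly why spying on jQuery.fn.find works.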
Cheers!
Wednesday, February 20, 2013
Disabling hashListening in jQuery Mobile 1.2
$(document).bind("mobileinit", function () {
  $.mobile.ajaxEnabled = false;
  $.mobile.linkBindingEnabled = false;
  $.mobile.hashListeningEnabled = false;
  $.mobile.pushStateEnabled = false;
});

The trick to getting this to work is loading this bit of code before you load jQuery Mobile. More information at http://jquerymobile.com/demos/1.2.0/docs/api/globalconfig.html.
Friday, February 08, 2013
Disappearing USB ports on Mac Pro (Early 2009)
- Shut down your computer.
- Unplug the computer from power and wait about 15 seconds or so.
- Plug the computer power cord back into a power outlet.
- Restart your computer.
Voila! I have my USB ports back and my system is recognizing both the HiFiMan amp and my iPod.
Thursday, February 07, 2013
Effective use of the Rails has_and_belongs_to_many association
- The naming of the association table is by alphabetical convention. For example, a many-to-many relationship between the Assembly and Part models would result in an association table named assemblies_parts. The migration will look like:

class CreateAssembliesPartsAssociationTable < ActiveRecord::Migration
  def self.up
    create_table :assemblies_parts, :id => false do |t|
      t.integer :assembly_id, :null => false
      t.integer :part_id, :null => false
    end
    add_foreign_key(:assemblies_parts, :assemblies)
    add_foreign_key(:assemblies_parts, :parts)
  end

  def self.down
    drop_table :assemblies_parts
  end
end

Note that I am using the foreigner gem to implement foreign keys in my migrations.
- Use this association type when you want a direct many-to-many mapping of models without any intervening association model.
- The model mapping looks like the following:

class Assembly < ActiveRecord::Base
  has_and_belongs_to_many :parts
end

and

class Part < ActiveRecord::Base
  has_and_belongs_to_many :assemblies
end
Thursday, December 13, 2012
Pinning Rails 3.2 to a specific time zone
config.time_zone = 'Central Time (US & Canada)'
config.active_record.default_timezone = :local

Hope this helps others who discover that the default behavior of UTC is not what is desired. Official documentation on these configuration items can be found at http://guides.rubyonrails.org/configuring.html#rails-general-configuration and http://guides.rubyonrails.org/configuring.html#configuring-active-record.
Thursday, September 13, 2012
Resolving connectivity issues with Verizon 4G LTE JetPack 4510L MiFi and your Apple devices
- Make sure the firmware on the hotspot is up-to-date. Update if not.
- Change the radio protocol to 802.11n (mine was set to 802.11g).
- Change the network key on the hotspot.
- Forget the network on the Apple device.
- Connect to another network, selecting your MiFi hotspot network. When prompted for the network key, enter the new one.
- Done!
Tuesday, July 10, 2012
Adding HTTP headers to AJAX calls in Backbone.js
messagesCollection.fetch({
headers: {
'x-my-custom-header-1':'foobar',
'x-my-rest-api-version':'1.0'
}
});
Wednesday, May 23, 2012
Potential issue when mocking in Groovy
groovy.mock.interceptor.MockFor and grails.test.GrailsMock allow for mock objects in Groovy and Grails, respectively. I've been using both of these classes with good success for a long time. But recently, a small refactoring involving the removal of a parameter from a method signature caused me to re-evaluate mock object usage in a dynamic language like Groovy. The refactoring was done on the public contract of a service: the method name stayed the same, but a parameter was removed from the method signature. The contract unit tests for this service were changed to drive the refactoring ("test-driving the refactoring"). However, the collaboration unit tests, where this service acts as a dependency, were not changed, and they continued to pass. I tried cleaning the old .class files and recompiling the Groovy tests, but to no avail: the unit tests which mocked this service continued to pass, even though the mocked method signature no longer existed on the real service implementation. After perusing the javadoc documentation, there does not seem to be any functionality in either of these classes to verify that the type being mocked actually has a method matching the signature being mocked. Therefore, these classes can mock methods which are non-existent on the real dependency implementations. Fixing the issue involved finding the stale method signature using a text search.
The whole episode was a bit unsettling; we have a lot of unit tests and we may be testing scenarios which are not representative of the real world. In my case, the real world scenario manifested itself as a runtime exception stating that the method was missing. In the case of Java and Mockito, the method signature change would result in a compilation error where the changed method signature was mocked in unit tests. My takeaway was to be more diligent with my refactoring and really ensure that I have changed all places in the code where a particular method is referenced.
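The hazard isn't unique to Groovy; any dynamic language lets a hand-built mock stub a method the real type no longer has. A minimal JavaScript sketch of the failure mode (all names hypothetical):

```javascript
// Real service after the refactoring: the method was renamed to process().
var realService = {
  process: function (item) { return 'processed:' + item; }
};

// The collaboration test's mock still stubs the old name. JavaScript,
// like Groovy, happily lets you stub a method the real type lacks.
var mockService = {
  processItem: function (item) { return 'stubbed:' + item; }
};

function client(service) {
  return service.processItem('a'); // stale call into the old contract
}

var passesWithMock = client(mockService); // 'stubbed:a' -- the test is green
var failsWithReal;
try {
  client(realService); // blows up at runtime, just like the Groovy case
} catch (e) {
  failsWithReal = e instanceof TypeError;
}
```

The mocked test stays green while production code throws, which is exactly the gap between collaboration tests and reality described above.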
Friday, May 04, 2012
Allocating business logic in Grails
The dilemma
I've been on a couple of larger Grails projects in the past year and a half and I'm witnessing a disturbing phenomenon. The allocation of business logic responsibility across the abstractions that Grails provides today is causing considerable pain. Grails provides controllers, services and domain objects where business logic can reside. I'll contend in this blog entry that these abstraction categories work well for small- to medium-sized Grails projects, but things quickly start to unravel once your application gets to be large.
Controllers
I see a lot of business logic code in controllers these days. Controllers shouldn't contain any business logic whatsoever. The controller's responsibility is to handle the web request and response. Anything else should be delegated to a collaborator. Don't do it!
Domain objects
The next logical place to put business logic is in the domain class. Allocating responsibility here works to a point, but you will quickly encounter issues when you need business logic that resides in services. I'm not a fan of injecting Grails services into domain classes. This situation quickly spirals out of control and makes unit testing very difficult to perform. For simple per-domain business logic, feel free to allocate to the domain class. Anything more, and it belongs in a service (or something else, which we'll discuss in a bit).
Services
So most business logic seems to end up in Grails services these days. That's what the creators of Grails intended, and I have no qualms about that. The beef I have with services is that a method is a crappy abstraction for representing business logic in the system. On larger Grails projects, the service approach seems to break down, as services take on more and more responsibility, making them more difficult to test. I'm also witnessing a lot of code duplication; in its current incarnation, Grails draws no delineation between public API services and the private, lower-level services from which the public services could compose larger sets of business logic. What we end up with is large, hard-to-test service methods that collaborate with too many dependencies and do too much.
The desire
I want an abstraction in Grails that promotes proper factoring of business logic into unit-testable abstractions. These abstractions can be composed into larger abstractions to provide the necessary logic to fulfill the requirements of the system. The chain of responsibility design pattern may offer some value here. Individual commands that have a singular responsibility can be created, unit tested, and finally composed into "chains" of commands that provide the necessary functionality of the system. The command chains can be integration tested to ensure that the composition of individual commands provides the functionality required by the customer/business. When new functionality is needed, a new command chain is created, reusing existing commands where appropriate and creating new commands where functionality does not exist. Spring Batch has a similar concept that is core to its design.
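To make the idea concrete, here's a minimal sketch of what such command composition could look like. This is illustrative JavaScript, not a Grails API; all the names are made up:

```javascript
// Compose single-responsibility commands into a chain: each command
// takes the context, does one thing, and passes the context along.
function chain(commands) {
  return function (context) {
    return commands.reduce(function (ctx, command) {
      return command(ctx);
    }, context);
  };
}

// Individually unit-testable commands
function validateOrder(ctx) {
  if (!ctx.items.length) throw new Error('empty order');
  return ctx;
}
function priceOrder(ctx) {
  ctx.total = ctx.items.reduce(function (sum, i) { return sum + i.price; }, 0);
  return ctx;
}
function applyDiscount(ctx) {
  if (ctx.total > 100) ctx.total = ctx.total * 0.9;
  return ctx;
}

// A new use case is just a new chain, reusing existing commands
var placeOrder = chain([validateOrder, priceOrder, applyDiscount]);
var result = placeOrder({ items: [{ price: 60 }, { price: 60 }] });
// result.total === 108
```

Each command is trivially unit-testable in isolation, and the chain itself is the thing you integration-test, which is the factoring argued for above.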
Conclusion
I hope to blog a bit more around this in the coming weeks. I really like Grails and would love to see its usage increase in the coming months and years. I think it has some really cool features that allow you to get up and running very quickly. The plugin system alone is a huge advantage to using Grails, because features like a Chain of Responsibility executor can easily be added to the core Grails system.
Monday, April 30, 2012
Using Apache Commons Net FTPClient on AWS EC2 instances
If you're using Apache Commons Net FTPClient from AWS EC2 instances, you will likely need to call the enterLocalPassiveMode() method on the FTPClient instance, since active-mode data connections tend to be blocked by the NAT/firewall setup in front of EC2 instances. This had me confused for a while, which isn't difficult to do and seems to happen often.
Saturday, January 28, 2012
Understanding the power of isolated object testing
I spent some time last night watching J.B. Rainsberger's excellent InfoQ presentation entitled "Integration Tests are a Scam". I've been recently contemplating why I favor unit testing (or as J.B. calls it, isolated object testing) over integration testing and I found his presentation particularly relevant. I highly recommend watching it. He also has a series of blog entries that support the presentation (Parts 1, 2, and 3). I also recommend reading those. They're truly gems.
Testing to prove basic correctness
I find many developers using integrated tests as a way to prove the basic correctness of the class or system under test. J.B. writes that "While integration tests offer value in other contexts, too many programmers use them to show basic correctness, and when they do that they waste a tremendous amount of time and effort." Integrated testing can be used within a project (I'm personally fond of acceptance testing), but integrated testing should not be used to prove basic correctness of your code. Focused, isolated object tests (aka unit tests using test doubles) should be used for this endeavor. If you discover behavior that a collaborator demonstrates and that you have not accounted for in your isolated object tests, you should mimic this behavior in your test-doubled collaborator contracts. You want to cover as much of your code as possible with isolated object tests. We'll talk more about contract tests later.
J.B. mentions using an integrated test to learn about how a collaborator might support its contract, but that this integrated test is not included in the basic correctness test suite. I'm wondering if there isn't some other test suite here that we could use to keep integrated tests that support our learning about the runtime and external dependencies. This test suite would be run periodically, but would not be part of the continuous integration process of building a software system. Need to noodle on this more.
Isolated object testing
J.B. states in his aforementioned presentation that he does not use the term "unit testing" and instead favors a more focused term of "isolated object testing". He makes a point to call out the isolated word; these tests isolate the class under test by using test doubles to stub or mock the collaborators of the class under test. These tests focus on a single object and a singular behavior. Any collaborations are realized using test doubles.
I tend to agree that "unit testing" is a weak phrase for describing the type of testing that we use to drive the design of a single class and a single method. These tests focus on the responsibility of a single method of a single class. These tests also help you focus on understanding the contracts of collaboration between the class under test and its collaborators. This is really where the design part of the equation comes in. Classes that are hard to test are screaming out that the design is wrong. The tests are saying you have too much responsibility in the class under test or there are too many collaborators. Using your fast isolated object tests, you can engage in a reactive design activity: moving responsibilities out of the class under test, creating new collaborators, and making other design changes.
The collaboration contract
J.B. rightfully makes a big deal about contracts (aka interfaces) that explicitly specify collaboration protocols between objects. He mentions that in Domain-Driven Design, popularized by the Eric Evans book of the same name, three concepts can be used to express a domain-driven model: Values, Entities, and Services. All Services should have contracts, and those contracts manifest themselves as interfaces. By specifying interfaces, we explicitly declare the protocol supported by each interface implementation. Constraining ourselves to interface types when specifying collaborations results in more loosely coupled systems, which is considered a "Good Thing". When engaging in isolated object testing, J.B. details the concept of collaboration tests and the closely associated contract tests. This concept of collaboration tests and contract tests is something new to me and is a big reason the presentation was so valuable in my eyes.
Collaboration tests and contract tests
After watching the presentation, I seem to be doing a pretty good job of using collaboration tests, but I'm not making the association to contract tests. Collaboration tests prove that the client interacts with its collaborators correctly; the client sends the correct messages and message arguments to the collaborator and appropriately handles all outputs from the collaborator. This is traditionally what I have used mock objects for and that seems to be what J.B. is arguing one should do for collaboration testing.
Contract testing, on the other hand, deals with testing that an interface implementation accurately respects the interface it is implementing. Does the implementor support the contract it declares to support? I haven't typically written these types of tests, but I'm going to start. Interestingly, I don't see a lot of this sort of testing in the wild. The part I really dig about contract tests as Rainsberger explains them is that they can be reused across interface implementations. He uses the List interface and two of its implementations, ArrayList and LinkedList, and details how to use implementation inheritance in the tests to DRY up your contract tests.
It really gets interesting when he declares that for every collaboration test that implies a test double behavior, there had better be a corresponding contract test that demonstrates that the interface implementation(s) actually support that behavior. The same goes for values returned: if a test double returns a value in a collaboration test, there should be a contract test that demonstrates that the real implementation(s) actually return that value. This is where I think selective, integrated learning tests can help you discover how your classes on the edges of a system may act when integrated with real external dependencies. But again, those integrated tests are not providing basic code correctness semantics. They're in your project to help you learn, but are not part of the isolated object test suite. Don't lump them in with your isolated tests and don't run them as part of your code/update repos/run tests/commit cadence. This learning test suite should be run periodically during the day, but not as part of the CI build process.
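A sketch of what a reusable contract test can look like, transliterated to JavaScript with a made-up Stack contract standing in for Rainsberger's List example (plain functions here instead of the implementation inheritance he uses):

```javascript
// One contract test, written once against the contract, reused
// verbatim against every implementation of that contract.
function contractTestForStack(makeStack) {
  var s = makeStack();
  s.push(1);
  s.push(2);
  if (s.pop() !== 2) throw new Error('pop must return last pushed value');
  if (s.pop() !== 1) throw new Error('pop must return remaining value');
}

// Implementation 1: array-backed
function ArrayStack() {
  var items = [];
  return {
    push: function (x) { items.push(x); },
    pop: function () { return items.pop(); }
  };
}

// Implementation 2: linked-list-backed
function LinkedStack() {
  var head = null;
  return {
    push: function (x) { head = { value: x, next: head }; },
    pop: function () { var v = head.value; head = head.next; return v; }
  };
}

// The same contract test runs against both implementations.
contractTestForStack(ArrayStack);
contractTestForStack(LinkedStack);
```

Any collaboration test that stubs pop() to return a value now has a matching contract test proving the real implementations actually behave that way.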
The death spiral of integrated testing for proving basic code correctness
Both Rainsberger's tutorial presentation and his blog postings go into great detail about the fallacy of using integrated testing to prove code correctness. I won't rehash what he has to say about it. The conclusions I draw from his material are:
- Focus on isolated object tests and isolation techniques and learn how to do collaboration and contract testing.
- If you need to learn, by all means, write an integrated test. If you need to reproduce a defect, initially write it as an integrated test. However, in all cases, take what you have learned and replicate that in your isolated object test suite. Replicate the behavior you discovered in the integration test(s) with test doubles. This will ensure that your isolated object tests stay true to the behavior of the integrated system.
- Don't get sucked into the downward death spiral of using integrated testing to guide your test-driven design efforts. It will only cause you pain.
J.B.'s series on Integrated Tests are a Scam can be found here.
Friday, November 18, 2011
Spring Security Grails plugin 1.2.4 and transparent password encoding in the User domain object
class User {

    transient springSecurityService

    String username
    String password
    boolean enabled
    boolean accountExpired
    boolean accountLocked
    boolean passwordExpired

    static constraints = {
        username blank: false, unique: true
        password blank: false
    }

    static mapping = {
        password column: '`password`'
    }

    Set<Role> getAuthorities() {
        UserRole.findAllByUser(this).collect { it.role } as Set
    }

    def beforeInsert() {
        encodePassword()
    }

    def beforeUpdate() {
        if (isDirty('password')) {
            encodePassword()
        }
    }

    protected void encodePassword() {
        password = springSecurityService.encodePassword(password)
    }
}
Notice that the domain object now has a springSecurityService injected into it. There are also GORM callbacks that will be called before the state of the domain object is saved and updated in the database. This is where the encoding now occurs--you should not be doing the encoding explicitly yourself.
Lesson learned!
Thursday, October 13, 2011
Grails GSPs consume Underscore.js ERB-style interpolation delimiters
GSP consumes the ERB-style interpolation delimiters (<%= %>) before Underscore ever gets a chance to use them. I assume evaluations would also be consumed by GSP. Took me a while to figure this out, so I thought I'd raise it up as a concern. Underscore.js does give you a way to change the delimiters using regexes.
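In Underscore, changing delimiters means assigning new regexes to _.templateSettings (its interpolate, evaluate, and escape fields). The idea can be sketched with a toy template function; the template helper below is illustrative, not Underscore itself:

```javascript
// Sketch of Underscore-style interpolation with a configurable
// delimiter regex. Switching from ERB-style <%= %> to {{ }} keeps
// GSP from consuming the delimiters before Underscore sees them.
function template(text, interpolate) {
  return function (data) {
    return text.replace(interpolate, function (match, expr) {
      return String(data[expr.trim()]);
    });
  };
}

var mustacheStyle = /\{\{(.+?)\}\}/g; // safe inside a GSP
var greet = template('Hello, {{ name }}!', mustacheStyle);
var out = greet({ name: 'Grails' });
// out === 'Hello, Grails!'
```

With real Underscore, the equivalent one-liner would be setting _.templateSettings.interpolate to that same mustache-style regex.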
Wednesday, October 12, 2011
Rationale for CoffeeScript's 'Fat Arrow' syntax
I was building a Backbone.View today (in JavaScript) and had to deal with 'this' binding. In my JavaScript, I wanted to call fadeOut on the view's el property (which happens to be a jQuery-wrapped element). Without using the jQuery proxy method to bind 'this' appropriately, my statement of 'this.el.fadeOut();' will not work. The 'this' reference is no longer the view at the point I'm using it in the anonymous function; the context for 'this' has changed within the anonymous function declaration. The correct JavaScript code is below.
var MyView = Backbone.View.extend({
initialize: function() {
this.template = _.template($('#my-template').html(), this.model.toJSON());
this.render();
},
render: function() {
this.el.html(this.template);
return this;
},
events: {
    "click button#doSomethingButton": "doSomething"
},
doSomething: function(e) {
this.model.set({someValue: $('#someValueTextField').val()});
var promise = this.model.doSomethingOnModel();
promise.done($.proxy(function() {
this.el.fadeOut();
}, this)).fail(function() {
alert('Failed to check sequence uniqueness.');
});
}
});
So how does this relate to CoffeeScript? Well, the fat arrow operator is performing the proxying of the 'this' reference for you. If I wrote the above in CoffeeScript, I could write the done callback as:
…
promise.done => @el.fadeOut()
…
Chaining the done and the fail callbacks would necessitate the use of parentheses, but still very succinct. Score one for CoffeeScript!
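In plain JavaScript the same effect can also be had with Function.prototype.bind, which is essentially what $.proxy and the fat arrow are doing for you. A small sketch with made-up names:

```javascript
// bind() fixes the receiver of a function regardless of how the
// callback is later invoked -- the same job $.proxy(fn, view) and
// CoffeeScript's => perform.
var view = {
  name: 'MyView',
  fadeOutEl: function () { return this.name + ' faded'; }
};

// Simulate a callback being invoked later with the wrong receiver,
// as happens inside promise/done handlers.
function invokeLater(callback) {
  return callback.call({ name: 'someOtherContext' });
}

var unbound = function () { return this.fadeOutEl(); };
var bound = unbound.bind(view);

var boundResult = invokeLater(bound); // 'MyView faded'
var unboundFails;
try {
  invokeLater(unbound); // this.fadeOutEl is not a function
} catch (e) {
  unboundFails = true;
}
```

The unbound callback loses the view as its receiver and throws, while the bound one keeps it, which is precisely the bug the fat arrow syntax was designed to prevent.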
Wednesday, October 05, 2011
Using JDBC URLs containing LDAP URI to connect to Oracle databases within Grails
I'm working on a Grails application that needs to connect to an Oracle database using an LDAP context. The URL format is something like the following:
jdbc:oracle:thin:@ldap://tns.mycompany.com:389/marketing,cn=OracleContext,dc=com,dc=marketing
I'm also not using the Grails DataSource.groovy configuration for this. I'm managing a separate DataSource in resources.groovy using the Spring DSL, with org.springframework.jdbc.datasource.DriverManagerDataSource. I have not tried this with the standard DataSource.groovy stuff. When I first tried this, I would get an exception with the following text: "javax.naming.NotContextException Not an instance of DirContext". There seems to be a bug involving Spring LDAP and the SimpleNamingContextBuilder class: the SimpleNamingContextBuilder returns a Context implementation, not a DirContext implementation. You can work around this in Grails by adding the following to the Config.groovy file:
grails.naming.entries = null
Problem solved. The DataSource now bootstraps correctly and I can go on my merry way. Kudos to Luke Daley for bringing this to my attention.
Monday, October 03, 2011
Hiking around in Big Woods State Parks
[Photo: Big Woods State Park]
Wednesday, September 07, 2011
CoffeeScript presentations
I did the first CoffeeScript presentation this past August to the Ruby Users of Minnesota (RUM) group and it looks like I'll be doing a second take on the presentation to the Groovy Users of Minnesota (GUM) here in October. If you're interested in the presentation and the examples, you can find them here.
Tuesday, September 06, 2011
Tracing and profiling SQL in Grails using log4jdbc
I spent some time today tracing and profiling SQL in one of the Grails applications that I support. I was looking around for a proxy JDBC driver and happened upon log4jdbc. It's similar to p6spy, but it seems to be actively developed and supported. I downloaded the driver, dropped it in my lib directory, changed the logging and datasource configurations a bit in Grails, and I was up and running. Very handy. I made copious use of the SQL timings profiling today; there are many other options for tracing and profiling with this tool. Here are my changes for enabling SQL timings on all SQL statements:
Config.groovy change to enable logging of SQL information from log4jdbc:
log4j = {
info 'org.codehaus.groovy.grails.web.servlet',
...
'grails.app',
'jdbc.sqltiming'
}
DataSource.groovy changes to enable log4jdbc:
development {
dataSource {
driverClassName = "net.sf.log4jdbc.DriverSpy"
url = "jdbc:log4jdbc:mysql://localhost/mydb-DEV?useUnicode=true&characterEncoding=UTF-8&useCursorFetch=true&autoReconnect=true"
}
}
Can't say enough good things about this tool. Really helped me zero in on some queries that were performing poorly with large data sets.
Wednesday, August 24, 2011
Identifying producer/consumer scenarios in the wild
I've been working on a data import process the past couple of days, trying to solve some memory issues (OOMEs). Essentially, we have a reader (the producer) and a writer (the consumer). The writer part of this scenario operates much slower than the reader part. The reader is implemented as an iterator, so it only produces enough work for the writer to consume. As this design evolved over time, parallel execution of the writer was added in an effort to speed up the overall writing process. The coordination of the parallelization is an ExecutorService implementation. With this executor service in place, the iteration of the reader can operate independently of the writer. Thus, the producer now creates lots of tasks that are submitted to the executor service, where they queue up. The executor service's queue is not bounded, so it just keeps accepting tasks. This wouldn't be a problem if the number of tasks were small and the memory footprint of those tasks was low, but that is not our situation. Thus, we keep blowing out our Java VM process with OOMEs. We're in the process of fixing this issue, using a bounded concurrent collection to handle the buffering of items between the reader and the executor service and, ultimately, the writer.
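The shape of the fix can be sketched with a cooperative simulation of a bounded buffer between reader and writer. This is illustrative JavaScript, not our actual Java/ExecutorService code; the names and the capacity of 2 are made up:

```javascript
// A bounded buffer: offer() refuses work when full, forcing the
// producer to yield until the consumer drains something.
function BoundedBuffer(capacity) {
  var items = [];
  return {
    offer: function (item) {
      if (items.length >= capacity) return false; // producer must wait
      items.push(item);
      return true;
    },
    poll: function () { return items.shift(); },
    size: function () { return items.length; }
  };
}

var buffer = BoundedBuffer(2); // at most 2 records in memory at once
var written = [];
var nextRecord = 0;
var maxBuffered = 0;

// Cooperative loop: the fast reader only reads ahead while there is
// room, and the slow writer drains one record per turn.
while (written.length < 5) {
  while (nextRecord < 5 && buffer.offer(nextRecord)) {
    nextRecord++; // reader (producer)
  }
  maxBuffered = Math.max(maxBuffered, buffer.size());
  written.push(buffer.poll()); // writer (consumer)
}
// written is [0, 1, 2, 3, 4]; maxBuffered never exceeded 2
```

The in-flight work is capped at the buffer's capacity no matter how fast the reader is, which is the property the unbounded executor queue was missing.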