
RSpec, Mongo and Database Cleaner

This is kinda obvious once you see it… but I figured it might help someone, someday.

I wanted to create a document just once, so I put the creation in a before :all block.

Yet in the “it should” block, the document was gone and the spec failed.

If I changed to a before :each block, the spec passed.
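
Here is a minimal sketch of the failing shape (the Widget model is made up; MongoMapper assumed):

describe "Widget" do
  before :all do
    # created once, up front
    @widget = Widget.create(:name => 'one-time')
  end

  it "still sees the document" do
    # fails if the cleaner truncates between examples
    Widget.first.should_not be_nil
  end
end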

So I changed the spec_helper from cleaning before each example to using a truncation strategy, and moved the actual clean into the before :suite block (so that data didn’t build up in Mongo):

spec/spec_helper.rb
config.before(:suite) do
  # Clean once, up front, so data doesn't pile up in Mongo
  #DatabaseCleaner[:mongo_mapper].strategy = :truncation
  DatabaseCleaner.clean
end

config.before(:each) do
  # Just set the strategy; with no per-example clean, before :all data survives
  DatabaseCleaner[:mongo_mapper].strategy = :truncation
  #DatabaseCleaner.clean
end

And now things are as I expected them to be when using a before :all block…

I can repeatedly run the specs, and they pass.

Lets in RSpecs Can Be Blech

Maybe it is just me, but I had suspected some weirdness here and there from using the fancy “let(:var_sym)” syntax. The trusty RSpec book says:

The first call to let( ) defines a memoized output( ) method that returns
a double object. Memoized means that the first time the method is
invoked, the return value is cached and that same value is returned
every subsequent time the method is invoked within the same scope.

So, it would seem that let() is a great way to define an object once, and use it from there onward.

However, in this particular instance, running an “expensive” operation in the let block took 17 seconds instead of 7 seconds to run the specs! I could see my specs ticking along, very slowly, one at a time. What the heck? I asked myself. Is there something that says “turn caching off (or on)”?

Fancy Schmancy! To save ~10 seconds, I’ll forgo the niceties of let() and revert to using the @var_name syntax.

Given the following RSpec code:

 context 'instance methods' do
    let(:sample_xml_file) {File.expand_path('../../data/sample_v_1_13.xml', __FILE__)}
    let(:p) {
      xml_str = File.read(sample_xml_file)
      Nemsis::Parser.new(xml_str)
    }
    let(:r) {Nemsis::Renderer::HTML.new(p)}

    describe '#render_html' do
      context "plain HTML" do
        let(:html) { r.render(false) }

        it 'returns not nil' do
          html.should_not be_nil
        end

        it 'has title section' do
          html.should =~ ...
        end

        context 'specialty patient section' do
          it('has specialty patient') { html.should =~ ... }
          it('has specialty patient trauma criteria') { html.should =~ ... }
          it('has specialty patient airway') { html.should =~ ... }
        end

        it "should not have a STYLE section" do
          html.should_not =~ ...
        end

        it "write to html file" do
          write_html_file(sample_xml_file, "simple", html)
        end
      end

      context "fancy HTML" do
        let(:html) { r.render(true) }

        it "should have a STYLE section" do
          html.should =~ ...
        end

        it "write to html file" do
          write_html_file(sample_xml_file, "fancy", html)
        end
      end
    end
    ...

Contrast the above with the more traditional approach that uses a before block and @variables:

 context 'instance methods' do

    before :all do
      @sample_xml_file = File.expand_path('../../data/sample_v_1_13.xml', __FILE__)
      xml_str = File.read(@sample_xml_file)
      p = Nemsis::Parser.new(xml_str)
      r = Nemsis::Renderer::HTML.new(p)
      @html = r.render(false)
    end

    describe '#render_html' do
      context "plain HTML" do

        it 'returns not nil' do
          @html.should_not be_nil
        end

I did a bit more formal timing, which revealed the truth:

  • let() — 10.7 seconds
  • before block — 2.4 seconds

Am I missing something? (In hindsight, the key is that last phrase: “within the same scope.” A let() value is memoized only within a single example, so each it block re-ran the expensive parse and render, while the before :all block ran just once for the whole context.)
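
A tiny sketch makes the scoping difference visible (the sleep stands in for the expensive parse/render):

describe 'let vs before :all' do
  let(:via_let) { sleep 1; :done }  # evaluated fresh in EACH example

  before :all do
    sleep 1                         # evaluated once for the whole context
    @via_before = :done
  end

  it('example one') { via_let.should == :done } # pays the sleep
  it('example two') { via_let.should == :done } # pays it again
end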

Manual Cucumber Tests?

There was some discussion over on the Cucumber list about manual testing.

Cucumber is great at BDD, but that doesn’t mean it is the only test technique we should use (preaching to the choir).

I have learned it is critical to understand where automated tests shine, and where human testing is critical — and to not confuse the two.

As far as cuking manual tests, keeping the tests in one place seems like a good advantage (as described in Tim Walker’s cucum-bumbler wiki <g>).

The Cucumber “ask” method looks interesting. Maybe your testers could use the output to the console as-is, or you could (re-)write your own method to store the results somewhere else or output them differently.

From the cucumber code (cucumber-1.1.4/lib/cucumber/runtime/user_interface.rb):

# Suspends execution and prompts +question+ to the console (STDOUT).
# An operator (manual tester) can then enter a line of text and hit
# <ENTER>. The entered text is returned, and both +question+ and
# the result is added to the output using #puts.
# ...
def ask(question, timeout_seconds)
...

Sample Feature:

    ...
Scenario: View Users Listing
  Given I login as "Admin"
  When I view the list of users
  Then I should check the aesthetics

Step definition:

Then /^I should check the aesthetics$/ do
  # 7.chr is the ASCII bell -- it beeps the terminal to get the tester's attention
  ask("#{7.chr}Does the UI have that awesome look? [Yes/No]", 10).chomp.should =~ /yes/i
end
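
If you wanted to store the answers somewhere rather than just assert on them, a small wrapper might do it (a sketch; the module name and log file are invented):

# features/support/manual_ask.rb (hypothetical)
require 'time'

module ManualAsk
  # Wraps Cucumber's ask(), appending each question/answer pair to a log file
  def ask_and_record(question, timeout_seconds = 60)
    answer = ask(question, timeout_seconds).chomp
    File.open('manual_answers.log', 'a') do |f|
      f.puts "#{Time.now.utc.iso8601} | #{question} | #{answer}"
    end
    answer
  end
end

World(ManualAsk)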

The output to the console looks something like this (the question is printed, then the tester types a reply):
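
    Does the UI have that awesome look? [Yes/No]
    yes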

Thanks for the pointer, Matt!

NOTE: it doesn’t play well with running guard/spork. The question pops up over in the guard terminal 🙁

Of course, if you are running a suite of manual tests, you probably don’t need to worry about the Rails stack being sluggish :-p

    Spork server for RSpec, Cucumber successfully started
    Running tests with args ["features/user.feature", "--tags", "@wip:3", "--wip", "--no-profile"]...
    Does the UI have that awesome look? [Yes/No]
    Yes
    ERROR: Unknown command Yes
    Done.

Supporting SSL in Rails 2.3.4

Somehow, moving a perfectly happy production app to Rackspace and nginx caused URLs to no longer sport the SSL ‘s’ in “https” — bummer.

link_to’s were fine… but a custom “tab_to” (responsible for highlighting the current tab) was not so happy (even though it eventually used a link_to).

Turns out the fix lies with the url_for method, as I learned from here.

I also blended it with some ideas I found here.

# config/environments/production.rb
# Override the default http protocol for URLs
ROUTES_PROTOCOL = "https"
...
# config/environment.rb
# Specifies gem version of Rails to use when vendor/rails is not present
RAILS_GEM_VERSION = '2.3.5' unless defined? RAILS_GEM_VERSION
# Use git tags for app version
APP_VERSION = `git describe --always --abbrev=0`.chomp! unless defined? APP_VERSION
# The default http protocol for URLs
ROUTES_PROTOCOL = 'http'
...
# application_controller.rb
  # http://lucastej.blogspot.com/2008/01/ruby-on-rails-how-to-set-urlfor.html
  def default_url_options(options = nil)
     if ROUTES_PROTOCOL == 'https'
       { :only_path => false, :protocol => 'https' }
     else
       { :only_path => false, :protocol => 'http' }
     end
  end
  helper_method :url_for
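
With that in place, anything that funnels through url_for should emit absolute https URLs (a quick sanity check; the controller and host here are invented):

# e.g., from any view or controller on the staging box
url_for(:controller => 'users', :action => 'index')
# => "https://staging.example.com/users"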

Now the real kicker… Since I do not have SSL set up locally, I had to do some dev on our staging server to tweak the code and test that “https” showed up. So I turned off class caching: config.cache_classes = false.

However, when I cap deployed with it set back to “true”, https did not show up. @#$$##@!!!!%%$% AARGH.

I suspect it might have something to do with not being able to open up a cached class and redefine it? I don’t know… I am going to have to go explore this oddity next…

Anatomy of a MongoDB Profiling Session

This particular application has been collecting data for months now, but (by design) hasn’t really had any users. At 33GB of data, pulling up a list of messages received was taking f-o-r-e-v-e-r!

So I decided to document how to go about diagnosing and fixing a running production system… Hope it helps.

Log into the mongo console and turn on profiling (the ‘1’) to log slow operations; the second argument is the slowness threshold in milliseconds. I entered 10,000 ms (anything slower than 10 seconds, which really stinks!). You should adjust it to suit your app’s definition of “slow” — maybe 500 ms:

> db.setProfilingLevel(1,10000)
{ "was" : 0, "slowms" : 100, "ok" : 1 }

Next, I went back to the webapp and executed the page request that exhibits the slow response…

Once the page returned, I went back to the console to look for any slow operations the profiler logged:

> db.system.profile.find()
{
  "ts" : ISODate("2012-02-18T15:34:02.967Z"), 
  "op" : "command", 
  "command" : { "count" : "messages", 
    "query" : { "_type" : "HL7Message", "recv_app" : "CAREGIVER" }, 
      "fields" : null }, 
  "ntoreturn" : 1, 
  "responseLength" : 48, 
  "millis" : 119051, 
  "client" : "192.168.100.67", 
  "user" : "" 
}
{
  "ts" : ISODate("2012-02-18T15:35:51.704Z"),
  "op" : "query", 
  "query" : { "_type" : "HL7Message", "recv_app" : "CAREGIVER" },
  "ntoreturn" : 25,
  "ntoskip" : 791025,
  "nscanned" : 791051,
  "nreturned" : 25,
  "responseLength" : 49956,
  "millis" : 108720,
  "client" : "192.168.100.67",
  "user" : "" 
}

You can see there was a count query and a query for the data itself (we are using pagination). Sure enough, look here:

  • “ntoreturn” : 25,
  • “nscanned” : 791051,

Wow, that’s nasty… to return 25 records, we scanned 791,051! Gulp. Looks like a full table scan. Never a good thing (unless you have very small amounts of data).

Let’s see what sorts of indexes exist for the messages collection:

db.system.indexes.find( { ns: "production-alerts.messages" } );
{ "name" : "_id_", "key" : { "_id" : 1 }, "v" : 0 }
{ "v" : 1, "key" : { "created_at" : -1 }, "name" : "created_at_-1" }
{ "v" : 1, "key" : { "_type" : 1 }, "name" : "_type_1" }
{ "v" : 1, "key" : { "recv_app" : 1 }, "name" : "recv_app_1" }
{ "v" : 1, "key" : { "created_at" : -1, "recv_app" : 1 }, "name" : "created_at_-1_recv_app_1" }
{ "v" : 1, "key" : { "message_type" : 1 }, "name" : "message_type_1" }
{ "v" : 1, "key" : { "trigger_event" : 1 }, "name" : "trigger_event_1" }

Well, as expected, there is no index covering the multiple keys that we are searching on. So let’s add a compound index to match the query used by the controller!

db.messages.ensureIndex({_type:1, recv_app:1});

Now the app FLIES!! We dropped from 100+ seconds to 1.5 seconds (look at the “millis”) w00t!

> db.messages.find({ _type : "HL7Message", recv_app : "CAREGIVER"}).explain();
{
	"cursor" : "BtreeCursor _type_1_recv_app_1",
	"nscanned" : 791153,
	"nscannedObjects" : 791153,
	"n" : 791153,
	"millis" : 1546,
	"nYields" : 0,
	"nChunkSkips" : 0,
	"isMultiKey" : false,
	"indexOnly" : false,
	"indexBounds" : {
		"_type" : [
			[
				"HL7Message",
				"HL7Message"
			]
		],
		"recv_app" : [
			[
				"CAREGIVER",
				"CAREGIVER"
			]
		]
	}
}

To prevent this sort of thing, you can consider adding indexes when you create new queries. But the best way is to be empirical: test, and let the measurements tell you whether the index earns its keep. I’ll leave that for another day!
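
One last bit of housekeeping: once you are done, you probably want to turn the profiler back off (level 0 disables it):

> db.setProfilingLevel(0)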

Exporting MongoMapper Objects to JSON

I wanted to export a MongoMapper document and its related documents as JSON — with embedded arrays for the collections. Invoking to_json did not seem to work perfectly, so I set about discovering what was going on.

Conclusion

If you use Embedded Documents for every associated document, the to_json method will work perfectly.

If you have normal Documents, you must override the as_json method to export the object “tree.”

Details

Here is a walk through of exporting mongo documents as JSON.

I created a simple Author class, and will use a simple test to show how to_json works:

describe "Author 1" do
  before :all do
    class Author
      include MongoMapper::Document
      key :name
      key :pen_name
    end
  end

  it "should output JSON" do
    p1 = Author.create(:name => "Ben Franklin", :pen_name => "Poor Richard")
    json = p1.to_json
    puts json
    json.should include "name"
    json.should include "Poor Richard"
  end
end

And we get what we expect:

{
  "books":[],
  "id":"4f316a4c8951a2eefe000001", 
  "name":"Ben Franklin",
  "pen_name":"Poor Richard"
}

Now let’s add a new Book document of the Embedded variety. Here we will assert that the Author JSON should include a list of Books:

describe "Author 2" do
  before :all do
    class Book
      include MongoMapper::EmbeddedDocument
      key :title
    end
    class Author
      include MongoMapper::Document
      many :books
    end
  end
  it "authors have books" do
    p1 = Author.create(:name => "Ben Franklin", :pen_name => "Poor Richard",
                       :books => [Book.new(:title => "Poor Richard's Almanac")])
    json = p1.to_json
    puts json
    json.should include "Poor Richard"
    json.should include "Almanac"
  end
end

And, sure enough, it works.

{
  "books":[{
    "id":"4f316a4c8951a2eefe000003", 
    "title":"Poor Richard's Almanac"}],
  "id":"4f316a4c8951a2eefe000004", 
  "name":"Ben Franklin", 
  "pen_name":"Poor Richard"
}

Let’s add a list of Interests to the Author class, this time as a normal document type (not embedded). Now we can test that the Author JSON has the expected Interest:

describe "Author 3" do
  before :all do
    class Interest
      include MongoMapper::Document
      key :title, String
    end
    class Book
      include MongoMapper::EmbeddedDocument
      key :title
    end
    class Author
      include MongoMapper::Document
      many :books
      many :interests
    end
  end
  it "should have interests" do
    p1 = Author.create(:name => "Ben Franklin", :pen_name => "Poor Richard",
                       :books => [Book.new(:title => "Poor Richard's Almanac")],
                       :interests => [Interest.create(:title => "Movies")])
    json = p1.to_json
    puts json
    json.should include "Poor Richard"
    json.should include "Almanac"
    json.should include "Movies" # Fails
  end
end

Whoa! No joy! Seems that the association to non-Embedded documents does not get automatically exported to the JSON.

{
  "books":[{
    "id":"4f316a4c8951a2eefe000006",
    "title":"Poor Richard's Almanac"}],
  "id":"4f316a4c8951a2eefe000007", 
  "name":"Ben Franklin", 
  "pen_name":"Poor Richard"
}

And we get a failed spec 🙁

# expected "{"books":[{"id":"4f316a4c8951a2eefe000006","title":"Poor Richard's Almanac"}],"id":"4f316a4c8951a2eefe000007","name":"Ben Franklin","pen_name":"Poor Richard"}" to include "Movies"

Turns out we can add a custom as_json implementation to the class we want to export as JSON. (Under the hood, to_json calls as_json to build the hash that gets serialized.) The as_json method is responsible for indicating which fields and collections should be included in the JSON.

describe "Author 4" do
  before :all do
    class Interest
      include MongoMapper::Document
      key :title, String

    end
    class Book
      include MongoMapper::EmbeddedDocument
      key :title
    end
    class Author
      include MongoMapper::Document
      many :books
      many :interests

      def as_json(options = {})
        {
            :name => self.name,
            :pen_name => self.pen_name,
            :books => self.books,
            :interests => self.interests
        }
      end
    end
  end
  it "should have interests in json" do
    p1 = Author.create(:name => "Ben Franklin", :pen_name => "Poor Richard",
                       :books => [Book.new(:title => "Poor Richard's Almanac")],
                       :interests => [Interest.create(:title => "Movies")])
    json = p1.to_json
    puts json
    json.should include "Poor Richard"
    json.should include "Almanac"
    json.should include "Movies"
  end
end

And we have Books and Interests. Success!

{
  "name":"Ben Franklin", 
  "pen_name":"Poor Richard",
  "books":[{
    "id":"4f31782a8951a2f267000002", 
    "title":"Poor Richard's Almanac"}], 
  "interests":[{
    "author_id":"4f31782a8951a2f267000003", 
    "id":"4f31782a8951a2f267000001",
    "title":"Movies"}]
}

The Bizarro Manifesto

Let’s try a little Bizarro[1] test (if you agree with these, I’ll poke you with a hot krypton stick):

We are uncovering better ways to provide the illusion of developing software by listening to others talk about watching people try. Through this (dare I call it?) work, we have come to value:

  • Dogmatic process and CASE-tool-like automation over inspiring quality individuals to interact with the team and the clients
  • Sufficient up-front comprehensive design specifications over seeing frequent, tangible, working results
  • Writing detailed Statements of Work and negotiating changes over collaborating to do our collective best with the time and money at hand
  • Driving toward the original project plan over accommodating the client changing their mind, or a path turning into a dead end

To elaborate:

  • We prefer to focus on building software with lock-step process and tools — and reduce our need to worry about quality individuals and having conversations amongst developers or with the client.
    • that way we don’t need to worry about people issues and effective communication.
    • That way we can hire any individual regardless of skill, and forgo all verbal/personal interactions in favor of solely written words. Even better if those written words are automatically transformed into code. Maybe we can get non-coder tools! After all, people are merely fungible assets/resources, and software is factory-work — with processes and tools, and a horde of low-paid serfs, we can crank it out!
  • We prefer to spend a lot of time up-front ensuring we have the requirements and design specs fully determined — rather than have tangible, working results early on.
    • We start with complete requirements specifications (often 400 pages), that follow our company standard template.
    • Even our Use Cases follow a mandatory 3-level deep path, with proper exception and alternate paths worked out.
    • We link the requirements items into detailed design documents — which include system design diagrams, interface specifications, and detailed object models.
    • If we don’t write it all down now, we’re likely to forget what we wanted. And if we don’t do it to the n-th degree, the developers might screw it up.
    • Writing it all down up front allows us to go on vacation while the process and tools “write” the code from the detailed specs/diagrams. Sweet.
    • In addition, we love to be rewarded by reaching meaningless intermediate deadlines that we place on our 1500-node Gantt chart.
    • When we combine all of the upfront work with important deadlines, many of the senior managers can get promoted due to their great commitment to generating reams of cool-looking documents. By the time the sh!t hits the fan when folks realize the “ship it” deadline is missed, the senior managers are no longer involved.
    • Besides, if we actually built software instead of writing all sorts of documents thinking about building software, our little ruse would be exposed!
  • We prefer to work under rigid Statements of Work — rather than actually work towards a “best-fit” solution given changing conditions of understanding and needs.
    • The agreement is based on the 400-page, fully-specified requirements document, and we pad the cost estimate with a 400% profit margin.
    • We then hire dozens of people to argue during the Change Control Review Board monthly meetings about re-writing code to deliver what you wanted versus what you asked for when you thought you knew what you wanted (and wrote it down in that 400-page doc that was signed off by 6 execs).
    • Contract negotiation pissing matches are such great uses of our collective resources and always result in perfect software! We love our fine print 🙂
    • With a 400% padding, the projects are too big to fail.
    • Once we are in it for 1 or 2 million and 50% done and 2x schedule overrun, who would ever say “No” to a contract extension? Who better to get you to the goal line than the same folks who squandered away your treasure, pissed away the calendar, and delivered no working software yet?
    • We like to appear like we’re just about done… Asymptote? Never heard of one.
  • We prefer to be driven by our initial plan — rather than dealing with change and having to re-print the Gantt.
    • Especially a Gantt chart that has been built with tender loving care to include resource allocations, inter-project dependencies, and partial resource allocation assignments for matrix-style organizations.
    • We love hiring a small army to ensure that we drive the entire team to meet every micro-task deadline even when they no longer make any sense.
    • The real fun is collecting the “actuals” data from the developers assigned to each task so we can compare it to their estimated hours.
    • And nothing is sweeter than seeing 90% of our tasks started, 75% of those being 67% resolved, and 25% of the resolved actually complete — the roll-up summary to management is such a compelling story of success.
    • Changing such a beautiful plan that took 4 man-years to develop, that incorporates all of the comprehensive non-code documents, and is an appendix in the contract, is no small feat!
    • Better to produce the software according to plan even if nobody wants it that way. That’s our motto, and we’re not going to change!
    • We love the illusion of activity over the truth of delivered features.

Feel free to sign the manifesto below. It’s free to be certified.


[1] Credit goes to Superman and Bizarro World.

Jira vs Post-It Notes

Had some recent discussions on the subject of using tools or just sticky notes…

For me, a digital board representation of the kanban-style 3-column view is merely one v-e-r-y small aspect of using something like Jira.

Though I could probably live with just Post-It notes, I bet it would be an interesting challenge for me. I have been:

  1. doing only distributed work for a long time (since the late 90s)
  2. using Jira so long, it is as simple as pen and paper (or Post-Its)
  3. dissatisfied trying to do a project with something simpler like Basecamp, without the richness of Jira
  4. working on long-term projects that have different people rolling through, and years of life (4000+ issues)

And I ask myself: what other benefits do I derive from Jira (plus, admittedly, a companion wiki)? Why might I be uncomfortable with just Post-Its? Do I have an unnecessary “crutch” in the form of Jira? Hmmm… do I do more than is necessary in the way of using Jira/wiki to add other pertinent documentation for the project?

Well, here are the things I can think of off the top of my head:

  • It is easy to:
    • assign myself to an issue
    • move it to in progress
    • move it to resolved
  • Our virtual chats hover around the GreenHopper view:
    • we’ll edit stuff on the fly as needed
    • quickly create a new issue if something pops up during the call
    • everybody refreshes their browser
  • We put a fair amount of supporting docs — details, helpful things — into the issue (or into the wiki, with that link in the issue):
    • that way you always know where to look 🙂
    • you can find it 24×7, regardless of your GeoLoc
    • sometimes we’ll have Skype chats and voice calls about the issue, maybe about design ideas; I’ll shove those into the issue too
  • We can link related issues
  • We can use search to look up old issues and refresh ourselves on what was asked for/done
  • We can have asynchronous, threaded conversations in the form of comments
  • We can track any myriad of other stuff against the issue
  • I log my hours against the issue:
    • useful for billing if you need it…
  • I can use it to help generate pretty charts for management
  • We use the issue status to trigger QA (in addition to our chats)
  • I use Jira to help ensure product version notes are up to date:
    • Jira manages the iterations
  • Pretty easy to maintain a backlog, even chunking it up into groups if needed
  • Easy to indicate that an issue has been rejected, and why

The point is, I find it incredibly useful to have modern technology at my fingertips…

In my experience, there is so much more to a project-tracking “tool” than what Post-It notes would seem to represent.

IMPORTANT: but I have NO (zero, nada, zilch) experience being part of a co-located team using Post-It notes. So take it with a grain of salt :-)

Use Agile Wisely

For the past few years, I have been bothered by a nagging urge to write a pamphlet on software titled “Common Sense” — an homage to Thomas Paine’s great work.

This may be a stretch, and I may counter this point once I actually do the research and draw more parallels — or not. So here I am just thinking out loud, as it were. A risky means to formulate my ideas, I know.

The Agile Manifesto is akin to the Declaration of Independence and US Constitution. (Not in its impact on the world, and I make the comparison in complete deference to the greatness that were our Founders.)

Agile is about freedom and individual responsibility, grounded in an agreed-upon framework of overarching ethics and morals. The beauty of our Founding Fathers was that they got to the essence of human behavior — including the depravity, moral weakness, and corruption by power that we humans suffer from. They designed a system of government that accounts for such weaknesses, and that provides a means of feedback and correction. Correcting in the small (local and state government), and in the large (amendments to the document itself).

The Agile Manifesto is similarly poised. The four main tenets of the Manifesto are irrefutable, getting to the essence of software development. The Manifesto sets the stage for great success without the shackles of a tyrannical, command-and-control form of management. It works from the bottom up.

There are many in the software world who decry the freedom of agile developers practicing their craft. Some organizations still drive a tyrannical structure within a framework of bureaucratic control. While many “freedom fighters” see the errors in these organizations, those on the inside are somehow oblivious, and probably believe in the superiority of command-and-control.

The lure of bureaucratic layers for many managers has always confounded me. Is it a lack of education about what freedom means? Is it a lack of courage to be individually accountable? Is it an allegiance to a known and friendly tyrannical structure, versus the unknown and scary individualism? Is it a belief that somehow a complex human organization can be broken down into its constituent parts, such that if each cog does its small bit, the whole achieves its ultimate goal? It is, undoubtedly, simply part of human nature. Some folks are comfortable being risk takers and sticking their necks out, while others enjoy the comforts of a more controlled system within which to toil.

Over the past few years, our community has been inundated with “enlightened” Scrum Masters. The Church of Scrum has practically popped up overnight, anointing new converts at an alarming pace. With a relatively trivial-to-acquire certification, many organizations seek out said experts to be their saviors on the road to riches.

Scrum itself is not the issue; after all, it is Agile. But much like freedom in the hands of the uneducated often has disastrous results, so too does being Scrumified in the absence of understanding Agile.

Our young democratic republic worked hard to teach children about the US form of government and the hard-fought liberties we were blessed to enjoy. The children all studied from primers that helped educate them about just “how” our form of government works. And they learned about the context… the “why” behind the framers of the Constitution choosing their forms of checks and balances to stem undesirable side effects of human nature.

In the absence of much formal education about the intent of the Agile Manifesto, Scrum filled the void. That’s the beauty of a free market. Ideas can compete.

Much like our venerable representative democracy, Agile isn’t a perfect form of software development, but it’s the best there has ever been. There are no guarantees that being “agile” will result in fabulous wealth and riches through the killer app. Just as there are no guarantees of outcome in a free society (at least there shouldn’t be any).

As with the United States of America, with great freedom lies great responsibility. Use agile wisely.

Uncle Bob Challenges The Architecture of a Rails App

Uncle Bob gave a very interesting keynote at the Ruby Midwest 2011 conference.

I have developed apps with the following fundamental architecture (with dependencies only crossing one layer):

——
UI Layer
——
Business Object Layer
——
Data Mgt. Layer
——

Born from the one pattern that is king of the hill in my book: “Separation of Concerns.” The above was my architecture for all projects since the early 90s… C++, Java… but so far not so much in Rails.

I have thought about trying it, but not sure if it will pay off or not.

Essentially, it is about cleaving the Rails model classes into two parts (see the sketch after the list):

  1. Business: methods, attributes, business rules
  2. Persistence: methods, attributes, all knowledge of the DBMS details
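
Here is a rough sketch of what that split might look like with MongoMapper (the class names, fields, and rule are invented):

require 'date'

# Business object: plain Ruby, no knowledge of the database
class Patient
  attr_reader :name, :dob

  def initialize(attrs = {})
    @name = attrs[:name]
    @dob  = attrs[:dob]
  end

  # A business rule lives here, not in the persistence layer
  def minor?
    ((Date.today - dob) / 365.25) < 18
  end
end

# Persistence class: all of the MongoMapper/DBMS details
class PatientRecord
  include MongoMapper::Document
  key :name, String
  key :dob,  Date

  def to_business_object
    Patient.new(:name => name, :dob => dob)
  end
end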

In general, the UI deals with BOs, but sometimes we create dumb “Data Transfer Objects” that are lightweight versions of the business objects to be thrown about the system.

As a side note, moving code to a more “object-oriented” state generally ends up with about the same number of lines of code. And often a bit more, due to the boilerplate of creating additional classes.

In a current project, we have pulled out the business objects into a separate gem — but mostly because it needs to be used by our web app and by an eventmachine app.

The thing that shocked me the most about Rails, when Corey Haines introduced me to it in 2009, was that it was a lot like the “Model Driven Architecture” I had worked with for a few years: given an architecture and a vertical slice through the app, weave the model through the architecture generator, and out comes an application with a consistent architecture, mostly the same for the bulk of the app (save for model/property names). Commercially, that MDA technology was a failure, last time I checked. Even though I thought it was the smartest way to develop apps, few others did. Except for Rails developers — largely because most Rails devs probably have a very different mindset than other devs.

See my blast-from-the-past presentation here.

Though Bob pokes fun at the Rails high-level directory structure for not revealing the business domain, I am totally fine with that. It’s a good thing. Yeah, sure, it reveals that this is an MVC-style app designed to deliver web apps. So what? No matter which architecture is used, I look for the domain classes to tell me what the system is doing…

In my handful of Rails apps to date, I have only used MongoDB and MongoMapper — and this is the closest I have gotten to the good old days when I used the POET object-oriented database with C++ back in the late 90s. It is the closest I have been to nirvana. I basically *almost* don’t need to care that there even is a database…

One of these days, I’ll compare and contrast a Rails/MongoMapper app with and without Business Objects separated from Data Management classes.