
The Challenge of Naming

John Nunemaker wrote a very nice article on assorted tips that improved his code… one of which revolved around naming.

Regarding naming… I absolutely agree. It is worth arguing about. The more critical the class is to the system design, the more I might go to the mat and wrestle a good name to the ground. The decision on a name can be a fleeting event, but it will have everlasting impact. Think: Write Once, Read Many. Don’t screw it up for the rest of us!

For a C++ app for manufacturing (’95-98), I employed a very layered architecture. Portable business objects were sent to the thin client (no UI talking to DB crap allowed!). The paradigm of pick lists was commonplace… show me a list of parts. The list UI component merely needed a set of IDs and Names. So each domain class could basically implement the interface and they too could be tossed into a drop-down list. My clever name for this little device never grew past “IdString” — we kind of joked about it, because a better name never surfaced.
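Were I doing it in Ruby today, the idea might look something like this (a sketch for illustration only; the original was C++, and these names are mine):

module IdString
  # Anything that can appear in a pick list exposes an (id, name) pair.
  def id_string
    [id, display_name]
  end
end

class Part
  include IdString
  attr_reader :id

  def initialize(id, description)
    @id = id
    @description = description
  end

  def display_name
    @description
  end
end

# The pick-list UI only ever needs the (id, name) pairs:
parts = [Part.new(1, 'Gasket'), Part.new(2, 'Sprocket')]
parts.map(&:id_string)  # => [[1, "Gasket"], [2, "Sprocket"]]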

When I see something like “GaugeAttributeServiceWithCaching” in isolation, it is not as easy to unilaterally discuss a better name.

However, were this inside a basic system called “Gauge” and I saw a bunch of other classes prefixed with “Gauge” — I would throw a red flag.

BTW: My rules are strict for domain-y things, and less strict for utility classes and other lesser things.

I mostly dislike prefixes, postfixes, redundant things of any sort in a name — class or attribute. If I have to mentally strip off a prefix to get at the gist of what I am reading, then it should not be there in the first place.

For example (non-Ruby community, mostly):

  • column names prefixed by the table name (Table User; user_first, user_last)
  • public class attributes prefixed by an underscore (Class User; string _first, string _last)

I also raise an eyebrow and look more closely at any (domain) class ending in “er.” Yup. Try it. Look for some. They are usually “doing” things.

You can go too far in making “God” classes that have no properties of their own, are stuffed full of collaborators via the initializer, and wouldn’t know how to delegate if it hit them over the head. It’s the kind of “Manager” class (there’s that ‘er’) that, instead of asking (delegating) for its gauges to compute some stat, gets all the data from the gauges and does the work for the gauges. Don’t be that guy!
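To make that concrete, here is a sketch of the two styles (class and method names are hypothetical):

# Anti-pattern: the Manager grabs each gauge's raw data and does the math itself.
class GaugeManager
  def initialize(gauges)
    @gauges = gauges
  end

  def average_pressure
    readings = @gauges.flat_map(&:readings)  # feature envy: digging into the gauges
    readings.reduce(:+) / readings.size.to_f
  end
end

# Better: ask each gauge for its own stat, and merely combine the answers.
class GaugePanel
  def initialize(gauges)
    @gauges = gauges
  end

  def average_pressure
    @gauges.map(&:average_pressure).reduce(:+) / @gauges.size.to_f
  end
end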

Conversely, look for the boring data class. No methods, just accessor stuff. While it might be just fine, and there truly is no business logic for this class, I would look around just to be sure there are no over-achiever “er” classes lurking in the dark alleys — ready to pimp the business logic for the data class.

Thanks for sharing, John!

DRY your RSpecs

Inspired by reading DRY your Scopes

Sometimes I find that I can write a bunch of tedious specs in the simplified manner described below.

It started innocently enough. As a certain set of features were growing, I found that I was writing repetitive tests like this:

context 'specialty patient section' do
  it('has specialty patient') { @plain_html.should =~ /Specialty Patient/ }
  it('has specialty patient trauma criteria') { @plain_html.should =~ /Trauma Activation/ }
  it('has specialty patient airway') { @plain_html.should =~ /Advanced Airway/ }
end

So I threw that aside and simplified:

# Test that certain sections only display when there are data to be shown.
dynamic_sections = ['Vital Signs', 'ECG', 'Flow Chart', 'Initial Assessment', 'Narrative',
                    'Specialty Patient — ACS', 'Specialty Patient — Advanced Airway',
                    'Specialty Patient — Burns', 'Specialty Patient — Stroke', 'Specialty Patient — CPR',
                    'Specialty Patient — Motor Vehicle Collision', 'Specialty Patient — Trauma Criteria',
                    'Specialty Patient — Obstetrical', 'Specialty Patient — Spinal Immobilization',
                    'Influenza Screening', 'SAD (Psychiatric Ax)',
                    'Incident Details', 'Crew Members', 'Insurance Details', 'Mileage', 'Additional Agencies',
                    'Next of Kin', 'Personal Items', 'Transfer Details']

context 'dynamic sections' do
  let(:p) do
    xml_str = ...some XML...
    Parser.new(xml_str)
  end
  let(:r) { Renderer::Customer1::HTML.new(p) }
  let(:html) { r.render }

  context ', when there is no info, ' do
    dynamic_sections.each do |s|
      it("should not have: #{s}") { html.should_not =~ /#{s}/ }
    end
  end
end

Simple stuff… nothing amazing. Simply using Ruby’s language to simplify the maintenance of the specs. When a new section is added to the HTML template, it merely needs to be added to the array. And since this generates actual specs, you preserve meaningful error messages:

Renderer dynamic sections , when there is no info,  should not have: Specialty Patient — Trauma Criteria

The intent of the test is very clear, and 24 lines of “it” specs are avoided.

The Cost of Using Ruby’s Rescue as Logic

[notice]
If you use this sort of technique, you may want to read on.

node = nodes.first rescue return

[/notice]


[important]

Nov 2012 Update:

Though this post was about the performance cost of using a ‘rescue’ statement, there is a more insidious problem with the overall impact of such syntax. The pros and cons of using a rescue are well laid out in Avdi’s free RubyTapas: Inline Rescue

[/important]

Code like this:

unless nodes.nil?
  nodes.first
else
  return
end

Can be written using the seemingly more elegant approach with this Ruby trick:

node = nodes.first rescue return

But then, that got me to thinking… In many languages I have used in the past (e.g., Java and C++), exception handling is an expensive endeavor.

So, though the rescue solution works, I am thinking I should explore whether there are any pros/cons to allowing a “rescue” to act as logic. So I did just that…
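First, it helps to spell out what the one-liner really does. The rescue modifier catches StandardError and everything below it, not just the NoMethodError raised by calling first on nil. It is roughly equivalent to:

# Roughly what `node = nodes.first rescue return` does under the covers.
# Note it swallows StandardError and all of its subclasses, not just the
# NoMethodError raised by calling `first` on nil.
def with_rescue(nodes)
  node = begin
    nodes.first
  rescue StandardError
    return
  end
end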

Here are the two methods I benchmarked, one with “if” logic, and one with “rescue” logic:

def without_rescue(nodes)
  return nil if nodes.nil?
  node = nodes.first
end
def with_rescue(nodes)
  node = nodes.first rescue return
end

Using method_1, below, I got the following results looping 1 million times:

                  user     system      total        real
W/out rescue  0.520000   0.010000   0.530000 (  0.551359)
With rescue  22.490000   0.940000  23.430000 ( 26.487543)

Yikes. Obviously, rescue is an expensive choice by comparison!

But if the code runs just once, or even ten times, the difference is imperceptible.

Conclusion #1 (Normal Usage)

  • It doesn’t matter which method you choose if the logic is invoked infrequently.

Looking a bit Deeper

But being a curious engineer at heart, there’s more… The above results are based on the worst case, assuming nodes is always nil. If nodes is never nil, then the rescue block is never invoked, yielding this (rather obvious) timing where the rescue technique (with less code) is faster:

                  user     system      total        real
W/out rescue  0.590000   0.000000   0.590000 (  0.601803)
With rescue   0.460000   0.000000   0.460000 (  0.461810)

However, what if nodes were only nil some percentage of the time? What does the shape of the performance curve look like? Linear? Exponential? Geometric progression? Well, it turns out that the response (see method_2, below) is linear (R² = 0.99668):

[Figure: Rescue Logic is Expensive: runtime grows linearly with the percentage of nil cases]

Conclusion #2 (Large Data Set):

In this example of over a million calls, the decision on whether you should use “rescue” as logic boils down to this:

  • If the condition is truly rare (like a real exception), then you can use rescue.
  • If the condition is going to occur 5% of the time or more, do not use the rescue technique!

In general, it would seem that there is considerable cost to using rescue as pseudo logic over large data sets. Caveat emptor!

Sample Code:

My benchmarking code looked like this:

require 'benchmark'

include Benchmark

def without_rescue(nodes)
  return nil if nodes.nil?
  node = nodes.first
end

def with_rescue(nodes)
  node = nodes.first rescue return
end

TEST_COUNT = 1000000

def method_1
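  # Benchmark both implementations in the worst case (nodes = nil, every
  # call hits the rescue) and the best case (nodes = [1, 2, 3]).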
  [nil, [1,2,3]].each do |nodes|
    puts "nodes = #{nodes.inspect}"
    GC.start
    bm(12) do |test|
      test.report("W/out rescue") do
        TEST_COUNT.times do |n|
          without_rescue(nodes)
        end
      end
      test.report("With rescue") do
        TEST_COUNT.times do |n|
          with_rescue(nodes)
        end
      end
    end
  end
end

def method_2
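  # Sweep the likelihood that nodes is nil from ~10% up to 100%,
  # benchmarking both implementations at each level.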
  GC.start
  bm(18) do |test|
    nil_nodes = nil
    real_nodes = nodes = [1,2,3]
    likely_pct = 0
    10.times do |p|
      likely_pct += 10
      test.report("#{likely_pct}% W/out rescue") do
        TEST_COUNT.times do |n|
          nodes = rand(100) > likely_pct ? real_nodes : nil_nodes
          without_rescue(nodes)
        end
      end
      test.report("#{likely_pct}% With rescue") do
        TEST_COUNT.times do |n|
          nodes = rand(100) > likely_pct ? real_nodes : nil_nodes
          with_rescue(nodes)
        end
      end
    end
  end
end

method_1
method_2

Sample Output

nodes = nil
                  user     system      total        real
W/out rescue  0.520000   0.010000   0.530000 (  0.551359)
With rescue  22.490000   0.940000  23.430000 ( 26.487543)
nodes = [1, 2, 3]
                  user     system      total        real
W/out rescue  0.590000   0.000000   0.590000 (  0.601803)
With rescue   0.460000   0.000000   0.460000 (  0.461810)
                        user     system      total        real
10% W/out rescue    1.020000   0.000000   1.020000 (  1.087103)
10% With rescue     3.320000   0.120000   3.440000 (  3.825074)
20% W/out rescue    1.020000   0.000000   1.020000 (  1.036359)
20% With rescue     5.550000   0.200000   5.750000 (  6.158173)
30% W/out rescue    1.020000   0.010000   1.030000 (  1.105184)
30% With rescue     7.800000   0.300000   8.100000 (  8.827783)
40% W/out rescue    1.030000   0.010000   1.040000 (  1.090960)
40% With rescue    10.020000   0.400000  10.420000 ( 11.028588)
50% W/out rescue    1.020000   0.000000   1.020000 (  1.138765)
50% With rescue    12.210000   0.510000  12.720000 ( 14.080979)
60% W/out rescue    1.020000   0.000000   1.020000 (  1.051054)
60% With rescue    14.260000   0.590000  14.850000 ( 15.838733)
70% W/out rescue    1.020000   0.000000   1.020000 (  1.066648)
70% With rescue    16.510000   0.690000  17.200000 ( 18.229777)
80% W/out rescue    0.990000   0.010000   1.000000 (  1.099977)
80% With rescue    18.830000   0.800000  19.630000 ( 21.634664)
90% W/out rescue    0.980000   0.000000   0.980000 (  1.325569)
90% With rescue    21.150000   0.910000  22.060000 ( 25.112102)
100% W/out rescue   0.950000   0.000000   0.950000 (  0.963324)
100% With rescue   22.830000   0.940000  23.770000 ( 25.327054)

RSpec, Mongo and Database Cleaner

This is kinda obvious, once you see it… but I figured it might help someone, someday.

I wanted to create a document one time, so I put it in the before :all block.

Yet, in the “it should” block, the document was gone and the spec failed.

If I changed to a before :each block, the spec passed.

The culprit: cleaning the database before each example wipes out anything created in a before :all block. So I changed the spec_helper from cleaning before each example to using the truncation strategy, and moved the clean to the before :suite block (so that data didn’t build up in Mongo):

spec/spec_helper.rb
config.before(:suite) do
  #DatabaseCleaner[:mongo_mapper].strategy = :truncation
  DatabaseCleaner.clean
end

config.before(:each) do
  DatabaseCleaner[:mongo_mapper].strategy = :truncation
  #DatabaseCleaner.clean
end

And now things are as I expected them to be when using a before :all block…

I can repeatedly run the specs, and they pass.
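For illustration, the kind of spec this setup enables (a contrived sketch; Widget is a stand-in MongoMapper model):

describe Widget do
  before :all do
    # Created once for the whole group; with the clean moved to
    # before(:suite), nothing wipes it out between examples.
    @widget = Widget.create(:name => 'one-time setup')
  end

  it 'finds the document in the first example' do
    Widget.first(:name => 'one-time setup').should_not be_nil
  end

  it 'still finds it in the next example' do
    @widget.reload.name.should == 'one-time setup'
  end
end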

Lets in RSpecs Can Be Blech

Maybe it is just me, but I had suspected some weirdness here and there from using the fancy “let(:var_sym)” syntax. The trusty RSpec book says:

The first call to let( ) defines a memoized output( ) method that returns
a double object. Memoized means that the first time the method is
invoked, the return value is cached and that same value is returned
every subsequent time the method is invoked within the same scope.

So, it would seem that let() is a great way to define an object once, and use it from there onward.

However, I saw that in this particular instance of running an “expensive” operation in the let block, it took 17 seconds instead of 7 seconds to run the specs! I could see my specs ticking along, very slowly, one at a time. What the heck? I asked myself. Is there something that says “turn caching off (or on)”?

Fancy Schmancy! To save ~10 seconds, I’ll forgo the niceties of let() and revert to using the @var_name syntax.

Given the following RSpec code:

 context 'instance methods' do
    let(:sample_xml_file) {File.expand_path('../../data/sample_v_1_13.xml', __FILE__)}
    let(:p) {
      xml_str = File.read(sample_xml_file)
      Nemsis::Parser.new(xml_str)
    }
    let(:r) {Nemsis::Renderer::HTML.new(p)}

    describe '#render_html' do
      context "plain HTML" do
        let(:html) { r.render(false) }

        it 'returns not nil' do
          html.should_not be_nil
        end

        it 'has title section' do
          html.should =~ ...
        end

        context 'specialty patient section' do
          it('has specialty patient') { html.should =~ ... }
          it('has specialty patient trauma criteria') { html.should =~ ... }
          it('has specialty patient airway') { html.should =~ ... }
        end

        it "should not have a STYLE section" do
          html.should_not =~ ...
        end

        it "write to html file" do
          write_html_file(sample_xml_file, "simple", html)
        end
      end

      context "fancy HTML" do
        let(:html) { r.render(true) }

        it "should have a STYLE section" do
          html.should =~ ...
        end

        it "write to html file" do
          write_html_file(sample_xml_file, "fancy", html)
        end
      end
    end
    ...

Contrast the above with the more traditional approach that uses a before block and @variables:

 context 'instance methods' do

    before :all do
      @sample_xml_file = File.expand_path('../../data/sample_v_1_13.xml', __FILE__)
      xml_str = File.read(@sample_xml_file)
      p = Nemsis::Parser.new(xml_str)
      r = Nemsis::Renderer::HTML.new(p)
      @html = r.render(false)
    end

    describe '#render_html' do
      context "plain HTML" do

        it 'returns not nil' do
          @html.should_not be_nil
        end

I did a bit more formal timing, which revealed the truth:

  • let() — 10.7 seconds
  • before block — 2.4 seconds

Am I missing something? (The likely culprit: a let value is memoized within a single example only, so each it block re-runs the expensive parse and render, whereas a before :all block runs once for the whole group.)
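A contrived sketch of that scoping behavior (same should syntax as above):

describe 'let memoization scope' do
  let(:stamp) { Time.now.to_f }

  it 'caches the value within one example' do
    stamp.should == stamp   # memoized: the block ran only once in this example
  end

  it 're-evaluates in the next example' do
    stamp.should > 0        # a fresh stamp: the block ran again for this example
  end
end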

Manual Cucumber Tests?

There was some discussion over on the Cucumber list about manual testing.

Cucumber is great at BDD, but that doesn’t mean it is the only test technique (preaching to the choir) we should use.

I have learned it is critical to understand where automated tests shine, and where human testing is critical — and to not confuse the two.

As far as cuking manual tests, keeping the tests in one place seems like a good advantage (as described in Tim Walker’s cucum-bumbler wiki <g>).

The Cucumber “ask” method looks interesting. Maybe your testers could use the output to the console as-is, or you could (re-)write your own method to store the results somewhere else or output them differently.

From the cucumber code (cucumber-1.1.4/lib/cucumber/runtime/user_interface.rb):

# Suspends execution and prompts +question+ to the console (STDOUT).
# An operator (manual tester) can then enter a line of text and hit
# <ENTER>. The entered text is returned, and both +question+ and
# the result is added to the output using #puts.
# ...
def ask(question, timeout_seconds)
...

Sample Feature:

    ...
Scenario: View Users Listing
  Given I login as "Admin"
  When I view the list of users
  Then I should check the aesthetics

Step definition:

Then /^I should check the aesthetics$/ do
  ask("#{7.chr}Does the UI have that awesome look? [Yes/No]", 10).chomp.should =~ /yes/i
end

The output to the console looks like this:

Thanks for the pointer, Matt!

[notice]NOTE: it doesn’t play well with running guard/spork.[/notice]
The question pops up over in the guard terminal 🙁

Of course, if you are running a suite of manual tests, you probably don’t need to worry about the Rails stack being sluggish :-p

    Spork server for RSpec, Cucumber successfully started
    Running tests with args ["features/user.feature", "--tags", "@wip:3", "--wip", "--no-profile"]...
    Does the UI have that awesome look? [Yes/No]
    Yes
    ERROR: Unknown command Yes
    Done.
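If you want a durable record of the operator’s answers instead of console-only output, a thin wrapper around ask could log each question/answer pair. A minimal sketch, assuming nothing beyond Cucumber’s own ask (the helper name and log path are mine):

# features/support/manual_test_helper.rb
require 'time'

module ManualTestHelper
  LOG_PATH = File.expand_path('../../../tmp/manual_answers.log', __FILE__)

  # Ask via Cucumber's built-in ask(), then append the Q/A pair to a log.
  def ask_and_log(question, timeout_seconds = 60)
    answer = ask(question, timeout_seconds)
    File.open(LOG_PATH, 'a') do |f|
      f.puts "#{Time.now.iso8601} | #{question} | #{answer}"
    end
    answer
  end
end

World(ManualTestHelper)

The step definition above could then call ask_and_log in place of ask.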

Supporting SSL in Rails 2.3.4

Somehow, moving a perfectly happy production app to Rackspace and nginx caused URLs to no longer sport the SSL ‘s’ in “https” — bummer.

The link_to calls were fine… but a custom “tab_to” helper (responsible for highlighting the current tab) was not so happy (even though it eventually called link_to).

Turns out that it is the url_for method, as I learned from here.

I also blended it with some ideas I found here.

# config/environment/production.rb
# Override the default http protocol for URLs
ROUTES_PROTOCOL = "https"
...
# config/environment.rb
# Specifies gem version of Rails to use when vendor/rails is not present
RAILS_GEM_VERSION = '2.3.5' unless defined? RAILS_GEM_VERSION
# Use git tags for app version
APP_VERSION = `git describe --always --abbrev=0`.chomp! unless defined? APP_VERSION
# The default http protocol for URLs
ROUTES_PROTOCOL = 'http'
...
# application_controller.rb
  # http://lucastej.blogspot.com/2008/01/ruby-on-rails-how-to-set-urlfor.html
  def default_url_options(options = nil)
     if ROUTES_PROTOCOL == 'https'
       { :only_path => false, :protocol => 'https' }
     else
       { :only_path => false, :protocol => 'http' }
     end
  end
  helper_method :url_for

Now the real kicker… Since I do not have SSL set up locally, I had to do some dev on our staging server to tweak the code and test that “https” showed up. So I turned off class caching: config.cache_classes = false.

However, when I cap deployed with it set back to “true”, “https” did not show up. @#$$##@!!!!%%$% AARGH.

I suspect it might have something to do with not being able to open up a cached class and redefine it? I don’t know… I am going to have to go explore this oddity next…

Anatomy of a MongoDB Profiling Session

This particular application has been collecting data for months now, but hasn’t really had any users by design. At 33GB of data, pulling up a list of messages received was taking f-o-r-e-v-e-r!

So I decided to document how to go about finding and fixing this sort of problem on a running production system… Hope it helps.

Log into the mongo console and turn on profiling (the ‘1’) to monitor slow queries. I entered 10000 ms (10 seconds), which as a threshold for “slow” really stinks (!). You should adjust it to suit your app’s definition of “slow”—maybe 500ms:

> db.setProfilingLevel(1,10000)
{ "was" : 0, "slowms" : 100, "ok" : 1 }

Next I went back to the webapp and executed the page request that exhibits the slow response …

Once the page returns, go in and look for any slow responses that the profiler logged:

> db.system.profile.find()
{
  "ts" : ISODate("2012-02-18T15:34:02.967Z"), 
  "op" : "command", 
  "command" : { "count" : "messages", 
    "query" : { "_type" : "HL7Message", "recv_app" : "CAREGIVER" }, 
      "fields" : null }, 
  "ntoreturn" : 1, 
  "responseLength" : 48, 
  "millis" : 119051, 
  "client" : "192.168.100.67", 
  "user" : "" 
}
{
  "ts" : ISODate("2012-02-18T15:35:51.704Z"),
  "op" : "query", 
  "query" : { "_type" : "HL7Message", "recv_app" : "CAREGIVER" },
  "ntoreturn" : 25,
  "ntoskip" : 791025,
  "nscanned" : 791051,
  "nreturned" : 25,
  "responseLength" : 49956,
  "millis" : 108720,
  "client" : "192.168.100.67",
  "user" : "" 
}

You can see there was a count query and a query for the data itself (we are using pagination). Sure enough, look here:

  • “ntoreturn” : 25,
  • “nscanned” : 791051,


Wow, that’s nasty… to return 25 records, we scanned 791,051! Gulp. Looks like a full table scan. Never a good thing (unless you have very small amounts of data).

Let’s see what sorts of indexes exist for the messages collection:

db.system.indexes.find( { ns: "production-alerts.messages" } );
{ "name" : "_id_", "key" : { "_id" : 1 }, "v" : 0 }
{ "v" : 1, "key" : { "created_at" : -1 }, "name" : "created_at_-1" }
{ "v" : 1, "key" : { "_type" : 1 }, "name" : "_type_1" }
{ "v" : 1, "key" : { "recv_app" : 1 }, "name" : "recv_app_1" }
{ "v" : 1, "key" : { "created_at" : -1, "recv_app" : 1 }, "name" : "created_at_-1_recv_app_1" }
{ "v" : 1, "key" : { "message_type" : 1 }, "name" : "message_type_1" }
{ "v" : 1, "key" : { "trigger_event" : 1 }, "name" : "trigger_event_1" }

Well, as expected, there is no index covering the multiple keys that we are searching on. So let’s add a compound index to match the query used by the controller!

db.messages.ensureIndex({_type:1, recv_app:1});

Now the app FLIES!! We dropped from 100+ seconds to 1.5 seconds (look at the “millis”) w00t!

> db.messages.find({ _type : "HL7Message", recv_app : "CAREGIVER"}).explain();
{
	"cursor" : "BtreeCursor _type_1_recv_app_1",
	"nscanned" : 791153,
	"nscannedObjects" : 791153,
	"n" : 791153,
	"millis" : 1546,
	"nYields" : 0,
	"nChunkSkips" : 0,
	"isMultiKey" : false,
	"indexOnly" : false,
	"indexBounds" : {
		"_type" : [
			[
				"HL7Message",
				"HL7Message"
			]
		],
		"recv_app" : [
			[
				"CAREGIVER",
				"CAREGIVER"
			]
		]
	}
}
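One last bit of housekeeping: when you are done, remember to dial the profiler back off (level 0). Something like:

> db.setProfilingLevel(0)
{ "was" : 1, "slowms" : 10000, "ok" : 1 }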

To prevent this sort of thing, consider adding indexes when you create new queries. But the best way to do this is to be empirical: test and verify whether the index is actually needed. I’ll leave that for another day!
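For what it’s worth, the same compound index can also be declared from the app side. A sketch, assuming MongoMapper’s ensure_index class method (Message stands in for the real model class):

class Message
  include MongoMapper::Document

  # Compound index matching the controller's query on _type + recv_app
  ensure_index [[:_type, 1], [:recv_app, 1]]
end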

Exporting MongoMapper Objects to JSON

I wanted to export a MongoMapper document and its related documents as JSON — with embedded arrays for the collections. Invoking to_json did not seem to work perfectly, so I set about to discover what was going on.

Conclusion

If you use Embedded Documents for every associated document, the to_json method will work perfectly.

If you have normal Documents, you must override the as_json method to export the object “tree.”

Details

Here is a walk through of exporting mongo documents as JSON.

I created a simple Author class, and will use a simple test to show how to_json works:

describe "Author 1" do
  before :all do
    class Author
      include MongoMapper::Document
      key :name
      key :pen_name
    end
  end

  it "should output JSON" do
    p1 = Author.create(:name => "Ben Franklin", :pen_name => "Poor Richard")
    json = p1.to_json
    puts json
    json.should include "name"
    json.should include "Poor Richard"
  end
end

And we get what we expect:

{
  "books":[],
  "id":"4f316a4c8951a2eefe000001", 
  "name":"Ben Franklin",
  "pen_name":"Poor Richard"
}

Now let’s add a new Book document of the Embedded variety. Here we will assert that the Author JSON should include a list of Books:

describe "Author 2" do
  before :all do
    class Book
      include MongoMapper::EmbeddedDocument
      key :title
    end
    class Author
      include MongoMapper::Document
      many :books
    end
  end
  it "authors have books" do
    p1 = Author.create(:name => "Ben Franklin", :pen_name => "Poor Richard",
                       :books => [Book.new(:title => "Poor Richard's Almanac")])
    json = p1.to_json
    puts json
    json.should include "Poor Richard"
    json.should include "Almanac"
  end
end

And, sure enough, it works.

{
  "books":[{
    "id":"4f316a4c8951a2eefe000003", 
    "title":"Poor Richard's Almanac"}],
  "id":"4f316a4c8951a2eefe000004", 
  "name":"Ben Franklin", 
  "pen_name":"Poor Richard"
}

Let’s add a list of Interests to the Author class, this time as a normal document type (not embedded). Now we can test that the Author JSON has the expected Interest:

describe "Author 3" do
  before :all do
    class Interest
      include MongoMapper::Document
      key :title, String
    end
    class Book
      include MongoMapper::EmbeddedDocument
      key :title
    end
    class Author
      include MongoMapper::Document
      many :books
      many :interests
    end
  end
  it "should have interests" do
    p1 = Author.create(:name => "Ben Franklin", :pen_name => "Poor Richard",
                       :books => [Book.new(:title => "Poor Richard's Almanac")],
                       :interests => [Interest.create(:title => "Movies")])
    json = p1.to_json
    puts json
    json.should include "Poor Richard"
    json.should include "Almanac"
    json.should include "Movies" # Fails
  end
end

Whoa! No joy! Seems that the association to non-Embedded documents does not get automatically exported to the JSON.

{
  "books":[{
    "id":"4f316a4c8951a2eefe000006",
    "title":"Poor Richard's Almanac"}],
  "id":"4f316a4c8951a2eefe000007", 
  "name":"Ben Franklin", 
  "pen_name":"Poor Richard"
}

And we get a failed spec 🙁

# expected "{"books":[{"id":"4f316a4c8951a2eefe000006","title":"Poor Richard's Almanac"}],"id":"4f316a4c8951a2eefe000007","name":"Ben Franklin","pen_name":"Poor Richard"}" to include "Movies"

Turns out we can add a custom as_json implementation to the class we want to export as JSON. The as_json method is responsible for indicating which fields and collections should be included in the JSON.

describe "Author 4" do
  before :all do
    class Interest
      include MongoMapper::Document
      key :title, String

    end
    class Book
      include MongoMapper::EmbeddedDocument
      key :title
    end
    class Author
      include MongoMapper::Document
      many :books
      many :interests

      def as_json(options = {})
        {
            :name => self.name,
            :pen_name => self.pen_name,
            :books => self.books,
            :interests => self.interests
        }
      end
    end
  end
  it "should have interests in json" do
    p1 = Author.create(:name => "Ben Franklin", :pen_name => "Poor Richard",
                       :books => [Book.new(:title => "Poor Richard's Almanac")],
                       :interests => [Interest.create(:title => "Movies")])
    json = p1.to_json
    puts json
    json.should include "Poor Richard"
    json.should include "Almanac"
    json.should include "Movies"
  end
end

And we have Books and Interests. Success!

{
  "name":"Ben Franklin", 
  "pen_name":"Poor Richard",
  "books":[{
    "id":"4f31782a8951a2f267000002", 
    "title":"Poor Richard's Almanac"}], 
  "interests":[{
    "author_id":"4f31782a8951a2f267000003", 
    "id":"4f31782a8951a2f267000001",
    "title":"Movies"}]
}


The Bizarro Manifesto

Let’s try a little Bizarro¹ test (if you agree to these, I’ll poke you with a hot krypton stick):

We are uncovering better ways to provide the illusion of developing software by listening to others talk about watching people try. Through this (dare I call it?) work, we have come to value:

  • Dogmatic process and CASE-tool-like automation over inspiring quality individuals to interact with the team and the clients
  • Sufficient up-front comprehensive design specifications over seeing frequent, tangible, working results.
  • Writing detailed Statements of Work and negotiating changes over collaborating to do our collective best with the time and money at hand
  • Driving toward the original project plan over accommodating the client changing their mind, or a path turning into a dead end

To elaborate:

  • We prefer to focus on building software with lock-step process and tools — and reduce our need to worry about quality individuals and having conversations amongst developers or with the client.
    • that way we don’t need to worry about people issues and effective communication.
    • That way we can hire any individual regardless of skill, and forgo all verbal/personal interactions in favor of solely written words. Even better if those written words are automatically transformed into code. Maybe we can get non-coder tools! After all, people are merely fungible assets/resources, and software is factory-work — with processes and tools, and a horde of low-paid serfs, we can crank it out!
  • We prefer to spend a lot of time up-front ensuring we have the requirements and design specs fully determined —  rather than have tangible, working results early on.
    • We start with complete requirements specifications (often 400 pages), that follow our company standard template.
    • Even our Use Cases follow a mandatory 3-level deep path, with proper exception and alternate paths worked out.
    • We link the requirements items into detailed design documents — which include system design diagrams, interface specifications, and detailed object models.
    • If we don’t write it all down now, we’re likely to forget what we wanted. And if we don’t do it to the n-th degree, the developers might screw it up.
    • Writing it all down up front allows us to go on vacation while the process and tools “write” the code from the detailed specs/diagrams. Sweet.
    • In addition, we love to be rewarded by reaching meaningless intermediate deadlines that we place on our 1500-node Gantt chart.
    • When we combine all of the upfront work with important deadlines, many of the senior managers can get promoted due to their great commitment to generating reams of cool-looking documents. By the time the sh!t hits the fan when folks realize the “ship it” deadline is missed, the senior managers are no longer involved.
    • Besides, if we actually built software instead of writing all sorts of documents thinking about building software, our little ruse would be exposed!
  • We prefer to work under rigid Statements of Work — rather than actually work towards a “best-fit” solution given changing conditions of understanding and needs.
    • The agreement is based on the 400-page, fully-specified requirements document, and we pad the cost estimate with a 400% profit margin.
    • We then hire dozens of people to argue during the Change Control Review Board monthly meetings about re-writing code to deliver what you wanted versus what you asked for when you thought you knew what you wanted (and wrote it down in that 400-page doc that was signed off by 6 execs).
    • Contract negotiation pissing matches are such great uses of our collective resources and always result in perfect software! We love our fine print 🙂
    • With a 400% padding, the projects are too big to fail.
    • Once we are in it for 1 or 2 million and 50% done and 2x schedule overrun, who would ever say “No” to a contract extension? Who better to get you to the goal line than the same folks who squandered away your treasure, pissed away the calendar, and delivered no working software yet?
    • We like to appear like we’re just about done… Asymptote? Never heard of one.
  • We prefer to be driven by our initial plan — rather than dealing with change and having to re-print the Gantt.
    • Especially a Gantt chart that has been built with tender loving care to include resource allocations, inter-project dependencies, and partial resource allocation assignments for matrix-style organizations.
    • We love hiring a small army to ensure that we drive the entire team to meet every micro-task deadline even when they no longer make any sense.
    • The real fun is collecting the “actuals” data from the developers assigned to each task so we can compare it to their estimated hours.
    • And nothing is sweeter than seeing 90% of our tasks started, 75% of those 67% resolved, and 25% of the resolved actually complete — the roll-up summary to management is such a compelling story of success.
    • Changing such a beautiful plan that took 4 man-years to develop, that incorporates all of the comprehensive non-code documents, and is an appendix in the contract, is no small feat!
    • Better to produce the software according to plan even if nobody wants it that way. That’s our motto, and we’re not going to change!
    • We love the illusion of activity over the truth of delivered features.

Feel free to sign the manifesto below. It’s free to be certified.


¹ Credit goes to Superman and Bizarro World.