Category Archives: agile

Supporting SSL in Rails 2.3.4

Somehow, moving a perfectly happy production app to Rackspace and nginx caused URLs to no longer sport the SSL ‘s’ in “https” — bummer.

link_to calls were fine… But a custom “tab_to” helper (responsible for highlighting the current tab) was not so happy (even though it eventually called link_to).

Turns out it is the url_for method, as I learned from here.

I also blended it with some ideas I found here.

# config/environments/production.rb
# Override the default http protocol for URLs
ROUTES_PROTOCOL = "https"
...
# config/environment.rb
# Specifies gem version of Rails to use when vendor/rails is not present
RAILS_GEM_VERSION = '2.3.5' unless defined? RAILS_GEM_VERSION
# Use git tags for app version
APP_VERSION = `git describe --always --abbrev=0`.chomp! unless defined? APP_VERSION
# The default http protocol for URLs
ROUTES_PROTOCOL = 'http'
...
# application_controller.rb
  # http://lucastej.blogspot.com/2008/01/ruby-on-rails-how-to-set-urlfor.html
  def default_url_options(options = nil)
    if ROUTES_PROTOCOL == 'https'
      { :only_path => false, :protocol => 'https' }
    else
      { :only_path => false, :protocol => 'http' }
    end
  end
  helper_method :url_for
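
For context, the tab_to helper might have looked roughly like the sketch below (the original isn’t shown in the post, so the markup and names here are illustrative). Because it eventually calls link_to, and link_to goes through url_for, the default_url_options override above is what restores the https scheme:

# app/helpers/application_helper.rb
# Rough sketch of a "tab_to" style helper -- not the original code.
def tab_to(label, options = {})
  css_class = current_page?(options) ? "tab current" : "tab"
  # link_to ultimately calls url_for, so it picks up
  # default_url_options (:only_path => false, :protocol => 'https')
  content_tag(:li, link_to(label, options), :class => css_class)
end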

Now the real kicker… Since I do not have SSL set up locally, I had to do some dev on our staging server to tweak the code and test that “https” showed up. So I turned off class caching: config.cache_classes = false.

However, when I cap deployed with it set back to “true”, https did not show up. @#$$##@!!!!%%$% AARGH.

I suspect it might have something to do with not being able to open up a cached class and redefine it? I don’t know… I am going to have to go explore this oddity next…

Anatomy of a MongoDB Profiling Session

This particular application has been collecting data for months now, but hasn’t really had any users by design. At 33GB of data, pulling up a list of messages received was taking f-o-r-e-v-e-r!

So I decided to document how I went about fixing a running production system… Hope it helps.

Log into mongo console and turn on profiling (the ‘1’) to monitor slow queries. I entered >10 seconds, which really stinks (!). You should adjust it to suit your app’s definition of “slow”—maybe 500ms:

> db.setProfilingLevel(1,10000)
{ "was" : 0, "slowms" : 100, "ok" : 1 }

Next I went back to the webapp and executed the page request that exhibits the slow response …

Once the page returns, go in and look for any slow responses that the profiler logged:

> db.system.profile.find()
{
  "ts" : ISODate("2012-02-18T15:34:02.967Z"), 
  "op" : "command", 
  "command" : { "count" : "messages", 
    "query" : { "_type" : "HL7Message", "recv_app" : "CAREGIVER" }, 
      "fields" : null }, 
  "ntoreturn" : 1, 
  "responseLength" : 48, 
  "millis" : 119051, 
  "client" : "192.168.100.67", 
  "user" : "" 
}
{
  "ts" : ISODate("2012-02-18T15:35:51.704Z"),
  "op" : "query", 
  "query" : { "_type" : "HL7Message", "recv_app" : "CAREGIVER" },
  "ntoreturn" : 25,
  "ntoskip" : 791025,
  "nscanned" : 791051,
  "nreturned" : 25,
  "responseLength" : 49956,
  "millis" : 108720,
  "client" : "192.168.100.67",
  "user" : "" 
}

You can see there was a count query and a query for the data itself (we are using pagination). Sure enough, look here:

  • “ntoreturn” : 25,
  • “nscanned” : 791051,


Wow, that’s nasty… to return 25 records, we scanned 791,051! Gulp. Looks like a full table scan. Never a good thing (unless you have very small amounts of data).

Let’s see what sorts of indexes exist for the messages collection:

db.system.indexes.find( { ns: "production-alerts.messages" } );
{ "name" : "_id_", "key" : { "_id" : 1 }, "v" : 0 }
{ "v" : 1, "key" : { "created_at" : -1 }, "name" : "created_at_-1" }
{ "v" : 1, "key" : { "_type" : 1 }, "name" : "_type_1" }
{ "v" : 1, "key" : { "recv_app" : 1 }, "name" : "recv_app_1" }
{ "v" : 1, "key" : { "created_at" : -1, "recv_app" : 1 }, "name" : "created_at_-1_recv_app_1" }
{ "v" : 1, "key" : { "message_type" : 1 }, "name" : "message_type_1" }
{ "v" : 1, "key" : { "trigger_event" : 1 }, "name" : "trigger_event_1" }

Well, as expected, there is no index covering the multiple keys that we are searching on. So let’s add a compound index to match the query used by the controller!

db.messages.ensureIndex({_type:1, recv_app:1});
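
Side note: on a live production collection this size, it may be worth building the index in the background so it doesn’t block other operations while it builds. I didn’t do that here, so treat this as a suggestion rather than what I actually ran:

db.messages.ensureIndex({_type:1, recv_app:1}, {background: true});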

Now the app FLIES!! We dropped from 100+ seconds to 1.5 seconds (look at the “millis”) w00t!

> db.messages.find({ _type : "HL7Message", recv_app : "CAREGIVER"}).explain();
{
	"cursor" : "BtreeCursor _type_1_recv_app_1",
	"nscanned" : 791153,
	"nscannedObjects" : 791153,
	"n" : 791153,
	"millis" : 1546,
	"nYields" : 0,
	"nChunkSkips" : 0,
	"isMultiKey" : false,
	"indexOnly" : false,
	"indexBounds" : {
		"_type" : [
			[
				"HL7Message",
				"HL7Message"
			]
		],
		"recv_app" : [
			[
				"CAREGIVER",
				"CAREGIVER"
			]
		]
	}
}
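
One last housekeeping step (not part of my original session): once you are done tuning, turn the profiler back off so it isn’t quietly collecting slow-query documents in production:

> db.setProfilingLevel(0)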

To prevent this sort of thing, consider adding indexes as you create new queries. But the best way is to be empirical: test whether the index actually earns its keep before you add it. I’ll leave that for another day!

The Bizarro Manifesto

Let’s try a little Bizarro[1] test (if you agree to these, I’ll poke you with a hot krypton stick):

We are uncovering better ways to provide the illusion of developing software by listening to others talk about watching people try. Through this (dare I call it?) work, we have come to value:

  • Dogmatic process and CASE-tool-like automation over inspiring quality individuals to interact with the team and the clients
  • Sufficient up-front comprehensive design specifications over seeing frequent, tangible, working results.
  • Writing detailed Statements of Work and negotiating changes over collaborating to do our collective best with the time and money at hand
  • Driving toward the original project plan over accommodating the client changing their mind, or a path turning into a dead end

To elaborate:

  • We prefer to focus on building software with lock-step process and tools — and reduce our need to worry about quality individuals and having conversations amongst developers or with the client.
    • that way we don’t need to worry about people issues and effective communication.
    • That way we can hire any individual regardless of skill, and forgo all verbal/personal interactions in favor of solely written words. Even better if those written words are automatically transformed into code. Maybe we can get non-coder tools! After all, people are merely fungible assets/resources, and software is factory-work — with processes and tools, and a horde of low-paid serfs, we can crank it out!
  • We prefer to spend a lot of time up-front ensuring we have the requirements and design specs fully determined —  rather than have tangible, working results early on.
    • We start with complete requirements specifications (often 400 pages), that follow our company standard template.
    • Even our Use Cases follow a mandatory 3-level deep path, with proper exception and alternate paths worked out.
    • We link the requirements items into detailed design documents — which include system design diagrams, interface specifications, and detailed object models.
    • If we don’t write it all down now, we’re likely to forget what we wanted. And if we don’t do it to the n-th degree, the developers might screw it up.
    • Writing it all down up front allows us to go on vacation while the process and tools “write” the code from the detailed specs/diagrams. Sweet.
    • In addition, we love to be rewarded by reaching meaningless intermediate deadlines that we place on our 1500-node Gantt chart.
    • When we combine all of the upfront work with important deadlines, many of the senior managers can get promoted due to their great commitment to generating reams of cool-looking documents. By the time the sh!t hits the fan when folks realize the “ship it” deadline is missed, the senior managers are no longer involved.
    • Besides, if we actually built software instead of writing all sorts of documents thinking about building software, our little ruse would be exposed!
  • We prefer to work under rigid Statements of Work — rather than actually work towards a “best-fit” solution given changing conditions of understanding and needs.
    • The agreement is based on the 400-page, fully-specified requirements document, and we pad the cost estimate with a 400% profit margin.
    • We then hire dozens of people to argue during the Change Control Review Board monthly meetings about re-writing code to deliver what you wanted versus what you asked for when you thought you knew what you wanted (and wrote it down in that 400-page doc that was signed off by 6 execs).
    • Contract negotiation pissing matches are such great uses of our collective resources and always result in perfect software! We love our fine print 🙂
    • With a 400% padding, the projects are too big to fail.
    • Once we are in it for 1 or 2 million and 50% done and 2x schedule overrun, who would ever say “No” to a contract extension? Who better to get you to the goal line than the same folks who squandered away your treasure, pissed away the calendar, and delivered no working software yet?
    • We like to appear like we’re just about done… Asymptote? Never heard of one.
  • We prefer to be driven by our initial plan — rather than dealing with change and having to re-print the Gantt.
    • Especially a Gantt chart that has been built with tender loving care to include resource allocations, inter-project dependencies, and partial resource allocation assignments for matrix-style organizations.
    • We love hiring a small army to ensure that we drive the entire team to meet every micro-task deadline even when they no longer make any sense.
    • The real fun is collecting the “actuals” data from the developers assigned to each task so we can compare it to their estimated hours.
    • And nothing sweeter than seeing 90% of our tasks being started, and 75% of those being 67% resolved;  and 25% of the resolved actually being complete — the roll-up summary to management is such a compelling story of success.
    • Changing such a beautiful plan that took 4 man-years to develop, that incorporates all of the comprehensive non-code documents, and is an appendix in the contract, is no small feat!
    • Better to produce the software according to plan even if nobody wants it that way. That’s our motto, and we’re not going to change!
    • We love the illusion of activity over the truth of delivered features.

Feel free to sign the manifesto below. It’s free to be certified.


[1] Credit goes to Superman and Bizarro World.

Jira vs Post-It Notes

Had some recent discussions on the subject of using tools or just sticky notes…

for me, a digital board representation of the kanban style 3-column view is merely one v-e-r-y small aspect of using something like jira.

though i could probably live with just post-it notes, i bet it would be an interesting challenge for me. i have been:

  1. doing only distributed work for so long (since late 90s)
  2. using jira so long, it is as simple as a pen and paper (or post-its)
  3. dissatisfied trying to do a project with something simpler like Basecamp, without the richness of jira
  4. working on long-term projects that have different people rolling through, and years of life (4000+ issues)

and i ask myself what other benefits do i derive from jira (plus, admittedly, a companion wiki)? why might i be uncomfortable with just post-its?  do i have an unnecessary “crutch” in the form of jira? hmmm… do i do more than is necessary in the way of using jira/wiki to add other pertinent documentation for the project?

well, here are the following things i can think of off the top of my head:

  • it is easy to
    • assign myself to an issue
    • move it to be in progress
    • move it to be resolved
  • our virtual chats hover around the greenhopper view
    • we’ll edit stuff on the fly as needed
    • quickly create a new issue if something pops up during the call
    • everybody refreshes their browser
  • we put a fair amount of supporting docs — details, helpful things — into the issue (or into the wiki and then that link goes in the issue)
    • that way you always know where to look 🙂
    • you can find it 24×7 and regardless of your GeoLoc
    • sometimes we’ll have skype chats and voice calls about the issue, maybe about design ideas. I’ll shove those things into the issue.
  • we can link related issues
  • we can use the search to look up old issues and refresh ourselves on what was asked for/done
  • we can have asynchronous, threaded conversations in the form of comments
  • we can track any myriad of other stuff against the issue
  • i log my hours against the issue
    • useful for billing if you need it…
  • i can use it to help generate pretty charts for management
  • we use the issue status to trigger QA (in addition to our chats)
  • i use jira to help ensure product version notes are up to date
    • jira manages the iterations
  • pretty easy to maintain a backlog, even chunking it up into groups if needed
  • easy to indicate that an issue has been rejected, and why

the point is, i find it incredibly useful to have modern technology at my fingertips…

in my experience, there is so much more to a project tracking “tool” than what post-it notes would seem to represent.

but i have NO (zero, nada, zilch) experience being part of a co-located team using post-it notes. so take it with a grain of salt :-)

Use Agile Wisely

For the past few years, I have been bothered by a nagging urge to write a pamphlet on software titled “Common Sense” — an homage to Thomas Paine’s great work.

This may be a stretch, and I may counter this point once I actually try and research and draw more parallels — or not. So here I am just thinking out loud, as it were. A risky means to formulate my ideas, I know.

The Agile Manifesto is akin to the Declaration of Independence and US Constitution. (Not in its impact on the world, and I make the comparison in complete deference to the greatness that were our Founders.)

Agile is about freedom and individual responsibility grounded in an agreed-upon framework of overarching ethics and morals. The beauty of our Founding Fathers was that they got to the essence of human behavior — including the depravity, moral weakness, and corruption by power that we humans suffer from. They designed a system of government that accounts for such weaknesses, and that provides a means of feedback and correction. Correcting in the small (local and state government), and in the large (amendments to the document itself).

The Agile Manifesto is similarly poised. The four main tenets of the Manifesto are irrefutable, getting to the essence of software development. The Manifesto sets the stage for great success without the shackles of a tyrannical, command-and-control form of management. It works from the bottom up.

There are many in the software world that decry the freedom of agile developers practicing their craft. Some organizations still drive a tyrannical structure within a framework of bureaucratic control. While many “Freedom Fighters” see the errors in these organizations, those on the inside are somehow oblivious, and probably believe in the superiority of command-and-control.

The lure of the bureaucratic layers to many managers always confounded me. Is it a lack of education about what freedom means? Is it a lack of courage to be individually accountable? Is it an allegiance to a known and friendly tyrannical structure — versus the unknown and scary individualism? Is it a belief that somehow a complex human organization can be broken down into its constituent parts, such that if each cog does their small bit, then the whole achieves its ultimate goal? It is, undoubtedly, simply part of human nature. Some folks are comfortable being risk takers and sticking their necks out, while others enjoy the comforts of a more controlled system within which to toil.

Over the past few years, our community has been inundated with “enlightened” Scrum Masters. The Church of Scrum has literally popped up overnight, anointing new converts at an alarming pace. With a relatively trivial-to-acquire certification, many organizations seek out said experts to be their savior on the road to riches.

Scrum itself is not the issue, after all it is Agile. But much like freedom in the hands of the uneducated often has disastrous results, so too does being Scrumified in the absence of understanding Agile.

Our young democratic republic worked hard to teach children about the US form of government and the hard-fought liberties we were blessed to enjoy. The children all studied from primers that helped them be educated about just “how” our form of government works. And they learned about the context… the “why” the framers of the Constitution chose their forms of checks and balances to stem undesirable side effects of human nature.

In the absence of much formal education about what the intent of the Agile Manifesto means, Scrum filled the void. That’s the beauty of a free market. Ideas can compete.

Much like our venerable representative democracy, Agile isn’t the most perfect form of s/w development, but it’s the best that has ever been. There are no guarantees that being “agile” will result in fabulous wealth and riches through the killer app. Just like there are no guarantees of outcome in a free society (at least there shouldn’t be any).

As with the United States of America, with great freedom lies great responsibility. Use agile wisely.

Considering Sprint Length

A friend of mine had an interesting situation:

  • Novel product, many unknowns
  • Multiple teams grouped into 3 product areas
  • Experience doing 3-week sprints

Only a fool would do anything other than the 30-day official sprint cycle that I saw on some website and in a few books.

(Just kidding. Unfortunately, like most of agile development, context has a tremendous impact on what you choose to do, process-wise.)

A lot could go into what the Optimal Sprint Length should be… You could ponder the independent variable (sprint length) and try to guess a value that optimizes the dependent variables — which would be, what, maybe cost, rate of feature delivery, and quality? You could do the “democratic process” and allow the team to vote, or even do “rock-paper-scissors” to figure out 2 or 3 weeks.

However, what if we built a continuum of sprint lengths for the sake of discussion? On the one end, we start at the idealization of doing one useful feature at a time and deploying it immediately — think simple web app. Anything longer than this is a compromise based on some (hopefully valid) reason. On the other extreme, we could wait until the entire system is done before deploying or integrating, maybe after 6 months or a year.

The cost of “batching up” the “work in process” at the upper end of long sprint lengths is pretty obvious to everyone. I submit that if you agree with (or experience first-hand) the premise that batching work has a non-linear impact on overall cost (including the hidden and subtle cost of everything that we know is bad with waterfall), then it stands to reason one might favor shorter cycles and less batching.

Not to digress, but the parallels exist in industry. To allow WIP to be large, and to allow certain parts of the process to run at high levels of batching, is a risk. A risk that the items in the batch, once released into the wild, are discovered to not be as valuable as first thought. Well, it’s water over the dam, time and effort you will never get back. (Think: extra features built because someone thought they would be useful, and it turned out that the marketplace thought otherwise.) Nonetheless, sometimes weighing the risks will lead you to some level of batch that makes the most sense.

There is often much more to the decision on sprint length than purely the development team. For example, what is the cost of QA? If the cost of QA is no different for 1 feature at a time versus a week’s worth of features, then QA cycle time/cost is not an issue. However, if it requires a week of QA time to regression test the system in the case of even a single small feature or bug fix, then you have a serious input into what the optimal sprint length should be.

Naturally, one could do development sprints at one frequency, and QA sprints at another… and even customer ship sprints at a completely different cycle time.

Regarding multiple teams… the same reasoning can be applied recursively, much as you would at the software architecture level. If the teams are horribly coupled, your costs will balloon and no amount of pondering sprint lengths will have a significant impact. If the work dependencies are carefully controlled between the teams, sprint length could vary between teams due to their own local reasons.

Much like the QA process can be a “tax” on each Sprint, what other taxes does your process incur? Running down to a one week sprint will likely reveal expensive parts of the process that could be ripe for improving.

So having said all of that… Here’s a thought. Why not simply agree to try out a few different lengths for enough sprints to get a feel for the differences? Try one-week sprints for the next 6 weeks. Try 3-week sprints three times. See if you can monitor metrics that will tell you what worked better. Consider that different teams might also work at different frequencies to test the “costs” of thinking the teams should be synchronized.

Much like with our USA republic, surely don’t let democratic mob rule win the day.

Do I Still Do Domain Modeling?

Got a very nice “blast from the past” contact (4 levels deep) on LinkedIn. Scott was a member of a team where we went through object modeling for their business application.

The reason I wanted to contact you was twofold:

First, for some reason that training has stuck with me more than many trainings, and I still go back to the cobweb section of my brain and bring it out every time I have an object modeling task, so thanks, you did a good job helping me. This is true for many of the people that were there.

Second, as I have been given another task of modeling a large system from scratch, I was wondering how you feel about the Domain-Neutral model that was presented in the training you did for us. I have found it useful over the years, but do you still use this model in anything that you design this many years in the future?

Thanks for the help.
Regards, Scott

And yes, I still do domain modeling the basic way that we did 11 years ago (gulp). I only have my old copy of Together these days, so I am stuck back in time there. In person, I always use post-it notes, flip-charts, and markers. Once I want to make a computer version, I’ll use different tools for modeling depending on the need. For example, UMLet is a great little tool for banging out a quick diagram in no time flat, and it can be used by anybody.

Another good approach from what we went through 10 years ago is to follow the color modeling order to help in the discovery process… that is, focusing on the following as you walk through the assorted primary user scenarios:

  1. First, trying to discover what time-sensitive aspects the business cares about most (the pinks)
  2. Then looking towards the roles that may be at play there (yellows).
  3. Are there any specific things (like contracts, purchase orders — greens)?
  4. And finally, any descriptive elements (blues) to go alongside the greens?

For the past 18 months, I have been in major love with Ruby, Rails, and MongoDB — plus all of the surrounding tools and community (see my recent blog posts). Ruby is the object-oriented language I wanted when I was doing C++, and MongoDB/MongoMapper is the OO-like DBMS that I always wanted. Putting it all together makes for really productive, high-quality, consistent development — I can quickly model out domain things and try them in a working application.
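
For instance, a first-pass domain sketch in MongoMapper can be banged out in minutes and exercised in a running app right away. The classes below are purely illustrative (not from any real project), just to show the shape of it:

# Illustrative MongoMapper sketch -- hypothetical domain classes only.
class PurchaseOrder                  # a specific "thing" (a green)
  include MongoMapper::Document
  key :number,    String
  key :placed_at, Time
  belongs_to :buyer                  # a role at play (a yellow)
  many :line_items
end

class LineItem
  include MongoMapper::EmbeddedDocument
  key :description, String           # descriptive element (a blue)
  key :quantity,    Integer
end

class Buyer
  include MongoMapper::Document
  key :name, String
end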

I’ve never done an adequate job of explaining how I like to approach doing just-enough design up front, but you can find a few snippets here:

Some tips that I find useful when doing initial up-front Domain Modeling (what you are about to do):

  • Spend time with subject matter experts and the business/product owners
  • Build out as you gather high-level features
  • Do breadth first, then depth
  • Only go into details when it will help reduce an unacceptable level of risk

That last bullet is key. I like to do just enough up-front work to satisfy the stakeholders and myself that we can answer the questions “how much?” and “when?” to the desired level of specificity. When you attempt a high-level estimate of a given feature and it is too big for your comfort, you can get more detail and break it down further so that it becomes acceptable.

If you do not do enough up front, and your estimate is off by an order of magnitude or three, you might upset the client!

On the flipside, if you do too much up front (detailed modeling), then you risk not getting frequent, tangible, working results into the hands of the client soon enough.

It’s a big balancing act.

You Don’t Always Have to Follow “The” Rules

A user was asking the following on the Agile Modeling list:

What experience does anyone have about standards for stories written for non-UI components? I’m working with a proxy PO who feels the standard story format (As a <user> I want to <activity> so that <purpose>) simply won’t work for something that doesn’t have a user interface. Imagine, for this example, a project that has the sole purpose of encrypting data without any user interaction.

For my needs, I often apply the principles behind the concepts, if not always the exact template of a suggested practice. Take for example, the use of my favorite tool, Cucumber, to write Acceptance Tests. Typically, cukes are written from the user point of view in classic “Given – When – Then.” But sometimes I like to use the cucumber style for testing APIs that I am building.

Here is one example at the Cucumber level (with the companion RSpec shown below):

Feature: Version 2.0: As we parse PDFs, we need to be able to collect a list of fonts
  as a way to help discern the structure of the parsed document based on
  heading levels, density of upper case text, and what not.

Scenario: Parsing a simple document
  Given a sample set of text
  |reference|points|value|
  |R9| 10| Patient: FRANKLIN, BENJAMIN|
  |R9| 10| CHIEF COMPLAINT:|
  When i parse the text
  And provide a set of base fonts
  |ref |basefont|
  | R9 | Helvetica-Bold|
  |R10 | Helvetica     |
  Then I should have the following font stats
  |reference|points|upper_cnt|lower_cnt|percent|
  |R9       | 10   | 4       | 1       | 83    |
  And the following font names
  |reference |points|basefont|
  |   R9     |  10  |Helvetica-Bold|
  |  R10     |  10  |Helvetica     |

Scenario Outline: Parsing a simple document
  When collecting a set of text <reference>, <points>, <value>
  Then I should have <upper_cnt> and <lower_cnt> word counts and <percent> Uppercase stats

  # Note: the counts are cumulative
  Examples:
    |reference|points|value|upper_cnt|lower_cnt|percent|
    | R9| 10| "Patient: FRANKLIN, BENJAMIN" |  2 |  1 | 66 |
    | R9| 10| "CHIEF COMPLAINT:"            |  4 |  1 | 83 |
    | R9| 10| "PRESCRIPTIONS"               |  5 |  1 | 89 |
    |R10|  9| "Motrin 600mg, Thirty (30), Take one q.i.d. as needed for pain, Note: Take with food, Refills: None."| 0 | 13 | 0 |
    |R10|  9| "(Discount Medication) < Michael L. Panera, PA-C 7/13/2010 17:40>"| 0 | 17 | 0 |

And here’s another one:

Feature: Extract meaningful data from Discharge Message

  Scenario: Extract headings
    Given a discharge message
    When the message is parsed
    Then I should see meaningful information, structured as headings and paragraphs
    And I can get formatted values for HTML display

  Scenario: Extract headings from second message
    Given a second discharge message
    When the message is parsed
    Then I can get formatted values for HTML display
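
Behind those plain-English steps are ordinary Ruby step definitions driving the API directly. The real step definitions aren’t shown in this post, so the sketch below is illustrative; the PdfParser calls are my guesses at the shape of the API:

# features/step_definitions/font_stats_steps.rb
# Illustrative sketch only -- the actual step definitions (and the exact
# PdfParser::FontCollector API) are assumed here, not copied from the app.
Given /^a sample set of text$/ do |table|
  @rows = table.hashes  # e.g. [{"reference" => "R9", "points" => "10", "value" => "..."}, ...]
end

When /^i parse the text$/ do
  @rows.each do |row|
    PdfParser::FontCollector.new(row["reference"], row["points"].to_i, row["value"])
  end
end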

But wait! There is more!

At the “Unit Test” level, RSpec specs can be made rather “English friendly” yet still be all about the underlying API, as this diagram and snippet show:

RSpec for an API

The Font Collector RSpec tests

describe PdfParser do
  describe PdfParser::FontCollector do

    before(:all) do
      PdfParser::FontCollector.clear_all()
      ...
    end

    context "initialize" do
      it "should reject missing font reference" do
        ...
      end
      it "should reject missing points" do
        ...
      end
      it "should reject missing value" do
        ...
      end
      it "should accept valid inputs" do
        ...
      end

      it "should start off with simple stats" do
        ...
      end

      it "should recognize reference+size is unique" do
        ...
      end
    end

    context "clear" do
      it "should clear the font list" do
        ...
      end
    end
  end
end


MongoDB Honey Badger

In case you don’t know about the Honey Badger—you have to watch this video. Then you will see why MongoDB is a close cousin to this feared and fearless animal!

Developing a new project where your domain classes/tables are changing rapidly?

MongoDB don’t care!

Tired of running rake db:migrate?

MongoDB don’t care!

Need to add a new “column” to your “table?”

MongoDB don’t care!

Want to query your “table” on “columns” that don’t exist?

MongoDB don’t care!

Need to add a new index on the fly?

MongoDB don’t care!

Welcome the Nastyass MongoDB into your development lair, and you won’t give a shit about your database growing and changing!

MongoDB don’t care!
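
In more concrete terms, a throwaway mongo shell session shows the attitude (collection and field names made up on the spot):

// Throwaway example -- collection and fields invented for illustration.
db.critters.insert({ name: "honey badger", fearless: true });
db.critters.insert({ name: "cobra", venomous: true });   // different "columns"? fine
db.critters.find({ venomous: true });                     // query a field most docs don't have
db.critters.ensureIndex({ name: 1 });                     // new index on the fly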

Find out more about Honey Badgers here — though Randall already taught us most of the salient points!


The Website Development Anti-Pattern

I don’t want to come down too hard on the author; he seems like a real nice, friendly guy. Writing a post like this took time, effort, thought, and care — which I applaud.

But when I read this post on Website Development — A Practical Methodology, I had flashbacks to Waterfall-esque DoD projects. Seriously? A phased approach with no partial deliveries, no iterations, no “sneaking up on the solution?”

Is this a post from, like, 1986? A phased approach?

  • “Once everything has been appropriately documented, it is time for the Implementation Phase.”
  • “Once implementation is complete, it is time to test.”
  • “When the testing is complete… you are ready to deploy.”

I humbly submit that you might want to intersperse some of those activities into a more iterative approach to reduce the time and cost of getting things wrong and discovering it late in the phases.

I just checked the date of the post again.

It is 2011.

Someone pinch me.