Category Archives: MongoDB

Anatomy of a MongoDB Profiling Session

This particular application has been collecting data for months now, but hasn’t really had any users by design. At 33GB of data, pulling up a list of messages received was taking f-o-r-e-v-e-r!

So I decided to document how to go about diagnosing and fixing a slow query on a running production system… Hope it helps.

Log into the mongo console and turn on profiling (the '1') to monitor slow queries. I set the threshold to 10,000 ms (10 seconds), which is a really generous definition of slow (!). You should adjust it to suit your app's definition of "slow", maybe 500 ms:

> db.setProfilingLevel(1,10000)
{ "was" : 0, "slowms" : 100, "ok" : 1 }
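The second argument to setProfilingLevel is the slow-query threshold in milliseconds, so a 500 ms threshold would be set like this:

> db.setProfilingLevel(1, 500)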

Next I went back to the webapp and executed the page request that exhibits the slow response …

Once the page returns, go in and look for any slow responses that the profiler logged:

> db.system.profile.find()
{
  "ts" : ISODate("2012-02-18T15:34:02.967Z"), 
  "op" : "command", 
  "command" : { "count" : "messages", 
    "query" : { "_type" : "HL7Message", "recv_app" : "CAREGIVER" }, 
      "fields" : null }, 
  "ntoreturn" : 1, 
  "responseLength" : 48, 
  "millis" : 119051, 
  "client" : "192.168.100.67", 
  "user" : "" 
}
{
  "ts" : ISODate("2012-02-18T15:35:51.704Z"),
  "op" : "query", 
  "query" : { "_type" : "HL7Message", "recv_app" : "CAREGIVER" },
  "ntoreturn" : 25,
  "ntoskip" : 791025,
  "nscanned" : 791051,
  "nreturned" : 25,
  "responseLength" : 49956,
  "millis" : 108720,
  "client" : "192.168.100.67",
  "user" : "" 
}

You can see there was a count query and a query for the data itself (we are using pagination). Sure enough, look here:

  • “ntoreturn” : 25,
  • “nscanned” : 791051,

 

Wow, that's nasty… to return 25 records, we scanned 791,051! Gulp. Looks like a full collection scan. Never a good thing (unless you have very small amounts of data).
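
Putting the two profile entries together, the pagination code is presumably issuing something equivalent to the following (the skip and limit values come straight from ntoskip and ntoreturn above):

> db.messages.count({ _type : "HL7Message", recv_app : "CAREGIVER" })
> db.messages.find({ _type : "HL7Message", recv_app : "CAREGIVER" }).skip(791025).limit(25)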

Let’s see what sorts of indexes exist for the messages collection:

db.system.indexes.find( { ns: "production-alerts.messages" } );
{ "name" : "_id_", "key" : { "_id" : 1 }, "v" : 0 }
{ "v" : 1, "key" : { "created_at" : -1 }, "name" : "created_at_-1" }
{ "v" : 1, "key" : { "_type" : 1 }, "name" : "_type_1" }
{ "v" : 1, "key" : { "recv_app" : 1 }, "name" : "recv_app_1" }
{ "v" : 1, "key" : { "created_at" : -1, "recv_app" : 1 }, "name" : "created_at_-1_recv_app_1" }
{ "v" : 1, "key" : { "message_type" : 1 }, "name" : "message_type_1" }
{ "v" : 1, "key" : { "trigger_event" : 1 }, "name" : "trigger_event_1" }

Well, as expected, there is no index covering both of the keys that we are searching on. So let's add a compound index to match the query used by the controller!

db.messages.ensureIndex({_type:1, recv_app:1});

Now the app FLIES!! We dropped from 100+ seconds to 1.5 seconds (look at the “millis”) w00t!

> db.messages.find({ _type : "HL7Message", recv_app : "CAREGIVER"}).explain();
{
	"cursor" : "BtreeCursor _type_1_recv_app_1",
	"nscanned" : 791153,
	"nscannedObjects" : 791153,
	"n" : 791153,
	"millis" : 1546,
	"nYields" : 0,
	"nChunkSkips" : 0,
	"isMultiKey" : false,
	"indexOnly" : false,
	"indexBounds" : {
		"_type" : [
			[
				"HL7Message",
				"HL7Message"
			]
		],
		"recv_app" : [
			[
				"CAREGIVER",
				"CAREGIVER"
			]
		]
	}
}

To prevent this sort of thing, consider adding indexes as you create new queries. Better still, be empirical: verify through testing whether the index is actually needed. I'll leave that for another day!

Exporting MongoMapper Objects to JSON

I wanted to export a MongoMapper document and its related documents as JSON, with embedded arrays for the collections. Invoking to_json did not seem to work perfectly, so I set about discovering what was going on.

Conclusion

If you use Embedded Documents for every associated document, the to_json method will work perfectly.

If you have normal Documents, you must override the as_json method to export the object “tree.”

Details

Here is a walk through of exporting mongo documents as JSON.

I created a simple Author class and will use a simple test to show how to_json works:

describe "Author 1" do
  before :all do
    class Author
      include MongoMapper::Document
      key :name
      key :pen_name
    end
  end

  it "should output JSON" do
    p1 = Author.create(:name => "Ben Franklin", :pen_name => "Poor Richard")
    json = p1.to_json
    puts json
    json.should include "name"
    json.should include "Poor Richard"
  end
end

And we get what we expect:

{
  "books":[],
  "id":"4f316a4c8951a2eefe000001", 
  "name":"Ben Franklin",
  "pen_name":"Poor Richard"
}

Now let’s add a new Book document of the Embedded variety. Here we will assert that the Author JSON should include a list of Books:

describe "Author 2" do
  before :all do
    class Book
      include MongoMapper::EmbeddedDocument
      key :title
    end
    class Author
      include MongoMapper::Document
      many :books
    end
  end
  it "authors have books" do
    p1 = Author.create(:name => "Ben Franklin", :pen_name => "Poor Richard",
                       :books => [Book.new(:title => "Poor Richard's Almanac")])
    json = p1.to_json
    puts json
    json.should include "Poor Richard"
    json.should include "Almanac"
  end
end

And, sure enough, it works.

{
  "books":[{
    "id":"4f316a4c8951a2eefe000003", 
    "title":"Poor Richard's Almanac"}],
  "id":"4f316a4c8951a2eefe000004", 
  "name":"Ben Franklin", 
  "pen_name":"Poor Richard"
}

Let’s add a list of Interests to the Author class, this time as a normal document type (not embedded). Now we can test that the Author JSON has the expected Interest:

describe "Author 3" do
  before :all do
    class Interest
      include MongoMapper::Document
      key :title, String
    end
    class Book
      include MongoMapper::EmbeddedDocument
      key :title
    end
    class Author
      include MongoMapper::Document
      many :books
      many :interests
    end
  end
  it "should have interests" do
    p1 = Author.create(:name => "Ben Franklin", :pen_name => "Poor Richard",
                       :books => [Book.new(:title => "Poor Richard's Almanac")],
                       :interests => [Interest.create(:title => "Movies")])
    json = p1.to_json
    puts json
    json.should include "Poor Richard"
    json.should include "Almanac"
    json.should include "Movies" # Fails
  end
end

Whoa! No joy! Seems that the association to non-Embedded documents does not get automatically exported to the JSON.

{
  "books":[{
    "id":"4f316a4c8951a2eefe000006",
    "title":"Poor Richard's Almanac"}],
  "id":"4f316a4c8951a2eefe000007", 
  "name":"Ben Franklin", 
  "pen_name":"Poor Richard"
}

And we get a failed spec 🙁

# expected "{"books":[{"id":"4f316a4c8951a2eefe000006","title":"Poor Richard's Almanac"}],"id":"4f316a4c8951a2eefe000007","name":"Ben Franklin","pen_name":"Poor Richard"}" to include "Movies"

Turns out we can add a custom as_json implementation to the class that you want to export as JSON. The as_json method is responsible for indicating which fields and collections should be included in the JSON.

describe "Author 4" do
  before :all do
    class Interest
      include MongoMapper::Document
      key :title, String

    end
    class Book
      include MongoMapper::EmbeddedDocument
      key :title
    end
    class Author
      include MongoMapper::Document
      many :books
      many :interests

      def as_json options={}
        {
            :name => self.name,
            :pen_name => self.pen_name,
            :books => self.books,
            :interests => self.interests
        }
      end
    end
  end
  it "should have interests in json" do
    p1 = Author.create(:name => "Ben Franklin", :pen_name => "Poor Richard",
                       :books => [Book.new(:title => "Poor Richard's Almanac")],
                       :interests => [Interest.create(:title => "Movies")])
    json = p1.to_json
    puts json
    json.should include "Poor Richard"
    json.should include "Almanac"
    json.should include "Movies"
  end
end

And we have Books and Interests. Success!

{
  "name":"Ben Franklin", 
  "pen_name":"Poor Richard",
  "books":[{
    "id":"4f31782a8951a2f267000002", 
    "title":"Poor Richard's Almanac"}], 
  "interests":[{
    "author_id":"4f31782a8951a2f267000003", 
    "id":"4f31782a8951a2f267000001",
    "title":"Movies"}]
}

 

MongoDB Group Map-Reduce Performance

I wanted to get some aggregated count data by message type for a collection with over 1,000,000 documents.

The model is more or less this (a lot removed for simplicity):

class MessageLog
  include MongoMapper::Document

  # Attributes ::::::::::::::::::::::::::::::::::::::::::::::::::::::
  # Message's internal timestamp / when it was sent
  key :time, Time
  # Message type
  key :event_type, String, :default => "Not Set"
  # The message ID
  key :control_id, String, :index => true

  # Add created_at, updated_at
  timestamps!

  # Indexes :::::::::::::::::::::::::::::::::::::::::::::::::::::::::
  def self.create_indexes
    MessageLog.ensure_index([[:created_at,1]])
    MessageLog.ensure_index([[:event_type,1]])
  end
end

Distinct

Though I originally hard-coded the message types (as they do not change very often, and are meaningless without other code changes anyway), I figured I would test dynamically gathering the distinct types. MongoDB supports the distinct function. From the MongoDB console:

> db.message_logs.distinct("event_type")
[
	"Bed Order",
	"Cactus Update",
	"ED Release",
	"ED Summary",
	"Inpatient Admit",
	"Inpatient Discharge Summary",
	"Not Set",
	"Registration",
	"Registration Update",
	"Unknown Message Type"
]

Though I saw distinct in MongoMapper, I had trouble getting it to work (this is an older app on < v2.0; I got a method-missing error).

However, a very powerful technique within MongoMapper worked perfectly: every MongoMapper model exposes its underlying MongoDB collection (the db.collection-style object), which helps when you need to execute MongoDB-style commands:

class MessageLog
  # @return [Array] a list of unique types (strings)
  def self.event_types
    MessageLog.collection.distinct("event_type")
  end
end

Simple Count

I used a simple technique to iterate over each type and get the associated count:

class MessageLog
  # Perform a group aggregation by event type.
  #
  # @return [Hash] the number of message logs per event type.
  def self.count_by_type
    results = {}
    MessageLog.event_types.each {|type| results[type] = MessageLog.count(:event_type => type)}
    results
  end
end

Map-Reduce Too Slow

In this instance, it turned out that Map-Reduce was significantly slower, and I am not exactly sure why, other than to suppose that iterating over every document is more expensive than calling count with a filter on the event_type key (which is covered by an index).

class MessageLog
  # Perform a group aggregation by event type.
  # Map-Reduce was slow by comparison (20 seconds vs 2.3 seconds)
  #
  # @return [Hash] the number of message logs per event type.
  def self.count_by_type_mr
    results = {}
    counts = MessageLog.collection.group({
      :key     => :event_type,
      :cond    => {},
      :reduce  => 'function(doc, prev) { prev.count += 1; }',
      :initial => {:count => 0}
    })
    counts.each {|r| results[r["event_type"]] = r["count"]}
    results
  end
end
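
For reference, a rough shell equivalent of that driver call (run from the mongo console) would be:

> db.message_logs.group({
    key : { event_type : 1 },
    cond : {},
    reduce : function(doc, prev) { prev.count += 1; },
    initial : { count : 0 }
  })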

Performance Results

As you can see, Map-Reduce took about 10 times longer: ~21 seconds versus ~2.3 seconds.

And this is over 1,129,519 documents, so it is a non-trivial test, IMO.

> measure_mr = Benchmark.measure("count") { results = MessageLog.count_by_type_mr}
> measure = Benchmark.measure("count") { results = MessageLog.count_by_type }
ruby-1.8.7-p334 :010 > puts measure_mr
  0.000000   0.000000   0.000000 ( 20.794720)
> puts measure
  0.020000   0.000000   0.020000 (  2.340708)
> results.map {|k,v| puts "#{k} #{v}"}
Not Set                          1
Inpatient Admit              4,493
Unknown Message Type         1,292
Bed Order                    6,948
Registration Update        852,189
Registration               123,064
ED Summary                  94,933
Cactus Update               10,145
Inpatient Discharge Summary 18,150
ED Release                  18,304

Summary

You may get better performance using simpler techniques for simple aggregate commands; Map-Reduce may shine on more complex computations and queries.

But your best bet is to test it out with meaningful data samples.

MongoDB Index Performance

As part of this (unintended) mini-series on MongoDB and indexing, I wrote a little test to see if I could document performance gains through indexing. I used real-world data, albeit only 50,000 records, to query out a handful of documents (24 being the most).

Here is the code:

require 'test_helper'

class EncounterListingTest < Test::Unit::TestCase

  context "Indexing" do
    ProfileStats2 = Struct.new(:doctor_num, :count, :timing1, :timing2, :timing3)

    should "profile assorted doctor patient retrievals" do
      stats = []
      doctor_nums = ["602490", "603324", "212043", "602938"]
      doctor_nums.each_with_index do |doctor_num, i|
        MongoMapper.database.collection('encounters').drop_indexes
        show_indexes if i == 0
        timing1 = (measure_performance(doctor_num) + measure_performance(doctor_num) + measure_performance(doctor_num))/3

        MongoMapper.database.collection('encounters').drop_indexes
        add_index([[:private_physician, 1]])
        show_indexes if i == 0
        timing2 = (measure_performance(doctor_num) + measure_performance(doctor_num) + measure_performance(doctor_num))/3

        MongoMapper.database.collection('encounters').drop_indexes
        add_index([[:private_physician,1], [:notify_physician,1], [:visible_count,1]])
        show_indexes if i == 0
        timing3 = (measure_performance(doctor_num) + measure_performance(doctor_num) + measure_performance(doctor_num))/3

        n_count = Encounter.count(:private_physician => doctor_num, :notify_physician => 'Y', :visible_count.gt => 0)
        stats << ProfileStats2.new(doctor_num, n_count, timing1, timing2, timing3)

      end

      File.open("test/performance/index_stats_results-#{Time.now.strftime("%d-%m-%Y")}.csv", 'w') do |f|
        puts "%10s  %6s  %5s  %5s  %5s" % ["doctor", "count", "None", "Phys", "Phys/Ntfy/Vis"]
        f.puts "doctor, count, None, Phys, PhysNtfyVis"
        stats.each do |s|
          results = "%10d, %6d, %5.3f, %5.3f, %5.3f" % [s.doctor_num, s.count, s.timing1, s.timing2, s.timing3]
          puts results
          f.puts "%d, %d, %5.3f, %5.3f, %5.3f" % [s.doctor_num, s.count, s.timing1, s.timing2, s.timing3]
        end
      end

    end

  end

  private
  def show_stats(stats)
    stats.each do |s|
      puts "%6d, %5.3f, %s" % [s.count, s.timing, s.index_type]
    end
  end

  def measure_performance(doctor_num = "99602326")
    start = Time.now
    n_public = Encounter.where(:private_physician => doctor_num, :notify_physician => 'Y', :visible_count.gt => 0).all
    delta = Time.now - start
    delta
  end

  def show_indexes
    puts "%s INDEXES %s" % ["*"*12, "*"*12]
    Encounter.collection.index_information.collect { |index| puts "    #{index[0]}" }
  end

  def add_index(new_index)
    coll = MongoMapper.database.collection('encounters')
    coll.drop_index(new_index) if !coll.index_information.detect { |index| index[0] == new_index }.nil?
    Encounter.ensure_index(new_index)
  end

end

Results:

(Figure: the effect of adding indexes on query performance.)

The results are shown in the accompanying graph. Except for the query that returned 24 documents, the general trend was that the three-field compound index was better than the single-field index, and one index was w-a-a-a-y better than none (of course, you already knew that). The odd outlier was the count = 6 case, where the single index did not perform as well as it did in the other tests.

A Walk Through the Valley of Indexing in MongoDB

As you walk through the valley of MongoDB performance, you will undoubtedly find yourself wanting to optimize your indexes at some point or other.

How to Watch Your Queries

Run your database with profiling on. I have an alias for starting up mongo in profile mode (‘p’ stands for profile):

alias mongop="<mongodb-install>/bin/mongod
      --smallfiles --noprealloc --profile=1 --dbpath <mongodb-install>/data/db"

With --profile=1 and no explicit slowms setting, queries taking longer than 100 ms are deemed "slow."
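
You can confirm the current setting at any time from the mongo shell (it should report level 1 when mongod was started with --profile=1):

> db.getProfilingLevel()
1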

Add a logger (if you are using MongoMapper) and tail the log file to see the queries.

##### MONGODB SETTINGS #####
# You can use :logger => Rails.logger, to get output of the mongo queries, or create a separate logger.
logger = Logger.new('mongo-development.log')
MongoMapper.connection = Mongo::Connection.new('localhost', 27017, {:pool_size => 5, :auto_reconnect => true, :logger => logger})
MongoMapper.database = "mdalert-development"

Run your query/exercise the app.

Examine the mongo log (trimmed for legibility), and look for the primary collection you were querying (accounts, in this case).

['$cmd'].find({"count"=>"settings", "query"=>{:identifier=>"ItemsPerPage"}, "fields"=>nil}).limit(-1)
['$cmd'].find({"count"=>"accounts", "query"=>{:state=>"active"}, "fields"=>nil}).limit(-1)
['accounts'].find({:state=>"active"}).limit(15).sort([[:email, 1]])
['settings'].find({:identifier=>"AutoEmail"}).limit(-1)

Use MongoDB’s Explain to dig deeper.

Open up the mongo shell (<mongodb-install>/bin/mongo) and enter the query that you want explained. Hint: you can take much of it from the query in the log.

Without any indexes, you can see the query is basically scanning the entire collection. A bad thing! Another telltale sign is the cursor type, "BasicCursor."

> db.accounts.find({state: "active"}).limit(15).sort({email: 1}).explain();
{
  "cursor" : "BasicCursor",
  "nscanned" : 11002,
  "nscannedObjects" : 11002,
  "n" : 15,
  "scanAndOrder" : true,
  "millis" : 44,
  "nYields" : 0,
  "nChunkSkips" : 0,
  "isMultiKey" : false,
  "indexOnly" : false,
  "indexBounds" : {
  }
}

Since I was doing a find on state, and a sort on email (or last_name), I added a compound index using MongoMapper (you could have just as easily done it at the mongo console).

Account.ensure_index([[:state,1],[:email,1]])
Account.ensure_index([[:state,1],[:last_name,1]])
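
The equivalent commands at the mongo console would be:

> db.accounts.ensureIndex({state: 1, email: 1})
> db.accounts.ensureIndex({state: 1, last_name: 1})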

Re-running the explain, you can see

  • the cursor type is now BtreeCursor (i.e., using an index)
  • the entire table is not scanned.
  • The retrieval went from 44 millis down to 2 millis
  • Success!!
> db.accounts.find({state: "active"}).limit(15).sort({email: 1}).explain();
{
  "cursor" : "BtreeCursor state_1_email_1",
  "nscanned" : 15,
  "nscannedObjects" : 15,
  "n" : 15,
  "millis" : 2,
  "nYields" : 0,
  "nChunkSkips" : 0,
  "isMultiKey" : false,
  "indexOnly" : false,
  "indexBounds" : {
    "state" : [ [ "active", "active" ] ],
    "email" : [ [ {"$minElement" : 1}, {"$maxElement" : 1} ] ]
  }
}

Fiddling a Bit More – Using the Profiler

You can drop into the mongo console and see more specifics using the mongo profiler.

> db.setProfilingLevel(1,15)
{ "was" : 1, "slowms" : 100, "ok" : 1 }

For this example, I cleared the indexes on accounts, ran the following query, and examined its profile data.
Note: the timing can vary over successive runs, but it is generally fairly consistent, and it is close to the "millis" value you see in explain output.

> db.accounts.find({state: "active"}).limit(15).sort({email: 1})
> db.system.profile.find();
{ "ts" : ISODate("2011-11-27T21:09:04.237Z"),
  "info" : "query mdalert-development.accounts
  ntoreturn:15 scanAndOrder
  reslen:8690
  nscanned:11002
  query: { query: { state: "active" },
    orderby: { email: 1.0 } }
    nreturned:15 163ms", "millis" : 43 }

Now let’s add back the indexes… one at a time. First up, let’s add “state.”

> db.accounts.ensureIndex({state:1})
>db.accounts.find({state: "active"}).limit(15).sort({email: 1})
db.system.profile.find({info: /.accounts/})
{ "ts" : ISODate("2011-11-27T21:26:29.801Z"),
  "info" : "query mdalert-development.accounts
  ntoreturn:15 scanAndOrder
  reslen:546
  nscanned:9747
  query: { query: { state: "active" },
    orderby: { email: 1.0 }, $explain: true }
    nreturned:1 81ms", "millis" : 81 }

Hmmm. Not so good! Let’s add in the compound index that we know we need, and run an explain:

>db.accounts.ensureIndex({state:1,email:1})
> db.accounts.find({state: "active"}).limit(15).sort({email: 1}).explain()
{
	"cursor" : "BtreeCursor state_1_email_1",
	"nscanned" : 15,
	"nscannedObjects" : 15,
	"n" : 15,
	"millis" : 0,

And sure enough, we get good performance. The millis is so small that this query will not even show up in the profiler.

If you want to clear the profile stats, you'll soon find out you can't remove documents from it (system.profile is a capped collection). The only way I found to do it was as follows:

  • restart mongod in non-profiling mode
  • reopen the mongo console and type:
    db.system.profile.drop()

You should now see the profile being empty:

> show profile
db.system.profile is empty

Now you can restart mongod in profiling mode and see your latest profiling data without all the ancient history.
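
If restarting mongod is inconvenient, a shell-only sequence usually works as well; profiling just has to be off for the database before the drop will succeed:

> db.setProfilingLevel(0)
> db.system.profile.drop()
> db.setProfilingLevel(1, 100)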

Some Gotchas

Regex searches can't use the index effectively

If your query is a regex (here, a case-insensitive one), the index can't narrow the search; notice below that nscanned covers the whole index. With the regex, retrieval is 501 ms (not bad, given 317K records); one exception is noted after the two explain outputs below:

> db.message_logs.find({patient_name:/ben franklin/i}).sort({created_at:-1}).explain()
{
	"cursor" : "BtreeCursor patient_name_1 multi",
	"nscanned" : 317265,
	"nscannedObjects" : 27,
	"n" : 27,
	"millis" : 501,
	"nYields" : 0,
	"nChunkSkips" : 0,
	"isMultiKey" : false,
	"indexOnly" : false,
	"indexBounds" : {
		"patient_name" : [ [ "", { } ], [ /ben franklin/, /ben franklin/ ] ]
	}
}

Without regex, it is essentially instantaneous:

> db.message_logs.find({patient_name:'Ben Franklin'}).sort({created_at:-1}).explain()
{
	"cursor" : "BtreeCursor patient_name_1",
	"nscanned" : 27,
	"nscannedObjects" : 27,
	"n" : 27,
	"scanAndOrder" : true,
	"millis" : 0,
	"nYields" : 0,
	"nChunkSkips" : 0,
	"isMultiKey" : false,
	"indexOnly" : false,
	"indexBounds" : {
		"patient_name" : [ [ "Ben Franklin", "Ben Franklin" ] ]
	}
}
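
The one exception: an anchored, case-sensitive prefix regex, which MongoDB can turn into a range scan on the index. A quick (hypothetical) check would look like this:

> db.message_logs.find({patient_name: /^Ben/}).explain()
// expect a BtreeCursor on patient_name_1 with tight index bounds, roughly [ "Ben", "Beo" ]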

Tips for examining profiler output

  • Look at the most recent offenders:
    show profile
  • Look at a single collection:
    db.system.profile.find({info: /message_logs/})
  • Look at slow queries:
    db.system.profile.find({millis : {$gt : 500}})
  • Look at a single collection with a specific query param, and a response >100ms:
    db.system.profile.find({info: /message_logs.*patient_name/, millis : {$gt : 100}})

Summary

So here you have an example of how to see indexes in action, how to create them, and how to measure their effects.


Mongo Remembers All Keys

On the MongoMapper group list, Nick was wondering about getting key names from the model. But he noticed it remembered keys that had once been used… He wanted to only be able to see the current state of his MongoMapper class, I suppose… No dice, Nick!

Remember, MongoMapper Don’t Care! MongoMapper also does not forget! You can always see what keys were ever used as demonstrated here:

MongoMapper.database.collection('users').drop
class User
  include MongoMapper::Document

  key :name, String, :required => true
end
User.destroy_all
text = []
text << "After model with key :name, String"
text << User.keys.keys.inspect

text <<  'User.create(:name => "Fred")'
User.create(:name => "Fred")
text <<  User.keys.keys.inspect

text <<  'User.create(:name => "Fred", :email => "me@me.com")'
User.create(:name => "Fred", :email => "me@me.com")
text <<  User.keys.keys.inspect

text <<  'User.destroy_all'
User.destroy_all
text <<  User.keys.keys.inspect

text.each {|t| puts t}

You can see how the model keys reflect what is in the model class and in the actual document store (that is, dynamically added via a create):

After model with key :name, String
["name", "_id"]
User.create(:name => "Fred")
["name", "_id"]
User.create(:name => "Fred", :email => "me@me.com")
["name", "_id", "email"]
User.destroy_all
["name", "_id", "email"]

Now let’s extend the model class to add a new city key:

class User
  include MongoMapper::Document
  key :name, String, :required => true
  key :city, String
end
text = []
text <<  'Extended the class, adding city'
text <<  User.keys.keys.inspect
text.each {|t| puts t}

As expected: there is the new key:

Extended the class, adding city
["city", "name", "_id", "email"]

Removing Keys

If you accidentally added keys, then you should remove them. For example, I accidentally had an uppercase key in the model for a while (oops). Here is how I eradicated it from the database store:

  def self.purge_msid_key
    uppercase_msid_acts = Account.where(:MSID.exists => true).count
    if uppercase_msid_acts > 0
      Account.unset({}, :MSID)
    end
  end
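
For the record, the raw multi-document update that does the same thing from the mongo console ($unset with multi=true) would be something like:

> db.accounts.update({ MSID: { $exists: true } }, { $unset: { MSID: 1 } }, false, true)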

Related MongoMapper Issue: Track Loaded Keys at the Instance Level

MongoDB Honey Badger

In case you don’t know about the Honey Badger—you have to watch this video. Then you will see why MongoDB is a close cousin to this feared and fearless animal!

Developing a new project where your domain classes/tables are changing rapidly?

MongoDB don’t care!

Tired of running rake db:migrate?

MongoDB don’t care!

Need to add a new “column” to your “table?”

MongoDB don’t care!

Want to query your “table” on “columns” that don’t exist?

MongoDB don’t care!

Need to add a new index on the fly?

MongoDB don’t care!

Welcome the Nastyass MongoDB into your development lair, you won’t give a shit about your database growing and changing!

MongoDB don’t care!

Find out more about Honey Badgers here — though Randall already taught us most of the salient points!