
Anatomy of a MongoDB Profiling Session

This particular application has been collecting data for months now, but hasn’t really had any users by design. At 33GB of data, pulling up a list of messages received was taking f-o-r-e-v-e-r!

So I decided to document how to go about fixing a running production system… Hope it helps.

Log into the mongo console and turn on profiling (the ‘1’) to monitor slow queries. I set the threshold to anything over 10 seconds, which really stinks (!). You should adjust it to suit your app’s definition of “slow”, maybe 500 ms:

> db.setProfilingLevel(1,10000)
{ "was" : 0, "slowms" : 100, "ok" : 1 }

Next I went back to the webapp and executed the page request that exhibited the slow response …

Once the page returns, go in and look for any slow responses that the profiler logged:

> db.system.profile.find()
{
  "ts" : ISODate("2012-02-18T15:34:02.967Z"), 
  "op" : "command", 
  "command" : { "count" : "messages", 
    "query" : { "_type" : "HL7Message", "recv_app" : "CAREGIVER" }, 
      "fields" : null }, 
  "ntoreturn" : 1, 
  "responseLength" : 48, 
  "millis" : 119051, 
  "client" : "192.168.100.67", 
  "user" : "" 
}
{
  "ts" : ISODate("2012-02-18T15:35:51.704Z"),
  "op" : "query", 
  "query" : { "_type" : "HL7Message", "recv_app" : "CAREGIVER" },
  "ntoreturn" : 25,
  "ntoskip" : 791025,
  "nscanned" : 791051,
  "nreturned" : 25,
  "responseLength" : 49956,
  "millis" : 108720,
  "client" : "192.168.100.67",
  "user" : "" 
}
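
If the profile collection is noisy, a quick way to surface the worst offenders is to sort by millis (a general-purpose filter, not specific to this session):

> db.system.profile.find().sort({millis: -1}).limit(5)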

You can see there was a count query and a query for the data itself (we are using pagination). Sure enough, look here:

  • “ntoreturn” : 25,
  • “nscanned” : 791051,

Wow, that’s nasty… to return 25 records, we scanned 791,051! Gulp. Looks like a full table scan. Never a good thing (unless you have very small amounts of data).
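
For reference, the paginated query the controller issues would look something like this in the shell (reconstructed from the ntoskip/ntoreturn values above, not the actual application code):

> db.messages.find({_type: "HL7Message", recv_app: "CAREGIVER"}).skip(791025).limit(25)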

Let’s see what sorts of indexes exist for the messages collection:

db.system.indexes.find( { ns: "production-alerts.messages" } );
{ "name" : "_id_", "key" : { "_id" : 1 }, "v" : 0 }
{ "v" : 1, "key" : { "created_at" : -1 }, "name" : "created_at_-1" }
{ "v" : 1, "key" : { "_type" : 1 }, "name" : "_type_1" }
{ "v" : 1, "key" : { "recv_app" : 1 }, "name" : "recv_app_1" }
{ "v" : 1, "key" : { "created_at" : -1, "recv_app" : 1 }, "name" : "created_at_-1_recv_app_1" }
{ "v" : 1, "key" : { "message_type" : 1 }, "name" : "message_type_1" }
{ "v" : 1, "key" : { "trigger_event" : 1 }, "name" : "trigger_event_1" }

Well, as expected, there is no index covering the multiple keys that we are searching on. So let’s add a compound index to match the query used by the controller!

db.messages.ensureIndex({_type:1, recv_app:1});

Now the app FLIES!! We dropped from 100+ seconds to 1.5 seconds (look at the “millis”) w00t!

> db.messages.find({ _type : "HL7Message", recv_app : "CAREGIVER"}).explain();
{
	"cursor" : "BtreeCursor _type_1_recv_app_1",
	"nscanned" : 791153,
	"nscannedObjects" : 791153,
	"n" : 791153,
	"millis" : 1546,
	"nYields" : 0,
	"nChunkSkips" : 0,
	"isMultiKey" : false,
	"indexOnly" : false,
	"indexBounds" : {
		"_type" : [
			[
				"HL7Message",
				"HL7Message"
			]
		],
		"recv_app" : [
			[
				"CAREGIVER",
				"CAREGIVER"
			]
		]
	}
}

To prevent this sort of thing, you can consider adding indexes when you create new queries. But the best approach is to be empirical: test and measure whether a given index actually earns its keep. I’ll leave that for another day!
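
That said, one quick sanity check you can do right in the shell is to compare explain() timings with and without the candidate index, using hint() to force each access path (a rough sketch, not a substitute for testing against production-sized data):

> db.messages.find({_type: "HL7Message", recv_app: "CAREGIVER"}).hint({$natural: 1}).explain().millis
> db.messages.find({_type: "HL7Message", recv_app: "CAREGIVER"}).hint({_type: 1, recv_app: 1}).explain().millis

The first forces a full collection scan ($natural order); the second forces the compound index we just built.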

A Walk Through the Valley of Indexing in MongoDB

As you walk through the valley of MongoDB performance, you will undoubtedly find yourself wanting to optimize your indexes at some point or other.

How to Watch Your Queries

Run your database with profiling on. I have an alias for starting up mongo in profile mode (‘p’ stands for profile):

alias mongop="<mongodb-install>/bin/mongod
      --smallfiles --noprealloc --profile=1 --dbpath <mongodb-install>/data/db"

This defaults to deeming queries that take more than 100 ms “slow.”
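
If 100 ms isn’t the threshold you want, mongod also accepts a --slowms flag, so the alias could pin it explicitly (500 ms here, pick whatever fits your app):

alias mongop="<mongodb-install>/bin/mongod
      --smallfiles --noprealloc --profile=1 --slowms=500 --dbpath <mongodb-install>/data/db"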

Add a logger (if you are using MongoMapper) and tail the log file to see the queries.

##### MONGODB SETTINGS #####
# You can use :logger => Rails.logger to get output of the mongo queries, or create a separate logger.
logger = Logger.new('mongo-development.log')
MongoMapper.connection = Mongo::Connection.new('localhost', 27017, {:pool_size => 5, :auto_reconnect => true, :logger => logger})
MongoMapper.database = "mdalert-development"

Run your query/exercise the app.

Examine the mongo log (trimmed for legibility below) and look for the primary collection you were querying (here, accounts).

['$cmd'].find({"count"=>"settings", "query"=>{:identifier=>"ItemsPerPage"}, "fields"=>nil}).limit(-1)
['$cmd'].find({"count"=>"accounts", "query"=>{:state=>"active"}, "fields"=>nil}).limit(-1)
['accounts'].find({:state=>"active"}).limit(15).sort(email1)
['settings'].find({:identifier=>"AutoEmail"}).limit(-1)

Use MongoDB’s Explain to dig deeper.

Open up the mongo shell (<mongodb-install>/bin/mongo) and enter the query that you want explained. Hint: you can take much of it from the query in the log.

Without any indexes, you can see the query is basically scanning the entire collection. A bad thing! Another tip-off is that the cursor type is “BasicCursor.”

> db.accounts.find({state: "active"}).limit(15).sort({email: 1}).explain();
{
  "cursor" : "BasicCursor",
  "nscanned" : 11002,
  "nscannedObjects" : 11002,
  "n" : 15,
  "scanAndOrder" : true,
  "millis" : 44,
  "nYields" : 0,
  "nChunkSkips" : 0,
  "isMultiKey" : false,
  "indexOnly" : false,
  "indexBounds" : {
  }
}

Since I was doing a find on state, and a sort on email (or last_name), I added a compound index using MongoMapper (you could have just as easily done it at the mongo console).

Account.ensure_index([[:state,1],[:email,1]])
Account.ensure_index([[:state,1],[:last_name,1]])
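
The mongo console equivalent would be:

> db.accounts.ensureIndex({state: 1, email: 1})
> db.accounts.ensureIndex({state: 1, last_name: 1})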

Re-running the explain, you can see:

  • the cursor type is now BtreeCursor (i.e., an index is being used)
  • the entire collection is not scanned
  • the retrieval went from 44 millis down to 2 millis
  • Success!!

> db.accounts.find({state: "active"}).limit(15).sort({email: 1}).explain();
{
  "cursor" : "BtreeCursor state_1_email_1",
  "nscanned" : 15,
  "nscannedObjects" : 15,
  "n" : 15,
  "millis" : 2,
  "nYields" : 0,
  "nChunkSkips" : 0,
  "isMultiKey" : false,
  "indexOnly" : false,
  "indexBounds" : {
    "state" : [ [ "active", "active" ] ],
    "email" : [ [ {"$minElement" : 1}, {"$maxElement" : 1} ] ]
  }
}

Fiddling a Bit More – Using the Profiler

You can drop into the mongo console and see more specifics using the mongo profiler.

> db.setProfilingLevel(1,15)
{ "was" : 1, "slowms" : 100, "ok" : 1 }

For this example, I cleared the indexes on accounts, ran the following query, and examined its profile data.
Note: the timing can vary over successive runs, but it is generally fairly consistent, and it is close to the “millis” value you see in explain output.

> db.accounts.find({state: "active"}).limit(15).sort({email: 1})
> db.system.profile.find();
{ "ts" : ISODate("2011-11-27T21:09:04.237Z"),
  "info" : "query mdalert-development.accounts
  ntoreturn:15 scanAndOrder
  reslen:8690
  nscanned:11002
  query: { query: { state: "active" },
    orderby: { email: 1.0 } }
    nreturned:15 163ms", "millis" : 43 }

Now let’s add back the indexes… one at a time. First up, let’s add “state.”

> db.accounts.ensureIndex({state:1})
> db.accounts.find({state: "active"}).limit(15).sort({email: 1})
> db.system.profile.find({info: /.accounts/})
{ "ts" : ISODate("2011-11-27T21:26:29.801Z"),
  "info" : "query mdalert-development.accounts
  ntoreturn:15 scanAndOrder
  reslen:546
  nscanned:9747
  query: { query: { state: "active" },
    orderby: { email: 1.0 }, $explain: true }
    nreturned:1 81ms", "millis" : 81 }

Hmmm. Not so good! Let’s add in the compound index that we know we need, and run an explain:

> db.accounts.ensureIndex({state:1,email:1})
> db.accounts.find({state: "active"}).limit(15).sort({email: 1}).explain()
{
	"cursor" : "BtreeCursor state_1_email_1",
	"nscanned" : 15,
	"nscannedObjects" : 15,
	"n" : 15,
	"millis" : 0,

And sure enough, we get good performance. The millis is so small that this query will not even show up in the profiler.
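
If you really do want to capture the fast ones too, profiling level 2 logs every operation regardless of the slowms threshold; just remember to dial it back down, because it gets noisy:

> db.setProfilingLevel(2)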

If you want to clear the profile stats, you’ll soon find out that you can’t simply remove the documents. The only way I found to do it was as follows:

  • restart mongod in non-profiling mode
  • reopen the mongo console and type:
    db.system.profile.drop()

You should now see that the profile is empty:

> show profile
db.system.profile is empty

Now you can restart mongod in profiling mode and see your latest profiling data without all the ancient history.
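
Depending on your MongoDB version, you may be able to skip the restart altogether: turn profiling off for the database, drop the collection, then turn profiling back on. Worth a try before bouncing the server:

> db.setProfilingLevel(0)
> db.system.profile.drop()
> db.setProfilingLevel(1, 15)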

Some Gotchas

Case-insensitive regex searches can’t use an index efficiently

If your query is a case-insensitive regex (or one not anchored to the start of the string), the index can’t narrow the scan: MongoDB ends up examining every key (note the nscanned below). With the regex, retrieval is 501 ms (not bad, given 317K records):

> db.message_logs.find({patient_name:/ben franklin/i}).sort({created_at:-1}).explain()
{
	"cursor" : "BtreeCursor patient_name_1 multi",
	"nscanned" : 317265,
	"nscannedObjects" : 27,
	"n" : 27,
	"millis" : 501,
	"nYields" : 0,
	"nChunkSkips" : 0,
	"isMultiKey" : false,
	"indexOnly" : false,
	"indexBounds" : {
		"patient_name" : [ [ "", { } ], [ /ben franklin/, /ben franklin/ ] ]
	}
}
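
As an aside, a case-sensitive regex anchored to the start of the string is the one flavor that can use tight index bounds, so a prefix search stays fast:

> db.message_logs.find({patient_name: /^Ben/}).sort({created_at: -1}).explain()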

Without regex, it is essentially instantaneous:

> db.message_logs.find({patient_name:'Ben Franklin'}).sort({created_at:-1}).explain()
{
	"cursor" : "BtreeCursor patient_name_1",
	"nscanned" : 27,
	"nscannedObjects" : 27,
	"n" : 27,
	"scanAndOrder" : true,
	"millis" : 0,
	"nYields" : 0,
	"nChunkSkips" : 0,
	"isMultiKey" : false,
	"indexOnly" : false,
	"indexBounds" : {
		"patient_name" : [ [ "Ben Franklin", "Ben Franklin" ] ]
	}
}

Tips for examining profiler output

  • Look at the most recent offenders:
    show profile
  • Look at a single collection:
    db.system.profile.find({info: /message_logs/})
  • Look at slow queries:
    db.system.profile.find({millis : {$gt : 500}})
  • Look at a single collection with a specific query param, and a response >100ms:
    db.system.profile.find({$and: [{info: /message_logs/}, {info: /patient_name/}], millis : {$gt : 100}})

Summary

So here you have an example of how to see indexes in action, how to create them, and how to measure their effects.
