
MongoDB Index Performance

As part of this (unintended) mini-series on MongoDB and indexing, I wrote a little test to see if I could document performance gains through indexing. I used real-world data, albeit only 50,000 records, to query out a handful of documents (24 being the most).

Here is the code:

require 'test_helper'

class EncounterListingTest < Test::Unit::TestCase

  context "Indexing" do
    # One row of results: the doctor queried, how many documents matched, and
    # the average query time under each of the three index configurations.
    ProfileStats2 = Struct.new(:doctor_num, :count, :timing1, :timing2, :timing3)

    should "profile assorted doctor patient retrievals" do
      stats = []
      doctor_nums = ["602490", "603324", "212043", "602938"]
      doctor_nums.each_with_index do |doctor_num, i|
        # Baseline: no indexes (beyond the default _id index)
        MongoMapper.database.collection('encounters').drop_indexes
        show_indexes if i == 0
        timing1 = (1..3).map { measure_performance(doctor_num) }.inject(:+) / 3

        # Single index on the equality field
        MongoMapper.database.collection('encounters').drop_indexes
        add_index([[:private_physician, 1]])
        show_indexes if i == 0
        timing2 = (1..3).map { measure_performance(doctor_num) }.inject(:+) / 3

        # Compound index covering all three fields in the query
        MongoMapper.database.collection('encounters').drop_indexes
        add_index([[:private_physician, 1], [:notify_physician, 1], [:visible_count, 1]])
        show_indexes if i == 0
        timing3 = (1..3).map { measure_performance(doctor_num) }.inject(:+) / 3

        n_count = Encounter.count(:private_physician => doctor_num, :notify_physician => 'Y', :visible_count.gt => 0)
        stats << ProfileStats2.new(doctor_num, n_count, timing1, timing2, timing3)

      end

      # Echo the results to the console and write them to a CSV for graphing
      File.open("test/performance/index_stats_results-#{Time.now.strftime('%d-%m-%Y')}.csv", 'w') do |f|
        puts "%10s  %6s  %5s  %5s  %13s" % ["doctor", "count", "None", "Phys", "Phys/Ntfy/Vis"]
        f.puts "doctor, count, None, Phys, PhysNtfyVis"
        stats.each do |s|
          results = "%10d, %6d, %5.3f, %5.3f, %5.3f" % [s.doctor_num, s.count, s.timing1, s.timing2, s.timing3]
          puts results
          f.puts "%d, %d, %5.3f, %5.3f, %5.3f" % [s.doctor_num, s.count, s.timing1, s.timing2, s.timing3]
        end
      end

    end

  end

  private
  # Debug helper: dump a stats collection to the console.
  def show_stats(stats)
    stats.each do |s|
      puts "%6d, %5.3f, %5.3f, %5.3f" % [s.count, s.timing1, s.timing2, s.timing3]
    end
  end

  def measure_performance(doctor_num = "99602326")
    start = Time.now
    # Materialize the results with .all so the query actually executes
    Encounter.where(:private_physician => doctor_num, :notify_physician => 'Y', :visible_count.gt => 0).all
    Time.now - start  # elapsed seconds
  end

  def show_indexes
    puts "%s INDEXES %s" % ["*" * 12, "*" * 12]
    # index_information returns a hash of index name => details; print the names
    Encounter.collection.index_information.each { |name, _info| puts "    #{name}" }
  end

  def add_index(new_index)
    coll = MongoMapper.database.collection('encounters')
    # index_information is keyed by index name, and names follow the convention
    # [[:private_physician, 1]] => "private_physician_1", so build that name to check for it
    index_name = new_index.map { |field, direction| "#{field}_#{direction}" }.join("_")
    coll.drop_index(index_name) if coll.index_information.has_key?(index_name)
    Encounter.ensure_index(new_index)
  end

end

Results:

[Graph: the effect of adding indexes on query performance]

The results are shown in the accompanying graph. Except for the query that returned 24 documents, the general trend was that three indexes were better than one, and one was w-a-a-a-y better than none (of course, you already knew that). The odd outlier was the count = 6 case, where the single index did not perform as well as it did in the other tests.

A Walk Through the Valley of Indexing in MongoDB

As you walk through the valley of MongoDB performance, you will undoubtedly find yourself wanting to optimize your indexes at some point or other.

How to Watch Your Queries

Run your database with profiling on. I have an alias for starting up mongod in profile mode (the 'p' stands for profile):

alias mongop="<mongodb-install>/bin/mongod
      --smallfiles --noprealloc --profile=1 --dbpath <mongodb-install>/data/db"

With --profile=1, only queries slower than the default threshold of 100 ms are deemed "slow" and recorded.
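
If you would rather not restart mongod to change that threshold, you can also adjust it at runtime from the mongo shell; the second argument to setProfilingLevel is the slow-query threshold in milliseconds (50 here is just an example value):

> db.setProfilingLevel(1, 50)
// sets profiling level 1 with a 50 ms "slow" threshold;
// the call returns the *previous* settings, e.g. { "was" : 1, "slowms" : 100, "ok" : 1 }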

Add a logger (if you are using MongoMapper) and tail the log file (e.g., tail -f mongo-development.log) to see the queries.

##### MONGODB SETTINGS #####
# You can use :logger => Rails.logger, to get output of the mongo queries, or create a separate logger.
logger = Logger.new('mongo-development.log')
MongoMapper.connection = Mongo::Connection.new('localhost', 27017, {:pool_size => 5, :auto_reconnect => true, :logger => logger})
MongoMapper.database = "mdalert-development"

Run your query/exercise the app.

Examine the mongo log (trimmed for legibility), and look for the primary collection you were querying (accounts in this example).

['$cmd'].find({"count"=>"settings", "query"=>{:identifier=>"ItemsPerPage"}, "fields"=>nil}).limit(-1)
['$cmd'].find({"count"=>"accounts", "query"=>{:state=>"active"}, "fields"=>nil}).limit(-1)
['accounts'].find({:state=>"active"}).limit(15).sort([[:email, 1]])
['settings'].find({:identifier=>"AutoEmail"}).limit(-1)

Use MongoDB’s Explain to dig deeper.

Open up the mongo shell (<mongodb-install>/bin/mongo) and enter the query that you want explained. Hint: you can lift much of it straight from the query in the log.

Without any indexes, you can see the query is basically scanning the entire table. A bad thing! Another telltale sign is the cursor type "BasicCursor."

> db.accounts.find({state: "active"}).limit(15).sort({email: 1}).explain();
{
  "cursor" : "BasicCursor",
  "nscanned" : 11002,
  "nscannedObjects" : 11002,
  "n" : 15,
  "scanAndOrder" : true,
  "millis" : 44,
  "nYields" : 0,
  "nChunkSkips" : 0,
  "isMultiKey" : false,
  "indexOnly" : false,
  "indexBounds" : {
  }
}

Since I was doing a find on state, and a sort on email (or last_name), I added a compound index using MongoMapper (you could have just as easily done it at the mongo console).

Account.ensure_index([[:state,1],[:email,1]])
Account.ensure_index([[:state,1],[:last_name,1]])
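
To confirm the indexes took, you can list them from the mongo shell; the names follow mongo's field_direction convention (the same "state_1_email_1" you will see in the explain output below):

> db.accounts.getIndexes()
// expect "state_1_email_1" and "state_1_last_name_1" alongside the default _id index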

Re-running the explain, you can see:

  • the cursor type is now BtreeCursor (i.e., using an index)
  • the entire table is not scanned
  • the retrieval went from 44 millis down to 2 millis
  • Success!!

> db.accounts.find({state: "active"}).limit(15).sort({email: 1}).explain();
{
  "cursor" : "BtreeCursor state_1_email_1",
  "nscanned" : 15,
  "nscannedObjects" : 15,
  "n" : 15,
  "millis" : 2,
  "nYields" : 0,
  "nChunkSkips" : 0,
  "isMultiKey" : false,
  "indexOnly" : false,
  "indexBounds" : {
    "state" : [ [ "active", "active" ] ],
    "email" : [ [ {"$minElement" : 1}, {"$maxElement" : 1} ] ]
  }
}

Fiddling a Bit More – Using the Profiler

You can drop into the mongo console and see more specifics using the mongo profiler. Here I set the profiling level to 1 with a slow-query threshold of 15 ms (note that the response echoes the previous settings):

> db.setProfilingLevel(1,15)
{ "was" : 1, "slowms" : 100, "ok" : 1 }

For this example, I cleared the indexes on accounts, ran the following query, and examined its profile data.
Note: the timing can vary over successive runs, but it is generally fairly consistent, and it is close to the "millis" value you see in explain output.

> db.accounts.find({state: "active"}).limit(15).sort({email: 1})
> db.system.profile.find();
{ "ts" : ISODate("2011-11-27T21:09:04.237Z"),
  "info" : "query mdalert-development.accounts
  ntoreturn:15 scanAndOrder
  reslen:8690
  nscanned:11002
  query: { query: { state: "active" },
    orderby: { email: 1.0 } }
    nreturned:15 163ms", "millis" : 43 }

Now let’s add back the indexes… one at a time. First up, let’s add “state.”

> db.accounts.ensureIndex({state:1})
> db.accounts.find({state: "active"}).limit(15).sort({email: 1})
> db.system.profile.find({info: /.accounts/})
{ "ts" : ISODate("2011-11-27T21:26:29.801Z"),
  "info" : "query mdalert-development.accounts
  ntoreturn:15 scanAndOrder
  reslen:546
  nscanned:9747
  query: { query: { state: "active" },
    orderby: { email: 1.0 }, $explain: true }
    nreturned:1 81ms", "millis" : 81 }

Hmmm. Not so good! The index on state alone still leaves mongo to scanAndOrder nearly 10,000 matching documents to satisfy the sort on email. Let's add in the compound index that we know we need, and run an explain:

> db.accounts.ensureIndex({state:1,email:1})
> db.accounts.find({state: "active"}).limit(15).sort({email: 1}).explain()
{
	"cursor" : "BtreeCursor state_1_email_1",
	"nscanned" : 15,
	"nscannedObjects" : 15,
	"n" : 15,
	"millis" : 0,

And sure enough, we get good performance. The millis is so small that this query will not even show up in the profiler (it is well under our 15 ms threshold).
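
If you do want a fast query like this to show up, you can temporarily profile every operation (level 2 ignores the slowms threshold entirely); just remember to drop back down afterwards:

> db.setProfilingLevel(2)    // profile ALL operations, not just slow ones
> db.accounts.find({state: "active"}).limit(15).sort({email: 1})
> db.system.profile.find({info: /accounts/})
> db.setProfilingLevel(1, 15)    // back to "slow only"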

If you want to clear the profile stats, you'll soon find out that you can't simply remove the documents. The only way I found to do it was as follows:

  • restart mongod in non-profiling mode
  • reopen the mongo console and type:
    db.system.profile.drop()

You should now see that the profile is empty:

> show profile
db.system.profile is empty

Now you can restart mongod in profiling mode and see your latest profiling data without all the ancient history.
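
Depending on your MongoDB version, you may be able to skip the restart entirely, since system.profile can be dropped once profiling is off for the database (a sketch; verify the behavior on your version):

> db.setProfilingLevel(0)      // turn profiling off for this database
> db.system.profile.drop()     // now the capped profile collection can be dropped
> db.setProfilingLevel(1, 15)  // re-enable profiling with a 15 ms threshold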

Some Gotchas

Regex searches (usually) cannot use your indexes

If your query is a case-insensitive or unanchored regex, the index can't narrow the search; note that nscanned below covers every key in the index. With the regex, retrieval is 501 ms (not bad, given 317K records):

> db.message_logs.find({patient_name:/ben franklin/i}).sort({created_at:-1}).explain()
{
	"cursor" : "BtreeCursor patient_name_1 multi",
	"nscanned" : 317265,
	"nscannedObjects" : 27,
	"n" : 27,
	"millis" : 501,
	"nYields" : 0,
	"nChunkSkips" : 0,
	"isMultiKey" : false,
	"indexOnly" : false,
	"indexBounds" : {
		"patient_name" : [ [ "", { } ], [ /ben franklin/, /ben franklin/ ] ]
	}
}

Without regex, it is essentially instantaneous:

> db.message_logs.find({patient_name:'Ben Franklin'}).sort({created_at:-1}).explain()
{
	"cursor" : "BtreeCursor patient_name_1",
	"nscanned" : 27,
	"nscannedObjects" : 27,
	"n" : 27,
	"scanAndOrder" : true,
	"millis" : 0,
	"nYields" : 0,
	"nChunkSkips" : 0,
	"isMultiKey" : false,
	"indexOnly" : false,
	"indexBounds" : {
		"patient_name" : [ [ "Ben Franklin", "Ben Franklin" ] ]
	}
}
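
One mitigation worth knowing (a sketch, not part of the original runs): a case-sensitive, left-anchored regex can use the index, because mongo can translate the prefix into a key range:

> db.message_logs.find({patient_name: /^Ben Franklin/}).sort({created_at: -1}).explain()
// indexBounds should narrow to the "Ben Franklin" prefix range,
// rather than walking the whole index as in the case-insensitive run above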

Tips for examining profiler output

  • Look at the most recent offenders:
    show profile
  • Look at a single collection:
    db.system.profile.find({info: /message_logs/})
  • Look at slow queries:
    db.system.profile.find({millis : {$gt : 500}})
  • Look at a single collection with a specific query param, and a response >100ms (a query document cannot repeat the info key, so combine the patterns into one regex):
    db.system.profile.find({info: /message_logs.*patient_name/, millis : {$gt : 100}})
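
You can also combine any of these with a sort on ts (the timestamp field on each profile document) to see just the most recent entries, newest first:

> db.system.profile.find().sort({ts: -1}).limit(5)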

Summary

So here you have an example of how to see indexes in action, how to create them, and how to measure their effects.
