When you’re programming an application, you generally want the database to respond instantly to anything you do. To maximize its ability to do this, it’s important to know what takes up time.
Accessing data from RAM is fast and accessing data from disk is slow. Therefore, most optimization techniques are basically fancy ways of minimizing the number of disk accesses.
Reading from disk is (about) a million times slower than reading from memory.
Most spinning disk drives can access data in roughly 10 milliseconds, whereas memory returns data in roughly 10 nanoseconds. (The exact numbers depend a lot on what kind of hard drive and what kind of RAM you have, but this broad generalization is roughly accurate for most hardware.) That puts the ratio of disk time to RAM time at 10 milliseconds to 10 nanoseconds, or 1 millisecond to 1 nanosecond. One millisecond is equal to one million nanoseconds, so accessing disk takes (roughly) a million times longer than accessing RAM.
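The arithmetic above can be sketched in a couple of lines (the 10 ms and 10 ns figures are the rough estimates from the text, not measurements):

```python
# Back-of-the-envelope ratio of disk access time to RAM access time,
# using the approximate figures quoted above.
disk_access_s = 10e-3   # ~10 milliseconds for a spinning-disk access
ram_access_s = 10e-9    # ~10 nanoseconds for a RAM access

ratio = disk_access_s / ram_access_s  # roughly a million
print(f"Disk access is about {ratio:,.0f}x slower than RAM access")
```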
Thus, reading off of disk takes a really long time in computing terms.
On Linux, you can measure sequential disk access on your machine by running:

    sudo hdparm -t /dev/hd

This doesn't give you an exact measure, as MongoDB will be doing non-sequential reads and writes, but it's interesting to see what your machine can do.
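If you'd rather get a feel for the disk-versus-memory gap from a script, here is a crude sketch (not a rigorous benchmark): it times reading a temporary file from disk, then times touching the same bytes once they are resident in RAM. The file size is arbitrary, and OS page caching will skew the disk number, so treat the results as illustrative only.

```python
import os
import tempfile
import time

# Size of the scratch file; arbitrary, chosen to be big enough to time.
size = 16 * 1024 * 1024  # 16 MB

# Write random bytes to a temporary file on disk.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(size))
    path = f.name

# Time reading the file back from disk (page cache may make this optimistic).
start = time.perf_counter()
with open(path, "rb") as f:
    data = f.read()
disk_s = time.perf_counter() - start

# Time copying the same bytes that are now already in memory.
start = time.perf_counter()
copy = bytes(data)
mem_s = time.perf_counter() - start

print(f"disk read: {disk_s:.4f}s, in-memory copy: {mem_s:.4f}s")
os.remove(path)
```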
So, what can be done about this? There are a couple of “easy” solutions:
SSDs (solid state drives) are much faster than spinning hard disks for many things, ...