I think it's important people understand WHY this is the case, because it's not simply an issue for MySQL vs Mongo.
The simplest way to get really good performance for multi-row queries (even if you fall out of cache) is to physically order your data in query order. (That is, if you are going to ask for the most recent 100 blog comments, order the comments by (blog-post-id, reverse comment-date).)
MySQL-InnoDB makes this really easy (your data is physically ordered by primary key). In MongoDB it's not possible.
Here is a more elaborate explanation...
MySQL-InnoDB stores record data in primary-key order (it puts the data right into the b-tree). This means that if you want to access 100+ records in primary-key order, it's pretty darn efficient. Even if the data is out of cache, it could be just one or two disk seeks (depending on how many records fit in a block).
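To make this concrete, here's a minimal sketch using Python's built-in sqlite3 module. SQLite's WITHOUT ROWID tables store row data directly in the primary-key b-tree, analogous to InnoDB's clustered index (the table name and schema here are hypothetical):

```python
import sqlite3

# In-memory database; WITHOUT ROWID stores rows in the primary-key
# b-tree itself, much like InnoDB's clustered index.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE comments (
        post_id    INTEGER NOT NULL,
        created_at INTEGER NOT NULL,
        body       TEXT,
        PRIMARY KEY (post_id, created_at DESC)
    ) WITHOUT ROWID
""")

# Comments for two posts, inserted in interleaved (non-query) order.
rows = [(1, 100, "first"), (2, 50, "other post"),
        (1, 300, "latest"), (1, 200, "middle")]
conn.executemany("INSERT INTO comments VALUES (?, ?, ?)", rows)

# The newest-first query for one post is a single contiguous range
# scan of the clustered b-tree -- no per-row heap lookups.
latest = conn.execute(
    "SELECT body FROM comments WHERE post_id = 1 "
    "ORDER BY created_at DESC LIMIT 100"
).fetchall()
print([b for (b,) in latest])  # ['latest', 'middle', 'first']
```

Because the primary key is (post_id, created_at DESC), the rows come back in query order straight off the b-tree leaf pages, no matter what order they were inserted in.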
MongoDB stores record data in a heap table in semi-random order based on insertion and free space, with each document receiving a "doc id". You can make an index on whatever you want, but when Mongo fetches multiple records, it cross-references every index entry with the doc id. If your data is out of cache, this means a seek for _every single document_. AFAIK, as of 2012, there is no way around this, because there is no way to get MongoDB to store the document data directly in the b-tree. This is a big part of why Mongo is super slow if you fall out of cache.
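The cost gap above can be sketched with a back-of-envelope model (the block size, record size, and record count are all assumed numbers for illustration, not measurements from either system):

```python
# Hypothetical sizes: 16 KB disk blocks, 160-byte records.
BLOCK_SIZE, RECORD_SIZE, N = 16_384, 160, 100
per_block = BLOCK_SIZE // RECORD_SIZE  # 102 records fit in one block

# Clustered (InnoDB-style): the 100 records are physically adjacent,
# so we touch ceil(100 / 102) == 1 data block (plus a couple of
# b-tree interior blocks, which are usually cached).
clustered_seeks = -(-N // per_block)  # ceiling division

# Heap (Mongo-style): each doc id points at an effectively random
# block, so in the worst cold-cache case every record costs a seek.
heap_seeks = N

print(clustered_seeks, heap_seeks)  # 1 vs 100
```

Two orders of magnitude in seeks for the same 100-row query is roughly the difference between "fast" and "falls over" once the working set no longer fits in RAM.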
HOWEVER, some other systems also have this problem, including some ORMs that sit on top of MySQL. Ruby on Rails forces you to use an auto_increment primary key for every record -- which means even if you use MySQL, you are forcing your data into semi-random insertion order. Django does this as well. If you want to efficiently fetch a bunch of records (like 100 comments on a blog post), you want them to be in primary-key order.
In the SQL world, Oracle, MS-SQL, and Postgres also normally use a form of heap table for records. This allows records to have a physical "ROWID" which can be used to look them up directly (it's a physical block address with an O(1) lookup). However, it also means they are in semi-random order. The ROWID was an important join and foreign-key optimization back in the days of nightly SQL jobs on machines with very little RAM. Today, it's not a good optimization: b-tree indirect blocks are always in RAM, so direct ROWID lookups have little benefit over b-trees, and there's the huge drawback of no natural primary-key ordering. One can work around this with fully-covered indices, table-in-index storage, and key clustering -- all of which have their own frustrating tradeoffs.
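As one example of the "fully-covered index" workaround, here's a sketch in Python's sqlite3 (the schema and index name are hypothetical). When a secondary index contains every column a query needs, the database can answer from the index b-tree alone and never seek into the heap:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE comments (
        id      INTEGER PRIMARY KEY,   -- surrogate key: heap-ish order
        post_id INTEGER NOT NULL,
        body    TEXT
    )
""")
# Covering index: holds every column the query below touches,
# ordered by post_id, so matching rows are adjacent in the index.
conn.execute("CREATE INDEX idx_post_body ON comments(post_id, body)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT body FROM comments WHERE post_id = 1"
).fetchall()
print(plan)  # SQLite reports a COVERING INDEX scan
```

The tradeoff is the usual one: you're effectively storing the data twice (once in the heap, once in the index), and every write now maintains both copies.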
Does primary-key ordering have downsides? References to records are bigger (they have to contain the full primary key), there is no guaranteed stable way to reference a record (for foreign-key constraints), and if you change fields in the primary key, the entire record must be moved. In the "old days" there was another big drawback: b-tree O(log n) lookups are much slower than ROWID O(1) lookups. Today this is not an issue because the b-tree indirect nodes always fit in RAM.
Bottom line: if you want the easiest way to order data properly, use MySQL-InnoDB and choose your primary key wisely. If you are using another storage system, study up on how you can control physical ordering, because every system is different.