31. July 2012 16:40
A follower on twitter, @red_square, ran the tests from my previous blog post using MongoDB as the storage engine, which produced some interesting event read rates.
I incorporated this into my test.
Compared to the RDBMSs, these read speeds are blistering: 7 times faster than SQL Server and over 20 times faster than PostgreSQL. There is a cost to this, and we are comparing apples and oranges here. There is plenty on the web comparing MongoDB and other NoSQL storage tech to the traditional offerings, so I'm not going to regurgitate it here.
For my case, I think I'd still like to present the option of using an RDBMS to our customers. I think they are just more comfortable with it. But it does open the door to an interesting optimization strategy. There are cases, such as deploying a new projection schema, upgrading an existing one, recovering from a crash, or replaying to a point in time for analysis, where you want a pipeline that can rebuild the projections as quickly as possible.

So perhaps using MongoDB as a secondary event store (with lower reliability requirements), which gives you a super-fast read pipeline, could be feasible. This secondary store can be a local async mirror of the primary store that will get you 99% of the events required for a projection store rebuild (it may not have 100% of the events because it is an async mirror). The remaining events can then come from the slower primary event store at the end of the rebuild step. If the MongoDB mirror is ever lost, such as in disaster recovery, it can just be re-mirrored. I'd hope this would be a very rare occurrence.
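The two-phase rebuild described above can be sketched as follows. This is a minimal illustration, not real event-store code: the stores here are in-memory stand-ins, and the `read_from(seq)` API and `seq` field are assumptions I've made for the example. In a real system the mirror would be MongoDB and the primary would be the RDBMS event store, both exposing events ordered by a global sequence number.

```python
class ListStore:
    """In-memory stand-in for an event store ordered by a global `seq` number.
    (Hypothetical API; a real mirror/primary would be MongoDB / an RDBMS.)"""

    def __init__(self, events):
        self.events = events

    def read_from(self, seq):
        # Yield all events with a sequence number >= seq, in order.
        return (e for e in self.events if e["seq"] >= seq)


def rebuild_projection(mirror, primary, apply_event):
    # Phase 1: bulk-read from the fast (but possibly lagging) async mirror.
    last_seq = 0
    for event in mirror.read_from(1):
        apply_event(event)
        last_seq = event["seq"]

    # Phase 2: fetch the tail the async mirror may not yet have
    # from the slower, authoritative primary store.
    for event in primary.read_from(last_seq + 1):
        apply_event(event)


# Example: the mirror lags the primary by one event; the rebuild
# still sees every event exactly once, in order.
primary = ListStore([{"seq": i, "data": f"event-{i}"} for i in range(1, 6)])
mirror = ListStore(primary.events[:4])

applied = []
rebuild_projection(mirror, primary, applied.append)
# applied now holds events 1..5 in order
```

The point of the sketch is that the bulk of the read work (phase 1) hits only the fast mirror; the primary is touched just for the short tail, so its slower read rate barely matters.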
(Big thanks to Steve Flitcroft for the MongoDB help.)
Edit: The specific numbers are not representative of real world numbers. The interesting bit is the relative difference between them, ceteris paribus.