I've been reading through documentation and watching screencasts about MongoDB over the last couple of days, and I'm puzzled about when a solution like this is much better than an average Postgres or MySQL setup.
Specifically, my real question is: under what circumstances (a use case would be nice) would you want to go the NoSQL route?
Many disparate writers. Particularly when the writers can get segmented by network disconnections and later have to resync data that has been written on both sides of the partition. This breaks ACID, and even though you can solve the problem with explicit business logic, you are now in NoSQL territory. This is common in military scenarios, but any system in which everybody is a prolific writer will have some write-contention locking in an ACID system.
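One common way to reconcile writes from both sides of a partition is a last-write-wins merge keyed on a per-document timestamp. This is only a minimal sketch of that policy; the document shape, the `updated_at` field, and the merge rule are all invented for illustration, not MongoDB's actual replication protocol.

```python
# Last-write-wins merge of two replicas that diverged during a partition.
# Each replica is a {doc_id: document} map; the newer 'updated_at' wins.

def merge_replicas(side_a, side_b):
    """Merge two replica snapshots, keeping the newer write per doc_id."""
    merged = dict(side_a)
    for doc_id, doc in side_b.items():
        if doc_id not in merged or doc["updated_at"] > merged[doc_id]["updated_at"]:
            merged[doc_id] = doc
    return merged

side_a = {"t1": {"status": "moved", "updated_at": 100}}
side_b = {"t1": {"status": "engaged", "updated_at": 140},
          "t2": {"status": "new", "updated_at": 120}}

merged = merge_replicas(side_a, side_b)
print(merged["t1"]["status"])  # engaged (the newer write wins)
print(len(merged))             # 2
```

Note that last-write-wins silently discards the losing write; that is exactly the kind of explicit business-logic decision an ACID system never forces you to make.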
Fluid schemas. Altering a schema in a traditional DB is an expensive operation that frequently requires some kind of server downtime or other complicated processes. With most NoSQL systems it's trivial. So if you have data from lots of disparate sources to merge, and/or situations where you may want to start tracking new information later on, NoSQL systems are a lot easier to deal with. Merging two data sources so they can be charted against each other is a good example I can think of.
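To make the "no ALTER TABLE" point concrete, here is a server-free sketch of two differently-shaped feeds landing in one collection. The collection is modeled as a plain list of dicts so it runs standalone; with pymongo, the insert and find calls have the same general shape. The feed names and fields are made up.

```python
# Two feeds with different shapes land in one "collection" -- no migration.
collection = []
collection.append({"source": "feed_a", "temp_c": 21.5, "city": "Oslo"})
collection.append({"source": "feed_b", "temperature_f": 70.7, "station": "KJFK"})
# A new field appears later; nothing to alter, older documents simply lack it:
collection.append({"source": "feed_a", "temp_c": 18.0, "city": "Oslo", "humidity": 0.6})

def find(coll, filt):
    """Minimal stand-in for collection.find({...}) with equality filters."""
    return [d for d in coll if all(d.get(k) == v for k, v in filt.items())]

oslo = find(collection, {"city": "Oslo"})
print(len(oslo))  # 2
```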
Low-bandwidth replication. Once you have broken ACID, you can have readers and writers on leaf nodes of the network graph holding partial data, with no need for full replicas of the database. My own company's product, the Army's Command Post of the Future, uses this.
Data interoperability. Most NoSQL databases let you introspect the data without knowing the schema in advance, making it easier to wire up connections between disparate systems.
Massive scaling. This is the one that's most frequently debated, and most frequently abused by NoSQL advocates. If this is the only reason you're choosing NoSQL, start with MySQL instead and scale later.
We use MongoDB for a massive, very transient data structure. It essentially functions as a job tracker/manager with lots of work units being processed every second. The work unit has no defined schema (new unit types are invented fairly frequently), yet we need to be able to query for specific fields or properties without iterating over the entire DB. To recap: highly transient, highly available (can't afford to block for a query), with a workload of roughly 600 QPS on a single "commodity" machine running in the cloud.
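The job-tracker pattern above can be sketched in a few lines. The collection name, job types, and status values below are hypothetical; the list comprehension stands in for what would be an indexed `db.jobs.find({"type": ..., "status": ...})` query on the server.

```python
# Schemaless work units in one collection, queried by specific fields
# without the query caring about fields other unit types carry.
jobs = [
    {"_id": 1, "type": "resize", "status": "pending", "width": 800},
    {"_id": 2, "type": "transcode", "status": "running", "codec": "h264"},
    {"_id": 3, "type": "resize", "status": "pending", "width": 1024},
]

# Equivalent in spirit to db.jobs.find({"type": "resize", "status": "pending"}):
pending_resizes = [j for j in jobs
                   if j["type"] == "resize" and j["status"] == "pending"]
print([j["_id"] for j in pending_resizes])  # [1, 3]
```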
The simple fact is that it is very hard to do the same on an SQL machine while keeping the costs the same.
Other popular use cases for MongoDB (besides ours) are statistics collection; it is very efficient at incrementing specific properties inside documents, much more so than most RDBMS systems.
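The increment the answer refers to is MongoDB's atomic `$inc` update operator. Below is a server-free sketch of its semantics applied to a plain dict; in the real system it is a single in-place server-side operation along the lines of `db.stats.update_one({"_id": ...}, {"$inc": {"views": 1}})`. The counter document here is invented.

```python
# What a {"$inc": {field: delta}} update does, sketched against one dict.
def apply_inc(doc, inc_spec):
    """Add each delta to its field, creating missing fields at 0."""
    for field, delta in inc_spec.items():
        doc[field] = doc.get(field, 0) + delta
    return doc

counter = {"_id": "page:home", "views": 41}
apply_inc(counter, {"views": 1})
print(counter["views"])  # 42
```

In an RDBMS the same counter typically means a read-modify-write or an UPDATE with row locking; pushing the increment to the server as one operation is what makes the document-store version cheap.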
Again, it isn't that it's impossible to do the same in MySQL; it's just more expensive and takes more time (more skill), which for small companies or a fast-paced development environment often means it can't be done at all.
Some REST APIs return JSON data (for instance, many of the open government data APIs). If you want to dump the REST data to a local data store (in case you need to run analysis, etc.), consuming a JSON object with MongoDB is trivial. There's no need to define a table schema. Better still, if the JSON object changes over time (e.g. the REST API returns additional fields), you can still consume the data in a single step. Try doing that with a relational database!
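The "single step" is literally parse-and-insert. A minimal sketch with two invented API payloads (the agency and field names are made up); the list stands in for a MongoDB collection:

```python
import json

# Two versions of a REST payload; v2 grew an extra "fte" field.
payload_v1 = '{"agency": "DOT", "year": 2011, "budget": 1250000}'
payload_v2 = '{"agency": "DOT", "year": 2012, "budget": 1310000, "fte": 92}'

# Parse and insert as-is -- no CREATE TABLE, no ALTER TABLE when v2 arrives.
store = [json.loads(payload_v1), json.loads(payload_v2)]

print(store[1]["fte"])    # 92
print("fte" in store[0])  # False -- older documents simply lack the field
```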