There seems to be a big push for key/value-based databases, which I believe memcache to be.
Is the value usually some sort of collection or XML file that holds more meaningful data?
If so, is it generally faster to deserialize that data than to do traditional JOINs and SELECTs on tables that return a row-based result set?
What is happening is that a few really, really, REALLY large sites like Google and Amazon occupy a tiny, tiny niche where their data storage and retrieval needs are so dissimilar to everybody else's that a different way of storing/finding data is called for. I'm sure these guys know what they're doing; they're very good at what they do.
However, this then gets picked up and reported on and distorted into "relational databases aren't up to handling data for the web". Readers then start to think "hey, if relational databases aren't good enough for Amazon and Google, they're not good enough for me."
Both inferences are wrong: 99.9% of databases (including those behind web sites) are nowhere near the same ballpark as Amazon and Google, not even within several orders of magnitude. For that 99.9%, nothing has changed; relational databases still work just fine.
As with most things, "it depends". If the joins are relatively trivial (that is, a small number of joins on well-keyed data), and you're storing especially complex data, it may be better simply to stick with the more complex query.
It's also a matter of volatility. Often the purpose of many joins is to gather very disparate data, that is, data which varies widely in how often it changes. It can add considerable complexity and overhead to keep a key-value pair table synchronized when a small slice of the data across many pairs is updated. System complexity can often be considered a kind of performance cost: the time, risk and expense of making a change to a complex system without affecting performance is often far greater than for a simple one.
The best solution is to code whatever works as simply as possible. Generally I'd say this means create a fully normalized database design and join the heck out of it. Only revisit your design after performance becomes an obvious problem. When you analyze the problem, it will also be obvious where the problems lie and what needs to be done to fix them. If that means reducing joins, then so be it. You'll know when you need to know.
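To make the "fully normalized, join freely" starting point concrete, here is a minimal sketch using SQLite with hypothetical users/orders tables (the schema and amounts are made up for illustration; totals are stored as integer cents):

```python
import sqlite3

# Two normalized tables; a well-keyed join across them is cheap,
# so there is no need to denormalize up front.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         user_id INTEGER REFERENCES users(id),
                         total_cents INTEGER);
""")
conn.execute("INSERT INTO users VALUES (1, 'alice')")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, 1, 999), (2, 1, 2450)])

# One query gathers everything; revisit this only if profiling
# shows the join itself is the bottleneck.
rows = conn.execute("""
    SELECT u.name, COUNT(o.id), SUM(o.total_cents)
    FROM users u JOIN orders o ON o.user_id = u.id
    GROUP BY u.id
""").fetchall()
print(rows)  # [('alice', 2, 3449)]
```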
I don't have a lot of experience with key/value DBs, so take what I say with a grain of salt.
With that said, the first thing I should point out is that memcached is not a key/value database. A database implies some sort of persistent store, which memcached is not. Memcached is meant to be a temporary store that saves you a query to the actual database.
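That role is usually the cache-aside pattern: check the cache, fall back to the database on a miss, then populate the cache. A sketch, with a plain dict standing in for the memcached client and a stub standing in for the real SQL query (real clients such as pymemcache expose a similar get/set shape):

```python
cache = {}  # stand-in for a memcached client

def expensive_db_query(user_id):
    # Placeholder for a real (slow) query against the actual database.
    return {"id": user_id, "name": "alice"}

def get_user(user_id):
    key = f"user:{user_id}"
    value = cache.get(key)                    # 1. try the cache first
    if value is None:
        value = expensive_db_query(user_id)   # 2. miss: hit the database
        cache[key] = value                    # 3. save it for next time
    return value

first = get_user(1)   # misses the cache, hits the "database"
second = get_user(1)  # served from the cache
```

If memcached restarts, nothing is lost except performance; the database remains the source of truth, which is exactly why memcached is not itself a database.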
Other than that, my understanding is that you're not going to be able to replace your RDBMS with a key/value database. They tend to be best for unstructured data, or other data where you may not know all the attributes that need to be stored. If you need to store highly structured data, you can't do much better than a traditional RDBMS.
The values can be complex structured data that requires deserialization. They can also be simple fixed-size records, just like in your RDBMS. Part of the benefit is that you get to decide for yourself. When you're optimizing your database, you're not limited to what SQL can do.
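In other words, the "value" is often just a serialized blob whose shape you choose yourself. A sketch using JSON and a dict as a stand-in store (the key name and record contents are invented for illustration; the store only sees opaque strings under keys):

```python
import json

store = {}  # stand-in for any key/value store

# An arbitrarily shaped record; no table schema constrains it.
record = {"user": "alice",
          "tags": ["admin", "beta"],
          "prefs": {"theme": "dark"}}

store["user:alice"] = json.dumps(record)   # serialize on write
loaded = json.loads(store["user:alice"])   # deserialize on read
```

Whether this round trip beats a JOIN depends entirely on the workload, which is the point of the answer above: measure before restructuring.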
The way you ask makes it sound like either the join or the deserialization would be the bottleneck. But in any database, things are never that simple. You can put denormalized data in your RDBMS too, or write a key-value interface on top of the RDBMS, if you like.
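That last option is easy to sketch: one two-column table turns a relational store into a persistent key/value store. A minimal, hypothetical wrapper over SQLite (class and key names are made up for illustration):

```python
import json
import sqlite3

class KVStore:
    """A key/value interface layered on top of a relational database."""

    def __init__(self, path=":memory:"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)")

    def set(self, key, value):
        # Upsert the JSON-serialized value under the key.
        self.conn.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)",
                          (key, json.dumps(value)))

    def get(self, key):
        row = self.conn.execute(
            "SELECT v FROM kv WHERE k = ?", (key,)).fetchone()
        return json.loads(row[0]) if row else None

kv = KVStore()
kv.set("session:42", {"user": "alice", "ttl": 300})
restored = kv.get("session:42")
```

The reverse layering (denormalized blobs inside the RDBMS) is just as easy, which is why the join-vs-deserialization framing is a false dichotomy.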