On our production database, we have the following pseudo-code SQL batch query running every hour:
INSERT INTO TemporaryTable (SELECT FROM HighlyContentiousTableInInnoDb WHERE allKindsOfComplexConditions are true)
This query itself does not need to be fast, but I noticed it was locking up HighlyContentiousTableInInnoDb, even though it was only reading from it. That was making some other very simple queries take ~25 seconds (which is how long the batch query takes).
I then learned that InnoDB tables are indeed locked by a SELECT in this situation! http://www.mysqlperformanceblog.com/2006/07/12/insert-into-select-performance-with-innodb-tables/
However, I don't like the solution in that article of selecting into an OUTFILE; it seems like a hack (temporary files on the filesystem seem ugly). Any other ideas? Is there a way to make a full copy of an InnoDB table without locking it this way during the copy? Then I could just copy HighlyContentiousTable to another table and run the query there.
The answer to this is much easier now: use Row Based Replication and the READ COMMITTED isolation level.
The locking you were experiencing disappears.
Longer explanation: http://harrison-fisk.blogspot.com/2009/02/my-favorite-new-feature-of-mysql-51.html
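As a rough sketch of what this looks like in practice (assuming MySQL 5.1+ with binary logging enabled; variable names are as in the MySQL manual, and the table/condition names are the question's placeholders):

```sql
-- Switch the binary log to row-based format, so InnoDB no longer needs
-- next-key locks on the source table to keep replication consistent.
-- (Usually set permanently via binlog_format = ROW in my.cnf.)
SET GLOBAL binlog_format = 'ROW';

-- Run the hourly batch under READ COMMITTED instead of the default
-- REPEATABLE READ. Combined with row-based logging this is safe.
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;

INSERT INTO TemporaryTable
SELECT * FROM HighlyContentiousTableInInnoDb
WHERE allKindsOfComplexConditions;
```

Note that changing `binlog_format` globally affects all sessions; on a replicated setup this should be coordinated with the replicas.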
If you can tolerate some anomalies, you can change the ISOLATION LEVEL to the least strict one, READ UNCOMMITTED. But during that time others are allowed to read from your destination table. Alternatively, lock the destination table manually (I assume MySQL provides this functionality?).
Or you can use READ COMMITTED, which should also not lock the source table. It only locks the inserted rows in the destination table until commit.
I would choose the second one.
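A minimal sketch of the second option, scoping the isolation level to just the batch session (table and condition names are the question's placeholders; note that on MySQL 5.1 with statement-based binary logging, READ COMMITTED may be rejected as unsafe unless row-based logging is used):

```sql
-- Applies only to this connection; other sessions keep the default.
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;

START TRANSACTION;
INSERT INTO TemporaryTable
SELECT * FROM HighlyContentiousTableInInnoDb
WHERE allKindsOfComplexConditions;
COMMIT;
```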
Disclaimer: I'm not very experienced with databases, and I'm not sure whether this idea is workable. Please correct me if it's not.
How about setting up a second, equivalent table, HighlyContentiousTableInInnoDb2, and creating AFTER INSERT etc. triggers on the first table which keep the new table updated with the same data? You should then be able to lock HighlyContentiousTableInInnoDb2, and only slow down the triggers of the main table, rather than all queries. Potential problems:
- 2x data stored
- Additional work for all inserts, updates and deletes
- Might not be transactionally sound
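A sketch of the trigger idea, using a hypothetical two-column schema (`id`, `payload`); a real version would also need AFTER UPDATE and AFTER DELETE triggers to keep the copy in sync:

```sql
-- Create an identically structured secondary table.
CREATE TABLE HighlyContentiousTableInInnoDb2
  LIKE HighlyContentiousTableInInnoDb;

DELIMITER //
-- Mirror every new row into the secondary table.
CREATE TRIGGER mirror_insert
AFTER INSERT ON HighlyContentiousTableInInnoDb
FOR EACH ROW
BEGIN
  INSERT INTO HighlyContentiousTableInInnoDb2 (id, payload)
  VALUES (NEW.id, NEW.payload);
END//
DELIMITER ;
```

The hourly batch would then read from HighlyContentiousTableInInnoDb2, so any locks it takes no longer touch the table the fast queries use.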
The reason for the lock (a read lock) is to protect your reading transaction from reading "dirty" data that a parallel transaction may currently be writing. Most DBMSs offer a setting that lets users set and revoke read and write locks manually. This may be interesting for you if reading dirty data is not a problem in your situation.
I think there is no safe way to read from a table without any locks in a DBMS with multiple transactions.
But here is some brainstorming:
If space is not a problem, you can consider running two instances of the same table:
HighlyContentiousTableInInnoDb2 for your continuous read/write transactions, and a
HighlyContentiousTableInInnoDb2_shadow for your batched access.
Maybe you can fill the shadow table automatically via triggers/programs inside your DBMS, which is faster and smarter than an additional write transaction everywhere.
Another idea is this question: do all transactions need to access the whole table? If not, you could use views to lock only the necessary columns. If the continuous access and your batched access are disjoint with regard to columns, it might be possible that they don't lock each other!
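A sketch of the column-split idea, with hypothetical column names (`id`, `amount`); note this is speculative for MySQL, since InnoDB takes locks per row rather than per column, so whether a view actually reduces contention depends on the engine:

```sql
-- Hypothetical: the batch job only needs (id, amount), while the
-- interactive queries touch other columns of the base table.
CREATE VIEW BatchView AS
  SELECT id, amount FROM HighlyContentiousTableInInnoDb;

-- The hourly job reads from the view instead of the base table.
INSERT INTO TemporaryTable
SELECT id, amount FROM BatchView
WHERE amount > 100;
```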
I'm not familiar with MySQL, but hopefully there is an equivalent to SQL Server's Read Committed Snapshot transaction isolation level. Using that should solve your problem.