In one of my Rails projects we had problems with deadlocks during transactions:
"A ActiveRecord::StatementInvalid occurred in ...: Mysql::Error: Deadlock found when trying to get lock "
The fix I ended up with simply wraps the ActiveRecord transaction call in a begin/rescue clause:
```ruby
retries = 0
begin
  MyModel.transaction do
    # do your stuff
  end
rescue ActiveRecord::StatementInvalid => e
  raise unless e.message =~ /Deadlock found/   # re-raise if e is not a deadlock
  retries += 1
  raise if retries > 3
  retry
end
```
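If you need this in more than one place, the pattern can be pulled out into a reusable helper. This is a sketch of my own (the method name `with_deadlock_retry` and the message-matching check are assumptions, not part of ActiveRecord); it retries the given block whenever the raised error message mentions a deadlock:

```ruby
# Sketch of a reusable retry helper. The deadlock check matches on the
# MySQL error message, since ActiveRecord wraps it in StatementInvalid.
def with_deadlock_retry(max_retries: 3)
  retries = 0
  begin
    yield
  rescue StandardError => e
    # Only retry deadlocks; everything else propagates unchanged.
    raise unless e.message =~ /Deadlock found/i
    retries += 1
    raise if retries > max_retries
    retry
  end
end
```

Usage would then be `with_deadlock_retry { MyModel.transaction { ... } }`.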
Yet this raises a new problem: what if the "do your stuff" part is not free of side effects? MySQL guarantees to completely roll back an aborted transaction, but Ruby does not undo the side effects of your block. For example, if you count inside the transaction how many rows you inserted and do not reset the counter before the retry, you may end up with the wrong number, since your Rails code runs several times.
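The safe pattern is to (re)initialize any mutable state inside the begin block, so every attempt starts from a clean slate. Here is a self-contained sketch of the counter example (the method and the simulated deadlock are illustrative, not real ActiveRecord calls):

```ruby
# Sketch: the counter is initialized INSIDE the begin block, so a retry
# resets it to zero instead of double-counting rows from the aborted attempt.
def count_inserts(rows, fail_first_attempt: false)
  retries = 0
  failed = false
  begin
    inserted = 0                        # reset on every attempt, including retries
    rows.each do |row|
      inserted += 1                     # stands in for MyModel.create!(row)
      if fail_first_attempt && !failed
        failed = true                   # simulate one deadlock on the first pass
        raise "Mysql::Error: Deadlock found when trying to get lock"
      end
    end
    inserted
  rescue StandardError => e
    raise unless e.message =~ /Deadlock found/i
    retries += 1
    raise if retries > 3
    retry
  end
end
```

Had `inserted = 0` stood above the begin block, the retried run would report one row too many.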
In theory you could try to solve this with continuations, but Rails does not seem to work well with continuations.
And there is a second problem (and here MySQL seems to be to blame): how many retries are enough? In theory a slow transaction could starve, being retried over and over again; in practice this can happen on busy servers.
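One common mitigation (my own suggestion, not something Rails or MySQL provides) is to sleep with exponential backoff and jitter between retries, so the colliding transactions do not immediately run into each other again. A minimal sketch, with constants chosen arbitrarily:

```ruby
# Sketch: delay grows exponentially with the attempt number, capped at
# `cap` seconds, then randomized to 50-100% of that value (jitter) so
# two retrying transactions are unlikely to wake up in lockstep.
def backoff_delay(attempt, base: 0.05, cap: 2.0)
  delay = [base * (2**attempt), cap].min
  delay * (0.5 + rand * 0.5)
end
```

In the rescue clause you would then call `sleep backoff_delay(retries)` before `retry`. This reduces the collision probability but still cannot guarantee that a slow transaction eventually wins.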
According to Silberschatz et al. (Database System Concepts), this can be solved by assigning each transaction an increasing number (or timestamp). If a transaction is retried, it keeps its previous number. When the database has to kill a deadlocked transaction, it should prefer the younger ones with higher numbers. With this approach every transaction eventually commits, because a retried transaction keeps aging until it is the oldest one involved. Unfortunately, MySQL does not seem to do anything like this.
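The victim-selection rule itself is simple enough to sketch. This is illustrative pseudologic for the scheme described by Silberschatz et al., not MySQL behavior (the `Txn` struct and `pick_victim` are my own names):

```ruby
# Sketch of timestamp-based victim selection: each transaction carries
# the timestamp of its FIRST attempt and keeps it across retries. When
# a deadlock must be broken, the youngest transaction (highest
# timestamp) is aborted, so the oldest one always makes progress.
Txn = Struct.new(:id, :timestamp)

def pick_victim(deadlocked)
  deadlocked.max_by(&:timestamp)   # youngest = highest timestamp
end
```

Because the aborted transaction is retried with its original timestamp, it becomes relatively older each round and is eventually never chosen as the victim again.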