Database Deadlock Avoidance Patterns

[Moved here from DeadlockAvoidancePatterns]

Patterns of application design that achieve DatabaseDeadlockAvoidance.


See: SynchronizationStrategies

(It would be nicer in my view to extend the extant pattern language than to create a newer, similar one. If you had different goals for this language than SynchronizationStrategies, I apologize.)


Briefly...


If you "SELECT" a row, and then go back and update it, doing the "SELECT" will obtain a read lock on the record/page, and the "UPDATE" will upgrade the read lock to a write/exclusive lock. Upgrading locks is bad: If two processes read the record, obtaining read locks, then both attempt the update, they will block on each other -- a DeadLock.

The "SELECT FOR UPDATE" syntax informs the database that you intend to follow the read operation with an update -- so it starts with an exclusive lock. This avoids deadlocks.


If all programs insert/update/delete rows following a well-defined hierarchy or ordering of tables, you avoid the problem of program #1 having updated table A and now wanting to update table B, while program #2 has just updated table B and now wants to update table A (a DeadLock situation). If all programs update table A first, this situation can't happen.
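
A minimal sketch of that convention, with invented table names: every writer touches table A (here "warehouse") before table B (here "store"), so no two transactions can hold their locks in opposite orders:

 import java.sql.*;
 
 class OrderedWriter {
     // Convention shared by every program: warehouse (table A) first, store (table B) second.
     void moveStock(Connection con, long warehouseId, long storeId, int qty) throws SQLException {
         con.setAutoCommit(false);
         try (PreparedStatement a = con.prepareStatement(
                  "UPDATE warehouse SET qty = qty - ? WHERE id = ?");
              PreparedStatement b = con.prepareStatement(
                  "UPDATE store SET qty = qty + ? WHERE id = ?")) {
             a.setInt(1, qty);  a.setLong(2, warehouseId);  a.executeUpdate();   // table A first
             b.setInt(1, qty);  b.setLong(2, storeId);      b.executeUpdate();   // then table B
         }
         con.commit();
     }
 }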

(However, in typical business development environments, I just don't see this happening. -- JeffGrigg)


Creating a TransactionObject that holds the various database actions should solve most DeadLock issues.

How it works: When an object is updated/deleted/inserted, a reference is placed into the update/delete/insert queue of the Transaction object. The Transaction object does not execute any behavior until the transaction is closed. Since the Transaction object does the execution, you can enforce that all updates/deletes/inserts into tables happen in the same order every time. It is also possible to extend this to handle nested transactions.
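
A rough sketch of such a TransactionObject (the class, its per-table queues, and the table ordering are assumptions, not a standard API); SQL is queued per table and executed in one fixed table order when the transaction is closed:

 import java.sql.*;
 import java.util.*;
 
 class QueuedTransaction {
     // One queue of pending SQL per table, kept in the global locking order.
     private final Map<String, List<String>> pendingSql = new LinkedHashMap<>();
 
     QueuedTransaction(List<String> tableOrder) {
         for (String table : tableOrder) {
             pendingSql.put(table, new ArrayList<>());
         }
     }
 
     // Callers enqueue work instead of executing it immediately.
     void enqueue(String table, String sql) {
         pendingSql.get(table).add(sql);
     }
 
     // On close, statements run grouped by table and always in the same table
     // order, so concurrent transactions acquire locks in a consistent sequence.
     void close(Connection con) throws SQLException {
         con.setAutoCommit(false);
         try (Statement s = con.createStatement()) {
             for (List<String> statements : pendingSql.values()) {
                 for (String sql : statements) {
                     s.executeUpdate(sql);
                 }
             }
         }
         con.commit();
     }
 }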

Cons: If the object being updated/inserted/deleted changes before the transaction is closed, you could be using invalid data. Long-lived transactions may be troublesome.

If any of the data used as input to the business process changed during the transaction, then the inserts, updates and deletes performed might have been different with the new data -- and are therefore invalid. Thus, unless you obtain locks on all data you read, your transaction may still fail. (You might successfully avoid DeadLocks, but even if you do everything right, the transaction may still fail.)

Perhaps appropriate application of the CommandObject pattern would help: If the transaction fails, you can attempt to execute the business-level command again.
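
A sketch of that idea, with an invented command interface and retry limit; when the transaction fails, roll back, re-read the inputs, and run the business-level command again:

 import java.sql.*;
 
 interface DatabaseCommand {
     // Re-reads its inputs and performs its inserts/updates/deletes on each call.
     void execute(Connection con) throws SQLException;
 }
 
 class RetryingExecutor {
     void run(Connection con, DatabaseCommand command, int maxAttempts) throws SQLException {
         for (int attempt = 1; ; attempt++) {
             try {
                 con.setAutoCommit(false);
                 command.execute(con);
                 con.commit();
                 return;                      // success
             } catch (SQLException e) {
                 con.rollback();              // undo any partial work
                 if (attempt >= maxAttempts) {
                     throw e;                 // give up after the last attempt
                 }
                 // otherwise loop and execute the command again with fresh data
             }
         }
     }
 }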

I implement database access in my application in a similar way. However, instead of keeping a queue of transactions, I made the method that updates the database synchronized (it's a Java server). If several threads want to update the same data, they have to wait until the previous operation finishes. This approach may not scale well when database access is slow, but in simpler setups (or when database access is very fast) it guarantees that the operation is always performed correctly and on correct data.
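
A sketch of that approach, with invented class and method names; a single synchronized method serializes all updates, which is simple and correct but funnels every thread through one monitor:

 import java.sql.*;
 
 class DatabaseGateway {
     private final Connection con;
 
     DatabaseGateway(Connection con) {
         this.con = con;
     }
 
     // All threads wait here; while the database call is slow, the whole
     // server's updates are serialized -- hence the scaling concern above.
     synchronized void update(String sql) throws SQLException {
         try (Statement s = con.createStatement()) {
             s.executeUpdate(sql);
         }
     }
 }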


CategoryDatabase CategoryConcurrency CategoryConcurrencyPatterns

