An advanced approach to database concurrency control, invented by JimStarkey (who now works at MySql, completing a new OLTP storage engine, Falcon).
Each transaction is granted a consistent view of the data as of transaction begin, and the database uses old "versions" of the data to attain the requested isolation level without locking the data. This way readers do not block writers and writers do not block readers.
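To make the idea concrete, here is a minimal sketch in Python of version chains with snapshot reads. It is a toy, not any particular engine's implementation; the MVCCStore and Transaction names and the integer commit-timestamp clock are invented for illustration.

# Toy snapshot reads over version chains: every committed write appends a
# new version tagged with a commit timestamp, and a transaction reads the
# newest version committed at or before its begin, so readers never wait
# on writers.
import itertools

class MVCCStore:
    def __init__(self):
        self.versions = {}           # key -> list of (commit_ts, value), oldest first
        self.clock = itertools.count(1)
        self.last_commit = 0

    def begin(self):
        return Transaction(self, snapshot_ts=self.last_commit)

class Transaction:
    def __init__(self, store, snapshot_ts):
        self.store = store
        self.snapshot_ts = snapshot_ts   # consistent view as of transaction begin
        self.writes = {}                 # private, uncommitted writes

    def read(self, key):
        if key in self.writes:           # a transaction sees its own writes
            return self.writes[key]
        # newest version committed at or before our snapshot
        for ts, value in reversed(self.store.versions.get(key, [])):
            if ts <= self.snapshot_ts:
                return value
        return None

    def write(self, key, value):
        self.writes[key] = value         # buffered; invisible to other transactions

    def commit(self):
        commit_ts = next(self.store.clock)
        for key, value in self.writes.items():
            self.store.versions.setdefault(key, []).append((commit_ts, value))
        self.store.last_commit = commit_ts

store = MVCCStore()
setup = store.begin(); setup.write("x", "old"); setup.commit()

reader = store.begin()           # snapshot taken here
writer = store.begin()
writer.write("x", "new")
writer.commit()                  # the writer commits without waiting for the reader

print(reader.read("x"))          # -> "old": the reader keeps its consistent view
print(store.begin().read("x"))   # -> "new": a later transaction sees the new version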
This is sometimes used in combination with OptimisticConcurrency control, where a field-call checks whether another transaction has conflicted with the current one. Since there are no locks, the conflicting transaction has to be aborted and retried (or, in the case of Oracle, one can commit the work completed thus far, since the exception is thrown at statement-execution time).
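Continuing the toy sketch above, a first-committer-wins check can be layered on at commit time. The ConflictError exception and the application-level retry loop are assumptions for illustration, not how any real engine surfaces the conflict.

# First-committer-wins on top of the toy store: the commit aborts if any
# key this transaction wrote was committed by someone else after this
# transaction's snapshot was taken, and the caller retries.
class ConflictError(Exception):
    pass

class OptimisticTransaction(Transaction):
    def commit(self):
        for key in self.writes:
            versions = self.store.versions.get(key, [])
            if versions and versions[-1][0] > self.snapshot_ts:
                raise ConflictError(f"{key} was modified after this transaction began")
        super().commit()

def append_with_retry(store, attempts=3):
    # typical application-level retry loop around an optimistic transaction
    for _ in range(attempts):
        txn = OptimisticTransaction(store, snapshot_ts=store.last_commit)
        try:
            txn.write("x", (txn.read("x") or "") + "!")
            txn.commit()
            return
        except ConflictError:
            continue             # lost the race: start over on a fresh snapshot
    raise RuntimeError("gave up after repeated conflicts")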
Known implementations:
GemStone and ObjectStore implement this through disk-based garbage collection and by comparing the read and write object sets of each transaction.
Does ObjectStore really do this? I used it before and don't recall it having the same locking model as GemStone.
An OracleDatabase implements this using rollback segments and by keeping an interested transaction list in each block (sized by the INITRANS and MAXTRANS parameters).
PostgreSql uses a no-overwrite storage model, keeping old row versions in the table, so it can query prior data versions (see the sketch after this list).
Oracle and InterBase implement MVCC, as does the InnoDB table type in MySql.
AxionDatabase does this with something like a rollback-segment.
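As a rough illustration of the per-row bookkeeping behind the PostgreSql entry above, the following greatly simplified Python sketch records, for each row version, the transaction that created it (xmin) and the one that deleted it (xmax), and tests visibility against a snapshot. The real visibility rules are considerably more involved; this only shows the basic idea.

# Simplified per-tuple visibility test: a version is visible to a snapshot
# if it was created by a transaction that committed at or before the
# snapshot and has not been deleted as of that snapshot.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TupleVersion:
    xmin: int                  # id of the transaction that created this version
    xmax: Optional[int]        # id of the transaction that deleted it, if any
    value: str

def visible(tup, snapshot_xid, committed):
    created_ok = tup.xmin in committed and tup.xmin <= snapshot_xid
    deleted = (tup.xmax is not None
               and tup.xmax in committed
               and tup.xmax <= snapshot_xid)
    return created_ok and not deleted

# Transaction 5 inserted the row; transaction 9 later updated it, which marks
# the old version deleted (xmax=9) and creates a new one (xmin=9).
history = [TupleVersion(xmin=5, xmax=9, value="old"),
           TupleVersion(xmin=9, xmax=None, value="new")]
committed = {5, 9}

print([t.value for t in history if visible(t, 7, committed)])   # ['old']
print([t.value for t in history if visible(t, 9, committed)])   # ['new']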