Persistent Data Structure

The phrase 'persistent data structure' does not refer to database persistence, but rather to the preservation of old versions of the structure across updates. The overloading of the word is unfortunate, but historical. FunctionalProgramming uses these structures extensively.

A classic example is a tree. Given a tree of the form:

 . . . . . A . . . .
 . . . . / . \ . . .
 . . . B . . . C . .
 . . / | \ . . | \ .
 . D . E . F . G . H

We can update E to E' and preserve most of the tree:

 . . . . . A'. . . .
 . . . . / . \ . . .
 . . . B'. . . C . .
 . . / | \ . . | \ .
 . D . E'. F . G . H

The total cost of this update is proportional to the height of the tree rather than to the number of elements - i.e. O(lg(N)). Compared to an in-place update it is more expensive, since we must allocate three new nodes, but this expense is often justified when the old version remains useful (e.g. for concurrent readers in ReadCopyUpdate, or for multiple clients in an ObserverPattern).
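
For concreteness, here is a minimal Haskell sketch of the same idea (path copying) applied to an ordinary binary search tree; the type and function names are illustrative, not taken from any particular library. Inserting a key rebuilds only the nodes on the search path and shares every untouched subtree with the old version:

 -- A minimal sketch of path copying on a persistent binary search tree
 -- (illustrative names, not from any particular library).
 data Tree a = Leaf | Node (Tree a) a (Tree a)
   deriving Show

 -- Inserting a key copies only the nodes on the search path; every
 -- untouched subtree is shared between the old and new versions.
 insert :: Ord a => a -> Tree a -> Tree a
 insert x Leaf = Node Leaf x Leaf
 insert x t@(Node l y r)
   | x < y     = Node (insert x l) y r    -- new node, right subtree shared
   | x > y     = Node l y (insert x r)    -- new node, left subtree shared
   | otherwise = t                        -- key present, whole tree reused

 main :: IO ()
 main = do
   let old = foldr insert Leaf [5, 3, 8, 1, 4, 7, 9] :: Tree Int
       new = insert 6 old   -- 'old' remains valid and unchanged
   print old
   print new

Only O(lg(N)) fresh nodes are allocated per insert; everything else is reachable from both the old and the new root.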

Some data structures, such as FingerTrees, allow amortized O(1) updates (with worst-case O(lg(N))) at the first and last elements, essentially by restructuring the tree so that the ends of the sequence sit near the root rather than deep among the leaves. Thus we can achieve O(1) deques and queues with persistent data structures, as the sketch below illustrates.
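
As one concrete example of such a deque, Haskell's Data.Sequence (from the containers package) is implemented as a finger tree; the short sketch below uses only its end operations:

 import Data.Sequence (Seq, (<|), (|>), viewl, viewr, ViewL(..), ViewR(..))
 import qualified Data.Sequence as Seq

 main :: IO ()
 main = do
   let s  = Seq.fromList [2, 3, 4] :: Seq Int
       s' = (1 <| s) |> 5          -- amortized O(1) at either end
   -- Inspecting either end leaves s' intact for any other readers.
   case viewl s' of
     x :< rest -> print (x, rest)
     EmptyL    -> putStrLn "empty"
   case viewr s' of
     rest :> x -> print (rest, x)
     EmptyR    -> putStrLn "empty"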


see http://en.wikipedia.org/wiki/Persistent_data_structure

