Concurrency primitives are built on other, lower-level primitives supplied by underlying software or hardware. Higher levels of a given cactus (usually) contain primitives that operate at higher levels of abstraction. Different hardware and software platforms and applications may have slightly differing cacti.
In this sense, the properties of the cactus are determined more by the properties of the human brain and of the current state of the art in concurrent computing than by any underlying theoretical properties. This is in contrast to the complexity hierarchies found in the theory of computing or Herlihy's hierarchies of atomic primitives (GlobalConsensus). Those hierarchies are used to explore properties of abstract machines, while the cacti here are used to classify patterns that occur in parallel software.
These cacti thus form one possible index into these patterns -- an index that is useful for someone trying to understand the architecture, design, and implementation of synchronization primitives and mechanisms. Other indices would be more helpful to someone trying to construct an application, still others to people reverse-engineering an application, and yet more to people debugging an application (fault tree!).
In keeping with a fine old computer-science tradition, these cacti will be drawn upside down.
 + ElectricalSignal
 |
 + FlipFlop
 |
 +=======================+
 |                       |
 + MemoryTransfer        + DeviceInterrupt
 |                       |
 + CacheProtocol         + InterruptMask
 |
 + SharedMemory
 |
 +===============================+
 |                               |
 + AtomicOperation               + LoadStore
 |                               |
 +===============+               + DEKKER/LAMPORT/ETC.
 |               |
 + BusyWait      + WaitFree
 |
 + PreemptiveBlock
 |
 + SoftwareMessage
 |
 +=======================+
 |                       |
 + ElectionAlgorithm     + RemoteProcedure
                         |
                         +=======================+
                         |                       |
                         + DistributedLock       + DistributedMemory

ElectricalSignal: a set of electrical/optical signals sent from one IC to another.
FlipFlop: a circuit used (among other things) to synchronize signals.
DeviceInterrupt: interrupt sent from I/O devices to CPUs or between CPUs.
InterruptMask: mechanism for suppressing or deferring interrupts across critical sections of code.
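For concreteness, a minimal sketch of an InterruptMask in use, assuming a Linux-kernel context (local_irq_save/local_irq_restore are the kernel's interrupt-masking macros; the packet counter is a made-up example):

 /* Protect a short critical section from interrupts on the local CPU.
  * On a multiprocessor this excludes only this CPU's interrupt
  * handlers; excluding other CPUs still requires a lock. */
 #include <linux/irqflags.h>

 static unsigned long packets_seen;     /* shared with an interrupt handler */

 static void count_packet(void)
 {
         unsigned long flags;

         local_irq_save(flags);          /* defer interrupts on this CPU */
         packets_seen++;                 /* critical section */
         local_irq_restore(flags);       /* restore the previous mask */
 }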
MemoryTransfer: hardware message that transfers values between CPUs, caches, and memory modules.
CacheProtocol: cache-coherency protocol that ensures that all components of a shared-memory computer system will agree on the current value contained in any given memory location.
SharedMemory: the illusion, provided by a set of caches and memory modules, of a global memory shared among a set of CPUs.
LoadStore: a family of mutual-exclusion algorithms based on coordinated sequences of simple loads and stores. Dekker's algorithm and Lamport's algorithms are well-known examples of this group. Most current software instead relies on explicit hardware support for mutual exclusion; see AtomicOperation below.

AtomicOperation: a genus of computer instructions that provide support for mutual exclusion and parallel computation. Examples include the FetchAndPhi class of instructions (which in turn includes FetchAndAdd and TestAndSet), AtomicExchange, CompareAndExchange, LoadLinkedAndStoreConditional, and OklahomaUpdate.
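A minimal sketch of the LoadStore genus: Peterson's two-thread algorithm, a close relative of Dekker's. The C11 _Atomic qualifiers are an assumption; on modern hardware the loads and stores must be sequentially consistent (or fenced) for the algorithm to work at all.

 #include <stdatomic.h>
 #include <stdbool.h>

 static _Atomic bool interested[2];      /* "I want the critical section" */
 static _Atomic int  turn;               /* whose turn it is to defer */

 void lock(int self)                     /* self is 0 or 1 */
 {
     int other = 1 - self;

     atomic_store(&interested[self], true);
     atomic_store(&turn, other);         /* politely let the other go first */
     while (atomic_load(&interested[other]) && atomic_load(&turn) == other)
         ;                               /* busy-wait */
 }

 void unlock(int self)
 {
     atomic_store(&interested[self], false);
 }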
WaitFree (WaitFreeSynchronization): a genus of nonblocking synchronization algorithms based on CompareAndExchange or LoadLinkedAndStoreConditional that guarantee that each process will complete any given operation in a finite number of steps. Some members of this genus can withstand the termination of participating processes at any time. This attractive property is counterbalanced by high ConTention at high loads.
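A minimal sketch of wait freedom, assuming C11 atomics: a counter bumped with a FetchAndAdd-style instruction completes every increment in a bounded number of steps no matter what other threads do. (A CompareAndExchange retry loop, by contrast, would only be lock-free, since a thread can in principle retry forever.)

 #include <stdatomic.h>

 static atomic_long hits;

 long record_hit(void)
 {
     /* one atomic instruction: never blocks, never retries */
     return atomic_fetch_add(&hits, 1);
 }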
BusyWait (BusyWaiting): the more traditional genus of spinlock-based algorithms. In contrast to WaitFree, terminating a process that holds a lock can hang the parallel application. However, processes waiting for the lock can avoid contending with the current holder of the lock for the data.
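A minimal sketch of the BusyWait genus, assuming C11 atomics: a test-and-test-and-set spinlock. Waiters spin on an ordinary load of the flag (their own cached copy) and only attempt the atomic exchange once the lock looks free, which is how they avoid contending with the lock holder.

 #include <stdatomic.h>
 #include <stdbool.h>

 typedef struct { _Atomic bool held; } spinlock;

 void spin_lock(spinlock *l)
 {
     for (;;) {
         /* test: spin locally until the lock looks free */
         while (atomic_load_explicit(&l->held, memory_order_relaxed))
             ;
         /* test-and-set: try to grab it atomically */
         if (!atomic_exchange_explicit(&l->held, true, memory_order_acquire))
             return;
     }
 }

 void spin_unlock(spinlock *l)
 {
     atomic_store_explicit(&l->held, false, memory_order_release);
 }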
PreemptiveBlock: applications with critical sections that may be held for long periods (e.g., a lock might be held while doing I/O) may not wish to spin while waiting for locks. A blocking mutual-exclusion primitive allows the application to give up the CPU while waiting for the lock.
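A minimal sketch of a PreemptiveBlock primitive, assuming POSIX threads; the log-writing details are illustrative only.

 #include <pthread.h>
 #include <stdio.h>

 static pthread_mutex_t log_lock = PTHREAD_MUTEX_INITIALIZER;

 void append_log(FILE *log, const char *line)
 {
     pthread_mutex_lock(&log_lock);      /* sleeps (gives up the CPU) if contended */
     fputs(line, log);                   /* long critical section: does I/O */
     fflush(log);
     pthread_mutex_unlock(&log_lock);
 }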
SoftwareMessage: a message passed between processes or threads in software. These normally require some form of mutual exclusion in shared-memory environments (although in some cases only BusyWait is needed).
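A minimal sketch of a SoftwareMessage built on shared memory, assuming POSIX threads; the one-slot mailbox and its names are made up for illustration. The mutual exclusion mentioned above shows up as the mutex guarding the slot.

 #include <pthread.h>
 #include <stdbool.h>

 /* One-slot mailbox; fields assumed initialized with
  * PTHREAD_MUTEX_INITIALIZER / PTHREAD_COND_INITIALIZER. */
 typedef struct {
     pthread_mutex_t lock;        /* guards the slot */
     pthread_cond_t  not_full;    /* signalled when the slot empties */
     pthread_cond_t  not_empty;   /* signalled when a message arrives */
     bool            full;
     void           *msg;
 } mailbox;

 void mailbox_send(mailbox *m, void *msg)
 {
     pthread_mutex_lock(&m->lock);
     while (m->full)                         /* wait for the slot to empty */
         pthread_cond_wait(&m->not_full, &m->lock);
     m->msg  = msg;
     m->full = true;
     pthread_cond_signal(&m->not_empty);     /* wake a receiver */
     pthread_mutex_unlock(&m->lock);
 }

 void *mailbox_receive(mailbox *m)
 {
     pthread_mutex_lock(&m->lock);
     while (!m->full)                        /* wait for a message */
         pthread_cond_wait(&m->not_empty, &m->lock);
     void *msg = m->msg;
     m->full = false;
     pthread_cond_signal(&m->not_full);      /* wake a sender */
     pthread_mutex_unlock(&m->lock);
     return msg;
 }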
RemoteProcedure: remote procedure call, a generalized notion that includes asynchronous messages (AKA client/server).
DistributedLock: a distributed lock manager: PreemptiveBlock without shared memory, or a particular form of ElectionAlgorithm.
DistributedMemory: distributed shared memory, which recurses on SharedMemory to create an ingrown cactus.
ElectionAlgorithm: Note that all mutual-exclusion primitives are by definition election algorithms. The term ``election algorithm'' is normally reserved for the more complex elections among entities that do not share memory. Even more complex election algorithms allow entities and/or the communications networks connecting them to fail. (Is this the same as GlobalConsensus?)
Nice page. It is missing Timers, the simplest form of lock retry, as used in Ethernet and in sleep(). No manager is required, though you still need shared collision detection.
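A minimal sketch of that timer-based retry, assuming C11 atomics and the POSIX nanosleep(); the starting delay and the cap are arbitrary.

 #include <stdatomic.h>
 #include <stdbool.h>
 #include <time.h>

 void backoff_lock(_Atomic bool *held)
 {
     struct timespec delay = { 0, 1000 };   /* start at one microsecond */

     /* the atomic exchange doubles as the shared "collision detection" */
     while (atomic_exchange(held, true)) {
         nanosleep(&delay, NULL);           /* back off instead of spinning */
         delay.tv_nsec *= 2;                /* exponential, Ethernet-style... */
         if (delay.tv_nsec > 1000000)
             delay.tv_nsec = 1000000;       /* ...capped at one millisecond */
     }
 }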
(Note: Most of the WikiLinks on this page are nulled out by SixSingleQuotes. This is because most probably don't need a page to describe them. If you find any that you think do deserve a page, feel free to reactivate the links by removing the quotes.)