Are these described already somewhere?
Here goes, off the top of my head:
- No threading at all. Asynchronous events that happen in the context of the process (signals, interrupts) are usually not considered threads. The traditional approach to these little asynchronies is to make their handlers respect the integrity of the main thread, e.g. a signal handler that only sets a flag for the main loop to check.
- One thread, the main thread, doing most of the work but starting threads for background tasks when needed. Often used as a workaround for blocking library calls that cannot be made to report completion through the system's event multiplexer (e.g. select()). A sketch follows below.
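For concreteness, a minimal sketch of this pattern in Java (the slowLookup() call is a made-up stand-in for a blocking library routine): the main thread keeps servicing its event loop while a background thread does the blocking work and hands the result back through a queue the loop polls.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class BackgroundTaskExample {
        // Queue used to hand completed results back to the main thread.
        private static final BlockingQueue<String> completions = new LinkedBlockingQueue<>();

        // Stand-in for a blocking library call with no asynchronous interface.
        private static String slowLookup(String key) throws InterruptedException {
            Thread.sleep(1000);          // pretend to block on I/O
            return "result for " + key;
        }

        public static void main(String[] args) throws InterruptedException {
            // The main thread starts a background thread for the blocking work...
            Thread worker = new Thread(() -> {
                try {
                    completions.put(slowLookup("some key"));
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            worker.start();

            // ...and keeps servicing its own event loop, checking for completions.
            while (true) {
                String done = completions.poll();   // non-blocking check
                if (done != null) {
                    System.out.println("background task finished: " + done);
                    break;
                }
                // handle other events here
                Thread.sleep(10);
            }
            worker.join();
        }
    }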
- One thread, the main thread, that only exists to start worker threads for any real work. Often uses a pool of worker threads to avoid thread creation / termination costs. All code critical to state integrity has to be made thread-safe under explicit locking. Further divided into (see the sketch after this list):
- coarse locking, where big subsystems are written in the "traditional" way (no thread safety) and share a common lock ensuring that at most one thread executes in the subsystem at any time (like many OS kernels);
- fine-grained locking, where each object has its own lock (if it needs one);
- no locking, where no subsystem modifies global or shared heap-allocated state (like scripts executed in a web server);
- some suitable combination of the above.
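A minimal Java sketch of the worker-pool model with fine-grained locking, under the assumption that the shared state is just a counter (the Counter class is invented for illustration): the main thread only submits work, the pool reuses its threads, and the shared object carries its own lock.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class WorkerPoolExample {
        // Shared object with its own lock (fine-grained locking).
        static class Counter {
            private long value = 0;
            synchronized void increment() { value++; }   // lock is per-object
            synchronized long get() { return value; }
        }

        public static void main(String[] args) throws InterruptedException {
            Counter counter = new Counter();
            // The pool avoids paying thread creation/termination cost per task.
            ExecutorService pool = Executors.newFixedThreadPool(4);

            for (int i = 0; i < 1000; i++) {
                pool.submit(counter::increment);   // main thread does no real work itself
            }

            pool.shutdown();
            pool.awaitTermination(10, TimeUnit.SECONDS);
            System.out.println("final count: " + counter.get());   // prints 1000
        }
    }

Coarse locking would instead put one lock around the whole subsystem; the no-locking variant would give each task its own private state.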
- Dedicated threads. All threads have their own "areas" in the application and communicate with each other via well-defined, thread-safe interfaces. Sometimes referred to as message passing (see the sketch after this list). Further divided into:
- coarse division, where there are relatively big subsystems with one thread for each. Akin to old-fashioned processes;
- fine division, where each component (object, structure) has its own thread (like Erlang programs).
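As an illustration of the dedicated-thread style, here is a rough Java sketch (class and message names are made up): a "logger" subsystem owns its state and its thread, and other threads talk to it only by putting messages on its queue, so no further locking is needed.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class MessagePassingExample {
        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<String> inbox = new LinkedBlockingQueue<>();

            // "Logger" subsystem: its thread is the only one touching its state.
            Thread logger = new Thread(() -> {
                try {
                    while (true) {
                        String msg = inbox.take();      // wait for a message
                        if (msg.equals("STOP")) break;  // poison pill ends the thread
                        System.out.println("log: " + msg);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            logger.start();

            // Any other subsystem communicates only via the queue.
            for (int i = 0; i < 3; i++) {
                inbox.put("event " + i);
            }
            inbox.put("STOP");
            logger.join();
        }
    }

The "poison pill" STOP message is just one common way to shut such a dedicated thread down cleanly.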
- Some combination of dedicated / moving threads, e.g.
- threads that most of the time stay in their dedicated subsystem, but may make direct (i.e. not message-based) calls into other subsystems (see the sketch below).
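A rough Java sketch of this mixed style (all names invented): a worker thread normally serves its own job queue, but takes a shortcut by calling directly into a shared Statistics subsystem, which therefore has to protect its state with its own lock.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class MixedStyleExample {
        // Subsystem reachable by direct calls from foreign threads,
        // so its methods are synchronized.
        static class Statistics {
            private long processed = 0;
            synchronized void recordProcessed() { processed++; }
            synchronized long processedCount() { return processed; }
        }

        public static void main(String[] args) throws InterruptedException {
            Statistics stats = new Statistics();
            BlockingQueue<String> jobs = new LinkedBlockingQueue<>();

            // Dedicated worker: stays in its own "area" (the job queue)...
            Thread worker = new Thread(() -> {
                try {
                    while (true) {
                        String job = jobs.take();
                        if (job.equals("STOP")) break;
                        // ...but makes a direct, non-message-based call
                        // into the Statistics subsystem.
                        stats.recordProcessed();
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            worker.start();

            jobs.put("job 1");
            jobs.put("job 2");
            jobs.put("STOP");
            worker.join();
            System.out.println("processed: " + stats.processedCount());
        }
    }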
Others?
See also MessagingAsAlternativeToMultiThreading
CategoryConcurrency