This is my dissenting opinion on the (bogus) idea that ThreadsAreOptimizations. -- WylieGarvin
In the beginning, computers could only do one thing at a time. Then some clever people came up with the idea of letting computers do more than one thing at a time, by having an OS (OperatingSystem) manage the actual resources of the machine and share them among the running programs. The OSes used the mechanism of Processes (which for my purposes are much the same as Threads) to represent all the different things that the computer was doing at once. Since different Processes needed different machine resources at different times, it sometimes happened that while one process was "blocked", waiting for some resource to become available or some operation to complete, another process could do something useful with the machine's other resources. This was great, and it probably led to the mistaken belief that ThreadsAreOptimizations.
Now:
In asynchronous systems (e.g. pipes or message queues), the system lets a programmer have one thread juggle four different things at once. Indeed, back when OSes provided only processes and not threads, this style became dominant, because people needed their different computational activities to share the memory and resources of one process. This is the bad way to use threads. The OS contains machinery for remembering the state of a thread (registers, stack, etc.), keeping track of which threads are more important than others (priorities), selecting which thread will get the CPU (scheduling), making sure threads that haven't run for a while get their chance (preventing starvation), and lots of other subtle capabilities. But because the system uses an asynchronous model, people ignore all that and try to do a bunch of different things with one thread, and this is where the complexity of threading usually comes from. It leads to smelly things like a giant state machine for keeping track of which state each of the thread's tasks is in, and manual management of the interactions between them, as sketched below.
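To make the smell concrete, here is a minimal sketch (in Go, chosen only for brevity; the task names and states are hypothetical) of one thread hand-rolling a state machine to juggle several tasks. The programmer, not the OS scheduler, has to remember where each task left off:

 package main

 import "fmt"

 // Each activity is reduced to a state number; the loop below plays
 // scheduler by hand, advancing whichever task happens to be runnable.
 type task struct {
     name  string
     state int // 0 = waiting for input, 1 = processing, 2 = done
 }

 func main() {
     tasks := []*task{{name: "download"}, {name: "parse"}, {name: "render"}}
     remaining := len(tasks)
     for remaining > 0 {
         for _, t := range tasks {
             switch t.state {
             case 0:
                 fmt.Println(t.name, "got its input")
                 t.state = 1
             case 1:
                 fmt.Println(t.name, "finished processing")
                 t.state = 2
                 remaining--
             }
         }
     }
 }

Every new task, and every new interaction between tasks, multiplies the states this one loop has to track.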
In systems with synchronous IPC (InterProcessCommunication), the mechanisms of ThreadCommunication? and ThreadSynchronization? are combined into one mechanism (e.g. SendReceiveReply). The system essentially forces you to have one thread for each task or activity you are trying to do, since any outside service you call to help complete the task will block your thread until it is ready to return the results of the call. This is the good way to use threads. It takes most of the pain out of synchronizing and coordinating things, and it helps you build simple, deterministic, provably correct multithreaded systems (see the sketch after this paragraph). The other good news is that with synchronous OSes it is possible to have VeryLightweightThreads--most of the "heaviness" of threads in today's operating systems is due to their asynchrony and bogus OS design.
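Here is a minimal sketch of the same idea with blocking message passing and one thread per activity. Go channels stand in for a SendReceiveReply-style kernel primitive, so this is an analogy rather than the QNX API; the request/reply framing is invented for illustration:

 package main

 import "fmt"

 // A request/reply pair sent over an unbuffered channel: the caller blocks
 // until the server replies, so communication and synchronization are one act.
 type request struct {
     payload string
     reply   chan string
 }

 func server(calls chan request) {
     for req := range calls {
         req.reply <- "handled: " + req.payload // the reply unblocks the caller
     }
 }

 func main() {
     calls := make(chan request)
     go server(calls)

     // The client thread simply blocks on each call, SendReceiveReply-style;
     // no state machine is needed to remember where it was.
     reply := make(chan string)
     calls <- request{payload: "do something", reply: reply}
     fmt.Println(<-reply)
 }

The caller's code reads straight through: send, block, use the reply. Nothing has to record "which state am I in", because the thread's own program counter and stack remember that for free.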
There are real systems based on synchronous IPC (e.g. QNX [QnxOperatingSystem], Neutrino, L4 and L4/Linux). Unfortunately, the rest of the world seems to think that the pipe-based Unix model, the message-based Windows model, and so on are "good enough", and has inadvertently forced developers of multithreaded software to suffer terribly. Bleah.
I think FlowBasedProgramming (FBP) embodies many of these ideas - perhaps more by dumb luck than good planning. We found that FBP allows us to build highly complex, reliable, maintainable and high-performance systems that have functioned well for 30 years. However, I think I take the opposite view to that stated in SendReceiveReply - IMHO the most basic function is for a thread to receive and then send the result onwards. From this point of view, SendReceiveReply just becomes a loop topology in the diagram of process connections. To see this visually, do a find on the name MCBSIM in http://www.jpaulmorrison.com/cgi-bin/wiki.pl?DrawFlow.
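A rough sketch of that "receive, then send the result onwards" shape, with goroutines and bounded channels standing in for FBP processes and connections (the stage name and packets below are made up for illustration, and Go is used only as a convenient notation):

 package main

 import (
     "fmt"
     "strings"
 )

 // One FBP-style process: receive a packet from upstream, transform it,
 // and send the result onwards downstream.
 func upperStage(in <-chan string, out chan<- string) {
     for packet := range in {
         out <- strings.ToUpper(packet)
     }
     close(out)
 }

 func main() {
     in := make(chan string, 4)  // bounded connection into the stage
     out := make(chan string, 4) // bounded connection out of the stage
     go upperStage(in, out)

     for _, p := range []string{"first", "second", "third"} {
         in <- p
     }
     close(in)

     for result := range out {
         fmt.Println(result)
     }
 }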
This view is shared by the authors of several articles on LindaLanguage. Here is a quote: "In our experience, processes in a parallel program usually don't care what happens to their data, and when they don't, it is more efficient and conceptually more apt to use an asynchronous operation like Linda's out than a synchronous procedure call.... It's trivial, in Linda, to implement a synchronous remote-procedure-call-like operation in terms of out and in. There is no reason we know of, however, to base an entire parallel language on this one easily programmed but not crucially important special case." These remarks apply equally well to FBP.
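Here is a toy sketch of the point in the quote: build an asynchronous out and a blocking in over a tiny tuple space, and a synchronous remote-procedure-call becomes just out(request) followed by in(reply). The tuple space, tags and framing below are hypothetical stand-ins written in Go, not LindaLanguage itself:

 package main

 import (
     "fmt"
     "sync"
 )

 // A toy tuple space: Out is asynchronous (fire and forget), In blocks
 // until a tuple with a matching tag appears.
 type tupleSpace struct {
     mu   sync.Mutex
     bags map[string]chan string
 }

 func newTupleSpace() *tupleSpace {
     return &tupleSpace{bags: make(map[string]chan string)}
 }

 func (ts *tupleSpace) bag(tag string) chan string {
     ts.mu.Lock()
     defer ts.mu.Unlock()
     if ts.bags[tag] == nil {
         ts.bags[tag] = make(chan string, 16)
     }
     return ts.bags[tag]
 }

 func (ts *tupleSpace) Out(tag, value string) { ts.bag(tag) <- value } // asynchronous post
 func (ts *tupleSpace) In(tag string) string  { return <-ts.bag(tag) } // blocking take

 func main() {
     ts := newTupleSpace()

     // A worker that services "request" tuples and posts "reply" tuples.
     go func() {
         for {
             req := ts.In("request")
             ts.Out("reply", "done: "+req)
         }
     }()

     // Synchronous RPC is just out(request) followed by in(reply).
     ts.Out("request", "compute something")
     fmt.Println(ts.In("reply"))
 }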
The problem, though, is that threads still involve swapping processor state. The more registers there are, the longer this takes. This is particularly nasty in real-time, interrupt-driven systems. And, as if that weren't enough, many embedded platforms can be made to "multitask" with very high performance using a pure event-driven architecture alone, yet don't have enough resources to adequately implement both a multithreading engine and the solution to the problem at hand.
Also, object-oriented programming is event-driven programming, but with language-enforced mechanisms to make it tractable. Objects contain the global state, while methods are event handlers. This is why OO maps so well to GUI programming, and why most OO programs have a "procedural" core that characteristically follows the "initialize, event-loop, cleanup" pattern.
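A small sketch of that correspondence (the widget and event names are invented for illustration): the object holds the state, its methods handle the events, and the procedural core is nothing but initialize, event loop, cleanup:

 package main

 import "fmt"

 // The object holds the state; its methods are the event handlers.
 type counterWidget struct {
     clicks int
 }

 func (w *counterWidget) OnClick()  { w.clicks++ }
 func (w *counterWidget) OnRedraw() { fmt.Println("clicks so far:", w.clicks) }

 func main() {
     w := &counterWidget{} // initialize
     // event loop: dispatch each incoming event to the matching handler
     for _, event := range []string{"click", "click", "redraw"} {
         switch event {
         case "click":
             w.OnClick()
         case "redraw":
             w.OnRedraw()
         }
     }
     fmt.Println("shutting down") // cleanup
 }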
--SamuelFalvo?
See also: TableOrientedSynchronization