A Transaction Processing Monitor (TP monitor) is a systems tool for configuring and managing the use of computing resources (terminals, printers, database resources) by users (people, online application programs, services) whose workloads are typically interactive.
Processing of transactions is important, but perhaps the monitoring aspect of this type of software is the key point. Monitoring can include, but is not limited to, logging, resource balancing, and security management. A TransactionProcessingMonitor has the primary role of coordinating services, much like an operating system, but does so at a higher level of granularity and can span multiple computing devices. -- OpenAuthor
I'm no TransactionProcessingMonitor expert, but this seems far too vague to me, given my past understanding of the subject. Surely something concerning transactions should appear centrally in this definition! The definition as it stands covers thousands of things that I know for a fact are not TPMs. Perhaps the problem is that the creator of this page is looking to learn the subject? Don't give definitions when you don't know the definition!
Luckily, the original author never claimed to give definitions, only descriptions.
I was hoping other people would see fit to describe this further; however, there have been no constructive additions.
I will therefore have to take these from RogerSessions, an ex-IBM person turned MicroSoft guru turned ??, who authored a book on ComPlus. He called TPMonitors TPMs.
The book quotes the originator of the term as saying that the T originally stood for Teleprocessing, not Transaction. It later became known as Transaction because that sounded much better, and because it reflected the importance of the transaction management done by TP monitors in a three-tiered architecture.
Roger also has this to say:
Examples of commonly used TPMonitors include:
The following software products are probably not classified as TransactionProcessingMonitors:
Anyone interested in educating me in a comparative description of the product features listed above? -- DavidLiu
I agree with the RogerSessions definition. Here are a couple of other important points about TPMs.
Sharing of resources
The first objective of a TransactionProcessingMonitor is to enable sharing of resources and the optimum use of those resources by the application.
This is a key point. For a longer explanation, consider an application that runs a fat client on the client workstation and owns a dedicated link to the database. This kind of application is generally referred to as a two-tier application: one tier being the application and the other the database. In this kind of model:
The flow is different in the three-tier model than in the two-tier application model:
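To make the resource-sharing point concrete, here is a minimal Python sketch (all names are invented for illustration, not any real TPM API) of the core idea: many clients share a small pool of database connections held by a TPM-like middle tier, instead of each fat client owning a dedicated link.

```python
import queue

class Connection:
    """Stand-in for a costly, scarce database link."""
    def __init__(self, conn_id):
        self.conn_id = conn_id

    def execute(self, sql):
        return f"conn#{self.conn_id}: {sql}"

class MiniTpm:
    """Hypothetical middle tier: many clients share pool_size connections."""
    def __init__(self, pool_size):
        self.pool = queue.Queue()
        for i in range(pool_size):
            self.pool.put(Connection(i))

    def call(self, sql):
        conn = self.pool.get()       # borrow a shared connection
        try:
            return conn.execute(sql)
        finally:
            self.pool.put(conn)      # return it for the next client

tpm = MiniTpm(pool_size=2)
# 100 "client requests" are served by only 2 database connections.
results = [tpm.call(f"SELECT {i}") for i in range(100)]
print(len(results), "calls served over 2 shared connections")
```

In the two-tier model, those 100 clients would each hold their own database link; the pool is what lets the TPM achieve "optimum use" of the resource.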
There are several other potential services provided by the TPM:
Coordination of resource managers can be handled by the TPM, most commonly via the OpenGroup? XA standard, which enables two-phase commit (in Unix-like environments). XA is not used much in the industry because it too often conflicts with the main business interest, which is to always capture the cash associated with the transaction being performed (and not to roll back a whole complex transaction because a single sub-transaction failed).
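As an illustration of what XA-style coordination means, here is a toy two-phase commit coordinator in Python (a sketch of the protocol's shape, not the actual XA interface): every resource manager is asked to prepare, and the coordinator commits only if all of them vote yes.

```python
class ResourceManager:
    """Toy participant (e.g. a database or a message queue)."""
    def __init__(self, name, will_prepare=True):
        self.name = name
        self.will_prepare = will_prepare
        self.state = "idle"

    def prepare(self):
        # Phase 1 vote: promise to be able to commit, or refuse.
        self.state = "prepared" if self.will_prepare else "aborted"
        return self.will_prepare

    def commit(self):
        self.state = "committed"

    def rollback(self):
        self.state = "rolled back"

def two_phase_commit(rms):
    if all(rm.prepare() for rm in rms):   # Phase 1: collect votes
        for rm in rms:
            rm.commit()                   # Phase 2: commit everywhere
        return "committed"
    for rm in rms:
        rm.rollback()                     # one "no" vote aborts everything
    return "rolled back"

ok = two_phase_commit([ResourceManager("db"), ResourceManager("mq")])
bad = two_phase_commit([ResourceManager("db"),
                        ResourceManager("mq", will_prepare=False)])
print(ok, "/", bad)
```

The second call shows exactly the business objection in the text: one failing sub-transaction rolls back the whole thing, cash included.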
Instead of XA, global data integrity in DistributedSystems is usually managed half by TPM services (such as enqueuing for later retry in case of a ResponseTimeOut?) and half by the application logic (in which case every bouncing transaction should be correctly specified for the RTO case).
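The enqueue-for-later-retry alternative can be sketched like this (names and the simulated service are hypothetical; real TPMs provide this as a reliable-queue service): a call that times out is parked on a queue and retried later, instead of rolling back the whole business transaction.

```python
from collections import deque

class ResponseTimeout(Exception):
    """Hypothetical stand-in for the TPM's RTO condition."""

calls = {"n": 0}

def flaky_service(payload):
    # Simulated downstream service: times out on the first call, then succeeds.
    calls["n"] += 1
    if calls["n"] == 1:
        raise ResponseTimeout("no answer")
    return f"done: {payload}"

retry_queue = deque()

def submit(payload):
    try:
        return flaky_service(payload)
    except ResponseTimeout:
        retry_queue.append(payload)   # park the work for a later retry
        return "enqueued"

first = submit("debit account 42")                  # times out -> enqueued
retried = [flaky_service(p) for p in retry_queue]   # later: drain the queue
print(first, retried)
```

Note that, as the text says, this pushes half the integrity problem onto the application: the retried transaction must be written so that replaying it is safe.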
Design considerations
A TPM usually provides a kind of development framework for coders. This framework is usually quite restrictive, both in the design of transactions (imposed callbacks, mandatory use of certain TPM APIs, restrictions on memory management, etc.) and in the building of the executables (the requirement to build and link against the TPM libraries following certain conventions). This usually ties the application to one specific TPM.
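A rough Python sketch of what such a framework looks like from the coder's point of view (the registration decorator and dispatch loop are invented for illustration, not any real TPM's API): the application only supplies callbacks, while the framework owns the entry point, the dispatch, and the calling conventions.

```python
class MiniFramework:
    """Hypothetical TPM-style framework: it owns main(); you own callbacks."""
    def __init__(self):
        self.services = {}

    def service(self, name):
        # Imposed convention: every entry point must register through here.
        def register(fn):
            self.services[name] = fn
            return fn
        return register

    def dispatch(self, name, request):
        # The framework, not the application, controls when code runs.
        return self.services[name](request)

tpm = MiniFramework()

@tpm.service("DEBIT")
def debit(request):
    # Application code sees only the callback, never the main loop.
    return {"status": "OK", "amount": -request["amount"]}

reply = tpm.dispatch("DEBIT", {"amount": 100})
print(reply)
```

This inversion of control is why, as noted above, the application ends up specifically designed for one TPM: the callbacks conform to that framework's conventions.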
In terms of transaction design, you may want to read the TransactionDesign page.
About off-the-shelf products
I would say there are three kinds of products:
The Unix-like products started from the assumption that an idle process on a Unix-like machine costs the OS almost nothing. Unlike on mainframes, it was not possible to design a TPM based on starting and stopping processes, because those operations are too costly on Unix-like OSes. So the TPM is generally built on at least one administration process that runs TPM processes in which the application code is embedded. A core dump is a potential problem because it puts the TPM's throughput at stake. The administration process detects cores (or blocked processes) and starts new instances of the dead process. This is a bit less reliable than on mainframes.
The third kind of platform is based on application code running in the same address space as the "TPM" (but is it still a TPM?), i.e. in the same process. Transactions are threads. That implies:
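One implication of the shared address space can be demonstrated with a small Python sketch (the "transactions" here are toy threads, invented for illustration): a buggy transaction can corrupt state that every other in-flight transaction depends on, something isolated processes would have contained.

```python
import threading

shared_cache = {"rate": 1.1}   # state shared by all transaction threads

def bad_transaction():
    # A buggy transaction scribbles on shared state...
    shared_cache["rate"] = None

def good_transaction(results):
    # ...and an unrelated transaction in the same address space suffers.
    try:
        results.append(100 * shared_cache["rate"])
    except TypeError:
        results.append("corrupted")

results = []
t1 = threading.Thread(target=bad_transaction)
t1.start(); t1.join()
t2 = threading.Thread(target=good_transaction, args=(results,))
t2.start(); t2.join()
print(results)
```

In the process-per-worker model of the previous section, the bad transaction's damage would have died with its process.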
Conclusion
Application servers such as JEE or .Net should, in my view, be restricted to small transactional applications or to GUI-related problems (and thus stateful session handling). As soon as you want to develop something heavier, I would recommend having a look at Unix-like TPMs. Mainframe TPMs are quite expensive and very proprietary, but a lot of banks have been using them extensively for decades.
The single-thread versus multi-thread question is quite an important choice. Recently, Google designed the Chrome browser with a multi-process approach for the same reasons TP monitors were designed that way: if one "part" fails, only one process dumps and the rest of the transactions do not suffer, contrary to the JEE or .Net approaches. For the same reason that Google cannot trust web developers, TPM designers never really could trust application developers ;)
That's all folks. Hope that helps. -- OlivierRey