An arrangement where human workers are employed to interface between other humans (usually customers) and some sort of ExpertSystem?. In some cases the human may be part of the ExpertSystem?, bringing DomainKnowledge to bear to assist the customer. Medical transcription is one example--one must have a good deal of medical training (at least comparable to a registered nurse's) to become a transcriptionist.
In many other such arrangements (the subject of this page), the human is not encouraged to think, and is actively prohibited from doing so. In the most extreme examples, the human's only tasks are to a) read what is on the screen to the customer, and b) type in the customer's response.
[I once wrote a user interface that broke a very complex workflow into single question/answer pairs. Minimum wage employees would sit in front of a terminal all day. A call would come onto their headset. They would read the text on the screen and type in the caller's response. We were using them as cheap voice recognition software. We tried several electronic systems but none could match their accuracy. -- EricHodges]
I interacted with such a system recently at great length. Its voice recognition accuracy was undeniably far better than that of software equivalents, but it was much worse at handling conditionals than software is. "No! I don't want to be transferred to billing! No, I already talked to them, they transferred me to you. No, I don't have...no, I already told you...no, don't ask me those same 5 questions again, I already...no...no...I'm trying to tell you...no...just stop...please..."
[That's not the sort of system I'm talking about. All of the workflow paths were determined in software. All the human did was read the words on the screen and enter the response. -- EH]
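The arrangement EricHodges describes can be sketched in a few lines. This is a toy reconstruction, not his actual code: the workflow graph, step names, and prompts below are all hypothetical. The point is that the software holds every branch, and the human's entire job is to read one prompt and type one reply.

```python
# Hypothetical sketch: the software owns the workflow as a graph of
# single question/answer steps. The operator only reads the prompt
# aloud and types the caller's reply; all branching is in software.

WORKFLOW = {
    "start": {
        "prompt": "Is this call about billing? (yes/no)",
        "next": {"yes": "billing", "no": "support"},
    },
    "billing": {
        "prompt": "Please read your account number.",
        "next": None,  # terminal step in this sketch
    },
    "support": {
        "prompt": "Briefly describe the problem.",
        "next": None,
    },
}

def run_call(answers):
    """Walk the workflow, consuming one typed answer per step.

    `answers` stands in for the operator's keyboard; in production it
    would be live input from the terminal.
    """
    transcript = []
    step = "start"
    it = iter(answers)
    while step is not None:
        node = WORKFLOW[step]
        transcript.append(node["prompt"])     # operator reads this aloud
        if node["next"] is None:
            break
        reply = next(it).strip().lower()      # operator types caller's reply
        step = node["next"].get(reply, step)  # unrecognized reply: re-ask
    return transcript

print(run_call(["yes"]))
```

Note that the operator never makes a decision: an unrecognized reply just re-asks the same question, which is exactly the "not encouraged to think" property described above.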
Software at least tends to remember the values of its own variables and does if-then-elses pretty accurately.
Of course, if the SW is programmed not to let you speak to a supervisor, then you're not gonna speak to a supervisor. Arguing with a human, even if said human is not empowered to help you, can bring some relief. Arguing with a fully automated system is annoying...
True. I was able to enjoy the full benefits of both approaches, since I had to make a series of calls in order to navigate through their 7-level-deep voice menus (which always hung up on me when I chose the wrong path) before reaching the human voice recognition system (which was clearly in India, btw, speaking of outsourcing).
I thus delighted in both an unhelpful pure software system as well as an unhelpful human interface. Lucky me.
Given that voice synthesis is considerably more advanced than voice recognition, why let the HVRS speak to customers? This could save money; the voice recognition duty could be "batched"--the human's job could be little more than typing a never-ending series of random customer queries (from many different calls, interleaved) into the computer, which then processes them and answers the customer back in an obviously mechanical voice. This would probably allow 30%-40% of the staff to be sacked, and would further put the customer in his/her place--there is no chance (barring a SW bug) that the mechanical synthesized voice would ever accidentally be helpful to the customer. (Of course, delays would still need to be built into the system so that the customer requires more billable minutes, assuming that this is a concern.)
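The batched scheme above amounts to a simple dispatcher. Here is a toy sketch (every class, method, and reply string is hypothetical): utterances from many concurrent calls are interleaved into one queue, a single typist transcribes them in arrival order, and each result is routed back to its originating call as a synthesized reply.

```python
# Hypothetical sketch of the "batched" HVRS: the typist never hears a
# whole call, only interleaved clips; the software alone decides what
# the mechanical voice says back.

from collections import deque

class BatchedTranscription:
    def __init__(self):
        self.queue = deque()   # (call_id, audio_clip) pairs, interleaved
        self.responses = {}    # call_id -> list of synthesized replies

    def enqueue_utterance(self, call_id, audio_clip):
        self.queue.append((call_id, audio_clip))

    def typist_transcribes(self, transcribe):
        """The human types each clip; software answers every caller."""
        while self.queue:
            call_id, clip = self.queue.popleft()
            text = transcribe(clip)  # the human's only job
            self.responses.setdefault(call_id, []).append(self.answer(text))

    def answer(self, text):
        # Placeholder decision logic; the point is no human ever speaks.
        if "supervisor" in text.lower():
            return "[synthesized] No supervisor is available."
        return "[synthesized] Please hold."

hvrs = BatchedTranscription()
hvrs.enqueue_utterance(1, "clip-a")
hvrs.enqueue_utterance(2, "clip-b")
hvrs.enqueue_utterance(1, "clip-c")
hvrs.typist_transcribes(lambda clip: {"clip-a": "I want a supervisor",
                                      "clip-b": "billing question",
                                      "clip-c": "hello"}[clip])
print(hvrs.responses)
```

Because the typist sees only decontextualized clips, the 30%-40% saving would come from keeping every typist busy across all calls at once rather than idling on one caller's silences.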
And the mechanical system could be arranged to never, ever let the customer speak to a real live human--especially someone with authority--who might be tempted to solve the customer's problem (especially if said solution might cost the company any money).
Why do I suspect that this has already been done?