If you run a public Web site and need to defend it against automated scripts that simulate users, you must enforce constraints that also break legitimate automated testing, such as testing via HttpUnit. You might restrict the number of hits permitted from a single address, require an e-mail address and a mail-back confirmation, or ultimately implement a "Reverse Turing Test":
Humans conduct manual "TuringTests" to learn if ArtificialIntelligence solutions can impersonate humans. Computers conduct "ReverseTuringTests" to ensure remote users are not computers. ReverseTuringTests present challenges that humans can solve easily but computers cannot yet solve.
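As a sketch of the first constraint above - restricting the hits permitted from a single address - here is a minimal per-address rate limiter; the window size and threshold are illustrative, not recommendations:

 import time
 from collections import defaultdict, deque

 # Hypothetical limits: at most 30 requests per address per 60-second window.
 WINDOW_SECONDS = 60
 MAX_HITS = 30

 recent_hits = defaultdict(deque)  # address -> timestamps of recent requests

 def allow_request(address):
     """Return True if this address is still under the rate limit."""
     now = time.time()
     hits = recent_hits[address]
     # Discard timestamps that have aged out of the window.
     while hits and now - hits[0] > WINDOW_SECONDS:
         hits.popleft()
     if len(hits) >= MAX_HITS:
         return False  # too many hits; treat the caller as a possible script
     hits.append(now)
     return True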
These malicious network issues arise from the native insecurity of raw TcpIp packets: we have no way to verify that they came from whom they say they came from, so we must take their origin on trust. Until computer scientists sort these problems out, down in their LogicLayer, we must write irritating security systems that make our public Web sites test-resistant.
Tests should temporarily disable security to do their jobs. A test might send a private Web server a signal on a port other than 80 to disable its security, and the server could trust that signals on ports other than 80 must come from the local side of a firewall. This strategy leaves the security system intact for manual testing. -- PhlIp
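A minimal sketch of that signal, assuming a hypothetical admin listener on port 8081 that the firewall blocks from the outside; the test disables security, does its work, and re-enables it:

 import socket

 ADMIN_PORT = 8081  # hypothetical non-80 port, reachable only behind the firewall

 def set_security(enabled, host="localhost"):
     """Ask the server's admin listener to turn its security checks on or off."""
     with socket.create_connection((host, ADMIN_PORT)) as sock:
         sock.sendall(b"SECURITY ON\n" if enabled else b"SECURITY OFF\n")

 set_security(False)
 try:
     pass  # drive the site here with HttpUnit-style automated requests
 finally:
     set_security(True)  # leave security intact for manual testing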
Devising a ReverseTuringTest is a matter of compromise. You want to require enough intelligence to foil computers, but not so much that you annoy your users. The Yahoo example is a bad compromise - it looks easy to foil using regular OpticalCharacterRecognition techniques. A compromise bad in the other direction would require the user to beat the web site at a game of Go.
What matters is the rate at which a word-picture generator foils OCR, not whether any single picture does. The above picture would probably pass OCR, but some generators can produce a word with a twist, a confusing background, lines through the word, etc.
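For concreteness, here is a sketch of such a generator using the Pillow imaging library; the per-letter rotation and the strike-through lines correspond to the twist and the lines through the word mentioned above (all sizes, angles, and counts are illustrative):

 import random
 from PIL import Image, ImageDraw, ImageFont

 def make_word_picture(word, size=(200, 60)):
     """Render each letter with a random twist, then strike lines through the word."""
     image = Image.new("RGB", size, "white")
     draw = ImageDraw.Draw(image)
     font = ImageFont.load_default()
     x = 10
     for letter in word:
         # Draw the letter on its own transparent tile, rotate it, and paste it in.
         tile = Image.new("RGBA", (30, 40), (255, 255, 255, 0))
         ImageDraw.Draw(tile).text((5, 5), letter, fill="black", font=font)
         tile = tile.rotate(random.uniform(-30, 30), expand=True)
         image.paste(tile, (x, 10), tile)
         x += 25
     # Lines through the word, to confuse OCR's character segmentation.
     for _ in range(4):
         draw.line([(0, random.randint(0, size[1])),
                    (size[0], random.randint(0, size[1]))], fill="gray", width=1)
     return image

 make_word_picture("wiki").save("challenge.png")

Feeding a batch of such pictures to stock OCR and counting correct reads would measure the foil rate described above.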
Another possible compromise is a series of questions in the form "Which of the following facial expressions does the pictured human express?"
which is bad for other reasons - people with AspergersSyndrome, or other autistic spectrum disorders, would have severe difficulty with it (I know I'd probably fail it...) -- MikeRoome
C'mon now... everyone can tell happy vs. angry; it's not as if you'd be subtle and try to trick the user. Even for those who couldn't figure it out... well, they'd make up such a minuscule percentage of total users that it's not worth worrying about. Web sites don't need to be 100% accessible to all possible users... if you cover 95%, that's good enough for most any site.
[Say 5% equals 5 million users who have been rejected for basically no reason except bad technology and bad attitude. That's 5 million cases of negative word of mouth advertising, possibly millions of dollars of revenue lost, etc.]
[However, the technology doesn't have to be perfect to avoid this problem; it just needs to have a fallback in those cases where it fails.]
It'd actually be a larger number denied access than for the word-recognition captcha - in addition to Aspies, the facial-recognition system still wouldn't solve any of the accessibility issues for those who are blind or have visual processing difficulties. So we've introduced more barriers to humans than before. This is a Bad Thing when you're potentially part of the 5% or so who will be denied access! -- CodyBoisclair
And then there are these:
[example CAPTCHA images, no longer shown] - or variations on that theme ...
What exactly do those have to do with it? They're not very different from a text-based captcha, and the way in which they're different was designed to eliminate a particular subset of the human population (myself being in that subset).
See also CaptchaTest, describing one class of "reverse Turing test" which includes images like the one above.