Artificial Intelligence

I study artificial intelligence, 'cause I have none of my own.

ArtificialIntelligence (AI) comes in two varieties: SymbolicProcessing, which is all about playing with set theoretic datatypes, and NeuralNetworks, which is all about acquiring stochastic datatypes. Regrettably, neither proves terribly intelligent except in the most limited domains, in which they're nothing more than fancy searching/sorting devices - really no more intelligent than a sieve. The trouble is that the moment you try to make 'em more than that, you hit a CombinatorialExplosion that kills you. So AI remains the stuff of marketing and pipe dreams.
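To put rough numbers on that CombinatorialExplosion, here is a minimal Python sketch; the branching factor of 35 is the usual textbook estimate for chess, used here purely for illustration.

 # Illustrative only: node counts explode with search depth.
 # A branching factor of ~35 is the usual textbook estimate for chess.
 branching_factor = 35
 for depth in range(1, 9):
     print(depth, branching_factor ** depth)
 # By depth 8 that is already roughly 2.3 trillion positions -- hence the explosion.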

At CMU, the SymbolicProcessing people were called "Neats" and the NeuralNetworks types were the "Scruffies".

For a hopeful attempt at the SymbolicProcessing variety, SemanticNetwork? flavor, try ConceptsWeb.


What happened with FuzzyLogic and GeneticAlgorithms?

 Is FuzzyLogic neat or scruffy? neither
 Is a GeneticAlgorithm neat or scruffy? neither
What happened to them when? They're still in widespread use.

I've never seen a satisfactory explanation of fuzzy logic having descriptive power beyond (a subset of) what you can do with statistics. It may be a different way of looking at things, but doesn't seem to do anything new, really. It also doesn't seem to be a big win as a way of looking at things, either. So I've always been a bit puzzled by its popularity. It certainly isn't any gain in the direction of 'artificial intelligence'.

FuzzyLogic and BayesianLogic? (probability-logic) are orthogonal, and both can be used simultaneously in an inference system. That which can be described with FuzzyLogic cannot be described with statistics. However, FuzzyLogic is of questionable semantic value... saying that something (a bar stool) is '40% chair' is far less meaningful than saying 'it has 3 legs in a tripod formation supporting a single disk-shaped surface horizontally to the ground at a height slightly higher than has been recognized as suitable for a chair'. The latter allows a great deal more -correct- inference, whereas '40% chair' has semantic issues (what does 40% chair mean to you?) and doesn't lead to any sort of meaningful inference. I'll certainly agree with your statement that it is hardly a big win. Its primary examples are prototype-words (tall, short, warm, cool, fat, thin), but even these have semantic issues (60% warm???) and may be better handled by bayesian logics (I'm 90% confident that Bob would say it's 'warm', and I'm his thermostat, so I'll say it's warm.)
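To make the contrast concrete, here is a minimal Python sketch (the membership ramp and the 90% figure are invented for illustration): a fuzzy membership grade measures how well a known fact fits a vague predicate, while a probability measures uncertainty about a crisp true/false claim.

 # Illustrative only: fuzzy membership vs. probability.
 def warm_membership(temp_c):
     # Degree (0..1) to which a temperature counts as 'warm'.
     # The 15..30 degree ramp is an arbitrary choice for illustration.
     if temp_c <= 15:
         return 0.0
     if temp_c >= 30:
         return 1.0
     return (temp_c - 15) / 15.0

 print(warm_membership(22))   # fuzzy: 22 C is 'warm' to degree ~0.47 -- a graded property of a known fact

 p_bob_calls_it_warm = 0.9    # Bayesian: 90% confident Bob would *say* it's warm --
 print(p_bob_calls_it_warm)   # uncertainty about a crisp statement, not a degree of warmth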


This should be called ArtificialHumanIntelligence?. That is what the TuringTest is about as well; testing whether there is a perceptible difference between a human and a computer. This is an anthropocentric view of the world - not that this is necessarily a BadThing, but it is somewhat limiting. Would Bacteria classify the humans they live in as intelligent? Probably not. Would we classify the earth we live on or our SolarSystem? as being intelligent? We do not now, but it may be intelligent in such a fundamentally different way that we will never understand it. -- AalbertTorsius


I've often thought that ArtificialStupidity was much more interesting from a scientific perspective. Systems that are consistently wrong in interesting ways tell us more about the nature of intelligent behaviour than overly engineered ArtificialIntelligence BigScience. In other words you would learn more about natural language by making a machine that could make the same mistakes that a two year old does while learning to speak than making one that perfectly recognizes a total of six words spoken by one speaker in clear well separated tones. -- LarryPrice

When I was studying AI, I always explained to people that the real challenge was trying to achieve ArtificialCommonSense.

While chatting in the office today we came to an interesting conclusion: AI is doomed to failure, because every apparent success is immediately written off as "conventional programming". To play chess as well as a human was once an AI goal; who's impressed now? To prove theorems was once an AI goal; now theorem provers appear as the solutions to problems in programming textbooks. And so on. Poor old AI community, every time they get something right, it gets taken away from them.

The rest of the IT community demand miracles, that is, seemingly impossible things, from the AI folk (who do themselves no favours re managing expectations), and so anything that turns out to be definitely possible after all is reclassified as "not a miracle" and so doesn't count as "real" AI. The AI community is, therefore, never going to be able to produce anything that others will look at and say "Yes! That is artificial intelligence". It'll always be: oh yeah, well that didn't turn out to be so hard after all, then.

Hmm, maybe there was always a bit of an incompatibility between the TuringTest as a goal for software within fifty years (I think Turing actually said less) and DoTheSimplestThingThatCouldPossiblyWork. AI was always going to be hoist with its own petard. [What?] [ from www.m-w.com; hoist with one's own petard: victimized or hurt by one's own scheme ]

Turing said (in ComputingMachineryAndIntelligence, 1950):

 "I believe that in about fifty years' time it will be possible to programme computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning."

I have always been of the opinion that AI systems were not going to be really intelligent until they started reaching conclusions such that even the authors of the system could not understand the "thought process" used to reach them. We could thus be in awe of the unfathomable intelligence of the machine. Frankly, we are there. My word processor is stubborn and pig-headed. It frequently decides to do things that I do not want it to do. :)

In order to understand what is going on one needs a lesson from history. Thirty or forty years ago there was an argument about human intelligence. The chronology is enlightening. [The lies you tell in order to make your point are enlightening. The point can be made without the falsehoods.]

(I think you mean Ape, not monkey.)

My suggestion. Start believing in artificial intelligence before the computers declare war on you just to prove a point! :)

As an implementer of intelligent software, let me say that it's OK - I gave up worrying about where other people placed the goalposts years ago.

This is a RedHerring. The goalposts have never changed, just our perception of them. It's a trivial observation that apes and monkeys aren't like human beings. What exactly that means is ill-defined. TheMapIsNotTheTerritory. Our understanding of the difference between human beings and mere animals can change without that difference ever changing.


Monkeys were found to have language and construct novel sentences according to a grammar.

Really? I'd be interested in a reference. So far as I know, the only apes that "construct novel sentences according to a grammar" are those of the species Homo sapiens. -- GarethMcCaughan

The following seems to contain relevant information: http://www.pigeon.psy.tufts.edu/psych26/language.htm

Interesting stuff, but I wouldn't describe anything it reports as "constructing novel sentences according to a grammar". Would you?

Being able to talk at all is pretty much forming novel sentences according to a grammar, isn't it? In any case, the chimps were capable of forming new compounds - e.g. "cryhurtfood". I can't see what more you might need.

I think it has already been proven that apes/monkeys (Don't know the difference, sorry) can understand and use human language (Koko the gorilla anyone?). So I don't think it is much of a stretch to assume it is possible for apes to develop their own languages. I think it is rather arrogant to imply that Homo sapiens are the only species of sentient animal - the fact is we don't know to what extent most animals are intelligent, and quite a few have been shown to be self-aware. Heck, my dog can understand quite a few words (close to a hundred). Sorry if I sound angry. -- TimothySeguine

Sorry, but it is ludicrous for apes and monkeys (apes are big, monkeys are small) to develop their own language. Don't you think that if it were at all possible for them to do so, they would have done it in the last tens of millions of years?

The fact is that apes and monkeys simply never create words; they don't expand their vocabulary. This is in sharp contrast to human beings, who will create entire new languages when young if the ones they're exposed to don't suit their needs. How did you think sign languages were invented? Well, they were invented multiple times by children who had been born deaf, without any exposure to any pre-existing sign language.

The only thing human beings require to learn a language (creating it if necessary) is exposure to sounds or signs. Not words, just sounds or signs. That's pretty freaking remarkable when you think about it. And it's infinitely more than those stupid apes have ever amounted to.

Homo sapiens are the only species of sentient animal on this planet. That's not arrogance, it's well-investigated, well-documented fact. I think it's pretty arrogant of you to devalue our species just because you don't like it. -- rk

''Seems like I have hit on a sore spot for most people here. Well first of all, I'd like to remark that the comment about apes creating their own languages was mostly a for-instance -- I never meant to imply that was an actual fact. As for sentience in animals, we can look to a standardized definition of what sentience is. I checked my handy-dandy dictionary and the primary definition read something like this: "Having sense perception; conscious." I then looked up the word conscious: "Having an awareness of one's environment and one's own existence, sensations, and thoughts." So sentience breaks down to being self-aware. It has been shown by scientists that a few species of primate (chimpanzees, bonobos, and orang-utans most notably) are in fact self-aware - I have read about it on multiple occasions and have seen it on television more than a few times. The science books you grew up reading in high school and college and such probably would not agree with me, but most of those are outdated. Here are some references from a couple seconds searching (please excuse my formatting errors): http://www.abc.net.au/cgi-bin/common/printfriendly.pl?/science/news/stories/s235077.htm http://books.cambridge.org/0521580277.htm http://homepages.uc.edu/~geiselsc/bigpage.htm I don't want to start a flame-war here, but if one claims something is well-documented fact, one should provide a reference to said documented fact. They do change after all. Your thoughts?'' -- TimothySeguine

You chose to mention consciousness, so sorry, you lose. Because consciousness is very well defined above and beyond what you'll find in a stupid dictionary (see WhatIsConsciousness, under psychological consciousness) and it's very likely that human beings as little as 3000 years ago did not have consciousness (OriginOfConsciousness). So the chance of a lower animal having it? Yeah, right!

Oh, and forget self-awareness because that is not well-defined at all. I don't see why having a model of oneself in an environment matters so long as you're not aware that you are that model. And I just can't imagine that happening short of full-fledged consciousness, which we know that some humans don't have ....

And then there's philosophical consciousness which is undetectable .... -- rk

I can see that this is going to be more difficult than I thought (I respect that). In reference to point 1, spatialization: sorry I don't have a link at the moment but there was a study with orang-utans involving a square room that was fully furnished and an exact model of that room. An object was chosen and hidden somewhere in the room. Then a miniature version of the object was shown to the ape and then they placed it in the same place in the model. The ape was then sent in alone to find it and was able to find it with the same accuracy as a 4-year-old child. You have provoked some great thought in me as to the nature of this issue. I don't have any knowledge regarding any scientific study of any of the other items on the list. But also to point 6, narratization: I am not sure if my memory serves me correctly, but I believe that Koko often told stories about things she did. I will look into this issue more deeply in the near future. As a footnote, I would like to mention that my original post on this thread was about a previous fallacious post stating that apes cannot use language. This conversation has been interesting but I don't believe I have all of the necessary information to proceed. Thanks for making my work-day more interesting. -- TimothySeguine

There's more than just OriginOfConsciousness. JohnMcCrone? has written extensively on consciousness and one of his key points is that it's a learned ability which humans don't possess naturally and can fail to develop if it isn't instilled by the age cutoff. His website (just google) seems to be down at the moment.

I saw a study once where apes demonstrated the ability to recollect. But I won't put much stock in it until it's compared with the corresponding human abilities. I really need to read JM's books. Well, good luck. -- rk

The inner-voice is something I had not thought of as a hallmark of consciousness. It makes a heck of a lot of sense though. In most of my early memories, I distinctly remember the voice in my head. I believe that the inner-voice would be at least partially separate from actual thought though. I have memory (maybe backfilled by my older brain, I dunno) of not being able to express what I was thinking tangibly and being frustrated by it. Not sure how this relates to the common experience or how much of my memory in this matter is purely imagined. I thought it might be of note to mention (Please Delete if too off topic), but perhaps this is the wrong place for this discussion - RefactorMercilessly. This is a topic of major interest to me because evaluating natural and artificial intelligences is one of my favorite pastimes. -- TimothySeguine

See DanielDennett for more about the model of consciousness as internalized speech. It's my favorite model right now. -- EricHodges


I've been doing a lot of thinking recently that perhaps people ought not be trying to invent artificial intelligence so much as reverse engineer natural intelligence. You figure most real systems of behavior have five or six simple rules of operation and all components are made up of collections of those rules. A perfect example is AND, OR, NOT, XOR (and even this set can be reduced to AND and NOT). {Actually just to one: NAND} So if someone can figure out the few basic rules of operation, you could effectively duplicate neural activity in silicon. Add a few million input lines and boom, you've got a human brain. This is my theory, which is mine, and it's my theory. :) -- SteveSparks
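A minimal sketch of that reduction, using NAND as the single primitive:

 # AND, OR, NOT and XOR built from a single NAND primitive.
 def nand(a, b):
     return not (a and b)

 def not_(a):
     return nand(a, a)

 def and_(a, b):
     return not_(nand(a, b))

 def or_(a, b):
     return nand(not_(a), not_(b))

 def xor(a, b):
     return or_(and_(a, not_(b)), and_(not_(a), b))

 for a in (False, True):
     for b in (False, True):
         print(a, b, and_(a, b), or_(a, b), xor(a, b))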

One thing that's always stopped me cold about that level of approach is this: even if you got the underlying uber-kernel exactly right, how likely would one then be to hit on the precise instructional sequence giving rise to human-like intelligence? Seems about on the order of the complexity of coming up with the human genome to be fed into that little chemical assembler called DNA (or the double helix, or whatever the little engine itself is called - my biology is about as good as my Spanish: I had a couple years of it in school. :) -- BillKelly

ArtificialNeuralNetworks are based on the idea of ReverseEngineering natural intelligence. Not at the level of making things exactly the same as human brains, though (I don't think we have the technology to do that, for a start). But the principle is to try building things that work in the same sort of way as real brains. Incidentally, Bill: DNA is the medium in which the genome is recorded. There are actually lots of "little engines"; they're called ribosomes.

Like everything else, it is not the fundamental units that really count, but how you put them all together. The thing about the brain is that we pretty much know the big picture and the small picture, but no way to understand the link between them. The big picture is the various areas of the brain that specialize in different things, and the small picture is the neurons.


I think that there are (at least) three fundamental questions about AI. They are all based on the assumption that we will eventually figure out how to make artifacts with an average human level of intelligence. For example, I think either of RodneyBrooks?'s grand projects of slow progress in AI via ontogeny (i.e., start with a computer with the intelligence of a human fetus, and gradually work through development one day at a time) or via phylogeny (start with a computer with the intelligence of a bacterium and work through our evolutionary branch) would/will work.

The first question is then: what short cuts are possible? Strong symbolic AI, the idea that human intelligence was largely a matter of logic-like conceptual processing that did not fundamentally depend on (for example) interaction with the external world, would be an example of a (potential, possibly already failed) short cut.

The second question is: will we ever build artifacts that greatly exceed human levels of intelligence? Arguably we have already done so, but of special interest is whether we will ever construct a robot that is better than humans at all of the things we think of as typically human: telling stories, raising families, cooking food, etc.

The final question is: to what extent will we understand the artificial intelligences that we create? In my line of work, we train and use statistical models all the time without in any real sense understanding what they do, which is not to say that such understanding is impossible: sometimes we do decide to go and try to figure out how such and such NeuralNetwork is doing its job, and sometimes (depending on the size and complexity of the neural net) we even succeed. We almost certainly won't understand everything about the first AIs that we create, but the scientist in me hopes that asymptotic complete understanding is possible (and if we should be so lucky as to continually create ever more complex and intelligent intelligences, that the lag between our understanding and creation be bounded by a constant or sub-log function of time, amen.) -- ThomasColthurst


AI could also stand for Automated Information Processing, where the inputs are all understood, the process not understood, and the output not too important.


Moved from SearchSpace

A search space is a metaphor for many problems in ArtificialIntelligence. For example, a program for GameOfChess can calculate the combinations of different alternatives after each move. The new positions are nodes in the search space (also called a GameTree). GameTrees can be computationally explored, starting from a root node. A search space can also be explored randomly or using a GeneticAlgorithm, which doesn't involve starting from a root node.
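A minimal sketch of exploring such a GameTree from its root node (a plain negamax search; the state interface is assumed for illustration, not taken from any particular chess library):

 # Sketch of depth-limited game-tree search (negamax form of MiniMax).
 # Assumes a 'state' object offering is_terminal(), score(), moves() and apply(move).
 def negamax(state, depth):
     if depth == 0 or state.is_terminal():
         return state.score()            # evaluation from the side to move
     best = float('-inf')
     for move in state.moves():          # each successor is a node in the search space
         best = max(best, -negamax(state.apply(move), depth - 1))
     return best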


It would be interesting to try to create an ArtificialNeuralNetwork with subsections for major parts of the brain (ArtificialBaby??) to see what would result. Train a subnet to do simple language understanding, a subnet to generate the same (as text, like an ElizaProgram), a subnet to do basic problem solving, a subnet to simulate basic emotions, and a subnet with some basic knowledge about the world (which could be modified by the language subsystem), and interconnect them. The advantage would be that you could incrementally increase the "knowledge" of the system by training from examples. It could even be like a Wiki where the topology, weights, and training would be arbitrarily specified and modified by many users over the internet. Even if each person added only a few neurons/connections and test inputs/outputs once a week, or NnVandals? scrambled it, most would want to see it grow, and it would become more and more intelligent over time. Perl has AI::NeuralNet? modules freely available, so it would be relatively easy to set up a CGI interface to create this (use MySQL to store the topology and weights). Similar to OpenMind? (http://commonsense.media.mit.edu/cgi-bin/search.cgi), but that implementation does not let you query yet and is not based on NeuralNetworks as far as I know. Users can input random data over the internet but the architecture is controlled centrally. Another that is NN-based is "20 Questions" (http://www.20q.net/index.html), but it seems very basic.

I thought of a similar idea recently that I wanted to fling into an open forum. I don't know the exact feasibility of this as I couldn't think of an acceptable algorithm. It involves seeding a relatively small net with values, then using quasi-realtime evaluation of inputs and neuron firing. Training would involve sending strings of characters (or other desired information) and a self-organizing learning mechanism. The learning algorithm would not only alter weights but also connections and simulated cell growth (strategic addition of neurons): decaying unused connections, strengthening commonly used ones, and sprouting new ones when localized activity seems to suggest it should happen. The web interface would be used to provide input from the masses. It seems to me that (only in theory) this is like the biological process in more ways than traditional forward and back propagation nets. I have no idea about various implementation details, even certain parts of the math. There is also an issue with continuous input being required to run the net. I don't know if it would even provide interesting results, because it relies on the ability of the input values to induce patterns in the net (of what variety I couldn't be sure). I haven't gotten very far in trying to implement anything of this nature as I completely break down when it comes to deciding on solid learning rules. -- TimothySeguine
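A minimal sketch of the learning rule described above: strengthen connections between co-active neurons, decay the rest, and prune those that fade away (the constants are arbitrary, and simulated cell growth is left out).

 # Sketch: Hebbian-style strengthening plus decay and pruning of connections.
 # All constants are arbitrary illustration values; growth of new neurons is omitted.
 import random

 weights = {(i, j): random.uniform(-0.1, 0.1)
            for i in range(8) for j in range(8) if i != j}

 def update(activity, rate=0.05, decay=0.01, prune_below=0.001):
     # activity: dict mapping neuron index -> activation level in 0..1
     for (i, j), w in list(weights.items()):
         w += rate * activity.get(i, 0.0) * activity.get(j, 0.0)   # strengthen co-active links
         w *= (1.0 - decay)                                        # decay unused links
         if abs(w) < prune_below:
             del weights[(i, j)]                                   # prune faded connections
         else:
             weights[(i, j)] = w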


Are there systems out there that crawl the web and automatically create a NeuralNetwork with the information that is on the web?

That's identical to asking if there are systems that crawl the web and create data structures based on statistical analysis. Probably, almost certainly, but the question is too vague to have much meaning. Neural nets are building blocks, not solutions. Look at them as bricks. You can make a building out of bricks, but there are an unlimited number of kinds of such buildings. Or non-buildings. You can make a nice pathway in your garden with bricks. You can use bricks as paperweights. There are many kinds of bricks, made out of many materials, but to most people bricks are not interesting, it's what you make out of them. Talking about "bricks" is just too vague to mean much. Same thing with neural nets.

Questions like this presumably spring out of the vast quantities of hype we've heard over the last twenty years. Neural nets are hugely over-hyped. There is nothing magic about them. They aren't even very powerful, in and of themselves, any more than a single transistor is very powerful.


If the new economy will treat BrainsAsaCheapCommodity, why bother with AI? It just takes cheaper bandwidth to connect to billions of cheap brains, and that is a more solvable problem than true AI, it appears right now. The bandwidth needs seem to be about 5 to 10 years away, instead of the 25 or so for AI.


Integration Lacking

I think the biggest problem with AI is lack of integration between different intelligence techniques. Humans generally use multiple skills and combine the results to correct and home in on the right answer. These include visual and spatial reasoning, logical deduction, induction, case history (memory of past examples), simulation and physical modeling, and language.

It takes connectivity and coordination between just about all of these. Lab AI has done pretty well at each of these alone, but has *not* found a way to make them help each other.

There are things like Cyc, but Cyc has no physical modeling capability, for example. It couldn't figure out how to fix a toy beyond the trivial. At best it would be like talking to somebody who has been blind and without a sense of touch all their life. Even Helen Keller had a sense of touch to understand the physical dimensions of something. Cyc was designed to give something basic common sense about life, not about novel thinking or walking around a room. Thus, it lacks the integration of skills listed above. There are AI projects that deal more with physical models, but nobody knows how to link them to things like Cyc so they can reinforce each other. That is the bottleneck. We can build specialists, but just don't know how to connect them to make a generalist who can use a variety of clues (visual, logical, deductive, inductive, case history, simulation, etc.).


Is Hardware or Software the Bottleneck?

Related to the question of whether the bottleneck is hardware or software, the answer perhaps depends on what stage we are looking at. I suspect that the hardware technology currently exists to build an expensive demonstration *if* we knew how. We have the technology to have the same hardware power as a human brain: it just costs many millions. If such did happen in a big lab, it may still require too much hardware to be practical in the field. But for a proof-of-concept, I don't think expensive hardware is actually the bottleneck to building a smart machine.

However, another view is that experimentation on a wide scale is needed to explore ideas. And this will probably only happen if the hardware is relatively cheap, so that small universities and home tinkerers can also play around. In short, we need to distinguish between powerful hardware needed to make it practical, and powerful hardware needed to test the ideas in labs. The answer will probably depend on whether the needed breakthroughs are related to theory or to experimentation. The more they require experimentation, the more hardware will be a factor in at least making a prototype. At this point we don't know what the magic solution will look like.

Once it happens in the lab, companies will find a way to make it cheaper for commercial use in a relatively short time. It is too big an opportunity for investors to ignore. If they dump gazillions of bucks into pets.com, then real AI seems a no-brainer (bad pun). Thus, I don't think hardware is a big bottleneck to wide adoption of real AI.

As a person who studies AI, I can definitely say that if you give me a terabyte of memory, I'll use that entire terabyte and probably want a few more. However, the software needs to advance quite a bit, too. To have AI, we need to design algorithms that steadily reduce the computational cost to solve problems and answer questions (why? how? what?). This is especially the search cost when dealing with many possibilities. It has been shown that true experts do not search many steps forward through a combinatorial explosion... humans are very bad at thinking about that sort of thing! No, instead we just think of the few correct solutions and move forward... spending time thinking about the wrong solutions is entirely wasteful. This sort of reduction of search costs is definitely not a hardware problem... it is ''entirely'' a software problem.
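One way to picture that software-side reduction of search cost: keep only a few promising partial solutions at each step instead of expanding everything. A minimal beam-search sketch, with the expand and score functions assumed to be problem-specific:

 # Sketch: beam search keeps only the k most promising partial solutions,
 # sidestepping the combinatorial explosion of exhaustive expansion.
 # 'expand' and 'score' are assumed, problem-specific functions.
 def beam_search(start, expand, score, beam_width=3, steps=10):
     frontier = [start]
     for _ in range(steps):
         candidates = [child for state in frontier for child in expand(state)]
         if not candidates:
             break
         candidates.sort(key=score, reverse=True)
         frontier = candidates[:beam_width]    # discard the unpromising bulk
     return max(frontier, key=score)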

I, personally, believe that we could build a reasonably intelligent machine on a modern desktop computer if we had a few terabytes of memory, a veritable ton of training material (also measured in terabytes), and software that will automatically develop and recognize useful patterns, learn associations between them, continuously optimize and index, self-audit its learning, and 'forget' those things that are too low-level to bother remembering. The ultimate task for this continuous process would be to go from low-level pixels and audio to high-level patterns (like recognizing apples vs. glasses of water vs. individual people, human vs. chair, etc.) with severe reduction of the wrong search paths. Something similar can be done with gaining higher concepts out of unprocessed text. Now, this process would be slow (possibly taking over a year), and the final intelligence would only be moderately snappy... but the depth of one's intelligence isn't really associated with the speed of one's intelligence; use of a massive HDD will do just fine for memory in the presence of the right sort of indices.

I'd go so far as saying that: if you don't know how to build a -deep- intelligence on a modern desktop machine, you won't be able to build any sort of deep intelligence given a thousand times the processing power and memory capacity. That isn't to say you should stick to the desktop machine if someone offers you the latter... but the more powerful machine will never substitute for building the right software.

The Cyc method of teaching via hand entry of phrases is absolutely the wrong way to go. To learn the concept of 'apple', an AI ought to know it through sensor fusion just as we do: what does it look like, how does one sound when a person bites into it, etc. Teaching that the word 'apple', with its associated sound 'apple', just happens to be associated with this concept of 'apple' and can be used to communicate this concept is teaching vocabulary. Symbolic intelligences cannot be taught concepts that are not, themselves, symbolic in nature... if you want an AI that can draw a 3D apple and put it on a 3D image of a teacher's desk that it synthesized from its own conceptions, it absolutely needs to learn from sensor fusion.

I have to disagree with your apple analogy. As a thought experiment only (hopefully), suppose we took an infant and severed their nerves such that they could not feel nor taste nor smell, and also blinded them. They would basically be an ear + brain + mouth. I believe that such a child could grow up to still answer common-sense questions if given a decent education. In fact, I bet you could hold a phone conversation with them and never know there was any problem as long as certain senses (color, feel) weren't mentioned. Studies and evidence of injuries and birth defects show that the brain makes the best out of the information channels it has available to it. It does not depend on all the senses, or even most of them, working in order to function on an everyday level. In fact, I would bet that they only need just one of the following senses: Hearing, Sight, Touch (perhaps this is stretching it) -- top

That unfortunate child would be unable to draw an apple, remember the taste of an apple, know how an apple feels in one's hand, the crunch, the spray of juice, the scent of citrus. Further, she would be unable to even imagine these things; she'd have no frame of reference to learn things like red, shape, taste, etc. Placed in a situation where she must learn on her own (without educators providing artificial senses), the most she'd ever learn is that apples crunch between the teeth, and do so with far less effort and damage to one's teeth than do rocks. The concept of 'apple' as understood by other children includes a vast array of associated patterns including color, size, taste, stiffness, and more... all concepts that they understand through their senses. That is, they have learned these patterns by experiencing and remembering them, and they can synthesize -- imagine -- these patterns, reproducing a facsimile of them in their own minds: imagined taste, imagined weight, imagined crunch, imagined spray of juice, imagined scent of citrus. Given the proper media, they might be able to draw an image, offer a verbal description, write down a description, shape clay as an apple - all are forms of expression; all are art. But, no matter what top's unfortunate child does on her own, she'll lack any true understanding of the apple. She'll be unable to learn these patterns. She'll be unable to imagine them. On her own, she'll be unable to ever gain that understanding or produce that art...

Thus she had best not operate 'on her own'. Her understanding will depend on the understanding of others, who sense with their own eyes and hands, who can offer some summary of what they see. It may also depend on secondary tools, instruments, to measure signals that don't reach the brain directly and present them by some means through the remaining systems. By speaking into her ears, an educator can tell her of the taste of the apple, of the color, of the weight and shape. She will be unable to intuitively comprehend what weight and shape mean, but the educator could teach her that heavy things cannot be lifted by one man, that light things can be lifted easily, and that objects between one and ten stones can be lifted by men of varying strength. She'll gain a symbolic understanding of 'weight'. The educator can teach that apples grow on trees, then busy themselves for a few weeks teaching the concept of 'tree'. They can make a long list of 'features' for apples... and when later describing some subset of those features, the child may learn to infer (correctly) that they discuss an apple. Ultimately, these educators aren't inherently different from any other sort of sensor. What are eyes but instruments of high-bandwidth visual information that translate to higher-level patterns (people, places, objects) in one's mind? If someone speaks into your ear of the objects that are near, is not one providing a somewhat lower bandwidth, somewhat higher-level eye? Educators can speak even of things that are far beyond your own experience, bringing understanding and patterns and associations back to you that you would never fathom on your own. And about everything they teach, they can go into smaller detail or greater detail... perhaps at request.

On the other hand, perhaps you cannot gain details at request, and lacking anything but a symbolic input sensor, you cannot investigate on your own. Perhaps, as for Cyc, you must wait for some arse to enter the detail -by hand- along with millions of other things that the arses who crippled you forget to tell you about.

Suppose, for a moment, that through some miracle of modern medicine, we were able, at the age of twenty, to restore this child's eyes, sense of smell, sense of taste, and her somatosensory system (collectively 'touch'). Suddenly, she'd have all the senses of another human. However, she'd still be unable to recognize an apple, or draw an apple... she'd know a lot about apples after she is taught to see, of course; she has a lot of high-level knowledge on apples that won't just go away... but nothing in her education has prepared her for handling visual inputs. It will be a long road to learn what people commonly understand about 'apple' and 'tree'. The same goes for Cyc. You cannot just add mics and cameras to Cyc and expect -any- gain. Sight will need to be learned entirely and associated with higher-level concepts through pattern-recognition... invertible to allow for synthesis. (Not that Cyc is capable of automated pattern-recognition in any case. The limits based on dependency for higher-level symbolic information from other sources will hold true for any intelligence.)

It would have been much better to just have the senses to begin with, to have some perceptual understanding of color, weight, size, shape, etc. when first learning about apples. This is not harmful, at the very least... excepting that it adds to the software burden (e.g. translating visual input to meaningful data).

I don't disagree that an AI can learn the concept of 'apple' in the same manner as this crippled child... symbolically, with massive aid from educators. I only believe that it is absolutely the wrong way to go: that the AI ought to know it through sensor-fusion just as we do: what does it look like, how does one sound when a person bites into it, etc. Only in this manner can the AI even begin to really understand the apple as we do, or with sufficiently similar resolution (e.g. as an anosmic does). Anything else will result in a crippled understanding. For the full range of reasoning, one needs both higher-level symbolic information (origin, destination, 'physical object', etc. -- associations of patterns communicated by use of words) and lower-level information (all the way down to recognizing (or analyzing) from raw sensory inputs, or imagining (synthesizing) the appropriate inputs for projections... e.g. imagining the weight in one's hand before predicting how far one can lob an apple.)

One thing at a time. Passing a Turing Test and winning an art award are two very different goals. You didn't seem to refute my original goal as stated. -- top

I have no need to refute your 'original goal'. You disagreed with a statement about what ought to be by declaring what can be. I agree on the can, provided you give her educators that use what they have learned through their senses to teach her. However, it does not change my opinion on the ought. If you are unwilling to discuss the ought, then we're done on that subject because I have no intention to refute the can.

I do disagree with your more recent statement: "passing a Turing Test and winning an art award are two very different goals". The ability to pass a Turing Test requires the ability to synthesize dialogue that is believable as coming from another human. That is art. It might not win you an 'art award' for best script, but it does indicate that the two are definitely not "very different goals".

Couldn't an AI machine describe what an apple feels or tastes like based on a database of descriptions, say Google and/or Cyc? Plus, a person or machine is not necessarily going to fail a common sense test or Turing Test if they say, "Sorry, but I was born without arms, so I cannot really describe that". You are not going to ask Stevie Wonder what a pretty girl looks like. (What one feels like is probably for a mature audience only.) I suppose this debate could not be settled unless such a deprivation experiment is actually performed. But it is not realistic (unless medical history could provide such data).

Does the ability to regurgitate information from a google database constitute real understanding of a perceptual concept? If Stevie Wonder memorized that "She has sharp grey eyes and long auburn tresses", does he have a real understanding of 'auburn' and 'grey'? I think not. However, perhaps I'm the only person here that thinks that understanding requires the ability to synthesize, to produce... to imagine. And proof of understanding requires going first to a high level of abstraction, then back down to a lower one... e.g. to prove you understood a book, you read it then you summarize it; summarizing a book requires building high-level concepts in your mind then synthesizing words to communicate those concepts. To me, that's fundamental to the difference between understanding and recording. However, you and top certainly seem to think otherwise.

If full honesty and understanding is required for intelligence, then most salesmen and most CEO's would flunk. Probably politicians also. Further, why couldn't Mr. S. Wonder answer, "I don't know, I am blind"; or the AI-box answering, "I don't know what an apple feels like; I was born without arms". That is an honest answer. What is an example dialog that you are envisioning?

I guess I am of the belief that "understanding" isn't necessary as long as a satisfactory answer is given. I don't want to get into a LaynesLaw brawl over the meaning of "understanding", which is probably a tricky sticky philosophical problem. Thus, I choose to measure AI by the output rather than how processing is actually done. Cowardly, perhaps, but practical.

:Measuring an AI by the output rather than how the processing is performed is okay. All one needs to do is define the correct sort of tests. The test I defined for 'proof of understanding' is the ability to take a lot of input data, (black-box process it), then produce and communicate a summary. A summary must have the following characteristics: (a) it is considerably smaller than the original data set, (b) it is at a higher abstraction than the original data-set, (c) it hits the most important features necessary to reproduce a facsimile of the data-set with a certain degree of fidelity.

:These are part and parcel of the common definition of 'summary'. One should note that 'features' are utilized opposite of abstractions... e.g. 'red' applied to 'apple' is a feature, whereas 'red' applied to a 'clump of pixels' is an abstraction. The production of a summary proves understanding to any intelligence that has both the original data-set and agrees that the reproduced facsimile is within the specified degree of fidelity (the summary may be translated, if necessary). It often is useful to prove understanding to oneself as a form of auditing one's knowledge. 'Higher abstraction' is defined in such a way that prevents many forms of lossless and lossy data-compression (including simple transformations of representation as per JPEG or LZW. Transformations and translations, in general, cannot constitute proof of understanding). Essentially, production of a 'summary' requires semantic compression.

:I've found nobody so far who disagrees that 'understanding' (in its common, English sense) is sufficiently proven by production of this summary. I've seen initial disagreements as to whether each component is necessary, or as to whether this is the only possible proof of understanding... but, in general, thought-experiments have revealed that alternative proofs are largely equivalent to this one, and removing requirements results in something less than understanding (... as it is commonly understood...). Anyhow, this provides you the opportunity to measure AI by the output. You can now measure understanding. And by that, I mean 'understanding' sufficient to qualify for the English use of the term.
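Two of those three summary properties can at least be checked mechanically; the 'higher abstraction' requirement is the part that resists a mechanical test. A rough sketch, with the reconstruction and fidelity functions assumed to be domain-specific:

 # Rough sketch of checking properties (a) and (c) of the summary test above.
 # Property (b), higher abstraction, is deliberately omitted: nothing here can
 # distinguish genuine semantic compression from mere data compression.
 def passes_summary_test(original, summary, reconstruct, fidelity,
                         max_ratio=0.2, threshold=0.8):
     # reconstruct: summary -> facsimile of the original (assumed, domain-specific)
     # fidelity: (original, facsimile) -> similarity score in 0..1 (assumed, domain-specific)
     is_smaller = len(summary) <= max_ratio * len(original)               # (a)
     is_faithful = fidelity(original, reconstruct(summary)) >= threshold  # (c)
     return is_smaller and is_faithful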

:Supposing you are not writing a machine that plays the fool ("I don't summarize for no one"), understanding is a necessary component to pass any Turing Test. The proof for this is simple. One may ask at any time for a summary of the dialog thus far. However, that sort of understanding does not mean there is any underlying understanding... i.e. if one talks about apples, the system need not understand apples to pass the test. To understand apples would require that it know what you -mean- by 'apple' as more than just the word 'apple'.

:How could an AI that never sees, touches, tastes, smells, etc. an apple understand an apple? Well, through secondary sensors -- educators or instruments can tell it of the underlying characteristics common to apples ('some apples are red', 'some apples are green', 'some apples are sweet', 'some apples are tart'). So, yes, such an AI can come to understand apples. Stevie Wonder can come to understand 'blue' as 'the same color as the sky on a sunny day', even if he never sees the thing. Or he can understand it as 'the range of wavelengths 440-490 nm'. After all, we don't need to see radio-waves to understand them. I don't disagree that this can be done.

:As far as sticky, philosophical issues go... understanding itself can be practically defined as those characteristics that are necessary to pass your proof of understanding. That's practical, not sticky, not philosophical. One could argue that you possess understanding so long as you have these characteristics... even if you never bother proving you have it, or lack the means to communicate a proof. The essence of understanding under the definition I provided is the ability to analyze data to a higher abstraction paired with the ability to synthesize an abstraction to facsimile data. When you can do both, you understand the abstraction. If you can look at the light reflecting off an apple and categorize (correctly) that the predominant color is 'red', and if you can later remember or predict about an apple that it is 'red' and you can synthesize a facsimile of low-level 'red' and attach it to the apple, then you understand 'red' as applied to objects (or at least apples). Understanding requires both synthesis and analysis; without both, semantic compression (low-level-data to 'red', 'red' to low-level-data) becomes impossible... you'd either be unable to produce or utilize a summary.

:I do care how one might go about implementing a system capable of understanding since I plan to implement systems capable of it, but I'm not concerned about which precise implementation is used insofar as it produces the measurable results. In that, I feel the same as you. On the other hand, if you can prove that the necessary characteristics of understanding do not exist, then you can prove that understanding does not exist. If you are forever failing to investigate what 'understanding' actually takes at the level of processing, you'll never make a step of progress towards any implementation.


I suppose this debate could not be settled unless such a deprivation experiment is actually performed.

Such a deprivation, at least in two sense vectors, has been "performed" and her name was HelenKeller?. Obviously, the degree and nature of understanding is going to depend on the degree of sensory capability. An AI based purely on Cyc will exhibit (at least theoretically) only a symbolic and abstract "understanding". An AI with audition, vision, gustation, tactition, olfaction, thermoception, nociception, proprioception, and equilibrioception will be theoretically capable of human levels of understanding. An AI with all of these, plus electroception, broadband EM reception, magnetoception, and echolocation will theoretically be capable of "understanding" beyond human capability. That does not, however, diminish the value of AI systems built only upon Cyc; but they shouldn't be expected to be human in nature. -- DaveVoorhis

My main issue with Cyc isn't the lack of ultimate capability... it's the AI's utter dependence on human input, which is both erroneous and incomplete. A system like Cyc will be largely unable to audit or extend its own information. It also has little or no use for the knowledge it does possess, and lacks much ability to make conclusions based on patterns of data. Now, perhaps that is my personal bias for learning AIs speaking. I won't hide it.

As you say, an AI with more sensory capability than we possess would potentially be capable of obtaining a much deeper understanding of the world than we currently possess... and we'll feel at a loss when it tells us of how (for example) certain patterns of echolocation and broadband EM reception are meaningful to (some predictions here) just as HelenKeller? felt at a loss dealing with colors and sounds. Again, it isn't to say we couldn't learn about these patterns (should the AI deign to explain it to us, or should we study it), but we'd have far less use for that knowledge and understanding than would the AI, and we'd never gain the knowledge intuitively.

Of course, a broader range of sensors goes beyond just accepting a broader range of physical signals. Merely having sensors placed differently, while still providing sensor fusion, can create a world of difference (or at least a paradigm of difference). Having three auditory channels allows you to triangulate sound. Having audio and/or cameras spread over a massive area (e.g. a city) can provide access to broad-area patterns that humans have troubles seeing due to scale (e.g. traffic patterns, crime patterns).

I have to still disagree that dealing with a variety of sensory input is the key. It may be helpful, but not a requirement. Helen Keller could read braille books about all kinds of subjects and "understand" once she had a teacher to get her started. Nor did I claim that Cyc's rules alone could produce sufficient AI. I agree it probably requires a multi-pronged approach that "triangulates" the responses from different tools/skills, but not necessarily involving lots of sensory info. For example, it may require temporal and/or physical modeling in addition to Cyc's rules. Or at least proximity models, such as graphs with distance links (stick-figure if you will). A lot of human activity deals with temporal and physical proximity. --top

I'm left thinking you haven't read what I have written. Feel free to continue ignoring those locations where I more or less say, point blank, that a variety of sensory input is not a necessary requirement... that understanding can be gained without it, provided you have sufficient support from secondary sensors (including teachers and instruments).

There is no inherent difference between understanding temporal vs. spatial patterns. There is a great deal of -intuitive- difference, mind you, and some hard-wired differences in humans... but data is data, and such things as space and time are just dimensions that exert themselves in reference to the ordering of sensory input. Useful patterns in 'space' are usually viewed as 'objects' (or 'nouns') whereas useful patterns in 'time' are usually viewed as actions (or 'verbs'), but that's just based on what happens to be useful. Anyhow, your new tactic has some problems (let's add a bunch of models atop Cyc! Yeah!). A model incorporates an understanding, a paradigm; it's quite possible for models to be wrong or inadequate just like anything else you teach Cyc. More importantly, though, a model should be something you develop -within- the AI if you wish to claim any understanding is present... otherwise the AI is doing the equivalent of pulling out a simulator to answer the questions, and it will never learn where it can abstract, cut corners, speed up the process with low loss of fidelity... or even improve on the process, gaining more fidelity. If you're going to add existing models to an AI, it'd be better to teach them the models so they are truly part of the AI's knowledge and understanding. And, yes, this is another ought... I won't argue that you cannot hack together a functioning AI by plugging in third-party models. I'll only argue that it is an ultimately redundant and harmful approach to the progress of AI technologies, and that you ought to know better.

We're not looking for infinite flexibility. I never ruled out the models from being expanded to accept new knowledge. But making entirely new learning frameworks or techniques on its own is building an unnecessarily high goalpost. We are going for average joe at this stage, not Dr. Codd. How about a sample dialog to illustrate your point. -- top

That can't possibly be the royal we. When I started searching for an internally consistent and correct set of formal abstractions and definitions for data, pattern, information, knowledge, intelligence, wisdom, and behavior, it was definitely for flexibility purposes. My goal is to build an intelligence that can learn anything I can learn, and systematically grow its own understanding over time (which is why I needed a measurable definition of understanding in the first place). Interestingly, though, my studies have shown that it's actually more difficult to make model 'modules' integrate with a good knowledge system than it is to make the good knowledge system integrate the models directly.

I'm not sure which point you wish me to illustrate with a dialog, or whether a dialog is an appropriate mechanism to illustrate the point you desire.

You seem to be expecting the Wright brothers to build a jet the first time through. Walk before you run.

The Wright brothers didn't cobble something together and hope it worked. They sat down with the pencil, the pad, and their brain... and worked on designs. They also knew exactly what they wanted: a machine that could fly and carry men while doing so. They also had access to plenty of literature revealing plans that failed in their past. They had a very decent chance at getting it... and, no, they didn't build machines that could 'walk' along the way. Their failures floundered about in the mud, just as yours will if you expect that cobbling together failing plans will get you Joe Average intelligence.

You should examine, for a moment, that earlier statement of yours: "We are going for average joe at this stage, not Dr. Codd." I'm quite dubious that Joe Average is somehow an 'easier target' than Dr. Codd. Joe Average can drive cars and taxis, can do dishes and laundry, can learn carpentry and architecture, can learn math and medicine. Dr. Codd doesn't have anything inherently different than Joe Average... just more patterns, more education, more knowledge... and, with time and effort, Joe Average could know everything Dr. Codd knows.

You need to do some serious and pervasive design work. YAGNI is a principle that keeps you on a plateau of solutions that can be reached with least expenditure of energy... and modern computer science has not yet lifted it to a stage where YAGNI will ever get you to human intelligence. YouAreGonnaNeedIt. If you think that you can even get to Joe Average without significant design work, you'll fail. Or, worse, you'll succeed and haven't a darn clue why... which combination of cobbled-together pieces and third-party model-modules got you intelligence?

I think you focus too much on the Turing Test. If you wish to walk before you can run, go for the intelligence of a Mutt... which can get around in the world, survive, cannot learn Riemann Sums for its life, and absolutely cannot hold a conversation with a human. That will be the path I take.


Interesting that animals can do physical navigation pretty well, but do not communicate beyond the basics very effectively. A chimp may be able to make a nice banana split with training, but would fail a Turing Test.

Indeed, that is true. And pigeons can play piano. Do not underestimate the mutt.


A hypothetical AI Turing Test chat:

(Per = person, Bot = AI-box)

Per: Describe an apple

Bot searches dictionary

Bot: A round-shaped fruit

Per: What color is it?

Bot googles "apple color". "Red" appears most often, but green and yellow also appear. Using this rank, it uses its standard frequency-ranking response.

Bot: Apples are usually red, but can be green and yellow.
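(Aside: a minimal Python sketch of the frequency-ranking step just described; search_snippets is a hypothetical stand-in for a web search, not a real API.)

 # Hypothetical sketch of the frequency-ranking response described above.
 # 'search_snippets' is a stand-in for a web search, not a real API.
 from collections import Counter

 def rank_answers(query, candidates, search_snippets):
     # Count how often each candidate word appears in snippets for the query.
     text = " ".join(search_snippets(query)).lower()
     counts = Counter({c: text.count(c) for c in candidates})
     return [c for c, n in counts.most_common() if n > 0]

 # e.g. rank_answers("apple color", ["red", "green", "yellow", "blue"], search_snippets)
 # might return ["red", "green", "yellow"], driving the ranked reply above.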

Per: Do apples bounce?

Bot googles for "apple bounce" (perhaps with variations such as "bounce apples"). Finds infrequent references. Uses standard low-frequency response.

Bot: I do not remember ever seeing apples bounce.

Per: Is that a yes or a no?

Bot has a canned response for forced choices.

Bot: I am only stating my experience. I shall not speak outside of it. Have you seen apples bounce?

Per: No, I guess not. Do apples fall?

Bot googles for falling apples. Finds many references to Isaac Newton's apple experience, but not much else. Bot bets on the Newton hits due to their high occurrence rate relative to other responses. Also checks a standard biography database to make sure that Newton was a real person. Since the current context is assumed not to be fiction, this check is standard. A Cyc rule says that people don't perform much activity when they are dead, so the event probably happened during Newton's lifetime, whose range can be found in the standard biography database. Thus, the event is guessed to have happened in the past.

Bot: An apple fell on Isaac Newton.

Per: Why?

Bot searches dictionary for "fell", which takes it to "fall". Finds definition "to descend due to force of gravity".

Bot: Probably due to gravity.

Person decides to try to knock bot off guard.

Per: But that was like 300 years ago. Do apples still fall?

The question "still?" causes bot to bring up time-related change research algorithm. Using pre-programmed synonyms for "change", it googles for items such as "gravity change", "gravity increase", "gravity decrease", "more gravity", "less gravity", etc. It finds a few physics-related articles that talk about speculation regarding changing of cosmological constants. They are too complicated for Bot to interpret. There is also a grade-school-targeted article on how earth can increase its gravity if it's mass changes. Bot can glean some basic facts about such, such as a change of mass can cause a change in gravity, but none that directly answer the question. Bot concludes that it is confused, and thus uses a standard "confusion" response for time-related change questions.

Bot: I don't know if gravity has changed.

Per: Have you seen it change?

Since bot is programmed to use google as its "experience record", it answers the same question, but from the perspective of personal observation since the question was phrased in terms of an observation request.

Bot: I have not seen it change.

Per: So why do you think it might have changed?

Bot inspects its past responses and concludes that Per's restatement does not appear to be a good match for anything Bot has said. Thus, it produces a canned response for probable misquotes.

Bot: I don't think I said that.

Per: If you say you don't know it changed, then you are entertaining the idea that gravity may have changed.

Based on Bot's English idiom dictionary, it knows that "entertaining the idea" is the same as "considering". Bot reviews its logic and Cyc heuristics and finds a rule about considering possibilities of the form: "If there is no evidence for lack of X, then one should consider X a possibility".

Bot: If I have no evidence of lack of change, I probably should consider change a possibility.

Per: That's dumb. I don't know of anybody who thought gravity might change in 300 years. At least not enough to notice.

Bot knows its view is being criticized. Bot's logic and evidence-weighing heuristics realize that it does not know why Per thinks that gravity has not changed; it is a statement made without any detectable evidence. It thus asks a canned question to try to get Per's reasoning.

Bot: Why are these people so certain of their viewpoint?

Per: I guess if nobody ever talks about it happening, then it didn't happen.

Bot's debate logic tells it to try to form a general rule based on Per's suggested if-then rule and ask for clarification.

Bot: So if nobody ever talks about something happening, then it probably didn't happen?

Per: There are probably exceptions, but I guess you can say that.

Bot's debate heuristics recognize that Per has admitted there may be exceptions.

Bot: Is it not possible that this case is an exception?

Per: Maybe. I am not a scientist. I just feel that somebody would have mentioned it.

Bot can conclude, based on the usage of "feel", that intuitive or emotion-based evidence is being used and admitted to. It thus uses a canned debate-related response:

Bot: Well, emotions and gut feelings are sometimes wrong.

Per: Perhaps. I don't claim to always be right. If gravity changed, then something has to cause it.

Based on the article found earlier, Bot knows that a change in mass may affect gravity, but it has no information about whether any mass has changed. Bot decides to float the idea.

Bot: I hear gravity can change if mass changes.

Per: Okay, but what masses changed?

Bot googles for articles related to mass changing using the technique already described. Finds lots of mention of stars losing mass from astronomy articles.

Bot: A star's mass can change.

Per: Do you mean the sun?

Bot searches its basic classification and categorization database and finds that the sun is a member of "stars".

Bot: The sun is a star.

Per: Yip. So did it change mass within the last 300 years?

Bot performs the same kind of change study on the mass of the sun. Finds an article from which it can glean some basic information.

Bot: I've read that coronal mass ejections can cause a star to lose mass.

Per: Well, that is outside of my league. But I am not sure what happens on the Sun would change Earth's gravity much. We have tides and all, but I don't know much about the details. And, if the sun did barf stuff out, then wouldn't it have shrunk a lot over billions of years? If it can shrink noticeably in 300 years, then in a billion it might be gone.

Bot can find nothing wrong with the stated logic. It decides to change the subject a little by trying to clarify the apparent but uncertain association between "barf" and coronal ejections, because its database associates barfing with living things.

Bot: You call coronal mass ejections "barfing stuff out"? I thought only humans and animals barf.

Per: I am just joking around with words. You are kind of a literal guy, aren't you?

Bot's designers encountered this criticism before in tests, so put in some canned responses.

Bot: I was born in another country, so I sometimes don't get some figures of speech.

Per: Understood. Not a problem. Your country does seem to have some odd notions of gravity, though. Good day, Sir.

Bot passes the TuringTest, but is judged to be of merely "average intelligence" from a human perspective.
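
For concreteness, here is a minimal sketch of the frequency-ranking step the dialogue leans on, written in Python. The search_snippets wrapper, the color list, and the response templates are illustrative assumptions, not part of any existing system:

 from collections import Counter

 COLORS = {"red", "green", "yellow", "brown", "black", "white", "pink"}

 def search_snippets(query):
     """Hypothetical wrapper around a web search API; returns text snippets."""
     raise NotImplementedError("plug in a real search client here")

 def rank_colors(subject):
     """Count color words in snippets for '<subject> color', most frequent first."""
     counts = Counter()
     for snippet in search_snippets(subject + " color"):
         for word in snippet.lower().split():
             word = word.strip(".,;:!?")
             if word in COLORS:
                 counts[word] += 1
     return counts.most_common()

 def answer_color_question(subject):
     """Turn the ranking into the 'standard frequency-ranking response'."""
     ranked = rank_colors(subject)
     if not ranked:
         return f"I do not remember ever seeing the color of {subject}s mentioned."
     top = ranked[0][0]
     others = [color for color, _ in ranked[1:3]]
     if others:
         return f"{subject.capitalize()}s are usually {top}, but can be {' and '.join(others)}."
     return f"{subject.capitalize()}s are usually {top}."

Whether the counted hits actually refer to the fruit at all is exactly the integration problem raised below.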


... And now the person proposing this just needs to make this hacked-together approach work. As a note, none of the 'google experience records' will work as described. Here's what will really happen:

Per: Describe an apple.

Bot searches dictionary.

Bot: A usually round, edible fruit of a small tree, Malus sylvestris, of the rose family.

Per: What color is it?

Bot googles "apple color". Its primary references are apple.com pages about color utilities and toner cartridges. It filters these results for words that identify colors and sorts them by frequency.

Bot: Apples are usually black, but may be yellow, cyan, or magenta.

Per snickers, then asks another question.

Per: Do apples bounce?

Bot googles "apple bounce" and finds common responses involving Apple shares and commemoration key-chains. It makes what sense of this it can.

Bot: In 2000 there was a great deal of controversy as to whether apples could bounce back. In 2006, Wall Street Journal reported that apples do bounce. I am led to believe that apples bounce.

Per snickers again, then goes on to have fun stumping the Bot some more.

... Ambiguity and communications context must be handled explicitly, and must seamlessly integrate with any searches to independent sources of information like Google. That seamless integration costs a great deal for a hacked-together system made of many component parts - a cost that grows at least linearly, and more likely quadratically, with the number of parts. It ends up being the humans who pay this cost: they program in, explicitly, which subsystem to use, when, and how... and it can be made to work for specific applications. However, the code will be rigid. It isn't nearly as flexible or long-term efficient as creating a system that learns when and which subsystems to use and how best to utilize them (i.e. learns which queries are sensible); if all you do is help out on the 'how' (via provision of a component communications service), the human cost is guaranteed to be linear in the number of different component interfaces, and significantly smaller.

If it has Cyc-like rules, which was stated as an assumption, then it could detect that "apple" may mean different things (distant nodes in multiple taxonomies sharing the same handle) and use the most appropriate context, or at least ask to clarify. As for making the machine self-learning versus human-fed, that was not really a debate the example was meant to address.
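
A rough sketch of that detection step follows; the toy taxonomy and the category names are illustrative assumptions, nowhere near the scale or richness of the real Cyc knowledge base:

 TAXONOMY = {
     "apple": [("Apple (fruit)", "PlantPart"), ("Apple Inc.", "Organization")],
     "newton": [("Isaac Newton", "Person"), ("Newton (unit)", "UnitOfMeasure")],
 }

 def senses(handle):
     """All candidate senses (node, top-level category) sharing this handle."""
     return TAXONOMY.get(handle.lower(), [])

 def resolve(handle, context_categories):
     """Pick the sense matching the current context, or return None to signal
     that the bot should ask the user to clarify."""
     candidates = senses(handle)
     matches = [s for s in candidates if s[1] in context_categories]
     if len(matches) == 1:
         return matches[0]
     if len(candidates) > 1:
         return None          # ambiguous: ask "What kind of apple?"
     return candidates[0] if candidates else None

With these toy data, resolve("apple", {"PlantPart"}) picks the fruit, while resolve("apple", set()) returns None and prompts the clarifying question.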

Suppose, for the moment, that the Bot was tuned a little and now utilized its initial definition ('a round-shaped fruit' or 'A usually round, edible fruit of a small tree, Malus sylvestris, of the rose family.') and kept this in some sort of communications context for future conversation. Further, suppose that it was able to identify, in its Cyc-like ruleset, this particular form of the apple (e.g. by its attachments to 'fruit', 'tree', and 'Malus sylvestris'). That is, you've already integrated three components of the system: the dictionary, the Cyc-like ruleset, and a communications context management service. So far so good, right? Things are nicely integrated and working 'seamlessly'.

Then you decide to mess with this fragile, rigid balance by utilizing Google as the experience record. If you use statistics and hits as you originally suggested, you'll STILL get the sorts of responses I describe above, because you would not have integrated the Google service with the human-fed Cyc ruleset, the dictionary, and the communications context. What you must do, at minimum, is filter web page results by subject prior to gathering statistical hits. (Better would be to make the Bot read the pages and 'understand' them too, but this will lead you directly into that rabbit hole you were attempting to avoid by using statistical hits.) In any case, to distinguish the subject you must 'read' some other parts of the page (even if it's just to filter out all pages containing the sequence 'apple.com'). Now, as the programmer, you have the opportunity to decide how you are going to make the Bot 'read' the page and perform the filtering, and how this will integrate (if at all) with the communications context, dictionary, and Cyc-like rule set. Tell me, how will you distinguish 'apple' (the fruit) from 'apple' (the company) in a systematic manner that extends to a great many subjects other than the apple?
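
To picture that minimal filtering step, here is a sketch only; the exclude/require word lists and the search_pages wrapper are assumptions, and choosing those lists by hand for every subject is precisely the hand-coding cost being criticized:

 SENSE_FILTERS = {
     "apple (fruit)": {
         "exclude": ("apple.com", "iphone", "macbook", "nasdaq"),
         "require_any": ("fruit", "orchard", "tree", "eat"),
     },
 }

 def search_pages(query):
     """Hypothetical wrapper returning (url, page_text) pairs from a search API."""
     raise NotImplementedError

 def filtered_pages(query, sense):
     """Yield only pages that look like they are about the intended sense."""
     rules = SENSE_FILTERS[sense]
     for url, text in search_pages(query):
         lowered = (url + " " + text).lower()
         if any(bad in lowered for bad in rules["exclude"]):
             continue
         if not any(good in lowered for good in rules["require_any"]):
             continue
         yield url, text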

Consider, also, the case where the rule set isn't initially aware of 'apple' (the company) and must, therefore, learn to distinguish between them on its own. Even if you take care of 'apple', this situation will show up again and again and again. Your apparent assumption is that humans have already hand-fed the machine all the answers it will need. I am extremely skeptical that such a thing would ever happen... that humans can hand-code anything like 'intelligence'. Even such 'simple' things as reliably recognizing and distinguishing faces are difficult, much less distinguishing behaviors (e.g. recognizing criminal activity, identifying errors so one can suggest corrections), and pattern perception is only a quarter of intelligence: consequence prediction, behavior production, and planning are part of intelligence, too.

Wikipedia, which the bot can use, has "disambiguation pages". The bot can check those to see which context is the closest fit, and if there is not a good fit, the bot can simply ask. Even where there isn't a disambiguation page, the bottom of many Wikipedia pages carries a classification or grouping. Millions of human fingers have already "educated the web" to some extent.

http://en.wikipedia.org/wiki/Apple_%28disambiguation%29
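
A rough sketch of using that disambiguation page programmatically, via the public MediaWiki API (the exact parameters should be checked against the API documentation; treat this as an assumption-laden illustration, and it requires the requests package):

 import requests

 API = "https://en.wikipedia.org/w/api.php"

 def disambiguation_candidates(term):
     """Return the article titles linked from '<term> (disambiguation)'."""
     params = {
         "action": "query",
         "format": "json",
         "prop": "links",
         "pllimit": "max",
         "titles": term + " (disambiguation)",
     }
     data = requests.get(API, params=params, timeout=10).json()
     titles = []
     for page in data.get("query", {}).get("pages", {}).values():
         for link in page.get("links", []):
             titles.append(link["title"])
     return titles

The bot could then present those candidates to the user or match them against the current conversation context, as in the example below.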

Example:

Per: Describe an Apple.

Bot: What kind of Apple? There is the fruit, Fiona Apple the singer-songwriter, the Apple Corporation, etc.

Per: I mean the fruit.

It may be annoying, but we are not measuring annoyance. (Hell, I'd probably fail an annoyance test too :-) As far as whether such is "intelligence", again that is a definition battle I don't want to get into. I view it from a standpoint of utility, not a property that by itself we are trying to achieve (if it is even universally measurable). If something can "fake it" using combinations of Google, Wikipedia, and Cyc to achieve its goal, so be it. On reviewing such a dialog, an AI engineer may add a new rule that says something like, "if the article 'an' is put before a word, it implies a physical object as opposed to people or organizations". (Cyc doesn't seem to do fuzzy associations very well, which is something I would probably fix or supplement if I were building a knowledge base.)

This isn't relevant. Even if you can distinguish the two at the point of 'Describe an Apple', it doesn't affect the later questions: do apples fall, what color is it, etc. I.e. even if the Bot simply ASSUMES you mean 'fruit', you haven't sidestepped the problem. Did you misunderstand me, above?

Perhaps not; that is why I created a sample bot discussion, so that problems can be shown instead of verbalized. Once the context of "apple" is established, that context stays locked until otherwise changed. You seem to be assuming that the bot forgets prior questions.
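
A minimal sketch of that context-locking behavior; the class and its policy are illustrative assumptions, not a description of any existing system:

 class ConversationContext:
     """Remembers which sense each term was resolved to earlier in the chat."""

     def __init__(self):
         self._senses = {}

     def resolve(self, term, ask_user):
         """Reuse the locked sense if one exists; otherwise ask once and lock it."""
         if term not in self._senses:
             self._senses[term] = ask_user(term)  # e.g. "apple" -> "apple (fruit)"
         return self._senses[term]

     def override(self, term, sense):
         """Explicitly change the locked sense when the user shifts topic."""
         self._senses[term] = sense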

Re: "consequence prediction, behavior production, and planning are part of intelligence, too."

Many humans are not so good at those either. It may also be possible to grab a plan from the web or a knowledge base. We need to strive for the average Joe, not Einstein, at this point. We don't have to reinvent ideas that other people have already documented. OnceAndOnlyOnce. It seems you are looking for a "learning machine" more than a task doer. And what is "behavior production"?

Humans are excellent at planning. We're so good that we reach UnconsciousCompetence and don't even think about how much planning we actually do. How do you plan to obtain dinner? to make yourself presentable for work? to drive home? to obtain your e-mail? to learn about the colors of apples? It is only when making plans that are well beyond our experience, beyond our ability to predict, into the realm of ConsciousIncompetence? that we experience frustration and weakness in our planning. Humans are also excellent at consequence prediction. What do you predict will happen if you throw a baseball at a window? or at your boss? what are the likely consequences of a car accident? Finally, humans are quite capable at behavior production on average, and outstanding for certain individuals. I.e. after planning to throw the ring onto the bottle, many of us will at least get close when we make the attempt... but some people will get it every time. We do lack the precision behavior available to machines, though.

Behavior production (aka behavior synthesis) is the production of a sequence of signals to an agent's actuators... and while it includes such things as entirely random behavior, the 'quality' of behavior production would usually be measured by the disparity between intent and effect (including 'side'-effects, such as expenditure of energy).

You keep saying that you're going for JoeAverage?. I'll continue to insist that the difference between JoeAverage? and Einstein is negligible on the intellectual scale... like insisting you're aiming for $10 billion instead of $30 billion in funding when you'll be working hard just to reach $100 thousand.

Different humans are better at different things. So why should a bot be good at everything?

The bot doesn't need to be good at everything. But if you're going for JoeAverage?, the bot needs to be mediocre at an enormously vast array of things, and 'good' at a merely 'vast' array of things. The breadth of domains that a typical human's understanding, predictive ability, reasoning, planning, and behavior-set touch is, quite apparently, beyond your comprehension... beyond your sense of scale. To truly understand it, you need to think about how you'd program a Bot to understand or perform things you consider 'trivial', like folding laundry, summarizing a letter, making lunch, and predicting the likely immediate consequences of throwing a baseball at the president's dog. People like Einstein and Feynman don't really have anything special except lifetime curiosity, a habit of thinking deeply, and a mind at least briefly open to alternative paradigms; they might be more creative, have more talent for recognizing patterns and making connections... but those are just power-ups on components everyone has. Even JoeAverage? can understand Bose-Einstein condensates and Relativity if JoeAverage? can be motivated to learn about them; the potential of even 'average' humans is vast. JoeAverage? is not a significantly easier target than Einstein and Feynman.

That, we do not know, since it has never been done. I don't think anyone in the 1950s would have guessed that a chess bot would beat the master before a bot could do laundry. And it could have been the other way around.

I wouldn't be too surprised if we have 'Einstein' and 'Feynman' level bots in specific fields (capable of 'imagining' physics instead of using a rigid physics engine) before we have the JoeAverage? robot capable of handling the thousands of fields.

{Are you talking about a Turing Test or doing the laundry?}


See also: ArtificialIntelligenceAndLinguistics? HalsLegacy NaturalThinkingMachine AiWiki AiWinter ArtificialStupidity JavaAbleFramework


MarchZeroSeven

CategoryInformation CategoryArtificialIntelligence

