Frame Problem

A problem of determining which elements of a description are consequentially altered after an event occurs. Named after cartoon animation, in which a frame of background elements - chairs, walls, etc. - is kept static while the subjects of attention move in front of it. Which graphic elements can remain in the fixed frame, and which must be redrawn on each cel?

The FrameProblem began in classical AI as an annoyance in inference systems: if an event occurs, which attributes of the model need not be considered when inferring its consequences? The lack of a satisfactory answer routinely leads such systems into CombinatorialExplosion.
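A minimal sketch of where the explosion comes from (my own illustration in Python; the action and fluent names are invented for the example, not drawn from any particular system). If every "this action leaves that fact unchanged" statement has to be written out explicitly, the number of frame axioms grows with the product of the number of actions and the number of facts:

 # A toy world: each action lists the fluents (facts) it can affect.
 actions = {
     "pull_wagon": {"robot_location", "wagon_location",
                    "battery_location", "bomb_location"},
     "paint_wall": {"wall_colour"},
 }
 fluents = {"robot_location", "wagon_location", "battery_location",
            "bomb_location", "wall_colour", "door_open", "lights_on"}

 # Naive logical encoding: one explicit frame axiom "action a leaves
 # fluent f unchanged" for every action/fluent pair the action does
 # not affect.
 frame_axioms = [(a, f) for a, affected in actions.items()
                 for f in fluents if f not in affected]
 print(len(frame_axioms))  # grows roughly as |actions| * |fluents|

With a realistic number of actions and facts, enumerating (and reasoning over) all of these axioms is what drags the inference system into CombinatorialExplosion.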


Note that the AI FrameProblem is not a synonym for human common sense, despite comparisons between the two. Please put non-AI-related comments about common sense on e.g. CommonSenseIsAnIllusion or RareSense, not here; this page tends to attract non-technical comments about the lack of human common sense, which are way off topic here.


I've always been amused by the Pinker version of Dennett's take on what the Frame Problem is (from HowTheMindWorks):

"The philosopher Daniel Dennett asks us to imagine a robot designed to fetch a spare battery from a room that also contained a time bomb. Version 1 saw that the battery was on a wagon and that if it pulled the wagon out of the room, the battery would come with it. Unfortunately, the bomb was also on the wagon, and the robot failed to deduce that pulling the wagon out brought the bomb out, too. Version 2 was programmed to consider all the side effects of its actions. It had just finished computing that pulling the wagon would not change the color of the room's walls and was proving that the wheels would turn more revolutions than there are wheels on the wagon, when the bomb went off. Version 3 was programmed to distinguish between relevant implications and irrelevant ones. It sat there cranking out millions of implications and putting all the relevant ones on a list of facts to consider and all the irrelevant ones on a list of facts to ignore, as the bomb ticked away."

"An intelligent being has to deduce the implications of what it knows, but only the *relevant* implications. Dennett points out that this requirement poses a deep problem not only for robot design but for epistemology, the analysis of how we know. The problem escaped the notice of generations of philosophers, who were left complacent by the illusory effortlessness of their own common sense. Only when artificial intelligence researchers tried to duplicate common sense in computers, the ultimate blank slate, did the conundrum, now called "the frame problem," come to light. Yet somehow we all solve the frame problem whenever we use our common sense."

-- DougMerritt

Pinker was rattling along there real good until that last sentence, when he demonstrated a FrameProblem of his own. See CommonSenseIsAnIllusion.

FrameProblem: Not a problem to be solved.

Deducing *relevant* implications must be performed relative to a set of goals or objectives. In this case, the objective seems to be: obtain the battery and avoid damage to self or the team. That is, there are both positive and negative objectives (get: battery, don't get: blown up, don't: blow up the team). (One might also term the negative objectives 'constraints' or 'rules'.) Now, if the objective were just to obtain the battery, Pinker's Version 1 is the only robot that properly achieved it. What is needed is a planner that can handle constraints and ask itself: "does this candidate solution accomplish the objective without violating the constraints?" Version 4 of the robot might then deduce that pulling the wagon out violates the constraint of not getting blown up (or blowing up the team). So it might lift the battery and carry it out, or remove the bomb from the wagon, or (if it can determine how) disarm the bomb and then pull the wagon back to the team, disarmed bomb and all.
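For concreteness, a minimal sketch of that acceptance test (my own illustration in Python; the state keys and the "Version 4" framing are invented for the example, not taken from Dennett or Pinker). A candidate plan's predicted outcome must satisfy every positive objective and violate no constraint:

 # Hypothetical "Version 4" check: accept a plan only if its predicted
 # outcome achieves all goals and violates no constraint.
 def acceptable(predicted_state, goals, constraints):
     achieves_goals = all(goal(predicted_state) for goal in goals)
     violates = any(c(predicted_state) for c in constraints)
     return achieves_goals and not violates

 goals = [lambda s: s["battery_with_team"]]
 constraints = [lambda s: s["bomb_with_team"],
                lambda s: s["robot_destroyed"]]

 # Predicted outcome of "pull the whole wagon out": the battery arrives,
 # but so does the live bomb - the constraint check rejects this plan.
 pull_wagon_outcome = {"battery_with_team": True, "bomb_with_team": True,
                       "robot_destroyed": False}
 print(acceptable(pull_wagon_outcome, goals, constraints))  # False

Note that nothing here requires enumerating every implication of the action; the planner only evaluates the predictions that the goals and constraints actually ask about.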

The FrameProblem does NOT need to be solved in order to do this. And it shouldn't be. The irrelevant 'goal' of processing every possible implication of one's actions (and classifying those implications) was the failing of Versions 2 and 3. Pinker is wrong when he states that humans have solved the FrameProblem with 'common sense'; even given our comparative intelligence, there is often a great deal we don't consider... we only learn to consider something relevant through experience. The only advantages we possess are a planning system that properly handles negative objectives alongside the positive ones (e.g. avoid: pain, seek: pleasure), a powerful pattern-matching system capable of Bayesian/fuzzy, confidence-based deduction, and a fully recursive inference system (one capable of learning and abstracting which patterns of inference generally lead to relevant deductions, and thus capable of making assumptions with Bayesian/fuzzy confidence). Admittedly these aren't trivial advantages (it will be a while yet before they're part of the typical AI), but it's also not the case that any of them answers the FrameProblem. Unless our priority goals or constraints somehow involve the colors of the walls or floors (e.g. "don't track dirt through the house" - something many of us hear as children), we simply wouldn't bother considering whether grabbing the wagon might change their colors until we're asked the question (at which point it becomes 'relevant').


A FrameProblem is a question of how to determine what solution elements should be considered in solving a problem. The archetypal example of a FrameProblem is a possibly apocryphal story dating from the 1960s.

A side-burned and groovy new civil engineering post-doc decided to give his undergrads a practical lesson to liven the course up. His girlfriend was working with monkeys in the biology department, so they set up her biggest monkey cage with a mess of odd lengths of 2 by 4 on the ground and a banana suspended 20 feet off the floor by a string looped over one of the ceiling bars. The other end of the string was tied to another bar on the wall. The students were challenged to arrange a structure of 2 by 4s the monkey could safely climb to reach the banana.

The students were shown the cage and worked in pairs over the weekend to figure out their solutions on paper. Then the great day arrived - time to try the designs out! The first team set up a complicated but apparently sturdy solution based on BuckminsterFuller's tensegrity principles. Other students ribbed them about the unorthodox design. Then they all left the cage while the post-doc's girlfriend brought in the monkey.

The monkey looked at the structure of 2 by 4s. It looked up at the banana. It looked at the string. Then it ambled over to the wall, untied the string, and let down the banana.

--PeterMerel

Actually, that's not right at all. A FrameProblem really is just this: the problem of determining the consequences of an action - what changes, what does not change, and how the predicted 'world model' should be updated after performing the action (which is extremely relevant in planning, since you can't try every possible action). That's the traditional use of the word-phrase in AI. The question of "what solution elements are part of solving a problem" is more an issue of LateralThinking; the above example doesn't really represent a FrameProblem at all, but it does provide an embarrassing example of humans being outsmarted by a monkey. The closest it comes to a FrameProblem is that the students didn't determine the consequences of making a solution available in the form of an accessible knot on a wall.

In order to address what changes, you must first understand what can change - what is in the frame. The students here assumed the frame of the post-doc's problem. Indeed, they're taught to avoid questioning the frame - 20 years of word-problems like "A train leaves a station traveling at 60mph ..." have trained them that they get results by not questioning the frame.

The monkey assumes a different frame. It does not outsmart the humans, nor are the humans embarrassed in my conception. Indeed outsmarting and embarrassment are elements of a frame you're bringing to this description, not elements of it - I hadn't thought of them at all, nor are they the point. Which is the point :-) Nor, for that matter, are planning, "world model"s, "lateral thinking", or traditional AI "word-phrase"s essential to the FrameProblem. They're ways people have thought about FrameProblems, but not the only ways we can think about them.

Since I started this page a few years back, and it's linked to by several pages on this and other wikis that observe such usage, I'd like you to accept and consider looser definitions of the term FrameProblem. Otherwise I'll have to christen a "MetaFrameProblem?" or similar neologism to account for such usage, which would be both confusing and onerous. You are free to insist that the alternative is not "right". But then "right" and "wrong" are altogether another frame ~ Pete playing the monkey.

You say that "world models" aren't essential to the FrameProblem. However, you're wrong. This is an AI problem, and it IS defined by its original description, not by whichever arbitrary criterion you choose in order to make it fit your misconceptions of the problem. The FrameProblem is the problem of determining "what changes" and (most especially) "what does not change" as a result of performing an action. It is nothing more, and nothing less. Your saying otherwise is insufficient to redefine the FrameProblem, no matter how much you wish it to be.

Now, planning isn't "essential to the frame problem"; indeed, the opposite is true: the FrameProblem is essential to planning. You can't perform planning in complex systems without encountering the FrameProblem. On the other hand, a world model IS essential to the FrameProblem. Very fundamentally, you cannot answer a question as to "what changed" or "what might change" or "what didn't change" without having an answer to "what is there in the first place?" - i.e. a world model. And one thing a FrameProblem is not: it is NOT a question of "what is or is not in the frame?". That's simply a question of world model.
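To make the "what changes / what stays the same" reading concrete, here is a minimal STRIPS-style sketch (my own illustration in Python; the facts and the action name are invented). The add/delete-list convention assumes that everything an action doesn't mention persists, which is a classical way of legislating an answer to the FrameProblem rather than solving it:

 # STRIPS-style sketch: an action declares only what it adds to and
 # deletes from the world model; every fact it does not mention is
 # assumed to persist unchanged.
 def apply(state, action):
     """Predicted world model after performing the action."""
     return (state - action["delete"]) | action["add"]

 state = {"wagon_in_room", "battery_on_wagon", "bomb_on_wagon"}
 pull_wagon = {"add": {"wagon_outside"}, "delete": {"wagon_in_room"}}

 print(apply(state, pull_wagon))
 # battery_on_wagon and bomb_on_wagon persist by default; deciding
 # whether defaults like that are the right ones is exactly the kind
 # of question the FrameProblem poses.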

I hate it when people come along and start redefining things to meet their misconceptions or misinterpretations. If you redefine FrameProblem, you'll make its meaning ambiguous (so it now means one of two things) or vague (so nobody can really be sure what it means), and you'll thus weaken its meaning. Accepting flexible definitions in place of existing, concise and formal definitions is an intellectual sin. I will not commit it at your request.

Now as to whether the monkey outsmarted the humans: sure it did. It understood and acted upon something that the humans failed to even consider, using information that was readily available to both the humans and the monkey. That's the very nature of outsmarting. And my statement was more to the effect that this was an embarrassing example for ALL humans, not just for those students.

As to whether the monkey outsmarted the students: not at all. They were solving different problems. The monkey solved the problem of getting the banana. The students were given a specific assignment: to arrange a structure of 2 by 4s the monkey could safely climb to reach the banana. The students had no intention of simply handing the monkey a banana; their task was to create a structure. Presumably, the post-doc's goal was to set up the banana so that the monkey would have to climb the structure (thus testing it) to get it. The monkey outsmarted the post-doc.
