Laynes Law Discussion

(Based on discussion in WhatIsIntent)

People will not agree on base definitions.

It seems as though you confuse communication errors with model errors. LaynesLaw is a problem because people in disagreement about definitions simply aren't speaking the same language: they are fighting about the language with which to discuss the meat rather than about the meat itself. One solution is to change languages, but that puts a huge learning burden on everyone who wishes to observe the conversation. It's fine to point out that communication problems exist - especially in English - but I'm still unclear as to how this is relevant to any meaningful argument. Words used to discuss concepts may be relative or arbitrary, but that doesn't make the concepts being discussed relative or arbitrary.

I disagree. Any language based on abstractions and approximations will have the same problem. A UsefulLie is not necessarily a perfect lie or tool.

Any language based on abstractions and approximations will lose information or allow for model errors when applied to a universe about which we possess incomplete information. But since the "same problem" under discussion in this topic is communication error, not model error, your assertion that any such language will have this problem is incorrect. Your mention of UsefulLies clearly indicates that you are still confusing the two. It is possible to achieve perfect communication of imperfect models.

You don't know that. Modeling differences can produce communication errors.

Modeling differences only exist because of communication errors. It is true that some languages, such as English, cannot perfectly communicate models and will thus result in modeling differences that may result in further communications errors. But other languages, including maths and programming languages, can essentially achieve perfect communication of the model. This doesn't mean the model being communicated will be correct, but it does mean there will be no modeling differences; any error can be fully blamed upon the model itself rather than upon communication thereof.
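To illustrate with a minimal sketch (the model and names here are invented for the example, and Python stands in for any sufficiently formal language): a predictive model written as a pure function has exactly one meaning, fixed by the language's semantics, so any two readers of the code hold exactly the same model - even if that model is wrong about the world.

  # A deliberately naive model of free fall that ignores air resistance.
  # Anyone who reads this holds exactly this model, no more and no less.
  G = 9.81  # m/s^2, treated as a constant

  def fall_time(height_m):
      # Predicted seconds for an object dropped from height_m meters.
      return (2.0 * height_m / G) ** 0.5

The prediction may be quite inaccurate for a feather, but any resulting disagreement is with the model itself, never with what was communicated.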

People communicating in a vague language such as English often reduce vagueness via active listening: asking appropriate questions and attending to the answers, responding with examples or analogies ('so it's like ...') or at least predicting some and confirming them, or even solving problems and providing answers that would have a very low probability of being correct unless the model had been communicated successfully. Proper communication in English requires that the listener meet the speaker halfway, for it is by doing so that arbitrary levels of precision are obtained and communication error is reduced in an otherwise vague or ambiguous language.

The vagaries of English do not entitle a person to call just any interpretation of another person's words 'correct'. Instead, such interpretations, when held in an internally consistent manner, must further be confirmed against the support mechanisms: predictions, examples, analogies, problems and solutions, etc. Even then, some modeling differences may remain, but the error due to them can be reduced until it is insignificant compared to the error inherent in the model itself.

My observation is that the biggest problems are usually related to applying the model to the real world rather than to flawed (internally inconsistent) models. Software magnifies this because most of the contentious issues are related not to connections to the real world but to internal organization. "Wrong output" is usually far easier to settle than internal organization issues.

Perhaps application of a model to the real world is a problem. But it is not a LaynesLaw problem or a communication problem. Indeed, even if there is contention as to exactly how one should go about applying a model (policy), there are languages that allow one to communicate perfectly and exactly how the model shall be applied.
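For instance (a hedged sketch continuing the earlier free-fall example; the trusted range is invented for illustration), the application policy itself can be written in the same unambiguous form, so that 'how to apply the model' is no more open to interpretation than the model is:

  # A hypothetical application policy, stated exactly: trust the model
  # only within a tested range, and refuse to predict outside it.
  def apply_fall_model(height_m):
      if height_m <= 0.0 or height_m > 100.0:
          return None  # the policy declares the model inapplicable here
      return fall_time(height_m)  # fall_time as defined in the earlier sketch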

RE: "flawed models (internally inconsistent)" - I'm under the impression you didn't catch my meaning with regards to modeling errors. Any reasonable model will only be flawed in the external sense, such that it makes predictions that are either imprecise or inaccurate. Models that are flawed in the 'internal' sense are flawed independently of the universe in which they are applied: either they can't make any useful or falsifiable predictions, or they can make predictions but predict contradictory things such that no matter what you observe the model is simultaneously wrong and right. When I discussed 'modeling errors' above, I was talking only about the external errors - i.e. the "wrong output" stuff. I won't deny that the other errors may exist; I simply wasn't giving them any thought.

As for application being "the biggest problem", I have my doubts. I imagine it depends on the model, but most models are designed for application to the real world and thus avoid making application particularly difficult. Coming up with a 'correct' model (one with a high degree of accuracy and precision), an 'efficient' model (one that requires fewer resources for computation), or a 'simple' model (one that is easier to communicate, teach, or implement) may often be a greater problem. If your observation is that applying the model is "the biggest problem", it might be because you haven't been doing the work of developing or implementing the model.


People will not agree on base definitions. The best you could achieve is to have both parties agree on the root abstractions, and then build precise derivations based on those. But there are at least two problems with this:

