From TheFifthDiscipline:
"Mental models are the images, assumptions, and stories which we carry in our minds of ourselves, other people, institutions, and every aspect of the world. Like a pane of glass framing and subtly distorting our vision, mental models determine what we see. Human beings cannot navigate through the complex environments of our world without cognitive 'mental maps'; and all of these mental maps, by definition, are flawed in some way."
"flawed" seems a bit harsh. Yes, I cannot predict in every detail what a certain person or piece of hardware will do at all times. However, my simplified approximations of people and things are (sometimes) adequate for understanding many of the more important parts of their behavior. Some approximations I would even call "true", even though they don't predict actual results exactly: when the approximation gives an "error band" and the actual results fall within it.
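That "error band" test can be sketched in a few lines of Python. Everything here (the function name, the commute example, the tolerance) is a hypothetical illustration, not anything from the page above:

```python
# Hypothetical sketch: treat a model's prediction as "adequate"
# when the observed result falls inside the model's stated error band.
def within_error_band(predicted, observed, tolerance):
    """Return True if observed lies within predicted +/- tolerance."""
    return abs(observed - predicted) <= tolerance

# A mental model of a commute: "about 30 minutes, give or take 5".
print(within_error_band(30, 33, 5))  # True: the model held
print(within_error_band(30, 40, 5))  # False: time to revise the model
```

The point is only that a model can be judged against its own declared precision rather than against perfect prediction.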
Take it any way you want.
It seems to me that the idea of a MentalModel is useful when it comes to thinking about training. There is a lot of information to be absorbed in any training situation, including new jargon. If too much is given at once, the trainees can become confused, having no clear mental picture of what is significant.
This becomes vitally important when the trainees will later become responsible for activities where mistakes are life-threatening. If their mental model is inaccurate and this is not tested and corrected, then there is a real possibility of serious damage. -- JohnFletcher
See ComputersAsTheatre for an alternate view.
Each MindMap (CategoryMindMap?) is a written summary of a MentalModel. But some MentalModels cannot be expressed by a MindMap.
There is also a page called KnowledgeMap. KnowledgeMapsWhitePaper mentions the idea of a mental model.
SemanticNets are a simple way to represent MentalModels. ArtificialNeuralNetworks would give the most accurate representation but require a lot more detail.
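To make the SemanticNet idea concrete, here is a minimal sketch in Python: the net is a set of (subject, relation, object) triples, and a query walks them. The triples and the `related` helper are invented for illustration, not part of any real library:

```python
# Hypothetical sketch: a tiny semantic net as a set of
# (subject, relation, object) triples -- one simple way to
# write down a fragment of a mental model.
net = {
    ("MentalModel", "is-a", "Simplification"),
    ("MentalModel", "summarized-by", "MindMap"),
    ("MindMap", "is-a", "WrittenSummary"),
}

def related(subject, relation):
    """Return all objects linked from subject via relation."""
    return {o for (s, r, o) in net if s == subject and r == relation}

print(related("MentalModel", "summarized-by"))  # {'MindMap'}
```

A neural network could encode the same knowledge implicitly in its weights, which is why the page above calls that route more accurate but far more detailed.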
See also ParadigmShift, HiddenAssumption, TheMapIsNotTheTerritory, AllModelsAreWrongSomeModelsAreUseful, MentalImage, MentalIndexability