The perceived patterns of change appear to differ between people, and this is often the crux of disagreements about the "best" techniques or paradigms. After all, one of the main goals of software engineering is to make software easier to change. However, if we all see change differently, it is hard to agree on the best techniques to achieve this goal.
For example, OO proponents tend to see more "sub-typing" embedded in real-world change patterns than others do, while some who dislike OO see sub-typing as unnatural or artificial (ThereAreNoTypes). One issue that is hard to pin down is the ChickenOrEgg question: does a paradigm, or training in a paradigm, shape one's perception of change, or does a given perception of change lead one to favor certain paradigms or techniques? One could allege that constant exposure to sub-type and class-hierarchy examples primes one to "see" sub-types where there are none, or where they are only faintly present.
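To make the disagreement concrete, consider a hypothetical customer model (the names Customer, PreferredCustomer, and discountRate are illustrative, not from any particular codebase). An OO modeler might read "preferred customers get a discount" as a stable sub-type and reach for a hierarchy, while a ThereAreNoTypes skeptic might model the very same distinction as plain data. A minimal Java sketch of the two readings:

 // A hypothetical customer model, showing the same business rule
 // under two different perceptions of future change.
 public class ChangePerceptionDemo {

     // Sub-typing view: "preferred" is seen as a stable category,
     // so it becomes a class in a hierarchy.
     static class Customer {
         double discountRate() { return 0.0; }
     }
     static class PreferredCustomer extends Customer {
         @Override double discountRate() { return 0.10; }
     }

     // Type-free view: "preferred" is just an attribute, on the bet
     // that the category will blur, combine with others, or become
     // a matter of degree over time.
     static class FlatCustomer {
         boolean preferred;  // or a numeric tier, or a rule table
         double discountRate() { return preferred ? 0.10 : 0.0; }
     }

     public static void main(String[] args) {
         Customer c = new PreferredCustomer();
         FlatCustomer f = new FlatCustomer();
         f.preferred = true;
         System.out.println(c.discountRate() + " vs " + f.discountRate());
     }
 }

Which version carves the domain at its joints depends on the change pattern one expects: if new customer kinds keep arriving as clean, mutually exclusive categories, the hierarchy pays off; if categories blur or multiply combinatorially, the attribute (or a rule table) does. The code itself cannot settle the argument; the expectation of change does.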
Perhaps we should distinguish between perceptions of general patterns of change and perceptions of a given change. The second is more objective because it concerns a particular event that can be directly analyzed. The first involves a mental summary of past changes endured, used to estimate the pattern of future changes. As humans we have selective memories because we cannot store everything in our heads. (It is an urban myth that every memory is fully stored.) Many different factors affect what we remember and what we forget.
See also: ChangePattern, SoftwareDevelopmentIsGambling, DecisionMathAndYagni