Suppose someone working on a project in the Widget domain develops a new method, process, or technique. The method, process, or technique is successful, so the project is successful. Another project in the Widget domain comes the developer's way, the method, process, or technique is employed again ("A trick used twice is a method."), and again, success follows.
Soon the developer has a fistful of conference papers elaborating the method, process, or technique (which from now on will be called, inaccurately but by custom, a methodology; see MethodOrMethodology), with a book on the way. The developer, always working on Widget projects, meets nothing but success as the consultation requests come rolling in.
Meanwhile, other developers in the Doodad, Thingummy, and Whatsit domains have their own methodologies, and are busy hawking them in the open market of ideas.
What has happened, of course, is that each methodology developed in its own technological niche, in which it is highly successful. Comparisons between methodologies are difficult when the differences between their niches are considered, useless when they aren't.
Size of teams and size of projects show only one side of the "domain of suitability" for a methodology. The historical domain of success is a very strong indicator of suitability, and, when the domain changes, it may be a stronger counter-indicator than most methodologists are willing to admit.
The idea of a ProblemFrame, due to MichaelJackson, can help with such comparisons. In SoftwareRequirementsAndSpecifications, Jackson also notes that methods can be described as more or less sensitive to both problem domain and solution technology.
The theory stated here, but not proven, is that a method has a high probability of failing outside its "domain". That assertion is certainly no better founded than the opposite one, that a method that works on X has a good chance of working on Y because Y is not that unlike X. The theory is also the excuse of those who just don't want to change: "It wouldn't apply here anyway."