Predicting and controlling the nature of software
Moved from RocketAnalogyProblem:
(LTPT = language/tool/paradigm/technique)
If a rocket engineer gets a specification for a certain payload, vibration, peak acceleration, pollution, noise, performance, and cost profile, that engineer can select from among models and methodologies that allow him to effectively predict and control these properties. Of course, that control doesn't necessarily result in an optimal rocket - in part because the specification may be ridiculous (users want magic ponies), in part because BadCodeCanBeWrittenInAnyLanguage (a good tool in inexperienced hands), and in part because the best models/tools might be inadequate for the peculiar case (requiring some R&D and further model development - rocket science). This prediction and control is the property that matters when comparing LTPT. You can't compare LTPT by the end product except insofar as the LTPT itself is part of the end product (e.g. for user-extensible systems, continuous maintenance, runtime upgrade). Why would an educated and intelligent engineer pick one LTPT over another? Because it lets that engineer more effectively predict and control properties relevant to the end-user that the other does not. The end-user's specification is relevant because there is no universal "better than" ("better than for what?", any intelligent engineer would ask). But there is no direct relationship between the end-user product metrics and the tool user's LTPT metrics. It doesn't make sense to compare rocket-building methodologies in terms of end-user specification metrics. To do so is a category error.
Re: "prediction and control is the property that matters when comparing LTPT" [emph. added] - As a general rule, I disagree. Many domains are too complex for developers to micromanage the minutia of the domain. I don't just want a giant logic model that allows me to predict the impact of scenarios, but also for managers and PowerUsers to be able to do the same, in large part because they likely know the domain better than me. Such systems are usually more successful for larger projects in my experience. See CompilingVersusMetaDataAid. Whether this applies to rockets, I don't know. It may be a factor when modular space-ships are common-place and the customer largely manages the modules and combinations there of. --top
Why do you assume that 'prediction and control' means 'micromanage'?
It does if there is a different level or function of management or staff that could potentially handle some of the details. "Micromanagement" generally implies that one is dealing with details that should be delegated to, or handled by, somebody or something more appropriate. Your pet techniques seem to depend on techies who deal with much of the detail of business logic. Your tools are not appropriate for typical power-users who would or could manage much of the business logic.
I don't see how any of what you just said is relevant to comparing tools, models, techniques, and languages. Your HandWaving about my 'pet techniques' is completely out of place, since I haven't mentioned any. Are you just stringing unjustified claims together, hoping they will add up to a sound argument or at least a good RedHerring?
Apparently I'm mixing up one non-signer with another. (Assuming they are really different people.) If it doesn't apply to your pet tools, then please ignore it and just wait for the matching non-signer to respond. Or, flesh out what "prediction and control" refers to. Let me restate the above: "prediction and control" should apply to designated power-users as much as to "formal" software engineers and programmers. The issue relevant to the topic then becomes: "How do we measure the power-users' ability to predict and control per design choice?"
"Prediction and control" is not a 'technique', and therefore not a 'pet technique'. And sure, any user of an engineering tool/model/technique/language should have the ability to predict and control the relevant properties of whichever products they engineer. That's why we compare them as properties of the tool/model/technique/language, rather than of the users. I suspect you are trying to say that controlling and predicting certain properties of the product isn't relevant to certain developers of the product. That's one of those facts-of-life we must suffer; just because an ergonomics engineer wants big name-brand cushy seats in the rocket and he doesn't WANT to care about weight, sheer stress, and vibration doesn't mean he should or will get his way. What matters is the gestalt engineered product, not the WishfulThinking and MentalMasturbation by any particular developer (or 'power user'), though if they're competent they should reasonably get some influence in how to achieve the targeted gestalt properties. If a client desires to ensure realtime performance, then the videogame-enemy-AI developers are going to be stuck with an videogame-enemy-AI language that lets the other techs control and achieve this performance characteristic. If you're trying to say that the 'performance-junkie techs' shouldn't be the ones who decide that 'realtime performance' is a must-have feature, then you're fighting a straw man.
There seems to be a communications gap here. Please clarify "That's why we compare them as properties of the tool/model/technique/language, rather than of the users." Tools don't "control", humans do, for the most part. And prediction is often done by both in practice: the machine may do some of the prediction and the user/analyst/programmer may also do some of it. Ideally the tool assists the humans in making decisions relevant to them. For example, bar charts and line charts may be used to predict future trends. The machine can do some extrapolation into the future using regression etc., but ultimately a human usually makes the final prediction and acts upon it, such as setting a re-ordering quantity for a given product. A good system would have the computer make a re-ordering guess based on regression etc., and a manager could then click on or view a chart to verify the prediction before giving final approval of the re-ordering amount. -t
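To make the division of labor concrete, here's a minimal sketch of that re-ordering workflow (Python, with entirely hypothetical names and numbers - monthly_sales, the safety factor, etc.; the actual system, data source, and charting UI are unspecified above):

  # The machine proposes a quantity by fitting a least-squares trend line
  # to past sales and extrapolating over the lead time; a human approves.
  def suggest_reorder_qty(monthly_sales, lead_time_months=1, safety_factor=1.2):
      n = len(monthly_sales)
      xs = range(n)
      mean_x = sum(xs) / n
      mean_y = sum(monthly_sales) / n
      slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, monthly_sales))
               / sum((x - mean_x) ** 2 for x in xs))
      forecast = mean_y + slope * (n - 1 + lead_time_months - mean_x)
      return max(0, round(forecast * safety_factor))

  # The manager sees the suggestion (and the chart) and gives final approval.
  suggested = suggest_reorder_qty([120, 135, 150, 160])
  print("Suggested re-order quantity:", suggested, "(awaiting manager approval)")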
Humans control things through tools. Human ability to control is hugely, enormously affected by the tools they use; for example, it is very difficult to polish glass with a chisel, and very difficult to kill a horse with a toothpick. We thus measure our tools against a purpose (fitness for a purpose). A good tool for some purpose X lets us control some property-of-the-world Y. Prediction is similar; our models and techniques greatly affect the accuracy and precision of our predictions. Prediction and control go hand-in-hand: you cannot effectively control that which you cannot predict. To engineer a rocket for reduced pollution, you first need a model that lets you predict pollution based on other characteristics of the rocket (such as choices for fuel, burn rate, and funnel shape). Development of software is no exception; LTPTs must support humans in predicting and controlling whichever properties are relevant to the generated software product, whether that be producing line-graphs at the push of a button or properly delivering orders to another business.
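A toy illustration of that predict-then-control loop, with a completely made-up model and made-up numbers (nothing here comes from real rocketry): given a function that predicts properties from design choices, the engineer can search the design space for a configuration that controls one property (pollution) while still meeting another requirement (thrust):

  # Fictitious predictive model: design choices in, properties out.
  def predict(fuel_rate, nozzle_ratio):
      thrust = 50.0 * fuel_rate * nozzle_ratio
      pollution = 2.0 * fuel_rate ** 2 / nozzle_ratio
      return thrust, pollution

  # Enumerate candidate designs, keep those meeting the thrust spec,
  # then pick the one with the least predicted pollution.
  candidates = [(f / 10.0, r / 10.0)
                for f in range(5, 21) for r in range(10, 31)]
  feasible = [c for c in candidates if predict(*c)[0] >= 100.0]
  best = min(feasible, key=lambda c: predict(*c)[1])
  print("chosen design:", best, "->", predict(*best))

Without the predictive model there is nothing to search over; that is the sense in which control depends on prediction.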
I agree, and LTPTs that help power-users control, manage, study, and predict the system are useful for those very reasons. Same for the developers. We seem to agree on that point. I was just hammering home the point that it's not just the formal developer that needs such aids and techniques, but also users and power-users. Some of the LTPTs proposed by WikiZens with a writing style similar to yours seem to emphasize aids for the formal developers at the expense of power-users. Metrics for software designs should include those viewpoints also, at least in my domain. This is all part of the bigger "problem" of identifying the viewpoint to measure from: developer, power-user, user, business owner (see below), etc. They may overlap in many areas, but they are not identical. If you wish to focus on just the developer, or optimize the system just for the developer, that's fine as long as the limits of that approach are pointed out to readers. -t
Your claims confuse me. Saying that a user's "viewpoint" is relevant to measuring LTPTs strikes me as analogous to claiming we should compare axes by looking at the end-product canoes. This would be silly: canoes aren't axes, and the quality of a canoe is affected by very many factors other than which axe is selected (especially including its design or blueprints). The product users see is not the LTPT that constructed it, and the quality of that product is affected by very many factors other than which LTPT is selected (especially including its specified requirements). Perhaps you should explain what you mean by 'power-user' that distinguishes it from developers and regular users? Following the topic analogy, what would be a 'power-user' for a rocket?
Ideally, yes, we should look at the end product: canoes, or at least canoe sales. I agree, though, that in practice this is very difficult, such that we find stand-in metrics instead, and developer-centric metrics may be such a thing in some cases. We'd have to hold non-axe factors steady when we analyze canoes, which is difficult. Thus, in practice, stand-in axe tests would be created to test axes against expected loads, wood types, etc. As far as a "power user" for the given rocket analogy, I agree that as-is it may not be a good analogy for exploring that aspect, and I'll accept the blame for picking an analogy lacking in that area. Although, it was useful in bringing "prediction and control" to the forefront as something to explore deeper, perhaps in another topic. How this issue affects developers and power-users seems to be an important factor in the different design strategies we each seem to focus on. As I hinted above, if rockets became more modular in the future, then a "power user" may be involved in module selection and combining. But since there's no actual experience to draw from on that, the analogy probably won't hold up well. -t
In speculation-land, perhaps they could be "mission managers" who are not engineers, but want the ability to mix and match modules as needed to optimize their particular missions without having to consult engineers except in relatively rare circumstances. Consider a drag-and-drop Lego-like modelling kit/UI that would allow mission managers to model the accuracy, safety, economics, etc. of various space module configurations. As for who is best positioned to evaluate such an interface, the power-user or the engineers, that's a tricky one. I suspect the best evaluation approach would be a combination of both studying it and asking each other questions: the engineer asks the power-user about the interface's intuitiveness (usability), and the power-user asks the engineer about the accuracy and thoroughness of the modelling tool. It's what the suits call "synergy". -t
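To make the speculation a bit more tangible, here's a minimal sketch of such a kit's core, with entirely invented module data and a naive aggregation rule (a real kit would use engineer-validated models rather than simple sums and products):

  # Hypothetical module catalog a mission manager could mix and match.
  MODULES = {
      "crew_cabin":  {"mass_kg": 4200, "cost_musd": 30, "reliability": 0.995},
      "lab":         {"mass_kg": 5100, "cost_musd": 45, "reliability": 0.990},
      "fuel_tank_L": {"mass_kg": 9000, "cost_musd": 12, "reliability": 0.999},
  }

  def predict_mission(selected):
      # Mass and cost add; reliability multiplies (independent failures assumed).
      mass = sum(MODULES[m]["mass_kg"] for m in selected)
      cost = sum(MODULES[m]["cost_musd"] for m in selected)
      reliability = 1.0
      for m in selected:
          reliability *= MODULES[m]["reliability"]
      return {"mass_kg": mass, "cost_musd": cost, "reliability": round(reliability, 4)}

  # The mission manager mixes and matches; the kit reports the consequences.
  print(predict_mission(["crew_cabin", "fuel_tank_L"]))

The engineers would own the accuracy of MODULES and the aggregation rules; the power-user would own the configurations - roughly the division of evaluation labor described above.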