Just like physicists seeking a "grand unified theory" of physics, we software weenies often ponder a similar principle for software engineering. Can a single EverythingIsa be created that is everything to everybody, or at least most things to most bodies?
I suspect that this is not possible because constraints are part of the alleged power of some paradigms. Paradigms can be viewed more or less as contracts: You live by rule/restriction set X and then I can deliver benefits and technique set Y. (And provide a somewhat consistent way to package things.)
For example, FunctionalProgramming has the "no side-effects" rule that allegedly makes it easier to reason about the combined effects of bunches of FP atoms (emergent behavior). And relational has rules, such as a primary key (uniqueness of tuples), that navigational lacks. In that sense, are we really fighting over "un-features" rather than features?
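(To make the FP half of that contract concrete, a minimal Haskell sketch - Haskell chosen only for illustration. The constraint is the feature: because 'square' is barred from having side-effects, any call may be replaced by its result, which is what licenses local reasoning about composed atoms:)

 -- Pure by construction: no side-effects, so 'square x' may be
 -- replaced by its result anywhere (referential transparency).
 square :: Int -> Int
 square x = x * x

 -- Composition of atoms stays predictable: 'fourth 3' is always 81,
 -- regardless of evaluation order or how many times it is evaluated.
 fourth :: Int -> Int
 fourth = square . square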
If so, is it possible to at least make those constraints easier to add and remove as needed, so that the portions we do agree on do not have to change? It could resemble the FeatureBuffetModel. For example, there may be a menu of constraints, and OOP fans will turn on different constraints than relational fans or functional fans do. Strong/static-typing fans may likewise turn on a different set of constraints than dynamic/loose-typing fans. (Perhaps the type system can double as the OO class system; there are lots of similarities.)
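(As a rough illustration of such a menu - every name below is invented for this sketch, not any existing system:)

 -- A hypothetical shared menu of constraints.
 data Constraint = NoSideEffects | StaticTypes | PrimaryKeys
                 | ReferentialIntegrity | Encapsulation
   deriving (Eq, Show)

 -- Each camp switches on a different subset of the same menu;
 -- everything not listed stays available but unconstrained.
 functionalFan, relationalFan, oopFan :: [Constraint]
 functionalFan = [NoSideEffects, StaticTypes]
 relationalFan = [PrimaryKeys, ReferentialIntegrity]
 oopFan        = [Encapsulation, StaticTypes]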
It is essentially a subtractive approach to tool design instead of the current additive approach. The advantages of the subtractive approach are twofold. First, you don't have to reinvent from scratch features you may later want to add (switch on). Second, the features/constraints common to two or more paradigm camps remain identical across them, which reduces communication overhead, learning curves, and tool/add-on construction costs.
--top
I don't think your generalization is unreasonable. The battles between various paradigms and programming disciplines really are about constraints on expression and semantics (e.g. to allow better optimizations, to guarantee security, to avoid crashing, to aid in proving correctness, to avoid or support hierarchy, etc.). Fundamentally, any 'property' you can say is true of a discipline or paradigm will be true of that paradigm because of a constraint on expression or features - it wouldn't really be a property of the paradigm if the paradigm didn't force it to hold true! Whether or not a given property is 'good' is, of course, an entirely different question.
But I would not call constraints "un-features". If anything, features (at least in languages, including programming languages) ARE constraints. Consider phrases, for example: if a phrase were unconstrained in its meaning, it would be essentially meaningless and lack definition. When you give a phrase a particular meaning, you've constrained the meaning of said phrase... and you've also added utility and value to it. Likewise, a blank page is unconstrained, having the potential to be filled with anything at all; that potential disappears only when you start inking words upon it, thus creating something with discernible features, which are properties.
I second this. It's the DifferenceThatMakesaDifference. By introducing a difference, a constraint, we partition our solution space into the part where it holds and the part where it doesn't. Each part may have its own further distinctions and its own optimizations (like easier proofs on the functional side). On the other hand, the very moment we introduce specializations on one side that have no counterparts on the other, we lose uniformity. -- GunnarZarncke
Is it possible to make constraints easier to manipulate? Maybe. Language itself can be manipulated from within another language quite easily, so long as the initial semantics exist for said operation (and I've done a great deal of research on that subject in particular). WaterbedTheory would have you watching out for what this might make more difficult... which, among other things, seems to include writing a parser for the contortionist language. But I think that's the lesser of problems. It's very important to keep in mind that many of the most valuable properties enforced by constraints are emergent in nature - you mentioned it yourself when discussing the emergent behavior of FP atoms. TypeSafety, Authority, Trust, HardRealTime constraints, etc. are among the set of emergent properties... and introducing one little loophole, even one, can break the property: you'll no longer be able to prove, reasonably assume, or even (if you are a rational being) claim that the property continues to hold. Thus, while you might be able to add constraints to allow some proof over various sub-components in your language, you cannot readily remove them.
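(Haskell provides a ready-made illustration of that fragility: unsafePerformIO is exactly such a loophole, and a single use of it forfeits the purity guarantee for every caller, however indirect:)

 import Data.IORef (IORef, newIORef, modifyIORef', readIORef)
 import System.IO.Unsafe (unsafePerformIO)

 -- A hidden piece of mutable state smuggled past the type system.
 counter :: IORef Int
 counter = unsafePerformIO (newIORef 0)
 {-# NOINLINE counter #-}

 -- Claims to be pure, but isn't: two calls with the same argument
 -- return different results, so referential transparency - an
 -- emergent, whole-program property - no longer holds anywhere.
 next :: () -> Int
 next _ = unsafePerformIO (modifyIORef' counter (+1) >> readIORef counter)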
Of course, you can't add just any set of properties like spices to some great stew. Some properties are contradictory. Often even 'good' properties are contradictory. You cannot have, for example, both HardRealTime constraints AND communications/networking protocols that will survive disruption and arbitrary delay - at least not at the same time and in the same communication environment.
- The goal is not really to be able to switch off constraints without side-effects for existing applications/systems. The goal is more to reduce the learning curve, communication overhead, and tool-building effort between different parties than to go paradigm-hopping with an existing domain system. I readily agree that "live" paradigm-hopping is probably an unrealistic or unattainable goal, and so we can probably agree to drop that goal for now. --top
- I was leaning more towards the ability to enable certain constraints for certain parts of a common program. You can't enable all combinations of constraints, but perhaps you can enable just those constraints you require at a given point in the program - i.e. so you can have a kernel with HardRealTime constraints, full static (no runtime check) type safety, and bounded memory constraints hooking into a DistributedDatabase? component with disruption tolerance, at best soft temporal guarantees on certain query speeds, no guarantees at all for update rates, and soft typing (which allows for runtime checks where static proof is difficult). There would need to be some integration effort at the -boundaries- of these computation spaces - e.g. the kernel cannot wait indefinitely on the database and maintain its hard realtime constraint, and any response from the database potentially needs to be typechecked at runtime (soft typing rules out a static proof, so an automated decision is needed here). To whatever degree this integration effort can be automated by a computation-space-aware language, it should be automated (even if that takes options like 'defaults' on timeouts, or whether the kernel can later take advantage of the query result even if it arrives AFTER the timeout, etc.). And this isn't unrealistic, at least no more than 'optimizers' are unrealistic. It's just rather researchy, and (much like optimization problems) will involve massive databases of pattern-recognition rules and known solutions and strategies for common but often specialized situations.
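 (A hand-written sketch, in Haskell with invented names, of what one such boundary adapter might look like: a default timeout protecting the hard-real-time side, plus a runtime check standing in for the static proof the soft-typed side cannot offer. In the proposal above, this glue would be generated by the computation-space-aware language rather than written by hand:)

 import System.Timeout (timeout)

 data BoundaryError = DeadlineExpired | IllTyped String

 -- 'validate' performs the runtime type check that soft typing
 -- forces upon the statically-typed caller.
 queryAcrossBoundary :: Int                  -- deadline, microseconds
                     -> (String -> Maybe a)  -- runtime validator
                     -> IO String            -- soft-typed DB query
                     -> IO (Either BoundaryError a)
 queryAcrossBoundary deadline validate runQuery = do
   response <- timeout deadline runQuery
   pure (case response of
           Nothing  -> Left DeadlineExpired  -- hard-real-time default
           Just raw -> maybe (Left (IllTyped raw)) Right (validate raw))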
It wouldn't be impossible to start without any constraints and always add them as needed to each subcomponent, and to make this easier... but doing so doesn't leave us in much better condition than we are in today (where you start without a goal, then gain a goal, then choose a language and computing platform, then do some design work, then write some code - each step along the way constraining the implementation of the component upon which you're laboring). Then you start choosing standards for communication between components - codecs, protocols, etc. Since your initial environment imposed no constraints guaranteeing that communications are well-typed and such (and won't crash or root-hack your system), you'll need to run the appropriate checks yourself.
What would make this easier overall?
- Well, for starters, the state of the art lacks much support for modeling and constraining computation environments, which is needed if we're going to easily prove emergent properties over the communications of subcomponents within an environment (e.g. that all transactions halt, that no protocols are violated, that no deadlocks occur, that only one process thinks it's holding the token at any given time, etc.). Further, we need to make it easier to manipulate the environments. Consider two processes in the same 'trusted computing environment' on the same system: they might send messages to one another directly by passing pointers to value-objects in shared (and 'trusted') memory. It would be wonderfully nice if the same processes, should they migrate onto separate machines, would automatically sign and/or encrypt and/or reorder and/or retransmit messages in order to guarantee the desired communication constraints. But perhaps they could still assume (without checking) that the messages are well-typed in accordance with the schema (since they DO trust each other) and thus not bother checking - allowing an optimization that isn't really feasible without automation. ConceptOrientedProgramming touches on these ideas - extra processing for communications at the 'edges' of 'spaces', etc. - but it isn't a detailed enough abstraction to make this work with any automation. We'd need to choose a set of common properties for environments to possess, and allow these to be selected from the FeatureBuffetModel. What does this imply?
- The ability to easily declare computation and communication 'environments' or sub-environments as possessing certain features (where by 'features' I mean 'constraints') - e.g. local/mobile/distributed/etc., secure communication, trusted communication (e.g. that all processes trust each other, or that they are all trusted by the DoD, etc.), well-typed communications, default-ordered communications, named components common to processes in the environment (e.g. shared databases), etc. (A sketch of such declarations appears after this list.)
- Also, inference over these 'features' and properties of the environments: if you don't declare a property to be true, that doesn't mean it isn't true. This would make things 'easier' by reducing the programmer's burden of explicit declaration.
- The ability to -automate- the communications-safety checking and other communications properties at the edges of untrusted computation environments. A fancy set of specialized (and optimized) automations ready and roaring to go for at least the most popular combinations of features (given N features there are 2^N subsets, so it would be unreasonable to handle ALL of them with full optimizations). As an example of the combinatorial explosion, consider features for just message passing and optimizations thereof: feature_buffet:(secure, ordered, reliable, trusted, disruption-tolerant) gives 2^5 = 32 subsets, times channel_type:(many-to-one | one-to-many | one-to-one | many-to-many), times platform:(single processor shared memory | distributed processor shared memory | distributed processor with partitioned memory and failures | open distributed) - 32 x 4 x 4 = 512 message-passing feature combinations.
- Hierarchical encapsulations: the ability to create an internal computation environment for sub-components within an environment, with additional constraints, which might further encapsulate sub-components of its own. This doesn't imply navigational data access, so don't blow your- uh- top. ^_^ Such an environment needs to inherit the edge-processing automation of the external environment if it exposes communications interfaces across multiple boundaries. This part, ConceptOrientedProgramming got right.
- Might as well expand on this idea a bit: if you have a computation environment, you're obviously going to have processes within it that can receive communications and perform computations. However, perhaps a computation environment should be viewed as a 'service cloud' rather than a set of named processes, though the computation space could easily be constrained such that exactly one process gets to fulfill a given role at any given time. The computation space could be described as automatically creating some default process to fulfill a given role if none exists in the space when requested (e.g. lazy rendering of services, with provision via exactly one process). And processes within a 'service cloud' can perform one or more roles. These might not require anything 'special' (no keywords, no unique semantics, etc.) - just some shared cells/memory for concurrent users of the computation space - but it does allow a considerable expansion upon the normal ideas involving traditional processes. When all that needs to persist across operations is a rather blank and abstract 'computation space' with a little encapsulated state, rather than actively running processes, it far more easily supports OrthogonalPersistence (moving parts don't do so well when holding still, which is why traditional processes are volatile), and it easily supports (and encapsulates) GenerativeProgramming within a space.
- In addition, you must be able to add constraints/features to the computations themselves (as opposed to just their environments), e.g. declaring that a particular computation MUST halt (generally undecidable, but provable in most cases where you really need to prove it), that a particular computation MUST return a particular type (in accordance with some predicate) or within a certain temporal delta, or consume no more than a constant bound of memory, or that a particular set of operations shall be performed sequentially, or in parallel, or that a computation shall await an answer (or many answers, or a particular pattern of communications, etc.) before evaluating and doing more work. (See the second sketch after this list.)
- State-of-the-art has MUCH more to say on this subject. We already have considerable experience with type-systems, workflow languages, pattern-recognizers, etc.
- Finally, you might want to be able to adjust the language - both the expression and the semantics. It isn't necessary for any of the above, but it does make description and expression of common patterns of constraints easier (along with making expression of every other pattern of anything you can dream up easier) because you can capture said pattern with a new bit of syntax associated with a new bit of semantics. ExtensibleSyntax? has also been researched (since the seventies) and is very promising and capable of this despite its lack of popularity. LanguageOrientedProgramming is finally gaining some degree of popular support, which means it may even be marketable within a decade.
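(To make the first items above concrete, a hypothetical sketch of declared environment features - every name is invented for illustration, and no such library exists. The edge automation would be derived from the difference between two spaces' feature sets:)

 -- A hypothetical menu of environment features (i.e. constraints).
 data EnvFeature = Local | Mobile | Distributed | SecureComm
                 | TrustedComm | WellTypedComm | OrderedComm
                 | DisruptionTolerant
   deriving (Eq, Show)

 data Environment = Environment
   { envName     :: String
   , envFeatures :: [EnvFeature]
   }

 -- The kernel space and the database space from the earlier example,
 -- declared rather than hand-wired.
 kernelSpace, dbSpace :: Environment
 kernelSpace = Environment "kernel" [Local, TrustedComm, WellTypedComm]
 dbSpace     = Environment "ddb"    [Distributed, DisruptionTolerant]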
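(And the second sketch, for constraints on the computations themselves - again with invented names. The idea is that a checker would attempt a static proof of each declared constraint and fall back to runtime enforcement, such as a timeout, where proof fails:)

 -- Constraints declared over an individual computation.
 data ComputationConstraint
   = MustHalt            -- undecidable in general, often provable anyway
   | ReturnsWithin Int   -- temporal delta, in microseconds
   | MemoryBound Int     -- constant memory bound, in bytes
   | Sequential          -- ordering demands on a set of operations
   | Parallel
   deriving (Eq, Show)

 -- A computation paired with the constraints it claims to satisfy.
 data Constrained a = Constrained
   { constraints :: [ComputationConstraint]
   , computation :: IO a
   }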
The approach described above would allow you to design computation spaces that internally utilize your paradigm of choice without breaking its guarantees in the other computation spaces. The spaces themselves are necessary because you can't remove most constraints without breaking their properties entirely... so, instead, you need to choose your constraints on a per-space basis and start with as few as possible. If it is all done in a single language (possibly one that can be reshaped into any other language... e.g. supporting query languages at one point and high-level process pipelines at another), then one would never need to leave one's toolbox to do these jobs, and could be confident of the integration and of the various code-proofs that are performed.
This differs from the BigIdea topic in that the idea is not to force one best paradigm on everyone, but rather to unite them all without significantly taking away what people like about each one.
Of course. EverythingIsa a constraint - or at least every feature.
See: LiberatingConstraint, BigIdea
CategoryBuildingBlocks