GoalBasedProgramming is an as-yet under-developed programming paradigm that is consequent/executive. (Relative to the other major paradigms: imperative is mechanical/executive, functional is mechanical/evaluative, and constraint/logic is consequent/evaluative - see ThereAreExactlyThreeParadigms for the associated discussion.)
The fundamental operations required for GoalBasedProgramming are:
 assert(42 = x);
 magic;
 assert(43 = x);

or, with the 'satisfy' form:

 assert(42 = x);
 satisfy(43 = x);

The first assert is a regular assert, as it doesn't follow a 'magic' in its scope. The latter is a postcondition for 'magic'. A compiler could fulfill the postcondition by replacing 'magic' with 'x = 43'. (Syntax note: 'x = 43' here is an equality test, not an assignment.) For the 'satisfy' solution, the '43 = x' might be replaced by the assignment statement 'x = 43'.
... Or, of course, it might replace it with 'Kill all humans, then set x = someHideouslyInefficientCalculationThatReturns43()'. The need to constrain actions '(b)' becomes obvious when thinking about all possible ways the compiler could decide to get x set to 43.
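The point can be made concrete with a small sketch. Below is a hypothetical, minimal 'satisfy' in Python that simply searches an explicitly bounded candidate space for a value making the goal predicate true; the name, signature, and candidate range are illustrative assumptions, not part of any proposed language. Bounding the space is a crude stand-in for constraining the compiler's actions:

```python
# Toy sketch of 'satisfy': search a *bounded* candidate space for a value
# that makes the goal predicate true. Names here are illustrative only.

def satisfy(goal, candidates=range(-100, 101)):
    """Return a value making goal(value) true, or raise if none is found."""
    for value in candidates:
        if goal(value):          # the goal is an equality *test*, not an assignment
            return value
    raise ValueError("no candidate satisfies the goal")

x = satisfy(lambda v: v == 43)   # one way to fulfill the postcondition '43 = x'
assert x == 43                   # the trailing assert now passes trivially
```

Because the candidate space is explicit, the 'Kill all humans' class of solutions simply isn't reachable - which is the cheapest possible way to constrain actions, though obviously not a general one.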
In addition to the listed primitives, there are some requirements if one wishes GoalBasedProgramming to be even moderately efficient with regards to expression, evaluation, and execution:
GoalBasedProgramming is also discussed in ThereAreExactlyThreeParadigms and CritiqueOfIntentionalProgramming.
Ever checked out ConstraintImperativeProgramming? Alma, Turtle, and Kaleidoscope are languages of that kind; you can get ideas from them. I don't like the way your code works, though -- the semantics of assert() become overloaded. Look at DesignByContract, which should be integrated with GoalBasedProgramming. Also, instead of using a magic() procedure and asserts, why not create a new statement: the satisfy() statement, which invokes a constraint solver on a boolean constraint passed to it. For example:

 satisfy (x = 4)   // this will assign 4 to x
I hadn't heard the phrase 'constraint imperative programming' before, but I'll certainly run some searches and check it out. I thank you for pointing me towards them. I invite you to create the 'ConstraintImperativeProgramming' page, along with pages for AlmaLanguage?, TurtleLanguage?, and KaleidoscopeLanguage?, if you already know much about these.
In any case, DesignByContract and GoalBasedProgramming are not conceptually identical and should not be merged or integrated. DesignByContract is an approach to verifying potential solutions by asserting preconditions and postconditions. GoalBasedProgramming aims to automate the creation of solutions by asserting the same things, freeing the programmer from concerns regarding 'strategy'. As for the issue of overloading 'assert': I certainly agree that there are better ways of going about it; something like 'satisfy(4 = x)' or various other postcondition forms would certainly do the job. Ultimately the same set of concerns applies, though: restricting strategy with rules and heuristics, controlling side-effects, figuring out how to specify what isn't allowed to be changed (the inverse FrameProblem), invariants that must be maintained across the operation, etc.
Thanks for the feedback. As far as DesignByContract is concerned, is there some negative aspect I am overlooking on integrating it with GoalBasedProgramming? I can only see positive benefits from integrating the two, and it would be a nearly seamless semantic fit! Anyways, I'll see if I can dig up a couple references to ConstraintImperativeProgramming, and some examples of code in the languages.
They aren't semantically identical, only related. DesignByContract approaches program-design in terms of verification of human-written strategies based on stated preconditions and postconditions. GoalBasedProgramming is the automation of strategy based on human-written goals. The difference is critical. Linking them is okay, but combining them is not. Besides, there's the whole 'this phrase really is trademarked' issue for "Design by Contract".
Ok, I understood the differences the first time I read this page. What do you mean by "linking" them? How is combining them not okay? How else are you going to accomplish GBP if not by modeling the domain that you solve constraints over when you invoke magic()? For imperative operations, HoareLogic (or something similar to it) is perfect for modeling imperative domain semantics and providing the constraint satisfier with the knowledge base it needs to utilize them. You shouldn't feel limited by the trademarking of "Design by Contract". DbC may be a trademarked term, but that doesn't trademark its ideas: it is grounded in HoareLogic, and they don't own the right to design things based on HoareLogic. I envision the execution of a GBP language to be similar to Prolog.
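As a sketch of the idea that Hoare triples can serve as the planner's knowledge base, here is a toy breadth-first planner over actions modeled as (precondition, name, postcondition) triples. The action set, the state representation, and the function names are all assumptions made for illustration:

```python
# Toy planner: actions are Hoare triples (preconditions, name, postconditions),
# states are frozensets of (proposition, value) pairs. Breadth-first search
# chains postconditions until the goal propositions hold. Illustrative only.
from collections import deque

ACTIONS = [
    (frozenset({("checked", True)}),  "uncheck", frozenset({("checked", False)})),
    (frozenset({("checked", False)}), "check",   frozenset({("checked", True)})),
]

def plan(state, goal):
    """Return a list of action names transforming state so that goal holds."""
    seen = {state}
    queue = deque([(state, [])])
    while queue:
        current, steps = queue.popleft()
        if goal <= current:                       # every goal proposition holds
            return steps
        for pre, name, post in ACTIONS:
            if pre <= current:                    # precondition satisfied
                changed = {p for p, _ in post}
                nxt = frozenset((p, v) for p, v in current if p not in changed) | post
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, steps + [name]))
    return None                                   # goal unreachable from state

print(plan(frozenset({("checked", True)}), frozenset({("checked", False)})))
# ['uncheck']
```

The same triples that a DbC checker would verify are what the planner searches over - which is exactly the claimed overlap between the two ideas.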
In general, WikiWordsAreConcepts; combining two different concepts into one page will only force people to be more ambiguous and less specific when they name the concept. And linking them refers to including the name 'GoalBasedProgramming' in the 'DesignByContract' page and vice versa.
Ok, I get it: we had a misunderstanding, probably my fault. When I say "combine" them, I don't mean on the wiki; I mean in an implementation of a language -- in practice. You don't even have to use the term DbC: just provide automatic compile-time checking of the Hoare clauses for logical consistency, and you have implemented the essence of DbC.
Ah, yes. That sort of combination is quite reasonable, especially since all the logic necessary for DbC-style verification would already (necessarily) exist as part of the GBP support.
The domain will, of course, possess some explicit or implicit modeling, potentially as part of a database for an inference engine that models a great many other domains. But the fact that we model domains is hardly the critical aspect of DesignByContract - it's common in explicit form to all approaches to ConstraintAndLogicProgramming, and in implicit form to everything else. As such, the question isn't pivotal or relevant to any argument for combining GoalBasedProgramming with DesignByContract.
I am not sure if this explains it, but when using DbC clauses one has the epiphany that the pre- and postconditions actually are part of the target domain model! For example:

 proc Uncheck(CheckBox c)
   require c.IsChecked?;
 begin
   c.IsChecked? := False;
 end;

can be rewritten as:

 proc Uncheck([checked] CheckBox c)
   c.IsChecked? := False;
 end

As we can see, the pre- and postconditions are critical to the correctness and rigor/thoroughness of the domain model implementation.
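For illustration, the Uncheck contract above could be enforced at runtime with a small DbC-style sketch in Python. The 'contract' decorator is a hypothetical helper written for this example, not a standard library facility:

```python
# DbC-style runtime checking, sketched: a decorator enforces the require
# (precondition) and ensure (postcondition) clauses around a method call.
import functools

def contract(require=None, ensure=None):
    def wrap(fn):
        @functools.wraps(fn)
        def inner(obj, *args, **kwargs):
            if require is not None and not require(obj):
                raise AssertionError("precondition violated")
            result = fn(obj, *args, **kwargs)
            if ensure is not None and not ensure(obj):
                raise AssertionError("postcondition violated")
            return result
        return inner
    return wrap

class CheckBox:
    def __init__(self):
        self.is_checked = True

    @contract(require=lambda c: c.is_checked,        # 'require c.IsChecked?'
              ensure=lambda c: not c.is_checked)     # the implied postcondition
    def uncheck(self):
        self.is_checked = False

c = CheckBox()
c.uncheck()       # contract holds: box was checked, is now unchecked
# c.uncheck()     # calling again would raise: precondition violated
```

The second (commented-out) call is precisely the case the '[checked]' annotation in the rewritten proc is meant to rule out.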
HoareLogic is not unique to DesignByContract. Even if it is good, or even 'perfect', it isn't a reason to combine DBC with GBP. However, it is also not 'perfect' for GBP. HoareLogic handles some aspects of the semantics for a constraint solver, but does not include (for GBP) the necessary support for rules and strategies on which programs are acceptable (e.g. there is no clean way to say: you aren't allowed to kill your opponent before making the next move), and neither is there support for heuristics for determining which programs are 'better' than others (e.g. no clean way to say: efficiency is good!, no way to deal with fuzzy risks and anticipated consequences, etc.). I.e. it is, at best, a partial solution.
Well, it doesn't seem like a hard thing to create a language that combines both HoareLogic and PredicateLogic. Then you _can_ say things like "player cannot kill opponent before making next move" -- you can state them as tautologies in predicate logic. You can also include time constraints in the goal to satisfy, if you want efficiency! For example:
 always [minimize now()]   // global constraint that says "always seek to minimize the current time for all points of program execution"

Such things don't fit into the HoareLogic. With an appropriate set of extensions, I am certain you could eventually reach a logic sufficient for describing program goals. Issues like "player cannot kill opponent..." are more difficult to handle due to the inverse FrameProblem introduced whenever one tries to constrain actions.
How don't they fit? You can always add pre- and post- conditions that basically imply something like "this action has taken time-order N-squared". Another way to do it is to say time(action A) > time(action B). But basically, optimization might simply involve exposing the efficiency semantics of actions, strategies, or whatnot to the constraint solver or something.
HoareLogic is based on preconditions and postconditions. It doesn't provide means for describing heuristics and optimizations (such as 'minimize') that would require comparative application to distinct solutions. It also doesn't provide support for spatial or global constraints (such as 'always'). Thus, such things don't fit into the HoareLogic... not even if temporal assertions are supported. In any case, there is nothing 'simple' about exposing heuristics and strategies to the constraint solver; I've explored the concept quite a bit in my own time, and I don't believe you're aware of just how deep that rabbit hole goes.
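The comparative nature of 'minimize' can be shown in a trivial sketch: pre- and postconditions verify each candidate solution in isolation, but the heuristic only has meaning when ranking whole solutions against one another. Every name and cost figure below is made up for illustration:

```python
# Three already-verified candidate plans with assumed cost estimates.
# 'minimize cost' is a judgment *across* solutions -- not something a single
# per-action precondition/postcondition pair can express on its own.

candidates = [
    (["brute_force_search"], 1_000_000),   # (plan, estimated cost)
    (["lookup_table"], 10),
    (["compute_directly"], 100),
]

def satisfies_goal(plan):
    # assume each candidate was already verified against its postconditions
    return True

valid = [(p, cost) for p, cost in candidates if satisfies_goal(p)]
best_plan, best_cost = min(valid, key=lambda pc: pc[1])
print(best_plan, best_cost)   # ['lookup_table'] 10
```

Note the two distinct phases: Hoare-style verification filters the candidates; the heuristic then compares the survivors. The second phase is the part the logic alone doesn't cover.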
At the moment I consider GBP well beyond state-of-the-art and requiring StrongAi as described in ThereAreExactlyThreeParadigms and CritiqueOfIntentionalProgramming. The question 'how are you going to accomplish it?' seems moot. But since it was a NamelessConcept found in those other two locations it deserves a dedicated WikiWord such that it can be described in both its capabilities and limitations (if it existed) and for describing the reasons it doesn't and won't exist prior to the emergence of StrongAi (which would largely absorb and obviate GoalBasedProgramming).
You state you envision GBP as similar to Prolog, and there would be some similarities... but also quite a few differences. Prolog only needs to locate values, and thus it can backtrack at will (locating a value has no inherent side-effects excepting time, space, and energy for computation). Backtracking isn't usually possible when dealing with real actions, and BrownianMotion gets you nowhere fast, so instead one must create plans, execute a partial plan, then (usually) determine whether things seem to be going as planned and whether to continue or re-plan. In GBP, the planning to accomplish goals, dealing with contingencies, and re-planning on-the-fly in response to observations (i.e. non-static GBP) would all have a significant impact on the execution process. These are things that simply have no counterpart in the Prolog environment.
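The plan/execute/observe/re-plan loop described above can be sketched in a few lines. 'make_plan' and 'apply_step' are hypothetical stand-ins for real planning and actuation machinery, and the toy domain (a counter that must reach 5, with one action that misfires) exists only to exercise the re-planning path:

```python
# Sketch of non-static GBP execution: plan, execute real (non-backtrackable)
# steps, compare observations against the plan, and re-plan on divergence.

def run_goal(state, goal, make_plan, apply_step, max_replans=10):
    """Drive state toward goal, re-planning whenever execution diverges."""
    for _ in range(max_replans):
        for step, expected in make_plan(state, goal):
            state = apply_step(state, step)   # a real action: no backtracking
            if state != expected:             # observation contradicts the plan...
                break                         # ...so re-plan from the observed state
        if goal(state):
            return state
    raise RuntimeError("goal not reached within re-plan budget")

# Toy domain: drive a counter from 0 to 5; the very first action misfires,
# forcing one round of re-planning before the goal is reached.
misfires = [True]
def apply_step(s, _):
    return s if misfires and misfires.pop() else s + 1

make_plan = lambda s, g: [("inc", s + i + 1) for i in range(5 - s)]
print(run_goal(0, lambda s: s == 5, make_plan, apply_step))   # 5
```

Contrast this with Prolog: a failed unification is silently retracted by backtracking, whereas here a failed step has already happened and the only recourse is a fresh plan from the observed state.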
I brought up prolog because it allows both imperative-style and logic-style coding, and essentially provides the semantics of preconditions (like in DbC). It serves as a proof-of-concept that the benefits of GBP and DbC can be reaped together, and I mentioned it all on the assumption that you thought DbC and GBP could not possibly be combined.
Ah, well it seems you've resolved that misunderstanding. I was talking about how the WikiWords oughtn't be combined.
Note: DeclarativeMetaprogramming comes very close to GoalBasedProgramming. They may even be identical for forms of DeclarativeMetaprogramming that allow for flexible runtime construction of procedural code formed on the basis of runtime observations and code-time policy. However, it would be difficult to justify 'static' DeclarativeMetaprogramming as readily qualifying.
Hmm, if you can make x equal to 43, and it will then be equal to it... why would you be asserting it?
A necessary part of GoalBasedProgramming is relaying the 'goal' by use of the language. In this case, 'assert' is performing double-duty, specifying a post-condition for 'magic'. If there is some confusion, though, the x = 43 expression is an equality test, not a statement of assignment. Another author suggests use of 'satisfy', which may be clearer.
i.e. I see kind of a chicken-and-egg problem. If we can make the program do anything we want, then the program would have to know everything about everything -- every algorithm in the world. Or would it have to offer only some form of limited, known magic? That still begs the question: are there some more real-world examples that would make more sense than the assert example?
Technically, the compiler of the language can always utilize the one-million-monkeys on one-million-typewriters approach to solving the problem - brute force search on all possible algorithms that can be described by strings of length N or less, followed by looking at all strings of length N+1 or less, and so on - then look at each proposed solution and use heuristics and proofs to determine whether it fulfills the specified goal. So 'knowledge' isn't a fundamentally necessary thing, at least not so long as you don't give a damn about compile-time.
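The million-monkeys search is easy to sketch (and its cost easy to appreciate). The toy below enumerates candidate 'programs' -- here just Python expressions over a small token alphabet -- shortest first, and returns the first one a checker accepts. Everything about it is illustrative, and the use of eval is safe only because this is a toy:

```python
# Brute-force program search: enumerate all token strings of length 1, then 2,
# and so on; 'execute' each candidate; keep the first that meets the goal.
from itertools import count, product

ALPHABET = "x0123456789+*"   # tokens our toy programs may use

def search(goal_holds):
    """Return the shortest expression whose value satisfies goal_holds."""
    for n in count(1):
        for chars in product(ALPHABET, repeat=n):
            prog = "".join(chars)
            try:
                value = eval(prog, {"x": 0})   # run the candidate (toy use only!)
            except Exception:
                continue                       # most token strings aren't programs
            if goal_holds(value):
                return prog                    # first hit is a shortest solution

print(search(lambda v: v == 43))   # "43"
```

Even in this 13-token alphabet the search space grows as 13^N, which is the polite version of "don't give a damn about compile-time" made quantitative.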
In a more realistic scenario, the language implementation will utilize a combination of known strategies, heuristics, programmer 'suggestions', memory of previous compiles (so it doesn't waste time searching for the same solution twice), and (of course) some limited amount of search. In those cases where it finds what seems to be a set of suitable solutions, these solutions can be further tweaked (e.g. in response to proof attempts, or via use of genetic programming techniques) in order to seek a more 'optimal' solution. In those cases where it fails to find what seems to be a suitable solution, the compiler will politely request the programmer provide some additional suggestions or strategies, or even give real-time advice on the search - at which point it could, using an associative memory, 'remember' this search and use it for future compiles and creation of strategies when it encounters similar cases in the future.
A better example than the 'assert' above can be found in the CritiqueOfIntentionalProgramming (which intends to make it clear that there is no 'real' magic going on here).
See Also: DeclarativeProgramming, DeclarativeMetaprogramming, FunctionalProgramming, ImperativeProgramming, PrologLanguage