Is Declarative Less Expressive?

(refactoring in progress from DatabaseNotMoreGlobalThanClasses)

Further, how does one isolate personal preferences? For example, my eyes can simply spot more patterns in tablized information than in the same info as code. I used to line up parameters in function calls, long before I ever touched a database, to help me spot patterns and typos. I was born that way. I have described many times how relational allows one to tune one's view of something so as to better study it. That is subjective, but it holds consistently for me: my eyes and brain simply work better with tables. In most OO apps I see, at least 50 percent of the code is attributes or can readily be converted into attributes. If this is true, then why not tablize those attributes so that my table-oriented head and eyes can better study and sift them?

[You seem to work under the assumption that you can understand the whole program. For many non-trivial systems, this isn't the case. OOP lets you hide away parts of the complexity so that you don't have to think about them. You can't "see" the patterns because there are none. If there are, you haven't properly factored your code. -- JonathanTang]

Because the tables would be full of instance data, which isn't especially interesting. Code shows the abstractions. Tables of data show the instances. Why should we look at the data? We're not clerks, we're programmers.

[What the man said... I don't want to look at data; data can't put itself on the screen, programs have to do that. Code is about behavior: how we manipulate that data, get it on screen, or enforce rules on it. I don't want to see the data, I want to see the code that does the work on the data. Having a Customer table with some data tells me nothing of what the program can do with customers. Databases are important for storage, because it is true that data outlives applications, but that doesn't mean data is more important than applications. Data and applications are partners and are equally important to the user. Data without a program to manipulate it is no better than paper. Programs are what the user sees, uses, curses, blames, loves, etc... programs are the user's connection to the computer, not data.]

{Most behavior can be converted to data, aka declarative.}

(Questions about this claim moved below.)

[Schemas show me nothing of how that fancy screen works,]

[It doesn't exist, you can't make an argument by saying to use something that doesn't exist. There's no declarative language that can take you from db to screen and do the ui and rules and everything else, certainly not SQL.]

HaskellLanguage?

{I never claimed that SQL or declarative techniques do *everything*. Why do people keep implying I did? It is a matter of shifting burden onto declarative techniques, not eliminating imperative code entirely. I never claimed entirety. -- top}

[Programmers are the man behind the curtain; we make those things work. Sometimes you make me think you're just a user who learned a little code. You think all that stuff happens by magic? No way, man, a programmer did it.]

So, it is mostly implementation detail. How the declarative framework is "executed" does not have to be one's concern. That is abstraction and DivideAndConquer. You guys keep talking about how abstract OO allegedly is, yet here you are wanting to know exactly how every bit moves.

[or how that data gets from one system to the other, or how that business rule was enforced,]

Many business rules can be enforced using declarative techniques.

One uses a combination of declarative techniques and imperative techniques in practice. OO just makes imperative too much of what should be declarative, IMO. A large part of OO interfaces are essentially DatabaseVerbs.

[or how the screen shows data that isn't in the database but is derived from it.]

Like a total? Throw in a few events and totals are a snap.
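
For instance, a total is a derived value that SQL expresses declaratively in one statement; a sketch, with hypothetical table and column names:

  -- order_lines is a hypothetical detail table with quantity and unit_price
  SELECT order_id, SUM(quantity * unit_price) AS order_total
  FROM order_lines
  GROUP BY order_id;

An event snippet merely re-runs the query when the underlying rows change; the "how" of totaling never appears in application code.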

[Events are code... programs..]

[Data can't show me a picture and let me drop it onto a grid to purchase that item, nor can it show me how it was done. Data can't show me a graph or chart to make it easier to understand, nope, more programs do that.]

Declarative graph interfaces are perfectly possible.

[Don't tell me about possible, tell me about is... pink elephants are possible, but that doesn't make em practical or make me likely to see one.]

See ChartingExample.

[Give a user a database and a SQL interface, and no-one would use computers.]

It is for the developer, not the user. You are putting words in my mouth.

[The data is meaningless without a program to allow the user to interact with it. The user and the data interact with each other, the program is what allows it, enables it, prevents it, the program is what makes it all work. All data's the same, all programs aren't. When I see a crud screen, I don't even pay attention to the labels anymore, I want to see the code that makes it work, the data's irrelevant, uninteresting.]

I have used multiple CrudScreen frameworks without knowing the details of how they work. (Although I would provide more traceability options and make the priority rules more declarative if I built it.) Is your wanting to know how the GUI framework works necessary for the job, or merely technical curiosity? Many developers don't know specifically how the compiler works, how the file system works, or how the OS works, yet successfully use them. Why should the GUI framework be any different? Isn't that what some call "Encapsulation"?

Practical declarative approaches minimize the influence of behavior rather than getting rid of it entirely. One factors out the stuff that is better as declarative. What is left over is usually relatively small tasks or events. There can be a wide continuum in how much is turned into declarative attributes and how much is left imperative (code). The biggest advantage of an event-driven framework is that one does not have to worry about "large scale" code structures for the most part, and thus can focus on relatively small snippets of behavior code without having to navigate the big code picture. There is no "center". Of course, for non-interactive applications (batch jobs), event-driven approaches are less effective and unnecessary. Traditional "top-down" procedural programming works better under such circumstances IMO. -- top


Is the Human Brain Mostly Declarative?

The human brain is mostly declarative in nature, or at least the models we use are. The "algorithm" to calculate signal "strength" and propagate signals is relatively simple. The "power" of the brain is not really in this algorithm, but in the "attributes" of the brain cells (the weighting level and links). Yes, one needs the algorithm (behavior) to set everything in motion, but the real bag of goodies is in the attributes. True, few would want to program this way, but it is certainly a powerful exposition of the significance of attributes.
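
A minimal relational sketch of such a "brain schema" (table and column names are hypothetical):

  CREATE TABLE node (
    node_id   INTEGER PRIMARY KEY,
    threshold REAL NOT NULL              -- firing threshold attribute
  );
  CREATE TABLE link (
    from_node INTEGER NOT NULL REFERENCES node(node_id),
    to_node   INTEGER NOT NULL REFERENCES node(node_id),
    weight    REAL NOT NULL,             -- the "bag of goodies" is here
    PRIMARY KEY (from_node, to_node)
  );

The propagation algorithm is a small, generic loop over these rows; all the domain "knowledge" sits in the weight and threshold attributes.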

Good event-driven frameworks are a lot like this. The event engine does not really care about the meaning of the attributes and small snippets of code; it just processes them based on user inputs, its dispatching rules, and related propagation of events. It is "dumb" to the meaning of it all. (See "BrainSchema?" on TheAdjunct)

-- top

Yes, but that's like saying that the program that is input to a TuringMachine is a list of "attributes", whereas the "true algorithm" is the definition of the TuringMachine itself - i.e. that's not true. TuringEquivalent "attributes" are not attributes, they are algorithm. -- dm

Well, DataAndCodeAreTheSameThing. However, one can change the "tilt" of their viewpoint of either one.

But does anyone really think that restricting our view of instructions to sets of attributes will help us write software? And if so, how?

"Restrict" implies it is an inferior format. I find it:

"Restrict" implies that we limit ourselves to one view instead of many. Let's consider the behavior defined by a widely available application such as WebLogic 8.1. Are you saying that viewing that behavior as attributes (the bytecode values, I assume) is more compact than viewing the Java source code? Are you really arguing that there's some benefit to viewing behavior as a table of integer values? It sounds like you're arguing for machine code over any higher level language.

Bytecode? I think you are approaching it too literally. I will see if I can find a specific API or interface to declaratize, for WebLogic is a big product. Perhaps you wish to suggest one?

I'm not talking about the external interface to a program, I'm talking about the program itself. No matter how you present the API, there has to be code behind it to do what you want.

For one, it talks about "role-based security". This can be handled via AccessControlList techniques. You can make whatever interface for it you want. One of the beauties of declarative approaches is that different languages can access the info. It does not matter as much which language implements the UI, for example.
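
A sketch of role-based security as declarative rows rather than code (names are hypothetical):

  CREATE TABLE access_control (
    role_name VARCHAR(30) NOT NULL,
    resource  VARCHAR(60) NOT NULL,
    can_read  CHAR(1) DEFAULT 'N' NOT NULL,
    can_write CHAR(1) DEFAULT 'N' NOT NULL,
    PRIMARY KEY (role_name, resource)
  );

  -- Any language's UI can ask the same question:
  SELECT can_write FROM access_control
  WHERE role_name = 'clerk' AND resource = 'invoice_screen';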

Yes, it's trivial to map ACLs to a database in WebLogic. But we're not talking about administering WebLogic. We're talking about creating WebLogic.

The work-flow system at http://edocs.bea.com/wlpi/wlpi121/tutorial/ch3.htm seems readily convertible into attributes. In fact, some of it lets you see the XML generated. (XML is not the ideal form of attributes IMO, but it is indeed one.)
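
A state-transition table is one way such a work-flow could be attribute-ized; a sketch with hypothetical names:

  CREATE TABLE workflow_step (
    from_state VARCHAR(30) NOT NULL,
    event_name VARCHAR(30) NOT NULL,
    to_state   VARCHAR(30) NOT NULL,
    action_ref VARCHAR(60),              -- optional small code snippet to run
    PRIMARY KEY (from_state, event_name)
  );

The engine looks up (current state, event) and follows the row; new flows are added as data, not as new control-flow code.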

[You're missing the point. Of course you can take an existing system and modify it to make it configurable by declaration. But you can't write new systems that way. Let me see you write a new system with new behavior without writing code for that behavior... you simply can't declare behavior that doesn't exist.]

Again again again, I never said it was all-or-nothing. As far as techniques for hooking up declarative info with code, look how HTML interfaces with JavaScript in an EventDrivenProgramming fashion, and EvalVsPolymorphism. (In the future I expect file systems will be replaced with relational databases, and the "integration" will be tighter and/or simpler.)

[EventDrivenProgramming in JavaScript is OO, and still requires new code; what was your point again?]

How does one tell it is OO? This is NOT inherently OO:

  <mytag foo="bar" onClick="myFunction(77)">
It may be implemented in OO, but OO is not the only way to implement it. The user of HTML+JavaScript (the app developer) does not know and cannot tell, and is thus not concerned. If you mean the DOM, then yes, that is OO, but that is not what is being addressed here. We are not talking specifically about UI frameworks here.

[Because events happen to objects. The event system comes from the DOM (Document Object Model), thus mytag is an object in the DOM and onClick is an event that it throws. Good code, however, isn't written that way, as it's not reliable. Good code uses mytag.addEventListener("click", aFunction). EventDrivenProgramming always uses objects... functions don't throw events, objects do; EventDrivenProgramming is OO by its very nature.]

You seem to be using a rather wide definition of OOP. Or perhaps the so-called "physical" definition. I doubt it is the consensus definition, but please work that out in DefinitionsForOo, not here. As far as your "good code uses...." comment, I would like to see more justification for that statement, if you don't mind. Personally, I usually find it easier to use a visual IDE to add event snippet code to a screen widget. But if one must deal with code, something like:

   <widget name="buttonX" .... onClick="clickButtonX()">
is fine by me. DOM and OO are not the only ways to get such, just the "in-style" approach. (Ideally, if one has 500+ widgets in a bunch of screens and several hundred events associated with them, a database starts to look like a better place to manage such than a bunch of files and/or RAM pointers IMO.)
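
A sketch of such a widget-event registry (names are hypothetical):

  CREATE TABLE widget_event (
    widget_name VARCHAR(50) NOT NULL,
    event_name  VARCHAR(30) NOT NULL,    -- 'onClick', 'onChange', etc.
    handler_ref VARCHAR(80) NOT NULL,    -- snippet the dispatcher invokes
    PRIMARY KEY (widget_name, event_name)
  );

  -- One query answers "which widgets call clickButtonX?":
  SELECT widget_name FROM widget_event WHERE handler_ref = 'clickButtonX';

A pile of source files or RAM pointers can't be sifted that way as easily.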

[Anyone who dispatches every event to a database needs to learn to program better, if you're still writing code in the Visual IDE properties, don't come preaching about how to program, because you still don't know.]

What if you need more than "event snippet code"? What if you need to write an entirely new program?

I am not quite sure what you are asking. If you need a new app, then simply make one. I see nothing stopping somebody from making one. Are you talking about creating an event-driven GUI framework itself from scratch?

If we put all of the code for our new app in one event "snippet", then making it isn't so simple.

Who ever proposed putting it in one big snippet? Most decent apps have lots of small snippets.

No-one proposed putting it in one big snippet. I'm trying to get you to explain how you would approach writing a program that doesn't consist of event snippet code attached to GUI triggers. I'm still trying to get you to explain how looking at the code as attributes will help me.

Our new app needs hundreds of thousands of lines of code. It needs structures and lists and queues and sorts and all that stuff they teach in first year CS courses.

Well, I think CS courses are archaic in that sense. Dedicated structures are not change-friendly. See DedicatedStructuresVersusRdbms.

Regardless of change patterns, the application needs lists of foos, queues of bars, bizarre sort routines that aren't commercially available, and other kinds of new code. I'm describing the kind of programming I do. It can't and won't be done entirely in databases or off the shelf applications, although it will interact with them.

Let's assume that the event-driven GUI framework already exists, but that one of these events has to trigger truly novel behavior. There's no off the shelf component we can plug in. There are no APIs to call. We have to create a non-trivial program.

I would have to investigate a specific example. Even if by chance our framework is limiting, there is usually a decent compromise if you ponder alternatives a while. It may not be cost effective to make something 100 percent open just to handle a handful of odd requirements. It may be letting the tail wag the dog. Besides, in the age of open source, there is always the source to tweak.

I don't think you understand. I'm not talking about modifying an existing application. I'm describing an entirely new application. No-one has ever written one of these before.

In the 1950s folks realized that sticking all the code in one "snippet" was hard to manage and wastefully repetitive.

Again, I never proposed One Big Snippet. I don't know where you got that impression.

Again, I'm trying to get you to talk about creating novel applications as opposed to associating event snippets with GUI triggers. I'm sorry if it sounded like I was saying you proposed one big snippet.

We'll need to break the code into manageable units that can be individually tested and reused where possible. One popular way to do that is to define separate procedures and structures. Another popular way to do that is to associate procedures and structures in classes. The way you seem to be advocating (viewing the instructions as attributes) hasn't been popular for 50 years or so. What benefit will your approach give us?

What specific 50's failure cases are you talking about? Lisp? You don't wanna piss off Lisp fans. They will make me look like a pussycat in comparison.

Machine code. When you talk about "table-izing" code and viewing code as attributes, that's all I can imagine. Give us an example of how you would view the program I'm describing as attributes.

Which one? Several were mentioned.

Either WebLogic 8.1 or the transaction processing system described above.

[Any of em... Top, don't you understand, no-one ever knows what the hell you're talking about. You have all these fucked up weird little ideas about programming but you're in your own little world and you just assume we understand what you're saying. Well we don't. No-one does, quit jabbering nonsense and show us something, show us what you're talking about.]

The bottom line is that without specific requirements and UseCases to compare, communication will probably be difficult. Maybe there are ways to communicate without pointing to code and requirements, but right now I am stumped. -- top

[All you ever show are database schemas and a tidbit of simple procedural code, and you make grandiose claims about how powerful your system is, but you've never shown anything to back up your claims. We all know how powerful OO is, that's why it dominates the industry, but no-one even knows what you're proposing because you don't ever show any programs. You think TableOrientedProgramming is so powerful? Then put up a sample application showing off that power, or shut the hell up and quit blowing smoke up everyone's ass like you know what you're talking about.]

Okay, how about we divide typical custom biz app design into 3 parts: the database schema, the GUI/UI framework, and the event-handling code snippets.

The first one I assume you are generally familiar with. The second one gets into issues of GUI/UI frameworks, and I don't think this is the place to battle over GUI framework paradigms, so let's mostly ignore it for now.

The event snippet approach I prefer is more or less like those found in the VB-like tools. I assume you are familiar with VB, right? You build the UI interface interactively with GUI tools. The design is generally declarative in nature, although it does not really matter because the developer does not see the "source" of the GUI itself, only the visual representation and "property lists". If you don't like visual GUI builders, it does not matter much here. You specify widgets either with the mouse or in code. Take your preference.

Then one adds event snippets to screen widgets, either by mouse or by some kind of coded association. In VB-like tools an app developer does not know or care how this works internally, and generally doesn't have to. It might be implemented with a GOF Listener pattern or gerbils on wheels, but they don't care as long as the external rules are spelled out. The association between events and widgets can be via object pointers, database keys, or XML tags (such as the on-click example shown around here somewhere). I would prefer a database if I had to dig into the nitty-gritty, so that I can query and view it any way I please, and you probably prefer object pointers for whatever indescribable reason. Either way it is an association, regardless of its machine or paradigm representation. It is just a link between A and B.

I see nothing in WebLogic so far that cannot be built this way. Maybe the UML drawing engine is an exception, because I have never built a UML diagrammer from scratch and thus have no experience there. But beyond that, WebLogic is just a regular ol' CrudScreens system that allows users to add, change, delete, list, and search tons of attributes, instances (records), and associations.

You're going to build a distributed application server out of CRUD screens? And what "UML diagrammer" are you talking about?

No one here has advocated abandoning RDBMSs. I'm asking how viewing the instructions as attributes will help us write something like WebLogic 8.1. Can you tell me?

[See, already we disagree. I have yet to see a typical custom business app that could be built like this. You've simply left no place for complex business rules and processes.]

What is an example of something that event-driven architectures could not handle? I realize you believe that they cannot easily handle a lot of stuff, but you are failing to communicate specifics on the failures. You are bothered by something in it, but I cannot read your mind. I need specs and/or code.

[Schemas are good at holding simple declarative rules on fields, but nothing too complex. I certainly wouldn't want a discounting process workflow for a product line declared with triggers and constraints. It should be more like... keeping simple but rearranging.]

[And that's at a minimum, I could easily add several more layers to any even slightly more complex application.]


Is OO really declarative in disguise?

Re: {Most behavior can be converted to data, aka declarative.}

[That's a bold claim, and I seriously doubt it's true. If that were the case, programmers wouldn't be needed 90% of the time. I have never seen a project where this was the case.]

I think Turing (or his colleagues?) pretty much proved they are the same thing in the end. The debate here is more about usefulness to humans and human grokkability than mere ability to achieve it.

Also note that roughly 40% to 80% of the class interfaces I have seen over the years are composed essentially of DatabaseVerbs or stuff that can readily be converted to declarative form. With all the suggestions that "OO is about behavior", most methods are essentially action-a-tized declarative elements. (Didn't I say this already? Deja vu.) OO's behavioral ability is apparently going to waste. Most of it is filling up, finding, linking, and emptying stuff. It is just bad InterfaceFactoring. Factor all that to the root object if you want to clean it up. But then you would have a..........DATABASE? Tada!

[Most elements are actionized declarative elements when that's all you know how to do, sure, but we do far more than that. Don't blame OO because your programs are too simple to need it.]

Without something more specific, we are back to a DomainPissingMatch. The question remaining is where, when, and how do declarative approaches fail; or at least fail to be expressive enough, hence we return to the title of this topic.

[You're always missing the point, and I think you do it on purpose. There's a lot more to programming than just verbs/functions. Verbs and functions are the basic building blocks, the smallest abstractions; you use them to build bigger abstractions - objects - and you use them to animate a model, DomainDrivenDesign. Tough problems are solved by having entities interact with each other in intelligent ways; this greatly simplifies the problem by hiding complexity. As a programmer you have several forms of abstraction available to you - functions/objects/interfaces/closures - and you should be using them all, for they are all necessary in just about every single program. If you only use functions, then you're missing out; you're crippling yourself for no reason. Functions don't handle or model state very well, objects do; objects don't model verbs as well as functions do. The best languages allow you to use both whenever you need them.]

[If I'm modelling a warehouse business, for example, then it makes sense to actually model it: to have a warehouse and products, and to move things into and out of inventory, just as the real items in the real world do. It's not just data; yes, the data is important, but it's making sure that data is valid and consistent that makes the model valuable. Some problems in the domain will be best modelled as functions that work on objects passed to them; others will be best modelled as objects that have capabilities or allow certain kinds of interactions. Your arguments always revolve around the idea of rejecting abstraction and having the database do everything, and that simply isn't an option in the real world. Abstraction is what enables us to make complex software while keeping it free of bugs. You either take advantage of all the tools available to you or you don't. If I can hide a complex piece of functionality under a declarative word, I will, if it makes sense to do so. If you can get away with being a one-trick pony, good for you, but don't criticize the rest of us for learning to use everything at our disposal.]

The biggest reason for "complexity" that I encounter is relationships AMONG things, not so much complex actions on individual, isolated things. A warehouse similarly involves lots of relationships. For example, asking how long an item has been in a given warehouse is about a relationship between the product and the warehouse itself. Most biz apps are about tracking and managing relationships between stuff, at least those that I see. OO has nothing built in to help with these. If anything, encapsulation is anti-relationship, or at least does not help (see CantEncapsulateLinks and PrimaryNoun).

Even physical modeling involves a lot of relationship tracking. In most video games, things just don't spontaneously do things (at least that is not the hard part). They have to interact with other nouns before something happens, and the result depends on the kinds and attributes of the things interacting, such as the "energy level" attribute of each item. Somewhere in such games there is probably something similar to a many-to-many table which defines the result of an encounter for each kind of thing.
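
A sketch of such an encounter table (names are hypothetical):

  CREATE TABLE encounter_result (
    actor_kind  VARCHAR(30) NOT NULL,
    target_kind VARCHAR(30) NOT NULL,
    outcome     VARCHAR(30) NOT NULL,    -- 'destroy', 'bounce', 'absorb', ...
    energy_cost INTEGER NOT NULL,
    PRIMARY KEY (actor_kind, target_kind)
  );

The game loop stays generic; tuning the game is largely a matter of editing rows.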

If you have a specific scenario where declarative approaches sour, I would like to hear about it. If we are to explore multiple paradigms, let's also identify their weaknesses with something more than anecdotal evidence. I don't claim that declarative approaches solve every single problem well, just the vast majority of those I encounter. If by chance it makes 90 percent of problems twice as easy and 10 percent twice as hard, then declarative still wins in the end. If there is a class of problems that trips up declarative, I have yet to see it, or at least I don't see it often enough to turn the whole boat around.

As far as OO being more "abstract", I would like to see your justification for that claim. I find relational more abstract because it factors out DatabaseVerbs into the database instead of replicating them for each and every entity, as the laws of encapsulation dictate for OO. Encapsulation results in each class having to reinvent the DatabaseVerbs wheel in order to be "self handling", and thus scores lower on OnceAndOnlyOnce, which tends to indicate lower abstraction.

[Languages have statements, more abstract than statements are procedures/functions, which are assembled from statements and give scope to a group of statements. More abstract than procedures/functions are objects, which are assembled from procedures/functions and give scope to a group of procedures/functions. Each one is a higher level abstraction built upon and with lower level abstractions. I said nothing about relational, I said objects are a higher level abstraction that procedures/functions, a statement of fact, as I've just demonstrated.]

Well, okay, I will agree that objects are indeed a higher level of abstraction than functions by themselves in general. But relational is higher yet than OOP IMO because it factors commonly-found state, attribute, and relationship management interfaces into a single shared interface, which OO cannot do (without turning into a database). And, OO is not orthogonal to relational for the most part; so given a choice, I will go with relational and move the purer behavioral stuff to functions. -- top

[Don't compare OO to a database. OO is a programming technique, a way to organize source code; databases are programs. Apples and oranges. You can compare relational to OO, but not databases, and when you do that... you need to get specific, so you're really comparing SQL, in various flavors, to OO. Let's at least get that straight.]

It is not just about SQL. For example, in ChartingExample the charting engine does not know or care how the chart config attributes got there. It only takes them and acts on them. An OO version would have methods to add series and link them with data sources, or even addDataPoint methods. There is no equivalent to an OO addSeries method in that interface. It does not exist as part of that interface. Think about it. OO interfaces spend a lot of methods and code on the basic DatabaseVerbs stuff of filling things up and associating them. That is usually where OO conflicts with databases and declarative thinking. To me, OO stinks largely because it spends so much code and interface room on stuff it is shitty at. You claim OO is all about behavior, but in practice it's mostly declarative stuff disguised as behavior (see above). Factor that crap out to a larger thing/paradigm/technique, and OO may then look more inviting to me.
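
In other words, the series definitions are rows rather than method calls; a sketch, with hypothetical names:

  CREATE TABLE chart_series (
    chart_id    INTEGER NOT NULL,
    series_name VARCHAR(40) NOT NULL,
    source_view VARCHAR(60) NOT NULL,    -- view/query supplying the points
    line_color  VARCHAR(20) DEFAULT 'black',
    PRIMARY KEY (chart_id, series_name)
  );

The charting engine just reads whatever rows are there; nothing ever "calls addSeries".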

[See, you still don't get it. OO isn't trying to handle things it's bad at. Most OO programs in the business realm happily use relational databases. A database will store the relationship between an order and a line item, but the OO program enforces process and complex rules on the establishment or destruction of that relationship, something databases suck at; OO is augmenting and extending the capabilities of the database. Collections in OO programs are often nothing more than pretty wrappers over a generated SQL statement, using the database to filter the set down to the proper result, exactly what databases are good at. Collections often offer a predicate-based query API native to the language that is easily translated into SQL, executed, and used to fill the collection. Rather than passing a collection of rows to a function, we reverse it and pass the function to the collection of objects, so while you work at the low abstraction level of "select", we get to work with higher-level abstractions such as "do, select, reject, detect, inject, first, rest, last", which can all be translated down to SQL's select, removing much of the burden from the programmer. I much prefer "detect" to "select top 1" and "reject" to "select with inverted conditions"; abstraction is a good thing. OO is quite happy to take advantage of what relational techniques offer, while keeping all the advantages OO offers. You might see AddLineItem?(anItem) in the interface, because that is the natural place to enforce rules about the addition of line items to an order, but it'll still be a foreign key in the database keeping track of the relationship. It'll still be SQL statements generating reports and doing complex joins and subqueries to get derived data, but once that data is moved into memory, it'll be wrapped in an object to simplify programming with it.]

One can also put function wrappers around frequently-used SQL statements if the SQL grows ugly. However, doing it for SQL that is called/used from a single spot is often a waste of code. (There is another topic about that somewhere.) SQL is a fairly high-level language. It is often hard to beat its expressiveness with imperative code. Even Capers Jones' allegedly pro-OO study showed this. Wrapping high-level concepts with different high-level concepts is not a road to effective abstraction because of the translation tax/overhead. Only wrap if there is a big difference in abstraction levels. (See WrappingWhatYouDontLike.)
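
As an illustration of that expressiveness, one join-plus-aggregate statement replaces a page of loops, maps, and accumulators (the schema is hypothetical):

  -- Assumes customers(customer_id, region) and
  -- orders(customer_id, amount, order_date)
  SELECT c.region, SUM(o.amount) AS region_total
  FROM customers c
  JOIN orders o ON o.customer_id = c.customer_id
  WHERE o.order_date >= DATE '2009-01-01'
  GROUP BY c.region
  ORDER BY region_total DESC;

Wrapping that in a function pays off only if it is reused or its parameters vary.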

[See, I don't want to pass around rsCustomer, I want to pass around aCustomer, because aCustomer can protect itself, while rsCustomer can't. Procedures working on aCustomer, can't violate its integrity, but procedures working on rsCustomer can. aCustomer is easier to work with than rsCustomer. When I add aCustomer to anOrder, anOrder can protect itself against invalid customers, but when I set rsCustomerID field in rsOrder, rsOrder can't protect itself, I could link it to the wrong kind of customer. Objects let us pass around the data with the rules attached to it, because fundamentally, there's no difference between any of the following.... Integer, DateTime, Decimal, Double, String, Customer, Order, LineItem?, Product, Money... they are all datatypes, and programming is about working with datatypes and doing work with them. Simple datatypes come built in, as they apply to all problems, but more complex data types are very specific to certain problems, so you must build them yourself, things like Customer, Money, and Order. You seem happy programming with simple datatypes, why can't you accept the more complex ones that are domain specific?]

We are drifting here from "expressiveness" (the topic) toward integrity enforcement. Perhaps we should move such discussions to DatabaseNotMoreGlobalThanClasses, GateKeeper, and ThereAreNoTypes (I don't believe in the usefulness of "complex" types.) The tight relationship between nouns and verbs does not exist strongly in the real world. A one-to-one relationship is phoney. I reject the "self-handling nouns" approach of OO as superior until I see it in action. Artificial, forced tight-coupling between nouns and verbs is a ticking maintenance bomb.

Keep in mind that putting the integrity checks at the database (referential integrity, update triggers, etc.) allows multiple languages and multiple classes to be under the same rules. If you truly want global enforcement that app developer can't flub up, the database is the place to put it. In the end, OO classes are only conventions, not sure-shot integrity problem prevention mechanisms.
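
A sketch of rules that every application language inherits automatically (table names are hypothetical):

  CREATE TABLE orders  (order_id   INTEGER PRIMARY KEY);
  CREATE TABLE product (product_id INTEGER PRIMARY KEY);
  CREATE TABLE order_line (
    order_id   INTEGER NOT NULL REFERENCES orders(order_id),
    product_id INTEGER NOT NULL REFERENCES product(product_id),
    quantity   INTEGER NOT NULL CHECK (quantity > 0)   -- declarative rule
  );

No class convention is needed; a Java batch job and a VB screen hit the same wall if they try to insert a bad row.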

Typing discussion moved to ThereAreNoTypesDiscussion.


From ConvertingImperativeToDeclarative:

A better question is why bother; an object is already a data structure, you don't gain anything by making it a struct.

Yes, there is. One can alter, customize, and query one's view of it, as described in CodeAvoidance. I don't know about you, but I often don't like the original form the code comes in. I like to customize my view. Maybe I am spoiled by my past use of NimbleDatabases, or just mentally deficient from not being breast-fed as an infant, in that I cannot alter the view in my head instead of on the screen. But doesn't it at least seem like a good idea to separate meaning from presentation? It is better meta-ing, if you will. Even MS is slowly moving in that direction with their CLR, making debates about whether C# is better than VB.Net mooter and mooter. (Keep in mind that I would rather have relational table structures than map structures {OO}, but will take the second if I cannot get the first.) -- top


Expressions

I will concede that "expressions", such as Boolean expressions and mathematical expressions, are often hard to deal with in a data-centric format (such as an AbstractSyntaxTree) unless they contain some kind of heavily-repeated pattern. Thus, expressions tend not to be items that I convert/factor into tables or data structures. (Although expressions can be stored in tables.) -- top
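
To see why, compare one readable infix line, (price * qty) + tax, with its tablized AbstractSyntaxTree form (a sketch; names are hypothetical):

  CREATE TABLE expr_node (
    node_id  INTEGER PRIMARY KEY,
    op       VARCHAR(10) NOT NULL,       -- '+', '*', 'var', 'const'
    leaf_val VARCHAR(20),                -- variable name or literal, if a leaf
    left_id  INTEGER REFERENCES expr_node(node_id),
    right_id INTEGER REFERENCES expr_node(node_id)
  );
  -- (price * qty) + tax becomes five rows and two self-joins to reassemble.

Unless many expressions share a heavy pattern, the table buys nothing over the infix text.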

Some of us call those expressions... programming, maybe you should look into it.

If I pursue this line of discussion, it has an estimated 80% chance of turning into a LaynesLaw battle over DataAndCodeAreTheSameThing. -- top


My attempt to answer the question in the title: Is declarative less expressive?

Yes. Because a single declarative statement may map to several imperative implementations, the declarative is less expressive. In other words, it can't express which implementation to use without becoming less declarative.

The extra expressiveness of an imperative program can be a disadvantage whenever automation can provide a pragmatic implementation from a declarative program, especially if that automation can take into account variables that are unknown until runtime. I.e., the same declarative program may end up being executed in different ways at different times, depending on the runtime situation.

-- RobertFisher
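
SQL illustrates the trade: the same declarative statement may run as an index seek one day and a full table scan the next, chosen by the optimizer from runtime statistics (the table is hypothetical):

  -- Assumes an orders table with a customer_id column.
  SELECT * FROM orders WHERE customer_id = 5;
  -- The statement cannot (and need not) say which access path to use;
  -- expressing that choice would mean dropping to a lower, more
  -- imperative level.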

