Separation And Grouping Are Archaic Concepts Discussion

Continuation of SeparationAndGroupingAreArchaicConcepts

I believe your argument that you can meaningfully "bring together" snippets of code for editing, debugging, or inspection is predicated on the generally invalid assumption that code chunks have semantics (a formal meaning and application) independent of their context. That this assumption is invalid also limits the degree to which you can even classify code in terms of entity/task/compSpace.

In fact, your entire argument for 'SeparationAndGroupingAreArchaicConcepts' seems dependent on such invalid assumptions. If context is relevant to the interpretation or processing of messages, code, etc. then separation and grouping based on identifying distinct interpretation and processing contexts is fundamental and cannot (usefully) be dismissed as an 'archaic concept'.

--AnonymousDonor

I'm not sure what you mean by "context". If you mean that classification requires programmer or analyst intervention, whereas including something in a given file or routine is "automatic", then I partially agree. That something must be in a given file is generally a constraint forced on one by the compiler. Even that may not be true, because most compilers/interpreters allow one to put everything in one big file called "file", with no context other than the program code itself; the result could be messy code with variables named A, B, and C and no functions, or poorly-named functions. (And even if size constraints require splitting, the multiple file groupings could be rather random and undocumented.)

Thus, file names, file divisions, and function names and groupings are all meta-data voluntarily provided by the programmer. Almost none of it is necessary for the computer to actually "run" the code; it all depends on "volunteerism" anyhow. My suggestion merely takes meta-categorization further by allowing any given code chunk to belong to a potentially infinite number of classification sets, something file-and-function approaches don't do.

As for technology that forces developers to classify stuff properly, I doubt such exists, at least not in a practical way. But I am exploring enabling tools, not spanking tools, at this point.

--top

Your hypothesis about what I meant by "context" seems to be pretty far off the mark. I'll try for a little clarification.

The subjects of the "context" I described are "messages" and "snippets of code". In general, the "context" for X refers to everything related to X that is not X. For messages, the context would include: who sends the message (where it came from), who receives the message (where it is going), and when the message was sent (relative to other messages in the past and future). For snippets of code, the context includes: with which explicit parameters that snippet of code is called, in which environment (implicit parameters: globals, environment variables, special variables, thread-local storage, etc.) it is called, who receives its return value, the relative order in which snippets of code are executed, and under which conditions that snippet of code will be reached for execution.

My concern with your suggestion is that context seems to be essential for interpreting messages, understanding snippets of code, etc. Indeed, I believe that context is essential to the point that you cannot readily classify most code in terms of entity/task/compSpace, much less usefully "bring them together" for debugging or avoid the need for SeparationAndGrouping? for snippets of code.

I'm not certain how "files" got involved. Many languages do derive semantics from filenames, file sections, order of code in a file, etc., but my understanding is that you're ignoring those for now. I suppose we could consider 'context' in a more fractal sense by looking down at them from a higher viewpoint (e.g. whole 'script' files are effectively 'snippets of code' to be executed in a console environment, and 'filenames' in context are processed by makefiles). Function names serve a great purpose in this larger context: they are critical for executables (e.g. a console typically executes 'main', thus 'main' is important in the application context), and in modular programming function names are exported to hook components together (function names provide hooks into a module). But, excepting where the language itself derives semantics from filenames, file divisions, or function names (a few such languages exist), these issues are somewhat superfluous with regard to 'separation and grouping' vs. 'context dependence'.

If your conclusion is that we can be rid of some SeparationAndGrouping? on the basis that it isn't providing any semantics, I'll agree. We could be rid of files, and function names are just pointers except where they are 'exported' into the object-code context for use by the extra-language environment. But that really isn't a strong enough conclusion to support the titular claim 'SeparationAndGroupingAreArchaicConcepts'. This page includes no provision at all for reducing or eliminating the practical requirement to group multiple snippets of code based on the parameters they receive, the conditions under which they are to be reached for execution, the environment in which they run, who gets the return value, the relative order in which they must run, interdependencies, etc. etc. etc.

And these issues will, in general, defeat or make useless your suggestion to classify 'chunks of code' by entity, computation space, and task.

-- AnonymousDonor

Keep in mind that I am focusing mostly on code maintenance and not on execution of code for now, but will revisit this later. Let's assume for now that we are using a compiler and all the code is automatically assembled into compiler-friendly files for the purpose of generating the final executable when a "build" is run.

In such a case, the existence of functions and files may still be tracked. We don't have to do away with the concept of functions and files to achieve better index-ability and tracking of code parts. It is not an either/or decision. How this works with code editors etc. may have to change from what people are used to, but let's save the topic of "new age" code editors for another day. Consider the following schema, based on the prior examples:

 codeChunks
 ----------
 snippetID 
 sourceText
 functionRef 
 sequence  // ordering within function (double-prec.)

How do you decide how much sourceText to group into each codeChunk?

 functions
 ------------
 functionID
 moduleRef   // file-reference
 functionName
 parameterDeclaration
 etc...
 // p.k. = (moduleRef, functionName)

 modules    // maps modules to files
 -----------
 moduleID
 fileName
 filePath

 categ_code_assoc   // many-to-many
 --------------
 snippetRef
 categRef

How do you categorize a snippetID when the meaning of whole function/macro/statement/etc. can change based on the context in which it is applied?

 Etc...

This allows the snippets to be put into functions and files as needed for compiling, but also allows the other meta-data to be tracked. It is a super-set of the traditional file-based layout and the prior schemas. We can track snippets as unique file-plus-function combinations without sacrificing the ability to give them other attributes for our code tracker.
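To illustrate the kind of "bringing together" this schema enables, here is a minimal SQL sketch. It assumes a hypothetical categories table (categID, categName) behind categRef; that table and its column names are not fixed by the schema above.

 -- Gather all "database-related" snippets along with their file and
 -- function locations, regardless of how the compiler-facing files are cut.
 SELECT m.fileName, f.functionName, c.sourceText
 FROM codeChunks c
 JOIN categ_code_assoc a ON a.snippetRef = c.snippetID
 JOIN categories g       ON g.categID    = a.categRef    -- hypothetical table
 JOIN functions f        ON f.functionID = c.functionRef
 JOIN modules m          ON m.moduleID   = f.moduleRef
 WHERE g.categName = 'database-related'
 ORDER BY m.fileName, f.functionName, c.sequence;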

(One may want to stop using sub-folders if the meta-base can provide similar info, and instead put all the code files in one folder.)

That being said, if/when such techniques become popular, we may want to change the way we organize code and move away from file-centric thinking. For example, EventDrivenProgramming tools and techniques tend to lean in this direction because the GUI engine, rather than explicitly coded sequences, is often the main manager of execution order. One thinks about code snippets in terms of event handlers associated with specific widgets, not functions in files. Although not all GUI kits use this approach, it was made popular by VisualBasic. But the problem with VB was that one could not search and analyze the code using query-friendly techniques. It effectively had its own private meta-base of code snippets, with proprietary tendencies. (It could save code as files, but not very usable code.)

--top

I still feel you are assuming that this approach will actually work. I've explained why I believe it will not work without even beginning to touch on inverted dependencies and aspect oriented programming. Moving away from filesystems is an agreeable possibility IMO, but is hardly new (e.g. OCaml, Java, etc. are respectively based on module objects and classpaths instead of files). However, I don't believe you are, in any real sense, succeeding at your goal... you're just making really small files called 'snippets', making some attempt to classify each snippet, and the classifications will rarely be accurate or meaningful because how one classifies a snippet like 'x++' is highly context dependent (potentially including the context of the function call, as opposed to just the context of the chunk within the function).

-- AnonymousDonor (AD)

Every classification is "context dependent". That doesn't tell us anything new. A criminal may classify a kitchen knife as "a murder weapon", but that doesn't stop the store from classifying it as a "kitchen utensil" and customer software using this classification to help customers find the product while browsing. The classification of code chunks is primarily for humans, not computers. (Perhaps it can also be used for code validation.) --top

You imagine that the problem can be "solved" by using a 'couple extra' classifications. I suspect you are treating 'context dependence' far too lightly. Go ahead and tell me what classifications 'x++' should bear. --AD

Why are you asking me? Without a specific application or shop-specific classification guide, I couldn't do so. The classifications are invented by developers/analysts. Suggestions are given above, such as GUI-related, database-related, security-related, etc. Perhaps some of this can be automated, such as classifying any braced code block, "{....}", that calls the "runQuery" function/method as "database-related". However, automation is not necessary to achieve the basic goal. --top
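As a concrete sketch of that sort of automation, the following SQL assumes the same hypothetical categories table and a crude textual match; a real tool would parse the code rather than pattern-match on it.

 -- Auto-tag any snippet that calls runQuery as "database-related",
 -- skipping snippets already carrying that classification.
 INSERT INTO categ_code_assoc (snippetRef, categRef)
 SELECT c.snippetID, g.categID
 FROM codeChunks c
 CROSS JOIN categories g                  -- hypothetical table
 WHERE g.categName = 'database-related'
   AND c.sourceText LIKE '%runQuery(%'
   AND NOT EXISTS (SELECT 1 FROM categ_code_assoc a
                   WHERE a.snippetRef = c.snippetID
                     AND a.categRef   = g.categID);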

I'm going to contradict you on that. I believe that, without automation, your idea is nearly guaranteed to fail in practice. Relevantly, these classifications need to be source derivatives. But automating won't be too difficult. If annotating common existing languages, I'd use HotComments to do the job (similar to how HotComments are used to document code).

You have not identified any general show-stoppers other than vague claims.


To help illustrate the problem, consider the extreme position: you decide that every code chunk can be broken apart into a database for later automatic grouping and separation. So, now your database has some 2000 'snippets of code' each consisting of one assembly statement (sometimes with slightly different parameters). All you need to do is include all the context for each of these 2000 snippets of code so that each of them will be executed under the correct conditions, in the right order, returning a value to the correct destination, etc. with all the various 'context' properties being correct.

Unless you can find a practical way to do this without resorting to 'separation and grouping' (effectively gluing together code chunks larger than one assembly statement) based on contexts and relative ordering, I believe you cannot claim SeparationAndGroupingAreArchaicConcepts. SeparationAndGrouping? as concepts remain applicable, useful, practical, modern, and in most senses the very opposite of 'archaic'.

And that's even before considering hardware barriers on communications.

I am not clear on this. What is "correct destination", for example?

When 'returning a value', that value must go to a particular register or memory address. If it does not, the code will run with errors. Thus, certain destinations can be called 'correct' while all others cannot.

I don't know what kind of problem you are envisioning still. Does anybody else want to volunteer to restate from a different angle if they think they know what is being described?

I'm taking the concept you're advocating (SeparationAndGroupingAreArchaicConcepts) to an extreme and asking if it is still valid. If not, the concept is not valid at all - it fails by an inductive principle since you won't be able to 'classify' any 'chunks of code' without first 'grouping' them. The extreme described above is for assembler code. Another extreme, for SKI calculus, is to refuse to group SKI statements, so you only have exactly three code snippets: S, K, I. That's it. All possible unique and independently executable 'chunks of code'. How does this fit into your whole system? How will you go about 'classifying' these three statements?

The code that an existing COTS compiler gets does not have to look any different. Unless I know what problem you envision, I cannot "fix" it. Yes, existing compilers/interpreters have certain requirements and the food we feed them needs to be in a certain format, but that does not outright stop our goal of managing *code* with more powerful classification and query tools. It merely puts some preconditions on it.

I imagine that we can usefully classify *some* chunks of code, so long as we first group the chunks of code (directly contradicting SeparationAndGroupingAreArchaicConcepts) into larger semantic units like functions or objects such that the code is reasonably specialized in its application. But my suspicion is that your approach won't succeed when the application of a code chunk is very context dependent. The idea breaks down when you're working with metaprogramming, macros, individual stages in event processing, queues, virtual dispatch, etc.. E.g. given int f(int& x) { return ++x; }, how does one go about classifying chunks of f? When people start writing code tools that are used to build code tools (aka systems programming), 'f' is what code tends to look like - simple operations, meaningless by themselves, wrapped in packages like blocks or functions for application in a larger context.

I understand that you don't do much systems programming. That's alright, but like all people your ability to 'envision' problems that may arise in areas outside your experience is extremely limited.

As described above, the actual classifications are domain- or shop-dependent. I don't know how systems programmers will want to classify stuff. Maybe they don't. If you don't need code classification to better manage code, then don't use it! (I did give classification examples for typical biz software.) Code management tools/techniques are NOT absolutely required to produce software. For that matter, neither is a compiler: write binary code directly into RAM.

I suggest we move your suggestions to a page other than 'SeparationAndGroupingAreArchaicConcepts'. It seems you are unable to defend that titular claim for more than just "files" under a very limited set of circumstances.

You need to be more clear on what is missing.

Your ideas have some merit for tracking sections of code in limited circumstances, but that merit has nothing to do with separation and grouping being or not being archaic concepts.

[The page's title is dire and obviously wrong, but I see some merit in an obvious derivation of Top's idea: It would be nice to -- for example -- be able to examine all the event handlers in one place, then examine all the database queries as a cohesive set, then view all the form code, or even look at all the "for" loops, or all invocations of function "x", and so on, in some clean and effective manner as part of an IDE's functionality.]

Can you use something akin to formal logic to prove it's "obviously wrong"? I am growing angry and am tempted to say something.

[Such a "proof" was provided at the top of this page.]

It wasn't clear up there either.

I clarified what was meant by 'context'. I'd bet money that you didn't even bother attempting to apply said clarification back into the original statement.


Re: "...contradicted by how he attempts to apply it (e.g. he suggests grouping code into smaller "chunks", he refuses to acknowledge that functions themselves are described by groups of chunks of code, etc.)."

Any classification can be viewed as a group and vice versa. I don't wish to get caught up in a definition battle over "grouping" because it likely won't go anywhere. The main issue is a complaint about the old-style belief that one must "group related concepts" in code by making dedicated modules for various aspects such as SQL, GUI code, etc. This would be unnecessary if we had a more powerful system that didn't force mutually-exclusive choices. If I added all the conditions and disclaimers to the title, it would be mega-long. Titles are merely descriptions and labels, not logical proofs in themselves. If you are bothered by the title and want to rework it, let's kick around some suggestions. --top

You've suggested nothing that gets away from grouping based on 'various aspects of code'. Nor have you addressed any of the valid reasons that people feel there is value in making separate modules for SQL, GUI code, etc. (such as the ability to make maintenance of certain aspects someone else's problem, or the ability to link and test such code independently).

And titles don't need to include all the conditions and disclaimers, but they also shouldn't make strong, bold statements if the author plans to make more than a couple clarifications on scope. I'd suggest pithy, alliterative titles like 'CrossCuttingCodeClassification?' for your ideas to 'fix' the problem.

How about CrossCuttingCodeConcernManagement??

Not too bad. Simple is also good, so something that is parallel to 'SeparationOfConcerns' would do well. 'ConnectionOfConcerns' is taken. A possibility is 'TrackingOfConcerns?' (in which you'd be suggesting a variety of relational and annotation-based mechanisms to solve the problem of identifying/debugging/editing/etc. with concerns that are scattered throughout code).

How about TrackingConcernsInCode??

I like it - it is easy to inject into a sentence and as a topic title makes sense for the suggestions you've offered. It certainly is better than 'SeparationAndGroupingAreArchaicConcepts'.


Downsized DBA Example

My approach does not preclude that.

Just to clarify, the comment to which you are responding has little to do with 'your approach'. It's a complaint against your opening statements, your assertion that SeparationAndGroupingAreArchaicConcepts. You have yet to convince me that your approach has much to do with separation and grouping.

Suppose we wanted SQL code to be managed only by the database team (for now). In the code editor, a code section would be classified as "DB-related". Any block with such a classification would then be editable only by the DB team, and not by the app developers, per policy settings. Perhaps an "inline" checkbox could be offered if we want a mere in-line block; otherwise, parameters are defined and an anonymous or auto-named function is generated (depending on the language).
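A minimal sketch of how such a policy check might look, assuming hypothetical team_permissions and categories tables and named parameters (:snippetID, :teamID); none of these names come from the page itself.

 -- May this team edit this snippet? Editable only if the team holds
 -- edit rights on every category the snippet carries.
 SELECT NOT EXISTS (
     SELECT 1
     FROM categ_code_assoc a
     WHERE a.snippetRef = :snippetID
       AND a.categRef NOT IN (SELECT p.categRef
                              FROM team_permissions p   -- hypothetical table
                              WHERE p.teamRef = :teamID
                                AND p.canEdit = 1)
 ) AS editable;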

The advantage of this over manual separation is that one *could* see them together if they want. For example, if the department is downsized, the same person may do both app coding and DBA. They'd no longer want to hop around so much. How you see it is controllable. What is together and what is apart is merely a view. One is not forced to go to a separate module to see all the SQL or GUI code. The "hard" separation is the "archaic" I am talking about. It's an outdated mode of thinking because it is no longer necessary to do things that way.

Methinks you generalize too much from experience mostly with limited procedural programming languages. I imagine a little thought-bubble above your head containing: "A function is just a block of code, you can 'merely a view' it as inlined!", but conspicuously missing is the accompanying thought-bubble: "Ah! But polymorphic dispatch, MultiMethods, TemplateMetaprogramming, etc. counter that idea - we can't just view function calls as inlined, not without knowing more context... potentially context only available at runtime."

Another advantage is that a block can be in multiple categories. A block that queries the user names and passwords from a user-info table may be under *both* the "DB" and the "security" category. Traditional (hard) separation does not allow that.
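A minimal sketch of that multi-category case, again assuming the hypothetical categories table:

 -- Find snippets classified under *both* "DB" and "security".
 SELECT c.snippetID, c.sourceText
 FROM codeChunks c
 WHERE EXISTS (SELECT 1
               FROM categ_code_assoc a
               JOIN categories g ON g.categID = a.categRef
               WHERE a.snippetRef = c.snippetID AND g.categName = 'DB')
   AND EXISTS (SELECT 1
               FROM categ_code_assoc a
               JOIN categories g ON g.categID = a.categRef
               WHERE a.snippetRef = c.snippetID AND g.categName = 'security');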

Traditional 'hard separation' would put those password-hashing functions in one module and the table management in another, and a third module would have the task of figuring out how to usefully interleave the two, but would effectively be under *both* the "DB" and the "security" category. I agree that the mechanism you're advocating would be useful for locating and debugging, say, all code related to security. But I don't think that has anything to do with (hard) separation and grouping being or not being 'archaic'.

How is it "under"? In our minds? Mutually-exclusive categorization via file modules *is* in my book archaic. (It's still probably the best KISS for smaller apps, though, the same way a pencil and pad is better than a DB or spreadsheet for short lists.) If we have 5 aspects, then we have a potential of 10 "link" modules for the combos (if my quick calculations are correct).

Counts of "potential" link module combos aren't particularly meaningful (potentially there are 2^5 combinations of five aspects per policy, but you aren't going to implement all of them... you only need to implement one). And if you're going to fall back on an "in my book" defense regardless of the arguments presented to counter yours, then I'll let you 'win' - I lack the rhetorical tools to defeat FoolishConsistency.

Reasons I favor hard SeparationOfConcerns among modules include independent code ownership, independent testing and development, and code reuse.

Perhaps I make more formal distinctions than you do on this subject. To a person who writes compilers, a compiler only performs code transformations (source code -> assembly code being a common one), so the compiler will never need to 'track' source code over time: there is no need for the compiler to keep a history or perform classifications beyond parsing unstructured data into some sort of structured representation.

Hell, even if source code were primarily represented in a relational database, for reasons like those listed above I'd want support for 'modules' - e.g. big FLIRT files consisting of the data to handle just a particular set of concerns, plus the language-supported ability to 'import' and 'combine' data from multiple modules into a larger project.

I suspect that TopMind is concerning himself only with one 'issue': TooBigToEdit - most projects will be too big to edit in just one file. If 'TooBigToEdit' is the only reason to break a module into two modules, then I see a lot of benefit from the ability to just keep it all together for KISS principles.

Where did I promote files?

Where did I say you promote files?

What am I allegedly promoting in your "too big" dig?

Anything that 'keeps it all together'. For you, I suspect that would be a relational database.

Together? Together can be relative. That's the point. You are still thinking physical.

Neither files nor databases are "physical", Top - either of them can be distributed across networks, persistence resources, and access protocols. I think you too often forget that "relative" does not imply "subjective". I agree that "togetherness" is a matter of degree, a "relative" connectivity that can be described by such things as CouplingAndCohesion, can be measured by such things as drawing points and dependency-edges in a graph then computing density. But togetherness is no less real for being relative, and its reality can be felt in terms of 'real' costs, tools, conflicts, contracts, and services. So I'd appreciate it if you stop with your HandWaving BrochureTalk BullShit.

They originally modeled physical things. Regardless, files are limited. Trees as large-scale organizational structures have problems, orthogonality being the primary one. Living with hierarchies because "we and our tools are used to it that way" is not good enough. Time to evolve.

I find it amusing that you resort to the same sort of arguments you're usually deriding. How is that different from: "Living with procedural programming because 'we and our tools are used to it that way' is not good enough. Time to evolve."? IIRC, you'd usually point the speaker towards MindOverhaulEconomics and ProgrammingIsInTheMind.

That said, I'm with you on avoiding hierarchies as an organization and classification tool. As classification and organization structures at the macro scale, I agree: "trees have problems", well described in LimitsOfHierarchies. But for that macro scale there are FileSystemAlternatives, many of them non-hierarchical. So the question is: can you provide any good argument why 'files', which might be better described as 'small-scale data structures', "have problems"?

The mutually-exclusive problem is the primary one. Trees have poor control over overlapping categories. There are already examples in LimitsOfHierarchies. Besides, if you agree, why are you asking me to justify it? Shouldn't we only debate things we disagree with?

People should debate or query each other when they aren't at an agreement (which is not necessarily the same as being at a disagreement). And I don't feel we are at an agreement, specifically, regarding the small-scale 'files'. (Note that I only indicated agreement about the macro-scale, the hierarchical 'file systems', not for 'files'.) Mutual exclusion has not been demonstrated to be a "problem" for files. The whole issue of "overlapping categories" doesn't seem to apply to individual 'files'.

The known alternatives to file systems are either navigational/network (pointer/graph-based) or relational-like (set- and predicate-based) structures/databases. Most code-based solutions, such as aspects built into the language, are generally navigational.

So we are at least agreeing that our code units (existing or new) would benefit from a way to have potentially multiple classifications to make it easier to track and manage code elements; and that classifications should be easy to add, change, and delete without limits imposed by our chosen structures?

I believe we agree we could benefit from TrackingConcernsInCode? with such features as multiple classifications per code unit, and classifications that are easy to add, change, and delete without limits imposed by our chosen structures.

We also agree that these features can be accomplished mostly by an IDE, independently of the source form (e.g. even modules will work).

I do not agree: that SeparationAndGroupingAreArchaicConcepts (especially with regard to code ownership, sharing, and security), that the organization (e.g. 'modules') of the actual code is merely a view, that 'files' are problematic, or that TrackingConcernsInCode? should be a first choice for managing CrossCuttingConcerns (my first choice is genuine KeyLanguageFeature support for CrossCuttingConcerns and inverted dependencies).

Such IDEs tend to reinvent a database of sorts, in a half-assed way. And they are currently still file-centric.


re: code reuse, independent testing & development

My suggestion does not stop these. (Also, the whole idea that things must be split into lots of little functions in order to test should perhaps be rethought. But even if you have lots of little functions/methods, classification of them can still be useful.) -- top

I have a feeling that the "my suggestion does not stop these" is about as meaningful as "you can program functionally in SnuspLanguage"... it only takes extra work. Modules are designed for these purposes, they make them easy. How easy do you believe independent testing, development, code reuse will be with your system? Plenty of examples for modular systems exist. Can you run through a few UserStories of integrating independently developed code, reusing libraries, and independently testing code units in your system? (E.g. consider integrating 3rd party support for encryption, and 3rd party support for decoding and displaying video files.) One question to ask when finished is: does your approach result in some equivalent to mutually exclusive modules? (because, if so, then your arguments against them fall apart.)

Like I said above, it does NOT necessarily remove existing file-based, class-based, and function-based modularity. You have not shown where it removes anything you love dearly. Although in the future I expect the use of the above for modularity would diminish in such an environment, relying on this viewpoint is not necessary to my base argument.

You know... I'm not going to be convinced by what you "said" anywhere. I can't believe you anymore. After all, when you make claims, you don't mean "everywhere, always". So perhaps it DOES necessarily remove existing file-based, class-based, or function-based modularity... and it just happens to be somewhere or somewhen that your claim doesn't apply (after all, your claims seem to apply only when and where they are true, which might not be what is implied by the statement... I wouldn't want to "make something up out of the blue").

So, I'm asking you to convince me. I suspect you'll run into a few problems regarding independent maintenance that you're ignoring with all your HandWaving 'claims' and BrochureTalk, but I'm sure I can't convince you of these problems except by letting you run into them.

--AD

Consider this:

 table: functions
 ------------
 funcID
 funcName
 classRef  // f.k. to "classes" table
 nameSpaceRef  // f.k. to "nameSpaces" table
 fn_contents   // program code
 etc...
 (constraint: funcName+classRef+nameSpaceRef must be unique)

 table: func_aspects
 -------------
 funcRef    // f.k. to "functions" table
 aspectRef  // f.k. to "aspects" table

 table: class_aspects
 -------------
 classRef   // f.k. to "classes" table
 aspectRef  // f.k. to "aspects" table

This represents a template for use with a "typical" current language. A code editor could tie into it such that one would never need to touch actual files. A "make" step would generate files and run the (file-centric) compiler/interpreter. A developer could be completely isolated from "files". They only see name-spaces, classes, functions, and aspects; and access code through a CrudScreen interface that has find-lists, QueryByExample, and so forth.
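A minimal sketch of the extraction query behind such a "make" step, assuming hypothetical classes and nameSpaces tables behind the foreign keys above; a small script would then write each namespace's rows out as one compiler-facing file.

 -- Pull function bodies in a stable order for file generation.
 SELECT ns.nameSpaceName, cl.className, f.funcName, f.fn_contents
 FROM functions f
 JOIN classes cl    ON cl.classID     = f.classRef       -- hypothetical table
 JOIN nameSpaces ns ON ns.nameSpaceID = f.nameSpaceRef   -- hypothetical table
 ORDER BY ns.nameSpaceName, cl.className, f.funcName;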

Thus, the "old" groupings are still there; they are just not file-based.

--top

I note you didn't provide any of the clarification I specifically requested (integrating independently developed code, reusing libraries, and independently testing code). Nice misdirection.

I look at your tables and see a FileSystemAlternatives (ugh, plural problem) in which files are uniquely identified by: funcName+classRef+nameSpaceRef, and for which each file is an executable script. I also see a language that doesn't readily support data, types or sharing services (e.g. global registries, etc.) between projects and within projects, and that likely makes invoking functions ridiculously verbose.

Where is this in that topic?

I look at your question and wonder: Does TopMind honestly believe that all alternative FileSystems must be listed in the C2 WikiWiki page entitled 'FileSystemAlternatives' in order to qualify as such?

And those 3 dimensions are primarily only to fit it to existing languages, not be the entire classification system.

That really doesn't matter. What matters is that, at some point, you've got a unique identifier (URI) for a block that carries the 'contents'. Even if you represented program code (sequences, expressions, function-calls, data) directly in the database, all you'd be doing is representing more formally structured data at that point - the ability to represent data structures in files is something that has been promoted as a feature of the KillerFileSystem, but it does mean giving up on the PowerOfPlainText.

Anything more complex/flexible than a file system with meta-data abilities is probably at least bordering on being a database. It then becomes a matter of WHICH KIND of database is used. Back to the 'ol navigational-versus-relational fights. You appear to be agreeing with me without knowing it. Juicing up a file system to add what I ask for produces a database. --top


PageAnchor: outside-code-2

I'm not sure what you are envisioning. File dates can be used to detect outside changes (for the code that is stored in tables instead of files) if we wanted that. One would generally not "register" the vendor's code in the system, or would at least mark it as read-only from our code-manager's perspective.

My preliminary configuration for your scenario would put the vendor's code in regular file folders (because you have not identified a need to manage it through our tracker, since it is used as-is), but our own company's code in the tracker, and thus in the code repository database instead of files.

The build sequencer would place or copy the generated files adjacent to the vendor's codebase as needed. For example, assume that in the code build that is to target the compiler, we target a folder called "build_B", and the vendor's code is copied to a sub-folder called "vendor_X" under the build_B folder.

 build_sequencer (table)
 ------------------
 sequenceID
 ordering
 sourceType   // function, module, namespace, aspect, filepath, etc.
 sourceName
 destinationFile
 includeFilter   // list of aspects to include (blank=all)
 excludeFilter   // list of aspects to exclude
 etc...

Example Contents:

 sourceType   sourceName    destination
 ----------   ----------    -----------
 module       internal_foo  [root]builds/build_b/[same].lang
 filepath     c:/outsider   [root]builds/build_b/vendor_x

Actually, it might be best to use the OS's command line or script language for file copies rather than reinvent it for our tool. But it is shown here as part of the builder "language" to simplify the example. In reality, it may look more like:

 sourceType   sourceName    destination
 ----------   ----------    -----------
 module       internal_foo  [root]builds/build_b/[same].lang
 execute      "c:/vendor.bat [root]/build_b/vendor_x"

It just runs a given OS command with a command-line parameter that is substituted by our tool. (The alignment is off in the example due to the length of the command line.)

--top


OOP as a solution?

If we were to go back to the heart of oop (I am this, and I have these qualities, and I can do these things) life would be so much easier...

Can't we just get back to basics?

This is getting off-topic, but I've seen very few coded demonstrations of OOP making the code clearly "better", except in narrow circumstances/niches. ArgumentsAgainstOop


Multi-Scoping

In most languages, variable scope is determined in a more-or-less hierarchical fashion. However, perhaps the scoping could also be set-ified in the language. A given code block could potentially have multiple scopes by listing which scope aspects apply to it:

  code_unit foo {
    scope: blerg, znog, foo;
    regularStuff(...)
    ...
  }

The priority for any overlaps would depend on which is listed first. It may remind some of FORTRAN "common" blocks; but hopefully it is more natural and flexible than that.

I'm somewhat curious how such a language would operate. It makes some sense for global variables to be 'set-ified', but such variables aren't hierarchically organized in most languages so I don't imagine you're speaking of them. That leaves the lexically and dynamically scoped variables from method calls, which tend to be instantiated once for each instance of a call then cease to exist when the call is no longer ongoing. How would this 'multi-scoping' apply to such variables? What does it mean for one procedure to have access to a variable that would otherwise be scoped within another procedure?

Or is this just an idea you're throwing at the wall to see if it sticks?

Think of each scope as kind of a global associative array. You can only access the array if you mention the array's name in the "scope" clause list. Except, you don't need array syntax to access the members. Conflicts could perhaps be settled by scope-clause mention order. This would not necessarily replace current scoping mechanisms, but rather complement them.
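One way to make the "global associative array" reading concrete, sketched relationally in keeping with the rest of this page (scope_vars and unit_scopes are invented here, not part of any proposal above):

 -- Resolve variable 'thing' for code unit 'foo': take the first match
 -- in the unit's declared scope order.
 SELECT v.value
 FROM scope_vars v
 JOIN unit_scopes u ON u.scopeName = v.scopeName
 WHERE u.unitName = 'foo'
   AND v.varName  = 'thing'
 ORDER BY u.listOrder     -- scope-clause mention order settles conflicts
 LIMIT 1;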

Scoping declaration could be done in such a way:

   var thing scope foo;
--top

I'm not certain what it means to think of a local scope with its limited lifespan as a global associative array.

A typical call-stack looks something like the following, with lower lines being higher in the stack:
  THREAD_INIT (OS data): vars 'pfnThreadProc' 'pUserArg'
  ThreadProc?: vars 'pUserArg', 'result'
  TaskA: vars 'A', 'B', 'C',
  Helper1: vars 'arg', 'H1', 'H2'
  Helper1: vars 'arg', 'H1', 'H2'
  Helper1: vars 'arg', 'H1', 'H2'
  TaskAFunctor: vars 'arg', 'D', 'E'
  TaskAFunctorHelper: vars: 'F'

In a running system, there could also be a number of such call-stacks. In a functional program, one might never return to some of them (continuation passing style). In some languages like Lisp there can be some 'special' variables that are normally available in all later scopes. But, before I could even think of applying this idea of yours in such situations, I can't figure out how it applies even for a simple procedural-language call-stack. How would you go about doing so?


Incidentally, WikiWiki has more or less the same issue: lots of content that is difficult to search and inspect. A partial solution was to create category topic tags. These do help, but their granularity is often too large. If we go with a ParagraphWiki and use similar tagging techniques except via a database, then it tends to resemble the kind of contraption I envision. --top
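A minimal sketch of the tagging tables such a ParagraphWiki might use, paralleling categ_code_assoc above (all names invented here):

 -- Hypothetical ParagraphWiki tagging schema.
 CREATE TABLE wikiParagraphs (
   paragraphID INTEGER PRIMARY KEY,
   pageRef     INTEGER NOT NULL,    -- f.k. to a pages table
   ordering    REAL    NOT NULL,    -- position within the page
   body        TEXT    NOT NULL
 );
 CREATE TABLE paragraphTags (       -- many-to-many, like categ_code_assoc
   paragraphRef INTEGER NOT NULL REFERENCES wikiParagraphs(paragraphID),
   tagName      TEXT    NOT NULL,
   PRIMARY KEY (paragraphRef, tagName)
 );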

Agreed, a finer-grained Wiki is an interesting idea in many ways. Not certain how practical the 'paragraph' granularity is. I'd like to see a GraphWiki, perhaps with support for SemanticWiki tasks.

Related wiki engine topics: ExtendingTheWikiParadigm, FlikiBase


See Also: SeparationAndGroupingAreFundamentalConcepts


CategoryScope DecemberZeroEight

