Generative Communication

A model of parallel computing in which processes that have been spawned dynamically turn into data upon completion, and data may be stored in tuples in one or more shared TupleSpaces. A process may add tuples to a TupleSpace, or copy or remove tuples by using a pattern tuple to perform an associative lookup. GenerativeCommunication is usually talked about as an alternative to MessagePassing.

See: AssociativeMemory, TupleSpace
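
To make the add/copy/remove operations above concrete, here is a rough Python sketch of a toy tuple space with Linda-style names ("out", "rd", and a removing "take", since "in" is a Python keyword). The class, the use of None as a wildcard, and the blocking behaviour are illustrative assumptions, not a description of any particular TupleSpace implementation.

  import threading

  class TupleSpace:
      def __init__(self):
          self._tuples = []
          self._cond = threading.Condition()

      def _match(self, pattern, tup):
          # None in a pattern field is a wildcard; other fields must be equal.
          return len(pattern) == len(tup) and all(
              p is None or p == t for p, t in zip(pattern, tup))

      def out(self, tup):
          # Add a tuple to the space.
          with self._cond:
              self._tuples.append(tup)
              self._cond.notify_all()

      def rd(self, pattern):
          # Copy (without removing) a matching tuple, blocking until one appears.
          with self._cond:
              while True:
                  for tup in self._tuples:
                      if self._match(pattern, tup):
                          return tup
                  self._cond.wait()

      def take(self, pattern):
          # Remove and return a matching tuple, blocking until one appears.
          with self._cond:
              while True:
                  for tup in self._tuples:
                      if self._match(pattern, tup):
                          self._tuples.remove(tup)
                          return tup
                  self._cond.wait()

A producer might call space.out(("web request", someUrl)) while a consumer blocks in space.rd(("web request", None)) until a matching tuple appears; the pattern tuple is what makes the lookup associative.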


Is this like BlackboardMetaphor? Yes, very similar.


"Processes that have been spawned dynamically": I interpret this to mean: "Processes that have been started by some other process."

I would think that includes every single process running on your computer, save the "init" process, which starts all other processes.

I'm not sure what it means for a process to "turn into data upon completion." Do you mean: An image of the process's instruction code is turned into data (like a CoreDump), and then that floats in the TupleSpace? Or do you mean: The process's return value is stored in the TupleSpace? Or: something else?

Then it seems to me that you mean:

While a process is running, it can participate in the TupleSpace. For example: It can add to the tuple space, it can query the tuple space, it can remove from the tuple space, and it can change the tuple space. (This part I believe I understand, and it makes sense to me.)

I don't understand why it's called "Generative Communication." I wish there were some explanation of that.

I have found a "Generative Programming" wiki: http://www.program-transformation.org/Transform/GenerativeProgramming


Generative Communication and GenerativeProgramming are two completely distinct things, as different as a "memory heap" and the heap data structure.

The easiest way to understand generative communication is in contrast to the more common "message passing" system.

In message passing, object A wants to communicate a piece of information to object B. So it packages that information in an agreed-upon form, and then tells object B, "please receive this message". This implies that A knows of the existence of B and its willingness to receive the message, and assumes that the lifetimes of A and B overlap. If object A doesn't specifically know about object B, some systems allow it to broadcast the piece of information to all other objects, but this still assumes that the lifetimes of the sender and recipient overlap.
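
As a tiny sketch of that coupling (the class names here are made up, not from any particular system), in a message-passing style A needs a live reference to B at the moment it sends:

  class B:
      def receive(self, message):
          print("B got:", message)

  class A:
      def __init__(self, recipient):
          # A must know that B exists and is willing to receive,
          # and B must still be alive when send() is called.
          self.recipient = recipient

      def send(self):
          self.recipient.receive({"kind": "greeting", "body": "hello"})

  A(B()).send()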

In generative communication, object A emits the piece of information into a shared space (which is essentially message passing to the shared space). Other objects can then observe this information, which is not removed from the shared space unless another object does so explicitly. The source and destination objects' lifetimes don't need to overlap: Object A can cease to exist before an object that cares about the information is created. It's called "generative" because the existence of a quantum of information isn't just a side-effect of a message-passing mechanism; instead, the information created by object A is an entity in its own right, with a well-defined lifetime.
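
The same exchange in a generative style might look like the following sketch, reusing the toy TupleSpace sketched near the top of this page; the point is that object A has already finished before anything comes looking for the information:

  space = TupleSpace()

  def object_a():
      # Emit the information into the shared space as a free-standing tuple.
      space.out(("greeting", "hello"))

  object_a()                          # A runs and is gone.

  # Some time later, an unrelated object looks for greetings by pattern.
  found = space.rd(("greeting", None))
  print("found:", found)              # -> found: ('greeting', 'hello')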

Generative communication is a mechanism by which the blackboard metaphor is implemented, but it's not limited to that use. Also, while generative communication was pioneered by the Linda project and its TupleSpace research, it's not necessarily tied to a TupleSpace implementation; any searchable data store will do.


Allow me to fill in based on my research, and let me know if I understand right:

The idea seems to be that you keep a big, gigantic graph that we call a "TupleSpace."

This "TupleSpace" represents pretty much everything that your program (or family of programs) knows. So, let's say you get a web request.

Suddenly, in your TupleSpace, the following might appear:

  (Node "Web Request") -- (Edge "With URL") -- > "http://c2.com/cgi/wiki?GenerativeCommunication"

(Node "Web Request") -- (Cookie) -- > "some cookie string"

(Node "Web Request") -- (Date) -- > (some date node)

(etc., etc., etc.,.)

Now, I believe the idea of GenerativeCommunication is that this "communication" (this injection of tuples into the TupleSpace) causes a sort of chain reaction. Because, somewhere, there's a trigger. There's a sort of trap that has been laid. The trap says: "Whenever there's a match for (Web Request)--(With URL)-->(foo), start up this program."

The program then makes a Response node in the TupleSpace, and attaches a bunch of data to it.

Perhaps the last thing the program does is say: "(Web Response)--(is complete)-->(true)," which then causes another program to start: the WebServer.

The server is raised, takes the web request and web response pair, sends off the response to the InterNet, and then removes both the request and the response data from the TupleSpace.

The act of raising programs may be the "generative" part here. As for the communication part: well, the entire TupleSpace is a gigantic communication of state. If you ever need some piece of information, you just look it up in the TupleSpace.

Am I understanding correctly? Is this the right idea?

-- LionKimbro
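
For what it's worth, here is a minimal sketch of the trigger-style chain reaction conjectured above, again reusing the toy TupleSpace from earlier on this page. The tuple shapes and the idea of parking one thread per "trap" are illustrative assumptions, not how Linda or any real TupleSpace system necessarily works:

  import threading

  space = TupleSpace()

  def request_trap():
      # The trap: block until any web-request tuple appears, do some work,
      # then emit response tuples back into the space.
      req = space.rd(("web request", None))          # None matches any URL
      space.out(("web response", req[1], "200 OK"))
      space.out(("web response complete", req[1]))

  def server_trap():
      # A second trap fires once a response is marked complete, "sends" it,
      # and removes both the request and the response from the space.
      done = space.take(("web response complete", None))
      resp = space.take(("web response", done[1], None))
      req = space.take(("web request", done[1]))
      print("sending to the network:", req, resp)

  threading.Thread(target=request_trap).start()
  threading.Thread(target=server_trap).start()

  # The arrival of a request tuple sets off the chain reaction.
  space.out(("web request", "http://c2.com/cgi/wiki?GenerativeCommunication"))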

Embedding pattern recognition for individual tuples and (especially) for groups of tuples and temporal tuple associations (e.g. 3 tuples with properties x, y, z added in any order within 3 minutes of one another), along with automated callbacks (or process spawning) that execute useful work when the conditions are met, would be a VERY considerable enhancement to the basic TupleSpace and GenerativeCommunication. So would transaction support or ACID properties: a TupleSpace is SharedMemory in all the relevant senses and doesn't save you from concurrency issues. You still need transactions (at least minimal ones) if you're going to have considerable concurrent read/add/delete operations upon the space, at least if you wish to avoid such things as two different programs reading one tuple, each deciding it wants to remove it and do work on it, and then both removing it.
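
To illustrate that last point with the toy TupleSpace sketched earlier on this page (the job tuples and worker functions are made up): a read-then-remove sequence lets two workers claim the same tuple, while an atomic, blocking removal hands each tuple to exactly one of them. Real transaction support would generalise this to multi-tuple operations.

  def do_work(job):
      print("working on", job)        # placeholder for real work

  # RACY: both workers can rd() the same tuple before either removes it;
  # the loser's take() then blocks, waiting for a tuple that is already gone.
  def racy_worker():
      job = space.rd(("job", None))
      space.take(job)                 # the claim happens too late to be safe
      do_work(job)

  # SAFE: the removal itself is the claim, and it happens atomically under
  # the space's lock, so exactly one worker gets each job tuple.
  def safe_worker():
      job = space.take(("job", None))
      do_work(job)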


CategoryCommunication

