Wiki Code Runner

A hypothetical link to a remote server here could interpret AntiComment code in wiki pages and produce output as a gif or jpeg. The server could be provided by an interested wiki member for experiment; the intention is not to modify the existing system. The link could be a simple textbox with a submit button, with usage

  wikiobject method p1 p2...
where p1, p2, ... are the method arguments

For example

  ObjectCircle draw 'red' 10 100 100

would cause the remote server to use http to fetch the existing wiki page for ObjectCircle, look for the // markers with code behind them, interpret the code, create a temporary svg page, convert it to a gif or jpeg, and return a link to the jpeg which the user could paste into ObjectCircle. Perl libraries for http fetching, parsing and svg->jpeg generation are freely available, so as an experiment it could easily be done on someone's Linux or BSD box with Apache (unfortunately I can't volunteer). Even with a month's worth of various users running tests, a few thousand jpegs (say 3x3 inch size) would not take up a lot of space; maybe limit it to one jpeg per wiki page, if space is an issue, to keep the result link "live". The Perl (remote) parser would control the power of the interpreted AntiComment code, so even if someone wrote
  ObjectFile? delete 'c:\*.*'
if the server does not implement it, nothing would happen.
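
To make this concrete, here is a minimal sketch of what that server-side handler might look like in Perl, assuming LWP::Simple for the http fetch and an ImageMagick convert binary for the svg->jpeg step. The page URL, the // convention and the whitelist are only illustrative, and how the submitted arguments get merged with the page's code is glossed over:

  #!/usr/bin/perl
  # Sketch only: handle one "wikiobject method p1 p2..." request and return a jpeg link.
  # Assumes LWP::Simple and an ImageMagick 'convert' binary; the page URL, the
  # // convention and the method whitelist are illustrative, not an existing system.
  use strict;
  use warnings;
  use LWP::Simple qw(get);

  my %allowed = map { $_ => 1 } qw(draw);            # only harmless methods exist server-side

  my ($object, $method, @args) = split ' ', $ARGV[0];
  die "method not implemented\n" unless $method and $allowed{$method};

  my $page = get("http://c2.com/cgi/wiki?$object") or die "could not fetch $object\n";
  my @code = $page =~ m{^//(.*)$}mg;                 # keep only the AntiComment (//) lines

  # Interpret the tiny language: only simple geometry statements are understood,
  # e.g. "circle 'red' 10 100 100"; anything else is silently ignored.
  my $svg = qq{<svg xmlns="http://www.w3.org/2000/svg" width="300" height="300">\n};
  for my $line (@code) {
      if ($line =~ /^\s*circle\s+'?(\w+)'?\s+(\d+)\s+(\d+)\s+(\d+)/) {
          $svg .= qq{<circle fill="$1" r="$2" cx="$3" cy="$4"/>\n};
      }
  }
  $svg .= "</svg>\n";

  open my $fh, '>', "/tmp/$object.svg" or die $!;
  print $fh $svg;
  close $fh;
  system('convert', "/tmp/$object.svg", "/tmp/$object.jpg");   # svg -> jpeg
  print "http://example.org/results/$object.jpg\n";            # link the user pastes into the page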

Now circles are pretty basic, but one could then define, say, UML objects in terms of these basic shapes. So, for example, if we want to draw() a UmlActor? object, it would define variables as ObjectCircle and ObjectLine? pages with appropriate values. A call to the server with

  UmlActor? draw 100 100

Would then return a jpeg with an actor at 100,100. The submit process would traverse the dependencies of each object, referring to the sub-objects (use http to get the UmlActor? page code, then the ObjectCircle page code, etc.) to get and marshall all the code before interpreting it and generating the svg and then the jpeg. Pretty soon MitochondriaObjects? could be visualized, perhaps even with animated gifs. Each object would have very simple code, but the effect would be like thousands of ants creating a complex colony by simple actions (EmergentBehavior).
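A rough sketch of that dependency traversal in the same spirit, under the assumption that a page's // code names its sub-objects as ordinary WikiWords, so the submit process can just fetch recursively and collect everything before interpreting. The URL and the WikiWord regex are again only illustrative:

  use strict;
  use warnings;
  use LWP::Simple qw(get);

  # Recursively collect the // code of a page and of every WikiWord it mentions,
  # depth-first, so UmlActor pulls in ObjectCircle, ObjectLine, and so on.
  sub marshall {
      my ($object, $seen) = @_;
      return '' if $seen->{$object}++;                     # fetch each page only once
      my $page = get("http://c2.com/cgi/wiki?$object") // return '';
      my @code = $page =~ m{^//(.*)$}mg;
      my $all  = '';
      for my $line (@code) {
          for my $sub ($line =~ /\b([A-Z]\w+[A-Z]\w*)\b/g) {   # crude WikiWord match
              $all .= marshall($sub, $seen);
          }
      }
      return $all . join('', map { "$_\n" } @code);        # sub-object code comes first
  }

  my $code = marshall('UmlActor', {});   # then interpret, emit the svg, convert to jpeg as above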

Note this is not the same as VisualizeTheWiki, which looks at the graph of relations between pages. It is rather an attempt to simulate each object where possible. Even abstract objects like parsers, say an ElizaProgram defined in wiki, could return the result as text in svg (which then becomes a jpeg); this medium just provides a canvas. It wouldn't be interactive, but in a sense wiki would become "executable" to a limited degree. The remote server would have very little logic, just enough to interpret simple statements and expressions and render geometric shapes. All the real logic would come from the wiki AntiComment code.

Example (wiki user types into the server-side textbox):

  ElizaProgram ask 'I am afraid of the dark'

Could return a jpeg with the text:
  what is it that makes you afraid of the dark?
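
Rendering a text answer is even simpler than the shapes; something like the following fragment could wrap the reply in svg before the same convert step. The reply itself would of course come from interpreting the ElizaProgram page's code, which is not shown here:

  # Wrap a text reply in svg; the same convert step turns it into the jpeg.
  sub text_to_svg {
      my ($reply) = @_;
      $reply =~ s/&/&amp;/g;  $reply =~ s/</&lt;/g;        # escape for xml
      return qq{<svg xmlns="http://www.w3.org/2000/svg" width="300" height="60">\n}
           . qq{  <text x="10" y="35" font-size="14">$reply</text>\n}
           . qq{</svg>\n};
  }

  my $svg = text_to_svg('what is it that makes you afraid of the dark?');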

Perhaps some kind of session key could be maintained to continue the conversation. Again, many such programs exist, but this one would be composed of definitions in the wiki pages themselves. How to create detailed object-oriented parsers is described in "Building Parsers with Java"; the striking thing is how relatively simple the code in each class is (though there are lots of classes). But in principle, if Tokenizer, Assembler, Sequence, etc. objects were defined as wiki pages with AntiComment code, ElizaProgram (and other parsers) could be composed of these. That is at the complex end of the spectrum, but circles, squares, etc. should be easy to start with. Just throwing it out there; I think it would be interesting.
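
The session key idea could be as simple as a small state file per conversation, keyed by a token that is handed back alongside each jpeg link; a sketch, with Storable standing in for whatever the server would actually use:

  # Sketch of the session idea: keep each conversation's state in a small file
  # keyed by a token, so a follow-up request can pass the same key back.
  use strict;
  use warnings;
  use Storable qw(store retrieve);

  sub with_session {
      my ($key) = @_;
      $key ||= sprintf '%08x', int rand 0xffffffff;        # no key given: new conversation
      my $file  = "/tmp/eliza-$key";
      my $state = -e $file ? retrieve($file) : { history => [] };
      # ... run the interpreted ElizaProgram code against $state here ...
      store($state, $file);
      return $key;                                         # echoed back with the jpeg link
  }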

