Static Html

static web pages

(moved from FilesystemBasedWiki)

Most wiki pages are simply read, like ordinary web pages. Each page is read an average of ___ times per edit (write/save).

It seems "obvious" to many people that caching wiki pages would make the wiki run faster (but perhaps this is PrematureOptimization).

Someone at http://usemod.com/cgi-bin/mb.pl?LinkPattern suggests "do most of the rendering when the page is saved", with only a minimal amount of work done by the web server CGI script when the page is read.

It is intellectually interesting to see how far we could push "minimal". The extreme limit, of course, is *everything* being pre-rendered. This has the advantage that no web server CGI script is needed for page reads. However, we keep that benefit even if we back off a bit: pre-render *some* things at page-save time, and post-render everything else on the client's computer with JavaScript or Java.
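
As a rough sketch of that split (the file name "pages.js" and the element id "wikiBody" are made up for illustration), the save-time step could leave raw WikiWords in the HTML and regenerate a tiny list of existing pages, while a client-side script turns the WikiWords into links when the page is viewed:

 // pages.js -- regenerated at page-save time; the name and format are assumptions.
 var existingPages = { "FrontPage": true, "RecentChanges": true };

 // Client-side post-rendering: turn WikiWords in the page body into links.
 // (Crude: operates on innerHTML; fine while the body holds only simple markup.)
 window.onload = function () {
   var body = document.getElementById("wikiBody");
   body.innerHTML = body.innerHTML.replace(/\b(?:[A-Z][a-z]+){2,}\b/g,
     function (word) {
       return existingPages[word]
         ? '<a href="' + word + '.html">' + word + '</a>'
         : word + '<a href="edit?page=' + word + '">?</a>';
     });
 };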

So we don't need any web server CGI script for page reads. That makes us wonder - do we need a special CGI script on the web server at all?

Surprisingly, the answer is "no". In theory, edits could be sent to some *other* computer, which would immediately upload (fully or partially) pre-rendered pages to the web server. On the web server itself, all files are static (but are rapidly being replaced by updated versions sent by the other computer). However, this is unlikely to be the SimplestThingThatCouldPossiblyWork (compared to putting this "extra" software directly on the web server). While this is still theoretical, TiddlyWiki comes very close.
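
As a sketch of that arrangement (the script name, paths, and host are all assumptions), the "other" computer could run a small Node.js script after each edit, writing a static page and copying it to the web server:

 // render-and-push.js -- hypothetical script on the "other" computer.
 // Usage: node render-and-push.js SomePage
 var fs = require("fs");
 var execFileSync = require("child_process").execFileSync;

 var page = process.argv[2];
 var wikiText = fs.readFileSync("pages/" + page + ".txt", "utf8");

 // Crude save-time rendering: paragraphs only, no HTML escaping shown;
 // link handling is left to the client-side step sketched above.
 var html = "<html><body><div id=\"wikiBody\">\n" +
   wikiText.split(/\n\n+/).map(function (p) { return "<p>" + p + "</p>"; }).join("\n") +
   "\n</div></body></html>\n";
 fs.writeFileSync("out/" + page + ".html", html);

 // The web server never runs any of this; it only receives static files.
 execFileSync("scp", ["out/" + page + ".html", "user@webserver:/var/www/wiki/"]);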


Yes, you're absolutely right that pages pre-rendered at save time can't know about pages created later. But if that's the worst problem, we could kludge around it. (Perhaps by pre-rendering *both* a (broken) link to the non-existent page *and*, if the page does not yet exist at render time, a question mark ... then if/when a page with that name comes into existence, the link starts working.)
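
As a sketch of that kludge (the class names and edit URL are made up), the save-time renderer could always emit both forms and leave it to a separately loaded stylesheet to hide whichever one does not apply, so the pre-rendered HTML never needs to be touched when the target page appears:

 // Save-time rendering of one wiki link: emit both the real link and the
 // "create this page" question mark, so the static HTML need not be
 // regenerated when the target page comes into existence later.
 function renderLink(pageName) {
   return '<span class="link-' + pageName + '">' +
     '<a class="page-link" href="' + pageName + '.html">' + pageName + '</a>' +
     '<a class="create-link" href="edit?page=' + pageName + '">?</a>' +
     '</span>';
 }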

No, this is not right. You CAN link to newly created pages. I am tempted to write "exercise left to the reader", as this took a bit of puzzling out. :)

The trick is to keep a server-side table (database or flat file) with a list of links, and have each static HTML page load an associated CSS and/or JavaScript file that updates the state of defined/undefined links. When you save a new wiki page, you consult the table to see which existing pages reference the new page. For each existing page of interest, you regenerate the associated file (not the HTML). The associated files can be very small, so the amount of data generated on the server by a page update, and the amount of updated data loaded by clients, can be small.

Why regenerate HTML when the text of a page has not changed? Only link-related metadata has changed, and the metadata can be stored separately.
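
A minimal sketch of that save-time step, again in Node.js with made-up file names: the link table is a flat file mapping each page to the pages it links to, and the regenerated per-page CSS simply flips between the two link forms shown above.

 // update-link-css.js -- hypothetical; run after a page is saved or created.
 var fs = require("fs");
 var savedPage = process.argv[2];

 // Flat-file link table: each line is "SourcePage TargetPage".
 var rows = fs.readFileSync("links.txt", "utf8").trim().split("\n")
   .map(function (row) { return row.split(" "); });

 // Which existing pages reference the page that was just saved?
 var referrers = rows
   .filter(function (r) { return r[1] === savedPage; })
   .map(function (r) { return r[0]; });

 // Regenerate only the small associated CSS file for each referring page;
 // the referring pages' HTML files themselves are left untouched.
 referrers.forEach(function (page) {
   var css = rows
     .filter(function (r) { return r[0] === page && fs.existsSync("out/" + r[1] + ".html"); })
     .map(function (r) { return ".link-" + r[1] + " .create-link { display: none; }\n"; })
     .join("");
   fs.writeFileSync("out/" + page + ".links.css", css);
 });

Each static page would then pull in its own small file with a stylesheet link pointing at PageName.links.css (again, a made-up convention).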


For ReallyValuablePages that one wants to save locally without printing them, an option that preserves their appearance is to convert them into local PDFs. This not only preserves the appearance (the result of any JavaScript, ASP, CSS, and the like), but also preserves links. -- DonaldNoyes.20110319


Many people have access to a web server that allows them to upload ".html" HTML files (and also ".js" JavaScript and ".class" Java files), but far fewer people have access to a web server that allows them to install programs that actually run on the web server.

True, though the ability to run PHP, and sometimes Python or Ruby, server-side is increasingly common.

