Linearizing Wiki

I have the case where people want to take a document home and read it. Hypertext systems don't make this easy.

I was thinking of taking a root link and, for every child link, introducing a header, then making one big HTML page that could be printed, or maybe a PDF document. I could check for cycles, create a table of contents, etc.

Does anyone have this sort of thing, or have better ideas?
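
For what it's worth, here is a minimal sketch of that idea in Python: walk depth-first from a root page, skip pages already visited (the cycle check), and emit one HTML page with a table of contents followed by every page as its own section. The base URL and link pattern are placeholders, not any particular wiki's real interface.

 # Sketch of the linearizing idea: crawl from a root page, follow child
 # links depth-first, skip already-visited pages (cycle check), and build
 # one big HTML page with a table of contents. BASE_URL and LINK_RE are
 # hypothetical; adjust them to the wiki being linearized.
 import re
 import urllib.request

 BASE_URL = "http://example.com/wiki/"        # hypothetical wiki base URL
 LINK_RE = re.compile(r'href="/wiki/(\w+)"')  # hypothetical child-link pattern

 def fetch(page):
     """Return the raw HTML of a wiki page (assumed URL scheme)."""
     with urllib.request.urlopen(BASE_URL + page) as resp:
         return resp.read().decode("utf-8", errors="replace")

 def linearize(root, max_pages=200):
     """Depth-first walk from root; each page becomes a section with a header."""
     visited = set()
     sections = []

     def visit(page, depth):
         if page in visited or len(visited) >= max_pages:
             return                      # cycle or size limit: stop here
         visited.add(page)
         html = fetch(page)
         level = min(depth + 1, 6)       # root gets h1, deeper pages smaller headers
         sections.append((page, level, html))
         for child in LINK_RE.findall(html):
             visit(child, depth + 1)

     visit(root, 0)

     # Table of contents first, then every section in visit order.
     toc = "\n".join(
         f'<li><a href="#{name}">{name}</a></li>' for name, _, _ in sections
     )
     body = "\n".join(
         f'<h{lvl} id="{name}">{name}</h{lvl}>\n{html}'
         for name, lvl, html in sections
     )
     return f"<html><body><ul>{toc}</ul>\n{body}</body></html>"

 if __name__ == "__main__":
     # One printable page, ready for print-to-PDF.
     print(linearize("LinearizingWiki"))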

Give them a laptop? HaHaOnlySerious.

Haven't you ever wanted to just get a big document and read it instead of jumping links all the time? It reminds me of how people complain that LotsOfShortMethods make OO code hard to understand.

This creates the same problem as online application documentation that substitutes for a printed manual. Major vendors publish this way and then charge you extra for a printed manual. Remember CorelGraphicsPackage3? I think that was the last time Corel actually published a book showing all the jillions of little clipart images. It is not easy to spin through all the images on a disk to find the one you need.

It would be good if somebody came up with a linearizing tool. Until then, one is left with capturing a page and all its links using a tool like Site Snagger, knowing that you'll pick up plenty of useless fluff as well.

TransClusion can be a very effective solution.


CategoryWiki
