? What are good ways to organize, index, and navigate among evolving hypertext pages?
> Put search and indexing information on each hypertext page.
? What is the best search and indexing information to put on each wiki page?
- Wiki will automatically create forward links (and, through the search mechanism, backward links) whenever a term is written as a WikiWord, i.e. two or more capitalized words run together; a rough sketch of such a rule appears below.
> Put one or more standard tags on each wiki page. Make each tag a wiki link. If enough people use these standard tags, the tags will form an automatic indexing system.
? What tags should be used to automatically index wiki pages?
> Put a category on each wiki page that you would like to index.
PleasePleaseDontCategorizeEveryPageOnWiki
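For illustration only, a rough Python sketch of the kind of rule a wiki engine might use to recognize WikiWords and turn them into links. The regex and the URL forms are simplified assumptions, not c2's actual implementation:

  import re

  # Rough approximation of the WikiWord rule: two or more runs of an
  # initial capital followed by lowercase letters (e.g. WikiNavigationPattern).
  WIKI_WORD = re.compile(r'\b(?:[A-Z][a-z]+){2,}\b')

  def link_wiki_words(text, existing_pages):
      """Turn WikiWords into links; unknown ones get a page-creation prompt."""
      def replace(match):
          word = match.group(0)
          if word in existing_pages:
              return '<a href="wiki?%s">%s</a>' % (word, word)
          # Hypothetical edit URL; the real wiki's form differs.
          return '%s<a href="wiki?edit=%s">?</a>' % (word, word)
      return WIKI_WORD.sub(replace, text)

  # Example:
  # link_wiki_words("See WikiNavigationPattern.", {"WikiNavigationPattern"})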
I'd consider this an example of MeaningfulUrls?. Since each page has a title that can be fed directly in as a URL, finding things is fairly straightforward, even without a search engine or indexer.
URLs are always partially meaningful; c2.com, for example, carves out a little chunk of the universal concept space. But, alas, the URL exposes implementation rather than interface only.
URLs are interface, not implementation (protocol aside). Do we care, really, that it's a CGI? No. We care that we can find what we're looking for, and MeaningfulUrls? help with that.
I'd suggest that even more meaningful would be a URL like http://www.c2.com/Wiki/WikiNavigationPattern.
Then, we'd know it's the Wiki subsection of c2.com, and the WikiNavigationPattern subsection under that. "/cgi/wiki" exposes implementation.
Along with MeaningfulUrls?, which hide implementation, go PersistentURLs. They are bound mostly to concepts, not to implementations. Alas, hostnames get in the way there, since they bind to organizations rather than concepts. OhFnord?.
I see (one use of) wikis as a collaborative knowledge base. The biggest improvement I look for is better search: phrase, boolean, and proximity searches over the full text of the wiki pages.
If the wiki content is stored appropriately, this better search capability can be provided by an external search engine (ZyIndex? (Windows only?), AltaVistaPersonal? (defunct?), htDig, or ?). One example of appropriate storage is each page in a separate text file; a rough sketch of searching such files appears below.
PS: I am confused about the capabilities of Web search engines to index wiki pages (dynamic pages).
ANY HTTP search engine can index ANY wiki; whether it actually happens is just a matter of configuration, URL space, and page content (META tags). There are no inherent TECHNICAL limits.
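For illustration, a minimal Python sketch of phrase, boolean, and proximity search over pages stored one per plain-text file. The pages/ directory layout is an assumption, not how any particular wiki or search engine stores its data:

  import os, re

  PAGE_DIR = 'pages'   # assumed layout: one plain-text file per wiki page

  def load_pages():
      """Yield (page name, lowercased text) for every page file."""
      for name in os.listdir(PAGE_DIR):
          with open(os.path.join(PAGE_DIR, name), encoding='utf-8') as f:
              yield name, f.read().lower()

  def phrase_search(phrase):
      """Pages containing the exact phrase."""
      phrase = phrase.lower()
      return [name for name, text in load_pages() if phrase in text]

  def boolean_and_search(*terms):
      """Pages containing every one of the given terms."""
      terms = [t.lower() for t in terms]
      return [name for name, text in load_pages()
              if all(t in text for t in terms)]

  def proximity_search(term_a, term_b, window=10):
      """Pages where the two terms occur within `window` words of each other."""
      hits = []
      for name, text in load_pages():
          words = re.findall(r'\w+', text)
          pos_a = [i for i, w in enumerate(words) if w == term_a.lower()]
          pos_b = [i for i, w in enumerate(words) if w == term_b.lower()]
          if any(abs(a - b) <= window for a in pos_a for b in pos_b):
              hits.append(name)
      return hits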
I was initially going to add this on PleasePleaseDontCategorizeEveryPageOnWiki -- but it seems perhaps more appropriate here. Unfortunately, finding "here" was somewhat difficult -- which is part of the whole point ;-)
I couldn't find any reference to the N-gram technique for full-text searching, so I thought I'd bring it up as a possible mechanism. It could help with finding "matching" pages, since it measures the "similarity" between texts and isn't so dependent on someone's (somewhat arbitrary) definition of what "goes together".
It would be reasonably straightforward to implement incrementally as pages are saved, minimizing the amount of computation apart from the initial factoring process. It could then be used via a search page, or automated with a "See Similar" link in the same way backlinks are done from the page header (a rough sketch follows below).
-- KaiBouse?
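For illustration, a minimal sketch of the N-gram idea in Python, assuming character trigrams and a simple set-overlap (Jaccard) measure of similarity. The index is held in memory and refreshed incrementally whenever a page is saved; the names and storage here are hypothetical:

  # page name -> set of character n-grams
  ngram_index = {}

  def ngrams(text, n=3):
      """Set of character n-grams of the whitespace-normalized page text."""
      text = ' '.join(text.lower().split())
      return {text[i:i + n] for i in range(len(text) - n + 1)}

  def on_page_save(name, text):
      """Incremental step: recompute only the saved page's entry."""
      ngram_index[name] = ngrams(text)

  def similar_pages(name, limit=5):
      """Rank other pages by Jaccard overlap of their n-gram sets."""
      target = ngram_index.get(name, set())
      scores = []
      for other, grams in ngram_index.items():
          if other == name or not (target | grams):
              continue
          overlap = len(target & grams) / len(target | grams)
          scores.append((overlap, other))
      return [page for _, page in sorted(scores, reverse=True)[:limit]]

A "See Similar" link would then just call similar_pages with the current page's name.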
See also: WikiCategories