Wiki Needs Trust Metrics

Someone suggested that Wiki needs a killfile, but killfiles don't work here. AdvoGato uses TrustMetrics for much the same purpose. How could that be made to work here?


[If this were done, there would be constant deletions and restorations, deletions and restorations, over and over again. For example, all the scurrilous insulting of LinusTorvalds, DennisRitchie, RichardStallman, AndyTanenbaum?, TedNelson would be deleted. Then it would be put back. Etc. No, the suggestions below are the correct way to go, I believe. But they are not the WikiWay...]

You are claiming that it is not true that the vast majority of the people are trustworthy and responsible. I believe you are wrong, and that the current brushfire will die out and the wiki will survive.

[Sorry, I misread you before. I am not claiming what you say I am at all. My claim is that by the above criteria, it would be correct to delete that material. I then am assuming that Kulisz will keep putting it back. This is not a current thing; much of that material has been there for a long time and people have just let it go. But I think there are some people now who will not let it go. So unless Kulisz stops putting it back, we could have the kind of activity I describe above going on for a long time, despite the fact that the vast majority of people here are trustworthy and responsible. The other problem is that amongst that irrational, vicious vitriol there are some legitimate criticisms. It is difficult to delete it, difficult to let it stay, and difficult to edit it without a total rewrite, which is thankless since Kulisz will then just restore the original stuff, or close enough to it.]


We'd have to abandon SoftSecurity. People would need to create accounts and login with them. But there'd be no requirement for account names to map to real names. Instead we'd have consistent pseudonyms so that over time you'd come to have expectations of people.

Every user would get a homepage based on their account name, kept in a different namespace from regular wiki pages (much like WikiPedia's /Talk). As soon as an account is created, your 'juice' or trust would be set to T, the threshold value. If your juice falls below that value (say because you upset enough people with high levels of juice), you'd lose the ability to edit any pages other than your homepage.

At the same time people would be able to sponsor new wikizens by certifying them. Negative behaviour by those you sponsor would of course reduce your own juice.

This way, we'd always know who had said what and no newbie could go on a rampage that caused massive amounts of harm over a prolonged period of time.
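
To make the mechanics concrete, here is a minimal sketch in Python. The value of T, the homepage rule, and the half-penalty passed on to sponsors are illustrative assumptions, not part of the proposal:

  THRESHOLD = 10  # T: below this level, editing is disabled

  class Wikizen:
      def __init__(self, name):
          self.name = name
          self.juice = THRESHOLD  # new accounts start exactly at T
          self.sponsor = None

      def can_edit(self, page):
          # Your homepage is always editable; other pages need juice >= T.
          return page == self.name or self.juice >= THRESHOLD

      def certify(self, newbie):
          # Sponsoring ties your reputation to the newcomer's behaviour.
          newbie.sponsor = self

  def penalize(user, amount):
      # Misbehaviour drains the offender's juice, and half the penalty
      # propagates to whoever certified them (an assumed rule).
      user.juice -= amount
      if user.sponsor is not None:
          user.sponsor.juice -= amount / 2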

Of course, all of this has already been explained so much better over on http://www.advogato.org. Take a special look at http://www.advogato.org/trust-metric.html and http://www.advogato.org/person/bytesplit/ (who represents precisely the kind of attack that the trust metric was designed to thwart) and of course the FAQ: http://www.oshineye.com/texts/advogatoFaq.html.

Isn't it a demonstration of a broken trust metric if bytesplit is certified at the Journeyer level?

Nope, Advogato is using a two-dimensional trust-metric (see this page for a diagram: http://www.oshineye.com/software/advoSpaceOutput.html) where someone can be a highly-rated open sourcerer but have their diary rated beneath the visibility threshold.

Actually, it would seem that the certification metric is irrelevant for trust. The diary rating is the metric that controls visibility of posts. Why have the certification at all, unless the diary rating is also related to the certification level in some non-obvious way?

Yep, the diary rating and the certification metric are indeed related. The idea is that diary ratings from people you've directly or indirectly certified count for more than those of random members of the community.
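
As a hedged illustration of that relation (this is not Advogato's actual algorithm; the names and numbers are invented): suppose each rater carries a certification-derived trust weight from the viewer's perspective, and diary visibility is the trust-weighted average of the ratings.

  # cert_trust: how much the viewer's certification network trusts
  # each rater, 0.0-1.0 (made-up values).
  cert_trust = {"alice": 1.0, "bob": 0.5, "random_member": 0.1}

  diary_ratings = {"alice": 8, "bob": 6, "random_member": 1}

  def weighted_diary_rating(ratings, trust):
      total = sum(trust.get(who, 0.0) * score for who, score in ratings.items())
      weight = sum(trust.get(who, 0.0) for who in ratings)
      return total / weight if weight else 0.0

  # Ratings from certified people dominate; the random member's
  # low rating barely moves the result.
  print(weighted_diary_rating(diary_ratings, cert_trust))  # about 6.9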


Wiki Scoring Proposal, from StevenNewton

Illustration of scoring (example - I don't know if these counts are really right):

To score Once and Only Once:

Seed pages, score 3: FrontPage WikiWikiWebFaq

Pages linking to Once and Only Once: WikiWikiWebFaq WikiEngines RecentEdits

Score for RecentEdits is 1

Score for WikiEngines is: (3 for WikiWikiWebFaq + 1 for RecentEdits)/2 = 2

Score for Once and Only Once is: (3 for WikiWikiWebFaq + score for WikiEngines + score for RecentEdits)/3 = (3 + 2 + 1)/3 = 2
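
For concreteness, here is the same illustration as a small Python sketch (my own rendering: the CamelCase identifiers stand in for the page titles above, the default score of 1 follows the RecentEdits example, and the recursion assumes the link graph is acyclic, which a real wiki's is not):

  SEED_SCORE = 3
  seeds = {"FrontPage", "WikiWikiWebFaq"}

  # links_to[page] = the pages that link to it
  links_to = {
      "WikiEngines": ["WikiWikiWebFaq", "RecentEdits"],
      "OnceAndOnlyOnce": ["WikiWikiWebFaq", "WikiEngines", "RecentEdits"],
  }

  def score(page):
      if page in seeds:
          return SEED_SCORE
      backlinks = links_to.get(page, [])
      if not backlinks:
          return 1  # e.g. RecentEdits, per the example
      return sum(score(p) for p in backlinks) / len(backlinks)

  print(score("OnceAndOnlyOnce"))  # (3 + 2 + 1) / 3 = 2.0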

Thinking about a second dimension of trust, now. A visitor can attach an arbitrary score of 0-3 to any page, but that score is for that user only. In other words, if a visitor thinks WikiEngines is a boring topic, he or she can score it as 0, and that score will be used in place of the community score for that visitor (which presumes that there must be some way for the wiki to identify a particular visitor). To be clear, for that visitor the score of 0 applies not only to that page but also to the scores assigned to other pages based on links from it. So the score for Once and Only Once becomes (3 + 0 + 1) / 3 = 1.33...
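
Extending the sketch above, the visitor's override can be threaded through the same recursion (again an illustration; the overrides dictionary is an assumed representation):

  def score_for(page, overrides):
      # A visitor's own 0-3 score replaces the community score for
      # that page, and flows into pages scored via links from it.
      if page in overrides:
          return overrides[page]
      if page in seeds:
          return SEED_SCORE
      backlinks = links_to.get(page, [])
      if not backlinks:
          return 1
      return sum(score_for(p, overrides) for p in backlinks) / len(backlinks)

  # A visitor who finds WikiEngines boring scores it 0:
  print(score_for("OnceAndOnlyOnce", {"WikiEngines": 0}))  # (3 + 0 + 1) / 3 = 1.33...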

While this is no doubt a technical tour de force, it scarcely counts as the SimplestThingThatCouldPossiblyWork. WardsWiki has survived for years largely undamaged. Let's see how things progress without such technical changes.

Quite right. The tour de force is a thought experiment by me, not a serious proposal that I wish to pursue here. I have adjusted it to suggest a second dimension of trust - visitor self-scoring, based on the discussion on this page.

SoftSecurity is a good thing. TrustMetrics? are interesting, too. Both are attempts to mirror real human community interactions in a technical way. For example, in any trust metric, I would expect friendship (trust) to be unidirectional. That is, if I trust you, you are not required to trust me. Also, it should not be the case that friends of my friend are automatically my friends as well.

-- StevenNewton


The above assumes that trust is unitary. So someone who writes only pablum would get a high trust value while anyone controversial would get a low value. And of course, there is no way to distinguish between, say, a controversial left-winger (e.g., NoamChomsky) and a controversial right-winger (e.g., CostinCozianu). Rather, it conflates individuals' trust in people into "the community's trust" in someone. That just gets you into the problem of aggregation of preferences. Specifically, it is impossible to aggregate preferences in any non-arbitrary way.

[The latter claim is highly debatable; however, the weaker claim that we don't have much of an idea of how to do it now, if it is possible at all, is enough to establish that this proposed "tour de force" will not work. It enforces community solidarity over the honest individual's search for truth. An individual truly interested in the search for truth could quite likely get shut out of such a system. I am working on a program that might get around this problem, but it would never serve as a replacement for Wiki even if it was good, because of network and lock-in effects.]

Actually, this would work, assuming you had a multi-dimensional trust metric so that users could be rated in as many dimensions as the system supports. Some of those rating dimensions would be collaborative and some would be based on your particular perspective. Advogato already does this: certifications are collaborative whilst diary ratings are perspective-based.

We'd also want to start new users just below the edit threshold; otherwise persistent trolls could just keep creating new accounts. That particular problem has plagued WikiPedia for a while now.

[What I think you want (what my program alluded to above is intended to do) is to let everyone edit everything, but allow users to have filters based on their own evaluations of reputation. These filters could be arbitrarily fine-grained, allowing them to see additions from certain people only on certain subjects, and they could be turned up or down depending upon the user's mood - does he want to get down to business and really learn something, or instead slum around with what he considers various interesting crackpots? Both are possible.]
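
One possible shape for such filters, sketched in Python with invented names and an invented 0-10 scale (the bracketed program itself is unreleased, so this is only a guess at the idea):

  # Each reader keeps private reputation scores, optionally per subject;
  # a tunable threshold decides what gets shown.
  reputations = {
      ("alice", "operating systems"): 6,  # worth reading on this subject
      ("alice", None): 2,                 # default on other subjects
      ("bob", None): 9,
  }

  def visible(author, subject, threshold):
      score = reputations.get((author, subject),
              reputations.get((author, None), 5))  # unknown authors get 5
      return score >= threshold

  print(visible("alice", "operating systems", 5))  # True
  print(visible("alice", "compilers", 5))          # False: filtered out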

The program alluded to by our friend in the square brackets sounds very much like PurpleWiki (available at: http://www.blueoxen.org/tools/purplewiki/) which has a model of wiki that tracks fine-grained elements (finer than the page level that is) and as such offers TransClusions.

[Yes, my (only partly designed and implemented) program works at a very fine-grained level (alas, in some ways it is not as fine-grained as wiki: for expediency in programming (the use of Tk canvas widgets to handle justified text formatting in the desktop version), it does not allow structure inside each individual "paragraph", which is the unit of granularity). In fact there are no "pages" in the wiki sense; pages are created dynamically from focal items (what are here WikiNames), by selecting a set of items from the set of all items that are explicitly related to them. These items could thus potentially be in many "pages" at once. If you know Ecco or Lotus Agenda, there are similarities there. Unlike PurpleWiki, the item level of granularity is there from the beginning. I've been "working" on this stupid program since before I heard of wikis, or indeed before they existed, but I am not a superman like Kulisz, so I haven't made a lot of progress.]

The downside to letting everyone edit everything (like the downside of killfiling people on UseNet) is that you'd still see the side-effects. If someone you like or have whitelisted responds to flame-bait, then you end up having to wade through the ensuing flame-war. The beauty of collaborative filtering is that the system filters based on the totality of your preferences. It should be possible to make an objectionable person 'disappear' from your perspective and reduce the visibility of anything they touch just by applying a rating. Thus if I rate someone (called Alice) as 0, then anyone who rates Alice as 10, or whom Alice rates highly, ends up with a lowered rating from my perspective. Allow the community's perspective to be an input to your trust metric (if I rate Bob as a 10 and Bob rates Charlie as a 0, then the system tentatively assumes that I have a low opinion of Charlie) and I can filter out most of the undesirables (present and future) just by rating a few individuals.
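
A rough Python sketch of that one-step propagation, with invented names and a 0-10 scale (again, not Advogato's actual trust metric):

  ratings = {
      "me":    {"bob": 10, "alice": 0},
      "bob":   {"charlie": 0, "eve": 8},
      "alice": {"dave": 10},
  }

  def inferred_rating(viewer, target):
      # Use a direct rating if one exists; otherwise average the
      # one-step-removed ratings, each scaled by how much the viewer
      # trusts the intermediary.
      direct = ratings.get(viewer, {})
      if target in direct:
          return direct[target]
      contributions = []
      for intermediary, trust in direct.items():
          their = ratings.get(intermediary, {})
          if target in their:
              contributions.append(trust / 10.0 * their[target])
      if not contributions:
          return None
      return sum(contributions) / len(contributions)

  print(inferred_rating("me", "eve"))      # 8.0: trusted Bob rates her 8
  print(inferred_rating("me", "charlie"))  # 0.0: trusted Bob rates him 0
  print(inferred_rating("me", "dave"))     # 0.0: only Alice (whom I rate 0) vouches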

[I didn't mean everything, but they should be able to add comments everywhere as they wish. The hope is that they can be filtered out using various mechanisms such as those you describe. Maybe this is hopeless. My main interest now for my own program is in the single-user version, which people can use to develop their own worldview over time, and then expose parts of it for commentary from, and to help, others. The collaboration model is more along the lines, to use a political/ethical metaphor, of Kant's kingdom of ends rather than Rousseau's body politic. Each person would have his own sovereign structure which they could consider separately from the totality at any time.]

The difficulty with any security system is that it must deal reasonably with a variety of different kinds of 'attacks'. These range from newbies who commit social blunders, to WikiSociopath?s, all the way up to people bent upon the deletion or corruption of the entire wiki. As such, the system has to be based on feedback, so that the more serious the transgression, the more severe the response. It should be easier to lose trust than to gain it; that way people have a disincentive to misbehave or to tolerate misbehaviour. The downside is that someone who is sufficiently unpopular gets visibly shunned and becomes a voice ranting in the darkness. The upside is that unpopular people ranting in the darkness are out of sight and so can safely be put from one's mind.
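
One way to get that asymmetry is to damp positive feedback while taking negative feedback at face value; the factors and the 0-10 scale below are assumed for illustration:

  GAIN_FACTOR = 0.2  # good behaviour earns trust slowly
  LOSS_FACTOR = 1.0  # transgressions cost their full weight

  def update_trust(current, feedback):
      # feedback > 0 for good behaviour, < 0 for transgressions;
      # a more serious transgression (larger |feedback|) costs more.
      factor = GAIN_FACTOR if feedback >= 0 else LOSS_FACTOR
      return max(0.0, min(10.0, current + factor * feedback))

  trust = 5.0
  trust = update_trust(trust, +1)  # slow climb: 5.2
  trust = update_trust(trust, -1)  # quick fall: 4.2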


In 2004 the community was infected by a small number of troublemakers (fewer than five - probably two or three). These troublemakers were sociopaths whom the community had to deal with in some fashion. Once these particular troublemakers had gone, things settled down - until the next set of high-strung Wikizens comes along and starts another bar-room brawl. Remember that most of the people referred to as "troublemakers" are, in fact, people whose intentions are nothing but the highest. They see themselves as Keepers Of The Truth, etc., and are willing to cause all manner of "disruption" to see their goals met. Folk like that are not unlike religious extremists - except that they don't strap on Semtex vests. Well, not yet, anyway.

The combination of system support for TheTimeOutStrategy and community resolution on the matter extinguished the winter 2003/2004 brushfire. The solution may not be "long term", but it seems to be working. It took nine years for serious troublemakers to emerge. If an episode like this happens no more than every three or four years, that's good enough for me -- especially if the learnings from this one prove helpful in addressing the next recurrence.


For a solution to the login requirement, see AccountlessUserIdentification.


Let's be frank: Wiki "needs", or at least could make good use of, a whole lot of things. None of which WardsWiki is likely to provide. There are literally dozens (perhaps over a hundred) wiki codebases, most of which have more features than this one. Call it minimalism, elegance, pragmatism, apathy, negligence, whatever you want, but you're just not going to see drastic changes made to the codebase of this wiki to support things like trust metrics, karma, moderation, and so forth.

There are parallels here between c2 and other online communities. A few cranks and/or stubborn vocal types take over discourse and either game or otherwise stoke up ill will around whatever technical mechanisms of social control exist. It happened all the time on UseNet groups (especially when propagation was more reminiscent of UUCP peer-to-peer than of large hubs like giganews), it happened on LambdaMoo with a few characters and the arbitration system (those that were there know who, no point in naming them), and doubtless in many more communities where I've not witnessed it myself. See WikiDeclineLament?. One might experiment with socio-technical mechanisms ElseWiki?, but they're likely to find little traction here.


The BookMarklet at UserName is a kind of SoftSecurity and is better than none. Even a consistently used DramaticIdentity helps to signify the general nature of posts being made, and is particularly useful for people with more than one IpUsername (e.g. on dialup, or switching between work and home).


It seems we've been working on these problems in parallel, but independently.

I think you can best separate out the two orthogonal issues and desires with UserRanking (like AdvoGato's TrustMetric) and PerItemVoting (rather than ranking a user's blog quality). You don't need multi-scale voting; a simple +1 or -1 will do. The rest is handled through counting intrawiki linking. Or, on AdvoGato, how often users link to other users' pages or content (each link can count as a vote). Ultimately, what this is all pining towards, I'll argue, is the GlassBeadGame and a new Internet that embodies the WikiWay. -- MarkJanssen
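
A rough sketch of that combination, with invented names (explicit +1/-1 votes on an item, plus one implicit vote per intrawiki link pointing at it):

  votes = {"GlassBeadGame": [+1, +1, -1]}  # explicit per-item votes
  incoming_links = {"GlassBeadGame": 4}    # each intrawiki link counts as a vote

  def item_score(item):
      return sum(votes.get(item, [])) + incoming_links.get(item, 0)

  print(item_score("GlassBeadGame"))  # 1 + 4 = 5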


The practical bottom line is that if you want a heavier GateKeeper approach to an IT wiki, you'll probably have to build it yourself(s), the way you think it "should be", and hope people come. The WikiZens of this wiki will probably want to keep things more or less the same because it's a self-selecting group: those who really hate the "open" style probably stopped coming, meaning the existing WikiZens will tend to favor the status quo. It's roughly comparable to a rock band changing their style: existing fans want things as-is because they are fans precisely because they like the band's "sound", and it takes a while to build a new fan base if they switch. In general it's far easier to lose fans than gain them. Perhaps it's better to focus on incremental fixes. - t


See also EditsRequireKarma
