Capacity Constrained Flow Network

Flow is a term from graph theory, describing the amount of some quantity that passes through a given node in a graph. Flow is never used up. Flow could be used as a measure of trust in a user - someone with high flow would be more trusted than someone with low flow. The only thing that 'consumes' your flow is assigning trust to other people: if you have 100 units of flow, you can declare 'I trust user X enough to give them 100 units', and that user will then also have 100 units. Or you could pick 100 users and give them one unit each. Neither choice changes your actual level of flow - you still have 100 units yourself; you just have no units left available to assign.

Side note: There would probably have to be a limit on the amount of flow you can assign to any one user, which is less than the total required to become highly trusted - otherwise it would only take one person's 'vote' to give an otherwise unknown user trusted powers.

In a capacity constrained flow network, such as the AdvogatoTrustMetric, trust is 'spread around'. The only way to get a high flow is if lots of people trust you.

A capacity constrained flow network is robust to attacks or noisy cert data. It is difficult for an attacker to acquire large amounts of flow - there is a nearly linear scaling between the amount of flow you can get and the number of certificates you have to obtain from the 'good' set. RaphLevien's PhD research work contains a proof of this assertion.

There is nothing to prevent a trust metric or flow network system from working with pseudonymous identities, and in fact there are a few at AdvoGato.

It would be an interesting experiment to start a new wiki that uses a capacity constrained flow network. (Perhaps some ideas from QualityOfService? and ClassOfService? in Internet routing could be relevant.) (But loading down *this* wiki with complicated security features would probably upset the careful dynamic Ward has cultivated.)

The exact mapping from "flow number" or "trust number" to editing privileges is really outside the scope of this page. But for the sake of example, one possibility would be: (feel free to edit this example to make it better/simpler)
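Purely for illustration, the mapping might look something like the following sketch in Python (the thresholds and privileges here are made-up assumptions, not a proposal for this wiki):

 def privileges(flow):
     """Hypothetical mapping from a user's flow to editing rights.
     None for edits_per_day means unlimited."""
     if flow == 0:        # nobody has certified you yet
         return dict(edits_per_day=1,    may_delete=False, may_assign_flow=False)
     if flow < 50:        # a little trust
         return dict(edits_per_day=20,   may_delete=False, may_assign_flow=False)
     if flow < 300:       # well known
         return dict(edits_per_day=None, may_delete=True,  may_assign_flow=False)
     return dict(edits_per_day=None, may_delete=True, may_assign_flow=True)  # "highly trusted"

 print(privileges(120))   # {'edits_per_day': None, 'may_delete': True, 'may_assign_flow': False}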



A capacity constrained flow network suggests a radically different form of authorization in distributed systems than the classic password and AccessControlList mechanism. In the latter, once you gain authorization, you can make unlimited changes within the rights specified in your AccessControlList, but none outside that list. Further, someone has to configure the AccessControlList, and there's a very large chance they'll get it wrong, either by making it too restrictive, so you can't get your work done, or by making it too permissive, so that an attacker (or bumbler) can damage the system.

In a CapacityConstrainedFlowNetwork, by contrast, your ability to make changes is limited by flow. This flow is computed over a network, or graph, in which the nodes are people and the edges are certifications of trust, i.e., "I give this person flow because I trust HimOrHer".
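To make this concrete, here is a minimal sketch of such a computation in Python. This is not the actual AdvogatoTrustMetric algorithm - in particular it only constrains edge capacities, whereas Advogato also caps the flow through each node - and the names and capacities are made up:

 from collections import deque

 def max_flow(capacity, source, sink):
     """Edmonds-Karp: repeatedly push flow along shortest augmenting paths."""
     residual = {u: dict(edges) for u, edges in capacity.items()}
     for u, edges in capacity.items():
         for v in edges:
             residual.setdefault(v, {}).setdefault(u, 0)   # reverse (residual) edges
     total = 0
     while True:
         parent = {source: None}                           # BFS for an augmenting path
         queue = deque([source])
         while queue and sink not in parent:
             u = queue.popleft()
             for v, cap in residual[u].items():
                 if cap > 0 and v not in parent:
                     parent[v] = u
                     queue.append(v)
         if sink not in parent:
             return total
         bottleneck, v = float("inf"), sink                # smallest capacity on the path
         while parent[v] is not None:
             bottleneck = min(bottleneck, residual[parent[v]][v])
             v = parent[v]
         v = sink
         while parent[v] is not None:                      # update residual capacities
             residual[parent[v]][v] -= bottleneck
             residual[v][parent[v]] += bottleneck
             v = parent[v]
         total += bottleneck

 # The seed certifies two long-standing members, who in turn certify a newcomer.
 certs = {
     "seed":   {"ward": 100, "alice": 100},
     "ward":   {"alice": 30, "newbie": 10},
     "alice":  {"newbie": 10},
     "newbie": {},
 }
 print(max_flow(certs, "seed", "newbie"))   # 20 - the newcomer's flow is capped by their certs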

A very interesting property is that the system is robust to attacks or noisy cert data. It is difficult for an attacker to acquire large amounts of flow - there is a nearly linear scaling between the amount of flow you can get and the number of certificates you have to obtain from the 'good' set. My PhD research work contains a proof of this assertion.

One interesting example is the AdvogatoTrustMetric, which suggests that the theory actually works well in practice.

I originally came up with the concept while trying to build a more robust PublicKeyInfrastructure. That, however, turns out to be a HardProblem?, so I've more or less given up on it. However, I see it as being useful for a number of things, among them a spam-resistant email infrastructure.

I'm pretty busy with other things right now, so I'm not actively following up on these prospects, intriguing as they may be. For one thing, building infrastructure of any kind is a HardProblem? - basically a Catch-22. But existing systems are so fragile and vulnerable (particularly our email system) that I hope other people adapt the ideas.

One particularly intriguing application is to fit such a flow network to a Wiki. Ideally, it would not get in anyone's way for actual editing, once everybody acquired their certs, but would go a long way towards preventing a WikiMindWipe. Who knows, though? Maybe there are other, better ways of handling this, or maybe it's not a problem that needs solving. It would make an interesting experiment, though, I think.

-- RaphLevien


Hmm. A WikiMindWipe isn't enough of a problem to make trust metrics necessary. Why do I need to trust anyone on this system, or in turn why does anyone need me to indicate trust in others? I can't verify who said what anyway; I can only read the pages. Nothing has to be signed here. So I can't indicate trust, even in principle. In what do I trust here, except the system itself?

I can rank a page a la collaborative filtering. But how do you rank a mutable entity over time? Imagine if there were books on Amazon whose contents could change at any time. Such entities can't be ranked in any practical way; you can only rank a snapshot. Amazon has a glacial version of this problem when it lets reviews of prior versions of a book roll into the latest version. Why even rank it when I can, in my eyes, make it better?

However, an automated wipe is a problem. See WikiWipeout. Think DDoS with write permissions. Eventually, someone will attempt this.

-- BillDehora


Of course, such a system would make many anonymous contributors leave. One of Wiki's charms is that everyone is free to contribute.

I agree that this is a desirable feature. I come from a strong CypherPunks background, so I am definitely sensitive to the issue. One distinction I'd like to make, though, is between anonymity and pseudonymous identity. There is nothing to prevent a trust metric or flow network system from working with pseudonymous identities, and in fact there are a few at AdvoGato.

See http://usemod.com/cgi-bin/mb.pl?AnonymityVsPseudonymity.

Incidentally, this Wiki isn't 100% anonymous either, as it stores your IP address in a log. So, if I were to contribute to a very sensitive discussion, I might still be tracked down.


Several points. First, the mind wipe was made by an experienced (and theoretically certified) member, so the wipe would not have been prevented by the trust network. [No, if you believe this you're obviously missing the point of using a capacity constrained flow network as opposed to simply using certification as a form of access control. Since AdvoGato essentially does the latter, I can see how it can be confusing. But it's not what I'm proposing here.] In fact, if some of you recall, the community on this wiki essentially allowed the wipe to continue unabated, as no vindictive harm was done.

Second, Wiki (at least this one) is already protected against mass deletions. It would take a certain viciousness and cleverness to figure out how to do it. It takes effort.

Excellent.

Third, pure anonymity has been verboten on this Wiki ever since Anonymizer.com was banned (see BecomingAnonymous).

Fourth, I don't think there's a problem that needs to be solved, personally. Loading a wiki down with complex security would drastically change the dynamic. In fact, it's bad FeatureKarma to add it for no good reason. Ultimately, wikis rely mostly on community solutions (http://usemod.com/cgi-bin/mb.pl?CommunitySolution) with a little technical assist.

I think the trust metric is like this. It is not impossible to thwart the network. Just look at the hoopla surrounding http://advogato.org/person/esr. But the hacks are entirely social. And this is the point: you'll never build a community on technology alone. You need people. -- SunirShah

You may well be correct that Wiki is fine the way it is, and that the merger of Wiki and trust flow stuff is less interesting than Wiki alone. In any case, the world is not in serious danger right now of having such a prototype built, at least not by me.

I agree with your points - that adding features for no good reason is bad FeatureKarma, and that purely technical approaches suck compared to community ones. All I proposed is that it would be an interesting experiment. Is that so wrong? -- RaphLevien

Not at all. That wasn't my point exactly, though. First, wikis are cool because they are anarchies, so community solutions are always the first answer proposed. Second, it's bad karma because the question is not one of "build it and they will come," but more one of "What do I envision, and how do I get there?" In fact, UseModWiki was fairly untouched until Cliff volunteered MeatballWiki; then feature requests came in pretty fast.

Content over form. You first have to figure out what community you want to build, and then shape the technology around it. Maybe the oxymoron of restricted openness makes sense - the two acting as BalancingForces to each other. OrgPatterns is a restricted wiki, but it had a reason for that. -- SunirShah P.S. I didn't mean to sound like I was attacking you in any way; if I did, I apologize.

Not at all. I can understand the frustration around feature requests, though. It seems like about half the metadiscussion on AdvoGato is feature requests, particularly ways of making the AdvogatoTrustMetric more complex. I think this is particularly ironic considering that the other half of that discussion is people not understanding what the existing trust metric is about :)

As for experiments vs systematic design, when it comes to online communities, count me firmly in the former camp. I do not have anywhere near the hubris to predict how users will actually respond to a given online community setup. I certainly wouldn't have predicted that the anarchy of Wiki would have led to the high quality it has. To me, that's why doing the experiments is so interesting. -- RaphLevien

A lot of the discussion on MeatballWiki is feature requests for Cliff, so I understand completely. In fact, he left it up to me to implement all those http://usemod.com/cgi-bin/mb.pl?IndexingSchemes. But you make a good point about trying to BigDesignUpFront a community: that will inevitably fail. Perhaps you need a lucky mix between a good experiment and a purpose. I think Advogato is excellent for this reason: it has a very good purpose and it is technically very well conceived. -- SunirShah (P.S. Have I exceeded my InterWiki quota yet?)


I don't really understand the basic idea. What is "flow" and how is it used?

If I have 100 units of flow and it takes 10 to edit a page, does this mean I can edit 10 pages before needing to ask for more flow? Do different pages have different flow, or can I take flow acquired to edit one page and apply it to another? How do I get flow if I just joined and nobody knows me? Does it take an explicit action from someone else or do I get it as a right? Does flow expire? Can I give my flow to someone else while I'm not using it? How does flow distinguish between valuable edits and destructive ones? And so forth. How does flow work?


Note: I'm attempting to explain how it might work, not suggesting that it be used. I don't think we need it, but it's a fun GedankenExperiment since I like graph theory.

Flow is a term from graph theory, describing the amount of some quantity which passes through a certain node in the graph. Flow is never used up. Nobody here has explained exactly how it would work in terms of Wiki, but it would go something like this:

Flow could be used as a measure of trust in a user - someone with a high flow would be more trusted than someone with a low flow. 'Trust' represents arbitrary ability to edit Wiki - maybe your trust rating specifies how often you can make edits, for a simple example. Flow is not a currency - if you have 100 units, you have whatever editing ability people with 100 units of flow have, and exercising this ability does not change your flow.

The only thing which would 'consume' your flow would be assigning trust to other people - if you have 100 units of flow, then you can declare 'I trust user X enough to give them 100 units' and that user will then also have 100 units. Or you could pick 100 users and give them all one unit of flow. This doesn't change your actual level of flow either, though. You still have 100 units yourself; you would just have zero units left available to assign.

Side note: There would probably have to be a limit on the amount of flow you can assign to any one user, which is less than the total required to become highly trusted - otherwise it would only take one person's 'vote' to give an otherwise unknown user trusted powers.
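To make that bookkeeping concrete, here is a rough sketch (the Account class, the cap of 100, and the numbers are assumptions for illustration, not a description of any existing system):

 PER_RECIPIENT_CAP = 100   # assumed cap per recipient, below the ~300 needed to be "highly trusted"

 class Account:
     def __init__(self, name, flow):
         self.name = name
         self.flow = flow          # trust level; never consumed by being assigned
         self.assigned = {}        # recipient name -> units assigned to them

     def available(self):
         return self.flow - sum(self.assigned.values())

     def assign(self, target, units):
         already = self.assigned.get(target.name, 0)
         if already + units > PER_RECIPIENT_CAP:
             raise ValueError("per-recipient cap exceeded")
         if units > self.available():
             raise ValueError("not enough unassigned flow")
         self.assigned[target.name] = already + units
         target.flow += units      # the recipient gains flow; ours is unchanged

 me, x = Account("me", 100), Account("x", 0)
 me.assign(x, 100)
 print(me.flow, me.available(), x.flow)   # 100 0 100 - same flow, nothing left to assign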

In a system like this, trust is 'spread around' - the only way to get a high flow is if lots of people trust you (to varying degrees). If nobody knew you, you wouldn't have any flow and would be limited to the most basic editing - maybe only one edit per day, or something. This means that your one edit per day needs to be insightful and interesting - because if it is, then someone may well give you some flow.

Most of your questions don't make sense in the context of flow being an 'unspendable' quantity (but that's not your fault; you just weren't aware of the graph theory meaning), so I'll skip them. One important thing is that flow cannot, of course, tell the difference between useful and destructive edits. You could make a rule that users with less than some amount of flow cannot delete content, only insert (a possible check is sketched below); but many deletions are useful refactorings, and such a rule would confine most refactoring to people with high flow. Maybe this is desired, maybe not - it is the exact mapping from flow -> editing ability that would determine that.
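A rough sketch of how such an 'insert only' check might be enforced (the threshold of 50 and the helper function are hypothetical):

 import difflib

 DELETE_THRESHOLD = 50   # assumed: below this flow, a user may not remove content

 def edit_allowed(user_flow, old_text, new_text):
     """Allow any edit above the threshold; otherwise allow only pure insertions."""
     if user_flow >= DELETE_THRESHOLD:
         return True
     ops = difflib.SequenceMatcher(None, old_text.split(), new_text.split()).get_opcodes()
     return not any(tag in ("delete", "replace") for tag, *_ in ops)

 print(edit_allowed(10, "a b c", "a b c d"))   # True  - pure insertion
 print(edit_allowed(10, "a b c", "a c"))       # False - removes a word
 print(edit_allowed(80, "a b c", "a c"))       # True  - enough flow to delete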

So, you would have limited editing ability until someone noticed that you were making good edits, and your editing ability would then increase. At some point you would be able to start assigning your flow to other people, thus propagating trust throughout Wiki.

Also, it means that some original amount of flow has to come from somewhere, otherwise nobody is trusted at all. The most likely place would be for Ward to assign himself infinite flow and then apportion it out to known Wiki users as he saw fit, who would then in turn distribute their flow to other users, and so on. This sounds despotic, and in fact is, but based on the community we already have here, people who everybody 'knows', like Ward, Ron, Kent... are in fact pretty fair =)

That's just my stab at describing how it could work - please correct any mistakes I've made with respect to graph theory, and if someone has a different method, please explain it too! I quite like the idea, actually - it's similar but not identical to how Advogato works. I might have to run off and implement a flow-based IRC bot or something, just as an experiment - I already have a Slashdot-like IRC bot which works a little like this (only the bot has ops; people request that others be banned/kicked/silenced, etc., and the bot weighs requests according to how often the requestors are in the channel, talking, being banned themselves, etc. - similar to how Slashdot assigns moderator privileges).

-- TorneWuff


A lot of the ideas in the discussion here are already available in switches and routers as QualityOfService? (QoS) and ClassOfService? (CoS) options: giving users and applications different types, flow rates, and quantities of bandwidth based on role and need (e.g., variable bit rate, committed bit rate, and so on). Video and voice, which are continuous, need to be shaped differently than email, which is more intermittent. In the context of a Wiki there must be a way to adapt some of these ideas without ReinventingTheWheel.


Side note: There would probably have to be a limit on the amount of flow you can assign to any one user, which is less than the total required to become highly trusted - otherwise it would only take one person's 'vote' to give an otherwise unknown user trusted powers.

Say it takes 300 flow to become "highly trusted", and the system caps the flow that any one person can assign to any other person at 100.

If one person controls three sock puppets and somehow tricks people into giving each one a total of 100 flow, then that person could leverage that trust into creating an unlimited number of further sock puppets, each of which has 300 flow (and so becomes "highly trusted").

Fortunately, as soon as the people who were tricked revoke enough of their flow to bring the total below 300, then all of those sock puppets simultaneously become no longer "highly trusted". We don't have a "whack-a-mole" problem.
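To make the arithmetic concrete, here is a rough sketch of that scenario (assuming a per-recipient cap of 100 and that what an assigner can pass on is limited by their own current flow; all names and numbers are illustrative):

 MAX_ASSIGN = 100
 HIGH_TRUST = 300

 def flow_of(user, assignments, granted):
     """Flow reaching `user`: whatever real users granted directly, plus what each
     assigner can pass on, limited by the cap and by the assigner's own current flow.
     (Assumes the assignment graph has no cycles.)"""
     total = granted.get(user, 0)
     for giver, receiver, amount in assignments:
         if receiver == user:
             total += min(amount, MAX_ASSIGN, flow_of(giver, assignments, granted))
     return total

 # Three sock puppets, each tricked into receiving 100 flow from real users,
 # each assign the maximum 100 to a fourth puppet:
 granted = {"puppet1": 100, "puppet2": 100, "puppet3": 100}
 certs = [("puppet1", "puppet4", 100),
          ("puppet2", "puppet4", 100),
          ("puppet3", "puppet4", 100)]
 print(flow_of("puppet4", certs, granted) >= HIGH_TRUST)   # True: 300, "highly trusted"

 # The tricked users revoke half of what they gave one puppet:
 granted["puppet3"] = 50
 print(flow_of("puppet4", certs, granted) >= HIGH_TRUST)   # False: 250, no longer trusted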


a spam-resistant email infrastructure

That really caught my eye. So this is a potential SpamSolutions?

But I'm a bit puzzled as to how exactly it would work. Certainly I could state I "trust" Alice 1000 units, which would somehow allow Alice to send more email. But currently most spam involves a spammer impersonating someone else. What stops some spammer from impersonating Alice, annoying me and DOSing Alice? And once you've worked out a way to stop that spammer from impersonating Alice, couldn't you use that method to stop most spam, without a CapacityConstrainedFlowNetwork?

-- DavidCary

