Make The Client Pay

Whenever you have a freely available resource, some people will abuse the resource, with the direct or indirect consequence that others will be unable to use it legitimately. The solution to this is to make the client sacrifice some nominal amount of something which is personally valuable to them. The idea isn't to make clients pay you back, just to make them pay.

So for example, in the case of freely available medical services, making clients pay in money is a poor method of sacrifice. That's because money is very unevenly distributed so that some small fraction of the population doesn't care about it at all, whereas a large fraction doesn't have enough to pay for legitimate uses.

Happily, computer networking power is not so unevenly distributed. Personal computers differ in power only by two or three orders of magnitude, and not 9 orders of magnitude as is the case for money. So in the case of all kinds of denial of service attacks, it is sensible to make the client pay for requested services in computing power. This also applies to the processing of requests for services, which is itself a service.

But to make the client pay for a specific service, the client must first request that service. There's no way to send the money before some sort of request. As long as a client can send a request at some protocol level, a server is vulnerable to denial of service attacks.

Did you miss the point where it's not about sending payment, let alone money, but about proving it's been sacrificed? If you send someone a video showing you burning a million dollars, that's proof of sacrifice of a million dollars.

If a client spends as much or more computing power formulating and sending a request as a server spends receiving and examining it, then this is an appropriate sacrifice for most applications. This is the most elemental request-for-service / sacrifice pair there is, and it is indivisible: you can't make a request without sacrificing something.

How to convince a rogue client to abide by this rule is left as an exercise for the lowly programmer. Meanwhile the InteractionDesigner is busy trying to construct more designs of the quadrature-of-the-circle type.

Here, let me hold your hand while I do your thinking for you. Since someone obviously neglected to explain the facts of life to you, I will start off with those.

The reason you can't cite any law or produce any proof that thwarting denial of service using a 'make the client pay' tactic is impossible is because there isn't one. And the reason there isn't any such proof is because one can't be constructed. And the reason why one can't be constructed is because the opposite (that it does in fact work) has already been proved and accepted as true.

HashCash, which SJ was so good as to point us towards, is a working 'make the client pay' solution to the problem of defending human attention from denial of service attacks (ie, spam). Given that the tactic is known to work, I need merely give the general shape of a solution in the case of TCP. And the only reason I need to do this at all is to show that 'make the client pay' can be applied much further than anyone has ever thought it applicable before.

Once TCP is re-engineered to prevent, or at least make much more difficult, DoS attacks, then IP-based attacks are a completely different thing. These are much more difficult to prevent and it would take re-engineering IP to prevent them, but maybe once people get tired of a single badly configured company router being able to hose half the internet, something will be done about the problem. Your whining about it accomplishes nothing and it's quite obvious that a Single Totally Integrated Solution that thwarts DoS all the way from the IP layer to the email layer, and provides cake and cookies too, is neither possible nor desirable.

Now be a good boy and if you manage to assimilate all these facts then you'll earn yourself a pat on the head.


Richard - I must be missing something.

This seems independent of the mechanism of P, and seems to prove that a DoS attack can succeed by definition.

Could you explain where I have gone wrong? Thanks.

First, if you really want to understand the issues then you might want to stick to the right language and context. That context is economic, so you speak about resources being hoarded until others starve. If you don't use this language by now (Eric has made sure of that), you send a clear sign to me that you're unwilling and incapable of learning.

The whole point of the payment / verification scheme is that it is radically asymmetric. Paying requires a great deal of computation whereas verifying payment requires minimal computation. In the case of multiplying a row of numbers and checking whether they fall within a list of bounds, the cost of doing this is trivial. And it's easy to see that this is not independent of the payment method.

Further, wasting CPU verifying payment isn't a problem in reality because CPU power is plentiful and networking bandwidth is scarce. That's why you use the thing which is plentiful (though not human-plentiful) to pay for the thing which is scarce, and not vice versa. If the situation were reversed then you would just reverse what resource pays for what other resource. Again, this is just economics. -- RK

Even before S confirms that P is valid, S still has to allocate network resources to receive R. Those resources can be exhausted by A just as they are today. You can keep pushing that problem to lower levels (that "aren't your problem"), but that doesn't make the problem go away. -- EH

Go away, go away, go away. You're uneducable, Eric. The only thing you achieve by persistently asking your questions is to make me dismiss the possibility that anyone else has the same comprehension problems. And since your questions inhibit other people asking the same thing, your participation is actually a net negative contribution to this discussion. -- RK


This page should probably be renamed ProposedDefencesAgainstDos? because each DoS attack that targets a distinct resource may need to be defended against differently.

Let's try this again.

Most programmers think that DoS is just something that happens when you try to provide a service to the broad public, without any accountability or traceability. When asked to characterize DoS, they would say that "DoS attacks are based on exploiting the vulnerability of allowing access" or "simply overloading the server with more requests than it can handle", but this is not true.

It's not true because servers get overloaded all the time and usually what happens is that a few requests get delayed or dropped, so server overloads are not DoS. The typical programmer's way of thinking about DoS is very engineering- and security-minded, and it happens to be incomplete. What that means is that you can't possibly engineer a security solution to the problem of DoS; it also means that since DoS isn't a security problem in the first place, there might be other solutions.

Now let's think about DoS for a moment. Let's take it out of the computer networking arena and think about the essence of DoS. The essence of DoS is that there is some resource which is freely available to a group of people and which one or more members hoard until the rest of the group starves. In essence, DoS is a case of famine and not just any kind of famine but artificially induced famine. A famine where there's plenty of food, or other resource, to go around for all legitimate purposes, but hoarding and misallocation causes the majority to starve.

Now, what we see is that DoS is an economic problem, so it's no wonder that there isn't any kind of security solution to that economic problem. What we also see is that DoS isn't just any kind of random economic problem but a very specific problem that's been extensively studied and is very well documented. AmartyaSen got a Nobel Prize in Economics for having pointed out the causes of famines, and Shlomo Reutlinger & Marcelo Selowsky verified it. So by now the thesis that famines are caused by unequal income distributions has become widely accepted.

Note that famines are not caused by lack of supply of food. Famines are not caused by demand "overloading" supply, they're not caused by overpopulation, they're not caused by having too little food to go around. In computer networking terms, this means that DoS is not caused by servers (supply) being overloaded by requests (too much demand). Famines are caused by a misallocation of the supply, such that some people have enough to throw away while others don't have enough to avoid starvation. What causes the misallocation isn't very relevant either so long as it isn't outright theft.

How do you rectify income distributions? Well, that's a problem with a known solution. The solution is ancient, in fact, and was first pointed out by JohnMaynardKeynes in his work on depressions. Keynes's solution to depression is for the government (the primary resource holder) to distribute money (the resource) through some large-scale project involving massive amounts of manpower. If the project has some kind of ancillary benefit then that's great, but its purpose in pulling the economy out of depression is simply to function as a makework scheme that prevents hoarding and is targeted at the needy.

Note that this level of indirection (famine of food -> famine of money -> solve famine of money -> solve the famine of food) is completely unnecessary. In ancient Egypt, people were paid directly with grain and the pharaohs engaged in massive construction projects as makework schemes. Egypt's economy functioned very well on this basis for eons until the Romans conquered it and "modernized" the economy.

So we have that DoS is an economic problem with a known economic solution. What, then, does this solution look like? The solution involves distributing the resource, either directly or indirectly, according to some makework scheme. This works because you don't care whether your clients pay you back, but you do care that they pay some purely nominal fee in some medium which is equally distributed (eg, labour) and valuable to the clients, so that they avoid making spurious requests.

Whatever resource is attacked by DoS (whether storage, computation or network capacity), the solution is some kind of makework scheme. In the case of storage for connection state, which is what's attacked during SYN floods, the beginnings of a solution are detailed below. In the cases of network bandwidth and CPU, which are attacked during DDoS, it would take a very different solution since they are self-regenerating resources.

Unlike with storage, it isn't possible to create so many forged requests that the server is incapable of handling any new requests at all. This characteristic also makes these resources much more difficult to attack, since a DoS targeted at bandwidth or CPU can only succeed at that point in time and not for all time. Even if one limits storage allocation by expiring it, the number of storage*time blocks available in a given time period is more limited than the supply of CPU time slices.

Note also the problem of theft. Effective makework schemes in history rely on the impossibility of stealing another person's labour. In the case of DoS, problems arise when one computer may steal the computing power of thousands of others. This then turns into a mixed economic-security problem, the solution to which is to impose costs on the stolen computers so high that their rightful owners will be inconvenienced enough to fix their security problems. If this cannot be done then DoS has no solution in the current regime.

Finally note that HashCash uses precisely this tactic (Make The Client Pay). The hash cash scheme thwarts spurious requests for human attention by making clients pay in human-visible processing power.

What beautiful mischaracterization of a problem leading up to non-solutions, like the one proposed below. --CostinCozianu

Agreed. What works for a real-world example (famine) won't work for a virtual-world situation, because the constraints of physical presence don't exist. A food distribution system for a famine-stricken area will be accessible only by the locals, and so the travel time to the food centers is a factor, as is the pipeline between centers. The Internet makes nearly every computer equidistant, and so a sufficiently distributed botnet can DoS any server.

Are you some kind of idiot? This issue is addressed explicitly and it has nothing to do with "the constraints of physical presence". (And who the hell said anything about food centers??) It has everything to do with the ability of computers to steal each other's computing power. And even then, making the client(s) pay is a useful tactic because it increases by a few orders of magnitude the minimum size of a botnet to successfully perform an attack, and increases by a few orders of magnitude more the minimum size of a botnet to perform an attack unnoticed by the zombie computers' owners. If you successfully create a differential between the two, the greed / stupidity of the botnet controller will ensure that attacks are visible to the zombie computers' owners. And by visible is meant annoying, hence they'll tighten security and take their computers out of the botnet.

Your example is famine. If it's a bad one, withdraw it. And your proposal lacks the basic knowledge of how computers communicate, so I thought that a little explanation of why they differ from real-world things like people would be useful. If the resource is purely connectivity, and the client needs only to make the server take some time, and can ignore the server's request and send another request immediately (if it even waits that long), then your plan is flawed. It's also not kosher to claim that your plan would be good "if IP was designed correctly" or some other dodge, since we have to deal with the initial constraints of the world as it is.

My example is famine, and despite a lengthy exposition of which it is obvious you haven't read a single word, you don't understand anything about famines. Distance measures are completely irrelevant in either case, except perhaps in the case of food distribution, which you're the only one concerned with in the first place.

Your example, I remind you. If you think that it is a bad one, refactor it.

If the resource is bandwidth ("connectivity" doesn't have any well-defined meaning) then it's a whole different problem. It's then a DDoS problem and not a DoS problem. Your mendacious "we have to deal with the initial constraints of the world as it is" is particularly idiotic given that this proposal would see the reengineering of TCP. A different proposal using the same general tactic would see the reengineering of IP. Either way, your "things have to stay exactly like they are" position is ridiculous in the extreme.

I don't think they should stay the same, but that's not what you started this out as, before you moved the goalposts. Had you started with the premise that IP does not allow for limiting the client side to prevent server overload, you'd be fine. But you instead start with a flawed idea, and then spend the rest of the time trying to twist the initial conditions around until your idea might just work. And the distinction between DDoS and DoS is trivial, and in the real world irrelevant, because the use of a single client to DoS a server has passed into history - the BlackHats? have botnets of thousands of machines at their disposal.

Handwaving away reality usually fails. In our world, the number of single-machine DoS attacks is vanishingly small, because it's vastly easier to overload a server from several machines, and because all of the machines are compromised machines. No one attacks from a machine they own, because the backtracking would reveal them to the authorities. All DoS attacks these days are a) distributed and b) launched from compromised machines. So you can ignore the single-machine attacks - they are too easily stopped with other measures.

Now for the case of many machines: if your proposal actually worked, and caused significant load on the client machines, it might have the beneficial effect of forcing the users to clean their machines. But as others have pointed out, your plan, in the current Internet, fails to force the client to perform anything, and it can blissfully send packet after packet, paced enough to prevent user-affecting load. The cumulative effect of 10,000, or 100,000, or 1,000,000 clients sending packets to the server will bring it to a halt, all without any noticeable load on the horde of clients. If you can't understand this simple fact, you are lost.

It's obvious you haven't read this page nor do you understand anything on it. Who are you anyways? Just so I can mentally plonk you. -- RK

Why should I give you that? If you can't argue the logic away, you want to ignore it?

Just returning the favour. You're already ignoring everything I wrote, I just want to reciprocate. In any case, I think you're Eric since no one else so blatantly ignores what others wrote to repeat their position. -- RK

Nope, not part of the Eric Cabal.


A proposal to thwart SYN floods

A way to prevent SYN floods is to design the protocol so the client must spend some time on each request. For example, in the case of the three-way handshake, when the client sends a connect request, the server sends back a moderately large number in the current range it services and dumps any state information. The client now has to factor this number into its component primes. This takes about 1/10 of a second. The client then sends the results back to the server, and the server verifies the result by multiplying the factors and checking that the product lies within the current service range.
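A minimal sketch of that exchange, in Python (purely illustrative: the window bounds, the trial-division client and the shape of the verification are assumptions made to show the flow, not a worked-out wire format):

    import random

    # Assumed global window of challenges the server is currently servicing.
    WINDOW_LO, WINDOW_HI = 10**17, 10**17 + 10**6

    def server_issue_challenge():
        # Pick a number from the current service range and immediately
        # forget it -- no per-client state is kept.
        return random.randrange(WINDOW_LO, WINDOW_HI)

    def client_pay(challenge):
        # The expensive side: factor the challenge by trial division.
        # (A real design would size the challenge so this costs ~1/10 s.)
        n, d, factors = challenge, 2, []
        while d * d <= n:
            while n % d == 0:
                factors.append(d)
                n //= d
            d += 1
        if n > 1:
            factors.append(n)
        return factors

    def server_verify(factors):
        # The cheap side: multiply the claimed factors and check that the
        # product falls in the service range. A real design would also need
        # to reject trivial "factorisations" (a single factor, factors of 1).
        product = 1
        for f in factors:
            if f < 2:
                return False
            product *= f
        return WINDOW_LO <= product < WINDOW_HI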

Making each request cost the client a little bit of CPU time ensures that clients can't take more than a small share of resources. Under the above scheme, a computer wouldn't be able to make more than 10 TCP requests each second. Note that this also thwarts botnet attacks, since subverted computers would grind to a halt and cause their owners to intervene. Either that or it would multiply the minimum size of a successful botnet attack by several orders of magnitude.
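Some rough arithmetic on what that buys (every number below is an assumption for illustration, not a measurement):

    puzzle_time = 0.1                   # seconds of CPU charged per request under the proposal
    max_rate_per_bot = 1 / puzzle_time  # -> 10 requests/second per machine, flat out
    spoofed_syn_rate = 100_000          # assumed bare SYNs/second a single host can emit today
    target_rate = 100_000               # assumed request rate needed to swamp the server

    bots_needed_today = target_rate / spoofed_syn_rate      # ~1 machine
    bots_needed_proposed = target_rate / max_rate_per_bot   # ~10,000 machines at 100% CPU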

Some problems with this idea

Works against spam, not DoS

The proposal is essentially HashCash. Basically, HashCash is a scheme whereby the sender of an email has to include the solution to a SemiHardProblem (partial hash collisions), where one of the parameters to the hard problem is the text of the message (or a digest thereof). Solving the SemiHardProblem requires a significant fraction of a second of CPU resources for the sender, but the solution is verified in microseconds by the receiver. Email filters can be programmed to discard emails without valid solutions as spam (along with other filters, such as accepting emails from known senders, discarding emails with V1AGRA in the header, etc.).
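For a concrete picture, here is a stripped-down hashcash-style stamp in Python (illustrative only: real HashCash defines a specific textual stamp format over SHA-1, and the difficulty constant here is arbitrary; the point is just the mint-expensively / verify-cheaply asymmetry):

    import hashlib
    from itertools import count

    BITS = 20  # difficulty: required number of leading zero bits

    def mint_stamp(message: str) -> int:
        # Sender: search for a nonce whose hash over the message has BITS
        # leading zero bits. Expected cost: on the order of 2**BITS hashes.
        for nonce in count():
            digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
            if int.from_bytes(digest, "big") >> (256 - BITS) == 0:
                return nonce

    def check_stamp(message: str, nonce: int) -> bool:
        # Receiver: a single hash suffices to verify the stamp.
        digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
        return int.from_bytes(digest, "big") >> (256 - BITS) == 0

Because the message (or its digest) is one of the hash inputs, a stamp minted for one message cannot be reused for a different one.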

This prevents spam, but not Denial of Service. The email server has to remember which emails had which hard problem, or else a spammer can use any solved problem to get through the filter. So a DoS attacker (not a spammer), would overload the server with emails and the server would have to remember state, and remembering state is the weakness that most DoS attacks are based upon.

How in hell do you manage to read "keeps state" in "the server dumps state"? But I don't think Scott wrote that passage above, I think it's Ben deeply misunderstanding the situation.

The idea is ridiculously useless

It is completely and irremediably broken. Of course, RK wouldn't know because he has disdain for anything ComputerScience, and he cannot be bothered to write it down properly (meaning at least pseudo-code). So there's this discussion going on at the level of handwaving vis-a-vis the "bottom of protocol stack" and "higher up the protocol stack". It's an example of how not to design (designing from a position of ignorance) coming from our chief InteractionDesigner. -- CostinCozianu

Note how all that "bottom of protocol stack" and "higher up the protocol stack" handwaving crap was introduced by nitwits who can't bother reading the proposal in the first place. If there's a problem with it, it would be more constructive to point out what it is rather than this grandstanding you're so fond of. -- RK

No, you write it down properly, and then you'll see that it's bloody obvious that if the server discards the state, it'll be flooded by easily forged packets regardless. By the way, if you write down what is the logical condition that you try to prove about the protocol, you may be able to get to a half decent design. That would be proof driven design, as recommended by EwDijkstra. -- Costin

No, it would be engineering. Don't think that what you happen to mean by the word is the same as what designers mean. And don't presume that your version of the word 'design' is more authoritative than the one used by a field that names itself after the word.

So you'll get forged packets, so what? It's not like they'll overwhelm temporary storage because under the proposal there isn't any anymore. The only things they can overwhelm (resources they can exhaust till others starve) are CPU and network bandwidth. CPU isn't a problem since the network stack isn't CPU bound. And overwhelming network bandwidth lends itself to a completely different attack pattern than SYN floods. -- RK

The forged packets will make you have the socket open on the server, because you don't know which sockets are from legitimate clients and which are from flooders. And that's DoS for you. -- Costin

Under the proposal, the socket doesn't even open until the client has solved the SemiHardProblem the server requires. There are no open sockets with forged IPs. I don't understand why people persist in assigning the flaws of TCP/IP to a proposal that seeks to redesign it entirely.

Here's the key: "The socket doesn't even open until the client has solved the problem". That would be deriving it by way of "because I think so".


In order to prevent (or generate) a DoS attack, all layers and components of the system must be understood. As pointed out above, many attack vectors work by consuming system resources which a system must make available to outsiders in order to be useful. If I'm running a server, I've got to accept incoming connections so legitimate clients can use my service; until a connection is negotiated a server cannot distinguish between a connection request from a legitimate user and one from an attacker; making excessive connections (or irregular ones which stimulate incorrect behavior in a system) is a good way to attack a system. In order to prevent this sort of attack from occurring, a server would have to distinguish between hostile and friendly connections before the connection is made. At the low levels of the "protocol stack" (particularly at the IP and TCP layers, where many such attacks take place), there is little or no authentication performed - thus spoofing is easy to do and difficult to detect. Application-layer authentication schemes, such as HashCash, are of no use against attacks against the network, transport, or session layers. (Conversely, of course, SYN cookies and other low-level defenses aren't of much use against spam email.)

Under the proposal, you can't make a connection at all until after you've proved that you've invested some amount of computation into it. Authentication is a red herring.

Be careful of possible different meanings of "connection" in this context. One meaning would be the establishment of a session via a protocol such as TCP; a SemiHardProblem could be made a prerequisite to establishing such a connection at the transport/session layer.

But to negotiate such a thing, the host must be open to receiving "connect requests" (which the word "connection" may also refer to) from the outside. And for the protocol to work as described, some state must be maintained between the connect request and the subsequent answer to the challenge; otherwise the server can't verify that the challenge is correct (or that a submitted answer to the challenge is indeed what was asked). If such state is maintained, an attacker could initiate numerous connect requests and never answer the challenge.

One possible workaround to the state-maintenance problem would be for the challenge to be digitally signed by the host; that way a hostile attacker wouldn't be able to "manufacture" challenge answers and avoid the required investment of computation. OTOH, that opens up a new attack vector - just flood the server with bogus answers.
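One way to sketch that workaround without the cost of a public-key signature is a keyed MAC over the challenge, in the spirit of SYN cookies (the field layout, the 60-second lifetime and the verify_puzzle helper below are assumptions for illustration):

    import hashlib, hmac, os, struct, time

    SECRET = os.urandom(32)  # server-local key; never leaves the host

    def issue_challenge(client_addr: str, puzzle: int) -> bytes:
        # Bind the puzzle to the client and a timestamp, then forget
        # everything; the returned token is the only "state", and the
        # client must echo it back along with its answer.
        ts = int(time.time())
        blob = f"{client_addr}:{puzzle}:{ts}".encode()
        return struct.pack("!Q", ts) + hmac.new(SECRET, blob, hashlib.sha256).digest()

    def accept_answer(client_addr: str, puzzle: int, answer, token: bytes) -> bool:
        # Recompute the MAC from the echoed fields; reject stale or forged
        # tokens before doing the cheap puzzle verification.
        ts = struct.unpack("!Q", token[:8])[0]
        if time.time() - ts > 60:
            return False
        blob = f"{client_addr}:{puzzle}:{ts}".encode()
        if not hmac.compare_digest(token[8:], hmac.new(SECRET, blob, hashlib.sha256).digest()):
            return False
        return verify_puzzle(puzzle, answer)  # assumed cheap check, e.g. multiply-and-compare

The server then stores nothing per connect request; flooding it with bogus answers costs it one HMAC computation per packet, which trades the memory-exhaustion vector for a CPU vector that is far harder to saturate.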

''Under the above proposal, there is a global window of challenges and no state is maintained per request. So for example, the server could be sending challenges between 120 gagillion and 121 gagillion. And in the next minute, it's between 121 and 122, and a minute after that it's between 122 and 123, and so on. You've got a minute to answer; after that your challenge is obsolete. As stated, the only state kept around is what window of numbers (challenges) you're servicing, and this is global to all users and O(1).''

Having to sign challenges is much more computationally intensive than just shoving them back out the door.
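A sketch of that O(1) bookkeeping (the window width, rotation period and "gagillion" base are stand-in assumptions):

    import time

    WINDOW_WIDTH = 10**6   # assumed width of one challenge window
    ROTATE_EVERY = 60      # seconds a window stays answerable
    BASE = 120 * 10**18    # stand-in for "120 gagillion"

    def current_window(now=None):
        # Derive the active window purely from the clock: the only state is
        # this handful of constants, shared by all clients.
        slot = int((time.time() if now is None else now) // ROTATE_EVERY)
        lo = BASE + slot * WINDOW_WIDTH
        return lo, lo + WINDOW_WIDTH

    def in_current_window(product, grace_slots=1):
        # `product` is the client's claimed factors multiplied together.
        # Accept the current window plus the previous `grace_slots` windows,
        # so an answer begun just before rotation isn't rejected.
        now = time.time()
        for back in range(grace_slots + 1):
            lo, hi = current_window(now - back * ROTATE_EVERY)
            if lo <= product < hi:
                return True
        return False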


No matter how you define "connection" or "socket" in this protocol, there is a limit to the number of initial requests a server can respond to in some time period. CPU and memory may not be the limiting factors, but network bandwidth is finite. All DoS nodes have to do to push the server to that limit is send initial requests. The protocol does nothing to restrict their ability to do that. Once the server's limit is reached, legitimate requests can't be handled. This should be obvious. -- EricHodges

And network bandwidth at that level is under the auspices of IP, not TCP. You'd have to re-engineer IP to thwart that.

But you haven't, so you haven't prevented DoS attacks. And IP doesn't matter. Any network will have a bandwidth limit regardless of protocol. -- EH


May I ask a question that is not directly related to the aspects discussed so far? Yes, thanks. If this concept was adopted in general, wouldn't this immediately mean that rich clients get better service than poor ones? And with our society having at least 6 orders of magnitude in income, wouldn't that put the poor quickly below the ability to use the net for anything useful? Can this concern be addressed? -- GunnarZarncke

No, because you're not making clients pay in money, only in computing power. And computing power only has a spread of 2 or 3 orders of magnitude at best. And income has a spread of 9 orders of magnitude, not 6. You forgot to count the third world. -- RK

Discussion continued on LogarithmicWealth.


FebruaryZeroSix

CategorySecurity CategoryInternet CategoryDistributed

