Denial Of Service Existence Proof

On MakeTheClientPay, there is much discussion on denial of service attacks, and a scheme is proposed for thwarting some types thereof.

This page is an attempt at a proof that such schemes cannot in general work. Please BeConstructive with your edits to this page.


OK, here is a proof, perhaps in outline only, of why "Denial of Service" attacks cannot be thwarted by a "Client Must Pay" scheme. If you spot a hole, please be specific and constructive.

1. Let S be a server offering a service, protected by some "client must pay" scheme P.
2. Let A be an attacker who can marshal arbitrary resources. A can then send arbitrarily many requests R, each carrying whatever payment P demands, so that each request is indistinguishable from a legitimate one.
3. S must expend finite resources merely to receive each R and to evaluate its payment under P.
4. Therefore A can exhaust S's resources through sheer volume, and the service is denied.

This seems independent of the mechanism of P, and to prove that a DoS attack can succeed by definition.


However, this proves a somewhat useless truth: if A has enough resources, he can always simply overwhelm S, should he wish to do so. Against brute force there is no defense, all the more so if the attack is carried out with essentially stolen resources, in the form of a distributed DoS attack using a botnet.

Traditionally, only attacks where the attacker expends considerably fewer resources himself than he ties up at the server were called DoS attacks. The classic example is SynFlooding?, where A sends only a few small packets and is rewarded with a completely clogged server. To some extent this is still possible with every web server, which can be coaxed into forking a thread and probably doing a lot of computation by just sending one small request. Against attacks of this type, the scheme described in MakeTheClientPay may well help.
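To make the asymmetry concrete, here is a toy sketch in Python (everything here - the port, the handler, the stand-in computation - is invented for illustration, not taken from any real server). A few small packets from the client commit the server to a whole thread plus a burst of real work before any validation happens:

  import socket
  import threading

  def expensive_work(request: bytes) -> None:
      sum(i * i for i in range(1_000_000))  # stand-in for rendering, DB queries, etc.

  def handle(conn: socket.socket) -> None:
      request = conn.recv(1024)  # the attacker sends a few bytes...
      expensive_work(request)    # ...and the server does the heavy lifting
      conn.close()

  def serve(port: int = 8080) -> None:
      srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      srv.bind(("", port))
      srv.listen(5)
      while True:
          conn, _addr = srv.accept()
          # One thread per request: cheap for the client, costly for the server.
          threading.Thread(target=handle, args=(conn,)).start()

The client's cost is one small request; the server's cost is a thread plus the computation. That is exactly the ratio a MakeTheClientPay scheme tries to invert.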


Since the definition of DoS was contested, here are a few definitions taken from the 'net:

* "A 'denial-of-service' attack is characterized by an explicit attempt by attackers to prevent legitimate users of a service from using that service." (CERT/CC)
* "The prevention of authorized access to a system resource or the delaying of system operations and functions." (RFC 2828, Internet Security Glossary)

If your definition differs substantially from these, it would appear you are talking about something completely different. If it doesn't, the original reasoning seems to stand.


I removed this line of reasoning from MakeTheClientPay because I didn't want to muddy the waters over there; I wanted a clear discussion of a single line of reasoning. However, to answer some of the points you raised over there:

It may be a question of economics - you may be absolutely right - but that doesn't show where the line of reasoning fails.


The flaw in your proof is here: the assumption that A can marshal arbitrary resources.

RK hasn't really been clear on when and where this proposal applies. It does not work against DDoS attacks. The assumption in HashCash and similar algorithms is that the client cannot marshal significantly more computing resources than the legitimate users in aggregate. The type of DoS attack addressed by HashCash etc. is one where a rogue client uses an abundant resource (CPU time) to take up a scarce resource (TCP connection state, user attention). It's based on forcing a client to pay exponentially more time to establish a connection than the server pays to service it. If the client sends an invalid certificate, the server can just drop the connection, which takes fewer computational resources than servicing a legitimate client.
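Here is a minimal sketch of that asymmetry, assuming a HashCash-style proof of work over SHA-256 (the function names and the 20-bit difficulty are illustrative choices, not part of any particular proposal). Minting costs the client on the order of 2^20 hash evaluations; verifying - or rejecting - costs the server exactly one:

  import hashlib
  import os

  DIFFICULTY = 20  # required leading zero bits; illustrative value

  def leading_zero_bits(digest: bytes) -> int:
      bits = 0
      for byte in digest:
          if byte == 0:
              bits += 8
          else:
              bits += 8 - byte.bit_length()
              break
      return bits

  def mint(challenge: bytes) -> bytes:
      # Client side: brute-force a nonce (expected ~2**DIFFICULTY hashes).
      while True:
          nonce = os.urandom(8)
          if leading_zero_bits(hashlib.sha256(challenge + nonce).digest()) >= DIFFICULTY:
              return nonce

  def verify(challenge: bytes, nonce: bytes) -> bool:
      # Server side: one hash; on failure, drop the connection and move on.
      return leading_zero_bits(hashlib.sha256(challenge + nonce).digest()) >= DIFFICULTY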

It does absolutely nothing about trojans that can commandeer thousands of computers to look like legitimate users. There's no defense against these - all they have to do is flood the network pipe until nobody else can get through. But it can be a significant deterrent against the single guy in Russia with a spam-mailer, a DSL connection, and a few thousand e-mail addresses. And one of the proposals is to create a "beacon" - a prefix that must be included in the hashed string - and change it every week. Under this proposal, your zombie DDoS machines would need to call home to their target every week, exposing themselves and making it obvious that a DDoS attack is in the making. -- JonathanTang
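The beacon could be as simple as folding a weekly value into the challenge that clients must mint against. A hypothetical sketch (the week length and the derivation from a server secret are made up for illustration):

  import hashlib
  import time

  WEEK = 7 * 24 * 3600

  def current_beacon(server_secret: bytes) -> bytes:
      # Published by the target; changes every week, so it cannot be precomputed.
      week_number = int(time.time()) // WEEK
      return hashlib.sha256(server_secret + week_number.to_bytes(8, "big")).digest()

Clients would mint against current_beacon(...) plus their request, so tokens minted last week are worthless and every zombie must re-contact the target weekly.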

Well, that's not quite true. Let's put everything in terms of HashCash because it makes things clearer and less controversial. Even without the beacon, hashcash makes spamming more difficult for botnet operators.

If hashcash increases the cost of sending an email by a factor of 100, then the number of bots you need in your botnet to produce the exact same result is multiplied a hundredfold. That's a big factor, and in the real world it matters. It matters because it means that the total number of spam attacks that can be performed in the world is suddenly decreased a hundredfold.
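In round numbers (all of them invented for illustration):

  emails_per_bot_per_day = 1_000_000  # assumed unthrottled sending rate
  hashcash_slowdown = 100             # assumed per-message cost factor

  throttled_rate = emails_per_bot_per_day // hashcash_slowdown
  print(throttled_rate)                            # 10000 emails per bot per day
  print(emails_per_bot_per_day // throttled_rate)  # 100 bots to match one old bot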

Now, there's another thing about botnets which is extremely important. It is that bots are stolen property. That doesn't seem to matter from a technical point of view, but it sure matters a lot from a politico-economic point of view. Because if clients are stolen (bots) then putting pressure on the bots until they are unusable to their legitimate owners will force those owners to secure their computers and pull them out of the botnet.

Now, botnet operators still have the option of operating invisibly but only at the cost of increasing the size of their botnet by whatever arbitrary factor the hashcash user decides to price their service at. Botnet operators aren't very smart people and so it's inevitable that they will screw up their stealth operations, revealing the zombie state of many of their bots and making botnets less viable. -- RK

Right, but that's not the type of DDoS attack being assumed above. That attack is to send a knowingly false certificate to force servers to spend CPU resources verifying a bunch of random bits. Instead of the price ratio being (cost of generating a certificate) / (cost of verifying the certificate + servicing the request), it's (cost of generating a string of 0s) / (cost of rejecting the certificate). The latter is still in the attacker's favor, but by a much smaller margin.

A server's still way better off employing HashCash or a similar scheme than not, but it doesn't completely eliminate the threat. You can prevent attention theft from spam: the limiting resource becomes CPU, so the mail server will overload long before the messages reach recipients. You can also prevent the SYN-flood attack that you mentioned on the original page: the bandwidth pipe overloads long before the server runs out of buffer space for connection state. But you can't prevent all DDoS attacks, and it's the trivial cases like bandwidth overload that Eric etc. are harping on.

This is inherent as long as a.) it is possible to gain control of a large number of machines and b.) it takes a non-zero amount of resources to reject a connection. Simply throw enough machines into pinging a site until it overloads. If Google wanted to take down your home PC, for whatever reason, there's no way you could stop them because they have 100,000 machines and you have 1. Even if you set your firewall to block all traffic from Google, it'd just saturate your incoming pipe and router. The only way to prevent this is to prevent attackers from gaining control of a large number of machines, which is a security issue. -- JonathanTang


From a security point of view, the only thing wrong with D/DoS floods is the fact that the zombies are stolen. If the only solution is to prevent their being stolen, then this is literally not a problem. At this point I can wash my hands of the whole thing.

And at this point, Richard has moved the goalposts. His initial claim to stop D/DoS attacks hinges on his idiosyncratic definition of what DoS means, and now that he defines it as something other than what the rest of the world does, he can claim success.


The fact that S must allocate resources to receiving R is the core of the problem. S remains vulnerable to exhaustion of those resources no matter how efficiently it evaluates P. No matter what protocol level we look at, a server is vulnerable to DoS attack as long as receiving information consumes finite resources.

-- EricHodges

Does it take more resources for an attacker to send a falsified payment than it takes for the victim to verify that the payment is false? -- AnonymousDonor

A falsified payment is no more than a memset (+ network send); verification means computing the hash of that falsified value. Hash functions are usually pretty fast, but there's no way it'll be faster than a memset. (Actually... conceivably it's possible if memory reads are significantly faster than memory writes, enough to make up for the arithmetic computations of the hash function. I don't know any architectures where this is the case.) -- JonathanTang

The most efficient verification would be a compare (assuming that a given window can be assigned a single valid value). That's the best possible case - where network bandwidth is the limiting factor. -- EH
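A rough way to check both claims, in Python (SHA-256 stands in for the hash function, the "expected value" is hypothetical, and absolute timings vary by machine):

  import hashlib
  import os
  import timeit

  BOGUS = os.urandom(32)                         # attacker's no-work "payment"
  EXPECTED = hashlib.sha256(b"window").digest()  # hypothetical valid value

  # Attacker's cost: materialize 32 bytes (roughly a memset, plus the send).
  forge = timeit.timeit(lambda: bytes(32), number=100_000)

  # Server's cost per Jonathan: hash the bogus payment before rejecting it.
  hash_reject = timeit.timeit(lambda: hashlib.sha256(BOGUS).digest(), number=100_000)

  # Server's cost per Eric: a straight compare against a known valid value.
  compare_reject = timeit.timeit(lambda: BOGUS == EXPECTED, number=100_000)

  print(f"forge {forge:.3f}s, hash {hash_reject:.3f}s, compare {compare_reject:.3f}s")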


A few points:

Jonathan: why is it not true that an attacker can marshal essentially arbitrary resources? Which part do you disagree with? If I have access to a fat pipe and a really, really, really fast machine, I can make essentially arbitrarily many requests. Are you concerned about the limit on the communication capacity? CPU capacity? Do you want me to change the word "arbitrary" to tighten the statement?

You wrote that not all attacks are the same. I'm not saying that all attacks are identical. I am saying that loss of service is the result of insufficient server resources to meet the demands being placed on the server. The causes may be different, but the line of reasoning I was trying to give is independent of that.

I'm trying to clarify the situations under which the scheme described on MakeTheClientPay does or does not work. If you have access to a fast machine and a fat pipe, bigger than that of the server you're attacking, you will always be able to overwhelm it through sheer network traffic. But that's outside the scope of what Richard's talking about; he acknowledges that, then glosses over it as unimportant (I'm not certain I agree, but I don't want to debate the relative importance of various types of attacks, so I let it slide). You're all talking past each other.

Can we frame it like this?

1) The attacker can marshal arbitrarily more resources (bandwidth, CPU) than the server and its legitimate clients combined.
2) The attacker's resources are bounded, roughly comparable to those of the legitimate clients in aggregate.

Your assumption that I quoted holds in case 1), but your proof isn't being disputed in that case (I'm not sure we're making this clear). In case 2, it's not the case that one side can marshal arbitrary resources, by definition. -- JonathanTang

I don't gloss over it as unimportant. It's very important; it's just not my concern, because I claim that this is not a failure. I claim that you should be able to DoS a server through sheer power.

The correct solution to some nobody denying a popular service is not to whine about it and try to prevent such attacks using some sort of totalitarian scheme (and that is exactly what would be required). The correct solution is to distribute the service across every computer that would have it. That too is just basic politico-economics.

If the web and DNS architectures don't allow such massive distribution, then the failure doesn't lie in IP but in the web and DNS architectures - which, incidentally, we know to be broken for entirely different reasons. So let's fix those and we'll be done with DoS, at least for any services we care about.

If you ever fix the web and DNS (replace them with something radically different) then something downright interesting happens. What happens is that DDoS no longer applies to the vast majority of services but becomes a problem only for a very specific type of service: corporate services. A corporation can't afford to hand over even partial control of its data and processes to third parties, because that would inevitably cut into its revenues. Nor can it afford to arbitrarily add more computing power, because that would increase its costs. Everyone else is safe but corporate services aren't, and that is something downright interesting. If DDoS is a corporate problem, why should anyone care about it? -- RK

Wait a second, let's make this explicit. You're claiming that your solution to DoS attacks doesn't prevent some DoS attacks (those that consume all available network bandwidth) because that kind of DoS attack shouldn't be prevented? Do I understand you correctly? -- EH


CategorySecurity CategoryInternet CategoryDistributed

