Thursday, September 11, 2008

Core Stateless Fair Queuing

This paper provides a quick-and-dirty solution to the large amount of processing power needed for Fair Queuing. Instead of doing per-flow processing at every router, this system uses the routers along the border of an AS to estimate the rate of traffic each flow is generating. The border routers label packets with this estimate, and the internal routers within the AS can then drop packets probabilistically based on their current congestion: when a packet arrives, the router uses exponential averaging to estimate what the "fair share" of traffic should be, and drops the packet with a probability that grows with how far the flow's labeled rate exceeds that share.
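To make the scheme concrete, here is a minimal sketch of the two pieces as I understand them from the paper: the edge router's exponentially averaged per-flow rate estimate, and the core router's drop decision. The class and function names, and the averaging constant `K`, are my own choices, not the paper's.

```python
import math
import random

K = 0.1  # averaging constant in seconds; this value is my assumption


class RateEstimator:
    """Exponentially weighted per-flow rate estimate, updated on each
    packet arrival (the edge-router estimator, as I read it)."""

    def __init__(self):
        self.rate = 0.0       # bytes per second
        self.last_time = None

    def update(self, now, pkt_bytes):
        if self.last_time is None:
            # First packet: no interarrival time yet, so no estimate.
            self.last_time = now
            return self.rate
        T = now - self.last_time  # time since the previous packet
        self.last_time = now
        w = math.exp(-T / K)      # weight on the old estimate
        self.rate = (1.0 - w) * (pkt_bytes / T) + w * self.rate
        return self.rate


def drop_probability(label_rate, fair_share):
    """Core-router rule: drop with probability max(0, 1 - alpha / r),
    where r is the rate carried in the packet label and alpha is the
    router's current fair-share estimate."""
    if label_rate <= fair_share:
        return 0.0
    return 1.0 - fair_share / label_rate


def should_drop(label_rate, fair_share):
    return random.random() < drop_probability(label_rate, fair_share)
```

The drop rule has the nice property that a flow sending at rate r gets expected throughput min(r, alpha), which is exactly the fair-queuing allocation, without the core router keeping any per-flow state.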

This paper was not a fantastic read, for a couple of reasons. First and foremost was the confusing notation used to describe its exponential averaging scheme. Second were the tiny graphs presenting the results (I can barely make out which line is supposed to be which).

This paper very nicely addresses the issue of fair sharing of bandwidth for small regions, but I find it unlikely that such a system would extend to Tier 1 ISPs, which undoubtedly have enormous amounts of data flowing across their borders at any given time.

This paper also leaves out an important issue that was addressed by the Fair Queuing paper: latency for connections that are well under their fair share. I suspect that this kind of queuing does absolutely nothing in that regard, since it never rearranges the packets it receives. In that case, its best contribution is the ability to punish TCP users who try to exceed their fair share of bandwidth. However, it still can't do this to UDP users, which is an important issue.
