Monday, September 15, 2008

RED

This paper blathers on for pages and pages about nothing at all. I find it extremely boring to read. That said, it describes a Random Early Detection algorithm for detecting congestion in gateways and signalling the nodes attached to the gateway by "marking" packets, which consists of either setting a bit in the header or dropping the packet. To accomplish this, it uses exponential averaging to keep track of the average queue length. This allows the algorithm to behave well in cases of "bursty" traffic, while still controlling the flow should it grow too large. There is a minimum threshold and a maximum threshold. When the calculated average is below the minimum threshold, no packets are marked. Above the maximum threshold, all packets are marked. Between the two thresholds, packets are marked with a probability calculated by the method described below.
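To make that concrete, here is a rough sketch of the averaging and threshold logic in Python (the variable names and the example parameter values are my own, not the paper's exact notation):

    # Exponentially weighted moving average of the queue length, updated on
    # each packet arrival. A small weight w_q means a short burst barely moves
    # the average, so bursty traffic is not immediately penalized.
    def update_average(avg, current_qlen, w_q=0.002):
        return (1 - w_q) * avg + w_q * current_qlen

    # Which of the three marking regimes is the gateway in?
    def marking_regime(avg, min_th=5, max_th=15):
        if avg < min_th:
            return "mark nothing"
        elif avg >= max_th:
            return "mark every packet"
        else:
            return "mark with some probability p"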

As the average queue length increases, the probability of marking a packet increases linearly from 0 to 1 between the minimum threshold and the maximum threshold (the paper actually caps this at a configurable maximum probability rather than 1, but the idea is the same). Thus, the formula looks something like p = (avg_qlen - minthresh)/(maxthresh - minthresh). However, this formula generally leads to clumped markings. Since we would like the markings to be spread out evenly, we count the number of packets sent since the last marking, and then multiply this probability p by a factor of 1/(1 - count * p). With this formula, we can achieve much better spacing between marked packets, and still (by tweaking the parameters) mark the same number of packets overall. I didn't see a great mathematical justification for this in the paper. You can now skip reading the paper. That's really all there is to it. The rest is fluff.
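Putting the two formulas together, the marking decision for the in-between regime might look like the following sketch (again, the parameter values and the clamping at the edges are my own additions, not taken from the paper):

    import random

    def mark_probability(avg, count, min_th=5, max_th=15):
        # Assumes min_th <= avg < max_th (the in-between regime).
        # Base probability grows linearly from 0 to 1 between the thresholds.
        p = (avg - min_th) / (max_th - min_th)
        # count is the number of packets sent since the last mark; dividing
        # by (1 - count * p) spreads the marks out instead of clumping them.
        if count * p >= 1:
            return 1.0
        return min(p / (1 - count * p), 1.0)

    def should_mark(avg, count):
        return random.random() < mark_probability(avg, count)

As far as I can tell, with count reset to zero after each marked packet, the intervals between marks come out roughly uniformly distributed rather than geometrically distributed, which is the whole point of the extra factor.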

I'd really recommend removing this paper from the syllabus. The idea is simple enough that it can be explained in five minutes, and the results are not fascinating. Furthermore, I find the graphs very hard to read and interpret.

The only benefits of this algorithm are (1) the small amount of state it needs (an algorithm like fair queuing needs a lot of state, but this one needs only a few variables per router) and (2) its ability to handle "bursty" traffic gracefully. The good news is that the algorithm is quite easy to implement on routers that need to handle many, many connections and cannot be bothered to keep state for all of them. The bad news is that I can't see any other purpose for it.

The paper compares the algorithm to random dropping and FCFS, which are not fantastic policies to begin with. I would be more interested to see a comparison of this algorithm to fair queuing or CSFQ. In particular, an analysis of the processing time per packet would be a very interesting paper to read.

2 comments:

Randy H. Katz said...

I'm afraid that I have to disagree with you about the blathering and the worthlessness of the proposed ideas. I actually think this is a nicely written paper, and not that difficult to understand. There are some very powerful ideas here, but the most important is the idea of signaling impending congestion. Waiting for it to happen by filling queues is just a bad idea. Hopefully, we have convinced you in class today that bursty sources are of interest and should experience a fair bandwidth share in the network.

ozan said...

I hope I didn't leave the impression that I thought the ideas in the paper were worthless. When I said that the paper "blathered on", what I intended to say was that the paper could easily have been 1/5th of its current length without losing any content.