Thursday, September 3, 2009

Analysis of the Increase and Decrease Algorithms for Congestion Avoidance in Computer Networks

[Paper link]

Congestion control is designed to keep network devices from being overwhelmed by traffic; a congested network sees high rates of dropped packets as queues overflow. Congestion control is a dynamic resource management problem in which hosts modulate the rate of their packet transmissions to reflect the current state of the network. This paper focuses on congestion avoidance, which aims to keep traffic at a manageable level that stays safely below the point of congestion. (This is opposed to congestion recovery, which operates right at the edge of an overly congested load.) More specifically, the authors analyze the increase/decrease algorithms of the binary feedback scheme, a decentralized decision-making system. In the binary feedback scheme, network resources monitor their own state and, whenever they are operating above their optimal range, set the congestion bit to 1 in the packets passing through them. Users then adjust their sending rates according to an increase/decrease algorithm.
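If I'm remembering the paper's notation right, each user i adjusts its load x_i(t) once per feedback cycle with a linear control, where y(t) is the congestion bit fed back by the network:

```latex
x_i(t+1) =
\begin{cases}
  a_I + b_I \, x_i(t) & \text{if } y(t) = 0 \quad \text{(no congestion: increase)} \\
  a_D + b_D \, x_i(t) & \text{if } y(t) = 1 \quad \text{(congestion: decrease)}
\end{cases}
```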

They evaluate four criteria for the control algorithms: efficiency, fairness, distributedness (binary feedback vs. universal knowledge of the network), and convergence. Their questions based on these criteria are: (1) "What are all the possible solutions that converge to efficient and fair states?" and (2) "How do we compare those controls that converge?" Their main result is that increase should be additive and decrease should be multiplicative (additive increase/multiplicative decrease, or AIMD).
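In their notation that means (as I recall) b_I = 1 with a_I > 0 for the increase, and a_D = 0 with 0 < b_D < 1 for the decrease. To convince myself why that combination converges to fairness, here's a toy simulation (my own sketch, not from the paper; the capacity and the ALPHA/BETA constants are made-up values): two users share a link, both see the same binary congestion signal, and their allocations drift toward the equal-share, full-utilization point.

```python
# Toy AIMD sketch (my own, not from the paper): two users share a link of
# normalized capacity 1.0. Each round, the "network" sets the congestion bit
# iff the total load exceeds capacity; both users see the same bit and react
# with additive increase / multiplicative decrease.

CAPACITY = 1.0   # hypothetical link capacity (normalized)
ALPHA = 0.05     # additive increase step a_I (made-up value)
BETA = 0.8       # multiplicative decrease factor b_D (made-up value)

def aimd_step(x, congested):
    """One increase/decrease update for a single user's load x."""
    return x * BETA if congested else x + ALPHA

def simulate(x1, x2, rounds=200):
    for _ in range(rounds):
        congested = (x1 + x2) > CAPACITY  # the binary feedback bit y(t)
        x1 = aimd_step(x1, congested)
        x2 = aimd_step(x2, congested)
    return x1, x2

if __name__ == "__main__":
    # Start far from fair: user 1 hogs the link, user 2 has almost nothing.
    x1, x2 = simulate(0.9, 0.1)
    print(f"user 1 share: {x1:.3f}, user 2 share: {x2:.3f}")
    # After many rounds the two shares end up nearly equal, oscillating
    # around the link capacity.
```

The intuition is that additive increase moves both users up by the same amount, while multiplicative decrease shrinks the larger allocation by more, so the gap between users decays while the total load oscillates around capacity.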

- Notably, this is an example of something that goes against the (strict?) end-to-end argument. Although the hosts do the actual throttling according to the increase/decrease algorithm, the scheme still relies on a binary feedback bit in the packet header whose value is set by routers (in the "dumb" core). However, it seems unreasonable to think that any given host could see enough of the network to make such a judgment without advice from routers (which is what the authors are getting at when they say they desire "distributedness"). Sometimes the routers simply know the traffic conditions best.

- This model assumes that users will respond honestly and in everyone's best interests when told that the network is being overwhelmed. What if users free-ride on the fact that other users will rate-limit, and decline to limit themselves in order to grab more bandwidth? Oddly enough, the authors don't mention this in their "Practical Considerations" section.

- Not a huge fan of this paper. It's mathy and details-oriented, but it's not clear to me that understanding those details (and not just the principles/guidelines derived from them) is useful unless you are directly doing research in this area, especially since they fail to consider practical things such as misbehaving hosts.

