[Paper link]
This paper discusses the question of what responsibilities should be given to a communication system vs. the applications using it. The authors argue that most functions are better suited to the applications than to the communication system, and they call this the "end-to-end argument." A motivating example they give is file transfer reliability: should the communication system checksum and internally confirm each packet that is sent across the network, or should the sending application checksum the file once and have the recipient application request a retransmit of the whole file if its checksum fails to match? Since transmission errors are rare, the occasional cost of retransmitting a whole file does not justify the constant cost of per-packet reliability inside the network. Additionally, one must consider problems external to the communication system, such as the reliability of the hard drives; to account for these, the host applications would need to perform their own integrity check even if the communication system provided its own guarantee of reliability. The "end-to-end argument" is that many functions (e.g., reliability) are better performed by the end applications than by the communication system itself. The communication system should offer these functions only as performance enhancements, and one needs to be careful that the performance "enhancement" actually works out to be beneficial and not overly complex.
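To make the example concrete, here is a toy sketch of the end-to-end check (my own illustration, not code from the paper). The made-up flaky_channel stands in for every unreliable step between the two applications -- links, routers, buffers, even the disks.

```python
# Minimal sketch of an end-to-end reliability check, assuming the whole
# transfer path is modeled as one lossy channel. All names are illustrative.
import hashlib
import random

def flaky_channel(data: bytes, error_rate: float = 0.3) -> bytes:
    """Deliver data, occasionally corrupting one byte along the way."""
    if data and random.random() < error_rate:
        buf = bytearray(data)
        buf[random.randrange(len(buf))] ^= 0xFF  # flip one byte
        return bytes(buf)
    return data

def transfer(data: bytes) -> bytes:
    """Checksum the file once; retransmit the whole file if the check fails."""
    digest = hashlib.sha256(data).hexdigest()
    while True:
        received = flaky_channel(data)
        if hashlib.sha256(received).hexdigest() == digest:
            return received  # end-to-end check passed; no per-hop acks needed
        # mismatch: the receiving application simply asks for the file again

if __name__ == "__main__":
    original = b"some file contents" * 1000
    assert transfer(original) == original
```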
In the previous paper on DARPA Internet design, the author also discussed this issue as a motivating factor for splitting up TCP/IP. Different applications (e.g., voice vs. rlogin) need different reliability guarantees; rlogin will wait to receive its packets in order, whereas a voice application would rather drop a packet and insert silence than delay playback waiting for an out-of-order packet. For this reason, TCP and IP were separated so that IP became a lower-level building block, with TCP and alternatives like UDP layered on top of it, each handling reliability as it saw fit. I like this paper because it formalizes the rationale behind the TCP/IP separation and makes it a more general principle.
The authors also apply the end-to-end argument to encryption, duplicate message suppression, guaranteed FIFO message delivery, and transaction management. The general theme is the same as with the file transfer example: the end hosts need to perform the function anyway (and can probably do it better, since they have more semantic information), so doing the work in the communication system would just be wasteful duplication. Another important point is that not all applications need the same underlying "help" from the communication system, so what is a useful performance enhancement for one type of application might slow down another.
This same type of discussion could be applied to network intrusion detection systems (NIDSs) -- should the work of intrusion detection be done on the network or by the host? Doing the work on a dedicated NIDS saves the hosts computation time, and it's easier to keep a NIDS programmed and updated than it is to keep host security software updated. On the other hand, understanding network traffic sometimes means the NIDS has to infer application behavior, which can become very expensive and open it up to DoS attacks. Instead of having a NIDS do bifurcation analysis, it might make more sense to have the host be responsible for filtering its own incoming traffic.
The paper does acknowledge that not EVERYTHING should be moved to the end applications -- there is always a tradeoff. For example, on a very lossy link, retransmitting a single packet inside the network makes more sense than retransmitting an entire file end to end. I would be interested in seeing a formal cost/benefit analysis that could be used to decide when one approach makes more sense than the other.
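As a rough sketch of the kind of analysis I have in mind (purely my own back-of-envelope model, not from the paper): assume each packet is independently corrupted with probability p, and compare the expected number of packets sent when the only recovery is an end-to-end whole-file retransmit versus per-packet retransmission inside the network.

```python
# Back-of-envelope cost model (my own simplification): a file of n packets,
# each independently corrupted with probability p. Ack overhead is ignored.
def end_to_end_cost(n: int, p: float) -> float:
    success = (1 - p) ** n   # probability the whole file arrives intact
    return n / success       # expected whole-file attempts, n packets each

def per_packet_cost(n: int, p: float) -> float:
    return n / (1 - p)       # each packet is resent until it gets through

for p in (1e-6, 1e-4, 1e-2):
    n = 1000
    print(f"p={p:g}: end-to-end={end_to_end_cost(n, p):.1f}, "
          f"per-packet={per_packet_cost(n, p):.1f}")
```

Under this toy model the two approaches cost about the same while n*p stays small, and in-network retransmission only starts to pay off once the per-packet loss rate (or the file size) gets large -- which matches the paper's framing of in-network reliability as a performance enhancement rather than a correctness requirement.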
I also felt that a slightly more formal study of where we lose and where we gain when we push down functionality would have made the paper more compelling. Just how unreliable a network can we tolerate before a file transfer program becomes unusable? Would we see a drop in stolen credit card numbers if the network enforced encryption in addition to whatever browser-level encryption was going on?