[Paper]
SEATTLE claims to "...[achieve] the best of both worlds: The scalability of IP combined with the simplicity of Ethernet." On one hand, Ethernet is simple to manage thanks to flat addressing. On the other hand, Ethernet doesn't scale well -- bridges rely on broadcast messages (e.g. ARP requests and DHCP), and forwarding paths must follow a spanning tree. To work around this, network admins break their large networks into small Ethernet networks connected by IP, or use VLANs. The authors argue that neither solution is good enough: both are inefficient and harder to manage, since admins now have to worry about addressing. SEATTLE is intended to provide the same simple plug-and-play semantics as Ethernet while also scaling to large networks. The key mechanism is a one-hop DHT built on top of the switches' link-state protocol: host information (location, IP-to-MAC mappings) is stored at the switch whose ID hashes closest to the host's address, so directory lookups replace flooding.
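To make the lookup idea concrete, here is a minimal, hypothetical sketch (my own illustration, not the authors' code) of a consistent-hashing directory along the lines of what SEATTLE builds over its switches; the class and function names are invented.

```python
# Minimal sketch of a SEATTLE-style one-hop DHT lookup: every switch hashes a
# host's MAC address onto a ring of switch IDs, and the switch that owns that
# point on the ring stores the host's location. Names/structure are my own.
import hashlib
from bisect import bisect_left

def h(key: str) -> int:
    """Map a string (switch ID or host MAC) to a point on the hash ring."""
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class SeattleDirectory:
    def __init__(self, switch_ids):
        # Sorted ring of (hash, switch) pairs. Because link-state routing gives
        # every switch the full switch topology, a lookup is a single overlay hop.
        self.ring = sorted((h(s), s) for s in switch_ids)

    def resolver_for(self, mac: str) -> str:
        """Return the switch responsible for storing this MAC's location."""
        points = [p for p, _ in self.ring]
        i = bisect_left(points, h(mac)) % len(self.ring)
        return self.ring[i][1]

directory = SeattleDirectory(["sw1", "sw2", "sw3", "sw4"])
# Instead of flooding an ARP request, the ingress switch asks the resolver
# switch for host A's location, then tunnels packets directly to it.
print(directory.resolver_for("00:1a:2b:3c:4d:5e"))
```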
[ to be done: further summarization ]
My criticisms of their simulations:
-- For the packet-level simulation, they start with a real LBNL trace and then inject extra traffic into it: fake hosts sending traffic to random destinations. I find this curious, considering their earlier claim: "In enterprise networks, hosts typically communicate with a small number of other hosts [5], making caching highly effective." A uniform-random workload is far less cache-friendly than that (see the sketch after this list).
-- They also have to make some kludgy assumptions about the number of hosts connected to each switch due to the anonymization of the traces. I suppose they can't really help that; anonymized traces are a problem for the field in general.
-- They have really huge error bars in many of their graphs. Is this normal for network simulations, due to the nature of error in the physical links, etc.? Notably, the SEATTLE results have tight error bars even when ROFL and ETH have huge ones (e.g. Fig. 5a and 5c)...is their simulation somehow skewed in favor of their own results?
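Here is the rough, hypothetical sketch referenced above: a tiny LRU-cache simulation (my own, not from the paper, with made-up numbers) showing why uniformly random destinations largely defeat the location caching the authors rely on.

```python
# Illustration of why the injected random traffic matters: an LRU location
# cache that works well when hosts talk to a few popular destinations degrades
# badly when destinations are drawn uniformly from all hosts.
import random
from collections import OrderedDict

def hit_rate(destinations, cache_size=50):
    cache, hits = OrderedDict(), 0
    for d in destinations:
        if d in cache:
            hits += 1
            cache.move_to_end(d)
        else:
            cache[d] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)  # evict least recently used
    return hits / len(destinations)

random.seed(0)
hosts = list(range(10_000))
# Skewed workload: traffic concentrated on a handful of popular servers.
skewed = [random.choice(hosts[:20]) for _ in range(100_000)]
# Random workload: destinations drawn uniformly from all hosts.
uniform = [random.choice(hosts) for _ in range(100_000)]
print(f"skewed hit rate:  {hit_rate(skewed):.2f}")   # near 1.0
print(f"uniform hit rate: {hit_rate(uniform):.2f}")  # near cache_size / len(hosts)
```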
Sorry about the lack of summaries... I had prelims on Tuesday and have two papers due Friday... multi-tasking is not going so well.
Comments:

Prelims! I was wondering where you were on Tuesday.

I had not picked up your point on the random workload before. It is true that most hosts communicate with a small number of servers and rarely (if ever) communicate with each other. Nevertheless, I guess you could consider that a random workload constitutes the worst case.

BTW, google "multitasking and focus" and you will find many scholarly studies backing up your observation!