Some questions from a discussion in my office this morning.
1. Can you give an example of redundant retransmissions caused by coarse feedback?
During class I explained what coarse feedback means but never gave a concrete example of how it could lead to redundant retransmissions. Vern Paxson gives a real example in his thesis (pages 317-318, ftp://ftp.ee.lbl.gov/papers/vp-thesis/dis.pdf). I could not explain it any better than he does, so let me just refer you to the thesis for the example.
2. To pessimistically estimate the loss rate of an alternative path that consists of default paths with loss rates p1, p2, .., pk, can we really sum up the loss rates (Slide 64, Lecture 10)?
I stand corrected. We assume that losses on the paths are independent. Thus, the loss rate of the alternative path, i.e., the probability that a packet is lost on at least one of these default paths, should be 1 – (1-p1)(1-p2)..(1-pk).
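A quick numerical check (with made-up loss rates) shows why the corrected formula matters: simply summing the loss rates overestimates the true loss rate, and can even exceed 1 for lossy paths.

```python
# Loss rate of an alternative path made up of k default paths with
# independent per-path loss rates p1..pk.
def combined_loss_rate(ps):
    # Probability the packet survives every path, complemented.
    survive = 1.0
    for p in ps:
        survive *= (1.0 - p)
    return 1.0 - survive

ps = [0.1, 0.2, 0.05]           # made-up example loss rates
naive = sum(ps)                  # pessimistic sum: ~0.35
exact = combined_loss_rate(ps)   # 1 - 0.9*0.8*0.95 = ~0.316
print(naive, exact)
```

The gap between the two grows with the number of paths; with many lossy hops the naive sum quickly becomes meaningless.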
In this lecture, we are going to first look at how DNS works in detail through a measurement study, and then see how we can exploit DNS to help us measure the latency between any pair of end hosts on the Internet.
J. Jung, E. Sit, H. Balakrishnan, R. Morris, “DNS Performance and the Effectiveness of Caching,” IEEE/ACM Transactions on Networking, 10(5), 2002. [Google Scholar]
K. P. Gummadi, S. Saroiu, S. Gribble, “King: Estimating latency between arbitrary Internet end hosts,” ACM IMC 2002. [Google Scholar]
This paper uses DNS in an unintended and clever way to measure latency between any pair of hosts on the Internet.
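The core of King's trick fits in one line of arithmetic: measure the RTT from the measuring host to a DNS server near A, then issue a recursive query that is forwarded through that server to a DNS server near B, and subtract. A sketch with hypothetical RTT values (in ms):

```python
# Sketch of King's latency estimate (hypothetical RTT values in ms).
# (1) rtt_to_ns_a: RTT from the measuring host to a DNS server near A.
# (2) rtt_recursive_via_ns_a: RTT of a recursive query that travels
#     through that server on to a DNS server near B and back.
# The difference approximates the latency between the two name
# servers, which King uses as a proxy for the latency between A and B.
def king_estimate(rtt_to_ns_a, rtt_recursive_via_ns_a):
    return rtt_recursive_via_ns_a - rtt_to_ns_a

print(king_estimate(40.0, 110.0))  # -> 70.0 ms between NS_A and NS_B
```

The actual measurement machinery (finding a name server near each host, forcing recursion) is what the paper spends its effort on; the estimate itself is just this subtraction.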
A way to overcome the difficulties in simulating the Internet is to look for invariants: characteristics of the Internet that hold over time and space. In this lecture, we will see how to model and generate the topology of the Internet.
M. Faloutsos, P. Faloutsos, C. Faloutsos, “On Power-Law Relationships of the Internet Topology,” SIGCOMM 1999. [Google Scholar]
A. Medina, I. Matta, J. Byers, “On the Origin of Power Laws in Internet Topologies,” SIGCOMM 2000. [Google Scholar]
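To see one of these power laws appear, here is a small sketch (not from either paper; the growth rule is the preferential-attachment model that Medina et al. examine as a candidate explanation): grow a graph where each new node attaches to an existing node with probability proportional to its degree, then fit the rank-degree relationship on a log-log scale. A roughly straight line, i.e., a clearly negative slope, is the rank-exponent power law of Faloutsos et al.

```python
import math, random

# Grow a 2000-node graph by preferential attachment: sampling the
# edge-endpoint list "stubs" picks a node with probability
# proportional to its degree.
random.seed(1)
degree = [1, 1]          # start with two connected nodes
stubs = [0, 1]
for new in range(2, 2000):
    target = random.choice(stubs)
    degree.append(1)
    degree[target] += 1
    stubs += [new, target]

# Rank-degree power law: log(degree) vs log(rank) least-squares slope.
ranked = sorted(degree, reverse=True)   # degree of the r-th ranked node
xs = [math.log(r + 1) for r in range(len(ranked))]
ys = [math.log(d) for d in ranked]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
print("rank exponent (slope):", round(slope, 2))  # clearly negative
```

On a random graph of the same size the ranked degrees would be nearly flat; the heavy tail is what makes the fitted slope steep.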
Due to university-wide e-Learning week, there will not be any analog lecture on 9 October. Instead, I am posting a series of screencasts below.
In this lecture, I will introduce a scripting language called Tcl; OTcl, an object-oriented extension of Tcl; and NS2, a network simulator written in C++ and OTcl.
I structure the lecture to include the things that you need to know for your Assignment 2. The segment on NS2 sometimes contains code that does things magically. Don’t worry about it. Understanding the meaning of the values calculated (e.g., queue length, mean delay) suffices; you generally need not know the implementation details in NS2.
The best way to learn a new language/tool is to get your hands dirty: write some scripts yourself, play with different combinations/options, and see what happens.
The lecture is divided into four videos for practical reasons. The audio quality is not too good (my apologies) but the video quality is excellent. You will need the QuickTime player to play the videos (VLC might work too; it seems to play almost everything!)
This lecture will first give an overview of the different research methodologies in networking, then discuss in detail the difficulties and common pitfalls of using simulation as a networking research tool. Strategies to cope with these difficulties are also discussed.
S. Floyd, V. Paxson, “Difficulties in Simulating the Internet,” IEEE/ACM Transactions on Networking, 9(4), August 2001. [ACM DL | Google Scholar]
This week, we will discuss equation-based congestion control (it was supposed to be covered last week, but I ran out of time) and how a router can manage its queue(s) to avoid congestion on the Internet. I will also set aside some time to answer questions that you may have from the first four lectures.
I will scribble on the slides during class, so it would be helpful to bring a printed copy of the slides to class. Don’t forget to bring last week’s slides along.
S. Floyd, V. Jacobson, “Random Early Detection Gateways for Congestion Avoidance,” IEEE/ACM Transactions on Networking, 1(4), August 1993. PDF from RED Homepage
This is the original paper that proposes RED; it ignited the research on active queue management.
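The two computations at the heart of RED fit in a few lines. Here is a sketch (the thresholds and weights below are illustrative values, not a tuned configuration, and the paper's additional mechanism for spreading drops out via a packet counter is omitted): the gateway tracks an exponentially weighted moving average of the queue length, and marks/drops packets with a probability that rises linearly between two thresholds.

```python
import math  # not needed by the sketch itself; kept minimal

# Illustrative RED parameters (not from a tuned deployment):
# EWMA weight, min threshold, max threshold, max marking probability.
W_Q, MIN_TH, MAX_TH, MAX_P = 0.002, 5.0, 15.0, 0.02

def update_avg(avg, q):
    # Exponentially weighted moving average of the instantaneous queue.
    return (1 - W_Q) * avg + W_Q * q

def drop_probability(avg):
    # No drops below min_th, forced drop above max_th,
    # linear increase in between.
    if avg < MIN_TH:
        return 0.0
    if avg >= MAX_TH:
        return 1.0
    return MAX_P * (avg - MIN_TH) / (MAX_TH - MIN_TH)

avg = 0.0
for q in [0, 4, 8, 12, 20, 20, 20]:   # made-up instantaneous queue lengths
    avg = update_avg(avg, q)
    print(round(avg, 3), drop_probability(avg))
```

Note how slowly the average tracks the instantaneous queue with a small weight; that is deliberate, so that transient bursts do not trigger drops.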
B. Braden, D. Clark, J. Crowcroft, B. Davie, S. Deering, D. Estrin, S. Floyd, V. Jacobson, G. Minshall, C. Partridge, L. Peterson, K. Ramakrishnan, S. Shenker, J. Wroclawski, L. Zhang, RFC 2309: Recommendations on Queue Management and Congestion Avoidance in the Internet, April 1998. [IETF]
This RFC provides a high-level description of RED and argues the need for router-level mechanisms for congestion avoidance on the Internet.
S. Floyd, K. Fall, “Promoting the Use of End-to-End Congestion Control in the Internet,” IEEE/ACM Transactions on Networking, 7(4), August 1999. [CiteSeer]
The paper argues for the importance of end-to-end congestion control, and proposes several router mechanisms to provide incentives for flows to be TCP-friendly.
This week, we will re-examine TCP Reno and look at two improvements to TCP, namely NewReno and SACK. We will briefly discuss the notion of TCP friendliness and the importance of congestion control, then see how the TCP throughput model can be used for congestion control.
I will scribble on the slides during class so it would be helpful to bring a printed copy of the slides to class.
K. Fall and S. Floyd, “Simulation-based comparison of Tahoe, Reno, and SACK TCP,” Computer Communication Review, vol. 26, pp. 5–21, July 1996. [CiteSeerX | Google Scholar]
The paper consists of two parts: Sections 2-5 give an overview of the different TCP versions, while Section 6 illustrates their differences through simulations. The examples given in Section 6 are too involved, so I use simpler examples in the lecture.
S. Floyd, M. Handley, J. Padhye, J. Widmer, “Equation-based congestion control for unicast applications,” Computer Communication Review, vol. 30, pp. 43–56, October 2000. [Google Scholar]
We will focus on Section 3 of this paper, which gives the details of the TFRC protocol.
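The centerpiece of Section 3 is the TCP throughput equation that TFRC uses as its target sending rate. A sketch of it as code (using the common approximation t_RTO ≈ 4·RTT; the example numbers below are made up):

```python
import math

# TCP throughput equation used by TFRC as its target rate:
# X bytes/sec for packet size s (bytes), round-trip time R (sec),
# loss event rate p, and b packets acknowledged per ACK.
# The retransmission timeout t_RTO is approximated as 4*R.
def tfrc_rate(s, R, p, b=1):
    t_rto = 4 * R
    denom = R * math.sqrt(2 * b * p / 3) + \
            t_rto * 3 * math.sqrt(3 * b * p / 8) * p * (1 + 32 * p * p)
    return s / denom

# Made-up example: 1460-byte packets, 100 ms RTT, 1% loss event rate.
print(tfrc_rate(1460, 0.1, 0.01), "bytes/sec")
```

Two properties worth checking against intuition: the rate falls as the loss event rate rises, and at small p the first (square-root) term dominates, so the equation degenerates to the simple inverse-square-root model.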
This week, we will model the throughput of TCP as a function of loss rate and RTT. Our derivation in class will be a simplified version of the derivation in the original paper by Padhye et al. This lecture shows how simplifying assumptions and rough approximations can lead to a simple yet reasonably accurate model of a complex process.
This is a whiteboard-based lecture. My notes for this class (linked below) should be read in conjunction with the original paper (e.g., figures, experimental results).
J. Padhye, V. Firoiu, D. Towsley, and J. Kurose, “Modeling TCP Throughput: A Simple Model and its Empirical Validation,” SIGCOMM 1998. [CiteSeer]
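As a preview of where the simplified in-class derivation lands, here is the well-known "square-root" form of the model (the loss-only case, before Padhye et al. add timeouts and delayed ACKs), sketched as code with made-up example numbers:

```python
import math

# Simplified TCP throughput model: with loss rate p, a sender in
# congestion avoidance achieves roughly sqrt(3/(2p)) packets per RTT,
# so throughput in bytes/sec is:
def tcp_throughput(mss, rtt, p):
    return (mss / rtt) * math.sqrt(3.0 / (2.0 * p))

# Made-up example: 1460-byte segments, 100 ms RTT, 1% loss rate.
print(tcp_throughput(1460, 0.1, 0.01), "bytes/sec")
```

The key qualitative takeaway survives into the full model: throughput is inversely proportional to both RTT and the square root of the loss rate.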