
Calculation for TCP question in the review slides


#1

Hi,

If I understand correctly from lecture, with slow start we repeatedly double the number of packets we send until we eventually lose a packet, at which point we divide the last window by two and start using that value as our congestion window. Additionally, we would begin using additive increase to slowly grow the congestion window's size until the next dropped packet.

In one of the review slide questions, we were asked:

“Consider a TCP flow over a 1-Gb/s link with a latency of 1 second that transfers a 10 MB file. The receiver advertises a window size of 1MB, and the sender has no limitation on its congestion window (i.e., it can go beyond 64 KB). … How many RTTs does it take to send the file?”

I understand the answer, but the calculation seems oversimplified.
In part one, we calculate an n value of 10, meaning we continue doubling the number of packets sent until we reach 2^10 packets, at which point we exceed the 1MB threshold and divide our rate. The second part of the question then seems overly simplified: it asks how many RTTs it would take to transfer the 10 MB file.

When we sent 2^10 packets we experienced packet loss, but some of the packets in that stream would have been received. The answer doesn't seem to account for that. It also states that "Starting from the 11th RTT, the sender will send 1MB to the network." Why are we sending 1MB to the network? If anything, we would have halved our congestion window to 2^9 packets, which is 768,000 bytes, less than 1MB, and we would have applied additive increase with each transmission. The answer doesn't account for any of this. I'm ok with simplifying, since being precise would make for a very tedious and error-prone calculation, but on an exam, how would we be able to tell that we are allowed to make these simplifications if they aren't specifically stated in the question?

Also, how would we know to assume a packet size like 1500 bytes?


#2

Hitting the 1MB window size doesn't mean that we divide our rate.
The window size means that the receiver will only buffer up to 1MB of data, so it is advertising to the sender not to go above that amount at one time. As long as the sender respects that window size, there shouldn't be any packet loss (best case scenario).

So in this question the sender ramps up to 1MB and then stays constant at that transmission rate, since there is no mention of packet loss.

As for 1500 bytes as the packet size, that is just the standard Ethernet MTU.
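For what it's worth, a quick check (my own, not from the slides, and assuming 1MB means 10^6 bytes) shows how many full-size packets fit in the advertised window:

```python
# How many 1500-byte packets fit in the 1 MB advertised window?
# 1500 B is the standard Ethernet MTU; treating it as the packet size
# here is the same assumption the slides make.
MSS = 1500
window = 1_000_000          # 1 MB advertised window

print(window / MSS)         # ≈ 666.7, i.e. about 667 full-size packets
```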


#3

This still doesn't make sense … the window size for the sender is supposed to be the min of the advertised window and the congestion window. We do slow start until we get a packet loss, which would happen after we cross the 1MB threshold. After that, we do a multiplicative decrease down to 512 packets (times 1500 bytes is 768,000 bytes). The congestion window of 512 packets is less than the advertised window, so we should use that and perform additive increases, based on how the protocol is defined. Why would we just jump up to 1MB? This seems like it defeats the purpose of slow start.


#4

I assume that you are referring to slides 41 and 42. Here is what happens: the congestion window size is initially 1 packet. If the advertised window size is 1MB, then assuming that the MSS is 1500 bytes, it is equal to about 667 packets.
Now if we start with 1 packet, then in the next RTT we send 2, and then 4, …, 256, 512, and finally 1024. However, because 1024 is larger than 667, the congestion window size will be capped at 667 packets and remain at that value. This is why the answer to the first part is 10.
Now the next part asks how many RTTs we need to transfer the whole file. In the first 10 RTTs, we sent about 1.5MB. In each RTT after that, we send 1MB, so we need 19 RTTs to transfer the whole file.
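If it helps, here's a small sketch (my own back-of-the-envelope check, not from the slides) that reproduces this count. It assumes MSS = 1500 bytes, no packet loss, and that the sender window each RTT is min(cwnd, advertised window):

```python
def rtts_to_send(file_size, advertised, mss=1500):
    """Count RTTs to send file_size bytes under slow start with no loss,
    where the per-RTT send window is min(cwnd, advertised window)."""
    cwnd = 1                                   # congestion window, in packets
    sent = 0
    rtts = 0
    while sent < file_size:
        window = min(cwnd * mss, advertised)   # sender window = min(cwnd, rwnd)
        sent += window
        rtts += 1
        cwnd *= 2                              # slow start doubling; no loss here
    return rtts

# 10 MB file, 1 MB advertised window (treating MB as 10^6 bytes)
print(rtts_to_send(10_000_000, 1_000_000))     # prints 19
```

After 10 RTTs the loop has sent 1+2+…+512 = 1023 packets ≈ 1.5MB, and from the 11th RTT on the advertised window caps each RTT at 1MB, which matches the slide's answer.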


#5

So when we passed the 1MB receiver window, did we start using that value not because slow start encountered a packet drop, but because we always use min(congestion window, receiver's advertised window) as our sender window?