If I understand correctly from lecture, with slow start we repeatedly double the number of packets we send until we eventually lose a packet, at which point we halve the last window size and start using that value as our congestion window. From then on we use additive increase to slowly grow the congestion window until the next dropped packet.
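Just to check my understanding, here's a toy sketch of that behavior. The loss threshold and round count are made up for illustration; real TCP reacts to actual drops/timeouts, not a fixed cutoff:

```python
# Toy sketch of slow start + AIMD as I understand it from lecture.
# Loss is simulated whenever cwnd reaches a fixed threshold (an
# assumption for illustration, not how real loss works).

def simulate_cwnd(loss_threshold, rounds):
    """Return the congestion window (in packets) at each RTT."""
    cwnd = 1
    ssthresh = None  # unset until the first loss
    history = []
    for _ in range(rounds):
        history.append(cwnd)
        if cwnd >= loss_threshold:        # packet lost this round
            ssthresh = max(cwnd // 2, 1)  # multiplicative decrease
            cwnd = ssthresh
        elif ssthresh is None or cwnd < ssthresh:
            cwnd *= 2                     # slow start: double per RTT
        else:
            cwnd += 1                     # additive increase after loss
    return history

print(simulate_cwnd(loss_threshold=16, rounds=10))
# doubles 1, 2, 4, 8, 16, then halves to 8 and grows by 1 per RTT
```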
In one of the review slide questions we were asked:
“Consider a TCP flow over a 1-Gb/s link with a latency of 1 second that transfers a 10 MB file. The receiver advertises a window size of 1MB, and the sender has no limitation on its congestion window (i.e., it can go beyond 64 KB). … How many RTTs does it take to send the file?”
I understand the answer, but the calculation seems oversimplified.
In part one, we calculate an n value of 10, meaning we keep doubling the number of packets sent until we reach 2^10 packets, at which point we exceed the 1 MB threshold and cut our rate. The second part of the question then seems overly simplified: it asks how many RTTs it would take to transfer the 10 MB file.
When we sent 2^10 packets we experienced packet loss, but some of the packets in that burst would still have been received; the answer doesn't seem to account for that. It also states that “Starting from the 11th RTT, the sender will send 1MB to the network.” Why are we sending 1 MB to the network? If anything, we would have halved our congestion window to 2^9 packets, which is 768,000 bytes, less than 1 MB, and then applied additive increase with each transmission. The answer doesn't account for any of this. I'm fine with simplifying, since being precise would make for a very tedious and error-prone calculation, but on an exam, how would we be able to tell that we're allowed to make these simplifications if they aren't explicitly stated in the question?
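For what it's worth, I tried reproducing the simplified model's arithmetic to see where the RTT count comes from. My assumptions (not stated in the question): 1500-byte packets, the window doubles every RTT starting from 1 packet, MB means 10^6 bytes, and once the window exceeds the 1 MB advertised limit the sender just pushes a flat 1 MB per RTT (i.e., no loss/halving, which is exactly the simplification I'm asking about):

```python
# Checking the simplified model's arithmetic under my assumptions.
PKT = 1500          # bytes per packet (assumed)
RWND = 1_000_000    # 1 MB advertised receiver window (MB = 10^6 bytes)
FILE = 10_000_000   # 10 MB file

sent = 0
rtts = 0
cwnd_pkts = 1
while sent < FILE:
    rtts += 1
    burst = min(cwnd_pkts * PKT, RWND)  # capped by the receiver window
    sent += burst
    cwnd_pkts *= 2  # slow start doubling (never halved, per the model)

print(rtts)  # total RTTs under this simplified model
```

Under these assumptions I get 19 RTTs: 10 RTTs of doubling (ending at 2^9 packets = 768,000 bytes, with about 1.53 MB cumulatively sent after the 11th RTT's capped 1 MB burst), then 1 MB per RTT for the rest. But that only works if you ignore the halving and additive increase entirely, which is my question.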
Also, how would we know to assume a packet size of 1500 bytes?