
Module 1: Transmission and Programming


TCP Congestion Control
Welcome back to the course on Computer Networks and Internet Protocols. We have been looking into the Transmission Control Protocol in detail. In the last class we covered the flow control algorithms in TCP; in this lecture we look at its congestion control algorithms. For generic transport layer protocols, we have already seen that congestion control uses a simple principle to ensure both efficiency and fairness. Here we will see how TCP combines the notions of capacity and fairness in the congestion control algorithm it incorporates.

The basic congestion control algorithm of TCP implements additive increase, multiplicative decrease (AIMD) through a window-based control mechanism, and TCP treats packet loss as the signal of congestion. We saw earlier that AIMD maximizes capacity while also providing fairness in the network. When multiple flows contend at a bottleneck link, each flow tries to maximize its own capacity, and during that time we must ensure that every flow still gets its required share; from the global network perspective, where multiple flows compete to move their data, fairness is as important as controlling congestion.

We also saw that in a distributed network, ensuring hard fairness is very difficult, and hard fairness can under-utilize the available network capacity. To avoid this under-utilization, networks generally adopt the max-min fairness principle: maximize the end-to-end capacity that each flow can obtain. AIMD provides max-min fairness while maximizing the utilization of the bottleneck link bandwidth, in contrast to additive increase/additive decrease and multiplicative increase/multiplicative decrease rate control.

TCP incorporates exactly this AIMD principle: it increases the rate additively while sufficient bandwidth is available on the end-to-end path, and when it detects congestion through a packet loss, which serves as the congestion signal, it drops the rate multiplicatively.

To implement this notion of AIMD while maintaining fairness, TCP maintains a window called the congestion window: the number of bytes the sender may have outstanding in the network at any instant. The sending rate is then approximately the congestion window divided by the RTT.
The RTT is the round trip time: the time to propagate a message from one node to another and to get the reply back. Dividing the congestion window (the number of bytes the sender can push into the network) by the RTT gives an approximation of the sending rate when the sender is limited by the congestion window. But, as we saw earlier, there is another window involved: the receiver advertised window, which comes from the flow control algorithm.

Ideally, we have to merge the flow control principle with the congestion control principle. Flow control maintains its own window based on receiver feedback: the receiver advertises the space available in its buffer, and the sender sends at most that much data so that the receiver buffer does not overflow. The sender therefore also controls its rate from this advertised window.

So the sending rate should be the minimum of the rate at which the receiver can receive packets and the rate at which the network can deliver packets to the other end. The network-supported rate comes from the congestion control algorithm; the receiver rate comes from the flow control algorithm. The receiver advertised window gives the rate the receiver can sustain, and the congestion window gives the rate the network can sustain over the end-to-end path. The sender window should therefore be the minimum of the congestion window and the receiver window, and it is this sender window that determines the sending rate.

This is how TCP combines congestion control and flow control, and with this principle TCP controls the sender window size. Initially the congestion window is kept very small; we then increase it gradually to discover the capacity of the bottleneck link, and when congestion is detected we apply the AIMD principle to decrease the rate. That is the broad idea of congestion control in TCP.
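As a quick illustration of the window combination just described, here is a minimal sketch with made-up example numbers of how the effective sender window and the resulting rate could be computed:

```python
# Minimal sketch of combining flow control and congestion control:
# the effective window is the minimum of the congestion window
# (network limit) and the receiver advertised window (receiver limit).
# The numbers below are illustrative, not from any real trace.

def effective_window(cwnd_bytes: int, rwnd_bytes: int) -> int:
    """Sender window = min(congestion window, receiver window)."""
    return min(cwnd_bytes, rwnd_bytes)

def sending_rate_bps(window_bytes: int, rtt_seconds: float) -> float:
    """Approximate rate: one full window transmitted per RTT."""
    return window_bytes * 8 / rtt_seconds

# Example: cwnd = 64 KB, receiver advertises 32 KB, RTT = 100 ms.
window = effective_window(64 * 1024, 32 * 1024)
print(sending_rate_bps(window, 0.100) / 1e6)  # ~2.6 Mbps, receiver-limited
```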
Now let us see how this congestion control in TCP was invented; it is a nice example of a decentralized, distributed networking protocol. The concept arose from an event in 1986 known as the congestion collapse: the growing popularity of the internet led to the first occurrence of congestion in the network. The congestion collapse was a prolonged period during which the goodput of the system dropped significantly, by more than a factor of 100.

Van Jacobson was the key person who investigated this phenomenon of congestion in the internet, and the early TCP congestion control algorithm was developed by him; the initial proposal came around 1988, following the congestion collapse of 1986.

The challenge for Jacobson was to implement congestion control without making many changes to the protocol. It had to be instantly deployable, because many systems were already using TCP for data transmission and people were already accessing data over the internet. Designing a completely new protocol would have meant changing every machine in the system; the network then was perhaps a thousand times smaller than today, but it was still significant, and a completely new protocol would have raised backward compatibility problems. So the goal was a design compatible with the existing TCP implementation, requiring few changes to the protocol.

To solve this, Jacobson made a nice observation: packet loss is a suitable signal of congestion. There is a caveat here. When Jacobson devised this algorithm, wireless networking was at a very nascent stage, and most of the internet ran over wired links. In a wired network you generally do not lose packets in the channel: the communication medium is guided, and the link layer protocol takes care of interference, so the channels are effectively lossless. If a packet is lost, the loss is almost certainly from the buffers of the intermediate network devices; a loss from a network buffer means the buffer has overflown, and a buffer can only overflow because of congestion. So packet loss is a suitable signal for congestion: you can use a timeout to detect a packet loss, and then tune the congestion window based on that observation.

You therefore grow the congestion window until you observe a packet loss; a loss, detected through a timeout, indicates that congestion has occurred somewhere in the network. Whenever a packet loss is detected from a timeout, you apply the multiplicative decrease part of the AIMD principle to reduce your rate.
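Schematically, the loss-driven AIMD rule looks like the sketch below; this illustrates the principle in whole segments, not TCP's exact byte-level bookkeeping:

```python
# Schematic AIMD with packet loss as the congestion signal.
# cwnd is counted in segments for readability.

cwnd = 1.0  # congestion window, in segments

def on_round_trip_of_acks():
    """Additive increase: one more segment per round trip time."""
    global cwnd
    cwnd += 1.0

def on_timeout():
    """Loss detected: multiplicative decrease, halve the window."""
    global cwnd
    cwnd = max(1.0, cwnd / 2)
```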
Here is another interesting observation about how to adjust the congestion window under the AIMD principle. One of the most interesting ideas in TCP congestion control is the use of acknowledgements for clocking. Consider a scenario where the sender, an intermediate router, and the receiver are connected over a two-hop path: the first link is fast and the second link is slow, so the second link is the bottleneck. Congestion happens when too much data is pushed onto this slower, bottleneck link.

When the sender sends a burst of packets, the burst traverses the fast link quickly, but on the slow link the packets take more time to reach the receiver. Now look at the acknowledgements: their rate matches the rate at which packets drain through the slower link, and whatever the acknowledgement rate is on the slow link, the same rate is preserved on the fast link as well. So when acknowledgements reach the sender, their arrival rate reflects the rate of the slowest link in the end-to-end path; on a multi-hop path, acknowledgements arrive at the rate of the slowest link.

If the sender monitors the rate of acknowledgements, it gets a good idea of the rate at which packets are actually being delivered to the receiver: acknowledgements return to the sender at about the rate at which packets can be sent over the slowest link in the path. TCP therefore triggers its congestion window adjustments on acknowledgement arrivals; the acknowledgements act as a clock that drives congestion control. That was another interesting observation by Jacobson while designing the algorithm, and that is the basic principle.
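As a toy illustration of this ack-clocking idea, one could estimate the bottleneck rate from the spacing of acknowledgement arrivals; the timestamps and segment size here are invented example values:

```python
# Toy ack-clocking estimate: acks spaced by the bottleneck link's
# per-segment transmission time reveal that link's rate.
# All values below are made up for illustration.

ack_times = [0.000, 0.012, 0.024, 0.036, 0.048]  # ack arrivals (seconds)
segment_bytes = 1250                              # bytes covered per ack

elapsed = ack_times[-1] - ack_times[0]
bits_acked = segment_bytes * 8 * (len(ack_times) - 1)
print(f"~{bits_acked / elapsed / 1e6:.2f} Mbps")  # ~0.83 Mbps bottleneck
```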
Now, whenever you receive an acknowledgement you adjust the congestion window size, but the question is at what rate you should increase it. Initially you can set the congestion window to one packet, and then grow it as acknowledgements arrive.

Let us see what happens if you apply the additive increase principle from the start. Additive increase says: send one packet; when its acknowledgement arrives, increase the congestion window by one, so the window is two and you send two packets. Once you receive the acknowledgements for those two packets, you again increase the window by one, so it becomes three and you can transfer three packets. You wait for the acknowledgements of those three packets, then increase the window from three to four and send four packets.

Applied over the network, this additive increase rule takes a very long time to reach a good operating point on fast networks, because the congestion window starts from a small size. Consider an example: a 10 Mbps link with a 100 millisecond round trip time. The appropriate congestion window size should equal the bandwidth-delay product (BDP) that we saw earlier; with a 10 Mbps link and 100 ms RTT, the BDP comes to 1 megabit.

Now assume 1250-byte packets: 1250 × 8 = 10,000 bits per packet. With a 1 megabit BDP, you need at least 100 packets in flight to fill the bandwidth-delay product. If the congestion window starts at 1 packet and is increased by 1 packet every RTT (the RTT being the pace at which acknowledgements return), you need 100 RTTs; at 100 ms per RTT, that is approximately 10 seconds before the connection approaches its maximum capacity. Think of a web transfer over HTTP: hardly any HTTP connection lasts 10 seconds. By the time the maximum capacity is reached, the TCP connection would probably be closed and a new connection started, again beginning from one packet. To speed up this ramp, TCP applies a principle called slow start.
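To make the point concrete, here is the arithmetic of the example above written out (the same numbers as stated in the text):

```latex
\begin{aligned}
\text{BDP} &= 10\,\text{Mbps} \times 100\,\text{ms}
            = 10^{7}\ \text{bit/s} \times 0.1\ \text{s} = 10^{6}\ \text{bits} = 1\ \text{Mbit}\\
\text{packet size} &= 1250\ \text{bytes} \times 8 = 10^{4}\ \text{bits}\\
\text{packets to fill the pipe} &= 10^{6} / 10^{4} = 100\\
\text{ramp-up time at 1 packet/RTT} &= 100\ \text{RTTs} \times 100\ \text{ms} = 10\ \text{s}
\end{aligned}
```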
TCP Congestion Control - Part 2
So what is slow start? Initially you grow the rate exponentially to avoid slow convergence. There is an irony in the name: slow start does not mean that the growth is slow; rather, you start from a slow rate and converge quickly to a high rate. In slow start, the congestion window is doubled at every RTT: instead of an additive increase, we initially apply a multiplicative increase.

That is the notion of TCP slow start: every acknowledged segment allows two more segments to be sent. For each segment that is acknowledged before the retransmission timer goes off, the sender adds one segment's worth of bytes to the congestion window.

With this example: initially the congestion window is 1. On receiving the acknowledgement, the window becomes 2; on receiving those two acknowledgements, it becomes 4; on receiving those four, it becomes 8. Each of these rounds takes about one RTT: in the 1st RTT you acknowledge 1 packet, in the 2nd RTT 2 packets, in the 3rd RTT 4 packets, in the 4th RTT 8 packets; and once the pipe is full, that is the level at which the window converges. So in TCP slow start, the congestion window doubles every round trip time.

But if you only ever increase the congestion window multiplicatively, you again violate the max-min fairness principle we saw earlier: MIMD (multiplicative increase, multiplicative decrease) does not lead to an optimal operating point where both the capacity and the fairness constraints are satisfied. So at some point we have to switch to additive increase. Slow start causes exponential growth, and left alone it would eventually push too many packets into the network too quickly.

To keep slow start under control, the sender keeps a threshold for the connection, called the slow start threshold or ssthresh. Initially ssthresh is set to the BDP, or to some arbitrarily high value: the maximum a flow could push into the network. Whenever a packet loss is detected by a retransmission timeout, ssthresh is set to half of the current congestion window. That is the idea.

Let me explain this with a diagram: put time on one axis and the congestion window on the other. You start with one packet and grow exponentially. At some point you get a loss, a retransmission timeout (RTO); you set half of the current window as the new ssthresh, drop back to a congestion window of one, grow exponentially up to ssthresh, and from there increase additively: this is where AIMD starts. If another RTO occurs during this additive phase, you again set ssthresh to half of the current window, drop the rate, grow exponentially up to the new ssthresh, and continue additively after that. So slow start operates up to the slow start threshold, and beyond it you proceed with AIMD.

After the slow start threshold is crossed, TCP switches from slow start to additive increase, which we call congestion avoidance.
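A simplified sketch of slow start with a threshold follows; the initial ssthresh value is an arbitrary placeholder, and the window is counted in segments rather than bytes:

```python
# Simplified slow start + congestion avoidance switch-over.
# cwnd and ssthresh are in segments; real TCP uses bytes.

cwnd = 1.0
ssthresh = 64.0  # placeholder for "arbitrarily high" initial value

def on_ack():
    """Called once per acknowledged segment."""
    global cwnd
    if cwnd < ssthresh:
        cwnd += 1.0         # slow start: +1 per ack, doubles per RTT
    else:
        cwnd += 1.0 / cwnd  # congestion avoidance: ~+1 per RTT

def on_retransmission_timeout():
    """Loss via timeout: halve the threshold, restart slow start."""
    global cwnd, ssthresh
    ssthresh = max(2.0, cwnd / 2)
    cwnd = 1.0
```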
Congestion avoidance is usually implemented as a partial increase for every segment that is acknowledged, rather than the full one-segment-per-acknowledgement increase of the slow start phase. A common approximation is to update the congestion window on each acknowledgement as

cwnd = cwnd + (MSS × MSS) / cwnd

where MSS is the maximum segment size. Summed over the acknowledgements of one window, this adds approximately one MSS worth of new bytes to the congestion window per round trip time, so the window follows the additive increase principle.

To trigger congestion control, as we have mentioned, TCP normally uses a retransmission timeout, which indicates a packet loss; but there is a second trigger. TCP uses two different signals: the retransmission timeout, and duplicate acknowledgements. A duplicate acknowledgement means you keep receiving acknowledgements for the same packet.

Why use duplicate acknowledgements to trigger congestion control? With a retransmission timeout you must wait out the whole timeout duration to detect congestion, whereas duplicate acknowledgements give an early hint, so you can trigger the congestion control behaviour much earlier than waiting for a complete timeout period. Interestingly, the retransmission timeout (RTO) is a sure indication of congestion but time consuming, while a duplicate acknowledgement, which the receiver sends when it receives an out-of-order segment, is a looser indication. TCP assumes that three duplicate acknowledgements imply the packet has been lost, and triggers the congestion control mechanism. This choice of three duplicate acknowledgements is somewhat arbitrary; there is no deep logic behind it, Jacobson simply took triple duplicate acknowledgements as the indication of congestion.

Another interesting observation about duplicate acknowledgements: because TCP uses cumulative acknowledgements, a duplicate acknowledgement identifies the lost packet, namely the very next packet in sequence after the one being acknowledged. Say you have received packets 1, 2, 3, then packet 4 is lost, and packets 5, 6, 7 arrive. On receiving each of 5, 6, 7, the receiver still acknowledges only up to 3, so the sender receives duplicate ACKs for 3 and can infer that packet 4 is the one that has been lost. So you retransmit the lost packet, and then trigger the congestion control algorithm.
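The per-acknowledgement congestion avoidance update above can be checked numerically; this sketch uses an assumed MSS of 1460 bytes and an arbitrary starting window:

```python
# Per-ack congestion avoidance: cwnd += MSS*MSS/cwnd. Over roughly one
# window's worth of acks (one RTT), cwnd grows by about one MSS.

MSS = 1460            # assumed maximum segment size, in bytes
cwnd = 10 * MSS       # arbitrary current congestion window, in bytes

start = cwnd
for _ in range(int(cwnd // MSS)):   # ~one RTT's worth of acks
    cwnd += MSS * MSS / cwnd
print(round(cwnd - start))          # ~1394 bytes: about one MSS per RTT
```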
Now, in TCP Reno there is another interesting refinement. Once congestion is detected through three duplicate acknowledgements, does TCP really need to set the congestion window back to 1 MSS? Duplicate acknowledgements mean that some segments are still flowing through the network; it is not the case, as with a retransmission timeout, that nothing is getting through.

So Reno applies fast retransmit and fast recovery: you set the slow start threshold to one half of the current congestion window and retransmit the missing segment (the fast retransmit); then you set the congestion window to the new slow start threshold plus 3 MSS. Why plus 3? Because three duplicate acknowledgements mean the receiver has already received three more packets, albeit out of order; those packets have left the network, so you can inflate the congestion window and send three more packets.

Compare this with the older TCP Tahoe behaviour: on any detected loss, Tahoe sets the congestion window back to 1 MSS, runs slow start up to the slow start threshold, and only then applies additive increase. With fast recovery, we return to additive increase much faster, which helps reach the operating point much sooner, by avoiding a fresh slow start phase whenever congestion is detected through triple duplicate acknowledgements.

But if congestion is detected by a retransmission timeout, TCP always sets the congestion window to 1 MSS and starts with the slow start phase, because a retransmission timeout indicates that severe congestion has occurred; whenever there is severe congestion in the network, it is better to restart from a very low rate, that is, with the congestion window at 1 MSS. That is the broad difference.

In summary: if congestion is detected by a retransmission timeout, set the congestion window to 1 MSS, apply slow start up to the current ssthresh (half of the congestion window at the moment congestion was detected), and then apply additive increase. If congestion is detected through triple duplicate acknowledgements, there is no need to go through the slow start phase again, because some packets are still flowing in the network: you apply fast recovery directly and then continue with additive increase. That is the TCP Reno variant.
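Here is a sketch of Reno's two reactions, under the simplifying assumptions already used above (segment-counted windows, and a hypothetical retransmit callback):

```python
# Reno-style reactions to the two congestion signals.
# Simplified: segment units; real stacks also inflate cwnd by one MSS
# for each additional duplicate ack during recovery.

MSS = 1.0
cwnd, ssthresh, dupacks = 16.0, 64.0, 0

def on_duplicate_ack(retransmit_missing_segment):
    """Fast retransmit + fast recovery after the third duplicate ack."""
    global cwnd, ssthresh, dupacks
    dupacks += 1
    if dupacks == 3:
        ssthresh = max(2 * MSS, cwnd / 2)   # halve the threshold
        retransmit_missing_segment()         # the fast retransmit
        cwnd = ssthresh + 3 * MSS            # 3 segments left the network

def on_retransmission_timeout():
    """Severe congestion: fall back to slow start from 1 MSS."""
    global cwnd, ssthresh, dupacks
    ssthresh = max(2 * MSS, cwnd / 2)
    cwnd = 1 * MSS
    dupacks = 0
```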
After Reno, many other variants of TCP came into existence, such as TCP NewReno and TCP SACK (selective acknowledgement). The original TCP protocol works on the go-back-N ARQ principle, whereas TCP SACK works on the selective repeat ARQ principle: the receiver explicitly sends SACK information indicating which packets have been lost, and the sender retransmits those packets without resending the whole current window. There are many such variants, and more came into practice later; I am not going into the details of all of them, but if you are interested you can explore them.

The basic notion of TCP congestion control is these phases: slow start, followed by congestion avoidance, plus fast retransmit and fast recovery. On top of these, people have incorporated a few other optimizations; if you are interested, you can look into the RFCs to know them better.

This gives us a broad overview of the congestion control algorithms in TCP, and with it we have given a fairly detailed description of the transport layer protocols, with a focus on TCP. As mentioned, a huge share of internet traffic, more than 90 percent, actually uses the TCP protocol, and TCP is possibly the most widely deployed transport layer protocol in the internet. But there are certain applications that use UDP, which does not support reliability, flow control, or congestion control the way TCP does.

In the next class, we will look into the details of the UDP protocol. Thank you all for attending this class. See you again.
