Buffer Management and Congestion Control
Welcome back to the course on Computer Networks and Internet Protocols. We have been discussing the transport layer and the various services under it. In the last class we discussed the basic performance modules of the transport layer, along with a hypothetical transport layer protocol showing how the transport layer is interfaced with the application layer. Continuing from that point, in this lecture we will discuss the buffer management mechanism at the transport layer and then look into the details of congestion control algorithms.

Coming to buffer management: as you saw in the last class, at the sender side as well as at the receiver side we maintain a transport layer buffer, which is a software buffer maintained as a queue. At the sender side, whenever you send some data, you put the data into that queue, and based on your rate control algorithm the transport layer fetches the data from the queue and sends it to the unreliable IP layer of the network layer. On the other hand, at the receiver side you have a queue into which data from the network layer is put, and the application then fetches the data from that queue.

Now let us look into this diagram. At the network layer, ip receive is again a hypothetical function that receives data from the IP layer and puts it into the queue. This queue is the transport layer buffer corresponding to an application, and we use the port number to identify into which buffer the data should be put. Once the network layer has put the data at the transport layer, the application uses the read system call (there are other system calls as well, like write) to receive the data from the transport layer.
This read system call is part of socket programming, which we will discuss in detail later on. Using the read system call, you receive the data from the transport layer: if you use the write call at the sender side, then you can use the read call at the receiver side's application to read the data. Regarding the transport layer implementation, as we have seen, this entire part of the protocol stack is implemented inside the kernel if you consider a Unix type of operating system; broadly speaking, the protocol stack is part of the operating system, whereas the application is written in user space.

The frequency of the read call is managed by the application you are writing; the application uses socket programming (discussed later in detail) to read the data from this transport layer buffer. Now, it may happen that the application is reading data at one speed, whereas the network is delivering data at another speed. Many times the network may be sending data at a much higher speed than the application is reading it: say the network is receiving data at a rate of 1 Mbps while the application is reading it at 10 kbps. Reading at 10 kbps means that at that rate you are executing the read system call — you come, say, every 1 second and make a read system call with a buffer size of 10 Kb.
So you receive the data at a rate of 10 kbps, and because of this difference we face a problem: the data accumulates inside the transport buffer. Since data is arriving at a higher rate than it is read, it gets buffered inside this transport layer queue, and after some time the queue may become full. That is a problem, because if the sender then sends more data into the network, that data will arrive at the transport layer, but since the buffer is full you will experience a packet drop from this buffer.

You want to avoid that. To avoid it, what do you have to do? You have to tune the flow control algorithm. How? You need something called dynamic buffer management: the free space in the receiver-side buffer changes dynamically because of the rate difference between the application layer and the rate at which the transport layer receives data from the network layer. To handle that, you have to send this information to the sender side, so that the sender stops sending data unless there is sufficient space in the receiver buffer. This concept is called dynamic buffer management at the receiver side.
So, let us look into this concept of dynamic buffer management in a little more detail.

(Refer Slide Time: 05:53)

In dynamic buffer management for sliding window based flow control, the sender and the receiver need to dynamically adjust their buffer allocation, because, based on the rate difference between the transport entity and the application, the available size of the receiver buffer may change. In this diagram, one set of segments has already been read out by the application, so those have left the buffer. This is your complete buffer size, and out of it there are 3 segments currently sitting inside the buffer, waiting for the application to read them. The free space you have in the receiver buffer is the remaining amount, and you need to advertise this amount to the sender, so that the sender does not send more data than the receiver buffer can hold. In other words, you need to dynamically adjust the sender window size based on the availability of receiver buffer space.

As we saw in the window based flow control algorithm, you can dynamically tune the sender window size, which depicts how much data you can send to the receiver without waiting for an acknowledgement. Now, if the receiver sends feedback to the sender saying it has this much buffer space available, the sender can set its window size to at most that value, so that it will never send the receiver more data than that. Once the receiver receives the data, it can again send an acknowledgement.
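The receiver-side bookkeeping described above can be sketched in a few lines. This is a minimal illustrative sketch, not real protocol-stack code (the class and method names are my own): the advertised window is simply the total buffer capacity minus the data that has been received but not yet read by the application.

```python
# Sketch of dynamic buffer management at the receiver (names are illustrative).
class RecvBuffer:
    def __init__(self, capacity):
        self.capacity = capacity      # total receiver buffer size in bytes
        self.buffered = 0             # bytes received but not yet read

    def on_segment(self, nbytes):
        """Network layer delivers a segment; drop it if there is no room."""
        if self.buffered + nbytes > self.capacity:
            return False              # buffer full -> segment dropped
        self.buffered += nbytes
        return True

    def on_read(self, nbytes):
        """Application reads data, freeing buffer space."""
        self.buffered -= min(nbytes, self.buffered)

    def advertised_window(self):
        """Free space announced to the sender with each acknowledgement."""
        return self.capacity - self.buffered
```

When the application reads slower than the network delivers, `buffered` grows, the advertised window shrinks, and the sender is throttled before drops occur.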
Once this data has been read by the application, the receiver sends an acknowledgement, and along with that acknowledgement it can announce the available buffer size. Let us see how this procedure works.

(Refer Slide Time: 07:55)

Here is an example. There are 2 nodes, A and B, communicating with each other. The first message is that A requests 8 buffers. Here we are expressing the buffer as the number of segments you want to transfer, and we assume that every segment has a fixed size — this does not hold true for TCP, but we make the assumption for simplification. A is the sender, who will send the data, and B is the receiver, who will receive it.

The sender first requests 8 buffers — A wants 8 buffer spaces from B — but B finds that only 4 buffer spaces, space for 4 segments, are available. So B sends back an acknowledgement with an acknowledgement number along with this buffer space value. The buffer space value of 4 means that B will grant only 4 messages, from segment 0 to segment 3. Now A sends one message, the data with sequence number 0; at this point A has 3 buffers left. Then A sends another message, m1; now A has 2 buffers left. Then A sends another message, and assume that this message is lost.

Since this message is lost, the receiver side actually has 2 buffer spaces left, but A thinks there is only one buffer space left, because A has already sent 3 segments. At this stage B acknowledges 0 and 1: B sends an acknowledgement 1, and this acknowledgement is a cumulative acknowledgement.
Sending back acknowledgement 1 means acknowledging both message 0 and message 1; along with it, B advertises a buffer space of 3. So A gets the update that message 0 and message 1 have been successfully received by the receiver and that it has an available buffer space of 3. A then sends message m3 — it has already sent m2 and does not know whether that message was received — then sends m4, and finally retransmits m2.

The advertised buffer space was 3, so A has sent 3 messages; after that, A cannot send any more data, because A's sending window was set to 3 and it has already transmitted 3 messages. At this stage B sends an acknowledgement with acknowledgement number 4. When this acknowledgement number is 4, A finds out that all the messages — m2, m3, and m4 — got acknowledged, because 4 is again a cumulative acknowledgement.

In between, there was a timeout for message m2, for which A had not received an acknowledgement, so A transmitted that message again. B has received m2, m3, and m4, and has sent acknowledgement 4 with buffer space 0. With this buffer space of 0, what is B acknowledging? B is acknowledging that all the messages have been received, but that it does not have any buffer space available — meaning the application has not read the data yet. Once the application has read the data, B sends another acknowledgement with the same acknowledgement number 4, but announcing the buffer space as 1. At this stage B makes one buffer space available to A, saying that A can send one more segment.
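The sender-side credit accounting in this exchange can be sketched as a toy class (the name and methods are illustrative assumptions, not from the lecture): the sender spends one unit of credit per segment, blocks when the credit reaches 0, and refills the credit from each acknowledgement's advertised buffer space.

```python
# Toy credit-based sender, mirroring the A<->B exchange (names illustrative).
class CreditSender:
    def __init__(self):
        self.window = 0          # credit granted by the receiver, in segments
        self.next_seq = 0        # next sequence number to send

    def grant(self, buffer_space):
        """An ACK arrives advertising free receiver buffer space."""
        self.window = buffer_space

    def send(self):
        """Send one segment if credit remains; otherwise block (return None)."""
        if self.window == 0:
            return None          # blocked: no credit left
        self.window -= 1
        seq, self.next_seq = self.next_seq, self.next_seq + 1
        return seq
```

A zero grant is exactly the "buffer space 0" situation above: `send()` keeps returning `None` until a later acknowledgement refills the credit.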
Here you can see that once the receiver has advertised buffer space 0, then, after buffer space becomes available again, it needs to send another acknowledgement; otherwise the sender will get blocked at this stage, because once the sender learns that the buffer space is 0, it will not send any more data. So it gets blocked there, and things continue that way: A sends the data and gets blocked, then gets an acknowledgement with buffer space 0 and is still blocked, and then A can send another message once buffer space becomes available.

It may sometimes happen that, because of this advertisement that no buffer space is available, there is a possible deadlock at the sender side: the sender comes to believe that no more space is available at the receiver. To avoid this, what do you have to ensure? You have to ensure that acknowledgements keep flowing in the network. In this particular example, suppose that B has initially advertised that the buffer space is 0, and then B sends another acknowledgement saying that buffer space is available, but somehow this acknowledgement gets lost.

Because this acknowledgement is lost, the system can reach a deadlock unless B sends another acknowledgement after a timeout; B needs to explicitly tell A that sufficient buffer space is now available, so that A can send more data. So you have to ensure that after every timeout, if B is not receiving any more data from A and the connection is still open, B sends a duplicate acknowledgement announcing that it has sufficient buffer space to receive further data; otherwise there is a possibility of a deadlock in case the acknowledgement gets lost.

Now we will look at another important aspect of the transport layer, which we call congestion control. What is congestion control? First think of a centralized network scenario: there is an edge between every pair of connected nodes, and each edge has a weight. This edge weight signifies the capacity of that particular link. If you want to send some data from S to D, you may want to find out the maximum capacity of that flow.

For this you can apply the max-flow min-cut theorem, which is covered in algorithms courses. From the max-flow min-cut theorem you can find out the minimum cut. Just by looking at the scenario, this seems to be the minimum cut, so the maximum flow you can send equals the total capacity of the cut edges, which here comes to 12: taking the unit as Mbps, you can send data at a rate of 12 Mbps from S to D. In this kind of centralized scenario you can apply such a mechanism to restrict your flow rate to 12 Mbps, but if that global view is not there, how will you find it?

Your flow control algorithm cannot guarantee that your transmission is always restricted to 12 Mbps, because the rate comes over multiple paths and is bounded only by the maximum segment size — the approximate calculation that we looked at earlier. Because of that, in a distributed scenario the sender may push more data than the 12 Mbps bottleneck capacity of this particular network.
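The centralized computation alluded to here can be sketched with the Edmonds–Karp variant of Ford–Fulkerson. The topology below is a hypothetical example (the lecture's exact graph is not fully recoverable from the transcript); its minimum cut, and hence its maximum S-to-D flow, is 12.

```python
# Edmonds-Karp max flow (BFS-based Ford-Fulkerson) on a hypothetical topology.
from collections import deque, defaultdict

def max_flow(cap, s, t):
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow              # no augmenting path left: flow is maximal
        # walk back from t to find the bottleneck along the path
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(cap[u][w] for u, w in path)
        # push b units along the path and update residual capacities
        for u, w in path:
            cap[u][w] -= b
            cap[w][u] = cap[w].get(u, 0) + b
        flow += b

# hypothetical link capacities in Mbps; the cut {A->D, B->D} has capacity 12
cap = defaultdict(dict)
for u, v, c in [("S", "A", 10), ("S", "B", 8),
                ("A", "D", 6), ("B", "D", 6), ("A", "B", 4)]:
    cap[u][v] = c
```

With global knowledge like this, a centralized controller could cap the S-to-D rate at the min-cut value; the lecture's point is that real networks lack this global view.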
So this capacity is the bottleneck capacity: if you want to send data from S to D, even if there are no other flows in the network, you will never be able to achieve more than 12 Mbps.

The scenario gets more complicated if you have multiple flows. Think of another flow, from S1 to D1, that also uses individual links of this network — say it uses a particular link with a capacity of 4. That flow may go through any of these links, so there will be this kind of overlap among multiple end-to-end flows, and they will share the capacity of the bottleneck links.

This entire thing is difficult to realize in a real network, because there you have to implement it in a distributed way. In that setting, congestion may occur in the network when, on a bottleneck link, you try to push more than the capacity of that link. If you push more data than the bottleneck link's capacity, the intermediate buffers at the nodes fill up and you experience packet loss, which increases the end-to-end transmission delay significantly.

Buffer Management and Congestion Control - Part 2
From here, let us look at how the bandwidth allocation changes when you start more flows between the same source-destination pair. Say we normalize the bandwidth allocation to 1 Mbps. Think of a network where you are sending a single flow from source S1 to destination D1, and assume the bottleneck link capacity is 1 Mbps.

If that is the case, then once you start flow 1, it will be able to use the entire 1 Mbps bandwidth available on the bottleneck link. Now say that after some time you start another flow, from S2 to D2 — call it flow 2. The 1 Mbps link is now shared by both F1 and F2, so ideally, whenever you start this second flow, the bottleneck link bandwidth gets divided between F1 and F2: both flow 1 and flow 2 will get approximately 0.5 Mbps, provided their sending rates are more than 0.5 Mbps.

So the entire bottleneck capacity is divided between flow 1 and flow 2. After some time, say you start another flow, flow 3, whose required bandwidth is a little less — something close to 100 kbps. In that case it will take its 100 kbps of bandwidth, and the remaining bandwidth will be shared between flow 1 and flow 2, which both utilize this bottleneck link. After some more time, if flow 2 finishes, flow 1 will get bandwidth close to 900 kbps, while flow 3 keeps utilizing its 100 kbps.
In this way, the bandwidth allocation among multiple flows changes over time.

(Refer Slide Time: 20:59)

In this context, we discuss the congestion control algorithms used in the network. A congestion control algorithm is required because flows enter and exit the network dynamically, as in the example we have just seen. For the same reason, applying a congestion control algorithm in the network is difficult: you do not have centralized network information, unlike the first example that I showed you, where you could apply the min-cut theorem to find the maximum flow between one source and one destination.

The scenario is much more difficult here because no individual router has the entire network information — even the end hosts do not have it — and you have to allocate rates among different end-to-end flows in a distributed, decentralized way. So we apply something called a congestion avoidance algorithm rather than congestion control, because a priori estimation of congestion is difficult.

Congestion avoidance means that whenever there is congestion, you detect it and then try to come out of it. How will you do that? You regulate the sending rate based on what the network can support: your sending rate is now the minimum of the network rate and the receiver rate, whereas earlier your sending rate was simply the receiver rate. Based on the buffer advertisement given by the receiver, you control your window size and send the data at that rate; now your sending rate will be the minimum of what the network can support and what the receiver can support. The receiver rate comes from the flow control algorithm.
For sliding window based flow control, that is, it comes from the receiver's advertised window size. The receiver advertises the window information; over the network rate, however, you have no control — or rather, no estimation mechanism. What you can do is gradually increase the network rate component and observe the effect on the flow rate. Ideally, in a wired network, if you assume that loss due to channel error is very small, then any loss coming from the network is due to buffer overflow at the intermediate routers.

If buffer overflow happens, that indicates that the total incoming rate to the buffer exceeds the total outgoing rate from that buffer. As an example, think of an intermediate buffer queue that is receiving data from multiple flows and has some outgoing rate (the outgoing links can also be multiple). Assume the total incoming rate is λ and the total outgoing rate is μ. If λ is more than μ, this indicates that after some time the buffer will fill up; once it is full, packets will be dropped from the buffer and you will experience a loss.

The same thing happens in the network, and we sense congestion from this packet loss, because a packet loss indicates that the buffer capacity has been exceeded. So you identify it as congestion and drop the network rate again. The broad idea is that you gradually increase the network rate; at some point you will experience a packet loss; the moment you experience the loss, you drop the rate, then increase it again, and whenever you get a loss you drop the rate again. That is how we apply the congestion control algorithm in the network.

Now the question comes: how will you increase the rate?
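The λ versus μ argument above can be checked with a small fluid-model sketch (all numbers are illustrative assumptions): whenever the per-step arrival rate exceeds the drain rate, the backlog grows until the finite buffer overflows and data is dropped.

```python
# Fluid approximation of a router queue: arrivals lam, departures mu per step.
def simulate_queue(lam, mu, buf_size, steps):
    backlog, dropped = 0.0, 0.0
    for _ in range(steps):
        backlog += lam                        # data arriving this step
        backlog = max(0.0, backlog - mu)      # data drained this step
        if backlog > buf_size:                # buffer overflow -> loss
            dropped += backlog - buf_size
            backlog = buf_size
    return backlog, dropped
```

With λ ≤ μ the backlog stays at zero; with λ > μ the buffer eventually sits full and every extra arrival is dropped — which is exactly the loss signal the sender interprets as congestion.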
First we want to see the impact of network congestion on goodput and delay. Look at the network goodput — the number of packets per second that you receive at the transport layer — against the offered load. There is a maximum capacity, and normally the goodput increases with the offered load up to that maximum capacity.

But the moment congestion sets in, you see a sharp drop in the goodput, because your packets are getting dropped, and when packets are dropped the flow control algorithm tries to retransmit them. This drop is called congestion collapse. When congestion collapse happens, you also experience a significant increase in delay: packets are dropped, flow control tries to retransmit them, and if the link has not yet come out of congestion, the retransmitted packet again falls into the congestion — there is a high probability that the buffer is still full and the packet gets dropped again. Because of that, the rate of successful end-to-end packet delivery decreases.

To ensure congestion control over the network, we need to ensure another property, which we call fairness. What is fairness? Fairness ensures that the rates of all the flows in the network are controlled in a fair way. What does that mean? A bad congestion control algorithm may harm fairness: some flows in the network may get starved because they keep running into the congestion. Think of a road network: if a car falls into a congestion, it can have very bad luck and a huge delay in reaching its destination.
Something similar may happen in the network: some flows may get starved. In a decentralized network, ensuring hard fairness is very difficult, because again you would require the entire network information and some mathematical calculation — finding the min cut via the min-cut theorem, working out the available capacity, and restricting the bandwidth to that capacity.

So let us look at an example of max-min fair allocation. In this particular example we have multiple flows. You can see the bottleneck link, which 3 flows are sharing; 3 flows sharing the link means each of them will get one third of the bandwidth. This particular link is shared by the maximum number of flows: one link is shared by 3 flows, others by 2 flows, 2 flows, and one flow respectively. Here is the bottleneck, so each of those 3 flows gets one third of the bandwidth.

If they get one third of the bandwidth, then flow D uses one third and flow C uses one third. Flow B, which also passes through this bottleneck, is held to one third of the bandwidth, so it utilizes only one third of the bandwidth on its other link as well. On that link the remaining capacity is two thirds, so flow A can use two thirds of the bandwidth.
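The allocation just described can be computed by "progressive filling": raise every still-unfrozen flow's rate equally until some link saturates, freeze the flows crossing that link, and repeat. The sketch below uses a hypothetical two-link encoding of the example (capacities normalized to 1); the function and link names are illustrative.

```python
# Max-min fair allocation via progressive filling (water-filling).
def max_min_fair(links, flows):
    """links: {link: capacity}; flows: {flow: set of links it crosses}."""
    alloc = {f: 0.0 for f in flows}
    active = set(flows)
    cap = dict(links)
    while active:
        # remaining headroom per link, divided among the active flows on it
        share = {l: cap[l] / sum(1 for f in active if l in flows[f])
                 for l in cap if any(l in flows[f] for f in active)}
        inc = min(share.values())            # the tightest link limits the step
        for f in active:
            alloc[f] += inc
            for l in flows[f]:
                cap[l] -= inc
        saturated = {l for l in share if cap[l] < 1e-12}
        # freeze every flow that crosses a now-saturated link
        active = {f for f in active if not (flows[f] & saturated)}
    return alloc
```

Encoding the lecture's example as link L1 shared by flows B, C, D and link L2 shared by A and B reproduces the stated result: B, C, D each get one third and A gets the remaining two thirds.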
This is the max-min fair allocation, because if you want to increase the bandwidth of flow B beyond one third, you have to decrease the bandwidth of flow A. Similarly for flow C and flow D: you can see that on some links the utilized capacity is only two thirds — those links are not using their full capacity — yet if you want to increase the rate of any of these flows, then because of the bottleneck capacity distribution you would have to decrease the rate of some other flow, say flow B. That is why this particular allocation is a max-min fair allocation.

In a distributed way, we can ensure max-min fair allocation by applying the AIMD algorithm — additive increase multiplicative decrease — which was proposed by Chiu and Jain in 1989. The algorithm works like this: each user makes an additive increase followed, on congestion, by a multiplicative decrease. If both users follow this principle, gradually they come to the optimal point. Similarly, if you start from some other point, you make an additive increase, then towards the center make a multiplicative decrease, again an additive increase, again a multiplicative decrease, and gradually you move towards the optimal point.

So in the AIMD algorithm, the additive increase moves at 45 degrees and the multiplicative decrease moves along the line towards the origin; if you apply this algorithm, you converge towards the optimal point. This algorithm is used by TCP to adjust the size of the sliding window and thereby control the rate; we will look into its details sometime later.
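The convergence claim can be seen in a toy two-user simulation (the parameters are illustrative, not TCP's): both users add a fixed increment each round, and both halve their rates whenever the link is overloaded. The additive step preserves the gap between the two rates while the multiplicative step halves it, so the rates drift toward an equal share.

```python
# Toy two-user AIMD on a shared link of given capacity (parameters illustrative).
def aimd(x1, x2, capacity=1.0, alpha=0.01, beta=0.5, rounds=2000):
    for _ in range(rounds):
        if x1 + x2 > capacity:   # overload detected -> multiplicative decrease
            x1 *= beta
            x2 *= beta
        else:                    # no congestion -> additive increase
            x1 += alpha
            x2 += alpha
    return x1, x2

# starting from a very unfair split, the two rates end up nearly equal
x1, x2 = aimd(0.9, 0.1)
```

Each decrease halves the difference |x1 - x2| while each increase leaves it unchanged, which is the geometric argument behind Chiu and Jain's convergence result.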
So, in this particular lecture you got a broad overview of congestion control algorithms, and with this lecture we have covered the basic services offered at the transport layer.

In the next set of lectures, we will look into more details of the transport layer from the protocol perspective, and also see how these different services can be combined to build up the final services.

Thank you all for attending; I hope we will meet in the next class.