TCP Flow Control
Welcome back to the course on Computer Network and Internet Protocol. In the last class, we looked into the details of TCP connection establishment. In this class, we look into further details of TCP: the flow control algorithm that TCP uses to manage the sending rate at the sender side, the different timers associated with this flow control algorithm, and how to set proper values for those timers.

(Refer Slide Time: 00:48)

Well, starting with the flow control algorithm. TCP uses a sliding window flow control algorithm with the go-back-N ARQ principle: if there is a timeout, you retransmit all the data that is inside your sender window.

The idea, in a very simplified notion, is this. Say we start with sequence number 0. Remember that 0 is not the actual initial sequence number; just for simplicity of explanation we are using sequence number 0 here. Ideally the connection will start from the initial sequence number that was established during the handshaking phase of connection establishment.

So, let us start with sequence number 0, and at this stage the application does a 2 kB write to the transport layer buffer. When the application does a 2 kB write, you send 2 kB of data with sequence number 0. The receiver buffer can hold up to 4 kB of data. Once the receiver gets that 2 kB, it has only 2 kB of free space left that it can accept.
So, it sends back an acknowledgement with acknowledgement number 2048 (2 kB is 2048 bytes, because we are using byte sequence numbers) and a window size of 2048, meaning it can hold 2 kB of more data.

At this stage, the application again does a 2 kB write, so you send 2 kB of further data with the sequence number starting from 2048. Once this is received, the receiver buffer becomes full: the sender has now used the entire 2 kB window that was advertised earlier, so it cannot send any more data. The receiver sends an acknowledgement saying that it has received up to byte 4096 and the window size is 0; it is not able to receive any more data. So the sender is blocked at this point.

Now, at this stage, the application reads 2 kB of data from the buffer. Once the application reads 2 kB, the first 2 kB of buffer space becomes free again. When that happens, the receiver sends another acknowledgement: the acknowledgement number stays at 4096, the same as before, but the window size now changes from 0 to 2048, so the receiver can accept 2 kB of more data.

Once the sender receives this window update, it comes out of the blocked state and may send up to 2 kB of more data. Say at this stage the sender sends 1 kB of data with sequence number 4096. That 1 kB is received by the receiver, which puts it in the buffer and is left with 1 kB of free space.
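The byte accounting in this walkthrough can be sketched in a few lines of Python. This is a toy model with hypothetical names (`Receiver`, `receive`, `app_read`), not a real TCP implementation:

```python
# Toy model of the sliding-window exchange described above.
# Hypothetical names; a real TCP stack tracks far more state.

class Receiver:
    def __init__(self, buf_size=4096):
        self.buf_size = buf_size
        self.buffered = 0          # bytes held, not yet read by the app
        self.next_expected = 0     # cumulative ACK number

    def receive(self, seq, nbytes):
        assert seq == self.next_expected, "in-order delivery assumed here"
        assert nbytes <= self.buf_size - self.buffered, "window overrun"
        self.buffered += nbytes
        self.next_expected += nbytes
        # Reply is (ack number, advertised window)
        return self.next_expected, self.buf_size - self.buffered

    def app_read(self, nbytes):
        # The application drains the buffer, freeing window space.
        self.buffered -= nbytes
        return self.next_expected, self.buf_size - self.buffered

r = Receiver()
print(r.receive(0, 2048))      # (2048, 2048)
print(r.receive(2048, 2048))   # (4096, 0)    sender now blocked
print(r.app_read(2048))        # (4096, 2048) window update unblocks sender
print(r.receive(4096, 1024))   # (5120, 1024)
```

The four calls reproduce the four steps of the walkthrough, including the final 1 kB send that leaves 1 kB of free space.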
So, if the receiver now wants to send an acknowledgement, it will use the acknowledgement number 4096 plus 1024, reflecting what it has received, and it will advertise a window size of 1024, since it has 1 kB of free space. That acknowledgement it will send back to the sender.

That way the entire procedure goes on. Whenever the sender sends some data, say 2 kB, that 2 kB stays in the sender buffer until the acknowledgement is received. If the acknowledgement is lost, then based on the go-back-N principle the sender will retransmit the entire 2 kB that is still in its buffer.

(Refer Slide Time: 05:17)

So, the algorithm is pretty simple: a sliding window protocol with the go-back-N principle. But there are certain tricks in it. Let us look into those tricks. First of all, consider an application called telnet; I am not sure how many of you have used telnet. Telnet is an application to make a remote connection to a server: with telnet you can connect to a remote server and then execute commands on it.

Whenever you make such a remote connection and execute commands on it, say you have just typed "ls", the Linux command to list all the directories in the current folder. That ls command needs to be sent to the server side over the network, because it is a remote connection that you have made using telnet.

Now, telnet reacts on every such keystroke. In the worst case, whenever a character arrives at the sending TCP entity, TCP creates a 21-byte TCP segment: 20 bytes of header and 1 byte of data. The TCP segment header is 20 bytes long, but telnet sends the data to the server byte by byte.
So, the telnet application at the client side has just received 1 byte of data, and it tries to send that 1 byte with the help of a TCP segment. That segment will contain 20 bytes of header and only 1 byte of data. Just think of the amount of overhead: with that 21-byte segment, you are sending only 1 byte of useful data. And for this segment, another ACK and a window update are sent back when the application reads that 1 byte. This results in a huge wastage of bandwidth: you are sending a very small amount of data while a huge amount of resources is consumed by the headers.

(Refer Slide Time: 07:28)

To solve this problem, we use the concept called delayed acknowledgement. With delayed acknowledgement, you delay the acknowledgement and window update for up to some duration, 500 milliseconds by default in TCP, in the hope of receiving a few more data bytes within that interval. So it says: whenever you get a character from the telnet application, do not acknowledge it immediately. Rather, wait for a certain duration. Your hope is that by that time you may get some more data, and you will be able to send a segment where, along with the 20 bytes of header, you have more than 1 byte of data.

However, in this case, the sender can still send multiple short data segments if it wants to: delaying the acknowledgement only means the receiver is not sending an immediate acknowledgement.
And remember that, unless the sender gets an acknowledgement advertising available buffer space, it will not send any more data. So the receiver keeps waiting: only when it has received sufficient data from the sender, or has sufficient space available, does it send the acknowledgement back. The receiver withholds the immediate acknowledgement to discourage the sender from sending further small segments.

(Refer Slide Time: 09:01)

Well, now we have another algorithm. In the earlier case, with delayed acknowledgement, you expect that unless you send an acknowledgement, the sender will not send any further data. But the sender is not restricted to that: whenever it gets data from the telnet application, it may immediately send that data.

Now, to prevent the sender from sending these small segments, we use Nagle's algorithm. What is it? Nagle's algorithm says: when data comes to the sender in small pieces, just send the first piece and buffer all the rest until the first piece is acknowledged. It is like this: you have received a single byte, say byte A; you send that byte A from the sender to the receiver, and you keep buffering all the subsequent characters B, C, D until you get the acknowledgement back. The hope is that whenever you send a short packet into the internet, you do not send multiple short packets one after another.
That means you do not send segment A, segment B, segment C over the network one after another; rather, only one short segment will be outstanding in the network at any given instant of time.

That way, by the time you get the acknowledgement for segment A, your expectation is that multiple other characters will have accumulated in the sender buffer. You can combine them together, construct a single segment, and send it over the network.

(Refer Slide Time: 10:57)

The question comes: do we want to use Nagle's algorithm all the time? Nagle's algorithm intentionally increases the delay in transfer. So, if you are using a telnet application with Nagle's algorithm, the response time of the application will be slow: although you are typing something, TCP prevents that single byte from reaching the server until it gets an acknowledgement for the previous short segment. That is why we do not want to use Nagle's algorithm for delay-sensitive applications.

And there is another interesting observation here: what may happen if you implement Nagle's algorithm and delayed acknowledgement together? In Nagle's algorithm the sender has sent one small segment and is waiting for the acknowledgement, but the receiver is delaying that acknowledgement. Now, if the receiver is delaying the acknowledgement and the sender is waiting for it, the sender may go into starvation, and you may have a considerable delay in getting a response from the application.
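Nagle's buffering rule can be sketched as follows. This is a simplified model under assumed names (`NagleSender`, `app_write`); a real stack also considers the congestion window and timers:

```python
# Minimal sketch of Nagle's decision rule (hypothetical helper, not a
# real stack): send a small piece only if nothing is outstanding;
# otherwise buffer it until the outstanding data is acknowledged.

class NagleSender:
    def __init__(self, mss=1460):
        self.mss = mss            # maximum segment size
        self.pending = b""        # data written by the app, not yet sent
        self.unacked = 0          # bytes currently in flight

    def app_write(self, data):
        self.pending += data
        return self._try_send()

    def ack_received(self):
        self.unacked = 0
        return self._try_send()

    def _try_send(self):
        # Send if we have a full segment, or if the pipe is empty.
        if self.pending and (len(self.pending) >= self.mss or self.unacked == 0):
            seg, self.pending = self.pending[:self.mss], self.pending[self.mss:]
            self.unacked += len(seg)
            return seg
        return None               # keep buffering

s = NagleSender()
print(s.app_write(b"A"))   # b'A'  -- first small piece goes out at once
print(s.app_write(b"B"))   # None  -- buffered: a small segment is in flight
print(s.app_write(b"C"))   # None
print(s.ack_received())    # b'BC' -- coalesced into a single segment
```

Note how at most one small segment is outstanding: B and C wait in the buffer and go out together once A is acknowledged.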
So, that is why, if you implement Nagle's algorithm and delayed acknowledgement together, it may result in a scenario where you experience slow response time from the application because of this starvation.

In a broad sense, what does delayed acknowledgement do? It prevents the receiver from sending small window updates: you delay the acknowledgement at the receiver side with the expectation that the sender will accumulate some more data in its buffer and be able to send a large segment rather than a small one. Whereas with Nagle's algorithm, you wait for the acknowledgement of a small segment with the expectation that by that time the application will write a few more bytes to the sender buffer. These two together can cause starvation; that is why we do not want to implement delayed acknowledgement and Nagle's algorithm together.

(Refer Slide Time: 13:14)

A related problem with window update messages is what we call the silly window syndrome. Let us see what the silly window syndrome is. It occurs when data is passed to the sending TCP entity in large blocks, but an interactive application on the receiver side reads data only one byte at a time. Look at the receiver side: the sending application is sending data at a high rate, say 10 Mbps, and the sender has lots of data to send. But you are running some kind of interactive application at the receiver side, so data is read at a very slow rate, 1 byte at a time in the example given here. Now the following problem occurs.
Initially, say the receiver buffer is full. When the receiver buffer is full, the receiver sends an acknowledgement to the sender with the corresponding acknowledgement number and a window value of 0, so the sender is blocked. Now the application reads 1 byte of data. The moment the application reads 1 byte, there is 1 byte of free space in the buffer. Say the receiver now sends another acknowledgement to the sender advertising a window size of 1.

If it sends this small window advertisement, what will the sender do? It will send only 1 byte of data. And once it sends that 1 byte, the receiver buffer again becomes full. This becomes a loop: because of the 1-byte window update message, the sender is tempted to send a 1-byte segment for every window update. This again creates the same problem we were discussing earlier: you are sending multiple small segments one after another.

And we do not want to send those multiple small segments, because they carry significant overhead from the network perspective: they consume a huge amount of bandwidth without transferring any meaningful amount of data to the receiver.

(Refer Slide Time: 15:43)

To solve this problem, we have a solution proposed by Clark, which we call Clark's solution. Clark's solution says: do not send a window update for 1 byte; wait until sufficient space is available at the receiver buffer, and only then send the window update message. Now the question comes: what is the definition of sufficient space? That depends on the TCP implementation; typically you wait until a certain percentage of the buffer space becomes available, and only then send the window update message to the sender.
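Clark's rule can be sketched as a small helper. The threshold of min(half the buffer, one maximum segment size) follows the common textbook description; the exact value is implementation-dependent:

```python
# Sketch of Clark's fix for silly window syndrome (hypothetical
# threshold logic): advertise the free space only when it reaches
# min(half the buffer, one MSS); otherwise keep advertising zero.

def advertised_window(free_space, buf_size, mss=1460):
    threshold = min(buf_size // 2, mss)
    return free_space if free_space >= threshold else 0

# Receiver buffer of 4096 bytes, application reading slowly:
print(advertised_window(1, 4096))     # 0    -- 1-byte update suppressed
print(advertised_window(1024, 4096))  # 0    -- still below one MSS
print(advertised_window(2048, 4096))  # 2048 -- enough space, update sent
```

The sender therefore never sees a tempting 1-byte window, which breaks the loop of 1-byte segments described above.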
TCP Flow Control - Part 2
Well, here the interesting fact is that, to handle short segments at the sender and the receiver together, Nagle's algorithm and Clark's solution to the silly window syndrome are complementary. Unlike the earlier case, where Nagle's algorithm and delayed acknowledgement could create starvation, that will not happen here.

Nagle's algorithm solves the problem caused by the sending application delivering data to TCP 1 byte at a time: it prevents the sender from sending small segments. Clark's solution, in turn, prevents the receiving application from triggering window updates of 1 byte at a time: even when the receiving application fetches data from the TCP layer 1 byte at a time, you do not send an immediate window update message.

There is a certain exception to this. Whenever you apply Nagle's algorithm and Clark's solution, there is again some amount of delay from the application perspective. The application response time will still be a little slow, because the sender waits for sufficient data to accumulate before creating a segment, and similarly the receiver waits for sufficient data to be read by the application before sending the window update. The response time may still be somewhat higher, though not as high as with the starvation caused by Nagle's algorithm plus delayed acknowledgement. But for certain applications, say some real-time application, you want the data to be transferred immediately, bypassing Nagle's algorithm and Clark's solution; in that case you can set the PSH flag in the TCP header. The PSH flag helps send the data immediately: it informs the sender to create a segment immediately, without waiting for more data from the application side.
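As an aside, the Berkeley sockets API does not normally let an application set the PSH flag directly, but a latency-sensitive application can bypass Nagle's algorithm with the standard `TCP_NODELAY` socket option:

```python
# Disabling Nagle's algorithm on a TCP socket with TCP_NODELAY.
# This is the standard sockets-API mechanism for latency-sensitive
# applications; the PSH flag itself is set by the kernel.

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
# From now on, small writes are sent immediately instead of being
# buffered until the previous small segment is acknowledged.
print(sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY))  # nonzero: Nagle disabled
sock.close()
```

Interactive protocols such as SSH and many game or trading clients commonly set this option for exactly the response-time reason discussed here.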
So, you can reduce the response time by utilizing the PSH flag.

(Refer Slide Time: 18:43)

Well now, the second thing is handling out-of-order segments in TCP. What does TCP do? TCP buffers out-of-order segments and forwards duplicate acknowledgements. This concept of duplicate acknowledgement is an interesting part of TCP. Say this is the receiver buffer: the receiver has received everything up to byte 1023 and is expecting byte 1024, but it has received a segment starting from byte 2048.

When it received the previous in-order segment, it sent an acknowledgement with acknowledgement number 1024; that means the receiver is expecting the segment starting from byte 1024. But it has now received this out-of-order segment. So it will put the out-of-order segment in the buffer, but it will again send an acknowledgement with the same acknowledgement number, saying that it is still expecting byte 1024.

This acknowledgement we call a duplicate acknowledgement, or in short form, DUPACK. The DUPACK informs the sender that this particular receiver has not received the bytes starting from 1024, but it has received certain other bytes after that.

This has an important consequence in the design of the TCP congestion control algorithm. We look into the details of this consequence when we discuss TCP congestion control in the next class.

(Refer Slide Time: 21:14)

Here is an example. Say the receiver has received bytes 0, 1, 2; it has not received byte 3; and then it has received bytes 4, 5, 6, 7.
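This arrival sequence can be simulated with a small sketch (hypothetical helper `ack_for`; a real receiver tracks byte ranges rather than individual bytes):

```python
# Sketch of the receiver-side cumulative-ACK logic: out-of-order bytes
# are buffered, and every arrival is answered with an ACK for the
# smallest byte still missing (a DUPACK when the number repeats).

def ack_for(received):
    """Return the cumulative ACK: the smallest byte number not yet received."""
    n = 0
    while n in received:
        n += 1
    return n

received = set()
acks = []
for byte_no in [0, 1, 2, 4, 5, 6, 7, 3]:   # byte 3 arrives last (retransmitted)
    received.add(byte_no)
    acks.append(ack_for(received))

print(acks)   # [1, 2, 3, 3, 3, 3, 3, 8] -- the repeated 3s are DUPACKs
```

The arrivals of bytes 4 through 7 each produce ACK 3 again (DUPACKs), and the retransmitted byte 3 fills the hole, so the final cumulative ACK jumps straight to 8.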
So, TCP sends a cumulative acknowledgement with acknowledgement number 3, which acknowledges everything up to byte 2 and indicates that byte 3 is the next expected byte. Once byte 4 is received, a duplicate ACK with acknowledgement number 3, the next expected byte, is forwarded. This triggers the congestion control algorithm, which we look into in detail in the next class. After a timeout, the sender retransmits byte 3.

The moment byte 3 is received, the receiver has in fact received all the bytes up to byte 7. So it can send another cumulative acknowledgement with acknowledgement number 8; that means it has received everything up to byte 7 and is now expecting byte 8.

(Refer Slide Time: 22:15)

TCP has multiple timers in its implementation. Let us look into those timers in detail. One important timer is the TCP retransmission timeout, in short form TCP RTO. This retransmission timeout helps in the flow control algorithm. Whenever a segment is sent, the retransmission timer is started. If the segment is acknowledged before the timer expires, the timer is stopped; if the timer expires before the acknowledgement comes, the segment is retransmitted. So, once you have transmitted a segment from the sender side, you start the timer; if you receive the acknowledgement within the timeout, you stop the timer; otherwise, once the timeout occurs, you retransmit the segment.

A timeout means something bad has happened in the network. It also triggers the congestion control algorithm, which we will discuss in the congestion control class, but in addition the lost segment is retransmitted: if the sender does not receive the acknowledgement within the timeout, it assumes the segment has been lost.

(Refer Slide Time: 23:36)

Now, the question comes: what can be an ideal value for this retransmission timeout?
So, how will you set this retransmission timeout? One possible solution is to estimate the round trip time: you have sent a segment and you are waiting for the corresponding acknowledgement, and ideally, if everything is good in the network, the segment transmission plus the acknowledgement transmission takes one round trip time.

So in one round trip time you expect to get everything back, but to allow for network delay and variability, you can set the retransmission timeout to some positive multiple of RTT: some n × RTT, where n can be 2, 3, something like that, based on your design choice. But then the question comes: how do you make an estimation of RTT? The network is really dynamic, and RTT estimation is difficult at the transport layer. Let us see why.

(Refer Slide Time: 24:32)

So, let us plot the distribution of the round trip time at the data link layer and at the transport layer. The difference is that at the data link layer you have two nodes that are directly connected via a link.
If the two nodes are directly connected via a link, you measure how much time it takes to send a message and get back the reply.

But at the transport layer, in between the two end nodes you have the entire internet, and you are trying to estimate the average round trip time of sending a message to the remote end host and receiving back a reply.

Now, if you plot the distribution of this round trip time, you will see that at the data link layer the variation is not very high, because it is just a single link, and the dynamicity of a single link is very low. Because the dynamicity is low, you can make a good estimation: the average will give you a good estimate of the round trip time.

(Refer Slide Time: 25:37)

But that is not true for the transport layer. At the transport layer there is a lot of variability in the intermediate network between the sender and the receiver, so the round trip time varies significantly; the variance in round trip time is very high.

If you just take an average, the average will not give you the right estimate: the actual value may fall far from the average, with significant deviation. And if you set the retransmission timeout based on that RTT estimate, you will get spurious RTOs. So the solution is to use a dynamic algorithm that constantly adapts the timeout interval, based on continuous measurement of network performance.

(Refer Slide Time: 26:27)

How will you do that? For that we have the Jacobson algorithm, proposed in 1988, which is used in TCP.
The Jacobson algorithm says that, for each connection, TCP maintains a variable called SRTT (Smoothed Round Trip Time), which is the best current estimate of the round trip time to the destination.

Whenever you send a segment, you start a timer. This timer has two different purposes: it can be used to trigger the timeout, and at the same time it can be used to find out how much time it takes to receive the acknowledgement.

(Refer Slide Time: 27:09)

Say you have sent a segment from the sender to the receiver and started the timer; the clock keeps ticking. If you receive the acknowledgement, the timer stops, and the elapsed time gives you a measurement of the round trip time. But if you do not receive the acknowledgement, then after some timeout the timer expires, and once it expires, you retransmit the segment. So the same timer serves both purposes.

So, you measure the time R when you receive back an acknowledgement, and you update the SRTT as follows:

SRTT = α × SRTT + (1 − α) × R

This mechanism we call an exponentially weighted moving average, or EWMA. Here α is a smoothing factor that determines how quickly the old values are forgotten, that is, what weight you give to the old values. For TCP, Jacobson set α to 7/8.

(Refer Slide Time: 28:39)

Now, this EWMA algorithm has a problem: even with a good value of SRTT, choosing a suitable RTO is nontrivial.
The initial implementation of TCP used RTO equal to two times SRTT, but from practical experience people found that a constant multiple is very inflexible, because it fails to respond when the variance goes up.

If the measured RTT deviates too much from the estimated RTT, you will get spurious RTOs. So when the RTT fluctuation is high, you run into a problem, and this normally happens at high load: when the network load is very high, the RTT fluctuation becomes high. In that case, the solution is that, apart from the average, you also consider the variance of RTT during the RTO estimation.

Now, another question comes: how will you get the RTT measurement when a segment is lost and retransmitted? If a segment is lost and retransmitted, you will not get a proper RTT measurement: you transmitted the segment and started the timer, the segment was lost, a timeout occurred, you retransmitted the segment, and only then did you get the acknowledgement, so the measurement is ambiguous.

There are other TCP timers as well. The persistence timer avoids deadlock when the receiver buffer is announced as 0: after the timer goes off, the sender forwards a probe packet to the receiver to get the updated window size. There is also the keepalive timer, which closes the connection when it has been idle for a long duration: you have set up a connection but are not sending any data, so after the keepalive timer goes off, the connection is closed. And there is the time wait state, which we have seen in connection closure: you wait before closing a connection, in general twice the packet lifetime.

So, this is all about the flow control algorithm and the setup of the different TCP timer values.
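Putting the estimator pieces together: the sketch below combines Jacobson's SRTT EWMA with a mean-deviation term, as later standardized in RFC 6298, and skips ambiguous samples from retransmitted segments (Karn's rule). Constants are the textbook values and the class name is illustrative; a real stack also clamps and rounds the RTO:

```python
# Sketch of the RTO estimator discussed above: Jacobson's EWMA for SRTT,
# extended with a mean-deviation term (RTTVAR), plus Karn's rule of
# skipping RTT samples from retransmitted segments.

ALPHA = 7 / 8          # weight kept on the old SRTT
BETA = 3 / 4           # weight kept on the old RTTVAR

class RTOEstimator:
    def __init__(self, first_sample):
        # Initialization per RFC 6298 on the first measurement.
        self.srtt = first_sample
        self.rttvar = first_sample / 2

    def update(self, r, retransmitted=False):
        if retransmitted:          # Karn's rule: ambiguous sample, skip it
            return self.rto()
        self.rttvar = BETA * self.rttvar + (1 - BETA) * abs(self.srtt - r)
        self.srtt = ALPHA * self.srtt + (1 - ALPHA) * r
        return self.rto()

    def rto(self):
        # Average plus four times the deviation, so high variance
        # automatically widens the timeout.
        return self.srtt + 4 * self.rttvar

est = RTOEstimator(0.100)          # first measured RTT: 100 ms
print(round(est.rto(), 3))         # 0.3  (100 ms + 4 * 50 ms)
est.update(0.120)                  # a slightly larger sample
print(round(est.srtt, 4))          # 0.1025
```

When RTT samples fluctuate, RTTVAR grows and pushes the RTO up, which is exactly the fix for the inflexible RTO = 2 × SRTT rule described above.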
In the next class, we will see how we apply this loss detection and duplicate acknowledgement mechanism for the management of TCP congestion control.

Thank you all for attending this class.