Transport Layer Performance
Welcome back to the course on Computer Networks and Internet Protocols. In the last class we looked into two different services of the transport layer: we covered connection establishment in detail, and then the flow control and reliability protocols, the ARQ protocols. We looked into three variants of ARQ: the stop-and-wait protocol and two different sliding window ARQ protocols, go-back-N and selective repeat.

(Refer Slide Time: 00:54)

Now we look into certain performance issues in the transport layer and, along with that, how we can improve the end-to-end performance of a protocol during data transmission. Our broad objective is to look into the details of transport layer protocol performance and how you can improve it by incorporating certain additional features over the transport layer.

(Refer Slide Time: 01:20)

The first thing we are going to look into in detail is a parameter which has a significant impact on transport layer protocol performance. Based on it you can tune the protocol parameters, and sometimes it will also help you choose which variant of the protocol you should use in practice. By this time you already know three different variants of ARQ protocols: the two sliding window ARQ protocols, go-back-N and selective repeat, along with stop-and-wait ARQ.

Now suppose I ask you this question: for a typical network, some parameters are given, say the end-to-end capacity and the delay for transmitting a packet. Can you make a feasible choice between a sliding window type of protocol and the stop-and-wait ARQ protocol?
So, which particular ARQ protocol are you going to use for your network? One important parameter that will help us in this decision making is called the bandwidth-delay product (BDP). First we look into the details of the bandwidth-delay product and its implication for the selection of the window size in a sliding window protocol, as well as for the choice of a particular protocol in protocol design.

As the name suggests, the bandwidth-delay product is the product of the link bandwidth and the link delay. Consider an end-to-end link: if the link has a bandwidth of say 50 Kbps and a one-way transit delay of 250 milliseconds, then the BDP is 50 kilobits per second times 250 milliseconds, which comes to 12.5 kilobits. From the transport layer point of view, a segment is the unit of transport layer data you want to transmit. If you have a 1000-bit segment size, this BDP of 12.5 kilobits corresponds to 12.5 segments. That is the definition of the bandwidth-delay product.

Now let us look at how the bandwidth-delay product affects protocol performance and the design choice of a particular transport layer protocol. Consider the event of a segment transmission and the corresponding acknowledgement reception. This entire exchange takes one round trip time. So what is a round trip time?

You have a sender and a receiver, with the network in between connecting them. You transmit a data frame and you get back the corresponding acknowledgement.
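The BDP arithmetic above can be sketched in a few lines. This is only a worked check of the lecture's numbers (50 Kbps link, 250 ms one-way delay, 1000-bit segments); the variable names are mine.

```python
# Bandwidth-delay product for the example link in the lecture.
bandwidth_bps = 50_000       # 50 Kbps link bandwidth
one_way_delay_s = 0.250      # 250 ms one-way transit delay
segment_bits = 1000          # 1000-bit segment size

bdp_bits = bandwidth_bps * one_way_delay_s
print(bdp_bits)                  # 12500.0 bits = 12.5 kilobits
print(bdp_bits / segment_bits)   # 12.5 segments fit "in flight" one way
```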
The time difference between the moment you transmitted the data and the moment you received the acknowledgement gives you the round trip time, or RTT.

The RTT is basically twice the one-way latency: if your one-way latency is, say, 200 milliseconds, then it takes 200 milliseconds to transfer the data frame and another 200 milliseconds to get back the acknowledgement frame. This assumes there is no congestion and no unintended delay in the network. If the network is behaving smoothly, with no intermediate delay components inflating the end-to-end delay, the total end-to-end delay can be approximated by the propagation delay. Under that assumption, the RTT gives you an approximate estimate of the end-to-end delay of transmitting a packet: you send the data, you get the acknowledgement, you measure the time difference between them, and from there you estimate the RTT as twice the one-way latency.

Now think about an end-to-end link. The maximum number of segments that can be outstanding during this duration is the one-way bandwidth-delay product, the bandwidth multiplied by the one-way latency, times 2: with the earlier numbers, 12.5 times 2, equal to 25 segments. So 25 segments can be outstanding. Why is this so?

(Refer Slide Time: 06:25)

Let us look at an example. You have a sender at one end and a receiver at the other end, and you can think of the entire logical channel in between as a pipe.
Since this is two-way communication, you are sending data through one pipe and receiving acknowledgements through another: one pipe for sending the data and one pipe for getting the acknowledgements.

The total number of requests that can be outstanding is the total number of bits you can push into these two pipes. The latency is how much time it takes to transfer a bit from the sender to the receiver, so the latency denotes the length of the pipe. The bandwidth, on the other hand, gives the cross-sectional area of the pipe. If you multiply bandwidth by latency, you get the amount of data that can be inside the pipe at any moment.

Because the communication is two-way, one pipe carries the data and the other carries the acknowledgements. By the principle of the sliding window protocol, the acknowledgements fill up one pipe while the data fills up the other; the data corresponding to the acknowledgements in flight is stored in the receiver buffer, because the receiver has already received that data. So the receiver holds that amount of data and has started sending acknowledgements: the acknowledgements are filling up one pipe while, at the same time, the sender keeps sending data that fills up the data pipe.

That way, if your bandwidth-delay product is 12.5 segments as in the earlier example, you can have 12.5 segments of data filling up the data pipe.
Another 12.5 segments of data sit in the receiver buffer while their corresponding acknowledgements fill up the second pipe. So in total 25 segments of data can be outstanding: in the link, and in the receiver buffer.

That gives you the maximum sender window size, which I will write as 'swnd'. The sender window is the maximum number of segments that can be outstanding in the network while you wait without getting an acknowledgement. If you make the sender window size equal to 25 segments, you can be sure that the data you are pushing into the network uses the entire capacity of the channel, and that way you get the maximum utilization of the end-to-end pipe between the sender and the receiver.

So this gives a theoretical bound: the maximum sender window size which provides the maximum capacity. Now relate it to the sequence numbers we discussed earlier, the relation between the window size and the sequence number space. Say you have chosen the window size w in this way, and assume you are using a go-back-N ARQ protocol. You know that in that case the maximum window size can be 2^n - 1, so from there you can find out what your sequence number space should be.
That is, how many bits you should reserve for the sequence number such that you get the expected window size, and at the same time that window size fills up the end-to-end capacity of the network.

Similarly, if you are using selective repeat ARQ, you know that w = 2^n/2. So you can select the sequence number space in such a way that this relation holds. That way you can find the maximum window size which gives you the maximum utilization of the channel, and accordingly set up the sequence number space for the different flow control algorithms.

(Refer Slide Time: 12:16)

Ideally, then, the maximum number of segments that can be outstanding within this duration is those 25 segments, equal to the channel capacity plus 1. The plus 1 accounts for the acknowledgement of one frame that has just been received by the sender but not yet processed.

It is just as if you have filled up both of the pipes in between, one data pipe and one acknowledgement pipe, and one acknowledgement has just arrived at the sender. That comes to 2BD + 1. A window size of 2BD + 1 gives you the maximum link utilization, where BD denotes the number of frames equivalent to the BDP. This is an important concept for deciding the window size of a window-based flow control algorithm, as in the example I gave you earlier.
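The sequence-number sizing described above can be sketched as a small search: given a desired window size w, find the smallest number of sequence bits n that satisfies the lecture's bounds, w <= 2^n - 1 for go-back-N and w <= 2^n/2 for selective repeat. The function names are mine.

```python
# Smallest sequence-number width n (in bits) supporting window size w.
def seq_bits_gbn(w):
    # go-back-N bound from the lecture: w <= 2^n - 1
    n = 1
    while 2**n - 1 < w:
        n += 1
    return n

def seq_bits_sr(w):
    # selective repeat bound from the lecture: w <= 2^n / 2
    n = 1
    while 2**n // 2 < w:
        n += 1
    return n

print(seq_bits_gbn(25))  # 5 bits: 2^5 - 1 = 31 >= 25
print(seq_bits_sr(25))   # 6 bits: 2^6 / 2 = 32 >= 25
```

So for the 25-segment window derived above, go-back-N needs a 5-bit sequence number while selective repeat needs 6 bits.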
Transport Layer Performance - Part 2
Let us see an example of this. Consider a network with an end-to-end link bandwidth of 1 Mbps, a delay of 1 millisecond, and a segment size of 1 kilobyte (1024 bytes). The question is which protocol would be better for flow control: stop-and-wait, go-back-N, or selective repeat?

To solve this, we first compute the BDP: 1 Mbps times 1 millisecond equals 1 kilobit. The segment size of 1 kilobyte is 8192 bits, so the segment is roughly eight times larger than the BDP.

Because the BDP is 1 kilobit while the segment size is 1 kilobyte, the link cannot hold an entire segment. Consider the pipe between the sender and the receiver, and assume this pipe carries both the data and the acknowledgements, a data-plus-ACK pipe. This data-plus-ACK pipe cannot hold an entire segment, because the BDP is 1 kilobit whereas the segment size is 1 kilobyte.

In this case, sliding window protocols do not improve performance. Why? Because you will not be able to send multiple segments in parallel: even one segment cannot fill up the pipe completely. There is no reason to send multiple segments in parallel, because you will not get any advantage from parallelization when the link bandwidth is 1 Mbps and the delay is 1 millisecond.

Under these conditions it is always good to choose a stop-and-wait protocol, because stop-and-wait has the least complexity.
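The worked example above can be checked with the arithmetic spelled out. This is a sketch of the lecture's comparison (1 Mbps link, 1 ms one-way delay, 1 KB segments); the decision rule in the last lines is my summary of the lecture's reasoning, not a general-purpose test.

```python
# The lecture's worked example: does one segment even fit in the pipe?
bandwidth_bps = 1_000_000     # 1 Mbps end-to-end bandwidth
one_way_delay_s = 0.001       # 1 ms delay
segment_bits = 1024 * 8       # 1 KB segment = 8192 bits

bdp_bits = bandwidth_bps * one_way_delay_s   # 1000 bits, i.e. ~1 kilobit
print(segment_bits / bdp_bits)               # 8.192: segment is ~8x the BDP

# If a single segment overfills the pipe, pipelining buys nothing,
# so the simpler stop-and-wait protocol is the reasonable choice.
if segment_bits > bdp_bits:
    print("stop-and-wait is sufficient")
```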
A sliding window protocol, as you understand, has more complexity by design: you have to maintain the sender window and the receiver window, and you have to maintain the sequence number field; all these overheads are there. With a stop-and-wait protocol the logic is pretty simple: you send a segment and wait for the acknowledgement; once you get the acknowledgement, you send the next segment. So stop-and-wait has significantly less overhead compared to a sliding window protocol, and because here we do not get the advantage of parallelization, we prefer stop-and-wait in this example scenario.

This gives you an intuition for how the bandwidth-delay product helps you make a design choice: which protocol you should use to improve network performance with minimal complexity. At the same time, as in the earlier example, the bandwidth-delay product helps you choose the optimal window size, the one that utilizes the maximum capacity of the network. And once you have selected that window size and settled on a sliding window protocol based on this philosophy, you can find out what sequence number space you should use so that there is no confusion during the execution of the protocol, as in the examples we looked at earlier for the different variants of the sliding window protocol, the go-back-N protocol and the selective repeat protocol.

(Refer Slide Time: 17:32)

From here, let us look at how we interface the application layer with the transport layer at the sender side. This will give a design idea of how to design a transport layer protocol.
The example I have taken is from the Linux operating system. You have the user space and the kernel space. In user space you are running some application that is sending data. There are system calls into the kernel, the write() system call and the send() system call; we will look into these system calls later when we discuss socket programming. Through these system calls you can send the data from the application to the kernel.

Now, here you have a transmission rate control module, which applies your flow control algorithm. The function names here are hypothetical and do not directly match what is implemented in Linux; they are just to give you an idea of how you could implement your own transport protocol.

You have a function called TportSend(), triggered periodically by the transmission rate control, that is, by your flow control algorithm and your window size, which determine how much data you can send. This function is called and the data is sent to IP, the network layer of the protocol stack.

Note that these two rates are asynchronous: the application can generate data at a higher rate than the rate at which the transport layer can send it to the other end. The transmission rate control may find that the optimal rate is, say, 2 Mbps whereas the application generates data at 10 Mbps. If the application generates at 10 Mbps and the transmission rate control sends at 2 Mbps, you obviously need an intermediate buffer to store the data.

(Refer Slide Time: 19:42)

So whatever rate the application generates data at, it may sometimes be higher than the rate at which the transmission rate control module works.
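The rate mismatch above also tells you how quickly the intermediate buffer fills up. A minimal sketch with the lecture's two rates; the 1 MB buffer size is an assumption of mine, not from the lecture.

```python
# How long until the source buffer fills at the lecture's rates?
app_rate_bps = 10_000_000     # application writes at 10 Mbps
send_rate_bps = 2_000_000     # transport drains at 2 Mbps
buffer_bytes = 1_000_000      # assumed 1 MB source buffer

backlog_bps = app_rate_bps - send_rate_bps     # backlog grows at 8 Mbps
fill_time_s = buffer_bytes * 8 / backlog_bps   # seconds until write() blocks
print(fill_time_s)                             # 1.0 second
```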
The application writes the data into the buffer, and then the TportSend() function picks up the data from the buffer, at the rate provided by the transmission rate control module, and sends it to the next layer of the protocol stack.

Now, you may have different connections at the application layer, and they are treated differently, so we need connection-specific source buffering. We call this buffer the source buffer: for every independent connection from the application layer, there is one transport layer buffer associated with it.

There is another interesting fact about the write call through which you are sending data from the application: it blocks the port, the identifier through which you uniquely identify an application, until the complete data is written into the transport buffer.

It may happen that your transmission rate control is sending data at a rate of 1 Mbps while the application is generating data at a rate of 10 Mbps, as in the example I gave earlier: the application produces data at a much higher rate than the transmission rate control sends it to the lower layer of the protocol stack. The buffer has a finite space, so after some time it gets filled up. Once the buffer is full, the transport layer blocks the application from writing any more data into it, to avoid overflowing the transport layer buffer.

(Refer Slide Time: 21:39)

Now let us look at the receiver side. The idea is again similar: TportRecv(), the transport receive function, receives the data from the network layer.
Once it has received the data from the network layer, it looks at the port number in the transport layer header. Based on the port number, it decides which transport layer queue the data should go into: this queue is for one application, that queue is for another application. Every such queue is bound to one port because, as you have seen earlier, the port number uniquely identifies an application. Based on that, you put the data in the buffer.

From the application side you make a read or receive call through which you read the data from this buffer. The mechanism is that whenever you make a receive call, it waits on a CheckBuffer() function. It may happen that when the application makes the receive call, the receive buffer is empty because no data has arrived yet; in that case the call gets blocked. The moment data arrives in the buffer, an interrupt signal is sent to the call, and the call takes the data from the buffer and delivers it to the application.

So with the help of this interrupt, the receive call interacts with the buffer. You can see that the read or receive call is a blocking call: it blocks until data is received and the complete data is read from the transport buffer. Whenever you make a receive call, the call blocks at the port until data arrives in the buffer; once data arrives, the CheckBuffer() function sends an interrupt to the read or receive call, which takes the data from the buffer and is then released.

That way, both of these calls are blocking calls, at the sender side as well as the receiver side.
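The blocking behaviour on both sides can be modelled with a bounded producer-consumer queue. This is a sketch, not the kernel mechanism: TportSend() is the lecture's hypothetical function name, the 4-slot buffer size is assumed, and Python's `queue.Queue` stands in for the per-connection transport buffer.

```python
import threading
import queue

# Per-connection source buffer with a small, assumed capacity.
source_buffer = queue.Queue(maxsize=4)

def app_write(segment):
    # Like write()/send(): blocks the caller while the buffer is full.
    source_buffer.put(segment)

def tport_send():
    # Like TportSend() picking up data: blocks while the buffer is empty.
    return source_buffer.get()

# The application thread writes 8 segments into a 4-slot buffer;
# it blocks whenever the buffer is full, until the sender drains it.
t = threading.Thread(target=lambda: [app_write(i) for i in range(8)])
t.start()
received = [tport_send() for _ in range(8)]
t.join()
print(received)   # [0, 1, 2, 3, 4, 5, 6, 7]
```

The same queue, viewed from the other side, models the receiver: the read call blocks on an empty buffer until TportRecv() deposits a segment.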
The sender call gets blocked when the buffer is full; the receiver call gets blocked when the buffer is empty.

(Refer Slide Time: 24:04)

The question comes: how can you organize this buffer pool? There are multiple ways to organize the transport layer buffer, which is a software buffer.

If your segments are all of the same size, you can organize the buffer as a pool of identically sized buffers, each holding one segment. A segment contains a certain number of bytes, so each individual buffer's size equals the segment size, every individual buffer contains one segment, and the buffer pool can hold multiple segments altogether.

For variable segment sizes you can use chained fixed-size buffers, like a linked list: each individual buffer is of the maximum segment size, the buffers are connected by a linked-list data structure, and together they form the buffer pool for a particular transport layer port corresponding to an application. With chained fixed-size buffers, space is wasted if the segment sizes vary widely: if one segment is 1024 bytes and another is 10 bytes, a significant amount of free space in each buffer goes unused. If instead you make the buffer size small, you need multiple buffers to store a single segment, which adds implementation complexity.

(Refer Slide Time: 25:41)

In that case we use variable-size buffers: a pool of buffers connected via a linked-list data structure, but with varying sizes. The advantage is better memory utilization; you reduce the amount of wasted memory space.
If you have a large segment, you put it in a large buffer; if you have a small segment, you put it in a small buffer. The disadvantage is a more complicated implementation, because the individual buffer sizes are dynamic: you have to dynamically allocate the memory.

The third solution is to use a single large circular buffer for every connection. The circular buffer holds individual segments of different sizes one after another, say one segment of 1 KB, another of 4 KB, another of 10 KB, with the remaining unused space available for the next incoming segments.

This single large circular buffer makes good use of memory when the connections are heavily loaded. When connections are lightly loaded, a fixed-size scheme again wastes a large amount of memory space inside the buffer, and in that case the variable-size buffers may perform well.

So your choice of transport layer buffer design depends on what type of applications you are going to run and what type of data those applications generate; based on that, you can decide which buffer organization is most suitable for your application.

This lecture gave you a broad idea of the design choices behind several transport layer parameters and how they impact transport layer protocol performance.
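Before closing, the single circular buffer per connection described above can be sketched as follows. This is a toy illustration, not a kernel implementation: the one-byte length prefix (limiting segments to 255 bytes) and the capacity are my assumptions, chosen only to show variable-size segments packed back to back with wrap-around.

```python
# Toy single circular buffer: variable-size segments, length-prefixed.
class CircularBuffer:
    def __init__(self, capacity):
        self.buf = bytearray(capacity)
        self.capacity = capacity
        self.head = 0    # index of the next byte to read
        self.size = 0    # bytes currently stored

    def put(self, segment: bytes):
        needed = 1 + len(segment)            # 1-byte length prefix
        if self.size + needed > self.capacity:
            raise BufferError("buffer full")  # sender would block here
        for b in bytes([len(segment)]) + segment:
            self.buf[(self.head + self.size) % self.capacity] = b
            self.size += 1

    def get(self) -> bytes:
        length = self.buf[self.head]         # read the length prefix
        self.head = (self.head + 1) % self.capacity
        self.size -= 1
        out = bytearray()
        for _ in range(length):
            out.append(self.buf[self.head])
            self.head = (self.head + 1) % self.capacity
            self.size -= 1
        return bytes(out)

cb = CircularBuffer(16)
cb.put(b"abc")               # a 3-byte segment
cb.put(b"defgh")             # a 5-byte segment of a different size
print(cb.get(), cb.get())    # b'abc' b'defgh'
```

Segments of different sizes share one contiguous region, which is why this layout wastes little memory under heavy load.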
We also looked at a hypothetical implementation of the different transport layer functions, how the transport layer calls are interfaced with the application layer, and what types of transport layer buffers you can use in that scenario, based on the needs of the application.

In the next class we will continue from here and look into another transport layer service, the congestion control mechanism. Then we will go into the details of transport layer protocol implementation: we will talk about the TCP and UDP protocol implementations in detail.

So, thank you all for attending this class.