Module 1: Internet QoS

Traffic Scheduling


So, welcome back to the course on Computer Networks and Internet Protocols. In the last three lectures we have looked into the basic quality of service architecture to provide quality of service over the internet. Today we will dig further into the different kinds of queuing and congestion avoidance strategies that we apply to provide quality of service to different applications.

We start from the point that we have already seen earlier. In the basic quality of service architecture, whenever packets are coming into the network, first you have to use an admission control strategy to admit into the network only those flows for which you can support the desired quality of service requirement. Then we do something called classification and marking, which is used to mark individual packets based on their quality of service classes, like the red packets, blue packets or green packets. Then we apply traffic policing; we have seen that different types of traffic shaping and policing mechanisms are applied at the intermediate routers.

Now, in between we have this module which we call traffic scheduling. So, what does this scheduling actually do? Whenever you have these differently marked packets, like the red packets, green packets and blue packets, the different marked packets require different levels of quality of service. Because of that, you need to schedule those packets accordingly at the intermediate routers.
All these different gates that you are observing here, you can think of them as the intermediate routers of the network. In those routers we need to apply scheduling strategies to ensure the desired quality of service for the different marked packets, whether red, green or blue. You can think of the red packets as voice applications, like VoIP applications, which require strict quality of service. The green packets may be something like video on demand services, which require another class of quality of service: more bandwidth and less jitter. The blue packets are normal best effort services, like FTP-based traffic delivery.

So, you need to give more priority to VoIP than to video, and the least priority goes to FTP. That is why you need to design a scheduling strategy at the individual routers, so that different levels of quality of service can be ensured for the different classes of packets. Today we will look at all of this in detail. So, let us start our journey on this quality of service concept.

(Refer Slide Time: 03:17)

To do traffic scheduling, as I have mentioned, the first stage is classification and marking. It ensures that the marked data packets entering the network from the users are divided into different traffic classes. Say, for example, whenever you are connecting your smartphone to the internet, you are enabling the data services over your smartphone during that time.
If you have a certain level of quality of service associated with your service provider, then your smartphone, or rather your SIM card, will create a service level agreement with the service provider, say Airtel or BSNL: I want this much service, I am paying you money for this service, so treat my packets accordingly.

Now, say for example you are going to use a VoIP service, voice over IP. These voice over IP services are still not very popular in India, but in many countries they are. If you are going to use voice over IP from your smartphone, then your network service provider, at the first hop router, say the base station to which your smartphone is connected, should understand that you are going to transfer VoIP data. Whenever it understands that, it marks the packets which are coming from your smartphone as voice over IP data.

Remember that on your smartphone you can run multiple applications: you can run Facebook, you can run YouTube, and at the same time you can run a voice over IP application. The network needs to understand that this particular application is a voice over IP application, and that it should allocate the required resources to ensure the quality of service of that VoIP application. That is why this kind of classification and marking is required.

So, classification and marking marks the data packets into different traffic classes. The marked traffic belongs to different priority classes and requires different levels of quality of service based on the service level agreements.
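As a small illustration of the classification step described above, here is a toy classifier of my own (not from the lecture); the port numbers and class names are assumptions chosen only to mirror the red/green/blue example.

```python
# Toy packet classifier: map a packet to a traffic class the way the lecture
# describes (red = voice, green = video, blue = best effort).
# The port numbers below are illustrative assumptions, not a standard mapping.

def classify(packet):
    """packet: dict with 'protocol' and 'dst_port'. Returns a class mark."""
    if packet["protocol"] == "udp" and packet["dst_port"] == 5060:
        return "red"    # e.g. SIP-signalled VoIP: strict QoS
    if packet["dst_port"] in (554, 8554):
        return "green"  # e.g. RTSP video streaming: bandwidth-sensitive
    return "blue"       # everything else: best effort, e.g. FTP

print(classify({"protocol": "udp", "dst_port": 5060}))  # red
print(classify({"protocol": "tcp", "dst_port": 21}))    # blue
```

In a real router this decision is driven by the service level agreement and by header fields such as the DSCP mark, but the structure is the same: inspect the packet, assign a class, and enqueue it accordingly.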
This SLA stands for service level agreement. A service level agreement is something like this: whenever you are subscribing to a particular network, you tell the network service provider, I am going to use VoIP services, and you should ensure this much service from your end to transfer my voice data over your network.

I think you have all seen some form of service level agreement whenever you purchase a pack from Airtel, Vodafone or any other service provider. You will see they mention something like: we will provide you these many minutes of free calling, then 100 SMS per day, then 1.2 GB of uplink data and 5 GB of downlink data per day. This is one example of a service level agreement that you have with your service provider.

This is not an application level service level agreement, though; it is a user level service level agreement. Whatever data you transfer, you will get that 1.2 GB of uplink bandwidth and 5 GB of downlink bandwidth if you have made such an agreement, and all your applications will share that bandwidth. But you can also make an application level service level agreement with the network service provider.

As I mentioned a couple of minutes back, these things are not very popular in India, because we are not using VoIP services right now, and that is why you do not see this kind of application level service level agreement. But once VoIP becomes popular and our network service providers migrate to 5G cellular networks and start providing VoIP services, possibly we will see this kind of agreement whenever you purchase a pack from a service provider.

Now, in the case of multiclass scheduling, here is a possible solution. Say your traffic class 1 is high priority traffic.
For the packets from that traffic class you ensure a minimum queuing delay. We have seen earlier that queuing delay is the dominant component of delay, and because of queuing delay we expect a significant loss in quality of service.

So, for traffic class 1 you ensure minimum queuing delay; for traffic class 2 you ensure sufficient bandwidth, because those are bandwidth hungry applications; and traffic class 3 is best effort traffic with no specific requirements, so you serve it with whatever bandwidth you have left.

Now, to differentiate among these traffic classes based on their requirements, we use different queuing strategies. Rather than maintaining a single packet buffer queue where all the incoming packets enter, we maintain multiple queues in the device. From the incoming packets you apply the classification and marking policy to classify the packets into different traffic classes and put them into different queues: say this one holds high priority traffic, this one medium priority traffic and this one low priority traffic. The different queues will then be treated differently based on their class and service requirements. In one queue we apply one scheduling strategy, in another queue a second scheduling strategy, and in the third queue possibly yet another. That way we try to provide quality of service support for these different classes of service together, and that is what we call multiclass scheduling.

The first scheduling that we are going to look into is called priority scheduling. In the case of priority scheduling, we have multiple queues of different priority.
Now we have incoming traffic; the classifier classifies it and puts it into one of the queues: either the high priority queue, the medium priority queue or the low priority queue.

The second type of priority queuing strategy is called preemptive priority scheduling. In preemptive priority scheduling, the scheduler serves the queues one after another, but it may get preempted. Preempted in the sense that, say, it has served all the packets from the high priority queue and then comes to serve the medium priority queue. While it is serving the medium priority queue, some packets arrive at the high priority queue. It will then preempt the service of the medium priority queue, immediately go back to the high priority queue, and serve the packets there. When the high priority queue becomes empty again, it will come back to the medium priority queue, and once the medium priority queue becomes empty, it will come to the low priority queue. But while serving the low priority queue, if a packet again arrives at the high priority queue or the medium priority queue, it will preempt the service of the low priority queue, immediately return, and serve the packets from the high or medium priority queue.

With a preemptive service, as you can understand, the low priority queue may sometimes get stuck: because you keep receiving high priority or medium priority packets, the scheduler never gets to serve the low priority packets. But the advantage is that you provide a very small amount of delay and ensure low jitter for the packets in the high priority and medium priority queues.

You can think of the high priority queue as the VIP pass line: the VIPs need not wait; whenever they come to that queue, they are immediately sent inside. At an airport, you can think of it as the VIP gate.

The second type of scheduling strategy that we are going to discuss is called custom queuing. What happens in custom queuing? You have different queues of different lengths. Say, for example, I normalize the total queue length to 1, and in this example the first queue has a length of 0.3, the second queue has a length of 0.2 and the third queue has a length of 0.5.

In this context, remember one thing: if you do not have a sufficient number of packets and your network is very lightly loaded, then it does not really matter; quality of service indeed does not matter, because you have a sufficient amount of capacity and everyone will get served within their time bound. The problem starts occurring when the network capacity is not sufficient and you keep pushing packets into the network during that time.

You can again think of the airport scenario. At a non-peak time, say around 2 PM in the afternoon when there are not many passengers, you can go to any of the gates and wait only a minimal amount of time. But if you go at peak hours, when there are huge numbers of passengers in the airport, then you really have to think about this kind of quality of service. You have possibly seen that during non-peak hours, and that has happened to me quite a few times since I normally prefer flights at non-peak hours, whenever I go to the airport I find that I am even allowed through the VIP gates. So, it is something like that.
No one cares about quality of service when the load is not very high; the problem starts occurring when the load is very high and there is congestion in the network, and during that time you really have to think about what is happening inside the network.
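The preemptive priority behaviour described above can be sketched in a few lines of Python. This is my own illustration, not the lecture's implementation; the packet names and arrival pattern are made up. The key point is that after every served packet the scheduler re-checks from the top, so a freshly arrived high priority packet jumps ahead of waiting lower priority packets.

```python
from collections import deque

def preemptive_priority_schedule(queues, arrivals):
    """queues: list of deques, index 0 = highest priority.
    arrivals: dict mapping service step -> list of (queue_index, packet)
    injected while the scheduler is running. Returns the service order."""
    served = []
    step = 0
    while any(queues):
        # deliver any packets that arrive at this step
        for qi, pkt in arrivals.get(step, []):
            queues[qi].append(pkt)
        # always serve from the highest-priority non-empty queue,
        # which effectively preempts service of the lower queues
        for q in queues:
            if q:
                served.append(q.popleft())
                break
        step += 1
    return served

high = deque(["H1"])
med = deque(["M1", "M2"])
low = deque(["L1"])
# a high-priority packet H2 arrives while the medium queue is being served,
# so it is served before M2 and long before L1
order = preemptive_priority_schedule([high, med, low], {2: [(0, "H2")]})
print(order)  # ['H1', 'M1', 'H2', 'M2', 'L1']
```

Notice that if high and medium priority packets kept arriving in every step, `L1` would never be served: exactly the starvation of the low priority queue that the lecture warns about.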
Traffic Scheduling - Part 2
This particular concept is actually important in the context of custom queuing. Why? Let us see.

You have 3 different queues: the first queue has a length of 0.3, the second queue has a length of 0.2 and the third queue has a length of 0.5. Now, think about what happens at peak hours. In the peak hours all the queues are full, and what is the scheduler doing? The scheduler is simply applying round robin scheduling: it takes one packet from the first queue, then one packet from the second queue, then one packet from the third queue, then again one from the first, one from the second, one from the third, and so on.

But in the peak hours the queues are always full. When the queues are always full and traffic keeps arriving, any packet that finds no space in its queue simply gets dropped. So, what are you effectively doing here? You are providing 30 percent of your capacity to the first queue, 20 percent to the second queue and 50 percent to the third queue. With this custom queuing mechanism, where the queues have different lengths, the queue with 50 percent of the buffer space can hold more traffic in the peak hours, so more traffic gets served from that queue, while very little traffic gets served from the 20 percent queue.

That is why this custom queuing mechanism supports what we call guaranteed bandwidth. You can provide guaranteed bandwidth with this kind of custom queuing strategy, so whenever you require guaranteed bandwidth, as in video kind of applications, you can use the custom queuing mechanism.

(Refer Slide Time: 21:26)

Now, let us see the third queuing mechanism, which we call weighted fair queuing. Again we have 3 different queues, but here the packet sizes may vary; in the earlier cases we considered fixed packet sizes, but here the packet size can vary. Think of it like this: the blue packets are of size 1 unit, the red packets are of size 4 units, and the green packets are of size 2 units. In weighted fair scheduling we want to ensure fairness among the different classes of traffic, so all these classes should get almost equal amounts of bandwidth. Then what do you have to do? You transfer 4 packets of 1 unit, then 1 packet of 4 units, then 2 packets of 2 units. You can see that now the total amount of blue traffic is 4 units, the total amount of red traffic is 4 units and the total amount of green traffic is again 4 units. So, you are providing what we call fairness in this particular system.

Remember that normally we apply multiple queuing strategies together: sometimes you need to provide priority classes and at the same time a certain level of fairness among the traffic within those priority classes.

(Refer Slide Time: 23:04)

In that architecture, after your packets get classified, you put them into different priority classes, say priority 1, priority 2 and priority 3. Now, within priority class 1 you can have packets of different sizes, small packets as well as large packets. So, at the first level we apply, say, priority queuing, and at the second level, within a priority class, we can again apply weighted fair queuing. That way we sometimes apply multilevel queue scheduling: the first level of scheduling ensures priority, whereas the second level supports fairness. So, we are able to support both priority and fairness in the system.

(Refer Slide Time: 24:31)

These are the different types of queue scheduling that we have. Now we look into another interesting concept, which we call congestion avoidance in the internet.

As we have discussed earlier, TCP does not avoid congestion; rather, whenever congestion occurs in the network, it responds on detection of that congestion. TCP detects congestion based on packet loss, and whenever congestion is detected it reduces its sending rate so that the flow's performance does not suffer further from the congestion.

The congestion avoidance that we are talking about here, from the perspective of the internet, is different from TCP congestion control. We are not actually controlling congestion; rather, we are avoiding congestion: we are ensuring that congestion does not occur in the internet.
This is like taking certain measures before the congestion actually happens, so that the high priority traffic does not get affected by congestion.

Now, one interesting question you can ask: if congestion avoidance is there in the network layer, do we still need congestion control at the transport layer? Do we still need that service from TCP? The answer is yes. Why? Because, as you will see, we apply congestion avoidance on a per-class basis. We ensure that the high priority traffic does not run into congestion; if congestion occurs at all in the network, it should occur on the low priority traffic side. For example, if you have VoIP services over the internet and at the same time FTP services, then the congestion avoidance algorithm ensures that VoIP does not get into congestion. But FTP can still get into congestion, and in that case you need the congestion control algorithm of the TCP connection carrying the FTP traffic to bring FTP out of congestion. That is the difference between congestion control and congestion avoidance.

So, from this perspective we actually require both in the internet: both congestion control and congestion avoidance are needed to support services over the internet. The reverse is also true, as I have already mentioned: even with congestion control in place, we also require congestion avoidance to support quality of service; otherwise the voice traffic, the high priority traffic, would also get into congestion. Now, let us see how we avoid congestion in the internet.

(Refer Slide Time: 27:36)

So, that is the reason why congestion avoidance is necessary for quality of service.
The internet carries data packets from different applications having different quality of service requirements, and broadly we have 2 different classes of traffic, which we call elastic traffic and inelastic traffic.

Elastic traffic is TCP-like traffic, whose flow control has an elastic nature based on the AIMD principle that we have learned earlier: it increases the rate whenever there is no congestion, and on detection of congestion it reduces the rate. So, it has a kind of elastic behavior: expand the rate, then reduce the rate, again expand, again reduce.

Inelastic traffic is UDP-like traffic: smoothed, controlled or constant bit rate traffic. This kind of inelastic traffic is preferred for real time applications. Why? Because it does not suffer the overhead of TCP; TCP congestion control is always an overhead for the quality of service of the associated traffic. First of all, because of this elastic nature, TCP actually introduces jitter into the network: as the sending rate swings up and down, the delay experienced by the application data also varies, and that variation is jitter. That is why, for real time traffic, protocols like the real time streaming protocol (RTSP) or the real time protocol (RTP) prefer UDP-based constant bit rate delivery.

But do not get confused with YouTube. YouTube is not real time; YouTube Live is real time, but the standard YouTube videos you normally watch are not: the video has already been recorded and is now being streamed.

So, the algorithm that we apply for congestion avoidance in the internet is called random early detection, or RED.
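As a quick aside, the elastic sawtooth behavior mentioned above can be shown with a toy model. This is a deliberate oversimplification of my own, not real TCP: the rate climbs additively each step until it exceeds a fixed "capacity" (standing in for a loss event), and is then cut multiplicatively.

```python
# Toy AIMD model: additive increase until the rate exceeds capacity,
# then multiplicative decrease. The resulting sawtooth is why elastic
# traffic sees varying delay, i.e. jitter.

def aimd_rates(steps, capacity, add=1.0, mult=0.5):
    rates = []
    rate = 1.0
    for _ in range(steps):
        rates.append(rate)
        if rate > capacity:
            rate *= mult   # "loss" detected: multiplicative decrease
        else:
            rate += add    # no loss: additive increase
    return rates

print(aimd_rates(10, capacity=5.0))
# [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 3.0, 4.0, 5.0, 6.0]
```

A constant bit rate UDP source, by contrast, would just emit the same rate every step, which is exactly why inelastic delivery is preferred for real time traffic.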
In random early detection, the first principle is that we drop packets; we drop packets of certain applications in order to avoid congestion. Remember, to avoid congestion the basic principle is: if you expect that certain applications are sending too much traffic, more than the capacity, then you drop traffic from those applications.

Here is the principle of RED. First, you determine the possibility of a packet drop by observing the average queue length. For every incoming packet you compute the average queue length, and you have 2 thresholds: a minimum queue length threshold and a maximum queue length threshold. If the average queue length is less than the minimum threshold, you are in the safe zone, so you enqueue the packet. If the average queue length is between the minimum threshold and the maximum threshold, you are moving toward the danger zone, so you calculate a packet drop probability; with that probability you drop the packet, otherwise you enqueue it. And if the average queue length is more than the maximum threshold, you are already in the danger zone, so to avoid congestion you drop the packet. That is the principle of random early detection.

(Refer Slide Time: 33:22)

This is how we calculate the packet drop probability. Assume Max_p is the maximum packet drop probability and d_k denotes the drop probability when the average queue length is k. Then:

d_k = Max_p * (k - MinThresh) / (MaxThresh - MinThresh)

So, we calculate the packet drop probability from the current queue length. Now, let us see the significance of this equation.

(Refer Slide Time: 34:00)

To see the significance, we plot the packet drop probability against the average queue size. Whenever the average queue size is less than the minimum threshold, the drop probability is 0. After that, the drop probability increases linearly, following the equation we have written. And whenever you cross the maximum threshold, the packet drop probability becomes equal to 1. So, as you move from the minimum threshold toward the maximum threshold you gradually increase the packet drop probability, and once you have crossed the maximum threshold the drop probability becomes 1 and you drop all arriving packets.

The interesting point is this: whenever things are going well we do nothing, but when things start moving toward the bad side we do a kind of early detection of congestion by observing the queue length, because the queue length gives a reliable indication of congestion. If the queue length becomes high, that means you have more packets in the queue. Say a queue has length 5 and you have already filled it with 4 packets; that means you are gradually moving toward congestion. The moment there are 5 packets in the queue and the queue becomes full, you will start experiencing congestion.
So, that is why, as the queue length increases, you are moving closer to congestion; accordingly, you detect it early based on the average queue length and then randomly drop packets to steer things away from congestion.

Now, this random drop has an interesting implication. Remember that TCP detects congestion when it sees 3 duplicate acknowledgements or a timeout. With a random packet drop, typically just one packet gets lost, so the sender sees a single duplicate acknowledgement and does not experience a timeout for that packet. In that case TCP will not trigger its congestion control. But as the network gradually moves toward congestion, RED drops more packets, and at that point TCP will trigger its congestion control algorithm.

So, that is all about this congestion avoidance algorithm in the internet. It gives you an early signature of congestion, but as we have seen, as the load increases the network still gradually moves toward congestion, and then TCP comes into the picture and runs its congestion control algorithm to bring the system out of congestion. So, to support quality of service we need to run both congestion avoidance and congestion control in our system. That is all; in the next class we will look into 2 specific QoS architectures in the internet, called the integrated services and differentiated services architectures.

Thank you all for attending this class.