1. Queuing Basics
Now, this is the first introduction video on congestion management. In this video we’ll see some of the basic reasons for congestion, in which scenarios congestion will occur, and the different queuing mechanisms we can use to manage that congestion. Now, we have different mechanisms, and we’ll be talking about them in more detail in the later videos as well. But in this video we’ll mainly try to understand the reasons for congestion and the basic queuing process. So the first thing we’ll try to understand is the reasons for congestion. Let’s take an example. I have a router which is receiving traffic on multiple interfaces, and it is supposed to send that traffic out on one interface of the router.
Now, normally congestion happens when you have two or more interfaces on which you’re receiving traffic and you are supposed to send it out on one interface. Maybe you are receiving from two different branch offices, or maybe from the LAN interface. Congestion generally occurs at any point of the network where there is a speed mismatch or aggregation. Like here you can see there is an aggregation of these three links, and all three links are sending over one common link. In this kind of scenario there is a possibility of congestion, because all three links are sent over the same exit interface. The other reason can be a mismatch of speed.
Like here I have some diagrams where you have a switch connecting the LAN, and it supports around one gig, 1000 Mbps. When the information comes from the LAN, you are receiving at a speed of 1000 Mbps, and the router is receiving a huge number of IP packets. But it is forwarding on the output interface at a speed of only 2 Mbps. Or take your LAN, where you have a switch connection sending at a speed of 1000 Mbps, but the output on the other interface is just 100 Mbps. In this kind of scenario you receive an excess of packets, and the device will find it difficult to send them forward because the exit interface supports just 100 Mbps.
Now, these are the common scenarios where you will see congestion. If there is any excess traffic during congestion, by default the router is going to drop the packets. What we can do is ensure that these packets, instead of getting dropped, are managed and given some priority. And that can be done by using something called queuing mechanisms. Queuing is a mechanism of arranging the packets in a local buffer. So before the router actually sends them on the output queue, it’s going to store them in a buffer with different priority levels. Like we can place some packets which are very high priority, such as voice or video traffic, which is considered high priority traffic.
And we can arrange the other packets into medium priority, normal, and low priority traffic. We can ensure that the high priority packets are sent first before any lower priority traffic, with the next importance given to the medium priority traffic, then to the normal, and then to the low. Now, this ensures that your important traffic, like voice and video packets, does not get dropped in case there is any congestion. And that is what we call queuing mechanisms. We are going to manage the way the packets go onto the output queue before they are actually sent out of the output interface. We are managing and giving different levels of priority and bandwidth reservations for each and every type of traffic, and that is what we call queuing here.
So we’ll be talking about the different queuing mechanisms we use in this section in much more detail. But before we go ahead with the different queuing mechanisms, first we’ll try to understand some of the basic queuing options. Now, as we just discussed, queuing is a mechanism of arranging the packets in a local buffer before they are sent out of the output interface, instead of getting dropped, in case there is any congestion. Always remember one thing: we have two types of queues here, the input queue and the output queue. The input queue arranges all the received packets in a local input buffer before they are sent to the router for processing.
And the output queue stores packets in a local output buffer before they are sent on the output interface. So that’s the major difference between the input queue and the output queue. Now, this input and output queuing is something that only happens if there is congestion. Let’s take an example: a packet is received on the interface. If the processor is not too busy and the packet rate is not too high, then the system will never use the input queue. Which means, in simple words, if there is no congestion, it’s not going to use the input queue. It will simply ask the router to process the packets and send them back out on the interface without any queuing mechanisms.
But the queuing mechanism is required if there is congestion: if the router is busy processing some packets and you’re receiving extra packets while it is still busy with the others. So what are we going to do? We are going to store them in the input queue, that is, the buffer, before they are actually sent to the router for processing, and that is what we call the input queue. Now, this queuing mechanism will only start doing its job when there is congestion. If there is no congestion, it is not going to use the queuing mechanisms here. The same thing happens with the output queue also.
With the output queue also, if packets arrive at the output interface and the output queue is not too busy, not really congested, the interface will simply forward the packets immediately without queuing. There’s no need for queuing. It requires queuing only if there is congestion on the output queue, and before the packets are actually sent on the output interface, they are stored in the local buffer memory. Now, the input queue is always first in, first out. You will have only one queue by default, you cannot manage this or change anything, and it’s always first in, first out. So the input queue will always be first in, first out.
All the packets will be processed in the order in which they are received. But the output queue is something we can manage. Inside this output queue we have two different types of queues: something called the software queue and the hardware queue. The basic difference is that the software queue is something we can manage; we can arrange the packets into different classifications and give them priority before they are actually put on the hardware queue and sent out of the interface. Let’s see exactly how it is going to work. Let’s say a router receives a packet. Before it places it in the hardware queue, it will first see whether the hardware queue is full or not.
If the hardware queue is not full, it will simply process the packet, place it on the hardware queue, and forward it out of the output interface. That’s the simple scenario where there is no congestion. But if there is congestion, then probably the hardware queue will be full. Once the router realizes the hardware queue is full, it is going to send the packets to the software queue. Now, the software queue is something we can manage; we can arrange the packets, like placing voice in one class and your HTTP traffic in a different class.
And we can define that the voice packets should be arranged first, before the HTTP traffic packets are sent, and then we place them on the hardware queue again before they go out on the interface. There we can say that the voice traffic should be sent first, given priority over the HTTP traffic, or we can say that the voice traffic should be guaranteed a minimum bandwidth of 64 kbps and whatever bandwidth remains should be used for HTTP. Something like that. That is what we are going to do here. Those are the different queuing mechanisms which we are going to use to manage the packets before they are sent out on the interface.
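The hardware-queue-first decision described above can be sketched in a few lines. This is only an illustration of the logic, not Cisco internals; the queue limit and packet names are made up:

```python
from collections import deque

# Packets go straight to the hardware (transmit) queue while it has room;
# only when it is full are they held in the manageable software queue.
HW_QUEUE_LIMIT = 2  # illustrative; real TX ring sizes are platform dependent

hardware_queue = deque()
software_queue = deque()  # where WFQ/CBWFQ-style policies would apply

def transmit(packet):
    if len(hardware_queue) < HW_QUEUE_LIMIT:
        hardware_queue.append(packet)   # no congestion: bypass software queue
    else:
        software_queue.append(packet)   # congestion: hold for scheduling

for p in ["p1", "p2", "p3", "p4"]:
    transmit(p)

print(list(hardware_queue), list(software_queue))
```

With no congestion the software queue stays empty; the moment the hardware queue fills, the overflow lands where a queuing policy can reorder it.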
Now, by default it’s something like this: the hardware queue is always first in, first out. We cannot do anything about the hardware queue, but the software queuing is something that can be selected and configured, depending upon the platform and the Cisco IOS version. In simple terms, a software queuing mechanism is something where we can arrange the different types of traffic in different queues, send them in a specific order, and give priority to one specific type of traffic. Now, if you want to verify these input and output queues, we can always use a command called show interface.
With show interface on, say, interface 0/0, I’ll see the maximum number of packets the input queue will allow and the maximum the output queue will allow. If you want to change these parameters, we can change them; it again depends upon the platform. And once you change these parameters, you can see the change here. You can always verify these input queue and output queue parameters by using show interface commands, but most of the time we don’t really prefer to change them. Once a queue reaches its limit, the router will start its queuing mechanisms.
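For reference, the counters being described look roughly like this in show interface output; the interface name and numbers here are illustrative, and the hold-queue command shown for changing the limits is platform dependent:

```
Router# show interface Serial0/0
  ...
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: weighted fair
  Output queue: 0/1000/64/0 (size/max total/threshold/drops)

! Changing the default input/output hold-queue limits (rarely recommended):
Router(config)# interface Serial0/0
Router(config-if)# hold-queue 100 in
Router(config-if)# hold-queue 60 out
```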
2. Legacy Queuing Mechanisms
Now, in this video we’ll see some of the basic legacy queuing mechanisms which were used earlier. We’re probably not going to use these mechanisms in today’s networks, but we’ll try to understand them just to have some basic idea of the different queuing mechanisms. The first one we’ll start with is first in, first out (FIFO). FIFO is the default queuing mechanism; even for the input queue the default is first in, first out, which means whichever packet comes first will be processed first. It’s the simplest of all: it has just one queue, and packets are always served first in, first out, where whichever packet comes first will be processed first.
Now, the other queuing mechanism we have is something called priority queuing. Priority queuing allows you to arrange the packets into four different queues, with high, medium, normal, and low priority traffic. We can arrange the traffic into different classes. Like we can say that the voice traffic and the video traffic should be considered very high priority traffic when compared with other traffic. Maybe database traffic is medium priority traffic, whereas your HTTP is normal priority traffic and FTP is your very low priority traffic. So we distinguish the different types of traffic into four different queues.
Now, if a packet comes in with high priority, let’s say a voice packet is coming, the router will automatically stop serving all the remaining queues, because it is high priority traffic, and it will send that first in case there is congestion. Let’s say there is 64 kbps of voice traffic coming; it’s going to ensure that the voice packets are sent first, before all the remaining queues. Once there is no high priority traffic, it will send the medium priority traffic. If there is no medium priority traffic either, it will start forwarding the normal priority traffic, and if there’s no more normal priority traffic, then it will start forwarding the low priority traffic.
Now, while it is forwarding the low priority traffic, if any packet comes in with high priority, that packet will be sent first, before the low priority traffic. This ensures that your high priority traffic will always be forwarded first. That’s a good thing. But at the same time the major drawback is that if you have continuous high priority traffic, your low priority traffic may never get forwarded, because the device is forwarding your high priority traffic most of the time. That starvation is the reason we don’t prefer to use this priority queuing in today’s networks, but we have an advanced implementation called low latency queuing.
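The strict-priority dispatch just described can be sketched like this (not Cisco code; the queue names and packet labels are invented for illustration):

```python
from collections import deque

# Legacy priority queuing: four queues, and the dispatcher always drains
# the highest-priority non-empty queue first.
LEVELS = ["high", "medium", "normal", "low"]

def dequeue(queues):
    """Return the next packet to transmit, always scanning high -> low."""
    for level in LEVELS:
        if queues[level]:
            return queues[level].popleft()
    return None  # nothing left to send

queues = {level: deque() for level in LEVELS}
queues["low"].append("ftp-1")
queues["normal"].append("http-1")
queues["high"].extend(["voice-1", "voice-2"])

sent = [dequeue(queues) for _ in range(4)]
print(sent)  # ['voice-1', 'voice-2', 'http-1', 'ftp-1']
```

Note how "ftp-1" only leaves when every higher queue is empty; if voice packets kept arriving, the low queue would starve, which is exactly the drawback described above.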
We’ll see that in the next videos, where we combine a priority queue into other queuing mechanisms as well. The next queuing mechanisms we have are something called round robin and weighted round robin. In round robin we are going to arrange all the packets in different queues, and the device is going to send one packet from each and every queue. Like packet one will be forwarded from queue one, the second packet from queue two, the third packet from queue three, and again the fourth packet, fifth packet, sixth packet like that. Each and every queue will equally forward one packet in turn.
So the scheduler dispatches one packet from each and every queue in a round robin fashion. Here all the queues are treated equally and all will forward the same amount of traffic at the same time. But again, the major drawback here is, let’s say you have voice traffic; maybe your voice traffic has to wait to send its next packet until the scheduler finishes with the other queues. In these kinds of scenarios we can use something called weighted round robin. In weighted round robin we are going to apply some weight. Let’s say we are applying a weight of four, a weight of two, and a weight of one here. Now, queue one is going to send four packets, almost four times what queue three sends or two times what queue two sends.
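The weighted round robin service order can be sketched as follows; the weights 4/2/1 match the example above, and the queue contents are invented for illustration:

```python
from collections import deque

# Weighted round robin: in each cycle, queue i may dispatch up to
# weights[i] packets before the scheduler moves to the next queue.
def wrr_dispatch(queues, weights):
    sent = []
    while any(queues):  # an empty deque is falsy
        for q, w in zip(queues, weights):
            for _ in range(w):
                if q:
                    sent.append(q.popleft())
    return sent

q1 = deque(["q1-p1", "q1-p2", "q1-p3", "q1-p4"])
q2 = deque(["q2-p1", "q2-p2"])
q3 = deque(["q3-p1", "q3-p2"])

order = wrr_dispatch([q1, q2, q3], weights=[4, 2, 1])
print(order)
```

In the first cycle queue one sends four packets, queue two sends two, and queue three sends only one; queue three’s second packet has to wait for the next cycle.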
So you are going to assign some kind of weight to a specific queue, and the scheduler is going to send more packets from that particular queue when compared with the other queues. That’s what we call weighted round robin queuing mechanisms. Now, these are the old legacy methods which we used in earlier implementations, but these are something which we don’t use in today’s networks. Instead we use some advanced queuing mechanisms in today’s networks, like fair queuing, weighted fair queuing, and something called low latency queuing, which are far more advanced implementations of the queuing mechanisms. We’ll talk about those newer, commonly used queuing mechanisms in our next videos in detail.
3. Weighted Fair Queueing
Now, in this video we will try to understand the queuing mechanisms of fair queuing and weighted fair queuing. First we’ll start with fair queuing. Fair queuing is a method of sharing the resources equally among all the flows. It’s also called max-min fairness, where all the flows, let’s say you have some number of different flows, will share the resources equally. Take an example here. Let’s say I have 100 kbps of bandwidth available, and this 100 kbps of bandwidth has to be divided between ten different flows. If you divide 100 by ten, each and every flow will get around 10 kbps of bandwidth.
So all the flows will get an equal share for forwarding the traffic. Now let’s take an example: out of these ten flows there are two flows using only 6 kbps of bandwidth each. Those two flows need only 6 kbps, so each leaves 4 kbps unused, a total of 8 kbps. This remaining 8 kbps will be shared between the remaining eight flows. Here you can see it allows the sharing of the unclaimed bandwidth with the other flows. Fair queuing is a mechanism which allows this sharing between the flows: whatever bandwidth is unclaimed or unused is going to be shared with the other flows.
Now let’s take an example. Say there are flows which need an excess of the equal bandwidth share, like a flow which needs around 15 kbps, which is in excess of 10 kbps, because per my calculation each flow on average should get 10 kbps. Then they get the maximum possible share, again based on the other flows. If the other flows are not using their share, these flows can still send traffic in excess of 10 kbps, based on the unclaimed or unused bandwidth of the other flows. The next thing we have is something called weighted fair queuing.
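The max-min sharing described above can be sketched as a small allocation routine; the flow names and demands are made up to reproduce the 100 kbps / ten flows example:

```python
# Max-min fair sharing: every flow gets an equal share, and bandwidth
# unclaimed by light flows is redistributed among the heavier flows.
def max_min_share(capacity, demands):
    """Return the per-flow allocation under max-min fairness."""
    alloc = {f: 0.0 for f in demands}
    active = set(demands)
    remaining = float(capacity)
    while active and remaining > 1e-9:
        share = remaining / len(active)
        satisfied = {f for f in active if demands[f] <= alloc[f] + share}
        if not satisfied:
            for f in active:           # everyone still wants more:
                alloc[f] += share      # split what is left equally
            break
        for f in satisfied:            # light flows take only what they need
            remaining -= demands[f] - alloc[f]
            alloc[f] = demands[f]
            active.remove(f)
    return alloc

# 100 kbps across ten flows; two flows need only 6 kbps each, so the
# unused 8 kbps is shared by the remaining eight flows (11 kbps each).
demands = {"f1": 6, "f2": 6, **{f"f{i}": 50 for i in range(3, 11)}}
print(max_min_share(100, demands))
```

Running this gives 6 kbps to the two light flows and 11 kbps to each of the other eight, matching the arithmetic in the lesson.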
Weighted fair queuing is an enhanced version of the fair queuing method, where it is going to assign a weight to each and every flow. There might be scenarios where you need a specific flow to send more packets when compared with the other flows. In those kinds of scenarios we can apply a specific weight so that one particular flow will send more packets when compared to the other flows. The main advantage we get with weighted fair queuing is that it is going to allocate the bandwidth based on the weight, and that weight is in turn based on the IP precedence values or on reservations.
Reservations are done by RSVP, a legacy protocol used for bandwidth reservations; based on that, it’s going to assign the weight. The default formula it uses for assigning the weight is weight = K / (precedence + 1). The K value depends upon the IOS version: in older IOS versions, prior to 12.0(5)T, K is 4096, and from 12.0(5)T onwards 32384 is the value used in the calculation. The lower the weight, the higher the priority and share of the bandwidth, based on this formula. So the assignment of the weight is completely based on the precedence values.
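The weight formula can be checked with a quick calculation; the constants here follow the values quoted in this lesson (4096 for older IOS, 32384 for newer), so treat them as per the lesson rather than as a current reference:

```python
# WFQ weight per the formula above: weight = K / (IP precedence + 1).
# A lower weight means a larger share of the bandwidth for that flow.
K_OLD = 4096    # IOS releases prior to 12.0(5)T, per the lesson
K_NEW = 32384   # later IOS releases, per the lesson

def wfq_weight(precedence, k=K_NEW):
    return k // (precedence + 1)

for prec in (0, 5, 7):
    print(prec, wfq_weight(prec))
# Precedence 5 traffic (voice) gets a far lower weight than precedence 0,
# so its packets are scheduled ahead of best-effort traffic.
```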
Now, let’s take an example here. I have three different flows, and in one of the flows the packets have a precedence value of 5; the packet sizes shown are 128 bytes, 128 bytes, and 128 bytes. Now, if you’re using something like normal fair queuing, without weighted fair queuing, it’s going to ensure that your small-size packets are sent first, before it starts sending the bigger-size packets. Fair queuing will ensure that your small-size packets do not get delayed or dropped, and that your small-size packets are always forwarded first.
But in the case of weighted fair queuing, it’s not only going to look at the size of the packet; it’s going to look at the precedence value. Based on the precedence value, it’s going to decide which packet should be forwarded first. Here it is going to forward A1 to A3 first, because of their higher precedence values, and then it’s going to forward C1. In fact, where flows have the same precedence values, it’s going to look at the packet sizes and ensure that the small-size packets get forwarded first. So the main difference between fair queuing and weighted fair queuing is this: in the case of fair queuing,
It will ensure that your small packets get more priority and are forwarded first, and that your big packets, like those of FTP downloads where you have very big packets, do not eat up all your bandwidth; your small packets will get a chance to be forwarded immediately when compared to the big packets. Whereas in the case of weighted fair queuing, that will be decided based on the size of the packets along with the weight value, that is, the precedence values. Now, if you verify the default queuing mechanism on the serial interfaces, like if you’re using any link with a speed of 2 Mbps or less, on these slow-speed links the default queuing mechanism will be weighted fair queuing.
Now, you can verify that with a command called show interface plus the name of the interface. If you’re using high-speed links, like Ethernet or Fast Ethernet links above 2 Mbps, the default queuing mechanism will always be first in, first out. If you want to change this mechanism, we can. Like if you want to change the default queuing mechanism to weighted fair queuing, we just need to enable the fair-queue command; it’s only a simple command we need to enable, prior to the IOS 15 versions. There are some additional parameters you can configure, like the congestive discard threshold, which tells the maximum number of packets to be held per queue; the default is 64.
And we can also define how many dynamic queues can be created; by default it’s going to make sure there are 256 dynamic flow queues. These are all optional parameters. If you want to enable it, we just need to go to the interface and enable this command called fair-queue. But if you’re using some newer IOS versions, the 15.x IOS versions, this fair-queue command is no longer supported on the interface. We need to define it under a policy map and then apply that policy map on the interface. To verify this again, once you add this command, you can use show interface s1/0, and you can see the queuing mechanism will be class-based queuing.
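Putting the two configuration styles together, a sketch of both looks like this; the interface and policy-map names are illustrative:

```
! Older IOS: enable WFQ directly on the interface.
! 64 = congestive discard threshold, 256 = dynamic queues (the defaults).
interface Serial1/0
 fair-queue 64 256 0

! Newer IOS (15.x): fair-queue is configured under a policy map instead.
policy-map WFQ-POLICY
 class class-default
  fair-queue
!
interface Serial1/0
 service-policy output WFQ-POLICY
```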
Now, we’ll talk more about this class-based queuing, like class-based weighted fair queuing, which is something we’ll be seeing in the next sessions, along with somewhat more advanced queuing mechanisms like class-based weighted fair queuing and low latency queuing. Right now in this section we have just gone through the default fair queuing mechanism and weighted fair queuing, so we didn’t differentiate the traffic based on classes. The major advantage we get with fair queuing is its very simplified configuration, and the packets of the most aggressive flows, those with the large-size packets, will be dropped in case there is congestion. It’s supported on most of the platforms.
But the major drawback with this weighted fair queuing is the lack of control over classification. We are not manually classifying the traffic; we are not differentiating the voice traffic from the FTP traffic. It’s going to do it on its own. And again, weighted fair queuing is supported if you are using low-speed links; it’s something not supported on the high-speed links. So we cannot provide fixed bandwidth guarantees; there’s no guarantee of a fixed amount of bandwidth for your voice or video traffic.