16. Class of Service
Let’s talk about class of service. When packets are forwarded on a network, the devices responsible for forwarding them, such as routers and firewalls, forward each packet independently, without applying any special treatment. Most of the time this is not a problem; for the applications we use every day, like email and web browsing, it works well. This type of forwarding is known as best-effort service.
But there are certain applications that require special treatment for their traffic. These are usually time-sensitive applications, like real-time audio and video. If this type of traffic is delayed or subject to congestion, quality suffers.
Delay is the amount of time it takes for a packet to travel from source to destination. Jitter is the variation in the delay of received packets, and it is generally caused by congestion: the more congestion there is, the more packets are delayed, and the less congestion there is, the lower the delay. Heavy congestion can also lead to packet drops, ultimately affecting quality.
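To make these definitions concrete, here is a small Python sketch (the delay samples are made up for illustration) that computes the average delay and a simple jitter estimate, treating jitter as the average variation between consecutive packet delays.

```python
# A minimal sketch: average delay and a simple jitter estimate from
# hypothetical one-way delay samples (in milliseconds). Real receivers
# (e.g. RTP, RFC 3550) use a smoothed interarrival jitter, but the idea
# is the same: jitter measures how much delay varies packet to packet.
delays_ms = [20.1, 22.4, 19.8, 35.6, 21.0]  # assumed sample values

mean_delay = sum(delays_ms) / len(delays_ms)

# Variation between consecutive packets, averaged.
diffs = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
jitter = sum(diffs) / len(diffs)

print(f"mean delay: {mean_delay:.1f} ms, jitter: {jitter:.1f} ms")
```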
With class of service, you can assign traffic to different classes and define service levels for those classes.
Class of service allows you to provide differentiated services when regular forwarding, or best-effort delivery, is insufficient. You might want to prioritize time-sensitive traffic over the normal traffic on the network. With class of service, you can configure different classes of service for different applications.
For example, you might have one class of service for time-sensitive traffic and another class of service for all other applications. The purpose of this is to assign different service levels, meaning different configurations, to different types of traffic. You can create configurations with specific delay, jitter, and packet loss characteristics for the applications you want to prioritize. Class of service can also guarantee a minimum amount of bandwidth dedicated to a service class.
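As a conceptual sketch only (this is plain Python, not a Junos class-of-service configuration), here is what one possible service level, strict-priority queuing, looks like: packets in the time-sensitive class are always transmitted ahead of best-effort packets.

```python
from collections import deque

# Two hypothetical forwarding classes: expedited traffic (e.g. voice) and
# best-effort traffic. A strict-priority scheduler always serves the
# higher-priority queue first, so voice packets see minimal queuing delay.
expedited = deque(["voice-1", "voice-2"])
best_effort = deque(["web-1", "web-2", "web-3"])

def dequeue_next():
    """Return the next packet to transmit, preferring the expedited class."""
    if expedited:
        return expedited.popleft()
    if best_effort:
        return best_effort.popleft()
    return None

while (pkt := dequeue_next()) is not None:
    print("transmitting", pkt)
```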
The important thing is that class of service must be implemented on each device in the path.
For example, let’s say we have a network with three routers. If we implement class of service to prioritize certain traffic, that configuration needs to be applied on every router in the path; in this case, we would have to apply it on all three routers.
Now let’s look at it from a packet standpoint. The IPv4 packet header contains many fields, and the field responsible for implementing class of service is an eight-bit field called Type of Service.
We have a similar field in the IPv6 packet header. It is also eight bits in length, but in the IPv6 packet header it is called Traffic Class.
It’s important to remember that even though the field is eight bits long, not all eight bits are used to implement class of service; only six of the eight bits are used. In much of the documentation you will see class of service referred to as the Differentiated Services Code Point (DSCP), and sometimes as the DiffServ code point. If you see any of these terms, they refer to class of service. At the JNCIA level, we do not need to go into the configuration of class of service; we only need to know it at a conceptual level.
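As a small example of how those bits are laid out, the following Python snippet (assuming a Linux host, where the IP_TOS socket option is available) places a DSCP value in the top six bits of the eight-bit field; the remaining two bits are used for Explicit Congestion Notification (ECN).

```python
import socket

# A minimal sketch, assuming a Linux host: the six DSCP bits occupy the upper
# part of the eight-bit ToS (IPv4) / Traffic Class (IPv6) byte; the remaining
# two bits are used for ECN. DSCP 46 is "Expedited Forwarding", commonly used
# for voice traffic.
DSCP_EF = 46
tos_byte = DSCP_EF << 2          # shift the DSCP value into the top six bits

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Ask the OS to mark packets sent from this socket with that value.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos_byte)

print(f"ToS byte = {tos_byte} (0b{tos_byte:08b}), DSCP = {tos_byte >> 2}")
```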
For the exam, it is important to remember the purpose of class of service. You might see a question that describes a certain type of traffic that needs to be prioritized over the rest of the traffic and asks which feature you would use to prioritize it. The answer would be class of service.
17. Connection-oriented vs Connectionless protocols
Now let’s talk about connection-oriented versus connectionless protocols. Let’s start with the first one.
Connection-oriented protocols, as the name suggests, require a connection to be established between two parties before they can start exchanging data. A very common and popular example is TCP, the Transmission Control Protocol.
TCP uses a mechanism called the three-way handshake to establish a connection between two parties.
Let’s take a look at an example. Here we have two computers: the one on the left is the initiator of the connection, and the one on the right is the responder. To establish the connection, the initiator sends the first packet, a SYN. The responder replies with a SYN-ACK, and the initiator then sends the final packet, an ACK. Once these three packets have been exchanged successfully, the two parties are in a connection and can start sending data.
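In practice, applications do not build the handshake themselves; the operating system does. The following minimal socket sketch (addresses and port are placeholders) shows where the three-way handshake happens: when the client calls connect(), and before the server’s accept() returns.

```python
import socket

# --- Server side (runs in its own process or on its own host) ---
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 9000))         # placeholder port
server.listen()
conn, addr = server.accept()           # returns once SYN / SYN-ACK / ACK completes

# --- Client side ---
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("192.0.2.10", 9000))   # the kernel performs the three-way handshake here
client.sendall(b"hello")               # data is exchanged only after the handshake
```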
TCP has a similar mechanism for connection termination, known as the four-way handshake. It is not talked about as often, but let’s take a look at it.
Let’s say both computers are already in a connection and now want to terminate it. The initiator sends a FIN packet. The responder sends an acknowledgement (ACK) and follows it up with its own FIN. The initiator then sends the final packet, its own ACK. When the two computers have exchanged two FIN packets and two acknowledgements, the connection is said to be terminated.
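Continuing the socket sketch above, the four-way termination is also handled by the operating system: closing a socket sends a FIN, and the peer sees the end of the conversation when its recv() eventually returns an empty byte string.

```python
# Continuing the hypothetical sketch above. close() on the initiator sends a
# FIN; on the responder, once all remaining data has been read, recv() returns
# b"" when the FIN arrives, and closing that socket sends the second FIN back.
client.close()                      # initiator: FIN

while (chunk := conn.recv(1024)):   # responder: drain any remaining data
    pass
conn.close()                        # responder: its own FIN; termination completes
```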
TCP is used by applications that require a connection to be established before exchanging data, and that includes most of the common applications we use every day: HTTP for web browsing, FTP (File Transfer Protocol), and Telnet, among others.
The reason TCP is so popular is that it has features that provide reliable data transfer. One is ordered data transfer: the bytes the receiver gets are identical to, and in the same order as, the bytes the sender sent.
Another feature is retransmission of lost packets. Every packet the sender sends must be acknowledged by the receiver; if a packet is not acknowledged, it is assumed to be lost and the sender retransmits it. This is why TCP is a reliable mechanism for exchanging data.
TCP also has flow control, which ensures that the sender sends data at a speed the receiver can process. Different devices have different processing capabilities: some can handle packets quickly, while others are slower. Flow control manages the rate of data transfer so the sender does not overwhelm the receiver.
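As a toy illustration only (real TCP flow control is handled inside the operating system’s TCP stack), the core idea is that the sender never has more unacknowledged data outstanding than the window the receiver has advertised.

```python
# Toy model of flow control (not real TCP): the receiver advertises a window,
# and the sender never has more unacknowledged bytes in flight than that.
receiver_window = 4096          # bytes the receiver says it can buffer
data = b"x" * 10000             # hypothetical payload to transfer
in_flight = 0
sent = 0

while sent < len(data):
    # Send only as much as fits in the receiver's advertised window.
    allowed = receiver_window - in_flight
    chunk = data[sent:sent + allowed]
    sent += len(chunk)
    in_flight += len(chunk)
    print(f"sent {len(chunk)} bytes, {in_flight} in flight")

    # Pretend the receiver processed everything and acknowledged it,
    # which frees up the window for the next burst.
    in_flight = 0
```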
On the other hand, we have connectionless protocols. As you would imagine, with these protocols you do not need to set up a connection before exchanging data. A common example is UDP, the User Datagram Protocol.
Connectionless protocols have lower overhead than connection-oriented protocols. The reason is that with connection-oriented protocols, when two parties establish a connection, resources are allocated for it: the operating system on each host may reserve some CPU and memory for that connection to work properly, so there is resource allocation for every connection. With connectionless protocols, because you are not establishing a connection, there is less overhead and you can start sending data right away.
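A minimal UDP sketch in Python (the address and port are placeholders) shows the difference: there is no handshake, so the very first call can already carry application data.

```python
import socket

# Minimal UDP sketch: no handshake, no connection state to set up.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b"hello", ("192.0.2.10", 9999))   # datagram goes out immediately

# The trade-off: there is no guarantee it arrives, arrives only once, or
# arrives in order relative to other datagrams.
```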
While this is a benefit of connectionless protocols, it also comes with disadvantages: data can be lost, there can be errors in the packet exchange, packets can be duplicated, and packets can arrive out of sequence. So connectionless protocols have their own advantages and disadvantages.
Connectionless protocols are typically used where speed of communication is the priority. A good example is DNS, the Domain Name System. When we access a website on the Internet by name, that name is resolved to an IP address in the background; this is called domain name resolution, and it has to happen very fast, so UDP is used for it. Other applications of UDP include real-time applications like audio and video streaming and broadcasting.
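As a final small sketch, the resolution step is visible from Python: the system’s stub resolver behind getaddrinfo() typically sends its query to the configured DNS server over UDP port 53, because a single quick request and response is all that is needed.

```python
import socket

# Resolve a hostname to IP addresses. Under the hood, the OS resolver usually
# sends the DNS query as a UDP datagram to port 53 of the configured server.
for family, _, _, _, sockaddr in socket.getaddrinfo("www.example.com", 443,
                                                    proto=socket.IPPROTO_TCP):
    print(family.name, sockaddr)
```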