1. 2_1- Hierarchical Network Design
In this section, we are going to talk about hierarchical network design. But first, let’s take a look at the flat network concept. Flat networks are networks in which all computers, servers, and printers are connected together using just one layer of switching; there is no use of subnetting. In addition, all devices in a flat network are located in the same broadcast domain, so broadcast traffic is transmitted to every device on the network, and because of that, bandwidth is not used effectively. Flat networks cannot meet the needs of most enterprise networks or small to medium-sized businesses. The second model is the hierarchical network.
Hierarchical models allow for network design by using different layers. The layers of the hierarchical model are divided into specific functions, categorised as the core, distribution, and access layers. This categorisation gives us flexible design options and makes it easy to scale the network. In this section, we compare the flat and hierarchical networks. In the flat network, as you can see, all devices are just connected to a Layer 2 switch, and that’s it; this is one large broadcast domain. In the hierarchical network, we have access, distribution, and core layers, and we have three separate broadcast domains. Let’s talk about the hierarchical network layers in detail now. In a local area network environment, the access layer provides the end devices with access to the network. The access layer serves a number of functions, including Layer 2 switching, high availability, port security, quality of service classification and marking, ARP inspection, VLAN access control lists, spanning tree, Power over Ethernet, and so on.
The distribution layer will be the subject of our second discussion. The distribution layer aggregates the data received from the access layer switches before it is transmitted to the core layer for routing to its final destination. The distribution layer can also aggregate LAN or WAN links. The core layer is also referred to as the network backbone. The core layer consists of high-speed network devices such as the Cisco Catalyst 6500 or 6800. These are designed to switch packets as quickly as possible and interconnect multiple components, such as distribution modules, service modules, the data center, and the WAN edge. At the core layer, considerations include providing high-speed switching, increasing reliability and fault tolerance, scaling by using faster rather than more equipment, and avoiding CPU-intensive packet manipulation caused by security inspection, quality of service classification, or other processes.
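To make the aggregation idea concrete, here is a minimal Python sketch, purely illustrative and with hypothetical switch names, that models the hierarchy as a graph of uplinks: access switches uplink redundantly to the distribution layer, which aggregates them toward the core backbone.

```python
# A minimal sketch (not Cisco tooling) of a three-tier hierarchy as a graph:
# access switches uplink to distribution switches, which uplink to the core.
uplinks = {
    "ACCESS-1": ["DIST-1", "DIST-2"],   # redundant uplinks from the access layer
    "ACCESS-2": ["DIST-1", "DIST-2"],
    "DIST-1":   ["CORE-1", "CORE-2"],   # distribution aggregates access traffic
    "DIST-2":   ["CORE-1", "CORE-2"],
    "CORE-1":   [],                     # core = backbone, no further uplink
    "CORE-2":   [],
}

def paths_to_core(switch, path=()):
    """Enumerate every uplink path from a switch up to the core layer."""
    path = path + (switch,)
    if not uplinks[switch]:             # reached a core switch
        yield path
        return
    for upstream in uplinks[switch]:
        yield from paths_to_core(upstream, path)

for p in paths_to_core("ACCESS-1"):
    print(" -> ".join(p))               # e.g. ACCESS-1 -> DIST-1 -> CORE-1
```

Listing the paths shows why redundant uplinks at the access and distribution layers give the fault tolerance described above: losing any one distribution or core switch still leaves a working path.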
2. Network Topology Architectures
In our next section, we will talk about some network topology architectures. The three-tier architecture consists of three different layers: the access layer, the distribution layer, and the core layer. The core layer is in charge of fast routing, simply because it is the layer that acts as the gateway to the internet or other sites, and the core layer also provides scalability and fast recovery. When it comes to the distribution layer, in this layer we have multilayer switches that are capable of doing routing as well and have high capacity, port speed, and density.
This layer aggregates the server access layer, using switches to segment workgroups and isolate network problems in a data centre environment. And the last layer we will talk about is the access layer. We can use Layer 2 switches in this layer because forwarding is based on MAC addresses rather than routing. The access layer is the layer used to grant users access to the network; on the access layer we mostly have end-user PCs, printers, IP cameras, and so on. Okay, so let’s go ahead with the two-tier design. Now, the three-tier hierarchical design maximises performance, network availability, and the ability to scale the network design. Yeah, the three-tier architecture is really great, but many small enterprise networks do not grow significantly larger over time.
Therefore, a two-tier hierarchical design, where the core and distribution layers are collapsed into one layer, is often more practical. A collapsed core is when the distribution layer and core layer functions are implemented by a single device. The primary motivation for the collapsed core design is to reduce the network cost while maintaining most of the benefits of the three-tier hierarchical model. Now, let’s finish this session with spine and leaf. For a long time, we’ve talked about Cisco’s two- and three-tier network designs, with access, distribution, and core layers. The access layer is connected to our end devices, such as clients and servers. However, within today’s data centers, a new topological design has taken over, and it is called “spine and leaf.” So imagine a cabinet in a data centre filled with servers. Typically, there will be a couple of switches at the top of each rack, and for redundancy, each server in the rack has a connection to both of these devices. You might have heard the term “top of rack” (ToR) used to refer to these kinds of switches because they physically reside at the top of a rack.
These ToR switches act as the leaves in a leaf-and-spine topology. Ports on a leaf switch have two functions: one is to connect to your nodes, as shown here, and the other is to connect to the spine switches. You will notice in the topology on the screen that each leaf switch connects to every spine switch. You can see the leaf switches here: here we are connected to this spine switch, here we are connected to this spine switch, and here we are connected to this spine switch. As a result, there is no need for interconnections between the spine switches. Okay, you can see there is no interconnection between the spine switches, and the same is true between the leaf switches. It is also interesting to note that the uplink connections from the leaf switches to the spine switches could be either Layer 2 connections or Layer 3 routed connections. By interconnecting your ToR data centre switches in a leaf-and-spine topology, all of your switches are the same distance away from one another.
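The “same distance” property is easy to see in a quick sketch. The following Python snippet, purely illustrative and with hypothetical switch names, builds the full mesh of leaf-to-spine links and shows that any leaf reaches any other leaf in exactly two hops, through any spine:

```python
from itertools import product

# Illustrative leaf-and-spine fabric: every leaf uplinks to every spine,
# and leaves/spines never interconnect among themselves.
spines = ["SPINE-1", "SPINE-2"]
leaves = ["LEAF-1", "LEAF-2", "LEAF-3", "LEAF-4"]

# Build the full mesh of leaf-to-spine links.
links = {(leaf, spine) for leaf, spine in product(leaves, spines)}

# Any leaf reaches any other leaf through any spine in exactly two hops,
# so every leaf-to-leaf path has the same, predictable length.
for src, dst in product(leaves, leaves):
    if src != dst:
        hops = [(src, spine, dst) for spine in spines
                if (src, spine) in links and (dst, spine) in links]
        print(f"{src} -> {dst}: {len(hops)} equal-cost 2-hop paths")
```

Because every leaf-to-leaf path crosses exactly one spine, adding more spines simply adds more equal-cost paths rather than changing the distance between racks.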
3. 2_2- Cisco Switch Types
In this section, we are going to talk about Cisco switch types. Cisco designs Catalyst switches for campus networks and Nexus switches for data centers. In the context of our CCNP course, Catalyst switches will be the ones mostly discussed. We have access layer switches, distribution layer switches, and core switches. For instance, the Catalyst 2960-X is an access layer switch and the 6800 is a Cisco core switch. Let’s take a look at Layer 2 and Layer 3 switches. Layer 2 devices can do just switching, while Layer 3 devices can do both switching and routing.
Layer 2 switches forward frames based on the destination MAC address in the frame. When switching is done in store-and-forward mode, the frame is checked for errors and then passed on. Some switch models, most of them Nexus switches, prefer to read only the Layer 2 header information and forward the frame right away, skipping the CRC check. This bypass operation, called “cut-through switching,” reduces the delay of frame forwarding since the frame is not stored before it is forwarded to another port. Most Catalyst switches work in store-and-forward mode. Let’s see how a switch builds its MAC address table. Right now the MAC address table is empty; as you can see, there is no MAC address or port entry.
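The difference between the two modes comes down to whether the frame check sequence is verified before forwarding. Here is a small, purely illustrative Python sketch (real switches do this in hardware, and the Ethernet FCS is a CRC-32 over the frame; zlib.crc32 stands in for it here):

```python
import zlib

# Illustrative sketch (not vendor code) contrasting the two forwarding modes.
def store_and_forward(frame: bytes, fcs: int) -> bool:
    """Buffer the whole frame and verify its checksum before forwarding."""
    return zlib.crc32(frame) == fcs           # drop the frame if the CRC fails

def cut_through(frame: bytes, fcs: int) -> bool:
    """Forward as soon as the destination MAC (first 6 bytes) is read."""
    dst_mac = frame[:6]                       # only the header is inspected
    return len(dst_mac) == 6                  # the CRC is not checked at all

frame = bytes.fromhex("aaaaaaaaaaaa" + "bbbbbbbbbbbb") + b"payload"
print(store_and_forward(frame, zlib.crc32(frame)))  # True  - frame verified first
print(cut_through(frame, 0))                        # True  - forwarded without checking
```

The second print shows the trade-off: cut-through starts forwarding before it can know whether the frame is corrupted, which is why it has lower latency but may propagate errored frames.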
If PC1 wants to forward a frame to PC2 (that’s PC1, and that’s PC2), the frame is flooded out of all ports, and PC1’s MAC address is added to the MAC address table. That is PC1’s MAC address, as you can see. In the second step, PC2 sends a unicast response to PC1, and the switch adds PC2’s MAC address to the MAC address table. As you can see, that’s the MAC address of PC2, and PC2 is connected to the third port of our switch. After this process, if PC1 wants to send data to PC2, the frame is forwarded directly out of that port. In the end, the switch learns all of the MAC addresses connected to all of its ports and builds a full MAC address table. After this step, if PC1 wants to communicate with, for example, PC5, the frame is just sent from port one to port eight.
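The learning logic itself is simple: learn the source MAC on the ingress port, forward out a single port if the destination is known, flood otherwise. Here is a minimal Python sketch of that behaviour (the names "PC1" and "PC2" stand in for real MAC addresses, and the port numbers match the example above):

```python
# Minimal sketch (not switch firmware) of MAC address table learning:
# learn the source MAC per ingress port, flood when the destination is unknown.
mac_table: dict[str, int] = {}

def switch_frame(src_mac: str, dst_mac: str, in_port: int, all_ports: range) -> list[int]:
    """Return the list of egress ports for a frame."""
    mac_table[src_mac] = in_port                    # learn / refresh the source
    if dst_mac in mac_table:                        # known unicast: one port
        return [mac_table[dst_mac]]
    return [p for p in all_ports if p != in_port]   # unknown: flood everywhere else

ports = range(1, 9)
print(switch_frame("PC1", "PC2", 1, ports))  # PC2 unknown -> flooded to ports 2..8
print(switch_frame("PC2", "PC1", 3, ports))  # PC1 already learned -> [1]
print(switch_frame("PC1", "PC2", 1, ports))  # now forwarded directly -> [3]
```

The three calls reproduce the steps in the example: the first frame is flooded, the reply teaches the switch where PC2 lives, and from then on traffic between PC1 and PC2 goes straight to the right port.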
The MAC address table, also referred to as the CAM table, is used for Layer 2 switching; it contains the MAC address and destination port information about where a frame should be forwarded. If the destination MAC address is not found in the table, the frame is flooded to all ports in the same VLAN. Next, we have the multilayer switch. As you know, multilayer switches perform not only Layer 2 switching but also frame forwarding using Layer 3 and Layer 4 information. These switches combine switch and router functions in network devices with at least three planes of operation: management, control, and forwarding.
Let’s start with the management plane. First, the management plane is responsible for network management operations such as SSH access and SNMP; the control plane is responsible for protocols and routing decisions; and the forwarding plane is responsible for the actual switching and routing of packets. The control plane also programs the forwarding plane on how to forward packets. Multilayer switches have separate control and forwarding planes for high performance, and they can also use multiple forwarding planes at the same time. Cisco routers forward packets using one of three methods: process switching, fast switching, or CEF switching, which is Cisco Express Forwarding. Because the processor does the forwarding, process switching is the slowest mode. Fast switching is a faster method by which the first packet in a flow is routed and rewritten by the route processor using software, and each subsequent packet is then handled by the hardware.
CEF, which is the Cisco Express Forwarding method, uses hardware forwarding tables for the most common traffic flows, with only a few exceptions. If you are using CEF, the forwarding process is often separated from other tasks. Route caching is also known as flow-based or demand-based switching: as traffic flows through the switch, a Layer 3 route cache is built within the hardware functions. Since there is no entry for a new flow in the route cache, the first packet in the flow is switched in software by the processor. On Catalyst switches, fast switching is referred to as route caching, and CEF is referred to as topology-based switching. In topology-based switching, the information in the routing table is used to populate the route cache independently of traffic flow. This generated route cache is called the FIB, the Forwarding Information Base, and the mechanism that builds the FIB is Cisco Express Forwarding.
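To see the difference between route caching and topology-based switching, here is a small, purely illustrative Python sketch. The prefixes and next hops are made up, and real CEF builds the FIB in hardware as an optimised data structure, not as a Python dict:

```python
import ipaddress

# A hypothetical routing table: prefix -> next hop.
routing_table = {
    "10.0.0.0/8":  "192.0.2.1",
    "10.1.0.0/16": "192.0.2.2",
    "0.0.0.0/0":   "192.0.2.254",
}

def lpm(dst: str) -> str:
    """Longest-prefix match against the routing table (the slow, software path)."""
    addr = ipaddress.ip_address(dst)
    best = max((ipaddress.ip_network(p) for p in routing_table
                if addr in ipaddress.ip_network(p)),
               key=lambda n: n.prefixlen)
    return routing_table[str(best)]

# Flow-based (route caching / fast switching): the first packet of a flow is
# resolved in software, then the result is cached for subsequent packets.
route_cache: dict[str, str] = {}
def fast_switch(dst: str) -> str:
    if dst not in route_cache:
        route_cache[dst] = lpm(dst)      # first packet: software lookup
    return route_cache[dst]              # later packets: cache hit

# Topology-based (CEF): the FIB is pre-built from the routing table,
# independently of traffic, so even the first packet of a flow hits the FIB.
fib = dict(routing_table)

print(fast_switch("10.1.2.3"))   # 192.0.2.2 - resolved in "software", then cached
print(fast_switch("10.1.2.3"))   # 192.0.2.2 - served from the route cache
```

The key contrast is when the cache is populated: the route cache fills up only as flows arrive, so the first packet of every new flow takes the slow path, while the FIB is derived from the routing table ahead of time, regardless of traffic.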