1. Introduction
You’ll learn about VLANs, virtual local area networks. Before we get to them, I’m going to give you some background on the standard way to design your campus local area network. We’ll discuss the core, distribution and access layers, what they mean and how they fit into the design. Once we’ve laid those foundations, we can move into VLANs.
I’ll explain what they are and why we have them. Then I’ll show you how to configure our access ports and our trunk ports. And then, finally, we’ll get into some more advanced topics like DTP, the Dynamic Trunking Protocol, and VTP, the VLAN Trunking Protocol. Okay, let’s go.
2. Campus LAN Design – Core, Distribution and Access Layers
In this lecture you’ll learn about design for local area networks. So that could be for a single building, or it could be several buildings within a few hundred meters of each other on the same local campus. We’re not talking about wide area networks here, like if you’ve got a building in New York and another one in Houston or Singapore. We’ll talk about wide area networks in a later section. So, just local area networks in this section. The campus LAN should be designed for scalability, to support growth, and also for performance and security, following a best practice design process. The network topology is split into access, distribution and core layers when we’re doing the design. Each layer has its own design principles and characteristics that we’ll talk about in this lecture.
So first up, the access layer. You can see in the example local area network here I’ve got a main building in my campus and I’ve also got a separate building, building one, as well. Both of those buildings will have multiple access layer switches, and the access layer switch is where your end hosts plug in. Now, in the diagram, I’ve just got one host plugged into each switch because I’ve only got so much room on the slide. Obviously in the real world, you would have multiple hosts plugged into the same switch. So maybe in the main building we’ve got four switches: a couple of them are on the ground floor, switch three is on the first floor and switch four is on the second floor, for example.
And we’re going to have multiple hosts plugged into their local switch, and we’re going to have the same kind of thing in building one. So your end hosts, such as your desktop computers, your servers and IP phones, always connect into the network at the access layer. It’s designed to have a high port count at an affordable cost to support lots of end hosts. Desktop computers typically have only one network interface card, so they can only connect into one switch. Or, if they’re on wireless, they’ll connect into one wireless access point. Servers, however, will often have dual NICs to give them some extra redundancy, so they will connect into a pair of redundant access layer switches.
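To make that concrete, here’s a minimal sketch of what a host-facing access layer port could look like on a Cisco IOS switch. The interface number is just an example value, and the port security commands are only a quick preview of a feature we’ll cover properly in a later section:

    ! Hypothetical host-facing port on an access layer switch
    interface GigabitEthernet0/1
     switchport mode access
     ! Port security: allow only one learned MAC address on this port
     switchport port-security
     switchport port-security maximum 1
     ! Err-disable the port if a second MAC address appears (the default action)
     switchport port-security violation shutdown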
Your client access security measures, such as the port security previewed above (we’ll discuss it in detail in a later section), are enabled at the access layer. The next layer up is the distribution layer. You can see that when we’re doing our campus LAN design, we follow a hierarchical model. The end hosts plug in at the access layer, and at the level above the access layer we have our distribution layer switches; the access layer switches uplink to the distribution layer switches. Notice that the access layer switches are not usually connected to each other; they connect upstream to the distribution layer switches.
The distribution layer switches serve as an aggregation point for the access layer and provide additional scalability in our local area network. They are typically deployed in redundant pairs. We don’t want to have a single point of failure at the distribution layer. That would maybe be acceptable in a very small campus, but in any normal-sized campus you’re going to want redundant distribution layer switches organized in pairs, so if one of them goes down, your clients have still got connectivity. The downstream access layer switches are connected to both switches in the distribution layer pair. If I go back a slide, you see all my access layer switches here.
They’ve got uplinks to distribution layer switch one and distribution layer switch two, and you see that both buildings are designed the same. My main building has got its access layer switches and its redundant distribution layer switches, and I’ve got the same thing in building one, with end hosts connecting at the access layer. End hosts do not typically connect into the distribution layer switches directly. As for what we do at the distribution layer: most software policy, such as quality of service (QoS) policy, is enabled at the distribution layer. The next layer up is the core layer. Your distribution layer switches uplink to the core layer.
Notice that we had our access layer and our distribution layer switches in both buildings. The core layer switches are just going to be in one building, and it’s the core layer switches that link all of your buildings together. So here we’ve got a pair of redundant core layer switches in the main building, and your distribution layer switches uplink there. Your core layer switches, just like your distribution layer switches, are typically deployed in redundant pairs, with your downstream distribution layer switches connected to both. Traffic between different parts of the campus travels through the core, so it is designed for speed and resiliency.
Software policy slows the switch down, so it should be avoided in the core layer. That’s why we did things like our QoS policy at the distribution layer. Any kind of software policy that you’ve got enabled on your switch forces the switch to do extra processing, which slows it down. At the core layer, the main things are speed and resiliency; we don’t want anything slowing it down, so we minimize software policy on our core layer switches. In a smaller network you could have a collapsed distribution and core layer. That is common because smaller campuses don’t need the scalability of three separate layers. So in those cases a collapsed distribution and core layer is used, where the distribution and core layer functions are performed on the same hardware device. This is what a collapsed distribution and core looks like.
We don’t have separate physical devices for the core and the distribution layer. We have a pair of switches here in our main building, and they are being used as both the distribution and the core layers, so they fulfill the functions of both. So, to summarize: our end hosts plug in at the access layer. Our access layer switches are designed to support a high port count at an affordable cost, and we implement our LAN security policies on our access layer switches. Our access layer switches uplink to our distribution layer switches. We’re going to have those organized in pairs to give us redundancy; we don’t want to have a single point of failure.
And our software policies are enabled at the distribution layer. The distribution layer switches form an aggregation point for all of our access layer switches. And finally, we have the core layer. The core layer is designed for speed and resiliency; it’s what connects the distribution layer switches in all of your different buildings together, and we don’t want to slow it down with software policies. Okay, that’s it for the LAN design. See you in the next lecture, where we’ll start talking about VLANs.
3. Spine-Leaf Network Design
In this lecture you’ll learn about the spine-leaf data center network design. You can see the traditional campus design here; I covered this in the last lecture, with the core, distribution and access layers. In the example here, we’ve got the main building and building one, and the main building is now actually a data center with our servers in there. With old-style, traditional data center environments, this traditional core, distribution and access layer design would work just fine, because we had mostly north-south traffic flows. What I mean by north- and southbound traffic flows is where the traffic is mainly flowing up and down.
So traffic would be going up and down in the data center and then down to the clients in the other buildings. You can see here it’s going up from the building through the access and distribution layers to the core layer, and then back down from the core layer to the distribution and access layers. You can see with our north-southbound traffic flows that’s going from the clients over here to these servers in the data center. And the traditional campus design works really well where most of your traffic flows are going in that north and southbound direction. But in modern data centers, there’s a trend nowadays where we see a lot more traffic going in an east and westbound direction. What I mean by that is traffic between the actual servers themselves within the data center.
The reason for that is that data centers are getting bigger. There’s a lot of virtualization now, so many virtual servers, and those servers might be clustered, where an app is spread across multiple different servers and all of those servers need to talk to each other. You also might have an application, for example, which has got a web-based front end on one server talking to a back-end database on another server. So again, that traffic, rather than going north and southbound through the different layers, would be going east-westbound between the different servers themselves. And while the traditional campus design works well where most of your traffic is north and southbound, it’s not so good where a lot of your traffic is east and westbound.
And as I said, in modern data centers you do have a lot of east-west flowing traffic. So because of that, there is another network design that’s very popular in data centers now, and that is the spine-leaf data center design.
Now, you’re probably looking at this and thinking, wait a minute, Neil, that looks pretty much exactly the same as a collapsed core and distribution layer with the traditional model. And yes, right now it does. So going back to the traditional model again, you can see here that we pair up our distribution layer switches and we also pair up our core layer switches. That gives us some load balancing, and it also gives us redundancy as well, because we wouldn’t want to have a single point of failure.
And this example of the spine-leaf design looks the same right now, but the thing is that it’s actually designed so we can get additional scalability and better performance for our east-west traffic flows. With the scalability, we can just add on additional switches in the east and west direction. So you can see here, if I’ve got a larger data center, I can just add additional spine switches and additional leaf switches. With the spine-leaf design, we’ve got the spine switches here, which are at a higher level in the hierarchy.
We don’t have our servers connected there. Our servers are connected into our leaf layer switches, and we have got a full mesh between them: all of our leaf switches are connected to all of our spine switches. And as I said, it’s really easy to scale this out just by adding additional switches in the east and west direction. So this gives us good scalability. It also gives us good performance, because if any of the servers in the data center need to talk to each other, they are only ever a maximum of two hops apart. Meaning if, for example, the server here is talking to a server on the right, the traffic goes one hop up to a spine switch and then one hop down to the destination leaf switch.
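To put some illustrative numbers on that (these counts are just an example, not figures from the lecture slides): with 4 spine switches and 8 leaf switches, the full mesh needs 4 × 8 = 32 links, and adding a ninth leaf switch costs exactly 4 new links, one to each spine. And whichever leaf a server sits on, the path to a server on any other leaf is always leaf to spine to leaf: two hops.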
So that gives us the good performance and also the good scalability where we’ve got a lot of east- and westbound traffic. We are still going to have north- and southbound traffic here as well, but the design gives us those gains where we’ve got that additional east-west traffic. Okay, that’s everything I needed to tell you about the spine-leaf data center network design. I’ll see you in the next lecture, where we’re going to be back onto our main campus networks, which are the main focus of the CCNA exam. We’ll start getting into detail on our VLANs.
4. Why we have VLANs
In this lecture, you’ll learn about why we have VLANs. Virtual local area networks are a layer two feature implemented on our switches, and to understand why we have them, you need to understand the problem that they solve first. So, looking at router operations first: you already know routers operate at layer three of the OSI stack. Hosts in separate IP subnets must send traffic via a router to communicate. That’s a router’s main job: routing traffic between different IP subnets.
Security rules on routers or firewalls can be used to easily control traffic that is allowed between different IP subnets at layer three. For example, let’s say all of your engineering hosts are in the 10.10.10.0/24 subnet and your accounts hosts are in 10.10.20.0/24. If your engineering hosts should never be talking to your accounting hosts, you can easily implement security rules on a router or a firewall to block traffic from the 10.10.10.0/24 subnet to the 10.10.20.0/24 subnet. You’ll see how to configure that properly when we do the Access Control List section later on.
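As a quick, hedged preview of what that could look like on a Cisco IOS router (the ACL number and interface name here are just example values):

    ! Hypothetical extended ACL: block engineering-to-accounts traffic
    access-list 100 deny ip 10.10.10.0 0.0.0.255 10.10.20.0 0.0.0.255
    access-list 100 permit ip any any
    !
    ! Apply it inbound on the interface facing the engineering subnet
    interface GigabitEthernet0/0
     ip access-group 100 in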
Routers do not forward broadcast traffic by default. They provide performance and security by splitting networks into smaller domains at layer three. Okay, so that’s router operations. Now for switch operations: switches operate at layer two of the OSI stack, and unlike routers, they do forward broadcast traffic by default. So, by default, a campus switched network is one large broadcast domain. Your switches flood broadcast traffic everywhere, including between different IP subnets, and that raises performance and security concerns.
Have a look at an example local area network here. We’ve got a simple LAN with one switch, and we’ve got some engineering PCs and some sales PCs plugged into that switch. The engineering and the sales PCs are in different IP subnets at layer three, and we’ve got a router to route traffic between them. First, let’s send unicast traffic within the same IP subnet. Sales PC Two at 10.10.20.10 wants to communicate with Sales PC One at 10.10.20.11, so it sends some traffic with a destination IP address of 10.10.20.11, and that will come into the switch. As long as the switch has already learned the MAC address of Sales PC One, the switch will just send it out the one port that Sales PC One is connected into. So this is very good for performance and security.
Traffic is only going exactly where it needs to go. Now let’s send unicast traffic between different IP subnets. Say a sales PC wants to talk to an engineering PC, so Sales PC Two sends some traffic and it comes into the switch. The switch will then send that on to the router, because although the destination IP address is the engineering PC’s IP address, the destination MAC address is that of the sales PC’s default gateway. The router will then route the traffic over to the engineering subnet and send it back down to the switch.
And as long as the switch has already learned the MAC address of Engineering PC One, the switch will just send it down to that PC. So as you can see, with unicast traffic, whether it’s within the same subnet or between different subnets, we always get very good security and performance, because traffic is only going exactly where it needs to go. And you can easily implement security policies on the router to limit traffic between your IP subnets. If we didn’t want to allow traffic between the sales and the engineering subnets, we could very easily do that with a security policy on our router, or on a firewall if we had one there. But it’s different for broadcast traffic.
So let’s have a look and see what’s going to happen now. In our example, Sales PC Two sends out some broadcast traffic, like an ARP request for example, and that comes into the switch. What a switch does with broadcast traffic is flood it out all ports apart from the one it was received on. So the traffic goes absolutely everywhere, to the PCs in the engineering subnet and in the sales subnet as well.
The problem this gives us is that it affects security, because the traffic bypasses the router or firewall layer three security policies. Maybe we did have a security policy on the router blocking traffic between the sales and the engineering PCs at layer three, but when a sales PC sends out broadcast traffic, it bypasses that and still hits the engineering PCs. So if somebody launched some kind of layer two attack, this is a way they could bypass your security policies. It also affects performance, because every end host has to process the traffic, all of the sales PCs and all of the engineering PCs as well.
It also affects performance by using bandwidth on links where the traffic is not required. To highlight that, let’s look at a slightly different network topology. We’ve got the same switch in the middle with the sales and engineering PCs connected into it, and that switch is also connected to another switch in a different part of the building that has just got accounting PCs plugged into it. Again, when the sales PC sends in some broadcast traffic, it gets flooded out all ports on the switch; it hits the other switch and it gets flooded out all ports there as well.
So the traffic is also flooded across the link to the switch that the accounts PCs are connected to, and on to the accounts PCs themselves, when there’s really no need to send the traffic there. So that was the problem. VLANs, or virtual local area networks, are the solution. We can increase performance and security in the LAN by implementing VLANs on our switches. VLANs segment the LAN into separate broadcast domains at layer two. So VLANs are a layer two feature implemented on your switches, and there’s typically a one-to-one relationship between an IP subnet and a VLAN.
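On a Cisco IOS switch, creating two such VLANs and assigning a host port to one of them looks something like this. The VLAN IDs and interface name are just example values for illustration; we’ll configure access ports properly in a later lecture:

    ! Hypothetical VLAN IDs for the two departments
    vlan 10
     name Engineering
    vlan 20
     name Sales
    !
    ! Put a host-facing port into the Sales VLAN
    interface GigabitEthernet0/2
     switchport mode access
     switchport access vlan 20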
So with the same network topology example, what we do is create an Engineering VLAN on the switch and also create a Sales VLAN on the switch. We put all of the engineering PCs and the router interface for the engineering subnet into the Engineering VLAN, and we put all of the sales PCs and the router interface for the sales subnet into the Sales VLAN. And the switch only allows traffic within the same VLAN. So what’s going to happen now? For unicast traffic within the same IP subnet, this is actually going to be the same. Sales PC Two sends some traffic with Sales PC One as the destination. It comes into the switch. The switch in our example has already learned the MAC address of Sales PC One, so it just sends it out that port. So that was the same as before. It’s also the same for unicast traffic between different IP subnets: Sales PC Two is going to send some traffic to an engineering PC now, so it comes into the switch. And you know how I said that the switch only allows traffic within the same VLAN? Well, the destination MAC address is that of Sales PC Two’s default gateway on the router, which is also in the Sales VLAN. So the switch will send the traffic up to the router, only on that port, because it had already learned its MAC address.
The router will then route the traffic to the Engineering VLAN and send it out its engineering interface. So it comes into the switch in the Engineering VLAN, and the switch will then send the traffic to the engineering PC. It’s allowed to do that because the PC is also in the Engineering VLAN. So unicast traffic, whether it’s within the same subnet or between different subnets, works the same whether we’re using VLANs or not. Where the big difference comes in, and the big benefit, is with broadcast traffic. So now Sales PC Two sends out some broadcast traffic. That hits the switch, and the switch floods it out all ports, but only ports that are in the same VLAN. So it hits all of the other sales PCs and the sales interface on the router, but it does not hit any of the engineering PCs. The traffic only gets flooded where it needs to go, and that improves security and performance.
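If you want to verify which ports ended up in which VLAN on a Cisco IOS switch, the usual command is show vlan brief. The output below is a trimmed, hypothetical example using the VLANs from this lecture:

    Switch# show vlan brief

    VLAN Name                             Status    Ports
    ---- -------------------------------- --------- ------------------------
    10   Engineering                      active    Gi0/2, Gi0/3
    20   Sales                            active    Gi0/4, Gi0/5

Each access port appears under exactly one VLAN, which is the layer two segmentation described above doing its job.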