2V0-21.20: Professional VMware vSphere 7.x Certification Video Training Course Outline
Managing Networking in vSphere 7
Managing Storage in vSphere 7
vSphere 7 Monitoring Tools
Managing Networking in vSphere 7
6. NSX-T 3.0 and vSphere 7
NSX is a virtualized network solution: it gives you things like routers, firewalls, and switches in software. So if you don't have NSX, you can basically create a standard or distributed switch and that's it; you can't create routers or firewalls, or insert third-party services, or any of that. You can do all of that if you have NSX. And one of the big features of vSphere 7 is the fact that it has a higher level of integration with NSX-T 3.0. So I'm just going to give you a high-level overview here of what that integration looks like. With NSX-T, we have a type of virtual switch called an N-VDS, and if we're going to be creating networks within NSX, we would have to install this N-VDS on our ESXi hosts, which NSX calls transport nodes. So basically, think of the N-VDS as a little switch module that is installed on each of these ESXi hosts. From there, I can go into the NSX user interface and create Layer 2 segments, very much like the way that I create port groups with the vSphere Distributed Switch. But one of the benefits of NSX is that it can also create Layer 2 segments that span KVM hosts, bare-metal hosts, or edge nodes. That's the reason why NSX-T has this kind of separate interface: we can create networks that span not only ESXi hosts but also things like KVM hosts or bare-metal hosts. Now, NSX-T 3.0 is obviously not the first version of NSX. NSX has been around for a while, and prior to version 3.0, the N-VDS was our only option; that was the way we had to create these switching segments that were specific to NSX-T. The problem is, let's say that I already have a vSphere environment running and I've got all of these physical adapters down here. These vmnics are currently allocated to a vSphere Distributed Switch. Well, I have to choose what I'm going to do.
Am I going to take these vmnics away from the vSphere Distributed Switch and give them to the N-VDS? That became a real challenge because, in a brownfield deployment, most of my VMs are probably already connected to the vSphere Distributed Switch. So if I want to set up an N-VDS for NSX, am I going to start removing physical adapters from the vSphere Distributed Switch? And if so, what about the VMs that are still connected to that distributed switch? It makes the migration a bit of a headache and oftentimes results in the need for more physical adapters. Fast forward to vSphere 7 and NSX-T 3.0: with vSphere 7 comes the release of the vSphere Distributed Switch version 7.0, and the version 7.0 distributed switch supports NSX distributed port groups. So now I can eliminate those N-VDS switches, and we can use the same vSphere Distributed Switch for NSX-T 3.0 segments and for regular old distributed port groups. And so now I can have port groups that are just like the port groups I've always had, but I can also go into NSX and create segments, and those segments will appear basically as port groups on the same vSphere Distributed Switch. So I can now have NSX-T port groups and vSphere port groups, and guess what? They exist on the same vSphere Distributed Switch, and they can use the same set of physical adapters. That makes the migration process from the vSphere Distributed Switch to NSX-T much simpler, because I don't have this headache of moving physical adapters from one platform to another. Another significant advantage involves certain NSX-T features, such as micro-segmentation, that may be the whole reason you're interested. So let's assume that that's the reason you want to roll out NSX-T. You want micro-segmentation. The other stuff is great, but you don't really care about it so much. You're not really interested in distributed routing or edge services or anything like that.
You're buying NSX-T strictly for micro-segmentation purposes. So here I've got two virtual machines, and let's assume that these two virtual machines are connected to the same port group on the same vSphere Distributed Switch. What I want to do is establish a set of firewall rules. Let's assume that VM 1 is a web server. We're going to have traffic flowing in and hitting VM 1, and we also want to allow some traffic to leave VM 1. VM 2 is providing a database for the web server, and we don't necessarily want the same type of traffic that's allowed to hit VM 1 to hit VM 2. We don't want to allow that web traffic to hit VM 2. We want to open up a very specific set of openings for VM 2: it should only be traffic coming from VM 1, and it should be traffic on the port my database uses. So ideally, what I'll have here is a set of firewall rules attached to VM 1 and a different set of firewall rules attached to VM 2, even though those virtual machines are on the same Layer 2 network. Even though those virtual machines are connected to the same port group on the vSphere Distributed Switch, I want to give them different sets of firewall rules. That's what micro-segmentation is all about: giving me the ability to configure firewall rules on a per-virtual-machine basis. And that's one of the major features of NSX-T. Well, what I can do with vSphere 7 and NSX-T 3.0 is install NSX-T, use a vSphere Distributed Switch as my switch for NSX-T, and then I'm immediately able to start creating these micro-segmentation rules. I can create micro-segmentation rules for virtual machines even though I haven't migrated them to an NSX segment. That's a huge advantage if you're just trying to get to the point where you can do micro-segmentation and don't want to configure all of NSX-T's other features. You just want micro-segmentation. You don't have to rearchitect your network here.
You can just roll out NSX and start creating micro-segmentation rules on the virtual machines that are already connected to your existing port groups. Because now that my NSX-T segments are being created on a vSphere Distributed Switch, I could still have VMs down here on a regular old port group on that distributed switch. But because I also have NSX-T, I can attach firewall policies to those individual virtual machines, and I can achieve micro-segmentation. So I don't necessarily have to do a big transition from the vSphere Distributed Switch to an NSX-T Layer 2 segment. I could just implement NSX, leave everything the way that it is, and then use NSX-T only for the micro-segmentation feature to establish those firewall rules on all of those individual virtual machines. And many of the students I teach cite micro-segmentation as the primary reason for adopting NSX. It might not be for the virtual routing; it might not be for the ability to run over a Layer 3 physical network. There are a lot of other benefits to NSX, but the most popular reason that I hear is micro-segmentation. Well, now it's basically plug and play. I can roll out NSX-T, I can do it on a vSphere Distributed Switch, and all of my VMs can just stay the way they are. They can stay connected to the port groups that they're connected to, and I can immediately start creating firewall policies and micro-segmentation rules that apply to those individual virtual machines.
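To make the per-VM firewall idea concrete, here's a minimal sketch in Python of distributed-firewall-style rule evaluation. Everything here is an illustrative assumption — the rule format, the VM names, and the `allow` helper are invented for the sketch and are not the NSX-T API:

```python
# Sketch of per-VM (micro-segmentation style) rule evaluation.
# Rule fields and VM names are illustrative, not the NSX-T API.

def allow(rules, src, dst, port):
    """Return True if any rule attached to dst permits this flow."""
    for rule in rules.get(dst, []):
        if rule["src"] in (src, "any") and rule["port"] in (port, "any"):
            return True
    return False  # default deny, as in a distributed firewall

# Hypothetical policy: VM 1 is the web server, VM 2 its database.
rules = {
    "vm1": [{"src": "any", "port": 443}],   # anyone may reach the web tier
    "vm2": [{"src": "vm1", "port": 5432}],  # only VM 1, only the DB port
}

print(allow(rules, "internet", "vm1", 443))  # web traffic to VM 1: allowed
print(allow(rules, "internet", "vm2", 443))  # web traffic to VM 2: denied
print(allow(rules, "vm1", "vm2", 5432))      # VM 1 -> VM 2 on the DB port: allowed
```

The point of the sketch is that VM 1 and VM 2 carry different rule sets even though nothing about their Layer 2 placement differs.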
7. Demo: vSphere 7 Distributed Switch Features
I'll be performing these tasks using the hands-on lab that I built on VMware Workstation. If you're looking for the methodology for how to build that lab, check out my home lab course. So here we are in the vSphere Client, and I'm going to go to Networking. Some of the features that we're about to go over are actually available on both the vSphere Standard Switch and the vSphere Distributed Switch, but some of them are strictly available on the vSphere Distributed Switch. So I'll do my best as we go through this to differentiate. I'm just going to start by clicking on the Configure tab and taking a look at some of the general properties of this virtual switch. Here you can see the version of the vSphere Distributed Switch that we've deployed. You can see things that we've configured, like the number of uplinks and the MTU (maximum transmission unit). And you can also see the discovery protocol that's been configured. At the moment, this virtual switch is configured for Cisco Discovery Protocol in listen mode. What this basically means is that this virtual switch is going to learn information about my physical switches. For example, if I have vmnics on my vSphere Distributed Switch, those vmnics are the physical adapters of my hosts, and they're normally plugged into a physical switch. Well, now this vSphere Distributed Switch can find out what ports they are plugged into, what type of switch they are plugged into, and what the software version on that switch is. That's Cisco Discovery Protocol, and as you can imagine, it only works with Cisco devices. CDP is actually available on both the vSphere Standard Switch and the vSphere Distributed Switch. I'm going to go ahead and hit Edit here and go to Advanced. And here you can see that we can choose either the Cisco Discovery Protocol or the Link Layer Discovery Protocol. So again, CDP is supported by both the standard virtual switch and the distributed switch.
I can listen only, to learn information about the physical switches; we can advertise, to tell the physical switches about the virtual switch; or we can listen and advertise. And if we're using the distributed switch, the LLDP option is available. LLDP is basically the same thing as CDP: it's a discovery protocol where switches can learn about each other. But LLDP is not specific to Cisco. It's an industry-standard discovery protocol, so it can work with vendors other than Cisco. So let's take a moment to look at the topology of this vSphere Distributed Switch. Here you can see the topology. We've got our physical adapters here on the right, and I'm going to expand these. You can see that for uplink 1, we have two physical adapters: one vmnic on ESXi 1 and one vmnic on ESXi 2. So for each of the hosts being managed by this vSphere Distributed Switch, we can have four uplinks per host. The next option I want to take a look at is private VLANs. Again, private VLANs are a feature that is only supported by the vSphere Distributed Switch. So let's take a look at how we set these private VLANs up. Basically, a private VLAN is a way to set up a VLAN within another VLAN. Let's say, for example, we create primary VLAN 10. I can create multiple secondary VLANs within that primary VLAN. What we're basically doing here is creating boundaries within VLAN 10. For example, I could have a virtual machine on a port group connected to primary VLAN 10 and secondary VLAN 10, and that secondary VLAN is promiscuous, which means that anything on primary VLAN 10 can communicate with anything connected to that secondary VLAN. Then, also within primary VLAN 10, I could have a secondary VLAN called VLAN 11.
And that secondary VLAN is a community VLAN, meaning everything within primary VLAN 10 and secondary VLAN 11 can communicate amongst themselves, and any devices in that community can also talk to the promiscuous secondary VLAN. Now, I could also create another community VLAN; just be aware that anything connected to that other community VLAN cannot communicate with community VLAN 11, and vice versa. So when you create a secondary VLAN as a community VLAN, you're basically creating this sort of isolated community whose members can communicate with each other and with the promiscuous secondary VLAN. And then thirdly, we've got an isolated VLAN here. The isolated VLAN is exactly what it sounds like: it is isolated. So again, I could assign to a port group on my distributed switch primary VLAN 10 and secondary VLAN 12. As a matter of fact, let's go ahead and do that. Here's one of my port groups. I'm going to go ahead and edit the settings on this port group, and under VLAN, I'm going to assign a private VLAN. On this particular port group, I can assign my promiscuous secondary VLAN, my community secondary VLAN, or my isolated secondary VLAN. So now I've got this port group that I can connect virtual machines to. And all the VMs that are connected to this port group, with this isolated VLAN, actually can't communicate with one another. All the VMs connected to that port group are completely isolated. The only things that they can talk to are devices connected to the promiscuous secondary VLAN. So private VLANs are an interesting feature. Let's think about an example: a hotel. I have all of these rooms where all of these guests sit, and they all have their computers, right? I could make each one of those rooms a member of an isolated VLAN.
So when everybody hooks up their computers, the guest in room 1 can't communicate with the computer of the guest in room 2, but they can both talk to the promiscuous VLAN, which is my router. And so they can get out onto the internet without being able to communicate from room to room, and that's good from a security perspective. And then maybe I've got a conference room where we've got a bunch of people connecting to a community secondary VLAN, and they can all communicate within that community, and they can all communicate with the promiscuous secondary VLAN as well. So that's really the point of these private VLANs: to take a VLAN and further segment it by creating promiscuous, community, and isolated secondary VLANs inside of it. That's private VLANs, and they're only available with the vSphere Distributed Switch. All right, so the next feature on our list here is NetFlow. Here's what NetFlow does. Basically, I can set this up on my vSphere Distributed Switch; again, this is only a feature of the vSphere Distributed Switch. NetFlow is not supported on the vSphere Standard Switch. What I can do is configure a NetFlow collector here. I'll put in the IP address of a server that's going to collect NetFlow data. Basically, the distributed switch is going to send little summaries of the traffic that's occurring: which IP is traffic coming from, which IP is traffic going to, and which ports are they on? And the NetFlow collector is going to sit there and take in all of this information. I'll also have to give my distributed switch an IP address so that it can communicate with this NetFlow collector. So the NetFlow collector is getting all of these little traffic flows from the distributed switch, and what it's doing is compiling a history of the traffic that has passed through that virtual switch.
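As an aside, the private VLAN reachability rules from a moment ago can be sketched as a small function. The tuple format and the `can_talk` helper are invented purely for illustration; in the vSphere Client you only ever pick the primary/secondary IDs and the type:

```python
# Sketch of private VLAN reachability (promiscuous / community / isolated).
# The data model here is illustrative, not a vSphere API.

def can_talk(a, b):
    """a and b are (secondary_vlan_id, type) pairs within the same primary VLAN."""
    if "promiscuous" in (a[1], b[1]):
        return True                  # the promiscuous VLAN talks to everything
    if a[1] == b[1] == "community":
        return a[0] == b[0]          # same community only
    return False                     # isolated ports talk to no peers

promisc = (10, "promiscuous")  # secondary VLAN 10, same ID as the primary
comm11  = (11, "community")    # the conference room
iso12   = (12, "isolated")     # a hotel room

print(can_talk(iso12, promisc))  # isolated guest -> router: allowed
print(can_talk(iso12, iso12))    # two isolated ports: blocked
print(can_talk(comm11, comm11))  # within one community: allowed
```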
So now, if I'm having traffic problems at certain times of day or something along those lines, I can use that NetFlow collector to analyze all of those historical traffic patterns: figure out how much traffic is being sent to my email server, whether there's a certain time of day when a lot of traffic gets sent to a database, and the list goes on. That's really the beauty of NetFlow. It gives me a historical record of all of my traffic and allows me to use that to analyze trends and traffic patterns, which can be really useful for troubleshooting potential issues. A good example of the type of problem that NetFlow is really helpful for: let's say that everybody is telling you, "Hey, at 2:00 each day, everything gets really slow." Well, that's a tough issue to troubleshoot, because it's a challenge to figure out the root cause and the problem is intermittent. Maybe some days it happens at two; some days it happens a little earlier or a little later. It's hard to catch that problem while it's occurring and figure out what's wrong. But if you have a NetFlow collector, you can go in and figure out exactly what was happening at 2:00 and whether there was a ton of traffic being generated by something. You can really dig into that historical data and figure the problem out. So again, NetFlow is only available on the vSphere Distributed Switch, just like the next feature we're about to talk about: port mirroring. With port mirroring, what we can do is take traffic that is destined for one port and send a copy of it to another port. Why would I want to do this? Maybe I've got a sniffer on a virtual machine, and I want to forward all the traffic to that sniffer so that I can observe something that's going on in my network. That's a great use case for port mirroring.
So there are a few different session types that you can see here, and there's a nice little information button that will show you what they do. The first one is the one that's most commonly used. With Distributed Port Mirroring, you have one virtual machine connected to a port group, and you mirror all the traffic for that virtual machine's port to another distributed port. So maybe I've got one VM that's having some sort of issue. I want to monitor all of that traffic, so I set up a sniffer on another VM and create a distributed port mirroring session. Or I can do remote mirroring, where I'm taking all of the traffic from a port or from a set of distributed ports and sending it out of one of my uplinks. Or I can monitor an entire set of VLANs and send that to a set of distributed ports. Or I can do encapsulated remote mirroring, where I choose a set of distributed ports and send their traffic to the IP address of whatever I want to monitor that traffic with. So I'm going to set up a Distributed Port Mirroring session and hit Next. I'll give the session a name, and I'll choose whether I want port mirroring to be enabled or disabled. I'm going with the Distributed Port Mirroring session type, which I selected on the previous screen. Then, do I want to allow normal I/O on my destination ports? I've got a sniffer installed on some VM, and that's my destination port. So do I want to allow normal traffic to and from that virtual machine or not? I'll go ahead and choose what's allowed in this case, and I'll choose my sampling rate; if I want to cut down on the amount of data that's actually being mirrored, I can do that. And then at that point, you basically just choose a source port for the port mirroring session. So I'm going to choose this virtual machine as my source port. And what port do I actually want to mirror all of that traffic to?
I'll choose a destination port. Maybe this is my sniffer. I'll go ahead and hit OK here, and that's really all there is to it. It's really easy to set up one of these port mirroring sessions and basically just send a copy of all of the traffic for one virtual machine to another virtual machine. Another feature that's only supported by the vSphere Distributed Switch is health checking. Just keep these things in mind as you prepare for your exam: private VLANs, NetFlow, port mirroring, and health check. All of these features that we've shown you here are only available on the vSphere Distributed Switch. Same with LLDP; we looked at LLDP as well, and that's only available on the vSphere Distributed Switch. The only feature that I've shown you so far that is available on the vSphere Standard Switch is CDP. So we can see here that health check is disabled by default, and that's a good thing. Health check is not something that you necessarily just want to leave running, because it does introduce a bit of a security vulnerability. So it's a security best practice to leave health check disabled. However, if you want to validate that your distributed switch's MTU and VLAN configuration match what's set up in the physical network, or that your distributed switch's NIC teaming and failover settings match what's set up on your physical switch, this is a great way to do so. I'm just going to go ahead and enable health check here, and then I'll hit OK. Now you can see that both of these services are enabled. So here's what this is essentially going to do. Here's our distributed switch. On our distributed switch, we've configured certain settings, like the MTU on these port groups. We've configured settings like the VLAN that we want to use on these port groups. We've configured settings like NIC teaming methods and LACP. Or maybe we're using originating virtual port ID or IP hash load balancing.
Well, all of those things that we're setting up there basically have to match up with what's in the physical network. So if I create a port group and assign VLAN 10, and VLAN 10 isn't configured on my physical switch, that's not going to work. If I create a port group and select IP hash load balancing, I must configure a port channel on my physical switch; conversely, if I use route based on originating virtual port, I should not enable a port channel on my physical switch. So not only do we have to think about this from an overall distributed switch perspective, but think about how many hosts are potentially attached here and how many different physical switches those hosts may be connecting to. That's what the health check is for: to validate that the configuration I've applied to all these hosts through my distributed switch actually matches the physical network. These hosts may be connected to a bunch of different switches. Is everything configured right? Is everything matching up? Okay. So let's jump back to our distributed switch here, and I'm going to make a little change to its properties. I'm going to go ahead and enable Network I/O Control. And actually, it looks like that feature has already been turned on here. So here's where we configure our Network I/O Control options, under Resource Allocation. I'm going to choose System Traffic. If we look at the different types of traffic here, you can see we've got things like management, vMotion, iSCSI, NFS, and vSAN traffic. Let's say, for example, I wanted to configure vSAN traffic. Basically, what I'm doing here is prioritizing different types of traffic. vSAN traffic has a normal number of shares right now, but I can change it to high, or I can change it to low, or I can change it to some custom value. So let's set it to a custom value of 75. Basically, what I'm doing here is establishing a share structure, or a relative priority.
So if we look at vSAN traffic, it now has 75 shares. And, for example, virtual machine traffic is granted basically double the bandwidth of iSCSI traffic on my physical adapters; virtual machine traffic gets double the bandwidth of NFS, too. But these are shares. The important thing to bear in mind with shares is that shares are only enforced during times of contention. Under normal circumstances, these share values are not enforced. But if there's a lack of bandwidth on the physical adapters of a host, that's when the share structure is enforced. I can also configure reservations. So actually, let's go to virtual machine traffic. What I'm going to do is edit VM traffic and basically guarantee virtual machine traffic one gigabit per second. I'm setting aside a certain amount of bandwidth and reserving it for virtual machine traffic. That's a reservation, and it is in effect 100% of the time. Shares are only in effect when there's resource contention. A reservation basically says, "Hey, I'm taking these resources, and I'm granting them to this traffic type 100% of the time." And so that's what you're doing with these share values. You're basically governing: hey, there's a certain number of physical adapters, there's a certain amount of bandwidth, everybody play nice. And if there's a shortage of bandwidth, virtual machine traffic takes precedence. So we can use those shares to prioritize one type of traffic over another. We can also define network resource pools. What I can do is go in here, hit Add, and establish a network resource pool. I'm just going to call it "high," and I can go ahead and establish a reservation as well. So maybe I'm going to reserve one gigabit per second for this new network resource pool.
So now I've established a network resource pool, basically setting aside some bandwidth, and if I want to, I can go ahead and associate some of my distributed port groups with that network resource pool. If there are certain port groups that I want to grant access to this bandwidth reservation, I can do that. That's the purpose of a network resource pool. So the bottom line for Network I/O Control is that it's all about taking the physical bandwidth that we have and determining which types of traffic get priority access to that bandwidth, which types of traffic get a guaranteed amount of bandwidth via a reservation, and which types of traffic we may want to put a limit on.
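To tie shares and reservations together, here's a rough model of how bandwidth could be divided during contention: reservations come off the top all the time, and shares split whatever is left. The function, the traffic names, and the 10 Gbps capacity are illustrative assumptions (only the 1 Gbps VM reservation and the 75-share vSAN value come from the demo); ESXi's internal scheduler is more sophisticated than this:

```python
# Sketch of Network I/O Control behavior under contention.
# Figures and names are illustrative; this is not ESXi's real algorithm.

def contended_split(capacity_mbps, traffic):
    """traffic: {name: {"reservation": mbps, "shares": n}}.
    Reservations are always honored; shares divide the remainder."""
    reserved = sum(t["reservation"] for t in traffic.values())
    remaining = capacity_mbps - reserved
    total_shares = sum(t["shares"] for t in traffic.values())
    return {name: t["reservation"] + remaining * t["shares"] / total_shares
            for name, t in traffic.items()}

# A hypothetical 10 Gbps uplink, fully contended:
split = contended_split(10_000, {
    "vm":      {"reservation": 1_000, "shares": 100},  # 1 Gbps guaranteed
    "vsan":    {"reservation": 0,     "shares": 75},   # the custom value from the demo
    "vmotion": {"reservation": 0,     "shares": 50},
})
print(split)  # vm gets its 1000 Mbps plus the largest slice of the rest
```

Notice that the reservation holds even if you zero out the VM traffic's shares, while the share values only matter once the 10 Gbps is actually oversubscribed.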
8. Demo: Configure vDS Security Policies in vSphere 7
So here's a port group that I've already created. I'm just going to right-click it, go to Edit Settings, and under Edit Settings, I'm going to go to Security, where I have three options. Number one is promiscuous mode, which I can see is currently set to Reject. What promiscuous mode is going to allow me to do is sniff traffic on this network. When I enable promiscuous mode on a port group, I can connect a virtual machine to that port group, and it can detect all of the frames that are being passed on that port group. So, for example, something like a sniffer or an intrusion prevention system could be a good candidate for promiscuous mode, because it needs the ability to monitor all of the traffic on that distributed port group. But under normal circumstances, this is a really significant security risk. So I don't want to enable promiscuous mode unless I have a really valid use case to do so, and I should disable promiscuous mode when I don't need it enabled. So I'm going to leave that one set to Reject. The next two settings are MAC address changes and forged transmits, and these are very similar. With each virtual machine, there's a file called the VMX file, which is the configuration file for my virtual machine, and within that VMX file, there's a MAC address. When this virtual machine was created, I went ahead and gave it some network interfaces. Those network interfaces have a specific MAC address associated with them, and that MAC address is referenced in the VMX file, the configuration file of this virtual machine. So if the virtual switch now sees traffic coming from this VM, but coming from some other MAC address that's not in the VMX file, the virtual switch is going to assume, "Hey, there's something wrong going on here. The MAC address is being spoofed by this VM. Maybe this is a hacker. Maybe this is some sort of attack."
And so by default, it will reject traffic that doesn't match the MAC address referenced in the VMX file. Forged transmits are very similar. MAC address changes applies to inbound traffic: traffic entering my virtual machine with a MAC address that differs from the VMX file. Forged transmits is for outbound traffic. So they're basically the same setting: MAC address changes is for inbound traffic coming into the VM; forged transmits is for outbound traffic leaving the VM. When would I want to enable these, given that they're currently set to Reject here? When would I want to set them to Accept? Well, number one, I want to bear in mind that if I set them to Accept, I am less secure; I'm taking a bit of a security risk there. Let's say, for example, you have some physical server with a certain MAC address, and you have a software license that is tied to that particular MAC address. Well, if I now take that physical machine and convert it to a virtual machine, it's going to have a different MAC address. So I may want to override that. I may want to manually configure the MAC address to match what it was on the physical server so that I don't violate my licensing. And that's the scenario in which I would have to set MAC address changes and forged transmits to Accept, so that when I manually override that MAC address, it doesn't break connectivity for that virtual machine.
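Here's a tiny sketch of the accept/reject logic just described. The function and field names are made up for illustration; the real check happens inside the ESXi virtual switch:

```python
# Sketch of the MAC address changes / forged transmits policy check.
# Names are illustrative; this only mirrors the behavior described above.

def permit_frame(vmx_mac, frame_mac, direction, policy):
    """direction: 'inbound' (MAC address changes) or 'outbound' (forged transmits).
    policy values are 'accept' or 'reject', as in the port group security settings."""
    if frame_mac == vmx_mac:
        return True  # matches the MAC in the .vmx file: always fine
    key = "mac_changes" if direction == "inbound" else "forged_transmits"
    return policy[key] == "accept"

reject_all = {"mac_changes": "reject", "forged_transmits": "reject"}
accept_all = {"mac_changes": "accept", "forged_transmits": "accept"}

vmx_mac = "00:50:56:aa:bb:cc"
spoofed = "00:0c:29:11:22:33"  # hypothetical overridden MAC

print(permit_frame(vmx_mac, vmx_mac, "outbound", reject_all))  # normal traffic: allowed
print(permit_frame(vmx_mac, spoofed, "outbound", reject_all))  # spoofed, default policy: dropped
print(permit_frame(vmx_mac, spoofed, "outbound", accept_all))  # the licensing scenario: allowed
```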
9. Demo: Configure vDS NIC Teaming and Failover in vSphere 7
We've already seen the different load balancing policies and how they differ in their approaches to distributing traffic across the physical adapters on a distributed switch. So now I'm just going to show you how to configure them. Here's my port group; I'm going to right-click this port group, go to Edit Settings, and choose Teaming and Failover. Here are some of those different load balancing methods that we looked at earlier. We can route based on originating virtual port: each virtual machine has its own virtual port, and based on that virtual port, a certain physical adapter will be used for that VM's traffic. Route based on source MAC hash is very similar to that: based on the MAC address of the virtual machine, a physical adapter will be selected for all of that virtual machine's traffic. Then there's route based on IP hash. If we're going to configure this one, we have to configure a port channel on the physical switch, because each VM can use any of the physical adapters, and the traffic is distributed across those physical adapters based on source and destination IP. And here's one that applies only to the vSphere Distributed Switch: route based on physical NIC load. Let's assume that my host has four physical adapters. When one of those physical adapters becomes overwhelmed with traffic, the switch will start moving VMs to a different physical adapter. So each VM is going to use one physical adapter for all of its traffic, but if that physical adapter hits a certain threshold where it becomes overwhelmed with traffic, then the vSphere Distributed Switch will start migrating VMs off of that adapter to a different physical adapter. Again, with route based on physical NIC load, route based on source MAC hash, route based on originating virtual port, or any combination of those three options, we do not configure a port channel on the physical switch. Next is network failover detection: do we want link status or beacon probing? Again, we saw this in a prior lesson.
With link status, the distributed switch is going to monitor the physical connection. If somebody cuts the cable, if somebody unplugs the cable, or if a port fails, the distributed switch will detect that, and it will take the virtual machine's traffic and move it over to a different physical adapter. It's a very simple method of detecting failure. Whereas with beacon probing, each of these four physical adapters shown here is going to send out frames to the others, and those frames are going to pass through the physical network. This way, all four of my adapters are actually making sure that they can communicate with each other over that physical network. So it's a more in-depth way to monitor the actual connectivity. Now that being said, my example here really isn't that great, because beacon probing works best with an odd number of physical adapters. Four physical adapters wouldn't really be a great fit for beacon probing; three would be better. Next, here's what Notify Switches means: maybe we've got a virtual machine that is consistently using uplink 1, but some sort of failure occurs, and now that virtual machine's traffic is going to be migrated to uplink 2. Well, in that case, it makes sense for our distributed switch to send a notification of that failover event to the physical switch so that the physical switch can update its MAC table, which can help improve performance. When a failure occurs, it helps that failover happen more quickly. So in most cases, you want to leave Notify Switches enabled. And then finally, failback. Let's say that a physical adapter fails and is then restored; somebody unplugs a cable from one of the vmnics on our ESXi host and then plugs that cable back in. Should that physical adapter be returned to duty or not? By default, yes, it will be returned to duty when it comes back up.
But if we have some sort of scenario where a physical adapter keeps coming up and going down, that's something we refer to as flapping. If a physical adapter is failing over and over again, we may not want to enable failback; we may want to set this to "no." And then finally, here's another important aspect of managing this port group: the failover order. Maybe on this particular port group, I want all of my traffic to flow out of uplinks 1 and 2. Let's say, for example, this port group is for virtual machines owned by Human Resources. They have to be isolated on their own physical network, and their physical network is uplinks 1 and 2. Uplinks 3 and 4 connect to another physical network that could potentially work but should not be used under normal circumstances. So that's our scenario: under normal circumstances, I want this port group to use uplink 1 and uplink 2. However, I want to allow it to use uplinks 3 and 4 if those two physical adapters fail. So what I can do is take uplinks 3 and 4 and move them into standby uplinks. That means that under normal circumstances, all traffic will flow over uplinks 1 and 2, but uplinks 3 and 4 are sitting there in the background just in case something goes wrong. Or if there's an adapter that should never be used for this particular port group, I can move it to unused uplinks. So those are the NIC teaming and failover settings for my port group. If you want a little more information, you can always hover over these little information icons. And in the next video, we'll take a look at traffic shaping.
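The active/standby/unused behavior just described can be sketched as a small selection function. This is a conceptual model under my own assumptions, not the distributed switch's code: active uplinks carry traffic while any of them is healthy, standby uplinks are promoted only when every active uplink is down, and unused uplinks are never considered.

```python
# Minimal sketch of a port group's failover order. Illustrative only.

def pick_uplinks(active, standby, link_up):
    """Return the uplinks currently eligible to carry this port group's
    traffic, given a link-state map like {"uplink1": True, ...}.
    Unused uplinks are simply never passed in."""
    healthy_active = [u for u in active if link_up.get(u)]
    if healthy_active:
        return healthy_active
    # every active uplink is down: promote the healthy standby uplinks
    return [u for u in standby if link_up.get(u)]

# The HR scenario: uplinks 1-2 active, uplinks 3-4 standby.
state = {"uplink1": True, "uplink2": True, "uplink3": True, "uplink4": True}
print(pick_uplinks(["uplink1", "uplink2"], ["uplink3", "uplink4"], state))

# Both active adapters fail; standby takes over.
state["uplink1"] = state["uplink2"] = False
print(pick_uplinks(["uplink1", "uplink2"], ["uplink3", "uplink4"], state))
```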
10. Demo: Configure Traffic Shaping in vSphere 7
So as you can see here, I'm in the vSphere Client, and I've created a port group on my vSphere Distributed Switch. I'm just going to right-click that and go to Edit Settings, and under Edit Settings I'll choose Traffic Shaping. At the moment, ingress and egress traffic shaping are both disabled. Now, first off, I just want to mention that with a distributed switch, you have the ability to do both ingress and egress traffic shaping. With a standard switch, you can only do egress traffic shaping. So what does this mean? Well, let's go ahead and enable both ingress and egress traffic shaping, and let's talk a little bit more about this demo port group. Let's assume that this particular port group is used by a group of developers who have virtual machines connected to it. And one of the things that they're developing is a group of file transfer servers that can consume a whole lot of bandwidth. Well, this could be one of many port groups present on a single physical ESXi host, and that physical host only has a certain amount of bandwidth to go around. So if one port group is generating a ton of traffic, that could potentially impact the performance of VMs on other port groups, and it could also impact the ability of my VMkernel ports to pass as much traffic as they need to. That's what traffic shaping is here to prevent. Basically, what I can do is say that on this particular port group, my average bandwidth is 50 megabits per second, or as it's written here, 50,000 Kbit/sec. That's the average bandwidth. It means that each virtual machine on this port group is going to be granted an average of 50 megabits per second for ingress, and I'll set it up for egress as well. So I'm controlling the average amount of bandwidth that each virtual machine can generate.
And then what I'm also saying is that there's a peak. So maybe under normal circumstances, each VM should only be able to consume around 50 megabits per second. But there are certain times when these file transfer servers need to send or receive a lot of data, and during those times they can get a little more: they can go up to 100 megabits per second when they really need it. So this gives me flexibility for those sorts of bursty workloads, where at times there's a lot of traffic but at other times there's very little. The peak bandwidth gives the VMs connected to this port group the ability to burst, and the burst size is how much data they can burst: 102,400 KB, or roughly 100 megabytes of data. So what that means is that VMs in this port group can transmit or receive at 100 megabits per second, but only until they exhaust the data in what we call a burst bucket, until they exhaust this burst size of 102,400 KB. Once they've sent that much data at a rate higher than their average bandwidth, they are forced back down to that average bandwidth, that 50 megabits per second mark. And that's where they'll remain; they'll be limited to 50 megabits per second until the VM actually drops below 50 megabits per second. So let's say that now this big file transfer is done, and the virtual machine is only consuming ten megabits per second. Well, this is sort of like a kid saving up their allowance. While this VM is only transmitting or receiving ten megabits per second, it's well below its allocated average bandwidth, so it's saving up burst size during those times of low usage, and eventually it'll build the burst size back up to 102,400 KB. Then, when it needs to, it can burst again, which will consume the burst size, and eventually it'll have to save up and rebuild that burst size again. So it's very similar to somebody saving their money.
You save up your money, and you spend it all. Now you're back to receiving your $5 weekly allowance until you can save up again by spending less and rebuilding those savings. That's comparable to bursting with traffic shaping. So the overall end game here with traffic shaping is limitation and predictability. No matter what people do with these virtual machines, I know that over the course of time, each VM is going to average out to 50 megabits per second of bandwidth or less. I know they're not going to consume more than that on an ongoing basis. So now I have some predictability as to how much traffic they can potentially generate and how that's going to impact other VMs that are sharing the physical uplinks of this ESXi host.
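The allowance analogy above maps naturally onto a token-bucket model. Here is a toy simulation of the burst bucket using the demo's numbers (50,000 Kbps average, 100,000 Kbps peak, 102,400 KB burst size). This is my own simplified model, not VMware's internals; in particular, it treats the bucket and the rates in the same kilo-unit for readability, where the real policy counts the bucket in kilobytes and the rates in kilobits.

```python
# Toy burst-bucket model of vSphere traffic shaping. Illustrative only;
# for simplicity the bucket and the rates share one unit.

AVG = 50_000     # average bandwidth (Kbps, per the demo)
PEAK = 100_000   # peak bandwidth (Kbps)
BURST = 102_400  # burst size: the bucket's capacity

def step(bucket, demand, seconds=1.0):
    """Advance one interval; return (allowed_rate, new_bucket_level)."""
    if demand <= AVG:
        # Under-using: "save up the allowance" by refilling the bucket
        # with the unused portion of the average rate.
        bucket = min(BURST, bucket + (AVG - demand) * seconds)
        return demand, bucket
    # Bursting above average: spend the bucket, capped at peak rate.
    allowed = min(demand, PEAK)
    spend = (allowed - AVG) * seconds
    if spend > bucket:
        # Bucket exhausted: forced back down to the average bandwidth.
        return AVG, bucket
    return allowed, bucket - spend

# Full bucket, a VM bursts at peak: it gets 100,000 Kbps and the
# bucket drains by the 50,000 it spent above its average.
print(step(102_400, 100_000))
```

Run a few steps in a row and you see the story from the lesson: the burst drains the bucket, the VM is clamped back to 50,000 Kbps once it is empty, and quiet periods below the average slowly refill it.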