Pass VMware 3V0-752 Exam in First Attempt Easily
Latest VMware 3V0-752 Practice Test Questions, Exam Dumps
Accurate & Verified Answers As Experienced in the Actual Test!
Check Our Last Week's Results!
- Premium File: 83 Questions & Answers (Last Update: Dec 19, 2024)
- Training Course: 29 Lectures
Download Free VMware 3V0-752 Exam Dumps, Practice Test
File Name | Size | Downloads
---|---|---
vmware | 161.6 KB | 1193
vmware | 161.6 KB | 1370
vmware | 216.8 KB | 1602
Free VCE files for VMware 3V0-752 certification practice test questions and answers, exam dumps are uploaded by real users who have taken the exam recently. Download the latest 3V0-752 VMware Certified Advanced Professional 7 - Desktop and Mobility Design certification exam practice test questions and answers and sign up for free on Exam-Labs.
VMware 3V0-752 Practice Test Questions, VMware 3V0-752 Exam dumps
Introduction
1. Welcome
In this course, we will cover DRS configuration and how to customize it to fit your environment. I'll show you the theory behind how DRS works and some of the advanced configurations we can use to better fit our virtual environment. Then I'll set up DRS clusters from scratch, which we will configure with more advanced options as we progress through the course. Finally, I'll discuss ways to monitor and troubleshoot a DRS cluster to keep things running smoothly. So let's get started.
2. What you should know
Before you get started with this course, there are a few things you need to know. I'll be working with an ESXi/vSphere environment running 6.0. I'll include information on other versions where possible, but we will mostly stay in the 6.0 realm. Before taking this course, you should have a basic understanding of the vSphere Web Client, vCenter Server, ESXi configuration, and virtual machines. If you would like to follow along, you will need at least two hosts running ESXi, a shared storage solution, a vCenter Server, and a good chunk of time.
DRS Overview
1. DRS basics
The basics of any technology should be mastered before diving into the advanced details. So let's do a quick review of the basics of a DRS cluster. DRS is really a resource manager for a cluster. It controls the placement of virtual machines within the cluster to ensure that the cluster resources are shared as equally as possible among the hosts in the cluster.

Before we can begin with the features of DRS, we should discuss the foundation itself: the cluster. The cluster is a group of hosts that we have combined administratively for the purpose of pooling their resources together. Even though we combine or amalgamate these resources, the total amount of resources available to a virtual machine cannot exceed the size of the host on which it resides. Last but definitely not least, our clusters should be provisioned with hosts that are built as similarly as possible. This not only helps DRS, but other cluster technologies as well. Remember, in a DRS configuration, depending on our automation level, VMs are fluid and will move through the cluster hosts as necessary. The more similar the hosts are in the cluster, the less of a problem we are going to have with vMotion. Please keep in mind that CPUs must be from the same vendor family or live vMotions will fail. So we will have processors from the same vendor within the same cluster, and in an ideal world, we would want those processors to even be the same model.

When we deploy a cluster in our ESXi environment, DRS isn't the only option available to us. There are several technologies that we can put into play. High Availability, or HA, is a technology that responds to host failure within the cluster by restarting all the VMs located on that failed host on other active hosts in the cluster. Resource pools allow us granular control of the allocation of resources to VMs; we can even prioritise certain VMs within the cluster to receive more resources. DRS is a load-balancing technology and the focus of our course. vSAN, or Virtual SAN, amalgamates the hosts' local disks into a single shared datastore. EVC, or Enhanced vMotion Compatibility, brings processor features down to the lowest common denominator, making live vMotions, or powered-on VM migrations, compatible across hosts.

So what is the role of DRS within a cluster? As we already know, it is a resource management technology. It balances resource consumption across hosts in the cluster. It does this by moving or placing VMs on less congested hosts, and this can be done for both compute and storage resources, for the initial placement of a VM or for a powered-on migration of a VM to other less-used hosts in the cluster. DRS, when paired with Distributed Power Management, can also be used to make the cluster more power efficient. DPM is a major feature within a DRS cluster: it turns off any unneeded hosts within the cluster to save on power consumption. If, at a later time, the cluster needs that host, DPM will power the host back on and rebalance the VMs using live migration. Both DRS and HA also have the ability to apply rules that determine how VMs are placed within a cluster.
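To make the cluster concept concrete, here is a minimal pyVmomi (Python) sketch that creates a cluster with DRS enabled through the public vSphere API. This is an illustration rather than the course's lab script: the vCenter address, credentials, and names are placeholders for whatever exists in your environment.

```python
# A minimal sketch: connect to vCenter and create a DRS-enabled cluster.
# Hostname, credentials, and object names are placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",  # UPN-format login
                  pwd="password",
                  disableSslCertValidation=True)  # newer pyVmomi; older versions pass sslContext
content = si.RetrieveContent()

# Assume the first child of the root folder is our datacenter.
datacenter = content.rootFolder.childEntity[0]

spec = vim.cluster.ConfigSpecEx(
    drsConfig=vim.cluster.DrsConfigInfo(
        enabled=True,
        defaultVmBehavior="fullyAutomated",  # manual / partiallyAutomated / fullyAutomated
    )
)
cluster = datacenter.hostFolder.CreateClusterEx(name="DRS-Cluster", spec=spec)
print("Created cluster:", cluster.name)
Disconnect(si)
```

Hosts would then be added to the new cluster before DRS has anything to balance; the later lectures configure these same settings through the Web Client instead.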
2. DRS cluster requirements
Now, before we start checking boxes and implementing features, we need to understand the basic minimums that are required to even run DRS. DRS requires a shared storage solution. Shared storage allows more than one host to connect to the same set of disks. Our shared storage solution is usually a SAN, or sometimes NAS, although NAS is rarely used in an enterprise shared storage environment. The performance of the shared storage devices is incredibly important to a cluster. Make sure that any latency issues or any other problems in the storage area network have been minimised prior to a DRS implementation.

DRS also requires the disks of a VM to reside on a VMFS volume, or datastore. These datastores must be reachable from the original host and from any host the virtual machine may migrate to. This makes a lot of sense: if a host can't communicate with a datastore, it can't access the files of a virtual machine being moved there. These virtual machine files are the VMDK files that you see on your datastores. They help make up a virtual machine, so if we are unable to access or move them, we are unable to access or move the virtual machine. We should, of course, ensure that the VMFS volumes we're using are large enough to store all the virtual disks within our cluster. Basically, make sure you have enough space for everything you're going to contain within a cluster. Also, make sure the names of the different datastores in your cluster are easily identifiable.

On an important note here, swap file locations need to be accessible by the host that holds the VMs and by the hosts that will be receiving the VMs. This is the same scenario we talked about with the VMDK files, but it only applies if you're running a version older than 3.5. From version 3.5 onward, swap file locations can be on local disks and it's not a big issue; on versions below 3.5, you have to follow the same rules you are following for your VMDK files.

DRS utilises live vMotions, or the moving of powered-on VMs to different hosts in the cluster. Because of this, compatibility is incredibly important. One resource we really have to pay attention to is the processors of the hosts themselves. As we said in an earlier discussion, processors should be as similar as possible, especially so in an automated DRS cluster. When we talk about similarity, we start with a minimum of having the same vendor or class of processor, basically an Intel or AMD solution throughout the entire cluster. What we really strive for, though, is to have the cluster use the exact same processor. If that is not a possibility, and I know for some of you out there it isn't, then we can use another VMware technology known as EVC, or Enhanced vMotion Compatibility. This allows us to boil the features offered by different models of processors down to the lowest common denominator. By ensuring all hosts present the same feature set, an active virtual machine can move from one host to another without changing the way it interacts with its processor, which would otherwise cause vMotion to fail.
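Because processor similarity matters so much here, it can help to inventory what each host actually reports before building the cluster. The sketch below reuses a connection `si` created as in the earlier example and simply prints each host's CPU model string so you can eyeball cluster homogeneity.

```python
# Survey CPU homogeneity across hosts; "si" is an existing connection
# created as in the previous sketch.
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    hw = host.summary.hardware
    # The model string includes the vendor, e.g. "Intel(R) Xeon(R) ...".
    print(f"{host.name}: {hw.cpuModel} ({hw.numCpuPkgs} socket(s))")
view.DestroyView()
```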
I want you to think of it this way: having different feature sets on different hosts and moving a virtual machine from one to the other would be the same as trying to hot swap a CPU in the real world. It's just a bad idea; it's not going to work. In that same vein, we have compatibility masks that are set at the VM level. Prior to a move, they are compared to the feature sets available on the host to which the VM intends to move. We can hide some of these CPU features at the VM level, and this will help with compatibility during vMotion attempts. If certain CPU features have to be enabled for a VM, make sure those features are available throughout the cluster.

Now, since DRS uses vMotion for some of its features, we need to define some of the requirements for vMotion. First, we should ensure that vMotion has been enabled on a VMkernel port group. The addressing within this network should be on the same subnet for all the hosts in the cluster, but uniquely addressed within that subnet. vMotion requires gigabit connectivity. Raw device mappings, CPU affinity for VMs, and Microsoft Cluster Services for applications are all not supported with vMotion.

Last but not least in our requirements is DRS licensing. DRS requires Enterprise or Enterprise Plus; a Standard licence is not acceptable. This licensing is done by the number of CPU sockets, although licence bundling is available to help you out with costs. It is also required that you have a licence for vCenter Server for management purposes.
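The two compatibility checks discussed here, EVC at the cluster level and vMotion-enabled VMkernel NICs at the host level, are both visible through the API. A hedged sketch, again reusing an existing connection `si`:

```python
# Report each cluster's EVC baseline and each host's vMotion-enabled
# VMkernel NICs; "si" is an existing connection as in the earlier sketches.
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
for cluster in view.view:
    evc = cluster.summary.currentEVCModeKey or "disabled"
    print(f"Cluster {cluster.name}: EVC mode = {evc}")
    for host in cluster.host:
        cfg = host.configManager.virtualNicManager.QueryNetConfig("vmotion")
        selected = list(cfg.selectedVnic or []) if cfg else []
        print(f"  {host.name}: vMotion VMkernel NICs selected = {len(selected)}")
view.DestroyView()
```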
3. Lab introduction
So before we get started, I want to give you a small tour of the lab we're working in. First, we will be logging in using the IP address of our vCenter Server. At the top of the browser, you can see I've already entered the address for my environment. In this course, we will focus on using the vSphere Web Client, since the desktop client has been diminishing in functionality with each successive version. Now let's go ahead and log in. Please note that beginning with version 5, and in later versions such as 6, you are required to use the UPN format, or User Principal Name. This is the username followed by the @ symbol and then the domain. It doesn't have to be an Active Directory domain; it can just be the domain you set up during the initial configuration of the vCenter Server.

Now, as you can see, we're at the home screen. I'm going to take a look at our hosts and clusters, so I'll click on that. Here we see a list of inventory items on the left. First we are going to focus on the infrastructure. These are my true physical servers. On them, you see a few VMs that make up our shared storage and even the virtual machines that will act as our DRS cluster hosts. This is what is known as a nested environment: I have hosts that are actually virtual machines on other hosts. Although this is great for a lab environment, it is not recommended in a production environment at all.

If you take a look at the left-hand side, at VM host one and VM host two, I've actually re-added those into the datacentre as hosts 10.0.0.211 and 10.0.0.212. Those virtual machines have ESXi installed on them, and I joined them to the vCenter Server, which brought them back in as 10.0.0.211 and 10.0.0.212. You can also see I have a lab VCSA virtual machine that is my vCenter Server; I'm running the vCenter Server Appliance, and obviously that's used for management. And the fourth virtual machine, V Storage, is actually my shared storage environment. So all of this is nested, and we're going to use it as the infrastructure for our DRS cluster. We will continue to configure this cluster as we progress through the course. We're actually going to create a DRS cluster and use the hosts at the bottom as the hosts for our DRS cluster.
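If you prefer to explore the same inventory from a script rather than the Web Client, the sketch below logs in with the same UPN-format username and lists the hosts and VMs that vCenter knows about. The address and credentials are placeholders, not the lab's actual values.

```python
# Log in with a UPN-format username and walk the inventory; the address
# and credentials are placeholders for your own environment.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="192.168.1.50",                 # your vCenter address
                  user="administrator@vsphere.local",  # UPN format: user@domain
                  pwd="password",
                  disableSslCertValidation=True)
content = si.RetrieveContent()
for kind in (vim.HostSystem, vim.VirtualMachine):
    view = content.viewManager.CreateContainerView(content.rootFolder, [kind], True)
    for item in view.view:
        print(kind.__name__, "->", item.name)
    view.DestroyView()
Disconnect(si)
```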
Inner Workings of DRS
1. Initial placement and admission control
When a DRS cluster is enabled, any power-on attempt triggers a recommendation by DRS as to where the virtual machine or machines should be placed within the cluster. Now, before we go further, I want to clear something up about admission control. There are three types of admission control: host admission control, resource pool admission control, and HA admission control. They are different. DRS performs a host admission control check when a VM is powered on. This is to ensure that there are enough resources available to provide for the virtual machine. This check is performed by our vCenter Server. If there are not enough resources to power on a virtual machine, a failure notification is given and the VM is not powered on.

DRS recommendations are configuration based. We will have different and sometimes varying recommendations based upon how we configure DRS for initial placement. We have two important options to consider. First, when automated, DRS will place the VM within the cluster without any manual action. Basically, this is a hands-free VM deployment to the host that DRS deems would lead to the most balanced resource utilization. A manual implementation will recommend a VM's placement but require you as an administrator to step in to accept or override DRS's choice. When powering on a VM outside of the DRS cluster, no recommendations are given, even when that VM is powered on at the same time as other VMs that are in the DRS cluster.

So let's talk about powering on a single VM. When we power on a single VM, we are going to receive our placement recommendation. Or, if the cluster is automated, DRS will just place it; but if we're in manual mode, we'll receive the recommendation, which tells us which host the VM should be placed on. Now, that's if everything goes right. If there is something that needs to be done first, your cluster will inform you by giving you a prerequisite list. This is a list of actions that DRS thinks are necessary in order to power on the virtual machine. It could include things like waking up a host or migrating virtual machines from one host to another. There will often be multiple lines of actions required, and you will be forced to accept all those prerequisite actions or cancel powering on the virtual machine.

Things become a little more complicated when we attempt to power on multiple virtual machines at the same time. They don't even all have to be in the same cluster to power on together; just the same datacenter will do. We could even power on VMs outside of the DRS cluster along with VMs that are within the DRS cluster, but we need to note that when we do this, recommendations will only be issued for the VMs that are participating in the DRS cluster. When we power on multiple VMs in two separate DRS clusters at the same time, notifications such as power-ons and failures will be issued per cluster. Any DRS clusters that are not automated will request administrator acceptance of the placement recommendations for all of the VMs they contain, and a single recommendation will be made per cluster. Non-clustered VMs that are included will have their power-on results listed in the Starting or Failed VM Power On tab.
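The multi-VM power-on behaviour described above is exposed in the API as a single datacenter-level call. A sketch, again assuming an existing connection `si` and a hypothetical datacenter name, that powers on a batch of VMs and reports which attempts the clusters accepted:

```python
# Power on several VMs at once at the datacenter level and inspect the
# per-VM results; "si" is an existing connection, "Lab-DC" is hypothetical.
from pyVim.task import WaitForTask
from pyVmomi import vim

content = si.RetrieveContent()
datacenter = next(e for e in content.rootFolder.childEntity
                  if isinstance(e, vim.Datacenter) and e.name == "Lab-DC")
vms = [e for e in datacenter.vmFolder.childEntity
       if isinstance(e, vim.VirtualMachine)]

# PowerOnMultiVM_Task runs the admission-control checks and, on DRS
# clusters, handles placement per cluster.
task = datacenter.PowerOnMultiVM_Task(vm=vms)
WaitForTask(task)
result = task.info.result  # a ClusterPowerOnVmResult
for info in result.attempted or []:
    print("Power-on attempted:", info.vm.name)
for info in result.notAttempted or []:
    print("Not attempted:", info.vm.name, "fault:", type(info.fault).__name__)
```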
2. Migrate virtual machines
DRS utilises vMotion to migrate VMs from one host to another. This allows DRS, if configured to do so, to migrate live VMs between hosts to balance resource consumption across all hosts in the cluster. There are three options when choosing the automation level of DRS. This determines how much action you want DRS to take without any intervention. Manual doesn't allow DRS to do anything without your say-so; everything will require administrative action. Partially automated is a little different: we allow DRS to place VMs on recommended hosts, but only during the initial power-on of a virtual machine. Full automation allows DRS to place VMs wherever it believes would be best to balance the cluster's resource consumption. How much deviation is tolerated is wholly dependent upon the migration threshold you configure.

When DRS determines that there is an unbalanced cluster, it will make migration recommendations, or actually move the virtual machines if you're fully automated. In a manual or partially automated DRS configuration, DRS can make recommendations only; it may not move VMs within the cluster without the approval of the administrator. Remember, partially automated systems can place VMs, but only during the initial power-on; once they're running, moving them requires administrative action. Notification that there are recommended migrations will appear on the Summary tab for DRS. The actual recommendations, though, are stored on the DRS Recommendations page. So please check the Summary tab often.

When a cluster is fully automated, well, it's just that: automated. It's hands off. DRS will automatically move VMs to less-utilised hosts in the cluster. As an administrator, this should scare you a little bit. You don't lose full control, though: you are allowed to manually migrate VMs in your environment, but don't be surprised if DRS moves them back. By default, the setting you choose, manual, partially automated, or fully automated, applies to the entire cluster, but you can set individual VMs to override this cluster configuration so they can act in a custom manner.

Migration thresholds are basically a way of setting a certain level of acceptability for an unbalanced cluster. You are deciding when DRS should jump in and fix things; you did give it full authority to do so, after all. The scale we're going to use is one to five. One is the most conservative: DRS only jumps in when rules are being broken or when intervention is mandatory. A cluster could be wildly unbalanced in this case without any intervention, as long as the hosts themselves are following the rules and there aren't any faults. Priority five will attempt to rectify almost any imbalance it finds. In some cases, because of how aggressive this setting is and the volatility of virtual machines in production, you could potentially see a continuous round robin of virtual machines as DRS tries in vain to balance the cluster load. VMware's default value is three. This is really pretty good, and I would recommend keeping it there unless you have a reason to move it. These scans for load imbalance are held every five minutes, and the average of the resources consumed is used and compared to the threshold value set.
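Both the automation level and the migration threshold live in the same DRS config object, so reconfiguring an existing cluster is a one-call change. A sketch under the same assumptions as before (an existing connection `si`, a hypothetical cluster named "DRS-Cluster"); note that you should confirm how the API's 1-5 `vmotionRate` value maps onto the conservative-to-aggressive slider in your vSphere version:

```python
# Reconfigure an existing cluster's DRS automation level and migration
# threshold; "si" is an existing connection, "DRS-Cluster" is hypothetical.
from pyVim.task import WaitForTask
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "DRS-Cluster")
view.DestroyView()

spec = vim.cluster.ConfigSpecEx(
    drsConfig=vim.cluster.DrsConfigInfo(
        enabled=True,
        defaultVmBehavior="partiallyAutomated",  # or "manual" / "fullyAutomated"
        vmotionRate=3,  # migration threshold, 1-5; 3 is VMware's default
    )
)
WaitForTask(cluster.ReconfigureComputeResource_Task(spec, modify=True))
print("DRS reconfigured on", cluster.name)
```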
3. Migration thresholds
Since migration thresholds are the magic that makes DRS work, let's take a moment to jump down the rabbit hole. First, let's go through the math to get some of these values in a more concrete manner. The first thing DRS does is figure out the load on each host using the formula: host load = sum of the expected VM loads / capacity of the host. Next, DRS needs to figure out the standard deviation in the cluster. This is fairly straightforward, since we already know the load of each host in the cluster: we take those values and find the variation, or dispersion, that exists among the hosts, and this gives us our standard deviation. Once that's completed, we inject both of those values into the formula at the bottom: priority level = 6 - ceil(load imbalance metric / (0.1 × √(number of hosts in the cluster))), where the load imbalance metric is the standard deviation we got earlier. Now, I know you feel better about how DRS is calculated, but in all honesty, you will probably never be asked about this. At least now you know how the values are constructed. These values are directly compared to the priority levels that we set, from 1 through 5, to determine whether a vMotion is required to balance the cluster resources.

So, here is a list of reasons why DRS might recommend or even migrate a VM. The first two: CPU and memory averages over the five-minute period violate the threshold levels we set. Of course, level one is excluded in these two cases, since it is only used in cases where intervention is mandatory or rules are being violated. The bottom three, even at a priority level of one, would in fact cause DRS to react or give us a recommendation, because these three involve rule violations, such as broken affinity rules.

Now, let's take a more advanced look at the CPU and memory calculations. If you have VMs that burst in CPU utilisation, you could see some issues in your DRS cluster, because averages are what we use. You can use the AggressiveCPUActive advanced setting. This uses the second-highest CPU utilisation captured in the five-minute period instead of the average CPU utilisation. Remember, only use it on VMs that burst CPU resources, and only if you find it necessary.

There are also differences in how idle memory and active memory are used. When DRS does its sampling of memory utilisation, it takes into account active memory plus 25% of idle memory to determine the host utilisation. Idle memory is the difference between consumed memory and active memory; consumed memory is the largest amount of memory the VM has ever used. If you wish to have more control over how this calculation is made, you can use an advanced setting: PercentIdleMBInMemDemand = x, where x is the percentage of idle memory to be used. Setting more idle memory to be used in this calculation can more accurately reflect use on a VM that surges in memory usage over short amounts of time. Again, only look at this option if you are unhappy with DRS memory balancing within the cluster. Both of these advanced settings are only available in version 5 and later.

Now, I'm sure some of you are asking: why wouldn't we just adjust the migration threshold here? Well, that would make DRS more sensitive, for sure, but in some cases it is the way we are measuring things that is the problem. In the case of volatile VMs that quickly spike and drop resource usage, the five-minute average hides the burst. That is the problem these advanced settings address: they change the way the metrics are calculated to be more sensitive to those bursts in resources.
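Since the lecture only describes the formula verbally, here is a small self-contained Python sketch of the arithmetic as quoted above: per-host load, the standard deviation across hosts, and the priority-level formula. The host load figures are made-up illustration values, and the exact grouping of the formula varies between write-ups, so treat this as a reading of the lecture's wording rather than VMware's authoritative implementation.

```python
# Worked example of the lecture's numbers: host load, the cluster's load
# standard deviation, and priority = 6 - ceil(imbalance / (0.1 * sqrt(N))).
# Load figures are made-up illustration values, not measurements.
import math
import statistics

# host load = sum of expected VM loads on the host / host capacity
host_loads = {"esxi-01": 0.42, "esxi-02": 0.18}  # hypothetical hosts

imbalance = statistics.pstdev(host_loads.values())  # load imbalance metric
target = 0.1 * math.sqrt(len(host_loads))

priority = 6 - math.ceil(imbalance / target)
priority = max(1, min(5, priority))  # priorities only exist on the 1-5 scale

print(f"imbalance={imbalance:.3f} target={target:.3f} priority={priority}")
# With these numbers: imbalance ~0.120, target ~0.141, priority 5 -- only
# the most aggressive migration threshold would act on this imbalance.
```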
VMware 3V0-752 Exam Dumps, VMware 3V0-752 Practice Test Questions and Answers
Do you have questions about our 3V0-752 VMware Certified Advanced Professional 7 - Desktop and Mobility Design practice test questions and answers or any of our products? If you are not clear about our VMware 3V0-752 exam practice test questions, you can read the FAQ below.
Purchase VMware 3V0-752 Exam Training Products Individually