Pass Nutanix NCP Exam in First Attempt Easily
Latest Nutanix NCP Practice Test Questions, Exam Dumps
Accurate & Verified Answers As Experienced in the Actual Test!
- Premium File: 172 Questions & Answers (Last Update: Dec 22, 2024)
- Training Course: 12 Lectures
Download Free Nutanix NCP Exam Dumps, Practice Test
File Name | Size | Downloads
---|---|---
nutanix | 354.5 KB | 1291
nutanix | 354.5 KB | 1380
nutanix | 383.5 KB | 1517
nutanix | 129.2 KB | 1592
nutanix | 107.4 KB | 1669
Free VCE files for the Nutanix NCP certification practice test questions, answers, and exam dumps are uploaded by real users who have taken the exam recently. Download the latest NCP Nutanix Certified Professional 5.10 certification exam practice test questions and answers and sign up for free on Exam-Labs.
Nutanix NCP Practice Test Questions, Nutanix NCP Exam dumps
Introduction
1. Introduction to Nutanix Cloud Platform
An enterprise cloud operating system. We are primarily looking at three areas. The first area is hyper-converged infrastructure, in which my compute and storage are integrated; I should also get VM-centric storage so that I can provision my virtual machines, along with backup and disaster recovery capabilities so that I can instantly protect my virtual machines and replicate the VM data to a disaster recovery site. The second area an enterprise cloud operating system requires is HCI (hyper-converged infrastructure) with native virtualization. So the solution we are going to consider for our cloud should have native virtualization capabilities, and it should also have multi-hypervisor support, so that depending on the client's choice we can provide the hypervisor they want. It should also support hybrid cloud services, so that I can configure the platform with access to both public and private clouds while maintaining security between these layers. It should also have machine intelligence capabilities, system and operations management capabilities, and file and block storage capabilities. When I say file and block storage capabilities, I am considering having file services on the Nutanix cluster as well as block storage in terms of iSCSI, so if I need to provision block storage for a customer or for a virtual machine, I should be able to do that as well. The third area of the enterprise cloud is integration with a front-end dashboard, a billing system, or the application side. In this area, we will consider container support and cloud automation in order to provide self-service capabilities and automate activities through the REST API. We will also look at application mobility, where virtual machines should be able to fail over smoothly between hosts as well as between DR sites. We will look at integrating the public cloud as a repository, putting data from Nutanix in the public cloud for compliance or long-term retention. It should also provide micro-segmentation so that we can have security policies in place; Nutanix offers application-level security policies where we can create security rules for accessing these applications. We should also have an application ecosystem where we are able to accommodate new applications and new technologies and provide full redundancy for the applications. And finally, it should be able to provide object storage capability, so that if any customers are looking to create or use Nutanix as object storage, that can also be done on Nutanix. So in summary, as an introduction, you can see the Nutanix platform offers all the different capabilities of a cloud operating system. Let us look at the technical overview of how Nutanix works and try to understand the fundamental components. Here in this slide, I have three x86 servers, which I consider commodity servers. I will have a hypervisor running on each x86 server, and each hypervisor will have a CVM running inside it, known as the Nutanix Controller VM. This Nutanix Controller VM runs on each node in the cluster and is responsible for establishing the Nutanix cluster's communication with the other nodes.
Also, every server will have its own local storage in terms of flash and HDD, so every node has two storage tiers: one is flash, and the other is HDD. Now, all these x86 servers are connected to each other as a cluster, and the CVM running on each node acts as a cluster service and communicates with every node in the cluster to perform any activity. The first thing I will do is deploy some virtual machines for my application, represented by these grey boxes. So these VMs you see are the production VMs running on this node, and the CVM is the Nutanix component running on every node that keeps the cluster configuration in sync. So each CVM works as a cluster service, and the CVMs synchronize their configuration with each other. Now, if a VM fails on a particular node, the CVMs will take the appropriate action to fail the VM over to another node. Since all the CVMs work together, they are able to synchronize their configuration and take the respective failover decisions. And the underlying storage, the flash and the HDD on each node, will be used as a distributed storage fabric. All the CVMs are able to see the physical disks of all the x86 servers across the cluster, and they will use these locally connected disks as a distributed storage fabric. Let me go over this again. Each x86 server has its own storage, including flash storage and HDD storage. Now, consider the step-one scenario: let us say this is the first x86 server that I am deploying for Nutanix. What are the steps? The first thing I will do is install the Acropolis operating system, known as AOS. When I install the Acropolis operating system, it installs two components. The first component is the Acropolis Hypervisor (AHV), which is a native hypervisor, and inside this Acropolis hypervisor it will also create a CVM. So this Acropolis operating system has the hypervisor, which is a native hypervisor, and it has installed a CVM. The CVM is the Controller VM, which controls all the storage activities and also controls your hard disks and your SSD storage. It is able to see and detect your SSD disks, which are the flash tier, and your HDDs as well. So the CVM will see the HDD and the SSD, use the SSD as tier zero, and use the HDD as tier one. It knows that on this particular x86 server you have two types of disks, which are considered storage tiers. So when I install the Acropolis operating system, the hypervisor is installed and the CVM is created on the first node. On the second node, I am going to do the same exercise: I install AOS on the system, and along with the AOS it installs the hypervisor and the CVM. When I install the second node, again the CVM will detect your SSDs and HDDs. So primarily, your CVM is acting as a storage controller; it takes ownership of the disks. Let me show you one more slide. If you look at this slide, when the Acropolis operating system is installed, it installs the hypervisor, and it also installs the CVM.
So the CVM actually takes ownership of your storage I/O, and it communicates with the SCSI controller in your server and sees your SSD disks and HDDs. Whenever a guest VM wants to perform any I/O, the guest sends the I/O to the hypervisor, and the hypervisor sends it to the CVM. So the CVM acts as a storage controller and performs all the storage activities. Now, the hypervisor that is installed is the default Acropolis hypervisor, but I can reimage and change the hypervisor to ESXi or Hyper-V as well. Even if I change the hypervisor to ESXi, the ESXi server will still talk to the CVM as a storage controller; the ESXi server sees the CVM as a datastore provider. Yes, if the CVM goes down, the impact will be much like a storage failure: the VMs will not be able to do read and write operations. Primarily, the Nutanix Acropolis hypervisor has a built-in HA feature, so if the VMs become frozen or inaccessible, it will automatically restart the VMs on another node, based on your affinity policy. So basically, the CVM is acting as a storage controller, providing access to the datastore. Good question; let me take you to the next slide. If you look here and try to understand the write I/O functionality of Nutanix, whenever a guest VM writes data to storage, the guest VM sends the data to the controller VM, the CVM. The CVM writes the data to local storage first. So it writes a copy of the data locally while also replicating a copy of the data to remote nodes for high availability. If a CVM goes down and the VM restarts on another node, that node reads the remote replica copies of the data and starts the VM so that you don't have any delays. So Nutanix always tries to keep multiple replicas of the data: one is local, and the other copy is distributed across the nodes for redundancy. It is not replicating a reference; it replicates the actual data needed for the VM to power on. Plus, it also keeps metadata information to identify which blocks are local blocks and which blocks are replica or remote blocks. Let me take you to one more slide and show you an example. In this example, I have a Nutanix cluster with three nodes, and all the nodes have their respective local storage. When I configure these three nodes as part of a Nutanix cluster, the cluster, using its DSF storage technology, considers all this storage as one pool. It is able to see all the physical disks across the nodes as one logical container, or datastore. Now, in this scenario, what happens when a guest VM writes data to the controller? Let us assume it writes two blocks: one is a square and one is a circle. The CVM writes a copy locally, and it also replicates the data to other nodes' CVMs in a distributed manner, so it replicates the square to node 2 and the circle to node 3. So it is distributing the copies across multiple nodes.
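To make this write path concrete, here is a minimal Python sketch (purely illustrative, not Nutanix code) of RF 2 placement: one copy of each block is written to the local node, one replica is distributed to a remote node, and the metadata records where every copy lives. All class and variable names are invented for the example.

```python
import itertools

class ToyCluster:
    """Illustrative model of RF=2 block placement across a 3-node cluster."""

    def __init__(self, nodes):
        self.nodes = nodes                       # e.g. ["node1", "node2", "node3"]
        self.storage = {n: [] for n in nodes}    # local data store per node
        self.metadata = {}                       # block -> nodes holding a copy
        self._peer_picker = itertools.cycle(nodes)

    def write(self, local_node, block):
        # 1. Write the block to the local node first (fast local I/O).
        self.storage[local_node].append(block)
        replicas = [local_node]

        # 2. Pick a remote peer in round-robin fashion so replicas are
        #    spread across the cluster instead of piling onto one node.
        while True:
            peer = next(self._peer_picker)
            if peer != local_node:
                break
        self.storage[peer].append(block)
        replicas.append(peer)

        # 3. Record where every copy lives; this metadata is what lets a
        #    restarted VM find the remote replica after a node failure.
        self.metadata[block] = replicas


cluster = ToyCluster(["node1", "node2", "node3"])
cluster.write("node1", "square")
cluster.write("node1", "circle")
print(cluster.metadata)  # {'square': ['node1', 'node2'], 'circle': ['node1', 'node3']}
```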
So this is how it maintains the data replicas: if the first node fails, the VM can be powered on on the second node by reading the local square data, and the VM will also require the circle block to complete the operation. To get the circle, it looks up the metadata to understand where the other block is, and the metadata, which is synced between all the CVMs in the cluster, will tell it that the circle is available on node three. So it requests a remote read of that block, and the remote read is done through the CVM on the other node; the CVMs are talking to each other. Yes, definitely there is some overhead, because it keeps metadata on each and every block to identify which is a local block and which is a remote replica. That is the reason we need to give some extra CPU cores and memory to the CVM; the default configuration needed for a CVM to run is eight cores and 16 GB of RAM. The first thing to note is that it replicates at the block level. We are not talking about replicating the VMDK file or the VMX file to other nodes; it replicates the data at the block level. For the CVM, whatever data comes from the VM is considered a block. So basically, it accumulates the blocks, writes them in a striped manner to local storage, and then distributes those blocks to other nodes. For example, let's say a VM sends two blocks, one square and one circle. The CVM takes those two blocks, writes them to local storage as the local copy, and distributes block one to node two and block two to node three in a load-balanced manner, so that it is not overloading any one node in the cluster. So by default the DSF, the distributed storage fabric, uses this distributed method of keeping the blocks on different nodes. The second thing is that, yes, it requires a good amount of hardware resources, because not only is it replicating the blocks, it also maintains the metadata information for each and every block. To go into more detail on the read and write operations: whenever a guest VM sends data, the controller CVM first puts it in its local oplog, which is like a journal in a fast location. Once the oplog fills to a certain threshold, the data is drained to persistent storage. The oplog is stored by default on the flash disk, the faster disk. All the Nutanix home directories, the oplog, the metadata database, everything is stored on the flash disk for faster reads and writes, and the same data is replicated to other CVMs across the nodes as well. It uses this journaling concept to make sure there is no data loss: the oplog is like a journal where it keeps information about the incoming blocks and keeps those details updated on the other CVMs in the cluster. Let me show you some more diagrams and slides as we move forward, because initially it might be a little difficult to understand the whole picture, but as we go on, things will keep getting clearer. Now let me go back to the initial slide. Are we okay with this concept? We have physical servers with their own local hard disks, the local disks of all the nodes are pooled into one logical datastore or logical container, and a CVM runs on each node of the cluster.
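Continuing the same toy example, this sketch shows the read side as just described: the metadata is consulted, a local copy is preferred, and a replica on a healthy remote node is read when the local copy is missing, which is what happens when the VM restarts on another node after a failure. Again, this is an illustration only, with invented names.

```python
def read_block(metadata, node, block, failed=frozenset()):
    """Illustrative read path: serve from the local copy when present,
    otherwise consult the metadata and read a replica from a healthy remote node."""
    locations = metadata[block]
    if node in locations:
        return f"local read of {block!r} on {node}"
    remote = next(n for n in locations if n != node and n not in failed)
    return f"remote read of {block!r} from {remote}, requested by {node}"

# Placement produced by the write example above: one local copy, one replica.
metadata = {"square": ["node1", "node2"], "circle": ["node1", "node3"]}

# node1 fails and the VM restarts on node2: the square is already local,
# but the circle has to be fetched over the network from node3.
print(read_block(metadata, "node2", "square", failed={"node1"}))
print(read_block(metadata, "node2", "circle", failed={"node1"}))
```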
These CVMs talk to each other to maintain the cluster configuration, to replicate the data across multiple nodes, and to fail over or power on VMs whenever necessary. So the purpose of the CVM is to act as a storage controller and to make sure we have resilience in terms of data. The CVM is not only acting as a storage controller; it also performs storage features such as taking snapshots, creating local clone copies, or enabling compression and deduplication so that we don't have duplicate blocks on the storage side. It also handles the tiering part: initially, whenever data is stored, it lands on the flash tier, and if any data is not accessed over a certain period of time, it is automatically moved down to the HDD tier. That is what we call tiering. And it also has a resiliency feature where it keeps multiple copies based on our configuration. For example, with a replication factor of two, it always tries to keep two copies or replicas of the data so that we have resiliency at any given point in time. On this slide, let me cover the write operation once more. Whenever any data comes from the VM, the CVM writes it locally to storage and creates a copy on remote nodes for resiliency, depending on the RF factor that we have configured. These replicas are spread across the cluster for high performance: the system looks at the load on the CVMs, and based on how busy they are, the replicas are placed on the remote nodes. In this slide, I am showing a single-node configuration. This is a single Nutanix node, which has its own local SSD and HDD storage connected to the SCSI controller. The SCSI controller is controlled by the CVM, and the CVM runs on the hypervisor as a virtual machine. So whenever a guest VM wants to write data, it talks to the hypervisor, the hypervisor forwards the request to the CVM, and the CVM uses the flash storage to store the data. In terms of the hypervisor, you have your own choice: you can reimage to ESXi or Hyper-V, or use the native hypervisor. Are we good up to here? This is just an overview of how the CVM, hypervisor, and storage components work together; we will go into more detail. So let us look at how the CVM performs these operations. As you can see here, I have a Nutanix cluster with three nodes. Every node has its local storage, every node runs its hypervisor, and every node also has a CVM. The CVM is the one maintaining your cluster; even if you use the ESXi hypervisor, you will have a CVM running on ESXi that acts as your controller VM and performs all storage-related activities. Now, if we look inside the CVM, what is running there? The CVM runs a set of services that take care of all the storage-related activities and features we are going to discuss later on. Each CVM runs its own set of services; these services are independent, but they also talk to the corresponding services across the cluster. Let us look at the components of one CVM. Here you can see a single CVM that is running all of these services.
Now let us discuss the first service, which is called Stargate. Looking at this, you can see that the Stargate service receives requests from your hypervisor and your clients using the NFS, iSCSI, or SMB protocols. When I say NFS: if I am running an ESXi hypervisor, Stargate communicates with the ESXi hypervisor using the NFS protocol. If I am running the Hyper-V hypervisor, then Stargate receives requests from Hyper-V over the SMB protocol, because Microsoft uses SMB, right? When I say "client," I am referring to an external server, which can talk to your Nutanix cluster using the iSCSI protocol. So we can also present storage from the Nutanix cluster to external clients, and they can connect using iSCSI as well. So the Stargate service is the primary service that receives requests from your hypervisor and external clients to perform any read or write operations. Then the Stargate service talks to a service called Cassandra, and it also talks to a service called Curator. And the Curator service, the Cassandra service, and the Stargate service also talk to a service called Zookeeper, through a process called Zeus, which makes sure the cluster configuration is intact. If any changes happen in the cluster, like when you deploy a new virtual machine, that configuration is replicated to all the other nodes in the cluster using Zookeeper. So the Zookeeper service is the cluster configuration manager service that keeps track of all the configuration of your cluster. Then we have one more service here called the Prism service, which is primarily used for management purposes. As an administrator, using a browser or another interface, I connect to the Prism service over HTTP or through the REST API, so the Prism service gives me the management capabilities to manage my cluster. Now let's look at the other components. The Cassandra service is in charge of managing the distributed metadata store: whenever we write any data to storage, the metadata information is extracted, and the Cassandra service keeps that metadata. The Zookeeper service is the cluster configuration service; it is responsible for managing the cluster configuration and updating it on the other nodes in the cluster. The Stargate service serves as your data I/O manager, receiving requests from your hypervisor to perform read and write operations. The Curator service manages and distributes tasks throughout the cluster, deciding which CVM performs which task. Prism is my user interface and my API entry point, where I am able to manage my Nutanix cluster. If I go back to the previous slide and do a quick recap: there are some dependent services that we cannot start individually, because some services depend on other services. For example, you can stop and start the Stargate service independently, because it is only the request-handling service that accepts requests from your hypervisor and clients; if you stop Stargate, you simply stop receiving those requests. But if you stop your Cassandra service, you are also stopping your metadata information, which again is a distributed metadata store.
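Since Prism is described here as the HTTP/REST entry point for management, here is a minimal Python sketch of calling it with the requests library. Port 9440 and the v2.0 gateway path are the commonly used defaults; the host name and credentials are placeholders. Treat this as an illustration and confirm the endpoint against your own cluster's REST API explorer.

```python
import requests

# Connect to the Prism gateway over HTTPS. Replace the placeholder address
# and credentials before running this against a real cluster.
PRISM = "https://prism-cluster.example.local:9440"

resp = requests.get(
    f"{PRISM}/PrismGateway/services/rest/v2.0/cluster",
    auth=("admin", "your-password"),   # placeholder credentials
    verify=False,                      # only for lab clusters with self-signed certs
)
resp.raise_for_status()
cluster = resp.json()
print(cluster.get("name"), cluster.get("version"))
```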
So it is not advisable to forcefully stop these services; there is a procedure for stopping and starting the services, which I will share with you as we move forward. Let me do a quick recap of these services. If any virtual machine running on the hypervisor writes data, that data goes to the hypervisor, and the hypervisor forwards it to Stargate. Stargate uses a service called Medusa, which talks to Cassandra so that Cassandra can record the metadata information for the data, and that information is shared with Zookeeper via the Zeus service or process, and with the Curator, which manages your tasks. So the Curator is the one that manages and distributes tasks in the cluster, Zookeeper is the one that keeps your cluster configuration, and Prism is your GUI interface, the gateway for managing your Nutanix cluster. Because you have multiple nodes in the cluster, we connect to Prism via an HTTP browser or the REST API to perform all management activities. The Zookeeper service has one Zeus leader, which holds the primary cluster configuration, and the other nodes in the cluster act like your secondary configuration, the Zeus followers. Other than these, there are a few more services. One is the Genesis service, which acts as your service manager: if you want to restart any services gracefully, you can use the genesis restart command, which stops and starts the services in a graceful manner. One more process running in the cluster is Chronos, which keeps track of your jobs as the task scheduler. So if you are going to create a virtual machine, that is considered a job, and Chronos monitors that job to give you an update on whether it completed successfully or not. Cerebro is the service that manages your data replication and acts as the DR manager: when you are replicating your data from one node to another, the Cerebro service takes the data and replicates it across the nodes, and also across sites. And Pithos is a service that keeps your vDisk configuration data intact; because each virtual machine has its own independent vDisks that need to be configured, this service is there for that.
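To illustrate why some services cannot simply be started on their own, here is a small Python sketch that orders a set of services by their dependencies. The dependency graph below is illustrative only: it reflects the relationships described above, not an official Nutanix startup sequence, and on a real cluster Genesis handles this for you.

```python
from graphlib import TopologicalSorter

# Illustrative dependency graph: configuration (Zookeeper/Zeus) and metadata
# (Cassandra) come up before the services that consume them.
DEPENDS_ON = {
    "zookeeper": set(),
    "cassandra": {"zookeeper"},
    "stargate":  {"cassandra", "zookeeper"},
    "curator":   {"cassandra", "zookeeper"},
    "prism":     {"zookeeper"},
}

start_order = list(TopologicalSorter(DEPENDS_ON).static_order())
print("graceful start order:", start_order)
# A graceful stop is simply the reverse: stop consumers before their dependencies.
print("graceful stop order: ", list(reversed(start_order)))
```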
Hardware Overview
1. Hardware Server Offerings with Nutanix OS
These are the hardware systems that Nutanix sells directly into the market. We will also look at the Dell XC series as an example to get an idea of how Dell offers the Nutanix solution. Then we'll look at Lenovo servers, specifically rack servers, which are more flexible in terms of hardware configuration, and how we can run Nutanix on them. First, the Dell XC series: the Dell XC series is nothing but PowerEdge servers, the 14th generation of PowerEdge servers, which come with factory-installed Nutanix software and whatever hypervisor you request Dell to preinstall at the factory. So when you order your XC series, you can tell Dell which version you want, how much CPU and memory, and how many disks you are looking for, and you can also request your choice of hypervisor. You can get it with the Acropolis Hypervisor (AHV), or you can get it reimaged at the Dell factory with ESXi or Hyper-V. Dell offers these servers in different combinations of flexible configurations of CPU, memory, SSDs, HDDs, and NVMe SSDs as well. So, depending on your sizing (yesterday you saw the disk calculator): when you give your parameters, such as which RF factor you want and how much SSD and HDD capacity you need, you will be able to identify what capacity and how many SSDs you require. Based on that, you can order the Dell XC, and the Dell XC includes all the DSF features, because it is running the Nutanix software: thin provisioning, cloning, replication, data tiering, deduplication, and compression. These features have already been discussed as part of our DSF discussion, right? So all of these features are available as part of the Nutanix software. One of the advantages is that the hardware and software combinations are tested and validated with benchmark tests, making sure compatibility between hardware and software is verified before the box is shipped. You can also grow one node at a time non-disruptively: assuming we are looking at RF 2, we can keep adding nodes one at a time without disruption, and we can add more nodes as well. I can do an online expansion of my cluster by discovering any x86 servers running the Nutanix software on the network; that is also a non-disruptive operation. There is a link available on the Dell website, as well as on its YouTube channel, where you can watch a video that gives you an idea of how the Dell XC series is configured to run the Nutanix software. Now let's look at the different models that are available; as an example, consider the XC630 and XC640 (let me take you to the next slide). Depending on the configuration, the PCIe lanes, and how many slots are available in the 2U chassis, the hard disks are distributed across the four nodes. Now, if you look at the drive options available, the XC630 series can support ten 2.5-inch hard disks, so I can add up to ten disks in any SSD or HDD combination. In the 640, I have provision for four 3.5-inch and ten 2.5-inch drives. So depending on the physical slots and the hard disk sizes, I can add the number of hard disks accordingly.
If you look at the XC730, it gives me a greater number of disks, right? If I have provision for adding additional disks to the chassis, then yes, I can add extra disks; the 740 also has a greater number of disk slots. I can also add a JBOD: Dell has the PowerVault series of JBODs and storage arrays that can be connected to the server if we have a SAS port, an HBA card, or a Fibre Channel initiator installed on it. This is just a quick overview of the different Dell models available, but I would suggest you always refer to the Dell website for the latest model numbers, specifications, and CPUs, and how many cores each one provides, so that you know how many cores you are getting from each model. Now, as I told you earlier, the CVM requires eight cores. So if I have a 630 that provides 18 cores, eight cores will be used by the CVM, and the remaining ten cores are available for my guest VMs. At the same time, you can look at the memory channels per socket. If I am getting four channels per socket and initially I purchased only two channels, filling those two channels with my DIMMs, then I have two more channels available that I can populate in the future to increase the memory as and when needed. If you look at the next column, it shows me somewhere around 834 GB of memory per socket, so I can go up to that amount per socket, where each socket refers to the CPU socket we are using and the memory allocated to it. Okay, that was a quick overview of the Dell hardware and the different Dell XC series models currently available. When you are planning an implementation, when you are considering deploying the Dell XC series in your client environment or in your data center, there are a few things we need to evaluate or collect before we can begin. These are the physical elements: what is the power source, and which network switches are being used? Because your Dell XC series comes with 10-gigabit ports, you must have a network switch with 10-gigabit ports available on the client's network in order to connect. You can also discuss this with the customer. And yes, the best practice is to scale out compute and storage together so that you have enough resources available, but that is at the cluster level: when you add compute and storage at the same time, you are adding resources to your cluster. Sometimes, though, one of the nodes is running out of cores, memory, or space, and in that scenario my objective might be just to add more memory or more cores to that particular node, which I can also do. Still, the best practice would be to add compute and storage at the same time so that you have enough resources available. Now, there is one more tool, or one more feature, available in Prism Central, where you can look at what-if scenarios as well. I will take you through the demo, where you can plan for your future growth. For example, suppose you want to deploy 100 virtual machines and run them for a year, with a one-year runway. You can analyze the Nutanix cluster resources to see whether the existing resources will be enough, or what type of resources are required to run those 100 VMs.
So that tool will help us understand which resources are falling short and which resources need to be added. With the help of that tool, I will be able to identify whether I am running out of compute resources or running out of storage resources. The next thing is that we also need to collect information from the client about any existing VLANs or subnets, so that there is no negative impact. As such, the base requirement for Nutanix is an x86 server. Even if I have different models in the Nutanix cluster, at the end of the day I am managing all the resources at the cluster level. If any VMs are running out of resources on one node, and I am adding a higher-configuration node to the cluster, I can also migrate the VMs to that new node with more hardware resources. This is entirely software-defined, right? The Nutanix operating system is built entirely in software; it is not bound to any hardware model or anything like that. Moving forward, we also need to get information such as what network components and what DNS servers are running in the customer's environment. We also need the IP addressing details from the client, because we will need some IP addresses for the cluster itself, for the host (the hypervisor), and for the CVM as well. The first thing we need to make sure of is that the customer has dual power feeds available, because every Nutanix NX model or Dell XC model has dual power supplies, and we make sure each power supply is fed from a different source so that we have redundancy at the PDU or source level. If you look at the Nutanix block (this is just an example, and the configuration will vary depending on which model you are looking at), a typical Nutanix node has two 10 GbE ports, two 1 GbE ports, and one 10/100 Ethernet port for management (IPMI) purposes. So depending on the number of ports and the model I am going to deploy in my customer's or my own data center environment, I need to make sure that many free physical ports are available on the network and that the IP addresses are also available. I need to get the default gateway, the subnet mask, the DNS, the NTP server, and the Active Directory details from the client, because you will be configuring the NTP server so that you have time synchronization as well. You might also plan to integrate with Active Directory or LDAP in your environment for Nutanix user authentication or cloud user connectivity, so if you have Active Directory, you need to get those details so that you can integrate Nutanix with Active Directory and have single sign-on enabled on your Nutanix cluster. When we talk about the new IP addresses needed on the Nutanix side, we need an IPMI interface IP, one IP address for the hypervisor so that we can manage and communicate with the hypervisor, and an IP address for the Nutanix CVM as well. So primarily, we will need at least three IP addresses per node from the client. Sometimes clients have a separate management network, so we need to get those IP addresses and connect the IPMI physical port to that management network accordingly. Network segmentation is used when you have different network segments, where you separate your IPMI, hypervisor, or replication traffic onto different segments for redundancy or for replicating data on a separate segment.
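As a quick illustration of the planning arithmetic in the last two paragraphs, here is a small Python sketch with two helpers: one subtracts the CVM reservation from a node's physical resources (the 8-core/16 GB figures are simply the values quoted in this course; real CVM sizing varies by model and workload), and one counts the client IP addresses to request, assuming three per node (IPMI, hypervisor, CVM) plus at least one cluster-level address. Both are study aids, not sizing tools.

```python
def usable_node_resources(total_cores, total_memory_gb,
                          cvm_cores=8, cvm_memory_gb=16):
    """Rough per-node view of what is left for guest VMs after the CVM
    reservation (defaults are the figures quoted in this course)."""
    return {
        "guest_cores": total_cores - cvm_cores,
        "guest_memory_gb": total_memory_gb - cvm_memory_gb,
    }

def ip_addresses_needed(nodes, per_node=3, cluster_ips=1):
    """Rough count of IPs to request from the client: IPMI, hypervisor, and
    CVM per node, plus at least one cluster-level address. A separate
    management network or Prism Central would add to this."""
    return nodes * per_node + cluster_ips

# Example from the lecture: an 18-core node leaves 10 cores for guest VMs,
# and a four-node block needs at least 13 addresses under this assumption.
print(usable_node_resources(total_cores=18, total_memory_gb=256))
print(ip_addresses_needed(4))
```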
Nutanix also has built-in replication and snapshot technology, where you can replicate the storage data from one Nutanix cluster to another, and we can look at those settings as well, depending on how the customer wants to use network segmentation, as part of doing a physical installation of the Nutanix block. Primarily it's a 2U box; the Dell XC also comes as a 2U. So I can have the rails connected and the chassis installed in the rack. Now, before powering on the Nutanix block or the Dell XC system, we need to make sure a few data center factors, such as the temperature, mechanical loading, and airflow, are proper, so that we don't encounter any issues. Make sure there is no circuit overloading and that reliable earthing is in place, so that we don't have any earthing issues when we power on the block. Usually when I say block, I am referring to a 2U chassis that has four nodes; I am referring to this entire chassis as a block. Inside this block I will have different nodes: node one, node two, node three, and node four. So, when we discuss this bundled, chassis-based form factor, we usually have up to four nodes in a block, and each node is identified by its position, labeled A, B, C, and D respectively. If you look at the example of Nutanix systems like the 1050 or 3060, a block can hold up to four nodes: I can add all of them at one time, or I can install three nodes initially, create a cluster, and add the fourth node as an expansion to my cluster. These nodes are connected to their respective physical drives. In this chassis, the physical drives share the same backplane and the same form factor, so every node is connected to its respective locations on the backplane. And the first drive of each node hosts your Nutanix operating system, your CVM, and your metadata; the first drive is always an SSD in the block. Let me show you this diagram. As you can see here, this is the front view of the 2U chassis, where I can see multiple hard disks. The first set of disks belongs to node one, the next set to node two, then node three, and node four. When you order the Nutanix or Dell XC series, you can order all of the disks as SSDs as well; an all-flash configuration can also be considered. If you look at the first hard disk, that first disk is the SSD, which contains your boot and metadata information and is also used as the flash tier for storing the hot blocks. The remaining four hard disks are considered data disks where the persistent data is stored, and I also have one slot marked "blank," which means no disk is installed there; it is empty. So if I am running out of storage space, I can add a hard disk here, and when I do, at the end of the day I am adding storage resources to the cluster. All these hard disks together will be shown as one single storage pool. So the first SSD will be the boot disk for my node. Whether you can mix and match disk types depends on your hardware chassis and backplane and how they are connected, because at the end of the day the SSDs and the HDDs have their own back-end ports, right?
Those back-end ports connect to the physical backplane, so we must first determine whether the shelves or disk slots allow mixing and matching. Some vendors offer a hybrid of the two by including a converter for the back-end ports; it is really a hardware-dependent question that we must investigate first. Nutanix uses the SSDs for storing the hot data; the HDD tier is used for tiering purposes only, to move cold data from the flash down to the HDD for long-term storage. We don't want cold data residing on the SSD, because it would occupy space, right? As a result, the auto-tiering feature automatically moves the cold data to the HDD, freeing up space on the SSD for new incoming data. Yes, keeping disk types consistent would be recommended, because an all-flash configuration and a hybrid configuration will perform differently; you can, however, do this block by block. I mean, I can have one block where hybrid disks are used, and I can add one more all-flash block to the existing cluster, because the storage containers presented to the guest VMs are presented as a logical view, not a physical one. In the same chassis, you can also see a power option where you can check whether a node is on or offline. This is the back end of the Nutanix box, and as we saw in the front view, we have four nodes. On the back side of the box you have blade-style servers installed in a single 2U chassis, sharing the same power supplies. So all four nodes share the same power supplies, and they have their respective compute and storage disks connected to them through the backplane. If I look at one node in a magnified view, this node primarily has one 10/100 IPMI port, two 1 GbE ports, two USB ports, a VGA port, and an LED so that I can see the power status of the server. It also has two 10 GbE ports on the server. This example primarily refers to the 1050, 3050, and 3060 models; the hardware specification, the number of ports, and the type of ports will vary depending on which model you are looking at. Yes, if I am looking at the NX-1050 model, for example, I can use the two 1 GbE ports and the two 10 GbE ports for network connectivity. And what does Nutanix configure by default? One 10 GbE active port and one 10 GbE backup port. All the traffic primarily goes over the active 10 GbE port; if that port fails, it uses the second 10 GbE port, the backup, and if the second 10 GbE port also fails, Nutanix starts using the 1 GbE ports for the data traffic. This is the default configuration, but if you are going for rack servers or other models with a larger number of ports, you can configure them for your own requirements as well. If you don't want the two 10 GbE ports to be one standby and one active, you can configure them as a team, where you set them up for load balancing and use link aggregation protocols and so on. Yes, Nutanix uses the Open vSwitch concept, where I can create a bond for teaming the ports; we will cover that when we come to the networking topic. Can you team a 1 GbE port with a 10 GbE port? No.
It is not recommended, because if you combine or team a 1 GbE and a 10 GbE port, using the 10 GbE as your primary and the 1 GbE as your backup, then the VMs run fine on 10 GbE, but if the 10 GbE port fails, the VMs start using the 1 GbE port, which becomes a bottleneck. Yes, use a same-speed pool so we can avoid any performance issues. Moving forward, if we look at nodes in a block, we can have a maximum of only four nodes; if I want more nodes in the cluster, I have to add one more block. So does each block have to have four nodes? Not exactly. Regarding Dell: from a specification and benchmark point of view, they use their PowerEdge servers, which are very reliable and a standard in the market. Nutanix, on the other hand, is OEMing its hardware, so the support and the kind of services you get from Dell will be much better compared with Nutanix-branded hardware. I mean, Nutanix will be able to give you good support on the operating system side, but when it comes to parts replacement and hardware issues, you might face some challenges. Because I know the Dell PowerEdge servers are 14th-generation servers and very reliable, I would recommend going for the Dell XC series. Nutanix now also supports HP ProLiant servers, so if you are standardizing on HP ProLiant servers in your data center, you can check the Nutanix compatibility list to verify which HP ProLiant servers are compatible; you can use them, reimage them, and install the Nutanix operating system yourself as well. Yes, that's true: most of the time, when we encounter any bugs or need a hotfix, the hardware vendor will not be able to provide an immediate response, because they depend on Nutanix for the operating system software support. The only issue with Nutanix-branded hardware is that it has a limited range of configuration options; Nutanix is not a hardware manufacturer, they are only buying these x86 servers from third-party vendors and selling them as a complete solution. Or you can use HP ProLiant as well; now you have a wide variety of choices. Because, see, even if you use the Nutanix hardware, Nutanix is not manufacturing the hardware, right? They will be able to provide good software support, but when it comes to the hardware, Nutanix is again dependent on an OEM partner. They open a case with the partner at the back end on our behalf to see what the problem is, and any replacement must be done through that route, so there is always a kind of risk. So the best option would be to take a reliable x86 server and use it for installing Nutanix on top of it. We have the Dell XC series and Lenovo; multiple models are there. Let me show you the other available hardware models from Nutanix. They have some higher-end models, where you can see the G blocks are also available and the ports come in different configurations. Here you can see one IPMI port, one Gigabit Ethernet port, and two 10 GbE ports as well. So depending on the hardware model, the ports are pre-configured; we don't have much choice in terms of adding extra ports to this bundled, chassis form factor.
If I look at the high-end model, I can see the disks in a horizontal position, where multiple disks are available. The first disk is the SSD, then I have two disks that are used as HDDs, and you see there is an empty slot here. In this empty slot I can insert an SSD, as long as the back-end hardware supports it. Nutanix also follows a labeling convention whereby we can identify the disk type by looking at the label: a green label indicates a standard SSD, and a blue label indicates a standard HDD. Nutanix also supports encrypted disks, so you can have a self-encrypting SSD, which has an orange label, and an encrypted HDD, which has a grey label. These are self-encrypting drives (SEDs) that can be used to encrypt data at both the flash and HDD levels. The NX-1020 is an entry-level model with a very minimal configuration. You see here that we don't have a 10 GbE port; we have only 1 GbE ports on this particular node. So this is good for small deployments, remote offices, or test and development environments that you want to set up. You can use this NX-1020 as a very small investment to begin building your Nutanix cluster. Now, when we talk about hardware failures, primarily we are looking at disk failure, node failure, and controller failure, right? So if a disk fails, how can I replace it? If an SSD fails on a node, I will get an alert in my Prism GUI where I can see that a particular disk is shown as a bad disk, and I can click on that disk. As you see in this slide, the NX-1020 and NX-1065 have only 1 GbE Ethernet ports, so I would use them for small office or remote office deployments where I want a Nutanix cluster to run some virtual machines and replicate that data to my head office. So I can use the Nutanix cluster on the remote site and configure replication back to my main Nutanix cluster at HQ. I can also use this for customers who are small in size. Sometimes customers in a cloud environment are worried about data privacy and data theft; they don't want shared infrastructure, they are looking for dedicated infrastructure, or they are planning for a colocation facility. In those scenarios, where they have a limited number of VMs running, you can go for the NX-1020 cluster and set it up with minimal effort. And if I go to the NX-1020 block, I can also consider having encrypted drives, particularly for financial institutions or banking customers who require data security. We can have encrypted drives in the NX-1065 as well, so we can encrypt the data and meet the customer's SLA. There are two 10 GbE ports on the NX-1050 and 3060, so I can use these for my mid-size and enterprise-level deployments. Since I have 10 GbE ports, I can configure the Data Center Bridging protocol on my network switches and enable jumbo frames so that I can take full advantage of the 10 GbE. And if I need a higher configuration, if I feel the two 10 GbE ports are not enough, then I should consider a rack server.
I should not consider the block chassis in that case, because in a rack server we get additional PCI slots where I can insert additional 10 GbE ports and use them with my node. So, coming back to the original topic: if a hard disk is faulty, Prism will show us an indication and alert us that a particular hard disk is bad. We click on that alert to get more information, and we can see that the drive with such-and-such serial number, in such-and-such location on a specific host, has failed. Since a block can run four nodes, with the help of this information I can identify which node and which slot the hard disk is in, and then I can take the appropriate action to replace the disk and let the RF factor rebuild the data. So I can go and unmount the disk so that no further writes are made to it. The best option is to first unmount the disk and then inform your data center or engineering team to physically replace that disk with a new one, so that the RF factor can rebuild the data and start using the new disk. Also, if I want to look at the performance of the disks or the hardware summary, I can go to the performance view, where I can see the cluster-wide CPU usage, the cluster-wide memory usage, and the cluster-wide I/O as well. So at any given point in time I can see the performance of the entire cluster, and if I want to see a specific node, I can go to that node and look at the CPU, memory, and IOPS usage for it. If a node fails, the CVM on it fails as well. If a node has any hardware problems, we need to consider swapping the node. Since it is a block with blade-style servers, I can look at the front of my chassis and identify that a node has failed by the amber light indicating the failure. Since the node has failed, all its disks will also be shown as failed disks. So what I can do is go to the respective node's power button, power that node down, and then swap the node with a new one. In this section, I will cover a few details about rack servers, and we will also try to understand how we can benefit by using a rack server instead of the chassis model. So, if you look here, I have an example of a Lenovo HX3500, and the first thing this server provides me is two SSD drives; I can have two SSDs in this server in the first two slots. The remaining slots I can use for my SATA HDDs. The rest of the components are standard: the USB ports, the power button, the LEDs, and so on. So with this configuration, I can easily have two SSDs and six hard disks. If I look at the Lenovo HX5500 series, here again I have two SSDs and six SAS hard disks. Now, I know SAS hard disks are much faster compared to SATA, so I am getting better performance with this series. If I look at one more Lenovo model, I am able to get a higher number of SSD drives: per node, I am getting four SSD drives, and I can go up to 20 SATA drives. This is per node, so every rack server will be seen as a node in the Nutanix cluster, and I can add more disks to the rack server. When I look at the rack server's back end, I see some default ports: one management Ethernet port and four 1 GbE ports. So you see, I am getting a larger number of 1 GbE ports.
I also have the standard two 10 GbE ports here, plus an optional two 10 GbE ports; that means I can add an additional two 10 GbE ports as well, so altogether I get four 10 GbE ports that I can use for my data traffic. Plus, I have some free PCI slots available in this rack server where I can add more 10 GbE or 1 GbE ports as needed. Also, I have a dedicated 120 GB SSD drive that is used for boot purposes. That means I am not using the two SSDs at the front for booting; the two SSDs provided up front will be used to store the oplog, the content cache, and the Cassandra metadata. I will use the 120 GB SSD only for booting purposes: booting the Acropolis operating system, booting the hypervisor, and booting the CVM as well, so I can keep all these components on a separate SSD drive. Now, one concern is that a single boot SSD becomes a single point of failure. So what I can do is use a PCI slot to install a RAID controller, allowing me to make my boot drive highly available: I can have two SSD boot drives, one more in addition to the one that is already present, and configure both of them in a hardware RAID so that I have redundancy. If one boot drive fails, the RAID controller uses the second boot drive, and there is no downtime on my node. Yes, the SATA DOM: basically, they provide a SATA DOM on the chassis to allow a quick replacement. The SATA DOM is primarily used for hosting the hypervisor. If the hypervisor or the SATA DOM is damaged, you can shut down the node, pull out the node, remove the SATA DOM, replace it with a spare, and then start your node again; that is a quick fix for bringing the node back online. But with the rack server, since I have a lot of PCI slots and the provision for adding more disks, I can make it more highly available. Yes, that is also correct: because you need to physically remove the node, you have to bring it offline. Some customers who have a test or standby Nutanix cluster keep a SATA DOM imaged with a hypervisor of the same version and swap it in as well. And yes, in that scenario we have to do the complete installation again. This is one advantage of the rack server: with the block, if the node has failed and you are not able to recover it, then there is no choice; you have to reinstall the Acropolis operating system, join the node to the cluster again, and then it starts participating in the cluster. We are considering the worst-case scenario here; in the block form factor we don't have options like that, whereas in rack servers we get the provision of having a RAID controller for the boot drive. Additionally, if you look inside the motherboard of the rack server, you see a lot of fans that keep the server cool. You also have multiple CPU sockets, like CPU one and CPU two, so you can have multiple CPUs; some rack servers can even provide quad-socket configurations, correct? They also have the respective memory slots, so I can keep adding more memory, and I can install more CPUs to increase the compute as well. And I can also have multiple PCI slots. Sometimes I want to connect an external disk enclosure.
For example, if I have a JBOD, or a storage array that I want to use as external or additional capacity, I can add an FC initiator on the host and configure it properly so that the rack server can see the external storage box as an additional storage tier as well. That possibility is also there, and some customers are doing exactly that with Dell PowerVault appliances, using them as external storage for more capacity. We complete the hardware section here. I was trying to showcase examples of the Dell XC, Nutanix NX, and Lenovo rack servers; the same applies to HP ProLiant rack servers, and you will be able to use them according to your requirements. Before the next topic, can we take a 15-minute break so that I can also quickly grab some dinner? Okay, thank you.
Nutanix License Offerings & Security Concepts
1. Nutanix License Management & Cluster Security
I'm coming to the next topic, which is our licensing topic. Nutanix primarily controls the features based on the licenses that the customer is purchasing. Nutanix offers three different licenses: the first license is known as Starter, the second one is known as Pro, and the third one is known as Ultimate. They have these three offerings in terms of licenses, and for each of them there are various restrictions in terms of features and capabilities that we can use. So let us see the first one, which is the cluster size. When I am setting up my cluster, I can start with a single-node cluster, I can have three nodes, or I can have up to twelve nodes in a cluster with a Starter license. So the Starter license will allow me to have a cluster size of up to twelve nodes. If I want to build a cluster with more than twelve nodes, then I need to go for a Pro license or an Ultimate license, which offers me unlimited cluster size — I can have unlimited nodes in a cluster. The second feature is the core data services feature of a heterogeneous cluster. That means I can have a different hypervisor running on different nodes in the cluster, and this feature is supported by all the licenses. I can also create VM-centric snapshots and clones; I can create snapshots of virtual machines and clone copies of them that I can export or import, depending on the license. The data tiering feature that we discussed earlier, where Nutanix keeps the hot data on the SSD tier, is also there. AHV is built in — it's the native hypervisor of Nutanix — so we don't need to get a license for AHV, but if you are planning to use ESXi or Hyper-V, then you need to have the respective vendor licenses in place; that is separate from the features I am going to use in the Nutanix cluster itself. Yes — so the Starter license is not free for production. When you purchase the Nutanix cluster, you will get the Starter license by default. And, for example, suppose you want to use the EC-X feature, which is the erasure coding feature — a parity-based way of having data redundancy, right? If you want to use that feature, then you need to have at least a Pro license. The Community Edition is primarily used by professionals like us to set ourselves up with our own lab — if you want to set up a test lab, you can actually use the Community Edition. Basically, it is provided for free, you can run it on any commodity hardware, and it is used for training and testing purposes. Of course, you won't get support for it, but you will be able to get help from Nutanix's community forums. There is a forum at next.nutanix.com where you can sign up for an account, and you will be able to download the Community Edition from there. It is not free in the sense that when you are purchasing the Dell XC series appliance or the NX series hardware, the Starter license will come along with that. Yes, that is bundled with the hardware, and it has limited features. And if you want to use features such as EC-X, then you have to upgrade your license from Starter to Pro. Then there is VM pinning, which is just like when you want to allocate or reserve some resources for a virtual machine. For example, you created a virtual machine with, let's say, eight vCPUs and 64 GB of RAM, and you want to make sure that these are allocated, or pinned, to the VM. It's similar to a reservation in ESXi, where you can do CPU and memory reservations, among other things. So you are able to pin the VM, and you can also pin the VM to flash mode, which means the VM will always run on the SSD disks irrespective of whether its data is hot or cold.
So with the help of VM pinning, you will be able to reserve resources for that particular VM. Inline compression is also part of the Starter license. So you can see there are multiple features based on which the licenses are differentiated. Then if we look at the next table, which is the infrastructure resiliency features, this is related to the data path — how the data path is redundant in a Nutanix cluster. By default, Nutanix always tries to have a redundant path, and it keeps remote replica copies of the data, so that feature is part of all the licenses. The replication factor is tunable: you can run RF 2 in your Starter pack, but if you want to change it from RF 2 to RF 3, then you need at least a Pro license, where you can change your replication factor from RF 2 to RF 3 as well. Availability domains: this is the resiliency and DR aspect where you are creating your disaster recovery, your replication, or a protection domain, and you will be able to have DR within this availability domain. This is included in the Pro and Ultimate licenses. You can also configure the availability domains in the Community Edition; the Community Edition provides you with this feature as well. Coming to the data protection features: if you want to replicate your data with one-to-one replication and do disaster recovery, that is part of all the licenses. If you want to do bidirectional replication and DR, that is also available in all the licenses. If you want to grow your cluster or shrink your cluster — sometimes you decide to decommission old nodes, nodes with old hardware that is no longer supported or that is running out of compatibility or support contracts — you can actually remove those nodes from the cluster, and that activity is also available for all licenses. If you're running a VM with a Windows operating system and you want to take VSS snapshots for that particular VM, then you need at least a Pro license to use VSS as a snapshot provider for the VM. Time Stream is a feature of replication, availability, or disaster recovery that keeps track of your snapshot timestamps. It will provide you with information on when each snapshot was taken, and it will keep track of the time slots for each snapshot. So when you want to recover from a disaster or restore a VM using a snapshot, you'll be able to see that snapshot time stream and restore from it. That feature is also available in the Pro license. Cloud Connect is a feature where you can replicate your data from your Nutanix cluster to a public cloud provider such as Amazon or Azure. So you can connect to your public cloud, and you will be able to send a copy of your Nutanix data to that cloud for replication purposes, for backup purposes, or for DR purposes. This also requires the Pro license. And before you configure this cloud connection with your respective cloud provider, you need to verify whether they are offering the service as a backup or as DR. If your cloud provider is providing the service as a backup, then you will only be able to send a copy of your data as a backup copy to the cloud provider; you will not be able to use that data to power on your virtual machines in the cloud in the event of a disaster. If the cloud provider is providing the service as DR, then you will be able to replicate your VMs to the cloud provider, and in the event of a disaster, you can also power on your virtual machines in the cloud.
If you are looking at multi-site disaster recovery, where you can do one-to-many or many-to-one replication, you can do that, but you will require the Ultimate license for it. So in that scenario, the one-to-many topology is available in the Ultimate license. You can also set up a Metro cluster, or Metro Availability, where you can perform synchronous or near-synchronous replication to achieve a zero or near-zero RPO. That is also available in the Ultimate license. So, as you can see, Nutanix controls the features based on the license, and each license, of course, has a cost factor attached to it. One more thing I would like to highlight here is that if you have three nodes in a cluster running a Pro license and you add a fourth node which has a Starter license, then what happens is the old three nodes also get degraded to the Starter license. So you have to be very careful when you are ordering your new nodes, and you always have to verify with Nutanix or Dell what your current license level is on your nodes so that you can order a similar level of license or a higher license. If you order a higher license on your new node, that doesn't mean the cluster will get upgraded to the higher license; instead, the new node is effectively downgraded to the level of the nodes running the lower license. So it's very important to verify at the time of ordering what type of license you have and what features you are planning to use in your cluster. When we talk about the management and analytics features of a Nutanix cluster, the core one is known as Pulse. This feature is similar to a "call home" or "auto support" feature: the Nutanix cluster sends events, messages, and alert information to technical support on a periodic basis. This is a core feature that keeps sending alert messages to identify whether you are running into any hardware issues or any software-related issues. So Nutanix will have a complete health history of your cluster, and when you open a ticket, they will be able to assist you by looking at your Pulse information and identifying whether you have made any changes or have any critical issue that is impacting your cluster. The second analytics feature is the cluster health feature, which is very extensive and can provide information at all levels. By using this health dashboard, or that section in my Prism, I can understand and eliminate any bottlenecks related to performance, capacity, CPU usage, memory, and all those things. Nutanix has a utility called NCC, the Nutanix Cluster Check utility, which can be run manually, or you can schedule it to run at regular intervals and collect all the diagnostic information about your cluster. You can see which components have failed or which components are critical or have error messages as well. And most of the time, the Nutanix technical support team will ask you to run NCC so that they can collect the relevant information for troubleshooting purposes. The next feature is the one-click upgrade. One-click upgrades are a core feature that is available in all licenses. One-click upgrade means you can upgrade the Acropolis operating system and the hypervisor, you can upgrade the disk firmware, and you can upgrade the NCC utility from one version to another, all through a rolling upgrade.
So if you have three nodes in a cluster and you want to upgrade your operating system from one version to a higher version, you can download that new version directly from the Nutanix Prism console and perform a one-click upgrade. When you start this one-click upgrade, it performs a rolling upgrade on one node at a time. Once the first node is completed, it takes up the second one; when the second is completed, it keeps going and upgrades all the nodes in the cluster one by one. So we don't need to update them manually one by one — we can use the one-click upgrade feature, which automates the upgrade process by itself. The next component is REST. Nutanix has a very extensive library of REST API calls. So you can explore the REST API attributes and parameters from Nutanix Prism, and you can use a REST API client to automate a lot of your tasks for managing your Nutanix cluster. Some examples of this are cloud integrations: if you want to give your customer an easy dashboard where he can deploy his own virtual machines, you can integrate using the REST API so that you can deploy and create virtual machines, increase the size of a storage container, take a snapshot, and so on. When we talk about the virtualization features of Nutanix, most of those things can also be done through the REST API attributes. And the virtualization features — all the virtualization features — are part of all the licenses. I mean, we don't need any specific license for these features, such as VM console support, which is built into the Acropolis hypervisor, and any VM operation like powering on a VM, powering off, installing the guest tools, taking snapshots, placing the VM based on an affinity policy, or doing high availability of a VM. So all these virtualization features are built in, and they are part of all the licenses. Any doubts on the licensing part? Are we good to move forward? I didn't get you there — can you just repeat that again, please? Okay, you mean the NCC. There will be a lot of questions on the licensing part, which I have gone through, and in fact I have had feedback from my previous students as well, who have already cleared the NCP. So, yes, there are questions — they're basically giving you scenarios like: if a customer has a cluster and wants to implement the EC-X feature, which license is required or applicable? Those kinds of questions, and there are quite a few of them. And all those questions will basically revolve around these three slides: the core data services, the infrastructure resiliency, the data protection, the management, and the virtualization features. So have you already appeared for the NCP, or are you planning to prepare yourself? Okay, fine. I have some resources — maybe I can share those with you; that might help you get through, actually. Okay, so moving forward, we'll be talking about securing the cluster — how can we secure the cluster? One way of securing the cluster is by creating user accounts based on what type of access we would like to provide to that particular user, and we can also define an authentication method. I mean, do you want the user to connect only via SSH, or only via Prism, the web console, or via both of them but only be able to monitor — he should not be able to make any configuration changes, and he should not be able to do any VM operations, like that.
That is one way of securing the cluster: by giving appropriate user permissions. The second method is using SSL to establish secure communication with the cluster. So whenever an administrator is trying to connect, he will be able to use the SSL certificates that can be installed, and you can have him use that certificate for secure communication. Nutanix also supports key-based SSH access. So if you want key-based SSH access to the cluster, you can enable this, or you can disable it as well. There is one more aspect: not only is access control important, but the data residing on the cluster also needs to be secured. So Nutanix has a data-at-rest security feature where you can use self-encrypting drives; these self-encrypting drives encrypt the data and manage the keys through an external key management server, so that it is easy for us to manage the keys. Let us look at the first option, which is the key-based access that I can create, and I can also enable cluster lockdown to restrict access to the CVM so that no changes are made to it. Even if the administrator is there, he will not be able to make any changes to the CVM. So I can enable that feature as well. The data-at-rest encryption technology uses self-encrypting drives, so the data written on the disk is encrypted at all times and is inaccessible; in the event of a drive or node theft, the data on a drive can be securely destroyed. You can also have an authorization method that allows a password rotation policy, so that you keep regularly changing the passwords. You can enable or disable this protection at any time. So, if you intend to use encryption on your drives but later a business requirement comes up that requires encryption to be disabled, you can disable the encryption at any time — and there is no performance penalty either way, because the self-encrypting drives have their own mechanism for encrypting and decrypting the data. We can integrate the Nutanix cluster with a key management server, where we can comply with FIPS standards and have an integration with the key management server. The key management server will manage the encryption keys for the drives; whenever access is required, it gets authenticated, and password rotation can be done via the key management server. Let us look at a configuration overview of this data-at-rest encryption — what are the steps involved? First, we need a key management server that is installed outside the Nutanix cluster on the network, so that we have a separate key management server that can be used. This key management server can, for example, be a SafeNet or Vormetric key management server, which are already supported and compatible with Nutanix. The process of configuring the encryption with the key management server involves five steps. Steps two, four, and five are done through the Prism console; steps one and three are performed on the key management server, which is running outside the Nutanix cluster. So what I will do is first configure a key management server outside my Nutanix cluster and make it redundant, so that if the key management server is down for any reason, I will not have an issue accessing my data.
So it is always a good practice to make the key management server redundant so that you have a backup for it. Then, for the key management on the Nutanix cluster, I am going to generate a certificate signing request for each node in the cluster. So if I have three nodes in the cluster, every node will have a certificate request generated, which will include its unique node identification ID, and that UID field will be used by the key management server to identify the node and create the certificates. Once I have generated the certificate request from the node, I send it to the certificate authority (CA) to get it signed. This can be done via the SafeNet key management software, or you can also do it with your Active Directory or your Microsoft CA. And once we have the certificate signed, we go to the FIPS compliance feature of Nutanix, set the security level to high, and set it as FIPS compliant. After that, we upload the signed certificates that were generated in the previous step for each node to the cluster, and we get them authenticated by the key management server. So now the key management server, the KMS, and Nutanix are able to communicate with every node in the cluster using the SSL certificates. Then what happens is that the cluster starts generating the keys for the self-encrypting drives, and it uploads those keys to the key management server. So depending on how many self-encrypting drives you have in your node, it will generate those keys, and those keys will be uploaded to the key management server so that the KMS can manage these keys centrally. Okay? Any doubts on that topic? Okay, so moving forward with our installation: when we are concluding our installation at the customer's premises or in our data center, one of the activities recommended by Nutanix is to run the Nutanix Cluster Check — the NCC tool, or utility, that I mentioned earlier. What is this NCC tool? NCC is a framework of scripts that can help diagnose the cluster, and it can also be run against individual nodes that are up, regardless of the cluster state. The scripts are standard commands that are run against the cluster or the nodes, depending on what type of test we are performing. I can perform all the tests using the NCC command, or I can run only a specific test for my cluster. For example, let's say my disk performance is low and I want to look at my disk parameters — I can run an NCC check on the disks only, so that I don't touch the other components in the cluster. I can run this from the command line or from the Prism console as well. And once the command is run, it generates a log file with the output of the diagnostic commands that were selected by the user. So if the user says he wants to collect all the information about the cluster, then all the diagnostic information of the cluster will be collected and a log file will be generated, which we can share with the Nutanix technical support team as well. The checks are also grouped into plugins and modules, so you can run the entire cluster check or group the checks based on certain parameters: you can run individual plugins, which run a specific diagnostic command, or you can run modules, which are logical groups of plugins executed at one time to make sure those parameters are diagnosed properly. So what is the output that we get? When I run the NCC tool, I get these outputs.
I will get an output of Pass if everything is healthy and no further action is required. Fail is returned if the test expected a certain output but got something else — the component is not healthy, so it is marked as a fail. A warning is returned if an unexpected value is encountered: we were expecting a value, but that value is not present, so NCC gives us a warning saying that there is some unexpected behaviour or unexpected value in this or that component. And Info only provides us with a value; it does not indicate a pass or a fail, so that test is run only for informational purposes. What type of tests can I run? I can run all the tests at the same time, I can run all the tests from the Prism console, or I can run two or more individual checks at the same time. So if I don't want to run everything, I can run two or more diagnostic tests in parallel. If you have run a test and one of the outputs failed, and you have taken corrective action for that one failed output, you do not need to rerun the whole suite — you can run only the failing check, so that it only re-checks the component that failed in the previous diagnostic run. You can run the checks in parallel, and you can set how many parallel checks you want; there is a command-line parameter to set the value for the parallel checks. You can also use the npyscreen option to display the status of the NCC diagnostics interactively, and you will be able to see the output as well. This is an example of how you can run individual checks: I run the NCC health checks with the system checks plugins, and I specify the plugins as the cluster version check and the CVM reboot check. These are the two checks that I want to perform on my cluster — one collects the cluster version details, and the other checks whether the CVM has rebooted recently. Sometimes, if the CVM is running short of resources like memory or CPU cores, it may reboot, so we can verify whether the CVM has rebooted recently; if it has, we can investigate further into the cause of the reboot. There is also a parameter to re-run the failing checks: if I set it to true, NCC will retest only the checks that failed. I can use the health checks run-all command with the parallel option set to four to run four checks in parallel — four is the maximum value in Nutanix, but if I want to make it two or one, I can do that in certain scenarios. The npyscreen parameter in the command lets you see the interactive output of the NCC tool in real time. You can also set up a general usage pattern where you log into the CVM and, with some flags, create a plugin for yourself and run it whenever needed. So if you have a typical requirement of running a specific test every week or on a daily basis, you can create a plugin and run it as and when needed. And if you want to run all the checks from the Prism console, you go directly to the Actions tab, click on Run Checks, select All Checks, and click Run. If you want to do it from the command line, you log into one of your CVMs in the cluster and type ncc, which will run all the respective diagnostic tests.
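To put the commands we just walked through in one place, here is a minimal sketch, assuming a recent NCC release; the individual check names and the parallel flag are taken from the examples above but can differ between NCC versions, so treat them as illustrative and confirm them with ncc's built-in help on your own cluster:

    ncc health_checks run_all
    ncc health_checks system_checks cluster_version_check
    ncc health_checks system_checks cvm_reboot_check
    ncc health_checks run_all --parallel=4

The first command is the full suite you would normally run after an installation; the two individual checks and the parallel option map directly to the cluster version check, the CVM reboot check, and the four-way parallel run described above.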
Sometimes we need to check the status of the cluster services, because there will be quite a few of them depending on how many nodes you have in the cluster. You can log into any CVM in the cluster — you do not need to log into any specific CVM, because all of the CVMs in the cluster participate and sync their configuration. But one good practice is that if you have many nodes in the cluster, you can assign a cluster virtual IP address, so there is no need to pick a CVM every time; you just connect to the cluster IP address. The cluster IP address will connect you to the leader CVM, or to the first node in the cluster, and from there you can run the cluster status command, which shows you the list of all the services running on each CVM, or each node. It shows you the CVM IP address, the status, and then the list of services. As you can see, Zeus, Medusa, Stargate, Cerebro, Chronos, Curator, Prism, and a variety of other services are listed, as we discussed earlier. And sometimes, if you want to stop the cluster or the services, you can use cluster stop — you just replace the status keyword with stop, and it will stop the services on all the CVMs running in the cluster. To verify the cluster and make sure there are no misconfiguration problems — for example, that a node is using a 1 GbE NIC instead of a 10 GbE NIC — I can run the NCC check manually and confirm that the cluster is healthy and has no issues, and I will be able to see that it is fine. Now that we've finished installing and testing, it's time to look at the support resources that are available so that we can get proper support from Nutanix. The first way is by visiting their support page. The second way is by opening a ticket with Nutanix or Dell. The third one is that you can call them directly, or you can create your own customer portal account on my.nutanix.com. Now, I would suggest that if you don't have your Nutanix account, please go ahead and create one. What you need to do is just give your official email address so that they can verify your partner status or, as a customer, verify your contract, and then you will get access to all the resources. Finally, there is handing off the cluster to the customer. If you are passing the cluster to another team within your organisation or to your client, you must ensure that the client has an active self-service portal account on the Nutanix support portal. Also make sure that Pulse is configured and able to send information to Nutanix support, and make sure that alerts are turned on in the cluster. These things serve as a checklist before handing over. Coming to Nutanix's customer support services, they provide supporting services in multiple ways. Technical support can monitor your cluster and provide assistance whenever a problem occurs; this is done through Pulse. You also have the support portal, where you can create an account and open tickets, you can download the AOS software, and you can view the documentation as well. The third one is the web console, which also has online documentation.
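Just to capture those service commands before we move on — a minimal sketch, run from any CVM or the cluster virtual IP (and make sure guest workloads are shut down before you actually stop a production cluster):

    cluster status
    cluster stop
    cluster start

cluster status lists each CVM with the state of Zeus, Medusa, Stargate, Cerebro, Curator, Prism and the rest of the services; stop and start bring the services down and back up across all CVMs in the cluster.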
Coming to the web console: you can access the help documentation on any specific topic directly from your Prism web console. You can also get REST API support from Nutanix for administration scripts — if you want to automate your administration activities using REST API calls, you have support for that too. Plus, Nutanix has a site called developer.nutanix.com; if you go to that link, you will see a lot of REST API templates and scripts developed by various developers across the globe. You can use them for your purposes and customise them as well. As we discussed earlier, Pulse is the main support information provider — it keeps sending information to Nutanix technical support on a periodic basis, and usually it will use your SMTP server to send the messages. If you have SMTP configured, Pulse can use that SMTP server to send the messages out. If your network does not have SMTP, you can implement an HTTP proxy so that Pulse can use the HTTP proxy to send the alert information to Nutanix technical support. Let us look at a few more details on Pulse. The Pulse support service is enabled by default and ready to send messages over ports 80 and 8443. You can disable it if you don't want it to send any information, and you can route it through SMTP or through HTTP as well. The alert email notification is also enabled by default and ready to send information using a customer-opened port. You can disable it if you don't want the alert email notifications to go out of your network, or you can use an SMTP relay, or you can configure an HTTP proxy. Sometimes you want to receive the alert emails in your own inbox; you can configure additional email accounts so that you receive the alert notifications in your inbox as well. The third option is the remote support services option, which is disabled by default. It is disabled by default because you only enable it when you want to give Nutanix technical support access to your cluster for a certain period of time for troubleshooting, and once the troubleshooting is done, you can stop the remote access again. You can configure Pulse from the gear icon — by default it is enabled, but if you want to change it, you select Pulse there and enable or disable it. Pulse sends information to the Nutanix support servers every 24 hours by default. So if you want to disable it, you can; if you want to route it via a proxy, you can do that as well. You can make similar changes for the remote connection service. To enable the remote connection service, go to the gear icon, and in the drop-down list you will see Remote Support. Once you get to Remote Support, you can click on the radio button to enable the remote session, and you can make it live for a certain period of time. So you can enable it for one hour or two hours, and if the support case is resolved within one hour but you had given two hours, you can also end the session immediately, to make sure your Nutanix cluster stays secure. One more option is to configure Pulse and the alert notifications to go through a proxy server; Nutanix clusters have a built-in proxy configuration for that.
So, go back to the gear icon and you'll see an HTTP Proxy option, where you can configure your proxy server's IP address and the port number you want to allow, so that the Nutanix cluster will use this HTTP proxy to send information to technical support. Finally, we need to have a valid support contract with Nutanix so that we get prompt support from the Nutanix side, and the support portal can be accessed with your account. So the first thing you need to do is create your account, and you also need to register your Nutanix cluster ID or the serial numbers of your nodes so that you can access your license key information; you should then be able to access all the documentation and downloads as well. This is the help page — once you go to the help page, you can see it has a lot of options, and it is very simple to use. Once you start using it, you will be able to look at the table of contents and the different options that are there. You can search for a specific topic, you can look at the PDF versions of the documents if you prefer those, and there are some quick links as well. So depending on which topic you are looking at, you can search by keyword. Okay, so this was the topic related to the hardware, the support, and the tools that we use for Nutanix clusters. Any doubts? Right?
Nutanix Hypervisor, Networking, ABS & AFS
1. Nutanix Networking, ABS & AFS Configuration
So let us start with the next module, which is the Prism module in Nutanix, and we will try to understand what elements are available and what features we can use with Prism. Now, Prism in Nutanix comes in two flavours, or two varieties: one is called Prism Element, and the other is known as Prism Central. Prism Element is preinstalled on the Nutanix CVM and runs as a service. If you remember, yesterday when we were talking about the different services that run on the CVM, one of them was the Prism service; that Prism service was giving us management capabilities and allowing us to integrate with the REST API. So Prism Element is pre-installed on the CVM, and it can be accessed directly by going to the CVM IP address. We usually refer to it as PE, for Prism Element. Then we have one more Prism, which is known as Prism Central and is usually referred to as PC. Prism Element gives me standalone management capability — I can manage a single cluster with Prism Element — whereas with Prism Central I can manage multiple clusters across my data center. So this topic that I am showing you is actually about Prism Central, and we will see what its capabilities are. Prism Central has all the capabilities of Prism Element; you can also launch Prism Element via Prism Central when you want to do any cluster-specific configuration. In most enterprise data centers, we use Prism Central, which allows us to manage multiple clusters and see a centralised view of all of them from a single console. So Prism Element is standalone cluster management; Prism Central is multi-cluster management. Now, from Prism Central, I will be able to do a single sign-on to all the registered clusters. What I need to do is first deploy Prism Central — Nutanix offers Prism Central as a VM, so I can download Prism Central from the Nutanix site and deploy it as a VM on my Nutanix cluster. And then I will register all my clusters in the data center with Prism Central so that it can enable single sign-on, and I'll be able to see a quick glance of my multiple clusters at the same time with the help of the Prism Central dashboard. So I can see the health status, the performance, and the issues that are prevailing in my clusters from a single pane of glass. I can also see most of the major entities and drill down into them: if I want to see the storage of a specific cluster, I can go to that cluster and look at its storage. I can run multi-cluster analytics, so I can see which cluster is running low on resources and which cluster has enough resources. I can also look at the alerts for all the clusters, and I can configure individual clusters through Prism Central as well. So Prism Central provides management of multiple clusters. The next management feature is the REST API. Nutanix provides REST API functionality, which you can use to configure or automate your cluster. The latest version released by Nutanix is version 3. So you can always go to the developer.nutanix.com site, where you will get a complete list of REST API functions, and you can use them and code against them according to your requirements in multiple languages as well.
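As a quick, hedged illustration of what a v3 call looks like (confirm the exact endpoints and payload fields on developer.nutanix.com for your Prism Central version), listing VMs is a POST with a small JSON body:

    curl -k -u admin:'<password>' \
      -X POST "https://<prism-central-ip>:9440/api/nutanix/v3/vms/list" \
      -H "Content-Type: application/json" \
      -d '{"kind":"vm","length":20}'

The same pattern — a /list endpoint with a "kind" in the body — applies to clusters, subnets, images, and most other v3 entities.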
There are also some tutorials available on that site that will help you understand how to use the REST API, how to call it, and how to integrate with REST API clients as well. The third way of managing a Nutanix cluster is through the command line. I can manage the Nutanix cluster via a command line as well: I can connect from my local machine to any CVM in the cluster, and once I log into that CVM, I can see the entire cluster's information. By default, when you log in from the command line, you log in as the nutanix user, which gives you admin privileges. And when you want to use any command, the command is organised in a syntax of entity, action, and the parameters we want to use — every command in the Nutanix cluster follows this sequence. One more option is that I can download the nCLI from Prism. So if I don't want to use PuTTY or any other SSH utility, I can download the nCLI provided by Nutanix and set it up locally. The nCLI needs Java, so I need a Java runtime installed on my system, and then I can download the nCLI tool and use it as a command-line utility. So how do we install the nCLI on your local system? Whether you're using Windows or Linux, you go to the command prompt and start the nCLI by giving the ncli command with the -s option followed by the management IP address — this management IP address can be the IP address of any of your CVMs, or it can be your cluster IP address. Then, in place of the user, you give your username — for example, the default is admin — followed by the password you have set for admin. Then you can start using the nCLI to run your commands. As stated earlier, the command format for nCLI is entity, action, and parameters. For example, if I want to look at the storage pool list, storage pool is my entity and list is my action — so I am following the syntax of entity and action here. In the example below, I have datastore, which is my entity, create as the action, and the parameters are the name-value pairs that I want to use; I can give multiple parameter values, so you can see parameter one here and parameter two here as well. Nutanix follows this format, and you can easily use the Tab key to see the next available options as well. Embedded help is available in the nCLI: you can turn the detailed help on by running the cluster command with the help-detail parameter set to true, and if you don't want it, you set the help-detail parameter back to false. So by using these parameters, you can enable and disable the embedded help option. If you want to manage your Acropolis hypervisor, then you need to go to the aCLI prompt. So if you're running the Acropolis hypervisor in your cluster and you want to run any hypervisor-related commands, you log in, open the acli prompt, and from there you can run the commands related to the hypervisor itself. I can also use PowerShell cmdlets: Nutanix provides PowerShell cmdlets for interacting with the cluster, and these can also be downloaded from the Nutanix cluster.
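Circling back to the nCLI for a moment, here is a minimal sketch of the syntax we just described; the entity and parameter names come from the examples above, so verify them against the nCLI's own help on your cluster before relying on them:

    ncli -s <cluster-ip> -u admin -p '<password>'
    storagepool list
    datastore create name=DS01 ctr-name=CTR01

The first line starts a remote nCLI session against the cluster (or any CVM) IP; the next two are typed at its prompt and follow the entity-action-parameter pattern, with the datastore example showing two name=value parameters, just as on the slide.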
So, for the PowerShell cmdlets: if you go to your Nutanix Prism Element or Prism Central and log into the cluster, you can download the PowerShell cmdlets installer and use it on a Windows system to run PowerShell scripts against Prism. Self-service is one more method of managing the cluster, and this method is primarily about giving control to the consumers of the infrastructure. So if you have a team that is responsible for VM deployment, you can create a self-service portal and create an account for your virtualisation team or your development team, so they can deploy their own virtual machines in a self-service manner without engaging your day-to-day IT operations. With the help of self-service, the user can deploy and manage his own virtual machines. You can also restrict the user's resource usage by giving him limits on the resources — for example, you can give him a resource limit of ten vCPUs, so when he's deploying VMs, he can only use ten vCPUs from the cluster and no more. The same applies to memory and storage: you can allocate those too, so that the user of the self-service portal provisions his VMs within those limits. Then there is the health dashboard, which is very important in Prism because it dynamically updates the information about your VMs, hosts, and disks in the cluster. This is the one place where you can get all your performance-related information. You can see all the different attributes: your VMs, how many VMs have problems, how many VMs have warnings, how many hosts there are, what the disks are, what the datastores are — all this information in a very detailed manner. You can see the number of VMs that are running, their state, and all these things from the dashboard, all from the same interface. I can also run my NCC checks from here and make sure all that information is available. Coming to Prism Central: Prism Central has a similar look to Prism Element. You log in with the username and password you created during the Prism Central deployment, and once you log into Prism Central, you will see this interface. Primarily, you will not see a single cluster name; you will see Prism Central, and then you can see the different clusters as well — their health, the views, and the options. Everything is similar to your Prism Element: the feel, the search option, the tasks, the gear icon. Everything will be there except the DR view — the dashboard view is the same, but the DR view in Prism Central differs from Prism Element. I will show you a demo of both tomorrow, for Prism Element and Prism Central. So how do we access a cluster if it is already registered? I can just search for the cluster by typing its name, and once I get the list of clusters, I can move my cursor over a cluster and quickly see its health status. And depending on the health status of my cluster, if I want to drill down, I can click on that cluster; Prism Central will do a single sign-on to the cluster, and it will show all the details related to it. I can configure a cluster from Prism Central like a regular cluster and work with its different sections. I can customise the view; I can configure the widgets that I want to see in Prism Central, the ones that are important for me.
I can select them or deselect them so that I can customise the view according to my requirements. This is the scenario view that I was telling you about earlier. Using this planning option, you can see what the current load on your cluster is and what resources are currently available. And if you are planning to deploy 100 VMs, are your current cluster resources enough, or do you need to add resources? It will also tell you what type of resources you need to add — whether you need to add only compute, or storage as well. I will show you a demo of this tomorrow when we go through the Prism Central access. This is a sample: if I give it a what-if scenario and say I am going to start a workload of 100 VMs, it will tell me that to accommodate 100 VMs, the current storage, which is this dotted line, will not be enough. That means I need to add more storage to my cluster, and I can also set a runway — the point in time up to which my current resources will last. I can see the same thing for my CPU: what CPU resources I currently have available, what is currently being used, and how much CPU I will actually need when I start the new workload, so it will tell me whether compute needs to be added or not. The same goes for memory: it shows me what my current memory is, what is currently used, and how much memory is required when I deploy the new workload. So by looking at this Prism Central scenario view, you can plan and perform your cluster expansion accordingly. The search option is a very good one, where you can search for different VMs or other objects across clusters, but you will require a Prism Pro license for that. When you have the Prism Pro license for your Prism Central, you can search for a virtual machine, a disk, or a container across multiple clusters, so you don't need to browse each cluster to find an object. This is the home dashboard, where you can see all the information in your Prism Central. I can see the different clusters, the runway, and different quick-access options. All of these are different widgets that I can use, and I can customise them. If I don't want to see the tasks in my dashboard, I can remove that widget. If I want to add the impacted clusters widget, I can keep it; I can look at the controller IOPS, the cluster memory usage, the cluster CPU usage, and the latency as well. So if I only want to see these important aspects in my Prism Central dashboard, I can keep just these widgets in place and remove the others.
Nutanix Cluster Management
1. Nutanix Management Interfaces
Are you able to see my screen? Okay, so let us start with our next topic, which is the Acropolis hypervisor. Yesterday we saw the demo and understood how the Acropolis hypervisor runs virtual machines. Now we will look at the new topics, which are the networking topics, and how we can manage the VMs as well. So, just a quick recap of what we discussed earlier about the Acropolis hypervisor. A node in the cluster runs a Controller VM, which is a prepackaged VM that also acts as the storage controller; it communicates with the SCSI controller, and it stores the data — it holds the oplog, the Cassandra metadata store, and the extent store as well. And we are also using a hypervisor. This hypervisor can be your Acropolis hypervisor, or it can be Hyper-V, or you can also reimage the node with an ESXi server. Now the guest VMs running on this Acropolis hypervisor will be able to communicate and run, and when the guest VMs are running, the hypervisor uses the CVM to do the I/O, so that the VMs can store their data, their disk files, their information — everything — on the storage. The next topic we are going to discuss is how we are going to connect these guest VMs to our outside switch, our network infrastructure: how the network infrastructure will be configured and how the cluster will use the physical ports that we saw in our hardware section. So before we start, let us have a look at the Acropolis hypervisor networking overview. AHV uses an Open vSwitch concept, which is connected to the CVM and which also interconnects the hypervisor and the guest VMs with each other and provides the link to the physical network. So with the help of this Open vSwitch, the CVM, the hypervisor, and the guest VMs are able to talk to each other as well as communicate with the physical switches on the network, or with other applications or clients on the network. Now, if you see here, I have one hypervisor host, which is known as host one, and I have one more host here, which is known as host two. We know that these two hosts are two physical nodes in a Nutanix cluster, right? And if I look at the bottom, I have the eth interfaces — these are the physical ports present on my node, and primarily these will be your 10 GbE ports. So each node will have two 10 GbE ports, and the default configuration of these 10 GbE ports is active and standby: one port will be active, and the other port will be a standby, or failover, port. What we do initially is connect these physical ports to our physical switch in the data center, the core switch, so that we get the physical connectivity to the LAN. And the Open vSwitch setup in Nutanix creates a bridge called br0. This bridge spans the nodes in the sense that every node has it, and it is able to see all the ports — each physical port is mapped to a virtual port on the bridge. So on the bridge, I can see the physical ports, and I can see that the bridge is the default bridge that exists across the nodes. So if a VM is connected, and I create a NIC interface in the virtual machine, that NIC interface connects to a tap port, tap0, on the bridge. And whenever data traffic comes from this VM, the traffic is sent to the tap port, the tap port passes it to the bridge, and then the bridge forwards those packets to the relevant Ethernet port on the node.
Now, in this scenario, when the VM is running on a specific host, only that host uses the tap and the bridge to forward the data to the switches. In case a physical link goes down — for example, if these two ports are down — the traffic can still use br0 and be sent out through the other uplink, and the VM can still communicate with the outside network. So with the help of this bridge, we are able to create a backend infrastructure — I would call it a backbone for the network — so that the VMs can be deployed and we can utilise all the physical ports available on the nodes. Let me show you this diagram here; this has a little bit more clarity. If you look at the right side, there are four legends: the green one is the internal VLAN, and the other three — orange, blue, and yellow — are the other VLANs. If we look here, we can see that we have bridge zero, br0, which is the default bridge, and we have also created one more bridge, which is br1. This is like creating data ports, or creating a vSwitch, using different physical ports. So if we look at eth2 and eth3, these two ports are configured in a bond — the bond represents the teaming of those two physical ports — and they are participating in br0; and eth0 and eth1 are also teamed together, and they are participating in bond1 on br1. Now all four ports are connected to the physical switches in our core infrastructure. And whenever a virtual machine is deployed, and I create a network interface card on the virtual machine, I can map it to a bridge — for example this one, which goes to tap0. The bridge is able to identify that this tap port belongs to a guest VM, and whenever data travels over this tap, it is forwarded to the bond, and the bond forwards the data depending on the configuration, whether you have configured the two ports as active-passive or as load balancing. The same thing happens when guest VM 2 sends its data. Guest VM 2 has two NICs: one is connected to bridge zero, and the other one is connected to bridge one. So in this example, guest VM 2 has two NICs connected to two separate bridges — it is like two separate networks that can be used for assigning different IP addresses — and they communicate with their respective tap ports. And when the data comes to a tap port, the bridge forwards the data to the bond, the bond sends it to the physical port, and then the physical port takes over and sends it to the physical switch. Now, on the physical switch, I can create VLANs to manage and shape my traffic so that different ports see different network traffic. So this is how the guest VMs send data to the outside network. Now, in addition to this, we also have an internal VLAN — the green one, right? If you look at the first box, which is the CVM, the CVM also has two network interfaces. One is eth0, and that eth0 is connected to my br0 with a vnet0 naming convention. And the CVM uses that path whenever it wants to talk to the outside world — for example, let's say an administrator is sitting outside the network and wants to connect to the CVM for management purposes, like using the Prism Element IP address or doing an SSH.
The administrator can then connect through the physical port, the traffic comes in over the bond to br0 and the vnet port, and it reaches the CVM's eth0 for management purposes, so he can manage the CVM, or the entire cluster, from Prism Element or over SSH. Furthermore, br0 is connected to the hypervisor as well, allowing you to see the status of the VMs and manage the hypervisor from the outside. So I can SSH to the hypervisor from the outside network, and I can perform commands and management on the hypervisor. Now, in addition to this br0, there is one more bridge, called virbr0, which is created as a private bridge — a private network — for communication between the CVM and the hypervisor. So the hypervisor and the CVM talk to each other over this Linux bridge, virbr0. This is a dedicated network used only by the CVM and the hypervisor, and the other one is the network that goes out toward our infrastructure. Let us see one example of the networking in this scenario. If you see here, I have the CVM, and this CVM has an external IP address — in this example ending in .14 — and it is connected to br0 via a vnet port, and I have two physical ports on the node configured as a bond. Now, br0 also carries the management IP of my Acropolis hypervisor — in this example ending in .13 — so I am able to connect to the Acropolis hypervisor from the physical network on the 10.0 network, and I am able to connect to the CVM from the outside network on its external IP, while the Linux bridge, the internal bridge, sits between the Acropolis hypervisor and the CVM. And, as you can see, Nutanix uses these IP addresses by default: 192.168.5.1 and 192.168.5.2. It uses this range as a private network between the CVM and the Acropolis hypervisor. Here we have to take into consideration that whenever I am configuring an external IP, or a Prism IP, for the CVM or for my VMs, I should not use this 192.168.5.x range. This is the only precaution we need to take, because if I assign an IP from that range to the CVM's external interface, or to a VM connected on this br0 — say 192.168.5.1 or 192.168.5.2 — then there might be an IP conflict. So the only precaution is not to use that range: 192.168.5.1 is used by the Acropolis hypervisor, and 192.168.5.2 is used by the CVM. Now, yesterday we saw the demo where we were looking at the installation, right? During the installation we gave a host IP address, which is the hypervisor IP address, and we gave the CVM IPs — those are the IP addresses I'm referring to here; these are the IPs we provided during the installation. We are not asked for the 192.168.5.x addresses during the installation. I can also create one more bridge for my virtual machines to communicate with the outside world, so I can keep my management and cluster communication separate, and keep my production VMs separate as well. So, any doubts here? Nutanix has a built-in networking facility — like VMware has the NSX feature for software-defined networking, in Acropolis the software-defined networking concept is built in, and we can use it to build bridges, manage traffic, and shape traffic as needed. If you have any doubts, I can go back and make sure we understand the concept before we move on. Do you have any? Okay, so I believe we are fine with it.
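Before we move on, here is a minimal, hedged sketch of how you would look at (and change) this bridge and bond layout from the command line; the bridge, bond, and interface names are illustrative, and the exact manage_ovs options can differ between AOS versions, so check the built-in help first.

From a CVM:

    manage_ovs show_uplinks
    manage_ovs --bridge_name br1 --interfaces eth0,eth1 update_uplinks

On the AHV host itself:

    ovs-vsctl show
    ovs-appctl bond/show

show_uplinks displays how the host's NICs are grouped into bridges and bonds, update_uplinks repoints a bridge's bond at a different set of interfaces, and the two Open vSwitch commands on the host show the same bridge and bond picture, including which bond member is currently active.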
So, any doubts here? Nutanix has a built-in networking facility: just as VMware has the NSX feature for software-defined networking, in Acropolis the software-defined networking concept is built in, and we can use it to build bridges, manage traffic, and shape traffic as needed. If you have any doubts, I can go back and make sure we understand the concept before we move on. So I believe we are fine with it.

So, moving forward in Nutanix, what I can do is not only create bridges, but also create networks: I can create a layer-2 network and I can create a layer-3, managed network. In a layer-2, unmanaged network, Acropolis only provides the guest VM with connectivity on a VLAN; in a managed network it also handles IP addressing, as we will see in a moment. The guest VM connects to a tap port that is created on br0 by default, and its traffic goes out to the physical port, and Acropolis manages these ports depending on how many network interfaces we create on the guest VMs. So what I do is first create a network in my Acropolis hypervisor using the Open vSwitch concept, and then, as VMs are being deployed, I select the network to which I want to assign the NIC, the network interface card. This is similar to creating a NIC in a VM on ESXi and assigning it to the VM Network or another port group: whenever we create a VM in ESXi, we have the option of selecting the VM Network or any other port group created by the administrator. So this is where we do the mapping so that the traffic can go out to the core infrastructure.

When we talk about a managed network, what we can do is create a DHCP service. When I create a managed network inside Acropolis, it starts a DHCP service that can give IP addresses to the VMs from an available pool or range. So I can assign IP addresses to my VMs automatically, and I can define the subnet mask, the gateway, the DNS servers, the domain name, all of these properties, as part of the DHCP response. So two types of networks are available: if you don't want to manually assign IP addresses to the NICs in your VMs, you can create a managed network and define a DHCP scope, a pool of IP address ranges, so that whenever a VM boots it gets an IP address automatically from the Acropolis hypervisor. If there are any doubts here, let me know; otherwise we can move forward. Okay, so moving forward, I will show you in the demo how we can create these two types of networks, how we can define the IP address range, and how we can map a network to a VM so that the VM picks up the IP address assignment.
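As a preview of that demo, here is roughly what creating the two types of networks looks like with the acli command line on a CVM; the network names, VLAN IDs, and address ranges below are only placeholders, and option names can differ slightly between AOS versions.

    # Unmanaged (layer-2 only) network: Acropolis only handles the VLAN tagging,
    # IP addressing is done inside the guest or by an external DHCP server
    acli net.create prod-vlan10 vlan=10

    # Managed network: Acropolis also provides IPAM/DHCP for the VMs on it
    acli net.create app-vlan20 vlan=20 ip_config=192.168.20.1/24
    acli net.add_dhcp_pool app-vlan20 start=192.168.20.50 end=192.168.20.200

    # Attach a VM NIC to the managed network; the VM gets an address from the pool at boot
    acli vm.nic_create MyVM network=app-vlan20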
Moving on to the next topic, which is Nutanix Guest Tools. Nutanix Guest Tools (NGT) is a software bundle that we can install on a guest VM, such as a Windows or a Linux guest, and it enables advanced functionality. The first thing we are looking at is the NGT agent, which is a service, and this service allows the guest VM to communicate with the CVM. So the communication with and management of the guest VM is done by the CVM with the help of this Nutanix Guest Agent service. Besides the communication part, it also gives us some extra capabilities, such as the ability to do a file-level restore using a CLI command and allow self-service recovery from the VM snapshots we have taken. So from the command line, a VM administrator or application administrator can restore his own files at the file level: we can give him the capability, the command syntax, and the list of snapshots, and he can look at that list and restore the individual file that needs to be recovered.

The next functionality that NGT provides is the VM Mobility Driver. This driver facilitates the migration of a VM from ESXi to AHV; it can also support an in-place hypervisor conversion, and it enables a cross-hypervisor conversion in disaster recovery scenarios. For example, if I want to migrate a VM from an ESXi server to AHV, I first install the guest tools on the VM running on ESXi, and once the tools are installed they allow me to migrate the VM to AHV and convert its format from the ESXi format to the AHV format. For cross-hypervisor DR, let's say you are running your production site on ESXi, but the DR site is a Nutanix cluster running the AHV hypervisor. The replication between the two clusters will happen either way, but when I want to register my VMs on the DR site, which is running AHV, and bring them up, the mobility drivers help me do a cross-hypervisor conversion so the VM powers on smoothly.

The next feature of NGT is to utilise the VSS requester and VSS provider for Windows VMs. If I want to perform an application-consistent snapshot of AHV or ESXi Windows VMs, I can use the NGT tools to do that using the VSS technology of Windows. So when I am taking a snapshot, I have an option to enable the application-consistent snapshot feature, and the NGT tool will coordinate with the Windows guest OS and utilise VSS to perform the snapshot. The same can be done for Linux VMs: NGT also provides the capability to perform application-consistent snapshots for Linux VMs by running specific scripts inside the VM to quiesce it and take a consistent snapshot. Any doubts? So we are okay with this.

Yesterday we saw the VM creation example, right? When I go to a specific VM, I see there is an option called Manage Guest Tools, where we can mount the NGT ISO on the VM if it is not already mounted. For mounting the guest tools, I need at least one empty CD-ROM configured in the VM; that empty CD-ROM is used to mount the NGT ISO, and after that we can install NGT from inside the VM.

For installing NGT we have some requirements and limitations. The first requirement is that we should have a cluster virtual IP address configured for the cluster. When the virtual IP, the cluster IP, is configured for the entire cluster, the guest tools communicate with that cluster IP, so if one CVM or one node is down there is no impact on the VM management part. This cluster IP should not be changed afterwards: if you change it, it will impact the NGT tools running in your cluster. So we need to take some precaution here, decide in advance which cluster IP I am going to assign to my Nutanix cluster, and make sure that IP will not change in the future. The second requirement is that we should have at least one empty CD-ROM slot attached to the VM so that we can mount the NGT ISO. We also need port 2074 open so that the guest can communicate with the CVM service; the guest tools communicate with the CVM over port 2074. If you are running the ESXi hypervisor, you need at least release 5.1 or later, and if you are running the Acropolis hypervisor, you need a 2016 or later AHV release. The VMs should also be connected to a network that can reach the cluster virtual IP address; sometimes, if you keep the management traffic separate from the VM traffic, you need routing between the VM network and the cluster IP so that NGT can reach it and keep updating, monitoring, and managing the VMs. These are a few requirements and limitations, and to get the latest details about the requirements and limitations for NGT, I always recommend checking the documentation for the version you are running on your Nutanix cluster.
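For the cluster virtual IP requirement, here is an illustrative check from a CVM; the IP addresses are placeholders, and the exact ncli namespaces and output fields can vary between AOS releases.

    # Set (or confirm) the cluster virtual IP that NGT uses to reach the CVMs
    ncli cluster set-external-ip-address external-ip-address=10.0.0.20
    ncli cluster info

    # After NGT is enabled on VMs, you can review their NGT status from ncli
    ncli ngt list

    # From inside a guest, verify that TCP port 2074 to the cluster IP is reachable
    nc -zv 10.0.0.20 2074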
So, how do we enable and mount the NGT tools? From the Nutanix web console you can enable and mount NGT on a VM. You select the particular VM on which you want to install NGT, go to the VM dashboard in Prism, and there you will see an option called Manage Guest Tools. When you click Manage Guest Tools, it mounts the NGT ISO using an empty CD-ROM, and from there onward you can run the installation of the guest tools. It is similar to the VMware Tools that we install, and it provides all these different features. I can also enable the Self-Service Restore feature here, so I can tick that option to enable or disable it; when I enable it, I am giving the VM administrator or the application administrator the ability to restore his own files through the CLI. I can also enable the VSS option to allow application-consistent snapshots. Once I have selected the features I want to provide, I click Submit so that the configuration is saved for the VM, and the VM gets registered with the NGT service. If NGT is enabled and mounted on the selected virtual machine, a CD-ROM with the volume label Nutanix Tools will be shown in My Computer, and from that volume you can run the NGT installer and, later, perform any updates or changes as needed.
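To give you an idea of the self-service restore workflow mentioned earlier, the sketch below shows the kind of commands a VM administrator would run from inside a guest with NGT installed and Self-Service Restore enabled. Treat the exact ngtcli sub-commands and parameter names as illustrative, since they can differ between NGT releases, and the disk labels and snapshot ID here are made up.

    # Inside the guest VM: list the snapshots available for file-level restore
    ngtcli ssr ls-snaps

    # Attach a disk from one of the listed snapshots; it shows up as a new drive/device
    ngtcli ssr attach-disk disk-label=scsi.0.1 snapshot-id=12345

    # Copy the files you need from the attached disk, then detach it again
    ngtcli ssr detach-disk attached-disk-label=scsi.0.2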
So, any gaps in the NGT topic? Okay, we'll move on to the next topic, which is Acropolis Block Services. In Nutanix they have designed a scale-out storage solution where each CVM in the cluster can participate in presenting storage, and it allows multiple applications to scale out for high performance. We configure a storage container, and we present storage from that container using the Volume Group option, so we can present Nutanix DSF storage to an external host, or to a VM, as block storage. In this scenario, what we do first is assign a data services IP address to the Nutanix cluster, which becomes the single iSCSI target address. Once I assign that single IP to the cluster as the iSCSI target, I go to my Nutanix container and create a volume group, and inside that volume group I can create multiple virtual disks. So you see here, disks A, B, and C are part of volume group A. On this group A, I map the IQN of host A: once I map the IQN of host A, I have given that host access rights to connect to my Nutanix cluster as an iSCSI target. Then, on the host, I configure the iSCSI initiator settings, point the discovery portal at the data services IP, and connect to the target. The host will then see the virtual disks A, B, and C as three individual disks presented to it, and from there onward I can format the disks, assign drive letters, and start storing data on them.

Here is one more example: I have two disks that I create as part of another volume group, and on that volume group I whitelist the IQN of host B; from host B I configure the iSCSI initiator, and I will see those two disks on the host as independent, block-level storage.

So what are the use cases for Acropolis Block Services? We can use Block Services for a production database tier that is running on a bare-metal server while the application tier is running on the virtual infrastructure. So if the application is running in a VM but the database is running on a physical server, I can give that physical server storage from the Nutanix cluster so that it can utilise the DSF properties and features and store its data in the Nutanix cluster; typically this is the pattern for three-tier applications. I can also use it for dev and test environments, or for investment protection, where I don't want to buy a separate iSCSI or block storage array for my data center: since I already have a Nutanix cluster, I can use the same cluster to serve block storage as well.

One more good thing about Acropolis Block Services is that it does not require multipathing software on the client. We don't need to worry about having MPIO software on the client, because we point the initiator at a single IP address, the data services IP, and the multipath IO functionality is taken care of by Nutanix DSF. I don't need to consider PowerPath, Veritas Storage Foundation, NetApp DSM, or any other third-party MPIO software; I can directly use the operating system's initiator to connect to the single iSCSI target, and the path handling is taken care of by the CVMs.

Let us look at the requirements and supported clients. This feature is still fairly new, so keep in mind that volume groups currently do not support synchronous replication or metro availability, and in a metro availability setup external clients may not be able to fail over to the other site, so if a workload needs those features, Block Services is not the right fit for it today. What I have to do is configure an external data services IP address in the cluster details pane, so that the initiators can connect to that particular IP address, and the default ports 3260 and 3205 are used by the iSCSI protocol to establish connectivity between the initiator and the target. As of now, the supported clients are Windows Server 2012, Windows Server 2008, and Red Hat Enterprise Linux 6.7, but you should always check the most up-to-date compatibility list on the Nutanix site to verify which operating systems are currently supported, for example whether Windows Server 2016 or RHEL 7 is supported, and whether your current Acropolis OS version is supported as well.
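Coming back to the data services IP mentioned above, setting it is a one-line task from any CVM; the address below is just an example.

    # One-time, cluster-wide setting: the iSCSI data services (external) IP address
    ncli cluster edit-params external-data-services-ip-address=10.0.0.21
    ncli cluster info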
Some applications are already validated with Acropolis Block Services: Oracle and Microsoft SQL Server are databases that are already supported, and you can look at the up-to-date list on the Nutanix site to see what new database support has been announced with new releases and new features.

So how do we use Acropolis Block Services? What are the configuration steps? Let us look at the steps for a Windows client. First I go to the Windows host and collect the iSCSI initiator name, the IQN, from the iSCSI Initiator utility. Once I have that, I come to my Nutanix cluster and assign a data services IP address to it. Assigning this IP address is a one-time, single-step task: we assign it once for the whole cluster and it uses only one IP address. Once the IP address is assigned, I create my volume group and add my disks to it. After adding the disks that need to be part of the volume group, I add the IQN of my client to the whitelist so that the client is allowed to connect to the iSCSI target. Then I go back to my Windows client, use the iSCSI Initiator utility to discover the target at the data services IP, connect, and I will be able to see the disks that I added to the volume group.

The same procedure applies to Linux clients as well. I run the Linux iSCSI utility to get the IQN; if I have not yet configured the data services IP address on the Nutanix cluster, I do that (if it is already configured, I can skip this step); then I create a volume group, add the disks to it, and whitelist the IQN of the Linux host. Then I go to the Linux host, discover the iSCSI target, and I can see the targets and disks that are available for accessing the data.
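Put together, the server side and the Linux client side of this workflow might look like the sketch below; the volume group name, container name, disk sizes, IQN, and IP address are placeholders, and the acli option names can vary slightly by AOS version.

    # On a CVM: create the volume group, add two virtual disks, and whitelist the client IQN
    acli vg.create db01-vg
    acli vg.disk_create db01-vg container=default-ctr create_size=200G
    acli vg.disk_create db01-vg container=default-ctr create_size=500G
    acli vg.attach_external db01-vg iqn.1994-05.com.redhat:dbhost01

    # On the Linux client: discover the data services IP and log in to the presented targets
    iscsiadm -m discovery -t sendtargets -p 10.0.0.21:3260
    iscsiadm -m node -l
    lsblk     # the volume group disks now appear as new block devices

On a Windows client the same client-side steps are done through the iSCSI Initiator utility, as described above.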
So, any doubts here? One more feature provided by Nutanix is Acropolis File Services. This is not a built-in feature: you have to download the package and deploy it on your Nutanix cluster. By default, Nutanix deploys three file server VMs in the cluster; the minimum is three, which also matches the recommendation of having at least three nodes in a cluster. Following that recommendation, when I install the Acropolis File Services package from the Nutanix portal as part of a software upgrade, as I showed you yesterday in the Software Upgrade section of the Settings panel, three VMs are deployed, and we configure them to serve file services to the external clients sitting on my network, which is known as the external network. The external network refers to my production LAN, where the users access the file shares. I can have DNS on the external network so that the file server VMs are reachable by host name, and I can integrate with Active Directory so that I can apply Active Directory permissions, group policies, and so on. Whenever users store data on these file server VMs, the VMs in turn talk to the respective CVMs over an internal network, and the CVMs store the data into the storage container utilising the DSF properties.

There are some limitations: you should have sufficient free capacity available on your cluster to configure Acropolis File Services, and you need at least three file server VMs deployed on the Nutanix cluster. Once we have deployed Acropolis File Services, as an administrator we can look at the statistics of the file services: the file server usage and the file server performance. We can create local users, or we can integrate with Active Directory, and then I can create my shares. There are also built-in groups, so I can utilise the built-in or local groups, or, once I integrate with Active Directory, I can utilise the Active Directory groups as well. It also has built-in alerts and events to give us information about the activity happening in Acropolis File Services. When I go to the VMs that are running as file server VMs, I can look at their performance, their disk IO, and their latency, and identify whether these three VMs are keeping up with my users' file sharing requests. If I feel there is a bottleneck, I can increase the resources of the VMs, or I can deploy additional file server VMs in the cluster to enhance performance.

It also provides the features of any full-blown file service, such as quotas: I can create quotas on my Acropolis File Services shares so that I can prevent users from over-utilising the storage space. The administrator can set a default quota; I can set a user-level policy, where I set a quota for a single user (for example, if the administrator allocates only one GB, that user cannot store more than one GB of data under that policy); or I can create a group policy, where I set the size for the group and all the users in the group share that much storage space. I can also set notifications on the quotas: to monitor the file services and the quotas, we can configure email notifications to be sent to the administrator or to the user when they reach their limit or are close to using their maximum capacity. These alerts can also be sent to additional recipients, for example to your team members, so that they can take appropriate action or increase the size of the quota without the user knowing it. When a quota reaches 90% of consumption you can start sending warning emails to the respective user, and when it reaches 100% you can send an alert to the recipients. You can also set a soft limit and a hard limit so that you can take appropriate action: do you want to send the user only a warning message once he reaches the limit, or, once he reaches 100%, do you want to warn him and also give him an additional 10% or 15% of space so that he can continue to use the service? You can also set the alert to repeat every 24 hours until it is acknowledged or resolved, so you can audit that as well. When we come to the policies, I can set a quota policy, specify the consumption limit, and choose an enforcement type at each quota level.
So I can also enforce the quota: for example, once a user has consumed his quota space, a hard enforcement prevents him from writing new data into his share, while a soft enforcement only raises the alert. The quota policies sync with Active Directory, and the file server VMs refresh that information automatically every 24 hours. Also keep in mind that enforcement of a quota policy can begin several minutes after the policy is created; therefore, if a user reaches his quota limit before that interval has completed, the alert will be raised but the quota may not yet be enforced. So there is effectively a grace interval, which gives the user time to take the necessary action, such as moving or cleaning up data. To configure this, we go to the Prism console: once I log into Prism, I go to Home, find the File Server option, click on the appropriate share where I want to add a quota policy, and add the quota policy to that particular share. Any questions in this module?