7. Load Balancing
And in the next lesson, we’re going to do an extensive demo showing you how to set up load balancing in NSX-T 2.4. So let’s start with Layer 4 load balancing, and there are a couple of characteristics that I just want to mention. Load balancing is enabled on a Tier 1 gateway, and that Tier 1 gateway must be configured in active/standby mode. This is only possible on the Tier 1 gateway. Your Tier 0 gateway can be active/active if you want, but the Tier 1 gateway hosting the load balancer must be active/standby. Okay, so how does this work?
Well, we’ve got this server pool that we see down at the bottom of our diagram here: Web 1, Web 2, and Web 3. So we’re going to have a pool of virtual machines, and the requests that come into the load balancer are going to be distributed across these VMs. And the load balancer itself is going to have a VIP. This virtual IP is the IP address for the load balancer itself; it’s the public IP. So as clients send requests to the load balancer (for example, we see three clients here on the left, all of whom are trying to hit a website hosted on the servers in the server pool), they put in the address of the load balancer. The requests eventually hit the Tier 1 service router, where the load balancer is hosted, and the Tier 1 service router distributes those requests across all of the servers in the pool. So at the Layer 4 load balancing level, it’s going to be TCP- or UDP-based.
And this is what you’ll see me set up in the demo in the next lesson. There are different algorithms that we can use to distribute those connections, as the sketch below illustrates. For example, we could have some web servers with a lower number of connections and prefer those for new connections. Or we could simply do a round robin, where we’re essentially just taking turns across these web servers.
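To make those two ideas concrete, here’s a minimal sketch in plain Python of round robin versus least-connections selection. The pool contents and connection counts are made up for illustration:

```python
from itertools import cycle

# Hypothetical pool of back-end web servers and their current connection counts.
pool = {"web1": 12, "web2": 3, "web3": 7}

# Round robin: just take turns, ignoring how busy each server is.
turn = cycle(pool)
def pick_round_robin():
    return next(turn)

# Least connections: prefer the server with the fewest active connections.
def pick_least_connections():
    return min(pool, key=pool.get)

print([pick_round_robin() for _ in range(4)])  # ['web1', 'web2', 'web3', 'web1']
print(pick_least_connections())                # 'web2' (only 3 connections)
```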
And the other key characteristic of a load balancer is availability. So the load balancer in this case really has two purposes. Number one, it’s looking to distribute traffic evenly across multiple servers in a pool, and by doing so, we may get a performance enhancement: we can have a large number of web servers handling all of those tasks. But what happens if one of these web servers goes down? Well, you can build a health check into the load balancer, which will periodically reach out to the pool members, and you can do either active or passive health checks. So let’s assume that we go with an active health check. What the active health check is going to do is initiate some kind of active test against these pool members. Maybe it’s going to send an HTTP request to all three of them on a regular interval, something like the sketch below.
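Here’s a minimal sketch of that active-probe loop. The member addresses, probe URLs, and the 15-second interval are made up for illustration; a real load balancer does this natively:

```python
import time
import urllib.request

# Hypothetical pool members and the URL each one is probed on.
members = {
    "web1": "http://10.0.0.11/",
    "web2": "http://10.0.0.12/",
    "web3": "http://10.0.0.13/",
}
healthy = set(members)

def probe(url, timeout=5):
    """The probe succeeds only if the web server answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

while True:
    for name, url in members.items():
        if probe(url):
            healthy.add(name)      # member is (back) in service
        else:
            healthy.discard(name)  # stop sending new connections to it
    print("in service:", sorted(healthy))
    time.sleep(15)                 # the health-check interval
```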
And what if one of them fails? The load balancer simply stops sending traffic to that particular VM, and all new connections will hit the surviving members of the pool. And then we would assume at that point that high availability will kick in, maybe Web 2 will get rebooted on some other ESXi host, and then we’ll get back up to full capacity. So the purpose of the load balancer is really twofold. Number one, it allows us to scale out, creating many virtual machines and spreading the workload across them. But it also allows us to improve the availability of an application. Now, we can also do Layer 7 load balancing, which is load balancing at the application level. So let’s say in this scenario that I have different web servers handling different aspects of my application. Maybe one of these servers is delivering all of the images or videos, or something like that. Or maybe there are different URLs; maybe I have a URL that’s specific to mobile clients, and I have one of these web servers serving that content up. That’s one of the things we can do with a Layer 7 load balancer: we can have those URL rules and direct certain requests to different virtual machines. Now, with Layer 7 load balancing, there is a bit of a performance hit. It’s not going to work as quickly as Layer 4, because it has to examine the traffic at a deeper level. The other thing that a Layer 7 load balancer can accomplish is SSL offload. So let’s assume that we have incoming HTTPS connections coming in from some client.
They’re hitting my Tier 1 service router, and at that point, the SSL session is terminated. As a result, the traffic between the client and the Tier 1 gateway is encrypted, but from that moment forward, the traffic moves unencrypted from the load balancer to the servers in the server pool. And the purpose of that is to offload the task of decrypting that traffic from the web servers. So that’s what SSL offload gives us, and that’s something that’s also possible with a Layer 7 load balancer. Over the next couple of slides, we’re going to look at a few diagrams from the NSX-T reference design guide. What we see here is inline load balancing. In inline load balancing, you’ve got a load balancer, and it’s sitting in the middle. Here we can see our load balancer with its VIP, and we’ve got traffic flowing through it. So the pool servers are on one side of the load balancer, the clients are on the other side, and traffic is essentially just flowing through it. So here’s our uplink into the Tier 0 gateway.
That’s the side that the clients are on. And then I’ve got a connection to some segment here on the right, and that’s where my pool servers are. So let’s think about this from an IP addressing perspective and work our way through the steps. On the left, we have a client, and let’s say its IP address is 1.2.3.4. So this client has a public IP address. And let’s just say that the VIP of my load balancer is 5.6.7.8. So essentially, I’ve got a client sending a request that hits the VIP of that load balancer. The client request arrives at the load balancer, at which point the load balancer simply sends it over to one of the servers in my pool. We’re not doing a source NAT here; the client IP is actually preserved, and the traffic is just sent to one of these back-end servers. Then the back-end server responds to the client IP, the response is received by the Tier 1 gateway, and the Tier 1 gateway forwards that traffic back to the client, replacing the source IP with the virtual IP of the load balancer.
So the responses that are received by the clients themselves look like they’re coming from the virtual IP of the load balancer. But the key thing here is that there is no source NAT performed on the incoming traffic, and my web servers are actually seeing the true IP addresses of the clients that are generating that traffic. Now let’s compare that to something called one-arm load balancing mode. In this situation, we’ve got an incoming connection request from a client in step one. The incoming connection request from the client comes in, and it hits the Tier 1 gateway, where we have our load balancer VIP configured. So the source IP is the IP address of the client, and the destination is the VIP of the load balancer. At this point, the load balancer performs a source NAT: it pulls out the client IP and inserts itself as the source IP, and then it forwards that request on to one of the servers in the pool. When the server in the pool receives that request, it responds. The source IP is, of course, the IP address of the server in the pool, and it’s sending the response to the NAT IP of the load balancer.
And when the load balancer receives that response, it performs another translation, replaces the server IP with its own VIP, and sends the response on to the original client. So in this scenario, we’re only using a single interface on the load balancer: all of the traffic is coming in through that interface, and it’s being sent to the servers in the pool using that same interface. And with this sort of design, one of the things that we could potentially do is place the one-arm load balancer on the same overlay segment as the server pool, and then the load balancer would only be involved in traffic that actually needs to be load balanced. This is the primary benefit of the one-arm design versus the transparent inline load balancer that we saw before, where all of the requests have to flow through the load balancer. If I put it in this one-arm configuration, only the traffic that needs to be load balanced has to pass through that one-arm Tier 1 load balancer. And one final note that I just want to make here before we move on to the demo: in the case where we’re using a one-arm load balancer, remember, the original client IP is being obscured. We’re doing a source NAT at the load balancer, so the servers in my pool do not see the actual client IP address. You can configure X-Forwarded-For to preserve those original client IP addresses and expose them to the web servers. So if you want the original client IP addresses in your web server logs, you can enable X-Forwarded-For on the load balancer to preserve them.
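To round out that last point, here’s a rough sketch of how a back-end web server might recover the original client IP from the X-Forwarded-For header the load balancer inserts. This uses Python’s standard library as a stand-in for a real web server:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # With source NAT in place, the TCP peer is the load balancer...
        peer_ip = self.client_address[0]
        # ...so the true client IP has to come from X-Forwarded-For.
        client_ip = self.headers.get("X-Forwarded-For", peer_ip).split(",")[0].strip()
        self.send_response(200)
        self.end_headers()
        self.wfile.write(f"peer {peer_ip}, original client {client_ip}\n".encode())

HTTPServer(("", 8080), Handler).serve_forever()
```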
8. Demo: Load Balancing
In this video, I’ll demonstrate how to configure a load balancer in NSX-T 3.0. As you can see, I’ve already logged into the NSX user interface. I’m going to go to my Tier 1 gateways, and I’m going to add a new Tier 1 gateway. In this environment, we only have a Tier 0 gateway; we have a single-tier routing topology. So I’m going to add a Tier 1 gateway as well. I’m going to call my new Tier 1 gateway LB1, and I’m going to associate it with the Tier 0 gateway that was already created as part of this lab environment. Now, load balancing is a function that’s performed by the service router, so I’m going to have to pick an edge cluster for the service routers of my Tier 1 gateway to run on. When we enable services on the Tier 1 gateway, we need service routers, and we need an edge cluster. Then let’s take a short detour to Route Advertisement, where we’re going to select which routes we want advertised to the upstream Tier 0 gateway.
I’m going to disable IPsec, but I am going to advertise all NAT IPs, all load balancer VIP routes, and all load balancer source NAT IPs as well. What I’m really determining here is what types of route entries I want to advertise to the Tier 0 gateway. So if I create load balancer virtual IP routes, I want to advertise those to Tier 0, and of course, we also want to advertise the load balancer source NAT IPs. So I’m just going to go ahead and hit Save here, and I’m going to choose No, I don’t want to configure anything else on this Tier 1 gateway right now. And let’s go back over to the Tier 0 gateway; I’m actually going to edit the Tier 0 gateway. So I’ll click on the little ellipsis here to the left of it, and I’ll choose Edit. I’ve set up the Tier 1 gateway to advertise certain routes to the Tier 0 gateway, and the Tier 0 gateway could potentially redistribute those routes into a routing protocol like BGP. So I’m going to go ahead and change my route redistribution settings here. And it looks like under route redistribution, I already have route redistribution set up for the Tier 1 load balancer VIP and Tier 1 NAT.
Let’s see exactly what’s being redistributed: my load balancer’s virtual IP and my NAT IP. So I’m just going to add one more section of route redistribution here, and I’m just going to call it LB. For route redistribution, I’ll click on Set, and I’m going to choose my load balancer source NAT IP. I’ll apply that, and I’ll go ahead and click on Add and Apply. So now I’ve configured the route redistribution of the Tier 0 gateway to take those routes that it’s learning from Tier 1 and advertise them in a routing protocol like BGP. So now I’m done editing my Tier 0 gateway. Let’s go to the load balancing area under Network Services, and I’m going to add a load balancer here. I’m just going to call my load balancer LB One. I’m going to make the size Small, but depending on what sort of traffic you expect and how many virtual servers you’re dealing with, you could choose one of the other sizes. And I’m going to attach the load balancer to my Tier 1 gateway, which is called LB1. So I’ll just click on Save, and then I’m going to choose No, I do not want to continue configuring this load balancer.
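As a point of reference, the same load balancer service could in principle be created through the NSX-T 3.0 Policy API rather than the UI. This is only a hedged sketch: the manager address and credentials are placeholders, and the path and field names are recalled from the 3.0 Policy API, so verify them against the API guide for your version:

```python
import requests

NSX = "https://nsx-manager.lab.local"  # placeholder manager address
AUTH = ("admin", "VMware1!")           # placeholder credentials

# A small LB service attached to the Tier 1 gateway named LB1.
lb_service = {
    "connectivity_path": "/infra/tier-1s/LB1",
    "size": "SMALL",
}
r = requests.patch(f"{NSX}/policy/api/v1/infra/lb-services/LB1",
                   json=lb_service, auth=AUTH, verify=False)
r.raise_for_status()
```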
So next, I want to set up a monitor. On the right here, we’ll click Monitors, and I am going to add a new HTTP monitor. I’ll just call this Web App Monitor, and I’ll monitor port 443. And by the way, if you’re following along in a hands-on lab environment, you can follow the lab manual here, and it’s going to walk you through a lot of these tasks as well. So don’t just take my word for it; you’re able to complete these labs as well. Anyway, I’m going to change the timeout period to 15 seconds here. So my monitor will monitor on port 443. Next, I’m going to click on Configure next to HTTP Request, and this is how the load balancer is going to ensure the availability of the servers behind it. The load balancer is going to receive incoming traffic and send that traffic to a group of servers. How do we make sure that those servers are still actively working? We’re going to send HTTP requests to them. So rather than doing something simple like a ping, it’s actually going to send an HTTP request to make sure that the web server itself is responding to incoming requests. This is the HTTP URL that we’re going to send the requests to on the back-end servers, and then there’s the HTTP response configuration.
This is what we expect to see back from the servers that are going to be handling these requests. I’m just going to go ahead and click on Apply here, and I’ll click on Save for my Web App Monitor. Now, that’s not the only type of monitor that we can create; there are two types of monitors here: active and passive. Active monitors include things like sending a ping or an HTTP request. We’re actively sending out these requests, and if a server does not respond, we consider that server unhealthy and stop sending traffic to it. Those health checks are going to be performed on all of the pool members, all of the virtual machines that sit behind this load balancer. A passive monitor, on the other hand, watches for failures during client connections. Passive monitors check the actual traffic going through the load balancer, and if they see certain responses, like a TCP reset, the load balancer will consider that a failure. And if there are multiple consecutive failures, the load balancer will consider that pool member to be temporarily down, and it’ll stop sending requests to that particular pool member.
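For reference, the active HTTP monitor we just built might look roughly like this as a Policy API payload. Again a hedged sketch (field names and enum values recalled from the NSX-T 3.0 Policy API; note the HTTP version, which comes up again at the end of this demo):

```python
import requests

NSX = "https://nsx-manager.lab.local"  # placeholder manager address
AUTH = ("admin", "VMware1!")           # placeholder credentials

monitor = {
    "resource_type": "LBHttpMonitorProfile",  # an active HTTP monitor
    "monitor_port": 443,
    "timeout": 15,
    "request_url": "/",
    "request_version": "HTTP_VERSION_1_1",    # version 1.1 matters; see the note at the end of the demo
    "response_status_codes": [200],
}
r = requests.patch(f"{NSX}/policy/api/v1/infra/lb-monitor-profiles/web-app-monitor",
                   json=monitor, auth=AUTH, verify=False)
r.raise_for_status()
```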
You can configure one active monitor per server pool and one passive monitor per server pool. We’re just going to stick with the active monitor in this demonstration, but I did want to make sure that you understood the difference between an active and a passive monitor. So now that we’ve got the monitor set up, let’s create a server pool. The server pool is going to be made up of a group of virtual machines: traffic hitting this load balancer is going to be distributed to the virtual machines in the server pool, and the monitor that we just set up is going to perform health checks on the servers in the pool. Let’s start by clicking on Add Server Pool. I’m going to call it Web Servers One, and then I’ll pick the algorithm here. I’m going to go with round robin. What round robin is basically doing is taking turns. So if I have three servers in my server pool, one connection will go to the first server, then the second server, then the third server, then the first, the second, and the third, and it will continue in this manner indefinitely. It doesn’t take into account the number of connections on each server or any other characteristics of what’s going on with those servers; it simply always takes turns. With weighted round robin, on the other hand, each of the servers in this pool is assigned a weight, and the weight value is an indication of how that server is performing.
So each of the servers in the pool will have a weight that is relative to the other servers in that pool, and in that manner, if one of the servers is performing better than the others, it will be preferred for new connections. Then we have least connections. That one’s pretty obvious: whatever server has the fewest connections is the server that’s going to get the next incoming connection. Least connections only cares about the number of connections; it doesn’t care about any performance attributes. If I select weighted least connections, we will prefer the servers with the fewest connections, but we will also consider the weight of each server, which reflects its performance. And then there’s IP hash load balancing, which just balances load based on the source IP address. For the purposes of this demonstration, we’re going to go with the simplest option here: round robin. I’ll pick my pool members in just a moment, but first let me choose that monitor that I created, called Web App Monitor, and apply it to this server pool. That Web App Monitor is going to be used to monitor all the pool members and take them out of service if they go down. And then you can see here the source NAT translation mode, which by default is set to Auto Map. What’s going to happen here is that we’re going to have a public IP address that could potentially be used for multiple simultaneous connections, and what we want is a bunch of servers behind this load balancer with private IPs.
So we’re going to automatically perform that source NAT for the servers that are in this pool. I could also just disable source NAT completely, or I could define a specific set of IP addresses that are going to be used for those source NAT translations. I’m going to go with the default option here and just choose Auto Map. And then finally, let’s click Select Members and add some members to this pool. I’m going to enter individual members, so I’ll just click on Add Member. Here’s my first member, Web One A; I’ll put in the IP address and the port number, and I’ll go ahead and click on Save here. And let me add a second member, Web Two A, with its IP address. I’ll go ahead and click on Apply, and then I’m back at my server pool, where I’ll click on Save. And now my server pool has been successfully created.
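For reference, the pool we just assembled might look roughly like this as a Policy API payload. Another hedged sketch: the member IP addresses are placeholders (the lab values aren’t shown on screen), and the field names are recalled from the NSX-T 3.0 Policy API:

```python
import requests

NSX = "https://nsx-manager.lab.local"  # placeholder manager address
AUTH = ("admin", "VMware1!")           # placeholder credentials

web_pool = {
    "algorithm": "ROUND_ROBIN",
    # Placeholder member IPs; substitute the real Web One A / Web Two A addresses.
    "members": [
        {"display_name": "web-01a", "ip_address": "192.168.100.11", "port": "443"},
        {"display_name": "web-02a", "ip_address": "192.168.100.12", "port": "443"},
    ],
    # Health checks come from the monitor created earlier.
    "active_monitor_paths": ["/infra/lb-monitor-profiles/web-app-monitor"],
    # Auto Map: source-NAT traffic to the members automatically.
    "snat_translation": {"type": "LBSnatAutoMap"},
}
r = requests.patch(f"{NSX}/policy/api/v1/infra/lb-pools/web-servers-1",
                   json=web_pool, auth=AUTH, verify=False)
r.raise_for_status()
```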
Now I’m going to create the front end of the load balancer, which is the virtual server. So I have a few different types of virtual servers that I can choose from here: Layer 4 TCP, Layer 4 UDP, and Layer 7 HTTP. I’m going to create a Layer 4 TCP virtual server. One thing I forgot to mention before we get there: let’s take a look at these profiles, because I can create application profiles, and the purpose of these application profiles is to improve the performance of load balancing. So here you see I can add an application profile, and the first option is Fast TCP. With either Fast TCP or Fast UDP, we’re not doing anything at Layer 7. We’re not looking at the URL; we’re not load balancing based on that. We’re just load balancing all the TCP or UDP connections that come our way, and that’s going to be the fastest way to load balance, because it doesn’t require anything at the application level. The HTTP profile is used when the load balancer has to take some sort of action based on Layer 7; that’s when we would create an HTTP application profile. Maybe, for example, all of the images are being served up by a specific pool member. Or maybe we’re terminating SSL at the load balancer so that our servers on the back end don’t have to terminate those SSL sessions.
So if we wanted to customise some of this stuff, we could create an application profile prior to creating this virtual server. Now I’m going to create the Layer 4 TCP virtual server. I’m just going to name my virtual server Web App VIP, and here’s the VIP address: 172.16.10.10, with 443 as my port number. So that means any traffic destined for 172.16.10.10 on port 443 is going to hit the load balancer that I selected here, LB One, and as the traffic hits that load balancer, it’s going to be distributed to the server pool that I selected here, my web server pool. I also showed you the application profiles and how we could create a custom application profile. I did not create one, so I’m just going to go with the default here; but if I had created a custom application profile, I could choose it there. And I can also choose an option here for persistence. Persistence can be used to ensure that, based on the source IP address, a user will always get sent to the same server in my pool. I’m not going to enable persistence; I’m just going to go ahead and click on Save. And there we go. Now I’ve created my virtual server for my load balancer.
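And to complete the picture, the virtual server itself might look roughly like this as a Policy API object. Hedged as before; the default application profile path in particular should be checked on your own system:

```python
import requests

NSX = "https://nsx-manager.lab.local"  # placeholder manager address
AUTH = ("admin", "VMware1!")           # placeholder credentials

virtual_server = {
    "ip_address": "172.16.10.10",      # the VIP
    "ports": ["443"],
    "lb_service_path": "/infra/lb-services/LB1",
    "pool_path": "/infra/lb-pools/web-servers-1",
    # Default fast-TCP profile; the exact default ID may differ per system.
    "application_profile_path": "/infra/lb-app-profiles/default-tcp-lb-app-profile",
}
r = requests.patch(f"{NSX}/policy/api/v1/infra/lb-virtual-servers/web-app-vip",
                   json=virtual_server, auth=AUTH, verify=False)
r.raise_for_status()
```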
Now, just one final note regarding this process. My first attempt at this didn’t work correctly, and that was because I overlooked something with the monitor. So really quick, if you’re following along at home, go to the Monitors area here and edit the monitor that we created. Under HTTP Request, change it to HTTP version 1.1. Otherwise, it won’t work; those servers won’t come up as healthy unless you change that HTTP version. So here we are. Now we’ve got this virtual server, and we can see that the status is Success. If I look at my server pools, here’s my Web Servers One pool, and we can see that its status is Success as well. And I can see the members of this group right here: Web Two A and Web One A. So I’m just going to open a new browser tab, and under three-tier apps, we’re going to look at Web App VIP, and we’re going to test this out and see if we can pull up this customer database access.
And it looks like it’s working, and it’s showing us which web server we’re actually receiving this content from. So this one is Web One A, and if I continue hitting it with requests, we should see those requests get spread across Web One A and Web Two A. So not only does load balancing give us a performance benefit by spreading this workload across multiple virtual machines, it also gives us an availability benefit. Let’s take a closer look at that by going to the vSphere Web Client here. What I’m going to do is take the web server that responded to my first request, which was Web One A, so let’s go find that virtual machine and take it down. Moving forward, any subsequent requests should still work; they should hit Web Two A. So let’s power off Web One A. Now that the web server is down, let’s go back to our customer database tab and refresh. Now, it took a few seconds for the load balancer to actually recognise that Web One A was down, but when it did, it served up this request from Web Two A. So the availability of my application has been improved by using this load balancer.
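If you’d rather test from a script than a browser, a quick loop like this makes the round robin alternation and the failover easy to watch. The VIP URL path is hypothetical, and certificate verification is disabled because this is a lab:

```python
import time
import requests

VIP = "https://172.16.10.10/"  # hypothetical URL for the customer database page

for i in range(10):
    try:
        page = requests.get(VIP, verify=False, timeout=5).text
        # The demo app reports which web server answered; pull that line out.
        served_by = next((line for line in page.splitlines() if "web" in line.lower()), "?")
        print(i, served_by)
    except requests.RequestException as exc:
        # Expected briefly while the monitor is still marking Web One A as down.
        print(i, "request failed:", exc)
    time.sleep(1)
```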
9. IPSEC VPN
Let’s talk about IPsec VPN tunnels, starting with the basics. An IPsec VPN is used to secure traffic over untrusted networks like, for example, the Internet. Way back in the old days, we used to use things like point-to-point private circuits, like T1s, and these were very common, but they were very expensive. So one of the benefits of a VPN is that you can have a very cost-effective private connection over a public network. In NSX-T 2.4, the Tier 0 gateway is the only place where IPsec is supported. In NSX-T 2.5 and later, you can also do this on the Tier 1 gateway.
Okay, so let’s break down how this diagram looks. And before I get any further, I just want to mention that you have to configure an active/standby configuration for your Tier 0 gateway. And if you’re on NSX-T 2.5 or later, even if you’re using the Tier 1 gateway, it requires an active/standby configuration. You’ll also see that in the demo coming up ahead. So here’s our diagram. On the left-hand side, we have a host at a local site, and on the right-hand side, we have a host at a remote site. Let’s just assume that these are both ESXi hosts, and maybe they’re both in sites where we have NSX-T 2.4. So we’ve got transport nodes that virtual machines are running on. And although I haven’t really included it in this diagram, we have a Tier 0 service router here, which would actually be running on the edge node; I just tried to keep the diagram as simple as I could. So think of this green box as the entire site, not one particular ESXi host. The same is true for the remote site here.
Think of the green box as actually the entire site. So anyway, I’ve got a virtual machine here at the local site, and that’s running on one of my transport nodes. And I want to secure communications between this VM, which is running on a Layer 2 segment here, and a VM running on a different Layer 2 segment over there. That’s really the purpose of this IPsec VPN tunnel: to give me a way to secure that traffic as it flows over an untrusted network. One of the key characteristics that I want to point out here is that this is not a Layer 2 VPN. Notice that the VM at each location is on a different network: I have one unique network over here and a completely different network over there. And I’m going to configure my VPN to forward interesting traffic over that VPN tunnel. For example, if this VM needs to reach something on the 192.168.10.0 network, then that traffic should be routed over this IPsec VPN tunnel so that it can reach the segment located at the other site. So I’ve got two different segments, each segment local to its particular site, and we’re not stretching Layer 2; we’re not doing anything like that.
We’ve got two different and distinct networks at each location, and all we’re using the VPN for is to secure traffic over the public network. So let’s take a look at a few details. IPsec uses the Internet Key Exchange protocol, or IKE, to negotiate security parameters. We’ll see that when we look at the different IPsec profiles in the next video. And something called ESP (Encapsulating Security Payload) is used to provide secure tunnelling of the payload. What ESP basically does is encapsulate an entire packet, including the headers. So, for example, let’s say that there’s some traffic flowing out of VM 1, and the destination IP is 192.168.1.11. When that traffic hits this IPsec VPN, the entire packet, including the source and destination IPs, is going to be encapsulated and encrypted inside of the ESP payload. And that’s very important, because it prevents things like replay attacks. If an attacker were able to successfully discover the source and destination IPs, they might be able to take advantage of that and launch a replay attack.
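To picture what that encapsulation does, here’s a toy sketch of ESP tunnel mode, with no real cryptography, just the layering: the original packet, inner headers included, becomes the encrypted payload of a brand-new outer packet whose addresses are only the two tunnel endpoints. All addresses are made up:

```python
# Toy model of ESP tunnel-mode encapsulation; illustrative only, no real crypto.
original = {
    "src": "10.1.1.5",       # VM 1 at the local site (made-up address)
    "dst": "192.168.1.11",   # the VM at the remote site
    "payload": "application data",
}

def encrypt(packet):
    # Stand-in for the real ESP encryption (e.g. AES).
    return f"<ciphertext, {len(str(packet))} bytes>"

def esp_encapsulate(packet, local_endpoint, remote_endpoint):
    # The ENTIRE inner packet, headers included, is encrypted, so an
    # observer on the Internet never sees the real source/destination pair.
    return {
        "src": local_endpoint,   # outer header: only the tunnel endpoints show
        "dst": remote_endpoint,
        "esp": encrypt(packet),
    }

print(esp_encapsulate(original, "203.0.113.1", "198.51.100.1"))
```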
Now, as one final side note before we wrap up this video, I just want to mention that when we implement IPsec VPN in NSX-T, it’s being implemented on the service router. It could be the service router of a Tier 0 gateway or a Tier 1 gateway, but regardless, it’s implemented on the service router, and the service router has to be in active/standby high availability mode. So essentially, it has to be active/passive: under normal circumstances, the passive router is not actually handling any of the traffic. One of the service routers is passive, and one of them is active. If something happens, if the primary service router goes down, the secondary service router is essentially going to assume the identity of the primary and take over. And this is not a stateful failover, so in the event that this failure does occur, the VPN connections will need to be reestablished.
10. Demo: IPSEC VPN Configuration
In this video, I’ll show how to configure an IPsec VPN in NSX-T 3.0 using the free Hands-on Labs available at hol.vmware.com. That being said, I’m just going to set up the NSX-T side of this configuration. I don’t actually have an on-premises router that I’m going to use to terminate the IPsec VPN tunnel, but I do want to show you the configuration options here within the NSX-T user interface. So here I am on my Tier 0 gateway. The first thing that I want to point out is that if we take a look at the Tier 0 gateway, notice the High Availability mode.
It’s configured as Active/Active. And if I try to set up my VPN here, let’s just try to add an IPsec VPN: I have to select a Tier 0 gateway, and this one does not appear in the list. That’s because it’s configured as Active/Active. So, returning to the Tier 0 gateways, I’m just going to add a new Tier 0 gateway and call it VPN Demo. Under HA mode, I can choose either Active/Active or Active/Standby. In other words, should we use equal-cost multipathing and have multiple service routers actively passing traffic, or should we have one active and one standby service router? I’m going to choose the Active/Standby option here, and I’m just going to leave all of the rest of the options at their defaults, because I just want to do a very basic demonstration of setting up the VPN configuration. So let’s click on VPN once more; I’m going to go to Add Service and choose IPsec, and the name of my IPsec service will be Rick Demo IPsec.
And notice now that our Tier 0 gateway is being exposed here, because it’s set up as Active/Standby. I’m going to enable the admin status, and there’s also an option for bypass rules. Basically, the bypass rules specify ranges of IP addresses for which the traffic should not get encrypted. I’m not going to add any bypass rules, but I just wanted to point out what those were. So now we’ve created the IPsec configuration, and the next step is to add a route-based IPsec session. Again, we’re going to go to the VPN section here, click on IPsec Sessions, and add a route-based IPsec session. I’m just going to give it the name Rick Demo, pick the VPN service that I’ve just created, and then pick my local endpoint and my remote IP. The local endpoint is going to be a local NSX edge node. Now, I haven’t already created this, so I’m just going to click on the ellipsis here and add a local endpoint. I’ll call it Rick Demo Endpoint and give it a fictitious IP address; those are the only required configuration fields here, so I’ll just click on Save. This is the IP address of the local system that’s terminating the IPsec traffic. I’ll also just make up a fictitious IP address for my physical router, and then I can configure some of the IPsec characteristics here. By default, the authentication mode is a pre-shared key, or PSK: some secret key that I have established is going to be shared between the NSX edge and the remote site, which is probably going to be my physical router.
Or I could use a certificate instead, which would need to be installed on both of those devices. I’ll just pick pre-shared key and type in an example pre-shared key. And then I also need to establish a tunnel interface; each end of the IPsec tunnel is going to need a tunnel interface. I’ll make it 192.168.10.1, and again, I’m just kind of making up these addresses as I go. I just want to demonstrate what the process of configuring this looks like so that you can get an idea of how to configure it in your environment. Now I can put in the remote ID. This should be either the IP address or the FQDN of whatever the peer site is, and if your peer site is using certificate authentication, it has to be the common name or distinguished name of the peer site certificate. And then finally, we’ll click on Profiles. Here are the IKE profiles and IPsec profiles, and you can see we’ve got some default profiles configured. I’m going to click on Save, and then we’ll just quickly go over to the profiles and examine what these look like. So let’s take a look at the Layer 3 VPN IKE profile. This is establishing the IKE version.
So we have the Internet Key Exchange version; AES 128 is the encryption algorithm, and SHA 256 is the digest algorithm. This is basically just the type of encryption that is going to be performed. I could create a new profile here and change things like the encryption algorithm to something like AES 256; maybe I have a regulatory requirement that means I need to do that, or maybe I need different digest algorithm characteristics. These are the profiles that we can establish to determine what the actual encryption methodology is going to be. And that’s as far as we’re going to take our IPsec VPN configuration. I did just want to give you a quick, basic demo of how to set up an IPsec VPN, at least on the NSX-T side. On your own customer premises, this configuration may look different; or, who knows, you may be using NSX at the other end of the IPsec tunnel as well.
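If you did need that AES 256 profile, a custom IKE profile might look roughly like this through the Policy API. This is a loosely hedged sketch: the resource path and enum values are recalled from the NSX-T 3.0 API and should be double-checked in the API guide:

```python
import requests

NSX = "https://nsx-manager.lab.local"  # placeholder manager address
AUTH = ("admin", "VMware1!")           # placeholder credentials

ike_profile = {
    "ike_version": "IKE_V2",
    "encryption_algorithms": ["AES_256"],  # bumped up from the default AES 128
    "digest_algorithms": ["SHA2_256"],
    "dh_groups": ["GROUP14"],
}
r = requests.patch(f"{NSX}/policy/api/v1/infra/ipsec-vpn-ike-profiles/aes256-ike-profile",
                   json=ike_profile, auth=AUTH, verify=False)
r.raise_for_status()
```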
11. Layer 2 VPN
So let’s just start with some of the basic elements of a Layer 2 VPN. The basic purpose here is to stretch a Layer 2 network over a geographic distance. I want to maintain one consistent segment at two different locations, and I want to maintain the same IP addressing scheme at both of those sites. So let’s break down our little diagram here. We’ve got a transport node running at our NSX site, and we’ve got a virtual machine running on that ESXi host transport node, connected to a Layer 2, VNI-backed segment. Now, it doesn’t have to be a VNI-backed segment; a VLAN-backed segment is supported as well. But let’s just stick with VNI-backed segments for the moment. And I’ve deployed either a Tier 1 or a Tier 0 service router.
Either one of those options will support this Layer 2 VPN configuration. So, of course, my Layer 2 segment provides connectivity for my virtual machines to send traffic to that service router. And this is a very simplified diagram; we’re going to have inter-tier transit links and other pieces that are part of that, but I just want to give you a real basic summary, because we’ve already covered all of that ground. So the Layer 2 segment has a connection to that Tier 1 or Tier 0 service router, and using that service router, we’ve also got a Layer 2 VPN tunnel established. The service router is going to act as either a Layer 2 VPN server or a Layer 2 VPN client, and it’s going to have an established VPN connection to a router at another location. This essentially makes it appear as if this VM and the machines over here (in this case, in the public cloud) are connected to the same Layer 2 segment. If VM 1 sends out a broadcast, such as an ARP request, that broadcast will travel through this Layer 2 VPN tunnel and hit anything on the other side of the VPN.
So it’s really like taking that Ethernet segment and stretching it out to another physical location. Let’s take a few minutes and talk about some of the details. When you establish a Layer 2 VPN, both sites are within the same Layer 2 broadcast domain. I mentioned ARP requests; things like DHCP Discover messages are also broadcast, and those broadcasts will flood both sides of the Layer 2 VPN. So if I have a Layer 2 network that spans two sites, my IP addressing scheme is going to be the same at both sites, and that’s going to allow the movement of virtual machines between those sites: if I move a VM from one side of the VPN to the other, the IP address of that virtual machine does not need to change. Like I mentioned on the previous slide, Layer 2 VPN is compatible with either VNI-backed or VLAN-backed networks. And as a matter of fact, you could have a VNI at one end of the Layer 2 VPN and a VLAN at the other end. So you can extend a VLAN to a VNI or vice versa, and that’s really handy, especially if you’re doing a data centre migration, which we’ll talk about in just a moment.
VLAN trunking is also supported. So if you have multiple VLANs that you want to bridge simultaneously, you can utilise VLAN trunking and establish a trunk port over the Layer 2 VPN. And it’s ideal for data centre migration and disaster recovery. So let’s start by talking about data centre migration. In the case of data centre migration, let’s say I’ve got this VM and I want to move it from my on-premises environment to a public cloud environment, and I’ve got a Layer 2 VPN established. What I can do then is just migrate this virtual machine, and assuming that both sides of this are VMware-based, I could vMotion this VM from the on-premises environment right into my public cloud environment. So that’s one really great thing about having a Layer 2 VPN: a consistent addressing scheme. Number one, I’m not having to re-address my virtual machines as I migrate them; and number two, a consistent addressing scheme gives me the flexibility to gradually move virtual machines from one location to another. Another great use case for this is disaster recovery. So let’s assume that, in the event of a disaster, maybe I’m using VMware Site Recovery Manager, or maybe I’ve just built my own homegrown solution.
It really doesn’t matter too much in either case. Let’s say I have a bunch of virtual machines running at one location and need disaster recovery to another. Assume the primary location fails and everything here at my site goes down; I want the ability to boot up VM 1 at my other location. If I don’t have to change the IP address of VM 1, that simplifies this process. If I don’t have to update all of the DNS records for my virtual machines to indicate that they now have new IP addresses, that simplifies my process. So having a Layer 2 VPN between our protected site and our recovery site cuts down on the complexity of our disaster recovery response. And as I mentioned earlier, Layer 2 VPN is supported on either a Tier 0 or a Tier 1 gateway. And finally, a gateway can be either a client or a server; it cannot be both. In a Layer 2 VPN connection, one end is always going to be the client, and one end is always going to be the server. When you establish a Layer 2 VPN, you will set up your Tier 1 or Tier 0 gateway as either a Layer 2 VPN client or a Layer 2 VPN server.