106. Auto Scaling – Scaling Down Operations
Hey everyone, and thanks for joining in. Now, in the previous lecture, we verified that the scale-up policy works correctly. In today’s lecture, we will check whether the scale-down policy is also working properly. Two new instances were added, and each of them is already serving the page if you simply open its IP address in the browser. Let me do that for the third instance as well. Perfect. So all three instances are up and running now.
Now, if you look at the monitoring graphs, CPU utilization was at 100%, and I closed the terminal where the dd command was generating load. The CPU utilization has now dropped from 100% to 0%. The question is: why did the group not scale down? We made one mistake in the decrease (scale-down) policy. If you look, the action is set to add zero instances when CPU utilization is less than or equal to 20%. It should actually be a Remove operation. So let me click on Edit, select Remove, set it to remove two instances, and click on Save. Perfect. Now we have proper policies for both increases and decreases, and within a few moments you will see auto scaling come into the picture and terminate the two instances that were launched. One very important thing that I would strongly advise is to always test your auto scaling policies properly when you configure them for the applications in your environment.
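As a sketch, the corrected scale-down policy could also be expressed with the AWS CLI. The group and policy names below are illustrative, not taken from the lecture:

```shell
# Hypothetical names; a simple scaling policy that removes two instances
# when its associated CloudWatch alarm (CPU utilization <= 20%) fires.
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name my-asg \
  --policy-name scale-down-policy \
  --adjustment-type ChangeInCapacity \
  --scaling-adjustment=-2
```

The negative `--scaling-adjustment` is what makes this a Remove operation; a positive value would add capacity instead.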
We had one use case where a production server went down due to increasing load. It was configured with auto scaling, but due to a misconfiguration in the CloudWatch alarm settings, it did not scale up. This is why you should always verify that the policies work whenever you set up auto scaling. And now, if you look, both of the instances that were created are shutting down. One more thing I want to emphasize: when we created the auto scaling group, we chose three subnets, and when three instances are launched, auto scaling is smart enough to spread them across the subnets, one per availability zone. So you have us-east-1a, us-east-1b, and us-east-1c. This is what an auto scaling group does automatically. So that’s the gist of the scale-down policy. I hope this has been informative for you, and again, I really encourage you to practice this once so that you have a good understanding of how auto scaling works. Thank you for viewing.
107. Auto Scaling Plans
108. Overview of Auto-Scaling LifeCycle Hooks
Hey everyone, and welcome back. In today’s video, we will be discussing auto-scaling lifecycle hooks. In formal terms, a lifecycle hook lets us take control during the instance launch and termination states of an auto-scaling group. That definition can be a little perplexing, so let’s understand it with a simple use case. Say you have an auto-scaling group, and within that group you have instances that scale out and scale back in depending on the scaling policies you have. For example, you have a scaling policy where, if CPU utilization is greater than 70%, a new instance is launched.
Now, over the past minute or two, the average CPU utilization has gone down, which means the scaled-out instances will soon be terminated. That is the first point. Typically, before termination, you’d want to back up all of that EC2 instance’s logs to Amazon S3 and also run some deregistration scripts so that the instance deregisters itself from any centralized services. In enterprises, a single EC2 instance may be registered with central services such as AD, Spacewalk, Nessus, and so on. So typically, once the instance is terminated, you want it to be deregistered from those services. That is the second step. The problem used to be that even after the instance was terminated, it would still show up in the AD console, the Spacewalk console, and so on.
It then becomes difficult to tell, from the centralized services console, which instances are still running and which have been terminated. So that is the second important requirement. The third important requirement is that the instance should only be terminated once the second step is completed. A lifecycle hook lets us achieve exactly this use case. This is just one use case you can implement; there are several others, but we’ll take this simple one for our demo. Now, if you look at the overall lifecycle diagram (taken from the AWS documentation): you have an auto-scaling group, and a scale-out event happens.
During the scale-out event, a new EC2 instance gets launched. Initially it is in the Pending state, and then it goes into the InService state. For the moment, ignore the lifecycle hooks in the diagram. There are two lifecycle hook points in it, one on each side: the upper hook fires when a new EC2 instance gets launched, and the lower one fires when an EC2 instance gets terminated. So, starting from the beginning without lifecycle hooks: a scale-out event happens, a new EC2 instance is launched, it goes into the Pending state, and after Pending it goes into the InService state, which means everything is up and running.
After that, a scale-in event happens, which indicates that CPU utilization, or the overall load, has decreased. The instance then enters the Terminating state, and from Terminating it moves to Terminated. Now, the use case we discussed is associated with the terminating side of the lifecycle. The EC2 instance is in service; scale-in has happened, so it is going to be terminated. If you have a lifecycle hook, the instance instead goes to the Terminating:Wait state. If you do not have a lifecycle hook, the instance is terminated immediately. Since we want a solution where all the logs are backed up and the deregistration scripts run first, we need a lifecycle hook here.
So let’s say we have a lifecycle hook on termination, which puts the instance into Terminating:Wait. The EC2 instance will not get terminated while it is in this state; only after this stage completes and the instance moves to Terminating:Proceed will it actually be terminated. While it waits, you can run scripts that back up the logs and run the deregistration steps. Once your automation scripts have completed, the instance can proceed to the Terminating:Proceed stage and be terminated. Similarly, if you want to run certain scripts before the instance goes into service, you can have a lifecycle hook on launch, which puts the instance into Pending:Wait, where you can run your automation; once completed, it moves through Pending:Proceed before entering service. So, that is the high-level overview. Let me show you a screenshot of what one of these stages looks like in real life. In this screenshot, the instance is awaiting termination: scale-in has occurred and the instance is sitting in the Terminating:Wait state.
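A sketch of what the Terminating:Wait automation might look like. The hook name, group name, bucket, log path, and deregistration script are all assumptions for illustration, not values from the lecture:

```shell
#!/bin/bash
# Runs while the instance sits in Terminating:Wait (all names are assumptions).
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)

# 1. Back up this instance's logs to S3.
aws s3 cp /var/log/app/ "s3://my-log-bucket/${INSTANCE_ID}/" --recursive

# 2. Deregister from centralized services (placeholder script).
/opt/scripts/deregister.sh "${INSTANCE_ID}"

# 3. Tell Auto Scaling it may now move the instance to Terminating:Proceed.
aws autoscaling complete-lifecycle-action \
  --lifecycle-hook-name my-terminate-hook \
  --auto-scaling-group-name my-asg \
  --lifecycle-action-result CONTINUE \
  --instance-id "${INSTANCE_ID}"
```

The instance needs an IAM role permitting `autoscaling:CompleteLifecycleAction` (and S3 write access) for this to work.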
Now, if you look at this screenshot, it states that we are waiting for a Terminate Lifecycle Action, and it shows the ID of the EC2 instance that is terminating. The instance has not yet been terminated because it is still in the Terminating:Wait state. At this stage, you decide which automation scripts to run: for example, deregistration scripts, or removing the DNS record associated with the instance before it gets terminated. Once all of your automation has completed, one of two things happens: either a timeout expires, say after 1 hour, and the instance automatically moves to the Terminating:Proceed stage, or you send a specific API call saying that all the automation for this stage has completed and the instance can now be terminated. All right? So this is the first stage.
In the second stage we run a CLI command. The command is essentially stating: all my automation scripts have executed successfully, all the logs have been backed up, and all the deregistration scripts have completed; you can now exit the Terminating:Wait stage and proceed with terminating the EC2 instance. If you look at the CLI command, it sets the lifecycle action result to CONTINUE and specifies the instance ID, the one ending in 753. And if you look at the screenshot, that is the instance scheduled to be terminated. So once your automation script has completed, you tell it to continue; after that, the instance moves to Terminating:Proceed and is terminated.
You can see in the third stage that once you have given the go-ahead to continue, the auto-scaling group goes ahead and terminates the EC2 instance. That is essentially what lifecycle hooks are all about. The hook we discussed is for termination; there is also another that is used when the instance is launched. This is something to remember for the exams, because there might be a question related to it. Now let me quickly show you how this looks in the console, because things are boring without practicals. Here is my EC2 instance, and if you look at its details, it is actually part of an auto-scaling group. So let’s go to the auto-scaling group.
Now, within this auto-scaling group, you have one instance: one is the desired capacity, one is the minimum, and two is the maximum. So let’s edit this group, set the desired capacity to zero, set the minimum to zero as well, and click on Save. Now, if you open up the info box, it states that instances may be in the process of launching or terminating in order to match the desired capacity. Since the desired capacity changed from one to zero, the EC2 instance will enter the terminating stage. And as the diagram shows, during the terminating stage, if you have a lifecycle hook, the instance goes to the Terminating:Wait state. In my case, I do have a lifecycle hook, and because of it the instance state currently shows Terminating:Wait. It has not yet been terminated.
So let’s quickly verify that this holds true. As you can see, the instance has not yet been terminated. Now, if you wonder where the lifecycle hooks are, click on the Lifecycle Hooks tab; there is one simple lifecycle hook created there. If I click on Create Lifecycle Hook, there are two hook types available: one for instance launch, and the second for instance terminate. There is also the heartbeat timeout: the hook will keep the instance in the Terminating:Wait state for that many seconds, and after the timeout completes, or after you send the API call, the instance comes out of Terminating:Wait and is terminated. Now, if you look at the activity history here, let me just collapse the information above because it is shrinking the screen. Great, now it is much more visible. Currently it says that it is waiting for the Terminate Lifecycle Action for this specific instance ID, the one ending in 1fb.
This instance is scheduled to be terminated, but it is not yet terminated because a Terminating:Wait lifecycle action is ongoing. Let’s assume that all of our automation scripts have executed successfully and we are ready to terminate the instance, because you really do not want large instances running for the next hour after your automation has completed; you want them terminated. To do that, we’ll have to make an API call. Let’s try it out. I’m in the CLI on my EC2 instance, and I’ll copy one of the CLI commands. Looking at what the command does: it calls aws autoscaling complete-lifecycle-action; the lifecycle hook name is Sample Terminate Hook, which is the hook you want to complete; then you have the auto-scaling group name, which is the group this hook is associated with; the lifecycle action result is CONTINUE; and then you have to specify the instance ID.
So let’s quickly replace the instance ID: copy the instance ID, and you also have the region. After that, you can run the command. Everything succeeds, and you can see the instance reporting that the system is going down for a power-off. And if you look at the instances and refresh, you will see that the lifecycle state has changed from Terminating:Wait to Terminating:Proceed, and within a minute or two this specific instance will be terminated. Currently it is shutting down, and then it will be terminated. So I hope you now understand what auto-scaling lifecycle hooks are all about. Again, this is quite important, particularly if you run auto scaling in an enterprise and need to do a lot of deregistration. In fact, in the organization where I’ve been working, we made extensive use of the Terminating:Wait state to deregister instances from the various monitoring services we had. Anyway, that’s the high-level overview. I hope this video has been informative for you, and I look forward to seeing you in the next video.
109. Creating our first LifeCycle hook in ASG
Hey everyone, and welcome back. Now, in the previous video, we got a high-level overview as well as a small demonstration of how auto-scaling lifecycle hooks look. In today’s video, we’ll start from scratch: we’ll create a lifecycle hook and then look into how exactly it works. So I’m in my EC2 console. Let’s do one thing.
Let’s go to the auto-scaling section and create a new auto-scaling group. I’ll click on Get Started, which takes me to the launch configuration. I’ll select the Amazon Linux AMI, and for the instance type we’ll use t2.micro so that it comes under the free tier. For the name, I’ll call it the kplabs-autoscale configuration. Now, the IAM role here is quite important. If you look, I already have an IAM role; let me quickly show you what it is all about. I’ll go to Roles and select the sample ASG role. For the time being, I have just attached the Auto Scaling full access policy to it. This permission is required because the instance will send an API call to Auto Scaling once the automation script has completed. In general, this API permission should be granted to the EC2 instance, primarily via an IAM role, and this is the reason the IAM role was created.
There is a very basic EC2 instance role available, so I’ll just select this role and click on Add Storage; eight GB is fine. We can skip ahead to the review, and I’ll create the launch configuration: I’ll select the key pair and click on Create Launch Configuration. Once the launch configuration has been created, the next thing is the auto-scaling group. Let’s call it kplabs-asg. For the group size, one instance is quite enough for our demo, and we’ll select just one subnet. For the scaling policy, we’ll say keep the group at its initial size. We’ll skip the notifications, do a review, and create the auto-scaling group. So the auto-scaling group has been created, and you have one instance as the desired capacity. Let me just collapse this information. So you have one instance, as desired.
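The console steps above could be sketched with the CLI as well. The AMI ID, subnet ID, and key name below are placeholders, not values from the lecture:

```shell
# Hypothetical CLI equivalent of the console walkthrough above.
aws autoscaling create-launch-configuration \
  --launch-configuration-name kplabs-autoscale \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.micro \
  --iam-instance-profile sample-asg-role \
  --key-name my-key

aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name kplabs-asg \
  --launch-configuration-name kplabs-autoscale \
  --min-size 1 --max-size 1 --desired-capacity 1 \
  --vpc-zone-identifier subnet-0123456789abcdef0
```

Note that `--iam-instance-profile` is what grants the instance the Auto Scaling API permission discussed above.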
So a new instance is launched, and you can see its lifecycle state is currently Pending. Whenever you scale out or create a new auto-scaling group, you’ll see the lifecycle state go through Pending first. Along with that, let’s go to the Instances view: a new instance has been created, and this instance has the IAM role called Sample ASG role attached. Let’s quickly wait for a moment. All right, our EC2 instance state is now running, and returning to the auto-scaling console, you can see the lifecycle state is now InService. So, from the auto-scaling group’s point of view, we have seen Pending and InService. You can attach a lifecycle hook at launch, and you can also attach one at the termination stage, depending on your use case; we’ll attach it at the termination stage. So let’s go to the auto-scaling section of the EC2 console. You now have a tab for lifecycle hooks, and there are no lifecycle hooks configured. That means that if I changed the desired capacity to zero, the EC2 instance would be terminated immediately, and we do not want that.
Let’s click on Create Lifecycle Hook. I’ll name this hook sample-termination-hook. For the lifecycle transition, you choose whether the hook fires while the instance is pending (launch) or while it is getting terminated; those are the two lifecycle places. Here I’ll choose instance terminate, keep the default heartbeat timeout, set the default result to CONTINUE, and press the Create button. Great, it has been created successfully. Now, let’s see what happens when you change the desired capacity; we’ll do it now and understand it better. I’ll click on Edit, change the desired capacity to zero, and click on Save. Oops, we’ll also have to change the minimum; then I’ll click on Save. All right, it now says that instances may be in the process of launching or terminating, which is fine. Our instance is currently in service, so now it will be terminating. Since we have the lifecycle hook, it will go to the Terminating:Wait stage. Let’s go to the instances, and you can see the lifecycle state is Terminating:Wait.
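For reference, the same hook could be created with the CLI instead of the console. The timeout value and group name are illustrative:

```shell
# Hypothetical: create a termination lifecycle hook with a 300-second
# heartbeat timeout; --default-result decides what happens if the timeout
# expires before any API call (CONTINUE proceeds with termination).
aws autoscaling put-lifecycle-hook \
  --lifecycle-hook-name sample-termination-hook \
  --auto-scaling-group-name kplabs-asg \
  --lifecycle-transition autoscaling:EC2_INSTANCE_TERMINATING \
  --heartbeat-timeout 300 \
  --default-result CONTINUE
```

For a launch-side hook, the transition would be `autoscaling:EC2_INSTANCE_LAUNCHING` instead.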
Now, the question is: how long will my instance stay in this specific stage? We already discussed that once the instance is in this stage, our automation script can make an API call after it has completed all the deregistration and so on; that CONTINUE call lets the EC2 instance come out of this stage and go ahead and be terminated. In case our automation script does not make that call, you also have the timeout: after the timeout, the EC2 instance is terminated automatically. All right? So that is the purpose of this specific configuration. Now, let’s do one thing. I’ll copy the IP address of my EC2 instance, and I am now connected to that instance. I have one sample command here, so let’s look at what it does. First, it calls aws autoscaling complete-lifecycle-action. Our lifecycle action is in the terminating phase, and we want to finish it so that the instance can proceed and be terminated. Next, you have to specify the lifecycle hook name; in our case, that is sample-termination-hook.
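If the automation needs more time than the heartbeat timeout allows, Auto Scaling also offers a way to extend the wait, sketched below with the same illustrative names; the instance ID is a placeholder:

```shell
# Hypothetical: reset the heartbeat timer to keep the instance in
# Terminating:Wait while long-running automation finishes.
aws autoscaling record-lifecycle-action-heartbeat \
  --lifecycle-hook-name sample-termination-hook \
  --auto-scaling-group-name kplabs-asg \
  --instance-id i-0123456789abcdef0 \
  --region us-west-2
```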
So let’s copy this specific command, let me clear the screen, and let’s go ahead and start replacing each of these values. First, I’ll replace the name of the hook. All right, the next argument is the name of the auto scaling group—you have to specify the auto scaling group in which your lifecycle hook was created. In my case, the auto scaling group name is Kplabsg, so let’s go ahead and edit that as well. Then there is the lifecycle action result, which is CONTINUE over here, and the last part is the EC2 instance ID, where you will have to specify the instance ID. Let me copy this: I’ll go back to the EC2 console and copy the instance ID. There is also a way of using lifecycle action tokens here, but for simplicity we’ll be using the instance ID. And the last one is the region name; here it is us-east-1, but since we are in Oregon, let’s go ahead and replace this with us-west-2. All right, before we run it, I would just like to show you something. If you look into the activity history here, you will see that it states it is waiting for the instance lifecycle action to complete before terminating, and within the instances, the lifecycle state is Terminating:Wait. And here the EC2 instance is still running. So let’s go ahead and press Enter.
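Put together, the command being assembled here looks like the following sketch (the hook name, group name, and instance ID are placeholders—substitute the values copied from your own console):

```shell
# Tell the auto scaling group that the hook's work is done, so the
# instance can move from Terminating:Wait to Terminating:Proceed.
aws autoscaling complete-lifecycle-action \
    --lifecycle-hook-name sample-termination-hook \
    --auto-scaling-group-name my-asg \
    --lifecycle-action-result CONTINUE \
    --instance-id i-0123456789abcdef0 \
    --region us-west-2
```

Instead of `--instance-id`, you can pass `--lifecycle-action-token`, the token delivered with the lifecycle notification, as mentioned in the lecture.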
All right, so it is giving an error. Within the error, it says that the argument instance-id expected one argument—and yes, we forgot to remove this specific part, which is why there are two arguments over here. So let’s quickly remove this. This is our instance ID, and this is the part we had to remove earlier. All right, let’s press Enter now. Great. So our command has been executed successfully, and as soon as it was executed, you see Linux sending a broadcast message that the system is going down to the power-off state. And now if you just click on Refresh, you’ll see the instance state is shutting down, and the same thing will be reflected over here, because currently the lifecycle state is Terminating:Wait. I’m sure you can guess what the next lifecycle state will be: it’s Terminating:Proceed. If you just click on Refresh, it will move to Terminating:Proceed, and the last one will be Terminated once the EC2 instance has terminated. So this is the high-level overview of the auto scaling lifecycle hooks. I hope this video has been informative for you, and I look forward to seeing you in the next video.
110. Overview of AWS OpsWorks
Hey everyone, and welcome back to the Knowledge Portal video series. In today’s lecture, we’ll be speaking about OpsWorks, and we’ll look at a very high-level overview of what exactly this service is all about. So AWS OpsWorks is basically a configuration management service that provides managed instances of Chef and Puppet. Now, in an earlier lecture, we already discussed what configuration management tools are all about. In a configuration management tool, we write a small script, and the format of that script depends on the tool: for Ansible, the script needs to be in YAML; for Chef, it is in a different format; for Puppet, it is in a different format again. So depending on the configuration management tool that you will be using, the way you write the script will differ. Now, the integration of EC2 with a configuration management tool brings up a lot of great possibilities for how servers can be configured, deployed, and managed.
Nowadays, most organisations use some kind of configuration management tool in a production environment, such as Ansible, Puppet, Chef, and so on. However, these tools do not have a native integration with EC2 by default: generally, when an instance is launched, we manually install Ansible, and once Ansible is installed, we can deploy the Ansible-related roles. What happens in AWS OpsWorks is that the instance that gets launched comes with the OpsWorks agent—or, as I would say, with the Chef agent, if you are using Chef as the configuration management tool. We’ll be looking into how exactly that works. So, consider the following use case: while an EC2 instance is being launched, we want to install certain packages such as NGINX, PHP-FPM, and MySQL, as well as upload a custom SSH configuration file, followed by an SSH server restart.
Now this is a very simple use case because, generally, let’s assume you are launching a new EC2 instance. If you want to run Chef on that EC2 instance, you have to install a Chef agent there. In order to install the Chef agent, you might do it via some bash scripting, or you would log in and install it manually. Once the Chef agent is installed, you can go ahead and deploy the Chef recipes. However, any instance that gets launched via AWS OpsWorks, if the stack is configured with Chef, gets the Chef agent installed automatically. So there are a few concepts in OpsWorks—stacks and layers—that we have to understand. When you see this overall diagram, this is called a stack, and within the stack there are multiple related entities, and these related entities are called layers. So this entire diagram is called a stack, and within it you have multiple layers. A layer contains a logical grouping of similar instances: you have an ELB layer, you have the PHP application layer, and you have the database layer. And when all of these things are combined, you have an OpsWorks stack. So, when it comes to stacks and layers, you can see that you have an OpsWorks stack over here with three layers: the Elastic Load Balancer layer, the application server layer, and the database layer.
Now, for each of the layers, you can define a custom Chef recipe that performs the custom tasks you have defined. For the application layer, you will have a cookbook repository, which would install NGINX, install PHP, and do some other things. For the database layer, you may want to install MySQL and upload some custom MySQL configuration files so that the database layer is configured accordingly. So you see the cookbook repository getting integrated with the app layer; it also gets integrated with the database layer. And the configuration management will perform different tasks for each layer. So let’s do one thing: let’s go ahead and open up OpsWorks so that it will become much clearer. The first thing we’re supposed to do in OpsWorks is build a stack, so I’ll click on adding a stack. There are options over here: you have one stack type for Chef 11 and one for Chef 12. However, for our purposes, we’ll create a sample stack, which is also based on Chef—you see, this is a Chef 12 sample stack.
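The console steps above also have CLI equivalents. This is a rough sketch assuming the OpsWorks service role and instance profile already exist; every ARN, ID, and name here is a placeholder:

```shell
# Create an OpsWorks (Chef 12) stack, then add a custom layer to it.
# ARNs and names are placeholders -- substitute your own.
STACK_ID=$(aws opsworks create-stack \
    --name sample-stack \
    --stack-region us-west-2 \
    --service-role-arn arn:aws:iam::111122223333:role/aws-opsworks-service-role \
    --default-instance-profile-arn arn:aws:iam::111122223333:instance-profile/aws-opsworks-ec2-role \
    --configuration-manager Name=Chef,Version=12 \
    --query StackId --output text)

# A custom layer; its Chef recipes come from the cookbook repository
# configured on the stack/layer.
aws opsworks create-layer \
    --stack-id "$STACK_ID" \
    --type custom \
    --name "Node.js App Server" \
    --shortname nodejs-app
```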
So, basically, this stack will have one layer—the application layer—which configures the Node.js application. So let’s go ahead and create this stack. First the stack gets created, and then you have to set up a Chef cookbook repository. Let me show you what I mean by this. So this is my sample stack, based on Linux. And if you look into the layers, there is one layer, which is the Node.js application layer. If I go inside, this Node.js application layer has a custom Chef recipe. So if you go into the recipe, you will see that there is a repository URL. There is also a Node.js demo app within the deploy settings. So what this will do is, once an EC2 instance gets launched based on this specific layer, the EC2 instance will get configured according to the Chef recipe that is set up here. Now, a Chef recipe is nothing but a definition of the things that you want to do once your instance gets launched. Do you want to install NGINX?
Do you want to update some packages, like what we did in Ansible? Do you want to update some SSH configuration files? Anything that you want to do, you create a Chef recipe for it, and you put the Chef recipe URL within the application layer. Once the layer is configured, any instance launched within it will be set up based on the Chef recipe that was created. So let me just start this EC2 instance. Once this EC2 instance starts, it will have the Chef agent installed by default, and that Chef agent will pull the Chef recipe and deploy all the things that are mentioned in it. So let’s do one thing: let me click over here, and let’s go to the EC2 instance. So basically, the EC2 instance is booting up. Once the boot-up is complete, the Chef agent will pull the recipe that is mentioned in this Node.js application layer, and it will perform all the things that are mentioned over here. In this case, it will just install the Node.js application and start the server. So let’s wait for a while—let me just wait for the initialization to complete.
And meanwhile, I’ll just show you a few things. You can definitely add your own layer as well. So I can add a layer. Now, this layer can be based on OpsWorks—this will have all the Chef-related recipes that you can configure. You also have an RDS layer, and you have an ECS layer if you want to manage Docker-related containers. So let me call this one “PHP application,” and the short name will be “php.” I’ll click on “Add layer.” Now you can see there are two layers. The recipe present in this PHP application layer will do only PHP-related things: let’s assume that the recipe we mention over here installs the PHP-related packages, and the recipe we mentioned in the Node.js application layer installs the Node.js-related packages. Whenever you launch an instance, you can launch it in one of these layers. By default, you see that there is an instance that has already been launched in the Node.js app layer. So let’s open up this EC2 instance and try it out. Okay, so let’s just wait for a while and verify the security group. Okay, so it is getting initialized. You can also launch an instance in the PHP app layer: you can click on “Add instance,” and you can add one more instance over here.
So you have to specify the instance size and the subnet, and you can even specify various things related to the SSH key and so on. Currently, a good number of operating systems are supported. Earlier there was just Amazon Linux support, but now you have Amazon Linux and CentOS as well as Ubuntu, with the option of launching with a custom AMI. So let’s click on the Node.js app instance and try it out. As you can see, this is the EC2 instance, which is pre-configured with the Node.js application and a website. So now, whenever you launch another instance based on this layer, it will have the exact same setup, because the setup is configured in the Chef recipe that is linked over here. You can definitely put in your own Chef recipes: you can change the repository URL and point to your own Chef recipe that performs certain tasks. So I hope you understood the basics of what a stack is and what layers are. One thing to keep in mind is that if you launch an EC2 instance in the PHP layer, it will boot up with only the recipes that you have configured for the PHP layer. Similarly, if you launch an EC2 instance in the Node.js app layer, it will take the recipes that are configured in that layer. And this entire setup together becomes a stack. So those are the fundamentals of OpsWorks.
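For reference, adding and starting an instance in a layer can also be done from the CLI. A sketch with placeholder stack and layer IDs:

```shell
# Add a new instance to an existing layer, then boot it; during setup
# the OpsWorks agent applies the layer's Chef recipes automatically.
# Stack and layer IDs are placeholders.
INSTANCE_ID=$(aws opsworks create-instance \
    --stack-id 12345678-1234-1234-1234-123456789012 \
    --layer-ids 87654321-4321-4321-4321-210987654321 \
    --instance-type t2.micro \
    --query InstanceId --output text)

aws opsworks start-instance --instance-id "$INSTANCE_ID"
```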
I hope you now have a very high-level understanding of how AWS OpsWorks works. One thing that I would share as a personal opinion: I have been working with a lot of clients, and OpsWorks is quite good, but there are great alternatives to it, which include Terraform as well as Ansible. When you use Terraform together with Ansible, you get something similar in functionality, and this is something I would strongly advise you to try. So I’ll just type in Terraform. Terraform is a great infrastructure-as-code tool, and it integrates well with Ansible. So this is something that you should try out. We already have plans to launch a course on it, and this is a very good alternative that I have seen a lot of big organisations using quite effectively. Anyway, this was just a side note—it will not be covered in the exam, but questions about OpsWorks may appear. So try it out and check whether you find it useful enough to be used in your organisation. I hope this has been useful for you, and I look forward to seeing you in the next lecture.