37. EC2 Key-Pair Troubleshooting
Hey everyone, and welcome back. In today’s video, we will be discussing the EC2 key pair. Now, you might be wondering why we are discussing such a simple topic at the specialty level of certification. The answer is that you might get certain questions related to this topic that can be confusing, just as they have confused a lot of experienced people as well. So let’s go ahead and understand more about this topic. Now, this is something that I’m sure all of you know: whenever we create an EC2 instance, we generally specify the associated key pair.
So if you see over here, I am specifying the associated key pair kplabs-new, and then we can go ahead and launch an instance. Behind the scenes, AWS will take the public key associated with this key pair and store it in the authorized_keys file on the instance. Now, as far as the exams are concerned, there are two specific confusing questions that might be asked, so you should be prepared for those. These are mostly related to troubleshooting. For example, you might be asked: assume that you have created an instance with the kplabs-new key pair. What would happen if you deleted this specific key pair from your AWS console?
Now, what do you think will happen if you delete this key pair from your AWS console while there is already an instance running with it? Will you still be able to log in or not? Just think about it for a minute, and you can then resume the video. All right, I assume that you have already thought of your answer. So what we’ll do is take the practical approach and look into what exactly happens so that it becomes very clear for us. I am now in my EC2 Management Console. Within this, I’ll go to Key Pairs and import a key pair. On my Linux box, I’ll generate a new key pair, cat the id_rsa.pub file, and then copy and import it into AWS. The key pair name will be kplabs-demo, and I’ll go ahead and click on “Import.” Perfect. So this key pair is imported. Now let’s go ahead and launch a new instance with this specific key pair. I’ll try to keep everything as simple as possible.
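The generate-and-import steps just described can be sketched from a terminal as follows. The file name kplabs-demo is just the name used in this demo, and the import-key-pair call assumes configured AWS credentials, so it is shown commented out for reference:

```shell
# Generate a new RSA key pair locally (no passphrase, for the demo only).
ssh-keygen -t rsa -b 2048 -N '' -f ./kplabs-demo -q

# The public half is what gets imported into AWS; the private half
# never leaves your machine.
cat ./kplabs-demo.pub

# Import the public key as an EC2 key pair (requires AWS credentials):
# aws ec2 import-key-pair --key-name kplabs-demo \
#     --public-key-material fileb://./kplabs-demo.pub
```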
The subnet will be selected as a public subnet, and we’ll go ahead and continue. I’ll choose a security group; let me select the one called KplabsFun, which has been used for earlier demos. Then I’ll go ahead and click on “Review” and launch it. While it’s launching, you will see that I have specified kplabs-demo as the key pair. To make the instance easier to identify, I’ll simply name it Kplabs Demo. Perfect. Let’s just wait a moment for the instance state to be running, and we can then see in practice what happens when we delete the key pair. All right, so now our Kplabs Demo machine is up and running. What we’ll do is copy the public IP address associated with the instance and connect with our private key. Perfect. Now that we’ve connected with our private key, one thing I wanted to show you, and I’m sure you already know this, is that when you import a key pair, you only upload the public key.
Whenever you create an instance with one of these key pairs, AWS stores the public key in the authorized_keys file. I’ll show you that. On an Amazon Linux machine, it is under /home/ec2-user/.ssh/authorized_keys. So this is the key that AWS has stored. Now, one of the troubleshooting questions that we were discussing is: what would happen if I deleted this specific key pair? Let’s try it out. I’ll delete the key pair from the console. The question is whether I will still be able to log in. I’ll log out, try to log in again, and you will see that I am logged in. So, one important thing to remember: since the public key is stored in the authorized_keys file, deleting the key pair from the AWS console has no effect on your ability to log into the machine, because the public key remains in that file. This is one part that you should remember.
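This behaviour can be simulated locally. The path below is a stand-in for /home/ec2-user/.ssh/authorized_keys on Amazon Linux, and the key material is a placeholder; the point is that SSH login is governed entirely by this file on the instance, not by the key pair object in the console:

```shell
# Stand-in for /home/ec2-user/.ssh/authorized_keys on the instance.
AUTH_KEYS=/tmp/demo_authorized_keys

# At launch, AWS writes the key pair's PUBLIC key into this file.
echo 'ssh-rsa AAAAB3Nza...placeholder... kplabs-demo' > "$AUTH_KEYS"

# Deleting the key pair in the AWS console does NOT touch this file,
# so SSH with the matching private key keeps working.
grep -c 'kplabs-demo' "$AUTH_KEYS"   # -> 1
```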
Now, the second troubleshooting point that generally comes up is this: assume instance A was created with the key pair kplabs-old. You then create an AMI from it and launch a new EC2 instance, this time specifying kplabs-new as the key pair. AWS will not remove the old public key from the authorized_keys list; it will still remain. The new public key, however, will be appended to the authorized_keys file. This is again a very important part to remember. So, just in case you’re wondering what you would do if you lost the key pair associated with an EC2 instance: one of the most basic solutions is to create an AMI of the instance and launch a new EC2 instance from it. Let’s again take this scenario from a practical point of view. Let me do one thing: I will delete the associated private key from my machine, so now I won’t be able to log in to the EC2 instance. This is a very common scenario that you might encounter. There are various solutions to this, and many people find that creating an AMI from the EC2 instance is one of the simplest. So let’s try that out. I’ll stop the EC2 instance. The instance is now stopped, so I’ll quickly go ahead and create an image of it. Let me name it Kplabs, put the same for the description, and click on “Create image.” Perfect. Our AMI is getting created, and it will take a little time.
I’ll pause the video and resume it once the AMI is ready. All right, so our AMI has been created. What we can do now is launch a new EC2 instance from this AMI. Before we do that, since we have already deleted our key pair, I’ll go ahead and create a new one; in fact, let me create it on my Linux box itself. All right, so this is a new key pair. I’ll copy the public key and import it as, say, kplabs-demo-new. So this is the new key pair. Now let’s go ahead and launch the EC2 instance from the AMI, and let’s also select the right security group. I’ll review and launch, and this time I’ll select the kplabs-demo-new key pair. Perfect. So this is the new EC2 instance that is getting launched. I’ll name it appropriately so that it will be easier for us to identify. Perfect. Now our new instance is up and running; I’ll copy the IP address, and we’ll use the new key pair to log in.
So I SSH in as ec2-user at the IP address. Now, if we look at the authorized_keys file, you will find that there are two public keys: one associated with the kplabs-demo key pair that we created earlier, and a second one for the new kplabs-demo-new key pair. This means that if you ever recover the older private key, you will still be able to access this EC2 instance with it. Do remember that whenever you create a new EC2 instance from an AMI, the public key that you specify at launch time is appended to the authorized_keys file. One last thing I wanted to make sure everyone understands: if you copy this AMI into a completely different region, let’s say the Tokyo region, and launch a new EC2 instance from it there, that instance will again have the same authorized_keys file containing both public keys. So even in the Tokyo region, you will still be able to log in with either of the keys present in the instance.
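The whole recovery flow can be sketched with the AWS CLI. All instance IDs, AMI IDs, and names below are placeholders, and the commands assume configured credentials, so treat this as a reference sequence rather than something to paste verbatim:

```shell
# 1. Stop the instance whose private key was lost.
aws ec2 stop-instances --instance-ids i-0abc12345example

# 2. Create an AMI from it.
aws ec2 create-image --instance-id i-0abc12345example \
    --name kplabs-recovery --description "kplabs recovery AMI"

# 3. Import a freshly generated public key as a new key pair.
aws ec2 import-key-pair --key-name kplabs-demo-new \
    --public-key-material fileb://./kplabs-demo-new.pub

# 4. Launch a new instance from the AMI with the new key pair; its public
#    key is APPENDED to authorized_keys, alongside the old one.
aws ec2 run-instances --image-id ami-0def67890example \
    --instance-type t2.micro --key-name kplabs-demo-new

# Copying the AMI to another region (e.g. Tokyo) carries the
# authorized_keys file, and hence both public keys, with it.
aws ec2 copy-image --source-region us-east-1 --region ap-northeast-1 \
    --source-image-id ami-0def67890example --name kplabs-recovery
```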
38. EC2 Tenancy Attribute
Hey everyone, and welcome back. In today’s video, we will be discussing the EC2 tenancy attribute. Now, when we launch an instance within a VPC, there are certain tenancy attributes that can be associated with EC2 instances. There are three tenancy options available: the first is shared, the second is a dedicated instance, and the third is a dedicated host. Let me quickly show you before we look into each one of them. I’m in my AWS Management Console; I’ll go to EC2 and select Launch Instance. Let me quickly select t2.micro, and within the configuration screen, under Tenancy, you will see the three options: shared, dedicated (a dedicated instance), and dedicated host. These are the three tenancy options that are available, and for the exam, we should understand the differences between them and when to use each one.
Now, shared tenancy is pretty straightforward and easy to understand. Whatever EC2 instances you launch will run on shared hardware. So let’s assume there are three EC2 instances here on top of a virtualization layer, all on one piece of hardware. It might be the case that the first EC2 instance belongs to you, the second belongs to some other AWS customer, and the third belongs to yet another AWS customer. This is what is called “shared tenancy.” Now, one of the issues with this approach: let’s assume the second AWS customer is really abusing a lot of things. If he’s abusing the network, the chances are that, since all of the instances are on the same hardware, your instance will also be affected. So there is always a little risk involved in shared tenancy.
The second option is the dedicated instance. With dedicated instances, whatever EC2 instance you run, it runs on hardware that is dedicated to a single customer, which is you. There can still be multiple EC2 instances running on the same hardware, but all of them will belong to your AWS account. It will not be the case that the first EC2 instance belongs to you and the second belongs to some other customer; they all belong to your AWS account only. Now, one of the problems with this approach, and the reason the dedicated host was released, is that if you stop and start the EC2 instance, it will not necessarily be started on the same hardware; it might be started on entirely different hardware, which again would be dedicated to you. That is a problem when you have perpetual licenses that are directly tied to the hardware.
There are certain licences related to Oracle and Microsoft that are directly tied to the hardware. For those, what customers need is that even when you stop and start the EC2 instance, it should be launched on the same hardware, because the license cannot move with the hardware. If the hardware changes after a stop and start, it’s a big challenge, and this is the reason the dedicated host option is available: even if you stop and start the instance, it will be launched on the same physical box. The second important pointer is that a dedicated host is a physical server, which basically allows you to have much more granular control at the socket, core, or even VM-based licence level.
Again, as we discussed, this applies to Windows, Oracle, SUSE, and various others. Let me cancel this launch now that we’ve seen the tenancy screen, and go back to the EC2 console. You’ll notice under Instances that there is a Dedicated Hosts section, where you need to allocate a host. This host can be configured depending on your requirements; you don’t select a t2.micro and all of those things here, because this is a physical server, and you can control how many instances are launched on it and various other things when it comes to the dedicated host.
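The three tenancy options map directly to the Placement parameter of run-instances. The AMI ID, availability zone, and instance type below are placeholders, and the commands assume AWS credentials, so this is a reference sketch only:

```shell
# Shared tenancy is the default; no Placement parameter is needed.
aws ec2 run-instances --image-id ami-0abc123example --instance-type m5.large

# Dedicated instance: single-customer hardware, but the physical host
# may change across a stop/start.
aws ec2 run-instances --image-id ami-0abc123example --instance-type m5.large \
    --placement Tenancy=dedicated

# Dedicated host: first allocate the physical server, then launch onto it,
# so the instance stays on the same box across stop/start (important for
# host-bound licences).
aws ec2 allocate-hosts --instance-type m5.large \
    --availability-zone us-east-1a --quantity 1
aws ec2 run-instances --image-id ami-0abc123example --instance-type m5.large \
    --placement Tenancy=host
```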
39. AWS Artifact
Hey everyone, and welcome back. In today’s video, we will be discussing AWS Artifact. Now, AWS Artifact is basically a portal in AWS that provides you access to various AWS security and compliance documents, which are also referred to as audit artifacts. As you are probably aware, many AWS services are compliant with various industry standards such as PCI DSS, HIPAA, and others. Let me quickly show you this part. If you look at this AWS blog, it says that AWS has added twelve more services to its PCI DSS compliance program, including API Gateway, Cognito, WorkDocs, and Lambda, among various others. So, if your organisation is subjected to a PCI DSS audit tomorrow and you say, “We’re completely hosted on AWS, and the AWS services we use are already compliant,” the first thing the auditor will ask is for you to show the compliance documents for those AWS services.
So, let’s say you’re using API Gateway and you’re telling the auditor who is auditing your organisation for PCI DSS compliance that API Gateway is already PCI DSS compliant and you don’t need to do much there. The auditor will ask you to provide a document stating that API Gateway is compliant with PCI DSS. Now, that document is not just an AWS claim; it is an official PCI DSS compliance document, referred to in PCI DSS terminology as an Attestation of Compliance (AOC). All of those documents can be downloaded from the portal referred to as AWS Artifact. So let’s go to the Management Console, and I’ll select Artifact. Within this, you will see there are a lot of artifacts that you can download, and many of them are very useful in terms of compliance. If I search for PCI DSS, I get the AOC, the Attestation of Compliance. If you want to download this artifact, you must read the NDA agreement, which you can accept, and then download the document by clicking here. This will download the PCI DSS attestation form.
40. Mitigating DDoS Attacks
Hi everyone, and welcome back to the Knowledge Portal video series! In the previous lectures, we learned the fundamentals of DDoS and what even a single machine can do in DDoS-based attacks.
So generally, what happens is that hackers use a full botnet of servers to attack websites, and a lot of websites go down because of distributed denial-of-service attacks. What we’ll do today is learn about various techniques for mitigating DDoS attacks on our infrastructure. There are four major points to understand as far as mitigating DDoS is concerned. The first is to be ready to scale as traffic surges, so you should be ready to scale up if traffic increases; we’ll understand all of these points in detail. The second point is to minimize the attack surface area. This basically means that you should not expose your entire infrastructure to the internet, because DDoS attacks are most likely to hit the exposed area, which is usually the public subnet.
The third point says to know what is normal and what is abnormal. This is specifically applicable to enterprise websites; they should have proper metrics to understand how much traffic is normal and what is abnormal. This is a critical point. And the fourth point is to create a plan for attacks: what will you do when there is an ongoing DDoS attack? You should have a proper plan for that as well. So let’s understand each point in detail. The first point again is to be ready to scale. Your infrastructure in AWS should be designed to scale up as well as scale down whenever required. This will not only help you during your peak business hours, but will also help protect you during a DDoS attack. To scale infrastructure up and down, you can use various AWS services, specifically ELB and Auto Scaling. For example, whenever CPU load is greater than 70% on the application server, Auto Scaling automatically adds one more application server to meet the demand. And generally, in a DDoS attack, there is heavy resource consumption.
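The “add a server when CPU crosses 70%” rule just described can be expressed as a target-tracking scaling policy. The Auto Scaling group and policy names are placeholders, and the call assumes AWS credentials:

```shell
# Keep the group's average CPU around 70%; Auto Scaling adds instances
# when traffic (legitimate or DDoS) pushes utilisation above the target,
# and removes them again when it drops back.
aws autoscaling put-scaling-policy \
    --auto-scaling-group-name kplabs-app-asg \
    --policy-name cpu-70-target \
    --policy-type TargetTrackingScaling \
    --target-tracking-configuration '{
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 70.0
    }'
```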
Assume your current application server is using 70% of its CPU. Then the Auto Scaling group should automatically add one more application server to meet the demand. This will help you not only during peak hours, when traffic suddenly comes in, but also when there is an ongoing DDoS attack. It is very important to always have your infrastructure ready for scaling. That’s the first point. The second point is to minimise the attack surface area. This is possible if you have a properly decoupled infrastructure. PCI DSS also says that one server should be used for one service; there should not be multiple services on a single server. So, for example, an application server and a database server should not be in the same EC2 instance.
Now, let’s assume that you have a single EC2 instance running both the application server and the database server. If there is a DDoS attack on that particular EC2 instance, not only will your application server go down, but your database will go down along with it. If, instead, you have separate EC2 instances for the application and the database, and there is a sudden DDoS attack, then only the application server will go down; in the worst case, your database server will still be up and running. So it is very important to always have a decoupled infrastructure, and various services like SQS and Elastic Beanstalk can help here. Third, understand what is normal and what is abnormal. There should be key metrics that define normal behavior. An example is a website receiving huge traffic in the middle of the night. Assume you have an e-commerce website and you suddenly see a large increase in traffic at 3:00 a.m.; that is actually unusual.
You can say this is unusual because an e-commerce website for a specific country does not receive a lot of traffic at night. Along these lines, you should have key metrics that can help you as a security engineer determine whether a given amount of traffic at a given time is normal or abnormal. Again, various services can help you here; CloudWatch and SNS are important services for this case. Now, the fourth and most important point is to create a plan for attacks. Let’s assume that there is an ongoing attack on your infrastructure. How you handle this situation and what actions you take are critical, so you should have a plan to mitigate an ongoing DDoS attack. For example, assume an attack is going on and you are unaware of what exactly is happening. Checking the source IP addresses the traffic is coming from is a very simple way to determine whether it is a DDoS attack or not. The second important point is to check which country the increased traffic is coming from. If you have an e-commerce website based in India and suddenly you find a huge amount of traffic coming from another country, that traffic is definitely suspicious. The third step is to determine the nature of the attack: whether it is a SYN flood or an application-level attack. Once you understand the nature of the attack, you know what measures you can take.
So, if it is a SYN-flood-based attack, then maybe you can mitigate it with, say, a network ACL or a security group. But if it is an application-level attack, then maybe you need a web application firewall, et cetera. In order to prevent an attack, you should know its nature. The fourth point is that attacks can be blocked at the network ACL or security group level. For example, if the majority of the traffic is coming from a specific IP address, you can block that IP address directly at the network ACL level. And the last point to remember, which AWS also recommends, is to have AWS support, at least at the Business level. Then, whenever you are facing a DDoS attack, you can immediately contact AWS Support, and they, along with your security engineers, can help you work around the ongoing attack. So those are the four very important points. Now, there are various services that will help you protect against DDoS attacks. Amazon CloudFront is one of the major ones. The second is Route 53. Then you have various services like Auto Scaling, the web application firewall, ELB, VPC, security groups, and network ACLs.
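Blocking an abusive source at the network ACL level, as mentioned above, looks like this. The ACL ID is a placeholder and the IP is from the documentation range; note that deny rules need a lower rule number than the allow rules so they are evaluated first:

```shell
# Deny all inbound traffic from the suspicious address at the subnet edge.
aws ec2 create-network-acl-entry \
    --network-acl-id acl-0abc123example \
    --rule-number 90 \
    --protocol=-1 \
    --rule-action deny \
    --ingress \
    --cidr-block 203.0.113.45/32
```

Security groups cannot do this, by the way: they only support allow rules, which is why explicit blocking of a source IP has to happen at the network ACL (or WAF) level.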
So generally, as far as exams are concerned, specifically the security specialty exam, whenever you see a DDoS question, they might ask you what the prevention measures are. And the number one prevention mechanisms, I would say, are CloudFront and Route 53; these are two very important services. Amazon has released a very nice webinar, or should I say video, on mitigating DDoS attacks; I’ll attach the link along with this module. I really recommend you watch that video once, because it goes into great technical detail on how CloudFront or Route 53 can actually help you protect against DDoS attacks. It covers SYN floods and how CloudFront can mitigate SYN-flood-based attacks, among other things. So it’s really recommended that you watch it. Those are the fundamentals of mitigating DDoS attacks, and this is again a very important topic as far as the exams are concerned. So this is it for this lecture. I hope this has been informative, and I’d like to thank you for viewing.
41. AWS Simple Email Service (SES) (New)
Hey everyone, and welcome back. In today’s video, we will be discussing the Amazon Simple Email Service. Typically, many organisations have a generic email address, such as a “noreply” address at the organization’s domain. They use these addresses to send emails to users for various use cases. For example, when a user registers, he should receive a sign-in link or the initial password, among other things. Those bulk emails are typically sent from a common email address; let’s say noreply@example.com. Now, in order to send emails, you need a mail server. That can be an SMTP server such as Postfix, which would need to be installed on an EC2 instance.
However, do remember that, by default, if you have a mail server installed in an EC2 instance and you start to send a lot of emails, AWS will not allow that. AWS throttles email traffic sent over port 25, and there is a genuine reason for that. What happens is, and I have seen this when consulting for startups, that a lot of organisations and startups have already been breached. Once an attacker gains access to an EC2 instance, he installs the relevant packages for a mail server and then uses that EC2 instance to send a lot of spam to thousands of email addresses, and that spam contains a lot of phishing links, malware, et cetera. This is why, by default, AWS throttles traffic sent from EC2 instances over port 25. If you want to remove the throttle, you can make use of a non-default port, or you can fill out a form to request removal of the email sending limitation.
Now consider this: suppose you’ve asked AWS to remove the throttle and you’re sending emails from that EC2 instance. Why, then, do you need SES? If you have worked with mail servers, you will know that it really gets complicated. There are numerous factors to consider when sending emails at scale; you basically need a dedicated person to manage the mail server if you are running it at a larger scale. Furthermore, an EC2 instance is not highly available on its own, so in addition to high availability, you must also consider security and a variety of other factors. Instead of worrying about all of those, you can use the managed platform by AWS, which is SES, and let AWS take care of the scalability and security of the platform itself. So let me quickly show you what SES looks like. I’m in my AWS Management Console; let’s go to the services, and I’ll type “SES.” So you have the Simple Email Service here. Now, you see, it says that the region is unsupported, so you need to make sure that you use SES in a supported region.
Singapore is not supported. As of now there are only three supported regions: North Virginia, Oregon, and Ireland. So let’s select North Virginia, and this is how the SES dashboard looks. Currently, we do not have a domain here. Say I have a kplabs domain. If I want to send emails from a noreply address at that domain, I can go ahead and verify the domain. Before SES will send emails from that address to hundreds of users, it asks you to verify your domain. Now, if you want to send email over SMTP, there are certain things you need to have. The first one is the server name. This is similar to logging in to Facebook: you need to know the domain name, facebook.com, and then you need the user name and the password. Similarly, for SMTP you need the server name, and you need to know the port, which is 25, 465, or 587.
Then you have TLS. Going back to the presentation and some of the important points we should remember for the exam: the first is that to use the SES SMTP interface, we need the SES SMTP username and password. This is something that we saw. SMTP works on port 25, 465, or 587. You need to provide the SMTP credentials whenever you want to connect to the SES SMTP endpoint. So let’s say you have an application that wants to send an email to the user upon registration. It can connect to SES to send the email, and in order to do that, it needs the SMTP credentials, which you put within your application. We also discussed that SES isn’t currently available in every region; it is only available in North Virginia, Oregon, and Ireland, though it might become available in other regions later. Now, each region has a specific endpoint. For North Virginia it is email-smtp.us-east-1.amazonaws.com; for Oregon it is email-smtp.us-west-2.amazonaws.com; and for Ireland it is email-smtp.eu-west-1.amazonaws.com. So depending on which region you want to connect to, the SMTP endpoint will change accordingly.
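Sending one message through the SES SMTP interface can be sketched with curl. The credentials, addresses, and message contents below are placeholders, while the endpoint and port are the real us-east-1 values; remember that the sending domain (and, while your account is in the SES sandbox, the recipient too) must be verified first:

```shell
# A minimal email message to hand to the SMTP endpoint.
cat > message.txt <<'EOF'
From: noreply@example.com
To: user@example.com
Subject: SES SMTP test

Hello from SES.
EOF

# STARTTLS on port 587 against the us-east-1 SMTP endpoint, authenticating
# with the SES SMTP credentials (these are NOT your IAM access keys).
curl --ssl-reqd \
    --url 'smtp://email-smtp.us-east-1.amazonaws.com:587' \
    --user 'SMTP_USERNAME:SMTP_PASSWORD' \
    --mail-from 'noreply@example.com' \
    --mail-rcpt 'user@example.com' \
    --upload-file message.txt
```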