8. AWS Artifact
Hey everyone, and welcome back. In today’s video, we will be discussing AWS Artifact. Now, AWS Artifact is basically a portal in AWS that provides you access to various AWS security and compliance documents, which are also referred to as audit artifacts. Now, one thing I’m sure you already know is that a lot of AWS services are compliant with various industry standards like PCI DSS, HIPAA, and various others. So let me quickly show you this part. If you look at this AWS blog, it says that AWS adds twelve more services to its PCI DSS compliance programme, and that it has added API Gateway, Cognito, WorkDocs, Lambda, and various others. So, suppose your organisation is subjected to a PCI DSS audit tomorrow and you say, “We’re completely hosted on AWS, and AWS services are already compliant.” If you say this to an auditor, the first thing that auditor will ask you is to show the compliance documents for those AWS services.
So let’s assume that you are using API Gateway and you’re telling the auditor, who is auditing your organisation against PCI DSS compliance, that API Gateway is already compliant with PCI DSS and we don’t really have to do much. The auditor will then ask you to provide a document stating that API Gateway is compliant with PCI DSS. Now, that document is not something AWS wrote up informally; it is an official PCI DSS compliance document, referred to as an “Attestation of Compliance” (AOC) in PCI DSS terminology. All of those documents can be downloaded from this portal, which is referred to as AWS Artifact. So let’s go to the management console, and I’ll select Artifact. Now within this, if you see, there are a lot of artifacts that you can download. Many of them are very useful in terms of compliance. So if I search for PCI DSS, you basically get an AOC, which is the attestation of compliance. And if you want to download this artifact, you must read the NDA terms, which you can accept, and then download by clicking here. And this will basically download the PCI DSS attestation document.
9. AWS Trusted Advisor
Hey everyone, and welcome back. In today’s video, we will be discussing AWS Trusted Advisor. Now, AWS Trusted Advisor is basically a great service that analyses our AWS environment and provides best practices and recommendations in five major categories. These five categories are cost optimization, performance, security, fault tolerance, and service limits. So, this is a screenshot that I took from the AWS console. And if you see over here, it is actually giving you various recommendations: for cost optimization, it is showing that you can save up to $10; you also have recommendations for performance, security, and fault tolerance, and informational data for service limits. When it comes to Trusted Advisor checks, depending on your subscription, there are two types of checks available.
So the first one is the core checks and recommendations, and the second is the full set of Trusted Advisor checks. Now, when we compare the core checks and full checks, it is important for us to know that if you are on a basic support plan, you will only have the core checks. However, if you are on a business or enterprise support plan, then you will benefit from the full set of Trusted Advisor checks. Let’s take a look at what’s available in the core checks. One is the security groups check, specifically for security group ports being unrestricted; then you have IAM use; then you have MFA for the root account; and then you have the service limits. Now, when it comes to full checks, you have checks available ranging across security, performance, fault tolerance, cost optimization, and service limits as well. Now, from the names themselves, we can understand what each of these checks does. The cost optimization checks basically help us reduce costs. So, if you have an idle EC2 instance, an underutilised EBS volume, or an idle RDS instance where you can save money, it will show you a list of them. You can see the list of idle load balancers if you have an idle load balancer over here. You have low-utilisation EC2 instances; you have unassociated elastic IP addresses, because EIPs are also charged; and you also have underutilised EBS volumes. For fault tolerance, it gives you the checks related to fault tolerance. As an example, consider RDS in multi-AZ. If you have RDS in a single availability zone and there’s an issue in that zone, then your database would be inaccessible.
As a result, having multi-AZ is always preferable. So these are the types of checks that it offers for fault tolerance. The third category is security. One well-known example is unrestricted security group access. So if someone allows 0.0.0.0/0 on port 3306 or 22 or another, then those are the recommendation types that you will see under the security checks. Then there is performance, where you will see checks that can improve overall performance, and the last one is service limits. So let’s do one thing. Let’s jump into the demo and look at what exactly it might look like. So, this is how the Trusted Advisor console looks. Now again, I have two accounts here. The first is on a business support plan, while the second is on the free tier. So this is a free-tier-based support plan. In fact, I bought the business plan just to show you how Trusted Advisor looks there and to relate it to how AWS support would look. Anyway, returning to Trusted Advisor, you can see that there are five major categories over here. If we just click on security here, it is basically showing that MFA on the root account is not set. Now, if you go to cost optimization, you’ll see that it says you should upgrade your support plan to unlock all the Trusted Advisor recommendations. Similarly, if you go to fault tolerance, you’ll get the same thing and be asked to upgrade. Now, this is my second AWS account, where I have a business subscription, and this is how Trusted Advisor looks here. So I have the full checks available in this account.
So let’s look into each one of them. If you look into cost optimization, it is basically saying that there is a low-utilisation EC2 instance, and if you just expand it, it will give you the instance ID as well, so it becomes much easier. So here it shows the EC2 instance ID; the instance type is t2.micro and the CPU utilisation is 0.3%, so I could even move it to t2.nano if required. Anyway, because it’s under the free tier, I’ll just leave it as t2.micro. Next, it says that there is an unassociated elastic IP address, and that it is in the US East region. So if you just click over here, it will take you to the console. So, you see, it automatically took me to the Elastic IP page. Again, this is a great recommendation, because if I just leave it as it is, then I’ll get charged. So I’ll go ahead and release this elastic IP address. Great. So that was the second part. In the third part, it says that there is an underutilised EBS volume, and it gives me the volume name as well as the monthly storage cost. So let me just click on the volume ID here. Again, it directly takes me to the Volumes console under EC2. And here is one volume in the available state. We do not really need it, so we’ll go ahead and delete the volume. And this proves to be really good in terms of saving costs. Anyway, this is how the checks look. I hope you now understand why Trusted Advisor is so important. Now, the second category is performance. Everything appears to be fine in terms of performance. Next is security. You see, it says that CloudTrail logging is not enabled. Now, if you look into the other account, under security it does not really show the CloudTrail check. The reason it does not show CloudTrail is that under the free tier, it only supports certain checks. So you will not have the full checks under security enabled.
So, even though CloudTrail is not enabled in this account, it will not show you that, and will essentially ask you to upgrade in order to enable all of the checks. All right, so you have the CloudTrail check, you have the IAM password policy, you have unrestricted security group permissions, and you even have S3 bucket permissions. And it will basically give you a list of S3 buckets where the permission is public. So this is extremely useful as far as governance is concerned. The same goes for the security group check. If I just open it up over here, it basically gives me the security group. You see, it says there’s a security group called launch-wizard, and it has port 22 allowed and the IP range is 0.0.0.0/0. So for production or even development purposes within your organisation, Trusted Advisor proves to be extremely important. The next one is fault tolerance over here. Again, fault tolerance is important if you have non-fault-tolerant items in your environment. And the last thing is the service limits, where it basically tells you if your usage has reached more than 80% of the current limit of a specific service. Now, along with that, if you go to the preferences, you can get a weekly email notification, so you can set a security contact email address. So basically, if you remember from the slide, we had a sentence that said “get weekly updates via email as well.” So we can configure weekly updates from Trusted Advisor to be received via email.
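As a rough sketch of how check results like the ones above could be tallied programmatically, here is a small Python snippet. The sample data below is invented for illustration (in a real account you would fetch check results via the AWS Support API, which backs Trusted Advisor and requires a business or enterprise plan); the category and status strings mimic the shapes Trusted Advisor uses.

```python
from collections import Counter

def summarize_checks(checks):
    """Count Trusted Advisor checks by (category, status) pair."""
    return Counter((c["category"], c["status"]) for c in checks)

# Invented sample data, loosely shaped like Trusted Advisor check results.
sample = [
    {"name": "Low Utilization Amazon EC2 Instances", "category": "cost_optimizing", "status": "warning"},
    {"name": "Unassociated Elastic IP Addresses",    "category": "cost_optimizing", "status": "warning"},
    {"name": "MFA on Root Account",                  "category": "security",        "status": "error"},
    {"name": "Service Limits",                       "category": "service_limits",  "status": "ok"},
]

summary = summarize_checks(sample)
print(summary[("cost_optimizing", "warning")])  # 2
```

A summary like this is essentially what the console dashboard shows: how many checks in each category need attention.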
10. Understanding CloudTrail
Hey everyone, and welcome back. In today’s video, we will be discussing CloudTrail. Now, CloudTrail is one of the very important services, and typically this is the first service that I enable whenever I create an AWS account. So let’s go ahead and understand more about CloudTrail. Now, basically, it is very important for us to record each and every activity that is happening within your infrastructure, your cloud service provider account, and even your servers. Your servers may be breached at some point, and if you do not know what activities were taking place, you will be unable to determine the root cause of those breaches.
And that has actually happened to a lot of organisations. Hence, it is very important to record each and every activity that is going on within your account. Now, CloudTrail is a service that allows us to record every API call that happens within your AWS account. So let’s understand this with an example where you have an auditor who is auditing your organisation, and he asks you a question: show me what Annie did on 3 January 2017 between 10:00 a.m. and 2:00 p.m. Now, you will only be able to answer this if you have CloudTrail enabled. Do remember that this question is specific to an AWS account. If the auditor asks, “What did Annie do inside the server in this time frame?” then you need a different mechanism for that. But as far as AWS is concerned, CloudTrail is something that will help you answer this specific question. So how CloudTrail works is that you get something similar to this table, where it says that at 3:50 p.m. a user called James logged in, Annie modified a security group at 7:30 p.m., and Suzanne created a new EC2 instance at 11:00 p.m.
So from this, you can say that, in this specific timeframe, Annie had modified a security group. So this is a very simple table that can give you a glimpse of what CloudTrail is all about. So let’s do one thing; let’s go ahead and understand this in a practical manner. So I’m in my AWS console, and basically what I did a few minutes ago, before recording the video, was start the demo instance, and I just wanted to show you how exactly it might look in CloudTrail. So I’ll go to Services and search for CloudTrail, and within the event history, I already have CloudTrail enabled. We will also look into how we can enable it, but for demo purposes, CloudTrail has already been enabled. So now, if you look here, you have the event time, the username, the event name, the resource type, and the resource name. The first event name here is StartInstances, and if you click here, it will basically give you a lot of aspects. One of the more detailed ones is the view event. So if you click on “View event,” you will get the actual JSON of what exactly happened. So let’s understand this. It basically says that this is the ARN; the ARN is of the root user, and the event source is ec2.amazonaws.com. That means that this specific event happened on this service, which is EC2.
Now what was the event that happened here? The event that happened here is StartInstances. Where did the event take place? It took place in the us-east-1 region. Now who started it? Which IP address started it? This is the IP address of the user who started the EC2 instance. And the final question is: what is the instance ID of the instance that was started by this specific user? And this is the instance ID, and if you see over here, it matches the instance ID shown in the console. So basically, from this CloudTrail log, I can say that the root user, with this source IP address, started an EC2 instance in the North Virginia region, and the EC2 instance ID is this one. So this is one sample CloudTrail event, and if you look at other CloudTrail events, they will all have the same structure. So, returning to how we can enable CloudTrail: in order to do that, you need to go to the CloudTrail dashboard, which is here, and you need to go to Trails.
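The fields we just walked through can also be pulled out programmatically. Below is a minimal sketch that parses a trimmed-down CloudTrail record; the field names match the real CloudTrail record format, but the values (the IP address and instance ID) are made-up examples.

```python
import json

# A hypothetical CloudTrail StartInstances event, trimmed to the
# fields discussed above. Values are illustrative, not real.
event_json = """
{
  "eventSource": "ec2.amazonaws.com",
  "eventName": "StartInstances",
  "awsRegion": "us-east-1",
  "sourceIPAddress": "203.0.113.10",
  "userIdentity": {"type": "Root"},
  "responseElements": {
    "instancesSet": {"items": [{"instanceId": "i-0123456789abcdef0"}]}
  }
}
"""

event = json.loads(event_json)
items = event["responseElements"]["instancesSet"]["items"]

# Answer the who / where / what questions from the record.
print(f"{event['userIdentity']['type']} user at {event['sourceIPAddress']} "
      f"called {event['eventName']} on {event['eventSource']} "
      f"in {event['awsRegion']}; instance: {items[0]['instanceId']}")
```

This is exactly the reading we did by eye in the console: who (userIdentity), from where (sourceIPAddress), what (eventName/eventSource), in which region (awsRegion), and on which resource (the instance ID in responseElements).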
Within Trails, you can see that one trail was created, called demo-kplabs, and it has an associated S3 bucket. So, basically, whatever event history you see within CloudTrail does not get saved for an indefinite amount of time. In fact, it says that you can view the last 90 days of events. Before this, you could only view up to seven days, but AWS has increased it to 90 days, which is very beneficial. But what really happens after 90 days? Well, these events are saved in the S3 bucket demo-kplabs, as specified in the configuration value. So let’s look into this specific S3 bucket. Now, within this trail, we’re more interested in us-east-1; basically, you will get the CloudTrail events associated with every region. So if you just want to see what events happened within the us-east-1 region, which is North Virginia, you can just click over here; it shows 2018/06/24, and all of these are compressed files. When you download one, you’ll have to uncompress it, and you will see the JSON events that we saw within the CloudTrail console just a moment ago. So in order to create a trail, what you need to do is come to the Trails tab and click on Create Trail. Now you need to give the trail a name; I’ll say kplabs-cloudtrail, and you have the option of applying it to all regions. This is very important. Make sure that this option is always selected, which applies the trail to all regions.
Now, for the management events, we need to log all the read and write events, so I’ll select All. For the data events, you can select all the S3 buckets within your account. Basically, if you want to record the S3 object-level API activities, then you need to select this very important option; make certain that this is selected. For Lambda, you can also record the Invoke API operations that happen, so make sure you select logging for all the current and future functions within the data events field. Now, we already know that CloudTrail’s event history will only store a maximum of 90 days. So it’s always recommended to never delete your CloudTrail activity, at least for a period of one year. Now, where it gets stored in S3 is defined by the storage location here. So you say “Create a new S3 bucket” and specify the bucket name; I’ll say kplabs-cloudtrail. So this is the bucket name, and then you can go ahead and click Create. Once your trail, kplabs-cloudtrail, is created, if you go to the event history, you should be able to see the CloudTrail activity in your dashboard. Now, do remember that if you enable it right now, you will not get the past events; you will only get the events from the time you enabled CloudTrail onwards. And also remember that the CloudTrail events that appear over here are not instantaneous. It might take a few minutes for an event to appear. By that I mean that if I stop this EC2 instance, it will not immediately show up here; it will take a certain amount of time, typically a few minutes, for that event to appear within the CloudTrail console.
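Since the files CloudTrail delivers to the S3 bucket are gzip-compressed JSON with a top-level `Records` array, uncompressing and reading them is straightforward. Here is a minimal sketch that builds such a file in memory (instead of an actual S3 download) and extracts the event names from it; the two records are invented examples.

```python
import gzip
import json

# Build an in-memory stand-in for a CloudTrail log file: gzip-compressed
# JSON with a top-level "Records" array (the format delivered to S3).
records = {"Records": [
    {"eventName": "StartInstances", "awsRegion": "us-east-1"},
    {"eventName": "StopInstances",  "awsRegion": "us-east-1"},
]}
compressed = gzip.compress(json.dumps(records).encode("utf-8"))

def event_names(gz_bytes):
    """Uncompress a CloudTrail log file and return its event names."""
    data = json.loads(gzip.decompress(gz_bytes))
    return [r["eventName"] for r in data["Records"]]

print(event_names(compressed))  # ['StartInstances', 'StopInstances']
```

In practice you would download the `.json.gz` object from the trail’s bucket first; the decompress-and-parse step is the same.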
11. Understanding AWS Inspector
Hey everyone, and welcome back to the KP Labs course. Now, in today’s lecture, we will be discussing AWS Inspector. Now, AWS Inspector is a very new service; it is not a very mature service yet. However, it has a lot of scope and amazing features. So let’s go ahead and understand what AWS Inspector is all about. In a nutshell, AWS Inspector is a vulnerability scanner that will scan your servers for specific vulnerabilities and notify you that those vulnerabilities are present. Now, in order for AWS Inspector to scan your server, it basically relies on an agent, which gets installed inside the server, and that agent is responsible for the scanning. Now, when it comes to vulnerability scanners, Inspector is very similar to Nessus. Nessus also does vulnerability scanning of servers, and at the end, it will basically show you that the server has ten high vulnerabilities, seven medium vulnerabilities, three low vulnerabilities, and so on. So this basically helps a security engineer determine what vulnerabilities are present, and it guides the security engineer on patch fixes.
So Inspector is very similar to what Nessus is all about. Now, AWS Inspector has certain predefined rules packages against which we can perform our scanning. The first one is based on Common Vulnerabilities and Exposures, or CVEs. The second is based on the CIS benchmarks. The third category is security best practices. And the fourth is runtime behaviour analysis. So these are the four packages that AWS Inspector currently has. Now, as we have already discussed, if you look into Nessus, it gives you a very nice GUI for the vulnerabilities. Inspector does not really have as good a GUI, but it will definitely tell you, in a nice tabular format, that certain vulnerabilities are present. So if you have high vulnerabilities present on a specific instance, it will tell you the instance ID, and it will also tell you the CVE ID along with the CVSS score. The CVSS score basically tells you how critical the vulnerability is. So how Inspector determines this is that it installs the agent; you have an agent installed inside the server. That agent will scan all the packages that are installed on the server and look into the vulnerability mapping to see if a package version has any vulnerabilities. If it does, then Inspector will give you this nice tabular form.
So the first thing it does is use the CIS benchmarks, where it can assess whether your operating system is following the CIS benchmark. The CIS benchmark is basically a set of security best practices. The second set of rules that Inspector has is related to security best practices, where it can check whether root login is disabled, whether the SSH server only supports protocol version 2, and various others. You also have runtime behaviour analysis, where it basically scans for unused TCP ports, insecure server protocols, and various others. So those are AWS Inspector’s high-level capabilities. Before we conclude, let me show you the Inspector page. There are three things that you have to do. You must first install the AWS agent on your EC2 instance; this is the first part. Once the agent is installed, you can go ahead and run the assessment with the various security rules. And third, once the run finishes, you analyse the findings that Inspector gives you. So this is a high-level overview of AWS Inspector. I hope this has been informative for you, and I look forward to seeing you in the next lecture.
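To get a feel for the “ten high, seven medium, three low” style of summary described above, here is a tiny sketch that tallies findings by severity. The findings list is invented sample data, only loosely shaped like Inspector finding objects, and the CVE IDs are just illustrative.

```python
from collections import Counter

# Invented sample findings, loosely shaped like Inspector output.
findings = [
    {"severity": "High",   "id": "CVE-2017-1000253"},
    {"severity": "High",   "id": "CVE-2017-5715"},
    {"severity": "Medium", "id": "CVE-2018-1111"},
    {"severity": "Low",    "id": "CVE-2016-10009"},
]

def by_severity(findings):
    """Tally vulnerability findings by severity level."""
    return Counter(f["severity"] for f in findings)

print(by_severity(findings))  # e.g. High: 2, Medium: 1, Low: 1
```

This mirrors the tabular summary a security engineer would use to prioritise patching: start with the highest-severity bucket and work down.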
12. Real World example on DOS Implementation
Hey everyone, and welcome to the KP Labs course. Now in today’s lecture we are going to speak about a very interesting topic that I am sure most of you will really like, which is denial of service. So, I’m sure many of you have heard about denial-of-service attacks and how many major websites have been brought down as a result of them. So let’s understand the basics of what denial-of-service attacks are all about, and then we’ll go ahead with our interesting practical as well. So in normal website operation, there is generally something called “normal traffic.” A website can handle a specific amount of traffic; it can be ten requests per second or it can be 100 requests per second, depending upon the server capacity. So in the normal scenario, you see that the server is all happy; it is in green where there is normal traffic. At times, there may be high traffic, causing the server resources to become overburdened and the website’s application to become quite slow. However, the website is still operational.
So the above two use cases are very genuine. However, in some cases, attackers try to generate this high traffic on purpose in order to bring the server down. If you see over here in the third use case, you have a denial of service, where a single attacker is generating so many requests that the server is completely down. Now, there is one more attack called “distributed denial of service,” where there are multiple parties doing a denial-of-service attack on the same resource. One distinction between a DoS and a DDoS is that a DoS may involve only one user attacking the server. However, in a DDoS attack, there will be hundreds of users across the world attacking the same endpoint at the same time, which is why it is called a “distributed denial of service” attack. So DoS and DDoS are basically part and parcel of server and network life. Now, the reason these attacks are so successful is that they are very easy to launch. And along with that, if you are a system administrator and you go and inquire about DDoS protection, you will be presented with a big bill that you might have to pay if you really need it. However, many cloud providers, such as AWS, provide good services, such as AWS Shield, that can help you protect against a distributed denial-of-service attack to some extent.
So nowadays, DDoS attacks are very big. If you talk about 2016 itself, which is like two years ago, there was a DDoS of 800 gigabits per second. You can imagine: 800 gigabits per second worth of traffic can bring the biggest of websites down. So, let me demonstrate in practice what a DoS attack would actually look like. On the left side, I have my Windows machine, and if you look at the CPU utilisation, it is at 3%. So I have a Core i7, and it is at 3% utilisation. Now, after I performed a denial-of-service attack, you see, the CPU went to 100% instantly. So, within a few seconds, the DoS attack took it from 3% to 100%. So, let’s not waste time, and let me show you what it actually looks like. First, I have a Windows 10 machine over here, and if you look at the CPU utilisation, it’s not much, quite idle, at around 8%. On the right, I now have a Kali Linux machine.
And this is where the DoS attack will be launched from. So, let’s start. Just notice the CPU utilisation is quite low, right? Now, within Kali Linux, there is a well-known tool called LOIC. LOIC is one of the tools that can perform denial-of-service attacks. So, the first thing you’ll need to enter is either a URL or the IP address of the endpoint you want to attack. In my case, I’ll put the IP address of the endpoint: 192.168.92.30. This is the IP address of my Windows machine. Depending on the firewall and the application, you may be able to employ a variety of attack vectors. In my case, I will be using a UDP-based attack vector. And let’s start. All you have to do is lock on to the target and press the “charging my laser” button. As you can see, the number of requests sent by this tool climbs astoundingly fast. And note that this virtual machine only has around 2 GB of RAM; look at the requests that are being generated with just 2 GB of RAM. Now, if I go to the Windows machine, you can see the CPU utilisation is spiking very high, to around 89%. And after a minute or two, the CPU utilisation will spike to a full 100%.
And this is the power of a denial-of-service attack. In any case, we’ve already reached 89%. So, if I simply press “Stop flooding” and look at the requested count, around 13 lakh (1.3 million) requests were sent within just 1 minute from a 2 GB RAM virtual machine. So imagine what would happen if you launched this attack from a 16 GB RAM server. The number of requests that we would be able to generate would be enormous, and it can actually bring down lots of networks and lots of websites. Anyway, coming back to our PowerPoint presentation, I hope you understood what the denial-of-service attack is all about. The practical we did just now was a plain denial of service, because there was a single entity. You also have distributed denial of service, where multiple users might run the LOIC tool, which we just ran, against a common endpoint at the same time. So that is the difference between a DoS and a DDoS. DDoS attacks are again pretty common; they have actually brought down Twitter and taken down a lot of functionality of Facebook, PayPal, and various others.
13. AWS Shield
Hey everyone, and welcome back. In today’s video, we will be discussing AWS Shield. Now AWS Shield basically helps you protect your workloads against distributed denial-of-service attacks. AWS Shield is now available in two varieties: Shield Standard and Shield Advanced. Now one of the very common scenarios nowadays is a distributed denial-of-service attack, which actually brings the website down. And this is the reason why a lot of customers have been asking for a solution that can protect against large-scale DDoS attacks, and AWS Shield is one of the solutions that can help against this scenario. Now speaking about the two variants of Shield, which were Shield Standard and Shield Advanced, let’s understand the difference between them.
Now, when it comes to AWS Shield Standard, it basically provides a basic level of protection against common attacks at the transport and network layers of the OSI stack. When it comes to higher levels of protection, Shield Advanced is the better option. Shield Advanced protects against various sophisticated distributed denial-of-service attacks. And one good thing about Shield Advanced is that it provides near-real-time visibility into an attack that is occurring, or that might be occurring, within the organisation. Along with that, AWS Shield Advanced gives customers 24/7 access to the AWS DDoS Response Team, also referred to as the DRT, during an ongoing attack. So let’s assume that your organisation is facing a massive DDoS attack. What you can do is contact the AWS DRT, which will help you with the measures that can be taken to protect against the attack. Now the next important part to remember is the AWS Shield-related cost and credit factor.
Now, AWS Shield Advanced will cost you a base price of $3,000 per month per organisation, and it basically requires you to have business or enterprise support. Now, one of the interesting parts about AWS Shield is this: during an attack, assuming that you have Shield Advanced enabled and have received a huge amount of attack traffic, your infrastructure costs will also increase. So AWS will basically return you that money in the form of credits. Now, remember, it does not offer you credits for all AWS resources. There are certain AWS resources, namely Route 53, ELB, and CloudFront, for which the credits will be returned to you if you have seen a surge due to a DDoS attack. This is a screenshot of how Shield Advanced actually appears. If you are interested, you can pay the $3,000 and see in real time what Shield Advanced would really give you, but I’m sure we will be content with a few screenshots just to see what exactly it might look like. So these are a few screenshots related to Shield Advanced. Before we wrap up this lecture, I wanted to show you Shield in the console. So within the WAF & Shield page of AWS, you can go to AWS Shield, and on the start page, it will give you a comparison between AWS Shield Standard and AWS Shield Advanced. So these are all the comparisons. If you look closely, you will notice the cost protection, where AWS will reimburse the costs related to Route 53, CloudFront, and ELB. And along with that, you can activate Shield Advanced, where the base price is $3,000 per month, and you have additional data transfer charges as well.
14. AWS Direct Connect
Hey everyone, and welcome back to the Knowledge Portal video series. So, continuing our journey with the networking section, today we have an overview of Direct Connect. Now, Direct Connect is a pretty important topic as far as the exams are concerned, and when it comes to the Advanced Networking specialty certification, Direct Connect is one of the most important topics. So let’s go ahead and understand the necessity of Direct Connect. Now, in normal communication, assume you have a customer and you have a VPC in AWS. If you want to connect to the VPC, what happens behind the scenes is that the internet comes into the picture. So this is the internet; you route your traffic through it and get the data back through the internet.
So this is how most communication works. Now, when you talk about the internet, a packet basically travels in hops. There are a lot of routers present along the way. Let’s assume I have my clients in India and my server somewhere in Oregon. The packets will actually have to travel halfway around the world to reach the Oregon region, and as you might have guessed, that leads to a lot of latency. So let me just show you what I mean by that. Here I have run a simple traceroute to google.com, and you can see that it actually took around 17 hops for my packet to reach the Google server. So this is the first hop; assume this is the first router, then the packet proceeds from the first to the second, the second to the third, the third to the fourth, and so on. That is how many hops were required for my packet to travel from client to destination. Now, it can actually go much higher. This is because Google has local servers in India, but a lot of clients host their websites in North Virginia, Ireland, or even Oregon, and to reach those, it can require 20 or sometimes 25 hops. That leads to a lot of latency, so the website basically starts to feel slow. For general use, this approach is definitely good.
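The hop counting we just did by eye can be automated. Here is a rough sketch that counts hops in saved traceroute output; the sample output below is shortened and the addresses are made up (real traceroute prints one numbered line per hop, with per-probe timings).

```python
# Shortened, invented traceroute output: a header line followed by
# one numbered line per hop.
sample_output = """traceroute to google.com (142.250.183.14), 30 hops max
 1  192.168.1.1     1.2 ms
 2  10.10.0.1       5.4 ms
 3  203.0.113.7    19.8 ms
 4  142.250.183.14 24.1 ms"""

def count_hops(output):
    """Count hop lines: lines whose first token is the hop number."""
    hops = 0
    for line in output.splitlines():
        tokens = line.split()
        if tokens and tokens[0].isdigit():
            hops += 1
    return hops

print(count_hops(sample_output))  # 4
```

Comparing hop counts (and the per-hop latencies traceroute prints) before and after moving to a dedicated link is a simple way to see the latency argument made above.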
However, when it comes to critical applications where latency is one of the most important factors, the internet is not the best option. So let’s look at the challenges. First, the internet is a good option only if the amount of traffic is within a certain limit, and if you use the internet, you will always experience latency. Many organisations now use hybrid architectures: some of the servers are in a datacenter, while others are in AWS. In one of the companies that I used to work with, we had a hybrid architecture. Some of the application servers were in the datacenter and some were in the AWS cloud, and both sets of servers needed to communicate for the website to work properly. So, in order for a client request to be fulfilled successfully, both the servers and the network connectivity between AWS and the datacenter must be working well. Now, the network connectivity between the datacenter and the VPC goes through an ISP. So if the ISP is down, or if the ISP is slow, then the entire website gets hampered. That’s one thing. If the ISP does not provide the requested bandwidth, the website again becomes slow. So there are a lot of challenges when you go through the internet, specifically if you have your infrastructure both in a datacenter and in the cloud, and both of them need to communicate. Many organisations follow this hybrid approach, and this is the reason why AWS came up with Direct Connect. So, in order to solve this challenge, AWS introduced Direct Connect. AWS Direct Connect lets customers establish a dedicated network connection from the client network to one of the Direct Connect locations. So what you do is: you have a datacenter here, you have a VPC here, and you establish a direct connection, like a leased line, from the datacenter to the VPC, thus bypassing the internet.
And this is very effective, because you no longer have to worry about the unpredictable slowdowns of the public Internet.
You have Direct Connect, you have an extremely fast network between your data centre and your VPC, and you can go ahead and implement a hybrid architecture or whatever else you want on top of it. A direct connection between the customer's data centre and AWS brings several benefits. The first is consistent network performance. I'm sure many of you are familiar with this: on a home WiFi connection, you will not get fast speeds all the time; at certain times it is very slow, and at certain times it doesn't work at all. That is inconsistent network performance. With Direct Connect, you get consistent performance because that amount of bandwidth is allocated to you and is not oversubscribed. That's the first benefit. The second is lower bandwidth cost. Again, we can use the ISP analogy: when you go to an Internet service provider for a home connection, they have various plans, say 30 GB, 40 GB, or 100 GB, and the higher you go, the more you pay. Similarly, at data-centre scale, the more bandwidth you buy from an ISP, the more it costs. With Direct Connect, since it is essentially a leased line connected directly to AWS, the per-unit bandwidth cost is generally much lower than what you would pay an ISP for Internet transit. That's the second benefit; the third is private connectivity into your VPC. This is also quite valuable because your traffic does not traverse the public Internet, so you have far less to worry about in terms of man-in-the-middle attacks. You have a direct, dedicated line to your VPC.
So these are a few benefits. Now let me actually show you. This is the architecture of a Direct Connect connection: on the left-hand side you have your data centre, and on the right-hand side you have your Amazon VPC. In the middle, you have a Direct Connect provider. What you do is run a line from your data centre to the Direct Connect provider, and the Direct Connect provider has dedicated fibre-optic connectivity into AWS. So all you have to worry about is connecting your data centre to one of the Direct Connect providers. To do that, you contact a Direct Connect provider, who will help you establish a line between your data centre and their facility; after that, you don't have to worry, because they take care of the rest of the path. So let's do one thing: let me show you at a high level how that would work. This is the AWS Direct Connect page. As you can see, the first step is to select a location for Direct Connect. Remember that Direct Connect locations are available only at specific sites, so you have to choose the location associated with the region you care about; for every region, you will establish a separate Direct Connect connection. Once you select the location, you can configure the virtual interface. If you look over here, the first part is the connection, then the virtual interface, and then you connect your data centre, office, or colocation environment to AWS Direct Connect. So let's go to Connections and click on Create Connection. I'll just name it KP Labs Testing, and then there are various Direct Connect locations listed; you can select any one of them. Let me just pick one at random.
Then you have to specify the port speed. By default, Direct Connect dedicated connections come with port speeds of 1 Gbps or 10 Gbps; depending on how fast you need this pipe to be, you can choose one of the two. Other speeds are also available, which we will discuss later. Select one and click on Create. When you click Create, the connection goes into the "requested" state, and you have to wait for AWS to approve the request. Only once it is approved can you go to the virtual interfaces section and create new virtual interfaces. We'll discuss virtual interfaces in more detail in the upcoming lecture. But keep in mind that if you have a Direct Connect connection between your data centre and AWS, you no longer need the traditional method of reaching S3 over the Internet: you can send that traffic across the Direct Connect connection, which can reach S3 without touching the public Internet. This makes things extremely fast.
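The console steps above can also be scripted. The sketch below uses boto3's Direct Connect client; the connection name and location code are hypothetical examples (real location codes come from the `describe_locations` API), and only the two default port speeds are accepted by the little validation helper. The boto3 import is deferred into the function so the helper can run on its own:

```python
# Sketch of requesting a Direct Connect connection programmatically.
# The location code and connection name used below are hypothetical.

DEFAULT_PORT_SPEEDS = ("1Gbps", "10Gbps")  # default dedicated-port options

def validate_bandwidth(bandwidth: str) -> str:
    """Reject anything that is not one of the default port speeds."""
    if bandwidth not in DEFAULT_PORT_SPEEDS:
        raise ValueError(f"unsupported port speed: {bandwidth!r}")
    return bandwidth

def request_dx_connection(name: str, location: str, bandwidth: str):
    """Create the connection request; it sits in the 'requested' state
    until AWS approves it."""
    import boto3  # deferred so validate_bandwidth works without boto3
    client = boto3.client("directconnect")
    return client.create_connection(
        location=location,                        # a DX location code
        bandwidth=validate_bandwidth(bandwidth),  # "1Gbps" or "10Gbps"
        connectionName=name,
    )

if __name__ == "__main__":
    # Hypothetical invocation -- needs AWS credentials and a real
    # location code, so it is shown commented out:
    # request_dx_connection("KP-Labs-Testing", "EqDC2", "1Gbps")
    print(validate_bandwidth("1Gbps"))
```

As in the console flow, the returned connection starts in the "requested" state; you would poll `describe_connections` to see when it is approved before moving on to virtual interfaces.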