37. Understanding Serverless Architecture & Lambda
Hey everyone, and welcome to the Knowledge Portal video series. In today’s lecture, we are going to speak about one of the hottest topics in the market, which is serverless computing. So let’s go ahead and understand what serverless computing is. Now, if we look at traditional infrastructure-based architectures, what we really have is a data center. On top of the data center, we have some kind of virtualization technology.
So, similar to how DigitalOcean uses KVM, AWS uses HVM; some kind of virtualization technology is used. And on top of the virtualization technology, you have virtual machines that are created. So this is a very high-level overview of the architecture of a cloud platform; this topic was covered in greater depth in previous lectures. Now, when we talk about this type of architecture, there is one big challenge: managing the servers. There are a lot of challenges when we manage our own servers. For the first challenge, imagine that your organization has designed an application that must be kept up and running, and you are the solutions architect. The first thing that you need to do is understand how many resources your application needs: how much RAM is required, and how much CPU?
So you have to check those aspects. Once you have estimated the capacity requirements, you have to launch EC2 instances in such a manner that they are highly available. If you launch only one EC2 instance and that instance fails, your entire website goes down, so you have to manage high availability as well. Once you launch the EC2 instances in a highly available setup, you have to make sure the proper packages are installed so that your application can run. If, for example, your application is based on Python, then you have to install the Python-related packages: install pip, and then manage the dependencies through a requirements.txt file. Sometimes this is quite a pain, and this specific step takes a lot of time. And last but not least, after your application is up and running, you have to take care of security, including firewalls, IPS, host-based intrusion detection systems, and so on. You have to take care of patching, and you have to take care of monitoring. It might happen that your server is running slowly because a lot of requests are coming in, so you have to monitor, design auto-scaling environments, and so on. As I previously stated, this is a significant challenge, particularly for a startup that cannot afford to hire a skilled solutions architect. And this is one of the reasons why serverless computing, and platform as a service before it, really started to boom.
So when we talk about platform as a service, we already discussed this in previous lectures; I’ll just revise it here. With a platform as a service, such as Google App Engine on Google Cloud Platform, you don’t have to worry about all of these things: servers, patching, installing requirements, and so on. The only thing you have to do is upload your application and make sure it is up and running. When you design your first app there, you select the language your app is written in; for example, I select Python. And what Google will do is take care of everything: installing the Python packages, installing the dependencies, taking care of the back-end servers, patching, security, etc. This is one of the reasons why PaaS providers became very famous. However, AWS took it one step further with the launch of an event-driven service called AWS Lambda. Now, this is one of the most amazing services, and it really changes the way things work. So AWS Lambda is an event-driven service: when a configured event occurs, the Lambda function runs. What do I mean by that? Let’s assume that you have a Lambda function that shrinks a video to a smaller-sized format.
So a user uploads a video to S3, and that upload is the event. As soon as the video gets uploaded to S3, your function will run, and it will convert the video in such a way that its size decreases to a very minimal amount. This is the kind of thing AWS Lambda can do. An event can also be time-based: for example, at 10:00 at night a certain function should run; that schedule is also an event. The best part about Lambda is that you are only charged for the amount of time your function takes to compute. If your function runs for 1 minute, you only get charged for 1 minute. If your function runs for 500 milliseconds, you only get charged for 500 milliseconds. So before we conclude this lecture, let’s go ahead and understand how that would really work.
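As a rough sketch of what such an event-driven function looks like (this is illustrative, not the exact code from the demo; the bucket and key names are hypothetical), an S3-triggered Node.js handler receives the event like this:

    // Sketch of an S3-triggered Lambda handler (Node.js, callback style).
    // The event object is what S3 delivers when an object is uploaded.
    exports.handler = (event, context, callback) => {
        const record = event.Records[0];          // S3 sends one or more records per event
        const bucket = record.s3.bucket.name;     // e.g. "my-video-uploads" (hypothetical)
        const key = record.s3.object.key;         // e.g. "raw/video.mp4" (hypothetical)
        console.log('New upload: s3://' + bucket + '/' + key);
        // ...the video-shrinking logic would go here...
        callback(null, 'Processed ' + key);
    };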
So this is my AWS account, and I have a simple Lambda function. If I just click it over here, this is a simple Hello World-based Lambda function, and we will attempt to execute it. So what I’ll do is click here and select “Test.” You have to assign certain values, so let me give one example: I’ll say “kplabs.” The function will take this particular value and output it. So once I save and test, you will see “kplabs” in the output. I’ll click on “Save and Test,” and it is executing this specific function. And now, if you go into the details, you will see that the output we are getting is “kplabs.” Now, this is a very simple function. One thing to keep in mind is that if you look over here, it tells you the billed duration, i.e., how much time I was charged for. So I got charged for 100 milliseconds. It also shows how much memory was used, which is only 19 MB. So, to summarize: the time I was charged for this function was 100 milliseconds, and the maximum memory used was 19 MB. This is how it really works. If my function does not run during the night, I am not charged. If my function runs for two minutes throughout the day, I am charged only for those two minutes. So this is one of the beauties of serverless computing.
38. Getting Started with AWS Lambda
Hey everyone, and welcome to the Knowledge Portal video series. Now, in the previous lecture, we were discussing serverless computing. So when you talk about serverless computing, it does not mean that the servers are not present. It just indicates that the servers are definitely present, but they are completely managed on the cloud provider side. So as a user, you don’t really have to manage servers. You only have to focus on your functions or your code. And this is why it is called “serverless computing,” because the entire server management part is taken care of by the back-end provider. So with this, let’s go ahead and understand AWS Lambda in great detail.
Now, as a very simple overview, AWS Lambda is a fully managed compute service that runs your code in response to an event or on a time-based interval. When we talk about responding to an event, an event could be anything from someone uploading an object to S3 to someone deleting an object from S3. All of those are events. So your Lambda code can run in response to an event or based on a time interval; if you want your Lambda function to run every 15 minutes, it can do that as well (see the sketch below). We will be discussing this in great detail in the relevant lecture. Now, there are two important things that you have to remember: first, what Lambda does for me, and second, what I need to do for Lambda. The first section is quite important. What Lambda does for me: it manages the servers for you, and that’s why it’s called serverless. It also manages the capacity needs, so you don’t need to do any capacity planning for how many resources your application might require. Let’s assume tomorrow morning there’s a big promotion and a lot of visitors might be coming; I don’t really have to do capacity planning.
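As a quick sketch of those time-based triggers: the schedule expressions accepted by CloudWatch Events (now EventBridge) use rate() or cron() syntax. Both examples below are illustrative:

    # Run a Lambda function every 15 minutes:
    rate(15 minutes)

    # Run every day at 22:00 UTC (fields: minute hour day-of-month month day-of-week year):
    cron(0 22 * * ? *)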
The third thing is deployment, followed by scaling and high availability: you don’t really have to worry about your servers going down. All of these things are managed by AWS. And the last thing is operating system updates and security; this is also something you don’t have to worry about. So in a typical startup, specifically one where there are very few people and there are budget constraints, they cannot really hire a solutions architect who can do all of those things for them, because it would be quite expensive. So what they really do is choose the Lambda way, because they don’t have to worry about the servers and other things. In that case, what I need to do for Lambda is: bring your own stable code that works. And the second thing is that you pay only for what you use; you don’t pay for idle resources. So if your function runs only three times a day for, say, 100 milliseconds each, you are only charged for 300 milliseconds per day. You don’t really have to worry about anything else. So those are the fundamentals of Lambda.
What we’ll do is create our first Lambda function and look into how that would work. For our test purposes, we’ll create a simple “Hello World” application that prints “Hello World.” The programming language that we will be choosing is Node.js, and the code’s purpose is to display “Hello, World!” Is an IAM role required? Not right now. If your Lambda function needs to connect to other AWS resources such as S3, EC2, CloudWatch, and so on, an IAM role will be required. And the last thing, running inside a VPC, is not really required right now; we will do it once we have a use case for it. So let’s go ahead and implement our first Lambda function. I’ll go to Services, and I’ll type “Lambda.” Here you see the description: run code without thinking about servers. This is what serverless basically means: let AWS think about the servers and manage them for you. Okay, so this is the Lambda interface page. They have recently redesigned the entire page, and the change is something many people disliked. So this is the new AWS Lambda page. The first thing that you need to do is create a function, so I’ll click here and create a function. What AWS gives you is a set of prebuilt functions (blueprints) that do a lot of things.
So there are a lot of functions that are already created; you’ll see that there are more than eleven pages of them. The majority of the common use cases may already be covered by these pre-made functions, though some may not be, so often you don’t really have to write your own code. But, for the sake of time, we’ll start from scratch by clicking on “Author from scratch.” And here you’ll see it says “add a trigger.” If I click here, there are a lot of triggers present, like API Gateway, CloudWatch, DynamoDB, and S3. As far as triggers go, if I select S3, I can configure something like “if someone uploads an object to an S3 bucket, then this Lambda function should run.” This is something we have already discussed: your Lambda function can respond to a particular event. For the time being, we’ll not select a trigger; we’ll just leave it blank. You must now give the function a name: I’ll say “kplabs-hello,” and the description is that it outputs Hello World. The runtime is Node.js; there are other runtimes present as well, depending on which platform you’re more familiar with. Since the default code already does what we want, we’ll not touch it right now; we’ll leave it just the way it is. Now you have to select a role. I’ll create a new role from the templates and call it “kplabs-lambda-role.” When you go into the advanced settings, it will ask how much memory is required. For this “Hello, World” display we don’t really need much memory, so I’ll just put in 128 MB and a timeout of 3 seconds, which is good enough.
As for VPC, we don’t really want to run it in a VPC right now, so we’ll just leave it as is and change these things as needed. I’ll select Next, and I’ll click on Create Function. It might take a few seconds for the function to be created. So our function is created; let’s go ahead and click on Test, and you can just leave the test event the way it is. I’ll select “Test,” and I’ll click on “Save and Test.” It is executing the function, and now you see it says the function execution succeeded. If I click on Details over here, the output that you see is “Hello from Lambda.” This is coming from the code, where you have a callback saying “Hello from Lambda.” Now, there are a few important things that you have to remember. First, it states that the function took 38 milliseconds to complete, but the billed duration is 100 milliseconds, because 100 milliseconds is the minimum billing duration that you have to pay for. The configured resources are 128 MB, but the maximum memory used is 20 MB. And this is the report that you are getting: you are billed for 100 milliseconds only. Now, one important thing to remember: if your function gets executed only five times a day, you pay only for those five executions, no more and no less. And this is the beauty of AWS Lambda.
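For reference, the default Node.js code the console generated at the time looked roughly like this (the exact template varies between console versions):

    exports.handler = (event, context, callback) => {
        // The second argument to callback() becomes the function's output.
        callback(null, 'Hello from Lambda');
    };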
So, returning to the final pricing slide: with AWS Lambda we definitely pay only for what we use, and we are charged based on the number of requests for our functions as well as the time the code executes. As far as the free tier is concerned, which most of us like, we get 1 million free requests per month. This is quite important to remember. There is one more thing I would like to show you: if you search for Lambda pricing, let me just open this up, the pricing actually differs based on the memory that you assign to a Lambda function. For 128 MB of memory, you get a certain number of free tier seconds per month, as well as a certain price. The higher you go with memory, the more the price increases and the more the free tier seconds decrease. This is one important thing to remember. By default, you see that Lambda provides 1 million free requests per month, and you have a certain number of GB-seconds of compute time per month available as part of your free tier.
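To see why the free tier seconds shrink as the memory grows, here is the arithmetic, assuming the commonly quoted free tier allowance of 400,000 GB-seconds of compute per month:

    400,000 GB-seconds / 0.125 GB (128 MB)  = 3,200,000 free seconds per month
    400,000 GB-seconds / 1.000 GB (1024 MB) =   400,000 free seconds per month

So a function configured with more memory burns through the same GB-second allowance in fewer seconds of execution.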
39. Understanding Reverse Proxy
Hey everyone, and welcome to the Knowledge Portal video series. Today we will be speaking about one of the most amazing features of NGINX, which is reverse proxying. So let’s go ahead and understand what a reverse proxy is all about. In simple terms, a reverse proxy is a type of proxy server that retrieves resources on behalf of a client from one or more servers.
So let’s understand how this would really work. You have a typical internal network with an NGINX server over here and an application server in the backend. Now, whenever a client wants to access this application server, it first sends the request to the NGINX server, which internally forwards the request to the application server. Similarly, the application server sends the response to the NGINX server, and the NGINX server forwards the response back to the client. Because the NGINX server sits between the client and the application server, this type of architecture is extremely useful: your application server is not directly exposed to the client. This is called a “reverse proxy” because NGINX is taking the request from the client, sending it to the back-end server, taking the response, and sending it back to the client.
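A minimal sketch of such a configuration (the back-end address is illustrative; it matches the demo later in this lecture):

    # /etc/nginx/conf.d/reverse-proxy.conf -- minimal reverse proxy sketch
    server {
        listen 80;
        server_name example.com;

        location / {
            # Forward every request to the back-end application server.
            proxy_pass http://172.17.0.3;
            # Pass the original client address and host header to the backend.
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $host;
        }
    }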
Now, since it is sitting in between, it can actually do a lot of things. For example, it can provide some kind of DDoS protection: if a client attempts to attack with a large number of packets, that can be blocked at the NGINX level. It can also perform other functions such as web application firewalling, caching, and compression. When you have NGINX sitting in front, you have a lot of options, and that is the power of a reverse proxy. A reverse proxy can play a key role in achieving a lot of use cases. One very important use case is that it hides the existence of the original back-end servers: the client does not really know how many servers are actually behind the scenes or what their configuration is. The client only needs to know about the NGINX server, and if we apply all the hardening there, it protects our entire website. The second important point is that it can protect the back-end servers from web application-based attacks like SQL injection; if the NGINX reverse proxy includes a web application firewall, it can also protect against denial-of-service attacks and a variety of other attacks. A third important point is that a reverse proxy can provide great caching functionality.
So what do I mean by that? Let’s assume a client requests index.html from the application server, and the server gives it back to the client. Now, after a few seconds, one more client requests the same file. Instead of NGINX asking the application server again and again, it can store the file within the NGINX server itself, and whenever a client asks for that file, NGINX will send it directly to the client without contacting the application server. This provides much faster speed and response time. That is what is meant by great caching functionality. NGINX can also optimise the content by compressing it, for example with gzip; we’ll be looking into that. It can act as an SSL termination proxy, and you can also do things like request routing and many other things. So, everything we’ve been talking about here will be put into practise in the following lectures.
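As a sketch of the caching and compression ideas above (paths, sizes, and timings are illustrative, not tuned values):

    # Define a disk-backed cache zone, keyed in shared memory.
    proxy_cache_path /var/cache/nginx keys_zone=static_cache:10m max_size=1g;

    server {
        listen 80;
        gzip on;                          # compress responses
        gzip_types text/css application/javascript;

        location / {
            proxy_cache static_cache;     # serve repeated requests from the cache
            proxy_cache_valid 200 10m;    # cache successful responses for 10 minutes
            proxy_pass http://172.17.0.3;
        }
    }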
So let’s do one thing; this has been quite theoretical so far. What we’ll do is take an NGINX server and one back-end server, do a practical, and see how it would really work. So let me show you. This is our NGINX reverse proxy, and there is one more server, which is the back-end server. You see, the IP addresses are different: the back-end server is 172.17.0.3, and the reverse proxy is 172.17.0.2. Within the back-end server, I have an index.html file that says “this is a backend server.” We’ll send requests to our NGINX reverse proxy and examine the responses we actually get. Perfect. Before that, I’ll open up the NGINX access logs on both servers, so that it becomes much clearer for us. Perfect. So currently the access logs of both of these instances are empty, which means no new requests are coming in.
So when I type “example.com” this time, let’s see what happens. It says “this is a backend server.” So, in reality, I sent a request to example.com, which is mapped to the NGINX reverse proxy. The request went to the NGINX reverse proxy, which forwarded the request to the back-end server, got the response from the back-end server, and served the response back to the client. Now, if you want to verify that, let’s see. This is the reverse proxy: it got the request from 172.17.0.1. And if you go here, this is the back-end server: the request got forwarded to it from the IP address 172.17.0.2. The client’s IP address is 172.17.0.1. So the client is 0.1, the reverse proxy is 0.2, and the back-end server is 0.3. As a result, the reverse proxy receives requests from 0.1, while the application server receives requests from 0.2. And this is exactly what is happening: the reverse proxy is receiving requests from 0.1, which is my browser, and the back-end server is receiving requests from 0.2, which is my NGINX reverse proxy. So those are the fundamentals of a reverse proxy. Now, since NGINX is sitting between the client and the back-end server, we can establish a lot of things, like a web application firewall or all the hardening-related parameters, within the reverse proxy. So this is it. In this lecture, I hope you learned the basics of what a reverse proxy is.
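If you want to reproduce this verification yourself, here is a sketch (the IP addresses follow the demo above; the log path is the default on most Linux installations):

    # On each server, watch the NGINX access log:
    tail -f /var/log/nginx/access.log

    # From the client (172.17.0.1), request the site through the reverse proxy:
    curl http://172.17.0.2/
    # The proxy's log shows a request from 172.17.0.1 (the client);
    # the backend's log shows the forwarded request from 172.17.0.2 (the proxy).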
40. Introduction to Content Delivery Networks
Hi everyone, and welcome to the Knowledge Portal video series. Today we are going to talk about content delivery networks. In the last lecture, we spoke about the basics of reverse proxies and how a reverse proxy helps with caching. This is the scenario we have taken: we take the static files, like pictures or JavaScript, and move them to the front-end server, which is NGINX. Now, whenever traffic comes from a mobile device or a browser, the traffic reaches NGINX, and NGINX can serve all of the static files directly.
And this is one of the advantages that helps not only with latency but also with resource savings. One more advantage here is that the back-end server is not directly accessible to the client, which adds a layer of security. Now, there is one issue here. Let’s assume that there are lots of visitors: say you have a big sale on your website and you get thousands of visitors. The problem is that this front-end server is a single point. All the thousands of users will come here, and all the heavy lifting, like serving the website, DDoS protection, and running a proper security suite, lands on this one server. Now, one of the things that you can do is offload all of this. By load we mean a variety of things, such as web application firewalls, DDoS protection, various caching mechanisms, proxying, and so on. All of it can be moved out of this front-end server to a content delivery network.
So we put a content delivery network between the users and our front-end server, and whatever heavy lifting has to be done can be done by this content delivery network instead of our servers. One big advantage of having a CDN in between is that the CDN is actually optimised for doing this heavy lifting, so we don’t really have to spend time designing our own web application firewall, et cetera. All of these big things can now be handled by the content delivery network. Along with that, the static assets that we were putting on the NGINX server can now sit at the CDN level. So now a web browser makes requests to the CDN, and the CDN can serve the traffic directly, without needing to contact the front-end server, as long as it is for static assets. As a result, a CDN offers numerous benefits, and most of the major websites use a CDN of some kind. Now, if you’re wondering what CDNs are available, there are two major CDNs that most small and medium enterprises generally use. One is Cloudflare. Cloudflare is an amazing CDN, specifically suited for small and medium organizations, and they also have a free plan available. So if you have a small blog, you can actually use the Cloudflare CDN for your own website. It also provides a lot of things, like DDoS protection, content-based caching, et cetera.
So if you see, they have a free plan available at $0 per month that offers various things like DDoS protection, although the CDN features are limited. If you choose a more advanced plan, such as the Professional one, you also get an SSL certificate, and the professional version comes with a web application firewall. So this is one of the CDN providers that are available. The second one is CloudFront. This is Amazon CloudFront, and CloudFront supports a lot of features. It supports dynamic content and manual cache invalidation, and the things I would really highlight are geo-targeting, cross-origin resource sharing, and, obviously, caching. It also supports the AWS Web Application Firewall service through integration with CloudFront. So AWS CloudFront supports a lot of things, and the good news is that it also comes with a free tier: if you’ve registered yourself under the free plan, you’ll see that CloudFront gives you up to 50 GB of data transfer as part of your free tier. So CloudFront is really nice, and what we’ll do is learn the basics of content delivery networks. In the upcoming lectures, we’ll set up our own CloudFront-based CDN and explore the various features that will help us not only with caching but also with the security aspect. So I hope this has been informative for you, and I’d like to see you in the next lecture.
41. Understanding Edge Locations
Hi everyone, and welcome back to a new topic called edge locations. Edge locations are an extremely important concept in CDNs, so let’s go ahead and understand how edge locations help CDN networks. As previously stated, many organizations use CDNs for a variety of reasons, including content caching, the web application firewall feature, and preventing distributed denial-of-service attacks; some organizations use them for all of these purposes. Again, the purpose varies, but I would say that many organizations use a CDN primarily for content caching and to protect against DDoS attacks. One of the most important concepts in a CDN is content caching, and edge locations are part of content caching. So let’s understand what this means.
So let’s assume that this is a world map and that your server is located somewhere around Singapore. Now, as you have a global website, you can expect users from all over the world to connect to this particular server, and the latency really depends on where the users are coming from. Assume this is a user from the US: the amount of time a packet from this user takes to reach the server will be much higher than the time it takes for a user in Australia to connect. Now, there is one more important concept here, called hops. The packet that this user sends will not reach the server directly; it will travel through something called hops, and hops are basically routers. So let’s assume this user from the US makes a request to this particular server. The request might first go to, say, somewhere in Europe, like Germany. From Germany it might travel to some other country, then to another country, and then it might reach this particular server.
So there are various intermediary hops that the packet has to travel through before it reaches this server. Let me actually show you how this works. There’s a very nice utility called traceroute, and through traceroute you can actually see where your packets are taking hops. So let me perform a traceroute to kplabs. If you see the first hop, it is reaching this specific server, or, as I would call it, router. From here it goes to the second hop, and from there it is going somewhere in Germany. If you go a bit down, this is the Germany area; going further, it reaches Amsterdam; a bit further down it reaches London, and from London it reaches the Linode server where the site is hosted. So it is taking various intermediary hops to reach the final location. This is what we were discussing: the packet will not reach this specific server directly; it will go through various countries before reaching here. So, again, the amount of time it takes users to reach this particular server depends on how far they are from the server. In this case, the user from Australia will see fewer milliseconds of latency than the user from the United States. So, what exactly is an edge location? What happens is that when you create a CloudFront distribution, CloudFront makes use of edge locations. So let’s assume that you have an MP3 file over here that you want all the users to download.
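If you want to try the same thing yourself (the hostname here is just an example):

    # Show the routers (hops) a packet passes through on its way to a host:
    traceroute example.com
    # On Windows, the equivalent command is: tracert example.com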
As a result, CloudFront will copy the MP3 file to the edge locations. It will connect to the origin server and copy the file to the edge locations over here. And now, when users make a request, they will not make the request to the origin; they will make the request to the nearest edge location. So this user from the US, instead of making requests to the server in Singapore, will make them directly to the edge location present over here. As a result, the latency is drastically reduced. This is a very important point. So this is just an overview of what an edge location means. Now, there are only three edge locations present in the diagram, but in reality there are a lot of edge locations that Amazon provides: more than 50 all over the world. Just talking about India, the country has around three edge locations: one in Chennai, one in Mumbai, and one in New Delhi. So within one country there are three edge locations, and your content can be distributed to each of them, which makes the latency much lower. So, once again, when designing your own CloudFront distribution, the edge locations you choose have a significant impact on the price you will be charged.
So let’s go back to the AWS console. If I go to CloudFront, let me just create one distribution. If I go a bit down, you see the distribution settings. There is a price class where it says “Use All Edge Locations”; this is for the best performance. But, once again, the best performance comes at a cost. Keep in mind that Amazon does offer a variety of edge location options to choose from. Let’s assume that all of your users are only from the US, Canada, and Europe; then, instead of using all the edge locations, you can just select the first option, and this will solve the problem for you. Or, if your users are mostly from Asia or a similar region, then you can use that price class instead of all the edge locations. That will basically save you some amount of cost. So those are the fundamentals of CloudFront edge locations. I hope this has been helpful, and in the next lecture we’ll go ahead and create our own distribution in CloudFront. Thanks for watching.
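For reference, these price classes appear in the CloudFront API and CLI under fixed identifiers (the region groupings here are summarized, not exhaustive):

    PriceClass_100  -- US, Canada, and Europe (lowest cost)
    PriceClass_200  -- PriceClass_100 regions plus Asia, Middle East, and Africa
    PriceClass_All  -- all edge locations (best performance, highest cost)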
42. Deploying CloudFront Distribution
Hey everyone, and welcome back. In today’s video, we will discuss deploying a CloudFront distribution. Now, in order for us to be able to deploy a CloudFront distribution, there are certain steps that are involved. Now the first step is to create a server or some kind of storage location where we can store our website files or the content that CloudFront delivers.
Now, one great thing about CloudFront is that it can integrate with S3. So you don’t necessarily need an EC2 instance; for example, we can use an S3 bucket for that. Once you have your files in your S3 bucket, the next thing is that you need to create a CloudFront distribution. Once the distribution is created, you can go ahead and load the website through CloudFront to verify that everything is working fine. And once that is done, you can go ahead and explore the various features of CloudFront. We can understand the steps with the help of the animation below. Let’s assume that this is the server, or it can be an S3 bucket, and this server or bucket has some kind of static files: an image, an HTML file, etc. Then you create a CloudFront distribution. All right? Now this CloudFront distribution can communicate with the server or the S3 bucket, which has the static files; it can even serve dynamic content. The CloudFront distribution has edge locations, which are present over here, and these edge locations are what cache a lot of the information. Now, the first time a user visits your website through a CloudFront distribution, what happens is that CloudFront will request the content from the origin server and then serve that content to the user. Once it serves the content, it will also save the content at the edge location. So let’s say that this user has requested the image.
So the first time, CloudFront will serve the image from the origin, and along with that, it will store the image in the edge locations around the world. Now, the next time, let’s say there is another user who also loads the website. This time, the image will be served from the edge location: CloudFront will not send a request to the origin server for the image; instead, the image will be served directly from the edge locations. So this is a high-level overview of the workflow. Let’s go ahead and do the first step for this video; in the next part we will go ahead and deploy the CloudFront distribution. So I’m in my AWS management console. The first thing that we need to do is have a location where we can store our images and the HTML file. Again, you could create an EC2 instance, but that is something we will avoid for the demo; instead we’ll create a simple S3 bucket. I’ll go to Services, and I’ll select S3. Within here, I’ll create a new bucket, which I’ll call “mydemocloudfront,” and I’ll click on Create. Great. So this is the S3 bucket that is available to us. The next thing is that we need to upload certain content here. I’ve basically got two files: one is a simple index.html, and the second is an image, which I’m really fond of. So this is the image. We’ll be uploading both of these into our S3 bucket. So from my screen, I click on Upload, and from here I will upload both of these files.
Great, so you have the index.html and you have the shift.jpg. Again, you can use your own custom content as well. So basically, if I can show you what the index.html file is all about, I’ll just open up a notepad: this index.html file is a simple file that basically contains “Welcome to the website,” and that’s about it. All right? And the image is something that we have already seen. You can use your own custom content for your demo. Great. So once your content has been uploaded, let’s quickly go to the permissions; in fact, I wanted to show you a few things. AWS has recently released a feature where, by default, you cannot really make things public; this is quite a new feature. So I’ll just deselect all the public access blocks and click Save — this is just for our testing purposes — and type “confirm.” Great. So the public access settings have been updated. Now let’s go to the properties, and within static website hosting, I’ll select the first option, which basically states to use this bucket to host a website. The index document would be index.html, and I click on Save. Great. So now the last thing that you need to do is change the permissions. We’ll go to the access control list here, and for public access we’ll select that everyone is able to read the objects. Okay, let’s click on Save. So everyone will be able to read the objects. And now, if you notice, let me go back to S3: you see that this bucket is now labeled “Public.” This is a really great feature, because if I go to the S3 console, I’ll be able to see which buckets are public in a simple and easy-to-understand way. So now, within the bucket, I’ll quickly select both of these objects and make them public.
All right, so that’s about it. In order to verify that everything is working correctly, let’s click on one of the objects. I’ll copy the object URL, and if you paste it in the browser, you should be able to see “Welcome to the website” over here. In a similar manner, we’ll take the URL for the image that we have. Let me put it in the browser, and you should be able to see the image. Great. Since we have static website hosting enabled for the S3 bucket, let’s take the website endpoint URL. This is basically the URL of the S3 bucket website. Now, if you place the URL over here, you should be able to see the index.html website. All right, so this is the S3 bucket, which is hosting both our website and our image. Now, coming back to our animation diagram: we have our origin, which in our case is S3 hosting the image and the index.html file. Now, instead of users directly accessing our origin, what we want is to create a CloudFront distribution that will handle all the requests. This is something that we’ll create in the upcoming video. So this was the high-level overview. I hope this video has been informative for you, and I look forward to seeing you in the next video.
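As an aside, the same public-read access can also be expressed as a bucket policy rather than per-object ACLs (a minimal sketch; the bucket name follows this demo):

    {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::mydemocloudfront/*"
        }]
    }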
43. Deploying CloudFront Distribution – Part 02
Hey everyone, and welcome to the second part of our video series on deploying CloudFront. In today’s video, we will be performing the second step, which is creating the CloudFront distribution. Now, I’m in my AWS management console; let’s go to the CloudFront service. I already have a CloudFront distribution available, which was used for a different demo. So let’s go ahead and create a new distribution over here. There are two types of distribution: one is Web, and the second is RTMP. For our demo, we’ll be using a web distribution. So we must specify the origin domain name here.
So the origin domain name is essentially where CloudFront will get the data from. In our case, the origin is essentially the S3 bucket. If you just click over here, it will show you the list of S3 buckets that are available within your account. If you remember, our bucket name was “mydemocloudfront.” So once you have selected the origin domain name, let’s go a bit further. Now, within the price classes, you see it says “Use All Edge Locations.” This is important, because let’s say you have customers coming from all across the world; in that case, you can make use of all the edge locations. That basically means that CloudFront will begin storing the cache in all of the edge locations around the world. In that case, if a customer is coming from, say, the US region, he can be served all the files from the nearest edge location. But if your customers are not from the United States, and you know they are only from Asia or possibly Africa, there is no need to store your data in every edge location.
So in such a case, you can select one of the other options: you have “US, Canada and Europe,” and you have “US, Canada, Europe, Asia and Africa.” Depending on the location of the customers who visit your website, you can choose one of them. All right, so let me just select the second option here. The next thing is that you can specify the default root object over here, so let’s specify index.html. Any time a user visits the website root, the index.html file will be returned. Once you have done that, you can go ahead and create the distribution. All right, so this is the distribution here; let me just sort the list. The distribution origin here, you see, is mydemocloudfront.s3.amazonaws.com. Currently, the status is “In Progress”; it takes a little bit of time for the CloudFront distribution to get created. I’ll pause the video for some time, and once the status has changed, we’ll resume. So, after about 10 to 15 minutes, the CloudFront distribution’s status has changed to “Deployed.” Now you’ll need to take this domain name over here: we’ll copy the domain name associated with the CloudFront distribution and put it in the browser. And here it is again, the “Welcome to the website” page. So this is how you can create the CloudFront distribution and associate it with the S3 bucket. Before we conclude this video, I wanted to show you a few more things. Within this diagram, we were discussing that the first time a user visits the CloudFront distribution, the request would be sent to the origin, and the image or whatever file is present would be served back.
Now, along with that, the CloudFront distribution will also store the static contents within the edge locations over here. The second time a user visits the same website, the content will be served from the edge location. All right? So let’s quickly look at what exactly that might look like. Let me do one thing: I’ll copy the CloudFront domain, and let’s do a curl, and this time we’ll request the shift.jpg, which is an image file. What we can do, in an easier way, is print just the response headers. If you look at the X-Cache header, you’ll notice that it says “Hit from cloudfront.” That basically means that this specific image file has been served from a CloudFront edge location. So this is the basic information on how we can go ahead and deploy a CloudFront distribution. We also looked into how, when multiple requests are made, the contents are served from the edge location instead of sending the request to the origin and fetching the same content multiple times. So with this, we’ll conclude this video. I hope this video has been informative for you, and I look forward to seeing you in the next video.
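A sketch of that header check (the distribution domain is a placeholder; shift.jpg is the image from this demo):

    # -I fetches only the response headers.
    curl -I https://d1234abcd.cloudfront.net/shift.jpg

    # Relevant response headers:
    #   x-cache: Miss from cloudfront    (first request: fetched from the origin)
    #   x-cache: Hit from cloudfront     (repeat requests: served from the edge)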
44. S3 Transfer Acceleration
Hey everyone, and welcome back. In today’s video, we will be discussing S3 Transfer Acceleration. Now, S3 Transfer Acceleration basically utilises the CloudFront edge network to accelerate the uploads that you make to your S3 buckets. Instead of uploading directly to S3, you can use a unique URL to upload to the nearest edge location, which will then transfer the file to S3. So let’s understand this with the animation, so that it becomes easier for us to grasp. Let’s say that you have an S3 bucket set up over here, and you have multiple users across different continents.
So they might either be putting objects into the bucket or fetching them. Let’s assume that they are uploading certain files to your S3 bucket: they have to upload them across continents, and there can be a lot of latency involved if you are uploading across continents or if your remote users are quite far from where your infrastructure is. With S3 Transfer Acceleration, what really happens is that you have edge locations across the various countries where the users are present. Now, instead of users uploading the data directly to the S3 bucket, they can upload it to the nearest edge location, and the edge location will then upload the data to the S3 bucket. Internally, AWS has a very strong backbone network, and they have optimised it in such a way that the data transfer from the edge location to the S3 bucket will typically be much faster. So, with transfer acceleration, it becomes much easier, because the user only has to send the data to the edge location.
Edge locations are near the user, so the latency is reduced, and the edge locations take care of fetching or putting the data into the S3 bucket. So this is what transfer acceleration is all about. Let’s look at this in a practical way so that it really makes sense. I’m in my AWS management console, and we’ll go to S3. There are a lot of buckets right now; each one is used for a specific demo within our courses. What we’ll do is create a bucket called “kplabs-transfer-acceleration,” and let’s create it in North Virginia. All right, North Virginia, that sounds good. So this is the name of the bucket; I’ll go ahead and create it. Once the bucket is created, let’s quickly do a search: I’ll search for “kplabs-transfer-acceleration,” click here, and go to Properties. Within Properties, if you go a bit down, within the advanced settings there is an option for transfer acceleration. Let’s click here, and we’ll go ahead and enable transfer acceleration. You can also do a comparison of the data transfer speed by region: if you just click here, it will tell you how much speed difference you might typically see if the data is transferred across various regions. It might happen that your users are in different countries, so this gives you an idea of the speed difference you may see when using S3 Transfer Acceleration versus direct uploads. This comparison takes a little time to complete; meanwhile, let’s do one thing.
Let’s go ahead and enable S3 transfer acceleration. Once you have clicked on Enabled, you can click on Save. Perfect. Once you’ve saved it, you’ll notice that you’ve received a unique endpoint, which is kplabs-transfer-acceleration.s3-accelerate.amazonaws.com. So let’s do one thing: let’s upload a sample picture to our S3 bucket to see, at a high level, how this might work. I have uploaded a photo, one.jpg, and I’ll do an upload. Great. So our photo is now uploaded. What we’ll do is quickly make this picture public. I’ll make it public and I’ll save it, and it seems that there is an error; let’s look into it. It states that access is denied. Interesting. I suspect that this has something to do with one of the new features that AWS has released related to S3. Anyway, we’ll be discussing this in the relevant section if it starts to come up in the exams. For the time being, I’ll go to the public access settings, do a Save, and type “confirm.” So basically, this setting was blocking the objects from being made public; this is actually a good feature that AWS has launched. Let’s try it once again: I’ll make it public. Perfect.
So now you have success over here, and our photo is now public. Let’s do one thing: let’s copy the link associated with this picture. I’ll copy this, and let’s do a quick curl on it. If I paste the URL, you should see the HTTP response headers (we have a dedicated course on NGINX that explains these in detail), but for the time being we are more interested in the Server header. It says that the server that responded is Amazon S3. And if you put this URL in the browser, let me copy it again and put it here, you’ll typically see the photo: this is the ATR aircraft that I recently travelled in. So this request is served directly by S3, which means that if I’m in India and the S3 bucket is in North Virginia, there will always be some latency.
So we have enabled transfer acceleration, and ideally we should make use of it here. In order to do that, let’s quickly go to the properties — the bucket properties, not the object properties. We already had a unique URL over here within transfer acceleration. Let’s copy this unique endpoint, and if you do a curl request, the only thing that you have to do is change the host part of the URL. I’ll paste the new path, and I’ll press Enter. This time the Server header is still Amazon S3; however, if you notice, the X-Cache header says “Miss from cloudfront,” which indicates that a CloudFront edge location is involved over here. You also have the CloudFront request ID associated with this specific request. So in case you put the same link in the browser, let me just copy it once more and paste it here, you will typically see the same image; however, this time the image came through the CloudFront edge network and not directly from S3. The first time you query it, it will typically be a miss, but the second time you query it, it can be a hit if the edge location has cached it.
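A sketch of that comparison (the bucket and object names follow this demo):

    # Regular regional endpoint (no CloudFront involved):
    curl -I https://kplabs-transfer-acceleration.s3.amazonaws.com/one.jpg
    #   Server: AmazonS3

    # Transfer-accelerated endpoint (goes through the CloudFront edge network):
    curl -I https://kplabs-transfer-acceleration.s3-accelerate.amazonaws.com/one.jpg
    #   Server: AmazonS3
    #   x-cache: Miss from cloudfront   (an edge location handled the request)
    #   x-amz-cf-id: ...                (CloudFront request ID)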