1. Solutions Architecture Discussions Overview
So far, we’ve been learning all about individual technologies. But here, we’re going to learn how they fit together, and so we’re going to review solution architectures. Now, this is, to me, the best part of this course. It’s really the part I’m most proud of, because almost no one does this: showing how these technologies all fit together and work together to form a solution architecture.
So you need to be 100% comfortable with everything I’m going to explain in this section going into the exam, because this is pure solution architecture. We’ll go over how you can think about solution architecture iteratively, through simple case studies: WhatsTheTime.com, MyClothes.com, MyWordPress.com, how to instantiate applications quickly, and Beanstalk. So overall, it will be a natural progression. It will be increasingly complex, but hopefully it will give you some good perspective on EC2 versus ELB versus ASG versus EBS, EFS, RDS, and ElastiCache. All these technologies interact with each other; how do they all fit together? Okay, that’s it. Let’s get started with solution architecture.
2. WhatsTheTime.com
Okay, so let’s get started with our first solution architecture discussion. And I’m really excited because we’ll see so many different topics all at once and really understand how they fit together and what challenges we face as solutions architects. So the first website is whatisthetime.com, and whatisthetime.com allows people to know what time it is. I know it sounds stupid, but at least it’s so easy that everyone understands it, and we’ll be able to talk about it at length. So we don’t need a database because it’s so simple. Each instance, each server, knows what time it is, and we want to start small. We’re willing to accept downtime, but overall, maybe our app will get more and more popular. People really want to know the time around the world. And so we’ll need to scale vertically and horizontally, maybe removing downtime. And let’s go through the solutions architect journey. You’ll notice that there are numerous options for how we can proceed with this app. So let’s start really simple. Okay? Let’s start from the very beginning. You’re a solutions architect, and you say, “You know what would be great?” You have a t2.micro instance, and you have a user.
And the user asks what time it is, and the instance says, “Okay, it’s 5:30 p.m.” Done. This is my app. So we have a public EC2 instance, and because we want the EC2 instance to have a static IP address, just in case something happens and we need to restart it, I will attach an Elastic IP address to it. So this is my first architecture. It’s working really well. Our users are able to access our application, and we’re getting great feedback. So now what’s happening is that our users are really having a good time using our application. So they say to their friends, “Hey, you should also use this application.” So another friend comes in and asks, “What time is it?” It’s 7:30 p.m. And another friend comes in: what time is it? 6:30 p.m. And so we realised here that our application was getting more and more traffic, and certainly the t2.micro instance wasn’t enough. And so, as a solutions architect, we say, “Wait a minute; maybe we should replace that t2.micro instance with something a little bit bigger to handle the load.” So that’s called vertical scaling. Perhaps we’ll make it an m5.large instance. So what we do is that we stop the instance, change the instance type, and then start the instance again. And here we go. This is an m5.large instance. So what happened here is that it has the same public IP because it has an Elastic IP address. So people are still able to access our application. But we have experienced downtime while upgrading to the m5.large. And so our users were really unhappy during that moment; they were not able to access our application. So this works, but this isn’t great, right?
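To make that vertical-scaling step concrete, here’s a minimal boto3 sketch of stop, resize, start. The instance ID and region are placeholders I made up, not values from the lecture:

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")
instance_id = "i-0123456789abcdef0"  # hypothetical instance ID

# Stop the instance (this is where the downtime begins).
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# Change the instance type while it is stopped: vertical scaling.
ec2.modify_instance_attribute(
    InstanceId=instance_id,
    InstanceType={"Value": "m5.large"},
)

# Start it again; the attached Elastic IP keeps the same public IP.
ec2.start_instances(InstanceIds=[instance_id])
```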
So next, we’re going to be really popular, and it’s time to scale horizontally. So we have one public IP address with an Elastic IP address attached to it, on our m5.large instance. And now we’re getting tonnes of users, so they’re all asking: what time is it? And so now we want to scale horizontally. So we begin by adding two more EC2 instances. They’re all m5.large in size, and each has an Elastic IP attached to it. So now, on top of having three EC2 instances, we have three Elastic IPs. And so our users need to be aware of the exact values of these three Elastic IP addresses to talk to our instances. And so that’s called horizontal scaling. We’re not doing badly, but we’re starting to see limits. The users need to be aware of more and more IPs, and we have to manage more infrastructure, and it’s pretty tricky, right? So okay, let’s change the approach. Now we have three m5.large EC2 instances, and let’s remove the Elastic IPs, because they’re hard to manage. There are only five Elastic IPs per region per account by default, so it’s not a lot. Instead, our users will take advantage of Route 53. So we’ve set up Route 53, and the website URL is api.whatisthetime.com.
And we’ve decided it’s going to be an A record with a TTL of 1 hour. An A record means that a DNS name like this will provide me with a list of IPs. So keep in mind that an A record maps a hostname to IP addresses. So great, the users query Route 53, and then they get the IP addresses of our EC2 instances, which can change over time. It makes no difference, because Route 53 will be updated and kept in sync. And so our users are now able to access our EC2 instances, and we don’t have any Elastic IPs to manage anymore. So using Route 53, we’ve made some good improvements. But now we want to be able to scale, you know, to be able to add and remove instances on the fly. And so when we do remove an instance, what happens? Well, it seems like these users on top were talking to this m5.large instance, but now it’s gone. And it turns out that if they did a Route 53 query, because the TTL was 1 hour, they’re using the same response for 1 hour. So they’ll try to connect to the instance for an hour, but it’s gone. And so here it’s not really great, because even though the other users are having a good time, and maybe after 1 hour these users will be able to connect to the remaining instances, they’re not having a good time right now, because they think that our application is down, and that’s really, really bad. Okay, so this is an architecture, and we see the limits of it. So how can we push this a little bit further?
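For reference, here’s a minimal sketch of creating that A record with the one-hour TTL. The hosted zone ID and instance IPs are placeholders:

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABC",  # placeholder hosted zone
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "api.whatisthetime.com",
                "Type": "A",
                "TTL": 3600,  # one hour: clients cache these IPs that long
                "ResourceRecords": [
                    {"Value": "203.0.113.10"},  # placeholder instance IPs
                    {"Value": "203.0.113.11"},
                    {"Value": "203.0.113.12"},
                ],
            },
        }]
    },
)
```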
So let’s talk about adding a load balancer. So instead of having public instances, we no longer have any. We have private EC2 instances, launched in the same availability zone because we don’t know any better. So we’ve launched them manually; they’re m5.large instances. And we are following this course, so we said, “Okay, let’s use a load balancer.” And you know what? Furthermore, it will have health checks, so if one instance is down or not working, we will not route traffic from our users to it. So okay, we’re linking the two together. So my ELB will be public-facing, whereas my private EC2 instances will be in the back, and traffic between the two will be restricted using, perhaps, a security group rule that we’ve seen before, using a security group as a reference. Okay, that sounds pretty good. So our users will now ask the load balancer what time it is. But this time it cannot be an A record, because the load balancer’s IPs keep changing all the time. And so instead, because it’s a load balancer, we can use an alias record. And this alias record is perfect, because it will point from Route 53 to the ELB, and everything will work really well. So we change the DNS here. Now the users connect to our load balancer, and our load balancer redirects the traffic to our EC2 instances and balances it out.
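A minimal sketch of creating that alias record, again with placeholder values. Note there is no TTL field: alias records track the load balancer automatically:

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABC",  # our hosted zone (placeholder)
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "api.whatisthetime.com",
                "Type": "A",
                "AliasTarget": {
                    # The load balancer's own canonical hosted zone ID,
                    # not ours (placeholder value here).
                    "HostedZoneId": "Z32O12XQLNTSW2",
                    "DNSName": "whatisthetime-alb-123456.eu-west-1.elb.amazonaws.com",
                    "EvaluateTargetHealth": True,
                },
            },
        }]
    },
)
```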
And it’s really great because now we can add and remove these instances and register them with the load balancer, and we won’t have any downtime for our users thanks to the health checks feature. So very, very good. But now, adding and removing instances manually is pretty hard to do. So what if we just leverage something we just learned in this class and launch an auto-scaling group? So now we have the same thing on the left-hand side: Route 53 and the ELB. But on the right-hand side, now we’re going to have an availability zone, and we’re going to launch private EC2 instances, but this time they’re going to be managed by an auto-scaling group. And so this allows our auto-scaling group to basically scale on demand. Maybe in the morning, no one wants to know the time. Maybe at night, when people want to leave work, they want to know the time. So we’re able to scale based on demand: scale in and scale out. And this is really, really great, because now we have an application with no downtime, auto scaling, and load balancing. It seems like a really stable architecture, and it is. But then an earthquake happens, and an availability zone goes down. So AZ 1 goes down, and guess what?
Our application is entirely down, and our users are not happy. And so Amazon comes to us and says, “Yes, it’s because you haven’t implemented a multi-AZ application, and we recommend you implement multi-AZ to be highly available.” So we say, “Okay, let’s change things a little bit. Now our ELB, on top of doing health checks, is also going to be multi-AZ, launched on AZ 1, 2, and 3.” So three AZs for this ELB, and our auto-scaling group will also span multiple AZs, allowing us to have, say, one instance in AZ 1, one instance in AZ 2, and one instance in AZ 3. And so, if AZ 1 goes down, we still have AZ 2 and AZ 3 to serve our traffic to our users, and we’ve effectively made our app multi-AZ, highly available, and resilient to failure. Pretty awesome, right? Okay, how far can we go with this? Let’s keep going. We know that at least one instance will be running in each AZ, so why don’t we reserve capacity? Why don’t we start basically diminishing the cost of our application? We know for sure that at least two instances must be running at all times during the year.
And so by reserving instances, maybe for the minimum capacity of our auto-scaling group, we’re going to save a lot of money in the future, whereas the new instances that get launched may be temporary, on-demand instances, or, if we’re feeling brave, we can even use spot instances for lower cost, though those instances may be terminated. And so it’s really interesting, right? Because we’ve seen an architecture go from a very small application all the way to a load-balanced, auto-scaling, multi-AZ, health-checked, reserved-instances type of application. So, as a solutions architect, it’s up to you to understand what the requirements are and how we should architect in response to them. And this is what the exam will test you on. Now, this is the first architecture discussion. Trust me, there will be many others in the next lectures. But for now, let’s just review what we’ve discussed. We’ve discussed, for example, what it means for an EC2 instance to have a public IP and a private IP. Where does it fit in our architecture diagram? We’ve also seen the benefit of having an Elastic IP versus using Route 53 versus maybe using a load balancer for our application. We’ve also seen that due to the Route 53 TTL, we couldn’t really use an A record, so we had to use a load balancer and an alias record, and so we could see how Route 53 fits
in this whole picture. We’ve seen how to maintain EC2 instances manually, and then we said, “Well, it’s too much maintenance; let’s use auto-scaling groups.” And guess what? It will actually save us money, because it will scale on demand, ensuring that we always have the right number of EC2 instances available. And then we said, “Okay, let’s go multi-AZ; we can survive disasters this way; let’s enable ELB health checks so that only the instances that are correctly responding do get traffic.” And we’ve seen how to set up security group rules so that the EC2 instances would only receive traffic coming from the ELB. And finally, we said, “You know what, let’s look at capacity; let’s do some cost savings.” We always know that we want to have two instances running at any time, so let’s reserve these instances, and that will bring lots of cost savings. And behind all this discussion, there’s a thing called the Well-Architected Framework in AWS, and we’ll be talking about it at length in a dedicated section. But there are five pillars to consider: cost optimisation, performance efficiency, reliability, security, and operational excellence.
So, through this discussion, I’d like to show you how these five pillars come into play. For cost, maybe we’re scaling our instances vertically, or maybe we’re using an ASG to just have the right number of instances based on the load, and maybe we want to reserve instances as well to optimise cost. In terms of performance, we’ve seen vertical scaling, we’ve seen ELBs, we’ve seen auto-scaling groups, basically how we can adapt to performance needs over time. For reliability, we’ve seen how Route 53 can be used to reliably direct traffic to the right EC2 instances, and maybe using multi-AZ for the ELB and multi-AZ for the ASG as well. For security, we’ve seen how we can use security groups to link the load balancer to our instances reliably. And for operational excellence, we’ve seen how we can evolve from a very clunky manual process all the way to having it fully automated with auto-scaling groups, etc. That’s really cool. I believe this is a good discussion, and there will be many more, but as a solutions architect, start understanding what technologies we’ve seen and how they fit together and solve problems when configured correctly. A minimal sketch of the final auto-scaling setup follows below. So that’s it. I hope you liked it, and I will see you in the next lecture.
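Here’s that sketch: one possible boto3 view of the final multi-AZ auto-scaling group, assuming a hypothetical launch template, subnets, and target group ARN:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="eu-west-1")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="whatisthetime-asg",
    LaunchTemplate={
        "LaunchTemplateName": "whatisthetime-lt",  # hypothetical template
        "Version": "$Latest",
    },
    MinSize=2,          # the baseline we might cover with reserved instances
    MaxSize=6,
    DesiredCapacity=3,  # e.g. one instance per AZ
    VPCZoneIdentifier="subnet-aaaa,subnet-bbbb,subnet-cccc",  # one subnet per AZ
    TargetGroupARNs=[
        "arn:aws:elasticloadbalancing:eu-west-1:123456789012:"
        "targetgroup/whatisthetime/abc123"  # placeholder ARN
    ],
    HealthCheckType="ELB",  # replace instances the load balancer marks unhealthy
    HealthCheckGracePeriod=120,
)
```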
3. MyClothes.com
Okay, so in the past lecture, we had a stateless type of web application. What is the time? Just answer the time. And we didn’t need any database or external information to answer that question. But now we’re going to get into a stateful web application called MyClothes.com. And MyClothes.com allows people to buy clothes; it’s online, and there’s a shopping cart when you navigate MyClothes.com. And we have hundreds of users online at the same time. So all these users are navigating the website, and we want to be able to scale, maintain horizontal scalability, and keep our web application tier as stateless as possible. So even though there’s a state, the shopping cart, we want to be able to scale our web application as easily as possible. And users should not lose their shopping cart while navigating our website.
That would be really bad, and maybe we could also have their details, such as their address, etc., in a database that we can store effectively and make accessible from anywhere. So let’s see how we can proceed. You’ll see that it’s going to be yet another fun but challenging discussion. Okay, so this is our application, and I’m going to go fast here; it’s the kind of architecture we saw in the previous lecture. So we have Route 53, a multi-AZ ELB, and an auto-scaling group across three AZs, and that’s about it. So our user accesses our application through our ELB, and our ELB says, “All right, you’re going to talk to this instance,” and you create a shopping cart. And then the next request is going to go not to the same instance but to another instance. And now the shopping cart is lost, and the user says, “Oh, that must just be a little bug; I’m going to try again.” So they add something to the shopping cart, and they get redirected to the third instance, which doesn’t have the shopping cart. So basically, the user is going crazy and saying, “Wait, I’m losing my shopping cart every time I do something. This is really weird; MyClothes.com is a bad website; I don’t want to shop on it.” And we lost money.
So how do we fix this? Well, we can introduce stickiness, or session affinity, and that’s an ELB feature. So we enable ELB stickiness, and now our user talks to our first instance, adds something to the shopping cart, and then the second request goes to the same instance because of stickiness, and the third request also goes to the same instance. And actually, every request will go to the same instance because of stickiness. This works really well, but if an EC2 instance gets terminated for some reason, then we still lose our shopping cart. But there is definitely some improvement here thanks to stickiness and session affinity. So now let’s look at a completely different approach and introduce user cookies. So basically, instead of having the EC2 instances store the content of the shopping cart, let’s say that the user is the one storing the shopping cart content. And so every time they connect to the load balancer, they basically say, “By the way, in my shopping cart, I have all these things.” This is accomplished through the use of web cookies. So now, whether the user talks to the first server, the second server, or the third server, each server will know what the shopping cart content is, because the user is the one sending the shopping cart content directly to our EC2 instances. So it’s pretty cool, right? We’ve achieved statelessness, because now our instances don’t need to know what happened before; the user will tell us what happened before. But the HTTP requests are getting heavier. Because we send the shopping cart content in web cookies, every time we add something to the shopping cart, we’re sending more and more data.
Additionally, there is some level of security risk, because the cookies can maybe be altered by attackers, and so our user may suddenly have a modified shopping cart. So when you have this kind of architecture, make sure that your EC2 instances validate the content of the user cookies. The cookies themselves can only be so big, with a total size of less than 4 KB. So there’s only a little information you can store in the cookies; you cannot store large data sets. Okay, so this is the idea, and this works really well. This is actually a pattern that many web application frameworks use. But what if we do something else? Let’s introduce the concept of a server session.
So now, instead of sending a whole shopping cart in web cookies, we’re just going to send a session ID that is unique to the user. So we’re going to send this, and an ElastiCache cluster will be running in the background. And what will happen is that when we send a session ID, we’re going to talk to an instance and say we’re going to add this thing to the cart. And so the EC2 instance will add the cart content to ElastiCache, and the key to retrieve this cart content is going to be the session ID. So when our user does a second request with the session ID, and it goes to another EC2 instance, that other EC2 instance is able to use the session ID to look up the content of the cart from ElastiCache and retrieve that session data. And then, for the last request, the same pattern.
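Here’s a minimal sketch of that session pattern in Python, assuming a hypothetical ElastiCache (Redis) endpoint and using the redis-py client:

```python
import json
import uuid

import redis  # redis-py client

# Hypothetical ElastiCache (Redis) endpoint; any EC2 instance behind the
# load balancer can reach the same cluster, so the cart survives being
# routed to a different instance.
r = redis.Redis(host="myclothes-cache.example.cache.amazonaws.com", port=6379)

def add_to_cart(session_id, item):
    """Append an item to the cart stored under this session ID."""
    if session_id is None:
        session_id = str(uuid.uuid4())  # first request: mint a session ID
    key = f"cart:{session_id}"
    cart = json.loads(r.get(key) or "[]")
    cart.append(item)
    r.setex(key, 1800, json.dumps(cart))  # expire idle carts after 30 min
    return session_id
```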
Remember, the really cool thing about ElastiCache is its sub-millisecond performance. So all these things happen really quickly, and that’s really great. By the way, there is an alternative for storing session data that we haven’t seen yet; it’s called DynamoDB. I’m just putting it out there just in case you don’t know what DynamoDB is. So it’s a really cool pattern here. It’s more secure, because now ElastiCache is the source of truth, and no attacker can change what’s in ElastiCache. So we have a much more secure pattern that is also very common. So now, okay, we have ElastiCache; we’ve figured this out. We want to store user data in the database; we want to store the user’s address. So again, our user is going to talk to our EC2 instance, and this time it’s going to talk to an RDS database. And RDS is going to be great because it’s for long-term storage. And so we can store and retrieve user data such as address, name, etc., directly by talking to RDS. Each of our instances can talk to RDS, and we effectively get a multi-AZ, stateless solution.
So our web traffic is going great, our website is doing amazing, and now we have more and more users, and we realise that most of what they do is navigate the website: read, get product information, all that kind of stuff. So, how do we scale the reads? Well, we can use an RDS master, which takes the writes, but we can also have RDS read replicas, with some replication happening. And so anytime we read stuff, we can read from a replica, and we can have up to five read replicas in RDS, which will allow us to scale the reads of our RDS database. There’s an alternative pattern called lazy loading, where we use the cache. And the way it works is that our user talks to an EC2 instance, which looks in the cache and says, “Do you have this information?” If the cache doesn’t have it, then the instance is going to read it from RDS and put it back into ElastiCache, so now the information is cached; see the sketch just below.
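A minimal sketch of that lazy-loading read, assuming hypothetical Redis and RDS MySQL endpoints and a made-up `products` table (PyMySQL for the database side):

```python
import json

import pymysql
import redis

r = redis.Redis(host="myclothes-cache.example.cache.amazonaws.com", port=6379)

def get_product(product_id):
    # 1. Look in the cache first.
    cached = r.get(f"product:{product_id}")
    if cached is not None:
        return json.loads(cached)  # cache hit: no load on RDS

    # 2. Cache miss: read from the RDS MySQL database (placeholder credentials).
    conn = pymysql.connect(
        host="myclothes-db.example.eu-west-1.rds.amazonaws.com",
        user="app", password="secret", database="shop",
        cursorclass=pymysql.cursors.DictCursor,
    )
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT * FROM products WHERE id = %s", (product_id,))
            product = cur.fetchone()
    finally:
        conn.close()

    # 3. Write it back into the cache with a TTL, so the next read is a hit.
    r.setex(f"product:{product_id}", 300, json.dumps(product, default=str))
    return product
```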
And so the other EC2 instances are doing the same thing, but this time when they talk to ElastiCache, they will have the information; they will get a cache hit and get the response right away, because it’s been cached. And so this pattern allows us to put less traffic on RDS, basically decrease the CPU usage on RDS, and improve performance at the same time. But we need to do cache maintenance now, and it’s a bit more difficult, and again, this has to be done on the application side. That’s pretty cool. Now we have our application. It’s scalable; it has many, many users. But we want to survive disasters; we don’t want to be struck by disasters. So, how do we go about it? Our user talks to Route 53, and now we have a multi-AZ ELB. By the way, Route 53 is already highly available; you don’t need to do anything. But for our load balancer, we’re going to make it multi-AZ. Our auto-scaling group is multi-AZ, and then RDS has a multi-AZ feature: there’s going to be a standby replica that can just take over whenever there’s a disaster. And ElastiCache also has a multi-AZ feature if you use Redis.
So, really cool. Now we basically have a multi-AZ application across the board, and we know for sure that we can survive an availability zone going down. Now for security groups, we want to be super secure, so maybe we’ll open HTTP and HTTPS traffic from anywhere on the ALB. On the EC2 instance side, we just want to restrict traffic coming from the load balancer. And for ElastiCache, we just want to restrict traffic coming from the EC2 security group, and for RDS, the same thing: we want to restrict traffic coming directly from the EC2 security group. So that’s it. So now let’s just review this architecture for our web application. We have discussed ELB sticky sessions; web cookies for storing the shopping cart in the web tier and making our web app stateless; or using a session ID and a session cache using ElastiCache, and as an alternative, we can use DynamoDB.
We can also use ElastiCache to cache data from RDS for reads, and multi-AZ to survive a disaster on RDS. We can use RDS for storing user data, so more durable storage; read replicas can be used for scaling reads, or we can also use ElastiCache. And then we have multi-AZ for disaster recovery. On top of that, we added tight security, with security groups that reference one another (a minimal sketch of that follows below). So this is a more complicated application with three tiers: the client tier, the web tier, and the database tier. But this is a very common architecture overall, and yes, it may start to increase in cost, but that’s okay. At least we know the trade-offs we’re making. If we want multi-AZ, yes, for sure we have to pay more; if we want to scale the reads, yes, for sure we’ll have to pay more as well. But it gives us some good trade-offs and architecture decisions that we have to make. So I hope you liked it, and I will see you in the next lecture.
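Here’s the sketch of that security-group chaining, with made-up group IDs standing in for the real ones:

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# ALB security group: HTTP/HTTPS from anywhere.
ec2.authorize_security_group_ingress(
    GroupId="sg-alb-placeholder",
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    ],
)

# EC2 security group: only accept traffic from the ALB's security group.
ec2.authorize_security_group_ingress(
    GroupId="sg-ec2-placeholder",
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
                    "UserIdGroupPairs": [{"GroupId": "sg-alb-placeholder"}]}],
)

# RDS (MySQL) security group: only accept traffic from the EC2 security group.
ec2.authorize_security_group_ingress(
    GroupId="sg-rds-placeholder",
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
                    "UserIdGroupPairs": [{"GroupId": "sg-ec2-placeholder"}]}],
)

# ElastiCache (Redis) security group: only the EC2 security group, port 6379.
ec2.authorize_security_group_ingress(
    GroupId="sg-cache-placeholder",
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 6379, "ToPort": 6379,
                    "UserIdGroupPairs": [{"GroupId": "sg-ec2-placeholder"}]}],
)
```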
4. MyWordPress.com
Okay, so yet another stateful web application, and we’re going to name this one MyWordPress.com. So here we’re trying to create a fully scalable WordPress website, and WordPress is a very common way of creating a website. It’s very popular. Although hosted WordPress is available, many people prefer to deploy WordPress on AWS. And so we want to access that website, and we want it to correctly display uploaded pictures.
So the way it works is that WordPress will store the pictures somewhere on some drive, and then basically all your instances must access that data as well. We’ll see this in the solution architecture discussion anyway. And so our user data, the blog content, and everything else should be stored in a MySQL database, and we want this to scale globally. So let’s see how we can achieve this. So the first thing we have to do is create a layer that has RDS. So we are now very familiar with this kind of architecture, with RDS in the back end; it’s multi-AZ, and it’s going to be connected to all my EC2 instances. But what if I just want to go big and really scale up? Perhaps I should replace this layer with Aurora MySQL, and I could have multi-AZ, read replicas, and even global databases if I wanted to. But in this case, using Aurora simply means fewer operations.
It’s just a choice I’m making as a solutions architect; you don’t have to make that choice, but I like Aurora, I like the fact that it scales better, and I like the fact that it is easier to operate. Okay, excellent. So now let’s talk about storing images. So let’s go back to a very simple solution architecture, where we have one EC2 instance with one EBS volume attached to it, in one AZ. So we are connected to our load balancer, and our user wants to send an image to our load balancer, and that image makes it all the way through to EBS. So the image is stored on EBS, and that works really well. We only have one EC2 instance, so the image goes straight to the EBS volume, and we’re happy. If we want to read that image, same thing: the image can be read from the EBS volume and sent back to the user. So, very good, right?
The problem arrives when we start scaling. So now we have two EC2 instances in two different AZs, and each of these two instances has its own EBS volume. And so what happens is that if I send an image right here to this instance, it gets stored on that EBS volume. If I want to read that image, maybe the request goes this way, and yes, I can read it. Or, and this is a very common mistake, maybe the request to read that image goes to the instance at the bottom, and there, no image is present.
And so, because it’s not the same EBS volume, I won’t be able to access my image, and that’s really, really bad. So the problem with EBS volumes is that they work really well when you have one instance. But when you start scaling across multiple AZs or multiple instances, it starts to become problematic. So how do we solve this? Well, we can use EFS. The architecture is identical, but now we’re storing on an EFS network file system drive. So EFS is NFS, and EFS basically creates ENIs, elastic network interfaces, which it places in each AZ, and these ENIs can be used by all our instances to access our EFS drive. And the really cool thing here is that the storage is shared between all the instances. So if we send an image to the m5 instance, through EFS the image is stored in EFS. Now if we want to read the image, the request goes all the way to the bottom instance and through the ENI, and it’s going to read from EFS. And yes, EFS has that image available, so we can send it back.
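A minimal sketch of how an instance could mount that shared EFS drive at boot via user data; the file system ID, AMI, and mount path are placeholders, and this assumes an Amazon Linux AMI with the amazon-efs-utils package available:

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Each instance, in any AZ, mounts the same EFS file system at boot, so
# WordPress uploads land on shared storage instead of a per-instance EBS volume.
user_data = """#!/bin/bash
yum install -y amazon-efs-utils
mkdir -p /var/www/html/wp-content/uploads
mount -t efs fs-0123456789abcdef0:/ /var/www/html/wp-content/uploads
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,
)
```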
And so this is a very common way of scaling website storage across many different EC2 instances, to allow them all to have access to the same files regardless of their availability zone or how many instances we have. So that’s it. That’s a little subtle for WordPress, but I wanted to discuss EBS versus EFS. So we talked about using the Aurora database to basically have fewer operations and have multi-AZ and read replicas. And we’ve talked about storing data in EBS, which works great when we’re a single-instance application, but doesn’t really work great when we have many instances. So perhaps we can use EFS to have a distributed application across multiple AZs and such. Now, the cost aspect of it is that EBS is cheaper than EFS, but we do get a lot out of using EFS, especially in this kind of use case. So again, it’s up to you as a solutions architect to really understand the trade-offs you’re making, why you’re doing things, and the cost implications of what you’re doing. So I hope that was helpful, and I will see you in the next lecture.
5. Instantiating applications quickly
So here’s a quick primer on instantiating applications quickly. So in all the architecture discussions that we’ve had, we never really talked about how we install and deploy applications onto our cloud instances to basically run our websites. And so when you launch a full stack, it can take a lot of time to install applications, insert or recover data, configure everything, and then launch the application. So how can we speed that up? Well, we can use the advantage of the cloud to speed that up. So let’s have a look at EC2 instances. We can use what’s called a golden AMI. And a golden AMI means that you install your applications, OS dependencies, and so on ahead of time, and then you create an AMI from it. Then, for the next EC2 instances, you simply launch them from this golden AMI. And the reason we do this is so we don’t have to reinstall the applications, the OS dependencies, etc. We can simply launch with everything already installed and ready to go, which is the quickest way to start up our EC2 instances.
So the golden AMI is a very common pattern in the cloud. And we’ve also seen how to use EC2 user data and how it allows us to bootstrap our instance. Bootstrapping means basically configuring the instance when it first starts, and so bootstrapping can also be used to install applications, OS dependencies, etc. But this will be very slow, and we don’t want each instance to repeat the exact same installation at every boot if it can be avoided. But for dynamic configuration, for example, retrieving the URL for our database and the password, etc., we can use bootstrapping with EC2 user data. So we can basically have a hybrid mix of a golden AMI and EC2 user data to make it work.
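A minimal sketch of that hybrid, with placeholder instance, AMI, and database names: bake the golden AMI once, then launch from it with user data carrying only the dynamic configuration:

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# 1. Bake the golden AMI once, from an instance that already has the
#    application and OS dependencies installed.
image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",  # the fully configured instance
    Name="myapp-golden-ami-v1",
)
ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])

# 2. Launch future instances from that AMI; user data handles only the
#    dynamic part (for example, pointing at the right database).
ec2.run_instances(
    ImageId=image["ImageId"],
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    UserData="#!/bin/bash\n"
             "echo 'DB_HOST=mydb.example.rds.amazonaws.com' >> /etc/myapp.env\n",
)
```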
And this is something we’ll see in a second with Elastic Beanstalk. Elastic Beanstalk uses the same hybrid principle, where we configure an AMI and then add on some user data. Okay, so for RDS databases, we can restore from a snapshot, and then the database will have the schemas and the data ready, which is much better than running a big insert statement that would take forever on a freshly started RDS database. So that’s a way to go a bit quicker when you want to retrieve data. And for EBS volumes, we can also restore from a snapshot, so we don’t have a disc that’s empty and not formatted; we restore from a snapshot, and that snapshot will already be formatted properly and have the data we need. So these are the things you need to understand as a solutions architect going into the exam. They’ll say, “Okay, we need to speed up EC2 instances, or we need to speed up RDS databases or EBS volumes and formatting and all that kind of stuff.” This is where you want to use golden AMIs, EC2 user data, and database or volume snapshots; a minimal sketch of the snapshot restores follows below. I hope that makes sense, and I’ll see you in the next lecture.
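Here’s that sketch, with placeholder snapshot and database identifiers:

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")
rds = boto3.client("rds", region_name="eu-west-1")

# EBS: create a pre-formatted, pre-populated volume from a snapshot,
# in the AZ where the instance will run.
ec2.create_volume(
    SnapshotId="snap-0123456789abcdef0",  # placeholder snapshot
    AvailabilityZone="eu-west-1a",
)

# RDS: restore a database that already has the schemas and the data,
# instead of replaying big insert statements.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="myapp-db-restored",
    DBSnapshotIdentifier="myapp-db-snapshot",  # placeholder snapshot name
)
```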
6. Beanstalk Overview
So far, we’ve seen that a web application or three-tier architecture has a lot of components. There was Elastic Load Balancing, auto scaling, multi-AZ, a database with RDS, a cache with ElastiCache, and everything had to be done by hand. Imagine if we had to deploy many apps in Java, Go, Python, or whatever language you’re using, even Docker, and we had to create that load balancer, configure the auto scaling, and create the RDS database every time. And on top of this, we need to deploy into several environments, such as dev, test, and prod, and maybe we want to have different versions at the same time. I mean, it’s a complete nightmare, right? So as a developer, you don’t want to manage infrastructure; you just want to deploy code. You want to configure the database once and just be done with it. You want it to scale. Okay, you want your deployment to be valid for one instance, but also to be valid for 100 instances. And as you can see, the architecture right here is pretty standard. Most web apps will have a load balancer and an auto-scaling group. So, as a developer, my personal wish is: I just want my code to run.
I really don’t care about the architecture, and I possibly want consistency across all my application environments. I want a one-stop shop to manage my stuff. Okay, so this is where Elastic Beanstalk comes in. Elastic Beanstalk is developer-centric, so you can deploy applications on AWS. It will leverage all the components we’ve seen before, and that’s why we’ve gone through the fundamentals first. So EC2, ASG, ELB, RDS, etc.: we’re going to reuse everything we’ve seen before right now. And this is why it comes in super handy. It’s one view that’s super easy to make sense of. So we’ll still see all the configuration underneath, and we’ll still have full control over how all these components get configured, which is very nice. Elastic Beanstalk, on top of this, is free, and you are going to pay only for the underlying instances. So Beanstalk is a managed service. That means that the instance configuration and the operating system will be handled by Beanstalk. The deployment strategy, you can configure it, but again, it will be performed by Beanstalk. And just the application code is your responsibility.
That’s kind of an oversimplification, and you can always customise stuff, but I wanted to give you a straight, easy-to-understand picture. There are three architecture models for Beanstalk. First, single-instance deployment, which is great when you develop. Then you have the load balancer plus auto-scaling group model; that’s great for production or pre-production web applications, when you want to see how they react at scale. And then, if you just want to have an ASG alone, so no load balancer, that’s great when you have non-web apps in production, such as workers or other kinds of workloads that don’t need a load balancer or don’t need to be accessible. So Beanstalk, when we look at it, has three components. It has applications; application versions, so basically every time you upload new code, you’ll get an application version; and environment names. So you’re going to deploy your code to dev, test, and production, and you’re free to name your environments just the way you want and have as many environments as you wish. You’re going to deploy application versions to environments, and basically, we’ll be able to promote these application versions to the next environment.
So that’s the whole idea. We’ll create an application version, move it from dev to test to production, and so on. There’s a rollback feature as well, so you can roll back to a previous application version, which is quite handy, and we get full control over the lifecycle of environments. So the idea is that you create an application, you create an environment or multiple environments, and then you upload a version and give it an alias, which is just a name that you want. Then you release this alias to environments. Fairly easy, right? (A minimal API-level sketch of this application/version/environment model follows below.) So what can we deploy on Beanstalk? Well, it has support for a tonne of platforms, and if yours is not there, that’s something weird: Go, Java, Java with Tomcat, .NET, Node.js, PHP, Python, Ruby, Packer, single-container Docker, multi-container Docker, preconfigured Docker. And if your platform isn’t supported, you can write your own custom platform, which is fairly advanced and not expected of you at the associate level. But that’s the idea. With Beanstalk, we can do a lot of things, and what we’re going to do next is deploy an app on Beanstalk. So, see you in the next lecture.
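Here’s that sketch: a hedged boto3 view of application, version, and environments, with placeholder names, bucket, and an example solution stack string (check the currently supported stacks in your account before using one):

```python
import boto3

eb = boto3.client("elasticbeanstalk", region_name="eu-west-1")

eb.create_application(ApplicationName="myapp")

# Every code upload becomes an application version (here, a zip in S3).
eb.create_application_version(
    ApplicationName="myapp",
    VersionLabel="v1",
    SourceBundle={"S3Bucket": "my-deploy-bucket", "S3Key": "myapp-v1.zip"},
)

# An environment (dev, test, prod, ...) runs one version at a time.
eb.create_environment(
    ApplicationName="myapp",
    EnvironmentName="myapp-dev",
    SolutionStackName="64bit Amazon Linux 2 v5.8.0 running Node.js 18",
    VersionLabel="v1",
)

# "Promoting" a version means deploying the same version label to the
# next environment (assuming myapp-prod was created the same way).
eb.update_environment(EnvironmentName="myapp-prod", VersionLabel="v1")
```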
7. Beanstalk Hands On
So, let’s see how Beanstalk can help us with our AWS solution architecture. So Beanstalk is not something you need to know in depth for the Solutions Architect exam; if you go for the developer exam, there is a much more detailed section on Beanstalk. But here we’re just going to see how Beanstalk allows us to create an application that follows a multi-tier architecture really, really easily. So I’m going to get started with Beanstalk, and I’m going to create my demo application.
And here we’re able to select a platform. So as I said, there’s tonnes of platform support in Beanstalk: Docker, NodeJS, Python, Java, Go, and so on. I’m going to just use NodeJS for this one, and we could upload our own code, but for now, I’m just going to use a sample application because I know this one works right away. So if I wanted to go really quickly, I could click on “Create Application” and be done. But what I really want to do is show you all of Beanstalk’s options so you can get a sense of its full potential. So Beanstalk has multiple configuration presets available for me. I can select Low Cost, which is free-tier eligible, and it’s going to create one EC2 instance with a public IP on it, and that’s it. Or I can select High Availability, and this will actually create for me a load balancer, an auto-scaling group, etc.
I can always customise everything by clicking on “Custom Configuration.” So I’ll select High Availability, and here we get a whole panel of all the options we can configure through Beanstalk. So it’s like someone had already thought of all the things you might want to have in your solution architecture and put them all into one panel. I’m not going to do a deep dive into all of these; I just want to show you that, for example, here the software is NodeJS, so we expect NodeJS. Here we can specify what the EC2 instance type is and what the AMI is going to be, as well as the EBS volumes, security groups, and all that stuff; and the capacity. So, if we want load balancing and auto scaling: how many AZs do we need? Do we want to be multi-AZ or not? And how many instances do we want, minimum and maximum, for my auto-scaling group? All this information is great. In terms of the load balancer, here I can modify the load balancer; right now it’s an application load balancer. We get one listener, one process, and one rule, but I could customise the load balancer right here. Rolling updates and deployments: when we want to start updating our application, how do we do it? Security is for key pairs, et cetera. Then monitoring, managed updates, notifications, and network. Here we can even set up a database.
So we can set up an RDS database straight from this UI if we want to. And finally, some tags. So I’m not going to do all these things, but I just wanted to show you the possibilities that Elastic Beanstalk has. I’m going to create a High Availability setup quickly to show you how that works. So I clicked on Create app, and now my app has been created. Now Elastic Beanstalk is going to take a little while to create my environment, and so what I’ll do is just pause here until this is done. Okay? So it took about five minutes to create everything. After a while, we can see that the sample application is running; we get Health: Ok. And at the top of my page, I get this URL that I can access. And this URL gives me: congratulations, your first AWS Elastic Beanstalk NodeJS application is now running. And that’s fantastic. Beanstalk basically created an entire environment for me. And you can always modify the configuration, look at the logs, etc.
And what I want to show you is what it has created. So if I go to the EC2 console, we’re going to see a little bit more information about what happened. If you go to instances, as you can see, I have my demo application created. It’s a t2.micro, it’s running, and it’s in eu-west-1b. And actually, if we go to auto-scaling groups right here, this instance should be managed by this auto-scaling group. So indeed, an auto-scaling group was created. And the desired capacity is one instance, with a minimum of one and a maximum of four. Additionally, this auto-scaling group was automatically configured to use three availability zones for me. That is truly amazing. Now, if we go to load balancers, a load balancer should have been created as well for my application. So this is basically how I access my application. And here it is: yes, this load balancer has been created. We can see it’s an ALB, because the type here is application. It’s on three AZs, and it has its own security group right here that has been created for it. And the listener has been configured to listen on port 80, and it’s forwarding to this right here, which is a target group.
So if I click on it, I’m directed to a target group. Here we go. So this is a target group listening on port 80. We can look at the targets and see that they are healthy. We can look at the health checks that have been configured for my target group as well, which is really awesome. And so, finally, let’s go to security groups, and we will see that, indeed, I have a lot of security groups; sorry, there are quite a few, but here we are. Two security groups were created by Elastic Beanstalk: one for my EC2 instances and one for my ELB. So, really cool, really well done, and very simple to use. So, Beanstalk again was just a short introduction, but when you’re done, you go to Actions, and you can terminate the environment, and this will basically destroy everything. And for this, you just enter the name of the environment to delete. So I’ll just click copy, paste it in this box, and click on Terminate. And now Elastic Beanstalk will terminate my entire environment, and I will have no leftovers from this little hands-on. So that’s really it. And yes, once the environment is deleted, you also have to go and delete the application: so, Actions, then Delete application. And then you’ll be done when everything has been deleted. So that’s it. I hope you have a good understanding of how Elastic Beanstalk works; it has great potential. I’ll see you in the next lecture.