9. Sample question 8
Question eight: A global company wants to run an application in several AWS regions to support a global user base. The application will need a database that can support a high volume of low-latency reads and writes that is expected to vary over time. The data must be shared across all of the regions to support dynamic company-wide reports. Which database meets these requirements? Now, without even looking at the options, we should know that this question is talking about DynamoDB Global Tables, because no other database supports these requirements. Here, we are looking at a multi-master solution, right? We must have the application running in multiple regions, and we want to perform low-latency reads as well as writes.
So we need a multi-master, cross-region option here. So DynamoDB Global Tables is the only solution from AWS that meets these requirements as of now. So let's look at the options. The first one says use Amazon Aurora Serverless and configure endpoints in each region. And this is incorrect, because we know that Aurora Serverless does not work in multiple regions. The second option says use Amazon RDS for MySQL and deploy read replicas in an auto scaling group in each region. And the question specifies that we require low-latency reads as well as writes. So read replicas are not going to help here. So this again is incorrect.
Option C says use Amazon DocumentDB (with MongoDB compatibility) and configure reader replicas in an auto scaling group in each region. Again, reader replicas are not going to support write operations, so this also is incorrect. And option D says use Amazon DynamoDB Global Tables and configure DynamoDB Auto Scaling for the tables. This definitely is the right answer. DynamoDB Global Tables provide a multi-region, multi-master database that allows us to perform reads as well as writes in multiple regions simultaneously. And don't get confused with Aurora Global Database.
Remember that Aurora Global Database is not a multi-master solution. It's only a multi-region solution, and it only supports a single writer or a single master. And Aurora Multi-Master, on the other hand, is a single-region solution; it's not yet supported in multiple regions. So Aurora Multi-Master is not useful here, and Aurora Global Database is also not useful here. Although the responses do not include Aurora Global Database or Aurora Multi-Master, just remember that even if the options mentioned these databases, they would not fit the criteria here. So the correct answer is DynamoDB Global Tables. So D is the correct answer. All right, let's continue.
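To make option D a bit more concrete, here is a minimal sketch in Python (boto3) of the two pieces it combines: adding a replica region to an existing table, and registering auto scaling for its write capacity. The table name, regions, and capacity values are hypothetical, and the table is assumed to already satisfy the prerequisites for Global Tables (for example, having DynamoDB Streams enabled as required for replicas).

```python
import boto3

# Assumption: an existing table named "GlobalReports" in us-east-1 (hypothetical names).
dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Add a replica in eu-west-1, making the table a Global Table (version 2019.11.21).
dynamodb.update_table(
    TableName="GlobalReports",
    ReplicaUpdates=[{"Create": {"RegionName": "eu-west-1"}}],
)

# Register the table's write capacity with Application Auto Scaling so throughput
# can follow the traffic that "varies over time", as the question requires.
autoscaling = boto3.client("application-autoscaling", region_name="us-east-1")
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/GlobalReports",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    MinCapacity=5,
    MaxCapacity=500,
)
autoscaling.put_scaling_policy(
    PolicyName="GlobalReportsWriteScaling",
    ServiceNamespace="dynamodb",
    ResourceId="table/GlobalReports",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
        },
    },
)
```

The same steps would be repeated for read capacity and for each replica region.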
10. Sample question 9
Question nine: A company's customer relationship management application uses an RDS for PostgreSQL Multi-AZ database. The database size is approximately 100 GB. A database specialist has been tasked with developing a cost-effective disaster recovery plan that will restore the database in a different region within 2 hours. The restored database should not be missing more than 8 hours of transactions. What is the most cost-effective solution that meets the availability requirements? So what are the keywords here? We are looking at an RDS for PostgreSQL Multi-AZ database, we are looking for a cost-effective DR plan, and we want to restore to a different region within 2 hours, which means the RTO, or recovery time objective, is 2 hours. And the database should not be missing more than 8 hours of transactions, which means the RPO is 8 hours.
The recovery point objective is about 8 hours. So what solution will fit these criteria? Let's look at the options. The first one says create an RDS read replica in the second region, and for disaster recovery, promote the read replica to a standalone instance. Now, you can create an RDS read replica in another region and you can use it for disaster recovery, but this may not be a cost-effective solution. The read replica will cost as much as the main database instance, right? So it's going to cost as much as your main database instance. The second option says create an RDS read replica in the second region using a smaller instance size, and for disaster recovery, scale up the read replica and promote it to a standalone instance.
Now again, this doesn't make any sense; having a smaller instance than the main database doesn't really make sense. Option B is incorrect. Option C says schedule an AWS Lambda function to create an hourly snapshot of the database instance, and another Lambda function to copy the snapshot to the second region. For disaster recovery, create a new RDS Multi-AZ database instance from the last snapshot. Now, this looks like a plausible option. Let's look at the fourth one. It says create a new RDS Multi-AZ DB instance in the second region and configure an AWS DMS task for ongoing replication. Now, this also is a good option, but remember that when you create an RDS Multi-AZ instance in the second region, it's going to cost even more.
It's going to cost as much as the entire RDS Multi-AZ deployment, and the DMS task is also going to use an EC2 instance, so that adds to the cost. So this will not give us a cost-effective solution. So option C, using the Lambda functions along with the snapshots, is the most cost-effective of the options here. Snapshots are cheap compared to running instances. So creating an AWS Lambda function to take a snapshot and another function to copy that snapshot to another region is the right answer. And this option also says that the Lambda function is being used to create an hourly snapshot.
So we are taking snapshots every hour, and that is going to keep the incremental snapshot size low, and hence the time to copy the snapshot across regions is also going to be very low, which helps you meet the RPO of 8 hours. Also remember that taking frequent snapshots does not significantly impact the costs. So the option of using a pair of Lambda functions is the correct answer here. And when we discussed RDS backups and snapshots, we also discussed this particular approach of using Lambda functions to take periodic backups and move them to S3, right? This is a similar approach that allows you to create a cost-effective DR plan, as the sketch below illustrates. All right, option C is the correct answer here. Let's continue to the next question.
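As a rough sketch of what those two Lambda functions might look like with boto3 (the instance identifier, regions, and names are hypothetical, and the hourly trigger itself would come from a scheduled EventBridge/CloudWatch Events rule):

```python
import datetime
import boto3

SOURCE_REGION = "us-east-1"      # hypothetical regions and identifiers
DR_REGION = "us-west-2"
DB_INSTANCE_ID = "crm-postgres"

def create_snapshot(event, context):
    """Hourly Lambda: take a manual snapshot of the RDS instance."""
    rds = boto3.client("rds", region_name=SOURCE_REGION)
    snapshot_id = f"{DB_INSTANCE_ID}-{datetime.datetime.utcnow():%Y-%m-%d-%H-%M}"
    rds.create_db_snapshot(
        DBSnapshotIdentifier=snapshot_id,
        DBInstanceIdentifier=DB_INSTANCE_ID,
    )
    return snapshot_id

def copy_snapshot_to_dr_region(event, context):
    """Second Lambda: copy the latest available snapshot into the DR region."""
    source_rds = boto3.client("rds", region_name=SOURCE_REGION)
    dr_rds = boto3.client("rds", region_name=DR_REGION)

    snapshots = source_rds.describe_db_snapshots(
        DBInstanceIdentifier=DB_INSTANCE_ID,
        SnapshotType="manual",
    )["DBSnapshots"]
    latest = max(
        (s for s in snapshots if s["Status"] == "available"),
        key=lambda s: s["SnapshotCreateTime"],
    )

    # Hourly copies keep the amount of changed data per copy small, which is
    # what keeps the cross-region copy time low, as discussed above.
    dr_rds.copy_db_snapshot(
        SourceDBSnapshotIdentifier=latest["DBSnapshotArn"],
        TargetDBSnapshotIdentifier=latest["DBSnapshotIdentifier"],
        SourceRegion=SOURCE_REGION,
    )
```

During an actual disaster recovery, you would then restore a new Multi-AZ instance in the DR region from the most recent copied snapshot.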
11. Sample question 10
Question ten: An operations team in a large company wants to centrally manage resource provisioning for its development teams across multiple accounts. When a new AWS account is created, the developers require full privileges for a database environment that uses the same configuration, data schema, and source data as the company's production Amazon RDS for MySQL database instance. How can the operations team achieve this? Now, what are the keywords here? We want to centrally manage resource provisioning across multiple accounts. So whenever we talk about central management, we generally talk about automation. And whenever we talk about automation, we should always think of CloudFormation, right? So any option that talks about CloudFormation is the option that we should look at.
So let's look at what options we have. The first option says enable the source database instance to be shared with the new account so the development team may take a snapshot, and create an AWS CloudFormation template to launch the new DB instance from the snapshot. Now remember, we can't share a source DB instance; we can only share snapshots. This answer is obviously incorrect. Option B says create an AWS CLI script to launch the approved DB instance configuration in the new account, and create an AWS DMS task to copy the data from the source DB instance to the new DB instance. Now, this option won't allow you to centrally manage your resources, right? So again, this is not the right answer.
Option C says take a manual snapshot of the source database instance and share the snapshot privately with the new account. Specify the snapshot ARN in an RDS resource in an AWS CloudFormation template, and use StackSets to deploy to the new account. Now, this looks to be the plausible option. Let's look at the fourth one: create a DB instance read replica of the source database instance and share the read replica with the new AWS account. Again, just like the first option, remember that we cannot share a read replica; we can only share snapshots across accounts. So again, this is incorrect. So the correct answer is option C. And we have seen in the RDS section how to copy and share snapshots. So we know that for copying snapshots across accounts, you must share the snapshot first and then copy that snapshot in the target account.
So option C says take a manual snapshot of the source DB instance, share the snapshot privately with the new account, specify the snapshot ARN in the RDS resource in the CloudFormation template, and use StackSets to deploy to the new account. So the process for sharing snapshots is described correctly here. And this option also mentions the use of StackSets. StackSets, as we already know, are used to create, update, or delete stacks across multiple accounts and regions. Using CloudFormation allows us to centrally manage these operations, and using StackSets allows us to manage these stacks across multiple accounts. So definitely option C here is the right option. All right?
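As a minimal sketch (the account ID, snapshot name, region, and StackSet name are hypothetical), the boto3 calls behind option C look roughly like this: share the manual snapshot with the new account, then deploy a stack instance from an existing CloudFormation StackSet into that account.

```python
import boto3

NEW_ACCOUNT_ID = "111122223333"          # hypothetical values
SNAPSHOT_ID = "prod-mysql-baseline"
REGION = "us-east-1"

rds = boto3.client("rds", region_name=REGION)
cfn = boto3.client("cloudformation", region_name=REGION)

# Share the manual snapshot privately with the new account.
rds.modify_db_snapshot_attribute(
    DBSnapshotIdentifier=SNAPSHOT_ID,
    AttributeName="restore",
    ValuesToAdd=[NEW_ACCOUNT_ID],
)

# The StackSet template (created separately by the operations team) would contain
# an AWS::RDS::DBInstance resource whose DBSnapshotIdentifier references the
# shared snapshot's ARN. Here we just roll that StackSet out to the new account.
cfn.create_stack_instances(
    StackSetName="dev-database-environment",   # assumed existing StackSet
    Accounts=[NEW_ACCOUNT_ID],
    Regions=[REGION],
)
```

This keeps the provisioning logic in one place (the StackSet) while each new account receives its own copy of the approved database environment.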
12. Exam Strategy: How to tackle exam questions
Alright, so we discussed ten sample questions, and I hope this gives you an idea of how to tackle the exam questions for the Database Specialty certification. Now, all the questions are going to be in the scenario format, so be sure to identify the different AWS services as well as the requirements stated in the question. So identify the keywords first and try to formulate or come up with an answer before even looking at the options. And once you have something in mind, once you have a plausible answer in mind, then look at the options and choose the appropriate one from the given set of options. All right? That's the best way to tackle this exam, and that's going to help you come out with flying colors. All right? So that's about it. Thank you so very much.
13. Additional Resources
So congratulations on making it to this point. We just want to give you some exam tips. We think this course is definitely enough, but it's never a bad idea to go through some extra resources, so please watch this YouTube video. There's also an Exam Readiness course from AWS that is free and will allow you to check your learning; it's recommended to have a look at it as well. There are also some reference architectures around databases on GitHub that you should look at, for example graph, RDBMS, data lake, and EDW, which can be helpful to review and see if you really master and understand them.
For more resources, please read the service FAQs (Frequently Asked Questions). For example, look at the FAQ for RDS and so on, because the FAQs cover a lot of the questions asked at the exam. It's also a good idea to read the service troubleshooting documentation; for example, for Amazon RDS, here is the troubleshooting guide. So overall, this course should definitely prepare you, but I always get questions like, what else should I do? What else could I do? On top of this course, look at all the links I just gave you, and you should be good to go. I wish you the best of luck, and I will see you in the next lecture.