6. Using Polly To Help You Pass Your Exam – A Serverless Approach – Part 1
I'm in my web browser, and I'm actually on the Alexa website. If you're in the UK, you can use Alexa on Amazon.co.uk; if you live in the United States, use Amazon.com. So make sure you log in. And remember, when you do log into the Alexa website, use the same email address that you use for the developer services. For the developer services, you're going to need to sign up at developer.amazon.com. Just click on the sign-in button here, and if you're a new customer, if you've never used the developer portal before, click "I am a new customer." But this is really, really important: make sure you sign up using the same email address that your Amazon Alexa is registered to. That then allows you, as a developer, to test your skills without having to publish them, so you can make your own private skills.
So, sign up for the Amazon Developer Portal now, and once you have signed up, log in. Once you have signed in, just click on Alexa, and then click on the Alexa Skills Kit. Now, I have a few skills already here, and you can see their status: they're all in development. I haven't published them yet, so they only work on my Echo; they don't work on anyone else's. So what we're going to do is click "Add a New Skill." In this section, we're going to leave everything at its default, so it's just a custom interaction model. I'm just going to name the skill after myself, so I'm going to say Ryan; you might want to use your own name. The invocation name is what you use to invoke the skill, so you say something like, "Alexa, ask Ryan to help me study," and the Ryan part is the invocation name. If you use a name that is difficult to understand, sometimes your skill won't work.
So try and keep your invocation name simple. We're going to go ahead and hit save, and we're now going to start building out our interaction model, so go ahead and hit "Next." Okay, so once you've downloaded the zip file in the resources section, just unzip it, and it will create a folder called AlexaSkill; go into that. We have two different folders here: our speech assets and our source code. We're going to start off with our speech assets, and the very first thing we need is our intent schema, so go ahead and open this. If you're using a Mac, go and get TextWrangler; if you're using a PC, get Notepad++. And here we go: we've got some really simple JSON, and we've got our intents. An intent is basically something you want to do. So we have an intent called "GetNewFactIntent" here, and that is where we're going to retrieve our MP3 file from S3 and play it. Then we've got some built-in intents here, and you know that they're built in because they always start with "AMAZON" in capital letters. So here's our help intent, here's our stop intent, and here's our cancel intent. The help intent will simply assist us with the skill and tell us what we need to do.
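Just so you know what you are pasting, the intent schema in the course zip is a small JSON file along these lines. This is a minimal sketch rather than the exact file from the resources, but the intent names match what we just walked through:

```json
{
  "intents": [
    { "intent": "GetNewFactIntent" },
    { "intent": "AMAZON.HelpIntent" },
    { "intent": "AMAZON.StopIntent" },
    { "intent": "AMAZON.CancelIntent" }
  ]
}
```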
Stop will make Alexa stop, and cancel will make her cancel. So just copy and paste this into your clipboard, and then paste it into the intent schema box in here. Scroll down. We don't need custom slot types, so you can ignore that, but we do need sample utterances. So go back to your Finder window, and open up the sample utterances file. An utterance is really, really simple: it's just a way of saying something. So you would say, "Alexa, tell me a fact," "Alexa, give me a fact," "Alexa, tell me something." You can add as many of these as you want; I'm just going to use the default three. So I've copied that into my clipboard, I'm going to paste it in here, and go ahead and hit save. This will now update the skill, and it will take a little bit of time, but once it's finished, you can go ahead and click Next. Now, in here, this is where we're starting to link to Lambda. So select AWS Lambda as the service endpoint type, and then you have to pick the geographical region that's closest to your target customers. So you can either do this out of Northern Virginia on the East Coast of the United States, or you can do this out of Europe, so EU West 1 (Ireland).
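For reference, the sample utterances file is just plain text, one line per utterance, each prefixed with the intent it triggers; it looks something like this (the wording in your download may differ slightly):

```
GetNewFactIntent tell me a fact
GetNewFactIntent give me a fact
GetNewFactIntent tell me something
```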
Now, before we choose which region to deploy our Lambda function into, we're going to go back over to the Amazon Alexa webpage and click on Settings, and you can see I've got an awful lot of Echos and Echo Dots. Select the one you'll be developing on; mine is called Alexa Developer. And then, in here, you've got your language. My developer account is set to English (United States). Now, if I want this skill to work, I'm going to have to deploy my Lambda function in Northern Virginia, because my Alexa is set to English (United States). If yours is set to English (UK), then you want to deploy into Ireland. So that's an important point: the only way you'll be able to use your development skills is if your Lambda endpoint is in the region that matches your Alexa's language. You can change the language (I could move her back over to English (UK)), but I'm going to stay with the United States. So back in the developer portal, it wants my Amazon Resource Name; because my Alexa is set to the United States, I'm going to tick North America (and nowhere else) and paste the ARN in there. So let's go back over to the AWS console. Okay, so I'm here in the AWS console. I want to go to either Northern Virginia or Ireland, or for those of you who are German and have a German Alexa, you might want to go to Germany. So what I'm going to do is go down to Lambda, which is under Compute, and click in there. And now we're going to write a new Lambda function.
So I'm going to click "Create a new Lambda function." We can use a blueprint here: if you type "alexa" into the filter, let's use this blueprint, the Alexa Skills Kit fact skill. So go ahead and click on that to configure our triggers. Click in here, and we're going to click on the Alexa Skills Kit trigger. Go ahead and hit "Next." So we now have our function. I'm going to call this my study buddy or something like that; you can put any name you want in here. We've already got some prepopulated code, but we don't want this code for the purposes of this particular lecture, because we want to put in our own code. So simply highlight all of the code and press the delete key. Okay, so we've deleted all of that code, and now what I need to do is copy and paste the new code in here. But first, we need to edit it just a little bit. So if we go back to our Alexa directory, our code is in the source folder; open that up. Our code is really, really simple: we're requiring the Alexa SDK, and then, basically, we're handling a launch request.
And the intent that we're handling is called "GetNewFactIntent"; this is our intent here. It's a very simple function: it says, "Hello Ryan, let's get started on our exam prep," and then gets a fact, and the facts are all chosen at random. All you have to do is update the facts in here with the links to your MP3 files, using your S3 URLs. You can get those either from S3 or straight from DynamoDB; I'm going to get them from DynamoDB. Now, you might have already loaded some notes in, or maybe you haven't just yet. I'm going to go ahead and load some notes in now, so go over to your bucket website (mine was called the Polly website), go to the index page, click on it, and open the link. It'll open up a new page; change the voice to whichever voice you like the most. I'm going to choose a random one, Emma, and paste in whatever notes I have. These are just some standard EC2 notes. Go ahead and hit "Say it," and that will give you a post ID.
If you click in here and retrieve the post, you can see that it has now been posted. So once you've added your notes... I'll add three: this one is EC2, I'll do RDS, and I'll do Lambda. So here are some notes for RDS; I'm going to change the voice over to Amy and hit "Say it" again. You can see whether it's saved by clicking here and seeing it appear. And then I'm going to paste in some notes around Lambda, and maybe I'll get a Welsh voice to say it, and you can see it's just been changed and updated here. Go ahead and hit "Search." Now I've got three MP3 files that are saved in S3 and recorded in my DynamoDB table. So go back over to the AWS console. I'm in the console, I'm going to DynamoDB, then I'm going to my tables, and then I'm going to click on posts, and in here, I'm going to go to my items.
You can also get your S3 URL from this page: just click in here, on the little edit button, and copy it to your clipboard. I want you to copy and paste these three into your code by pasting them in here. I'm just going to get rid of this placeholder one, so: paste, paste, paste. If you want to add additional ones, just copy and paste this line (copy, paste, and always add a little comma at the end there), so you can have as many different random facts as you want. I'm going to delete that extra one because I only want three. One thing I will call out here is that this is using Speech Synthesis Markup Language (SSML); it looks a bit like HTML, but it allows you to play audio files or pronounce things a different way. If you delete the audio tag, anything you type in here becomes a plain random fact, so you could type, "Hello, Cloud Gurus." You could put your notes directly in here, and then Alexa would read them out rather than playing the Polly-generated audio, so feel free to play around with it; you can put any facts in there that you want. The only other thing I'll call out is this random function here: it's basically just choosing a random fact, picking one of these three lines (or five, or ten, depending on how many facts you have) at random. Okay, so now save your index.js, and you can copy and paste the whole file into your clipboard.
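To give you a picture of what the finished file roughly looks like, here is a minimal sketch in the same alexa-sdk style. It is not the exact file from the course resources: the greeting text and the S3 URLs are placeholders you would replace with your own.

```javascript
'use strict';
const Alexa = require('alexa-sdk'); // Alexa Skills Kit SDK for Node.js

// Each fact is an SSML <audio> tag pointing at an MP3 that Polly generated
// and saved to S3. Swap in your own S3 URLs from the DynamoDB posts table.
const facts = [
    "<audio src='https://s3.amazonaws.com/my-polly-bucket/ec2-notes.mp3' />",
    "<audio src='https://s3.amazonaws.com/my-polly-bucket/rds-notes.mp3' />",
    "<audio src='https://s3.amazonaws.com/my-polly-bucket/lambda-notes.mp3' />"
];

const handlers = {
    'LaunchRequest': function () {
        // "Alexa, open Ryan" lands here; just hand off to the fact intent
        this.emit('GetNewFactIntent');
    },
    'GetNewFactIntent': function () {
        // Pick one fact at random (the random function mentioned above)
        const fact = facts[Math.floor(Math.random() * facts.length)];
        this.emit(':tell', "Hello Ryan, let's get started on our exam prep. " + fact);
    },
    'AMAZON.HelpIntent': function () {
        this.emit(':ask', 'You can say tell me a fact, or say stop to exit.',
            'Try saying tell me a fact.');
    },
    'AMAZON.StopIntent': function () {
        this.emit(':tell', 'Goodbye!');
    },
    'AMAZON.CancelIntent': function () {
        this.emit(':tell', 'Goodbye!');
    }
};

exports.handler = function (event, context, callback) {
    const alexa = Alexa.handler(event, context, callback);
    alexa.registerHandlers(handlers);
    alexa.execute();
};
```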
You're going to need to go back to your Lambda window. If you scroll down (interestingly, there are only 54 lines of code), scroll all the way down, choose an existing role, and then pick one of your existing roles. I'm going to use the one called lambda role, which is my basic execution role. You can use the basic execution role that you created for your serverless website, or the one that you created for the Polly lab; it's entirely up to you. I'm just going to use my basic execution role. Go ahead and click Next, and then click Create Function. That will now create my Lambda function, and that gives me my Amazon Resource Name. Let's go ahead and copy that into our clipboard and go back to the Amazon developer portal. So I'm back in the portal, and I'm just going to paste the ARN into my North America endpoint, scroll all the way down, and leave everything else at its default. Go ahead and hit "Next." That will then update your skill, and now we can test it. So in here, we've got the voice simulator: anything you type in here will be read out in Alexa's voice, which can be good for getting your grammar and syntax right. "Hello, Cloud Gurus." So there you go. But what we really want is the service simulator. So here we go with our utterances.
So if you do forget your utterances, you can go back to your interaction model, scroll down, and see what your utterances were: "tell me a fact," "give me a fact," "tell me something." So I'm going to write, "Tell me a fact." Let's go back to testing down here and say, "Tell me a fact." The way this would work in the real world is you'd say, "Alexa, ask Ryan to tell me a fact." So go ahead and click here, and look: here's our Lambda request, and here's our Lambda response. We can see the output speech in the response if we go in here. The type is SSML, and she's saying, "Hello Ryan, let's begin studying for our exam," followed by the audio tag pointing at our S3 bucket, which is what gets played. So you can test this without needing an Echo, but let's go test this in real life, because this will now work; we don't need to do anything else. We can literally go up to Alexa and say, "Alexa, ask Ryan to tell me a fact."
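What the service simulator is showing you in the Lambda response is essentially this shape. This is a hedged sketch with a placeholder URL, but the SSML type and the audio tag are the important parts:

```json
{
  "version": "1.0",
  "response": {
    "outputSpeech": {
      "type": "SSML",
      "ssml": "<speak> Hello Ryan, let's begin studying for our exam. <audio src='https://s3.amazonaws.com/my-polly-bucket/ec2-notes.mp3'/> </speak>"
    },
    "shouldEndSession": true
  }
}
```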
7. Using Polly To Help You Pass Your Exam – A Serverless Approach – Part 2
Okay, so I'm logged into the AWS console, and I just want to go over to Services and then to S3, under Storage. What we want to do now is create a bucket. As you remember, we have to enter a DNS-compliant bucket name, so you can't take a bucket name that somebody else already owns, and it has to be all lower case. So I'm going to try my London bucket; let's see if I can get that one. And yeah, that looks good. I'm going to scroll down and choose London; you can choose whatever region is closest to you. Go ahead and hit "Next." We're going to enable versioning on this bucket, so we'll go ahead and hit save, and we'll go ahead and hit next now. And in here, I just want to explain this a little more clearly than I did in the last lecture.
So this is where we are managing our permissions. There are different types of permissions: there are user permissions, and these are our users within IAM, and then we have our public permissions, and these are our different groups, so any authenticated AWS user, and then everyone. Now, this is at the bucket level, which is what we're configuring right now. You can also do this at the object level, so you can do this for individual objects. And if you want to understand the difference, we've got objects and object permissions. So I'm the owner of the bucket, and if you just click on this little info button, it explains that this grants permission to the user to list, create, overwrite, or delete objects in the bucket.
So do I want me, as the owner, to be able to create objects in this bucket? Yes, I basically want read and write access in here. Do I want myself, the owner of the bucket, to have permission to read or write the access control list for the bucket? In other words, do I want this user to be able to manage the object's permissions, not just the objects themselves? And yes, I do. What I definitely don't want to do is allow object permissions to be controlled by everyone. I still can't think of a use case for that (I'm sure there are some), but I'm going to leave everything at its default. What I will do is make this bucket public, so I'm going to say everyone has read access to objects within this bucket, and we'll see why I'm doing that later on. So go ahead and hit "Next," and we're going to go ahead and create the bucket. So I've made my first bucket and enabled versioning for it. A really key point here is that once versioning is turned on, it cannot be removed; it can only be suspended. So that's a key thing to remember going into the exam: you cannot remove versioning, you can only suspend it. If you want to remove versioning from a bucket, you basically have to create a whole new bucket and then transfer the objects from one bucket to the other. So we've got my London bucket. I'm going to go ahead and click on that; it's got no objects in there. Okay.
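If you prefer the command line, this is roughly what the console is doing under the hood, and it also shows why suspending is the only way to turn versioning off again. The bucket name here is just a placeholder:

```
# Turn versioning on for a bucket
aws s3api put-bucket-versioning \
    --bucket my-london-bucket \
    --versioning-configuration Status=Enabled

# There is no "off" once it has been enabled; only Suspended is accepted
aws s3api put-bucket-versioning \
    --bucket my-london-bucket \
    --versioning-configuration Status=Suspended
```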
So I'm just going to switch over to a text editor, and I'm going to write in here, "Hello, Cloud Gurus!" with an exclamation mark, and I'm going to save it. Then, back here, I'm just going to upload this file to our bucket: go ahead and hit Upload, then hit Open, and then go ahead and hit Next to get to the permissions. These are the permissions for the actual object itself. So in here, we can see that the owner is allowed to read and write the object, and is also allowed to manage permissions for the object. I want to enable people to read this object, so I'm going to allow that. I'm going to go ahead and hit next, leave everything else as is, and click OK. And there we go, we've created our object. So if we click in here and then click on the link, we'll be able to see it up here: Hello, Cloud Gurus! So, just to step back: if we go back into our bucket and then click on the object itself, you can see the latest version next to the file name, and if we click on that, there's only one version. So let's go and make a small change to this file; we'll just say that this is the second version, and I'm going to save that. Back here, just notice the size of the first file: it's 18 bytes. What I want to do is return to our bucket, hit Upload, add files, and upload it again. I'm again going to set the permissions: if I don't mark this as publicly readable, the new version will no longer be readable using that link, so I definitely want to make sure that it's readable. And even if you make a bucket publicly readable, that doesn't mean the objects you upload to it will be publicly available by default.
You know, Amazon always makes things as locked down as possible, so you're always going to have to make it publicly available when you upload it. So go ahead and hit "Next," hit next again, and hit upload. Now, if we click in here and look at the object, we can see the two different versions; notice the different timestamps. If I click on this one, you'll see that it's 18 bytes long, with the ETag ending in a zero. If I switch to the other one, we can see it's slightly bigger, 47 bytes, and the ETag is different, ending in 282. So basically, we've got two different versions of the object. Now, this is a really key point to remember: effectively, we've got two objects in this bucket, but it will only show us one. If we click in here, it's only showing us one, but when we drill down and look at the versions, we have two different versions of it. And the size: if you add up 47 plus 18 bytes, that's how much storage space the bucket is actually taking up.
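You can see the same thing from the command line: a call like the one below lists every stored version of a key, with its size and ETag, even though the console's object view only shows you the latest one. The bucket and key names are placeholders:

```
aws s3api list-object-versions \
    --bucket my-london-bucket \
    --prefix hello-cloud-gurus.txt
```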
So you've got to think about this from an architectural point of view. What you don't want is very large files that are constantly being updated, with versioning enabled but without some kind of lifecycle management policy: some way of getting rid of the older versions, perhaps by deleting them or archiving them off to Glacier, and so on. We'll cover lifecycle management policies a couple of labs from now. So it's a fundamental point to remember: it might look like you've only got one file in there, but it's actually two, three, or four files, depending on how many changes you've made, and the actual storage space the bucket takes up is the sum of all those different versions. That's a fundamental thing to remember, especially going into the exam. You might be given scenario questions where you've got to architect in the most cost-efficient way possible, so you've got to realise that you don't want versioning on large files that are updated all the time, because any change to a four-gig file, no matter how minor, will consume another four gigs. So let's just have a quick look at this text file. If I click here, we can see that this is the second version. What I'm going to do now is go back, come back in here, go to my latest version, and delete that latest version. And so now we're back to this one, and if I click in here, we can see that it's gone back to the previous version: "Hello, Cloud Gurus." We've lost the second version. So if you delete a specific version like that, it cannot be restored.
You've lost that version forever. But if you delete an object, you can restore it. So let's go ahead and do a restore. Going back into our text editor, I'm just going to save this again, and then I'm going to go back and upload it again. So go back in here, back into my London bucket, go to Upload, add files, choose the "Hello Cloud Gurus" file, and go ahead and hit "Next." Then, in permissions, I'm just going to make sure that it is public so I can see the change. Go ahead and hit "Next," then "Upload." And now if I click on this object, we can see by clicking in here that this is the second version. So I've uploaded a new version, and as you can see, there are only two versions. So let's now delete this object and see how we can do a restore. If we return to my London bucket and click here, we can just go into More and Delete, go ahead and hit next and delete. It disappears, and we get this splash screen. Intuitively, you'd think you should be able to go in here, go to More, and hit Initiate Restore.
Well, Initiate Restore is actually a command that is used with Glacier and has nothing to do with this object, so that's not going to work. On the new console, it's actually really difficult (I still can't figure out how to do it), but on the old console, it was quite easy. So simply go back to S3 and switch to the old console; you can do so by clicking here. And then, if we go back over to our bucket, my London bucket, the versions are up here: you've got Hide or Show, so we can go in and hit Show. And here we go, we've got our three different versions. So this is version one, this is the second version that we then uploaded, and then there's this one here, which is a delete marker. The delete marker is simply saying "do not show this object"; it's hidden in the console, but it's still physically present in that bucket. So we've got two copies, and then we've got a delete marker over the top. Now, to do a restore, all we need to do is go into Actions, delete the delete marker, and go ahead and hit OK.
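From the command line, the same restore is just a matter of deleting the delete marker itself by passing its version ID. This is a sketch with placeholder values; the version ID would come from list-object-versions:

```
# Removing the delete marker (by its version ID) makes the object visible again
aws s3api delete-object \
    --bucket my-london-bucket \
    --key hello-cloud-gurus.txt \
    --version-id "EXAMPLE-DELETE-MARKER-VERSION-ID"
```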
And if we click in here, we can now see that the file has been restored. If you want to get back to the new console, it's pretty easy: just go back to all your buckets, go to Properties, scroll right down to Storage Management, and hit the opt-in button again. Then, if we go back in here, we'll be able to see the file. So we have restored that file, and all we've done is delete the delete marker. If we go back in here, we can also see that there are still two versions of that text file. And then, of course, if you want to download individual versions, you just go in here and click on the one you want, and to view it, you can click here. So that is it for this lecture, guys. Versioning is a fantastic tool, certainly for system administrators who want to prevent users from accidentally deleting objects. You can restore via the console, or you can do it via the command line, and of course there are third-party tools, such as CloudBerry Lab, that integrate with S3 in a Dropbox-like way.
8. Build An Alexa Skill
Okay, so I'm here in the AWS console. If we just go down to Services, we can go ahead and click on S3. And in here, we've got my buckets, so "a cloud guru 2017 Ryan," and it's got three different files, or objects, in it. Versioning is turned on, which we did in the last lab, so you can see the different versions here. What we want to do now is create a new bucket. I'm going to call this my Sydney bucket... it's telling me there are no upper-case letters allowed, one second... so "my cloud guru Sydney bucket," something like that. Please tell me that somebody hasn't stolen that. And there we go. So I'm going to put this in Sydney. So this one's in London, and this one's in Sydney, literally the other side of the world. I'm going to leave everything at its default, to be honest, and just go ahead and hit create, and we're going to create this bucket inside the Sydney region. So far, I haven't enabled versioning; I haven't done anything.
I've literally left it as a default bucket, which now exists within the Sydney region. So let's go ahead and turn on cross-region replication. We should get an error message, because in order for cross-region replication to work, you need versioning turned on in both buckets. So let's come in here and go over to our bucket properties. It's not actually here, sorry; it's under Management (it used to be under Properties, but they've moved it), and you just click on Replication. It says you haven't created any cross-region replication rules for this bucket, so let's go ahead and add a rule. So what do we want to replicate: the bucket's entire contents, or just a prefix? By prefix, they just mean a folder, so you can have just sub-folders of a bucket replicated across; you don't have to replicate the entire bucket.
I'm going to do the entire bucket, and I'll leave the rule enabled. Next is the destination: we go in here and select our destination bucket, which can be a bucket in this AWS account or a bucket in another AWS account, so do bear that in mind. I'm going to go ahead and use my Sydney bucket. And then it says this bucket doesn't have versioning enabled, and cross-region replication requires bucket versioning, so enable versioning on this bucket. So there's our error message. We're going to go ahead and enable it. And there we go. We can also optionally change the storage class, so we might want to change this to Standard - Infrequent Access; we would do this especially if we were only using this as a backup. So I am going to do that: I'm going to make it Standard - Infrequent Access. Go ahead and hit "Next." It's going to ask us to select an IAM role for this, and we don't have one, so we're just going to say "Create a new role for me." Go ahead and hit "Next." And there we go: it's now going to create a role, and it's already enabled versioning on our Sydney bucket. We'll go ahead and hit save, and we have just enabled replication: replicating the contents of this bucket from London to Sydney, to my Sydney bucket.
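Behind the scenes, what the console just saved is a replication configuration on the source bucket. A rough command-line equivalent would look like this; the role ARN, rule ID, and bucket names are placeholders rather than the ones the console generated:

```
# Contents of replication.json, a sketch of the rule the console creates:
{
  "Role": "arn:aws:iam::123456789012:role/my-crr-role",
  "Rules": [
    {
      "ID": "replicate-everything",
      "Prefix": "",
      "Status": "Enabled",
      "Destination": {
        "Bucket": "arn:aws:s3:::my-cloud-guru-sydney-bucket",
        "StorageClass": "STANDARD_IA"
      }
    }
  ]
}

# Apply it to the source bucket:
aws s3api put-bucket-replication \
    --bucket my-london-bucket \
    --replication-configuration file://replication.json
```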
And this has created a new role called something like "s3crr" (S3 cross-region replication) "for cloud guru" blah, blah, blah. So that's fantastic. Now I've got a question for you: do you think the objects that are already sitting inside our source bucket, so these three objects, have been replicated over to Sydney? Well, let's go over and have a look. Click in here, go over to our Sydney bucket, and you can see that they're not in there yet. So it's only new objects, or any object that we change, that will be replicated over, not the existing objects. So you might be wondering how you can copy the existing contents of one bucket into another bucket. The easiest way to do this is via the command line. Okay? So the first thing you're going to need to do is go to Google and just type in "AWS command line tools" or something like that, and it should be the very first link that isn't a paid ad.
So it's here, the AWS Command Line Interface; click on that. Now, you are going to have to go ahead and install this. If you're a Windows user, you can run the 64-bit or 32-bit Windows installer (most of you will be using 64-bit), and Mac and Linux users can install it using pip: you just type "pip install awscli". If you're a Mac user and you don't have pip installed, you can just go back to Google and search for installing the AWS CLI on a Mac. Here's the user guide for it; just click in here. This will tell you how to do it using Python, but there are also standalone bundled installers, which are actually way easier. Same for Windows users: using the MSI installer is a lot easier. After you've done that, switch to a command prompt or a terminal. So right now I'm in my terminal window on a Mac; you'll either have this, or a command prompt or PowerShell, depending on what you want to use. So, to recap: if you just type "AWS CLI tools" into Google, you'll be able to download and install the command-line tools. Then, to set it all up, you just have to type "aws configure" and pass in an access key ID and a secret access key, and you get those by creating a user in Identity Access Management.
Okay, so here I am in the IAM console. You want to go in there, and you probably just want to go ahead and create a new group. I've got a group called Admin Access; let's just create something similar, since this is where your administrators would go, so I'm just going to call this one AWS admin access too. Then, in this section, you want the policy called AdministratorAccess, and if you can't see it, just type "administrator" into this box. Go ahead and select that policy and hit Next. That will then create the group, which gives us admin access. So there we go: AWS admin access. The next thing we need to do is go ahead and create a user. My existing user is called Ryan's iMac, which you can see here; you can call yours whatever you like. So I'm simply going to call this user "Hello Cloud Gurus," and I actually only want programmatic access; I don't want console access for this user. Let's go ahead and configure their permissions. I'm going to chuck them into the group that we just created and scroll down.
Go ahead and hit "Next," and go ahead and create the user. Now, as soon as you create the user, you're going to get the access key ID and secret access key. Don't worry, folks watching at home: I'm going to delete this user as soon as this video is finished. So all we have to do now is return to the terminal. Here I am in my terminal window; I'm just going to type "aws configure" again. It asks me for my access key ID, so I just copy and paste the access key ID in there. It then asks for my secret access key, so I copy and paste my secret access key in there, and then it asks for my default region name. I'm in London, so it's eu-west-1... sorry, eu-west-2. Just choose whatever the default region is for you and hit Enter, and then just hit Enter to accept the default output format. So I'm just going to clear the screen, and if this has all worked, you should be able to type "aws s3 ls", and that will show us our buckets. Now I'll tell you how I use this in real life. I've recently started to get into Bitcoin and, in particular, Ether and the Ethereum blockchain, so I've been buying Ether from different providers, and I store it in my wallet.
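For anyone following along, the whole setup in the terminal looks roughly like this; the key values, region, and bucket names below are made-up placeholders:

```
$ aws configure
AWS Access Key ID [None]: AKIAEXAMPLEACCESSKEY
AWS Secret Access Key [None]: exampleSecretAccessKeyGoesHere
Default region name [None]: eu-west-2
Default output format [None]:

$ aws s3 ls
2017-06-01 10:15:23 acloudguru2017-ryan
2017-06-01 10:42:07 my-cloud-guru-sydney-bucket
```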
My wallet is then stored as an encrypted file, which I encrypt client-side on my Mac. I then store it up in S3, and I use cross-region replication to replicate it to another region, and I obviously encrypt both buckets at rest as well. So for me, that's really good: I know that no matter what happens, I've got a copy of my wallet somewhere on S3. And of course, I lock down those buckets very, very well to stop people from being able to access my wallet and steal my coins. And you can see my two buckets here: we've got "a cloud guru 2017 Ryan," and then my cloud guru Sydney bucket. So all I have to do is type "aws s3 cp" and we'll do this recursively, so it's going to copy everything recursively, then "s3://" like this, so we want the "cloud guru 2017 Ryan" bucket, and we're copying it to "s3://" and then my cloud guru Sydney bucket. I think I put a typo in there; just fix that up and go ahead and hit Enter. This will copy the contents of my source bucket to my destination bucket. So there we go: it's copying it across the world, and now my bucket in Sydney will be an exact copy of my bucket in London. So I'm back on the console. I'm just going to hit refresh now that the copying has taken place, and we should hopefully see our objects. So there we go.
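For reference, the full copy command looks something like this; the bucket names are just my rendering of the ones in the video, so substitute your own. aws s3 sync is an alternative that only copies objects that are not already in the destination:

```
# Copy every object from the source bucket to the destination bucket
aws s3 cp --recursive s3://acloudguru2017-ryan s3://my-cloud-guru-sydney-bucket

# Alternative: only copy what is missing from the destination
aws s3 sync s3://acloudguru2017-ryan s3://my-cloud-guru-sydney-bucket
```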
There are our three objects. So the last thing I want to show you is what happens if we make an object public inside this bucket. Let's go here; I can't remember if this one is already public or not. Let's take a look: if we just click on it, yes, it is, so this is already public. Now, let's go over to my Sydney bucket and take a look, and we'll see if the permissions have copied across. If I click on this: no, the permissions have not copied across, because I've just copied the objects themselves, and it's a private bucket by default. Let's go and make our other file in London private, and then we'll make it public again and see if cross-region replication copies over the permissions. I'll return to S3 and go in here; I'm going to make sure that this is private. Actually, instead of going down here, it'll be easier to go into Permissions here. We've got public access: Everyone. Set that to no and hit save. And now let's update it again and grant everyone read access to this object, so it has public access again, and go ahead and hit save. Let's go back over to my Sydney bucket, click here, and we'll click on the link. Okay, so there are two more things I want to show you, or really three. But let's go in here and delete our public.txt first of all.
So let's move on: go ahead and hit delete, and hit delete again. Remember that we have versioning enabled, so pressing delete simply places a delete marker over it; the file is still there. Do you think the object in our Sydney bucket will also be deleted, or do you think it will still be visible? Let's go ahead and have a look. Go ahead and click in here, and you'll see it's been deleted; and if we click Show, we can see the delete marker has been replicated to our other region. That's pretty sweet, right? Now let's go back into the Cloud Guru bucket; click here to get back to our source bucket. If we delete this delete marker, intuitively you would think that the destination bucket would automatically have its delete marker removed as well.
Let's go ahead and have a look. So if we click on my Sydney bucket: oh no, it hasn't. Look, it's still there. So when you delete an object, that delete marker is replicated across; however, if the delete marker is removed, that removal is not replicated across. I'm not sure why this is, and I can't see what the advantage is, but it's just some interesting behaviour to note. So if we want this bucket to be an exact copy again, we're going to have to go ahead and delete the delete marker in the destination ourselves. There we go; now it's an exact copy again. The last thing we're going to do is just go back over here and go in here. I'm going to do an update to my public.txt file: it said, "Hello, cloud gurus, this is public," and then I'm going to write, "This is the updated version," or something like that. Go ahead and save. Now we're just going to go in and upload the new file here, so go ahead and add my files.
So that was public.txt. Go ahead and hit "Next," and go ahead and just hit upload. Now, if you caught that, I didn't set it to public on upload, so all we have to do is go in and click on it to make it public. And what we can do now is click in here, and you can see this is the updated version. If we go back to our Sydney bucket and make sure that it is also public (just click in here), it should be public. There we go: because it was a new object, the permissions are replicated from the source to the destination bucket. I'm just going to go back now. Let's see what happens if we delete a version here: will that then delete the version from our destination bucket? So in here, we can see that we've got two versions: our latest version and the previous version. So if I go ahead and delete the latest one, what do you think will happen? Do you think it will remove that version from the destination bucket as well?
Or do you think it will keep it as is? Let's go ahead and hit delete. And so now we're back to the previous version: if I click in here and actually view it, it just says, "Hello, Cloud Gurus, this is public," so we're back to version one. If we go back to S3, go back to our Sydney bucket, click in here, and go to versions: look, it still didn't delete that version. So, again, I'm not actually sure why they do this, but that's just the behaviour of cross-region replication, so version control can be a bit of an issue. If you do revert to a previous version in your source bucket, you must also go ahead and make that change in your destination bucket as well. So let's move on to my exam tips.