Pass Palo Alto Networks PCSAE Exam in First Attempt Easily
Latest Palo Alto Networks PCSAE Practice Test Questions, Exam Dumps
Accurate & Verified Answers As Experienced in the Actual Test!
Check our Last Week Results!
- Premium File: 171 Questions & Answers (Last Update: Dec 25, 2024)
- Training Course: 8 Lectures
Download Free Palo Alto Networks PCSAE Exam Dumps, Practice Test
| File Name | Size | Downloads | |
|---|---|---|---|
| palo alto networks | 60.2 KB | 1266 | Download |
| palo alto networks | 96.8 KB | 1358 | Download |
Free VCE files for Palo Alto Networks PCSAE certification practice test questions and answers, exam dumps are uploaded by real users who have taken the exam recently. Download the latest PCSAE Palo Alto Networks Certified Security Automation Engineer certification exam practice test questions and answers and sign up for free on Exam-Labs.
Palo Alto Networks PCSAE Practice Test Questions, Palo Alto Networks PCSAE Exam dumps
Domain 1 – Playbook
1. Domain 1 - Playbook Development
This is video one of the PCSAE certification video series, and it focuses on the first section of the PCSAE blueprint. So it's playbook development, and it's the overarching domain: we can look at conceptualising context data, summarise the difference between inputs, outputs, and results, outline how to use loops in playbooks, differentiate between playbook task types, and use filters and transformers to manipulate data. That will then take us into domain two, which goes on to talk about incident types, indicator types, layouts, and fields.

So beginning with task one, which is conceptualising context data, we're going to start to talk a little bit about how we get data in, how that is represented, and how you can visualise it. This is the Incidents tab. You can go to Dashboards if we have a look. Reports are disabled because this is the Community version. However, we can see that we have four assigned incidents. These come from the Sample Incident Generator, which we'll go over later, and we'll see how you investigate and go through them. But for this particular demo, what we're going to do is look at the playground. The playground is where everything can be executed and tested, and each War Room is separate for each incident, so you have a running commentary, if you like, of the incident and the informational data contained within it.

So first, we'll take a quick look at running a command through an integration and then see how that is displayed in the context data. Okay, so that's the command; you'll notice that it also autocompletes the query. Press Enter, and it runs. And then that comes back with all the domain information from WHOIS, which is the WHOIS integration. So that's all there in the War Room, or the playground War Room, whichever way you look at it. However, there's context data behind that. Once you view the context data, we can see that it's in JSON format. The DBot score shows that the source is usually reliable, with a score of zero, which is absolutely fine. We can use all of these later on to grab and put into an incident: the admin, the nameservers at the bottom, and 60 items at the bottom. And then we have the JSON that we will grab from to populate our fields (there's a small sketch of reading it back programmatically below).

Okay, so that's how it is in the background; that's how it's displayed there. If you want to mark any of it, you can mark it as a note, and that changes it to green. This is a note: as you're dealing with an incident and going through it, you can keep a running log of notes and artefacts just by doing that. And then at the end of it, you can download the entire incident. So rather than trying to keep a timeline of what was said where and what information was gathered, this will do it all for you automatically. You can unlock the note; that's fine. You can view the artefacts in a new tab, which is what we just did. You can attach it to a task, you can download it, and you can add tags to it. Okay? So that's essentially the data that's getting entered into an incident, and you can see the context data as it was there.

Now let me talk about differentiating global and private context. Sub-playbooks are playbooks that are nested within parent playbooks, such as this one here for Traps. So within this playbook, based on the outcomes of these tasks, it will run these, and it will run them for the downloaded files. So it will run in a loop until there are no files left.
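Here's that sketch: a minimal, hedged example of how an automation script could read those WHOIS results back out of context. `demisto` is the object XSOAR injects into every automation; the `Domain.Whois.NameServers` path is an assumption for illustration, since the exact keys depend on the integration that wrote them.

```python
# Minimal sketch, not the documented Whois schema: read enrichment results
# back out of incident context inside an XSOAR automation.
ctx = demisto.context()                                            # full context as a dict (the JSON we viewed)
nameservers = demisto.get(ctx, 'Domain.Whois.NameServers') or []   # safe nested lookup; assumed path
demisto.results(f'Name servers found: {len(nameservers)}')
```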
Essentially, when we look at the sub-playbook, we can see whether its context is shared globally or private to the sub-playbook. Private to sub-playbook basically means that you don't want this playbook to be affected by outside changes: you don't want its context to be affected, and you don't want it to be changed. It only operates on the inputs that it receives from the parent. Global is when the context in the parent playbook should be considered and accessed. However, it should be noted that a change the sub-playbook makes will impact the parent playbook at the next run of the parent playbook. So you have to be careful, because you are then going to change the nature of the parent playbook.

So, based on the theory that you're going to run a playbook, using automation to start with some data that you've received and then process it in a certain way, playbooks have inputs, outputs, and results for playbook tasks. So 1.2 within this domain is to summarise the differences between inputs, outputs, and results. Fairly simple and easy, to be perfectly honest. So the IP enrichment external lookup playbook has been triggered, and we can see that we have the inputs and the outputs. The name is IP, and this is where it says "get IP address from the context information", no filters applied. The transformer is "unique": if there are multiple IP addresses that are replicated, it will only grab the IPs that are unique. So, in addition to the IP address to enrich, you can see all of the other inputs, such as a description of what each one does, a CSV list of IP address ranges, and whether to check if an IP address is found within a set of IP address ranges. Then we look at the outputs, and this is what we're going to get at the bottom here when it's run: the context path IP, the IP address object, and the DBot score — it's going to run DBot against it to get the IP report — and the Endpoint.OS, the endpoint operating system, is all going to be done within this playbook. So essentially, those are the inputs and the outputs: this is what we start with, and this is what we come out with.

Context data comes from two different places. It tends to come from incidents, whether manually created or as part of an integration; and then from indicators, it says "get indicators from query results" — so if you query something, or you've got an indicator that you're hunting or something like that, it would come through that.

So, moving on to describing inputs and outputs for sub-playbooks. Playbooks are divided into two categories. They're basically the same sort of thing with different purposes. Parent playbooks are the main playbooks, and they're the ones that are triggered by an investigation or as part of an integration that's creating an event. So that will be these, and that's how these start. Sub-playbooks are the ones that have an input and an output; they always have an input and an output, unless you want everything to remain within the playbook itself, which is pointless. So in this particular instance, we have the phishing investigation v2, which is a parent playbook, and within that, we have the sub-playbook. Then we can see the inputs here: a file with incident labels, or an email with incident labels (CC incident labels). It takes all that and runs it through this particular playbook. And then we've got the outputs that we can see there, and those then come down and are used by these two here.
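To make the input/output contract concrete, here's a hedged sketch of the automation side of a task: arguments arrive via `demisto.args()`, and whatever the script puts in `EntryContext` becomes output available to later tasks. `argToList`, `entryTypes`, and `formats` are CommonServerPython helpers; the `IP(val.Address == obj.Address)` key is the usual deduplication convention, shown as an illustration rather than this specific playbook's schema.

```python
# Sketch: one task's inputs and outputs. Inputs come in as arguments;
# outputs are written to context for downstream tasks (e.g. ${IP.Address}).
args = demisto.args()
ips = argToList(args.get('ip'))    # CSV or list input -> Python list
unique_ips = sorted(set(ips))      # mirrors the "unique" transformer on the input

demisto.results({
    'Type': entryTypes['note'],
    'ContentsFormat': formats['json'],
    'Contents': unique_ips,
    # EntryContext is what later tasks see as this task's outputs
    'EntryContext': {'IP(val.Address == obj.Address)': [{'Address': ip} for ip in unique_ips]},
})
```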
As a result, the inputs are File, because they are solely the file indicators to detonate, plus the file entry ID. So it's going to take the file from the context data and detonate it, and afterwards it's going to give the analysis status. And here we can start to see how it's going to do that; it all depends on the type of integrations you've set up. So we've got Joe Sandbox, which is a file detonator — a service where you can detonate files — and that gives the description and the malicious verdict that the vendor decided on. And then, as we move down, we can see that we have WildFire. So if you've got the WildFire integration installed and a WildFire subscription, you can send it there to be detonated, and this is what it would come back with: the status of the submissions, which are benign, grayware, malware, and phishing, I believe. And, if I remember correctly, there was Cuckoo Sandbox too. So that's essentially the difference between the inputs and the outputs for sub-playbooks.

Now we get to the fun part, or one of the fun parts, which is the configuration of playbooks. Obviously, with XSOAR, you get many playbooks already installed, as you'll see in a minute. Okay, so these are all the playbooks that come with it already. And then as you download integrations, they come with their own playbooks. So, for example, if we go to the Marketplace and just pick any random item — why is it that you can never choose something, and you end up with one integration and four commands that come with it? Of course, I've picked the one that doesn't have any playbooks associated with it. So give me two seconds... and one playbook, there you go. So say we downloaded Okta and then installed it — sorry, installed Okta and then created an instance of it, because "installed" doesn't actually mean that it's usable; it just means that it's ready. It then needs an instance to be created. So, within this pack comes the classifier. You have the classifier, the mappers to map that data to the respective incident fields based on the JSON user profile — incoming and outgoing — and the integrations there. So those are the integrations that you've got. One is deprecated, with 23 commands as well: use the Okta v2 integration instead. Then there's Okta identity and access management, and it explains what it's about: it integrates with Okta's identity and access management to execute CRUD operations, ten commands; and then Okta authentication, a cloud-based identity management service. And you've got the integrations there. So these can be used not only for — let's say, specifically in the case of Duo, it can be used for two-factor auth for people logging into Cortex — but you can also, from here, as you can see, return a single user to active status. So, if they're suspended pending or during an investigation, you can reinstate them. Suspend a user, activate a user, or deactivate a user. Okay, so there's literally nothing you can't do with it. And then you have the playbook, and it gives you a very thorough description of what the playbook is. That is the playbook you obtain with the pack.

And so, if we come back to playbooks, there are a couple of ways you can create them. You'll see that these are locked: the only way you can edit a pre-installed playbook, or a playbook that comes down with a content pack, is to duplicate it first. If you want to edit one, or want to use it and just adapt it for your own purposes, you have to pop up here and duplicate it first.
It will come in as a duplicate; just rename it and save that one if you want to change it. So if we go to here, you can now see that you have the ability to edit it. Change that to "Active users v2", for instance. Now, there are two ways of saving as well, just to point that out. There's "Save Version", and then there's "Save", which is what it says it is. Save Version records your changes, so update and exit, and then we have the ability to go back, look at the version history, and restore any one of them. So if you make a change and it doesn't work, you can go back and restore the version before, or whichever one is the known working one. If you save it, as in save with the save icon, it simply overwrites it. So there's that way.

You can also start from scratch if you're feeling brave. Playbook triggered. And then you've got automations, manual tasks, and playbooks. If you want to use an automation, you really need to make sure the integration is there. So if we wanted to use PAN-OS, for instance, we've got all these commands we can run — any command supported by the API. So we add that, right? And then, in here, we have the inputs. The action is edit, delete, clone, and all the other things you'd get from the API. So anything that's in the API, it will show. And then you have the command category, and then the command there. You can take that from the context, or you can just type it in. And then you would link it in there and then just link back to that, and that's okay. It's important, of course, to remember that if you're doing that, you need to make sure you've got the integration installed. You can create a conditional task as a check. So we create a task, we pin it, and it's conditional. And then you would look at modules — the module details, where you can see the integrations (there's a small sketch of this check below). So, if PAN-OS is installed, pop that down there and make that the condition for "yes". If we want to add a section header to these, we can do so right there, and then this joins up to there once that's done. The other case marks it and then closes it. And then you can use this button to tidy the layout. Sometimes it really does screw it up, but in this particular instance, I think it looks okay. Okay, so then you do the same again. You can save your versions on that, and then it's all down to debugging and working it out. You can build it all the way through like you would with anything else. It's completely customisable; it's entirely under your control — infinitely usable, well beyond the scope of this particular video. But you get the idea.

Then, of course, once you've created your playbook, you need to be able to run it and test it, and there are several ways you can do that. So, if you have an existing incident, I'll just pick this one if I go to incidents now. So this shows how far along the playbook that's associated with it is, which you'll see in a minute once my slow box catches up. Now, the investigation manual, as the name suggests, is attached to a manual playbook. What we also have are these little symbols here, which I'll quickly run through. If you end up with a green tick there, it means everything is fine and it's finished. If you have a timed task and it's orange, it means that it's overdue.
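On that conditional: a hedged sketch of how the same "is the integration installed?" check could be done from an automation. `demisto.getModules()` returns the configured integration instances; the brand name `'Panorama'` and state value `'active'` are assumptions to adapt to your own instance.

```python
# Sketch of an "is PAN-OS installed and enabled?" conditional automation.
# Returning "yes"/"no" lets a conditional playbook task branch on the result.
modules = demisto.getModules()          # dict: instance name -> instance details
panos_active = any(
    m.get('brand') == 'Panorama' and m.get('state') == 'active'   # assumed field values
    for m in modules.values()
)
demisto.results('yes' if panos_active else 'no')
```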
If you have a red one with a little triangle in it, that's relatively self-explanatory: that is a warning or error. Waiting for manual input is this one. And then you click on it, and you can create your manual input, or you can choose an automation. Let's see what that one's actually called. Let's see if we can do this. So, get employee information and get a device profile; you can't do anything with either of those, because this event didn't really have anything we could run them against. But that's how you do it. You would set it up and choose the automation that you're going to run. So in this case, it would be user enrichment — I imagine "ad-get-user", something along those lines — and the task description, followed by save and run. Or you can run a playbook against it as well. So you could choose a playbook: account activity, account enrichment generic v2, and then it takes the account, the username, from the context information we saw before running the playbook. Okay. While these are running through, you'll see a blue bar here with cogs turning, which means it's all good. If it goes greyed out, it's been skipped. So in the case of a yes/no conditional statement, if it's gone "yes", then the branch for the "else" condition will go grey, and vice versa. Incomplete is white — it hasn't run, or hasn't finished. Missing or disabled integration is white with a small red triangle with lightning in it; that means it's hit something, and it either doesn't have the integration to run it, or the integration is disabled. Okay? So learning how to read those is quite important. Again, it's actually quite intuitive.

Let's see if I can get an example of one that errors out. Right, so that's the same sample incident we had before, but this time against this playbook, because I know I don't have that integration installed. We see that it runs through, okay, endpoint enrichment, and it gets to here and it's errored. It's not always obvious why, but sometimes it will error out and the reason is relatively clear. For instance, if something is added to a list and it already exists, it will error out. But in this particular instance, it couldn't find any modules to run for the command "xdr-get-endpoints". That is because I don't have any XDR integrations installed. So you can mark it as completed and move on, and it will go green. That is a manual task. Obviously we've got nothing with which to retrieve it. But again, it's the same thing: you could create a task, choose an automation — there's no point, because we haven't got the integration — so mark that as completed. And then you can see that it's gone grey, because it hasn't detonated the file, because there isn't a file to detonate, and then, awesome, all tasks are done.

So if you change your mind and want to run it against another playbook, you can come up here and pick any playbook you want as a random one, and then you can see that it's starting to work — but I have to be really careful, because I'm low on commands. Do you have URLs? No. IP addresses? No. Do you have file MD5 hashes? We have an MD5 file hash, so we have a DBot score from that. And if we go to the War Room, we can see all the results from that incident, where the task results came in; and then you can click on "show reason" —
a missing argument for certain incidents, proof of escalation, and so on. Coming all the way down to the bottom, we get to where it was talking about the hashes. So we've got the hashes, and it's got the DBot score — the score is three, from WildFire — and you can see it used 146 commands to get there. So we've got the indicator there, and then that can be marked as evidence, and you can write a description for the evidence — that it came from automation, first and foremost — or mark it as a note for later. Okay, so that's really how you do it. If this gets really clogged up and you want to get rid of it, "playground create" will erase the playground and a new playground will be there. So with "playground create", you can run it again: you can test automation scripts, test APIs and commands, and so on. That's testing your playbooks as you go through. If you want test data, I'll just quickly show you this one, under servers and services: this is the Sample Incident Generator, and there are no commands for it, because it is literally what it says it is. You set it to fetch incidents, and it will fetch incidents of random types and pass them into your incident queue to be dealt with, to practise on. It's free — as the majority of them are — and it is very good, because it gives you plenty of opportunity to work with it and to understand what it's doing.

Now let's go over how to use loops in sub-playbooks. You may have a number of inputs from the parent playbook and need to run your sub-playbook per input, which you can do by coming here and looking at the loop options. What I've done here is purely for demonstration of the looping options. So: "None", obviously; built-in, to exit when a value is set — so if you want to go "extension equals zip", you can do that; "for each input" — iterate over all the defined playbook inputs, so you look at the inputs tab here and see which ones it iterates over; or you can choose loop automation, such as "are values equal". So in this, for instance, if left is equal to right — or if left is not equal to right — then it will stop, and you set maximum iterations as well. Fairly intuitive. "Check value" and "error exists" are others, and that's how you control the loops within sub-playbooks.

If there are multiple input lists with the same number of items, then the sub-playbook will run once for each input set. So for example, if you have input one and input two, input one will run alongside input two: it will iterate through input one and, in step, through input two. With lists of different lengths, the sub-playbook will run once for each item of the input with more items; once it's done that, it will move on to the ones with subsequently fewer items until it reaches the end (there's a small sketch of this rule below).

So now we're going to talk a little bit about differentiating between the playbook task types. There are different types of tasks: standard tasks, conditional tasks, and data collection tasks. So let's just go to the tasks here. The standard task is choosing an automation and running it. You can assign an owner to it, set an SLA, and set a task reminder. Conditional is as it says: you define the built-in condition for yes or no manual tasks — basically, as the analyst is going through it, they decide one way or the other. Or choose an automation: you can choose an automation such as "are values equal" in this particular instance. So if you have two values, compare them, and then that goes one way or the other; or else check the value.
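Here's that sketch — plain Python (not the XSOAR API, just an illustration) of the "for each input" rule: the sub-playbook runs once per item of the longest input list, and the shorter lists simply run out of fresh values.

```python
# Conceptual illustration of per-input looping across two input lists.
from itertools import zip_longest

files = ['a.zip', 'b.zip', 'c.zip']   # input one: three items
users = ['alice', 'bob']              # input two: two items

for file_name, user in zip_longest(files, users):
    # one sub-playbook run per iteration; `user` is None on the third pass
    print(f'run sub-playbook with file={file_name}, user={user}')
```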
If a value exists — a boolean "value exists" check — then yes; if not, it's on the else case. You can also ask by email, and as it says, there are communication channels: you can email. If you go into here, you get playbook inputs and incident details. If you come across an email address in here — an email, so the email address, incident email address, or the email's BCC or CC — you'll be asking that person. Then you can create your subject line, message body, and reply options — yes or no — or you can add a reply option. So that's one way of getting information back: that is the task, and the person would then reply. There is another way of doing it, and that's data collection. Data collection is slightly different: data collection tasks can be used for surveying users. So if you want to create a survey, this is how your email will look. A link will be placed there — a link to the web form will be automatically placed at the bottom of your message — and they will go and complete the survey. So you ask your question there, and there you have your message. The user will get a link to click on to go to an unauthenticated survey — they don't have to authenticate or log in or anything like that — and then you'll receive the information back through that. Both are fairly valid. You can use the answers for subsequent playbook tasks, you can store them as evidence, or something like that. So those are the main task types. Section headers go at the top and bottom of sections, and you can select the timer field and add a timer to a task so that timing begins, and that's about it.

So we have to accept that data and alerts and so on will come in varying forms, and they have to be made as useful as possible. So we're down to 1.5 — actually, it's the last one. The last section of the first domain is basically going through filters and transformers to manipulate data, and they'll expect you to explain the distinction between the two. Now, it is relatively intuitive, but I find the classification editor to be the best way of describing it. So when you're creating your classification, you will come into the classification editor, and then you can pull from the instance, or you can upload JSON files. In this particular instance, we've got a log file from my firewall. And what I was attempting to achieve was that when the log comes through — we see "threat" in all of them, but then we see "packet" — that is the log for port scans and suchlike. So it's coming from the zone protection profiles and the DoS profiles. Okay. So the way I got there is: from that, I've linked it to an incident type — because the result is "packet", as you can see there — and then I've dragged that down to there, and that's linked to external scanning, which then classifies any logs that come in as being that incident type. So how do we get there? We came here. So we're going to get the content field, which is there, and which you can see is just a complete delimiter-separated string. Okay. So to get that, you would go to the keys, because those are the keys there — remember, these are JSON keys — then get the content; it's very straightforward. And then filtering. So filtering is to get, as it says, a subset of the data.
So basically, if you're looking for a particular file type, like PDF or something like that — in this particular instance, I'm looking for content that matches the regex "packet". Okay, and these are the other options you've got: content contains a string, is empty or not empty, is in a list, is true or false — it'll all go in there, and then you can use that to match and get the specific part of the data you're looking for, and then transform it. So this is literally just where I'm transforming it to "packet". In this particular instance, it's not much use, but where it would really shine is if you were specifically looking for a zip file, for instance. Then the best thing to do is to say, "I want to get zip", but change all your content matches to uppercase, because then you don't have to put in "zip" in both lowercase and uppercase — it will do that automatically, so you can focus on just one form. So once we've done that, we can test, and then we drop out of this. So we see our test sample here, click test, and we can see that we have "packet" — completed testing, okay — and then drag that to there.

Distinguishing filters and transformers: if you're looking for file extensions, you can filter on that. If you're looking for a specific part of the data, that's your filter, which you get from the key; and then you apply the transformer to that, which makes it easier to manipulate the data. You get fewer misses, because you're not having to think of every iteration of that particular data — you've transformed it into something else, or into the type that you require as opposed to a type that you don't. And then that can be used within playbooks further down. To run through it quickly: the filter categories are boolean, which determines whether a field is true or false, or whether a string representation is true or false; date, where the left-hand side time is earlier than the right-hand side time, so it compares two dates; general filters, such as contains, does not contain, in, and empty, looking for a specific string or a number; and other miscellaneous ones. And then transformers: for example, you can convert a date to a string, or a date to Unix time, so that it can be picked up by other integrations or suchlike in the format required. You can stringify, join, sort, splice. So, for example, let's look at that one again, just quickly. Okay, so looking at that, if I wanted to get a list of those, that's what I've got there. Okay, so I'll then go on to add a transformer to that. Right, so let's show you. So that's that, and that's what I come up with. If I add a "split on delimiter" transformer and test again against the sample that was there, you can see I can then pick whichever one of them I want. So if I transform it to that and then want to get the item at index two, I can test that sample, and I get "threat", because I've got that at index two. So those are the transformers and the filters. Basically, you'll use them within classifications, mappings, incident types — anywhere where you're going to need to either normalise data or transform data from one type to another, that's where you're going to do it.

Okay, so the next video is incident types, indicator types, layouts, and fields. And that's going to go more into configuring your own incident types by creating layouts, because the layouts are all configurable; they're all changeable.
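Before moving on to layouts, here's a plain-Python rendering of that filter-then-transform chain (the sample log line and field positions are invented for illustration): filter the raw content on a regex, then transform it — uppercase, split on a delimiter, take an index.

```python
# Filter, then transform: the same pipeline the classification editor builds.
import re

content = '192.0.2.10,packet,threat,2024/01/01'   # made-up delimiter-separated log line

if re.search(r'packet', content):        # filter: content matches regex "packet"
    fields = content.upper().split(',')  # transformers: to uppercase, split on delimiter
    print(fields[2])                     # transformer: get index 2 -> 'THREAT'
```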
You can create layouts for your own purposes, or you can alter layouts to provide the right amount of detail, or pertinent information, to the analyst that's looking at them. And again, that's fairly simple and easy to do, and actually creating the layouts is very quick. So we'll proceed to the layout builder and go through the settings, summarising the relationship between external data and the Cortex XSOAR incident.
Domain 2
1. Domain 2
This video covers Domain 2 of the PCSAE exam. We're concerned mainly with the understanding of incident types, indicator types, layouts, and fields. This part of the exam counts towards the final mark. So have a quick look at what we mean. We're going to go from Incidents to Types, broken down into types. As can be seen, there is a type on the left-hand side, all the way through to the content pack that provided the incident type on the right-hand side. If we go to a phishing incident, you can now see the main components of an incident type. So in here we'd have the name — that's the incident type name; that makes sense. You can colour-code it. There's the layout it's going to use, the default playbook that it's going to run, and whether you run the playbook automatically or at the time of investigation. So if you tick that, the incident happens, it's brought in and classified, and XSOAR will run the playbook automatically. If that's unticked, as in this case, you would get your alert, you would examine your incident, and as you went to investigate it, the playbook would run in the background as you clicked on it. Beneath that is the auto-extraction of indicators — here, the system default. This means that the indicators are extracted and enriched as the playbook runs. Out-of-band means the extraction comes after your context information is somewhat up to date. I don't know what the use case would necessarily be for that, but it does mean that your initial context data would not be in real time; as a result, you could be looking at an incident without having all of the context data.

Okay, your post-processing script is exactly what it says it is. So you may have a check that you want to run to see if the incident is a duplicate of another. You may want to categorise it. You may want to close it off if it's a known false positive. So you would select your script to run, and once your incident has been processed, it will run that script and then categorise as you go. So post-processing is essentially for time saving, really, I would have said, and it's also there to help clean up (there's a small sketch of one below). And then after that we've got the SLA. This is where we set the SLA for the incident, and you can see we've got weeks, days, and hours. And then we can set a reminder to prompt us if it's not looked at within this many weeks, days, or hours.

So from here we move on to layout. As we can see in there, we've got the layout that you select for the incident type. There are all the layouts, and that, strangely enough, is configured here under Layouts. And here we can see where we define which layouts we have for which incidents. So if I go against a good old phishing incident: this is what's going to be presented to your analysts when they initially click into an incident. And the goal here is to have the best quality information possible, as well as the most relevant information for the incident type. Basically, we work around the theory that XSOAR is essentially there to streamline your own processes. So your SOC process is already established, or your customer's is. As an engineer, you are going to deploy XSOAR, and you are going to make it integrate with the existing sub-processes where the customer identifies that they work; and you are going to use XSOAR to change processes where the customer feels there is any kind of issue or bottleneck. The biggest bottleneck in security response, as everybody knows, is sheer noise.
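Here's that sketch: a minimal, hypothetical post-processing script that looks for earlier closed incidents with the same name and flags a possible duplicate. The `getIncidents` command and its `query` argument are a reasonable assumption rather than a documented contract; check the command set on your own server before relying on it.

```python
# Hypothetical post-processing sketch: flag a likely duplicate at close time.
incident = demisto.incident()                 # the incident being post-processed
name = incident.get('name', '')

# Search for earlier closed incidents with the same name (assumed command/args).
res = demisto.executeCommand('getIncidents', {'query': f'name:"{name}" and status:closed'})
total = demisto.get(res[0], 'Contents.total') if res else 0
if total:
    demisto.results(f'Possible duplicate: {total} closed incident(s) share this name.')
```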
It's literally having to sift through sometimes dozens of alerts from different platforms, all in different formats, to try and correlate for yourself what the incident is and the context around it. So we build it here. Now, incident layouts are very simple to create, and the idea is that you have an incident summary — this is what you see as soon as you click into an incident. Oddly enough, when you're creating a new incident form, or when you are editing a form, you can modify and customise the fields that are here; the close form — basic information and custom fields again for when you're closing an incident, so the information that is required when closing; the incident quick view, which I'll show you in a minute, which is the fly-out menu from the right; and then mobile. So there is the official XSOAR mobile app, which is very good, but obviously you may want to pare down the information that is provided, because it's a smaller screen, so there's less room for any kind of potential noise. Me personally, I like to look at information in context, with the depth of information on a sliding scale, really. So the further away you are from taking action against something, the less information you require — or rather, the more pertinent the information needs to be. So you need less noise around it: you basically need to know that there is an issue that you need to look at, all the way down to, I suppose, an email simply saying there's a person involved on this particular endpoint and you need to look at it. You don't want all the information within that email, because you don't need it; you need to log onto the system and look at the extended context once you're there.

Okay, I'll just quickly show you the quick view. If I go to incidents, take out the status "closed", and search, we can see this incident here. As I click into that incident now, up here, a quick view flies out from the right-hand side, and it's matched to the fields that we saw previously. So you've got the basic information fields: its type, severity, who owns it, status, where it came from, the source that it came from, the phase — which is investigation and triage — and so on; and the roles it's assigned to. Is this going to be assigned to just an analyst? Is this going to be assigned to an administrator? More pertinent, I guess, for that particular field is that if you have many people within a SOC team — and you will have large SOC teams anyway — you will have people that more closely deal with phishing incidents, people that more closely deal with malware incidents, and people that more closely deal with, say, DoS attacks. You can give an incident a role based on where you want it to go and which analyst you want to see it. And that's all done, as we saw, with the incident type and the layout. So if you have a phishing incident, you don't really want your malware guy sitting there looking at it, wondering what's going on. So you would assign it to that role, and then look at the way XSOAR handles it. Further down the line, if you assign an incident type to a particular role, without any further intervention from yourself (although you can then customise it further), XSOAR will literally look at it — it's been assigned to that role — and it will assign the incident to somebody who holds that role, and that is done on the basis of who has less work.
Have they got capacity — or, in inverted commas, "bandwidth" — to deal with it? It's an awesome platform. The timeline information lets you know when it occurred, when it was created, when it was last updated, the total accumulated time, and the labels from the incident itself. So you've got the host name, timestamp, client — which is the firewall — priority, and so on. So really, one thing that's very strong throughout the PCSAE exam course, and in fact as you go further forward, is to always point out that XSOAR literally removes a lot of the issues and a lot of the bumps along the way. So it's responding to an event; it's creating the incident from that event; it's giving that incident to the correct person to deal with it; and it's notifying that person. And all the way along the line, it's keeping a downloadable record of everything that's been done around that incident. If you go to the War Room, all of this here is a catalogue of everything from the moment it was found to the moment that you closed it. It's all downloadable; no more sitting and writing out timelines. And although that sounds like a sales pitch fantasy, that's what they're going to expect you to say, because they're going to expect you to say XSOAR is great.

So let's just go back to the incident types again. We'll go back to Settings, Advanced, and Layouts — layouts this time, not incident types. And we'll go back to our phishing incident. So, one of the other parts of the domain that we're working on is understanding the issues and potential problems with miscategorisation or misclassification, and the mapping of incident types, because this is really where the work is done. This is where you're really making the analyst's job easier — but not just easier: more efficient and accurate. So you'd have the case details here, which are provided to them; you can see where that is. Incident type, severity — this is just filled with sample information, so it doesn't make much sense: the severity, the owner, source, the source instance, the source brand, and the work plan. So in this particular instance, you've got tasks with errors. This is all very pertinent, because the work plan will tell you where within your playbook you are at this particular point, which usually is potentially a manual task or something like that, unless you have an error task — and that is what it says it is. So that's an error. So if I've got a notification here, that means that something has gone wrong with the playbook. And then you can click into that, go to the playbook, and rectify whatever the situation is. And again, the thing with XSOAR specifically is that within the War Room, it will also give you the reason why it failed. So let's see: on the right-hand side there's a little box, and it says "Show reason". You click that, and it gives you a quick synopsis of why it failed. Nine times out of ten, it will be because you've got an integration that either isn't installed or isn't responding, or so on. But obviously you need to rectify that for it to continue. Notes are notes: when you mark something in the War Room as a note, it will be here; anybody involved in the incident can mark something as a note or evidence, for that matter. Timeline information — yes, it's important; massively important, in fact. The SLA is important: when it's due. The team members are the people who are involved in it.
So if it's initially assigned to John and he wants to involve Jane because she's possibly more in line with what's going on, she's more senior, or he just needs his hand held, that's fine. The team members are there, they're involved, so you've got a record when you click on it: who knows about this particular incident? The evidence speaks for itself. Child incidents are those that stemmed from this one. So if you have a specific campaign, say a phishing campaign, you'd have incidents where things have matched. Linked incidents are what they say: literally linked incidents, the incidents you've linked to it. And then the closing information: the close time, the close reason, who closed it, and any notes added later. This is all in the investigation. You then go further and drill down into what the indicators are that we're seeing. You can enrich them further by clicking on them and running enrichment against them, or you can use the bar at the bottom to run particular brands against particular indicators. You can also mark things as evidence in the War Room; as previously stated, going through the work plan will show you where you are in your playbook and what those activities and tasks are. Canvas is then a visual representation of linked incidents, and you can use machine learning to create those links, or you can create links yourself if you know specifically that something is linked, or you want to link incidents that are targeting a certain person. You can do that there. And then again, we go into the editor.

So, moving on to the next section within this domain, titled "Summarise the relationship between external data and the Cortex XSOAR incident type". Within the context of XSOAR, external data is data that is retrieved from integrations. Okay, so if we go to integrations — and they're always under Servers and Services — this is where you create integrations. So this is where you download them from the Marketplace, and this is where you then create instances of your integrations. Another important thing for the exam is that it will ask you what "installed" means: is an integration usable if it's merely downloaded — or, in other words, installed? And basically, it's not. No. So you go to the Marketplace, and I have a million and one that need to be updated. So you can see the integrations that exist here. When you click on one, you can see all of the information associated with it, as well as what it works with. PolySwarm; VirusTotal is obviously VirusTotal — specifically in this case, the free version with the API for your free lookups, and then there's the other API, which is paid for. An interesting thing to note that I found — and this is completely aside from the exam; I've not seen it as yet on the exam — is that within the integration, it's usually hard-coded to use the paid-for subscription. So I hit an instance where I was running things against VirusTotal, only to find that it was failing as if the integration wasn't installed and configured. It was installed and configured — it was just using the other API. So that was just a case of going into the automation itself, finding the line of code that was looking for the v3 API, as you saw there, and changing it to the other one, and then it ran perfectly. So within this, you can review it if you want. You enter your classifiers, and this will tell you, as data arrives, which classifier will be used, what the incident type will be, and which mappers it will use.
As we said, when you click into this, you can see that you've got ten commands. These are the commands that can be run either automatically from the incident type, programmatically — you can create your own scripts that then run these commands (sketched below) — or from the bar at the bottom within the playground. So you can run things ad hoc against indicators or files. Now, when these are in, you will then see — and I'll demonstrate this in a minute — that those commands, these commands here, are available at the bottom. They become available once they're downloaded. That doesn't mean they'll work, because if you don't have an instance of the integration, there's nothing to work on. And then we have the playbooks that come down as part of this content pack. So you've got VirusTotal's create zip, which does exactly what it says it does: it creates a zip file from hashes of existing files. VirusTotal detonate file: this particular one refers to sending a file off to VirusTotal to be detonated in a sandbox environment. Detonate URL: same thing. Send it off, and when it gets detonated, they have a virtual machine go off and analyse it — basically along the lines of WildFire. Well, not exactly the same: WildFire is a lot more involved, but VirusTotal is very good. It's something I always use. At the time of making the video, this particular PolySwarm integration doesn't work. I spoke to PolySwarm, and they are looking at rewriting the integration for XSOAR, but that's not out at the moment, which is a bit of a shame, because PolySwarm is quite good.

So this gives you your dependencies, as can be seen there. So when things aren't working, this is why: it says additional content packs must be installed for this content pack to execute successfully. These are your dependencies. If you have any issues, you can quickly click into your integration, have a look at the dependencies, and if there are any problems, you know that you need to upgrade those to help it work. Again, if you find that you have an issue — and there are occasionally issues; I've not run into one yet, but these are maintained releases, and issues do come out — you can see where you are. I can see what's installed: I need to update this, reinstall that one, or revert to that version. In fact, I am two versions behind, which is slightly high, so update that one, and you can see that it's fine. So you can see there that there are dependencies missing, as we could see with the Base and Common Playbooks packs, which needed updating first. Okay, so we'll actually run through that now. So if we go back to installed content packs, we can find, in here, the Base update. Oh no — I made a mistake. Okay, so that's been updated, and Common Scripts is Common Scripts. Anyway, it didn't install the Base pack, which is rather annoying, because that's still available for upgrade. So I'll update that. I think in that particular instance, what we actually did was remove the last one. Many of these updates are now available — in fact, all of those that have been released thus far — and it notifies you when there is an issue, so you can view the warnings, as well as what the issue is. But as you go down and upgrade them all, it's usually just because there's a dependency. So just upgrade the dependency.
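On the programmatic option mentioned above, here's a hedged sketch of a script running a generic reputation command. It assumes at least one enabled integration implements the `ip` command; the exact shape of each returned entry — and where the score sits in its `EntryContext` — varies by integration.

```python
# Sketch: run an integration command from a script instead of the CLI bar.
results = demisto.executeCommand('ip', {'ip': '198.51.100.7'})   # generic reputation command
for entry in results:
    # DBotScore location in EntryContext varies by integration; assumed here
    score = demisto.get(entry, 'EntryContext.DBotScore.Score')
    demisto.results(f'DBot score from {entry.get("Brand")}: {score}')
```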
So if we go back to — sorry — Servers and Services, we now see that the new version is installed and all the dependencies are installed. We know that, because we updated them.

One of the big strengths of Cortex XSOAR is machine learning and the ability to train an AI model, and I'm going to demonstrate this with this phishing incident. So we have a phishing incident that has been created from the sample generator. We've got the investigation there; we've gone through and seen everything. There's not much context information for the sample, really. And there's your War Room. So in here, you can see the fields: the incident name, the remediation SLA, what phase it's in. We have the email itself, sent by the user at some random time, and there's a link in it. Any automations that have been performed, and the label types, are listed here. Under here, we have where it came from, the IP address. So it's been classified as the incident's — the MD5, the summary, the user. And so now what we're going to do is use this information to train, or start training, a model. Now, because this is a demo system, I don't actually have enough incidents on here for it to be happy about training the model, so it will fail. However, in a production environment, especially one that has been operational for some time, it will not fail.

So we'll always — it's always best practice — include a description. Within the IT industry, we tend to believe the "we're going to be somewhere forever" type of thing. We build something, and therefore we own it, and it's ours, and it's our precious little thing. The problem with that is that over time, you will forget why you did something. You will forget its purpose. So a description is always good to give some background to why something has been done, and it gives an idea of why it works — much like a comment section in code and things like that. But additionally, somebody at some point, I can absolutely guarantee — unless it gets decommissioned before you leave — is going to sit and look at something you've done, exactly the same as when you get into somewhere and look at what somebody's done before you. You are looking at what's been configured with absolutely no context as to why it's been configured, which can sometimes mean that you alter or remove something, believing that it's incorrect or that it's not how you would have done it, without understanding why it was done that way. So if you leave a description, you can remember why you did it, and somebody else coming along can also look at it and understand why.

Okay, so we're going to select our incident type, which in this particular case is going to be phishing, which is all good, and we're going to do it for seven days — this is the date range that it's going to go back over. The maximum number of incidents to fetch is 3,000, so you're not searching through more than that. When this box becomes populated, it can be a pain, because you end up having to scroll through it, and it can be a nightmare on certain browsers. And for demonstration purposes, I am going to use the source brand which, if you remember, is the integration — or the instance of the integration — that has given us the information and given us the context. And here we have, in this specific instance, spam as a verdict, malicious as a verdict, and legit as a verdict.
And this is literally as simple as dragging. So if we know that this particular instance of this integration is spam — okay, so we've looked at the field value; I mean, this could be anything, this could be the email body or anything like that — and we have some examples of spam emails, what we do is drag it there. You can see straight away that it says you've got a total of three syslog incidents, and that's not enough incidents for the verdict. So you'd go through and drag more and more until it's satisfied — this is Sample Incident Generator data, remember. However, if you were to say, "Okay, well, I know this particular domain," then if that were the domain, you could drag that to legit; again, there are only five, so there's not enough of them; and then you would drag the other ones to malicious. The argument mapping then is what it is: the field and what it's to be mapped to — the email HTML and what that's mapped to, the email subject mapped to the email subject. You can map things to the different incident fields if you want. You build it yourself, and then you hit "Start training", and it would start training the model there. Now, obviously this one's going to fail, because there wasn't enough information, but that's literally it. And then, as that goes on, it will start to look at the incident types you've created, and it will start to learn. It will have a baseline from what you've done there, and it will learn and build upon its pattern matching. That will go on further, you can take actions as a result of it, and it will learn to pick up what is and isn't legitimate based on what it's seen there and the verdicts as you go further down. So we'll look at the verdicts from classified incidents later.

Now, on to feed-triggered or time-triggered jobs. Jobs are basically cron jobs in the background. Jobs are things you can run for varying tasks: to expire indicators, remove indicators at a certain time, or fetch indicators. You can run the jobs from here. So we come to the Jobs — "Indicator review", for instance — and we have the time-triggered one, which, if I click on it, is recurring: what time do you want it to start at? Do you want it to be something that runs on a regular basis? Select a start date. There's also feed-triggered — triggered by a delta in the feed — which can be recurring as well. You can then switch to the cron view: for the people that are into cron, you can configure it there if you find it easier that way; it even gives you some examples (a few are sketched below). Switch to the human view, and you select when you want it to run — every so many hours, or every week on those days — and then when it ends: never, or at a set point. Tags: anything you want to add. You can add tags to the job. Tags are a brilliant thing — they're used throughout the Palo Alto ecosystem and the majority of security systems now — to categorise stuff and make it easier to find for associations and hierarchy; and then there's the basic information that you want to look at. So "Indicator review" was the name that came to mind when I first did it; the reminder for the owner; and you've got the role for who's going to look at it. So you've got read-only access — if you want people with read-only access to be able to see it, they will be able to see it — administrator, analyst, and so on; then the severity that it will be classified as, and the playbook that will be run as a result of it. As you can see, this particular one is a bit defunct, because I'm using it for a manual review.
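Those cron examples, for reference: standard five-field cron expressions (minute, hour, day of month, month, day of week), printed here from a tiny script rather than anything XSOAR-specific.

```python
# Standard five-field cron expressions and what they mean.
schedules = {
    '0 */6 * * *': 'every 6 hours, on the hour',
    '0 9 * * 1-5': '09:00 on weekdays (Mon-Fri)',
    '30 2 * * 0':  '02:30 every Sunday',
}
for expr, meaning in schedules.items():
    print(f'{expr:<12} -> {meaning}')
```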
There's no point in creating a scheduled job to run a manual review. Labels, then details, then the custom fields you want later, the assigned user — basically everything related to creating an incident. It's a scheduled triggering of an incident. You can run it now; you can save it, enable it, pause, resume, abort, and delete it. Again, with all these things, you can choose what fields you want, and you can move fields up and down. And then you can see that we now have the SLA at the top. You have a lot more information as to what the job is about, from the labels and the attachments. Let's click on everything — so it's all in. You'll find with a table view that it's the same all the way around. Summary view — look at that one; maybe a bit easier to look at. And then status: closed. So it's been run, and I've closed it. And that's jobs, really. Jobs are anything that you want done on a regular basis that you don't want to have to do yourself, basically. And then, as you get further and further in, more and more of them get used. One of the big uses is to expire indicators — that's one thing I use them for — because when you look at indicators, you see you've got a lot of indicators of compromise: you've got bad IPs and so on. And then, when you look through, you get the expiration status there, which you can move up a bit. So you know that you've got active ones — come back to it and it says we know that it's active. If you want to see closed or expired ones, filter on expiration status equals expired — and there aren't any, so my job is working, and all the others are active. But you would normally have a lot of things there once you set up your feeds. Take URLhaus, for instance: I use that on my MineMeld, and it pulls in something like 72,000 indicators. You don't want all of that stored there forever; you need to clear it out and get rid of it.

Now we're going to look at indicator types and incident types from the point of view of the exam, which wants you to be able to differentiate between them. Okay, it's fairly obvious — though within the context of XSOAR, perhaps not; I'm not sure. I believe it is, anyway. Okay, so within the indicator type, we can configure how we want to extract a particular indicator we're looking for. So using this as a baseline: the type of indicator, the symbol for it, the name, "disabled" — so you can disable it if you want to. The reputation command: this is the command that you're going to run for reputation checking, and then the regex is going to grab the indicator. So, for example, in email, we have A to Z in capital letters, a to z in lower case, and numbers zero to nine, so that would capture all the possible combinations at the beginning of an email address. And then we get to the @, and the same is covered again. And then there's the rest of the regex — which is terrible to read, but basically the regex that's there will cover whatever username you have at whatever domain. I'll need to learn more regex than I know now, but I'm sure you won't be fooled by the fact that I don't know regex off the top of my head (a simplified version is sketched below). So when we create a new indicator type, as usual, we click on that. We give it a name. We write the regex that we want to use to extract and classify it. And then there's the indicator formatting script — what formatting script we're going to use after that.
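That simplified sketch: a rough, illustrative version of the kind of extraction regex described for an email indicator type (the built-in one is considerably more thorough).

```python
# Rough illustration of an email-extraction regex: letters, digits and common
# punctuation for the local part, then @, then a domain with a TLD.
import re

EMAIL_RE = re.compile(r'[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}')

sample = 'Contact alice@example.com or bob@test.org for details.'
print(EMAIL_RE.findall(sample))   # ['alice@example.com', 'bob@test.org']
```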
Next comes the formatting script we're going to run on what's extracted; "extract the domain from the URL and email" is one such script. Then there are the enhancement scripts: here we might run an IP reputation script or a file reputation script against the indicator. The reputation command is the command we're going to use on it; for URLs it would be the URL reputation command. Then "integrations to exclude". By default it will use any instance of any integration, unless the "do not run by default" box has been ticked; otherwise they all run by default. At that point the worst verdict is taken as the verdict presented to the user. So if you have three integrations, two of which say an indicator is good and one of which says it's bad, the context data will still show you all three, but the overarching reputation picked for that indicator is bad, because that's best practice: you want to know what the worst-case scenario essentially is (there's a sketch of that aggregation just after this section). With "integrations to exclude" you can exclude specific integrations from that process. Then, how is it going to expire? Never expire, or expire on a time interval: how long it takes for it to expire. If you want it to expire in, say, one day, or one hour, whenever you want, you set that here. Then we get to the layout, and I'll show you layouts in a minute, and then the advanced settings, where the reputation value and many related options live. What I would suggest, just as a quick aside, is that you download XSOAR properly. The community version has the reporting removed, and you're limited to 166 automation commands, as I said before. But if you go to Demisto and apply for a download, they don't say no, and you get a full version where you can do the reporting and everything for 30 days. I'd suggest doing that and then just sitting and looking at it, because there are all sorts of things in there. It literally helps you along the way; it's designed to help the analyst. That's literally what it was designed for. Our role as engineers, as you know, is to design the indicator types, create the integrations, and create the automations that assist, but for things like this, which an analyst would do, and an engineer at a very basic level, it helps one hell of a lot. So, under "indicators extracted", the entry data from the reputation command is mapped to context, and this path defines where in context that data is mapped, along with the context value used for reputation. The cache will then expire in so many minutes: how long do you want your cached reputation to live? And then you have custom fields, which you can modify, and you can load an indicator sample, which will show you how the information from the indicator maps to which path. Okay, so that's indicator types. Before layouts, let's go to incident types first. These are the incident types we went through previously: run playbook automatically, and so on; that's where you categorise your incidents, and I won't bore you by going through it again. Then there are the two different layout types: the indicator layout and the incident layout. They serve very different purposes, and they're fairly intuitive and straightforward. We've already discussed the incident layout and how you can customise it.
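Here's a minimal sketch of that worst-verdict aggregation. Reputation results land in context under the DBotScore path, where 0 is unknown, 1 is good, 2 is suspicious and 3 is bad; the vendor entries below are invented, but the take-the-highest-score logic is the behaviour described above:

```python
# Hypothetical DBotScore context entries from three integration
# instances for the same indicator (0 unknown, 1 good, 2 suspicious,
# 3 bad).
dbot_scores = [
    {"Indicator": "evil.example.com", "Vendor": "VendorA", "Score": 1},
    {"Indicator": "evil.example.com", "Vendor": "VendorB", "Score": 1},
    {"Indicator": "evil.example.com", "Vendor": "VendorC", "Score": 3},
]

# Worst verdict wins: one 'bad' outweighs any number of 'good' results.
VERDICTS = {0: "unknown", 1: "good", 2: "suspicious", 3: "bad"}
worst = max(entry["Score"] for entry in dbot_scores)
print(VERDICTS[worst])  # bad
```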
On to the indicator layout, then. You can see the bad reputation here, which is a good way of demonstrating what I said previously. In this particular instance the indicator was run through AutoFocus and VirusTotal: AutoFocus thought it was bad, VirusTotal thought it was suspicious, and since bad is worse than suspicious, the overarching verdict you're drawn to is bad, and active, meaning it's live now and not expired. You can enrich further from here, expire the indicator, add tags to it and remove tags, and assign it to people, with their roles, and then there are the sources it came from. That's one feed, and then there's the reliability of that source, because feeds are updated at different times, and a lot of that depends on how much you pay the people who maintain the feed. I don't have an issue with subscription-based indicator feeds. The reason I don't is that the infrastructure required to keep that quality of information up to date and almost real-time, and how efficient that infrastructure has to be, is vast. If somebody is going to put that money in, then I'm happy to pay the money for the subscription. That said, there are plenty of free options, VirusTotal and AlienVault OTX and the like, plus free feeds such as URLhaus. In the community version of XSOAR you won't get much benefit in terms of indicators, because you're limited to five feeds and 100 indicators per feed, and as we've already said, URLhaus runs up to 72,000. It will still give you an idea of how it all goes together, but if you had all 72,000 coming into XSOAR, you can see how exponentially that benefit increases. Okay, so that's your indicator layout. You can build that as well; it's all customisable, again drag and drop. So those are your incident and indicator layout types, the type of indicator, how you can categorise it, and the difference specifically between the incident and the indicator within the Cortex context.

Next on the list is the exclusion list, and I appreciate this is now taking some time; if you're still with me, fantastic. The exclusion list is another vehicle for removing noise from the data you've got. You can add an exclusion as a literal value or use a regex; that's how you pick out your particular indicator. Let's just see if I've done this right. I know that my home LAN range is 192.168.0.0/24, and you can see that I can put that in there. The reason is that it's my home LAN, so I don't need the categorisation or the reputation checking on it; the indicator type is CIDR. I don't need it run through reputation commands, and bear in mind you're limited to 166 commands here. Even when you're not limited to a certain number of commands, you don't want to be burning lookups on all of your known-good IP addresses. Okay, so now we have the CIDR in and the indicator type set. For some reason it hasn't taken it; let me just... there you go, that's better, it just got itself in a bit of a muddle. So now, when it shows me indicators, it's going to take into account that 192.168.0.0/24 is known. There's no point enriching it, because we know it's our home LAN (a minimal sketch of that membership check follows below). You can also import and export exclusions; exports are in JSON format, I believe, and probably imports are JSON as well.
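As mentioned above, the exclusion logic for a CIDR is just a subnet membership test. A minimal sketch using Python's standard library, with the excluded range being the home LAN assumption from above:

```python
import ipaddress

# Hypothetical exclusion list: an internal range we never want to burn
# reputation commands on (remember the 166-command community limit).
EXCLUDED_NETWORKS = [ipaddress.ip_network("192.168.0.0/24")]

def is_excluded(indicator: str) -> bool:
    """Return True if an IP indicator falls inside an excluded CIDR."""
    try:
        ip = ipaddress.ip_address(indicator)
    except ValueError:
        return False  # not an IP address at all
    return any(ip in net for net in EXCLUDED_NETWORKS)

print(is_excluded("192.168.0.42"))  # True: skip enrichment
print(is_excluded("8.8.8.8"))       # False: run reputation commands
```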
This can also be done from an incident. If we go to an incident, okay, here we have one that's already closed, so the exclusion I just put in won't now apply to it. But if I look at that indicator there, it knows it's good, because the enrichment integrations have said it's good; they recognise it as an internal IP address, maybe. So we can go to Actions there, then Exclude, and as you can see, you come back to the same thing: what it is, then you put an exclusion reason in, and that excludes the indicator. Again, that is very much an iterative process. It's something that will probably be done primarily by the engineer when the system first goes in: you would exclude certain known-good indicators that are specific to each company. Then it carries on iteratively with the analysts: as they go through and realise that certain things are benign and don't need to be looked at, they can exclude those too. And you can control who can do that as well. If you don't want your analysts to be able to do it, only your administrators, you can drop that down to being allowed only for administrators. Okay, that's the exclusion list.

For the exam you're also going to be expected to understand field types: how to create them yourself, how they're then mapped into incidents and so on, and how that ripples through the whole system. So we go back to Settings and then Fields, and here we have all the field types we've seen previously. We've got attachment; hash, which is a short text field; single select; drop-down; more short text; number; date picker (Chronicle Detection Created Time is a date picker, for instance); and then grid, or table, as you can see. There are fields provided by content packs as well, and you can click into those; you can get to everything from everywhere, really. Back here, you can create a new field. A new field lets you further capture something that you, or your customer, need to know. You create it by deciding what the field type is going to be, say boolean, a checkbox, and the field name; call it False Positive, and it then changes the machine name underneath to remove the space (see the machine-name sketch below). You can add a tooltip, so when you hover above it you get a hint; a script to run when the field value changes, so it monitors the field and fires after it's modified; and a field display script, which determines which fields display on forms and which values display for single-select and multi-select fields. Then you choose which incident types to associate it with: either all of them, or you find the specific incident type you want. Default display on the new/edit form, the close form, or both. Who can edit it: only the owner, or everybody. And make it available in search. That field then becomes an incident field or an evidence field. So, for example, we could create a boolean called Verified as an evidence field, make it available in search, and save it; in our evidence view we'd then see whether it's verified, yes or no.
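On the machine-name point from earlier: as far as I can tell, the convention is simply the display name lowercased with spaces removed. A one-liner reimplementation of that convention, mine rather than XSOAR's own code:

```python
def machine_name(display_name: str) -> str:
    """Approximate XSOAR's field machine-name convention:
    lowercase with spaces stripped."""
    return display_name.replace(" ", "").lower()

print(machine_name("False Positive"))
# falsepositive
print(machine_name("Chronicle Detection Created Time"))
# chronicledetectioncreatedtime
```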
You can make a field mandatory as well, though you wouldn't normally make a boolean mandatory, to be honest; mandatory means it has to be filled in before the process can continue. You can see there that the long text field is mandatory, so it basically has to be filled in before you can move on. Okay, so those are fields and creating fields. Just to sum up what we've seen: we've gone all the way through incident types and creating them, indicator types and creating them, the difference between the two, and then fields, creating fields, and how they map into incidents.