CompTIA Pentest+ PT0-002 – Section 19: Findings and Remediations Part 3
March 15, 2023

185. Administrative Controls (OBJ 4.2)

In this lesson, we're going to talk about some administrative controls. This includes role-based access control, minimum password requirements, policies and procedures, and the secure software development life cycle. First, we have role-based access control. Role-based access control is a security approach that focuses on restricting the availability of a resource to only authorized users. Now, when we deal with role-based access control, we are really focusing on the way that we give permissions and rights to particular users based on their job function inside of an organization. Role-based access control allows an administrator to assign each user to one or more roles and then use those roles to assign the permissions to the organization's resources. In Windows domain environments, role-based access control is normally going to be implemented by using groups, and these groups can also be set up in a structure that mimics your organization's hierarchy.

For example, we might create one group for the Accounting department and another group for the Human Resources department. Then, based on those groups or roles, we can assign each group access to different resources. Additionally, we could put both of these groups into a higher-level group called Employees, and that group might have access to other resources that every employee of the company needs. Role-based access control can be used to enforce minimum privileges for a subject based upon all of their associated groups, and users can be members of one or more groups based on the different roles they fulfill within the organization. This type of access control works really well for organizations that have a high rate of employee turnover, because the permissions are based on a work role rather than on an individual's own username.
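
To make this concrete, here is a minimal sketch of how role-based access control might be modeled in code. All of the role, user, and resource names here are hypothetical, and a real implementation would pull these assignments from a directory service like Active Directory rather than hard-coded dictionaries:

```python
# Minimal RBAC sketch: users map to roles, roles map to permissions.
# Every name below is a hypothetical example, not a real resource.

ROLE_PERMISSIONS = {
    "employees":       {"read:intranet"},
    "accounting":      {"read:ledger", "write:ledger"},
    "human_resources": {"read:personnel", "write:personnel"},
}

# Users are assigned to one or more roles, never to raw permissions.
USER_ROLES = {
    "alice": {"employees", "accounting"},
    "bob":   {"employees", "human_resources"},
}

def is_authorized(user, permission):
    """Return True if any of the user's roles grants the permission."""
    roles = USER_ROLES.get(user, set())
    return any(permission in ROLE_PERMISSIONS.get(r, set()) for r in roles)

print(is_authorized("alice", "write:ledger"))    # True  (accounting role)
print(is_authorized("alice", "read:personnel"))  # False (not in HR)
```

Notice that when alice changes jobs, you only update her entry in the user-to-role mapping; the permissions themselves never have to be touched, which is exactly why this model holds up well under high employee turnover.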

Second, we're going to talk about minimum password requirements. Now, when we talk about the password policy itself, this is an administrative control. If we talk about the technical implementation of those requirements, that then becomes a technical or logical control. But for our purposes here, we are going to cover minimum password requirements under administrative controls, because we're talking about the policy itself. Now, a password policy is simply a policy document that promotes strong passwords by specifying an acceptable length, complexity, character classes, history, maximum or minimum age, auditing requirements, and whether passwords may be stored using reversible encryption. Depending on how you configure the password policies within your organization, your user accounts and credentials will be either more or less secure from being compromised by an attacker. Now, for most of us, this is going to be a review, but I recommend you don't skip this video, because there have been some big changes in the world of password policies in recent years.

Now, according to NIST Special Publication 800-63B, the Digital Identity Guidelines, which is considered the source material for what should be implemented in order to have a strong password security policy, some of the traditional elements of a strong password policy are now considered to be deprecated or obsolete. For example, let's consider the length and complexity of a given password. Ever since I started working in IT and cybersecurity, we've been told that you need to use a long, strong, and complex password. This has been defined as at least 8 characters, or at least 12 characters, or at least 16 characters long, and it seems to get longer all the time. Now, in addition to a long password, we've been told to have a complex password. Complexity is defined by having different types of characters being used, such as lowercase letters, uppercase letters, numbers, and special characters. You see, if we just use lowercase letters, then you only have 26 characters in your character set. But if you use lowercase and uppercase, you now have 52 characters in that set. If you add numbers, we now have 62 characters in that set. And if we add symbols, we're now somewhere in the range of 70 to 80 characters in that character set.
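
The arithmetic here is straightforward: the number of possible passwords is the character set size raised to the power of the password length, which is why length ends up mattering far more than complexity. This short sketch just runs the numbers from the lesson:

```python
# Keyspace size = (character set size) ** (password length).
lowercase = 26       # a-z only
mixed_case = 52      # add A-Z
alphanumeric = 62    # add 0-9
full_set = 80        # add symbols (roughly 70-80, per the lesson)

# An 8-character password drawn from the full character set...
print(f"{full_set ** 8:.2e}")    # ~1.68e+15 possibilities
# ...versus a 16-character password of lowercase letters only.
print(f"{lowercase ** 16:.2e}")  # ~4.36e+22 possibilities
```

The 16-character lowercase password has a keyspace roughly 26 million times larger than the 8-character complex one, which is exactly why the newer guidance favors length over forced complexity.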

The more different types of characters we use, the more complex the password can be, and historically this was always considered to make a stronger password. Now, while that's true from a purely mathematical and code-breaking perspective, it neglects one important thing that we have to account for: human nature. If my password is something like !4rQ63%@aHwJ;2XA, it is really long, strong, and complex, but I'm never going to be able to remember it. So most people would end up writing down that password on a notepad, or even worse, in a digital text file on their desktop. Now that long, strong, and complex password is simply out in the open and could be retrieved by anybody, including attackers. For this reason, the new guidance in NIST Special Publication 800-63B states that you should no longer enforce complexity as part of your password policy. Instead, NIST recommends that passwords should be between 8 and 64 ASCII characters in length.

Your users can have a nice long password, and they can use uppercase and lowercase letters, and maybe even some numbers or special characters if they desire, but those should not be required. When it comes to security, having a long string that is not repetitive is just as secure as a complex eight-character password, which is why NIST no longer recommends that complexity rules be enforced. Another change in the latest NIST publication is that maximum password age enforcement is no longer considered a good idea either. If you've been in cybersecurity for a while, you're probably used to seeing most systems configured to require a user's password to be changed every 60 or 90 days, but this is no longer considered a best practice. Why? Well, again, it comes back to human nature. If we're asking people to create these long passwords of up to 64 characters, even if they aren't complex, it's going to be harder for them to memorize them, especially if they have to change them every 60 to 90 days.

So by allowing for a longer password age, you increase the chances that your users will actually memorize their passwords instead of writing them down someplace. Another reason for this change in the enforcement rules is that most people are migrating toward the use of password or credential managers. These allow them to create very long and randomized passwords as their credentials, which leads to overall stronger passwords. And those passwords are going to be securely stored by the password manager in an encrypted format, where they remain secure until the user actually needs them. If your users are using a password manager, there is less need for them to change their passwords every 60 to 90 days, and it minimizes the chances of those passwords being compromised, because users are no longer subject to password reuse; a password manager will use a new and unique password for each and every account or website that the user visits.
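
As a rough illustration of what a password manager is doing under the hood, here is a minimal sketch using Python's standard secrets module to generate a unique, cryptographically random credential per account; the length and character set chosen here are just example values:

```python
import secrets
import string

def generate_password(length=24):
    """Generate a cryptographically random password for one account."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One unique credential per site means a breach at one website
# never exposes the user's other accounts.
print(generate_password())
```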

The other side of the coin when it comes to password age is the minimum age. Minimum password age is a setting that can be enabled in your password policy that requires a certain number of days to pass before a user can reset their password again. If you set this to zero days, your users can immediately change their password; if you set it to one day, they have to wait 24 hours before they can reset their password again. Now, why does this minimum age really matter? Well, it has more to do with the concept of password history than with the age itself. Password history is a setting within your password policy that dictates how many different passwords have to be used before you can return to a previously used password. Let's assume I have a password history setting of five. I set my first password as Password1, a few days go by, and I forget my password. So I use the forgot-password feature to reset my password, and I create it as Password2. But I really want to use my old password of Password1 again. Since I have a password history of five, I can't do that. Instead, I have to keep resetting my password until I have done this five times; then I could reuse my old password of Password1 again. Now, if my minimum password age were set to zero, I could simply reset my password five times in a row and go back to my initial password on the same day. If that password had previously been compromised in a data breach, for example, this would leave my user credentials wide open.
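
Here is a minimal sketch of that interaction between history and minimum age, with hypothetical class and setting names; it shows how a minimum age of zero lets a user cycle straight through the history and back to a compromised password on the same day:

```python
from datetime import datetime, timedelta

class PasswordPolicy:
    """Toy model of password history plus minimum-age enforcement."""

    def __init__(self, history_depth, min_age):
        self.history_depth = history_depth
        self.min_age = min_age
        self.history = []        # previously used passwords
        self.last_change = None  # timestamp of the last successful reset

    def set_password(self, new, now):
        if self.last_change is not None and now - self.last_change < self.min_age:
            return False         # too soon: minimum age has not elapsed
        if new in self.history[-self.history_depth:]:
            return False         # rejected: password is still in the history
        self.history.append(new)
        self.last_change = now
        return True

# With a minimum age of zero, the history offers no real protection:
policy = PasswordPolicy(history_depth=5, min_age=timedelta(days=0))
now = datetime(2023, 3, 15, 9, 0)
policy.set_password("Password1", now)
for i in range(2, 7):
    policy.set_password(f"Password{i}", now)    # five throwaway resets, same day
print(policy.set_password("Password1", now))    # True: old password reused

# With min_age=timedelta(days=1), the second reset above would return False,
# forcing the history to age out over days instead of minutes.
```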

For this reason, it is a good idea to set your password history to a really high number, something like 25, because this will help prevent password reuse. Third, we need to talk about policies and procedures. Policies and procedures enable an organization to operate normally while minimizing its cybersecurity incidents. There are lots of different policies and procedures you can implement, and we've already talked about some of them when we covered operational controls. Policies and procedures allow you to set things like your mobile device management policy, your remote access policy, your password policies, and your role-based access control policies. As a penetration tester, you should recommend that the organization update its policies based upon the vulnerabilities that you found during your engagement.

Fourth, we need to talk about the secure software development life cycle. The software development life cycle is the way that organizations create their own applications. So if you've done an engagement against an organization that creates its own applications, such as web apps, mobile apps, or desktop apps, you might be able to provide some recommendations as remediation actions for any vulnerabilities you found during your engagement. Now, the goal of the software development life cycle is to create a methodology to predictably identify all the requirements for a given piece of software, to develop it, to test it, to release it, and to support it throughout its life cycle. The software development life cycle consists of eight steps. First, plan and initiate the project. Second, gather the requirements. Third, design the software. Fourth, develop the software. Fifth, test and validate the software. Sixth, release and maintain the software. Seventh, certify and accredit the software. And eighth, perform change management and configuration management for the software. Now, during the planning and initiation phase, the organization formally begins to develop the plans for the project. As a security professional, it is imperative that your interests are represented during this phase, as you assess the expected results of the software being planned and determine what protections should be put into place to safeguard the application and its associated data.

Now, additionally, all the data being handled by the software should be classified and documented, and this information is crucial to creating the security requirements and functionality needed for that given piece of software. Next, the requirements are gathered from across the organization or its customers. Basically, this phase is going to be designed to answer the question: what will this piece of software need to do in order to be considered successful? As security professionals, our goal during this phase is to ensure that each and every security requirement is being captured, so the software can be written to account for those requirements within its functionality.

The third step is to design the software. At this point, the software designers and the security professionals are going to be working closely together to determine the design of the application and how it's going to achieve the desired functionality and security. The fourth step is to develop the software, and this is the portion of the life cycle where the programmers and coders begin to actually write the code that's going to enable all the functionality that's been planned and designed so far. The fifth step is to test and validate the software. This is a very important step, and security professionals need to be involved here. This includes not just the functionality testing that the programmers are going to conduct, but also the vulnerability assessments and penetration testing that are conducted by your security team. Once tested, the software should also be validated in a simulated live environment within your lab, which can mimic the true production environment prior to release.

Validation testing is used to verify that a system or application meets the functional and security requirements that were defined by the customer in the requirements document. Acceptance testing, on the other hand, is used as the final acceptance of the system or application by your end users. Both of these types of testing are extremely important for verifying the utility and warranty of the system or application. For example, if your application meets all the functional and security requirements but it doesn't get accepted by your end users, then guess what? It's useless. In addition to conducting validation and acceptance testing for your systems and applications, you can also use validation and acceptance testing for the different security controls you implement within your organization. Each security control being implemented should be tested for its validity and also needs to be accepted by your end users; otherwise, they might attempt to bypass those new security controls you put in place.

Software is often going to be developed in parts and pieces, and each of these pieces should be individually tested using unit testing. Unit testing provides us with test data, checks for input validation, verifies proper outputs, and ensures that each piece is functioning properly and securely. Once all of the units have been tested individually, they need to be brought together for integration testing. Integration testing focuses on the end-to-end experience in order to validate and accept the complete system. This type of testing is focused on functionality and security, not on your user experience. Instead, user acceptance testing focuses on the customer and the end-user experience. This testing ensures that they are satisfied with the product, that it meets their needs, and that they properly understand how to use the new product. User acceptance testing is going to be critical to getting widespread adoption of any new product, service, or application.
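
To give a flavor of what unit testing for input validation looks like in practice, here is a minimal sketch using Python's built-in unittest module. The sanitize_username function is a hypothetical unit under test, not something from a real product:

```python
import unittest

def sanitize_username(raw):
    """Hypothetical unit under test: allow only letters, digits, and dots."""
    cleaned = raw.strip().lower()
    if not cleaned or not all(c.isalnum() or c == "." for c in cleaned):
        raise ValueError("invalid username")
    return cleaned

class TestSanitizeUsername(unittest.TestCase):
    def test_valid_input_is_normalized(self):
        self.assertEqual(sanitize_username("  Alice.Smith "), "alice.smith")

    def test_injection_attempt_is_rejected(self):
        # A unit test verifying input validation from a security angle.
        with self.assertRaises(ValueError):
            sanitize_username("alice'; DROP TABLE users;--")

if __name__ == "__main__":
    unittest.main()
```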

The only real disadvantage to conducting this testing is that it can add cost to your product and time to your release schedule. Another form of testing is known as regression testing. With regression testing, any changes to the baseline code are evaluated to ensure they do not reduce the security or functionality of the current product. This type of testing is crucial to ensure that any new changes are being properly integrated, that the product quality remains high, and that there are no adverse side effects. The final form of testing is known as peer review. Under peer review, developers review each other's code for efficiency, functionality, and security issues. This is a much more thorough review than an automated testing method, but it is more time consuming and more costly. And this brings us to our sixth step.

This is where the software is fielded and released into the production environment, and at this point, the operations team takes over responsibility for the software. This is the phase of the life cycle where the majority of the software's life will occur. The operations team provides support to the end users of the software and maintains the software in a functional state. If a security issue is discovered, a patch will have to be created and released to maintain the security of that software. The seventh step is to certify and accredit the software, which ensures the software meets the security needs of its customers. The eighth and final step is to conduct change management and configuration management. This occurs whenever a change to the functionality, security, or release date of the software has to be made. This is a formalized process to ensure the integrity and security of a given configuration item, such as a system, a server, or a piece of software, and to ensure that it is properly maintained over time.

186. System Hardening (OBJ 4.2)

In this lesson, we're going to talk about two key terms: hardening and patching. Now, when I talk about system hardening, this is the process by which a host or other device is made more secure through the reduction of that device's attack surface area. What do I mean by an attack surface? Well, the attack surface is made up of all the surfaces and interfaces that allow a user or program to communicate with a target system. Every one of those exposed services is something that somebody could potentially attack. And that's the idea of system hardening: I want to close down as many of those services as I can if I don't need them, because that hardens my system and reduces the attack surface. Now, any service or interface that is enabled through the default installation and left unconfigured should be considered a vulnerability.

So you should scan for those services, identify them, and then mitigate or remediate them as part of your system hardening. Now, when we start talking about system hardening, I have this wonderful system hardening security checklist. These are 10 major areas that you need to check when you're trying to harden a given system. First, you need to remove or disable devices that are not needed or used. For example, are you using Wi-Fi? If not, disable it and take out the Wi-Fi card. Are you using a CD-ROM or a floppy drive? If not, take those things out. Anything you don't need, you should remove or disable, because anything you don't need is another thing that's open and could be used by an attacker, and that makes it part of your larger attack surface. By removing it, you harden your system and reduce the attack surface. Second, you want to install operating system, application, firmware, and driver patches regularly. If Microsoft knows there's a vulnerability in Windows and they put out a patch on Patch Tuesday, you should be downloading that patch, testing that patch, and then deploying that patch across your network, because you want to make sure you are patched regularly and up to date with the latest security fixes. Once a patch is out there, bad guys will usually reverse engineer it and create an exploit, so you need to make sure you are patched to prevent those exploits from being effective against your systems. Third, you want to uninstall all unnecessary network protocols. Now, what I mean here is not necessarily your Wi-Fi, but instead all of those network protocols and services that might be running on your system.

Are you running a web server? If you're not, close port 80. Are you running a mail server? If not, close port 25. Are you running an SSH server? If not, close port 22. A standard workstation really should have no ports open, unless you have something open for a tool like a host-based intrusion detection system to be able to send and receive its reports. Other than that, everything should be pretty much locked down on a workstation. On a server, you should only have the ports open for the services you need. So if you're running a file server, a web server, or an e-mail server, those ports should be open, but everything else should be closed. Fourth, we want to uninstall or disable all unnecessary services and shared folders. Anytime you have something that is open, shared, or running as a service, that is again something that is increasing your attack surface. You're seeing the common theme here: anything you're not using, go ahead and uninstall it or disable it. I prefer to uninstall it, because that way nobody else can re-enable it; but if you can't uninstall it, then you should at least disable it. Fifth, you need to enforce access control lists on all system resources. This means if you have local system files or folders, shared files and folders, or printers, all of those things have to be controlled using the appropriate access control lists. And we'll talk more about access control later on as we define the four different types, but for now, just remember that you want to enforce access control using the appropriate ACLs. Sixth, we want to restrict user accounts to the least privilege needed. You'll hear this concept a lot in security: always use least privilege. If you can do something with a user account instead of an admin account, go ahead and use the user account. If you're a regular user, do you need to have admin rights? No, you just need to be able to access the computer and run your programs. And so, you'll have these different levels like guests, users, super users, and admins, and you only want to use admin accounts when you have to, because there's a high level of privilege associated with them, and using them everywhere would also increase your attack surface.
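
Before we move on to number seven, here is a quick way to sanity-check items three and four. This is a minimal sketch that connect-scans a few well-known ports on the local host using only Python's standard socket module; the port list is just an example, and on a hardened workstation every one of these should come back closed:

```python
import socket

# Well-known ports worth justifying or closing on a hardened host.
PORTS = {22: "SSH", 25: "SMTP", 80: "HTTP", 445: "SMB"}

for port, service in PORTS.items():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        is_open = s.connect_ex(("127.0.0.1", port)) == 0
        status = "OPEN - justify it or close it" if is_open else "closed"
        print(f"{port:>5} ({service}): {status}")
```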

Next we have number seven, which is to secure the local admin or root account, and one of the ways to do this is by renaming it and changing the password. Everybody knows that on a Linux system the root account is called root, and everybody knows that on a Windows system the administrator account is called administrator. So you should disable those two accounts and instead create another super user account that is called something else. Instead of calling it administrator, I might call it something like JasonADM; or instead of calling it root, I might call it root123. Whatever you can do to make it at least a little bit harder for the attacker is a good thing. And then always make sure you change the default password. If you keep the default password of T-O-O-R for root, which is root spelled backwards, you are going to get hacked really, really quickly.

And so you want to keep those things in mind and always change those passwords. Number eight, you want to disable unnecessary default user and group accounts. Again, if you're not using it and you don't need it, you should go ahead and disable it. This also helps harden your system and reduce your attack surface. Number nine, we want to verify the permissions on system accounts and groups. This is because of things like permission creep, where people gain permissions over time and never get those permissions taken away. For example, I worked at one company for almost a decade, and every time I moved positions, they added different accesses. They said, "Oh, well, now you work in accounting, so you need access to the accounting share drives," but you might still have your access to the human resources files, because you were in human resources last. And then you move on to the tech side, and now you've got all three accesses. What should happen is that every time you move to a new department, your old permissions get taken away and only the permissions you need get added. This is the idea of verifying the permissions on the accounts and on the groups (see the audit sketch after this checklist), and it should be done routinely, whether that's monthly or quarterly, against your entire system, to make sure everyone has the right permissions for what they need. And number 10, you always want to make sure you're installing anti-malware software and updating its definitions automatically and regularly. Just having antivirus software on your computer is not good enough; it needs to check every single day for the latest updates and be scheduled to automatically run its scans. This will help keep your system protected. If you do these 10 things, your system is going to be pretty strong and pretty well hardened.
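
Here is the permission-creep audit sketch mentioned in item nine: a hypothetical comparison of the groups a user actually holds against the groups their current role requires. The role and group names are made up, and a real audit would pull memberships from your directory service:

```python
# Hypothetical role-to-group mapping; a real audit would query
# a directory service such as Active Directory instead.
ROLE_REQUIRED_GROUPS = {
    "human_resources": {"employees", "hr_files"},
    "accounting":      {"employees", "accounting_share"},
    "it":              {"employees", "it_admins"},
}

def audit_user(current_role, actual_groups):
    """Return any groups the user holds beyond what the current role needs."""
    required = ROLE_REQUIRED_GROUPS.get(current_role, set())
    return actual_groups - required

# A user who moved HR -> accounting -> IT but kept every old membership:
extra = audit_user("it", {"employees", "hr_files", "accounting_share", "it_admins"})
print(extra)  # {'hr_files', 'accounting_share'} -> permissions to revoke
```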

Now, the other thing you need to consider is how you are going to harden your systems against availability attacks. Remember, we have three sides of the triangle: confidentiality, integrity, and availability. Everything we just talked about had a lot to do with confidentiality and integrity, with making sure our data is the way we want it to be and that only the right people can read it. But if we want to start focusing on availability, what can we do? If I have a server, it should be powered by a UPS or battery backup. This will make sure that it can stay online even if the power goes down in the facility, and that will give it enough time for your secondary power to come online, which might be a generator, for instance. That's how you can make sure you're protected against availability issues that have to do with power outages. But power outages aren't the only thing we have to worry about. For instance, in my area, we have an issue with our primary internet connection: if power is out in town for more than an hour, we lose that primary connection. So we have a backup internet connection over a cellular modem, and in addition to that, we have a backup microwave connection or a satellite connection. That way we have multiple different paths, so we won't be knocked offline. That's the idea of making sure you're protecting yourself against availability attacks.

Now, I only talked about power and internet here, but there are lots of other threats to your availability, and you need to think through them as you're building out your server farms and your systems, because making them resilient against these availability attacks is one of the ways you can harden those systems. Now, the last thing I want to talk about here is the second part of this topic. We talked about hardening; now we're going to talk about patching, and this comes down to patch management. Patch management involves identifying, testing, and deploying operating system and application updates. As I mentioned, patches are there to help you fix security bugs. When Microsoft knows there's a bug in their software, they're going to release a patch.

You need to identify the software you have that needs to be patched, you need to make sure that you test each patch before installing it, and then you need to deploy it across your network so everything gets updated. These patches are often classified as critical, security-critical, recommended, or optional. If it's a critical or security-critical patch, you should make sure you get it out quickly; if it's something that's optional, you can probably wait a little while. And again, this all goes back to your risk appetite and risk management. Now, when you're trying to conduct patch management at the enterprise level, you're going to need some sort of patch management tool suite. There are lots of different tool suites out there, but two of the most common are made by Microsoft: System Center Configuration Manager, or SCCM, and Microsoft Endpoint Manager. These are designed to support Microsoft systems, with some ability to detect things on other systems as well, but they're primarily focused on Microsoft systems. Now, one of the things you have to be aware of when you're dealing with patch management is that patching is actually an availability risk in itself, because when I install a patch on a critical system, that system often needs to be rebooted.

When I do that, it might take five, 10, or 15 minutes to reboot that server, and that means the server is down for that time. So you need to make sure you're planning when these patches are going to go out; you can't just do it in the middle of the workday. You're going to have to have a downtime window or a maintenance window to be able to install those patches on critical systems if you don't have a fully redundant network built out. Luckily, most of our organizations have moved to the cloud now, and most of us have a fully redundant network built out, so we can take a single server offline, patch it, and then bring it back up. But if you're still working with some of these older legacy systems, you may have to reboot the system manually and you may not have a backup, and so that is an availability risk that you have to consider. Finally, when we talk about patches, you have to remember that patches don't always exist. You might have a piece of software or a system that's really, really old, and the manufacturer doesn't even exist anymore; they've gone out of business. In that case, there is no patch available.

Instead, you'll have to use compensating controls. So if you're looking for patches that don't exist, for things like legacy systems, proprietary systems, ICS/SCADA, or Internet of Things systems and devices, you may have to either take that thing off the network, if you can accept that business risk, or put in compensating controls to overcome the fact that you can't patch that vulnerability. Now, what do I mean by this? Let's say you had an older network file-sharing system that requires port 445 to be open for it to be able to share those files. Well, we don't want to have port 445 open to the internet, because that would be a vulnerability. So, if I can't patch this software against a given vulnerability, a compensating control is to make sure that this file server is only available internally to the network, and I can block it at the firewall so nothing outside the firewall can reach this file server. By doing this, I put a compensating control in place, such as blocking port 445 from the internet, and that can solve the problem of an exploit over port 445 that this older proprietary system may be vulnerable to.
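
If you want to verify that a compensating control like this is actually in effect, a quick reachability check from outside the perimeter is one option. This is a minimal sketch using Python's standard socket module; the hostname is a placeholder, and you would run it from an external vantage point:

```python
import socket

def smb_reachable(host, timeout=3.0):
    """Return True if TCP/445 accepts a connection from where this runs."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, 445)) == 0

# Run from OUTSIDE the firewall: the compensating control is only working
# if this prints False. "fileserver.example.com" is a placeholder name.
print(smb_reachable("fileserver.example.com"))
```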
