1. Software Applications (OBJ 1.3)
In this portion of the course, we’re going to discuss software applications in your enterprise architecture. Throughout this section, we’re going to be focused on Domain One’s security architecture, and specifically on Objective 1.3. Given a scenario, you must integrate software applications securely into an enterprise architecture. Now, we’re going to begin this section by discussing the system development lifecycle and the software development lifecycle, which cover how a system or piece of software is going to be maintained from its initial creation through operations and finally into retirement and disposal. Then we’re going to move into discussions around the different development approaches that you’re going to use, such as SecDevOps, Agile, waterfall, spiral, versioning, and continuous integration and continuous delivery pipelines.
Next, we’re going to discuss software assurance topics, like sandboxing, third-party libraries, the DevOps pipeline, code signing, and security testing. After that, we’ll cover baselines and templates, as well as secure design patterns, container APIs, secure coding standards, and much more. We’re also going to discuss some best practices and different considerations for integrating your enterprise applications. As you can see, we have a lot to cover in this section of the course. But before we do, let’s talk briefly about the three ways you can design secure systems and software: secure by design, secure by default, and secure by deployment. Now, with secure by design, the application is designed with security in mind from the beginning. Applications are considered truly secure when even if the source code was known, they would still be secure because they were properly designed and coded.
Now, with “secure by default,” the application is considered secure when it’s installed without any changes to its settings. For example, many server operating systems are fairly secure when they’re first installed, but if we choose to allow the installation of additional optional features, that security becomes reduced over time. Now, with secure by deployment, the application is secured by accounting for the environment into which it is going to be installed. For example, even if a piece of software is not itself inherently secure, it may become secure because of all the additional layers of defence that our organisation has built into its network architecture. So remember, just because you created something that is secure by design, it doesn’t mean it will always be secure, even after it’s installed and deployed into your production environment. This is an important concept to remember as you begin to try to maintain a secure enterprise architecture within the real world. So let’s get started in this section of the course with our discussion of software applications.
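To make the “secure by default” idea concrete, here is a minimal Python sketch that checks whether a server’s current settings have drifted away from a hardened default baseline, for example after someone installed optional features. The setting names and values are hypothetical, purely for illustration.

```python
# Hypothetical secure-by-default baseline check. The setting names and
# values below are illustrative only, not taken from any real product.
SECURE_DEFAULTS = {
    "remote_root_login": False,
    "guest_account_enabled": False,
    "telnet_service": False,
    "automatic_updates": True,
}

def check_baseline(current_settings):
    """Return the names of settings that drifted from the secure baseline."""
    return [
        name
        for name, secure_value in SECURE_DEFAULTS.items()
        if current_settings.get(name) != secure_value
    ]

# A server where someone enabled Telnet as an "optional feature":
drifted = check_baseline({
    "remote_root_login": False,
    "guest_account_enabled": False,
    "telnet_service": True,
    "automatic_updates": True,
})
print(drifted)  # ['telnet_service']
```

Running a check like this on a schedule is one simple way to spot the gradual erosion of a system’s out-of-the-box security posture.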
2. Systems Development Life Cycle (OBJ 1.3)
In our organizations, we often have requirements to create something new. When this occurs, we may have to go out and buy a new system, or we might have to develop one from the ground up in order to meet our internal needs as an organization. During the development of this new system, there are many different decisions and choices that are going to have to be made. Decisions span the gamut from the agreed-upon functionality of that system to the details of the system’s security requirements and even the end-user experience that is going to occur during the system’s use. Whenever you’re developing a system, though, you have to do it in a methodical and logical order. If you begin to develop a system haphazardly or simply bolt on additional components here and there, you’re going to have large security gaps that are going to be difficult for you to protect. The systems development lifecycle, or SDLC, is the process that occurs from the initial idea of a system to its development, its release into operations, and finally its retirement. So whenever you begin to develop a system, you should always think back to the five simple steps of the systems development lifecycle: initiate, acquire or develop, implement, operate and maintain, and dispose. By performing each of these five steps in order, you’re going to be assured that your organisation has at least considered each portion of the system throughout its entire lifecycle, and this way you’ll have a better idea of how to defend it. After all, it is a lot cheaper to design security in from the beginning of your development of the system than to try to add it after the system has been deployed into operations.
Now, the first step is to initiate the system’s development. During the initiation phase, the basic system requirements are suggested and agreed upon by key stakeholders. These requirements might be for a new feature, a new security improvement, or some other new user experience. Regardless, decisions have to be made on how that new functionality is going to be achieved. Do we suggest that our organisation build a new system from the ground up, or are we going to purchase an off-the-shelf solution? The second step is to acquire or develop the proposed system. If the decision is made to acquire an off-the-shelf solution, a risk assessment should be conducted on the proposed software or system. This risk assessment should consider the confidentiality, integrity, and availability concerns that may exist with that given solution. If a decision is made to develop the system in-house, then it’s going to be important for us to determine if our organisation has the capability and has enough resources to adequately develop both the function of the system and the security needs for that system. The third step is to implement the solution.
During this stage, it’s going to be important that the solution is fully tested, evaluated, and then transitioned into our live production environment. This is also the stage at which certification and accreditation will take place. During certification, the solution’s effectiveness and security are going to be verified from a technical perspective. During accreditation, the solution receives formal authorization to be placed in the production environment. Additionally, during the transition, our users and operational staff will have to be trained on this new system. Now, during step four, the system is in operation. During the operations and maintenance phase, the service desk is going to handle requests from end users, and the organisation is now able to fully utilise the new system. The majority of time in any system development lifecycle will be spent in the operate and maintain stage.
This stage is also the most expensive in your system’s development, representing nearly 70% of the total cost of the system in most cases. Now, the fifth and final step is the disposition phase. At this point in the system’s life cycle, the system is considered no longer needed. The functions that were performed by that system have either been stopped or transferred to a new system. The servers and software that were associated with the system have to be disposed of properly, though. If they’re not, they can represent a large vulnerability for your organization. So once the system has fulfilled its useful operational life, what should you do with it? Should you reuse that asset or should you dispose of it? Well, this is really a question about your risk tolerance and how your organisation views its security posture. Asset disposal is going to occur whenever a system is no longer needed.
This disposal might require that the system be destroyed or that the asset be reused for another purpose. In organisations that require a high level of security for their data, it has become commonplace for data storage devices to be electronically or physically destroyed. For example, if you are using tapes for backups, those tapes might be burned or shredded when they’re no longer needed. If your organisation is using hard drives for storage, these can be destroyed using a data destruction process known as degaussing. Degaussing is going to expose the hard drives to a powerful magnetic field, which in turn causes previously written data to be wiped from the drive, and then that hard drive becomes blank once again. I’ve even seen organisations physically destroy their hard drives to keep their data safe by drilling through the platters. It really does depend on your risk tolerance. NIST Special Publication 800-88 provides us with another option, though. If you want to be able to reuse those hard drives and still maintain the security of your organization, it’s recommended that the hard drive be overwritten with a series of zeros at least three times prior to reusing it. By doing this, the data that was originally written on that hard drive becomes essentially unreadable. Another common option is to encrypt the data on the hard drive and then destroy the encryption key.
Again, this makes the data on the hard drive essentially unreadable. Now, the major security concern that we’re trying to deal with here is the idea of data remnants. For example, let’s say I have an old laptop and I want to sell it to another person. I want to make sure that the other person can’t access any of the data that was previously stored on the hard drive of that laptop, things like my social security number or my banking information. So in order to do this, I could remove that old hard drive, but that would make the laptop essentially unusable. Instead, I could sanitise that hard drive by using the overwrite procedure I talked about earlier, and then we could install a new operating system on it. As long as I overwrote every single sector of that hard drive multiple times, the fear of data remnants being recovered would be mitigated and downgraded to a low enough level of risk. Often, an organisation will reuse an asset, such as a server, a router, or another system. Again, the idea of data remnants is one that has to be addressed.
If I’m going to take a server that was previously used for accounting and then provide it to the marketing department, I need to make sure that all the data remnants have been removed by using overwriting procedures. But if the system was being used in a test lab by the web developers and they wanted to reuse it for a new project for testing another website, it would maybe only be necessary for us to remove some of the files that were previously there for some of those applications. Again, this is a risk-tolerance decision. As with many of the things we discussed, there is no right or wrong answer to whether the asset should be destroyed or reused. This is going to be a decision that you have to make as a security professional based upon the cost, the business case, and the security issues involved.
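As a rough illustration of the overwrite procedure described above, here is a small Python sketch that overwrites a file’s contents with zeros across multiple passes. This is only a demonstration of the concept: actually sanitizing a drive per NIST SP 800-88 means operating on the entire device with purpose-built tools, since file systems and SSDs can keep copies of data outside the one file you overwrite.

```python
import os

def overwrite_file(path, passes=3):
    """Overwrite a file's bytes with zeros, multiple passes.

    Conceptual demo of the NIST SP 800-88 overwrite idea only; real
    drive sanitization targets the whole device with dedicated tooling."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(b"\x00" * size)   # zero out every byte
            f.flush()
            os.fsync(f.fileno())      # push this pass out to disk

# Demo: create a file with "sensitive" data, then sanitize it.
with open("secret.txt", "wb") as f:
    f.write(b"SSN: 123-45-6789")
overwrite_file("secret.txt")
with open("secret.txt", "rb") as f:
    data = f.read()
print(data == b"\x00" * 16)  # True - only zero bytes remain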
3. Software Development Life Cycle (OBJ 1.3)
Now, during the planning and initiation phases, the organization is formally going to begin developing the plans for this project. As a security professional, it is imperative that your interests are represented during this phase as you assess the expected results of the software that’s being planned and determine what protections should be put in place to safeguard the application and its associated data. Additionally, all the data being handled by the software should be classified and documented, and this information is crucial to creating the security requirements and functionality needed for that given piece of software. Next, the requirements are gathered across the organisation or from its customers.
Basically, this phase is going to be designed to answer the question, “What will this piece of software need to do in order to be considered successful by security professionals?” Our goal during this phase is to ensure that each and every security requirement is captured so that the software can be written to account for those requirements within its functionality. The third step is to design the software. At this point, the software designers and the security professionals are going to be working closely together to determine the state of the application and how it’s going to achieve the desired functionality and security. The fourth step is to develop the software, and this is the portion of the lifecycle where the programmers and coders begin to actually write the code that’s going to enable all the functionality that’s been planned and designed so far in the lifecycle. The fifth step is to test and validate the software, and this is going to be a very important step, and security professionals need to be involved here. This will include not only the functionality testing performed by the programmers, but also vulnerability assessments and penetration testing performed by your security team.
Once tested, the software should also be validated in a simulated live environment within your lab, which can mimic the true production environment prior to release. Validation testing is going to be used to verify that a system or application meets the functional security requirements that were defined by the customer in the requirements document. Acceptance testing, on the other hand, is going to be used as the final acceptance of the system or application by your end users. Both of these types of testing are extremely important to utilise to verify the utility and warranty of the system or application. For example, if your application meets all the functional and security requirements but doesn’t get accepted by your end users, then guess what? It’s useless. In addition to conducting validation and acceptance testing for your systems and applications, you can also use validation and acceptance testing for your different security controls as you implement them within your organization. Each security control being implemented should be tested for its validity, and it also needs to be accepted by your end users. Otherwise, they might attempt to bypass those new security controls you put in place.
Software is frequently developed in bits and pieces. Now, each of these pieces should be individually tested using unit testing. This unit testing provides us with test data, checks for input validation, verifies proper outputs, and ensures that it is functioning properly and securely. Once all of the units have been tested individually, they need to be brought together for integration testing too. Integration testing focuses on the end-to-end experience in order to validate and accept the complete system. This type of testing is going to be focused on functionality and security, not on your user experience. Instead, user acceptance testing is going to focus on the customer and the end-user experience. This testing ensures that they are satisfied with the product, that it meets their needs, and that they understand properly how to use the new product. User acceptance testing is going to be critical to gaining widespread adoption of any new product, service, or application. The only real disadvantage of doing this is that it may increase the cost of your product and add time to your release schedule. Another form of testing is known as regression testing. With regression testing, any changes to the baseline code are going to be evaluated to ensure they do not reduce the security or functionality of the current product.
This type of testing is crucial to ensure that any new changes we’re putting in are being properly integrated, that the product quality remains high, and that there are no adverse side effects. The final form of testing is known as peer review. Under peer review, developers will review each other’s code for efficiency, functionality, and security issues. This is a much more thorough review than an automated testing method, but it is more time-consuming and more costly. And this brings us to our sixth step. This is where the software is fielded and released into the production environment, and at this point, the operations team can take over responsibility for this piece of software. This is the phase of the lifecycle where the majority of the software’s life will occur. The operations team is going to provide support to our end users of the software and maintain it in a functional state. If a security issue is discovered, then a patch will have to be created and released to maintain the security of that software. The seventh step is to certify and accredit the software, which is going to ensure the software meets the security needs of its customers. The eighth and final step is to conduct change management and configuration management. This occurs when a change to the functionality, security, or release date of the software has to be made. This is a formalised process to ensure the integrity and security of a given configuration item, such as a system, a server, or a piece of software, and to ensure that it is properly maintained over time.
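The unit testing described earlier, including its input validation checks, can be sketched with Python’s built-in unittest module. The sanitize_username function here is a made-up example unit, not from any real product.

```python
import unittest

def sanitize_username(raw):
    """Example unit under test: accept only short alphanumeric usernames."""
    cleaned = raw.strip()
    if not cleaned.isalnum() or not (3 <= len(cleaned) <= 20):
        raise ValueError("invalid username")
    return cleaned.lower()

class SanitizeUsernameTests(unittest.TestCase):
    def test_valid_input_is_normalized(self):
        self.assertEqual(sanitize_username("  Alice42 "), "alice42")

    def test_injection_attempt_is_rejected(self):
        # Input validation: punctuation and spaces must be refused.
        with self.assertRaises(ValueError):
            sanitize_username("alice'; DROP TABLE users;--")

    def test_too_short_input_is_rejected(self):
        with self.assertRaises(ValueError):
            sanitize_username("ab")

# Run the unit tests programmatically and report the overall result.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(SanitizeUsernameTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True - all three unit tests pass
```

Each small unit gets tested in isolation like this before the pieces are brought together for integration testing.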
4. Development Approaches (OBJ 1.3)
There are many different approaches to developing software applications. In this lesson, we’re going to discuss six different software development approaches and concepts: waterfall, spiral, Agile, DevSecOps, versioning, and continuous integration and continuous delivery, also known as CI/CD pipelines. Now, the first one is the waterfall approach. The waterfall approach is an incremental development approach with a rigid methodology where steps are followed in a sequential order. Within the waterfall model, each phase is going to have its own acceptance criteria and milestones before we move into the next phase. Under this model, development is going to move through several phases, including requirements analysis, design, implementation, testing, and maintenance.
Now, there are some security concerns with the waterfall model. One of the biggest issues with this model is that it is very linear and sequential in nature. So when the team moves into the next phase, the previous phase is considered complete and is not revisited. Because of this, projects can take longer, bugs are often discovered later in the process, and more security issues may not be fixed during development and will find their way into your final product more often than if you’re using some of the other methods we’re going to talk about. Another incremental model is known as the spiral model. This iterative approach is going to place more emphasis on risk analysis during each phase, with prototypes being developed during each of the phases. Now each of these prototypes is going to be tested, and the process will look back on itself to fix any critical issues earlier in the process than using a waterfall model. Several phases will be progressed through during spiral development each time we go through the development. First, the objectives are determined, then the risks are identified and resolved. Next, the prototype is developed and tested. Once that product is released, the next iteration is going to be planned, and the development process begins again. The spiral methodology was developed to overcome some of the issues found with the waterfall model.
Instead of waiting until the completion of every phase to test things, this iterative testing is performed throughout all of the phases inside the spiral model. This model allows for capturing requirements quickly and addressing security issues much more rapidly. The standard spiral methodology consists of five phases: planning, risk assessment, engineering and coding, implementation, and evaluation. While the spiral model is a faster model of development than the waterfall model, Agile is an extremely fast one in comparison to both of these approaches. Agile requires less time and energy upfront for analysis and requirements gathering. Instead, there is a greater emphasis on incorporating user feedback in near real time. To be effective, though, Agile does require continuous feedback and cross-functional teamwork to develop your end products. While Agile development is extremely fast because it uses an incremental and iterative approach, it does have some security concerns. In Agile, satisfying the customer is of the highest priority. Because of this, the requirements often change throughout the development process, and this continual changing of requirements can lead to inadequate security testing. Often in Agile development, security concerns are going to be overlooked in favour of working prototypes that can be implemented much more quickly. To increase the level of security in your Agile development, it is really important to embed security experts into your development teams. Prior to releasing any software to your end users, ensure that it has undergone security testing. Now, throughout the history of software development, there have really been three main functions that have to occur: development, quality assurance, and operations. Often, when something didn’t work quite right, these three functional areas would sit back and blame each other.
This resulted in a lack of cooperation between the functions and the teams, which ended up causing delays in the product development.
To overcome this, a new technique known as “DevOps” was created. DevOps is a combination of development and operations. The goal here is to have shorter development lifecycles, quicker releases of software, and more dependable releases. DevOps places the development, quality assurance, and operations functions all into one team to force collaboration across these functions and decrease the time for the deployment of a product. But the idea of DevOps didn’t include security, which led to insecure products and software being fielded into production. To alleviate this challenge, a newer variety of DevOps was created, known as DevSecOps or SecDevOps, depending on where your focus is. DevSecOps stands for Development, Security, and Operations, and it’s an integration of security into every phase of the software development lifecycle as it’s being built, all within a single team and a single functional area. Essentially, we’re going to take the DevOps team and integrate a security professional into that team as well. SecDevOps is slightly different from DevSecOps.
This is where security is going to be placed as your primary driver in every stage of the software development lifecycle. So when it comes down to it, DevSecOps and SecDevOps are fairly similar. It really just becomes a matter of where you place the emphasis within the team. Is your focus on development first or security first? Another important concept in development is versioning. Now, versioning is used to indicate the history of a particular software codebase. Generally, this is done using a numbering system. If you have a version number below 1.0, this is considered a beta version, while numbers at 1.0 or above are going to be considered public release versions, and therefore they have more stable code bases. In versioning, the number to the left of the decimal is considered the major version, like one point something or two point something. The numbers to the right of the decimal point are considered minor versions. For example, let’s say I had a programme that was version 7.2. This is the seventh major version of the software and the second minor update to the version 7 baseline of the codebase.
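The version numbering just described can be sketched in a few lines of Python. This assumes a simplified major.minor format; real-world schemes such as semantic versioning add a third patch number and pre-release tags.

```python
def parse_version(version):
    """Split a simplified 'major.minor' version string into integers."""
    major, minor = version.split(".", 1)
    return int(major), int(minor)

def is_beta(version):
    """Versions below 1.0 are considered beta releases."""
    return parse_version(version)[0] < 1

print(parse_version("7.2"))  # (7, 2): 7th major version, 2nd minor update
print(is_beta("0.9"))        # True  - below 1.0, so a beta version
print(is_beta("1.0"))        # False - 1.0 and above are public releases
```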
Finally, we need to discuss the concept of continuous integration and continuous delivery, also known as the CI/CD pipeline. Now, regardless of the method of development being utilized, it is really important to understand how your organisation is going to integrate the software that’s being developed into your production network and other operational systems. In the old days, code was integrated in a very linear and controlled manner. We would start with development, then start taking that code and putting it together and figuring out what it’s going to do. Then we would move into testing, where we put it into some kind of test environment and made sure it didn’t break anything. After that, we would start the integration process, which means we might be buying new servers or installing all the software to see how it’s going to operate in that full environment. After that, we moved into staging. Now, staging is where we’d actually put things like a set of servers that look like our production environment and make sure everything is ready to go from testing into staging and then from staging into production. Which brings us to our final step: production. Now, this is where that piece of software is actually deployed onto the servers and is going to be used by your end users.
Now, this is a very slow process, especially if you do one step after another after another. And the challenging thing is that these steps were all run by different people in different departments with different functions. So the developers who were part of your program had one role, and then we’d send it over to another team that did testing, and then they sent it to another team that did integration, and then we sent it to another team that did staging. Finally, we sent it to operations, who moved it from staging to production. And because things were moving from one step to the next, there were a lot of handovers and problems, and a lot of internal conflict within your organization. To increase the efficiency of this process, a new method known as CI/CD was created to eliminate a lot of the handoffs and delays in the old methods. When you’re dealing with continuous integration workflows, you have a common source repository. Everybody ends up checking in their code to this common source.
Now this could be your managers, your developers, or whoever’s working on the code. It really doesn’t matter; we’re all touching the same common code base. When you’re ready to start using the software, you can take it from that common source and pull it into a continuous integration server. At this point, it gets built, which means the code is compiled. The server tests the code, and then it tells you whether it succeeded or failed those tests. And then, based on that, it can go back to the developers for the next step, which might be moving into testing, integration, staging, or production. Now, this can all be automated as well by using continuous integration (CI), which is part of the CI/CD pipeline. Now, continuous integration is a software development method where code updates are tested and committed to a development server, a code repository, or both very rapidly.
Now, this allows us to create something to test, and then once we know it’s good, we can say this is ready to be implemented within the environment. Now, by itself, continuous integration doesn’t do that much for us in terms of speeding things up. Yes, we are shifting things left a bit here, and that’s going to let our developers do some more testing, which is always considered a good thing. But continuous integration can do a lot more than that. By using continuous integration, you can actually test and commit updates multiple times per day. So, going from a feature taking nine to twelve months to implement, we can now have systems where we’re doing that continuous integration 5, 10, or even 20 times per day if we need to. But how do we get there so quickly? Well, that’s where we have to bring in continuous delivery or continuous deployment.
Now, continuous delivery is a software development method where the application and platform requirements are frequently tested and validated for immediate availability. Let’s say a developer just created a new feature and tested it using continuous integration. Now that it’s passed the testing, it can go through to continuous delivery, which means it’s going to go through all the tests, all the compliance checks, and all the validations. And now it’s ready to be installed on that staging server or in production. With continuous delivery, though, I’m not actually going to go forward and install it on the staging or production servers. Instead, I’ve done everything up to that point using automated mechanisms, but I’m not actually going to deploy the code onto the server until an actual human gives their approval. But if you want to fully automate this entire process, you can do that using continuous deployment.
With continuous deployment, we take the concept of continuous integration and continuous delivery one step further, because now we have a software development model where application and platform updates are committed to production rapidly. Essentially, I’m going to write some new code, such as a security fix, and then it will automatically go through integration testing. Once it’s been approved, it goes back to the code repository. At that point, we can go through and do continuous deployment, where it actually gets tested and is ready to go into staging. But instead of just sitting there and waiting for somebody to actually install it into staging, with continuous deployment, we can take that code and actually deploy it to staging. And then at regular intervals, maybe every 5 hours or every day, we can move things from staging into production. This allows us to do things much quicker in terms of deployment and release cycles. Now, when I talk about continuous delivery, I want you to remember that continuous delivery is focused on automating the testing of code in order to get it ready for release. It’s not released; it’s just ready for release. But when I talk about continuous deployment, I’m taking it a step further. I’m focusing on automating the testing and releasing of the code in order to get it into the production environment much more quickly.
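To summarise the distinction, here is a hypothetical Python sketch of the pipeline flow; the stage names and the auto_deploy flag are illustrative, not a real CI tool’s API. With auto_deploy off we stop at a human approval gate (continuous delivery), and with it on the release itself is automated too (continuous deployment).

```python
# Hypothetical model of the CI/CD flow described above; stage names and
# the auto_deploy flag are illustrative, not any real CI tool's API.
def run_pipeline(change, auto_deploy):
    """Move one code change through the pipeline stages in order."""
    stages = [
        f"build:{change}",      # continuous integration: compile the code
        f"unit-test:{change}",  # automated tests on every commit
        f"validate:{change}",   # compliance checks and validation
        f"stage:{change}",      # ready on the staging servers
    ]
    if auto_deploy:
        stages.append(f"deploy:{change}")          # continuous deployment
    else:
        stages.append(f"await-approval:{change}")  # continuous delivery
    return stages

print(run_pipeline("security-fix", auto_deploy=False)[-1])
# await-approval:security-fix  (ready for release; a human approves)
print(run_pipeline("security-fix", auto_deploy=True)[-1])
# deploy:security-fix  (released to production automatically)
```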
5. Software Assurance (OBJ 1.3)
In this lesson, we’re going to talk about software assurance. Now, software assurance is a process to ensure applications meet an acceptable level of security for the functions they’re designed to provide. Now, there are many ways to conduct software assurance. These include actions to audit and log the functions performed by the software, the use of standard libraries, the development of software using industry-accepted approaches, the use of web security services, and other techniques. The least formal method of software assurance is to use auditing and logging. In this approach, the enterprise environment is continuously audited and logged to determine if the applications are acting in a secure and appropriate manner. Any unusual actions are going to be investigated, and a root cause will be determined. Most enterprise environments use risk management throughout the organization. This ongoing process is going to continually look at risks, vulnerabilities, and the appropriate mitigations for both of those. Now, this can be another form of informal software assurance, or it can be utilised as part of a larger, more formal methodology. Whenever new software is going to be put into the enterprise environment, it should go through four phases.
The first phase is planning. During this phase, you should develop the needs assessment, develop the requirements, create your acquisition strategy, and develop the acceptance criteria. The second stage is contracting. During this stage, a request for proposals or other supplier solicitation forms are going to be used, and a contract is going to be negotiated. The third phase is monitoring and accepting. This occurs once a contract is in place and the software goes through the change control procedures for installation. The fourth phase is “follow on.” At this point, the software is going to be installed within the organization, and the organisation has to sustain and maintain that software. One method to test your software and ensure it meets assurance requirements is to use sandboxing or a development environment. Now, a sandbox or development environment is a testing environment that isolates untested code changes and outright experimentation from your production environment or repository.
Now, in the context of software development, this includes things like web development, automation, and revision control. This environment is going to usually be created using virtual machines or a virtual network that allows you to build, test, and deploy software during development and testing without any big consequences. This provides us with massive amounts of flexibility as we begin to deploy these programmes to our development and testing environments to verify their functionality and security prior to actually deploying them into our live production networks. Another form of software assurance is the use of standard libraries during the development of applications. Now, standard libraries contain common functions and objects that are used by programming languages, allowing a developer to reuse them without having to code them again from the ground up.
This can drastically reduce the development time, but it is also key to good security. The reason for this is that software libraries have already gone through the software assurance process, and any new programmes built from these trusted libraries tend to be more secure because the libraries themselves have already been tested. Now, application security libraries usually contain functions for input validation, secure logging, encryption, decryption, and authentication as well. In addition to standard libraries, there are also many third-party libraries available for use out there. Third-party libraries provide developers with a way to integrate pretested, reusable software that saves development time and costs because this code has already been created for your use. The challenge, though, is that these libraries may have vulnerabilities associated with them that you’re not aware of. Remember, whenever you use a third-party library, you’re bringing in not just the functionality associated with that library but also the vulnerabilities it contains. So to provide a level of software assurance, your organisation needs to validate third-party libraries prior to using them and keep a list of known and trusted third-party libraries that can be used by all of your developers.
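As a rough sketch of that vetting step, an organisation could keep an allowlist of approved library files and their hashes, refusing anything unknown or modified. The filenames and workflow here are hypothetical illustrations, not a specific tool:

```python
import hashlib

# Allowlist of vetted third-party libraries: filename -> SHA-256 digest.
# Entries are added only after a library passes the organisation's vetting.
TRUSTED_LIBRARIES = {}

def register_trusted(filename: str, contents: bytes) -> None:
    """Record a library's digest once it has been vetted and approved."""
    TRUSTED_LIBRARIES[filename] = hashlib.sha256(contents).hexdigest()

def is_trusted(filename: str, contents: bytes) -> bool:
    """Allow a library only if it is on the list AND its hash still matches."""
    expected = TRUSTED_LIBRARIES.get(filename)
    return expected is not None and hashlib.sha256(contents).hexdigest() == expected
```

Anything not on the list, or whose contents have drifted from the vetted version, is simply rejected.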
Another way to assure the quality of your software is to have a well-defined and documented DevOps pipeline. Now, a DevOps pipeline is a set of automated processes and tools that allows your developers and operations personnel to collaborate on building and deploying code to a production environment. This pipeline will vary depending on your organisation, but in general, you’re going to have several key steps such as committing the code, building the code, unit testing, integration testing, staging, regression testing, and deployment. The DevOps pipeline can be performed manually or it can be automated using the principles of continuous integration, continuous delivery, and continuous deployment. At each step in the DevOps pipeline, there are checks and testing that are going to occur, and if a failure is found, the pipeline is stopped and that feedback is sent back to the developers to work on rectifying that issue or that bug.
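That stop-on-first-failure behaviour can be sketched as a simple sequence of stage checks. The stage names and pass/fail functions here are illustrative placeholders, not a real CI tool:

```python
def run_pipeline(stages):
    """Run named stage functions in order; stop at the first failure and
    return (completed_stages, failed_stage)."""
    completed = []
    for name, stage in stages:
        if stage():
            completed.append(name)
        else:
            return completed, name  # feedback: which stage broke the build
    return completed, None

# Illustrative stages; a real pipeline would invoke build/test tooling here.
stages = [
    ("commit", lambda: True),
    ("build", lambda: True),
    ("unit-test", lambda: True),
    ("integration-test", lambda: False),  # simulated failing check
    ("deploy", lambda: True),
]
```

With the simulated failure above, the pipeline never reaches the deploy stage, and the developers get told exactly which stage failed.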
This leads to higher-quality software with a higher level of assurance. Code signing is another software assurance mechanism that we can utilise for secure software development. Now, code signing is an operation where a software developer or distributor digitally signs the file that’s being sent out. This assures users that they’re receiving software that does what the creator says it will. This signature acts as proof that the code has not been tampered with or modified from its original form as created by the developer. Now, code signing is going to rely on the use of digital signatures. Since only the developer maintains a copy of their private key, they’re able to create a hash of their code and then encrypt that hash using their private key to create this digital signature.
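A simplified sketch of the hashing half of this process might look like the following. Note this only shows the integrity check; real code signing would additionally encrypt the digest with the developer’s private key (for example, with RSA) rather than publishing the bare hash:

```python
import hashlib

def digest(code: bytes) -> str:
    """The developer computes this digest of the release; in real code
    signing it would then be encrypted with their private key to form
    the digital signature."""
    return hashlib.sha256(code).hexdigest()

def verify(code: bytes, published_digest: str) -> bool:
    """The recipient recomputes the digest over what they received; in real
    code signing they would first recover the original digest by decrypting
    the signature with the developer's public key."""
    return hashlib.sha256(code).hexdigest() == published_digest
```

Any modification to the file, even a single byte, produces a different digest and causes verification to fail.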
This serves as a unique verification that the code has not been modified since they deployed it. When you attempt to verify the installation package you receive, you’re going to first check the code for this digital signature, and if it’s valid, this indicates that the code has not been modified and can be trusted for installation if you already trust that developer. In addition to these software assurance techniques, we can also conduct testing of our software during our developmental testing, acceptance testing, or penetration testing. As security professionals, we analyse software code to find vulnerabilities and other issues by using interactive application security testing, static application security testing, and dynamic application security testing. Interactive application security testing analyses code for security vulnerabilities while that app is being run by an automated test, a human tester, or any activity that is interacting with the application’s functionality. The most common way of doing this is by using a fuzzer.
Now, a fuzzer is used to inject invalid or unexpected inputs into an application to determine how it’s going to react to those. This is because many exploits attempt to cause a programme to crash, provide an unexpected output, or return you to a command-line prompt. The process of “fuzzing” is commonly used to find and exploit web application vulnerabilities. By continually injecting pseudo-random data into the program, this software can crash, and then a bug can be detected. Attackers often use two different types of fuzzing. Mutation fuzzing attempts to change the existing input values of a program. Generation-based fuzzing attempts to generate inputs from scratch based on a specified format. The data created by generation-based fuzzing appears to be more random than what you’re going to get from mutation fuzzing. By using fuzzing, attackers are able to attempt a fault injection attack. To prevent a fault injection attack, you should always use fuzzing during application testing to detect errors before an attacker does. You should also adhere to good project management and safe coding practices once that software is accepted.
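A minimal mutation fuzzer along these lines might look like this sketch, which flips random bytes of a known-good seed input and records any input that crashes the target. The toy target used here is just an integer parser, standing in for a real application under test:

```python
import random

def mutate(seed: bytes, n_flips: int = 3, rng=None) -> bytes:
    """Mutation fuzzing: randomly overwrite a few bytes of a known-good input."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    data = bytearray(seed)
    for _ in range(n_flips):
        i = rng.randrange(len(data))
        data[i] = rng.randrange(256)
    return bytes(data)

def fuzz(target, seed: bytes, rounds: int = 100):
    """Feed mutated inputs to `target`; collect every input that crashes it."""
    rng = random.Random(1)
    crashes = []
    for _ in range(rounds):
        case = mutate(seed, rng=rng)
        try:
            target(case)
        except Exception:
            crashes.append(case)  # a bug report: this input broke the target
    return crashes
```

Each crashing input collected here would be handed back to the developers as a reproducible test case.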
It’s also going to be appropriate to deploy an application-level firewall to help detect and prevent fuzzing from occurring on your web applications. Static application security testing, also known as static analysis, is a testing methodology that analyses source code to find security vulnerabilities that make your organisation’s applications more susceptible to attack. Now, static testing of software is conducted by examining the code when it’s not in use and before your code is actually compiled. This is conducted through a code review, which is a method of formal or informal review of the programming instructions. A formal review is a line-by-line inspection of that code that’s conducted by multiple programmers throughout different phases of the software development lifecycle. This is the most effective way to find bugs, but it is also the most time-consuming and expensive. Pair programming, email, over-the-shoulder reviews, and tool-assisted reviews are some of the more informal methods of code review. With pair programming, two programmers are working on one terminal, and they check each other’s coding as it’s being performed.
Code reviews can also be conducted by email, where the programme code is sent to another programmer for review when they get around to it. Over-the-shoulder reviews are when a programmer brings in a reviewer and then explains their code to that reviewer. A tool-assisted review uses automated scanning and testing techniques, and it’s very efficient, but in most cases, it won’t find every single bug in the code. Another type of testing is known as “dynamic analysis” or “dynamic application security testing.” Now, dynamic analysis is an application security solution that can help find certain vulnerabilities in web applications while they’re running in production. The reason this is called “dynamic testing” is because it occurs while a programme is running on a system. Dynamic testing is often assisted through the use of automated tools but can also be performed manually. Unlike a static test, the source code may or may not be available to the actual tester. Instead, they’re relying on providing inputs to a programme and seeing if the output they get matches what they expect. Penetration testers often use dynamic testing as part of their attempts to identify weaknesses to exploit during their assessments.
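As a tiny illustration of the static approach, a tool-assisted scanner can walk a program’s syntax tree without ever executing the code. This sketch simply flags calls to risky built-ins like eval and exec; real static analysers check hundreds of such rules:

```python
import ast

# Built-ins a reviewer would usually want flagged for closer inspection.
RISKY_CALLS = {"eval", "exec"}

def static_scan(source: str) -> list:
    """Walk the abstract syntax tree of `source` (never running it) and
    return (line_number, function_name) for each risky call found."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in RISKY_CALLS):
            findings.append((node.lineno, node.func.id))
    return findings
```

Because the code is only parsed, not executed, this kind of check is safe to run on completely untrusted source.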
6. Baselines and Templates (OBJ 1.3)
Due to the large number of security patches, hotfixes, and updates that can be released, each organization needs to create a standard operating system environment known as a “baseline.” Now, once the operating system is installed, updated, configured, and properly secured, that instance can be exported and saved as the standard image or baseline for all your new installations. This simplifies the process of securing a new machine since all the hard work of configuring it has already been completed. Additionally, security baselines can be established and saved as group policies inside your Windows domain. This allows a system administrator to create a standardized set of configuration settings that may provide the minimum baseline of security for any new installation or current workstation inside your domain. While a brand new installation may be fully secure, over time, those settings may be modified by users or system administrators. In these cases, the use of a security baseline or configuration baseline that can be forced to update all the settings on a machine inside a domain through a group policy is going to be critical for you. It becomes very quick and easy to deploy and install these configurations across every workstation that’s connected to your domain controller.
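Conceptually, checking a workstation against a configuration baseline is just a comparison of its current settings to the approved values. This sketch uses made-up setting names rather than real group policy objects:

```python
# Hypothetical security baseline: setting name -> required value.
BASELINE = {
    "firewall_enabled": True,
    "password_min_length": 14,
    "guest_account_enabled": False,
}

def audit(host_settings: dict) -> list:
    """Compare one workstation's settings against the baseline and return
    the list of (setting, expected, actual) deviations found."""
    deviations = []
    for setting, expected in BASELINE.items():
        actual = host_settings.get(setting)
        if actual != expected:
            deviations.append((setting, expected, actual))
    return deviations
```

In a Windows domain, group policy would then push the expected values back onto any drifted machine rather than just reporting the deviation.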
By utilising a baseline configuration for all devices in the network, we can create a standard operating environment that ensures consistent behaviour across the network and reduces our support costs. Security analysts should perform a weekly assessment of the workstations that are attached to your domain to ensure that they’re all still adhering to that baseline configuration. This same concept can be used when designing secure applications and software by creating secure design patterns for use in your designs. Secure design patterns are meant to eliminate the accidental insertion of vulnerabilities into your code and mitigate the consequences of those vulnerabilities. These design patterns are not fully functional code; instead, they are high-level templates or descriptions of how to solve a given type of problem. A storage design pattern is similar to a secure design pattern, but instead of focusing on the general logic used to solve a problem, the storage design pattern is used to provide a more secure layout for the storage of the data in a web application. These storage design patterns may dictate the type of storage to use, such as a database or a blob, and the basic types of configurations and settings to utilize.
Another type of baseline or template you may want to utilise when you’re developing systems or software is container APIs. Now, a container is a form of operating system virtualization where a single container might be used to run microservice software processes for a larger application. A container API is used to create and manage data containers using an application programming interface, or API. By using a standardised format for commands to initiate, duplicate, and remove a container, you can minimise the vulnerabilities that come from attempting to control the containers using your own custom code. Numerous frameworks have also been developed to help provide structure to the way application security is being developed. But why do we even need application security frameworks in the first place? Well, for one, it’s just a case of having too many options presented to us. There’s a huge diversity in programming languages, application servers, middleware platforms, and specialised security products all being used by our organizations.
Now, how can we go about securing all this diverse technology? To meet the application security requirements, our enterprise should use standard guidance frameworks and approaches. These frameworks are often focused on a specific technology, platform, or end-to-end service, but we have to consider the applications being used as well. This is why application security frameworks are so important. Secure coding standards are another great thing to utilise as a baseline and template. In order to help reduce the attack surface of an application that your team is developing, it’s considered an industrywide best practice to use secure coding standards. Secure coding standards have been developed through a community effort for each of the major programming languages. The Computer Emergency Response Team, or CERT, has led many of the efforts to create secure coding standards. This includes standards for the most frequently used languages like C, C++, Java, Perl, and many others.
The National Institute of Standards and Technology, or NIST, has also provided some guidance toward secure coding standards, but they’re not nearly as in-depth as those provided by CERT. The use of these secure coding standards helps to bake in security from the earliest development of an application. This makes applications much easier to defend, instead of having to bolt security on after the fact. Now, once an application is coded and built, your organisation should have an application vetting process in place. An application vetting process is the process of verifying that an application meets your organization’s security requirements prior to allowing it to be installed or deployed onto a production network. For example, let’s say I started a small startup company and created a new app, and your organisation decides to buy a licence and use it. What are the different steps that you’re going to take before installing it on your servers? How do you know that you can trust the code that I wrote? Will you perform a static or dynamic test against my application prior to installing it? Or are you going to simply download it and run the installer on your production server?
Again, this is a risky decision that you’re going to have to make in your organization. And whatever your application vetting process is, it should be well documented and known by all of your employees. If your organisation also runs its own web platforms, you may also have created your own application programming interfaces, or APIs, for those platforms. If you have, then you need to determine the proper API management for those applications, too. API management is the process of creating and publishing web application programming interfaces, enforcing their usage policies, controlling access, nurturing the subscriber community, collecting and analysing usage statistics, and reporting on performance. For example, my own company, Dion Training, has several different APIs that we manage and use to provide services to our students. We have one API that we use to issue, manage, and track our students’ CompTIA exam voucher status.
We have another API that’s used to provide practice exams to our students on our platform, DionTraining.com. And we have yet another one that allows you to have access to our hands-on, cloud-based labs. All of these different APIs must be managed, controlled, and analysed to ensure they are operating securely at all times so that our students can practise on-the-job security functions in preparation for their certification exams. Finally, we need to discuss middleware. Now, middleware is the software that connects computers and devices to other applications and networks. Middleware provides the integration of your different services, and it connects the different functionality for data transformations, monitoring, and administration that your cloud-based solutions, web applications, and enterprise architectures are going to rely upon. When you’re using middleware, it’s helpful to create a template of baseline configurations for the security of the middleware and the applications that are connecting to it.
7. Best Practices (OBJ 1.3)
In this lesson, we’re going to discuss some best practices in terms of software applications and our enterprise architectures. Now, there are numerous industry-accepted approaches to software assurance and development. The WASC, or Web Application Security Consortium, provides best practices for web-based applications. OWASP, or the Open Web Application Security Project, is a group that maintains a list of the top ten web attacks on a continual basis.
They also provide guidance on how to conduct secure web programming. The OWASP Top Ten includes things like injections, broken authentication, sensitive data exposure, XML external entities, broken access control, security misconfigurations, cross-site scripting, insecure deserialization, using components with known vulnerabilities, and insufficient logging and monitoring. The Build Security In, or BSI, programme is located under the Department of Homeland Security, and it provides additional security recommendations and architectures that programmers can utilise to reduce vulnerabilities, mitigate exploitations, and improve the quality of their software applications. Finally, ISO/IEC 27034 is a standard that provides industrywide guidance on securely developing and maintaining software applications. If your organisation develops web services internally, you should also be aware of the Web Services Security, or WSS, extension to the Simple Object Access Protocol, or SOAP, framework.
WSS adds a security layer for web services that can allow for the digital signing and encryption of SOAP messages as well as methods to utilise security tokens for secure authentication. If you’re involved in application development as a security professional, you also need to be aware of several coding techniques that should always be forbidden from use. Programs should never be allowed to request elevated privileges unless absolutely necessary. Permissions on files and settings should also be set appropriately and at the lowest level. The use of network connections should be strictly monitored, and any unnecessary network ports should always be closed. Also, your programme should not write files in publicly accessible folders unless the user explicitly requests that the programme do that, such as saving a word processing file to their desktop. Finally, it’s also important to ensure that you’re using the proper HTTP headers in your application calls. The OWASP Secure Headers Project describes the different HTTP response headers that your application can use to increase the security of your application whenever it places calls.
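As an illustration of applying such response headers, a web application could merge a set of secure defaults into each outgoing response. The header values here follow common recommendations but are a sketch that should be tuned to your own site:

```python
# Security response headers along the lines the OWASP Secure Headers
# Project recommends; values here are common defaults, not a mandate.
SECURE_HEADERS = {
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
    "X-Frame-Options": "DENY",
    "X-Content-Type-Options": "nosniff",
    "Content-Security-Policy": "default-src 'self'",
}

def apply_secure_headers(response_headers: dict) -> dict:
    """Merge the secure defaults into a response's headers without
    clobbering any header the application set explicitly."""
    merged = dict(SECURE_HEADERS)
    merged.update(response_headers)
    return merged
```

A page that legitimately needs framing by its own origin could, for example, explicitly set X-Frame-Options to SAMEORIGIN, and the helper would respect that override.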
For example, if you use the HTTP Strict Transport Security, or HSTS, web security policy, you can protect your website against protocol downgrade attacks and cookie hijacking. As a result, web servers will be able to declare that web browsers or other compliant user agents should only interact with them via secure HTTPS connections and never via the insecure HTTP protocol. Another commonly used response header is the X-Frame-Options header. This is used to prevent clickjacking from occurring by declaring a policy that communications are only going to be allowed from a host to the client browser directly, without allowing frames to be displayed inside of another web page. If you’re working as a developer for web applications, it is really important that you review the different HTTP headers and response options that are available to you to secure your web applications. Proper documentation is also important. Proper documentation during the development process and during the coding itself is really important to the security of your applications and is also considered a best practice. There are four types of documentation that you should be aware of in the development process: the security requirements traceability matrix, a requirements definition, the system design document, and test plans. The security requirements traceability matrix, or SRTM, is used to document the security requirements that a new application must meet. This document is often formed as a grid or a spreadsheet, and each row provides a requirement,
while each column documents the description of the requirement, the source of the requirement, the objective of the testing of that requirement, and how it’s going to be tested. The SRTM is used by security engineers to ensure that a piece of software meets the level of assurance needed. A requirements definition is used to document each function and security requirement that must be built into an application. Each organisation can choose the proper format for the requirements definition. Some organisations use a simple list, while others might use a more formalised document or contractual format to list all the requirements. In agile development, the requirements definitions are often captured as a user story or a description of the user experience. The third thing we have is the SDD, or system design document. The system design document is used to describe the application and its architecture. The document consists of four parts. The data design is going to be used to document the choice of the data structure and the attributes that each data object will contain. The architecture design is going to describe how data flow will occur along the distinct boundaries between internal and external data sources. The interface design will document the user interface, the internal programme interface, and the external programme interface.
The procedural design section is used to document the details and structures of the programming concepts used by the application. This forms the baseline for any additional software development work that the application needs. The fourth document is known as a test plan. Now, a test plan is used to describe what will be tested in the application and how we’re going to go about performing that testing. There are three key parts to any test plan: the master test plan, the level-specific test plan, and the type-specific test plan. A master test plan is going to be used to unify all the other test plans into a single plan for the entire product or application. A level-specific test plan is going to outline the unit, integration, system, or acceptance test plan. A type-specific test plan is going to be used for a specific issue, such as a security test, an authentication test, or other detailed test. A test plan will identify several key factors. These will include an overview of the test plan, the items to be tested, the approach to use during the testing, the pass-fail criteria for the test, the criteria used to suspend the test, the deliverables of the test itself, the environment to be utilized, the scheduling, cost estimates, staffing needs, risks, assumptions, dependencies, and approvals. All of that is required before that test can begin.
8. Integrating Applications (OBJ 1.3)
Organizations utilise numerous enterprise applications that often need to be integrated together to work properly. These include customer relationship management tools (CRM tools), enterprise resource planning tools (ERP tools), the configuration management database (CMDB), and content management systems (CMS). Now, the Customer Relationship Management, or CRM, system is going to be used to store all the data relating to our organization’s customers. This includes their names, contact information, past purchase history, and even their billing information. In some systems, access to this customer-centric database should be limited to personnel with a need-to-know, such as the sales and marketing teams.
Now, because of the nature of this private information, we should only grant remote access to the CRM to the employees if we first establish a secure VPN connection with them. Next, the ERP, or enterprise resource planning tool, is going to collect and consolidate data from across the organization. It contains information on sales, marketing, inventory management, shipping, service delivery, product costs, future plans, and much more. This tool will help management decide where to direct resources within the organization. Again, we should keep the information contained within this database confidential and highly secured. The ERP application should only be accessible from your secure internal network, not a DMZ, because it should never be publicly accessible. Next, we have the CMDB, or the configuration management database. The configuration management database is going to keep track of every asset in our organization. This includes our servers, desktops, mobile phones, software, facilities, and products. This centralised database maintains the specific configuration of each item at any given point in time. We should always update the information in this database whenever the item’s configuration status has been modified. The CMDB is key to the proper maintenance of the network and its components within the IT service management process. Finally, we have the CMS, or content management system.
The content management system is a centralised repository of organisational information. There are many different CMSs out there, including things like WordPress, Joomla, and Microsoft SharePoint. This server is going to allow the users to create, edit, organize, maintain, and delete information and data in a web-based portal that other users can quickly locate and use, as opposed to a single document on a shared drive. Content management systems allow multiple users to work on a document at the same time and track each of their changes individually. This is really helpful for version control when you’re using shared documents. Now, these four tools—CRM, ERP, CMDB, and CMS—are all applications used by our enterprises. For them to work properly, though, they have to be properly integrated into our organization’s systems. This requires directory services, DNS, and a service-oriented architecture, as well as the Enterprise Service Bus. Now, directory services such as Active Directory, DNS, and LDAP are often going to be used to integrate with other services. For example, the organization’s content management system might utilise Active Directory to determine whether or not to grant a particular user access to a file or resource. The Domain Name System, or DNS, can associate a host name with an IP address.
This makes it easier for employees to access web-based applications and other server resources within the network, like your CMS, for example. If we want users to access our SharePoint server easily, they can open up their web browser and simply type in “sharepoint,” if we have a matching DNS entry created for it. If not, the users would have to memorise the server’s IP address in order to access our SharePoint server. Another key integration technology is the idea of service-oriented architecture, or SOA. Now, SOA focuses on providing services with a single purpose or function. It combines these services to provide the total functionality needed, and these services can be used by other applications as well. With service-oriented architecture, the software focuses on what comes in and what goes out while reusing large amounts of functionality from other services inside this black box. Web applications, databases, and cloud computing environments are all commonly used in SOA. Finally, the Enterprise Service Bus, or ESB, is another integration technology that can be combined with service-oriented architecture. ESB focuses on enabling communication with other applications and protocols such as Java, .NET, and SOAP. Often, we’re going to use an ESB to allow communication between two business partners, from one DMZ to another DMZ.