1. Risk And Testing
We have mentioned before how risk is an important factor in the testing activity. We base our testing effort on the amount of risk involved in delivering the product too early. If the risk is high, then we need to spend more effort testing the software. If the risk is low enough, then we can deliver the software. So what is risk, after all? There are two parts to the definition of risk. The first part: risk involves the possibility of an event in the future which has negative consequences. My friends who are PMP certified might not like this definition, because in the PMBOK a risk may result in future negative or positive consequences, while the ISTQB considers risk as something that may result only in future negative consequences. Risk is used to focus the effort required during testing. It is used to decide where and when to start testing and to identify areas that need more attention. Testing is used to reduce the probability of a negative event occurring, or to reduce the impact of a negative event. So if we are worried that the client will get upset if there is a miscalculation in a report, which is a negative event that might happen, then we can add more testing around that report to make sure we don't miss any major defects. This action will lower the probability or the impact of the negative event.
Risk-based testing draws on the collective knowledge and insight of the project stakeholders to carry out product risk analysis and ensure that the likelihood of product failure is minimized. Risk management activities provide a disciplined approach to analyze and reevaluate, on a regular basis, what can go wrong (which are the risks), determine which risks are important to deal with, implement actions to mitigate those risks, and make contingency plans to deal with the risks should they become actual events. In addition, testing may identify new risks, help to determine which risks should be mitigated, and lower uncertainty about risks. I will try here to give you a risk management course in five minutes. So let's talk first about analyzing what can go wrong: product and project risks. One of the many ways that a project team can identify the risks in their project is to look at the different classifications of risks and ask whether any of those risks could actually happen to them or to their project. We can classify risks into two categories: project risks and product risks.
What is the difference between a project and a product? Easy. A product is the software itself. A project is the set of activities or steps needed to create the product. So product risk is related to the software itself, while project risk is related to how we develop the software. Now, one of the famous exam questions I have seen is to distinguish between the two types of risks. I even got this question in my Advanced Level Test Manager exam. Product risk involves the possibility that a work product (for example, a specification, component, system, or test item) may fail to satisfy the legitimate needs of its users and/or stakeholders. Product risks are associated with specific quality characteristics of a product, for example functional suitability, reliability, performance efficiency, usability, security, compatibility, maintainability, and portability: all eight quality characteristics. Product risks are also called quality risks. Examples of product risks include: software might not perform its intended functions according to the specification; software might not perform its intended functions according to user, customer, and/or stakeholder needs; the system architecture may not adequately support some non-functional requirements.
A particular computation may be performed incorrectly in some circumstances; a loop control structure may be coded incorrectly; response times may be inadequate for a high-performance transaction processing system; user experience (UX) feedback might not meet product expectations. And for the second type of risk, project risks: project risks involve situations that, should they occur, may have a negative effect on the project's ability to achieve its objectives. Examples of project risks include project issues: delays may occur in delivery, task completion, or satisfaction of exit criteria or definition of done; inaccurate estimates, reallocation of funds to higher-priority projects, or general cost-cutting across the organization may result in inadequate funding; late changes may result in substantial rework. We also have, under project risks, organizational issues: skills, training, and staff may not be sufficient; personnel issues may cause conflict and problems.
Users, business staff, or subject matter experts may not be available due to conflicting business priorities. Under project risks we also have political issues: testers may not communicate their needs and/or the test results adequately; developers and/or testers may fail to follow up on information found in testing and reviews (for example, not improving development and testing practices); there may be an improper attitude toward, or expectations of, testing (for example, not appreciating the value of finding defects during testing). Also under project risks we have technical issues: requirements may not be defined well enough; requirements may not be met given existing constraints; the test environment may not be ready on time; data conversion, migration planning, and their tool support may be late; weaknesses in the development process may impact the consistency or quality of project work products such as design, code, configuration, test data, and test cases; poor defect management and similar problems may result in accumulated defects and other technical debt. And last, under project risks, we have supplier issues, where a third party may fail to deliver a necessary product or service, or may go bankrupt, and contractual issues may cause problems for the project. Project risks may affect both development activities and test activities.
In some cases, project managers are responsible for handling all project risks, but it's not unusual for test managers to have responsibility for test-related project risks. Now, for the risk analysis part: the second part of the definition of risk is that the level of risk is determined by the likelihood of the event and the impact of the harm from that event. Level of risk equals the probability of the risk multiplied by the impact of the risk if it did happen. So, for example, say we have two risks. The first is the risk of having a UI issue. The probability of this risk happening is four, on a scale of one to five, one being low and five being high, but the impact of this risk, if it happens, is very low, only one on the same scale. Then the level of risk, or risk score, for this first risk is four multiplied by one, which equals four. A second risk might be a miscalculation in one of the reports. The probability of such a risk is low, say two, but the impact of such a defect would be higher, as the customer would be really upset if he saw such a defect. So the impact might be three.
So the level of risk in this case is two multiplied by three, which equals six. The level of risk for the miscalculation is therefore higher than that of the UI issue. This means that if we have very limited time for testing, then we would concentrate our efforts on testing the report, to lower the probability or the impact of the miscalculation. Identifying risks is an attempt to find the potentially critical areas of the software as early as possible. As I said, there are many ways to identify risks. Any identified risk should be analyzed and classified for better risk management. So now we have a long list of possible risks. We should calculate the risk level for each risk and sort the risks accordingly. That's how we will know where to focus our testing attention. Risk-based testing and product quality: as we have said, risks are used to decide where to start testing, where to test more (making some testing decisions), and when to stop testing.
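To make the arithmetic concrete, here is a minimal Python sketch of the scoring and sorting described above; the 1-to-5 scales and the two risks come from the example, but the exact names and the dictionary layout are just illustrative assumptions, not a standard risk-register format.

```python
# Minimal sketch of risk scoring: level of risk = probability x impact,
# both rated on a 1-5 scale (1 = low, 5 = high).

risks = [
    {"name": "UI issue",                 "probability": 4, "impact": 1},
    {"name": "Miscalculation in report", "probability": 2, "impact": 3},
]

for risk in risks:
    risk["level"] = risk["probability"] * risk["impact"]

# Sort so the highest-level risks come first; this ordering shows
# where to focus a limited testing effort.
for risk in sorted(risks, key=lambda r: r["level"], reverse=True):
    print(f'{risk["name"]}: risk level {risk["level"]}')

# Prints:
#   Miscalculation in report: risk level 6
#   UI issue: risk level 4
```

The higher the computed level, the earlier and more thoroughly that area of the software gets tested.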
Therefore, testing is used as a risk mitigation activity to provide feedback about identified risks, as well as to provide feedback on residual or unresolved risks. A risk-based approach to testing provides proactive opportunities to reduce the levels of product risk. Proactive means that we will not wait until the risk happens to deal with it; rather, we will be ready for it and may even get rid of it before it happens. To summarize what we have learned so far: risk-based testing involves product risk analysis, which includes the identification of product risks and the assessment of each risk's likelihood and impact. The resulting product risk information is used to guide test planning; the specification, preparation, and execution of test cases; and test monitoring and control. Analyzing product risks early contributes to the success of the project. In a risk-based approach, the results of product risk analysis are used to: determine the test techniques to be employed; determine the particular levels and types of testing to be performed (for example, security testing, accessibility testing, and so on); determine the extent of testing to be carried out; prioritize testing in an attempt to find the critical defects as early as possible; and determine whether any activities in addition to testing could be employed to reduce risk (for example, providing training to inexperienced designers). Now that we have analyzed our risks and prioritized them, what we need to do is manage and handle those risks by lowering their risk levels. This is beyond the scope of the ISTQB Foundation course, but here you are.
There are four ways we can handle or respond to risks. One, avoid: do whatever is needed to make the risk level zero, meaning either making the probability zero or making the impact of the risk zero. Let's imagine a risk: we have heard rumors that one of the team members, let's call him Jack, might move to another company. To avoid such a risk, you would not assign Jack to your project in the first place and would get someone else, so the impact would be zero. Two, mitigate: mitigate means that you will lower the risk level, and you can achieve this by either lowering the likelihood or lowering the impact of the risk. So what should we do with Jack? You can lower the likelihood of him moving by giving him a promotion or a salary increase.
Or you can lower the impact by giving him only minor tasks to work on. The third action you can take to handle risks is transfer, meaning moving the risk from your side to another side. You might ask Jack's manager to assure you that if Jack leaves the company for any reason, then he will be responsible for finding you another resource with the same qualifications, or you might outsource the whole job to another company.
And four, accept: you accept the risk. You can accept the risk passively, by simply waiting for the risk to happen and seeing what to do then, or you can accept it actively, by putting in place a plan to be executed in case Jack leaves the company, like planning a two-week handover from Jack to a new resource. This is called a contingency plan. Wow, that was a tough risk management course in, I hope, five minutes, so I hope you liked it. It's clear that any project risk will later affect the product itself, so the objective of all our risk management efforts is, in the end, to reduce the risk to the product.
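As a small illustration of the four responses, here is a hypothetical Python sketch; the enum values restate the options above, while the dataclass fields and the numbers for the "Jack" risk are my own assumptions, not part of any standard.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Response(Enum):
    AVOID = "avoid"        # drive probability or impact to zero
    MITIGATE = "mitigate"  # lower probability and/or impact
    TRANSFER = "transfer"  # move the risk to another party
    ACCEPT = "accept"      # live with it, passively or with a contingency plan

@dataclass
class Risk:
    description: str
    probability: int                         # 1-5 scale
    impact: int                              # 1-5 scale
    response: Response
    contingency_plan: Optional[str] = None   # used with active acceptance

    @property
    def level(self) -> int:
        return self.probability * self.impact

# Active acceptance of the "Jack leaves" risk, with a contingency plan.
jack_leaves = Risk(
    description="Jack moves to another company mid-project",
    probability=3,
    impact=4,
    response=Response.ACCEPT,
    contingency_plan="Plan a two-week handover from Jack to a new resource",
)
print(jack_leaves.level, jack_leaves.response.value)  # 12 accept
```

Choosing avoid, mitigate, or transfer instead would change the probability, impact, or owner of the risk rather than just recording a fallback plan.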
2. Independent Testing
We say that testing tasks can be done by anyone. They may be done by people in a specific testing role or by people in another role, for example customers. The relationship between the tester and the test object has an effect on the testing itself. By the relationship we mean how psychologically attached the tester is to what he is testing. This relationship represents how dependent or independent the tester is from the test object. A certain degree of independence often makes the tester more effective at finding defects, due to the differences between the author's and the tester's mental biases. We talked about mental biases when we were discussing the psychology of testing in the first section of this course. Independence is not, however, a replacement for familiarity, and developers can efficiently find many defects in their own code. In this lecture we will elaborate on how independence affects the test management of the software project. The approaches to organizing a test team vary from one company to another and from one project to another. What we are trying to achieve here is to understand that testing independence should be taken into consideration when organizing testing. Degrees of independence in testing include the following, from a low level of independence to a high level.
On one side of testing independence lies a developer, with low independence, who tests his own code. And by the way, notice that low independence is equivalent to high dependence, so please take care when I mention dependence or independence. So again, on one side of testing independence lies a developer, with low independence, who tests his own code. A little higher in independence is a tester from within the development team; this could be developers testing their colleagues' products. Then comes the independent test team inside the organization, reporting to project management or executive management. Then independent testers from the business organization or user community, or testers with specializations in specific test types such as usability, security, performance, regulatory compliance, or portability.
And on the very other side, with very high independence, lie independent testers external to the organization, a third party or a contractor, either working on site (insourcing) or off site (outsourcing). Independent testing is surely a good thing, but that doesn't mean we should only consider highly independent testers. So let's look at each type of tester from the independence point of view and see the pros and cons of adding that type of tester to the testing team. First, the developer, the author of the code.
Should we allow him to test his own code even though he is highly dependent on the code? The pros of using the developer for testing are: he knows the code best, he will find problems that the testers will miss, and he can find and fix faults cheaply. The cons of using the developer for testing are: it is difficult to destroy your own work (it's his own baby, after all), a tendency to see expected results rather than actual results, and subjective assessment. So let's consider a tester from the development team other than the developer. The pros are: an independent view of the software (more independent than the developer), dedicated to testing (not coding and testing at the same time), and part of the team working toward the same goal, which is quality. The cons are: lack of respect (he's a buddy), and a lonely, thankless task.
He's the only tester on the project; corruptible (peer pressure); and a single view or opinion (again, he's the only tester on the project). Then comes the independent test team, whose main job is testing. The pros: a dedicated team that does nothing but testing, specialist testing expertise, and testing that is more objective and more consistent. The cons are: the over-the-wall syndrome (there's a wall between the developers and the testers, our department versus your department, and there could be some politics issues as well), it may become confrontational, and over-reliance on testers (developers become lazy about testing, depending on the testers to do the job for them). What about the specialized testers, either from the user community or with a specialization in a specific testing type such as security, performance, and so on?
Sure, they are the top specialists in their field, but they need good people skills, and communication could be very tough with the developers. Last, with the highest independence and lowest dependence, comes the third-party organization, where we outsource the testing of the software to another organization. The pros: highly specialist testing expertise (if outsourcing to a good organization, of course) and independence from internal politics. The cons: lack of product knowledge (they don't know what they are testing and are not from the same industry), the expertise gained goes outside the company, it could be expensive (actually, it is expensive), and confidential information may be leaked from inside the organization to the third-party organization. Therefore, the idea is to get as much as possible from the pros of independent testing and avoid as much as you can of the cons. For most types of projects, especially complex or safety-critical projects, it's usually best to have multiple levels of testing, with some or all of the levels done by independent testers. Development staff may participate in testing, especially at the lower levels, so as to exercise control over the quality of their own work. We should consider asking the users to help with the testing, and we should also consider asking testing subject matter experts to test the critical parts of the application or software if needed, and so on. In addition, the way in which independence of testing is implemented varies depending on the software development lifecycle.
For example, in agile development, testers may be part of a development team. In some organizations using agile methods, these testers may be considered part of a larger independent test team as well. In addition, in such organizations, product owners may perform acceptance testing to validate user stories at the end of each iteration. To summarize, potential benefits of test independence include: independent testers are likely to recognize different kinds of failures compared to developers because of their different backgrounds, technical perspectives, and biases.
An independent tester can verify, challenge, or disprove assumptions made by stakeholders during specification and implementation of the system. For example, if a developer assumes that a value should be in a specific range, then the tester will verify this assumption and will not take it for granted. Potential drawbacks of test independence include: the more independence, the more isolation from the development team, leading to a lack of collaboration, delays in providing feedback to the development team, or a confrontational relationship with the development team. Developers may lose a sense of responsibility for quality.
Many times I have heard developers say that they should not test their own code because it's the testers' responsibility, which of course is not right at all (and I'm saying that in the nicest possible way). Independent testers may be seen as a bottleneck or blamed for delays in release. Independent testers may lack some important information about the test object. Still, many organizations are able to successfully achieve the benefits of test independence while avoiding the drawbacks, so let's all hope we can do the same.