8. Test Estimation
When we put together a plan, we need to estimate the effort needed to execute it. We can use the estimated effort to derive other elements, like the time needed, the number of resources needed, and the budget needed. There are many techniques to estimate the different elements needed for the plan, but the ISTQB syllabus mentions only two: the metrics-based and the expert-based techniques. Let's look at each one of them. The Metrics-Based Approach. To understand this approach, let's take an example. If your previous project was 1000 hours long, and the testing effort in that project was 300 hours out of the 1000 hours, and the new project is 2000 hours, can you estimate the testing effort? Yes, it will be around 600 hours. Now, do you think you can estimate the number of defects that you can expect to find in the new project? Actually, yes.
If you found 500 defects in the 1000-hour project, then you would expect the defects in the new project to be around 1000. But you know your development team now has the knowledge and expertise of this software domain, so you would expect the defects to be fewer than 1000, say 800 defects. However, there's a module in the new software that will use a new technology you have never dealt with before, so you can raise the estimate to around 850 bugs. In this technique, we used collected data and some sort of equations to estimate the project. The way I see it, we used some information about our history (how many bugs were found in the 1000-hour project), some information about our present (the knowledge of the current development team), and some information about our future (the expectation of the complexity of a specific module). The sketch below illustrates this arithmetic.
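To make the arithmetic concrete, here is a minimal Python sketch of this metrics-based calculation, using the numbers from the example. The adjustment factors for team experience and module complexity are illustrative assumptions, not values prescribed by the syllabus.

    def estimate_test_effort(prev_project_hours, prev_test_hours, new_project_hours):
        """Scale testing effort by the historical test-to-project ratio."""
        ratio = prev_test_hours / prev_project_hours
        return new_project_hours * ratio

    def estimate_defects(prev_defects, prev_hours, new_hours,
                         team_experience_factor=1.0, complexity_factor=1.0):
        """Scale historical defect counts, then adjust for present and future factors."""
        baseline = prev_defects * (new_hours / prev_hours)  # history: 500 * 2 = 1000
        adjusted = baseline * team_experience_factor        # present: team knows the domain
        return adjusted * complexity_factor                 # future: a risky new module

    print(estimate_test_effort(1000, 300, 2000))            # -> 600.0 hours
    # 0.8 models the experienced team, 1.0625 the unfamiliar technology module
    print(estimate_defects(500, 1000, 2000, 0.8, 1.0625))   # -> 850.0 defects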
Other kinds of data that can be estimated include the number of test conditions, the number of test cases written, the number of test cases executed, the time taken to develop test cases, the time taken to run test cases, and the number of defects found. The accuracy of this technique will heavily depend on the accuracy of the collected data. For sequential projects, defect removal models are examples of the metrics-based approach. This is similar to the one I was just explaining, where volumes of defects and the time to remove them are captured and reported, which then provides a basis for estimating future projects of a similar nature. For agile projects, burndown charts are examples of the metrics-based approach, as effort is captured and reported and is then used to feed into the team's velocity to determine the amount of work the team can do in the next iteration (see the sketch below). You don't need to understand or know the details about these specific techniques for the exam; just knowing that there are techniques called defect removal models and burndown charts is enough for now. The expert-based approach depends on using the experience of some stakeholders to derive an estimate. In this context, experts could be business experts, test process consultants, developers, analysts, and designers; anyone with knowledge about the application to be tested or the tasks involved in the process. For sequential projects, the Wideband Delphi estimation technique is an example of the expert-based approach, in which groups of experts provide estimates based on their experience.
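As a rough illustration of the burndown idea, here is a minimal Python sketch of feeding past iteration results into a velocity figure. The story-point numbers are invented for illustration.

    from statistics import mean

    # Work actually completed in past iterations, captured from burndown charts
    # (story points are one common unit of work in agile teams).
    completed_points = [21, 18, 24]

    # Velocity: the average amount of work the team finishes per iteration.
    velocity = mean(completed_points)
    print(f"Velocity: {velocity:.1f} points per iteration")  # -> 21.0

    # The team would then plan roughly this much work for the next iteration.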
For agile projects, Planning Poker is an example of the expert-based approach, as team members estimate the effort to deliver a feature based on their experience (a small sketch follows below). Again, you don't need to know details about specific techniques for the exam; just knowing that there are techniques called Wideband Delphi and Planning Poker is enough for now. Details of those techniques can be found in the ISTQB Agile Extension Syllabus or the ISTQB Advanced Test Manager Syllabus. The question that comes to mind now is: which technique is better than the other? Again, wrong question.
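Here is a minimal Python sketch of one Planning Poker round, assuming a Fibonacci-like card scale. Real Planning Poker repeats rounds of discussion until the team converges; the names and card values here are invented for illustration.

    from statistics import median

    # Each team member privately picks a card for the same user story.
    cards = {"Alice": 5, "Bob": 8, "Carol": 5, "Dave": 13}

    # The lowest and highest estimators explain their reasoning,
    # then the team discusses and plays another round.
    low = min(cards, key=cards.get)
    high = max(cards, key=cards.get)
    print(f"{low} ({cards[low]}) and {high} ({cards[high]}) explain their reasoning.")
    print(f"Median this round: {median(cards.values())} points")  # -> 6.5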
Each technique has its own pros and cons, so we had better use both techniques and let them confirm each other. Factors Influencing the Test Effort. Test effort estimation involves predicting the amount of test-related work that will be needed in order to meet the objectives of testing for a particular project, release, or iteration. Factors influencing the test effort may include characteristics of the product, characteristics of the development process, characteristics of the people, and the test results. Let's look at each one of those in detail. Product characteristics: the risks associated with the product, the quality of the test basis, the size of the product, the complexity of the product domain, the requirements for quality characteristics (for example, security and performance), the required level of detail for test documentation, and requirements for legal and regulatory compliance. Development process characteristics: the stability and maturity of the organization, the development model in use, the test approach, the tools used, the test process, and time pressure. People characteristics: the skills and experience of the people involved, especially with similar projects and products (for example, domain knowledge), and team cohesion and leadership (how the team works together as a team). Test results: the number and severity of defects found and the amount of rework required; the more defects found, or the more rework required, the higher the effort estimate.
9. Test Control
The purpose of test monitoring is to gather information and provide feedback and visibility about test activities. Information to be monitored may be collected manually or automatically. A plan won't mean anything without monitoring the execution of that plan. Test monitoring can serve various purposes during the project, including: giving the test team and the test manager feedback on how the testing work is going, allowing opportunities to guide and improve the testing and the project; providing the project team with visibility about the test results.
Measuring whether the exit criteria, or the testing tasks associated with an Agile project's definition of done, are satisfied, such as meeting the targets for coverage of product risks, requirements, or acceptance criteria; gathering data for use in estimating future test efforts; and, above all, proving that the plan itself is right and that following it will eventually lead to the test objectives. Now let's look at metrics used in testing. Metrics can be collected during and at the end of test activities in order to assess progress against the planned schedule and budget, the current quality of the test object, the adequacy of the test approach, and the effectiveness of the test activities with respect to the objectives. Common metrics include: the percentage of planned work done in test case preparation (or the percentage of planned test cases implemented); the percentage of planned work done in test environment preparation; test case execution (for example, the number of test cases run/not run, test cases passed/failed, and/or test conditions passed/failed); defect information (for example, defect density, defects found and fixed, failure rate, and confirmation test results; you will learn more about defect density in the quiz); test coverage of requirements, user stories, acceptance criteria, risks, or code; task completion, resource allocation and usage, and effort; and the cost of testing, including the cost compared to the benefit of finding the next defect, or the cost compared to the benefit of running the next test. The sketch below computes a few of these metrics.
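Here is a minimal Python sketch computing a few of the metrics just listed. All numbers are invented for illustration; defect density is shown per thousand lines of code, which is one common convention.

    planned_cases = 200           # test cases planned
    implemented_cases = 150       # test cases written so far
    run, passed = 120, 100        # execution results so far
    defects_found, kloc = 45, 30  # defects logged; size in thousands of lines of code

    print(f"Cases implemented: {implemented_cases / planned_cases:.0%}")  # -> 75%
    print(f"Cases executed:    {run / planned_cases:.0%}")                # -> 60%
    print(f"Pass rate:         {passed / run:.0%}")                       # -> 83%
    print(f"Defect density:    {defects_found / kloc:.1f} defects/KLOC")  # -> 1.5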
Test Reporting. Test reporting is about summarizing and communicating test activity information to project stakeholders, both during and at the end of a test activity or a test level. The purposes of test reporting are: notifying project stakeholders about test results and exit criteria status; helping stakeholders understand and analyze the results of a test period; helping stakeholders make decisions about how to guide the project forward; and assuring that the original test plan will lead us to achieve our testing goals or objectives. The ISO standard ISO/IEC/IEEE 29119-3 refers to two types of test reports, test progress reports and test completion reports (also called test summary reports), and contains structures and examples for each type.
The test report prepared during a test activity may be referred to as a test progress report; so, during the test activity, it's called a test progress report, while a test report at the end of a test activity may be referred to as a test summary report. During test monitoring and control, the test manager regularly issues test progress reports for stakeholders. When the exit criteria are reached, the test manager issues the test summary report; this report provides a summary of the testing performed, based on the latest test progress report and any other relevant information.
Typical test progress reports and test summary reports may include: a summary of the testing performed, where we identify all relevant support materials such as test items, environment, and references, so that the reader of the report knows which version and release of the project or software is being reported on; information on what occurred during the test period; deviations from the plan (what is different from the plan), including deviations in the schedule, duration, or effort of test activities; the status of testing and product quality with respect to the exit criteria or definition of done; factors that have blocked or continue to block progress; metrics on defects, test cases, test coverage, activity progress, and resource consumption; residual risks (the remaining risks that we haven't handled yet); and reusable test work products produced. To make this concrete, a rough sketch of such a report follows.
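Here is a minimal Python sketch that assembles the elements above into a simple progress report. Every field name and value is invented for illustration; a real report's structure would follow your organization's template or ISO/IEC/IEEE 29119-3.

    report = {
        "test object":      "OnlineShop v2.3, build 145, QA environment",
        "period":           "2024-05-01 to 2024-05-07",
        "progress vs plan": "60% of planned test cases executed (planned: 70%)",
        "deviations":       "2 days behind schedule; payment module delivered late",
        "blocking factors": "test environment was down for one day",
        "defect metrics":   "12 open, 30 fixed, 2 critical",
        "residual risks":   "performance under peak load not yet tested",
        "next period":      "regression tests for the payment module",
    }

    for field, value in report.items():
        print(f"{field:16}: {value}")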
In addition to the content common to test progress reports and test summary reports, typical test progress reports may also include: the status of the test activities and progress against the test plan; factors impeding or blocking progress; the testing planned for the next reporting period; and the quality of the test object. The contents of a test report will vary depending on the project, the organizational requirements, and the software development lifecycle. For example, a complex project with many stakeholders or a regulated project may require more detailed or formal reporting than a quick software update. In Agile development, test progress reporting may be incorporated into task boards, defect summaries, and burndown charts, which may be discussed during a daily stand-up meeting. You can learn more about those terms in the ISTQB Agile Syllabus. In addition, if we were using risk-based testing, then stakeholders would expect to see the updated list of product and project risks, the responses, and the effectiveness of those responses. If we were using requirements-based testing, then we could measure coverage in terms of requirements or functional areas. In addition to tailoring test reports based on the context of the project, test reports should be tailored based on the report's audience.
The type and amount of information that should be included for a technical audience or an Agile team may be different from what would be included in an executive summary report. For the technical audience, detailed information on defect types and trends may be important in the report. When targeting executives, a high-level report may be more appropriate; executives love one-page or one-slide presentations, which might include elements like a status summary of defects by priority, budget, schedule, and test conditions passed, failed, or not tested. Test Control. If you have heard of Murphy's Law, then you know that hardly anything goes as planned. Risks happen, the customer changes their mind every now and then, the stakeholders interfere, software crashes, the market changes, staff quit, and so on. When plans don't execute the way we want, control is needed to get things back on track. Test control describes any guiding or corrective actions taken as a result of information and metrics gathered and possibly reported. Actions may cover any test activity and may affect any other software lifecycle activity. Consider the following example: a module or component will not be ready on time for testing. Test control might involve reprioritizing the tests so that we start testing against what is available now. Or suppose you discovered that most of the executed test cases have failed, which results in too many defects logged against the software.
After investigation, you discovered that the easy test cases are the ones that run first. Test control could be to tighten the entry criteria for testing, as it seems that developers don't do proper unit testing. Examples of other test control actions include: reprioritizing tests when an identified risk occurs (for example, software delivered late); changing the test schedule due to the availability or unavailability of a test environment or other resources; re-evaluating whether a test item meets an entry or exit criterion due to rework; adjusting the scope of testing, perhaps the number of tests to be run, to manage the testing of late change requests; and, as I said, tightening the entry criteria. Corrective actions taken do not have to be testing related. For example, descoping functionality, removing some less important planned deliverables from the initially delivered solution to reduce the time and effort required to achieve that solution, or delaying release into the production environment until the exit criteria have been met.