10. Experience-based Techniques: Error Guessing
When applying experience-based test techniques, the test cases are derived from the testers' skill and intuition, and from their experience with similar applications and technologies. These techniques can help identify tests that are not easily identified by other, more systematic techniques such as black-box and white-box techniques. Even when specifications are available, it is worthwhile supplementing the structural tests with some that you know from experience have found defects in other, similar systems. Depending on the tester's approach and experience, these techniques may achieve widely varying degrees of coverage and effectiveness.
Coverage can be difficult to assess and may not be measurable with these techniques. These techniques are often combined with black-box and white-box test techniques. Common characteristics of experience-based test techniques include the following: test conditions, test cases, and test data are derived from a test basis that may include the knowledge and experience of testers, developers, users, and other stakeholders. This knowledge and experience includes the expected use of the software, its environment, likely defects, and the distribution of those defects.
There are different kinds of experience-based test design techniques, but the most commonly used are error guessing, exploratory testing, and checklist-based testing. Let's start with error guessing. Error guessing is a technique used to anticipate the occurrence of mistakes, defects, and failures based on the tester's knowledge, including how the application has worked in the past, what types of mistakes developers tend to make, and failures that have occurred in other applications.
A more methodical, structured approach to the error-guessing technique is called a fault attack. Here, we create a list of possible mistakes, defects, and failures, and design tests that will expose those failures and the defects that caused them. These mistake, defect, and failure lists can be built over time, based on experience, defect and failure data, and common knowledge about why software fails. Error guessing is not intended to be used by itself; rather, it should support other techniques. The effectiveness of the error-guessing technique depends heavily on the tester's experience.
I have seen companies capture their testers' experience by putting such best practices, best test cases, or common mistakes in a spreadsheet and sharing it with each other. I have also seen testers use error guessing based on their knowledge of the developer who wrote the code: they know that John always gets confused by boundary values, or that Ashok makes a lot of spelling mistakes. These lists can be used as a starting point and can be expanded using the testers' and users' experience of why the application under test in particular is likely to fail. Even if your company doesn't have such a list, you can create one for yourself. Its purpose is simply to avoid falling for the same bug again and again. You should learn from your mistakes.
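The fault list described above can be turned directly into executable checks. Here is a minimal sketch in Python: the function under test (`parse_quantity`) and the specific entries in the fault list are hypothetical, but the pattern of driving tests from an experience-based list of inputs that have broken similar systems is the point.

```python
# A minimal sketch of a fault attack: an experience-based list of inputs
# that have caused failures in similar systems, run against the code.
# parse_quantity and the fault list entries are illustrative assumptions.

def parse_quantity(text):
    """Parse a quantity field; accepts integers from 1 to 100."""
    value = int(text.strip())  # raises ValueError on non-numeric input
    if not 1 <= value <= 100:
        raise ValueError("quantity out of range")
    return value

# Fault list built over time: (input, expected exception or None if valid).
FAULT_LIST = [
    ("", ValueError),      # empty input has crashed similar forms before
    ("abc", ValueError),   # non-numeric text
    ("0", ValueError),     # just below the lower boundary
    ("101", ValueError),   # just above the upper boundary
    ("  7 ", None),        # surrounding whitespace should be tolerated
]

def run_fault_attack():
    """Return the list of inputs whose behavior did not match expectations."""
    failures = []
    for text, expected_error in FAULT_LIST:
        try:
            parse_quantity(text)
            if expected_error is not None:
                failures.append(text)  # expected an error, got none
        except Exception as exc:
            if expected_error is None or not isinstance(exc, expected_error):
                failures.append(text)  # unexpected error type
    return failures
```

A run of `run_fault_attack()` returning an empty list means the code survived every entry on the fault list; any surviving entry points at a defect worth investigating.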
11. Exploratory Testing
If your company gave you the software and asked you to find bugs in it, and that's it, it turns out there is a testing technique for that, called exploratory testing. So your company is not so bad after all. Exploratory testing is mentioned in the syllabus in several places, but I prefer to talk about it in one place here, to avoid introducing new terminology without knowing exactly what we are talking about. In exploratory testing, we use the tester's experience to test the software without going through the cycle of writing test conditions, test cases, test procedures, and so on; instead, we just sit down and try to break the software. During exploratory testing, the tester can incorporate other black-box, white-box, and experience-based techniques. So, theoretically speaking, the tester here does test design, test execution, test logging, and system evaluation all at the same time: concurrently. The keyword here is concurrently. The results of the exploratory tests are used to learn more about the component or system at hand and to create tests for the areas that may need more testing. Exploratory testing work products may be created during test execution, though the extent to which exploratory tests are documented may vary significantly.
To give exploratory testing more structure, it is sometimes conducted using what we call session-based testing. In session-based testing, the exploratory testing is conducted within a defined time box, or fixed duration. The tester may use test session sheets to document the steps followed and the discoveries made. In session-based testing, the tester uses what we call a test charter. A test charter contains test objectives to guide the testing, to help maintain focus on the most critical areas, what kinds of defects to look for, and so on. This helps ensure that the most severe defects are found. The test charter can be produced as part of the test analysis stage. Exploratory testing is most useful when there are few or inadequate specifications, or significant time pressure on testing. Exploratory testing is also useful to complement other, more formal testing techniques. Exploratory testing is strongly associated with reactive test strategies. We will talk more about reactive test strategies in the test management section, but, as the name suggests, it means that the testing results guide us on how to continue our testing: we are reacting to the test results.
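To make the charter and session sheet concrete, here is a minimal sketch of both as data structures. The field names, the 90-minute default time box, and the example session are illustrative assumptions, not a standard format; real session sheets are often just structured text documents.

```python
# A minimal sketch of a test charter and a session sheet for session-based
# testing. Field names and example values are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TestCharter:
    mission: str                 # what to explore and why
    critical_areas: List[str]    # areas to maintain focus on
    defect_types: List[str]      # kinds of defects to look for
    time_box_minutes: int = 90   # fixed session duration

@dataclass
class SessionSheet:
    charter: TestCharter
    steps: List[str] = field(default_factory=list)        # steps followed
    discoveries: List[str] = field(default_factory=list)  # bugs, questions, ideas

    def log(self, step: str, discovery: Optional[str] = None) -> None:
        """Record a step and, optionally, a discovery it produced."""
        self.steps.append(step)
        if discovery is not None:
            self.discoveries.append(discovery)

# Example session (hypothetical application under test):
charter = TestCharter(
    mission="Explore the checkout flow for payment-handling defects",
    critical_areas=["card validation", "order confirmation"],
    defect_types=["boundary errors", "unclear error messages"],
)
sheet = SessionSheet(charter)
sheet.log("Submitted an expired card", "No error message was shown")
sheet.log("Cancelled payment mid-transaction")
```

After the time box expires, the discoveries feed back into planning: they become new test ideas or new charters, which is exactly the "reacting to the test results" loop described above.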
12. Checklist-based Testing
We have mentioned checklists before, as a tool that can be used while conducting interviews. The same concept can be applied in checklist-based testing. Think of a checklist like a to-do list: a reminder of the test conditions that you need to consider while testing the software. In checklist-based testing, therefore, testers design, implement, and execute tests to cover the test conditions found in a checklist. A checklist can be generic or specialized. Generic checklists can be used for all types of software to verify general software or component properties, such as making sure there is a default button in a dialog, or that the text cursor is blinking in the first field when you initially open a dialog.
On the other hand, specialized checklists also exist, for example for testing database applications or websites. As an example, let's consider a checklist for testing an image-upload functionality:
- a check of the image upload path;
- a check for uploading images with different extensions, such as JPEG or BMP;
- a check for uploading images with the same name;
- a check that the image is uploaded within the maximum allowable size and, if not, that an error message appears;
- a check that the bar showing the progress of the image upload appears;
- a check of the functionality of the cancel button during the image upload;
- a check for uploading multiple images;
- a check of the quality of the uploaded image;
- a check that the user can save the image after the upload process.

So, as you can see, you could have multiple checklists for a variety of purposes. A checklist can be created to support various test types, including functional and non-functional testing. In the absence of detailed test cases, checklist-based testing can fill the gap by providing guidelines and a degree of consistency. As these are high-level lists, some variability in the actual testing is likely to occur, resulting in potentially greater coverage but less repeatability. As part of test analysis, testers create a new checklist or expand an existing checklist, but testers may also use an existing checklist without modification. So during test analysis we create, or decide on, the checklist to use. Such checklists can be built based on experience, knowledge about what is important for the user, or an understanding of why and how software fails. I have read books that say checklist-based testing is usually used by test leaders, but I prefer the opposite opinion: a checklist can help new testers integrate better into the organization, as they have ready-made guidelines in place to start testing on a project with confidence. Checklists also have to be updated over time to comprehensively cover testing of all the new aspects of products with similar functionality.
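Part of such a checklist can be made executable. The sketch below is a hypothetical illustration: the `validate_upload` function, the allowed extensions, and the 5 MB limit are assumptions standing in for the real upload logic, and only the checklist items that can be automated at this level are included (visual items, like the progress bar, would still be checked by hand).

```python
# A minimal sketch of checklist-based testing: each checklist item is paired
# with a small executable check. validate_upload and its rules are hypothetical.

ALLOWED_EXTENSIONS = {"jpeg", "jpg", "bmp", "png"}
MAX_SIZE_BYTES = 5 * 1024 * 1024  # assumed 5 MB limit

def validate_upload(filename, size_bytes):
    """Return (ok, message) for an image upload request."""
    ext = filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    if ext not in ALLOWED_EXTENSIONS:
        return False, "unsupported extension"
    if size_bytes > MAX_SIZE_BYTES:
        return False, "file too large"
    return True, "ok"

# Checklist items mapped to executable checks; each returns True on pass.
CHECKLIST = {
    "accepts JPEG and BMP extensions": lambda: (
        validate_upload("cat.jpeg", 100)[0] and validate_upload("cat.bmp", 100)[0]
    ),
    "rejects unsupported extensions": lambda: (
        not validate_upload("cat.exe", 100)[0]
    ),
    "shows an error for files over the maximum size": lambda: (
        validate_upload("cat.jpg", MAX_SIZE_BYTES + 1) == (False, "file too large")
    ),
}

def run_checklist():
    """Run every check and report pass/fail per checklist item."""
    return {item: check() for item, check in CHECKLIST.items()}

results = run_checklist()
```

Items that come back `False` are the gaps to investigate, and new items can be appended to `CHECKLIST` as the list is updated over time, mirroring how the paper checklist grows.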
13. Choosing Test Techniques
Wow, it's been a long ride learning all those testing techniques. I hope you have enjoyed it as much as I did. We have learned about black-box testing, white-box testing, and experience-based testing, and it's good to mention that there are many other techniques, but they are beyond the scope of the ISTQB Foundation Level. With so many test techniques to choose from, how do testers decide which ones to use? The choice of which test techniques to use depends on a number of factors. Regulatory standards: some industries have regulatory standards or guidelines that govern the testing techniques used; for example, in some countries, health-related applications require specific types of testing, like boundary value analysis. Customer or contractual requirements: I have seen customers specifically ask the testing team to include white-box testing. Level of risk: if the level of risk is high, as in safety-critical systems, then we should do more detailed, formal testing. Type of risk: sometimes the risk is not about defects but about getting the software to market on time; in this case, exploratory testing would be the answer.
So the type of risk helps decide which technique to use. Type of component or system: the type of system (for example, embedded, graphical, financial, and so on) will influence the choice of techniques. For example, a financial application involving many calculations would benefit from boundary value analysis. Test objectives: if, for example, the test objective is simply to gain confidence that the software works under typical circumstances, then use case testing would be a good approach. If the objective is to test the system thoroughly, then more rigorous and detailed techniques, including structure-based (white-box) techniques, should be chosen.
Documentation available: if we have specifications and models, then we can use black-box testing. If we have source code, then we can use white-box testing. If we have nothing, then we can only use the experience of the testers to derive test cases. Knowledge and skills of the testers: if the testers can read code, then we can use white-box testing; if they can't read code, then we cannot. Time and budget: if we have the time, then we can do any kind of testing, but if we don't have enough time, then exploratory testing may be our only choice.
Development lifecycle: a sequential lifecycle model will lead to the use of more formal techniques, whereas exploratory testing would be a better choice for iterative lifecycle models. Experience of the types of defects found: since some techniques are good at finding particular types of defects, knowledge of the likely defects is very helpful in choosing the testing technique. Component or system complexity: complex systems require more advanced test techniques. Available tools: if we already have tools that support a particular kind of test technique, then we had better take advantage of them. Expected use of the software: developing software for your own company differs from creating software for the public, and hence different test techniques will be chosen. Previous experience with using the test techniques on the component or system to be tested: if we have used state transition diagrams in our specifications, then we might use state transition testing; if our specifications were written using use case models, then we might use use case testing; and so on.
Exam questions here can be a little tricky. They will give you a few factors and ask which ones help decide the testing technique to use. The tricky part is that they will blend in a few factors that might look right but are not correct ones, for example, the knowledge of the development team, the cost of the tools used, and so on. So watch out for those tricky ones. Some techniques are more applicable to specific situations and test levels; others are applicable to all test levels. When creating test cases, testers generally use a combination of test techniques to achieve the best results from the test effort. The use of test techniques in the test analysis, test design, and test implementation activities can range from very informal (little to no documentation) to very formal. The appropriate level of formality depends on the context of testing, including the maturity of the test and development processes (the higher the maturity, the higher the formality) and time constraints.
Less time forces us to be less formal. Safety or regulatory requirements: the more regulatory or safety requirements there are, the more formal we should be. The knowledge and skills of the people involved, and the software development lifecycle model being followed, also play a role. Now, one of the questions I am often asked by my students is: which technique is best? Well, this is the wrong question, because each technique is good for certain things and not as good for others. It's important to understand that there is no single best testing technique.
Each testing technique is good at finding one specific type of defect. So using just a single technique will help ensure that many defects of that particular type are found, but it will also ensure that many defects of other types are missed. So it's better to use a variety of techniques, which will help ensure that a variety of defects are found, resulting in more effective testing. From experience, the best practice for designing test cases is: start with black-box testing techniques to cover the functionality as much as possible; then add non-functional test cases as needed; then check statement and decision coverage (or whatever white-box coverage measure you use) and add as many test cases as needed to raise the coverage to an acceptable level; and last, use your experience to add test cases as necessary. Thank you.