CTFL-2018: ISTQB Certified Tester Foundation Level 2018 Certification Video Training Course Outline
Introduction
2018: Fundamentals of Testing
2018: Testing Throughout the Software Development Lifecycle
2018: Static Testing
2018: Test Design Techniques
2018: Test Management
2018: Tool Support For Testing
CTFL-2018: ISTQB Certified Tester Foundation Level 2018 Certification Video Training Course Info
Gain in-depth knowledge for passing your exam with the Exam-Labs CTFL-2018: ISTQB Certified Tester Foundation Level 2018 certification video training course. The most trusted and reliable name for studying and passing with VCE files which include ISTQB CTFL-2018 practice test questions and answers, study guide and exam practice test questions, unlike any other CTFL-2018: ISTQB Certified Tester Foundation Level 2018 video training course for your certification exam.
2018: Fundamentals of Testing
7. Concept of Test Coverage in Software Testing
Test coverage is an essential part of software testing. It's defined as a metric that measures the amount of testing performed by a set of tests. By "amount of testing," we mean which parts of the application program are exercised when we run a group of tests. In simple terms, test coverage measures the effectiveness of our testing. When we can count the things in the application, and tell whether the test cases are covering those things, then we can say how much our test cases have covered. For example, if we have four features and we test only three of them, then our test coverage is 75% of the features. Or if we have 1,000 lines of code and our tests have visited or exercised 600 of those lines, then our test coverage is 60%.

So the effectiveness of testing is not measured by the number of test cases, but rather by how much those test cases cover. You can have 100 test cases that cover 50% of your software. You can add 100 more test cases, and test coverage would still be 50%. And you can add just one more test case, which will increase test coverage to 100%. So it's not about the quantity, but the quality of the tests.

So what are the parts of the software that we can measure for coverage? Requirements coverage: is the software being tested against all the requirements? Structure coverage: is each design element of the software, such as classes and functions, being tested? Code coverage: has each line of code of the software been exercised during testing or not?

But how can we know which requirement has been tested? Well, we need some sort of map that shows traces between each requirement and each test. Such traceability will help identify which requirements have been tested by which test case and which requirements have not been tested at all. The same traceability concept can be applied between requirements, design elements, lines of code, test cases, and so on.

So why do we do test coverage? We perform test coverage analysis for several reasons: to identify areas in the specified requirements that are not covered by our tests; to determine where more test cases are needed to increase our test coverage; to measure how a change request will affect the software; to obtain a quantitative measure of the test coverage, which is an indirect method of quality check; and, last, to identify meaningless test cases that do not increase coverage.
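To make the traceability idea concrete, here is a minimal Python sketch, using entirely hypothetical requirement and test case IDs, that computes requirements coverage from a small traceability map:

```python
# A minimal sketch of requirements-coverage analysis via a traceability
# matrix. The requirement and test case IDs are hypothetical.

# Map each test case to the requirement(s) it exercises.
traceability = {
    "TC-01": ["REQ-1"],
    "TC-02": ["REQ-1", "REQ-2"],
    "TC-03": ["REQ-3"],
}

all_requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}

covered = {req for reqs in traceability.values() for req in reqs}
uncovered = all_requirements - covered

print(f"Requirements coverage: {len(covered) / len(all_requirements):.0%}")  # 75%
print(f"Not covered by any test: {sorted(uncovered)}")  # ['REQ-4']
```

Just like the four-features example above, three of the four requirements are exercised, so coverage is 75%, and the analysis also points out exactly which requirement still needs a test.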
8. The Seven Testing Principles
The first principle: testing shows the presence of defects, not their absence. When you test software, you may not find any defects. If you find defects, then that's proof of the presence of bugs. But on the other side, if your tests didn't find defects, that is not proof that the software is defect-free. There's a high probability that you didn't find defects because your tests don't cover the breadth and width of the software. Maybe you didn't select the right data to exercise the software. Maybe the defect is waiting for an exceptional circumstance to make the software fail.

Remember the Y2K problem, where many software solutions failed because they stored the year as two digits only, assuming the century would always be 19? Those software solutions used to work perfectly well but stopped working in January 2000. They contained defects that were waiting for a very special date to show up. So we cannot call those solutions bug-free, even though they were working perfectly fine for ages. There is no such thing as bug-free software, and there is no way we can prove that software is bug-free. We simply need to design as many tests as possible to find as many defects as possible. Testing reduces the probability of undiscovered defects remaining in the software, but even if no defects are found, testing is not a proof of correctness.
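As a tiny illustration of that kind of latent defect, here is a minimal sketch of the Y2K-style bug described above; the function name is hypothetical:

```python
# A minimal sketch of the Y2K-style defect: code that stores only two
# digits of the year and assumes the century is always 19.
def full_year(two_digit_year: int) -> int:
    return 1900 + two_digit_year  # defect: wrong for dates in or after 2000

print(full_year(99))  # 1999 -- looked correct for decades
print(full_year(0))   # 1900 -- should be 2000; fails in January 2000
```

No amount of testing with twentieth-century dates would ever have exposed this defect, which is exactly why passing tests cannot prove the absence of bugs.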
The second principle: exhaustive testing is impossible. Imagine we have a web page where you need to enter the age of an employee, which should fall in the range of 20 to 50. To truly test this field, you would need to enter all the possible values, both valid and invalid. If this field accepts an integer ranging from -32,767 to +32,767, then we have 65,535 possible values. Add to that a bunch of combinations of text values and special characters, because a user might type something by mistake and you want your software to behave decently in all cases without crashing. We might also have test cases where we paste the numbers rather than type them, drag and drop the values, and try some exceptional circumstances when entering the values, like spaces, deleting characters, and so on. Say we end up with a total of 70,000 values. If each value takes one minute to test, then we need 70,000 minutes to test this field alone. 70,000 minutes is about 1,167 hours, which is about 49 days. Testing this field day and night, covering all possible combinations of inputs and preconditions, is called exhaustive testing. Hence, exhaustive testing is impossible unless the application under test is a simple, trivial one, where it's possible to test all combinations of data inputs and preconditions. But, as you know, most real applications are not trivial. So the general principle is that exhaustive testing is impossible. For this reason, risk analysis, test techniques, and priorities are used to concentrate the test effort on the most important aspects, and it is critical to ensure that the essential parts are tested.

The third principle: early testing saves time and money. Many problems in software systems can be traced back to missing or incorrect requirements. With early testing, we try to find errors and defects as early as possible, before they are passed to the next stage of the development process. From the shown graph, when a bug in a requirement is discovered during the requirements phase, the cost of fixing this bug is very low. The longer we wait to fix this bug, the more costly it becomes. Why? Because by then we might have built some of our design on these faulty requirements. The same thing happens when a bug is introduced in the design phase: it's cheaper to fix it in the design phase than to wait until later stages, and so on. A bug in the requirements would create a wrong design, which in turn would create buggy code, and the final result would be buggy software. From the shown graph, we see that if our customer discovers a bug after we deliver the software, it could cost us 1,000 times more than if we had discovered and fixed the bug in the requirements phase. In addition, the time and effort required to fix a requirements bug during the system testing phase could be 500 times higher than fixing it in the requirements phase.

To find defects early, testing activities should be started as early as possible in the software or system development lifecycle. Testers should join the project as soon as documents are in draft mode; for example, a draft requirements document can contain ambiguous or missing requirements, and so on. What we are trying to achieve here is to break the error-defect-failure cycle we mentioned before by enforcing a process that decreases the errors made by humans. Static testing finds defects as early as possible, while dynamic testing finds failures and sends them back to the developers in the form of bug or defect reports to fix. If we wait until the last minute to introduce the testers, time pressure can increase dramatically, and there is a real danger that testing will be squeezed. The earlier the testing activity is started, the more elapsed time is available, and testers do not have to wait until the software is available to start testing. The syllabus mentions that early testing is sometimes referred to as "shift left," meaning testing has moved earlier on the project timeline.

Some questions in the exam might ask about the most expensive defects to fix. If they don't tell you which phase you are currently in, consider yourself in the time following the release of the software; in that case, requirements defects are the most expensive to fix. If they mention that you are still in the requirements phase, for example, then requirements defects are the easiest to fix. If you are in the design phase, then requirements defects are the most expensive to fix, design defects are the cheapest, and so on.

The fourth principle: defects cluster together. If we have software made up of ten modules or components, and our first cycle of testing exposed 100 defects, you won't find that each module has exactly 10 defects; rather, a small number of modules will exhibit the majority of the problems. A specific module could contain most of the bugs for a variety of reasons, among which are system complexity, volatile code, the effects of change, and the experience or inexperience of the development staff. So this principle says that a small number of modules usually contain most of the defects discovered during pre-release testing, or are responsible for most of the operational failures. Predicted defect clusters, and the actual observed defect clusters in test or operation, are essential inputs into the risk analysis used to focus the test effort. This phenomenon is closely related to the Pareto principle, also called the 80/20 rule. The rule says that 80% of the effects are due to 20% of the causes; for example, 80% of car accidents are due to 20% of the drivers, right? In software, it says that approximately 80% of the problems are found in about 20% of the modules.
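To see what defect clustering looks like in practice, here is a minimal sketch over hypothetical per-module defect counts from a first test cycle:

```python
# A minimal sketch of defect-clustering analysis on hypothetical
# per-module defect counts: 100 defects across 10 modules.
defects_per_module = {
    "auth": 38, "billing": 41, "reports": 6, "search": 4, "ui": 3,
    "export": 2, "admin": 2, "api": 2, "logging": 1, "help": 1,
}

total = sum(defects_per_module.values())  # 100 defects in total

# The top 20% of modules (2 out of 10), sorted by defect count.
top_two = sorted(defects_per_module, key=defects_per_module.get, reverse=True)[:2]
share = sum(defects_per_module[m] for m in top_two) / total

print(f"Top 20% of modules {top_two} hold {share:.0%} of the defects")  # 79%
```

With these (made-up) numbers, two of the ten modules account for roughly 80% of the defects, which is exactly the pattern the Pareto principle predicts and the pattern risk analysis uses to focus test effort.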
So if you want to uncover a higher number of defects, it's useful to employ this principle and target the areas of the application where a high proportion of defects can be found. However, remember that testing should not concentrate exclusively on those parts. There may be fewer defects in the remaining code, but they could be more severe, so testers still need to search hard for them.

The fifth principle: beware of the pesticide paradox. If the same tests are repeated over and over again, eventually the same set of tests will no longer find any new defects. The tests are no longer effective at finding defects, just as pesticides are no longer effective at killing insects after a while. To detect new defects, existing tests and test data may need to be changed, and new tests may need to be written to exercise different parts of the software or system and find potentially more defects. In some cases, such as automated regression testing, the pesticide paradox has a beneficial outcome, which is a relatively low number of regression defects.

The sixth principle: testing is context dependent. Different testing is necessary in different circumstances. A game that uses graphics heavily will be tested differently from a graphics editor, even though both use graphics heavily. A static website will be tested differently from a dynamic e-commerce site where products can be purchased using credit or debit cards. Software used in an aeroplane will be tested differently from a flight simulator. Safety-critical industrial control software is tested differently from a mobile e-commerce app. The login for a banking system will be tested differently from the login for an online game, even though the user interface might look exactly the same in both applications. As another example, testing in an Agile project is done differently from testing in a sequential lifecycle project.

The last principle: "absence of errors" is a fallacy. Some organisations expect the testers to run all possible tests and find all possible defects. But principles two and one, respectively, tell us that this is impossible. Further, it's a fallacy, meaning a mistaken belief, to expect that just finding and fixing a large number of defects will ensure the success of the system. For example, thoroughly testing all the specified requirements and fixing all the defects found could still produce a system that is difficult to use and doesn't fulfil the user's needs and expectations. Software with no known errors is not necessarily ready to be shipped. You should also ask the question: does the application under test fulfil the user's needs or not? The fact that we cannot find any defects, or that we have fixed all the defects we have found, is not enough reason to believe that the software will be successful. Think about it: before dynamic testing has begun, no defects have been reported against the code delivered so far. Does this mean the software that has not been tested yet, and hence has no outstanding defects, can be shipped? I don't think so.

Also remember, and I have learned this the hard way: you might go to a doctor with symptoms, and he gives you medication according to his analysis of the problem. You take the medication for a while but see no improvement. It turns out the doctor's initial analysis was wrong, and he might recommend another medication. The first medication was not a bad one, but it was the wrong medication for the situation. You can apply the same analogy to software.
We usually build software to help us solve a problem. We specify the characteristics of the software thinking they will solve the problem, and then we build the software exactly according to the characteristics we specified ourselves. Well, sometimes our solution turns out to be a bad one. So the absence of errors is no guarantee that the software will be successful.

Now, what kind of questions can you expect in this part? Well, they can list four options, where three of them are principles and the fourth is not, and then ask you to pick the one that is not a principle. Of course, they will make that option a confusing one, something like "testing helps gain confidence in the software". It's true, but it's not a principle. They can also give you a situation and ask which principle the situation describes, or which principle the situation is lacking. For example: a customer gets the software but is upset because it doesn't meet his needs; which principle did we not follow? Absence of errors is a fallacy. A team wants to discover as many defects as they can; which principle should they follow? Defect clustering. And so on.

Now, there is something I would like to emphasise here. As I have mentioned before, risk is an important aspect when talking about testing. So let's talk for a few minutes about risk, just to highlight the concept. Testing shows only the presence of defects, exhaustive testing is impossible, and then there's the pesticide paradox. So a good question arises: how much testing is enough? The answer is: it depends on the risk. The risk of missing important faults, the risk of failure costs, the risk of releasing untested or under-tested software, the risk of losing credibility and market share, the risk of missing a market window, and the risk of over-testing or ineffective testing. Every time, we should evaluate the risk of the current situation of the software and decide whether the risk is high or low. If the risk is low and acceptable, then we can stop testing and ship the software. Otherwise, we should continue testing. Testing should provide stakeholders with enough information to make informed decisions about releasing the software being tested, moving to the next development step, or handing over to customers. We then use this risk information to determine what to test first, what to test most, how to allocate the time available for testing by prioritising the testing, what not to test this time, and how thoroughly to test each item. We will talk more about risk and testing in a later section. Testing software is not a random game we play with the software until we find a bug; rather, it's a process designed to achieve the most effective and efficient testing.
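As a hint of how such risk judgments can drive prioritisation, here is a minimal sketch that scores hypothetical test items by likelihood times impact and orders them riskiest first:

```python
# A minimal sketch of risk-based test prioritisation: score each test
# item by likelihood x impact and test the riskiest items first.
# The items and their scores are hypothetical.
items = [
    {"name": "payment processing", "likelihood": 4, "impact": 5},
    {"name": "login",              "likelihood": 3, "impact": 4},
    {"name": "report layout",      "likelihood": 2, "impact": 1},
]

for item in sorted(items, key=lambda i: i["likelihood"] * i["impact"], reverse=True):
    print(f"risk={item['likelihood'] * item['impact']:>2}  {item['name']}")
```

The output puts payment processing first and report layout last, which is the essence of using risk to decide what to test first, what to test most, and what not to test this time.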
9. Test Conditions, Test Cases, Test Procedure and Test Suites
Before we delve into the test process tasks, there are a few terms that we are going to use: test conditions, test cases, test procedures, and test suites. So let me explain those terms first.

Imagine in our software, which we're going to call ExpertWave, we have a screen where we enter the age of the employee and hit OK. Don't ask me why we have a single field called "age" in a screen; this is a customer request and we have to implement it. This part of the requirement could read like this: "According to requirement one, the age of the employee shall be in the range of 20 to 50, whole numbers only." Now, our job is to test this piece of the requirement. What exactly do we mean by testing this requirement? In simple words, it means we verify that this requirement has been implemented correctly in the software.

So, what would you like to test here? The way I think about it is that every word in the requirements document needs to be tested. I know this is extreme and doesn't actually happen like this, but it helps you get the idea. In our example, we would like to test the range 20 to 50. We would also like to test "whole" and "numbers". What I've just described are known as test conditions. So what is a test condition? A test condition is an item or event of a component or system that could be verified by one or more test cases; for example, a function, transaction, feature, quality attribute, or structural element. So any tiny thing that needs to be tested is called a test condition.

If I asked you to prioritise those test conditions, how would you do that? First, what do I mean by prioritise? Well, if you have almost no time to test, what would be the most important thing to test first, so that you would feel you have achieved something if they asked you to stop testing afterward? Prioritisation also depends on what your customers will most likely do. So here is how I see it: testing that the numbers are within range is the most important, because users could easily make the mistake of entering numbers out of range, so you want to make sure you can capture that. Then make sure that we accept only whole numbers; I think many users might try to enter real numbers instead of integers. And last, we need to accept numbers and not characters. Well, this is the way I see it; you might prioritise differently, and that's okay. It depends on the context of the software and your understanding of your users.

The next step after creating the test conditions is creating what we call the test cases. So what is a test case? A test case is a set of input values, preconditions, expected results, and postconditions developed for a particular objective or test condition. Again, let's continue with our example and create test cases for the range 20 to 50. How would you test this? Give it a thought for a second. Correct: we should test below the range (below 20), which should be rejected by the software; within the range of 20 to 50, which should be accepted by the software; and above the range (above 50), which should also be rejected by the software. So we have created three test cases for a single test condition, the range between 20 and 50. But those three test cases are called high-level test cases, or logical test cases. Why? They are called high-level because they don't indicate exact data, just logical information: greater than, less than, and so on. So do we have something called low-level test cases?
Sure. We get low-level test cases if we say that our test data are 10 (below the range), 40 (inside the range), and 60 (above the range). Low-level test cases are also called concrete test cases. If the exam doesn't indicate whether a test case is high-level or low-level, then assume low-level test cases, which means the test cases contain concrete data. I will continue my example using data.

The second test condition is that whole numbers are valid but non-whole numbers are not, so let's make sure our software handles both. Let's try 40, which should be accepted by the software, and 30.4, which should be rejected by the software. The last test condition is numbers only. So let's try 40, which should be accepted by the software, and XYZ, as characters, which should be rejected by the software. Great. Did you notice something here? There can be more than one test case satisfying a single test condition, and there can be a single test case (40 in our example) that satisfies more than one test condition.

Now, if you need to prioritise those test cases, how would you do that? I would say 40, 30.4, 10, 60, XYZ. It doesn't matter if you thought differently; it could be different depending on the context of the software. Remember the testing principle: testing is context dependent. I prioritised 30.4 high because I thought many users might make the mistake of entering a real number, just as they might enter a value lower or higher than the range. So it varies.

A proper test case usually comes with a predefined expected result, telling you what to expect when executing the test case. Similarly, it's good practice to define the initial expected state, the preconditions, when writing a test case. These are usually defined as separate properties of the test case, and when a precondition isn't met, there is no point in executing the test case.
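To make these concrete test cases tangible, here is a minimal pytest sketch. The validate_age() function is hypothetical, standing in for requirement one (a whole number in the range 20 to 50); it's a sketch of the checks, not the actual ExpertWave implementation:

```python
# A minimal pytest sketch of the concrete (low-level) test cases above,
# assuming a hypothetical validate_age() function for requirement one.
import pytest

def validate_age(value: str) -> bool:
    if not value.isdigit():          # rejects "30.4" and "XYZ"
        return False
    return 20 <= int(value) <= 50    # rejects 10 and 60

@pytest.mark.parametrize("value, expected", [
    ("10", False),   # below the range
    ("40", True),    # inside the range (also covers "whole" and "numbers")
    ("60", False),   # above the range
    ("30.4", False), # not a whole number
    ("XYZ", False),  # not a number at all
])
def test_age_field(value, expected):
    assert validate_age(value) == expected
```

Running pytest would execute all five concrete test cases; note how the single value 40 exercises all three test conditions at once, exactly as discussed above.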
The third step in testing is to write the steps that we need to perform to execute the test. Those steps are called a test procedure. This is beyond the scope of this course, but I will talk about it to demonstrate how test cases and test procedures are used. So if you want to actually test the number 40, can you just type 40? No. You need to set up the software to be ready to run the test case. To test 40, you would launch the ExpertWave application, select the New Employee menu item from the Employee menu, and the New Employee dialogue should be displayed. Check that the cursor is blinking inside the age field, type in 40, and click the OK button. Check that the confirmation message is displayed.

Do you really need to go into these detailed steps? Yes. Do you need to write them down? Yes. You might ask why. Because this procedure can be executed by anyone other than you, so you want it to be as clear as possible for anyone to understand and execute it without any ambiguities. Notice a few points here. The test procedure checked that the cursor is blinking in the age field, even though that's not mentioned in the requirements document. The test procedure also tested more than one test case: the existence of the New Employee menu item under the Employee menu, the cursor blinking, and the number 40 being accepted in the age field.

Another test procedure, for the 30.4 test case, might look like this: launch the ExpertWave application, type Ctrl+N, and the New Employee dialogue should be displayed. Check that the cursor is blinking within the age field, type in 30.4, and press the Enter key. Check that the alert "age should be whole numbers only" is displayed. You can notice an extra point here: we change the way we do things whenever we can. For triggering the New Employee menu item, in the first test procedure we clicked on the menu, and in the second we used the keyboard shortcut. Likewise, in the first procedure we clicked the OK button, and in the second we pressed the Enter key.

Okay, so now you know about test procedures, but there's something to tell you, between you and me: people in the industry don't usually use the term "test procedure". They just refer to it as a test case. So in your company, you won't find people differentiating between the logical idea of a test case and the formal test procedure. Still, we use the term "test procedure" in this ISTQB Foundation course.

The last term we need to learn is the test suite, which should make sense to you once you get it. As the number of test cases and test procedures grows, so does the need to categorise them for better accessibility when planning and running tests; keeping track of hundreds of test procedures becomes a hard job. Thinking in terms of "we need to run a performance test tomorrow", "we need to make sure the build is in good shape now", or "is the login module stable or not?" forces you to know the purpose and domain of each test procedure. A test suite allows you to categorise test procedures in such a way that they match your planning and analysis needs. Do you want functional and performance tests? Create two suites and label them accordingly. You can create as many test suites as you need: one for functional tests, one for performance tests, one for recovery tests, one for lengthy test procedures that may need to run through the night, one for quick smoke-test procedures to make sure the build is good, and so on. A test procedure can be added to multiple test suites and test plans. Test suites are created based on the test cycle or based on the scope, and a suite can contain any type of test. So I hope you now have a better idea about test conditions, test cases, test procedures, and test suites.
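To make the test suite idea concrete, here is a minimal sketch using pytest markers as suite labels; the marker names and test names are hypothetical, and custom markers would normally be registered in pytest.ini:

```python
# A minimal sketch of grouping test procedures into suites with pytest
# markers; one test can belong to several suites, as noted above.
import pytest

@pytest.mark.smoke
@pytest.mark.functional
def test_login_succeeds_with_valid_credentials():
    ...  # quick check that the build is in good shape

@pytest.mark.performance
def test_page_loads_within_acceptable_time():
    ...  # lengthy procedure, could run through the night

# Run only the smoke suite:        pytest -m smoke
# Run only the performance suite:  pytest -m performance
```

Notice that the first test belongs to both the smoke and the functional suites, mirroring the point that a test procedure can be added to multiple test suites.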
10. The Test Process
We have talked about what and why to test. Let's talk now about how to do the testing. A common misperception of testing is that it consists only of running tests, meaning executing the software and checking the results. But software testing is actually a process that includes many different activities; test execution, including checking the results, is only one of them. For example, prior to test execution, we must decide what we want to achieve with the testing and establish clear objectives. Then we design the tests and set them up. During test execution, there is some work needed to record the results and check whether the tests are complete.

To do proper testing, one should go through various test activities. We call those activities the test process. Usually, you will go through all the steps or activities, but the appropriate, specific software testing process in any given situation is determined by a variety of contextual factors, so you might put more effort into one step than another. The most important thing is to decide how we should perform the test process to achieve its established objectives. These sets of test activities are the test process. Which test activities are involved in this test process, how these activities are implemented, and when they occur may be discussed in an organisation's test strategy. We will talk more about test strategies in the Test Management section.

Now let's talk about the test process in context. The following videos describe general aspects of organisational test processes in terms of: the test activities and tasks; the test work products, meaning what we will create during those activities; and the traceability between the test basis and the test work products, meaning how the various test work products reference each other and reference the test basis, which consists of the documents we use as a basis or reference to perform the testing, like the requirements and the design documents. Test processes are also described in the ISO/IEC/IEEE 29119-2 standard. Let's talk about the listed activities and tasks first.
11. Test Planning and Test Monitoring and Control
But for now, test planning is where we define the objectives of testing; decide what to test, who will do the testing, and how they will do it; define the specific test activities in order to meet the objectives; and define how long testing will take and when we can consider it complete, which is called the exit criteria. This is when we will stop testing and give a report to the stakeholders to decide whether testing was enough or not. All of this happens within constraints imposed by the context of the project. Test plans may be revisited based on feedback from monitoring and control activities.

Next, test monitoring and control. We will talk more about test monitoring and control in the Test Management section, but for now you need to know that test monitoring is the ongoing activity of comparing actual progress against the test plan, using any test monitoring metrics defined in the plan. If there are any deviations between what's actually happening and the plan, then we should apply test control, which means taking any necessary action or actions to stay on track to meet the targets. Therefore, we need to undertake both monitoring and control throughout our test activities.

Remember the exit criteria that were defined during test planning? During test monitoring and control, we should continuously evaluate the exit criteria to see whether we have met them yet. Evaluating exit criteria is an activity where test execution results are assessed against the defined objectives. For example, evaluating the exit criteria for test execution as part of a given test level may include checking test results and logs against the specified coverage criteria, assessing the level of component or system quality based on test results and logs, and determining if more tests are needed; for example, if tests designed to achieve a certain level of product risk coverage fail to do so, new tests may have to be created and executed.

Test progress against the plan and the status of the exit criteria are communicated to stakeholders in test progress reports, including deviations from the plan and information to support any decision to stop testing. The test manager then evaluates the test reports submitted by the various testers and decides whether we should stop testing or whether testing in a specific area should continue. For example, suppose the exit criterion is that the software respond within eight seconds per web page transaction, but the measured speed is ten seconds, meaning the criterion is not met. Then there are two possible actions: the most likely one is to employ extra testing and fixing activities until we achieve the desired performance; the least likely one is to change the exit criteria, which will require approval from the key stakeholders. In Agile lifecycles, the exit criteria map to what's called the definition of done. Again, we will talk more about test monitoring and control in the Test Management section.
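As a small illustration, here is a minimal sketch of evaluating exit criteria against current test metrics; the metric names and thresholds are hypothetical:

```python
# A minimal sketch of evaluating exit criteria during test monitoring
# and control. The metrics and thresholds are hypothetical.
def exit_criteria_met(metrics: dict) -> bool:
    criteria = {
        "requirements_coverage": lambda v: v >= 0.90,  # coverage threshold
        "test_pass_rate":        lambda v: v >= 0.95,  # pass-rate threshold
        "open_critical_defects": lambda v: v == 0,     # no critical defects open
    }
    return all(check(metrics[name]) for name, check in criteria.items())

current = {"requirements_coverage": 0.92, "test_pass_rate": 0.88,
           "open_critical_defects": 1}
print(exit_criteria_met(current))  # False -> keep testing or take a control action
```

Here the pass rate and the open critical defect fail the criteria, so the report to stakeholders would show the deviation and testing would continue, or a control action would be taken, just as described above.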
Pay a fraction of the cost to study with the Exam-Labs CTFL-2018: ISTQB Certified Tester Foundation Level 2018 certification video training course. Passing the certification exams has never been easier. With the complete self-paced exam prep solution, including the CTFL-2018: ISTQB Certified Tester Foundation Level 2018 certification video training course, practice test questions and answers, exam practice test questions and study guide, you have nothing to worry about for your next certification exam.