Best Practices for Testing Software
I briefly touched on this topic in my last post, but I wanted to devote more time to it because it’s an important piece of the software development puzzle. I won’t go into detail on how to construct a test plan, as there are numerous examples available to download. What I will try to convey is what makes up a good test plan and why. The first step is the most obvious one – creating the test plan itself. The plan sets the quality bar for the testing; planning is the hardest part, and the rest is “just” implementation. A test plan is used by developers and project managers (PMs) to ensure that the test team understands every part of the project and that no area has been missed. It is also used to estimate the amount of work (the test effort), build the schedule, and track the progress of the testing itself.
A few basics when creating a test plan: do not duplicate information from the technical specifications, and assume the readers understand them. Stay concise; one should be able to scan the document and quickly get a good sense of the coverage. I’ve found that short sentences and bulleted items work best in this regard. When it comes to the test cases themselves, name them for easy reference, such as: "TC_InsertCustomer01001 – Test case #1 for ICustomer". Speaking of test cases, stating whether each one will be automated or manual eases any confusion over who should create which test and how it should be conducted (e.g. “UI automated testing will be done using Test Manager 2010”).
Explain the Strategy
Explaining the strategy is key, as the QA staff need to know what should and shouldn't be tested. A brief explanation accompanying each test is helpful. For instance: “Component A won’t be tested because the tests on component B will exercise it.” When it comes to unit testing, these tests are normally implemented by the developers, and the test team then expands them to ensure complete coverage. Defining a standard set of tests, such as pass, fail, stress and performance, provides a consistent set to implement.
Unit tests generally have 4 major steps:
1. Set up the environment
2. Call the tested code
3. Verify the output
4. Clean up (tear down)
While creating these tests, the developer should understand the technical constraints and test error conditions not exercised by normal testing procedures (e.g. “What is the impact of a reboot on my test?”).
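The four steps above map directly onto most unit-testing frameworks. Here is a minimal sketch using Python's `unittest` module; the `add` function is a stand-in for real production code:

```python
import unittest

def add(a, b):
    # Code under test; a stand-in for real production code.
    return a + b

class AddTests(unittest.TestCase):
    def setUp(self):
        # 1. Set up the environment (fixtures, test data).
        self.inputs = (2, 3)

    def test_add(self):
        # 2. Call the tested code.
        result = add(*self.inputs)
        # 3. Verify the output.
        self.assertEqual(result, 5)

    def tearDown(self):
        # 4. Clean up so later tests start from a known state.
        self.inputs = None
```

Run with `python -m unittest`; equivalent lifecycle hooks exist in JUnit, NUnit, MSTest, and most other xUnit-style frameworks.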
Functional Testing (aka Black Box Testing)
When performing functional testing, one should explore all the permutations of the workflows provided in the specifications. That means one test case (TC) per permutation, and each TC should compare the actual output against the expected output. If there is a discrepancy, then perhaps the design document missed a scenario, or the test produced inconclusive results. This is why keeping these tests up to date, and writing detailed and unambiguous steps, is necessary.
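The "one test case per permutation" idea can be expressed as a data-driven table of inputs and expected outputs. A hedged sketch, where the `discount` function and its tiers are purely hypothetical stand-ins for behavior a real specification would define:

```python
# Hypothetical code under test: per-tier percentage discount.
def discount(tier: str, total: float) -> float:
    rates = {"standard": 0.0, "silver": 0.05, "gold": 0.10}
    return round(total * (1 - rates[tier]), 2)

# One entry per permutation: (inputs, expected output per the spec).
CASES = [
    (("standard", 100.0), 100.0),
    (("silver", 100.0), 95.0),
    (("gold", 100.0), 90.0),
]

def run_functional_cases():
    """Compare actual vs. expected for every permutation."""
    failures = []
    for args, expected in CASES:
        actual = discount(*args)
        if actual != expected:
            # A mismatch may mean a bug -- or a scenario the
            # design document forgot to cover.
            failures.append((args, expected, actual))
    return failures
```

Keeping the table next to the spec makes it obvious when a new permutation is added to the design but not to the tests.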
In the past, test automation was seen by the business as complex and a large investment. This ideology almost always eliminated automated testing, and all testing was done manually. Today’s tools and processes have largely squashed those perceptions, but complexity and expense can rear their ugly heads extremely quickly without a few guidelines in place. That said, select the areas with the highest return on investment (ROI): areas whose tests need to run often, have a low risk of changing, and are easy to implement.
Metrics are essential not only to gauge code quality (defect ratios, failed tests, etc.) but also to gain insight into the overall project/test plan. Using a tracking tool (there are many to choose from) is almost a requirement, although I suppose an Excel workbook could do in a pinch. Either way, the tracking should be based on the test plan and contain all the tasks, broken down by feature area and by unit/functional tests. Recording the percentage of completion for each task and updating progress daily keep these metrics relevant for the whole team while development and testing are underway.
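The per-area completion percentage described above is a simple calculation once tasks are tracked by feature area. A minimal sketch, assuming each area is tracked as a (completed, total) pair of test counts; the function and field layout are illustrative, not any particular tool's schema:

```python
from typing import Dict, Tuple

def completion_by_area(tasks: Dict[str, Tuple[int, int]]) -> Dict[str, float]:
    """Return percent complete per feature area from (done, total) counts."""
    return {
        area: round(100.0 * done / total, 1) if total else 0.0
        for area, (done, total) in tasks.items()
    }

# Example daily snapshot for a status report:
# completion_by_area({"Checkout": (3, 4), "Search": (10, 10)})
```

Recomputing this from the tracking data each day keeps the report honest, rather than relying on hand-edited percentages.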
The reports themselves should include up-to-date tracking information, any known blocking issues, and what is forecasted to be tested next.
This topic has a multitude of layers, some of which I have not touched on (roles and responsibilities, dependencies, environment requirements, etc.), but the key takeaway is that regardless of the programming language and methodology being used to develop software (e.g. C#, VB, Java; Agile/Extreme/Scrum/CMMI/Waterfall), testing is a must. Acquiring and using advanced testing tools pays huge dividends when it comes to application quality.