Friday, October 20, 2006

Software Testing - Do's and Don'ts

A good test engineer should always work towards breaking the product, right from the first release till the final release of the application (the killer attitude). This section covers not just testing itself but all the activities related to it, be it defect tracking, configuration management or test execution.
The Do’s
1. Ensure that the testing activities are in sync with the test plan
2. Identify areas where you are not technically strong and might need assistance or training during testing. Plan and arrange for this training early to close the gap.
3. Strictly follow the test strategies as identified in the test plan
4. Try to get release notes from the development team containing the details of the release made to QA for testing (a sketch of capturing these details appears after this list). These should normally contain the following:
a) The version label of code under configuration management
b) Features part of this release
c) Features not part of this release
d) New functionalities added/Changes in existing functionalities
e) Known Problems
f) Fixed defects etc.
5. Stick to the entry and exit criteria for all testing activities. For example, if the entry criterion for a QA release is sanity-tested code from the development team, ask for the sanity test results.
6. Update the test results for the test cases as and when you run them
7. Report the defects found during testing in the tool identified for defect tracking
8. Take the code from configuration management (as identified in the plan) for build and installation.
9. Ensure that the code is version controlled for each release.
10. Classify defects (whether as P1/P2/P3/P4, Critical/High/Medium/Low, or any other scheme) in mutual agreement with the development team, so as to help developers prioritize defect fixes (see the priority sketch after this list)
11. Do a sanity test as and when a release is made from the development team (a minimal sanity suite is sketched after this list).
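
To make the release-notes expectation in point 4 concrete, here is a minimal sketch of how those details could be captured and checked before testing begins. The Python representation and field names are my own illustration, not a prescribed format.

    from dataclasses import dataclass, field

    # Illustrative representation of the release-note details listed in
    # point 4 above; the field names are assumptions for this sketch.
    @dataclass
    class ReleaseNotes:
        version_label: str   # version label of the code under configuration management
        features_included: list = field(default_factory=list)
        features_excluded: list = field(default_factory=list)
        functionality_changes: list = field(default_factory=list)
        known_problems: list = field(default_factory=list)
        fixed_defects: list = field(default_factory=list)

        def is_complete(self):
            # Reject a QA release whose notes omit the version label.
            return bool(self.version_label.strip())

    notes = ReleaseNotes(
        version_label="REL_1_2_0",
        features_included=["User login", "Report export"],
        known_problems=["Export fails for files larger than 10 MB"],
        fixed_defects=["DEF-101", "DEF-142"],
    )
    assert notes.is_complete()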
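For defect classification (point 10), one simple approach, assuming a P1-P4 scheme agreed with the development team, is to make the priority an ordered type so the defect list can be sorted for fixing:

    from enum import IntEnum

    # The labels (P1..P4 vs. Critical/High/Medium/Low) are whatever the
    # teams mutually agree on; P1-P4 is just one common convention.
    class Priority(IntEnum):
        P1 = 1   # Critical: blocks testing or crashes the product
        P2 = 2   # High: major functionality broken
        P3 = 3   # Medium: minor functional or usability problem
        P4 = 4   # Low: cosmetic issue

    defects = [
        ("DEF-203", Priority.P3),
        ("DEF-187", Priority.P1),
        ("DEF-210", Priority.P4),
    ]

    # Developers pick up defects in priority order.
    for defect_id, priority in sorted(defects, key=lambda d: d[1]):
        print(defect_id, priority.name)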
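And for the sanity pass (point 11), even a handful of automated checks run on every release helps. The sketch below uses Python's unittest; the login() function here is only a stand-in for the real application, so treat the whole thing as an assumption about what a sanity suite might look like:

    import unittest

    # Stand-in for the application under test; in practice these calls
    # would exercise the real build installed from configuration management.
    def login(user, password):
        return user == "qa" and password == "secret"

    class SanityTests(unittest.TestCase):
        # A few broad checks run on every release from development,
        # before deeper functional testing begins.

        def test_login_succeeds_with_valid_credentials(self):
            self.assertTrue(login("qa", "secret"))

        def test_login_fails_with_invalid_credentials(self):
            self.assertFalse(login("qa", "wrong"))

    if __name__ == "__main__":
        unittest.main()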
The Don’ts
1. Do not update the test cases while executing them. Track the changes and update them afterwards based on a written reference (the SRS, functional specification, etc.). People normally tend to update test cases based on the look and feel of the application.
2. Do not track defects in multiple places, i.e. some in Excel sheets and others in a defect tracking tool. This increases the time taken to track all the defects, so use one centralized repository for defect tracking.
3. Do not get the code from a developer's sandbox for testing if it is an official release from the development team
4. Do not spend time testing features that are not part of this release
5. Do not focus your testing on non-critical areas (from the customer's perspective)
6. Even if the defect identified is of low priority, do not fail to document it.
7. Do not leave room for assumptions while verifying the fixed defects. Clarify and then close!
8. Do not hastily mark test cases as passed without actually running them, assuming they worked in earlier releases. These preconceived notions can cause big trouble if the functionality has suddenly stopped working and the failure is later found by the customer (a sketch of per-release result recording follows this list).
9. Do not focus on negative paths that consume a lot of time but will be least used by the customer. Though these need to be tested at some point of time, the real idea is to prioritize tests.
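
On point 8, a small habit that helps is recording each test case's result only from an actual run, stamped with the release it ran against, so a result from an earlier release can never masquerade as a current one. A minimal sketch; the result format is my own illustration:

    from datetime import date

    def record_result(results, case_id, release, passed):
        # Store a result only as the outcome of a real execution,
        # keyed by the release the case actually ran against.
        results[(case_id, release)] = {"passed": passed, "run_on": date.today()}

    results = {}
    record_result(results, "TC-042", "REL_1_2_0", passed=True)

    # A lookup for the new release finds nothing carried over from the
    # previous one - the case must be re-run, not assumed to pass.
    assert ("TC-042", "REL_1_2_1") not in results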
