Testing is essential in any web-based software application project. As a project manager, you must create opportunities to educate stakeholders about what testing involves and what affects testing timelines before the project enters the often-chaotic test execution period, when everyone can become so focused on the smallest details that they lose sight of the big picture.
The following aspects of testing should be discussed with all stakeholders so that timelines and management expectations can be established:
Establish your testing objectives up front. Ask yourself questions like:
What is most important: minimal defects or time-to-market?
Are we willing to trade time-to-market for internal manual efforts that are invisible to customers?
Should testing the correctness of business functionality take priority over testing for navigational consistency throughout the application?
Is visual appeal a high testing priority?
Who will be doing the testing?
How much experience do the testers have testing software?
How much time will they be able to allocate each day to the testing phase?
Will the time spent testing impact their daily responsibilities, and if so, how will you mitigate that risk?
What training do they need to be productive testers?
The following identifies a few of the skills needed to be an effective tester:
Analytical and logical thinking – The major objective of testing is to identify hidden errors, not simply to prove that the software works. To be effective in their role, testers must be able to analyze a given business situation and judge all the possible scenarios.
A sense of intellectual curiosity and creativity – A tester should understand that being an intellectual and being intellectually curious are not the same. A tester should arguably be the latter: intellectual curiosity is about asking questions, not about having answers. Thus, a tester should develop the skill to see what everyone else hasn't seen, to think what no one else has thought, and to do what no one else has dared.
To effectively test a system, testers need either to have been involved in defining the system's requirements or to be given a requirements document for review, so they can answer the following questions:
What exactly is the system supposed to do and in what order is it supposed to do it?
Are there integrated components that need to be tested to ensure connectivity and accurate data exchange?
What are the things that can go wrong and how is the system supposed to behave when that happens?
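The last question above — how the system should behave when things go wrong — is what distinguishes negative-path testing from simply confirming the happy path. As a minimal sketch (the `withdraw` function and its rules are invented for illustration, not taken from any real system), a tester would write cases for both the documented behavior and the failure scenarios:

```python
def withdraw(balance: float, amount: float) -> float:
    """Hypothetical business rule: withdrawals must be positive and covered by the balance."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

# Positive path: the "what is the system supposed to do" behavior.
assert withdraw(100, 40) == 60

# Negative paths: the "what can go wrong" scenarios surfaced by a requirements review.
for bad_amount in (150, 0, -5):
    try:
        withdraw(100, bad_amount)
        raise AssertionError("expected the withdrawal to be rejected")
    except ValueError:
        pass  # the requirement says reject, so an error here is the correct behavior
```

Each negative case traces directly back to an answer the tester got from the requirements review.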
Pivotal to all testing activity are the test cases. Creating the test cases constitutes the bulk of the testing activity, and the quality and thoroughness with which they are designed and executed determine the quality of the final result. Test cases should be traceable to the business and system requirements. Testers should be aware that the test case identification process may not uncover every scenario. There are two reasons for this:
The test cases developed for implementation will not be 100 percent exhaustive and may be written at a level of detail insufficient for testing.
The test team’s review process and the execution of the test cases will produce new discoveries and additional scenarios. Some of these may not have been considered in the design and may require design modifications.
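Traceability between requirements and test cases can be checked mechanically. Here is a minimal sketch (the requirement and test-case IDs are invented for illustration) that maps each test case to the requirements it covers and flags any requirement left untested:

```python
# Business/system requirements the test cases should trace back to.
requirements = ["REQ-001", "REQ-002", "REQ-003"]

# Each test case lists the requirements it verifies.
test_cases = {
    "TC-01": ["REQ-001"],
    "TC-02": ["REQ-001", "REQ-002"],
}

# Any requirement not covered by at least one test case is a gap to close
# before execution begins.
covered = {req for reqs in test_cases.values() for req in reqs}
uncovered = [req for req in requirements if req not in covered]

print("Uncovered requirements:", uncovered)  # → ['REQ-003']
```

Running a check like this during test design makes coverage gaps visible before execution starts, when they are cheap to fix.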
Once you start executing your test plans, you will probably generate a large number of bugs, issues, and defects. Issue logs must be maintained to document tests that failed. Failed tests require analysis to determine the needed fix, and developers must provide a delivery date for when the fix will be ready for re-test. All of this information needs to be easy to store, organize, and distribute to the appropriate stakeholders.
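Most teams keep this log in a tracking tool rather than in code, but the record each failed test produces can be sketched as a simple data structure (the field names, IDs, and dates below are invented for illustration):

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Issue:
    issue_id: str
    test_case: str                 # the failed test this issue traces back to
    description: str
    severity: str                  # agreed-upon level, e.g. "Critical" / "High" / "Medium"
    assigned_to: str = ""
    fix_due: Optional[date] = None # developer-committed date for re-test
    status: str = "Open"

# The issue log: one record per failed test.
log = [
    Issue("BUG-101", "TC-02", "Totals wrong on confirmation screen", "Critical",
          assigned_to="dev-team", fix_due=date(2024, 1, 15)),
]

# Distribution to stakeholders might mean filtering, e.g. everything still open:
open_issues = [i for i in log if i.status == "Open"]
```

The key points the sketch captures are traceability back to a test case, an agreed severity, an owner, and a committed re-test date.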
Prioritizing defects, bugs and issues
Criteria to assign severity to all issues need to be agreed upon prior to the beginning of testing.
Examples of severity levels are:
Critical – Must have for go-live. Includes any functionality, user-interface, or system-interface defect that fails to meet a business or system requirement an external entity would see, and anything that would create time-consuming manual work for support staff. Also includes new ‘must have’ functional requirements identified during testing that were not in the original requirements document.
High – Any functionality or user-interface defect that does not meet a business or system requirement and is not evident to external entities, but whose fix is needed by support staff to eliminate manual work.
Medium – Cosmetic defects such as spelling errors, typos, and text issues on screens.
Testing is a tedious, detail-oriented task that requires focused attention. Planning a roadmap for testing will help team members reach whatever degree of assurance they need to be confident that they have defined, designed, and tested a system that meets the needs of the end users. After all, isn’t that the goal that was established to begin with?