Test Bases, Models, Oracles and Prioritisation approaches are fallible because the people who define and use them are prone to human error.
Consequence if ignored or violated
Confidence in the meaning, thoroughness, accuracy and value of tests is misplaced.
- How have the sources of knowledge required for testing been identified?
- Who is responsible for the content in these sources?
- Has the content of these sources been agreed unanimously?
- Have these sources stabilised?
- Have these sources been verified against other references or validated against the experience of contributors and other stakeholders?
- What measures can be taken to minimise the impact of error in our sources of knowledge?
- If sources are fallible or conflict, who or what is the final arbiter?
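The last question above can be made concrete in test automation: when two fallible sources of knowledge disagree about an expected result, the check should escalate to an agreed arbiter rather than silently trust either one. The sketch below illustrates this; all names (`spec_model`, `legacy_impl`, `arbiter`) are hypothetical stand-ins for real sources such as a specification, a model, or a legacy system, and the arbiter policy shown is a placeholder.

```python
# Hedged sketch: cross-checking an expected result against two fallible
# oracles, deferring to an agreed final arbiter when they conflict.
# All oracle and arbiter functions here are hypothetical examples.

def cross_check(x, oracle_a, oracle_b, arbiter):
    """Return the expected result for input x from two fallible oracles.

    If the oracles agree, use their shared answer; if they conflict,
    defer to the agreed final arbiter rather than trusting either one.
    """
    a, b = oracle_a(x), oracle_b(x)
    if a == b:
        return a
    # The sources conflict: escalate instead of guessing.
    return arbiter(x, a, b)

# Two "sources of knowledge" that happen to disagree at x == 0:
spec_model = lambda x: abs(x)                        # derived from the spec
legacy_impl = lambda x: x if x > 0 else -x if x < 0 else 1  # wrong at zero

# The arbiter stands in for whoever the team agreed settles conflicts.
arbiter = lambda x, a, b: min(a, b)                  # placeholder policy

print(cross_check(5, spec_model, legacy_impl, arbiter))  # oracles agree
print(cross_check(0, spec_model, legacy_impl, arbiter))  # conflict: arbiter decides
```

In practice the arbiter would be a person or a designated authoritative reference, not a function; the point is that the conflict-resolution rule is decided in advance, not improvised when the disagreement surfaces.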