The Unconscious Art of Software Testing

"...the most important considerations in software testing are issues of economics and human psychology."

                        The Art of Software Testing, p. 4
                        Glenford Myers
                        John Wiley and Sons, 1979

 

        Well, the most important considerations in almost everything are issues of economics and human psychology, but we get your point, Glen.

 
The Unconscious Art of Software Testing
Phillip G. Armour
Communications of the ACM, 2005
 

We don't know what we're looking for                                       

In the Not Defect article, I asserted that the view of most "defects" in testing as being bad-things-that-don't-work is fundamentally flawed.  Testing is, more than any other activity in software development, focused on discovering what we do not know as opposed to manipulating and transcribing what we do know.

When we devise a test, it is always one of two types:

  •  A test of what we do know--exercising behavior we already understand, to confirm it works the way we expect

  •  A test of what we do not know--probing for behavior we have not anticipated

The first type is quite straightforward and relatively limited (though the sheer number of combinations of even what we do know is huge).  The number and combinations of things we do not know are both unlimited and (to some extent) incalculable.  It is the second type of test that is the essence of testing.

We can't test                                                           

Since we are trying to devise tests for something we don't yet know we are looking for, there is no explicit mechanism for testing.  All we have are heuristics:

  •  Boundary Value Analysis--it makes sense that bugs would congregate at the boundaries of classes, where inputs and outputs change from being in one class to being in another.  The reason is quite practical: this is where the predicate logic of the system operates.  But we still don't know what is actually wrong at these boundaries, or even if something is wrong--we just test in this area hoping to find something, anything, that doesn't look right.

  •  Equivalence Class Partitioning--coupled with BVA, this is the separation of inputs and outputs into classes whose members are dealt with (we think, or hope) equivalently.  But we don't know whether they are or not.  So we run combinations of tests across the classes hoping to find something, anything, that doesn't look right (a rough sketch of both heuristics follows below).
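
To make the two heuristics a little more concrete, here is a minimal sketch in Python.  It is only an illustration: the valid range of 1 to 100, the class layout, and the function names are assumptions invented for this example, not anything from the published article.

# Illustrative sketch only.  The valid range [1, 100] and these helpers are
# assumptions made up for this example.

def boundary_values(low, high):
    # Boundary Value Analysis: test just below, at, and just above each boundary.
    return sorted({low - 1, low, low + 1, high - 1, high, high + 1})

def equivalence_class_representatives(low, high):
    # Equivalence Class Partitioning: one representative per class we *believe*
    # the system treats uniformly (below range, in range, above range).
    return {
        "below_range": low - 10,        # presumed invalid class
        "in_range": (low + high) // 2,  # presumed valid class
        "above_range": high + 10,       # presumed invalid class
    }

low, high = 1, 100   # assumed valid input range
print("BVA candidates:", boundary_values(low, high))
print("ECP representatives:", equivalence_class_representatives(low, high))
# Neither list tells us what is wrong at these values, or whether anything is
# wrong at all--only where it seems worth looking.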

So it seems we just can't really test.  But, of course, we do test and sometimes test effectively.  So how do we do this?

Hare Brain, Tortoise Mind                                       

In his excellent book of this name [1], Guy Claxton describes two modalities of human cognition:

  •  Intentional conscious reasoning--the immediate, directed "hare brain"

  •  Below conscious thought processes--the background "tortoise mind"

I think much of the truly effective testing occurs at this gut feel, intuitive "tortoise mind" level.

Good testers have a "nose" for when a test will work or not.  To illustrate how this works, I included a (very) simple example in my article. 

Testing Three Variables                                       

Input    Test 1    Test 2a    Test 2b    Test 2c
  A         3          3          4          6
  B         4          5          5          4
  C         5          4          6          5

Test 1:   Initial test of three numeric values
Test 2a:  Tests ONLY the reordering of the numeric values; the actual values stay the same
Test 2b:  Tests ONLY the values; the relative ordering stays the same
Test 2c:  Tests BOTH the values and the reordering at the same time

After running Test 1, what is the next "best" test?  If we run Test 2a or Test 2b, we are focusing on only one thing: the relative ordering of the three inputs (2a) or their values (2b).  If we run Test 2c we test both, so we get two tests for the price of one.  What is the cost?  Well, if Test 2c goes "wrong" (throws an error), we may not know, without running another test, which of the two changes caused the error.
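
As a rough illustration (my own, not from the published article), the sketch below encodes the four tests from the table and reports, for each candidate second test, whether it changes the set of values, the relative ordering, or both, compared with Test 1.  The helper names are invented for this example.

# Illustrative sketch: encode the table above and report what each candidate
# second test changes relative to Test 1.

TESTS = {
    "Test 1":  {"A": 3, "B": 4, "C": 5},
    "Test 2a": {"A": 3, "B": 5, "C": 4},
    "Test 2b": {"A": 4, "B": 5, "C": 6},
    "Test 2c": {"A": 6, "B": 4, "C": 5},
}

def value_set(test):
    return sorted(test.values())

def ordering(test):
    # Relative ordering of the inputs, e.g. ('A', 'B', 'C') means A < B < C.
    return tuple(sorted(test, key=test.get))

baseline = TESTS["Test 1"]
for name in ("Test 2a", "Test 2b", "Test 2c"):
    candidate = TESTS[name]
    changes = []
    if value_set(candidate) != value_set(baseline):
        changes.append("values")
    if ordering(candidate) != ordering(baseline):
        changes.append("ordering")
    print(name, "changes:", ", ".join(changes))

# Prints:
#   Test 2a changes: ordering
#   Test 2b changes: values
#   Test 2c changes: values, ordering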

More Variables, Fewer Variables

So at each level of testing we are presented with a choice.  If we change more variables, we increase the likelihood of throwing an error, but we also increase the probability that the results will be ambiguous and will require additional tests to isolate the problem.  If we change fewer variables, each test is more specific to the operations against that variable (depending on what the error is, of course--the actual error might be something quite different), but we increase the number of tests we have to run and check.
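
To make that trade-off concrete, here is a small sketch of the "ambiguity cost."  It is my own illustration with made-up names and values: when a test that changed several inputs at once fails, isolating the cause typically means running follow-up tests that each change a single input from the last passing test.

# Illustrative sketch: if a test that changed several inputs fails, generate
# one follow-up test per changed input, each altering only that input from
# the last passing test.  All names and values are invented for illustration.

def isolation_tests(passing, failing):
    followups = []
    for var in failing:
        if failing[var] != passing[var]:
            candidate = dict(passing)       # start from the known-good test
            candidate[var] = failing[var]   # change just this one input
            followups.append(candidate)
    return followups

passing = {"A": 3, "B": 4, "C": 5}   # last test that ran clean
failing = {"A": 6, "B": 9, "C": 5}   # aggressive test that threw an error
for test in isolation_tests(passing, failing):
    print("follow-up test:", test)

# Two inputs changed, so two extra tests are needed just to isolate the cause--
# the price paid for packing more change into a single test.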

Our rational "hare brain" can construct good reasons to run 2a, 2b, or 2c.  But a good tester's "tortoise mind" knows which of these is best given the current state of testing, how the system has performed to date, and even such mundane things as how much test budget remains.

For a simple three-variable numeric input, it is not a problem, but if there were two or three hundred inputs, knowing how to select optimal tests and how and when to scale testing is a very important, though sometimes very subtle, skill. 
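
A back-of-the-envelope count (my own illustration, assuming five representative values per input) shows why: exhaustively combining even modest per-input choices is hopeless at that scale, so selecting which few tests to run is where the skill lies.

import math

k = 5                           # assumed representative values per input
for n in (3, 300):              # three inputs vs. three hundred inputs
    digits = n * math.log10(k)  # k**n combinations has about this many digits
    print(f"{n} inputs, {k} values each: {k}**{n} combinations (~10^{digits:.0f})")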

I have heard good testers insist "...I can't tell you why, but I'll bet this test drives out an error...".  This is indicative of the "below conscious" reasoning that Guy Claxton talks about in his book.  It is a very powerful reasoning mechanism and one we should not ignore.

[1] Hare Brain, Tortoise Mind.  Guy Claxton, Harper Perennial, 1999

 
