[prog] QA (was Re: State of software engineering profession)

jennyw jennyw at dangerousideas.com
Wed Apr 16 10:59:37 EST 2003


I've been a systems analyst, programmer, QA analyst and white box
tester, sysadmin, and a manager and director of IT and development
(including database design and SCM).  Of all the jobs I've had, I feel
that QA was the least understood.  Reading some of the things that
people have been writing on the topic, I thought maybe QA deserves a
thread of its own.

Here are a few common misconceptions that I've seen at places I've
worked:

* QA people aren't technical. 
* QA comes in after a program is written.
* Unit testing in development can replace QA.

QA isn't just "banging" on the product. QA often involves three 
distinct types of testers -- black box testers who don't know anything 
about the code, white or glass box testers who have access to the code, 
and automators who can fall into either category.  White box testers and 
automators are engineers in their own right, but I've noticed that a lot 
of development engineers look down on QA engineers as somehow inferior.
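
To make the distinction concrete, here's a rough Python sketch (the
pricing function and the discount rule are made up for illustration) of
how a black box test and a white box test of the same code differ:

    def price(quantity, unit_cost):
        """20% bulk discount at 10 or more units (hypothetical spec)."""
        total = quantity * unit_cost
        return total * 0.8 if quantity >= 10 else total

    # Black box: exercises only the documented behavior.
    def test_bulk_discount_black_box():
        assert price(10, 10.00) == 80.00

    # White box: the tester has read the code, knows the discount hinges
    # on a single >= comparison, and targets both sides of the boundary.
    def test_discount_boundary_white_box():
        assert price(9, 10.00) == 90.00
        assert price(10, 10.00) == 80.00

The tests look similar, but the second one only exists because someone
read the implementation and knew where it was most likely to break.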

This is terrible because testing is often as hard an engineering problem
as making the product in the first place.  For example, at one company I
worked at, we never did WAN testing; once customers complained, we set a
team on figuring out what the bottlenecks were.  Ironically, they came
to my department, IT, because the developers were at a loss and the QA
department was exclusively black box testers.  And WAN testing is
relatively simple compared with what QA often should be doing.  Developing
test harnesses to simulate customer usage is not easy, especially when
you're talking about many, many users over different types of network
connections on different platforms.
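
Just to give a flavor of what I mean by a harness, here's a minimal
Python sketch (the request is a stand-in function and the latency numbers
are invented; a real harness would drive the actual product over real or
emulated WAN links) that fans simulated users out across link profiles
and reports response times per profile:

    import random
    import statistics
    import time
    from concurrent.futures import ThreadPoolExecutor

    LINK_PROFILES = {        # extra delay per request, in seconds (invented numbers)
        "lan": 0.001,
        "dsl": 0.050,
        "dialup": 0.250,
    }

    def fake_request(link_delay):
        """Stand-in for a real call to the product; returns elapsed time."""
        start = time.perf_counter()
        time.sleep(link_delay + random.uniform(0.0, 0.01))
        return time.perf_counter() - start

    def simulate_user(profile, requests=20):
        delay = LINK_PROFILES[profile]
        return profile, [fake_request(delay) for _ in range(requests)]

    if __name__ == "__main__":
        users = [p for p in LINK_PROFILES for _ in range(10)]  # 10 users per link type
        with ThreadPoolExecutor(max_workers=len(users)) as pool:
            results = list(pool.map(simulate_user, users))
        for profile in LINK_PROFILES:
            times = [t for p, samples in results if p == profile for t in samples]
            print(f"{profile:8s} median {statistics.median(times) * 1000:6.1f} ms"
                  f"   worst {max(times) * 1000:6.1f} ms")

Even a toy like this raises the real questions -- how many users per link
type, what request mix, what counts as a pass -- and answering those is
where the engineering is.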

Which is not to say that black box testing shouldn't be technical. That
should be a technical group (and can include automation testers) too.
Their skills might be more along the lines of systems analysts than
programmers, but their work isn't easy, either.  Writing test cases that
cover the functionality of a product can be really hard.  I worked on a
project management application where one of the things we were testing
was the resource-leveling calculation engine.  Most of the development
team on that particular aspect had mathematics or operations research
backgrounds.  Writing test cases for changes to the optimization, and
verifying that the solutions were valid, that the formulae applied were
correct, and that the results of the calculations were correct, was not
an easy task.  Most of the testers on this were black box (although a
white box engineer was the lead).  And on top of that, there was
benchmarking, profiling, and code coverage (the last two being white box
things).
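
For what it's worth, the "verify the solutions are valid" part can be done
black box.  Here's a minimal Python sketch, with a made-up schedule format
rather than the real product's: the check knows nothing about how the
leveling engine works, it only confirms that the output respects a
constraint the spec promises (no resource booked past its daily capacity):

    def check_leveled_schedule(assignments, capacity):
        """assignments: (task, resource, day, hours) tuples; capacity: max hours/day."""
        load = {}
        for task, resource, day, hours in assignments:
            load[(resource, day)] = load.get((resource, day), 0) + hours
        overloads = [(r, d, h) for (r, d), h in load.items() if h > capacity]
        assert not overloads, f"resource over-allocated after leveling: {overloads}"

    def test_leveling_respects_capacity():
        # The kind of output the engine might hand back for two tasks
        # that share one resource and had to be spread across two days.
        leveled = [
            ("design", "alice", "2003-04-14", 6),
            ("review", "alice", "2003-04-14", 2),
            ("review", "alice", "2003-04-15", 4),
        ]
        check_leveled_schedule(leveled, capacity=8)

Checking that a solution is valid this way is much cheaper than re-deriving
the optimum, which is usually the point.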

Another misconception people have is that unit testing can replace QA. 
Unit testing helps and should be performed, but functionality testing is
generally outside the scope of developers (it doesn't have to be, but
often is).  In many cases, QA people are the ones in development who
know the product best, because they have to look at the whole of it.  
Also, testing is a full-time job.  You can split your time 50-50 between 
testing and development, but that's about what the testing alone will 
take.  In some cases, there's more testing to do than development.
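
The difference in scope is easy to show with a toy example (a tiny task
tracker invented for illustration, not any real product): the unit test
pins down one method in isolation, while the functional test walks a
user-visible workflow end to end, which is where QA spends most of its time.

    class TaskTracker:
        def __init__(self):
            self.tasks = {}
        def create_task(self, title, due):
            task_id = len(self.tasks) + 1
            self.tasks[task_id] = {"title": title, "due": due, "done": False}
            return task_id
        def complete_task(self, task_id):
            self.tasks[task_id]["done"] = True
        def open_count(self):
            return sum(1 for t in self.tasks.values() if not t["done"])

    # Unit test: one method, one behavior, no workflow.
    def test_create_task_assigns_ids():
        app = TaskTracker()
        assert app.create_task("a", due="2003-04-16") == 1
        assert app.create_task("b", due="2003-04-17") == 2

    # Functional test: the path a user actually takes through the product.
    def test_create_then_complete_leaves_nothing_open():
        app = TaskTracker()
        task_id = app.create_task("ship beta", due="2003-04-16")
        app.complete_task(task_id)
        assert app.open_count() == 0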

Another note: QA is often process-intensive. This doesn't mean that it
should be overloaded with bureaucracy, but that good QA often involves
many well-defined processes. The bug lifecycle involves discovery,
verification (reproduction), confirmation (e.g. that it's not a feature),
prioritization and scheduling, assignment, re-testing when the fix comes
back, inclusion in regression suites, and so on.  Since these steps often
involve multiple groups, it's important to have a process in place.  The
process also supports development metrics, which are often produced by QA 
(along with SCM, which is sometimes the same group). 
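
One way to keep that lifecycle well defined across groups is to treat it
as an explicit state machine.  A minimal Python sketch with made-up state
names (real bug trackers differ in the details):

    ALLOWED = {
        "new":        {"reproduced", "rejected"},   # discovery -> verification
        "reproduced": {"confirmed", "rejected"},    # confirmed it's not a feature?
        "confirmed":  {"scheduled"},                # prioritization
        "scheduled":  {"assigned"},
        "assigned":   {"fixed"},
        "fixed":      {"verified", "reopened"},     # QA re-tests the fix
        "verified":   {"closed"},                   # then into the regression suite
        "reopened":   {"assigned"},
    }

    def advance(state, new_state):
        if new_state not in ALLOWED.get(state, set()):
            raise ValueError(f"illegal transition {state} -> {new_state}")
        return new_state

    if __name__ == "__main__":
        state = "new"
        for step in ("reproduced", "confirmed", "scheduled",
                     "assigned", "fixed", "verified", "closed"):
            state = advance(state, step)
            print(state)

The point isn't the code; it's that every group agrees on which
transitions are legal and who owns each one.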

Jen

