Assessing Tester Performance

Using context-free software product measures as personal key performance indicators (KPIs) is about as silly as pet rocks!

Periodically a discussion of assessing tester performance surfaces on various discussion groups. Some people offer advice such as counting bugs (or some derivation thereof), the number of tests written in x amount of time, the number of tests executed, the percentage of automated tests compared to manual tests, and (one of my least favorite measures of individual performance) the percentage of code coverage.

The problem with all these measures is that they lack context and tend to ignore dependent variables. It is also highly likely that an astute tester can easily game the system and potentially cause detrimental problems. For example, if my manager measured my performance by the number of bugs found per week, I would ask how many I had to find per week to satisfy the 'expected' criteria. Then each week I would report 2 or 3 more bugs than the 'expected' or 'average' number (in order to 'exceed' expectations), and I would sit on any additional bugs I found that week, holding them in reserve in case I fell below my quota the following week. Of course, this means that bug reports are being artificially delayed, which may negatively impact the overall product schedule.
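To make the gaming strategy concrete, here is a minimal sketch of it in Python. The quota, padding, and weekly bug counts are all hypothetical numbers chosen for illustration; the point is that the total reported eventually matches the total found, but the reports are smoothed and delayed.

```python
# Hypothetical sketch of gaming a bugs-per-week quota: report slightly more
# than the expected number, bank any surplus, and draw from the bank in lean
# weeks. All names and numbers here are illustrative assumptions.

EXPECTED_PER_WEEK = 5   # the 'expected' quota (assumed)
PADDING = 2             # report this many above quota to 'exceed' expectations

def game_quota(bugs_found_per_week):
    """Return (bugs reported each week, bugs still unreported at the end)."""
    banked = 0
    reported = []
    for found in bugs_found_per_week:
        available = found + banked
        target = EXPECTED_PER_WEEK + PADDING
        to_report = min(available, target)
        banked = available - to_report   # surplus bugs sit unreported
        reported.append(to_report)
    return reported, banked

# A good week followed by lean weeks: real discoveries are front-loaded,
# but the reports are smoothed, delaying bugs the team needs to know about.
weekly_reports, still_banked = game_quota([12, 3, 4, 6])
print(weekly_reports, still_banked)   # -> [7, 7, 5, 6] 0
```

Every bug gets reported eventually, so the metric looks healthy week after week, yet in the first week five known bugs were deliberately withheld from the team.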

The issue at hand is the bizarre desire by some simple-minded people for an easy solution to a difficult problem. But there is no simple formula for measuring the performance of an individual. Individual performance assessments are often somewhat subjective and are influenced by external factors identified through Human Performance Technology (HPT) research, such as motivation, tools, inherent ability, processes, and even the physical environment.

A common problem I often see is unrealistic goals such as "Find the majority of bugs in my feature area." (How do we know what the majority is? What if the majority doesn't include the most important issues? etc.) Another problem I commonly see is individuals who over-promise and under-deliver relative to their capabilities. I also see managers who dictate the same set of performance goals to all individuals. While there may be a few common goals, as a manager I would want to tap into the potential strengths of each individual on my team. I also have different expectations and levels of contribution from individuals depending on where they are in their careers, and on their career aspirations.

So, as testers, we must learn to establish SMART goals with our managers that include:

  • goals that align with my manager's goals
  • goals that align with the immediate goals of the product team or company
  • stretch goals that illustrate continued growth and personal improvement relative to the team, group, or company goals

(This last one may be controversial; however, we shouldn't be surprised that individual performance is never constant relative to our peer group.)

But, fair or not, for a variety of reasons most software companies do (at least periodically) evaluate their employees' performance in some manner. The key to success lies in understanding HPT and agreeing on SMARTer goals upfront.