manual v. automated testing again

In my Future series I was accused of supporting both sides of the manual v. automated debate and flip-flopping like an American politician who can’t decide whether to kiss the babies or their moms. Clearly this is not an either-or proposition, but I wanted to supply some clarity on how I think about it.

This is a debate about when to choose one over the other and in which scenarios one can expect manual testing to outperform automated testing and vice versa. I think the simplistic view that automation is better at regression and API testing while manual is better for acceptance and GUI testing diverts us from the real issues.

I think the reality of the problem has nothing to do with APIs or GUIs, regression or functional. We have to start thinking about our code in terms of business logic code versus infrastructure code, because that is the same divide that separates manual and automated testing.

Business logic code is the code that produces the results that stakeholders/users buy the product for. It’s the code that gets the job done. Infrastructure code is the code that makes the business logic work in its intended environment. Infrastructure code makes the business logic multiuser, secure, localized and so forth. It’s the platform goo that makes the business logic into a real application.
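To make the divide concrete, here's a toy sketch (all function and field names are hypothetical, invented for illustration): one function is the rule stakeholders actually pay for, the other is the platform goo that turns it into part of a real application.

```python
def bulk_discount(quantity: int, unit_price: float) -> float:
    """Business logic: orders of 100+ units get 10% off.
    This is the rule the stakeholders buy the product for."""
    total = quantity * unit_price
    return total * 0.9 if quantity >= 100 else total

def handle_order_request(payload: dict) -> dict:
    """Infrastructure: input validation and error plumbing that make
    the rule above survive in its intended environment."""
    try:
        qty = int(payload["quantity"])
        price = float(payload["unit_price"])
    except (KeyError, ValueError, TypeError):
        return {"status": 400, "error": "bad request"}  # plumbing, not policy
    return {"status": 200, "total": bulk_discount(qty, price)}
```

A manual tester asks whether 10% off at 100 units is the *right* rule; the infrastructure wrapper mostly needs mechanical hammering with malformed payloads.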

Obviously, both types of code need to be tested. Intuitively, manual testing should be better at testing business logic because the business logic rules are easier for a human to learn than they are to teach to a piece of automation. I think intuition is bang-on correct in this situation.

Manual testers excel at becoming domain experts and they can store very complex business logic in the most powerful testing tool around, their brains. Because manual testing is slow, testers have the time to watch for and analyze subtle business logic errors. Low speed but also low drag.

Automation, on the other hand, excels at low-level details. Automation can detect crashes, hangs, incorrect return values, error codes, tripped exceptions, memory usage and so forth. High speed but also high drag. Tuning automation to test business logic is very difficult and risky. In my humble hindsight I think that Vista got bit by this exact issue: it depended so much on automation when a few more good manual testers would have been worth their weight in gold.
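A minimal sketch of that sweet spot (the function under test and the harness are both hypothetical): the harness mechanically sweeps inputs and flags crashes, wrong return types, and slow calls, but it has no idea whether the answers make business sense.

```python
import time

def sqrt_under_test(x):
    """Hypothetical function under test."""
    return x ** 0.5

def automated_sweep(fn, inputs, timeout=1.0):
    """Flag low-level failures: tripped exceptions, slow calls,
    and incorrect return types. Business-logic correctness of the
    values themselves is out of scope for this harness."""
    failures = []
    for x in inputs:
        start = time.monotonic()
        try:
            result = fn(x)
        except Exception as exc:               # crash / tripped exception
            failures.append((x, f"raised {type(exc).__name__}"))
            continue
        if time.monotonic() - start > timeout:  # hang-ish behavior
            failures.append((x, "too slow"))
        elif not isinstance(result, float):     # incorrect return type
            failures.append((x, "bad return type"))
    return failures
```

Run against `[0, 1, 4, -1, "oops"]`, the sweep catches `-1` (Python's `(-1) ** 0.5` yields a complex number, not a float) and `"oops"` (raises `TypeError`), while never noticing if the square roots themselves were subtly wrong.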

So whether you have an API or a GUI, regress or test fresh, the type of testing you choose depends on what type of bug you want to find. There may be special cases, but the majority of the time manual testing beats automated testing in finding business logic bugs and automated testing beats manual testing in finding infrastructure bugs.

There I go again, fishing in both sides of the pond.