Testing: Did You Do It Right?

Applies to: Agile

Authors: Greg Smith and Ahmed Sidky



This topic contains the following sections.

  • Unit Testing
  • Integration Testing
  • Functional Testing
  • Exploratory Testing
  • Test Automation
  • Key Points
  • Copyright

Cease dependence on inspection to achieve quality. Eliminate the need for inspection on a mass basis by building quality into the product in the first place.

—W. Edwards Deming

Greg’s career began in manufacturing, at a company that practiced lean manufacturing. Lean manufacturing focuses on eliminating waste and improving process flow when building a product. One of the people who influenced the creation of lean manufacturing was W. Edwards Deming, a noted statistician and manufacturing consultant. In his book Out of the Crisis, Deming outlines 14 points for management. One of those points appears at the beginning of this chapter and has resonated with Greg his entire career.

Building quality into the product sounds clichéd and has been overused by many marketing departments. But in an Agile environment, the concept is real and tangible. Consider the following.

In an Agile environment, you get the tester in the room before programming begins. The tester and the whole team try to break the product’s design before you start building. You do your best to understand the customer’s core needs before beginning production. When you do start production, you demonstrate frequently and try to prevent defects rather than merely manage them.

Related to this point, you try to mistake-proof the development process. Consider, for example, two digital cameras that Ahmed owns. Both cameras have rechargeable batteries. One camera lets Ahmed put the battery in two different ways; the only way he can find out if he put it in correctly is to try to turn on the camera. The other camera has a battery that is shaped so it will go in only one way—it’s impossible to insert it incorrectly. The manufacturer has made the process mistake-proof. In Agile, you try to mistake-proof the process by integrating code continuously and automating tests. Advanced Agile teams use Test-Driven Development (TDD) to mistake-proof the process: they write a failing test first and then write code until the test passes.
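To make the TDD rhythm concrete, here is a minimal sketch in Java with JUnit, one of the frameworks discussed later in this chapter. The ListingTitle class and its truncation rule are invented for illustration and aren’t from Acme’s product.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// The "red" step: write the test first, before ListingTitle exists.
// Initially the test fails (it won't even compile), and that is expected.
public class ListingTitleTest {
    @Test
    public void truncatesTitlesLongerThanTwentyCharacters() {
        assertEquals("This title is tru...",
                ListingTitle.display("This title is truncated because it is long"));
    }
}

// The "green" step: write just enough code to make the test pass,
// then refactor freely with the test as a safety net.
class ListingTitle {
    static String display(String raw) {
        return raw.length() <= 20 ? raw : raw.substring(0, 17) + "...";
    }
}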

To see how you can add agility to your testing process, let’s look at how Acme has modified its approach to quality.

Unit Testing

When Acme Media reviewed its existing processes, the company noted that the developers already did unit testing. They began doing unit testing after a history of passing nonworking code to the testers. As the development team learned more about Agile testing, they realized their unit testing process could be improved.

In Acme’s existing unit testing process, the developers reviewed the requirements and manually exercised the code to see if it passed. Passed meant the developer detected no code issues and that, from their perspective, the code functioned correctly.

This was a good improvement for Acme, because it reduced issues for the testers and increased their confidence in the code they received. The change also reduced code-integration issues and broken builds.

Note

More Resources

Numerous excellent resources can help you with unit testing. One of our favorites is The Art of Unit Testing by Roy Osherove. Roy focuses on three major principles: that a test should be maintainable, trustworthy, and readable.

But Acme still had some issues. When a tester encountered a bug, it still took time for the developer to find the issue. The developer would try to remember how they manually tested the code and then dig into the components to find the problem area.

In the last few weeks, Acme’s development team has learned more about unit testing and how some teams write code to test their functions, procedures, and classes. To Agile teams, unit tests are themselves code: test scripts that exercise the production code and report errors. If Acme follows this approach, it will gain additional benefits:

  • Developers can get quicker feedback on code issues, making it much easier to identify and repair the code.

  • The unit tests can be used during refactoring, to make sure improvements or changes to the code didn’t break the existing work.

  • The unit tests can be automated because they’re actual code.

  • The unit tests can be exercised by anyone on the team.

  • The unit tests can be passed with the functional code during system integration and make the integration test more robust.

  • This work will be foundational to using TDD, if the team chooses to do so in the future.

During Agile training, Acme learned that some teams create their own unit testing system but that most teams use open source testing frameworks such as JUnit and NUnit. Acme decides to use the pilot project as an opportunity to create true unit test scripts. The Auctionator is an enhancement to the existing classifieds functionality, which is built on .NET technology, so one of the developers, Matt Lee, downloaded NUnit during iteration 0 and started to get a feel for the tool.

Matt will find out if creating unit tests provides the same benefits to Acme that he learned about during training. He will write the tests after he creates the code, as opposed to writing them in advance. As Matt becomes more familiar with unit tests, he’ll increase his ability to use TDD in the future.
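To show what Matt’s unit tests might look like, here is a tiny sketch. Acme’s pilot uses NUnit on .NET; the JUnit version below is nearly identical in spirit, and the MinimumBid class with its bid-increment rule is hypothetical.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Hypothetical Auctionator logic: the minimum allowed next bid.
class MinimumBid {
    static final double INCREMENT = 0.50;

    static double next(double currentHighBid) {
        return currentHighBid + INCREMENT;
    }
}

public class MinimumBidTest {
    @Test
    public void nextBidIsCurrentHighBidPlusTheIncrement() {
        // Quick, repeatable feedback: a failure points directly at this rule.
        assertEquals(10.50, MinimumBid.next(10.00), 0.001);
    }
}

Because the test is plain code, anyone on the team can run it, it can execute automatically with every build, and it serves as a safety net during refactoring.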

Integration Testing

Acme Media is relatively happy with its existing integration process. All of the integration tests are automated, and the team revisits the tests after every project/release. Tests are added to exercise new functionality and removed for features that are no longer used. The team also puts a lot of thought into identifying which module interactions across the system need to be tested.

After Agile training, Acme identified a few weaknesses that it wanted to improve. In the past, the team performed an integration build based on the capacity of the QA team. At the start of a project, QA estimated how many features they could test at a given time and requested project builds based on that capacity. The issue with this process was that the team might go as long as a week before integrating their code and running an integration test. Several issues were usually uncovered during the build, and it took a while to trace them to their root causes.

Acme learned about continuous integration during Agile training and understands the value. Because the team is used to going 5 days before integrating, integrating daily will be a big cultural shift. They also learned that another issue was concealed by the fact that they built every 5 days: there was almost no automation of the build process. A build took from 4 to 6 hours and pulled a developer away from their work.

Acme decided on a twofold attack on the integration issue. First, they will go to more frequent builds, building every Tuesday and Thursday. Second, they will work on increasing the automation of the build process. The team knows some places where automation can be added, and they began pursuing those changes during iteration 0.

The last change relates to unit testing. Matt Lee plans to pass his unit tests along with his code during the builds. The QA team needs to make sure they have a process in place to exercise the test code that Matt sends across.
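One way for QA to exercise the test code that travels with the build is a small driver that runs the unit tests and fails the build when any test fails. This is an illustrative sketch, not Acme’s actual process; it reuses the hypothetical MinimumBidTest from the previous section.

import org.junit.runner.JUnitCore;
import org.junit.runner.Result;
import org.junit.runner.notification.Failure;

// Hypothetical build step: run the unit tests that ship with the code.
public class BuildTestRunner {
    public static void main(String[] args) {
        Result result = JUnitCore.runClasses(MinimumBidTest.class);
        for (Failure failure : result.getFailures()) {
            System.err.println(failure);
        }
        // A nonzero exit code tells the build tool the build is broken.
        System.exit(result.wasSuccessful() ? 0 : 1);
    }
}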

Functional Testing

Related to integration testing, historically Acme Media tested features in groups every 5 days. The QA team met with the development team and saw what features would be delivered in the forthcoming build. They then pulled the functional specifications and began documenting test cases and scripts. This approach had two issues.

First, features weren’t delivered by priority—they were delivered by how quickly they were completed. The first build for the project might be limited to low-priority features. This issue was resolved by Acme’s new iterative planning process: the first iteration now contains the most critical features, and subsequent iterations contain the next-highest-priority work.

The second issue was late involvement. In the past, the QA team was almost viewed as a group outside of development; the first time they saw the features was after the functional requirements were complete. In Acme’s new Agile model, the tester is in the room during the feature-card exercise and during feature modeling. The tester can bring up risks before the coding begins. In a way, this is early testing; you might even call it design testing. The tester can influence how the feature is created and so reduce the chance of bugs or issues with nonfunctional requirements, such as performance or uptime.

Acme’s QA team began creating test cases during iteration 0 and will continue to do so until the last iteration is complete. Test cases are created in the order the features will be delivered across the iterations, with test cases for the highest-priority features created first. See Figures 19.1 and 19.2.

Acme performs functional testing as soon as a build is integrated. The team runs a build verification test (BVT) against each integration build. If the code passes these rudimentary tests, it’s in good enough shape to begin functional testing.
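A BVT script can stay rudimentary: it only needs to prove that the integrated build is alive and healthy enough to hand to the functional testers. The JUnit sketch below is illustrative, and the test URL is a made-up placeholder.

import static org.junit.Assert.assertEquals;

import java.net.HttpURLConnection;
import java.net.URL;

import org.junit.Test;

public class BuildVerificationTest {
    @Test
    public void auctionHomePageResponds() throws Exception {
        // If the freshly integrated build can't even serve its home page,
        // there is no point in starting functional testing.
        URL url = new URL("http://test.acmemedia.example/auctions");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        assertEquals(200, conn.getResponseCode());
    }
}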

In a perfect world, functional testing for each iteration would be complete at the end of the iteration. In reality, a few features usually still need to be tested, and testing for those can be wrapped up during the adapt week between iterations.

Figure 19.1 The complexity and criticality of your application determine how detailed and formal your testing needs to be. In this instance, the team has gone formal, documenting the expected results in detail.


Figure 19.2 In this instance, the team didn’t need a formal test case, so they expressed it as a user scenario.


Exploratory Testing

As we’ve mentioned throughout the book, Acme Media has a unique practice: a company-wide bug stomp. In many circles, this is known as exploratory testing. Exploratory testing is different from functional testing. Functional testing tries to make sure the software does what it’s supposed to do. Exploratory testing tries to make sure the software doesn’t accidentally do things it isn’t supposed to do.

For example, a few years ago Ahmed worked on a team that created an application, tested it internally, and then invited users from outside the company to test it. The application was supposed to be functional without any online help or training for the user. In this instance, the user was creating a new event listing: for example, information about a concert, such as the artist, the location, and the date and time. For the test, users were told the application was for entering event information and to create a fake event record. No other instructions were given.

All the users began entering records, and nine out of ten had a consistent issue: they couldn’t enter the event date in a format the application would accept. Every user had to try to enter the date several times before the system accepted the record.

Ahmed’s team researched the issue and uncovered the root cause. The company standard for entering dates was mm/dd/yyyy. All the developers knew this, and so did all the testers. Internally, no one had to think when entering a date—they knew the correct format to use. But externally, real users didn’t know the standard, and they all had different habits for entering dates. Because the user group had no preconceived notions, they were able to expose this usability issue (see Figure 19.3).

Figure 19.3 A usability issue is resolved by providing a format for the user in the date field.

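The fix in Figure 19.3 mistake-proofs the screen by showing the expected format. A complementary, code-level approach is to accept the formats users actually type; the sketch below illustrates the idea, and the list of accepted formats is an assumption, not part of the original story.

import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class EventDateParser {
    // Illustrative list: the internal standard plus common variations.
    private static final String[] ACCEPTED = {
        "MM/dd/yyyy", "M/d/yyyy", "yyyy-MM-dd"
    };

    public static Date parse(String input) throws ParseException {
        for (String pattern : ACCEPTED) {
            SimpleDateFormat fmt = new SimpleDateFormat(pattern);
            fmt.setLenient(false); // reject impossible dates such as 13/45/2009
            try {
                return fmt.parse(input);
            } catch (ParseException ignored) {
                // fall through and try the next format
            }
        }
        throw new ParseException(
                "Unrecognized date; expected a format like mm/dd/yyyy", 0);
    }
}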

Usability testing is great for finding issues that may be blind spots for your team. If you’ve been living with a feature since idea conception—which is true in an Agile environment—you may not be able to objectively scrutinize the application.

We can see an example of this at Acme Media. Acme Media decided to send the Auctionator out for usability testing. Figure 19.4 indicates how organized the Auctionator site was, as viewed by actual end users. The Acme Media team had grown used to the site and knew how to navigate to all the features. End users still saw areas to improve, with 30 percent of the sample saying the site was at least somewhat disorganized.

Figure 19.4 Your team may be blind to how the system is viewed by users. Usability testing reveals what the end-user experience is like.


We highly suggest that you use some method of exploratory testing before releasing your product to the public.

Test Automation

You want to prevent defects if possible; and if you can’t prevent defects, you want to find them as early as possible to ensure that your code is always in a deliverable state. To support this objective, let’s look at test automation.

Test automation is a widely discussed subject. The main question is always, “Does the time I take to automate the tests provide return?” Many people find that automation is tedious to set up and that once it’s running, the tests need to be frequently changed, especially if the tests are separate from the code. You’ll need to scrutinize your specific situation, and your Agile coach can help you recognize where automation can help.

In our experience, we’ve followed a few basic tenets on automation.

First, automation is a great tool for regressing code. If you’re performing build verification tests with every build, it’s superb to automate tests for legacy code that is already in place, to make sure new code doesn’t break the old code. In a perfect world you could automate tests for features that are being built during a release, once they have successfully passed all unit tests.

Note

Automation to Save Money

Greg recently worked with a company that used offshore testing. The company outsourced testing to a vendor that provided two onshore test leads and up to eight offshore workers to support the actual testing. The onshore leads were in the room during feature-card creation and had direct interaction with the customer. The onshore leads also performed the build-verification tests.

The company encountered tough financial times, like many companies today, and decided to reduce its testing expense by eliminating one of the onshore leads. The development team knew how critical it was for the tester to be involved in feature creation, but they realized QA interaction would be compromised now that there was only one lead.

The team investigated automating the build-verification test to a point that it could be run by the offshore testers, thereby leaving the onshore lead free to work with the team during feature-card creation. After weeks of piloting and trying various tools, the onshore QA lead was able to automate 95 percent of the build tests, and the offshore team took over running them every night. QA still wasn’t able to attend every feature discussion, but the loss of one test lead was minimized by automating the build test and passing it to the offshore team.

Our main belief is that you should get a return on automation, and it may not make sense to automate every test. Acme Media has developed a practice of enhancing its automated build-verification scripts at the conclusion of every release. A sampling of representative scripts is added to the build test to make sure features in the new release didn’t break existing features that were certified in the previous release. The scripts that are automated meet the following criteria (a sketch of one way to tag such scripts follows the list):

  • The thread selected is a good test of the overall feature.

  • The test can be automated: the sequence is consistent and repeatable.

  • Automation doesn’t change the behavior of the software. Many times, automation tools can’t emulate true user interaction.
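For teams whose tests are written in JUnit, one way to mark the representative scripts is the Categories feature, which lets the build suite run only tagged tests. Everything in this sketch, including the class names and the marker interface, is hypothetical.

import org.junit.Test;
import org.junit.experimental.categories.Categories;
import org.junit.experimental.categories.Categories.IncludeCategory;
import org.junit.experimental.categories.Category;
import org.junit.runner.RunWith;
import org.junit.runners.Suite.SuiteClasses;

// Marker interface for scripts that meet the criteria above.
interface BuildCheck {}

// The automated build-verification run executes only the tagged tests.
@RunWith(Categories.class)
@IncludeCategory(BuildCheck.class)
@SuiteClasses({ BuildTestSuite.AuctionSearchTest.class })
public class BuildTestSuite {

    public static class AuctionSearchTest {
        @Category(BuildCheck.class) // a good thread through the overall feature
        @Test
        public void findsAListingByKeyword() { /* assertions omitted */ }

        @Test
        public void ordersResultsByClosingTime() { /* full suite only */ }
    }
}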

Acme Media is happy with its process for updating the build test at the end of each release, but during the team’s next pilot they will test the ability to continuously enhance the automated build test during the release. This will be another step toward agility and will also provide more time for the QA team to do exploratory testing. (See an example automation tool in Figure 19.5.)

Figure 19.5 Tools such as HP QuickTest can make test automation easier.


Our last point related to automation is that you need to have a consistent environment, consistent configuration, and consistent test data to support automated scripts. You should create a process that lets you reset your test environment to a known configuration before testing begins.
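As a sketch of that idea, a test fixture can restore a known baseline before every automated run. The TestEnvironment helper and the snapshot name are hypothetical stand-ins for whatever restore mechanism your environment provides, such as a SQL restore script, a VM snapshot, or a data-load job.

import org.junit.Before;
import org.junit.Test;

// Hypothetical helper wrapping your environment's restore mechanism.
class TestEnvironment {
    static void restoreBaseline(String snapshotName) {
        // e.g., run a database restore script keyed by snapshotName
    }
}

public class AuctionRegressionTest {
    @Before
    public void resetToKnownState() {
        // Same environment, configuration, and test data for every run.
        TestEnvironment.restoreBaseline("release-12-baseline");
    }

    @Test
    public void openAuctionsMatchTheBaselineData() {
        // assertions against the freshly restored data go here
    }
}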

Key Points

The key points from this chapter are as follows:

  • Do your best to create a process that minimizes the ability to create defects.

  • Testing should begin as early as possible to minimize the impact a defect can have downstream.

  • A defect is harder to find if it’s been in the code for several builds.

  • In a perfect world, you’d create unit tests before development starts. This is a great goal, but it’s a reach for teams just becoming Agile.

  • If you can’t create unit tests first, at least consider automating them after they’re complete. Automation will help with refactoring and regressing the code with each build.

  • You should have a goal to build every day, but move toward this goal in small steps. Give your team time to acclimate to a more frequent build process.

  • Just as you select which processes to use during a project, you must decide how much testing is needed. Some applications, such as medical software, require more stringent testing, whereas less-critical applications, such as checking out a book from a library, may not require as much testing.

  • Testing usually doesn’t conclude at the end of an iteration. Usually a few items are still open that must be resolved. You need to close these items or clearly document them before pursuing user acceptance of code.

  • Functional testing tells you whether the code supports the requirements. Exploratory testing tells you if the code accidentally supports a bad scenario or has other issues if not used as designed.


©2009 by Manning Publications Co. All rights reserved.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by means electronic, mechanical, photocopying, or otherwise, without prior written permission of the publisher.