My approach to agile testing...

I’ve talked about agile testing before, here, here and here. But, a recent thread on the Alt.Net Seattle Google Group got me thinking about it again. Here’s the response I sent to the thread:

Testing is a huge domain. If you’re familiar with Marick’s testing quadrant, you know that there are four basic areas that testing covers:

  • Business Facing tests in Support of Programming (Business requirements testing – Does the code do what it should?)
  • Business Facing tests to Critique the Product (Business defect testing – Does the code do something it shouldn’t? Are there missing requirements?)
  • Technology Facing tests in Support of Programming (Technical requirements testing – Does this method do what the developer intended?)
  • Technology Facing tests to Critique the Product (Technical defect testing – Are there leaks? Can it handle a load? Is it fast enough?)

Typically, testers focus on the business facing tests. And, people with specialized technical skills focus on the technology facing tests. (Developers on the support programming side; Performance testers on the critique product side.)

None of these tests can be run before the software is written. But, the tests in support of programming can be written before the code. And, metrics for perf/load/stress can be defined before the code is written. I recommend doing all of that (unless perf/load/stress isn’t important to you). Obviously, exploratory testing is something that has to wait until there is code to explore.

If I were designing an agile team from scratch, I would propose the following approach:

During planning:

  • Track requirements as user stories.
  • Document acceptance criteria with each story, including perf/load/stress criteria (on the back of the 3x5 card, in Rally or TFS, etc.)
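
To make that concrete, here’s the kind of thing that might live on the back of the card for a hypothetical catalog-search story (the story, criteria, and numbers are all invented for illustration, and they feed the sketches later in this post):

```text
Story: As a shopper, I can search the product catalog by keyword.

Acceptance criteria:
- Search returns only products whose names contain the keyword.
- A search with no matches returns an empty result, not an error.
- Perf: a keyword search across 100,000 products completes in under 0.5 seconds.
```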

During an iteration:

  • One pair works on one story at a time.
  • Acceptance tests are automated first, based on the story’s acceptance criteria (see the sketch after this list).
  • Code is written using TDD.
  • The story is not functionally complete until all acceptance tests are passing for the right reasons (no hard-coded answers left).
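
Here is a minimal sketch of the first two bullets, using Python and pytest as a stand-in for whatever stack the team actually uses, built around the invented catalog-search story above. In real test-first work, the tests exist and fail before the production code does; both are shown together here so the sketch runs:

```python
# A minimal sketch, not a prescription: the story, the search() API, and the
# data shapes are all invented for illustration. In practice the tests below
# are written first and fail until search() is grown, via TDD, to pass them.

def search(products, keyword):
    """Simplest implementation that makes the acceptance tests pass."""
    keyword = keyword.lower()
    return [p for p in products if keyword in p["name"].lower()]


# Acceptance tests, automated first from the card's acceptance criteria.
def test_search_returns_only_matching_products():
    products = [
        {"name": "Red Widget"},
        {"name": "Blue Gadget"},
        {"name": "Widget Pro"},
    ]
    results = search(products, "widget")
    assert [p["name"] for p in results] == ["Red Widget", "Widget Pro"]


def test_search_with_no_matches_returns_empty_list():
    assert search([{"name": "Blue Gadget"}], "widget") == []
```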

After a story is functionally complete:

  • The original pair leverages the existing acceptance tests in perf/load/stress tests (sketched below) to determine whether those criteria are met.
  • Tweak the code as necessary to meet the perf/load/stress acceptance criteria.
  • The story is not perf/load/stress complete until all perf/load/stress acceptance tests are passing.
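
Continuing the sketch, the same scenario can be reused as a crude perf check. A real team would probably reach for a dedicated load/perf tool here; this only illustrates turning the card’s (invented) half-second budget into a test that passes or fails:

```python
import time


def search(products, keyword):
    # Same toy implementation as the earlier sketch, repeated so this runs alone.
    keyword = keyword.lower()
    return [p for p in products if keyword in p["name"].lower()]


def test_search_meets_response_time_criterion():
    # The 100,000-product data set and the 0.5s budget come straight from the
    # invented acceptance criteria on the card, not from any real system.
    products = [{"name": f"Widget {i}"} for i in range(100_000)]
    start = time.perf_counter()
    results = search(products, "widget 99999")
    elapsed = time.perf_counter() - start
    assert results, "the scenario should find at least one product"
    assert elapsed < 0.5, f"search took {elapsed:.3f}s against a 0.5s budget"
```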

Exploratory testing should happen outside the constraints of a single story:

  • Limiting it to a single story would put blinders on the testers that could negatively impact the effort. But, it is important that it happen.
  • Perhaps the team sets aside time during the day or iteration for banging on the software.

Once all acceptance tests are passing:

  • Ship it!

Variations:

  1. Have the entire team bang out the acceptance tests at the beginning of the iteration.  I’ve seen this done. It works. But, quite often, tests get written for stories that end up getting cut from the iteration due to time constraints. That is excess inventory sitting on the production floor until those stories make it into another iteration. In other words, doing this encourages the accumulation of waste.
  2. If you’re concerned about a single pair working a story from beginning to end, mix it up. Give pairs one day to work on something, or four hours, or two, whatever works for you. Then switch things up, preferably by keeping one person on the story and bringing in a new partner. Then, the next time you switch, the person who has been on the story longer rotates off.
  3. Even though exploratory testing should not be constrained by a single story, it really is important to do it before shipping the software. Microsoft calls this a bug bash. They give away prizes for the most bugs, and the hardest to find bugs. But, they don’t do it until very late in their process. It would be most agile to do it continuously.

How do you do agile testing?
