My approach to agile testing...

I’ve talked about agile testing before, here, here and here. But, a recent thread on the Alt.Net Seattle Google Group got me thinking about it again. Here’s the response I sent to the thread:

Testing is a huge domain. If you’re familiar with Marick’s testing quadrant, you know that there are four basic areas that testing covers:

  • Business Facing tests in Support of Programming (Business Requirements testing – Does the code do what it should?)
  • Business Facing tests to Critique the Product (Business Defect testing – Does the code do something it shouldn’t? Are there missing requirements?)
  • Technology Facing tests in Support of Programming (Technical Requirement testing – Does this method do what the developer intended?)
  • Technology Facing tests to Critique the Product (Technical defect testing – Are there leaks? Can it handle a load? Is it fast enough?)

Typically, testers focus on the business facing tests. And, people with specialized technical skills focus on the technology facing tests. (Developers on the support programming side; Performance testers on the critique product side.)

None of these tests can be run before the software is written. But, the tests in support of programming can be written before the code. And, metrics for perf/load/stress can be defined before the code is written. I recommend doing all of that (unless perf/load/stress isn’t important to you). Obviously, exploratory testing is something that has to wait for the code to be written.

If I were designing an agile team from scratch, I would propose the following approach:

During planning:

  • Track requirements as user stories.
  • Document acceptance criteria with each story, including perf/load/stress criteria (on the back of the 3x5 card, in Rally or TFS, etc.)

During an iteration:

  • One pair works on one story at a time.
  • Acceptance tests are automated first, based on acceptance criteria (see the sketch after this list).
  • Code is written using TDD.
  • Story is not functionally complete until all acceptance tests are passing (for the right reasons – no hard-coded answers left).
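
To make that concrete, here’s a rough sketch of what an automated acceptance test might look like. I’ve written it in Java with JUnit, but any xUnit tool would do; the story, the Account class, and its API are invented for illustration (the overdraft rule echoes the example in my Marick matrix posts):

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // Hypothetical domain class, just enough to make the sketch runnable.
    class Account {
        private double balance;
        private double loan;

        Account(double openingBalance) { this.balance = openingBalance; }

        void withdraw(double amount) {
            if (amount > balance) {   // overdraft: extend a loan for the difference
                loan += amount - balance;
                balance = 0;
            } else {
                balance -= amount;
            }
        }

        double balance()     { return balance; }
        double loanBalance() { return loan; }
    }

    // Acceptance criterion from the back of the card: "If you withdraw more
    // money than you have in your account, the system should automatically
    // extend you a loan for the difference."
    public class OverdraftAcceptanceTest {

        @Test
        public void withdrawingMoreThanBalanceExtendsALoanForTheDifference() {
            Account account = new Account(100.00);

            account.withdraw(150.00);

            assertEquals(0.00, account.balance(), 0.001);
            assertEquals(50.00, account.loanBalance(), 0.001);
        }
    }

The test is written (and failing) before the pair starts on the code; the story is functionally complete when it passes for the right reasons.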

After story is functionally complete:

  • Original pair leverages the existing acceptance tests in perf/load/stress tests to determine whether those criteria are met (see the sketch after this list).
  • Tweak code as necessary to meet the perf/load/stress acceptance criteria.
  • Story is not perf/load/stress complete until all perf/load/stress acceptance tests are passing.
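
Here’s a minimal sketch of what leveraging those acceptance tests might look like: a throwaway harness that runs the same scenario under concurrent load and checks the result against the perf criterion on the card. The 1,000-user and 10-second numbers are illustrative, Account is the hypothetical class from the earlier sketch, and a real team would likely reach for a dedicated tool instead:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class OverdraftLoadTest {

        // The same steps the acceptance test exercises, factored out so the
        // functional test and this load test can share them.
        static void runScenario() {
            Account account = new Account(100.00);
            account.withdraw(150.00);
        }

        public static void main(String[] args) throws Exception {
            int users = 1000;  // illustrative load criterion from the card
            ExecutorService pool = Executors.newFixedThreadPool(100);
            List<Future<Long>> timings = new ArrayList<>();

            for (int i = 0; i < users; i++) {
                timings.add(pool.submit(() -> {
                    long start = System.nanoTime();
                    runScenario();
                    return System.nanoTime() - start;
                }));
            }

            long totalNanos = 0;
            for (Future<Long> timing : timings) {
                totalNanos += timing.get();
            }
            pool.shutdown();

            double avgMillis = (totalNanos / (double) users) / 1_000_000.0;
            System.out.printf("average scenario time: %.2f ms%n", avgMillis);

            // Illustrative perf criterion: average response under 10 seconds.
            if (avgMillis > 10_000.0) {
                throw new AssertionError("perf/load/stress criterion not met");
            }
        }
    }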

Exploratory testing should happen outside the constraints of a single story:

  • Limiting it to a single story would put blinders on the testers, which could negatively impact the effort. But, it is important that it happen.
  • Perhaps the team sets aside time during the day or iteration for banging on the software.

Once all acceptance tests are passing:

  • Ship it!

Variations:

  1. Have the entire team bang out the acceptance tests at the beginning of the iteration.  I’ve seen this done. It works. But, quite often, tests get written for stories that end up getting cut from the iteration due to time constraints. That is excess inventory sitting on the production floor until those stories make it into another iteration. In other words, doing this encourages the accumulation of waste.
  2. If you’re concerned about a single pair working a story from beginning to end, mix it up. Give pairs one day to work on something, or 4 hours, or two, whatever works for you. Then switch things up – preferably by keeping one person on the story and bringing in a new partner. Then, the next time you switch, the older partner leaves.
  3. Even though exploratory testing should not be constrained by a single story, it really is important to do it before shipping the software. Microsoft calls this a bug bash. They give away prizes for the most bugs, and the hardest to find bugs. But, they don’t do it until very late in their process. It would be most agile to do it continuously.

How do you do agile testing?

The Matrix Reloaded

I wrote this post quite a long time ago – right on the heels of my original test matrix posts. Why I never posted it is beyond me. I’m posting it now to get it out of my “drafts.”

---

A few posts back, I discussed The Marick Test Matrix and my minor modifications to the matrix. In those posts, I described how to classify different types of testing into the four quadrants of the matrix. It turns out that you can also use the same matrix to classify testing tools, like this:

[Image: the test matrix with testing tools placed in each of the four quadrants]

Let’s look at each quadrant in more detail, starting on the right-hand side:

Business/Defects

This quadrant represents those types of tests that identify defects in business (or non-technical) terms. In other words, you don’t need to be a programmer to figure out that there is a defect.

Typically, these tests are not automated. So, there are no automation tools to discuss here.

Technology/Defects

This quadrant represents those types of tests that identify defects in technical terms. In other words, you probably need to be a programmer to figure out that there is a defect. Therefore, one would expect the tools in this quadrant to be highly technical and to require specialized skills. In fact, there are people who specialize in this work. They are typically called testers; but, their knowledge of programming is often greater than that of the average developer.

The dominant tool in this space is Mercury LoadRunner. Microsoft also has tools in this space, including the Visual Studio Team Test Load Agent and the Microsoft Web Application Stress tool (MS WAS).

Business/Requirements

This quadrant represents those types of tests that define requirements in business terms. As such, you would expect the tools in this category to be (relatively) non-technical. Business Analysts and end users should be able to use these tools to create automated tests without knowledge of computer programming. In fact, these tests should be written by those team members with the most business expertise.

FIT, FitNesse, STiQ, WebTest and Selenium are all examples of tools that allow tests to be expressed in business terms. All of these tools are well suited to use by Business Analysts.
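
As a rough illustration of the style, here’s what a FIT-style test might look like. A business analyst writes a table in a wiki page or spreadsheet, and a developer writes a small fixture class that glues the table to the code. This sketch assumes the classic fit.ColumnFixture API; the table, the fixture, and the business rule are all made up:

    import fit.ColumnFixture;

    // Backs a business-readable table such as:
    //
    //   | WithdrawalRules |
    //   | account balance | withdrawal amount | loan extended() |
    //   | 100.00          | 150.00            | true            |
    //   | 100.00          | 50.00             | false           |
    //
    public class WithdrawalRules extends ColumnFixture {
        public double accountBalance;   // bound to the "account balance" column
        public double withdrawalAmount; // bound to the "withdrawal amount" column

        // FIT calls this for the "loan extended()" column and compares the
        // result to the value in each row.
        public boolean loanExtended() {
            return withdrawalAmount > accountBalance;
        }
    }

The analyst owns the table; the fixture is the only part that requires a programmer.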

Technology/Requirements

The testing that takes place in this quadrant defines requirements in technical terms. Therefore, you would expect to see lots of low-level, code-based tools here. These tools are generally used by computer programmers (e.g. developers and testers).

JUnit and NUnit are the big dogs in this space. Other tools include MSTest, WatiN, MBUnit, xUnit, RSpec (for Ruby), and NUnitASP.

My Marick Test Matrix

I introduced the Marick Test Matrix in the previous post. As much as I like it and have come to rely on it, one thing has always bugged me about the matrix: the terms Brian used to describe his horizontal axis seem, IMHO, obtuse. Instead, I prefer these simpler terms:

  • Support Programming = Define Requirements
  • Critique Product = Identify Defects

Given those changes, here’s my updated matrix:

I prefer these axes because now I can refer to each quadrant by a simple name:

  • Business/Requirements
  • Business/Defects
  • Technology/Requirements
  • Technology/Defects

The Marick Test Matrix

One of the leading voices in agile testing is a guy named Brian Marick. Brian is an independent consultant and an author. I find his blog to be an invaluable resource.

The Marick Test Matrix

Back in 2003, Brian published an influential series of articles on agile testing. He was attempting to point the way forward for agile testers. But, in the process, he came up with an elegant method of cataloguing testing methods that has become known as the “Marick Test Matrix.”

I’d like to introduce the matrix here in the hopes of fostering a discussion about what we test and how we test it.

Brian’s work categorized tests by asking two questions:

  1. Is the test business facing or technology facing?
  2. Does the test support programming or critique a product?

When you combine the two questions (or axes), you get a grid (or matrix), like this one:

But, what the heck do these things mean? That’s what the remainder of this post is about…

Business Facing Tests

A business facing test is one that is expressed in terms that are well understood by a business expert. For example:

  • If you withdraw more money than you have in your account, the system should automatically extend you a loan for the difference. (Notice that words like withdraw, account, and loan are business terms.)

Business facing tests are best authored by people who understand the business (e.g. product owner, business analyst, etc.).

Technology Facing Tests

A technology facing test is one that is expressed in terms that are well understood by a technology expert. For example:

  • Different browsers implement JavaScript differently, so we test whether our product works with the most important ones. (Notice that words like browsers and JavaScript are technology terms.)

Technology facing tests are best authored by people who understand the technology (e.g. developer, tester, etc.).

Tests that Support Programming

A test that supports programming is one that defines what the software should do. For example:

  • Clicking on the Account Details link should take the user to the Account Details screen.
  • Calling the Add method with 2 and 2 should return 4.

These tests may be written before the software exists. They are often automated and executed after a change is made to the software, to ensure that it still works as it should (i.e. regression). Once one of these tests passes, it should never be allowed to fail again.
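
The second example maps almost directly onto code. Here’s a sketch of it as a JUnit test; the Calculator class is invented for illustration:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // Hypothetical class under test.
    class Calculator {
        int add(int a, int b) { return a + b; }
    }

    public class CalculatorTest {

        // "Calling the Add method with 2 and 2 should return 4."
        @Test
        public void addingTwoAndTwoReturnsFour() {
            assertEquals(4, new Calculator().add(2, 2));
        }
    }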

Tests that Critique a Product

A test that critiques a product is one that tries to identify problems in completed software. In other words, this is the class of tests where the tester is actively trying to break the software in order to find bugs. For example:

  • When I logged on as Joe, I saw Tom’s data.
  • When I clicked the blue button after clicking the red button 400 times, the system threw an error.
  • When I configured the load testing tool to send 1,000 simultaneous users to the site, average response times increased to over 10 seconds.

In general, these tests are not automated – until a problem is identified, at which point a test that clearly reproduces the problem can be added to the tests that support programming.
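
For example, the first defect above could be pinned down with something like the following JUnit sketch, so it can never silently return. The CustomerRecord and UserSession classes are invented stand-ins for the real application’s API:

    import static org.junit.Assert.assertFalse;
    import org.junit.Test;
    import java.util.Collections;
    import java.util.List;

    // Invented stand-ins for the real application's API.
    class CustomerRecord {
        final String owner;
        CustomerRecord(String owner) { this.owner = owner; }
    }

    class UserSession {
        private final String user;
        private UserSession(String user) { this.user = user; }

        static UserSession logOnAs(String user) { return new UserSession(user); }

        // After the fix, a session returns only the current user's records.
        List<CustomerRecord> visibleRecords() {
            return Collections.singletonList(new CustomerRecord(user));
        }
    }

    // Pins down the defect "When I logged on as Joe, I saw Tom's data" so it
    // joins the tests that support programming and is never allowed to fail again.
    public class DataIsolationRegressionTest {

        @Test
        public void joeCannotSeeTomsData() {
            UserSession joe = UserSession.logOnAs("joe");
            assertFalse("Joe saw Tom's data",
                    joe.visibleRecords().stream()
                       .anyMatch(record -> record.owner.equals("tom")));
        }
    }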

So what?

Let’s take a look at where some standard types of testing might go on the matrix:

Unit Tests

Unit tests are used by developers to ensure that the code they are writing does what they expect it to. In essence, these tests form a specification for a single unit of code. By that definition, unit tests are tests that “support programming.” These tests are (or should be) very close to the code under test, which makes them “technology facing.” So, unit tests belong in the lower left quadrant of the matrix. The benefit of automating these tests is very high.

Functional Tests

Functional tests are used by development teams to ensure that the software they are writing does what they expect it to do. In essence, these tests form a specification for an entire system. That means that these are tests that “support programming.” But, functional tests are (or should be) written in a way that business users understand, making them “business facing.” So, functional tests belong in the upper left quadrant of the matrix. The benefit of automating these tests is high.

Exploratory Tests

Exploratory testing is the practice of trying to identify problems in an application. (Microsoft refers to this practice as a “Bug Bash,” where many people are invited to use the software and prizes are given out to the person who identifies the most/worst bugs.) By definition, this makes these tests that “critique a product.” Exploratory tests are considered “business facing” because the testers use the software the same way a real end-user might. So, exploratory tests belong in the upper right quadrant of the matrix. There is no benefit to automating exploratory tests – until a defect is identified. At that point, a new functional test can be added to ensure that the defect is resolved.

Performance Tests

Performance tests are used to determine the speed of an application within a specific set of parameters. Specialized tools are used to perform this testing. As such, these tests require a good deal of technical knowledge and are therefore “technology facing.” Performance tests require working software, and are therefore tests that “critique a product.” So, performance tests belong in the lower right quadrant of the matrix.

Putting it all together

My blog editor now tells me that I’m approaching 1000 words. So, to prove the axiom, here’s the best summary I can think of:

That’s enough for one day. In a subsequent post, I’ll dive into why it’s important to cover all four quadrants.

For more information, check out Brian Marick’s original Agile Testing posts, or Google Marick Test Matrix.