So, what do I mean by "Agile?" (Part 3)

Release and Iteration Duration and Structure

Release and Iteration lengths are highly dependent upon local circumstances. But, given the choice, I prefer to do Quarterly Releases and Weekly Iterations.

Quarterly Releases

Quarterly Releases may seem like a long time between releases. But, many businesses are incapable of absorbing change much faster than that. So, unless you’re at a web start-up or in a highly regulated industry, I say try to match your release schedule to an existing organizational heartbeat, like quarterly financial reporting. (Though, perhaps you don’t want to try to release new code the same day, week, or month you release financials.)

As for the structure of a release, here’s what I like to see:

  • Twelve development iterations with one day of slack each
  • One review, deployment and planning iteration

Alternatively, I could also support a structure that looks more like this:

  • Three development iterations
  • One slack iteration
  • Three development iterations
  • One slack iteration
  • Three development iterations
  • One slack iteration
  • One review, deployment and planning iteration

A development iteration is a normal iteration: the customer picks stories for the team to build, as described above. A slack day or iteration is just like a development day or iteration, except that the Team decides what work to take on, from exploratory testing, to refactoring items on the Technical Debt Backlog, to spiking new technologies. No new stories are implemented during slack time, though new tests may be added to shore up code coverage. And, the review, deployment and planning iteration is exactly that: review the release, deploy it, and plan the next release.
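
To make that quarterly structure concrete, here’s a minimal sketch in Python (the IterationKind and quarterly_release_plan names are mine, not from any tool or book) that lays the second structure out as thirteen week-long iterations:

    from enum import Enum

    class IterationKind(Enum):
        DEVELOPMENT = "development"
        SLACK = "slack"
        REVIEW = "review, deployment and planning"

    def quarterly_release_plan():
        """Three blocks of (three development iterations + one slack iteration),
        followed by one review, deployment and planning iteration."""
        plan = []
        for _ in range(3):
            plan.extend([IterationKind.DEVELOPMENT] * 3)
            plan.append(IterationKind.SLACK)
        plan.append(IterationKind.REVIEW)
        return plan

    for week, kind in enumerate(quarterly_release_plan(), start=1):
        print(f"Week {week:2d}: {kind.value}")

Either structure adds up to the same thirteen weeks; the difference is whether the slack is sprinkled in a day at a time or batched into whole iterations.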

Weekly Iterations

Short iterations allow for rapid customer feedback. Longer iterations allow for more work to be accomplished. In my experience, Weekly Iterations provide the best balance between getting stuff done and running off into the weeds. Plus, they have the side benefit of guaranteeing you your weekends off!

I like to see the following structure to iterations:

  • Planning happens first thing in the morning on the first day of the week
  • Review happens on the afternoon of the last day of the week
  • The rest of the time is devoted to implementing user stories, exploratory testing, etc.
  • Except for weekends, which are reserved for personal use

So, what do I mean by "Agile?" (Part 2)

As I hinted in my previous post, planning is broken down into three levels:

  • Product Planning
  • Release Planning
  • Iteration Planning

As I am not a product planner, I will not try to discuss how that process happens. Rather, I’ll focus on the active development process through Release and Iteration Planning. First off, I like to see some symmetry between releases and iterations; the only difference is the scale at which the two types of planning occur. Specifically, I like both releases and iterations to include:

  • Planning,
  • Review, and
  • Retrospective

Here’s what I mean by each…

Planning

Each release and iteration should begin with a planning session designed to select some number of user stories from the parent backlog for inclusion in the current cycle of the process. For example, during Release Planning, user stories are selected from the Product Backlog for implementation in the next Release.
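
As a rough illustration of that selection step, here’s a sketch in Python (the Story and plan_cycle names, and the sample backlog, are hypothetical) that pulls stories off a value-ordered backlog until the cycle’s capacity is used up:

    from dataclasses import dataclass

    @dataclass
    class Story:
        title: str
        business_value: int  # used by the Product Owner to order the backlog
        estimate: int        # story points, or a T-shirt size mapped to a number

    def plan_cycle(backlog, capacity):
        """Select stories from the parent backlog, highest business value first,
        until the estimated work reaches the cycle's capacity (e.g. team velocity)."""
        selected, remaining = [], capacity
        for story in sorted(backlog, key=lambda s: s.business_value, reverse=True):
            if story.estimate <= remaining:
                selected.append(story)
                remaining -= story.estimate
        return selected

    backlog = [Story("Login", 8, 5), Story("Reporting", 5, 8), Story("Search", 9, 3)]
    print([s.title for s in plan_cycle(backlog, capacity=10)])  # -> ['Search', 'Login']

In practice, of course, the Product Owner makes the call story by story in the planning meeting; the point of the sketch is just that the same mechanism applies at both the release and iteration scale, with only the backlog and capacity changing.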

Review

Each release and iteration should end with a review meeting wherein the Product Owner (and other stakeholders) is shown the running acceptance tests and asked to approve or reject them. If an item is approved, it becomes part of the next release. If it is rejected, it is placed into the backlog for the next iteration (or, potentially, the next release). These meetings are also often the source of additional user stories for the backlog. For example, the Product Owner may accept a user story as it is, but want something about it to change in the next iteration before accepting the feature at the release review.

Retrospective

At the end of each iteration and release, after the review meeting, the whole team should get together to discuss process improvement. Some teams do this without the Product Owner. I recommend including the Product Owner, as their perspective is valuable and it serves to reinforce the concept of the “Whole Team.”

Next up: Release and Iteration Duration and Structure

So, what do I mean by "Agile?" (Part 1)

Since you’re reading my blog, I’ll assume that you’re either a relative, or you’re already somewhat knowledgeable about agile software development. So, let’s jump right in... (Sorry, Aunt Gina.)

Given my druthers, I prefer to mix and match terminology and practices from Scrum and Extreme Programming. Here’s what I like to do:

Roles & Responsibilities

The Product Owner is responsible for:

  • Defining and/or collecting the set of features to be implemented in the product
  • Maintaining a Product Backlog of User Stories ordered by Business Value
  • Presenting user stories to the Team during Release Planning
  • Selecting user stories to place on the Release Backlog
  • Presenting user stories to the team in more detail during Iteration Planning
  • Selecting user stories to place on the Iteration Backlog
  • Accepting or rejecting user story implementations during Iteration Review
  • Presenting implemented user stories to other stakeholders during Release Review
  • Determining whether or not to deploy software during Release Review
  • Answering questions from the Team throughout the process

The cross-functional Development Team (or Team) is responsible for:

  • Estimating stories with T-shirt Sizes during Release Planning
  • Estimating stories with Story Points during Iteration Planning
  • Splitting large (epic) stories into smaller, more manageable stories 
  • Calculating their team velocity using Yesterday’s Weather (see the sketch after this list)
  • Incremental Design and Test-First Programming of user stories
  • Maintaining a backlog of Technical Debt
  • Maintaining code and test quality with Refactoring
  • Presenting implemented user stories to Product Owner during Iteration Review
  • Assisting Product Owner with presentation of implemented user stories at Release Review
  • Asking questions of the Product Owner throughout the process
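
Here’s the Yesterday’s Weather sketch I mentioned above; it’s Python, and the function name and sample numbers are mine. The idea is simply to plan the next iteration using the points the team actually completed recently:

    def yesterdays_weather(completed_points, window=1):
        """Forecast the next iteration's capacity from recent history.
        With window=1 this is classic Yesterday's Weather: assume you'll finish
        about as much as you finished last time. A larger window averages the
        last few iterations to smooth out spikes."""
        if not completed_points:
            return 0  # no history yet; the first iteration is a guess
        recent = completed_points[-window:]
        return sum(recent) // len(recent)

    history = [18, 22, 20]                       # points completed in past iterations
    print(yesterdays_weather(history))           # -> 20 (last iteration only)
    print(yesterdays_weather(history, window=3)) # -> 20 (average of the last three)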

The Coach (or Scrum Master) is responsible for:

  • Identifying and resolving roadblocks hindering the Team, including process, technical and managerial issues, and more.

The Whole Team, which includes the Product Owner, Development Team and Coach, is responsible for:

  • Conducting Release and Iteration Planning meetings
  • Conducting Iteration and Release Review meetings
  • Conducting Iteration and Release Retrospectives

Next up: Release & Iteration Duration and Structure

Zero Defects™ – Part 2

As I suspected, my notion of Zero Defects was mildly controversial. I’ll try to clear up some of the questions in this post. But, rather than making another cohesive argument, as I tried to do the first time, I’ll respond to some of the responses I received on the first post. (NOTE: Comments did not come with the posts when I manually imported them to Posterous.)

  • Defects vs. stories is just a semantic layer; they both boil down to work that needs to be done.

Semantics are important. Yes, they’re both just work items. But, a “defect” carries with it a negative connotation, whereas a “user story” does not.

I believe strongly that using the words “defect” or “bug” sets the inappropriate expectation with business partners that all defects/bugs can and will be removed from the software prior to promotion to production. And, I’ve seen firsthand how using the term “defect” led to teams killing themselves trying to fix “bugs” that were never called out in test cases by the end of a Sprint. This is a recipe for failure and burnout.

I much prefer to set the expectation that anything that wasn’t explicitly called out by a user story and/or test case in the current Sprint is new work that needs to be prioritized with all other work. The best way to do this is to label that work the same way all other new work is labeled – as a new user story.

  • Defects tell me the QAs are doing their job.

To me, the statement above is akin to saying “Lines of Code tell me my programmers are doing their job.” Defect counts are meaningless. Point me at any modern website or program and I can probably identify 100 defects in an hour. Most of them will be garbage. But, if I file defects on them, someone (or, more likely, some team) will have to triage those things, beat down the stupid ones, and prioritize the rest – probably outside their normal work item prioritization process.

  • My experience is our business partners are not concerned with defects, unless they are not caught and are promoted into production.

My experience differs. I’ve worked with multiple POs, both at my current employer and elsewhere, who tracked defect counts and berated the team over them. In fact, my experience is that this is more common than not.

  • The only business partner of ours looking at Rally is our PO.

Yeah, that seems to be common here. But, there’s nothing preventing you from calling a meeting at the end of each Sprint where you demo the new functionality to a broader community of stakeholders. In fact, Scrum recommends this practice. It’s called a Sprint Review Meeting.

  • The requirements in agile are much less defined than in waterfall.

I disagree 100%. There should be zero theoretical difference in the quality of requirements between agile and waterfall. Agile just prefers to discover and document requirements iteratively, one at a time; whereas waterfall tries to define all requirements up front. In practice, this means that agile methods discover and document requirements closer to the time of implementation. They also allow for new requirements to be discovered by looking at how the software actually works. Waterfall doesn’t provide for that iterative feedback, meaning that the requirements get stale.

  • Defects provide us with a good method to track these issues to resolution. If the issue is new scope... then a user story is a much better alternative.

The problem I see in tracking defects separate from user stories is prioritization. How does the developer know what to work on next? Sure, Rally (and other tools) allow you to see all work in one bucket. And, that helps. But, in my experience, teams end up with silly rules like “defects found during the current sprint will be resolved in the current sprint.” That seems like an innocent enough rule, but what happens when you find more bugs than you can fix?

  • My interpretation of reading Alan's blog was that those items that are actual defects (do not meet defined requirements)... should be a new user story.

Ah, therein lies the misperception. In my original post, I meant to state that every requirement MUST be expressed as a test case on a user story, and that all test cases MUST be passing before I show the work to my business partner(s). Therefore, by definition, the code has met all of the “defined requirements” and can only be deficient if there remain undefined requirements, which should be captured as new test cases on a new user story.
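
To make that concrete, here’s a hedged sketch of what “requirements expressed as test cases on a user story” can look like (Python, pytest-style; the story, its requirements, and the reset_password stand-in are all invented for illustration):

    # Hypothetical acceptance tests for user story "US-42: Reset forgotten password".
    # Every requirement the Product Owner cares about is written down as a test;
    # anything not covered here is, by definition, a new requirement (a new story),
    # not a "defect".

    def reset_password(email):
        """Stand-in for the real implementation under test."""
        if "@" not in email:
            raise ValueError("invalid email")
        return {"email_sent_to": email, "token_expires_in_hours": 24}

    def test_reset_sends_email_to_requesting_user():
        result = reset_password("gina@example.com")
        assert result["email_sent_to"] == "gina@example.com"

    def test_reset_token_expires_within_a_day():
        result = reset_password("gina@example.com")
        assert result["token_expires_in_hours"] <= 24

    def test_invalid_email_is_rejected():
        try:
            reset_password("not-an-email")
            assert False, "expected a ValueError"
        except ValueError:
            pass

If all of these pass and the Product Owner still isn’t happy, that unhappiness is a new requirement, and it gets a new test on a new story.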

  • Bottom line, I do not think defects are bad...

I disagree. I’ve experienced enough problems from the semantic difference between “defect” and “user story” that I no longer want to use the term “defect” ever again. (It’s not the IT folks that generally misinterpret the word “defect.”)

But, that's just my opinion. What's yours?

Lean Software Development

In my previous life, as a developer on the Microsoft Visual Studio Team System Process Team, I helped design workflows for user stories and other types of work items. These workflows were to be integrated into the process as a status field on the work item. At one point, we identified a workflow that boiled down to something like this:

  • New
  • Grooming
  • Groomed
  • Developing
  • Developed
  • Testing
  • Tested
  • Deploying
  • Deployed
  • Accepting
  • Accepted

At the time, we felt that it was important to make a distinction between items that were actively being worked and those that were waiting for the next stage of the process. In the end, we simplified the status field by dropping all the active verbs (those ending in “ing”), in favor of assuming that all work in the current sprint was active.
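
To sketch the contrast in code (Python; these enum names are my own shorthand, not the actual Team System field values):

    from enum import Enum

    class DetailedStatus(Enum):
        """The workflow we originally identified: 'waiting' states (ending in -ed)
        interleaved with 'actively being worked' states (ending in -ing)."""
        NEW = "New"
        GROOMING = "Grooming"
        GROOMED = "Groomed"
        DEVELOPING = "Developing"
        DEVELOPED = "Developed"
        TESTING = "Testing"
        TESTED = "Tested"
        DEPLOYING = "Deploying"
        DEPLOYED = "Deployed"
        ACCEPTING = "Accepting"
        ACCEPTED = "Accepted"

    class SimplifiedStatus(Enum):
        """What we ended up with: only the waiting states; 'actively being worked'
        is implied by the item being in the current sprint."""
        NEW = "New"
        GROOMED = "Groomed"
        DEVELOPED = "Developed"
        TESTED = "Tested"
        DEPLOYED = "Deployed"
        ACCEPTED = "Accepted"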

Part of me likes that decision – it certainly simplified the workflow. But, part of me, to this day, bemoans that decision because it limits a team’s ability to identify waste within the process that needs to be eliminated.

Imagine a process for building widgets. In this process, each widget starts life as raw materials. These materials get flobbled, marned, and volmed before becoming actual widgets. As materials move through the process, sometimes there are too many to marn all of them at once. So, those “in-process” materials sit and wait for their turn at the marner. Similar delays may also happen at the flobbling and volming workstations. And, of course, at any time, one of the machines could break, causing downstream process points to run out of work.

This is a typical manufacturing process. And, it’s just like the software process described above.

In Lean manufacturing (a process improvement method invented by Toyota), the time a piece of work has to wait for a machine is considered waste. (It’s a waste of money to buy more materials than the process can handle.) Likewise, the time a machine has to sit idle waiting for the next piece of work is also considered waste. (Idle infrastructure is a waste of the investment incurred to obtain and operate said infrastructure.) Lean manufacturing seeks to eliminate all of that waste. Out of this movement have come innovations like just-in-time inventory.

Likewise, in Lean software development, time spent in the inactive states (those verbs ending in “ed”) is considered waste to be eliminated. In other words, why groom 20 stories when the developers can only work on five at a time? The Product Owner may never prioritize some of the other 15 groomed stories high enough for the team to work on them, rendering moot the effort spent grooming those abandoned stories. Likewise, why continue developing code when there are broken tests? Until the tests are fixed, nothing can move forward in the process.
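
One benefit of keeping the inactive states around is that you can actually measure this kind of waste. Here’s a rough sketch (Python; the event format, sample history, and helper name are hypothetical) that totals how long a work item sat in each waiting state:

    from datetime import datetime
    from collections import defaultdict

    WAITING_STATES = {"Groomed", "Developed", "Tested", "Deployed"}

    def waiting_time_by_state(transitions):
        """Given a work item's status history as (timestamp, new_status) tuples,
        total the hours spent sitting in each waiting ('-ed') state."""
        totals = defaultdict(float)
        for (start, status), (end, _next_status) in zip(transitions, transitions[1:]):
            if status in WAITING_STATES:
                totals[status] += (end - start).total_seconds() / 3600
        return dict(totals)

    history = [
        (datetime(2010, 3, 1, 9), "Groomed"),
        (datetime(2010, 3, 3, 9), "Developing"),
        (datetime(2010, 3, 4, 9), "Developed"),
        (datetime(2010, 3, 4, 13), "Testing"),
    ]
    print(waiting_time_by_state(history))  # -> {'Groomed': 48.0, 'Developed': 4.0}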

Scrum lends itself to Lean software development through its iterative, incremental approach to building software. So, the team only needs to groom enough work for the next sprint, for example, reducing the potential for time to be spent grooming stories that will end up being abandoned.

The metaphor breaks down a bit, though, when you run into a particularly urgent but poorly understood request from the business. It may take so much time to groom that development stalls, waiting for well-understood requirements. This is wasteful in that the developers and testers (the infrastructure of software development) sit idle. Extreme Programming’s approach to this situation would be to punt the poorly understood requirement until such time as it can be adequately articulated. Scrum is less, um, confrontational, but no less dependent upon adequately articulated requirements.

As with everything else on this blog, I just needed to get this out of my head. I hope you found it interesting and/or useful. But, if not, at least my brain can now move on to something else.