Votebox

As much as I love Rally for managing work, I think I may have found something better. Votebox is where Dropbox users can request features and vote for those they’d most like to see. What I like about it is how simple it is:

[Image: screenshot of Votebox showing the three most popular feature requests]

Here you see the three most popular feature requests on their site. Each feature request contains a title, a short description, a category (not shown) and some metadata (in red). Users may vote for and comment on feature requests (in blue). And, Dropbox can update the status of a specific request (in green).

This is the simplest, most elegant site I’ve ever seen for managing a backlog of work. It simultaneously a) houses the backlog, b) tracks feedback, c) gauges interest in competing priorities, d) communicates progress, and e) manages expectations. It’s like a suggestion box that worked out really hard, but never took any steroids. It has all the muscle it needs, without all the bulging veins and other side effects (er, features) of larger sites like Rally.

The only thing I’d be hesitant to do with a tool like this is turn over product decisions to the crowd. What makes Votebox work for Dropbox is that Dropbox has stayed true to their original product vision – a simple, 100% reliable way to back up files to the Internet and sync them across multiple computers. Feature requests outside that vision may be popular, but they would dilute the brand, causing more harm than good to the product.

Rather, I see Votebox as a tool to help talented Product Owners with strong visions for their products interact with their audiences.

Coding Standards

One of the original practices of Extreme Programming was coding standards. Back in 2000, when I read my first XP book, I tensed up a bit at the thought of having to make my code look like everyone else’s. But, over the years, I’ve come to see the wisdom of it: once all the code follows the same standard, it becomes much easier to read.

In fact, nowadays, I often reformat a block of code to meet a coding standard before I try to figure out what it’s doing. (But, only when I have strong unit tests!) Long story short, I haven’t had a good knock-down-drag-out argument about curly braces in nearly ten years! (Anyone want to have one for old time’s sake?)

With that in mind, today I needed to look up Microsoft’s latest guidelines for .NET class libraries. Specifically, I knew they said something about using abbreviations and acronyms when naming things. But, I couldn’t remember exactly what the recommendation was. So, I looked it up. Turns out, Microsoft has done a rather nice job of structuring their design guidance on the web.

It’s not something you’ll need to refer to everyday. But, it’s good to know it’s out there…

http://msdn.microsoft.com/en-us/library/czefa0ke(v=VS.71).aspx
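If memory serves, the acronym rule I was after boils down to a distinction between two-letter acronyms and longer ones. The class names below are my own illustrations, not examples from the guidelines themselves:

```csharp
// Two-letter acronyms keep both letters capitalized in PascalCase names.
public class IOStream { }

// Acronyms of three or more letters are Pascal-cased: only the first
// letter stays capitalized.
public class HtmlParser { }
public class XmlDocument { }

// In camelCase identifiers (parameters, locals), a leading acronym is
// lowercased entirely: e.g., ioStream, xmlReader.
```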

An Agile Approach to Mission Control

NASA’s Cassini spacecraft has been studying Saturn, its rings and moons for nearly six years, despite the original mission being slated to expire after only four years. But, due to better-than-expected performance and lower-than-expected fuel consumption, NASA has extended the mission for an additional seven years, to 2017.

In order to negotiate the flight path for the next seven years, the engineers in charge of planning the orbital maneuvers consulted with the five science teams affiliated with the project. Each team is assigned to study different things: Saturn, Titan (Saturn’s largest moon), the rings, the icy satellites, and the magnetosphere. Each team presented their wish list for the places they’d like to see over the next seven years, and the engineers got busy running the numbers:

The first time [the engineers] met with the discipline teams, they offered three possible tours. The next time, they offered two, and, in January 2009, the scientists picked one of them. Last July, after six months of tweaking by [the engineers], the final “reference trajectory” was delivered. It now includes 56 passes over Titan, 155 orbits of Saturn in different inclinations, 12 flybys of Enceladus, 5 flybys of other large moons — and final destruction.*

In essence, this team of engineers had to balance the wishes of the five research teams with the remaining fuel and gravity boosts available. The approach they took was to present alternatives and iterate on them until they found the best solution, given the requirements and the constraints:

“It’s not like any problem set you get in college, because you have so many factors pulling in different directions,” Mr. Seal said. “The best way to measure it is to look at how much better the next iteration is than the previous one” until “you’re only making slight improvements.” Then you stop.*

I can’t think of a better way to describe the iterative development process espoused by most agile software development methodologies, including Scrum and XP. You know when to stop when the remaining improvements are no longer worth the investment.

I wonder how many of our customers would like to see three options, then two, then one…

* http://www.nytimes.com/2010/04/20/science/space/20cassini.html

On Clarity and Abstraction in Functional Tests

Consider the following tests:

[Test]
public void LoginFailsForUnknownUser1()
{
    string username = "unknown";
    string password = "password";

    bool loginSucceeded = User.Login(username, password);

    Assert.That(loginSucceeded == false);
}
[Test]
public void LoginFailsWithUnknownUser2()
{
    using (var browser = new IE(url))
    {
        browser.TextField(Find.ById(new Regex("UserName"))).Value = "unknown";
        browser.TextField(Find.ById(new Regex("Password"))).Value = "password";

        browser.Button(Find.ById(new Regex("LoginButton"))).Click();
        bool loginSucceeded = browser.Url.Split('?')[0].EndsWith("index.aspx");

        Assert.That(loginSucceeded == false);
    }
}

Note the similarities:

  • Both tests exercise the same underlying functional code.
  • Both tests are written in NUnit.
  • Both tests use the Arrange / Act / Assert structure.

Note the differences:

  • The first is a unit test for a method on a class.
  • The second is a functional test that tests an interaction with a web page.
  • The first is clear. The second is, um, not.

Abstracting away the browser interaction

So, what’s the problem? Aren’t all browser tests going to have to use code to automate the browser?

Well, yes. But, why must that code be so in our face? How might we express the true intention of the test without clouding it in all the arcane incantations required to automate the browser?

WatiN Page Classes

The folks behind WatiN answered that question with something called a Page class. Basically, you hide all the browser.TextField(…) goo inside a class that represents a single page on the web site. Rewriting the second test using the Page class concept results in this code:

[Test]
public void LoginFailsWithUnknownUser3()
{
    using (var browser = new IE(url))
    {
        browser.Page<LoginPage>().UserName.Value = "unknown";
        browser.Page<LoginPage>().Password.Value = "password";

        browser.Page<LoginPage>().LoginButton.Click();
        bool loginSucceeded = browser.Page<IndexPage>().IsCurrentPage;

        Assert.That(loginSucceeded == false);
    }
}
public class LoginPage : Page
{
    public TextField UserName
    {
        get { return Document.TextField(Find.ById(new Regex("UserName"))); }
    }
    public TextField Password
    {
        get { return Document.TextField(Find.ById(new Regex("Password"))); }
    }
    public Button LoginButton
    {
        get { return Document.Button(Find.ById(new Regex("LoginButton"))); }
    }
}
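For completeness, the test above also references an IndexPage class that I haven’t shown. A minimal sketch might look like the following — this is my own guess at an implementation, assuming WatiN’s Document exposes the current Url, and reusing the URL check from the original test:

```csharp
// Hypothetical IndexPage — not shown in the original example.
public class IndexPage : Page
{
    public bool IsCurrentPage
    {
        // Same check as before: strip the query string, then test
        // whether we landed on index.aspx.
        get { return Document.Url.Split('?')[0].EndsWith("index.aspx"); }
    }
}
```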

Better? Yes. Now, most of the WatiN magic is tucked away in the LoginPage class. And, you can begin to make out the intention of the test. It’s there on the right-hand side of each statement.

But, to me, the Page class approach falls short. This test still reads as though its primary goal is to automate the browser, not to exercise the underlying system. Plus, the reader of this test needs to understand generics in order to fully grasp what the test is doing.

Static Page Classes

An alternative approach I’ve used in the past is to create my own static classes to represent the pages in my web site. It looks like this:

[Test]
public void LoginFailsWithUnknownUser4()
{
    using (var browser = new IE(url))
    {
        LoginPage.UserName(browser).Value = "unknown";
        LoginPage.Password(browser).Value = "password";

        LoginPage.LoginButton(browser).Click();
        bool loginSucceeded = IndexPage.IsCurrentPage(browser);

        Assert.That(loginSucceeded == false);
    }
}
public static class LoginPage
{
    public static TextField UserName(Browser browser)
    {
        return browser.TextField(Find.ById(new Regex("UserName")));
    }
    public static TextField Password(Browser browser)
    {
        return browser.TextField(Find.ById(new Regex("Password")));
    }
    public static Button LoginButton(Browser browser)
    {
        return browser.Button(Find.ById(new Regex("LoginButton")));
    }
}
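As before, the test references an IndexPage class I haven’t shown. Here’s a sketch of what the static version might look like — again, my own reconstruction, reusing the URL check from the first functional test:

```csharp
// Hypothetical static IndexPage, mirroring the static LoginPage above.
public static class IndexPage
{
    public static bool IsCurrentPage(Browser browser)
    {
        // Strip the query string, then test whether the browser
        // landed on index.aspx.
        return browser.Url.Split('?')[0].EndsWith("index.aspx");
    }
}
```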

This is the closest I have come to revealing the intention behind the functional test without clouding it in all the arcane incantations necessary to animate a web browser. Yes, there are still references to the browser. But, at least now the intention behind the test can be inferred by reading each line from left to right. Furthermore, most of the references to the browser are now parenthetical, which our eyes are accustomed to skipping.

What do you think?

I’d like to know what you think. Are your functional tests as clear as they could be? If so, how’d you do it? If not, do you think this approach might be helpful? Drop me a line!