Mocks and fakes and stubs, oh my!

Yesterday, I started writing an article about mocks and fakes and stubs, but ended up writing an article about Inversion of Control (IoC) / Dependency Injection. Today, I’ll take up the topic of how to isolate your code from that of other objects in your unit tests. But, first, what are mocks, fakes and stubs?

In his article Mocks Aren’t Stubs, Martin Fowler defines mocks, fakes, stubs and dummy objects as follows:

  • Dummy objects are passed around but never actually used. Usually they are just used to fill parameter lists.
  • Fake objects actually have working implementations, but usually take some shortcut which makes them not suitable for production (an in memory database is a good example).
  • Stubs provide canned answers to calls made during the test, usually not responding at all to anything outside what's programmed in for the test. Stubs may also record information about calls, such as an email gateway stub that remembers the messages it 'sent', or maybe only how many messages it 'sent'.
  • Mocks are objects pre-programmed with expectations which form a specification of the calls they are expected to receive.

All of these objects are useful when unit testing your code. But, each type of test object has its own strengths and weaknesses. Let’s look at how to use each type of test object to isolate the Add method from the Logger object that it uses.

Here, again, is the code for the IocCalculator class (modified slightly, to actually throw overflow exceptions):

public class IocCalculator
{
    public IocCalculator(ILogger logger)
    {
        Logger = logger;
    }

    private ILogger Logger { get; set; }

    public int Add(params int[] args)
    {
        Logger.Log("Add");

        int sum = 0;
        try
        {
            foreach (int arg in args)
                sum = checked(sum + arg);
        }
        catch (OverflowException)
        {
            Logger.Log("OverflowException: Add");
            throw; // rethrow without resetting the stack trace
        }

        return sum;
    }
}
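
(For reference, the ILogger interface isn’t shown in these posts; judging from how it’s used, it’s nothing more than a single Log method, something like this:)

public interface ILogger
{
    void Log(string message);
}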

Dummy Objects

First, let’s try testing this code with a dummy object:

public class DummyLogger : ILogger
{
    public void Log(string message)
    {
    }
}

[Test]
[ExpectedException(typeof(OverflowException))]
public void AddThrowsExceptionOnOverflow()
{
    var calculator = new IocCalculator(new DummyLogger());
    calculator.Add(int.MaxValue, 1);
}

Take a look at the dummy object, above. It doesn’t do anything. It just swallows the calls to the logger from the Add method. Is this useful? Actually, yes. It allowed us to test that the exception was thrown correctly. Is it a good fit for this test? Yes. Can it do everything a mock, fake or stub can do? No.

Mock Objects

For example: What if we wanted to ensure that the Add method actually called the logger? In that case, a mock object might be more useful:

[Test]
public void AddLogsExceptionOnOverflow()
{
    var mocks = new Mockery();
    var mockLogger = mocks.NewMock<ILogger>();
    Expect.Exactly(2).On(mockLogger).Method("Log");

    var calculator = new IocCalculator(mockLogger);

    Assert.Throws(typeof(OverflowException), () => calculator.Add(int.MaxValue, 1));
    mocks.VerifyAllExpectationsHaveBeenMet();
}

This test replaces the dummy logger object with a mock logger object that the test itself creates (using the NMock framework). The first three lines set up the mock object by instantiating the mock object framework, instantiating the mock logger object from the ILogger interface, and telling NMock to expect exactly two calls to the “Log” method on the mock logger object. Behind the scenes, mockLogger.Log counts the number of times it gets called. The final line of the test then compares the number of times we expected to call mockLogger.Log with the actual number of times it was called.

(Note: The Assert statement above uses a lambda expression to define an anonymous method, which is passed to Assert.Throws as a delegate. This syntax was introduced in C# 3.0. If you find it a bit opaque, you’re not alone. Perhaps it’d make a good blog post. Anyone want to write it?)
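
(In the meantime, here’s a quick sketch of the equivalence. NUnit’s Assert.Throws takes a delegate; the lambda above and the older C# 2.0 anonymous-method syntax below compile down to the same thing:)

// C# 3.0 lambda expression, as used in the test above:
Assert.Throws(typeof(OverflowException), () => calculator.Add(int.MaxValue, 1));

// Equivalent C# 2.0 anonymous method; both are passed as a delegate:
Assert.Throws(typeof(OverflowException), delegate { calculator.Add(int.MaxValue, 1); });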

One final note on mocks: Many in the Test Driven Development community are enamored with mock objects. Some developers use mocks to the exclusion of all other types of test objects. Personally, I prefer to do state-based testing rather than behavior-based testing. In other words, I want to set up my object under test, call the method under test, and assert that the state of the object changed in some observable way. I don’t want my test to know about the underlying implementation details of the method under test. Mock objects, in my opinion, have a way of getting cozy with implementation details that makes me uncomfortable.
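
(For contrast, here’s what a state-based test of the same class might look like; it asserts only on the observable result, not on how Add uses its collaborators:)

[Test]
public void AddReturnsSumOfArguments()
{
    var calculator = new IocCalculator(new DummyLogger());
    Assert.AreEqual(6, calculator.Add(1, 2, 3));
}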

Fake Objects

But, what if you want to test that the Add method logs the correct messages? In that case, you may need to implement a fake logger and validate its contents, like this:

public class FakeLogger : ILogger
{
    public readonly StringBuilder Contents = new StringBuilder();

    public void Log(string message)
    {
        Contents.AppendLine(message);
    }
}

[Test]
public void AddLogsCorrectExceptionOnOverflow()
{
    var fakeLogger = new FakeLogger();
    var calculator = new IocCalculator(fakeLogger);

    Assert.Throws(typeof(OverflowException), () => calculator.Add(int.MaxValue, 1));

    // AppendLine appends Environment.NewLine, so compare against that
    // rather than a hard-coded "\n":
    Assert.AreEqual(
        "Add" + Environment.NewLine + "OverflowException: Add" + Environment.NewLine,
        fakeLogger.Contents.ToString());
}

Note that the fake logger is actually a real logger. It actually logs the messages it receives in a string (using a StringBuilder object). Essentially, this implementation is an “in memory” logger, similar to what Fowler described as an “in memory database.” In fact, if this weren’t an example, I would probably have named the class InMemoryLogger or StringLogger, rather than FakeLogger. That’s more in line with what the code actually does.

So, is the fake logger useful? Absolutely. In fact, this is the approach I would actually take, since the dummy and mock loggers cannot check the text that was logged.

Stub Objects

But, what about stub objects? Well, it turns out that I chose a poor example for illustrating stub objects. As you’ll recall from above, stubs return canned, hard-coded answers to trigger specific code paths within the code under test. My logger example doesn’t need this kind of functionality. But, if some other piece of code were to call the Add method, it might be handy to use a stub to hard-code a response.

So, let’s test some code that needs to call the Calculator object. First, here’s the code:

public class Decider
{
    public bool Decide(params int[] args)
    {
        return ((new IocCalculator(new DummyLogger()).Add(args) % 2) == 1);
    }
}

Hmmm… Well, we can’t isolate this code from the IocCalculator code, yet. Let’s refactor:

public class Decider
{
    private readonly ICalculator _calculator;

    public Decider(ICalculator calculator)
    {
        _calculator = calculator;
    }

    public bool Decide(params int[] args)
    {
        return ((_calculator.Add(args) % 2) == 1);
    }
}
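
(The ICalculator interface isn’t shown in the original code; based on how it’s used here, it would look something like this, and IocCalculator would need to implement it:)

public interface ICalculator
{
    int Add(params int[] args);
}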

Now, we can pass in a couple of different stub objects to test the Decider.Decide method:

public class EvenCalculator : ICalculator
{
    public int Add(params int[] args)
    {
        return 2;
    }
}

public class OddCalculator : ICalculator
{
    public int Add(params int[] args)
    {
        return 1;
    }
}

[TestFixture]
public class DeciderFixture
{
    [Test]
    public void DecideReturnsFalseWhenEven()
    {
        var decider = new Decider(new EvenCalculator());
        Assert.False(decider.Decide());
    }

    [Test]
    public void DecideReturnsTrueWhenOdd()
    {
        var decider = new Decider(new OddCalculator());
        Assert.True(decider.Decide());
    }
}

Conclusion

So, that (finally) concludes my look at mocks, fakes, stubs and dummies. I hope you found it understandable. And, I hope you find it helpful the next time you need to test a piece of code in isolation. In fact, this same Inversion of Control / Dependency Injection approach can be leveraged in many different ways. My focus in these articles was to demonstrate using the approach in unit testing. But, it can also be applied to functional/system testing.

Say you have a website that displays data returned from a web service. Say you want to hide the Address2 line if it is empty. You could look for test data from the web service that meets your needs. Or, you could program your tests to run against a stub service that returns just the values you need. Not only will you be guaranteed that the test data won’t change, but the test will run much faster, to boot. Sure, an end-to-end test that calls the actual web service will be necessary before going live. But, why pay that price every time you run your test suite?
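
(Here’s a minimal sketch of that idea; the Address and IAddressService names are hypothetical, invented purely for illustration:)

public class Address
{
    public string Address1 { get; set; }
    public string Address2 { get; set; }
}

public interface IAddressService
{
    Address GetAddress(int customerId);
}

// A stub that always returns an address with an empty Address2 line,
// so a UI test can verify that the line is hidden.
public class StubAddressService : IAddressService
{
    public Address GetAddress(int customerId)
    {
        return new Address { Address1 = "123 Main St", Address2 = "" };
    }
}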

Furthermore, IoC can also be used in production code to enable things like plug-ins or composite user interfaces. In this scenario, an installed plug-in is generally just a piece of code that implements a specific interface – one that allows a third party to develop software that integrates with your code at runtime via configuration.
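
(Here’s a rough sketch of how that might work, assuming a hypothetical IPlugin interface and a hypothetical “PluginType” appSettings key; it requires a reference to System.Configuration:)

// Third parties implement this interface in their own assemblies.
public interface IPlugin
{
    void Execute();
}

// At runtime, the host reads the plug-in's type name from configuration
// and instantiates it via reflection.
public static class PluginLoader
{
    public static IPlugin Load()
    {
        string typeName = ConfigurationManager.AppSettings["PluginType"];
        Type pluginType = Type.GetType(typeName, true); // true = throw if the type isn't found
        return (IPlugin)Activator.CreateInstance(pluginType);
    }
}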

As always, feel free to post corrections and questions in the comments.

Zero Defects™ – Part 2

As I suspected, my notion of Zero Defects was mildly controversial. I’ll try to clear up some of the questions in this post. But, rather than constructing a cohesive argument, as I tried to do the first time, I’ll respond to some of the responses I received on the first post. (NOTE: Comments did not come with the posts when I manually imported them to Posterous.)

  • Defects vs. stories is just a semantic layer; they both boil down to work that needs to be done.

Semantics are important. Yes, they’re both just work items. But, a “defect” carries with it a negative connotation, whereas a “user story” does not.

I believe strongly that using the words “defect” or “bug” sets the inappropriate expectation with business partners that all defects/bugs can and will be removed from the software prior to promotion to production. And, I’ve seen firsthand how using the term “defect” led to teams killing themselves trying to fix, by the end of a Sprint, “bugs” that were never called out in test cases. This is a recipe for failure and burnout.

I much prefer to set the expectation that anything that wasn’t explicitly called out by a user story and/or test case in the current Sprint is new work that needs to be prioritized with all other work. The best way to do this is to label that work the same way all other new work is labeled – as a new user story.

  • Defects tell me the QAs are doing their job.

To me, the statement above is akin to saying “Lines of Code tell me my programmers are doing their job.” Defect counts are meaningless. Point me at any modern website or program, and I can probably identify 100 defects in an hour. Most of them will be garbage. But, if I file a defect on them, someone (or more likely, some team) will have to triage those things, beat down the stupid ones and prioritize the rest – probably outside their normal work item prioritization process.

  • My experience is our business partners are not concerned with defects, unless they are not caught and are promoted into production.

My experience differs. I’ve worked with multiple POs, both at my current employer and elsewhere, who tracked defect counts and berated the team over them. In fact, my experience is that this is more common than not.

  • The only business partner of ours looking at Rally is our PO.

Yeah, that seems to be common here. But, there’s nothing preventing you from calling a meeting at the end of each Sprint where you demo the new functionality to a broader community of stakeholders. In fact, Scrum recommends this practice. It’s called a Sprint Review Meeting.

  • The requirements in agile are much less defined than in waterfall.

I disagree 100%. There should be zero theoretical difference in the quality of requirements between agile and waterfall. Agile just prefers to discover and document requirements iteratively, one at a time; whereas waterfall tries to define all requirements up front. In practice, this means that agile methods discover and document requirements closer to the time of implementation. They also allow for new requirements to be discovered by looking at how the software actually works. Waterfall doesn’t provide for that iterative feedback, meaning that the requirements get stale.

  • Defects provide us with a good method to track these issues to resolution. If the issue is new scope... then a user story is a much better alternative.

The problem I see in tracking defects separate from user stories is prioritization. How does the developer know what to work on next? Sure, Rally (and other tools) allow you to see all work in one bucket. And, that helps. But, in my experience, teams end up with silly rules like “defects found during the current sprint will be resolved in the current sprint.” That seems like an innocent enough rule, but what happens when you find more bugs than you can fix?

  • My interpretation of reading Alan's blog was that those items that are actual defects (do not meet defined requirements)... should be a new user story.

Ah, therein lies the misperception. In my original post, I meant to state that every requirement MUST be expressed as test cases on a user story, and that all test cases MUST be passing before I show the work to my business partner(s). Therefore, by definition, the code has met all of the “defined requirements” and can only be deficient if there remain undefined requirements, which should be defined as new test cases on a new user story.

  • Bottom line, i do not think defects are bad...

I disagree. I’ve experienced enough problems from the semantic difference between defect and user story, that I no longer want to use the term defect ever again. (It’s not the IT folks that generally misinterpret the word “defect.”)

But, that's just my opinion. What's yours?

Inversion of Control (IoC) / Dependency Injection

When I first started building unit tests for my code, I would create a little user interface through which I could plug in various inputs and see the output. It was a highly manual process. But, it was also NOT unit testing. Rather, I was integration testing my objects and any objects referenced by my objects. Even when I began using tools like NUnit, I continued writing integration tests and calling them unit tests. It wasn’t until I learned about the “inversion of control” (or, IoC) design pattern that I began writing truly discrete tests.

Smarter people than I have written a bunch about IoC. So, rather than try to explain it, let’s look at an example. The following code uses a logger to write messages to an event log:

public class Calculator
{
    public int Add(params int[] args)
    {
        var logger = new EventViewerLogger("Application");
        logger.Log("Add");

        int sum = 0;
        try
        {
            foreach (int arg in args)
                sum += arg;
        }
        catch (OverflowException)
        {
            logger.Log("OverflowException: Add");
            throw; // rethrow without resetting the stack trace
        }

        return sum;
    }
}

Notice that the method directly instantiates the logger. Imagine trying to test this method without also testing the logger. It is simply not possible. Furthermore, this code violates both the Single Responsibility Principle and the Open/Closed Principle.

The Single Responsibility Principle states that a class/method must have only one responsibility. In this case, the appropriate responsibility is mathematical calculations. Instantiating objects counts as a second responsibility. Thus, the violation.

The Open/Closed Principle states that a class/method should be open for extension, but closed for modification. The way I’m using this term, the violation is in the fact that the Add method creates an instance of a concrete class, rather than referencing an interface. In doing so, it is impossible to change the logging behavior of this class/method without modifying the class itself.

Here’s how that same code would look after addressing these issues:

public class IocCalculator
{
    public IocCalculator(ILogger logger)
    {
        Logger = logger;
    }

    private ILogger Logger { get; set; }

    public int Add(params int[] args)
    {
        Logger.Log("Add");

        int sum = 0;
        try
        {
            foreach (int arg in args)
                sum += arg;
        }
        catch (OverflowException)
        {
            Logger.Log("OverflowException: Add");
            throw; // rethrow without resetting the stack trace
        }

        return sum;
    }
}

Now, the logger is instantiated outside the method. In fact, it is instantiated outside the class entirely and passed in via a constructor. (This is called Constructor Dependency Injection, which is a form of IoC.)

Furthermore, the class now references the ILogger interface rather than the EventViewerLogger concrete class. This allows for the client object to determine which logger should be used. In a production environment, that might be the EventViewerLogger, or it might be a DatabaseLogger. More interesting to this discussion, however, is the fact that now we can use a MockLogger, FakeLogger or StubLogger to test the calculation code without also testing the logger code.
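
(To make that concrete, here’s what the wiring might look like; the DatabaseLogger, like the test loggers, is a stand-in for whatever implementation the client chooses:)

// Production wiring: the client picks the concrete logger.
var calculator = new IocCalculator(new EventViewerLogger("Application"));
// ...or perhaps:
var auditedCalculator = new IocCalculator(new DatabaseLogger());

// Test wiring: the same class, isolated from any real logger.
var testCalculator = new IocCalculator(new FakeLogger());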

So, what are mocks, fakes and stubs? They’re the subject of a future post. Stay tuned.

Reason #467 to use ReSharper: Live Templates

I just learned to use ReSharper’s code snippet feature, called Live Templates. I was looking for a code snippet to generate an NUnit test method. Oddly, despite ReSharper’s support for running NUnit tests, they didn’t see fit to include an NUnit test snippet. (Instead they included templates for the xUnit framework [Fact] and [Theory] methods, so I guess we know what framework they’re using.) So, I just created my own. Here’s how:

Step 1: Open ReSharper’s Live Templates dialog from the ReSharper menu in Visual Studio.

[Screenshot]

Step 2: Navigate to the folder where you’d like to add a Live Template and click the New Template button. (For me, this was the C# folder under the User Templates folder.)

[Screenshot]

(ALT+PrtScn didn’t capture the mouse pointer, but you get the idea.)

Step 3: Give your Live Template a shortcut and description, then enter the same kind of snippet code that you’d put in a traditional Visual Studio code snippet.

[Screenshot]
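
(For reference, here’s the sort of template body I’m describing; $TestName$ is a template parameter and $END$ marks where the caret lands, per ReSharper’s placeholder syntax:)

[Test]
public void $TestName$()
{
    $END$
}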

Step 4: Save your new Live Template and turn it on by selecting the checkbox next to it in the Templates Explorer dialog.

[Screenshot]

(Again, ALT+PrtScn didn’t capture the mouse pointer, but you get the idea.)

So, now, assuming I’m using ReSharper’s IntelliSense (to check, go to ReSharper / Options / IntelliSense / General), I can create a test simply by typing “te” and selecting my Live Template from IntelliSense.

[Screenshot]

Sure, you could do all of this in Visual Studio, without ReSharper. In fact, Visual Studio 2010 includes a code snippet to generate tests in its syntax (where [Test] becomes [TestMethod]). But, you would have to hand-edit the snippet XML. Plus, ReSharper also includes Surround Templates (i.e. “surround the selected code in an if/for/try statement, please”) and File Templates (to create new test fixture classes with the correct NUnit using clause and [TestFixture] attribute, for example).

To paraphrase Red Green: If your peers don’t find you handsome, they should at least find you handy.

Zero Defects™

Last week, a coworker told me that defects do not make developers look bad. Basically, he asserted the following:

  1. Defects are not used to judge developer competence; and,
  2. Defects are simply artifacts for tracking work that needs to be done.

I agree with these points. However, experience teaches me that defects can and do make teams look bad. I’ve seen defects become a point of contention between a development team and their business partner(s). And, once a rift forms between the two, it is very difficult to heal.

So, how can we simultaneously encourage defect creation (to track work that needs to be done) without creating a rift between the development team and their business partners? My preferred approach is to create Zero Defects. Here’s how:

  1. Manage your work with a backlog of user stories.
  2. Create test cases for each user story before you begin development. Review the test cases with your business partner as well as QA to make sure you didn’t miss any. (This can happen iteratively. There’s no need to create test cases for stories you’re not going to work on right away.)
  3. As you develop a story, automate the test cases to prove that the software does what the tests say it should. When all the tests pass, the code is complete. (If possible, the developer who writes the code should not be the developer/tester who automates the tests. And, it’s very helpful to automate the tests before the code exists.)
  4. When you finish a story (and the associated tests), review both the software and the automated tests with your business partner to make sure everything is as they wanted. If your business partner wants to make changes at that point – either new requests, or things we might typically call defects – write them up as new user stories and prioritize them in the backlog.

That’s all there is to it. You, the developer/tester, get to take credit in the current Sprint/iteration for the work you completed. And, your business partner gets to manage a single backlog with all the outstanding work in it. Plus, no one has to fight over what’s a defect and what’s a user story. It’s a win/win+!

Now, I realize that this might be considered controversial. So, I’ll explain my thought process in a future post. Feel free to tell me how this’ll never work in the comments!

Why in-text ads suck and what to do about it

Have you ever visited a web site with contextual advertisements associated with certain words within the text of the page? It looks like this:

[Screenshot: Kontera links]

Moving your cursor over one of the links produces a pop-up advertisement that blocks the text you were reading. Here’s the ad I saw when I inadvertently moved my mouse over the “business” link:

[Screenshot: pop-up advertisement]

Can anyone explain to me what Bloomberg Businessweek has to do with Dell knowingly selling defective products? So much for the contextual aspect of the contextual ads. All that’s left are the annoying aspects!

The company behind this little “innovation” is called Kontera. The way it works is like this:

  • Advertisers publish ads to be displayed with certain keywords.
  • Publishers sign up to host ads.
  • Publishers place a reference to a Javascript file hosted at kontera.com on their web page.
  • The Javascript file inserts the links into the text, as the page is rendered.

Frankly, I find this sort of thing highly irritating. Here I am, trying to learn about why Dell knowingly shipped faulty computers, and all of a sudden the text of the article I’m reading is covered by an animated ad. Now, I have to stop, click the little X to close the darn thing, and find my place all over again.

What this says to me is that neither Kontera nor the publisher (in this case, Gnomelocker.com) has any concern about my experience reading their content. Their only concern is using their content as a means to deliver advertisements, in pursuit of a buck. It makes me not want to read Gnomelocker.com or any other publisher that uses Kontera.

At home, I’ve found a solution that works like a champ. I added the following entry into my hosts file:

127.0.0.1 te.kontera.com

Now, whenever my browser requests the Javascript file from te.kontera.com, it is redirected to my localhost and fails to load the file. Problem solved.

Unfortunately, when you use a proxy server – like most corporations – the browser defaults to resolving host names via the proxy server rather than using the hosts file. So, my solution only works at home.

Hmmm… I wonder what it would take to get te.kontera.com added to our corporate black-list? Hmmm…

iPhone 3GS Security Flaw

Apparently, the iPhone 3GS has a security flaw that would allow someone to access the data on a PIN-protected device by connecting it to a computer running Ubuntu Lucid Lynx. (There was no mention of the iPhone 3G or the iPad having the problem. And, the original iPhone didn’t offer encryption at all.)

Obviously, this kind of attack would only work if the hacker had possession of the actual iPhone. But, phones get misplaced all the time. Because of this, if you own an iPhone and you care about the security of the information on the device, you need the ability to wipe the device remotely.

For corporations running Microsoft Exchange 2007:

You can initiate a remote wipe using the Exchange Management Console, Outlook Web Access, or the Exchange ActiveSync Mobile Administration Web Tool.

iPhone OS Enterprise Deployment Guide (page 9)

Individual iPhone owners can use MobileMe to “Remote Wipe” their device should it be irretrievably lost. Though the service costs $99/year, it does include many other services.

Birthday WiX Wishes

My birthday is coming in early June. If I could have anything in the whole wide world, I’d take one billion dollars. But, if I could only have something from the whole WiX world, then here’s what I’d like:

I want a WiX project template specific to my environment:

  1. I want the template to nail down all the little things we never/rarely do in our current installers, like checking for the right version of IIS, checking for previously installed versions of the application, etc.
  2. I want the template to be able to install an application in multiple environments.
  3. I want the template to auto-detect the platform, if possible.
  4. I want the template to accept command-line or UI driven properties where auto-detection of the environment is not possible.
  5. I want the template to separate static code from dynamic code (using WiX include files containing project specific information).
  6. I want to be able to point the template at a directory and have it auto-generate the appropriate Feature, Directory, Component, and File elements for installing all those files into a target directory.
  7. I want the auto-generator to understand files with DEV, TST, ACC, PRD in their names and set up conditional components as appropriate.
  8. I want the auto-generator code to work with our build process so that the WXS file can be generated at build time – preferably based on a recursive crawl of a root folder, to minimize the maintenance costs/risks.

Anyone interested in helping me blow out some of the candles?

WCF Service Configuration Editor

So, I’ve been working on a small WCF service for a while now. Everything was going well. I had a suite of tests that ran just fine when I ran the service locally. I built an installer using WiX. And, blamo! When I installed the service on a DEV server, I started seeing all kinds of strange errors. Apparently, the service web.config and the client app.config that worked locally aren’t sufficient once you leave the safety of localhost.

And, as it turns out, those config files are horrendously complex. Fortunately, there is a tool to make editing those files a little easier: The WCF Service Configuration Editor. This tool, which is available on the Tools menu in Visual Studio 2008, gives you a GUI for editing the <system.serviceModel> node of a web.config. Here’s what it looks like:

[Screenshot: WCF Service Configuration Editor]

Granted, it’s not the most intuitive thing to use. And, I’ve only used it this one time. But, it sure took the hand out of hand-editing the web.config for the WCF middleware service.