Securing WCF Services: Preventing Unauthorized Access

Suppose you are writing a web service to perform a sensitive calculation. Your web service should only be accessible by authorized users and/or applications. Here’s my advice for configuring your service:

Use .NET to Your Advantage

.NET has built-in features for ensuring that the code calling yours is authorized to do so. For example, take a look at this snippet of code:

[PrincipalPermission(SecurityAction.Demand, Role = "MyLocalSecurityGroup")]
public SearchResults Find(string contractNumber)
{
    ...
}

Notice the [PrincipalPermission] attribute. This attribute tells .NET that only principals who are members of the "MyLocalSecurityGroup" role are allowed to run this method (where a principal is a user or service account, and a role is a locally defined security group on the server). In other words, to run this method, the caller must be running under an account that is a member of the specified local security group.
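For the role check to resolve against Windows groups, WCF needs to map callers to Windows principals. That mapping is controlled by the service authorization behavior; `UseWindowsGroups` is the default, but a minimal sketch of making it explicit in the service's web.config looks like this:

```xml
<system.serviceModel>
  <behaviors>
    <serviceBehaviors>
      <behavior>
        <!-- Map callers to WindowsPrincipal so the Role in
             [PrincipalPermission] resolves against Windows groups -->
        <serviceAuthorization principalPermissionMode="UseWindowsGroups" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>
```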

Local Security Groups

By telling your code to check a security group on the local server, you get around the problem of having to care about the different test regions in your code. The local group on your DEV server will contain only those accounts which are authorized to access the service in DEV. Likewise, the TST and PRD local security groups will only contain accounts authorized to access TST and PRD respectively.

Domain Security Groups (NT Groups)

Of course, you don’t want to manage access at the local server level. Rather, you want to let User Account Services (UAS) manage access centrally. The way to do this is to create domain security groups for each test level. For the middleware, I created these groups:

  • MyDomainSecurityGroup.DEV
  • MyDomainSecurityGroup.TST
  • MyDomainSecurityGroup.PRD

Then update the local security groups on your servers so that each contains ONLY the matching domain security group. If the server is a DEV server, make the DEV domain group the only member of the local security group. All other user/service accounts should be removed from the local security group and added to the appropriate domain security group.
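On each server, this membership change can be scripted with the built-in `net localgroup` command. A sketch for a DEV server (the group names match the hypothetical examples above, and `MYDOMAIN` is a placeholder for your actual domain):

```bat
REM Make the DEV domain group a member of the local security group
net localgroup MyLocalSecurityGroup MYDOMAIN\MyDomainSecurityGroup.DEV /add

REM Remove any directly granted account (repeat per account)
net localgroup MyLocalSecurityGroup MYDOMAIN\SomeServiceAccount /delete
```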

Conclusion

If you follow this pattern to secure your web services, you’ll be able to prevent unauthorized access to your service by adding a single line of code to every method that needs to check authorization. Furthermore, in a more complex scenario – say one where users may have read, write or admin access to your service – you can create as many local and domain security groups as you need to ensure that users and applications have the lowest level of access required to do their work.

One final thought: If your web service is hosted across a pool of servers, remember to create the local security groups on all of the servers. You could probably do this with your installer. Or, you could make it a manual step in your implementation plan that only gets executed once.

WCF Service Configuration Editor

So, I’ve been working on a small WCF service for a while now. Everything was going well. I had a suite of tests that ran just fine when I ran the service locally. I built an installer using WiX. And, blamo! When I installed the service on a DEV server, I started seeing all kinds of strange errors. Apparently, the service web.config and the client app.config that worked locally aren’t sufficient once you leave the safety of localhost.

And, as it turns out, those config files are horrendously complex. Fortunately, there is a tool to make editing those files a little easier: The WCF Service Configuration Editor. This tool, which is available on the Tools menu in Visual Studio 2008, gives you a GUI for editing the <system.serviceModel> node of a web.config. Here’s what it looks like:

[Screenshot: the WCF Service Configuration Editor]

Granted, it’s not the most intuitive thing to use. And, I’ve only used it this one time. But, it sure took the hand out of hand-editing the web.config for the WCF middleware service.
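For context, here is a sketch of the kind of `<system.serviceModel>` section the editor manages. The service, contract, and binding names here are hypothetical placeholders, not the actual middleware configuration:

```xml
<system.serviceModel>
  <services>
    <service name="MyCompany.Middleware.SearchService">
      <!-- Address, binding, and contract: the "ABCs" of a WCF endpoint -->
      <endpoint address=""
                binding="wsHttpBinding"
                contract="MyCompany.Middleware.ISearchService" />
      <!-- Metadata exchange endpoint so clients can generate proxies -->
      <endpoint address="mex"
                binding="mexHttpBinding"
                contract="IMetadataExchange" />
    </service>
  </services>
</system.serviceModel>
```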

On Logical Layers and Physical Tiers

This post came out of a team design discussion last summer. We were trying to decide how to layer and deploy a new tool for administering part of our system. In order to work through the options, I took a step back and thought about design in the abstract. It helped me a great deal at the time. I hope it helps you, too.

Logical Application Layers

It makes good sense to design applications in logical layers. In most modern business applications, there are (at least) three layers:

  1. A UI layer to display information to the user and handle user input;
  2. A “middle-tier” or business layer to handle data validation and process business rules; and,
  3. A data access layer to handle storage and retrieval of data from some form of repository.

This allows you to keep all of your UI code together in one place, separate from the business rules and data access code (increasing cohesion). And, it makes it easy for you to ensure that the UI code never calls the data access code directly (reducing coupling). These are the hallmarks of good OO design.
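As a sketch of that separation (all class and method names here are hypothetical), each layer talks only to the layer directly below it:

```csharp
using System;

public class Contract
{
    public string Number { get; set; }
}

// Data access layer: storage and retrieval only
public class ContractRepository
{
    public Contract GetByNumber(string contractNumber)
    {
        // In a real application this would query a repository; stubbed here
        return new Contract { Number = contractNumber };
    }
}

// Business layer: validation and business rules
public class ContractFinder
{
    private readonly ContractRepository _repository = new ContractRepository();

    public Contract Find(string contractNumber)
    {
        if (string.IsNullOrEmpty(contractNumber))
            throw new ArgumentException("Contract number is required.");
        return _repository.GetByNumber(contractNumber);
    }
}

// UI layer: display and input only -- never calls ContractRepository directly
public class ContractPage
{
    private readonly ContractFinder _finder = new ContractFinder();

    public void Show(string contractNumber)
    {
        Contract contract = _finder.Find(contractNumber);
        Console.WriteLine(contract.Number);
    }
}
```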

Physical Application Tiers

It may also make sense to deploy your logical layers across multiple physical environments (or tiers). For example, you could:

  1. Deploy all three layers to a single physical environment (though not generally recommended), or
  2. Deploy the UI layer to a separate environment in a DMZ, or
  3. Deploy all three layers to separate physical tiers.

This allows you to improve security (through the use of firewalls and service accounts). And, it allows you to improve performance (through the use of load balancers at each tier). These are the hallmarks of a well deployed application. Notice though, that to do this, you need to design logical layers into your application well before you try to deploy it.

Packaging Logical Layers for Deployment to Physical Tiers

Packaging is the process of placing classes into DLLs. In other words, how do I decide where to put a new class when creating it? Obviously, if I plan to deploy the application across multiple physical environments, then it makes sense to create multiple packages (or DLLs). But, if I decide to host two (or more) layers on the same physical tier, I may still want to package those layers separately, and deploy both DLLs. So, packaging decisions can be influenced by deployment decisions, but not the other way around.

Furthermore, how you package your logical layers (in one, two or three DLLs) and where you deploy your logical layers (on one, two or three physical environments) has nothing to do with the logical layers themselves. In other words, no matter how I package and deploy my application, it will still contain the same classes and methods in each logical layer.

Service Layers

If you choose to deploy your logical layers on a single physical environment, each layer can call its subordinate layers directly (regardless of how many DLLs you create). But, if you choose to deploy your logical layers across multiple physical environments, then you’re going to need some glue to allow the layers to communicate across those environments. A common approach to this issue is to introduce a thin service layer between two logical layers to handle the communications, like this:

  • Front-end Tier
    • UI Layer
    • Service Layer Proxy
  • Back-end Tier
    • Service Layer
    • Business Layer
    • Data Access Layer

Notice that the service layer in this case actually spans the two physical environments. A portion of the service layer, called the “proxy,” lives on the front-end. And, the remainder of the service layer lives on the back-end. Microsoft does a good job of hiding all this from you in WCF. But, the underlying proxy and service layers are still there.

The important thing to remember about service layers is that they should only handle the communications across physical environments. There should be no UI, business or data access logic in the service layer or its proxy. Again, this goes to increasing cohesion and reducing coupling.
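In WCF terms, a thin service layer like this is just a service contract plus an implementation that delegates straight to the business layer. A sketch, where `SearchBusinessLayer` and its `Find` method are hypothetical stand-ins for your actual business layer:

```csharp
using System.ServiceModel;

[ServiceContract]
public interface ISearchService
{
    [OperationContract]
    SearchResults Find(string contractNumber);
}

// Thin service layer: no UI, business, or data access logic --
// it only carries the call across the physical boundary
public class SearchService : ISearchService
{
    private readonly SearchBusinessLayer _business = new SearchBusinessLayer();

    public SearchResults Find(string contractNumber)
    {
        return _business.Find(contractNumber);
    }
}
```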