Chapter 4. Service Model

Chapter 3 examined how to get your first service running on Windows Azure. You briefly saw how to create a web role, along with the service definition and configuration files. Roles composed in the service definition file are the building blocks of complex services on Windows Azure. In this chapter, you'll see what roles are and how to compose them to model your service. You'll also see how to use the various features offered by the service definition and configuration files to build complex services on Windows Azure.

Understanding Windows Azure Roles

In Chapter 2, you had a brief peek at how the service model helps you model your service, similar to how you might draw your service architecture on a whiteboard. Let's go back to that analogy for a second.

Consider a service architecture such as the one shown in Figure 4-1. When you draw it out on the whiteboard, you typically show the different “types” of boxes you use, and if you have a small number, the individual boxes themselves. For example, you might have a few “web frontend” boxes that host your web server, a few “DB” boxes hosting your database, and so on.

Load balancer and roles

Figure 4-1. Load balancer and roles

Each of these “types” typically has the same kind of code/software/bits running on that machine. All your web frontends typically have the same web frontend code, all your database machines have the same database software installed (with different partitions of data), and all your backend mail servers probably run the same mail transfer agent. This similarity is deliberate. If you want an extra web frontend, you add one more machine with the right bits, and twiddle with the load balancer to bring the new machine into rotation.

Windows Azure takes this informal grouping of machines that most applications do and formalizes it into something called roles. A Windows Azure role roughly corresponds to a “type” of box. However, each role is tweaked for a special purpose. There’s a web role that contains your website and frontend code. There’s a worker role suited to background jobs and long-running transactions.

Taking these core web and worker roles, Visual Studio offers a few role templates that customize them for specific scenarios, such as hosting a Windows Communication Foundation (WCF) service or a FastCGI application. Table 4-1 lists the roles and role templates offered by Windows Azure and what each of them is suited for. You'll see web and worker roles explored in detail later in this chapter.

Table 4-1. Windows Azure role and role template types

Role type

Description

Web role

This is analogous to an ASP.NET website hosted in IIS (which is, in fact, exactly how Windows Azure hosts your code, as you saw in Chapter 2). This is your go-to option for hosting websites, web services, and anything that needs to speak HTTP, and can run on the IIS/ASP.NET stack.

Worker role

A worker role in Windows Azure fulfills the same role a long-running Windows service/cron job/console application would do in the server world. You get to write the equivalent of an int main() that Windows Azure will call for you. You can put absolutely any code you want in it. If you can’t fit your code in any of the other role types, you can probably find a way to fit it here. This is used for everything, including background jobs, asynchronous processing, hosting application servers written in non-.NET languages such as Java, or even databases such as MySQL.

CGI web role (web role)

Windows Azure offers direct support to host languages and runtimes that support the FastCGI protocol. The CGI Role template makes it easier for you. You’ll see how this FastCGI support works in Chapter 6. Though this is offered as a first-class role type in Visual Studio, under the covers it is just a web role with the CGI option turned on.

WCF service role (web role)

This is another customized version of the web role targeted at hosting WCF Services. Under the covers, this is just a web role with some Visual Studio magic to make it easier to write a WCF service.

Though each role is targeted at a specific scenario, each can be modified to fit a variety of scenarios. For example, the worker role can act as a web server by listening on port 80. The web role can launch a process to perform some long-running activity. You’ll have to resort to tactics such as this to run code that isn’t natively part of the Microsoft stack. For example, to run Tomcat (the popular Java application server), you must modify the worker role to listen on the correct ports.

Role Instances

Let’s consider the boxes and arrows on the whiteboard. When you draw out your architecture, you probably have more than one “type” of box. Any site or service with serious usage is probably going to need more than one web frontend. Just like the “type” of box corresponds to a role in Windows Azure, the actual box corresponds to a role instance.

Note the use of the terms type and instance. The similarity to the same terms as used in object-oriented programming (OOP) is deliberate. A role is similar to a class/type in that it specifies the blueprint. However, the actual code runs in a role instance, which is analogous to an instantiated object.

Also note that there is a strict limitation of one role instance per virtual machine; there is no concept of multiple roles on the virtual machine. If you want to host more than one role on the same virtual machine, you’ll need to pack the code together into a single web role or worker role, and distribute the work yourself.

Role instances and the load balancer

In the boxes-and-diagrams version, how incoming traffic is routed to your machines completely depends on your network setup. Your infrastructure could have a mass of virtual local area networks (VLANs), software and hardware load balancers, and various other networking magic.

In Windows Azure, the relationship between role instances and incoming traffic is simple. As shown earlier in Figure 4-1, all role instances for a role are behind a load balancer. The load balancer distributes traffic in a strict round-robin fashion to each role instance. For example, if you have three web role instances, each will get one-third of the incoming traffic. This even distribution is maintained when the number of instances changes.

How does traffic directed toward foo.cloudapp.net end up at a web role instance? A request to a *.cloudapp.net URL is resolved through DNS to a virtual IP address (VIP) that your service owns in a Microsoft data center. This VIP is the external face of your service, and the load balancer knows how to route traffic hitting this VIP to your various role instances.

Each of these role instances has its own special IP address that is accessible only inside the data center. This is often called a direct IP address (DIP). Though you can't get at these DIPs from outside the data center, they are useful when you want roles to communicate with each other. Later in this chapter, you'll see the API that lets you discover the DIPs of the other role instances in your service.

Controlling the number of instances

The number of role instances that your service uses is specified in the service configuration (that is, the ServiceConfiguration.cscfg file). Under each Role element, you should find an Instances element that takes a count attribute. That directly controls the number of role instances your service uses. Since this is part of the service configuration (and not the definition), it can be updated separately, and doesn't require rebuilding/redeploying your package.

Example 4-1 shows a sample service configuration with one instance. You can specify the instance count separately for each role in your service.

Example 4-1. Sample service configuration

<?xml version="1.0"?>
<ServiceConfiguration serviceName="CloudService1"
 xmlns=
"http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole1">
    <Instances count="1" />
    <ConfigurationSettings />
  </Role>
</ServiceConfiguration>

Note that all instances in your role are identical. Later in this chapter, you’ll see how to have roles on different virtual machine sizes. It is currently not possible to have role instances of the same role with different sizes.

Note

Today, there is a direct mapping between the number of role instances and the number of virtual machines. If you ask for three role instances, you’ll get three virtual machines. This also factors into how your application gets billed. Remember that one of the resources for which you are billed is the number of virtual machine hours used. Every running role instance consumes one virtual machine hour every hour. If you have three role instances, that is three virtual machine hours every hour. Keep this in mind when changing the instance count, because a typo could wind up as an expensive mistake!
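The billing arithmetic in the note above is easy to sketch in code. The hourly rate below is purely a placeholder, not an actual Windows Azure price; as the text says, always consult the Windows Azure website for current rates.

```csharp
using System;

class BillingEstimate
{
    static void Main()
    {
        // Hypothetical rate per virtual machine hour (NOT a real price).
        const decimal ratePerVmHour = 0.12m;

        int instanceCount = 3;       // three role instances...
        int hoursInMonth = 24 * 30;  // ...running around the clock

        // Every running role instance consumes one VM hour per hour,
        // so three instances consume three VM hours every hour.
        decimal monthlyCost = instanceCount * hoursInMonth * ratePerVmHour;

        Console.WriteLine("Estimated monthly compute cost: {0}", monthlyCost);
    }
}
```

Running the same arithmetic before changing an instance count is a cheap way to catch the "expensive typo" the note warns about.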

Though changing the number of role instances is simple in the service configuration, it can take longer to update in the cloud than other service configuration changes. The reason behind this is that changing the instance count (or, to be specific, adding new instances) means that new machines must be found to run your code. Depending on the current state of the machine, some cleanup may be required before your code can be run on the new machines. Take this into account when making large changes to instance counts.

Role Size

In Chapter 2, you learned how the Windows Azure hypervisor takes a single beefy machine and chops it up into smaller virtual machine sizes. What if you don't want a small virtual machine? What if you want half the machine, or the entire machine itself?

Though this wasn’t available as part of the initial Windows Azure release, the ability to specify the size of the virtual machine was added in late 2009. To be specific, you get to choose how many “slices” of the machine you want combined. Each size has a different billing rate. As always, consult the Windows Azure website for up-to-date information on the latest billing rates.

To specify the size for your role, use the vmsize attribute in the service definition file. Since this is a service definition attribute, you cannot change this dynamically without reuploading your package, or doing a service upgrade.

Example 4-2 shows a sample service definition file with the virtual machine size set to "ExtraLarge" (the largest virtual machine size available). As of this writing, there are four virtual machine sizes, ranging from Small to ExtraLarge.

Example 4-2. Virtual machine size

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="Test"
 xmlns=
"http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WorkerRole name="Test" vmsize="ExtraLarge">
    <ConfigurationSettings/>
  </WorkerRole>
</ServiceDefinition>

Table 4-2 lists all the virtual machine sizes and their equivalent configuration if they were real machines. This is meant to act as a rule of thumb to give you an idea of the kind of performance you can expect. As you go from small to large, you get a bigger slice of the CPU, RAM, and hard disk, as well as I/O benefits. However, you should do performance and load testing with your target workload before picking the virtual machine size on which you want to run your code.

Table 4-2. Virtual machine reference sizes

Virtual machine size

Reference machine specifications

Small

One core, 1.75 GB RAM, 225 GB disk space

Medium

Two cores, 3.5 GB RAM, 490 GB disk space

Large

Four cores, 7.0 GB RAM, 1,000 GB disk space

ExtraLarge

Eight cores, 14 GB RAM, 2,040 GB disk space

Note

With the billing differences between virtual machine sizes, you can test various configurations to see which works out cheapest. This is especially true if your service doesn't scale strictly linearly as you add more role instances. For example, a bigger virtual machine size might work out cheaper than multiple role instances while delivering equivalent performance, or vice versa. Experiment with different sizes and numbers of instances to determine how your service performs, and find the cheapest option that works for you.

Service Definition and Configuration

Before digging into the web and worker roles, let’s look at two important files that you have touched a couple of times now: ServiceDefinition.csdef and ServiceConfiguration.cscfg. Using these files, you control everything from how much disk space your application can access, to what ports your service can listen on.

You’ll be modifying these files often when building services to be hosted on Windows Azure. The best way to modify these files is through the Visual Studio tool set, since it not only provides IntelliSense and shows you what all the options can be, but also does some basic checking for syntax errors. It is much quicker to catch errors through Visual Studio than it is to deal with an error when uploading a service configuration file through the Developer Portal, or through the management API.

Figure 4-2 shows Visual Studio’s Solution Explorer with the ServiceConfiguration.cscfg and ServiceDefinition.csdef files for the Thumbnails sample that ships with the SDK.

ServiceConfiguration and ServiceDefinition files in Visual Studio

Figure 4-2. ServiceConfiguration and ServiceDefinition files in Visual Studio

Though both of these files are closely related, they serve very different purposes, and are used quite differently. The single most important difference between the two files is that the definition file (ServiceDefinition.csdef) cannot be changed without rebuilding and reuploading the entire package, whereas the configuration file (ServiceConfiguration.cscfg) can be quickly changed for a running service.

Service Definition

The service definition is everything that goes into ServiceDefinition.csdef. When using the Visual Studio tools, you'll modify this file less often than the service configuration, because large portions are auto-generated for you. It essentially acts as a simplified service model for your service. It lays out a few critical components of your service:

  • The various roles used by your service.

  • Options for these roles (virtual machine size, whether native code execution is supported).

  • Input endpoints for these roles (what ports the roles can listen on). You’ll see how to use this in detail a bit later.

  • Local disk storage that the role will need.

  • Configuration settings that the role will use (though not the values themselves, which come in the configuration file).

Example 4-3 shows a sample service definition file, taken from the Thumbnails sample that ships with the SDK. When building a package, the CSPack tool uses this file to generate a Windows Azure service model that the fabric can understand, and that has the right roles, endpoints, and settings. When the fabric sees the service model file, it programs the network settings and configures the virtual machine accordingly (see Chapter 2 for the gory details).

Example 4-3. Sample service definition

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="Thumbnails" xmlns=
"http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WorkerRole name="Thumbnails_WorkerRole" vmsize="ExtraLarge">
    <ConfigurationSettings>
      <Setting name="DataConnectionString" />
      <Setting name="DiagnosticsConnectionString" />
    </ConfigurationSettings>
  </WorkerRole>
  <WebRole name="Thumbnails_WebRole" >
    <InputEndpoints>
      <!-- Must use port 80 for http and port 443 for
           https when running in the cloud -->
      <InputEndpoint name="HttpIn" protocol="http" port="80" />
    </InputEndpoints>
    <ConfigurationSettings>
      <Setting name="DataConnectionString" />
      <Setting name="DiagnosticsConnectionString" />
    </ConfigurationSettings>
  </WebRole>
</ServiceDefinition>

Don’t worry if this seems like a load of mumbo-jumbo at the moment. To effectively use the elements defined in the service definition file, you must use the Windows Azure Service Runtime API, which is discussed shortly. At that time, usage of these individual elements will be a lot clearer.

Service Configuration

The service configuration file (ServiceConfiguration.cscfg) goes hand in hand with the service definition file. There is one important difference, though: the configuration file can be updated without having to stop a running service. In fact, there are several ways in which you can update a running service without downtime in Windows Azure, and updating the configuration file is a key component.

The service configuration file contains two key elements:

Number of role instances (or virtual machines) used by that particular role

Note that the size of these virtual machines is configured in the service definition file. This element is critical because this is what you’ll be changing when you want to add more virtual machines to your service to help your service scale.

Values for settings

The service definition file defines the names of the settings, but the actual values go in the configuration file, and can be read using the Windows Azure Service Hosting API. This is where you’ll place everything from your storage account name and endpoints, to logging settings, to anything that you need tweaked at runtime.

Example 4-4 shows a sample service configuration file. In this example, taken from the Thumbnails sample, you can see a web role and a worker role, each with an instance count of one. You also see that both have two configuration settings, called DataConnectionString and DiagnosticsConnectionString. Later in this chapter, you'll see how to use the Service Runtime API to read the values of these settings.

Example 4-4. Sample service configuration file

<?xml version="1.0"?>
<ServiceConfiguration serviceName="Thumbnails"
 xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="Thumbnails_WorkerRole">
    <Instances count="1" />
    <ConfigurationSettings>
      <Setting name="DataConnectionString"
               value="UseDevelopmentStorage=true" />
      <Setting name="DiagnosticsConnectionString"
               value="UseDevelopmentStorage=true" />
    </ConfigurationSettings>
  </Role>
  <Role name="Thumbnails_WebRole">
    <Instances count="1" />
    <ConfigurationSettings>
      <Setting name="DataConnectionString" value="UseDevelopmentStorage=true" />
      <Setting name="DiagnosticsConnectionString"
               value="UseDevelopmentStorage=true" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

Since the service configuration can be changed at runtime, this is a good spot for placing several settings that you would want to change without bringing down your service. Here are some tips to effectively use your service configuration file:

  • Place your storage account name and credentials in your configuration file. This makes it easy to change which storage account your service talks to. This also lets you change your credentials and make your service use new credentials without having to bring down your service.

  • Place all logging options in your configuration file. Make it so that, by switching flags in your configuration on or off, you control what gets logged, and how.

  • Have different configuration files for staging, production, and any other test environments you’re using. This makes it easy to go from staging to production by just updating your configuration file.
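To make the last tip concrete, a production configuration might differ from the development one only in its setting values and instance count. The fragment below is a sketch: the EnableVerboseLogging setting and the account placeholders are hypothetical, not part of the Thumbnails sample.

```xml
<!-- Hypothetical production values for the Thumbnails web role -->
<Role name="Thumbnails_WebRole">
  <Instances count="4" />
  <ConfigurationSettings>
    <!-- Points at a real storage account instead of development storage -->
    <Setting name="DataConnectionString"
             value="DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=mykey" />
    <!-- A logging switch you can flip without redeploying -->
    <Setting name="EnableVerboseLogging" value="false" />
  </ConfigurationSettings>
</Role>
```

Swapping this file in when you move from staging to production changes credentials, logging, and scale in one step, without touching the package.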

Introducing the Service Runtime API

In Chapter 2, you saw how the Windows Azure fabric launches and hosts your application code, be it a web role or a worker role. Earlier in this chapter, you saw how the service definition and configuration files let you control various aspects of your service. This leads to some obvious questions. How do you interact with this environment set up for you by the Windows Azure fabric? How do you read settings from the configuration file? How do you know what your endpoints are? How do you know when the configuration is being updated?

As you might guess, the answer is the Service Runtime API. This API is meant for code that runs inside the cloud. It is meant to be included as a part of your web and worker roles. It lets you access, manipulate, and respond to the ambient environment in which your code runs.

Note

Some blogs refer to the Service Runtime API as the Service Hosting API; both names refer to the same thing. It is not to be confused with the Service Management API. See the sidebar How Is This Different from the Service Management API? for more on the difference between the two.

The API itself can be accessed in one of two ways. For managed code, the SDK ships with an assembly called Microsoft.WindowsAzure.ServiceRuntime.dll, which is automatically referenced when you create a new cloud service project using Visual Studio (Figure 4-3). For native code, header and library files ship with the SDK that let you call into the Service Runtime API using C. Both the native and managed libraries have equivalent functionality, and it is easy to translate code from one to the other, since they use the same concepts. In the interest of keeping things simple, this chapter covers only the managed API.

Service Runtime API’s types

Figure 4-3. Service Runtime API’s types

To use the Service Runtime API, add a reference in your projects to Microsoft.WindowsAzure.ServiceRuntime.dll. If you’re using the Visual Studio tools, you’re in luck, since it is one of the three assemblies added automatically by default, as shown in Figure 4-4. Bring in the namespace Microsoft.WindowsAzure.ServiceRuntime to use it in your code.

Automatic project references, including the service runtime

Figure 4-4. Automatic project references, including the service runtime

The most common use of the Service Runtime API is to access information about your service and your roles:

  • It enables you to access the latest value of your configuration settings as defined in the service definition and service configuration files. When you update your configuration for your running service, the API will ensure that the latest configuration value is returned.

  • It is also used to access the topology of your service—what roles are running, how many instances each role has, and so on.

  • In the case of worker roles, it is tightly bound to the role’s life cycle.

Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment is the important type to look at here.

Table 4-3 shows a short list of interesting methods and properties on that type, which you’ll be using frequently.

Table 4-3. RoleEnvironment properties/methods/events

Property/method/event on RoleEnvironment

Description

GetConfigurationSettingValue(string)

Returns the latest value of a configuration setting from ServiceConfiguration.cscfg.

GetLocalResource(string)

Gets the path to a "local resource": essentially writable hard disk space on the virtual machine you're running on.

RequestRecycle()

Recycles the role instance.

DeploymentId

A unique ID assigned to your running service. You’ll be using this when using the Windows Azure logging system.

Roles

A collection of all the roles of your running service and their instances.

Changed, Changing, StatusCheck, Stopping

Events you can hook to get notified of various points in the role’s life cycle.
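A few of these members can be exercised together. The sketch below assumes a local resource named "scratch" has been declared in the service definition (the name is an assumption for illustration; it must match a LocalStorage element in your own ServiceDefinition.csdef).

```csharp
using System;
using Microsoft.WindowsAzure.ServiceRuntime;

class EnvironmentInfo
{
    static void Dump()
    {
        // Unique ID for this running deployment; useful for tagging logs.
        Console.WriteLine("Deployment: {0}", RoleEnvironment.DeploymentId);

        // Path to a local resource named "scratch" (hypothetical name;
        // it must match a LocalStorage element in the service definition).
        LocalResource scratch = RoleEnvironment.GetLocalResource("scratch");
        Console.WriteLine("Writable disk space at: {0}", scratch.RootPath);

        // Walk the topology of the service: every role and its instances.
        foreach (var role in RoleEnvironment.Roles.Values)
        {
            Console.WriteLine("Role {0} has {1} instance(s)",
                role.Name, role.Instances.Count);
        }
    }
}
```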

Now that you’ve had a quick peek at the Service Runtime API, let’s see it in action.

Accessing Configuration Settings

Earlier in this chapter, you saw how to define configuration settings in ServiceDefinition.csdef and assign them values in ServiceConfiguration.cscfg. During that discussion, one critical piece was left out: how to read the values assigned and use them in your code. The answer lies in the GetConfigurationSettingValue method of the RoleEnvironment type.

Consider the two values defined in Examples 4-3 and 4-4. To read the values of the settings DataConnectionString and DiagnosticsConnectionString, you would call RoleEnvironment.GetConfigurationSettingValue("DataConnectionString") and RoleEnvironment.GetConfigurationSettingValue("DiagnosticsConnectionString"), respectively.
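Put together, reading both settings looks like this. Because the values live in ServiceConfiguration.cscfg, the calls return the latest values even after an in-place configuration update.

```csharp
using Microsoft.WindowsAzure.ServiceRuntime;

// Read the two settings defined in Examples 4-3 and 4-4.
string dataConnection =
    RoleEnvironment.GetConfigurationSettingValue("DataConnectionString");
string diagnosticsConnection =
    RoleEnvironment.GetConfigurationSettingValue("DiagnosticsConnectionString");
```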

Understanding Endpoints

By default, web roles can be configured to listen on port 80 and port 443, while worker roles don’t listen on any port. However, it is often necessary to make worker roles listen on a port. If you want to host an FTP server, an SMTP/POP server, a database server, or any other non-HTTP server, you’ll probably need your server to listen on other ports as well.

Using a combination of the service definition and the Service Runtime API, you can make your service listen on any TCP port and handle traffic, either from the Internet or from other roles.

Warning

As of this writing, Windows Azure does not support worker roles dealing with UDP traffic. For example, you cannot host DNS servers on Windows Azure, or any other server for a protocol that needs UDP.

At this point, you may be wondering why you should go to all this trouble. Why can’t an application listen on any port without having to declare it in the service definition? And why is the Service Runtime API involved in this at all?

The answer to these questions is twofold. The first part concerns security: Windows Azure tries to lock down open ports as much as possible, and opening only the ports your application actually uses is good security practice. The second part concerns port mapping: because of the way its networking capabilities are implemented, Windows Azure sometimes maps external ports to different internal ports. For example, though your service is listening on port 80 on the Internet, inside the virtual machine it will be listening on an arbitrary, random port number. The load balancer translates traffic between port 80 and your various role instances.

To understand this in detail, let’s look at a sample time server that must listen on a TCP port. Figure 4-5 shows a hypothetical service, MyServer.exe, listening on port 13 (which happens to be the port used by time servers). In the figure, there are two role instances of the worker role hosting the time server. This means the load balancer will split traffic between the two equally.

Worker role listening on external endpoint

Figure 4-5. Worker role listening on external endpoint

To make MyServer.exe listen on port 13, you must do two things. The first is to declare in your service definition file the port it is going to listen on (13, in this case) and assign it an endpoint name. You’ll use this endpoint name later in your code. This can be any string, as long as it is unique among all the endpoints you declare. (Your server can listen on as many TCP ports as it chooses.)

Example 4-5 shows the service definition file for MyServer.exe. It declares the service will listen on port 13 and the endpoint name is set to DayTimeIn. The protocol is set to tcp (the other options for protocol are http and https, which will set up your worker role as a web server).

Example 4-5. Declaring an endpoint in the service definition

<WorkerRole name="WorkerRole" enableNativeCodeExecution="true">
  <Endpoints>
    <!-- This is an external endpoint that allows a
          role to listen on external communication, this
          could be TCP, HTTP or HTTPS -->
    <InputEndpoint name="DayTimeIn" port="13" protocol="tcp" />
  </Endpoints>
</WorkerRole>

Now that an endpoint has been declared, Windows Azure will allow your traffic on port 13 to reach your role’s instances. However, there is one final issue to deal with.

Since Windows Azure can sometimes map external ports to random internal ports, your code needs a way to “know” which port to listen on. This can be a source of confusion—if you create a socket and listen on port 13, you won’t receive any traffic because you’re not listening on the correct internal port to which the load balancer is sending traffic.

To determine the right port to listen on, you must use the Service Runtime API. Specifically, you use the RoleEnvironment.CurrentRoleInstance.InstanceEndpoints collection. This contains all the endpoints you declared in your service definition, along with the port they’re mapped to inside the virtual machine.

Example 4-6 shows code that uses the Service Runtime API to retrieve the correct port and endpoint on which to listen. It then creates a socket and binds to that endpoint.

Example 4-6. Listening on the correct port

RoleInstanceEndpoint dayTimeIn =
    RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["DayTimeIn"];

IPEndPoint endpoint = dayTimeIn.IPEndpoint;

//Create socket for time server using correct port and address
// The load balancer will map network traffic coming in on port 13 to
// this socket

Socket sock = new Socket(endpoint.AddressFamily, SocketType.Stream,
            ProtocolType.Tcp);
sock.Bind(endpoint);
sock.Listen(1000);

//At this point, the socket will receive traffic sent to port 13
// on the service's address. The percentage of traffic it receives
// depends on the total number of role instances.
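Once bound, the socket from Example 4-6 can actually serve time requests. The loop below is a minimal sketch of the daytime protocol (which just sends the current time as text and closes the connection), not a production-quality server; it assumes the System.Text and System.Net.Sockets namespaces are in scope along with the sock variable from Example 4-6.

```csharp
// Continuing from Example 4-6: serve daytime requests on the bound socket.
while (true)
{
    Socket client = sock.Accept();

    // The daytime protocol sends the current time as one line of text,
    // then closes the connection.
    byte[] reply = Encoding.ASCII.GetBytes(
        DateTime.UtcNow.ToString("r") + "\r\n");
    client.Send(reply);
    client.Close();
}
```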

Understanding Inter-Role Communication

One important use of declaring endpoints is to enable communication between the various roles of your service. When the Windows Azure fabric spins up your role instances, each role instance gets an internal IP address (the DIP) that is accessible only from within the data center. However, you can’t get to this IP address even if you’re within the same service. Windows Azure locks down firewall and network settings for all ports by default.

Note

Windows Azure sets up networking within the data center so that only role instances belonging to your own service can access other role instances from your service. Several layers protect applications that run on Windows Azure from one another, and the way the network is set up is one of them.

To open up a port, you must declare an endpoint in your service definition. Isn’t this the same as opening up an endpoint to the Internet, like you did previously? Very close.

The difference between declaring an endpoint that anyone on the Internet can access and opening up a port for inter-role communication comes down to the load balancer. When declaring an InputEndpoint in your service definition like you did earlier in this chapter, you get to pick a port on the load balancer that maps to an arbitrary port inside the virtual machine. When you declare an InternalEndpoint (which is the XML element used in a service definition for inter-role communication), you don’t get to pick the port. The reason is that, when talking directly between role instances, there is no reason to go through the load balancer. Hence, you directly connect to the randomly assigned port on each role instance. To find out what these ports are, you use the Service Runtime API.

Figure 4-6 shows you how a web role can connect using inter-role endpoints to two worker role endpoints. Note the lack of a load balancer in the figure. Making this architecture work is simple, and requires two things: declaring the endpoint and connecting to the right port and IP address.

Let’s walk through both steps. The first step is to modify your service definition and add an InternalEndpoint element, as shown in Example 4-7. This is similar to the InputEndpoint element you used earlier in this chapter, except that you don’t specify the port.

Example 4-7. Inter-role endpoint

<WorkerRole name="MyWorkerRole" enableNativeCodeExecution="true">
  <Endpoints>
    <!-- Defines an internal endpoint for inter-role communication
      that can be used to communicate between worker
      or Web role instances -->
    <InternalEndpoint name="InternalEndpoint1" protocol="tcp" />
  </Endpoints>
</WorkerRole>
Inter-role communication

Figure 4-6. Inter-role communication

When your worker role starts, ensure that it listens on the correct port. The code to do this is the same as what was shown in Example 4-6.

Finally, you just need to discover the IP addresses and ports, and then connect to them. To do that, enumerate the RoleEnvironment.Roles collection to find your role, and then loop through each instance, as shown in Example 4-8. Once you have an IP address and a port, you can write any common networking code to connect to that role instance.

Example 4-8. Finding the right inter-role endpoint

foreach (var instance in RoleEnvironment.Roles["MyWorkerRole"].Instances)
{
    RoleInstanceEndpoint endpoint =
        instance.InstanceEndpoints["InternalEndpoint1"];

    IPEndPoint ipAndPort = endpoint.IPEndpoint;

    // ipAndPort contains the IP address and port. You can use it with
    // any type from the System.Net classes to connect.

}
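Once Example 4-8 has handed you an IPEndPoint, connecting is ordinary System.Net code. A sketch of opening a direct TCP connection to one instance:

```csharp
using System.Net;
using System.Net.Sockets;

// ipAndPort is the IPEndPoint retrieved in Example 4-8.
using (var client = new TcpClient())
{
    // Direct connection to the role instance; no load balancer involved.
    client.Connect(ipAndPort);

    // ... read/write on client.GetStream() using your own protocol ...
}
```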

Note

Though all the examples in this chapter show .NET code, it is easy to integrate this API with code not written for Windows Azure. For example, one common scenario is to run MySQL on Windows Azure. Since MySQL can’t call the Service Runtime API directly, a wrapper is written that calls the Service Runtime API, discovers the right port to listen on, and then launches the MySQL executable, passing the port as a parameter. Of course, whether you can do this depends on whether the software you’re working with has configurable ports.
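Such a wrapper might look roughly like the following sketch. The executable name, endpoint name, and argument syntax are all illustrative; real software will have its own way to accept a port:

```csharp
using System.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

// Discover the port Windows Azure assigned to this instance.
// "InternalEndpoint1" must match the name in your service definition.
int port = RoleEnvironment.CurrentRoleInstance
    .InstanceEndpoints["InternalEndpoint1"].IPEndpoint.Port;

// Pass the port to the external process on its command line.
// "mysqld.exe" and "--port" are placeholders; check your software's docs.
Process.Start("mysqld.exe", "--port=" + port);
```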

Subscribing to Changes

When your service runs on Windows Azure, it is running in a highly dynamic environment. Things change all the time: the configuration gets updated, role instances go down and come up, and role instances could disappear or be added.

The Service Runtime API provides a mechanism to subscribe to updates about your service. Let’s look at some common updates and how to get notified when they happen. All changes can be “hooked” using the RoleEnvironment.Changing and RoleEnvironment.Changed events. The argument, RoleEnvironmentChangedEventArgs, contains the actual changes.

The most common form of update occurs when you add or remove role instances. When you use inter-role communication as shown earlier, it is important to be notified of these changes, since you must connect to new instances when they come up. To get access to the role instances being added, you must poke through the RoleEnvironmentChangedEventArgs and look for RoleEnvironmentTopologyChange instances. Example 4-9 shows how to access the new instances when they are added to your service.

Example 4-9. Topology changes

RoleEnvironment.Changed += delegate(object sender,
    RoleEnvironmentChangedEventArgs e)
{
    foreach (var change in e.Changes)
    {
        var topoChange = change as RoleEnvironmentTopologyChange;
        if (topoChange != null)
        {
            // Name of role that experienced a change in instance count
            string role = topoChange.RoleName;
            // Get the new instances
            var instances =
                RoleEnvironment.Roles[role].Instances;
        }
    }
};

Another common set of changes is when configuration settings are updated. Example 4-10 shows how to use the same event handler to look for new configuration settings.

Example 4-10. Configuration change

RoleEnvironment.Changed +=
     delegate(object sender, RoleEnvironmentChangedEventArgs e)
{
    foreach (var change in e.Changes)
    {
        var configChange =
            change as RoleEnvironmentConfigurationSettingChange;

        if (configChange != null)
        {
            // Fetch the new value for the changed configuration setting
            string newValue = RoleEnvironment.GetConfigurationSettingValue(
                configChange.ConfigurationSettingName);
        }
    }
};

Looking at Worker Roles in Depth

Web roles are fairly straightforward—they are essentially ASP.NET websites hosted in IIS. As such, they should be familiar to anyone well versed in ASP.NET development. All common ASP.NET metaphors—from web.config files to providers to HTTP modules and handlers—should “just work” with web roles.

Worker roles are a different story. Though the concept is simple, their implementation is unique to Windows Azure. Let’s dig into what a worker role is and how to use it.

A worker role is the Swiss Army knife of the Windows Azure world. It is a way to package any arbitrary code, from something as simple as creating thumbnails to something as complex as an entire database server. The concept is simple: Windows Azure calls a well-defined entry point in your code, and runs it as long as you want. Why is this so useful?

With a web role, your code must fit a particular mold: it must be able to serve HTTP content. With a worker role, you have no such limitation. As long as you don’t return from the entry point method, you can do anything you want. You could do something simple, such as pulling messages from a queue (typically, a Windows Azure storage queue, something you’ll see later in this book). In fact, this is the canonical example of how worker roles are used. Or you could do something different, such as running a long-running transaction, or even launching an external process containing an entire application. Indeed, this is the recommended way to run applications written in non-.NET languages. There is little that can’t be squeezed into the worker role model.

Creating Worker Roles

Creating a worker role is simple using the Visual Studio tools. Figure 4-7 shows how to add a new worker role to your project using the new cloud service dialog (you can add one to an existing solution, too, if you prefer). This will generate a new Visual Studio project for you, along with the correct entries in ServiceDefinition.csdef.


Figure 4-7. Creating a worker role using the Visual Studio tools

A worker role project is very similar to a normal .NET class library project, except that it is expected to contain a class that inherits from RoleEntryPoint and overrides a few methods. When your worker role code is run, the assembly is loaded and run inside a 64-bit hosting process.

Understanding the Worker Role Life Cycle

To understand the life of a worker role, let’s look at the boilerplate code that Visual Studio generates for you. Example 4-11 shows the code. The key things to notice are the various methods.

Example 4-11. Worker role boilerplate code

public class WorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        // This is a sample worker implementation. Replace with your logic.
        Trace.WriteLine("WorkerRole1 entry point called", "Information");

        while (true)
        {
            Thread.Sleep(10000);
            Trace.WriteLine("Working", "Information");
        }
    }

    public override bool OnStart()
    {
        DiagnosticMonitor.Start("DiagnosticsConnectionString");

        // Restart the role upon all configuration changes.
        // Note: To customize the handling of configuration changes,
        // remove this line and register custom event handlers instead.
        // See the MSDN topic on "Managing Configuration Changes"
        // for further details
        // (http://go.microsoft.com/fwlink/?LinkId=166357).
        RoleEnvironment.Changing += RoleEnvironmentChanging;

        return base.OnStart();
    }

    private void RoleEnvironmentChanging(object sender,
        RoleEnvironmentChangingEventArgs e)
    {
        if (e.Changes.Any(
            change => change is RoleEnvironmentConfigurationSettingChange))
        {
            e.Cancel = true;
        }
    }
}

Let’s walk through the life of a worker role:

  1. When Windows Azure launches your worker role code, it calls the OnStart method. This is where you get to do any initialization, and return control when you’re finished.

  2. The Run method is where all the work happens. This is where the core logic of your worker role goes. This can be anything from just sleeping for an arbitrary period of time (as was done in Example 4-11), to processing messages off a queue, to launching an external process. Anything goes.

  3. You have other event handlers for other events in the worker role life cycle. The RoleEnvironmentChanging event handler lets you deal with networking topology changes, configuration changes, and any other role environment changes as you saw earlier. The code inside the event handler can choose to either restart the role when it sees a change, or say, “This change is trivial—I can deal with it and I don’t need a restart.”
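For example, if your role can apply configuration changes on the fly, its Changing handler can leave e.Cancel as false for those changes and request a restart only for topology changes, applying new settings via the Changed event instead. A sketch of that variant:

```csharp
using System.Linq;
using Microsoft.WindowsAzure.ServiceRuntime;

RoleEnvironment.Changing += delegate(object sender,
    RoleEnvironmentChangingEventArgs e)
{
    // Restart only when the set of role instances changes;
    // configuration-only changes are handled in place, so we
    // leave e.Cancel false for them.
    e.Cancel = e.Changes.Any(
        change => change is RoleEnvironmentTopologyChange);
};
```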

Understanding Worker Role Patterns

Though worker roles can be used in a wide variety of ways, you’ll typically find their usage falls into one of a few very common patterns.

Queue-based, asynchronous processing

The queue-based, asynchronous processing model is shown in Figure 4-8. Here, worker roles sit in a loop and process messages off a queue. Any queue implementation can be used, but since this is Windows Azure, it is common to use the Windows Azure storage queue service. Typically, a web role will insert messages into the queue—be it images that need thumbnails, videos that need transcoding, long-running transactions, or any job that can be done asynchronously. Since you want your frontend web roles to be as responsive as possible, the idea here is to defer all expensive work by inserting work items into a queue, and having them performed in a worker role.


Figure 4-8. Queue-based worker roles

There are several advantages to this architecture. Chief among them is that your web roles and worker roles are asynchronously coupled, so you can scale one tier (adding or removing instances) without affecting the other. You can also upgrade one of them without affecting the other, or without incurring service downtime.
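For this pattern, the worker's Run method is typically a small polling loop. A sketch, assuming the Windows Azure storage client library's queue types (covered later in this book); ProcessWorkItem stands in for your own logic:

```csharp
// CloudQueue and CloudQueueMessage come from the Windows Azure
// storage client library; obtaining the queue reference is omitted.
CloudQueue queue = /* ... obtain a reference to your work queue ... */;

while (true)
{
    CloudQueueMessage message = queue.GetMessage();
    if (message != null)
    {
        ProcessWorkItem(message.AsString);  // your own logic
        queue.DeleteMessage(message);       // remove only after success
    }
    else
    {
        Thread.Sleep(1000);  // back off when the queue is empty
    }
}
```

Deleting the message only after processing succeeds means a crashed worker's message reappears on the queue and is picked up by another instance.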

Caching layer

More complex services have multiple layers. For example, it is common to have a frontend layer consisting of web roles, and a caching layer consisting of a distributed cache system (such as memcached), with the two sets of roles talking to each other through inter-role communication.
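For instance, a web role could build its list of cache servers by enumerating the cache tier's internal endpoints. A sketch, where "CacheWorkerRole" and "MemcachedEndpoint" are illustrative names that would have to match your service definition:

```csharp
using System.Linq;
using Microsoft.WindowsAzure.ServiceRuntime;

// Collect the IP/port of every cache instance's internal endpoint.
// Role and endpoint names here are placeholders.
var servers = RoleEnvironment.Roles["CacheWorkerRole"].Instances
    .Select(i => i.InstanceEndpoints["MemcachedEndpoint"].IPEndpoint)
    .ToList();

// Hand 'servers' to your memcached client library of choice.
```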

Summary

Worker roles are the jack-of-all-trades of the Windows Azure world. For almost any application you can think of that needs to run on Windows Azure, there is probably a way to squeeze it into a worker role. Get creative in how you use worker roles, and you can achieve a great deal of flexibility.
