May 2009

Volume 24 Number 05

Cloud Computing - Patterns For High Availability, Scalability, And Computing Power With Microsoft Azure

By Joshy Joseph | May 2009

Code download available

This article is based on a prerelease version of Microsoft Azure. Current guidance for Azure High Availability can be found here.

This article discusses:

  • Cloud computing architecture
  • Microsoft Azure
  • Cloud patterns and best practices
This article uses the following technologies:
Microsoft Azure

Contents

Cloud Computing
Azure Implementation
Cloud Application Types and Patterns
A Real-World Sample
Application Design and Development
Deploying the Service
Administration
Conclusion

During the last decade, the decoupling of interfaces from implementation, scalable hosting models, service orientation, subscription-based computing, and increased social collaboration became the goals of distributed systems. Now, Internet-hosted distributed applications with connectivity to internal applications—often referred to as Software plus Services (S+S)—are gaining popularity. Organizations are leveraging datacenters hosted by third parties to alleviate concerns about hardware, software, reliability, and scalability. These are just some of the new architecture trends that help you build interoperable applications that scale, reduce capital expenditure, and improve reliability. Cloud computing offers many of these benefits.

A cloud computing platform enables applications to be hosted in an Internet-accessible virtual environment that supplies the necessary hardware, software, network, and storage capacities and provides for security and reliability, removing much of the burden of purchasing and maintaining hardware and software in-house. In the cloud, you can develop, deploy, and manage applications as you have in the past and integrate these services to your on-premise applications. You pay only for the time, resources, and capacity you use while scaling up to accommodate the changing business needs.

In this article, I will examine the typical cloud platform architecture and some common architectural patterns, along with their implementation on the Azure offering from Microsoft.

Cloud Computing

Figure 1 illustrates the typical cloud computing platform architecture.


Figure 1 Layered Architecture of a Cloud Platform

In this model, each layer abstracts the layer below it, exposing interfaces that the layers above build upon. There is no hard dependency between layers: each layer offers a composable, plug-and-play architecture that can incorporate services from other layers, and each layer provides horizontal scalability as needed.

As you can see, a cloud platform is composed of a number of subsystems. Let's look at each one next.

A hosting platform The hosting platform provides the physical, virtual, and software assets. These assets include physical machines, operating systems, network systems, storage systems, power management, and virtualization software. Bare metal and other operational resources are abstracted as virtual resources to the layers above.

Cloud infrastructure services The most important function of this layer is to abstract the hosting platform as a set of virtual resources and manage those resources based on scalability and availability needs. Fundamentally, this layer provides three kinds of abstract resources: compute, storage, and network, and exposes a set of APIs to access and manage these resource abstractions. Thus you gain access to the underlying physical resources without knowing the details of the underlying hardware and software and can control these systems efficiently through configuration. Services offered by this subsystem are often known as Infrastructure as a Service (IaaS).

Cloud platform services Developing and managing software for cloud computing is complex. It becomes really complex when you integrate on-premise software with hosted services. Platform services provide a set of capabilities exposed as services to help with such integration. For example, in the Azure Services Platform, Microsoft .NET Service Bus helps with discovery and access while the Microsoft .NET Access Control Service helps role- and rule-based claims transformation and mapping. Availability of platform services may differentiate one cloud provider from another. Services provided by this layer are referred to as Platform as a Service (PaaS).

Cloud applications This layer houses applications that are built for cloud computing. These applications expose Web interfaces and Web Services for end users, enabling multitenant hosting models. Some functions include connecting disparate systems and leveraging cloud storage infrastructure to store documents. These services fall under the umbrella of Software as a Service (SaaS).

Security services Security services ensure token provisioning, identity federation, and claims transformation. These services are built on the open standards, WS-Security, WS-Trust, WS-Federation, SAML protocols, and OpenID, for greater interoperability.

Management services Management interfaces cut across all the layers described above. The hosting platform leverages management interfaces and agents for automated scalability and availability administration. Even though the cloud is hosted and managed in a datacenter, customers may need functions that allow them to easily control their application and post deployment configurations, get analytics about service usage, and connect their enterprise management systems.

Tools Tools help you build, test, and deploy applications into the cloud. These tools may be extensions to existing tools (Visual Studio Tools for Azure, for instance) or hosted tools from a specific cloud provider.

Users and providers of cloud computing There are three categories of users in cloud computing: cloud platform providers, cloud consumers, and end users. Cloud platform providers provide the hosting platform and cloud infrastructure services. Cloud consumers utilize the cloud platform and develop applications and services to be consumed by end users. Cloud consumers configure applications for scalability, availability, and security needs. End users leverage the services offered by cloud consumers. These users may be humans, organizations, or machines, and may be hosted anywhere.

In this context, Azure provides a cloud platform while cloud consumers (ISVs building integrated cloud solutions, or enterprises) leverage this platform to build applications. For example, the Live Mesh data synchronization platform leverages the Azure Services Platform and Microsoft Azure to develop and host S+S services for end users.

Figure 2 maps the Azure Services Platform to the layered architecture in Figure 1. This platform provides a set of services to application developers. These services can be used both by applications running in the cloud and by applications running on local systems. Azure, an operating system for the cloud, is the foundation of Microsoft's cloud platform offering. As shown in Figure 2, the Azure Services Platform provides a set of shared services: SQL Data Services, .NET Services, and Live Services, which can be used individually or collectively. In addition, Microsoft offers various finished cloud applications including Exchange Online, SharePoint Online, and CRM Online. Here, however, I'll focus on only the Azure operating system and related patterns.


Figure 2 Azure Services Platform Mapped to Layered Architecture

Azure Implementation

Azure provides on-demand compute and storage capabilities to host, scale, and manage Web applications and services on the Internet hosted in Microsoft datacenters.

Azure provides features that consumers of cloud services require. For example, physical hardware resources are abstracted away and exposed as compute resources ready to be consumed by cloud applications. Physical storage is abstracted with storage resources and exposed through well-defined storage interfaces. A common Windows fabric abstracts the physical hardware and software platform and exposes virtualized compute and storage resources. In addition, each instance of the application is monitored for availability and scalability, and automatically managed. For instance, if an application in an instance goes down, the Fabric controller (shown in Figure 3) will be notified and another instance in another virtual machine (VM) will be instantiated with limited impact to end users. Because of the amount of virtualization, when writing your code, you need to make sure that you don't make any assumptions about the state of the machine hosting your application. In Azure your service could easily be moved to a new VM. Azure follows a model-driven service management design with Azure Fabric Controller responsible for mapping declarative service specifications to available resources and managing the lifecycle of the services.


Figure 3 Microsoft Azure and Roles

By default, most applications developed in .NET can be hosted in Azure, with some specific restrictions on partial trust models, data storage/access, and inter-application communication. A rich developer SDK and programming tools help you get oriented to this platform.

Right now you can create .NET applications, ASP.NET applications, and WCF-based Web services, using tools supported in Visual Studio.

Azure currently offers two processing methods, the Web role and the Worker role (see Figure 3). Each role executes on a separate VM and communicates with the Azure Fabric through an agent. The agent collects resource metrics and node metrics including VM usage, application status, logs, resource usage, exceptions, and failure conditions. It should be noted that each VM may be executed on a single physical host or on a Windows 2008 hypervisor VM. The specific runtime host configuration is dictated by Azure, depending on the service-level agreement and other business/technical needs. A Web role hosts interactive Web applications and provides in-bound and out-bound connections (request-response patterns). In-bound calls are routed through Azure-provided load balancers to provide high availability. As you may have noticed, this mandates that each Web role instance be stateless so that the fabric can route requests to any Web role in the cluster.

The worker role is a specialized application executing a .NET application in the background. These applications don't have in-bound connectivity from external applications. However they can send messages to external services. While interacting with Web roles, Worker roles send/receive messages using the Azure Queue storage service. Another interesting aspect is the scale-out capability of the roles. An application deployment administrator can state how many instances of any role may be needed in the configuration and the Fabric will decide on running these instances depending on the system scale-out needs. In short, Azure provides always-on, on-demand deployment and failure handling.
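
To make the Web-to-Worker hand-off concrete, here is a minimal sketch of the sending side. It assumes the StorageClient sample library that ships with the SDK (the same QueueStorage, MessageQueue, and Message classes used in Figure 9 later in this article); the queue name and message payload are illustrative only.

```csharp
// Web role side: enqueue a work item for a Worker role to pick up.
// QueueStorage, MessageQueue, and Message come from the SDK's
// StorageClient sample library.
StorageAccountInfo account =
    StorageAccountInfo.GetDefaultQueueStorageAccountFromConfiguration();
QueueStorage service = QueueStorage.Create(account);
MessageQueue queue = service.GetQueue("fabrikamimageprocessingqueue");
queue.CreateQueue(); // safe to call if the queue already exists

// Messages are limited to 8KB, so send a reference to the asset
// (for example, its blob name), not the asset itself.
queue.PutMessage(new Message("thumbnail:image_identifier1"));
```

The Worker role then polls the queue with GetMessage and calls DeleteMessage once the task completes, as Figure 9 shows.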

So far I've covered general concepts around cloud platforms and specific features of Azure. Next I will introduce some core cloud application types, architecture patterns, and best practices that you should be familiar with when developing applications for cloud platforms.

Cloud Application Types and Patterns

There are a number of architecture and design patterns and best practices that help you select a cloud platform and implement cloud services and applications. In general, these patterns fall into four categories: compute, storage, communication, and management.

Compute patterns Once you know which application type you are dealing with, you should select the appropriate compute pattern. As I mentioned, the Web role is used for building interactive application patterns, while the Worker role is suitable for building background and scheduler tasks. In some cases you may need both features. One important consideration when planning your compute tasks is to remember to execute those tasks in such a way as to avoid moving large amounts of data around.

Storage patterns Cloud storage provides remote storage and abstracts the storage medium away from the users. The design is sufficiently flexible to support a wide range of application requirements. Azure addresses two patterns of cloud storage: table storage and blob storage. The table storage pattern allows the applications to store key/value pairs following a table structure while the blob storage pattern can be used to store any data.

Communication patterns These patterns address message exchange. Azure technology leverages Windows Communication Foundation (WCF) and REST APIs for Web service communication. You must consider partial trust models and the stateless nature of the application while implementing communication patterns.

Administration patterns Administration patterns differentiate two core aspects of service management: service deployment and service-level-management. Deployment patterns organize service definition, configuration, and monitoring, while other patterns address service-level management and regular operational maintenance.

Now let's look at cloud application types in detail. I will classify cloud application types into three categories based on the types of scenarios each addresses. The first category is Web applications. These include traditional hosted Web applications as well as emerging composite applications that may utilize two or more data sources and services. These applications need automatic scale-out and scale-down capabilities. An application like Facebook is a good example. In such scenarios the organization may be a startup that wants to spend little capital on infrastructure while being able to handle increasing demand.

Next there are the analytical applications, whose main function is to run processor-intensive operations and data mining, often over the same data many times; these require access to a great deal of storage capacity and processing power all at once. There is no need to pay for such huge capacity twenty-four hours a day, seven days a week, however, so cloud services are appealing.

Finally, there are the parallel computing applications that need to perform multiple tasks in parallel so that a huge project can be executed in a short period of time. Again, paying for the one-time-only large capacity that cloud computing can provide is a cost-effective solution.

Not all applications are suitable for running on the cloud platform. The obvious limiting factors include data security, potential lock-in with a cloud provider, open interfaces for communications, trust model limitations, efficiency of moving data in and out of the cloud, integration with existing services outside of the cloud platform, and legal/privacy concerns. Figure 4 lists some scenarios that are suitable for cloud computing, and Figure 5 lists suitable patterns, scenarios, and features.

Figure 4 Application Types Suitable for Cloud Computing

  • Web Applications. Context/Scenario: Host traditional Web apps and interactive apps that compose two or more data sources and services. Example: A start-up company creating a Web collaboration application while expecting to spend little capital on infrastructure.
  • Parallel Computing. Context/Scenario: Large-scale parallel execution of compute tasks. Normally these tasks execute for a short period of time, utilizing more compute and storage resources. Example: A newspaper decides to digitize its reports for better Web consumption.
  • Analytical Applications. Context/Scenario: Analytical processing that executes various analytical and data mining algorithms over the same data again and again. Example: A financial company executing Monte Carlo simulations on financial data to assess risk periodically.
Figure 5 Patterns, Features, and Their Corresponding Scenarios

Compute
  • On-Demand Application Instance. Context: Applications that need scale-out and scale-down capabilities. Azure Feature: Automated management of Web and Worker roles. Example: Letting retail store sites stay available during special events.
  • Worker. Context: Executing parallel batch jobs or background applications. Azure Feature: Leveraging multiple instances of Worker roles to execute background tasks. Example: Using schedulers for analytics processing by executing background tasks in parallel.

Storage
  • Blob Storage. Context: Storing large amounts of unstructured data. Azure Feature: Azure Blob storage. Example: A company storing legal compliance reports in a backup store.
  • Structured Storage. Context: Storing data in a table structure without demanding full relational semantics. Azure Feature: Azure Table storage. Example: Structured storage to maintain a Web application's state, such as purchase order or shopping basket information.

Communication
  • Service Interface (Web and Web service API). Context: Exposing application capabilities through UI and Web services. Azure Feature: Support for building applications using ASP.NET, Silverlight, and WCF Web services. Example: A company building a digital asset management solution that exposes APIs to other services.
  • Service-Oriented Integration. Context: Invoking external Web services using Web-standard protocols. Azure Feature: Support for WCF clients and REST APIs. Example: Web applications leveraging Microsoft-hosted Live Meeting services for collaboration.
  • Messaging. Context: Sharing messages between applications in a scalable, reliable, and asynchronous way. Azure Feature: The Azure Queue storage service for Web-to-Worker role communication. Example: A Web application informing a scheduler to execute a specific task.

Administration
  • Cloud Deployment. Context: Deploying applications with desired configurations such as scale-out and high-availability requirements. Azure Feature: Separation of service definition, service configuration, and packaging to help the appropriate roles. Example: A retail store configuring its Web portal to automatically scale out when usage exceeds a threshold and scale down as needed.
  • Design for Operations. Context: Making an application operations-ready by providing health status and logging. Azure Feature: The RoleManager.WriteToLog API and overriding RoleEntryPoint.GetHealthStatus() in the Worker role. Example: Developers designing cloud applications to be operations-friendly through instrumentation.
  • Service Instance Management. Context: Starting, stopping, and suspending cloud apps; managing service configurations. Azure Feature: Automatic handling of dynamic configuration changes and failure conditions. Example: A Web application administrator using the Azure portal to manage application state.
  • Management Alerts. Context: Sending instant messages, e-mail, or alerts about resource and billing information. Azure Feature: Provided through Live integration. Example: Enabling applications to send IMs; default notification on resource usage.
  • Service Level Management. Context: Getting information on application resource consumption, such as processor time and bandwidth. Azure Feature: Automated service management via a model-based approach. Example: An ISV looking for billing and resource usage information about a deployed application.

In Azure, Web roles are suitable for executing interactive Web applications while Worker roles are suitable for background applications. Data is stored in Azure blob and table storage services. Inter-role communication is enabled through a Queue service, which provides an asynchronous message exchange. The platform supports WCF-based Web services and clients with Basic HTTP and WS* binding. Storage services are exposed through REST-style interfaces for easier consumption by other applications and cloud providers.

As you know, patterns offer flexible solutions to common problems in software development. Next I will identify common pattern categories corresponding to the above application types.

Rather than trying to describe each individual pattern in depth, I'll focus on broad problem areas and associated pattern categories and show how these patterns are implemented in the context of Azure development.

A Real-World Sample

Let me introduce a scenario that will help illustrate some of these concepts. Fabrikam, an ISV, would like to build a hosted digital asset management Web application. This application enables end users to store digital images in the cloud. Users must be able to preview the pictures submitted, tag them, and annotate them with location information. The application must allow users to use their own desktop applications to access these pictures directly from the storage. Figure 6 represents a high-level architectural view of the services exposed by Fabrikam. As shown in the diagram, Fabrikam exposes a Web application and Web service to the end users for managing their digital assets. These services store and retrieve digital assets from the asset store implemented on top of Azure blob services. Fabrikam implements a background processor to execute tasks such as tagging, thumbnail generation, and so on.


Figure 6 Fabrikam Hosted in Azure

Let's see how to design, develop, and deploy such an application to the Azure Services Platform. The discussions that follow demonstrate core concepts and features available in Azure. You can download the code and documentation to see how the entire scenario is implemented and hosted in Azure.

Application Design and Development

The first step is to understand the application scenarios and select appropriate patterns, best practices, and programming models. During this process you will identify the type of the application and workloads targeting Azure. In addition, you will figure out the inter-application messaging and data storage patterns. Here I'll assume that Fabrikam is building an interactive Web application with scale-out needs and a background processor to generate metadata from digital assets. This scenario needs Azure Web and Worker roles, respectively. In addition, Fabrikam needs Blob storage to store digital assets and metadata, service interfaces to expose assets, and messaging patterns for communication.

To facilitate development, the Microsoft Azure SDK comes with a development environment that simulates Azure on the local machine, thereby enabling offline development and debugging. The SDK supports Web and Worker roles, storage, Visual Studio templates, tools for packaging, viewing, and managing role instances, and a Visual Studio add-in that enables the F5 experience. You can download the SDK from the Microsoft Download Center.

To begin, use the Azure Tools for Microsoft Visual Studio to create a new project using the Cloud Service project templates. You can add the necessary user interface, application logic, and input processing to the Web application. Alternatively, you could start from an existing ASP.NET Web application. Because the application will execute under partial trust, avoid calling APIs that violate this restriction. For details about this restriction, see the Microsoft Azure SDK Trust Policy Reference.

Next, reference the Microsoft.ServiceHosting.ServiceRuntime assembly and use its APIs to enable logging, read local environment configuration, and read application configuration sections from the service definition file, as shown below.

// Writing log information
RoleManager.WriteLog("Information", "message to show in the log file");

// Getting service configuration information
RoleManager.GetConfigurationSetting("MyConfigSection");

// Getting the local environment; the snippet below gets a pointer
// to a local file
ILocalResource resource = RoleManager.GetLocalResource("ComputeLocalStorage");

It is a programming best practice to replace the above logging and resource access code with reusable application blocks (Logging and Configuration blocks) such as the Microsoft Enterprise Library application blocks. You can implement specific providers for Azure logging and configuration management and plug them into the runtime. These best practices decouple applications from platform-specific APIs and help increase reusability between on-premises and cloud environments.
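
As a minimal sketch of this decoupling (the interface and class names here are hypothetical, not part of the SDK or Enterprise Library), the application can log against a small provider interface while platform-specific implementations are plugged in through configuration:

```csharp
using System;

// Hypothetical logging abstraction; application code depends only on this.
public interface ILogWriter
{
    void Write(string severity, string message);
}

// On-premises implementation: write to the console (or a file or event log).
public class ConsoleLogWriter : ILogWriter
{
    public void Write(string severity, string message)
    {
        Console.WriteLine("{0}: {1}", severity, message);
    }
}

// Cloud implementation: delegate to the Azure runtime API.
// Compiling this class requires the Microsoft.ServiceHosting.ServiceRuntime
// assembly referenced earlier.
public class AzureLogWriter : ILogWriter
{
    public void Write(string severity, string message)
    {
        RoleManager.WriteLog(severity, message);
    }
}
```

With this in place, only the provider registration changes between the on-premises and cloud deployments; the rest of the application never touches RoleManager directly.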

Your application's storage access will need to be rewritten to take advantage of the storage patterns Azure supports. I will address this shortly.

Next, add a Worker role project into your project and then open WorkerRole.cs. This file includes a class called WorkerRole that derives from the RoleEntryPoint class. This class provides methods to manage the initialization, starting, and stopping of a Worker role service, as well as for monitoring the health of the service.

In this step you normally override two methods, as shown in Figure 7.

Figure 7 WorkerRole

public class WorkerRole : RoleEntryPoint
{
    public override void Start()
    {
        RoleManager.WriteToLog("Information", "Generating metadata from digital assets");
        while (true)
        {
            RoleManager.WriteToLog("Information", "background task is executing thumbnail extraction");
            Thread.Sleep(1000);
        }
    }

    public override RoleStatus GetHealthStatus()
    {
        return RoleStatus.Healthy;
    }
}

Now run the application in the local development fabric by hitting F5. You will see logs written in the Web and Worker Roles console instances.

Now let's configure a blob store and access it from the Web role.

First you need to set up the blob storage account information. You'll need to specify the following parameters:

  • AccountName: The name of your Azure account.
  • AccountSharedKey: The key used to authenticate requests made against Azure storage. To authenticate a request, you must sign it with the key for the account making the request.
  • BlobStorageEndpoint: The base URI of the blob storage service.
  • ContainerName: The name of the blob container used to store images for this application.

For example, take a look at the following configuration settings:

<ConfigurationSettings>
  <Setting name="AccountName" value="fabrikamAccount" />
  <Setting name="AccountSharedKey" value="Eby111M02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBekso45Gw==" />
  <Setting name="BlobStorageEndpoint" value="http://127.0.0.1:10000"/>
  <Setting name="ContainerName" value="fabrikamImagegallery"/>
</ConfigurationSettings>

The REST API for Blob Storage exposes two resources: containers and blobs. A container is a set of blobs; every blob must belong to a container. Blobs are written using PUT operations as defined by the REST API. Using these APIs you can create a hierarchical namespace to organize assets, such as fabrikamImagegallery/images and fabrikamImagegallery/documents, where fabrikamImagegallery is the name of the container and images is part of the blob name. It should be noted that each blob can be constructed from blocks of up to 4MB each for efficient management.
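
As a rough illustration of the REST shape (using the local development storage endpoint from the configuration shown earlier; the blob name, date, length, and signature are placeholders), creating a blob is a single PUT against the container/blob URI:

```
PUT /fabrikamAccount/fabrikamImagegallery/images/photo1.jpg HTTP/1.1
Host: 127.0.0.1:10000
x-ms-date: Fri, 01 May 2009 12:00:00 GMT
Content-Length: 348911
Authorization: SharedKey fabrikamAccount:<signature computed with AccountSharedKey>

<binary image data>
```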

Working with the REST API directly is rather cumbersome, so the SDK ships with a sample library that abstracts the REST calls and exposes well-known access patterns through the BlobContainer and StorageAccountInfo classes.

Figure 8 shows how to work with Blob storage.

Figure 8 Using Blob Storage

// Create a blob container using the account info from the configuration
BlobStorage blobStorage = BlobStorage.Create(
    StorageAccountInfo.GetDefaultBlobStorageAccountFromConfiguration());
BlobContainer container = blobStorage.GetBlobContainer("ContainerName");
container.CreateContainer(null, ContainerAccessControl.Public);

// Enumerate the blobs in the container
container.ListBlobs(String.Empty, false);

// Store an image blob along with its metadata
BlobProperties properties = new BlobProperties(
    string.Format(CultureInfo.InvariantCulture, "image_{0}", "identifier1"));
NameValueCollection metadata = new NameValueCollection();
metadata["Id"] = id;
metadata["Filename"] = "filename";
metadata["Tags"] = "Image";
properties.Metadata = metadata;
container.CreateBlob(properties, imageBlob, true);

Now, let's create a Queue storage service to send messages from a Web role to a Worker role. The Azure Queue service provides reliable, persistent messaging within and between services. The REST API for the Queue service exposes two resources: queues and messages. It is possible to create an unlimited number of queues, each identified by a unique name. The maximum message size is restricted to 8KB. You are responsible for deleting each message after reading it; otherwise, once its visibility timeout expires, the message reappears in the queue.

Figure 9 shows how to access Queue storage and messages in a Worker Role.

Figure 9 The Queue Storage Operations

public class WorkerRole : RoleEntryPoint
{
    public override void Start()
    {
        Uri baseUri = new Uri(RoleManager.GetConfigurationSetting("QueueEndpoint"));
        string accountName = RoleManager.GetConfigurationSetting("AccountName");
        string accountKey = RoleManager.GetConfigurationSetting("AccountSharedKey");
        StorageAccountInfo account =
            new StorageAccountInfo(baseUri, null, accountName, accountKey);

        QueueStorage service = QueueStorage.Create(account);
        MessageQueue queue = service.GetQueue("fabrikamimageprocessingqueue");

        while (true)
        {
            Thread.Sleep(10000);
            if (queue.DoesQueueExist())
            {
                Message msg = queue.GetMessage();
                if (msg != null)
                {
                    // Process the message here, then delete it so it is
                    // not delivered again.
                    queue.DeleteMessage(msg);
                }
            }
        }
    }
    ...

Now you are ready to try out this application in the local development fabric by hitting F5. You will see Web roles receiving inputs and sending messages to Worker roles through Queues. By now you have completed all the necessary development activities and are ready to publish the application to Azure.

Deploying the Service

Deploying services to Azure is relatively simple. However, you first need to decide what information belongs in the service definition file (the application's structure and settings) versus the service configuration file (environment-specific values), how to package the application, whether you can leverage existing tools to create and upload packages, and how to store your packages in cloud storage for later deployment to Azure.

Here's how I will address these requirements for the sample application.

Fabrikam ISV has developer, deployment, and operational roles to manage its cloud applications. Developers create a service definition file that contains information about the application. The service definition file (.csdef) defines the roles available to a service, specifies the service endpoints, and establishes configuration parameters for the service. The service definition settings cannot be changed after a service is deployed. This file becomes part of the Service package file during the packaging process.
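
A skeletal service definition for the Fabrikam sample might look like the following. The service name, role names, and endpoint are illustrative, the settings mirror the configuration shown earlier, and the schema is that of the current CTP, which may change:

```xml
<ServiceDefinition name="FabrikamAssetService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="WebRole">
    <InputEndpoints>
      <!-- External HTTP endpoint, routed through the Azure load balancer -->
      <InputEndpoint name="HttpIn" protocol="http" port="80" />
    </InputEndpoints>
    <ConfigurationSettings>
      <Setting name="AccountName" />
      <Setting name="AccountSharedKey" />
      <Setting name="BlobStorageEndpoint" />
      <Setting name="ContainerName" />
    </ConfigurationSettings>
  </WebRole>
  <WorkerRole name="WorkerRole">
    <ConfigurationSettings>
      <Setting name="QueueEndpoint" />
      <Setting name="AccountName" />
      <Setting name="AccountSharedKey" />
    </ConfigurationSettings>
  </WorkerRole>
</ServiceDefinition>
```

Note that only the settings schema is declared here; the values themselves live in the service configuration file.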

Developers also create a service configuration file (specifying the number of instances needed and Azure account information) for local debugging and Azure staging deployment. The service configuration file (.cscfg) specifies values for the configuration settings for one or more role instances within the running service. Operational staff can dynamically modify these service configuration settings without redeploying the service. The service configuration file is not packaged with the service, but is uploaded to the Azure Fabric as a separate file.
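
A matching service configuration file might look like this (the values are placeholders for the local development storage endpoints; operations staff can later change the instance counts without redeploying):

```xml
<ServiceConfiguration serviceName="FabrikamAssetService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole">
    <Instances count="2" />
    <ConfigurationSettings>
      <Setting name="AccountName" value="fabrikamAccount" />
      <Setting name="AccountSharedKey" value="[base64 account key]" />
      <Setting name="BlobStorageEndpoint" value="http://127.0.0.1:10000" />
      <Setting name="ContainerName" value="fabrikamImagegallery" />
    </ConfigurationSettings>
  </Role>
  <Role name="WorkerRole">
    <Instances count="1" />
    <ConfigurationSettings>
      <Setting name="QueueEndpoint" value="http://127.0.0.1:10001" />
      <Setting name="AccountName" value="fabrikamAccount" />
      <Setting name="AccountSharedKey" value="[base64 account key]" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>
```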

The deployment staff updates the service configuration file before production deployment with Azure account-specific information and operational requirements (such as the number of production instances needed).

Developers or the deployment staff create Azure deployment packages using the Visual Studio tools or the CSPack tool that comes with the Azure SDK.

Deployment packages can be stored in Azure blob storage and retrieved when deploying to the Azure cloud platform.

The service package file (.cspkg) is a zipped file that contains the role definitions and all role-related files (binaries, images, and configuration) packaged into one artifact. You can generate these packages by using Visual Studio or the CSPack tool. When you use Visual Studio to run on the development fabric, the package created has a .csx extension. The "Publish" feature, however, creates a .cspkg file, a zipped and encrypted version of the .csx package, and it is this version that is uploaded to the cloud.
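
From the command line, the equivalent of the Visual Studio "Publish" step might look like the following CSPack invocation (the service and role names and output paths are illustrative; the exact switches are those of the current SDK and may change):

```
cspack FabrikamAssetService.csdef /role:WebRole;WebRole\bin /role:WorkerRole;WorkerRole\bin /out:FabrikamAssetService.cspkg
```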

Now you can upload your service configuration file (.cscfg) and package file (.cspkg) to Azure. For this, first you need to obtain Azure compute and storage accounts from the Azure Services Developer Portal. After obtaining these accounts you can deploy the package to the cloud through the Azure developer portal. Once the files are uploaded, you'll be provided with an internal staging URL that you can use to test your service in the Azure Fabric. When you move into production you will get the production URL.

Administration

Administration functions cover the management, governance, and operational aspects of the applications deployed in a cloud environment. Leveraging the cloud platforms for application hosting assumes that the services deployed are in a controlled environment with appropriate service-level management, resource management, service provisioning, security/trust models, and monitoring. You should be familiar with the key functional capabilities needed for service management. They include:

  • Developing services with operations-friendly implementation best practices by including proper instrumentation.
  • Monitoring the health and availability of cloud applications and services.
  • Collecting metrics and reporting on service usage, performance, and billing.
  • Enabling automated provisioning of services and updating service configurations.

In Azure, agents monitor each VM instance for failure conditions and collect metrics regarding failures, performance measures, and usage metrics.

Azure Fabric automates the provisioning and management aspect with limited human input. Manual, people-and-process-driven operations are error prone, and this automation helps avoid human errors. In Azure, applications run in a partial-trust sandbox, requests are load balanced, and failure conditions are managed automatically.

As a developer you should be familiar with the operational behaviors of your service. These behaviors are documented as part of service definition and configuration models and communicated to the cloud platform to facilitate automation.

In addition, follow these best practices:

  • Adding logging information in the code:

RoleManager.WriteLog("Information", "message to show in the log file");

  • Overriding the GetHealthStatus method:

public override RoleStatus GetHealthStatus()
{
    // Return the health status of the worker role.
    return RoleStatus.Healthy;
}

  • Getting log files for analyzing application states.

Azure Service Management

In Azure, service management follows a model-driven approach where models are used to collect desired configuration information from developers and deployers. Azure Fabric uses these models to manage the service lifecycle, including dynamic configuration updates, automated failure handling, and service monitoring. Azure avoids single points of failure (hardware or software) by introducing fault domains, where each role is isolated across multiple compute nodes or machine racks. In addition, Azure ensures services stay up and running during updates (rolling a subset of the service forward or backward) through upgrade domains requested in the configuration.

You should use the Configure and Copy Log options in the Azure portal to copy the log messages to blob storage.

Once the copy is complete, you can access the logs from blob storage using the programming model described earlier.
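Reading a copied log from blob storage might look like the following sketch, which assumes the StorageClient sample library that ships with the Azure SDK; the container name "logs" and the blob name are hypothetical, and the exact class names may differ in your SDK version:

```csharp
// A sketch only: assumes the StorageClient sample library from the Azure SDK.
// The "logs" container and the blob name below are hypothetical placeholders.
StorageAccountInfo account =
    StorageAccountInfo.GetDefaultBlobStorageAccountFromConfiguration();
BlobStorage storage = BlobStorage.Create(account);
BlobContainer container = storage.GetBlobContainer("logs");

// Download one copied log blob into memory and decode it as text.
BlobContents contents = new BlobContents(new MemoryStream());
container.GetBlob("WebRole/log.txt", contents, false);
string logText = Encoding.UTF8.GetString(contents.AsBytes());
```

Once the log text is in memory, you can parse it for the instrumentation messages written with RoleManager.WriteLog.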

In the future, Azure will expose management interfaces for accessing and controlling management and operational activities. This will help on-premises service management tools monitor and manage cloud applications.

Conclusion

Understanding the cloud computing architecture models, finding relevant patterns, deciding on the programming model, and enabling automated management are critical to the success of cloud computing evolution. Azure is designed with developers in mind, enabling them to quickly and easily create, deploy, manage, and distribute Web applications and services.

As a developer, you should accumulate a core set of patterns that will help you develop cloud applications most effectively. In addition, you should remember to capture and share recurring solutions to problems and other emerging patterns such that our solutions and products can provide better out-of-the-box support for core cloud computing patterns.

I would like to thank Steve Marx, Jason Hogg, David Hill, Fred Chong, Eugenio Pace, and Nataraja Koduru for their valuable feedback on this article.

Joshy Joseph is a principal architect with Microsoft Services Managed Solutions Group. His primary skills and expertise include distributed computing, grid computing, and Web services. He is the author of the book Grid Computing (Prentice Hall, 2004) and a prolific inventor, with more than 35 patents on file. In addition, he has written numerous technical articles on distributed computing and business process development. Joshy can be reached at jojoseph@microsoft.com.