Azure@home Part 14: Inside the VMs
This post is part of a series diving into the implementation of the @home With Windows Azure project, which formed the basis of a webcast series by Developer Evangelists Brian Hitney and Jim O’Neil. Be sure to read the introductory post for the context of this and subsequent articles in the series.
In the previous post, I covered how to configure the deployment of Azure@home to allow Remote Desktop access, a feature introduced with the recent release of the Azure 1.3 SDK. Now that we know how to get into the VMs, I want to take you on a tour of what’s actually running the code in the cloud. As you might know, the web role and worker role functionality in Azure have converged over time, with the only notable difference being the existence of IIS in a web role (and with 1.3, that’s now full IIS not the Hosted Web Core as was the case previously). So, let’s start by looking at the common aspects of the VMs running Azure@home, and then we’ll dig individually into what’s running (and where) for both the WebRole and WorkerRole implementations.
Repeat after me: Windows Azure is a Platform-as-a-Service (PaaS) Offering! Why is that significant? Well, as much as we (ok, I) love to geek out wondering how everything works under the covers, the beauty of Windows Azure is you can be oblivious to most of the details in this post and still build solid, scalable cloud applications – just as you can be an expert driver without knowing the details of how an internal combustion engine works. So do read on to get an appreciation of all that Windows Azure is doing for you as a developer, but don’t lose sight of Azure’s raison d'être to be the most flexible, productive, and open PaaS offering in the cloud computing landscape.
One of the first things I did when getting into my role VMs via Remote Desktop was to take a peek at the hardware specs - via Computer properties from the Start menu. Both the WebRole and WorkerRole instances in my deployment were hosted on VMs exhibiting the specifications you see to the right, namely:
- Windows Server 2008 Service Pack 2 (64-bit OS)
- 2.10 GHz Quad-core processor
- 1.75 GB of RAM
Ultimately, it’s the Windows Azure Fabric Controller running in the data center that has decided upon that specific machine, but it’s the service configuration deployed from the Visual Studio cloud project that tells the Fabric Controller the characteristics of the VM you’re requesting.
The operating system type and version are parameters taken directly from the ServiceConfiguration.cscfg file via the osFamily and osVersion attributes of the ServiceConfiguration element. In Azure@home, these values aren’t explicit and so default to ‘1’ and ‘*’, respectively, denoting an OS version roughly equivalent to Windows Server 2008 SP2 (versus Windows Server 2008 R2) and indicating that the latest OS patches should be applied automatically when available, which is roughly on a monthly cadence. If you’d like to lock your deployment on a specific OS release (and assume responsibility for manually upgrading to subsequent OS patches), consult the Windows Azure Guest OS Releases and SDK Compatibility Matrix for the appropriate configuration value setting. Within the Windows Azure Management Portal you can also view and update the Operating System settings as shown below.
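For illustration, here's a sketch of how those attributes appear in ServiceConfiguration.cscfg (the service and role names are placeholders; omitting osFamily or osVersion falls back to the defaults just described):

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- osFamily="1": Windows Server 2008 SP2-compatible guest OS;
     osVersion="*": automatically apply the latest guest OS patches -->
<ServiceConfiguration serviceName="AzureAtHome"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"
    osFamily="1" osVersion="*">
  <Role name="WebRole">
    <Instances count="2" />
  </Role>
</ServiceConfiguration>
```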
The processor specs (and RAM and local storage, for that matter) are based on another configuration parameter, the vmSize attribute for each of the role elements in the ServiceDefinition.csdef file; the default value is “Small.” That attribute is also exposed via the Properties dialog for each role project within Visual Studio (as shown below). From this configuration, it follows that all instances of a given role type will use the same vmSize, but you can have combinations of worker roles and web roles in your cloud service project that run on different VM sizes.
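In ServiceDefinition.csdef, that looks something like the following sketch (role names aside, vmSize is the attribute in question, and it defaults to "Small" when omitted):

```xml
<ServiceDefinition name="AzureAtHome"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <!-- each role specifies its own size; roles in the same
       cloud service project can use different VM sizes -->
  <WebRole name="WebRole" vmSize="Small">
    <!-- sites, endpoints, etc. elided -->
  </WebRole>
  <WorkerRole name="WorkerRole" vmSize="Small">
    <!-- endpoints, local storage, etc. elided -->
  </WorkerRole>
</ServiceDefinition>
```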
|Virtual Machine Size|CPU Cores|Clock Speed|Memory|Disk Space for Local Storage Resources|
|---|---|---|---|---|
|Small|1|1.6 GHz|1.75 GB|225 GB|
|Medium|2|1.6 GHz|3.5 GB|490 GB|
|Large|4|1.6 GHz|7 GB|1,000 GB|
|Extra Large|8|1.6 GHz|14 GB|2,040 GB|
Given this chart, each of the Azure@home VM instances should be running a single CPU with a clock speed of 1.6 GHz. From the Computer properties dialog (above), you can see that the machine hosting the VM has a Quad-Core AMD Opteron 2.10 GHz processor, but that’s the physical box, and a little PowerShell magic on the VM itself shows that the instance is assigned a single core with the 2.1 GHz (2100 MHz) clock speed:
A 2.1 GHz processor is a bit of an upgrade from the 1.6 GHz that’s advertised, but note that there is some overhead here. Code you’ve deployed is sharing that CPU with some of the infrastructure that is Windows Azure. Virtualization is part of it, but there are also agent processes running in each VM (and reporting to the Fabric Controller) to monitor the health of the instances, restart non-responsive roles, handle updates, etc. That functionality also requires processing power, so look at the 1.6 GHz value as a net CPU speed that you can assume will be available to run your code.
RAM allocation is straightforward given the correlation with vmSize. Azure@home employs small VMs; each small VM comes with 1.75 GB of RAM, and the Computer properties definitely reflect that. From PowerShell, you'll note the exact amount is just a tad less than 1.75 GB (1252 KB short, to be exact). Azure nostalgia buffs might notice the PrimaryOwnerName of RedDog Lab – RedDog was the code name for Windows Azure prior to its announcement at PDC 2008!
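The "PowerShell magic" mentioned above needn't be anything fancier than a couple of WMI queries run inside the VM (these are standard Win32 classes, nothing Azure-specific):

```powershell
# CPU cores and clock speed (MHz) assigned to this role instance
Get-WmiObject Win32_Processor |
    Select-Object Name, NumberOfCores, MaxClockSpeed

# Total physical memory (in bytes) and the "RedDog Lab" PrimaryOwnerName
Get-WmiObject Win32_ComputerSystem |
    Select-Object PrimaryOwnerName, TotalPhysicalMemory
```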
Local disk space allotments for the VM are also determined by the vmSize selected for the deployed role, and for a small VM, you get 225 GB of disk space, which includes room to house diagnostic files and other transient application-specific data. Local disk space is just that, local to the VM, so it is neither persistent nor covered by any service-level agreement (SLA), as both Windows Azure storage and SQL Azure are; therefore, local storage is really only useful for non-critical functions, like caching, or when the storage is designed to be reinitialized every time the role restarts. The latter is the case with Azure@home, and you can review Part 9 of this series to see how local storage is leveraged in the WorkerRole.
Using the Disk Management utility (diskmgmt.msc) on either VM, you'll note three partitions (each a VHD attached by an agent running on the host OS managing your VMs). Role code binaries are typically found on the E: drive, which is programmatically accessible via the ROLEROOT environment variable; D: contains the guest OS release (determined by the choice of osVersion and osFamily discussed earlier). The local disk allotment (225 GB for a small instance) is provided via the C: drive, which stores some of the diagnostic files, like crash dumps and failed requests, prior to their being transferred to blob or table storage. As noted earlier, application code in web and worker roles can also access this drive via named 'partitions' specified in the role configuration file (as defined by LocalStorage elements in ServiceDefinition.csdef).
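Since the drive letters aren't guaranteed, role code should resolve these paths rather than hard-code them. Here's a minimal sketch; the local storage resource name, FoldingClientStorage, is the one Azure@home actually defines, while the helper class itself is just illustrative:

```csharp
using System;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class RolePaths
{
    // The drive/folder holding the deployed role binaries (E:\approot in
    // this deployment, but the drive letter isn't guaranteed)
    public static string GetAppRoot()
    {
        return Environment.GetEnvironmentVariable("RoleRoot") + @"\approot";
    }

    // Resolves under the local disk allotment (the C: drive here); the name
    // must match a LocalStorage element in ServiceDefinition.csdef
    public static string GetLocalStoragePath()
    {
        return RoleEnvironment.GetLocalResource("FoldingClientStorage").RootPath;
    }
}
```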
At this point you might be wondering how your code actually executes on Azure, and that's where Task Manager can provide some insight. What's running depends on configuration, on the type of role you're looking at – web or worker – and on what you've implemented in your own code. As you can tell from the table below, detailing the complete(!) list of processes running on the WebRole VM and WorkerRole VM for Azure@home, there's a lot in common across the role types. Many of these processes aren't specific to Windows Azure or to this application, but for those that are, I've provided some explanation of what each process is doing. (Processes present for only one role type have a blank cell in the other role's column.)
|WebRole VM|WorkerRole VM|Description|
|---|---|---|
|clouddrivesvc|clouddrivesvc|enables Windows Azure Drives|
|csrss (3)|csrss (3)|client/server runtime subsystem|
||FahCore_b4|Worker role resource – Folding@home application|
||Folding@home-Win32-x86|Worker role resource – Folding@home application|
|IISConfigurator||a WCF named pipes service managing IIS configuration of the web role|
|LogonUI|LogonUI|login UI and screen switching|
|lsass|lsass|local security authority subsystem management|
|lsm|lsm|local session management (such as the session triggered by Remote Desktop access)|
|MonAgentHost|MonAgentHost|DiagnosticMonitor agent supporting Azure diagnostics (see Part 8 of this series for more details)|
|msdtc|msdtc|Distributed Transaction Coordinator console|
|osdiag|osdiag|Remote Desktop Performance Agent|
|rdpclip|rdpclip|remote copy/paste support for Remote Desktop Services|
|RemoteAccessAgent|RemoteAccessAgent|Azure agent supporting Remote Desktop Services (via port 3389)|
||RemoteForwarderAgent|Azure agent to which all Remote Desktop traffic is routed from the load balancer; this process then routes the traffic to the specific targeted role instance|
|services|services|management of OS services|
|SLsvc|SLsvc|Windows software licensing service|
|smss|smss|Windows OS session manager subsystem|
|svchost (17)|svchost (15)|host application for various Windows services|
|vds|vds|Windows server virtual disk service|
|vmicsvc (2)|vmicsvc (2)|Hyper-V guest VM integration services|
|w3wp||IIS worker process hosting the Azure@home ASP.NET site|
|WaAppAgent|WaAppAgent|Windows Azure guest agent|
|WaHostBootstrapper|WaHostBootstrapper|bootstrapper run by WaAppAgent|
|WaIISHost||web role host run by the bootstrapper|
||WaWorkerHost|worker role host run by the bootstrapper|
Note: values in parentheses refer to number of instances of that process running when tasks were sampled
Boiling this chart down, for both web and worker roles:
- clouddrivesvc – the CloudDrive service supporting access to Windows Azure Drives. Azure@home doesn't use that feature, but there seems to be no way to indicate in the service configuration that Windows Azure Drives aren't part of the role implementation.
- MonAgentHost – manages Windows Azure diagnostics.
- RemoteAccessAgent – supports access via Remote Desktop (and so is optional, but rather necessary for this post!).
- RemoteForwarderAgent – manages forwarding Remote Desktop traffic to the correct role instance; it’s only required on one of the roles in the application, and for this deployment it happens to be part of the WorkerRole.
- WaAppAgent – is the Windows Azure guest agent which the Fabric Controller injects into the role deployment to start up your role implementation. This process is actually a parent process of most of the other Azure-related processes (see right).
- WaHostBootstrapper – is a bootstrapper process spawned by WaAppAgent, which then spawns either WaIISHost for a web role or WaWorkerHost for a worker role.
The WebRole VM
A web role really consists of two parts, the actual web application (typically ASP.NET) and the role startup code that implements RoleEntryPoint (by default, it’s in WebRole.cs created for you by Visual Studio). Prior to the 1.3 release, both the entry point code and the actual web site were executed within a Hosted Web Core process (WaWebHost.exe). In 1.3, which I’m using here, the entry point code is executed by the WaIISHost.exe process, and the site itself is managed by a w3wp.exe process in IIS.
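As a reminder, the entry point code is just a class deriving from RoleEntryPoint. Here's a minimal sketch along the lines of the WebRole.cs that Visual Studio generates; it's illustrative, not the actual Azure@home source:

```csharp
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    // Under SDK 1.3, this method runs inside WaIISHost.exe; the ASP.NET
    // site itself runs separately in an IIS w3wp.exe worker process
    public override bool OnStart()
    {
        // initialization (e.g., starting the DiagnosticMonitor) goes here
        return base.OnStart();
    }
}
```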
Since it’s just IIS, you can use IIS Manager to poke around on the VM just as you might use it to manage a site that you’re hosting on premises. Below, for instance, is IIS Manager within one of the two live Azure@home WebRole instances. Via the Basic Settings… option, it’s easy to find just where the site files are located:
Navigate to that location in the VM, and you can make changes (or cause all kinds of damage!) to the running application. For instance, if I simply edit the markup in status.aspx to have a red background and then navigate to the site from my local machine, the resulting page takes on the rather jarring new color. Now, if I refresh that page, or go to a second browser and view it, I see the original baby blue background! Why is that? There are two web role instances running in Azure for Azure@home, and the Windows Azure load balancer uses a round-robin approach, so every other request to status.aspx will hit the updated, red-background site.
Beware of drift! The scenario with the alternating background colors may be a bit contrived, but it very clearly shows that you need to proceed cautiously when making any modifications to a deployed application in a role instance. The changes you are making are out of band, and will not be saved should that role instance be recycled. Additionally, if you have multiple instances of a role in your applications, applying a change to one of those instances and not another can bring your app crashing down at best or make for some very difficult-to-diagnose runtime issues at worst.
If you poke around a bit more on that E: drive, you'll also find a number of modules supporting diagnostics, various plug-ins (RemoteAccess and RemoteForwarder, for example), and binaries for Windows Azure Drive storage. And if you poke around even more, you might notice that the Azure@home web site (ASPX pages, bin directory, images, the whole nine yards) appears to be deployed twice – once to E:\approot and then again to E:\approot\_WASR_\0! The redundancy brings us back to the observation made at the beginning of this section: a web role has two parts, namely, the startup code and the actual web site run in IIS. We know that IIS sees the site at E:\approot\_WASR_\0, and as you can verify via Process Explorer, it's the binary at E:\approot\bin that the WaIISHost process accesses to invoke the methods of the RoleEntryPoint class (OnStart, OnStop, and Run).
The WorkerRole VM
Worker roles don’t have IIS running, so the runtime infrastructure in the VM hosting the role is a bit less complex (at least for Azure@home). With Process Explorer it’s pretty easy to see how the code is handled at runtime.
Similar to a web role, WaAppAgent gets the role started via WaHostBootstrapper. Since this is a worker role, WaWorkerHost (instead of WaIISHost) loads the role implementation you provided – the code that derives from RoleEntryPoint. That code was compiled and deployed from Visual Studio as the WorkerRole.dll assembly, which you can see has been loaded (twice, for some reason) by the worker host process. As you can see from the Process Explorer screen shot, WorkerRole.dll is loaded from essentially the same location as its analog in the WebRole – E:\approot. Remember that the ROLEROOT environment variable is the way to programmatically determine where the code resides; it may not always be the E: drive.
Additionally, Process Explorer shows that WaWorkerHost has spawned the Folding@home-Win32-x86.exe process. That’s occurring because the WorkerRole code we wrote (in part 10 of this series) invokes Process.Start on that executable image. The image itself is the one downloaded from Stanford’s Folding@home site. The FahCore_b4.exe process is something that is launched by Stanford’s console client, which from the perspective of Azure@home is a black box.
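The launch itself is a garden-variety Process.Start. Here's a sketch of the pattern; clientPath is a stand-in for wherever SetupStorage copied the executable, and the exact options in the real Part 10 code may differ:

```csharp
using System.Diagnostics;
using System.IO;

// Launch the Folding@home console client from local storage;
// the client in turn spawns FahCore_b4.exe on its own
var startInfo = new ProcessStartInfo
{
    FileName = Path.Combine(clientPath, "Folding@home-Win32-x86.exe"),
    WorkingDirectory = clientPath,
    UseShellExecute = false,
    CreateNoWindow = true
};
Process fahClient = Process.Start(startInfo);
```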
Take a look at the CPU utilization, by the way. This worker role (or more exactly, FahCore_b4.exe) is pegging the CPU – as we’d expect given that it’s performing some type of computationally intensive simulation under the covers. This is great too from the perspective of our pocketbook: we’re paying 12 cents an hour for this VM whether or not it’s doing anything, so it’s nice to know it’s as busy as possible!
Local Storage in the WorkerRole VM
If you’ve followed the implementation of the WorkerRole, you know that it makes use of local storage on the VM. As part of the Visual Studio project, a resource called FoldingClientStorage was defined with a size of 1024 MB. From the discussion at the beginning of this post, recall that the 225 GB local storage allotment for the small VM size used for this deployment is reserved for us as the C: drive.
The precise physical location of the 1GB reserved for FoldingClientStorage on the drive can be determined programmatically by code such as this:
String storageRoot = RoleEnvironment.GetLocalResource("FoldingClientStorage").RootPath;
In the role instance I peered into, storageRoot resolves to the physical directory C:\Resources\Directory\hexstring.WorkerRole.FoldingClientStorage. It’s in a subdirectory thereof (client) that the SetupStorage method of the Azure@home WorkerRole (code we wrote back in Part 9) copied the Folding@home-Win32-x86 executable, which, in turn, was originally included as content in the Visual Studio project. The other files you see in the Explorer screen shot below are by-products of the Folding@home-Win32-x86 application churning away. The unitinfo.txt file is the most interesting of them, because that’s what the Azure@home WorkerRole’s ReadStatusFile accesses periodically to parse out the Folding@home work unit progress and write it to the Azure Storage table called workunit, where it’s then read by the status.aspx page of the WebRole.
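The polling pattern ReadStatusFile follows can be sketched like this; note that the "Progress:" line format is an assumption on my part (consult the actual Part 9/10 source for the real parsing), and clientPath again stands in for the client subdirectory of local storage:

```csharp
using System;
using System.IO;

// Periodically read unitinfo.txt from local storage and pull out the
// work unit progress before writing it to the workunit table
string unitInfoPath = Path.Combine(clientPath, "unitinfo.txt");
if (File.Exists(unitInfoPath))
{
    foreach (string line in File.ReadAllLines(unitInfoPath))
    {
        if (line.StartsWith("Progress:", StringComparison.OrdinalIgnoreCase))
        {
            // e.g., "Progress: 42%" -> 42, then persist to Azure table storage
        }
    }
}
```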
To wrap this post up, I’ll leave you with a couple of references to sessions at PDC 2010 that were a great help to me and are likewise available to you to learn more about what’s going on under the hood in Windows Azure.
- Inside Windows Azure, with Mark Russinovich
- Inside Windows Azure Virtual Machines, with Hoi Vo
- Migrating and Building Apps for Windows Azure with VM Role and Admin Mode, with Mohit Srivastava
I still have a few blog posts in mind to explore some of the new and enhanced features as they might apply to Azure@home – including changes to Azure diagnostics in SDK 1.3, utilizing startup tasks and the VM Role, and even playing with the Access Control Service, so stay tuned!